diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzhwdd" "b/data_all_eng_slimpj/shuffled/split2/finalzzhwdd" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzhwdd" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\n\n\\ \\ \\ \\ Given (always bounded, linear) operators $A$, $B$ on a\nBanach space $X$, define $L_A$, $R_B$ on $L(X)$ (the space of\nbounded linear operators on $X$) by $L_AT=AT$, $R_BT=TB$.\nOperators of the form $L_A R_B$ on $L(X)$ are called {\\em\nmultiplication operators}. The beginning point of this paper is a\nproblem raised in 1992 by E. Saksman and H.-O. Tylli \\cite {st1}\n(see also \\cite[Problem 2.8]{st2}):\n\nCharacterize the multiplication operators on $L(L_p)$, $12$, an even stronger theorem was proved by Kadec-Pe\\lambda czy\\'nski\n \\cite{kp}. When $12$ this is immediate from \\cite{kp}, and Lutz Weis \\cite{weis} proved the case $p<2$.\n\nAlthough Saksman and Tylli did not know a complete\ncharacterization of the weakly compact multiplication operators on\n$L(L_p)$, they realized that a classification must involve\n$\\ell_p$ and $\\ell_2$-strictly singular operators on $L_p$. This\nled Tylli to ask us about possible classifications of the $\\ell_p$\nand $\\ell_2$-strictly singular operators on $L_p$. The $\\ell_2$\ncase is known. It is enough to consider the case $22$ the Weak Tylli Conjecture\nis true, while the example in Section \\ref{example} yields that\nthe Tylli Conjecture is false for all $p\\not= 2$ and the Weak\nTylli Conjecture is false for $p<2$.\n\nThere are however some interesting consequences of the Weak Tylli\nConjecture that are true when $p<2$. In Theorem\n\\ref{maintheorem} we prove that if $T$ is an $\\ell_p$-strictly\nsingular operator on $L_p$, $12$. For $p\\le r$ we denote the identity operator from $\\ell_p$\ninto $\\ell_r$ by $i_{p,r}$. It is immediate from \\cite{kp} that an operator $T$ on $L_p$, $20$ there is $M_\\varepsilon$ so that\n$W\\subset \\varepsilon B_{L_p} + M_\\varepsilon B_{L_\\infty}$.\n\\item[4.] $|W|^p$ is uniformly integrable; i.e.,\n$\\lim_{t\\downarrow 0} \\sup_{x\\in W} \\sup_{\\mu(E)< t} \\| \\mathbf{1} _E x\\|_p\n=0$.\n\\end{enumerate}\n\\end{lm}\n\nWhen $p=1$, the assumptions that $W$ is convex and and $W$\nsymmetric are not needed, and the conditions in Lemma\n\\ref{ellpsequence} are equivalent to the non weak compactness of\nthe weak closure of $ {W}$. This case is essentially proved in\n\\cite{kp} and proofs can also be found in books; see, e.g.,\n\\cite[Theorem 3.C.12]{woj}). (Condition (3) does not appear in\n\\cite{woj}, but it is easy to check the equivalence of (3) and\n(4). Also, in the proof in \\cite[Theorem 3.C.12]{woj}) that not\n(4) implies not (1), Wojtaszczyk only constructs a basic sequence\nin $W$ that is equivalent to the unit vector basis for $\\ell_1$;\nhowever, it is clear that the constructed basic sequence spans a\ncomplemented subspace.)\n\n\n For $p>2$, Lemma\n\\ref{ellpsequence} and stronger versions of condition (1) can be\ndeduced from \\cite{kp}. For $10$. By passing to a\nsubsequence, we can assume that $\\{ x_n \\}_{n=1}^{\\infty}$ converges weakly to, say,\n$x$. Suppose first that $x=0$. Then by passing to a further\nsubsequence, we may assume that $\\{ x_n \\}_{n=1}^{\\infty}$ is a small perturbation of a\nblock basis of the Haar basis for $L_p$ and hence is an\nunconditionally basic sequence. 
Since $L_p$ has type $p$, this\nimplies that there is a constant $C$ so that for all sequences\n$\\{ a_n \\}_{n=1}^{\\infty}$ of scalars, $\\|\\sum a_n x_n\\|_p\\le C(\\sum |a_n|^p)^{1\/p}$.\nLet $P$ be the norm one projection from $L_p$ onto the closed\nlinear span $Y$ of the disjoint sequence $ \\{ \\1_{E_n} x_n \\}_{n=1}^{\\infty}$. Then $Px_n$ is\nweakly null in a space isometric to $\\ell_p$ and $\\|Px_n\\|_p$ is\nbounded away from zero, so there is a subsequence\n$\\{Px_{n(k)}\\}_{k=1}^\\infty$ which is equivalent to the unit\nvector basis for $\\ell_p$ and whose closed span is the range of a\nprojection $Q$ from $Y$. The projection $QP$ from $L_p$ onto the\nthe closed span of $\\{Px_{n(k)}\\}_{k=1}^\\infty$ maps $x_{n(k)}$ to\n$Px_{n(k)}$ and, because of the upper $p$ estimate on $\\{ x_{n(k)} \\}_{k=1}^{\\infty}$, maps\nthe closed span of $\\{ x_{n(k)} \\}_{k=1}^{\\infty}$ isomorphically onto the closed span of\n $\\{Px_{n(k)}\\}_{k=1}^\\infty$. This yields that $\\{ x_{n(k)} \\}_{k=1}^{\\infty}$ is\nequivalent to the unit vector basis for $\\ell_p$ and spans a\ncomplemented subspace. Suppose now that the weak limit $x$ of\n$\\{ x_n \\}_{n=1}^{\\infty}$ is not zero. Choose a subsequence $\\{ x_{n(k)} \\}_{k=1}^{\\infty}$ so that $\\inf\n\\|1_{E_{n(2k+1)}}(x_{n(2k)}-x_{n(2k+1)})\\|_p>0$ and replace $\\{ x_n \\}_{n=1}^{\\infty}$\nwith $\\{{{x_{n(2k)}-x_{n(2k+1)}}\\over 2}\\}_{k=1}^\\infty$ in the\nargument above.\n\nNotice that the argument outlined above gives that if $\\{ x_n \\}_{n=1}^{\\infty}$ is a\nsequence in $L_p$, $1< p\\not=2<\\infty$, which is equivalent to the\nunit vector basis of $\\ell_p$, then there is a subsequence $\\{ y_n \\}_{n=1}^{\\infty}$\n whose closed linear span in $L_p$ is complemented. This\nis how one proves that the identity on $\\ell_p$ factors through\nany operator on $L_p$ which is not $\\ell_p$-strictly singular.\n\n\nThe Weak Tylli Conjecture for $p>2$ is an easy consequence of the\nfollowing lemma.\n\n\n\\begin{lm}\\label{weak} Let $T$ be an operator from a\n$\\mathcal{L}_{1}$ space $V$ into $L_p$, $10$ there is an operator $S:V\\to L_2$ so that $\\|T-I_{2,p}S\\| <\\varepsilon$.\n \\end{lm}\n \\noindent {\\bf Proof: \\ } Let $\\varepsilon>0$. By condition (3) in Lemma \\ref{ellpsequence},\nfor each norm one vector $x$ in $V$ there is a vector $Ux$ in\n$L_2$ with $\\|Ux\\|_2\\le \\|Ux\\|_\\infty\\le M_\\varepsilon$ and\n$\\|Tx-Ux\\|_p\\le \\varepsilon$. By the definition of $\\mathcal{L}_{1}$\nspace, we can write $V$ as a directed union $\\cup_\\alpha E_\\alpha$\nof finite dimensional spaces that are uniformly isomorphic to\n$\\ell_1^{n_\\alpha}$, $n_\\alpha=\\dim E_\\alpha$, and let\n$(x^\\alpha_i)_{i=1}^{n_\\alpha}$ be norm one vectors in $E_\\alpha$\nwhich are, say, $\\lambda$-equivalent to the unit vector basis for\n$\\ell_1^{n_\\alpha}$ with $\\lambda$ independent of $\\alpha$. Let\n$U_\\alpha$ be the linear extension to $E_\\alpha$ of the mapping\n$x^\\alpha_i\\mapsto Ux^\\alpha_i$, considered as an operator into\n$L_2$. Then $\\|T_{|E_\\alpha}- I_{2,p}U_\\alpha\\|\\le \\lambda \\varepsilon$\nand $\\|U_\\alpha\\|\\le \\lambda M_\\varepsilon$. A standard Lindenstrauss\ncompactness argument produces an operator $S:V\\to L_2$ so that\n$\\|S\\|\\le \\lambda M_\\varepsilon$ and $\\|T-I_{2,p}S\\|\\le \\lambda \\varepsilon$.\nIndeed, extend $U_\\alpha$ to all of $V$ by letting $U_\\alpha x=0 $\nif $x\\not\\in E_\\alpha$. 
The net $T_\\alpha$ has a subnet $S_\\beta$\nso that for each $x$ in $V$, $S_\\beta x$ converges weakly in\n$L_2$; call the limit $Sx$. It is easy to check that $S$ has the\nproperties claimed.\n\\qed \\medskip\n\n\n\n\n\\begin{thm}\\label{weaktyllithm} Let $T$ be an $\\ell_p$-strictly singular operator on\n$L_p$, $20$ there is an operator $S:L_p\\to Z$ so that $S$ factors\nthrough $\\ell_2$ and $\\|JT-S\\| <\\varepsilon$.\n\\end{thm}\n \\noindent {\\bf Proof: \\ } Lemma \\ref{weak} gives the conclusion when $J$ is the adjoint\nof a quotient mapping from $\\ell_1$ or $L_1$ onto $L_{p'}$. The\ngeneral case then follows from the injectivity of $Z$.\n\\qed \\medskip\n\n\n\nThe next proposition, when souped-up via ``abstract nonsense\" and\nknown results,\n gives our main\nresult about\n$\\ell_p$-strictly singular operators on $L_p$. Note that it shows that an\n$\\ell_p$-strictly singular operator on $L_p$, $10$ a constant $M_\\epsilon$ so that\n\\begin{equation}\\label{eqprop1}\nT B_{L_p}\\subset \\epsilon B_{L_p} + M_\\epsilon B_{L_2}.\n\\end{equation}\n\nIndeed, otherwise condition (1) in Lemma \\ref{ellpsequence} gives\na bounded sequence $\\{ x_n \\}_{n=1}^{\\infty}$ in $L_p$ so that $\\{ Tx_n \\}_{n=1}^{\\infty}$ is equivalent\nto the unit vector basis of $\\ell_p$. By passing to a subsequence\nof differences of $\\{ x_n \\}_{n=1}^{\\infty}$, we can assume, without loss of\ngenerality, that $\\{ x_n \\}_{n=1}^{\\infty}$ is a small perturbation of a block basis of\nthe Haar basis for $L_p$ and hence is an unconditionally basic\nsequence. Since $L_p$ has type $p$, the sequence $\\{ x_n \\}_{n=1}^{\\infty}$ has an\nupper $p$ estimate, which means that there is a constant $C$ so\nthat for all sequences $\\{ a_n \\}_{n=1}^{\\infty}$ of scalars, $\\| \\sum a_n x_n\\| \\le\nC\\| (\\sum |a_n|^p)^{1\/p}\\|$. Since $\\{ Tx_n \\}_{n=1}^{\\infty}$ is equivalent to the\nunit vector basis of $\\ell_p$, $\\{ x_n \\}_{n=1}^{\\infty}$ also has a lower $p$ estimate\nand hence $\\{ x_n \\}_{n=1}^{\\infty}$ is equivalent to the unit vector basis of\n$\\ell_p$. This contradicts the $\\ell_p$ strict singularity of $T$.\n\nIterating this we get for every $n$ and $0<\\epsilon<1\/2$\n\\[\na^n B_X \\subset T^n B_{L_p}\\subset \\epsilon^n B_{L_p} + 2M_\\epsilon B_{L_2}\n\\]\nor, setting $A:=1\/a$,\n\\[\nB_X \\subset A^n\\epsilon^n B_{L_p} + 2 A^n M_\\epsilon B_{L_2}.\n\\]\n\n\n\nFor $f$ a unit vector in $X$ write $f=f_n+g_n$ with $\\|f_n\\|_2\\le 2 A^n M_\\epsilon$\nand $\\|g_n\\|_p\\le (A\\epsilon)^n$. Then $f_{n+1}-f_n=g_n-g_{n+1}$, and since\nevidently $f_n$ can be chosen to be of the form $(f\\vee -k_n)\\wedge k_n$ (with\nappropriate interpretation when the set $[f_n=\\pm k_n]$ has positive measure), the\nchoice of\n$f_n$, $g_n$ can be made so that\n\\[\n\\|f_{n+1}-f_n\\|_2\\le \\|f_{n+1}\\|_2\\le 2M_\\epsilon A^{n+1}\n\\]\n\\[\n\\|g_n-g_{n+1}\\|_p\\le \\|g_n\\|_p\\le (A\\epsilon)^n.\n\\]\n\nFor $p2$, then for any sequence $\\{ x_n \\}_{n=1}^{\\infty}$ of\nsymmetric, independent random variables in $L_p$, $\\|\\sum x_n\\|_p$\nis equivalent (with constant depending only on $p$) to $(\\sum\n\\|x_n\\|_p^p)^{1\/p}\\vee (\\sum \\|x_n\\|_2^2)^{1\/2}$. Thus if $\\{ x_n \\}_{n=1}^{\\infty}$ is\nnormalized in $L_p$, $p>2$, and $w_n:=\\|x_n\\|_2$, then $\\|\\sum\na_n x_n\\|_p$ is equivalent to $\\|\\{ a_n \\}_{n=1}^{\\infty}\\|_{p,w}:=(\\sum\n|a_n|^p)^{1\/p}\\vee (\\sum |a_n|^2w_n^2)^{1\/2}$. The completion of\nthe finitely non zero sequences of scalars under the norm\n$\\|\\cdot\\|_{p,w}$ is called $X_{p,w}$. 
It follows that if $w=\\{ w_n \\}_{n=1}^{\\infty}$\nis any sequence of numbers in $[0,1]$, then $X_{p,w}$ is\nisomorphic to a complemented subspace of $L_p$. Suppose now that\n$w=\\{ w_n \\}_{n=1}^{\\infty}$ and $v=\\{ v_n \\}_{n=1}^{\\infty}$ are two such sequences of weights and $v_n\\ge\nw_n$, then the diagonal operator $D$ from $X_{p,w}$ to $X_{p,v}$\nthat sends the $n$th unit vector basis vector $e_n$ to\n${{w_n}\\over{v_n}} e_n$ is contractive and it is more or less\nobvious that $D$ is $\\ell_p$-strictly singular if\n${{w_n}\\over{v_n}}\\to 0$ as $n\\to \\infty$. Since $X_{p,w}$ and\n$X_{p,v}$ are isomorphic to complemented subspaces of $L_p$, the\nadjoint operator $D^*$ is $\\ell_{p'}$ strictly singular and\n(identifying $X_{p,w}^*$ and $X_{p,v}^*$ with subspaces of\n$L_{p'}$) extends to a $\\ell_{p'}$ strictly singular operator on\n$L_{p'}$. Our goal in this section is produce weights $w$ and $v$\nso that $D^*$ is an isomorphism on a subspace of $X_{p,v}^*$ which\nis not isomorphic to a Hilbert space.\n\nFor all $00$ there are\n$00$, one can find $a,b$ and $\\varepsilon$, such that\nfor the corresponding $\\{a_j,Y_j\\}$ there is a symmetric\n$r$-stable $Y$ (with characteristic function $e^{-|t|^r}$)\nsatisfying\n\\[\n\\|Y-\\sum_{j=1}^n a_j Y_j\\|_q\\le\\eta.\n\\]\nThis follows from classical translation of various convergence\nnotions; see e.g. \\cite[p. 154]{rosII}.\n\n Let now $0<\\delta<1$. Put\n$w_j=\\delta v_j$, $j=1,\\dots,n$, and let $Z_j$, $j=1,\\dots,n$, be\nindependent, symmetric, three valued\n random variables such that $|Z_j|=w_j^{\\frac{-2}{2-q}} \\mathbf{1} _{C_j}$\n with ${\\rm Prob}(C_j)=w_j^{\\frac{2q}{2-q}}$, so that in particular\n $\\|Z_j\\|_q=1$ and $w_j=\\|Z_j\\|_q\/\\|Z_j\\|_2$. In a similar manner\n to the argument above we get that,\n\n \\begin{equation*}\n\\begin{array}{rl}\n\\varphi_{\\sum \\delta a_jZ_j}(t)= &\n\\prod_{j=1}^n(1-w_j^{\\frac{2q}{2-q}}(1-\\cos\n (t\\delta a_j w_j^{\\frac{-2}{2-q}})))\\\\\n = &\n \\prod_{j=1}^n(1-\\delta^{\\frac{2q}{2-q}}v_j^{\\frac{2q}{2-q}}(1-\\cos\n (t\\delta^{\\frac{-q}{2-q}} a_j v_j^{\\frac{-2}{2-q}})))\\\\\n\n\n\n =\n & (1+O(\\varepsilon))\\exp(-\\delta^{\\frac{q(2-r)}{2-q}}|t|^r)\n\\end{array}\n\\end{equation*}\n for all $|t|\\in\n[\\delta^{\\frac{q}{2-q}}a,\\delta^{\\frac{q}{2-q}}b]$, where the $O$\nnow depends also on $\\delta$.\n\nAssuming $\\delta^{\\frac{q(2-r)}{2-q}}>1\/2$ and for a choice of\n$a,b$ and $\\varepsilon$, depending on $\\delta, r, q$ and $\\eta$ we get that\nthere is a symmetric $r$-stable random variable $Z$ (with\ncharacteristic function $e^{-\\delta^\\frac{q(2-r)}{2-q}|t|^r}$)\nsuch that\n\\[\n\\|Z-\\sum_{j=1}^n \\delta a_j Z_j\\|_q\\le\\eta.\n\\]\nNote that the ratio between the $L_q$ norms of $Y$ and $Z$ are\nbounded away from zero and infinity by universal constants and\neach of these norms is also universally bounded away from zero.\nConsequently, if $\\varepsilon$ is small enough the ratio between the $L_q$\nnorms of $\\sum_{j=1}^n a_j Y_j$ and $\\sum_{j=1}^n \\delta a_j Z_j$\nare bounded away from zero and infinity by universal constants.\n\nLet now $\\delta_i$ be any sequence decreasing to zero and $r_i$\nany sequence such that $q1\/2$. 
Then, for any sequence\n$\\varepsilon_i\\downarrow 0$ we can find two sequences of symmetric,\nindependent, three valued random variables $\\{Y_i\\}$ and\n$\\{W_i\\}$, all normalized in $L_q$, with the following additional\nproperties:\n\n\\begin{itemize}\n\\item put $v_j=\\|Y_j\\|_q\/\\|Y_j\\|_2$ and $w_j=\\|Z_j\\|_q\/\\|Z_j\\|_2$.\nThen there are disjoint finite subsets of the integers $\\sigma_i$,\n$i=1,2,\\dots$, such that $w_j=\\delta_i v_j$ for $j\\in \\sigma_i$.\n\\item There are independent random variables $\\{\\bar Y_i\\}$ and $\\{\\bar Z_i\\}$\nwith $\\bar Y_i$ and $\\bar Z_i$ $r_i$ stable with bounded, from\nzero and infinity, ratio of $L_q$ norms and there are coefficients\n$\\{a_j\\}$ such that\n\\[\n\\|\\bar Y_i-\\sum_{j\\in\\sigma_i}a_jY_j\\|_q<\\varepsilon_i \\ \\ \\ \\mbox {and}\\ \\\n\\ \\|\\bar Z_i-\\sum_{j\\in\\sigma_i}\\delta_ia_jZ_j\\|_q<\\varepsilon_i.\n\\]\n\\end{itemize}\n\nFrom \\cite{rosxp} we know that the spans of $\\{Y_j\\}$ and\n$\\{Z_j\\}$ are complemented in $L_q$, $1p$. Indeed, by interpolation\nit is sufficient to check that $T_\\varepsilon $ maps $L_s$ into $L_2$ for\nsome $s=s(\\varepsilon)<2$. But there is a constant $C_s$ which tends to $1$\nas $s\\uparrow 2$ so that for all $f\\in W_n$, $\\|f\\|_2\\le C_s^n\n\\|f\\|_s$ and the orthogonal projection $P_n$ onto (the closure of)\n$W_n$ satisfies $\\|P_n\\|_p\\le C_s^n$. From this it is easy to\ncheck that if $\\varepsilon C_s^2<1$, then $T_\\varepsilon$ maps $L_s$ into $L_2$.\nWe remark in passing that Bonami \\cite{bon} found for each $p$ (including $p\\ge 2$) and $\\varepsilon$ the largest value of $r=r(p,\\varepsilon)$ such that $T_\\varepsilon$ maps $L_p$ into $L_r$.\n\nThus Theorem \\ref{maintheorem} yields that if $X$ is a subspace of\n$L_p$, $11$, then $X$ is complemented in $L_p$.\n\\end{thm}\n\n\n\nWe now prove Rosenthal's lemma \\cite[proof of Theorem 5]{rosAltgeld} and defer the proof of the ``moreover\" statement in\nTheorem \\ref{rosProblem} until after the proof of the lemma.\n.\n\n\\begin{lm}\\label{rosLemma} Suppose that $T$ is an operator on\n$L_p$, $1\\le p0$ so\nthat $\\|Tx\\|_p \\ge {a} \\| x\\|_p$ for all $x$ in $X$. Since\n$\\ell_p$ does not embed into $L_s$ we get from (4) in Lemma\n\\ref{ellpsequence} that there is $\\eta>0$ so that if $E$ has\nmeasure larger than $1-\\eta$, then $\\| \\mathbf{1} _{\\sim E}x\\|_p \\le\n{\\frac{a}{2}}\\|x\\|_p$ for all $x$ in $x$. Obviously (1) and (3)\nare satisfied for any such $E$. It is proved in \\cite{ros} that\n there is strictly positive $g$ with $\\|g\\|_1=1$ so that\n$\\frac {x}{g}$ is in $L_r$ for all $x$ in $X$. Now simply choose\n$t<\\infty$ so that $E:=[gs$ and hence, by Lemma \\ref{rosLemma}, $X$ embeds into $L_t$.\n\nFinally we prove the ``moreover\" statement in Theorem \\ref{rosProblem}. We now know that $X$ is isomorphic to a Hilbert space. In the proof of Lemma \\ref{rosLemma}, instead of using Rosenthal's result from \\cite{ros}, use Grothendieck's theorem \\cite[Theorem 3.5]{djt}, which implies that there is strictly positive $g$ with $\\|g\\|_1=1$ so that\n$\\frac {x}{g}$ is in $L_2$ for all $x$ in $X$. Choosing $E$ the same way as in the proof of Lemma \\ref{rosLemma} with $T:=T_\\varepsilon$, we get that (1), (2), and (3) are true with $r=2$. Now the $L_2$ and $L_p$ norms are equivalent on both $X_0$ and on $T_\\varepsilon X_0$. 
But it is clear that the only way that $T_\\varepsilon$ can be an isomorphism on a subspace $X_0$ of $L_2$ is for the orthogonal projection $P_n$ onto the closed span of $W_k$, $0\\le k\\le n$, to be an isomorphism on $X_0$ for some finite $n$. But then also in the $L_p$ norm the restriction of $P_n$ to $X_0$ is an isomorphism because the $L_p$ norm and the $L_2$ norm are equivalent on the span of $W_k$, $0\\le k\\le n$ and $P_n$ is bounded on $L_p$ (since $p>1$). It follows that the operator $S:= P_n \\circ \\mathbf{1} _E$ on $L_p$ maps $X_0$ isomorphically onto a complemented subspace of $L_p$, which implies that $X_0$ is also complemented in $L_p$.\n\n\n\nWe conclude this section with the open problem that started us thinking about $\\ell_p$-strictly singular operators.\n\n\\begin{problem}\\label{problem} Let $10$ so that for all $n$, $\\|ATB x_n\\|>\\delta$. Since a reflexive space with the compact approximation property also has the compact metric approximation property \\cite{chojohnson}, there are $C_n\\in K(X)$ with $\\|C_n\\|< 1+1\/n$, $C_n Bx_i=Bx_i$ for $i\\le n$. Since the $C_n$ are compact, for each $n$, $\\|C_n B x_m\\| \\to 0$ as $m\\to \\infty$. Thus $A(TC_n)B x_i=ATB x_i$ for $i\\le n$ and $\\|A(TC_n)B x_m\\|\\to 0$ as $m\\to \\infty$. This implies that no convex combination of $\\{A(TC_n)B\\}_{n=1}^\\infty$ can converge in the norm of $L(X)$ and hence $\\{A(TC_n)B\\}_{n=1}^\\infty$ has no weakly convergent subsequence. This contradicts the weak compactness of $L_A R_B$ and completes the proof.\n\\qed \\medskip\n\n\n\n\n\n\n\\vfill\\eject\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe study of correlations among particles produced in different rapidity\nregions may provide an understanding of the elementary (partonic) interactions \nwhich lead to hadronization.\nMany experiments show strong short-range correlations (SRC) over a region of $\\sim \\pm $ 1 units in rapidity \\cite{alner,aexopoulos}. \nIn high-energy nucleon-nucleon collisions ($\\sqrt{s} \\gg$ 100 GeV) the \nnonsingly diffractive inelastic cross section increases significantly with energy and the magnitude of the long-range forward-backward multiplicity correlations (LRC) increases with the energy \\cite{aexopoulos}. These effects can be understood in terms of multiparton interactions \\cite{walker2004}. \n\n\n In high energy nucleus-nucleus collisions, it has been predicted that multiple parton interactions would produce larger long-range forward-backward multiplicity correlations that extend beyond $\\pm$ 1 units in rapidity, compared to hadron-hadron scattering at the same energy \\cite{capela1,capela2,larry}. The model based on multipomeron exchanges\\\\(Dual Parton Model) predicts the existence of long range correlations \\cite{capela1,capela2}.\n In the Color Glass Condensate\\\\(CGC) picture of particle production the correlations of the particles created at early stages of the collisions can spread over large rapidity intervals, unlike the particles produced at later stages. Thus the measurement of the long range rapidity correlations of the produced particle multiplicities could give us some insight into the space-time dynamics of the early stages of the collisions \\cite{larry}. \n\nOne method to study the LRC strength \nis to measure the magnitude of the forward-backward multiplicity correlation over a long range in pseudorapidity. 
Such correlations were studied in several experiments \\cite{alner,aexopoulos,na35,e802,tom,phobos,phenix} and investigated \ntheoretically \\cite{capela2,amelin,braun,larry,giov,shi,urqmd}.\nIn this paper we present the first results on the forward-backward (FB) multiplicity correlation strength and its range in pseudorapidity in heavy ion collisions at the Relativistic Heavy Ion Collider (RHIC) measured by the STAR experiment. \nEarlier analyses in STAR have focused on the relative correlations of charged particle pairs on the difference variables $\\Delta\\eta$ (pseudorapidity) and $\\Delta\\phi$ (azimuth). It was observed that the near-side peak is elongated in $\\Delta\\eta$ in central Au+Au as compared to peripheral collisions \\cite{tom}. In the present work the measure of correlation strength as defined in Eq. (1) and the coordinate system differs from that of these earlier STAR measurements. The FB correlation strength is measured in an absolute\ncoordinate system, where $\\eta $ = 0 is always physically located at\nmidrapidity (the collision vertex), instead of the relative $\\eta $ difference\nutilized in other 2-particle analyses. These differences allow the determination of the absolute magnitude of the correlation strength.\n\n\nThe correlation strength is defined by the dependence of the average charged particle multiplicity in the backward hemisphere, $\\langle N_{b} \\rangle$, on the\nevent multiplicity in the forward hemisphere, $N_{f}$, such that $\\langle N_{b}\\rangle = a + b N_{f}$, where $\\textit{a}$ is a constant and $\\textit{b}$ measures the correlation strength: \n\\begin{equation}\n b = \\frac{\\langle N_{f}N_{b}\\rangle - \\langle N_{f}\\rangle \\langle N_{b}\\rangle}{\\langle N_{f}^{2}\\rangle - \\langle N_{f} \\rangle ^{2}}= \\frac{D_{bf}^{2}}{D_{ff}^{2}} \n\\label{b}\n\\end{equation} \nIn Eq. (1), $D_{bf}^{2}$(covariance) and $D_{ff}^{2}$(variance) are the backward-forward and forward-forward dispersions, respectively \\cite{capela1,capela2}.\n\n\nThe data utilized for this analysis are from year 2001 (Run II) $\\sqrt{s_{NN}}$ = 200 GeV minimum bias Au+Au collisions ($\\sim$ 2.5$\\times 10^{6}$ events) at the RHIC, as measured by the STAR experiment \\cite{starnim}. The FB correlation has been studied as a function of the centrality of the collision. \n The centralities studied in this analysis account for 0-10, 10-20, 20-30, 30-40, 40-50 and 50-80\\% of the total hadronic cross section. All primary tracks with distance of closest approach to the primary event vertex $< 3$ cm in the Time Projection Chamber (TPC) pseudorapidity range $|\\eta|<1.0$ and with transverse momentum $p_{T} >$ 0.15 GeV\/c were considered. This region was subdivided into bins of width $\\eta = 0.2$. The FB intervals were located symmetrically about midrapidity ($\\eta = 0$) with the distance between bin centers (pseudorapidity gap $\\Delta\\eta$): 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6, and 1.8. \nTo avoid a bias in the FB correlation measurements, care was taken to use different pseudo-rapidity selections for the centrality determination which is also based on multiplicity.\nTherefore, the centrality determination for the\n FB correlation strength for $\\Delta\\eta$ = 0.2, 0.4 and 0.6 is based on the multiplicity in $0.5<|\\eta|<1.0$, while for $\\Delta\\eta$ = 1.2, 1.4, 1.6 and 1.8 the centrality is obtained from $|\\eta|<0.5$. 
For $\\Delta\\eta$ = 0.8 and 1.0 the sum of multiplicities from $|\\eta|<0.3 $ and $0.8<|\\eta|<1.0$ is used for the centrality determination.\nThe effect of centrality region selection on FB correlation strength was also studied by narrowing the region to $|\\eta|<$ 0.3, 0.2 and 0.1 for all $\\Delta\\eta$ bins. This increases the FB correlation strength by $\\sim$ 10-15\\% at the most.\n Since the pseudorapidity particle density ($dN\/d\\eta$) plateau at $\\sqrt{s_{NN}}$ = 200 GeV in Au+Au collisions extends over the region of interest \\cite{phobos2}, this procedure yields consistent particle counts in the FB measurement intervals. An analysis of the data from (Run II) $\\textit{p+p}$ collisions at $\\sqrt{s}$ = 200 GeV, was also performed on minimum bias events ($\\sim$ 3.5$\\times 10^{6}$ events) using the same track cuts as for the Au+Au analysis.\n Corrections for detector geometric acceptance and tracking efficiency were carried out using a Monte Carlo event generator (HIJING) and propagating the simulated particles through a GEANT representation of the STAR detector geometry. \n\n\nIn order to eliminate (or at least reduce) the effect of impact parameter (centrality) fluctuations on the measurement of the FB correlation strength, each relevant quantity ($N_{f}$, $N_{b}$, $N_{f}^{2}$, $N_{f}N_{b}$) was obtained on an event-by-event basis as a function of the event multiplicity, $N_{ch}$. The average uncorrected mean multiplicities\n$\\langle N_{f} \\rangle_{uncorr}$, $\\langle N_{b} \\rangle_{uncorr}$, $\\langle N_{f}^{2} \\rangle_{uncorr}$, and $\\langle N_{f}N_{b} \\rangle_{uncorr}$ in each centrality bin were calculated from a fit to the $N_{ch}$ dependences \\cite{ebye1,tjtwinter}. The functional forms that were used are linear in $N_{f}$, $N_{b}$ and quadratic in $N_{f}^{2}$ and $N_{f}N_{b}$ for all $\\Delta\\eta$ bins. \nTracking efficiency and acceptance corrections were then applied to $\\langle N_{f} \\rangle_{uncorr}$, $\\langle N_{b} \\rangle_{uncorr}$, $\\langle N_{f}^{2} \\rangle_{uncorr}$, and $\\langle N_{f}N_{b} \\rangle_{uncorr}$ to each event. Then the corrected values of $\\langle N_{f} \\rangle$, $\\langle N_{b} \\rangle$, $\\langle N_{f}^{2} \\rangle$, and $\\langle N_{f}N_{b} \\rangle $ for each event were used to calculate the backward-forward and forward-forward dispersions, $D_{bf}^{2}$ and $D_{ff}^{2}$, binned by centrality, and normalized by the total number of events in each bin. This method removes the dependence of the FB correlation strength on the width of the centrality bin.\n As a cross check an alternative method of centrality determination was also carried out using the STAR Zero Degree Calorimeter (ZDC) for the 0-10\\% centrality range and the results\nare shown in Fig. 1a along with the 0-10\\% most central events from the minimum bias dataset.\n\\begin{figure}[thbp]\n\\centering \n\\vspace*{-0.2cm}\n\\includegraphics[width=0.56\\textwidth,height=4.0in]{Fig1_FB.eps}\n\\vspace*{-1.5cm}\n\\caption{(a) FB correlation strength for 0-10\\% (circle) and ZDC based centrality ( square) (b) FB correlation strength for 10-20, 20-30, 30-40, 40-50 and 50-80\\% ( square) Au+Au and (c) for $\\textit {p+p}$ collisions as a function of $\\Delta\\eta$ at $\\sqrt{s_{NN}}$ = 200 GeV. The error bars represent the systematic point-to-point error. The boxes show the correlated systematic errors.}\n\\label{CentralAu}\n\\end{figure}\n Statistical errors are smaller than the data points. \nSystematic effects dominate the error determination. 
The systematic errors are determined by binning events according to the z-vertex in steps of 10 cm varying from -30 to 30 cm and the maximum value of the fit range (0-570, 0-600 and 0-630) for $\\langle N_{f} \\rangle, \\langle N_{b} \\rangle, \\langle N_{f}^2 \\rangle$, and $\\langle N_{f}N_{b} \\rangle$ vs $N_{ch}$. An additional error could arise due to finite detection efficiency in the TPC. This is estimated to be $\\sim$ 5\\% for most central collisions. \n The overall systematic errors due to the fit range, which causes a\ncorrelated shift along the y-axis, are shown in figures as boxes.\n\n\n\n Figure \\ref{CentralAu} shows the FB correlation strength as a function of $\\Delta\\eta$ for $\\textit{p+p}$ and centrality selected Au+Au collisions along with the ZDC based centrality results. The results from ZDC are slightly lower as compared to the 0-10\\% most central events\n sampled from minimum bias datasets using $N_{ch}$. \nIt is observed that the magnitude of the FB correlation strength decreases with the decrease in centrality. \n The FB correlation strength evolves from a nearly flat function for 0-10\\% to a sharply decreasing function with $\\Delta\\eta$ for the 40-50 and 50-80\\% centrality bins, which is expected if only short range correlations (SRC) are present \\cite{capela1}.\nThe FB correlation strength values for 40-50 and 50-80\\% centrality bins at large gap \n($\\Delta\\eta >$ 1.0) have an average value near zero. The individual b values are near zero within their systematic errors.\n Figure \\ref{CentralAu} shows that the dependence of the FB correlation strength with $\\Delta\\eta$ is quite different in central Au+Au compared to $\\textit{p+p}$ collisions. It is also observed that the FB correlation strength decreases faster in the peripheral (40-50 \\% centrality) Au+Au as compared to $\\textit{p+p}$ collisions. This indicates that the short range correlation length is smaller in \nAu+Au collisions than in $\\textit{p+p}$. \n\n Figure 2 shows the dependence of the dispersions $D_{bf}^{2}$ and $D_{ff}^{2}$ on $\\Delta\\eta$ for central Au+Au collisions (Fig. 2a) and $\\textit{p+p}$ collisions (Fig. 2b). \n The nearly constant value of $D_{ff}^{2}$ with $\\Delta\\eta$ represents the dispersion within the same $\\eta$ window, which has approximately the same average multiplicity for all $\\Delta\\eta$ values. The behavior of $D_{bf}^{2}$ is similar to the FB correlation strength. Thus the FB correlation variation with the size of $\\Delta\\eta$ is dominated by the $D_{bf}^{2}$ in Eq. (1).\n\n\\begin{figure}[thbp]\n\\vspace*{-0.2cm}\n\\centering \n\\includegraphics[width=0.56\\textwidth,height=3.5in]{Fig2_FB.eps}\n\\vspace*{-1.5cm} \n\\caption{(a) Backward-forward dispersion, $D_{bf}^{2}$, and forward-forward dispersion, $D_{ff}^{2}$, for 0-10\\% centrality as a function of $\\Delta\\eta$ for Au+Au collisions at $\\sqrt{s_{NN}}$ = 200 GeV. (b) $D_{bf}^{2}$, and $D_{ff}^{2}$ for \\textit{p+p} collisions at $\\sqrt{s}$ = 200 GeV. The error bars represent the systematic point-to-point error. The boxes show the correlated systematic errors.}\n\n\\label{PeripAuandpp}\n\\end{figure}\n \n Short range correlations have been previously observed in high energy hadron-hadron collisions \\cite{alner}. The shape of the SRC function is symmetric about midrapidity and has a maximum at $\\eta$ = 0. It can be parameterized as $\\propto \\exp (-\\Delta\\eta\/\\lambda)$, where $\\lambda$ is the short range correlation length and is found to be $\\lambda$ $\\sim$ 1. 
Thus the SRC are significantly reduced\nby a separation of $\\sim 1.0$ units of pseudorapidity \\cite{capela2,larry2}. \nThe short range correlation length is smaller in nucleus-nucleus collisions as compared to high energy hadron-hadron collisions \\cite{e802,urqmd}. \n The remaining portion of the correlation strength can be attributed to the LRC. This can be seen in Fig. 1b where the magnitude of the FB correlation strength is zero for \n $\\Delta\\eta \\sim 1$ for 40-50\\% centrality. \n In case of 0-10\\% Au+Au collisions the magnitude of FB correlation strength is 0.6, indicating that 60\\% of the observed hadrons are correlated. \n \n The FB correlation results are compared with the predictions of two models of A+A collisions widely used at RHIC energies - HIJING \\cite{hijing} and the Parton String Model (PSM) \\cite{amelin2}. The PSM is the Monte Carlo implementation of the Dual Parton Model (DPM) \\cite{capela2} or Quark-Gluon String Model (QGSM) concepts \\cite{kaidalov}, considering both soft and semihard components on a partonic level. The HIJING model is based on perturbative QCD processes which lead to multiple jet production and jet interactions in matter \\cite{hijing}. Nearly 1 million minimum bias Au+Au collisions at $\\sqrt{s_{NN}}$=200 GeV were simulated for each model.\nThe PSM events were obtained without string fusion options. We used HIJING version 1.383 with default options. We have also simulated 10 million $\\textit{p+p}$ minimum bias events at the same cms energy to\n provide the reference for comparison with Au+Au collisions. The correlation analysis was carried out exactly in the same manner as for the data. \n\\begin{figure}[hb]\n\\centering \n\\vspace*{-0.3cm} \n\\includegraphics[width=0.50\\textwidth,height=3.0in]{Fig3_FB.eps}\n\\vspace*{-1.5cm}\n\\caption{ The FB correlation strength for 0-10\\% most central Au+Au collisions and $\\textit{p+p}$ \nfrom data ( circle), HIJING ( triangle) and PSM ( square). The error bars shown are for data.}\n\\label{Models}\n\\end{figure} \n Both PSM and HIJING predictions are lower than the data as shown in Fig. 3 but PSM exhibits a large LRC for $\\Delta\\eta > 1.0$ while HIJING predicts significantly smaller correlations than observed in the data. In case of $\\textit{p+p}$ collisions the HIJING prediction agrees with the data. The PSM does not show the decrease of $\\textit{b}$ with $\\Delta\\eta$ as seen in the data. \nThese trends are illustrated in Fig. 3, where the variation\nof the FB correlation strength with $\\Delta\\eta$ is shown\nfor Au+Au, HIJING, and PSM. The strong fall of $\\textit{b}$ with $\\Delta\\eta$ in HIJING provides some constraints on the contribution of impact parameter fluctuations to the correlation strength (Fig.3). Recently, Hadrod-string dynamics (HSD) transport model \\cite{hsd} and CGC \\cite{lappi} have addressed the possible effect of impact parameter fluctuations on the correlations with different results. \n \n \nA description of the FB correlations, which qualitatively agrees with the measured behavior of FB correlation strength is provided by the Dual Parton Model (DPM) \\cite{capela1}. \nAs mentioned earlier the FB correlation strength is controlled by the numerator of Eq. \\ref{b}. 
For the case of hadron-hadron collisions:\n\\begin{eqnarray}\n\\nonumber D_{bf}^2 = \\langle N_{f}N_{b} \\rangle - \\langle N_{f} \\rangle \\langle N_{b} \\rangle = \\\\\n\\nonumber \\langle k \\rangle (\\langle N_{0f}N_{0b} \\rangle - \\langle N_{0f} \\rangle \\langle N_{0b} \\rangle ) \\\\\n+ [(\\langle k^2 \\rangle - \\langle k \\rangle^2)]\\langle N_{0f} \\rangle \\langle N_{0b} \\rangle \\hspace{0.05cm}\n\\label {Dbf}\n\\end{eqnarray}\nwhere $\\langle N_{0f} \\rangle$ and $\\langle N_{0b} \\rangle$ are the average multiplicity of charged particles produced in \nthe forward and backward hemispheres in a single elementary inelastic collision \\cite{capela2}. The average number of elementary (parton-parton) inelastic collisions is given by $\\langle k \\rangle$. \nThe first term in Eq. 2 is the correlation between particles produced in the same inelastic collision, representing the SRC in rapidity. The second term, $\\langle k^2 \\rangle - \\langle k \\rangle^2$, is due to the fluctuation in the number of elementary inelastic collisions and is controlled by unitarity. This term gives rise to LRC \\cite{capela1,capela2}. \n\n\n Recently, long range FB multiplicity correlations have also been discussed in the framework of the CGC\/glasma motivated phenomenology \\cite{larry2,dias}. The glasma provides a QCD based description which includes many features of the DPM approach, in particular the longitudinal rapidity structure \\cite{venu}. This model predicts the growth of LRC with collision centrality \\cite{larry2}. It has been argued that the long range rapidity correlations are due to the fluctuations of the number of gluons and can only be created at early time shortly after the collision \\cite{larry,larry3}.\n\nIn summary, this is the first measurement of long-range FB correlation strengths in ultra relativistic nucleus-nucleus collisions. A large long range correlation is observed in central Au+Au collisions that vanishes for 40-50\\% centrality. Both DPM and CGC argue that the long range correlations are produced by multiple parton-parton interactions \\cite{capela1,larry}. Multiple parton interactions are necessary for the formation of partonic matter.\nIt remains an open question whether the DPM and CGC models can describe the LRC reported here and the near-side correlations \\cite{tom} simultaneously.\nFurther studies of the forward-backward correlations using identified baryons and mesons as well as the dependence of the correlations on the collision energy may be able to distinguish between these two models. \n \n We express our gratitude to C. Pajares and N. Armesto for many fruitful discussions and providing us with the PSM code. We also thank A. Capella, E.G. Ferreiro and Larry McLerran for important discussions. \nWe thank the RHIC Operations Group and RCF at BNL, and the NERSC Center \nat LBNL and the resources provided by the Open Science Grid consortium \nfor their support. This work was supported in part by the Offices of NP \nand HEP within the U.S. DOE Office of Science, the U.S. NSF, the Sloan \nFoundation, the DFG cluster of excellence `Origin and Structure of the Universe', \nCNRS\/IN2P3, RA, RPL, and EMN of France, STFC and EPSRC of the United Kingdom, FAPESP \nof Brazil, the Russian Ministry of Sci. and Tech., the NNSFC, CAS, MoST, \nand MoE of China, IRP and GA of the Czech Republic, FOM of the \nNetherlands, DAE, DST, and CSIR of the Government of India,\nthe Polish State Committee for Scientific Research, and the Korea Sci. \n\\& Eng. 
Foundation.\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\subsection{Our results} \n\\label{sec:intro:setup}\nIn this work, we focus on the simple yet prevalent case where the potential\noutcomes are linear in the collected features (as in the housing market\ndisequlibrium example), with a residual (random) error term that is uncorrelated\nacross potential outcomes.\nMore precisely, \nwe consider a setting involving $n$ individuals\n(observations) and $k$ potential outcomes (models). Each individual $i \\in [n]$ has a feature\nvector $\\vec{x}^{(i)} \\in \\mathbb{R}^d$ and each model $j \\in [k]$ has a vector of regression\nparameters $\\vec{w}^*_j \\in \\mathbb{R}^d$. Then, each individual with feature vector\n$\\vec{x}$ recieves a label for each outcome $j$ equalling \n$\\smash{y_{j} = \\vec{w}_j^{* \\top} \\vec{x} + \\varepsilon_{j}}$,\nwhere $\\vec{\\varepsilon}$ is assumed to be an standard multivariate Gaussian random\nvariable. Our setting will involve self-selection bias arising from the fact\nthat every individual will only choose to reveal one of their labels, \n$j_* \\in \\{1, \\ldots, k\\}$, as determined by some function of\nthat individual's full set of labels $y_{1}, \\ldots, y_{k}$.\nOur goal in this paper is to estimate the $k$ parameter vectors \n$\\vec{w}^*_1$, $\\dots$, $\\vec{w}^*_k$ under both the known-index and\nunknown-index observational models.\n\nWe begin with our results in the known-index setting, which we formally define\nin Definition~\\ref{defn:index_observed}: \n\\begin{description}\n \\item[Known-index Setting:] We observe $n$ samples \n $(\\vec{x}^{(i)}, y^{(i)}, j_*^{(i)})$, where \n $j_*^{(i)} = S(y_{1}^{(i)}, \\dots, y_{k}^{(i)})$, and \n $y^{(i)} = y_{j_*^{(i)}}^{(i)}$ for\n some known self-selection rule\n $S : \\mathbb{R}^k \\rightarrow \\{1, \\ldots, k\\}$.\n\\end{description}\nHere, we can estimate the unknown parameter vectors $\\vec{w}^*_1$, $\\dots$,\n$\\vec{w}^*_k$ to arbitrary accuracy and we allow for quite general\nself-selection rules.\nIn particular, we allow the self-selection rule $S$ to be any \n\\textit{convex-inducing rule} (Definition~\\ref{defn:convex-inducing}): \nletting $\\vec{y} \\in \\mathbb{R}^k$ be the vector of potential\noutcomes, if we fix the $j$-th \ncoordinate $y_j$ for any $j \\in [k]$, then deciding whether $j$ will be the\nwinner (i.e., whether $j_* = j$) is the same as\ndeciding whether $\\vec{y}_{-j}$ belongs to some convex set (this convex set\ncan depend on $y_j$).\nWe formally define the self-selection rule in Definition \n\\ref{def:selfSelectionRule} and the convex-inducing self-selection rule in \nDefinition \\ref{defn:convex-inducing}. Below we present an informal version of \nour estimation theorem for the known-index case. The corresponding formal \nversion of the theorem can be found in Theorem \\ref{thm:knownIndex}.\n\n\\begin{inftheorem}[Known-Index Estimation] \\label{infthm:knownIndex}\n Let $(\\vec{x}^{(i)}, y^{(i)}, j_*^{(i)})_{i = 1}^n$ be $n$ observations from \n the known-index self-selection model with $k$ linear models $\\vec{w}^*_1$,\n $\\dots$, $\\vec{w}^*_k$ as described in Section \n \\ref{sec:intro:setup}. 
If the self-selection rule is convex-inducing and that the probability of observing each model is lower \n bounded by $(\\alpha \/ k)$ for some $\\alpha > 0$, then there is an estimation algorithm that outputs \n $\\hat{\\vec{w}}_1$, $\\dots$, $\\hat{\\vec{w}}_k$ with \n $\\norm{\\hat{\\vec{w}}_i - \\vec{w}_i^*} \\le \\varepsilon$, when \n $n \\ge \\mathrm{poly}(d, k, 1\/\\alpha, 1\/\\varepsilon)$. Furthermore, the running time of the algorithm is \n $\\mathrm{poly}(d, k, 1\/\\alpha, 1\/\\varepsilon)$.\n\\end{inftheorem}\n\nThe case of unknown-index is significantly more challenging and as we already \nmentioned even the with infinite number of samples it is unclear if we have \nenough information to estimate $\\vec{w}_j$'s. We define the setting formally in\nDefinition \\ref{defn:index_unobserved} and informally below:\n\\begin{description}\n \\item[Unknown-index Setting:] We observe $n$ samples of the\n form $(\\vec{x}^{(i)}, y^{(i)})$, where $\\vec{x}^{(i)} \\sim \\mathcal{N}(0,\n \\bm{I}_d)$, and\n $y^{(i)} = \\max_{j \\in [k]} y_{j}^{(i)}$.\n\\end{description}\nOur first result below shows this\nidentifiability and its formal version is Theorem \\ref{thm:identifiability_no_index}.\n\n\\begin{inftheorem}[Unknown-index Identifiability] \\label{infthm:unknownIndex:identifiability}\n If we have infinitely many samples from the unknown-index\n setting, then we can identify\n all $k$ linear models $\\vec{w}^*_1$, $\\dots$, $\\vec{w}^*_k$.\n\\end{inftheorem}\n\n Next we continue with finite-sample and finite-time algorithms for the \nunknown-index case. To achieve this problem we need some separability assumption\nbetween the $\\vec{w}_i^*$.\n\n\\begin{infassumption}[Separability Assumption -- See Assumption \\ref{as:no_index}] \\label{infasp:separability}\n The separability assumption that we use is that the projection of any vector\n $\\vec{w}^*_i$ to the direction of any other vector $\\vec{w}^*_j$ cannot be\n larger that the norm of $\\vec{w}_j^*$ (and in fact, must be at least\n $\\Delta$ smaller).\n\\end{infassumption}\n\nOur first result for arbitrary $k$ guarantees the estimation of the parameter\nvectors $\\vec{w}^*_i$ within accuracy $1\/\\mathrm{poly}(k)$. For a formal statement of \nthe theorem below we refer to Theorem \\ref{thm:no_index}.\n\n\\begin{inftheorem}[Unknown-index Estimation -- General $k$ Case] \\label{infthm:unknownIndex:general}\n Let $(\\vec{x}^{(i)}, y^{(i)})_{i = 1}^n$ be $n$ observations from a\n self-selection setting with $k$ linear models $\\vec{w}^*_1$, $\\dots$, \n $\\vec{w}^*_k$ as described in the unknown-index setting of Section \n \\ref{sec:intro:setup}. If we also assume Informal Assumption \n \\ref{infasp:separability} and that the probability of observing each model is\n lower bounded by $(\\alpha \/ k)$ for some $\\alpha > 0$, then there exists an estimation algorithm that outputs \n $\\hat{\\vec{w}}_1$, $\\dots$, $\\hat{\\vec{w}}_k$ with \n $\\norm{\\hat{\\vec{w}}_i - \\vec{w}_i^*} \\le 1\/\\mathrm{poly}(k)$, assuming that \n $n \\ge \\exp(\\mathrm{poly}(k)) \\cdot \\mathrm{poly}(d, 1\/\\alpha)$. Furthermore, the running time of our\n algorithm is also $\\exp(\\mathrm{poly}(k)) \\cdot \\mathrm{poly}(d, 1\/\\alpha)$.\n\\end{inftheorem}\n\nFinally, we study the case where $k = 2$ and show how to estimate the \nparameters to arbitrary accuracy. 
For this we only need a more direct \nseparability assumption on the parameter vectors:\n\n\\begin{infassumption}[$k = 2$ Separability Assumption -- See Assumption \\ref{as:no_index:2}] \\label{infasp:separability:2}\n The two weight vectors are separated in Euclidean distance, i.e.,\n $\\norm{\\vec{w}_1^* - \\vec{w}_2^*}_2 \\geq \\Delta$.\n\\end{infassumption}\n\nUnder the above assumption we can recover the parameters efficiently: for the\nformal version of the informal theorem below we refer to Theorem\n\\ref{thm:no_index:2}.\n\n\\begin{inftheorem}[Unknown-index Estimation -- $k = 2$ Case] \\label{infthm:unknownIndex:2}\n Let $\\varepsilon > 0$ and $(\\vec{x}^{(i)}, y^{(i)})_{i = 1}^n$ be $n$ observations from a\n self-selection setting with $2$ linear models $\\vec{w}^*_1$, $\\vec{w}^*_2$ \n as described in the unknown-index setting of Section \\ref{sec:intro:setup}. Under Informal Assumption \\ref{infasp:separability:2}, there exists an estimation \n algorithm that outputs $\\hat{\\vec{w}}_1$, $\\hat{\\vec{w}}_2$ with \n $\\norm{\\hat{\\vec{w}}_i - \\vec{w}_i^*} \\le \\varepsilon$, assuming that \n $n \\ge \\mathrm{poly}(d, 1\/\\alpha, 1\/\\varepsilon)$. Furthermore, the running time of the\n algorithm is also $\\mathrm{poly}(d, 1\/\\alpha, 1\/\\varepsilon)$.\n\\end{inftheorem}\n\n\\subsection{Our Contributions and Techniques} \\label{sec:intro:contributions}\nWe initiate a line of work on attaining statistical and\ncomputational efficiency guarantees in the face of structured self-selection\nbias. \n\n\\paragraph{Known-index case.} \nIn the known-index case, we require $\\mathrm{poly}(1\/\\varepsilon, k, d)$ sample and time\ncomplexity to estimate all $k$ model parameters to accuracy $\\varepsilon$ in $d$\ndimensions, and can accommodate quite general selection criteria. To prove this\nknown-index result, we construct a log-likelihood-inspired objective function that has the true set of parameters as optimum. The key difficulty associated with this formulation is that unlike ``nice'' settings (for example, the data generating model belongs to an exponential family), the log-likelihood involves an integration over possible outputs of the \\emph{unobserved} models. This scenario is reminiscent of latent-variable models where strong structural properties on the objective functions are uncommon. Nevertheless, we show that this objective function is \\emph{strongly convex} where we crucially rely on the variance reduction properties of log-concave densities conditioned on convex sets. Our next goal is to run projected stochastic gradient descent (PSGD) on this objective function. Unfortunately, in contrast to standard settings in stochastic optimization, we do not have simple\naccess to unbiased stochastic estimates of the gradient due to the integrating out of the unobserved models in the objective function. Consequently, the gradient in this case involves sampling from the conditional distribution over outputs from the unobserved models given the observed sample at the candidate parameter set currently being considered. To sample from this conditional distribution, we show that a projected version of the Langevin Monte Carlo sampling algorithm due to Bubeck, Eldan and Lehec \\citep{bubecksampling} mixes fast and produces an approximate stochastic gradient. 
Finally, we show that this\napproximate stochastic gradient suffices for the PSGD algorithm to converge.\nWe provide the details of this algorithm and its analysis in Section\n\\ref{sec:known}.\n\n\\paragraph{Identification with unknown index.} \nIn the more challenging unknown-index case, it is not even clear whether the \nparameters $\\vec{w}_j$ are even \\emph{identifiable} from the sample that we have. First, we show that this is in fact the case. Our proof uses an novel identification argument which we believe can be applied to other self-selection settings beyond the Max-selection criterion considered in this work. Formally speaking, we would like to exhibit the existence of a\nmapping $f$ from $\\Phi$ to the set of parameters $\\vec{W}$ given access to the distribution function, $\\Phi$, of the pairs $(\\vec{x}, y)$ generated according to the self-selection model with unknown indices. Our construction of $f$ is based on a conditional moment calculation where we analyze the moments of $y$ conditioned on $\\vec{x}$ lying in various one-dimensional subspaces. The main observation is that while closed form solutions are not known for the conditional moments, the higher order moments of $y$ still determine the length of the projection of the parameter vector with the largest projection along $x$. Concretely, we show that the higher-order moments of $y$ conditioned on $x$ being parallel to a unit vector $v$ are upper and lower bounded (up to constants) by the moments of normal distribution with $0$ and variance $\\max_i (v^\\top w_i)^2 + 1$. While a single direction does not uniquely determine any of the underlying parameter vectors, the direction maximizing this quantity over all one dimensional subspaces corresponds to the unit vector along the longest parameter vectors allowing recovery of \\emph{one} of $k$ vectors. In the next step, we show that we may effectively ``peel off'' the single identified model from the distribution function, $\\Phi$, reducing the problem of recovering the remaining parameter vectors to a self-selection problem with $k - 1$ parameter vectors. A recursive application of this argument allows identification of the remaining parameter vectors one by one.\n\n\\paragraph{Estimation with unknown index.}\n We then move our attention to estimation with finite time and samples. We target\nthe common $\\max$-selection criterion and, under some separability assumption\namong the $\\vec{w}_j$'s, we provide an algorithm with \n$\\mathrm{poly}(d) \\cdot \\exp(\\mathrm{poly}(k))$ sample and time complexity to estimate the\nregression parameters up to error $1\/\\mathrm{poly}(k)$. \nOur technique to prove these finite-time and finite-sample results is to try to\ndevelop a finite accuracy version of our identifiability argument. Unfortunately, \nthis requires an exponential (in the ambient dimension $d$) sized grid search over the unit sphere. The traditional approach to address such difficulties is to restrict our search to a\nsuitably chosen $k$ dimensional subspace, $U$, identifiable from the data and\ncrucially \\emph{contains} the parameter vectors, $\\vec{w}_i$. We identify this\nsubspace through the spectrum of a suitably chosen matrix but we encounter an additional \n\\emph{key} difficulty, compared to the previous applications of this method. 
While\nour choice of matrix, $M \\triangleq \\mathbb{E} [y^2 \\vec{x} \\vec{x}^\\top]$, is natural, $M$ does not decompose into a independently weighted sum of matrices each corresponding to a single parameter vector, in stark contrast to the scenario encountered in simpler problems such as mixtures of linear regressions. Hence, showing that the top singular singular subspace of $M$\ncontains the $\\vec{w}_i$ is significantly more involved and requires novel \napproximation ideas. Despite these difficulties, we derive a closed form expression for $M$ as a (positively) weighted sum of the identity matrix, $I$ and outer products $\\vec{w}_i (\\vec{w}_i)^\\top$ where the coefficients for a single parameter vector \\emph{depend} on the other vectors. To obtain this closed form characterization, we replace the max function in the self-selection criterion with a \\textit{smooth maximization} function resulting\nin a matrix $M'$ approximating $M$ and analyze its spectrum through several careful applications of Stein's Lemma facilitated by the differentiability of the smooth maximization function. Applying a limiting argument to $M'$, we obtain the closed form characterization of $M$ and consequently, show that the span of the parameter vectors $\\vec{w}_i$ is contained in the top-$k$ singular subspace of $M$ and the singular values associated with these directions are bounded away from those for the orthogonal complement establishing a strict spectral gap. Having identified the \nlow-dimensional subspace containing the $w_i$, we may now restrict our search to \nthis subspace. We conclude our estimation argument with a careful finite-sample\nadaptation of our identifiability argument highlighted above. \n\n\\paragraph{Efficient estimation for $k=2$.} For the specific yet well-studied case of $k = 2$, e.g. \\cite{fair1972methods}, we develop a estimation algorithm based on the method of moments that achieves estimation error $\\varepsilon$ with $\\mathrm{poly}(d, 1\/\\varepsilon)$ time and sample complexity. It is an interesting open problem whether a similar procedure may be derived for the general case.\n\\medskip\n\n\n\n\n\\subsection{Related Work} \\label{sec:intro:related}\n\\input{related}\n\n\n\n\\subsection{Objective Function for Linear Regression with Self-Selection Bias} \n\\label{sec:log_lik}\n \nThe objective function that we use is inspired by the log-likelihood function.\nWe show that our objective function is convex (even though linear regression with \nself-selection bias does not belong to any exponential family). Suppose we have a\ngiven parameter estimate for the $[\\vec{w}_j^*]_{j=1}^k$ given by \n$\\bm{W} = [\\vec{w}_j]_{j=1}^k$ then we define its objective value\n$\\overline{\\ell}(\\vec{W})$ as follows.\n\\begin{align}\n \\overline{\\ell}(\\vec{W}) \n &\\triangleq \n \\frac{1}{n} \\sum_{i = 1}^n \n \\mathbb{E}_{(y, j_*) \\sim \\mathcal{D}(\\vec{x}^{(i)}; \\vec{W}^*)} \n \\left[ \\ell(\\vec{W}; \\vec{x}^{(i)}, y, j_*) \\right] \\notag \\\\\n &\\triangleq \\frac{1}{n} \\sum_{i = 1}^n \\mathbb{E}_{(y, j_*)} \\left[\n \\log\\lr{f_{\\sigma}(y - \\bm{w}_{j_*}^\\top \\vec{x}^{(i)})} + \n \\log\\lr{\\int_{C_{j_*}(y)} \\prod_{j \\neq j_*} \n f_{\\sigma}(\\bm{z}_j - \\vec{w}_j^\\top \\vec{x}^{(i)})\\, d\\bm{z}_{-j_*} }\n \\right] \\label{eq:objectiveDefinition},\n\\end{align}\nwhere we recall that $f_\\sigma$ is the density function of the standard normal\ndistribution. 
\nThe above expression is based on the population\nlikelihood under the current estimate $\\vec{W}$ of the pair $(y, j_*)$\nconditioned on the value of $\\vec{x}$: see Appendix\n\\ref{app:computationsLogLikelihood} for the exact derivation. \nThe gradient of $\\overline{\\ell}$ can then be expressed in the following form: \n\\begin{align}\n \\label{eq:ll_grad_i:main}\n \\nabla_{\\vec{w}_j} \\overline{\\ell}(\\vec{W}) &= \n \\frac{1}{n\\sigma^2} \\sum_{i = 1}^n \\Exp_{(y, j_*)} \\left[ \n \\bm{1}_{j = j_*}\n \\cdot y +\n \\bm{1}_{j \\neq j_*} \\cdot \n \\mathbb{E}_{\\vec{z}_{-j_*} \\sim \\mathcal{N}((\\vec{W}^\\top x^{(i)})_{-j_*}, \\sigma^2 \\bm{I}_{k-1})}\\left[z_j|\\bm{z}_{-j_*} \\in C_{j_*}(y)\\right]\n - \\vec{w}_{j}^\\top \\vec{x}^{(i)}\n \\right] \\vec{x}^{(i)}.\n\\end{align}\nThe first thing to verify is that the set of true parameters $\\vec{W}^*$ are \na stationary point of the objective function that we proposed above. The \nproof of the following lemma can be found in Appendix \\ref{app:known}. \n\n\\begin{lemma} \\label{lem:known:stationary}\n It holds that $\\nabla \\overline{\\ell}(\\vec{W}^*) = 0$, where \n $\\bm{W}^* = [\\bm{w}_j^*]_{j=1}^k$ is the set of true parameters \n of the known-index self-selection model described in Definition\n \\ref{defn:index_observed}. \n\\end{lemma}\n\\begin{proof}\n See Appendix \\ref{app:known:stationary}.\n\\end{proof}\n\nOur goal is to apply projected stochastic gradient descent (PSGD) on\n$\\overline{\\ell}$. To this end, we need to prove that our objective function is\nactually strongly concave and hence the optimum of $\\overline{\\ell}$ is unique and\nequal to $\\vec{W}^*$. We show this strong convexity in Section\n\\ref{sec:convexity}. Next, we need to show that we actually apply PSGD and hence\nneed to find a procedure to sample unbiased estimates of the gradient of\n$\\bar{\\ell}$. Unfortunately the form of the objective function does not allow us\nto find such an efficient procedure. For this reason we relax our requirement to\nfinding approximately unbiased estimates of $\\nabla \\bar{\\ell}$. To achieve this\nwe use a projected version of Langevin dynamics as we show in Section \n\\ref{sec:Langevin}. Additionally, we need to show that the second moment of our \ngradient estimates cannot be very large which we also show in Section \n\\ref{sec:Langevin}. Finally, we need to adapt the proof of convergence of PSGD \nto show that the small bias that Langevin dynamics introduces can be controlled \nin a way that does not severely affect the quality of the output estimation \nwhich we show in Section \\ref{sec:biased_sgd}. In Section \n\\ref{sec:known:main:proof} we combine everything together to prove our \nestimation result.\n\n\\subsection{Strong Concavity} \\label{sec:convexity}\n\nThe Hessian of $\\overline{\\ell}$ is difficult to analyze directly. We thus start\nwith the Hessian of the log-likelihood for a single sample $(\\vec{x}^{(i)},\ny^{(i)}, j_*^{(i)})$. In particular, in Appendix\n\\ref{app:computationsLogLikelihood} we derive the Hessian of this function\n$\\ell(\\vec{W}; \\vec{x}^{(i)}, y^{(i)}, j_*^{(i)})$, \nwhich comprises blocks $\\bm{H}_{jl}$ such that \n\\[ (\\bm{H}_{j,l})_{ab} = \\ddt{(\\bm{w}_l)_a}{(\\bm{w}_j)_b} \\ell(\\bm{W};\n\\vec{x}, y, j_*). \\]\nFollowing the computation in Appendix \\ref{app:computationsLogLikelihood}, it\nfollows that for a single sample $\\vec{x}, y, j_*$, the matrix block \n$\\bm{H}_{j,j_*} = 0$ for all $j \\neq j_*$. 
Thus, it remains to consider only the\nblocks $\\bm{H}_{j_*,j_*}$ and $\\bm{H}_{j,l}$ for $j,l \\neq j_*$. In Appendix \n\\ref{app:computationsLogLikelihood} we show that\n\\begin{align*}\n \\bm{H}_{j_*,j_*} = -\\frac{1}{\\sigma^2}\\vec{x}\\vec{x}^\\top\n\\end{align*}\nand that\n\\begin{equation*}\n \\bm{H}_{\\vec{W}_{-j_*}} = \\frac{1}{\\sigma^4} \\lr{\n \\text{Cov}_{\\bm{z}_{-j_*} \\sim \\mathcal{N}((\\bm{W}^\\top \\vec{x})_{-j_*},\\, \\sigma^2 \\bm{I}_{k-1})}\n \\left[z_j, z_l\\,|\\, \\bm{z}_{-j_*} \\in C_{j_*}(y)\\right]\n - \\sigma^2 \\bm{I}\n } \\otimes \\vec{x}\\vec{x}^\\top,\n\\end{equation*}\nwhere $\\otimes$ represents the Kronecker product. Now, the key property of\nconvex-inducing selection functions in our proof is that, for Gaussian random\nvariables over $\\mathbb{R}^{k-1}$, the variance along any direction is non-increasing when the\nvariable is restricted to a convex set.\n\n\\begin{lemma}[Corollary 2.1 of \\citep{kanter1977reduction}]\n Let $\\vec{X} \\in \\mathbb{R}^n$ be a random vector with Gaussian density \n $f_{\\vec{X}}$. For a convex set $A \\subseteq \\mathbb{R}^n$ with positive mass\n under the distribution of $\\vec{X}$, define $\\vec{X}_A$ to be $\\vec{X}$ restricted to\n $A$, i.e., a random variable with density \n $f_{\\vec{X}_A}(\\vec{x}) = f_{\\vec{X}}(\\vec{x}) \\cdot (\\int_{A} f_{\\vec{X}}(\\vec{z})\\, d\\vec{z})^{-1}$ for $\\vec{x} \\in A$ (and $0$ otherwise).\n Then, for all $\\vec{v} \\in \\mathbb{R}^n$, \n \\[ \\text{Var}[\\vec{v}^\\top \\vec{X}_A] \\leq \\text{Var}[\\vec{v}^\\top \\vec{X}]. \\]\n\\end{lemma}\n\nIn particular, together with properties of the\nKronecker product, this implies that \n$\\bm{H}_{\\vec{W}_{-j_*}} \\preceq 0$. Thus, the complete Hessian of the \nfunction $\\ell$ can be expressed as a block matrix of the form:\n\\[ \\bm{H} \n = \\left[\\begin{matrix}\n -\\frac{1}{\\sigma^2} \\vec{x}\\vec{x}^\\top & \\bm{0} \\in \\mathbb{R}^{d \\times d(k-1)} \\\\\n \\bm{0} \\in \\mathbb{R}^{d(k-1) \\times d} & \\bm{H}_{\\vec{W}_{-j_*}}\n \\end{matrix}\\right]\n \\preceq \\left[\\begin{matrix}\n -\\frac{1}{\\sigma^2} \\vec{x}\\vec{x}^\\top & \\bm{0} \\\\\n \\bm{0} & \\bm{0}\n \\end{matrix}\\right].\n\\]\nWe are now ready to upper bound the Hessian $\\bm{H}_{pop}$ of our objective\nfunction $\\bar{\\ell}$. \nIn particular, \nat this point we can use our minimum-probability assumption \n(Assumption \\ref{asp:known:2}) and our thickness of covariates assumption\n(Assumption \\ref{asp:known:1}), from which we get that for the Hessian\n$\\bm{H}_{pop}$ it holds that \n\\[ \\bm{H}_{pop} \\preceq -\\frac{\\alpha}{\\sigma^2 \\cdot k} \\bm{I}. \\]\nFrom the above we conclude the following lemma.\n\n\\begin{lemma} \\label{lem:strongConcavity}\n The objective function $\\bar{\\ell}$ is \n $\\left(\\frac{\\alpha}{\\sigma^2 \\cdot k}\\right)$-strongly-concave.\n\\end{lemma}\n\n\\subsection{Approximate Stochastic Gradient Estimation} \\label{sec:Langevin}\nIn this section we describe an algorithm for sampling approximate stochastic \nestimates of the gradient of our objective function $\\bar{\\ell}$. Our algorithm is based on \nprojected Langevin dynamics. We start from the expression \\eqref{eq:ll_grad_i:main} for the gradient of\n$\\bar{\\ell}$: 
\n\\begin{align}\n \\label{eq:ll_grad_i:main:2}\n \\nabla_{\\vec{w}_j} \\overline{\\ell}(\\vec{W}) &= \n \\frac{1}{n\\sigma^2} \\sum_{i = 1}^n \\Exp_{(y, j_*)} \\left[ \n \\bm{1}_{j = j_*}\n \\cdot y +\n \\bm{1}_{j \\neq j_*} \\cdot \n \\mathbb{E}_{\\vec{z}_{-j_*} \\sim \\mathcal{N}((\\vec{W}^\\top x^{(i)})_{-j_*}, \\sigma^2 \\bm{I}_{k-1})}\\left[z_j|\\bm{z}_{-j_*} \\in C_{j_*}(y)\\right]\n - \\vec{w}_{j}^\\top \\vec{x}^{(i)}\n \\right] \\vec{x}^{(i)},\n\\end{align}\nwhere we recall that $\\mathbb{E}_{(y,j_*)}$ denotes the expectation over the pair\n$(y^{(i)}, j_*^{(i)})$ conditioned on $\\vec{x}^{(i)}$, under the true parameters $\\vec{W}^*$.\nTo obtain stochastic gradient estimates, we will replace these expectations \nwith their \ncorresponding observed values, as we will see below. The more difficult step of\nthe gradient estimation process is \nsampling the conditional expectation term of \\eqref{eq:ll_grad_i:main:2}, for which \nit suffices to be able to sample the truncated normal distribution \n\\begin{align}\n \\mathcal{N}((\\vec{W}^\\top x^{(i)})_{-j_*}, \\sigma^2 \\bm{I}_{k-1};\\ C_{j_*}(y)) \n \\label{eq:difficultTermToSample}\n\\end{align}\ngiven a set of parameters $\\vec{W}$, a vector of covariates \n$\\vec{x}^{(i)}$, and a pair $(y, j_*)$ drawn from \n$\\mathcal{D}(\\vec{x}^{(i)}; \\vec{W}^*)$. The simplest way to get a sample from \n\\eqref{eq:difficultTermToSample} is to first sample from \n$\\mathcal{N}((\\vec{W}^\\top x^{(i)})_{-j_*}, \\sigma^2 \\bm{I}_{k-1}),$ \nand then\napply rejection sampling until we get a sample inside $C_{j_*}(y)$.\nThis is feasible\ninformation-theoretically, but it might require a lot of computational steps if \nthe survival probability of $C_{j_*}(y)$ is small. In particular, the rejection \nsampling might require time that is exponential in the norm of the $\\vec{w}_j$'s.\nFor this reason, if we require computational efficiency, we need to apply a more \nelaborate technique. In particular, we use projected Langevin \ndynamics. Let $K = C_{j_*}(y) \\cap \\mathcal{B}(R)$ for some sufficiently large \nconstant $R$, and let $\\vec{\\mu}_{-j_*} = (\\vec{W}^\\top \\vec{x}^{(i)})_{-j_*}$ \nfor the rest of this section. The iteration of the projected Langevin algorithm for\nsampling from this distribution \nis the following \\cite{bubecksampling}:\n\\begin{align}\n \\vec{z}^{(t + 1)} = \\Pi_K\\left(\\vec{z}^{(t)} - \\frac{\\gamma}{2 \\cdot \\sigma^2} \n (\\vec{z}^{(t)} - \\vec{\\mu}_{-j_*}) + \\sqrt{\\gamma} \\cdot \\vec{\\xi}^{(t)} \\right) \\label{eq:projectedLangevin}\n\\end{align}\nwhere $\\vec{\\xi}^{(1)}, \\vec{\\xi}^{(2)}, \\dots$ are i.i.d. samples from the \nstandard normal distribution in $(k - 1)$ dimensions. 
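\n\nAs an illustration, a minimal sketch of iteration \\eqref{eq:projectedLangevin} follows (our own code, not the authors'; composing the box and ball projections is a heuristic stand-in for the exact projection oracle of Assumption \\ref{asp:known:3}(ii), for which, e.g., Dykstra's algorithm could be substituted).\n\\begin{verbatim}\nimport numpy as np\n\ndef projected_langevin(mu, sigma, proj_K, n_steps, gamma, rng=None):\n    # Iterates z <- Proj_K(z - (gamma\/(2 sigma^2))(z - mu) + sqrt(gamma) xi),\n    # with xi ~ N(0, I), starting from z^(0) = Proj_K(0).\n    rng = np.random.default_rng() if rng is None else rng\n    z = proj_K(np.zeros_like(mu))\n    for _ in range(n_steps):\n        xi = rng.standard_normal(mu.shape)\n        z = proj_K(z - gamma \/ (2 * sigma**2) * (z - mu)\n                   + np.sqrt(gamma) * xi)\n    return z\n\ndef proj_box_ball(y, R):\n    # For the arg-max rule, C_{j*}(y) is the box (-inf, y]^{k-1}; composing\n    # the box and ball projections is an inexpensive (inexact) heuristic.\n    def proj(z):\n        z = np.minimum(z, y)\n        norm = np.linalg.norm(z)\n        return z if norm <= R else z * (R \/ norm)\n    return proj\n\\end{verbatim}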
The next lemma describes \nthe sampling guarantees of the Langevin iteration \\eqref{eq:projectedLangevin}.\n\n\\begin{lemma} \\label{lem:samp_trunc}\n Let $L \\subset \\mathbb{R}^{(k - 1)}$ be a convex set with \n $\\mathbb{P}_{\\mathcal{N}(\\vec{0}, \\sigma^2 \\vec{I})}(L) \\geq \\alpha$ for $\\alpha > 0$.\n Then, for any $\\vec{\\mu}_{-j_*} \\in \\mathbb{R}^{k - 1}$ and $\\epsilon \\in (0, 1\/2]$, the \n projected Langevin sampling algorithm \\eqref{eq:projectedLangevin} with \n $K = L \\cap \\mathcal{B}(R)$ for some appropriate value $R$, and initialized\n with $\\bm{z}^{(0)} = \\Pi_K(\\bm{0}_{k-1})$, generates a random\n variable $\\widehat{\\vec{X}} = \\vec{z}^{(m)}$ satisfying \n \\begin{equation*}\n \\mathrm{TV} \\left(\\widehat{\\vec{X}}, \\mathcal{N}(\\vec{\\mu}_{-j_*}, \\sigma^2 \\cdot \\vec{I}_{k-1}, K)\\right) \\leq\n \\epsilon\n \\end{equation*}\n assuming that the number of steps $m$ is larger than \n $\\mathrm{poly} (k, \\norm{\\vec{w}}, 1 \/ \\epsilon, 1 \/ \\alpha, \\sigma^2, 1\/\\sigma^2)$.\n\\end{lemma}\n\\begin{proof}\nThe proof of this lemma can be found in Appendix \\ref{app:lem:samp_trunc}.\n\\end{proof}\n\\smallskip\n\nNow that we can sample from the distribution \n\\eqref{eq:difficultTermToSample}, we can move to approximately estimating a \nstochastic gradient of $\\overline{\\ell}$. First, we sample an index $i \\in [n]$\nuniformly at random, and we fix the corresponding $\\vec{x}^{(i)}$. Then, we use \nthe $i$-th sample from the true model to substitute in the pair $(y, j_*)$. \nFinally, we use the Langevin algorithm that we described above to sample from \n\\eqref{eq:difficultTermToSample}. Before bounding the bias of our \nestimator, there is one more issue to take care of:\nfor every $\\vec{x}^{(i)}$ we only have one sample of the pair $(y, j_*)$. Hence,\nwe need to make sure that, while we pick \nthe indices $i \\in [n]$ uniformly at random during the execution of the algorithm, we never pick the same \nindex $i$ twice. To ensure this, we require more samples than the\nnumber of iterations we run. \n\nLet $n$ be the total number of samples that we have and $T$ be the total number of\nsamples that we need for our PSGD algorithm. A straightforward birthday paradox\ncalculation \nyields that the probability \nof sampling the same $i$ twice is at most $2 T^2\/n$. Thus,\nif we pick $n \\ge 2 T^2\/\\zeta$, then the collision probability \nduring the execution of the PSGD algorithm is at most $\\zeta$.\n\\smallskip\n\nWe are now ready to put everything together in an algorithm that \ndescribes our combined estimation procedure. 
The following lemma, whose proof can be\nfound in Appendix \\ref{app:lem:samplingGradient}, describes the performance\nguarantees of this estimation algorithm.\n\n\\begin{algorithm}[t]\n \\caption{Approximate Stochastic Gradient Estimation Algorithm}\\label{alg:langevin}\n \\begin{algorithmic}[1]\n \\Procedure{EstimateGradient}{$\\vec{W}$}\n \\State sample $i$ uniformly from $[n]$\n \\State $K \\gets C_{j_*^{(i)}}(y^{(i)}) \\cap \\mathcal{B}(R)$\n \\State $\\vec{\\mu} \\gets (\\vec{W}^\\top \\vec{x}^{(i)})_{-j_*^{(i)}}$\n \\State $\\vec{z}^{(0)} \\gets \\Pi_K(\\vec{0})$\n \\For{$t = 1,\\ldots, m$}\n \\State sample $\\vec{\\xi}^{(t-1)}$ from $\\mathcal{N}(\\vec{0}, \\vec{I})$\n \\State $\\vec{z}^{(t)} \\gets \\Pi_{K}\\left(\\vec{z}^{(t-1)} - \n \\frac{\\gamma}{2 \\cdot \\sigma^2} (\\vec{z}^{(t-1)} - \\vec{\\mu}) + \n \\sqrt{\\gamma} \\cdot \\vec{\\xi}^{(t-1)} \\right)$\n \\EndFor\n \\For{$j = 1,\\ldots, k$}\n \\State $\\vec{g}_j \\gets \\frac{1}{\\sigma^2} \\left(\n \\bm{1}_{j_*^{(i)} = j} \\cdot y^{(i)} + \\bm{1}_{j_*^{(i)} \\neq j} \\cdot\n z^{(m)}_j\n - \\vec{w}_j^\\top \\vec{x}^{(i)}\n \\right) \\cdot \\vec{x}^{(i)}$\n \\EndFor \n \\State \\Return $\\vec{g} = (\\vec{g}_1, \\dots, \\vec{g}_k)$\n \\EndProcedure\n \\end{algorithmic}\n\\end{algorithm}\n\n\\begin{algorithm}[t]\n \\caption{Projected Stochastic Gradient Descent}\\label{alg:psgd}\n \\begin{algorithmic}[1]\n \\Procedure{PSGD}{}\n \\State $\\vec{W}^{(0)} \\gets 0$\n \\For{$t = 1, \\ldots, T$}\n \\State $\\eta_t \\gets 1\/(\\lambda \\cdot t)$\n \\State $\\vec{g}^{(t)} \\gets \\textsc{EstimateGradient}(\\vec{W}^{(t)})$\n \\State $\\vec{w}^{(t)}_j \\gets \\Pi_{\\mathcal{K}}\\lr{\\vec{w}^{(t)}_j \n - \\eta_t \\cdot \\vec{g}^{(t)}_j}$ for all $j \\in [k]$\n \\EndFor\n \\State \\Return $\\bar{\\vec{W}} \\triangleq \\frac{1}{T} \\sum_{t = 1}^{T} \\left(\\vec{w}^{(t)}_j\\right)_{j = 1}^k$\n \\EndProcedure\n \\end{algorithmic}\n\\end{algorithm}\n\n\\begin{lemma} \\label{lem:samplingGradient}\n Let $\\vec{g}^{(1)}, \\dots, \\vec{g}^{(T)}$ be a sequence of outputs of \n Algorithm \\ref{alg:langevin} when used with input \n $\\vec{W}^{(1)}, \\dots, \\vec{W}^{(T)}$, where\n $\\norm{\\vec{W}^{(p)}}_2 \\le k \\cdot B$ and $\\vec{W}^{(p)}$ can\n depend on $\\vec{W}^{(p - 1)}$ and $\\vec{g}^{(p - 1)}$. If $n \\ge 2 T^2\/\\zeta$,\n the hyperparameters $\\eta$ and $R$ satisfy \n $R, 1\/\\eta \\le \\mathrm{poly} (k, B, 1 \/ \\beta, 1 \/ \\alpha, \\sigma^2, 1\/\\sigma^2)$, and $m \\ge \\mathrm{poly} (k, B, 1 \/ \\beta, 1 \/ \\alpha, \\sigma^2, 1\/\\sigma^2)$,\n then with probability at least $1 - \\zeta$ it holds that for every $p \\in [T]$\n \\begin{align}\n \\norm{\\Exp \\left[\\vec{g}^{(p)} \\mid \\vec{W}^{(p - 1)}, \\vec{g}^{(p - 1)} \\right] - \\nabla \\bar{\\ell}(\\vec{W}^{(p)})}_2 \\le \\beta. \\label{eq:boundOnBias}\n \\end{align}\n\\end{lemma}\n\n\\subsection{Stochastic Gradient Descent with Biased Gradients}\n\\label{sec:biased_sgd}\n\n In the previous section we showed that we can compute approximate stochastic \ngradients of the strongly-concave function $\\bar{\\ell}$. In this section we show \nthat this is enough to approximately optimize $\\bar{\\ell}$ using projected \ngradient descent. 
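\n\nBefore stating the convergence guarantee, the following minimal driver sketch (our own illustration) shows how Algorithms \\ref{alg:langevin} and \\ref{alg:psgd} fit together; \\texttt{estimate\\_gradient} stands in for Algorithm \\ref{alg:langevin} and \\texttt{proj\\_W} for the projection onto the feasible set $\\mathcal{K}$.\n\\begin{verbatim}\nimport numpy as np\n\ndef psgd(d, k, estimate_gradient, proj_W, lam, T, rng=None):\n    # Projected SGD with step size eta_t = 1\/(lam * t), returning the\n    # average iterate. Since the objective is concave and the convergence\n    # analysis is stated for convex minimization, we take an ascent step\n    # (equivalently, descent on the negated objective).\n    rng = np.random.default_rng() if rng is None else rng\n    W = np.zeros((d, k))\n    avg = np.zeros_like(W)\n    for t in range(1, T + 1):\n        g = estimate_gradient(W, rng)          # approximate gradient, (d, k)\n        W = proj_W(W + (1.0 \/ (lam * t)) * g)  # projected ascent step\n        avg += W \/ T\n    return avg\n\\end{verbatim}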
The following lemma gives the convergence guarantee that we need.\n\n\n\\begin{lemma}%\n \\label{lemma:sgd}\n Let $f: \\mathbb{R}^k \\to \\mathbb{R}$ be a convex function, $\\mathcal{K} \\subset\n \\mathbb{R}^k$ a convex set, and fix an initial estimate $\\vec{x}^{(0)} \\in \\mathcal{K}$.\n Now, let $\\vec{x}^{(1)}, \\ldots, \\vec{x}^{(T)}$ be the iterates generated by \n running $T$\n steps of projected SGD using gradient estimates \n $\\vec{g}^{(1)}, \\ldots, \\vec{g}^{(T)}$\n satisfying $\\mathbb{E}[\\vec{g}^{(i)} | \\vec{x}^{(i-1)}] = \n \\nabla f(\\vec{x}^{(i-1)}) + \\vec{b}^{(i)}\n $. Let $\\vec{x}_* = \\arg\\min_{\\vec{x} \\in\n \\mathcal{K}} f(\\vec{x})$ be a minimizer of $f$. Then, if we assume:\n \\begin{enumerate}\n \\item[(i)] \\textbf{Bounded step variance:} \n $\\mathbb{E}\\left[\\|\\vec{g}^{(i)}\\|_2^2 \\right] \\leq \\rho^2$,\n \\item[(ii)] \\textbf{Strong convexity:} \n $f$ is $\\lambda$-strongly convex, and \n \\item[(iii)] \\textbf{Bounded gradient bias:}\n $\\|\\vec{b}^{(i)}\\| \\leq \n \\frac{\\rho^2}{2\\cdot \\lambda\\cdot \\mathrm{diam}(\\mathcal{K}) \\cdot i},$\n \\end{enumerate}\n then the average iterate $\\hat{\\vec{x}} = \\frac{1}{T} \\sum_{t=1}^T \\vec{x}^{(t)}$\n satisfies \n $\\mathbb{E}[f(\\hat{\\vec{x}}) - f(\\vec{x}_*)] \n \\leq \\frac{\\rho^2}{\\lambda T}(1\n + \\log(T))$. \n\\end{lemma}\n\\begin{proof}\n See Appendix \\ref{app:bias_sgd_proof}.\n\\end{proof}\n\n\\subsection{Proof of Theorem \\ref{thm:knownIndex}}\n\\label{sec:known:main:proof}\nWe are now ready to combine the results of the previous sections into a recovery\nguarantee for $\\bm{W}^* = [\\vec{w}^*_j]_{j=1}^k$. In particular, we will apply Lemma \\ref{lemma:sgd}\nto show that Algorithm \\ref{alg:psgd} converges to an average iterate $\\hat{\\vec{W}}$ \nthat is close to $\\bm{W}^*$.\nFirst, observe that the gradient estimates $\\vec{v}$ produced by Algorithm \\ref{alg:langevin}\nare bounded in expected squared norm by\n\\begin{align*}\n \\mathbb{E}\\left[\\|v\\|_2^2\\right] \\leq \\mathbb{E}\\left[\n \\sum_{j=1}^k \\left\\| \\nabla_{\\vec{w}_j} \\ell(\\bm{W}; \\vec{x}, y, j_*) \\right\\|^2\n \\right] + \\beta,\n\\end{align*}\nwhere $\\beta$ is as in Lemma \\ref{lem:samplingGradient}.\nOur bounds on the norms of the weights and covariates directly imply \n\\[\n \\mathbb{E}\\left[\\|v\\|_2^2\\right] \\in O\\lr{k\\cdot \\text{poly}(B, C)} + \\beta.\n\\] \nNext, Lemma \\ref{lem:strongConcavity} guarantees that $f = -\\bar{\\ell}$ in Lemma \\ref{lemma:sgd} is \nstrongly convex with $\\lambda = \\alpha\/(\\sigma^2 \\cdot k)$.\nFinally, Lemma \\ref{lem:samplingGradient} ensures access to gradients with appropriately\nbounded variance and bias (i.e., satisfying assumptions (i) and (iii) in Lemma \\ref{lemma:sgd}) \nin $\\text{poly}(k, B, C, T, 1\/\\alpha, \\sigma^2, 1\/\\sigma^2)$-time.\nWe are thus free to apply Lemma \\ref{lemma:sgd} to our\nproblem---after averaging $T$ steps of projected stochastic gradient\ndescent, we will find $\\bm{W} = \\{\\vec{w}_j\\}_{j=1}^k$ such that\n\\begin{align}\n \\label{eq:sgd_bound}\n \\overline{\\ell}(\\bm{W}^*) - \\mathbb{E}[\\overline{\\ell}(\\bm{W})] \n \\leq \\frac{\\sigma^2 \\cdot k^2\\cdot \\mathrm{poly}(B, C)}{2\\cdot \\alpha\\cdot T} (1 + \\log(T)).\n\\end{align}\nAn application of Markov's inequality shows that, with probability at least $1 -\n\\delta$, \n\\[\n \\overline{\\ell}(\\bm{W}^*) - \\overline{\\ell}(\\bm{W})\n \\leq \\frac{\\sigma^2 \\cdot k^2\\cdot \\mathrm{poly}(B, C, 1\/\\delta)}{2\\cdot \\alpha\\cdot T} (1 + \\log(T)).\n\\]\nThus, we can condition on the event in \\eqref{eq:sgd_bound} while only losing a\nfactor of $1 - \\delta$ in 
success probability. Finally, a parameter-space\nrecovery bound follows from another application of convexity:\n\\[\n \\left\\|\\bm{W} - \\bm{W}^* \\right\\|_F\n \\leq \\frac{\\sigma^4 \\cdot k^3\\cdot \\mathrm{poly}(B, C, 1\/\\delta)}{\\alpha^2\\cdot T} (1 + \\log(T)).\n\\]\n\n\n\n\n\n\n\n\n\\section{Introduction}\n \\label{sec:intro}\n \\input{intro}\n \n \\section{Model and Main Results} \\label{sec:model}\n \\input{setup}\n \n\n\n \\section{Parameter Estimation for the Known-Index Setting} \\label{sec:known}\n \\input{known_index}\n\n \\section{Parameter Estimation for the Unknown-index Setting} \\label{sec:unknown}\n \\input{unknown_index}\n \n\n\n\\section{Acknowledgements}\nThis work is supported by NSF Awards CCF-1901292, DMS-2022448 and\nDMS2134108, a Simons Investigator Award, the Simons Collaboration on the Theory\nof Algorithmic Fairness, a DSTA grant, the DOE PhILMs project\n(DE-AC05-76RL01830), an Open Philanthropy AI Fellowship and a Microsoft\nResearch-BAIR Open Research Commons grant. \n\n\\printbibliography\n\n\\clearpage\n\n\n\n\\subsection{Known-Index Setting} \\label{sec:model:knownIndex}\n\n We start with the definition of a self-selection rule that is fundamental in \nthe modeling of the linear regression problem with self-selection bias in the \nknown-index setting.\n\n\\begin{definition}[Self-Selection Rule] \\label{def:selfSelectionRule}\n A \\textit{self-selection rule} is a function \n $S: \\mathbb{R}^k \\to [k]$. We assume throughout this work that we have query\n access to $S$, i.e. for every $\\bm{y} \\in \\mathbb{R}^k$ there is an oracle\n that outputs $S(\\bm{y})$.\n We also define a \\underline{slice} $S_j(a)$ of $S$ as follows:\n \\[\n S_j: \\mathbb{R} \\to \\mathbb{R}^{k} \\qquad \n S_j(a) \\triangleq \\{\\bm{y} \\in \\mathbb{R}^k: S(\\bm{y}) = j, \\bm{y}_j = a\\}.\n \\]\n\\end{definition}\n\n The first setting that we consider is the \\textit{known-index setting}, where\nthe observed data for each covariate include the response variable, as well as\nthe index of the corresponding regressor.\n\n\\begin{definition}[Self-Selection with Observed Index]\n \\label{defn:index_observed}\n Self-selection with observed index is parameterized by an unknown set of\n weight vectors $\\vec{w}^*_1,\\ldots \\vec{w}_k^* \\in \\mathbb{R}^d$, a known \n variance $\\sigma > 0$,\n and a self-selection rule\n $S: \\mathbb{R}^k \\rightarrow [k]$. For $i \\in [n]$, an observation\n $(\\vec{x}^{(i)}, y^{(i)}, j_*^{(i)})$ in this model is a triplet, comprising\n a feature vector $\\vec{x}^{(i)}$, and a pair $(y^{(i)},j_*^{(i)})$ sampled as\n follows conditioning on $\\vec{x}^{(i)}$: \n \\begin{enumerate}\n \\item[(1)] Sample the latent variables $y_{j}^{(i)} \\sim \\mathcal{N}(\\vec{w}_j^{* \\top} \\vec{x}^{(i)},\n \\sigma^2)$ for each $j \\in [k]$.\n \\item[(2)] Reveal the observation index $j_*^{(i)} =\n S(y_{1}^{(i)},\\ldots,y_{k}^{(i)})$ \n and the response variable $y^{(i)} = y_{j_*^{(i)}}^{(i)}$\n \\end{enumerate}\n For a fixed $\\vec{x}$ and $\\vec{W}^*=(\\vec{w}^*_1,\\ldots \\vec{w}_k^*)$, we use $\\mathcal{D}(\\vec{x}; \\vec{W}^*)$ to denote the\n probability distribution of the pair $(i, y)$ sampled according to Steps (1) and (2) above. For example, if \n $S(y_{1}^{(i)},\\ldots,y_{k}^{(i)}) = \\arg\\max_{j \\in [k]} y_{j}^{(i)}$, then under\n this model we observe only the largest $y_{j}^{(i)}$ and its index. 
Our goal is\n to obtain an accurate estimate $\\hat{\\vec{w}}_j$ for each $\\vec{w}^*_j$ given\n only samples from the above model.\n\\end{definition}\n\n\\noindent In most of this paper we are mainly concerned with estimating \nthe weights $\\vec{w_1^*}$, $\\ldots$, $\\vec{w_k^*}$ from observation of the\ncovariates and the {\\em maximal} response variable \n\\[y = \\max_{j \\in [k]}\\ \\{\\vec{w_j}^\\top \\vec{x} + \\bm{\\varepsilon}_j\\}, \\qquad \\text{\nwhere } \\bm{\\varepsilon} \\sim \\mathcal{N}(0, \\sigma^2 \\cdot \\bm{I}_{k}). \\] \nIn the observed-index setting, however, it turns out that our efficient estimation\ncan be applied to a much larger set of selection functions $S(\\{y_j\\})$, which we\ncall {\\em convex-inducing}.\n\n\\begin{definition}[Convex-inducing Self-Selection Function]\n \\label{defn:convex-inducing}\n We call a self-selection function $S: \\mathbb{R}^k \\to [k]$ {\\em\n convex-inducing} if, for each $j \\in [k]$ and $a \\in \\mathbb{R}$, there\n exists a convex set $C_j(a)$ such that \n \\[\n S_{j}(a) = \\bm{1}\\{\\bm{y}_{-j} \\in C_j(a)\\},\n \\]\n where $S_j(a)$\n are the slices defined in Definition \\ref{def:selfSelectionRule}. \n\\end{definition}\n\nNotably, setting $S(\\bm{y}) = \\arg\\max_{j \\in [k]} \\bm{y}_j$ recovers the maximum-response\nobservation model, and this choice of $S(\\cdot)$ also satisfies Definition\n\\ref{defn:convex-inducing} with $C_j(a) = (-\\infty, a]^{k-1}$. The definition\nalso allows us to capture cases beyond the maximum; for example, convex-inducing\nfunctions also include $S(\\bm{y}) = \\arg\\max_{j \\in [k]} f(\\bm{y}_j)$ for any\nmonotonic function $f$.\n\nIn order for our estimation procedure to succeed we use the following assumptions\non the feature and parameter vectors which are classical even in many standard \nlinear regression instances.\n\n\\begin{assumption}[Feature and Parameter Vectors] \\label{asp:known:1}\n For the set of feature vectors $\\{\\vec{x}^{(i)}\\}_{i = 1}^n$ that we have\n observed we assume that:\n \\begin{align} \\label{eq:asp:known}\n \\norm{\\vec{x}^{(i)}}_2 \\le C \\quad \\text{for all $i \\in [n] \\quad$ and } \\quad \\quad\n \\frac{1}{n} \\sum_{i = 1}^n \\vec{x}^{(i)} \\vec{x}^{(i) \\top} \\succeq \\mathbf{I}.\n \\end{align}\n For the true parameter vectors $\\vec{w}_1^*$, $\\ldots$, $\\vec{w}_k^*$ we\n assume that $\\norm{\\vec{w}_j^*}_2 \\le B$.\n\\end{assumption}\n\nAlthough our theorem allows for a wide range of self-selection rules, we need some\nadditional assumptions that allow the recovery of every parameter vector \n$\\vec{w}_j$. To see why this is needed imagine the setting where the \nself-selection rule $S$ is the $\\argmax$ and for some $j$ the coordinates of\n$\\vec{w}_j$ are extremely small. In this case it is impossible to hope to estimate\n$\\vec{w}_j$ since there is a huge probability that we do not even observe one \nsample of the form $y = \\vec{x}^T \\vec{w}_j + \\varepsilon_j$. 
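\n\nAs a small illustration of this failure mode (our own simulation, not from the paper), one can check empirically how rarely a dominated component is selected under the $\\argmax$ rule:\n\\begin{verbatim}\nimport numpy as np\n\n# When one weight vector is much smaller than the rest, its index is\n# selected only rarely, so we observe (almost) no samples from it.\nrng = np.random.default_rng(0)\nd, k, n = 10, 3, 100_000\nW = rng.standard_normal((d, k))\nW[:, 0] *= 0.01                    # make the first weight vector tiny\nX = rng.standard_normal((n, d))\nY = X @ W + rng.standard_normal((n, k))\nprint(np.bincount(Y.argmax(axis=1), minlength=k) \/ n)\n# The first component is typically selected far less than 1\/k of the time.\n\\end{verbatim}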
This motivates the\nfollowing assumption.\n\n\\begin{assumption}[Survival Probability -- Self-Selection Rule] \\label{asp:known:2}\n For every sample $(\\vec{x}, y, j_*)$ that we have observed,\n the following properties hold: \n \\begin{itemize}\n \\item[(i)] for every sample $\\vec{x}^{(i)}$ and every $j \\in [k]$, the\n probability that $j_* = j$ is at least $\\alpha \/ k$ for some constant $\\alpha > 0$;\n \\item[(ii)] the mass of the set $C_{j_*}(y)$ with respect to \n $\\mathcal{N}((\\vec{W}^{* \\top} \\vec{x})_{-j_*}, \\sigma^2 \\cdot \\vec{I}_{k-1})$\n is at least $\\alpha$.\n \\end{itemize}\n\\end{assumption}\n\nFinally, we need to assume oracle access to a self-selection rule $S(\\cdot)$\nthat is convex-inducing, in the same sense introduced in Definition\n\\ref{defn:convex-inducing}: \n\\begin{assumption}[Self-Selection Rule] \\label{asp:known:3}\n We assume that the self-selection rule $S: \\mathbb{R}^k \\to [k]$ is\n convex-inducing, and that for every $j \\in [k]$ and $y \\in \\mathbb{R}$ we have access\n to the following: \n \\begin{itemize}\n \\item[(i)] a membership oracle for $S$, i.e., for every \n $\\vec{y} \\in \\mathbb{R}^{k}$ and $j \\in [k]$, we can answer whether $S(\\vec{y}) = j$\n or not, and\n \\item[(ii)] a projection oracle for the convex sets $C_j(y)$ that correspond\n to slices $S_j(y)$ of the self-selection rule, i.e., for every\n $\\vec{z} \\in \\mathbb{R}^{k-1}$, we can efficiently\n compute the projection $\\vec{z}' = \\argmin_{\\vec{s} \\in C_j(y)} \\norm{\\vec{z} - \\vec{s}}$.\n \\end{itemize}\n\\end{assumption}\n\nUsing the above definitions and assumptions, we are now ready to state our main \ntheorem for the known-index setting.\n\n\\begin{theorem}[Known-Index Estimation] \\label{thm:knownIndex}\n Let $(\\vec{x}^{(i)}, y^{(i)}, j_*^{(i)})_{i = 1}^n$ be $n$ observations from \n a self-selection setting with $k$ linear models $\\vec{w}^*_1$, $\\dots$, \n $\\vec{w}^*_k$ as described in Definition \\ref{defn:index_observed}. Under Assumptions \\ref{asp:known:1}, \\ref{asp:known:2}(i), and \\ref{asp:known:3}(i),\n there exists an estimation algorithm that outputs $\\hat{\\vec{w}}_1$, $\\dots$,\n $\\hat{\\vec{w}}_k$ such that with probability at least $0.99$, for every \n $j \\in [k]$\n \\[ \\norm{\\hat{\\vec{w}}_j - \\vec{w}_j^*}_2^2 \\le \\mathrm{poly}(\\sigma, k, 1\/\\alpha, B, C) \\cdot \\frac{\\log(n)}{n}.\\] \n If we additionally assume Assumptions \\ref{asp:known:2}(ii) and \n \\ref{asp:known:3}(ii), then the running time of the algorithm is \n $\\mathrm{poly}(n, d, k, 1\/\\alpha, B, C, \\sigma, 1\/\\sigma)$.\n\\end{theorem}\n\n In Section \\ref{sec:known}, we explain our algorithm for proving Theorem \n\\ref{thm:knownIndex} and describe the main ideas and techniques of the proof.\nTo construct an efficient algorithm, we use an interesting combination of \nprojected gradient descent and the Langevin algorithm, after we develop an \nappropriate objective function.\n\n\n\n\n\n\n\n\\subsection{Unknown-Index Setting} \\label{sec:model:unknownIndex}\n\nWe next consider the more challenging {\\em unknown-index} setting, where we have\nsample access to the response variable, but do not observe the index of the\ncorresponding weight vector. 
In this more challenging setting, we make a few \nadditional assumptions on the structure of the problem, namely that the\ncovariates $\\vec{x}^{(i)}$ are drawn from an mean-zero identity-covariance\nGaussian distribution (rather than being arbitrary); that the noise terms\n$\\vec{\\varepsilon}^{(i)}$ are also identity-covariance (rather than $\\sigma\\cdot\n\\bm{I}_k$ for $\\sigma > 0$); and that the self-selection rule is the maximum\nresponse rule $S(\\bm{y}) = \\arg\\max_{j \\in [k]} \\bm{y}_j$ (rather than an arbitrary\nconvex-inducing rule).\n\n\\begin{definition}[Self-Selection with Unknown Index]\n \\label{defn:index_unobserved}\n Just as for Definition \\ref{defn:index_observed}, we have a set of weight\n vectors $\\vec{w}^*_1,\\ldots \\vec{w}^*_k \\in \\mathbb{R}^d$. \n We assume that vectors $\\vec{x}^{(i)}$ are drawn i.i.d. from the standard\n multivariate normal distribution $\\mathcal{N}(\\vec{0}, \\bm{I}_d)$. For\n each sampled covariate $\\vec{x}^{(i)}$, we:\n \\begin{enumerate}\n \\item[(1)] Sample $y_{j}^{(i)} \\sim \\mathcal{N}(\\vec{w_j}^{* \\top}\n \\vec{x}^{(i)}, 1)$ for each $j \\in [k]$.\n \\item[(2)] Compute the maximum response $y^{(i)} = \\max_{j \\in [k]} y_{j}^{(i)}$.\n \\item[(3)] Observe only the covariate-response pair $(\\vec{x}^{(i)},\n y^{(i)})$: in particular, we do not observe the index of the maximum\n response. \n \\end{enumerate}\n Our goal is to obtain an accurate estimate $\\hat{\\vec{w}}_j$ for each\n $\\vec{w}^*_j$ given only samples from the above model.\n\\end{definition}\n\nThis model resembles mixtures of linear regressions, with the key\ndifference being that in the latter, the index of the weight vector used for\neach covariate is sampled i.i.d. from a categorical distribution with fixed\nmixture probability. In contrast, here the probability of observing a response\nfrom a given weight vector depends on both the covariate and the sampled noise\n$\\eta_i$. \n\\smallskip\n\n In this model it is not even clear that estimation of $\\vec{w}^*_j$'s is\n possible: \nindeed, our first result (Theorem \\ref{thm:identifiability_no_index}) is an\ninformation-theoretic one that shows (infinite-sample) identifiability for\nunknown-index model. \nTo show this information-theoretic result, we use a novel argument that might be\nof independent interest---we believe that it can be used to show the \nidentifiability of other statistical problems with self-selection bias as well.\nThe proof of the theorem below is presented in Section \n\\ref{ssec:identifiability}.\n\n\\begin{restatable}{theorem}{unknownidentify} \\label{thm:identifiability_no_index}\n Let $\\vec{W}^* = [\\vec{w}^*_j]_{j = 1}^k \\in \\mathbb{R}^{d \\times k}$ and\n $\\Phi_{\\vec{W}^*}$ be the distribution function of the pairs \n $(\\vec{x}^{(i)}, y^{(i)})$ associated with the self-selection model with\n unknown indices as described in Definition~\\ref{defn:index_unobserved}. 
\n Then, there exists a function $f$ satisfying:\n \\begin{equation*}\n \\forall~\\vec{W}^* = [\\vec{w}^*_j]_{j = 1}^k \\subset \\mathbb{R}^d ~ \\text{it holds that} ~ f(\\Phi_{\\vec{W}^*}) = \\vec{W}^*.\n \\end{equation*}\n\\end{restatable}\n\n\\noindent In order to transform the above information-theoretic result to a \nfinite-sample and finite-time bound in the unknown-index case, we need the \nfollowing separability assumption among the $\\vec{w}^*_j$'s.\n\n\\begin{assumption}[Separability Assumption] %\n\\label{as:no_index}\n For some known real values $B$, $\\Delta$ it holds that:\n\\begin{equation*}\n \\forall i \\neq j: \\frac{\\abs*{{\\vec{w}_i^{* \\top} \\vec{w}^*_j}}}{\\norm{\\vec{w}^*_j}} + \\Delta \\leq \\norm{\\vec{w}^*_j} \\text{ and } \\max_{j \\in [k]} \\norm{\\vec{w}^*_j} \\leq B.\n \\tag{A} \\label{eq:asump_no_index}\n \\end{equation*}\n\\end{assumption}\n\n\\begin{theorem}[Unknown-Index Estimation]\n \\label{thm:no_index}\n Let $(\\vec{x}^{(i)}, y^{(i)})_{i = 1}^n$ be $n$ observations from a\n self-selection setting with $k$ linear models $\\vec{w}^*_1$, $\\dots$,\n $\\vec{w}^*_k$ as described in the unknown-index setting in Definition \n \\ref{defn:index_unobserved}. If we assume Assumption \\ref{asp:known:2}(i) and \n Assumption \\ref{as:no_index}, then there exists an estimation algorithm that\n outputs $\\hat{\\vec{w}}_1$, $\\dots$, $\\hat{\\vec{w}}_k$ such that with\n probability at least $0.99$, for every $j \\in [k]$\n \\[ \\norm{\\hat{\\vec{w}}_j - \\vec{w}_j^*}_2 \\le \\varepsilon,\\] \n as long as \n $n \\geq \\mathrm{poly} (d) \\cdot \\exp \\blr{\\mathrm{poly} (B \/ \\varepsilon) \\cdot \\widetilde{O}(k)}$ and\n $\\varepsilon \\le \\Delta\/16$. \n Furthermore, the running time of the algorithm is at most \n $n \\cdot \\exp \\blr{\\mathrm{poly} (B \/ \\varepsilon) \\cdot \\widetilde{O} (k)}$. \n\\end{theorem}\n\n Our estimation algorithm above only makes sense for $\\varepsilon = 1\/\\mathrm{poly}(k)$ and \nresembles the corresponding results of mixtures of linear regressions \n\\cite{li2018learning, chen2020learning}. What we are missing for this model is a \nlocal analysis corresponding to \\cite{kwon2020converges} that will enable us to \nget error $\\varepsilon$ with running time and number of samples that are polynomial in\n$1\/\\varepsilon$. In the special, yet very relevant case of $k = 2$, we are able to\nimprove our aforementioned result and show that a moment-based algorithm\nefficiently recovers $\\vec{w}_1$ and $\\vec{w}_2$ up to $\\varepsilon$ with running time\nand number of samples that are polynomial in $1\/\\varepsilon$, under the following \nslightly different separability assumption.\n\n\\begin{assumption}[Separability Assumption -- $k = 2$] %\n\\label{as:no_index:2}\n For some \\emph{known} $B, \\Delta > 0$ it holds that\n \\begin{equation*}\n \\norm{\\vec{w}_1^* - \\vec{w}_2^*}_2 \\ge \\Delta \\quad \\text{ and } \\quad \\max_{j \\in [k]} \\norm{\\vec{w}_j^*} \\leq B ~~~ \\forall i \\in [2]. \\tag{B}\n \\label{eq:asump_no_index:2}\n \\end{equation*}\n\\end{assumption}\n\n\\begin{theorem}[Unknown-Index Estimation -- $k = 2$]\n \\label{thm:no_index:2}\n Let $(\\vec{x}^{(i)}, y^{(i)})_{i = 1}^n$ be $n$ observations from a\n self-selection setting with $2$ linear models $\\vec{w}^*_1$, $\\vec{w}^*_2$ as\n described in the unknown-index setting of Section \\ref{sec:model}. 
If we assume\n Assumption \\ref{asp:known:2} and Assumption \\ref{as:no_index:2}, then there\n exists an estimation algorithm that outputs $\\hat{\\vec{w}}_1$,\n $\\hat{\\vec{w}}_2$ such that with probability at least $0.99$, for every \n $j \\in [2]$\n \\[ \\norm{\\hat{\\vec{w}}_j - \\vec{w}_j^*}_2 \\le \\varepsilon,\\] \n as long as $n \\geq \\mathrm{poly} (d, 1\/\\varepsilon, \\Delta, B, 1\/\\alpha)$ and \n $\\varepsilon \\le \\Delta\/10$. Furthermore, the running time of the algorithm is at most \n $n \\cdot \\mathrm{poly} (d, 1\/\\varepsilon, \\Delta, B, 1\/\\alpha)$. \n\\end{theorem}\n\n In Section \\ref{sec:unknown} we describe the algorithms and proofs for Theorem \n\\ref{thm:no_index} and Theorem \\ref{thm:no_index:2}.\n\\medskip\n\n\\noindent \\textbf{Remark.} (High-Probability Results)\n All of the above results are expressed in terms of a constant probability of \n error. We can boost this probability to $\\delta$ by paying an additional \n $\\log(1\/\\delta)$ factor in the sample and time complexities. This boosting can \n be done because we are solving a parametric problem, and it is a folklore idea that\n any probability of error less than $1\/2$ can be boosted to $\\delta$. Roughly, \n the boosting works as follows: we run the algorithm independently \n $\\log(1\/\\delta)$ times and, from the $\\log(1\/\\delta)$ different estimates, we \n keep one that contains at least half of all the others within a ball of radius \n $2 \\varepsilon$.\n\n\n\n\n\n\\subsection{Identifiability with Unknown Indices}\n\\label{ssec:identifiability}\nHere, we establish the information-theoretic identifiability of the self-selection model with unknown indices. Recall that we receive samples generated according to $y = \\max_{j \\in [k]} (w_j^\\top x + \\eta_j)$, where $x \\thicksim \\mathcal{N} (0, I)$ and $\\eta_j \\overset{iid}{\\thicksim} \\mathcal{N} (0, 1)$. We now establish the following theorem:\n\n\\unknownidentify*\n\n\\begin{proof}\n Our proof will be based on an inductive argument on the number of components, $k$. We will use a peeling argument to reduce the parameter recovery problem with $k$ components to one with $k - 1$ components. The base case, $k = 1$, reduces to standard linear regression, where, for example, the identity $\\mathbb{E} [xy] = w_1$ suffices. For the inductive argument, suppose $k > 1$ and consider the following function:\n \\begin{equation*}\n \\forall v \\in \\mathbb{R}^d, \\norm{v} = 1: F(v) = \\lim_{\\text{Even } p \\to \\infty} \\lim_{\\gamma \\to 0} \\lr{\\frac{\\mathbb{E} \\sqlr{y^p \\mid \\norm{\\mathcal{P}^\\perp_v x} \\leq \\gamma}}{(p-1)!!}}^{1 \/ p}.\n \\end{equation*}\n We will now show that the above function is well defined for all $\\norm{v} = 1$. The conditional moments may be evaluated with access to the distribution function $\\Phi_{\\mc{W}}$. 
Defining $j^* = \\argmax_{j \\in [k]} \\abs{v^\\top w_j}$ and $\\sigma_j = \\abs{v^\\top \\vec{w}_{j}}$, we now lower bound the conditional moment:\n \\begin{align*}\n \\mathbb{E} [y^p \\mid \\norm{\\mathcal{P}^\\perp_v x} \\leq \\gamma] &\\geq \\mathbb{E} [y^p \\cdot \\bm{1} \\blr{\\vec{w}_{j^*}^\\top x + \\eta_{j^*}\\geq 0} \\mid \\norm{\\mathcal{P}^\\perp_v x} \\leq \\gamma] \\geq \\frac{1}{2} \\cdot \\mathbb{E} [y_{j^*}^p\\mid \\norm{\\mathcal{P}^\\perp_v x} \\leq \\gamma] \\\\\n &\\geq \\frac{1}{2} \\cdot \\mathbb{E} [(\\vec{w}_{j^*}^\\top \\mathcal{P}_v (x) + \\eta_{j^*} + \\vec{w}_{j^*}^\\top \\mathcal{P}^\\perp_v (x))^p\\mid \\norm{\\mathcal{P}^\\perp_v x} \\leq \\gamma] \\\\\n &\\geq \\frac{1}{2} \\cdot \\mathbb{E} \\sqlr{\\sum_{l = 0}^{p \/ 2} \\binom{p}{2l} (\\vec{w}_{j^*}^\\top \\mathcal{P}_v (x) + \\eta_{j^*})^{p - 2l} (\\vec{w}_{j^*}^\\top \\mathcal{P}^\\perp_v (x))^{2l} \\mid \\norm{\\mathcal{P}^\\perp_v x} \\leq \\gamma} \\\\\n &\\geq \\frac{1}{2} \\cdot (p - 1)!! \\cdot (\\sigma_{j^*}^2 + 1)^{p \/ 2}.\n \\end{align*}\n Through a similar computation, we obtain an upper bound on the conditional moment:\n \\begin{align*}\n &\\mathbb{E} [y^p \\mid \\norm{\\mathcal{P}^\\perp_v x} \\leq \\gamma] \\\\\n &\\leq \\sum_{j = 1}^k \\mathbb{E} [(\\vec{w}_j^\\top x + \\eta_j)^p \\mid \\norm{\\mathcal{P}^\\perp_v x} \\leq \\gamma] \\\\\n &= \\sum_{j = 1}^k \\mathbb{E} [(\\vec{w}_j^\\top \\mathcal{P}_v x + \\eta_j + \\vec{w}_j^\\top \\mathcal{P}^\\perp_v x)^p\\mid \\norm{\\mathcal{P}^\\perp_v x} \\leq \\gamma] \\\\\n &= \\sum_{j = 1}^k \\mathbb{E} \\sqlr{\\sum_{l = 0}^{p \/ 2} \\binom{p}{2l} (\\vec{w}_{j}^\\top \\mathcal{P}_v (x) + \\eta_{j})^{p - 2l}(\\vec{w}_{j}^\\top \\mathcal{P}^\\perp_v (x))^{2l} \\mid \\norm{\\mathcal{P}^\\perp_v x} \\leq \\gamma} \\\\\n &\\leq \\sum_{j = 1}^k (p - 1)!! \\cdot (\\sigma_j^2 + 1)^{p \/ 2} + \\sum_{l = 1}^{p \/ 2} \\mathbb{E} \\binom{p}{2l} \\sqlr{(\\vec{w}_{j}^\\top \\mathcal{P}_v (x) + \\eta_{j})^{p - 2l} (\\vec{w}_{j}^\\top \\mathcal{P}^\\perp_v (x))^{2l} \\mid \\norm{\\mathcal{P}^\\perp_v x} \\leq \\gamma} \\\\\n &\\leq k \\cdot (p - 1)!! \\cdot (\\sigma_{j^*}^2 + 1)^{p \/ 2} + \\sum_{j = 1}^k \\sum_{l = 1}^{p \/ 2} \\binom{p}{2l} \\mathbb{E} \\sqlr{(\\vec{w}_{j}^\\top \\mathcal{P}_v (x) + \\eta_{j})^{p - 2l} (\\gamma \\cdot \\norm{\\vec{w}_j})^{2l} \\mid \\norm{\\mathcal{P}^\\perp_v x} \\leq \\gamma}.\n \\end{align*}\n From the previous two displays, we get by taking $p^{th}$ roots and taking the limit as $\\gamma \\to 0$:\n \\begin{equation*}\n \\lr{\\frac{1}{2}}^{1 \/ p}\\cdot \\sqrt{(\\sigma_{j^*}^2 + 1)} \\leq \\lim_{\\gamma \\to 0} \\lr{\\frac{\\mathbb{E} [y^p \\mid \\norm{\\mathcal{P}^\\perp_v x} \\leq \\gamma]}{(p - 1)!!}}^{1 \/ p} \\leq k^{1 \/ p}\\cdot \\sqrt{(\\sigma_{j^*}^2 + 1)}.\n \\end{equation*}\n Taking $p \\to \\infty$, we obtain:\n \\begin{equation}\n \\label{eq:f_def}\n F(v) = \\sqrt{\\max_{j} \\abs{w_j^\\top v}^2 + 1}.\n \\end{equation}\n Now, let $v^*$ be such that $v^* = \\argmax_{\\norm{v} = 1} F(v)$. From \\eqref{eq:f_def}, we get that $v^* = \\pm w_{j^*}$ for some $j^* \\in [k]$ satisfying $j^* = \\argmax_{j} \\norm{w_j}$. Furthermore, $\\sigma^* \\coloneqq \\norm{w_{j^*}} = \\sqrt{F(v^*)^2 - 1}$. To identify the correct sign, consider the random variable $(x, y - x^\\top (\\sigma^* v^*))$. Note that this is a max-selection model with parameter set $\\{w_j - \\sigma^* v^*\\}_{j \\in [k]}$ with associated function $\\wt{F}$ defined analogously to $F$. 
Now, we have the following two cases:\n \\begin{itemize}\n \\item[] \\textbf{Case 1: } $\\wt{F} (v^*) = \\sqrt{4 \\cdot (\\sigma^*)^2 + 1}$. In this case, there exists $w \\in \\mc{W}$ with $w = - \\sigma^* v^*$\n \\item[] \\textbf{Case 2: } $\\wt{F} (v^*) < \\sqrt{4 \\cdot (\\sigma^*)^2 + 1}$. In this case, we must have $w_{j^*} = \\sigma^* v^*$.\n \\end{itemize}\n In either case, we identify a single $w \\in \\mc{W}$. To complete the reduction, note that $(x, y - w^\\top x)$ is a self-selection model with parameter set $\\mc{W}^\\prime = \\{w_j - w\\}_{j \\in [k]}$. Defining the $k - 1$ sized point set $\\mc{W}^\\dagger = \\{w_j - w\\}_{\\substack{j \\in [k], j \\neq j^*}}$, we note the relationship between the distribution functions $\\Phi_{\\mc{W}^\\prime}$ and $\\Phi_{\\mc{W}^\\dagger}$ for all $S \\subset \\mathbb{R}^d$ with $\\P \\blr{x \\in S} \\neq 0$:\n \\begin{equation*}\n \\forall t \\in \\mathbb{R}: \\Phi_{W^\\dagger} (S \\times (-\\infty, t]) = \\frac{\\Phi_{W^\\prime} (S \\times (-\\infty, t])}{\\Phi (t)}.\n \\end{equation*}\n Hence, the distribution function $\\Phi_{\\mc{W}^\\dagger}$ is a function of the distribution function of $\\Phi_{\\mc{W}^\\prime}$ which is in turn a function of the distribution function of $\\Phi_{\\mc{W}}$. From our induction hypothesis, we have that $\\mc{W}^\\dagger$ is identifiable from $\\Phi_{\\mc{W}^\\dagger}$ and consequently, from $\\Phi_{\\mc{W}}$. The proof of the inductive step now follows from the observation that $\\mc{W} = \\{z + w\\}_{z \\in \\mc{W}^\\prime} \\cup \\{w\\}$. \n\\end{proof}\n\n\\subsection{Finding an $O(k)$-subspace}\n\\label{ssec:identifying_subspace}\n\\newcommand{\\bm{M}}{\\bm{M}}\nWe now move towards a finite-sample estimation algorithm for the unknown-index\ncase. The first step in our approach is an algorithm for approximately\nidentifying a size-$k$ subspace that has high overlap with $\\text{span}(w_1,\n\\ldots, w_k)$. \nIn order to estimate the subspace, we will consider the matrix \n$\\bm{M} = \\mathbb{E}\\lr{\\max(0, y)^2\\cdot \\vec{x}\\vec{x}^\\top}$. The following Lemma\nshows that the top $k$ eigenvectors of $\\bm{M}$ capture the span of the weight\nvectors $w_k$:\n\n\\begin{lemma}[Weighted covariance]\n \\label{lemma:svd_cov}\n Consider the matrix $\\bm{M} = \\mathbb{E}\\lr{\\max(0, y)^2 \\cdot\n \\vec{x}\\vec{x}^\\top}$, and let $$p_i = \\mathbb{P}\\lr{\\left\\{\n i = \\argmax_{j \\in [k]} w_j^\\top \\vec{x} + \\eta_j\\right\\} \n \\text{ and } \\left\\{ w_i^\\top \\vec{x} + \\eta_i > 0\\right\\}}.$$ Then,\n \\[\n \\bm{M} = \\mathbb{E}\\lr{\\max(0, y)^2} \\cdot \\bm{I} + 2\\sum_{i=1}^{k} p_i \\cdot w_iw_i^\\top.\n \\]\n\\end{lemma}\n\\begin{proof}\n \\input{subspace_sq}\n\\end{proof}\n\nWe now use the spectral gap shown in the previous Lemma, in combination with a\nmatrix concentration argument, to argue that $k$-SVD on $\\bm{M} -\n\\mathbb{E}[y^2]\\cdot \\bm{I}$ (where $\\bm{M}$ is as defined in the previous Lemma)\nwill approximately recover the span of the $\\{\\vec{w_i}\\}$. \nWe use as a primitive the following result about $k$-SVD:\n\\begin{fact}[\\citep{rokhlin2010randomized}]\n \\label{fac:k_svd}\n Let $M \\in \\mathbb{R}^{k \\times k}$, and let $\\sigma_1 \\geq \\sigma_2,\n \\geq \\ldots \\geq \\sigma_d$ denote the non-zero singular values of M. For\n any $j \\in [k-1]$, define the spectral gap $g_j = \\sigma_j \/\n \\sigma_{j+1}$. Furthermore, suppose we have access to an oracle which\n computes $Mv$ for any $v \\in \\mathbb{R}^k$ in time $R$. 
Then, for any \n $\\eta, \\delta > 0$, there is an algorithm $\\textsc{ApproxSVD}(M,\n \\eta, \\delta)$ which runs in time $\\tilde{O}\\lr{\\frac{j \\cdot R}{\\min(1, g_j - 1)} \\cdot \\log(k\/(\\eta \\delta))}$\n and with probability at least $1 - \\delta$ outputs $U \\in \\mathbb{R}^{k\\times j}$\n with orthonormal columns so that $\\|U - U_j\\|_2 < \\eta$, where $U_j$ is\n the matrix whose columns are the top $j$ right singular vectors of $M$. \n\\end{fact}\n\nThe combination of Lemma \\ref{lemma:svd_cov} and Fact \\ref{fac:k_svd} implies that it\nsuffices to show concentration of the empirical average of the per-sample matrices\n$\\max(0, y^{(l)})^2 \\cdot \\vec{x}^{(l)} {\\vec{x}^{(l)}}^\\top$. The\nfollowing lemma establishes this concentration:\n\\begin{restatable}{lemma}{svdconc}\n \\label{lem:svd_conc}\n Suppose we generate $n$ samples $(\\vec{x}^{(l)}, y^{(l)}) \\sim X, Y$ from the\n self-selected linear regression model, i.e., $\\vec{x}^{(l)} \\sim\n \\mathcal{N}(0, \\bm{I}_d)$, then $y^{(l)}_i = \\bm{w}_i^\\top \\vec{x}^{(l)} +\n \\mathcal{N}(0, 1)$ for vectors $\\vec{w_i} \\in \\mathbb{R}^d$ with $\\|\\vec{w_i}\\|_2 \\leq B$, and $y^{(l)} = \\max_{i \\in [k]} y^{(l)}_i$. Define the\n empirical second-moment matrix \n \\begin{align*}\n \\widehat{\\bm{M}} = \\frac{1}{n}\\sum_{l=1}^n \\max(0, y^{(l)})^2 \\cdot \\vec{x}^{(l)} {\\vec{x}^{(l)}}^\\top.\n \\end{align*}\n Fix any $\\delta \\in (0, 1)$. Then, if $n \\geq \\Omega(\\max(1\/\\delta, d))$,\n with probability at least $1 - \\delta$,\n \\begin{align*}\n \\left\\|\\widehat{\\bm{M}} - \\mathbb{E}[\\widehat{\\bm{M}} ]\\right\\|_2 \\in O\\lr{\n \\mathrm{poly}(B)\\cdot \\frac{\\log(kn)}{\\sqrt{n}}\n \\max\\left\\{\n \\sqrt{\\log(2\/\\delta)},\\sqrt{d}\n \\right\\}\n }.\n \\end{align*}\n\\end{restatable}\n\\begin{proof}\n See Appendix \\ref{app:proof:svd_conc}.\n\\end{proof}\n\\noindent We can thus \napproximately identify the\nrelevant subspace in polynomial time.\n\n\n\\subsection{Estimating Parameters using the Low-Dimensional Subspace}\n\\label{ssec:est_low_dim}\n\\input{ui_estimation_old}\n\n\\subsection{Estimation in the $k = 2$ case}\nWe will now demonstrate how, when $k = 2$, we can use a moment-based\nalgorithm to estimate $\\{\\vec{w_1}, \\vec{w_2}\\}$ in $\\text{poly}(1\/\\epsilon)$\ntime. The algorithm will operate as follows: \n\\begin{enumerate}\n \\item Using the procedure outlined in Section \\ref{ssec:identifying_subspace}, find an\n approximation $U$ to the linear subspace $U^*$ containing\n $\\text{span}(\\vec{w_1}, \\vec{w_2})$. \n \\item Set up an $\\epsilon$-covering over the span of $U$. Since we assume\n $\\|\\vec{w_i}\\| \\leq B$, the covering is of size $O((B\/\\epsilon)^2)$.\n \\item For each element $\\widehat{\\vec{w}}$ of the\n covering, collect samples $(\\vec{x}, y - \\vec{x}^\\top \\widehat{\\vec{w}})$, where $(\\vec{x}, y)$ are from the\n no-index self-selection model.\n \\item Using the moments of $y - \\vec{x}^\\top \\widehat{\\vec{w}}$, estimate\n $\\min_{i \\in \\{1, 2\\}} \\|\\vec{w_i} - \\widehat{\\vec{w}}\\|^2$. We will show that\n $O(\\frac{1}{\\delta\\varepsilon^2}\\cdot\\text{poly}(B))$ samples suffice to get an\n $\\epsilon$-close approximation to this quantity with probability $1-\\delta$. Setting \n $\\delta = O(\\rho \\epsilon^2)$ for some $\\rho < 1$ ensures that we get\n accurate estimates of this quantity for each element $\\widehat{\\vec{w}}$ of our covering.\n \\item As long as $\\vec{w_1}$ and $\\vec{w_2}$ are sufficiently separated,\n we can estimate $\\vec{w_1}$ as the minimizer of our estimate over the\n $\\epsilon$-covering, i.e., $\\widehat{\\vec{w_1}} = \\arg\\min_{\\vec{w}} \\min_{i \\in \\{1, 2\\}}\n \\|\\vec{w_i} - \\vec{w}\\|$. 
We can then estimate $\\vec{w_2}$ as the\n minimizer of the estimate over points that are $\\Delta$-far from\n $\\widehat{\\vec{w_1}}$. \n\\end{enumerate}\n\nTurning this outline into an efficient algorithm entails tackling a few distinct\ntechnical challenges. First, we will show how to estimate $\\min_{i \\in \\{1, 2\\}}\n\\|\\vec{w_i} - \\widehat{\\vec{w}}\\|$ using samples $(\\vec{x}, y)$ from our data-generating\nprocess. Then, we will show that our sequential approach to estimating\n$\\vec{w_1}$ and $\\vec{w_2}$ indeed suffices to recover both with good enough\naccuracy. Finally, we will show that the error incurred by the subspace-finding\nstep does not adversely affect our estimation of the weight vectors.\n\n\\paragraph{Distance to the nearest weight vector.}\nWhen $k = 2$, direct integration allows us to compute the moment generating\nfunction of $y = \\max \\{\\vec{w_1}^\\top \\vec{x} + \\eta_1, \\vec{w_2}^\\top\n\\vec{x} + \\eta_2\\}$ in terms of the covariance matrix between $y_1$ and\n$y_2$. This in turn allows us to accurately estimate the lesser of\n$\\text{Var}[y_1]$ and $\\text{Var}[y_2]$, as captured by the following lemma:\n\\begin{lemma} \\label{lem:minVariance}\n Given a two-dimensional Gaussian random variable $z \\sim \\mathcal{N}(0,\n \\Sigma)$ with $0 \\prec \\Sigma \\in \\mathbb{R}^{2 \\times 2}$, there exists an\n algorithm $\\textsc{MinVariance}(\\delta, \\epsilon)$ which, given\n $O(\\frac{1}{\\delta\\epsilon^2})$ samples, outputs an estimate\n $\\widehat{\\sigma}$ of $\\min_{i \\in \\{1, 2\\}} \\Sigma_{ii}$ satisfying\n \\[\n (1 - \\epsilon)\\lr{\\min_{i \\in \\{1, 2\\}} \\Sigma_{ii}} \\leq \\widehat{\\sigma} \\leq (1 + \\epsilon) \\lr{\\min_{i \\in \\{1, 2\\}} \\Sigma_{ii}}\n \\]\n with probability $1 - \\delta$.\n\\end{lemma}\n\\begin{proof}\n \\input{k_equals_two}\n\\end{proof}\nAs a corollary of this result, we can estimate $\\min_{i \\in \\{1, 2\\}}\n\\|\\vec{w_i} - \\vec{w}\\|^2$ to $\\epsilon$-precision with probability at least $1\n- \\delta$:\n\\begin{corollary} \\label{cor:minVariance}\n Suppose we have samples $\\{(x, y)\\}$ generated from the self-selection model\n with unobserved index. Suppose further that $\\|\\vec{w_i}\\| \\leq B$ for $i\n \\in \\{1, 2\\}$. Then, we can use the \\textsc{MinVariance} algorithm of Lemma \n \\ref{lem:minVariance} to recover $\\sigma$ such that with probability at least\n $1 - \\delta$, \n \\[\n \\left|\\sigma - \\min_{i \\in \\{1, 2\\}} \\|\\vec{w_i} - \\vec{w}\\|^2 \\right| \\leq \\epsilon.\n \\]\n\\end{corollary}\n\\begin{proof}\n Define the random variable $X = y - \\vec{w}^\\top x$. Then, $X$ is the\n maximum of the two Gaussian random variables \n $X_1 = (\\vec{w_1} - \\vec{w})^\\top \\vec{x} + \\eta_1$ and \n $X_2 = (\\vec{w_2} - \\vec{w})^\\top \\vec{x} + \\eta_2$.\n In particular, $\\text{Var}[X_i] = \\|\\vec{w_i} - \\vec{w}\\|^2 + 1$. Thus, by\n applying the \\textsc{MinVariance} algorithm to $X$, we can recover the\n desired quantity.\n\\end{proof}\n\n\\paragraph{Accuracy loss due to subspace recovery.}\n\nFrom Section \\ref{ssec:identifying_subspace}, we may assume the existence of a \n$2$-dimensional subspace $\\hat{U}$ satisfying the following:\n\\begin{equation*}\n \\forall i \\in [2]: \\frac{\\norm{w_i - \\mathcal{P}_{\\hat{U}} w_i}}{\\norm{w_i}} \\leq \\varepsilon\/2,\n\\end{equation*}\nwhere finding $\\hat{U}$ requires $\\mathrm{poly}(d, 1\/\\varepsilon, B, 1\/\\alpha)$ sample and time \ncomplexity. 
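\n\nThe overall grid search can be sketched as follows (our own illustration; \\texttt{min\\_variance} is assumed to wrap the \\textsc{MinVariance} routine applied to samples of $y - \\vec{x}^\\top \\widehat{\\vec{w}}$, and all names are ours):\n\\begin{verbatim}\nimport numpy as np\n\ndef estimate_weights_k2(U, min_variance, B, eps, Delta):\n    # Grid search over an eps-cover of the 2-D subspace spanned by the\n    # columns of U; min_variance(w_hat) returns an estimate of\n    # min_i ||w_i - w_hat||^2 + 1, which is minimized when w_hat ~ w_i.\n    grid = np.arange(-B, B + eps, eps)\n    cover = [U @ np.array([a, b]) for a in grid for b in grid]\n    scores = np.array([min_variance(w) for w in cover])\n    w1_hat = cover[int(scores.argmin())]\n    # Re-minimize over points far from the first estimate to find w_2.\n    far = np.array([i for i, w in enumerate(cover)\n                    if np.linalg.norm(w - w1_hat) >= Delta - eps])\n    w2_hat = cover[far[int(scores[far].argmin())]]\n    return w1_hat, w2_hat\n\\end{verbatim}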
From Corollary \\ref{cor:minVariance} we have that, if we estimate\n$\\sigma_{\\min}$ to within accuracy $\\varepsilon\/10$, then we can identify a point on the\n$2$-dimensional $\\varepsilon\/2$-cover that is $\\varepsilon$-close to $\\vec{w}_1$, \nassuming also that $\\vec{w}_2$ is at least $\\Delta \\ge \\varepsilon$ far away from \n$\\vec{w}_1$. Once we have identified $\\vec{w}_1$, we can subtract \n$\\vec{w}_1^{\\top} \\vec{x}$ from all subsequent samples of the form $(\\vec{x}, y)$ and\nfind the minimum variance among the grid points that are at least $\\Delta - \\varepsilon$\nfar away from the estimated $\\vec{w}_1$; this way we are guaranteed to recover\nthe remaining $\\vec{w}_2$ as well.\n\n\\section*{Acknowledgements}\n\nThis work was supported by the National Institutes of Health (NIGMS\/P20GM109090), the National Science Foundation (CNS2016714), and the University of Nebraska Collaboration Initiative.\n\n\n\\bibliographystyle{IEEEtran} \n\n\n\\section{Introduction} \n\\label{sec:intro}\n\nA number of clinical populations experience difficulty in controlling gait during locomotion, and such difficulties increase the risk of falling. Hence, there is a strong interest in developing easy-to-use technology to improve gait and, ultimately, prevent falls. In recent years, a number of researchers and clinicians have noted the benefit of using metronomes to improve the control of human gait in cases of aging and neurodegenerative diseases such as Parkinson's disease~\\cite{cubo2004short,vaz2020gait}. Similar improvements have been observed in athletic training, where metronomic training has been suggested to exert a general effect on motor coordination. Hence, the use of metronomes shows great promise in clinical and exercise science settings~\\cite{kim2019effects}. An issue, though, is that those studies have largely used isochronous interbeat intervals for training and rehabilitation. An alternative approach is the use of irregular metronomes based on the statistical regularities observed in the gait patterns of young, healthy people~\\cite{hunt2014influence,kaipust2013gait}. Results from our research team suggest that this alternative approach may be effective in restoring healthy gait patterns in older adults~\\cite{vaz2020gait,kaipust2013gait}. \n\nThe contribution of this paper is two-fold. First, we present a laboratory study that employs irregular metronomes within the context of mechanical perturbations to gait. Second, we discuss our ongoing work re-thinking these metronomes as devices with advanced networking capabilities, enhanced with immersive technologies such as Augmented Reality. Our paper is organized as follows. In Section~\\ref{sec:relwork}, we give a brief background and discuss related work, while in Section~\\ref{sec:experiments}, we present our methodology. In Section~\\ref{sec:results}, we present our evaluation, and in Section~\\ref{sec:nextgen}, we discuss directions for developing the next generation of network-capable metronomes. Finally, in Section~\\ref{sec:conclusion}, we conclude our work.\n\n\n\n\n\n\n\n\\section{Background and Related Work}\n\\label{sec:relwork}\n\nVariability is a fundamental feature of both human movements and the underlying physiological processes that support them~\\cite{Aks2002Memory, Stergiou2011Human, Peng1995Fractal, Peng1995Quantification, Ivanov2001From, Kello2010Scaling, cavanaugh2017multifractality}. 
For example, gait variability reflects how gait parameters (e.g., stride time, stride length) change over many successive steps. Physiological variability may be observed by successive observations of physiological events such as the timing of breaths or heart beats. The study of variability in overt movements and physiological events has revealed systematic patterns of variability associated with both health and disease. Historically, variability was thought to be synonymous with error such as a faulty execution of an intended motor program~\\cite{Fitts1954information,Reason1990Human,Schmidt2003Motor}. Evidence supporting this point of view is the frequent observation that novices, older adults, and those with neurodegenerative diseases often produce greater variability than young, healthy, experts. From this point of view, variability is something to be reduced or eliminated. This perspective has become increasingly difficult to support given the growing evidence that structured variability is essential for a healthy, adaptive system~\\cite{Stergiou2006Optimal,harrison2015complex}. Elimination of variability introduces the possibility of a system that is inflexible in the face of novel circumstances. Novel circumstances are the rule in locomotion which involves an ever-changing environment. In contrast, optimal variability should strike a balance between complexity (i.e., behavioral richness) and predictability. Too much predictability leads to robotic movements; too little predictability leads to randomness. Both overly predictable and overly random behavior has been associated loss of adaptability associated with aging and disease.\n\\subsection{Fractality in Human Movements}\nHuman movements entail the coordination of many nested neuromuscular processes~\\cite{cavanaugh2017multifractality,kelty2013tutorial,Likens2014Neural,Likens2015Experimental,Goldberger2002Fractal, Ivanov2001From,harrison2015complex}. The successful interaction of the many components that make up such a system are reflected in the time varying properties (i.e., variability) observed in movement. Movement and physiological patterns represent a form of variability known as \"multifractality\" that has been suggested to represent the coordination of motor degrees of freedom~\\cite{cavanaugh2017multifractality}. Fractals (aka monofractals) are geometric objects that are self-similar over different scales of analysis~\\cite{Goldberger2002Fractal}. Fractals are observable from time series data (e.g., a sequence of stride times) by investigating how statistical properties (e.g., variance) change according to scale. When the logarithm of variance increases linearly with the logarithm of scale, the time series is said to exhibit fractal scaling. The slope that relates these quantities is the fractal scaling exponent, sometimes referred to as the Hurst exponent, $H$. Monofractal scaling suggests that single scaling exponent is sufficient for characterizing the dynamics of gait events. Furthermore, monofractal scaling does not in itself support the conclusion of interactions among the time scales that make up movement. Hence, this one-size fits all approach may not reflect the adaptability needed in human motor control. Multifractality, on the other hand, suggests that fractal patterns may be context dependent, changing to reflect the adaptive nature of the human neuromuscular system. 
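\n\nFor readers unfamiliar with these methods, the following minimal sketch (our own illustration, in Python rather than the MATLAB used for the analyses reported later) synthesizes $1\/f$ noise and estimates its scaling exponent with standard ($q = 2$) DFA; all parameter choices are ours.\n\\begin{verbatim}\nimport numpy as np\n\ndef pink_noise(n, beta=1.0, rng=None):\n    # Spectral synthesis of 1\/f^beta noise; beta = 1 gives the pink-noise\n    # structure reported for healthy stride-time series.\n    rng = np.random.default_rng() if rng is None else rng\n    freqs = np.fft.rfftfreq(n)\n    freqs[0] = freqs[1]                      # avoid division by zero at DC\n    amp = freqs ** (-beta \/ 2)\n    phases = np.exp(2j * np.pi * rng.random(freqs.size))\n    x = np.fft.irfft(amp * phases, n)\n    return (x - x.mean()) \/ x.std()\n\ndef dfa(x, scales):\n    # First-order DFA: the slope of log F(s) vs log s estimates the scaling\n    # exponent (~ Hurst exponent H for fractional Gaussian noise).\n    y = np.cumsum(x - np.mean(x))            # integrate the series\n    F = []\n    for s in scales:\n        n_seg = len(y) \/\/ s\n        segs = y[: n_seg * s].reshape(n_seg, s)\n        t = np.arange(s)\n        res = [np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2)\n               for seg in segs]              # detrended variance per window\n        F.append(np.sqrt(np.mean(res)))\n    return np.polyfit(np.log(scales), np.log(F), 1)[0]\n\nscales = 2 ** np.arange(4, 10)               # window sizes 16 ... 512\nprint(dfa(pink_noise(4096), scales))         # ~1.0 for 1\/f noise\n\\end{verbatim}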
\n\nAlthough there are exceptions~\\cite{ashkenazy2002stochastic,dutta2013multifractal,west2005multifractal}, the majority of work investigating fractal characteristics in human gait has been conducted from a monofractal perspective~\\cite{Hausdorff1995Is,hausdorff1996fractal,hunt2014influence,kaipust2013gait}. Furthermore, although the influence of metronomes on monofractal properties of gait is well known~\\cite{kaipust2013gait,hunt2014influence,hausdorff1996fractal}, we are not aware of any studies that have systematically investigated the effect of metronomes on multifractal patterns observed in human gait. The current work addresses this gap in the literature while also providing critical information regarding the role of gait fractality in resilience to mechanical perturbations. \n\\subsection{Metronomes as Assistive Devices to Prevent Falls}\nAuditory and visual metronomes have emerged as promising rehabilitation tools for restoring healthy motor patterns in a number of domains, including stroke, Parkinson's disease, and aging~\\cite{hollands2015feasibility,vaz2020gait,hunt2014influence,baker2008effect,spaulding2013cueing}. For example, pacing people with an isochronous metronome seems to increase gait speed and stride length~\\cite{spaulding2013cueing}. At issue, though, is that studies of this nature often employ isochronous metronomes, which runs contrary to known properties of human movement variability~\\cite{kaipust2013gait,hunt2014influence, hausdorff1996fractal}. This is problematic because, while some studies show improvement in gait parameters using isochronous metronomes, other studies have shown that isochronous metronomes may actually alter healthy dynamics in a way that mimics pathological gait variability~\\cite{kaipust2013gait,hunt2014influence,vaz2020gait}. In contrast, using metronomes that exhibit the same properties as found in healthy gait does not degrade healthy gait dynamics~\\cite{hunt2014influence} and may exert a restorative effect on gait patterns in older adults~\\cite{van2016efficacy}. \n\n\\subsection{The current study}\nIn this paper, we present a novel study in which we demonstrate that metronomes may be effective in reducing the risk of falling due to an unexpected perturbation. Participants walked on a treadmill while either being paced by one of several metronome types or walking at a self-selected, comfortable pace. During the trial, participants experienced a mechanical perturbation (a treadmill belt was briefly halted). We hypothesized that participants paced by a metronome with statistical characteristics similar to those of healthy, self-paced walking would exhibit more resilience to the perturbation than those in the other metronome conditions. \n\n\\section{Methodology}\n\\label{sec:experiments}\n\n\\subsection{Participants}\nThirty-three healthy young adults (age: 19 \u2013 30 years; height: 1.75 \u00b1 0.91 m; weight: 71 \u00b1 0.1 kg; 13 females, 20 males) volunteered to participate in this study. All participants reported normal or corrected-to-normal vision and hearing, and the ability to walk for $\\geq$ 45 consecutive minutes. Participants reported no neurological, movement, or vestibular disorders; no dizziness; no impairments to vision or hearing; and no current musculoskeletal injury or pain. Participants had experienced no recent cardiovascular events, had not undergone surgery within the previous 6 months, and were not pregnant. 
The institutional review board approved all procedures.\n\n\\subsection{Experimental protocol}\nWalking trials took place on an instrumented, split-belt treadmill (Bertec Corp., OH, USA). Participants were randomly assigned to one of four groups, corresponding to the four metronome conditions: no metronome (none), isochronous, random, and 1\/f noise. Participants wore tight-fitting clothing and were fitted with a ceiling-mounted harness. We determined self-selected walking speed by increasing and decreasing treadmill speed until the participant reported that the speed was 'faster than comfortable' or 'slower than comfortable'. We averaged the first three 'faster' and 'slower' speeds to estimate a comfortable, self-selected speed and used that value for the duration of the session.\n\n\\subsection{Baseline trial}\nParticipants walked on the treadmill for 25 minutes at the self-selected walking speed but without a metronome. After the baseline trial, participants rested for 20 minutes. Right foot contact events from the last 3 minutes of the baseline trial were used to compute inter-stride intervals (ISIs; mean and SD) based on the trajectory of the vertical velocity of the left and right heel markers.\n\n\\subsection{Stimuli creation}\nBaseline gait characteristics for each participant were used to construct 45-minute metronomes, appropriate to group, using MATLAB (The MathWorks Inc, Natick, MA, USA). Inter-beat intervals (IBIs) for the isochronous condition were set to each participant's mean baseline ISI. For the random condition, a random sequence of values between -1 and 1 was generated and then rescaled to match the mean and variance of the participant's baseline ISIs. 1\/f noise time series were generated using custom MATLAB code and scaled in the same manner as the random condition. Metronome intervals were converted to 4-beat drum patterns wherein beats were sounded by a closed hi-hat. A bass drum and snare were played on the first and third beats, respectively. The bass drum signaled the start of a stride; the snare drum signaled contralateral limb foot contact. Stimuli were converted to MIDI sequences and played by a drum generator app (Drum Studio, Rollerchimp, Sydney, Australia). Participants in the metronome condition groups were asked to time their steps to the beat of the metronome while walking. Before beginning experimental trials, participants demonstrated synchronizing their steps with the beats. If a participant was initially unable to synchronize, researchers provided coaching and the participant practiced for 2 additional minutes. Some participants required minor coaching, but all were able to walk in time with the beat.
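\n\nFor illustration, a minimal Python sketch of this stimulus-generation logic is given below. The paper's MATLAB routines are not shown, so the function name and the spectral-synthesis recipe for 1\/f noise are our own assumptions rather than the authors' implementation.\n\\begin{verbatim}\nimport numpy as np\n\ndef metronome_ibis(mean_isi, sd_isi, n_beats, kind, rng=None):\n    # Build unit-variance noise, then rescale it to the\n    # participant's baseline ISI mean and SD.\n    rng = rng or np.random.default_rng()\n    if kind == 'isochronous':\n        return np.full(n_beats, mean_isi)\n    if kind == 'random':\n        z = rng.uniform(-1.0, 1.0, n_beats)\n    elif kind == 'pink':\n        # 1\/f noise via spectral synthesis: random phases with\n        # Fourier amplitudes ~ f^(-1\/2), so power falls off as 1\/f.\n        freqs = np.fft.rfftfreq(n_beats)\n        amp = np.zeros_like(freqs)\n        amp[1:] = freqs[1:] ** -0.5\n        phases = np.exp(2j * np.pi * rng.random(len(freqs)))\n        z = np.fft.irfft(amp * phases, n_beats)\n    else:\n        raise ValueError(kind)\n    z = (z - z.mean()) \/ z.std()\n    return mean_isi + sd_isi * z\n\\end{verbatim}\nThe resulting interval sequences would then be rendered as the 4-beat drum patterns described above.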
\n\n\\subsection{Perturbation trial}\nThirty-nine retroreflective markers were affixed to anatomical landmarks to define a nine-segment (left and right foot, left and right shank, left and right thigh, pelvis, trunk, head) mechanical model of the human body. Clusters of rigid reflective markers were also placed on the thigh and lower leg. Participants walked for 45 minutes in one of the three metronome conditions (1\/f noise, random, or isochronous) or without a metronome at their self-selected speed. Metronome beats were played through speakers at the same volume for all conditions. Twenty-five minutes into the trial, we halted one treadmill belt for 500 ms, providing a mechanical perturbation meant to approximate a trip. The perturbation was timed to when the ankle of the dominant limb in swing passed the ankle of the support limb. Treadmill acceleration and deceleration were set to 3 m\/s$^2$. Participants were asked to resume walking after normal belt movement resumed. The trial continued for 20 more minutes. Kinematic data were recorded at 100 Hz using an 8-camera, 3D motion capture system (Vicon Motion Systems, Oxford, UK). The timing and duration of the treadmill perturbation were controlled by D-Flow (Motek Medical BV, Amsterdam, The Netherlands). Participants received only a single perturbation, to prevent learning effects, and were naive to the trip.\n\n\\subsection{Analysis}\nPeriods before (PRE) and after (POST) the perturbation were identified from the 45-minute walking trials, and foot contact events were estimated based on the trajectory of the vertical velocity of the right heel marker using custom MATLAB code. Foot contact events were also identified from the 25-minute baseline trial. ISIs were calculated as the time elapsed between subsequent heel strike events of the same foot. \n\nAfter extraction, ISIs were used as input to a time series method called Multifractal Detrended Fluctuation Analysis (MFDFA)~\\cite{kantelhardt2002multifractal}. MFDFA is a generalization of the well-known Detrended Fluctuation Analysis~\\cite{peng1994mosaic} and has been used in numerous disciplines to study the correlation structure of time series data, including relevant areas such as physiology and movement science~\\cite{Hausdorff1995Is,hausdorff1996fractal, hunt2014influence, kaipust2013gait,Goldberger2002Fractal, Likens2014Neural, Likens2015Experimental}. See~\\cite{kelty2013tutorial,ihlen2012introduction} for tutorials on this and related methods. We used MFDFA to characterize the time-varying properties of gait at baseline and during each experimental task segment (PRE\/POST perturbation). Kantelhardt et al.~\\cite{kantelhardt2002multifractal} describe MFDFA as an algorithm involving five steps; a minimal code sketch of the procedure is given at the end of this subsection. \n\\begin{itemize}\n \\item Step 1 involves determining the \\emph{profile} as the cumulative sum of the mean-detrended time series according to \n \\[Y(i)=\\sum_{k=1}^i(x_k-\\Bar{x}), \\quad i = 1,\\ldots,N\\]\n where $N$ is the length of the series $x_k$ and $\\Bar{x}$ is the mean of $x_k$.\n \\item Step 2 requires dividing $Y(i)$ into $N_s=\\mathrm{int}(N\/s)$ non-overlapping segments of length $s$. Because $N$ will not always be a multiple of $s$, the procedure is repeated twice, once from each end of $Y(i)$, to avoid dropping data. This results in $2N_s$ segments.\n \\item Step 3 calculates local trends in each of the $2N_s$ segments by ordinary least squares regression. The variance is then estimated as:\n \\[ F^2(v,s)=\\frac{1}{s}\\sum_{i=1}^{s}\\{Y[(v-1)s+i]-y_v(i)\\}^2 \\]\n for each segment $v = 1,\\ldots,N_s$ and\n \\[ F^2(v,s)=\\frac{1}{s}\\sum_{i=1}^{s}\\{Y[N-(v-N_s)s+i]-y_v(i)\\}^2 \\]\n for $v=N_s+1,\\ldots,2N_s$. Here $y_v(i)$ is the local trend determined by OLS regression; we take $y_v(i)$ to be a first order polynomial, but higher order polynomials may also be used~\\cite{kantelhardt2002multifractal}.\n \\item Step 4 entails estimating the $q$th order fluctuation function by averaging over all $2N_s$ segments according to\n \\[ F_q(s)=\\left\\{ \\frac{1}{2N_s}\\sum_{v=1}^{2N_s}[F^2(v,s)]^{\\frac{q}{2}} \\right\\}^{\\frac{1}{q}}\\]\n where $q$ is a nonzero real number. When $q=2$, the procedure reduces to the standard DFA~\\cite{peng1994mosaic} procedure. Steps 2 through 4 are repeated for several values of $s$ with $s\\geq m+2$, where $m$ is the polynomial order of $y_v(i)$. In the current work, we used $s_{min}=16$ and $q=-10,-9,\\ldots,10$.\n \\item Step 5 identifies the scaling behavior of $F_q(s)$ by regressing its logarithm against the logarithm of $s$. 
If the time series exhibits long range correlation, then $F_q(s)$ increases like a power law as a function of $s$, $F_q(s)\\sim s^{h(q)}$, where the generalized Hurst exponent $h(q)$ is the slope of the log-log plot of $F_q(s)$ versus $s$. In the case of multifractality, the slopes $h(q)$ will not be equal for all $q$. When $s$ is large, only a few values are included in the calculation of $F_q(s)$; hence, by convention, we chose $s_{max}=N\/4$ for the current analysis. Lastly, the averaging in Step 4 diverges as $q$ tends toward 0, so $h(0)$ is estimated through logarithmic averaging according to:\n \\[F_0(s)=\\exp\\left\\{\\frac{1}{4N_s}\\sum_{v=1}^{2N_s}\\ln[F^2(v,s)]\\right\\} \\sim s^{h(0)}.\\]\n Other scaling exponents may be calculated based on $h(q)$. See \\cite{kantelhardt2002multifractal} for details.\n\\end{itemize}\n\nAfter extracting $h(q)$ by applying MFDFA to each ISI time series, the $h(q)$ estimates were analyzed via linear mixed-effects models (LMEs). Models were fit using forward selection of parameters, starting with fixed effects and adding higher order interactions one at a time. Model terms were retained based on significant likelihood ratio tests. 
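\n\nFor concreteness, the following Python sketch transcribes Steps 1--5 with first-order detrending. It is illustrative only: variable names, the scale grid, and the use of NumPy are our own choices, not the authors' MATLAB implementation.\n\\begin{verbatim}\nimport numpy as np\n\ndef mfdfa(x, scales, qs, m=1):\n    # Step 1: profile of the mean-detrended series.\n    y = np.cumsum(x - np.mean(x))\n    n = len(y)\n    logF = np.zeros((len(qs), len(scales)))\n    for j, s in enumerate(scales):\n        ns = n \/\/ s\n        f2 = []\n        # Steps 2-3: segment from both ends, detrend each segment\n        # with an order-m polynomial, keep the residual variance.\n        t = np.arange(s)\n        for start in (0, n - ns * s):\n            for seg in y[start:start + ns * s].reshape(ns, s):\n                coef = np.polyfit(t, seg, m)\n                f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))\n        f2 = np.asarray(f2)\n        # Step 4: q-th order fluctuation function\n        # (logarithmic averaging at q = 0).\n        for i, q in enumerate(qs):\n            if q == 0:\n                logF[i, j] = 0.5 * np.mean(np.log(f2))\n            else:\n                logF[i, j] = np.log(np.mean(f2 ** (q \/ 2.0))) \/ q\n    # Step 5: h(q) = slope of log F_q(s) against log s.\n    logs = np.log(scales)\n    return np.array([np.polyfit(logs, row, 1)[0] for row in logF])\n\nisi = np.loadtxt('isi.txt')  # hypothetical ISI series, one per line\nscales = [16, 32, 64, 128, len(isi) \/\/ 4]\nhq = mfdfa(isi, scales, qs=range(-10, 11))\n\\end{verbatim}\nWith $q$ ranging over $-10,\\ldots,10$, a spread in the returned slopes indicates multifractality, as described above.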
\n\n\\section{Evaluation}\n\\label{sec:results}\nLME results are presented as an analysis of variance in Table~\\ref{table:anova}. That table indicates that all main effects and interactions were significant. Main effects and two-way interactions are not interpreted further because they are subsumed by a three-way interaction among time, group, and q-order. \n\\begin{table}[ht]\n\n\\centering\n\\begin{tabular}{lrrrrr}\n \\hline\n & SS & MS & DF1 & DF2 & F \\\\ \n \\hline\nq & 5.48 & 5.48 & 1.00 & 1879.60 & 268.53 \\\\ \n g & 1.68 & 0.56 & 3.00 & 27.61 & 27.48 \\\\ \n t & 22.81 & 11.40 & 2.00 & 1887.16 & 558.63 \\\\ \n q$\\times$g & 0.73 & 0.24 & 3.00 & 1879.60 & 11.87 \\\\ \n q$\\times$t & 0.22 & 0.11 & 2.00 & 1879.60 & 5.46 \\\\ \n g$\\times$t & 30.79 & 5.13 & 6.00 & 1886.43 & 251.36 \\\\ \n q$\\times$g$\\times$t & 1.57 & 0.26 & 6.00 & 1879.60 & 12.83\\\\ \n \\hline\n\\end{tabular}\n\\caption{\\label{table:anova}Type III analysis of variance table with degrees of freedom adjusted by the Kenward-Roger method. Results suggest a three-way interaction among q-order (q), group (g), and time (t). All $p < .01$.}\n\\vspace{+0.5cm}\n\\end{table}\n\nModel-implied means are depicted in Figure~\\ref{fig:qorder} as a way to visualize this interaction. Based on that figure, several general trends seem apparent. First, both the random and the isochronous metronomes reduced the central tendency of $h(q)$. On average, $h(q)$ appears lower in those conditions than in either the none or the 1\/f condition, replicating known trends in the literature~\\cite{hunt2014influence}. Second, the range of $h(q)$ appears to be larger when paced by a 1\/f metronome than when paced by either a random or an isochronous metronome. To investigate the influence of group and time on $h(q)$, we performed a series of simple effects tests at the extreme values of $q$ included in the MFDFA procedure (i.e., $q=-10$ and $q=10$). These tests are presented in Table~\\ref{table:hq_table}. When $q=-10$, small fluctuations dominate the estimation of fractal scaling. In that case, the none condition did not vary across time. However, large decreases in $h(q)$ were observed in the isochronous and random conditions, pre- and post-perturbation relative to the baseline. In contrast, the 1\/f condition increased $h(q)$ relative to baseline. Only the 1\/f condition showed a pre-post decline in $h(q)$, possibly due to floor effects in the other two conditions. When $q=10$, large fluctuations dominate the estimation of fractal scaling. In this case, all conditions showed differences in scaling behavior relative to baseline. However, only the none and isochronous conditions showed differences between pre- and post-perturbation. \n\n\\begin{table}[ht]\n\\centering\n\\begin{tabular}{lrrrrl}\n \\hline\ncontrast & estimate & SE & df & t & p \\\\ \n \\hline\n\\multicolumn{6}{l}{group = None, q = -10}\\\\\nbase - pre & 0.02 & 0.03 & 1882.06 & 0.63 & 0.806 \\\\ \n base - post & 0.03 & 0.03 & 1882.06 & 1.01 & 0.569 \\\\ \n pre - post & 0.01 & 0.03 & 1880.01 & 0.40 & 0.916 \\\\ \n \\hline\n\\multicolumn{6}{l}{group = Isoch., q = -10}\\\\\nbase - pre & 0.65 & 0.03 & 1880.01 & 20.36 & $<$.001 \\\\ \n base - post & 0.71 & 0.03 & 1880.01 & 22.11 & $<$.001 \\\\ \n pre - post & 0.06 & 0.03 & 1880.01 & 1.76 & 0.185 \\\\ \n \\hline\n\\multicolumn{6}{l}{group = 1\/f, q = -10}\\\\\nbase - pre & -0.20 & 0.03 & 1888.66 & -5.85 & $<$.001 \\\\ \n base - post & -0.09 & 0.03 & 1888.66 & -2.80 & 0.014 \\\\ \n pre - post & 0.10 & 0.03 & 1880.01 & 2.97 & 0.009 \\\\ \n \\hline\n\\multicolumn{6}{l}{group = Rand., q = -10}\\\\\nbase - pre & 0.29 & 0.03 & 1882.06 & 9.75 & $<$.001 \\\\ \n base - post & 0.33 & 0.03 & 1882.06 & 11.14 & $<$.001 \\\\ \n pre - post & 0.04 & 0.03 & 1880.01 & 1.44 & 0.321 \\\\ \n \\hline\n\\multicolumn{6}{l}{group = None, q = 10}\\\\\nbase - pre & -0.18 & 0.03 & 1882.06 & -5.97 & $<$.001 \\\\ \n base - post & -0.07 & 0.03 & 1882.06 & -2.53 & 0.031 \\\\ \n pre - post & 0.10 & 0.03 & 1880.01 & 3.56 & 0.001 \\\\ \n \\hline\n\\multicolumn{6}{l}{group = Isoch., q = 10}\\\\\nbase - pre & 0.64 & 0.03 & 1880.01 & 19.96 & $<$.001 \\\\ \n base - post & 0.53 & 0.03 & 1880.01 & 16.39 & $<$.001 \\\\ \n pre - post & -0.11 & 0.03 & 1880.01 & -3.57 & 0.001 \\\\ \n \\hline\n\\multicolumn{6}{l}{group = 1\/f, q = 10}\\\\\nbase - pre & 0.20 & 0.03 & 1888.66 & 5.85 & $<$.001 \\\\ \n base - post & 0.15 & 0.03 & 1888.66 & 4.44 & $<$.001 \\\\ \n pre - post & -0.05 & 0.03 & 1880.01 & -1.37 & 0.354 \\\\ \n \\hline\n\\multicolumn{6}{l}{group = Rand., q = 10}\\\\\nbase - pre & 0.41 & 0.03 & 1882.06 & 13.97 & $<$.001 \\\\ \n base - post & 0.39 & 0.03 & 1882.06 & 13.27 & $<$.001 \\\\ \n pre - post & -0.02 & 0.03 & 1880.01 & -0.72 & 0.752 \\\\ \n \\hline\n\\multicolumn{6}{l}{{\\footnotesize Degrees-of-freedom method: Kenward-Roger}}\\\\\n\n\\multicolumn{6}{l}{{\\footnotesize P value adjustment: Tukey method for comparing a family of 3 estimates}}\\\\\n\\end{tabular}\n\\caption{\\label{table:hq_table}Pairwise comparisons of $h(q)$ as a function of time and group at extreme values of $q$.}\n\\vspace{+0.1cm}\n\\end{table}\n\n\nTo understand the influence of the various metronomes and of the mechanical perturbation on gait variability, we investigated how the slope relating $q$ and $h(q)$ changed according to time and group. These results are presented in Table~\\ref{table:slopes}. A general finding is that the $q \\times h(q)$ slope differed between baseline and pre-perturbation for all conditions except the isochronous condition. In addition, the $q \\times h(q)$ slope differed between baseline and post-perturbation for the isochronous and 1\/f conditions. Lastly, pre-post differences in the $q \\times h(q)$ slopes were observed only in the isochronous and 1\/f conditions.\n\nCollectively, these results emphasize a few key points related to the effects of various metronome types on fractal scaling in ISIs. 
First, the current results bolster previous observations suggesting that isochronous and random metronomes produce gait patterns more consistent with pathology than with healthy gait~\\cite{kaipust2013gait,hunt2014influence,hausdorff1996fractal}. Second, the results, while not entirely consistent, suggest that metronomes exert a profound effect on gait fractality, affecting the scaling properties of both very small and very large fluctuations alike. The current slope analysis was limited to extreme values of $q$, but there is good reason to suspect that fluctuations of intermediate size would likewise be affected. Third, and surprisingly, our results did not produce a consistent pre-post perturbation effect with respect to either the central tendency of $h(q)$ or the $q \\times h(q)$ slopes. There are several possible explanations for this pattern of results. Some conditions, such as the random condition, could have produced a floor effect, preventing successful reorganization of motor degrees of freedom from being detectable post-perturbation. This is consistent with our hypothesis concerning 1\/f noise, for which measurable pre-post perturbation effects were observed. Another possibility is that the post-perturbation segments incorporated in our analysis were too long. A limitation of MFDFA is that time series need to be quite long in order to obtain stable estimates. Hence, it is possible that the post-perturbation multifractality estimates included too many strides after recovery to capture nuanced effects of the perturbation. Future research should explore the use of a modified MFDFA procedure better suited for 'short' time series.\n\n\\begin{table}[ht]\n\\centering\n\\begin{tabular}{lrrrrl}\n \\hline\ncontrast & estimate & SE & df & t & p \\\\ \n \\hline\n\\multicolumn{6}{l}{g = None}\\\\\nbase - pre & -0.01 & 0.00 & 1880.01 & -3.87 & 0.0003 \\\\ \n base - post & -0.01 & 0.00 & 1880.01 & -2.08 & 0.0944 \\\\ \n pre - post & 0.00 & 0.00 & 1880.01 & 1.84 & 0.1556 \\\\ \n \\hline\n\\multicolumn{6}{l}{g = Isoch.}\\\\\nbase - pre & -0.00 & 0.00 & 1880.01 & -0.23 & 0.9716 \\\\ \n base - post & -0.01 & 0.00 & 1880.01 & -3.34 & 0.0024 \\\\ \n pre - post & -0.01 & 0.00 & 1880.01 & -3.11 & 0.0053 \\\\ \n \\hline\n\\multicolumn{6}{l}{g = 1\/f}\\\\\nbase - pre & 0.02 & 0.00 & 1880.01 & 6.90 & $<$.0001 \\\\ \n base - post & 0.01 & 0.00 & 1880.01 & 4.27 & 0.0001 \\\\ \n pre - post & -0.01 & 0.00 & 1880.01 & -2.54 & 0.0300 \\\\ \n \\hline\n\\multicolumn{6}{l}{g = Rand.}\\\\\nbase - pre & 0.01 & 0.00 & 1880.01 & 2.47 & 0.0359 \\\\ \n base - post & 0.00 & 0.00 & 1880.01 & 1.25 & 0.4238 \\\\ \n pre - post & -0.00 & 0.00 & 1880.01 & -1.26 & 0.4173 \\\\ \n \\hline\n\\multicolumn{6}{l}{{\\footnotesize Degrees-of-freedom method: Kenward-Roger}}\\\\\n\n\\multicolumn{6}{l}{{\\footnotesize P value adjustment: Tukey method for comparing a family of 3 estimates}}\\\\\n\\end{tabular}\n\\caption{\\label{table:slopes}Analysis of $q\\times h(q)$ slopes as a function of time and group.}\n\\vspace{+0.1cm}\n\\end{table}\n\n\n\\begin{figure}\n\\centering\n\n\\includegraphics[width=1\\linewidth]{images\/model_implied_means.pdf}\n\\caption{Generalized Hurst exponent, $h(q)$, as a function of time and group.}\n\\label{fig:qorder} \n\n\\vspace{+0.5cm}\n\\end{figure}\n\n\n\n\\section{Next-Generation Networked Metronomes}\n\\label{sec:nextgen}\n\nIn this section, we describe directions for building the next generation of metronomes, featuring advanced networking capabilities in conjunction with technologies such as Augmented Reality (AR), to provide an immersive and 
interactive quality of experience to participants~\\cite{shannigrahi2020next}.\n\n\\subsection{Smart Phone Metronome Applications}\n\nOne current direction for the implementation of gait metronomes is through smartphone applications. Such applications can be installed on research participants' phones, while parameters such as the type and frequency of the auditory beat can be adjusted as input parameters of the application. Such metronomes enable flexible mobility, so that experiments can be performed outside of laboratory spaces in daily living contexts (e.g., walking on a university campus, working out at a park). This is particularly important because laboratories may restrict the flexibility of the performed experiments (e.g., treadmills constrain participants to walk in fixed directions). On the other hand, daily living contexts can offer insights into habitual behaviors, which may not be feasible in laboratory spaces.\n\n\n\n\\subsection{Augmented Reality and ``Gamified'' Metronomes}\n\nSo far, we have focused on auditory gait metronomes. Auditory stimuli are discrete and limited in nature. To this end, gait metronomes can be augmented with visual effects, so that they provide continuous visual stimuli to research participants, offering new opportunities for gait-related research. In Figure~\\ref{fig:glasses}, we present a prototype of a visual metronome. This metronome is implemented through a pair of glasses that participants wear. The glasses are powered by, and directly connected to, a tablet device. The stimuli are visualized through a bar that moves vertically. When the bar reaches its highest point, participants are instructed to touch the ground with their foot as they take a step. \n\\begin{figure}\n \\centering\n \\includegraphics[width=1\\linewidth]{images\/glasses_wide.png}\n \\caption{Prototype of a visual metronome. A moving bar, depicted on the right hand side, is displayed on a video screen mounted to a non-prescription pair of glasses.}\n \\label{fig:glasses} \n \\vspace{+0.5cm}\n\\end{figure}\n\nThis visual metronome prototype, however, requires that participants carry a tablet device with them while walking. Technologies to connect the glasses wirelessly (e.g., Bluetooth, WiFi) to the tablet may be useful; however, such glasses still have limited capabilities in terms of graphics. A direction worth exploring is the integration of AR to provide an immersive graphical experience that enhances the research participant's perception of the world. In Figure~\\ref{fig:AR}, we present an early prototype of an AR-based metronome that we implemented with Unity and deployed on a HoloLens 2 headset. This metronome is a direct conversion of the visual metronome of Figure~\\ref{fig:glasses} to an AR environment.\n\nLeveraging AR, a direction to explore is ``gamifying'' gait metronomes in order to provide an interactive, gaming-like experience to research participants. For example, participants may be instructed to synchronize their steps with visual targets (e.g., collide with 3D bulls-eye objects). Another example may be a metronome where participants need to step on 3D tiles of a certain color or shape that the AR headset projects on the floor. 
At the same time, participants can receive real-time feedback about their performance in this gaming experience (e.g., through messages displayed on the headset's screen), being (i) encouraged to step on the 3D bulls-eye objects or tiles if they do not follow the instructions of the game, and (ii) praised for their performance if they do.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=1\\linewidth]{images\/hololens.pdf}\n \\caption{Prototype of an AR-capable metronome developed with Unity and deployed on a HoloLens 2 headset.}\n \\label{fig:AR} \n \\vspace{+0.5cm}\n\\end{figure}\n\n\\subsection{Cloud and Edge Computing Technologies}\n\nGait metronomes coupled with wireless capabilities and potentially based on AR may need to utilize remote computing resources for computationally expensive operations (e.g., 3D rendering, object detection). Cloud computing technologies could facilitate the execution of such operations on remote computing resources. However, cloud computing may result in high response times, which can be problematic for low-latency applications such as interactive AR. To this end, edge computing~\\cite{shi2016edge} has emerged as a paradigm that places computing resources physically close to users (at the edge of the network), subsequently resulting in fast response times.\n\nEdge computing solutions open opportunities for future computer networking research. Such opportunities include mechanisms for the allocation and discovery of computing resources at the edge~\\cite{mastorakis2019towards, mastorakis2020dlwiot}, as well as the dynamic offloading of computational tasks to the edge~\\cite{mastorakis2020icedge}. For example, if multiple edge servers are available, we may need consecutive computational tasks to be offloaded to the same edge server. This is because the processing of these tasks may determine whether a participant follows the provided instructions (e.g., whether the participant is stepping on projected 3D tiles of the right color) not only momentarily, but over a certain period of time. The processing outcome of such tasks will subsequently determine the feedback to be provided to the participant (e.g., the message to be displayed on the headset's screen). In addition, mechanisms for in-network computing coupled with advanced wireless capabilities (e.g., LTE\/5G) could further increase the available bandwidth and reduce response times~\\cite{nour2020compute, ullah2020icn}. \n\n\n\\section{Conclusion and Future Work}\n\\label{sec:conclusion}\n\nIn this paper, we have presented a preliminary study investigating the utility of different metronomes as assistive devices. Our results showed that, when followed by healthy adults, metronomes typically used for rehabilitation (i.e., isochronous metronomes) produce gait patterns more consistent with pathological gait than with the variability observed in typical walking. In contrast, metronomes based on healthy gait patterns left those patterns unperturbed, even after a strong mechanical perturbation. Lastly, we provided initial plans for enhancing metronomes as assistive devices through the use of advanced networking and computing technologies as well as augmented reality. Our vision is that these technologies will advance these promising tools by allowing better graphics, faster calculations, and more freedom of use. 
Our goal is to increase the efficacy of metronomic rehabilitation tools by: (1) using metronomes that take into consideration the natural regularities in healthy gait, and (2) providing a better user experience in a broader range of settings. Ultimately, these improvements may encourage broader use of these tools and prevent falls.
","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn the beginning of the development of inflationary cosmology our main goal was to find some simple semi-realistic versions of inflationary theory, where inflation could be naturally realized for a wide variety of initial conditions. The search for such models has led us to the discovery of chaotic inflation \\cite{Chaot}, where inflation can be achieved in the simplest models, such as the models of the fields minimally coupled to gravity, with polynomial potentials. 
Inflation occurs in such models for a wide variety of initial conditions \\cite{Linde:1985ub} and may continue eternally \\cite{Vilenkin:1983xq,Linde:1986fd}.\n\nIn this class of theories the constant part of the scalar field does not have any independent physical meaning; its value enters the theory only because it makes other fields massive, and because it has a potential energy density $V(\\varphi)$. As a result, these theories have a weakly broken shift symmetry $\\varphi \\to \\varphi+c$. This symmetry is broken only by the $\\varphi$-dependence of the potential $V$ and of the masses of particles interacting with the field $\\varphi$. It is this property which allows the existence of chaotic inflation potentials such as $m^{2}\\varphi^{2}$, which remain flat even for $\\varphi \\gg M_{p}$ \\cite{Chaot}. Shift symmetries forbid the dangerous quantum corrections to the effective potential $\\sim M_p^{4} \\bigl(\\frac{\\varphi}{M_{p}}\\bigr)^{n}$ often discussed in the literature, which makes inflation in these theories easier to achieve; see \\cite{Linde:2007fr} for a recent discussion of this issue. Another popular example of shift symmetry is the symmetry $\\theta \\to \\theta + c$ of the axion potential; this potential appears only after the shift symmetry becomes broken by non-perturbative effects. This symmetry was the basis for the so-called natural inflation scenario \\cite{Freese:1990rb}.\n\nHowever, in supergravity and string theory the scalar fields usually have physical or geometric meaning. For example, in the simplest supergravity models the K\\\"ahler potential $K = \\frac{\\Phi\\bar\\Phi}{M_p^{2}}$ describes K\\\"ahler geometry. The F-term scalar potential is proportional to $e^{K} \\sim e^{\\Phi\\bar\\Phi\/ M_p^{2}}$. As a result, the mass squared of the inflaton field typically acquires a contribution $O(H^{2})$, which tends to prevent inflation. Quite often, one can fine-tune the parameters of the theory and achieve the desirable flatness of the potential for some small range of the values of the scalar fields. In such models, inflation in a certain sense happens by accident: One must have the parameters of the model fixed in a very narrow range, and in addition, one must hope that, by some happy accident, the scalar field was in the required range of values where inflation is possible from the very beginning. A similar situation, which can be called `accidental inflation,' often occurs in string theory. \n\nAn interesting and quite instructive model of this type was proposed by Holman, Ramond and Ross \\cite{Holman:1984yj}. We will briefly describe it here, because it is closely related to the models of string theory inflation to be discussed in this paper.\nThe K\\\"ahler potential in this model is $K = \\frac{\\Phi\\bar\\Phi}{M_p^{2}}$, and the superpotential is\n\\begin{equation}\nW = c\\, (\\Phi-\\Phi_{0})^{2}\\ ,\n\\end{equation}\nwhere $\\Phi = (\\varphi +i\\alpha)\/\\sqrt 2$, and $\\varphi$ and $\\alpha$ are canonically normalized scalar fields. The potential has a minimum at $\\alpha = 0$, $\\varphi = \\sqrt 2 \\Phi_{0}$.\n\nIn what follows, we will work in units $M_{p} = 1$. For the particular case $\\Phi_{0} = 1$, the potential $V(\\varphi)$ is shown by the thick blue solid line in Fig.~\\ref{Ramond}. 
It has a minimum at $\\varphi = \\sqrt 2$, and a small plateau at $|\\varphi| \\ll 1$, where the potential is described by the simple equation\n\\begin{equation}\\label{cubic}\nV(\\varphi) =V_{0}\\, (1-\\sqrt 2\\, \\varphi^{3} + O(1)\\,\\varphi^{4}) \\ ,\n\\end{equation} \nwhere $V_{0} = c^{2}$.\n\n\n\\begin{figure}\n\\centering{\\includegraphics[height=5cm]{Ramond.eps}}\n \\caption{The thick blue solid line shows the inflationary potential for the model of Ref. \\cite{Holman:1984yj}, with $\\Phi_{0} = 1$. If one takes the model with a slightly larger $\\Phi_{0}$, the potential has two minima separated by a barrier, as shown by the thin green dashed line, corresponding to $\\Phi_{0} = 1.05$. For $\\Phi_{0}<1$, the metastable minimum disappears, and $V'$ becomes negative everywhere, as shown by the thin red solid line corresponding to $\\Phi_{0} = 0.95$. Inflation is possible only if $|\\Phi_{0}-1|\\ll 1$, which requires fine-tuning.} \\label{Ramond}\n\\end{figure}\n\nThis potential is flat at $\\varphi = 0$, where $V' = V'' = 0$, which corresponds to an inflection point. Since the slow-roll parameters $\\epsilon$ and $\\eta$ vanish at $\\varphi = 0$, this potential can support inflation if the initial value of the field is sufficiently small. The slow-roll conditions in this model are satisfied for $\\varphi \\lesssim 10^{{-1}}$. The field $\\varphi_{60}$ corresponding to the beginning of the last 60 e-foldings is much smaller still. Therefore it is quite sufficient to use just the first two terms of the expression (\\ref{cubic}), i.e. the constant and the cubic term, for the investigation of inflation in this model. \n\nOn the other hand, there is no special reason why $\\Phi_{0}$ must be exactly equal to $1$. If we take, for example, $\\Phi_{0}= 0.95$, we obtain the steep potential shown by the thin red solid line in Fig.~\\ref{Ramond}, and inflation disappears. For $\\Phi_{0}= 1.05$, the potential has two minima and a non-inflationary maximum with $|V''| \\sim V$. To achieve 60 e-folds of inflation in this scenario one must have $\\Phi_{0} = 1$ to great accuracy, and the field from the very beginning should reside very close to the inflection point $\\varphi = 0$. In this sense, inflation in models of this type happens by accident. \n\n\n\n\nThis model was proposed back in 1984. It took 10 years, from 1984 to 1994, until the hybrid inflation scenario was developed \\cite{Hybrid} and it became possible to introduce the F-term \\cite{F} and the D-term hybrid inflation scenario \\cite{D}, where inflation could appear in a relatively natural way. It took 6 more years after that until the simple realization of chaotic inflation with a quadratic potential was proposed in supergravity \\cite{Kawasaki:2000yn}, and it took 7 more years until a supergravity realization of natural inflation became possible \\cite{Kallosh:2007ig}.\n\n\nA systematic investigation of inflation in string theory began only in 2003, after the development of the KKLT mechanism of vacuum stabilization \\cite{KKLT}, see also \\cite{Giddings:2001yu,Silverstein:2001xn}. The first string inflation model based on the KKLT mechanism, the KKLMMT model \\cite{Kachru:2003sx}, is a brane inflation version of the hybrid inflation scenario. A detailed investigation of this model \\cite{Baumann:2007np,delicate,Krause:2007jk,Panda:2007ie,Itzhaki:2007nk}\n demonstrated that its inflationary potential is very similar to the inflection point potential of Ref. 
\\cite{Holman:1984yj}.\\footnote{Similar potentials emerge in other inflationary models as well, such as MSSM inflation; see Ref. \\cite{Allahverdi:2006we}.} Thus, in a certain sense, in the recent studies of string theory inflation we have returned to the discussion of accidental inflation in the first models of inflation in supergravity.\n\n\n\nIn parallel with the investigation of the KKLMMT scenario, string theorists are trying to develop more natural versions of inflationary theory, where the inflationary potential would naturally have flat directions, such as D3\/D7 inflation \\cite{D3D7}. We do not know yet whether it is possible to obtain a generalization of the chaotic inflation scenario and\/or of the natural inflation scenario; some related ideas are discussed e.g. in \\cite{Dimopoulos:2005ac,Kallosh:2007ig,McAllister:2007bg,Grimm:2007hs}. As we just mentioned, the search for such models in supergravity took many years. The investigation of inflation in string theory has just begun. Therefore at this early stage it may make a lot of sense to study the models of accidental inflation discussed above, despite the fact that they appear fine-tuned in more than one respect. This may be quite appropriate because the old ideas of naturalness, unnaturalness and fine-tuning look quite different in the context of the eternal inflation scenario in the string theory landscape consisting of $10^{10^{3}}$ different string theory vacua \\cite{Lerche,Bousso,KKLT,Susskind:2003kw, Douglas:2003um}.\n\n\nIn this paper we will pursue two different goals. First of all, the dynamical mechanism of inflation in the KKLMMT model is quite complicated. It involves the investigation of gravitational effects related to the motion of branes in warped space, which requires a consistent merger of open string theory and closed string theory. Therefore it would be nice to have a simple toy model of accidental inflation in string theory, analogous to the model of Ref. \\cite{Holman:1984yj}. An example of such a model will be presented in Section \\ref{KLinf}.\n\nSecondly, we would like to understand generic features of accidental inflation in the context of the string landscape scenario, and, if possible, to find some unambiguous predictions of this class of models which would allow us to distinguish it from other versions of string inflation. These issues will be discussed in Sections \\ref{toyinf} -- \\ref{predict}. We will summarize our results in Section \\ref{concl}. A short discussion concerning eternal inflation can be found in the Appendix.\n\n\n\n\\section{Volume modulus inflation}\\label{KLinf}\n\nFor a long time, it did not seem possible that the volume modulus could play the role of the inflaton field in the KKLT construction. Indeed, the standard KKLT potential has only one minimum at finite values of the volume modulus, and the slow-roll conditions for this potential are strongly violated~\\cite{alphaprime}.\n\nThe situation did not change much with the invention of racetrack inflation: It became possible to find a saddle point of the potential and satisfy the slow-roll conditions with respect to the axion field, but not with respect to the volume modulus \\cite{Blanco-Pillado:2004ns}.\n\n\nIn this section, we propose a model of string theory inflation driven by the volume modulus in the context of the KL~\\cite{Kallosh:2004yh} model of moduli stabilization. (Another model of volume modulus inflation will be described soon in \\cite{CKLQ}.) 
Here the combination of fluxes with a racetrack superpotential leads to a scalar potential with two asymmetric local minima for the volume modulus, which become de Sitter after uplifting by e.g. $\\overline{\\rm D3}$-branes or ${\\rm D7}$-induced D-terms. The higher of the two minima connects through a saddle point to the late-time dS minimum, or degenerates into an inflection point. With a proper choice of parameters, one can make the potential near the saddle point or the inflection point sufficiently flat to provide for \nmore than the necessary 60 e-foldings of slow-roll inflation driven by the volume modulus. \n\n\nTo begin with, we shall briefly review the structure of the KL setup~\\cite{Kallosh:2004yh}. The model starts out similar to the KKLT construction~\\cite{KKLT} by assuming that the complex structure moduli and the axion-dilaton have been frozen at a high KK-related mass scale by turning on sufficiently generic 3-form fluxes $G_3$ along the lines of~\\cite{Giddings:2001yu}. With respect to the K\\\"ahler moduli this results in a constant and fine-tunable term $W_0=\\int_{\\rm CY_3}G_3\\wedge \\Omega$ in the superpotential of the low-energy effective 4d ${\\cal N}=1$ supergravity. Considering then the subclass of Calabi-Yau 3-folds with $h^{1,1}=1$, i.e. a single K\\\"ahler modulus which is the volume modulus $T$, the presence of non-perturbative effects leads to the stabilization of $T$. These exponential terms\narise either from Euclidean ${\\rm D3}$-branes or from gaugino condensation on ${\\rm D7}$-branes, as explained in\n\\cite{KKLT,Blanco-Pillado:2004ns}. If there is only one exponential term in the superpotential, then this leads directly to the KKLT construction.\n\nThe KL model is then specified by assuming the presence of two non-perturbative contributions in $T$ to the superpotential, leading to a racetrack superpotential. Thus the setup is given by\n\\begin{eqnarray}\nK &=& - 3 \\ln(T + \\overline{T})\\;,\\qquad T=\\varphi+i\\tau \\ , \\nonumber\\\\ &&\\\\\nW &=& W_0 + Ae^{-aT}+ Be^{-bT} \\ ,\\nonumber\n\\label{setup}\n\\end{eqnarray}\nwhere the volume modulus $\\varphi$, measuring the volume of the single 4-cycle of the Calabi-Yau, pairs up with the RR-axion $\\tau$ to form the complex chiral superfield $T$.\n\n\nThe F-term scalar potential, \\begin{equation} V= e^{K}\\left( G^{T \\overline T} D_{T}W \\overline {D_{T} W}\n- 3|W|^2 \\right), \\end{equation} as a function of the volume modulus $\\varphi$ is given by\n\\begin{eqnarray}\n&&V= \\frac{e^{-2(a+b)\\varphi}}{6\\varphi^2}(b B e^{a\\varphi}+ a A e^{b\\varphi})\\nonumber\n\\\\ &\\times& \\left [Be^{a\\varphi}(3+b\\varphi)+e^{b\\varphi}(A(3+a\\varphi)+3e^{a\\varphi}W_0)\\right].\n\\label{pot}\\end{eqnarray} \nIt can be shown that $\\tau=0$ is a minimum in the axion direction, provided we take $A,a$, $B, b$ and $W_0$ to be all real with the signs of $A$ and $B$ opposite, as well as $a>b$ and $A>|B|$.\n\n\nUp to this point, the KL model does not differ from other racetrack models. However, as was found in \\cite{Kallosh:2004yh}, the scalar potential in this model for some values of its parameters has {\\it two} supersymmetric AdS minima which solve $D_T W=0$. 
The one at smaller volume is given (similarly to the heterotic racetrack models) by\n\\begin{equation}\n \\varphi_{cr}= \\frac{1}{a-b}\\ln \\left |\\frac{a\\,A}{b\\,B}\\right| \\ ,\n\\label{ReTcr} \\end{equation}\nwhile the one at larger volume is the deeper AdS minimum of the two, with its position determined similarly to the KKLT case,\n\\begin{equation}\n\\varphi_{_{\\rm KKLT-like}}\\sim-\\frac{1}{a}\\ln(W_0\/A)\\;.\\label{ReTAdS} \n\\end{equation}\nIn the special case where one arranges for a particular relation\nbetween the parameters of the superpotential,\n\\begin{equation} -W_0= A \\left |\\frac{a\\,A}{\nb\\,B}\\right|^\\frac{a}{b-a} +B \\left |\\frac{a\\,A}{b\\,B}\\right| ^\\frac{b}{b-a}\\ , \\end{equation} \nthe smaller volume `racetrack-like' minimum actually becomes a SUSY Minkowski one with vanishing gravitino mass~\\cite{Kallosh:2004yh}.\n\n\\begin{figure}\n\\centering{\\hskip 1cm ~ \\includegraphics[height=5cm]{1dpot.eps}}\n \\caption{The uplifted potential for $W_0= 4\\cdot 10^{-8},\\ A=1,\\ B=-0.62704017319,\\ a=2\\pi\/58,\\\nb=2\\pi\/60,\\ C=3.01\\cdot 10^{-18}$. The potential is shown in units of $10^{{-23}}$ of the Planck density. A late-time dS minimum at $V\\approx 0$ stabilizes the volume at $\\varphi_{\\rm dS}\\approx 157.1$. A tunable dS saddle or inflection point, which is responsible for inflation, is at $\\varphi_{cr}\\approx 144.5$. There is a barrier at $\\varphi \\sim 200$ protecting the late-time dS minimum. The height of the barrier exceeds the height of the inflationary saddle\/inflection point, implying an absence of the cosmological overshoot problem~\\cite{Brustein:1992nk,kaloper}.} \\label{3}\n\\end{figure}\n\n\\begin{figure}\n\\centering{\\includegraphics[height=6.5cm]{2dpot.eps}}\n \\caption{The potential as a function of the complex field $T = \\varphi +i\\tau$, in units of $10^{{-23}}$ of the Planck density. The inflationary saddle\/inflection point and the late-time dS minimum both\noccur at $\\tau = {\\rm Im}~T =0$, as shown in the analytic\ninvestigation.} \\label{4}\n\\end{figure}\n\n\n\nOne can then get a late-time de Sitter vacuum by adding, e.g., $\\overline{\\rm D3}$-branes, which due to warping provide for a hierarchically small supersymmetry breaking and uplifting term $C\/\\varphi^2$ in the scalar potential.\\footnote{Alternatives include D-terms from a 4-cycle wrapping ${\\rm D7}$-brane threaded by $F_2$-flux~\\cite{Dterms}, F-terms of hidden sector matter fields~\\cite{nilles} (see also~\\cite{ReinoScrucca}), meta-stable vacua of condensing gauge theories in the free magnetic range along the lines of ISS~\\cite{ISSmodels}, and $\\alpha'$-corrections~\\cite{alphaprimelift}.} This lifting can then be used either to lift the smaller volume (and shallower) AdS\/Minkowski minimum, eq.~\\eqref{ReTcr}, to a tiny positive value of the cosmological constant, or to lift the deeper AdS minimum, eq.~\\eqref{ReTAdS}. In the latter possibility, the smaller volume minimum of eq.~\\eqref{ReTcr} becomes much more strongly lifted and acquires a large positive cosmological constant. Since in many cases it will still be separated from the lower dS minimum at larger volume by a barrier, this specific sub-setup realizes the situation depicted in the context of the toy model of the previous section in Fig.~\\ref{Ramond}.\n\nIt is now clear that by tuning $W_0$ and (through their dependence on the VEVs of the complex structure moduli) also $A$ and $B$ with fluxes, we can arrange for the higher-lying minimum at smaller volume to degenerate with the barrier saddle point. 
This guarantees the existence of a very flat saddle or inflection point, which can lead to an inflationary regime. For instance, the choice of parameters\n\\begin{eqnarray} W_0&=& 4\\cdot 10^{-8},\\ A=1,\\ B=-0.62704017319,\\nonumber \\\\ a&=&\\frac{2\\pi}{58},\\ b=\\frac{2\\pi}{60}\\;{\\rm and}\\;C=3.01\\cdot 10^{-18}\\label{input}\\end{eqnarray} \nfor the full uplifted scalar potential\n\\begin{equation}\nV= e^{K}\\left( G^{T \\overline T} D_{T}W \\overline {D_{T} W}\n- 3|W|^2 \\right)+\\frac{C}{\\varphi^2}\n\\end{equation}\nleads, as discussed below, to a long period of volume modulus inflation of more than 190 e-folds.\n\nThe similarity of this model to the supergravity model of Ref. \\cite{Holman:1984yj} is obvious when one compares Fig.~\\ref{Ramond} and Fig.~\\ref{3}, while Fig.~\\ref{4} demonstrates that $\\tau=0$ indeed denotes a local minimum in the axion direction for all values of $\\varphi$.\n\nWe shall now describe the inflationary dynamics for the parameters chosen in eq.~\\eqref{input} in more detail. For these parameters we find that the minimum given by eq.~\\eqref{ReTcr} has already merged completely with the barrier saddle point to form a so-called inflection point: Instead of the former saddle point, which provides for $\\epsilon=0$ by definition and then needs $\\eta$ to be tuned small, we now have a situation where $\\eta=0$ at the inflection point and we have to keep $\\epsilon$ small.\n\nThe slow-roll parameters at the inflection point now have to be given for the case of a non-canonically normalized scalar field, as the metric on scalar field space for $\\varphi$ is given by $g_{\\varphi\\varphi}=3\/(2\\varphi^2)$~\\cite{Blanco-Pillado:2004ns}. Since the inflaton trajectory is one-dimensional (as we are in the local minimum of the axion direction, there is no field evolution in this direction), we arrive for the chosen set of parameters at the values~\\cite{Blanco-Pillado:2004ns}\n\\begin{eqnarray} \\epsilon&=&\\frac{g^{\\varphi\\varphi}}{2}\\left(\\frac{\\partial_\\varphi V}{V}\\right)^2=\\frac{\\varphi^2}{3}\\left(\\frac{\\partial_\\varphi V}{V}\\right)^2=1.8\\cdot 10^{-18}\\nonumber\\\\ && \\\\ \\eta&=&g^{\\varphi\\varphi}\\frac{\\partial_\\varphi^2 V}{V}=\\frac{2\\varphi^2}{3}\\frac{\\partial_\\varphi^2 V}{V}\\equiv 0\\nonumber\\end{eqnarray}\nat the inflection point at $\\varphi_{cr}=144.4588$.
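\n\nThese statements are easy to check numerically. The following Python sketch (illustrative only; it transcribes eq.~\\eqref{pot} plus the uplift, and all variable names are ours) evaluates the uplifted potential for the parameters of eq.~\\eqref{input} and locates the flat region and the late-time minimum quoted above.\n\\begin{verbatim}\nimport numpy as np\n\nA, B = 1.0, -0.62704017319\na, b = 2 * np.pi \/ 58, 2 * np.pi \/ 60\nW0, C = 4e-8, 3.01e-18\n\ndef V(phi):\n    # F-term racetrack potential at tau = 0, plus anti-D3 uplift.\n    pref = np.exp(-2 * (a + b) * phi) \/ (6 * phi ** 2)\n    f1 = b * B * np.exp(a * phi) + a * A * np.exp(b * phi)\n    f2 = (B * np.exp(a * phi) * (3 + b * phi)\n          + np.exp(b * phi) * (A * (3 + a * phi)\n                               + 3 * np.exp(a * phi) * W0))\n    return pref * f1 * f2 + C \/ phi ** 2\n\nphi = np.linspace(120, 250, 200001)\nv = V(phi)\ndv = np.gradient(v, phi)\neps = (phi ** 2 \/ 3) * (dv \/ v) ** 2   # non-canonical epsilon\n\nflat = (phi > 130) & (phi < 150)\nprint(phi[flat][np.argmin(eps[flat])])  # inflection point, ~144.5\nlate = (phi > 150) & (phi < 200)\nprint(phi[late][np.argmin(v[late])])    # dS minimum, ~157.1\n\\end{verbatim}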
The corresponding inflationary regime can be seen directly by numerically solving the equations of motion for $\varphi$. Transforming to the number of e-folds $N$ as the time variable,
\begin{equation} \frac{\partial \varphi}{\partial t}=H\frac{\partial \varphi}{\partial N}\equiv H\varphi' \end{equation}
with
\begin{eqnarray} H^2&=&\frac{1}{3}\left(\frac{1}{2}g_{\varphi\varphi} \dot\varphi^2+V(\varphi)\right)\nonumber\\ &=&\frac{1}{3}\frac{V(\varphi)}{1-\frac{1}{6}g_{\varphi\varphi} \varphi'^2}\end{eqnarray}
we obtain the equation of motion~\cite{Blanco-Pillado:2004ns,noncanon}
\begin{eqnarray}
 \varphi''&=&-\left(1-\frac{1}{6}g_{\varphi\varphi} \varphi'^2\right)\left(3\varphi'+3g^{\varphi\varphi}\frac{V_{,\varphi}}{V}\right)\nonumber\\
 &-&\frac{\varphi'^2}{2}\frac{\partial\ln(g_{\varphi\varphi})}{\partial\varphi}\;.\label{eom}
\end{eqnarray}
Here we have allowed for a non-canonical kinetic term ${\cal L}_{\rm kin}=\frac{1}{2}g_{\varphi\varphi}\partial_\mu\varphi\partial^{\mu}\varphi$, as this is the generic case when dealing with moduli.

Plugging $g_{\varphi\varphi}$ into eq.~\eqref{eom}, the equation of motion for the volume modulus $\varphi$ takes the form~\cite{Blanco-Pillado:2004ns}
\begin{equation}
\varphi''=-\left(1-\frac{\varphi'^2}{4\varphi^2}\right)\left(3\varphi'+2\varphi^2\frac{V_{,\varphi}}{V}\right)+\frac{\varphi'^2}{\varphi}\;.\label{teom}
\end{equation}
Starting at the inflection point $\varphi(t =0)=\varphi_0$ with $\dot \varphi(0)=0$, this leads to about 193 e-folds of slow-roll inflation before the volume modulus rolls off into the late-time dS minimum at $\varphi_{\rm dS}\approx 157.1$. The CMB scales of COBE normalization at about $k_{\rm CMB}\simeq 0.002/{\rm Mpc}$ leave the horizon about 60 e-folds before the end of inflation, i.e.\ here at $N_{\rm CMB}\approx 133$. The magnitude of the primordial curvature perturbation $\Delta_{\cal R}^2$ generated at this point evaluates, for the parameters chosen, to
\begin{equation} \Delta_{\cal R}^2=\frac{1}{8\pi^2}\frac{H^4}{{\cal L}_{\rm kin}}=\frac{1}{4\pi^2}\frac{H^2}{g_{\varphi\varphi}\varphi'^2}\approx 2.9\cdot 10^{-9}\ ,\end{equation}
which agrees with the value measured by COBE and WMAP~\cite{WMAP}.
In Fig.~\ref{efolds} we show the last 20 e-folds of the evolution of $\varphi$. Note the sharp end of inflation at $N_{\rm tot}\approx 193.5$ and the subsequent onset of oscillations.

Next, the spectral index of the curvature perturbation power spectrum is given by
\begin{equation}
n_s=1+\left.\frac{d\ln \Delta_{\cal R}^2(k)}{d\ln
k}\,\right|_{k=RH}\label{specind2}\end{equation}
evaluated as usual at horizon crossing. Note that here we can replace $d\ln k\simeq dN$, because $k$ is evaluated at horizon crossing, $k=RH\sim H e^N$. Then we arrive at
\begin{equation} n_s=1+\left.\frac{d\ln\Delta_{\cal R}^2(k)}{dN}\right|_{k=RH}\ ,\end{equation}
which results in the curve shown in Fig.~\ref{specindex}.
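The trajectory described above is easy to reproduce. A minimal sketch (ours; the tolerances and integration range are illustrative) integrates eq.~\eqref{teom} in the e-fold time $N$, with $V_{,\varphi}$ coded analytically, since finite differences in double precision cannot resolve the tiny slope at the inflection point:
\begin{verbatim}
# Integrate eq. (teom) for the volume modulus (sketch).
import numpy as np
from scipy.integrate import solve_ivp

W0, A, B = 4e-8, 1.0, -0.62704017319
a, b, C  = 2*np.pi/58, 2*np.pi/60, 3.01e-18

def V_dV(p):                    # V(phi) and its exact derivative
    eA, eB = A*np.exp(-a*p), B*np.exp(-b*p)
    W, W1, W2 = W0 + eA + eB, -a*eA - b*eB, a*a*eA + b*b*eB
    V  = W1*W1/(6*p) - W1*W/(2*p*p) + C/(p*p)
    dV = (W1*W2/(3*p) - W1*W1/(6*p*p)
          - (W2*W + W1*W1)/(2*p*p) + W1*W/p**3 - 2*C/p**3)
    return V, dV

def rhs(N, y):                  # y = (phi, phi'); eq. (teom)
    p, v = y
    V, dV = V_dV(p)
    return [v, -(1 - v*v/(4*p*p))*(3*v + 2*p*p*dV/V) + v*v/p]

sol = solve_ivp(rhs, [0, 195], [144.4588, 0.0],
                rtol=1e-10, atol=1e-14, dense_output=True)
# phi(N) should stay on the slow-roll plateau for roughly 190
# e-folds before rolling off towards the dS minimum near 157.
\end{verbatim}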
\begin{figure}[ht!]
\vspace*{0.5cm}
\centering\leavevmode\epsfysize=5.3cm \epsfbox{specindex.eps} \caption{The spectral index of the density fluctuations as a function of the total number of e-folds $N$. The blue arrow marks the spectral index at the time when the COBE normalization scale left the horizon, i.e.\ at $N_e^{\rm CMB}=60$ e-folds before the end of inflation.} \label{specindex}
\end{figure}

At the COBE normalization scale the spectral index evaluates to
\begin{equation} n_s\approx 0.93\ .\end{equation}
At the $1\sigma$ level this result agrees with the combined 3-year WMAP + SDSS galaxy survey result $n_s=0.948^{+0.015}_{-0.018}$~\cite{WMAP} (the 3-year WMAP data alone give $n_s=0.951^{+0.015}_{-0.019}$).

We note further that there is only negligible running at CMB scales, as the third-order slow-roll parameter evaluates to $\xi^2\approx -7\cdot 10^{-4}$, which gives $dn_s/d\ln k=2\xi^2\approx-0.0014$, and an extremely small level of tensor fluctuations, as $\epsilon\approx 5 \cdot 10^{-17}$ at CMB scales.

\section{Basics of fine-tuning in generic models}\label{toyinf}

Now that we have two simple models of accidental inflation with an inflection point, namely the supergravity model of \cite{Holman:1984yj} and the string inflation model described in the previous section, as well as the models of brane inflation \cite{Baumann:2007np,delicate,Krause:2007jk,Panda:2007ie,Itzhaki:2007nk} and MSSM inflation \cite{Allahverdi:2006we}, we can analyze general features of models of this type.

Fine-tuning in the model of \cite{Holman:1984yj} can be described in the following general way. If one takes $\Phi_{0}> 1$ in this model, one obtains a potential with a metastable dS minimum at $\varphi < 0$ very close to the local maximum at $\varphi > 0$; see Fig.~1. If the difference $\Phi_{0}- 1$ is sufficiently small, then in the first approximation the term $V_{0}$ and the coefficient in front of $\varphi^{3}$ in (\ref{cubic}) will not change, but two other terms with very small coefficients $\lambda_{1},\lambda_{2} \ll 1$ will appear:
\begin{equation}\label{cubic2}
V(\varphi) =V_{0}\, \bigl(1 + \lambda_{1}\varphi + \frac{\lambda_{2}}{ 2} \varphi^{2}-\sqrt 2\, \varphi^{3} + O(1)\,\varphi^{4}\bigr) \ .
\end{equation}
In the limit $\Phi_{0} \to 1$ the metastable minimum moves towards the local maximum, and they form an inflection point. This corresponds to the limit $\lambda_{1},\lambda_{2} \to 0$. The slow-roll parameters at $\varphi = 0$ are directly related to $\lambda_{1}$ and $\lambda_{2}$:
\begin{eqnarray}
\epsilon &=& \frac{1}{2}\left(\frac{V'}{V}\right)^{2} = \frac{1}{2} \lambda_{1}^{2} \ ,\nonumber\\
\eta &=& \frac{V''}{V} = \lambda_{2} \ .
\end{eqnarray}

Thus, in order to achieve inflation one should first find a potential with a metastable dS minimum close to a local maximum of $V(\varphi)$, and then change the parameters in such a way as to bring these two extrema very close together, which automatically makes the slow-roll parameters small. The required fine-tuning can be expressed in terms of the fine-tuning of the parameters $\lambda_{1}$ and $\lambda_{2}$.
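These relations between the slow-roll parameters and $\lambda_{1,2}$ can be verified in a few lines of computer algebra; a small sketch (ours), evaluating $\epsilon$ and $\eta$ for the potential \eqref{cubic2} at $\varphi=0$:
\begin{verbatim}
# Check eps(0) = lambda1^2/2 and eta(0) = lambda2 for eq. (cubic2).
import sympy as sp

phi, l1, l2 = sp.symbols('phi lambda_1 lambda_2')
V = 1 + l1*phi + l2/2*phi**2 - sp.sqrt(2)*phi**3    # V / V_0
eps = sp.Rational(1, 2)*(sp.diff(V, phi)/V)**2
eta = sp.diff(V, phi, 2)/V
print(eps.subs(phi, 0), eta.subs(phi, 0))  # lambda_1**2/2, lambda_2
\end{verbatim}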
One can look at this situation from a slightly different but closely related perspective. In accidental inflation, by definition, inflation occurs by chance, in a small vicinity of some point which can be called $\varphi =0$ without any loss of generality. Consider the potential in a vicinity of this point and expand it in powers of $\varphi$:
\begin{equation}\label{cubic1}
V(\varphi) =V_{0}\, \Bigl(1 + \lambda_{1}\varphi + \frac{\lambda_{2}}{ 2} \varphi^{2} + \frac{\lambda_{3}}{3}\, \varphi^{3} + \frac{\lambda_{4}}{4}\,\varphi^{4}+...\Bigr)\ .
\end{equation}
If the potential has flat directions, as in the simplest versions of chaotic or natural inflation, then one may consider theories where only the first few terms are present \cite{Linde:2007fr}. However, in most versions of string inflation all parameters $\lambda_{n}$ can be large, so making some of them small requires fine-tuning. Slow-roll inflation is possible for $|\varphi | \ll 1$ if $|\lambda_{1}| \ll 1$ and $|\lambda_{2}|\ll 1$. We will assume that $\lambda_{3} \gtrsim 1$, which is indeed the case in the two models considered so far. Unless one fine-tunes $\lambda_{4} \gg \lambda_{3}$, one can ignore the quartic term while describing inflation at $\varphi \ll 1$. This means that in accidental inflation, which occurs only at $\varphi \ll 1$, it is generically sufficient to keep only the first three terms in the expansion of the potential.

Moreover, one can show that by a shift $\varphi \to \varphi + \Delta$ in a theory with $\lambda_{1},\lambda_{2} \ll 1$, $\lambda_{3} \gtrsim 1$, one can always represent the potential (\ref{cubic1}) in a form where either $\lambda_{1} = 0$ or $\lambda_{2} = 0$. For example, in the model of Ref.~\cite{Holman:1984yj} one has the three possibilities illustrated by Fig.~1. For $\Phi_{0} = 1$ one has an inflection point at $\varphi = 0$ where $\lambda_{1} = \lambda_{2} = 0$. For $\Phi_{0} > 1$ inflation begins near a local maximum of the potential at $\varphi > 0$; if one makes a change of variables $\varphi \to \varphi + \Delta$, where $\Delta$ corresponds to the position of this maximum, the potential acquires the form
\begin{equation}\label{23}
V(\varphi) =V_{0}\, \Bigl(1 + \frac{\lambda_{2}}{ 2} \varphi^{2} + \frac{\lambda_{3}}{3}\, \varphi^{3} + ...\Bigr)
\end{equation}
with $\lambda_{2} < 0$. Meanwhile, for $\Phi_{0} < 1$ one can always make a change of variables bringing the potential to the form
\begin{equation}\label{13}
V(\varphi) =V_{0}\, \Bigl(1 + {\lambda_{1} } \varphi + \frac{\lambda_{3}}{3}\, \varphi^{3} + ...\Bigr) \ .
\end{equation}

We ignored here the quartic term because accidental inflation, by definition, occurs only for $|\varphi| \ll 1$. Therefore, unless one makes an additional fine-tuning $\lambda_{3} \ll \lambda_{4}$, one can ignore the term $\frac{\lambda_{4}}{4}\,\varphi^{4}$ in the description of the inflationary dynamics in these models.\footnote{We should emphasize that some of our conclusions are based on the assumption that in the class of models we study now inflation occurs only for $|\varphi| \ll 1$, which is the trademark of accidental inflation. In other words, we study generic features of the worst case scenario. It should remain our goal to search for models with large flat directions.}

An important exception is represented by models where the potential has a maximum with the $Z_{2}$ symmetry $\varphi \to -\varphi$.
In this case
\begin{equation}\label{24}
V(\varphi) =V_{0}\, \Bigl(1 + \frac{\lambda_{2}}{ 2} \varphi^{2} + \frac{\lambda_{4}}{4}\,\varphi^{4}+...\Bigr) \ ,
\end{equation}
and the required fine-tuning is $|\lambda_{2}|\ll 1$. In this case inflation is most efficient in the limit $\lambda_{2} \to 0$, which corresponds to the purely quartic potential with $\lambda_{4} < 0$. This is what happens in the original versions of the new inflation scenario \cite{New}. This regime was also found in the racetrack inflation models \cite{Blanco-Pillado:2004ns}, where inflation occurs during the rolling in the axion direction from the saddle point of the potential.

Note that it is insufficient to simply take $\lambda_{1}\ll 1$ or $\lambda_{2} \ll 1$, respectively, in our models. The required fine-tuning must be somewhat stronger, since we want to achieve more than 50 or 60 e-folds of inflation.

As an example, one may consider the potential \eqref{13} describing an inflection point and check how small the constant $\lambda_{1}$ should be. An investigation of this situation was performed in~\cite{delicate}, and the result for the total number of e-folds of inflation is
\begin{equation}
N_{\rm tot}=\int_{0}^\infty \frac{ V d\varphi}{V'} = \int_{0}^\infty \frac{d\varphi}{\sqrt{2\epsilon}}=\frac{\pi}{2}\frac{1}{\sqrt{\lambda_1\lambda_3}} \ .\label{NtotBaum}
\end{equation}
In order to avoid significant running of the spectrum, one must have $N_{\rm tot} \gg N_e^{\rm CMB}\simeq 60$. This leads to the condition
\begin{equation}
\lambda_{1} \ll 10^{-3} \lambda_{3}^{-1} \lesssim 10^{-3} \ ,\label{tunerunning}
\end{equation}
whereas we already assumed that $\lambda_{3} \gtrsim 1$ (this assumption is justified below). Thus we must have fine-tuning at least of the order $10^{-3}$. We must fine-tune $\lambda_{2}$ in a similar way in the models (\ref{23}), (\ref{24}).

While this fine-tuning is certainly not nice, in the next section we will explain possible reasons which may make it look quite natural.

The values of the slow-roll parameters, the duration of inflation, and the spectral index $n_{s}$ depend only on the $\lambda_{n}$; they do not depend on the overall factor $V_{0}$. Meanwhile, the square of the amplitude of metric perturbations $\Delta_{\cal R}^2$ scales as $V_{0}$. This allows one to achieve an inflationary regime with the desired properties by fine-tuning $\lambda_{1}$ or $\lambda_2$, and then to dial in the desired amplitude of metric perturbations by changing $V_{0}$, without changing the $\lambda_{n}$ any further \cite{Blanco-Pillado:2004ns}.
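To make the required tuning concrete (our arithmetic): for $\lambda_{3}=1$, eq.~\eqref{NtotBaum} gives $N_{\rm tot}=\frac{\pi}{2}\,\lambda_1^{-1/2}\approx 50$ for $\lambda_{1}=10^{-3}$, which is marginal, while $\lambda_{1}=10^{-4}$ already yields $N_{\rm tot}\approx 157\gg N_e^{\rm CMB}$.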
\section{Classical rolling versus quantum diffusion}\label{diffusion}

The structure of the models discussed before, in which inflation is driven by a single field direction starting from either a flat saddle or a flat inflection point of the potential, lends itself to an analytical treatment of the inflationary dynamics and of the properties of the power spectrum of density fluctuations generated during slow roll.

The starting point is the observation that, since the whole inflationary dynamics is governed by very small field displacements $\Delta \varphi\ll 1$ away from the saddle/inflection point at $\varphi_{\rm cr}$, we can canonically normalize the field $\varphi$ in the vicinity of the unstable point and describe its motion by expanding the potential around the unstable point and `renormalizing' every derivative with respect to $\varphi$ as $\partial_\varphi \to \sqrt{g^{\varphi\varphi}(\varphi_{\rm cr})}\, \partial_\varphi$. This leads to the slow-roll parameters
\begin{eqnarray} \epsilon&=&\frac{\varphi_{\rm cr}^2}{3}\left(\frac{\partial_\varphi V}{V}\right)^2\nonumber\\ && \\ \eta&=&\frac{2\varphi_{\rm cr}^2}{3}\frac{\partial_\varphi^2 V}{V}\quad.\nonumber\end{eqnarray}
Next, we write the expansion eq.~\eqref{cubic1} of the potential around the unstable point as
\begin{equation}\label{Vexp}
V(\varphi) = V_0\left(1+\sum_{p\geq 1}\frac{\lambda_p}{p}\,\varphi^p\right) \ .
 \end{equation}
Here $\lambda_p=(g^{\varphi\varphi})^{p/2}\, {V^{(p)}_0}/{V_0}$ with $V_0^{(p)}=\partial_\varphi^p V(\varphi_{\rm cr})$, and we have shifted $\varphi\to \varphi+\varphi_{\rm cr}$, so that $\varphi=0$ now denotes the critical point and $\varphi$ denotes what was $\Delta\varphi$ before.

Depending on whether or not the potential is monotonic around the unstable point, we get either an expansion around an inflection point, where $\lambda_2\sim\partial_\varphi^2 V(\varphi_{\rm cr})=0$, or around a saddle point, where $\lambda_1\sim\partial_\varphi V(\varphi_{\rm cr})=0$. Successful inflation then requires us to tune small either $\lambda_1$ (inflection point) or $\lambda_2$ (saddle point).
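The non-canonical rescaling matters quantitatively. For the model of Section~\ref{KLinf} (our arithmetic), $\varphi_{\rm cr}\approx 144.5$ gives $g^{\varphi\varphi}(\varphi_{\rm cr})=2\varphi_{\rm cr}^2/3\approx 1.4\cdot 10^{4}$, so the coefficient $\lambda_p$ is enhanced relative to $V_0^{(p)}/V_0$ by $(1.4\cdot 10^{4})^{p/2}$, i.e.\ by about $1.6\cdot 10^{6}$ for $p=3$; this is how the large value $\lambda_3\approx 3.7\cdot 10^{4}$ quoted below arises from an extremely flat potential.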
Following the discussion of the last Section, we can thus parametrize the leading approximation to the potential in the two cases as
\begin{equation}
V=\begin{cases}V_0\left(1-\lambda_1\varphi-\frac{\lambda_3}{3}\varphi^3\right) & \text{for an inflection point}\\
V_0\big(1+\frac{\lambda_2}{2}\varphi^2\pm\frac{\lambda_p}{p}\varphi^p\big) & \text{for a saddle}
\end{cases}\end{equation}
where we have
\begin{equation}
\lambda_2=\eta_0\equiv\eta(0)=-m^2/V_0<0\ .
\end{equation}

We subdivide the second case into saddle points which are subject to a symmetry $Z_2:\,\varphi\to-\varphi$ (then $p=4+2n$, $n=0,1,2,\dots$, i.e.\ $p$ starts at 4) and saddle points which are not (then $p$ starts at 3).

Within the validity of the slow-roll regime we can then compute the number of e-folds $N_e(\varphi)$, counting backwards from the value $N_e(\varphi_{\rm end})\equiv 0$ at the end of inflation at $\varphi_{\rm end}$:
\begin{equation}
N_e(\varphi)=\int_{\varphi_{\rm end}}^\varphi \frac{d\varphi}{\sqrt{2\epsilon}}\;.
\end{equation}
This integral can be performed in closed form in both cases. From the expression for $N_e(\varphi)$ we can then express the slow-roll parameters $\epsilon$ and $\eta$ as functions of $N_e$.

Since we need to fine-tune $\lambda_{1}, \lambda_{2}$ to be extremely small, we will start our investigation with the study of the simple model
\begin{equation}
V=V_0\left(1-\frac{\lambda_p}{p}\varphi^p\right) .\label{Vsimp}
\end{equation}
In this simplified situation we have
\begin{equation}
N_e(\varphi_N)=\frac{1}{p-2}\cdot\frac{1}{\lambda_p\varphi_N^{p-2}}\ , \label{NePbic}
\end{equation}
where we have taken $\varphi_{\rm end}\to\infty$, as this does not change the result significantly. From this expression we find the slow-roll parameters
\begin{eqnarray}\label{etansPbic}
\epsilon&=&\frac{1}{2(p-2)^{2\frac{p-1}{p-2}}}\lambda_p^{-\frac{2}{p-2}} N_e^{-2\frac{p-1}{p-2}}\nonumber\\ && \\ \eta&=&-\,\frac{p-1}{p-2}\cdot\frac{1}{N_e}\quad.\nonumber
\end{eqnarray}
We can then compute the density fluctuations at $N_e$ e-folds before the end of inflation:
\begin{eqnarray}\label{DensflucAnalyt}
\left.\Delta_{\cal R}^2\right|_{N_e}&=&\frac{1}{4\pi^2}\left(\frac{H^2}{\dot\varphi}\right)_{N_e}^2=\frac{1}{24\pi^2}\left.\frac{V}{\epsilon}\right|_{N_e}\ , \nonumber\\ &&
 \\
&=&\frac{V_0(p-2)^{2\frac{p-1}{p-2}}}{12\pi^2}\lambda_p^{\frac{2}{p-2}} N_e^{2\frac{p-1}{p-2}}\ .\nonumber\end{eqnarray}

As the quantity $\left.\Delta_{\cal R}^2\right|_{N_e}$ is constrained by the COBE and WMAP results at the scale of $N_e=N_e^{\rm CMB} \sim 60$, this equation leads to a relation between the scale of the potential $V_0$ and its small-field power-law behavior controlled by $\lambda_p$.

We shall illustrate this constraint for the case $p=3$ of the explicit string model of Section~\ref{KLinf}, where
\begin{equation}
p=3\ ,\quad \lambda_3\approx 3.7\cdot 10^{4}\ ,\quad V_0\approx 2.7\cdot 10^{-23} \ , \label{ExampleVal}
\end{equation}
which then implies, from eq.~\eqref{DensflucAnalyt} above, that
\begin{equation}
{V_0} \approx 3\cdot 10^{-14}\, \lambda_3^{-2}
\end{equation}
in units of the Planck density. (Throughout this paper we use the Planck mass $M_{\rm P}=2.4\cdot 10^{18}\,{\rm GeV}$ and work in units $M_{\rm P}= 1$.)
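As a consistency check (our arithmetic): inserting $\lambda_3\approx 3.7\cdot 10^{4}$ into this relation gives $V_0\approx 3\cdot 10^{-14}/(3.7\cdot 10^{4})^{2}\approx 2\cdot 10^{-23}$, in good agreement with the value quoted in eq.~\eqref{ExampleVal}.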
Upon inspection of eq.~\eqref{NePbic} we might conclude that the total number of e-folds becomes infinite if we were to start inflation at $\varphi\to0$. However, our classical equations ignored quantum diffusion of the field $\varphi$, which is important near the point $\varphi = 0$.

Indeed, the field does not move classically from this point, because $V' = 0$ there. However, the long-wavelength perturbations grow as
\begin{equation}
\langle \varphi^{2} \rangle = \frac{H^{3}}{4\pi^{2}}t \ .
\end{equation}
These perturbations are stretched by inflation, and therefore for all practical purposes they look like a nearly homogeneous classical field with a typical magnitude
\begin{equation}
\varphi = \sqrt{\langle \varphi^{2} \rangle} = \frac{H}{2\pi }\sqrt {Ht} \
\end{equation}
and speed
\begin{equation}
\dot \varphi = \frac{H}{4\pi }\sqrt \frac{H}{ t} = \frac{H^{3}}{8\pi^{2} \varphi} \ .
\end{equation}
The speed of the classical motion, $\dot\varphi = V'/3H$, becomes greater than the speed of diffusion for
\begin{equation}
|\varphi\, V'| = V_{0 }\lambda_{p} \varphi^{p} > \frac{V_{0}^{2}}{24\pi^{2}} \ .
\end{equation}
Thus the field grows due to quantum diffusion until some moment $\bar t$ when its average value has grown to
\begin{equation}\label{phibar}
\bar\varphi \sim \left(\frac{V_{0}}{24\pi^{2}\lambda_{p}}\right)^{1/p} \ .
\end{equation}
During this part of the process the size of the universe grows by a factor of $e^{\bar N} \sim e^{H\bar t}$, where
\begin{equation}
\bar N \sim \lambda_{p}^{-2/p}\, V_{0}^{2/p - 1}\frac{(24\pi^{2})^{1-2/p}}{2} \ .
\end{equation}
After that, the universe continues growing in the classical regime. According to eq.\ (\ref{NePbic}), the number of e-folds in this regime is given by
\begin{equation}\label{expen}
N_e(\bar\varphi)\sim \lambda_{p}^{-2/p}\, V_{0}^{2/p - 1}\frac{(24\pi^{2})^{1-2/p}}{p-2}=\frac{2\bar N}{p-2} \ .
\end{equation}
This gives
\begin{equation}\label{expen2}
N_{\rm tot} = \bar N + N_{e}\sim \frac{1}{2}\lambda_{p}^{-2/p}\, V_{0}^{2/p - 1} (24\pi^{2})^{1-2/p}\ \frac{p}{p-2} \ .
\end{equation}

Let us now consider a realistic situation with $p = 3$, $\lambda_{3} = 1$ and $V_{0} \sim 3\times 10^{-14}$. In this case the total growth of the volume of the universe can be estimated by
\begin{equation}
e^{3N_{\rm tot}} \sim e^{10^{6}} \ .
\end{equation}
The analogous result for the quartic potential, $p = 4$, with $\lambda_{4} = O(1)$, is
\begin{equation}
e^{3N_{\rm tot}} \sim e^{3\cdot 10^{8}} \ .
\end{equation}

Notice that the expression for $N_{\rm tot}$ still diverges for $\lambda_p\to0$. Clearly, this divergence is unphysical, since for $\lambda_p\lesssim 1$ we would find that the potential reaches $V \approx 0$ only for $\varphi >1$ in Planck units. This would lead to chaotic inflation at large $\varphi$, which should be treated separately.

In this paper we consider only situations where $\varphi\ll 1$, $\lambda_p\gtrsim 1$. In the explicit realization of Section~\ref{KLinf} this is always fulfilled by a large margin, $\lambda_p\gg 1$, once one expands the potential around $\varphi_{\rm cr}$ according to eq.~\eqref{Vexp}. Furthermore, demanding $\lambda_p\gtrsim 1$ implies that $\bar\varphi\sim V_0^{1/p}\ll 1$ as long as $V_0\ll 1$, which justifies the earlier simplification that the inflationary process takes place very close to the unstable point.
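The estimates above are straightforward to evaluate; a short numerical check (ours) of eq.~\eqref{expen2}:
\begin{verbatim}
# Total number of e-folds from eq. (expen2).
import numpy as np

def N_tot(p, lam_p, V0):
    return (0.5*lam_p**(-2.0/p)*V0**(2.0/p - 1.0)
            *(24*np.pi**2)**(1.0 - 2.0/p)*p/(p - 2.0))

print(N_tot(3, 1.0, 3e-14))   # ~3e5, so e^{3 N_tot} ~ e^{10^6}
print(N_tot(4, 1.0, 3e-14))   # ~9e7, so e^{3 N_tot} ~ e^{3x10^8}
\end{verbatim}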
The constraint $\lambda_p\gtrsim 1$, plugged into eq.~\eqref{etansPbic}, implies an upper bound on $\epsilon$, as now we have
\begin{equation}
\epsilon \lesssim N_e^{-2\frac{p-1}{p-2}}\leq \frac{1}{N_e^4}\quad\text{for }p\geq 3\quad.
\end{equation}
Further, from eq.~\eqref{etansPbic} we have a spectral index~\cite{LythRiotto,hilltop}
\begin{equation}
n_s=1+2\eta=1-2\,\frac{p-1}{p-2}\cdot\frac{1}{N_{e}} \ .\label{nsAnalyt}
\end{equation}

Observationally we probe $N_e=N_e^{\rm CMB}\simeq 60$. Thus we infer an immeasurably low fraction $r=12.4\,\epsilon_{60}\lesssim 10^{-6}$ of power in gravitational waves, and a spectral index which changes from $0.93$ for $p = 3$ to $0.97$ for $p \gg 3$.

We may now test our methods by applying them to the explicit string theory model of Section \ref{KLinf}. Instead of calculating the amplitude of perturbations directly, we represent the potential as a sum of terms $\lambda_{n}\varphi^{n}$ up to $n = 3$ using eq.~(\ref{Vexp}), and then use the methods developed in this section. Using the values of eq.~\eqref{ExampleVal} gives, via eqs.~\eqref{DensflucAnalyt} and \eqref{nsAnalyt},
\begin{equation}
\Delta_{\cal R}^2\approx 4\cdot 10^{-9}\ ,
\quad n_s\approx 0.93\ ,
\end{equation}
both evaluated at the time of horizon crossing of the CMB normalization scale modes at $N_e\simeq 60$. The agreement with the numerical results of Section \ref{KLinf} is good enough to justify the use of the simplified potential, in which we neglected the small linear term and the higher powers $p\geq 4$.

For completeness, we note that the analogous treatment of the important exception represented by a $Z_2$-symmetric saddle point (a quartic potential with a very small quadratic piece) yields
\begin{equation}
n_s^{(p=4)}=1-\frac{3}{N_e^{\rm CMB}}\approx 0.95\ .
\end{equation}

Finally, we give the results for $N_{\rm tot}$ and $n_s$ at finite values of $\lambda_{1,2}$. We consider the case of a $p=3$ inflection point with finite $\lambda_1$ and a general $p\geq 3$ saddle point with finite $\lambda_2$. In this situation, $N_{\rm tot}$ and $n_s$ become functions of $\lambda_1$ or $\lambda_2$, respectively, for fixed $\lambda_p$, $p\geq 3$. We get again
\begin{equation}
N_{\rm tot}\simeq N_e(\bar\varphi)\simeq \frac{\pi}{2}\frac{1}{\sqrt{\lambda_1\lambda_3}}\quad\text{for }\lambda_1\gtrsim\lambda_3\bar\varphi^2 \sim 10^{-10}
\end{equation}
for the $p=3$ inflection point and
\begin{equation}\label{Ntotsaddlepoint}
N_{\rm tot}\simeq N_e(\bar\varphi)=\frac{\lambda_2^{-1}}{p-2}\ln(1-\lambda_2\lambda_p^{-1}\bar\varphi^{2-p})
\end{equation}
for the saddle point. Using these expressions we can compute $\eta(N_{\rm tot})$ by expressing $\lambda_{1,2}$ through the appropriate $N_{\rm tot}$, and thus obtain $n_s(N_{\rm tot})$.
The results are~\cite{delicate,Aterm}
\begin{eqnarray}
n_s&=&1+2\eta=1-\frac{2\pi}{N_{\rm tot}}\cot\left(\frac{\pi N_e}{2N_{\rm tot}}\right)\nonumber\\ &=&1-\frac{4}{N_e}\ ,\quad\text{for }N_{\rm tot}\gg N_e\label{nsinflec}
\end{eqnarray}
for the $p=3$ inflection point and
\begin{eqnarray}\label{nssaddle}
n_s&=&1+2\eta=1-2\lambda_2\left(1+\frac{p-1}{e^{(p-2)\lambda_2 N_e}-1}\right)\nonumber\\ &=&1-2\frac{p-1}{p-2}\cdot\frac{1}{N_e }\ ,\\&&\nonumber \\
&&\quad\text{for}\; \ N_{\rm tot}\gg N_e\quad\Leftrightarrow\quad |\lambda_2|=|\eta_0|\ll 1\;\nonumber
\end{eqnarray}
for a saddle point~\cite{Aterm}, where $\lambda_2=\lambda_2(N_{\rm tot})$ is obtained by inverting eq.~\eqref{Ntotsaddlepoint}.

We see that for $N_{\rm tot}\gg N_e$, which corresponds to either very small $\lambda_1$ or very small $\lambda_2$, we recover the results valid for the pure $\lambda_p\varphi^p$ potential. This behavior is displayed in Fig.~\ref{specindex_analytic} for the three cases of a $p=3$ inflection point (solid black), a $p=3$ saddle point (dash-dotted black), and a $p=4$ $Z_2$-symmetric saddle point (solid red).

\begin{figure}[ht!]
\hspace*{-0.2cm}\leavevmode\epsfysize=5.6cm \epsfbox{specindex_analytic.eps} \caption{The spectral index $n_s$ as a function of the total number of e-folds $N_{\rm tot}$ at the COBE scale, i.e.\ in eqs.~\eqref{nsinflec} and \eqref{nssaddle} at $N_e=N_e^{\rm CMB}=60$. Black: the solid line corresponds to the $p=3$ inflection point; the dash-dotted line represents the $p=3$ saddle point case; the dashed horizontal line is their common asymptotic value $n_s=0.933$. Red: the solid line gives $n_s(N_{\rm tot})$ for the case $p=4$ with a $Z_2$ symmetry, as realized e.g.\ in the model of `racetrack inflation'~\cite{Blanco-Pillado:2004ns}. From there the values of $V_0$ and $\lambda_4$ have been extracted using eq.~\eqref{Vexp}, analogously to the $p=3$ case treated here explicitly; this allows one to compute $n_s(N_{\rm tot})$ in the same way. The dashed horizontal line gives the $p=4$ asymptotic value $n_s=0.95$.} \label{specindex_analytic}
\end{figure}

\section{Spectral index $n_{s}$ for accidental inflation}\label{predict}

\subsection{Naturalness of fine tuning}

We can now re-introduce the linear or quadratic term, respectively, into the two cases of an inflection or a saddle point. Keeping these terms will enable us to derive the dependence of spectral properties like $n_s$ on $\lambda_{1}$ and $\lambda_{2}$.

Let us first see what happens if we add back to our cubic potential the term $\lambda_{1}\varphi$ with $\lambda_{1} < 0$. In this case the field $\varphi$ will roll down a bit faster, and the total number of e-folds will be smaller. The corresponding effect will be relatively insignificant if
\begin{equation}\label{tuning}
|\lambda_{1}| \ll |\lambda_{3}|\bar\varphi^{2} \sim (24\pi^{2})^{-2/3} \, |\lambda_{3}|^{1/3} V_{0}^{2/3} \ .
\end{equation}
For example, if one considers the semi-realistic case $\lambda_{3} = {\cal O}(1)$ and $V_{0} \sim 3\cdot 10^{-14}$, this constraint means that for
\begin{equation}\label{tuning2}
|\lambda_{1}| \ll 10^{-10}
\end{equation}
the average total growth of volume during inflation in a vicinity of any given point will remain as large as $e^{3 \cdot 10^{5}}$. On the other hand, if we consider the case which naively would seem much more natural, $|\lambda_{1}|\gg 10^{-10}$, then the average total growth of volume during inflation becomes exponentially smaller. As we already mentioned, the total degree of inflationary expansion will exceed the required 60 e-folds already for the much weaker fine-tuning $|\lambda_{1}| < 10^{-3}$, but this produces a growth of volume which is smaller than the volume produced for $|\lambda_{1}| \ll 10^{-10}$ by the enormous factor $e^{-3 \cdot 10^{5}}$.
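(Our arithmetic: for $\lambda_{3}=1$ and $V_{0}=3\cdot 10^{-14}$, the right-hand side of eq.~\eqref{tuning} evaluates to $(24\pi^{2})^{-2/3}\,(3\cdot 10^{-14})^{2/3}\approx 2.5\cdot 10^{-11}$, which is the origin of the round number $10^{-10}$ used in eq.~\eqref{tuning2}.)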
Now let us discuss the fine-tuning of initial conditions in accidental inflation. According to (\ref{phibar}), if initially the absolute value of the field $\varphi$ in our part of the universe was smaller than $\bar\varphi \sim \left(\frac{V_{0}}{24\pi^{2}\lambda_{3}}\right)^{1/3} \sim 10^{-5}$, then the volume of this part grows by an enormous factor $e^{10^{6}}$ (for $\lambda_{3} \sim 1$, $V_{0} \sim 3\times 10^{-14}$ and $\lambda_{1} \sim 10^{-10}$). However, if the field was initially at a greater distance from the inflection point, the gain in volume immediately becomes much smaller. This is an important fact to keep in mind while discussing the naturalness of initial conditions for inflation. An additional factor is the process of eternal inflation, which also occurs only for small $\varphi$; see the Appendix.

An interesting feature of our results is that the total growth of volume does not increase when we fine-tune $\lambda_{1}$ to an accuracy better than $10^{-10}$. In other words, our considerations do not predict $\lambda_{1} = 0$; they predict $\lambda_{1} \sim 10^{-10}$.

Similarly, if we add the quadratic term $\lambda_{2} \varphi^{2}/2$ to the model with $\lambda_{3} = O(1)$ and $V_{0} \sim 10^{-14}$, the total amount of inflation will be unaffected for $|\lambda_{2}| \ll 10^{-5}$, and it will decrease dramatically for $|\lambda_{2}| \gg 10^{-5}$.

Finally, if we consider the $Z_2$-symmetric class of models with a quartic potential with $\lambda_{4} = O(1)$, and add to it the quadratic term $\lambda_{2} \varphi^{2}/2$, then the average total growth of volume during inflation in a vicinity of any given point remains in the range of $e^{8 \cdot 10^{7}}$ for $|\lambda_{2}| \ll 10^{-10}$, and it rapidly drops for $|\lambda_{2}| \gg 10^{-10}$.

Let us now discuss the consequences of these results for the understanding of the naturalness or unnaturalness of the fine-tuning required for accidental inflation. Here we should remember that inflation in the string theory landscape is always eternal because of the existence of an incredibly large number of metastable vacua \cite{Lerche,Bousso,KKLT,Susskind:2003kw, Douglas:2003um}. The problem of assigning probabilities to various histories in the eternal inflation scenario is still a subject of intense debate. One of the popular ideas is that the probability of being born in a particular part of the landscape is proportional to the volume of the parts of the landscape of a given type; see e.g.\ \cite{LLM,Mediocr,Bellido2,Meas1,Linde:2006nw,LindeMeas,Hawking:2007vf} and references therein.
Many authors agree that the volume-weighted probability should be proportional to the growth of volume during slow-roll inflation \cite{Meas1,LindeMeas,Hawking:2007vf,LindeWinitz}.

If this is the case, then the results obtained above suggest that the fine-tuning of $\lambda_{1}$ and $\lambda_{2}$ required for accidental inflation is in fact quite natural; see \cite{Mediocr,Bellido2,Linde:2007fr} for a discussion of related ideas, and the Appendix of our paper, where we reach similar conclusions using slightly different assumptions.

\subsection{A prediction for $n_s$}

These arguments also imply that the most probable values of the spectral index in this class of models are the asymptotic values of eq.~\eqref{nsAnalyt} for the pure cubic (or quartic, in the case of a $Z_2$-symmetric potential) case discussed above. Using the volume-weighted measure, we should expect to find ourselves in a Hubble volume with a long period of slow-roll inflation in its past. As we noted above, maximizing the amount of slow-roll inflation implies having either $\lambda_1\to 0$ or $\lambda_2\to 0$, corresponding to a very large $N_{\rm tot}\gg N_e^{\rm CMB}$.

Looking again at Fig.~\ref{specindex_analytic}, it becomes clear that the asymptotic values of eq.~\eqref{nsAnalyt} for the pure cubic (or quartic, in the case of a $Z_2$-symmetric potential),
\begin{equation}
\begin{cases} n_s\approx 0.93\quad\text{no }Z_2\text{-symmetry of }V\\
n_s\approx 0.95\quad Z_2\text{-symmetry of }V\quad
\end{cases}
\end{equation}
become a tentative prediction for accidental inflation in the string theory landscape. Together with the negligible tensor fraction $r<10^{-6}$, this is the hallmark of accidental inflation in string theory. We call this prediction tentative because it is based on certain assumptions about the probability measure. Moreover, it may change if we are allowed to fine-tune even further, by fine-tuning $\lambda_{3}$ to zero (which would lead to a greater value of $n_{s}$), by changing $V_{0}$ and hence the amplitude of density perturbations, etc.; see the Appendix for a discussion of this issue.

\section{Conclusion}\label{concl}

We have shown that the generic presence of racetrack superpotentials together with fluxes leads to a realization of volume modulus inflation in type IIB string theory. The central observation is that the multi-exponential superpotential generically leads to more than one local minimum. This can be used to tune the higher-lying of two adjacent minima to (nearly) degenerate with the separating barrier, producing a very flat saddle or inflection point which then drives slow-roll inflation. Following these considerations, in Section \ref{KLinf} we constructed a model of volume modulus inflation which is one of the simplest models of string theory inflation available now.

Inflation in a vicinity of an inflection or saddle point requires fine-tuning of the model parameters and of the initial conditions.
However, this simple setup describes both the majority of known closed-string modular inflation models~\cite{Blanco-Pillado:2004ns,alphaprime} and the recently improved KKLMMT model~\cite{Baumann:2007np,delicate,Krause:2007jk,Panda:2007ie,Itzhaki:2007nk}.\footnote{The ${\rm D3}$-${\rm D7}$-brane inflationary scenario~\cite{D3D7} does not fall into this class, as it quite naturally leads to hybrid inflation.} Thus, the dynamics of accidental inflation described in this paper may represent one of the most generic classes of inflationary models, covering a large part of the landscape. The required fine-tuning may appear quite natural if one takes into account that it dramatically increases the total growth of volume during inflation, and may even make inflation eternal.

This fact leads to a statistical prediction for the spectral index: $n_s\approx 0.95$ if the potential is $Z_2$-symmetric, and $n_s\approx 0.93$ if it is not. Furthermore, for $Z_2$-symmetric potentials the value $n_s\approx 0.95$ is, by virtue of this symmetry, an upper limit (see Fig.~\ref{specindex_analytic}) as long as we tune only the first even power in such a potential, i.e.\ $\lambda_2$, to zero (see also~\cite{Blanco-Pillado:2004ns,racetrns}). These results, combined with the prediction of a negligibly small tensor-to-scalar ratio $r<10^{-6}$, constitute a testable (tentative) prediction for a large class of models of inflation in string theory.
\vskip 0.3cm

\noindent {\bf Acknowledgments}: It is a pleasure to thank R. Bousso, J. Conlon, R. Kallosh, F. Quevedo, V. Vanchurin, A.~Vilenkin and S.~Winitzki for useful discussions. This work was supported in part by NSF grant PHY-0244728 and by the Alexander-von-Humboldt Foundation.