diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzeamw" "b/data_all_eng_slimpj/shuffled/split2/finalzzeamw" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzeamw" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\t\\red{A} torus action on a smooth quasiprojective complex variety induces many cohomological structures. The Bia\u0142ynicki-Birula decomposition and the localization theorems are the \\red{most} widely known examples. In this paper we aim to compare two \\red{families of} $K$-theoretic characteristic classes induced by the Bia\u0142ynicki-Birula decomposition: the equivariant motivic Chern classes of Bia\u0142ynicki-Birula cells and the stable envelopes of cotangent \\red{bundles}. For simplicity\\red{,} we \n\t\\old{use shortcuts} \\red{write} BB-decomposition and BB-cells for Bia\u0142ynicki-Birula decomposition and Bia\u0142ynicki-Birula cells\\red{,} respectively.\n\t\n\t Stable envelopes are characteristic classes defined for symplectic varieties equipped with torus \\red{actions}. They are important objects in \\old{the} modern geometric representation theory (cf. \\cite{Op} for a survey).\n\t Stable envelopes occur in three versions: cohomological \\cite{OM}, $K$-theoretic \\cite{OS,O2} and elliptic \\cite{OA}. In this paper we focus on the $K$-theoretic ones. They depend on a choice of \\red{a} linearisable line bundle called slope. Their axioms define \\red{a} unique class\n\t for general enough slope, yet \\red{the} existence of \\red{elements}\n\t satisfying \\red{the} axioms is still unknown in many cases.\n\t \n\t The motivic Chern class is an offshoot of the program of generalising characteristic classes of tangent \\red{bundles} \n\t to \\red{the} singular case.\n\t It began with the construction of the Chern-Schwartz-MacPherson class in \\cite{CSM} and was widely developed\n\t (\\red{see} e.g.\t\\cite{Oh,BSY,CMOSSY}\n\t and \\cite{SYp} for a survey). 
The common point of many of \\old{defined} \\red{the} characteristic classes \\red{that have been defined is an} additivity \\red{property} with respect to \\red{the} decomposition of a variety \\red{as a union of} closed and open \\red{subvarieties}. For example\\red{,} the non-equivariant motivic Chern class $mC_\\red{y}$ (cf. \\cite{BSY}) assigns to every map of varieties $\\red{f:}X \\to M$ a polynomial over \\red{the} $K$-theory of coherent sheaves of $M$: an element \\hbox{$\\red{mC_y(X\\xto{f}M)}\\in G(M)[y]$.} Its additivity property states that:\n\t $$mC_\\red{y}(X\\xto{f} M)=mC_\\red{y}(Z\\xto{f_{|Z}} M)+mC_\\red{y}(X\\setminus Z\\xto{\\red{f_{|X\\setminus Z}}} M)\\,,$$\n\t for every closed subvariety $Z \\subset X$. Similar properties are satisfied by the Chern-Schwartz-MacPherson class and the Hirzebruch class.\n\t \n\t Lately\\red{,} equivariant versions of many such classes \\red{have been} defined (e.g. \\cite{Oh2, WeHir,FRW,AMSS}).\n\t For an algebraic torus $\\T \\simeq \\C^r$ the $\\T$-equivariant motivic Chern class (cf. \\cite{FRW,AMSS}) assigns to every $\\T$-equivariant map of varieties $f: X \\to M$ a polynomial over \\red{the} $K$-theory of $\\T$-equivariant coherent sheaves of $M$: an element $\\red{mC^\\T_y(X\\xto{f}M)}\\in G^\\T(M)[y]$. It is uniquely defined by \\red{the following} three properties (after \\cite{FRW}, section~2.3):\n\t \\begin{description}\n\t \t\\item[1. Additivity] If \\red{a $\\T$-variety $X$ decomposes as a union of closed and open invariant subvarieties} $X=Y\\sqcup U$, then $$mC_{\\red{y}}^\\T(X\\xto{\\red{f}} M)=mC_{\\red{y}}^\\T(Y\\xto{\\red{f_{|Y}}} M)+mC_{\\red{y}}^\\T(U\\xto{\\red{f_{|U}}} M)\\,.$$\n\t \t\n\t \t\\item[2. Functoriality] For \\red{an equivariant} proper map $f:M\\to M'$ we have $$mC_{\\red{y}}^\\T(X\\stackrel{f\\circ g}\\to M')=f_*mC_{\\red{y}}^\\T(X\\stackrel{g}\\to M)\\,.$$\n\t \t\n\t \t\\item[3. 
Normalization] For a smooth \\red{$\\T$-}variety $M$ we have $$mC_{\\red{y}}^\\T(id_M)=\\lambda_y(T^*M):=\\sum_{i=0}^{\\rank T^*M}[\\Lambda^iT^*M]y^i \\,.$$\n\t \t\n\t \\end{description}\n \tIn many cases\\red{,} one can directly compute this class using the Lefschetz-Riemann-Roch theorem (cf. \\cite{ChGi} theorem 5.11.7) and \\red{the} above properties. For examples of \\red{computations}\n \t see \\cite{Feh,FRW,Kon}. In the paper \\cite{FRW} (see also \\cite{FRWp}) it was found that for $G$-equivariant varieties with \\red{a} finite number\n \t of orbits\\red{,} the motivic Chern classes $mC_{\\red{y}}^G$ of $G$-orbits satisfy axioms similar to those of the stable envelopes.\n \t\n \tThere is one more family of characteristic classes, called the weight functions,\n \t\\red{closely} connected \\red{to the} characteristic classes mentioned above.\n \tTheir relations with other characteristic classes were widely studied e.g. \\cite{RTV',FRW,RW,KRW,RTV}.\n \t\n \tIn this paper we consider a smooth projective variety $M$ equipped with an action of a torus $A$. Suppose that the fixed point set $M^A$ is finite. Our main result states that under some geometric assumptions on $M$\n \tthe stable envelopes for \\red{a} small enough anti-ample slope and the motivic Chern classes coincide up to normalization. Namely:\n \t\\begin{atw*}\n \t\tLet $M$ be as above. Consider the cotangent variety $X=T^*M$ with the action of the torus $\\T=\\C^*\\times A$ (where the \\red{first factor} $\\C^*$ acts on fibers by scalar multiplication). Choose any weight chamber $\\mathfrak{C}$ of the torus $A$ and polarization $T^{1\/2}:=TM$. Suppose that the variety $M$ satisfies the local product condition\n \t\t(see definition \\ref{df:prd}). 
For any anti-ample $A$-linearisable line bundle $s$\\red{, any} sufficiently big integer $n$\\red{, and every fixed point $F \\in M^A$,} the \\red{element}\n \t\t$$\n \t\t\\frac{mC_{-y}^A(M^+_F \\to M)}{y^{\\dim M_F^+}} \\in K^A(M)[y,y^{-1}] \\simeq K^\\T(M) \\simeq K^\\T(T^*M)\n \t\t$$\n \t\t\\old{determine} \\red{is equal to} the $K$-theoretic stable envelope $y^{-\\frac{1}{2}\\dim M_F^+}Stab^{\\frac{s}{n}}_{\\mathfrak{C},T^{1\/2}}(\\red{1_F})$.\n \t\\end{atw*}\n \t \\red{A} similar comparison was done in \\cite{FR,RV,AMSS0} for the Chern-Schwartz-MacPherson class in the cohomological setting. The above theorem is a generalisation of the previous results of \\red{\\cite{AMSS,FRW}}\n \t where \\red{an} analogous equality\n \t is proved for the flag varieties $G\/B$. \\red{The approach of \\cite{AMSS}} is based on \\red{the} study of the Hecke algebra action on \\red{the} $K$-theory of \\red{the} flag variety, whereas our strategy is \\red{similar to that of \\cite{FRW}}. First\\red{,} we make \\red{a change} \\old{correction}\n \t in \\red{the} definition\n \t of the stable envelope such that it coincides with\\old{the} Okounkov's\\old{one} for general enough slope and is unique for all slopes (section \\ref{s:env} and appendix \\red{\\hyperref[s:Ok]{A}}). By \\red{a} direct check of \\red{the} axioms\\red{,} we prove that the equality from the theorem holds for the trivial slope (section \\ref{s:mC}). 
\\red{Then,} we check that the stable envelopes for \\red{the} trivial slope and \\red{a} small anti-ample slope coincide (section \\ref{s:slope}).\n \t \\red{Finally}, in \\old{the} appendix \\red{\\hyperref[s:G\/P]{B}} we check that \\red{the} homogeneous varieties $G\/P$ satisfy \\red{the} local product condition\n \t mentioned in the theorem.\n \t \n \t Our main technical tool is \\red{the} study of\n \tlimits of Laurent polynomials (of one or many variables).\n \tThe limit technique was investigated \\old{both} for motivic Chern classes \\cite{WeBB,FRW,Kon} as well as for stable envelopes \\cite{SZZ,O2}.\n \tWe use it mainly to prove various containments of Newton polytopes.\n \t\n \tHomogeneous varieties $G\/P$ are our main examples of varieties satisfying the local product condition. The study of characteristic classes and stable envelopes of such varieties is \\red{an} important theme\n \tpresent in recent research (e.g. \\cite{AM,SZZ2,RV,RSVZ}).\n \tA priori\\red{,} the stable envelope is defined for symplectic varieties which admit a proper map to an affine variety. This condition is satisfied by the cotangent bundles to flag varieties of any reductive group and all homogeneous varieties of $GL_n$.\n \tThere is \\red{a} weaker condition\n \t(see section \\ref{s:env} condition ($\\star$)) which is sufficient to define \\red{the} stable envelope and holds for \\red{the} cotangent bundle to any variety which \\red{satisfies}\n \tthe local product condition.\n \t\n \t\\subsection{Acknowledgements}\n \tI would like to thank \\old{to} Andrzej Weber for his guidance and support.\n \tI am grateful to Agnieszka Bojanowska for her valuable remarks.\n \tI thank \\red{the} anonymous referees for their helpful comments. \n \tThe author was supported by the research project of the Polish National Research Center 2016\/23\/G\/ST1\/04282 (Beethoven 2, German-Polish joint project).\n\n\t\n\n\\section{Tools}\n\tThis section gathers technical results useful in the further parts of \\red{this} paper. 
All considered varieties are assumed to be complex and quasiprojective.\n\n\\subsection{Equivariant K-theory}\n\t\\red{Our main reference for the equivariant $K$-theory is \\cite{ChGi}.}\n\tLet $X$ be a complex quasiprojective variety equipped with an action of a torus $\\T$. We consider \\red{the} equivariant $K$-theory of coherent sheaves $G^\\T(X)$ and \\red{the} equivariant $K$-theory of vector bundles $K^\\T(X)$. For a smooth variety these two notions coincide. We use the lambda operations $\\lambda_y:K^\\T(X) \\to K^\\T(X)[y]$ defined by:\n\t$$ \\lambda_y(E):=\\sum_{i=0}^{\\rank E}[\\Lambda^iE]y^i \\,. $$\n\tThe operation $\\lambda_{-1}:K^\\T(X) \\to K^\\T(X)$ applied to the dual \\red{bundle} is the $K$-theoretic Euler class. Namely\n\t$$eu(E)=\\lambda_{-1}(E^*)\\,. $$\n\t\\old{For an immersion of a smooth subvariety $Y\\subset X$ We denote by $eu(Y\\subset X)$ the Euler class of normal bundle. Namely}\n\t\\red{Let $Y\\subset X$ be an immersion of a smooth \\hbox{$\\T$-invariant} locally closed subvariety. Its normal bundle is denoted by\n\t$$\\nu(Y\\subset X)=\\coker (TY \\to TX_{|Y}) \\in K^\\T(Y) \\,.$$\n\tWe denote by $eu(Y\\subset X)$ the Euler class of the normal bundle. Namely}\n\t\t$$eu(Y\\subset X):=\\lambda_{-1}(\\nu^*(Y\\subset X)) \\in K^\\T(Y) \\,. $$\n\t\n\\begin{adf}\n\tConsider a \\red{$\\T$-variety} $X$.\n\tFor an element $a \\in G^\\T(X)$ and a closed \\red{invariant} subvariety $Y \\subset X$ we say that $\\supp(a) \\subset Y$ if and only if $a$ \\red{lies} in the image of \\red{the} pushforward map\n\t$$G^\\T(Y) \\xto{i_*} G^\\T(X).$$\n\tThe short exact sequence (cf. \\cite{ChGi} proposition 5.2.14)\n\t$$ G^\\T(Y) \\xto{i_*} G^\\T(X) \\to G^\\T(X \\setminus Y) \\to 0 $$\n\timplies that \\red{$\\supp(a) \\subset Y$} \\old{this} is equivalent to $a_{|X \\setminus Y}=0$.\n\\end{adf}\n\\begin{rem}\n\tNote that for an element $a \\in K^\\T(\\red{X})$\n\tthe support of $a$ \\red{is not a} well defined subset\n\tof $X$. 
We \\red{can} only define \\red{the} notion\n\t$\\supp(a) \\subset Y$ for a closed subvariety $Y\\subset X$. The fact that $\\supp(a) \\subset Y_1$ and $\\supp(a) \\subset Y_2$ \\red{does not} imply that $\\supp(a) \\subset Y_1 \\cap Y_2$. \n\\end{rem}\n\n\\begin{pro}\\label{cor:K}\n\tConsider a reducible $\\T$-variety $X=X_1\\cup X_2$. Denote the inclusions of the closed subvarieties $X_1$ and $X_2$ by $i$ and $j$\\red{,} respectively. Then the pushforward map\n\t$$i_*+j_*: G^\\T(X_1)\\oplus G^\\T(X_2) \\to G^\\T(X)$$\n\tis an epimorphism.\n\\end{pro}\n\\begin{proof}\n\tDenote by $U_1$ and $U_2$ the complements of the closed sets $X_1$ and $X_2$. Note that due to the exact sequence\n\t$$ G^\\T(X_1) \\xto{i_{*}} G^\\T(X) \\to G^\\T(U_1) \\to 0$$\n\tit is enough to prove that the composition\n\t$$\\alpha: G^\\T(X_2) \\xto{j_{*}} G^\\T(X) \\to G^\\T(U_1)$$\n\tis an epimorphism. Note that $U_1 \\cap X_2=U_1$ and by \\red{a} pushforward-pullback argument the map $\\alpha$ is equal to the restriction to \\red{the} open subset\n\t$$G^\\T(X_2)\\to G^\\T(U_1).$$\n\tSuch \\red{a} restriction\n\tis \\red{an} epimorphic map\n\tdue to the exact sequence of \\red{a} closed immersion.\n\\end{proof}\n\\red{Consider a $\\C^*$-variety $F$ for which the action is trivial. Every equivariant vector bundle $E\\in Vect^{\\C^*}(F)$ decomposes as a sum of $\\C^*$-eigenspaces\n$$E=\\bigoplus\\limits_{n\\in \\Z} E_n \\,.$$\nThe sum $E^+=\\oplus_{n>0} E_n$ is called the attracting (or positive) part of $E$ while the sum $E^-=\\oplus_{n<0} E_n$ is called the repelling (or negative) part.\nThe assignment of the positive (or negative) part induces a map on the equivariant $K$-theory. Namely}\n\\begin{pro}\n\t\\label{lem:attr}\n\tLet $\\sigma \\subset \\T$ be a one dimensional subtorus. Suppose that $F$ is a $\\T$-variety \\red{for which the action of $\\sigma$ is trivial}. 
Then taking \\red{the} attracting part with respect to the torus $\\sigma$ \\old{of a vector bundle} induces a well defined map\n\t$$K^\\T(F) \\to K^\\T(F)\\,.$$\n\t\\red{An} analogous result holds for \\red{the} repelling part. More generally\\red{,} taking \\red{the} direct summand corresponding to a chosen character of $\\sigma$ induces such a map.\n\\end{pro}\n\\begin{proof}\n\t$\\T$-equivariant maps of vector bundles preserve \\red{the} weight decomposition with respect to the torus $\\sigma$.\n\tThus\\red{, any} exact sequence of $\\T$-vector bundles splits into \\red{a} direct sum of sequences corresponding to characters of $\\sigma$.\n\tIt follows that taking the part corresponding to any subset of characters preserves exactness.\n\\end{proof}\n\\begin{rem}\n\t\\red{Denote by $R(\\sigma)$ the representation ring of the torus $\\sigma$.} Proposition \\ref{lem:attr} is a consequence of an isomorphism\n\t$$K^\\T(F) \\simeq K^{\\T\/\\sigma}(F)\\otimes R(\\sigma) .$$\n\\end{rem}\n\n\\subsection{BB-decomposition}\n The BB-decomposition was introduced in \\cite{B-B1} and further studied in \\cite{B-B3} (see also \\cite{CarBB} for a survey).\n We recall here its definition and fundamental properties. Consider a smooth $\\sigma=\\C^*$-variety $X$.\n\t\\begin{adf} \\label{def:BB}\n\t\tLet $F$ be a component of the fixed point set $X^\\sigma$. The positive BB-cell of $F$ is the subset\n\t\t$$X_F^+=\\{ x\\in X\\;|\\; \\lim_{t\\to 0} t\\cdot x\\in F\\} \\,.$$\n\t\tAnalogously\\red{,} the negative BB-cell of $F$ is the subset\n\t\t$$X_F^-=\\{ x\\in X\\;|\\; \\lim_{t\\to \\infty} t\\cdot x\\in F\\} \\,.$$\t\n\t\\end{adf}\nIt follows from \\cite{B-B1} that\n\t\\begin{atw} \\label{tw:BB}\n\t\t\\begin{enumerate}\n\t\t\t\\item The BB-cells are locally closed, smooth, algebraic subvarieties of~$X$. 
Moreover, we have the equality of vector bundles\n\t\t\t$$T(X_F^+)_{|F}=(TX_{|F})^+\\oplus TF \\,.$$\n\t\t\t\\item There exists an algebraic morphism\n\t\t\t$$\\lim_{t\\to 0}: X_F^+\\to F \\,.$$\n\t\t\t\\item Suppose that the variety $X$ is projective. Then there is a set decomposition (called \\red{the} BB-decomposition)\n\t\t\t$$X=\\bigsqcup_{F\\subset X^\\sigma} X_F^+ \\,.$$\n\t\t\t\\item Suppose that the variety $X$ is projective. Then the morphism $\\lim_{t\\to 0}: X_F^+\\to F$ is an affine bundle.\n\t\t\t\\item Suppose that a bigger torus $\\T\\supset\\sigma$ acts on $X$. Then the BB-cells (defined by the action of $\\sigma$) are $\\T$-equivariant subvarieties.\n\t\t\t\\item The BB-decomposition induces a partial order on the fixed point set $X^\\sigma$, defined by the transitive closure of the relation\n\t\t\t$$F_2\\in\\overline{X^+_{F_1}} \\Rightarrow F_1>F_2 \\,.$$\n\t\t\\end{enumerate}\n\t\\end{atw}\n\\begin{adf}[cf. \\cite{OM} paragraph 3.2.1]\n\tSuppose that \\red{$X$ is a smooth $\\T$-variety.}\n\tConsider the space of cocharacters\n\t$$\\mathfrak{t}:=\\Hom(\\C^*,\\T)\\otimes_\\Z\\R \\,.$$\n\tFor a fixed point component $F \\subset X^\\T$, denote \\red{by $v_1^F,...,v_{\\codim F}^F$} the torus weights appearing in the normal bundle.\n\t\\red{A} weight chamber\n\t is a connected component of the set\n\t$$\\mathfrak{t} \\setminus\\bigcup_{F\\subset \\red{X^\\T}, i \\le \\codim F}\\{v_i^F=0\\}.$$\n\\end{adf}\n\n\nSuppose that a torus $\\T$ acts on a smooth variety $X$. For \\red{a} one dimensional subtorus $\\sigma \\subset \\T$ and a weight chamber $\\mathfrak{C}$ we \\red{write}\n $\\sigma \\in \\mathfrak{C}$ when the cocharacter of $\\sigma$ belongs to the chamber $\\mathfrak{C}$. \n\n\\begin{pro} \\label{lem:st}\n\t\\red{Let $X$ be a smooth $\\T$-variety.} Consider \\red{a} one dimensional subtorus $\\sigma \\subset \\T$ such that $\\sigma \\in \\mathfrak{C}$ for some weight chamber $\\mathfrak{C}$. 
Then the fixed \\red{point} sets\n\t$X^\\T$ and $X^\\sigma$ are equal.\n\\end{pro}\n\n\\begin{pro}\n\t\\label{lem:chamb}\n\t\t\\red{Let $X$ be a smooth $\\T$-variety.} Choose a weight chamber $\\mathfrak{C}$.\n\t\tConsider one dimensional subtori \\hbox{$\\sigma_1,\\sigma_2 \\subset \\T$} such that $\\sigma_1,\\sigma_2 \\in \\mathfrak{C}$. Then the tori $\\sigma_1$ and $\\sigma_2$ induce the same decomposition of \\red{the} normal bundle to the fixed \\red{point} set\n\t\t\\red{into} the attracting and \\red{the}\n\trepelling\n\tpart. Moreover\\red{,} the BB-decompositions with respect to these tori are equal.\n\\end{pro}\n\\begin{proof}\n\tThe only nontrivial part is \\red{the} equality of the BB-decompositions. It is \\red{a} consequence\n\tof theorem 3.5 of \\cite{HuBB}.\n\tAlternatively\\red{,} thanks to the Sumihiro theorem \\cite{Su} it is enough to prove the lemma for $X$ equal to the projective space. In this case\\red{,} the proof is straightforward.\n\\end{proof}\n\n\\subsection{Symplectic varieties}\n\n\n\\begin{adf}[\\cite{OS} section 2.1.2]\n\t\\label{df:pol}\n\tConsider a smooth symplectic variety $(X,\\omega)$ \\red{equipped} with an action of a torus $\\T$. Assume that the symplectic form $\\omega$ is an eigenvector of $\\T$\\red{ and} let $h$ be its character. \\red{A} polarization\n\tis \n\tan element $T^{1\/2} \\in K^\\T(X)$\n\tsuch that\n\t$$T^{1\/2}\\oplus \\C_{-h}\\otimes(T^{1\/2})^*=TX \\in K^\\T(X) \\,.$$ \n\\end{adf} \n\n\\begin{rem}\n\tNote that according to the above definition\\red{, a} polarization\n\t is an element \\red{of the} $K$-theory, thus it may be a virtual vector bundle.\n\\end{rem}\n\n\\subsection{Newton polytopes}\n\n\tLet $R$ be \\red{a commutative ring with unit} and $\\Lambda$ a lattice of finite rank. Consider a polynomial $f \\in R[\\Lambda]$. The Newton polytope $N(f) \\subset \\Lambda\\otimes_\\Z \\R$ is \\red{the} convex hull of \\red{the} lattice points corresponding to the nonzero coefficients of the polynomial $f$. 
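\\red{For instance, for $\\Lambda=\\Z^2$ and $f=1+x_1x_2+2x_1^2 \\in \\Z[\\Lambda]$ the Newton polytope $N(f)$ is the triangle with vertices $(0,0)$, $(1,1)$ and $(2,0)$. Note that over a ring with zero divisors the polytope of a product may shrink: for $R=\\Z\/4\\Z$ we have $(1+2x)^2=1+4x+4x^2=1$, hence\n$$N\\left((1+2x)^2\\right)=\\{0\\}\\subsetneq N(1+2x)+N(1+2x)=[0,2] \\,.$$}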
We \\red{recall} elementary properties of Newton polytopes:\n\t\n\\begin{pro} \\label{lem:New}\n\tLet $R$ be \\red{a commutative ring with unit}.\n\t For any Laurent polynomials $f,g\\in R[\\Lambda]$\\red{ we have}:\n\t\\begin{enumerate}[(a)]\n\t\t\\item $N(fg) \\subseteq N(f)+N(g)$.\n\t\t\\item $N(fg) = N(f)+N(g)$ when the ring $R$ is a domain.\n\t\t\\item $N(fg) =N(f)+N(g)$ when \\red{the} coefficients of the \\old{class} \\red{polynomial} $f$ corresponding to the vertices of \\red{the} polytope $N(f)$ \\red{are not} zero divisors.\n\t\t\\item $N(f+g) \\subseteq conv(N(f),N(g))$.\n\t\t\\item Let $\\theta: R \\to R'$ be a homomorphism of rings\n\t\tand $\\theta': R[\\Lambda] \\to R'[\\Lambda]$ its extension.\n\t\tThen $N(\\theta'(f))\\subseteq N(f)$.\n\t\\end{enumerate}\n\\end{pro}\n\nConsider a smooth variety $X$ equipped with an action of a torus $\\T$. Choose a subtorus $A \\subset \\T$. For \\red{a fixed point set}\ncomponent $F\\subset X^\\T$ and an element $a\\in K^\\T(X)$ we want to \\red{define}\nthe Newton polytope\n$$N^A(a_{|F})\\subset \\Hom(A,\\C^*)\\otimes_\\Z\\R :=\\mathfrak{a}^*. $$\n\\old{It} \\red{This} is possible due to the following proposition:\n\n\\begin{pro} \\label{lem:N(a)}\n\t\\red{Let $F$ be a smooth $\\T$-variety.}\n\t Let $A\\subset \\T$ be a subtorus which acts trivially on $F$.\n\t Any \\red{splitting}\n\t of the inclusion $A\\subset \\T$ induces \\red{an} isomorphism\n\t$$\\alpha: K^\\T(F) \\simeq K^{\\T\/A}(F) \\otimes_{\\Z} R(A) \\simeq K^{\\T\/A}(F)[\\Hom(A,\\C^*)] \n\t\\,,$$\n\t\\red{where $R(A)$ denotes the representation ring of the torus $A$.}\n\tFor an element $a \\in K^\\T(F)$ we consider the Newton polytope $N^A(a) \\subset \\mathfrak{a}^*$\n\t\\red{defined by the polynomial $\\alpha(a)$}.\n\tThe isomorphism $\\alpha$ depends on the choice of \\red{splitting},\n\t yet the Newton polytope is independent \\red{of} it.\n\\end{pro}\n\\begin{proof}\n\tConsider two \\red{splittings}\n\t$s_1,s_2: \\T\/A \\to \\T$. 
Denote by $\\alpha_1,\\alpha_2$ the induced isomorphisms\n\t$$K^{\\T}(F) \\to K^{\\T\/A}(F)[\\Hom(A,\\C^*)]\\,. $$\n\tNote that the quotient $\\frac{s_2}{s_1}$ induces a group homomorphism \\hbox{$h:\\T\/A \\to A$.} Consider an arbitrary class $E \\in K^{\\T\/A}(F)$ and a character $\\chi \\in \\Hom(A,\\C^*)$. Direct calculation \\red{provides}\n\tus with the formula \n\t$$\\alpha_2\\circ\\alpha_1^{-1}(Ee^{\\chi})=\\left(E\\otimes\\C_{\\chi\\circ h}\\right)e^{\\chi} \\,.$$\n\tThus\\red{,} the Newton polytope is independent \\red{of} the choice of\n\t\\red{splitting}.\n\\end{proof}\n\\begin{rem} \\label{rem:N}\n\tConsider the situation as in proposition \\ref{lem:N(a)}. Let $E$ be a $\\T$-vector bundle over $F$. Then the Euler class $eu(E)$ satisfies \\red{the} assumption of proposition \\ref{lem:New} (c). To see this\\red{,} use \\red{the} weight decomposition of $E$ with respect to the torus $A$. Note that for a vector bundle $V$ with \\red{an} action of $A$ given by a single character\\red{,} the Newton polytope $N^A(eu(V))$ is an interval. Moreover\\red{,} the coefficients corresponding to the ends of this interval are invertible (they are equal to \\red{the} classes of the line bundles $1$ and $\\det V^*$).\n\\end{rem}\n\nAt the end of this section we want to \\red{mention the} behaviour of \\old{the} polytope $N^A$ after restriction to \\red{a} one dimensional subtorus of $A$. Namely\\red{,} let $\\sigma \\subset A$ be a one dimensional subtorus. 
Denote by\n$$|_\\sigma:K^A(pt) \\to K^{\\sigma}(pt)$$\n\\red{the} induced map on the $K$-theory and by\n$$\\pi_\\sigma:\\Hom(A,\\C^*) \\otimes_\\Z \\R \\to \\Hom(\\sigma,\\C^*) \\otimes_\\Z \\R$$\n\\red{the} induced map on the characters.\n\n\\begin{pro} \\label{lem:Npi}\n\tFor an element $a \\in K^\\T(F)$ there exists a finite union of hyperplanes $K$ in the vector space of cocharacters such that for all one dimensional subtori $\\sigma$ whose cocharacter \\red{does not} belong to $K$\\red{ we have}\n\t$$N^\\sigma(a|_\\sigma)=\\pi_\\sigma \\left(N^A(a)\\right) .$$\n\\end{pro}\n\n\\begin{pro} \\label{lem:ver}\n \\red{Let $M$ be a smooth $A$-variety.}\n Consider \\red{a} one dimensional subtorus\n $\\sigma \\subset A$ such that $\\sigma \\in \\mathfrak{C}$ for some weight chamber $\\mathfrak{C}$. Let $F$ be a component of \\red{the fixed point set $M^A$.}\n Then the point $0$ is a vertex of the polytope $N^A(eu(\\nu_F^-))$. Moreover\\red{,} the point $\\pi_\\sigma(0)$ is the minimal \\old{term} \\red{point} of \\red{the} line segment\n $\\pi_\\sigma\\left(N^A(eu(\\nu_F^-))\\right)$.\n\\end{pro}\n\n\n\\section{Stable envelopes for isolated fixed points} \\label{s:env}\nIn this section we recall the definition of the $K$-theoretic stable envelope introduced in \\cite{OS,O2} (see also \\cite{SZZ,AMSS,RTV} for\nspecial cases) in the case of isolated fixed points. \nIn \\old{the} appendix \\red{\\hyperref[s:Ok]{A}} we give \\red{a} rigorous proof that such classes are unique (proposition \\ref{pro:uniq}). Using \\old{the} Okounkov's definition \\old{it} \\red{this} is true only for a general enough slope. 
We introduce \\red{a modified version of the axioms} \\old{correction in the definition} to bypass this problem.\n\nWe use the following notations and \\red{assumptions:}\n\\begin{itemize}\n\t\\item $\\T \\simeq \\C^r$ is an algebraic torus.\n\t\\item $(X,\\omega)$ is a smooth symplectic $\\T$-variety.\n\t\\item $A \\subset \\T$ is a subtorus which preserves the symplectic form.\n\t\\item We assume that the fixed point set $X^A$ is finite (\\red{this} implies that $X^A=X^\\T$).\n\t\\item We assume that $\\omega$ is an eigenvector of $\\T$ \\red{and} denote its character by \\hbox{$h \\in \\Hom(\\T,\\C^*)$.} \n\n\t\\item $\\mathfrak{C} \\subset \\mathfrak{a}$ is a weight chamber.\n\t\\item For a fixed point $F \\in X^A$\\red{,} we denote by $X_F^+$ its positive BB-cell \\old{depending (only) on} \\red{with respect to} the chamber $\\mathfrak{C}$ (cf. \\red{definition \\ref{def:BB} and} proposition \\ref{lem:chamb}).\n\t\\item We denote by $\\ge$ the partial order on the fixed \\red{point} set \\red{$X^A$}\n\t induced by the chamber~$\\mathfrak{C}$ (cf. theorem \\ref{tw:BB} (6)).\n\t\\item For a fixed point $F \\in X^A$\\red{, let $$\\nu_F:=\\nu(F\\subset X) \\simeq TX_{|F} \\,,$$\n\tbe the normal bundle to the inclusion $F\\subset X$.\n\tLet $\\nu_F=\\nu_F^+\\oplus\\nu_F^-$ be its decomposition}\n\tinduced by the chamber $\\mathfrak{C}$ (cf. proposition \\ref{lem:chamb}).\n\t\\item $T^{1\/2} \\in K^\\T(X)$ is a polarization (cf. definition \\ref{df:pol}).\n\t\\item For a fixed \\red{point} \\old{set\n\tcomponent} $F\\in X^A$\\red{,} we denote by $T^{1\/2}_{F,>0} \\in K^\\T(F)$ the attracting part \n\tof\n\t$(T^{1\/2})_{|F} \\in K^\\T(F)$ (cf. proposition \\ref{lem:attr}). 
\n\t\\item $s \\in Pic(X) \\otimes \\Q$ is a rational $A$-linearisable line bundle which we call \\red{the} slope.\n\t\\red{\\item For a fixed point $F\\in X^A$, we denote by $1_F$ the multiplicative unit in the ring $K^\\T(F)$ given by the class of the equivariant structure sheaf.}\n\\end{itemize}\n\nMoreover\\red{,} we assume that the variety $X$\nsatisfies the following condition:\n\\begin{enumerate}[$(\\star)$]\n\t\\item\n\tThe set $\\bigsqcup_{F \\in X^A} X^+_{F}$ is closed.\n\\end{enumerate}\n\\begin{rem} \n\tThe $(\\star)$ condition \\red{implies} that for any fixed point $F_0\\in X^A$ the set $\\bigsqcup_{F\\in X^A, F\\le F_0} X^+_{F}$ is closed.\n\\end{rem}\nIn \\old{the} Okounkov's papers \\red{a} stronger condition\n\\red{on $X$ is assumed. Namely, it is required that $X$ admits an}\n equivariant proper map to an affine variety\\red{,} cf. \\cite{OM} paragraph 3.1.1.\nExistence of such \\red{a} map implies the condition $(\\star)$\\red{,} cf. \\cite{OM} lemma 3.2.7.\nIt turns out that the condition~$(\\star)$ is sufficient to prove \\red{the} uniqueness of the stable envelope (cf. proposition~\\ref{pro:uniq}).\n\n\n\n\n\\begin{adf} [cf. chapter 2 of \\cite{OS}] \\label{df:env}\n\tThe stable envelope is a morphism of \\hbox{$K^\\T(pt)$-modules}\n\t$$Stab_{\\mathfrak{C},T^{1\/2}}^s: K^\\T(X^A) \\to K^\\T(X)$$\n\tsatisfying \\red{the following} three properties:\n\t\\begin{enumerate}[{\\bf a)}]\n\t\t\\item For any fixed point $F$\n\t\t$$\\supp\\left(Stab_{\\mathfrak{C},T^{1\/2}}^s(1_F)\\right) \\subset \\bigsqcup_{F' \\le F} X^+_{F'} \\,.$$\n\t\t\\item For any fixed point $F$\n\t\t$$Stab_{\\mathfrak{C},T^{1\/2}}^s(1_F)_{|F}=\n\t\teu(\\nu^-_F)\\frac{(-1)^{\\rank T^{1\/2}_{F,>0}}}{\\det T^{1\/2}_{F,>0}} h^{\\frac{1}{2}\\rank T^{1\/2}_{F,>0}} \\,.$$\n\t\t\\item Choose any $A$-linearisation of the slope $s$. 
For a pair of fixed points $F',F$ such that $F'<F$ we have\n\t\t$$N^A\\left(Stab_{\\mathfrak{C},T^{1\/2}}^s(1_F)_{|F'}\\right)+s_{|F}\n\t\t\\subseteq\n\t\t\\left(N^A(eu(\\nu^-_{F'}))\\setminus\\{0\\}\\right) -\\det T^{1\/2}_{F',>0} +s_{|F'},$$\n\t\twhere the Newton polytopes are defined as in \\old{the}\n\t\tproposition \\ref{lem:N(a)}.\n\t\tAn addition\n\t\tof \\red{the restriction of} a line bundle \\red{means} translation by its character.\n\\end{enumerate}\n\\end{adf}\nFor a comparison of the above definition with the one given in \\cite{OS,O2} see appendix~\\red{\\hyperref[s:Ok]{A}}.\n\\begin{rem} \nTo define the element $h^{\\frac{1}{2}\\rank T^{1\/2}_{F,>0}}$ in the axiom {\\bf b)} one may need to pass to the double cover of \\red{the} torus $\\T$ (cf. paragraph 2.1.4 in \\cite{OS}). To avoid this problem we consider \\red{the normalized version of the stable envelope (see the expression (\\ref{wyr:stab}) below).} \\old{the morphisms $h^{-\\frac{1}{2}\\rank T^{1\/2}_{F,>0}}Stab_{\\mathfrak{C},T^{1\/2}}^s$.}\n\\end{rem}\n\\begin{rem} \\label{rm:uni}\n\t\\old{The} Okounkov's definition differs from the one given above in the axiom {\\bf c)}. In the paper \\cite{OS} \\red{a} weaker set containment is required\\red{:} \n\t\t$$N^A\\left(Stab_{\\mathfrak{C},T^{1\/2}}^s(\\red{1_F})_{|F'}\\right)+s_{|F}\n\t\\subseteq\n\tN^A(eu(\\nu^-_{F'})) -\\det T^{1\/2}_{F',>0} +s_{|F'}.$$\n\tIt defines the stable envelope uniquely only for \\red{a} general enough slope (cf. paragraph 2.1.8 in \\cite{OS}, proposition 9.2.2 in \\cite{O2} and example \\ref{ex:uni}). With our \\red{version of the axioms} \\old{correction} the stable envelope is unique for any choice of slope and coincides with \\old{the} Okounkov's\\old{one} for general enough slope. \\old{(namely such that $s_{|F}-s_{|F'}$ is not an integral point)}\n\\end{rem}\n\n\tFor simplicity\\red{,} we omit the weight chamber and polarization in the notation. 
The stable envelope is determined by the set of elements\n\t\\begin{align} \\label{wyr:stab}\n\tStab^\\red{s}(F) :=h^{-\\frac{1}{2}\\rank T^{1\/2}_{F,>0}}Stab\\red{^s}(1_F), \n\t\\end{align}\n\tindexed by \\red{the fixed point set $X^A$.}\n\tIt leads to the following equivalent definition.\n\t\\begin{adf} \\label{def:ele}\n\t\t The $K$-theoretic stable envelope is a set of elements $Stab^\\red{s}(F)\\in K^\\T(X)$ indexed by \\red{the fixed point set $X^A$}, such that\n\t\t\t\\begin{enumerate}[{\\bf a)}]\n\t\t\t\\item For any fixed point $F$ \n\t\t\t$$\\supp(Stab^\\red{s}(F)) \\subset \\bigsqcup_{F' \\le F} X^+_{F'} \\,.$$\n\t\t\t\\item For any fixed point $F$\n\t\t\t$$Stab^\\red{s}(F)_{|F}=\n\t\t\teu(\\nu^-_F)\\frac{(-1)^{\\rank T^{1\/2}_{F,>0}}}{\\det T^{1\/2}_{F,>0}} \\,.$$\n\t\t\t\\item Choose any $A$-linearisation of the slope $s$. For a pair of fixed points $F',F$ such that $F'<F$ we have\n\t\t\t$$N^A\\left(Stab^s(F)_{|F'}\\right)+s_{|F}\n\t\t\t\\subseteq\n\t\t\t\\left(N^A(eu(\\nu^-_{F'}))\\setminus\\{0\\}\\right) -\\det T^{1\/2}_{F',>0} +s_{|F'}.$$ \n\t\t\\end{enumerate}\n\t\\end{adf}\n\n\\begin{pro} \\label{pro:uniq}\n\t\\old{With} \\red{Under the} assumptions given at the beginning of this section\\red{,} the stable envelope (definitions \\ref{df:env}, \\ref{def:ele}) is unique. \n\\end{pro}\n\t\nThe simplest example of a symplectic variety is \\red{the} cotangent \\red{bundle}\nto a smooth variety.\nIt is natural to ask whether such a variety satisfies the assumptions needed to define \\red{the} $K$-theoretic stable envelope.\nConsider a smooth \\red{$A$-variety $M$}\nwith \\red{a} finite number \nof fixed points. Consider the cotangent variety $X=T^*M$ with the action of the torus $\\T=A \\times \\C^*$ such that $\\C^*$ acts on the fibers by scalar multiplication.\n\tThe fixed \\red{point} set\n\t of this action is finite. In fact\\red{,} we have \\red{equalities}\n\t$$X^{\\T}=X^A=M^A.$$\n\tThe variety $X$ is equipped with the canonical nondegenerate symplectic form $\\omega$. This form is preserved by the torus $A$ and it is an eigenvector of the torus $\\T$ with character corresponding to \\red{the} projection on the second factor. 
Denote this character by $y$.\n\tThe subbundle $TM \\subset TX$ satisfies the polarization condition (see definition \\ref{df:pol}). Choose a weight chamber $\\mathfrak{C}$ of the torus $A$ and a one dimensional subtorus $\\sigma \\subset A$ such that $\\sigma \\in \\mathfrak{C}$. The BB-cells of $\\sigma$ in $X$ are the conormal bundles to the BB-cells of $\\sigma$ in $M$.\n\tThe condition $(\\star)$ means that the set\n\t$$\\bigcup\\limits_{F \\in M^A} \\nu^*(M^+_F)$$\n\tis a closed subset of $T^*M$. This condition is not always satisfied.\n\t\\begin{rem}\n\t\tConsider a smooth manifold $M$ decomposed as \\red{a} disjoint union\n\t\tof smooth locally closed submanifolds\n\t\t$$M=\\bigsqcup_i S_i .$$\n\t\t\\red{The disjoint union}\n\t\tof the conormal bundles to \\red{the} strata is a closed subset of the cotangent bundle $T^*M$ if and only if the decomposition satisfies the Whitney condition (A) (cf.\n\t\t\\cite{Whcond} exercise 2.2.4).\n\t\t Thus\\red{,} the $(\\star)$ condition can be seen as \\red{an} algebraic counterpart of the Whitney condition~(A).\n\t\\end{rem}\n\t\\begin{ex}\n\t\tConsider the projective space $\\PP^2$ with the action of \\red{the} diagonal torus of $GL_3(\\C)$. Choose a general enough subtorus $\\sigma$. Let $x$ be the middle fixed point in the Bruhat order. Then the cotangent bundle to \\red{the} blow-up $T^*(Bl_x\\PP^2)$ \\red{does not} satisfy the $(\\star)$ condition.\n\t\\end{ex}\n\t\t\\begin{ex}\n\t\t\t\\begin{enumerate}\n\t\t\t\t\\item The cotangent bundle to any flag variety $G\/B$ admits an equivariant projective morphism to an affine variety (cf. \\cite{ChGi} section 3.2) \n\t\t\t\tso it satisfies \\red{the} $(\\star)$ condition.\n\t\t\t\t\\item The cotangent bundle to any $GL_n$ homogeneous variety is a Nakajima quiver variety (cf. 
\\cite{Nak} section 7) so it admits an equivariant projective morphism to an affine variety (\\cite{Nak2} section 3, \\cite{GiNak} section 5.2).\n\t\t\t\\end{enumerate}\n\t\\end{ex}\n\tWe introduce a stronger condition on the BB-cells of $M$ which \\red{implies}\n\tthe ($\\star$) condition for $X=T^*M$ and is satisfied by the homogeneous varieties (cf. theorem \\ref{tw:prod}).\n\t\\begin{adf}\\label{df:prd}\n\t\tConsider a projective smooth variety $M$ \\red{equipped} with an action of a torus $A$ with \\red{a} chosen one dimensional subtorus $\\sigma$. Suppose that the fixed \\red{point} set\n\t\t$M^\\sigma$ is finite.\n\t\tWe say that $M$ satisfies the local product condition\n\t\tif for any fixed point $x \\in M^\\sigma$ there exist an $A$-equivariant Zariski open neighbourhood $U$ of $x$ and an $A$-variety $Z_x$ such that:\n\t\t\\begin{enumerate}\n\t\t\t\\item There \\red{exists}\n\t\t\t an $A$-equivariant isomorphism $$\\theta: U \\simeq (U\\cap M_x^+)\\times Z_x \\,. $$\n\t\t\t\\item For any fixed point $y \\in M^\\sigma$ there \\red{exists} a subvariety $\\red{Z'_{x,y}}\\subset Z_x$ such that $\\theta$ induces an isomorphism:\n\t\t\t$$U\\cap M_y^+ \\simeq (U\\cap M_x^+)\\times \\red{Z'_{x,y}} \\,.$$\n\t\t\\end{enumerate}\t\n\t\\end{adf}\n\n\t\\begin{pro}\n\t\tSuppose that a projective smooth \\red{$\\C^*$}-variety $M$\n\t\t satisfies the local product condition. Then the cotangent variety $T^*M$ satisfies the $(\\star)$ condition.\n\t\\end{pro}\n\t\\begin{proof}\n\t\tIt is enough to prove that for every fixed point $F_0 \\in M^\\sigma$ there is an inclusion\n\t\t$$\\overline{\\nu^*(M_{F_0}^+\\subset M)}\\subset \\bigsqcup_{F\\in M^\\sigma}\\nu^*(M_{F}^+\\subset M) \\,. $$\n\t\tThis is equivalent to the claim that for arbitrary fixed points $F_0,F$ \n\t\t$$\\overline{\\nu^*(M_{F_0}^+\\subset M)}\\cap T^*M_{|M_F^+}\\subset \\nu^*(M_{F}^+\\subset M) \\,. $$\n\t\tDenote by $U$ the neighbourhood of $F$ from \\red{the} definition of the local product condition. 
All of the subsets in the above formula are $\\sigma$-equivariant. Thus\\red{,} it is enough to prove that\n\t\t$$\\overline{\\nu^*(M_{F_0}^+\\cap U\\subset U)}\\cap T^*U_{|M_F^+}\\subset \\nu^*(M_{F}^+\\cap U\\subset U) \\,. $$\n\t\tThe \\red{local}\n\t\tproduct property implies the existence of isomorphisms\n\t\t$$\n\t\tU\\simeq M_F^+\\times Z, \\ \\\n\t\tM_F^+ \\simeq M_F^+\\times \\{pt\\}, \\ \\\n\t\tM_{F_0}^+ \\simeq M_F^+\\times Z' \\,,\n\t\t$$\n\t\tfor some subvariety $Z'\\subset Z$ and point $pt\\in Z$. Denote by $E$ the subbundle $$M_F^+\\times T^*Z \\subset T^*U.$$\n\t\tNote that\n\t\t\\begin{align*}\n\t\t\t&\\nu^*(M_{F}^+\\cap U\\subset U)=E_{|M_{F}^+} \\\\\n\t\t\t&\\nu^*(M_{F_0}^+\\cap U\\subset U)=M_F^+\\times \\nu^*(Z'\\subset Z) \\subset E_{|M^+_{F_0}}\n\t\t\\end{align*}\n\t\tThus\n\t\t\\begin{multline*}\n\t\t\\overline{\\nu^*(M_{F_0}^+\\cap U\\subset U)}\\cap T^*U_{|M_F^+}\\subset \\overline{E_{|M^+_{F_0}}} \\cap T^*U_{|M_F^+} \\subset\n\t\t\\\\\n\t\t\\subset E \\cap T^*U_{|M_F^+}=E_{|M_F^+}= \\nu^*(M_{F}^+\\cap U\\subset U) \\,.\n\t\t\\end{multline*}\n\t\\end{proof}\n\t \n\t\\red{\\section{Motivic Chern class}\n\tThe motivic Chern class is defined in \\cite{BSY}. The equivariant version is due to \\cite[section 4]{AMSS} and \\cite[section 2]{FRW}. Here we recall the definition of the torus equivariant motivic Chern class. Consult \\cite{AMSS,FRW} for a detailed account.}\n\n\t\t\\red{\\begin{adf}[after {\\cite[section 2.3]{FRW}}]\n\t\t\tLet $A$ be an algebraic torus.\n\t\t\tThe motivic Chern class assigns to every $A$-equivariant map of quasi-projective $A$-varieties $f:X \\to M$ an element\n\t\t\t$$mC_y^A(f)=mC_y^A(X \\xto{f} M) \\in G^A(M)[y]$$\n\t\t\tsuch that the following properties are satisfied:\n\t\t\t\\begin{description}\n\t\t\t\t\\item[1. 
Additivity] If an $A$-variety $X$ decomposes as a union of closed and open invariant subvarieties $X=Y\\sqcup U$, then $$mC_y^A(X\\xto{f} M)=mC_y^A(Y\\xto{f_{|Y}} M)+mC_y^A(U\\xto{f_{|U}} M)\\,.$$\n\t\t\t\n\t\t\t\t\\item[2. Functoriality] For an equivariant proper map $f:M\\to M'$ we have $$mC_y^A(X\\stackrel{f\\circ g}\\to M')=f_*mC_y^A(X\\stackrel{g}\\to M)\\,.$$\n\t\t\t\n\t\t\t\t\\item[3. Normalization] For a smooth $A$-variety $M$ we have $$mC_y^A(id_M)=\\lambda_y(T^*M):=\\sum_{i=0}^{\\rank T^*M}[\\Lambda^iT^*M]y^i \\,.$$ \n\t\t\t\\end{description}\n\t\tThe motivic Chern class is the unique assignment satisfying the above properties. \n\\end{adf}}\n\n\t\\section{Comparison with the motivic Chern classes} \\label{s:mC}\n\tIn this section we aim to compare the stable envelopes for \\red{the} trivial slope with the motivic Chern classes of BB-cells. \n\tOur main results are:\n\\begin{pro} \\label{pro:mC}\n\tLet $M$ be a projective, smooth variety\n\t\\red{equipped} with an action of an algebraic torus $A$. \\red{Suppose that the fixed point set $M^A$ is finite.} \\old{with a finite number\n\t of fixed points.} Consider the variety $X=T^*M$ \\red{equipped} with the action of the torus $\\T=\\C^*\\times A$. 
Choose any weight chamber $\\mathfrak{C}$ of the torus $A$, polarization $T^{1\/2}=TM$ and the trivial line bundle $\\theta$ as a slope.\n\tThen\\red{,} the elements\n\t$$\n\t\\frac{mC_{-y}^A(M^+_F \\to M)}{y^{\\dim M_F^+}} \\in K^A(M)[y,y^{-1}] \\simeq K^\\T(M) \\simeq K^\\T(T^*M)\n\t$$\n\tsatisfy the axioms {\\bf b)} and {\\bf c)} of the stable envelope \\red{$Stab^\\theta(F)$.}\n\\end{pro}\n\n\\begin{rem}\n\tIn this proposition we \\red{do not} assume that $M$ \\red{satisfies} the local product condition or even that $X$ satisfies the $(\\star)$ condition.\n\\end{rem}\n\n\\begin{atw} \\label{tw:mC}\n\tConsider the situation as in proposition \\ref{pro:mC}.\n\tSuppose that the variety $M$ with the action of a one dimensional torus $\\sigma \\in \\mathfrak{C}$ satisfies the local product condition\n\t(definition \\ref{df:prd}). Then \\red{the element}\n\t$$\n\t\\frac{mC_{-y}^A(M^+_F \\to M)}{y^{\\dim M_F^+}} \\in K^A(M)[y,y^{-1}] \\simeq K^\\T(M) \\simeq K^\\T(T^*M)\n\t$$\n\t\\old{determine}\\red{is equal to} the $K$-theoretic stable envelope \\red{$Stab^\\theta(F)$.} \\old{$y^{-\\frac{1}{2}\\dim M_F^+}Stab^\\theta_{\\mathfrak{C},T^{1\/2}}(F)$.}\n\\end{atw}\n\\red{\nOur main examples of varieties satisfying the local product property are homogeneous spaces (see appendix \\red{\\hyperref[s:G\/P]{B}}). Let $G$ be a reductive, complex Lie group with a chosen maximal torus $A$. Let $B$ be a Borel subgroup and $P$ a parabolic subgroup. We consider the action of the torus $A$ on the variety $G\/P$.}\n\n\\red{\nA choice of weight chamber $\\mathfrak{C} \\subset\\mathfrak{a}$ induces a choice of Borel subgroup $B_\\mathfrak{C}\\subset G$. Let $F\\in (G\/P)^A$ be a fixed point. It is a classical fact that the BB-cell $(G\/P)_F^+$ (with respect to the chamber $\\mathfrak{C}$)\ncoincides with the $B_\\mathfrak{C}$-orbit of $F$. 
These orbits are called Schubert cells.\n}\n\\begin{cor}\n\t\\red{In the situation presented above} the stable envelopes for \\red{the} trivial slope are equal to the motivic Chern classes of Schubert cells\n\t$$\\frac{mC_{-y}^A((G\/P)^+_F \\to G\/P)}{y^{\\dim (G\/P)^+_F}} \n\t=y^{-\\frac{1}{2}\\dim (G\/P)^+_F}Stab^\\theta_{\\mathfrak{C},T(G\/P)}(1_{F}).$$\n\\end{cor}\n\\begin{proof}\n\tTheorem \\ref{tw:prod} \\red{implies}\n\t that homogeneous varieties satisfy the local product condition.\n\t Thus, the corollary follows from theorem \\ref{tw:mC}.\n\\end{proof}\n\n\\begin{rem}\n\tIn the case of flag varieties $G\/B$\\red{,} our results for \\red{the} trivial slope agree with the previous results of \\cite{AMSS} (theorem 7.5) for a small anti-ample slope \n\tup to \\red{a} change of $y$ to $y^{-1}$. \\red{This} difference is a consequence of the fact that in \\cite{AMSS} the inverse action of $\\C^*$ on the fibers of the cotangent bundle is considered.\n\\end{rem}\n\t\n\nBefore the proof of theorem \\ref{tw:mC} we make several simple observations.\n\tLet $\\tilde{\\nu}_F \\simeq TM_{|F}$ denote the normal space to the fixed point $F$ in $M$.\n\tDenote by $\\tilde{\\nu}^-_F$ and $\\tilde{\\nu}^+_F$ its decomposition into the negative and positive part induced by the weight chamber $\\mathfrak{C}$.\n\tLet\n\t$$\\nu_F\\red{\\simeq TX_{|F}} \\simeq TM_{|F}\\oplus (T^*M_{|F}\\red{\\otimes \\C_y})$$\n\t denote the normal space to the fixed point $F$ in the variety $X=T^*M$. It is a straightforward observation that\n\t\\begin{align*}\n\t&\t\\nu_F^-\\simeq\\tilde{\\nu}^-_F\\oplus y(\\tilde{\\nu}^+_F)^* \\red{\\,,} \\\\\n\t&\t\\nu_F^+\\simeq\\tilde{\\nu}^+_F\\oplus y(\\tilde{\\nu}^-_F)^* \\red{\\,,} \\\\\n\t&T^{1\/2}_{F,>0}= \\tilde{\\nu}^+_F.\n\t\\end{align*} \n\n\tIn the course of the proofs we use the following computation:\n\t\\begin{alemat} \\label{lem:comp}\n\t\tLet $V$ be a $\\T$-vector space. 
We have an equality\n\t\t$$\n\t\t\\frac{\\lambda_{-1}\\left(y^{-1}V\\right)}{\\det V} =\\frac{\\lambda_{-y}(V^*)}{(-y)^{\\dim V}}\n\t\t$$\n\t\tin the $\\T$-equivariant $K$-theory of a point.\n\t\\end{alemat}\n\\begin{proof}\n\tBoth sides of the formula are multiplicative with respect to the direct sums of $\\T$-vector spaces. Every $\\T$-vector space decomposes as a sum of one dimensional spaces, so it is enough to check the equality for $\\dim V=1$. Then it simplifies to the trivial form:\n\t$$\\frac{1-\\frac{\\alpha}{y}}{\\alpha}=\\frac{1-\\frac{y}{\\alpha}}{-y} ,$$\n\twhere $\\alpha$ is \\red{the} character of the action of the torus $\\T$ on the linear space $V$.\n\\end{proof}\n\\begin{proof}[Proof of proposition \\ref{pro:mC}]\n\tWe start the proof by checking the axiom {\\bf b)}. We need to show that\n\t\\begin{align*}\n\t\t\\frac{mC_{-y}^A(M^+_F \\to M)_{|F}}{y^{\\dim M_F^+}}=\n\t\teu(\\nu^-_F)\\frac{(-1)^{\\rank T^{1\/2}_{F,>0}}}{\\det T^{1\/2}_{F,>0}}\t\\,,\t\t\n\t\\end{align*}\n\twhich is equivalent to\n\t\n\t\\begin{align} \\label{wyr:b}\n\t\\frac{mC_{-y}^A(M^+_F \\to M)_{|F}}\n\t{(-y)^{\\dim \\tilde{\\nu}^+_F}}=\n\teu(\\tilde{\\nu}^{-}_F)\n\t\\frac{\\lambda_{-1}(y^{-1}\\tilde{\\nu}^{+}_F)}{\\det \\tilde{\\nu}^+_F} \\,.\n\t\\end{align}\n\t\nThe BB-cell $M_F^+$ is a locally closed subvariety. \nChoose an open neighbourhood $U$ of the fixed point $F$ in $M$ such that the morphism $M_F^+\\cap U \\subset U$ is a closed immersion. The functorial properties of the motivic Chern class (cf. 
paragraph 2.3 of \\cite{FRW}, or theorem 4.2 from \\cite{AMSS})\n imply that:\n\\begin{align*}\n&mC_{-y}^A\\left(M^+_F \\xto{i} M\\right)_{|F}=\nmC_{-y}^A\\left(M^+_F \\cap U \\xto{i} U\\right)_{|F}= \\\\\n&=i_*mC_{-y}^A\\left(id_{M^+_F \\cap U}\\right)_{|F}=\ni_*\\left(\\lambda_{-y}\\left(T^*(M^+_F \\cap U)\\right)\\right)_{|F}=\n\\lambda_{-y}\\left(\\tilde{\\nu}^{+*}_F\\right)eu(\\tilde{\\nu}^-_F) \\,.\n\\end{align*}\n \n So the left hand side of expression (\\ref{wyr:b}) is equal to\n $$eu(\\tilde{\\nu}^-_F) \\frac{\\lambda_{-y}\\left(\\tilde{\\nu}^{+*}_F\\right)}{(-y)^{\\dim \\tilde{\\nu}^{+}_F}}.$$\n Lemma \\ref{lem:comp} implies that the right hand \\red{side}\n is also of this form.\n \n We proceed to the axiom {\\bf c)}. Consider a pair of fixed points $F,F'$ such that $F'<F$. We need to prove the inclusion\n \\begin{align} \\label{wyr:Ninc}\n \tN^A\\left(\\frac{mC_{-y}^A(M^+_F \\to M)_{|F'}}{y^{\\dim M_F^+}}\\right) \\subseteq N^A(eu(\\nu^-_{F'}))-\\det T^{1\/2}_{F',>0}\n \\end{align}\n and take care of the distinguished point\n \\begin{align} \\label{wyr:Npt}\n \t-\\det T^{1\/2}_{F',>0} \\notin N^\\red{A}\\left(\\frac{mC_{-y}^A(M^+_F \\to M)_{|F'}}{y^{\\dim M_F^+}}\\right).\n \\end{align}\nLet us concentrate on the inclusion (\\ref{wyr:Ninc}).\n There is an equality of polytopes\n $$\n N^A(eu(\\nu^-_{F'}))-\\det T^{1\/2}_{F',>0}=\n\n N^A\\left(eu(\\tilde{\\nu}^{-}_{\\red{F'}})\n \\frac{\\lambda_{-1}(y^{-1}\\tilde{\\nu}^{+}_{\\red{F'}})}{\\det \\tilde{\\nu}^+_{\\red{F'}}}\\right)=\n N^A\\left(eu(\\tilde{\\nu}^{-}_{F'})\\lambda_{-y}\\left(\\tilde{\\nu}^{+*}_{F'}\\right)\\right),\n $$\n where the second equality follows from lemma \\ref{lem:comp}.\n After substitution of $y=1$ \\red{into} the class $eu(\\tilde{\\nu}^{-}_{F'})\\lambda_{-y}\\left(\\tilde{\\nu}^{+*}_{F'}\\right)$ we obtain the class $eu(\\tilde{\\nu}_{F'})$.\n Thus, proposition \\ref{lem:New} (e) implies that \\old{there is an inclusion}\n $$\n N^A(eu(\\tilde{\\nu}_{F'})) \\subseteq N^A\\left(eu(\\tilde{\\nu}^{-}_{F'})\\lambda_{-y}\\left(\\tilde{\\nu}^{+*}_{F'}\\right)\\right)\\,.\n $$\nMoreover\n$$N^A\\left(\\frac{mC_{-y}^A(M^+_F \\to M)_{|F'}}{y^{\\dim M_F^+}}\\right)= 
N^A\\left(mC_{-y}^A(M^+_F \\to M)_{|F'}\\right)=\nN^A\\left(mC_{y}^A(M^+_F \\to M)_{|F'}\\right) \\,.$$\nTheorem 4.2 from \\cite{FRW} implies that there is an inclusion \n$$\nN^A\\left(mC_{y}^A(M^+_F \\to M)_{|F'}\\right) \\subseteq\nN^A(eu(\\tilde{\\nu}_{F'})) \\,.\n$$\nTo conclude\\red{,} we have proven the inclusions\n\\begin{multline*}\nN^A\\left(mC_{y}^A(M^+_F \\to M)_{|F'}\\right) \\subseteq\nN^A(eu(\\tilde{\\nu}_{F'}))\\subseteq \\\\\n\\subseteq N^A\\left(eu(\\tilde{\\nu}^{-}_{F'})\\lambda_{-y}\\left(\\tilde{\\nu}^{+*}_{F'}\\right)\\right)=\nN^A(eu(\\nu^-_{F'}))-\\det T^{1\/2}_{F',>0} \\,.\n\\end{multline*}\n\nThe next step is the proof of the formula (\\ref{wyr:Npt}). We proceed in a manner similar to the proof of corollary 4.5 in \\cite{FRW}. Consider a general enough one dimensional subtorus $\\sigma\\in \\mathfrak{C}$. Proposition \\ref{lem:Npi} implies that \n\\begin{align} \\label{wyr:gens}\nN^\\sigma\\left(mC_{y}^A(M^+_F \\to M)_{|F'}|_\\sigma\\right)=\\pi_\\sigma\\left(N^A\\left(mC_{y}^A(M^+_F \\to M)_{|F'}\\right)\\right).\n\\end{align}\n\nTheorem 4.2 from \\cite{FRW} (cf. also theorem 10 from \\cite{WeBB}), with the limit at $\\infty$ changed to the limit at $0$ to get the positive BB-cell instead of the negative one, implies that \n$$\n\\lim_{\\xi\\to 0}\\left(\\left.\\frac{mC_y^A(M_F^+\\subset M)_{|F'}}{eu(\\red{\\tilde{\\nu}}_{F'})}\\right|_\\sigma\\right)=\\;\\chi_y(M_F^+\\cap M^+_{F'})=\\chi_y(\\varnothing)=0\\,,\n$$\n\t\\red{where $\\xi$ is the chosen primitive character of the torus $\\sigma$} and the class $\\chi_y$ is the Hirzebruch genus (cf. 
\\cite{chiy, BSY}).\n\nThus\\red{,} the lowest term of the line segment $N^\\sigma\\left(mC_{y}^A(M^+_F \\to M)_{|F'}|_\\sigma\\right)$ is greater than the lowest term of the line segment $ N^\\sigma\\left(eu(\\red{\\tilde{\\nu}}_{F'})|_\\sigma\\right)$,\n which is equal to $\\pi_\\sigma(-\\det T^{1\/2}_{F',>0})$ by proposition \\ref{lem:ver}.\nThus\n$$\\pi_\\sigma(-\\det T^{1\/2}_{F',>0}) \\notin \\pi_\\sigma\\left(N^A\\left(mC_{y}^A(M^+_F \\to M)_{|F'}\\right)\\right).$$\n\\red{This}\nimplies\n$$-\\det T^{1\/2}_{F',>0} \\notin N^A\\left(\\frac{mC_{-y}^A(M^+_F \\to M)_{|F'}}{y^{\\dim M_F^+}}\\right),$$\nas demanded in (\\ref{wyr:Npt}).\n\\end{proof}\n\n\nTo prove theorem \\ref{tw:mC} we need the following technical lemma.\n\\begin{alemat}[cf. {\\cite[Remark after Theorem 3.1]{RTV'}} {\\cite[Lemma 5.2-4]{RTV}}] \\label{lem:supp}\n\tLet $M$ and $X$ be varieties as in proposition \\ref{pro:mC}. Suppose that \\red{$X$}\n\tsatisfies the ($\\star$) condition.\n\tConsider a fixed point $F \\in M^A$.\n\tSuppose that an element $a \\in K^\\T(X)$ satisfies two conditions:\n\t\\begin{enumerate}\n\t\t\\item $\\supp(a) \\subset \\bigcup_{F'\\le F} T^*M_{|M^+_{F'}} \\red{\\,,}$\n\t\t\\item $\\lambda_{-y}(T^*M^+_{F_i})_{|F_i}$ divides $a_{|F_i}$ for any fixed point $F_i \\in M^A$.\n\t\\end{enumerate}\n\tThen $\\supp(a) \\subset \\bigcup_{F'\\le F} \\red{\\nu^*}(M^+_{F'}\\subset M)$.\n\\end{alemat}\n\\begin{proof}\n\tConsider the set of positive BB-cells of $M$ corresponding to fixed points $F'\\le F$. Arrange them in a sequence $B_1,...,B_k$ in such a way that for every $t\\le k$ the union $\\bigcup_{i=1}^t B_i$ is a closed subset of $M$ (cf. \\cite{B-B3} theorem 3). Denote by $F_i$ the fixed point corresponding to the BB-cell $B_i$. Denote by\n\t$$E_t=\\bigcup_{i=1}^t T^*M_{|B_i} \\text{ and by }V=\\bigcup_{i=1}^{k} \\nu^*(B_i \\subset M).$$\n\tNote that the ($\\star$) condition implies that the sets $E_\\red{t}\\cup V$ are closed. 
Our goal is to prove by induction that\n\t$$\\supp(a) \\subset E_t \\cup V.$$\n\t The first condition implies this containment for $t=k$, which allows us to start the induction. To prove the lemma we need this containment for $t=0$. \n\n \tAssume that $\\supp(a) \\subset E_{t}\\cup V$ for some $t \\ge 1$. We want to prove that \\hbox{$\\supp(a) \\subset E_{t-1}\\cup V.$} \n \tDenote by $\\iota$ the inclusion\n \t$$\\iota:E_{t}\\cup V \\subset X.$$\n \tThe element $a$ is equal to $\\iota_{*}\\alpha$ for some $\\alpha \\in G^\\T(E_t \\cup V)$. Denote by $U$ the variety $T^*M_{|B_t} \\setminus \\nu^*B_t$. Using the equality of the complements\n \t$$\\left(E_{t}\\cup V\\right) \\setminus \\left(E_{t-1}\\cup V\\right)=T^*M_{|B_t} \\setminus \\nu^*B_t=U\\,,$$\n \twe get an exact sequence\n \t$$ G^\\T(E_{t-1} \\cup V) \\to G^\\T(E_t \\cup V) \\to G^\\T(U) \\to 0\\,.$$\n \tSo it is enough to show that $\\alpha$ restricted to the open subset $U$ vanishes. The variety $E_t \\cup V$ is reducible. Denote by $i$ and $j$ the inclusions $E_t \\subset E_t \\cup V$ and $V \\subset E_t \\cup V$. Proposition \\ref{cor:K} implies that the map\n \t$$i_*+j_*: G^\\T(E_t) \\oplus G^\\T(V) \\onto G^\\T(E_t \\cup V)$$\n \tis an epimorphism. Choose any decomposition\n \t$$\\alpha=i_*\\alpha_E +j_*\\alpha_V$$\n \tsuch that $\\alpha_E \\in G^\\T(E_t)$ and $\\alpha_V\\in G^\\T(V)$. 
The subsets $V$ and $U$ have empty intersection, so $$(j_*\\alpha_V)_{|U}=0.$$\n \tThus, it is enough to show that $i_*\\alpha_E$ also vanishes after restriction to $U$.\n \t\n \tNote that lemma \\ref{lem:comp} implies the following equality in $K^\\T(F_t)$\n \t\t$$\\lambda_{-y}(T^*B_t)=\\frac{(-y)^{\\dim B_t}}{\\det T^*B_t}\\lambda_{-1}(y^{-1}TB_t)=\\frac{(-y)^{\\dim B_t}}{\\det T^*B_t}eu(\\nu^*B_t \\subset T^*M_{|B_t}).$$\n \t\tMoreover\\red{,} the first map in the exact sequence of the closed immersion\n \t\t$$K^\\T(\\nu^*B_t) \\xto{i_*} K^\\T(T^*M_{|B_t}) \\to K^\\T(U) \\to 0$$\n \t\tis multiplication by the Euler class $eu(\\nu^*B_t \\subset T^*M_{|B_t})$. It follows that for an arbitrary element $b\\in K^\\T(T^*M_{|B_t})$ the restriction of $b$ to the set $U$ is trivial if and only if $\\lambda_{-y}(T^*B_t)_{\\red{|F_t}}$ divides $b_{|F_t}$.\n \t\n \t The second assumption and the fact that $(\\iota_*j_*\\alpha_V)_{|U}=0$ \\red{imply} that the element \n \t$$(\\iota_*i_*\\alpha_E)_{|F_t}=a_{|F_t}-(\\iota_{*}j_*(\\alpha_V))_{|F_t}$$\n is divisible by $\\lambda_{-y}(T^*B_t)_{\\red{|F_t}}$.\n The pushforward-pullback argument shows that\n \t$$\\left(eu(T^*M_{|B_t}\\subset X)\\alpha_E\\right)_{|F_t}=\\left(\\iota_*i_*\\alpha_E\\right)_{|F_t}.$$\n \tIt follows that $\\lambda_{-y}(T^*B_t)_{|F_t}$ divides $\\alpha_{E|F_t}$ multiplied by the Euler class \\hbox{$eu(T^*M_{|B_t}\\subset X)_{|F_t}$.} We need to prove that it divides $\\alpha_{E|F_t}$. We use the following simple algebra exercise:\n \t\\begin{exer*}\n \t\tLet $R$ be a domain and $R[y,y^{-1}]$ the ring of Laurent polynomials. Assume that $A(y) \\in R[y,y^{-1}]$ is a monic Laurent polynomial and $r\\in R$ a nonzero element. Then for any polynomial $B(y) \\in R[y,y^{-1}]$\n \t\t $$A(y)|B(y) \\iff A(y)|rB(y) \\,. 
$$\n \t\\end{exer*}\n \tThe ring $K^\\T(F_t)$ is isomorphic to $K^A(F_t)[y,y^{-1}]$, the polynomial $\\lambda_{-y}(T^*B_t)_{|F_t}$ is monic (the smallest coefficient is equal to one) and the Euler class $eu(T^*M_{|B_t}\\subset X)_{|F_t}$ belongs to the subring $K^A(F_t)$. The exercise implies that $\\lambda_{-y}(T^*B_t)_{|F_t}$ divides $\\alpha_{E|F_t}$. So \\red{the class} $\\alpha_E$ vanishes on $U$. The proof of the lemma follows by induction.\n\\end{proof}\n\n\\begin{proof}[Proof of theorem \\ref{tw:mC}]\n\tProposition \\ref{pro:mC} implies that the axioms {\\bf b)} and {\\bf c)} hold. It is enough to check the support axiom.\n\t\n\tChoose a fixed point $F\\in M^A$.\t\n\tThe functorial properties of the motivic Chern class imply that the support of the class\n\t$$mC_{-y}^A(M^+_F \\to M) \\in K^A(M)[y]\\subset K^\\T(M)$$\n\tis contained in the closure of $M^+_F$, which is contained in the closed set\n\t$\\bigcup_{F'\\le F}M^+_{F'}.$\n\tIt follows that the support of \\red{the} pullback element\n\t$$mC_{-y}^A(M^+_F \\to M) \\in K^\\T(T^*M)$$ \n\tis contained in the restriction of the cotangent bundle $T^*M$ to the subset $\\bigcup_{F'\\le F}M^+_{F'}.$\n\tTo prove the support axiom we need to check that it is contained in the smaller subset\n\t$$\\bigcup_{F'\\le F} \\nu^*(M^+_{F'}\\subset M).$$\n\t Thus, it is enough to check the assumptions of lemma \\ref{lem:supp} for $a$ equal to the class $mC_{-y}^A(M^+_F \\to M)$. We know that the first assumption holds. 
The local product condition (definition \\ref{df:prd}) implies that for any fixed point $F'$\n\t \\begin{align*} mC_{-y}^A(M^+_{F} \\to M)_{|F'}=&\n\t mC_{-y}^A(M^+_F \\cap U \\to U)_{|F'} \\\\\n\t =&mC^A_{-y}(\\red{Z_{F',F}'} \\times M^+_{F'} \\to Z_{F'} \\times M^+_{F'})_{|F'} \\\\\n\t =&mC^A_{-y}(\\red{Z_{F',F}'} \\subset Z_{F'})_{|F'} mC^A_{-y}(M^+_{F'} \\to M^+_{F'})_{|F'} \\\\\n\t =&mC^A_{-y}(\\red{Z_{F',F}'} \\subset Z_{F'})_{|F'} \\lambda_{-y}(T^*M_{F'}^+)_{|F'} \\,.\n\t \\end{align*}\n\t\\red{The second equality follows from the local product condition and the third from \\cite[theorem 4.2(3)]{AMSS}.}\n\tHence the class $mC_{-y}^A(M^+_F \\to M)$ satisfies the assumptions of lemma \\ref{lem:supp}.\n\\end{proof}\n\n\\begin{rem}\n\tIn the proof of theorem \\ref{tw:mC} we need the local product condition only to get the divisibility demanded in lemma \\ref{lem:supp}. Namely, for a pair of fixed points $F,F'$ such that $F>F'$ we need divisibility of $mC_{-y}^{\\red{A}}(M_F^+\\subset M)_{|F'}$ by $\\lambda_{-y}(T^*M^+_{F'})_{|F'}$.\n\tIf the closure of the BB-cell of $F$ is smooth at $F'$ and the BB-cells form a stratification of $M$ then the divisibility condition automatically holds. In the general case one can assume \\red{the} existence of \\red{a} motivically transversal slice instead of the local product condition.\nNamely, suppose there is a smooth locally closed subvariety $S \\subset M$ such that:\n\t\\begin{itemize}\n\t\t\\item $F'\\in S$ and $S$ is transversal to $M_{F'}^+$ at $F'$\\red{,}\n\t\t\\item $S$ is of dimension complementary to $M_{F'}^+$\\red{,}\n\t\t\\item $S$ is motivically transversal (cf. \\cite{FRWp}, section 8) to $M_F^+$\\red{.}\n\t\\end{itemize}\n\tThen theorem 8.5 from \\cite{FRWp}, or reasoning analogous to \\red{the} proof\n\tof lemma 5.1 from \\cite{FRW} proves the desired divisibility condition. 
In the case of homogeneous varieties\\red{,} divisibility can also be obtained using theorem 5.3 of \\cite{FRW} for the Borel group action.\n\\end{rem}\n\n\n\n\\section{Other slopes} \\label{s:slope}\nComputation of the stable envelopes for \\red{the} trivial slope allows one to easily get formulas for all integral slopes.\n\\begin{cor}\n\tConsider the situation described in theorem \\ref{tw:mC}. For an $A$-linearisable line bundle $s \\in Pic(X)$ \\red{the element}\n\t$$\n\t\\frac{s}{s_{|F}}\\cdot\\frac{mC_{-y}^A(M^+_F \\to M)}{y^{\\dim M_F^+}} \\in K^A(M)[y,y^{-1}] \\simeq K^\\T(M) \\simeq K^\\T(T^*M)\n\t$$\n\t\\old{determine} \\red{is equal to} the $K$-theoretic stable envelope \\red{$Stab^s(F)$.}\n\t\\old{$y^{-\\frac{1}{2}\\dim M_F^+}Stab^s_{\\mathfrak{C},T^{1\/2}}(F)$.}\n\\end{cor}\nIn this section we aim to prove that the stable envelope for \\red{the} trivial slope coincides with the one for \\red{a} sufficiently small anti-ample slope. Namely:\n\\begin{atw} \\label{tw:slo}\n\tLet $M$ be a projective, smooth \\red{$A$-variety. Suppose that the fixed point set $M^A$ is finite.}\n\tConsider the variety $X=T^*M$ with the action of the torus $\\T=\\C^*\\times A$. \n\tChoose any weight chamber $\\mathfrak{C}$ of the torus $A$ and polarization $T^{1\/2}=TM$.\n\tSuppose that $M$ satisfies the local product condition.\n\tFor any anti-ample $A$-linearisable line bundle $s$ and a sufficiently big integer $n$\\red{, the element}\n\t$$\n\t\\frac{mC_{-y}^A(M^+_F \\to M)}{y^{\\dim M_F^+}} \\in K^A(M)[y,y^{-1}] \\simeq K^\\T(M) \\simeq K^\\T(T^*M)\n\t$$\n\t\\old{determine} \\red{is equal to} the $K$-theoretic stable envelope \\red{$Stab^{\\frac{s}{n}}(F)$.}\n\t\\old{$y^{-\\frac{1}{2}\\dim M_F^+}Stab^s_{\\mathfrak{C},T^{1\/2}}(F)$.}\n\\end{atw}\n\\begin{proof}\n\tTheorem \\ref{tw:mC} implies that the considered \\red{element satisfies} the axioms {\\bf a)} and {\\bf b)} of the stable envelope. It is enough to check the axiom {\\bf c)}. 
Namely, for a fixed point $F'\\le F$ we need to show that\n\t$$N^A\\left(mC_{-y}^A(M^+_F \\to M)_{|F'}\\right) +\\frac{s_{F}-s_{F'}}{n} \\subseteq N^A(eu(\\nu^-_{F'}) -\\{0\\})-\\det T^{1\/2}_{F',>0}.$$\n\tNote that a part of theorem \\ref{tw:mC} is an analogous inclusion for the trivial slope. It implies that the point $-\\det T^{1\/2}_{F',>0}$ \\red{does not} belong to the Newton polytope of the motivic Chern class. The Newton polytope is a closed set\\red{,} thus for \\red{a} small enough vector\n\t $\\vv \\in \\mathfrak{a}^*$ its translation by $\\vv$ also \\red{does not} contain the point $-\\det T^{1\/2}_{F',>0}$.\n\tSo it is enough to prove that there exists an integer $n$ such that\n\t$$N^A\\left(mC_{-y}^A(M^+_F \\to M)_{|F'}\\right) +\\frac{s_{F}-s_{F'}}{n} \\subseteq N^A(eu(\\nu^-_{F'}))-\\det T^{1\/2}_{F',>0}.$$\n\tMoreover\\red{,} in the course of the proof of proposition \\ref{pro:mC} we showed the containment of polytopes\n\t $$N^A(eu(\\tilde{\\nu}^-_{F'}))\\subseteq N^A(eu(\\nu^-_{F'}))-\\det T^{1\/2}_{F',>0}.$$\n\t Therefore, it is enough to show that for \\red{a} big enough integer $n$ there is an inclusion\n\t $$N^A\\left(mC_{-y}^A(M^+_F \\to M)_{|F'}\\right) +\\frac{s_{F}-s_{F'}}{n} \\subseteq N^A(eu(\\tilde{\\nu}^-_{F'})).$$\n\t \n\t Consider a lattice polytope $N \\subset \\mathfrak{a}^*$. Define a facet as a codimension one face.\n\t Let an integral hyperplane denote an affine subspace of codimension one, spanned by lattice points. Suppose that the whole interior of the polytope $N$ lies on one side of an integral hyperplane $H$. Denote by $E_H$ \\red{the half-space}\n\t which is the closure of the component of the complement of $H$ which contains the interior of $N$. \n\t \n\t Suppose that the affine span of \\red{a lattice polytope} $N$ is the whole ambient space. For a facet $\\tau$ of $N$\\red{,} let $H_\\tau$ be an integral hyperplane which is the affine span of the face $\\tau$. 
Note that\n\t $$N=\\bigcap_{\\tau} E_{H_\\tau} $$\n\t where the intersection is indexed by \\red{the set of} codimension one faces. \n\t \n\t \\red{A} similar argument\n\t can \\red{be}\n\t applied to any \\old{Newton} \\red{lattice} polytope, not necessarily spanning the whole ambient space. Denote by aff$(-)$ the affine span operator.\n\t For any facet $\\tau$ choose an integral hyperplane $H_\\tau$ such that \n\t $$H_\\tau \\cap \\text{aff}(N)=\\text{aff}(\\tau) \\,. $$\n\tThen\n\t$$N=\\text{aff}(N)\\cap \\bigcap_{\\tau} E_{H_\\tau} \\,. $$\n\tThus\\red{,} to check containment in the polytope $N$ it is enough to check containment in finitely many integral \\red{half-spaces}\n\t$E_{H_\\tau}$ and \\red{the} affine span of $N$. We use this observation for $N$ equal to the Newton polytope $N^A(eu(\\tilde{\\nu}^-_{F'}))$.\n\t \n\t \\red{Let $H$ be an integral hyperplane.} We say that \\red{a} vector\n\t $\\vv \\in \\mathfrak{a}^*$ points to $E_H$ when the addition of $\\vv$ preserves $E_H$. Our strategy for the proof is to show that for an integral hyperplane $H_\\tau$ corresponding to \\red{a} facet\n\t $\\tau$ of the polytope $N^A(eu(\\tilde{\\nu}^-_{F'}))$ at least one of the following conditions holds: \n\t \\begin{itemize}\n\t \t\\item The intersection $N^A\\left(mC_{-y}^A(M^+_F \\to M)_{|F'}\\right) \\cap H_\\tau$ is empty.\n\t \t\\item The vector $s_F-s_{F'}$ points to $E_{H_\\tau}$.\n\t \\end{itemize}\n \tMoreover\\red{,} if an integral hyperplane $H$ contains the whole polytope $N^A(eu(\\tilde{\\nu}^-_{F'}))$ then the addition of the vector $s_F-s_{F'}$ preserves $H$. \n \t\n \tNote that the above facts are sufficient to prove the theorem. 
Namely, proposition \\ref{pro:mC} shows that the polytope $N^A\\left(mC_{-y}^A(M^+_F \\to M)_{|F'}\\right)$ is contained \\red{in}\n \t $N^A(eu(\\tilde{\\nu}^-_{F'}))$.\n \tIt follows that it lies inside $E_{H_\\tau}$ for every facet $\\tau$.\n \tIf the vector $s_F-s_{F'}$ points to $E_{H_\\tau}$ then for every integer $n \\in \\N$\n \t$$N^A\\left(mC_{-y}^A(M^+_F \\to M)_{|F'}\\right)+\\frac{s_{F}-s_{F'}}{n} \\subset E_{H_\\tau} \\red{\\,.} $$\n \tOn the other hand, if the intersection $N^A\\left(mC_{-y}^A(M^+_F \\to M)_{|F'}\\right) \\cap \\red{H_\\tau}$\n \tis empty then the translation of the polytope $N^A\\left(mC_{-y}^A(M^+_F \\to M)_{|F'}\\right)$ by a sufficiently small vector still lies in~$E_{H_\\tau}$. There are only finitely many facets of \\red{ the polytope $N^A(eu(\\tilde{\\nu}^-_{F'}))$} so\n \tthere exists \\red{an integer} $n$ such that\n \t$$N^A\\left(mC_{-y}^A(M^+_F \\to M)_{|F'}\\right)+\\frac{s_{F}-s_{F'}}{n} \\subset \\bigcap_{\\tau}E_{H_\\tau}.$$\n \tMoreover\\red{,} the addition of the vector $s_{F}-s_{F'}$ preserves the affine span of $N^A(eu(\\tilde{\\nu}^-_{F'}))$. It follows that for \\red{a sufficiently big integer} $n$ the desired inclusion holds.\n \t\n \tLet $H \\subset \\mathfrak{a}^*$ be any integral hyperplane. Denote by $\\tilde{H}$ the vector space parallel to $H$ (i.e.\\ a hyperplane passing through $0$).\n \tConsider the one dimensional subspace $$\\mathfrak{h}=\\ker(\\mathfrak{a} \\onto \\tilde{H}^*).$$\n \tThe hyperplane $H$ is integral, so $\\mathfrak{h}$ corresponds to a one dimensional subtorus $\\sigma_H\\subset A$.\n \tChoose an isomorphism $\\sigma_H \\simeq \\C^*$ such that the induced map\n \t$$\\pi_H:\\mathfrak{a}^* \\onto \\mathfrak{a}^*\/\\tilde{H} \\simeq \\mathfrak{h}^* \\simeq \\R,$$\n \tsends the vectors pointing to\n \t$E_H$ to \\red{the} non-negative numbers. 
Thus, the vector $s_F-s_{F'}$ points to $E_H$\n \tif and only if\n \t $$\\pi_H(s_F-s_{F'})\\ge0.$$\n \t\n \t The \\red{choice of} isomorphism $\\sigma_H \\red{\\simeq} \\C^*$ corresponds to \\red{the} choice of a primitive character $\\ttt$ of the torus $\\sigma_H$.\n \t To study the intersection with the hyperplane $H$ we use the limit technique with respect to the torus $\\sigma_H$. We use the definition of the limit map from \\cite{Kon} definition 4.1. It is a map defined on a subring of \\red{the} localised $K$-theory:\n \t $$\\lim_{\\ttt \\to 0}: S_A^{-1}K^A(pt)[y,y^{-1}] \\dashrightarrow S_{A\/\\sigma_H}^{-1}K^{A\/\\sigma_H}(pt)[y,y^{-1}]\\,.$$\n \t The multiplicative system $S_A$ (respectively $S_{A\/\\sigma_H}$) \\old{is equal to} \\red{consists of} all nonzero elements of $K^A(pt)$ (respectively $K^{A\/\\sigma_H}(pt)$). We present a sketch of the construction of the above map. Choose an isomorphism of tori $A \\simeq \\sigma_H\\times A\/\\sigma_H$. It induces an isomorphism $K^A(pt) \\simeq K^{A\/\\sigma_H}(pt)[\\ttt,\\ttt^{-1}]$. Then the limit map is defined on the subring $K^{A\/\\sigma_H}(pt)[\\ttt][y,y^{-1}]$ by killing all positive powers of {\\bf t}. For technical details and extension to the localised $K$-theory\n \t see \\cite{Kon} section~4. \n \t \n \t Let $H$ be a hyperplane corresponding to \\red{a} facet of \\red{the} polytope $N^A(eu(\\tilde{\\nu}^-_{F'})) \\subset E_H$. Then (for a more detailed discussion see remark \\ref{rem:lim})\n \t$$N^A\\left(mC_{-y}^A(M^+_F \\to M)_{|F'}\\right) \\cap H = \\varnothing \\iff\n \t\\lim_{\\ttt \\to 0} \\frac{mC_{-y}^A(M^+_F \\to M)_{|F'}}{eu(\\tilde{\\nu}^-_{F'})}=0\\,.$$\n \tLet $\\tilde{F} \\subset M^{\\sigma_H}$ be a component of the fixed \\red{point} set\n \t which contains $F'$. 
Proposition 4.3 and theorem 4.4 from \\cite{Kon} \\red{imply}\n \t that\n \t\\begin{multline*}\n \t\t\\lim_{\\ttt \\to 0} \\frac{mC_{-y}^A(M^+_F \\to M)_{|F'}}{eu(\\tilde{\\nu}^-_{F'})}=\n \t\n \t\t\\lim_{\\ttt \\to 0}\\left( \\frac{mC_{-y}^A(M^+_F \\to M)_{|\\tilde{F}}}{eu(\\tilde{F}\\to M)\\lambda_{-1}(T^*\\tilde{F})}\\right)_{|F'}= \\\\=\n \t\n \t\t\\left(\\lim_{\\ttt \\to 0} \\frac{mC_{-y}^A(M^+_F \\to M)_{|\\tilde{F}}}{eu(\\tilde{F}\\to M)\\lambda_{-1}(T^*\\tilde{F})}\\right)_{|F'}= \n \t\n \t\t\\left(\\frac{1}{\\lambda_{-1}(T^*\\tilde{F})}\\lim_{\\ttt \\to 0} \\frac{mC_{-y}^A(M^+_F \\to M)_{|\\tilde{F}}}{eu(\\tilde{F}\\to M)}\\right)_{|F'}= \\\\=\n \t\n \t\t\\left(\\frac{mC_{-y}^{A\/\\sigma_{H}}(M^+_F \\cap M_{\\tilde{F}}^{\\sigma_H,+}\n \t\t\\to \\tilde{F})}{\\lambda_{-1}(T^*\\tilde{F})}\\right)_{|F'}\n \t\\end{multline*} \n\twhere $M_{\\tilde{F}}^{\\sigma_H,+}$ is the positive BB-cell of $\\tilde{F}$ with respect to the torus $\\sigma_H$. It follows that if the intersection $ M^+_F \\cap M_{\\tilde{F}}^{\\sigma_H,+}$ is empty then the intersection $N^A\\left(mC_{-y}^A(M^+_F \\to M)_{|F'}\\right) \\cap H$ is also empty.\n\tThe closure of \\red{the} set $M_{\\tilde{F}}^{\\sigma_H,+}$ is $A$-equivariant, thus $M^+_F \\cap M_{\\tilde{F}}^{\\sigma_H,+}$ can be nonempty only if $F$ belongs to the closure of $M_{\\tilde{F}}^{\\sigma_H,+}$.\n\tTo conclude, it is enough to prove that $\\pi_H(s_F-s_{F'})\\ge 0$ whenever $F$ belongs to the closure of $M_{\\tilde{F}}^{\\sigma_H,+}$.\n\t\n\tFor $F \\in \\overline{M_{\\tilde{F}}^{\\sigma_H,+}}$ there exists a finite number of points $A_1,...,A_{m-1},B_1,...,B_m \\in M^{\\sigma_H}$ such that\n\t \\begin{itemize}\n\t \t\\item $F=B_1$ and $B_m \\in \\tilde{F}$,\n\t \t\\item for every $i$ the points $A_i$ and $B_i$ lie in the same component of \\red{the fixed point set}~$M^{\\sigma_H}$,\n\t \t\\item there exists a one dimensional $\\sigma_H$-orbit from the point $B_i$ to $A_{i-1}$,\n\t \\end{itemize}\n (see lemma 9 from \\cite{B-B3} 
for a proof in the case of isolated fixed points). For a fixed point $B \\in M^{\\sigma_H}$ the fiber $s_{|B}$ is a $\\sigma_H$-representation; denote by $\\tilde{s}_B$ its character.\n \\red{If $B \\in M^A$ then} there is an equality $\\tilde{s}_B=\\pi(s_B)$.\n The \\red{line} bundle $s$ is anti-ample, so its restriction to every one dimensional $\\sigma_H$-orbit is also anti-ample. For every anti-ample line bundle on $\\PP^1$\\red{,} the weight on the repelling fixed point is greater than or equal to the weight on the attracting one\\red{.} Thus\n $$\\tilde{s}_{B_i} =\\tilde{s}_{A_{i}} \\ge \\tilde{s}_{B_{i+1}}\\,.$$\n It follows that\n $$\\pi(s_F)=\\tilde{s}_{B_1} \\ge \\tilde{s}_{B_m}=\\pi(s_{F'}).$$\n \n Assume now that an integral hyperplane $H$ contains the whole polytope $N^A(eu(\\tilde{\\nu}^-_{F'}))$. Then the torus $\\sigma_H$ acts trivially on the tangent space $\\tilde{\\nu}^-_{F'}$.\n Consider the fixed \\red{point} set\n component $F_H\\subset M^{\\sigma_H}$ which contains $F'$. It is a smooth closed subvariety of $M$ whose tangent space at $F'$ is the whole tangent space $T_{F'}M$. Thus\\red{,} it contains the connected component of $F'$.\n It follows that the $\\sigma_H$-weights of $s$ restricted to $F$ and $F'$ coincide, thus $\\pi(s_F-s_{F'})=0$.\n\\end{proof}\n\\red{\n\\begin{rem}\n\t Let $H$ be a hyperplane corresponding to a facet of the polytope $N^A(eu(\\tilde{\\nu}^-_{F'}))$ and let $\\sigma_H\\subset A$ be the corresponding one dimensional subtorus. The fixed point set $M^{\\sigma_H}$ may be non-isolated.\n\\end{rem}\n}\n\\begin{rem}\n\tThe inequality about weights of \\red{an} anti-ample line bundle on $\\PP^1$ can be checked directly using the fact that all anti-ample line bundles on the projective line $\\PP^1$ are of the form $\\mathcal{O}(n)$ for $n < 0$. It can also be derived from the localization formula in equivariant cohomology and the fact that \\red{the} degree of an anti-ample line bundle on $\\PP^1$ is negative (cf. 
\\cite{OM} paragraph 3.2.4).\n\\end{rem}\n\n\\begin{rem}\\label{rem:lim}\n\t \\old{Intuitively one can think of the limit map of fraction as considering only this part of classes which corresponds to the last translation of hyperplane which intersects the Newton polytope of denominator. Namely} Consider the subtorus $\\sigma_H \\subset A$, corresponding to some integral hyperplane $H$, with a chosen primitive character ${\\bf t}$. The choice of ${\\bf t}$ induces \\red{a} choice of a \\red{half-space}\n\t $E_H$. Consider classes $a,b \\in K^A(pt)$ such that \n\t $$N^A(a), N^A(b) \\subset E_H \\text{ and } N^A(b)\\cap H\\neq \\varnothing.$$\n\t \\red{Intuitively, the limit of the fraction $\\lim_{\\ttt \\to 0}\\frac{a}{b}$ takes into account the parts of the classes $a,b$ that correspond to the intersections $N^A(a)\\cap H$ and $N^A(b)\\cap H$. More formally, a} choice\n\t of splitting $A \\simeq \\sigma_H \\times A\/\\sigma_H$ corresponds to a choice of integral character $\\gamma \\in \\mathfrak{a}^*$ \\red{whose} restriction to the subtorus $\\sigma_H$ is equal to {\\bf t}.\n\t There \\red{exists an} integer $m$ such that \\red{$0\\in\\gamma^{m}H$ (thus $\\gamma^{m}H=\\tilde{H}$)}. It follows that under the isomorphism $K^A(pt)\\simeq K^{A\/\\sigma_H}(pt)[\\gamma,\\gamma^{-1}]$ we have\n\t $$\\gamma^{m}a,\\gamma^mb \\in K^{A\/\\sigma_H}(pt)[\\gamma].$$\n\t Moreover\\red{,} $\\gamma^mb$ has a nontrivial coefficient corresponding to $\\gamma^{0}$. It is equal to the part of the class $b$\n\t \\red{corresponding}\n\t to the intersection $H\\cap N^A(b)$. 
Denote by $q$ the projection\n\t $$\\red{q:}K^{A\/\\sigma_H}(pt)[\\gamma] \\to K^{A\/\\sigma_H}(pt) $$\n\t defined by \\red{$q(\\gamma)=0$.}\n\t It follows that the limit map is defined on the element $\\frac{a}{b}$ as\n\t $$\\lim_{\\ttt\\to 0}\\frac{a}{b}=\\lim_{\\ttt\\to 0}\\frac{\\gamma^ma}{\\gamma^mb}=\\frac{q(\\gamma^ma)}{q(\\gamma^mb)}.$$\n\\end{rem}\n\n\n\\begin{rem}\n\tUsing limit techniques one usually restricts to $K^\\sigma(pt) \\simeq \\Z[\\ttt,\\ttt^{-1}]$ for \\red{a} general enough subtorus $\\sigma$ and then considers limits (cf. \\cite{FRW,SZZ}).\n\tIn our case\\red{,} we consider a chosen subtorus $\\sigma_H$ so we cannot proceed in this manner. It may happen that after restriction to $K^{\\sigma_H}(pt)$ the denominator vanishes.\n\\end{rem}\n\n\\red{\\begin{rem}\n\tFor a generalization of theorem \\ref{tw:slo} to the case of an arbitrary slope see our next paper \\cite{KonW}.\n\\end{rem}}\n\n\\section{Example: The Projective plane}\nIn this section we aim to illustrate the proof of theorem \\ref{tw:slo} by presenting explicit computations in the case of the projective plane. We consider (using notation from theorem~\\ref{tw:slo})\n\\begin{itemize}\n\t\\item The torus $A=(\\C^*)^2$ acting on the projective plane $M=\\PP^2$ by:\n\t$$(t_1,t_2)[x:y:z]=[x:t_1y:t_2z].$$\n\t\\item The weight chamber corresponding to the one dimensional subgroup $\\sigma(t)=(t,t^2).$\n\t\\item The anti-ample line bundle $s=\\mathcal{O}(-1)$ as a slope.\n\t\\item The fixed points $F=[0:1:0]$ and $F'=[0:0:1].$\n\\end{itemize}\nDenote by $\\alpha$ and $\\beta$ the characters of the torus $A$ given by projections to the first and the second coordinates of $A$, respectively. 
Local computation leads to formulas:\n\\begin{multicols}{2}\n\t\\begin{align*}\n\t&eu(\\nu(F'\\to M))_{|F'}=\\left(1-\\frac{\\beta}{\\alpha}\\right)(1-\\beta) \\\\\n\t&mC_{-y}(M^+_F\\to M)_{|F'}=(1-y)\\frac{\\beta}{\\alpha}(1-\\beta) \\\\\n\t&\\det T^{1\/2}_{F',>0}=0 \\\\\n\t&\\vv:=s_{|F}-s_{|F'}=\\frac{\\alpha}{\\beta}\n\t\\end{align*}\n\t\\columnbreak\n\t\\centering\n\t\\begin{tikzpicture}[scale=1.25]\n\t\\coordinate (Origin) at (0,0);\n\t\\coordinate (XAxisMin) at (-2,0);\n\t\\coordinate (XAxisMax) at (1,0);\n\t\\coordinate (YAxisMin) at (0,-1);\n\t\\coordinate (YAxisMax) at (0,2);\n\t\n\t\\draw [thin, black,-latex] (XAxisMin) -- (XAxisMax);\n\t\\draw [thin, black,-latex] (YAxisMin) -- (YAxisMax);\n\t\n\t\\coordinate (B1) at (0,0);\n\t\\coordinate (B2) at (0,1);\n\t\\coordinate (B3) at (-1,2); \n\t\\coordinate (B4) at (-1,1); \n\n\t\\filldraw[fill=yellow, fill opacity=0.5, draw=yellow, draw opacity = 0] (B1)--(B2)--(B3)--(B4);\n\t\n\t\\foreach \\x in {-2,...,0}{\n\t\t\\node[draw,circle,inner sep=1pt,fill] at (\\x, 0) {};\n\t}\n\t\t\\foreach \\y in {-1,...,1}{\n\t\t\\node[draw,circle,inner sep=1pt,fill] at (0, \\y) {};\n\t}\n\t\\foreach \\x in {-2,...,1}{\n\t\t\\foreach \\y in {-1,...,2}{\n\t\t\t\\node[draw,circle,inner sep=0.5pt,fill] at (\\x,\\y) {};\n\t\t}\n\t}\n\t\n\t\\draw [very thick,blue](B3) -- (B4);\n\t\\draw [very thick,green,-latex](0,0) -- (1,-1);\n\t\\draw[ fill=red] (0,0) circle (.1);\n\t\\node at (1.25,0.25){$\\alpha$};\n\t\\node at (0.25,2.25){$\\beta$};\n\t\\node[green] at (1,-0.75) {$\\vv$};\t\n\t\\node at (0.25,0.5){$\\tau_1$};\n\t\\node at (-0.5,1.75){$\\tau_2$};\n\t\\node at (-1.25,1.5){$\\tau_3$};\n\t\\node at (-0.5,0.25){$\\tau_4$};\n\t\\end{tikzpicture}\n\\end{multicols}\nDenote by $B:=N^A(mC_{-y}(M^+_F\\to M)_{|F'})$ (blue interval), $C=N^A(eu(\\nu(F'\\to M))_{F'})$ (yellow parallelogram) and $D=\\det T^{1\/2}_{F',>0}=0$ (red point). 
Denote the facets of the polytope $C$ by $\\tau_1,\\tau_2,\\tau_3, \\tau_4$ according to the picture.\n\nTheorem \\ref{tw:mC} implies that the blue interval $B$ is contained in the yellow polytope $C$ and the red point $D$ \\red{does not} belong to the interval $B$.\nIt is enough to prove that for \\red{a} sufficiently big integer $n$\nand every facet $\\tau$\n$$B+\\frac{\\vv}{n}\\subset E_{H_{\\tau}}.$$\nLet's compute the\nhalf-planes\nand subtori associated with the facets:\n$$\\begin{array}{|c|c|c|c|c|c|} \\hline\n\t\\text{facet} & E_{H_\\tau} & \\tilde{H}_\\tau &\\sigma_H\\subset A & \\pi_{H_\\tau} & \\text{character {\\bf t}} \\\\\n\t\\hline\n\t\\tau_1 & \\{x\\alpha+y\\beta|x\\le 0\\}&\\red{\\lin}(\\beta)&(t,1)&x\\alpha+y\\beta \\to -x&(t,1)\\to \\frac{1}{t}\\\\\n\t\\tau_2 & \\{x\\alpha+y\\beta|x+y\\le 1\\}&\\red{\\lin}(\\alpha-\\beta)&(t,t)&x\\alpha+y\\beta \\to -x-y&(t,t)\\to \\frac{1}{t}\\\\\n\t\\tau_3 & \\{x\\alpha+y\\beta|x\\ge -1\\}&\\red{\\lin}(\\beta)&(t,1)&x\\alpha+y\\beta \\to x&(t,1)\\to t\\\\\n\t\\tau_4 & \\{x\\alpha+y\\beta|x+y\\ge 0\\}&\\red{\\lin}(\\alpha-\\beta)&(t,t)&x\\alpha+y\\beta \\to x+y&(t,t)\\to t\\\\ \\hline\n\\end{array}$$\n\\red{Here $\\lin$ denotes the linear span.}\n\\red{A} choice\nof splitting $A \\simeq \\sigma_H \\times A\/\\sigma_H$ corresponds to a choice of integral character $\\gamma \\in \\mathfrak{a}^*$\n\\red{whose}\nrestriction to the subtorus $\\sigma_H$ is equal to {\\bf t}. 
For $\\tau_1$ and $\\tau_3$ let's choose $\\gamma=\\left(\\frac{\\alpha}{\\beta^2}\\right)^{\\pm 1}.$ It induces a splitting of the $K$-theory $K^A(pt)=\\Z[\\beta,\\beta^{-1}][\\frac{\\alpha}{\\beta^2},\\frac{\\beta^2}{\\alpha}].$ Using this splitting we can compute the limits\n\\begin{align*}\n\t&\\tau_1: \\ \\lim_{\\ttt \\to 0}(1-y) \\frac{\\frac{\\beta}{\\alpha}}{1-\\frac{\\beta}{\\alpha}}=\n\t(1-y)\\lim_{\\ttt \\to 0} \\frac{\\frac{\\beta^2}{\\alpha}}{\\beta-\\frac{\\beta^2}{\\alpha}}=(1-y)\\frac{0}{\\beta}=0 \\red{\\,,} \\\\\n\t&\\tau_3: \\ \\lim_{\\ttt \\to 0}(1-y) \\frac{\\frac{\\beta}{\\alpha}}{1-\\frac{\\beta}{\\alpha}}=\n\t(1-y)\\lim_{\\ttt \\to 0} \\frac{\\frac{1}{\\beta}}{\\frac{\\alpha}{\\beta^2}-\\frac{1}{\\beta}}=(1-y)\\frac{\\frac{1}{\\beta}}{-\\frac{1}{\\beta}}=y-1 \\red{\\,.}\n\\end{align*}\n\t For $\\tau_2$ and $\\tau_4$ let's choose $\\gamma=\\alpha^{\\pm 1}$. It induces a splitting $K^A(pt)=\\Z[\\frac{\\alpha}{\\beta},\\frac{\\beta}{\\alpha}][\\alpha,\\alpha^{-1}]$ (the character $\\frac{\\beta}{\\alpha}$ is a basis of $\\tilde{H}_\\tau$) and\n\t\\begin{align*}\n\t &\\tau_2,\\tau_4: \\lim_{\\ttt \\to 0}(1-y) \\frac{\\frac{\\beta}{\\alpha}}{1-\\frac{\\beta}{\\alpha}}=\n\t (1-y)\\frac{\\frac{\\beta}{\\alpha}}{1-\\frac{\\beta}{\\alpha}}.\n\t\\end{align*}\n\t \\red{These}\n\t calculations imply \\old{the fact} that \n\t $$B\\cap\\tau_i \\neq \\varnothing \\iff i\\in\\{2,3,4\\}.$$\n\t We want to show that \\red{the} addition of the vector $\\vv$ preserves the\n\t half-plane\n\t $E_{H_{\\tau}}$ for these three facets by proving that\n\t$$\\pi_{H_{\\tau_i}}(\\vv) \\ge 0 \\text{ for } i \\in \\{2,3,4\\}. $$ \n\tFor $\\tau_2,\\tau_4$ the points $F',F$ \\red{belong} to the same fixed \\red{point} set\n\tcomponent of the torus~$\\sigma_{H}$. It implies that $\\pi_{H}(\\vv)=0$\\red{. 
This} agrees with the direct computation\n\t$$\\pi_{H}(\\vv)=\\pi_H(\\alpha-\\beta)=\\pm(1+(-1))=0.$$\n\tMoreover\\red{,} for $\\tau_3$ there is a one dimensional $\\sigma_H$-orbit $[0:x:y]$ from $F$ to $F'$. It implies that $\\pi_{H_{\\tau_3}}(\\vv)>0$\\red{. This} agrees with the direct computation\n\t$$\\pi_{H_{\\tau_3}}(\\vv)=\\pi_{H_{\\tau_3}}(\\alpha-\\beta)=1.$$\n\tTo conclude, for every $n \\in \\N$ the interval $B+\\frac{\\vv}{n}$ is contained in the intersection of\n\thalf-planes\n\t $\\bigcap_{i=2}^4E_{H_{\\tau_i}}$. Moreover\\red{,} it is also contained in $E_{H_{\\tau_1}}$ for \\red{a} sufficiently big integer~$n$.\n\t\n\\section{Appendix \\red{A}: uniqueness of the stable envelopes} \\label{s:Ok}\nIn \\cite{OS,O2} the stable envelope was defined for \\red{an} action of a reductive group $G$. In this appendix we show that for the group $G$ equal to a torus and a general enough slope our definition \\ref{df:env} of the stable envelope coincides with\\old{the} Okounkov's\\old{one}. Moreover\\red{,} we prove the uniqueness of stable envelopes for an arbitrary slope.\n\n\t\\red{We use the notations and assumptions from the beginning of section \\ref{s:env}.} According to \\cite{OS,O2} the stable envelope is a map $$K^\\T(X^A) \\to K^\\T(X)$$ given by a correspondence\n\t$$Stab \\in K^\\T(X^A\\times X) \\,,$$\n\twhich \\red{satisfies} three properties (cf. \\cite{O2} paragraph 9.1.3).\n\tFor a $\\T$-variety $X$ and a finite set $F$ \\red{(with the trivial $\\T$-action)} any map of $K^\\T(pt)$-modules\n\t$$f: K^\\T(F) \\to K^\\T(X)$$\n\tis determined by a correspondence $G \\in K^\\T(F\\times X)$ such that for any $x\\in F$\n\t$$G_{x\\times X}=f(1_{x}).$$\n\tBelow we denote both the morphism and the correspondence by $Stab$. The main ingredient in\\old{the} Okounkov's definition is the attracting set. For a one parameter subgroup $\\sigma:\\C^*\\to A$ it is defined as (cf. 
\\cite{OS} paragraph 2.1.3, \\cite{O2} paragraph 9.1.2):\n\t$$Attr=\\{(y,x) \\in X^A \\times X|\\lim_{t\\to 0}\\sigma(t)x=y\\} \\,. $$\n\tMoreover\\red{,} for \\red{a fixed point $F\\in X^A$} we define\n\t$$Attr(F)=\\{x|\\lim_{t\\to 0}\\sigma(t)x\\in F\\} \\subset X \\,.$$\n\tThe straightforward comparison of definitions shows that the attracting sets coincide with the BB-cells\n\t$$Attr(F)=X_F^+ \\,.$$\n\t\\old{Moreover in the case of isolated fixed points we have\n\t$Attr=\\bigsqcup_{F \\in X^\\T} F \\times X_F^+ \\,.$}\t\n\n{\\bf Support condition:} (paragraph 2.1.1 from \\cite{OS},\nparagraph 9.1.3 point 1 from \\cite{O2}\nand theorem 3.3.4 point (i) of \\cite{OM})\nIn\\old{the} Okounkov's papers it is required that\n$$\\supp (Stab) \\subset \\bigsqcup_{F \\in X^\\red{A}}\n\\left(F\\times \\bigsqcup_{F'\\le F} X^+_{F'}\\right) \\,,$$\nwhich coincides with our axiom {\\bf a)}.\n\n{\\bf Normalization condition:} (paragraph 2.1.4 from \\cite{OS})\nIt is required that\n$$Stab_{|F\\times F}=(-1)^{\\rank T^{1\/2}_{F,>0}}\\left(\\frac{\\det \\nu^-_F}{\\det T^{1\/2}_{F,\\neq0}}\\right)^{1\/2} \\otimes \\mathcal{O}_{Attr}|_{F\\times F} \\red{\\,.}$$\nAfter substitutions\n\\begin{align*}\n&\\mathcal{O}_{Attr}|_{F\\times F}=\\mathcal{O}_{\\diag F} \\otimes eu(\\nu_F^-) \\red{\\,,} \\\\\n& \\frac{\\det \\nu^-_F}{\\det T^{1\/2}_{F,\\neq0}}= h^{\\rank T^{1\/2}_{>0}}\\left(\\frac{1}{\\det T^{1\/2}_{F,>0}}\\right)^2\n\\end{align*}\n\\red{as} noted in paragraph 2.1.4 of \\cite{OS} we obtain\n$$Stab_{|F\\times F}=\neu(\\nu^-_F)\\frac{(-1)^{\\rank T^{1\/2}_{F,>0}}}{\\det T^{1\/2}_{F,>0}} \\otimes h^{\\frac{1}{2}\\rank T^{1\/2}_{F,>0}}\\otimes \\mathcal{O}_{\\diag F}.$$\nChanging the correspondence to a morphism we get an equivalent condition\n$$Stab(1_F)_{|F}=\neu(\\nu^-_F)\\frac{(-1)^{\\rank T^{1\/2}_{F,>0}}}{\\det T^{1\/2}_{F,>0}} h^{\\frac{1}{2}\\rank T^{1\/2}_{F,>0}}\\,,$$\nwhich is exactly our axiom {\\bf b)}.\n\n\n{\\bf Smallness condition:} (paragraph 2.1.6 from \\cite{OS}, paragraph 9.1.9 from \\cite{O2})\nIn the case of isolated fixed points\\red{,} the last axiom of the stable envelope from\\old{the} Okounkov's papers states that for any pair of fixed points $F_1,F_2 \\in X^A$ \n 
$$N^A\\left(Stab_{|F_1\\times F_2}\\otimes s_{|F_1}\\right) \\subseteq\n N^A\\left(Stab_{|F_2\\times F_2}\\otimes s_{|F_2}\\right).$$\n The support condition implies that this requirement is nontrivial only when $F_1 > F_2.$ Changing the correspondence to a morphism we get an equivalent form\n $$\n N^A\\left(Stab(1_{F_1})_{|F_2}\\right) +s_{|F_1} \\subseteq\n N^A\\left(Stab(1_{F_2})_{|F_2}\\right) + s_{|F_2}.\n $$\n \\old{Replace $Stab(1_{F_2})_{|F_2}$ by its value determined by the normalization condition.}\n \\red{The normalization axiom implies that\n $$N^A\\left(Stab(1_{F_2})_{|F_2}\\right)=N^A\\left(eu(\\nu^-_{F_2})\\frac{(-1)^{\\rank T^{1\/2}_{{F_2},>0}}}{\\det T^{1\/2}_{{F_2},>0}} h^{\\frac{1}{2}\\rank T^{1\/2}_{{F_2},>0}}\\right) \\,. $$}\n Note that the torus $A$ preserves the symplectic form $\\omega$, thus multiplication by $h$ \\red{does not} change the Newton polytope $N^A$. Thus, we get an equivalent formulation\n $$N^A\\left(Stab(1_{F_1})_{|F_2}\\right)+s_{|F_1}\n \\subseteq\n N^A\\left(eu(\\nu^-_{F_2})\\right) -\\det T^{1\/2}_{F_2,>0} +s_{|F_2},$$\n which is very similar to the axiom {\\bf c)}. The only difference is that we additionally require\n $$-\\det T^{1\/2}_{F_2,>0} +s_{|F_2} \\notin N^A\\left(Stab(1_{F_1})_{|F_2}\\right)+s_{|F_1} .$$\n For a general enough slope this requirement automatically holds because \\red{the point}\n $$-\\det T^{1\/2}_{F_2,>0} +s_{|F_2}-s_{|F_1}$$\n is a vertex of the polytope which \\red{is not} a lattice point.\n \\red{The} addition of this assumption is \\red{necessary} to obtain uniqueness of the stable envelopes for all slopes.\n \n\n\t\\begin{ex} \\label{ex:uni}\n\tConsider the variety $X=T^*\\PP^1$ \\red{equipped} with the action of the torus $\\T=\\C^*\\times A$ where $A$ is the one dimensional torus acting on $\\PP^1$ by\n\t$$\\alpha[a:b]=[\\alpha a:b]$$\n\tand $\\C^*$ acts on the fibers by scalar multiplication. Denote by $\\alpha$ and $y$ the characters of $\\T$ corresponding to projections to the tori $A$ and $\\C^*$. 
The action of the torus $\\T$ has two fixed points $\\ee_1=[1:0]$ and $\\ee_2=[0:1]$. \n\tThe variety $X$ satisfies \\red{the} condition~$(\\star)$ in a trivial way.\n\t\n\tConsider the stable envelope for the positive weight chamber (such that $\\alpha$ is \\old{a} positive), the tangent bundle $T\\PP^1$ as polarization and the trivial line bundle \\red{$\\theta$} as a slope. \n\tIf we omit the point zero in the axiom {\\bf c)} then both\n\t$$Stab^\\red{\\theta}(\\ee_1)=1-\\mathcal{O}(-1), \\ Stab^\\red{\\theta}(\\ee_2)=\\frac{1}{y}- \\frac{\\mathcal{O}(-1)}{\\alpha}$$\n\tand $$Stab^\\red{\\theta}(\\ee_1)=1-\\mathcal{O}(-1), \\ Stab^\\red{\\theta}(\\ee_2)=\\frac{\\mathcal{O}(-1)}{y}- \\frac{\\mathcal{O}(-2)}{\\alpha}$$\n\t\\red{satisfy} the axioms of the stable envelope.\n\\end{ex}\n\nThe rest of this appendix is devoted to the proof of uniqueness of the stable envelope (proposition \\ref{pro:uniq}).\nFor \\red{a} general enough slope it was proved in proposition 9.2.2 of \\cite{O2}. For the sake of completeness\\red{,} we present it together with the necessary technical details omitted in the original.\nThe proof is a generalisation of the proof of uniqueness of the cohomological envelopes (paragraph 3.3.4 in \\cite{OM}).\nWe need the following lemma.\n\\begin{alemat} \\label{lem:uniq}\n\tChoose a set of vectors $l_F \\in \\Hom(A,\\C^*)\\otimes \\Q$ indexed by \\red{the fixed point set $X^A$.}\n\tSuppose that an element $a \\in K^\\T(X)$ satisfies the conditions\n\t\\begin{enumerate}\n\t\t\\item $\\supp(a) \\subset \\bigsqcup_{F\\in X^\\red{A}} X^+_F$\\red{,}\n\t\t\\item\t\t\\red{for any fixed point $F\\in X^A$ we have containment of the Newton polytopes}\n\t\t\t$$N^A(a_{|F}) \\subseteq \\left(N^A(eu(\\nu^-_{F}))\\setminus \\{0\\}\\right)+l_F \\,.$$\t\n\t\\end{enumerate}\n\tThen $a=0$.\n\\end{alemat}\n\\begin{proof}\n\tWe proceed by induction on the partially ordered set $X^A$. \n\tSuppose that the element $a$ is supported on the closed set $Y=\\bigsqcup_{F\\in Z} X^+_F$ for some subset $Z\\subset X^\\red{A}$. 
Choose a BB-cell $X^+_{F_1}$, corresponding to the fixed point $F_1 \\in X^A$, which is an open subvariety of $Y$. We aim to show that $a$ is supported on the closed subset $\\bigsqcup_{F\\in (Z-F_1)} X^+_F$. By induction it implies that $a=0$. \\\\\n\tChoose an open subset $U \\subset X$ such that $U \\cap Y =X^+_{F_1}$. Consider the diagram \n\t$$\n\t\\xymatrix{\n\t\t& U \\ar[r]^i & X \\\\\n\t\tF_1 \\ar[r]^{s_0} & X^+_{F_1} \\ar[u]^{\\tilde{j}} \\ar[r]^{\\tilde{i}} & Y \\ar[u]^{j}\n\t}\n\t$$\n\tThe square in the diagram is \\red{a} pullback. The BB-cells are smooth\\red{,} locally closed subvarieties, so the map $\\tilde{j}$ is an inclusion of \\red{a} smooth subvariety.\n\tThere exists an element $\\alpha \\in G^\\T(Y) $ such that $j_*(\\alpha)=a$. \n\tIt follows that:\n\t$$ a_{|F_1}=(j_*\\alpha)_{|F_1}=\n\t\\red{s_0^*}\\tilde{j}^*i^*j_*\\alpha=\n\t\\red{s_0^*}\\tilde{j}^*\\tilde{j}_*\\tilde{i}^* \\alpha = eu(\\nu_{F_1}^-) \\alpha_{|F_1},$$\n\twhich implies\n\t\\begin{align} \\label{wyr:i1}\n\tN^A(eu(\\nu_{F_1}^-) \\alpha_{|F_1}) =N^A(a_{|F_1}) \\subseteq\n\t\\left(N^A(eu(\\nu^-_{F_1}))\\setminus \\{0\\}\\right) +l_{F_1}.\n\t\\end{align}\n\tAssume that $\\alpha_{|F_1}$ is a nonzero element. Then the Newton polytope $N^A(\\alpha_{|F_1})$ is nonempty. The ring $K^{\\T\/A}(F_1)$ is a domain, so proposition \\ref{lem:New} (b) implies that\n\t\\begin{align} \\label{wyr:i2}\n\tN^A\\left(eu(\\nu^-_{F_1})\\right) \\subseteq\n\tN^A\\left(eu(\\nu_{F_1}^-)\\right) +N^A(\\alpha_{|F_1})=\n\tN^A\\left(eu(\\nu_{F_1}^-) \\alpha_{|F_1}\\right).\n\t\\end{align}\n\t\\old{In the case of non isolated fixed points one need to use proposition \\ref{lem:New} (c) for the class $eu(\\nu^-_{F_1})$ (see remark \\ref{rem:N}).} The inclusions (\\ref{wyr:i1}) and (\\ref{wyr:i2}) imply that\n\t$$N^A\\left(eu(\\nu^-_{F_1})\\right)\\subseteq\\left(N^A(eu(\\nu^-_{F_1}))\\setminus \\{0\\}\\right) +l_{F_1} \\,. $$\n\tBut no polytope can be translated into a proper subset of itself. 
This contradiction proves that the element $\\alpha_{|F_1}$ is equal to zero. The map $s_0$ is a section of an affine bundle \n\tso it induces an isomorphism \\red{of} the algebraic $K$-theory. It follows that $\\alpha_{|X^+_{F_1}}=0$. Thus\\red{,}\n\tthe element $\\alpha$ is supported on the closed set $\\bigsqcup_{F\\in (Z-F_1)} X^+_F$. It follows that $a$ is also supported on this set. \n\\end{proof}\n\\begin{proof}[Proof of proposition \\ref{pro:uniq}]\n\tLet $\\{Stab(F)\\}_{F\\in X^A} $ and $\\{\\widetilde{Stab}(F)\\}_ {F\\in X^A}$ be two sets of elements satisfying the axioms of stable envelope. It is enough to show that for any \\red{fixed point $F\\in X^A$} the element $Stab(F)-\\widetilde{Stab}(F)$\n\tsatisfies conditions of lemma \\ref{lem:uniq} for the set of vectors\n\t$$l_{F'}= s_{F'}-s_F-\\det T^{1\/2}_{F',>0} .$$\n\tThe support condition follows from the axiom {\\bf {a)}}.\n\tLet's focus on the second condition. The only nontrivial case is $F'< F$. In the other cases the axioms {\\bf {a)}} and {\\bf {b)}} imply that\n\t$$Stab(F)_{\\red{|F'}} -\\widetilde{Stab}(F)_{\\red{|F'}}=0.$$\n\tWhen $F'< F$ the axiom {\\bf {c)}} implies that the Newton polytopes $N^\\red{A}(Stab(F)_{\\red{|F'}})$ and $N^\\red{A}(\\widetilde{Stab}(F)_{\\red{|F'}})$ are contained in the convex set (cf. proposition \\ref{lem:ver})\n\t$$\\left(N^A(eu(\\nu^-_{\\red{F'}}))\\setminus \\{0\\}\\right)+l_\\red{F'}.$$\n\tThus\n\t\\begin{align*}\n\tN^\\red{A}\\left(Stab(F)_{\\red{|F'}}-\\widetilde{Stab}(F)_{\\red{|F'}}\\right) \\subseteq&\n\tconv\\left(N^\\red{A}(Stab(F)_{\\red{|F'}}),N^\\red{A}(\\widetilde{Stab}(F)_{\\red{|F'}})\\right) \\\\\n\t\\subseteq& \\left(N^A(eu(\\nu^-_{\\red{F'}}))\\setminus \\{0\\}\\right)+l_\\red{F'} \\,.\n\t\\end{align*} \n\\end{proof}\n\\red{\n\\begin{rem}\n\tIn this paper we always assume that the fixed point set $X^A$ is finite. In the case of nonisolated fixed points, our definition \\ref{def:ele} is not equivalent to Okounkov's definition. 
For a component of the fixed point set $F\\subset X^A$, the morphism\n\t$$K^\\T(F)\\to K^\\T(X)$$\n\t is not determined by its value on the element $1_F$. However, even in this case, there is at most one element satisfying the axioms of definition \\ref{def:ele}. The proofs of analogues of proposition \\ref{pro:uniq} and lemma \\ref{lem:uniq} are almost identical to those presented above. The only difference is that the ring $K^\\T(F)$ may not be a domain. Thus, in the proof of lemma \\ref{lem:uniq}, one needs to use proposition \\ref{lem:New} (c) (for the class $eu(\\nu^-_{F_1})$, see remark \\ref{rem:N}) instead of \\ref{lem:New} (b).\n\\end{rem}\n}\n\\section{Appendix \\red{B}: The local product property of Schubert cells} \\label{s:G\/P}\nLet $G$ be \\red{a} reductive, complex Lie group\nwith a chosen maximal torus $\\T$ and Borel subgroup $B^+$. Any one dimensional subtorus $\\sigma \\subset \\T$ induces a linear functional $$\\varphi_\\sigma:\\mathfrak{t}^* \\to \\C.$$ For a general enough subtorus $\\sigma$ \nwe can assume that no roots belong to the kernel of this functional. Consider the Borel subgroup $B_\\sigma^+$ such that the corresponding Lie algebra is \\red{the} sum of those weight spaces whose characters are positive with respect to $\\varphi_\\sigma$. Denote its unipotent subgroup by $U_\\sigma^+$. 
Analogously one can define the groups $B_\\sigma^-$ and $U_\\sigma^-$.\n\nFor a parabolic group $B^+ \\subset P \\subset G$ consider the BB-decomposition of the variety $G\/P$ with respect to the torus $\\sigma$.\nIt is a classical fact that\nthe positive (respectively negative) BB-cells\nare the orbits of the group $B_\\sigma^+$ (respectively $B_\\sigma^-$).\nWe prove that the stratification of $G\/P$ by the BB-cells of the torus $\\sigma$ behaves like a product in a neighbourhood of a fixed point of the torus $\\T$ (see definition \\ref{df:prd}).\n\\begin{atw}\\label{tw:prod}\n\tConsider the situation described above.\nAny fixed point $x \\in(G\/P)^\\T$ has an open neighbourhood $U$ such that:\n\t\\begin{enumerate}\n\t\t\\item There exists a $\\T$-equivariant isomorphism $$\\theta: U \\simeq \\left(U\\cap (G\/P)_x^+\\right)\\times \\left(U\\cap (G\/P)_x^-\\right) $$\n\t\t\\item For any fixed point $y \\in(G\/P)^\\T$ the isomorphism $\\theta$ induces an isomorphism:\n\t\t$$U\\cap (G\/P)_y^+ \\simeq \\left(U\\cap (G\/P)_x^+\\right)\\times \\left(U\\cap (G\/P)_x^- \\cap (G\/P)_y^+\\right)$$\n\t\\end{enumerate}\t\n\\end{atw}\nIn the course of the proof we use the following interpretation of classical notions of the theory of Lie groups in the language of the BB-decomposition. Note that we consider BB-cells in smooth quasi-projective varieties.\n\\begin{alemat} \\label{lem:BorABB}\n\tConsider \\red{the} action\n\tof the torus $\\sigma$ on the group $G$ defined by conjugation. Denote by $F$ the component of the fixed \\red{point} set\n\twhich contains the identity. 
For a subset $Y\\subset F$ we use abbreviations\n\t$$G^+_Y=\\{x\\in G| \\lim_{t\\to 0}x\\in Y\\} \\text{ and } G^-_Y=\\{x\\in G| \\lim_{t\\to \\infty}x\\in Y\\}$$\n\tfor the fibers of projections $G^+_F\\to F$ and $G^-_F\\to F$ over $Y$.\n\t\\begin{enumerate}\n\t\t\\item The Borel subgroup $B_\\sigma^+$ (respectively $B_\\sigma^-$) is the positive (respectively negative) BB-cell of the maximal torus $\\T$ i.e.\n\t\t$$B_\\sigma^+=G^+_\\T\\,.$$\n\t\t\\item The unipotent subgroup $U_\\sigma^+$ (respectively $U_\\sigma^-$) is the positive (respectively negative) BB-cell of the identity element i.e.\n\t\t$$U_\\sigma^+=G^+_{id}\\,.$$\n\t\\end{enumerate} \n\\end{alemat}\n\\begin{proof}\n\tWe prove only the first case for the positive Borel subgroup. \\red{The} other cases\n\tare analogous.\n\tIt is enough to show that $G^+_\\T$ is a connected subgroup of $G$ whose Lie algebra coincides with the Lie algebra of $B_\\sigma^+$.\n\t\n\tThe variety $G^+_\\T$ is a subgroup because the maximal torus $\\T$ is a group and \\red{the} limit \\red{map} preserves multiplication. Namely for $g,h\\in G^+_\\T$ such that\n\t$\\lim_{t\\to 0} g=a \\in \\T$ and $\\lim_{t\\to 0} h=b \\in \\T$\n\tit is true that\n\t$$\\lim_{t\\to 0} g^{-1}h=a^{-1}b \\in \\T. $$\n\tThe variety $G^+_\\T$ is connected because the maximal torus $\\T$ is connected.\n\tSo it is enough to compute the tangent space to $G^+_\\T$ at identity. 
The exponential map is an isomorphism in some neighbourhood of zero, so we can limit ourselves to computations in the Lie algebra $\\mathfrak{g}$.\n\tThe action of \\red{the} torus $\\sigma$ on $\\mathfrak{g}$ is given by differentiation of the action on $G$.\n\tThe tangent space $T_0G^+_\\T$ is equal to the BB-cell $\\mathfrak{g}^+_\\mathfrak{t}$.\n\tConsider \\red{the} weight decomposition $$\\mathfrak{g}=\\bigoplus_{h\\in \\mathfrak{t}^*} V_h.$$\n\tDifferentiation of the action of $\\sigma$ on $\\mathfrak{g}$ is given by the Lie bracket, so\n\t$$\\mathfrak{g}_\\mathfrak{t}^+=\\bigoplus_{h\\in \\mathfrak{t}^*, \\varphi_\\sigma(h) \\ge 0} V_h. $$\n\t\\red{This}\n\tis exactly the tangent space to the Borel subgroup $B_\\sigma^+$. \n\\end{proof}\n\\begin{proof}[Proof of the theorem \\ref{tw:prod}]\n\tNote that the Weyl group acts transitively on \\red{the fixed point set $(G\/P)^\\T$.}\n\tThus\\red{,} replacing the torus $\\sigma$ by its conjugate by a Weyl group element we may assume that the fixed point $x$ is equal to the class of the identity.\n\t\n\tDenote the Lie algebra of the parabolic subgroup $P$ by $\\mathfrak{p} \\subset \\mathfrak{g}$. Denote by $\\mathfrak{u}_P$ the Lie subalgebra consisting of the root spaces which \\red{do not} belong to $\\mathfrak{p}$. Let $U_P$ be \\red{the}\n\tcorresponding Lie group.\n\t The group $U_P$ is unipotent (as a subgroup of the unipotent group $U^-$). Consider the action of \\red{the} torus\n\t $\\T$ on $U_P$ by conjugation. Let's note two facts from the theory of Lie groups.\n\t\\begin{enumerate}\n\t\t\\item $U_P$ is isomorphic to its complex Lie algebra as a complex $\\T$-variety (cf. paragraph 15.3b from \\cite{Bor}, or paragraph 8.0 from \\cite{Unip}). 
\\label{1}\n\t\t\\item The quotient map $G \\to G\/P$ induces a $\\T$-equivariant isomorphism from $U_P$ to some open neighbourhood of the identity.\n\t\\end{enumerate}\n\tChoose $U_P$ as a neighbourhood $U$ of the identity.\n\tThe second observation and \\red{the} second point of lemma \\ref{lem:BorABB} \\red{imply} that:\n\t$$ X_+:=U_P \\cap (G\/P)_{id}^+ \\simeq U_P \\cap G_{id}^+ \\simeq U_P \\cap U_\\sigma^+ \\red{\\,,}$$\n\tanalogously\n\t$$ X_-:=U_P \\cap (G\/P)_{id}^- \\simeq U_P \\cap U_\\sigma^-.$$\n\tBoth \\red{isomorphisms} are given by the quotient morphism $G \\to G\/P$. We define a morphism \n\t$$\\theta: X_+ \\times X_- \\to U_P $$\n\tas multiplication in $U_P$. We aim to prove that this is an isomorphism. We start by showing injectivity on points. Both varieties $X_+$ and $X_-$ are subgroups of $U_P$. So to prove injectivity it is enough to show that $X_+ \\cap X_- =\\{id\\}$. But $X_+$ is contained in the positive unipotent group and $X_-$ in the negative unipotent group, so their intersection must be trivial. \n\t\n\tAs a variety $U_P$ is isomorphic to an affine space: its Lie algebra $\\mathfrak{u}_P$. The induced action of $\\T$ on the linear space $\\mathfrak{u}_P$ is linear: it is \\red{a} part of the adjoint representation of $G$.\n\tIt follows that both $X_+$ and $X_-$ are BB-cells of \\red{a} linear action on a linear space and therefore linear subspaces. Thus\\red{,} the product $X_+\\times X_-$ is isomorphic to \\red{an} affine space of dimension equal to the dimension of $U_P$.\n\tThus\\red{,} the map $\\theta$ is an algebraic endomorphism of an affine space which is injective on points. The Ax\u2013Grothendieck theorem (cf. \\cite[Theorem 10.4.11.]{Ax,Gro})\n\timplies that it is bijective on points.\n\tAffine space is smooth and connected, so the Zariski main theorem (cf. 
\\cite[Theorem 4.4.3]{EGA3.1})\n\t implies that $\\theta$ is an algebraic isomorphism.\n\t \n\t To prove the second property it is enough to show the containment\n\t $$\\theta\\left(X_+ \\times (U_P\\cap (G\/P)_y^+)\\right) \\subset (G\/P)_y^+,$$\n\t for any fixed point $y$. Note that\n\t $$X_+ \\subset U_\\sigma^+\\subset B_\\sigma^+.$$\n\t Moreover\\red{,} the BB-cell $(G\/P)_y^+$ is an orbit of the group $B_\\sigma^+$ and the morphism $\\theta$ coincides with the action of $B_\\sigma^+$. So the desired inclusion holds. \n\\end{proof}\n\\begin{rem}\n\tOne can show alternatively that $X_-$ and $X_+$ are isomorphic to affine spaces using theorem 1.5 of \\cite{JeSi}. It is also possible to omit the Ax-Grothendieck theorem by using classical results of the theory of Lie groups. Namely lemma 17 from \\cite{Stei} implies that the map $\\theta$ is bijective.\n\\end{rem}\n\n \t\t\n\\newcommand{\\etalchar}[1]{$^{#1}$}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec1}\n\nMany illnesses show heterogeneous response to treatment. For\nexample, a study on schizophrenia \\cite{ishigooka2001} found that\npatients who take the same antipsychotic (olanzapine) may have very\ndifferent responses. Some may have to discontinue the treatment due\nto serious adverse events and\/or acutely worsened symptoms, while\nothers may experience few if any adverse events and have improved\nclinical outcomes. Results of this type have motivated\nresearchers to advocate the individualization of treatment to each\npatient \\cite{lesko2007,piquette2007,insel2009}. One step in this\ndirection is to estimate each patient's risk level and then match\ntreatment to risk category \\cite{cai2008,cai2010}. 
However, this\napproach is best used to decide whether to treat; otherwise it\nassumes the knowledge of the best treatment for each risk category.\nAlternatively, there is an abundance of literature\nfocusing on predicting each patient's prognosis\nunder a particular treatment \\cite{feldstein1978,stoehlmacher2004}.\nThus, an obvious way to individualize treatment is to recommend the\ntreatment achieving the best predicted prognosis for that patient. In\ngeneral, the goal is to use data to construct individualized treatment rules\nthat, if implemented in the future, will optimize\nthe mean response.\n\n\nConsider data from a single stage randomized trial involving\nseveral active treatments. A first natural procedure to construct the\noptimal individualized treatment rule is to maximize an empirical\nversion of the mean response over a class of treatment rules (assuming\nlarger responses are preferred). As will be seen, this maximization\nis computationally difficult because the mean response of a\ntreatment rule is the expectation of a weighted indicator that is\nnoncontinuous and nonconcave in the parameters. To address this\nchallenge, we make a substitution. That is, instead of directly maximizing\nthe empirical mean response to estimate the treatment rule, we\nuse a two-step procedure that first estimates a\nconditional mean and then from this estimated conditional mean derives the\nestimated treatment rule. 
As will be seen in Section \\ref{sec:relation}, even if the optimal treatment rule is contained in the\nspace of treatment rules considered by the substitute two-step\nprocedure, the estimator derived from the two-step procedure may not be\nconsistent.\nHowever, if the conditional mean is modeled correctly, then\nthe two-step procedure consistently estimates the optimal\nindividualized treatment rule.\nThis\nmotivates consideration of rich conditional mean models with many\nunknown parameters.\nFurthermore, there\nare frequently many pretreatment variables that may or may not be\nuseful in constructing an optimal individualized treatment rule, yet\ncost and interpretability considerations imply that\nfewer rather than more variables should be used by the treatment\nrule. This consideration motivates the use of $l_1$-penalized least\nsquares ($l_1$-PLS).\n\nWe propose to estimate an optimal individualized treatment rule using\na~two-step procedure that first estimates the conditional mean response\nusing $l_1$-PLS with a rich linear model and, second, derives the\nestimated treatment rule from the estimated conditional mean.\nFor brevity, throughout, we call this two-step procedure the $l_1$-PLS method.\nWe\nderive several finite sample upper bounds on the difference between\nthe mean response to the optimal treatment rule and the mean\nresponse to the estimated treatment rule. 
All of the\nupper bounds hold even if our linear model for the conditional mean\nresponse is incorrect and are, to our knowledge, the best available up to constants.\nWe use the upper bounds in Section~\\ref{sec:relation} to illuminate the\npotential\nmismatch between using least squares in the two-step procedure and the\ngoal of\nmaximizing the mean response.\nThe\nupper bounds in Section~\\ref{sec:finaloracle} involve a minimized sum\nof the approximation error and estimation\nerror; both errors result from the estimation of the conditional mean response.\nWe shall see that $l_1$-PLS estimates a linear model that minimizes\nthis sum of approximation and estimation errors among a set of suitably\nsparse linear models.\n\nIf the part of the model for the conditional mean\ninvolving the treatment effect is correct, then the upper bounds imply\nthat, although a surrogate two-step procedure is used, the estimated\ntreatment rule is consistent. The upper bounds provide a convergence\nrate as well. Furthermore, in this\nsetting, the upper bounds can be used to inform how\nto choose the tuning parameter involved in the $l_1$ penalty to\nachieve the best rate of convergence. As a~by-product,\nthis paper also contributes to the existing literature on $l_1$-PLS\nby providing a finite sample prediction error\nbound for the $l_1$-PLS estimator in the random design setting without\nassuming that the model class contains or is close to the true model.\n\n\nThe paper is organized as follows. In Section \\ref{sec:prelim}, we\nformulate the decision-making problem. In Section\n\\ref{sec:relation}, for any given individualized\ntreatment rule, we relate the reduction in mean response to the excess\nprediction error. 
In Section \\ref{sec:lasso}, we estimate an optimal\nindividualized treatment rule via\n$l_1$-PLS and provide a finite sample upper\nbound on the reduction in mean response achieved by the estimated rule.\nIn Section \\ref{sec:data}, we consider a data dependent tuning\nparameter selection criterion. This method is evaluated using\nsimulation studies and illustrated with data from the\nNefazodone-CBASP trial \\cite{keller2000}. Discussions and future\nwork are presented in Section \\ref{sec:discussion}.\n\n\\section{Individualized treatment rules} \\label{sec:prelim}\nWe use upper case letters to denote random variables and lower case\nletters to denote values of the random variables. Consider data from\na randomized trial. On each subject, we have the pretreatment\nvariables $X\\in\\mathcal{X}$, treatment $A$ taking values in a\nfinite, discrete treatment space~$\\mathcal{A}$, and a real-valued\nresponse $R$ (assuming large values are desirable). An\n\\textit{individualized treatment rule} (ITR) $d$ is a deterministic\ndecision rule from $\\mathcal{X}$ into the treatment space\n$\\mathcal{A}$.\n\nDenote the distribution of $(X,A,R)$ by $P$. This is the\ndistribution of the clinical trial data; in particular, denote the\nknown randomization distribution of $A$ given $X$ by $p(\\cdot|X)$.\nThe likelihood of\n$(X,A,R)$ under $P$ is then $f_0(x)p(a|x)f_1(r|x,a)$, where $f_0$ is\nthe unknown density of\n$X$ and $f_1$ is the unknown density of $R$ conditional on $(X,A)$.\nDenote the expectations with respect to the distribution $P$ by an\n$E$.\nFor any ITR $d\\dvtx\n\\mathcal{X}\\rightarrow\\mathcal{A}$, let $P^d$ denote the\ndistribution of $(X,A,R)$ in which $d$ is used to assign treatments.\nThen the likelihood of $(X,A,R)$ under $P^{d}$ is\n$f_0(x)1_{a=d(x)}f_1(r|x,a)$. Denote\nexpectations with respect to the distribution $P^d$ by an $E^d$. The\n\\textit{Value} of~$d$ is defined as $V(d)\\triangleq E^d(R)$. 
An \\textit{optimal\nITR}, $d_0$, is a rule that has the maximal Value,\nthat is,\n\\[\nd_0\\in\\mathop{\\arg\\max}_d V(d),\n\\]\nwhere the $\\arg\\max$ is over all possible decision rules. The Value of\n$d_0$, $V(d_0)$, is the \\textit{optimal Value}.\n\nAssume $P[p(a|X)>0]=1$ for all $a\\in\\mathcal{A}$ (i.e., all\ntreatments in $\\mathcal{A}$ are possible for all values of $X$\na.s.). Then $P^d$ is absolutely continuous with respect to $P$ and a\nversion of the Radon--Nikodym derivative is\n$dP^d\/dP=1_{a=d(x)}\/p(a|x)$. Thus, the Value of $d$ satisfies\n\\begin{equation}\\label{eqn:value}\nV(d)=E^d(R)=\\int R\\,dP^d=\\int\nR\\,\\frac{dP^d}{dP}\\,dP=E\\biggl[\\frac{1_{A=d(X)}}{p(A|X)}R\\biggr].\n\\end{equation}\nOur goal is to estimate $d_0$, that is, the ITR that\nmaximizes (\\ref{eqn:value}), using data from distribution $P$. When\n$X$ is low dimensional and the best rule within a~simple class of\nITRs is desired, empirical versions of the Value can be\nused to construct estimators \\cite{murphy2001,robins2008}. However,\nif the best rule within a larger class of ITRs is of\ninterest, these approaches are no longer feasible.\n\nDefine $Q_0(X,A)\\triangleq E(R|X,A)$ [$Q_0(x,a)$ is sometimes called\nthe ``Quality'' of treatment $a$ at observation $x$]. 
It follows from\n(\\ref{eqn:value}) that\nfor any ITR $d$,\n\\[\nV(d) =\nE\\biggl[\\frac{1_{A=d(X)}}{p(A|X)}Q_0(X,A)\\biggr]=E\\biggl[\\sum_{a\\in\\mathcal\n{A}}1_{d(X)=a}Q_0(X,a)\\biggr]\n=E[Q_0(X,d(X))].\n\\]\nThus, $V(d_0)=E[Q_0(X,d_0(X))]\\leq E[\\max_{a\\in\\mathcal{A}} Q_0(X,a)]$.\nOn the other hand,\nby the definition of $d_0$,\n\\[\nV(d_0)\\geq\nV(d)|_{d(X)\\in\\mathop{\\arg\\max}_{a\\in\\mathcal{A}}\nQ_0(X,a)}=E\\Bigl[\\max_{a\\in\\mathcal{A}} Q_0(X,a)\\Bigr].\n\\]\nHence, an optimal ITR\nsatisfies $d_0(X)\\in\\arg\\max_{a\\in\\mathcal{A}}$ $Q_0(X,a)$ a.s.\n\n\\section{Relating the reduction in Value to excess prediction error}\n\\label{sec:relation}\n\nThe above argument indicates that the estimated ITR will be of high quality\n(i.e., have high Value) if we can estimate $Q_0$\naccurately. In this section, we justify this by providing a\nquantitative relationship between the Value and\nthe prediction error.\n\nBecause $\\mathcal A$ is a finite, discrete treatment space, given any ITR,\n$d$, there \\mbox{exists} a square integrable function\n$Q\\dvtx\\mathcal{X}\\times\\mathcal{A}\\rightarrow\\mathbb{R}$ for which\n$d(X)\\in\\break\\arg\\max_aQ(X,a)$ a.s. Let $L(Q)\\triangleq E[R-Q(X,A)]^2$\ndenote the prediction error of $Q$ (also called the mean quadratic loss).\nSuppose that $Q_0$ is square integrable and that the randomization\nprobability satisfies $p(a|x)\\geq\nS^{-1}$ for an $S>0$ and all $(x,a)$ pairs. Murphy \\cite{murphy2005}\nshowed that\n\\begin{equation} \\label{eqn:bound1}\nV(d_0)-V(d)\\leq\n2S^{1\/2}[L(Q)-L(Q_0)]^{1\/2}.\n\\end{equation}\nIntuitively, this upper bound means that if the excess prediction error of\n$Q$ [i.e., $L(Q)-L(Q_0)$] is small, then the reduction in Value of the\nassociated ITR $d$ [i.e., $V(d_0)-V(d)$] is small.\nFurthermore, the upper bound provides a\nrate of convergence for the Value of an estimated ITR. 
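Both representations of the Value derived in Section \ref{sec:prelim}, the importance-weighted form (\ref{eqn:value}) and the plug-in form $E[Q_0(X,d(X))]$, can be checked by direct simulation. A minimal Monte Carlo sketch under an assumed generative model with $p(a|x)=1/2$ (the same $Q_0$ reappears in the toy example later in this section):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

def Q0(x, a):
    # Assumed Quality function; for the rule d(x) = 1 its Value is E(X - 1/3)^2 = 4/9.
    return (x - 1.0 / 3.0) ** 2 * a

X = rng.uniform(-1.0, 1.0, n)
A = rng.choice([-1.0, 1.0], n)          # randomization with p(a|x) = 1/2
R = Q0(X, A) + rng.normal(0.0, 1.0, n)

d_of_X = np.ones(n)                      # the constant rule d(x) = 1

# Importance-weighted form: V(d) = E[1_{A = d(X)} R / p(A|X)].
v_ipw = np.mean((A == d_of_X) * R / 0.5)

# Plug-in form: V(d) = E[Q_0(X, d(X))].
v_plug = np.mean(Q0(X, d_of_X))
```

Both estimates agree with the exact Value $4/9$ up to Monte Carlo error; the importance-weighted form uses only observed trial data, while the plug-in form requires knowing $Q_0$.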
For example, suppose\n$Q_0$ is linear, that is, $Q_0=\\Phi(X,A)\\bolds{\\theta}_0$ for a\ngiven vector-valued basis function $\\Phi$ on $\\mathcal{X}\\times\\mathcal{A}$ and an unknown parameter $\\bolds{\\theta}_0$.\nSuppose further that we use a correct linear model for $Q_0$ (here ``linear''\nmeans linear in the parameters), say the model $\\mathcal{Q}=\\{\\Phi(X,A)\\bolds{\\theta}\\dvtx\\bolds{\\theta}\\in\\mathbb{R}^{\\mathrm{dim}(\\Phi)}\\}$\nor a linear model containing $\\mathcal{Q}$ with the dimension of the\nparameter fixed in $n$. If we estimate $\\bolds{\\theta}$ by least\nsquares and denote the estimator by $\\bolds{\\hat\\theta}$, then the\nprediction error of $\\hat Q =\\Phi\\bolds{\\hat\\theta}$ converges to\n$L(Q_0)$ at rate $1\/n$ under mild regularity conditions.\nThis, together with inequality (\\ref{eqn:bound1}), implies that\nthe Value obtained by the estimated ITR, $\\hat d(X)\\in\\arg\\max_a\\hat\nQ(X,a)$, will converge to\nthe optimal Value at rate at least $1\/\\sqrt n$.\n\nIn the following theorem, we improve this upper\nbound in two respects. First, we show that an upper bound with\nexponent larger than $1\/2$ can be obtained under a margin condition,\nwhich in turn yields a faster rate of convergence.\nSecond, it turns out that the upper bound need only depend on one term\nin the function $Q$; we call this the treatment effect term $T$. For\nany square integrable $Q$, the associated treatment effect\nterm is defined as $T(X,A)\\triangleq Q(X,A) - E[Q(X,A)|X]$. Note that\n$d(X)\\in\\arg\\max_a T(X,a)=\\arg\\max_a Q(X,a)$ a.s.\nSimilarly, the true treatment effect\nterm is given by\n\\begin{equation} \\label{eqn:trteffect}\nT_0(X,A)\\triangleq Q_0(X,A) - E[Q_0(X,A)|X].\n\\end{equation}\n$T_0(x,a)$ is the centered effect of treatment\n$A=a$ at observation $X=x$; $d_0(X)\\in\\arg\\max_a T_0(X,a)$.\n\\begin{theorem}\\label{thm:bound2}\nSuppose $p(a|x)\\geq S^{-1}$ for a positive constant $S$ for all\n$(x,a)$ pairs. 
Assume there exist constants $C>0$ and\n$\\alpha\\geq0$ such that\n\\begin{equation}\\label{eqn:noise}\n\\mathbf{P}\\Bigl(\\max_{a\\in\\mathcal{A}}T_0(X,a)-\n\\max_{a\\in\\mathcal{A}\\setminus\\mathop{\\arg\\max}_{a\\in\\mathcal{A}}T_0(X,a)}T_0(X,a)\n\\leq\\epsilon\\Bigr)\\leq C\\epsilon^\\alpha\n\\end{equation}\nfor all positive $\\epsilon$.\nThen for any ITR\n$d\\dvtx\\mathcal{X}\\rightarrow\\mathcal{A}$ and square integrable function\n$Q\\dvtx\\mathcal{X}\\times\\mathcal{A}\\rightarrow\\mathbb{R}$ such that\n$d(X)\\in\\arg\\max_{a\\in\\mathcal{A}}Q(X,a)$ a.s., we have\n\\begin{equation}\n\\label{eqn:bound2}\nV(d_0)-V(d)\\leq C' [L(Q)-L(Q_0)]^{(1+\\alpha)\/(2+\\alpha)}\n\\end{equation}\nand\n\\begin{equation}\n\\label{eqn:bound3}\nV(d_0)-V(d)\\leq C'\n\\bigl[E\\bigl(T(X,A)-T_0(X,A)\\bigr)^2\\bigr]^{(1+\\alpha)\/(2+\\alpha)},\n\\end{equation}\nwhere $C'=(2^{2+3\\alpha}S^{1+\\alpha}C)^{1\/(2+\\alpha)}$.\n\\end{theorem}\n\nThe proof of Theorem \\ref{thm:bound2} is in Appendix\n\\ref{apd:margin}.\n\n\\begin{Remarks*}\n\n\\begin{longlist}[(1)]\n\\item[(1)] We set the second maximum in (\\ref{eqn:noise}) to $-\\infty$ if\nfor an $x$,\n$T_0(x,a)$ is constant in $a$ and thus the set\n$\\mathcal{A}\\setminus\\arg\\max_{a\\in\\mathcal{A}}T_0(x,a)=\\varnothing$.\n\\item[(2)]\nCondition (\\ref{eqn:noise}) is similar to the margin condition in\nclassification\n\\cite{polonik1995,mammen1999,tsybakov2004}; in classification this\nassumption is often used to obtain\nsharp upper bounds on the excess $0$--$1$ risk in terms of other\nsurrogate risks \\cite{bartlett2006}. Here\n$\\max_{a\\in\\mathcal{A}}T_0(x, a)-\n\\max_{a\\in\\mathcal{A}\\setminus\\arg\\max_{a\\in\\mathcal{A}}T_0(x,a)}T_0(x,a)$\ncan be viewed as the ``margin'' of $T_0$ at observation $X=x$. It\nmeasures the difference in mean responses between the optimal\ntreatment(s) and the best suboptimal treatment(s) at $x$. For example,\nsuppose $X\\sim U[-1,1]$, $P(A=1|X)=P(A=-1|X)=1\/2$ and $T_0(X,A)=XA$. 
Then\nthe margin at $x$ equals $2|x|$, so $\\mathbf{P}(2|X|\\leq\\epsilon)=\\min\\{\\epsilon\/2,1\\}\\leq\\epsilon\/2$ and\nthe margin condition holds with $C=1\/2$ and $\\alpha=1$.\nNote the margin condition does not exclude multiple optimal treatments\nfor any observation~$x$.\nHowever, when $\\alpha>0$, it does exclude suboptimal treatments that\nyield a conditional mean response\nvery close to the\nlargest conditional mean response for a set of $x$ with nonzero\nprobability.\n\n\n\\item[(3)] For $C=1, \\alpha=0$, condition (\\ref{eqn:noise}) always holds for all\n$\\epsilon>0$; in this case (\\ref{eqn:bound2}) reduces to (\\ref{eqn:bound1}).\n\n\\item[(4)] The larger the $\\alpha$, the larger the exponent $(1+\\alpha)\/(2+\\alpha)$\nand thus the stronger the upper bounds in (\\ref{eqn:bound2}) and (\\ref{eqn:bound3}).\nHowever, the\nmargin condition is unlikely to hold for all $\\epsilon$ if $\\alpha$ is\nvery large.\nAn alternative margin condition and upper bound are as follows.\n\n\\textit{Suppose $p(a|x)\\geq S^{-1}$ for all $(x,a)$ pairs. Assume there\nis an $\\epsilon\\!>\\!0$ such that\n\\begin{equation}\\label{eqn:noise2}\n\\mathbf{P}\\Bigl(\\max_{a\\in\\mathcal{A}}T_0(X,a)\n-\\max_{a\\in\\mathcal{A}\\setminus\\mathop{\\arg\\max}_{a\\in\\mathcal{A}}T_0(X,a)}T_0(X,a)\n<\\epsilon\\Bigr)=0.\n\\end{equation}\nThen $V(d_0)-V(d)\\leq4S[L(Q)-L(Q_0)]\/\\epsilon$ and $V(d_0)-V(d)\\leq\n4SE(T-T_0)^2\/\\epsilon$.}\n\nThe proof is essentially the same as that of Theorem\n\\ref{thm:bound2} and is omitted. 
Condition\n(\\ref{eqn:noise2}) means that $T_0$ evaluated at the optimal\ntreatment(s) minus $T_0$ evaluated at the best suboptimal treatment(s)\nis bounded below by a~positive constant for almost all $X$ observations.\nIf $X$ assumes only a finite number of values, then this condition always\nholds, because we can take $\\epsilon$ to be the smallest difference in $T_0$\nwhen evaluated at\nthe optimal treatment(s) and the suboptimal treatment(s)\n[note that if $T_0(x,a)$ is constant for all $a\\in\\mathcal{A}$ for\nsome observation $X=x$, then all treatments are optimal for that\nobservation].\n\n\\item[(5)] Inequality (\\ref{eqn:bound3}) cannot be improved in the sense\nthat choosing $T=T_0$ yields zero on both sides of the inequality.\nMoreover, an inequality in the opposite direction is not possible,\nsince each ITR is associated with many\nnontrivial $T$-functions. For example, suppose $X\\sim U[-1,1]$,\n$P(A=1|X)=P(A=-1|X)=1\/2$ and $T_0(X,A) = (X-1\/3)^2A$. The optimal ITR\nis $d_0(X)=1$ a.s. Consider $T(X,A)=\\theta A$. Then\nmaximizing $T(X,A)$ yields the optimal ITR as long as\n$\\theta>0$. This means that the left-hand side (LHS) of (\\ref\n{eqn:bound3}) is zero,\nwhile the right-hand side (RHS) is always positive no matter what value\n$\\theta$\ntakes.\n\\end{longlist}\n\\end{Remarks*}\n\nTheorem \\ref{thm:bound2} supports the approach of minimizing\nthe estimated prediction error to estimate $Q_0$ or $T_0$ and\nthen maximizing this estimator over $a\\in\\mathcal{A}$ to obtain an ITR.\nIt is natural to expect that even when the approximation space\nused in estimating $Q_0$ or $T_0$ does not contain the truth, this\napproach will\nprovide the best (highest Value) of the considered ITRs. Unfortunately,\nthis does not occur due to the mismatch between the loss functions\n(weighted 0--1 loss and the quadratic loss). 
This mismatch is indicated\nby remark (5) above.\nMore precisely, note that the approximation space, say $\\mathcal{Q}$\nfor $Q_0$, places implicit restrictions on the\nclass of ITRs that will be considered. In\neffect, the class of ITRs is\n$\\mathcal{D}_{\\mathcal{Q}}=\\{d(X)\\in\\arg\\max_aQ(X,a)\\dvtx Q\\in\\mathcal{Q}\\}$.\nIt\nturns out that minimizing the prediction error may not result in the\nITR in $\\mathcal{D}_{\\mathcal{Q}}$ that maximizes the Value. This\noccurs when the approximation space $\\mathcal{Q}$ does not provide a\ntreatment effect term close to the treatment effect term in $Q_0$. In\nthe following toy example, the optimal ITR $d_0$ belongs to\n$\\mathcal{D}_\\mathcal{Q}$, yet the prediction error minimizer over\n$\\mathcal{Q}$ does not yield $d_0$.\n\\begin{exam*}\nSuppose $X$ is uniformly distributed in $[-1,1]$, $A$ is binary\n$\\{-1,1\\}$ with probability $1\/2$ each and is independent of $X$,\nand $R$ is normally distributed with mean $Q_0(X,A)=(X-1\/3)^2A$ and\nvariance $1$. It is easy to see that the optimal ITR satisfies\n$d_0(X)=1$ a.s. and\n$V(d_0)=4\/9$.\nConsider approximation space $\\mathcal{Q}=\\{Q(X,A;\\bolds{\\theta})=(1,\nX, A, XA)\\bolds{\\theta}\\dvtx\\bolds{\\theta}\\in\\mathbb{R}^4\\}$\nfor $Q_0$. Thus the space of ITRs under consideration is\n$\\mathcal{D}_{\\mathcal{Q}}=\\{d(X)=\\operatorname{sign}(\\theta_3+\\theta_4X)\\dvtx\\theta\n_3,\\theta_4\\in\n\\mathbb{R}\\}$. Note that $d_0\\in\\mathcal{D}_{\\mathcal{Q}}$ since\n$d_0(X)$ can be written as $\\operatorname{sign}(\\theta_3+\\theta_4X)$ for any\n$\\theta_3>0$ and $\\theta_4=0$. $d_0$ is the best treatment rule in\n$\\mathcal{D}_{\\mathcal{Q}}$. However, minimizing the prediction\nerror $L(Q)$ over $\\mathcal{Q}$ yields $Q^*(X,A)=(4\/9-2\/3X)A$. 
The ITR\nassociated with $Q^*$ is\n$d^*(X)=\\arg\\max_{a\\in\\{-1,1\\}}Q^*(X,a)=\\operatorname{sign}(2\/3-X)$, which has lower\nValue than $d_0$\n($V(d^*)=E[\\frac{1_{A(2\/3-X)>0}R}{1\/2}]=29\/81<4\/9=V(d_0)$).\n\\end{exam*}\n\nSince $\\bolds{\\theta}^*$ minimizes $L(\\Phi\\bolds{\\theta})$, we have\n$L(\\Phi\\bolds{\\theta})+3\\|\\bolds{\\theta}\\|_0\\lambda^2_n\/\\beta>L(\\Phi\\bolds{\\theta}^*)+3\\|\\bolds{\\theta}^*\\|_0\\lambda^2_n\/\\beta$ for\nany $\\bolds{\\theta}$ such that\n$\\|\\bolds{\\theta}\\|_0>\\|\\bolds{\\theta}^*\\|_0$.\n\nThe following theorem provides a finite sample performance guarantee\nfor the\nITR produced by the $l_1$-PLS method. Intuitively, this result implies\nthat if\n$Q_0$ can be well approximated by the sparse linear representation\n$\\bolds{\\theta}_n^{**}$ [so\nthat both $L(\\Phi\\bolds{\\theta}^{**}_n)-L(Q_0)$ and\n$\\Vert\\bolds{\\theta}^{**}_n\\Vert_0$ are small], then $\\hat d_n$ will\nhave Value\nclose to the optimal Value in finite samples.\n\\begin{theorem} \\label{thm:finaloracle}\nSuppose $p(a|x)\\geq S^{-1}$ for a positive constant $S$ for all\n$(x,a)$ pairs and the margin condition\n(\\ref{eqn:noise}) holds for some $C>0$, $\\alpha\\geq0$ and\nall positive~$\\epsilon$. Assume:\n\\begin{longlist}[(1)]\n\\item[(1)] \\hypertarget{ap:errorterm}\nthe error terms $\\varepsilon_i=R_i-Q_0(X_i,A_i), i=1,\\ldots,n$,\nare independent of $(X_i,A_i), i=1,\\ldots, n$ and are\ni.i.d. 
with $E(\\varepsilon_i)=0$ and\n$E[|\\varepsilon_i|^l]\\leq l!c^{l-2}\\sigma^2\/2$ for some\n$c,\\sigma^2>0$ for all $l\\geq2$;\n\n\\item[(2)] \\hypertarget{ap:basis}\nthere exist finite, positive constants $U$ and $\\eta$ such that\n$\\max_{j=1,\\ldots,{J}}$ $\\|\\phi_j\\|_\\infty\/\\sigma_j\\leq U$ and\n$\\|Q_0-\\Phi\\bolds{\\theta}^*\\|_\\infty\\leq\\eta$; and\n\n\n\\item[(3)] \\hypertarget{ap:grammatrix1} $E[(\\phi_1\/\\sigma_1,\\ldots,\\phi_J\/\\sigma_J)^T(\\phi_1\/\\sigma\n_1,\\ldots,\\phi_J\/\\sigma_J)]$ is positive definite, and the smallest\neigenvalue is denoted by $\\beta$.\n\\end{longlist}\nConsider the estimated ITR $\\hat d_n$ defined by (\\ref{eqn:pihat}) with tuning\nparameter\n\\begin{equation}\\label{eqn:lambdaconditionfix}\n\\lambda_n\\geq k \\sqrt{\\frac{\\log(Jn)}{n}},\n\\end{equation}\nwhere $k=82\\max\\{c,\\sigma,\\eta\\}$.\nLet $\\Theta_n$ be the set defined in (\\ref{eqn:oracleset}). Then for\nany $n\\geq24U^2\\log(Jn)$ and for which\n$\\Theta_n$ is nonempty, we have, with probability at least $1-1\/n$,\nthat\n\\begin{equation} \\label{eqn:finaloracle}\\qquad\nV(d_0)-V(\\hat d_n)\\leq C'\\Bigl[\n\\min_{\\bolds{\\theta}\\in\\Theta_n}\\bigl(L(\\Phi\\bolds{\\theta})-L(Q_0)\n+3\\Vert\\bolds{\\theta}\\Vert_0\\lambda_n^2\/\\beta\\bigr)\n\\Bigr]^{({1+\\alpha})\/({2+\\alpha})},\n\\end{equation}\nwhere $C'=(2^{2+3\\alpha}S^{1+\\alpha}C)^{1\/(2+\\alpha)}$.\n\\end{theorem}\n\nThe result follows from inequality (\\ref{eqn:bound2}) in Theorem \\ref\n{thm:bound2} and inequality (\\ref{eqn:peoraclefix}) in Theorem\n\\ref{thm:peoraclefix}. 
Similar\nresults in a more general setting can be obtained by combining\n(\\ref{eqn:bound2}) with inequality (\\ref{eqn:peoracle}) in Appendix\n\\ref{apd:peoracle}.\n\n\\begin{Remarks*}\n\\begin{longlist}[(1)]\n\\item[(1)] Note that $\\bolds{\\theta}^{**}_n$ is the minimizer of the\nupper bound on the RHS of (\\ref{eqn:finaloracle}) and that\n$\\bolds{\\theta}^{**}_n$ is contained\\vadjust{\\goodbreak} in the set\n$\\{\\bolds{\\theta}^{*,(m)}_n\\dvtx m\\subset\\{1,\\ldots,J\\}\\}$. Each\n$\\bolds{\\theta}^{*,(m)}_n$ satisfies\n$\\bolds{\\theta}^{*,(m)}_n=\n\\arg\\min_{\\{\\bolds{\\theta}\\in\\Theta_n\\dvtx\\theta_j=0\\ \\mathrm{for}\\ \\mathrm{all}\n\\ j\\notin m\\}}L(\\Phi\\bolds{\\theta})$; that is, $\\bolds{\\theta}^{*,(m)}_n$\nminimizes the prediction error of the model indexed by the set $m$\n(i.e., model $\\{\\sum_{j\\in m}\\phi_j\\theta_j\\dvtx\\theta_j\\in\\mathbb{R}\\}$)\n(within $\\Theta_n$). For each $\\bolds{\\theta}^{*,(m)}_n$, the\nfirst term in the upper bound in (\\ref{eqn:finaloracle}) [i.e.,\n$L(\\Phi\\bolds{\\theta}^{*,(m)}_n)-L(Q_0)$] is the approximation\nerror of the model indexed by $m$ within~$\\Theta_n$. As in\\vspace*{1pt}\nvan de Geer \\cite{vandegeer2008}, we call the second term\n$3\\Vert\\bolds{\\theta}^{*,(m)}_n\\Vert_0\\lambda_n^2\/\\beta$ the estimation\nerror of the model indexed by $m$. To see why, first put $\\lambda_n= k\n\\sqrt{\\log(Jn)\/n}$. Then, ignoring the $\\log(n)$ factor, the second\nterm is a function of the sparsity of model $m$ relative to the sample\nsize, $n$. Up to constants, the second term is a ``tight'' upper bound\nfor the estimation error of the OLS estimator from model $m$, where\n``tight'' means that the convergence rate in the bound is the best\nknown rate. Note that $\\bolds{\\theta}_n^{**}$ is the parameter\nthat minimizes the sum of the two errors over all models. 
Such a model\n(the model corresponding to $\\bolds{\\theta}_n^{**}$) is called an\noracle model.\nThe $\\log(n)$ factor in the estimation error can be viewed as the price\npaid for not knowing the sparsity of the oracle model and thus having\nto conduct model selection.\nSee remark (2) after Theorem \\ref{thm:peoraclefix} for the precise\ndefinition of the oracle model and its relationship to $\\bolds\n{\\theta}_n^{**}$.\n\n\\item[(2)] Suppose $\\lambda_n = o(1)$. Then in large samples the estimation\nerror term\n$3\\Vert\\bolds{\\theta}\\Vert_0\\lambda_n^2\/\\beta$ is negligible. In this\ncase, $\\bolds{\\theta}^{**}_n$ is close to\n$\\bolds{\\theta}^*$.\nWhen the model\n$\\Phi\\bolds{\\theta}^*$ approximates $Q_0$ sufficiently well, we\nsee that setting $\\lambda_n$ equal to its lower bound in (\\ref\n{eqn:lambdaconditionfix}) provides the fastest rate of convergence of\nthe upper bound to zero. More precisely, suppose $Q_0\n=\\Phi\\bolds{\\theta}^*$ [i.e.,\n$L(\\Phi\\bolds{\\theta}^*)-L(Q_0)=0$]. Then inequality\n(\\ref{eqn:finaloracle}) implies that $V(d_0)-V(\\hat d_n)\\leq\nO_p( (\\log n\/n)^{(1+\\alpha)\/(2+\\alpha)})$. 
A~convergence in mean result is\npresented in Corollary \\ref{cor:convergence}.\n\n\\item[(3)] In finite samples, the estimation error\n$3\\Vert\\bolds{\\theta}\\Vert_0\\lambda_n^2\/\\beta$ is nonnegligible.\nThe argument of the minimum in the upper bound (\\ref{eqn:finaloracle}),\n$\\bolds{\\theta}^{**}_n$, minimizes prediction error among\nparameters with controlled sparsity.\nIn remark (2) after Theorem \\ref{thm:peoraclefix}, we discuss how this\nupper bound can be viewed as a tight upper bound for the prediction error of the OLS\nestimator from an oracle model in the step-wise model selection setting.\nIn this sense,\ninequality (\\ref{eqn:finaloracle}) implies that the treatment rule\nproduced by the $l_1$-PLS method will have a reduction in Value roughly\nas if it\nknew the sparsity of the oracle model and were estimated from the\noracle model using OLS.\n\n\\item[(4)] Assumptions \\hyperlink{ap:errorterm}{(1)}--\\hyperlink{ap:grammatrix1}{(3)} in Theorem\n\\ref{thm:finaloracle} are employed to derive the finite sample\nprediction error bound for the $l_1$-PLS estimator\n$\\bolds{\\hat\\theta}_n$ defined in (\\ref{eqn:thetahat}). Below\nwe briefly discuss these assumptions.\n\nAssumption \\hyperlink{ap:errorterm}{(1)} implicitly implies that the error\nterms do not have heavy tails. This condition is often assumed to\nshow that the sample mean of a~variable is concentrated around its\ntrue mean with a high probability. It is easy to verify that this\nassumption holds if each $\\varepsilon_i$ is bounded. Moreover, it\nalso holds for some commonly used error distributions that have\nunbounded support, such as the normal or double exponential.\n\nAssumption \\hyperlink{ap:basis}{(2)} is also used to show the concentration of\nthe sample mean around the true mean. It is possible to replace the\nboundedness condition by a moment condition similar to assumption\n\\hyperlink{ap:errorterm}{(1)}. 
This assumption requires that all basis\nfunctions and the difference between $Q_0$ and its best linear\napproximation are bounded. Note that we do not assume $\\mathcal{Q}$\nto be a good approximation space for $Q_0$. However, if\n$\\Phi\\bolds{\\theta}^*$ approximates $Q_0$ well, $\\eta$ will\nbe small, which will result in a smaller upper bound in\n(\\ref{eqn:finaloracle}). In fact, in the generalized result (Theorem\n\\ref{thm:peoracle}) we allow $U$ and $\\eta$ to\nincrease with $n$.\n\nAssumption \\hyperlink{ap:grammatrix1}{(3)} is employed to avoid collinearity.\nIn fact, we only need\n\\begin{equation}\\label{eqn:grammatrix1}\nE[\\Phi(\\bolds{\\theta}^\\prime-\\bolds{\\theta})]^2\\Vert\\bolds{\\theta}\\Vert_0\\geq\\beta\n\\biggl(\\sum_{j\\in\nM_0(\\bolds{\\theta})}\\sigma_j|\\theta_j^\\prime-\\theta_j|\\biggr)^2\n\\end{equation}\nfor\n$\\bolds{\\theta}$, $\\bolds{\\theta}^\\prime$ belonging to a\nsubset of $\\mathbb{R}^J$ (see Assumption \\ref{apn:grammatrix}),\nwhere $M_0(\\bolds{\\theta})\\triangleq\\{j=1,\\ldots,J\\dvtx\\theta_j\\neq0\\}$.\nCondition (\\ref{eqn:grammatrix1}) has been used in van de Geer \\cite{vandegeer2008}.\nThis condition is also similar to the restricted\neigenvalue assumption in Bickel, Ritov and Tsybakov \\cite{bickel2008}\nin which\n$E$ is replaced by $E_n$, and a fixed design matrix is considered.\nClearly, assumption \\hyperlink{ap:grammatrix1}{(3)} is a sufficient condition for\n(\\ref{eqn:grammatrix1}). In addition, condition\n(\\ref{eqn:grammatrix1}) is satisfied if the correlation\n$|E\\phi_j\\phi_k|\/(\\sigma_j\\sigma_k)$ is small for all $k\\in\nM_0(\\bolds{\\theta})$, $j\\neq k$ and a subset of $\\bolds{\\theta}$'s (similar results in a fixed design\nsetting have been proved in Bickel, Ritov and Tsybakov \\cite{bickel2008}; the\ncondition on correlation is also known as the ``mutual coherence''\ncondition in Bunea, Tsybakov and Wegkamp \\cite{bunea2007}). 
See Bickel,\nRitov and Tsybakov\n\\cite{bickel2008} for other sufficient conditions for\n(\\ref{eqn:grammatrix1}).\n\\end{longlist}\n\\end{Remarks*}\n\nThe above upper bound for $V(d_0)-V(\\hat d_n)$ involves\n$L(\\Phi\\bolds{\\theta})-L(Q_0)$, which measures how well the\nconditional mean function $Q_0$ is approximated by~$\\mathcal{Q}$.\nAs we have seen in Section \\ref{sec:relation}, the\nquality of the estimated ITR only depends\non the estimator of the treatment effect term $T_0$. Below we\nprovide a~strengthened result in the sense that the upper bound\ndepends only on how well we approximate the treatment effect term.\n\nFirst, we identify terms in the linear model $\\mathcal{Q}$ that\napproximate $T_0$ (recall that $T_0(X,A)\\triangleq\nQ_0(X,A)-E[Q_0(X,A)|X]$). Without loss of generality, we rewrite the\nvector of basis functions as\n$\\Phi(X,A)=(\\Phi^{(1)}(X),\\Phi^{(2)}(X,A))$, where\n$\\Phi^{(1)}=(\\phi_1(X),\\ldots,\\phi_{J^{(1)}}(X))$ is composed of\nall components in $\\Phi$ that do not contain $A$ and\n$\\Phi^{(2)}=(\\phi_{J^{(1)}+1}(X,A),\\ldots,\\phi_{J}(X,A))$ is\ncomposed of all components in $\\Phi$ that contain $A$.\nNote that $A$ takes only finite values. When the randomization\ndistribution $p(a|x)$ does not depend on $x$,\nwe can code~$A$ so that\n$E[\\Phi^{(2)}(X,A)^T|X]=\\mathbf{0}$ a.s. (see Section~\\ref{sec:realdata}\nand Appendix \\ref{sec:simdesign}, for examples). 
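The centering requirement $E[\Phi^{(2)}(X,A)^T|X]=\mathbf{0}$ is easy to arrange when randomization does not depend on $x$: with a binary treatment assigned with probability $1/2$ each, coding $A\in\{-1,1\}$ gives $E[A|X]=0$, so any component of the form $\phi(X)A$ is conditionally centered. A numerical sanity check with the hypothetical basis $\Phi^{(2)}(X,A)=(A, XA)$:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

X = rng.uniform(-1.0, 1.0, n)
A = rng.choice([-1.0, 1.0], n)   # coded so that E[A | X] = 0 under p(a|x) = 1/2

# Each treatment-containing basis function phi(X) * A is conditionally centered;
# check the unconditional column means and a conditional slice X > 0.
Phi2 = np.column_stack([A, X * A])
col_means = Phi2.mean(axis=0)
cond_means = Phi2[X > 0].mean(axis=0)
```

All four means are zero up to Monte Carlo error; with the more common coding $A\in\{0,1\}$ the same columns would not be centered.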
For any\n$\\bolds{\\theta}=(\\theta_1,\\ldots,\\theta_J)^T\\in\\mathbb{R}^J$,\ndenote\n$\\bolds{\\theta}^{(1)}=(\\theta_1,\\ldots,\\theta_{J^{(1)}})^T$ and\n$\\bolds{\\theta}^{(2)}=(\\theta_{J^{(1)}+1},\\ldots,\\theta_{J})^T$.\nThen $\\Phi^{(1)}\\bolds{\\theta}^{(1)}$ approximates\n$E[Q_0(X,A)|X]$ and $\\Phi^{(2)}\\bolds{\\theta}^{(2)}$\napproximates $T_0$.\n\nThe following theorem implies that if the treatment effect term\n$T_0$ can be well approximated by a sparse representation, then\n$\\hat d_n$ will have Value close to the optimal Value.\n\\begin{theorem} \\label{cor:finaloracletopt}\nSuppose $p(a|x)\\geq S^{-1}$ for a positive constant $S$ for all\n$(x,a)$ pairs and the margin condition\n(\\ref{eqn:noise}) holds for some $C>0$, $\\alpha\\geq0$ and\nall positive~$\\epsilon$. Assume\n$E[\\Phi^{(2)}(X,A)^T|X]= \\mathbf{0}$ a.s. Suppose assumptions\n\\hyperlink{ap:errorterm}{(1)}--\\hyperlink{ap:grammatrix1}{(3)} in Theorem \\ref{thm:finaloracle} hold.\nLet $\\hat d_n$ be the estimated ITR with $\\lambda_n$\nsatisfying condition (\\ref{eqn:lambdaconditionfix}). Let $\\Theta_n$\nbe the set defined in (\\ref{eqn:oracleset}). 
Then for any\n$n\\geq24U^2\\log(Jn)$ and for which $\\Theta_n$ is\nnonempty, we have, with probability at least $1-1\/n$, that\n\\begin{eqnarray} \\label{eqn:finaloracle1}\n&&V(d_0)-V(\\hat d_n)\\nonumber\\\\[-8pt]\\\\[-8pt]\n&&\\qquad\\leq C'\\Bigl[\n\\min_{\\bolds{\\theta}\\in\\Theta_n}\\bigl(E\\bigl(\\Phi^{(2)}\\bolds{\\theta\n}^{(2)}-T_0\\bigr)^2\n+5\\bigl\\Vert\\bolds{\\theta}^{(2)}\\bigr\\Vert_0\\lambda_n^2\/\\beta\\bigr)\n\\Bigr]^{({1+\\alpha})\/({2+\\alpha})},\\nonumber\n\\end{eqnarray}\nwhere $C'=(2^{2+3\\alpha}S^{1+\\alpha}C)^{1\/(2+\\alpha)}$.\n\\end{theorem}\n\nThe result follows from inequality (\\ref{eqn:bound3}) in Theorem \\ref\n{thm:bound2} and inequality (\\ref{eqn:peoraclefix1}) in Theorem\n\\ref{thm:peoraclefix}.\n\n\\begin{Remarks*}\n\\begin{longlist}[(1)]\n\\item[(1)] Inequality\\vspace*{1pt} (\\ref{eqn:finaloracle1}) improves inequality (\\ref\n{eqn:finaloracle}) in the\nsense that it guarantees a small reduction in Value of $\\hat d_n$\n[i.e., $V(d_0)-V(\\hat d_n)$] as long as\nthe treatment effect term $T_0$ is well approximated by a sparse\nlinear representation; it does not require a good approximation of the\nentire conditional mean function $Q_0$. In many situations $Q_0$\nmay be very complex, but $T_0$ could be very simple. This means that\n$T_0$ is much more likely to be well approximated as compared to\n$Q_0$ (indeed, if there is no difference between treatments, then\n$T_0\\equiv0$).\n\n\n\n\\item[(2)] Inequality (\\ref{eqn:finaloracle1})\ncannot be improved in the sense that if there is no treatment effect\n(i.e., $T_0\\equiv0$), then both sides of the inequality are zero.\nThis result implies that minimizing the penalized empirical\nprediction error indeed yields high Value (at least asymptotically) if\n$T_0$ can be well approximated.\n\n\\end{longlist}\n\\end{Remarks*}\n\nThe following asymptotic result follows from Theorem\n\\ref{cor:finaloracletopt}. 
Note that when\n$E[\\Phi^{(2)}(X,A)^T |X] = \\mathbf{0}$ a.s.,\n$L(\\Phi\\bolds{\\theta})-L(Q_0) = E[\\Phi^{(1)}\\bolds{\\theta\n}^{(1)}- E(Q_0|X)]^2 + E[\\Phi^{(2)}\\bolds{\\theta}^{(2)}-T_0]^2 $.\nThus, the estimation of the treatment effect term $T_0$ is\nasymptotically separated from the estimation of the main effect term\n$E(Q_0|X)$.\\vspace*{1pt}\nIn this case, $\\Phi^{(2)}\\bolds{\\theta}^{(2),*}$ is the best\nlinear approximation of\nthe treatment effect term~$T_0$, where $\\bolds{\\theta}^{(2),*}$ is\nthe vector of components in $\\bolds{\\theta}^*$ corresponding to\n$\\Phi^{(2)}$.\n\\begin{corollary} \\label{cor:convergence}\nSuppose $p(a|x)\\geq S^{-1}$ for a positive constant $S$ for all\n$(x,a)$ pairs and the margin condition\\vspace*{1pt}\n(\\ref{eqn:noise}) holds for some $C>0$, $\\alpha\\geq0$ and\nall positive $\\epsilon$. Assume\n$E[\\Phi^{(2)}(X,A)^T |X] = \\mathbf{0}$ a.s. In addition,\\vspace*{2pt} suppose assumptions\n\\hyperlink{ap:errorterm}{(1)}--\\hyperlink{ap:grammatrix1}{(3)} in Theorem \\ref{thm:finaloracle}\nhold. Let $\\hat d_n$ be the estimated\nITR with tuning parameter $\\lambda_n=k_1 \\sqrt{\\log(Jn)\/n}$ for a\nconstant $k_1\\geq82\\max\\{c,\\sigma,\\eta\\}$. If\n$T_0(X,A)=\\Phi^{(2)}\\bolds{\\theta}^{(2),*}$, then\n\\[\nV(d_0)-\\mathbf{E}[V(\\hat d_n)]= O\\bigl((\\log\nn\/n)^{(1+\\alpha)\/(2+\\alpha)}\\bigr).\n\\]\n\\end{corollary}\n\nThis result provides a guarantee on\nthe convergence rate of $V(\\hat d_n)$ to the optimal Value. 
More\nspecifically, it means that if $T_0$ is correctly approximated, then\nthe Value of $\hat d_n$ will converge to the optimal Value in mean\nat a rate at least as fast as $(\log n\/n)^{(1+\alpha)\/(2+\alpha)}$\nwith an appropriate choice of $\lambda_n$.\n\n\subsection{Prediction error bound for the $l_1$-PLS estimator}\label{sec:lassooracle}\n\nIn this section, we provide a finite sample upper bound for the\nprediction error of the $l_1$-PLS estimator~$\bolds{\hat\theta}_n$.\nThis result is needed to prove Theorem \ref{thm:finaloracle}.\nFurthermore, this result strengthens the existing literature on the $l_1$-PLS\nmethod in prediction. Finite sample prediction error bounds for the\n$l_1$-PLS estimator in the random design setting have been provided in\nBunea, Tsybakov and Wegkamp \cite{bunea2007} for quadratic loss, van\nde Geer \cite{vandegeer2008} mainly for Lipschitz loss, and\nKoltchinskii~\cite{kol2009} for a variety of loss functions. With\nregard to quadratic loss, Koltchinskii~\cite{kol2009} requires the\nresponse $Y$ to be bounded, while both Bunea, Tsybakov and Wegkamp\n\cite{bunea2007} and van de Geer \cite{vandegeer2008} assume the\nexistence of a sparse $\bolds{\theta}\in\mathbb{R}^{J}$ such that\n$E(\Phi\bolds{\theta}-Q_0)^2$ is upper bounded by a quantity that\ndecreases to $0$ at a certain rate as $n\rightarrow\infty$ (by\npermitting $J$ to increase with $n$ so that $\Phi$ depends on $n$ as well).\nWe improve on these results in the sense that we do not make such\nassumptions (see Appendix \ref{apd:peoracle} for results when $\Phi$ and\n$J$ are indexed by $n$ and $J$ increases with $n$).\n\nAs in the prior sections, the sparsity of\n$\bolds{\theta}$ is measured by its $l_0$ norm, $\|\bolds\n{\theta}\|_0$ (see Appendix \ref{apd:peoracle} for proofs with a laxer\ndefinition of sparsity). 
Recall that the parameter $\bolds{\theta\n}^{**}_n$ defined in (\ref{eqn:thetastarstar}) has small prediction\nerror and controlled sparsity.\n\begin{theorem} \label{thm:peoraclefix}\nSuppose assumptions \hyperlink{ap:errorterm}{(1)}--\hyperlink{ap:grammatrix1}{(3)} in\nTheorem \ref{thm:finaloracle} hold. For any $\eta_1\geq0$, let\n$\bolds{\hat\theta}_n$ be the $l_1$-PLS estimator defined by\n(\ref{eqn:thetahat}) with tuning parameter $\lambda_n$ satisfying\ncondition\vspace*{1pt} (\ref{eqn:lambdaconditionfix}). Let $\Theta_n$ be the set\ndefined in (\ref{eqn:oracleset}). Then for any $n\geq\n24U^2\log(Jn)$ for which $\Theta_n$ is\nnonempty, we have, with probability at least $1-1\/n$, that\n\begin{equation}\label{eqn:peoraclefix}\qquad\nL(\Phi\bolds{\hat\theta}_n)\leq\n\min_{\bolds{\theta}\in\Theta_n}\bigl(L(\Phi\bolds{\theta})\n+3\|\bolds{\theta}\|_0\lambda_n^2\/\beta\bigr)\n=L(\Phi\bolds{\theta}^{**}_n)\n+3\|\bolds{\theta}^{**}_n\|_0\lambda_n^2\/\beta.\n\end{equation}\n\nFurthermore, suppose $E[\Phi^{(2)}(X,A)^T|X]= \mathbf{0}$ a.s. 
Then\nwith probability at least\n$1-1\/n$,\n\begin{equation}\label{eqn:peoraclefix1}\nE\bigl(\Phi^{(2)}\bolds{\hat\theta}{}^{(2)}_n-T_0\bigr)^2\leq\n\min_{\bolds{\theta}\in\Theta_n}\bigl(E\bigl(\Phi^{(2)}\bolds{\theta\n}^{(2)}-T_0\bigr)^2\n+5\bigl\|\bolds{\theta}^{(2)}\bigr\|_0\lambda_n^2\/\beta\bigr).\n\end{equation}\n\end{theorem}\n\nThe results follow from Theorem \ref{thm:peoracle} in\nAppendix \ref{apd:peoracle} with $\rho=0$, $\gamma=1\/8$, $\eta_1=\eta\n_2=\eta$, $t=\log 2n$ and some simple algebra [notice that\nassumption \hyperlink{ap:grammatrix1}{(3)} in Theorem \ref{thm:finaloracle} is a\nsufficient condition for Assumptions \ref{apn:grammatrix} and \ref\n{apn:grammatrix_Topt}].\n\n\begin{Remarks*}\nInequality (\ref{eqn:peoraclefix1}) provides a finite sample upper\nbound on the mean square difference between $T_0$ and its estimator.\nThis result is used to prove Theorem \ref{cor:finaloracletopt}. The\nremarks below discuss how\ninequality (\ref{eqn:peoraclefix}) contributes to the\n$l_1$-penalization literature in prediction.\n\begin{longlist}[(1)]\n\item[(1)] The conclusion of Theorem \ref{thm:peoraclefix} holds for all\nchoices of $\lambda_n$ that satisfy~(\ref{eqn:lambdaconditionfix}).\nSuppose $\lambda_n=o(1)$. Then\n$L(\Phi\bolds{\theta}^{**}_n)-L(\Phi\bolds{\theta}^{*})\rightarrow\n0$ as $n\rightarrow\infty$ (since $\|\bolds{\theta}\|_0$ is\nbounded). Inequality (\ref{eqn:peoraclefix}) implies that\n$L(\Phi\bolds{\hat\theta}_n)-L(\Phi\bolds{\theta}^{*})\rightarrow\n0$ in probability. To achieve the best rate of convergence, equality\nshould be attained in (\ref{eqn:lambdaconditionfix}).\n\n\item[(2)]\nNote that $\bolds{\theta}^{**}_n$\nminimizes\n$L(\Phi\bolds{\theta})-L(Q_0)+3\|\bolds{\theta}\|_0\lambda\n_n^2\/\beta$. 
Below we de\-monstrate that\nthe minimum of\n$L(\Phi\bolds{\theta})-L(Q_0)+3\|\bolds{\theta}\|_0\lambda\n_n^2\/\beta$\ncan be viewed as the approximation error plus a ``tight'' upper\nbound of the estimation error of an ``oracle'' in the stepwise model\nselection framework [when ``$=$'' is taken in\n(\ref{eqn:lambdaconditionfix})]. Here ``tight'' means the\nconvergence rate in the bound is the best known rate, and ``oracle''\nis defined as follows.\n\nLet $m$ denote a nonempty subset of the index set $\{1,\ldots,J\}$.\nThen each~$m$ represents a model which uses a nonempty subset of\n$\{\phi_1,\ldots,\phi_{J}\}$ as basis functions (there are $2^J-1$ such\nsubsets). Define\n\[\n\bolds{\hat\theta}{}^{(m)}_n=\mathop{\arg\min}_{\{\bolds{\theta}\in\n\mathbb{R}^J\dvtx\theta_j=0\\n\mathrm{for}\ \mathrm{all} \ j\notin m\}}E_n(R-\Phi\bolds{\theta})^2\n\]\nand\n\[\n\bolds{\theta}^{*,(m)}=\mathop{\arg\min}_{\{\bolds{\theta}\in\n\mathbb{R}^J\dvtx\theta_j=0\\n\mathrm{for}\ \mathrm{all}\ j\notin m\}}L(\Phi\bolds{\theta}).\n\]\nIn this setting, an ideal model selection criterion will pick model\n$m^*$ such that $L(\Phi\bolds{\hat\theta}{}^{(m^*)}_n)=\inf_m\nL(\Phi\bolds{\hat\theta}{}^{(m)}_n)$. $\bolds{\hat\theta}{}^{(m^*)}_n$ is\nreferred to as an ``oracle'' in Massart~\cite{massart2005}. Note that the\nexcess prediction error of each $\bolds{\hat\theta}{}^{(m)}_n$ can be\nwritten as\n\[\nL\bigl(\Phi\bolds{\hat\theta}{}^{(m)}_n\bigr)-L(Q_0)=\bigl[L\bigl(\Phi\bolds{\theta}^{*,(m)}\bigr)-L(Q_0)\bigr]\n+\bigl[L\bigl(\Phi\bolds{\hat\theta}{}^{(m)}_n\bigr)-L\bigl(\Phi\bolds{\theta}^{*,(m)}\bigr)\bigr],\n\]\nwhere the first term is called the approximation error of model $m$\nand the second term is the estimation error. 
It can be shown \cite{bartlett2008} that\nfor each model $m$ and $x_m>0$, with\nprobability at least $1-\exp(-x_m)$,\n\[\nL\bigl(\Phi\bolds{\hat\theta}{}^{(m)}_n\bigr)-L\bigl(\Phi\bolds{\theta}^{*,(m)}\bigr)\leq\n\mbox{constant}\times\biggl(\frac{x_m+|m|\log(n\/|m|)}{n}\biggr)\n\]\nunder\nappropriate technical conditions, where $|m|$ is the cardinality of\nthe index set $m$. To our knowledge, this is the best rate known so\nfar. Taking $x_m=\log n+|m|\log J$ and using the union bound\nargument, we have with probability at least $1-O(1\/n)$,\n\begin{eqnarray}\label{eqn:esterror1}\n&&L\bigl(\Phi_n\bolds{\hat\theta}{}^{(m^*)}_n\bigr)-L(Q_0)\nonumber\\\n&&\qquad=\n\min_{m} \bigl(\bigl[L\bigl(\Phi\bolds{\theta}^{*,(m)}\bigr)- L(Q_0)\bigr]\n+\nL\bigl(\Phi\bolds{\hat\theta}{}^{(m)}_n\bigr)-L\bigl(\Phi\bolds{\theta}^{*,(m)}\bigr)\bigr)\nonumber\\\n&&\qquad\leq\min_{m} \biggl(\bigl[L\bigl(\Phi\bolds{\theta}^{*,(m)}\bigr)-\nL(Q_0)\bigr] + \mbox{constant}\times\frac{|m|\log(Jn)}{n}\biggr)\nonumber\\\n&&\qquad= \min_{\bolds{\theta}} \biggl([L(\Phi\bolds{\theta})-\nL(Q_0)] + \mbox{constant}\times\frac{\|\bolds{\theta}\|_0\log\n(Jn)}{n}\biggr).\n\end{eqnarray}\n\nOn the other hand, take $\lambda_n$ so that condition\n(\ref{eqn:lambdaconditionfix}) holds with ``$=$''.\nEquation (\ref{eqn:peoraclefix}) implies that, with probability at least\n$1-1\/n$,\n\[\nL(\Phi\bolds{\hat\theta}_n)-L(Q_0) \leq\n\min_{\bolds{\theta}\in\Theta_n}\n\biggl([L(\Phi\bolds{\theta})-L(Q_0)]+\mbox{constant}\times\n\frac{\|\bolds{\theta}\|_0\log(Jn)}{n}\biggr)\n\]\nwhich is essentially (\ref{eqn:esterror1}) with the constraint that\n$\bolds{\theta}\in\Theta_n$. (The ``\textit{constant}'' in the\nabove inequalities may take different values.) 
Since $\bolds{\theta}=\bolds{\theta}^{**}_n$ minimizes the approximation error plus\na tight upper bound for the estimation error in the oracle model,\nwithin $\bolds{\theta}\in\Theta_n$, we\nrefer to $\bolds{\theta}^{**}_n$ as an oracle.\n\n\item[(3)] The result can be used to emphasize that the\n$l_1$ penalty behaves similarly to the $l_0$ penalty. Note that\n$\bolds{\hat\theta}_n$ minimizes the empirical prediction error\n$E_n(R-\Phi\bolds{\theta})^2$ plus an $l_1$ penalty, whereas\n$\bolds{\theta}^{**}_n$ minimizes the prediction error\n$L(\Phi\bolds{\theta})$ plus an $l_0$ penalty. We provide an\nintuitive connection between these two quantities. First, note that\n$E_n(R-\Phi\bolds{\theta})^2$ estimates\n$L(\Phi\bolds{\theta})$ and $\hat\sigma_j$ estimates\n$\sigma_j$. We use ``$\approx$'' to denote this relationship. Thus,\n\begin{eqnarray}\label{eqn:thetahatint}\n&&\nE_n(R-\Phi\bolds{\theta})^2\n+\n\lambda_n\sum_{j=1}^J\hat\sigma_j|\theta_j|\\\n&&\qquad\approx L(\Phi\bolds{\theta})+\n\lambda_n\sum_{j=1}^J\sigma_j|\theta_j|\nonumber\\\n&&\qquad\leq L(\Phi\bolds{\theta})+\n\lambda_n\sum_{j=1}^J\sigma_j|\hat\theta_{n,j}-\theta_j|\n+\lambda_n\sum_{j=1}^J\sigma_j|\hat\theta_{n,j}|,\nonumber\n\end{eqnarray}\nwhere $\hat\theta_{n,j}$ is the $j$th component of\n$\bolds{\hat\theta}_n$. In Appendix \ref{apd:peoracle}, we show\nthat for any\n$\bolds{\theta}\in\Theta_{n}$,\n$\lambda_n\sum_{j=1}^J\sigma_j|\hat\theta_{n,j}-\theta_j|$ is upper\nbounded by $\|\bolds{\theta}\|_0\lambda_n^2\/\beta$ up to a~constant\nwith high probability. 
Thus, $\bolds{\hat\theta}_n$\nminimizes (\ref{eqn:thetahatint}) and\n$\bolds{\theta}^{**}_n$ roughly minimizes an upper bound of\n(\ref{eqn:thetahatint}).\n\n\item[(4)] The constants involved in the theorem can be improved; we focused\non readability as opposed to\nproviding the best constants.\n\end{longlist}\n\end{Remarks*}\n\n\section{A practical implementation and an evaluation}\label{sec:data}\nIn this section, we develop a practical implementation of the $l_1$-PLS\nmethod, compare this method to two commonly used alternatives and\nlastly illustrate the method using the motivating data from the\nNefazodone-CBASP trial \cite{keller2000}.\n\nA realistic implementation of the $l_1$-PLS method should use a\ndata-dependent method to select the tuning parameter, $\lambda_n$.\nSince the primary goal is to maximize the Value, we select $\lambda_n$\nto maximize a cross-validated Value estimator. For any ITR~$d$, it is\neasy to verify that $E[(R-V(d))1_{A=d(X)}\/p(A|X)]=0$. Thus, an unbiased\nestimator of $V(d)$ is\n\[\nE_n\n\bigl[1_{A=d(X)}R\/p(A|X)\bigr]\/E_n \bigl[1_{A=d(X)}\/p(A|X)\bigr]\n\]\n\cite{murphy2001} [recall that the randomization distribution $p(a|X)$\nis known]. We\nsplit the data into $10$ roughly equal-sized parts; then for each\n$\lambda_n$ we apply\nthe $l_1$-PLS based method to each set of $9$ parts of the data to obtain an\nITR, and estimate the Value of this ITR using\nthe remaining part;\nthe $\lambda_n$ that maximizes the average of the\n$10$ estimated Values is selected. Since the Value of an ITR is\ndiscontinuous in the parameters, this usually results in a set of candidate\n$\lambda_n$'s achieving maximal Value. In the simulations below, the\nresulting $\lambda_n$ is nonunique in around $97\%$ of the data sets.\nIf necessary, as a second step we reduce the set of $\lambda_n$'s by\nincluding only $\lambda_n$'s leading to the ITRs using the fewest\nvariables. 
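The cross-validated selection of $\lambda_n$ described above rests on the inverse-probability-weighted Value estimator $E_n[1_{A=d(X)}R/p(A|X)]/E_n[1_{A=d(X)}/p(A|X)]$. A minimal sketch of this estimator (the function name and the toy data are ours, not part of the trial implementation):

```python
import numpy as np

def value_estimate(d, X, A, R, p):
    """Inverse-probability-weighted estimate of the Value V(d).

    d : callable mapping one covariate row to a treatment
    X, A, R : covariates, assigned treatments, responses
    p : randomization probabilities p(A_i | X_i), known by design
    """
    match = np.array([d(x) == a for x, a in zip(X, A)], dtype=float)
    w = match / p                      # 1_{A = d(X)} / p(A|X)
    return np.sum(w * R) / np.sum(w)   # ratio of the two empirical means

# toy check under a uniform binary design: the rule "always treat with 1"
# should estimate roughly E[R | A = 1]
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 5))
A = rng.choice([-1, 1], size=500)
R = (A == 1).astype(float) + rng.normal(0, 0.1, size=500)
p = np.full(500, 0.5)
v = value_estimate(lambda x: 1, X, A, R, p)
```

Dividing by the empirical mean of the weights (the ratio form) rather than using the plain weighted average is the normalization used in the display above; it tends to stabilize the estimator when the weights are unbalanced.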
In the simulations below, this second\ncriterion effectively reduced the number of candidate $\lambda_n$'s\nin around $25\%$ of the data sets; however, multiple $\lambda_n$'s still\nremained in\naround $90\%$ of the data sets. This is not surprising since the Value of\nan ITR only depends on the relative magnitudes of\nparameters in the ITR. In the third step, we select the $\lambda_n$\nthat minimizes the $10$-fold cross-validated prediction error estimator\nfrom the remaining candidate\n$\lambda_n$'s;\nthat is, minimization of the empirical\nprediction error is used as a final tie breaker.\n\n\subsection{Simulations} \label{sec:simulation}\n\nA first alternative to $l_1$-PLS is to use ordinary least squares\n(OLS). The\nestimated ITR is $\hat\nd_{\mathrm{OLS}}\in\arg\max_a\Phi(X,a)\bolds{\hat\theta}_{\mathrm{OLS}}$ where\n$\bolds{\hat\theta}_{\mathrm{OLS}}$ is the OLS\nestimator of $\bolds{\theta}$. A second alternative is called\n``prognosis prediction'' \cite{kent2002}. Usually this method employs\nmultiple data\nsets, each of which involves one active treatment. Then\nthe treatment associated with the best predicted prognosis\nis selected. We implement this method by\nestimating $E(R|X,A=a)$ via least squares with $l_1$\npenalization for each treatment group (each $a\in\mathcal{A}$)\nseparately. The\ntuning parameter involved in each treatment group is selected by\nminimizing the $10$-fold cross-validated prediction error estimator.\nThe resulting ITR satisfies $\hat\nd_{\mathrm{PP}}(X)\in\arg\max_{a\in\mathcal{A}}\hat E(R|X,A=a)$ where the\nsubscript ``PP''\ndenotes prognosis prediction.\n\nFor simplicity, we consider binary $A$. All three methods use the same\nnumber of\ndata points and the same number of basis functions but use these data\npoints\/basis\nfunctions differently. 
$l_1$-PLS and OLS use all $J$ basis functions to conduct\nestimation with all $n$ data points whereas the prognosis prediction\nmethod splits the data into the two treatment groups and uses $J\/2$\nbasis functions to conduct estimation with the $n\/2$ data points in\neach of the two treatment groups. To ensure the comparison is fair\nacross the three methods, the approximation model for each treatment\ngroup is consistent with the approximation model used in both $l_1$-PLS\nand OLS [e.g., if $Q_0$ is approximated by $(1, X, A,\nXA)\\bolds{\\theta}$ in $l_1$-PLS and OLS, then in prognosis\nprediction we approximate\n$E(R|X,A=a)$ by $(1, X)\\bolds{\\theta}_{\\mathrm{PP}}$ for each treatment\ngroup].\nWe do not penalize the intercept coefficient in either prognosis\nprediction or $l_1$-PLS.\n\nThe three methods are compared using two criteria: (1)\nValue maximization; and (2) simplicity of the estimated ITRs (measured\nby the number of variables\/basis functions used in\nthe rule).\n\nWe illustrate the comparison of the three methods using $4$ examples\nselected to reflect three\nscenarios (see Section S.3 of the supplemental article \\cite\n{supplement} for $4$ further examples):\n\\begin{longlist}[(1)]\n\\item[(1)] There is no treatment effect [i.e., $Q_0$ is constructed so that\n$T_0=0$; example~(1)]. In this case, all ITRs yield the same Value. Thus, the\nsimplest rule is preferred.\n\n\\item[(2)] There is a treatment effect and the treatment effect term $T_0$\nis correctly modeled [example (4) for large $n$ and example (2)]. In this\ncase, minimizing the prediction error will yield the\nITR that maximizes the Value.\n\n\\item[(3)] There is a treatment effect and the treatment effect term\n$T_0$ is misspecified [example (4) for small $n$ and example (3)]. In this\ncase, there might be\na mismatch between prediction error minimization and Value\nmaximization.\n\\end{longlist}\n\nThe examples are generated as follows. 
The\ntreatment $A$ is generated uniformly from $\{-1,1\}$ independent of $X$\nand the response $R$. The response $R$ is\nnormally distributed with mean $Q_0(X,A)$. In examples (1)--(3),\n$X\sim U[-1,1]^5$ and we consider three simple examples for $Q_0$. In\nexample (4),\n$X \sim U[0,1]$ and we use a complex $Q_0$, where\n$Q_0(X,1)$ and $Q_0(X,-1)$ are similar to the blocks function used in\nDonoho and Johnstone\n\cite{donoho1994}.\nFurther details of the simulation design are\nprovided in Appendix \ref{sec:simdesign}.\n\nWe consider two types of approximation models for $Q_0$. In examples\n(1)--(3), we approximate $Q_0$ by\n$(1,X,A,XA)\bolds{\theta}$. In example\n(4), we approximate $Q_0$ by Haar wavelets. The number of basis\nfunctions may increase as $n$ increases (we index $J$, $\Phi$ and\n$\bolds{\theta}^*$ by $n$ in this case). Plots for $Q_0(X,A)$\nand the associated best wavelet fits\n$\Phi_n(X,A)\bolds{\theta}^*_n$ are provided in Figure\n\ref{fig:qoptfit}.\n\n\begin{figure}\n\vspace{-2pt}\n\includegraphics{864f01.eps}\n\vspace{-5pt}\n\caption{Plots for:\nthe conditional mean function $Q_0(X,A)$ (\textup{left}), $Q_0(X,A)$ and the\nassociated best wavelet fit when $J_n=8$ (\textup{middle}), and $Q_0(X,A)$\nand the associated best wavelet fit when $J_n=128$ (\textup{right}) [example\n(4)].}\n\label{fig:qoptfit}\n\vspace{-8pt}\n\end{figure}\n\nFor each example, we simulate data sets of sizes $n=2^k$ for $k=5,\ldots,10$.\n$1\mbox{,}000$ data sets are generated for each\nsample size.\nThe Value of each estimated ITR is evaluated via Monte Carlo using a\ntest set of size $10\mbox{,}000$.\nThe Value of the optimal ITR is also evaluated using the\ntest set.\n\n\begin{figure}\n\vspace{-2pt}\n\includegraphics{864f02.eps}\n\vspace{-5pt}\n\caption{Comparison of the $l_1$-PLS based method\nwith the OLS method and the PP method [examples (1)--(4)]: plots for\nmedians and median absolute deviations (MAD) of\nthe Value of the estimated 
decision rules (top panels) and the number\nof variables (terms) needed for treatment assignment (including the\nmain treatment\neffect term, bottom panels) over $1\mbox{,}000$ samples versus sample size\non the log scale. The black dash-dotted line in each plot on the\nfirst row denotes the Value of the optimal treatment rule\nfor each example. [$n=32, 64, 128, 256, 512, 1024$. The\ncorresponding numbers of basis functions in example (4) are\n$J_n=8, 16, 32, 64, 64, 128$.]}\label{fig:simulation}\n\vspace{-8pt}\n\end{figure}\n\nSimulation results are presented in Figure \ref{fig:simulation}.\nWhen the approximation model is of\nhigh quality, all methods produce ITRs with similar Value [see examples\n(1), (2) and example (4) for large $n$].\nHowever, when the approximation model is poor, the $l_1$-PLS method\nmay produce the highest Value [see example (3)]. Note\nthat in example (3), in settings where the sample size is small, the Value\nof the ITR produced by\nthe $l_1$-PLS method has a larger median absolute deviation (MAD)\nthan the other two methods. One\npossible reason is that due to the mismatch between maximizing the\nValue and minimizing the prediction error, the Value estimator\nplays a strong role in selecting $\lambda_n$.\nThe nonsmoothness of the Value estimator combined with the mismatch results\nin very different $\lambda_n$'s and thus the estimated decision rules\nvary greatly\nfrom data set to data set in this example. Nonetheless, the $l_1$-PLS\nmethod is still preferred after taking the variation into account; indeed\n$l_1$-PLS produces ITRs with higher Value than both OLS\nand PP in around $46\%$, $55\%$ and $67\%$ of the data sets of sizes $n=32, 64$\nand $128$, respectively. Furthermore, in general the $l_1$-PLS\nmethod uses far fewer variables for treatment assignment than the\nother two methods. 
This is expected because the OLS method\ndoes not have variable selection functionality and the PP method\nwill use all variables that are predictive of the response $R$ whereas\nthe use of the Value in selecting the tuning parameter in $l_1$-PLS\ndiscounts variables that are only useful in\npredicting the response (and less useful in selecting the best treatment).\n\n\n\\subsection{Nefazodone-CBASP trial example} \\label{sec:realdata}\n\nThe Nefazodone-CBASP trial was conducted to compare the efficacy of\nseveral alternate treatments for patients with chronic depression.\nThe study randomized $681$ patients with nonpsychotic chronic major\ndepressive disorder (MDD) to either Nefazodone, cognitive\nbehavioral-analysis system of psychotherapy (CBASP) or the\ncombination of the two treatments. Various assessments were taken\nthroughout the study, among which the score on the 24-item Hamilton\nRating Scale for Depression (HRSD) was the primary outcome. Low HRSD\nscores are desirable. See Keller et al. \\cite{keller2000} for more\ndetail of the\nstudy design and the primary analysis.\n\nIn the data analysis, we use a subset of the Nefazodone-CBASP data\nconsisting of $656$ patients for whom the response HRSD score was\nobserved. In this trial, pairwise comparisons show that the\ncombination treatment resulted in significantly lower HRSD scores\nthan either of the single treatments. There was no overall\ndifference between the single treatments.\n\nWe use $l_1$-PLS to develop an ITR. In\nthe analysis, the HRSD score is reverse coded so that higher is\nbetter. We consider $50$ pretreatment variables\n$X=(X_1,\\ldots,X_{50})$. Treatments are coded using contrast coding\nof dummy variables $A = (A_1,A_2)$, where $A_1 = 2$ if the\ncombination treatment is assigned and $-1$ otherwise and $A_2 = 1$\nif CBASP is assigned, $-1$ if nefazodone and $0$ otherwise. The\nvector of basis functions, $\\Phi(X,A)$, is of the form\n$(1,X,A_1,XA_1, A_2, XA_2)$. 
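With $50$ pretreatment variables and the contrast coding above, the basis $(1,X,A_1,XA_1,A_2,XA_2)$ can be assembled columnwise; a sketch (the function name is ours):

```python
import numpy as np

def basis(X, A1, A2):
    """Row-wise basis Phi(X, A) = (1, X, A1, X*A1, A2, X*A2).

    X : (n, 50) matrix of pretreatment covariates
    A1, A2 : length-n contrast-coded treatment indicators
    """
    one = np.ones((len(X), 1))
    a1 = A1[:, None]
    a2 = A2[:, None]
    return np.hstack([one, X, a1, X * a1, a2, X * a2])

# column count: 1 + 50 + 1 + 50 + 1 + 50 = 153
Phi = basis(np.zeros((4, 50)), np.ones(4), np.zeros(4))
J = Phi.shape[1]
```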
So the number of basis functions is\n$J=153$. For comparison, we also consider the OLS method and the PP\nmethod (separate prognosis prediction for each treatment). The\nvector of basis functions used in PP is $(1, X)$ for each of the three treatment\ngroups. Neither the intercept term nor the main treatment effect\nterms in $l_1$-PLS or PP are penalized (see Section S.2 of the\nsupplemental article \cite{supplement} for the modification of the\nweights $\hat\sigma_j$\nused in (\ref{eqn:thetahat})).\n\nThe ITR given by the $l_1$-PLS method\nrecommends the combination treatment to all patients (so none of the\npretreatment variables enter the rule). On the other hand, the PP\nmethod produces an ITR that uses $29$ variables. If the\nrule produced by PP were used to assign\ntreatment for the $656$ patients in the trial, it would recommend\nthe combination treatment for $614$ patients and nefazodone for the\nother $42$ patients. In addition, the OLS method uses all\n$50$ variables. If the ITR produced by OLS were used to\nassign treatment for the $656$ patients in the trial, it would\nrecommend the combination treatment for $429$ patients, nefazodone\nfor $145$ patients and CBASP for the other $82$ patients.\n\n\section{Discussion}\label{sec:discussion}\n\nOur goal is to construct a high-quality ITR that will benefit future patients.\nWe considered an $l_1$-PLS\nbased method and provided a finite sample upper bound for\n$V(d_0)-V(\hat d_n)$, the reduction in Value of the estimated ITR.\n\nThe use of an $l_1$ penalty allows us\nto consider a large model for the conditional mean function $Q_0$\nyet permits a sparse estimated ITR. In\nfact, many other penalization methods such as SCAD \cite{fan2001}\nand the $l_1$ penalty with adaptive weights (adaptive Lasso;\n\cite{zou2006}) also have this property. We choose the nonadaptive\n$l_1$ penalty to represent these methods. 
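As an implementation note, the weighted $l_1$ criterion in (\ref{eqn:thetahat}) can be handed to any plain lasso solver after rescaling the columns of $\Phi$ to unit empirical $L_2$ norm. The sketch below is ours: it penalizes every column for brevity (whereas our implementation leaves the intercept and main treatment effect terms unpenalized) and solves the rescaled problem by cyclic coordinate descent:

```python
import numpy as np

def l1_pls(Phi, R, lam, n_iter=200):
    """Weighted-l1 penalized least squares:

        minimize  E_n(R - Phi theta)^2 + lam * sum_j sigma_j |theta_j|,

    with sigma_j the empirical L2 norm of column j (assumed nonzero).
    Rescaling columns to unit norm turns the weighted penalty into a
    plain one; here the plain lasso is solved by coordinate descent.
    """
    n = len(R)
    sigma = np.sqrt(np.mean(Phi**2, axis=0))
    Z = Phi / sigma                    # columns now have mean-square 1
    w = np.zeros(Z.shape[1])
    resid = R - Z @ w
    for _ in range(n_iter):
        for j in range(Z.shape[1]):
            s = np.mean(Z[:, j] ** 2)  # equals 1 after rescaling
            # correlation of column j with its partial residual
            rho = Z[:, j] @ resid / n + w[j] * s
            wj = np.sign(rho) * max(abs(rho) - lam / 2, 0.0) / s
            resid += Z[:, j] * (w[j] - wj)
            w[j] = wj
    return w / sigma                   # map back: theta_j = w_j / sigma_j
```

The factor $\lambda_n/2$ in the soft threshold comes from writing the squared-error term as $\frac{1}{n}\|R-Z w\|^2$ and differentiating; the returned coefficients are on the original (unscaled) $\Phi$ columns.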
Interested readers may\njustify other PLS methods using similar proof techniques.\n\nThe high\\vspace*{1pt} probability finite sample upper bounds [i.e., (\\ref\n{eqn:finaloracle}) and\n(\\ref{eqn:finaloracle1})] cannot be used to construct a prediction\/confidence\ninterval for $V(d_0)-V(\\hat d_n)$ due to the unknown quantities in\nthe bound. How to develop a tight computable upper bound to\nassess the quality of $\\hat d_n$ is an open question.\n\nWe used cross validation with Value maximization to select the\ntuning parameter involved in the $l_1$-PLS method. As compared to\nthe OLS method and the PP method, this method\nmay yield higher Value when $T_0$ is misspecified.\nHowever, since only the Value is used to select the tuning\nparameter, this method may produce a complex ITR for which the Value is\nonly slightly higher than that\nof a much simpler ITR. In this case, a simpler rule may\nbe preferred due to the interpretability and cost of collecting the\nvariables. Investigation of a tuning parameter selection criterion\nthat trades off the Value with the number of variables in an\nITR is needed.\n\nThis paper studied a one stage decision problem. However, it is\nevident that some diseases require time-varying treatment. For\nexample, individuals with a chronic disease often experience a\nwaxing and waning course of illness. In these settings, the goal is\nto construct a sequence of ITRs that\ntailor the type and dosage of treatment through time according to an\nindividual's changing status. There is an abundance of statistical\nliterature in this area\n\\cite\n{thall2000,thall2002,murphy2003,murphy2005,robins2004,lunceford2002,vanderlaan2005,wahedtsiatis2006}.\nExtension of the least squares\nbased method to the multi-stage decision problem has been presented\nin Murphy \\cite{murphy2005}. 
The performance of $l_1$ penalization\nin this setting is unclear and worth investigation.\n\n\begin{appendix}\n\n\section*{Appendix}\n\n\subsection{\texorpdfstring{Proof of Theorem \lowercase{\protect\ref{thm:bound2}}}{Proof of Theorem 3.1}}\n\label{apd:margin}\n\nFor any ITR $d\dvtx\mathcal{X}\rightarrow\mathcal{A}$, denote\n$\triangle\nT_d(X)$ $\triangleq\max_{a\in\mathcal{A}}T_0(X,a)-T_0(X,d(X))$. Using\nsimilar arguments to those in Section~\ref{sec:prelim}, we have\n$V(d_0)-V(d)=E(\triangle T_d)$. If $V(d_0)-V(d)=0$, then\n(\ref{eqn:bound2}) and (\ref{eqn:bound3}) automatically hold.\nOtherwise, $E(\triangle T_d)^2\geq(E\triangle T_d)^2>0$. In this\ncase, for any $\epsilon>0$, define the event\vspace{-2pt}\n\[\n\Omega_\epsilon=\Bigl\{\max_{a\in\mathcal{A}}T_0(X,a)-\n\max_{a\in\mathcal{A}\setminus\mathop{\arg\max}_{a\in\mathcal\n{A}}T_0(X,a)}T_0(X,a)\leq\epsilon\Bigr\}.\n\]\nThen $\triangle T_d\leq(\triangle T_d)^2\/\epsilon$ on the event\n$\Omega_\epsilon^C$. This, together with the fact that $\triangle\nT_d\leq(\triangle T_d)^2\/\epsilon+\epsilon\/4$ [which holds since\n$(\triangle T_d-\epsilon\/2)^2\geq0$], implies\n\begin{eqnarray*}\nV(d_0)-V(d) &=& E(1_{\Omega_\epsilon^C}\triangle T_d)+\nE(1_{\Omega_\epsilon}\triangle T_d)\\[-2pt]\n&\leq&\frac{1}{\epsilon}E[1_{\Omega_\epsilon^C} (\triangle\nT_d)^2]+E\biggl[1_{\Omega_\epsilon}\biggl(\frac{(\triangle\nT_d)^2}{\epsilon}+\frac{\epsilon}{4}\biggr)\biggr]\\[-2pt]\n&=&\frac{1}{\epsilon}E[(\triangle\nT_d)^2]+\frac{\epsilon}{4}P(\Omega_\epsilon)\leq\n\frac{1}{\epsilon}E[(\triangle\nT_d)^2]+\frac{C}{4}\epsilon^{1+\alpha},\n\end{eqnarray*}\nwhere the last inequality follows from the margin condition\n(\ref{eqn:noise}). 
Choosing $\epsilon= (4E(\triangle\nT_d)^2\/C)^{1\/(2+\alpha)}$ to minimize the above upper bound\nyields\n\begin{equation} \label{eqn:thm1part1}\nV(d_0)-V(d)\leq\n2^{\alpha\/(2+\alpha)}C^{1\/(2+\alpha)}[E(\triangle\nT_d)^2]^{(1+\alpha)\/(2+\alpha)}.\n\end{equation}\n\nNext, for any $d$ and $Q$ such that\n$d(X)\in\arg\max_{a\in\mathcal{A}}Q(X,a)$, let $T(X,A)$ be the associated\ntreatment effect term. Then\n\begin{eqnarray*}\nE(\triangle T_d)^2\n&=&E\Bigl[\Bigl(\max_{a\in\mathcal{A}}T_0(X,a)-\max_{a\in\mathcal{A}}T(X,a)+\nT(X,d(X))-T_0(X,d(X))\Bigr)^2\Bigr]\\[-2pt]\n&\leq&\n2E\Bigl[\Bigl(\max_{a\in\mathcal{A}}T_0(X,a)-\max_{a\in\mathcal\n{A}}T(X,a)\Bigr)^2\\[-2pt]\n&&\hspace*{17.8pt}{} +\n\bigl(T(X,d(X))-T_0(X,d(X))\bigr)^2\Bigr]\\[-2pt]\n&\leq&\n4E\Bigl[\max_{a\in\mathcal{A}}\bigl(T(X,a)-T_0(X,a)\bigr)^2\Bigr],\n\end{eqnarray*}\nwhere the last inequality follows from the fact that neither\n$|{\max_aT_0(X,a)}-\max_aT(X,a)|$ nor $|T(X,d(X))-T_0(X,d(X))|$ is\nlarger than ${\max_a}|T(X,a)-T_0(X,a)|$. Since $p(a|x)\geq S^{-1}$ for\nall $(x,a)$ pairs, we have\n\begin{eqnarray} \label{eqn:thm1part2}\nE(\triangle T_d)^2 &\leq&4S\nE\Bigl[\sum_{a\in\mathcal{A}}\bigl(T(X,a)-T_0(X,a)\bigr)^2p(a|X)\n\Bigr]\nonumber\\[-2pt]\n&=&4SE\bigl(T(X,A)-T_0(X,A)\bigr)^2.\n\end{eqnarray}\nInequality (\ref{eqn:bound3}) follows by substituting\n(\ref{eqn:thm1part2}) into (\ref{eqn:thm1part1}). 
Inequality\n(\ref{eqn:bound2}) can be proved similarly by noticing that\n$\triangle T_d(X)=\max_{a\in\mathcal{A}}Q_0(X,a)-Q_0(X,d(X))$.\vspace{-2pt}\n\n\subsection{\texorpdfstring{Generalization of Theorem \lowercase{\protect\ref{thm:peoraclefix}}}\n{Generalization of Theorem 4.3}}\n\label{apd:peoracle}\n\nIn this section, we present a generalization of Theorem\n\ref{thm:peoraclefix} where $J$ may depend on $n$ and the sparsity\nof any $\bolds{\theta}\in\mathbb{R}^{J}$ is measured by the\nnumber of ``large'' components in $\bolds{\theta}$ as described\nin Zhang and Huang \cite{chzhang2008}. In this case, $J$, $\Phi$ and\nthe prediction\nerror minimizer $\bolds{\theta}^*$\nare denoted as $J_n, \Phi_n$ and $\bolds{\theta}^*_n$,\nrespectively. All relevant quantities and assumptions are restated below.\n\nLet $|M|$ denote the cardinality of any index set\n$M\subseteq\{1,\ldots,J_n\}$. For any $\bolds{\theta}\in\n\mathbb{R}^{J_n}$ and constant $\rho\geq0$, define\n\[\nM_{\rho\lambda_n}(\bolds{\theta})\in\mathop{\arg\min}_{\{M\subseteq\{\n1,\ldots,J_n\}\dvtx\n\sum_{j\in\{1,\ldots,J_n\}\setminus M}\sigma_j|\theta_j|\leq\rho\n|M|\lambda_n\}}|M|.\n\]\nThen $M_{\rho\lambda_n}(\bolds{\theta})$ is the smallest index\nset that contains only ``large'' components\nin~$\bolds{\theta}$. $|M_{\rho\lambda_n}(\bolds{\theta})|$\nmeasures the sparsity of $\bolds{\theta}$. It is easy to see that\nwhen $\rho=0$,\n$M_0(\bolds{\theta})$ is the index set of nonzero components in\n$\bolds{\theta}$ and $|M_0(\bolds{\theta})|=\|\bolds{\theta}\|_0$. 
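For a given $\bolds{\theta}$, the set $M_{\rho\lambda_n}(\bolds{\theta})$ can be computed directly: for each candidate size $m$, the leftover sum $\sum_{j\notin M}\sigma_j|\theta_j|$ is minimized by retaining the indices of the $m$ largest values of $\sigma_j|\theta_j|$, so it suffices to scan over $m$. A sketch (the function name is ours):

```python
import numpy as np

def large_component_set(theta, sigma, rho, lam):
    """Smallest index set M with sum_{j not in M} sigma_j|theta_j| <= rho*|M|*lam.

    For each size m, keeping the m largest values of sigma_j|theta_j|
    minimizes the leftover sum, so scanning m = 0, 1, ... finds the
    smallest feasible set.
    """
    v = sigma * np.abs(theta)
    order = np.argsort(v)[::-1]            # indices by decreasing magnitude
    for m in range(len(theta) + 1):
        if v[order[m:]].sum() <= rho * m * lam:
            return set(order[:m].tolist())
    return set(range(len(theta)))
```

When $\rho=0$ the scan stops exactly at the support of $\bolds{\theta}$, recovering $\|\bolds{\theta}\|_0$.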
Moreover,\n$M_{\rho\lambda_n}(\bolds{\theta})$ is an empty set if and only\nif $\bolds{\theta}=\mathbf{0}$.\n\nLet $[\bolds{\theta}^*_n]$ be the set of most sparse prediction error\nminimizers in the linear model, that is,\vspace{-5pt}\n\begin{equation}\label{eqn:thetastar}\n[\bolds{\theta}^*_n]=\mathop{\arg\min}_{\bolds{\theta}\in\mathop{\arg\min}\n_{\bolds{\theta}}\nL(\Phi_n\bolds{\theta})}|M_{\rho\lambda_n}(\bolds{\theta})|.\n\end{equation}\nNote that $[\bolds{\theta}^*_n]$ depends on $\rho\lambda_n$.\n\nTo derive the finite sample upper bound for $L(\Phi_n\bolds{\hat\n\theta}_n)$, we need the following assumptions.\vspace{-5pt}\n\begin{assumption}\label{apn:errorterm} The error\vspace*{1pt} terms $\varepsilon_i,\ni=1,\ldots,n$ are independent of $(X_i,A_i), i=1,\ldots,n$ and are\ni.i.d. with $E(\varepsilon_i)=0$ and\n$E[|\varepsilon_i|^l]\leq\frac{l!}{2}c^{l-2}\sigma^2$ for some\n$c,\sigma^2>0$ for all $l\geq2$.\n\end{assumption}\n\begin{assumption}\label{apn:basis} For all $n\geq1$:\n\n\begin{longlist}[(a)]\n\item[(a)]\nthere exists a constant $1\leq U_n<\infty$ such that\n$\max_{j=1,\ldots,{J_n}}\|\phi_j\|_\infty\/\sigma_j\leq U_n$, where\n$\sigma_j\triangleq(E\phi_j^2)^{1\/2}$.\n\n\item[(b)] there exists a constant $0<\eta_{1,n}<\infty$ such that\n$\sup_{\bolds{\theta}\in\n[\bolds{\theta}^*_n]}\|Q_0-\Phi_n\bolds{\theta}\|_\infty\leq\n\eta_{1,n}$.\n\end{longlist}\n\end{assumption}\n\nFor any $0\leq\gamma<1\/2$, $\eta_{2,n}\geq0$ (which may\ndepend on $n$) and tuning parameter~$\lambda_n$, define\n\begin{eqnarray*}\n\Theta_{n}^o &=&\n\biggl\{\bolds{\theta}\in\mathbb{R}^{J_n}\dvtx\exists\n\bolds{\theta}^o\in[\bolds{\theta}^*_n]\n\mbox{ s.t. 
}\n\\|\\Phi_n(\\bolds{\\theta}-\\bolds{\\theta}^o)\\|_\\infty\\leq\\eta\n_{2,n}\\\\\n&&\\hspace*{31.13pt}\n\\mbox{ and }\n\\max_{j=1,\\ldots,J_n}\\biggl|E\\biggl[\\Phi_n(\\bolds{\\theta}-\\bolds{\\theta}^o)\\frac{\\phi_j}{\\sigma_j}\\biggr]\\biggr|\\leq\n\\gamma{\\lambda_n}\\biggr\\}.\n\\end{eqnarray*}\n\\begin{assumption}\\label{apn:grammatrix} For any $n\\geq1$,\nthere exists a $\\beta_n>0$ such that\n\\[\nE[\\Phi_n(\\bolds{\\tilde\\theta}-\\bolds{\\theta})]^2|M_{\\rho\n\\lambda_n}(\\bolds{\\theta})|\\geq\\beta_n\n\\biggl[\\biggl(\\sum_{j\\in\nM_{\\rho\\lambda_n}(\\bolds{\\theta})}\\sigma_j|\\tilde\\theta_j-\\theta\n_j|\\biggr)^2-\\rho^2\n|M_{\\rho\\lambda_n}(\\bolds{\\theta})|^2\\lambda_n^2\\biggr]\n\\]\nfor all\n$\\bolds{\\theta}\\in\\Theta_{n}^o\\setminus\\{\\mathbf{0}\\}$,\n$\\bolds{\\tilde\\theta}\\in\\mathbb{R}^{J_n}$ satisfying $\\sum_{j\\in\n\\{1,\\ldots,J_n\\}\\setminus\nM_{\\rho\\lambda_n}(\\bolds{\\theta})}\\sigma_j|\\tilde\\theta_j|\\leq\n\\frac{2\\gamma+5}{1-2\\gamma}\\times(\\sum_{j\\in\nM_{\\rho\\lambda_n}(\\bolds{\\theta})}\\sigma_j|\\tilde\\theta_j-\\theta\n_j|+\\rho\n|M_{\\rho\\lambda_n}(\\bolds{\\theta})|\\lambda_n)$.\n\\end{assumption}\n\nWhen $E(\\Phi_n^{(2)}(X,A)^T|X)=\\mathbf{0}$ a.s. 
($\\Phi_n^{(2)}$\nis defined in Section \\ref{sec:finaloracle}), we need an extra\nassumption to derive the finite sample upper bound for the mean square\nerror of the treatment effect estimator $E[\\Phi_n^{(2)}\\bolds\n{\\hat\\theta}{}^{(2)}_n-T_0(X,A)]^2$ (recall that $T_0(X,A)\\triangleq\nQ_0(X,A)-E[Q_0(X,A)|X]$).\n\\begin{assumption}\n\\label{apn:grammatrix_Topt} For any $n\\geq1$, there exists a\n$\\beta_n>0$ such that\n\\begin{eqnarray*}\n&&E\\bigl[\\Phi_n^{(2)}\\bigl(\\bolds{\\tilde\\theta}{}^{(2)}-\\bolds{\\theta}^{(2)}\\bigr)\\bigr]^2\n\\bigl|M_{\\rho\\lambda_n}^{(2)}(\\bolds{\\theta})\\bigr|\\\\\n&&\\hspace*{0pt}\\qquad\\geq\\beta_n\n\\biggl[\\biggl(\\sum_{j\\in\nM_{\\rho\\lambda_n}^{(2)}(\\bolds{\\theta})}\\sigma_j|\\tilde\\theta\n_j-\\theta_j|\\biggr)^2-\\rho^2\n\\bigl|M_{\\rho\\lambda_n}^{(2)}(\\bolds{\\theta})\\bigr|^2\\lambda_n^2\\biggr]\n\\end{eqnarray*}\nfor all\n$\\bolds{\\theta}\\in\\Theta_{n}^o\\setminus\\{\\mathbf{0}\\}$,\n$\\bolds{\\tilde\\theta}\\in\\mathbb{R}^{J_n}$ satisfying $\\sum_{j\\in\n\\{1,\\ldots,J_n\\}\\setminus\nM_{\\rho\\lambda_n}(\\bolds{\\theta})}\\sigma_j|\\tilde\\theta_j|\\leq\n\\frac{2\\gamma+5}{1-2\\gamma}\\times(\\sum_{j\\in\nM_{\\rho\\lambda_n}(\\bolds{\\theta})}\\sigma_j|\\tilde\\theta_j-\\theta_j|+\\rho\n|M_{\\rho\\lambda_n}(\\bolds{\\theta})|\\lambda_n)$, where\n\\[\nM_{\\rho\\lambda_n}^{(2)}(\\bolds{\\theta})\\in\\mathop{\\arg\\min}_{\\{M\\subseteq\\{\nJ_n^{(1)}+1,\\ldots,J_n\\}\\dvtx\n\\sum_{j\\in\\{J_n^{(1)}+1,\\ldots,J_n\\}\\setminus\nM}\\sigma_j|\\theta_j|\\leq\\rho|M|\\lambda_n\\}}|M|\n\\]\nis the smallest index set that contains only large components in\n$\\bolds{\\theta}^{(2)}$.\n\\end{assumption}\n\nWithout loss of generality, we assume that Assumptions \\ref\n{apn:grammatrix} and \\ref{apn:grammatrix_Topt} hold with the same value\nof $\\beta_n$. 
And we can always choose a small enough $\\beta_n$ so that\n$\\rho\\beta_n\\leq1$ for a given $\\rho$.\n\nFor any given $t>0$, define\n\\begin{eqnarray}\\label{eqn:thetan2}\n\\Theta_{n}&=&\n\\Biggl\\{\\bolds{\\theta}\\in\\Theta_{n}^o\\dvtx|M_{\\rho\\lambda\n_n}(\\bolds{\\theta})|\\nonumber\\\\[-8pt]\\\\[-8pt]\n&&\\hspace*{6.5pt}\\leq\n\\frac{(1-2\\gamma)^2\\beta_n}{120}\\Biggl[\\sqrt{\\frac{1}{9}\n+\\frac{n}{2U_n^2[\\log(3J_n(J_n+1))+t]}}-\\frac{1}{3}\\Biggr]\\Biggr\\}.\\nonumber\n\\end{eqnarray}\n\nNote that we allow $U_n, \\eta_{1,n}, \\eta_{2,n}$ and $\\beta_n^{-1}$\nto increase as $n$ increases. However, if those quantities are\nsmall, the upper bound in (\\ref{eqn:peoracle}) will be tighter.\n\\begin{theorem} \\label{thm:peoracle}\nSuppose Assumptions \\ref{apn:errorterm} and \\ref{apn:basis} hold.\nFor any given $0\\leq\\gamma<1\/2$, $\\eta_{2,n}>0$, $\\rho\\geq0$ and\n$t>0$, let $\\bolds{\\hat\\theta}_n$ be the $l_1$-PLS estimator\ndefined in (\\ref{eqn:thetahat}) with tuning parameter\n\\begin{eqnarray}\\label{eqn:lambdacondition}\n\\lambda_n&\\geq&\\frac{8\\max\\{3c,2(\\eta_{1,n}+\\eta_{2,n})\\}U_n(\\log\n6J_n+t)}\n{(1-2\\gamma)n}\\nonumber\\\\[-8pt]\\\\[-8pt]\n&&{}+\\frac{12\\max\\{\\sigma,(\\eta_{1,n}+\\eta_{2,n})\\}}{(1-2\\gamma)}\n\\sqrt{\\frac{2(\\log6J_n+t)}{n}}.\\nonumber\n\\end{eqnarray}\nSuppose Assumption \\ref{apn:grammatrix} holds with $\\rho\\beta_n\\leq\n1$. 
Let $\\Theta_n$ be the set defined in (\\ref{eqn:thetan2}) and\nassume $\\Theta_n$ is nonempty.\nIf\n\\begin{equation}\\label{eqn:jncondition}\n\\frac{\\log2J_n}{n}\\leq\n\\frac{2(1-2\\gamma)^2}{27U_n^2-10\\gamma-22},\n\\end{equation}\nthen with probability at least $1-\\exp(-k'_nn)-\\exp(-t)$, we have\n\\begin{equation}\\label{eqn:peoracle}\nL(\\Phi_n\\bolds{\\hat\\theta}_n)\\leq\n\\min_{\\bolds{\\theta}\\in\\Theta_n}\\biggl[L(\\Phi_n\\bolds{\\theta})\n+K_n\n\\frac{|M_{\\rho\\lambda_n}(\\bolds{\\theta})|}{\\beta_n}\\lambda_n^2\\biggr],\n\\end{equation}\nwhere $k_n'=13(1-2\\gamma)^2\/[6(27U_n^2-10\\gamma-22)]$ and\n$K_n=[40\\gamma(12\\beta_n\\rho+2\\gamma+5)]\/[(1-2\\gamma)(2\\gamma+19)]\n+130(12\\beta_n\\rho+2\\gamma+5)^2\/[9(2\\gamma+19)^2]$.\n\nFurthermore, suppose $E(\\Phi_n^{(2)}(X,A)^T|X)=\\mathbf{0}$ a.s. If\nAssumption \\ref{apn:grammatrix_Topt} holds with\nthe same $\\beta_n$ as that in Assumption \\ref{apn:grammatrix}, then\nwith probability at least\n$1-\\exp(-k'_nn)-\\exp(-t)$, we have\n\\[\nE\\bigl(\\Phi_n^{(2)}\\bolds{\\hat\\theta}{}^{(2)}_n-T_0\\bigr)^2 \\leq\n\\min_{\\bolds{\\theta}\\in\\Theta_n}\\biggl[E\\bigl(\\Phi_n^{(2)}\\bolds{\\theta}^{(2)}-T_0\\bigr)^2\n+ K'_n\\frac{|M_{\\rho\\lambda_n}^{(2)}\n(\\bolds{\\theta})|}{\\beta_n}\\lambda_n^2\\biggr],\n\\]\nwhere $K_n'=\n20(12\\beta_n\\rho+2\\gamma+5)\\{\\gamma\/[(1-2\\gamma)(7-6\\beta_n\\rho)]\n+[3(1-2\\gamma)\\beta_n\\rho+10(2\\gamma+5)]\/[9(2\\gamma+19)^2]\\}$.\n\\end{theorem}\n\\begin{Remarks*}\n\\begin{longlist}[(1)]\n\\item[(1)]\nNote that $K_n$ is upper bounded by a constant under the assumption\n$\\beta_n\\rho\\leq1$. 
In the asymptotic setting when\n$n\\rightarrow\\infty$ and $J_n\\rightarrow\\infty$,\n(\\ref{eqn:peoracle}) implies that\n$L(\\Phi_n\\bolds{\\hat\\theta}_n)-\\min_{\\bolds{\\theta}\\in\\mathbb\n{R}^{J_n}}L(\\Phi_n\\bolds{\\theta})\\rightarrow^p\n0$ if (i)\n$|M_{\\rho\\lambda_n}(\\bolds{\\theta}^o)|\\lambda_n^2\/\\beta_n=o(1)$,\n(ii) $U_n^2\\log J_n\/n\\leq k_1$ and\n$|M_{\\rho\\lambda_n}(\\bolds{\\theta}^o)|\\leq\nk_2\\beta_n\\sqrt{n\/(U_n^2\\log J_n)}$ for some sufficiently small\npositive constants $k_1$ and $k_2$ and (iii)\n$\\lambda_n\\geq k_3\\max\\{1,\\eta_{1,n}+\\eta_{2,n}\\}\\sqrt{\\log J_n\/n}$ for\na sufficiently large\nconstant $k_3$, where $\\bolds{\\theta}^o\\in[\\bolds{\\theta}^*_n]$ (take $t=\\log J_n$).\n\n\\item[(2)] Below we briefly discuss Assumptions \\ref{apn:basis}--\\ref\n{apn:grammatrix_Topt}.\n\nAssumption \\ref{apn:basis} is very similar to assumption \\hyperlink{ap:basis}{(2)}\nin Theorem \\ref{thm:finaloracle} (which is used to prove the\nconcentration of the sample mean around the true mean), except that\n$U_n$ and $\\eta_{1,n}$ may increase as $n$ increases.\nThis relaxation allows the use of basis functions for which the sup\nnorm $\\max_j\\|\\phi_j\\|_\\infty$ is increasing in $n$ [e.g., the wavelet\nbasis used in example (4) of the simulation studies].\n\nAssumption \\ref{apn:grammatrix} is a generalization of condition (\\ref\n{eqn:grammatrix1}) [which has been discussed in remark (4) following\nTheorem \\ref{thm:finaloracle}]\nto the case where $J_n$ may increase in $n$ and the sparsity of a\nparameter is measured by the number of ``large'' components as\ndescribed at the beginning of this section. This condition is used to\navoid the collinearity problem. 
It is easy to see that when $\\rho=0$\nand $\\beta_n$ is fixed in $n$, this assumption simplifies to\ncondition~(\\ref{eqn:grammatrix1}).\n\nAssumption \\ref{apn:grammatrix_Topt} puts a strengthened constraint on\nthe linear model of the treatment effect part, as compared to\nAssumption \\ref{apn:grammatrix}.\nThis assumption, together with Assumption \\ref{apn:grammatrix}, is\nneeded in deriving the upper bound for the mean square error of the\ntreatment effect estimator. It is easy to verify that if $E[\\Phi_n^T\\Phi\n_n]$ is positive definite, then both Assumptions \\ref{apn:grammatrix} and \\ref\n{apn:grammatrix_Topt} hold.\nAlthough the result is about the treatment effect part, which is\nasymptotically independent of the main effect of $X$ (when $E[\\Phi\n_n^{(2)}(X,A)|X]=\\mathbf{0}$ a.s.), we still need Assumption \\ref\n{apn:grammatrix} to show that the cross product term\n$E_n[(\\Phi_n^{(1)}\\bolds{\\hat\\theta}{}^{(1)}_n-\\Phi\n^{(1)}_n\\bolds{\\theta}^{(1)})\n(\\Phi^{(2)}_n\\bolds{\\hat\\theta}{}^{(2)}_n-\\Phi^{(2)}_n\\bolds{\\theta}^{(2)})]$\nis upper bounded by a quantity converging to $0$ at the desired\nrate. We may use a really poor model for the main effect part\n$E(Q_0(X,A)|X)$ (e.g., $\\Phi^{(1)}_n\\equiv1$), and Assumption~\\ref{apn:grammatrix_Topt}\nimplies Assumption \\ref{apn:grammatrix} when $\\rho\n=0$. 
This poor model only affects the constants involved in the result.\nWhen the sample size is large (so that $\\lambda_n$ is\nsmall), the estimated ITR will be of high quality as long\nas $T_0$ is well approximated.\n\\end{longlist}\n\\end{Remarks*}\n\\begin{pf*}{Proof of Theorem \\ref{thm:peoracle}}\nFor any $\\bolds{\\theta}\\in\\Theta_n$, define the events\n\\begin{eqnarray*}\n\\Omega_1&=&\\bigcap_{j=1}^{J_n}\\biggl\\{\\frac{2(1+\\gamma)}{3}\\sigma_j\\leq\n\\hat\\sigma_j\\leq\\frac{2(2-\\gamma)}{3}\\sigma_j\\biggr\\} \\qquad\\mbox{[where }\\hat\n\\sigma_j\\triangleq(E_n\\phi_j^2)^{1\/2}\\mbox{]},\\\\\n\\Omega_2(\\bolds{\\theta})&=&\n\\biggl\\{\\max_{j,k=1,\\ldots,{J_n}}\\biggl|(E-E_n)\\biggl(\\frac{\\phi_j\\phi\n_k}{\\sigma_j\\sigma_k}\\biggr)\\biggr|\n\\leq\\frac{(1-2\\gamma)^2\\beta_n}{120|M_{\\rho\\lambda_n}(\\bolds{\\theta})|}\\biggr\\},\\\\[-1pt]\n\\Omega_3(\\bolds{\\theta})&=&\n\\biggl\\{\\max_{j=1,\\ldots,{J_n}}\\biggl|E_n\\biggl[(R-\\Phi_n\\bolds{\\theta})\\frac{\\phi_j}{\\sigma_j}\\biggr]\\biggr|\\leq\n\\frac{4\\gamma+1}{6}\\lambda_n\\biggr\\}.\n\\end{eqnarray*}\nThen there exists a\n$\\bolds{\\theta}^o\\in[\\bolds{\\theta}^*_n]$ such that\n\\begin{eqnarray*}\nL(\\Phi_n\\bolds{\\hat\\theta}_n)\n&=&L(\\Phi_n\\bolds{\\theta})\n+2E[(\\Phi_n\\bolds{\\theta}^o-\\Phi_n\\bolds{\\theta})\\Phi\n_n(\\bolds{\\theta}-\\bolds{\\hat\\theta}_n)]\n+E[\\Phi_n(\\bolds{\\hat\\theta}_n-\\bolds{\\theta})]^2\\\\[-1pt]\n&\\leq& L(\\Phi_n\\bolds{\\theta})+2\\max_{j=1,\\ldots,J_n}\\biggl|\nE\\biggl[\\Phi_n(\\bolds{\\theta}^o-\\bolds{\\theta})\\frac{\\phi\n_j}{\\sigma_j}\\biggr]\\biggr|\n\\Biggl(\\sum_{j=1}^{J_n}\\sigma_j|\\hat\\theta_{n,j}-\\theta_j|\\Biggr)\\\\[-1pt]\n&&{}+\nE[\\Phi_n(\\bolds{\\hat\\theta}_n-\\bolds{\\theta})]^2\\\\[-2pt]\n&\\leq& L(\\Phi_n\\bolds{\\theta})+2\\gamma\\lambda_n\n\\Biggl(\\sum_{j=1}^{J_n}\\sigma_j|\\hat\\theta_{n,j}-\\theta_j|\\Biggr)+\nE[\\Phi_n(\\bolds{\\hat\\theta}_n-\\bolds{\\theta})]^2,\n\\end{eqnarray*}\nwhere the first equality follows from 
the fact that\n$E[(R-\\Phi_n\\bolds{\\theta}^o)\\phi_j]=0$ for any\n$\\bolds{\\theta}^o\\in[\\bolds{\\theta}^*_n]$ for $j=1,\\ldots,\nJ_n$ and the last inequality follows from the definition\nof~$\\Theta_n^o$.\n\nBased on Lemma \\ref{lemma:oracle} below, we have that on the event\n$\\Omega_1\\cap\\Omega_2(\\bolds{\\theta})\\cap\\Omega_3(\\bolds{\\theta})$,\n\\[\nL(\\Phi_n\\bolds{\\hat\\theta}_n) \\leq\nL(\\Phi_n\\bolds{\\theta})+K_n\\frac{|M_{\\rho\\lambda_n}\n(\\bolds{\\theta})|}{\\beta_n}\\lambda_n^2.\\vspace{-2pt}\n\\]\n\nSimilarly, when $E[\\Phi^{(2)}_2(X,A)^T|X]=\\mathbf{0}$, by Lemma \\ref\n{lemma:oracle1}, we have that on the event\n$\\Omega_1\\cap\\Omega_2(\\bolds{\\theta})\\cap\\Omega_3(\\bolds{\\theta})$,\\vspace{-3pt}\n\\begin{eqnarray*}\nE\\bigl(\\Phi_n^{(2)}\\bolds{\\hat\\theta}{}^{(2)}_n-T_0\\bigr)^2\n&\\leq& E\\bigl(\\Phi_n^{(2)}\\bolds{\\theta}^{(2)}-T_0\\bigr)^2\n+2\\gamma\\lambda_n\n\\Biggl(\\sum_{j=J_n^{(1)}+1}^{J_n}\\sigma_j|\\hat\\theta_{n,j}-\\theta_j|\\Biggr)\\\\[-1pt]\n&&{} +\nE\\bigl[\\Phi_n^{(2)}\\bigl(\\bolds{\\hat\\theta}{}^{(2)}_n-\\bolds{\\theta}^{(2)}\\bigr)\\bigr]^2\\\\[-1pt]\n&\\leq&\nE\\bigl(\\Phi_n^{(2)}\\bolds{\\theta}^{(2)}-T_0\\bigr)^2+K'_n\\frac{|M_{\\rho\\lambda\n_n}^{(2)}\n(\\bolds{\\theta})|}{\\beta_n}\\lambda_n^2.\n\\end{eqnarray*}\n\nThe conclusion of the theorem follows from the union probability\nbounds of the events $\\Omega_1$, $\\Omega_2(\\bolds{\\theta})$ and\n$\\Omega_3(\\bolds{\\theta})$ provided in Lemmas\n\\ref{lemma:omega1}, \\ref{lemma:omega2} and \\ref{lemma:omega3}.\\vspace{-5pt}\n\\end{pf*}\n\nBelow we state the lemmas used in the proof of Theorem \\ref{thm:peoracle}.\nThe proofs of the lemmas are given in Section S.4 of the supplemental\narticle \\cite{supplement}.\n\\begin{lemma}\\label{lemma:oracle}\nSuppose Assumption \\ref{apn:grammatrix} holds with $\\rho\\beta_n\\leq\n1$. 
Then for any $\\bolds{\\theta}\\in\\Theta_n$, on the event\n$\\Omega_1\\cap\\Omega_2(\\bolds{\\theta})\\cap\\Omega_3(\\bolds{\\theta})$,\nwe have\n\\begin{equation}\\label{eqn:hatthetabound}\n\\sum_{j=1}^{J_n}\\sigma_j|\\hat\\theta_{n,j}-\\theta_j|\\leq\n\\frac{20(12\\rho\\beta_n+2\\gamma+5)}{(1-2\\gamma)(19+2\\gamma)\\beta_n}\n|M_{\\rho\\lambda_n}(\\bolds{\\theta})|\\lambda_n\n\\end{equation}\nand\n\\begin{equation}\\label{eqn:thetariskbound}\nE[\\Phi_n(\\bolds{\\hat\\theta}_n-\\bolds{\\theta})]^2\\leq\n\\frac{130(12\\rho\\beta_n+2\\gamma+5)^2\n}{9(19+2\\gamma)^2\\beta_n}|M_{\\rho\\lambda_n}(\\bolds{\\theta})|\\lambda_n^2.\n\\end{equation}\n\\end{lemma}\n\\begin{Remark*}\nThis lemma implies that $\\bolds{\\hat\\theta}_n$ is close to each\n$\\bolds{\\theta}\\in\\Theta_n$ on the event\n$\\Omega_1\\cap\\Omega_2(\\bolds{\\theta})\\cap\\Omega_3(\\bolds{\\theta})$.\nThe intuition is as follows. Since $\\bolds{\\hat\\theta}_n$\nminimizes~(\\ref{eqn:thetahat}), the first-order conditions imply\nthat\n$\\max_j|E_n(R-\\Phi_n\\bolds{\\hat\\theta}_n)\\phi_j\/\\hat\\sigma_j|\\leq\n\\lambda_n\/2$.\nA similar property holds for $\\bolds{\\theta}$ on the event\n$\\Omega_1\\cap\\Omega_3(\\bolds{\\theta})$. Assumption\n\\ref{apn:grammatrix} together with the event\n$\\Omega_2(\\bolds{\\theta})$ ensures that there is no\ncollinearity in the $n\\times J_n$ design matrix\n$(\\Phi_n(X_i,A_i))_{i=1}^n$. These two aspects guarantee the\ncloseness of $\\bolds{\\hat\\theta}_n$ to~$\\bolds{\\theta}$.\n\\end{Remark*}\n\\begin{lemma}\\label{lemma:oracle1}\nSuppose $E[\\Phi_n^{(2)}(X,A)^T|X]=\\mathbf{0}$ a.s. 
and\nAssumptions \\ref{apn:grammatrix} and \\ref{apn:grammatrix_Topt} hold\nwith $\\rho\\beta_n\\leq1$.\nThen for any $\\bolds{\\theta}\\in\\Theta_n$, on the event\n$\\Omega_1\\cap\\Omega_2(\\bolds{\\theta})\\cap\\Omega_3(\\bolds{\\theta})$,\nwe have\n\\begin{equation}\n\\label{eqn:hatthetabound1}\n\\sum_{j=J_n^{(1)}+1}^{J_n}\\sigma_j|\\hat\\theta_{n,j}-\\theta_j|\\leq\n\\frac{10(12\\beta_n\\rho+2\\gamma+5)}{(1-2\\gamma)(7-6\\beta_n\\rho)\\beta\n_n}\\bigl|M_{\\rho\\lambda_n}^{(2)}\n(\\bolds{\\theta})\\bigr|\\lambda_n\n\\end{equation}\nand\n\\begin{eqnarray}\\label{eqn:thetariskbound1}\n\\hspace*{28pt}&&E\\bigl[\\Phi_n^{(2)}\\bigl(\\bolds{\\hat\\theta}{}^{(2)}_n-\\bolds{\\theta}^{(2)}\\bigr)\\bigr]^2\n\\nonumber\\\\[-8pt]\\\\[-8pt]\n\\hspace*{28pt}&&\\qquad\\leq\n\\frac{20(12\\rho\\beta_n+2\\gamma+5)[3(1-2\\gamma)\\beta_n\\rho+10(2\\gamma+5)]\n}{9(2\\gamma+19)^2\\beta_n}\\bigl|M_{\\rho\\lambda_n}^{(2)}(\\bolds{\\theta})\\bigr|\\lambda_n^2.\\nonumber\n\\end{eqnarray}\n\\end{lemma}\n\\begin{lemma}\\label{lemma:omega1}\nSuppose Assumption \\ref{apn:basis}\\textup{(a)} and inequality\n(\\ref{eqn:jncondition}) hold. Then $\\mathbf{P}(\\Omega_1^C)\\leq\\exp\n(-k'_nn)$, where $k'_n = 13(1-2\\gamma)^2\/[6(27U_n^2-10\\gamma-22)]$.\n\\end{lemma}\n\\begin{lemma}\\label{lemma:omega2}\nSuppose Assumption \\ref{apn:basis}\\textup{(a)} holds. 
Then for any\n$t>0$ and $\\bolds{\\theta}\\in\\Theta_n$,\n$\\mathbf{P}(\\{\\Omega_2(\\bolds{\\theta})\\}^C)\\leq\\exp(-t)\/3$.\n\\end{lemma}\n\\begin{lemma}\\label{lemma:omega3}\nSuppose Assumptions \\ref{apn:errorterm} and \\ref{apn:basis} hold.\nFor any $t>0$, if $\\lambda_n$\nsatisfies condition (\\ref{eqn:lambdacondition}), then for any\n$\\bolds{\\theta}\\in\\Theta_n$, we have\n$\\mathbf{P}(\\{\\Omega_3(\\bolds{\\theta})\\}^C)\\leq2\\exp(-t)\/3$.\n\\end{lemma}\n\n\n\\subsection{\\texorpdfstring{Design of simulations in Section \\lowercase{\\protect\\ref{sec:simulation}}}\n{Design of simulations in Section 5.1}}\n\\label{sec:simdesign}\n\nIn this section, we present the detailed simulation design of the\nexamples used in Section \\ref{sec:simulation}. These examples\nsatisfy all assumptions listed in the theorems [this is easy to verify\nfor examples (1)--(3); validity of the assumptions for\nexample (4) is addressed in the remark after example (4)]. In addition,\n$\\Theta_n$ defined in (\\ref{eqn:oracleset}) is nonempty as long as\n$n$ is sufficiently large (note that the constants involved in\n$\\Theta_n$ can be improved and are not that meaningful; we focused\non a presentable result instead of finding the best constants).\n\nIn examples (1)--(3),\n$X=(X_1,\\ldots,X_5)$ is uniformly distributed on $[-1,1]^5$. The\ntreatment $A$ is then generated independently of $X$ uniformly from $\\{\n-1,1\\}$. Given $X$ and $A$, the response $R$ is\ngenerated from a normal distribution with mean\n$Q_0(X,A)=1+2X_1+X_2+0.5X_3 + T_0(X,A)$ and variance~$1$. We consider\nthe following three examples for\n$T_0$:\n\\begin{longlist}[(1)]\n\\item[(1)]$T_0(X,A) = 0$ (i.e., there is no treatment\neffect).\n\\item[(2)] $T_0(X,A) = 0.424(1-X_1-X_2)A$.\n\\item[(3)]$T_0(X,A) = 0.446 \\operatorname{sign}(X_1)(1-X_1)^2A$.\n\\end{longlist}\nNote that in each example $T_0(X,A)$ is equal to the treatment effect\nterm, $ Q_0(X,A)-E[Q_0(X,A)|X]$. 
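For concreteness, the data-generating process of examples (1)--(3) can be sketched as follows; this is an illustration only (the function name is ours), with example (2) used for $T_0$:

```python
import numpy as np

def generate_example_2(n, rng):
    """Draw n i.i.d. triples (X, A, R): X ~ Uniform[-1,1]^5,
    A ~ Uniform{-1,1} independent of X, and R | (X, A) normal with
    mean Q_0(X,A) = 1 + 2 X_1 + X_2 + 0.5 X_3 + T_0(X,A) and variance 1,
    where T_0 is the treatment-effect term of example (2)."""
    X = rng.uniform(-1.0, 1.0, size=(n, 5))
    A = rng.choice([-1.0, 1.0], size=n)
    T0 = 0.424 * (1.0 - X[:, 0] - X[:, 1]) * A
    Q0 = 1.0 + 2.0 * X[:, 0] + X[:, 1] + 0.5 * X[:, 2] + T0
    R = Q0 + rng.standard_normal(n)
    return X, A, R

X, A, R = generate_example_2(100000, np.random.default_rng(0))
```

Since $A$ has mean zero and is independent of $X$, $E[T_0(X,A)|X]=0$, so $T_0$ coincides with the treatment effect term $Q_0(X,A)-E[Q_0(X,A)|X]$.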
We approximate $Q_0$ by $\\mathcal{Q}=\\{\n(1, X, A, XA)\\bolds{\\theta}\\dvtx\\break\\bolds{\\theta}\\in\\mathbb{R}^{12}\\}$.\nThus, in examples (1) and (2) the treatment effect term $T_0$ is correctly modeled,\nwhile in example (3) the treatment effect term $T_0$ is misspecified.\n\nThe parameters in examples (2) and (3) are chosen to reflect a medium\neffect size according to Cohen's d index. When there are two\ntreatments, the Cohen's d effect size index is defined as the\nstandardized\ndifference in mean responses between two treatment groups, that is,\n\\[\n\\mathrm{es}=\\frac{E(R|A=1)-E(R|A=-1)}{([\\operatorname{Var}(R|A=1)+\\operatorname{Var}(R|A=-1)]\/2)^{1\/2}}.\n\\]\nCohen \\cite{cohen1988} tentatively defined the effect size as ``small''\nif the Cohen's d index is $0.2$, ``medium'' if the index is $0.5$\nand ``large'' if the index is $0.8$.\n\nIn example (4), $X$ is uniformly distributed on $[0,1]$. Treatment $A$\nis generated independently of $X$ uniformly from $\\{-1,1\\}$. The\nresponse $R$ is generated from a normal distribution\nwith mean $Q_0(X,A)$ and variance $1$, where $Q_0(X,1)\n=\\sum_{j=1}^8\\vartheta_{(1),j}1_{X1$ is an integer. If $X_t^i = (\\oplus, k)$, this means that the pool pump is on at time $t$, and has remained on for the past $k$ time units. In this paper we take the same state space, but with a new interpretation of each state.\n\n\n\\begin{figure}[h]\n\\Ebox{.75}{pppDynamicsRS.pdf} \n\\vspace{-2.5ex}\n\\caption{State transition diagram for pool model.}\n\\label{f:ppp}\n\\vspace{-1.25ex}\n\\end{figure} \n\n \n \nThe controlled transition matrix is of the form,\n\\begin{equation}\nP_\\zeta = (1-\\delta)I + \\delta {\\check{P}}_\\zeta\n\\label{e:poolP}\n\\end{equation}\nin which ${\\check{P}}_\\zeta$ is the transition matrix used in \\cite{meybarbusyueehr15}, and $\\delta\\in (0,1)$. At each time $t$, a weighted coin is flipped with probability of heads equal to $\\delta$. If the outcome is a tail, then the state does not change. 
Otherwise, a transition is made from the current state $x$ to a new state $x^+$ with probability $ {\\check{P}}_{\\zeta_t}(x,x^+)$. \n\nA state transition diagram is shown in \\Fig{f:ppp}. The state transition diagram for ${\\check{P}}_\\zeta$ is identical, except that the self-loops are absent.\n\n\n\n\\begin{figure}[h]\n\\Ebox{.95}{Tracking_20pools-lr.pdf} \n\\vspace{-2.5ex}\n\\caption{The deviation in power consumption tracks well even with only 20 pools engaged.}\n\\label{f:20pools}\n\\vspace{-1.25ex}\n\\end{figure} \n\n\nThe motivation comes from the conflicting needs of the grid and the load: a single load turns on or off only a few times per day, yet the grid operator wishes to send a signal far more frequently -- in this example, every 5 minutes. If the sampling increments for each load were taken to be 5 minutes, then it would be necessary to take ${\\cal I}$ very large in the approach of \\cite{meybarbusyueehr15}.\n\n\n\nIn this paper ${\\check{P}}_\\zeta$ is obtained using the optimal-control approach of \\cite{meybarbusyueehr15}; we take ${\\cal I}=48$, and hence $d=|{\\mathchoice{\\hbox{\\sf X}}\\sfX{\\hbox{\\scriptsize\\sf X}}\\smallsfX}|=96$. It is assumed that $\\delta=1\/6$, so that the pool state changes every 30 minutes on average. In \\cite{meybarbusyueehr15} it is shown that the transition matrix has desirable properties for control: the linearized dynamics are minimum phase, with positive DC gain. Hence, for example, \na persistent positive value of $\\zeta_t$ will lead to an increase in aggregate power consumption.\n\n\nFor the sake of illustration,\n\\Fig{f:20pools} shows tracking performance of this scheme with only \\textit{twenty pools}. Each pool is assumed to consume 1~kW when operating, and each has a 12-hour cleaning cycle. The grid operator uses PI compensation to determine $\\bfmath{\\zeta}$ (see \\Section{s:hetero}\n for details). With such a small number of loads it is not surprising to see some evidence of quantization. 
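The lazy-chain construction \\eqref{e:poolP} and the associated sampling step can be sketched as follows; the $3\\times 3$ cycle below is a toy stand-in for the optimal-control design of ${\\check{P}}_\\zeta$, not the matrix used in the paper:

```python
import numpy as np

def lazy_transition_matrix(P_check, delta):
    """P_zeta = (1 - delta) I + delta * P_check: with probability 1 - delta
    the coin flip freezes the state; otherwise a transition is drawn from
    P_check (whose self-loops are absent)."""
    return (1.0 - delta) * np.eye(P_check.shape[0]) + delta * P_check

def sample_next_state(P, x, rng):
    """Draw X_{t+1} given X_t = x under the transition matrix P."""
    return rng.choice(P.shape[0], p=P[x])

# Toy 3-state cycle standing in for the designed matrix P_check.
P_check = np.array([[0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0],
                    [1.0, 0.0, 0.0]])
P = lazy_transition_matrix(P_check, delta=1.0 / 6.0)
```

With $\\delta=1\/6$ the expected holding time in each state is $1\/\\delta=6$ sampling intervals, i.e., 30 minutes at 5-minute sampling, matching the value assumed here.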
For 100 loads or more, and the reference scaled proportionately, the tracking is nearly perfect. \n\n\n\n\n\nTwo QoS metrics have been considered for this model. First is `chattering' -- a large number of switches from on to off. A large value means poor QoS, but this is already addressed through design of the controlled transition matrix \\cite{meybarbusyueehr15}. The design of $P_\\zeta$ also helps to enforce upper and lower bounds on the duration of cleaning each day.\n\nA second metric is total cleaning over a time horizon of one week or more. This is the QoS metric considered in \\cite{chebusmey14}. \nIn this paper we consider a discounted version:\nWe assume that $P_0$ has a unique invariant pmf $\\pi_0$.\nWith $\\ell\\colon{\\mathchoice{\\hbox{\\sf X}}\\sfX{\\hbox{\\scriptsize\\sf X}}\\smallsfX}\\to\\field{R}$ a given function with zero steady-state mean, $\\sum_x \\pi_0(x) \\ell(x) =0$, we define for each $i$ and $t$,\n\\begin{equation}\n{\\cal L}^i_t = \\sum_{k=0}^t \\beta^{t-k} \\ell(X^i_k)\n\\label{e:QoS15}\n\\end{equation} \nwith $\\beta\\in (0,1]$ a constant.\nThe function $\\ell(x) = \\field{I}(m=\\oplus) - \\field{I}(m=\\ominus)$ was used in \\cite{chebusmey14} in the case of a 12 hour cleaning cycle (recall the notation $x= (m,k) $).\n\n\nExperimental results surveyed in \\Section{s:num} demonstrate that it is possible to estimate functionals of the state process such as the QoS metric $\\{{\\cal L}^i_t\\}$, \neven if the observations are subject to significant measurement noise.\n\n\n\n\n\n\n\n\\begin{figure}\n\\Ebox{.7}{ObsGramian_full-lr.pdf} \n\\vspace{-2.5ex}\n\\caption{Eigenvalues for the observability Grammian for the pool model in two cases: The magnitude of eigenvalues decays rapidly for a typical sample-path of $\\bfmath{\\zeta}$, and for the LTI model obtained with $\\bfmath{\\zeta}\\equiv 0$.}\n\\label{f:obsGramm}\n\\vspace{-1ex}\n\\end{figure} \n \n\n\nIn anticipation of the results to come we ask, \\textit{what would linear systems 
theory predict with respect to state estimation performance?} The observability Grammian associated with \\eqref{e:StateForEst} was computed for typical sample paths $\\{A_t = P_{\\zeta_t}^{\\hbox{\\it\\tiny T}} : 1\\le t \\le 2016\\}$, where the value $2016$ corresponds to one week, and $\\bfmath{\\zeta}$ was scaled to lie between $\\pm \\half$. Its rank was found to be approximately 40, while the maximal rank is 96 (the dimension of the state). With $\\zeta_t\\equiv 0$ the system is time-invariant. In this case the rank of the observability Grammian coincides with the rank of the observability matrix, which was found to be 23. \n\n\n\\spm{new:}\nHowever, these values were obtained using the ``rank'' command in Matlab, which relies on finite numerical precision. A plot of the magnitude of the eigenvalues for the two observability Grammians shown in \\Fig{f:obsGramm} suggests that both matrices are full rank. \n\n\\spm{new:}\nFurther analysis establishes that the LTI model obtained with $\\bfmath{\\zeta}\\equiv 0$ \\textit{cannot be observable}, due to a particular symmetry found in this example. A general result given in \\Prop{t:symNot} implies that $\\lambda^0_i=0$ for $50\\le i\\le 96$.\n\n\n\n\n\\section{Kalman Filter Equations}\n\\label{s:Kalman}\n\nThe second order statistics for the disturbances appearing in the linear model (\\ref{e:StateForEst}, \\ref{e:StateForEst-i}, \\ref{e:obsPhi}) are derived here. 
These expressions are used\nto construct a Kalman filter that generates approximations for the conditional mean and covariance\n\\eqref{e:meanW-e:SigPhi}.\nOther statistics of interest are,\n\\[\n\\begin{aligned}\n\\widehat \\Phi_t^i &= {\\sf E}[\\Phi^i_t | {\\cal Y}_t] \\,, \\quad\n \\Sigma^i_t = {\\sf E}[{\\widetilde \\Phi}^i_t({\\widetilde \\Phi}^i_t)^{\\hbox{\\it\\tiny T}} | {\\cal Y}_t] \n\\\\[.1cm] \n\\widehat \\Phi_{t+1\\mid t}^i &= {\\sf E}[\\Phi^i_{t+1} | {\\cal Y}_t] \\,, \\quad\n \\Sigma^i_{t+1\\mid t} = {\\sf E}[{\\widetilde \\Phi}^i_{t+1}({\\widetilde \\Phi}^i_{t+1})^{\\hbox{\\it\\tiny T}} | {\\cal Y}_t] \n\\\\[.1cm] \n\\widehat \\Phi_{t+1\\mid t} &= {\\sf E}[\\Phi_{t+1} | {\\cal Y}_t]\\,, \\quad\n \\Sigma_{t+1\\mid t} = {\\sf E}[{\\widetilde \\Phi}_{t+1}({\\widetilde \\Phi}_{t+1})^{\\hbox{\\it\\tiny T}} | {\\cal Y}_t] \n\\end{aligned}\n\\]\nwhere again tildes represent deviations, such as ${\\widetilde \\Phi}^i_t = \\Phi^i_t -\\widehat \\Phi^i_t$.\n\n\n\n \n\n \\Prop{t:EstIndExchangeable} states that some statistics of the individual can be expressed in terms of those of the population:\nIt is \\textit{not} the case that $\\Sigma_t^i=N \\Sigma_t $,\nsince $\\{\\Phi^i_t : 1\\le i\\le N\\}$ are correlated. \n\n \n \n \n\\begin{proposition}\n\\label{t:EstIndExchangeable}\nFor each $s$, $t$, $i$, and any set $S\\subset\\field{R}^d$, the conditional probability is independent of $i$:\n \\begin{equation}\n{\\sf P}\\{ \\Phi^i_s\\in S \\mid {\\cal Y}_t\\} = {\\sf P}\\{ \\Phi^1_s\\in S \\mid {\\cal Y}_t\\} \n\\label{e:EstIndExchangeable}\n\\end{equation}\nand consequently, $\n{\\sf E}[ \\Phi^i_s \\mid {\\cal Y}_t] = {\\sf E}[ \\Phi_s \\mid {\\cal Y}_t] $. 
\nMoreover, \n$\\widehat \\Phi_{t+1\\mid t}^i = A_t \\widehat \\Phi_t$, the state covariances for the individual are\n\\[\n\\begin{aligned}\n \\Sigma_{t}^i & = \\diag(\\widehat \\Phi_t) - \\widehat \\Phi_t \\widehat \\Phi_t^{\\hbox{\\it\\tiny T}}\n\\\\\n \\Sigma_{t+1\\mid t}^i & = \\diag(\\widehat \\Phi_{t+1\\mid t}) - \\widehat \\Phi_{t+1\\mid t} \\widehat \\Phi_{t+1\\mid t}^{\\hbox{\\it\\tiny T}},\n\\end{aligned}\n \\]\n and the cross covariances can be expressed, \n\\[\n\\begin{aligned}\n {\\sf E}\\bigl[ {\\widetilde \\Phi}^i_t ( {\\widetilde \\Phi}_t ) ^{\\hbox{\\it\\tiny T}} \\mid {\\cal Y}_t\\bigr] &= \\Sigma_t\n \\\\\n {\\sf E}\\bigl[ {\\widetilde \\Phi}^i_{t+1\\mid t} ( {\\widetilde \\Phi}_{t+1\\mid t} ) ^{\\hbox{\\it\\tiny T}} \\mid {\\cal Y}_t\\bigr] &= \\Sigma_{t+1\\mid t} \\,, \\quad 1\\le i\\le N.\n\\end{aligned}\n \\]\n\\end{proposition}\n\n\\begin{proof}\nThe proof of \\eqref{e:EstIndExchangeable}\n follows from the symmetry and independence conditions imposed in (A1--A4). The remaining results follow from this, and the fact that $\\Phi^i_t$ has binary entries [in particular, $\\Phi^i_t(\\Phi^i_t)^{\\hbox{\\it\\tiny T}} = \\diag(\\Phi^i_t)$].\n\\end{proof}\n\n\n\n\n\nRecall from the introduction that two formulations of the Kalman filter \nhave been considered in this research. \nFor a conditionally Gaussian model, the Kalman filter equations require the conditional covariances for the state noise,\n\\begin{equation}\n\\!\\!\\!\\!\n\\Sigma^{W^i}_t={\\sf E}[W^i_{t+1}(W_{t+1}^i)^{\\hbox{\\it\\tiny T}} \\mid {\\cal Y}_t] \\,,\n\\ \\\n\\Sigma^W_t={\\sf E}[W_{t+1}W_{t+1}^{\\hbox{\\it\\tiny T}} \\mid {\\cal Y}_t]\n\\label{e:NoiseCondCov}\n\\end{equation}\nand also the conditional covariance of the measurement noise,\n\\begin{equation}\n\\Sigma^V_t={\\sf E}[V_t^2 \\mid {\\cal Y}_{t-1}] \n\\label{e:NoiseObsCondCov}\n\\end{equation}\nFormulae for the state noise covariances can be obtained in full generality. 
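The covariance formulae in \\Prop{t:EstIndExchangeable} are instances of an elementary identity: a one-hot indicator vector with mean $p$ has covariance $\\diag(p)-pp^{\\hbox{\\it\\tiny T}}$, since $\\Phi\\Phi^{\\hbox{\\it\\tiny T}}=\\diag(\\Phi)$. A short sketch (illustrative only) verifies this by exact enumeration:

```python
import numpy as np

def one_hot_covariance(p):
    """Covariance of a one-hot vector Phi with P(Phi = e_j) = p_j:
    E[Phi Phi^T] = diag(p) because Phi Phi^T = diag(Phi), hence
    Cov(Phi) = diag(p) - p p^T, the form appearing in the proposition."""
    p = np.asarray(p, dtype=float)
    return np.diag(p) - np.outer(p, p)

def enumerated_covariance(p):
    """The same covariance computed by enumerating all one-hot outcomes."""
    p = np.asarray(p, dtype=float)
    d = len(p)
    cov = np.zeros((d, d))
    for j in range(d):
        e = np.zeros(d)
        e[j] = 1.0
        cov += p[j] * np.outer(e - p, e - p)
    return cov

p = np.array([0.5, 0.25, 0.25])
assert np.allclose(one_hot_covariance(p), enumerated_covariance(p))
```

Applying the same identity to the prediction $\\widehat \\Phi_{t+1\\mid t}$ gives the expression for $\\Sigma_{t+1\\mid t}^i$.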
We require the distribution of the random vector $\\gamma_t$ introduced in A4 to obtain a formula for $\\Sigma^V_t$. \n\nThe Kalman filter that generates $L_2$-optimal estimates over all \\textit{linear functions of the observations} uses instead the (unconditional) covariance matrices,\n\\begin{equation}\n\\overline{\\Sigma}^{W^i}_t={\\sf E}[W^i_{t+1}(W_{t+1}^i)^{\\hbox{\\it\\tiny T}}],\n\\quad\n\\overline{\\Sigma}^W_t={\\sf E}[W_{t+1}W_{t+1}^{\\hbox{\\it\\tiny T}} ],\n\\label{e:NoisCov}\n\\end{equation}\nand $\n\\overline{\\Sigma}^{V}_t={\\sf E}[V_t^2]$ (the notation used in standard texts is $Q_t =\\overline{\\Sigma}^W_t $ and $R_t=\\overline{\\Sigma}^{V}_t$ \\cite{cai88}). \nWe show in \\Prop{t:NoiseCondCov} that the two covariance matrices in \\eqref{e:NoiseCondCov} are linear functions of the \\textit{true conditional mean} $\\widehat \\Phi_t$. Expressions for the two covariance matrices in \\eqref{e:NoisCov} follow from \\Prop{t:NoiseCondCov} and the smoothing property of conditional expectation, provided we can compute ${\\sf E}[\\widehat \\Phi_t]={\\sf E}[\\Phi_t]$.\nThe formula we obtain for $\\Sigma^{V}_t$ in \\Prop{t:NoiseObsCondCov} is a linear function of the conditional covariance matrices $ \\Sigma^i_{t+1\\mid t} $ and $ \\Sigma_{t+1\\mid t} $. It is unlikely that we can obtain a formula for the means of these covariance matrices, and hence we do not expect to obtain an exact formula for $\\overline{\\Sigma}^{V}_t$. \n\n \n\n\n\n\n\\subsection{State noise covariance}\n\\label{s:KFstate}\n\n \n The following result provides formulae for the conditional covariances for the state noise \\eqref{e:NoiseCondCov} as a function of the conditional mean $\\widehat \\Phi_t$. 
\n\\begin{proposition}\n\\label{t:NoiseCondCov}\nUnder Assumptions~A1--A4,\n\\begin{eqnarray} \n\\Sigma^W_t &=&\n \\frac{1}{N}\\Bigl( \\diag(A_t \\widehat \\Phi_t) - A_t \\diag(\\widehat \\Phi_t) A_t^{\\hbox{\\it\\tiny T}} \\Bigr)\n \\label{e:NoiseCondCovForm}\n \\\\\n\\Sigma^{W^i}_t &=&\n \\diag(A_t \\widehat \\Phi_t^i) - A_t \\diag(\\widehat \\Phi_t^i) A_t^{\\hbox{\\it\\tiny T}} \n\\label{e:NoiseCondCovForm-i}\n\\end{eqnarray}\nThe second covariance is independent of $i$, with common value $\\Sigma^{W^i}_t = N\\Sigma^W_t$.\n\\end{proposition}\n\n\n\\medbreak\n\n\\begin{proof}\nSince\n$\\{ W_t^i : 1\\le i\\le N\\}$ is uncorrelated, \n\\begin{equation}\n\\Sigma^W_t=\n \\frac{1}{N^2}\\sum_{i=1}^N \\Sigma^{W^i}_t \n \\label{e:SigmaWavg}\n\\end{equation}\nMoreover, \\Prop{t:EstIndExchangeable} gives $\\widehat \\Phi_t^i=\\widehat \\Phi_t$, and from this or \\eqref{e:Phi-sum} it is obvious that\n\\[\n\\widehat \\Phi_t = \\frac{1}{N} \\sum_{i=1}^N \\widehat \\Phi_t^i \n\\]\nConsequently, \\eqref{e:NoiseCondCovForm} follows from \\eqref{e:NoiseCondCovForm-i}.\n\nThe derivation of the formula \\eqref{e:NoiseCondCovForm-i} for $\\Sigma^{W^i}_t $ is similar to the Kalman filter construction in \\cite{lipkrirub84}.\nGiven the larger sigma-field, \n\\[\n{\\cal Y}_t^+ = \\sigma\\{\\Phi_r^i, Y_r, \\zeta_r, A_r : r\\le t,\\ i\\le N\\}\n\\]\nthe smoothing property of conditional expectation implies,\n\\[\n\\Sigma^{W^i}_t = {\\sf E}\\bigl[ {\\sf E}[W_{t+1}^i (W_{t+1}^i)^{\\hbox{\\it\\tiny T}} \\mid {\\cal Y}_t^+] \\mid {\\cal Y}_t\\bigr]\n\\] \nThe inner conditional expectation is transformed using the definition \\eqref{e:Wi}:\n\\[\n\\begin{aligned}\n{\\sf E}[W^i_{t+1}(W_{t+1}^i)^{\\hbox{\\it\\tiny T}} &\\mid {\\cal Y}_t^+] \n\\\\\n&= \n{\\sf E}[ \\Phi_{t+1}^i (\\Phi_{t+1}^i)^{\\hbox{\\it\\tiny T}} \\mid {\\cal Y}_t^+] \n\\\\\n&\\qquad - {\\sf E}[ \\Phi_{t+1}^i \\mid {\\cal Y}_t^+] {\\sf E}[ \\Phi_{t+1}^i \\mid {\\cal Y}_t^+]^{\\hbox{\\it\\tiny T}}\n\\\\\n&= \\diag(A_t \\Phi_t^i) 
\n\\\\\n&\\qquad - A_t \\diag( \\Phi_t^i) A_t^{\\hbox{\\it\\tiny T}}\n\\end{aligned}\n\\]\nwhere the final equation uses\n$ {\\sf E}[ \\Phi_{t+1}^i \\mid {\\cal Y}_t^+] =A_t \\Phi_t^i $, and the fact that $\\Phi_r^i$ has binary entries for each $i$ and $r$.\n\nTaking the conditional expectation given ${\\cal Y}_t$ gives \\eqref{e:NoiseCondCovForm-i}. \n\\end{proof}\n\n\n \n\n\n\n\\subsection{Sampling and observation covariance}\n\\label{s:samplingCov}\n\nThe observation model used in the numerical experiments that follow is based on random sampling of loads: An integer $n \\Theta^{\\text{max}}\\\\\n m^i_t, & \\quad \\text{otherwise}\n \\end{array} \\right.\n\\] \nThe temperature is modeled as a linear system driven by white noise:\n \\[\n \\theta^i_{t+1} = a^i \\theta^i_t + (1-a^i)(\\theta^0_t - m^i_t R^i \\varrho^{\\text{etr}}) + \\eta^i_t,\n \\]\nin which $0 0.\\label{S}\n\\end{eqnarray}\nThe large time asymptotics of the solution of the corresponding FP equation has been given in \\cite{AY} \n\\begin{eqnarray}\nP_g(x,y)=\\exp{\\left(-\\sigma_R (x^2+2 r x y + (1+2 r^2) y^2)\\right)},\\;\\;\\;\\; r=\\frac{\\sigma_R}{\\sigma_I}, \\label{P}\n\\end{eqnarray}\n\\begin{eqnarray}\n\\int_{R^2} P_g(x,y)=\\frac{\\pi}{\\sigma_R\\sqrt{1+r^2}} , \\nonumber \n\\end{eqnarray}\nand thoroughly analysed in the literature \\cite{DH,HP}.\n\nTo see the validity of (\\ref{B1}), e.g. 
for polynomial observables, consider the generating function\n\begin{eqnarray}\nG_{LHS}(t)= \frac{\int_{-\infty}^{\infty} e^{t x} e^{-S_g(x)} dx}{\int_{-\infty}^{\infty} e^{-S_g(x)} dx } =\exp{\left(\frac{t^2}{2\sigma}\right)}, \label{B2}\n\end{eqnarray}\nand the average from the RHS of (\ref{B1}) \n\begin{eqnarray}\nG_{RHS}(t)=\frac{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} dx dy e^{ t (x + i y) } P_g(x,y)}{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} dx dy P_g(x,y)}, \label{B5}\n\end{eqnarray}\nwhich indeed agrees with (\ref{B2}).\n\nSummarising: the complex Langevin approach can in principle be used to perform simulations with ``complex distributions\". \nHowever, in practice, extending \nthe stochastic process into the complex plane encounters difficulties. Asymptotic solutions of the two dimensional Fokker-Planck equation\nare generally not known and cannot be simply constructed from the complex action. Moreover, the random walk often wanders far \ninto the imaginary direction and may run away or converge to the wrong answer. \n\n\n\section{Generalization}\nOn the other hand, we do not really need to generate the positive two dimensional distribution with a stochastic process in the complex plane.\nThe only real problem is to find a positive distribution which satisfies (\ref{B1}). Given $P(x,y)$, one can generate it with other\nmethods. \n\nTherefore, we propose to avoid the difficulties of the complex random walk and concentrate instead on constructing $P(x,y)$ directly, using\neq.~(\ref{B1}) as a guide.
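Before carrying this construction out in general, the gaussian benchmark serves as a concrete test of (\ref{B1}): the agreement of $G_{RHS}$ with (\ref{B2}) is easy to confirm by direct numerical quadrature. A minimal sketch, in which the values $\sigma_R=\sigma_I=1$, $t=0.7$ and the grid parameters are arbitrary illustrative choices:

```python
import numpy as np

# Check G_RHS(t) = G_LHS(t) = exp(t^2/(2*sigma)) for the gaussian model.
# sigma_R = sigma_I = 1 and t = 0.7 are arbitrary illustrative choices.
sigma_R, sigma_I = 1.0, 1.0
sigma = sigma_R + 1j * sigma_I
r = sigma_R / sigma_I

# Tabulate P_g(x,y) on a grid wide enough that the gaussian tails are negligible.
L, n = 10.0, 1201
x = np.linspace(-L, L, n)
X, Y = np.meshgrid(x, x, indexing="ij")
P_g = np.exp(-sigma_R * (X**2 + 2*r*X*Y + (1 + 2*r**2) * Y**2))

t = 0.7
G_rhs = np.sum(np.exp(t * (X + 1j*Y)) * P_g) / np.sum(P_g)  # spacing cancels in the ratio
G_lhs = np.exp(t**2 / (2*sigma))
print(G_rhs, G_lhs)  # the two generating functions agree to numerical precision
```

The same check works for any $\sigma$ with $\sigma_R>0$, and moments follow by differentiating in $t$.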
To construct $P(x,y)$ directly, rewrite the RHS of (\ref{B1}) in terms of the holomorphic and antiholomorphic variables\n\begin{eqnarray}\n\;\;\;z=x+iy,\;\;\;\bar{z}=x-iy, \label{xy}\n\end{eqnarray}\n\begin{eqnarray}\n \frac{\int_R \int_R f(x+i y) P(x,y) dx dy }{\int_R \int_R P(x,y) dx dy}=\frac{\int_{\Gamma_z} \int_{\Gamma_{\bar{z}}} f(z) P(z,\bar{z}) dz d\bar{z} }{\int_{\Gamma_z} \int_{\Gamma_{\bar{z}}} P(z,\bar{z}) dz d\bar{z}}. \label{Pz}\n\end{eqnarray}\nNow analytically continue the complex density on the LHS of (\ref{B1}) from the real axis into the complex plane\n\begin{eqnarray}\n\rho(x)=e^{-S(x)} \longrightarrow \rho(z), \nonumber\n\end{eqnarray}\nrotate the contour of integration on the LHS of (\ref{B1}), $R\rightarrow \Gamma_z$, and then seek to satisfy the relation\n\begin{eqnarray}\n \frac{\int_{\Gamma_z} f(z) \rho(z) dz}{\int_{\Gamma_z} \rho(z) dz} = \frac{\int_{\Gamma_z} \int_{\Gamma_{\bar{z}}} f(z) P(z,\bar{z}) dz d\bar{z} }{\int_{\Gamma_z} \int_{\Gamma_{\bar{z}}} P(z,\bar{z}) dz d\bar{z}}. \label{B3}\n\end{eqnarray}\nThis will be the case provided\n\begin{eqnarray}\n\rho(z)=\int_{\Gamma_{\bar{z}}} P(z,\bar{z}) d \bar{z}. \label{Pro}\n\end{eqnarray}\nThat is, we will look for a distribution $P(z,\bar{z})$ which: (1) upon integration over $\bar{z}$ reproduces the analytic continuation $\rho(z)$, \nand (2) is positive and normalizable when expressed in terms of the real and imaginary parts $x$ and $y$.
Given that, we will have found\nthe positive representation for the LHS of (\ref{B3})\n\begin{eqnarray}\n \frac{\int_{\Gamma_z} f(z) \rho(z) dz}{\int_{\Gamma_z} \rho(z) dz} = \frac{\int_{R^2} f(x+iy) P(x,y) dx dy }{\int_{R^2} P(x,y) dx dy}.\nonumber \n\end{eqnarray}\nThe integral on the RHS is over the whole $(x,y)$ plane (at least in the cases considered here), while the contours $\Gamma_z$ and $\Gamma_{\bar{z}}$ \nhave to lie within domains determined by the parameters of both distributions. For a range of parameters\nthe domain for $\Gamma_z$ contains the real axis, and then Eq.~(\ref{B1}) can be established. \n\nIt is shown below that this program can in fact be carried through quantitatively, at least in a few physically interesting cases, already providing \nsome novel results.\n\n\section{Generalized gaussian model}\nA positive distribution more general than (\ref{P}) can be derived if we start from a generic quadratic action for (\ref{Pz}) in the two complex variables $z$ and $\bar{z}$,\n\begin{eqnarray}\nS(z,\bar{z})&=& a^* z^2 + 2 b z \bar{z} + a \bar{z}^2, \nonumber \n\end{eqnarray}\nwith an arbitrary complex $a=\alpha+i\beta$ and real $b=b^*$.\nIn terms of the real and imaginary parts (\ref{xy}),\n\begin{eqnarray}\nS(x,y)= 2(b+\alpha) x^2 + 4\beta x y + 2(b-\alpha) y^2\label{Sb},\n\end{eqnarray}\nwhich gives the positive and normalizable (for real $x$ and $y$) distribution \n\begin{eqnarray}\nP(x,y)= \exp{\left(-S(x,y)\right)}, \label{Pb}\n\end{eqnarray}\nprovided $b > |a|$, since the two eigenvalues of the quadratic form (\ref{Sb}) are \n\begin{eqnarray}\n\lambda_{\pm}=2 (b\pm |a|).
\\nonumber\n\\end{eqnarray}\nAt the same time the normalization reads\n\\begin{eqnarray}\n\\int_{R^2} dx dy P(x,y) = \\frac{\\pi}{2\\sqrt{b^2-|a|^2}}.\\label{Pc}\n\\end{eqnarray}\nOn the other hand, integrating \n\\begin{eqnarray}\nP(z,\\bar{z})=\\frac{i}{2}P(x,y)=\\frac{i}{2}\\exp{\\left(-S(z,\\bar{z})\\right)}, \\nonumber\n\\end{eqnarray}\n as in (\\ref{Pro}), gives\n\\begin{eqnarray}\n\\rho(z)=\\int_{\\Gamma_{\\bar{z}}} P(z,\\bar{z}) d \\bar{z} = \n \\frac{1}{2}\\sqrt{\\frac{\\pi}{-a}}\\exp{\\left( - s z^2\\right)},\\;\\;\\;s=\\frac{|a|^2-b^2}{a}. \\label{ip2}\n\\end{eqnarray}\nwhich is properly normalized in view of (\\ref{Pc}).\nThe contour $\\Gamma_{\\bar{z}}$ depends on a phase of a complex parameter $a$ and is chosen such that the integral converges. This choice also determines unambiguously the phase of $-a$.\n\nWith $a$ and $b$ parametrized by \n\\begin{eqnarray}\nb=\\frac{\\sigma_R}{2}(1+r^2) , \\;\\;\\;\\; \\alpha=-\\frac{\\sigma_R}{2} r^2 , \\;\\;\\;\\; \\beta=\\frac{\\sigma_R}{2} r, \\;\\;\\;\\;\\sigma_R>0, \\nonumber\n\\end{eqnarray}\nequations (\\ref{Sb}) and (\\ref{ip2}) reproduce the original gaussian model, i.e. (\\ref{P}) and (\\ref{S}) respectively. However (\\ref{Sb}) gives a more general, positive and normalizable probability. In fact the generalized model (\\ref{Sb}) realizes the positive representation of the gaussian (\\ref{ip2}) for any complex value of the slope, $s$, or equivalently $a$, $b > |a|$.\n\nA complex gaussian, e.g. $e^{- s z^2}, s,z \\in C$, is integrable only along a family of contours contained in a wedge specified by a phase \nof $s$. However its moments can be analytically continued to any complex $s$. 
The point of (\\ref{Sb},\\ref{Pb}) is that it provides a positive and normalizable integral representation for this continuations at arbitrary complex $s$.\nIn another words: even though the complex density $\\rho$ was derived and is integrable only along particular family of contours for a given $a$, \nthe positive density $P(x,y)$ exists and is integrable for all $a\\in C$. \n\nIt is a simple matter to check the equivalence of (\\ref{Pb},\\ref{Sb}) and (\\ref{ip2}), e.g. by calculating generating function (\\ref{B5}) with both representations.\nHere we illustrate this only for the second moment. In the matrix notation the action (\\ref{Sb}) reads\n\\begin{eqnarray}\nS(x,y)=X^T M X,\\;\\;\\;X^T=(x,y). \\nonumber\n\\end{eqnarray}\nTherefore\n\\begin{eqnarray}\n\\langle (x+i y)^2 \\rangle_P=\\frac{1}{2}\\left(M^{-1}_{11}-M^{-1}_{22} + 2 i M^{-1}_{12}\\right)=\n-\\frac{1}{2}\\frac{\\alpha+i\\beta}{b^2-|a|^2}, \\nonumber\n\\end{eqnarray}\nwhich indeed is identical to the average over the complex density (\\ref{ip2})\n\\begin{eqnarray}\n\\langle z^2 \\rangle_{\\rho} = \\frac{1}{2 s}. \\nonumber\n\\end{eqnarray}\n\nTo conclude this Section we discuss two interesting special cases.\n\n For real and negative $s$, the complex density blows up along the real axis. On the other hand the distribution $P(x,y)$ is positive and normalizable at $\\alpha>0$ and $\\beta=0$ producing the correct average over the \"divergent\" distribution $\\rho$. This explains a ``striking example\" observed in the literature \\cite{AS}, namely that, upon change of variables, the complex Langevin simulation based on (\\ref{P}) actually has the correct fixed point also for negative ${\\cal R}e\\;\\sigma$. 
The answer is that the positive distribution (\ref{P}) used until now is part of a richer structure (\ref{Sb}), which accommodates negative $\sigma_R$ as well.\n \n Similarly, the complex density $\rho(z)$ for purely imaginary $s$ is readily represented by the positive distribution $P(x,y)$, which is perfectly well defined at $\alpha=0$ and arbitrary $\beta$, as long as $|\beta| < b$.