diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzdckf" "b/data_all_eng_slimpj/shuffled/split2/finalzzdckf" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzdckf" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nHidden-variable theories allege that a state of a quantum system, even\nif it is pure and thus contains as much information as quantum\nmechanics permits, actually describes an ensemble of systems with\ndistinct values of some hidden variables. Once the values of these\nvariables are specified, the system becomes determinate or at least\nmore determinate than quantum mechanics says. Thus the randomness in\nquantum predictions results, entirely or partially, from the\nrandomness involved in selecting a member of the ensemble.\n\nNo-go theorems assert that, under reasonable assumptions, a hidden-variable interpretation cannot reproduce the predictions of quantum mechanics. In this paper, we examine two species of such theorems, \\emph{value} no-go theorems and \\emph{expectation} no-go theorems.\nThe value approach originated in the work of Bell \\cite{bell64,bell66} and of Kochen and Specker \\cite{ks} in the 1960's. Value no-go theorems establish that, under suitable hypotheses, hidden-variable theories cannot reproduce the predictions of quantum mechanics concerning the possible results of the measurements of observables.\n\nThe expectation approach was developed in the last decade by Spekkens \\cite{spekkens} and by Ferrie, Emerson, and Morris \\cite{ferrieA,ferrieB,ferrieC}, with \\cite{ferrieC} giving the sharpest result. In this approach, the discrepancy between hidden-variable theories and quantum mechanics appears in the predictions of the expectation values of the measurements of effects, i.e.\\ the elements of POVMs, positive operator-valued measures. There is no need to consider the actual values obtained by measurements or the probability distributions over these values.\n\nIn both cases, measurements are associated to Hermitian operators, but\nthey are different sorts of measurements. In the value approach,\nHermitian operators serve as observables, and measuring one of them\nproduces a number in its spectrum. In the expectation approach,\ncertain Hermitian operators serve as effects, and measuring one of\nthem produces 0 or 1, even if the spectrum consists entirely of other points.\nThe only Hermitian operators for which these two uses coincide are\nprojections.\n\nWe sharpen the results of both approaches so that only projection\nmeasurements are used. Regarding the expectation approach, we\nsubstantially weaken the hypotheses. We do not need arbitrary effects,\nbut only rank-1 projections. Accordingly, we need convex-linearity\nonly for the hidden-variable picture of states, not for that of\neffects. Regarding the value approach, it turns out that rank-1\nprojections are sufficient in the finite dimensional case but not in\ngeneral. 
Finally, using a successful hidden-variable theory of John\nBell for a single qubit, we demonstrate that the expectation approach does not subsume\nthe value approach.\n\n\\section{Expectation No-Go Theorem}\n\\label{sec:exp}\n\n\\begin{definition}\\label{def:exp}\\rm\nAn \\emph{expectation representation} for quantum systems described by a Hilbert space $\\H$ is a triple $(\\Lambda,\\mu,F)$ where\n\\begin{itemize}\n\\item $\\Lambda$ is a measurable space,\n\\item $\\mu$ is a convex-linear map assigning to each density operator $\\rho$ on $\\H$ a probability measure $\\mu(\\rho)$ on $\\Lambda$, and\n\\item $F$ is a map assigning to each rank-1 projection $E$ in $\\H$ a measurable function $F(E)$ from $\\Lambda$ to the real interval $[0,1]$.\n\\end{itemize}\nIt is required that for all density matrices $\\rho$ and all rank-1 projections $E$\n\\begin{equation}\\label{eq1}\n\\Tr{\\rho\\cdot E}=\\int_\\Lambda F(E)\\,d\\mu(\\rho)\n\\end{equation}\n\\end{definition}\n\nThe convex linearity of $\\mu$ means that $\\mu(a_1\\rho_1 + a_2\\rho_2) = a_1\\mu(\\rho_1) + a_2\\mu(\\rho_2)$ whenever $a_1,a_2$ are nonnegative real numbers with sum 1.\n\nThe definition of expectation representation is similar to Ferrie-Morris-Emerson's definition of the probability representation \\cite{ferrieC} except that (i)~the domain of $F$ contains only rank-1 projections, rather than arbitrary effects, and (ii)~we do not (and cannot) require that $F$ be convex-linear.\n\nIntuitively an expectation representation $(\\Lambda,\\mu,F)$ attempts to predict the expectation value of any rank-1 projection $E$ in a given mixed state $\\rho$. The hidden variables are combined into one variable ranging over $\\Lambda$. Further, $\\mu(\\rho)$ is the probability measure on $\\Lambda$ determined by $\\rho$, and $(F(E))(\\lambda)$ is the probability of determining the effect $E$ at the subensemble of $\\rho$ determined by $\\lambda$. The left side of \\eqref{eq1} is the expectation of $E$ in state $\\rho$ predicted by quantum mechanics and the right side is the expectation of $F(E)$ in the ensemble described by $\\mu(\\rho)$.\n\nBut why is $\\mu$ supposed to be convex linear? Well, mixed states have\nphysical meaning and so it is desirable that $\\mu$ be defined on mixed\nstates as well. If you are a hidden-variable theorist, it is most\nnatural for you to think of a mixed state as a classical probabilistic\ncombination of the component states. This leads you to the convex\nlinearity of $\\mu$. For example, if $\\rho = \\sum_{i=1}^k p_i\\rho_i$\nwhere $p_i$'s are nonnegative reals and $\\sum p_i = 1$ then, by the\nrules of probability theory, $(\\mu(\\rho))(S) = \\sum p_i(\\mu(\\rho_i))(S)$\nfor any measurable $S\\subseteq\\Lambda$. Note, however, that you cannot\nstart with any wild probability distribution $\\mu$ on pure states and\nthen extend it to mixed states by convex linearity. There is an\nimportant constraint on $\\mu$. The same mixed state $\\rho$ may have\ndifferent representations as a convex combination of pure states; all\nsuch representations must lead to the same probability measure\n$\\mu(\\rho)$.\n\n\\begin{theorem}[First Bootstrapping Theorem]\\label{thm:boot1}\nLet $\\H$ be a closed subspace of a Hilbert space $\\H'$. 
From any expectation representation for quantum systems described by $\\H'$, one can directly construct such a representation for systems described by $\\H$.\n\\end{theorem}\n\n\\begin{proof}\nWe construct an expectation representation $(\\Lambda,\\mu,F)$ for quantum systems described by $\\H$ from any such representation $(\\Lambda',\\mu',F')$ for the larger Hilbert space $\\H'$. To begin, we set $\\Lambda=\\Lambda'$.\n\nTo define $\\mu$ and $F$, we use the inclusion map $i:\\H\\to\\H'$,\nsending each element of $\\H$ to itself considered as an element of\n$\\H'$, and we use its adjoint $p:\\H'\\to\\H$, which is the orthogonal\nprojection of $\\H'$ onto $\\H$. Any density operator $\\rho$ over $\\H$,\ngives rise to a density operator $\\bar\\rho=i\\circ\\rho\\circ p$ over\n$\\H'$. Note that this expansion is very natural: If $\\rho$\ncorresponds to a pure state $\\ket\\psi\\in\\H$, i.e., if\n$\\rho=\\ket\\psi\\bra\\psi$, then $\\bar\\rho$ corresponds to the same\n$\\ket\\psi\\in\\H'$. If, on the other hand, $\\rho$ is a mixture of\nstates $\\rho_i$, then $\\bar\\rho$ is the mixture, with the same\ncoefficients, of the $\\overline{\\rho_i}$. Define\n$\\mu(\\rho)=\\mu'(\\bar\\rho)$.\n\nThe definition of $F$ is similar. For any rank-1 projection $E$ in $\\H$, $\\bar E=i\\circ E\\circ p$ is a rank-1 projection in $\\H'$, and so we define $F(E)=F'(\\bar E)$. If $E$ projects to the one-dimensional subspace spanned by $\\ket\\psi\\in\\H$, then $\\bar E$ projects to the same subspace, now considered as a subspace of $\\H'$.\n\nThis completes the definition of $\\Lambda$, $\\mu$ , and $F$.\nMost of\nthe requirements in Definition~\\ref{def:exp} are trivial to verify.\nFor the last requirement, the agreement between the expectation\ncomputed as a trace in quantum mechanics and the expectation computed\nas an integral in the expectation representation, it is useful to\nnotice first that $p\\circ i$ is the identity operator on $\\H$. We can then\ncompute, for any density operator $\\rho$ and any rank-1 projection\n$E$ on $\\H$,\n\\begin{align*}\n \\int_\\Lambda F(E)\\,d\\mu(\\rho)&=\\int_\\Lambda F'(\\bar E)\\,d\\mu'(\\bar\\rho)\n =\\Tr{\\bar\\rho\\bar E}\n =\\Tr{i\\circ\\rho\\circ p\\circ i\\circ E\\circ p}\\\\\n &=\\Tr{i\\circ\\rho\\circ E\\circ p}\n =\\Tr{\\rho\\circ E\\circ p\\circ i} = \\Tr{\\rho\\circ E},\n\\end{align*}\nas required.\n\\end{proof}\n\n\\begin{theorem}[Expectation no-go theorem]\\label{thm:exp}\n If the dimension of the Hilbert space $\\H$ is at least 2 then there is no expectation representation for quantum systems described by $\\H$.\n\\end{theorem}\n\nWe cannot expect any sort of no-go result in lower dimensions, because quantum theory in Hilbert spaces of dimensions 0 and 1 is trivial and therefore classical.\nBy the First Bootstrapping Theorem, it suffices to prove Theorem~\\ref{thm:exp} just in the case $\\Dim{\\H}=2$.\nBut we find Ferrie-Morris-Emerson's proof that works directly for\nall dimensions \\cite{ferrieC} instructive, and we adjust it to prove\nTheorem~\\ref{thm:exp}.\nThe adjustment involves adding some details and observing that a\ndrastically reduced domain of $F$ suffices. The adjustment also\ninvolves making a little correction. Ferrie et al.\\ quoted an\nerroneous\nresult of Bugajski \\cite{bugajski} which needs some additional\nhypotheses to become correct. 
Fortunately for Ferrie et al., those\nhypotheses hold in their situation.\n\n\\begin{proof}\nThe proof involves several normed vector spaces.\n\\begin{itemize}\n\\item $\\mathcal B$ is the real Banach space of bounded self-adjoint operators\n $\\H\\to\\H$ with norm\\\\ $\\nm A=\\sup\\{\\nm{Ax}:x\\in \\H,\\,\\nm x=1\\}$.\n\\item $\\mathcal F$ is the real vector space of bounded, measurable, real-valued functions on $\\Lambda$ with norm\\\\ $\\nm f = \\sup \\{|f(\\lambda)|: \\lambda\\in\\Lambda\\}$.\n\\item $\\mathcal M$ is the real vector space of bounded, signed, real-valued measures on $\\Lambda$ with the total variation norm $\\nm\\mu = \\mu_+(\\Lambda) + \\mu_-(\\Lambda)$ where $\\mu=\\mu_+-\\mu_-$ and $\\mu_+$ and $\\mu_-$ are positive measures with disjoint supports.\n\\item $\\mathcal T$ is the vector subspace of $\\mathcal B$ consisting of the trace-class operators. These are the operators $A$ whose spectrum consists of real eigenvalues $\\alpha_i$ such that the sum $\\sum_i|\\alpha_i|$ is finite; eigenvalues with multiplicity $>1$ are repeated in this list, and the continuous spectrum is empty or $\\{0\\}$. The sum $\\sum_i|\\alpha_i|$ serves as the norm of $A$ in $\\mathcal T$. The sum $\\sum_i\\alpha_i$ of eigenvalues themselves (rather than their absolute values) is the trace of $A$. Note that density operators are positive trace-class operators of trace 1.\n\\end{itemize}\n\nIn the rest of the proof, by default, operators, transformations and\nfunctionals are bounded and of course linear. Suppose, toward a contradiction, that we have an expectation representation $(\\Lambda,\\mu,F)$ for some $\\H$ with $\\Dim{\\H}\\ge2$.\n\n\\begin{lemma}\\label{lem:e1}\n$\\mu$ can be extended in a unique way to a transformation, also denoted $\\mu$, from all of $\\mathcal T$ into $\\mathcal M$.\n\\end{lemma}\n\n\\begin{proof}[Proof of Lemma~\\ref{lem:e1}]\nEvery $A\\in\\mathcal T$ can be written as a linear combination of two density operators. Indeed, if $\\nm A>0$ and $A$ is positive then $\\Tr{A}=\\nm A$ and $A={\\nm A}\\rho$ where $\\rho=\\frac{A}{\\nm A}$. In general, it suffices to represent $A$ as the difference $B-C$ of positive trace-class operators. Choose $A_+$ (resp.\\ $A_-$) to have the same positive (resp.\\ negative) eigenvalues and corresponding eigenspaces as $A$ and be identically zero on all the eigenspaces corresponding to the remaining eigenvalues. The desired $B = A_+$ and $C = - A_-$.\n\n\nIf $A$ is a linear combination $b\\rho+c\\sigma$ of two density\noperators, define $\\mu(A)=b\\mu(\\rho)+c\\mu(\\sigma)$.\nUsing the convex linearity of $\\mu$ on the density operators, it is easy to check that if $A$ has another such representation $b'\\rho'+c'\\sigma'$ then\n$b\\mu(\\rho) + c\\mu(\\sigma) = b'\\mu(\\rho') + c'\\mu(\\sigma')$ which means that $\\mu(A)$ is well-defined.\n\nThe uniqueness of the extension is obvious. It remains to check that\nthe extended $\\mu$ is bounded. In fact, we show more, namely that\n$\\nm{\\mu(A)} \\le1$ if $\\nm A \\le1$. \nSo let $A\\in\\mathcal T$ and $\\nm A \\le1$. As we saw above, there are\npositive trace-class operators $B,C$ such that $A = B-C$. Then $A =\n{\\nm B}\\rho - {\\nm C}\\sigma$ for some density operators $\\rho, \\sigma$\nwhere $b,c\\ge0$ and $b+c={\\nm A}\\le1$. Now, $\\mu(\\rho)$ and\n$\\mu(\\sigma)$ are measures with norm 1. 
So $\\nm{\\mu(A)} \\le b\\nm{\\mu(\\rho)}\n+ c\\nm{\\mu(\\sigma)}\\le b+c \\le 1$, where $b=\\nm B$ and $c=\\nm C$.\n\\end{proof}\n\nLet $\\mathcal M'$ be the space of the functionals $\\mathcal M\\to\\mathbb R$ where $\\mathbb R$ is the\nset of real numbers. Similarly let $\\mathcal T'$ be the space of the\nfunctionals $\\mathcal T\\to\\mathbb R$. $\\mu$ gives rise to a dual transformation\n$\\mu': \\mathcal M'\\to\\mathcal T'$ that sends any $h\\in\\mathcal M'$ to $\\mu'(h) = h\\circ\\mu$ so\nthat \n\\begin{equation}\\label{eq2}\n\\mu'(h)(A) = h(\\mu(A))\\quad \\text{for all $h\\in\\mathcal M'$ and all $A\\in\\mathcal T$}.\n\\end{equation}\nEvery measurable function $f\\in\\mathcal F$ induces a functional $\\bar f \\in\\mathcal M'$ by integration: $\\bar f(\\mu)=\\int_\\Lambda f\\,d\\mu$. This gives rise to a transformation $\\nu: \\mathcal F\\to\\mathcal T'$ that sends every $f$ to $\\mu'(\\bar f)$. Specializing $h$ to $\\bar f$ in Equation~\\eqref{eq2} gives\n\\begin{equation}\\label{eq3}\n(\\nu f)(A) = \\int_\\Lambda f\\,d\\mu(A)\\quad\n\\text{for all $f\\in\\mathcal F$ and all $A\\in\\mathcal T$}.\n\\end{equation}\nHere and below we omit the parentheses around the argument of $\\nu$.\n\n\\begin{lemma}\\label{lem:e2}\nFor every $f\\in\\mathcal F$, there is a unique $B\\in\\mathcal B$ with $(\\nu f)(\\rho) =\n\\Tr{B\\cdot\\rho}$ for all density operators $\\rho$. \n\\end{lemma}\n\n\\begin{proof}[Proof of Lemma~\\ref{lem:e2}]\nEvery $B\\in\\mathcal B$ induces a functional $\\bar B\\in\\mathcal T'$ by $\\bar\nB(A)=\\Tr{B\\cdot A}$. Here $\\Tr{B\\cdot\\rho}$ is well-defined because\nthe product of a bounded operator and a trace-class operator is again\nin the trace class \\cite[Lemma~3, p.\\ 38]{schatten}.\n\nThe map $B\\mapsto\\bar B$ is an isometric isomorphism between $\\mathcal B$ and $\\mathcal T'$ \\cite[Theorem~2, p.~47]{schatten}.\nSo, for every $X\\in\\mathcal T'$, there is a unique $B_X\\in\\mathcal B$ such that $X(A) = \\Tr{B_X\\cdot A}$ for all $A\\in\\mathcal T$. Furthermore, there is a unique $B_X\\in\\mathcal B$ such that $X(\\rho) = \\Tr{B_X\\cdot\\rho}$ for all density operators $\\rho$. This is because, as we showed above, the linear span of the density matrices is the whole space $\\mathcal T$. The lemma follows because every $\\nu f$ belongs to $\\mathcal T'$.\n\\end{proof}\n\n\nFor any $f\\in\\mathcal F$, the unique operator $B$ with $(\\nu f)(\\rho) =\n\\Tr{B\\cdot\\rho}$ for all $\\rho$ will be denoted $[\\nu f]$. \n\n\\begin{lemma}\\label{lem:e3}\n$[\\nu F(E)] = E$ for every rank-1 projection $E$, and $[\\nu1]=I$ where 1 is the constant function with value 1 and $I$ is the unit matrix.\n\\end{lemma}\n\n\\begin{proof}[Proof of Lemma~\\ref{lem:e3}]\nLemma~\\ref{lem:e2} and equation~\\eqref{eq3} give\n\n\\begin{equation}\\label{eq4}\n\\Tr{[\\nu f]\\cdot\\rho}=\\int_\\Lambda f\\,d\\mu(\\rho)\\quad\n\\text{for every density operator }\\rho.\n\\end{equation}\n\nEquations~\\eqref{eq1} and \\eqref{eq4} imply\n\\begin{equation}\\label{eq5}\n\\Tr{\\rho\\cdot E} = \\int_\\Lambda F(E)\\,d\\mu(\\rho)\n = \\Tr{[\\nu F(E)]\\cdot\\rho}\n\\end{equation}%\nfor every density operator $\\rho$.\n\nThe right sides of Equations~\\eqref{eq1} and \\eqref{eq4} coincide if\nwe specialize $f$ to $F(E)$. 
\nTherefore their left sides are equal:\n\n\\begin{equation*}\n\\Tr{E\\rho} = \\Tr{[\\nu F(E)]\\rho}\n\\end{equation*}\n\nWe now invoke the last clause in Definition~\\ref{def:exp} to find that, for\nall rank-1 projections $E$ and all density matrices $\\rho$,\n\\[\n\\Tr{E\\rho}=\\int_\\Lambda F(E)\\,d\\mu(\\rho) = \\Tr{[\\nu F(E)]\\rho}.\n\\]\nBut this is, as we saw in the proof of Lemma~\\ref{lem:e2}, enough to show that\n$[\\nu F(E)]=E$.\n\nBy Lemma~\\ref{lem:e2}, we see that $[\\nu1]$ is the unique operator that satisfies, for all $\\rho$,\n\\[\n\\Tr{[\\nu1]\\rho}=\\int_\\Lambda\\,d\\mu(\\rho) = (\\mu(\\rho))(\\Lambda)=1=\\Tr\\rho\n=\\Tr{I\\rho},\n\\]\nwhere the third equality comes from the fact that $\\mu$ maps density matrices to probability measures. Thus, $[\\nu1]=I$.\n\\end{proof}\n\n\\begin{lemma}\\label{lem:e4}\n For any two rank-1 projections $A,B$ of $\\H$, there exists an operator $H\\in\\mathcal B$ such that all four of $H$, $A-H$, $B-H$, and $I-A-B+H$ are positive operators.\n\\end{lemma}\n\n\\begin{proof} [Proof of Lemma~\\ref{lem:e4}]\nRecall that an operator $A$ is said to be\npositive if $\\bra\\psi A\\ket\\psi\\geq0$ for all $\\ket\\psi\\in\\H$ and\nthat $A\\leq B$ means that $B-A$ is positive. A function $f\\in\\mathcal F$ is \\emph{nonnegative} if $f(\\lambda)\\geq0$ for all $\\lambda\\in\\Lambda$.\n\n\\begin{claim} \\label{cla:e1} If $f\\in\\scr F$ is nonnegative then $[\\nu f]$ is a\n positive operator. Therefore, if $f\\leq g$ pointwise in $\\mathcal F$ then\n $[\\nu f]\\le [\\nu g]$ in $\\mathcal B$.\n\\end{claim}\n\n\\begin{proof}[Proof of Claim~\\ref{cla:e1}]\n The second assertion follows immediately from the first applied to\n $g-f$, because $\\nu$ is linear. To prove the first assertion,\n suppose $f\\in\\scr F$ is nonnegative, and let $\\ket\\psi$ be any\n vector in $\\H$. The conclusion we want to deduce, $\\bra\\psi\n [\\nu f]\\ket\\psi\\geq0$, is obvious if $\\ket\\psi=0$, so we may assume\n that \\ket\\psi\\ is a non-zero vector. Normalizing it, we may assume\n further that its length is 1. Then $\\ket\\psi\\bra\\psi$ is a density operator\n and therefore $\\mu(\\ket\\psi\\bra\\psi)$ is a measure. Using\n equation~\\eqref{eq4}, we compute\n\\[\n\\bra\\psi [\\nu f]\\ket\\psi=\\Tr{[\\nu f]\\ket\\psi\\bra\\psi}\n=\\int_\\Lambda f\\,d\\mu(\\ket\\psi\\bra\\psi)\\geq0,\n\\]\nwhere we have used that both the measure $\\mu(\\ket\\psi\\bra\\psi)$ and\nthe integrand $f$ are nonnegative.\n\\end{proof}\n\nLet $\\mathcal F_{[0,1]}$ be the subset of $\\mathcal F$ comprising the functions all of whose values are in the interval $[0,1]$.\n\n\\begin{claim}\\label{cla:e2}\n For any $f,g\\in\\mathcal F_{[0,1]}$ there exists $h\\in\\mathcal F_{[0,1]}$ such that all four of $h$, $f-h$, $g-h$, and $1-f-g+h$ are nonnegative.\n\\end{claim}\n\n\\begin{proof}[Proof of Claim~\\ref{cla:e2}]\n Define $h(\\lambda)=\\min\\{f(\\lambda),g(\\lambda)\\}$ for all\n $\\lambda\\in\\Lambda$. Then the first three of the assertions in the\n claim are obvious, and the fourth becomes obvious if we observe that \n $f+g-h=\\max\\{f,g\\}\\leq 1$.\n\\end{proof}\n\nNow we are ready to complete the proof of Lemma~\\ref{lem:e4}.\nApply Claim~\\ref{cla:e2} \nwith $f=F(A)$ and $g=F(B)$,\nlet $h$ be the function given by the claim, and let $H=[\\nu(h)]$. 
\nThe nonnegativity of $h$, $f-h$, $g-h$, and $1-f-g+h$ implies, by \nClaim~\\ref{cla:e1}, \nthe positivity of $[\\nu(h)]=H$, $[\\nu(F(A)-h)]=A-H$,\n$[\\nu(F(B)-h)]=B-H$, \nand $[\\nu(1-F(A)-F(B)+h)]=I-A-B+H$, where we have also used the\nlinearity of $\\nu$, the fact that $[\\nu(1)]=I$, and the formula\n$[\\nu(F(A))]=A$ for all $A$ in the domain of $F$.\n\\end{proof}\n\nNow we are ready to prove Theorem~\\ref{thm:exp}.\nLet us apply Lemma~\\ref{lem:e4} to two specific rank-1 projections. Fix\ntwo orthonormal vectors \\ket0 and \\ket1. (This is where we use that\n$\\H$ has dimension at least 2.) Let $\\ket+=(\\ket0+\\ket1)\/\\sqrt2$.\nWe use the projections $A=\\ket0\\bra0$ and $B=\\ket+\\bra+$ onto the\nsubspaces spanned by $\\ket0$ and $\\ket+$. Let $H$ be as in\nLemma~\\ref{lem:e4} for these projections $A$ and $B$.\n\nFrom the positivity of $H$ and of $A-H$, we get that\n$0\\leq\\bra1H\\ket1$ and that\n\\[\n0\\leq\\bra1(A-H)\\ket1=\\bra1A\\ket1-\\bra1H\\ket1=-\\bra1H\\ket1,\n\\]\nwhere we have used that \\ket1, being orthogonal to \\ket0, is\nannihilated by $A$. Combining the two inequalities, we infer that\n$\\bra1H\\ket1=0$ and therefore, since $H$ is positive, $H\\ket1=0$.\nSimilarly, using the orthogonal vectors $\\ket+$ and\n$\\ket-=(\\ket0-\\ket1)\/\\sqrt2$ in place of \\ket0 and \\ket1, we obtain\n$H\\ket-=0$. So, being linear, $H$ is identically zero on the subspace\nof $\\H$ spanned by \\ket1 and \\ket-; note that \\ket0 is in this\nsubspace, so we have $H\\ket0=0$.\n\nNow we use the positivity of $I-A-B+H$. Since $H\\ket0=0$, we\ncan compute\n\\[\n0\\leq\\bra0(I-A-B+H)\\ket0=\\sq{0|0}-\\bra0A\\ket0-\\bra0B\\ket0=\n1-1-\\frac12=-\\frac12.\n\\]\nThis contradiction completes the proof of the theorem.\n\\end{proof}\n\n\\begin{remark}[Symmetry or the lack thereof]\\rm\nIn view of the idea of symmetry or even-handedness suggested by\nSpekkens \\cite{spekkens}, one might ask whether there is a dual\nversion of Theorem~\\ref{thm:exp}, that is, a version that requires\nconvex-linearity for effects but looks only at pure states and does not\nrequire any convex-linearity for states.\nThe answer is no; with such requirements there is a trivial example of a successful hidden-variable theory, regardless of the dimension of the Hilbert space. The theory can be concisely described as taking the quantum state itself as the ``hidden'' variable. In more detail, let $\\Lambda$ be the set of all pure states. Let $\\mu$ assign to each operator $\\ket\\psi\\bra\\psi$ the probability measure on $\\Lambda$ concentrated at the point $\\lambda_{\\ket\\psi}$ that corresponds to the vector $\\ket\\psi$. Let $F$ assign to each effect $E$ the function on $\\Lambda$ defined by\n\\[\nF(E)(\\lambda_{\\ket\\psi})=\\bra\\psi E\\ket\\psi.\n\\]\nWe have trivially arranged for this to give the correct expectation\nfor any effect $E$ and any pure state \\ket\\psi. The formula for\n$F(E)$ is clearly convex-linear (in fact, linear) as a function of\n$E$. Of course, $\\mu$ cannot be extended convex-linearly to mixed\nstates, so that Theorem~\\ref{thm:exp} does not apply.\n\\end{remark}\n\n\\section{Value No-Go Theorems}\n\\label{sec:val}\nValue no-go theorems assert that hidden-variable theories cannot even produce the correct outcomes for individual measurements, let alone the correct probabilities or expectation values. Such theorems considerably predated the expectation no-go theorems considered in the preceding section. 
Value no-go theorems were first established by Bell \\cite{bell64,bell66} and then by Kochen and Specker \\cite{ks}; we shall also refer to the user-friendly exposition given by Mermin \\cite{mermin}. To formulate value no-go theorems, one must specify what ``correct outcomes for individual measurements'' means.\n\n\\begin{definition} \\label{valmap}\nLet $\\H$ be a Hilbert space, and let $\\O$ be a set of observables, i.e., self-adjoint operators on $\\H$. A \\emph{valuation} for $\\O$ in $\\H$ is a function $v$ assigning to each observable $A\\in\\O$ a number $v(A)$ in the spectrum of $A$, in such a way that $(v(A_1),\\dots,v(A_n))$ is in the joint spectrum $\\sigma(A_1,\\dots,A_n)$ of $(A_1,\\dots,A_n)$\nwhenever $A_1,\\dots,A_n$ are pairwise commuting.\n\\end{definition}\n\nThe intention behind this definition is that, in a hidden-variable theory, a quantum state represents an ensemble of individual systems, each of which has definite values for observables. That is, each individual system has a valuation associated to it, describing what values would be obtained if we were to measure observable properties of the system. A believer in such a hidden-variable theory would expect a valuation for the set of all self-adjoint operators on $\\H$, unless there were superselection rules rendering some such operators unobservable.\n\nBefore we proceed, we recall the notion of joint spectra \\cite[Section~6.5]{spectral}.\n\n\\begin{definition}\nThe \\emph{joint spectrum} $\\sigma(A_1,\\dots,A_n)$ of pairwise\ncommuting, self-adjoint operators $A_1,\\dots,A_n$ on a Hilbert space\n$\\H$ is a subset of $\\mathbb R^n$. If $A_1,\\dots,A_n$ are simultaneously\ndiagonalizable then $(\\lambda_1,\\dots,\\lambda_n)\n\\in\\sigma(A_1,\\dots,A_n)$ iff there is a non-zero vector $\\ket\\psi$\nwith $A_i\\ket\\psi=\\lambda_i\\ket\\psi$ for $i=1,\\dots,n$. In general,\n$(\\lambda_1,\\dots,\\lambda_n) \\in\\sigma(A_1,\\dots,A_n)$ iff for every\n$\\varepsilon>0$ there is a unit vector $\\ket\\psi\\in\\H$ with\n$\\nm{A_i\\ket\\psi-\\lambda_i\\ket\\psi}<\\varepsilon$ for $i=1,\\dots,n$.\n\\end{definition}\n\n\\begin{proposition}\\label{pro:jspec}\\\nFor any continuous function $f:\\mathbb R^n\\to\\mathbb R$,\\\\ $f(A_1,\\dots,A_n)=0$ if and only if $f$ vanishes identically on $\\sigma(A_1,\\dots,A_n)$.\n\\end{proposition}\n\nThe proposition is implicit in the statement, on page~155 of\n\\cite{spectral}, that ``most of Section~1, Subsection~4, about\nfunctions of one operator,'' \ncan be repeated in the context of\nseveral commuting operators. We give a detailed proof of the\nproposition in \\cite[\\S4.1]{G228}. \n\n\\begin{theorem}[\\cite{bell66,ks,mermin}]\\label{thm:dim3}\nIf $\\Dim{\\H}=3$ then there is a finite set $\\O$ of rank~1 projections for which no valuation exists.\n\\end{theorem}\n\nThe proof of Theorem~\\ref{thm:dim3} can be derived from the work of\nBell \\cite[Section~5]{bell66}, and we do that explicitly in\n\\cite[\\S4.3]{G228}. The construction given by Kochen and Specker\n\\cite{ks} provides the desired $\\O$ more directly. The proof of\nTheorem~1 in \\cite{ks} uses a Boolean algebra generated by a finite\nset of one-dimensional subspaces of $\\H$, and it shows that the\nprojections to those subspaces constitute an $\\O$ of the required\nsort. 
Mermin's elegant exposition \\cite[Section~IV]{mermin} deals\ninstead with squares $S_i^2$ of certain spin-components of a spin-1\nparticle, but these are projections to 2-dimensional subspaces of\n$\\H$, and the complementary rank-1 projections $I-S_i^2$ serve as the\ndesired $\\O$.\n\n\\begin{theorem}[Second Bootstrapping Theorem]\\label{thm:boot2}\nSuppose $\\H\\subseteq\\H'$ are finite-dimensional Hilbert spaces. Suppose further that $\\O$ is a finite set of rank-1 projections of $\\H$ for which no valuation exists. Then there is a finite set $\\O'$ of rank-1 projections of $\\H'$ for which no valuation exists.\n\\end{theorem}\n\nThis is our second bootstrapping theorem. Intuitively, such dimension bootstrapping results are to be expected. If hidden-variable theories could explain the behavior of quantum systems described by the larger Hilbert space, say $\\H'$, then they could also provide an explanation for systems described by the subspace $\\H$. The latter systems are, after all, just a special case of the former, consisting of the pure states that happen to lie in $\\H$ or mixtures of such states. But often no-go theorems give much more information than just the impossibility of matching the predictions of quantum-mechanics with a hidden-variable theory. They establish that hidden-variable theories must fail in very specific ways. It is not so obvious that these specific sorts of failures, once established for a Hilbert space $\\H$, necessarily also apply to its\nsuperspaces $\\H'$.\n\n\\begin{proof}\n Clearly, if two Hilbert spaces are isomorphic and if one of them has\n a finite set $\\O$ of rank-1 projections with no valuation, then\n the other also has such a set. It suffices to conjugate the\n projections in $\\O$ by any isomorphism between the two\n spaces. Thus, the existence of such a set $\\O$ depends only on the\n dimension of the Hilbert space, not on the specific space.\n\n Proceeding by induction on the dimension of $\\H'$, we see that\n it suffices to prove the theorem in the case where $\\dim(\\H')=\\dim(\\H)+1$. Given such $\\H$ and $\\H'$, let \\ket\\psi\\\n be any unit vector in $\\H'$, and observe that its orthogonal\n complement, $\\ket\\psi^\\bot$, is a subspace of $\\H'$ of the same\n dimension as $\\H$ and thus isomorphic to $\\H$. By the induction\n hypothesis, this subspace $\\ket\\psi^\\bot$ has a finite set $\\O$ of\n rank-1 projections for which no valuation exists. Each element of\n $\\O$ can be regarded as a rank-1 projection of $\\H'$; indeed,\n if the projection was given by $\\ket\\phi\\bra\\phi$ in\n $\\ket\\psi^\\bot$, then we can just interpret the same formula\n $\\ket\\phi\\bra\\phi$ in $\\H'$, using the same unit vector\n $\\ket\\phi\\in\\ket\\psi^\\bot$\n\n Let $\\O_1$ consist of all the projections from $\\O$,\n interpreted as projections of $\\H'$, together with one\n additional rank-1 projection, namely $\\ket{\\psi}\\bra{\\psi}$. What\n can a valuation $v$ for $\\O_1$ look like? It must send\n $\\ket{\\psi}\\bra{\\psi}$ to one of its eigenvalues, 0 or 1.\n\nSuppose first that $v(\\ket{\\psi}\\bra{\\psi})=0$. Then, using the fact\nthat $\\ket{\\psi}\\bra{\\psi}$ commutes with all the other elements of\n$\\O_1$, we easily compute that what $v$ does to those other\nelements amounts to a valuation for $\\O$. But $\\O$ was chosen so\nthat it has no valuation, and so we cannot have\n$v(\\ket{\\psi}\\bra{\\psi})=0$. Therefore $v(\\ket{\\psi}\\bra{\\psi})=1$. 
(It\nfollows that $v$ maps the projections associated to all the other\nelements of $\\O'$ to zero, but we shall not need this fact.)\n\nWe have thus shown that any valuation for the finite set $\\O_1$\nmust send $\\ket\\psi\\bra\\psi$ to 1. Repeat the argument for another\nunit vector $\\ket{\\psi'}$ that is orthogonal to \\ket\\psi. There is a\nfinite set $\\O_2$ of rank-1 projections such that any valuation\nfor $\\O_2$ must send \\ket{\\psi'}\\bra{\\psi'} to 1. No valuation\ncan send both \\ket\\psi\\bra\\psi\\ and \\ket{\\psi'}\\bra{\\psi'} to 1,\nbecause their joint spectrum consists of only $(1,0)$ and $(0,1)$.\nTherefore, there can be no valuation for the union $\\O_1\\cup\\O_2$, which thus serves as the $\\O'$ required by the theorem.\n\\end{proof}\n\n\\begin{theorem}[Value no-go theorem]\\label{thm:val}\nSuppose that the dimension of the Hilbert space is at least 3.\n\\begin{enumerate}\n\\item There is a finite set $\\O$ of projections for which no valuation exists.\n\\item If the dimension is finite then there is a finite set $\\O$\n of rank~1 projections for which no valuation exists.\n\\end{enumerate}\n\\end{theorem}\n\nThe desired finite sets of projections are constructed explicitly in the proof. The finiteness assumption in part (2) of the theorem cannot be omitted. If $\\Dim{\\H}$ is infinite, then the set $\\O$ of all finite-rank projections admits a valuation, namely the constant zero function. This works because the definition of ``valuation'' imposes constraints on only finitely many observables at a time.\n\n\n\\begin{proof}\nWhen the dimension of $\\H$ is greater than 3, but still finite, we use our Second Bootstrapping Theorem. Notice that, if one merely wants a no-go theorem saying that some $\\O$ has no valuation, then this bootstrapping is easy, as noted in \\cite{bell64,ks,mermin}. Work is needed only to get all the operators in $\\O$ to be rank~1 projections.\n\nIt remains to treat the case of infinite-dimensional $\\H$. Let $\\mathcal K$ and $\\L$ be Hilbert spaces, with $\\dim(\\mathcal K)=3$ and $\\dim(\\L)=\\dim(\\H)$. Note that then their tensor product $\\mathcal K\\otimes\\L$ has the same dimension as $\\H$, so it can be identified with $\\H$.\n\nLet $\\O$ be as in Theorem~\\ref{thm:dim3} for the 3-dimensional $\\mathcal K$. Let\n$\\O'=\\{P\\otimes I_{\\L}:P\\in\\O\\}$, where $I_{\\L}$ is the identity operator on $\\L$. Then $\\O'$ is a set of infinite-rank projections of $\\mathcal K\\otimes\\L=\\H$, having the same algebraic structure as $\\O$. It follows that there is no valuation for $\\O'$.\n\\end{proof}\n\nLet's say that a projection $A$ on Hilbert space $\\H$ is a \\emph{rank-$n$ projection modulo identity} if either $A$ is of rank $n$ or else $\\H$ splits into a tensor product $\\mathcal K\\otimes\\L$ such that $\\mathcal K$ is finite-dimensional and $A$ has the form $P\\otimes I_{\\L}$ where $P$ is of rank $n$ and $I_{\\L}$ is the identity operator on $\\L$. The proof of Theorem~\\ref{thm:val} gives us the following corollary.\n\n\\begin{corollary}\nIf the dimension of the Hilbert space is at least 3 then there is a finite set of rank-1 projections modulo identity for which no valuation exists.\n\\end{corollary}\n\n\\section{One successful hidden-variable theory}\n\\label{sec:bell}\n\nBy reducing both species of no-go theorems to projection measurement,\nwhere measurement as observable and measurement as effect coincide, we\nmade it easier to see similarities and differences. 
No, the\nexpectation no-go theorem does not imply the value no-go theorem. But\nthe task of proving this claim formally, say for a given dimension\n$d=\\Dim{\\H}$, is rather thankless. You have to construct a\ncounterfactual physical world where the expectation no-go theorem\nholds but the value no-go theorem fails. There is, however, one\nexceptional case, that of dimension~2. Theorem~\\ref{thm:exp} assumes\n$\\Dim{\\H}\\ge2$ while Theorem~\\ref{thm:val} assumes $\\Dim{\\H}\\ge3$. So\nwhat about dimension 2?\n\nBell developed, in \\cite{bell64} and \\cite{bell66}, a hidden-variable theory for a two-dimensional Hilbert space $\\H$. Here we summarize the improved version of Bell's theory due to Mermin \\cite{mermin}, we simplify part of Mermin's argument, and we explain why the theory doesn't contradict Theorem~\\ref{thm:exp}.\n\nIn the rest of this section, we work in the two-dimensional Hilbert\nspace $\\H$. Let $\\mathcal V$ be the set of valuations $v$ for the set of all\nobservables on $\\H$. In each pure state $\\psi$, the hidden variables\nshould determine a particular member of $\\mathcal V$. \n\n\\begin{definition}\\label{def:val}\\rm\nA \\emph{value representation} for quantum systems described by $\\H$ is a pair $(\\Lambda,V)$ where\n\\begin{itemize}\n\\item $\\Lambda$ is a probability space and\n\\item $V$ is a function $\\psi\\to V_\\psi$ on the pure states such that every $V_\\psi$ is a map $\\lambda\\to V_\\psi^\\lambda$ from (the sample space of) $\\Lambda$ onto $\\mathcal V$.\n\\end{itemize}\nFurther, we require that, for any pure state $\\psi$ and any observable $A$, the expectation $\\int_\\Lambda V_\\psi^\\lambda(A)\\: d\\lambda$ of the eigenvalue of $A$ agrees with the prediction $\\bra{\\psi}A\\ket{\\psi}$ of quantum theory:\n\\begin{equation}\\label{bell1}\n \\int_\\Lambda V_\\psi^\\lambda(A)\\: d\\lambda = \\bra{\\psi}A\\ket{\\psi}\n\\end{equation}\n\\end{definition}\n\n\\medskip\nDefinition~\\ref{def:val} is narrowly tailored for our goals in this section; in the full paper we will give a general definition of value representation. Notice that, if a random variable (in our case, the eigenvalue of $A$ in $\\psi$) takes only two values, then the expected value determines the probability distribution. A priori we should be speaking about commuting operators and joint spectra but things trivialize in the 2-dimensional case. Recall Proposition~\\ref{pro:jspec} and notice that, in the 2-dimensional Hilbert space, if operators $A,B$ commute, then one of them is a polynomial function of the other.\n\n\\begin{theorem}\\label{thm:bell}\nThere exists a value representation for the quantum systems described by the two-dimensional Hilbert space $\\H$.\n\\end{theorem}\n\n\\begin{proof}\nLet $\\vec\\sigma$ be the triple of the Pauli matrices\n$\\displaystyle \\sigma_x=\n\\begin{pmatrix}\n 0&1\\\\1&0\n\\end{pmatrix}, \\sigma_y=\n\\begin{pmatrix}\n 0&-i\\\\i&0\n\\end{pmatrix}, \\sigma_z=\n\\begin{pmatrix}\n 1&0\\\\0&-1\n\\end{pmatrix}$.\nFor any unit vector $\\vec n\\in\\mathbb R^3$, \nthe dot product $\\vec n\\cdot\\vec\\sigma$ is a Hermitian operator with\neigenvalues $\\pm1$. \nEvery pure state of $\\H$ is an eigenstate, for eigenvalue $+1$, of\n$\\vec n\\cdot\\vec\\sigma$ for a unique $\\vec n$. We use the notation\n\\ket{\\vec n} for this eigenstate. 
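\nExplicitly (we include this standard parametrization only as an illustration; it is not used below), if $\\vec n = (\\sin\\theta\\cos\\varphi, \\sin\\theta\\sin\\varphi, \\cos\\theta)$, then a direct computation gives\n\\[\n\\ket{\\vec n}=\\cos(\\theta\/2)\\ket0 + e^{i\\varphi}\\sin(\\theta\/2)\\ket1,\n\\qquad\n(\\vec n\\cdot\\vec\\sigma)\\ket{\\vec n}=\\ket{\\vec n},\n\\]\nwhere \\ket0 and \\ket1 denote the standard basis vectors in which the Pauli matrices above are written.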
\n\nIf \\scr H represents the states of a spin-$\\frac12$ particle, then the operator $\\frac12\\vec n\\cdot\\vec\\sigma$ represents the spin component in the direction $\\vec n$, and so \\ket{\\vec n} represents the state in which the spin is definitely aligned in the direction $\\vec n$. It is a special property of spin $\\frac12$ that all pure states are of this form; for higher spins, a superposition of states with definite spin directions need not have a definite spin direction.\n\nOn $\\H$, any Hermitian operator $A$ has the form $a_0I + \\a\\cdot\\vec\\sigma$ for some scalar\n$a_0\\in\\mathbb R$ and vector $\\a\\in\\mathbb R^3$. The eigenvalues of $A$ are\n$a_0\\pm\\nm{\\a}$. Observables $a_0I+\\a\\cdot\\vec\\sigma$ and $b_0I+\\vec b\\cdot\\vec\\sigma$\ncommute if and only if $\\a$ and $\\vec b$ are either parallel or\nantiparallel.\n\nThe desired probability space is the set $S^2$ of unit vectors in $\\mathbb R^3$ with the uniform probability measure. Let $\\vec m$ range over $S^2$. Then\n\\[\n V_{\\vec n}^{\\vec m} (a_0 I + \\a\\cdot\\vec\\sigma) =\n \\begin{cases}\n a_0 + \\nm{\\a}&\\text{if }(\\vec m+\\vec n)\\cdot\\a \\geq 0,\\\\\n a_0 - \\nm{\\a}&\\text{if }(\\vec m+\\vec n)\\cdot\\a < 0.\n \\end{cases}\n\\]\nIt remains to check that\n\\begin{equation}\\label{bell2}\n\\int_{S^2} V_\\psi^{\\vec m}(a_0 I + \\a\\cdot\\vec\\sigma)\\: d\\vec m =\n\\bra{\\vec n}(a_0 I + \\a\\cdot\\vec\\sigma)\\ket{\\vec n}.\n\\end{equation}\n\nWe begin with a couple of simplifications. First, we may assume that $a_0=0$, because a general $a_0$ would just be added to both sides of Equation~\\eqref{bell2}. Second, thanks to the rotational symmetry of the situation (where rotations are applied to all three of $\\a$, $\\vec n$ and $\\vec m$), we may assume that the vector $\\a$ points in the $z$-direction. Finally, by scaling, we may assume that $\\a=(0,0,1)$, so that the right side of Equation~\\eqref{bell2} is $n_z$.\n\nSo our task is to prove that the average over $\\vec m$ of the values assigned to $\\sigma_z$ is $n_z$. By definition, the value\nassigned to $\\sigma_z$ is $\\pm1$, where the sign is chosen to agree\nwith that of $m_z+n_z$. In view of how $\\vec m$ is chosen, this\n$m_z+n_z$ is the $z$-coordinate of a random point on the unit sphere\ncentered at $\\vec n$. So the question reduces to determining what\nfraction of this sphere lies above the $x$-$y$ plane.\n\nThis plane cuts $S^2$ horizontally at a level $n_z$ below the sphere's\ncenter. By a theorem of Archimedes, when a sphere is cut\nby a plane, its area is divided in the same ratio as the length of the\ndiameter perpendicular to the plane. So the plane divides the sphere's area in\nthe ratio of $1+n_z$ (above the plane) to $1-n_z$ (below the plane).\nThat is, the value assigned to $\\sigma_z$ is $+1$ with probability\n$(1+n_z)\/2$ and $-1$ with probability $(1-n_z)\/2$. Thus, the average\nvalue of $\\sigma_z$ is $n_z$, as required.\n\\end{proof}\n\nFinally, we explain why Bell's theory doesn't contradict Theorem~\\ref{thm:exp}. To obtain an expectation representation, we must extend the map $V$\nconvex-linearly to all density matrices. But no such extension exists. Here is an example showing what goes wrong.\nConsider the four pure states corresponding to spin in the directions\nof the positive $x$, negative $x$, positive $z$ and negative $z$\naxes. The corresponding density operators are the projections\n\\[\n\\frac{I+\\sigma_x}2,\\quad\\frac{I-\\sigma_x}2,\\quad\n\\frac{I+\\sigma_z}2,\\quad \\frac{I-\\sigma_z}2,\n\\]\nrespectively. 
Averaging the first two with equal weights, we get\n$\\frac12 I$; averaging the last two gives the same result. So a\nconvex-linear extension $T$ would have to assign to the density\noperator $\\frac12I$ the average of the probability measures assigned\nto the pure states with spins in the $\\pm x$ directions and also the\naverage of the probability measures assigned to pure states with spins\nin the $\\pm z$ directions. But these two averages are visibly very\ndifferent. The first is concentrated on the union of two unit spheres\ntangent to the $y$-$z$-plane at the origin, while the second is\nconcentrated on the union of two unit spheres tangent to the\n$x$-$y$-plane at the origin.\n\n\nThus, Bell's example of a hidden-variable theory for 2-dimensional\n\\scr H does not fit the assumptions in any of the expectation no-go\ntheorems. It does not, therefore, clash with the fact that those\ntheorems, unlike the value no-go theorems, apply in the 2-dimensional\ncase.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{section:intro}\n\n\\noindent\nDiscretizing results in metric geometry is important for many applications,\nranging from discrete differential geometry to numerical methods. The discrete\nresults are stronger as they typically imply the continuous results in the limit.\nUnfortunately, more often than not, straightforward discretizations fall apart;\nnew tools and ideas are needed to even formulate these extensions; see e.g.~\\cite{BS,Lin}\nand~\\cite[$\\S$21--$\\S$24]{Pak}.\n\nIn this paper we introduce a new notion of \\emph{$G$-Kirszbraun graphs},\nwhere~$G$ is a vertex-transitive graph. The idea is to discretize the classical\n\\emph{Kirszbraun theorem} in metric geometry~\\cite{kirszbraun1934}\n(see also~\\cite[$\\S$1.2]{BL}). Our main goal\nis to explain the variational principle for the height functions of tilings introduced\nby the third author in~\\cite{Tas} and further developed in~\\cite{MR3530972,TassyMenz2016}\n(see Section~\\ref{section: motivation}); we also aim to lay a proper foundation\nfor future work.\n\nOur second goal is to clarify the connection to the \\emph{Helly theorem},\na foundational result in convex and discrete geometry~\\cite{helly1923}\n(see also~\\cite{MR0157289,Mat}). Graphs that satisfy the \\emph{Helly's property}\nhave been intensely studied in recent years~\\cite{MR2405677}, and we establish\na connection between the two areas. Roughly, we show that $\\mathbb{Z}^d$-Kirszbraun\ngraphs are somewhat rare, and are exactly the graphs that satisfy the\nHelly's property with certain parameters.\n\n\\smallskip\n\n\\subsection{Main results}\nLet $\\ell_2$ denote the usual Euclidean metric on $\\mathbb R^n$ for all $n$. Given a metric\nspace~$X$ and a subset $A$, we write $A\\subset X$ to mean that the subset~$A$ is\nendowed with the restricted metric from~$X$. The \\emph{Kirszbraun theorem} says\nthat for all \\hskip.03cm $A\\subset (\\mathbb R^n,\\ell_2)$, and all Lipschitz functions \\hskip.03cm\n$f: A \\longrightarrow (\\mathbb R^n,\\ell_2)$, there is an extension to a\nLipschitz function on $\\mathbb R^n$ with the same Lipschitz constant.\n\nRecall now the \\emph{Helly theorem}: Suppose a collection of convex sets\n$B_1, B_2, \\ldots, B_k$ in $\\mathbb R^n$ satisfies the property that every $(n+1)$-subcollection has a\nnonempty intersection. Then $\\cap_{i=1}^k B_i\\neq \\varnothing$. 
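\n(The number $n+1$ here cannot be lowered: already in the plane, the three sides of a triangle, viewed as closed segments, pairwise intersect but have no common point.)\n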
Valentine in~\\cite{MR0011702}\nfamously showed how the Helly theorem can be used to obtain the Kirszbraun theorem.\nThe connection between these two theorems is the key motivation behind this paper.\n\n\\smallskip\n\nGiven metric spaces $X$ and $Y$, we say that $Y$ is \\emph{$X$-Kirszbraun}\nif for all $A\\subset X$, every $1$-Lipschitz map \\hskip.03cm $f: A \\longrightarrow Y$ \\hskip.03cm\nhas a $1$-Lipschitz extension from~$A$ to~$X$. In this notation, the Kirszbraun\ntheorem says that \\hskip.03cm $(\\mathbb R^n,\\ell_2)$ \\hskip.03cm is \\hskip.03cm $(\\mathbb R^n,\\ell_2)$-Kirszbraun.\n\nLet $m\\in \\mathbb N$ and $n\\in \\mathbb N\\cup\\{\\infty\\}$, $n>m$. A metric space $X$\nis said to have the \\emph{$(n, m)$-Helly's property} if for every collection of closed balls\n$B_1, B_2, \\ldots, B_n$ of radius~$\\ge~1$ in which every $m$-subcollection has a\nnonempty intersection, we have $\\cap_{i=1}^n B_i\\neq \\varnothing$.\nSince balls in $\\mathbb R^n$ with the Euclidean metric are convex,\nthe Helly theorem can be restated to say that $(\\mathbb R^n,\\ell_2)$ is $(\\infty, n+1)$-Helly.\nNote that the metric is important here:\n$(\\mathbb R^n,\\ell_\\infty)$ is $(\\infty, 2)$-Helly, see e.g.~\\cite{MR0157289,Pak}.\n\nGiven a graph $H$, we endow the set of vertices (also denoted by $H$) with the path metric.\nBy~$\\mathbb{Z}^d$ we mean the Cayley graph of the group $\\mathbb{Z}^d$ with respect to standard generators.\nAll graphs in this paper are nonempty, connected and simple (no loops or multiple edges).\nThe following is the main result of this paper.\n\n\\begin{thm}[Main theorem]\\label{thm:Zd Kirszbraun}\nGraph $H$ is $\\mathbb{Z}^d$-Kirszbraun if and only if $H$ is $(2d, 2)$-Helly.\\label{theorem: main result}\n\\end{thm}\n\nLet $K_n$ denote the \\emph{complete graph} on $n$ vertices. Clearly, $K_n$ is $G$-Kirszbraun\nfor all graphs $G$, since all maps $f: G\\longrightarrow K_n$ are $1$-Lipschitz. On\nthe other hand, $\\mathbb{Z}^2$ is not $\\mathbb{Z}^2$-Kirszbraun, see Figure~\\ref{f:Z2}. This example\ncan be modified to satisfy a certain extendability property, see $\\S$\\ref{ss:ext-bip}.\n\n\\begin{figure}[hbt]\n \\begin{center}\n \\includegraphics[height=4.2cm]{Z2-ex.eps}\n \\caption{Here $A=\\{a,b,c,d\\}\\subset \\mathbb{Z}^2$. Define $f: a\\to a'$,\n $b\\to b'$, $c\\to c'$, $d\\to d'$. Then $f: A\\to \\mathbb{Z}^2$ is $1$-Lipschitz\n but not extendable to $\\{O,a,b,c,d\\}$.}\n \\label{f:Z2}\n \\end{center}\n\\end{figure}\n\n\n\\smallskip\n\n\\subsection{Structure of the paper}\nWe begin with a short non-technical Section~\\ref{section: motivation}\ndescribing some background results and ideas. In essence, it is a\nremark which is too long to be in the introduction. It reflects the\nauthors' different points of view on the subject, which includes\nergodic theory, geometric combinatorics and discrete probability.\nWhile in principle this section can be skipped, we do\nrecommend reading it as it motivates other parts of the paper.\n\nWe then proceed to prove Theorem~\\ref{thm:Zd Kirszbraun} in\nSection~\\ref{s:proof-main}. In Section~\\ref{section: extensions of main},\nwe present several extensions and applications of the main theorem. 
This\nsection is a mixed bag: we include a continuous analogue of the main\ntheorem (Theorem~\\ref{thm: rd version}), the extension to larger\nintegral Lipschitz constants (Theorem~\\ref{thm: about t lipschitz maps}),\nand the bipartite extension useful for domino tilings\n(Theorem~\\ref{thm: bipartite version of main theorem}).\nIn a short Section~\\ref{section:recognisability and an application},\nwe discuss computational aspects of the $\\mathbb{Z}^d$-Kirszbraun property,\nmotivated entirely by applications to tilings. We conclude with final\nremarks and open problems in Section~\\ref{s:finrem}.\n\n\\bigskip\n\n\\section{Motivation and background}\\label{section: motivation}\nA \\emph{graph homomorphism} from a graph $G$ to a graph $H$ is an adjacency\npreserving map between the respective vertices. Let $\\textup{\\textrm{Hom}}(G, H)$ denote the set of all\ngraph homomorphisms from $G$ to~$H$. We refer to~\\cite{HN} for background on\ngraph homomorphisms and connections to coloring and complexity problems.\n\nOur motivation comes from two very distinct sources:\n\\begin{enumerate}\n\\item\nFinding `fast' algorithms to determine whether a given graph homomorphism from the boundary of a box in $\\mathbb{Z}^d$ to $H$ extends to the entire box.\\label{motivation: 1}\n\\item\nFinding a natural parametrization of the so-called ergodic Gibbs measures on the space of graph homomorphisms $\\textup{\\textrm{Hom}}(\\mathbb{Z}^d, H)$ (see \\cite{MR2251117,TassyMenz2016}).\n\\label{motivation: 2}\n\\end{enumerate}\n\nFor~$(1)$, roughly, suppose we are given a certain simple set of tiles~$\\textrm{T}$,\nsuch as dominoes or more generally bars \\hskip.03cm $\\{k\\times 1, 1 \\times \\ell\\}$. It turns out that\n$\\textrm{T}$-tileability of a simply-connected region $\\Gamma$ corresponds to the existence of a graph\nhomomorphism with given boundary conditions on $\\partial\\Gamma$. We refer to~\\cite{Pak-horizons,Thu}\nfor the background, and to~\\cite{MR3530972,Tas} for further details. Our\nTheorem~\\ref{prop: hole filling} is motivated by these problems.\n\nFor both these problems, the $\\mathbb{Z}^d$-Kirszbraun property of the graph $H$ (or a related graph)\nis critical and motivates this line of research; the space of\n$1$-Lipschitz maps is the same as the space of graph homomorphisms if and only if $H$ is\n\\emph{reflexive}, that is, every vertex has a self-loop. The study of\nKirszbraun-type theorems among metric spaces and its relationship to Helly-like properties\nis an old one and goes back to the original paper by Kirszbraun~\\cite{kirszbraun1934}.\nA short and readable proof is given in \\cite[p.~201]{federer1969}.\nThis was later rediscovered in \\cite{MR0011702} where it was generalized to the cases\nwhere the domain and the range are spheres in Euclidean space or Hilbert spaces.\nThe effort of understanding which metric spaces\nsatisfy Kirszbraun properties culminated in the theorem by\nLang and Schroeder~\\cite{langschroder1997} that identified\nthe right curvature assumptions on the\nunderlying spaces for which the theorem holds.\n\nIn metric graph theory, research has focused largely on a certain universality property.\nFormally, a graph is called \\emph{Helly} if it is $(\\infty, 2)$-Helly. An easy deduction,\nfor instance following the discussion in \\cite[Page 153]{MR0157289}, shows that $H$ is\nHelly if and only if for all graphs $G$, $H$ is $G$-Kirszbraun. 
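\nFor example, every tree is a Helly graph: balls in a tree are subtrees, and pairwise intersecting subtrees of a tree always have a common vertex; we use this fact in Section~\\ref{section: extensions of main}.\n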
Some nice\ncharacterizations of Helly graphs can be found in the survey~\\cite[$\\S$3]{MR2405677}.\nHowever, we are not aware of any other study of $G$-Kirszbraun graphs for fixed~$G$.\n\n\\bigskip\n\n\\section{Proof of the main theorem}\\label{s:proof-main}\n\n\\subsection{Geodesic extensions}\nLet $d_H$ denote the path metric on the graph $H$.\nA \\emph{walk}~$\\gamma$ of \\emph{length}~$k$ in the graph $H$\nis a sequence of $k+1$ vertices $(v_0, v_1, \\ldots,v_k)$ such that\n$d_H(v_{i}, v_{i+1})\\leq 1$ for all $0\\leq i \\leq k-1$.\nWe say that $\\gamma$ starts at $v_0$ and ends at $v_k$.\nA \\emph{geodesic} from vertex $v$ to $w$ in a graph~$G$\nis a walk~$\\gamma$ from $v$ to $w$ of the shortest length.\n\nConsider a graph $G$, a subset $A\\subset G$ and $b\\in G\\setminus A$. Define\nthe \\emph{geodesic extension of $A$ with respect to $b$} \\hskip.03cm as the following set:\n$$\\aligned\n\\textup{\\textrm{Ext}}(A,b)\\, := & \\, \\, \\{a\\in A~:~\\text{there does not exist \\hskip.03cm $a'\\in A\\setminus \\{a\\}$ \\hskip.03cm s.t.\\ there is a }\\\\\n& \\hskip1.86cm \\text{geodesic~$\\gamma$ from $a$ to $b$ which passes through $a'$}\\}.\n\\endaligned\n$$\nFor example, let $A\\subset \\mathbb{Z}^2\\setminus \\{(0,0)\\}$. If $(i,j), (k,l)\\in \\textup{\\textrm{Ext}}(A,\\vec 0)$\nare elements of the same quadrant, then $|i|> |k|$ if and only if $|j|<|l|$.\n\n\\begin{remark}\\label{remark: zd culling coordinates}{\\rm\nIf $A\\subset \\mathbb{Z}^d$ is contained in the coordinate axes, then $|\\textup{\\textrm{Ext}}(A,\\vec 0)|\\leq 2d$. }\n\\end{remark}\n\nThe notion of geodesic extension allows us to prove that certain $1$-Lipschitz\nmaps can be extended:\n\n\\begin{prop}\\label{prop:culling of extras}\nLet $A\\subset G$, map $f: A\\longrightarrow H$ be $1$-Lipschitz, and let $b\\in G \\setminus A$.\nThe map $f$ has a $1$-Lipschitz extension to \\hskip.03cm $A\\cup \\{b\\}$ \\hskip.03cm if and only if \\hskip.03cm\n$f|_{\\textup{\\textrm{Ext}}(A,b)}$ \\hskip.03cm has a $1$-Lipschitz extension to \\hskip.03cm $\\textup{\\textrm{Ext}}(A,b)\\cup\\{b\\}$.\n\\end{prop}\n\n\\begin{proof}\nThe forward direction of the proof is immediate because $\\textup{\\textrm{Ext}}(A,b)\\subset A$. For the backwards direction let $\\widetilde f: \\textup{\\textrm{Ext}}(A,b)\\cup\\{b\\}\\subset G\\longrightarrow\nH$ be a $1$-Lipschitz extension of $f|_{\\textup{\\textrm{Ext}}(A,b)}$ and consider the map\n$\\hat f: A\\cup \\{b\\}\\subset G\\longrightarrow H$\ngiven by\n$$\\hat f(a):=\\begin{cases}\nf(a)&\\text{ if }a\\in A\\\\\n\\widetilde f(b)&\\text{ if }a=b.\n\\end{cases}$$\nTo prove that $\\hat f$ is $1$-Lipschitz we need to verify that for all $a\\in A$, $d_ H(\\hat f(a), \\hat f(b))\\leq d_ G(a,b)$.\nBy the hypothesis, this holds for $a\\in \\textup{\\textrm{Ext}}(A,b)$. Now suppose $a\\in A\\setminus \\textup{\\textrm{Ext}}(A,b)$. Then there exists $a'\\in \\textup{\\textrm{Ext}}(A,b)$ such that there exists a\ngeodesic from $a$ to $b$ passing through $a'$. This implies that $d_{ G}(a,b)=d_{ G}(a,a')+d_ G(a',b)$. 
But\n\\begin{eqnarray*}\nd_ G(a,a')\\geq d_{ H}(\\hat f(a), \\hat f (a'))=d_{ H}(f(a), f (a'))\\text{ because }f\\text{ is $1$-Lipschitz}\\\\\nd_ G(a',b)\\geq d_{ H}(\\hat f(a'), \\hat f (b))=d_{ H}(\\widetilde f(a'),\\widetilde f (b))\\text{ because }\\widetilde f\\text{ is $1$-Lipschitz}.\n\\end{eqnarray*}\nBy the triangle inequality, the proof is complete.\n\\end{proof}\n\n\\smallskip\n\n\\subsection{Helly's property}\nGiven a graph $H$, a vertex $v\\in H$ and $n\\in \\mathbb N$ denote by $B^H_n(v)$,\nthe ball of radius $n$ in $H$ centered at $v$. We will now interpret\nthe $(n,2)$-Helly's property in a different light.\n\n\\begin{prop}\\label{prop: small culled vertices}\nLet $H$ be a graph satisfying the $(n,2)$-Helly's property.\nFor all 1-Lipschitz maps $f: A\\subset G\\longrightarrow\nH$ and $b\\in G\\setminus A$ such that $|\\textup{\\textrm{Ext}}(A, b)|\\leq n$,\nthere exists a $1$-Lipschitz extension of $f$ to $A\\cup \\{b\\}$.\n\\end{prop}\n\n\\begin{proof}\tConsider the extension $\\widetilde f$ of $f$ to the set $A\\cup \\{b\\}$,\n where $\\widetilde f(b)$ is any vertex in\n$$\n\\bigcap _{b'\\in \\textup{\\textrm{Ext}}(A, b)}B^H_{d_G(b,b')}(f(b'));\n$$\nthe intersection is nonempty because $|\\textup{\\textrm{Ext}}(A, b)|\\leq n$ and for all $a, a'\\in \\textup{\\textrm{Ext}}(A, b)$, we have:\n$$d_H\\bigl(f(a), f(a')\\bigr) \\, \\leq \\, d_G(a, a')\\, \\leq \\, d_G(a, b)\\. + \\. d_{G}(b, a')\n$$\nwhich implies\n$$B^H_{d_G(a,b)}(f(a))\\cap B^H_{d_G(b,a')}\\bigl(f(a')\\bigr)\\, \\neq \\, \\varnothing.\n$$\nThe function \\hskip.03cm $\\widetilde f|_{\\textup{\\textrm{Ext}}(A,b)\\cup\\{b\\}}$ is $1$-Lipschitz, so\nProposition~\\ref{prop:culling of extras} completes the proof.\n\\end{proof}\n\n\n\\subsection{Examples}\nLet $C_n$ and $P_n$ denote the \\emph{cycle graph} and the \\emph{path graph} with $n$ vertices, respectively.\n\n\\begin{corollary}\nAll connected graphs are \\hskip.03cm $P_n$--, \\hskip.03cm $C_n$-- and \\hskip.03cm $\\mathbb{Z}$--Kirszbraun.\n\\end{corollary}\n\nIn the case when $G=P_n, C_n$ or $\\mathbb{Z}$ we have for all $A\\subset G$ and\n$b\\in G\\setminus A$, $|\\textup{\\textrm{Ext}}(A,b)|\\leq 2$; the corollary follows from\nProposition~\\ref{prop: small culled vertices} and the fact that\nall graphs are $(2,2)$-Helly.\n\nLet $\\mathbf r = (r_1, r_2, \\ldots r_n)\\in \\mathbb N^n$. Denote by $T_{\\mathbf r}$ the \\emph{star-shaped tree}\nwith a central vertex $b_0$ and $n$ disjoint walks of lengths\n$r_1,\\ldots,r_n$ emanating from it and ending in vertices $b_1, b_2, \\ldots, b_n$.\n\\begin{corollary}\\label{cor: helly and test tree}\nGraph $H$ has the $(n,2)$-Helly's property if and only if $H$ is $T_{\\mathbf r}$-Kirszbraun, for all $\\mathbf r\\in \\mathbb N^n$.\n\\end{corollary}\n\\begin{proof}\nFor all $\\mathbf r\\in \\mathbb N^n$, $A\\subset T_{\\mathbf r}$ and $b\\in T_{\\mathbf r}\\setminus \\{A\\}$, we have \\hskip.03cm $|\\textup{\\textrm{Ext}}(A,b)|\\leq n$. Thus by Proposition \\ref{prop: small culled vertices} if $H$ has the $(n,2)$-Helly's property then $H$ is $T_{\\mathbf r}$-Kirszbraun. For the other direction, let $H$ be $T_{\\mathbf r}$-Kirszbraun for all $\\mathbf r\\in \\mathbb N^n$. Suppose that $$B^H_{r_1}(v_1), B^H_{r_2}(v_2), \\ldots, B^H_{r_n}(v_n)$$ are balls in $H$ such that $B^H_{r_i}(v_i)\\cap B^H_{r_j}(v_j)\\neq \\emptyset$ for all $1\\leq i, j \\leq n$. 
Then $f: \\{b_1, b_2, \\ldots, b_n\\}\\subset T_{\\mathbf r}\\to H$ given by $f(b_i):= v_i$ is $1$-Lipschitz with a $1$-Lipschitz extension $\\widetilde f: T_{\\mathbf r}\\to H$. It follows that $\\widetilde f (b_0)\\in \\bigcap_{i=1}^n B^H_{r_i}(v_i)$, proving that $H$ has the $(n,2)$-Helly's property.\n\\end{proof}\n\\smallskip\n\n\\subsection{Proof of Theorem~\\ref{thm:Zd Kirszbraun}}\nWe will first prove the ``only if'' direction. Let $H$ be a graph which is\n$\\mathbb{Z}^d$-Kirszbraun. For all $\\mathbf r\\in \\mathbb N^{2d}$ there is an isometry from $T_{\\mathbf r}$ to\n$\\mathbb{Z}^d$ mapping the walks emanating from the central vertex to the coordinate axes.\nHence $H$ is $T_{\\mathbf r}$-Kirszbraun for all $\\mathbf r\\in \\mathbb N^{2d}$. By Corollary\n~\\ref{cor: helly and test tree}, we have proved the $(2d,2)$-Helly's property for~$H$.\n\nIn the ``if'' direction, suppose $H$ has the $(2d,2)$-Helly's property.\nWe need to prove that for all $A\\subset \\mathbb{Z}^d$, every\n$1$-Lipschitz map $f:A\\to H$ has a $1$-Lipschitz extension.\nIt is sufficient to prove this for finite subsets~$A$.\nWe proceed by induction on $|A|$. Namely, we prove the following property $\\textup{\\textrm{St}}(n)$:\n\n\\smallskip\n\n\\noindent\n\\qquad Let $f:A\\subset \\mathbb{Z}^d\\longrightarrow H$ be $1$-Lipschitz with $|A|=n$. Let $b\\in \\mathbb{Z}^d\\setminus A$. Then the function $f$ has\n\n\\noindent\n\\qquad a $1$-Lipschitz extension to $A\\cup \\{b\\}$.\n\n\\smallskip\n\n\\noindent\nWe know $\\textup{\\textrm{St}}(n)$ for $n\\leq 2d$ by the $(2d,2)$-Helly's property.\nLet us assume $\\textup{\\textrm{St}}(n)$ for some $n\\geq 2d$; we want to prove $\\textup{\\textrm{St}}(n+1)$.\nLet \\hskip.03cm $f:A\\longrightarrow H$, $A\\subset \\mathbb{Z}^d$, be $1$-Lipschitz with $|A|=n+1$\nand \\hskip.03cm $b\\in \\mathbb{Z}^d\\setminus A$. Without loss of generality assume that $b=\\vec 0$.\nAlso assume that \\hskip.03cm $\\textup{\\textrm{Ext}}(A, \\vec 0)=A$; otherwise we can use the induction\nhypothesis and Proposition~\\ref{prop:culling of extras} to obtain\nthe required extension to \\hskip.03cm $A\\cup \\{\\vec 0\\}$.\n\nWe will prove that there exists a set \\hskip.03cm $\\widetilde A\\subset \\mathbb{Z}^d$ and a\n$1$-Lipschitz function \\hskip.03cm $\\widetilde f: \\widetilde A\\longrightarrow H$, such that\n\\begin{enumerate}\n\\item\nIf $\\widetilde f$ has an extension to $\\widetilde A\\cup\\{\\vec 0\\}$ then $f$ has an extension\nto \\hskip.03cm $A\\cup \\{\\vec 0\\}$.\n\\item\nEither the set $\\widetilde A$ is contained in the coordinate axes of $\\mathbb{Z}^d$ or \\hskip.03cm $|\\widetilde A|\\leq 2d$.\n\\end{enumerate}\nBy Remark~\\ref{remark: zd culling coordinates}, if $\\widetilde A$ is contained in the coordinate axes then $|\\textup{\\textrm{Ext}}(\\widetilde A, \\vec 0)|\\leq 2d$. In either case, since $H$ has the $(2d,2)$-Helly's property, by Proposition \\ref{prop: small culled vertices} it follows that $\\widetilde f$ has an extension to $\\widetilde A\\cup\\{\\vec 0\\}$, which completes the proof.\n\n\nSince $|A|= n+1>2d$, there exist distinct ${\\vec{i}},{\\vec{j}}\\in A$ and a coordinate $1\\leq k \\leq d$ such that $i_k, j_k$ are non-zero and have the same sign. Without loss of generality suppose that $|i_k|\\leq\n|j_k|$. Then there is a geodesic from ${\\vec{j}}$ to ${\\vec{i}} - i_k \\vec e_k$ which passes through ${\\vec{i}}$. 
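\n(Indeed, since $i_k$ and $j_k$ have the same sign and $|i_k|\\leq |j_k|$, in the $\\ell_1$ metric we have\n$d_{\\mathbb{Z}^d}({\\vec{j}},{\\vec{i}})+d_{\\mathbb{Z}^d}({\\vec{i}},{\\vec{i}}-i_k\\vec e_k) = \\sum_{l\\neq k}|j_l-i_l|+|j_k-i_k|+|i_k| = \\sum_{l\\neq k}|j_l-i_l|+|j_k| = d_{\\mathbb{Z}^d}({\\vec{j}},{\\vec{i}}-i_k\\vec e_k)$.)\n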
Since $A=\\textup{\\textrm{Ext}}(A, \\vec 0)$ we have that $${\\vec{i}} - i_k \\vec e_k\\notin \\{\\vec\n0\\}\\cup A.$$\nThus ${\\vec{j}} \\notin \\textup{\\textrm{Ext}}(A, {\\vec{i}}-i_k\\vec e_k)$ and hence $|\\textup{\\textrm{Ext}}(A, {\\vec{i}}-i_k\\vec e_k)|\\leq n$. By $\\textup{\\textrm{St}}(n)$ there exists a $1$-Lipschitz extension of $f|_{\\textup{\\textrm{Ext}}(A, {\\vec{i}}-i_k\\vec\ne_k)}$ to $\\textup{\\textrm{Ext}}(A, {\\vec{i}}-i_k\\vec e_k)\\cup\\{{\\vec{i}}-i_k\\vec e_k\\}$. By Proposition~\\ref{prop:culling of extras}\nthere is a $1$-Lipschitz extension of $f$ to $ f': A\\cup\n\\{{\\vec{i}}-i_k\\vec e_k\\}\\longrightarrow H$. But there is a geodesic from ${\\vec{i}}$ to $\\vec 0$ which passes through ${\\vec{i}} -i_k \\vec e_k$. Thus\n$$\\textup{\\textrm{Ext}}\\bigl(A\\cup\\{{\\vec{i}}-i_k \\vec e_k\\}, \\vec 0\\bigr)\\subset \\bigl(A\\setminus\\{{\\vec{i}}\\}\\bigr)\\cup\\{{\\vec{i}} -i_k \\vec e_k\\}.$$\nSet \\hskip.03cm $A':=\\bigl(A\\setminus\\{{\\vec{i}}\\}\\bigr)\\cup\\{{\\vec{i}} -i_k \\vec e_k\\}$. By Proposition~\\ref{prop:culling of extras},\nmap~$f'$ has a $1$-Lipschitz extension to $A\\cup \\{{\\vec{i}}-i_k\\vec e_k\\}\n\\cup \\{\\vec 0\\}$ if and only if $f'|_{A'}$ has a $1$-Lipschitz extension to $A'\\cup \\{\\vec 0\\}$.\n\nThus we have obtained a set $A'$ and a $1$-Lipschitz map $f':A'\\subset \\mathbb{Z}^d\\longrightarrow H$ for which\n\\begin{enumerate}\n\\item\nIf $f'$ has an extension to $A'\\cup\\{\\vec 0\\}$ then $f$ has an extension to $A\\cup \\{\\vec 0\\}$.\n\\item\nThe sum of the number of non-zero coordinates of elements of $A'$ is less than the sum of the number of non-zero coordinates of elements of $A$.\n\\end{enumerate}\nBy repeating this procedure (formally this is another induction) we get the required set $\\widetilde A\\subset \\mathbb{Z}^d$ and $1$-Lipschitz map $\\widetilde f: \\widetilde A\\longrightarrow H$.\nThis completes the proof.\n\n\\bigskip\n\n\\section{Applications of the main theorem}\\label{section: extensions of main}\n\n\n\\subsection{Back to continuous setting} \\label{ss:ext-cont}\nThe techniques involved in the proof of Theorem~\\ref{thm:Zd Kirszbraun}\nextend to the continuous case with only minor modifications. The following\nresult might be of interest in metric geometry.\n\n\\smallskip\n\nA metric space $(X,\\textbf{m})$ is \\emph{geodesically complete} if for all\n$x,y\\in X$ there exists a continuous function $f: [0,1]\\to X$, such that\n$$\n\\textbf{m}(x, f(t))\\. = \\. t \\hskip.03cm \\textbf{m}(x,y) \\quad \\text{ and }\\quad \\textbf{m}(f(t), y) \\. =\\. (1-t)\\hskip.03cm \\textbf{m}(x,y).\n$$\n\n\\begin{thm}\\label{thm: rd version}\nLet $Y$ be a metric space such that every closed ball in $Y$ is compact.\nThen $Y$ is $(\\mathbb R^d, \\ell_1)$-Kirszbraun if and only if\n$Y$ is geodesically complete and $(2d,2)$-Helly.\n\\end{thm}\n\nFirst, we need the following result.\n\n\\begin{lemma}\\label{lemma: observation}\nLet $(X, \\textbf{m})$ be separable and the closed balls in $(Y, \\textbf{m}')$ be compact.\nThen $(Y, \\textbf{m}')$ is $(X, \\textbf{m})$-Kirszbraun if and only if all finite sets\n$A\\subset X$ and $1$-Lipschitz maps $f:A\\to Y$, $y\\in Y\\setminus A$\nhave a $1$-Lipschitz extension to $A\\cup\\{y\\}$.\n\\end{lemma}\n\n\\begin{proof}\nThe ``only if'' part is obvious. For the ``if'' part, let\n$A\\subset X$ and $f:A\\to Y$ be $1$-Lipschitz. Since $X$ is separable,\n$A$ is also separable. Let\n$$\n\\{x_i~:~i\\in \\mathbb N\\} \\. \\subset \\. 
X \\quad \\text{ and } \\quad \\{a_i~:~i\\in\\mathbb N\\}\\. \\subset \\. A\n$$\n be countable dense sets. By the hypothesis, we have\n$$\n\\bigcap _{1 \\leq i\\leq n}B_{\\textbf{m}(x_1, a_i)}(f(a_i))\\, \\neq \\. \\varnothing.\n$$\nSince closed balls in $Y$ are compact it follows that\n$$\n\\bigcap_{i\\in \\mathbb N}B_{\\textbf{m}(x_1, a_i)}(f(a_i))\\, \\neq \\. \\varnothing.\n$$\nThus $f$ has a $1$-Lipschitz extension to \\hskip.03cm $A\\cup\\{x_1\\}$.\nBy induction we get that $f$ has a $1$-Lipschitz extension \\hskip.03cm\n$\\widetilde f: A\\cup\\{x_i~:~i\\in \\mathbb N\\}\\to Y$. Let $g: X\\to Y$ be the map given by\n$$g(x)\\. := \\. \\lim_{j\\to \\infty} \\. \\widetilde f(x_{i_j}) \\quad \\text{ for all } \\ \\, x\\in X,\n$$\nwhere ${x_{i_j}}$ is some sequence such that $\\lim_{j\\to \\infty} x_{i_j}=x$.\nThe limit above exists since closed balls in $Y$ are compact, and hence $Y$\nis a complete metric space. By the continuity of $\\widetilde f$ it follows that\n$g|_A=f$ and by the continuity of the distance function it follows that\n$g$ is $1$-Lipschitz.\n\\end{proof}\n\nFor the proof of Theorem~\\ref{thm: rd version}, note that the main property\nof $\\mathbb{Z}^d$ exploited in proof of Theorem~\\ref{theorem: main result}\nis that the graph metric is same as the $\\ell_1$ metric. From the lemma\nabove, the proof proceeds analogously. We omit the details.\n\n\\smallskip\n\n\\subsection{Lipschitz constants} \\label{ss:ext-Lip}\nThe following extension deals with other Lipschitz constants. In the continuous case this is trivial; however it is more delicate in the discrete setting. Since we are interested in Lipschitz maps between\ngraphs we restrict our attention to integral Lipschitz constants.\n\n\\begin{thm}\\label{thm: about t lipschitz maps}\nLet $t\\in \\mathbb N$ and $H$ be a connected graph. Then every $t$-Lipschitz map\n$f: A\\longrightarrow H$, $A\\subset \\mathbb{Z}^d$, has a $t$-Lipschitz extension to\n$\\mathbb{Z}^d$ if and only if\n$$\n\\text{for all balls }B_1, B_2, \\ldots B_{2d} \\text{ of radii multiples\nof }t\\text{ mutually intersect }\\Longrightarrow\\cap B_i\\neq \\varnothing.\n$$\n\\end{thm}\n\nThe proof of Theorem~\\ref{thm: about t lipschitz maps} follow verbatim\nthe proof of Theorem~\\ref{thm:Zd Kirszbraun}; we omit the details.\n\n\\smallskip\n\nLet $H$ be a $\\mathbb{Z}^d$-Kirszbraun graph. The theorem implies that\nall $t$-Lipschitz maps $f: A\\longrightarrow H$, $A\\subset \\mathbb{Z}^d$,\nhave a $t$-Lipschitz extension. On the other hand, it is easy\nto construct graphs $G$ and $H$ for which $H$ is $G$-Kirszbraun\nbut there exists a $2$-Lipschitz map \\hskip.03cm $f: A\\longrightarrow H$,\n$A\\subset G$, which does not have a $2$-Lipschitz extension.\nFirst, we need the following result.\n\n\\begin{prop}\\label{prop: small diameter Kirszbraun graph}\nLet $ G$ be a finite graph with diameter $n$ and $ H$ be a connected\ngraph such that $B^H_{n}(v)$ is $G$-Kirszbraun for all $v\\in H$.\nThen $H$ is a $G$-Kirszbraun graph.\n\\end{prop}\n\n\\begin{proof}\nLet $f: A\\subset G\\longrightarrow H$ be $1$-Lipschitz and pick $a\\in A$. Then \\hskip.03cm Image$(f)\\subset B^H_n(f(a))$. Since $B^H_{n}(f(a))$ is $G$-Kirszbraun the\nresult follows.\n\\end{proof}\n\nSince trees are Helly graphs, we have as an immediate application of the above that $C_{n}$ is $ G$-Kirszbraun if \\hskip.03cm diam$(G)\\leq n-1$.\nFor instance, let $\\mathbf r = (1,1,1,1,1,1)\\in \\mathbb N^6$ and consider the \\emph{star}~$T_{\\mathbf r}$. 
We obtain that $C_6$ is $T_\\mathbf r$-Kirszbraun.\n Now label the leaves of $T_\\mathbf r$ as $b_i$, $1\\leq i \\leq 6$, respectively. For $A=\\{b_1, \\ldots, b_6\\}\\subset T_\\mathbf r$,\n consider the map\n$$f: A \\to C_6 \\ \\ \\text{ given by } \\ \\. f(b_i)=i\\hskip.03cm.\n$$\nThe function $f$ is $2$-Lipschitz but it has no $2$-Lipschitz extension to~$T_\\mathbf r$.\n\n\\smallskip\n\n\\subsection{Hyperoctahedron graphs} \\label{ss:ext-ex}\nThe \\emph{hyperoctahedron graph} $O_d$ is the graph obtained by removing a perfect\nmatching from the complete graph~$K_{2d}$. Theorem~\\ref{theorem: main result}\ncombined with the following proposition implies that $O_d$ are $\\mathbb{Z}^{d-1}$-Kirszbraun\nbut not $\\mathbb{Z}^{d}$-Kirszbraun. When $d=2$ this is the example in the introduction\n(see Figure~\\ref{f:Z2}).\n\n\\begin{prop}\\label{p:cross-pol}\nGraph $O_d$ is $(2d-1,2)$-Helly but not $(2d,2)$-Helly.\n\\end{prop}\n\n\\begin{proof}\nLet \\hskip.03cm $B_1, B_2, \\ldots, B_{2d-1}$ \\hskip.03cm be balls of radius $\\ge 1$. Then, for all\n$1\\leq i\\leq 2d-1$, $B_i\\supset O_{d}\\setminus\\{j_i\\}$ for some $j_i\\in O_{d}$. Thus:\n$$\n\\bigcap \\hskip.03cm B_i \\, \\supseteq \\, O_d\\setminus\\{j_i~:~ 1\\leq i \\leq 2d-1\\} \\, \\neq \\, \\varnothing\\hskip.03cm.\n$$\nThis implies that $O_d$ is $(2d-1,2)$-Helly.\n\nIn the opposite direction, let \\hskip.03cm $B_1, B_1, \\ldots, B_{2d}$ \\hskip.03cm\nbe distinct balls of radius one in~$O_d$. It is easy to see that they intersect pairwise,\nbut \\hskip.03cm $\\cap B_{i}=\\varnothing$. Thus, graph $O_d$ is not $(2d,2)$-Helly, and\nTheorem~\\ref{thm:Zd Kirszbraun} proves the claim.\n\\end{proof}\n\nLet us mention that the hyperoctaheron graph \\hskip.03cm $O_d\\simeq K_{2,\\ldots,2}$ \\hskip.03cm\nis a well-known obstruction to the Helly's property, see e.g.~\\cite{MR2405677}.\n\n\\smallskip\n\n\\subsection{Bipartite version} \\label{ss:ext-bip}\nIn the study of Helly graphs it is well-known (see e.g.~\\cite[$\\S$3.2]{MR2405677}) that results which are true with regard to $1$-Lipschitz\nextensions usually carry forward to graph homomorphisms in the bipartite case after some small technical modifications. This is also true in our case.\n\nA bipartite graph $H$ is called \\emph{bipartite $(n,m)$-Helly} if for balls $B_1, B_2, B_3, \\ldots, B_n$ (if $n\\neq \\infty$ and any finite collection otherwise)\nand partite class $H_1$, we have that\nany subcollection of size $m$ among $B_1\\cap H_1, B_2\\cap H_1, \\ldots, B_n\\cap H_1$ has a nonempty intersection implies\n$$\\bigcap_{i=1}^n B_i\\cap H_1\\neq \\varnothing.$$\n\nLet $G, H$ be bipartite graphs with partite classes $G_1, G_2$ and $H_1,H_2$ respectively. The graph $H$ is called \\emph{bipartite $G$-Kirszbraun} if for all\n$1$-Lipschitz maps $f: A\\subset G\\longrightarrow H$ for which $f(A\\cap G_1)\\subset H_1$ and $f(A\\cap G_2)\\subset H_2$ there exists $\\widetilde f\\in \\textup{\\textrm{Hom}}(G, H)$ extending it.\n\n\\begin{thm}\\label{thm: bipartite version of main theorem}\nGraph $H$ is bipartite $\\mathbb{Z}^d$-Kirszbraun if and only if $H$ is bipartite $(2d,2)$-Helly.\n\\end{thm}\n\nAs noted in the introduction, Theorem~\\ref{theorem: main result} implies that graph $\\mathbb{Z}^2$ is not $(4,2)$-Helly. However it is\nbipartite $(\\infty,2)$-Helly (see below).\n\n\\smallskip\n\nGiven a graph $H$ we say that $v\\sim_H w$ to mean that\n$(v,w)$ form an edge in the graph. Let $H_1, H_2$ be graphs with vertex sets $V_1, V_2$ respectively. 
Define:\n\\begin{enumerate}\n\\item\n\\emph{Strong product} $H_1\\boxtimes H_2$ as the graph with the vertex set $V_1\\times V_2$, and edges given by\n$$\\aligned\n(v_1,v_2)\\, \\sim_{H_1\\boxtimes H_2}(w_1, w_2) \\quad & \\text{ if } \\ \\ v_1=w_1 \\ \\ \\text{and} \\ \\ v_2\\sim_{H_2}w_2\\hskip.03cm, \\\\\n& \\text{ or } \\ \\ v_1\\sim_{H_1}w_1 \\ \\ \\text{and} \\ \\ v_2=w_2\\hskip.03cm, \\\\\n& \\text{ or }\\ \\ v_1\\sim_{H_1}w_1 \\ \\ \\text{and} \\ \\ v_2\\sim_{H_2}w_2\\hskip.03cm.\n\\endaligned\n$$\n\\item\n\\emph{Tensor product} $H_1\\times H_2$ as the graph with the vertex set $V_1\\times V_2$, and edges given by\n$$\n(v_1,v_2)\\sim_{H_1\\times H_2}(w_1, w_2)\\quad \\text{ if } \\ \\ \\. v_1\\sim_{H_1}w_1\\ \\ \\text{and} \\ \\ v_2\\sim_{H_2}w_2\\hskip.03cm.\n$$\n\\end{enumerate}\n\n\\smallskip\n\n\\begin{prop}\\label{prop: products} If for a graph $G$, graphs $H_1$ and $H_2$ are $G$-Kirszbraun then $H_1\\boxtimes H_2$ is $G$-Kirszbraun. If for a bipartite\ngraph $G$, bipartite graphs\n$H_1'$ and $H_2'$ are bipartite $G$-Kirszbraun then the connected components of $H_1'\\times H_2'$ are bipartite $G$-Kirszbraun.\n\\end{prop}\n\n\n\\begin{proof} We will prove this in the non-bipartite case; the bipartite case follows similarly. Let $f:=(f_1, f_2): A\\subset G\\longrightarrow H_1\\boxtimes H_2$\nbe $1$-Lipschitz. It follows that the functions $f_1$ and $f_2$ are $1$-Lipschitz as well; hence they have $1$-Lipschitz extensions $\\widetilde f_1: G\\longrightarrow\nH_1$ and $\\widetilde f_2: G\\longrightarrow H_2$. Thus $(\\widetilde f_1, \\widetilde f_2): G\\longrightarrow H_1\\boxtimes H_2$ is $1$-Lipschitz and extends $f$.\n\\end{proof}\n\n\\begin{corollary}[{cf.~\\cite[$\\S$3.2]{MR2405677}}]\nGraph $\\mathbb{Z}^2$ is bipartite $(\\infty,2)$-Helly.\n\\end{corollary}\n\n\\begin{proof}\nAs we mentioned above, it is easy to see that all trees are Helly graphs.\nBy Proposition~\\ref{prop: products}, so are the connected components of\n$\\mathbb{Z}\\times \\mathbb{Z}$ which are graph isomorphic to $\\mathbb{Z}^2$. Now\nTheorem~\\ref{thm: bipartite version of main theorem} implies the result.\n\\end{proof}\n\n\n\\bigskip\n\n\n\\section{Complexity aspects}\\label{section:recognisability and an application}\n\n\n\\subsection{The recognition problem} Below we give a polynomial time algorithm\nto decide whether a given graph is $\\mathbb{Z}^d$-Kirszbraun. We assume\nthat the graph is presented by its adjacency matrix.\n\n\\begin{prop}\\label{prop:recognition}\nFor all fixed $n,m\\in \\mathbb N$, the recognition problem of\n$(n,m)$-Helly graphs and bipartite $(n,m)$-Helly graphs can be\ndecided in \\hskip.03cm \\textrm{poly}$(|H|)$ time.\n\\end{prop}\n\nFor $n=\\infty$ and $m=2$, the recognition problem was solved\nin~\\cite{MR1029165}; that result does not follow from\nthe proposition.\n\n\\begin{proof}\nLet us seek the algorithm in the case of $(n,m)$-Helly graphs; as always,\nthe bipartite case is similar. In the following, for a function $g:\\mathbb R\\to \\mathbb R$ by\n$t=O(g(|H|))$ we mean $t\\leq k g(|H|)$, where $k$ is independent\nof $|H|$ but might depend on $m,n$.\n\\begin{enumerate}\n\\item\nDetermine the distance between the vertices of the graph. 
This takes $O(|H|^3)$ time.\n\\item\nNow make a list of all the collections of balls; each collection being of cardinality~$n$.\nSince the diameter of the graph $H$ is bounded by $|H|$; listing\nthe centers and the radii of the balls takes time $O(|H|^{2n})$.\n\\item\nFind the collections for which all the subcollections of cardinality $m$ intersect.\nFor each collection, this step takes $O(|H|)$ time.\n\\item\nCheck if the intersection of the balls in the collections found in the previous\nstep is nonempty. This step again takes $O(|H|)$ time.\n\\end{enumerate}\nThus, the total time is $O(|H|^{2n+3})$, as desired.\n\\end{proof}\n\n\\smallskip\n\n\\subsection{The hole filling problem} The following application\nis motivated by the tileability problems, see Section~\\ref{section: motivation}.\n\nFix $d\\geq 2$. By a \\emph{box} $B_n$ in $\\mathbb{Z}^d$ we mean a subgraph $\\{0,1,\\ldots,n\\}^d$.\nBy the \\emph{boundary} $\\partial_n$ we mean the internal vertex boundary of $B_n$,\nthat is, vertices of $B_n$ where at least one of the coordinates is either $0$ or~$n$.\nThe \\emph{hole-filling problem} asks: \\hskip.03cm Given a graph $H$ and a graph homomorphism \\hskip.03cm\n$f\\in \\textup{\\textrm{Hom}}(\\partial_n, H)$, does it extend to a graph homomorphism \\hskip.03cm $\\widetilde f\\in \\textup{\\textrm{Hom}}(B_n,H)$?\n\n\\begin{thm}\\label{prop: hole filling}\nFix $d\\ge 1$. Let $H$ be a finite bipartite $(2d,2)$-Helly graph, $B_n\\subset \\mathbb{Z}^d$\na box and $f\\in \\textup{\\textrm{Hom}}(\\partial_n, H)$ be a graph homomorphism as above. Then the\nhole-filling problem for $f$ can be decided in \\hskip.03cm \\textrm{poly}$(n+|H|)$ time.\n\\end{thm}\n\nThe same result holds in the context of $1$-Lipschitz maps for $(2d,2)$-Helly graphs;\nthe algorithm is similar. For general~$H$, without the $(2d,2)$-Helly assumption,\nthe problem is a variation on existence of graph homomorphism, see~\\cite[Ch.~5]{HN}.\nThe latter is famously~\\textup{\\textsf{NP}}-complete in almost all nontrivial cases, which makes the\ntheorem above even more surprising.\n\n\\begin{proof} In the following, for a function $g:\\mathbb R^2\\to \\mathbb R$, by $t=O(g(|H|,n))$\nwe mean $t\\leq k g(|H|,n)$ where $k$ is independent of $|H|$ and $n$.\nLet $f\\in \\textup{\\textrm{Hom}}(\\partial_n, H)$ be given. Since $H$ is bipartite $(2d,2)$-Helly graph,\nby Theorem~\\ref{theorem: main result}, $f$ extends to $B_n$ if and only if\n$f$ is $1$-Lipschitz. Thus to decide the hole-filling problem we need to\ndetermine whether or not $f$ is $1$-Lipschitz. This can be decided in\npolynomial time:\n\\begin{enumerate}\n\\item\nDetermine the distances between all pairs of vertices in~$H$. This costs $O(|H|^3)$.\n\\item\nFor each pair of vertices in the graph $\\partial_n$, determine the distance\nbetween the pair and their image under $f$ and verify the Lipschitz condition.\nThis costs $O(n^{2d-2})$.\n\\end{enumerate}\nThe total cost is $O(n^{2d-2}+|H|^3)$, which completes the proof.\n\\end{proof}\n\nFor $d=2$ and $H=\\mathbb{Z}$, the above algorithm can be modified to give\na $O(n^2)$ time complexity for the hole-filling problem of an $[n\\times n]$\nbox. This algorithm can be improved to the nearly linear time\n$O(n \\log n)$, by using the tools in~\\cite{MR3530972}.\nWe omit the details.\n\n\\bigskip\n\n\\section{Final remarks and open problems} \\label{s:finrem}\n\n\\subsection{}\nIn the view of our motivation, we focus on the $\\mathbb{Z}^d$-Kirszbraun property\nthroughout the paper. 
It would be interesting to find characterizations\nfor other bipartite domain graphs such as the hexagonal and the\nsquare-octahedral lattice (cf.~\\cite{Ken,Thu}). Similarly, it would be\ninteresting to obtain a sharper time bound on the recognition\nproblem as in Proposition~\\ref{prop:recognition}, to obtain applications\nsimilar to~\\cite{MR3530972}.\n\nNote that a vast majority of tileability problems are computationally hard,\nwhich makes the search for tractable sets of tiles even more interesting\n(see~\\cite{Pak-horizons}). The results in this paper, especially the bipartite\nversions, give a guideline for such a search.\n\n\\subsection{}\nAs we mention in Section~\\ref{section: motivation}, there are\nintrinsic curvature properties of the underlying spaces,\nwhich allow for the Helly-type theorems~\\cite{langschroder1997}.\nIn fact, there are more general local hyperbolic properties\nwhich can also be used in this setting, see~\\cite{CE,CDV,lang1999}.\nSee also a curious ``local-to-global'' characterization\nof Helly graphs in~\\cite{CC+}, and a generalization of\nHelly's properties to hypergraphs~\\cite{BeD,DPS}.\n\n\\subsection{}\nIn literature, there are other ``discrete Kirszbraun theorems'' stated in different contexts.\nFor example, papers~\\cite{AT,Bre} give a PL-version of the result for finite\n$A,B\\subset \\mathbb R^d$ and 1-Lipschitz $f:A\\to B$. Such results are related to the classical\n\\emph{Margulis napkin problem} and other isometric embedding\/immersion problems,\nsee e.g.~\\cite[$\\S$38--$\\S$40]{Pak} and references therein.\n\nIn a different direction, two more related problems are worth mentioning. \nFirst, the \\emph{carpenter's rule problem} can be viewed as a problem of finding \na ``discrete $1$-Lipschitz homotopy'', see~\\cite{CD,MR2007962}. This is also \n(less directly) related to the well-known \\emph{Kneser--Poulsen conjecture}, \nsee e.g.~\\cite{Bez}.\n\n\\bigskip\n\n\\subsection*{Acknowledgements} We would like to thank the referee for several useful comments and suggestions. We are grateful to Arseniy Akopyan,\nAlexey Glazyrin, Georg Menz, Alejandro Morales and Adam Sheffer\nfor helpful discussions. Victor Chepoi kindly read the draft\nof the paper and gave us many useful remarks, suggestions and\npointers to the literature.\n\nWe thank the organizers of the thematic\nschool ``Transversal Aspects of Tiling'' at Ol\\'{e}ron, France in~2016,\nfor inviting us and giving us an opportunity to meet and interact\nfor the first time. The first author has been funded by ISF grant\nNos.~1599\/13, 1289\/17 and ERC grant No.~678520. The second author was\npartially supported by the NSF.\n\n\n\\vskip1.1cm\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nFinding optimal treatment strategies that can incorporate patient heterogeneity is a cornerstone of personalized medicine. When treatment options change over time, optimal sequential \ntreatment rules (STR) can be learned using longitudinal patient\ndata. With increasing availability of large-scale longitudinal data such as electronic health records (EHR) data in recent years, reinforcement learning (RL) has found much success in estimating such optimal STR \\citep{2019PM}. Existing RL methods include G-estimation \\citep{robins2004}, Q-learning \\citep{Watkins1989,murphy2005}, A-learning \\citep{MurphyAlearning} and directly maximizing the value function \\citep{Zhao2015}. 
Both G-estimation and $A$-learning attempt to model only the component of the outcome regression relevant to the treatment contrast, while $Q$-learning posits complete models for the outcome regression. Although G-estimation and $A$-learning models can be more efficient and robust to mis-specification, $Q$-learning is widely adopted due to its ease of implementation, flexibility and interpretability \\citep{Watkins1989,DTRbook,schulte2014}.\n\nLearning STR with EHR data, however, often faces an additional challenge of whether outcome information is readily available. Outcome information, such as development of a clinical event or whether a patient is considered as a responder, is often not well coded but rather embedded in clinical notes. Proxy variables such as diagnostic codes or mentions of relevant clinical terms in clinical notes via natural language processing (NLP), while predictive of the true outcome, are often not sufficiently accurate to be used directly in place of the outcome \\citep{hong2019semi,zhang2019high,cheng2020robust}. On the other hand, extracting precise outcome information often requires manual chart review, which is resource intensive, particularly when the outcome needs to be annotated over time. This indicates the need for a semi-supervised learning (SSL) approach that can efficiently leverage a small sized labeled data $\\Lsc$ with true outcome observed and a large sized unlabeled data $\\Usc$ for predictive modeling. It is worthwhile to note that the SSL setting differs from the standard missing data setting in that the probability of missing tends to 1 asymptotically, which violates the positivity assumption required by the classical missing data methods \\citep{chakrabortty}.\n\nWhile SSL methods have been well developed for prediction, classification and regression tasks \\citep[e.g.][]{Chapelle2006,zhu05,BlitzerZ08,Wang2011,qiao2018,chakrabortty}, there is a paucity of literature on SSL methods for estimating optimal treatment rules. Recently, \\cite{cheng2020robust} and \\cite{kallus2020role} proposed SSL methods for estimating an average causal treatment effect. \\cite{Finn2016} proposed a semi-supervised RL method which achieves impressive empirical results and outperforms simple approaches such as direct imputation of the reward. However, there are no theoretical guarantees and the approach lacks causal validity and interpretability within a domain context. Additionally, this method does not leverage available surrogates. In this paper, we fill this gap by proposing a theoretically justified SSL approach to Q-learning using a large unlabeled data $\\Usc$ which contains sequential observations on features $\\bO$, treatment assignment $A$, and surrogates $\\bW$ that are imperfect proxies of $Y$, as well as a small set of labeled data $\\Lsc$ which contains true outcome $Y$ at multiple stages along with $\\bO$, $A$ and $\\bW$. We will also develop robust and efficient SSL approach to estimating the value function of the derived optimal STR, defined as the expected counterfactual outcome under the derived STR.\n\nTo describe the main contributions of our proposed SSL approach to RL, we first note two important distinctions between the proposed framework and classical SSL methods. First, existing SSL literature often assumes that $\\mathcal{U}$ is large enough that the feature distribution is known \\citep{Wasserman2007}. However, under the RL setting, the outcome of the stage $t-1$, denoted by $Y_{t-1}$, becomes a feature of stage $t$ for predicting $Y_t$. 
As such, the feature distribution for predicting $Y_t$ can not be viewed as known in the $Q$-learning procedure. Our methods for estimating an optimal STR and its associated value function, carefully adapt to this sequentially missing data structure. Second, we modify the SSL framework to handle the use of surrogate variables $\\bW$ which are predictive of the outcome through the joint law $\\Pbb_{Y,\\bO,A,\\bW}$, but are not part of the conditional distribution of interest $\\Pbb_{Y|\\bO,A}$. To address these issues, we propose a two-step fitting procedure for finding an optimal STR and for estimating its value function in the SSL setting. Our method consists of using the outcome-surrogates ($\\bW$) and features ($\\bO,A$) for non-parametric estimation of the missing outcomes ($Y$). We subsequently use these imputations to estimate $Q$ functions, learn the optimal treatment rule and estimate its associated value function. We provide theoretical results to understand when and to what degree efficiency can be gained from $\\bW$ and $\\bO,A$.\n\nWe further show that our approach is robust to mis-specification of the imputation models. To account for potential mis-specification in the models for the $Q$ function, we provide a double robust value function estimator for the derived STR. If either the regression models for the Q functions or the propensity score functions are correctly specified, our value function estimators are consistent for the true value function. \n\n\nWe organize the rest of the paper as follows. In Section \\ref{section: set up} we formalize the problem mathematically and provide some notation to be used in the development and analysis of the methods. In Section \\ref{section: SS Q learning} we discuss traditional $Q$-learning and propose an SSL estimation procedure for the optimal STR. Section \\ref{section: SS value function} details an SSL doubly robust estimator of the value function for the derived STR. In Section \\ref{theory} we provide theoretical guarantees for our approach and discuss implications of our assumptions and results. Section \\ref{section: sims and application} is devoted for numerical experiments as well as real data analysis with an inflammatory bowel disease (IBD) data-set. We end with a discussion of the methods and possible extensions in Section \\ref{section: discussion}. The proposed method has been implemented in R and the code can be found at \\url{github.com\/asonabend\/SSOPRL}. Finally all the technical proofs and supporting lemmas are collected in Appendices \\ref{sec:proof_main_results} and \\ref{sec:proof_technical_lemmas}.\n\n\\section{Problem setup}\\label{section: set up}\n\nWe consider a longitudinal observational study with outcomes, confounders and treatment indices potentially available over multiple stages. Although our method is generalizable for any number of stages, for ease of presentation we will use two time points of (binary) treatment allocation as follows. For time point $t\\in \\{1,2\\}$, let $\\bO_t\\in\\mathbb{R}^{d^o_t}$ denote the vector of covariates measured prior at stage $t$ of dimension $d^o_t$; $A_t\\in\\{0,1\\}$ a treatment indicator variable; and $Y_{t+1}\\in\\mathbb{R}$ the outcome observed at stage $t+1$, for which higher values of $Y_{t+1}$ are considered beneficial. Additionally we observe surrogates $\\bW_t \\in\\mathbb{R}^{d^\\omega_ t}$, a $d^\\omega_ t$-dimensional vector of post-treatment covariates potentially predictive of $Y_{t+1}$. 
In the labeled data where $\\bY = (Y_2,Y_3)\\trans$ is annotated, we observe a random sample of $n$ independent and identically distributed (iid) random vectors, denoted by \n$$\n\\mathcal{L}=\\{\\bL_i = (\\bUvec_i\\trans,\\bY_i\\trans)\\trans\\}_{i=1}^n, \\quad \\mbox{where $\\bU_{ti}=(\\bO_{ti}\\trans, A_{ti},\\bW_{ti}\\trans)\\trans$ and $\\bUvec_i=(\\bU_{1i}\\trans,\\bU_{2i}\\trans)\\trans$.}\n$$ \nWe additionally observe an unlabeled set consisting of $N$ iid random vectors, \n$$\\mathcal{U}=\\{\\bUvec_{j}\\}_{j=1}^N$$\nwith $N \\gg n$. We denote the entire data as $\\mathbb{S}=(\\mathcal{L}\\cup\\mathcal{U})$. To operationalize our statistical arguments we denote the joint distribution of the observation vector $\\bL_i$ in $\\mathcal{L}$ as $\\Pbb$. In order to connect to the unlabeled set, we assume that any observation vector $\\bUvec_{j}$ in $\\mathcal{U}$ has the distribution induced by $\\Pbb$.\n\nWe are interested in finding the optimal STR and estimating its {\\em value function} to be defined as expected counterfactual outcomes under the derived regime. To this end, let $Y_{t+1}^{(a)}$ be the potential outcome for a patient at time $t+1$ had the patient been assigned at time $t$ to treatment $a\\in \\{0,1\\}$. A dynamic treatment regime is a set of functions $\\mathcal{D}=(d_1,d_2)$, where $d_t(\\cdot)\\in\\{0,1\\}$ , $t=1,2$ map from the patient's history up to time $t$ to the treatment choice $\\{0,1\\}$.\nWe define the patient's history as $\\bH_1\\equiv [\\bH_{10}\\trans, \\bH_{11}\\trans]\\trans$ with $\\bH_{1k} = \\bphi_{1k}(\\bO_1)$, $\\bH_2=[\\bH_{20}\\trans, \\bH_{21}\\trans]\\trans$ with $\\bH_{2k}=\\bphi_{2k}(\\bO_1,A_1,\\bO_2)$, where $\\{\\bphi_{tk}(\\cdot), t=1,2, k=0,1\\}$ are pre-specified basis functions. We then define features derived from patient history for regression modeling as $\\bX_1\\equiv[\\bH_{10}\\trans,A_1\\bH_{11}\\trans]\\trans$ and $\\bX_2\\equiv[\\bH_{20}\\trans,A_{2}\\bH_{21}\\trans]\\trans$. For ease of presentation, we also let $\\bHcheck_1 = \\bH_1\\trans$, $\\bHcheck_2 = (Y_2, \\bH_2\\trans)\\trans$, $\\bXcheck_1 = \\bX_1$, $\\bXcheck_2 = (Y_2, \\bX_2\\trans)\\trans$, and $\\bSigma_t=\\mathbb{E}[\\bXcheck_t\\bXcheck_t\\trans]$. \n\nLet $\\Ebb_\\Dsc$ be the expectation with respect to the measure that generated the data under regime $\\Dsc$. Then these sets of rules $\\Dsc$ have an associated value function which we can write as $V(\\Dsc)=\\mathbb{E}_\\Dsc\\left[Y_2^{(d_1)}+Y_3^{(d_2)}\\right]$. Thus, an optimal dynamic treatment regime is a rule $\\Dscbar=(\\dbar_1,\\dbar_2)$ such that $\\Vbar=V\\left(\\Dscbar\\right)\\ge V\\left(\\Dsc\\right)$ for all $\\Dsc$ in a suitable class of admissible decisions \\citep{DTRbook}. To identify $\\Dscbar$ and $\\Vbar$ from the observed data we will require the following sets of standard assumptions \\citep{robins1997, schulte2014}: (i) consistency -- $Y_{t+1}=Y_{t+1}^{(0)}I(A_t=0)+Y_{t+1}^{(1)}I(A_t=1)\\text{ for }t=1,2$, (ii) no unmeasured confounding -- $Y_{t+1}^{(0)},Y_{t+1}^{(1)}\\indep A_t|\\bH_t\\text{ for }t=1,2$ and (iii) positivity -- $\\Pbb(A_t|\\bH_t)>\\nu$, $\\text{ for }t=1,2,\\:A_t\\in\\{0,1\\}$, for some fixed $\\nu>0$. \n\nWe will develop SSL inference methods to derive optimal STR $\\Dscbar$ as well the associated value function $\\Vbar$ by leveraging the richness of the unlabeled data and the predictive power of surrogate variables which allows us to gain crucial statistical efficiency. Our main contributions in this regard can be described as follows. 
First, we provide a systematic generalization of the $Q$-learning framework with theoretical guarantees to the semi-supervised setting with improved efficiency. Second, we provide a doubly robust estimator of the value function in the semi-supervised setup. Third, our $Q$-learning procedure and value function estimator are flexible enough to allow for standard off-the-shelf machine learning tools and are shown to perform well in finite-sample numerical examples. \n\n\n\n\\section{Semi-Supervised \\texorpdfstring{$Q$}{Lg}-learning}\\label{section: SS Q learning}\n\nIn this section we propose a semi-supervised Q-learning approach to deriving an optimal STR. To this end, we first recall the basic mechanism of traditional linear parametric $Q$-learning \\citep{DTRbook} and then detail our proposed method. We defer the theoretical guarantees to Section \\ref{theory}. \n\n\\subsection{Traditional \\texorpdfstring{$Q$}{Lg}-learning}\n\n$Q$-learning is a backward recursive algorithm that identifies optimal STR by optimizing two stage Q-functions defined as: $$Q_2(\\bHcheck_2,A_2)\\equiv\\mathbb{E}[Y_3|\\bHcheck_2,A_2], \\quad \\mbox{and}\\quad\nQ_1(\\bHcheck_1,A_1)\\equiv\\mathbb{E}[Y_2+\\underset{a_2}{\\text{max}}\\:Q_2(\\bHcheck_2,a_2)|\\bHcheck_1,A_1]\n$$ \n\\citep{sutton2018,murphy2005}.\nIn order to perform inference one typically proceeds by positing models for the $Q$ functions. In its simplest form one assumes a (working) linear model for some parameters $\\btheta_t=(\\bbeta_t\\trans,\\bgamma_t\\trans)\\trans$, $t=1,2$, as follows: \n\\begin{align}\\label{linear_Qs}\n\\begin{split}\nQ_1(\\bHcheck_1,A_1;\\btheta_1^0)=& \\bXcheck_1\\trans\\btheta_1^0=\\bH_{10}\\trans \\bbeta_1^0+A_{1}(\\bH_{11}\\trans \\bgamma_1^0) ,\\\\\nQ_2(\\bHcheck_2,A_2;\\btheta_2^0)=&\\bXcheck_2\\trans\\btheta_2^0 = Y_2 \\beta_{21}^0 +\n\\bH_{20}\\trans\\bbeta_{22}^0\n+A_{2}(\\bH_{21}\\trans\\bgamma_2^0).\n\\end{split}\n\\end{align}\nTypical $Q$-learning consists of performing a least squares regression for the second stage to estimate $\\bthetahat_2$ followed by defining the stage 1 pseudo-outcome for $i=1,...,n$ as \n\\[\n\\Yhat_{2i}^*=Y_{2i}+\\underset{a_2}{\\text{max}}\\:Q_2(\\bHcheck_{2i},a_2;\\bthetahat_2)=Y_{2i}(1+\\hat\\beta_{21})+\\bH_{20i}\\trans{\\bbetahat}_{22}\n+[\\bH_{21i}\\trans{\\bgammahat}_2]_+,\n\\]\nwhere $[x]_+=xI(x>0)$. One then proceeds to estimate $\\bthetahat_1$ using least squares again, with $\\Yhat_2^*$ as the outcome variable. Indeed, valid inference on $\\Dscbar$ using the method described above crucially depends on the validity of the model assumed. However as we shall see, even without validity of this model we will be able to provide valid inference on suitable analogues of the $Q$-function working model parameters, and on the value function using a double robust type estimator. To that end it will be instructive to define the least square projections of $Y_3$ and $Y_2^*$ onto $\\bXcheck_2$ and $\\bXcheck_1$ respectively. The linear regression working models given by \\eqref{linear_Qs} have $\\btheta_1^0,\\:\\btheta_2^0$ as unknown regression parameters. 
To account for the potential mis-specification of the working models in \\eqref{linear_Qs}, we define the target population parameters $\\bthetabar_1,\\bthetabar_2$ as the population solutions to the expected normal equations \n\\[\n\\mathbb{E}\\left\\{\\bXcheck_1(\\Ybar_2^*-\\bXcheck_1\\trans\\bthetabar_1)\\right\\}=\\bzero, \\quad \\mbox{and}\\quad \\mathbb{E}\\left\\{ \\bXcheck_2\\trans\\left(Y_3-\\bXcheck_2\\trans\\bthetabar_2\\right)\\right\\}=\\bzero,\n\\]\nwhere $\\Ybar_2^*=Y_2+\\underset{a_2}{\\text{max}}\\:Q_2(\\bHcheck_2,a_2;\\bthetabar_2)$. As these are linear in the parameters, uniqueness and existence for\n$\\bthetabar_1,\\bthetabar_2$ are well defined. In fact, $Q_1(\\bHcheck_1,A_1;\\bthetabar_1)=\\bXcheck_1\\trans\\bthetabar_1,Q_2(\\bHcheck_2,A_2;\\bthetabar_2)=\\bXcheck_2\\trans\\bthetabar_2$ are the $L_2$ projection of $\\mathbb{E}(Y_2^*|\\bXcheck_1)\\in\\mathcal{L}_2\\left(\\Pbb_{\\bXcheck_1}\\right),\\:\\mathbb{E}(Y_3|\\bXcheck_2)\\in\\mathcal{L}_2\\left(\\Pbb_{\\bXcheck_2}\\right)$ onto the subspace of all linear functions of $\\bXcheck_1,\\bXcheck_2\\trans$ respectively. Therefore, $Q$ functions in \\eqref{linear_Qs} are the best linear predictors of $\\Ybar_2^*$ conditional on $\\bXcheck_1$ and $Y_3$ conditional on $\\bXcheck_2\\trans$. \n\n\n\nTraditionally, one only has access to labeled data $\\mathcal{L}$, and hence proceeds by estimating $(\\btheta_1,\\btheta_2)$ in \\eqref{linear_Qs} by solving the following sample version set of normal equations:\n\\begin{align}\\label{full_EE}\n\\begin{split}\n\\Pbb_n \n\\left[\\begin{matrix}\n\\bXcheck_2(Y_3-\\bXcheck_2\\trans\\btheta_2)\n\\end{matrix}\\right] \\equiv\n\\Pbb_n\\left[\\begin{matrix}\nY_2\\{Y_3-(Y_2,\\bX_2\\trans)\\btheta_2\\}\\\\\n\\bX_2\\{Y_3-(Y_2,\\bX_2\\trans)\\btheta_2\\}\n\\end{matrix}\\right] \n=&\\textbf{0},\\\\\n\\Pbb_n \\left[\n\\bX_1\n\\{Y_2(1+\\beta_{21})+\\bH_{20}\\trans{\\bbeta}_{22}\n+[\\bH_{21}\\trans{\\bgamma}_2]_+-\\bX_1\\trans\\btheta_1\\}\\right]=&\\bf0.\n\\end{split}\n\\end{align}\n\\citep{DTRbook}, where $\\Pbb_n$ denotes the empirical measure: i.e. for a measurable function $f:\\mathbb{R}^p\\mapsto\\mathbb{R}$ and random sample $\\{\\bL_i\\}_{i=1}^n$, $\\Pbb_nf=\\frac{1}{n}\\sum_{i=1}^nf(\\bL_i)$. \nThe asymptotic distribution for the $Q$ function parameters in the fully-supervised setting has been well studied \\cite[see][]{laber2014}.\n\n\n\\subsection{Semi-supervised \\texorpdfstring{$Q$}{Lg}-learning}\\label{sec: ssQ}\nWe next detail our robust imputation-based semi-supervised $Q$-learning that leverages the unlabeled data $\\mathcal{U}$ to replace the unobserved $Y_{t}$ in \\eqref{full_EE} with their properly imputed values for subjects in $\\Usc$. Our SSL procedure includes three key steps: (i) imputation, (ii) refitting, and (iii) projection to the unlabeled data. \nIn step (i), we develop flexible imputation models for the conditional mean functions $\\{\\mu_t(\\cdot), \\mu_{2t}(\\cdot), t = 2, 3\\}$, where $\\mu_t(\\bUvec) = \\mathbb{E}(Y_{t}|\\bUvec)$ and $\\mu_{2t}(\\bUvec) = \\mathbb{E}(Y_{2}Y_{t}|\\bUvec)$. \nThe refitting in step (ii) will ensure the validity of the SSL estimators under potential mis-specifications of the imputation models. \n\n\\subsubsection*{Step I: Imputation.}\n\nOur first imputation step involves weakly parametric or non-parametric prediction modeling to approximate the conditional mean functions $\\{\\mu_t(\\cdot), \\mu_{2t}(\\cdot), t = 2, 3\\}$. 
Commonly used models such as non-parametric kernel smoothing, basis function expansion or kernel machine regression can be used. We denote the corresponding estimated mean functions as $\\{\\mhat_t(\\cdot), \\mhat_{2t}(\\cdot), t = 2, 3\\}$ under the corresponding imputation models $\\{m_t(\\bUvec), m_{2t}(\\bUvec), t=2,3\\}$. Theoretical properties of our proposed SSL estimators on specific choices of the imputation models are provided in section \\ref{theory}. We also provide additional simulation results comparing different imputation models in section \\ref{section: sims and application}.\n\n\n\n\n\\subsubsection*{Step II: Refitting.}\nTo overcome the potential bias in the fitting from the imputation model, especially under model mis-specification, we update the imputation model with an additional refitting step by expanding it to include linear effects of $\\{\\bX_t, t=1,2\\}$ with cross-fitting to control overfitting bias.\nSpecifically, to ensure the validity of the SSL algorithm from the refitted imputation model, we note that the final imputation models for $\\{Y_t, Y_{2t}, t=2,3\\}$, denoted by $\\{\\mubar_t(\\bUvec), \\mubar_{2t}, t=2,3\\}$, need to satisfy \n\\begin{equation*}\n\\begin{alignedat}{3}\n \\Ebb\\left[\\bXvec\\{Y_2 - \\mubar_2(\\bUvec)\\}\\right] = \\bzero, &\\quad&\n\\Ebb\\left\\{Y_2^2 - \\mubar_{22}(\\bUvec)\\right\\} =&0 &, \\\\\n\\Ebb\\left[\\bX_2\\{Y_3 - \\mubar_3(\\bUvec)\\}\\right] = \\bzero, &\\quad&\n\\Ebb\\left\\{Y_2Y_3 - \\mubar_{23}(\\bUvec)\\right\\} =&0 &.\n\\end{alignedat}\n\\end{equation*}\nwhere $\\bXvec = (1,\\bX_1\\trans,\\bX_2\\trans)\\trans$.\nWe thus propose a refitting step that expands $\\{m_t(\\bUvec), m_{2t}(\\bUvec), t=2,3\\}$ to additionally adjust for linear effects of $\\bX_1$ and\/or $\\bX_2$ to ensure the subsequent projection step is unbiased. To this end, let $\\{\\mathcal{I}_k,k=1,...,K\\}$ denote $K$ random equal sized partitions of the labeled index set $\\{1,...,n\\}$, and let $\\{\\mhat_{t}\\supnk(\\bUvec), \\mhat_{2t}\\supnk(\\bUvec), t=2,3\\}$ be the counterpart of $\\{\\mhat_t(\\bUvec), \\mhat_{2t}(\\bUvec),t=2,3\\}$ with labeled observations in $\\{1,..,n\\}\\setminus\\mathcal{I}_k$. 
We then obtain $\\bEtahat_2$, $\\etahat_{22}$, $\\bEtahat_3$, $\\etahat_{23}$ respectively as the solutions to\n\\begin{equation}\\label{eta_EE_Q1}\n\\begin{alignedat}{3}\n\\sum_{k=1}^{K}\\sum_{i\\in\\mathcal{I}_k}\\bXvec_i\\left\\{Y_{2i}- \\mhat_2\\supnk(\\bUvec_i)-\\bEta_2\\trans\\bXvec_i\\right\\} = &\\bzero, \\ & \\sum_{k=1}^{K}\\sum_{i\\in\\mathcal{I}_k}\n\\left\\{Y_{2i}^2-\\mhat_{22}\\supnk(\\bUvec_i)-\n\\eta_{22}\\right\\} = &0& , \\\\\n\\sum_{k=1}^{K}\\sum_{i\\in\\mathcal{I}_k}\n\\bX_{2i}\\left\\{Y_{3i}-\\mhat_{3}\\supnk(\\bUvec_i)-\n\\bEta_3\\trans\\bX_{2i}\n\\right\\} = &\\bzero, \\ & \n\\sum_{k=1}^{K}\\sum_{i\\in\\mathcal{I}_k}\n\\left\\{Y_{2i}Y_{3i}-\\mhat_{23}\\supnk(\\bUvec_i)-\n\\eta_{23}\\right\\} =&0& .\n\\end{alignedat}\n\\end{equation}\n\nFinally, we impute $Y_2$, $Y_3$, $Y_{2}^2$ and $Y_{2}Y_3$ respectively as\n$\\muhat_2(\\bUvec) = K^{-1}\\sum_{k=1}^K \\mhat_2\\supnk(\\bUvec) + \\bEtahat_2\\trans\\bXvec$, \n$\\muhat_3(\\bUvec) = K^{-1}\\sum_{k=1}^K \\mhat\\supnk_3(\\bUvec) + \\bEtahat_3\\trans\\bX_2$, \n$\\muhat_{22}(\\bUvec)=K^{-1}\\sum_{k=1}^K \\mhat\\supnk_{22}(\\bUvec) + \\etahat_{22}$, and $\\muhat_{23}(\\bUvec)=K^{-1}\\sum_{k=1}^K \\mhat\\supnk_{23}(\\bUvec) + \\etahat_{23}$.\n\n\n\n\\subsubsection*{Step III: Projection}\\label{section: SSQL}\n\nIn the last step, we proceed to estimate $\\bthetahat$ \nby replacing $\\{Y_t, Y_{2}Y_{t}, t=2,3\\}$ in (\\ref{full_EE}) with their the imputed values $\\{ \\muhat_t(\\bUvec), \\muhat_{2t}(\\bUvec), t= 2,3\\}$ and project to the unlabeled data. Specifically, we obtain the final SSL estimators for $\\btheta_1$ and $\\btheta_2$ via the following steps:\n\\begin{enumerate} \n\n\t\\item Stage 2 regression:\n\twe obtain the SSL estimator for $\\btheta_2$ as \n \\begin{align*}\n \\bthetahat_2=(\\bbetahat_2\\trans,\\bgammahat_2\\trans)\\trans: \n \\mbox{the solution to}\\quad\n \\begin{split}\n \\Pbb_N&\n \\begin{bmatrix}\n \\muhat_{23}(\\bUvec)-\n [\\muhat_{22}(\\bUvec),\\muhat_2(\\bUvec)\\bX_2\\trans]\\btheta_2\\\\\n \\bX_2\\{\\muhat_3(\\bUvec)-\n [\\muhat_2(\\bUvec),\\bX_2\\trans]\\btheta_2\\}\n \\end{bmatrix}\n =\\textbf{0}\n \\end{split}\n \\end{align*}\n \n\\item We compute the imputed pseudo-outcome: \n\\[\n\\Ytilde_{2}^*=\\muhat_{2}(\\bUvec)+\\underset{a\\in\\{0,1\\}}{\\text{max }}Q_2\\left(\\bH_{2},\\muhat_{2}(\\bUvec),a;\\bthetahat_2\\right),\n\\]\n\\item Stage 1 regression: we estimate $\\bthetahat_1=(\\bbetahat_1\\trans,\\bgammahat_1\\trans)\\trans$ as the solution to:\n\\begin{align*}\n\\Pbb_N \\left\\{\\bX_1\n(\\Ytilde_2^*-\\bX_1\\trans\\btheta_1)\\right\\}=\\bf0.\n\\end{align*}\n\n\\end{enumerate}\n\nBased on the SSL estimator for the Q-learning model parameters, we can then obtain an estimate for the optimal treatment protocol as:\n\\[\n\\dhat_t \\equiv \\dhat_t(\\bH_t)\\equiv d_t(\\bH_t; \\bthetahat_t), \\mbox{ where } d_t(\\bH_t,\\btheta_t) = \\argmax{a \\in \\{0,1\\}}Q_t(\\bH_t,a;\\btheta_t)=I\\left(\\bH_{t1}\\trans\\bgamma_t>0\\right), \\: t = 1, 2.\n\\]\nTheorems \\ref{theorem: unbiased theta2} and \\ref{theorem: unbiased theta1} of Section \\ref{theory} demonstrate\nthe consistency and asymptotic normality of the SSL estimators $\\{\\bthetahat_t,t=1,2\\}$ for their respective population parameters $\\{\\bthetabar_t,t=1,2 \\}$ even in the possible mis-specification of \\eqref{linear_Qs}. 
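To fix ideas, the three steps above can be assembled with standard off-the-shelf software. The following R sketch is a simplified illustration rather than the implementation released with the paper: it uses a random forest as one possible flexible learner in Step I, and all object names (e.g., \\texttt{O1\\_L}, \\texttt{H21\\_U}, \\texttt{Y2\\_L}) are hypothetical placeholders for the covariates, basis matrices, treatments and outcomes on the labeled (suffix \\texttt{\\_L}) and unlabeled (suffix \\texttt{\\_U}) sets defined in Section \\ref{section: set up}.
\\begin{verbatim}
library(randomForest)
## ----- Step I: cross-fitted flexible imputations of Y2, Y3, Y2^2 and Y2*Y3 -----
Uvec_L <- cbind(O1_L, A1 = A1_L, W1_L, O2_L, A2 = A2_L, W2_L)  # (O, A, W) features, labeled
Uvec_U <- cbind(O1_U, A1 = A1_U, W1_U, O2_U, A2 = A2_U, W2_U)  # same features, unlabeled
n <- nrow(Uvec_L); N <- nrow(Uvec_U)
K <- 5; folds <- sample(rep(1:K, length.out = n))
cf_impute <- function(y) {          # out-of-fold fits on L, fold-averaged fits on U
  out_L <- numeric(n); out_U <- matrix(0, N, K)
  for (k in 1:K) {
    fit <- randomForest(x = Uvec_L[folds != k, ], y = y[folds != k])
    out_L[folds == k] <- predict(fit, Uvec_L[folds == k, ])
    out_U[, k] <- predict(fit, Uvec_U)
  }
  list(L = out_L, U = rowMeans(out_U))
}
m2  <- cf_impute(Y2_L);   m3  <- cf_impute(Y3_L)
m22 <- cf_impute(Y2_L^2); m23 <- cf_impute(Y2_L * Y3_L)

## ----- Step II: refitting, so imputation residuals are orthogonal to the designs -----
X2_L   <- cbind(H20_L, A2_L * H21_L);           X2_U   <- cbind(H20_U, A2_U * H21_U)
Xvec_L <- cbind(1, H10_L, A1_L * H11_L, X2_L);  Xvec_U <- cbind(1, H10_U, A1_U * H11_U, X2_U)
mu2_U  <- m2$U + drop(Xvec_U %*% lm.fit(Xvec_L, Y2_L - m2$L)$coefficients)
mu3_U  <- m3$U + drop(X2_U %*% lm.fit(X2_L, Y3_L - m3$L)$coefficients)
mu22_U <- m22$U + mean(Y2_L^2 - m22$L)
mu23_U <- m23$U + mean(Y2_L * Y3_L - m23$L)

## ----- Step III: projection of the imputed normal equations onto the unlabeled data -----
LHS2 <- rbind(c(mean(mu22_U), colMeans(mu2_U * X2_U)),
              cbind(colMeans(mu2_U * X2_U), crossprod(X2_U) / N))
RHS2 <- c(mean(mu23_U), colMeans(mu3_U * X2_U))
theta2_hat <- drop(solve(LHS2, RHS2))            # (beta21, beta22, gamma2)
beta21_hat <- theta2_hat[1]
beta22_hat <- theta2_hat[1 + seq_len(ncol(H20_U))]
gamma2_hat <- theta2_hat[-(1:(1 + ncol(H20_U)))]
Ytilde2 <- (1 + beta21_hat) * mu2_U + drop(H20_U %*% beta22_hat) +
  pmax(drop(H21_U %*% gamma2_hat), 0)            # imputed stage-1 pseudo-outcome
X1_U <- cbind(H10_U, A1_U * H11_U)
theta1_hat <- drop(solve(crossprod(X1_U), crossprod(X1_U, Ytilde2)))
gamma1_hat <- theta1_hat[-seq_len(ncol(H10_U))]
d1_hat <- as.integer(drop(H11_U %*% gamma1_hat) > 0)   # estimated rules d_t = I(H_t1' gamma_t > 0)
d2_hat <- as.integer(drop(H21_U %*% gamma2_hat) > 0)
\\end{verbatim}
Any sufficiently accurate imputation learner can replace the random forest (Section \\ref{theory} states the precise conditions); the linear refitting in Step II is what keeps the projection step valid even when that learner is mis-specified.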
As we explain next, this in turn yields desirable statistical results for evaluating the resulting policy $\\dbar_t \\equiv \\dbar_t(\\bH_t) \\equiv d_t(\\bH_t,\\bthetabar_t) = \\argmax{a \\in \\{0,1\\}}Q_t(\\bHcheck_t,a;\\bthetabar_t)$ for $t=1,2$.\n\n\\section{Semi Supervised Off-Policy Evaluation of the Policy}\\label{section: SS value function}\n\nTo evaluate the performance of the optimal policy $\\Dscbar = \\{\\dbar_t(\\bH_t), t=1,2\\}$, derived under the Q-learning framework, one may estimate the expected population outcome under the policy $\\Dscbar$: \n\\[\n\\Vbar\\equiv\n\\mathbb{E}\\left[\\mathbb{E}\\{Y_2+\\mathbb{E}\\{Y_3|\\bHcheck_2,A_2=\\dbar_2(\\bH_2)\\}|\\bH_1,A_1=\\dbar_1(\\bH_1)\\}\\right].\n\\]\n\nIf models in \\eqref{linear_Qs} are correctly specified, then under standard causal assumptions (consistency, no unmeasured confounding, and positivity), an asymptotically consistent supervised estimator for the value function can be obtained as\n\\[\n\\Vhat_Q=\\Pbb_n\\left[\\Qopt_1(\\bHcheck_1;\\bthetahat_1)\\right],\n\\]\nwhere $\\Qopt_t(\\bHcheck_t;\\btheta_t)\\equiv Q_t\\left(\\bHcheck_t,d_t(\\bH_t;\\btheta_t);\\btheta_t\\right)$. However, $\\Vhat_Q$ is likely to be biased when the outcome models in \\eqref{linear_Qs} are mis-specified. This occurs frequently in practice since $Q_1(\\bHcheck_1,A_1)$ is especially difficult to specify. \n\n\nTo improve the robustness to model mis-specification, we augment $\\Vhat_Q$ via propensity score weighting. This gives us an SSL doubly robust (SSL$\\subDR$) estimator for $\\Vbar$. To this end, we define propensity scores: $$\\pi_t(\\bHcheck_t)=\\Pbb\\{A_t=1|\\bHcheck_t\\}, \\quad t=1,2.$$ To estimate $\\{\\pi_t(\\cdot),t=1,2\\}$, we impose the following generalized linear models (GLM):\n\\begin{align}\\label{logit_Ws}\n\\pi_t(\\bHcheck_t;\\bxi_t)=&\\sigma\\left(\\bHcheck_t\\trans\\bxi_t\\right), \\quad \\mbox{with}\\quad \\sigma(x)\\equiv1\/(1+e^{-x}) \\quad \\mbox{for}\\quad t=1,2.\n\\end{align}\nWe use the logistic model with potentially non-linear basis functions $\\bHcheck$ for simplicity of presentation but one may choose other GLM or alternative basis expansions to incorporate non-linear effects in the propensity model. We estimate $\\bxi=(\\bxi_1\\trans,\\bxi_2\\trans)\\trans$ based on the standard maximum likelihood estimators using labeled data, denoted by $\\bxihat = (\\bxihat_1\\trans,\\bxihat_2\\trans)\\trans$.\nWe denote the limit of $\\bxihat$ as $\\bxibar = (\\bxibar_1\\trans,\\bxibar_2\\trans)\\trans$. Note that this is not necessarily equal to the true model parameter under correct specification of \\eqref{logit_Ws}, but corresponds to the population solution of the fitted models. \n\nOur framework is flexible to allow an SSL approach to estimate the propensity scores. As these are nuisance parameters needed for estimation of the value function, and SSL for GLMs has been widely explored \\citep[See][Ch. 2]{ChakraborttyThesis}, we proceed with the usual GLM estimation to keep the discussion focused. 
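For concreteness, the default supervised fit of the working models \\eqref{logit_Ws} on the labeled data is ordinary logistic regression. The short R sketch below is only a schematic, with \\texttt{H10\\_L}, \\texttt{H11\\_L}, \\texttt{H20\\_L}, \\texttt{H21\\_L}, \\texttt{Y2\\_L}, \\texttt{A1\\_L} and \\texttt{A2\\_L} as hypothetical placeholders for the basis matrices, outcome and treatments introduced earlier; an intercept, if desired, is taken to be part of the basis.
\\begin{verbatim}
## Supervised maximum-likelihood fits of the working propensity models
Hcheck1_L <- cbind(H10_L, H11_L)         # basis for pi_1 on the labeled set
Hcheck2_L <- cbind(Y2_L, H20_L, H21_L)   # basis for pi_2 includes the observed Y2
xi1_hat <- coef(glm(A1_L ~ Hcheck1_L - 1, family = binomial()))
xi2_hat <- coef(glm(A2_L ~ Hcheck2_L - 1, family = binomial()))
pi1_hat <- function(hcheck1) plogis(drop(hcheck1 %*% xi1_hat))   # sigma(Hcheck1' xi1)
pi2_hat <- function(hcheck2) plogis(drop(hcheck2 %*% xi2_hat))
\\end{verbatim}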
However, SSL for propensity scores can be beneficial in certain cases, as we show in Proposition \\ref{lemma: v funcion var}.\n\n\n\\subsection{SUP\\texorpdfstring{$\\subDR$}{Lg} Value Function Estimation }\nTo derive a supervised doubly robust (SUP$\\subDR$) estimator for $\\Vbar$ overcoming confounding in the observed data, we let $\\bTheta = (\\btheta\\trans,\\bxi\\trans)\\trans$ and define the inverse probability weights (IPW) using the propensity scores\n\\begin{align*}\n\\omega_1(\\bHcheck_1,A_1,\\bTheta)\n&\\equiv\n\\frac{d_1(\\bH_1;\\btheta_1)A_1}{\\pi_1(\\bHcheck_1;\\bxi_1)}+\\frac{\\{1-d_1(\\bH_1;\\btheta_1)\\}\\{1-A_1\\}}{1-\\pi_1(\\bHcheck_1;\\bxi_1)}, \\quad \\mbox{and}\\\\ \\omega_2(\\bHcheck_2,A_2,\\bTheta)\n&\\equiv\n\\omega_1(\\bHcheck_1,A_1,\\bTheta)\\left(\\frac{d_2(\\bH_2;\\btheta_2)A_2}{\\pi_2(\\bHcheck_2;\\bxi_2)}+\\frac{\\{1-d_2(\\bH_2;\\btheta_2)\\}\\{1-A_2\\}}{1-\\pi_2(\\bHcheck_2;\\bxi_2)}\\right).\n\\end{align*}\nThen we augment $\\Qopt_1(\\bH_1;\\bthetahat_1)$ based on the estimated propensity scores via\n\\begin{align*}\n\\begin{split}\n\\Vsc\\subSUPDR(\\bL; \\bThetahat)=\n\\Qopt_1(\\bH_1;\\bthetahat_1)\n+&\\omega_1(\\bHcheck_1,A_1,\\bThetahat)\n\\left[\nY_2-\\left\\{\\Qopt_1(\\bH_1, \\bthetahat_1)- \\Qopt_2(\\bHcheck_2;\\bthetahat_2)\n\\right\\}\\right]\\\\\n+&\\omega_2(\\bHcheck_2,A_2,\\bThetahat)\\left\\{\nY_3-\\Qopt_2(\\bHcheck_2; \\bthetahat_2)\n\\right\\}\n\\end{split}\n\\end{align*}\nand estimate $\\Vbar$ as \n\\begin{align}\\label{eq: lab value fun}\n\\Vhat\\subSUPDR = \\Pbb_n\\left\\{\\Vsc\\subSUPDR(\\bL; \\bThetahat)\\right\\} .\n\\end{align}\n\n\\begin{remark}\nThe importance sampling estimators previously proposed in \\citet{Jiang2016} and \\citet{WDR} for value function estimation employ similar augmentation strategies. However, they consider a fixed policy, and we account for the fact that the STR is estimated with the same data. The construction of augmentation in $\\Vhat\\subSUPDR$ also differs from the usual augmented IPW estimators \\citep{DTRbook}. As we are interested in the value had the population been treated with function $\\Dscbar$ and not a fixed sequence $(A_1,A_2)$, we augment the weights for a fixed treatment (i.e. $A_t=1$) with the propensity score weights for the estimated regime $I(A_t = \\dbar_t)$. Finally, we note that this estimator can easily be extended to incorporate non-binary treatments.\n\\end{remark}\n\nThe supervised value function estimator $\\Vhat\\subSUPDR$ is doubly robust in the sense that if either the outcome models of the propensity score models are correctly specified, then $\\Vhat\\subSUPDR\\stackrel{\\Pbb}{\\rightarrow} \\Vbar$ in probability. Moreover, under certain reasonable assumptions, $\\Vhat\\subSUPDR$ \nis asymptotically normal. Theoretical guarantees and proofs for this procedure are shown in Appendix \\ref{app_DR_Vfun}.\n\n\\subsection{SSL\\texorpdfstring{$\\subDR$}{Lg} Value Function Estimation}\\label{section: SS value function est}\nAnalogous to semi-supervised $Q$-learning, we propose a procedure for adapting the augmented value function estimator to leverage $\\mathcal{U}$, by imputing suitable functions of the unobserved outcome in \\eqref{eq: lab value fun}. Since $\\bHcheck_2$ involves $Y_2$, both $\\omega_2(\\bHcheck_2,A_2;\\bTheta)$ and $\\Qopt_2(\\bHcheck_2;\\btheta_2)=Y_2 \\beta_{21}+\\Qopt_{2-}(\\bH_2;\\btheta_2)$ are not available in the unlabeled set, where $\\Qopt_{2-}(\\bH_2;\\btheta_2)=\\bH_{20}\\trans\\bbeta_{22} + [\\bH_{21}\\trans\\bgamma_2]_+$. 
By writing $\\Vsc\\subSUPDR(\\bL; \\bThetahat)$ as\n\\begin{align*}\n\\begin{split}\n\\Vsc\\subSUPDR(\\bL; \\bThetahat)=\n\\Qopt_1(\\bH_1;\\bthetahat_1)\n+&\\omega_1(\\bHcheck_1,A_1,\\bThetahat)\n\\left\\{\n(1+\\betahat_{21})Y_2-\\Qopt_1(\\bH_1, \\bthetahat_1)+ \\Qopt_{2-}(\\bH_2;\\bthetahat_2)\n\\right\\}\\\\\n+&\\omega_2(\\bHcheck_2,A_2,\\bThetahat)\\left\\{\nY_3-\\betahat_{21}Y_2 -\\Qopt_{2-}(\\bH_2; \\bthetahat_2)\n\\right\\},\n\\end{split}\n\\end{align*}\nwe note that to impute $\\Vsc\\subSUPDR(\\bL; \\bThetahat)$ for subjects in $\\Usc$, we need to impute $Y_2$, $\\omega_2(\\bHcheck_2,A_2; \\bThetahat)$, and $Y_t \\omega_2(\\bHcheck_2,A_2; \\bThetahat)$ for $t=2,3$. We define the conditional mean functions \\[\\mu^v_2(\\bUvec)\\equiv\\mathbb{E}[Y_2|\\bUvec], \\quad \\mu^v_{\\omega_2}(\\bUvec)\\equiv\\mathbb{E}[\\omega_2(\\bHcheck_2,A_2; \\bThetabar)|\\bUvec],\\quad \\mu^v_{t\\omega_2}(\\bUvec)\\equiv\\mathbb{E}[Y_t\\omega_2(\\bHcheck_2,A_2; \\bThetabar)|\\bUvec], \n\\]\nfor $t=2,3$, where\n$\\bThetabar = (\\bthetabar\\trans,\\bxibar\\trans)\\trans$.\nAs in Section \\ref{sec: ssQ} we approximate these expectations using a flexible imputation model followed by a refitting step for bias correction under possible mis-specification of the imputation models.\n\n\\subsubsection*{Step I: Imputation}\nWe fit flexible weakly parametric or non-parametric models to the labeled data to approximate the functions $\\{\\mu^v_2(\\bUvec), \\mu^v_{\\omega_2}(\\bUvec), \\mu^v_{t\\omega_2}(\\bUvec),\\:t=2,3\\}$ with unknown parameter $\\bTheta$ estimated via the SSL $Q$-learning as in Section \\ref{sec: ssQ} and the propensity score modeling as discussed above. Denote the respective imputation models as $\\{ m_2(\\bUvec), \nm_{\\omega_2}(\\bUvec),m_{t\\omega_2}(\\bUvec),\\:t=2,3\\}$ and their fitted values as\n$\\{\\mhat_2(\\bUvec), \n\\mhat_{\\omega_2}(\\bUvec),\\mhat_{t\\omega_2}(\\bUvec),\\:t=2,3\\}$. \n\n\\subsubsection*{Step II: Refitting}\nTo correct for potential biases arising from finite sample estimation and model mis-specifications, we perform refitting to obtain final imputed models for $\\{Y_2,\\omega_2(\\bHcheck_2, A_2; \\bThetabar),$ $Y_t\\omega_2(\\bHcheck_2, A_2; \\bThetabar),$ $t=2,3\\}$ as $\\{\\mubar^v_2(\\bUvec)=m_2(\\bUvec)+\\eta_2^v, \\mubar^v_{\\omega_2}(\\bUvec)=m_{\\omega_2}(\\bUvec)+\\eta_{\\omega_2}^v, \\mubar^v_{t\\omega_2}(\\bUvec)=m_{t\\omega_2}(\\bUvec)+\\eta_{t\\omega_2}^v,\\:t=2,3\\}$. 
As for the estimation of $\\btheta$ for $Q$-learning training, these refitted models are not required to be correctly specified but need to satisfy the following constraints:\n\\begin{equation*}\n\\begin{alignedat}{3}\n\\Ebb\\left[\\omega_1(\\bHcheck_1,A_1;\\bThetabar)\\left\\{Y_2-\\mubar_2^v(\\bUvec)\\right\\}\\right] &=0, \\\\\n\\Ebb\\left[\\Qopt_{2-}(\\bUvec;\\btheta_2)\\left\\{\\omega_2(\\bHcheck_2,A_2;\\bThetabar)-\\mubar_{\\omega_2}^v(\\bUvec)\\right\\}\\right] &= 0,\\\\\n\\Ebb\\left[\\omega_2(\\bHcheck_2,A_2;\\bThetabar)Y_t- \\mubar^v_{t\\omega_2}(\\bUvec)\\right] &= 0,\\:t=2,3.\n\\end{alignedat}\n\\end{equation*}\nTo estimate $\\eta_2^v$ $\\eta_{\\omega_2}^v$, and $\\eta_{t\\omega_2}^v$ under these constraints, we again employ cross-fitting and obtain $\\etahat_2^v$ $\\etahat_{\\omega_2}^v$, and $\\etahat_{t\\omega_2}^v$ as the solution to the following estimating equations\n\\begin{equation}\\label{V function reffitting}\n\\begin{alignedat}{3}\n\\sum_{k=1}^{K}\\sum_{i\\in\\mathcal{I}_k}\\omega_1(\\bHcheck_{1i},A_{1i};\\bThetahat)\\left\\{Y_2-\\mhat_2\\supnk(\\bUvec_i)-\n\\etahat_2^v\\right\\} &=0, \\\\\n\\sum_{k=1}^{K}\\sum_{i\\in\\mathcal{I}_k}{ \\Qopt_{2-}(\\bUvec_i;\\bthetahat_2)}\\left\\{\\omega_2(\\bHcheck_{2i},A_{2i};\\bThetahat)-\\mhat_{\\omega_2}\\supnk(\\bUvec_i)-\n\\etahat_{\\omega_2}^v\\right\\} &=0, \\\\\n\\sum_{k=1}^{K}\\sum_{i\\in\\mathcal{I}_k}\\left\\{\\omega_2(\\bHcheck_{2i},A_{2i};\\bThetahat)Y_{ti}-\\mhat_{t\\omega_2}\\supnk(\\bUvec_i)-\n\\etahat_{t\\omega_2}^v\\right\\} &=0, \\:t=2,3.\n\\end{alignedat}\n\\end{equation}\n\nThe resulting imputation functions for $Y_2,\\omega_2(\\bHcheck_2, A_2; \\bThetabar)$ and $Y_t\\omega_2(\\bHcheck_2, A_2; \\bThetabar)$ are respectively constructed as $\\muhat^v_2(\\bUvec)= K^{-1}\\sum_{k=1}^{K}\n\\mhat_2\\supnk(\\bUvec)+\n\\etahat_2^v,$ \n$\\muhat^v_{\\omega_2}(\\bUvec)= K^{-1}\\sum_{k=1}^{K}\n\\mhat_{\\omega_2}(\\bUvec)+\n\\etahat_{\\omega_2}^v,$ and \n$\\muhat^v_{t\\omega_2}(\\bUvec)= K^{-1}\\sum_{k=1}^{K}\n\\mhat_{t\\omega_2}\\supnk(\\bUvec)+\n\\etahat_{t\\omega_2}^v,$ for $t=2,3$.\n\n\\subsubsection*{Step III: Semi-supervised augmented value function estimator.}\n\nFinally, we proceed to estimate the value of the policy $\\Vbar$, using the following semi-supervised augmented estimator:\n\t\\begin{align}\\label{SS_value_fun}\n\t\\Vhat\\subSSLDR=\\Pbb_N\\left\\{\n\t\\Vsc\\subSSLDR(\\bUvec;\\bThetahat,\\muhat)\\right\\},\n\t\\end{align}\n\t where $\\Vschat\\subSSLDR(\\bUvec)$ is the semi-supervised augmented estimator for observation $\\bUvec$ defined as:\n\n\t\\begin{align*}\n\t\\begin{split}\n\t\\Vsc\\subSSLDR(\\bUvec;\\bThetahat,\\muhat)\n\t=&\n\t\\Qopt_1(\\bHcheck_1;\\bthetahat_1)\n\t+\\omega_1(\\bHcheck_1,A_1,\\bThetahat)\n\t\\left[(1+\\betahat_{21})\\muhat_2^v(\\bUvec)-\n\t \\Qopt_1(\\bHcheck_1;\\bthetahat_1)+\\Qopt_{2-}(\\bH_2;\\bthetahat_2)\n\t\\right]\\\\\n\t+&\n\t\\muhat_{3\\omega_2}(\\bUvec)-\\betahat_{21}\\muhat_{2\\omega_2}(\\bUvec)-\\Qopt_{2-}(\\bH_2;\\bthetahat_2)\\muhat_{\\omega_2}(\\bUvec) .\n\t\\end{split}\n\t\\end{align*}\n\t\n\t\nThe above SSL estimator uses both labeled and unlabeled data along with outcome surrogates to estimate the value function, which yields a gain in efficiency as we show in Proposition \\ref{lemma: v funcion var}. As its supervised counterpart, $\\Vhat\\subSSLDR$ is doubly robust in the sense that if either the $Q$ functions or the propensity scores are correctly specified, the value function will converge in probability to the true value $\\Vbar$. 
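With the $Q$-learning fit, the propensity fit and the refitted imputations in hand, $\\Vhat\\subSSLDR$ in \\eqref{SS_value_fun} reduces to a simple average over $\\Usc$. The R sketch below illustrates this assembly; it reuses the hypothetical objects from the earlier sketches (e.g., \\texttt{theta1\\_hat}, \\texttt{xi1\\_hat}), and \\texttt{mu2v\\_U}, \\texttt{muw2\\_U}, \\texttt{mu2w2\\_U}, \\texttt{mu3w2\\_U} are placeholders for the refitted imputations $\\muhat^v_2$, $\\muhat^v_{\\omega_2}$, $\\muhat^v_{2\\omega_2}$ and $\\muhat^v_{3\\omega_2}$ evaluated on the unlabeled set.
\\begin{verbatim}
## Assemble the SSL doubly robust value estimate on the unlabeled set
beta1_hat <- theta1_hat[seq_len(ncol(H10_U))]
Q1_opt  <- drop(H10_U %*% beta1_hat) + pmax(drop(H11_U %*% gamma1_hat), 0)    # Q1 evaluated at d1
Q2m_opt <- drop(H20_U %*% beta22_hat) + pmax(drop(H21_U %*% gamma2_hat), 0)   # Q2 without the Y2*beta21 term, at d2
pi1_U  <- plogis(drop(cbind(H10_U, H11_U) %*% xi1_hat))
omega1 <- d1_hat * A1_U / pi1_U + (1 - d1_hat) * (1 - A1_U) / (1 - pi1_U)
V_SSL_DR <- mean(Q1_opt +
                 omega1 * ((1 + beta21_hat) * mu2v_U - Q1_opt + Q2m_opt) +
                 mu3w2_U - beta21_hat * mu2w2_U - Q2m_opt * muw2_U)
\\end{verbatim}
Replacing the imputed terms by their observed counterparts and averaging over $\\Lsc$ recovers the supervised estimator $\\Vhat\\subSUPDR$.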
Additionally, $\\Vhat\\subSSLDR$ does not assume that the estimated treatment regime was derived from a different sample. These properties are summarized in Theorem \\ref{thrm_ssV_fun} and Proposition \\ref{cor_dr_V} of the following section.\n\n\\section{Theoretical Results}\\label{theory}\n\nIn this section we discuss our assumptions and theoretical results for the semi-supervised $Q$-learning and value function estimators. Throughout, we define the norm $\\|g(x)\\|_{L_2(\\Pbb)}\\equiv\\sqrt{\\int g(x)^2d\\Pbb(x)}$ for any real valued function $g(\\cdot)$. Additionally, let $\\{U_n\\}$ and $\\{V_n\\}$ be two sequences of random variables. We will use $U_n=O_\\Pbb(V_n)$ to denote stochastic boundedness of the sequence $\\{U_n\/V_n\\}$, that is, for any $\\epsilon>0$, $\\exists M_\\epsilon,n_\\epsilon\\in\\mathbb{R}$ such that $\\Pbb\\left(|U_n\/V_n|>M_\\epsilon\\right)<\\epsilon$ $\\forall n>n_\\epsilon$. We use $U_n=o_\\Pbb(V_n)$ to denote that $U_n\/V_n\\stackrel{\\Pbb}{\\rightarrow}0.$\n\n\\subsection{Theoretical Results for SSL Q-learning}\n\n\\begin{assumption}\\label{assumption: covariates}\n\t(a) The sample sizes of $\\mathcal{U}$ and $\\mathcal{L}$ are such that $n\/N\\longrightarrow 0$ as $N,n\\longrightarrow\\infty$; (b) $\\bHcheck_t\\in\\mathcal{H}_t$, $\\bXcheck_t\\in\\mathcal{X}_t$ have finite second moments and compact support in $\\mathcal{H}_t\\subset\\mathbb{R}^{q_t}$, $\\mathcal{X}_t\\subset\\mathbb{R}^{p_t}$, $t=1,2$, respectively; (c) $\\bSigma_1,\\:\\bSigma_2$ are nonsingular.\n\\end{assumption}\n\n\\begin{assumption}\\label{assumption: Q imputation}\n\tFunctions $m_s$, $s\\in\\{2,3,22,23\\}$ are such that (i) $\\sup_{\\bUvec}|m_s(\\bUvec)|<\\infty$, and (ii) the estimated functions $\\hat m_s$ satisfy $\\sup_{\\bUvec}|\\mhat_s(\\bUvec)-m_s(\\bUvec)|=o_\\Pbb(1)$. \n\\end{assumption}\n\n\n\\begin{assumption}\\label{assumption SS linear model}\nSuppose $\\Theta_1,\\Theta_2$ are open bounded sets and $p_1,p_2$ are fixed under \\eqref{linear_Qs}.\nWe define the following class of functions:\n\\begin{align*}\n\\begin{split}\n\\mathcal{Q}_t&\\equiv\\left\\{Q_t:\\mathcal{X}_t\\mapsto\\mathbb{R}\\:|\\:\\btheta_t\\in\\Theta_t\\subset\\mathbb{R}^{p_t}\\right\\},\\:t=1,2.\n\\end{split}\n\\end{align*}\nFurther suppose that, for $t=1,2$, the equations $\\mathbb{E}[S^\\theta_t(\\btheta_t)]=\\bzero$ admit solutions $\\bthetabar_1$ and $\\bthetabar_2$, where\n\\be\nS^\\theta_2(\\btheta_2)=\\frac{\\partial}{\\partial\\btheta_2\\trans}\n\\|Y_3-Q_2(\\bXcheck_2;\\btheta_2)\\|_2^2,\\:\nS^\\theta_1(\\btheta_1)=\\frac{\\partial}{\\partial\\btheta_1\\trans}\n\\|\\Ybar_2^*-Q_1(\\bXcheck_1;\\btheta_1)\\|_2^2,\n\\ee\nand that the target parameters satisfy $\\bthetabar_t\\in\\Theta_t$, $t=1,2$. We write $\\bbetabar_t,\\bgammabar_t$ for the components of $\\bthetabar_t$, partitioned as in equation \\eqref{full_EE}.\n\\end{assumption}\nAssumption \\ref{assumption: covariates} (a) distinguishes our setting from the standard missing data context. Theoretical results for the missing completely at random (MCAR) setting generally assume that the missingness probability is bounded away from zero \\citep{tsiatis2006}, which enables the use of standard semiparametric theory. However, in our setting one can intuitively view the probability of observing an outcome as $\\frac{n}{n+N}$, which converges to $0$. \n\nAssumption \\ref{assumption: Q imputation} is fairly standard as it just requires boundedness of the imputation functions -- which is natural to expect from the boundedness of the covariates.
We also require uniform convergence of the estimated functions to their limit. This allows for the normal equations targeting the imputation residuals in \\eqref{eta_EE_Q1} and \\eqref{V function reffitting} to be well defined. Moreover, several off-the-shelf flexible imputation models for estimation can satisfy these conditions. See for example, local polynomial estimators, basis expansion regression like natural cubic splines or wavelets \\citep{tsybakov2009}. In particular, it is worth noting that we do not require any specific rate of convergence. As a result, the required condition is typically much easier to verify for many off-the-shelf algorithms. It is likely that other classes of models such as random forests can satisfy Assumption \\ref{assumption: Q imputation}. Recent work suggests that it is plausible to use the existing point-wise convergence results to show uniform convergence. \\citep[see][]{scornet2015,biau2008}.\n\nAssumption \\ref{assumption SS linear model} is fairly standard in the literature and ensures well-defined population level solutions for $Q$-learning regressions $\\bthetabar$ exist, and belong to that parameter space. In this regard, we differentiate between population solutions $\\bthetabar$ and true model parameters $\\btheta^0$ shown in equation \\eqref{linear_Qs}. If the working models are mis-specified, Theorems \\ref{theorem: unbiased theta2} and \\ref{theorem: unbiased theta1} still guarantee the $\\bthetahat$ is consistent and asymptotically normal centered at the population solution $\\bthetabar$. However, when equation \\eqref{linear_Qs} is correct, $\\bthetahat$ is asymptotically normal and consistent for the true parameter $\\btheta^0$. Now we are ready to state the theoretical properties of the semi-supervised $Q$-learning procedure described in Section \\ref{sec: ssQ}. \n\n\\begin{theorem}[Distribution of $\\bthetahat_2$]\\label{theorem: unbiased theta2} \nUnder Assumptions \\ref{assumption: covariates}-\\ref{assumption SS linear model}, $\\bthetahat_2$ satisfies\n\\[\n\\sqrt n\n(\\bthetahat_2-\\bthetabar_2)\n=\n\\bSigma_2^{-1}\\frac{1}{\\sqrt n}\\sum_{i=1}^n\n\\bpsi_2(\\bL_i;\\bthetabar_2)\n+o_\\Pbb\\left(1\\right)\\stackrel{d}{\\rightarrow}\\mathcal{N}\\bigg({\\bf 0},\\bV_{2 \\scriptscriptstyle \\sf SSL}(\\bthetabar_2)\\bigg),\n\\]\n\nwhere $\\bSigma_2=\\Ebb[\\bXcheck_2\\bXcheck_2\\trans]$ is defined in Section \\ref{section: set up}, the influence function $\\bpsi_2$ is given by\n\\[\n\\bpsi_2(\\bL;\\bthetabar_2)\n=\n\\begin{bmatrix}\n\\{Y_2Y_3-\\mubar_{23}(\\bUvec)\\}-\\bar\\beta_{21}\n\\{Y_2^2-\\mubar_{22}(\\bUvec)\\}-\nQ_{2-}(\\bH_2,A_2;\\bthetabar_2)\n\\{Y_2-\\mubar_2(\\bUvec)\\}\\\\\n\\bX_2\n\\{Y_3-\\mubar_3(\\bUvec)\\}-\\bar\\beta_{21}\n\\bX_2\n\\{Y_2-\\mubar_2(\\bUvec)\\}\n\\end{bmatrix},\n\\]\nand $\n\\bV_{2 \\scriptscriptstyle \\sf SSL}(\\bthetabar_2)=\\bSigma_2^{-1}\\Ebb\\left[\\bpsi_2(\\bL;\\bthetabar_2)\\bpsi_2(\\bL;\\bthetabar_2)\\trans\\right]\\left(\\bSigma_2^{-1}\\right)\\trans$. \\end{theorem}\n\n\nWe hold off remarks until the end of the results for the $Q$-learning parameters. Since the first stage regression depends on the second stage regression through a non-smooth maximum function, we make the following standard assumption \\citep{laber2014} in order to provide valid statistical inference. \n\\begin{assumption}\\label{assumption: non-regularity} Non-zero estimable population treatment effects $\\bgammabar_t$, $t=1,2$: i.e. 
the population solution to \\eqref{full_EE}, is such that (a) $\\bH_{21}\\trans\\bgammabar_2\\neq0$ for all $\\bH_{21}\\neq\\bzero$, and (b) $\\bgammabar_1$ is such that $\\bH_{11}\\trans\\bgammabar_1\\neq0$ for all $ \\bH_{11}\\neq\\bzero$. \n\\end{assumption}\n\nAssumption \\ref{assumption: non-regularity} yields regular estimators for the stage one regression and the value function, which depend on non-smooth components of the form $[x]_+$. This is needed to achieve asymptotic normality of the $Q$-learning parameters for the first stage regression. Note that the estimating equation for the stage one regression in Section \\ref{section: SSQL} includes $\\left[\\bH_{21}\\trans\\bgammahat_2\\right]_+$. Thus, for the asymptotic normality of $\\bthetahat_1$, we require $\\sqrt n\\Pbb_n\\left(\\left[\\bH_{21}\\trans\\bgammahat_2\\right]_+-\\left[\\bH_{21}\\trans\\bar{\\bgamma}_2\\right]_+\\right)$ to be asymptotically normal.\nThe latter is automatically true if $\\bH_{11}$ contains continuous covariates as $\\Pbb\\left(\\bH_{21}\\trans\\bgammabar_2=0\\right)=0$. Violation of Assumption \\ref{assumption: non-regularity} will yield non-regular estimates which translate into poor coverage for the confidence intervals (see \\citet{laber2014} for a thorough discussion on this topic).\n\n\\begin{theorem}[Distribution of $\\hat{\\btheta}_{1}$]\\label{theorem: unbiased theta1} \n\tUnder Assumptions \\ref{assumption: covariates}-\\ref{assumption SS linear model}, and \\ref{assumption: non-regularity} (a), $\\bthetahat_1$ satisfies\n\t\n\t\\[\n\t\\sqrt n(\\hat{\\btheta}_1-\\bthetabar_1)=\\bSigma_1^{-1}\\frac{1}{\\sqrt n}\\sum_{i=1}^n\\bpsi_1(\\bL_i;\\bthetabar_1)+o_\\Pbb(1)\\stackrel{d}{\\rightarrow}\\mathcal{N}\\bigg({\\bf 0},\\bV_{1 \\scriptscriptstyle \\sf SSL}(\\bthetabar_1)\\bigg)\n\t\\]\n\twhere $\\bSigma_1^{-1}=\\Ebb[\\bXcheck_1\\bXcheck_1\\trans]$, the influence function $\\bpsi_1$ is given by\n\t\\begin{align*}\n\t\\bpsi_1(\\bL;\\bthetabar_1)=\n\t&\\bX_1(1+\\bar\\beta_{21})\\{Y_2-\\mubar_2(\\bUvec)\\}\n\t+\n\t\\mathbb{E}\\left[\\bX_1\n\t\\left(Y_2,\\bH_{20}\\trans\\right)\\right]\n\t\\bpsi_{\\beta_2}(\\bL;\\bthetabar_2)\\\\\n\t+&\n\t\\mathbb{E}\\left[\\bX_1\\bH_{21}\\trans|\\bH_{21}\\trans\\bgammabar_2>0\\right]\\Pbb\\left(\\bH_{21}\\trans\\bgammabar_2>0\\right)\n\t\\bpsi_{\\gamma_2}(\\bL;\\bthetabar_2\n\t),\n\t\\end{align*} \n\t$\\bV_{1 \\scriptscriptstyle \\sf SSL}(\\bthetabar_1)=\\bSigma_1^{-1}\\Ebb\\left[\\bpsi_1(\\bL;\\bthetabar_1)\\bpsi_1(\\bL;\\bthetabar_1)\\trans\\right]\\left(\\bSigma_1^{-1}\\right)\\trans$, and $\\bpsi_{\\beta_2}$, $\\bpsi_{\\gamma_2}$ are the elements corresponding to $\\bbetabar_2$, $\\bgammabar_2$ of the influence function $\\bpsi_2$ defined in Theorem \\ref{theorem: unbiased theta2}.\n\\end{theorem}\n\n\n\\begin{remark}\n1) Theorems \\ref{theorem: unbiased theta2} and \\ref{theorem: unbiased theta1} establish the $\\sqrt n$-consistency and asymptotic normality (CAN) of $\\bthetahat_1,\\bthetahat_2$ for any $K\\geq2$. Beyond asymptotic normality at $\\sqrt n$ scale, these theorems also provide an asymptotic linear expansion of the estimators with influence functions $\\bpsi_1$ and $\\bpsi_2$ respectively. \n\n2) $\\bV_{1 \\scriptscriptstyle \\sf SSL}(\\bthetabar)$, $\\bV_{2 \\scriptscriptstyle \\sf SSL}(\\bthetabar)$ reflect an efficiency gain over the fully supervised approach due to sample $\\mathcal{U}$ and the surrogates contribution in prediction performance. 
This gain is formalized in Proposition \\ref{lemma: Q parameter var} which quantifies how correlation between surrogates and outcome increases efficiency.\\\\\n3) Let $\\bpsi=[\\bpsi_1\\trans,\\bpsi_2\\trans]\\trans$, we collect the vector of estimated $Q$-learning parameters $\\btheta=(\\btheta_1\\trans,\\btheta_2\\trans)\\trans$, then under Assumptions \\ref{assumption: covariates}-\\ref{assumption SS linear model}, \\ref{assumption: non-regularity} (a), we have\n \\begin{align*}\n\t\\sqrt n\n\t(\\bthetahat-\\bthetabar)\n\t=&\n\t\\bSigma^{-1}\\frac{1}{\\sqrt n}\\sum_{i=1}^n\n\t\\bpsi(\\bL_i;\\bthetabar)\n\n\t+o_\\Pbb(1)\\stackrel{d}{\\rightarrow}\\mathcal{N}\\bigg({\\bf 0},\\bV\\subSSL\\left(\\bthetabar\\right)\\bigg)\n\\end{align*}\nwith $\\bV\\subSSL(\\bthetabar)=\\bSigma^{-1}\\Ebb\\left[\\bpsi(\\bL;\\bthetabar)\\bpsi(\\bL;\\bthetabar)\\trans\\right]\\left(\\bSigma^{-1}\\right)\\trans$.\\\\\n4) Theorems \\ref{theorem: unbiased theta2} and \\ref{theorem: unbiased theta1} hold even when the $Q$ functions are mis-specified, that is, $\\bthetahat_1,\\bthetahat_2$ are CAN for $\\bthetabar_1,\\bthetabar_2$. Furthermore, if model \\eqref{linear_Qs} is correctly specified then we can simply replace $\\bthetabar$ with $\\btheta^0$ in the above result.\\\\\n3) We estimate $\\bV\\subSSL(\\bthetabar)$ via sample-splitting as\n\\begin{align*}\n\\widehat\\bV\\subSSL(\\bthetahat)&=\\widehat\\bSigma^{-1}\\widehat\\bA(\\btheta)\\left(\\widehat\\bSigma^{-1}\\right)\\trans,\\text{ where}\\\\\n\\widehat\\bA(\\bthetahat)&=\nn^{-1}\\sum_{k=1}^{K}\\sum_{i\\in\\mathcal{I}_k}\n\\bpsi\\supnk\\left(\\bL_i;\\bthetahat\\right)\\bpsi\\supnk\\left(\\bL_i;\\bthetahat\\right)\\trans,\\\\\n\\widehat\\bSigma_t&=\n\\Pbb_n\n\\left\\{\\bX_t\\bX_t\\trans\\right\\},\\:\\:t=1,2.\n\\end{align*}\n\\end{remark}\n\n\nNote that we can decompose $\\bpsi$ into the influence function for each set of parameters. For example, we have $\\bpsi_2=\\left(\\bpsi_{\\beta_2}\\trans,\\bpsi_{\\gamma_2}\\trans\\right)\\trans$ where \n$\n\\bpsi_{\\gamma_2}(\\bL;\\bthetabar_2)=\\bH_{21}A_2\\left[\\{Y_3-\\mubar_3(\\bUvec)\\}-\\bar\\beta_{21}\\{Y_2-\\mubar_2(\\bUvec)\\}\\right].\n$\nTherefore we can decompose the variance-covariance matrix into a component for each parameter, the variance-covariance for the treatment effect for stage 2 regression $\\bgamma_2$ is \\[\n\\mathbb{E}\\left[\\bpsi_{\\gamma_2}(\\bL;\\bthetabar_2)\\bpsi_{\\gamma_2}(\\bL;\\bthetabar_2)\\trans\\right]=\n\\mathbb{E}\\left[\\bH_{21}\\bH_{21}\\trans A_{2}^2\\left\\{\n Y_3-\\mubar_3(\\bUvec)-\\beta_{21}\n \\left(Y_2-\\mubar_2(\\bUvec)\\right)\\right\\}^2\\right].\n\\]\nThis gives us some insight into how the predictive power of $\\bUvec$, which contains surrogates $\\bW_1,\\bW_2$, decreases parameter standard errors. This is the case for the influence functions for estimating $\\bthetabar_1$, $\\bthetabar_2$ as well. We formalize this result with the following proposition. Let $\\bthetahat_{\\subSUP}$ be the estimator for the fully supervised $Q$-learning procedure (i.e. only using labeled data), with influence function and asymptotic variance denoted as $\\bpsi\\subSUP$ and $\\bV\\subSUP$ respectively (see Appendix \\ref{section: Q learning result proofs} for the exact form of $\\bpsi\\subSUP$ and $\\bV\\subSUP$). \n\nFor the following proposition we need the imputation models $\\mubar_s$, $s\\in\\{2,3,22,23\\}$ to satisfy additional constraints of the form $\\Ebb\\left[\\bX_2\\bX_2\\trans\\{Y_2Y_3-\\mubar_{23}(\\bUvec)\\}\\right]=\\bzero$. 
We list them in Assumption \\ref{assumption: additional constraints} in Appendix \\ref{section: Q learning result proofs}. One can construct estimators which satisfy such conditions by simply augmenting $\\bEta_2,$ $\\eta_{22},$ $\\bEta_3,$ $\\eta_{23}$ in \\eqref{eta_EE_Q1} with additional terms in the refitting step.\n\n\n\\begin{proposition}\\label{lemma: Q parameter var}\nUnder Assumptions \\ref{assumption: covariates}-\\ref{assumption SS linear model}, \\ref{assumption: non-regularity} (a), and \\ref{assumption: additional constraints}, we have\n\\[\n\\bV\\subSSL(\\bthetabar)=\\bV\\subSUP(\\bthetabar)-\\bSigma^{-1}\\text{Var}\\left[\\bpsi\\subSUP(\\bL;\\bthetabar)-\\bpsi\\subSSL(\\bL;\\bthetabar)\\right]\\left(\\bSigma^{-1}\\right)\\trans.\n\\]\n\n\\end{proposition}\n\\begin{remark}\nProposition \\ref{lemma: Q parameter var} illustrates how the estimates for the semi-supervised $Q$-learning parameters are at least as efficient as, and potentially more efficient than, the supervised ones. Intuitively, the difference in efficiency is explained by how much information is gained by incorporating the surrogates $\\bW_1,\\bW_2$ into the estimation procedure. If there is no new information in the surrogate variables, then the residuals found in $\\bpsi\\subSSL(\\bL;\\btheta)$ will be of similar magnitude to those in $\\bpsi\\subSUP(\\bL;\\btheta)$, and thus the difference in efficiency will be small: $\\text{Var}\\left[\\bpsi\\subSUP(\\bL;\\bthetabar)-\\bpsi\\subSSL(\\bL;\\bthetabar)\\right]\\approx0$. In this case, both methods will yield equally efficient parameters. The gain in precision is especially relevant for the treatment interaction coefficients $\\bgamma_1,\\bgamma_2$ used to learn the dynamic treatment rules. Finally, note that for Proposition \\ref{lemma: Q parameter var}, we do not need the correct specification of the $Q$-functions or the imputation models.\n\\end{remark}\n\n\n\\subsection{Theoretical Results for SSL Estimation of the Value Function}\n\nIf model \\eqref{linear_Qs} is correct, one only needs to add Assumption \\ref{assumption: non-regularity} (b)\nfor $\\Pbb_N\\{\\Qopt_1(\\bH_1;\\bthetahat_1)\\}$ to be a consistent estimator of the value function $\\Vbar$ \\citep{zhu2019}. However, as we discussed earlier, \\eqref{linear_Qs} is likely mis-specified. Therefore, we show our semi-supervised value function estimator is doubly robust. We also show it is asymptotically normal and more efficient than its supervised counterpart. 
To that end, define the following class of functions:\n\t\\[\n\t\\mathcal{W}_t\\equiv\\left\\{\\pi_t:\\mathcal{H}_t\\mapsto\\mathbb{R}|\\bxi_t\\in\\Omega_t\\right\\},\\:t=1,2,\n\t\\]\nunder the propensity score models $\\pi_1,\\:\\pi_2$ in \\eqref{logit_Ws}.\n\\begin{assumption}\\label{assumption: donsker w}\n\t\nLet the population equations $\\Ebb\\left[S^\\xi_t(\\bHcheck_t;\\bxi_t)\\right]=\\bzero, t=1,2$ have solutions $\\bxibar_1,\\bxibar_2$, where\n\\be\nS^\\xi_t(\\bHcheck_t;\\bxi_t)=&\\frac{\\partial}{\\partial\\bxi_t}\\log \\left[\\pi_t(\\bHcheck_t;\\bxi_t)^{A_t}\\{1-\\pi_t(\\bHcheck_t;\\bxi_t)\\}^{(1-A_t)}\\right],\\:t=1,2,\n\\ee\n(i) $\\Omega_1,\\Omega_2$ are open, bounded sets and the population solutions satisfy $\\bxibar_t\\in\\Omega_t,t=1,2$,\\\\\n\t(ii) for $\\bxibar_t,t=1,2$, $\\inf\\limits_{\\bHcheck_t\\in\\mathcal{H}_t}\\pi_t(\\bHcheck_t;\\bxibar_t)>0$,\\\\\n\t(iii) Finite second moment: $\\Ebb\\left[S^\\xi_t(\\bHcheck_t;\\bTheta_t)^2\\right]<\\infty$, and the Fisher information matrix $\\Ebb\\left[\\frac{\\partial}{\\partial\\bxi_t}S^\\xi_t(\\bHcheck_t;\\bTheta_t)\\right]$ exists and is nonsingular,\\\\\n\t(iv) Second-order partial derivatives of $S^\\xi_t(\\bHcheck_t;\\bTheta_t)$ with respect to $\\bxi$ exist for every $\\bHcheck_t$ and satisfy $|\\partial^2S^\\xi_t(\\bHcheck_t;\\bTheta_t)\/\\partial\\bxi_i\\partial\\bxi_j|\\leq \\tilde S_t(\\bHcheck_t)$ for some integrable measurable function $\\tilde S_t$ in a neighborhood of $\\bxibar$.\n\n\\end{assumption}\n\n\\begin{assumption}\\label{assumption: V imputation}\nFunctions $m_2,m_{\\omega_2},m_{t\\omega_2}$, $t=2,3$, are such that (i) $\\sup_{\\bUvec}|m_s(\\bUvec)|<\\infty$, and (ii) the estimated functions $\\hat m_s$ satisfy $\\sup_{\\bUvec}|\\mhat_s(\\bUvec)-m_s(\\bUvec)|=o_\\Pbb(1)$, $s\\in\\{2,\\omega_2,2\\omega_2,3\\omega_2\\}$.\n\\end{assumption}\nAssumption \\ref{assumption: donsker w} is standard for Z-estimators \\citep[see][Ch.~5.6]{vaart_donsker}. Assumption \\ref{assumption: V imputation} is the propensity-score analogue of Assumption \\ref{assumption: Q imputation}. Finally, we use $\\bpsi^\\xi$ and $\\bpsi^\\theta$ to denote the influence functions for $\\bxihat$ and $\\bthetahat$, respectively. We are now ready to state our theoretical results for the value function estimator in equation \\eqref{SS_value_fun}. 
The proof, and the exact form of $\\bpsi^\\xi$ can be found in Appendix \\ref{appendix: value fun}.\n\n\\begin{theorem}[Asymptotic Normality for $\\Vhat\\subSSLDR$]\\label{thrm_ssV_fun}\n\tUnder Assumptions \\ref{assumption: covariates}-\\ref{assumption: V imputation}, $\\Vhat\\subSSLDR$ defined in \\eqref{SS_value_fun} satisfies\n\t\\[\n\t\\sqrt n\n\t\\left\\{\n\t\\Vhat\\subSSLDR-\\mathbb{E}_\\mathbb{S}\\left[\\Vsc\\subSSLDR(\\bL;\\bThetabar,\\mubar)\\right]\n\t\\right\\}\n\t=\n\t\\frac{1}{\\sqrt n}\\sum_{i=1}^n\\psi^{v}_{\\subSSLDR}(\\bL_i;\\bThetabar)\n\t+o_\\Pbb\\left(1\\right),\n\t\\]\n\n\twhere\n\t\\[\n\t\\frac{1}{\\sqrt n}\\sum_{i=1}^n\\psi^{v}_{\\subSSLDR}(\\bL_i;\\bThetabar)\n\t\\stackrel{d}{\\longrightarrow}\\mathcal{N}\\left(0,\\sigma^2\\subSSLDR\\right).\n\t\\]\n\tHere\n\t\\begin{align*}\n\t\\psi^{v}\\subSSLDR(\\bL;\\bThetabar)\n\t=&\n\t\\nu\\subSSLDR(\\bL;\\bThetabar)\n\t+\n\t\\bpsi^\\theta(\\bL)\\trans\n\t\\frac{\\partial}{\\partial \\btheta}\\int\\Vsc\\subSUPDR(\\bL;\\bTheta)d\\Pbb_{\\bL}\\bigg|_{\\bTheta=\\bThetabar}\\\\\n\t&\\hspace{2.1cm}+\n\t\\bpsi^\\xi(\\bL)\\trans\n\t\\frac{\\partial}{\\partial \\bxi}\\int\\Vsc\\subSUPDR(\\bL;\\bTheta)d\\Pbb_{\\bL}\\bigg|_{\\bTheta=\\bThetabar},\\\\\n\t\\nu_{\\subSSLDR}(\\bL;\\bThetabar)\n\t=&\n\t\\omega_1(\\bHcheck_1,A_1;\\bThetabar_1)\n\t(1+\\bar\\beta_{21})\n\t\\left\\{\n\tY_2-\\mubar_2^v(\\bUvec)\n\t\\right\\}\n\t+\n\t\\omega_2(\\bHcheck_2,A_2,\\bThetabar_2)Y_3-\n\t\\mubar_{3\\omega_2}(\\bUvec)\\\\\n\t-&\n\t\\bar\\beta_{21}\\left\\{\\omega_2(\\bHcheck_2,A_2,\\bThetabar_2)Y_2-\n\t\\mubar_{2\\omega_2}(\\bUvec)\\right\\}\n\t-\n\t\\Qopt_{2-}(\\bH_2; \\bthetabar_2)\n\t\\left\\{\n\t\\omega_2(\\bHcheck_2,A_2,\\bThetabar_2)-\\mubar_{\\omega_2}(\\bUvec)\n\t\\right\\}\n\t,\n\t\\end{align*}\n\t$\\sigma\\subSSLDR^2=\\Ebb\\left[\\psi^{v}_{\\subSSLDR}(\\bL;\\bThetabar)^2\\right],$ and $\\Vsc\\subSUPDR(\\bL;\\bTheta)$ is as defined in \\eqref{eq: lab value fun}.\n\\end{theorem}\n\n\n\\begin{proposition}[Double Robustness of $\\Vhat\\subSSLDR$ as an estimator of $\\Vbar$]\\label{cor_dr_V} (a) If either $\\|Q_t(\\bHcheck_t, A_t; \\bthetahat_t)-Q_t(\\bHcheck_t,A_t)\\|_{L_2(\\Pbb)}\\rightarrow 0$, or $\\| \\pi_t(\\bHcheck_t; \\bxihat_t)-\\pi_t(\\bHcheck_t)\\|_{L_2(\\Pbb)}\\rightarrow 0$ for $t=1,2$, then under Assumptions \\ref{assumption: covariates}-\\ref{assumption: V imputation}, $\\Vhat\\subSSLDR$ satisfies\n\n\t\\[\n\t\\Vhat\\subSSLDR\\stackrel{\\Pbb}{\\longrightarrow}\\Vbar.\n\t\\]\n\t\n\t(b) If $\\|Q_t(\\bHcheck_t, A_t; \\bthetahat_t)-Q_t(\\bHcheck_t,A_t)\\|_{L_2(\\Pbb)}\\| \\pi_t(\\bHcheck_t; \\bxihat_t)-\\pi_t(\\bHcheck_t)\\|_{L_2(\\Pbb)}=o_\\Pbb\\left(n^{-\\frac{1}{2}}\\right)$ for $t=1,2$, then under Assumptions \\ref{assumption: covariates}-\\ref{assumption: V imputation}, $\\Vhat\\subSSLDR$ satisfies\n\n\t\\[\n\t\\sqrt n\\left(\\Vhat\\subSSLDR-\\Vbar\\right)\\stackrel{d}{\\longrightarrow}\\mathcal{N}\\left(0,\\sigma^2\\subSSLDR\\right).\n\t\\]\n\\end{proposition}\n\nNext we define the supervised influence function for estimator $\\Vhat\\subSUPDR$. Let $\\bpsi^\\theta\\subSUP$, be the influence function for the supervised estimator $\\bthetahat\\subSUP$ for model \\eqref{linear_Qs}. 
The influence function for SUP$\\subDR$ Value Function Estimation estimator \\eqref{eq: lab value fun} and its variance is (see Theorem \\ref{thrm_supV_fun} in Appendix \\ref{app_DR_Vfun}):\n\n\\begin{align*}\n\t\\psi^{v}\\subSUPDR(\\bL;\\bThetabar)\n\t=&\n\t\\Vsc\\subSUPDR(\\bL;\\bThetabar)-\\mathbb{E}_\\mathbb{S}\\left[\\Vsc\\subSUPDR(\\bL;\\bThetabar)\\right]\\\\\n\t+&\n\t\\bpsi\\subSUP^\\theta(\\bL)\\trans\n\t\\frac{\\partial}{\\partial \\btheta}\\int\\Vsc\\subSUPDR(\\bL;\\bTheta)d\\Pbb_{\\bL}\\bigg|_{\\bTheta=\\bThetabar}\n\t+\n\t\\bpsi^\\xi(\\bL)\\trans\n\t\\frac{\\partial}{\\partial \\bxi}\\int\\Vsc\\subSUPDR(\\bL;\\bTheta)d\\Pbb_{\\bL}\\bigg|_{\\bTheta=\\bThetabar},\\\\\n\t\\sigma\\subSUPDR^2=&\\mathbb{E}\\left[\\psi^{v}_{\\subSUPDR}(\\bL;\\bThetabar)^2\\right].\n\\end{align*}\n\nThe flexibility of our SSL value function estimator $V\\subSSLDR$, allows the use of either supervised or SSL approach for estimation of propensity score nuisance parameters $\\bxi$. For SSL estimation, we can use an approach similar to Section \\ref{sec: ssQ}, \\citep[see][Ch. 2]{chakrabortty} for details. This can be beneficial in that we can then quantify the efficiency gain of $V\\subSSLDR$ vs. $V\\subSUPDR$ by comparing the asymptotic variances. In light of this, we assume SSL is used for $\\bxi$ when estimating $V\\subSSLDR$.\n\nBefore stating the result we discuss an additional requirement for the imputation models. As for Proposition \\ref{lemma: Q parameter var}, models $\\mubar^v_2(\\bUvec),$ $\\mubar^v_{\\omega_2}(\\bUvec),$ $\\mubar^v_{t\\omega_2}(\\bUvec),$ $t=2,3$ need to satisfy a few additional constraints of the form \n\\[\n\\Ebb\\left[\\omega_1(\\bHcheck_1,A_1;\\bThetabar_1)\\Qopt_{2-}(\\bH_2;\\bthetabar_1)\\{Y_2-\\mubar^v_2(\\bUvec)\\}\\right]=\\bzero.\n\\]\nAs there are several constraints, we list them in Appendix \\ref{appendix: value fun}, and condense them in Assumption \\ref{assumption: value additional constraints}, Appendix \\ref{appendix: value fun}. Again, one can construct estimators which satisfy such conditions by simply augmenting $\\eta_2^v,$ $\\eta_{\\omega_2}^v,$ $\\eta_{t\\omega_2}^v,$ $t=2,3$ in \\eqref{V function reffitting} with additional terms in the refitting step.\n\n\\begin{proposition}\\label{lemma: v funcion var}\nUnder Assumptions \\ref{assumption: covariates}-\\ref{assumption: V imputation}, and \\ref{assumption: value additional constraints}, asymptotic variances\n$\\sigma\\subSSLDR^2$, $\\sigma\\subSUPDR^2$ satisfy \n\\[\n\\sigma\\subSSLDR^2=\\sigma\\subSUPDR^2-\\text{Var}\\left[\\psi^{v}\\subSUPDR(\\bL;\\bThetabar)-\\psi^{v}\\subSSLDR(\\bL;\\bThetabar)\\right].\n\\]\n\\end{proposition}\n\n\n\\begin{remark}\\label{remark: se for V}\n\n\n1) Proposition \\ref{cor_dr_V} illustrates how $\\Vhat\\subSSLDR$ is asymptotically unbiased if either the $Q$ functions or the propensity scores are correctly specified.\\\\\n2) An immediate consequence of Proposition \\ref{lemma: v funcion var} is that the semi-supervised estimator is at least as efficient (or more) as its supervised counterpart, that is $\\text{Var}\\left[\\psi\\subSSLDR(\\bL;\\bTheta)\\right]\\le\\text{Var}\\left[\\psi\\subSUPDR(\\bL;\\bTheta)\\right]$. 
As with Proposition \\ref{lemma: Q parameter var}, the difference in efficiency is explained by the information gain from incorporating surrogates.\\\\\n3) To estimate standard errors for $V\\subSSLDR(\\bUvec;\\bThetabar)$, we will approximate the derivatives of the expectation terms $\\frac{\\partial}{\\partial \\bTheta}\\int\\Vsc\\subSUPDR(\\bL;\\bThetabar)d\\Pbb_{\\bL}$ using kernel smoothing to replace the indicator functions. In particular, let $\\mathbb{K}_h(x)=\\frac{1}{h}\\sigma(x\/h)$, $\\sigma$ defined as in \\eqref{logit_Ws}, we approximate $d_t(\\bH_t,\\btheta_2)=I(\\bH_{t1}\\trans\\bgamma_t>0)$ with $\\mathbb{K}_h(\\bH_{t1}\\trans\\bgamma_t)$ $t=1,2$, and define the smoothed propensity score weights as\n\\begin{align*}\n\\tilde\\omega_1(\\bHcheck_1,A_1,\\bTheta)\n&\\equiv\n\\frac{A_1\\mathbb{K}_h(\\bH_{11}\\trans\\bgamma_1)}{\\pi_1(\\bHcheck_1;\\bxi_1)}\n+\n\\frac{\\left\\{1-A_1\\right\\}\\left\\{1-\\mathbb{K}_h(\\bH_{11}\\trans\\bgamma_1)\\right\\}}{1-\\pi_1(\\bHcheck_1;\\bxi_1)}, \\quad \\mbox{and}\\\\ \\tilde\\omega_2(\\bHcheck_2,A_2,\\bTheta)\n&\\equiv\n\\tilde\\omega_1(\\bHcheck_1,A_1,\\bTheta)\\left[\\frac{A_2\\mathbb{K}_h(\\bH_{21}\\trans\\bgamma_2)}{\\pi_2(\\bHcheck_2;\\bxi_2)}+\n\\frac{\\left\\{1-A_2\\right\\}\\left\\{1-\\mathbb{K}_h(\\bH_{21}\\trans\\bgamma_2)\\right\\}}{1-\\pi_2(\\bHcheck_2;\\bxi_2)}\\right].\n\\end{align*}\n\nWe simply replace the propensity score functions with these smooth versions in $\\psi^{v}_{\\subSSLDR}(\\bL;\\bThetabar)$, detail is given in Appendix \\ref{variance estimation of Vss}. To estimate the variance we use the sample-split estimators:\n\\[\n\\hat\\sigma^2\\subSSLDR=\nn^{-1}\\sum_{k=1}^{K}\\sum_{i\\in\\mathcal{I}_k}\\psi^{v{\\scriptscriptstyle (\\text{-}k)}}\\subSSLDR(\\bUvec_i;\\bThetahat)^2.\n\\]\n\n\n\n\\end{remark}\n\n\n\n\\section{Simulations and application to EHR data:}\\label{section: sims and application}\nWe perform extensive simulations to evaluate the finite sample performance of our method. Additionally we apply our methods to an EHR study of treatment response for patients with inflammatory bowel disease to identify optimal treatment sequence. These data have treatment response outcomes available for a small subset of patients only. \n\n\\subsection{Simulation results}\nWe compare our SSL Q-learning methods to fully supervised $Q$-learning using labeled datasets of different sizes and settings. We focus on the efficiency gains of our approach. First we discuss our simulation settings, then go on to show results for the $Q$ function parameters under correct and incorrect working models for \\eqref{linear_Qs}. We then show value function summary statistics under correct models, and mis-specification for the $Q$ models in \\eqref{linear_Qs} and the propensity score function $\\pi_2$ in \\eqref{logit_Ws}.\n\nFollowing a similar set-up as in \\citet{schulte2014}, we first consider a simple scenario with a single confounder variable at each stage with $\\bH_{10}=\\bH_{11}=(1,O_1)\\trans$, $\\bHcheck_{20}=(Y_2,1,O_1,A_1,O_1A_1,O_2)\\trans$, and $\\bH_{21}=(1,A_1,O_2)\\trans$. 
Specifically, we sequentially generate \n\\begin{alignat*}{3}\n& O_1\\sim \\Bern(0.5), &\\quad& A_1\\sim \\Bern(\\sigma\\left\\{\\bH_{10}\\trans\\bxi_1^0\\right\\}),&\\quad& Y_2\\sim\\Nsc(\\bXcheck_1\\trans\\btheta^0_1, 1), \\\\\n& O_2 \\sim \\Nsc(\\bHcheck_{20}\\trans\\pmb\\delta^0,2),&\\quad&\nA_2 \\sim\\Bern\\left(\\sigma\\left\\{\\bH_{20}\\trans\\bxi_2^0+\\xi_{26}^0O_2^2\\right\\}\\right), \\quad \\mbox{and} &\\quad& Y_3 \\sim\\Nsc(m_3\\left\\{\\bHcheck_{20}\\right\\}, 2). \n\\end{alignat*}\nwhere $m_3\\{\\bHcheck_{20}\\}=\\bH_{20}\\trans\\bbeta^0_2+A_2(\\bH_{21}\\trans\\bgamma^0_2)+\\beta_{27}^0O_2^2Y_2\\sin\\{[O_2^2(Y_2+1)]^{-1}\\}$. \nSurrogates are generated as $W_t=\\floor{Y_{t+1}+Z_t},$ $Z_t\\sim\\Nsc(0,\\sigma^2_{z,t})$, $t=1,2$ where $\\floor{x}$ corresponds to the integer part of $x\\in\\mathbb{R}$. Throughout, we let $\\bxi_1^0=(0.3,-0.5)\\trans$, $\\bbeta_1^0=(1,1)\\trans$, $\\bgamma_1^0=(1,-2)\\trans$\n$\\pmb\\delta^0=(0,0.5,-0.75,0.25)\\trans$, $\\bxi^0_2=(0, 0.5, 0.1,-1,-0.1)\\trans$ $\\bbeta_2^0=(.1, 3, 0,0.1,-0.5,-0.5)\\trans$, $\\bgamma_2^0=(1, 0.25, 0.5)\\trans$. \n\n\nWe consider an additional case to mimic the structure of the EHR data set used for the real-data application. Outcomes $Y_t$ are binary, and we use a higher number of covariates for the $Q$ functions and multivariate count surrogates $\\bW_t$ $t=1,2$. Data is simulated with $\\bH_{10}=(1,O_1,\\dots,O_6)\\trans$, $\\bH_{11}=(1,O_2,\\dots,O_6)\\trans$, $\\bHcheck_{20}=(Y_2,1,O_1,\\dots,O_6,A_1,Z_{21},Z_{22})\\trans$, and $\\bH_{21}=(1,O_1,\\dots,O_4,A_1,Z_{21},Z_{22})\\trans$, generated according to\n\\begin{alignat*}{3}\n& \\bO_1\\sim\\Nsc(\\bzero,I_6), &\\quad& A_1\\sim\\Bern(\\sigma\\{\\bH_{10}\\trans\\bxi_1^0\\}),&\\quad& Y_2\\sim\\Bern(\\sigma\\{\\bXcheck_1\\trans\\btheta^0_1\\}), \\\\\n& \\bO_2=\\left[I\\left\\{Z_1>0\\right\\},I\\left\\{Z_2>0\\right\\}\\right]\\trans&\\quad&\nA_2 \\sim\\Bern\\left(\\tilde m_2\\{\\bHcheck_{20}\\}\\right), \\quad \\mbox{and} &\\quad& Y_3 \\sim\\Bern(\\tilde m_3\\left\\{\\bHcheck_{20}\\right\\}),\n\\end{alignat*}\nwith $\\tilde m_2=\\sigma\\left\\{\\bH_{20}\\trans\\bxi_2^0+\\tilde\\bxi_2\\trans\\bO_2\\right\\}$, $\\tilde m_3(\\bHcheck_{20})=\\bH_{20}\\trans\\bbeta^0_2+A_2(\\bH_{21}\\trans\\bgamma^0_2)+\\tilde\\bbeta_2\\trans\\bO_2Y_2\\sin\\{\\|\\bO_2\\|^2_2\/(Y_2+1)\\}$ and $Z_l=O_{1l}\\delta_l^0+\\epsilon_z$, $\\epsilon_z\\sim\\Nsc(0,1)$ $l=1,2$. The dimensions for the $Q$ functions are 13 and 37 for the first and second stage respectively, which match with our IBD dataset discussed in Section \\ref{section: IBD}. The surrogates are generated according to $\\bW_t=\\floor{\\bZ_t}$, with $\\bZ_t\\sim\\Nsc\\left(\\balpha\\trans(1,\\bO_t,A_t,Y_t),I\\right)$. Parameters are set to $\\bxi_1^0=(-0.1,1,-1,0.1)\\trans$, $\\bbeta_1^0=(0.5, 0.2,-1,-1,0.1,-0.1,0.1)\\trans$, $\\bgamma_1^0=(1, -2,-2,-0.1,0.1,-1.5)\\trans$, $\\bxi^0_2=(0, 0.5, 0.1,-1,1,-0.1)\\trans$, $\\bbeta_2^0=(1,\\bbeta_1^0, 0.25,-1,-0.5)\\trans$, $\\bgamma_2^0=(1, 0.1,-0.1,0.1,-0.1,0.25,-1,-0.5)\\trans$, and $\\balpha=(1,\\bzero,1)\\trans$.\n\nFor all settings, we fit models $Q_1(\\bH_1,A_1)=\\bH_{10}\\trans\\bbeta^0_1+A_1(\\bH_{11}\\trans\\bgamma^0_1)$, $Q_2(\\bHcheck_2,A_2)=\\bHcheck_{20}\\trans\\bbeta^0_2+A_2(\\bH_{21}\\trans\\bgamma^0_2)$ for the $Q$ functions, $\\pi_1(\\bH_1)=\\sigma\\left(\\bH_{10}\\trans\\bxi_1\\right)$ and $\\pi_2(\\bHcheck_2)=\\sigma\\left(\\bH_{20}\\trans\\bxi_2\\right)$ for the propensity scores. 
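For concreteness, a minimal sketch of the stage-one portion of this continuous-outcome design is given below (Python\/numpy; the surrogate noise level $\\sigma_{z,1}$ is set to a placeholder value of one since it is not pinned down above, and the function name is purely illustrative); stage two is generated analogously.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.special import expit  # logistic function sigma(.)\n\ndef simulate_stage1(n_obs, sigma_z1=1.0, seed=0):\n    # Stage-1 draws: O_1 ~ Bern(0.5), A_1 ~ Bern(sigma(H_10' xi_1^0)),\n    # Y_2 ~ N(H_10' beta_1^0 + A_1 H_11' gamma_1^0, 1), W_1 = floor(Y_2 + Z_1).\n    rng = np.random.default_rng(seed)\n    O1 = rng.binomial(1, 0.5, n_obs)\n    A1 = rng.binomial(1, expit(0.3 - 0.5 * O1))           # xi_1^0 = (0.3, -0.5)\n    mY2 = (1.0 + 1.0 * O1) + A1 * (1.0 - 2.0 * O1)        # beta_1^0, gamma_1^0\n    Y2 = rng.normal(mY2, 1.0)\n    W1 = np.floor(Y2 + rng.normal(0.0, sigma_z1, n_obs))  # surrogate W_1\n    return O1, A1, Y2, W1\n\\end{verbatim}\nOutcomes are then retained on only $n$ of the simulated records to form $\\mathcal{L}$, with the remaining $N$ records serving as $\\mathcal{U}$.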
The parameters $\\xi_{26}^0$ and $\\beta_{27}^0$ and $\\tilde\\bxi_2,\\tilde\\bbeta_2$ index mis-specification in the fitted Q-learning outcome models and the propensity score models with a value of 0 corresponding to a correct specification. In particular, we set $\\xi_{26}^0=1$, $\\tilde\\bxi_2=\\frac{1}{\\|(1,\\dots,1)\\|_2}(1,\\dots,1)\\trans$, and $\\beta_{27}^0=1$, $\\tilde\\bbeta_2=\\frac{1}{\\|(1,\\dots,1)\\|_2}(1,\\dots,1)\\trans$ for mis-specification of propensity score $\\pi_2$ and $Q_1,$ $Q_2$ functions respectively. We set $\\xi_{26}^0=\\beta_{27}^0=0$ and $\\tilde\\bxi=\\bzero$, $\\tilde\\bbeta_2=\\bzero$ for correct model specification. Under mis-specification of the outcome model or propensity score model, the term omitted by the working models is highly non-linear, in which case the imputation model will be mis-specified as well. We note that our method does not need correct specification of the imputation model. For the imputation models, we considered both random forest (RF) with 500 trees and basis expansion (BE) with piecewise-cubic splines with 2 equally spaced knots on the quantiles 33 and 67 \\citep{HastieT}. Finally, we consider two choices of $(n,N)$: $(135,1272)$ which are similar to the sizes of our EHR study and larger sizes of $(500,10000)$. For each configuration, we summarize results based on $1,000$ replications. \n\n\\begin{table}[ht]\n\\centering\n\\centerline{(a) $n=135$ and $N=1272$}\n\\scalebox{0.8}{\n\\begin{tabular}{@{}lccccccclcccccl@{}}\n\\toprule\n & \\multicolumn{2}{c}{Supervised} & & \\multicolumn{11}{c}{Semi-Supervised} \\\\ \\cmidrule(lr){2-3} \\cmidrule(lr){5-15} \n & \\multicolumn{2}{c}{} & & \\multicolumn{5}{c}{Random Forests} & & \\multicolumn{5}{c}{Basis Expansion} \\\\ \\cmidrule(l){5-9} \\cmidrule(l){11-15} \nParameter & Bias & ESE & & Bias & ESE & ASE & CovP & RE & & Bias & ESE & ASE & CovP & RE \\\\ \\cmidrule(l){1-3} \\cmidrule(l){5-9} \\cmidrule(l){11-15} \n $\\gamma_{11}$=1.4 & -0.03 & 0.41 & & 0.00 & 0.26 & 0.24 & 0.93 & 1.57 & & 0.00 & 0.24 & 0.23 & 0.93 & 1.68 \\\\ \n $\\gamma_{12}$=-2.6 & 0.04 & 0.58 & & -0.01 & 0.36 & 0.34 & 0.94 & 1.61 & & -0.02 & 0.35 & 0.31 & 0.90 & 1.69 \\\\ \n $\\gamma_{21}$=0.8 & 0.00 & 0.34 & & 0.01 & 0.21 & 0.20 & 0.93 & 1.61 & & 0.00 & 0.20 & 0.19 & 0.94 & 1.71 \\\\ \n $\\gamma_{22}$=0.2 & -0.02 & 0.45 & & -0.01 & 0.28 & 0.28 & 0.95 & 1.60 & & -0.01 & 0.27 & 0.26 & 0.94 & 1.70 \\\\ \n $\\gamma_{23}$=0.5 & 0 & 0.18 & & 0.01 & 0.11 & 0.11 & 0.94 & 1.59 & & 0.00 & 0.11 & 0.11 & 0.94 & 1.68 \\\\ \\bottomrule\n\\end{tabular}}\\vspace{.1in}\n\\centerline{(b) $n=500$ and $N=10,000$}\n\\scalebox{0.8}{\n\\begin{tabular}{@{}lccccccclcccccl@{}}\n\\toprule\n & \\multicolumn{2}{c}{Supervised} & & \\multicolumn{11}{c}{Semi-Supervised} \\\\ \\cmidrule(lr){2-3} \\cmidrule(lr){5-15} \n & \\multicolumn{2}{c}{} & & \\multicolumn{5}{c}{Random Forests} & & \\multicolumn{5}{c}{Basis Expansion} \\\\ \\cmidrule(l){5-9} \\cmidrule(l){11-15} \nParameter & Bias & ESE & & Bias & ESE & ASE & CovP & RE & & Bias & ESE & ASE & CovP & RE \\\\ \\cmidrule(l){1-3} \\cmidrule(l){5-9} \\cmidrule(l){11-15} \n $\\gamma_{11}$=1.4 & 0.01 & 0.22 & & 0.01 & 0.12 & 0.11 & 0.92 & 1.76 & & 0.01 & 0.12 & 0.11 & 0.92 & 1.80 \\\\ \n $\\gamma_{12}$=-2.6 & 0 & 0.29 & & 0 & 0.17 & 0.16 & 0.93 & 1.73 & & -0.01 & 0.16 & 0.15 & 0.93 & 1.80 \\\\ \n $\\gamma_{21}$=0.8 & 0.00 & 0.17 & & 0.00 & 0.10 & 0.09 & 0.93 & 1.80 & & 0.00 & 0.09 & 0.09 & 0.93 & 1.86 \\\\ \n $\\gamma_{22}$=0.2 & -0.01 & 0.23 & & 0 & 0.13 & 0.12 & 0.93 & 1.81 & & 0 & 0.13 & 0.12 & 0.94 & 
1.83 \\\\ \n $\\gamma_{23}$=0.5 & 0.00 & 0.09 & & 0.00 & 0.05 & 0.05 & 0.94 & 1.78 & & 0.00 & 0.05 & 0.05 & 0.95 & 1.81 \\\\ \\bottomrule\n\\end{tabular}}\n\\caption{Bias, empirical standard error (ESE) of the supervised and the SSL estimators with either random forest imputation or basis expansion imputation strategies for $\\bgammabar_1,\\bgammabar_2$ when (a) $n=135$ and $N=1272$ and (b) $n=500$ and $N=10,000$. For the SSL estimators, we also obtain the average of the estimated standard errors (ASE) as well as the empirical coverage probabilities (CovP) of the 95\\% confidence intervals.}\n\\label{tab:simgamma}\n\\end{table}\n\nWe start discussing results under correct specification of the $Q$ functions. In Table \\ref{tab:simgamma}, we present the results for the estimation of treatment interaction coefficients $\\bgammabar_1,\\bgammabar_2$, under the correct model specification, continuous outcome setting with $\\beta_{27}^0=\\xi_{26}^0=0$. The complete tables for all $\\bthetabar$ parameters for the continuous and EHR-like settings can be found in Appendix \\ref{appendix: alt sims}. We report bias, empirical standard error (ESE), average standard error (ASE), 95\\% coverage probability (CovP) and relative efficiency (RE) defined as the ratio of supervised ESE over SSL estimate ESE. Overall, compared to the supervised approach, the proposed semi-supervised $Q$-learning approach has substantial gains in efficiency while maintaining comparable or even lower bias. This is likely due to the refitting step which helps take care of the finite sample bias, both from the missing outcome imputation and $Q$ function parameter estimation. Imputation with BE yields slightly better estimates than when using RF, both in terms of efficiency and bias. Coverage probabilities are close to the nominal level due to the good performance of the standard error estimation.\n\nWe next turn to $Q$-learning parameters under mis-specification of \\eqref{linear_Qs}. Figure \\ref{fig_misspec_Q_sin} shows the bias and root mean square error (RMSE) for the treatment interaction coefficients in the 2-stage $Q$ functions. We focus on the continuous setting, where we set $\\beta_{27}^0\\in\\{-1,0,1\\}$. Note that $\\beta_{27}^0\\neq0$ implies that both $Q$ functions are mis-specified as the fitting of $Q_1$ depends on formulation of $Q_2$ as seen in \\eqref{full_EE}. Semi-supervised $Q$-learning is more efficient for any degree of mis-specification for both small and large finite sample settings. As the theory predicts, there is no real difference in efficiency gain of SSL across mis-specification of the $Q$ function models. This is because asymptotic distribution of $\\bgammahat\\subSSL$ shown in Theorems \\ref{theorem: unbiased theta2} \\& \\ref{theorem: unbiased theta1} are centered on the target parameters $\\bgammabar$. Thus, both SSL and SUP have negligible bias regardless of the true value of $\\beta_{27}^0$.\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=1.05\\textwidth]{figures\/gamma_stats.png}\n\t\\caption{Monte Carlo estimates of bias and RMSE ratios for estimation of $\\gamma_{11},\\:\\gamma_{12}$, $\\gamma_{21},\\:\\gamma_{22},\\:\\gamma_{23}$ under mis-specification of the $Q$-functions through $\\beta_{27}^0$. 
Results are shown for the large ($N=10,000$, $n=500$) and small ($N=1,272$, $n=135$) data samples for the continuous setting over 1,000 simulated datasets.}\n\t\\label{fig_misspec_Q_sin} \n\\end{figure}\n\n \n\\begin{table}[ht]\n\\centering\n\\centerline{(a) $n=135$ and $N=1272$}\n\\scalebox{0.7}{\n\\begin{tabular}{rlrlrrlrrrrrlrrrrr}\n\\hline\n\\multicolumn{1}{l}{} & & \\multicolumn{1}{l}{} & & \\multicolumn{2}{c}{Supervised} & & \\multicolumn{11}{c}{Semi-Supervised} \\\\ \\cmidrule{5-6} \\cmidrule{8-18} \n\\multicolumn{1}{l}{} & & \\multicolumn{1}{l}{} & & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & & \\multicolumn{5}{c}{Random Forests} & & \\multicolumn{5}{c}{Basis Expansion} \\\\ \\cmidrule{8-12} \\cmidrule{14-18} \nSetting & Model & $\\Vbar$ & & Bias & ESE & & Bias & ESE & ASE & CovP & RE & & Bias & ESE & ASE & CovP & RE \\\\ \\cmidrule{1-3} \\cmidrule{5-6} \\cmidrule{8-12} \\cmidrule{14-18} \n & Correct & 6.08 & & 0.02 & 0.27 & & 0.04 & 0.21 & 0.24 & 0.97 & 1.27 & & 0.02 & 0.23 & 0.25 & 0.97 & 1.18 \\\\ \n Continuous & Missp. $Q$ & 6.34 & & 0.01 & 0.24 & & 0.03 & 0.19 & 0.22 & 0.97 & 1.27 & & 0.00 & 0.20 & 0.22 & 0.97 & 1.20 \\\\ \n & Missp. $\\pi$ & 6.08 & & 0.01 & 0.28 & & 0.02 & 0.22 & 0.24 & 0.97 & 1.24 & & 0.01 & 0.25 & 0.25 & 0.97 & 1.12 \\\\ \n & Correct & 1.38 & & 0.09 & 0.15 & & 0.05 & 0.12 & 0.12 & 0.94 & 1.24 & & 0.04 & 0.13 & 0.12 & 0.95 & 1.12 \\\\ \n EHR & Missp. $Q$ & 1.43 & & 0.09 & 0.14 & & 0.04 & 0.12 & 0.12 & 0.96 & 1.12 & & 0.03 & 0.14 & 0.12 & 0.95 & 1.02 \\\\ \n & Missp. $\\pi$ & 1.38 & & 0.09 & 0.15 & & 0.05 & 0.14 & 0.13 & 0.96 & 1.13 & & 0.04 & 0.14 & 0.13 & 0.96 & 1.05 \\\\ \\end{tabular}}\\vspace{.1in}\n\\centerline{(b) $n=500$ and $N=10,000$}\n\\scalebox{0.7}{\n\\begin{tabular}{rlrlrrlrrrrrlrrrrr}\n\\toprule\n\\multicolumn{1}{l}{} & & \\multicolumn{1}{l}{} & & \\multicolumn{2}{c}{Supervised} & & \\multicolumn{11}{c}{Semi-Supervised} \\\\ \\cmidrule{5-6} \\cmidrule{8-18} \n\\multicolumn{1}{l}{} & & \\multicolumn{1}{l}{} & & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & & \\multicolumn{5}{c}{Random Forests} & & \\multicolumn{5}{c}{Basis Expansion} \\\\ \\cmidrule{8-12} \\cmidrule{14-18} \nSetting & Model & $\\Vbar$ & & Bias & ESE & & Bias & ESE & ASE & CovP & RE & & Bias & ESE & ASE & CovP & RE \\\\ \\cmidrule{1-3} \\cmidrule{5-6} \\cmidrule{8-12} \\cmidrule{14-18} \n & Correct & 6.08 & & 0.02 & 0.15 & & 0.03 & 0.11 & 0.12 & 0.96 & 1.32 & & 0.02 & 0.13 & 0.13 & 0.95 & 1.16 \\\\ \n Continuous & Missp. $Q$ & 6.34 & & 0.01 & 0.13 & & 0.03 & 0.10 & 0.10 & 0.96 & 1.31 & & 0.01 & 0.11 & 0.11 & 0.96 & 1.16 \\\\ \n & Missp. $\\pi$ & 6.08 & & 0.01 & 0.14 & & 0.03 & 0.11 & 0.12 & 0.96 & 1.28 & & 0.02 & 0.12 & 0.12 & 0.95 & 1.16 \\\\ \n & Correct & 1.38 & & 0.02 & 0.07 & & 0.01 & 0.04 & 0.06 & 0.99 & 1.55 & & 0.00 & 0.06 & 0.06 & 0.98 & 1.23 \\\\ \n EHR & Missp. $Q$ & 1.43 & & 0.01 & 0.07 & & 0.00 & 0.04 & 0.05 & 0.99 & 1.66 & & 0.00 & 0.05 & 0.06 & 0.98 & 1.35 \\\\ \n & Missp. $\\pi$ & 1.38 & & 0.02 & 0.08 & & 0.01 & 0.06 & 0.07 & 0.99 & 1.22 & & 0.00 & 0.07 & 0.07 & 0.97 & 1.03 \\\\ \\bottomrule\n\\end{tabular}}\n\\caption{Bias, empirical standard error (ESE) of the supervised estimator $\\Vhat\\subSUPDR$ and bias, ESE, average standard error (ASE) and coverage probability (CovP) for $\\Vhat\\subSSLDR$ with either random forest imputation or basis expansion imputation strategies when (a) $n=135$ and $N=1272$ and (b) $n=500$ and $N=10,000$. 
We show performance and relative efficiency across both simulation settings for estimation under correct models, and mis-specification of $Q$ function or propensity score function.}\n\\label{tab:simvalues}\n\\end{table}\n\nNext we analyze performance of the doubly robust value function estimators for both continuous and EHR-like settings. Table \\ref{tab:simvalues} shows bias and RMSE across different sample sizes, and comparing SSL vs. SUP estimators. Results are shown for the correct specification of the $Q$ functions and propensity scores, and when either is mis-specified. Bias across simulation settings is relatively similar between $\\Vhat\\subSSLDR$ and $\\Vhat\\subSUPDR$, and appears to be small relative to RMSE. The low magnitude of bias suggests both estimators are robust to model mis-specification. There is an exception on the EHR setting with small sample size, for which the bias is non-negligible. This is likely due to the fact that the $Q$ function parameters to estimate are 13+37, and the propensity score functions have 12 parameters which add up to a large number relative to the labeled sample size: $n=135$. The SSL bias is lower in this case which could be due to the refitting step, which helped to reduce the finite sample bias. Efficiency gains of $\\Vhat\\subSSLDR$ are consistent across model specification. We next illustrate our approach using an IBD dataset.\n\n\\subsection{Application to an EHR Study of Inflammatory Bowel Disease}\\label{section: IBD}\n\n\nAnti\u2013tumor necrosis factor (anti-TNF) therapy has greatly changed the management and improved the outcomes of patients with inflammatory bowl disease (IBD) \\citep{peyrin2010anti}. However, it remains unclear whether a specific anti-TNF agent has any advantage in efficacy over other agents, especially at the individual level. There have been few randomized clinical trials performed to directly compare anti-TNF agents for treating IBD patients \\citep{sands2019vedolizumab}. Retrospective studies comparing infliximab and adalimumab for treating IBD have found limited and sometimes conflicting evidence of their relative effectiveness \\citep{inokuchi2019long,lee2019comparison,osterman2017infliximab}. There is even less evidence regarding optimal STR for choosing these treatments over time \\citep{ashwin2016}. To explore this, we performed RL using data from a cohort of IBD patients previously identified via machine learning algorithms from the EHR systems of two tertiary referral academic centers in the Greater Boston metropolitan area \\citep{ashwin2012}. We focused on the subset of $N=1,272$ patients who initiated either Infliximab ($A_1=0$) or Adalimumab ($A_1=1$) and continued to be treated by either of these two therapies during the next 6 months. The observed treatment sequence distributions are shown in Table \\ref{Table: obs As}. The outcomes of interest are the binary indicator of treatment response at 6 months ($t=2$) and at 12 months ($t=3$), both of which were only available on a subset of $n=135$ patients whose outcomes were manually annotated via chart review. \n\nTo derive the STR, we included gender, age, Charlson co-morbidity index \\citep{charlson}, prior exposure to anti-TNF agents, as well as mentions of clinical terms associated with IBD such as bleeding complications extracted from the clinical notes via natural language processing (NLP) features as confounding variables at both time points. 
To improve the imputation of $Y_t$, we use 15 relevant NLP features such as mentions of rectal or bowel resection surgery as surrogates at $t=1,2$. We transformed all count variables using $x\\mapsto \\log(1+x)$ to decrease skewness in the distributions, and centered continuous features. We used RF with 500 trees to carry out the imputation step, and 5-fold cross-validation (CV) to estimate the value function.\n\n\nThe supervised and semi-supervised estimates are shown in Table \\ref{Table: IBD results} for the $Q$-learning models and in Table \\ref{Table: V IBD results} for the value functions associated with the estimated STR. Similar to what was observed in the simulation studies, semi-supervised $Q$-learning has more power to detect significant predictors of treatment response. Relative efficiency for almost all $Q$ function estimates is near or over 2. Supervised $Q$-learning does not have the power to detect predictors such as prior use of anti-TNF agents, which are clearly relevant to treatment response \\citep{ashwin2016}. Semi-supervised $Q$-learning is able to detect that the efficacy of Adalimumab wears off as patients get older, meaning that younger patients in the first stage experienced a higher rate of treatment response to Adalimumab, a finding that cannot be detected with supervised $Q$-learning. Additionally, supervised $Q$-learning does not pick up that there is a higher rate of response to Adalimumab among patients who are male or have experienced an abscess. This translates into a far-from-optimal treatment rule, as seen in the cross-validated value function estimates. Table \\ref{Table: V IBD results} reflects that using our semi-supervised approach to find the regime and to estimate the value function of such treatment rules yields a more efficient estimate, as the semi-supervised value function estimate $\\Vhat\\subSSLDR$ yielded a smaller standard error than that of the supervised estimate $\\Vhat\\subSUPDR$. However, the standard errors are large relative to the point estimates. 
On the upside, they both yield estimates very close in numerical value which is reassuring: both should be unbiased as predicted by theory and simulations.\n\n\\begin{table}[ht]\n\\centering\n\\begin{tabular}{clcc}\n\\hline\n & & \\multicolumn{2}{c}{$A_1$} \\\\ \\cline{2-4} \n\\multicolumn{1}{c}{} & & 0 & 1 \\\\\n\\multicolumn{1}{c}{\\multirow{2}{*}{$A_2$}} & 0 & 912 & 327 \\\\\n\\multicolumn{1}{c}{} & 1 & 27 & 183 \\\\ \\hline\n\\end{tabular}\\caption{Distribution of treatment trajectories for observed sample of size 1407.}\\label{Table: obs As}\n\n\\end{table}\n\n\n\\begin{table}[ht]\n\\scalebox{0.55}{\n\\begin{tabular}{lcccccccccclccccccccc}\n\\cmidrule{1-10} \\cmidrule{12-21}\n\\multicolumn{10}{c}{Stage 1 Regression} & & \\multicolumn{10}{c}{Stage 2 Regression} \\\\ \\cmidrule{1-10} \\cmidrule{12-21} \n & \\multicolumn{3}{c}{Supervised} & & \\multicolumn{3}{c}{Semi-Supervised} & & & & \\multicolumn{4}{c}{Supervised} & & \\multicolumn{3}{c}{Semi-Supervised} & & \\\\ \\cmidrule{2-4} \\cmidrule{6-8} \\cmidrule{12-15} \\cmidrule{17-19}\nParameter & Estimate & SE & P-val & & Estimate & SE & P-val & & RE & & Parameter & Estimate & SE & P-val & & Estimate & SE & P-val & & RE \\\\ \\cmidrule{1-4} \\cmidrule{6-8} \\cmidrule{10-10} \\cmidrule{12-15} \\cmidrule{17-19} \\cmidrule{21-21} \nIntercept & \\textbf{0.424} & \\textbf{0.082} & \\textbf{0.00} & & \\textbf{0.518} & \\textbf{0.028} & \\textbf{0.00} & & 2.937 & & $Y_1$ & \\textbf{0.37} & \\textbf{0.11} & \\textbf{0.00} & & \\textbf{0.55} & \\textbf{0.05} & \\textbf{0.00} & & 2.08 \\\\ \n Female & -0.237 & 0.167 & 0.16 & & \\textbf{-0.184} & \\textbf{0.067} & \\textbf{0.007} & & 2.514 & & Intercept & 0.08 & 0.06 & 0.17 & & 0.04 & 0.02 & 0.14 & & 2.40 \\\\ \n Age & 0.155 & 0.088 & 0.081 & & \\textbf{0.18} & \\textbf{0.034} & \\textbf{0.00} & & 2.588 & & Female & -0.01 & 0.10 & 0.92 & & -0.00 & 0.05 & 0.98 & & 2.21 \\\\ \n Charlson Score & 0.006 & 0.072 & 0.929 & & -0.047 & 0.026 & 0.075 & & 2.776 & & Age & 0.05 & 0.06 & 0.35 & & \\textbf{0.07} & \\textbf{0.02} & \\textbf{0.00} & & 2.33 \\\\ \n Prior anti-TNF & -0.038 & 0.06 & 0.524 & &\\textbf{ -0.085} & \\textbf{0.019} & \\textbf{0.00} & & 3.177 & & Charlson Score & 0.04 & 0.04 & 0.33 & & \\textbf{0.06} & \\textbf{0.02} & \\textbf{0.01} & & 2.06 \\\\ \n Perianal & \\textbf{0.138} & \\textbf{0.06} & \\textbf{0.022} & & \\textbf{0.179} & \\textbf{0.022} & \\textbf{0.00} & & 2.688 & & Prior anti-TNF & -0.05 & 0.05 & 0.29 & & \\textbf{-0.09} & \\textbf{0.02} & \\textbf{0.00} & & 2.39 \\\\ \n Bleeding & 0.049 & 0.08 & 0.54 & & 0.058 & 0.03 & 0.055 & & 2.675 & & Perianal & -0.01 & 0.04 & 0.80 & & \\textbf{-0.03} & \\textbf{0.02} & \\textbf{0.06} & & 2.31 \\\\ \n A1 & 0.163 & 0.488 & 0.739 & & 0.148 & 0.206 & 0.473 & & 2.374 & & Bleeding & -0.04 & 0.05 & 0.49 & & -0.03 & 0.03 & 0.29 & & 2.14 \\\\ \n Female$\\times A_1$ & 0.168 & 0.696 & 0.81 & & -0.042 & 0.287 & 0.886 & & 2.424 & & A1 & 0.11 & 0.25 & 0.67 & & 0.03 & 0.10 & 0.74 & & 2.60 \\\\ \n Age$\\times A_1$ & -0.177 & 0.264 & 0.503 & & \\textbf{-0.278} & \\textbf{0.109} & \\textbf{0.013} & & 2.418 & & Abscess$_2$ & 0.06 & 0.04 & 0.16 & & \\textbf{0.05} & \\textbf{0.01} & \\textbf{0.00} & & 2.68 \\\\ \n Charlson Score$\\times A_1$ & 0.136 & 0.391 & 0.728 & & 0.195 & 0.178 & 0.276 & & 2.194 & & Fistula$_2$ & 0.02 & 0.05 & 0.67 & & 0.01 & 0.02 & 0.62 & & 2.33 \\\\ \n Perianal$\\times A_1$ & -0.113 & 0.226 & 0.618 & & -0.019 & 0.08 & 0.808 & & 2.838 & & Female$\\times A_1$ & 0.13 & 0.38 & 0.74 & & 0.17 & 0.16 & 0.30 & & 2.37 \\\\ \n Bleeding$\\times 
A_1$ & 0.262 & 0.364 & 0.474 & & 0.127 & 0.161 & 0.431 & & 2.267 & & Age$\\times A_1$ & -0.02 & 0.12 & 0.88 & & -0.09 & 0.06 & 0.17 & & 1.94 \\\\ \n & & & & & & & & & & & Charlson Score$\\times A_1$ & -0.02 & 0.16 & 0.89 & & 0.04 & 0.07 & 0.55 & & 2.19 \\\\ \n & & & & & & & & & & & Perianal$\\times A_1$ & -0.14 & 0.09 & 0.15 & & \\textbf{-0.17} & \\textbf{0.04} & \\textbf{0.00} & & 2.34 \\\\ \n & & & & & & & & & & & Bleeding$\\times A_1$ & 0.13 & 0.20 & 0.51 & & 0.03 & 0.09 & 0.76 & & 2.17 \\\\ \n & & & & & & & & & & & A2 & 0.07 & 0.17 & 0.69 & & \\textbf{0.22} & \\textbf{0.07} & \\textbf{0.00} & & 2.55 \\\\ \n & & & & & & & & & & & Female$\\times A_2$ & -0.39 & 0.28 & 0.16 & & \\textbf{-0.51} & \\textbf{0.11} & \\textbf{0.00} & & 2.53 \\\\ \n & & & & & & & & & & & Age$\\times A_2$ & 0.09 & 0.10 & 0.40 & & \\textbf{0.15} & \\textbf{0.04} & \\textbf{0.00} & & 2.27 \\\\ \n & & & & & & & & & & & Charlson Score$\\times A_2$ & 0.01 & 0.07 & 0.84 & & -0.03 & 0.03 & 0.42 & & 2.08 \\\\ \n & & & & & & & & & & & Perianal$\\times A_2$ & \\textbf{0.20} & \\textbf{0.09} &\\textbf{ 0.04} & & \\textbf{0.23} & \\textbf{0.04} &\\textbf{ 0.00} & & 2.23 \\\\ \n & & & & & & & & & & & Bleeding$\\times A_2$ & 0.03 & 0.08 & 0.77 & & 0.02 & 0.04 & 0.49 & & 2.34 \\\\ \n & & & & & & & & & & & Abscess$_2\\times A_2$ & -0.13 & 0.07 & 0.06 & & \\textbf{-0.09} & \\textbf{0.03} & \\textbf{0.00} & & 2.31 \\\\ \n & & & & & & & & & & & Fistula$_2\\times A_2$ & -0.04 & 0.06 & 0.56 & & -0.03 & 0.03 & 0.36 & & 2.17 \\\\ \\bottomrule\n\\end{tabular}}\n\\caption{Results of Inflammatory Bowel Disease data set, for first and second stage regressions. Fully supervised $Q$-learning is shown on the left and semi-supervised is shown on the right. Last columns in the panels show relative efficiency (RE) defined as the ratio of standard errors of the semi-supervised vs. supervised method, RE greater than one favors semi-supervised. Significant coefficients at the 0.05 level are in bold.}\\label{Table: IBD results}\n\\end{table}\n\n\n\\begin{table}[ht]\n\\centering\n\\begin{tabular}{lll}\n & Estimate & SE \\\\ \\hline\n$\\Vhat\\subSUPDR$ & 0.851 & 0.486 \\\\\n$\\Vhat\\subSSLDR$ & 0.871 & 0.397 \\\\ \\hline\n\\end{tabular}\\caption{Value function estimates for Inflammatory Bowel Disease data set, the first row has the estimate for treatment rule learned using $\\mathcal{U}$ and its respective value function, the second row shows the same for a rule estimated using $\\mathcal{L}$ and its estimated value.}\\label{Table: V IBD results}\n\\end{table}\n\\section{Discussion}\\label{section: discussion} \n\nWe have proposed an efficient and robust strategy for estimating optimal dynamic treatment rules and their value function, in a setting where patient outcomes are scarce. In particular, we developed a two step estimation procedure amenable to non-parametric imputation of the missing outcomes. This helped us establish $\\sqrt n$-consistency and asymptotic normality for both the $Q$ function parameters $\\bthetahat$ and the doubly robust value function estimator $\\Vhat\\subSSLDR$. We additionally provided theoretical results which illustrate if and when the outcome-surrogates $\\bW$ contribute towards efficiency gain in estimation of $\\bthetahat\\subSSL$ and $\\Vhat\\subSSLDR$. 
This lets us conclude that our procedure is always preferable to using the labeled data only: since estimation is robust to mis-specification of the imputation models, our approach is safe to use and will be at least as efficient as the supervised methods.\n\nWe focused on the 2-time point, binary action setting for simplicity but all our theoretical results and algorithms can be easily extended to a higher finite time horizon, and multiple actions with careful bookkeeping of notation. In practice, one would need to be careful with the variability of the IPW-value function which increases substantially with time. However, the SSL approach would come in handy to estimate propensity scores, providing an efficiency gain that would help stabilize the IPW in longer horizons. \n\nWe are interested in extending this framework to handle missing at random (MAR) sampling mechanisms. In the EHR setting, it is feasible to sample a subset of the data completely at random in order to annotate the records. Hence, we argue the MCAR assumption is true by design in our context. However, the MAR context allows us to leverage different data sources for $\\Lsc$ and $\\Usc$. For example, we could use an annotated EHR data cohort and a large unlabeled registry data repository for our inference, ultimately making the policies and value estimation more efficient and robust. We believe this line of work has the potential to leverage massive observational cohorts, which will help to improve personalized clinical care for a wide range of diseases. \n\n\n\\newpage\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nLiver cancer is the second leading cause of global cancer mortality\n(after lung cancer), and is one of the most rapidly increasing cancers\nin terms of incidence and mortality worldwide and in the United States\n\\cite{Ferlay2015,Petrick2016}. Although contrast-enhanced computed\ntomography (CT) has been widely used for liver cancer screening, diagnosis,\nprognosis, and the assessment of its response to treatment, proper\ninterpretation of CT images is normally time-consuming and prone to\nsuffer from inter- and intra-observer variabilities. Therefore, computerized\nanalysis methods have been developed to assist radiologists and oncologists\nfor better interpretation of liver CT images.\n\nAutomatically segmenting liver and viable tumors from other tissue\nis an essential step in quantitative image analysis of abdominal CT\nimages. However, automatic liver segmentation is a challenging task\ndue to the low contrast inside liver, fuzzy boundaries to its adjacent\norgans and highly varying shape. Meanwhile, automatic tumor segmentation\non liver normally suffers from significant variety of appearance in\nsize, shape, location, intensity, textures, as well as the number\nof occurrences. Although researchers have developed various methods\nto conquer these challenges \\cite{Heimann:2009,Yuan:2015,Linguraru:2012},\ninteractive approaches are still the only way to achieve acceptable\ntumor segmentation.\n\nIn this paper, we present a fully automatic framework based on deep\nfully convolutional-deconvolutional neural networks (CDNN) \\cite{long2015fully,noh2015learning,yuan2017ieee}\nfor liver and liver tumor segmentation on contrast-enhanced abdominal\nCT images. Similar to \\cite{christ2016automatic}, our framework is\nhierarchical and includes three steps. 
In the first step, a simple\nCDNN model is trained to obtain a quick but coarse segmentation of\nthe liver on the entire 3D CT volume; then another CDNN is applied\nto the liver region for fine liver segmentation; finally, the segmented\nliver region is enhanced by histogram equalization and serves as an\nadditional input to the third CDNN for tumor segmentation. Instead\nof developing sophisticated pre- and post-processing methods and hand-crafted\nfeatures, we focus on designing appropriate network architecture and\nefficient learning strategies such that our framework can handle images\nunder various acquisition conditions.\n\n\\section{Dataset and Preprocessing}\n\nOnly LiTS challenge datasets were used for model training and validation.\nThe LiTS datasets consist of $200$ contrast-enhanced abdominal CT\nscans provided by various clinical sites around the world, in which\n$130$ cases were used for training and the rest $70$ for testing.\nThe datasets have significant variations in image quality, spatial\nresolution and field-of-view, with in-plane resolution ranging from\n$0.6\\times0.6$ to $1.0\\times1.0$ mm and slice thickness from $0.45$\nto $6.0$ mm. Each axial slice has identical size of $512\\times512$,\nbut the number of slices in each scan varies from $42$ to $1026$.\n\nAs for pre-processing, we simply truncated the voxel values of all\nCT scans to the range of {[}-100, 400{]} HU to eliminate the irrelevant\nimage information. While a comprehensive 3D contextual information\ncould potentially improve the segmentation performance, due to the\nlimited hardware resource, it is infeasible to perform a fully 3D\nCDNN on the volumetric CT scans in our experimental environment. Thus,\nour CDNN model is based on 2D slice and the CT volume is processed\nslice-by-slice, with the two most adjacent slices concatenated as\nadditional input channels to the CDNN model. Different resampling\nstrategies were applied at different hierarchical levels and will\nbe described below.\n\n\\section{Method}\n\n\\subsection{CDNN model}\n\nOur CDNN model \\cite{yuan2017ieee} belongs to the category of fully\nconvolutional network (FCN) that extends the convolution process across\nthe entire image and predicts the segmentation mask as a whole. This\nmodel performs a pixel-wise classification and essentially serves\nas a filter that projects the 2D CT slice to a map where each element\nrepresents the probability that the corresponding input pixel belongs\nto liver (or tumor). This model consists two pathways, in which contextual\ninformation is aggregated via convolution and pooling in the convolutional\npath and full image resolution is recovered via deconvolution and\nup-sampling in the deconvolutional path. In this way, the CDNN model\ncan take both global information and fine details into account for\nimage segmentation.\n\nWe fix the stride as $1$ and use Rectified Linear Units (ReLUs) \\cite{krizhevsky2012imagenet}\nas the activation function for each convolutional\/deconvolutional\nlayer. 
For the output layer, we use the sigmoid as the activation function. Batch normalization is added to the output of every convolutional\/deconvolutional layer to reduce internal covariate shift \\cite{ioffe2015batch}.\n\nIn this study, we employ a loss function based on the Jaccard distance proposed in \\cite{yuan2017ieee}: \n\\begin{equation}\nL_{d_{J}}=1-\\frac{\\underset{i,j}{\\sum}(t_{ij}p_{ij})}{\\underset{i,j}{\\sum}t_{ij}^{2}+\\underset{i,j}{\\sum}p_{ij}^{2}-\\underset{i,j}{\\sum}(t_{ij}p_{ij})},\\label{eq:ja-loss}\n\\end{equation}\nwhere $t_{ij}$ and $p_{ij}$ are the target and the output of pixel $(i,\\,j)$, respectively. As compared to the cross entropy used in previous work \\cite{christ2016automatic,ronneberger2015u}, the proposed loss function is directly related to the image segmentation task because the Jaccard index is a commonly used metric for assessing medical image segmentation. Meanwhile, this loss function is well adapted to problems with high imbalance between foreground and background classes, as it does not require any class re-balancing. We trained the network using Adam optimization \\cite{kingma2014adam} to adjust the learning rate based on the first- and second-order moments of the gradient at each iteration. The initial learning rate was set to $0.003$.\n\nIn order to reduce overfitting, we added two dropout layers with $p=0.5$: one at the end of the convolutional path and the other right before the last deconvolutional layer. We also employed two types of image augmentation to further improve the robustness of the proposed model under a wide variety of image acquisition conditions. One consists of a series of geometric transformations, including random flipping, shifting, rotation and scaling. The other type randomly normalizes the contrast of each input channel in the training image slices. Note that these augmentations require only a little extra computation, so the transformed images are generated from the original images for every mini-batch within each iteration.\n\n\\subsection{Liver localization}\n\nThis step aims to locate the liver region by performing a fast but coarse liver segmentation on the entire CT volume, so we designed a relatively simple CDNN model for this task. This model, named CDNN-I, includes $19$ layers with $230,129$ trainable parameters, and its architectural details can be found in \\cite{yuan2017ieee}. For each CT volume, the axial slice size was first reduced to $128\\times128$ by down-sampling, and then the entire image volume was resampled with a slice thickness of $3$ mm. We found that not all the slices in a CT volume were needed to train this CDNN model, so only the slices containing the liver, as well as the $5$ slices superior and inferior to the liver, were included in the model training. For liver localization and segmentation, the liver and tumor labels were merged into a single liver label to provide the ground truth liver masks during model training.\n\nDuring testing, new CT images were pre-processed following the same procedure as for the training data, and the trained CDNN-I was applied to each slice of the entire CT volume. Once all slices were segmented, a threshold of $0.5$ was applied to the output of the CDNN and a 3D connected-component labeling was performed.
The largest connected component was selected as the initial liver region.\n\n\\subsection{Liver segmentation}\n\nAn accurate liver localization enables us to perform a fine liver segmentation with a more advanced CDNN model while reducing computational time. Specifically, we first resampled the original image with a slice thickness of $2$ mm, then the bounding box of the liver was extracted and expanded by $10$ voxels in the $x$, $y$ and $z$ directions to create a liver volume of interest (VOI). The axial dimensions of the VOI were further adjusted to $256\\times256$, either by down-sampling if any dimension was greater than $256$, or by expanding in the $x$ and\/or $y$ direction otherwise. All slices in the VOI were used for model training.\n\nThe CDNN model used in the liver segmentation (named CDNN-II) includes $29$ layers with about $5\\,M$ trainable parameters. As compared to CDNN-I, the size of the \\emph{local receptive field} (LRF), or filter size, is reduced in CDNN-II such that the network can go deeper, i.e. have a larger number of layers, which allows applying more non-linearities and being less prone to overfitting \\cite{simonyan2014very}. Meanwhile, the number of feature channels is doubled in each layer. Please refer to \\cite{yuan2017automatic} for more details.\n\nDuring testing, the liver VOI was extracted based on the initial liver mask obtained in the liver localization step, then the trained CDNN-II was applied to each slice in the VOI to yield a 3D probability map of the liver. We used the same post-processing as in liver localization to determine the final liver mask.\n\n\\subsection{Tumor segmentation}\n\nThe VOI extraction in tumor segmentation was similar to that in liver segmentation, except that the original image resolution was used to avoid potentially missing small lesions due to image blurring from resampling. Instead of using all the slices in the VOI, we only collected those slices with tumor as training data, so as to focus the training on the liver lesions and reduce training time. Besides the original image intensity, a 3D regional histogram equalization was performed to enhance the contrast between tumors and surrounding liver tissues, in which only those voxels within the 3D liver mask were considered in constructing the intensity histogram. The enhanced image served as an additional input channel to another CDNN-II model for tumor segmentation. We found this additional input channel could further boost tumor segmentation performance.\n\nDuring testing, the liver VOI was extracted based on the liver mask from the liver segmentation step. A threshold of $0.5$ was applied to the output of the CDNN-II model and liver tumors were determined as all tumor voxels within the liver mask.\n\n\\subsection{Implementation}\n\nOur CDNN models were implemented in Python based on the Theano \\cite{team2016theano} and Lasagne\\footnote{http:\/\/github.com\/Lasagne\/Lasagne} packages. The experiments were conducted using a single Nvidia GTX 1060 GPU with 1280 cores and 6GB memory.\n\nWe used five-fold cross validation to evaluate the performance of our models on the challenge training datasets. The total number of epochs was set to $200$ for each fold.
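\nAs an implementation note, the Jaccard-distance loss of Eq.~(\\ref{eq:ja-loss}) takes only a few lines of code. The following sketch is ours and purely illustrative: it is written with NumPy arrays rather than the Theano\/Lasagne tensors used in the actual training code, and the small constant \\texttt{eps}, which guards against division by zero, is not part of the method.\n\\begin{verbatim}\nimport numpy as np\n\ndef jaccard_distance_loss(t, p, eps=1e-7):\n    # Jaccard distance: 1 - sum(t*p) / (sum(t^2) + sum(p^2) - sum(t*p))\n    # t: binary ground-truth mask, p: predicted probabilities (same shape)\n    intersection = np.sum(t * p)\n    union = np.sum(t ** 2) + np.sum(p ** 2) - intersection\n    return 1.0 - intersection / (union + eps)\n\\end{verbatim}\n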
When applying the trained models to the challenge testing datasets, a bagging-type ensemble strategy was implemented to combine the outputs of six models to further improve the segmentation performance \\cite{yuan2017ieee}.\n\nAn epoch of training the CDNN-I model for liver localization took about $70$ seconds, but the average time per epoch increased to $610$ seconds and $500$ seconds when training the CDNN-II models for liver segmentation and tumor segmentation, respectively. This increase was primarily due to the larger slice size and the more complicated CDNN models. Applying the entire segmentation framework to a new test case was, however, very efficient, taking about $33$ seconds on average ($8$, $8$ and $17$ s for liver localization, liver segmentation and tumor segmentation, respectively).\n\n\\section{Results and Discussion}\n\nWe applied the trained models to the $70$ LiTS challenge test cases (team: deepX). Based on the results from the challenge organizers, our method achieved an average Dice similarity coefficient (DSC) of $0.963$ for liver segmentation, a DSC of $0.657$ for tumor segmentation, and a root mean square error (RMSE) of $0.017$ for tumor burden estimation, which ranked our method in first, fifth and third place, respectively. The complete evaluation results are shown in Tables \\ref{tab:Liver-segmentation-results}--\\ref{tab:Tumor-burden-results}.\n\nTo summarize our work, we developed a fully automatic framework for liver and liver tumor segmentation on contrast-enhanced abdominal CT scans based on three steps: liver localization by a simple CDNN model (CDNN-I), fine liver segmentation by a deeper CDNN model with doubled feature channels in each layer (CDNN-II), and tumor segmentation by a CDNN-II model with the enhanced liver region as an additional input feature. Our CDNN models are fully trained in an end-to-end fashion with minimal pre- and post-processing effort.\n\nWhile sharing some similarities with previous work such as U-Net \\cite{ronneberger2015u} and Cascaded-FCN \\cite{christ2016automatic}, our CDNN model differs from them in the following aspects: 1) the loss function used in the CDNN model is based on the Jaccard distance, which is directly related to the image segmentation task while eliminating the need for sample re-weighting; 2) instead of recovering image details by long skip connections as in U-Net, the CDNN model constructs a deconvolutional path where deconvolution is employed to densify the coarse activation map obtained from up-sampling. In this way, feature map concatenation and cropping are not needed.\n\nDue to limited hardware resources, training a complex CDNN model is very time-consuming, and we had to restrict the total number of epochs to $200$ in order to meet the LiTS challenge submission deadline. While upgrading the hardware is clearly a way to speed up model training, we plan to improve our network architectures and learning strategies in future work such that the models can be trained in a more effective and efficient way.
Other post-processing methods, such as level sets \\cite{cha2016urinary} and conditional random fields (CRF) \\cite{chen2014semantic}, can also potentially be integrated into our model to further improve the segmentation performance.\n\n\\begin{table}[htbp]\n\\caption{Liver segmentation results (deepX) on LiTS testing cases\\label{tab:Liver-segmentation-results}}\n\n\\centering\n\\begin{tabular}{|c|c|c|c|c|c|c|}\n\\hline \nDice \/ case & Dice global & VOE & RVD & ASSD & MSSD & RMSD\\tabularnewline\n\\hline \n\\hline \n$0.9630$ & $0.9670$ & $0.071$ & $-0.010$ & $1.104$ & $23.847$ & $2.303$\\tabularnewline\n\\hline \n\\end{tabular}\n\\end{table}\n\n\\begin{table}[htbp]\n\\caption{Tumor segmentation results (deepX) on LiTS testing cases\\label{tab:Tumor-segmentation-results}}\n\n\\centering\n\\begin{tabular}{|c|c|c|c|c|c|c|}\n\\hline \nDice \/ case & Dice global & VOE & RVD & ASSD & MSD & RMSD\\tabularnewline\n\\hline \n\\hline \n$0.6570$ & $0.8200$ & $0.378$ & $0.288$ & $1.151$ & $6.269$ & $1.678$\\tabularnewline\n\\hline \n\\end{tabular}\n\\end{table}\n\n\\begin{table}[htbp]\n\\caption{Tumor burden results (deepX) on LiTS testing cases\\label{tab:Tumor-burden-results}}\n\n\\centering\n\\begin{tabular}{|c|c|}\n\\hline \nRMSE & Max Error\\tabularnewline\n\\hline \n\\hline \n$0.0170$ & $0.0490$\\tabularnewline\n\\hline \n\\end{tabular}\n\\end{table}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nAs a classic estimation algorithm, the standard Kalman filter is extensively used. However, it may perform poorly when the actual model does not coincide with the nominal one. For this reason, many robust versions of the Kalman filter have been considered, see for instance \\cite{HASSIBI_SAYED_KAILATH_BOOK,speyer_2008,GHAOUI_CALAFIORE_2001,kim2020robust,Li2020}.\n\nIn particular, risk sensitive Kalman filters \\cite{RISK_WHITTLE_1980,RISK_PROP_BANAVAR_SPEIER_1998,LEVY_ZORZI_RISK_CONTRACTION,huang2018distributed,OPTIMAL_SPEYER_FAN_BANAVAR_1992} have been proposed in order to address model uncertainty. The latter consider an exponential quadratic loss function rather than the standard quadratic loss function. This means that large errors are severely penalized according to the so-called risk sensitivity parameter. Subsequently, these robust Kalman filters have been shown to be equivalent to the solution of a minimax problem, \\cite{boel2002robustness,HANSEN_SARGENT_2005,YOON_2004,HANSEN_SARGENT_2007}. More precisely, there are two players. One player, say nature, selects the least favorable model in a prescribed ambiguity set, which is a ball about the nominal model whose radius reflects the amount of uncertainty with respect to the nominal model. The other player designs the optimum filter according to the least favorable model.\n\nRecently, instead of concentrating the entire model uncertainty in a single relative entropy constraint, a new paradigm of risk sensitive filters has been proposed in \\cite{ROBUST_STATE_SPACE_LEVY_NIKOUKHAH_2013,zorzi2018robust,abadeh2018wasserstein,zorzi2019distributed,RKDISTR_opt,zenere2018coupling}. The latter characterizes the uncertainty using a separate constraint for each time increment of the model. In other words, the ambiguity set is specified at each time step by forming a ball, in the Kullback-Leibler (KL) topology, about the nominal model, \\cite{ROBUST_STATE_SPACE_LEVY_NIKOUKHAH_2013,robustleastsquaresestimation}.
It is worth noting that this ball can also be defined using the Tau-divergence family, \\cite{STATETAU_2017,OPTIMALITY_ZORZI}. These filters, however, are well defined only in the case that the nominal and the actual transition probability density functions are non-degenerate. This guarantees that the corresponding distorted Riccati iteration evolves on the cone of positive definite matrices.\n\nUnfortunately, in many practical applications, like weather forecasting and oceanography, the standard Kalman filter fails to work. More precisely, the Riccati iteration produces covariance matrices which, because of their large dimension, turn out to be numerically indefinite. This issue is typically addressed by resorting to a low-rank Kalman algorithm \\cite{dee1991simplification}.\n\n\nThe contribution of this paper is to extend the robust paradigm in \\cite{ROBUST_STATE_SPACE_LEVY_NIKOUKHAH_2013} to the case in which the transition probability density is possibly degenerate. Some preliminary results can be found in \\cite{yi2020low}. Within our framework, degenerate Gaussian probability densities could also be involved in the dynamic game. Accordingly, the resulting robust Kalman filter corresponds to a low-rank risk sensitive Riccati iteration. Although low-rank and distorted Riccati iterations have been widely studied in the literature, e.g. \\cite{bonnabel2013geometry,Ferrante20141176,ferrante2013generalised,Baggio-Ferrante-TAC-19,Baggio-Ferrante-TAC-16}, our iteration appears to be new. Then, we also derive the least favorable model over a finite simulation horizon. Last but not least, we study the convergence of the distorted Riccati iteration in the case that the nominal model has constant parameters by means of contraction analysis, \\cite{BOUGEROL_1993}. It turns out that convergence is guaranteed if the nominal model is stabilizable, the reachable subspace is observable and the ambiguity set is ``small''.\n\n\nThe outline of the paper is as follows. Section \\ref{sec_2} discusses the low-rank robust static estimation problem where the ambiguity set is characterized by a relative entropy constraint and possibly contains degenerate densities. The robust Kalman filtering problem is presented in Section \\ref{sec_3}. The latter is then reformulated as a static minimax game in Section \\ref{sec_4}. In Section \\ref{sec_5}, we derive the corresponding least favorable model. Section \\ref{sec_6} concerns the convergence of the proposed low-rank robust Kalman filter in the case of constant parameters. In Section \\ref{sec_7} some numerical examples are provided. Finally, we draw the conclusions in Section \\ref{sec_8}.\n\n{\\em Notation:} The image, the kernel and the trace of a matrix $K$ are denoted by $\\mathrm{Im}(K)$, $\\mathrm{ker}(K)$ and $\\mathrm{tr}(K)$, respectively. Given a symmetric matrix $K$: $K>0$ ($K\\geq 0$) means that $K$ is positive (semi)definite; $\\sigma_{max}(K)$ is the maximum eigenvalue of $K$; $K^+$ and $\\det^+(K)$ denote the pseudo inverse and the pseudo determinant of $K$, respectively. The Kronecker product between two matrices $K$ and $V$ is denoted by $K\\otimes V$. $x\\sim \\mathcal N(m,K)$ means that $x$ is a Gaussian random variable with mean $m$ and covariance matrix $K$. $\\mathcal Q^n$ is the vector space of symmetric matrices of dimension $n$; $\\mathcal Q_+^n$ denotes the cone of positive definite symmetric matrices of dimension $n$, while $ \\overline{\\mathcal Q_+^n}$ denotes its closure.
The diagonal matrix whose elements in the main diagonal are $a_1,a_2,\\ldots , a_n$ is denoted by $\\mathrm{diag}(a_1,a_2, \\ldots a_n)$; $\\mathrm{Tp}(A_1, A_2,\\ldots ,A_n)$ denotes the block upper triangular Toeplitz matrix whose first block row is $[\\,A_1 \\; A_2\\; \\ldots \\; A_n \\, ]$.\n \\section{Low-rank robust static estimation}\\label{sec_2}\nConsider the problem to estimate a random vector $x\\in\\mathbb R^n$ from an observation vector $y\\in\\mathbb R^p$. We assume that the nominal probability density function of $z:=[\\, x^{\\top}\\; y^{\\top} \\,]^{\\top}$ is $f(z) \\sim \\mathcal{N}\\left(m_{z}, K_{z}\\right)$ where the mean vector $m_z\\in\\mathbb R^{n+p}$ and the covariance matrix $K_z\\in \\overline{\\mathcal Q^{n+p}_+}$ are such that\n$$\nm_{z}=\\left[\\begin{array}{c}{m_{x}} \\\\ {m_{y}}\\end{array}\\right], \\quad K_{z}=\\left[\\begin{array}{cc}{K_{x}} & {K_{x y}} \\\\ {K_{y x}} & {K_{y}}\\end{array}\\right].\n$$\nMoreover, we assume that $K_z$ is possibly a singular matrix and such that $\\mathrm{rank}(K_z)=r+p$ with $r\\leq n$ and $K_y>0$. Therefore, $f(z)$ is possibly a degenerate density whose support is the $r+p$-dimensional affine subspace\n$$\n\\mathcal{A}=\\left\\{m_{z}+v, \\quad v \\in \\mathrm {Im} \\left(K_{z}\\right)\\right\\}\n$$\nand\n\\begin{equation} \\label{pdf_f}\n\\begin{aligned} f(z) =&\\left[(2 \\pi)^{r+p} \\operatorname{det}^{+}\\left(K_{z}\\right)\\right]^{-1 \/ 2} \\\\\n& \\times \\exp \\left[-\\frac{1}{2}\\left(z-m_{z}\\right)^{\\top} K_{z}^{+}\\left(z-m_{z}\\right)\\right]. \\end{aligned}\n\\end{equation}\nLet $\\tilde f(z) \\sim \\mathcal{N}(\\tilde m_{z}, \\tilde K_{z})$ denote the actual probability density function of $z$ and we assume that $\\mathrm{rank}(\\tilde K_z)=r+p$. Accordingly,\n\\begin{equation}\\label{pdf_tildef}\n\\begin{aligned} \\tilde f(z) =&\\left[(2 \\pi)^{{r+p}} \\operatorname{det}^{+}\\left(\\tilde K_{z}\\right)\\right]^{-1 \/ 2} \\\\\n& \\times \\exp \\left[-\\frac{1}{2}\\left(z-\\tilde m_{z}\\right)^{ \\top} \\tilde K_{z}^{+}\\left(z-\\tilde m_{z}\\right)\\right] \\end{aligned}\n\\end{equation}\nwith support\n$\\mathcal{\\tilde A}= \\{\\tilde m_{z}+v, \\quad v \\in \\mathrm {Im}(\\tilde{ K}_{z}) \\}.$ We use the KL-divergence to measure the deviation between the nominal probability density function $f(z)$ and the actual one $\\tilde f(z)$. Since the KL-divergence is not able to measure the ``deterministic'' deviations, we have to assume that the two probability density functions have the same support, i.e. $\\mathcal{A}=\\mathcal{\\tilde A}$. In other words, we have to impose that:\n\\begin{equation} \\mathrm {Im}\\left(K_{z}\\right)=\\mathrm {Im} (\\tilde{ K}_{z}), \\; \\; \\Delta m_z \\in \\mathrm {Im}\\left(K_{z}\\right) \\label{KL}\\end{equation}\nwhere $\\Delta m_z=\\tilde m_z-m_z$. 
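\nAs a purely illustrative aside, the pseudo-inverse $K_{z}^{+}$ and pseudo-determinant $\\operatorname{det}^{+}(K_{z})$ appearing in (\\ref{pdf_f}) and (\\ref{pdf_tildef}) can be obtained from the nonzero eigenvalues of $K_{z}$. The following Python sketch is ours and not part of the paper; the rank threshold \\texttt{tol} is an arbitrary numerical choice, and the snippet assumes that its argument already lies on the support of the density.\n\\begin{verbatim}\nimport numpy as np\n\ndef degenerate_gaussian_pdf(z, m, K, tol=1e-9):\n    # Pseudo-inverse and pseudo-determinant of K via its nonzero eigenvalues\n    w, V = np.linalg.eigh(K)\n    keep = w > tol * w.max()\n    r = int(np.sum(keep))            # dimension of the support\n    K_pinv = (V[:, keep] / w[keep]) @ V[:, keep].T\n    pdet = np.prod(w[keep])\n    d = np.asarray(z) - np.asarray(m)\n    quad = float(d @ K_pinv @ d)     # assumes d lies in Im(K)\n    return np.exp(-0.5 * quad) / np.sqrt((2.0 * np.pi) ** r * pdet)\n\\end{verbatim}\n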
Hence, under assumption (\\ref{KL}), the KL-divergence between $\\tilde f(z)$ and $f(z)$ is defined as\n\\begin{equation} \\label{def_DL_ddeg}\nD (\\tilde{f}, f )=\\int_\\mathcal{A} \\ln \\left(\\frac{\\tilde{f}(z)}{f(z)}\\right) \\tilde{f}(z) d z.\n\\end{equation}\nThen, if we substitute (\\ref{pdf_f}) and (\\ref{pdf_tildef}) in (\\ref{def_DL_ddeg}), we obtain\n\\begin{equation*}\n\\begin{aligned} D(\\tilde f, {f})=& \\frac{1}{2}\\left[\\Delta{m}_{z}^{\\top}K_{z}^{+} \\Delta m_z+\\ln \\operatorname{det}^{+} (K_{z})\\right.\\\\ &\\left.-\\ln \\operatorname{det}^{+} (\\tilde{K}_{z})+\\operatorname{tr}\\left(K_{z}^{+} \\tilde{K}_{z}\\right)-(r+p)\\right] .\\end{aligned}\n\\end{equation*}\n\n\\begin{lemma}\n\\label{lemma1}\nLet $f(z) \\sim \\mathcal{N}\\left(m_{z}, K_{z}\\right)$ and $\\tilde f(z) \\sim \\mathcal{N}(\\tilde m_{z}, \\tilde K_{z})$ be Gaussian and possibly degenerate probability density functions with the same $r+p$-dimensional support $\\mathcal{A}$. Let \\begin{align*}\\mathcal U&=\\{\\tilde m_z \\in\\mathbb R^{n+p} \\hbox{ s.t. } \\tilde m_z-m_z\\in \\mathrm{Im}(K_z)\\}\\\\\n\\mathcal{V}&={\\{\\tilde K_{z} \\in \\mathcal{Q}^{n+p} ~\\text {s.t.}~ \\operatorname{Im}(K_{z} )=\\operatorname{Im} (\\tilde{K}_{z} )}\\}. \\end{align*} Then, $D(\\tilde f,f)$ is strictly convex with respect to $\\tilde m_z\\in\\mathcal U$ and $\\tilde K_z\\in\\mathcal V \\cap \\overline{\\mathcal Q_+^{n+p}}$. Moreover, $D(\\tilde f, f) \\geq 0$ and equality holds if and only if $f=\\tilde f$.\n\\end{lemma}\n\n\n\n\n\\IEEEproof\nLet $U_{\\mathfrak{r}}^\\top$ be a matrix whose columns form an orthonormal basis for $\\mathrm {Im}(K_z)$. Moreover, we define $ K_z^{\\mathfrak{r}}= U_{\\mathfrak{r}} K_zU_{\\mathfrak{r}}^\\top$ and $\\tilde K_z^{\\mathfrak{r}}= U_{\\mathfrak{r}} \\tilde K_zU_{\\mathfrak{r}}^\\top$. Since $f(z)$ and $\\tilde f(z)$ have the same support, $\\tilde K_z$ and $\\tilde m_z$ are such that $\\mathrm {Im}(\\tilde K_z)=\\mathrm {Im}( K_z)$ and $\\Delta m_z \\in \\mathrm {Im}(K_z)$. Thus, $D(\\tilde f,f)$ is strictly convex in $\\tilde K_z \\in\\mathcal V\\cap \\overline{\\mathcal Q_+^{n+p}}$ if and only if it is strictly convex in $\\tilde K_z^{\\mathfrak{r}} \\in \\mathcal Q_+^{r+p}$. Then, it is not difficult to see that\n\\begin{equation*}\n\\begin{aligned} D(\\tilde f, {f})=& \\frac{1}{2}\\left[\\Delta m^{\\top}_z U^{\\top}_{\\mathfrak{r}} (K^{\\mathfrak{r}}_{z})^{-1} U_{\\mathfrak{r}} \\Delta m_z +\\operatorname{tr}((K^{\\mathfrak{r}}_{z})^{-1} \\tilde{K}^{\\mathfrak{r}}_{z}) \\right.\\\\ &\\left.-\\ln \\operatorname{det} ((K^{\\mathfrak{r}}_{z})^{-1} \\tilde{K}^{\\mathfrak{r}}_{z})-(r+p)\\right]\\end{aligned}\n\\end{equation*}\nwhich is strictly convex in $\\tilde K^{\\mathfrak{r}}_{z} \\in \\mathcal Q_+^{r+p}$, see \\cite{COVER_THOMAS}. Hence, we have proved the strict convexity of $D(\\tilde f,f)$ in $\\tilde K_z \\in \\mathcal V\\cap \\overline {\\mathcal Q_+^{n+p}}$. Using similar reasoning, we can conclude that $D(\\tilde f, {f})$ is strictly convex in $\\tilde m_z\\in \\mathcal U$.\nFinally, the unique minimum of $D(\\tilde f,f)$ with respect to $\\tilde f$ is given by the stationary conditions $\\tilde m_z= m_z$ and $\\tilde K _{z}={K}_{z}$, i.e. $\\tilde f=f$. Since $D(f,f)=0$, we conclude that $D(\\tilde f,f)\\geq 0$ and equality holds if and only if $\\tilde f=f$.\n\\qed \\\\\n\nIn what follows, we assume that $f$ is known while $\\tilde f$ is not, and that both have the same support. Then, we assume that $\\tilde f$ belongs to the ambiguity set, i.e.
a ``ball'':\n$$\n\\mathcal{B}=\\{\\tilde{f} \\sim \\mathcal N(\\tilde m_z,\\tilde K_z)\\hbox{ s.t. } D(\\tilde{f}, f) \\leq c\\}\n$$\nwhere $c>0$, hereafter called tolerance, represents the radius of this ball.\nIt is worth noting that $f$ is usually estimated from measured data. More precisely, we fix a parametric and Gaussian model class $\\mathcal M$, then we select $f\\in\\mathcal M$ according to the maximum likelihood principle. Thus, when the length of the data is sufficiently large, we have\n\\al{f \\approx \\underset{f\\in\\mathcal M}{\\mathrm{argmin}}\\, D(\\tilde f,f)\\nonumber}\nunder standard hypotheses. Therefore, the uncertainty about $f$ is naturally defined by $\\mathcal B$, i.e. the actual model $\\tilde f$ satisfies the constraint $\\tilde f\\in\\mathcal B$ with $c=D(\\tilde f,f)$. Finally, an estimate of $c$ is given by $\\hat c=D(\\check f,f )$ where $\\check f$ is estimated from measured data using a model class $\\check{\\mathcal M}$ sufficiently ``large'', i.e. one containing many candidate models with diverse features.\n\n\nIn view of Lemma \\ref{lemma1}, $\\mathcal B$ is a convex set. We consider the robust estimator $\\hat x=g^0(y)$ solving the following minimax problem\n\\begin{equation} \\label{robust_p}\n(\\tilde f^0, g^0)=\\operatorname{arg}\\min _{g \\in \\mathcal{G}} \\max _{\\tilde{f} \\in \\mathcal{B}} J(\\tilde{f}, g)\n\\end{equation}\nwhere\n$$\n\\begin{aligned} J(\\tilde{f}, g) &=\\frac{1}{2} E_{\\tilde{f}}\\left[\\| x-g(y )\\|^{2}\\right] \\\\ &=\\frac{1}{2} \\int_\\mathcal{A}\\| x-g(y) \\|^{2} \\tilde{f}(z) d z. \\end{aligned}\n$$\nHere, $\\mathcal{G}$ is the set of estimators for which $E_{\\tilde{f}}\\left[\\|x-g(y)\\|^{2}\\right]$ is bounded with respect to all the Gaussian densities in $\\mathcal B$.\n\n\\begin{theorem} \\label{theo1} Let $f$ be a Gaussian (possibly degenerate) density defined as in (\\ref{pdf_f}) with $K_y>0$. Then, the least favorable Gaussian density $\\tilde f^0$ has mean vector and covariance matrix\n\\al{\\label{def_m_K_tilde}\n\\tilde{m}_{z}^0=m_{z}=\\left[\\begin{array}{c}{m_{x}} \\\\ {m_{y}}\\end{array}\\right], \\quad \\tilde{K}_{z}^0=\\left[\\begin{array}{cc}{\\tilde{K}_{x}} & {K_{x y}} \\\\ {K_{y x}} & {K_{y}}\\end{array}\\right]}\nso that only the covariance of $x$ is perturbed.\nMoreover, the Bayesian estimator\n\\begin{equation*}\ng^{0}(y)=G_{0}\\left(y-m_{y}\\right)+m_{x},\n\\end{equation*}\nwith $G_0=K_{x y} K_{y}^{-1}$, solves the robust estimation problem.\nThe nominal posterior covariance matrix of $x$ given $y$ is given by \\begin{align} \\label{def_P_stat}P&:=K_{x}-K_{x y} K_{y}^{-1} K_{y x},\\end{align} while the least favorable one is\n\\begin{equation*} \\tilde P=(P^+-\\lambda ^{-1}H^{ \\top} H)^+\\end{equation*}\nwhere $H^\\top\\in\\mathbb R^{n\\times r}$ is a matrix whose columns form an orthonormal basis for $ \\mathrm {Im} (P)$.\nMoreover, there exists a unique Lagrange multiplier $\\lambda >\\sigma_{max}(P)>0$ such that $c=D(\\tilde f^0, f)$.\n\n\\end{theorem}\n\n\\IEEEproof\nThe saddle point of $J$ must satisfy the conditions\n\\begin{equation} \\label{ineq_saddle}\nJ (\\tilde{f}, g^{0} ) \\leq J (\\tilde{f}^{0}, g^{0} ) \\leq J(\\tilde{f}^{0}, g )\n\\end{equation}\nfor all $\\tilde f \\in \\mathcal{B}$ and $g \\in \\mathcal{G}$.\nThe second inequality in (\\ref{ineq_saddle}) is based on the fact that the Bayesian estimator $g^0$ minimizes $J(\\tilde f^0,\\cdot)$.
Therefore, it remains to prove the first inequality in (\\ref{ineq_saddle}).\n\nNotice that the minimizer of $J(\\tilde f, \\cdot)$ is $$g^{\\star}(y)=\\tilde K_{xy}\\tilde K_y^{-1}(y-\\tilde m_y)+\\tilde m_x$$ where\n$$\n\\tilde m_{z}:=\\left[\\begin{array}{c}{\\tilde m_{x}} \\\\ {\\tilde m_{y}}\\end{array}\\right], \\quad \\tilde K_{z}:=\\left[\\begin{array}{cc}{\\tilde K_{x}} & {\\tilde K_{x y}} \\\\ {\\tilde K_{y x}} & {\\tilde K_{y}}\\end{array}\\right].\n$$ Moreover, $J(\\tilde f,g^\\star)=1\/2\\operatorname{tr}(\\tilde P)$ where $\\tilde P:=\\tilde K_{x}-\\tilde K_{x y} \\tilde K_{y}^{-1} \\tilde K_{y x}$. Since $\\mathrm{Im}(\\tilde K_z)=\\mathrm{Im}(K_z)$, then $\\mathrm{Im}(\\tilde P)=\\mathrm{Im}(P)=\\mathrm{Im}(H^\\top)$. Let $\\check H^\\top\\in\\mathbb R^{n\\times (n-r)}$ be a matrix whose columns form an orthonormal basis for the orthogonal complement of $\\mathrm{Im}(P)$ in $\\mathbb R^n$. Then, $\\check H^\\top \\check H+H^\\top H=I_n$. Therefore,\n\\al{ J(\\tilde f,g^\\star )&= \\frac{1}{2}\\operatorname{tr}(\\tilde P(\\check H^\\top \\check H+H^\\top H))= \\frac{1}{2} \\operatorname{tr}(\\tilde P H^\\top H)\\nonumber\\\\\n&=\\frac{1}{2} \\mathbb{E}_{\\tilde f}[\\|H(x-g(y))\\|^2].\\nonumber}\nThis means that the maximization problem can be formulated by reducing the dimension of the random vector $z$, i.e. we take\n\\al{z_{\\mathfrak{r}}:= \\left[\\begin{array}{c}{ x_{\\mathfrak{r} }} \\\\ y \\end{array}\\right]=U_{\\mathfrak{r}}z, \\quad U_{\\mathfrak{r}}:=\n\\left[\\begin{array}{cc}\nH & 0 \\\\\n0 & I\n\\end{array}\\right],\\nonumber}\nand $z=U_{\\mathfrak{r}}^\\top z_{\\mathfrak{r}}$. The nominal and actual reduced densities are, respectively,\n$f_{\\mathfrak{r}}\\sim \\mathcal N( m_z^{\\mathfrak{r}}, K_z^{\\mathfrak{r}})$ and\n$\\tilde{f}_{\\mathfrak{r}} \\sim \\mathcal N(\\tilde m_z^{\\mathfrak{r}},\\tilde K_z^{\\mathfrak{r}})$ where $m_z^{\\mathfrak{r}}= U_{\\mathfrak{r}} m_z$, $\\tilde m_z^{\\mathfrak{r}}= U_{\\mathfrak{r}}\\tilde m_z$,\n$K_z^{\\mathfrak{r}}= U_{\\mathfrak{r}} K_zU_{\\mathfrak{r}}^\\top$,\n$\\tilde K_z^{\\mathfrak{r}}= U_{\\mathfrak{r}} \\tilde K_zU_{\\mathfrak{r}}^\\top$.\nAccordingly, the maximization of $J(\\cdot,g^0)$ is equivalent to the maximization $J(\\tilde f_{\\mathfrak{r}},g_{\\mathfrak{r}}^0)=\\mathbb{E}_{\\tilde f_{\\mathfrak{r}}}[\\| x_{\\mathfrak{r}} -g^0_{\\mathfrak{r}}(y)\\|^2]$ where $\\tilde f_{\\mathfrak{r}}\\in \\mathcal B_{\\mathfrak{r}}:=\\{\\tilde{f}_{\\mathfrak{r}} \\sim \\mathcal N(\\tilde m_z^{\\mathfrak{r}},\\tilde K_z^{\\mathfrak{r}})\\hbox{ s.t. 
} D(\\tilde{f}_{\\mathfrak{r}}, f_{\\mathfrak{r}}) \\leq c\\}$, $g^0_{\\mathfrak{r}}:=Hg^0$ and\n\\begin{equation*}\n\\begin{aligned} D(\\tilde f_{\\mathfrak{r}}, {f_{\\mathfrak{r}}})=& \\frac{1}{2}\\left[(\\Delta m^{\\mathfrak{r}}_z)^{\\top} (K^{\\mathfrak{r}}_{z})^{-1} \\Delta m^{\\mathfrak{r}}_z +\\operatorname{tr}((K^{\\mathfrak{r}}_{z})^{-1} \\tilde{K}^{\\mathfrak{r}}_{z}) \\right.\\\\ &\\left.-\\ln \\operatorname{det} ((K^{\\mathfrak{r}}_{z})^{-1} \\tilde{K}^{\\mathfrak{r}}_{z})-(r+p)\\right]\\end{aligned}\n\\end{equation*}\nwhere $\\Delta m^{\\mathfrak{r}}_{z}=\\tilde m^{\\mathfrak{r}}_z-m^{\\mathfrak{r}}_z$.\nSince $K_z^{\\mathfrak{r}}>0$ and $\\tilde K_z^{\\mathfrak{r}}>0$, by Theorem 1 in \\cite{robustleastsquaresestimation, ROBUST_STATE_SPACE_LEVY_NIKOUKHAH_2013} (see also \\cite{yi2020low}), we have that the maximizer $\\tilde f^0_{\\mathfrak{r}}$ has mean $m_z^{\\mathfrak{r}}$ and covariance matrix\n\\begin{equation} \\label{tildeK_z}\n\\tilde{K}^{\\mathfrak{r}}_{z}=\\left[\\begin{array}{cc}\\tilde K_x^{\\mathfrak{r}}& K_{xy}^{\\mathfrak{r}} \\\\ K_{yx}^{\\mathfrak{r}} & K_y\\end{array}\\right] \\nonumber\n\\end{equation} where $\\tilde K_x^{\\mathfrak{r}}= \\tilde P^{\\mathfrak{r}}+\\tilde K_{xy}^{\\mathfrak{r}} K_y^{-1} \\tilde K_{yx}^{\\mathfrak{r}}$ with\n\\begin{equation} \\label{tilde_P}\n\\tilde P^{\\mathfrak{r}}=\\left( (P^{\\mathfrak{r}})^{-1}-\\lambda^{-1} I_r\\right)^{-1},\\nonumber\n\\end{equation} $P^{\\mathfrak{r}}:= HPH^\\top >0$ and $\\lambda >\\sigma_{max}( P^{\\mathfrak{r}})>0$ is the unique solution to\n\\al{\\label{eq_lambda}\\frac{1}{2}\\{ \\ln \\det P^{\\mathfrak{r}} -\\ln \\det \\tilde P^{\\mathfrak{r}} +\\operatorname{tr}((P^{\\mathfrak{r}})^{-1}\\tilde P^{\\mathfrak{r}}-I_r)\\}=c. } We conclude that the least favorable density $\\tilde f^0(z)$ has mean and covariance matrix as in (\\ref{def_m_K_tilde}). Moreover,\n\\al{\\tilde K_x=H^\\top \\tilde K_x^{\\mathfrak{r}}H = \\underbrace{H^\\top \\tilde P^{\\mathfrak{r}}H }_{=:\\tilde P} +\\underbrace{ H^\\top K_{xy}^{\\mathfrak{r}}}_{=K_{xy}}K_y^{-1} K_{yx}^{\\mathfrak{r}} H \\nonumber}\nand \\al{\\tilde P=H^\\top((P^{\\mathfrak{r}})^{-1}-\\lambda^{-1}I_r)^{-1}H=(P^+-\\lambda^{-1}H^\\top H)^+.\\nonumber}\nFinally, Equation (\\ref{eq_lambda}) can be written in terms of $P$ and $\\tilde P$:\n\\al{ \\frac{1}{2}\\{ \\ln {\\det}^{+} P -\\ln {\\det}^{+} \\tilde P +\\operatorname{tr}(P ^{+}\\tilde P -I_r)\\}=c\\nonumber}\nwhere $\\lambda >\\sigma_{max}(H P H^\\top )=\\sigma_{max}(P ).$\n\n\\qed \\\\\n\n\\begin{remark} If $P>0$ then $f$ is a non-degenerate density. In such a case, Theorem \\ref{theo1} still holds and: the pseudo inverse is replaced by the inverse; moreover, $H^{ \\top} H$ becomes the identity matrix. Therefore, we recover the robust static estimation problem proposed in \\cite{robustleastsquaresestimation}.\n\\end{remark}\n\n\\begin{remark}\n In Problem (\\ref{robust_p}) we could add the constraint $\\operatorname{rank}(G)\\leq q$, with $q