diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzbsyh b/data_all_eng_slimpj/shuffled/split2/finalzzbsyh new file mode 100644 index 0000000000000000000000000000000000000000..9804789a033f612543b7cb1481d0c4ef8cf9ecb6 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzbsyh @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{sxn:intro}\n\nTwo ubiquitous aspects of large-scale data analysis are that the data often\nhave heavy-tailed properties and that diffusion-based or spectral-based\nmethods are often used to identify and extract structure of interest.\nIn the absence of strong assumptions on the data, \npopular distribution-independent methods such as those based on the VC\ndimension fail to provide nontrivial results for even simple learning\nproblems such as binary classification in these two settings.\nAt root, the reason is that in both of these situations the data are\nformally very high dimensional and that (without additional regularity\nassumptions on the data) there may be a small number of ``very outlying''\ndata points.\nIn this paper, we develop distribution-dependent learning methods that\ncan be used to provide dimension-independent sample complexity bounds for\nthe maximum margin version of the binary classification problem in these \ntwo popular settings.\nIn both cases, \nwe are able to obtain nearly optimal linear classification hyperplanes since\nthe distribution-dependent tools we employ are able to control the aggregate\neffect of the ``outlying'' data points.\nIn particular, our results will hold even though the data may be \ninfinite-dimensional and unbounded.\n\n\n\\subsection{Overview of the problems}\n\\label{sxn:intro_overview}\n\nSpectral-based kernels have received a great deal of attention recently in\nmachine learning for data classification, regression, and exploratory data\nanalysis via dimensionality reduction~\\cite{SWHSL06}.\nConsider, for example, Laplacian Eigenmaps~\\cite{BN03} and the related\nDiffusion Maps~\\cite{Lafon06}.\nGiven a graph $G=(V,E)$ (where this graph could be constructed from the data\nrepresented as feature vectors, as is common in machine learning, or it\ncould simply be a natural representation of a large social or information\nnetwork, as is more common in other areas of data analysis), let\n$f_0, f_1, \\dots, f_n$ be the eigenfunctions of the normalized Laplacian of\n$G$ and let $\\l_0, \\l_1, \\dots, \\l_n$ be the corresponding eigenvalues.\nThen, the Diffusion Map is the following feature map\n$$\n\\Phi: v \\mapsto(\\l_0^k f_0(v), \\dots, \\l_n^k f_n(v)) ,\n$$\nand Laplacian Eigenmaps is the special case when $k=0$.\nIn this case, the support of the data distribution is unbounded as the size\nof the graph increases; the VC dimension of hyperplane classifiers is\n$\\Theta(n)$; and thus existing results do not give dimension-independent\nsample complexity bounds for classification by Empirical Risk Minimization\n(ERM).\nMoreover, it is possible (and indeed quite common in certain applications) \nthat on some vertices $v$ the eigenfunctions\nfluctuate wildly---even on special classes of graphs, such as random graphs\n$G(n, p)$, a non-trivial uniform upper bound stronger than $O(n)$ on\n$\\|\\Phi(v)\\|$ over all vertices $v$ does not appear to be known\n\\footnote{It should be noted that, while potentially problematic for what we\nare discussing in this paper, such eigenvector localization often has a\nnatural interpretation in terms of the processes generating the data and can\nbe useful in many data 
analysis applications.\nFor example, it might correspond to a high degree node or an articulation\npoint between two clusters in a large informatics\ngraph~\\cite{LLDM08_communities_CONF,LLDM08_communities_TR,LLM10_communities_CONF};\nor it might correspond to DNA single-nucleotide polymorphisms that are\nparticularly discriminative in simple models that are chosen for\ncomputational rather than statistical reasons~\\cite{CUR_PNAS,Paschou07b}.}\nEven for maximum margin or so-called ``gap-tolerant'' classifiers, defined \nprecisely in Section~\\ref{sxn:background} and which are easier to learn\nthan ordinary linear hyperplane classifiers, the existing bounds of Vapnik\nare not independent of the number $n$ of nodes\n\\footnote{VC theory provides an upper bound of\n$O\\left(\\left(n\/\\Delta\\right)^2\\right)$ on the VC dimension of gap-tolerant\nclassifiers applied to the Diffusion Map feature space corresponding to a\ngraph with $n$ nodes.\n(Recall that by Lemma~\\ref{lem:vap1} below, the VC dimension of the space of\ngap-tolerant classifiers corresponding to a margin $\\Delta$, applied to a\nball of radius $R$ is $\\sim \\left(R\/\\Delta\\right)^2$.)\nOf course, although this bound is quadratic in the number of nodes, VC\ntheory for ordinary linear classifiers gives an $O(n)$ bound.}\n\nA similar problem arises in the seemingly very-different situation that the\ndata exhibit heavy-tailed or power-law behavior.\nHeavy-tailed distributions are probability distributions with tails that are\nnot exponentially bounded~\\cite{Resnick07,CSN09}.\nSuch distributions can arise via several mechanisms, and they are ubiquitous\nin applications~\\cite{CSN09}.\nFor example, graphs in which the degree sequence decays according to a power\nlaw have received a great deal of attention recently.\nRelatedly, such diverse phenomenon as the distribution of packet\ntransmission rates over the internet, the frequency of word use in common\ntext, the populations of cities, the intensities of earthquakes, and the\nsizes of power outages all have heavy-tailed behavior.\nAlthough it is common to normalize or preprocess the data to remove the\nextreme variability in order to apply common data analysis and machine\nlearning algorithms, such extreme variability is a fundamental property of\nthe data in many of these application domains.\n\nThere are a number of ways to formalize the notion of heavy-tailed behavior\nfor the classification problems we will consider, and in this paper we will\nconsider the case where the \\emph{magnitude} of the entries decays according \nto a power law.\n(Note, though, that in Appendix~\\ref{sxn:learnHT}, we will, for completeness,\nconsider the case in which the probability that an entry is nonzero decays in \na heavy-tailed manner.) 
\nThat is, if\n$$\n\\Phi: v \\mapsto(\\phi_0(v), \\dots, \\phi_n(v))\n$$\nrepresents the feature map, then $\\phi_i(v) \\leq Ci^{-\\alpha}$ for some\nabsolute constant $C>0$, with $\\alpha > 1$.\nAs in the case with spectral kernels, in this heavy-tailed situation, the\nsupport of the data distribution is unbounded as the size of the graph\nincreases, and the VC dimension of hyperplane classifiers is $\\Theta(n)$.\nMoreover, although there are a small number of ``most important'' features,\nthey do not ``capture'' most of the ``information'' of the data.\nThus, when calculating the sample complexity for a classification task for\ndata in which the feature vector has heavy-tailed properties, bounds that do\nnot take into account the distribution are likely to be very weak.\n\nIn this paper, we develop distribution-dependent bounds for problems in\nthese two settings.\nClearly, these results are of interest since VC-based arguments fail to\nprovide nontrivial bounds in these two settings, in spite of ubiquity of\ndata with heavy-tailed properties and the widespread use of spectral-based\nkernels in many applications.\nMore generally, however, these results are of interest since the\ndistribution-dependent bounds underlying them provide insight into how\nbetter to deal with heterogeneous data with more realistic noise\nproperties.\n\n\n\\subsection{Summary of our main results}\n\\label{sxn:intro_summary1}\n\nOur first main result provides bounds on classifying data whose\nmagnitude decays in a heavy-tailed manner.\nIn particular, in the following theorem we show that if the weight of the\n$i^{th}$ coordinate of random data point is less than $C i^{-\\a}$ for some\n$C>0, \\a > 1$, then the number of samples needed before a maximum-margin classifier is approximately optimal with high probability is independent of the number of features.\n\\begin{theorem} [Heavy-Tailed Data]\n\\label{thm:learn_heavytail}\nLet the data be heavy-tailed in that the feature vector:\n$$\n\\Phi: v \\mapsto(\\phi_1(v), \\dots, \\phi_n(v)) ,\n$$\nsatisfy $|\\phi_i(v)| \\leq Ci^{-\\alpha}$ for some absolute constant $C>0$, with\n$\\alpha > 1$. 
Let $\\zeta(\\cdot)$ denote the Riemann zeta function.\nThen, for any $\\ell$, if a maximum margin classifier has a margin $> \\De $, with probability more than $1 - \\de$, its risk is less than\n$$ \\eps := \\frac{\\tilde{O}\\left( \\frac{\\sqrt{\\zeta(2 \\a) \\ell}}{\\Delta}\\right)+ \\log \\frac{1}{\\de}}{\\ell},$$\n where $\\tilde{O}$ hides multiplicative polylogarithmic factors.\n\\end{theorem}\nThis result follows from a bound on the annealed entropy of gap-tolerant\nclassifiers in a Hilbert space that is of independent interest.\nIn addition, it makes important use of the fact that although individual\nelements of the heavy-tailed feature vector may be large, the vector has\nbounded moments.\n\nOur second main result provides bounds on classifying data with spectral\nkernels.\nIn particular, in the following theorem\nwe give dimension-independent upper bounds on the sample complexity of\nlearning a nearly-optimal maximum margin classifier\nin the feature space of the Diffusion Maps.\n\\begin{theorem} [Spectral Kernels]\n\\label{thm:learn_spectral}\nLet the following Diffusion Map be given:\n$$\n\\Phi: v \\mapsto(\\l_1^k f_1(v), \\dots, \\l_n^k f_n(v)) ,\n$$ where $f_i$ are normalized eigenfunctions (whose $\\ell_2(\\mu)$ norm is $1$, $\\mu$ being the uniform distribution), $\\l_i$ are the eigenvalues of the corresponding Markov Chain and $k \\geq 0$.\nThen, for any $\\ell$, if a maximum margin classifier has a margin $> \\De $, with probability more than $1 - \\de$, its risk is less than\n$$ \\eps := \\frac{\\tilde{O}\\left( \\frac{\\sqrt{\\ell}}{\\Delta}\\right)+ \\log \\frac{1}{\\de}}{\\ell},$$\n where $\\tilde{O}$ hides multiplicative polylogarithmic factors.\n\\end{theorem}\nAs with the proof of our main heavy-tailed learning result, the proof of our\nmain spectral learning result makes essential use of an upper bound on the\nannealed entropy of gap-tolerant classifiers.\nIn applying it, we make important use of the fact that although individual\nelements of the feature vector may fluctuate wildly, the norm of the\nDiffusion Map feature vector is bounded.\n\nAs a side remark, note that we are not viewing the feature map in\nTheorem~\\ref{thm:learn_spectral} as necessarily being either a random\nvariable or requiring knowledge of some marginal distribution---as might\nbe the case if one is generating points in some space according to some\ndistribution; then constructing a graph based on nearest neighbors; and\nthen doing diffusions to construct a feature map.\nInstead, we are thinking of a data graph in which the data are adversarially\npresented, \\emph{e.g.}, a given social network is presented, and diffusions\nand\/or a feature map is then constructed.\n\nThese two theorems provide a dimension-independent (\\emph{i.e.},\nindependent of the size $n$ of the graph and the dimension of the feature\nspace) upper bound on the number of samples needed to learn a maximum\nmargin classifier, under\nthe assumption that a heavy-tailed feature map or the Diffusion Map kernel\nof some scale is used as the feature map.\nAs mentioned, both proofs (described below in Sections~\\ref{sxn:gapHS_HTapp}\nand~\\ref{sxn:gapHS_SKapp}) proceed by providing a\ndimension-independent upper bound on the annealed entropy of gap-tolerant\nclassifiers in the relevant feature space, and then appealing to\nTheorem~\\ref{thm:Vapnik} (in Section~\\ref{sxn:background}) relating the\nannealed entropy to the generalization error.\nFor this bound on the annealed entropy of these gap-tolerant classifiers,\nwe\n
crucially use the fact that\n$\\E_v \\|\\Phi(v)\\|^2$ is bounded, even if $\\sup_v \\|\\Phi(v)\\|$ is\nunbounded as~$n \\ra \\infty$.\nThat is, although bounds on the individual entries of the feature map do not\nappear to be known, we crucially use that there exist nontrivial bounds on\nthe magnitude of the feature vectors.\nSince this bound is of more general interest, we describe it separately.\n\n\n\\subsection{Summary of our main technical contribution}\n\\label{sxn:intro_summary2}\n\nThe distribution-dependent ideas that underlie our two main results\n(in Theorems~\\ref{thm:learn_heavytail} and~\\ref{thm:learn_spectral}) can\nalso be used to bound the sample complexity of a classification task more\ngenerally under the assumption that the expected value of a norm of the data\nis bounded, \\emph{i.e.}, when the magnitude of the feature vector of the\ndata in some norm has a finite moment.\nIn more detail:\n\\begin{itemize}\n\\item\nLet $\\PP$ be a probability measure on a Hilbert space $\\HH$,\nand let $\\Delta>0$.\nIn Theorem~\\ref{thm:ddb_HS} (in Section~\\ref{sxn:gapHS_annent}), we prove\nthat if $\\EE_{\\mathcal{P}} \\|x\\|^{2} = r^{2} < \\infty$,\nthen the annealed entropy of gap-tolerant classifiers (defined in Section~\\ref{sxn:background}) in $\\HH$ can be\nupper bounded in terms of a function of $r$, $\\Delta$, and (the number of\nsamples) $\\ell$, independent of the (possibly infinite) dimension of $\\HH$.\n\\end{itemize}\nIt should be emphasized that the assumption that the expectation of some\nmoment of the norm of the feature vector is bounded is a \\emph{much} weaker\ncondition than the more common assumption that the largest element is\nbounded, and thus this result is likely of more general interest in dealing\nwith heterogeneous and noisy data.\nFor example, similar ideas have been applied recently to the problem of\nbounding the sample complexity of learning smooth cuts on a low-dimensional\nmanifold~\\cite{NN09}.\n\nTo establish this result, we use a result\n(See Lemma~\\ref{lem:vap1} in Section~\\ref{sxn:gapHS_vcdim}.)\nthat the VC\ndimension of gap-tolerant classifiers in a Hilbert space when the margin is\n$\\Delta$ over a bounded domain such as a ball of radius $R$ is bounded above\nby $\\lfloor R^2\/\\Delta^2 \\rfloor + 1$.\nSuch bounds on the VC dimension of gap-tolerant classifiers have been stated\npreviously by Vapnik~\\cite{Vapnik98}.\nHowever, in the course of his proof bounding the VC dimension of a\ngap-tolerant classifier whose margin is $\\Delta$ over a ball of radius $R$\n(See~\\cite{Vapnik98}, page~353.), Vapnik states, without further\njustification, that due to symmetry the set of points in a ball that is\nextremal in the sense of being the hardest to shatter with gap-tolerant\nclassifiers is the regular simplex.\nAttention has been drawn to this fact by Burges (See~\\cite{Burges98},\nfootnote~20.), who mentions that a rigorous proof of this fact seems to be\nabsent.\nHere, we provide a new proof of the upper bound on the VC dimension of such\nclassifiers without making this assumption.\n(See Lemma~\\ref{lem:vap1} in Section~\\ref{sxn:gapHS_vcdim} and\nits proof.)\nHush and Scovel~\\cite{HS01} provide an alternate proof of Vapnik's claim;\nit is somewhat different than ours, and they do not extend their proof to \nBanach spaces.\n\nThe idea underlying our new proof of this result generalizes to the case\nwhen the data need not have compact support and where the margin may be\nmeasured with respect to more general norms.\nIn particular, we show 
that the VC dimension of gap-tolerant classifiers with\nmargin $\\Delta$ in a ball of radius $R$ in a Banach space of Rademacher type\n$p \\in (1, 2]$ and type constant $T$ is bounded above\nby $\\sim \\left(3TR\/\\Delta\\right)^{\\frac{p}{p-1}}$, and that there exists a Banach space of type $p$ (in fact $\\ell_p$) for which the VC dimension is bounded below by $\\left(R\/\\Delta\\right)^{\\frac{p}{p-1}}$.\n(See Lemmas~\\ref{lem:banach_ub} and~\\ref{lem:banach_lb} in\nSection~\\ref{sxn:gapBS_vcdim}.)\nUsing this result, we can also prove bounds for the annealed entropy of\ngap-tolerant classifiers in a Banach space.\n(See Theorem~\\ref{thm:ddb_BS} in Section~\\ref{sxn:gapBS_annent}.)\nIn addition to being of interest from a theoretical perspective, this result\nis of potential interest in cases where modeling the relationship between\ndata elements as a dot product in a Hilbert space is too restrictive, and\nthus this may be of interest, \\emph{e.g.}, when the data are extremely\nsparse and heavy-tailed.\n\n\n\\subsection{Maximum margin classification and ERM with gap-tolerant classifiers}\n\\label{sxn:intro_maxmargin}\n\n\nGap-tolerant classifiers---see Section~\\ref{sxn:background} for more\ndetails---are useful, at least theoretically, as a means of implementing\nstructural risk minimization (see, \\emph{e.g.}, Appendix A.2\nof~\\cite{Burges98}).\nWith gap-tolerant classifiers, the margin $\\De$ is fixed before hand, and\ndoes not depend on the data.\nSee, \\emph{e.g.},~\\cite{FS99b,HBS05,HS01,SBWA98}.\nWith maximum margin classifiers, on the other hand, the margin is a function\nof the data.\nIn spite of this difference, the issues that arise in the analysis of these\ntwo classifiers are similar.\nFor example, through the fat-shattering dimension, bounds can be obtained\nfor the maximum margin classifier, as shown by Shawe-Taylor\n\\emph{et al.}~\\cite{SBWA98}.\nHere, we briefly sketch how this is achieved.\n\\begin{definition} Let $\\FF$ be a set of real valued functions. We say that a set of points $x_1, \\dots, x_s$ is $\\gamma-$shattered by $\\FF$ if there are real numbers $t_1, \\dots, t_s$ such that for all binary vectors $\\mathbf{b} = (b_1, \\dots, b_s)$ and each $ i \\in [s]=\\{1,\\ldots,s\\}$, there is a function $f_{\\mathbf{b}}$ satisfying,\n\\beq f_{\\mathbf{b}}(x_i) = \\left\\{ \\begin{array}{ll}\n > t_i + \\gamma, & \\hbox{if $ b_i = 1$;} \\\\\n < t_i - \\gamma, & \\hbox{otherwise.}\n \\end{array} \\right. 
\\eeq\n For each $\\gamma > 0$, the {\\it fat shattering dimension} $\\fat_\\FF(\\gamma)$ of the set $\\FF$ is defined to be the size of the largest $\\gamma-$shattered set if this is finite; otherwise it is declared to be infinity.\n \\end{definition}\nNote that, in this definition, $t_i$ can be different for different $i$, which is not the case\nin gap-tolerant classifiers.\nHowever, one can incorporate this shift into the feature space by a simple\nconstruction.\nWe start with the following definition of a Banach space of type $p$ with\ntype constant $T$.\n\n\\begin{definition} [Banach space, type, and type constant]\n\\label{def:rad}\nA {\\it Banach space} is a complete normed vector space.\nA Banach space $\\mathcal{B}$ is said to have (Rademacher) type $p$ if there\nexists $T < \\infty$ such that for all $n$ and $x_1, \\dots, x_n \\in\n\\mathcal{B}$\n\\beqs\n\\EE_\\epsilon[\\|\\sum_{i=1}^n \\epsilon_i x_i\\|_{\\mathcal{B}}^p]\n \\leq T^p \\sum_{i=1}^n \\|x_i\\|_{\\mathcal{B}}^p .\n\\eeqs\nThe smallest $T$ for which the above holds, with $p$ equal to the type, is\ncalled the type constant of~$\\mathcal{B}$.\n\\end{definition}\nGiven a Banach space $\\BB$ of type $p$ and type constant $T$, let $\\BB'$\nconsist of all tuples $(v, c)$ for $v \\in \\BB$ and $c \\in \\RR$, with the\nnorm $$\\|(v, c)\\|_{\\BB'} := \\left(\\|v\\|^p + |c|^p\\right)^{1\/p}.$$\nNote that if $\\BB$ is a Banach space of type $p$ and type constant $T$\n(see Sections~\\ref{sxn:gapBS_prelim} and~\\ref{sxn:gapBS_vcdim}), one can\neasily check that $\\BB'$ is a Banach space of type $p$ and type constant\n$\\max(T, 1)$.\n\nIn our distribution-specific setting, we cannot control the fat-shattering\ndimension, but we can control the logarithm of the expected value of\n$2^{\\kappa(\\fat_\\FF(\\gamma))}$ for any constant $\\kappa$ by applying\nTheorem~\\ref{thm:ddb_BS} to $\\BB'$.\nAs seen from Lemma 3.7 and Corollary 3.8 of the journal version\nof~\\cite{SBWA98}, this is all that is required for obtaining\ngeneralization error bounds for maximum margin classification.\nIn the present context, the logarithm of the expected value of the\nexponential of the fat shattering dimension of linear $1$-Lipschitz\nfunctions on a random data set of size $\\ell$ taken i.i.d. from $\\PP$\non $\\BB$ is bounded by the annealed entropy of gap-tolerant classifiers\non $\\BB'$ with respect to the push-forward $\\PP'$ of the measure $\\PP$\nunder the inclusion $\\BB \\hookrightarrow \\BB'$.\n\nThis allows us to state the following theorem, which is an analogue of\nTheorem~4.17 of the journal version of~\\cite{SBWA98}, adapted using\nTheorem~\\ref{thm:ddb_BS} of this paper.\n\n\\begin{theorem}\n\\label{thm:margin_BS}\nLet $\\Delta > 0$.\nSuppose inputs are drawn independently\naccording to a probability measure $\\PP$ on a Banach space $\\BB$ of type $p$ and type constant $T$, with $\\EE_{\\mathcal{P}} \\|x\\|^p = r^p < \\infty$.\n
If we succeed in\ncorrectly classifying $\\ell$ such inputs by a maximum margin hyperplane of margin $\\De$, then with confidence $1 - \\de$\nthe generalization error will be bounded from above by\n$$ \\eps := \\frac{\\tilde{O}\\left( \\frac{ Tr \\ell^{\\frac{1}{p}}}{\\Delta}\\right)+ \\log \\frac{1}{\\de}}{\\ell},$$\n where $\\tilde{O}$ hides multiplicative polylogarithmic factors involving $\\ell, T, r$ and $\\De$.\n\\end{theorem}\nSpecializing this theorem to a Hilbert space, we have the following theorem\nas a corollary.\n\\begin{theorem}\n\\label{thm:margin_HS}\nLet $\\Delta > 0$.\nSuppose inputs are drawn independently\naccording to a probability measure $\\PP$ on a Hilbert space $\\HH$, with $\\EE_{\\mathcal{P}} \\|x\\|^2 = r^2 < \\infty$. If we succeed in\ncorrectly classifying $\\ell$ such inputs by a maximum margin hyperplane with margin $\\De$, then with confidence $1 - \\de$\nthe generalization error will be bounded from above by\n$$ \\eps := \\frac{\\tilde{O}\\left( \\frac{ r \\ell^{\\frac{1}{2}}}{\\Delta}\\right)+ \\log \\frac{1}{\\de}}{\\ell},$$\n where $\\tilde{O}$ hides multiplicative polylogarithmic factors involving $\\ell, r$ and $\\De$.\n\\end{theorem}\nNote that Theorem~\\ref{thm:margin_HS} is an analogue of Theorem~4.17 of the\njournal version of~\\cite{SBWA98}, but adapted using Theorem~\\ref{thm:ddb_HS}\nof this paper.\nIn particular, note that this theorem does not assume that the distribution\nis contained in a ball of some radius $R$, but instead it assumes only that\nsome moment of the distribution is bounded.\n\n\n\\subsection{Outline of the paper}\n\\label{sxn:intro_outline}\n\nIn the next section, Section~\\ref{sxn:background}, we review some technical\npreliminaries that we will use in our subsequent analysis.\nThen, in Section~\\ref{sxn:gapHS}, we state and prove our main result for\ngap-tolerant learning in a Hilbert space, and we show how this result can be\nused to prove our two main theorems in maximum margin learning.\nThen, in Section~\\ref{sxn:gapBS}, we state and prove an extension of our\ngap-tolerant learning result to the case when the gap is measured with\nrespect to more general Banach space norms; and\nthen, in Sections~\\ref{sxn:discussion} and~\\ref{sxn:conclusion} we\nprovide a brief discussion and conclusion.\nFinally, for completeness, in Appendix~\\ref{sxn:learnHT}, we will provide\na bound for exact (as opposed to maximum margin) learning in the case in\nwhich the probability that an entry is nonzero (as opposed to the value of\nthat entry) decays in a heavy-tailed manner.\n\n\n\n\\section{Background and preliminaries}\n\\label{sxn:background}\n\nIn this paper, we consider the supervised learning problem of binary\nclassification, \\emph{i.e.}, we consider an input space $\\mathcal{X}$\n(\\emph{e.g.}, a Euclidean space or a Hilbert space) and an output space\n$\\mathcal{Y}$, where $\\mathcal{Y}=\\{-1,+1\\}$, and where the data consist of\npairs $(X,Y) \\in \\mathcal{X} \\times \\mathcal{Y}$ that are random variables\ndistributed according to an unknown distribution.\nWe shall assume that for any $X$, there is at most one pair $(X, Y)$ that is\nobserved.\nWe observe $\\ell$ i.i.d.\n
pairs $(X_i,Y_i), i=1,\\dots,\\ell$ sampled according\nto this unknown distribution, and the goal is to construct a classification\nfunction $\\a:\\mathcal{X} \\rightarrow \\mathcal{Y}$ which predicts\n$\\mathcal{Y}$ from $\\mathcal{X}$ with low probability of error.\n\nWhereas an ordinary linear hyperplane classifier consists of an oriented\nhyperplane, and points are labeled $\\pm1$, depending on which side of the\nhyperplane they lie, a \\emph{gap-tolerant classifier} consists of an\noriented hyperplane and a margin of thickness $\\Delta$ in some norm.\nAny point outside the margin is labeled $\\pm 1$, depending on which side of\nthe hyperplane it falls on, and all points within the margin are declared\n``correct,'' without receiving a $\\pm 1$ label.\nThis latter setting has been considered in~\\cite{Vapnik98,Burges98} (as a\nway of implementing structural risk minimization---apply empirical risk\nminimization to a succession of problems, and choose the gap $\\Delta$\nthat gives the minimum risk bound).\n\nThe\n\\emph{risk} $R(\\a)$ of a linear hyperplane classifier $\\alpha$ is the\nprobability that $\\a$ misclassifies a random data point $(x, y)$ drawn from\n$\\PP$; more formally, $R(\\a) := \\E_{\\PP}[\\II[\\a(x) \\neq y]]$.\nGiven a set of $\\ell$ labeled data points\n$(x_1, y_1), \\dots, (x_\\ell, y_\\ell)$, the \\emph{empirical risk}\n$R_{emp}(\\a, \\ell)$ of a linear hyperplane classifier $\\alpha$ is the\nfrequency of misclassification on the empirical data; more formally,\n$R_{emp}(\\a, \\ell) := \\frac{1}{\\ell}\\sum_{i=1}^\\ell \\II[\\a(x_i) \\neq y_i]$, where\n$\\II[\\cdot]$ denotes the indicator of the respective event.\nThe risk and empirical risk for gap-tolerant classifiers are defined in the\nsame manner.\nNote, in particular, that data points labeled as ``correct'' do not\ncontribute to the risk for a gap-tolerant classifier, \\emph{i.e.}, data\npoints that are on the ``wrong'' side of the hyperplane but that are within\nthe $\\Delta$ margin are not considered as incorrect and do not contribute\nto the risk.\n\nIn classification, the ultimate goal is to find a classifier that minimizes\nthe true risk, \\emph{i.e.}, $\\arg\\min_{\\a \\in \\Lambda} R(\\a)$, but since\nthe true risk $R(\\a)$ of a classifier $\\a$ is unknown, an empirical\nsurrogate is often used.\nIn particular,\n\\emph{Empirical Risk Minimization (ERM)} is the procedure of choosing a\nclassifier $\\a$ from a set of classifiers $\\Lambda$ by minimizing the\nempirical risk $\\arg\\min_{\\a \\in \\Lambda} R_{emp}(\\a, \\ell)$.\nThe consistency and rate of convergence of ERM---see~\\cite{Vapnik98} for\nprecise definitions---can be related to uniform bounds on the difference\nbetween the empirical risk and the true risk over all $\\a \\in \\Lambda$.\nThere is a large body of literature on sufficient conditions for this kind\nof uniform convergence.\nFor instance, the VC dimension is commonly used toward this end.\nNote that, when considering gap-tolerant classifiers, there is an additional\ncaveat, as one obtains uniform bounds only over those gap-tolerant\nclassifiers that do not contain any data points in the margin---Appendix A.2\nof~\\cite{Burges98} addresses this issue.\n\nIn this paper, our main emphasis is on the annealed entropy:\n\n\\begin{definition} [Annealed Entropy]\n\\label{def:an_ent}\nLet $\\PP$ be a probability measure supported on a vector space $\\HH$.\nGiven a set $\\Lambda$ of decision rules and a set of points\n$Z = \\{z_1, \\dots, z_\\ell\\} \\subset \\HH$, let $\\Nl(z_1, \\dots, 
z_\\ell)$ be\nthe number of ways of labeling $\\{z_1, \\dots, z_\\ell\\} $ into positive and\nnegative samples such that there exists a gap-tolerant classifier that\npredicts {\\it incorrectly} the label of each $z_i$.\nGiven the above notation,\n\\beqs\n\\Hnl(k) := \\ln \\E_{\\PP^{\\times k}} \\Nl(z_1, \\dots, z_k)\n\\eeqs\nis the annealed entropy of the classifier $\\Lambda$ with respect to $\\PP$.\n\\end{definition}\nNote that although we have defined the annealed entropy for general \ndecision rules, below we will consider the case that $\\Lambda$ consists of \nlinear decision rules.\n\nAs the following theorem states, the annealed entropy of a classifier can be\nused to get an upper bound on the generalization error.\nThis follows from Theorem $8$ in \\cite{Burges98} and a remark on page 198\nof~\\cite{DevGyoLug96}. Note that the class $\\Lambda^*$ is itself random, and consequently, $\\sup_{\\a \\in \\Lambda^*}\n {R(\\a) - R_{emp}(\\a, \\ell)}$ is the supremum over a random class.\n\\begin{theorem}\n\\label{thm:Vapnik}\nLet $\\Lambda^*$ be the family of all gap-tolerant classifiers such that no data point lies inside the margin. Then,\n$$\n\\p\\left[\\sup_{\\a \\in \\Lambda^*}\n {R(\\a) - R_{emp}(\\a, \\ell)}\n > \\epsilon\\right]\n < 8 \\exp{\\left(\\left({H_{ann}^\\Lambda(\\ell)}\\right) - \\frac{\\epsilon^2\\ell}{32}\\right)}\n$$\nholds true, for any number of samples $\\ell$ and for any error parameter\n$\\epsilon$.\n\\end{theorem}\nThe key property of the function class that leads to uniform bounds is the\nsublinearity of the logarithm of the expected value of the ``growth\nfunction,'' which measures the number of distinct ways in which a data set\nof a particular size can be split by the function class.\nA finite VC bound guarantees this in a distribution-free setting.\nThe annealed entropy is a distribution-specific measure,\n\\emph{i.e.}, the same family of classifiers can have different annealed\nentropies when measured with respect to different distributions.\nFor a more detailed exposition of uniform bounds in the context of gap-tolerant classifiers, we refer the reader\nto~(\\cite{Burges98}, Appendix A.2).\n\nNote also that normed vector spaces (such as Hilbert spaces and Banach\nspaces) are relevant to learning theory for the following reason.\nData are often accompanied with an underlying metric which carries\ninformation about how likely it is that two data points have the same label.\nThis makes concrete the intuition that points with the same class label are\nclustered together.\nMany algorithms cannot be implemented over an arbitrary metric space, but\nrequire a linear structure.\nIf the original metric space does not have such a structure, as is the case\nwhen classifying for example, biological data or decision trees, it is\ncustomary to construct a feature space representation, which embeds data\ninto a vector space.\nWe will be interested in the commonly-used Hilbert spaces, in which distances\nin the feature space are measure with respect to the $\\ell_2$ distance (as\nwell as more general Banach spaces, in Section~\\ref{sxn:gapBS}).\n\nFinally, note that our results where the margin is measured in $\\ell_2$ can\nbe transferred to a setting with kernels.\nGiven a kernel $k(\\cdot, \\cdot)$, it is well known that linear classification\nusing a kernel $k(\\cdot, \\cdot)$ is equivalent to mapping $x$ onto the\nfunctional $k(x, \\cdot)$ and then finding a separating halfspace in the\nReproducing Kernel Hilbert Space (RKHS) which is the Hilbert Space generated\nby 
the functionals of the form $k(x, \\cdot)$.\nSince the span of any finite set of points in a Hilbert Space can be\nisometrically embedded in $\\ell_2$, our results hold in the setting of\nkernel-based learning as well, when one first uses the feature map\n$x \\mapsto k(x, \\cdot)$ and works in the RKHS.\n\n\n\n\\section{Gap-tolerant classifiers in Hilbert spaces}\n\\label{sxn:gapHS}\n\nIn this section, we state and prove Theorem~\\ref{thm:ddb_HS}, our main result\nregarding an upper bound for the annealed entropy of gap-tolerant classifiers\nin $\\ell_2$.\nThis result is of independent interest, and it was used in a crucial way in\nthe proof of Theorems~\\ref{thm:learn_heavytail} and~\\ref{thm:learn_spectral}.\nWe start in Section~\\ref{sxn:gapHS_annent} with the statement and proof of\nTheorem~\\ref{thm:ddb_HS}, and then in Section~\\ref{sxn:gapHS_vcdim} we bound\nthe VC dimension of gap-tolerant classifiers over a ball of radius $R$.\nThen, in Section~\\ref{sxn:gapHS_HTapp}, we apply these results to prove our\nmain theorem on learning with heavy-tailed data, and finally in\nSection~\\ref{sxn:gapHS_SKapp}, we apply these results to prove our main\ntheorem on learning with spectral kernels.\n\n\n\\subsection{Bound on the annealed entropy of gap-tolerant classifiers in Hilbert spaces}\n\\label{sxn:gapHS_annent}\n\nThe following theorem is our main result regarding an upper bound for\nthe annealed entropy of gap-tolerant classifiers.\nThe result holds for gap-tolerant classification in a Hilbert space,\n\\emph{i.e.}, when the distances in the feature space are measured with\nrespect to the $\\ell_2$ norm.\nAnalogous results hold when distances are measured more generally, as we\nwill describe in Section~\\ref{sxn:gapBS}.\n\n\\begin{theorem} [Annealed entropy; Upper bound; Hilbert Space]\n\\label{thm:ddb_HS}\nLet $\\PP$ be a probability measure on a Hilbert space $\\HH$,\nand let $\\Delta>0$.\nIf $\\EE_{\\mathcal{P}} \\|x\\|^{2} = r^{2} < \\infty$,\nthen the annealed entropy of gap-tolerant classifiers in $\\HH$,\nwhere the gap is $\\Delta$, is\n$$\n\\Hnl(\\ell)\n \\leq \\left( \\ell^{\\frac{1}{2}}\\left(\\frac{r}{\\Delta}\\right) + 1\\right)\n (1+\\ln(\\ell+1)) .\n$$\n\\end{theorem}\n\\begin{proof}\nLet $\\ell$ independent, identically distributed (i.i.d.) samples\n$z_1, \\dots, z_\\ell$ be chosen from $\\mathcal{P}$.\nWe partition them into two classes:\n$$X = \\{x_1, \\dots, x_{\\ell-k}\\} := \\{z_i\\,\\, |\\,\\, \\|z_i\\| > R\\},$$ and\n$$Y = \\{y_1, \\dots, y_k\\} := \\{z_i \\,\\,|\\,\\, \\|z_i\\| \\leq R\\}.$$\nOur objective is to bound from above the annealed entropy\n$\\Hnl(\\ell)=\\ln\\EE[N^\\Lambda(z_1, \\dots, z_\\ell)]$.\nBy Lemma~\\ref{lem:submul}, $N^{\\Lambda}$ is sub-multiplicative.\nTherefore,\n$$N^{\\Lambda}(z_1, \\dots, z_\\ell) \\leq N^{\\Lambda}(x_1, \\dots,\nx_{\\ell-k}) N^{\\Lambda}(y_1, \\dots, y_k).$$ Taking an expectation\nover $\\ell$ i.i.d. samples from $\\mathcal{P}$,\n$$\\EE[N^{\\Lambda}(z_1, \\dots, z_\\ell)] \\leq \\EE[N^{\\Lambda}(x_1, \\dots, x_{\\ell-k})N^{\\Lambda}(y_1, \\dots,\ny_k)].$$\nNow applying Lemma~\\ref{lem:vap1}, we see that\n$$\n\\EE[N^{\\Lambda}(z_1, \\dots, z_\\ell)]\n \\leq \\EE[N^{\\Lambda}(x_1, \\dots, x_{\\ell -k})(k+1)^{R^2\/\\Delta^2+1}] .\n$$\nMoving $(k+1)^{R^2\/\\Delta^2+1}$ outside this expression,\n$$\n\\EE[N^{\\Lambda}(z_1, \\dots, z_\\ell)]\n \\leq \\EE[N^{\\Lambda}(x_1, \\dots, x_{\\ell -k})](k+1)^{R^2\/\\Delta^2+1} .\n$$\nNote that $N^{\\Lambda}(x_1, \\dots, x_{\\ell-k})$ is always bounded above by\n$2^{\\ell-k}$ and that\n
the random variables $\\I[\\|z_i\\| > R]$ are i.i.d.\nLet $\\rho = \\p[{\\|z_i\\| > R}]$, and note that\n$\\ell-k$ is the sum of $\\ell$ independent Bernoulli variables.\nMoreover, by Markov's inequality,\n$$\n\\p[\\|z_i\\|>R] \\,\\, \\leq \\,\\, \\frac{\\EE[\\|z_i\\|^2]}{R^2} ,\n$$\nand therefore $\\rho \\leq (\\frac{r}{R})^2$.\nIn addition,\n $$\\EE[N^{\\Lambda}(x_1, \\dots, x_{\\ell-k})] \\leq \\EE[2^{\\ell-k}].$$\n Let $I[\\cdot]$ denote an indicator variable. $\\EE[2^{\\ell-k}]$ can be written as\n $$\\prod_{i=1}^\\ell \\EE[2^{I[\\|z_i\\|> R]}] = (1+\\rho)^\\ell \\leq\n e^{\\rho \\ell}.$$\nPutting everything together, we see that\n\\beq\n\\label{eqn:jensen1}\n\\EE[N^{\\Lambda}(z_1, \\dots, z_\\ell)]\n \\leq\n \\exp\\left(\\ell \\left(\\frac{r}{R}\\right)^2\n + \\ln(\\ell+1)\\left(\\frac{R^2}{\\Delta^2} + 1\\right)\\right) .\n\\eeq\nIf we substitute $R = (\\ell r^2 \\Delta^2)^{\\frac{1}{4}}$, it follows that\n\\begin{eqnarray*}\n\\Hnl(\\ell)\n &=& \\log \\EE\\left[N^{\\Lambda}(z_1, \\dots, z_\\ell)\\right] \\\\\n &\\leq& \\left(\\ell^{\\frac{1}{2}}\\left(\\frac{r}{\\Delta}\\right) + 1\\right) (1+\\ln(\\ell+1)) .\n\\end{eqnarray*}\n\\end{proof}\n\nFor ease of reference, we note the following easily established fact about\n$\\Nl$.\nThis lemma is used in the proof of Theorem~\\ref{thm:ddb_HS} above and\nTheorem~\\ref{thm:ddb_BS} below.\n\n\\begin{lemma}\n\\label{lem:submul}\nLet $\\{x_1, \\dots, x_\\ell\\} \\cup \\{ y_1, \\dots, y_k\\}$ be a partition of the\ndata $Z$ into two parts.\nThen, $\\Nl$ is submultiplicative in the following sense:\n$$\n\\Nl(x_1, \\dots, x_\\ell, y_1, \\dots y_k)\n \\leq \\Nl(x_1, \\dots, x_\\ell) \\Nl(y_1, \\dots, y_k) .\n$$\n\\end{lemma}\n\\begin{proof}\nThis holds because any partition of\n$Z := \\{x_1, \\dots, x_\\ell, y_1, \\dots, y_k\\}$\ninto two parts by an element $\\II \\in \\Lambda$ induces such a partition for\nthe sets $\\{x_1, \\dots, x_\\ell\\}$ and $\\{y_1, \\dots, y_k\\}$, and for any\npair of partitions of $\\{x_1, \\dots, x_\\ell\\}$ and $\\{y_1, \\dots, y_k\\}$,\nthere is at most one partition of $Z$ that induces them.\n\\end{proof}\n\n\n\\subsection{Bound on the VC dimension of gap-tolerant classifiers in Hilbert spaces}\n\\label{sxn:gapHS_vcdim}\n\nAs an intermediate step in the proof of Theorem~\\ref{thm:ddb_HS}, we\nneeded a bound on the VC dimension of a gap-tolerant classifier within a\nball of fixed radius.\nLemma~\\ref{lem:vap1} below provides such a bound and is due to\nVapnik~\\cite{Vapnik98}.\nNote, though, that in the course of his proof (See~\\cite{Vapnik98},\npage 353.), Vapnik states, without further justification, that due to\nsymmetry the set of points that is extremal in the sense of being the\nhardest to shatter with gap-tolerant classifiers is the regular simplex.\nAttention has also been drawn to this fact by Burges (\\cite{Burges98},\nfootnote~20), who mentions that a rigorous proof of this fact seems to be\nabsent.\nVapnik's claim has since been proved by Hush and\nScovel~\\cite{HS01}.\nHere, we provide a new proof of Lemma~\\ref{lem:vap1}.\nIt is simpler than previous proofs, and in Section~\\ref{sxn:gapBS} we will\nsee that it generalizes to cases when the margin is measured with norms other\nthan~$\\ell_2$.\n\n\\begin{lemma} [VC Dimension; Upper bound; Hilbert Space]\n\\label{lem:vap1}\nIn a Hilbert space,\nthe VC dimension of a gap-tolerant classifier\nwhose margin is $\\Delta$\nover a ball of radius $R$\ncan be bounded above by\n$\\lfloor \\frac{R^2}{\\Delta^2} \\rfloor + 
1$.\n\\end{lemma}\n\\begin{proof}\nSuppose the VC dimension is $n$. Then there exists a set of $n$\npoints $X = \\{x_1, \\dots, x_n\\}$ in $B(R)$ that can be completely\nshattered using gap-tolerant classifiers. We will consider two\ncases, first that $n$ is even, and then that $n$ is odd.\n\nFirst, assume that $n$ is even, i.e., that $n=2k$ for some positive\ninteger $k$. We apply the probabilistic method to obtain a upper\nbound on $n$. Note that for every set $S \\subseteq [n]$, the set\n$X_S := \\{x_i | i \\in S\\}$ can be separated from $X - X_S$ using a\ngap-tolerant classifier. Therefore the distance between the\ncentroids (respective centers of mass) of these two sets is greater\nor equal to $2\\Delta$. In particular, for each $S$ having $k=n\/2$\nelements,\n$$\n\\|\\frac{\\sum_{i \\in S}x_i}{k} - \\frac{\\sum_{i \\not \\in S}x_i}{k}\\|\n \\geq 2\\Delta .\n$$\nSuppose now that $S$ is chosen uniformly at random from the ${n\n\\choose k}$ sets of size $k$. Then,\n\\begin{eqnarray*}\n4 \\Delta^2\n &\\leq& \\EE\\left[\\|\\frac{\\sum_{i \\in S}x_i}{k} - \\frac{\\sum_{i \\not \\in S}x_i}{k}\\|^2\\right] \\\\\n &=& k^{-2}\\left\\{\\frac{2k+1}{2k}\\sum_{i=1}^n \\|x_i\\|^2 - \\frac{\\|\\sum_1^n x_i\\|^2}{2k} \\right\\} \\\\\n &\\leq& \\frac{4(n+1)}{n^2} R^2 .\n\\end{eqnarray*}\nTherefore,\n\\begin{eqnarray*}\n\\Delta^2\n &\\leq& \\frac{n+1}{n^2} R^2 \\\\\n &<& \\frac{R^2}{n-1}\n\\end{eqnarray*}\nand so\n$$\nn < \\frac{R^2}{\\Delta^2} + 1 .\n$$\n\nNext, assume that $n$ is odd. We perform a similar calculation for\n$n = 2k+1$. As before, we average over all sets $S$ of cardinality\n$k$ the squared distance between the centroid of $X_S$ and the\ncentroid (center of mass) of $X-X_S$. Proceeding as before,\n\\begin{eqnarray*}\n4\\Delta^2\n & \\leq & \\EE\\left[\\|\\frac{\\sum_{i \\in S}x_i}{k} - \\frac{\\sum_{i \\not \\in S}x_i}{k+1}\\|^2\\right] \\\\\n & = & \\frac{\\sum_{i=1}^n \\|x_i\\|^2 (1 + \\frac{1}{2n}) - \\frac{1}{2n}\\|\\sum_{1 \\leq i \\leq n} x_i\\|^2}{k(k+1)} \\\\\n & \\leq & \\frac{\\sum_{i=1}^n \\|x_i\\|^2 (1 + \\frac{1}{2n})}{k(k+1)} \\\\\n & = & \\frac{4k+3}{2k(2k+1)(k+1)}\\{(2k+1)R^2\\} \\\\\n & < & \\frac{4R^2}{n-1} .\n\\end{eqnarray*}\nTherefore, $n < \\frac{R^2}{\\Delta^2} + 1$.\n\\end{proof}\n\n\n\\subsection{Learning with heavy-tailed data: proof of Theorem~\\ref{thm:learn_heavytail}}\n\\label{sxn:gapHS_HTapp}\n\n\n\\begin{proof}\nFor a random data sample $x$,\n \\beq \\E \\|\\x\\|^2 & \\leq & \\sum_{i=1}^n\n(C i^{-\\a})^2 \\\\ & \\leq & C^2 \\zeta(2 \\a),\\eeq\nwhere $\\zeta$ is the Riemann zeta function.\nThe theorem then follows from Theorem~\\ref{thm:margin_HS}.\n\\end{proof}\n\n\n\\subsection{Learning with spectral kernels: proof of Theorem~\\ref{thm:learn_spectral}}\n\\label{sxn:gapHS_SKapp}\n\n\n\\begin{proof}\nA Diffusion Map for the graph $G=(V, E)$ is the feature map that\nassociates with a vertex $x$, the feature vector $\\x = (\\l_1^\\a\nf_1(x), \\dots, \\l_m^\\a f_m(x))$, when the eigenfunctions\ncorresponding to the top $m$ eigenvalues are chosen. Let $\\mu$ be\nthe uniform distribution on $V$ and $|V| = n$. 
We note that if the\n$f_j$ are normalized eigenfunctions, \\emph{i.e.}, $\\forall j, \\sum_{x \\in V}\nf_j(x)^2 = 1,$\n\\beq\n\\E \\|\\x\\|^2 & = & \\frac{\\sum_{i=1}^m {\\l_i^{2\\a}}}{n} \\leq \\frac{\\sum_{i=1}^n {\\l_i^{2\\a}}}{n} \\leq 1.\n\\eeq\nThe above inequality holds because\nthe eigenvalues have magnitudes that are less than or equal to $1$:\n$$1 = \\l_1 \\geq \\dots \\geq \\l_n \\geq -1.$$\nThe theorem then follows from Theorem~\\ref{thm:margin_HS}.\n\\end{proof}\n\n\n\n\\section{Gap-tolerant classifiers in Banach spaces}\n\\label{sxn:gapBS}\n\nIn this section, we state and prove Theorem~\\ref{thm:ddb_BS}, our main result\nregarding an upper bound for the annealed entropy of gap-tolerant classifiers\nin a Banach space.\nWe start in Section~\\ref{sxn:gapBS_prelim} with some technical preliminaries;\nthen in Section~\\ref{sxn:gapBS_vcdim} we bound the VC dimension of\ngap-tolerant classifiers in Banach spaces over a ball of radius $R$; and\nfinally in Section~\\ref{sxn:gapBS_annent} we prove Theorem~\\ref{thm:ddb_BS}.\nWe include this result for completeness since it is of theoretical interest;\nsince it follows using similar methods to the analogous results for Hilbert\nspaces that we presented in Section~\\ref{sxn:gapHS}; and since this result\nis of potential practical interest in cases where modeling the relationship\nbetween data elements as a dot product in a Hilbert space is too\nrestrictive, \\emph{e.g.}, when the data are extremely sparse and\nheavy-tailed.\nFor recent work in machine learning on Banach spaces,\nsee~\\cite{DerLee,MP04,Men02,ZXZ09}.\n\n\n\\subsection{Technical preliminaries}\n\\label{sxn:gapBS_prelim}\n\n\nRecall the definition of a Banach space from Definition~\\ref{def:rad} above.\nWe next state the following form of the Chernoff bound, which we will use in\nthe proof of Lemma~\\ref{lem:banach_ub} below.\n\n\\begin{lemma} [Chernoff Bound]\n\\label{lem:chernoff}\nLet $X_1, \\dots, X_n$ be discrete independent random variables such that\n$\\EE[X_i]=0$ for all $i$ and $|X_i| \\leq 1$ for all $i$.\nLet $X = \\sum_{i=1}^n X_i$ and let $\\sigma^2$ be the variance of $X$.\nThen\n$$\n\\p[|X| \\geq \\lambda \\sigma] \\leq 2e^{-\\lambda^2\/4}\n$$\nfor any $0 \\leq \\lambda \\leq 2\\sigma$.\n\\end{lemma}\n\n\n\\subsection{Bounds on the VC dimension of gap-tolerant classifiers in Banach spaces}\n\\label{sxn:gapBS_vcdim}\n\n\nThe idea underlying our new proof of Lemma~\\ref{lem:vap1} (of\nSection~\\ref{sxn:gapHS_vcdim}, which provides an upper bound on the\nVC dimension of a gap-tolerant classifier in Hilbert spaces) generalizes to\nthe case when the gap is measured in more general Banach spaces.\nWe state the following lemma for a Banach space of type $p$ with type\nconstant $T$.\nRecall, \\emph{e.g.}, that $\\ell_p$ for $p \\geq 1$ is a Banach space of type\n$\\min(2, p)$ and type constant $1$.\n\n\\begin{lemma} [VC Dimension; Upper bound; Banach Space]\n\\label{lem:banach_ub}\nIn a Banach space of type $p$ and type constant $T$,\nthe VC dimension of a gap-tolerant classifier\nwhose margin is $\\Delta$\nover a ball of radius $R$\ncan be bounded above by\n$ \\left(\\frac{3T R}{\\Delta}\\right)^{\\frac{p}{p-1}} + 64$.\n\\end{lemma}\n\\begin{proof}\nSince a general Banach space does not possess an inner product, the proof\nof Lemma~\\ref{lem:vap1} needs to be modified here.\nTo circumvent this difficulty, we use Inequality~(\\ref{ineq:rad})\ndetermining the Rademacher type of $\\mathcal{B}$.\nThis, while permitting greater generality, provides weaker\n
bounds than\npreviously obtained in the Euclidean case.\nNote that if $\\mu :=\\frac{1}{n}\\sum_{i=1}^n x_i$, then by repeated application of the\nTriangle Inequality,\n\\begin{eqnarray*}\n \\|x_i -\n\\mu\\| & \\leq & (1-\\frac{1}{n})\\|x_i\\| + \\sum_{j\\neq i} \\frac{\\|x_j\\|}{n}\\\\\n& < & 2 \\sup_i \\|x_i\\|.\n\\end{eqnarray*}\nThis shows that if we start with $x_1, \\dots, x_n$ having norm $\\leq\nR$, $\\|x_i - \\mu\\| \\leq 2R$ for all $i$. The property of being\nshattered by gap-tolerant classifiers is translation invariant.\nThen, for $ \\emptyset \\subsetneq S \\subsetneq [n]$, it can be\nverified that\n\\begin{eqnarray}\n\\nonumber\n2\\Delta\n &\\leq& \\left\\| \\frac{\\sum_{i \\in S} (x_i-\\mu)}{|S|} - \\frac{\\sum_{i \\not\\in S} (x_i-\\mu)}{n - |S|} \\right\\| \\\\\n &=& \\frac{n}{2|S|(n-|S|)}\\left\\|\\sum_{i \\in S} (x_i -\\mu) - \\sum_{i \\not\\in S} (x_i - \\mu)\\right\\| .\n\\label{eleven}\n\\end{eqnarray}\nThe Rademacher Inequality states that\n\\beq\\label{ineq:rad}\\EE_\\epsilon[\\|\\sum_{i=1}^n \\epsilon_i x_i\\|^p]\n\\leq T^p \\sum_{i=1}^n \\|x_i\\|^p.\\eeq Using the version of Chernoff's\nbound in Lemma~\\ref{lem:chernoff} \\beq \\label{twelve}\n\\p[|\\sum_{i=1}^n \\epsilon_i| \\leq \\lambda \\sqrt{n}] \\geq 1- 2\ne^{-\\lambda^2\/4}. \\eeq We shall denote the above event by\n$E_{\\lambda}$. Now, let $x_1, \\dots, x_n$ be $n$ points in\n$\\mathcal{B}$ with a norm less or equal to $R$. Let $\\mu =\n\\frac{\\sum_{i=1}^n x_i}{n}$ as before.\n\\begin{eqnarray*}\n 2^pT^p n R^p & \\geq & 2^pT^p \\sum_{i=1}^n \\|x_i\\|^p\\\\\n & \\geq & T^p \\sum_{i=1}^n \\|x_i - \\mu\\|^p\\\\\n & \\geq & \\EE_\\epsilon [\\|\\epsilon_i (x_i-\\mu)\\|^p]\\\\\n & \\geq & \\EE_\\epsilon [\\|\\epsilon_i (x_i-\\mu)\\|^p| E_{\\lambda}]\\,\\,\\p[E_{\\lambda}]\\\\\n & \\geq & \\EE_\\epsilon [(n - \\lambda^2)^p(2\\Delta)^p (1 - 2\n e^{-\\lambda^2\/4})]\\,\\,\n\\end{eqnarray*}\nThe last inequality follows from (\\ref{eleven}) and (\\ref{twelve}).\nWe infer from the preceding sequence of inequalities that\n$$n^{p-1} \\leq 2^p T^p \\left(\\frac{R}{\\Delta}\\right)^p\n\\left\\{(1-\\frac{\\lambda^2}{n})^p\n(1-2e^{-\\lambda^2\/4})\\right\\}^{-1}.$$ The above is true for any\n$\\lambda \\in (0, 2\\sqrt{n})$, by the conditions in the Chernoff\nbound stated in Lemma~\\ref{lem:chernoff}. 
If $n \\geq 64$, choosing\n$\\lambda$ equal to $8$ gives us $n^{p-1} \\leq 3^p T^p\n\\left(\\frac{R}{\\Delta}\\right)^p.$ Therefore, it is always true that\n$n \\leq \\left(\\frac{3TR}{\\Delta}\\right)^\\frac{p}{p-1} + 64.$\n\\end{proof}\n\n\nFinally, for completeness, we next state a lower bound for VC dimension of\ngap-tolerant classifiers when the margin is measured in a norm that is\nassociated with a Banach space of type $p \\in (1, 2]$.\nSince we are interested only in a lower bound, we consider the special case\nof $\\ell_p^n$.\nNote that this argument does not immediately generalize to Banach spaces of\nhigher type because for $p >2$, $\\ell_p$ has type $2$.\n\n\\begin{lemma} [VC Dimension; Lower Bound; Banach Space]\n\\label{lem:banach_lb}\nFor each $p \\in (1, 2]$, there\nexists a Banach space of type $p$ such that the VC dimension of gap-tolerant\nclassifiers with gap $\\Delta$ over a ball of radius $R$ is\ngreater or equal to\n$$\\left(\\frac{R}{\\Delta}\\right)^{\\frac{p}{p-1}}.$$ Further, this\nbound is achieved when the space is $\\ell_p$.\n\\end{lemma}\n\\begin{proof}\nWe shall show that the first $n$ unit norm basis vectors in the\ncanonical basis can be shattered using gap-tolerant classifiers,\nwhere $\\Delta = n^{\\frac{1-p}{p}}$. Therefore in this case, the VC\ndimension is $\\geq (\\frac{R}{\\Delta})^\\frac{p}{p-1}$. Let $e_j$ be\nthe $j^{th}$ basis vector. In order to prove that the set $\\{e_1,\n\\dots, e_n\\}$ is shattered, due to symmetry under permutations, it\nsuffices to prove that for each $k$, $\\{e_1, \\dots, e_k\\}$ can be\nseparated from $\\{e_{k+1}, \\dots, e_{n}\\}$ using a gap-tolerant\nclassifier. Points in $\\ell_p$ are infinite sequences $(x_1, \\dots\n)$ of finite $\\ell_p$ norm. Consider the hyperplane $H$ defined by\n$\\sum_{i=1}^k x_i - \\sum_{i=k+1}^n x_i = 0$. Clearly, it separates\nthe sets in question. We may assume $e_j$ to be $e_1$, replacing if\nnecessary, $k$ by $n-k$. Let $x = \\inf_{y \\in H} \\|e_1 - y\\|_p.$\nClearly, all coordinates $x_{n+1}, \\dots $ of $x$ are $0$. In order\nto get a lower bound on the $\\ell_p$\ndistance, we use the power-mean inequality:\nIf $p \\geq 1$, and $x_1, \\dots, x_n \\in \\mathbb{R}$,\n$$\\left(\\frac{\\sum_{i=1}^n |x_i|^p}{n}\\right)^{\\frac{1}{p}} \\geq\n\\frac{\\sum _{i=1}^n |x_i|}{n}.$$ This implies that\n\\begin{eqnarray*}\n\\|e_1 -x\\|_p\n & \\geq & n^{\\frac{1-p}{p}} \\|e_1 - x\\|_1 \\\\\n & = & n^{\\frac{1-p}{p}}\\left(|1-x_1| + \\sum_{i=2}^n |x_i|\\right) \\\\\n & \\geq & n^{\\frac{1-p}{p}}\\left(1 - \\sum_{i=1}^k x_i + \\sum_{i=k+1}^n x_i\\right) \\\\\n & = & n^{\\frac{1-p}{p}} .\n\\end{eqnarray*}\nFor $p>2$, the type of $\\ell_p$ is $2$~\\cite{LedouxTalagrand}. 
Since\n$\\frac{p}{p-1}$ is a decreasing function of $p$ in this regime, we\ndo not recover any useful bounds.\n\\end{proof}\n\n\n\\subsection{Bound on the annealed entropy of gap-tolerant classifiers in Banach spaces}\n\\label{sxn:gapBS_annent}\n\nThe following theorem is our main result regarding an upper bound for\nthe annealed entropy of gap-tolerant classifiers in Banach spaces.\nNote that the $\\ell_2$ bound provided by this theorem is slightly weaker\nthan that provided by Theorem~\\ref{thm:ddb_HS}.\nNote also that it may seem counter-intuitive that in the case of $\\ell_2$\n(\\emph{i.e.}, when we set $\\gamma = 2$), the dependence of $\\Delta$ is\n$\\Delta^{-1}$, which is weaker than in the VC bound, where it is\n$\\Delta^{-2}$.\nThe explanation is that the bound on annealed entropy here depends on the\nnumber of samples $\\ell$, while the VC dimension does not.\nTherefore, the weaker dependence on $\\Delta$ is compensated for by a term\nthat in fact tends to $\\infty$ as the number of samples~$\\ell \\ra \\infty$.\n\n\\begin{theorem} [Annealed entropy; Upper bound; Banach Space]\n\\label{thm:ddb_BS}\nLet $\\PP$ be a probability measure on a Banach space $\\BB$ of type $p$ and\ntype constant $T$.\nLet $\\gamma, \\Delta > 0$, and let $\\eta = \\frac{p}{p + \\gamma(p-1)}$.\nIf $\\EE_{\\mathcal{P}} \\|x\\|^\\gamma = r^\\gamma< \\infty$, then\nthe annealed entropy of gap-tolerant classifiers in $\\BB$,\nwhere the gap is $\\Delta$, is\n$$\n\\Hnl(\\ell)\n \\leq\n \\left(\\eta^{-\\eta}(1-\\eta)^{-1+\\eta} \\left(\\frac{\\ell}{\\ln(\\ell+1)}\n \\left(\\frac{3Tr}{\\Delta}\\right)^\\gamma\\right)^\\eta + 64\\right)\n \\ln(\\ell+1) .\n$$\n\\end{theorem}\n\\begin{proof}\nThe proof of this theorem parallels that of Theorem~\\ref{thm:ddb_HS}, except\nthat here we use Lemma~\\ref{lem:banach_ub} instead of Lemma~\\ref{lem:vap1}.\nWe include the full proof for completeness.\nLet $\\ell$ independent, identically distributed (i.i.d) samples\n$z_1, \\dots, z_\\ell$ be chosen from $\\mathcal{P}$.\nWe partition them into two classes:\n$$X = \\{x_1, \\dots, x_{\\ell-k}\\} := \\{z_i\\,\\, |\\,\\, \\|z_i\\| > R\\},$$ and\n$$Y = \\{y_1, \\dots, y_k\\} := \\{z_i \\,\\,|\\,\\, \\|z_i\\| \\leq R\\}.$$\nOur objective is to bound from above the annealed entropy\n$\\Hnl(\\ell)=\\ln\\EE[N^\\Lambda(z_1, \\dots, z_\\ell)]$.\nBy Lemma~\\ref{lem:submul}, $N^{\\Lambda}$ is sub-multiplicative.\nTherefore,\n$$N^{\\Lambda}(z_1, \\dots, z_\\ell) \\leq N^{\\Lambda}(x_1, \\dots,\nx_{\\ell-k}) N^{\\Lambda}(y_1, \\dots, y_k).$$ Taking an expectation\nover $\\ell$ i.i.d samples from $\\mathcal{P}$,\n$$\\EE[N^{\\Lambda}(z_1, \\dots, z_\\ell)] \\leq \\EE[N^{\\Lambda}(x_1, \\dots, x_{\\ell-k})N^{\\Lambda}(y_1, \\dots,\ny_k)].$$\nNow applying Lemma~\\ref{lem:banach_ub}, we see that\n$$\n\\EE[N^{\\Lambda}(z_1, \\dots, z_\\ell)]\n \\leq \\EE[N^{\\Lambda}(x_1, \\dots, x_{\\ell -k})(k+1)^{(3TR\/\\Delta)^{\\frac{p}{p-1}}+ 64}] .\n$$\nMoving $(k+1)^{((2+o(1)TR\/\\Delta)^{\\frac{p}{p-1}})}$ outside this expression,\n$$\n\\EE[N^{\\Lambda}(z_1, \\dots, z_\\ell)]\n \\leq \\EE[N^{\\Lambda}(x_1, \\dots, x_{\\ell -k})](k+1)^{(3TR\/\\Delta)^{\\frac{p}{p-1}}+ 64} .\n$$\nNote that $N^{\\Lambda}(x_1, \\dots, x_{\\ell-k})$ is always bounded above by\n$2^{\\ell-k}$ and that the random variables $\\I[E_i [\\|x_i\\| > R]]$ are i.i.d.\nLet $\\rho = \\p[{\\|x_i\\| > R}]$, and note that\n$\\ell-k$ is the sum of $\\ell$ independent Bernoulli variables.\nMoreover, by Markov's inequality,\n$$\n\\p[\\|z_i\\|>R] \\,\\, \\leq \\,\\, 
\\frac{\\EE[\\|z_i\\|^\\gamma]}{R^\\gamma} ,\n$$\nand therefore $\\rho \\leq (\\frac{r}{R})^\\gamma$.\nIn addition,\n $$\\EE[N^{\\Lambda}(x_1, \\dots, x_{\\ell-k})] \\leq \\EE[2^{\\ell-k}].$$\n Let $I[\\cdot]$ denote an indicator variable. $\\EE[2^{\\ell-k}]$ can be written as\n $$\\prod_{i=1}^\\ell \\EE[2^{I[\\|z_i\\|> R]}] = (1+\\rho)^\\ell \\leq\n e^{\\rho \\ell}.$$\nPutting everything together, we see that\n\\beq\n\\label{eqn:jensen2}\n\\EE[N^{\\Lambda}(z_1, \\dots, z_\\ell)]\n \\leq \\exp\\left(\\ell \\left(\\frac{r}{R}\\right)^{\\gamma} + \\ln(k+1)\\left(64 + \\frac{3T R}{\\Delta}\\right)^{\\frac{p}{p-1}}\\right) .\n\\eeq\nBy setting $\\eta :=\n\\frac{p}{\\gamma(p-1) + p},$ and adjusting $R$ so that $$\\ell\n\\left(\\frac{r}{R}\\right)^\\gamma \\eta^{-1} = (1-\n\\eta)^{-1}\\ln(\\ell+1)\\left(\\frac{3TR}{\\Delta}\\right)^{\\frac{p}{p-1}}.$$\nWe see that\n\\begin{eqnarray*}\n\\ell \\left(\\frac{r}{R}\\right)^\\gamma +\n\\left(\\frac{3TR}{\\Delta}\\right)^{\\frac{p}{p-1}}\n &=& \\left(\\ell \\left(\\frac{r}{R}\\right)^\\gamma \\eta^{-1}\\right)^\\eta\n \\left((1- \\eta)^{-1}\\ln(\\ell+1)\n \\left(\\frac{3TR}{\\Delta}\\right)^{\\frac{p}{p-1}}\\right)^{1-\\eta} \\\\\n &=& \\eta^{-\\eta}(1-\\eta)^{-1+\\eta}\n \\left(\\ell\\left(\\frac{3Tr}{\\Delta}\\right)^\\gamma\\right)^\\eta .\n\\end{eqnarray*}\nThus, it follows that\n\\begin{eqnarray*}\n\\Hnl(\\ell)\n &=& \\log \\EE\\left[N^{\\Lambda}(z_1, \\dots, z_\\ell)\\right] \\\\\n &\\leq& \\left(\\eta^{-\\eta}(1-\\eta)^{-1+\\eta}\n \\left(\\frac{\\ell}{\\ln(\\ell+1)}\n \\left(\\frac{3Tr}{\\Delta}\\right)^\\gamma\\right)^\\eta + 64\\right)\n \\ln(\\ell+1) .\n\\end{eqnarray*}\n\\end{proof}\n\n\n\n\n\\section{Discussion}\n\\label{sxn:discussion}\n\nIn recent years, there has been a considerable amount of somewhat-related\ntechnical work in a variety of settings in machine learning.\nThus, in this section we will briefly describe some of the more technical\ncomponents of our results in light of the existing related literature.\n\\begin{itemize}\n\\item\nTechniques based on the use of Rademacher inequalities allow one to obtain\nbounds without any assumption on the input distribution as long as the\nfeature maps are uniformly bounded.\nSee, \\emph{e.g.},~\\cite{Gurvits,KP00,BartMen02,Kol01}.\nViewed from this perspective, our results are interesting because the\nuniform boundedness assumption is not satisfied in either of the two\nsettings we consider, although those settings are ubiquitous in\napplications.\nIn the case of heavy-tailed data, the uniform boundedness assumption is not\nsatisfied due to the slow decay of the tail and the large variability of\nthe associated features.\nIn the case of spectral learning, uniform boundedness assumption is not\nsatisfied since for arbitrary graphs one can have localization and thus\nlarge variability in the entries of the eigenvectors defining the feature\nmaps.\nIn both case, existing techniques based on Rademacher inequalities or\nVC dimensions fail to give interesting results, but we show that\ndimension-independent bounds can be achieved by bounding the annealed\nentropy.\n\\item\nA great deal of work has focused on using diffusion-based and spectral-based\nmethods for nonlinear dimensionality reduction and the learning a nonlinear\nmanifold from which the data are assumed to be drawn~\\cite{SWHSL06}.\nThese results are very different from the type of learning bounds we\nconsider here.\nFor instance, most of those learning results involve convergence to an\nhypothesized manifold Laplacian and not of 
learning process itself, which\nis what we consider here.\n\\item\nWork by Bousquet and Elisseeff~\\cite{BE02} has focused on establishing\ngeneralization bounds based on stability.\nIt is important to note that their results assume a given algorithm and\nshow how the generalization error changes when the data are changed, so\nthey get generalization results for a given algorithm.\nOur results make no such assumptions about working with a given algorithm.\n\\item\nGurvits~\\cite{Gurvits} has used Rademacher complexities to prove upper bounds\nfor the sample complexity of learning bounded linear functionals on\n$\\ell_p$ balls.\nThe results in that paper can be used to derive an upper bound on the VC\ndimension of gap-tolerant classifiers with margin $\\Delta$ in a ball of\nradius $R$ in a Banach space of Rademacher type $p \\in (1, 2]$.\nConstants were not computed in that paper, therefore our results do not\nfollow.\nMoreover, our paper contains results on distribution specific bounds which\nwere not considered there.\nFinally, our paper considers the application of these tools to the\npractically-important settings of spectral kernels and heavy-tailed data\nthat were not considered there.\n\\end{itemize}\n\n\n\\section{Conclusion}\n\\label{sxn:conclusion}\n\nWe have considered two simple machine learning problems motivated by recent\nwork in large-scale data analysis, and we have shown that although\ntraditional distribution-independent methods based on the VC-dimension fail\nto yield nontrivial sampling complexity bounds, we can use\ndistribution-dependent methods to obtain dimension-independent learning\nbounds.\nIn both cases, we take advantage of the fact that, although there may be\nindividual data points that are ``outlying,'' in aggregate their effect is\nnot too large.\nDue to the increased popularity of vector space-based methods (as opposed\nto more purely combinatorial methods) in machine learning in recent years,\ncoupled with the continued generation of noisy and poorly-structured data,\nthe tools we have introduced are likely promising more generally for\nunderstanding the effect of noise and noisy data on popular machine\nlearning tasks.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{intro}\n\\blfootnote{\n \n %\n \n \n \n \n \n \n \n %\n \\hspace{-0.65cm} \n This work is licensed under a Creative Commons \n Attribution 4.0 International License.\n License details:\n \\url{http:\/\/creativecommons.org\/licenses\/by\/4.0\/}.\n}\n\nRecently, commonsense reasoning tasks~\\cite{zellers2018swag,talmor2018commonsenseqa,lin2019comgen} have been proposed to investigate the ability of machines to make acceptable and logical inferences about ordinary scenes in our daily life. Both SWAG~\\cite{zellers2018swag} and CommonsenseQA~\\cite{talmor2018commonsenseqa} present a piece of text (an event description or a question) together with several choices (subsequent events or answers), and the system is asked to choose the correct option based on the context. Different from these two discriminative tasks, CommonGen~\\cite{lin2019comgen} moves to a generation setting. It requires the system to construct a logical sentence based on several concepts related to a specific scenario. \n\n\n\n\nThe task of text generation from given concepts are challenging in two ways. First, the sentence needs to be grammatically sound with the constraints of including given concepts. Second, the sentence needs to be correct in terms of our common knowledge. 
Existing approaches apply pretrained encoder-decoder models~\\cite{lewis2019bart,bao2020unilmv2} for description construction, and the concepts are modeled as constraints that guide the generation process. Sentences generated by these models are fluent; however, the output may violate commonsense. An example is shown in Table~\\ref{comparison}. The model \\emph{BART} generates a sentence containing ``guitar sits\" which is incorrect. This demonstrates that the language model alone is not able to determine a rational relationship between the concepts. \n\n\\begin{table}[!th]\n\\begin{center}\n\\begin{tabular}{l|l|l}\n\\midrule[1.0pt]\n\\emph{Concepts} &front, guitar, microphone, sit &ear, feel, pain, pierce \\\\\n\\midrule[1.0pt]\n\\multirow{2}{*}{\\emph{BART}} &\\underline{guitar sits} in front of a microphone &I can feel the pain in my ears and \\underline{feel} \\\\\n&in the front. &\\underline{the pierce} in my neck from the piercing. \\\\\n\\midrule[0.5pt]\n\\multirow{2}{*}{\\emph{Prototype}} &A singer performed the song standing in &He expresses severe pain as he tries \\\\\n&front of the audiences while playing guitar. &to pierce his hand.\\\\\n\\midrule[0.5pt]\n\\emph{BART+}&A singer sitting in front of the audiences &He expresses severe pain as he \\\\\n\\emph{Prototype}&while playing guitar. &pierces his ear.\\\\\n\\midrule[1.0pt]\n\\end{tabular}\n\\end{center}\n\\caption{Example of \\emph{BART}, \\emph{Prototype} and \\emph{BART+Prototype}.}\n\\label{comparison}\n\\end{table}\n\nIn order to enrich the source information and bridge the semantic gap between source and target, we argue that external knowledge related to the scene of the given concepts is needed to determine the relationships between the concepts. Motivated by retrieve-and-generate frameworks for text generation~\\cite{song2016two,hashimoto2018retrieve}, we retrieve prototypes for the concepts from external corpora as scene knowledge and construct sentences by editing these prototypes. The prototype introduces scenario knowledge that compensates for the difficulty the language model has in finding reasonable concept combinations on its own. Furthermore, prototypes can provide key concepts that are missing from the given concept set, such as ``singer'' in the first example of Table~\\ref{comparison}, and thereby help complete a natural and coherent scenario.\n\nIn order to better utilize the prototypes, we propose two additional modules on top of the pretrained encoder-decoder model, both guided by the given concepts. First, considering that tokens in the prototype contribute differently to sentence generation, a \\emph{scaling module} is introduced to assign weights to the prototype tokens. Second, since tokens closer to the concept words in the prototype tend to be more important for scene description generation, a \\emph{position indicator} is proposed to mark the relative position of the different tokens in the prototype. The main contributions of this work are threefold. 1) We propose a retrieve-and-edit framework, \\textbf{E}nhanced \\textbf{K}nowledge \\textbf{I}njection \\textbf{BART}, for the task of commonsense generation. 2) We combine the two modules, the scaling module and the prototype position indicator, to better utilize the scenario knowledge of the prototype. 
3) We conduct experiments on the CommonGen benchmark, and the results show that our method achieves significant improvements when using both in-domain and out-of-domain plain-text datasets as external knowledge sources.\n\n\\section{Model}\n\nIn this section, we introduce our retrieve-and-generate framework \\emph{EKI-BART}, denoted $G_{\\theta}$ with parameters $\\theta$, which retrieves a prototype $\\mathcal{O}=(o_{1},o_{2},\\cdots,o_{n_{o}})$ from an external text knowledge corpus and extracts the prototype knowledge under the guidance of the concepts $\\mathcal{C}=(c_{1},\\cdots,c_{n_{c}})$ to improve the commonsense generation of the target $\\mathcal{T}=(t_{1},\\cdots,t_{n_{t}})$.\nThe overall framework of our proposed model is shown in Figure~\\ref{framework}.\n\n\\subsection{Pretrained Encoder-Decoder}\nThe pretrained encoder-decoder model \\emph{BART}~\\cite{lewis2019bart} follows the Transformer architecture. The encoder is a stack of encoder layers, each composed of a self-attention network and a feed-forward network. The input sequence is encoded into a hidden state sequence $\\mathcal{H}^{e}=(h^{e}_{1},\\cdots,h^{e}_{n_{h}})$. The decoder is likewise a stack of decoder layers; the key difference between an encoder layer and a decoder layer is that the latter contains an encoder-decoder attention module between the self-attention and the feed-forward network. In each encoder-decoder attention module, the decoder representation $h^{d}_{u}$ attends to $\\mathcal{H}^{e}$ following Equation~\\ref{raw-decoder-encoder-attention}.\n\\begin{align}\n &s_{x}(h^{d}_{u}, h^{e}_{v})=(W_{x,q}h^{d}_{u})^{T}(W_{x,k}h^{e}_{v})\\big\/\\sqrt{d_k} \\nonumber\\\\\n &a_{x}=softmax\\big(s_{x}(h^{d}_{u},h^{e}_{1}),\\cdots,s_{x}(h^{d}_{u},h^{e}_{n_{h}})\\big) \\nonumber \\\\\n &\\hat{h}^{d}_{u}=W_{o}\\big[W_{1,v}\\mathcal{H}^{e}a_{1},\\cdots,W_{X,v}\\mathcal{H}^{e}a_{X}\\big] \\nonumber \\\\\n &h^{d}_{u}=LN\\big(h^{d}_{u}+\\hat{h}^{d}_{u}\\big)\n ~\\label{raw-decoder-encoder-attention}\n\\end{align}\nwhere $x$ denotes the $x$th attention head, $\\{W_{x,q},W_{x,k},W_{x,v}\\}\\in \\mathbb{R}^{d_{k}\\times d}$ are trainable parameters for query, key and value, $d$ is the hidden size, $d_k$ is the attention head dimension, and $LN$ is the layer normalization function. Generally, a normalization operation is applied before the encoder output is obtained; in other words, the correlation between $h^{e}_{v}$ and $h^{d}_{u}$ mainly depends on the directions of $h^{e}_{v}$ and $h^{d}_{u}$.\n\n\\subsection{Model Input}\nFollowing the input setting of \\emph{BART}, we concatenate the provided concepts $\\mathcal{C}$ and the retrieved prototype $\\mathcal{O}$ into a single input $\\mathcal{S}$ that is fed into the pretrained model.\n\\begin{gather}\n \\mathcal{S}=\\big[\\mathcal{C},\\mathcal{O}\\big]=\\big[c_{1},\\cdots,c_{n_{c}},o_{1},\\cdots,o_{n_{o}}\\big]\n\\end{gather}\nwhere $\\big[\\cdot,\\cdot\\big]$ is the concatenation operation.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width = 4.5in]{images\/framework.png}\n\t\\caption{The framework of our proposed \\emph{EKI-BART}. $E_{B}$, $\\mathit{E}_{\\mathcal{C}}$, $\\mathit{E}_{\\mathcal{O}}$ and $\\mathit{E}_{D}$ are the embedding functions of the \\emph{BART} model, the concepts $\\mathcal{C}$, the prototype $\\mathcal{O}$ and the distances of the prototype position indicator, respectively. $s_{v}$ and $h^{e}_{v}$ are the $v$th input token and the corresponding \\emph{BART} encoder output. 
$\\mathcal{L}_{E}$ and $\\mathcal{L}_{D}$ are classification loss and the loss-likelihood loss, respectively. Refers to Table~\\ref{comparison} for the example in the framework.}\n\t\\label{framework}\n\\end{figure}\n\nIn our retrieve-and-generation framework, we need to modify the prototype $\\mathcal{O}$ to meet the requirement of $\\mathcal{C}$. To distinguish each token from $\\mathcal{O}$ or $\\mathcal{C}$, we add the group embedding on top of original \\emph{BART} embedding function as Equation~\\ref{EMB} shows.\n\\begin{align}\n \\mathit{E}(c_{j})=\\mathit{E}_{B}(c_{j})+\\mathit{E}_{\\mathcal{C}},\\\n \\mathit{E}(o_{k})=\\mathit{E}_{B}(o_{k})+\\mathit{E}_{\\mathcal{O}} \\label{EMB}\n\\end{align}\nwhere $\\mathit{E}_{B}$ stands for the original embedding function in \\emph{BART} including token embedding and position embedding, $\\mathit{E}_{\\mathcal{C}}$ and $\\mathit{E}_{\\mathcal{O}}$ are two group embedding for concepts $\\mathcal{C}$ and prototype ${\\mathcal{O}}$, and $\\mathit{E}$ is the final embedding function. \n\n\\subsection{Generation}\nThe prototype $\\mathcal{O}$ not only introduces scenario bias and effective additional concepts, but also brings noises into generation. In order to inject the retrieved knowledge into generation more effectively, we argue to extract the scenario knowledge of prototype in a more fine-grained manner. From Equation~\\ref{raw-decoder-encoder-attention}, we can see that each token in $\\mathcal{S}$ gets involved in encoder-decoder-attention with the encoder output $h^{e}_{v}$, thus we propose two mechanisms, namely, scaling module and prototype position indicator, to improve the generation.\n\n\n\\subsubsection{Encoder with Scaling Module}\nWe observe that noises and concept tokens both appear in the retrieved prototype, and these noises would dominate the generation.\nThe simplest solution is to utilize a hard mask, in other words, only keep those concept tokens in prototype and mask others, but the decoder would be no longer aware of the complete prototype scenario, and these effective additional concepts would be also unavailable. Instead of hard masking, we propose scaling module to assign scaling factor for input tokens which can be applied in encoder-decoder-attention, then the model is capable of receiving less noises and learn more from effective tokens.\n\nWe investigate the dot product based attention mechanism shown in Equation~\\ref{raw-decoder-encoder-attention}. Function $\\mathit{F}$ with a scaling factor $\\lambda$ on top of the normalized encoder output states $\\mathcal{H}$ is defined in Equation~\\ref{scale1},\n\\begin{align}\n F(\\lambda)&=S(h^{d}_{u}, \\lambda h^{e}_{v})=\\lambda \\Big(\\big(W_{q}h^{d}_{u}\\big)^{T}\\big(W_{k}h^{e}_{v}\\big)\\big\/\\sqrt{d_k}\\Big)=\\lambda S(h^{d}_{u}, h^{e}_{v})=\\lambda F(1) ~\\label{scale1}\n\\end{align}\nFrom Equation~\\ref{scale1}, we can see that when $\\big(W_{q}h^{d}_{u}\\big)^{T}\\big(W_{k}h^{e}_{v}\\big)$ is a large positive value or $h^{e}_{v}$ takes important attention weights in $h^{d}_{u}$, then $F(\\lambda)$ is a monotonically decreasing function. This inspires us to refine the representation of $h^{e}_{v}$ through $\\lambda$. Viewing $\\lambda$ as an importance factor, we are able to weaken\/strength $h^{e}_{v}$ in encoder-decoder-attention through decreasing\/increasing $\\lambda$. \n\n\n\n\n\n\n\n\nWith the awareness of the phenomenon in Equation~\\ref{scale1}, we devise a scaling module on the basis of Equation~\\ref{raw-decoder-encoder-attention}. 
\nIn practice, we attach a scaling module to the encoder, which can increase the norm if $h^{e}_{v}$ is likely to contribute to the generation and decrease when the $h^{e}_{v}$ has a conflict with concepts. Each channel of $h^{e}_{v}$ would be taken into account separately. This is accomplished with the following scaling module. The module is composed of \n\\begin{gather}\n \\Lambda=\\mathop{Sigmoid}\\Big(W_{2}ReLU\\big(W_{1}h^{e}_{v}+b_{1}\\big)+b_{2}\\Big) \\nonumber \\\\\n h^{e}_{v}=h^{e}_{v}\\odot\\big(2\\times\\Lambda\\big) \\label{scale-output}\n\\end{gather}\nwhere $W_{1}\\in\\mathbb{R}^{d_{s}\\times d},W_{2}\\in\\mathbb{R}^{d\\times d_{s}},b_{1}\\in\\mathbb{R}^{d_{s}},b_{2}\\in\\mathbb{R}^{d}$ are trainable parameters in the scaling module.\n\nConsider that the parameters of pretrained encoder-decoder model have been optimized during pretraining, simply adding the parameter $\\Lambda$ may destroy the distribution of encoder output $\\mathcal{H}$ and leads to training failure. We try to initialize these parameters in scaling module with $N(0,var)$, where $var$ is a small value, then the output with sigmoid activation would gather around 0.5, and $2\\times$ would make them fall near 1.0. Thus in the beginning of training, the participation of scaling module would not lead to a mess.\n\nIn our knowledge, prototype tokens that co-occur in $\\mathcal{T}$ should be more important than others for the generation of $\\mathcal{T}$. We hope these prior knowledge could help the model to better discriminate the importance of these prototype tokens, thus we introduce an encoder classification task that requires the scaling module to determine which tokens would appear in the generated sentence. \n\\begin{gather}\n \\mathcal{L}_{E}=-\\sum_{s_{v}\\in\\mathcal{S}}\\Big(\\mathcal{I}_{\\{s_{v}\\in\\mathcal{T}\\}}\\log\\mathop{Mean}(\\Lambda_{v})+\\mathcal{I}_{\\{s_{v}\\notin\\mathcal{T}\\}}\\log\\big(1-\\mathop{Mean}(\\Lambda_{v})\\big)\\Big)\n\\end{gather}\nwhere $Mean$ is to get the mean value and $\\mathcal{I}$ is indicator function, $\\mathcal{I}_{\\{s_{v}\\in\\mathcal{T}\\}}=0$ if $s_{v}\\in\\mathcal{T}$ otherwise 1, so is $\\mathcal{I}_{\\{s_{v}\\notin\\mathcal{T}\\}}$.\n\n\n\\subsubsection{Decoder with Prototype Position Indicator}\nThese surrounding tokens of concept tokens in prototype $\\mathcal{O}$ tend to describe how these concepts interact with the complete scenario. We argue that informing the decoder of these relative positions would help decoder better learn effective scenario bias of the prototype $\\mathcal{O}$.\n\n\n\nBefore the computation of encoder-decoder-attention, we devise a position indicator function to assign positions to those tokens in prototype. First, we assign virtual positions to tokens in prototype $\\mathcal{O}$ in sequence, from 1 to $n_{o}$. Second, we pick up the positions of those concept tokens in prototype as multiple position centers. Third, for each token $o_{v}\\in\\mathcal{O}$, we compute the smallest distance from $o_{v}$ to those concept tokens. The process is shown in Equation~\\ref{dist_compute}.\n\\begin{gather}\n \\mathit{D}(s_{v})=min\\big\\{|v-p|,s_{p}=c,s_{p}\\in\\mathcal{O},c\\in \\mathcal{C}\\big\\}\\label{dist_compute}\n\\end{gather}\n\nOur inputs tokens are composed of prototype ones and concept ones. 
Considering the particularity of concept words $\\mathcal{C}$, we assign them with a default position value 0 and adjust the position indicator function of prototype tokens through adding one, the process is shown in Equation~\\ref{all_dist_compute}.\n\\begin{gather}\n\\mathit{D}(s_{v})=\\left\\{\\begin{array}{ll}\n \\mathit{D}(s_{v})+1 & s_{v}\\in \\mathcal{O} \\\\\n 0 & s_{v} \\in \\mathcal{C} \n\\end{array} \\right.\\label{all_dist_compute}\n\\end{gather}\n\nOn the basis of the prototype position indicator function in Equation~\\ref{all_dist_compute}, we add the information of relative position from each token itself to the closest concept token in prototype into encoder-decoder-attention through Equation~\\ref{new-encoder-decoder-attention}.\n\\begin{gather}\n \\mathit{ED}(h^{e}_{v})=\\mathit{E}_{\\mathit{D}}\\big(\\mathit{D}(s_{v})\\big)\\nonumber \\\\\n S(h^{d}_{u}, h^{e}_{v})=\\big(W_{q}h^{d}_{u}\\big)^{T}\\big(W_{k}h^{e}_{v}+\\mathit{ED}(h^{e}_{v})\\big)\\big\/\\sqrt{d_{k}}\n ~\\label{new-encoder-decoder-attention}\n\\end{gather}\nwhere $\\mathit{E}_{\\mathit{D}}$ is the embedding for those distance values in $\\mathit{D}$. \nThese prototype tokens that more close to the concept tokens are expected to receive more attention than other tokens.\n\n\\subsection{Training}\nThe objective of our model is to maximize the log-likelihood for $\\mathcal{T}$ given $\\mathcal{O}$ and $\\mathcal{C}$.\n\\begin{gather}\n \\mathcal{L}_{D}=-\\mathop{log}\\sum_{k}P(t_{k}|\\mathcal{O},\\mathcal{C},t_{ B\n\\end{cases}} \n}\n\t\\end{aligned}$ & $\\!\\begin{aligned}[c]\n\t& {\\texttt{$\\vdash$ $\\forall$ A B. \n\t\tD\\_INCLUSIVE\\_BEFORE A B =}}\\\\[-1\\jot]\n\t\t& {\\texttt{ ($\\lambda$s. if A s $\\leq$ B s then A s}}\\\\[-1\\jot] \n\t\t&{\\texttt{ else PosInf)\n}}\\end{aligned}$ \\\\ \\hline\n\\end{tabular}\n\\vspace{20pt}\n\\end{table}\n\n\\begin{figure}[!b]\n\\centering\n\\subfigure[OR]{\n \\makebox[0.2\\textwidth]{\n\\includegraphics[scale=0.55]{OR3.jpg}\n}}\n\\subfigure[AND]{\n \\makebox[0.18\\textwidth]{\n\\includegraphics[scale=0.55]{AND3.jpg}}}\n\\subfigure[FDEP]{\n \\makebox[0.18\\textwidth]{\n\\includegraphics[scale=0.55]{FDEP3.jpg}}}\n\\subfigure[PAND]{\n \\makebox[0.18\\textwidth]{\n\\includegraphics[scale=0.55]{PAND3.jpg}}}\n\\subfigure[Spare]{\n \\makebox[0.18\\textwidth]{\n\\includegraphics[scale=0.55]{SPARE3.jpg}}}\n\\caption{Fault Tree Gates}\n\\label{fig:DFT_Gates}\n\\end{figure}\n\n\n\\indent In \\cite{Merle-thesis}, the DFT gates, shown in Figure~\\ref{fig:DFT_Gates}, are modeled based on the time of failure of their output. For instance, the Functional DEPendency (FDEP) gate is used to model failure triggers of system components. The spare gate models spare parts in a system, where the spare ($X$) replaces a main part ($Y$) after its failure. In the general case, the failure distribution of the spare is attenuated by a dormancy factor from the active state. Therefore, in the DFT algebra, two variables are used to distinguish the spare in both its states; active ($X_{a}$) and dormant ($X_{d}$). Table~\\ref{table:gates} lists the definitions of these gates. In~\\cite{elderhalli2019probabilistic}, we provided the HOL formalization of these gates. However, to verify the probability of failure expression given in Table~\\ref{table:gates}, it is required first to define a \\texttt{DFT\\_event} to be used in the probabilistic analysis. This is formally defined as\\cite{elderhalli2019probabilistic}:\n\n\\begin{defn}\n\\small{\\texttt{$\\vdash\\forall$p X t. 
DFT\\_event p X t = \\{s | X s $\\scriptstyle\\leq$ Normal t\\} $\\cap$ p\\_space p} }\n\\end{defn}\n\n\n\\noindent where \\texttt{p} is a probability space. \\texttt{p\\_space} is a function that returns the space of \\texttt{p}. \\texttt{X} is the time to failure function that can represent inputs and outputs of DFT gates and \\texttt{t} is the time until which we are interested in finding the probability of failure. The type of \\texttt{t} is real, while the time to failure functions are of type \\texttt{extreal} and thus it is required to typecast \\texttt{t} to \\texttt{extreal} using the \\texttt{Normal} function.\nWe verified the probability of failure of all DFT gates based on this event and using their formal definitions, as given in Table~\\ref{table:gates}~\\cite{elderhalli2019probabilistic}.\\\\\n\n\n\\begin{table}[!t]\n\\centering\n\\small\n\\caption{DFT Gates Expressions and Probability of Failure}\n\\label{table:gates}\n\\begin{tabular}{|l|l|l|}\n\\hline\nGate & Mathematical Expression & Probability of Failure \\\\ \\hline \\hline\n {AND} & $\\!\\begin{aligned}[b]\n\t{{ X \\cdot Y = max (X,Y)\n} \n}\n\t\\end{aligned}$ &$\\!\\begin{aligned}[b]\n\t{{ F_{X}(t) \\times F_{Y}(t)\n} \n}\n\t\\end{aligned}$ \\\\ \\hline\n {OR } & $\\!\\begin{aligned}[b]\n\t{{ X + Y = min (X,Y)\n} \n}\n\t\\end{aligned}$ & $\\!\\begin{aligned}[b]\n\t{{ F_{X}(t) + F_{Y}(t) - F_{X}(t) \\times F_{Y}(t)\n} \n}\n\t\\end{aligned}$ \\\\ \\hline\n {PAND} & $\\!\\begin{aligned}[b]\n\t{{ Q_{PAND}= }{ { \n\t\\begin{cases} Y, & X \\leq Y\\\\ + \\infty, & X > Y\n\\end{cases}}} \n}\n\t\\end{aligned}$ & $\\!\\begin{aligned}[b]\n\t{{ \\int_{0}^{t}f_{Y}(y)\\ F_{X}(y)\\ dy}}\n\t\\end{aligned}$ \\\\ \\hline\n {FDEP} & $\\!\\begin{aligned}[b]\n\t{{ X + Y = min (X,Y)\n} \n}\n\t\\end{aligned}$ & $\\!\\begin{aligned}[b]\n\t{{ F_{X}(t) + F_{Y}(t) - F_{X}(t) \\times F_{Y}(t) }}\n\t\\end{aligned}$ \\\\ \\hline\nSpare & $\\!\\begin{aligned}[c]\n\t Q_{SP} = & Y\\cdot(X_{d} \\lhd Y)+ X_{a}\\cdot(Y \\lhd X_{a}) \\\\ & {+Y\\Delta X_{a}+Y\\Delta X_{d}\n} \n\n\t\\end{aligned}$ & $\\!\\begin{aligned}[c]\n\t&{ \\int_{0}^{t} \\Big{(}\\int_{v}^{t} f_{(X_{a}|Y=v)} (u) du \\Big{)} f_{Y}(v) dv +}\\\\[-2\\jot]&{ { \\int_{0}^{t} f_{Y}(u) F_{X_{d}}(u) du}} \n\t\\end{aligned}$ \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\n\\begin{figure}[!b]\n\\centering\n\\includegraphics[scale=0.55]{drive_by_wire2.jpg}\n\\caption{DFT of Drive-by-wire System}\n\\label{fig:dbw_dft}\n\\end{figure}\n\n\\indent As an example, we provide the details of analyzing the DFT of a drive-by-wire system (DBW) \\cite{altby2014design}, shown in Figure~\\ref{fig:dbw_dft}, to explain the required steps to use our formalized algebra. This system is used in modern vehicles to control its functionality using a computerized controller. We provide the reliability model of the brake and throttle subsystems. The throttle system fails due to the failure of the throttle ($TF$) or the engine ($EF$). The brake control unit ($BCU$) failure leads to the failure of this system. A spare gate is used to model the failure of a primary control unit ($PC$) with a warm spare ($SC$). 
Finally, the system can fail due to the failure of the throttle sensor ($TS$) or the brake sensor ($BS$).\\\\\n\n\n\\indent To formally conduct the analysis using our formalization, it is required first to express the function of the top event algebraically as:\n\n\\begin{minipage}{\\textwidth}\n\\center{{\\texttt{Q\\textsubscript{DBW} = (TF + EF) + BCU + WSP\\ PC\\ SC\\textsubscript{a}\\ SC\\textsubscript{d} + (TS + BS)}}}\\\\\n\\end{minipage}\n\\emph{}\\\\\n\n\\noindent Then, we create a \\texttt{DFT\\_event} for \\texttt{Q\\textsubscript{DBW}} as: \\texttt{DFT\\_event p Q\\textsubscript{DBW} t}, and verify that it equals the union of the individual DFT events, i.e.:\\\\\n\n\\noindent {\\texttt{DFT\\_event p TF t $\\cup$ DFT\\_event p EF t $\\cup$ DFT\\_event p BCU t $\\cup$ DFT\\_event p (WSP PC SC\\textsubscript{a} SC\\textsubscript{d}) t $\\cup$ DFT\\_event p TS t $\\cup$ DFT\\_event p BS t}}\\\\\n\n\n\\noindent Thus, we can use the probabilistic principle of inclusion and exclusion (PIE) \\cite{Merle-thesis} to verify the probability of failure of \\texttt{Q\\textsubscript{DBW}}. The probabilistic PIE expresses the probability of the union of events as the continuous summation and subtraction of the probabilities of combinations of intersection of events. \nThe DBW example is represented as the union of six events, therefore, applying the probabilistic PIE results in having 63 different terms in the final expression. We verify the probability of failure of the DBW as: \\\\\n\n\\begin{thm}\n\\label{thm:PROB_DBW}\n\\emph{}\\\\\n\\mbox{\\small{\\texttt{$\\vdash$ $\\forall$BS TS BCU PC SC$_{\\texttt{a}}$ SC$_{\\texttt{d}}$ EF TF p t f$_{\\texttt{PC}}$ f\\textsubscript{(SC\\textsubscript{a}|PC)} f\\textsubscript{SC\\textsubscript{a}PC}. 0 $\\leq$ t $\\wedge$ }}}\\\\\n\\mbox{\\small{\\texttt{dbw\\_event\\_req [BS; TS; BCU; PC; SC\\textsubscript{a}; SC\\textsubscript{d}; EF; TF] p t f$_{\\texttt{PC}}$ f\\textsubscript{(SC\\textsubscript{a}|PC)} f\\textsubscript{SC\\textsubscript{a}PC} $\\Rightarrow$}}}\\\\\n\\mbox{\\small{\\texttt{$\\bigg($prob p~(DFT\\_event p Q\\textsubscript{DBW} t) = }}}\\\\\n $\\!\\begin{aligned}[c]\n&\\small{\\texttt{F\\textsubscript{TF}(t)+F\\textsubscript{EF}(t)+F\\textsubscript{BCU}(t)+$\\bigg[\\int_{0}^{t}$f\\textsubscript{PC}(pc)$\\times\\big(\\int_{pc}^{t}$f\\textsubscript{(SC\\textsubscript{a}|PC=pc)}(sc\\textsubscript{a}) $d$sc\\textsubscript{a}$\\big)d$pc$\\bigg]$+F\\textsubscript{BS}(t)+F\\textsubscript{TS}}}\\\\[-2pt]\n&\\small{\\texttt{-...+...- F\\textsubscript{TF}(t)$\\times$F\\textsubscript{EF}(t)$\\times$F\\textsubscript{BCU}(t)$\\times$F\\textsubscript{BS}(t)$\\times$F\\textsubscript{TS}(t)$\\times$}} \\\\[-2pt]\n&\\small{\\texttt{$\\bigg[\\bigg(\\int_{0}^{t}$ f\\textsubscript{PC}(pc)$\\times\\bigg($ $\\int_{pc}^{t}$f\\textsubscript{(SC\\textsubscript{a}|PC=pc)}(sc\\textsubscript{a})$d$sc\\textsubscript{a}$\\bigg)d$pc$\\bigg)$+$\\int_{0}^{t}$f\\textsubscript{PC}(pc)$\\times$F\\textsubscript{SC\\textsubscript{d}}(pc)\\ $d$pc$\\bigg]\\bigg)$}}\n\\end{aligned}$ \n\\end{thm}\n\n\\noindent where \\texttt{dbw\\_event\\_req} ensures the required conditions for independence of the events and defines the conditional density functions with their proper conditions \\cite{ifm-short-code}. The first six terms in the conclusion of Theorem \\ref{thm:PROB_DBW} represent the probabilities of the six individual events of the union of the DBW. 
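\nTo see where the size of this expression comes from, the following short Python sketch (a plain illustration on our side, entirely independent of the HOL4 development) enumerates the signed terms produced by the probabilistic PIE for the union of the six DBW events:\n\\begin{verbatim}\nfrom itertools import combinations\n\nevents = ['TF', 'EF', 'BCU', 'WSP', 'TS', 'BS']\n\nterms = []\nfor k in range(1, len(events) + 1):\n    sign = (-1) ** (k + 1)\n    for subset in combinations(events, k):\n        terms.append((sign, subset))\n\nprint(len(terms))  # 2**6 - 1 = 63 signed intersection terms\n\\end{verbatim}\nEach pair corresponds to one signed probability of an intersection of events in Theorem~\\ref{thm:PROB_DBW}.\n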
Since there are 63 different terms, we are only showing a part of the theorem and the full version is available at \\cite{ifm-short-code}. The script of the DBW DFT analysis required around 4850 lines of code and 24 man-hours to be developed. \n\n\\section{DRBD Algebra and its HOL Formalization}\n\\label{sec:DRBD-algebra}\n\nDRBDs capture the dynamic dependencies among system components using DRBD constructs, such as the spare and load sharing constructs. The blocks in a DRBD can be connected in series, parallel, series-parallel and parallel-series. \nRecently, we proposed an algebra that allows expressing the structure of a given DRBD based on system blocks \\cite{Yassmeen-DRBDTR}. The reliability of a given system can be expressed using this DRBD algebra. We defined several operators that enable expressing DRBDs of series and parallel configurations and even more complex structures. Furthermore, the defined operators allow modeling a DRBD spare construct to capture the behavior of spares in a system. We provided the HOL formalization of this algebra to ensure its soundness and enable the formal analysis using HOL4. We first formally define a DRBD event that creates the set of time until which we are interested in finding the reliability~\\cite{Yassmeen-DRBDTR}:\n\n\\begin{defn}\n\\small{\\texttt{$\\vdash\\forall$p X t. DRBD\\_event p X t = \\{z | Normal t < X s\\} $\\cap$ p\\_space p}}\n\\end{defn}\n\n\\noindent where $X$ is the time to failure function of a system component and $t$ is the moment of time until which we are interested in finding the reliability of the system. The probability of this event represents the reliability of the system until time $t$~\\cite{Yassmeen-DRBDTR}:\n\n\n\\begin{defn}\n\\small{\\texttt{$\\vdash\\forall$p X t. Rel p X t = prob p (DRBD\\_event p X t)}}\n\\end{defn}\n\n\\noindent Then, we verify that its probability is related to the CDF~\\cite{Yassmeen-DRBDTR}.\n\n\\begin{table}[t]\n\\caption{Definitions of DRBD Operators}\n\\small\n\\centering\n\\label{table:DRBD-element-operator}\n\\begin{tabular}{|l|l|l|}\n\\hline\nOperator & Mathematical Expression & Formalization \\\\ \\hline \\hline\n{ {\\texttt{AND}}}&\n$\\!\\begin{aligned}[b]\n\t{{ X \\cdot Y= min (X ,Y)}}\t\\end{aligned}$& $\\!\\begin{aligned}[c]\n\t& {\\texttt{$\\vdash$ $\\forall$X Y. \n\t\tR\\_AND X Y =}}\\\\[-1\\jot]\n &\t{\\texttt{($\\lambda$s. min (X s) (Y s))}}\\end{aligned}$ \n \\\\ \\hline\n{ {\\texttt{OR}}}&\n$\\!\\begin{aligned}[b]\n\t{{ X + Y= max (X, Y)}\n}\n\t\\end{aligned}$& $\\!\\begin{aligned}[c]\n\t& {\\texttt{$\\vdash$ $\\forall$X Y. \n\t\tR\\_OR X Y =}}\\\\[-1\\jot]\n &\t{\\texttt{($\\lambda$s. max (X s) (Y s))\n}}\\end{aligned}$ \n \\\\ \\hline\n\n{ {\\texttt{After}}}&\n$\\!\\begin{aligned}[b]\n\t{{ X \\rhd Y= }{ \n\t\\begin{cases} X, &X > Y\\\\ +\\infty, &X\\leq Y\n\\end{cases}} \n}\n\t\\end{aligned}$& $\\!\\begin{aligned}[c]\n\t& {\\texttt{$\\vdash$ $\\forall$X Y. \n\t\tR\\_AFTER X Y =}}\\\\[-1\\jot]\n\t\t& {\\texttt{($\\lambda$s. if Y s < X s then X s}}\\\\[-1\\jot]\n &\t{\\texttt{else PosInf)\n}}\\end{aligned}$ \n \\\\ \\hline\n{ {\\texttt{Simultaneous}}}& $\\!\\begin{aligned}[b]\n\t{{ X \\Delta Y= }{ \n\t\\begin{cases} X, &X = Y\\\\ +\\infty, &X\\neq Y\n\\end{cases}} \n}\n\t\\end{aligned}$ & $\\!\\begin{aligned}[c]\n\t& {\\texttt{$\\vdash$ $\\forall$X Y. \n\t\tR\\_SIMULT X Y =}}\\\\[-1\\jot]\n\t\t& {\\texttt{($\\lambda$s. 
if X s = Y s then X s}}\\\\[-1\\jot]\n &\t{\\texttt{else PosInf)\n}}\\end{aligned}$ \\\\ \\hline\n{ {\\texttt{Inclusive After}}}& $\\!\\begin{aligned}[b]\n\t{{ X \\unrhd Y=}{ \n\t\\begin{cases} X, &X \\geq Y\\\\ +\\infty, &X < Y\n\\end{cases}} \n}\n\t\\end{aligned}$ & $\\!\\begin{aligned}[c]\n\t& {\\texttt{$\\vdash$ $\\forall$ X Y. \n\t\tR\\_INCLUSIVE\\_AFTER X Y =}}\\\\ \t\t& {\\texttt{($\\lambda$s. if Y s $\\leq$ X s then X s}}\\\\[-1\\jot]\n &\t{\\texttt{else PosInf)\n}}\\end{aligned}$ \\\\ \\hline\n\\end{tabular}\n\\vspace{20pt}\n\\end{table}\n\n\nWe introduced DRBD identity elements and operators to model both the combinatorial and dynamic behaviors, as listed in Table~\\ref{table:DRBD-element-operator}. The idea is similar to the DFT algebra, where the blocks are modeled based on their time of failure. We need to recall that DRBDs are concerned in modeling the successful behavior, i.e., the ``not failing\" behavior, and thus we can use the time to failure functions to model the behavior of a given DRBD. We defined two identity elements for DRBD that are similar to the DFT elements, i.e., ALWAYS = $0$ and NEVER = $+\\infty$. The DRBD operators are listed in Table~\\ref{table:DRBD-element-operator}. The AND operator ($\\cdot$) models series DRBD blocks, where it is required that all the blocks are working. The output of the AND operator fails with the first failure of any component of its inputs. On the other hand, the OR operator ($+$) models parallel structures, where at least one of the blocks should continue to work to maintain the system functionality. To capture the dynamic behavior, we introduced three temporal operators, i.e., \\textit{After}, \\textit{Simultaneous} and \\textit{Inclusive-after}~\\cite{Yassmeen-DRBDTR}. The after operator ($\\rhd$) models the sequence of events, where the system continues to work as long as one component continues to work after the failure of the other. The simultaneous operator ($\\Delta$) is similar to the one of the DFT algebra, where its output fails when both inputs fail at the same time. Finally, the inclusive-after operator ($\\unrhd$) combines the behavior of both after and simultaneous operators. We provided the HOL formalization of these elements and operators based on lambda abstracted functions and \\texttt{extreal} numbers. The mathematical expressions and the HOL formalization are listed in Table~\\ref{table:DRBD-element-operator}. The reliability expressions of these operators are available at~\\cite{Yassmeen-DRBDTR}.\n\n\n\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[scale = 0.8]{spare_DRBD1.jpg}\n\\caption{DRBD Spare Construct}\n\\label{fig:drbd_spare}\n\\end{figure}\n\n\\indent A spare construct, shown in Figure~\\ref{fig:drbd_spare}, is introduced in DRBDs to model spare parts in systems by having spare controllers that activate the spare after the failure of the main part. In Table~\\ref{table:spare_reliability}, $Y$ is the main part and after its failure $X$ is activated. We use two variables ($X_{a}$, $X_{d}$), like the DFT algebra.\n\n\\begin{table}[!t]\n\\centering\n\\small\n\\caption{Mathematical and Reliability Expressions of Spare Constructs}\n\\label{table:spare_reliability}\n\\begin{tabular}{|c|c|}\n\\hline\n \\small{Math. 
Model} & \\small{Reliability}\\\\ \\hline\\hline$ {\n Q_{SP}= (X_{a} \\rhd Y)\\cdot (Y \\rhd X_{d}) }$ & $\\!\\begin{aligned}[t] { R_{SP}(t) =} & {1 - \\int_{0}^{t} \\int_{y}^{t} f_{(X_{a}|Y=y)}(x)\\ f_{Y}(y) dx dy}\\\\& { - \\int_{0}^{t} f_{Y}(y)F_{X_{d}}(y)dy } \\end{aligned}$ \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\n\n\\begin{table}[!b]\n\\centering\n\\renewcommand{\\arraystretch}{1}\n\\caption{Mathematical Models and Reliability of Series and Parallel Structures}\n\\label{table:series-parallel}\n\\begin{tabular}{ll|l|l|}\n\\cline{3-4}\n & & \\small{Math. Model} & \\small{Reliability} \\\\ \\hline\n\\multicolumn{1}{|l}{$\\!\\begin{aligned}[c] Series \\end{aligned}$} & {\\raisebox{-0.25cm}{$\\!\\begin{aligned}[c]\\includegraphics[scale = 0.45]{series1.jpg}\\end{aligned}$}} & $ \\bigcap_{i=1}^{n}(event\\ (X_{i},\\ t))$ &$ \\prod_{i=1}^{n}R_{X_{i}}(t)$ \\\\ \\hline\n\\multicolumn{1}{|l}{$\\!\\begin{aligned}[b] Parallel \\end{aligned}$} &{\\raisebox{-0.9cm}{ $\\!\\begin{aligned}[c] \\includegraphics[scale = 0.45]{parallel1.jpg}\\end{aligned}$}} & $ \\bigcup_{i=1}^{n}(event\\ (X_{i},\\ t))$ & $ 1- \\prod_{1=1}^{n}(1-R_{X_{i}}(t))$ \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\nDRBD blocks can be connected in series, parallel and more nested structures. We provide here the details of only the series and parallel structures, as listed in Table~\\ref{table:series-parallel}. Details about the nested structures can be found in \\cite{Yassmeen-DRBDTR}. The series structure, shown in Table~\\ref{table:series-parallel}, continues to work as long as all the blocks are working. Once one of these blocks stops working, then the entire system stops as well. It can be expressed using the AND operator. Its mathematical model is expressed as the intersection of the individual DRBD events \\cite{hasan2015reliability}. The parallel structure, shown in Table~\\ref{table:series-parallel}, is composed of several blocks that are connected in parallel. Its structure function can be expressed using the OR operator. Its mathematical model is represented using the union of the individual DRBD events. We developed the HOL formalization of these structures and verified their reliability expressions assuming the independence of the individual blocks~\\cite{Yassmeen-DRBDTR}. \n\n\n\n\\begin{figure}[!b]\n\\centering\n\\includegraphics[scale=0.8]{DBW_DRBD1.jpg}\n\\caption{DRBD of Drive-by-wire System}\n\\label{fig:dbw_drbd}\n\\end{figure}\n\n\nWe demonstrate the applicability of the DRBD algebra in the formal analysis of the DRBD of the DBW system given in Figure~\\ref{fig:dbw_drbd}. This DRBD is a series sructure with one spare construct to model the main part $PC$ that is replaced by $SC$ after failure. 
The structure function of the DBW DRBD ($F_{DBW}$) can be expressed as:\n\\begin{equation}\nF_{DBW} = TF \\cdot EF \\cdot BCU \\cdot (SC_{a} \\rhd PC) \\cdot (PC \\rhd SC_{d}) \\cdot TS \\cdot BS\n\\end{equation}\nThen, we verify the reliability of the DBW system as:\\\\\n\n\\begin{thm}\n\\label{thm:Rel_DBW}\n\\textup{\\small{\\texttt{$\\vdash\\forall$p TF EF BCU PC SC\\textsubscript{a} SC\\textsubscript{d} TS BS t.}}}\\\\\n\\mbox{\\textup{\\small{\\texttt{~DBW\\_set\\_req p TF EF BCU PC SC\\textsubscript{a} SC\\textsubscript{d} TS BS t $\\Rightarrow$}}}}\\\\\n\\mbox{\\textup{\\small{\\texttt{~(prob p (DRBD\\_event p F\\textsubscript{DBW} t) =}}}}\\\\\n\\mbox{\\textup{\\small{\\texttt{~~Rel p TF t * Rel p EF t * Rel p BCU t * Rel p (R\\_WSP PC SC\\textsubscript{a} SC\\textsubscript{d}) t *}}}}\\\\ \\mbox{\\textup{\\small{\\texttt{~~Rel p TS t * Rel p BS t})}}}\n\\end{thm}\n\n\\noindent where \\texttt{DBW\\_set\\_req} ascertains the required conditions for the independence of the DBW system blocks~\\cite{Yassmeen-DRBDTR}. The reliability of the spare construct can be further rewritten using the reliability expression of the spare using integrals. The script of the reliability analysis of the DBW DRBD is 150 lines long and required only one hour of work. \\\\\n\n\\section{Integrated Framework for Formal DFT-DRBD Analysis}\n\n\\begin{figure}[!b]\n\\centering\n\\includegraphics[width= \\textwidth]{methodology9.png}\n\\caption{Integrated Framework for Formal DFT-DRBD Analysis using HOL4}\n\\label{fig:methodology}\n\\end{figure}\n\nThe proposed framework integrating DFT and DRBD algebras is depicted in Figure~\\ref{fig:methodology}.\nIt can be utilized to conduct both DFT and DRBD analyses using the HOL formalized algebras and allows formally converting a DFT model into its corresponding DRBD based on the equivalence of both algebras. The analysis starts by a given system description that can be modeled as a DFT or DRBD. Formal models of the given system can be created based on the HOL formalized algebras. The DRBD model can be analyzed as described in Section~\\ref{sec:DRBD-algebra}, where a DRBD event is created and its reliability is verified based on the available verified theorems of DRBD algebra. On the other hand, a DFT model can be analyzed using the formalized DFT algebra, which requires dealing with the probabilistic PIE. Furthermore, the DRBD model can be converted to a DFT to model the failure instead of the success, then this model is analyzed using the DFT algebra. Similarly, the DFT model can be analyzed by converting it to its counterpart DRBD model, which results in an easier process as the PIE is not invoked. \n\nIn order to handle the DFT analysis using DRBD algebra and the DRBD analysis using the DFT algebra, it is required to be able to represent the DRBD of the corresponding DFT gates using the DRBD algebra and vice-versa (the equivalence proof in Figure~\\ref{fig:methodology}). According to \\cite{distefano2007dynamic}, the OR, AND and FDEP gates can be represented using series, parallel and series RBDs, respectively. Therefore, they can be modeled using AND and OR operators, while the spare gate corresponds to the spare construct. Finally, the PAND gate can be expressed using the inclusive after operator ($Y \\unrhd X$). However, we need to formally verify this equivalence to ensure its correctness. 
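\nBefore stating the formal theorems, the correspondence can be illustrated informally by evaluating both algebras on sampled failure times. The following plain Python sketch (ours, purely illustrative and entirely separate from the HOL4 development; \\texttt{inf} plays the role of \\texttt{PosInf}) mirrors the definitions of Tables~\\ref{table:gates} and~\\ref{table:DRBD-element-operator}:\n\\begin{verbatim}\nimport random\nfrom math import inf\n\n# DFT operators: time of failure of the gate output\ndef d_and(x, y): return max(x, y)\ndef d_or(x, y):  return min(x, y)\ndef fdep(x, y):  return min(x, y)\ndef p_and(x, y): return y if x <= y else inf\n\n# DRBD operators: time of failure of the block output\ndef r_and(x, y): return min(x, y)\ndef r_or(x, y):  return max(x, y)\ndef r_inclusive_after(x, y): return x if y <= x else inf\n\nfor _ in range(1000):\n    x, y = random.random(), random.random()\n    assert d_and(x, y) == r_or(x, y)\n    assert d_or(x, y) == r_and(x, y)\n    assert fdep(x, y) == r_and(x, y)\n    assert p_and(x, y) == r_inclusive_after(y, x)\n\\end{verbatim}\nSuch a numerical check is of course no substitute for the HOL4 proofs; it only illustrates the correspondence that the theorems below establish for all time-to-failure functions.\n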
In Table~\\ref{table:verify-DRBD-DFT}, we provide the theorems of equivalence of DFT gates and DRBD operators and constructs, where \\texttt{D\\_AND}, \\texttt{D\\_OR}, \\texttt{FDEP}, \\texttt{P\\_AND} and \\texttt{WSP} are the names of the AND, OR, FDEP, PAND and spare DFT gates in our HOL formalization~\\cite{elderhalli2019probabilistic}. \\texttt{R\\_WSP} is the name of the spare DRBD construct in our formalized DRBD \\cite{Yassmeen-DRBDTR} and \\texttt{ALL\\_DISTINCT [Y X\\textsubscript{a} X\\textsubscript{d}]} ensures that the inputs cannot fail at the same time.\n\n\\begin{table}[!t]\n\\centering\n\\small\n\\caption{Verified Equivalence of DFT Gates and DRBD Algebra}\n\\label{table:verify-DRBD-DFT}\n\\begin{tabular}{|c|c|l|}\n\\hline\n\\small{DFT Gate}& \\small{DRBD Operator\/Construct} & \\small{Verified Theorem} \\\\ \\hline \\hline\n\\scriptsize{AND} & {OR} & {\\texttt{$\\vdash\\forall$X Y. D\\_AND X Y = R\\_OR X Y}} \\\\ \\hline\n {OR} & {AND} & {\\texttt{$\\vdash\\forall$X Y. D\\_OR X Y = R\\_AND X Y}} \\\\ \\hline\n {FDEP} & {AND} & {\\texttt{$\\vdash\\forall$X Y. FDEP X Y = R\\_AND X Y}} \\\\ \\hline\n {PAND} & {Inclusive After} & $\\!\\begin{aligned}[c] &{\\texttt{$\\vdash\\forall$X Y. P\\_AND X Y =}}\\\\[-1\\jot]\n &\\texttt{{ R\\_INCLUSIVE\\_AFTER Y X}} \\end{aligned}$ \\\\ \\hline\n {Spare} & {Spare} & $\\!\\begin{aligned}[c]\n\t& {\\texttt{$\\vdash\\forall$X\\textsubscript{a} X\\textsubscript{d} Y.}}\\\\[-1\\jot]\n\t&\\texttt{{($\\forall$s. ALL\\_DISTINCT [Y s;X\\textsubscript{a} s;X\\textsubscript{d} s]) $\\Rightarrow$ }}\\\\[-2\\jot]\n& {\\texttt{(WSP Y X\\textsubscript{a} X\\textsubscript{d} = R\\_WSP Y X\\textsubscript{a} X\\textsubscript{d})}} \\end{aligned}$ \n \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\nIn order to use these verified expressions in Table~\\ref{table:verify-DRBD-DFT}, we need to verify that the \\texttt{DRBD\\_event} and the \\texttt{DFT\\_event} possess complementary sets in the probability space. We formally verify this as:\n\n\\begin{thm}\n\\small{\\texttt{$\\vdash\\forall$p X t. prob\\_space p $\\wedge$ (DFT\\_event p X t) $\\in$ events p $\\Rightarrow$}}\\\\\n\\mbox{\\small{\\texttt{(prob p (DRBD\\_event p X t) = 1 - prob p (DFT\\_event p X t))}}}\n\\end{thm} \n\n\\noindent where the conditions ensure that \\texttt{p} is a probability space and that the DFT event belongs to the events of the probability space. This theorem can be verified also if we ensure that the DRBD event belongs to the probability space. This theorem means that for the same time to failure function, the DRBD and DFT events are the complements of each other. This way, we can analyze DFTs using the DRBD algebra and vice-versa. \n\nBased on the verification results obtained in Table~\\ref{table:verify-DRBD-DFT}, DFT gates can be formally represented using DRBDs. We show that the amount of effort required by the verification engineer to formally analyze DFTs by analyzing its counterpart DRBD is less than that of analyzing the original DFT model. In Section~\\ref{sec:DFT-algebra}, a DFT is formally analyzed using the DFT algebra by expressing the DFT event of the structure function as the union of the individual DFT events. Then the probabilistic PIE is utilized to formally verify the probability of failure of the top event. The number of terms in the final result equals $2^n-1$, where $n$ is the number of individual events in the union of the structure function. Therefore, in the verification process, it is required to verify at least $2^n-1$ expressions. 
On the other hand, verifying a DRBD would require verifying a single expression for each nested structure.\n\nAs an example, consider the reliability analysis of the DBW system. Analyzing the DFT of this system required verifying 63 subgoals as the top event is composed of the union of six different events. While analyzing the DRBD of the DBW system required verifying only one main subgoal to be manipulated to reach the final goal. Table~\\ref{table:compare} provides a comparison of the size of the script, the required time to develop it and the number of goals to be verified. Based on these observations, analyzing the reliability of the DBW using the DRBD required $1\/24$ of the time needed by the DFT. These results show that it is more convenient to analyze the DRBD of a system rather than its DFT if the algebraic approaches are to be used. The only added step will be to formally verify that the DFT and DRBD are the complements of each other, which is straightforward utilizing the theorems in Table~\\ref{table:verify-DRBD-DFT}. Therefore, we verify this as:\\\\\n\n\\begin{thm}\n\\small{\\texttt{$\\vdash\\forall$p TF EF BCU PC SC\\textsubscript{a} SC\\textsubscript{d} TS BS t. }}\\\\\n\\mbox{\\small{\\texttt{prob\\_space p $\\wedge$ DBW\\_events\\_p p TF EF BCU PC SC\\textsubscript{a} SC\\textsubscript{d} TS BS t $\\Rightarrow$.}}}\\\\\n\\mbox{\\small{\\texttt{(prob p (DRBD\\_event p F\\textsubscript{DBW} t) = 1- prob p (DFT\\_event p Q\\textsubscript{DBW} t))}}}\\\\\n\\end{thm}\n\n\\noindent where \\texttt{DBW\\_events\\_p} ensures that the DBW DFT events are in the events of the probability space. Thus, we can use the DRBD reliability expression (Theorem~\\ref{thm:Rel_DBW}) to verify the probability of failure of the DFT, which results in a reduction in the analysis efforts. \n\n\\begin{table}[!t]\n\\centering\n\\caption{Comparison of Formal Analysis Efforts of DBW}\n\\label{table:compare}\n\\begin{tabular}{c|c|c|c|}\n\\cline{2-4}\n & \\small{\\# of subgoals} & \\small{\\# of lines in the script} & \\small{required time} \\\\ \\hline \\hline\n\\multicolumn{1}{|c|}{\\small{DFT}} & \\small{ 63} & \\small{ 4850} & \\small{ 24 hours} \\\\ \\hline\n\\multicolumn{1}{|c|}{\\small{DRBD}} & \\small{ 1 } & \\small{150} & \\small{1 hour } \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\n\\section{Conclusions}\nIn this report, we proposed an integrated framework to enable the multiway formal algebraic analysis of DFTs and DRBDs within a theorem prover. This framework allows transforming a DFT and DRBD models into their corresponding DBRD and DFT models, respectively, to be either analyzed more effectively using the DRBD algebra or to clearly observe the failure dependencies in the form of a DFT. This requires formally verifying the equivalence of both DFT and DRBD algebras. To illustrate the efficiency and usefulness of the proposed framework, we provided a comparison of the efforts required to analyze a drive-by-wire system and the results showed that using the DRBD in the analysis instead of DFTs required verifying less goals (1:63), smaller script size (150:4850) and less time (1h:24h).\n\\bibliographystyle{unsrt}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\n\\subsection{Monitoring Complex Industrial Systems}\n\nThe industry is currently experiencing major changes in the condition monitoring of machines and production plants as data is always more easily captured and collected. 
This opens many opportunities to rethink the traditional problems of Prognostics and Health Management and to look for data-driven solutions, based on machine learning. Machine learning is being increasingly applied to fault detection, fault diagnostics (usually both combined in a classification task), and to the subsequent steps such as fault mitigation or decision support. One crucial component in the success of learning from the data, is the representativeness of the collected dataset used to train the models. Most modern machine learning methods, and deep learning in particular, require for their training not only a large collection of examples to learn from but also a representative sampling of these examples over the input space, or in other words, over the possible operating conditions. In fact, if the interpolation capabilities of machine learning have been acclaimed in many fields, extrapolation remains a challenge. Since most machine learning tools perform local transformation of the data to achieve better separability, it is very difficult to prove that these transformations are still relevant for data that are outside the value range used for training the models or in our case, for data stemming from different operating conditions.\n\nThis representativeness requirement on the training set is a major constraint for the health monitoring of complex or critical industrial systems, such as passenger transporting vehicles, power plants, or any systems whose failure would lead to dramatic consequences, and this for the following reasons: First, due to the fact that failures of such systems are by nature unacceptable, these systems are reliable by design and preventive maintenance is regularly performed to minimise any risk of a major fault developing. In addition, possible faults, if extremely unlikely, are plentiful and this prevents the gathering of enough data to perform data-driven fault recognition and classification. Second, the system operating conditions might evolve over very-long time scale (e.g.,\\ yearly trends in a power plant operation). Collecting a representative dataset to train a reliable data-driven health monitoring model would require too much time.\n\nThese two limitations, missing faulty patterns to learn from and the need of data representative of all operating conditions, have solutions in the literature but it is seldom that both problems are considered together. First, instead of learning the faulty patterns, novelty detection methods exist, and have already been successfully applied. Yet, when data representative of all possible operating conditions are lacking, such approaches face the difficult problem of distinguishing between anomalies and a normal evolution of the system that was not observed in the training dataset. To the opposite, in the second case, many works have focused on domain adaptation, that is, either on identifying patterns significant of faults that are independent of the operating conditions or on their adaptation to new conditions. Such approaches require however examples of all possible faults in some operating conditions, to then generalise their characteristics to other operating conditions.\n\nA intuitive approach to this domain adaptation task is to consider several similar systems each with different operating conditions and to learn fault signatures valid across all systems. This is the \\textit{fleet approach} to domain transfer. 
A trivial context is to assume the systems identical in design and in usage, such that a single model can be trained for the whole fleet. But the task becomes more challenging when one of both constraints is relaxed. First, in a fleet from the operator perspective \\parencite{Jin2015}, the units can come from different manufacturers but the units are used similarly. In this case the monitoring of the units might vary due to different sensor equipment and the challenge lies in the transformation of the data to a space independent of the sensing characteristics and technologies. Second, in a fleet from the manufacturer perspective, the units come from the same manufacturer but they are used in different operating conditions or by different operators \\parencite{Leone2017}. In this second case the operating conditions will be the distinguishing elements between units and the challenge is in the alignment of the data, such that faults can be recognisable independently of the operation. Of course, the combination of both is also a possibility and would lead to an even more challenging task. In this paper, we set ourselves in the context of similarly monitored units with different operating conditions, that is, in the fleet from the manufacturer perspective. \n\nFor the monitoring and diagnosis of such fleets, a vast literature exists proposing solutions that can be organised by increasing complexity of the task as follows:\n\\begin{enumerate}\n \\item Identifying some relevant parameters of the units in order to adapt them to each unit or to perform clustering and use the data of each cluster to train the model. \\textcite{Zio2010} compare multi-dimensional measurements, independent of time. \\textcite{lapira2012fault} clusters a fleet of wind turbine based on power versus wind diagrams, pre-selecting diagrams with similar wind regimes. \\textcite{Gonzalez-PriDa2016} propose an entropy inspired index based on availability and productivity to cluster the units. \\textcite{Peysson2019} propose a framework for fleet-wide maintenance with knowledge base architecture, uniting semantic and systemic approach of the fleet.\n \\item The entire time series are used to cluster units together. \\textcite{Leone2016} compare one dimensional time series by computing the euclidean distance between a trajectory and reference trajectories. \\textcite{liu2018cyber} proposes, among other, the use of time machine, clustering time series from a fleet a wind turbine with the DS3 algorithm. \\textcite{al2018framework} cluster nuclear power-plant based on their shut-down transient.\n \\item Model each unit functional behavior and try to identify similar ones. \\textcite{Michau2018b} use the whole set of condition monitoring data to define the similarity. Such approaches do not depend on the length of the observation since the functional relationship is learnt.\n \\item Align the feature space of different units, such as proposed in the present paper.\n\\end{enumerate}\n\nEach level increases the complexity of the solution, but tends to mitigate some of the limitations of the previous one. The main limitations of each of the above described levels are:\n\\begin{enumerate}\n \\item Aggregated parameters do not guarantee that all the relevant conditions have been covered. 
E.g.,\\ \\textcite{lapira2012fault} have to first segment diagram with similar wind regimes.\n \\item Comparing the distances between datasets is a problem affected by the curse of dimensionality: in high dimensions, the notion of distance loses its traditional meaning \\parencite{Domingos2012}, and the temporal dimension particularly important when operating conditions evolve, make this comparison even more challenging. E.g.,\\ \\textcite{al2018framework} restrict themselves to fixed-length extracted transient from the time series.\n \\item Even though such approaches are more robust to variations in the behaviour of the system, sufficient similarity in the operating range is still a strong requirement, which may require large fleets for the approach to be applicable. \n \\item When the alignment is really robust to variations in the operating conditions, it can be to the point that the subsequent condition monitoring model might interpret as natural some degradation of the system and miss the alarms. \n\\end{enumerate}\n\nAligning the feature space in the Prognostics and Health Management field is not a new idea, but it has been only applied to diagnosis or Remaining Useful Life estimation, to the best of the authors' knowledge. Such diagnostics problem have been extensively studied with traditional machine learning approaches \\parencite{margolis2011literature}, but also more recently and more specifically with deep-learning \\parencite{kouw2019review}. In the context of diagnostics, it is almost always assumed that the labels on the possible faults or on the degradation trajectories exist for some units, which will be used as reference for the alignment and are therefore denoted by \\textit{source} units. The challenge is then to make sure that the models perform as well on the \\textit{target} units for which diagnostics labels were not available in sufficient quantity or nor available at all.\n\nMost of the alignment methods follow the same framework: First, features are extracted, engineered or learned, such as to maximise the performances of a subsequent classifier trained in the source domain where labels are available. Some works aim at adapting the target features to match the source by means of a transformation \\parencite{fernando2013unsupervised,Xie2016,Zhang2017a}, others combine both alignment and feature learning in a single task. To do so, a min-max problem is solved, to minimise the classification loss in the source domain while maximising the loss of a domain discriminator. For example, \\textcite{Lu2017a} train a neural network such that one intermediate layer (namely the feature or latent space) minimises the Maximum Mean Discrepancy \\parencite{Borgwardt2006} between the source and the target and maximises the detection accuracy of a subsequent fault classifier. Ensuring that the origin of the features cannot be classified encourages a distribution overlap in the feature space.\n\\textcite{Li2018} introduced the use of generative model to transfer faults from one operating condition to another with a two level deep learning approach. 
Fake faulty and healthy samples in different operating conditions are generated to train in the second step, a classifier also on operating conditions where the faults were not really experienced.\n\nAs target labels are missing in the training set, such approaches are sometimes also denoted as \\textit{Unsupervised Domain Adaptation}, where adaptation is performed on an unsupervised domain, that is, the target domain \\parencite{fernando2013unsupervised}.\nThe training of the feature extractor and of the classifier is however supervised in the source domain.\nRecent results demonstrate that this supervision of the feature extractor training in the source domain through the classifier is essential to the success of these approaches. By making sure that the features can be classified, the classifier constrains greatly the feature space \\parencite{wang2019domain}.\n\nThese approaches cannot be directly applied in the context of unsupervised health monitoring, where anomalous labels and anomalous samples are available neither for the source nor for the target. In our context, as the training uses healthy data only, there is no classification information to back-propagate to the feature extractor. The feature extraction is in fact unsupervised with respect to the health of the system, while the alignment is still required. This setup is thus an even more challenging task never solved so far in the literature, to the best of our knowledge.\n\nTo solve this task, it seems necessary to constrain the feature space in an unsupervised manner, to convey maximal information about the health of the system within the features, as illustrated in Figure~\\ref{fig:FeatAlignt}. We propose three approaches.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=9cm]{Figures\/Feat-Alignt.png}\n\\caption{Feature alignment: (a) Combining healthy features of source and target without alignment lead to wide anomaly detection boundaries and missed anomalies. (b) Healthy feature non-discriminability. Without constraints on the transformation, anomalies might be mixed with healthy features. (c) By imposing non-discriminability and an homothetic transformation, inter-point relationships are kept, ensuring that the initial separability of the anomalies is conserved after the alignment.}\n\\label{fig:FeatAlignt}\n\\end{figure}\n\nFirst, we propose an auto-encoder with shared latent space both for source and target. The features of both source and target need to be encoded in a single latent space while allowing for good input reconstruction. Using an auto-encoder is a natural extension of data-driven unsupervised anomaly detection approaches \\parencite{michau_deep_2017}. We will test this approach with a variational auto-encoder, $\\beta$-VAE \\parencite{higgins2017beta}. VAE has shown in the past that, with its probabilistic description of the input and latent space, the obtained features are very useful for subsequent tasks \\parencite{ellefsen2019remaining}.\n\nSecond, we introduce the homothety loss. The loss is designed to make sure that inter-point distance relationships are kept in the latent space for both the source and the target. If the features were obtained through an homothetic projection from the input to the feature space, they would minimise the homothety loss.\n\nLast, we use an origin discriminator on the feature space trained in an adversarial manner. 
The discriminator itself is trained to best classify the origin dataset of the features while the feature extractor is trained such as the resulting features cannot be classified by the discriminator.\n\nThe remainder of the paper is organised as follows: \nSection 2 provides an overview of the tools used for the proposed approach and motivates their usage in the particular context of unsupervised feature learning and alignment. \nSection 3 presents a real application case study that faces the difficulties discussed above, including rare faults, limited observation time, limited representative condition monitoring data collected over a short period. The comparisons are performed on a fleet comprising 112 power plants monitored for one year, 12 of which experienced a fault. We limit ourselves to two months of data available for the target unit training, quite a small time interval compared to yearly fluctuations of a power plant operation.\n\n \\subsection{Notations}\n \n\\makebox[2cm]{${\\cdot}_S $} Variables for the source dataset\\\\\n\\makebox[2cm]{${\\cdot}_T $} Variables for the target dataset\\\\\n\\makebox[2cm]{$X_\\cdot$} Input Variable\\\\\n\\makebox[2cm]{$F_\\cdot$} Feature Variable\\\\\n\\makebox[2cm]{$Y_\\cdot$} One-class classifier output\\\\\n\\makebox[2cm]{$D_\\cdot$} Discriminator output\\\\\n\\makebox[2cm]{$\\mathcal{L}$} Loss\\\\\n\\makebox[2cm]{$\\mathcal{L}_{rec}$} Reconstruction Loss\\\\\n\\makebox[2cm]{$\\mathcal{L}_{KL}$} Kullback-Leibler divergence Loss\\\\\n\\makebox[2cm]{$\\mathcal{L}_{FA}$} Feature Alignment Loss\\\\\n\\makebox[2cm]{$\\mathcal{L}_{D}$} Discriminator Loss\\\\\n\\makebox[2cm]{$\\mathcal{L}_{H}$} Homothety Loss\\\\\n\\makebox[2cm]{ELM} Extreme Learning Machine\\\\\n\\makebox[2cm]{VAE} Variational Auto-Encoder\\\\\n\\makebox[2cm]{GRL} Gradient Reversal (GR) Layer\\\\\n\\makebox[2cm]{FPR} False Positive Rate (in \\%)\\\\\n\n\n\\section{Methodology for Adversarial Transfer of Unsupervised Detection}\n\\label{sec:ua}\n\n \\subsection{Anomaly Detection and One-Class Classification}\n \n\nThe traditional framework for unsupervised fault detection usually consists in a two step approach, with first, feature extraction, and second feature monitoring. Features can stem from a physical modelling of the system, from aggregated statistics on the dataset, from varied machine learning tools or from deep learning on surrogate tasks, such as auto-encoding. The monitoring of the features can be rule based (e.g.,\\ by defining a threshold on the features), or statistical (e.g.,\\ $\\chi^2$ or Student test) or use some machine learning tools such as clustering (K-means), nearest neighbours analysis, density based modelling, subspace analysis (e.g.,\\ PCA), or one-class classification such as one-class SVM or one-class classifier neural network (see the work of \\textcite{Pecht2019} for more details).\nTo the best of the author's knowledge, unsupervised detection has never been treated with end-to-end learning due to the lack of supervision inherent to the task.\n\nIn continuity with the traditional approaches to unsupervised fault detection, we propose here to split our architecture in a feature extractor and a feature monitoring network. 
To monitor the features, we propose to train a one-class classifier neural network, such as developed by \\textcite{leng2015one, yan2016one, michau_deep_2017, Michau2018a}, which has proven to provide more insights on system health than the traditional state-of-the-art one-class SVM, and provides better detection performances than other indices such as the \\textit{special index} \\parencite{Fan2017}.\n\nIn order to handle the lack of supervision in the training of the one-class classifier, \\textcite{Michau2018a} proposed to train a one-class Extreme Learning Machine (ELM). Such single layered networks have randomly drawn weights between the input layer and the hidden layer. Only the weights between the hidden layer and the output are learned. Mathematical proof has been given that they are universal approximators \\parencite{huang_universal_2006}, and since the problem of finding the output weights become a single variable convex optimisation problem, there is a unique optimal solution easily found by state-of-the-art algorithms with convergence guaranties, in little time compared to the traditional training of neural networks with iterative back-propagation.\n\n\\textcite{Michau2018a} demonstrated that a good decision boundary on the output of the one-class ELM is the threshold\n\\begin{equation}\n\\label{eq:thrd}\n\\mbox{Thrd} = \\gamma\\cdot \\mbox{percentile}_{p}(\\vert \\ensuremath{\\mymathbb 1}}%1\\hspace{-0.25em}\\mbox{l}} - \\val{Y}\\vert),\n\\end{equation}\nwhere $\\val{Y}$ is the ouput of the one-class classifier for a validation dataset, healthy but not used in the training. It provides an estimation of the natural fluctuation in the one-class classifier output in healthy operating conditions. $p$ represents the number of expected outliers, which is linked to the cleanness of the dataset and $\\gamma$ represents the sensitivity of the detection. In this paper, we take $\\gamma=1.5$ and $p=99.5$, values identified as relevant in the paper.\nOutliers are discriminated from the main healthy class if they are above this threshold. \n\nThis framework was developed and successfully applied in combination to a auto-encoder ELM (namely Hierarchical ELM or HELM) to the monitoring of a single machine for which long training was available \\parencite{michau_deep_2017,Michau2018a}. In the context of short training time, the same architecture was applied to paired units (source with one year of data available and target with only two months of data available) in the work of \\textcite{Michau2018b}. If this approach provided better detection rates with lower false alarm rates than using two months of data only, it faces the limitation that when the units have very different operating conditions, it is difficult to pair them such that the final model provides satisfactory performances.\n\nIn this paper, we propose to change the feature extractor from an ELM auto-encoder, that is not suitable for more advanced alignment since there is no possible back-propagation of any loss, to either a Variational Auto-Encoder (VAE) or a simple feed forward network with a new custom made loss function. We propose therefore new ways to learn features across source and target but still use the one-class classifier ELM to monitor the health of the target unit.\n\n \\subsection{Adversarial Unsupervised Feature Alignment}\n \nThe proposed framework is composed of a feature extractor and of a one-class classifier trained with healthy data only. 
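Before turning to the alignment strategies, the following minimal sketch illustrates the monitoring stage described above: a one-class ELM whose random input weights are never trained and whose output weights are obtained by regularised least squares towards a constant target of one, scored against the validation threshold~\\eqref{eq:thrd}. The hidden-layer size, the ridge parameter, the \\texttt{tanh} activation and all variable names are illustrative assumptions, not the exact configuration used in the experiments.
\\begin{verbatim}
import numpy as np

def train_one_class_elm(F_train, n_hidden=50, ridge=1e-3, seed=0):
    # One-class ELM: random (untrained) input weights, output weights fitted
    # so that the output is close to 1 on healthy training features.
    rng = np.random.RandomState(seed)
    W = rng.randn(F_train.shape[1], n_hidden)
    H = np.tanh(F_train @ W)
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden),
                           H.T @ np.ones(len(F_train)))
    return W, beta

def detection_threshold(Y_val, gamma=1.5, p=99.5):
    # Threshold rule above: gamma times the p-th percentile of |1 - Y|
    # on healthy validation outputs.
    return gamma * np.percentile(np.abs(1.0 - Y_val), p)

# usage sketch (F_train, F_val, F_test are extracted features):
# W, beta = train_one_class_elm(F_train)
# thrd = detection_threshold(np.tanh(F_val @ W) @ beta)
# alarms = np.abs(1.0 - np.tanh(F_test @ W) @ beta) > thrd
\\end{verbatim}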
In order to perform feature alignment we explore three strategies in the training of the feature extractor, auto-encoding, the homothety loss and adversarial discriminator. \nThese alignment strategies are not exclusive, so we also explore their different combinations.\n\n \\subsubsection{Auto-encoding as a Feature Constraint}\n \nAn auto-encoder is a model trained to reconstruct its inputs after some transformation of the data. These transformations could be to a space of lower dimensionality (compressive auto-encoder), higher dimensionality, linear (e.g.,\\ PCA) or non-linear (e.g.,\\ neural networks). Auto-encoding models are a popular feature extractors as they do not require any supervision. They rely on the assumption that since the learned features can be used to reconstruct the inputs, the features should contain the most meaningful information on the data and that they will be better suitable for subsequent machine learning tasks. It is in addition quite easy to enforce some additional properties of the features that seems suitable for the task at hand such as sparsity (e.g.,\\ $\\ell_1$ regularisation), low coefficients (e.g.,\\ $\\ell_2$ regularisation), robustness to noise (denoising auto-encoder), etc... \n\nAmong the vast family of possible auto-encoders, the variational auto-encoder, proposed by \\textcite{kingma2013auto}, is learning the best probabilistic representation of the input space using a superposition of Gaussian kernels. The neurons in the latent space are interpreted as mean and variance of different Gaussian kernels, from which features are sampled and decoded. Such networks can be used as traditional auto-encoders but also as generative models: by randomly sampling the Gaussian kernels, new samples can be decoded.\n\nThe training of the variational auto-encoder consists in the minimisation of two losses, first the reconstruction loss, $\\mathcal{L}_{rec}$, and second the loss on the distribution discrepancy between the input (or prior) and the Gaussian kernels, by mean of the Kullback-Leibler divergence, $\\mathcal{L}_{KL}$. \nThe reader interested in the implementation details of the variational auto-encoder can refer to the work of \\textcite{higgins2017beta}. In this work, the concept of $\\beta$-VAE is introduced and propose to apply a weight $\\beta$ to the Kullback-Leibler divergence loss to either prioritise a good distribution match over a good reconstruction (particularly important when VAE is used as a generative model) or the opposite.\n\nVariational auto-encoder have been successfully used in hierarchical architectures in many fields. In PHM, they have been used in semi-supervised learning tasks for remaining useful life prediction \\parencite{yoon2017semi,ellefsen2019remaining}, and for anomaly detection \\parencite{kim2018squeezed}).\n\n\n \\subsubsection{Homothety as a Feature Constraint}\n \nAn alternative to auto-encoding, is the use of more traditional feed-forward networks, on which we would impose an additional constraint on the feature space. Instead of the combined loss on both the reconstruction from the feature and on the feature distribution, we propose to introduce here a loss that encourages inter-point relationship conservation in the feature space. To do so, we define the homothety loss to keep constant the inter-point distance ratios between the input $X$ and the feature space $F$. 
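For illustration, a minimal numerical sketch of this constraint is given below (the formal definition of the loss and of the optimal scaling factor $\\eta$ follows); pairwise Euclidean distances are computed in the input and feature spaces, and $\\eta$ is obtained by a bounded one-dimensional search. In training, the loss is evaluated separately on a source batch and a target batch and must be implemented in the automatic-differentiation framework so that it can be back-propagated to the feature extractor; the use of \\texttt{scipy}, the search interval and the function names are illustrative choices only.
\\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.spatial.distance import pdist

def homothety_loss(X, F):
    # Mean discrepancy between input-space and feature-space inter-point
    # distances, with the scaling factor eta chosen to minimise it.
    dX, dF = pdist(X), pdist(F)
    objective = lambda eta: np.mean(np.abs(dX - eta * dF))
    eta = minimize_scalar(objective, bounds=(0.0, 1e3),
                          method="bounded").x
    return objective(eta), eta
\\end{verbatim}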
The intuition lies on the idea that a good alignment of the two dataset should correspond to both source and target sharing the same lower dimensionality feature space while being scaled in similar ways.\n\nThe proposed homothety loss is defined as:\n\\begin{equation}\n\\mathcal{L}_H = \\sum_{S\\in \\left\\lbrace \\substack{\\mbox{Source}\\\\\\mbox{Target}} \\right\\rbrace} \\frac{1}{\\vert S \\vert} \\sum_{(i,j)\\in S}\\left\\Vert \\left\\Vert X_i - X_j \\right\\Vert_2 - \\eta \\left\\Vert F_i - F_j \\right\\Vert_2 \\right\\Vert_2\n\\end{equation}\nwhere\n\\begin{equation}\n\\eta = \\Argmin{\\tilde{\\eta}} \\mathcal{L}_H (\\tilde{\\eta})\n\\end{equation}\n\n \\subsubsection{Domain Discriminator}\n \nFor both the VAE and the Homothetic Feature Alignment, the alignment can be helped further with the help of an origin discriminator trained in an adversarial manner. This training is done by solving a min-max problem where the discriminator is trained at minimising the discrimination loss on sample origins, while the feature extractor is trained to maximise this loss, that is, to make the feature indistinguishable from the discriminator perspective.\n\nSuch adversarial training has been greatly simplified since the introduction of the Gradient Reversal Layer trick, proposed by \\textcite{Ganin2016}. This simple yet efficient idea consists in connecting the feature extractor to the discriminator through an additional layer which performs the identity operation in the forward pass but negates the gradient in the backward pass. The gradient passed to the feature extractor during the backward pass goes therefore in the opposite direction as the minimisation problem would require.\n\nFor the discriminator, we propose to experiment with a classic densely connected softmax classifier, and with a Wasserstein discriminator. This setup is inspired from the Wasserstein GAN \\parencite{arjovsky2017wasserstein}, where a generative model is trained to minimise the Wasserstein distance between generated samples and true samples, as to make their distribution indistinguishable. The authors demonstrate that, using the Kantorovich-Rubinstein duality, this problem can also be solved by adversarial training with a neural network playing the role of the discriminator aiming at maximising the following loss:\n\n\\begin{equation}\n\\mathcal{L}_{D} = \\mathbb{E} \\left( \\mbox{disc}\\left( F_{\\mbox{Source}}\\right)\\right) - \\delta_w \\mathbb{E} \\left(\\mbox{disc}\\left( F_{\\mbox{Target}}\\right) \\right)\n\\end{equation}\n\nOur implementation of the Wasserstein adversarial training takes into account the latest improvements, including the gradient penalty as proposed in \\textcite{gulrajani2017improved} to ensure the requirement of the Kantorovich-Rubinstein duality that the function disc() is 1-Lipschitz, and the asymmetrically relaxed Wasserstein distance as proposed by \\textcite{wu2019domain}. This relaxation, here when $\\delta_w >1.0$, encourages the target feature distribution to be fully contained in the source feature distribution, but does not require full reciprocal overlap. This is important here since we assume a small non-representative dataset for the target training. 
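As a concrete illustration of these two ingredients, the sketch below shows a gradient reversal layer and the asymmetrically relaxed discriminator loss, written here in PyTorch as an illustrative choice of framework; the gradient penalty of \\textcite{gulrajani2017improved} would be added to the discriminator objective and is omitted for brevity. The class and function names are purely illustrative.
\\begin{verbatim}
import torch

class GradReverse(torch.autograd.Function):
    # Identity in the forward pass; the gradient is negated (and scaled by
    # alpha) in the backward pass, so that the feature extractor receives
    # the opposite of the discriminator's training signal.
    @staticmethod
    def forward(ctx, x, alpha):
        ctx.alpha = alpha
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.alpha * grad_output, None

def grad_reverse(features, alpha=1.0):
    return GradReverse.apply(features, alpha)

def wasserstein_disc_loss(disc_source, disc_target, delta_w=1.0):
    # L_D = E[disc(F_source)] - delta_w * E[disc(F_target)]; the discriminator
    # is trained to maximise this quantity (i.e. to minimise its negative),
    # and delta_w > 1 relaxes the overlap requirement so that the target
    # distribution only needs to be contained in the source distribution.
    return disc_source.mean() - delta_w * disc_target.mean()
\\end{verbatim}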
It is in the nature of this task to assume that the source feature distribution will have a larger support, from which the health monitoring model can learn from.\n\n\n \\subsection{Summary and overlook of final architectures}\n \n \\subsubsection{Tested Architectures}\n \n\nIn summary, we compare the various propositions alone and combined together as follows:\n\\begin{itemize}\n\\item \\textbf{$\\beta$-VAE}: the traditional $\\beta$-VAE whose features are used as input to a one-class classifier. \n\\item \\textbf{$\\beta$-VAEDs}: The same $\\beta$-VAE and one-class classifier with the addition of a softmax discriminator connected to the feature space with a GR-layer\n\\item \\textbf{$\\beta$-VAEDw}: Similar as before but with a Wasserstein discriminator. In this version the adversarial training aims at minimising the Wasserstein distance between source and target.\n\\end{itemize}\nSimilarly, for the homothetic alignment, we explore the following combinations:\n\\begin{itemize}\n\\item \\textbf{HFA}: Homothetic Feature Alignment, made out of a feature extractor trained with the homothetic loss and a one-class classifier.\n\\item \\textbf{HAFAs}: Homothetic and Adversarial Feature Alignment with softmax discriminator, similar as before with in addition a softmax discriminator connected through a GR-layer.\n\\item \\textbf{HAFAw}: Homothetic and Adversarial Feature Alignment with a Wasserstein discriminator.\n\\item \\textbf{AFAs}: Similar as the HAFAs architecture but without the homothetic loss.\n\\item \\textbf{AFAw}: Similar to the HAFAw architecture but without the homothetic loss.\n\\end{itemize}\n\nFigure~\\ref{fig:UAFA} summarises the architectures explored here and Figure~\\ref{fig:framework} how the whole framework is organised.\n\nWe compare in addition the results to our previous results in which units were paired together to train an HELM without alignment of any kind \\parencite{Michau2018b}.\n\n\\begin{figure*}\n\\hfil\n\\subfloat[]{\\includegraphics[width=7.5cm]{Figures\/HAFA.png}\\label{sfig:HAFA}}\n\\hfil\n\\subfloat[]{\\includegraphics[width=7.5cm]{Figures\/VAED.png}\\label{sfig:VAED}}\n\\hfil\n\\caption{\\textbf{Adversarial \\& Unsupervised Feature Alignment Architectures}. (a) HAFA's architecture. (b) $\\beta$-VAE based architectures.\nThe feature encoder $N_1$ is trained to minimise a feature alignment loss $\\mathcal{L}_{FA}$ composed of the reversed discriminator loss $-\\alpha\\mathcal{L}_D$ and either (a) the homothety loss $L_H$ or (b) the variational autoencoder loss ($\\mathcal{L}_{rec}+\\beta\\mathcal{L}_{KL}$). The discriminator $N_2$ is trained to minimise the classification loss $\\mathcal{L}_D$ on the origin of the data (source vs. target). For HFA and $\\beta$-VAE, the discriminator is removed. Alternatively, this corresponds to setting arbitrarily $\\mathcal{L}_D=0$.}\n\\label{fig:UAFA}\n\\end{figure*}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.7\\columnwidth]{Figures\/Framework.png}\n\\caption{\\textbf{Flow-Chart.} (a) Using Source and Target training data, both $N_1$ and $1C$ are trained. The training of $N_1$ requires incidentally the training of the discriminator $N_2$ and of the decoder when used. (b) The output of $1C$ is analysed with a validation set from both source and target and the threshold~\\eqref{eq:thrd} is computed. 
(c) This threshold is used as decision rule for test target data.}\n\\label{fig:framework}\n\\end{figure}\n\t\t\\subsubsection{Hyper-parameters}\n\t\nWe tested the architecture in similar context: A two layer feature extractor with 10 neurons, a two layer discriminator with 10 and 5 neurons, \\textit{relu} as activation function (but for the softmax output layer of the discriminator), a learning rate of $10^{-4}$, the ADAM optimiser \\parencite{kingma2014adam}, over 200 epoch with batch size of 1000. The Wasserstein discriminator has a gradient penalty weight of 10 as proposed in the seminal paper. The VAE decoder has a symmetric architecture to that of the feature extractor.\n\nThe asymmetric coefficient of the Wasserstein discriminator $\\delta_w$ is tested with $1.0$ (no asymmetric relaxation) and $4.0$ (as proposed in the seminal paper). The gradient reversal layer is tested with weights $\\alpha$ set to $1.0$ and $0.2$. With $0.2$, the discriminator would be trained with a gradient 5 times higher than the feature extractor, increasing the relative training of the discriminator. The $\\beta$ of the $\\beta$-VAE is tested with $10.0$ (more weight to the Kullback-Leibler loss), $1.0$ and $0.1$ (more weight to the reconstruction loss). Results are only reported for $\\alpha=\\beta=1.0$ as all other combinations, in our setting, were providing worse performance.\n \n \n\\section{Case Study}\n\\label{sec:cs}\n\t\\subsection{Introduction to the Case Study}\n\n\nTo demonstrate the suitability and effectiveness of the proposed approaches and compare between the different strategies, a comparison is performed on a fleet comprising 112 power plants, similarly to that presented in \\textcite{Michau2018b, Michau2019}. \nIn the available fleet, 100 gas turbines have not experienced identifiable faults during the observation period (approximately one year), they are therefore considered here as healthy and 12 units have experienced a failure of the stator vane. \n\nA vane in a compressor redirects the gas between the blade rows, leading to an increase in pressure and temperature.\nThe failure of a compressor vane in a gas turbine is usually due to a Foreign Object Damage (FOD) caused by a part loosening and travelling downstream, affecting subsequent compressor parts, the combustor or the turbine itself.\nFatigue and impact from surge can also affect the vane geometry and shape and lead to this failure mode. Parts are stressed to their limits to achieve high operational efficiency with complex cooling schemes to avoid their melting, especially during high load.\nSuch failures are undesirable due to the associated costs, including repair costs and operational costs of the unexpected power plant shutdown. \n\nBecause of the various different factors that can contribute to the source of the failure mode, including assembly, material errors, or the result of specific operation profiles, the occurrence of a specific failure mode is considered as being random. Therefore, the focus is nowadays on early detection and fault isolation and not on prediction.\n\nSo far, the detection of compressor vane failures mainly relied on analytic stemming from domain expertise. \nYet, if the algorithms are particularly tuned for high detection rates, they often generate too many false alarms. 
\nFalse alarms are very costly, each raised alarm is manually verified by an expert which makes it a time- and resource-consuming task.\n\n\t\\subsection{The dataset}\n\n\nThe turbines are monitored with 24 parameters, sampled every 5 minutes over 1 year. They stem from 15 real sensors and 9 ISO variables (measurements modified by a physical model to represent some hand-crafted features in standard operating conditions 15$^o$ C, 1 atmosphere).\nAvailable ISO measurements are, the power, the heat rate, the efficiency and indicators on the compressor (efficiency, pressure ratio, discharge pressure, discharge temperature, flow). Other measurements are pressures and temperatures from the different parts of the turbine and of the compressor (inlet and bell mouth), ambient condition measurements and operating state measurements such as the rotor speed, the turbine output, and fuel stroke. The data available in this case study is limited to one year, over which the gas turbines have not experienced all relevant operating conditions. We aim at being able to propose condition monitoring methods that rely on two months of data only from the turbine of interest.\n\nTo test the model and report the results, we apply the proposed methodology to the 12 gas turbines with faults as target, from which we extract the first two months of data as training set (around 17\\,000 measurements). The 100 remaining gas turbines are candidate source datasets, considered as healthy. \nFor the 12 target gas turbines, all data points after the first two months and until a month before the expert detected the fault (around 39\\,000 measurements), are considered as healthy and are used to quantify the percentage of false positives (FPR). The last month before the detection is ignored as faults could be the consequence of prior deterioration and a detection could not be reliably compared to any ground truth. Last, the data points after the time of detection by the expert are considered as unhealthy. As many of the 12 datasets have very few points available after that detection time (from 8 to 1000 points), we will consider the fault as detected if the threshold is exceeded at least twice successively.\n\nThe validation dataset is made by extracting 6\\% of the training dataset. The data has been normalised such that all variables 1st and 99th-percentiles are respectively $-1$ and $1$, such that the resulting normalisation is independent to the presence of outliers. Rows with any missing values or 0 (which is not a possible value for any of the measurement) have been removed. 
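The preprocessing and the evaluation rule just described can be summarised by the following minimal sketch (robust percentile scaling, removal of invalid rows, and an alarm raised only when the threshold is exceeded at least twice in a row); the function and variable names are illustrative.
\\begin{verbatim}
import numpy as np

def robust_scale(data, low=1, high=99):
    # Map the 1st and 99th percentile of every variable to -1 and +1,
    # making the normalisation insensitive to outliers.
    p_lo = np.percentile(data, low, axis=0)
    p_hi = np.percentile(data, high, axis=0)
    return 2.0 * (data - p_lo) / (p_hi - p_lo) - 1.0

def drop_invalid_rows(data):
    # Remove rows containing missing values or zeros (zero is not a
    # possible value for any of the measurements).
    keep = ~np.any(np.isnan(data) | (data == 0.0), axis=1)
    return data[keep]

def fault_detected(residuals, threshold):
    # A fault is declared only if the threshold is exceeded at least
    # twice successively.
    above = residuals > threshold
    return bool(np.any(above[1:] & above[:-1]))
\\end{verbatim}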
\n\n\t\\subsection{Alignment abilities}\n\n\\begin{table}\n\\centering\n\\setlength\\tabcolsep{3pt}\n\\begin{adjustbox}{max width=\\columnwidth}\n\n\\begin{tabular}{l|rrrrrrrrr}\n\\toprule\n\\small\nUnit &\\small HELM &\\small $\\beta$-VAE &\\small $\\beta$-VAEs &\\small $\\beta$-VAEw &\\small HFA &\\small AFAs &\\small AFAw &\\small HAFAs &\\small HAFAw \\\\\n\\midrule\n1 & 11 & 74 & 65 & 74 & 83 & 83 & 85 & 86 & 79 \\\\\n2 & 0 & 5 & 20 & 13 & 10 & 13 & 24 & 5 & 12 \\\\\n3 & 10 & 28 & 22 & 22 & 21 & 23 & 30 & 32 & 34 \\\\\n4 & 17 & 30 & 21 & 32 & 54 & 55 & 54 & 52 & 49 \\\\\n5 & 94 & 68 & 47 & 67 & 90 & 59 & 63 & 80 & 85 \\\\\n6 & 92 & 51 & 68 & 63 & 85 & 77 & 79 & 92 & 93 \\\\\n7 & 0 & 13 & 29 & 24 & 29 & 45 & 31 & 34 & 26 \\\\\n8 & 95 & 40 & 42 & 43 & 67 & 61 & 63 & 65 & 58 \\\\\n9 & 2 & 19 & 19 & 18 & 26 & 28 & 32 & 22 & 39 \\\\\n10 & 1 & 18 & 15 & 8 & 21 & 28 & 24 & 34 & 29 \\\\\n11 & 2 & 20 & 35 & 47 & 59 & 63 & 51 & 60 & 51 \\\\\n12 & 0 & 3 & 3 & 4 & 2 & 2 & 1 & 1 & 3 \\\\\n\\midrule\n\\small R\\% (5\\%) & 27.3 & 31.1 & 32.5 & 34.9 & 46.0 & 45.2 &45.2 & 47.4 & 47.0 \\\\\n\\small R\\% (1\\%) & 13.5 & 20.6 & 22.0 & 25.8 & 30.5 & 27.0 & 25.5 & 32.8 & 30.1\\\\\n\\bottomrule\n\\end{tabular}\n\\end{adjustbox}\\caption{Number of successfully aligned pairs (detected fault and less than 5\\% FPR). Last rows R are the mean ratio of aligned pairs (in \\%), with selection threshold on FPR at 5\\% and 1\\%.}\n\\label{tbl:alipair}\n\\end{table}\n\n\\begin{table}\n\\centering\n\\setlength\\tabcolsep{3pt}\n\\begin{adjustbox}{max width=\\columnwidth}\n\\begin{tabular}{l|rrrcrrrrr}\n\\toprule\n\\small\nUnit &\\small HELM &\\small $\\beta$-VAE &\\small $\\beta$-VAEs &\\small $\\beta$-VAEw &\\small HFA &\\small AFAs &\\small AFAw &\\small HAFAs &\\small HAFAw \\\\\n\\midrule\n1 & 1.46 & 0.00 & 0.01 & 0.04 & 0.00 & 0.00 & 0.04 & 0.01 & 0.00 \\\\\n2 & 10.00 & 0.12 & 0.13 & 0.00 & 0.01 & 0.05 & 0.03 & 0.52 & 0.02 \\\\\n3 & 0.35 & 0.01 & 0.18 & 0.31 & 0.10 & 0.01 & 0.00 & 0.04 & 0.49 \\\\\n4 & 0.81 & 0.08 & 0.00 & 0.00 & 0.01 & 0.02 & 0.01 & 0.06 & 0.01 \\\\\n5 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\\\\n6 & 0.35 & 0.01 & 0.03 & 0.00 & 0.02 & 0.02 & 0.01 & 0.02 & 0.01 \\\\\n7 & 6.45 & 1.85 & 0.09 & 0.27 & 0.15 & 0.02 & 0.17 & 0.62 & 0.23 \\\\\n8 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\\\\n9 & 3.20 & 0.14 & 0.09 & 0.06 & 0.05 & 0.01 & 0.03 & 0.09 & 0.00 \\\\\n10 & 4.91 & 0.20 & 0.07 & 0.53 & 0.25 & 0.07 & 0.29 & 0.21 & 0.72 \\\\\n11 & 4.71 & 0.15 & 0.01 & 0.10 & 0.09 & 0.08 & 0.00 & 0.01 & 0.15 \\\\\n12 & 7.27 & 1.61 & 1.41 & 1.76 & 2.94 & 1.01 & 3.37 & 2.64 & 2.65 \\\\\n\\midrule\nMean & 3.29 & 0.35 & 0.17 & 0.26 & 0.30 & 0.11 & 0.33 & 0.35 & 0.36 \\\\\n\\bottomrule\n\\end{tabular}\n\\end{adjustbox}\n\\caption{FPR for the best models.}\n\\label{tbl:bestmodel}\n\\end{table}\n\nThe results presented in this section aim at comparing the different combinations, and highlight the benefits of each components, in combination of others or alone. To provide insights on how well each model is able to align features from the different units of the fleet, we paired each of the 12 units, considered as target, with every one of the 100 healthy units, considered as source. \n\nThe fleet of tested unit has a strong variability and most pairs will not provide an interesting model. 
Therefore, a good indicator to compare the methodologies in their capability to take benefit of a pair of datasets is to tally the number of pairs leading to a relatively successful model, that is, a model that detect the fault in the target, and with a low FPR. Thus we report the results by setting an arbitrary threshold at 5\\% FPR. We also report the aggregated results with a threshold at 1\\%, to demonstrate that the comparative study conclusions remain valid independently of this selection process.\nOut of the resulting 34 combinations (8 architectures and different hyper-parameter settings) trained and tested on the 1200 pairs, we report the results for each architecture for the hyper-parameters maximising the overall number of successfully models.\n\nWe report in Table~\\ref{tbl:alipair} for each unit, how many aligned pairs were achieved with each combination. \nWe also report in Table~\\ref{tbl:bestmodel} the lowest FPR achieved for each of the 12 units, for models which could detect the fault.\nWe ran the experiments with Euler V\\footnote{\\url{https:\/\/scicomp.ethz.ch\/wiki\/Euler}}, with two cores from a processor Intel Xeon E5-2697v2. The heaviest models (HAFA's family) took around two minutes to be trained and tested. 40800 models were trained totalling more than 1.5 months of computation time.\n\n\\section{Discussion}\n\\label{sec:disc}\n\n\\begin{table}\n\\centering\n\\setlength\\tabcolsep{3pt}\n\\begin{adjustbox}{max width=\\columnwidth}\n\\begin{tabular}{l|rrrrrrrrrr}\n\\toprule\n\\small Unit & \\tiny{2mHELM} &\\small HELM &\\small $\\beta$-VAE &\\small $\\beta$-VAEs &\\small $\\beta$-VAEw &\\small HFA &\\small AFAs &\\small AFAw &\\small HAFAs &\\small HAFAw \\\\\n\\midrule\n1 & 9.05 & 1.46 & 0.98 & 0.55 & 0.74 & 3.61 & 0.32 & 4.77 & 0.52 & 0.82 \\\\\n2 & & 15.12 & 8.55 & 7.88 & 0.34 & & & & & 5.48 \\\\\n3 & 18.90 & 13.22 & 4.89 & 0.70 & 4.63 & & & & 4.54 & 3.05 \\\\\n4 & 42.61 & 26.34 & & & & 1.51 & 2.18 & 0.70 & 3.81 & \\\\\n5 & 4.20 & 1.50 & 0.08 & 3.48 & 0.08 & & 0.01 & 0.03 & 1.02 & 0.26 \\\\\n6 & 3.14 & 1.16 & 1.16 & 1.25 & 0.66 & 2.13 & 1.14 & 0.27 & 1.71 & 0.99 \\\\\n7 & 56.55 & 20.33 & 13.30 & 1.26 & 14.17 & 5.43 & & 6.22 & 0.62 & 4.77 \\\\\n8 & 0.19 & 0.08 & & 1.28 & & 0.03 & & 0.19 & 0.05 & 0.16 \\\\\n9 & 20.13 & & 13.11 & 6.65 & 0.06 & 4.56 & & & 0.63 & 0.20 \\\\\n10 & 81.76 & 39.79 & 2.26 & & 8.06 & & 0.07 & 30.62 & & 2.71 \\\\\n11 & 36.18 & 25.53 & 14.72 & 0.15 & 3.09 & 2.97 & 0.26 & 0.90 & 4.37 & 0.17 \\\\\n12 & 68.60 & 37.05 & & & & & 19.60 & 8.10 & & \\\\\n\\midrule\n\\#AP & 3 & 4 & 5 & 7 & 7 & 6 & 6 & 6 & \\textbf{9} & \\textbf{9} \\\\\n\\bottomrule\n\\end{tabular}\n\\end{adjustbox}\n\\caption{FPR for the pairs minimising the Maximum Mean Discrepancy. Empty cells correspond to models with missed fault. The last row (\\#AP) contains the number of models with less than 5\\% FPR.}\n\\label{tbl:mmdsel}\n\\end{table}\n\nThe results presented in Table~\\ref{tbl:alipair} demonstrate that the proposed alignment methodology all bring positive impact on the problem of transferring operating conditions between units for unsupervised health monitoring of the gas turbines. Compared to the naive combination of the source and target dataset in a single training set with HELM, all proposed alignment methods improve the number of successfully aligned pairs (target fault detected and less than 5\\% FPR). 
Each alignment strategy improves the results and leads to very efficient models as illustrated in Table~\\ref{tbl:bestmodel} (most aligned models have far less than 1\\% FPR). The homothetic loss is clearly the most contributing factor to the alignment as all architectures making use of it align successfully more pairs. The use of adversarial discriminator is also contributing quite significantly to the alignment process. It improves the results from the $\\beta$-VAE significantly and provides good alignment even when used alone (\\textit{cf.}\\ AFAs and AFAw). When used with the $\\beta$-VAE, the Wasserstein discriminator provides the best results with an asymmetric coefficient $\\delta_w=4.0$. This demonstrates the importance to relax the distribution alignment. In the homothetic alignment conditions, the selected asymmetric coefficient is $\\delta_w=1.0$, showing that the homothetic loss encourages already a distribution overlap and reduces the need of asymmetric alignment. In that case, the classic softmax discriminator is actually providing better results.\n\nThe results presented above demonstrate that, first the alignment procedures lead to an increased probability of training a useful model given a pair of units and that second, the models are performing better with alignment (lower FPR and higher detection rate). An interesting question is the \\textit{a priori} selection of the pair of units which, once aligned, can be used to train a successful health monitoring model. While this question is left for future research, a possible solution is to compare units based on few relevant characteristics. Here we propose a possible solution consisting in the selection of the pair for which the Maximum Mean Discrepancy (MMD) on two representative variables is minimal (Output Power and Inlet Guide Vanes Angle). The results are reported in Table~\\ref{tbl:mmdsel}. Empty cells corresponds to cases where the fault was not detected by the model. For comparison purposes, these models are compared to a simple HELM only trained with the two months of data from the target unit (2mHELM). Based on this imperfect selection process, the proposed alignment procedures all improves the results, both in number of useful models but also in reducing the FPR. Previous results demonstrated that the units are very different from each others, this is highlighted here with the relatively low number of successfully aligned pairs for units 2, 3, 7, 9, 10, 11 and 12. These units have also very high variability in their operating conditions as shown by the very high FPR of the model trained only on their first two months of data. This high variability makes it extremely challenging to identify the other units likely to bring beneficial information to the model. \n\nTo the opposite, units 5, 6 and 8 looks very stable in their operation as the models trained only on the first two months already provide satisfactory results (less than 5\\% FPR). Already with HELM they could be matched with almost all other units. This can explain that aligning those unit with other sources is not necessary and might actually confuse the subsequent one-class classifier (for these units, the number of successfully aligned pairs decreases with some of the alignment procedures). Yet, for the best performing approaches (HAFAs and HAFAw), the results are improved, even on these units.\n\nAlso, the results presented in this paper focused on providing a fair comparison of different alignment strategies. 
The different combinations trained on the 1200 pairs led to the training of over 40\\,000 models. Once a strategy is chosen, plenty of room is left for hyper-parameter tuning, which can only improve the results presented here.\n\n\\section{Conclusions}\n\nIn this paper, we tackled the problem of feature alignment for unsupervised anomaly detection. In the context where a fleet of units is available, we proposed to align a target unit, for which little condition monitoring data are available, with a source unit, for which longer observations period had been recorded, both in healthy conditions. Contrarily to the traditional case in domain alignment, the feature learning cannot rely on the back-propagation of the loss of a subsequent classifier. Instead, we presented three alignment strategies, auto-enconding in shared latent space, the newly proposed homothety loss and the adversarial training of a discriminator (with a traditional softmax classifier and a Wasserstein discriminator). All strategies improve the results for the subsequent unsupervised anomaly detection model. Among these strategies, we demonstrated that the newly proposed homothety loss has the strongest impact on the results and can be further improved by the use of a discriminator.\n\nIn the future, deeper analysis of the unit characteristics might help to identify the units for which additional data is required and to also identify which other units to select among the fleet. Such analysis could rely on expert knowledge or on an online adaptation of the sourcing data depending on the current operation. Such approach would face the challenge of distinguishing new from degraded operating conditions. Another interesting trail is the selection of the sourcing data among the whole fleet rather than attempting to pair specific units. In such case, the right data selection remains an open research question.\n\n\\section*{Acknowledgement}\nThis research was funded by the Swiss National Science Foundation (SNSF)\nGrant no. PP00P2 176878.\n\\printbibliography\n \n\\end{document}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\noindent In this paper, we consider the following finite-sum composite convex optimization problem:\n\\vspace{-2mm}\n\\begin{equation}\\label{equ1}\n\\min_{x\\in\\mathbb{R}^{d}} \\phi(x):=f(x)+g(x)=\\frac{1}{n}\\!\\sum\\nolimits_{i=1}^{n}\\!f_{i}(x)+g(x),\n\\vspace{-2mm}\n\\end{equation}\nwhere $f(x)\\!:=\\!\\frac{1}{n}\\!\\sum^{n}_{i=1}f_{i}(x)$ is a convex function that is a finite average of $n$ convex functions $f_{i}(x)\\!:\\!\\mathbb{R}^{d}\\!\\rightarrow\\!\\mathbb{R}$, and $g(x)$ is a ``simple\" possibly non-smooth convex function (referred to as a regularizer, e.g.\\ $\\lambda_{1}\\|x\\|^{2}$, the $\\ell_{1}$-norm regularizer $\\lambda_{2}\\|x\\|_{1}$, and the elastic net regularizer $\\lambda_{1}\\|x\\|^{2}\\!+\\!\\lambda_{2}\\|x\\|_{1}$, where $\\lambda_{1},\\lambda_{2}\\!\\geq\\!0$ are the regularization parameters). Such a composite problem~\\eqref{equ1} naturally arises in many applications of machine learning and data mining, such as regularized empirical risk minimization (ERM) and eigenvector computation~\\cite{shamir:pca,garber:svd}. As summarized in~\\cite{zhu:Katyusha,zhu:box}, there are mainly four interesting categories of Problem~\\eqref{equ1} as follows:\n\n\\vspace{-2mm}\n\\begin{itemize}\n\\item Case 1: Each $f_{i}(x)$ is $L$-smooth and $\\phi(x)$ is $\\mu$-strongly convex ($\\mu$-SC). 
Examples: ridge regression and elastic net regularized logistic regression.\n\\vspace{-2mm}\n\\item Case 2: Each $f_{i}(x)$ is $L$-smooth and $\\phi(x)$ is non-strongly convex (NSC). Examples: Lasso and $\\ell_{1}$-norm regularized logistic regression.\n\\vspace{-2mm}\n\\item Case 3: Each $f_{i}(x)$ is non-smooth (but Lipschitz continuous) and $\\phi(x)$ is $\\mu$-SC. Examples: linear support vector machine (SVM).\n\\vspace{-2mm}\n\\item Case 4: Each $f_{i}(x)$ is non-smooth (but Lipschitz continuous) and $\\phi(x)$ is NSC. Examples: $\\ell_{1}$-norm regularized SVM.\n\\vspace{-2mm}\n\\end{itemize}\n\nTo solve Problem~\\eqref{equ1} with a large sum of $n$ component functions, computing the full (sub)gradient of $f(x)$ (e.g.\\ $\\nabla\\!f(x)\\!=\\!\\frac{1}{n}\\!\\sum^{n}_{i=1}\\!\\nabla\\!f_{i}(x)$ for the smooth case) in first-order methods is expensive, and hence stochastic (sub)gradient descent (SGD), also known as incremental gradient descent, has been widely used in many large-scale problems~\\cite{sutskever:sgd,zhang:sgd}. SGD approximates the gradient from just one example or a mini-batch, and thus it enjoys a low per-iteration computational complexity. Moreover, SGD is extremely simple and highly scalable, making it particularly suitable for large-scale machine learning, e.g., deep learning~\\cite{sutskever:sgd}. However, the variance of the stochastic gradient estimator may be large in practice~\\cite{johnson:svrg,zhao:sampling}, which leads to slow convergence and poor performance. Even for Case 1, standard SGD can only achieve a sub-linear convergence rate~\\cite{rakhlin:sgd,shamir:sgd}.\n\n\nRecently, the convergence speed of standard SGD has been dramatically improved with various variance reduced methods, such as SAG~\\cite{roux:sag}, SDCA~\\cite{shalev-Shwartz:sdca}, SVRG~\\cite{johnson:svrg}, SAGA~\\cite{defazio:saga}, and their proximal variants, such as~\\cite{schmidt:sag}, \\cite{shalev-Shwartz:acc-sdca}, \\cite{xiao:prox-svrg} and \\cite{koneeny:mini}. Indeed, many of those stochastic methods use past full gradients to progressively reduce the variance of stochastic gradient estimators, which leads to a revolution in the area of first-order methods. Thus, they are also called the semi-stochastic gradient descent method~\\cite{koneeny:mini} or hybrid gradient descent method~\\cite{zhang:svrg}. In particular, these recent methods converge linearly for Case 1, and their overall complexity (total number of component gradient evaluations to find an $\\epsilon$-accurate solution) is $\\mathcal{O}\\!\\left((n\\!+\\!{L}\/{\\mu})\\log({1}\/{\\epsilon})\\right)$, where $L$ is the Lipschitz constant of the gradients of $f_{i}(\\cdot)$, and $\\mu$ is the strong convexity constant of $\\phi(\\cdot)$. The complexity bound shows that those stochastic methods always converge faster than accelerated deterministic methods (e.g.\\ FISTA~\\cite{beck:fista})~\\cite{koneeny:mini}. Moreover, \\cite{zhu:vrnc} and \\cite{reddi:svrnc} proved that SVRG with minor modifications can converge asymptotically to a stationary point in the non-convex case. However, there is still a gap between the overall complexity and the theoretical bound provided in~\\cite{woodworth:bound}. For Case 2, they converge much slower than accelerated deterministic algorithms, i.e., $\\mathcal{O}(1\/T)$ vs.\\ $\\mathcal{O}(1\/T^2)$.\n\nMore recently, some accelerated stochastic methods were proposed. 
Among them, the successful techniques mainly include the Nesterov's acceleration technique~\\cite{lan:rpdg,lin:vrsg,nitanda:svrg}, the choice of growing epoch length~\\cite{mahdavi:sgd,zhu:univr}, and the momentum acceleration trick~\\cite{zhu:Katyusha,hien:asmd}. \\cite{lin:vrsg} presents an accelerating Catalyst framework and achieves a complexity of $\\mathcal{O}((n\\!+\\!\\!\\sqrt{n{L}\/{\\mu}})\\log({L}\/{\\mu})\\log({1}\/{\\epsilon}))$ for Case 1. However, adding a dummy regularizer hurts the performance of the algorithm both in theory and in practice~\\cite{zhu:univr}. The methods~\\cite{zhu:Katyusha,hien:asmd} attain the best-known complexity of $\\mathcal{O}(n\\sqrt{1\/\\epsilon}+\\!\\sqrt{nL\/\\epsilon})$ for Case 2. Unfortunately, they require at least two auxiliary variables and two momentum parameters, which lead to complicated algorithm design and high per-iteration complexity.\n\n\\textbf{Contributions:} To address the aforementioned weaknesses of existing methods, we propose a fast stochastic variance reduced gradient (FSVRG) method, in which we design a novel update rule with the Nesterov's momentum~\\cite{nesterov:fast}. The key update rule has only one auxiliary variable and one momentum weight. Thus, FSVRG is much simpler and more efficient than~\\cite{zhu:Katyusha,hien:asmd}. FSVRG is a direct accelerated method without using any dummy regularizer, and also works for non-smooth and proximal settings. Unlike most variance reduced methods such as SVRG, which only have convergence guarantee for Case 1, FSVRG has convergence guarantees for both Cases 1 and 2. In particular, FSVRG uses a flexible growing epoch size strategy as in~\\cite{mahdavi:sgd} to speed up its convergence. Impressively, FSVRG converges much faster than the state-of-the-art stochastic methods. We summarize our main contributions as follows.\n\n\\vspace{-2mm}\n\\begin{itemize}\n\\item We design a new momentum accelerating update rule, and present two selecting schemes of momentum weights for Cases 1 and 2, respectively.\n\\vspace{-2mm}\n\\item We prove that FSVRG attains linear convergence for Case 1, and achieves the convergence rate of $\\mathcal{O}(1\/T^2)$ and a complexity of $\\mathcal{O}(n\\sqrt{1\/\\epsilon}\\!+\\!\\sqrt{nL\/\\epsilon})$ for Case 2, which is the same as the best known result in~\\cite{zhu:Katyusha}.\n\\vspace{-2mm}\n\\item Finally, we also extend FSVRG to mini-batch settings and non-smooth settings (i.e., Cases 3 and 4), and provide an empirical study on the performance of FSVRG for solving various machine learning problems.\n\\end{itemize}\n\n\n\\section{Preliminaries}\nThroughout this paper, the norm $\\|\\!\\cdot\\!\\|$ is the standard Euclidean norm, and $\\|\\!\\cdot\\!\\|_{1}$ is the $\\ell_{1}$-norm, i.e., $\\|x\\|_{1}\\!=\\!\\sum_{i}\\!|x_{i}|$. We denote by $\\nabla\\!f(x)$ the full gradient of $f(x)$ if it is differentiable, or $\\partial\\!f(x)$ a sub-gradient of $f(x)$ if $f(x)$ is only Lipschitz continuous. We mostly focus on the case of Problem~\\eqref{equ1} when each $f_{i}(x)$ is $L$-smooth\\footnote{In fact, we can extend all our theoretical results below for this case (i.e., when the gradients of all component functions have the same Lipschitz constant $L$) to the more general case, when some $f_{i}(x)$ have different degrees of smoothness.}. 
For non-smooth component functions, we can use the proximal operator oracle~\\cite{zhu:box} or the Nesterov's smoothing~\\cite{nesterov:smooth} and homotopy smoothing~\\cite{xu:hs} techniques to smoothen them, and then obtain the smoothed approximations of all functions $f_{i}(\\cdot)$.\n\nWhen the regularizer $g(\\cdot)$ is non-smooth (e.g., $\\!g(\\cdot)\\!=\\!\\lambda\\|\\cdot\\|_{1}\\!$), the update rule of general SGD is formulated as follows:\n\\vspace{-1mm}\n\\begin{equation}\\label{equ2}\nx_{k}=\\mathop{\\arg\\min}_{y\\in\\mathbb{R}^{d}}\\, g(y)\\!+\\!y^{T}\\nabla\\! f_{i_{k}}\\!(x_{k-\\!1})\\!+\\!({1}\/{2\\eta_{k}})\\!\\cdot\\!\\|y\\!-\\!x_{k-\\!1}\\|^2,\n\\vspace{-2mm}\n\\end{equation}\nwhere $\\eta_{k}\\!\\propto\\!1\/k$ is the step size (or learning rate), and $i_{k}$ is chosen uniformly at random from $\\{1,\\ldots,n\\}$. When $g(x)\\!\\equiv\\!0$, the update rule in~\\eqref{equ2} becomes $x_{k}\\!=\\!x_{k-\\!1}\\!-\\!\\eta_{k}\\nabla\\!f_{i_{k}}\\!(x_{k-\\!1})$. If each $f_{i}(\\cdot)$ is non-smooth (e.g., the hinge loss), we need to replace $\\nabla\\! f_{i_{k}}\\!(x_{k-\\!1})$ in~\\eqref{equ2} with $\\partial\\!f_{i_{k}}\\!(x_{k-\\!1})$.\n\nAs the representative methods of stochastic variance reduced optimization, SVRG~\\cite{johnson:svrg} and its proximal variant, Prox-SVRG~\\cite{xiao:prox-svrg}, are particularly attractive because of their low storage requirement compared with~\\cite{roux:sag,shalev-Shwartz:sdca,defazio:saga,shalev-Shwartz:acc-sdca}, which need to store all the gradients of the $n$ component functions $f_{i}(\\cdot)$ (or dual variables), so that $O(nd)$ storage is required in general problems. At the beginning of each epoch of SVRG, the full gradient $\\nabla\\! f(\\widetilde{x})$ is computed at the snapshot point $\\widetilde{x}$. With a constant step size $\\eta$, the update rules for the special case of Problem~\\eqref{equ1} (i.e., $g(x)\\!\\equiv\\!0$) are given by\n\\vspace{-1mm}\n\\begin{equation}\\label{equ3}\n\\begin{split}\n\\widetilde{\\nabla}\\! f_{i_{k}}\\!(x_{k-1})&=\\nabla\\! f_{i_{k}}\\!(x_{k-1})-\\nabla\\! f_{i_{k}}\\!(\\widetilde{x})+\\nabla\\! f(\\widetilde{x}),\\\\\nx_{k}&=x_{k-1}-\\eta\\widetilde{\\nabla}\\! f_{i_{k}}\\!(x_{k-1}).\n\\end{split}\n\\vspace{-2mm}\n\\end{equation}\n\\cite{zhu:univr} proposed an accelerated SVRG method, SVRG++~, with doubling-epoch techniques. Moreover, Katyusha~\\cite{zhu:Katyusha} is a direct accelerated stochastic variance reduction method, and its main update rules are formulated as follows:\n\\vspace{-1mm}\n\\begin{equation}\\label{equ4}\n\\begin{split}\n&x_{k}=\\theta_{1}y_{k-\\!1}+\\theta_{2}\\widetilde{x}+(1-\\theta_{1}-\\theta_{2})z_{k-\\!1},\\\\\n&y_{k}=\\mathop{\\arg\\min}_{y\\in\\mathbb{R}^{d}}\\, g(y)\\!+\\!y^{T}\\widetilde{\\nabla}\\! f_{i_{k}}\\!(x_{k})\\!+\\!({1}\/{2\\eta})\\!\\cdot\\!\\|y\\!-\\!y_{k-\\!1}\\|^2,\\\\\n\\vspace{-2mm}\n&z_{k}=\\mathop{\\arg\\min}_{z\\in\\mathbb{R}^{d}}\\, g(z)\\!+\\!z^{T}\\widetilde{\\nabla}\\! f_{i_{k}}\\!(x_{k})\\!+\\!({3L}\/{2})\\!\\cdot\\!\\|z\\!-\\!x_{k}\\|^2,\n\\end{split}\n\\vspace{-2mm}\n\\end{equation}\nwhere $\\theta_{1},\\theta_{2}\\!\\in\\![0,1]$ are two parameters, and $\\theta_{2}$ is fixed to $0.5$ in~\\cite{zhu:Katyusha} to eliminate the need for parameter tuning.\n\n\n\\section{Fast SVRG with Momentum Acceleration}\nIn this paper, we propose a fast stochastic variance reduction gradient (FSVRG) method with momentum acceleration for Cases 1 and 2 (e.g., logistic regression) and Cases 3 and 4 (e.g., SVM). 
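Before presenting the algorithm, the variance-reduced gradient estimator of~\\eqref{equ3}, which FSVRG inherits from SVRG, can be sketched as follows; the least-squares component gradient is only an illustrative example and the function names are ours.
\\begin{verbatim}
import numpy as np

def vr_gradient(grad_i, i, x, x_snapshot, mu_snapshot):
    # Variance-reduced estimator: unbiased for the full gradient at x, with a
    # variance that vanishes as x and the snapshot approach the optimum.
    return grad_i(x, i) - grad_i(x_snapshot, i) + mu_snapshot

def make_grad(A, b):
    # Component gradient of f_i(x) = 0.5*(a_i^T x - b_i)^2 (least squares).
    return lambda x, i: A[i] * (A[i] @ x - b[i])

# mu_snapshot is the full gradient at the snapshot point, e.g.
# mu_snapshot = np.mean([grad(x_tilde, i) for i in range(n)], axis=0)
\\end{verbatim}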
The acceleration techniques of the classical Nesterov's momentum and the Katyusha momentum in~\\cite{zhu:Katyusha} are incorporated explicitly into the well-known SVRG method~\\cite{johnson:svrg}. Moreover, FSVRG also uses a growing epoch size strategy as in~\\cite{mahdavi:sgd} to speed up its convergence.\n\n\n\\subsection{Smooth Component Functions}\nIn this part, we consider the case of Problem (\\ref{equ1}) when each $f_{i}(\\cdot)$ is smooth, and $\\phi(\\cdot)$ is SC or NSC (i.e., Case 1 or 2). Similar to existing stochastic variance reduced methods such as SVRG~\\cite{zhu:Katyusha} and Prox-SVRG~\\cite{xiao:prox-svrg}, we design a simple fast stochastic variance reduction algorithm with momentum acceleration for solving smooth objective functions, as outlined in Algorithm~\\ref{alg1}. It is clear that Algorithm~\\ref{alg1} is divided into $S$ epochs (which is the same as most variance reduced methods, e.g., SVRG and Katyusha), and each epoch consists of $m_{s}$ stochastic updates, where $m_{s}$ is set to $m_{s}\\!=\\!\\rho^{s-\\!1}\\!\\cdot m_{1}$ as in~\\cite{mahdavi:sgd}, where $m_{1}$ is a given initial value, and $\\rho\\!>\\!1$ is a constant. Within each epoch, a full gradient $\\nabla\\! f(\\widetilde{x}^{s})$ is calculated at the snapshot point $\\widetilde{x}^{s}$. Note that we choose $\\widetilde{x}^{s}$ to be the average of the past $m_{s}$ stochastic iterates rather than the last iterate because it has been reported to work better in practice~\\cite{xiao:prox-svrg,zhu:univr,zhu:Katyusha}. Although our convergence guarantee for the SC case depends on the initialization of $x^{s}_{0}\\!=\\!y^{s}_{0}\\!=\\!\\widetilde{x}^{s-\\!1}$, the choices of $x^{s+\\!1}_{0}\\!=\\!x^{s}_{m_{s}}$ and $y^{s+\\!1}_{0}\\!=\\!y^{s}_{m_{s}}$ also work well in practice, especially for the case when the regularization parameter is relatively small (e.g., $10^{-7}$), as suggested in~\\cite{shang:vrsgd}.\n\n\\begin{algorithm}[t]\n\\caption{FSVRG for smooth component functions}\n\\label{alg1}\n\\renewcommand{\\algorithmicrequire}{\\textbf{Input:}}\n\\renewcommand{\\algorithmicensure}{\\textbf{Initialize:}}\n\\renewcommand{\\algorithmicoutput}{\\textbf{Output:}}\n\\begin{algorithmic}[1]\n\\REQUIRE the number of epochs $S$ and the step size $\\eta$.\\\\\n\\ENSURE $\\widetilde{x}^{0}$\\!, $m_{1}$, $\\theta_{1}$, and $\\rho>1$.\\\\\n\\FOR{$s=1,2,\\ldots,S$}\n\\STATE {$\\widetilde{\\mu}=\\frac{1}{n}\\!\\sum^{n}_{i=1}\\!\\nabla\\!f_{i}(\\widetilde{x}^{s-\\!1})$, $x^{s}_{0}=y^{s}_{0}=\\widetilde{x}^{s-\\!1}$;}\n\\FOR{$k=1,2,\\ldots,m_{s}$}\n\\STATE {Pick $i^{s}_{k}$ uniformly at random from $\\{1,\\ldots,n\\}$;}\n\\STATE {$\\widetilde{\\nabla} f_{i^{s}_{k}}(x^{s}_{k-\\!1})=\\nabla f_{i^{s}_{k}}(x^{s}_{k-\\!1})-\\nabla f_{i^{s}_{k}}(\\widetilde{x}^{s-\\!1})+\\widetilde{\\mu}$;}\n\\STATE {$y^{s}_{k}=y^{s}_{k-\\!1}-\\eta\\;\\![\\widetilde{\\nabla}f_{i^{s}_{k}}(x^{s}_{k-\\!1})+\\nabla g(x^{s}_{k-\\!1})]$;}\n\\STATE {$x^{s}_{k}=\\widetilde{x}^{s-1}+\\theta_{s}(y^{s}_{k}-\\widetilde{x}^{s-1})$;}\n\\ENDFOR\n\\STATE {$\\widetilde{x}^{s}=\\frac{1}{m_{s}}\\!\\sum^{m_{s}}_{k=1}\\!x^{s}_{k}$, $\\,m_{s+1}=\\lceil\\rho^{s}\\!\\cdot m_{1}\\rceil$;}\n\\ENDFOR\n\\OUTPUT {$\\widetilde{x}^{S}$}\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\\subsubsection{Momentum Acceleration}\nWhen the regularizer $g(\\cdot)$ is smooth, e.g., the $\\ell_{2}$-norm regularizer, the update rule of the auxiliary variable $y$ 
is\n\\begin{equation}\\label{equ5}\ny^{s}_{k}=y^{s}_{k-\\!1}-\\eta[\\widetilde{\\nabla}f_{i^{s}_{k}}(x^{s}_{k-\\!1})+\\nabla g(x^{s}_{k-\\!1})].\n\\end{equation}\nWhen $g(\\cdot)$ is non-smooth, e.g., the $\\ell_{1}$-norm regularizer, the update rule of $y$ is given as follows:\n\\vspace{-1mm}\n\\begin{equation}\\label{equ6}\ny^{s}_{k}=\\textup{prox}_{\\,\\eta,g}\\!\\left(y^{s}_{k-\\!1}-\\eta\\widetilde{\\nabla}\\!f_{i^{s}_{k}}(x^{s}_{k-\\!1})\\right)\\!,\n\\vspace{-1mm}\n\\end{equation}\nand the proximal operator $\\textup{prox}_{\\,\\eta,g}(\\cdot)$ is defined as\n\\vspace{-1mm}\n\\begin{equation}\\label{equ7}\n\\textup{prox}_{\\,\\eta,g}(y):=\\mathop{\\arg\\min}_{x}({1}\/{2\\eta})\\!\\cdot\\!\\|x-y\\|^{2}+g(x).\n\\vspace{-1mm}\n\\end{equation}\nThat is, we only need to replace the update rule (\\ref{equ5}) in Algorithm~\\ref{alg1} with (\\ref{equ7}) for the case of non-smooth regularizers.\n\nInspired by the momentum acceleration trick for accelerating first-order optimization methods~\\cite{nesterov:fast,nitanda:svrg,zhu:Katyusha}, we design the following update rule for $x$:\n\\vspace{-1mm}\n\\begin{equation}\\label{equ8}\nx^{s}_{k}=\\widetilde{x}^{s-1}+\\theta_{s}(y^{s}_{k}-\\widetilde{x}^{s-1}),\n\\end{equation}\nwhere $\\theta_{s}\\!\\in\\![0,1]$ is the weight for the key momentum term. The first term of the right-hand side of (\\ref{equ8}) is the snapshot point of the last epoch (also called as the Katyusha momentum in~\\cite{zhu:Katyusha}), and the second term plays a key role as the Nesterov's momentum in deterministic optimization.\n\nWhen $\\theta_{s}\\!\\equiv\\!1$ and $\\rho\\!=\\!2$, Algorithm~\\ref{alg1} degenerates to the accelerated SVRG method, SVRG++~\\cite{zhu:univr}. In other words, SVRG++ can be viewed as a special case of our FSVRG method. As shown above, FSVRG only has one additional variable $y$, while existing accelerated stochastic variance reduction methods, e.g., Katyusha~\\cite{zhu:Katyusha}, require two additional variables $y$ and $z$, as shown in (\\ref{equ4}). In addition, FSVRG only has one momentum weight $\\theta_{s}$, compared with the two weights $\\theta_{1}$ and $\\theta_{2}$ in Katyusha~\\cite{zhu:Katyusha}. Therefore, FSVRG is much simpler than existing accelerated methods~\\cite{zhu:Katyusha,hien:asmd}.\n\n\n\\subsubsection{Momentum Weight}\nFor the case of SC objectives, we give a selecting scheme for the momentum weight $\\theta_{s}$. As shown in Theorem~\\ref{theo1} below, it is desirable to have a small convergence factor $\\alpha$, implying fast convergence. The following proposition obtains the optimal $\\theta_{\\star}$, which can yield the smallest $\\alpha$ value.\n\n\\begin{proposition}\nGiven the appropriate learning rate $\\eta$, the optimal weight $\\theta_{\\star}$ is given by\n\\vspace{-2mm}\n\\begin{equation}\\label{equ10}\n\\theta_{\\star}=\\mu\\eta m_{s}\/2.\n\\end{equation}\n\\end{proposition}\n\n\\begin{proof}\nUsing Theorem 1 below, we have\n\\begin{equation*}\n\\alpha(\\theta)=1-\\theta+{\\theta^{2}}\/({\\mu\\eta m_{s}}).\n\\end{equation*}\nTo minimize $\\alpha(\\theta)$ with given $\\eta$, we have $\\theta_{\\star}\\!=\\!\\mu\\eta m_{s}\/2$.\n\\end{proof}\n\\vspace{-2mm}\n\nIn fact, we can fix $\\theta_{s}$ to a constant for the case of SC objectives, e.g., $\\theta_{s}\\!\\equiv\\!0.9$ as in accelerated SGD~\\cite{ruder:sgd}, which works well in practice. 
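For concreteness, one inner iteration of Algorithm~\\ref{alg1} in the proximal setting can be sketched as follows, combining the proximal update~\\eqref{equ6}--\\eqref{equ7} (instantiated here for an $\\ell_{1}$ regularizer, whose proximal operator is soft-thresholding) with the momentum step~\\eqref{equ8}; the variance-reduced gradient is assumed to be computed as in the previous sketch, and the function names are illustrative.
\\begin{verbatim}
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1 evaluated at v (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fsvrg_inner_step(y_prev, x_tilde, vr_grad, eta, theta, lam):
    # Proximal step on the auxiliary variable y, followed by the momentum
    # interpolation between the snapshot x_tilde and y.
    y = soft_threshold(y_prev - eta * vr_grad, eta * lam)
    x = x_tilde + theta * (y - x_tilde)
    return y, x
\\end{verbatim}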
Indeed, larger values of $\\theta_{s}$ can result in better performance for the case when the regularization parameter is relatively large (e.g., $10^{-4}$).\n\nUnlike the SC case, we initialize $y^{s+\\!1}_{0}\\!=\\!y^{s}_{m_{s}}$ in each epoch for the case of NSC objectives. And the update rule of $\\theta_{s}$ is defined as follows: $\\theta_{1}\\!=\\!1\\!-\\!{L\\eta}\/({1\\!-\\!L\\eta})$, and for any $s\\!>\\!1$,\n\\vspace{-1mm}\n\\begin{equation}\\label{equ11}\n\\theta_{s}=(\\sqrt{\\theta^{4}_{s-\\!1}+4\\theta^{2}_{s-\\!1}}-\\theta^{2}_{s-\\!1})\/{2}.\n\\end{equation}\nThe above rule is the same as that in some accelerated optimization methods~\\cite{nesterov:co,su:nag,liu:sadmm}.\n\n\n\\subsubsection{Complexity Analysis}\nThe per-iteration cost of FSVRG is dominated by the computation of $\\nabla\\! f_{i^{s}_{k}}\\!(x^{s}_{k-\\!1})$, $\\nabla\\! f_{i^{s}_{k}}\\!(\\widetilde{x}^{s})$, and $\\nabla\\!g(x^{s}_{k-\\!1})$ or the proximal update~\\eqref{equ6}, which is as low as that of SVRG~\\cite{johnson:svrg} and SVRG++~\\cite{zhu:univr}. For some ERM problems, we can save the intermediate gradients $\\nabla\\! f_{i}(\\widetilde{x}^{s})$ in the computation of $\\widetilde{\\mu}$, which requires $O(n)$ additional storage in general. In addition, FSVRG has a much lower per-iteration complexity than other accelerated methods such as Katyusha~\\cite{zhu:Katyusha}, which have at least one more variable, as analyzed above.\n\n\n\\subsection{Non-Smooth Component Functions}\nIn this part, we consider the case of Problem (\\ref{equ1}) when each $f_{i}(\\cdot)$ is non-smooth (e.g., hinge loss and other loss functions listed in~\\cite{yang:ssgd}), and $\\phi(\\cdot)$ is SC or NSC (i.e.\\ Case 3 or 4). As stated in Section 2, the two classes of problems can be transformed into the smooth ones as in~\\cite{nesterov:smooth,zhu:box,xu:hs}, which can be efficiently solved by Algorithm~\\ref{alg1}. However, the smoothing techniques may degrade the performance of the involved algorithms, similar to the case of the reduction from NSC problems to SC problems~\\cite{zhu:box}. Thus, we extend Algorithm~\\ref{alg1} to the non-smooth setting, and propose a fast stochastic variance reduced sub-gradient algorithm (i.e., Algorithm~\\ref{alg2}) to solve such problems directly, as well as the case of Algorithm~\\ref{alg1} to directly solve the NSC problems in Case 2.\n\nFor each outer iteration $s$ and inner iteration $k$, we denote by $\\widetilde{\\partial}f_{i^{s}_{k}}\\!(x^{s}_{k-\\!1})$ the stochastic sub-gradient $\\partial f_{i^{s}_{k}}\\!(x^{s}_{k-\\!1})\\!-\\!\\partial f_{i^{s}_{k}}\\!(\\widetilde{x}^{s-\\!1})\\!+\\!\\widetilde{\\xi}$, where $\\widetilde{\\xi}\\!=\\!\\!\\frac{1}{n}\\!\\!\\sum^{n}_{i=1}\\!\\!\\partial f_{i}(\\widetilde{x}^{s-\\!1})$, and $\\partial f_{i}(\\widetilde{x}^{s-\\!1})$ denotes a sub-gradient of $f_{i}(\\cdot)$ at $\\widetilde{x}^{s-\\!1}$. When the regularizer $g(\\cdot)$ is smooth, the update rule of $y$ is given by\n\\vspace{-1mm}\n\\begin{equation}\\label{equ12}\ny^{s}_{k}=\\Pi_{\\mathcal{K}}\\!\\!\\left[y^{s}_{k-\\!1}-\\eta\\left(\\widetilde{\\partial}f_{i^{s}_{k}}\\!(x^{s}_{k-\\!1})+\\nabla g(x^{s}_{k-\\!1})\\right)\\right]\\!,\n\\end{equation}\nwhere $\\Pi_{\\mathcal{K}}$ denotes the orthogonal projection on the convex domain $\\mathcal{K}$. 
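Analogously to the proximal sketch above, one inner iteration of Algorithm~\\ref{alg2} replaces the proximal step with the projected variance-reduced sub-gradient step~\\eqref{equ12}; a minimal sketch is given below, where the Euclidean ball is only an illustrative instance of the convex domain $\\mathcal{K}$.
\\begin{verbatim}
import numpy as np

def project_l2_ball(z, radius):
    # Illustrative projection Pi_K onto K = {z : ||z||_2 <= radius}.
    norm = np.linalg.norm(z)
    return z if norm <= radius else (radius / norm) * z

def subgrad_inner_step(y_prev, x_tilde, vr_subgrad, grad_g, eta, theta,
                       radius):
    # Projected variance-reduced sub-gradient step on y, followed by the
    # same momentum interpolation as in Algorithm 1.
    y = project_l2_ball(y_prev - eta * (vr_subgrad + grad_g), radius)
    x = x_tilde + theta * (y - x_tilde)
    return y, x
\\end{verbatim}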
Following the acceleration techniques for stochastic sub-gradient methods~\\cite{rakhlin:sgd,julien:ssg,shamir:sgd}, a general weighted averaging scheme is formulated as follows:\n\\vspace{-2mm}\n\\begin{equation}\\label{equ13}\n\\widetilde{x}^{s}=\\frac{1}{\\sum_{k}\\! w_{k}}\\!\\sum^{m_{s}}_{k=1}w_{k}x^{s}_{k},\n\\vspace{-1mm}\n\\end{equation}\nwhere $w_{k}$ is the given weight, e.g., $w_{k}\\!=\\!1\/m_{s}$.\n\n\\begin{algorithm}[t]\n\\caption{FSVRG for non-smooth component functions}\n\\label{alg2}\n\\renewcommand{\\algorithmicrequire}{\\textbf{Input:}}\n\\renewcommand{\\algorithmicensure}{\\textbf{Initialize:}}\n\\renewcommand{\\algorithmicoutput}{\\textbf{Output:}}\n\\begin{algorithmic}[1]\n\\REQUIRE the number of epochs $S$ and the step size $\\eta$.\\\\\n\\ENSURE $\\widetilde{x}^{0}$\\!, $m_{1}$, $\\theta_{1}$, $\\rho>1$, and $w_{1},\\ldots,w_{m}$.\\\\\n\\FOR{$s=1,2,\\ldots,S$}\n\\STATE {$\\widetilde{\\xi}=\\frac{1}{n}\\!\\sum^{n}_{i=1}\\!\\partial f_{i}(\\widetilde{x}^{s-\\!1})$, $x^{s}_{0}=y^{s}_{0}=\\widetilde{x}^{s-\\!1}$;}\n\\FOR{$k=1,2,\\ldots,m_{s}$}\n\\STATE {Pick $i^{s}_{k}$ uniformly at random from $\\{1,\\ldots,n\\}$;}\n\\STATE {$\\widetilde{\\partial}f_{i^{s}_{k}}(x^{s}_{k-\\!1})=\\partial f_{i^{s}_{k}}(x^{s}_{k-\\!1})-\\partial f_{i^{s}_{k}}(\\widetilde{x}^{s-\\!1})+\\widetilde{\\xi}$;}\n\\STATE {$y^{s}_{k}=\\Pi_{\\mathcal{K}}\\!\\!\\left[y^{s}_{k-\\!1}-\\eta\\;\\!(\\widetilde{\\partial}f_{i^{s}_{k}}\\!(x^{s}_{k-\\!1})+\\nabla g(x^{s}_{k-\\!1}))\\right]$;}\n\\STATE {$x^{s}_{k}=\\widetilde{x}^{s-1}+\\theta_{s}(y^{s}_{k}-\\widetilde{x}^{s-1})$;}\n\\ENDFOR\n\\STATE {$\\widetilde{x}^{s}\\!=\\!\\frac{1}{\\sum_{k}\\! w_{k}}\\!\\sum^{m_{s}}_{k=1}\\!w_{k} x^{s}_{k}$, $\\,m_{s+1}=\\lceil\\rho^{s}\\!\\cdot m_{1}\\rceil$;}\n\\ENDFOR\n\\OUTPUT {$\\widetilde{x}^{S}$}\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\\section{Convergence Analysis}\nIn this section, we provide the convergence analysis of FSVRG for solving the two classes of problems in Cases 1 and 2. Before giving a key intermediate result, we first introduce the following two definitions.\n\n\\begin{definition}[Smoothness]\\label{assum1}\nA function $h(\\cdot):\\mathbb{R}^{d}\\!\\rightarrow\\!\\mathbb{R}$ is $L$-smooth if its gradient is $L$-Lipschitz, that is, $\\|\\nabla h(x)-\\nabla h(y)\\|\\leq L\\|x-y\\|$ for all $x,y\\in\\mathbb{R}^{d}$.\n\\end{definition}\n\n\\begin{definition}[Strong Convexity]\\label{assum2}\nA function $\\phi(\\cdot):\\mathbb{R}^{d}\\!\\rightarrow\\!\\mathbb{R}$ is $\\mu$-strongly convex ($\\mu$-SC), if there exists a constant $\\mu\\!>\\!0$ such that for any $x,y\\!\\in\\!\\mathbb{R}^{d}$,\n\\begin{equation}\\label{equ30}\n\\phi(y)\\geq \\phi(x)+\\nabla\\phi(x)^{T}(y-x)+\\frac{\\mu}{2}\\|y-x\\|^{2}.\n\\vspace{-2mm}\n\\end{equation}\n\\end{definition}\nIf $\\phi(\\cdot)$ is non-smooth, we can revise the inequality~\\eqref{equ30} by simply replacing $\\nabla\\phi(x)$ with an arbitrary sub-gradient $\\partial\\phi(x)$.\n\n\\begin{lemma}\\label{lemm1}\nSuppose each component function $f_{i}(\\cdot)$ is $L$-smooth. Let $x_{\\star}$ be the optimal solution of Problem~\\eqref{equ1}, and $\\{\\widetilde{x}^{s},y^{s}_{k}\\}$ be the sequence generated by Algorithm~\\ref{alg1}. 
Then the following inequality holds for all $s\\!=\\!1,\\ldots,S$:\n\\begin{equation}\\label{equ31}\n\\begin{split}\n&\\!\\!\\!\\mathbb{E}\\!\\left[\\phi(\\widetilde{x}^{s})\\!-\\!\\phi(x_{\\star})\\right]\\leq(1\\!-\\!\\theta_{s})\\mathbb{E}\\!\\left[\\phi(\\widetilde{x}^{s-\\!1})-\\phi(x_{\\star})\\right]\\\\\n&\\quad\\qquad\\quad\\quad\\quad\\;\\;\\;\\;+\\!\\frac{\\theta^{2}_{s}}{2\\eta m_{s}}\\mathbb{E}\\!\\left[\\|y^{s}_{0}\\!-\\!x_{\\star}\\|^2\\!-\\!\\|y^{s}_{m_{s}}\\!\\!-\\!x_{\\star}\\|^2\\right]\\!.\n\\end{split}\n\\end{equation}\n\\end{lemma}\n\nThe detailed proof of Lemma~\\ref{lemm1} is provided in APPENDIX. To prove Lemma 1, we first give the following lemmas, which are useful for the convergence analysis of FSVRG.\n\n\\begin{lemma}[Variance bound, \\cite{zhu:Katyusha}]\n\\label{lemm2}\nSuppose each function $f_{i}(\\cdot)$ is $L$-smooth. Then the following inequality holds:\n\\begin{equation*}\n\\begin{split}\n&\\mathbb{E}\\!\\left[\\left\\|\\widetilde{\\nabla}\\! f_{i^{s}_{k}}(x^{s}_{k-1})-\\nabla\\! f(x^{s}_{k-1})\\right\\|^{2}\\right]\\\\\n\\leq&\\,2L\\!\\left[f(\\widetilde{x}^{s-\\!1})-f(x^{s}_{k-\\!1})+[\\nabla\\!f(x^{s}_{k-\\!1})]^{T}(x^{s}_{k-\\!1}-\\widetilde{x}^{s-\\!1})\\right].\n\\end{split}\n\\end{equation*}\n\\end{lemma}\n\nLemma~\\ref{lemm2} is essentially identical to Lemma 3.4 in~\\cite{zhu:Katyusha}. This lemma provides a tighter upper bound on the expected variance of the variance-reduced gradient estimator $\\widetilde{\\nabla}\\! f_{i^{s}_{k}}\\!(x^{s}_{k-\\!1})$ than that of~\\cite{xiao:prox-svrg,zhu:univr}, e.g., Corollary 3.5 in~\\cite{xiao:prox-svrg}.\n\n\\begin{lemma} [3-point property, \\cite{lan:sgd}]\n\\label{lemm3}\nAssume that $z^{*}$ is an optimal solution of the following problem,\n\\begin{displaymath}\n\\min_{z}({\\tau}\/{2})\\!\\cdot\\!\\|z-z_{0}\\|^{2}+\\psi(z),\n\\end{displaymath}\nwhere $\\psi(z)$ is a convex function (but possibly non-differentiable). 
Then for any $z\\!\\in\\!\\mathbb{R}^{d}$, we have\n\\begin{displaymath}\n\\psi(z)+\\frac{\\tau}{2}\\|z-z_{0}\\|^{2}\\geq \\psi(z^{*})+\\frac{\\tau}{2}\\|z^{*}-z_{0}\\|^{2}+\\frac{\\tau}{2}\\|z-z^{*}\\|^{2}.\n\\end{displaymath}\n\\end{lemma}\n\n\n\\subsection{Convergence Properties for Case 1}\nFor SC objectives with smooth component functions (i.e., Case 1), we analyze the convergence property of FSVRG.\n\n\\begin{theorem}[Strongly Convex]\\label{theo1}\nSuppose each $f_{i}(\\cdot)$ is $L$-smooth, $\\phi(\\cdot)$ is $\\mu$-SC, $\\theta_{s}$ is a constant $\\theta$ for Case 1, and $m_{s}$ is sufficiently large\\footnote{If $m_{1}$ is not sufficiently large, the first epoch can be viewed as an initialization step.} so that\n\\vspace{-2mm}\n\\begin{equation*}\n\\alpha_{s}:=\\,1-\\theta+{\\theta^{2}}\/({\\mu\\eta m_{s}})< 1.\n\\vspace{-1mm}\n\\end{equation*}\nThen Algorithm 1 has the convergence in expectation:\n\\vspace{-2mm}\n\\begin{equation}\\label{equ32}\n\\mathbb{E}\\!\\left[\\phi(\\widetilde{x}^{S})-\\phi(x_{\\star})\\right]\\leq\\,\\left(\\prod^{S}_{s=1}\\alpha_{s}\\right)[\\phi(\\widetilde{x}^{0})-\\phi(x_{\\star})].\n\\end{equation}\n\\end{theorem}\n\n\n\\begin{proof}\nSince $\\phi(x)$ is $\\mu$-SC, then there exists a constant $\\mu\\!>\\!0$ such that for all $x\\!\\in\\!\\mathbb{R}^{d}$\n\\vspace{-1mm}\n\\begin{equation*}\n\\phi(x)\\geq \\phi(x_{\\star})+[\\nabla\\!\\phi(x_{\\star})]^{T}(x-x_{\\star})+\\frac{\\mu}{2}\\|x-x_{\\star}\\|^{2}.\n\\vspace{-1mm}\n\\end{equation*}\n\nSince $x_{\\star}$ is the optimal solution, we have\n\\vspace{-2mm}\n\\begin{equation}\\label{equ33}\n\\nabla\\phi(x_{\\star})=0\\,\\footnote{When $\\phi(\\cdot)$ is non-smooth, there must exist a sub-graident $\\partial\\phi(x_{\\star})$ of $\\phi(\\cdot)$ at $x_{\\star}$ such that $\\partial\\phi(x_{\\star})=0$.},\\;\\;\\;\\phi(x)-\\phi(x_{\\star})\\geq \\frac{\\mu}{2}\\|x-x_{\\star}\\|^{2}.\n\\vspace{-1mm}\n\\end{equation}\nUsing the inequality in \\eqref{equ33} and $y^{s}_{0}\\!=\\!\\widetilde{x}^{s-\\!1}$, we have\n\\vspace{-1mm}\n\\begin{equation*}\n\\begin{split}\n&\\mathbb{E}\\!\\left[\\phi(\\widetilde{x}^{s})-\\phi(x_{\\star})\\right]\\\\\n\\leq&\\,(1\\!-\\!\\theta)\\mathbb{E}[\\phi(\\widetilde{x}^{s-\\!1})\\!-\\!\\phi(x_{\\star})]\\!+\\!\\frac{\\theta^{2}\\!\/\\eta}{2 m_{s}\\!\\!}\\mathbb{E}\\!\\left[\\|y^{s}_{0}\\!\\!-\\!x_{\\star}\\|^2\\!-\\!\\|y^{s}_{m_{s}}\\!\\!-\\!x_{\\star}\\|^2\\right]\\\\\n\\leq&\\,(1\\!-\\!\\theta)\\mathbb{E}[\\phi(\\widetilde{x}^{s-\\!1})\\!-\\!\\phi(x_{\\star})]\\!+\\!\\frac{\\theta^{2}\\!\/\\eta}{\\mu m_{s}}\\mathbb{E}\\!\\left[\\phi(\\widetilde{x}^{s-\\!1})\\!-\\!\\phi(x_{\\star})\\right]\\\\\n=&\\,\\left(1\\!-\\!\\theta+\\frac{\\theta^{2}\\!\/\\eta}{\\mu m_{s}}\\right)\\mathbb{E}\\!\\left[\\phi(\\widetilde{x}^{s-\\!1})-\\phi(x_{\\star})\\right]\\!,\n\\end{split}\n\\end{equation*}\nwhere the first inequality holds due to Lemma 1, and the second inequality follows from the inequality in \\eqref{equ33}.\n\\end{proof}\n\nFrom Theorem~\\ref{theo1}, it is clear that $\\alpha_{s}$ decreases as $s$ increases, i.e., $1\\!>\\!\\alpha_{1}\\!>\\!\\alpha_{2}\\!>\\!\\ldots\\!>\\!\\alpha_{S}$. Therefore, there exists a positive constant $\\gamma\\!<\\!1$ such that $\\alpha_{s}\\!\\leq\\!\\alpha_{1}\\gamma^{s-\\!1}$ for all $s\\!=\\!1,\\ldots,S$. 
Then the inequality in \\eqref{equ32} can be rewritten as $\\mathbb{E}\\!\\left[\\phi(\\widetilde{x}^{S})\\!-\\!\\phi(x_{\\star})\\right]\\!\\leq\\!(\\alpha_{1}\\sqrt{\\gamma^{S-\\!1}})^{S}[\\phi(\\widetilde{x}^{0})\\!-\\!\\phi(x_{\\star})]$, which implies that FSVRG attains linear (geometric) convergence.\n\n\n\\subsection{Convergence Properties for Case 2}\n\\label{sect42}\nFor NSC objectives with smooth component functions (i.e., Case 2), the following theorem gives the convergence rate and overall complexity of FSVRG.\n\n\\begin{theorem}[Non-Strongly Convex]\\label{theo2}\nSuppose each $f_{i}(\\cdot)$ is $L$-smooth. Then the following inequality holds:\n\\vspace{-1mm}\n\\begin{equation*}\n\\begin{split}\n&\\mathbb{E}\\!\\left[\\phi(\\widetilde{x}^{S})-\\phi(x_{\\star})\\right]\\\\\n\\leq&\\,\\frac{4(1\\!-\\!\\theta_{1})}{\\theta^{2}_{1}(S\\!+\\!2)^{2}}\\![\\phi(\\widetilde{x}^{0})\\!-\\!\\phi(x_{\\star})]\\!+\\!\\frac{2\/\\eta}{m_{1}(S\\!+\\!2)^{2}}\\|x_{\\star}\\!-\\!\\widetilde{x}^{0}\\|^2.\n\\end{split}\n\\end{equation*}\nIn particular, choosing $m_{1}\\!=\\!\\Theta(n)$, Algorithm~\\ref{alg1} achieves an $\\varepsilon$-accurate solution, i.e., $\\mathbb{E}[\\phi(\\widetilde{x}^{S})]\\!-\\!\\phi(x_{\\star})\\leq \\varepsilon$ using at most $\\mathcal{O}(\\frac{n\\sqrt{\\phi(\\widetilde{x}^{0})\\!-\\!\\phi(x_{\\star})}}{\\sqrt{\\varepsilon}}\\!+\\!\\frac{\\sqrt{nL}\\|\\widetilde{x}^{0}\\!-\\!x_{\\star}\\|}{\\sqrt{\\varepsilon}})$ iterations.\n\\end{theorem}\n\n\\begin{proof}\nUsing the update rule of $\\theta_{s}$ in (\\ref{equ11}), it is easy to verify that\n\\begin{equation}\\label{equ34}\n({1-\\theta_{s+1}})\/{\\theta^{2}_{s+1}}\\leq{1}\/{\\theta^{2}_{s}},\\;\\;\\theta_{s}\\leq2\/(s+2).\n\\end{equation}\nDividing both sides of the inequality in (\\ref{equ31}) by $\\theta^{2}_{s}$, we have\n\\vspace{-3mm}\n\n\\begin{equation*}\n\\begin{split}\n&\\mathbb{E}[\\phi(\\widetilde{x}^{s})-\\phi(x_{\\star})]\/\\theta^{2}_{s}\\\\\n\\leq&\\frac{1\\!-\\!\\theta_{s}\\!}{\\theta^{2}_{s}}\\mathbb{E}[\\phi(\\widetilde{x}^{s-\\!1})\\!-\\!\\phi(x_{\\star})]\\!+\\!\\frac{1\/\\eta}{2m_{s}\\!}\\mathbb{E}[\\|x_{\\star}\\!\\!-\\!y^{s}_{0}\\|^2\\!-\\!\\|x_{\\star}\\!\\!-\\!y^{s}_{m_{s}}\\!\\|^2],\n\\end{split}\n\\end{equation*}\nfor all $s\\!=\\!1,\\ldots,S$. 
By $y^{s+\\!1}_{0}\\!=\\!y^{s}_{m_{s}}$ and the inequality in (\\ref{equ34}), and summing the above inequality over $s\\!=\\!1,\\ldots,S$, we have\n\\vspace{-1mm}\n\\begin{equation*}\n\\begin{split}\n&\\mathbb{E}\\!\\left[\\phi(\\widetilde{x}^{S})-\\phi(x_{\\star})\\right]\/\\theta^{2}_{S}\\\\\n\\leq&\\frac{1\\!-\\!\\theta_{1}}{\\theta^{2}_{1}}[\\phi(\\widetilde{x}^{0})\\!-\\!\\phi(x_{\\star}\\!)]\\!+\\!\\frac{1\/\\eta}{2m_{1}}\\mathbb{E}\\!\\!\\left[\\|x_{\\star}\\!-\\!y^{1}_{0}\\|^2\\!-\\!\\|x_{\\star}\\!-\\!y^{S}_{m_{S}}\\!\\|^2\\right]\\!.\n\\end{split}\n\\end{equation*}\n\\vspace{-2mm}\n\nThen\n\\vspace{-1mm}\n\\begin{equation*}\n\\begin{split}\n&\\mathbb{E}\\!\\left[\\phi(\\widetilde{x}^{S})-\\phi(x_{\\star}\\!)\\right]\\\\\n\\leq&\\frac{4(1\\!-\\!\\theta_{1})}{\\theta^{2}_{1}(S\\!\\!+\\!\\!2)^{2}}\\![\\phi(\\widetilde{x}^{0})\\!-\\!\\phi(x_{\\star}\\!)]\\!+\\!\\!\\frac{2\/\\eta}{m_{1}\\!(S\\!\\!+\\!\\!2)^{2}}\\!\\mathbb{E}\\!\\!\\left[\\!\\|x_{\\star}\\!\\!-\\!y^{1}_{0}\\|^2\\!\\!-\\!\\|x_{\\star}\\!\\!-\\!y^{S}_{m_{\\!S}}\\!\\|^2\\!\\right]\\\\\n\\leq&\\frac{4(1\\!-\\!\\theta_{1})}{\\theta^{2}_{1}(S\\!+\\!2)^{2}}\\![\\phi(\\widetilde{x}^{0})\\!-\\!\\phi(x_{\\star}\\!)]\\!+\\!\\frac{2\/\\eta}{m_{1}(S\\!+\\!2)^{2}}\\!\\left[\\|x_{\\star}\\!-\\!\\widetilde{x}^{0}\\|^2\\right].\n\\end{split}\n\\end{equation*}\nThis completes the proof.\n\\end{proof}\n\n\\begin{figure*}[t]\n\\centering\n\\includegraphics[width=0.496\\columnwidth]{Fig11-eps-converted-to.pdf}\n\\includegraphics[width=0.496\\columnwidth]{Fig13-eps-converted-to.pdf}\n\\includegraphics[width=0.496\\columnwidth]{Fig15-eps-converted-to.pdf}\n\\includegraphics[width=0.496\\columnwidth]{Fig17-eps-converted-to.pdf}\n\\vspace{-1.6mm}\n\n\\subfigure[IJCNN: $\\lambda\\!=\\!10^{-4}$]{\\includegraphics[width=0.496\\columnwidth]{Fig12-eps-converted-to.pdf}}\n\\subfigure[Protein: $\\lambda\\!=\\!10^{-4}$]{\\includegraphics[width=0.496\\columnwidth]{Fig14-eps-converted-to.pdf}}\n\\subfigure[Covtype: $\\lambda\\!=\\!10^{-5}$]{\\includegraphics[width=0.496\\columnwidth]{Fig16-eps-converted-to.pdf}}\n\\subfigure[SUSY: $\\lambda\\!=\\!10^{-6}$]{\\includegraphics[width=0.496\\columnwidth]{Fig18-eps-converted-to.pdf}}\n\\vspace{-3mm}\n\n\\caption{Comparison of SVRG~\\cite{johnson:svrg}, SVRG++~\\cite{zhu:vrnc}, Katyusha~\\cite{zhu:Katyusha}, and our FSVRG method for $\\ell_{2}$-norm (i.e., $\\lambda\\|x\\|^{2}$) regularized logistic regression problems. The $y$-axis represents the objective value minus the minimum, and the $x$-axis corresponds to the number of effective passes (top) or running time (bottom).}\n\\label{figs1}\n\\end{figure*}\n\nFrom Theorem 2, we can see that FSVRG achieves the optimal convergence rate of $\\mathcal{O}(1\/T^2)$ and the complexity of $\\mathcal{O}(n\\sqrt{1\/\\epsilon}\\!+\\!\\sqrt{nL\/\\epsilon})$ for NSC problems, which is consistent with the best known result in~\\cite{zhu:Katyusha,hien:asmd}. By adding a proximal term into the problem of Case 2 as in~\\cite{lin:vrsg,zhu:box}, one can achieve faster convergence. 
However, this hurts the performance of the algorithm both in theory and in practice~\\cite{zhu:univr}.\n\n\\begin{figure*}[!th]\n\\centering\n\\subfigure[IJCNN: $\\lambda_{1}\\!=\\!\\lambda_{2}\\!=\\!10^{-4}$]{\\includegraphics[width=0.496\\columnwidth]{Fig42-eps-converted-to.pdf}}\n\\subfigure[Protein: $\\lambda_{1}\\!=\\!\\lambda_{2}\\!=\\!10^{-4}$]{\\includegraphics[width=0.496\\columnwidth]{Fig44-eps-converted-to.pdf}}\n\\subfigure[Covtype: $\\lambda_{1}\\!=\\!\\lambda_{2}\\!=\\!10^{-5}$]{\\includegraphics[width=0.496\\columnwidth]{Fig46-eps-converted-to.pdf}}\n\\subfigure[SUSY: $\\lambda_{1}\\!=\\!\\lambda_{2}\\!=\\!10^{-6}$]{\\includegraphics[width=0.496\\columnwidth]{Fig48-eps-converted-to.pdf}}\n\\vspace{-3mm}\n\n\\caption{Comparison of Prox-SVRG~\\cite{xiao:prox-svrg}, SVRG++~\\cite{zhu:vrnc}, Katyusha~\\cite{zhu:Katyusha}, and our FSVRG method for elastic net (i.e., $\\lambda_{1}\\|x\\|^{2}\\!+\\!\\lambda_{2}\\|x\\|_{1}$) regularized logistic regression problems.}\n\\label{figs2}\n\\end{figure*}\n\n\n\\subsection{Convergence Properties for Mini-Batch Settings}\nIt has been shown in~\\cite{shwartz:svm,nitanda:svrg,koneeny:mini} that mini-batching can effectively decrease the variance of stochastic gradient estimates. So, we extend FSVRG and its convergence results to the mini-batch setting. Here, we denote by $b$ the mini-batch size and $I^{s}_{k}$ the selected random index set $I_{k}\\!\\subset\\![n]$ for each outer-iteration $s\\!\\in\\![S]$ and inner-iteration $k\\!\\in\\!\\{1,\\ldots,m_{s}\\}$. Then the stochastic gradient estimator $\\widetilde{\\nabla}\\!f_{i^{s}_{k}}\\!(x^{s}_{k-\\!1})$ becomes\n\\begin{equation}\\label{equ36}\n\\widetilde{\\nabla}\\! f_{I^{s}_{k}}(x^{s}_{k-\\!1})\\!=\\!\\frac{1}{b}\\!\\sum_{i\\in I^{s}_{k}}\\!\\!\\left[\\nabla\\!f_{i}(x^{s}_{k-\\!1})\\!-\\!\\nabla\\! f_{i}(\\widetilde{x}^{s-\\!1})\\right]\\!+\\!{\\nabla}\\! f(\\widetilde{x}^{s-\\!1}).\n\\vspace{-2mm}\n\\end{equation}\nAnd the momentum weight $\\theta_{s}$ is required to satisfy $\\theta_{s}\\!\\leq\\! 1\\!-\\!\\frac{\\rho(b)L\\eta}{1-L\\eta}$ for SC and NSC cases, where $\\rho(b)\\!=\\!(n\\!-\\!b)\/[(n\\!-\\!1)b]$. The upper bound on the variance of $\\widetilde{\\nabla}\\!f_{i^{s}_{k}}(x^{s}_{k-\\!1})$ in Lemma~\\ref{lemm2} is extended to the mini-batch setting as follows~\\cite{liu:sadmm}.\n\n\\begin{corollary}[Variance bound of Mini-Batch]\n\\label{cor11}\n\\begin{displaymath}\n\\begin{split}\n&\\mathbb{E}\\!\\left[\\left\\|\\widetilde{\\nabla}\\! f_{I^{s}_{k}}(x^{s}_{k-1})-\\nabla\\! f(x^{s}_{k-1})\\right\\|^{2}\\right]\\\\\n\\leq&2L\\rho(b)\\!\\left[\\nabla\\! f(x^{s}_{k-\\!1})^{T}\\!(x^{s}_{k-\\!1}\\!-\\!\\widetilde{x}^{s-\\!1})\\!-\\!f(x^{s}_{k-\\!1})\\!+\\!f(\\widetilde{x}^{s-\\!1})\\right]\\!.\n\\end{split}\n\\end{displaymath}\n\\end{corollary}\n\\vspace{-2mm}\n\nIt is easy to verify that $0\\!\\leq\\!\\rho(b)\\!\\leq\\!1$, which implies that mini-batching is able to reduce the variance upper bound in Lemma~\\ref{lemm2}. Based on the variance upper bound in Corollary~\\ref{cor11}, we further analyze the convergence properties of our algorithms for the mini-batch setting. Obviously, the number of stochastic iterations in each epoch is reduced from $m_{s}$ to $\\lfloor m_{s}\/b\\rfloor$. For the case of SC objective functions, the mini-batch variant of FSVRG has almost identical convergence properties to those in Theorem~\\ref{theo1}. 
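A minimal sketch of the mini-batch estimator (36), the factor $\\rho(b)$, and the momentum weight recursion (11) is given below; the component-gradient oracle and the sampled index set are assumed to be supplied by the caller.
\\begin{verbatim}
import numpy as np

def rho(n, b):
    # rho(b) = (n - b) / ((n - 1) * b); note rho(1) = 1 and rho(n) = 0
    return (n - b) / ((n - 1) * b)

def minibatch_vr_grad(x, x_snap, full_grad_snap, grad_f, batch):
    # mini-batch variance-reduced gradient estimator, cf. (36)
    diff = sum(grad_f(i, x) - grad_f(i, x_snap) for i in batch) / len(batch)
    return diff + full_grad_snap

def next_theta(theta_prev):
    # momentum weight recursion for the NSC case, cf. (11)
    return (np.sqrt(theta_prev**4 + 4 * theta_prev**2) - theta_prev**2) / 2.0
\\end{verbatim}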
In contrast, we need to initialize $\\theta_{1}\\!=\\!1\\!-\\!\\frac{\\rho(b){L}\\eta}{1-{L}\\eta}$ and update $\\theta_{s}$ by the procedure in (10) for the case of NSC objective functions. Theorem~\\ref{theo2} is also extended to the mini-batch setting as follows.\n\n\\begin{corollary}\\label{cor12}\nSuppose $f_{i}(\\cdot)$ is $L$-smooth, and let $\\theta_{1}\\!=\\!1\\!-\\!\\frac{\\rho(b){L}\\eta}{1-{L}\\eta}$ and $\\beta\\!=\\!{1}\/{L\\eta}$, then the following inequality holds:\n\\vspace{-1mm}\n\\begin{equation}\\label{equ37}\n\\begin{split}\n&\\mathbb{E}\\!\\left[\\phi(\\widetilde{x}^{S})\\!-\\phi(x_{\\star})\\right]\\leq\\frac{2\/\\eta}{m_{1}(S\\!+\\!2)^{2}}\\!\\left[\\|x_{\\star}\\!-\\!\\widetilde{x}^{0}\\|^2\\right]\\\\\n&\\quad\\;\\;+\\!\\frac{4(\\beta-1)\\rho(b)}{(\\beta\\!-\\!1\\!-\\!\\rho(b))^{2}(S\\!+\\!2)^{2}}[\\phi(\\widetilde{x}^{0})\\!-\\!\\phi(x_{\\star}\\!)].\n\\end{split}\n\\end{equation}\n\\end{corollary}\n\n\\begin{proof}\nSince\n\\begin{displaymath}\n\\theta_{1}=1-({\\rho(b){L}\\eta})\/({1-{L}\\eta})=1-{\\rho(b)}\/({\\beta-1}),\n\\end{displaymath}\nthen we have\n\\begin{equation*}\n\\begin{split}\n&\\mathbb{E}\\!\\left[\\phi(\\widetilde{x}^{S})-\\phi(x_{\\star})\\right]\\\\\n\\leq&\\frac{4(\\beta-1)\\rho(b)}{(\\beta\\!-\\!1\\!-\\!\\rho(b))^{2}(S\\!\\!+\\!\\!2)^{2}}\\![\\phi(\\widetilde{x}^{0})\\!-\\!\\phi(x_{\\star}\\!)]\\!+\\!\\!\\frac{2\/\\eta}{ m_{1}\\!(S\\!\\!+\\!\\!2)^{2}}\\!\\left[\\|x_{\\star}\\!\\!-\\!\\widetilde{x}^{0}\\!\\|^2\\right]\\!\\!.\n\\end{split}\n\\end{equation*}\nThis completes the proof.\n\\end{proof}\n\n\\begin{remark}\nWhen $b\\!=\\!1$, we have $\\rho(1)\\!=\\!1$, and then Corollary~\\ref{cor12} degenerates to Theorem~\\ref{theo2}. If $b\\!=\\!n$ (i.e., the batch setting), we have $\\rho(n)\\!=\\!0$, and the second term on the right-hand side of \\eqref{equ37} diminishes. In other words, FSVRG degenerates to the accelerated deterministic method with the optimal convergence rate of $\\mathcal{O}(1\/T^{2})$.\n\\end{remark}\n\n\n\\section{Experimental Results}\n\\label{sec5}\nIn this section, we evaluate the performance of our FSVRG method for solving various machine learning problems, such as logistic regression, ridge regression, Lasso and SVM. All the codes of FSVRG and related methods can be downloaded from the first author's website.\n\n\n\\subsection{Experimental Setup}\nFor fair comparison, FSVRG and related stochastic variance reduced methods, including SVRG~\\cite{johnson:svrg}, Prox-SVRG~\\cite{xiao:prox-svrg}, SVRG++~\\cite{zhu:univr} and Katyusha~\\cite{zhu:Katyusha}, were implemented in C++, and the experiments were performed on a PC with an Intel i5-2400 CPU and 16GB RAM. As suggested in~\\cite{johnson:svrg,xiao:prox-svrg,zhu:Katyusha}, the epoch size is set to $m\\!=\\!2n$ for SVRG, Prox-SVRG, and Katyusha. FSVRG and SVRG++ have the similar strategy of growing epoch size, e.g., $m_{1}\\!\\!=\\!n\/2$ and $\\rho\\!=\\!1.6$ for FSVRG, and $m_{1}\\!\\!=\\!n\/4$ and $\\rho\\!=\\!2$ for SVRG++. Then for all these methods, there is only one parameter to tune, i.e., the learning rate. Note that we compare their performance in terms of both the number of effective passes (evaluating $n$ component gradients or computing a single full gradient is considered as one effective pass) and running time (seconds). 
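For reference, the growing epoch-size schedule described above can be generated as in the following sketch; the default constants correspond to the FSVRG settings quoted here ($m_{1}\\!=\\!n/2$ and $\\rho\\!=\\!1.6$) and are otherwise arbitrary.
\\begin{verbatim}
import math

def epoch_sizes(n, num_epochs, m1_frac=0.5, rho=1.6):
    # m_{s+1} = ceil(rho^s * m_1), i.e., geometrically growing epoch lengths
    m1 = max(1, int(m1_frac * n))
    return [math.ceil(rho ** s * m1) for s in range(num_epochs)]
\\end{verbatim}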
Moreover, we do not compare with other stochastic algorithms such as SAGA~\\cite{defazio:saga} and Catalyst~\\cite{lin:vrsg}, as they have been shown to be comparable or inferior to Katyusha~\\cite{zhu:Katyusha}.\n\n\n\\subsection{Logistic Regression}\nIn this part, we conducted experiments for both the $\\ell_{2}$-norm and elastic net regularized logistic regression problems on the four popular data sets: IJCNN, Covtype, SUSY, and Protein, all of which were obtained from the LIBSVM Data website{\\footnote{\\url{https:\/\/www.csie.ntu.edu.tw\/~cjlin\/libsvm\/}}} and the KDD Cup 2004 website{\\footnote{\\url{http:\/\/osmot.cs.cornell.edu\/kddcup}}}. Each example of these date sets was normalized so that they have unit length as in~\\cite{xiao:prox-svrg}, which leads to the same upper bound on the Lipschitz constants $L_{i}$ of functions $f_{i}(\\cdot)$.\n\nFigures~\\ref{figs1} and~\\ref{figs2} show the performance of different methods for solving the two classes of logistic regression problems, respectively. It can be seen that SVRG++ and FSVRG consistently converge much faster than the other methods in terms of both running time (seconds) and number of effective passes. The accelerated stochastic variance reduction method, Katyusha, has much better performance than the standard SVRG method in terms of number of effective passes, while it sometimes performs worse in terms of running time. FSVRG achieves consistent speedups for all the data sets, and outperforms the other methods in all the settings. The main reason is that FSVRG not only takes advantage of the momentum acceleration trick, but also can use much larger step sizes, e.g., 1\/(3$L$) for FSVRG vs.\\ 1\/(7$L$) for SVRG++ vs.\\ 1\/(10$L$) for SVRG. This also confirms that FSVRG has much lower per-iteration cost than Katyusha.\n\n\n\\begin{figure*}[!th]\n\\centering\n\\includegraphics[width=0.496\\columnwidth]{Fig51-eps-converted-to.pdf}\n\\includegraphics[width=0.496\\columnwidth]{Fig53-eps-converted-to.pdf}\n\\includegraphics[width=0.496\\columnwidth]{Fig55-eps-converted-to.pdf}\n\\includegraphics[width=0.496\\columnwidth]{Fig57-eps-converted-to.pdf}\n\\vspace{-1.6mm}\n\n\\subfigure[IJCNN: $\\lambda\\!=\\!10^{-4}$]{\\includegraphics[width=0.496\\columnwidth]{Fig52-eps-converted-to.pdf}}\n\\subfigure[Protein: $\\lambda\\!=\\!10^{-4}$]{\\includegraphics[width=0.496\\columnwidth]{Fig54-eps-converted-to.pdf}}\n\\subfigure[Covtype: $\\lambda\\!=\\!10^{-5}$]{\\includegraphics[width=0.496\\columnwidth]{Fig56-eps-converted-to.pdf}}\n\\subfigure[SUSY: $\\lambda\\!=\\!10^{-6}$]{\\includegraphics[width=0.496\\columnwidth]{Fig58-eps-converted-to.pdf}}\n\\vspace{-3mm}\n\n\\caption{Comparison of Prox-SVRG~\\cite{xiao:prox-svrg}, SVRG++~\\cite{zhu:vrnc}, Katyusha~\\cite{zhu:Katyusha}, and FSVRG for Lasso problems.}\n\\label{figs3}\n\\end{figure*}\n\n\n\n\\subsection{Lasso}\nIn this part, we conducted experiments for the Lasso problem with the regularizer $\\lambda\\|x\\|_{1}$ on the four data sets. We report the experimental results of different methods in Figure~\\ref{figs3}, where the regularization parameter is varied from $\\lambda\\!=\\!10^{-4}$ to $\\lambda\\!=\\!10^{-6}$. From all the results, we can observe that FSVRG converges much faster than the other methods, and also outperforms Katyusha in terms of number of effective passes, which matches the optimal convergence rate for the NSC problem. SVRG++ achieves comparable and sometimes even better performance than SVRG and Katyusha. 
This further verifies that the efficiency of the growing epoch size strategy in SVRG++ and FSVRG.\n\n\n\\subsection{Ridge Regression}\nIn this part, we implemented all the algorithms mentioned above in C++ for high-dimensional sparse data, and compared their performance for solving ridge regression problems on the two very sparse data sets, Rcv1 and Sido0 (whose sparsity is 99.84\\% and 90.16\\%), as shown in Figure~\\ref{figs4}. The two data sets can be downloaded from the LIBSVM Data website and the Causality Workbench website{\\footnote{\\url{http:\/\/www.causality.inf.ethz.ch\/home.php}}}. From the results, one can observe that although Katyusha outperforms SVRG in terms of number of effective passes, both of them usually have similar convergence speed. SVRG++ has relatively inferior performance (maybe due to large value for $\\rho$, i.e., $\\rho\\!=\\!2$) than the other methods. It can be seen that the objective value of FSVRG is much lower than those of the other methods, suggesting faster convergence.\n\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.486\\columnwidth]{Fig61-eps-converted-to.pdf}\n\\includegraphics[width=0.486\\columnwidth]{Fig63-eps-converted-to.pdf}\n\\vspace{-1.6mm}\n\n\\subfigure[Rcv1]{\\includegraphics[width=0.486\\columnwidth]{Fig62-eps-converted-to.pdf}}\n\\subfigure[Sido0]{\\includegraphics[width=0.486\\columnwidth]{Fig64-eps-converted-to.pdf}}\n\\vspace{-3mm}\n\n\\caption{Comparison of SVRG~\\cite{johnson:svrg}, SVRG++~\\cite{zhu:vrnc}, Katyusha~\\cite{zhu:Katyusha}, and FSVRG for ridge regression problems with regularization parameter $\\lambda\\!=\\!10^{-4}$.}\n\\label{figs4}\n\\end{figure}\n\n\nFigure~\\ref{figs5} compares the performance of our FSVRG method with different mini-batch sizes on the two data sets, IJCNN and Protein. It can be seen that by increasing the mini-batch size to $b\\!=\\!2,4$, FSVRG has comparable or even better performance than the case when $b\\!=\\!1$.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.486\\columnwidth]{Fig71-eps-converted-to.pdf}\n\\includegraphics[width=0.486\\columnwidth]{Fig72-eps-converted-to.pdf}\n\\vspace{-3mm}\n\n\\caption{Results of FSVRG with different mini-batch sizes on IJCNN (left) and Protein (right).}\n\\label{figs5}\n\\end{figure}\n\n\n\\subsection{SVM}\nFinally, we evaluated the empirical performance of FSVRG for solving the SVM optimization problem\n\\vspace{-2mm}\n\\begin{equation*}\n\\min_{x}\\frac{1}{n}\\sum^{n}_{i=1}\\max\\{0,1-b_{i}\\langle a_{i},x\\rangle\\}+\\frac{\\lambda}{2}\\|x\\|^{2},\n\\vspace{-2mm}\n\\end{equation*}\nwhere $(a_{i},b_{i})$ is the feature-label pair. For the binary classification data set, IJCNN, we randomly divided it into 10\\% training set and 90\\% test set. We used the standard one-vs-rest scheme for the multi-class data set, the MNIST data set{\\footnote{\\url{http:\/\/yann.lecun.com\/exdb\/mnist\/}}}, which has a training set of 60,000 examples and a test set of 10,000 examples. The regularization parameter is set to $\\lambda\\!=\\!10^{-5}$. Figure~\\ref{figs6} shows the performance of the stochastic sub-gradient descent method (SSGD)~\\cite{shamir:sgd}, SVRG and FSVRG for solving the SVM problem. Note that we also extend SVRG to non-smooth settings, and use the same scheme in (12). We can see that the variance reduced methods, SVRG and FSVRG, yield significantly better performance than SSGD. FSVRG consistently outperforms SSGD and SVRG in terms of convergence speed and testing accuracy. 
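For concreteness, a sub-gradient oracle for this SVM problem can be written as below (an illustrative sketch only; the $\\ell_{2}$ regularizer is treated as the smooth part $g(\\cdot)$ and handled through $\\nabla g$ in the scheme (12)).
\\begin{verbatim}
import numpy as np

def hinge_subgrad(x, a_i, b_i):
    # a sub-gradient of the hinge component f_i(x) = max{0, 1 - b_i <a_i, x>}
    margin = 1.0 - b_i * np.dot(a_i, x)
    return -b_i * a_i if margin > 0 else np.zeros_like(x)

def grad_g(x, lam):
    # gradient of the smooth regularizer g(x) = (lam/2) * ||x||^2
    return lam * x
\\end{verbatim}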
Intuitively, the momentum acceleration trick in (8) can lead to faster convergence. We leave the theoretical analysis of FSVRG for this case as our future research.\n\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.486\\columnwidth]{Fig81-eps-converted-to.pdf}\n\\includegraphics[width=0.486\\columnwidth]{Fig82-eps-converted-to.pdf}\n\\includegraphics[width=0.486\\columnwidth]{Fig83-eps-converted-to.pdf}\n\\includegraphics[width=0.486\\columnwidth]{Fig84-eps-converted-to.pdf}\n\\vspace{-3mm}\n\n\\caption{Comparison of different methods for SVM problems on IJCNN (top) and MNIST (bottom).}\n\\label{figs6}\n\\end{figure}\n\n\n\\section{Discussion and Conclusion}\nRecently, there is a surge of interests in accelerating stochastic variance reduction gradient optimization. SVRG++~\\cite{zhu:vrnc} uses the doubling-epoch technique (i.e., $m_{s+\\!1}\\!\\!=\\!2m_{s}$), which can reduce the number of gradient calculations in the early iterations, and lead to faster convergence in general. In contrast, our FSVRG method uses a more general growing epoch size strategy as in~\\cite{mahdavi:sgd}, i.e., $m_{s+\\!1}\\!=\\!\\rho m_{s}$ with the factor $\\rho\\!>\\!1$, which implies that we can be much more flexible in choosing $\\rho$. Unlike SVRG++, FSVRG also enjoys the momentum acceleration trick and has the convergence guarantee for SC problems in Case 1.\n\nThe momentum acceleration technique has been used in accelerated SGD~\\cite{sutskever:sgd} and variance reduced methods~\\cite{lan:rpdg,lin:vrsg,nitanda:svrg,zhu:Katyusha}. Different from other methods~\\cite{lan:rpdg,lin:vrsg,nitanda:svrg}, the momentum term of FSVRG involves the snapshot point, i.e., $\\widetilde{x}^{s}$, which is also called as the Katyusha momentum in~\\cite{zhu:Katyusha}. It was shown in~\\cite{zhu:Katyusha} that Katyusha has the best known overall complexities for both SC and NSC problems. As analyzed above, FSVRG is much simpler and more efficient than Katyusha, which has also been verified in our experiments. Therefore, FSVRG is suitable for large-scale machine learning and data mining problems.\n\nIn this paper, we proposed a simple and fast stochastic variance reduction gradient (FSVRG) method, which integrates the momentum acceleration trick, and also uses a more flexible growing epoch size strategy. Moreover, we provided the convergence guarantees of our method, which show that FSVRG attains linear convergence for SC problems, and achieves the optimal $\\mathcal{O}(1\/T^2)$ convergence rate for NSC problems. The empirical study showed that the performance of FSVRG is much better than that of the state-of-the-art stochastic methods. Besides the ERM problems in Section~\\ref{sec5}, we can apply FSVRG to other machine learning problems, e.g., deep learning and eigenvector computation~\\cite{garber:svd}.\n\n\n\\section*{APPENDIX: Proof of Lemma 1}\n\n\\vspace{3mm}\n\\begin{proof}\nThe smoothness inequality in Definition 1 has the following equivalent form,\n\\begin{displaymath}\nf(x_{2})\\!\\leq\\! f(x_{1})\\!+\\!\\langle\\nabla\\! f(x_{1}),x_{2}\\!-\\!x_{1}\\rangle\\!+\\!0.5L\\|x_{2}\\!-\\!x_{1}\\|^{2}\\!,\\forall x_{1},x_{2}\\!\\in\\!\\mathbb{R}^{d}.\n\\end{displaymath}\nLet $\\beta_{1}\\!\\!>\\!2$ be an appropriate constant, $\\beta_{2}\\!\\!=\\!\\beta_{1}\\!\\!-\\!1\\!>\\!1$, and $\\widetilde{\\nabla}_{\\!i^{s}_{k}}\\!:=\\!\\widetilde{\\nabla}\\!f_{i^{s}_{k}}\\!(x^{s}_{k-\\!1})$. 
Using the above inequality, we have\n\\begin{equation}\\label{equ71}\n\\begin{split}\n\\!\\!\\!f(x^{s}_{k})\\!\\leq& f(x^{s}_{k\\!-\\!1})\\!+\\!\\langle\\nabla\\!f(x^{s}_{k\\!-\\!1}),x^{s}_{k}\\!\\!-\\!x^{s}_{k\\!-\\!1}\\rangle\\!\\!+\\!0.5L\\|x^{s}_{k}\\!\\!-\\!x^{s}_{k\\!-\\!1}\\!\\|^{2}\\!\\\\\n=& f(x^{s}_{k-\\!1})\\!+\\!\\left\\langle\\nabla\\! f(x^{s}_{k-\\!1}),x^{s}_{k}\\!-\\!x^{s}_{k-\\!1}\\right\\rangle\\\\\n&+\\!0.5\\beta_{1} L\\!\\left\\|x^{s}_{k}\\!-\\!x^{s}_{k-\\!1}\\right\\|^{2}\\!-\\!0.5\\beta_{2}L\\!\\left\\|x^{s}_{k}\\!-\\!x^{s}_{k-\\!1}\\right\\|^{2}\\\\\n=& f(x^{s}_{k-\\!1})\\!+\\!\\langle \\widetilde{\\nabla}_{\\!i^{s}_{k}},x^{s}_{k}\\!-\\!x^{s}_{k-\\!1}\\rangle\\!+\\!0.5\\beta_{1}L\\|x^{s}_{k}\\!-\\!x^{s}_{k-\\!1}\\|^2\\\\\n&\\!+\\!\\!\\langle\\nabla\\! f(x^{s}_{k\\!-\\!1})\\!-\\!\\!\\widetilde{\\nabla}_{\\!i^{s}_{k}}\\!,x^{s}_{k}\\!\\!-\\!x^{s}_{k\\!-\\!1}\\rangle\\!-\\!0.5\\beta_{2}L\\|x^{s}_{k}\\!\\!-\\!x^{s}_{k\\!-\\!1}\\!\\|^{2}\\!\\!.\n\\end{split}\n\\end{equation}\n\\vspace{-5mm}\n\n\\begin{equation}\\label{equ72}\n\\begin{split}\n&\\mathbb{E}\\!\\left[\\langle\\nabla\\! f(x^{s}_{k-\\!1})\\!-\\!\\widetilde{\\nabla}_{i^{s}_{k}},\\,x^{s}_{k}\\!-\\!x^{s}_{k-\\!1}\\rangle\\right]\\\\\n\\leq\\,& \\mathbb{E}\\!\\left[\\frac{1}{2\\beta_{2}{L}}\\|\\nabla\\!f(x^{s}_{k-\\!1})\\!-\\!\\widetilde{\\nabla}_{i^{s}_{k}}\\|^{2}+\\frac{\\beta_{2}{L}}{2}\\|x^{s}_{k}\\!-\\!x^{s}_{k-\\!1}\\|^{2}\\right]\\\\\n\\leq\\,& \\frac{1}{\\beta_{2}}\\!\\left[f(\\widetilde{x}^{s-\\!1})\\!-\\!f(x^{s}_{k-\\!1})\\!-\\!\\left\\langle\\nabla\\! f(x^{s}_{k-\\!1}),\\,\\widetilde{x}^{s-\\!1}\\!-\\!x^{s}_{k-\\!1}\\right\\rangle\\right]\\\\\n&+\\!{0.5\\beta_{2}L}\\;\\!\\mathbb{E}\\!\\left[\\|x^{s}_{k}\\!-\\!x^{s}_{k-\\!1}\\|^{2}\\right],\n\\end{split}\n\\end{equation}\nwhere the first inequality follows from the Young's inequality (i.e.\\ $x^{T}y\\!\\leq\\!{\\|x\\|^2}\\!\/{(2\\beta)}\\!+\\!{\\beta\\|y\\|^2}\\!\/{2}$ for all $\\beta\\!>\\!0$), and the second inequality holds due to Lemma~\\ref{lemm2}. Note that for the mini-batch setting, Lemma~\\ref{lemm2} needs to be replaced with Corollary~\\ref{cor11}, and all statements in the proof of this lemma can be revised by simply replacing $1\/\\beta_{2}$ with $\\rho(b)\/\\beta_{2}$.\n\nSubstituting the inequality \\eqref{equ72} into the inequality \\eqref{equ71}, and taking expectation over $i^{s}_{k}$, we have\n\\vspace{-1mm}\n\\begin{equation*}\\label{equ73}\n\\begin{split}\n&\\;\\;\\;\\;\\,\\mathbb{E}\\left[\\phi(x^{s}_{k})\\right]-f(x^{s}_{k-\\!1})\\\\\n&\\leq\\!\\mathbb{E}\\!\\left[g(x^{s}_{k})\\!+\\!\\langle\\widetilde{\\nabla}_{\\!i^{s}_{k}}, x^{s}_{k}\\!\\!-\\!x^{s}_{k-\\!1}\\rangle\\!+\\!0.5\\beta_{1}\\!{L}\\|x^{s}_{k}\\!\\!-\\!x^{s}_{k-\\!1}\\|^2\\right]\\\\\n&\\quad+\\!\\beta^{-\\!1}_{2}\\!\\left[f(\\widetilde{x}^{s-\\!1})\\!-\\!f(x^{s}_{k-\\!1})\\!+\\!\\left\\langle\\nabla\\! f(x^{s}_{k-\\!1}),\\,x^{s}_{k-\\!1}\\!-\\!\\widetilde{x}^{s-\\!1}\\right\\rangle\\right]\\\\\n&\\leq\\!\\mathbb{E}\\!\\left[\\langle \\theta_{\\!s}\\!\\widetilde{\\nabla}_{\\!i^{s}_{k}}, y^{s}_{k}\\!-\\!y^{s}_{k-\\!1}\\rangle\\!+\\!0.5\\beta_{1}\\!{L}\\theta_{s}^{2}\\|y^{s}_{k}\\!-\\!y^{s}_{k-\\!1}\\|^2\\!+\\!\\theta_{s} g(y^{s}_{k})\\right]\\\\\n&\\quad\\!\\!\\!\\!\\!+\\!(1\\!\\!-\\!\\theta_{\\!s}\\!)g(\\widetilde{x}^{s\\!-\\!1}\\!)\\!+\\!\\beta^{-\\!1}_{2}\\!\\!\\left[f(\\widetilde{x}^{s\\!-\\!1}\\!)\\!-\\!\\!f(x^{s}_{k\\!-\\!1}\\!)\\!+\\!\\!\\langle\\nabla\\! 
f(x^{s}_{k\\!-\\!1}\\!),x^{s}_{k\\!-\\!1}\\!\\!-\\!\\widetilde{x}^{s\\!-\\!1}\\rangle\\right]\\\\\n&\\leq\\!\\mathbb{E}\\!\\!\\left[\\langle \\theta_{\\!s}\\!\\widetilde{\\nabla}_{\\!i^{s}_{k}}\\!,x_{\\star}\\!\\!-\\!y^{s}_{k\\!-\\!1}\\rangle\\!+\\!\\frac{\\!\\beta_{1}\\!{L} \\theta_{\\!s}^{2}\\!\\!}{2}(\\|y^{s}_{k\\!-\\!1}\\!\\!-\\!x_{\\star}\\!\\|^2\\!\\!-\\!\\|y^{s}_{k}\\!\\!-\\!x_{\\star}\\!\\|^2)\\!+\\!\\theta_{\\!s} g(x_{\\star})\\!\\right]\\\\\n&\\quad\\!\\!\\!\\!+\\!(1\\!\\!-\\!\\theta_{\\!s}\\!)g(\\widetilde{x}^{s\\!-\\!1}\\!)\\!+\\!\\beta^{-\\!1}_{2}\\!\\!\\left[f(\\widetilde{x}^{s\\!-\\!1}\\!)\\!-\\!\\!f(x^{s}_{k\\!-\\!1}\\!)\\!+\\!\\langle\\nabla\\! f(x^{s}_{k\\!-\\!1}\\!),x^{s}_{k\\!-\\!1}\\!\\!-\\!\\widetilde{x}^{s\\!-\\!1}\\rangle\\right]\\\\\n&=\\!\\mathbb{E}\\!\\!\\left[\\frac{\\beta_{1}\\!{L} \\theta_{\\!s}^{2}}{2}\\!\\!\\left(\\|y^{s}_{k\\!-\\!1}\\!-\\!x_{\\star}\\!\\|^2\\!\\!-\\!\\|y^{s}_{k}\\!-\\!x_{\\star}\\!\\|^2\\right)\\!+\\!\\theta_{\\!s} g(x_{\\star})\\right]\\!\\!+\\!(1\\!\\!-\\!\\theta_{\\!s}\\!)g(\\widetilde{x}^{s\\!-\\!1}\\!)\\\\\n&\\quad\\!\\!\\!\\!\\!\\!+\\!\\left\\langle\\!\\nabla\\! f(x^{s}_{k\\!-\\!1}\\!),\\theta_{\\!s} x_{\\star}\\!\\!+\\!(1\\!\\!-\\!\\theta_{\\!s}\\!)\\widetilde{x}^{s\\!-\\!1}\\!\\!\\!-\\!x^{s}_{k\\!-\\!1}\\!\\!+\\!\\!\\beta^{-\\!1}_{2}\\!(x^{s}_{k\\!-\\!1}\\!\\!-\\!\\widetilde{x}^{s\\!-\\!1}\\!)\\!\\right\\rangle\\!+\\!\\!\\beta^{-\\!1}_{2}\\!f(\\widetilde{x}^{s\\!-\\!1}\\!)\\\\\n&\\quad\\!\\!\\!\\!\\!\\!+\\!\\mathbb{E}\\!\\!\\left[\\left\\langle\\!-\\!\\nabla\\!f_{i^{s}_{k}}\\!(\\widetilde{x}^{s\\!-\\!1}\\!)\\!+\\!\\!\\nabla\\! f(\\widetilde{x}^{s\\!-\\!1}\\!),\\theta_{\\!s} x_{\\star}\\!\\!+\\!(1\\!-\\!\\theta_{\\!s}\\!)\\widetilde{x}^{s\\!-\\!1}\\!\\!-\\!x^{s}_{k\\!-\\!1}\\!\\right\\rangle\\!\\right]\\!\\!-\\!\\!\\beta^{-\\!1}_{2}\\!f(x^{s}_{\\!k\\!-\\!1}\\!)\\\\\n&=\\!\\mathbb{E}\\!\\!\\left[\\!\\frac{\\beta_{1}\\!{L} \\theta_{\\!s}^{2}}{2}\\!\\!\\left(\\|y^{s}_{k-\\!1}\\!\\!-\\!x_{\\star}\\!\\|^2\\!\\!-\\!\\|y^{s}_{k}\\!\\!-\\!x_{\\star}\\!\\|^2\\right)\\!+\\!\\theta_{\\!s} g(x_{\\star})\\!\\right]\\!\\!+\\!(1\\!-\\!\\theta_{\\!s}\\!)g(\\widetilde{x}^{s-\\!1})\\\\\n&\\quad\\!+\\!\\!\\left\\langle\\nabla\\! f(x^{s}_{k-\\!1}),\\theta_{s} x_{\\star}\\!+\\!(1\\!-\\!\\theta_{s})\\widetilde{x}^{s-\\!1}\\!-\\!x^{s}_{k-\\!1}\\!+\\!\\beta^{-\\!1}_{2}(x^{s}_{k-\\!1}\\!-\\!\\widetilde{x}^{s-\\!1})\\right\\rangle\\\\\n&\\quad\\!+\\!\\beta^{-\\!1}_{2}\\!\\left[f(\\widetilde{x}^{s-\\!1})\\!-\\!f(x^{s}_{k-\\!1})\\right],\n\\end{split}\n\\vspace{-2mm}\n\\end{equation*}\nwhere the second inequality follows from the facts that $x^{s}_{k}\\!-\\!x^{s}_{k-\\!1}\\!=\\!\\theta_{\\!s}(y^{s}_{k}\\!-\\!y^{s}_{k-\\!1})$ and\n$g(\\theta_{\\!s}y^{s}_{k}\\!+\\!(1\\!\\!-\\!\\theta_{\\!s})\\widetilde{x}^{s-\\!1})\\!\\leq\\! \\theta_{\\!s} g(y^{s}_{k})\\!+\\!(1\\!\\!-\\!\\theta_{\\!s})g(\\widetilde{x}^{s-\\!1})$; the third inequality holds due to Lemma~\\ref{lemm3} with $z^{*}\\!=\\!y^{s}_{k}$, $z\\!=\\!x_{\\star}$, $z_{0}\\!=\\!y^{s}_{k-\\!1}$, $\\rho\\!=\\!\\beta_{1}L\\theta_{\\!s}$, and $\\psi(z)\\!:=\\!g(z)+\\!\\langle \\widetilde{\\nabla}_{\\!i^{s}_{k}},z\\!-\\!y^{s}_{k-\\!1}\\rangle$ (or $\\psi(z)\\!:=\\!\\langle \\widetilde{\\nabla}_{\\!i^{s}_{k}}\\!+\\!\\nabla\\! g(x^{s}_{k-\\!1}),z\\!-\\!y^{s}_{k-\\!1}\\rangle$). 
The first equality holds due to the facts that\n\\vspace{-2mm}\n\\begin{displaymath}\n\\begin{split}\n&\\langle \\theta_{s}\\!\\widetilde{\\nabla}_{\\!i^{s}_{k}},\\, x_{\\star}\\!\\!-\\!y^{s}_{k-\\!1}\\rangle\\!=\\!\\langle \\widetilde{\\nabla}_{\\!i^{s}_{k}},\\, \\theta_{s} x_{\\star}\\!\\!+\\!(1\\!-\\!\\theta_{s})\\widetilde{x}^{s-\\!1}\\!\\!-\\!x^{s}_{k-\\!1}\\rangle\\\\\n=&\\left\\langle\\nabla\\! f_{i^{s}_{k}}\\!(x^{s}_{k-\\!1}),\\, \\theta_{s} x_{\\star}\\!\\!+\\!(1\\!-\\!\\theta_{s})\\widetilde{x}^{s-\\!1}\\!\\!-\\!x^{s}_{k-\\!1}\\right\\rangle\\\\\n&+\\!\\left\\langle-\\nabla\\! f_{i^{s}_{k}}\\!(\\widetilde{x}^{s-\\!1})\\!+\\!\\nabla\\! f(\\widetilde{x}^{s-\\!1}),\\, \\theta_{s} x_{\\star}\\!\\!+\\!(1\\!-\\!\\theta_{s})\\widetilde{x}^{s-\\!1}\\!\\!-\\!x^{s}_{k-\\!1}\\right\\rangle,\n\\end{split}\n\\end{displaymath}\nand $\\mathbb{E}[\\nabla\\! f_{i^{s}_{k}}\\!(x^{s}_{k-\\!1})]\\!=\\!\\nabla\\! f(x^{s}_{k-\\!1})$. The last equality follows from\n$\\mathbb{E}\\!\\left[\\langle-\\nabla\\! f_{i^{s}_{k}}\\!(\\widetilde{x}^{s-\\!1})\\!+\\!\\nabla\\! f(\\widetilde{x}^{s-\\!1}),\\,\\theta_{s} x_{\\star}\\!+\\!(1\\!-\\!\\theta_{s})\\widetilde{x}^{s-\\!1}\\!\\!-\\!x^{s}_{k-\\!1}\\rangle\\right]\\!=\\!0.$\n\\vspace{-4mm}\n\n\\begin{equation*}\\label{equ75}\n\\begin{split}\n&\\langle\\nabla\\! f(x^{s}_{k\\!-\\!1}),\\theta_{s} x_{\\star}\\!\\!+\\!(1\\!-\\!\\theta_{s})\\widetilde{x}^{s\\!-\\!1}\\!\\!-\\!x^{s}_{k\\!-\\!1}\\!\\!+\\!\\beta^{-\\!1}_{2}\\!(x^{s}_{k\\!-\\!1}\\!\\!-\\!\\widetilde{x}^{s\\!-\\!1})\\rangle\\\\\n=&\\langle\\nabla\\! f(x^{s}_{k\\!-\\!1}),\\theta_{s} x_{\\star}\\!+\\!(1\\!-\\!\\theta_{s}\\!-\\!\\beta^{-\\!1}_{2})\\widetilde{x}^{s\\!-\\!1}\\!\\!+\\!\\beta^{-\\!1}_{2}\\!x^{s}_{k\\!-\\!1}\\!\\!-\\!x^{s}_{k\\!-\\!1}\\rangle\\\\\n\\leq &f\\!\\left(\\theta_{s} x_{\\star}\\!+\\!(1\\!-\\!\\theta_{s}\\!-\\!\\beta^{-\\!1}_{2})\\widetilde{x}^{s-\\!1}\\!+\\!\\beta^{-\\!1}_{2}\\!x^{s}_{k-\\!1}\\right)\\!-\\!f(x^{s}_{k-\\!1})\\\\\n\\leq &\\theta_{s} f(x_{\\star})\\!+\\!(1\\!-\\!\\theta_{s}\\!-\\!\\beta^{-\\!1}_{2})f(\\widetilde{x}^{s-\\!1})\\!+\\!\\beta^{-\\!1}_{2}\\!f(x^{s}_{k-\\!1})\\!-\\!f(x^{s}_{k-\\!1}),\n\\end{split}\n\\end{equation*}\nwhere the first inequality holds due to the fact that $\\langle \\nabla\\! f(x_{1}),$ $x_{2}-x_{1}\\rangle\\!\\leq\\! f(x_{2})\\!-\\!f(x_{1})$, and the last inequality follows from the convexity of $f(\\cdot)$ and $1\\!-\\!\\theta_{s}\\!-\\!\\beta^{-\\!1}_{2}\\!\\geq\\!0$, which can be easily satisfied. 
Combining the above two inequalities, we have\n\\vspace{-1mm}\n\\begin{equation*}\\label{equ76}\n\\begin{split}\n&\\mathbb{E}\\!\\left[\\phi(x^{s}_{k})\\right]\\\\\n\\leq&\\theta_{s}\\phi(x_{\\star})\\!+\\!(1\\!\\!-\\!\\theta_{s})\\phi(\\widetilde{x}^{s-\\!1})\\!+\\!\\frac{\\!\\beta_{1}\\!{L} \\theta_{\\!s}^{2}\\!}{2}\\mathbb{E}\\!\\!\\left[\\|y^{s}_{k-\\!1}\\!\\!-\\!x_{\\star}\\|^2\\!-\\!\\|y^{s}_{k}\\!\\!-\\!x_{\\star}\\|^2\\right]\\!\\!,\n\\end{split}\n\\end{equation*}\n\\vspace{-4mm}\n\\begin{equation*}\n\\begin{split}\n&\\;\\,\\mathbb{E}\\!\\left[\\phi(x^{s}_{k})\\!-\\!\\phi(x_{\\star})\\right]\\\\\n\\vspace{-1mm}\n\\leq&\\,(1\\!\\!-\\!\\theta_{s})[\\phi(\\widetilde{x}^{s-\\!1})\\!-\\!\\phi(x_{\\star})]\\!+\\!\\frac{\\!\\beta_{1}\\!{L} \\theta_{\\!s}^{2}\\!}{2}\\mathbb{E}\\!\\!\\left[\\|y^{s}_{k-\\!1}\\!\\!-\\!x_{\\star}\\|^2\\!-\\!\\|y^{s}_{k}\\!-\\!x_{\\star}\\|^2\\right]\\!\\!.\n\\end{split}\n\\vspace{-1mm}\n\\end{equation*}\nDue to the convexity of $\\phi(\\cdot)$ and the definition $\\widetilde{x}^{s}\\!\\!=\\!\\!\\frac{1}{m_{s}}\\!\\!\\sum^{m_{s}}_{k=1}\\!x^{s}_{k}$, then $\\phi\\!\\left(\\frac{1}{m_{s}}\\!\\!\\sum^{m_{s}}_{k=1}\\!x^{s}_{k}\\right)\\!\\leq\\!\\frac{1}{m_{s}}\\!\\sum^{m_{s}}_{k=1}\\!\\!\\phi(x^{s}_{k})$. Taking expectation over the history of $i_{1},\\ldots,i_{m_{s}}$ on the above inequality, and summing it up over $k\\!=\\!1,\\ldots,m_{s}$ at the $s$-th stage, we have\n$\\mathbb{E}\\!\\left[\\phi(\\widetilde{x}^{s})\\!-\\!\\phi(x_{\\star})\\right]\n\\leq\\frac{\\theta_{s}^{2}}{2\\eta m_{s}}\\mathbb{E}\\!\\left[\\|y^{s}_{0}\\!-\\!x_{\\star}\\|^2\\!-\\!\\|y^{s}_{m_{s}}\\!-\\!x_{\\star}\\|^2\\right]\\!+(1\\!-\\!\\theta_{s})\\mathbb{E}[\\phi(\\widetilde{x}^{s-\\!1})\\!-\\!\\phi(x_{\\star})]$. This completes the proof.\n\\end{proof}\n\n\\bibliographystyle{abbrv}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzckid b/data_all_eng_slimpj/shuffled/split2/finalzzckid new file mode 100644 index 0000000000000000000000000000000000000000..3d8ca09f81693f85867cc183cdebe5d78005a5a6 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzckid @@ -0,0 +1,5 @@ +{"text":"\\section*{Introduction}\n\nCoarse, or large-scale, geometry has long been studied in various guises, but \nmost notably in the context of metric spaces. Most generically, a \\emph{coarse \nspace} is a space together with some kind of large-scale structure (e.g., a \nmetric modulo \\emph{quasi-isometry}; see Remark~\\ref{rmk:lsLip-qisom}). A \n\\emph{coarse map} between coarse spaces is then a map which respects this \nstructure (e.g., a large-scale Lipschitz map). Since the small-scale (i.e., the \ntopology) is ignored, one can typically take coarse spaces to be \n\\emph{discrete}, replacing any nondiscrete space by some ``coarsely dense'' \nsubset.\n\nIn recent decades, coarse ideas have played an important role in the study of \ninfinite discrete groups using the methods of geometric group theory, \nespecially in the work of Gromov and his followers (see, e.g., \n\\cite{MR1253544}). The most basic example here is that if $\\Gamma$ is a \nfinitely generated group, then the word length metric on $\\Gamma$ is modulo \nquasi-isometry independent of the finite set of generators used in defining it.\n\nCoarse ideas have also arisen in geometric topology, and more specifically \ncontrolled topology which primarily concerns itself with problems on the \nstructure of manifolds. 
(We refer the reader to \\cite{MR1308714}*{Ch.~9} for a \nsurvey of the topic and for references.) In this setting, one is interested in \n``operations'' (e.g., homotopies, surgeries) on spaces which respect some \nlarge-scale structure, i.e., are \\emph{controlled}. As before, one may take the \nlarge-scale structure to be given by a metric (i.e., \\emph{bounded control}). \nHowever, it is often more convenient to work with a coarser large-scale \nstructure which is defined in purely topological terms (i.e., \\emph{continuous \ncontrol}; see \\S\\ref{subsect:cts-ctl}).\n\nControlled topology parallels the more classical theory for compact manifolds \nwhich relies on the use of algebraic invariants (e.g., algebraic $K$-theory). \nIn controlled topology, one gets controlled versions of those invariants (in, \ne.g., \\emph{bounded} and \\emph{continuously controlled $K$-theory} \n\\cites{MR1000388, MR802790, MR1277522}; see also \\cite{MR1208729}). By \nconsidering the fundamental group of a space, a key object of study in the \nstudy of homotopy invariants (e.g., the Novikov Conjecture on higher \nsignatures), many of the problems of geometric topology are related back to \ngeometric group theory.\n\nOn the analytic side, to any coarse space $X$, Roe has associated a \n$C^*$-algebra $C^*(X)$ (the \\emph{Roe algebra} of $X$), as well as various \n``(co)homology'' groups, e.g., \\emph{coarse $K$-homology} $KX_\\grstar(X)$. (For \na good overview of this and the following, see \\cite{MR1399087}.) On the other \nhand, one can also take the $K$-theory of $C^*(X)$; the Coarse Baum--Connes \nConjecture is that a certain assembly map $KX_\\grstar(X) \\to K_\\grstar(C^*(X))$ \nis an isomorphism, at least for suitably nice $X$.\n\nThe $K$-theory of Roe algebras arises in the index theory of elliptic operators \non noncompact manifolds (on compact manifolds, the Roe algebra is just the \ncompact operators and the results specialize to classical index theory). \nIndeed, historically it was the study of index theory on noncompact manifolds \nwhich led Roe to coarse geometry (see \\cites{MR918459, MR1399087}), and not the \nother way around. In this way, the analytic approaches to the Novikov \nConjecture (starting with the work of Lusztig \\cite{MR0322889}) are again \nrelated to coarse geometry. (See \\cite{MR1388295} for a nice survey of the \ndifferent approaches to the Novikov Conjecture.)\n\n\n\\subsection*{Roe's coarse geometry}\n\nAfter originally developing coarse geometry in the metric context \n\\cite{MR1147350}, Roe (and his collaborators) realized that one can define an \nabstract notion of \\emph{coarse space}, just as in small-scale geometry one has \nabstract \\emph{topological spaces}. Just as the passage from metric space to \ntopological space forgets large-scale (metric) information, the passage from \nmetric space to coarse space should forget small-scale information. But an \nabstract coarse space retains enough structure to perform the large-scale \nconstructions which were previously done in the metric context (e.g., construct \nthe Roe algebras, coarse $K$-homology, etc.).\n\nA \\emph{coarse space} is a set $X$ together with a \\emph{coarse structure}, \nwhich is a collection $\\calE_X$ of subsets of $X^{\\cross 2} \\defeq X \\cross X$ \n(called the \\emph{entourages} of $X$) satisfying various axioms. 
When $X$ is a \n(proper) metric space, $\\calE_X$ consists of the subsets $E \\subseteq X^{\\cross \n2}$ such that\n\\[\n \\sup \\set{d_X(x,x') \\suchthat (x,x') \\in E} < \\infty.\n\\]\nA subset $K \\subseteq X$ is \\emph{bounded} if and only if $K^{\\cross 2}$ is an \nentourage of $X$; when $X$ is a metric space, $K$ is bounded if and only if it \nis metrically bounded. If $X$ is a discrete set, one typically axiomatically \ninsists that the bounded subsets of $X$ be finite (we call this the \n\\emph{properness axiom}; see Definition~\\ref{def:prop-ax}); more generally, if \n$X$ is a topological space, the bounded subsets are required to be compact.\n\nA set map $f \\from Y \\to X$ is a \\emph{coarse map} if $f$ is \\emph{proper} in \nthe sense that the inverse image of any bounded subset of $X$ is a bounded \nsubset of $Y$ and if $f$ \\emph{preserves entourages} in the sense that \n$f^{\\cross 2}(F) \\defeq (f \\cross f)(F)$ is an entourage of $X$. In the metric \ncase, $f$ is a coarse map if it is metrically proper and ``nonexpansive''.\n\nThere is an obvious notion of closeness for maps into a metric space: maps $f, \nf' \\from Y \\to X$ are \\emph{close} if\n\\[\n \\sup \\set{d_X(f(y),f'(y)) \\suchthat y \\in Y} < \\infty.\n\\]\nThis generalizes to the case when $X$ is a general coarse space: $f$, $f'$ are \nclose if $(f \\cross f')(1_Y)$ is an entourage of $X$, where $1_Y$ is the \ndiagonal set $\\set{(y,y) \\suchthat y \\in Y}$.\n\nRoe's \\emph{coarse category} has coarse spaces as objects, and closeness \nclasses of coarse maps as morphisms. (A coarse map is a \\emph{coarse \nequivalence} if it represents an isomorphism in the coarse category.) Coarse \ninvariants are defined on this category, either as functions on the isomorphism \nclasses of the coarse category (e.g., \\emph{asymptotic dimension}) or as \nfunctors from the coarse category to some other category (e.g., \\emph{coarse \n$K$-homology}). Though coarse invariants are the primary object of study in \ncoarse geometry, the coarse category is rarely analyzed directly, and there is \nsome confusion in the literature about what the coarse category is (some \nauthors take its arrows to be actual coarse maps; we will call this the \n\\emph{precoarse category}).\n\nThere is an obvious ``product coarse structure'' on the cartesian (set) product \n$X \\cross Y$. The entourages are exactly the subsets of $(X \\cross Y)^{\\cross \n2}$ which project to entourages of $X$ and $Y$ in the obvious way. However, \nthis is not (usually) a product in the coarse category: the projection maps are \nnot proper, unless both $X$ and $Y$ are finite\/compact. This problem already \narises in the category of proper metric spaces and proper maps (modulo \ncloseness).\n\n\\begin{UNremark}\nThe above does \\emph{not} prove that $X$ and $Y$ do not have a product in the \ncoarse category. Certain products (of infinite\/noncompact coarse spaces) \n\\emph{do} exist in the coarse category; indeed, there is an infinite space $X$\n(namely the continuously controlled ray, or equivalently a countable set \nequipped with the \\emph{terminal}, i.e., ``indiscrete'', coarse structure) such \nthat the product of $X$ with every countable coarse space exists \n(Remark~\\ref{rmk:term-unital-prod}). 
The above does not even prove that the \n``product coarse space'' $X \\cross Y$, as defined above, is not a product of \n$X$ and $Y$ if equipped with suitable maps $X \\cross Y \\to X$ and $X \\cross Y \n\\to Y$ (not the set projections).\n\\end{UNremark}\n\n\n\\subsection*{Nonunital coarse spaces and locally proper maps}\n\nMetric spaces always yield \\emph{unital} coarse spaces, i.e., coarse spaces $X$ \nsuch that $1_X \\defeq \\set{(x,x) \\suchthat x \\in X}$ is an entourage. Though \nRoe defines nonunital coarse spaces, unitality is usually a standing \nassumption, presumably since nonunital coarse spaces have no obvious use.\n\n\\emph{The} major innovation of this paper is the following: We relax the \nrequirement that coarse maps be proper, to a requirement that we call \n\\emph{locally properness}. Local properness is not new: it is actually included \nin Bartels's definition of ``coarse map'' \\cite{MR1988817}*{Def.~3.3}. When the \ndomain is a unital coarse space, local properness is equivalent to (``global'') \nproperness (Corollary~\\ref{cor:loc-prop-uni}). However, when the domain is \nnonunital, we get many more coarse maps. Consequently, using nonunital coarse \nspaces, it becomes extremely easy to construct (nonzero) categorical products \nin the coarse category. Indeed, we can do much more.\n\n\\begin{UNexample}\nSuppose $X'$ is a (closed) subspace of a proper metric space $X$, so that $X'$ \nis itself a coarse space. There is an obvious \\emph{ideal} $\\lAngle 1_{X'} \n\\rAngle_X$ of $\\calE_X$ generated by $1_{X'}$ (see Definition~\\ref{def:ideal}). \nThe coarse space $|X|_{\\lAngle 1_{X'} \\rAngle_X}$ with underlying set $X$ and \ncoarse structure $\\lAngle 1_{X'} \\rAngle_X$ is nonunital, unless $X'$ is \n``coarsely dense'' in $X$.\n\nDefine a (set) map $p \\from X \\to X'$ which sends each $x \\in X$ to a point \n$p(x)$ in $X'$ closest to $x$. Then $p$ is usually not proper, hence is not \ncoarse as a map $X \\to X'$. However, it \\emph{is} locally proper and coarse (in \nour generalized sense) as a map $|X|_{\\lAngle 1_{X'} \\rAngle_X} \\to X'$, and is \nactually a coarse equivalence. (We leave it to the reader to verify this, after \nlocating the required definitions.)\n\\end{UNexample}\n\nFor simplicity as well as for philosophical reasons, we only consider \n\\emph{discrete} coarse spaces; hence for us a map is (globally) proper if and \nonly if the inverse image of any point is a finite set. If a map $f \\from Y \\to \nX$ between coarse spaces is proper, then $f^{\\cross 2}$ is a proper map, and \nhence the restriction of $f^{\\cross 2}$ to any entourage $F \\subseteq Y^{\\cross \n2}$ of $Y$ is proper. We take the latter as the definition of local properness: \nA map $f \\from Y \\to X$ between coarse spaces (not necessarily unital) is \n\\emph{locally proper} if, for all entourages $F$ of $Y$, the restriction \n$f^{\\cross 2} |_F \\from F \\to X^{\\cross 2}$ is a proper map. There are a number \nof equivalent ways of defining local properness, the most intuitive of which is \nthe following. For a nonunital coarse space, there is an obvious notion of \n\\emph{unital subspace}; a map is locally proper if and only if the restriction \nto every unital subspace of its domain is a proper map \n(Corollary~\\ref{cor:loc-prop-uni}).\n\nWhen $X$ is nonunital, we must modify the the definition of closeness, lest the \nidentity map on $X$ not be close to itself. 
We modify it in a simple way, now \nrequiring that the domain also be a coarse space: Coarse maps $f, f' \\from Y \n\\to X$ (between possibly nonunital coarse spaces) are \\emph{close} if $(f \n\\cross f')(F)$ is an entourage of $X$ for every entourage $F$ of $Y$. After \nchecking the usual things, we get our nonunital \\emph{coarse category}, whose \nobjects are (possibly nonunital) coarse spaces and whose arrows are closeness \nclasses of (locally proper) coarse maps.\n\n\\begin{UNremark}\nEmerson--Meyer have defined a notion of \\emph{$\\sigma$-coarse spaces}, coarse \nmaps between such spaces, and an appropriate notion of closeness \n\\cite{MR2225040}. A $\\sigma$-coarse space is just the colimit of an increasing \nsequence of unital coarse spaces. In fact, we show that the (pre)coarse \ncategory of discrete $\\sigma$-coarse spaces is equivalent to a subcategory of \nour (pre)coarse category consisting of the \\emph{$\\sigma$-unital coarse spaces} \n(we do not examine the situation when one allows $\\sigma$-coarse spaces to have \nnontrivial topologies).\n\\end{UNremark}\n\n\n\\subsection*{Products, limits, etc.}\n\nLet us see how to construct the product of coarse spaces $X$ and $Y$ in this \ncategory. We do so by putting a \\emph{nonunital} coarse structure on the set $X \n\\cross Y$. The entourages of the \\emph{coarse product} $X \\cross Y$ are the $G \n\\subseteq (X \\cross Y)^{\\cross 2}$ such that:\n\\begin{enumerate}\n\\item the restricted projections $\\pi_1 |_G, \\pi_2 |_G \\from G \\to X \\cross Y$ \n are proper maps (this is the aforementioned properness axiom);\n\\item $\\pi_X |_G \\from G \\to X^{\\cross 2}$ and $\\pi_Y |_G \\from G \\to Y^{\\cross \n 2}$ are proper maps; and\n\\item $(\\pi_X)^{\\cross 2}(G)$ is an entourage of $X$ and $(\\pi_Y)^{\\cross \n 2}(G)$ is an entourage of $Y$.\n\\end{enumerate}\nOne can then check that this is a product in our nonunital coarse category \n(indeed, it is a product in our nonunital \\emph{precoarse category}). We must \nemphasize that the coarse structure on the set product is crucial: If $\\ast$ is \na one-point coarse space, then $X \\cross \\ast \\cong X$ as a set, but unless $X$ \nis bounded the coarse product $X \\cross \\ast$ is \\emph{not} coarsely equivalent \nto $X$.\n\nThe above construction generalizes to all nonzero products (by nonzero product, \nwe mean a product of a nonempty collection of spaces), including infinite \nproducts (Theorem~\\ref{thm:PCrs-lim} and Proposition~\\ref{prop:Crs-prod}). We \nwill then proceed to examine equalizers in the nonunital coarse category, and \ndiscover that it has all equalizers of pairs of maps \n(Proposition~\\ref{prop:Crs-equal}). A standard categorical corollary is that \nthe nonunital coarse category has all nonzero (projective) limits \n(Theorem~\\ref{thm:Crs-lim}). One can similarly analyze coproducts (i.e., sums \nor ``disjoint unions'') and coequalizers, and get that the nonunital coarse \ncategory has all colimits, i.e., inductive limits \n(Theorem~\\ref{thm:Crs-colim}).\n\n\n\\subsection*{Terminal objects and quotients}\n\nFor set theoretic reasons, the coarse category does not have a terminal object. \n(As we shall see in \\S\\ref{subsect:rest-Crs}, one way of obtaining a terminal \nobject is to restrict the cardinality of coarse spaces, though there is a \nbetter way to proceed. For most purposes, no such restriction is needed.) \nHowever, there is a plethora of coarse spaces which behave like terminal \nobjects. 
The \\emph{terminal coarse structure} on a set $X$ consists of the \nsubsets of $X^{\\cross 2}$ which are both ``row- and column-finite''; denote the \nresulting coarse space by $|X|_1$. A rather underappreciated fact about such \ncoarse spaces is that, for any coarse space $Y$, \\emph{all} coarse maps $Y \\to \n|X|_1$ are close to one another. An immediate categorical consequence of this \nis that, assuming that any such coarse map exists, the product of $|X|_1$ and \n$Y$ in the (unital or nonunital) coarse category is just $Y$ itself \n(Proposition~\\ref{prop:term-id}).\n\nIn the unital coarse category, $X \\mapsto |X|_1$ is a functor. In the nonunital \ncoarse category, one must replace $|X|_1$ with a different coarse space, \ndenoted $\\Terminate(X)$ (with $\\Terminate(X) = |X|_1$ for $X$ unital), to \nobtain a functor. In an abelian category, one can define a quotient $X\/f(Y)$ \n(for $f \\from Y \\to X$) as push-out $X \\copro_Y 0$. This generalizes to any \ncategory with zero objects and push-out squares. We will see that in fact we \ncan generalize this to the coarse setting, defining $X\/[f](Y)$ to be the \npush-out $X \\copro_Y \\Terminate(Y)$ in the (nonunital) coarse category. \n(Indeed, one can do the same in the category of topological spaces, noting that \nthere are two cases: ``$\\Terminate(X)$'' is a one-point space if $X \\neq \n\\emptyset$ and the empty space otherwise.)\n\n\n\\subsection*{Applications}\n\nOur development of coarse geometry is a strict generalization of Roe's, despite \nour assumption of discreteness (see \\S\\ref{sect:top-crs}). Most of the standard \nconstructions in Roe's coarse geometry (such as those mentioned above) \ngeneralize easily to our nonunital, locally proper version. (Note, however, \nthat our theory does not encompass what one may call, following the language of \n\\cite{MR1817560}*{Ch.~12}, the ``uniform category'' in which both the coarse \nstructure and the topology are important. For example, Roe's $C^*$-algebras \n$D^*(X)$, which are functorial for uniform maps, require a notion of \n\\emph{topological coarse space}. We defer this task to \\cite{crscat-III}; see \nRemark~\\ref{rmk:top-crs-sp}.) However, we will refrain from fully developing \nthese applications in this paper. For the purposes of this paper, we briefly \nexamine some things enabled by our generalization.\n\nHaving examined the coarse category from the categorical point of view, many \nstandard constructions from topology transfer easily over to the coarse \nsetting. For example, one obtains a notion of coarse simplicial complex. Of \ncourse, it is easy to deal with finite complexes explicitly in the unital \ncoarse category. However, one result of having \\emph{all} colimits, including \ninfinite ones, is that we actually obtain infinite coarse simplicial complexes. \nThis should enable one to apply simplicial methods in coarse geometry.\n\n\\begin{comment}\nAnother application is to coarse homotopy. There are various notions of \nhomotopy used in coarse geometry, e.g., the coarse homotopy of Higson and Roe \n\\cite{MR1243611}. However, the standard description of coarse homotopy is not \n``categorical'', for the obvious reason that the standard, unital coarse \ncategory does not seem to have products in general. 
We rectify this, and reformulate coarse homotopy in much more familiar categorical terms: We find a coarse space $I$ such that, for (at most) countable coarse spaces $X$ and $Y$, a coarse homotopy of maps $Y \to X$ is exactly given by a coarse map $(h_t) \from Y \cross I \to X$. This $I$ comes equipped with coarse maps $\delta_j \from P \defeq |\setN|_1 \to I$, $j = 0, 1$ (or, indeed, $j \in \ccitvl{0,1}$), which allow one to recover the coarse maps ``at the endpoints'' via the compositions
\[
  Y \isoto Y \cross P \nameto{\smash{\id \cross \delta_j}} Y \cross I \to X.
\]
As a historical note, we mention that this description is motivated by the continuously controlled case in which the connection to topology is much more obvious.
\end{comment}


\subsection*{Notes on history and references}

The framework and terminology we use are essentially due to Roe and his collaborators (see \cites{MR1147350, MR1451755}, in particular). Since our development differs in various details and in the crucial concept of local properness, and for the sake of completeness, we provide a complete exposition from basic principles; other, more standard, expositions include \cites{MR1451755, MR1399087, MR1817560, MR2007488}. In the basics, we do not claim much originality and most of the results will be known to those familiar with coarse geometry. However, in the context of locally proper maps, we have found certain methods of proof (in particular, the use of Proposition~\ref{prop:prop}) to be particularly effective, and have emphasized the use of these methods. Thus our proofs of standard results may differ from the usual proofs.

We have endeavoured to provide reasonably thorough references. However, it is often unwieldy to provide complete data for things which have been generalized and refined over the years. In such cases, rather than providing references to the original definition and all the subsequent generalizations, we simply reference a work (often expository in nature) which provides the current standard definition; often, such definitions can be found in a number of places, such as the aforementioned standard expositions.


\subsection*{Organization}

The rest of this paper is organized into five (very unequal) sections:
\begin{description}
\item[\S\ref{sect:crs-geom}] We define our basic framework of coarse structures, coarse spaces, and coarse maps, together with important results on local properness, and push-forward and pull-back coarse structures.
\item[\S\ref{sect:PCrs}] We consider the precoarse categories (and $\CATPCrs$ in particular) and their properties; the arrows in these categories are actual coarse maps. We discuss limits and colimits in these categories, as well as the relation between the general category $\CATPCrs$ and the subcategories of unital and\/or connected coarse spaces.
\item[\S\ref{sect:Crs}] We discuss the relation of closeness on coarse maps, establish basic properties of closeness, and consider the quotient coarse categories ($\CATCrs$ in particular). We show that $\CATCrs$ has all nonzero products and all equalizers, hence all nonzero limits. Similarly, it has all coproducts and all coequalizers, hence all colimits. We define the termination functor $\Terminate$, and examine some of its properties; in particular, it provides ``identities'' for the product.
We characterize \n the monic arrows of $\\CATCrs$ and show that $\\CATCrs$ has categorical \n images, and dually we do the same for epi arrows and coimages. We apply \n $\\Terminate$, together with push-outs, to define quotient coarse spaces. \n Finally, we discuss ways to ``restrict'' $\\CATCrs$ to obtain subcategories \n with terminal objects.\n\\item[\\S\\ref{sect:top-crs}] We examine Roe's formalization of coarse geometry, \n which allows coarse spaces to carry topologies, and the relation \n between the Roe coarse category and ours. In particular, we discuss how, \n given a ``proper coarse space'' (in the sense of Roe), one can functorially \n obtain a (discrete) coarse space (in our sense). We show that this gives a \n fully faithful functor from the Roe coarse category to $\\CATCrs$.\n\\item[\\S\\ref{sect:ex-appl}] We give the basic examples of coarse spaces: those \n which come from proper metric spaces, and those which come from \n compactifications (i.e., continuously controlled coarse spaces). We define \n corresponding metric and continuously controlled coarse simplices, and \n indicate how one might then develop coarse simplicial theory.\n We compare Emerson--Meyer's $\\sigma$-coarse spaces to our nonunital coarse \n spaces (in the discrete case). Finally, we briefly examine the relation \n between quotients of coarse spaces, the $K$-theory of Roe algebras, and \n Kasparov $K$-homology.\n\\end{description}\n\n\n\\subsection*{Acknowledgements}\n\nThis work has been greatly influenced by many people, too many to enumerate. \nHowever, I would like to specifically thank my thesis advisor John Roe for his \nguidance over the years, as well as Heath Emerson and Nick Wright for helpful \ndiscussions. I would also like to thank Marcelo Laca, John Phillips, and Ian \nPutnam for their support at the University of Victoria.\n\n\n\n\n\\section{Foundations of coarse geometry}\\label{sect:crs-geom}\n\nThroughout this section, $X$, $Y$, and $Z$ will be sets (sometimes with extra \nstructure), and $f \\from Y \\to X$ and $g \\from Z \\to Y$ will be (set) maps. We \ndenote the restriction of $f$ to $T \\subseteq Y$ by $f |_T \\from T \\to X$. When \n$f(Y) \\subseteq S \\subseteq X$, we denote the range restriction of $f$ to $S$ \nby $f |^S \\from Y \\to S$. Thus if $T \\subseteq Y$ and $f(T) \\subseteq S \n\\subseteq X$, we have a restriction $f |_T^S \\from T \\to S$.\n\n\n\\subsection{\\pdfalt{\\maybeboldmath Operations on subsets of $X \\cross X$}%\n {Operations on subsets of X x X}}\n\nMuch of the following can be developed in the more abstract context of \ngroupoids \\cite{MR1451755}, but we will refrain from doing so. The basic object \nin question is the pair groupoid $X^{\\cross 2} \\defeq X \\cross X$. Recall that \n$X^{\\cross 2}$ has object set $X$ and set of arrows $X \\cross X$. The map $X \n\\injto X^{\\cross 2}$ is $x \\mapsto (x,x) \\eqdef 1_x$ for $x \\in X$. The target \nand source maps are the projections $\\pi_1, \\pi_2 \\from X^{\\cross 2} \\to X$, \nrespectively. For $x,x',x'' \\in X$, composition is given by $(x,x') \\circ \n(x',x'') \\defeq (x,x'')$ and the inverse by $(x,x')^{-1} \\defeq (x',x)$. 
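(Informally, one may picture a subset $E \subseteq X^{\cross 2}$ as a $\set{0,1}$-matrix indexed by $X$, with $(x,x')$-entry equal to $1$ exactly when $(x,x') \in E$; under this picture, the operations introduced below correspond to entrywise maximum, ``Boolean'' matrix multiplication, and matrix transpose, and the properness axiom of Definition~\ref{def:prop-ax} below becomes row- and column-finiteness. We will not use this picture formally.)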
Any \nset map $f \\from Y \\to X$ induces a groupoid morphism\n\\[\n f^{\\cross 2} \\defeq f \\cross f \\from Y^{\\cross 2} \\to X^{\\cross 2}\n\\]\nwhich in turn induces a map $\\powerset(Y^{\\cross 2}) \\to \\powerset(X^{\\cross \n2})$, again denoted $f^{\\cross 2}$, between power sets.\n\n\\begin{definition}[\\maybeboldmath operations on $\\powerset(X^{\\cross 2})$]\nFor $E, E' \\in \\powerset(X^{\\cross 2})$:\n\\begin{enumerate}\n\\item (\\emph{addition}) $E + E' \\defeq E \\union E'$;\n\\item (\\emph{multiplication}) $E \\circ E' \\defeq \\set{e \\circ e' \\suchthat \n \\text{$e \\in E$, $e' \\in E'$, and $\\pi_2(e) = \\pi_1(e')$}}$; and\n\\item (\\emph{transpose}) $E^\\transpose \\defeq \\set{e^{-1} \\suchthat e \\in E}$.\n\\end{enumerate}\nFor $S \\subseteq X$, put $1_S \\defeq \\set{1_x \\suchthat x \\in S}$ (the \n\\emph{local unit} over $S$, or simply \\emph{unit} if $S = X$).\n\\end{definition}\n\n\\begin{proposition}\nFor all $E \\in \\powerset(X^{\\cross 2})$,\n\\[\n E \\circ 1_S = (\\pi_2 |_E)^{-1}(S)\n\\quad\\text{and}\\quad\n 1_S \\circ E = (\\pi_1 |_E)^{-1}(S)\n\\]\n\\end{proposition}\n\n\\begin{remark}\nWe will refrain from calling $E \\circ E'$ a ``product'' to avoid confusion with \ncartesian\/categorical products (e.g., $X \\cross Y$). The transpose \n$E^\\transpose$ is often called the ``inverse'' and denoted $E^{-1}$; we avoid \nthis terminology and notation since it is somewhat deceptive (though, \nadmittedly, also rather suggestive). Our units $1_X$ are usually denoted \n$\\Delta_X$ (and called the diagonal, for obvious reasons); our terminology is \nmore representative of the ``algebraic'' role played by the unit (and the local \nunits) and avoids confusion with the (related) diagonal map $\\Delta_X \\from X \n\\to X \\cross X$ (where $X \\cross X$ is the cartesian\/categorical product).\n\\end{remark}\n\nThe operations of addition and composition make $\\powerset(X^{\\cross 2})$ into \na semiring: addition is commutative with identity $\\emptyset$, multiplication \nis associative with identity $1_X$, multiplication distributes over addition, \nand $\\emptyset \\circ E = \\emptyset = E \\circ \\emptyset$ for all $E$. Addition \nis idempotent in that $E + E = E$ for all $E$. Each $1_S$ is idempotent with \nrespect to multiplication, i.e., $1_S \\circ 1_S = 1_S$ for all $S$. 
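For a first illustration of these operations (anticipating the metric coarse spaces of Example~\ref{ex:disc-met} below), let $(X,d)$ be a metric space and put $E_r \defeq \set{(x,x') \suchthat d(x,x') \leq r}$ for $r \geq 0$. Then
\[
  E_r + E_s = E_{\max\set{r,s}},
  \qquad
  E_r \circ E_s \subseteq E_{r+s},
  \qquad
  (E_r)^\transpose = E_r,
  \qquad\text{and}\qquad
  1_X = E_0,
\]
where the second containment follows from the triangle inequality (and may be strict, e.g., when $X$ is a sparse discrete subset of the real line).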
The \ntranspose is involutive, i.e., $(E^\\transpose)^\\transpose = E$ for all $E$, \nand, moreover, $(E + E')^\\transpose = E^\\transpose + (E')^\\transpose$, $(E \n\\circ E')^\\transpose = (E')^\\transpose \\circ E^\\transpose$, and \n$(1_S)^\\transpose = 1_S$, for all $E$, $E'$, and $S$.\n\n\\begin{definition}[neighbourhoods and supports]\nFor any $S \\subseteq X$ and $E \\in \\powerset(X^{\\cross 2})$, put\n\\begin{align*}\n E \\cdot S & \\defeq \\pi_1(E \\circ 1_S) = \\pi_1( (\\pi_2 |_E)^{-1}(S) )\n && \\text{(\\emph{left $E$-neighbourhood of $S$})}\n\\shortintertext{and}\n S \\cdot E & \\defeq \\pi_2(1_S \\circ E) = \\pi_2( (\\pi_1 |_E)^{-1}(S) )\n && \\text{(\\emph{right $E$-neighbourhood of $S$})}.\n\\end{align*}\nWe also call $E \\cdot X = \\pi_1(E)$ the \\emph{left support} of $E$ and $X \\cdot \nE = \\pi_2(E)$ the \\emph{right support} of $E$.\n\\end{definition}\n\n\\begin{remark}\nThe notations $N_E(S) \\defeq E_S \\defeq E[S] \\defeq E \\cdot S$ and $E^S \\defeq \nS \\cdot E$ are common, though our notation is hopefully more suggestive of the \nrelation between $E \\cdot S$, $S \\cdot E$ and the previously defined \noperations.\n\\end{remark}\n\n\\begin{proposition\nFor all $E$, $E'$, and $S$:\n\\begin{align*}\n (E + E') \\cdot S & = E \\cdot S \\union E' \\cdot S &\n & \\text{and} &\n S \\cdot (E + E') & = S \\cdot E \\union S \\cdot E'; \\\\\n E \\circ 1_{E' \\cdot S} & = E \\circ E' \\circ 1_S &\n & \\text{and} &\n 1_{S \\cdot E} \\circ E' & = 1_S \\circ E \\circ E'; \\\\\n (E \\circ E') \\cdot S & = E \\cdot (E' \\cdot S) &\n & \\text{and} &\n S \\cdot (E \\circ E') & = (S \\cdot E) \\cdot E'; \\quad\\text{and} \\\\\n E^\\transpose \\cdot S & = S \\cdot E &\n & \\text{and} &\n S \\cdot E^\\transpose & = E \\cdot S.\n\\end{align*}\n\\end{proposition}\n\n$\\powerset(X^{\\cross 2})$ and $\\powerset(X)$ are partially ordered by \ninclusion. All of the above ``operations'' are monotonic with respect to these \npartial orders.\n\n\\begin{proposition\nIf $E_1, E_2, E'_1, E'_2 \\in \\powerset(X^{\\cross 2})$ with $E_1 \\subseteq E_2$ \nand $E'_1 \\subseteq E'_2$, and $S_1, S_2 \\subseteq X$ with $S_1 \\subseteq S_2$, \nthen:\n\\begin{align*}\n E_1 + E'_1 & \\subseteq E_2 + E'_2, & &&\n E_1 \\circ E'_1 & \\subseteq E_2 \\circ E'_2, \\\\\n (E_1)^\\transpose & \\subseteq (E_2)^\\transpose, & &&\n 1_{S_1} & \\subseteq 1_{S_2}, \\\\\n E_1 \\cdot S_1 & \\subseteq E_2 \\cdot S_2, &\n & \\quad\\text{and} &\n S_1 \\cdot E_1 & \\subseteq S_2 \\cdot E_2.\n\\end{align*}\n\\end{proposition}\n\n\n\\subsection{Discrete properness}\n\nSince our coarse spaces are essentially discrete, for now we only discuss \nproperness for maps between discrete sets.\n\n\\begin{definition}\\label{def:proper}\nA set map $f \\from Y \\to X$ is \\emph{proper} if the inverse image $f^{-1}(K)$ \nof every finite subset $K \\subseteq X$ is again finite.\n\\end{definition}\n\nIf $Y$ is itself a finite set, then any $f \\from Y \\to X$ is automatically \nproper. 
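For example, any injective map is proper, as is the map $\setZ \to \setZ$ sending $n$ to $\lfloor n/2 \rfloor$ (each fibre has at most two elements); on the other hand, a constant map with infinite domain is never proper, since the fibre over the image point is infinite.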
We will use the following facts extensively (compare \n\\cite{MR979294}*{\\S{}10.1 Prop.~5}).\n\n\\begin{proposition}\\label{prop:prop}\nConsider the composition of (set) maps $Z \\nameto{\\smash{g}} Y \n\\nameto{\\smash{f}} X$:\n\\begin{enumerate}\n\\item\\label{prop:prop:I} If $f$ and $g$ are proper, then $f \\circ g$ is proper.\n\\item\\label{prop:prop:II} If $f \\circ g$ is proper, then $g$ is proper.\n\\item\\label{prop:prop:III} If $f \\circ g$ is proper and $g$ is surjective, then \n $f$ (and $g$) are proper.\n\\end{enumerate}\nNote that injectivity implies properness.\n\\end{proposition}\n\nIn \\enumref{prop:prop:III} above, the hypothesis that $g$ be surjective can be \nweakened to the requirement that $Y \\setminus g(Z)$ be a finite set. All \nrestrictions (including range restrictions) of proper maps are again proper.\n\n\n\\subsection{The properness axiom and coarse structures}\n\n\\begin{definition}\\label{def:prop-ax}\nA set $E \\in \\powerset(X^{\\cross 2})$ satisfies the \\emph{properness axiom} if \nthe restricted target and source maps (i.e., projections) $\\pi_1 |_E, \\pi_2 |_E \n\\from E \\to X$ (or, also restricting the ranges, $\\pi_1 |_E^{E \\cdot X}$, \n$\\pi_2 |_E^{X \\cdot E}$) are proper set maps.\n\\end{definition}\n\n\\begin{proposition}\\label{prop:prop-ax}\nFor $E \\in \\powerset(X^{\\cross 2})$, the following are equivalent:\n\\begin{enumerate}\n\\item\\label{prop:prop-ax:I} $E$ satisfies the properness axiom;\n\\item\\label{prop:prop-ax:II} $E \\circ 1_S$ and $1_S \\circ E$ are finite for all \n finite $S \\subseteq X$; and\n\\item\\label{prop:prop-ax:III} $E \\circ E'$ and $E' \\circ E$ are finite for all \n finite $E' \\in \\powerset(X^{\\cross 2})$.\n\\end{enumerate}\n\\end{proposition}\n\n\\begin{proof}\n\\enumref{prop:prop-ax:I} $\\iff$ \\enumref{prop:prop-ax:II}: Immediate from $E \n\\circ 1_S = (\\pi_2 |_E)^{-1}(S)$ (and symmetrically).\n\n\\enumref{prop:prop-ax:II} $\\iff$ \\enumref{prop:prop-ax:III}: The reverse \nimplication is clear. For the forward implication, if $E'$ is finite then $E' \n\\cdot X$ is finite, and hence so too is\n\\[\n E \\circ E' = E \\circ E' \\circ 1_X = E \\circ 1_{E' \\cdot X}\n\\]\n(and symmetrically).\n\\end{proof}\n\n\\begin{corollary}\\label{cor:prop-ax}\nIf $E \\in \\powerset(X^{\\cross 2})$ satisfies the properness axiom, then $E \n\\cdot S$ and $S \\cdot E$ are finite for all finite $S \\subseteq X$.\n\\end{corollary}\n\n\\begin{proof}\nUse $E \\cdot S \\defeq \\pi_1(E \\circ 1_S)$ (and similarly symmetrically).\n\\end{proof}\n\n\\begin{remark}\nThe converse of the above Corollary holds since we are only considering pair \ngroupoids: observe that\n\\[\n (\\pi_1 |_E)^{-1}(S) \\subseteq S \\cross S \\cdot E\n\\]\n(and similarly symmetrically). However, the converse does not hold in general \nfor coarse structures on groupoids.\n\\end{remark}\n\n\\begin{proposition}[``algebraic'' operations and the properness axiom]%\n \\label{prop:prop-ax-alg}\nIf $E, E' \\in \\powerset(X^{\\cross 2})$ satisfy the properness axiom, then $E + \nE'$, $E \\circ E'$, $E^\\transpose$, and all subsets of $E$ satisfy the \nproperness axiom. 
Also, all singletons $\\set{e}$, $e \\in X^{\\cross 2}$, and \nhence all finite subsets of $X^{\\cross 2}$ satisfy the properness axiom, as \ndoes the unit $1_X$.\n\\end{proposition}\n\n\\begin{proof}\nClear, except possibly for $E \\circ E'$; for this, use \nProposition~\\ref{prop:prop-ax}\\enumref{prop:prop-ax:III} (and associativity of \nmultiplication).\n\\end{proof}\n\nIf $T$, $T'$ are matrices over $X^{\\cross 2}$ (with values in some ring) are \nsupported on $E, E' \\in \\powerset(X^{\\cross 2})$ satisfying the properness \naxiom, then the product $TT'$ is defined and has support contained in $E \\circ \nE'$. The passage to rings of matrices motivates the following.\n\n\\begin{definition}\\label{def:crs-sp}\nA \\emph{coarse structure} on $X$ is a subset $\\calE_X \\subseteq\n\\powerset(X^{\\cross 2})$ such that:\n\\begin{enumerate}\n\\item each $E \\in \\calE_X$ satisfies the properness axiom;\n\\item $\\calE_X$ is closed under the operations of addition, multiplication, \n transpose, and the taking of subsets (i.e., if $E \\subseteq E'$ and $E' \\in \n \\calE_X$, then $E \\in \\calE_X$); and\n\\item for all $x \\in X$, the singleton $\\set{1_x}$ is in $\\calE_X$.\n\\end{enumerate}\nA \\emph{coarse space} is a set $X$ equipped with a coarse structure $\\calE_X$ \non $X$. We denote such a coarse space by $|X|_{\\calE_X}$ or simply $X$. The \nelements of $\\calE_X$ are called \\emph{entourages} (of $\\calE_X$ or of $X$).\n\\end{definition}\n\n\\begin{example}[finite sets]\nIf $X$ is a finite set, then any coarse structure on $X$ must be unital. \nMoreover, there is only one connected coarse structure on $X$, namely the power \nset of $X$.\n\\end{example}\n\nHere are two natural coarse structures which exist on any set.\n\n\\begin{definition}\nThe \\emph{initial coarse structure} $\\calE_{|X|_0}$ on $X$ is the minimum \ncoarse structure on $X$. The \\emph{terminal coarse structure} $\\calE_{|X|_1}$ \non a set $X$ is the maximum coarse structure. (We denote the corresponding \ncoarse spaces by $|X|_0$ and $|X|_1$, respectively.)\n\\end{definition}\n\nBy Proposition~\\ref{prop:prop-ax-alg}, $\\calE_{|X|_1}$ simply consists of all \nthe $E \\in \\powerset(X^{\\cross 2})$ which satisfy the properness axiom. (Thus \n``$E \\in \\calE_{|X|_1}$'' is a convenient abbreviation for ``$E \\in \n\\powerset(X^{\\cross 2})$ satisfies the properness axiom''.) Any coarse \nstructure on $X$ is a subset of the terminal coarse structure (and obviously \ncontains the initial coarse structure). More generally, we have the following.\n\n\\begin{proposition}\nThe intersection of any collection of coarse structures on $X$ (possibly \ninfinite) is again a coarse structure on $X$.\n\\end{proposition}\n\n\\begin{definition}\nThe coarse structure $\\langle \\calE' \\rangle_X$ on $X$ \\emph{generated} by a \nsubset $\\calE' \\subseteq \\calE_{|X|_1}$ is the minimum coarse structure on $X$ \nwhich contains $\\calE'$.\n\\end{definition}\n\nOf course, $\\langle \\calE' \\rangle_X$ is just the intersection of all the \ncoarse structures on $X$ containing $\\calE'$. 
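To illustrate (the computation is not needed later): on $X = \setZ$, the ``band'' $E \defeq \set{(m,n) \suchthat |m - n| \leq 1}$ satisfies the properness axiom, since each of its rows and columns is finite, whereas $\setZ \cross \set{0}$ does not, as $\pi_2$ restricted to it has an infinite fibre. Writing $E^{\circ k}$ for the $k$-fold multiplication $E \circ \dots \circ E = \set{(m,n) \suchthat |m - n| \leq k}$, one checks using Corollary~\ref{cor:crs-struct-gen} below that
\[
  \langle \set{E} \rangle_\setZ
  = \set{E' \subseteq \setZ^{\cross 2} \suchthat
    \text{$E' \subseteq E^{\circ k}$ for some $k \geq 1$}},
\]
which is exactly the metric coarse structure on $\setZ$ (with its usual metric) described in Example~\ref{ex:disc-met} below.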
Note that $\\calE_{|X|_0} = \n\\langle \\emptyset \\rangle_X$; more concretely, $\\calE_{|X|_0}$ consists of all \nthe finite local units $1_S$, $S \\subseteq X$ finite.\n\nGiven two subsets $\\calE', \\calE'' \\subseteq \\calE_{|X|_1}$ (e.g., coarse \nstructures on $X$), denote\n\\[\n \\langle \\calE', \\calE'' \\rangle_X\n \\defeq \\langle \\calE' \\union \\calE'' \\rangle_X.\n\\]\nObserve that $\\langle \\calE', \\calE'' \\rangle_X$ contains both $\\langle \\calE' \n\\rangle_X$ and $\\langle \\calE'' \\rangle_X$. We use similar notation given three \nor more subsets of $\\calE_{|X|_1}$ and, more generally, if $\\set{\\calE'_j \n\\suchthat j \\in J}$ ($J$ some index set) is a collection of subsets of \n$\\calE_{|X|_1}$,\n\\[\n \\langle \\calE'_j \\suchthat j \\in J \\rangle_X\n \\defeq \\bigl\\langle \\textstyle\\bigunion_{j \\in J} \\calE'_j\n \\bigr\\rangle_X.\n\\]\n\nOne can describe the coarse structure generated by $\\calE'$ rather more \nconcretely.\n\n\\begin{proposition}\nIf $\\calE' \\subseteq \\calE_{|X|_1}$ contains all the singletons $\\set{1_x}$, $x \n\\in X$, and is closed under the ``algebraic'' operations of addition, \nmultiplication, and transpose, then\n\\[\n \\langle \\calE' \\rangle_X = \\set{E \\subseteq E' \\suchthat E' \\in \\calE'}.\n\\]\n\\end{proposition}\n\n\\begin{corollary}\\label{cor:crs-struct-gen}\nFor any $\\calE' \\subseteq \\calE_{|X|_1}$, $\\langle \\calE' \\rangle_X$ consists \nof the all subsets of the ``algebraic closure'' of the union\n\\[\n \\calE' \\union \\set{\\set{1_x} \\suchthat x \\in X}.\n\\]\n\\end{corollary}\n\nSubsets of coarse spaces are naturally coarse spaces.\n\n\\begin{definition}\nSuppose $X$ is a coarse space and $X' \\subseteq X$ is a subset. Then\n\\[\n \\calE_{X'} \\defeq \\calE_X |_{X'}\n \\defeq \\calE_X \\intersect \\powerset((X')^{\\cross 2})\n\\]\nis a coarse structure on $X'$, called the \\emph{subspace coarse structure}. \nCall $X' \\subseteq X$ equipped with the subspace coarse structure a (coarse) \n\\emph{subspace} of $X$.\n\\end{definition}\n\n\\begin{example}[discrete metric spaces]\\label{ex:disc-met}\nLet $(X,d)$ be a discrete, proper metric space. ($X$ is metrically \n\\emph{proper} if closed balls of $X$ are compact; thus $X$ is discrete and \nproper if and only if every metrically bounded subset is finite.) The \n\\emph{($d$-)metric coarse space} $|X|_d$ (or just $|X|$ for short) has as \nentourages the $E \\in \\calE_{|X|_1} \\subseteq \\powerset(X^{\\cross 2})$ such \nthat\n\\begin{equation}\\label{ex:disc-met:eq}\n \\sup \\set{d(x,x') \\suchthat (x,x') \\in E} < \\infty.\n\\end{equation}\nWe may also allow $d(x,x') = \\infty$ (for $x \\neq x'$). In the senses defined \nbelow, $|X|_d$ is always unital but is connected if and only if $d(x,x') < \n\\infty$ always. If $X' \\subseteq X$, then the metric coarse structure on $X'$ \nobtained from the restriction of the metric $d$ is just the subspace coarse \nstructure $\\calE_{|X|_d} |_{X'}$.\n\\end{example}\n\n\n\\subsection{Unitality and connectedness}\n\n\\begin{definition}\\label{def:uni-conn}\nA coarse structure $\\calE_X$ on $X$ is \\emph{unital} if $1_X \\in \\calE_X$. \n$\\calE_X$ is \\emph{connected} if every singleton $\\set{e}$, $e \\in X^{\\cross \n2}$, is an entourage of $\\calE_X$. 
A pair of points $x, x' \\in X$ are \n\\emph{connected} (with respect to $\\calE_X$, or in the coarse space $X$) if \n$\\set{(x,x')} \\in \\calE_X$.\n\\end{definition}\n\nMost treatments of coarse geometry assume both unitality and connectedness, but \nwe will assume neither. Connectedness is a relatively benign assumption (see \n\\S\\ref{subsect:PCrs-conn}), but \\emph{not} assuming unitality will be \nparticularly crucial.\n\n\\begin{remark}\\label{rmk:gpd-conn}\nConnectedness in the general coarse groupoid case is more complicated, since \nthere may be multiple arrows having the same target and source, and since a \ngroupoid itself may not be connected. Let $\\calE_\\calG$ be a coarse structure \non a groupoid $\\calG$. There are several possible notions of connectedness:\n\\begin{enumerate}\n\\item The obvious translation of the above to groupoids is to say that \n $\\calE_\\calG$ is (locally) \\emph{connected} if all singletons $\\set{e}$ \n ($e$ an arrow in the groupoid) are entourages of $\\calE_\\calG$.\n\\item $\\calE_\\calG$ is \\emph{globally connected} if it is (locally) connected \n and $\\calG$ is connected as a groupoid.\n\\setcounter{tempcounter}{\\value{enumi}}\n\\end{enumerate}\nObjects $x$, $x'$ are \\emph{connected} if \\emph{all} arrows $e$ with target $x$ \nand source $x'$ yield entourages $\\set{e}$. Then $\\calE_\\calG$ is (locally) \nconnected if and only if \\emph{all groupoid-connected} pairs of objects are \nconnected, and globally connected if and only if \\emph{all} pairs of objects \nare connected. But there is also a weaker notion of connectedness: $x$, $x'$ \nare \\emph{weakly connected} if there is \\emph{some} arrow $e$ with target $x$ \nand source $x'$ such that $\\set{e}$ is an entourage.\n\\begin{enumerate}\n\\setcounter{enumi}{\\value{tempcounter}}\n\\item $\\calE_\\calG$ is (locally) \\emph{weakly connected} if all \n groupoid-connected objects $x$, $x'$ are weakly connected.\n\\item $\\calE_\\calG$ is \\emph{globally weakly connected} if it is (locally) \n weakly connected and $\\calG$ is connected as a groupoid.\n\\end{enumerate}\nWhen $\\calG$ is a pair groupoid (i.e., in our case), all the above notions \ncoincide.\n\\end{remark}\n\n\\begin{proposition}\nThe terminal structure on any set $X$ is always unital and connected.\n\\end{proposition}\n\nThe intersection of unital coarse structures on a set $X$ is again unital, and \nsimilarly for connected coarse structures. Thus, for any $\\calE' \\subseteq \n\\calE_{|X|_1}$, there are \\emph{unital}, \\emph{connected}, and \\emph{connected \nunital} coarse structures on $X$ generated by $\\calE'$. These can be described \nrather simply as\n\\begin{align*}\n \\langle \\calE' \\rangle_X^\\TXTuni\n & \\defeq \\bigl\\langle \\calE', \\set{1_X} \\bigr\\rangle_X, \\\\\n \\langle \\calE' \\rangle_X^\\TXTconn\n & \\defeq \\bigl\\langle \\calE', \\set{\\set{e}\n \\suchthat e \\in X^{\\cross 2}} \\bigr\\rangle_X,\n\\quad\\text{and} \\\\\n \\langle \\calE' \\rangle_X^\\TXTconnuni\n & \\defeq \\bigl\\langle \\calE', \\set{\\set{e}\n \\suchthat e \\in X^{\\cross 2}}, \\set{1_X} \\bigr\\rangle_X,\n\\end{align*}\nrespectively.\n\n\\begin{definition}\nThe \\emph{initial unital}, \\emph{initial connected}, or \\emph{initial connected \nunital coarse structure} on a set $X$ is the minimum coarse structure having \nthe given property or properties, respectively. 
Denote the resulting coarse \nspaces by $|X|_0^\\TXTuni$, $|X|_0^\\TXTconn$, and $|X|_0^\\TXTconnuni$, \nrespectively.\n\\end{definition}\n\nClearly, $\\calE_{|X|_0^\\TXTuni} = \\langle \\set{1_X} \\rangle_X$, so a coarse \nstructure on $X$ is unital if and only if it contains $\\calE_{|X|_0^\\TXTuni}$. \nSimilarly for the other properties. Note in particular that \n$\\calE_{|X|_0^\\TXTconn}$ consists of all the finite subsets of $X^{\\cross 2}$.\n\n\\begin{remark}\nIn the groupoid case, the intersection of (locally) connected coarse structures \non a given groupoid (in the sense of Remark~\\ref{rmk:gpd-conn}) is again \n(locally) connected, and so all of the above holds. However, the intersection \nof weakly connected coarse structures (see the same Remark) may not be weakly \nconnected, so there may not be a minimum weakly connected coarse structure on a \ngiven groupoid.\n\\end{remark}\n\nWe get an obvious notion of \\emph{unital subspace} of any coarse space $X$. \nClearly, $X' \\subseteq X$ is a unital subspace if and only if $1_{X'}$ is an \nentourage of $X$. (Bartels calls the set of unital subspaces of $X$ the \n``domain of $\\calE_X$'' \\cite{MR1988817}*{Def.~3.2}.) Slightly more is true.\n\n\\begin{proposition}\\label{prop:uni-subsp}\nA subspace $X' \\subseteq X$ of a coarse space $X$ is unital if and only if it \noccurs as the left (or right) support of some entourage of $X$.\n\\end{proposition}\n\n\\begin{proof}\nIf $X'$ is a unital subspace, then $X' = 1_{X'} \\cdot X$. Conversely, if $X' = \nE \\cdot X$ for some $E \\in \\calE_X$, then $1_{X'} \\subseteq E \\circ \nE^\\transpose$ must be an entourage of $X$.\n\\end{proof}\n\nSimilarly, we get a notion of \\emph{connected subspace} of $X$.\n\n\\begin{definition}\nA (connected) \\emph{component} of a coarse space $X$ is a maximal connected \nsubspace of $X$.\n\\end{definition}\n\n\\begin{proposition}\nAny coarse space $X$ is partitioned, as a set, into (a disjoint union of) its \nconnected components.\n\\end{proposition}\n\nWe caution this decomposition is not necessarily a coproduct (in the coarse or \nprecoarse category); see Corollary~\\ref{cor:PCrs-fin-components}.\n\n\n\\subsection{Local properness, preservation, and coarse maps}\n\nRecall that any (set) map $f \\from Y \\to X$ induces a map (indeed, a groupoid \nmorphism) $f^{\\cross 2} \\from Y^{\\cross 2} \\to X^{\\cross 2}$. Insisting that \n$f$ be proper is too strong a requirement when $Y$ is a nonunital coarse space. \nWe thus introduce the following weaker requirement.\n\n\\begin{definition}\\label{def:loc-prop}\nA map $f \\from Y \\to X$ is \\emph{locally proper for $F \\in \\calE_{|Y|_1}$} if \n$E \\defeq f^{\\cross 2}(F) \\in \\calE_{|X|_1}$ and the restriction $f^{\\cross 2} \n|_F \\from F \\to X^{\\cross 2}$ (or $f^{\\cross 2} |_F^E$) is a proper (set) map. \nIf $Y$ is a coarse space, then $f$ is \\emph{locally proper} (for $\\calE_Y$) if \nit is locally proper for all $F \\in \\calE_Y$.\n\\end{definition}\n\nLocal properness only requires a coarse structure on the domain, so we cannot \nsay that the composition of locally proper maps is again locally proper. \nNonetheless, separating local properness from the following will be useful when \nwe define push-forward coarse structures (below).\n\n\\begin{definition}\nSuppose $X$ is a coarse space. A map $f \\from Y \\to X$ \\emph{preserves $F \\in \n\\calE_{|Y|_1}$} (with respect to $\\calE_X$) if $E \\defeq f^{\\cross 2}(F) \\in \n\\calE_X$. 
If $Y$ is also a coarse space, then $f$ \\emph{preserves entourages} \n(of $\\calE_Y$, with respect to $\\calE_X$) if $f$ preserves every $F \\in \n\\calE_Y$.\n\\end{definition}\n\n\\begin{definition}\nSuppose $X$ is a coarse space. A map $f \\from Y \\to X$ is \\emph{coarse for $F \n\\in \\calE_{|Y|_1}$} if $f$ is locally proper for $F$ and if $f$ preserves $F$. \nIf $Y$ is also a coarse space, then $f$ is \\emph{coarse} (or is a \\emph{coarse \nmap}) if $f$ is coarse for every $F \\in \\calE_Y$, i.e., if $f$ is locally \nproper and preserves entourages.\n\\end{definition}\n\n\\begin{remark}\nThe definition of ``coarse map'' is slightly redundant: If $f$ preserves \nentourages, then $f^{\\cross 2}(F) \\in \\calE_X \\subseteq \\calE_{|X|_1}$ (which \nis one of the stipulations of local properness).\n\\end{remark}\n\n\\begin{proposition}\\label{prop:crs-map-comp}\nConsider a composition of $Z \\nameto{\\smash{g}} Y \\nameto{\\smash{f}} X$, where \n$X$ and $Y$ are coarse spaces. If $f$, $g$ are locally proper and $g$ preserves \nentourages, then $f \\circ g$ is locally proper.\n\\end{proposition}\n\n\\begin{corollary}\nA composition of coarse maps is again a coarse map.\n\\end{corollary}\n\n\n\\subsection{Basic properties of maps}\n\nWe first concentrate on local properness.\n\n\\begin{proposition}\\label{prop:loc-prop}\nSuppose $f \\from Y \\to X$ is a set map, $F \\in \\calE_{|Y|_1}$, and $E \\defeq \nf^{\\cross 2}(F)$. The following are equivalent:\n\\begin{enumerate}\n\\item\\label{prop:loc-prop:I} $f$ is locally proper for $F$;\n\\item\\label{prop:loc-prop:II} the restrictions $f |_{F \\cdot Y}$ and $f |_{Y \n \\cdot F}$ (or $f |_{F \\cdot Y}^{E \\cdot X}$ and $f |_{Y \\cdot F}^{X \\cdot \n E}$) of $f$ to the left and right supports of $F$ are proper; and\n\\item\\label{prop:loc-prop:III} $f^{-1}(S) \\cdot F$ and $F \\cdot f^{-1}(S)$ are \n finite for all finite $S \\subseteq X$.\n\\end{enumerate}\n\\end{proposition}\n\n\\begin{proof}\n(We omit proofs of the symmetric cases.) Consider the diagram\n\\[\\begin{CD}\n F @>{f^{\\cross 2} |_F^E}>> E \\\\\n @V{\\pi_1 |_F^{F \\cdot Y}}VV @V{\\pi_1 |_E^{E \\cdot X}}VV \\\\\n F \\cdot Y @>{f |_{F \\cdot Y}^{E \\cdot X}}>> E \\cdot X\n\\end{CD}\\quad.\\]\nObserve the following: the above diagram commutes, i.e.,\n\\[\n \\pi_1 |_E^{E \\cdot X} \\circ f^{\\cross 2} |_F^E\n = f |_{F \\cdot Y}^{E \\cdot X} \\circ \\pi_1 |_F^{F \\cdot Y};\n\\]\nthe two maps emanating from $F$ are surjections; and $\\pi_1 |_F^{F \\cdot Y}$ is \nproper. We now apply Proposition~\\ref{prop:prop} several times.\n\n\\enumref{prop:loc-prop:I} \\textimplies{} \\enumref{prop:loc-prop:II}: $f^{\\cross \n2} |_F^E$ and $\\pi_1 |_E^{E \\cdot X}$ are proper, so their composition is \nproper. Since $\\pi_1 |_F^{F \\cdot Y}$ is surjective, $f |_{F \\cdot Y}$ is \nproper.\n\n\\enumref{prop:loc-prop:II} \\textimplies{} \\enumref{prop:loc-prop:III}: $f |_{F \n\\cdot Y}$ and $\\pi_1 |_F^{F \\cdot Y}$ are proper, so their composition is \nproper. Then\n\\begin{equation}\\label{prop:loc-prop:pf:eq}\\begin{split}\n f^{-1}(S) \\cdot F & = \\pi_2( (\\pi_1 |_F)^{-1}(f^{-1}(S)) ) \\\\\n & = \\pi_2( (f_{F \\cdot Y} \\circ \\pi_1 |_F^{F \\cdot Y})^{-1}(S) )\n\\end{split}\\end{equation}\nis finite if $S \\subseteq X$ is finite.\n\n\\enumref{prop:loc-prop:III} \\textimplies{} \\enumref{prop:loc-prop:I}: By \n\\eqref{prop:loc-prop:pf:eq} and since $\\pi_2 |_F$ is proper, the composition \n$f_{F \\cdot Y}^{E \\cdot X} \\circ \\pi_1 |_F^{F \\cdot Y}$ is proper. 
Hence \n$f^{\\cross 2} |_F^E$ is proper and, since $f^{\\cross 2} |_F^E$ is surjective, \nso is $\\pi_1 |_E^{E \\cdot X}$.\n\\end{proof}\n\n\\begin{corollary}\nIf a set map $f \\from Y \\to X$ is globally proper, then $f$ is locally proper \nfor any $F \\in \\calE_{|Y|_1}$ (so $f$ is locally proper for any coarse \nstructure on $Y$).\n\\end{corollary}\n\n\\begin{proof}\nThis follows from \\enumref{prop:loc-prop:III} and Corollary~\\ref{cor:prop-ax}.\n\\end{proof}\n\n\\begin{corollary}\nIf $X$ is a coarse space and $X' \\subseteq X$ is a subspace, then the inclusion \nof $X'$ into $X$ is a coarse map. Thus the restriction of any coarse map to a \nsubspace is a coarse map.\n\\end{corollary}\n\n\\begin{proof}\nBy definition of the subspace coarse structure, the inclusion map preserves \nentourages. The inclusion map is injective, hence (globally) proper, hence \nlocally proper.\n\\end{proof}\n\n\\begin{corollary}\\label{cor:loc-prop-uni}\nSuppose $Y$ is a coarse space. A map $f \\from Y \\to X$ is locally proper if and \nonly if the restriction of $f$ to every unital subspace of $Y$ is proper. Thus, \nfor $Y$ unital, $f$ is locally proper if and only if $f$ is globally proper.\n\\end{corollary}\n\n\\begin{proof}\nThis follows from \\enumref{prop:loc-prop:II} and \nProposition~\\ref{prop:uni-subsp}.\n\\end{proof}\n\nFor (discrete) unital coarse spaces, our notion of ``coarse map'' is just the \nclassical notion. It also follows that local properness of a map $f \\from Y \\to \nX$ is a property which can be defined in terms of the unital subspaces of the \ncoarse structure on $Y$. In particular, if $f$ is locally proper, then $f$ \nwould also be locally proper for any coarse structure on $Y$ (possibly larger \nthan $\\calE_Y$) with the same unital subspaces.\n\n\\begin{remark}\nOne may take the \\emph{definition} of local properness to be the \ncharacterization of the above Corollary, i.e., define $f \\from Y \\to X$ to be \nlocally proper if $f$ is proper on every unital subspace of $Y$ (perhaps \n``unital properness'' would be a more apt term). Indeed, this is the form in \nwhich local properness appears in Bartels's definition of ``coarse map'' \n\\cite{MR1988817}*{Def.~3.3}, and hence (modulo our coarse spaces not carrying \ntopologies) our definition of ``coarse map'' is the same as Bartels's. More \ngenerally, one could remove coarse structures entirely, and define local \nproperness for sets equipped with families of supports (i.e., of unital \nsubspaces). However, we will not do so since we are mainly concerned with \ncoarse maps, for which Definition~\\ref{def:loc-prop} is most convenient.\n\\end{remark}\n\n\\begin{corollary}\nCoarse maps send unital subspaces to unital subspaces, i.e., if $f \\from Y \\to \nX$ is a coarse map and $Y' \\subseteq Y$ is a unital subspace, then the image \n$f(Y') \\subseteq X$ is a unital subspace.\n\\end{corollary}\n\n\\begin{proposition}[``algebraic'' operations and local properness]%\n \\label{prop:loc-prop-alg}\nIf $f \\from Y \\to X$ is locally proper for $F, F' \\in \\calE_{|Y|_1}$, then $f$ \nis locally proper for $F + F'$, $F \\circ F'$, $F^\\transpose$, and all subsets \nof $F$. Also, $f$ is locally proper for all singletons $\\set{e}$, $e \\in \nY^{\\cross 2}$, hence is locally proper for $\\calE_{|Y|_0^\\TXTconn} \\supseteq \n\\calE_{|Y|_0}$. 
(However, $f$ is locally proper for the unit $1_Y$ if and only if $f$ is globally proper.)
\end{proposition}

\begin{proof}
The only nontrivial assertion is that $f$ is locally proper for $F \circ F'$. By assumption, $f^{\cross 2}(F), f^{\cross 2}(F') \in \calE_{|X|_1}$ and, since
\[
  f^{\cross 2}(F \circ F') \subseteq f^{\cross 2}(F) \circ f^{\cross 2}(F'),
\]
$f^{\cross 2}(F \circ F')$ also satisfies the properness axiom, by Proposition~\ref{prop:prop-ax-alg}. We have a commutative diagram
\[\begin{CD}
  F \circ F' @>{f^{\cross 2} |_{F \circ F'}}>> X^{\cross 2} \\
  @V{\pi_1 |_{F \circ F'}^{(F \circ F') \cdot Y}}VV @V{\pi_1}VV \\
  (F \circ F') \cdot Y @>{f |_{(F \circ F') \cdot Y}}>> X
\end{CD}\quad.\]
By the same Proposition, $F \circ F' \in \calE_{|Y|_1}$, so $\pi_1 |_{F \circ F'}^{(F \circ F') \cdot Y}$ is proper. Since $(F \circ F') \cdot Y \subseteq F \cdot Y$ and $f |_{F \cdot Y}$ is proper by Proposition~\ref{prop:loc-prop}\enumref{prop:loc-prop:II}, $f |_{(F \circ F') \cdot Y}$ is proper. Hence the composition
\[
  f |_{(F \circ F') \cdot Y} \circ \pi_1 |_{F \circ F'}^{(F \circ F') \cdot Y}
  = \pi_1 \circ f^{\cross 2} |_{F \circ F'}
\]
is proper, so $f^{\cross 2} |_{F \circ F'}$ is proper by Proposition~\ref{prop:prop}\enumref{prop:prop:II}.
\end{proof}

\begin{corollary}\label{cor:loc-prop-gen}
If $f \from Y \to X$ is locally proper for all $F \in \calE' \subseteq \calE_{|Y|_1}$, then $f$ is locally proper for the coarse structure $\langle \calE' \rangle_Y$ on $Y$ generated by $\calE'$ (and for the connected coarse structure $\langle \calE' \rangle_Y^\TXTconn$ generated by $\calE'$).
\end{corollary}

\begin{proof}
This follows immediately from the Proposition and Corollary~\ref{cor:crs-struct-gen}.
\end{proof}

The same evidently does not hold for the unital (or connected unital) coarse structure generated by $\calE'$.

We now state some parallel results for preservation of entourages. Combining these with the above results for local properness, we get parallel results for coarseness of maps.

\begin{proposition}[``algebraic'' operations and preservation]
Suppose $X$ is a coarse space. If $f \from Y \to X$ preserves $F, F' \in \calE_{|Y|_1}$, then $f$ preserves $F + F'$, $F \circ F'$, $F^\transpose$, and all subsets of $F$. Also, $f$ preserves all singletons $\set{1_y}$, $y \in Y$ (hence preserves $\calE_{|Y|_0}$); if $X$ is connected, $f$ preserves all singletons $\set{e}$, $e \in Y^{\cross 2}$ (hence preserves $\calE_{|Y|_0^\TXTconn}$); and if $X$ is unital, $f$ preserves $1_Y$.
\end{proposition}

\begin{proof}
The only (slightly) nontrivial one is $F \circ F'$, for which one uses
\[
  f^{\cross 2}(F \circ F') \subseteq f^{\cross 2}(F) \circ f^{\cross 2}(F').
\]
\end{proof}

\begin{corollary}
Suppose $X$ is a coarse space. If $f \from Y \to X$ preserves $\calE' \subseteq \calE_{|Y|_1}$, then $f$ preserves the coarse structure $\langle \calE' \rangle_Y$ on $Y$ generated by $\calE'$.
(If $X$ is also connected, then $f$ \npreserves $\\langle \\calE' \\rangle_Y^\\TXTconn$; if $X$ is unital, then $f$ \npreserves $\\langle \\calE' \\rangle_Y^\\TXTuni$; if $X$ is both, then $f$ \npreserves $\\langle \\calE' \\rangle_Y^\\TXTconnuni$.)\n\\end{corollary}\n\n\\begin{proposition}[``algebraic'' operations and coarseness]%\n \\label{prop:coarse-alg}\nSuppose $X$ is a coarse space. If $f \\from Y \\to X$ is coarse for $F, F' \\in \n\\calE_{|Y|_1}$, then $f$ is coarse for $F + F'$, $F \\circ F'$, $F^\\transpose$, \nand all subsets of $F$. Also, $f$ is coarse for all singletons $\\set{1_y}$, $y \n\\in Y$ (hence is coarse for $\\calE_{|Y|_0}$); if $X$ is connected, $f$ is \ncoarse for all singletons $\\set{e}$, $e \\in Y^{\\cross 2}$ (hence is coarse for \n$\\calE_{|Y|_0^\\TXTconn}$); and if $X$ is unital and $f$ is proper, $f$ is \ncoarse for $1_Y$.\n\\end{proposition}\n\n\\begin{corollary}\\label{cor:coarse-gen}\nSuppose $X$ and $Y$ are coarse spaces, $\\calE' \\subseteq \\calE_{|Y|_1}$, and $f \n\\from Y \\to X$ is a set map.\n\\begin{enumerate}\n\\item If $\\calE_Y = \\langle \\calE' \\rangle_Y$, then $f$ is a coarse map if and \n only if $f$ is coarse for all $F \\in \\calE'$.\n\\item If $\\calE_Y = \\langle \\calE' \\rangle_Y^\\TXTconn$, then $f$ is a coarse \n map if and only if $f$ is coarse for all $F \\in \\calE'$ and all $\\set{e}$, \n $e \\in Y^{\\cross 2}$.\n\\item If $\\calE_Y = \\langle \\calE' \\rangle_Y^\\TXTuni$, then $f$ is a coarse map \n if and only if $f$ is proper and $f$ is coarse for (or preserves) all $F \n \\in \\calE'$.\n\\item If $\\calE_Y = \\langle \\calE' \\rangle_Y^\\TXTconnuni$, then $f$ is a coarse \n map if and only if $f$ is proper and $f$ is coarse for (or preserves) all \n $F \\in \\calE'$ and all $\\set{e}$, $e \\in Y^{\\cross 2}$.\n\\end{enumerate}\nNote that requiring that $f$ be coarse for all $\\set{e}$, $e \\in Y^{\\cross 2}$, \nis equivalent to requiring $f(y)$, $f(y')$ be connected for all $y, y' \\in Y$.\n\\end{corollary}\n\nIf $f, f' \\from Y \\to X$ are (globally) proper maps, then certainly $f \\cross \nf' \\from Y \\cross Y \\to X \\cross X$ is proper. The same also holds locally, and \nthis will be essential later.\n\n\\begin{proposition}\\label{prop:loc-prop-prod}\nIf (set) maps $f, f' \\from Y \\to X$ are locally proper for $F \\in\n\\calE_{|Y|_1}$, then:\n\\begin{enumerate}\n\\item $E \\defeq (f \\cross f')(F) \\subseteq X^{\\cross 2}$ satisfies the \n properness axiom; and\n\\item the restriction $(f \\cross f') |_F^E \\from F \\to E$ is a proper map.\n\\end{enumerate}\n\\end{proposition}\n\n\\begin{proof}\nFix $F \\in \\calE_{|Y|_1}$, put $E \\defeq (f \\cross f')(F)$, and consider the \ncommutative diagram\n\\[\\begin{CD}\n F @>{(f \\cross f') |_F^E}>> E \\\\\n @V{\\pi_1 |_F^{F \\cdot Y}}VV @V{\\pi_1 |_E}VV \\\\\n F \\cdot Y @>{f |_{F \\cdot Y}}>> X\n\\end{CD}\\quad.\\]\nThe composition along the left and bottom is proper, and thus so is composition \nalong the top and right. Consequently, $(f \\cross f') |_F^E$ is proper. Since \n$(f \\cross f') |_F^E$ is surjective, $\\pi_1 |_E$ is proper and similarly for \n$\\pi_2 |_E$.\n\\end{proof}\n\nWe have the following ``very'' local analogue of Proposition~\\ref{prop:prop}. 
For a more general analogue, we will need push-forward coarse structures.

\begin{proposition}\label{prop:loc-prop-for-comp}
Consider the composition of (set) maps $Z \nameto{\smash{g}} Y \nameto{\smash{f}} X$, supposing that $G \in \calE_{|Z|_1}$ and putting $F \defeq g^{\cross 2}(G)$:
\begin{enumerate}
\item\label{prop:loc-prop-for-comp-I} If $g$ is locally proper for $G$ and $f$ is locally proper for $F$, then $f \circ g$ is locally proper for $G$.
\item\label{prop:loc-prop-for-comp-II} If $f \circ g$ is locally proper for $G$, then $g$ is locally proper for $G$.
\item\label{prop:loc-prop-for-comp-III} If $f \circ g$ is locally proper for $G$, then $f$ is locally proper for $F$.
\end{enumerate}
\end{proposition}

\begin{proof}
Put $E \defeq f^{\cross 2}(F)$. We apply Proposition~\ref{prop:prop} to the commutative diagram
\[\begin{CD}
G @>{g^{\cross 2} |_G^F}>> F @>{f^{\cross 2} |_F^E}>> E \\
@V{\pi_1 |_G}VV @V{\pi_1 |_F}VV @V{\pi_1 |_E}VV \\
Z @>{g}>> Y @>{f}>> X
\end{CD}\quad.\]
\enumref{prop:loc-prop-for-comp-I} is clear. For \enumref{prop:loc-prop-for-comp-II} and \enumref{prop:loc-prop-for-comp-III}: If $f \circ g$ is locally proper for $G$, then $\pi_1 |_E$ and
\[
  (f \circ g)^{\cross 2} |_G^E
  = f^{\cross 2} |_F^E \circ g^{\cross 2} |_G^F
\]
are proper. By the latter, $g^{\cross 2} |_G^F$ is proper. $g^{\cross 2} |_G^F$ is surjective, so $f^{\cross 2} |_F^E$ is also proper. Then
\[
  \pi_1 |_E \circ f^{\cross 2} |_F^E = f \circ \pi_1 |_F
\]
is proper, so $\pi_1 |_F$ is proper.
\end{proof}


\subsection{Pull-back and push-forward coarse structures}

\begin{definition}
Suppose $X$ is a coarse space. The \emph{pull-back coarse structure} (of $\calE_X$) on $Y$ along (a set map) $f \from Y \to X$ is
\[
  f^* \calE_X \defeq \set{F \in \calE_{|Y|_1} \suchthat
    \text{$f$ is coarse for $F$}}.
\]
\end{definition}

By Proposition~\ref{prop:coarse-alg}, $f^* \calE_X$ is actually a coarse structure. If $X$ is connected, then $f^* \calE_X$ is connected. If $X$ is unital and $f$ is (globally) proper, then $f^* \calE_X$ is unital. The following are clear.

\begin{proposition}
If $X$ is a coarse space and $f \from Y \to X$ is a set map, then $f^* \calE_X$ is the maximum coarse structure on $Y$ which makes $f$ into a coarse map.
\end{proposition}

\begin{corollary}\label{cor:crs-factor-I}
If $f \from Y \to X$ is a coarse map, then $f$ factors as a composition of coarse maps
\[
  Y \nameto{\smash{\beta}} |Y|_{f^* \calE_X} \nameto{\smash{\utilde{f}}} X,
\]
where $\beta = \id_Y$ and $\utilde{f} = f$ as set maps.
\end{corollary}

More generally, if $\set{X_j \suchthat j \in J}$ ($J$ some index set) is a collection of coarse spaces and $\set{f_j \from Y \to X_j}$ is a collection of set maps, then
\[
  \calE \defeq \bigintersect_{j \in J} (f_j)^* \calE_{X_j}
\]
is the maximum coarse structure on $Y$ which makes all the $f_j$ into coarse maps. If $Y$ is a coarse space and the $f_j \from Y \to X_j$ are all coarse maps, then each $f_j$ factors as a composition of coarse maps $\utilde{f}_j \circ \beta$ in the obvious way. Moreover, if all the $X_j$ are connected, then $\calE$ is connected; if all the $X_j$ are unital and all the $f_j$ are (globally) proper, then $\calE$ is unital.

\begin{definition}
Suppose $Y$ is a coarse space.
The \\emph{push-forward coarse structure} (of \n$\\calE_Y$) on $X$ along a \\emph{locally proper} map $f \\from Y \\to X$ is\n\\[\n f_* \\calE_Y \\defeq \\bigl\\langle \\set{f^{\\cross 2}(F) \\suchthat F \\in\n \\calE_Y} \\bigr\\rangle.\n\\]\nWe similarly define \\emph{unital}, \\emph{connected}, and \\emph{connected unital \npush-forward coarse structures}.\n\\end{definition}\n\nIf $Y$ is connected and $f$ is surjective, then $f_* \\calE_Y$ is connected. \nSimilarly, if $Y$ is unital (hence $f$ globally proper) and $f$ is surjective, \nthen $f_* \\calE_Y$ is unital.\n\n\\begin{proposition}\nIf $Y$ is a coarse space and $f \\from Y \\to X$ is a locally proper map, then \n$f_* \\calE_Y$ is the minimum coarse structure on $X$ which makes $f$ into a \ncoarse map.\n\\end{proposition}\n\n\\begin{corollary}\\label{cor:crs-factor-II}\nIf $f \\from Y \\to X$ is a coarse map, then $f$ factors as a composition of \ncoarse maps\n\\[\n Y \\nameto{\\smash{\\tilde{f}}} |X|_{f_* \\calE_Y} \\nameto{\\smash{\\alpha}} X\n\\]\nwhere $\\tilde{f} = f$ and $\\alpha = \\id_X$ as set maps.\n\\end{corollary}\n\nOf course, there are obvious unital, connected, and connected unital versions \nof the above. For the unital versions one needs $f$ to be proper and $Y$ should \nprobably be unital; for the connected versions, $Y$ should probably be \nconnected.\n\nMore generally, if $\\set{Y_j \\suchthat j \\in J}$ ($J$ some index set) is a \ncollection of coarse spaces and $\\set{f_j \\from Y_j \\to X}$ is a collection of \nlocally proper maps, then\n\\[\n \\calE \\defeq \\bigl\\langle (f_j)_* \\calE_{Y_j} \\bigr\\rangle\n\\]\nis the minimum coarse structure on $X$ which makes all the $f_j$ into coarse \nmaps. If $X$ is a coarse space and the $f_j \\from Y_j \\to X$ are all coarse \nmaps, then each $f_j$ factors as $\\alpha \\circ \\tilde{f}_j$. Again, there are \nunital, connected, and connected unital versions of this.\n\n\\begin{remark}\nWe emphasize that whereas one can pull back coarse structures along \\emph{any} \nset map (or collection of set maps), one can only push forward coarse \nstructures along \\emph{locally proper} maps. If one wants all the coarse \nstructures to be unital (and take unital, possibly connected, push-forwards), \nthen one evidently requires all maps to be (globally) proper.\n\\end{remark}\n\nIt is easy to see what happens when one pushes a coarse structure forward and \nthen pulls it back along the same map (or vice versa).\n\n\\begin{proposition}\nIf $Y$ is a coarse space and $f \\from Y \\to X$ is a locally proper map, then \n$\\calE_Y \\subseteq f^* f_* \\calE_Y$.\n\\end{proposition}\n\n\\begin{proof}\n$f$ is coarse as a map $Y \\to |X|_{f_* \\calE_Y}$. 
Applying \nCorollary~\\ref{cor:crs-factor-I}, this map factors as $Y \\nameto{\\smash{\\beta}} \n|Y|_{f^* f_* \\calE_Y} \\to |X|_{f_* \\calE_Y}$ where $\\beta$ is the identity as a \nset map.\n\\end{proof}\n\n\\begin{proposition}\nIf $X$ is a coarse space and $f \\from Y \\to X$ is any set map, then $f_* f^* \n\\calE_X \\subseteq \\calE_X$.\n\\end{proposition}\n\n\\begin{proof}\nNow $f$ is coarse as a map $|Y|_{f^* \\calE_X} \\to X$, to which we apply \nCorollary~\\ref{cor:crs-factor-II}.\n\\end{proof}\n\nUsing push-forward coarse structures (and Corollary~\\ref{cor:loc-prop-gen}), we \ncan ``restate'' Proposition~\\ref{prop:loc-prop-for-comp} as follows.\n\n\\begin{proposition}\\label{prop:loc-prop-comp}\nConsider the composition of (set) maps $Z \\nameto{\\smash{g}} Y \n\\nameto{\\smash{f}} X$, where $Z$ is a coarse space:\n\\begin{enumerate}\n\\item\\label{prop:loc-prop-comp:I} If $g$ is locally proper and $f$ is locally \n proper for the push-forward coarse structure $g_* \\calE_Z$ on $Y$, then $f \n \\circ g$ is locally proper.\n\\item\\label{prop:loc-prop-comp:II} If $f \\circ g$ is locally proper, then $g$ \n is locally proper.\n\\item\\label{prop:loc-prop-comp:III} If $f \\circ g$ is locally proper, then $f$ \n is locally proper for the push-forward coarse structure $g_* \\calE_Z$ on \n $Y$.\n\\end{enumerate}\nThe above also hold with connected push-forward coarse structures in place of \npush-forward coarse structures. Also, that injectivity implies global \nproperness implies local properness.\n\\end{proposition}\n\n\\begin{remark}\nApplying the above Proposition with $Z \\defeq |Z|_1$ having the terminal coarse \nstructure, we get \\enumref{prop:prop:I} and \\enumref{prop:prop:II} of \nProposition~\\ref{prop:prop}. If $g$ is surjective, then the push-forward coarse \nstructure $g_* \\calE_{|Z|_1}$ is the terminal coarse structure $\\calE_{|Y|_1}$ \nand we get \\enumref{prop:prop:III} as well.\n\\end{remark}\n\n\n\n\n\\section{The precoarse categories}\\label{sect:PCrs}\n\nWe now define several categories of coarse spaces, whose arrows are coarse \nmaps, and examine their properties. These \\emph{precoarse categories} differ \nfrom the coarse categories, which are quotients of these categories (see \n\\S\\ref{sect:Crs}).\n\n\n\\subsection{Set and category theory}\\label{subsect:set-cat}\n\nWe will be unusually careful with our set and category theoretic constructions. \nThe following can mostly be ignored safely, though will be needed eventually \nfor rigorous, ``canonical'' constructions (e.g., when we consider sets of \n``all'' modules over a coarse space).\n\nAssuming the Grothendieck axiom that any set is contained in some universe, we \nfirst fix a universe $\\calU$ (containing $\\omega$). \\emph{Small} (or \n$\\calU$-small) objects are elements of $\\calU$. A \\emph{$\\calU$-category} is \none whose object set is a subset of $\\calU$. A ($\\calU$-)small category is one \nwhose object set (hence morphism set and composition law) is in $\\calU$. A \nsmall category is necessarily a $\\calU$-category, but not vice versa. A \n$\\calU$-category in turn is $\\calU^+$-small, where $\\calU^+$ denotes the \nsmallest universe having $\\calU$ as an element. 
A \\emph{locally small} \n$\\calU$-category is a $\\calU$-category whose $\\Hom$-sets $\\Hom(\\cdot,\\cdot)$ \nare all small.\n\nRecall the notion of quotient categories (from, e.g., \\cite{MR1712872}*{Ch.~II \n\\S{}8}): Given a category $\\calC$ and an equivalence relation $\\sim$ on each \n$\\Hom$-set of $\\calC$, there is a \\emph{quotient category} $\\calC\/{\\sim}$ and a \n\\emph{quotient functor} $\\Quotient \\from \\calC \\to \\calC\/{\\sim}$ satisfying the \nfollowing universal property: For all functors $F \\from \\calC \\to \\calC'$ \n($\\calC'$ any category, which can be taken to be $\\calU$-small if $\\calC$ is \n$\\calU$-small) such that $f \\sim f'$ ($f$, $f'$ in some $\\Hom$-set of $\\calC$) \nimplies $F(f) \\sim F(f')$, there is a unique functor $F' \\from \\calC\/{\\sim} \\to \n\\calC'$ such that $F = F' \\circ \\Quotient$. Moreover, if the equivalence \nrelation $\\sim$ is preserved under composition then, for all objects $X$, $Y$ \nof $\\calC$, the set $\\Hom_{\\calC\/{\\sim}}(\\Quotient(Y),\\Quotient(X))$ is in \nnatural bijection with the set of $\\sim$-equivalence classes of \n$\\Hom_{\\calC}(Y,X)$.\n\nAs usual, $\\CATSet$ denotes the category of small sets (and set maps). \n$\\CATTop$ is the category of small topological spaces and continuous maps. \nForgetful functors will be denoted by $\\Forget$, with the source and target \ncategories (the latter often being $\\CATSet$) implied by context. For a \ncategory $\\calC$ equipped with a forgetful functor to $\\CATSet$, we denote the \nfull subcategory of $\\calC$ of nonempty objects (i.e., those $X$ with \n$\\Forget(X) \\neq \\emptyset$) by $\\CATne{\\calC}$.\n\nFor the most part, henceforth $X$, $Y$, and $Z$ will be (small) coarse spaces, \nand $f \\from Y \\to X$ and $g \\from Z \\to Y$ coarse maps. $\\setZplus \\defeq \n\\set{n \\in \\setZ \\suchthat n \\geq 0}$ is the set of nonnegative integers and \nsimilarly $\\setRplus \\defeq \\coitvl{0,\\infty}$ is the set of nonnegative real \nnumbers.\n\n\n\\subsection{The precoarse categories}\n\n\\begin{definition}\nThe \\emph{precoarse category} $\\CATPCrs$ has as objects all (small) coarse \nspaces and as arrows coarse maps. The \\emph{connected precoarse category} \n$\\CATConnPCrs$ is full subcategory of $\\CATPCrs$ consisting of the connected \ncoarse spaces. Similarly define the \\emph{unital precoarse category} \n$\\CATUniPCrs$ and the \\emph{connected unital precoarse category} \n$\\CATConnUniPCrs$.\n\\end{definition}\n\n\\begin{remarks}\nIn many ways, the category $\\CATne{\\CATConnPCrs}$ of nonempty connected coarse \nspaces, i.e., coarse spaces with exactly one connected component, is more \nnatural. Observe that that $\\CATConnUniPCrs = \\CATConnPCrs \\intersect \n\\CATUniPCrs$ is a full subcategory of the other three categories. (One might \nargue that the unital categories above are not the ``correct'' ones and further \ninsist that the arrows in the unital categories should be ``unit \npreserving'', i.e., surjective as set maps. However, the above unital \ncategories are the usual ones used in coarse geometry; see \\S\\ref{sect:top-crs} \nand especially Corollary~\\ref{cor:UniCrs-RoeCrs-equiv}.)\n\\end{remarks}\n\nWe will analyze various properties of the categories $\\CATPCrs$ and \n$\\CATConnPCrs$ (which are better behaved than the others). In particular, we \nexamine limits and colimits in these categories, which include as special cases \nproducts and coproducts, equalizers and coequalizers, and terminal and initial \nobjects. 
(We use the standard terminology from category theory, topology, etc.: \nlimits are also called ``projective limits'' or ``inverse limits'' and colimits \nare called ``inductive limits'' or ``direct limits'', though ``direct limits'' \nare often more specifically filtered colimits.)\n\nLet us first recall some standard terminology (see, e.g., \\cite{MR1712872}). \nLet $\\calC$ be a category and suppose $\\calF_X \\from \\calJ \\to \\calC$ ($\\calJ$ \na small, often finite, category) is a functor. A \\emph{cone $\\nu \\from X \\to \n\\calF_X$ to $\\calF_X$} consists of an $X \\in \\Obj(\\calC)$ and arrows $\\nu_j \n\\from X \\to X_j \\defeq \\calF_X(j)$, $j \\in \\Obj(\\calJ)$, such that the \ntriangles emanating from $X$ commute. A \\emph{limit} in $\\calC$ for $\\calF_X$ \nis given by a cone $X \\to \\calF_X$ which is universal, i.e., a \\emph{limiting \ncone}. Limits of $\\calF_X$ in $\\calC$ are unique up to natural isomorphism. \nThus we will sometimes follow the customary abuses of referring to \\emph{the} \nlimit of $\\calF_X$ and of referring to the object $X$ (often denoted $\\OBJlim \n\\calF_X$) as the limit with the $\\nu_j$ understood. A functor $F \\from \\calC \n\\to \\calC'$ \\emph{preserves limits} if whenever $\\nu \\from X \\to \\calF_X$ is a \nlimiting cone in $\\calC$, $F \\circ \\nu \\from F(X) \\to F \\circ \\calF_X$ is \nlimiting in $\\calC'$. Dually, one has \\emph{cones from $\\calF_Y$}, \n\\emph{colimits}, \\emph{colimiting cones}, and functors which \\emph{preserve \ncolimits}. All limits and colimits considered will be small. In particular, the \ncategory $\\calJ$ and functors $\\calF_X$ and $\\calF_Y$ will be small.\n\nFirst, we examine the relation between $\\CATPCrs$ and $\\CATConnPCrs$.\n\n\n\\subsection{\\pdfalt{\\maybeboldmath $\\CATPCrs$ versus $\\CATConnPCrs$}%\n {PCrs versus CPCrs}}\\label{subsect:PCrs-conn}\n\nBelow, $I$ will always denote the inclusion $\\CATConnPCrs \\injto \\CATPCrs$. \nNote that $I$ is fully faithful.\n\n\\begin{definition}\n$\\Connect \\from \\CATPCrs \\to \\CATConnPCrs$ is the functor defined as follows:\n\\begin{enumerate}\n\\item For a coarse space $X$, $\\Connect(X)$ is just $X$ as a set, but with the \n \\emph{connected} coarse structure $\\langle \\calE_X \\rangle_X^\\TXTconn$ \n generated by $\\calE_X$.\n\\item For a coarse map $f \\from Y \\to X$, $\\Connect(f) \\from \\Connect(Y) \\to \n \\Connect(X)$ is the same as $f$ as a set map (which is coarse by \n Corollary~\\ref{cor:coarse-gen}).\n\\end{enumerate}\n\\end{definition}\n\nThe following is clear.\n\n\\begin{proposition}\n$\\Connect \\circ I$ is the identity functor on $\\CATConnPCrs$.\n\\end{proposition}\n\n\\begin{proposition}\n$\\Connect \\from \\CATPCrs \\to \\CATConnPCrs$ is left adjoint to the inclusion \nfunctor.\n\\end{proposition}\n\nThe unit maps $Y \\to I(\\Connect(Y))$, $Y \\in \\Obj(\\CATPCrs)$, of the above \nadjunction are just the identities as set maps. The counit maps $X = \n\\Connect(I(X)) \\to X$, $X \\in \\Obj(\\CATConnPCrs)$, are the identity maps.\n\n\\begin{proof}\nSince $\\Connect \\circ I$ is the identity, $\\Connect$ induces natural maps\n\\[\n \\Hom_{\\CATPCrs}(Y, I(X)) \\to \\Hom_{\\CATConnPCrs}(\\Connect(Y), X)\n\\]\n(for $Y$ possibly disconnected and $X$ connected), which are clearly bijections.\n\\end{proof}\n\n\\begin{corollary}\\label{cor:PCrs-ConnPCrs-preserve}\n$I \\from \\CATConnPCrs \\injto \\CATPCrs$ preserves limits and $\\Connect \\from \n\\CATPCrs \\to \\CATConnPCrs$ preserves colimits. 
Moreover, if $\\calF \\from \\calJ \n\\to \\CATConnPCrs$ is a functor and $\\nu$ is a limiting cone to (or colimiting \ncone from) $I \\circ \\calF$ in $\\CATPCrs$, then $\\Connect \\circ \\nu$ is a \nlimiting cone to (or colimiting cone from, respectively) $\\calF = \\Connect \n\\circ I \\circ \\calF$ in $\\CATConnPCrs$.\n\\end{corollary}\n\n\\begin{proof}\nSee, e.g., \\cite{MR1712872}*{Ch.~V \\S{}5} or \\cite{MR0349793}*{16.4.6} for the \nfirst statement, and \\cite{MR0349793}*{16.6.1} for the second.\n\\end{proof}\n\n\n\\subsection{Limits in the precoarse categories}\n\n\\begin{theorem}\\label{thm:PCrs-lim}\n$\\CATPCrs$ has all nonzero limits (i.e., limits of functors $\\calJ \\to \n\\CATPCrs$ for $\\calJ$ nonempty). Moreover, the forgetful functor $\\Forget \\from \n\\CATPCrs \\to \\CATSet$ preserves limits, and the limits of connected coarse \nspaces are connected. Consequently, the same hold with $\\CATConnPCrs$ in place \nof $\\CATPCrs$.\n\\end{theorem}\n\nIt is actually easy to see that $\\Forget \\from \\CATPCrs \\to \\CATSet$ preserves \nlimits: $\\Forget$ is naturally equivalent to the covariant $\\Hom$-functor \n$\\Hom_{\\CATPCrs}(\\ast,\\cdot) \\from \\CATPCrs \\to \\CATSet$, where $\\ast$ is any \none-point coarse space, and thus preserves limits (see, e.g., \n\\cite{MR1712872}*{Ch.~V \\S{}4 Thm.~1}). Since I do not know a similar argument \nfor colimits, let us proceed in ignorance of this.\n\n\\begin{proof}\nRecall that $\\CATSet$ has all limits. Given $\\calF_X \\from \\calJ \\to \\CATPCrs$, \nfix a limiting set cone $\\nu \\from X \\to \\Forget \\circ \\calF_X$, so that $X$ is \na set and $\\nu_j \\from X \\to X_j \\defeq \\calF_X(j)$, $j \\in \\Obj(\\calJ)$, are \nset maps. It suffices to put a coarse structure on $X$ so that we get a \nlimiting cone $\\nu \\from X \\to \\calF_X$ in $\\CATPCrs$ (with $X$ connected if \nall the $X_j$ are connected).\n\nWe need all the $\\nu_j \\from X \\to X_j$ to become coarse maps. Taking the \ncoarse structure on $X$ to be the intersection\n\\[\n \\calE_X \\defeq \\bigintersect_{j \\in \\Obj(\\calJ)} (\\nu_j)^* \\calE_{X_j}\n\\]\nof pull-back coarse structures clearly makes this so. (Since pull-backs of \nconnected coarse structures are connected and intersections of connected coarse \nstructures are connected, $\\calE_X$ is connected if all the $X_j$ are.) Since \n$\\Forget$ is faithful, $\\nu \\from X \\to \\calF_X$ is a cone in $\\CATPCrs$. We \nmust show that it is universal.\n\nSuppose $\\mu \\from Y \\to \\calF_X$ is another cone in $\\CATPCrs$. Applying \n$\\Forget$, we get a cone $\\mu \\from Y \\to \\Forget \\circ \\calF_X$ in $\\CATSet$ \n(properly written $\\Forget \\circ \\mu \\from \\Forget(Y) \\to \\Forget \\circ \n\\calF_X$). Since $\\nu$ is universal in $\\CATSet$, there is a set map $t \\from Y \n\\to X$ such that $\\mu = \\nu \\circ t$ as cones in $\\CATSet$. We must show that \n$t$ is actually a coarse map (uniqueness is clear).\n\nFirst, since $\\calJ$ is nonzero, there is some object $j_0 \\in \\Obj(\\calJ)$; \nthen $\\mu_{j_0} = \\nu_{j_0} \\circ t$ (as set maps) is locally proper, so $t$ is \nlocally proper \n(Proposition~\\ref{prop:loc-prop-comp}\\enumref{prop:loc-prop-comp:II}). 
Next, \nfor each $j \\in \\Obj(\\calJ)$ and $F \\in \\calE_Y$, $\\nu_j$ is coarse for $E \n\\defeq t^{\\cross 2}(F)$ (which is in $\\calE_{|X|_1}$, by local properness) and \nhence $E \\in (\\nu_j)^* \\calE_{X_j}$: Since $\\mu_j = \\nu_j \\circ t$ is locally \nproper for $F$, $\\nu_j$ is locally proper for $E$ \n(Proposition~\\ref{prop:loc-prop-for-comp}\\enumref{prop:loc-prop-for-comp-II}), \nand also $\\nu_j$ clearly preserves $E$.\n\nFor $\\CATConnPCrs$, the assertions follow from \nCorollary~\\ref{cor:PCrs-ConnPCrs-preserve}.\n\\end{proof}\n\nThe above proof gives a rather concrete description of limits in $\\CATPCrs$ \n(and $\\CATConnPCrs$), and in particular of products. The product \n$\\pfx{\\CATPCrs}\\prod_{j \\in J} X_j$ in $\\CATPCrs$ (or in $\\CATConnPCrs$) is \njust the set product (i.e., cartesian product) $X \\defeq \\pfx{\\CATSet}\\prod_{j \n\\in J} X_j$ together with the entourages of $|X|_1$ which project properly to \nentourages of all the $X_j$.\n\nThe ``nonzero'' stipulation in Theorem~\\ref{thm:PCrs-lim} is necessary.\n\n\\begin{proposition}\\label{prop:PCrs-no-map-to}\nFor each coarse space $X$, there exists a (nonempty) connected, unital coarse \nspace $Y$ such that there is no coarse map $Y \\to X$.\n\\end{proposition}\n\n\\begin{proof}\nGiven $X$, take $Y \\defeq |Y|_1$ to be an infinite set with cardinality \nstrictly greater than the cardinality of $X$, equipped with the terminal coarse \nstructure (which is connected and unital), e.g., $Y \\defeq |\\powerset(X) \n\\disjtunion \\setN|_1$. Then no locally proper map $Y \\to X$ exists, since no \nglobally proper map $Y \\to X$ exists and $Y$ is unital (see \nCorollary~\\ref{cor:loc-prop-uni}). (Note that the cardinality of sets in our \nuniverse $\\calU$ is bounded above by some cardinal, namely by $\\# \\calU$, but \nno element of $\\calU$ has this cardinality.)\n\\end{proof}\n\n\\begin{corollary}\\label{cor:PCrs-no-term}\nNone of the precoarse categories ($\\CATPCrs$, $\\CATConnPCrs$, \n$\\CATne{\\CATConnPCrs}$, $\\CATUniPCrs$, and $\\CATConnUniPCrs$) has a terminal \nobject.\n\\end{corollary}\n\nThe failure of existence of terminal objects in the precoarse categories is not \njust a failure of uniqueness of maps, but more seriously of existence. Thus we \nwill also get the following on the coarse categories (which are quotients of \nthe precoarse categories).\n\n\\begin{corollary}\\label{cor:Crs-no-term}\nNo quotient of any of the above precoarse categories has a terminal object.\n\\end{corollary}\n\nIt is straightforward to show that the inclusion $\\CATne{\\CATConnPCrs} \\injto \n\\CATConnPCrs$ preserves limits, and moreover that a nonzero limit exists in \n$\\CATne{\\CATConnPCrs}$ if and only if the corresponding set limit is nonempty \n(but the example below shows that $\\CATne{\\CATConnPCrs}$ does \\emph{not} have \nall nonzero limits). On the other hand unitality poses a fatal problem: The \nforgetful functor $\\CATUniPCrs \\to \\CATSet$ still preserves limits, so a \n(nonzero) limit in $\\CATUniPCrs$ can only exist when all the maps in the \ncorresponding limiting set cone are proper (but this is often not the case, \ne.g., in the case of products).\n\n\\begin{example}\nLet $X \\defeq |\\setZplus|_1$ (which is connected and nonempty), $f \\defeq \\id_X \n\\from X \\to X$ be the identity, and define $g \\from X \\to X$ by $g(x) \\defeq \nx+1$. 
Then the equalizer of $f$ and $g$ in $\\CATPCrs$ is the empty set (since \n$\\Forget$ preserves limits and no $x \\in \\setZplus$ satisfies $x = x + 1$).\n\\end{example}\n\nTo get ahead of ourselves (see \\S\\ref{sect:Crs}), note that though $f$ and $g$ \nare \\emph{close}, the equalizer of $f$ and itself (which is just $X$ mapping \nidentically to itself) is not \\emph{coarsely equivalent} to the equalizer of \n$f$ and $g$. Indeed, one can obtain other inequivalent equalizers: e.g., the \nequalizer of $h \\from X \\to X$ where $h(x) \\defeq \\max \\set{0,x-1}$ (which is \nalso close to $f$) and $f$ is $\\set{0}$ (including into $X$). On the other hand, in \nthe quotient coarse category $\\CATCrs$, $[f] = [g] = [h]$ so the equalizer of \nany pair of these maps is $X$. Since limits in $\\CATPCrs$ are not \n\\emph{coarsely invariant}, they are of limited interest.\n\nWe also see that the quotient functor $\\CATPCrs \\to \\CATCrs$ does not preserve \nequalizers, hence does not preserve limits. However, it \\emph{does} preserve \nproducts. Using this and a method parallel to the one employed in the proof of \nTheorem~\\ref{thm:PCrs-lim}, we will show that $\\CATCrs$ also has all nonzero \nlimits (which will, by definition, be coarsely invariant).\n\nWe will use products extensively. We take this opportunity to mention several \ncanonical coarse maps which arise due to the existence of (nonzero) products \n(all objects are coarse spaces and arrows coarse maps):\n\\begin{enumerate}\n\\item For any $X$ and $Y$, there are \\emph{projection maps} $\\pi_X \\from X \n \\cross Y \\to X$ and $\\pi_Y \\from X \\cross Y \\to Y$.\n\\item For any $X$, there is a \\emph{diagonal map} $\\Delta_X \\from X \\to X \n \\cross X$.\n\\item For $f \\from Y \\to X$ and $f' \\from Y' \\to X'$, there is a \\emph{product \n map} $f \\cross f' \\from Y \\cross Y' \\to X \\cross X'$.\n\\end{enumerate}\nThe above can all be generalized to larger (even infinite) products.\n\n\n\\subsection{Colimits in the precoarse categories}\n\n\\begin{theorem}\\label{thm:PCrs-colim}\nA colimit exists in $\\CATPCrs$ if and only if all the maps from a corresponding \ncolimiting set cone are locally proper. Moreover, the forgetful functor \n$\\Forget \\from \\CATPCrs \\to \\CATSet$ preserves colimits. The same hold with \n$\\CATConnPCrs$ in place of $\\CATPCrs$.\n\\end{theorem}\n\n\\begin{proof}\nThis proof is basically dual to the proof of Theorem~\\ref{thm:PCrs-lim}, only \nwith the added onus of showing the ``only if''. The reason for the local \nproperness requirement is that coarse structures can only be pushed forward \nalong locally proper maps (whereas they can be pulled back along all maps).\n\nRecall that $\\CATSet$ has all colimits. Given $\\calF_Y \\from \\calJ \\to \n\\CATPCrs$, fix a colimiting set cone $\\nu \\from \\Forget \\circ \\calF_Y \\to Y$, \nso that $Y$ is a set and $\\nu_j \\from Y_j \\defeq \\calF_Y(j) \\to Y$, $j \\in \n\\Obj(\\calJ)$, are set maps. Suppose all of the $\\nu_j$ are locally proper. \nTaking the coarse structure on $Y$ to be\n\\[\n \\calE_Y \\defeq \\langle (\\nu_j)_* \\calE_{Y_j}\n \\suchthat j \\in \\Obj(\\calJ) \\rangle_Y,\n\\]\nwe clearly get a cone $\\nu \\from \\calF_Y \\to Y$ in $\\CATPCrs$; we must prove \nthat it is universal.\n\nSuppose $\\mu \\from \\calF_Y \\to X$ is another cone in $\\CATPCrs$. Then there is \na canonical set map $t \\from Y \\to X$ such that $\\mu = t \\circ \\nu$ as cones in \n$\\CATSet$. We must show that $t$ is coarse (again uniqueness is clear). \nEntourages $(\\nu_j)^{\\cross 2}(F)$, $F \\in \\calE_{Y_j}$, $j \\in \\Obj(\\calJ)$, \ngenerate $\\calE_Y$. 
$t$ is locally proper for each such entourage (using $\\mu_j \n= t \\circ \\nu_j$ and \nProposition~\\ref{prop:loc-prop-for-comp}\\enumref{prop:loc-prop-for-comp-III}) \nand clearly preserves each such entourage. Thus $t$ is coarse, as required.\n\nIf the $\\nu_j$ are \\emph{not} all locally proper, we must show that $\\calF_Y$ \ndoes not have a colimit (in $\\CATPCrs$); in fact, we show something stronger, \nthat there is no cone from $\\calF_Y$ in $\\CATPCrs$. We proceed by \ncontradiction, so suppose that $\\nu_{j_0}$ is not locally proper ($j_0 \\in \n\\Obj(\\calJ)$ fixed) and suppose $\\mu \\from \\calF_Y \\to X$ is a cone in \n$\\CATPCrs$. Again there must be a set map $t \\from Y \\to X$ such that $\\mu = t \n\\circ \\nu$ as set cones. But then $\\mu_{j_0} = t \\circ \\nu_{j_0}$ is locally \nproper, which implies that $\\nu_{j_0}$ is locally proper \n(Proposition~\\ref{prop:loc-prop-comp}\\enumref{prop:loc-prop-comp:II}) which is \na contradiction.\n\nTo get the asserted colimits in $\\CATConnPCrs$, simply apply \nCorollary~\\ref{cor:PCrs-ConnPCrs-preserve}. To show that $\\CATConnPCrs$ has no \nmore colimits than $\\CATPCrs$ (i.e., $\\calF_Y \\from \\calJ \\to \\CATConnPCrs$ has \na colimit in $\\CATConnPCrs$ only if $I \\circ \\calF_Y \\from \\calJ \\to \\CATPCrs$ \nhas a colimit in $\\CATPCrs$), it is probably simplest to modify the above \nproof.\n\\end{proof}\n\nThe following are clear.\n\n\\begin{corollary}\n$\\CATPCrs$ and $\\CATConnPCrs$ have all coproducts.\n\\end{corollary}\n\n\\begin{corollary}\nThe empty coarse space is the (unique) initial object in $\\CATPCrs$ and in \n$\\CATConnPCrs$.\n\\end{corollary}\n\n\\begin{corollary}\\label{cor:PCrs-fin-components}\nAny coarse space with only \\emph{finitely} many connected components is \n(isomorphic in $\\CATPCrs$ to) the coproduct in $\\CATPCrs$ of its connected \ncomponents.\n\\end{corollary}\n\nThe above Corollary does not necessarily hold for coarse spaces with infinitely \nmany connected components. One may say, more generally, that any coarse space \nwhose unital subspaces have only finitely many connected components is the \ncoproduct of its connected components.\n\nWe get concrete descriptions of coproducts in $\\CATPCrs$ and in $\\CATConnPCrs$. \nThe coproduct $\\pfx{\\CATPCrs}\\coprod_{j \\in J} Y_j$ in $\\CATPCrs$ is just the \nset coproduct (i.e., disjoint union) $Y \\defeq \\pfx{\\CATSet}\\coprod_{j \\in J} \nY_j$ with entourages finite unions of entourages of the $Y_j$ (included into \n$Y$). The corresponding coproduct in $\\CATConnPCrs$ is the same as a set, but \none may also take an additional union with an arbitrary finite subset of \n$Y^{\\cross 2}$.\n\nThe inclusion $\\CATne{\\CATConnPCrs} \\injto \\CATConnPCrs$ preserves colimits. \n$\\CATne{\\CATConnPCrs}$ does not have a zero colimit (i.e., initial object), but \notherwise has a colimit if the corresponding colimit exists in $\\CATConnPCrs$, \nin which case the two colimits coincide; note that a nonzero colimit of \nnonempty sets is nonempty. Unitality does not pose a problem for colimits: \nTheorem~\\ref{thm:PCrs-colim} also holds with $\\CATUniPCrs$ in place of \n$\\CATPCrs$ (and $\\CATConnUniPCrs$ in place of $\\CATConnPCrs$). In the proof, \none simply takes the unital coarse structure\n\\[\n \\langle (\\nu_j)_* \\calE_{Y_j} \\suchthat j \\in \\Obj(\\calJ) \\rangle_Y^\\TXTuni\n\\]\ninstead. 
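\n\nFor a concrete (if minimal) illustration of the coproduct descriptions above, \nsuppose, purely as a standing example, that $\\setZplus$ carries the coarse \nstructure whose entourages are the sets $E \\subseteq \\setZplus^{\\cross 2}$ with \n$\\sup_{(m,n) \\in E} |m - n| < \\infty$ (the usual bounded coarse structure; we \ntake for granted that it satisfies the axioms above), and write $\\iota_1$, \n$\\iota_2$ for the two inclusions of $\\setZplus$ into the coproduct $Y$ of two \ncopies of $\\setZplus$. Each $(\\iota_i)^{\\cross 2}(E)$, $E$ an entourage of \n$\\setZplus$, is then an entourage of the coproduct both in $\\CATPCrs$ and in \n$\\CATConnPCrs$, whereas the infinite set\n\\[\n \\set{(\\iota_1(n), \\iota_2(n)) \\suchthat n \\in \\setZplus}\n\\]\nis an entourage of neither coproduct (in the $\\CATConnPCrs$ coproduct, only its \nfinite subsets are entourages): the two copies remain ``infinitely far apart''.\n\n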
Of course, in the unital cases, one may substitute ``(globally) \nproper'' for ``locally proper''.\n\nThe ``locally proper'' hypothesis is necessary, as the following shows.\n\n\\begin{example}\nLet $X \\defeq |\\setZplus|_1$, $f \\from X \\to X$ be the identity, and define $g \n\\from X \\to X$ by $g(x) \\defeq \\max \\set{0,x-1}$. The coequalizer of $f$ and \n$g$ in $\\CATSet$ is the one-point set $\\ast$; since $X$ is unital, $f$ and $g$ \ndo not have a coequalizer in $\\CATPCrs$.\n\\end{example}\n\nAgain, to get ahead of ourselves, we see that coequalizers in $\\CATPCrs$ are \nnot coarsely invariant. Even though $f$ is close to $g$ and the coequalizer of \n$f$ and itself is just $X$, $f$ and $g$ do not have a coequalizer in \n$\\CATPCrs$. In the quotient category $\\CATCrs$, there are no problems: the \ncoequalizer of $[f]$ and $[g]$ is $X$, as expected.\n\nThe quotient functor $\\CATPCrs \\to \\CATCrs$ does not preserve coequalizers or \ncolimits in general. However, it \\emph{does} preserve coproducts, and we will \nuse these to show that in fact $\\CATCrs$ has \\emph{all} colimits (which are \nevidently coarsely invariant). In particular, $\\CATCrs$ has all coequalizers, \nwhich contrasts with the situation in $\\CATPCrs$ (recall that having all \ncoproducts and all coequalizers would imply having all colimits).\n\n\n\n\n\\section{The coarse categories}\\label{sect:Crs}\n\n\n\\subsection{Closeness of maps}\n\nIn classical (unital) coarse geometry, two maps $f, f' \\from Y \\to X$ are \n\\emph{close} if $(f \\cross f')(1_Y)$ is an entourage of $X$. Closeness is an \nequivalence relation on maps $Y \\to X$, but note that it does not involve the \ncoarse structure on $Y$ at all! In the nonunital case, we must modify the \ndefinition, lest closeness not even be reflexive (e.g., take $Y \\defeq X$ \nnonunital and $f \\defeq f' \\defeq \\id_X$).\n\n\\begin{definition}\nCoarse maps $f, f' \\from Y \\to X$ are \\emph{close} (write $f \\closeequiv f'$) \nif $(f \\cross f')(F) \\in \\calE_X$ for all $F \\in \\calE_Y$.\n\\end{definition}\n\n\\begin{proposition}\nCloseness of coarse maps $Y \\to X$ is an equivalence relation (on the \n$\\Hom$-set $\\Hom_{\\CATPCrs}(Y,X)$).\n\\end{proposition}\n\n\\begin{proof}\nReflexivity follows since coarse maps preserve entourages. Symmetry follows by \ntaking transposes. Transitivity: Suppose $f, f', f'' \\from Y \\to X$ are coarse \nmaps with $f \\closeequiv f'$ and $f' \\closeequiv f''$. For any $F \\in \\calE_Y$,\n\\[\n (f \\cross f'')(F)\n \\subseteq (f \\cross f')(1_{F \\cdot Y}) \\circ (f' \\cross f'')(F)\n\\]\nis an entourage of $X$ since $1_{F \\cdot Y} \\in \\calE_Y$ (since $1_{F \\cdot Y} \n\\subseteq F \\circ F^\\transpose$).\n\\end{proof}\n\nLike local properness, closeness is also determined ``on'' unital subspaces of \nthe domain. Thus for unital coarse spaces, our notion of closeness is just the \nclassical one.\n\n\\begin{proposition}\\label{prop:close-uni}\nCoarse maps $f, f' \\from Y \\to X$ are close if and only if, for every unital \nsubspace $Y' \\subseteq Y$, $f |_{Y'}$ and $f' |_{Y'}$ are close (i.e., $(f \n\\cross f')(1_{Y'}) \\in \\calE_X$). Thus, for $Y$ unital, $f$ and $f'$ are close \nif and only if $(f \\cross f')(1_Y) \\in \\calE_X$.\n\\end{proposition}\n\n\\begin{proof}\n(\\textimplies): Immediate.\n\n(\\textimpliedby): For $F \\in \\calE_Y$, $Y' \\defeq F \\cdot Y \\union Y \\cdot F$ \nis a unital subspace of $Y$, and $F \\in \\calE_{Y'}$. 
Then\n\\[\n (f \\cross f')(F) = (f |_{Y'} \\cross f' |_{Y'})(F) \\in \\calE_X,\n\\]\nas required.\n\\end{proof}\n\nWe have not used local properness at all, so we can actually define closeness \nfor maps which preserve entourages (but are not necessarily locally proper). \nHowever, we will not need this.\n\nThe following observation is rather important.\n\n\\begin{proposition}\\label{prop:term-close}\nSuppose $f, f' \\from Y \\to X$ are coarse maps. If $X = |X|_1$ has the terminal \ncoarse structure, then $f$ and $f'$ are close.\n\\end{proposition}\n\nThus if $X = |X|_1$, then for any coarse space $Y$ there is \\emph{at most} one \n(but possibly no) closeness class of coarse map $Y \\to X$.\n\n\\begin{proof}\nThis follows immediately from Proposition~\\ref{prop:loc-prop-prod}.\n\\end{proof}\n\n\n\\subsection{The coarse categories}\n\nCloseness, an equivalence relation on the $\\Hom$-sets of $\\CATPCrs$, yields a \nquotient category\n\\[\n \\CATCrs \\defeq \\CATPCrs\/{\\closeequiv}\n\\]\n(see \\S\\ref{subsect:set-cat}), which we call the \\emph{coarse category}, \ntogether with a quotient functor $\\Quotient \\from \\CATPCrs \\to \\CATCrs$. We may \nsimilarly define quotients $\\CATConnCrs$, $\\CATUniCrs$, and $\\CATConnUniCrs$ of \n$\\CATConnPCrs$, $\\CATUniPCrs$, and $\\CATConnUniPCrs$, respectively. These \nlatter categories are full subcategories of $\\CATPCrs$, so their quotients are \nfull subcategories of $\\CATCrs$.\n\nThe following is clear.\n\n\\begin{proposition}\nCloseness is respected by composition: If $f, f' \\from Y \\to X$ and $g, g' \n\\from Z \\to Y$ are coarse maps with $f \\closeequiv f'$ and $g \\closeequiv g'$, \nthen $f \\circ g \\closeequiv f' \\circ g'$.\n\\end{proposition}\n\nThis allows us to describe the arrows of $\\CATCrs$ as closeness equivalence \nclasses of coarse maps. Denote such classes by $[f]_\\TXTclose \\from Y \\to X$ \n(or simply $[f]$ for brevity), where $f$ is usually taken to be a \nrepresentative map $Y \\to X$, i.e., $\\Quotient(f) = [f]$. However, we will use \nthe notation $[f] \\from Y \\to X$ for arrows $Y \\to X$ in $\\CATCrs$ even when we \ndo not have a particular $f$ in mind.\n\nThe notion of isomorphism in $\\CATCrs$ is weaker than in $\\CATPCrs$. A coarse \nmap $f \\from Y \\to X$ is a \\emph{coarse equivalence} if $[f]$ is an isomorphism \nin $\\CATCrs$. In other words, $f$ is a coarse equivalence if and only if there \nis a coarse map $g \\from X \\to Y$ so that the two possible compositions are \nclose to the identities (i.e., $[f \\circ g] = [\\id_X]$ and $[g \\circ f] = \n[\\id_Y]$).\n\nA functor $F \\from \\CATPCrs \\to \\calC$, $\\calC$ any category, is \\emph{coarsely \ninvariant} if $f \\closeequiv f'$ implies $F(f) = F(f')$. Any coarsely invariant \n$F$ induces a functor $[F] \\from \\CATCrs \\to \\calC$ with $F = [F] \\circ \n\\Quotient$. Coarsely invariant functors send coarse equivalences to \nisomorphisms. For functors $F \\from \\CATPCrs \\to \\CATPCrs$ (or with codomain \none of the other precoarse categories), we abuse terminology and also say that \n$F$ is \\emph{coarsely invariant} if $\\Quotient \\circ F \\from \\CATPCrs \\to \n\\CATCrs$ is coarsely invariant in the previous (stronger) sense. 
Such a \ncoarsely invariant functor $F \\from \\CATPCrs \\to \\CATPCrs$ induces a functor \n$[F] \\from \\CATCrs \\to \\CATCrs$; if $F \\from \\CATPCrs \\to \\CATConnPCrs$, then \n$[F] \\from \\CATCrs \\to \\CATConnCrs$; etc.\n\n\n\\subsection{\\pdfalt{\\maybeboldmath $\\CATCrs$ versus $\\CATConnCrs$}%\n {Crs versus CCrs}}\n\nThe relation between the quotient categories $\\CATCrs$ and $\\CATConnCrs$ is \nessentially the same as that between $\\CATPCrs$ and $\\CATConnPCrs$ for the \nfollowing reasons, which are easy to check.\n\n\\begin{proposition}\nThe functors $I \\from \\CATConnPCrs \\injto \\CATPCrs$ and $\\Connect \\from \n\\CATPCrs \\to \\CATConnPCrs$ are coarsely invariant, hence induce functors $[I] \n\\from \\CATConnCrs \\to \\CATCrs$ and $[\\Connect] \\from \\CATCrs \\to \\CATConnCrs$, \nrespectively. In fact, $[I]$ is just the inclusion and is fully faithful. \nAgain, $[\\Connect] \\circ [I]$ is the identity functor (now on $\\CATConnCrs$), \nand $[\\Connect]$ is left adjoint to $[I]$.\n\\end{proposition}\n\nConsequently, we get the following (exact) analogues of \nCorollary~\\ref{cor:PCrs-ConnPCrs-preserve}.\n\n\\begin{corollary}\n$[I]$ preserves limits and $[\\Connect]$ preserves colimits. If $\\nu$ is a \nlimiting cone to (or colimiting cone from) $[I] \\circ \\calF$, where $\\calF \n\\from \\calJ \\to \\CATConnCrs$, then $[\\Connect] \\circ \\nu$ is a limiting cone to \n(or colimiting cone from, respectively) $\\calF = [\\Connect] \\circ [I] \\circ \n\\calF$.\n\\end{corollary}\n\n\\begin{remark}\nEvidently, $I$ and $\\Connect$ ``commute'' with the quotient functors \n$\\Quotient$ ($\\CATPCrs \\to \\CATCrs$ and its restriction $\\CATConnPCrs \\to \n\\CATConnCrs$) in that\n\\[\n \\Quotient \\circ I = [I] \\circ \\Quotient\n\\quad\\text{and}\\quad\n \\Quotient \\circ \\Connect = [\\Connect] \\circ \\Quotient.\n\\]\nThe quotient functors give a map of adjunctions (see, e.g., \n\\cite{MR1712872}*{Ch.~IV \\S{}7}) from $(\\Connect,I)$ to $([\\Connect],[I])$.\n\\end{remark}\n\n\\begin{remark}\n$[I]$ is fully faithful, but $[\\Connect]$ is neither full nor faithful (even \nthough $\\Connect$ is faithful, though also not full): e.g., consider\n\\[\n \\Hom_{\\CATCrs}(\\ast, \\ast \\copro \\ast)\n\\quad\\text{and}\\quad\n \\Hom_{\\CATCrs}(\\Connect(|\\setZplus|_1 \\copro |\\setZplus|_1),\n |\\setZplus|_1 \\copro |\\setZplus|_1),\n\\]\nrespectively.\n\\end{remark}\n\n\n\\subsection{\\pdfalt{\\maybeboldmath $\\CATConnCrs$ versus $\\CATne{\\CATConnCrs}$}%\n {CCrs versus CCrs\\textcaret{}x}}\n\nAfter passing to the quotients by closeness, the situation with respect to \n\\emph{nonempty} connected coarse spaces is greatly improved. Below, we work in \n$\\CATConnPCrs$ or its quotient $\\CATConnCrs$ (or the nonempty subcategories), \nso all coarse spaces will be connected. Let $I \\from \\CATne{\\CATConnPCrs} \n\\injto \\CATConnPCrs$ denote the inclusion; it is coarsely invariant, hence \ninduces $[I] \\from \\CATne{\\CATConnCrs} \\injto \\CATConnCrs$, which is also the \ninclusion and which is fully faithful. Again, the inclusion functors \n``commute'' with the quotient functors.\n\n\\begin{definition}\nFix a one-point coarse space $\\ast$. 
Define a functor\n\\[\n \\AddPt \\from \\CATConnPCrs \\to \\CATne{\\CATConnPCrs}\n\\]\nas follows:\n\\begin{enumerate}\n\\item For a coarse space $X$, $\\AddPt(X) \\defeq X \\copro_{\\CATConnPCrs} \\ast$ \n (coproduct in $\\CATConnPCrs$).\n\\item For a coarse map $f \\from Y \\to X$, $\\AddPt(f) \\defeq f \n \\copro_{\\CATConnPCrs} \\id_\\ast$.\n\\end{enumerate}\n\\end{definition}\n\n(It is easy to construct the functor $\\AddPt$ concretely, and all functors \nsatisfying the above are naturally equivalent.) The following are all easy to \nverify.\n\n\\begin{proposition}\n$\\AddPt \\from \\CATConnPCrs \\to \\CATne{\\CATConnPCrs}$ is coarsely invariant and \nhence induces a functor $[\\AddPt] \\from \\CATConnCrs \\to \\CATne{\\CATConnCrs}$.\n\\end{proposition}\n\n$\\AddPt$ is not terribly useful, but $[\\AddPt]$ is.\n\n\\begin{proposition}\n$[\\AddPt] \\circ [I]$ is naturally equivalent to the identity on \n$\\CATne{\\CATConnCrs}$. Moreover, $[\\AddPt] \\from \\CATConnCrs \\to \n\\CATne{\\CATConnCrs}$ is left adjoint to $[I]$.\n\\end{proposition}\n\nIt follows that $[\\AddPt]$ is naturally equivalent to a functor $[\\AddPt]'$ \nsuch that $[\\AddPt]' \\circ [I]$ is equal to the identity functor. It is easy to \ngive a natural equivalence $\\Id_{\\CATne{\\CATConnCrs}} \\to [\\AddPt] \\circ [I]$: \nfor each (nonempty, connected) $X$, the canonical inclusion $\\iota_X \\from X \n\\to X \\copro \\ast$ is a coarse equivalence hence an isomorphism $[\\iota_X] \n\\from X \\to \\AddPt(X)$ in $\\CATne{\\CATConnCrs}$.\n\n\\begin{corollary}\n$[I]$ preserves limits and $[\\AddPt]$ preserves colimits. If $\\nu$ is a \nlimiting cone to (or, a colimiting cone from) $[I] \\circ \\calF$, where $\\calF \n\\from \\calJ \\to \\CATne{\\CATConnCrs}$, then $[\\AddPt] \\circ \\nu$ is a limiting \ncone to (or, respectively, a colimiting cone from) $[\\AddPt] \\circ [I] \\circ \n\\calF$ (or $\\calF$ after applying a natural equivalence).\n\\end{corollary}\n\n\n\\subsection{Limits in the coarse categories}\n\nWe first prove our assertion that nonzero products in the nonunital coarse \ncategories are just images (under the quotient functor) of products in the \nprecoarse categories. We then show the nonunital coarse categories also have \nall equalizers of pairs of arrows. It then follows by a standard construction \nthat the nonunital coarse categories have all nonzero limits.\n\n\\begin{proposition}\\label{prop:Crs-prod}\nSuppose $\\set{X_j \\suchthat j \\in J}$ ($J$ some index set) is a nonzero \ncollection of coarse spaces. The product of the $X_j$ in $\\CATCrs$ (or in \n$\\CATConnCrs$ or $\\CATne{\\CATConnCrs}$, as appropriate) is just the coarse \nspace\n\\[\n X \\defeq \\pfx{\\CATPCrs}\\prod_{j \\in J} X_j\n\\]\n(product in $\\CATPCrs$) together with the projections $[\\pi_j] \\from X \\to \nX_j$, $j \\in J$ (closeness classes of the projections). Thus $\\CATCrs$ (and \n$\\CATConnCrs$ and $\\CATne{\\CATConnCrs}$) have all nonzero products.\n\\end{proposition}\n\nRecall, from Corollary~\\ref{cor:Crs-no-term}, that none of the quotient coarse \ncategories has a zero product, i.e., terminal object.\n\n\\begin{proof}\nThe cone $\\pi$ in $\\CATPCrs$ maps (via the quotient functor) to a cone $[\\pi] \n\\defeq \\Quotient \\circ \\pi$ in $\\CATCrs$; we must prove universality. Suppose \n$Y$ is a coarse space and $[\\mu_j] \\from Y \\to X_j$, $j \\in J$, is a collection \nof arrows in $\\CATCrs$. 
Choosing representative coarse maps $\\mu_j \\from Y \\to \nX_j$, we get (since the cone $\\pi$ is universal) a natural coarse map $t \\from \nY \\to X$ such that $\\mu_j = \\pi_j \\circ t$ for all $j$. Of course, this implies \n$[\\mu_j] = [\\pi_j] \\circ [t]$.\n\nWe must show that this $[t]$ is unique (hence does not depend on our choice of \nrepresentatives $\\mu_j$). Suppose $[t'] \\from Y \\to X$ is a class such that \n$[\\mu_j] = [\\pi_j] \\circ [t']$ for all $j$. Choose a representative $t'$. \nSuppose $F \\in \\calE_Y$, and put $E \\defeq (t \\cross t')(F)$, which is in \n$\\calE_{|X|_1}$ by Proposition~\\ref{prop:loc-prop-prod}. For each $j$, we have \nthat $\\mu'_j \\defeq \\pi_j \\circ t' \\closeequiv \\mu_j = \\pi_j \\circ t$, and \nhence\n\\[\n (\\pi_j)^{\\cross 2}(E) = ((\\pi_j \\circ t) \\cross (\\pi_j \\circ t'))(F)\n\\]\nis in $\\calE_{X_j}$. Moreover, since $(\\mu_j \\cross \\mu'_j) |_F = \n(\\pi_j)^{\\cross 2} |_E \\circ (t \\cross t') |_E^F$ is proper (by the same \nProposition), $\\pi_j$ is locally proper for $E$. Thus $E \\in (\\pi_j)^* \n\\calE_{X_j}$ for all $j$, so $E \\in \\calE_X$. Hence $t$ is close to $t'$.\n\nFor $\\CATConnCrs$ and $\\CATne{\\CATConnCrs}$, it suffices to recall that nonzero \nproducts of connected coarse spaces are connected, and nonzero products of \nnonempty coarse spaces are nonempty.\n\\end{proof}\n\n\\begin{remark}\nFor obvious reasons, we cannot usually obtain products in the unital coarse \ncategories using the above construction. However, unlike in $\\CATPCrs$, this \ndoes not imply the nonexistence of products. In certain cases (see, e.g., \nRemark~\\ref{rmk:term-unital-prod}), the (nonunital) product above will be \ncoarsely equivalent to a unital coarse space which is a product in \n$\\CATUniCrs$. I do not know, in general, which products exist in $\\CATUniCrs$.\n\\end{remark}\n\nNext, we examine equalizers in the (nonunital) coarse categories. Unlike \nproducts, equalizers in the coarse categories are not usually ``the same'' as \nequalizers in the precoarse categories.\n\n\\begin{definition}\nSuppose $X$ is a coarse space and $f, f' \\from Y \\to X$ are set maps ($Y$ some \nset). $f$ and $f'$ are \\emph{pointwise connected} if $f(y)$ is connected to \n$f'(y)$ for all $y \\in Y$. $f$ and $f'$ are \\emph{close for $F \\in f^* \\calE_X \n\\intersect (f')^* \\calE_X$} if $(f \\cross f')(F) \\in \\calE_X$.\n\\end{definition}\n\nOf course, if $X$ is connected, all set maps into $X$ are pointwise connected. \nIf $f \\from Y \\to X$ is a coarse map, then any coarse map close to $f$ is \npointwise connected to $f$.\n\n\\begin{lemma}\nSuppose $X$ is a coarse space and $f, f' \\from Y \\to X$ are set maps. If $f$ \nand $f'$ are close for both $F, F' \\in \\calE_{|Y|_1}$, then $f$ and $f'$ are \nclose for $F + F'$, $F \\circ F'$, $F^\\transpose$, and all subsets of $F$.\n\\end{lemma}\n\n\\begin{proof}\nAgain, the only (slightly) nontrivial one is $F \\circ F'$:\n\\[\n (f \\cross f')(F \\circ F') \\subseteq f^{\\cross 2}(F) \\circ (f \\cross f')(F')\n \\in \\calE_X.\n\\]\n\\end{proof}\n\n\\begin{definition}\nSuppose $X$ is a coarse space and $f, f' \\from Y \\to X$ are pointwise connected \ncoarse maps. 
The \\emph{equalizing pull-back coarse structure} $(f,f')^* \n\\calE_X$ (on $Y$ along $f$ and $f'$) is\n\\[\n (f,f')^* \\calE_X \\defeq \\set{F \\in f^* \\calE_X \\intersect (f')^* \\calE_X\n \\suchthat \\text{$f$ and $f'$ are close for $F$}}.\n\\]\n\\end{definition}\n\nPointwise-connectedness is important: it guarantees that the singletons \n$\\set{1_y}$, $y \\in Y$, are in $(f,f')^* \\calE_X$. It then follows from the \nLemma that $(f,f')^* \\calE_X$ is a coarse structure on $Y$.\n\n\\begin{definition}\nSuppose $f, f' \\from Y \\to X$ are coarse maps. The \\emph{equalizer} of \n$[f]$ and $[f']$ is\n\\[\n \\OBJequalizer_{[f],[f']}\n \\defeq \\set{y \\in Y \\suchthat \\text{$f(y)$ is connected to $f'(y)$}}\n \\subseteq Y\n\\]\nwith the coarse structure\n\\[\n \\calE_{\\OBJequalizer_{[f],[f']}} \\defeq \\calE_Y |_{\\OBJequalizer_{[f],[f']}}\n \\intersect (f |_{\\OBJequalizer_{[f],[f']}},\n f' |_{\\OBJequalizer_{[f],[f']}})^* \\calE_X\n\\]\n(where $\\calE_Y |_{\\OBJequalizer_{[f],[f']}}$ is the subspace coarse structure \non $\\OBJequalizer_{[f],[f']} \\subseteq Y$), together with the closeness class of \nthe inclusion map\n\\[\n \\equalizer_{[f],[f']} \\from \\OBJequalizer_{[f],[f']} \\to Y\n\\]\n(which is coarse).\n\\end{definition}\n\nClearly, the restrictions $f |_{\\OBJequalizer_{[f],[f']}}$ and $f' \n|_{\\OBJequalizer_{[f],[f']}}$ are pointwise connected, so \n$\\calE_{\\OBJequalizer_{[f],[f']}}$ really is a coarse structure. Also, the \nabove definition does not depend on order, i.e., $\\OBJequalizer_{[f],[f']} = \n\\OBJequalizer_{[f'],[f]}$.\n\n\\begin{lemma}\nSuppose $f, f' \\from Y \\to X$ are coarse maps. The equalizer of $[f]$ and \n$[f']$ is coarsely invariant in the sense that $\\OBJequalizer_{[f],[f']}$ and \n$[\\equalizer_{[f],[f']}]$ (indeed, $\\equalizer_{[f],[f']}$) only depend on the \ncloseness class of $f$ and $f'$ (hence the notation).\n\\end{lemma}\n\n\\begin{proof}\nSuppose $e, e' \\from Y \\to X$ are close to $f$, $f'$, respectively. Then, for \nall $y \\in Y$, $e(y)$ is connected to $f(y)$ and $e'(y)$ is connected to \n$f'(y)$; for $y \\in \\OBJequalizer_{[f],[f']} \\subseteq Y$, $f(y)$ is connected \nto $f'(y)$ hence $e(y)$ is connected to $e'(y)$. Thus the \\emph{set} \n$\\OBJequalizer_{[f],[f']}$ is coarsely invariant.\n\nIt remains to show that the coarse structure $\\calE_{\\OBJequalizer_{[f],[f']}}$ \nis also coarsely invariant. Observe that\n\\[\n \\calE_{\\OBJequalizer_{[f],[f']}}\n = \\set{F \\in \\calE_Y |_{\\OBJequalizer_{[f],[f']}} \\suchthat\n (f \\cross f')(F) \\in \\calE_X}\n\\]\nand $\\calE_Y |_{\\OBJequalizer_{[f],[f']}} = \\calE_Y \n|_{\\OBJequalizer_{[e],[e']}}$. If $F \\in \\calE_{\\OBJequalizer_{[f],[f']}}$, \nthen\n\\[\n (e \\cross e')(F) \\subseteq (e \\cross f)(1_{F \\cdot Y})\n \\circ (f \\cross f')(F) \\circ (f' \\cross e')(1_{Y \\cdot F})\n\\]\nis in $\\calE_X$, and so $F \\in \\calE_{\\OBJequalizer_{[e],[e']}}$; the reverse \ninclusion follows symmetrically.\n\\end{proof}\n\n\\begin{proposition}\\label{prop:Crs-equal}\nThe equalizer of $[f], [f'] \\from Y \\to X$ really is (in the categorical sense) \nthe equalizer of $[f]$ and $[f']$ in $\\CATCrs$ (or in $\\CATConnCrs$ or \n$\\CATne{\\CATConnCrs}$, as appropriate), hence the terminology. Thus $\\CATCrs$ \n(and $\\CATConnCrs$ and $\\CATne{\\CATConnCrs}$) have all equalizers of pairs of \narrows.\n\\end{proposition}\n\n\\begin{proof}\nFix representative coarse maps $f$ and $f'$, and suppose $g \\from Z \\to Y$ is a \ncoarse map such that $f \\circ g \\closeequiv f' \\circ g$. 
Then clearly the (set) \nimage of $g$ is contained in $\\OBJequalizer_{[f],[f']}$, and indeed\n\\[\n \\tilde{g} \\defeq g |^{\\OBJequalizer_{[f],[f']}}\n \\from Z \\to \\OBJequalizer_{[f],[f']}\n\\]\nis clearly coarse with $g = \\equalizer_{[f],[f']} \\circ \\tilde{g}$ (hence $[g] \n= [\\equalizer_{[f],[f']}] \\circ [\\tilde{g}]$).\n\nWe must prove uniqueness of $[\\tilde{g}]$. Suppose $\\tilde{g}' \\from Z \\to \n\\OBJequalizer_{[f],[f']}$ is a coarse map with $g \\closeequiv \n\\equalizer_{[f],[f']} \\circ \\tilde{g}' \\eqdef g'$. Then, for all $G \\in \n\\calE_Z$,\n\\[\n F \\defeq (\\tilde{g} \\cross \\tilde{g}')(G) = (g \\cross g')(G) \\in \\calE_Y\n\\]\nand, since $f \\circ g' \\closeequiv f \\circ g \\closeequiv f' \\circ g \\closeequiv \nf' \\circ g'$, we have\n\\[\n (f \\cross f')(F) = ((f \\circ g') \\cross (f' \\circ g'))(F) \\in \\calE_X,\n\\]\nso $F \\in \\calE_{\\OBJequalizer_{[f],[f']}}$. Hence $\\tilde{g}$ is close to \n$\\tilde{g}'$, as required.\n\nIf $X$ and $Y$ are connected, then $\\OBJequalizer_{[f],[f']}$ is clearly \nconnected. Moreover, if $X$ is connected, then $\\OBJequalizer_{[f],[f']} = Y$ \nas a set and hence is nonempty if $Y$ is nonempty.\n\\end{proof}\n\n\\begin{remark}\nThe above construction does not work in the unital coarse categories because \nthe equalizing pull-back coarse structures are not in general unital (and one \ncannot ``unitalize'' them and still have the required properties). Again, this \ndoes not imply the nonexistence of equalizers in $\\CATUniCrs$, and I do not \nknow which equalizers exist in $\\CATUniCrs$.\n\\end{remark}\n\n\\begin{remark}\nWhen $X$ is a coarse space and $f, f' \\from Y \\to X$ are just set maps, one can \ntake\n\\[\n \\calE_Y \\defeq f^* \\calE_X \\intersect (f')^* \\calE_X\n\\]\nand apply the above Proposition. If $g \\from Z \\to Y$ is another set map, one \ncan then take $\\calE_Z \\defeq g^* \\calE_Y$. Then $f \\circ g$ is close to $f' \n\\circ g$ if and only if $g$ factors through the equalizer of $[f]$ and $[f']$.\n\\end{remark}\n\nWe have now shown that the nonunital coarse categories have all nonzero \nproducts and all equalizers. It follows, using a standard argument, that these \ncategories have all nonzero limits. For completeness, we give this argument.\n\n\\begin{theorem}\\label{thm:Crs-lim}\nThe nonunital coarse categories $\\CATCrs$, $\\CATConnCrs$, and \n$\\CATne{\\CATConnCrs}$ have all nonzero limits.\n\\end{theorem}\n\n\\begin{proof}\nLet $\\calC$ be one of the above categories and suppose $\\calF_X \\from \\calJ \\to \n\\calC$ ($\\calJ$ nonzero, small) is a functor, putting $X_j \\defeq \\calF_X(j)$ \nfor $j \\in \\Obj(\\calJ)$ as usual. If $\\calJ$ has no arrows (i.e., $\\Map(\\calJ) \n= \\emptyset$), then $\\pfx{\\calC}\\OBJlim \\calF_X$ is just a product, and we are \ndone.\n\nOtherwise, let\n\\begin{align*}\n Y & \\defeq \\pfx{\\calC}\\prod_{\\mathclap{j \\in \\Obj(\\calJ)}} \\; X_j\n\\shortintertext{and}\n X & \\defeq \\,\\pfx{\\calC}\\prod_{\\mathclap{u \\in \\Map(\\calJ)}}\\;\\,\n X_{\\target(u)}.\n\\end{align*}\nWe have two collections of arrows $[f_u], [f'_u] \\from Y \\to X_{\\target(u)}$, \n$u \\in \\Map(\\calJ)$:\n\\[\n [f_u] \\defeq [\\pi_{\\target(u)}]\n\\quad\\text{and}\\quad\n [f'_u] \\defeq \\calF_X(u) \\circ [\\pi_{\\source(u)}].\n\\]\nBy the universal property of products, these collections of arrows give rise to \ncanonical arrows $[f] \\from Y \\to X$ and $[f'] \\from Y \\to X$, respectively. 
\nPut\n\\[\n \\pfx{\\calC}\\OBJlim \\calF_X \\defeq \\OBJequalizer_{[f],[f']},\n\\]\nwith the cone $[\\nu] \\from \\pfx{\\calC}\\OBJlim \\calF_X \\to \\calF_X$ being defined \nby $[\\nu_j] \\defeq [\\pi_j] \\circ [\\equalizer_{[f],[f']}]$ for $j \\in \n\\Obj(\\calJ)$. It is easy to check that $[\\nu]$ is indeed a limiting cone.\n\\end{proof}\n\n\\begin{remark}\nIt follows from the above proof that, as a set, one can always take the limit \n$\\OBJlim \\calF_X$ to be a subset of the product (set) $\\prod_{j \\in \n\\Obj(\\calJ)} X_j$. When all the coarse spaces $X_j$ are connected (i.e., in \n$\\CATConnCrs$), one can take $\\OBJlim \\calF_X$ to be, as a set, exactly the set \nproduct. Moreover, the proof actually gives a concrete description of limits in \nthe coarse categories. If all the $X_j$ are connected, the coarse structure on\n\\[\n Y \\defeq \\OBJlim \\calF_X\n \\defeq \\pfx{\\CATSet}\\prod_{\\mathclap{j \\in \\Obj(\\calJ)}}\\; X_j\n\\]\nconsists of all $F \\in \\calE_{|Y|_1}$ such that, for all arrows $u \\in \n\\Map(\\calJ)$ and (all) representative coarse maps $f_u \\from X_{\\source(u)} \\to \nX_{\\target(u)}$ of $\\calF_X(u)$:\n\\begin{enumerate}\n\\item $((f_u \\circ \\pi_{\\source(u)}) \\cross \\pi_{\\target(u)}) |_F$ is proper; \n and\n\\item $((f_u \\circ \\pi_{\\source(u)}) \\cross \\pi_{\\target(u)})(F)$ is an \n entourage of $X_{\\target(u)}$.\n\\end{enumerate}\n(By taking $u$ to be the identity arrow of $j \\in \\Obj(\\calJ)$, one gets that \nthe $\\pi_j$ are coarse for $F$.)\n\\end{remark}\n\n\n\\subsection{Entourages as subspaces of products}\\label{ent-subsp-prod}\n\nIs there a relation between entourages of a coarse space $X$, which are subsets \nof $X^{\\cross 2} \\defeq X \\cross X$, and the product coarse space $X \\cross X$? \nWe first need a coarse space $\\Terminate(X)$ which we will discuss more \nthoroughly in \\S\\ref{subsect:Crs-Term}: For any $X$, $\\Terminate(X) \\defeq X$ \nas a set, with coarse structure\n\\[\n \\calE_{\\Terminate(X)} \\defeq \\set{E \\in \\calE_{|X|_1}\n \\suchthat 1_{E \\cdot X}, 1_{X \\cdot E} \\in \\calE_X}.\n\\]\nNote that if $X$ is unital, $\\Terminate(X) = |X|_1$.\n\nThe following will be useful later in conjunction with various universal \nproperties, as well as generalized coarse quotients (which we intend to study \nin \\cite{crscat-quot}).\n\n\\begin{proposition}\\label{prop:ent-prod}\nSuppose $X$ is a coarse space. If $E \\in \\calE_{\\Terminate(X)}$, then $E$ can \nbe considered as a unital subspace $|E|$ of the product coarse space $X \\cross \nX$. If in fact $E \\in \\calE_X$, then the restricted projections $\\pi_1 |_{|E|}, \n\\pi_2 |_{|E|} \\from |E| \\to X$ are close. Conversely, any unital subspace $|E| \n\\subseteq X \\cross X$ determines a subset $E \\in \\calE_{\\Terminate(X)} \n\\subseteq \\calE_{|X|_1}$; if $\\pi_1 |_{|E|}$, $\\pi_2 |_{|E|}$ are close, then \n$E \\in \\calE_X$.\n\\end{proposition}\n\n\\begin{proof}\nIf $E \\in \\calE_{\\Terminate(X)}$, then $1_{|E|}$ is an entourage of $X \\cross \nX$: certainly $1_{|E|} \\in \\calE_{|X \\cross X|_1}$, and $(\\pi_1 \\cross \n\\pi_1)(1_{|E|}) = 1_{E \\cdot X}$ and $(\\pi_2 \\cross \\pi_2)(1_{|E|}) = 1_{X \n\\cdot E}$ are entourages of $X$. If $E \\in \\calE_X$, then $(\\pi_1 \\cross \n\\pi_2)(1_{|E|}) = E \\in \\calE_X$; since $|E|$ is unital, it follows that the \nrestricted projections are close.\n\nConversely, suppose $|E| \\subseteq X \\cross X$ is a unital subspace. 
Then the \nrestricted projections $\\pi_1 |_E = \\pi_1 |_{|E|}$ and $\\pi_2 |_E = \\pi_2 \n|_{|E|}$ are proper, so $E \\in \\calE_{|X|_1}$. Since $\\pi_1$ maps unital \nsubspaces of $X \\cross X$ to unital subspaces of $X$ and $\\pi_1(|E|) = E \\cdot \nX$, the left support, and symmetrically the right support, of $E$ is a unital \nsubspace of $X$, and so $E \\in \\calE_{\\Terminate(X)}$. If the restricted \nprojections are close, then $E = (\\pi_1 \\cross \\pi_2)(1_{|E|}) \\in \\calE_X$.\n\\end{proof}\n\n\n\\subsection{Colimits in the coarse categories}\n\nWe now do the same for coproducts, coequalizers, and thus colimits in the \ncoarse categories.\n\n\\begin{proposition}\nSuppose that $\\calC$ is one of the coarse categories $\\CATCrs$, $\\CATConnCrs$, \n$\\CATUniCrs$, or $\\CATConnUniCrs$, that $\\calP\\calC$ is the corresponding \nprecoarse category, and that $\\set{Y_j \\suchthat j \\in J}$ ($J$ some index set) \nis a collection of coarse spaces in $\\calC$ (or $\\calP\\calC$). The coproduct of \nthe $Y_j$ in $\\calC$ is just the coarse space\n\\[\n Y \\defeq \\pfx{\\calP\\calC}\\coprod_{j \\in J} Y_j\n\\]\n(coproduct in $\\calP\\calC$) together with the ``inclusions'' $[\\iota_j] \\from \nY_j \\to Y$, $j \\in J$ (closeness classes of the inclusions). If instead $\\calC \n= \\CATne{\\CATConnCrs}$, then the same holds except when $J = \\emptyset$, in \nwhich case the coproduct is any one-point coarse space. Thus all the coarse \ncategories have all coproducts.\n\\end{proposition}\n\n\\begin{proof}\nWe have shown (or at least mentioned, in the unital cases) the existence of the \ncorresponding coproduct cone $\\iota$ in the corresponding precoarse category, \nleaving aside the special case of $\\calC = \\CATne{\\CATConnCrs}$ and $J = \n\\emptyset$ (which is easily handled). The quotient functor yields a cone \n$[\\iota]$ in the coarse category $\\calC$; we must show that it is universal.\n\nSuppose $X$ is a coarse space and $[\\mu_j] \\from Y_j \\to X$, $j \\in J$, is a \ncollection of arrows in $\\calC$. Choosing representative coarse maps $\\mu_j$, \nwe get a natural coarse map $t \\from Y \\to X$ such that $\\mu_j = t \\circ \n\\iota_j$ (and hence $[\\mu_j] = [t] \\circ [\\iota_j]$) for all $j$. We must show \nthis $[t]$ is unique. Suppose $t' \\from Y \\to X$ is such that $\\mu_j \n\\closeequiv t' \\circ \\iota_j$ for all $j$. The coarse structure on the \nprecoarse coproduct $Y$ is generated by $F \\defeq (\\iota_j)^{\\cross 2}(F_j)$, \n$F_j \\in \\calE_{Y_j}$, $j \\in J$, and so to show $t \\closeequiv t'$ it is \nenough to show that $(t \\cross t')(F) \\in \\calE_X$ for such $F$. But\n\\[\n (t \\cross t')(F) = ((t \\circ \\iota_j) \\cross (t' \\circ \\iota_j))(F_j)\n\\]\nis in $\\calE_X$ since $t \\circ \\iota_j = \\mu_j \\closeequiv t' \\circ \\iota_j$, \nas required.\n\\end{proof}\n\nNext, coequalizers: Unlike coproducts, coequalizers in the coarse categories \ndiffer from coequalizers in the precoarse categories; in particular, they \nalways exist.\n\n\\begin{definition}\nSuppose $Y$ is a coarse space and $f, f' \\from Y \\to X$ ($X$ some set) are \nlocally proper maps. 
The \\emph{coequalizing push-forward coarse structure} \n$(f,f')_* \\calE_Y$ (on $X$ along $f$ and $f'$) is\n\\[\n (f,f')_* \\calE_Y \\defeq \\langle f_* \\calE_Y, (f')_* \\calE_Y,\n \\set{(f \\cross f')(F) \\suchthat F \\in \\calE_Y} \\rangle_X.\n\\]\n(We may similarly define connected, unital, and connected unital versions.)\n\\end{definition}\n\nBy Proposition~\\ref{prop:loc-prop-prod}, the sets $(f \\cross f')(F)$ satisfy \nthe properness axiom. The coequalizing push-forward coarse structure makes $f$ \nand $f'$ \\emph{close} coarse maps, and is the minimum coarse structure on $X$ \nfor which this is true.\n\n\\begin{definition}\nSuppose $f, f' \\from Y \\to X$ are coarse maps. The \\emph{coequalizer} of $[f]$ \nand $[f']$ is $\\OBJcoequalizer_{[f],[f']} \\defeq X$ equipped with the coarse \nstructure\n\\[\n \\calE_{\\OBJcoequalizer_{[f],[f']}}\n \\defeq \\langle \\calE_X, (f,f')_* \\calE_Y \\rangle_X,\n\\]\ntogether with the closeness class of the ``identity'' map\n\\[\n \\coequalizer_{[f],[f']} \\from X \\to \\OBJcoequalizer_{[f],[f']}\n\\]\n(which is a coarse map).\n\\end{definition}\n\nObserve that if $X$ is unital so too is the coequalizer, and similarly if $X$ \nis connected.\n\n\\begin{lemma}\nSuppose $f, f' \\from Y \\to X$ are coarse maps. The coequalizer of $[f]$ and \n$[f']$ is coarsely invariant (hence the notation).\n\\end{lemma}\n\n\\begin{proof}\nSuppose $e, e' \\from Y \\to X$ are close to $f$, $f'$, respectively. Observe \nthat, since $f_* \\calE_Y, (f')_* \\calE_Y \\subseteq \\calE_X$,\n\\[\n \\calE_{\\OBJcoequalizer_{[f],[f']}} = \\langle \\calE_X,\n \\set{(f \\cross f')(F) \\suchthat F \\in \\calE_Y} \\rangle_X\n\\]\nand similarly for $e$ and $e'$. Thus it suffices to show\n\\[\n \\set{(e \\cross e')(F) \\suchthat F \\in \\calE_Y}\n \\subseteq \\calE_{\\OBJcoequalizer_{[f],[f']}}\n\\]\nand similarly symmetrically. But if $F \\in \\calE_Y$, then\n\\[\n (e \\cross e')(F) \\subseteq (e \\cross f)(1_{F \\cdot Y})\n \\circ (f \\cross f')(F) \\circ (f' \\cross e')(1_{Y \\cdot F})\n\\]\nis in $\\calE_{\\OBJcoequalizer_{[f],[f']}}$, as required.\n\\end{proof}\n\n\\begin{proposition}\nThe coequalizer of $[f], [f'] \\from Y \\to X$ really is (in the categorical \nsense) the coequalizer of $[f]$ and $[f']$ in $\\CATCrs$ (or in $\\CATConnCrs$, \n$\\CATne{\\CATConnCrs}$, $\\CATUniCrs$, or $\\CATConnUniCrs$, as appropriate), \nhence the terminology. Thus $\\CATCrs$ (and the other coarse categories) have \nall coequalizers of pairs of arrows.\n\\end{proposition}\n\n\\begin{proof}\nFix representative coarse maps $f$ and $f'$, and suppose $g \\from X \\to W$ is a \ncoarse map such that $g \\circ f \\closeequiv g \\circ f'$. Let $\\utilde{g} \\from \n\\OBJcoequalizer_{[f],[f']} \\to W$ be the same, as a set map, as $g$; then \nclearly $g = \\utilde{g} \\circ \\coequalizer_{[f],[f']}$, and hence $[g] = \n[\\utilde{g}] \\circ [\\coequalizer_{[f],[f']}]$, assuming $\\utilde{g}$ is \nactually a coarse map. To show that $\\utilde{g}$ is coarse, it suffices to show \nthat $\\utilde{g}$ is coarse for sets $E \\defeq (f \\cross f')(F)$, $F \\in \\calE_Y$. \nSince\n\\[\n ((g \\circ f) \\cross (g \\circ f')) |_F\n = g^{\\cross 2} |_E \\circ (f \\cross f') |_F^E\n\\]\nis proper (Proposition~\\ref{prop:loc-prop-prod}), it follows that \n$\\utilde{g}^{\\cross 2} |_E = g^{\\cross 2} |_E$ is proper, hence $\\utilde{g}$ is \nlocally proper for $E$. 
Since $g \\circ f$ and $g \\circ f'$ are close, it \nfollows that $\\utilde{g}$ preserves $E$.\n\nUniqueness of $[\\utilde{g}]$: Suppose $\\utilde{g}' \\from \n\\OBJcoequalizer_{[f],[f']} \\to W$ is a coarse map such that $g \\closeequiv \n\\utilde{g}' \\circ \\coequalizer_{[f],[f']}$. To show that $\\utilde{g}$ is close \nto $\\utilde{g}'$, we must show that $(\\utilde{g} \\cross \\utilde{g}')(E) \\in \n\\calE_W$ for all $E \\in \\calE_{\\OBJcoequalizer_{[f],[f']}}$. Clearly, this is \nthe case for $E \\in \\calE_X \\subseteq \\calE_{\\OBJcoequalizer_{[f],[f']}}$, so \nit suffices to show this for $E = (f \\cross f')(F)$ for some $F \\in \\calE_Y$. \nThe map $g' \\defeq \\utilde{g}' \\circ \\coequalizer_{[f],[f']}$ is close to $g$, \nhence $g \\circ f \\closeequiv g' \\circ f'$. Therefore,\n\\[\n (\\utilde{g} \\cross \\utilde{g}')((f \\cross f')(F))\n = ((g \\circ f) \\cross (g' \\circ f'))(F)\n\\]\nis in $\\calE_W$, as required.\n\nAs previously noted, if $X$ is connected, unital, and\\/or nonempty, then \n$\\OBJcoequalizer_{[f],[f']}$ has the corresponding property or properties, so the \nabove actually proves the result in all the coarse categories.\n\\end{proof}\n\nSince the coarse categories have all coproducts and coequalizers, we \nimmediately get the following.\n\n\\begin{theorem}\\label{thm:Crs-colim}\nThe coarse categories $\\CATCrs$, $\\CATConnCrs$, $\\CATne{\\CATConnCrs}$, \n$\\CATUniCrs$, and $\\CATConnUniCrs$ have all colimits.\n\\end{theorem}\n\n\n\\subsection{The termination functor}\\label{subsect:Crs-Term}\n\nFor essentially set theoretic reasons, $\\CATCrs$ does not have a terminal \nobject (Corollary~\\ref{cor:Crs-no-term}). However, for many purposes, one can \nfind a suitable substitute. We begin with some general definitions which are \napplicable in any category $\\calC$.\n\n\\begin{definition}\nIn $\\calC$, an object $\\tilde{X}$ \\emph{terminates} an object $X$ if:\n\\begin{enumerate}\n\\item there is a (unique) arrow $\\tau_X \\from X \\to \\tilde{X}$; and\n\\item for all $Y \\in \\Obj(\\calC)$, there is at most one arrow $Y \\to \\tilde{X}$.\n\\end{enumerate}\nI.e., $\\tilde{X}$ is terminal in the full subcategory of $\\calC$ consisting of \n$X$ and all objects mapping to $\\tilde{X}$. $\\tilde{X}$ \\emph{universally \nterminates} $X$ if it is the smallest object terminating $X$ (i.e., for all \n$\\tilde{X}'$ terminating $X$, there is an arrow $\\tilde{X} \\to \\tilde{X}'$).\n\\end{definition}\n\nIf $\\tilde{X}$ terminates $X$, then for all $Y$ and pairs of arrows $f, g \\from \nY \\to X$, $\\tau_X \\circ f = \\tau_X \\circ g$. Two objects universally \nterminating $X$ are canonically and uniquely isomorphic. If $\\tilde{X}$ \nterminates any object, then it universally terminates itself.\n\nIn a category with a terminal object $1$, the product of any object $Y$ and $1$ \nis just $Y$. The following generalizes this.\n\n\\begin{proposition}\\label{prop:term-id}\nIf there is some arrow $f \\from Y \\to X$ in $\\calC$ and $\\tilde{X}$ \nterminates $X$ in $\\calC$, then $Y$ is the (categorical) product of \n$\\tilde{X}$ and $Y$ (in $\\calC$).\n\\end{proposition}\n\n\\begin{proof}\nThe two ``projections'' from $Y$ are $\\pi_{\\tilde{X}} \\defeq \\tau_X \\circ f \n\\from Y \\to \\tilde{X}$ and $\\pi_Y \\defeq \\id_Y \\from Y \\to Y$. Suppose $Z \\in \n\\Obj(\\calC)$ is equipped with arrows $p_{\\tilde{X}} \\from Z \\to \\tilde{X}$ and \n$p_Y \\from Z \\to Y$. 
Both these arrows factor through $p_Y$: evidently $p_Y = \n\\pi_Y \\circ p_Y$, but also $p_{\\tilde{X}} = \\pi_{\\tilde{X}} \\circ p_Y$ since \nthere is only one arrow $Z \\to \\tilde{X}$.\n\\end{proof}\n\nIf $\\calC$ is known to have products (of pairs of objects), we can restate the \nabove Proposition in the following way: Whenever there is an arrow $f \\from Y \n\\to X$ and $\\tilde{X}$ terminates $X$, the projection $\\pi_Y \\from \n\\tilde{X} \\cross Y \\to Y$ is an isomorphism. Moreover, one the inverse \nisomorphism is given by the composition\n\\[\n Y \\nameto{\\smash{\\Delta_Y}} Y \\cross Y\n \\nameto{\\smash{(\\tau_X \\circ f) \\cross \\id_Y}} \\tilde{X} \\cross Y.\n\\]\n\n\\begin{definition}\nA \\emph{termination functor} on $\\calC$ is a functor $\\calC \\to \\calC$ \n(temporarily denoted $X \\mapsto \\tilde{X}$) which sends each $X$ to an object \n$\\tilde{X}$ terminating $X$; such a functor is \\emph{universal} if $\\tilde{X}$ \nalways universally terminates $X$.\n\\end{definition}\n\nThe following is implied: Whenever there is an arrow $f \\from Y \\to X$, there \nis a unique arrow $\\tilde{Y} \\to \\tilde{X}$ (namely $\\tilde{f}$). Note that \nuniversality is meant in the ``pointwise'' sense, and we do not assert \nuniversality as a termination functor. Universal termination functors are \nunique up to natural equivalence. Also observe that universal termination \nfunctors are idempotent up to natural equivalence.\n\n\\begin{example}\nIf $\\calC$ has a terminal object $1$, then $1$ terminates all objects, and $X \n\\mapsto 1$ is a termination functor (not necessarily universal). In \n$\\CATpt{\\CATSet}$ or $\\CATpt{\\CATTop}$ (pointed sets or topological spaces, \nrespectively), the functor $X \\mapsto \\ast$, where $\\ast$ is any one-point set \nor space, is a universal termination functor. More generally, in any category \n$\\calC$ with a zero object $0$ (i.e., $0$ is initial and terminal), $X \\mapsto \n0$ is a universal termination functor.\n\\end{example}\n\n\\begin{example}\\label{ex:Set-Top-univ-term}\nIn $\\CATSet$ or $\\CATTop$, the functor given by\n\\[\n X \\mapsto \\begin{cases}\n \\emptyset & \\text{if $X = \\emptyset$, or} \\\\\n \\ast & \\text{if $X \\neq \\emptyset$,}\n \\end{cases}\n\\]\nis a universal termination functor.\n\\end{example}\n\n\\begin{example}\nIn $\\CATCrs$ (and our various full subcategories), $|X|_1$ terminates any \ncoarse space $X$ (Proposition~\\ref{prop:term-close}). However, $X \\mapsto \n|X|_1$ does not define a functor on $\\CATPCrs$ (or $\\CATCrs$). E.g., for any \nset $X$, there is always a (unique) coarse map from $|X|_0$ to a one-point \ncoarse space $\\ast$, but no coarse map $|X|_1 = |\\,|X|_0\\,|_1 \\to \\ast$ when \n$X$ is infinite. The problem is that coarse maps from $|X|_1$ must be globally \nproper; in the unital categories this is not a problem, so $X \\mapsto |X|_1$ \ndoes define a coarsely invariant functor $\\CATUniPCrs \\to \\CATUniPCrs$ (for \nexample). The induced functor on unital coarse category $\\CATUniCrs$ is a \nuniversal termination functor. 
We wish to generalize this to all of $\\CATCrs$.\n\\end{example}\n\nWe recall the definition of the coarse space $\\Terminate(X)$ (for $X$ a coarse \nspace) from \\S\\ref{ent-subsp-prod}, and extend $\\Terminate$ to a functor in the \nobvious way.\n\n\\begin{definition}\nFor any coarse space $X$, $\\Terminate(X)$ is the coarse space which is just $X$ \nas a set with coarse structure\n\\[\n \\calE_{\\Terminate(X)} \\defeq \\set{E \\in \\calE_{|X|_1}\n \\suchthat 1_{E \\cdot X}, 1_{X \\cdot E} \\in \\calE_X};\n\\]\n$\\tau_X \\from X \\to \\Terminate(X)$ is the ``identity'' map. If $f \\from Y \\to \nX$, $\\Terminate(f) \\from \\Terminate(Y) \\to \\Terminate(X)$ is the same as $f$ as \na set map.\n\\end{definition}\n\nObserve the following:\n\\begin{enumerate}\n\\item $E \\subseteq X^{\\cross 2}$ is an entourage of $\\Terminate(X)$ if and only \n    if $E$ satisfies the properness axiom (i.e., $E \\in \\calE_{|X|_1}$) and the \n    left and right supports of $E$ are unital subspaces of $X$.\n\\item $\\Terminate(X)$ has the same unital subspaces as $X$ and is the maximum \n    coarse structure on $X$ with this property. (Consequently, if $X$ is \n    unital, $\\Terminate(X) = |X|_1$. It also follows that $\\Terminate$ is \n    idempotent, and hence so too is the induced functor $[\\Terminate]$; see \n    below.)\n\\end{enumerate}\n\n\\begin{proposition}\n$\\Terminate$ is a coarsely invariant functor $\\CATPCrs \\to \\CATPCrs$. The \ninduced functor $[\\Terminate] \\from \\CATCrs \\to \\CATCrs$ is a universal \ntermination functor.\n\\end{proposition}\n\n\\begin{proof}\nThat $\\Terminate(f)$ is a coarse map follows from the above observations, and \nhence $\\Terminate$ is a functor. Moreover, using the above observations, we \nsee that, for all $X$, all coarse maps to $\\Terminate(X)$ are close. In \nparticular, this implies first that $\\Terminate$ is coarsely invariant and \nsecond that $[\\Terminate]$ is a termination functor on $\\CATCrs$.\n\nIt only remains to show universality. Suppose $\\tilde{X}$ terminates $X$, so \nthere is a unique $[t] \\from X \\to \\tilde{X}$, represented by a coarse map $t$, \nsay. It suffices to show that there is a coarse map $t' \\from \\Terminate(X) \\to \n\\tilde{X}$; since $\\tilde{X}$ terminates $X$ in $\\CATCrs$, uniqueness of $[t']$ \nfollows, as does the equality $[t] = [t'] \\circ [\\tau_X]$.\n\nTake $t' \\defeq t \\from \\Terminate(X) = X \\to \\tilde{X}$ as a set map. Local \nproperness of $t'$ follows from the above observations and \nProposition~\\ref{prop:loc-prop}\\enumref{prop:loc-prop:II}. To see that $t'$ \npreserves entourages, we use Proposition~\\ref{prop:ent-prod}: If $E \\in \n\\calE_{\\Terminate(X)}$, consider the unital subspace $|E|$ of the product \ncoarse space $X \\cross X$. Since $\\tilde{X}$ terminates $X$, $t \\circ \\pi_1 \n|_{|E|} \\closeequiv t \\circ \\pi_2 |_{|E|}$, and hence\n\\[\n ((t \\circ \\pi_1 |_{|E|}) \\cross (t \\circ \\pi_2 |_{|E|}))(1_{|E|})\n = (t')^{\\cross 2}(E)\n\\]\nis an entourage of $\\tilde{X}$, as required.\n\\end{proof}\n\nIn the above proof, one could instead consider the map $\\Terminate(t) \\from \n\\Terminate(X) \\to \\Terminate(\\tilde{X})$, and show that $\\Terminate(\\tilde{X}) =\n\\tilde{X}$.\n\n\\begin{remark}\n$\\Terminate$ restricts to (coarsely invariant) endofunctors on the other \nprecoarse categories, and hence $[\\Terminate]$ restricts to universal \ntermination functors on the other coarse categories. 
(The proof of the above \nProposition requires only unital coarse spaces $|E|$ and not actually the \nnonunital products $X \\cross X$, and hence works even in the unital cases.) Of \ncourse, in the unital cases, $\\Terminate$ is just the functor $X \\mapsto \n|X|_1$.\n\\end{remark}\n\nBy applying Proposition~\\ref{prop:term-id}, we immediately get the following, \nwhich will play a crucial role in the development of exponential objects in the \ncoarse categories \\cite{crscat-II}.\n\n\\begin{corollary}\\label{cor:Crs-term-id}\nIf there is a coarse map $Y \\to \\Terminate(X)$, where $X$ and $Y$ are coarse \nspaces, then\n\\[\n \\pi_Y \\from \\Terminate(X) \\cross Y \\to Y\n\\]\nis a coarse equivalence. The maps\n\\[\n D_\\tau \\defeq (\\tau \\cross \\id_Y) \\circ \\Delta_Y\n \\from Y \\to \\Terminate(X) \\cross Y,\n\\]\nwhere $\\tau \\from Y \\to \\Terminate(X)$ is any coarse map (they are all close), \nare coarsely inverse to $\\pi_Y$. Hence, if there is a coarse map $Y \\to \n\\Terminate(X)$, then $Y \\cong \\Terminate(X) \\cross Y$ canonically in $\\CATCrs$ \n(or in $\\CATConnCrs$ or $\\CATne{\\CATConnCrs}$). In the case $Y \\defeq X$, we \nget that $\\pi_X \\from \\Terminate(X) \\cross X \\to X$ and $D_X \\defeq D_{\\tau_X} \n\\from X \\to \\Terminate(X) \\cross X$ are coarsely inverse coarse equivalences, \nso $X \\cong \\Terminate(X) \\cross X$ canonically in $\\CATCrs$ (or in \n$\\CATConnCrs$ or $\\CATne{\\CATConnCrs}$).\n\\end{corollary}\n\n\\begin{remark}\\label{rmk:term-unital-prod}\nFor any set $X$, $\\Terminate(|X|_1) = |X|_1$, so $|X|_1 \\cross |X|_1$ is \n(canonically) coarsely equivalent to $|X|_1$. While $|X|_1$ is always unital, \n$|X|_1 \\cross |X|_1$ is unital only when $X$ is finite. In particular, \nunitality is \\emph{not} coarsely invariant. It also follows easily that $|X|_1$ \nis actually the product of $|X|_1$ with itself in the unital coarse category \n$\\CATUniCrs$. More generally, for any coarse space $X$, the product of $X$ and \n$|X|_1$ in $\\CATUniCrs$ is just $X$. (As previously mentioned, $\\CATUniCrs$ has \nsome products of infinite spaces, even though the natural construction of the \ncorresponding products in $\\CATCrs$ are nonunital.)\n\\end{remark}\n\n\n\\subsection{Monics and images}\n\n\\begin{example}\nPull-back coarse structures are not coarsely invariant. That is, suppose $f, f' \n\\from Y \\to X$ are coarse maps. Even if $f \\closeequiv f'$, it may not be the \ncase that $f^* \\calE_X = (f')^* \\calE_X$. To see this, take $Y \\defeq \n|\\setN|_0^\\TXTconn$, $X \\defeq |\\setN|_1$, $f$ to be the ``identity'' map (as a \nset map), and $f'$ to be a constant map. Then $f^* \\calE_X = \\calE_{|Y|_1}$ \nwhereas $(f')^* \\calE_X = \\calE_Y$.\n\\end{example}\n\n\\begin{proposition}\nIf $f, f' \\from Y \\to X$ are coarse maps with $f \\closeequiv f'$, then\n\\[\n \\calE_{\\Terminate(Y)} \\intersect f^* \\calE_X\n = \\calE_{\\Terminate(Y)} \\intersect (f')^* \\calE_X.\n\\]\n\\end{proposition}\n\n\\begin{proof}\nWe prove inclusion $\\subseteq$; containment $\\supseteq$ follows symmetrically. \nSuppose $F \\in \\calE_{\\Terminate(Y)} \\intersect f^* \\calE_X$. Since $F \\in \n\\Terminate(Y)$, $f'$ is locally proper for $F$ \n(Proposition~\\ref{prop:loc-prop}\\enumref{prop:loc-prop:II}). It only remains to \nshow that $(f')^{\\cross 2}(F) \\in \\calE_X$. 
But\n\\[\\begin{split}\n (f')^{\\cross 2}(F) \\subseteq (f' \\cross f)(1_{F \\cdot Y})\n \\circ f^{\\cross 2}(F) \\circ (f \\cross f')(1_{Y \\cdot F}) \\in \\calE_X\n\\end{split}\\]\nsince $f \\closeequiv f'$ (and the left and right supports of $F$ are unital \nsubspaces of $Y$) and $f^{\\cross 2}(F) \\in \\calE_X$.\n\\end{proof}\n\n\\begin{definition}\nSuppose $[f] \\from Y \\to X$. The \\emph{coarsely invariant pull-back coarse \nstructure} $[f]^* \\calE_X$ on $Y$ (along $[f]$) is given by\n\\[\n [f]^* \\calE_X \\defeq \\calE_{\\Terminate(Y)} \\intersect f^* \\calE_X\n\\]\n(where $f \\from Y \\to X$ is any representative coarse map).\n\\end{definition}\n\n\\begin{proposition}\\label{prop:Crs-factor-I}\nIf $[f] \\from Y \\to X$ is represented by a coarse map $f$, then $[f]$ factors \nas\n\\[\n Y \\nameto{\\smash{[\\beta]}} |Y|_{[f]^* \\calE_X}\n \\nameto{\\smash{[\\utilde{f}]}} X,\n\\]\nwhere $\\beta = \\id_Y$ and $\\utilde{f} = f$ as set maps (i.e., $\\calE_Y \n\\subseteq [f]^* \\calE_X$). Moreover, $[\\utilde{f}]$ depends only on $[f]$ (and \nnot on the particular $f$) and is unique in the above factorization.\n\\end{proposition}\n\n\\begin{proof}\nThe factorization follows immediately from Corollary~\\ref{cor:crs-factor-I}. We \nnow show that $f \\closeequiv f'$ implies $\\utilde{f} \\closeequiv \\utilde{f}'$ \n(noting that $[f]^* \\calE_X = [f']^* \\calE_X$). If $F \\in [f]^* \\calE_X$, then\n\\[\\begin{split}\n (f \\cross f')(F) & = (f \\cross f')(F \\circ 1_{Y \\cdot F}) \\\\\n & \\subseteq f^{\\cross 2}(F) \\circ (f \\cross f')(1_{Y \\cdot F})\n\\end{split}\\]\nis in $\\calE_X$ since $f^{\\cross 2}(F) \\in \\calE_X$ and $1_{Y \\cdot F} \\in \n\\calE_Y$ so $(f \\cross f')(1_{Y \\cdot F}) \\in \\calE_X$ as $f \\closeequiv f'$. \nUniqueness: If $[f] = [g] \\circ [\\beta]$, where $[g] \\from |Y|_{[f]^* \\calE_X} \n\\to X$ and $g$ is any representative, then $f \\closeequiv g \\circ \\beta$, so \n$\\utilde{f} \\closeequiv (g \\circ \\beta)\\utilde{\\mathstrut} = g$.\n\\end{proof}\n\n\\begin{proposition}\\label{prop:Crs-monic}\n$[f] \\from Y \\to X$ is monic in $\\CATCrs$ if and only if $\\calE_Y = [f]^* \n\\calE_X$ (i.e., if and only if $Y = |Y|_{[f]^* \\calE_X}$).\n\\end{proposition}\n\n\\begin{proof}\nFix a representative coarse map $f \\from Y \\to X$ and let $Y \n\\nameto{\\smash{\\beta}} |Y|_{[f]^* \\calE_X} \\nameto{\\smash{\\utilde{f}}} X$ be \nthe canonical factorization.\n\n(\\textimplies): Suppose there exists some $F \\in [f]^* \\calE_X \\setminus \n\\calE_Y$. Consider $|F|$ as a unital subspace of the product $Y \\cross Y$, with \nprojections $\\pi_1 |_{|F|}, \\pi_2 |_{|F|} \\from |F| \\to Y$. Then\n\\[\n (\\pi_1 |_{|F|} \\cross \\pi_2 |_{|F|})(1_{|F|}) = F,\n\\]\nso $\\pi_1 |_{|F|}$ is not close to $\\pi_2 |_{|F|}$, but $\\beta \\circ \\pi_1 \n|_{|F|}$ is close to $\\beta \\circ \\pi_2 |_{|F|}$. Hence $[\\pi_1 |_{|F|}] \\neq \n[\\pi_2 |_{|F|}]$ but\n\\[\n [f] \\circ [\\pi_1 |_{|F|}] = [\\utilde{f}] \\circ [\\beta] \\circ [\\pi_1 |_{|F|}]\n = [\\utilde{f}] \\circ [\\beta] \\circ [\\pi_2 |_{|F|}]\n = [f] \\circ [\\pi_2 |_{|F|}],\n\\]\nso $[f]$ is not monic.\n\n(\\textimpliedby): Suppose $g, g' \\from Z \\to Y$ are coarse maps such that $[f] \n\\circ [g] = [f] \\circ [g']$. 
Then, for each $G \\in \\calE_Z$,\n\\[\n ((f \\circ g) \\cross (f \\circ g'))(G)\n = f^{\\cross 2}((g \\cross g')(G)) \\in \\calE_X.\n\\]\nBut then $(g \\cross g')(G) \\in [f]^* \\calE_X = \\calE_Y$, so $[g] = [g']$, as \nrequired.\n\\end{proof}\n\n\\begin{corollary}\nFor any $[f] \\from Y \\to X$, the canonical arrow\n\\[\n [\\utilde{f}] \\from |Y|_{[f]^* \\calE_X} \\to X\n\\]\nis monic in $\\CATCrs$.\n\\end{corollary}\n\n\\begin{definition}\\label{def:Crs-image}\nSuppose $[f] \\from Y \\to X$. Denote $\\OBJim [f] \\defeq |Y|_{[f]^* \\calE_X}$ and \n$\\im [f] \\defeq [\\utilde{f}] \\from \\OBJim [f] \\to X$, where $[\\utilde{f}]$ is \ndefined as above. We will also sometimes write $[f](Y) \\defeq \\OBJim [f]$.\n\\end{definition}\n\nDespite the notation, $[f](Y)$ should not be considered as a subspace of $X$ \n(however, see Proposition~\\ref{prop:Crs-subsp-images} and the discussion which \nprecedes it).\n\n\\begin{theorem}\nFor any $[f] \\from Y \\to X$, the subobject of $X$ represented by the arrow $\\im \n[f] \\from \\OBJim [f] \\monto X$ is the (categorical) image of $[f]$ in \n$\\CATCrs$.\n\\end{theorem}\n\n\\begin{proof}\nSuppose $[f]$ also factors as $Y \\nameto{\\smash{[h]}} Z \\nameto{\\smash{[g]}} X$ \nwhere $[g]$ is monic, so that $\\calE_Z = [g]^* \\calE_X$. We must show that \nthere is a unique $[\\underline{h}] \\from \\OBJim [f] \\to Z$ such that $\\im [f] = \n[g] \\circ [\\underline{h}]$.\n\nPick a representative coarse map $h \\from Y \\to Z$, and put $\\underline{h} \n\\defeq h$ as a set map $\\OBJim [f] = Y \\to Z$. First, $\\underline{h}$ is a \ncoarse map: Local properness is equivalent to properness when restricted to \nunital subspaces (Corollary~\\ref{cor:loc-prop-uni}); since $\\OBJim [f]$ and $Y$ \nhave the same unital subspaces (and $\\underline{h} = h$ as set maps), \n$\\underline{h}$ is locally proper. Reasoning similarly, for any $F \\in \n\\calE_{\\OBJim [f]} = [f]^* \\calE_X$, $\\underline{h}^{\\cross 2}(F)$ is in \n$\\calE_{\\Terminate(Z)}$. Then, since $\\calE_Z = [g]^* \\calE_X$, it follows that \n$\\underline{h}$ is coarse. From the uniqueness assertion of \nProposition~\\ref{prop:Crs-factor-I}, we get that $[g] \\circ [\\underline{h}] = \n\\im [f]$. Uniqueness of $[\\underline{h}]$: If $[h'] \\from \\OBJim [f] \\to Z$ and \n$[g] \\circ [h'] = \\im [f] = [g] \\circ [h]$, then $[h] = [h']$ since $[g]$ is \nmonic.\n\\end{proof}\n\n\n\\subsection{Epis and coimages}\n\nFor rather trivial reasons, push-forward coarse structures are not coarsely \ninvariant. Recall that coarse structures are semirings, which gives rise to an \nobvious notion of ideals.\n\n\\begin{definition}\\label{def:ideal}\nSuppose $\\calE_X$ is a coarse structure on a set $X$. A subset $\\calE \\subseteq \n\\calE_X$ is an \\emph{ideal} of $\\calE_X$ if it is a coarse structure on $X$ \nsuch that $E \\circ E', E' \\circ E \\in \\calE$ for all $E \\in \\calE$, $E' \\in \n\\calE_X$. Note that any intersection of ideals is again an ideal. The \n\\emph{ideal} $\\lAngle \\calE \\rAngle_X$ (of $\\calE_X$ generated by $\\calE$) is \nthe smallest ideal of $\\calE_X$ which contains $\\calE$.\n\\end{definition}\n\n\\begin{proposition}\nSuppose $f, f' \\from Y \\to X$ are coarse maps with $f \\closeequiv f'$. 
Then\n\\[\n \\lAngle f_* \\calE_Y \\rAngle_X = \\lAngle (f')_* \\calE_Y \\rAngle_X.\n\\]\n\\end{proposition}\n\n\\begin{proof}\nElements $E \\in \\lAngle f_* \\calE_Y \\rAngle_X$ are exactly subsets\n\\[\n E \\subseteq E' \\circ f^{\\cross 2}(F) \\circ E'' \\union E'''\n\\]\nfor some $F \\in \\calE_Y$ and some $E', E'', E''' \\in \\calE_X$ with $E'''$ \nfinite. But then\n\\[\n E \\subseteq (E' \\circ (f \\cross f')(1_{F \\cdot Y})) \\circ (f')^{\\cross 2}(F)\n \\circ ((f' \\cross f)(1_{Y \\cdot F}) \\circ E'') \\union E'''\n\\]\nis in $\\lAngle (f')_* \\calE_Y \\rAngle_X$ (and symmetrically) as required.\n\\end{proof}\n\n\\begin{definition}\nSuppose $[f] \\from Y \\to X$. The \\emph{coarsely invariant push-forward coarse \nstructure} $[f]_* \\calE_Y$ on $X$ (along $[f]$) is given by\n\\[\n [f]_* \\calE_Y \\defeq \\lAngle f_* \\calE_Y \\rAngle_X\n\\]\n(where $f \\from Y \\to X$ is any representative coarse map).\n\\end{definition}\n\nDespite the obvious parallels with coarsely invariant pull-backs, the coarsely \ninvariant push-forward $[f]_* \\calE_Y$ depends very little on $\\calE_Y$. In \nfact, it depends only on the set of unital subspaces of $Y$ (recall from \nProposition~\\ref{prop:close-uni} that closeness is entirely determined on the \nunital subspaces). Thus we have the following.\n\n\\begin{proposition}\nFor any $[f] \\from Y \\to X$,\n\\[\n [f]_* \\calE_Y = (\\im [f])_* \\calE_{\\OBJim [f]}.\n\\]\n\\end{proposition}\n\n\\begin{proof}\nRecall that $\\OBJim [f] \\defeq |Y|_{[f]^* \\calE_X}$, where $[f]^* \\calE_X \n\\defeq \\calE_{\\Terminate(Y)} \\intersect f^* \\calE_X$ (for any representative \nmap $f$) and $\\im [f] \\defeq f$ as a set map. Since $\\calE_Y \\subseteq [f]^* \n\\calE_X$, $[f]_* \\calE_Y \\subseteq (\\im [f])_* \\calE_{\\OBJim [f]}$. For the \nopposite inclusion, it suffices to show that, for $F \\in [f]^* \\calE_X$,\n\\[\n E \\defeq f^{\\cross 2}(F) \\in \\lAngle f_* \\calE_Y \\rAngle_X;\n\\]\nbut $F \\cdot Y$ is a unital subspace of $Y$ (hence $1_{F \\cdot Y} \\in \\calE_Y$) \nand $f^{\\cross 2}(F) \\in \\calE_X$, so\n\\[\n E = f^{\\cross 2}(1_{F \\cdot Y}) \\circ E \\in \\lAngle f_* \\calE_Y \\rAngle_X,\n\\]\nas required.\n\\end{proof}\n\nSuppose $[f] \\from Y \\to X$, represented by a coarse map $f$. Denote\n\\[\n X_{[f]} \\defeq \\set{x \\in X\n \\suchthat \\text{$x$ is connected to some $x' \\in f(Y)$}} \\subseteq X,\n\\]\na subspace of $X$. It is easy to see that $X_{[f]}$ really only depends on the \ncloseness class $[f]$, as the notation indicates. (If $X$ is connected, then of \ncourse $X_{[f]} = X$.)\n\nThe subspace $X_{[f]} \\subseteq X$ contains the set image of $f$ (and indeed of \nany coarse map close to $f$), and hence we may take the range restriction $f \n|^{X_{[f]}}$ which is evidently a coarse map $Y \\to X_{[f]}$. It is easy to see \nthat the closeness class $[f |^{X_{[f]}}]$ only depends on the closeness class \n$[f]$, and hence we also temporarily denote\n\\[\n [f] |^{X_{[f]}} \\defeq [f |^{X_{[f]}}] \\from Y \\to X_{[f]}.\n\\]\nNow, we may coarsely invariantly push $\\calE_Y$ forward along $[f] |^{X_{[f]}}$ \nto get a coarse space $|X_{[f]}|_{([f] |^{X_{[f]}})_* \\calE_Y}$. 
We get the \nfollowing.\n\n\\begin{proposition}\\label{prop:Crs-factor-II}\nIf $[f] \\from Y \\to X$ is represented by a coarse map $f$, then $[f]$ factors \nas\n\\[\n Y \\nameto{\\smash{[\\tilde{f}]}} |X_{[f]}|_{([f] |^{X_{[f]}})_* \\calE_Y}\n \\nameto{\\smash{[\\alpha]}} X,\n\\]\nwhere $\\tilde{f} = f |^{X_{[f]}}$ and $\\alpha$ is the inclusion as set maps \n(thus $([f] |^{X_{[f]}})_* \\calE_Y \\subseteq \\calE_X$). Moreover, $[\\tilde{f}]$ \ndepends only on $[f]$ (and not $f$) and is unique in the above factorization.\n\\end{proposition}\n\n\\begin{proof}\nNearly all the assertions are clear from the definitions, \nCorollary~\\ref{cor:crs-factor-II}, and the previous remarks. We show that $f \n\\closeequiv f'$ implies $\\tilde{f} \\closeequiv \\tilde{f}'$: If $F \\in \\calE_Y$, \nthen\n\\[\n (\\tilde{f} \\cross \\tilde{f}')(F) = (f \\cross f')(F)\n \\subseteq f^{\\cross 2}(F) \\circ (f \\cross f')(1_{Y \\cdot F})\n\\]\nis in $([f] |^{X_{[f]}})_* \\calE_Y$ since $f^{\\cross 2}(F) \\in (f \n|^{X_{[f]}})_* \\calE_Y$ and $(f \\cross f')(1_{Y \\cdot F}) \\in \\calE_X \n|_{X_{[f]}}$. Uniqueness: If $[f] = [\\alpha] \\circ [g]$, where $[g] \\from Y \\to \n|X_{[f]}|_{([f] |^{X_{[f]}})_* \\calE_Y}$ and $g$ is any representative, then $f \n\\closeequiv \\alpha \\circ g$, so $\\tilde{f} \\closeequiv (\\alpha \\circ \ng)\\tilde{\\mathstrut} = g$.\n\\end{proof}\n\n\\begin{proposition}\\label{prop:Crs-epi}\n$[f] \\from Y \\to X$ is epi in $\\CATCrs$ if and only if $X_{[f]} = X$ and $[f]_* \n\\calE_Y = \\calE_X$ (i.e., if and only if $|X_{[f]}|_{([f] |^{X_{[f]}})_* \n\\calE_Y} = X$).\n\\end{proposition}\n\n\\begin{proof}\n(\\textimplies): Consider the push-out square\n\\[\\begin{CD}\n Y @>{[f]}>> X \\\\\n @V{[f]}VV @V{[e_1]}VV \\\\\n X @>{[e_2]}>> X \\copro_Y X\n\\end{CD}\\]\n(in $\\CATCrs$). Fix a representative coarse map $f \\from Y \\to X$. As a set, \none may take $X \\copro_Y X \\defeq X_1 \\disjtunion X_2$ (disjoint union of sets) \nwhere $X_1 \\defeq X_2 \\defeq X$, with coarse structure\n\\[\n \\langle \\calE_{X_1}, \\calE_{X_2},\n \\set{(f_1 \\cross f_2)(F) \\suchthat F \\in \\calE_Y}\n \\rangle_{X_1 \\disjtunion X_2},\n\\]\nwhere $\\calE_{X_j} \\defeq \\calE_X \\subseteq \\powerset((X_j)^{\\cross 2})$ and \n$f_j \\defeq f \\from Y \\to X = X_j$, for $j = 1, 2$. As set maps, one may take \n$e_1$, $e_2$ to be the two inclusions.\n\nIf $X_{[f]} \\neq X$, then there exists $x_0 \\in X$ not connected to any $f(y)$, \n$y \\in Y$. The entourage $\\set{1_{x_0}} \\in \\calE_X$ then shows that $e_1$ is \nnot close to $e_2$, hence $[e_1] \\neq [e_2]$ while $[e_1] \\circ [f] = [e_2] \n\\circ [f]$ so $[f]$ is not epi. Similarly, if $E \\in \\calE_X \\setminus [f]_* \n\\calE_Y$, then one can show that $(e_1 \\cross e_2)(E)$ is not an entourage of \n$X \\copro_Y X$, hence again $[f]$ is not epi.\n\n(\\textimpliedby): It suffices to show that $|X_{[f]}|_{([f] |^{X_{[f]}})_* \n\\calE_Y} = X$ implies that $[e_1] = [e_2]$ in the push-out square considered \nabove. If $|X_{[f]}|_{([f] |^{X_{[f]}})_* \\calE_Y} = X$, then every entourage \nof $[f]_* \\calE_Y$ is a subset of an entourage of the form $E_1 \\circ f^{\\cross \n2}(F) \\circ E_2$ for $F \\in \\calE_Y$ and $E_1, E_2 \\in \\calE_X$. 
Thus if $[f]_* \n\\calE_Y = \\calE_X$, given $E \\in \\calE_X$ choose $F$, $E_1$, and $E_2$ so that \n$E \\subseteq E_1 \\circ f^{\\cross 2}(F) \\circ E_2$, and then\n\\[\n (e_1 \\cross e_2)(E) \\subseteq E_1 \\circ (f_1 \\cross f_2)(F) \\circ E_2\n\\]\n(where we now consider $E_j \\in \\calE_{X_j} = \\calE_X$ for $j = 1, 2$) is an \nentourage of $X \\copro_Y X$. Thus $e_1$ is close to $e_2$ as required.\n\\end{proof}\n\n\\begin{corollary}\nFor any $[f] \\from Y \\to X$, the canonical arrow\n\\[\n [\\tilde{f}] \\from Y \\to |X_{[f]}|_{([f] |^{X_{[f]}})_* \\calE_Y}\n\\]\nis epi in $\\CATCrs$.\n\\end{corollary}\n\n\\begin{corollary}\\label{cor:Crs-epi-crs-structs}\nSuppose $\\calE$, $\\calE'$ are coarse structures on a set $X$ with $\\calE' \n\\subseteq \\calE$. If every unital subspace of $|X|_{\\calE}$ is a unital \nsubspace of $|X|_{\\calE'}$, then the class $[q]$ of the ``identity'' map\n\\[\n q \\from |X|_{\\calE'} \\to |X|_{\\calE}\n\\]\nis epi in $\\CATCrs$.\n\\end{corollary}\n\n\\begin{proof}\nTrivially, $(|X|_\\calE)_{[q]} = |X|_\\calE$. We have that\n\\[\n [q]_* \\calE' = \\lAngle \\calE' \\rAngle_{|X|_{\\calE}}\n\\]\nis an ideal of $\\calE$; we must prove equality, so suppose $E \\in \\calE$. Then \n$1_{E \\cdot X}$ is in $\\calE$, so the left support $E \\cdot X$ is a unital subspace of \n$|X|_{\\calE}$, hence by hypothesis also a unital subspace of $|X|_{\\calE'}$; that is, \n$1_{E \\cdot X} \\in \\calE'$. Thus\n\\[\n E = 1_{E \\cdot X} \\circ E\n\\]\nis in $[q]_* \\calE'$, as required.\n\\end{proof}\n\n\\begin{definition}\\label{def:Crs-coimage}\nSuppose $[f] \\from Y \\to X$. Denote $\\OBJcoim [f] \\defeq |X_{[f]}|_{([f] \n|^{X_{[f]}})_* \\calE_Y}$ and $\\coim [f] \\defeq [\\tilde{f}] \\from Y \\to \\OBJcoim \n[f]$, where $[\\tilde{f}]$ is defined as above.\n\\end{definition}\n\n\\begin{theorem}\nFor any $[f] \\from Y \\to X$, the quotient object of $Y$ represented by the \narrow $\\coim [f] \\from Y \\surto \\OBJcoim [f]$ is the (categorical) coimage of \n$[f]$ in $\\CATCrs$.\n\\end{theorem}\n\n\\begin{proof}\nSuppose $[f]$ also factors as $Y \\nameto{\\smash{[h]}} Z \\nameto{\\smash{[g]}} X$ \nwhere $[h]$ is epi, so that $Z_{[h]} = Z$ and $\\calE_Z = [h]_* \\calE_Y$. We \nmust show that there is a unique $[\\bar{g}] \\from Z \\to \\OBJcoim [f]$ such that \n$\\coim [f] = [\\bar{g}] \\circ [h]$.\n\nPick representative coarse maps $g \\from Z \\to X$ and $h \\from Y \\to Z$. We may \nthen take $f \\defeq g \\circ h$ as a representative for $[f]$. Since $Z_{[h]} = \nZ$ and $[g] \\circ [h] = [f]$, it follows that $g$ has set image contained in \n$X_{[f]}$. Thus we may put $\\bar{g} \\defeq g |^{X_{[f]}}$ as a set map $Z \\to \nX_{[f]} = \\OBJcoim [f]$. $\\bar{g}$ is a coarse map: It is locally proper since \n$g = \\alpha \\circ \\bar{g}$ is locally proper. Since $\\calE_Z = [h]_* \\calE_Y$, \nevery entourage of $Z$ is contained in one of the form $G_1 \\circ h^{\\cross \n2}(F) \\circ G_2$, for $F \\in \\calE_Y$, $G_1, G_2 \\in \\calE_Z$. For such an \nentourage,\n\\[\n \\bar{g}^{\\cross 2}(G_1 \\circ h^{\\cross 2}(F) \\circ G_2)\n \\subseteq g^{\\cross 2}(G_1) \\circ (g \\circ h)^{\\cross 2}(F)\n \\circ g^{\\cross 2}(G_2)\n\\]\nis in $([f] |^{X_{[f]}})_* \\calE_Y$ since $g^{\\cross 2}(G_1), g^{\\cross 2}(G_2) \n\\in \\calE_X$ (and $g$ has set image in $X_{[f]}$) and $(g \\circ h)^{\\cross \n2}(F) = f^{\\cross 2}(F)$. Thus $\\bar{g}$ is coarse. From the uniqueness \nassertion of Proposition~\\ref{prop:Crs-factor-II} (or, since $\\tilde{f} = \n\\bar{g} \\circ h$), we get that $\\coim [f] = [\\bar{g}] \\circ [h]$. 
Uniqueness of \n$[\\bar{g}]$ follows immediately from the hypothesis that $[h]$ is epi.\n\\end{proof}\n\n\n\\subsection{Monic and epi arrows}\n\nI do not know if $\\CATCrs$ is a \\emph{balanced} category, i.e., whether every \narrow in $\\CATCrs$ which is both monic and epi is an isomorphism (the converse \nis always true, of course). To show that a monic and epi $[f] \\from Y \\to X$ is \nan isomorphism one must show that there is an inverse $[f]^{-1} \\from X \\to Y$. \nWhen $X$ is unital, this is fairly straightforward (see below), but I do not \nknow how to prove it when $X$ is not.\n\n\\begin{theorem}\\label{thm:Crs-unital-balanced}\nIf $[f] \\from Y \\to X$ is monic and epi in $\\CATCrs$ and $X$ is a unital coarse \nspace, then $[f]$ is an isomorphism in $\\CATCrs$.\n\\end{theorem}\n\n\\begin{proof}\nFix a representative coarse map $f \\from Y \\to X$. Since $[f]$ is epi, by \nProposition~\\ref{prop:Crs-epi}, $X_{[f]} = X$ and\n\\[\n [f]_* \\calE_Y \\defeq \\lAngle f_* \\calE_Y \\rAngle_X = \\calE_X.\n\\]\nThen every entourage $E_0 \\in \\calE_X$ is contained in one of the form $E_1 \n\\circ E_2 \\circ E_3$, where $E_1, E_3 \\in \\calE_X$ and $E_2 \\in f_* \\calE_Y$. \nEvery $E_2 \\in f_* \\calE_Y$ is contained in an entourage of the form\n\\[\n \\bigl(f^{\\cross 2}(F_2^1) \\circ \\dotsb \\circ f^{\\cross 2}(F_2^N)\\bigr)\n \\union \\bigunion_{j \\in J} (K_j \\cross K'_j),\n\\]\nwhere $F_2^1, \\dotsc, F_2^N \\in \\calE_Y$ (some $N \\geq 0$), $J$ is the set of \nconnected components of $X$, and $K_j$, $K'_j$ are finite subsets of $j$ for \neach $j \\in J$. Since $X_{[f]} = X$ (and $f^{\\cross 2}(F_2^k) \\in \\calE_X$ for \n$k = 2, \\dotsc, N$), it follows that every $E \\in \\calE_X$ is contained in a \nsome entourage\n\\[\n E_0 \\circ f^{\\cross 2}(F_0) \\circ E'_0,\n\\]\nwhere $E_0, E'_0 \\in \\calE_X$ and $F_0 \\in \\calE_Y$.\n\nWe specialize the above discussion to the case $E = 1_X$ which is in $\\calE_X$ \nby unitality. Fix $E_0, E'_0 \\in \\calE_X$ and $F_0 \\in \\calE_Y$, so that $1_X \n\\subseteq E_0 \\circ f^{\\cross 2}(F_0) \\circ E'_0$. Define a set map $e \\from X \n\\to Y$ as follows. For each $x \\in X$, there are $x', x'' \\in X$ and $y', y'' \n\\in Y$ such that $(x,x') \\in E_0$, $(x'',x) \\in E'_0$, $f(y') = x'$, $f(y'') = \nx''$, and $(y',y'') \\in F_0$; choosing such a $y'' \\in Y$ in particular, put \n$e(x) \\defeq y''$.\n\nWe must verify that (any) $e \\from X \\to Y$ as constructed above is a coarse \nmap. Local properness: $X$ is unital, so $e$ is locally proper if and only if \nit is proper. For any $y \\in Y$, $e^{-1}(\\set{y}) \\subseteq (E_0 \\circ \nf^{\\cross 2}(F_0)) \\cdot \\set{f(y)}$ is finite, since $E_0 \\circ f^{\\cross \n2}(F_0) \\in \\calE_X \\subseteq \\calE_{|X|_1}$ satisfies the properness axiom. \n$e$ preserves entourages: Fix $E \\in \\calE_X$ and put $F \\defeq e^{\\cross \n2}(E)$. Since $[f]$ is monic, by Proposition~\\ref{prop:Crs-monic},\n\\[\n \\calE_Y = [f]^* \\calE_X \\defeq \\calE_{\\Terminate(Y)} \\intersect f^* \\calE_X.\n\\]\nSince $e$ is (locally) proper, $F$ satisfies the properness axiom; since the \nimage of $e$ is contained in the unital subspace $Y \\cdot F_0$ of $Y$, it then \nfollows that $F \\in \\calE_{\\Terminate(Y)}$ and hence also that $f$ is locally \nproper for $F$. 
To show that $F \\in f^* \\calE_X$, it only remains to show that \n$f^{\\cross 2}(F) \\in \\calE_X$: Since\n\\[\n G_0 \\defeq (\\id_X \\cross (f \\circ e))(1_X)\n \\subseteq E_0 \\circ f^{\\cross 2}(F_0)\n\\]\nis in $\\calE_X$,\n\\[\n f^{\\cross 2}(F)\n = (f \\circ e)^{\\cross 2}(E)\n \\subseteq (G_0)^\\transpose \\circ E \\circ G_0\n\\]\nis also in $\\calE_X$.\n\nSince $G_0 \\in \\calE_X$, we also get that $f \\circ e$ is close to $\\id_X$, \ni.e., $[f \\circ e] = [f] \\circ [e]$ is the identity arrow $[\\id_X]$ of $X$ in \n$\\CATCrs$. Since $[f]$ is monic (and $[f] \\circ [e] \\circ [f] = [f] = [f] \\circ \n[\\id_Y]$), it also follows that $[e] \\circ [f] = [\\id_Y]$. Thus $[e] = \n[f]^{-1}$, as required.\n\\end{proof}\n\n\\begin{corollary}\nIf $[f] \\from Y \\to X$ is monic and epi in $\\CATCrs$ and $X$ is coarsely \nequivalent to a unital coarse space, then $[f]$ is an isomorphism in $\\CATCrs$.\n\\end{corollary}\n\nThe problem with the above Corollary, of course, is that I do not know when a \ncoarse space is coarsely equivalent to a unital one. If $\\iota \\from X' \\injto \nX$ is the inclusion of a subspace of $X$ into $X$, then $[\\iota]$ is monic (and \n$\\OBJim [\\iota] = X'$), so $\\coim [\\iota] \\from X' \\to \\OBJcoim [\\iota]$ is \nboth monic and epi. (If $X$ is connected and $X'$ nonempty, $\\OBJcoim [\\iota]$ \nis just the set $X$ equipped with the coarse structure of entourages in \n$\\calE_X$ ``supported near $X'$''.) However, I do not know when $\\coim [\\iota]$ \nis a coarse equivalence.\n\nMore generally, for any $[f] \\from Y \\to X$, the natural arrow $Y \\to \\OBJim \n[f]$ is epi (either use Proposition~\\ref{prop:Crs-epi}, or the fact that \n$\\CATCrs$ has equalizers and, e.g., \\cite{MR0202787}*{Ch.~I Prop.~10.1}) and \nhence there is a natural epi arrow $[\\gamma] \\from \\OBJim [f] \\to \\OBJcoim [f]$ \nthrough which $\\im [f] \\from \\OBJim [f] \\to X$ factors; as $\\im [f]$ is monic, \n$[\\gamma]$ must also be monic. (One may dually show that the natural arrow \n$\\OBJcoim [f] \\to X$ is monic, but this yields the same arrow $[\\gamma]$.) Of \ncourse, I do not know when $[\\gamma]$ is an isomorphism. But when it is an \nisomorphism, one can, in a coarsely invariant way, describe the image of $[f]$ \nas a subset of $X$ with a certain coarse structure. This would be an appealing \n``generalization'' of the following, which is not coarsely invariant in the \ndesired sense.\n\n\\begin{proposition}\\label{prop:Crs-subsp-images}\nIf $f \\from Y \\to X$ is a coarse map and $Y$ is unital, then $\\OBJim [f] = \nf(Y)$ (where $f(Y)$ is the subspace of $X$ determined by the set image of $f$) \nas subobjects of $X$ in $\\CATCrs$.\n\\end{proposition}\n\n\\begin{proof}\nIf $Y$ is unital, $X' \\defeq f(Y)$ is also unital. The range restriction $f \n|^{X'} \\from Y \\to X'$ is a coarse map, and $[f]^* \\calE_X = [f |^{X'}]^* \n\\calE_{X'}$, hence $\\OBJim [f] = \\OBJim [f |^{X'}]$. Using this equality, we get \n$\\im [f] = [\\iota] \\circ \\im [f |^{X'}]$, where $\\iota \\from X' \\injto X$ is \nthe inclusion. But it is easy to check that $\\calE_{X'} \\defeq \\calE_X |_{X'} = \n[f |^{X'}]_* \\calE_Y$, so $[f |^{X'}]$ is epi. Hence $\\im [f |^{X'}] \\from \n\\OBJim [f] = \\OBJim [f |^{X'}] \\to X'$ is both monic and epi, hence an \nisomorphism by Theorem~\\ref{thm:Crs-unital-balanced}.\n\\end{proof}\n\n\n\\subsection{Quotients of coarse spaces}\\label{subsect:Crs-quot}\n\nWe now discuss a notion of quotient coarse spaces in $\\CATCrs$. 
The quotient \nspaces below are not the most general possible; rather, they appear to be a \nspecial case of a more general notion (of quotients by \\emph{coarse equivalence \nrelations}). However, I have not fully explored the more general notion, and so \nI leave it to a future paper.\n\nSuppose $\\calC$ is a category with zero object $0$ (e.g., an abelian category), \ni.e., $0$ is both initial and terminal. Given an arrow $f \\from Y \\to X$ (often \ntaken to be monic) in $\\calC$, a standard way of defining the quotient, denoted \n$X\/f(Y)$, is as the push-out $X \\copro_Y 0$ (assuming it exists); i.e., \n$X\/f(Y)$ fits into a push-out square\n\\[\\begin{CD}\n Y @>f>> X \\\\\n @VVV @VVV \\\\\n 0 @>>> X\/f(Y)\n\\end{CD}\\quad.\\]\nThe quotient $X\/f(Y)$ comes equipped with an arrow $X \\to X\/f(Y)$ and, in the \nabove case, also an arrow $0 \\to X\/f(Y)$.\n\nIn an abelian category, $X\/f(Y)$ is by definition just the cokernel of $f$. If \n$\\calC = \\CATpt{\\CATSet}$ or $\\CATpt{\\CATTop}$ (pointed sets or spaces), then \none has $0 = \\ast$ (a one-point set\/space) and $X\/f(Y)$ is (isomorphic to) just \n$X$ with the image of $f$ collapsed to the base point. The situation in \n$\\CATSet$ or $\\CATTop$ is slightly more complicated: If $Y \\neq \\emptyset$, one \ncan again take the push-out $X\/f(Y) \\defeq X \\copro_Y \\ast$. However, if $Y = \n\\emptyset$, then $X\/f(Y) \\cong X$; one should instead take $X\/f(Y) \\defeq X \n\\copro_Y \\emptyset$. In other words, one takes $X\/f(Y) \\defeq X \\copro_Y \n\\tilde{Y}$, where $\\tilde{Y}$ universally terminates $Y$ (see \nExample~\\ref{ex:Set-Top-univ-term}). This is exactly what we do in the coarse \ncategories.\n\n\\begin{definition}\nSuppose $[f] \\from Y \\to X$ (in $\\CATCrs$). The \\emph{quotient coarse space} \n$X\/[f](Y)$ is the push-out $X \\copro_Y \\Terminate(Y)$ in $\\CATCrs$, i.e., \n$X\/[f](Y)$ fits into a push-out square\n\\[\\begin{CD}\n Y @>{[f]}>> X \\\\\n @V{[\\tau_Y]}VV @V{[q]}VV \\\\\n \\Terminate(Y) @>{[f]\/[f]}>> X\/[f](Y)\n\\end{CD}\\quad.\\]\nIf $Y \\subseteq X$ is a subspace, we will write $X\/[Y] \\defeq X\/[\\iota(Y)]$, \nwhere $\\iota \\from Y \\injto X$ is the inclusion.\n\\end{definition}\n\nThe justification for our notation is the following.\n\n\\begin{proposition}\nFor any $[f] \\from Y \\to X$, the quotient coarse space $X\/[f](Y)$ and the \nnatural map $X \\to X\/[f](Y)$ only depend on the image of $[f]$.\n\\end{proposition}\n\n\\begin{proof}\n$[f]$ factorizes canonically as\n\\[\n Y \\nameto{\\smash{[\\beta]}} [f](Y) \\nameto{\\smash{\\im [f]}} X\n\\]\n(Proposition~\\ref{prop:Crs-factor-I} and Definition~\\ref{def:Crs-image}). Thus \n$X\/[f](Y)$ is also the colimit of the diagram\n\\[\\begin{CD}\n Y @>{[\\beta]}>> [f](Y) @>{\\im [f]}>> X \\\\\n @V{[\\tau_Y]}VV @V{[\\tau_{[f](Y)}]}VV \\\\\n \\Terminate(Y) @>{[\\Terminate(\\beta)]}>> \\Terminate([f](Y)),\n\\end{CD}\\]\nand hence also of the cofinal subdiagram obtained by deleting $Y$ and \n$\\Terminate(Y)$.\n\\end{proof}\n\nThe coarse categories have all push-outs and we have seen how to describe them \nconcretely; the standard construction would take $X\/[f](Y)$ to be, as a set, \nthe disjoint union of $X$, $Y$, and $\\Terminate(Y)$. 
Taking a representative \ncoarse map $f \\from Y \\to X$, we have two ``smaller'' descriptions of the \nquotient:\n\\begin{enumerate}\n\\item Take $X\\/[f](Y) \\defeq X \\disjtunion \\Terminate(Y)$ as a set with the \n    coarse structure generated by $\\calE_X$, $\\calE_{\\Terminate(Y)}$, and \n    $\\set{(f \\cross \\tau_Y)(F) \\suchthat F \\in \\calE_Y}$, where we consider \n    $\\Terminate(Y)$ and $X$ as subsets of $X \\disjtunion \\Terminate(Y)$. (This \n    is a particular instance of a ``smaller'' construction of push-outs in \n    $\\CATCrs$.)\n\\item Take $X\\/[f](Y) \\defeq X$ as a set with the coarse structure generated by \n    $\\calE_X$ and $f_* \\calE_{\\Terminate(Y)}$, where we treat $f$ as a set map \n    $\\Terminate(Y) = Y \\to X$.\n\\end{enumerate}\n\nUsing the second description above and applying \nCorollary~\\ref{cor:Crs-epi-crs-structs} (the left and right supports of \nentourages in $f_* \\calE_{\\Terminate(Y)}$ are already unital subspaces of $X$), \nwe immediately get the following.\n\n\\begin{proposition}\nFor any $[f] \\from Y \\to X$, $X\\/[f](Y)$ is a quotient of $X$ in the categorical \nsense (i.e., the natural map $[q] \\from X \\to X\\/[f](Y)$ is epi).\n\\end{proposition}\n\n\n\\subsection{Restricted coarse categories}\\label{subsect:rest-Crs}\n\nThe lack of restriction on the size of coarse spaces (other than that imposed \nby the choice of universe) may be somewhat bothersome, and moreover prevents \n$\\CATCrs$ from having a terminal object. It is tempting to restrict the \ncardinality of coarse spaces, i.e., consider the full subcategory of $\\CATCrs$ \nof the coarse spaces of cardinality at most $\\kappa$, for some fixed, small \n(probably infinite) cardinal $\\kappa$. This is not the correct thing to do: \nFirst, one would no longer have all small limits and colimits (though as long \nas $\\kappa$ is infinite one would have all finite limits and colimits). Second, and \nmore importantly, it would bar constructions involving the set of (set) \nfunctions $Y \\to X$ ($\\#X, \\#Y \\leq \\kappa$) which will be important in \n\\cite{crscat-II}.\n\nA better way to proceed is to consider the full subcategory of $\\CATCrs$ of \ncoarse spaces $X$ for which there exists a coarse map $X \\to R$, where $R \n\\defeq \\Terminate(R_0)$ for some fixed $R_0$. (Of particular interest is the \ncase when $R_0$ is a unital coarse space of some infinite cardinality $\\kappa$, \nin which case $R = |R_0|_1$ only depends on $\\kappa$ up to coarse equivalence.)\n\nWe will first discuss this in full generality, using terminology from the \nbeginning of \\S\\ref{subsect:Crs-Term}. In the following, suppose $\\calC$ is \nsome category and that $\\tilde{X}$ is some object of $\\calC$ which terminates \nsome object (e.g., itself).\n\n\\begin{definition}\nThe \\emph{$\\tilde{X}$-restriction} $\\calC_{\\preceq \\tilde{X}}$ of $\\calC$ is \nthe full subcategory of $\\calC$ consisting of all the objects $Y$ in $\\calC$ \nsuch that there exists some (unique) arrow $Y \\to \\tilde{X}$.\n\\end{definition}\n\nIn other words, $\\calC_{\\preceq \\tilde{X}}$ consists of all objects which are \nterminated by $\\tilde{X}$. Equivalently, one may consider the comma category \n$(\\calC \\CATover \\tilde{X})$. It is easy to check that the range restricted \nprojection functor $(\\calC \\CATover \\tilde{X}) \\to \\calC_{\\preceq \\tilde{X}}$ \nis an isomorphism of categories.\n\nLet $I \\from \\calC_{\\preceq \\tilde{X}} \\to \\calC$ denote the inclusion functor. 
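(For instance, if $\\calC$ happens to have a terminal object $1$ and one takes \n$\\tilde{X} \\defeq 1$, then $\\calC_{\\preceq 1}$ is all of $\\calC$ and $I$ is the identity \nfunctor; the construction is only interesting when $\\tilde{X}$ is not terminal in $\\calC$.)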
\nWhen a \\emph{nonzero} limit of a diagram in $\\calC_{\\preceq \\tilde{X}}$ already exists in \n$\\calC$, the limits are the same. More precisely, we have the following.\n\n\\begin{proposition}\\label{prop:restricted-lim}\nSuppose $\\calF \\from \\calJ \\to \\calC_{\\preceq \\tilde{X}}$, where $\\calJ$ is \nnonempty. If the limit $\\pfx{\\calC}\\OBJlim (I \\circ \\calF)$ exists, then\n\\[\n \\pfx{\\calC_{\\preceq \\tilde{X}}}\\OBJlim \\calF\n = \\pfx{\\calC}\\OBJlim (I \\circ \\calF);\n\\]\ni.e., the limit of $\\calF$ in $\\calC_{\\preceq \\tilde{X}}$ exists and any \nlimiting cone in $\\calC$ gives a limiting cone in $\\calC_{\\preceq \\tilde{X}}$.\n\\end{proposition}\n\n\\begin{proof}\nThe nonemptiness of $\\calJ$ ensures that the object $\\pfx{\\calC}\\OBJlim (I \n\\circ \\calF)$ is in $\\calC_{\\preceq \\tilde{X}}$ (since it must map to some \nobject of $\\calC_{\\preceq \\tilde{X}}$, hence to $\\tilde{X}$). The rest follows \neasily, since the inclusion functor $I$ is fully faithful.\n\\end{proof}\n\nThe following is trivial.\n\n\\begin{proposition}\n$\\tilde{X}$ is a terminal object (i.e., zero limit) in $\\calC_{\\preceq \n\\tilde{X}}$.\n\\end{proposition}\n\nThus $\\calC_{\\preceq \\tilde{X}}$ has all the limits that $\\calC$ does (to the \nextent that this makes sense), but also has a terminal object, which $\\calC$ \nmay not have. However, $\\calC$ may have a terminal object which is not \nisomorphic to $\\tilde{X}$ (in which case $\\calC_{\\preceq \\tilde{X}}$ is a \nproper subcategory of $\\calC$), so the inclusion functor $I$ may not preserve \nlimits.\n\nThe dual result (Proposition~\\ref{prop:restricted-colim} below) is true without the \nnonemptiness criterion.\n\n\\begin{proposition}\\label{prop:restricted-colim}\nSuppose $\\calF \\from \\calJ \\to \\calC_{\\preceq \\tilde{X}}$. If the colimit \n$\\pfx{\\calC}\\OBJcolim (I \\circ \\calF)$ exists, then\n\\[\n \\pfx{\\calC_{\\preceq \\tilde{X}}}\\OBJcolim \\calF\n = \\pfx{\\calC}\\OBJcolim (I \\circ \\calF).\n\\]\n\\end{proposition}\n\n\\begin{proof}\nIf $\\pfx{\\calC}\\OBJcolim (I \\circ \\calF)$ exists, then it maps to $\\tilde{X}$ \nsince there is a (unique) cone $\\calJ \\to \\tilde{X}$; thus the colimiting cone \nis actually in $\\calC_{\\preceq \\tilde{X}}$ and is universal since $I$ is fully \nfaithful.\n\\end{proof}\n\nNow, we return to our coarse context. Suppose $R \\defeq \\Terminate(R_0)$ for \nsome coarse space $R_0$. The \\emph{$R$-restricted coarse category} \n$\\CATCrs_{\\preceq R}$ is, as the notation indicates, the $R$-restriction of \n$\\CATCrs$. We similarly get $R$-restricted connected and connected, nonempty \ncoarse categories. We refer to the above categories collectively (i.e., for all \n$R$ and the various cases) as the \\emph{restricted coarse categories}.\n\n\\begin{theorem}\nThe restricted coarse categories have all (small) limits and colimits.\n\\end{theorem}\n\n\\begin{proof}\nThis follows immediately from Theorems \\ref{thm:Crs-lim} \nand~\\ref{thm:Crs-colim}, and Propositions \\ref{prop:restricted-lim} \nand~\\ref{prop:restricted-colim}.\n\\end{proof}\n\nOne can also check that all the earlier facts on monics and images, epis and \ncoimages, quotients, etc. 
hold in the restricted coarse categories.\n\n\n\n\n\\section{Topology and coarse spaces}\\label{sect:top-crs}\n\nOur coarse spaces are discrete, as opposed to the more standard definition of \n\\emph{proper coarse spaces} which allows coarse spaces to carry topologies and \nthus has different properness requirements (see the works of Roe, e.g., \n\\cite{MR2007488}*{Def.~2.22}). Our aim here is not to provide a general \ndiscussion of topological coarse spaces but to provide a means from going from \nRoe's \\emph{proper coarse spaces} to our (discrete) coarse spaces.\n\nWe will use the terms \\emph{compact} and \\emph{locally compact} in the sense of \nBourbaki \\cite{MR1712872}*{Ch.~I \\S{}9}, including the Hausdorff condition; in \nfact, all spaces will be Hausdorff unless otherwise stated. Throughout, $X$ and \n$Y$ will denote paracompact, locally compact topological spaces. Recall that a \nsubset $K$ of a space $X$ is \\emph{relatively compact} if it is contained in \nsome compact subspace of $X$. (If $X$ is Hausdorff, $K$ is relatively compact \nif and only if $\\overline{K}$ is compact.)\n\n\n\\subsection{Roe coarse spaces}\\label{subsect:Roe-crs-sp}\n\nWe will diverge from the standard terminology to avoid confusion with our \npreviously defined terms. \\emph{Roe coarse spaces} will be what are usually \ncalled proper coarse spaces. Let us recall these definitions (compare \nDefinitions \\ref{def:prop-ax} and~\\ref{def:crs-sp}).\n\n\\begin{definition}[see, e.g., \\cite{MR2007488}*{Def.~2.1}]%\n \\label{def:Roe-prop-ax}\nA subset $E \\subseteq X^{\\cross 2}$ satisfies the \\emph{Roe properness axiom} \nif $E \\cdot K$ and $K \\cdot E$ are relatively compact subsets of $X$ for all \n(relatively) compact $K \\subseteq X$.\n\\end{definition}\n\n\\begin{definition}[see, e.g., \\cite{MR2007488}*{Def.~2.22}]%\n \\label{def:Roe-crs-sp}\nA \\emph{Roe coarse structure} on $X$ is a subset $\\calR_X \\subseteq\n\\powerset(X^{\\cross 2})$ such that:\n\\begin{enumerate}\n\\item each $E \\in \\calR_X$ satisfies the Roe properness axiom;\n\\item $\\calR_X$ is closed under the operations of addition, multiplication, \n transpose, and the taking of subsets;\n\\item\\label{def:Roe-crs-sp:III} if $K \\subseteq X$ is \\emph{bounded} in the \n sense that $K^{\\cross 2} \\in \\calR_X$, then $K$ is relatively compact; and\n\\item\\label{def:Roe-crs-sp:IV} there is a neighbourhood (with respect to the \n product topology on $X^{\\cross 2}$ of the unit (i.e., diagonal) $1_X$ which \n is in $\\calR_X$.\n\\end{enumerate}\nA \\emph{Roe coarse space} is a paracompact, locally compact space $X$ equipped \nwith a Roe coarse structure $\\calR_X$ on $X$.\n\\end{definition}\n\n\\enumref{def:Roe-crs-sp:IV} implies Roe coarse spaces are always unital (in the \nobvious sense; see Definition~\\ref{def:uni-conn}) and that any Roe coarse space \n$X$ has an open cover $\\calU \\subseteq \\powerset(X)$ which is \\emph{uniformly \nbounded} in the sense that $\\bigunion_{U \\in \\calU} U^{\\cross 2}$ is in \n$\\calR_X$. Paracompactness implies that this cover can be taken to be locally \nfinite. The local compactness requirement is redundant, since it is implied by \n\\enumref{def:Roe-crs-sp:III} and \\enumref{def:Roe-crs-sp:IV}.\n\n\\begin{definition}\\label{def:top-prop}\nA continuous map $f \\from Y \\to X$ between locally compact spaces is \n\\emph{topologically proper} if $f^{-1}(K)$ is compact for every compact $K \n\\subseteq X$. 
More generally, also say that a (not necessarily continuous) map \n$f \\from Y \\to X$ between locally compact spaces is \\emph{topologically proper} \nif $f^{-1}(K)$ is relatively compact for every relatively compact $K \\subseteq \nX$.\n\\end{definition}\n\n\\begin{definition}[see, e.g., \\cite{MR2007488}*{Def. 2.21 and~2.14}]\nA (not necessarily continuous) map $f \\from Y \\to X$ between Roe coarse spaces \nis a \\emph{Roe coarse map} if it is topologically proper and \\emph{preserves \nentourages} in the sense that $f^{\\cross 2}(F) \\in \\calR_X$ for all $F \\in \n\\calR_Y$. (Roe coarse maps are usually called \\emph{proper coarse maps}.) Roe \ncoarse maps $f, f' \\from Y \\to X$ are \\emph{close} if $(f \\cross f')(1_Y) \\in \n\\calR_X$ (or equivalently if $(f \\cross f')(F) \\in \\calR_X$ for all $F \\in \n\\calR_Y$).\n\\end{definition}\n\nWe get an obvious \\emph{Roe precoarse category} $\\CATRoePCrs$ with objects all \n(small) Roe coarse spaces and arrows Roe coarse maps, and a quotient \\emph{Roe \ncoarse category} $\\CATRoeCrs$ with the same objects but whose arrows are \ncloseness classes of Roe coarse maps. \\emph{Roe coarse equivalences} are Roe \ncoarse maps which represent isomorphisms in $\\CATRoeCrs$.\n\n\n\\subsection{Discretization of Roe coarse spaces}\\label{subsect:Disc}\n\nWe now provide a way of passing from Roe coarse spaces to our (discrete) coarse \nspaces.\n\n\\begin{definition}\nA set $E \\in \\powerset(X^{\\cross 2})$ satisfies the \\emph{topological \nproperness axiom} (with respect to the topology of $X$) if, for all compact \nsubspaces $K \\subseteq X$, $(\\pi_1 |_E)^{-1}(K)$ and $(\\pi_2 |_E)^{-1}(K)$ are \nfinite.\n\\end{definition}\n\nSince all our spaces are Hausdorff hence $\\text{T}_{\\text{1}}$, the topological \nproperness axiom implies the (discrete) properness axiom \n(Definition~\\ref{def:prop-ax}).\n\nThe following is easy to check.\n\n\\begin{proposition}\nA set $E \\in \\powerset(X^{\\cross 2})$ satisfies the topological properness \naxiom if and only if $E$ is a (closed) discrete subset of $X^{\\cross 2}$ and \nthe restricted projections $\\pi_1 |_E, \\pi_2 |_E \\from E \\to X$ are \ntopologically proper maps.\n\\end{proposition}\n\n\\begin{remark}\\label{rmk:top-crs-sp}\nWe provide only a means from passing from Roe coarse spaces to our coarse \nspaces and not a complete discussion of ``topological coarse spaces'' since the \ntopological properness axiom does not encompass the axioms of \nDefinition~\\ref{def:Roe-crs-sp} (\\enumref{def:Roe-crs-sp:IV} in particular). We \nwould like not just a direct translation of Roe's definition to our setting, \nbut a proper generalization: First, we would like to allow nonunital \ntopological coarse spaces. Second, we do not want to impose local compactness \nfor two (possibly related) reasons:\n\\begin{inparaenum}\n\\item The ``topological coarse category'' should have all colimits (including \n infinite ones). In particular, we are interested in ``large'' simplicial \n complexes which may not be locally finite.\n\\item We wish to be able to analyze Hilbert space and other Banach spaces \n directly as coarse spaces. 
This seems especially relevant as methods \n involving uniform (i.e., coarse) embeddings into such spaces have gained \n prominence in recent years (e.g., in \\cite{MR1728880}, Yu shows that the \n Coarse Baum--Connes Conjecture is true for metric spaces of bounded \n geometry which uniformly embed in Hilbert space).\n\\end{inparaenum}\n\nInstead of requiring that spaces be paracompact and locally compact, we \nshould probably require that spaces be \\emph{compactly generated} (i.e., be \nweak Hausdorff $k$-spaces). The topological properness axiom makes sense for \nsuch spaces (weak Hausdorffness still implies the $\\text{T}_{\\text{1}}$ \ncondition), but the problem of translating axioms \\enumref{def:Roe-crs-sp:III} \nand \\enumref{def:Roe-crs-sp:IV} becomes more complicated. Moreover, in the \ncompactly generated case, there are different, inequivalent definitions for \n``topological properness'' (whereas they all agree in the locally compact case; \nsee, e.g., \\cite{MR1712872}*{Ch.~I \\S{}10}), though perhaps one could still use \nDefinition~\\ref{def:top-prop} verbatim. In that case, the above Proposition \nremains true so long as $X^{\\cross 2}$ is given the categorically appropriate \ntopology, namely the $k$-ification of the standard product topology. We leave \nthese problems to a future paper \\cite{crscat-III}.\n\\end{remark}\n\nCompare the following, which is easy, with Proposition~\\ref{prop:prop-ax-alg}.\n\n\\begin{proposition}\nIf $E, E' \\in \\powerset(X^{\\cross 2})$ satisfy the topological properness \naxiom, then $E + E'$, $E \\circ E'$, $E^\\transpose$, and all subsets of $E$ \nsatisfy the topological properness axiom. Also, all singletons $\\set{e}$, $e \n\\in X^{\\cross 2}$, and hence all finite subsets of $X^{\\cross 2}$ satisfy the \nproperness axiom. Consequently,\n\\[\n \\calE_{|X|_\\tau}\n \\defeq \\set{E \\in \\calE_{|X|_1} \\subseteq \\powerset(X^{\\cross 2})\n \\suchthat \\text{$E$ satisfies the topological properness axiom}}\n\\]\nis a coarse structure on the set $X$ (in the sense of \nDefinition~\\ref{def:crs-sp}).\n\\end{proposition}\n\n\\begin{definition}\\label{def:Disc-Roe}\nThe \\emph{discretization} of a Roe coarse space $X$ is the coarse space \n$\\Disc(X) \\defeq X$ as a set with the coarse structure\n\\[\n \\calE_{\\Disc(X)} \\defeq \\calR_X \\intersect \\calE_{|X|_\\tau}\n\\]\n(consisting of all elements of $\\calR_X$ which satisfy the topological \nproperness axiom).\n\\end{definition}\n\nIt is easy to check that $\\calE_{\\Disc(X)}$ is in fact a coarse structure on \nthe set $X$. Unless $X$ is discrete, the coarse space $\\Disc(X)$ is not unital, \neven though the Roe coarse space $X$ is.\n\n\\begin{proposition}\nIf $f \\from Y \\to X$ is a Roe coarse map, then the set map $\\Disc(f) \\defeq f$ \nis coarse as a map $\\Disc(Y) \\to \\Disc(X)$.\n\\end{proposition}\n\n\\begin{proof}\nThe only thing to check is that if $f$ (not necessarily continuous) is \ntopologically proper and $F \\subseteq Y^{\\cross 2}$ satisfies the topological \nproperness axiom, then $E \\defeq f^{\\cross 2}(F) \\subseteq X^{\\cross 2}$ also \nsatisfies the topological properness axiom. 
This follows since\n\\[\n E \\cdot K \\subseteq f(F \\cdot f^{-1}(K))\n\\]\nand $f$ is topologically proper (and similarly symmetrically).\n\\end{proof}\n\nSince, trivially, $\\Disc(f \\circ g) = \\Disc(f) \\circ \\Disc(g)$, we get the \nfollowing.\n\n\\begin{corollary}\n$\\Disc$ is a functor from the Roe precoarse category $\\CATRoePCrs$ to the \nprecoarse category $\\CATPCrs$.\n\\end{corollary}\n\n$\\Disc$ is coarsely invariant in the following way, which yields a canonical \nfunctor $[\\Disc] \\from \\CATRoeCrs \\to \\CATCrs$ between the closeness quotients. \n(We continue to write $\\Disc(X)$ instead of $[\\Disc](X)$ for Roe coarse \nspaces.)\n\n\\begin{proposition}\nIf Roe coarse maps $f, f' \\from Y \\to X$ are close, then\n\\[\n \\Disc(f), \\Disc(f') \\from \\Disc(Y) \\to \\Disc(X)\n\\]\nare close coarse maps.\n\\end{proposition}\n\n\\begin{proof}\nThe result follows easily from the following fact (which is also easy): If $f, \nf' \\from Y \\to X$ are topologically proper and $F \\subseteq Y^{\\cross 2}$ \nsatisfies the topological properness axiom, then $(f \\cross f')(F) \\subseteq \nX^{\\cross 2}$ also satisfies the topological properness axiom.\n\\end{proof}\n\n\\begin{corollary}\nIf $f \\from Y \\to X$ is a Roe coarse equivalence, then $\\Disc(f) \\from \\Disc(Y) \n\\to \\Disc(X)$ is a coarse equivalence.\n\\end{corollary}\n\n\n\\subsection{Properties of the discretization functors}\n\nLet $\\CATDiscRoePCrs \\subseteq \\CATRoePCrs$ and $\\CATDiscRoeCrs \\subseteq \n\\CATRoeCrs$ be the full subcategories of \\emph{discrete} Roe coarse spaces \n(call them the \\emph{discrete Roe precoarse} and \\emph{coarse categories}, \nrespectively). On the discrete subcategories, $\\Disc$ and $[\\Disc]$ are fully \nfaithful.\n\n\\begin{proposition}\\label{prop:DiscRoePCrs-Disc-fullfaith}\nIf $X$, $Y$ are Roe coarse spaces with $Y$ discrete, then the map\n\\[\n \\Disc_{Y,X} \\from \\Hom_{\\CATRoePCrs}(Y,X)\n \\to \\Hom_{\\CATPCrs}(\\Disc(Y),\\Disc(X))\n\\]\nis a bijection. Hence, in particular, the restriction of $\\Disc$ to \n$\\CATDiscRoePCrs$ (which actually maps into $\\CATUniPCrs$) is a fully faithful \nfunctor.\n\\end{proposition}\n\n\\begin{proof}\n$\\Disc_{Y,X}$ is trivially injective, so it only remains to show surjectivity. \nSuppose $f \\from \\Disc(Y) \\to \\Disc(X)$ is a coarse map. If $K \\subseteq X$ is \nrelatively compact, then\n\\[\n f^{-1}(K) = f^{-1}(f^{\\cross 2}(1_Y) \\cdot K)\n\\]\nis finite: since $Y$ is discrete, $\\Disc(Y)$ is unital so $f^{\\cross 2}(1_Y) \n\\in \\calE_{\\Disc(X)}$ satisfies the topological properness axiom (so $f^{\\cross \n2}(1_Y) \\cdot K$ is finite) and $f$ is (discretely) globally proper. Thus $f$ \nis topologically proper. Since $Y$ is discrete, $\\calE_{\\Disc(Y)} = \\calR_Y$, \nso $f$ preserves entourages of $\\calR_Y$ (of course, $\\calE_{\\Disc(X)} \n\\subseteq \\calR_X$). Thus $f$ is Roe coarse as a map $Y \\to X$.\n\\end{proof}\n\nThe unrestricted functor $\\Disc \\from \\CATRoePCrs \\to \\CATPCrs$ is \\emph{not} \nfull.\n\n\\begin{example}\\label{ex:RoePCrs-Disc-notfull}\nLet $X \\defeq \\setRplus$ equipped with the Euclidean metric Roe coarse \nstructure (see \\S\\ref{subsect:prop-met}), and $Y \\defeq \\setRplus \\union \n\\set{\\infty}$ be the one-point compactification of $\\setRplus$ equipped with \nthe unique Roe coarse structure $\\calR_Y \\defeq \\powerset(Y^{\\cross 2})$ (which \nis also the metric Roe coarse structure for any metric which metrizes $Y$ \ntopologically). 
Define $f \\from Y \\to X$ by\n\\[\n f(t) \\defeq \\begin{cases}\n t & \\text{if $t \\in \\setRplus$, and} \\\\\n 0 & \\text{if $t = \\infty$.}\n \\end{cases}\n\\]\nThen $f$ is actually coarse as a map $\\Disc(Y) \\to \\Disc(X)$. However, clearly \n$f$ does not preserve entourages of $\\calR_Y$, hence does \\emph{not} define a \nRoe coarse map $Y \\to X$. As a map $\\Disc(Y) \\to \\Disc(X)$, $f$ is close to any \nconstant map $\\Disc(Y) \\to \\Disc(X)$ (sending all of $Y$ to some fixed element \nof $X$); every such constant map \\emph{does} define a Roe coarse map $Y \\to X$.\n\\end{example}\n\n\\begin{proposition}\\label{prop:DiscRoeCrs-Disc-fullfaith}\nIf $X$, $Y$ are Roe coarse spaces with $Y$ discrete, then the map\n\\[\n [\\Disc]_{Y,X} \\from \\Hom_{\\CATRoeCrs}(Y,X)\n \\to \\Hom_{\\CATCrs}(\\Disc(Y),\\Disc(X))\n\\]\nis a bijection. Hence the restriction of $[\\Disc]$ to $\\CATDiscRoeCrs$ (which \nactually maps into $\\CATUniCrs$) is fully faithful.\n\\end{proposition}\n\n\\begin{proof}\nBy the previous Proposition, $[\\Disc]_{Y,X}$ is surjective, so it only remains \nto show injectivity. Suppose $f, f' \\from Y \\to X$ are Roe coarse maps. If \n$\\Disc(f)$ is close to $\\Disc(f')$, then since $\\Disc(Y)$ is unital,\n\\[\n (f \\cross f')(1_Y) = (\\Disc(f) \\cross \\Disc(f'))(1_Y)\n \\in \\calE_{\\Disc(X)} \\subseteq \\calR_X,\n\\]\nso $f$ is close to $f'$, as required.\n\\end{proof}\n\nIf $X' \\subseteq X$ is a \\emph{closed} subspace of a Roe coarse space, then the \nobvious \\emph{Roe subspace coarse structure} $\\calR_{X'} \\defeq \\calR_X |_{X'} \n\\defeq \\calR_X \\intersect \\powerset((X')^{\\cross 2})$ is actually Roe coarse \nstructure on $X'$ (this is not the case if $X'$ is not closed), which makes \n$X'$ into a \\emph{Roe coarse subspace} of $X$. The inclusion of any Roe coarse \nsubspace into the ambient space is a Roe coarse map. The following result is \nwell known.\n\n\\begin{proposition}\\label{prop:Roe-disc-subsp}\nFor any Roe coarse space $X$, there is a (closed) discrete Roe coarse subspace \n$X' \\subseteq X$ such that the inclusion $\\iota \\from X' \\to X$ is a Roe coarse \nequivalence.\n\\end{proposition}\n\n\\begin{proof}\nFix a locally finite, uniformly bounded open cover $\\calU$ of $X$ by nonempty \nsets. For each $U \\in \\calU$, pick a point $x'_U \\in U$ and put $X' \\defeq \n\\set{x'_U \\suchthat U \\in \\calU}$. Since $\\calU$ is locally finite, it is easy \nto check that $X'$ is closed and discrete.\n\nInvoking the Axiom of Choice, fix a map $\\kappa \\from X \\to X'$ such that, for \nall $x \\in X$, $\\kappa(x) \\in U$ for some $U \\in \\calU$ such that $x \\in U$. We \nmay also ensure that $\\kappa(x') = x'$ for all $x' \\in X'$. $\\kappa$ is \ntopologically proper: For any $x' \\in X'$,\n\\[\n \\kappa^{-1}(\\set{x'})\n \\subseteq \\bigunion_{\\substack{U \\in \\calU \\suchthat \\\\ x' \\in U}} U\n\\]\nwhich is a finite union of relatively compact sets, hence \n$\\kappa^{-1}(\\set{x'})$ is relatively compact (this suffices to show \ntopological properness since $X'$ is discrete). $\\kappa$ preserves entourages \nof $X$: Put\n\\[\n E_\\calU \\defeq \\bigunion_{U \\in \\calU} U^{\\cross 2} \\in \\calR_X;\n\\]\nfor any $E \\in \\calR_X$,\n\\[\n \\kappa^{\\cross 2}(E) \\subseteq E_\\calU \\circ E \\circ E_\\calU \\in \\calR_X,\n\\]\nhence $\\kappa^{\\cross 2}(E) \\in \\calR_X |_{X'}$, as required. Thus $\\kappa$ is \na Roe coarse map.\n\nTrivially, $\\kappa \\circ \\iota = \\id_{X'}$. 
Finally, $\\iota \\circ \\kappa$ is \nclose to $\\id_X$: Letting $E_\\calU$ be as above, we have\n\\[\n (\\kappa \\cross \\id_X)(1_X) \\subseteq E_\\calU \\in \\calR_X,\n\\]\nas required.\n\\end{proof}\n\n\\begin{remark}\nThough we do not so insist, Roe coarse maps are sometimes required to be Borel \n(see, e.g., \\cite{MR1451755}*{Def.~2.2}). In that case, the map $\\kappa$ used \nin the above proof may not suffice. However, if one insists that all Roe coarse \nspaces be, e.g., second countable, then one can construct $\\kappa$ to be Borel. \nThus, as long as one so constrains the allowable Roe coarse spaces, the above \nProposition remains true.\n\\end{remark}\n\n\\begin{corollary}\nThe inclusion functor $\\CATDiscRoeCrs \\injto \\CATRoeCrs$ is fully faithful and \nin fact an equivalence of categories.\n\\end{corollary}\n\n\\begin{theorem}\\label{thm:RoeCrs-Disc-fullfaith}\nThe functor $[\\Disc] \\from \\CATRoeCrs \\to \\CATCrs$ is fully faithful.\n\\end{theorem}\n\n\\begin{proof}\nThis is immediate upon combining Propositions \n\\ref{prop:DiscRoeCrs-Disc-fullfaith} and~\\ref{prop:Roe-disc-subsp}.\n\\end{proof}\n\nEvery unital coarse space (in our sense) becomes a Roe coarse space when it is \ngiven the discrete topology, with coarse maps between unital coarse spaces \nbecoming Roe coarse maps. Thus $\\CATUniPCrs$ and $\\CATDiscRoePCrs$ are \nisomorphic as categories, and hence so too are $\\CATUniCrs$ and \n$\\CATDiscRoeCrs$.\n\n\\begin{corollary}\\label{cor:UniCrs-RoeCrs-equiv}\nOur unital coarse category $\\CATUniCrs$ is equivalent to the Roe coarse \ncategory $\\CATRoeCrs$, with the functor which sends a unital coarse space to \nthe ``identical'' discrete Roe coarse space an equivalence of categories.\n\\end{corollary}\n\n\n\n\n\\section{Examples and applications}\\label{sect:ex-appl}\n\nAs stated in the Introduction, we will not discuss even the standard \napplications of coarse geometry. We will first discuss a couple of basic \nexamples which we will need later, namely proper metric spaces and continuous \ncontrol, and then briefly examine a few things which arise from the categorical \npoint of view (some of which are not obviously possible in standard, unital \ncoarse geometry).\n\n\n\\subsection{Proper metric spaces}\\label{subsect:prop-met}\n\nSuppose that $(X,d) \\defeq (X,d_X)$ is a proper metric space (i.e., its closed \nballs are compact). We wish to produce a coarse space from $X$; we have already \ndiscussed the discrete case in Example~\\ref{ex:disc-met}, and what follows is a \ngeneralization of that.\n\nThere is a well known way to produce a Roe coarse space $|X|_d^\\TXTRoe$ from \n$(X,d)$ (noting that properness implies local compactness, and metrizability \nimplies paracompactness), taking the Roe coarse structure to be consist of the \n$E \\subseteq X^{\\cross 2}$ satisfying inequality \\eqref{ex:disc-met:eq} of \nExample~\\ref{ex:disc-met} (see, e.g., \\cite{MR2007488}*{Ex.~2.5}). One can then \napply the discretization functor $\\Disc$ to this Roe coarse space to obtain the \n\\emph{($d$-)metric coarse space} $|X| \\defeq |X|_d$. More directly and entirely \nequivalently, $|X|_d$ has as entourages the $E \\in \\calE_{|X|_\\tau}$ (i.e., the \n$E$ satisfying the topological properness axiom) which also satisfy the same \ninequality \\eqref{ex:disc-met:eq}. As in the discrete case, we may also allow \n$d(x,x') = \\infty$ (for $x \\neq x'$), and $|X|_d$ is connected if and only if \n$d(x,x') < \\infty$ for all $x$ and $x'$. 
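For example (simply unwinding the definitions), take $\\setRplus$ with the Euclidean metric: a \nsubset $E \\subseteq \\setRplus^{\\cross 2}$ is an entourage of $|\\setRplus|_d$ exactly when it \nsatisfies the topological properness axiom and inequality \\eqref{ex:disc-met:eq}, i.e., roughly, \nwhen it is a proper, discrete set of pairs lying within uniformly bounded distance of the \ndiagonal. Thus, e.g., $\\set{(n, n+1) \\suchthat n \\in \\setN}$ is an entourage of $|\\setRplus|_d$, \nwhereas the full diagonal $1_{\\setRplus}$ is not (it fails the topological properness axiom), \nagain illustrating that the discretization of a nondiscrete Roe coarse space is nonunital. 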
If $X' \\subseteq X$ is a closed \n(topological) subspace, then the restriction of $d$ to $X'$ makes $X'$ into a \nproper metric space; the subspace coarse structure on $X'$ is the same as the \ncoarse structure coming from the restricted metric.\n\nSuppose $(Y,d_Y)$ is another proper metric space. A (not necessarily \ncontinuous) map $f \\from Y \\to X$ is Roe coarse as a map $|Y|_{d_Y}^\\TXTRoe \\to \n|X|_{d_X}^\\TXTRoe$ if and only if it is topologically proper and\n\\begin{equation}\\label{subsect:prop-met:Roe-crs:eq}\n \\sup \\set{d_X(f(y),f(y'))\n \\suchthat \\text{$y, y' \\in Y$ and $d_Y(y,y') \\leq r$}} < \\infty\n\\end{equation}\nfor every $r \\geq 0$. Since $X$, $Y$ are proper metric spaces, $f$ is \ntopologically proper if and only if it is \\emph{metrically proper} in the sense \nthat inverse images of metrically bounded subsets of $X$ are metrically bounded \nin $Y$. Roe coarse maps $f, f' \\from |Y|_{d_Y}^\\TXTRoe \\to |X|_{d_X}^\\TXTRoe$ \nare close if and only if\n\\begin{equation}\\label{subsect:prop-met:Roe-close:eq}\n \\sup \\set{d_X(f(y),f'(y)) \\suchthat y \\in Y} < \\infty.\n\\end{equation}\n\nWe must warn that there may be a map $f \\from Y \\to X$ which is coarse (in our \nsense) as a map $|Y|_{d_Y} \\to |X|_{d_X}$, yet does not satisfy \n\\eqref{subsect:prop-met:Roe-crs:eq}. Similarly, there may be coarse maps $f, f' \n\\from |Y|_{d_Y} \\to |X|_{d_X}$ which are close but do not satisfy \n\\eqref{subsect:prop-met:Roe-close:eq}. Example~\\ref{ex:RoePCrs-Disc-notfull}, \nwhich shows that $\\Disc$ is not full, exhibits both phenomena. In the former \ncase, Theorem~\\ref{thm:RoeCrs-Disc-fullfaith} shows that every coarse map $f' \n\\from |Y|_{d_Y} \\to |X|_{d_X}$ is close to some coarse map $f \\from |Y|_{d_Y} \n\\to |X|_{d_X}$ which satisfies \\ref{subsect:prop-met:Roe-crs:eq}. (The \ncorresponding statement in the latter case is trivial.) Alternatively, one may \navoid both ``problems'' by considering only discrete, proper metric spaces \n(Proposition~\\ref{prop:DiscRoePCrs-Disc-fullfaith}); every proper metric space \nis Roe coarsely equivalent to a discrete one \n(Proposition~\\ref{prop:Roe-disc-subsp}).\n\n\\begin{remark}[see, e.g., \\cite{MR2007488}*{\\S{}1.3}]\\label{rmk:lsLip-qisom}\nIf $X$ and $Y$ are proper \\emph{length spaces}, then one can characterize the \nRoe coarse maps, and indeed Roe coarse equivalences, $Y \\to X$ a bit more \nstrictly: A map $f \\from Y \\to X$ (not necessarily continuous) is Roe coarse if \nand only if it is (metrically\/topologically) proper and \\emph{large-scale \nLipschitz} in the sense that there exist constants $C > 0$ and $R \\geq 0$ such \nthat\n\\[\n d_X(f(y),f(y')) \\leq C d_Y(y,y') + R\n\\]\nfor all $y, y' \\in Y$. $f$ is a Roe coarse equivalence if and only if it is a \n\\emph{quasi-isometry} in that there are constants $c, C > 0$ and $r, R \\geq 0$ \nsuch that\n\\[\n c d_Y(y,y') - r \\leq d_X(f(y),f(y')) \\leq C d_Y(y,y') + R\n\\]\nfor all $y, y' \\in Y$ (evidently, one can always take $c = 1\/C$ and $r = R$, as \nis conventional) and there is a constant $D \\geq 0$ such that every point of \n$X$ is within distance $D$ of a point in the image of $f$.\n\nOne can replace the length space hypothesis with a weaker condition, but some \nhypothesis is necessary; for general metric spaces there are Roe coarse maps, \nand indeed Roe coarse equivalences, which are not large-scale Lipschitz. 
\nHowever, every proper large-scale Lipschitz map is evidently also Roe coarse, \nand every quasi-isometry is a coarse equivalence.\n\\end{remark}\n\n\n\\subsection{Continuous control}\\label{subsect:cts-ctl}\n\nMost of the following originates from \\cites{MR1277522, MR1451755}, but see \nalso, e.g., \\cite{MR2007488}*{\\S{}2.2}. In the following, all topological \nspaces will be assumed to be second countable and locally compact (and \nHausdorff), whence paracompact. $X$ and $Y$ will always denote such spaces.\n\n\\begin{definition}\nA \\emph{compactified space} is a (second countable, locally compact) \ntopological space $X$ equipped with a (second countable) compactification \n$\\overline{X}$; its \\emph{boundary} is the space $\\die X \\defeq \\overline{X} \n\\setminus X$.\n\\end{definition}\n\nThe \\emph{continuously controlled Roe coarse structure} $\\calR_{|X|_{\\die \nX}^\\TXTRoe}$ on $X$ (for the compactification $\\overline{X}$, or for the \nboundary $\\die X$) consists of the $E \\subseteq X^{\\cross 2}$ such that\n\\begin{equation}\\label{subsect:cts-ctl:eq}\n \\overline{E} \\subseteq X^{\\cross 2} \\union 1_{\\die X}\n \\subseteq \\overline{X}^{\\cross 2},\n\\end{equation}\nwhere $1_{\\die X}$ is the diagonal subset of $(\\die X)^{\\cross 2}$ and the \nclosure is taken in $\\overline{X}^{\\cross 2}$ (for the proof that this is a Roe coarse \nstructure, see, e.g., \\cite{MR2007488}*{Thm.~2.27}). The associated coarse \nspace (resulting from applying the discretization functor $\\Disc$ to the above \nRoe coarse space $|X|_{\\die X}^\\TXTRoe$) is the \\emph{continuously controlled \ncoarse space} $|X|_{\\die X}$ (for the compactification $\\overline{X}$, or for \nthe boundary $\\die X$) whose entourages are the $E \\in \\calE_{|X|_\\tau}$ (i.e., \n$E$ satisfying the topological properness axiom) which also satisfy \n\\eqref{subsect:cts-ctl:eq}.\n\n\\begin{remark}\nIf $X$ is compact (so $\\overline{X} = X$ and $\\die X = \\emptyset$), then \n$|X|_{\\die X} = |X|_0^\\TXTconn$ (i.e., $X$ equipped with the initial connected \ncoarse structure).\n\\end{remark}\n\nThe following is standard.\n\n\\begin{proposition}\nSuppose $X$, $Y$ are compactified spaces. Any Roe coarse map $f \\from |Y|_{\\die \nY}^\\TXTRoe \\to |X|_{\\die X}^\\TXTRoe$ determines a canonical \\emph{continuous} \nmap $\\die Y \\to \\die X$ which we denote by $\\die [f]$. Moreover, Roe coarse \nmaps $f, f' \\from |Y|_{\\die Y}^\\TXTRoe \\to |X|_{\\die X}^\\TXTRoe$ are close if \nand only if $\\die [f] = \\die [f']$ (which justifies our notation).\n\\end{proposition}\n\nThe ``converse'' is also true: Any set map $Y \\to X$ (not necessarily \ncontinuous, but necessarily topologically proper) which ``extends \ncontinuously'' to a continuous map $\\die Y \\to \\die X$ is Roe coarse as a map \n$|Y|_{\\die Y}^\\TXTRoe \\to |X|_{\\die X}^\\TXTRoe$. This is essentially \ntautological, since the definition of ``extends continuously'' is exactly the \ndefinition of ``is continuously controlled''.\n\n\\begin{proof}\nFix a Roe coarse map $f$. Given $y_\\infty \\in \\die Y$, define $(\\die \n[f])(y_\\infty)$ as follows: By second countability, there is a sequence \n$\\seq{y_n}_{n=1}^\\infty$ in $Y$ which converges to $y_\\infty$. Then the \ndiagonal set $1_{\\set{y_n \\suchthat n \\in \\setN}}$ is in $\\calR_{|Y|_{\\die \nY}^\\TXTRoe}$, so $1_{\\set{f(y_n) \\suchthat n \\in \\setN}}$ must be in \n$\\calR_{|X|_{\\die X}^\\TXTRoe}$. 
By topological properness, the limit points of \n$\\seq{f(y_n)}_{n=1}^\\infty$ in $\\overline{X}$ (which exist by compactness) are \nall in $\\die X \\subseteq \\overline{X}$; in fact there is only one limit point, \nwhich we call $(\\die [f])(y_\\infty)$. Well-definedness follows from the \nobservation that if $\\seq{y'_n}_{n=1}^\\infty \\subseteq Y$ (possibly a \nsubsequence of $\\seq{y_n}_{n=1}^\\infty$) also converges to $y_\\infty$, then \n$1_{\\set{(y_n,y'_n) \\suchthat n \\in \\setN}} \\in \\calR_{|Y|_{\\die Y}^\\TXTRoe}$ \nhence $1_{\\set{(f(y_n),f(y'_n)) \\suchthat n \\in \\setN}} \\in \\calR_{|X|_{\\die \nX}^\\TXTRoe}$, so $\\seq{f(y'_n)}_{n=1}^\\infty$ and $\\seq{f(y_n)}_{n=1}^\\infty$ \nhave the same limit points. To see that $\\die [f]$ is continuous, one proves \nsequential continuity (which suffices) using the obvious diagonal argument.\n\n$f$ (and similarly $f'$) ``extend continuously'' to maps $\\overline{Y} \\to \n\\overline{X}$: e.g.,\n\\[\n \\bar{f}(y) \\defeq \\begin{cases}\n f(y) & \\text{if $y \\in Y$, and} \\\\\n (\\die [f])(y) & \\text{if $y \\in \\die Y$.}\n \\end{cases}\n\\]\nThe second assertion then follows using the observation that, for any $F \\in \n\\calR_{|Y|_{\\die Y}^\\TXTRoe}$,\n\\[\n \\overline{(f \\cross f')(F)}\n = (\\bar{f} \\cross \\bar{f}')(\\overline{F})\n\\]\n(closures taken in $\\overline{X}^{\\cross 2}$ and $\\overline{Y}^{\\cross 2}$, respectively).\n\\end{proof}\n\nTemporarily let $\\calC$ be the category of second countable, compact spaces \n(and continuous maps). If $M \\in \\Obj(\\calC)$, $\\setRplus \\cross M$ \ncompactified with boundary $M$ (so $\\overline{\\setRplus \\cross M}$ is \nhomeomorphic to $\\ccitvl{0,1} \\cross M$) is a compactified space. Then $M \n\\mapsto |\\setRplus \\cross M|_M^\\TXTRoe$ (on objects; $g \\mapsto \\id_{\\setRplus} \n\\cross g$ on morphisms) defines a (Roe) coarsely invariant functor \n$\\calO_\\TXTtop^\\TXTRoe \\from \\calC \\to \\CATRoePCrs$. By the above Proposition,\n\\[\n [\\calO_\\TXTtop^\\TXTRoe] \\defeq \\Quotient \\circ \\calO_\\TXTtop^\\TXTRoe\n \\from \\calC \\to \\CATRoeCrs\n\\]\nis fully faithful. As $[\\Disc] \\from \\CATRoeCrs \\to \\CATCrs$ is also fully \nfaithful (Theorem~\\ref{thm:RoeCrs-Disc-fullfaith}), the resulting \ncomposition\n\\[\n [\\calO_\\TXTtop] \\defeq [\\Disc] \\circ [\\calO_\\TXTtop^\\TXTRoe]\n \\from \\calC \\to \\CATCrs\n\\]\nis again fully faithful. Note that\n\\[\n [\\calO_\\TXTtop] = \\Quotient \\circ \\calO_\\TXTtop,\n\\]\nwhere $\\calO_\\TXTtop \\defeq \\Disc \\circ \\calO_\\TXTtop^\\TXTRoe \\from \\calC \\to \\CATPCrs$ \n(a coarsely invariant functor).\n\n\\begin{definition}[see, e.g., \\cite{MR1817560}*{\\S{}6.2}]%\n \\label{def:ctsctl-cone}\nFor any second countable, compact space $M$, the \\emph{continuously controlled \nopen cone} on $M$ is the coarse space\n\\[\n \\calO_\\TXTtop M \\defeq |\\setRplus \\cross M|_M.\n\\]\n\\end{definition}\n\nWe saw above that $M \\mapsto \\calO_\\TXTtop M$ is a coarsely invariant functor \nfrom the category of second countable, compact topological spaces to the \nprecoarse category $\\CATPCrs$.\n\n\\begin{remark}[compare \\cite{MR1341817}*{Thm.~1.23 and Cor.~1.24}]%\n \\label{rmk:ctsctl-cones}\nAll continuously controlled coarse spaces can be described as cones in a \nnatural way. That is, for any compactified space $X$, there is a natural coarse \nequivalence\n\\[\n \\calO_\\TXTtop(\\die X) \\isoto |X|_{\\die X}\n\\]\n(indeed, there is a natural Roe coarse equivalence $|\\setRplus \\cross \\die \nX|_{\\die X}^\\TXTRoe \\isoto |X|_{\\die X}^\\TXTRoe$). 
Thus, up to coarse \nequivalence, $|X|_{\\die X}$ only depends on the topology of the boundary $\\die \nX$, and not of $X$ itself. We leave this to the reader.\n\\end{remark}\n\n\\begin{remark}\\label{rmk:ctsctl-quot}\nSuppose $M$ is a second countable, compact space, and $N \\subseteq M$ is a \nclosed subspace. There is a natural (coarse) inclusion $\\iota \\from \n\\calO_\\TXTtop N \\injto \\calO_\\TXTtop M$ of continuously controlled open cones, \nhence a quotient coarse space\n\\[\n (\\calO_\\TXTtop M)\/[\\calO_\\TXTtop N]\n \\defeq (\\calO_\\TXTtop M)\/[\\iota](\\calO_\\TXTtop N)\n\\]\n(see \\S\\ref{subsect:Crs-quot}). One can check that the quotient $(\\calO_\\TXTtop \nM)\/[\\calO_\\TXTtop N]$ is naturally coarsely equivalent to the continuously \ncontrolled open cone $\\calO_\\TXTtop (M\/N)$ on the topological quotient $M\/N$.\n\\end{remark}\n\n\\begin{remark}\\label{rmk:ctsctl-ray}\nThe continuously controlled ray\n\\[\n |\\coitvl{0,1}|_{\\set{1}} \\cong |\\setRplus|_{\\ast} \\cong |\\setZplus|_{\\ast} \n \\cong \\calO_\\TXTtop \\ast\n\\]\n(where $\\ast$ is a one-point space) is coarsely equivalent to $|\\setZplus|_1$, \ni.e., a countable set with the terminal coarse structure.\n\\end{remark}\n\n\n\\subsection{Metric coarse simplices}\\label{subsect:met-simpl}\n\nWe index our simplices in the same way as Mac~Lane \\cite{MR1712872}*{Ch.~VII \n\\S{}5}, shifted by $1$ from most topologists' indexing. That is, our \n$n$-simplices are topologists' $(n-1)$-simplices (which have geometric \ndimension $n-1$) and we include the ``true'' $0$-simplex.\n\n\\begin{definition}\nAs sets, put $\\Delta_0 \\defeq \\set{0}$, $\\Delta_1 \\defeq \\setRplus \\defeq \n\\coitvl{0,\\infty}$, \\ldots, $\\Delta_n \\defeq (\\setRplus)^n$, \\ldots. For each \n$n = 0, 1, 2, \\dotsc$, let $d \\defeq d_n$ be the $l^1$ metric on $\\Delta_n$, \ni.e.,\n\\[\n d_n((x_0,\\dotsc,x_{n-1}),(x'_0,\\dotsc,x'_{n-1}))\n \\defeq |x_0 - x'_0| + \\dotsb + |x_{n-1} - x'_{n-1}|,\n\\]\nand denote the resulting coarse space, called the \\emph{metric coarse \n$n$-simplex}, by\n\\[\n |\\Delta_n| \\defeq |\\Delta_n|_\\TXTmet \\defeq |\\Delta_n|_{d_n}\n\\]\n(the metric coarse space defined in \\S\\ref{subsect:prop-met}). We may also \nsubstitute the coarsely equivalent unital subspaces $(\\setZplus)^n \\subseteq \n(\\setRplus)^n$ for the $\\Delta_n$ when convenient.\n\\end{definition}\n\nNote that we may replace the $l^1$ metric with any $l^p$-metric ($1 \\leq p \\leq \n\\infty$), since\n\\[\n \\norm{x}_\\infty \\leq \\norm{x}_p \\leq \\norm{x}_1 \\leq n \\norm{x}_\\infty\n\\]\n(for all $1 \\leq p \\leq \\infty$, $x \\in \\Delta_n \\subseteq \\setR^n$); all these \nmetrics yield the same Roe coarse structure and hence the same coarse structure \non $\\Delta_n$. See Proposition~\\ref{prop:met-simp-univ} below for a bit more \nabout the ``universality'' of metric coarse simplices.\n\nFor each $n = 0, 1, 2, \\dotsc$, $j = 0, \\dotsc, n$, define a coarse map \n$\\delta_j \\defeq \\delta_j^n \\from |\\Delta_n| \\to |\\Delta_{n+1}|$ by\n\\begin{equation}\\label{subsect:met-simpl:eq:delta}\n \\delta_j(x_0, \\dotsc, x_{n-1})\n \\defeq (x_0, \\dotsc, x_{j-1}, 0, x_j, \\dotsc, x_{n-1})\n\\end{equation}\n(for $n = 0$, let $\\delta_0^0$ be the inclusion). 
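For instance, for $n = 2$, formula \\eqref{subsect:met-simpl:eq:delta} gives the three maps \n$\\delta_0^2, \\delta_1^2, \\delta_2^2 \\from |\\Delta_2| \\to |\\Delta_3|$,\n\\[\n \\delta_0(x_0,x_1) = (0,x_0,x_1), \\quad\n \\delta_1(x_0,x_1) = (x_0,0,x_1), \\quad\n \\delta_2(x_0,x_1) = (x_0,x_1,0),\n\\]\neach of which is an isometric embedding for the $l^1$ metrics and metrically proper, \nso each is indeed a (Roe) coarse map. 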
For each $n = 1, 2, 3, \n\\dotsc$, $j = 0, \\dotsc, n-1$, define a coarse map $\\sigma_j \\defeq \\sigma_j^n \n\\from |\\Delta_{n+1}| \\to |\\Delta_n|$ by\n\\begin{equation}\\label{subsect:met-simpl:eq:sigma}\n \\sigma_j(x_0, \\dotsc, x_n)\n \\defeq (x_0, \\dotsc, x_{j-1}, x_j+x_{j+1}, x_{j+2}, \\dotsc, x_n).\n\\end{equation}\nIt is easy to verify that the above maps are coarse and satisfy the \n\\emph{cosimplicial identities} (see, e.g., \\cite{MR1711612}*{I.1} or equations \n(11)--(13) in \\cite{MR1712872}*{Ch.~VII \\S{}5}). Consequently, we get a functor \nfrom the \\emph{simplicial category} $\\CATSimp$ to $\\CATPCrs$. Composing with \nthe quotient functor yields the \\emph{metric coarse simplex functor}\n\\[\n |\\Delta|_\\TXTmet \\from \\CATSimp \\to \\CATCrs;\n\\]\nfor $n \\in \\Obj(\\CATSimp) = \\set{0, 1, 2, \\dotsc}$, $|\\Delta|_\\TXTmet(n) = \n|\\Delta_n|_\\TXTmet$.\n\nProceeding as standard (see, e.g., \\cite{MR1711612}), we may obtain \n\\emph{metric coarse realizations} of any simplicial set (since $\\CATCrs$ has \nall colimits), get a corresponding notion of (metric coarse) ``weak \nequivalence'', define \\emph{metric coarse singular sets} and a resulting \n\\emph{metric coarse singular homology}, and so on. We leave all of this to a \nfuture paper (or to the reader).\n\n\\begin{remark}\nMitchener has defined a related notion of \\emph{coarse $n$-cells} and \n\\emph{coarse $(n-1)$-spheres} (and resulting \\emph{coarse $CW$-complexes}) \n\\cites{MR1834777, MR2012966}. We will also defer the comparison of these with \nour coarse simplices (and resulting coarse simplicial complexes) to a future \npaper.\n\\end{remark}\n\nThe $l^1$ (or any $l^p$, $1 \\leq p \\leq \\infty$) metric coarse structure on a \n$\\Delta_n$ is the minimal ``good'' one, in the following sense. Fix $n \\geq 0$, \nand consider the maps\n\\[\n \\delta_{j_1}^m \\circ \\dotsb \\circ \\delta_{j_{n-m}}^{n-1}\n \\from \\Delta_m \\to \\Delta_n\n\\]\nfor all $0 \\leq m < n$. (The $\\delta_j$ all topologically embed their domains \nas closed subspaces of their codomains, and hence the same is true of \ncompositions of the $\\delta_j$.) Let us call the (set or topological) images of \nthe each of the above maps a \\emph{boundary simplex} of the topological space \n$\\Delta_n$. We will not prove the following in full detail.\n\n\\begin{proposition}\\label{prop:met-simp-univ}\nSuppose $|\\Delta_n|_\\calR$ is a Roe coarse space with underlying topological \nspace $\\Delta_n \\defeq (\\setRplus)^n$ and Roe coarse structure $\\calR$. Then \nthere is a Roe coarse map $i \\from |\\Delta_n|_\\TXTmet \\to |\\Delta_n|_\\calR$ \nsuch that (as a set map) $i$ maps each boundary simplex of $\\Delta_n$ to \nitself.\n\\end{proposition}\n\nIn fact, with a bit more trouble, one can even take $i$ to be a homeomorphism. \nThe obvious discrete version of the above, with $(\\setZplus)^n$ in place of \n$\\Delta_n \\defeq (\\setRplus)^n$, is rather trivial. To get a nontrivial \nversion, one should replace $|\\Delta_n|_\\calR$ with a ``sector'' which grows \narbitrarily quickly away from the origin.\n\n\\begin{proof}[Sketch of proof]\nIt is trivial for $n = 0$, so suppose that $n \\geq 1$. Fix an open \nneighbourhood $E_0 \\in \\calR$ of the diagonal $1_{\\Delta_n}$. We will say that \n$B \\subseteq \\Delta_n$ is \\emph{$E_0$-bounded} if $B^{\\cross 2} \\subseteq E_0$. 
\nIn the following, \\emph{disc} will mean ``closed $l^1$ metric disc in \n$\\Delta_n$''; the \\emph{diameter} of a disc will always be measured in the \n$l^1$ metric.\n\nTesselate $\\Delta_n$ by discs diameter $1$ as in \nFigure~\\ref{prop:met-simp-univ:fig-I} (we illustrate the case $n = 2$), and let\n\\[\n L_{2j} \\defeq \\set{x \\in \\Delta_n \\suchthat j \\leq \\norm{x}_1 \\leq j+1}\n\\]\nfor $j = 0, 1, 2, \\dotsc$ be the ``layers'' of the tesselation. Then there is a \nrefinement of this tesselation by discs as in \nFigure~\\ref{prop:met-simp-univ:fig-II} such that each ``small'' disc of the \nrefinement is $E_0$-bounded; label the layers of this tesselation $L'_{2j_0}, \nL'_{2j_1}, \\dotsc$ as indicated in the Figure.\n\n\\begin{figure}\n\\resizebox{0.95\\linewidth}{!}{\\input{crscat-I-fig-subdiv-1.pdf_t}}\n\\caption{\\label{prop:met-simp-univ:fig-I}%\nThe tesselation of $\\Delta_2$ by discs of $l^1$-diameter $1$.}\n\\end{figure}\n\n\\begin{figure}\n\\resizebox{0.95\\linewidth}{!}{\\input{crscat-I-fig-subdiv-2.pdf_t}}\n\\caption{\\label{prop:met-simp-univ:fig-II}%\nA refinement of the tesselation by $E_0$-controlled discs.}\n\\end{figure}\n\nDefine a continuous, ``tesselation preserving'' map $i \\from \\Delta_n \\to \n\\Delta_n$ which sends $L_{2j_0}$ to $L'_{2j_0}$, $L_{2j_1}$ to $L'_{2j_1}$, \netc., collapsing the $L_{2j}$ which do not occur in the sequence $L_{2j_0}, \nL_{2j_1}, \\dotsc$; in the example illustrated in the Figures, $L_2$ is \ncollapsed to the ``level set'' $\\set{x \\in \\Delta_2 \\suchthat \\norm{x}_1 = 1}$, \n$L_{12}$ through $L_{22}$ are collapsed to $\\set{x \\in \\Delta_2 \\suchthat \n\\norm{x}_1 = 3}$, etc.\n\nThe map $i$ is (Roe) coarse. This map is proper and ``preserves'' the boundary \nsimplices. Consider the cover $\\set{B_1, B_2, \\dotsc}$ of $\\Delta_n$ by \n(overlapping) discs $B_k$ of diameter $2$, each a union of $2^n$ adjacent discs \nin the tesselation of Figure~\\ref{prop:met-simp-univ:fig-I}. We have that\n\\[\n \\bigunion_{k=1}^\\infty (B_k)^{\\cross 2}\n\\]\ngenerates the Roe coarse structure of $|\\Delta_n|_\\TXTmet$. The collection \n$\\set{i(B_1), i(B_2), \\dotsc}$ is uniformly $(E_0 \\circ E_0)$-bounded: the \ncollection of unions of $2^n$, adjacent, \\emph{equal-sized} discs in the \ntesselation of Figure~\\ref{prop:met-simp-univ:fig-II} is uniformly $(E_0 \\circ \nE_0)$-bounded, and each $i(B_k)$ is contained in such a disc (there are four \ncases to check in the latter assertion: (1) $i$ does not collapse $B_k$ at all, \n(2) $i$ completely collapses $B_k$, (3) $i$ collapses the ``top half'' of \n$B_k$, or (4) $i$ collapses the ``bottom half'' of $B_k$). This suffices to \nshow that $i$ preserves all (Roe) entourages of $|\\Delta_n|_\\TXTmet$.\n\\end{proof}\n\n\\begin{remark}\nThe above Proposition is not entirely satisfactory. $|\\Delta_n|_\\TXTmet$ should \nsatisfy a stronger universal property (which I have not yet proven): \n$|\\Delta_n|_\\TXTmet$ should be \\emph{coarse-homotopy}-universal with the above \nproperty. 
That is, if $\\calS$ is any Roe coarse structure on $\\Delta_n$ such \nthat the above is true with $|\\Delta_n|_\\calS$ in place of \n$|\\Delta_n|_\\TXTmet$, then $|\\Delta_n|_\\calS$ should be coarse homotopy \nequivalent to $|\\Delta_n|_\\TXTmet$ in such a way that its boundary simplices \nare preserved (compare \\cite{MR1243611}*{Thm.~7.3}).\n\\end{remark}\n\n\n\\subsection{Continuously controlled coarse simplices}\n\nIf the previously defined metric coarse structure on a simplex $\\Delta_n$ is \nthe minimal ``good'' one, the continuously controlled coarse structure on \n$\\Delta_n$ defined below is the maximal ``good'' one (again, we will not make \nthis precise in this paper).\n\nFor $n = 1, 2, 3, \\dotsc$, let $\\overline{\\Delta}_n$ be the obvious \ncompactification of the topological space $\\Delta_n \\defeq (\\setRplus)^n$ by \nthe standard topological simplex of geometric dimension $n-1$. Alternatively \n(and equivalently, for our purposes), put\n\\begin{align*}\n \\Delta_n & \\defeq \\Bigset{(x_0, \\dotsc, x_{n-1}) \\in (\\setRplus)^n\n \\suchthat \\sum_{j=0}^{n-1} x_j < 1}\n\\quad\\text{and} \\\\\n \\overline{\\Delta}_n & \\defeq \\Bigset{(x_0, \\dotsc, x_{n-1})\n \\in (\\setRplus)^n \\suchthat \\sum_{j=0}^{n-1} x_j \\leq 1},\n\\end{align*}\nso that $\\die \\Delta_n \\defeq \\overline{\\Delta}_n \\setminus \\Delta_n$ really is \nthe standard topological $(n-1)$-simplex. Put $\\Delta_0 \\defeq \\set{0}$ which \nis compact, so $\\overline{\\Delta}_0 = \\Delta_0$ and $\\die \\Delta_0 = \n\\emptyset$.\n\n\\begin{definition}\nFor $n = 0, 1, 2, \\dotsc$, the \\emph{continuously controlled coarse \n$n$-simplex} is the continuously controlled coarse space\n\\[\n |\\Delta_n| \\defeq |\\Delta_n|_\\TXTtop \\defeq |\\Delta_n|_{\\die \\Delta_n}.\n\\]\n\\end{definition}\n\nEquivalently (see Remark~\\ref{rmk:ctsctl-cones}), we can define \n$|\\Delta_n|_\\TXTtop$ to be the continuously controlled open cone \n$\\calO_\\TXTtop(\\die \\Delta_n)$ (with underlying set $\\setRplus \\cross (\\die \n\\Delta_n)$).\n\nAgain, as in \\S\\ref{subsect:met-simpl}, we can define various coarse maps \n$\\delta_j$ and $\\sigma_j$ between the continuously controlled coarse simplices. \nIndeed (using either of the above descriptions of the $\\Delta_n$), we may \ndefine them using the same formul\\ae\\ \\eqref{subsect:met-simpl:eq:delta} and \n\\eqref{subsect:met-simpl:eq:sigma}, and hence they also satisfy the \ncosimplicial identities. 
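For example, one can check directly from \\eqref{subsect:met-simpl:eq:delta} \nand \\eqref{subsect:met-simpl:eq:sigma} that, for $0 \\leq j \\leq n-1$,\n\\[\n \\sigma_j^n \\circ \\delta_j^n = \\id = \\sigma_j^n \\circ \\delta_{j+1}^n ,\n\\]\nsince inserting a $0$ in position $j$ or $j+1$ and then summing the $j$-th and \n$(j+1)$-st coordinates recovers the original point; the remaining cosimplicial \nidentities follow by equally direct computations. 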
Consequently, we get a \\emph{continuously controlled \ncoarse simplex functor}\n\\[\n |\\Delta|_\\TXTtop \\from \\CATSimp \\to \\CATCrs,\n\\]\nand everything that comes along with it: \\emph{continuously controlled coarse \nrealizations} of simplicial sets, a notion of (continuously controlled coarse) \n``weak equivalence'', \\emph{continuously controlled coarse singular sets} and \n\\emph{homology}, etc.\n\n\\begin{remark\nIf $X = \\calO_\\TXTtop M$ for a second countable compact topological space $M$ \n(where $\\calO_\\TXTtop M$ is the continuously controlled open cone on $M$ from \nDef.~\\ref{def:ctsctl-cone}), then it is easy to see that the continuously \ncontrolled coarse singular homology of $\\calO_\\TXTtop M$ is exactly the \nsingular homology of $M$ (in this case, we would want to discard our \n$0$-simplices and shift our indexing to match the topologists').\nContinuously controlled coarse simplices have another nice feature: \n$|\\Delta_1|_\\TXTtop$ is the continuously controlled ray, which is coarsely \nequivalent to $|\\setZplus|_1$, so $|\\Delta_1|_\\TXTtop$ is a product identity \nfor most coarse spaces which arise in practice (those in $\\CATCrs_{\\preceq \n|\\Delta_1|_\\TXTtop}$, which includes all those which are coarsely equivalent to \ncountable coarse spaces). However, continuously controlled simplices have a \nfundamental problem: they are too coarse, and so many coarse spaces $X$ of \ninterest (e.g., metric coarse spaces) do not even admit a coarse map \n$|\\Delta_1|_\\TXTtop \\to X$.\n\\end{remark}\n\n\n\\begin{comment}\n\\subsection{Coarse homotopy}\n\nThroughout this section, $X$ and $Y$ will be second countable Roe coarse \nspaces.\n\n\\begin{definition}[\\cite{MR1243611}*{Def.~1.2}]\nContinuous Roe coarse maps $f_0, f_1 \\from Y \\to X$ are \\emph{directly coarsely \nhomotopic} if there is a continuous, topologically proper map $(h_t) \\defeq H \n\\from Y \\cross \\ccitvl{0,1} \\to X$ such that $f_j = h_j$ for $j = 0, 1$ (where \nwe write $h_t \\defeq H(\\cdot,t)$ for $t \\in \\ccitvl{0,1}$) and, for every $F \n\\in \\calR_Y$,\n\\[\n \\bigunion_{t \\in \\ccitvl{0,1}} (h_t)^{\\cross 2}(F) \\in \\calR_X.\n\\]\nA \\emph{direct coarse homotopy} is a map $H$ of the above form. (Generalized) \n\\emph{coarse homotopy} of Roe coarse maps is the equivalence relation generated \nby direct coarse homotopy and closeness.\n\\end{definition}\n\nDirect coarse homotopy is an equivalence relation, but is evidently not (Roe) \ncoarsely invariant; (generalized) coarse homotopy is the invariant version.\n\nDenote $P \\defeq P_\\TXTtop \\defeq |\\Delta_1|_\\TXTtop$ (recall our slightly \nnonstandard indexing of simplices); of course, $P$ is coarsely equivalent (and \nuniquely isomorphic in $\\CATCrs$) to $|\\setZplus|_1$, and $P = \\Terminate(P)$. \nAlso denote $I \\defeq I_\\TXTtop \\defeq |\\Delta_2|_\\TXTtop$. Recall that we have \ntwo coarse maps $\\delta_0, \\delta_1 \\from P \\to I$ and $\\sigma_I \\defeq \n\\sigma_0 \\from I \\to P$, with the latter unique modulo closeness.\n\nRecall that we may describe $|\\Delta_1|_\\TXTtop$ and $|\\Delta_2|_\\TXTtop$ as \ncontinuously controlled open cones: $|\\Delta_1|_\\TXTtop \\defeq \\calO_\\TXTtop \n\\ast$ (where $\\ast$ is a one-point space) and $|\\Delta_2|_\\TXTtop \\defeq \n\\calO_\\TXTtop \\ccitvl{0,1}$, which are just $\\setRplus$ and $\\setRplus \\cross \n\\ccitvl{0,1}$ as sets, respectively. 
Using these descriptions, we may take\n\\begin{align*}\n \\delta_0(r) & \\defeq (r,0), \\\\\n \\delta_1(r) & \\defeq (r,1), \\quad\\text{and} \\\\\n \\sigma_I(r,t) & \\defeq r.\n\\end{align*}\n\nSince $Y$ is second countable, it is Roe coarsely equivalent to a countable \n(discrete) space. Hence there is a canonical arrow $[\\sigma_Y] \\from \\Disc(Y) \n\\to P$ in $\\CATCrs$ (choosing a representative coarse map $\\sigma_Y$ whenever \nnecessary), and $\\Disc(Y) \\cross P$ is canonically isomorphic to $\\Disc(Y)$ in \n$\\CATCrs$. From the coarse maps $\\delta_0$ and $\\delta_1$, we get two canonical \narrows $\\Disc(Y) \\to \\Disc(Y) \\cross I$.\n\n\\begin{proposition}\nSuppose $(h_t) \\defeq H \\from Y \\cross \\ccitvl{0,1} \\to X$ is a direct coarse \nhomotopy, and define a set map\n\\[\n \\tilde{H} \\from Y \\cross I = Y \\cross \\setRplus \\cross \\ccitvl{0,1} \\to X\n\\]\nby $\\tilde{H}(y,r,t) \\defeq H(y,t)$. Then $\\tilde{H}$ is actually coarse as a \nmap from the product coarse space $\\Disc(Y) \\cross I$ to $\\Disc(X)$.\n\\end{proposition}\n\n(Recall that the set underlying the product of connected coarse spaces can be \ntaken, in a natural way, to be the set product of the underlying sets.)\n\n\\begin{proof}\nLet us write $|X| \\defeq \\Disc(X)$ and $|Y| \\defeq \\Disc(Y)$. Define \n\\[\n \\tilde{H}' \\from Y \\cross I \\to X \\cross P\n\\]\nby $\\tilde{H}'(y,r,t) \\defeq (H(y,t),r)$. It suffices to show that $\\tilde{H}'$ \nis coarse as a map $|Y| \\cross I \\to |X| \\cross P$ since $\\tilde{H} = \\pi_{|X|} \n\\circ \\tilde{H}'$ and $\\pi_{|X|} \\from |X| \\cross P \\to |X|$ is coarse.\n\nWe have a commutative square\n\\[\\begin{CD}\n |Y| \\cross I @>{\\tilde{H}'}>> |X| \\cross P \\\\\n @V{\\pi_I}VV @V{\\pi_P}VV \\\\\n I @>{\\sigma_I}>> P\n\\end{CD}\\quad;\\]\nsince $\\sigma_I \\circ \\pi_I$ is coarse hence locally proper, $\\tilde{H}'$ must \nbe locally proper by Proposition~\\ref{prop:loc-prop-comp}.\n\nIt remains to show that $\\tilde{H}'$ preserves entourages, so fix\n$G \\in \\calE_{\\Disc(Y) \\cross I}$. Since $\\tilde{H}'$ is locally proper, \n$(\\tilde{H}')^{\\cross 2}(G)$ satisfies the properness axiom. Using the above \ncommutative square, we also get that $(\\pi_P)^{\\cross 2}((\\tilde{H}')^{\\cross \n2}(G))$ is in $\\calE_P$. Thus it only remains to show that\n\\[\n E \\defeq (\\pi_{|X|})^{\\cross 2}((\\tilde{H}')^{\\cross 2}(G))\n = \\tilde{H}^{\\cross 2}(G)\n\\]\nis in $\\calE_{|X|}$. Let us also put $F \\defeq (\\pi_{|Y|})^{\\cross 2}(G) \\in \n\\calE_{|Y|}$.\n\n$E$ satisfies the topological properness axiom: Fix a compact $K \\subseteq X$; \nwe must show that $E \\cdot K$, $K \\cdot E$ are finite sets (we omit the latter, \nsymmetric case). Since $H^{-1}(K)$ is relatively compact (recall that $H$ is \ntopologically proper), $L \\defeq \\pi_Y(H^{-1}(K)) \\subseteq Y$ is also \nrelatively compact. As $F$ satisfies the topological properness axiom, $F \\cdot \nL$ is a finite set, so $G \\cdot (\\pi_{|Y|})^{-1}(L)$ is finite since $\\pi_{|Y|} \n\\from |Y| \\cross I \\to |Y|$ is locally proper. If $x \\in E \\cdot K$, there is \nan $x' \\in K$ such that $(x,x') \\in E$, and then a $((y,r,t),(y',r',t')) \\in G$ \nsuch that $H(y,t) = x$ and $H(y',t') = x'$; but then $(y',r',t') \\in \n(\\pi_{|Y|})^{-1}(L)$, so $(y,r,t) \\in G \\cdot (\\pi_{|Y|})^{-1}(L)$. 
Hence $E \n\\cdot K$ is the image of a finite set, hence finite.\n\n$E$ is in $\\calR_X$:\n\n\n\\ldots\n\\end{proof}\n\n\n\n\n\n\n\n\\ldots\n\\end{comment}\n\n\n\\subsection{\\pdfalt{\\maybeboldmath $\\sigma$-coarse spaces and $\\sigma$-unital\n coarse spaces}{sigma-coarse spaces and sigma-unital coarse spaces}}\n\nIn \\cite{MR2225040}*{\\S{}2}, Emerson--Meyer consider increasing sequences of \ncoarse spaces. Their coarse spaces are equipped with topologies and are \nconnected and unital (i.e., are \\emph{Roe coarse spaces} in the terminology of \n\\S{}\\ref{subsect:Roe-crs-sp}). We will simply handle the discrete case. (This \nis perhaps at a significant loss of generality, since in a sense Emerson--Meyer \nare largely interested in ``non-locally-compact coarse spaces'' which we do not \nreally examine in this paper; see Remark~\\ref{rmk:top-crs-sp}.) For our \npurposes, we may safely discard the connectedness assumption, though we still \nneed unitality.\n\n\\begin{definition}[\\cite{MR2225040}*{\\S{}2}]\nA (discrete) \\emph{$\\sigma$-coarse space} $(X_m)$ is a nondecreasing sequence\n\\[\n X_0 \\subseteq X_1 \\subseteq X_2 \\subseteq \\dotsb\n\\]\nof unital coarse spaces such that, for all $m \\geq 0$, $X_m$ is a coarse \nsubspace of $X_{m+1}$ (i.e., is a subset and has the subspace coarse \nstructure).\n\\end{definition}\n\n\\begin{remark}\nGiven a sequence $(X_m)$ which is a $\\sigma$-coarse space in the sense of \nEmerson--Meyer (i.e., each $X_m$ is a Roe coarse space thus may have nontrivial \ntopology), one can obtain a nondecreasing sequence of coarse spaces by applying \nour discretization functor $\\Disc$ to each $X_m$. However, $\\Disc(X_m)$ is \ntypically not unital. It may be interesting to remove the unitality assumption \nfrom the above Definition, and thus be able to consider $(\\Disc(X_m))$ as a \n``nonunital $\\sigma$-coarse space''.\n\\end{remark}\n\nUntil otherwise stated (near the end of this section), $(X_m)$ and $(Y_n)$ will \nalways denote $\\sigma$-coarse spaces.\n\n\\begin{definition}[\\cite{MR2225040}*{\\S{}4}]\nA \\emph{coarse map} $(f_n) \\from (Y_n) \\to (X_m)$ of $\\sigma$-coarse spaces is \na map of directed systems in $\\CATPCrs$ (taken modulo cofinality).\n\\end{definition}\n\nThat is, a coarse map $(f_n) \\from (Y_n) \\to (X_m)$ is represented by a \nsequence of coarse maps $f_n \\from Y_n \\to X_{m(n)}$, $n = 0, 1, \\dotsc$ \n(where $0 \\leq m(0) \\leq m(1) \\leq \\dotsb$ is a nondecreasing sequence), such \nthat the obvious diagram commutes (in $\\CATPCrs$, not modulo closeness); two \nrepresentative sequences $(f_n)$, $(f'_n)$ (with associated nondecreasing \nsequences $m$ and $m'$, respectively) are considered to be equivalent if, \nfor all $n$, the compositions\n\\begin{equation}\\label{sect:sigma-crs:eq:maps}\n Y_n \\nameto{\\smash{f_n}} X_{m(n)} \\injto X_{\\max\\set{m(n),m'(n)}}\n\\quad\\text{and}\\quad\n Y_n \\nameto{\\smash{f'_n}} X_{m'(n)} \\injto X_{\\max\\set{m(n),m'(n)}}\n\\end{equation}\nare equal.\n\nActually, Emerson--Meyer consider maps $\\bigunion_n Y_n \\to \\bigunion_m X_m$, \ni.e., maps between set colimits which restrict to give sequences of coarse \nmaps. This is equivalent to our definition (which avoids set colimits).\n\n\\begin{definition}[\\cite{MR2225040}*{\\S{}4}]\\label{def:sigma-crs}\nCoarse maps $(f_n), (f'_n) \\from (Y_n) \\to (X_m)$ are \\emph{close} if, for all \n$n$ (and any, hence all, representative sequences $(f_n)$, $(f'_n)$, \nrespectively), the compositions \\eqref{sect:sigma-crs:eq:maps} are close. 
We \ndenote the \\emph{closeness} (equivalence) \\emph{class} of $(f_n)$ by $[f_n]$.\n\\end{definition}\n\nEquivalently, coarse maps $(f_n)$, $(f'_n)$ are close if they yield maps of \ndirected systems in $\\CATCrs$ which are equivalent modulo cofinality.\n\nSince the system $X_0 \\to X_1 \\to \\dotsb$ consists of inclusion maps, the \nprecoarse colimit $\\pfx{\\CATPCrs}\\OBJcolim X_m$ exists; one may take it to be\n\\[\n X \\defeq \\pfx{\\CATPCrs}\\OBJcolim X_m \\defeq \\bigunion_m X_m\n\\]\nas a set, with coarse structure\n\\[\n \\calE_X \\defeq \\langle \\calE_{X_m} \\suchthat m = 0, 1, \\dotsc \\rangle_X\n\\]\ngenerated by the coarse structures of all the $X_m$. In fact, since $X_m$ is a \ncoarse subspace of $X_{m+1}$ for all $m$,\n\\[\n \\calE_X = \\bigunion_m \\calE_{X_m}\n\\]\n(and $X_m$ is a subspace of $X$); conversely, we get, for each $m$, that\n$\\calE_{X_m} = \\calE_X |_{X_m}$.\n\nUntil otherwise stated, let $X$ be as above and similarly $Y \\defeq \n\\pfx{\\CATPCrs}\\OBJcolim Y_m \\defeq \\bigunion_n Y_n$.\n\nThe coarse colimit $\\pfx{\\CATCrs}\\OBJcolim X_m$ also exists (since all colimits \nin $\\CATCrs$ exist), and maps canonically to $X$ in $\\CATCrs$. The following is \neasy to show.\n\n\\begin{proposition}\n$\\pfx{\\CATCrs}\\OBJcolim X_m = \\pfx{\\CATPCrs}\\OBJcolim X_m \\eqdef X$. More \nprecisely, the canonical arrow\n\\[\n \\pfx{\\CATCrs}\\OBJcolim X_m \\to \\pfx{\\CATPCrs}\\OBJcolim X_m \\eqdef X\n\\]\nis an isomorphism (in $\\CATCrs$).\n\\end{proposition}\n\nBy definition, any coarse map $(f_n) \\from (Y_n) \\to (X_m)$ of $\\sigma$-coarse \nspaces yields a well-defined coarse map $f \\from Y \\to X$. (Of course, $f$ is \njust, as a set map, given by $f(y_n) \\defeq f_n(y_n)$ for all $n$ and $y_n \\in \nY_n$.) Likewise, its closeness class $[f_n]$ yields a well-defined closeness \nclass $[f] \\from Y \\to X$.\n\nLet $\\calP\\calS$ be the category of $\\sigma$-coarse spaces and coarse maps, and \n$\\calS$ be the category of $\\sigma$-coarse spaces and closeness classes of \ncoarse maps. We have defined functors\n\\[\n \\calL \\defeq \\pfx{\\CATPCrs}\\OBJcolim \\from \\calP\\calS \\to \\CATPCrs\n\\quad\\text{and}\\quad\n [\\calL] \\defeq \\pfx{\\CATCrs}\\OBJcolim \\from \\calS \\to \\CATCrs.\n\\]\n\n\\begin{proposition}\\label{prop:sigma-crs:L-full-faith}\nThe functor $\\calL \\from \\calP\\calS \\to \\CATPCrs$ is fully faithful.\n\\end{proposition}\n\n(Recall that ``faithful'' does not require injectivity on object sets!)\n\n\\begin{proof}\nFaithfulness: Clear, since representative sequences $(f_n), (f'_n) \\from (Y_n) \n\\to (X_m)$ are cofinally equivalent if and only if they are equal on colimits \n(i.e., $f = f'$).\n\nFullness: To show that $\\calL$ maps $\\Hom_{\\calP\\calS}((Y_n),(X_m))$ to \n$\\Hom_{\\CATPCrs}(Y,X)$ surjectively, we must use the unitality of the $Y_n$. \nSuppose $f \\from Y \\to X$ is a coarse map (not a priori in the image of \n$\\calL$). For each $n$, $Y_n$ is a unital subspace of $Y$, and hence $f(Y_n)$ \nis a unital subspace of $X$. Then $1_{f(Y_n)}$ must be an entourage of some \n$X_m$; let $m(n)$ be the least such $m$. Since $\\calE_{X_{m(n)}} = \\calE_X \n|_{X_{m(n)}}$, $f_n \\defeq f |_{Y_n}^{X_{m(n)}} \\from Y_n \\to X_{m(n)}$ is a \ncoarse map. 
It follows that $(f_n)$ is a coarse map of $\\sigma$-coarse spaces, \nand that $\\calL((f_n)) = f$.\n\\end{proof}\n\nThe following shows that unitality of the $Y_n$ really is needed for fullness.\n\n\\begin{example}\\label{ex:sigma-crs:L-nonunital-not-full}\nPut, for each $m$, $X_m \\defeq |\\set{0, \\dotsc, m-1}|_1$, so that $X \\defeq \n\\calL((X_m))$ is just $\\setZplus$ as a set, with entourages the finite subsets \nof $(\\setZplus)^{\\cross 2}$. Put, for all $n$, $Y_n \\defeq X$, so that the colimit \n$Y = X$ is nonunital ($(Y_n)$ is not a $\\sigma$-coarse space in our \nterminology). The identity map $f \\from Y \\to X$ is coarse, but its image is not \ncontained in any single $X_m$, so there is no ``coarse map'' $(f_n) \\from (Y_n) \\to \n(X_m)$ which yields $f$.\n\\end{example}\n\nA $\\sigma$-coarse space $(X_m)$ includes as a part of its structure the \n``filtration'' $X_0 \\subseteq X_1 \\subseteq \\dotsb$. However, the particular \nchoice of ``filtration'' is not important, since maps of $\\sigma$-coarse spaces \nare taken modulo cofinality.\n\n\\begin{corollary}\nIf $X \\defeq \\calL((X_m))$ is isomorphic in $\\CATPCrs$ to $Y \\defeq \n\\calL((Y_n))$ (i.e., there is a \\emph{bijection} of sets $f \\from Y \\to X$ such \nthat $f$ and $f^{-1}$ are both coarse maps), then $(X_m)$ is isomorphic to \n$(Y_n)$ in $\\calP\\calS$ (in particular, this is the case when $X = Y$ as coarse \nspaces).\n\\end{corollary}\n\nThe situation modulo closeness parallels the above.\n\n\\begin{proposition}\\label{prop:sigma-crs:QL-full-faith}\nThe functor $[\\calL] \\from \\calS \\to \\CATCrs$ is fully faithful.\n\\end{proposition}\n\n\\begin{proof}\nFaithfulness: Since each $Y_n$ is a subspace of $Y$ and each $X_m$ a subspace \nof $X$, closeness of $f = \\calL((f_n))$ to $f' = \\calL((f'_n))$ implies \ncloseness of the compositions \\eqref{sect:sigma-crs:eq:maps} (noting that $f_n \n= f |_{Y_n}^{X_{m(n)}}$ and similarly for $f'_n$).\n\nFullness: Here, we implicitly use the unitality condition. We have a \ncommutative diagram\n\\[\\begin{CD}\n \\calP\\calS @>{\\calL}>> \\CATPCrs \\\\\n @V{\\Quotient}VV @V{\\Quotient}VV \\\\\n \\calS @>{[\\calL]}>> \\CATCrs\n\\end{CD}\\,.\\]\nSince $\\calL$ is full and evidently the quotient functors are also full and map \nsurjectively onto object sets, $[\\calL]$ is full.\n\\end{proof}\n\nIt is not clear to me whether fullness of $[\\calL]$ fails if the unitality \ncondition is removed from Definition~\\ref{def:sigma-crs}; the counterexample of \nExample~\\ref{ex:sigma-crs:L-nonunital-not-full} fails.\n\n\\begin{corollary}\nIf $X \\defeq \\calL((X_m))$ is coarsely equivalent (i.e., isomorphic in \n$\\CATCrs$) to $Y \\defeq \\calL((Y_n))$, then $(X_m)$ is isomorphic to $(Y_n)$ in \n$\\calS$.\n\\end{corollary}\n\nIt follows from Propositions \\ref{prop:sigma-crs:QL-full-faith} \nand~\\ref{prop:sigma-crs:L-full-faith} that $\\calL$ and $[\\calL]$ are \nequivalences (of categories) onto their images. 
We now consider what the images \nof these functors are (and how one constructs ``inverse'' functors).\n\nLet us ``reset'' our notation: $X$, $Y$ are just coarse spaces, not necessarily \ncoming from $\\sigma$-coarse spaces, and $(X_m)$, $(Y_n)$ are not assumed to \nhave any meaning.\n\n\\begin{definition}\nA coarse space $X$ is \\emph{$\\sigma$-unital} if there is a nondecreasing \nsequence\n\\[\n X_0 \\subseteq X_1 \\subseteq \\dotsb \\subseteq X\n\\]\nof unital subspaces of $X$ such that each unital subspace $X' \\subseteq X$ is \ncontained in some $X_m$ ($m$ depending on $X'$).\n\\end{definition}\n\nIt is implied that $X = \\bigunion_m X_m$, though this equality certainly does \nnot imply that each unital subspace of $X$ is contained in some $X_m$.\n\nLet $\\CATPCrs_{\\bfsigma} \\subseteq \\CATPCrs$ and $\\CATCrs_{\\bfsigma} \\subseteq \n\\CATCrs$ denote the full subcategories of $\\sigma$-unital coarse spaces. \nClearly, $\\calL$ and $[\\calL]$ map both into and onto $\\CATPCrs_{\\bfsigma}$ and \n$\\CATCrs_{\\bfsigma}$, respectively. We get the following.\n\n\\begin{theorem}\nThe functors $\\calL \\from \\calP\\calS \\to \\CATPCrs_{\\bfsigma}$ and $[\\calL] \n\\from \\calS \\to \\CATCrs_{\\bfsigma}$ are equivalences of categories.\n\\end{theorem}\n\nIt is also easy to construct ``inverse'' functors. Choose, for each \n$\\sigma$-unital $X$, a ``filtration'' $(X_m)$. Then $X \\mapsto (X_m)$ (and, for \n$f \\from Y \\to X$, $f \\mapsto (f_n)$, where $f_n$ is an appropriate range \nrestriction of $f |_{Y_n}$) gives a functor ``inverse'' to $\\calL \\from \n\\calP\\calS \\to \\CATPCrs_{\\bfsigma}$. Choosing representative coarse maps, one \ndoes the same to obtain an ``inverse'' to $[\\calL] \\from \\calS \\to \n\\CATCrs_{\\bfsigma}$.\n\n\n\\subsection{Quotients and Roe algebras}\\label{subsect:quot-Cstar}\n\nWe shall assume that the reader is familiar with the definition and \nconstruction of the Roe algebras $C^*(X)$ for $X$ a (Roe) coarse space (see, \ne.g., \\cite{MR1451755}); the generalization to our nonunital situation is \nstraightforward. We will follow the standard, abusive practice of pretending \nthat $X \\mapsto C^*(X)$ is a functor. (The situation is slightly complicated by \nour nonunital situation. However, there are a number of ways of obtaining an \nactual functor, just not to the category of $C^*$-algebras. One could, for \nexample, construct a coarsely invariant functor from $\\CATCrs$ to the category \nof $C^*$-categories \\cite{MR1881396}.) The important fact is that, applying \n$K$-theory, one gets a coarsely invariant functor $X \\mapsto \nK_\\grstar(C^*(X))$. The following should be regarded as a sketch, with more \ndetails to follow in a future paper.\n\nFix a coarse space $X$ and a subspace $Y \\subseteq X$, and denote the inclusion \n$Y \\injto X$ by $\\iota$. We note that the following does not depend on our \ngeneralizations, and even works in the ``classical'' unital context; if $X$ is \na Roe coarse space in the sense of \\S\\ref{sect:top-crs}, $Y$ should be closed \nin $X$. Recall that we simply denote the quotient $X\/[\\iota](Y)$ (defined in \n\\S\\ref{subsect:Crs-quot}) by $X\/[Y]$. 
The coarse space $X\/[Y]$ is easy to \ndescribe: It is just $X$ as a set, with coarse structure generated by the \nentourages of $X$ and those of $\\Terminate(Y)$ (if $X$ is unital, the latter \nare just those of the terminal coarse structure on $Y$).\n\nThe quotient $Y\/[Y] \\Terminate(Y)$ is a subspace of $X\/[Y]$, with\n$\\utilde{\\iota} \\from Y\/[Y] \\injto X\/[Y]$ an inclusion. We get a commutative \nsquare\n\\[\\begin{CD}\n Y @>{\\iota}>> X \\\\\n @V{\\tilde{q}}VV @V{q}VV \\\\\n Y\/[Y] @>{\\utilde{\\iota}}>> X\/[Y]\n\\end{CD}\\quad,\\]\nwhere $\\tilde{q}$ and $q$ represent the quotient maps (which one can take to be \nidentity set maps). This square gives rise to a commutative diagram\n\\[\\begin{CD}\n 0 @>>> C^*_X(Y) @>{\\iota_*}>> C^*(X) @>>> Q_{X,Y} @>>> 0 \\\\\n @. @V{\\tilde{q}_*}VV @V{q_*}VV @V{\\utilde{q}_*}VV \\\\\n 0 @>>> C^*_{X\/[Y]}(Y\/[Y]) @>{\\utilde{\\iota}_*}>> C^*(X\/[Y])\n @>>> Q_{X\/[Y],Y\/[Y]} @>>> 0\n\\end{CD}\\]\nof $C^*$-algebras whose rows are exact; $C^*_X(Y)$ denotes the ideal of \n$C^*(X)$ of operators supported near $Y$ (which can be identified with \n$C^*(\\OBJcoim [\\iota])$, where $\\OBJcoim [\\iota] = X$ as a set with the \nnonunital coarse structure of entourages of $X$ supported near $Y$; see \nDef.~\\ref{def:Crs-coimage}) and $Q_{X,Y}$ is the quotient $C^*$-algebra (and \nsimilarly for the second row).\n\nNext, one observes that $\\utilde{q}_*$ is an isomorphism \\emph{of \n$C^*$-algebras} hence induces an isomorphism on $K$-theory. Let us specialize \nto the case when $X$ is unital (from which it follows that $Y$ and the quotient \ncoarse spaces are also unital), and examine the consequences. If $Y$ is finite \n(or compact, in the Roe coarse space version) then $X = X\/[Y]$ and $Y = Y\/[Y]$, \nso $\\tilde{q}_*$ and $q_*$ are identity maps on the level of $C^*$-algebras and \nhence the diagram is trivial.\n\nOn the other hand, if $Y$ is infinite, then $Y\/[Y] = |Y|_1$ has the terminal \ncoarse structure and one can show by a standard ``Eilenberg swindle'' (see, \ne.g., \\cite{MR1817560}*{Lem.~6.4.2}) that $K_\\grstar(C^*_{X\/[Y]}(Y\/[Y])) = 0$. \nThus we get a canonical isomorphism\n\\[\n K_\\grstar(C^*(X\/[Y])) \\isoto K_\\grstar(Q_{X\/[Y],Y\/[Y]})\n \\isoto K_\\grstar(Q_{X,Y})\n\\]\non $K$-theory. Consequently, using the isomorphism $K_\\grstar(C^*(Y)) \\isoto\nK_\\grstar(C^*_X(Y))$ (which is easy to prove under most circumstances), we get \na long (or six-term) exact sequence\n\\begin{equation}\\label{subsect:quot-Cstar:eq:lx}\n \\dotsb \\nameto{\\smash{\\die}} K_\\grstar(C^*(Y))\n \\to K_\\grstar(C^*(X))\n \\to K_\\grstar(C^*(X\/[Y]))\n \\nameto{\\smash{\\die}} K_{\\grstar-1}(C^*(Y))\n \\to \\dotsb \\,.\n\\end{equation}\n\n\\begin{remark}[continuous control]\\label{rmk:ctsctl-quot-Cstar}\nIn the above situation, suppose that $X = \\calO M$ and $Y = \\calO N$ are \ncontinuously controlled open cones, where $N$ is a nonempty closed subspace of \na second countable, compact space $M$ and we abbreviate $\\calO \\defeq \n\\calO_\\TXTtop$. Then there are natural isomorphisms\n\\begin{equation}\\label{rmk:ctsctl-quot-Cstar:eq}\n K_\\grstar(C^*(\\calO M)) \\cong \\tilde{K}^{1-\\grstar}(C(M))\n = \\tilde{K}_{\\grstar-1}(M)\n\\quad\\text{and}\\quad\n K_\\grstar(C^*(\\calO N)) \\cong \\tilde{K}_{\\grstar-1}(N),\n\\end{equation}\nwhere $\\tilde{K}$ is reduced $K$-homology (see, e.g., \n\\cite{MR1817560}*{Cor.~6.5.2}). 
One can check that there is also a natural \nisomorphism\n\\[\n K_\\grstar(C^*(\\calO M\/[\\calO N])) \\cong K_{\\grstar-1}(M,N)\n\\]\n(to relative $K$-homology), so that the above long exact sequence \n\\eqref{subsect:quot-Cstar:eq:lx} naturally maps isomorphically to the reduced \n$K$-homology sequence\n\\[\n \\dotsb \\nameto{\\smash{\\die}} \\tilde{K}_{\\grstar-1}(N)\n \\to \\tilde{K}_{\\grstar-1}(M)\n \\to K_{\\grstar-1}(M,N)\n \\nameto{\\smash{\\die}} K_{\\grstar-2}(N)\n \\to \\dotsb \\,.\n\\]\nWe have three natural isomorphisms\n\\begin{align*}\n K_\\grstar(C^*(\\calO M\/[\\calO N])) & \\cong K_\\grstar(C^*(\\calO (M\/N))), \\\\\n K_\\grstar(C^*(\\calO (M\/N))) & \\cong \\tilde{K}_{\\grstar-1}(M\/N),\n\\qquad\\qquad\\quad\\text{and} \\\\\n K_{\\grstar-1}(M,N) & \\cong \\tilde{K}_{\\grstar-1}(M\/N),\n\\end{align*}\nfrom Remark~\\ref{rmk:ctsctl-quot}, as in \\eqref{rmk:ctsctl-quot-Cstar:eq} \nabove, and by excision for $K$-homology, respectively; these are mutually \ncompatible in the obvious sense.\n\\end{remark}\n\n\\begin{example}[\\maybeboldmath $K$-theory of $\\calO_\\TXTtop S^n$]\nWe give yet another version of a standard calculation (see, e.g., \n\\cite{MR1817560}*{Thm.~6.4.10}). For $n \\geq 0$, denote the topological \n$n$-sphere by $S^n$ and, for $n \\geq 1$, the closed $n$-disc by $D^n$; recall \nthat $D^n$ has ``boundary'' $S^{n-1}$ and that $D^n\/S^{n-1} \\cong S^n$. Again \nwe abbreviate $\\calO \\defeq \\calO_\\TXTtop$.\n\nFirst, we compute the $K$-theory of $X \\defeq \\calO S^0$. Put $Y \\defeq \\calO \n\\set{-1} \\subseteq X$, $X' \\defeq \\calO \\set{1} \\subseteq X$, and $Y' \\defeq \n\\set{0} \\subseteq Y \\intersect X'$. It is well known that $K_\\grstar(C^*(X')) = \n0$ and $K_\\grstar(C^*_X(Y)) = 0$ (by the aforementioned ``Eilenberg swindle''), \nand that\n\\[\n K_\\grstar(C^*_{X'}(Y')) = \\begin{cases}\n \\setZ & \\text{if $\\grstar \\equiv 0 \\AMSdisplayoff\\pmod{2}$,\n and} \\\\\n 0 & \\text{otherwise}\n \\end{cases}\n\\]\n(since $C^*_{X'}(Y')$ is just the compact operators). We have a map of short \nexact sequences\n\\[\\begin{CD}\n 0 @>>> C^*_{X'}(Y') @>>> C^*(X') @>>> Q' @>>> 0 \\\\\n @. @VVV @VVV @VVV \\\\\n 0 @>>> C^*_{X}(Y) @>>> C^*(X) @>>> Q @>>> 0\n\\end{CD}\\quad.\\]\nBut one checks easily that the map $Q' \\to Q$ is an isomorphism of \n$C^*$-algebras, hence from the $K$-theory long exact sequences we get\n\\begin{multline*}\n K_\\grstar(C^*(\\calO S^0)) = K_\\grstar(Q) = K_\\grstar(Q') \\\\\n = K_{\\grstar-1}(C^*_{X'}(Y')) = \\begin{cases}\n \\setZ & \\text{if $\\grstar \\equiv 1 \\AMSdisplayoff\\pmod{2}$,\n and} \\\\\n 0 & \\text{otherwise.}\n \\end{cases}\n\\end{multline*}\n\nWe proceed to calculate the $K$-theory of $\\calO S^n$, $n \\geq 1$, by \ninduction. Put $X \\defeq \\calO D^n$ and $Y \\defeq \\calO S^{n-1} \\subseteq X$, \nand recall that $X\/[Y] = \\calO (D^n\/S^{n-1}) = \\calO S^n$. 
Then, by \nRemark~\\ref{rmk:ctsctl-quot-Cstar} above, we have a long exact sequence\n\\[\n \\dotsb \\nameto{\\smash{\\die}} K_\\grstar(C^*(Y))\n \\to K_\\grstar(C^*(X))\n \\to K_\\grstar(C^*(\\calO S^n))\n \\nameto{\\smash{\\die}} K_{\\grstar-1}(C^*(Y))\n \\to \\dotsb \\,.\n\\]\nBy another ``Eilenberg swindle'', one shows that $K_\\grstar(C^*(X)) = 0$ and \nhence\n\\[\n K_\\grstar(C^*(\\calO S^n)) = K_{\\grstar-1}(C^*(Y))\n = \\begin{cases}\n \\setZ & \\text{if $\\grstar \\equiv n-1 \\AMSdisplayoff\\pmod{2}$,\n and} \\\\\n 0 & \\text{otherwise.}\n \\end{cases}\n\\]\n\\end{example}\n\n\\begin{example}[\\maybeboldmath suspensions in $K$-homology]%\n \\label{ex:ctsctl-quot-Khom}\nSuppose that $A$ is a separable $C^*$-algebra. It is known that, for $n \\geq \n0$, elements of the Kasparov $K$-homology group $K^{n+1}(A)$ can be represented \nby (equivalence classes of) $C^*$-algebra morphisms\n\\begin{equation}\\label{ex:ctsctl-quot-Khom:eq:A}\n \\phi \\from A \\to C^*(\\calO S^n)\n\\end{equation}\n(see \\cites{MR1627621, my-thesis}; I caution that, in my opinion, this is \nprobably not the ``best'' coarse geometric description of $K$-homology, but \nwork remains ongoing). The pairing of $K_m(A)$ with a $K$-homology class \nrepresented by such $\\phi$ is given simply by applying $K$-theory to $\\phi$ \n(and using the computation as in the previous Example).\n\nFix $n \\geq 1$ and suppose that we are given an element of $K^n(\\Sigma A)$, \nwhere $\\Sigma A \\defeq C_0(\\ooitvl{0,1}) \\tensor A$ is the $C^*$-algebraic \nsuspension of $A$, represented by a morphism\n\\begin{equation}\\label{ex:ctsctl-quot-Khom:eq:SA}\n \\tilde{\\psi} \\from \\Sigma A \\to C^*(\\calO S^{n-1}).\n\\end{equation}\nActually, let us assume something stronger, that we are given a morphism $\\psi$ \nwhich fits into the following commutative diagram whose \\emph{rows} are with \nexact:\n\\[\\begin{CD}\n 0 @>>> \\Sigma A @>>> CA @>>> A @>>> 0 \\\\\n @. @V{\\psi |_{\\Sigma A}}VV @V{\\psi}VV @VVV \\\\\n 0 @>>> C^*_X(Y)\n @>>> C^*(X)\n @>>> Q @>>> 0 \\\\\n @. @VVV @VVV @VVV \\\\\n 0 @>>> C^*_{X\/[Y]}(Y\/[Y])\n @>>> C^*(X\/[Y])\n @>>> Q' @>>> 0\n\\end{CD}\\quad,\\]\nwhere $CA \\defeq C_0(\\coitvl{0,1}) \\tensor A$ is the cone on $A$, $X \\defeq \n\\calO D^n$, and $Y \\defeq \\calO S^{n-1} \\subseteq X$. (In fact, given a \n$\\tilde{\\psi}$, one can find a $\\psi$ such that\n\\[\\begin{CD}\n K_\\grstar(\\Sigma A) @>{\\tilde{\\psi}}>> K_\\grstar(C^*(Y)) \\\\\n @V{=}VV @V{\\sim}VV \\\\\n K_\\grstar(\\Sigma A) @>{\\psi |_{\\Sigma A}}>>\n K_\\grstar(C^*_X(Y))\n\\end{CD}\\]\ncommutes. This is not easy to prove, and seems to require that $A$ be \nseparable.)\n\nDenote the composition $A \\to Q \\to Q'$ by $\\utilde{\\phi}$. From the previous \nExample, we have natural isomorphisms\n\\[\n K_\\grstar(\\calO S^{n-1}) = K_\\grstar(C^*_X(Y)) = K_{\\grstar+1}(Q)\n = K_{\\grstar+1}(Q') = K_{\\grstar+1}(X\/[Y]) = K_{\\grstar+1}(\\calO S^n).\n\\]\nMoreover, since $K_\\grstar(CA) = 0$, we have $K_\\grstar(\\Sigma A) = \nK_{\\grstar+1}(A)$. These isomorphisms are all compatible, in the sense that \n$\\psi$ and $\\utilde{\\phi}$ are naturally equivalent on $K$-theory (with a \ndimension shift).\n\nIn fact, one can ``lift'' the morphism $\\utilde{\\phi}$ to a morphism $\\phi \n\\from A \\to C^*(X\/[Y]) = C^*(\\calO S^n)$ in the weak sense that the composition \n$A \\nameto{\\smash{\\phi}} C^*(X\/[Y]) \\to Q'$ is equal to $\\utilde{\\phi}$ on the \nlevel of $K$-theory. (This is not too difficult, but again seems to require \nthat $A$ be separable.) 
This provides a map from the $K$-homology group \n$K^n(\\Sigma A)$ (described as classes of morphisms as in \n\\eqref{ex:ctsctl-quot-Khom:eq:SA}) to the group $K^{n+1}(A)$ (described as in \n\\eqref{ex:ctsctl-quot-Khom:eq:A}).\n\\end{example}\n\n\n\n\n\\begin{bibsection}\n\n\\begin{biblist}\n\\bibselect{crscat}\n\\end{biblist}\n\n\\end{bibsection}\n\n\n\n\n\\end{document}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn this paper, we study Markov Decision Processes (hereafter MDPs) with arbitrarily varying rewards.\nMDP provides a general mathematical framework for modeling sequential decision making under uncertainty \\cite{bertsekas1995dynamic, howard1960dynamic, puterman2014markov}. In the standard MDP setting, if the process is in some state $s$, the decision maker takes an action $a$ and receives an expected reward $r(s,a)$, before the process randomly transitions into a new state. The goal of the decision maker is to maximize the total expected reward. It is assumed that the decision maker has complete knowledge of the reward function $r(s,a)$, which does not change over time.\n\nOver the past two decades, there has been much interest in sequential learning and decision making in an unknown and possibly \\emph{adversarial} environment.\nA wide range of sequential learning problems can be modeled using the framework of Online Convex Optimization (OCO) \\cite{zinkevich2003online,hazan2016introduction}. In OCO, the decision maker plays a repeated game against an adversary for a given number of rounds. At the beginning of each round indexed by $t$, the decision maker chooses an action $a_t$ in some convex compact set $A$ and the adversary chooses a concave reward function $r_t$, hence a reward of $r_t(a_t)$ is received. After observing the realized reward function, the decision maker chooses its next action $a_{t+1}$ and so on. Since the decision maker does not know how the future reward functions will be chosen, its goal is to achieve a small \\emph{regret}; that is, the cumulative reward earned throughout the game should be close to the cumulative reward if the decision maker had been given the benefit of hindsight to choose one fixed action. We can express the regret after $T$ rounds as\n\\[\n\\text{Regret} (T) = \\max_{a \\in A} \\sum_{t=1}^T r_t(a) - \\sum_{t=1}^T r_t(a_t).\n\\]\nThe OCO model has many applications such as universal portfolios \\cite{cover1991, kalai2002, helmbold1998}, online shortest path \\cite{takimoto2003path}, and online submodular minimization \\cite{hazan2012submodular}. It also has close relations with areas such as convex optimization \\cite{hazan2010optimal, ben2015oracle} and game theory \\cite{cesa2006prediction}. There are many algorithms that guarantee sublinear regret, e.g., Online Gradient Descent \\cite{zinkevich2003online}, Perturbed Follow the Leader \\cite{kalai2005efficient}, and Regularized Follow the Leader \\cite{shalev2007online,abernethy2009competing}.\nCompared with the MDP setting, the main difference is that in OCO there is no notion of states, however the payoffs may be chosen by an adversary. \n\n\n\n\nIn this work, we study a general problem that unites the MDP and the OCO frameworks, which we call the {\\bf Online MDP problem}. More specifically, we consider MDPs where the decision maker knows the transition probabilities but the rewards are dynamically chosen by an adversary. 
\nThe Online MDP model can be used for a wide range of applications, including multi-armed bandits with constraints \\cite{yu2009markov}, the paging problem in computer operating systems \\cite{even2009online}, the $k$-server problem \\cite{even2009online}, stochastic inventory control in operations research \\cite{puterman2014markov}, and scheduling of queueing networks \\cite{de2003linear,abbasi2014linear}.\n\n\n\n\\subsection{Main Results}\nWe propose a new computationally efficient algorithm that achieves near-optimal regret for the Online MDP problem.\nOur algorithm is based on the linear programming formulation of infinite-horizon average reward MDPs, which uses the occupancy measure of state-action pairs as decision variables. This approach differs from that of other papers that have studied the Online MDP problem previously; see the review in \\S\\ref{subsec:literature}. \n\n\nWe prove that the algorithm achieves regret bounded by $O(\\tau +\\sqrt{\\tau T (\\ln \\vert S \\vert +\\ln \\vert A \\vert)} \\ln(T) )$, where $S$ denotes the state space, $A$ denotes the action space, $\\tau$ is the mixing time of the MDP, and $T$ is the number of periods. Notice that this regret bound depends \\emph{logarithmically} on the sizes of the state and action spaces.\nThe algorithm solves a regularized linear program in each period with $poly(|S||A|)$ complexity. The regret bound and the computational complexity compare favorably to the existing methods discussed in \\S\\ref{subsec:literature}. \n\n\nWe then extend our results to the case where the state space $S$ is extremely large so that $poly(|S||A|)$ computational complexity is impractical. We assume the state-action occupancy measures associated with stationary policies are approximated with a linear architecture of dimension $d \\ll |S|$.\nWe design an approximate algorithm combining several innovative techniques for solving large-scale MDPs inspired by \\cite{abbasi2019large,abbasi2014linear}.\nA salient feature of this algorithm is that its computational complexity does not depend on the size of the state space but instead on the number of features $d$.\nThe algorithm has a regret bound $O(c_{S,A}(\\ln|S|+\\ln|A|)\\sqrt{\\tau T}\\ln T)$, where $c_{S,A}$ is a problem-dependent constant.\nTo the best of our knowledge, this is the first $\\tilde{O}(\\sqrt{T})$ regret result for large-scale Online MDPs.\n\n\\subsection{Related Work}\n\\label{subsec:literature}\n\nThe history of MDPs goes back to the seminal work of Bellman \\cite{bellman1957markovian} and Howard \\cite{howard1960dynamic} from the 1950s. \nSome classic algorithms for solving MDPs include policy iteration, value iteration, policy gradient, Q-learning, and their approximate versions (see \\cite{puterman2014markov, bertsekas1995dynamic,bertsekas1996neuro} for an excellent discussion). In this paper, we will focus on a relatively less used approach, which is based on finding the \\textit{occupancy measure} using linear programming, as done recently in \\cite{chen2018scalable,wang2017primal,abbasi2019large} to solve MDPs with \\emph{static} rewards (see more details in Section \\ref{mdp_via_lp}). To deal with the curse of dimensionality, \\cite{chen2018scalable} uses bilinear functions to approximate the occupancy measures and \\cite{abbasi2019large} uses a linear approximation.\n\n\nThe Online MDP problem was first studied a decade ago by \\cite{yu2009markov,even2009online}. 
In \\cite{even2009online}, the authors developed no-regret algorithms whose regret bound scales as $O(\\tau^2 \\sqrt{T \\ln(\\vert A \\vert)})$, where $\\tau$ is the mixing time (see \\S\\ref{sec:mdp_rftl}). Their method runs an expert algorithm (e.g., Weighted Majority \\cite{littlestone1994weighted}) on every state, with the actions playing the role of experts. However, the authors did not consider the case of a large state space in their paper. \n In \\cite{yu2009markov}, the authors provide a more computationally efficient algorithm using a variant of Follow the Perturbed Leader \\cite{kalai2005efficient}, but unfortunately their regret bound becomes $O(|S||A|^2\\tau T^{3\/4+\\epsilon})$. \nThey also considered an approximation algorithm for large state spaces, but did not establish an explicit regret bound.\n The work most closely related to ours is that of \\cite{dick2014online}, where the authors also use a linear programming formulation of the MDP similar to ours. \nHowever, there seem to be some gaps in the proof of their results.\\footnote{In particular, we believe the proof of Lemma 1 in \\cite{dick2014online} is incorrect. Equation (8) in their paper states that the regret relative to a policy is equal to the sum of a sequence of vector products; however, the dimensions of the vectors involved in these dot products are incompatible. By their definition, the variable $\\nu_t$ is a vector of dimension $\\vert S \\vert$, which is being multiplied with a loss vector of dimension $\\vert S \\vert \\vert A \\vert$.}\n\n The paper \\cite{ma2015online} also considers Online MDPs with large state space. Under some conditions, they show sublinear regret using a variant of approximate policy iteration, but the regret rate is left unspecified in their paper. \\cite{zimin2013online} considers a special class of MDPs called \\textit{episodic} MDPs and designs algorithms using the occupancy measure LP formulation. Following this line of work, \\cite{neu2017unified} shows that several reinforcement learning algorithms can be viewed as variants of Mirror Descent \\cite{juditsky2011first}, and thus one can establish convergence properties of these algorithms. In \\cite{neu2014online} the authors consider Online MDPs with bandit feedback and provide an algorithm based on that of \\cite{even2009online} with regret of $O(T^{2\/3})$.\n \n A more general problem than the Online MDP setting considered here is one where the MDP transition probabilities also change in an adversarial manner; this is beyond the scope of this paper. It is believed that this problem is much less tractable computationally \\cite[see discussion in][]{even2005experts}. \\cite{yu2009online} studies MDPs with changing transition probabilities,\nalthough \\cite{neu2014online} questions the correctness of their result, as the regret obtained appears to violate a known lower bound. In \\cite{gajane2018sliding}, the authors use a sliding window approach under a particular definition of regret. \\cite{abbasi2013online} shows sublinear regret with changing transition probabilities when comparing against a restricted policy class.\n\n\n\n\\section{Problem Formulation: Online MDP}\\label{section:online_mdps}\nWe consider a general Markov Decision Process with known transition probabilities but unknown and adversarially chosen rewards. Let $S$ denote the set of possible states, and $A$ denote the set of actions. 
(For notational simplicity, we assume the set of actions a player can take is the same for all states, but this assumption can be relaxed easily.)\nAt each period $t \\in [T]$, if the system is in state $s_t \\in S$, the decision maker chooses an action $a_t \\in A$ and collects a reward $r_t(s_t,a_t)$. Here, $r_t : S \\times A \\rightarrow [-1,1]$ denotes a reward function for period $t$. \nWe assume that the sequence of reward functions $\\{r_t\\}_{t=1}^T$ is initially unknown to the decision maker. The function $r_t$ is revealed only after the action $a_t$ has been chosen. We allow the sequence $\\{r_t\\}_{t=1}^T$ to be chosen by an \\textit{adaptive adversary}, meaning $r_t$ can be chosen using the history $\\{s_i\\}_{i=1}^{t}$ and $\\{a_i\\}_{i=1}^{t-1}$; in particular, the adversary does \\emph{not} observe the action $a_t$ when choosing $r_t$. \nAfter $a_t$ is chosen, the system then proceeds to state $s_{t+1}$ in the next period with probability $P(s_{t+1}\\vert s_t, a_t)$. \nWe assume the decision maker has complete knowledge of the transition probabilities given by $P(s' \\vert s, a) : S \\times A \\rightarrow S$.\n\nSuppose the initial state of the MDP follows $s_1 \\sim \\nu_1$, where $\\nu_1$ is a probability distribution over $S$. \nThe objective of the decision maker is to choose a sequence of actions based on the history of states and rewards observed, such that the cumulative reward in $T$ periods is close to that of the optimal offline static policy.\nFormally, let $\\pi$ denote a stationary (randomized) policy: $\\pi:S\\rightarrow \\Delta_A$, where $\\Delta_A$ is the set of probability distributions over the action set $A$. Let $\\Pi$ denote the set of all stationary policies. We aim to find an algorithm that minimizes \n\\begin{align}\\label{regret_def}\n\\text{MDP-Regret}(T)\\triangleq \\sup_{\\pi \\in \\Pi} R(T,\\pi), \\; \\text{with } R(T,\\pi) \\triangleq \\mathbb{E}[\\sum_{t=1}^T r_t(s^\\pi_t , a^\\pi_t)] - \\mathbb{E}[\\sum_{t=1}^T r_t(s_t,a_t)],\n\\end{align}\nwhere the expectations are taken with respect to random transitions of MDP and (possibly) external randomization of the algorithm. \n\n\n \\section{Preliminaries}\n\n\nNext we provide additional notation for the MDP. \nLet $P^\\pi_{s,s'} \\triangleq P(s' \\mid s, \\pi(s))$ be the probability of transitioning from state $s$ to $s'$ given a policy $\\pi$. Let $P^\\pi$ be the $\\vert S\\vert \\times \\vert S\\vert$ matrix with entries $P^\\pi_{s,s'} \\, \\forall s,s' \\in S$. \nWe use row vector $\\nu_t \\in \\Delta_S$ to denote the probability distribution over states at time $t$. Let $\\nu^\\pi_{t+1}$ be the distribution over states at time $t+1$ under policy $\\pi$, given by $\\nu^{\\pi}_{t+1} = \\nu_{t} P^\\pi$. \nLet $\\nu^\\pi_{st}$ denote the stationary distribution for policy $\\pi$, which satisfies the linear equation $\\nu^\\pi_{st} = \\nu^\\pi_{st} P^\\pi$. 
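\nAs a small numerical illustration of these objects (the random MDP below is arbitrary and only meant to make the notation concrete), one can build $P^\\pi$ from a kernel $P(s'\\vert s,a)$ and a randomized policy $\\pi$, and solve $\\nu^\\pi_{st} = \\nu^\\pi_{st} P^\\pi$ together with the normalization constraint:\n\\begin{verbatim}\nimport numpy as np\n\ndef stationary_distribution(P, pi):\n    # P[s, a, s2] = P(s2 | s, a); pi[s, a] = probability of action a in state s\n    S = P.shape[0]\n    P_pi = np.einsum('sa,sax->sx', pi, P)      # P_pi[s, s2] = sum_a pi(a|s) P(s2|s, a)\n    A_mat = np.vstack([P_pi.T - np.eye(S), np.ones((1, S))])\n    b = np.zeros(S + 1)\n    b[-1] = 1.0                                # appended constraint: sum(nu) = 1\n    nu, *_ = np.linalg.lstsq(A_mat, b, rcond=None)\n    return P_pi, nu\n\nrng = np.random.default_rng(1)\nP = rng.random((4, 3, 4)); P /= P.sum(axis=2, keepdims=True)\npi = rng.random((4, 3)); pi /= pi.sum(axis=1, keepdims=True)\nP_pi, nu = stationary_distribution(P, pi)\nprint(nu, np.allclose(nu @ P_pi, nu))          # nu is the stationary distribution\n\\end{verbatim}\n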
\nWe assume the following condition on the convergence to stationary distribution, which is commonly used in the MDP literature \\cite[see][]{yu2009markov,even2009online,neu2014online}.\n\n\\begin{assumption}\\label{assumption:mixing}\nThere exists a real number $\\tau \\geq 0$ such that for any policy $\\pi \\in \\Pi$ and any pair of distributions $\\nu,\\nu' \\in \\Delta_S$, it holds that $\\Vert \\nu P^\\pi - \\nu' P^\\pi\\Vert_1 \\leq e^{-\\frac{1}{\\tau}}\\Vert \\nu - \\nu'\\Vert_1$.\n\\end{assumption}\n\nWe refer to $\\tau$ in Assumption~\\ref{assumption:mixing} as the \\emph{mixing time}, which measures the convergence speed to the stationary distribution. In particular, the assumption implies that $\\nu^\\pi_{st}$ is unique for a given policy $\\pi$.\n\nWe use $\\mu(s,a)$ to denote the proportion of time that the MDP visits state-action pair $(s,a)$ in the long run. We call $\\mu^\\pi \\in \\mathbb{R}^{\\vert S\\vert \\times \\vert A\\vert}$ the \\emph{occupancy measure} of policy $\\pi$. \nLet $\\rho_t^\\pi $ be the long-run average reward under policy $\\pi$ when the reward function is fixed to be $r_t$ every period, i.e., $\\rho^\\pi_t \\triangleq \\lim_{T\\rightarrow \\infty} \\frac{1}{T} \\sum_{i=1}^T\\mathbb{E}[ r_t(s^\\pi_i,a^\\pi_i) ] $. We define $\\rho_t \\triangleq \\rho_t^{\\pi_t}$, where $\\pi_t$ is the policy selected by the decision maker for time $t$.\n\n\n\\subsection{Linear Programming Formulation for the Average Reward MDP}\\label{mdp_via_lp}\nGiven a reward function $r: S \\times A \\rightarrow [-1,1]$, suppose one wants to find a policy $\\pi$ that maximizes the long-run average reward: $\\rho^*=\\sup_{\\pi}\\lim_{T\\rightarrow \\infty} \\frac{1}{T}\\sum_{t=1}^T r(s^\\pi_t,a^\\pi_t)$.\nUnder Assumption~\\ref{assumption:mixing}, the Markov chain induced by any policy is ergodic and the long-run average reward is independent of the starting state \\cite{bertsekas1995dynamic}. \nIt is well known that \nthe optimal policy can be obtained by solving the Bellman equation, which in turn can be written as a linear program (in the dual form):\n\\begin{align}\n\\rho^* = \\max_\\mu & \\sum_{s\\in S} \\sum_{a \\in A} \\mu(s,a)r(s,a) \\label{eq:LP} \\\\\n\\text{s.t. } & \\sum_{s\\in S} \\sum_{a\\in A} \\mu(s,a) P(s' \\vert s,a) = \\sum_{a\\in A} \\mu (s',a) \\quad \\forall s' \\in S \\nonumber \\\\\n&\\sum_{s\\in S} \\sum_{a\\in A} \\mu(s,a) = 1,\\quad \\mu(s,a)\\geq 0 \\quad \\forall s\\in S,\\,\\forall a\\in A. \\nonumber \n\\end{align}\nLet $\\mu^*$ be an optimal solution to the LP \\eqref{eq:LP}. We can construct an optimal policy of the MDP by defining $ \\pi^*(s,a) \\triangleq \\frac{\\mu^*(s,a)}{\\sum_{a\\in A} \\mu^*(s,a)}$ for all $s\\in S$ such that $\\sum_{a\\in A} \\mu^*(s,a)>0$; for states where the denominator is zero, the policy may choose arbitrary actions, since those states will not be visited in the stationary distribution.\n Let $\\nu^*_{st}$ be the stationary distribution over states under this optimal policy. \n\nFor simplicity, we will write the first constraint of LP \\eqref{eq:LP}\nin the matrix form as $\\mu^\\top (P-B)=0$, for appropriately chosen matrix $B$. We denote the feasible set of the above LP as $\\Delta_M \\triangleq \\{\\mu\\in \\mathbb{R}: \\mu \\geq 0, \\mu^\\top1=1, \\mu^\\top(P-B)=0 \\}$. 
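\nThe LP \\eqref{eq:LP} can be handed to any off-the-shelf solver. The sketch below is an illustration only (the paper does not prescribe a particular solver, and the problem data are assumed to be small dense arrays); it uses \\texttt{scipy.optimize.linprog} and reads off the policy from the optimal occupancy measure as described above:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import linprog\n\ndef solve_average_reward_lp(P, r):\n    # P[s, a, s2] = P(s2 | s, a), r[s, a] = reward; decision variable mu(s, a), flattened\n    S, A = r.shape\n    A_eq = np.zeros((S + 1, S * A))\n    for s2 in range(S):                       # flow constraint for each next state s2\n        A_eq[s2] = P[:, :, s2].reshape(-1)\n        A_eq[s2, s2 * A:(s2 + 1) * A] -= 1.0\n    A_eq[S] = 1.0                             # normalization: entries of mu sum to 1\n    b_eq = np.zeros(S + 1)\n    b_eq[S] = 1.0\n    res = linprog(c=-r.reshape(-1), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))\n    mu = res.x.reshape(S, A)\n    policy = mu / np.maximum(mu.sum(axis=1, keepdims=True), 1e-12)\n    return -res.fun, mu, policy               # rho*, mu*, and pi*(a|s)\n\\end{verbatim}\n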
\nThe following definition will be used in the analysis later.\n\\begin{definition}\\label{def:delta_0}\nLet $\\delta_0 \\geq 0$ be the largest real number such that for all $\\delta \\in[0,\\delta_0]$, the set $\\Delta_{M,\\delta}\\triangleq\\{ \\mu \\in \\mathbb{R}^{\\vert S \\vert \\times \\vert A \\vert}: \\mu \\geq \\delta, \\mu^\\top 1 = 1, \\mu^\\top(P-B)=0 \\}$ is nonempty.\n\\end{definition}\n\n\\section{A Sublinear Regret Algorithm for Online MDP}\\label{sec:mdp_rftl}\n\nIn this section, we present an algorithm for the Online MDP problem.\n\n\\begin{algorithm}[!htb]\n\\caption{(\\textsc{MDP-RFTL})}\n\\label{alg:MDP-RFTL}\n\\begin{algorithmic}\n \\STATE {\\bfseries input:} parameter $\\delta>0, \\eta>0$, regularization term $R(\\mu) = \\sum_{s\\in S} \\sum_{a\\in A} \\mu(s,a) \\ln(\\mu(s,a))$\n \\STATE {\\bfseries initialization:} choose any $\\mu_1 \\in \\Delta_{M,\\delta} \\subset \\mathbb{R}^{|S|\\times|A|}$\n \\FOR{$t=1,...T$} \n \\STATE observe current state $s_t$\n \\IF{$\\sum_{a\\in A}\\mu_t(s_t,a) > 0$} \n \t\\STATE {choose action $a \\in A$ with probability $\\frac{\\mu_t(s_t,a)}{\\sum_{a}\\mu_t(s_t,a)}$.}\n \\ELSE \n \t\\STATE{choose action $a\\in A$ with probability $\\frac{1}{|A|}$} \n \\ENDIF \n \\STATE observe reward function $r_t \\in [-1,1]^{\\vert S\\vert \\vert A \\vert}$\n \\STATE update $\\mu_{t+1}\\leftarrow \\arg \\max_{\\mu \\in \\Delta_{M,\\delta}} \\sum_{i=1}^t \\left[ \\langle r_i , \\mu \\rangle - \\frac{1}{\\eta}R(\\mu) \\right]$\n \\ENDFOR\n\\end{algorithmic}\n\\end{algorithm}\n\nAt the beginning of each round $t\\in [T]$, the algorithm starts with an occupancy measure $\\mu_t$. If the MDP is in state $s_t$, we play action $a\\in A$ with probability $\\frac{\\mu_t(s_t,a)}{\\sum_{a}\\mu_t(s_t,a)}$. If the denominator is 0, the algorithm picks any action in $A$ with equal probability.\nAfter observing reward function $r_t$ and collecting reward $r_t(s_t,a_t)$, the algorithm changes the occupancy measure to $\\mu_{t+1}$. \n\nThe new occupancy measure is chosen according to the Regularized Follow the Leader (RFTL) algorithm\n \\cite{shalev2007online,abernethy2009competing}. RFTL chooses the best occupancy measure for the cumulative reward observed so far $\\sum_{i=1}^t r_i$, plus a regularization term $R(\\mu)$. The regularization term forces the algorithm not to drastically change the occupancy measure from round to round. In particular, we choose $R(\\mu)$ to be the entropy function.\n\nThe complete algorithm is shown in Algorithm~\\ref{alg:MDP-RFTL}. \nThe main result of this section is the following.\n\n\\begin{theorem}\\label{theorem:mdp_rftl}\nSuppose $\\{r_t\\}_{t=1}^T$ is an arbitrary sequence of rewards such that $\\vert r_t(s,a)\\vert \\leq 1$ for all $s\\in S$ and $a\\in A$. 
For $T\\geq \\ln^2({1}\/{\\delta_0})$, the MDP-RFTL algorithm with parameters $\\eta = \\sqrt{\\frac{T \\ln(\\vert S \\vert \\vert A \\vert)}{\\tau}}$, $\\delta = e^{-{\\sqrt{T}}\/{\\sqrt{\\tau}}}$ guarantees\n\\begin{align*}\n\\text{MDP-Regret}(T) \\leq O \\left( \\tau + 4 \\sqrt{\\tau T (\\ln\\vert S \\vert + \\ln \\vert A \\vert)} \\ln(T) \\right).\n\\end{align*}\n\\end{theorem}\n\nThe regret bound in Theorem~\\ref{theorem:mdp_rftl} is near optimal: a lower bound of $\\Omega(\\sqrt{T\\ln|A|})$ exists for the problem of learning with expert advice \\cite{freund1999adaptive,hazan2016introduction}, a special case of Online MDP where the state space is a singleton.\nWe note that the bound depends only \\emph{logarithmically} on the size of the state space and action space.\nThe state-of-the-art regret bound for Online MDPs is that of \\cite{even2009online}, which is $O(\\tau + \\tau^2 \\sqrt{\\ln(|A|)T})$. Compared to their result, our bound is better by a factor of $\\tau^{3\/2}$. However, our bound depends on $\\sqrt{\\ln|S|+\\ln|A|}$, whereas the bound in \\cite{even2009online} depends on $\\sqrt{\\ln |A|}$.\nBoth algorithms require $poly(|S||A|)$ computation time, but are based on different ideas:\n The algorithm of \\cite{even2009online} is based on expert algorithms and requires computing $Q$-functions at each time step, whereas our algorithm is based on RFTL. In the next section, we will show how to extend our algorithm to the case of a large state space.\n \n\n\n\n\\subsection{Proof Idea for Theorem \\ref{theorem:mdp_rftl}}\n\nThe key to analyzing the algorithm is to decompose the regret with respect to policy $\\pi \\in \\Pi$ as follows \n{\n\\medmuskip=1mu\n\\begin{align}\\label{regret_decomposition}\nR(T, \\pi) = \\left[\\mathbb{E}[\\sum_{t=1}^T r_t(s^\\pi_t , a^\\pi_t)] - \\sum_{t=1}^T \\rho^\\pi_t\\right] + \\left[\\sum_{t=1}^T \\rho^\\pi_t - \\sum_{t=1}^T \\rho_t \\right] + \\left[ \\sum_{t=1}^T \\rho_t - \\mathbb{E}[\\sum_{t=1}^T r_t(s_t,a_t)] \\right].\n\\end{align}\n}\nThis decomposition was first used by \\cite{even2009online}. We now give some intuition on why $R(T, \\pi)$ should be sublinear. By the mixing condition in Assumption~\\ref{assumption:mixing}, the state distribution $\\nu^\\pi_t$ at time $t$ under a policy $\\pi$ differs from the stationary distribution $\\nu^\\pi_{st}$ by at most $O(\\tau)$. This result can be used to bound the first term of \\eqref{regret_decomposition}.\n\nThe second term of \\eqref{regret_decomposition} can be related to the online convex optimization (OCO) problem through the linear programming formulation from Section~\\ref{mdp_via_lp}. Notice that $ \\rho^\\pi_t = \\sum_{s\\in S} \\sum_{a\\in A} \\mu^\\pi(s,a) r_t(s,a) = \\langle \\mu^\\pi, r_t\\rangle $, and $ \\rho_t = \\sum_{s\\in S} \\sum_{a\\in A} \\mu^{\\pi_t}(s,a) r_t(s,a) = \\langle \\mu^{\\pi_t}, r_t\\rangle$.\nTherefore, we have that \n\\begin{align}\n\\sum_{t=1}^T \\rho^\\pi_t - \\sum_{t=1}^T \\rho_t = \\sum_{t=1}^T\\langle \\mu^\\pi, r_t\\rangle - \\sum_{t=1}^T \\langle \\mu^{\\pi_t}, r_t \\rangle,\n\\end{align}\nwhich is exactly the regret quantity commonly studied in OCO. \nWe are thus seeking an algorithm that can bound $\\max_{\\mu \\in \\Delta_M} \\sum_{t=1}^T\\langle \\mu, r_t\\rangle - \\sum_{t=1}^T \\langle \\mu^{\\pi_t}, r_t \\rangle$. In order to achieve logarithmic dependence on $|S|$ and $|A|$ in Theorem~\\ref{theorem:mdp_rftl}, we apply the RFTL algorithm, regularized by the negative entropy function $R(\\mu)$. 
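\nFor a toy instance, the RFTL update of Algorithm~\\ref{alg:MDP-RFTL} can be carried out with a generic solver. The sketch below is only meant to make the update concrete: it writes the regularizer as a single term $-R(\\mu)/\\eta$ rather than inside the sum (which amounts to a rescaling of $\\eta$), and the solver, tolerance, and starting point are arbitrary choices.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import minimize\n\ndef rftl_update(P, cum_r, eta=10.0, delta=1e-4):\n    # maximize <cum_r, mu> - R(mu)/eta over the shrunk set Delta_{M,delta},\n    # where cum_r = r_1 + ... + r_t and R(mu) = sum mu log mu\n    S, A = cum_r.shape\n    n = S * A\n    F = np.zeros((S, n))                      # rows encoding mu^T (P - B) = 0\n    for s2 in range(S):\n        F[s2] = P[:, :, s2].reshape(-1)\n        F[s2, s2 * A:(s2 + 1) * A] -= 1.0\n    cons = [{'type': 'eq', 'fun': lambda mu: F @ mu},\n            {'type': 'eq', 'fun': lambda mu: mu.sum() - 1.0}]\n    def neg_obj(mu):\n        m = np.clip(mu, 1e-12, None)\n        return -(cum_r.reshape(-1) @ mu - np.sum(m * np.log(m)) / eta)\n    res = minimize(neg_obj, np.full(n, 1.0 / n), bounds=[(delta, 1.0)] * n,\n                   constraints=cons, method='SLSQP')\n    return res.x.reshape(S, A)                # next occupancy measure mu_{t+1}\n\\end{verbatim}\n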
A technical challenge we faced in the analysis is that $R(\\mu)$ is not Lipschitz continuous over $\\Delta_M$, the feasible set of LP \\eqref{eq:LP}. So we design the algorithm to play in a shrunk set $\\Delta_{M,\\delta}$ for some $\\delta >0$ (see Definition~\\ref{def:delta_0}), in which $R(\\mu)$ is indeed Lipschitz continuous.\n\nFor the last term in \\eqref{regret_decomposition}, note that it is similar to the first term, although more complicated: the policy $\\pi$ is fixed in the first term, but the policy $\\pi_t$ used by the algorithm is varying over time. To solve this challenge, the key idea is to show that the policies do not change too much from round to round, so that the third term grows sublinearly in $T$. To this end, we use the property of the RFTL algorithm with carefully chosen regularization parameter $\\eta>0$.\nThe complete proof of Theorem \\ref{theorem:mdp_rftl} can be found in Appendix \\ref{sec:regret_analysis}.\n\n\\section{Online MDPs with Large State Space}\\label{sec:large_mdps}\nIn the previous section, \nwe designed an algorithm for Online MDP with sublinear regret.\nHowever, the computational complexity of our algorithm\nis $O(poly(\\vert S \\vert \\vert A \\vert))$ per round. In practice, \nMDPs often have extremely large state space $S$ due to the curse of dimenionality \\cite{bertsekas1995dynamic}, so computing the exact solution becomes impractical.\nIn this section we propose an approximate algorithm that can handle large state spaces.\n\n\\subsection{Approximating Occupancy Measures and Regret Definition}\n\nWe consider an approximation scheme introduced in \\cite{abbasi2014linear} for standard MDPs. The idea is to use $d$ feature vectors (with $d \\ll \\vert S \\vert \\vert A \\vert$) to approximate occupancy measures $\\mu \\in \\mathbb{R}^{|S|\\times|A|}$.\nSpecifically, we approximate $\\mu \\approx \\Phi \\theta$ where $\\Phi$ is a given matrix of dimension $\\vert S \\vert \\vert A \\vert \\times d$, and $\\theta \\in \\Theta \\triangleq \\{ \\theta \\in \\mathbb{R}_+^d: \\Vert \\theta \\Vert_1 \\leq W\\}$ for some positive constant $W$.\nAs we will restrict the occupancy measures chosen by our algorithm to satisfy $\\mu = \\Phi \\theta$, the definition of MDP-regret \\eqref{regret_def} is too strong as it compares against all stationary policies.\nInstead, we restrict the benchmark to be the set of policies $\\Pi^\\Phi$ that can be represented by matrix $\\Phi$, where\n\\begin{align*}\n\\Pi^\\Phi \\triangleq \\{ \\pi \\in \\Pi : \\text{ there exists $\\mu^\\pi \\in \\Delta_M$ such that } \\mu^\\pi = \\Phi \\theta \\text{ for some } \\theta \\in \\Theta \\}.\n\\end{align*}\nOur goal will now be to achieve sublinear $\\Phi$-MDP-regret defined as\n\\begin{align}\\label{eq:regret_def_phi}\n\\text{$\\Phi$-MDP-Regret}(T) \\triangleq \\max_{\\pi \\in \\Pi^\\Phi} \\mathbb{E}[\\sum_{t=1}^T r_t(s^\\pi_t , a^\\pi_t)] - \\mathbb{E}[\\sum_{t=1}^T r_t(s_t,a_t)],\n\\end{align} \nwhere the expectation is taken with respect to random state transitions of the MDP and randomization used in the algorithm. Additionally, we want to make the computational complexity \\emph{independent} of $\\vert S \\vert$ and $\\vert A \\vert$.\n\n\n{\\bf Choice of Matrix $\\Phi$ and Computation Efficiency.}\nThe columns of matrix $\\Phi \\in \\mathbb{R}^{\\vert S \\vert \\vert A \\vert \\times d}$ represent probability distributions over state-action pairs. 
\nThe choice of $\\Phi$ is problem dependent, and a detailed discussion is beyond the scope of this paper.\n\\cite{abbasi2014linear} shows that for many applications such as the game of Tetris and queuing networks, $\\Phi$ can be naturally chosen as a sparse matrix, which allows\nconstant time access to entries of $\\Phi$ and efficient dot product operations. We will assume such constant time access throughout our analysis.\nWe refer readers to \\cite{abbasi2014linear} for further details. \n\n\n\\subsection{The Approximate Algorithm}\n\nThe algorithm we propose is built on \\textsc{MDP-RFTL}, but is significantly modified in several aspects. In this section, we start with key ideas on how and why we need to modify the previous algorithm, and then formally present the new algorithm. \n\nTo aid our analysis, we make the following definition.\n\\begin{definition}\\label{assumption:large-MDP}\nLet $\\tilde{\\delta}_0 \\geq 0$ be the largest real number such that for all $\\delta \\in[0,\\tilde{\\delta}_0]$ the set $\\Delta^\\Phi_{M,\\delta}\\triangleq\\{ \\mu \\in \\mathbb{R}^{\\vert S \\vert \\vert A \\vert}: \\text{ there exists $\\theta \\in \\Theta$ such that } \\mu = \\Phi \\theta, \\mu \\geq \\delta, \\mu^\\top 1 = 1, \\mu^\\top(P-B)=0 \\}$ is nonempty. We also write $\\Delta_M^\\Phi \\triangleq \\Delta^\\Phi_{M,0}$.\n\\end{definition}\n\nAs a first attempt, one could replace the shrunk set of occupancy measures $\\Delta_{M,\\delta}$ in Algorithm~\\ref{alg:MDP-RFTL} with $\\Delta^\\Phi_{M,\\delta}$ defined above. We then use occupancy measures $\\mu^{\\Phi \\theta^*_{t+1}} \\triangleq \\Phi \\theta^*_{t+1}$ given by the RFTL algorithm, i.e.,\n$ \\theta^*_{t+1} \\leftarrow \\arg \\max_{\\theta \\in \\Delta_{M,\\delta}^\\Phi} \\sum_{i=1}^t \\left[ \\langle r_i, \\mu \\rangle - ({1}\/{\\eta}) R(\\mu) \\right]$. \nThe same proof of Theorem~\\ref{theorem:mdp_rftl} would apply and guarantee a sublinear $\\Phi$-MDP-Regret.\nUnfortunately, replacing $\\Delta_{M,\\delta}$ with $\\Delta^\\Phi_{M,\\delta}$ does not reduce the time complexity of computing the iterates $\\{\\mu^{\\Phi \\theta^*_{t}}\\}_{t=1}^T$, which is still $poly(|S||A|)$.\n\nTo tackle this challenge, we will not apply the RFTL algorithm exactly, but will instead obtain an approximate solution in $poly(d)$ time. 
We relax the constraints $\\mu \\geq \\delta$ and $\\mu^\\top(P-B) = 0$ that define the set $\\Delta_{M,\\delta}^\\Phi$, and add the following penalty term to the objective function: \n\\begin{equation}\\label{eq:penalty}\n V(\\theta) \\triangleq -H_t \\Vert (\\Phi \\theta )^\\top (P-B) \\Vert_1 - H_t \\Vert \\min\\{\\delta, \\Phi \\theta \\} \\Vert_1.\n\\end{equation}\nHere, $\\{H_t\\}_{t=1}^T$ is a sequence of tuning parameters that will be specified in Theorem~\\ref{thm:large_mdp_regret}.\nLet $ \\Theta^\\Phi \\triangleq \\{\\theta \\in \\Theta : \\mathbf{1}^\\top (\\Phi \\theta) = 1 \\}$.\nThus, the original RFTL step in Algorithm~\\ref{alg:MDP-RFTL} now becomes\n\\begin{equation}\\label{eq:c_t_eta}\n\\max_{\\theta \\in \\Theta^\\Phi} c^{t,\\eta}(\\theta), \\quad \\text{where }\nc^{t,\\eta}(\\theta) \\triangleq \\sum_{i=1}^t \\left[\\langle r_i , \\Phi \\theta \\rangle - \\frac{1}{\\eta} R^\\delta (\\Phi \\theta)\\right] + V(\\theta).\n\\end{equation}\nIn the above function, we use a modified entropy function $R^\\delta(\\cdot)$ as the regularization term, because the standard entropy function has an infinite gradient at the origin.\nMore specifically, let \n$R_{(s,a)}(\\mu)\\triangleq \\mu(s,a) \\ln(\\mu(s,a))$ be the entropy function of coordinate $(s,a)$. We define $ R^\\delta (\\mu) = \\sum_{(s,a)} R^\\delta_{(s,a)}(\\mu (s,a))$, where\n\\begin{align}\\label{eq:def_R_delta}\nR^\\delta_{(s,a)}(\\mu) \\triangleq\n\\begin{cases}\nR_{(s,a)}(\\mu) \\quad &\\text{if } \\mu(s,a) \\geq \\delta \\\\\nR_{(s,a)}(\\delta) + \\frac{d}{d \\mu(s,a)} R_{(s,a)}(\\delta) (\\mu(s,a)-\\delta) \\quad &\\text{otherwise}.\n\\end{cases}\n\\end{align}\n\n\n\nSince computing an exact gradient of the function $c^{t,\\eta}(\\cdot)$ would take $O(\\vert S \\vert \\vert A \\vert)$ time, we solve problem \\eqref{eq:c_t_eta} by stochastic gradient ascent. The following lemma shows how to efficiently generate stochastic subgradients for the function $c^{t,\\eta}$ via sampling.\n\\begin{lemma}\\label{stochastic_gradient_of_c}\nLet $q_1$ be any probability distribution over state-action pairs, and $q_2$ be any probability distribution over all states. Sample a pair $(s',a')\\sim q_1$ and $s'' \\sim q_2$. The quantity \n\\begin{align*}\ng_{s',a',s''}(\\theta) &= \\Phi^\\top r_{t} + \\frac{H_t}{q_1(s',a')} \\Phi_{(s',a'),:} \\mathbb{I}\\{\\Phi_{(s',a'),:} \\theta \\leq \\delta \\}\\\\\n&\\quad - \\frac{H_t}{q_2(s'')} [(P-B)^\\top \\Phi]_{s'',:} \\mathrm{sign}([(P-B)^\\top \\Phi ]_{s'',:} \\theta) - \\frac{t}{\\eta q_1(s',a')} \\nabla_\\theta R^\\delta_{(s',a')}(\\Phi \\theta)\n\\end{align*}\nsatisfies $\\mathbb{E}_{(s',a')\\sim q_1, s'' \\sim q_2}[g_{s',a',s''}(\\theta) \\vert \\theta] = \\nabla_\\theta c^{t,\\eta}(\\theta)$ for any $\\theta \\in \\Theta$. Moreover, we have $ \\Vert g(\\theta) \\Vert_2 \\leq t \\sqrt{d} + H_t (C_1 + C_2)+ \\frac{t}{\\eta}(1 + \\ln (Wd) + \\vert \\ln(\\delta) \\vert) C_1,\n$ w.p.1, where\n\\begin{equation}\\label{def_of_Cs}\nC_1 = \\max_{(s,a)\\in S \\times A} \\frac{\\Vert \\Phi_{(s,a),:}\\Vert_2}{q_1(s,a)}, \\quad C_2 = \\max_{s\\in S} \\frac{\\Vert (P-B)^\\top_{:,s} \\Phi \\Vert_2}{q_2(s)}.\n\\end{equation}\n\\end{lemma}\n\n\n\n\n\nPutting everything together, we present the complete approximate algorithm for large state online MDPs in Algorithm~\\ref{alg:SGD-MDP-RFTL}. 
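\nThe sampling idea behind Lemma~\\ref{stochastic_gradient_of_c} is simply importance weighting: a sum over all state-action pairs (or over all states) is replaced by a single sampled term divided by its sampling probability. The snippet below is a deliberately simplified sketch of this idea with uniform sampling distributions and a smoothed reading of the penalty \\eqref{eq:penalty} that penalizes violations of $\\mu\\geq\\delta$ directly; it is not the exact estimator of the lemma.\n\\begin{verbatim}\nimport numpy as np\n\ndef sampled_penalty_gradient(Phi, theta, PB, delta, H, rng):\n    # Unbiased one-sample gradient estimate for the penalties\n    #   -H * sum_{(s,a)} max(delta - (Phi theta)(s,a), 0)  and  -H * ||(Phi theta)^T (P-B)||_1\n    n, d = Phi.shape            # n = |S||A| rows, d features\n    m = PB.shape[1]             # m = |S| columns of (P - B)\n    mu = Phi @ theta\n    i = rng.integers(n)         # q1 uniform over state-action pairs, weight n\n    g1 = H * n * Phi[i] if mu[i] <= delta else np.zeros(d)\n    j = rng.integers(m)         # q2 uniform over states, weight m\n    v = PB[:, j] @ mu\n    g2 = -H * m * np.sign(v) * (Phi.T @ PB[:, j])\n    return g1 + g2              # expectation equals the full penalty (sub)gradient\n\\end{verbatim}\n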
\nThe algorithm uses Projected Stochastic Gradient Ascent (Algorithm~\\ref{alg:P-SGA}) as a subroutine, which uses the sampling method in Lemma~\\ref{stochastic_gradient_of_c} to generate stochastic sub-gradients.\n\n\n\\begin{algorithm}[tbh]\n\\caption{(\\textsc{Large-MDP-RFTL})}\n\\label{alg:SGD-MDP-RFTL}\n\\begin{algorithmic}\n \\STATE {\\bfseries input:} matrix $\\Phi$, parameters: $\\eta, \\delta >0$, convex function $R^\\delta(\\mu)$, SGA step-size schedule $\\{w_t\\}_{t=0}^T$, penalty term parameters $\\{H_t\\}_{t=1}^T$\n \n \\STATE {\\bfseries initialize:} $\\tilde{\\theta}_1\\leftarrow $ \\textsc{PSGA}($- R^{\\delta}(\\Phi \\theta) + V(\\theta), \\Theta^\\Phi, w_0, K_0 $) \n \\FOR{$t=1,...,T$}\n \\STATE observe current state $s_t$; play action $a$ with distribution $\\frac{[\\Phi \\tilde{\\theta}_t]_+ (s_t,a)}{\\sum_{a\\in A} [\\Phi \\tilde{\\theta}_t]_+ (s_t,a)}$ \n \\STATE observe $r_t \\in [-1,1]^{\\vert S\\vert \\vert A \\vert}$\n \\STATE $\\tilde{\\theta}_{t+1}\\leftarrow $ \\textsc{PSGA}($\\sum_{i=1}^t [ \\langle r_i, \\Phi \\theta \\rangle -\\frac{1}{\\eta} R^\\delta(\\Phi \\theta) ] + V(\\theta), \\Theta^\\Phi, w_t, K_t $)\n \\ENDFOR\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\\begin{algorithm}[tbh]\n\\caption{Projected Stochastic Gradient Ascent: \\textsc{PSGA}$(f, X, w, K)$}\n\\label{alg:P-SGA}\n\\begin{algorithmic}\n \\STATE {\\bfseries input:} concave objective function $f$, feasible set $X$, stepsize $w$, $x_1 \\in X$\n \\FOR{$k=1,...K$}\n \\STATE compute a stochastic subgradient $g_k$ such that $\\mathbb{E}[g_k] = \\nabla f(x_k)$ using Lemma \\ref{stochastic_gradient_of_c}\n \\STATE set $x_{k+1} \\leftarrow P_X(x_k + w g(x_k))$ \n \\ENDFOR\n \\STATE {\\bfseries output:} $\\frac{1}{K} \\sum_{k=1}^K x_k$\n\\end{algorithmic}\n\\end{algorithm}\n\n\\subsection{Analysis of the Approximate Algorithm}\n\n\n\n\nWe establish a regret bound for the \\textsc{Large-MDP-RFTL} algorithm as follows.\n\n\\begin{theorem}\\label{thm:large_mdp_regret}\nSuppose $\\{r_t\\}_{t=1}^T$ is an arbitrary sequence of rewards such that $\\vert r_t(s,a)\\vert \\leq 1$ for all $s\\in S$ and $a\\in A$. \nFor $T \\geq \\ln^2(\\frac{1}{\\delta_0})$, \\textsc{Large-MDP-RFTL} with parameters $\\eta = \\sqrt{\\frac{T}{\\tau}}, \\delta = e^{-\\sqrt{T}}$, $K(t) = \\left[ W t^2 \\sqrt{d} \\tau^4 (C_1 + C_2) T\\ln(W T) \\right]^2$, $H_t = t \\tau^2 T^{3\/4}$, $w_t=\\frac{W}{\\sqrt{K(t)} (t \\sqrt{d} + H_t (C_1 + C_2) + \\frac{t}{\\eta} C_1)} $ guarantees that \n\\begin{align*}\n\\text{$\\Phi$-MDP-Regret}(T) \\leq O( c_{S,A} \\ln(\\vert S \\vert \\vert A\\vert) \\sqrt{\\tau T} \\ln(T) ).\n\\end{align*}\nHere $c_{S,A}$ is a problem dependent constant. The constants $C_1, C_2$ are defined in Lemma~\\ref{stochastic_gradient_of_c}.\n\\end{theorem}\n\nA salient feature of the \\textsc{Large-MDP-RFTL} algorithm is that its computational complexity in each period is independent of the size of state space $|S|$ or the size of action space $|A|$, and thus is amenable to large scale MDPs. In particular, in Theorem~\\ref{thm:large_mdp_regret}, the number of SGA iterations, $K(t)$, is $O(d)$ and independent of $|S|$ and $|A|$.\n\nCompared to Theorem~\\ref{theorem:mdp_rftl}, we achieve a regret with similar dependence on the number of periods $T$ and the mixing time $\\tau$. The regret bound also depends on $\\ln(|S|)$ and $\\ln(|A|)$, with an additional constant term $c_{S,A}$. 
The constant comes from a projection problem (see details in Appendix~\\ref{sec:regret_analysis_large_mdp}) and may grow with $|S|$ and $|A|$ in general. But for some classes of MDP problems, $c_{S,A}$ is bounded by an absolute constant: an example is the Markovian Multi-armed Bandit problem \\cite{whittle1980multi}.\n\n{\\bf Proof Idea for Theorem~\\ref{thm:large_mdp_regret}.} Consider the \\textsc{MDP-RFTL} iterates $\\{\\theta^*_t\\}_{t=1}^T$, and the occupancy measures $\\{\\mu^{\\Phi \\theta^*_t}\\}_{t=1}^T$ induced by following the policies associated with $\\{\\Phi \\theta^*_t\\}_{t=1}^T$. Since $\\Phi \\theta^*_t \\in \\Delta_{M,\\delta}^\\Phi$ it holds that $\\mu^{\\Phi \\theta^*_t} = \\Phi \\theta^*_t$ for all $t$. Thus, following the proof of Theorem \\ref{theorem:mdp_rftl}, we can obtain the same $\\Phi$-MDP-Regret bound as in Theorem \\ref{theorem:mdp_rftl} if we follow the policies associated with $\\{\\Phi \\theta^*_t\\}_{t=1}^T$. However, computing $\\theta^*_t$ takes $O(poly(\\vert S \\vert \\vert A \\vert))$ time. \n\nThe crux of the proof of Theorem \\ref{thm:large_mdp_regret} is to show that the iterates $\\{\\Phi \\tilde{\\theta}_{t}\\}_{t=1}^T$ in Algorithm~\\ref{alg:SGD-MDP-RFTL} induce occupancy measures $\\{\\mu^{\\Phi \\tilde{\\theta}_{t}}\\}_{t=1}^T$ that are close to $\\{\\mu^{\\Phi \\theta^*_t}\\}_{t=1}^T$. Since the algorithm relaxes the constraints defining $\\Delta_{M,\\delta}^\\Phi$, in general we have $\\Phi \\tilde{\\theta}_{t} \\notin \\Delta_{M,\\delta}^\\Phi$ and thus $\\mu^{\\Phi \\tilde{\\theta}_{t}} \\neq \\Phi \\tilde{\\theta}_{t}$. So we need to show that the distance between $\\mu^{\\Phi \\theta^*_{t+1}}$ and $\\mu^{\\Phi \\tilde{\\theta}_{t+1}}$ is small. Using the triangle inequality, we have \n\\begin{align*}\n\\Vert \\mu^{\\Phi \\theta_t^*} - \\mu^{\\Phi \\tilde{\\theta}_t} \\Vert_1 &\\leq \\Vert \\mu^{\\Phi \\theta_t^*} - P_{\\Delta_{M,\\delta}^\\Phi}(\\Phi \\tilde{\\theta}_t) \\Vert_1 + \\Vert P_{\\Delta_{M,\\delta}^\\Phi}(\\Phi \\tilde{\\theta}_t) - \\Phi \\tilde{\\theta}_t \\Vert_1 + \\Vert \\Phi \\tilde{\\theta}_t - \\mu^{\\Phi \\tilde{\\theta}_t} \\Vert_1, \n\\end{align*}\nwhere $P_{\\Delta_{M,\\delta}^\\Phi}(\\cdot)$ denotes the Euclidean projection onto $\\Delta_{M,\\delta}^\\Phi$. We then proceed to bound each term individually. We defer the details to Appendix \\ref{sec:regret_analysis_large_mdp} as bounding each term requires lengthy proofs. \n\n\\section{Conclusion}\n\nWe consider Markov Decision Processes (MDPs) where the transition probabilities are known but\nthe rewards are unknown and may change in an adversarial manner. We provide an online algorithm, which applies Regularized Follow the Leader (RFTL) to the linear programming formulation of the average reward MDP. The algorithm achieves a\nregret bound of $O( \\sqrt{\\tau (\\ln|S|+\\ln|A|)T}\\ln(T))$, where $S$ is the state space, $A$ is the action space, $\\tau$ is the mixing time of the MDP, and $T$ is the number of periods. The algorithm's computational complexity is polynomial in $|S|$ and $|A|$ per period. \n\nWe then consider a setting often encountered in practice, where the state space of the MDP is too large to allow for exact solutions. We approximate the state-action occupancy measures with a linear architecture of dimension $d\\ll|S||A|$. We then propose an approximate algorithm which relaxes the constraints in the RFTL algorithm, and solves the relaxed problem using a stochastic gradient method. 
\nA salient feature of our algorithm is that its computation time complexity is independent of the size of state space $|S|$ and the size of action space $|A|$. \nWe prove a regret bound of $O( c_{S,A} \\ln(\\vert S \\vert \\vert A\\vert) \\sqrt{\\tau T} \\ln(T))$ compared to the best static policy approximated by the linear architecture, where $c_{S,A}$ is a problem dependent constant.\nTo the best of our knowledge, this is the first $\\tilde{O}(\\sqrt{T})$ regret bound for large scale MDPs with changing rewards.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\nIn the Hilbert space formalism of quantum mechanics (QM), a quantum probability function is a function $\\pee$ from the lattice $L(\\h)$ of projection operators acting on the Hilbert space $\\h$, to the unit interval $[0,1]$, which satisfies the rules\n\\begin{itemize}\n\\item $\\pee(1)=1$,\n\\item $\\pee(P_1\\vee P_2\\vee\\ldots)=\\pee(P_1)+\\pee(P_2)+\\ldots$ \\quad whenever $P_iP_j=0$ for all $i\\neq j$.\n\\end{itemize}\nThis characterization of quantum probabilities is reminiscent (at least in form) of the classical structure, where a probability function is specified by a pair $(\\mathcal{F},\\pee)$ where $\\mathcal{F}$ denotes a $\\sigma$-algebra of subsets of some space $\\Omega$ and $\\pee:\\mathcal{F}\\to[0,1]$ is a function which satisfies\n\\begin{itemize}\n\\item $\\pee(\\Omega)=1$,\n\\item $\\pee(\\Delta_1\\cup\\Delta_2\\cup\\ldots)=\\pee(\\Delta_1)+\\pee(\\Delta_2)+\\ldots$ \\quad whenever $\\Delta_i\\cap\\Delta_j=\\varnothing$ for all $i\\neq j$.\n\\end{itemize}\n\nBecause of the resemblance in structure between the two definitions, it is tempting to think that one must be able to find resemblances between admissible interpretations of the probabilities.\nHowever, as indicated in \\cite{Wilce12}, such a program has to face the problem of explaining the non-Boolean nature of $L(\\h)$.\nThis issue may best be understood by contrasting the quantum structure with the classical structure of $\\sigma$-algebras.\nBy far the majority of philosophical approaches to classical probability agree on one thing.\nNamely, that the probability $\\pee(\\Delta)$ can be understood as the probability that some proposition $S_\\Delta$ codified by $\\Delta$ is true. \nFor example, if $\\Omega$ is understood as the set of all possible worlds, then $\\Delta$ corresponds to the subset of all possible worlds in which $S_\\Delta$ is true. 
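\nA standard example of a quantum probability function in this sense is $P\\mapsto\\mathrm{Tr}(\\rho P)$ for a density operator $\\rho$. The following short numerical check (added purely as an illustration; the dimension and the random state and projections are arbitrary) verifies the two defining rules on a family of pairwise orthogonal projections:\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\ndim = 4\n\nX = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))\nrho = X @ X.conj().T\nrho /= np.trace(rho).real                 # density operator: positive, unit trace\n\nB = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))\nB = B + B.conj().T                        # random Hermitian 'observable'\n_, V = np.linalg.eigh(B)\nprojs = [np.outer(V[:, k], V[:, k].conj()) for k in range(dim)]   # orthogonal projections\n\nprob = lambda Q: np.trace(rho @ Q).real\nprint(prob(np.eye(dim)))                        # first rule: probability of 1 is 1\nprint(sum(prob(Q) for Q in projs))              # additivity: these projections sum to 1\nprint(prob(projs[0] + projs[1]) - prob(projs[0]) - prob(projs[1]))   # ~ 0\n\\end{verbatim}\n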
\nThe question for the quantum case is now straightforward: if the elements of $L(\\h)$ correspond to propositions, what do these propositions express?\n\nThe most prominent early contribution to this question is undoubtedly the work of Birkhoff and von Neumann \\cite{BirkhoffNeumann36}.\nThe backbone of this paper already appeared in \\cite{Neumann55} where the idea of taking projections as propositions was introduced.\nIt is roughly motivated as follows.\nIf $A$ is an observable, and $\\Delta$ a (measurable) subset of the reals, then the proposition $A\\in\\Delta$ is taken to be true if and only if the probability that a measurement of $A$ reveals some value in $\\Delta$ equals one.\nThis is then the case if and only if the state of the system lies in $P_A^{\\Delta}\\mathcal{H}$, with $\\mathcal{H}$ the Hilbert space describing the system, and $P_A$ the projection valued measure generated by $A$.\nOr, in terms of density operators, if and only if $\\mathrm{Tr}(\\rho P_A^\\Delta)=1$.\n\nOf course this raises the question of what is meant by ``a measurement of $A$ reveals some value in $\\Delta$''.\nThe most tempting answer is the one suggested by the notation $A\\in\\Delta$, i.e., that a measurement of $A$ reveals some value in $\\Delta$ if and only if the observable $A$ has some value in $\\Delta$, independent of the measurement.\nA straightforward adoption of this realist reading is bound to run into difficulties posed by results such as the Kochen-Specker theorem \\cite{KS67} or the Bell inequalities \\cite{Bell64,Clauser69}.\nA possible escape is to deny that $A\\in\\Delta$ expresses a proposition (or has a truth value) for every pair $(A,\\Delta)$ at every instant.\nThis can be done by, for example, adopting the strong property postulate ($A$ has a value if and only if the system is in an eigen state of $A$).\nAnother possibility is to deny the bijection between observables and self-adjoint operators such as in Bohmian mechanics, where position observables play a privileged role.\\footnote{In this approach, observables are associated with functions on the configuration space of particle positions. Every proposition $A\\in\\Delta$ then reduces to a proposition on particle positions. This reduction is contextual in a sense; the function associated with $A$ is not the same for every experimental setup used to measure $A$.}\nOr one may suggest that these propositions do not obey the laws of classical logic.\nThis last option was the one suggested by Putnam in \\cite{Putnam69}.\nHe argued that, apart from coinciding with a set of propositions about values possessed by observables, the lattice $L(\\h)$ also describes the correct logic for these propositions.\nHowever, this realist interpretation of the quantum logic $L(\\h)$ is known to lead to problems \\cite{Dummett76,Stairs83} and the consensus seems to be that there is no hope for this direction \\cite{Maudlin05}. \n\nIn contrast to these difficulties encountered in realist approaches to understanding $L(\\h)$, it is often thought that an operationalist interpretation is unproblematic, and perhaps even straightforward:\n\\begin{quote}\nIf we put aside scruples about `measurement' as a primitive term in physical theory, and accept a principled distinction between `testable' and non-testable properties, then the fact that $L(\\h)$ is not Boolean is unremarkable, and carries no implication about logic per se. 
\nQuantum mechanics is, on this view, a theory about the possible statistical distributions of outcomes of certain measurements, and its non-classical `logic' simply reflects the fact that not all observable phenomena can be observed simultaneously. -- Wilce \\cite[\\S 2]{Wilce12}\n\\end{quote}\nThe idea is that the task of interpreting the structure of $L(\\h)$ is tied up with explicating the notion of measurement.\nThen, since the operationalist takes the notion of measurement as primitive, the structure of $L(\\h)$ may also be taken as primitive.\nThis line of reasoning seems unsatisfactory to me and I think that the questions of what the propositions in $L(\\h)$ express, and what the logic is that governs them, are also meaningful and require study from an operationalist point of view.\nAnd as a guide to such a logic one may take into account Bohr's demand that \n\\begin{quote}\nall well-defined experimental evidence, even if it cannot be analyzed in terms of classical physics, must be expressed in ordinary language making use of common logic. -- Bohr \\cite{Bohr48}\n\\end{quote}\nIn this paper I continue the work done in \\cite{Hermens12} to meet this demand.\nSpecifically, in section \\ref{SQMsection} I first introduce a lattice of experimental propositions $S_{QM}$ which is then extended to an intuitionistic logic $L_{QM}$ in section \\ref{LQMsection}.\nThese results comprise a summary of some of the results in \\cite{Hermens12}.\nThen, also in section \\ref{LQMsection}, this logic is extended to a classical logic $CL_{QM}$ for experimental propositions.\nIn section \\ref{QProbsec} R\\'enyi's theory of conditional probability spaces \\cite{Renyi55} is applied to $CL_{QM}$ and it is shown that the quantum probabilities as given by the Born rule can be reproduced.\nThis is followed by a short discussion on the issue that non-quantum probabilities are also allowed in this framework.\nFinally, in section \\ref{conclusionsection}, the results of this paper are evaluated and some ideas on what further role they may play in foundational debates are given.\n\n \n \n \n\n\\section{The lattice of experimental propositions}\\label{SQMsection}\nThe idea to focus on the experimental side of QM is reminiscent of the approach of Birkhoff and von Neumann in \\cite{BirkhoffNeumann36}.\nThey start by introducing `experimental propositions' which are identified with subsets of `observation spaces'.\nAn observation space is the Cartesian product of the spectra of a set of pairwise commuting observables.\nHowever, once such a subset is associated with a projection operator in the familiar way, the experimental context (i.e., the set of pairwise commuting observables) is no longer being considered.\nFor example, one may have that $P_{A_1}^{\\Delta_1}=P_{A_2}^{\\Delta_2}$ even though $A_1$ and $A_2$ may be incompatible.\nThis indicates that whatever is encoded by propositions in $L(\\h)$, they do not refer to actual experiments.\n\nTo avoid a cumbersome direct discussion of $L(\\h)$, it is then appropriate to instead develop a logic of propositions that \\emph{do} encode these experimental contexts.\nAs an elementary example of what Bohr may have had in mind when speaking of ``well-defined experimental evidence'' I take propositions of the form\n\\begin{equation}\\label{reading}\n M_A(\\Delta) =\\text{``$A$ is measured and the result lies in $\\Delta$.''}\n\\end{equation}\nOf course, no logical structure can be derived for these propositions without some structural assumptions.\nI will adopt the following 
two assumptions which I believe to be quite innocent\\footnote{These assumptions are the ones also adopted in \\cite{Hermens12} although there the second was implicit and the first was formulated to also apply to other theories.}:\n\\begin{itemize}\n\\item[\\textbf{LMR}](Law-Measurement Relation) If $A_1$ and $A_2$ are two observables that can be measured together, and if $f$ is a function such that whenever $A_1$ and $A_2$ are measured together the outcomes $a_1$ and $a_2$ satisfy $a_1=f(a_2)$ (i.e., $f$ represents a law), then a measurement of $A_2$ alone also counts as a measurement of $A_1$ with outcome $f(a_2)$.\\footnote{This assumption is reminiscent of the FUNC rule adopted in the Kochen-Specker theorem (c.f. \\cite[p.121]{Redhead87}). However, FUNC is a much stronger assumption stating that whenever measurement outcomes for a pair of observables satisfy a functional relation, the values possessed by these observables must satisfy the same relationship. That is, FUNC is the conjunction of LMR together with the idea that measurements reveal pre-existing values.}\n\\item[\\textbf{IEA}](Idealized Experimenter Assumption) Every experiment has an outcome, i.e., for every observable $A$ $M_A(\\varnothing)$ is understood as a contradiction.\n\\end{itemize}\nThe following lemma describes the structure arising from these assumptions for QM. \nA more elaborate exposition may be found in \\cite{Hermens12}.\n\n\\begin{lemma}\nUnder the assumptions LMR and IEA the set of (equivalence classes of) experimental propositions for quantum mechanics lead to the lattice \n\\begin{equation}\n S_{QM}:=\\{(\\A,P)\\:;\\:\\A\\in\\mathfrak{A},P\\in L(\\A)\\backslash\\{0\\}\\}\\cup\\{\\bot\\},\n\\end{equation}\nwhere $\\mathfrak{A}$ denotes the set of all Abelian operator algebras on $\\h$ and $L(\\A)$ the Boolean lattice $\\A\\cap L(\\h)$.\nThe partial order is given by\n\\begin{equation}\n (\\mathcal{A}_1,P_1)\\leq(\\mathcal{A}_2,P_2)~\\text{iff}~P_1=0~\\text{or}~\\mathcal{A}_1\\supset\\mathcal{A}_2,P_1\\leq P_2.\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nA proposition $M_{A_1}(\\Delta_1)$ implies $M_{A_2}(\\Delta_2)$ if and only if a measurement of $A_1$ implies a measurement of $A_2$ and every outcome in $\\Delta_1$ implies an outcome in $\\Delta_2$ for the measurement of $A_2$ (or if $M_{A_1}{\\Delta_1}$ expresses a contradiction).\nBy LMR, this is the case if and only if the Abelian algebra $\\A_2$ generated by $A_2$ is a subalgebra of the Abelian algebra $\\A_1$ generated by $A_1$ and $P_{A_1}^{\\Delta_1}\\leq P_{A_2}^{\\Delta_2}$ (or $P_{A_1}^{\\Delta_1}=0$).\nThis also establishes that $M_{A_1}(\\Delta_1)$ and $M_{A_2}(\\Delta_2)$ are equivalent if and only if $(\\mathcal{A}_1,P_{A_1}^{\\Delta_1})=(\\mathcal{A}_2,P_{A_2}^{\\Delta_2})$ (or both express a contradiction). 
\n\\end{proof}\n\nThe meet on $S_{QM}$ is given by\n\\begin{equation}\n (\\mathcal{A}_1,P_1)\\wedge(\\mathcal{A}_2,P_2)=\\begin{cases}(\\mathpzc{Alg}(\\mathcal{A}_1,\\mathcal{A}_2),P_1\\wedge P_2),&\\text{if}~[\\mathcal{A}_1,\\mathcal{A}_2]=0,P_1\\wedge P_2\\neq0,\\\\ \\bot,&\\text{otherwise,}\\end{cases}\n\\end{equation}\nwhere $P_1\\wedge P_2$ is the meet on $L(\\h)$.\nIt is consistent with the interpretation given to $M_A(\\Delta)$ in the sense that this meet `distributes' over the `and' in ``$A$ is measured and the result lies in $\\Delta$''.\nIn particular, it assigns a contradiction to a simultaneous measurement of two observables whenever their corresponding operators do not commute.\n\nIt is harder to interpret the join on this lattice as a disjunction.\nWhen restricting to joins of two elements of $S_{QM}$ with the same algebra, the results seem intuitively correct.\nIn this case one has \n\\begin{equation}\\label{disjcom}\n (\\mathcal{A},P_1)\\vee(\\mathcal{A},P_2)=(\\mathcal{A},P_1\\vee P_2).\n\\end{equation}\nAnd indeed, ``$A$ is measured and the measurement reveals some value in $\\Delta_1\\cup\\Delta_2$'' sounds like a good paraphrase of ``$A$ is measured and the measurement reveals some value in $\\Delta_1$ or $A$ is measured and the measurement reveals some value in $\\Delta_2$''.\nBut in more general cases problems arise.\nFor example, one has $(\\mathcal{A}_1,1)\\vee(\\mathcal{A}_2,1)=(\\mathcal{A}_1\\cap\\mathcal{A}_2,1)$.\nThen, if the two algebras are completely incompatible (i.e. $\\mathcal{A}_1\\cap\\mathcal{A}_2=\\mathbb{C}1$), this equation states that ``$A_1$ is measured or $A_2$ is measured'' is a tautology even though one can consider situations in which neither is measured.\n\nThe example can further be used to show that $S_{QM}$ is not distributive.\nLet $A_3$ be a third observable that is totally incompatible with both $A_1$ and $A_2$.\nOne then has\n\\begin{equation}\n (\\mathcal{A}_3,1)\\wedge((\\mathcal{A}_1,1)\\vee(\\mathcal{A}_2,1))\n =(\\mathcal{A}_3,1)\\neq\\bot\n =((\\mathcal{A}_3,1)\\wedge(\\mathcal{A}_1,1))\\vee((\\mathcal{A}_3,1)\\wedge(\\mathcal{A}_2,1)).\n\\end{equation}\nHence, when it comes to providing a logic that fares well with natural language, $S_{QM}$ doesn't perform much better than the original $L(\\h)$.\n\n \n \n \n\n\\section{Deriving the logics \\texorpdfstring{$L_{QM}$}{LQM} and \\texorpdfstring{$CL_{QM}$}{CLQM}}\\label{LQMsection}\nTo solve the problem of non-distributivity of $S_{QM}$ the lattice has to be extended by formally introducing disjunctions which in general are stronger than the join on $S_{QM}$.\nThe outcome of this process is described by the following theorem.\\footnote{Again, more details may be found in \\cite{Hermens12}.}\n\n\\begin{theorem}\nFormally introducing disjunctions to $S_{QM}$ while respecting \\eqref{disjcom} results in the complete distributive lattice \n\\begin{equation}\n L_{QM}=\\left\\{S:\\mathfrak{A}\\to L(\\mathcal{H})\\:;\\:\\substack{S(\\mathcal{A})\\in L(\\A)\\\\ S(\\A_1)\\leq S(\\A_2)~\\text{whenever}~\\A_1\\subset\\A_2}\\right\\}\n\\end{equation} \nwith partial order on $L_{QM}$ defined by\n\\begin{equation}\n S_1\\leq S_2~\\text{iff}~S_1(\\mathcal{A})\\leq S_2(\\mathcal{A})\\forall\\mathcal{A}\\in\\mathfrak{A}\n\\end{equation} \ngiving the meet and join\n\\begin{equation}\\label{operations}\n\\begin{split}\n (S_1\\vee S_2)(\\mathcal{A})&=S_1(\\mathcal{A})\\vee S_2(\\mathcal{A}),\\\\\n (S_1\\wedge S_2)(\\mathcal{A})&=S_1(\\mathcal{A})\\wedge 
S_2(\\mathcal{A}).\n\\end{split}\n\\end{equation}\nThe embedding of $S_{QM}$ into $L_{QM}$ is given by\\footnote{For notational convenience $(\\A,0)$ is identified with $\\bot\\in S_{QM}$.}\n\\begin{equation}\\label{imbed}\n i:(\\mathcal{A},P)\\mapsto S_{(\\mathcal{A},P)},~S_{(\\mathcal{A},P)}(\\mathcal{A}'):=\\begin{cases}P,&\\mathcal{A}\\subset\\mathcal{A}',\\\\0,&\\text{otherwise}.\\end{cases}\n\\end{equation}\n\\end{theorem}\n\\begin{proof}\nThe key to the extension of $S_{QM}$ to $L_{QM}$ lies in noting that the embedding \\eqref{imbed} satisfies the relation\n\\begin{equation}\\label{relation}\n S=\\bigvee_{\\A\\in\\mathfrak{A}}i(\\A,S(\\A)).\n\\end{equation}\nThis gives the intended reading of elements of $L_{QM}$, namely, as a disjunction of experimental propositions.\nBy LMR it follows that $i$ preserves the interpretation of elements of $S_{QM}$.\nThe conjunctions on $L_{QM}$ further agree with those on $S_{QM}$:\n\\begin{equation}\n S_{(\\A_1,P_1)}\\wedge S_{(\\A_2,P_2)}= S_{(\\A_1,P_1)\\wedge(\\A_2,P_2)}.\n\\end{equation}\nFurther, $L_{QM}$ respects \\eqref{disjcom} while introducing `new' elements for other disjunctions, i.e.,\n\\begin{equation}\n S_{(\\A_1,P_1)}\\vee S_{(\\A_2,P_2)}\\leq S_{(\\A_1,P_1)\\vee(\\A_2,P_2)},\n\\end{equation}\nwith equality if and only if either $P_1$ or $P_2$ equals $0$, or $\\A_1=\\A_2$. \nFinally, that $L_{QM}$ is a complete distributive lattice follows from \\eqref{operations} and observing that $L(\\mathcal{A})$ is a complete distributive lattice for every $\\mathcal{A}\\in\\mathfrak{A}$.\n\\end{proof}\n\nThe lattice $L_{QM}$ is turned into a Heyting algebra by obtaining the relative pseudo-complement in the usual way:\n\\begin{equation}\n S_1\\to S_2=\\bigvee\\left\\{S\\in L_{QM}\\:;\\:S\\wedge S_1\\leq S_2\\right\\}.\n\\end{equation}\nAnd negation is introduced by $\\neg S:=S\\to\\bot$ thus obtaining an intuitionistic logic for reasoning with experimental propositions.\nThe logic $L_{QM}$ is also a proper Heyting algebra and, in fact, is radically non-classical.\nThat is, the law of excluded middle only holds for the top and bottom element.\nThis is easily checked by observing that $S(\\mathbb{C}1)=1$ if and only if $S=\\top$. 
\nIt then follows that $S\\vee\\neg S=\\top$ if and only if $S=\\top$ or $\\neg S=\\top$.\n\nThe strong non-classical behavior can be understood by noting that the logic is entirely built up from propositions about performed measurements.\nThe negation of $M_A(\\Delta)$ is identified with other propositions about measurements that exclude the possibility of $M_A(\\Delta)$ but leave open the option of no experiment having been performed:\n\\begin{equation}\n \\neg S_{(\\A,P)}=\\bigvee \\left\\{S_{(\\A',P')}\\:;\\: [\\A',\\A]\\neq 0~\\text{or}~P'\\wedge P=0\\right\\}.\n\\end{equation}\nAn unexcluded middle thus presents itself as the proposition $\\neg M_A$=``$A$ is not measured''.\nIt was suggested in \\cite[\\S2]{Hermens12} that incorporating such propositions could lead to a classical logic.\nI will now show that this is indeed the case.\n\nIt is tempting to start the program for a classical logic for experimental propositions by formally adding the suggested propositions $\\neg M_A$ and building on from there.\nHowever, it turns out that it is much more convenient to introduce new elementary experimental propositions, namely\n\\begin{equation}\\label{prov}\n M_A^!(\\Delta)=\\text{``$A$ is measured with result in $\\Delta$, and no finer grained measurement has been performed''}.\n\\end{equation}\nThe notion of fine graining here relies on LMR.\nAnother way of expressing the conjunct added is to state that ``only $A$ and all the observables implied to be measured by the measurement of $A$ have been measured''.\nWith this concept the following can now be shown.\\footnote{In the remainder of this paper it is tacitly assumed that $\\h$ is finite-dimensional.}\n\n\\begin{theorem}\nThe classical logic for experimental propositions is given by\\footnote{The exclamation mark is merely a notational artifact introduced to discern the elements of $S^!_{QM}$ from those of $S_{QM}$. That is, without the exclamation mark, $S^!_{QM}$ would be a subset of $S_{QM}$.}\n\\begin{equation}\n CL_{QM}=\\mathcal{P}(S^!_{QM}),~S^!_{QM}:=\\left\\{(\\A^!,P)\\:;\\:\\A\\in\\mathfrak{A}, P\\in L_0(\\A)\\right\\},\n\\end{equation}\nwhere $L_0(\\A)$ denotes the set of atoms of the lattice $L(\\A)$. \n\\end{theorem}\n\\begin{proof}\nBeing defined as a power set, it is clear that $CL_{QM}$ is a classical propositional lattice. \nSo it only has to be shown that every element signifies an experimental proposition, and that it is rich enough to incorporate the propositions already introduced by $L_{QM}$. 
\nThe first issue is readily established by the identification\n\\begin{equation}\\label{ident}\n M_A^!(\\Delta)\\mapsto \\{(\\A^!,P)\\in S^!_{QM}\\:;\\:P\\leq \\mu_A(\\Delta)\\},\n\\end{equation}\nwith $\\A$ the algebra generated by $A$.\nIdentifying $\\A$ with $A$ is again justified by LMR, and IEA is incorporated by the fact that, whenever $\\Delta$ contains no elements in the spectrum of $A$, $M_A^!(\\Delta)$ is identified with the empty set.\nThus \\eqref{ident} ensures that every element of $CL_{QM}$ is understood as a disjunction of experimental propositions, i.e., every singleton set in $CL_{QM}$ encodes a proposition of the form $M_A^!(\\{a\\})$.\n\nThat $CL_{QM}$ is indeed rich enough follows by observing that the map\n\\begin{equation}\n c:L_{QM}\\to CL_{QM},~S\\mapsto\\bigcup_{\\A\\in\\mathfrak{A}}\\left\\{(\\A^!,P)\\in S^!_{QM}\\:;\\:P\\leq S(\\A)\\right\\}\n\\end{equation}\nis an injective complete homomorphism.\n\\end{proof}\n\nSome additional reflections are helpful to get a grip on $CL_{QM}$.\nFirst one may note that the map $(c\\circ i):S_{QM}\\to CL_{QM}$, encoding propositions of the form $M_A(\\Delta)$, acts as\n\\begin{equation}\\label{ident2}\n c\\circ i:(\\A,P)\\mapsto \\left\\{(\\A'^!,P')\\in S^!_{QM}\\:;\\:(\\A',P')\\leq (\\A,P)\\right\\}.\n\\end{equation}\nThus the provision ``and no finer grained measurement has been performed'' that distinguishes $M_A^!(\\Delta)$ from $M_A(\\Delta)$ precisely excludes all the $(\\A'^!,P')\\in (c\\circ i)(\\A,P) $ with $\\A'\\neq\\A$, i.e., the finer grained measurements (c.f. \\eqref{ident} and \\eqref{ident2}). \n\nThe non-classicality of $L_{QM}$ is also explicated by $CL_{QM}$.\nFor any $(\\A,P)\\in S_{QM}$, the complement of \n\\begin{equation}\n c(S_{(\\A,P)}\\vee\\neg S_{(\\A,P)})\n\\end{equation}\nconsists of all the $(\\A'^!,P')\\in S^!_{QM}$ with $\\A'\\subsetneq\\A$ and $P'\\leq P$.\nThis is because only the added provision in \\eqref{prov} makes $(\\A'^!,P')$ exclude the measurement of $A$, while within $L_{QM}$ there is nothing to denote a possible incompatibility for $A$ and $A'$.\nA proposition in $CL_{QM}$ of special interest in this context is the singleton $\\{(\\mathbb{C}1,1)\\}$ which expresses that only the trivial measurement is performed.\nIn light of IEA this is just the same as saying that no measurement is performed; the most typical proposition in $CL_{QM}$ not present in $L_{QM}$. 
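\nFor concreteness, consider the simplest nontrivial case $\\h=\\mathbb{C}^2$ (a small worked example). The Abelian algebras generated by observables on $\\h$ are the trivial algebra $\\mathbb{C}1$ and, for each one-dimensional projection $P$, the maximal Abelian algebra $\\A_P$ spanned by $P$ and $1-P$ (note that $\\A_P=\\A_{1-P}$). The atoms of $L(\\mathbb{C}1)$ and of $L(\\A_P)$ are $1$ and $P,1-P$ respectively, so that\n\\begin{equation*}\n S^!_{QM}=\\{(\\mathbb{C}1^!,1)\\}\\cup\\left\\{(\\A_P^!,Q)\\:;\\:P~\\text{a one-dimensional projection},~Q\\in\\{P,1-P\\}\\right\\}.\n\\end{equation*}\nThe singleton $\\{(\\mathbb{C}1^!,1)\\}$ expresses that only the trivial measurement has been performed, while $\\{(\\A_P^!,P)\\}$ expresses that the two-outcome measurement with eigenprojections $P$ and $1-P$ has been performed, with the outcome corresponding to $P$; here the added provision of \\eqref{prov} is automatic, since $\\A_P$ is maximal Abelian. Every element of $CL_{QM}$ is then a union of such singletons.\n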
\n\nTo conclude this section I compare the constructions of $L_{QM}$ and $CL_{QM}$ to another way of extending $S_{QM}$ to a distributive lattice, namely, by using the Bruns-Lakser theory of distributive hulls \\cite{BrunsLakser70} as advocated in \\cite{Coecke02}.\nIn this approach $S_{QM}$ is embedded into the lattice of its distributive ideals\\footnote{A distributive ideal is a subset $I\\subset S_{QM}$ that is downward closed and further contains the join $\\bigvee\\{(\\A,P)\\in J\\}$ for every subset $J\\subset I$ that satisfies $\\left(\\bigvee_{(\\A,P)\\in J}(\\A,P)\\right)\\wedge(\\A',P')= \\bigvee_{(\\A,P)\\in J}\\left((\\A,P)\\wedge(\\A',P')\\right)$ for all $(\\A',P')\\in S_{QM}$.} $\\mathcal{DI}(S_{QM})$ by taking each element to its downward set, i.e.,\n\\begin{equation}\n (\\A,P)\\mapsto\\downarrow(\\A,P):=\\{(\\A',P')\\in S_{QM}\\:;\\:(\\A',P')\\leq(\\A,P)\\}.\n\\end{equation}\n\nGenerally, the abstract nature of the extension $\\mathcal{DI}(L)$ for some non-distributive lattice $L$ prevents a straightforward identification of distributive ideals with propositions.\nIn the present situation, for example, it is not immediately clear whether the resulting lattice operations behave consistently with respect to the intended reading of the elements $\\downarrow(\\A,P)$ given by \\eqref{reading}. \nFortunately, this issue can be resolved for $S_{QM}$ by noting that the resulting lattice of its distributive ideals is isomorphic to the power set of the atoms in $S_{QM}$:\n\\begin{equation}\n \\mathcal{DI}(S_{QM})\\simeq\\mathcal{P}(S^*_{QM}),~S^*_{QM}:=\\{(\\A,P)\\in S_{QM}\\:;\\:\\A\\in \\mathfrak{A}_{\\mathrm{max}},P\\in L_0(\\A)\\},\n\\end{equation}\nwhere $\\mathfrak{A}_{\\mathrm{max}}$ is the set of maximal Abelian algebras.\nHere then one sees that the proposition identified with $(\\A,P)\\in S_{QM}$ is understood in $\\mathcal{P}(S^*_{QM})$ as a disjunction over all possible \\emph{complete} measurements compatible with a measurement of $A$. \nThat is, $M_A(\\Delta)$ in this setting may be rephrased as ``some complete measurement $A'$ has been performed with outcome $a'$ which implies $M_A(\\Delta)$''.\nAlthough this provides a consistent reading, the underlying presupposition that every measurement in fact constitutes a complete measurement seems somewhat reckless.\n\n \n \n \n\n\\section{Quantum logic and probability}\\label{QProbsec}\nIf the logic $CL_{QM}$ is to play any significant role in quantum mechanics, it has to be connected with quantum probability theory.\nThat is, one has to know how the Born rule can be expressed using the language of $CL_{QM}$.\nIdeally this consists of two parts.\nFirst, to show that probability functions on $CL_{QM}$ can be introduced that are in accordance with the Born rule.\nAnd secondly, to show that the Born rule comes out as a necessary consequence holding for all probability functions on $CL_{QM}$ (i.e., a result similar to Gleason's theorem for $L(\\h)$ (\\cite{Gleason57})). 
\nIn this section some first investigations in this direction are made.\n\nA first natural approach to probabilities on $CL_{QM}$ is to note that $(S^!_{QM},CL_{QM})$ is a measurable space.\nHence, it is tempting to state that all probability functions are simply all probability measures on this space.\nHowever, this does not do justice to the quantum formalism.\nFor example, one possible probability measure is the Dirac measure peaked at $\\{(\\mathbb{C}1,1)\\}$ expressing certainty that no measurement will be performed.\nThis is likely to be too minimalistic even for the hardened instrumentalist. \nThat is, although one may adhere to the idea that `unperformed experiments have no results' \\cite{Peres78}, a central idea in QM is that possible outcomes for measurements do have probabilities irrespective of whether the measurements are performed.\nIndeed, it is a trait of QM that certain conditional probabilities have definite values even if the propositions on which one conditions have prior probability zero.\nThe Dirac measure does not provide these conditional probabilities.\n\nTo get around this it is natural to take conditional probabilities as primitive in the quantum case.\\footnote{Quantum mechanics has even been used as an example to advocate that conditional probability should be considered more fundamental than unconditional probability, cf.\\ \\cite{Hajek03}.}\nFor this approach the axiomatic scheme of R\\'enyi is an appropriate choice.\\footnote{Cf.\\ \\cite{Renyi55}. A similar axiomatic approach was developed independently by Popper \\cite[A *ii--*v]{Popper59}. The present presentation is taken from \\cite{Spohn86}.}\n\n\\begin{definition}\nLet $(\\Omega,\\mathcal{F})$ be a measurable space and let $\\mathcal{C}\\subset\\mathcal{F}\\backslash\\{\\varnothing\\}$ be a non-empty set. Then $(\\Omega,\\mathcal{F},\\mathcal{C},\\pee)$ is called a \\emph{conditional probability space (CPS)} if $\\pee:\\mathcal{F}\\times\\mathcal{C}\\to[0,1]$ satisfies\n\\begin{enumerate}\n\\item for every $C\\in\\mathcal{C}$ the function $A\\mapsto\\pee(A|C)$ is a probability measure on $(\\Omega,\\mathcal{F})$,\n\\item for all $A,B,C\\in\\mathcal{F}$ with $C,B\\cap C\\in\\mathcal{C}$ one has\n\\begin{equation}\n \\pee(A\\cap B|C)=\\pee(A|B\\cap C)\\pee(B|C).\n\\end{equation}\n\\end{enumerate}\nIf in addition to these criteria $\\mathcal{C}$ is closed under finite unions, the space is called an \\emph{additive CPS}, and if $\\mathcal{C}=\\mathcal{F}\\backslash\\{\\varnothing\\}$ it is called a \\emph{full CPS}.\n\\end{definition}\n\nIn the present situation it is straightforward to take $(S^!_{QM},CL_{QM})$ as the measurable space.\nFor the set of conditions a modest choice is to take the set of propositions expressing the performance of a measurement: \n\\begin{equation}\n \\mathcal{C}_{QM}:=\\left\\{F_{\\A}\\:;\\:\\A\\in\\mathfrak{A}\\right\\},~F_{\\A}:=(c\\circ i)(\\A,1).\n\\end{equation}\nFor this set of conditions the Born rule can easily be captured with the CPS $(S^!_{QM},CL_{QM},\\mathcal{C}_{QM},\\pee_{\\rho})$ with the probability function given by\n\\begin{equation}\\label{qmcpss}\n \\pee_\\rho(F|F_{\\A})\n :=\\sum_{\\left\\{\\substack{P\\in L(\\h)\\:;\\\\ (\\A^!,P)\\in F}\\right\\}}\\mathrm{Tr}(\\rho P),\n\\end{equation}\nfor $F\\in CL_{QM}$ and with $\\rho$ a density operator on $\\h$.\\footnote{Another possibility is to take for the conditions the set of propositions expressing the performance of a measurement excluding the possibility of a finer grained measurement: 
\n\\begin{equation*}\n\\mathcal{C}_{QM}^!:=\\left\\{F^!_{\\A}\\:;\\:\\A\\in\\mathfrak{A}\\right\\},~F^!_{\\A}:=\\{(\\A^!,P)\\in S^!_{QM}\\:;\\:P\\in L(\\A)\\}.\n\\end{equation*}\nAlso in this setting the Born rule can be recovered by taking the CPS $(S^!_{QM},CL_{QM},\\mathcal{C}^!_{QM},\\pee^!_{\\rho})$ with the probability function given by\n\\begin{equation*}\n \\pee_\\rho^!(F|F^!_{\\A})\n :=\\sum_{\\left\\{\\substack{P\\in L(\\h)\\:;\\\\ (\\A^!,P)\\in F}\\right\\}}\\mathrm{Tr}(\\rho P)=\\pee_\\rho(F|F_{\\A}).\n\\end{equation*}\nMuch of the remainder of this section can also be phrased in terms of this CPS.\nI will however restrict myself to \\eqref{qmcpss} for the sake of clarity.} \n\nIt would however be more interesting to take $\\mathcal{C}=CL_{QM}\\backslash\\{\\varnothing\\}$ for the set of conditions.\nAt the present it is not clear to me if this is possible, as extending the set of conditions seems a non-trivial matter.\nThere is a positive result on this matter however, namely, the CPS\n$(S^!_{QM},CL_{QM},\\mathcal{C}_{QM},\\pee_{\\rho})$ can be extended to\nan additive CPS.\nThe proof for this claim relies on the following definition and theorem.\n\n\\begin{definition}\nA family of measures $(\\mu_i)_{i\\in I}$ on $(\\Omega,\\mathcal{F})$ is called a \\emph{generating family of measures for the CPS} $(\\Omega,\\mathcal{F},\\mathcal{C},\\pee)$ if and only if for all $C\\in\\mathcal{C}$ there exists $i\\in I$ such that $0<\\mu_i(C)<\\infty$ and for all $F\\in\\mathcal{F}$, for all $C\\in\\mathcal{C}$ and for all $i\\in I$ with $0<\\mu_i(C)<\\infty$\n\\begin{equation}\n \\pee(F|C)=\\frac{\\mu_i(F\\cap C)}{\\mu_i(C)}.\n\\end{equation}\nThe family is called \\emph{dimensionally ordered} if and only if there is a total order $<$ on $I$ such that for all $F\\in\\mathcal{F}$ and $i,j\\in I$: $i0$ \nfor a commutative algebra $\\A$. We give here the proof \nfor any unital algebra $\\A$.\nLet $L_n$ be the (directed) line graph of $n+1$ vertices ($v_0,...,v_n$)\n and $n$ edges ($e_1,...,e_n$); Fig. 3.1. \n\\\\ \\ \\\\\n\\centerline{\\psfig{figure=Ln-dir,height=1.6cm}}\n\\centerline{ Figure 3.1} \\ \\\\\n\n\\begin{lemma}\nThe graph cochain complex of $L_n$:\\\\\n$C^*_{\\A}(L_n):\\ \\ C^{0} \\stackrel{d^0}{\\to} C^{1} \\stackrel{d^1}{\\to} C^{2} \n...\\to C^{n-1} \\stackrel{d^{(n-1)}}{\\to} C^{n} $, \nis acyclic, except for the first term. \nThat is, $\\hat H^{i}_{\\A}(L_{n})=0$ for $i>0$ \nand $\\hat H^{0}_{\\A}(L_{n})$ is usually nontrivial\\footnote{From the \nfact that chromatic polynomial of $L_n$ is equal to $\\lambda(\\lambda -1)^n$ \nfollows that rank($\\hat H^{0}_{\\A}(L_{n})=$ \nrank($\\A)$ $(rank(\\A)-1)^n)$, we assume here that $k$ is a principal ideal \ndomain. It was proven in \\cite{H-R-2} that for \na commutative $\\A$ decomposable into \n$k1 \\oplus \\A\/k$ one has that $H^*_{\\A}(L_n)= H^0_{\\A}(L_n)= \n\\A \\otimes (\\A\/k)^{\\otimes n}$.}. \n\\end{lemma}\nFor a line graph $\\hat H_{\\A} = H_{\\A}$ so we will use $H_{\\A}$ to simplify \nnotation. \\ \nWe prove Lemma 3.3 by induction on $n$. For $n=0$, $L_0$ is \nthe one vertex graph, thus $H^*_{\\A}= H^0_{\\A}= \\A$ and \nthe Lemma 3.3 holds. Assume that the lemma holds for $L_k$ with $k0$. 
\nTo get exactly the chain complex of the ${\\mathbb M}$-reduced\n (directed) graph homology of $P_{n}$, $\\hat H^*_{{\\A},{\\mathbb M}}(P_n)$,\n we extend this chain complex to \n$$\\{\\mathbb M \\otimes_{{\\A}^e} C^i\\}_{i=0}^{n}\\ :\\ \\\n\\mathbb M \\otimes_{{\\A}^e} C^0 \\stackrel{\\partial^0}{\\to}\n\\mathbb M \\otimes_{{\\A}^e} C^1 \\stackrel{\\partial^1}{\\to} ... \\to\n\\mathbb M \\otimes_{{\\A}^e} C^{n-1} \\stackrel{\\partial^{n-1}}{\\to}\n\\mathbb M \\otimes_{{\\A}^e} C^{n}=\\mathbb M \\to 0$$\nwhere the homomorphism\n$\\partial^{n-1}$ is the zero map. \n\n\nTo complete the proof of Theorem 3.1 we show that this complex is \nexactly the same as the cochain complex of the ${\\mathbb M}$-reduced\n (directed) graph cohomology of $P_{n}$. We consider carefully the \nmap $\\mathbb M \\otimes_{{\\A}^e} C^j \\stackrel{\\partial^j}{\\to} \n\\mathbb M \\otimes_{{\\A}^e} C^{j+1}$. In the calculation \nwe follow the proof of Proposition 1.1.13 of \\cite{Lo}.\nThe idea is to ``bend\" the line graph $L_n$ to the polygon $P_n$ and \nshow that it corresponds to tensoring, over ${\\A}^e$ with \n$\\mathbb M$; compare Figure 3.2.\n\\\\ \\ \\\\\n\\centerline{\\psfig{figure=bending,height=3.4cm}}\n\\centerline{ Figure 3.2} \\ \\\\\n\nLet us order components of $[G:s]$ ($G$ is equal to $P_n$ or $L_n$) \nin the anticlockwise orientation, \nstarting from the component containing $v_0$ (decorated by an element of $\\M$ \nif $G=P_n$). The element of $C^j_{\\A}(L_n)$ will be denoted by \n$(a_{i_0},a_{i_1},...,a_{i_{n-j-1}},a_{i_{n-j}})(s)$ and of \n$C^j_{\\A,\\M}(P_n)$ by $(m,a_{i_1},...,a_{i_{n-j-1}})(s)$. In the \nisomorphism $\\M \\otimes_{{\\A}^e} C^{j}_{\\A}(L_n) \\to C^j_{\\A,\\M}(P_n)$ \n($j2$. Then the torsion part of the \nKhovanov homology of $T_{2,-n}$ is given by (in the description \nof homology we use notation of \\cite{Vi} treating $T_{2,-n}$ as \na framed link):\\\\ \n(Odd)\\ \\ For $n$ odd, all the torsion of $H_{**}(T_{2,-n})$ is \nsupported by \\\\ \\ \\ \n$H_{n-2,3n-4}(T_{2,-n})= H_{n-4,3n-8}(T_{2,-n})=...= \nH_{-n+4,-n+8}(T_{2,-n})=Z_2$.\\\\\n(Even)\\ \\ For $n$ even, all the torsion of $H_{**}(T_{2,-n})$ is\nsupported by\\\\\n\\ \\ $H_{n-4,3n-8}(T_{2,-n})= H_{n-6,3n-12}(T_{2,-n})=...=\nH_{-n+4,-n+8}(T_{2,-n})=Z_2$.\\\\\n\nFor a right-handed torus link of type $(2,n)$, $n>2$, we can \nuse the formula for the mirror image (Khovanov duality theorem; \nsee for example\\\\\n \\cite{A-P,APS}:\\ \\ \\ $H_{-i,-j}({\\bar D})= \nH_{ij}(D)\/Tor(H_{ij}(D)) \\oplus Tor(H_{i-2,j}(D))$.\n\\end{corollary}\n\nThe result on Hochschild homology of symmetric algebras has a \nmajor generalization to the large class of algebras called \n{\\it smooth algebras}.\n\\begin{theorem}(\\cite{Lo,HKR})\\label{4.1}\nFor any smooth algebra $\\A$ over $k$, the antisymmetrization map \n$\\varepsilon_*: \\Lambda^*_{\\A|k} \\to HH_n(\\A)$ is an isomorphism of \ngraded algebras. Here $\\Omega^n_{\\A|k}= \\Lambda^n \\Omega^1_{\\A|k}$ \nis an $\\A$-module of differential $n$-forms.\n\\end{theorem}\nWe refer to \\cite{Lo} for a precise definition of a smooth algebra, \nhere we only \nrecall that the following are examples of smooth\nalgebras:\\\\\n (i) Any finite extension of a perfect field $k$ (e.g. a field\nof characteristic zero). \\\\\n(ii) The ring of algebraic functions on a nonsingular\nvariety over an algebraically closed field $k$, e.g. $k[x]$, $k[x_1,...,x_n]$,\n$k[x,y,z,t]\/(xt-yz-1)$ \\cite{Lo}. 
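\n\nAs a quick illustration of Theorem \\ref{4.1} (the computation is of course standard), take $\\A=k[x]$ from example (ii). Then $\\Omega^0_{\\A|k}=k[x]$, $\\Omega^1_{\\A|k}=k[x]\\,dx$ and $\\Omega^n_{\\A|k}=0$ for $n\\geq 2$, so that\n$$HH_0(k[x])=k[x],\\quad HH_1(k[x])\\cong k[x],\\quad HH_n(k[x])=0 \\ \\mbox{for}\\ n\\geq 2.$$\nThis is in sharp contrast with the non-smooth truncated polynomial algebras appearing below, whose Hochschild homology is nontrivial in every degree.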
\\\\ \nNot every quotient of a polynomial algebra is a smooth algebra.\nFor example, $C[x,y])(x^2y^3)$ or $Z[x]\/(x^m)$ are not smooth.\nThe broadest, to my knowledge, treatment of Hochschild homology of \nalgebras $C[x_1,...,x_n]\/(Ideal)$ is given by Kontsevich in \\cite{Kon}.\nFor us the motivation came from one variable polynomials, Theorem 40 \nof \\cite{H-P-R}. In particular we generalize Theorem 40(i) from a \ntriangle to any polygon that is we compute the graph cohomology \nof a polygon for truncated\npolynomial algebras and their deformations. Thus, possibly, we can \napproximate Khovanov-Rozansky $sl(n)$ homology and their deformations. \n\n\\begin{theorem}\\label{4.6}\n\\begin{enumerate}\n\\item[(i)] $HH_i({\\A}_{p(x)}) = Z[x]\/(p(x),p'(x))$ for $i$ odd and \n$HH_i({\\A}_{p(x)}) = \\{[q(x)]\\in Z[x]\/(p(x)) \\ | \\ \\ \nq(x)p'(x)$ is divisible by $p(x)$ \\}, \nfor $i$ even $i \\geq 2$. In both cases the $Z$ rank of the group \nis equal to the degree of gcd$(p(x),p'(x))$.\n\\item[(ii)] In particular, for $p(x)= x^{m+1}$, we obtain homology of \nthe ring of truncated polynomials, ${\\A}_{m+1}=Z[x]\/x^{m+1}$ \nfor which:\\ \\ \n$HH_i({\\A}_{m+1})=Z_{m+1} \\oplus Z^m$ for odd\\ $i$ and \n$HH_i({\\A}_{m+1})= Z^m$ for for $i$ even, $i\\geq 2$. \n$HH_0({\\A}_{m+1})= \\A = Z^{m+1}$.\n\\item[(iii)] The graph cohomology of a polygon $P_n$, $H^i_{{\\A}_{p(x)}}(P_n)$, \nis given by:\\\\\n $H^{n-2i}_{{\\A}_{p(x)}}(P_n)= {\\A}_{p(x)}\/(p'(x))$ for $1\\leq i \\leq \\frac{v-1}{2}$,\\ \nand \\\\\n$H^{n-2i-1}_{{\\A}_{p(x)}}(P_n)= ker({\\A}_{p(x)} \n\\stackrel{p'(x)}{\\to} {\\A}_{p(x)})$ \nfor $1\\leq i \\leq \\frac{v-2}{2}$.\\\\\nFurthermore, $H^{k}_{{\\A}_{p(x)}}(P_n)= 0$ for $k\\geq n-1$ and \n$H^{0}_{{\\A}_{p(x)}}(P_n)$ is a free abelian group of rank \n$(d-1)^n + (-1)^n(d-1)$ for $n$ even ($d$ denotes the degree of $p(x)$) and\n it is of rank $(d-1)^n + (-1)^n(d-1) - rank(H^{1}_{{\\A}_{p(x)}}(P_n))$ \nif $n$ is odd (notice that $(d-1)^n + (-1)^n(d-1)$ \nis the Euler characteristic of $\\{H^{i}_{{\\A}_{p(x)}}(P_n)\\}$). \n \n\\end{enumerate}\n\\end{theorem}\n\n\\begin{proof} Theorem 4.6(i)\n is proven by considering a resolution of ${\\A}_{p(x)}$ as \nan ${\\A}_{p(x)}^e={\\A}_{p(x)}\\otimes \n{\\A}^{op}_{p(x)}$ \nmodule:\n$$ \\cdots \\to {\\A}_{p(x)}\\otimes {\\A}_{p(x)} \n\\stackrel{u}{\\to} \n{\\A}_{p(x)}\\otimes {\\A}_{p(x)} \\stackrel{v}{\\to}\n{\\A}_{p(x)}\\otimes {\\A}_{p(x)} \\stackrel{u}{\\to} \\cdots\n\\to {\\A}_{p(x)} $$ \nwhere $u= x\\otimes 1 - 1\\otimes x$ and $v=\\Delta (p(x))$ is a coproduct \ngiven by $\\Delta (x^{i+1}) = x^i\\otimes 1 + x^{i-1}\\otimes x +...+ \nx \\otimes x^{i-1} + 1\\otimes x^i$.\n\\end{proof}\n Curious but not accidental observation \nis that by choosing coproduct $\\Delta (1)= v$ \nwe define a Frobenius algebra structure on $\\A$. \nIn Frobenius algebra $(x\\otimes 1)\\Delta (1) = (1 \\otimes x)\\Delta (1)$ \nwhich makes $uv=vu=0$ in our resolution. Furthermore the distinguished \nelement of the Frobenius algebra $\\mu\\Delta (1)= p'(x)$. \n\\\\\n\n\\iffalse\n\\ \\\\\nEND OF THE PAPER (ideas from letters below will be developed in \nfuture papers, hopefully).\n\n\\section{Letter to Mietek; August 16, 2005}\nI will use the general definition Def. 2.4 of the paper:\nand extend Example 2.5 from plane signed graphs to signed graphs with\nedges along every vertex ordered, i.e. 
flat vertex graphs\n (such a graph defines the unique embedding into a closed surface).\n\n\\begin{definition}\\label{Text2.4}\nLet $k$ be a commutative ring and $E=E_+ \\cup E_-$ a finite set divided\ninto two disjoint subsets (positive and negative sets). We consider the\ncategory of subsets of $E$\n($E\\supset s=s_+\\cup s_-$ where $s_{\\pm}= s\\cap E_{\\pm}$). the set $Mor(s,s')$\nis either empty or has one element if $s_- \\subset s'_-$ and\n$s_+ \\supset s'_-$. Objects are graded by $\\sigma(s)=|s_-|- |s_+|$.\nLet call this category the superset category (as the set $E$\n is initially bigraded). We define ``Khovanov cohomology\" for every functor,\n$\\Phi$, from the superset category to the category of $k$-modules.\nWe define cohomology of $\\Phi$ in the similar way as for a functor\nfrom the category of sets (which corresponds to the case $E=E_-$).\nThe cochain complex corresponding to $\\Phi$ is defined to be\n$\\{C^i(\\Phi)\\}$ where $C^i(\\Phi)$ is the direct sum of $\\Phi(s)$ over all\n$s\\in E$ with $\\sigma(s)=i$. To define $d: C^i(\\Phi) \\to C^{i+1}(\\Phi)$\nwe first define face maps $d_e(s)$ where $e=e_- \\notin s_-$ ($e_- \\in E_-$)\nor $e=e_+ \\in s_+$. Namely, in such cases\n$d_{e_-}(s)= \\Phi(s\\subset s\\cup e_-)(s)$ and\n$d_{e_+}(s)= \\Phi(s\\supset s-e_+)$. Now, as usually\n$d(s)= \\sum_{e \\notin s} (-1)^{t(s,e)}d_e(s)$,\nwhere $t(s,e)$ requires ordering of elements of $E$ and is equal the\nnumber of elements of $s_-$ smaller then $e$ plus the number of elements\nof $s_+$ bigger than $e$.\nAs before we obtain the cochain complex whose cohomology does not\ndepend on ordering of $E$.\n\\end{definition}\n\nConstruction of (Khovanov type) graph homology for signed \"flat vertex\"\ngraphs.\nLet $G$ be a connected signed plane graph with an edge set\n$E=E_+ \\cup E_-$ were $E_+$ is the\nset of positive edges and $E_-$ is the set of negative edges.\nFurthermore we assume that at every vertex adjacent edges are ordered, loops\nare listed twice\n(or equivalently the neighborhood of each vertex is placed on the plane).\n[[Such a structure defines a 2-cell embedding of $G$ into a closed\noriented surface, $F_G$. $2$-cell embedding means that $F_G -G$\nis a collection of open discs.]]\nWe define the functor from the superset category $\\mathbb E$ using the\nfact that $G \\subset F_G$ defines a link diagram $D(G)$ on $F_G$.\n\nTo define the functor $\\Phi$ we fix\na Frobenius algebra $A$ with a multiplication $\\mu$ and a comultiplication\n$\\Delta$ (the main example used by Khovanov,\n is the algebra of truncated polynomials $\\A_m=Z[x]\/x^m$ with a\ncoproduct $\\Delta (x^k)=\\sum_{i+j=m-1+k}x^i \\otimes x^j$). We assume that\n$\\A$ is commutative and cocommutative. The main property of Frobenius\nalgebra we use is: $(\\mu \\otimes Id)(Id \\otimes \\Delta)= \\Delta\\mu$.\nFor a functor on objects $s \\subset S$, we consider the Kauffman state\nassociated to $s$, that is every crossing is decorated by a marker.\nIf $e \\in s_+$ or $e \\in E_- - s_-$ then we choose a positive marker,\notherwise (i.e. $e \\in E_+ -s_+$ or $e\\in s_-$) we choose negative\nmarker (Figure ??). We denote by $D_s$ the set of circles obtained from\n$D_G$ by smoothing crossings of $D_G$ along markers yielded by $s$.\n$k(s)$ denotes the number of circles in $D_s$. 
Now we define\n$\\Phi(s)= \\A^{\\otimes k(s)}$ and we interpret it as having $\\A$\nattached to every circle of $D_s$.\nTo define $\\Phi$ on morphisms pairs of objects\n$s,s'$ such that $Mor(s,s')\\neq \\emptyset$\n and $\\sigma(s')=\\sigma(s)+1$ and define $\\Phi(Mor(s,s')$ first.\nWe have two cases to consider:\\\\\n(1) $k(s')= k(s)-1$. In this case we use the product in $\\A$.\nIf two circles of $D_s$, say $u_i,u_{i+1}$, are combined into one circle\nin $D_(s')$ then $\\Phi(Mor(s,s'))(a_1\\otimes a_2 \\otimes ...\\otimes a_i\n\\otimes a_{i+1})\\otimes ...\\otimes a_{k(s}) =\n(a_1\\otimes a_2 \\otimes ...\\otimes \\mu(a_i\\otimes a_{i+1})\\otimes\n...\\otimes a_{k(s)})$.\\\\\n(2) $k(s')= k(s)-1$. In this case we use the coproduct in $\\A$.\nIf a circle of $D_s$, say $u_i$ is split into two circles in \n$D_{s'}$ then $\\Phi(Mor(s,s'))(a_1\\otimes a_2 \\otimes ...\\otimes a_i\n\\otimes ...\\otimes a_{k(s}) =\n(a_1\\otimes a_2 \\otimes ...\\otimes \\Delta (a_i)\\otimes ...\\otimes a_{k(s})$.\n\\\\\n\\ \\\\\n$\\Phi$ can be now extended to all morphisms is superset category\ndue to Frobenius algebra axioms.\n\nCohomology of our functor $H^i(\\Phi)$ are cohomology of\nour signed, flat vertex graph. If $G$ is a disjoint union of $G_1$\nand $G_2$ then we define $H^{*}(G) = H^{*}(G_1) \\otimes H^{*}(G_2)$.\nSo we use K\\\"unneth formula as a definition.\n\n[[Because our graph is uniquely embedded into a closed surface we \ncan produce more delicate (co)homology theory by distinguishing \nnontrivial from trivial curves on the surface and following \\cite{APS} \nin the construction; at least it will work for the algebra $A_2$ but \nwe do not use a product and coproduct as the only information but \nextend Frobenius algebra rules lead by the topology of the surface.]]\n\\section{From e-mails to Laure and Yongwu: August 12, 13, 2005}\n\n{\\bf Long exact sequence of functor homology}\\\\\n\\ \\\\\nAssume that $E' \\subset E$ and consider three functors;\n$\\Phi_1$ and $\\Phi_2$ on the category of subsets of $E'$ and\n$\\Phi$ on the category of subsets of $E$.\nWe say that $\\Phi_1$ and $\\Phi$ are covariantly coherent if\n$\\Phi_1(s)= \\Phi (s\\cup E\")$ where $E''= E-E'$ and\n$\\Phi_1(s_1 \\subset s_2) = \\Phi (s_1\\cup E\" \\subset s_2 \\cup E\")$.\n\nClaim 1. If $\\Phi_1$ and $\\Phi$ are covariantly coherent than we have\na cochain map $\\alpha: C^*(\\Phi_1) \\to C^{*+|E\"|}(\\Phi)$. In the definition\nof $\\alpha$ we use the isomorphism $\\Phi_1(s) \\to \\Phi (s\\cup E\")$ for\nevery $s \\in E$.\n\nConsequently we have the map on cohomology:\n$\\alpha_*: H^*(\\Phi_1) \\to H^{*+|E\"|}(\\Phi)$.\n\nWe say that $\\Phi_2$ and $\\Phi$ are cotravariantly coherent if\nfor $s \\in E'$ we have $\\Phi_2(s)= \\Phi (s)$ and for\n$s_1\\subset s_2 \\in E'$ we have $\\Phi_2(s_1 \\subset s_2) =\n\\Phi(s_1 \\subset s_2)$.\n\nClaim 2. If $\\Phi_2$ and $\\Phi$ are cotravariantly coherent than we have\na cochain map $\\beta: C^*(\\Phi) \\to C^{*}(\\Phi_2)$.\nWe define $\\beta: C^i(\\Phi) \\to C^{i}(\\Phi_2)$ as follows.\nIf $s \\subset E'$ then $\\beta$ is the identity on $\\Phi(s)$, if\n$s \\notin E'$ then $\\beta$ restricted to $\\Phi(s)$ is the zero map.\n\nOur goal is to use these maps to construct the long exact sequence of\ncohomology involving $H^*(\\Phi_1), H^*(\\Phi)$ and $H^*(\\Phi_2)$.\n\nTheorem. 
Let $E'=E-\\{e\\}$ and $\\Phi_1,\\Phi_2$ are functors on $E'$\nand $\\Phi$ is a functor on $E$ such that\n$\\Phi_1$ and $\\Phi$ are covariantly coherent and\n$\\Phi_2$ and $\\Phi$ are cotravariantly coherent.\nThen\nWe have the short exact sequence of cochain complexes:\n$$ O \\to C^i(\\Phi_1) \\to C^{i+1}(\\Phi) \\to C^{i+1}(\\Phi_2) \\to 0$$\nleading to the long exact sequence of (co)homology.\n\n\\fi\n\\ \\\\\n\\ \\\\\n\\ \\\\\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nLinear response theory for nonequilibrium systems is slowly emerging from a great variety of formal approaches --- see \\cite{njp} for a recent review. It \nremains however very important in nonequilibrium to concentrate more on the physical--operational meaning of \nthe response expressions. Obviously, it is very practical to have experimental access to the various terms in a \nresponse formula and to learn in general to recognize facts of the unperturbed system that are responsible \nfor the particular response. \nThat at least is what has made the fluctuation-dissipation theorem so useful in equilibrium. \nFor example, transport properties as summarised in the mobility or conductivity, can be obtained from the diffusion \nin the unperturbed equilibrium system. In other words, not only is there a unifying response relation in equilibrium, \nit also possesses a general meaning in terms of fluctuations and dissipation. Such is not yet quite the situation for \nnonequilibrium systems and extra examples, in particular for spatially extended systems will therefore be useful.\\\\\n\n\nThe present paper gives the response systematics for the zero range process.\nThe zero range process regularly appears in nonequilibrium studies and has the simplifying structure that its stationary distribution is \nsimple (and remains a product distribution even away from equilibrium) while it shows a rich and quite realistic phenomenology. \nWe refer to \\cite{shuts,EH} for a general introduction and nonequilibrium study of the model. We refer to Section 3 in \\cite{EH} for a review \nof applications, in particular for the correspondence\nwith shaken granular gases. We will repeat the set-up in \nSection \\ref{zrp}. Interestingly, the time-reversed zero range process has an external field and particle currents directed \nopposite to the density profile. To start however we repeat in the next section some more formal aspects of the nonequilibrium \nlinear response. \n Our point of view is to look in particular for the \ndecomposition of the response into a frenetic and an entropic contribution. The entropic part is expressed in terms of \n(time-antisymmetric) currents and the frenetic part gets related to the (time-symmetric) dynamical activity. The latter \nrefers to the number of exits and entrances of particles at the boundaries of the system.\nSection \\ref{rzrp} performs that decomposition for the boundary driven zero range process, and gives a number of response formul{\\ae} for density and current. There we find our main results, in particular modified Green-Kubo relations.\nFinally, in Section \\ref{eqd} we treat some special cases which bring the nonequilibrium response to resemble \nthe equilibrium Kubo formula. That opens the separate theme of trying to understand under what physical conditions nonequilibrium \nfeatures remain largely absent.\n\n\\section{Nonequilibrium response}\\label{nonr}\nWe restrict ourselves to open systems connected to various different equilibrium reservoirs. 
Their nonequilibrium is passive \nin the sense that they do not affect the reservoirs directly and that all nonequilibrium forcing works directly on the \nparticles of the system. \n\nThe state of the open system is described by values $x$ for some reduced variables, e.g. (some) particle positions. In the course of time $[0,t]$ there is a \npath or trajectory $\\omega := (x_s, 0\\leq s \\leq t)$ for which we have a well-defined entropy flux $S(\\omega)$, that is \nthe change of entropy in the equilibrium reservoirs. That $S(\\omega)$ typically depends on the elementary changes in $x$ and \nhow that affects the energy and particle number of the reservoirs. Of course $S(\\omega)$ also depends on parameters such as \nthe temperature, the chemical potentials of the reservoir and coupling coefficients of interaction. It will thus change under \nperturbations.\\\\Similarly, for every state $x$ there is a notion of reactivity, like the escape rate from $x$. \nFor the path $\\omega$ there will then be a dynamical activity $D(\\omega)$ which reflects the expected amount of changes \nalong the path $\\omega$, again function of the system and reservoir parameters.\\\\\nConsider now a perturbation of that same system in which parameters are changed. Clearly, for any path $\\omega$ the entropy \nflux $S$ and the dynamical activity $D$ will change. We can look for the linear excess, that is the amount by which the perturbation \nhas changed these observables to first order. \nWe refer to \\cite{bam09,bmw1} for the general introduction, and to \\cite{mnw08} for complementary aspects to entropy. \nThe linear response for a path-observable $O(\\omega)$ is the difference in \nexpectation $\\langle \\cdot\\rangle^h$ between the perturbed process (with small time-dependent amplitude $h_s, s\\in [0,t]$) and the \noriginal steady expectation $\\langle\\cdot\\rangle$. It has the form\n\\begin{equation}\n\\langle O(\\omega)\\rangle^h - \\langle O(\\omega)\\rangle = \\frac {1}{2}\\langle \\mbox{Ent}^{\\left[ 0,t \\right]}(\\omega)\\,O(\\omega)\\rangle\n- \\langle \\mbox{Esc}^{\\left[ 0,t\\right] }(\\omega)\\,O(\\omega )\\rangle\n\\label{eq:xsusc}\n\\end{equation}\nwhere Ent$^{\\left[ 0,t\\right] }$ is the excess in entropy flux per $k_B$ over the trajectory\ndue to the perturbation and Esc$^{\\left[ 0,t\\right] }$ \nis the excess in dynamical activity over the trajectory.\nThe latter and second term on the right-hand side of \\eqref{eq:xsusc} is \nthe frenetic contribution \\footnote{the response formula \\eqref{eq:xsusc} can be written in \nseveral equivalent ways: in the second term often there is a \nfactor $1\/2$ which here we include in the \ndefinition of the dynamical activity term in brackets; section \\ref{zrp} \nwill treat some specific formulations. }. \nIn many nonequilibrium situations \nthe physical challenge is to learn to guess \nor to find and evaluate that Esc$^{\\left[ 0,t \\right] }$ from partial information on the dynamics. \nThe present paper takes the opportunity to explore this question and to make such task more specific for the zero range process. \nLet us however first give the more general formal structure, restricting ourselves to Markov jump processes. \nFor a more general review of various recent approaches, see \\cite{njp}.\\\\\n\n\nOn the finite state space $K$ we consider transition rates\n$k(x,y), x,y\\in K$. 
We assume irreducibility so that there is exponentially fast convergence to a unique stationary\ndistribution $\\rho(x), x\\in K$, satisfying\n\\[\n\\sum_{y\\in K} [\\rho(x)\\,k(x,y) - \\rho(y)\\,k(y,x)] = 0, \\quad x\\in\nK \\] \nStill, in general, there are nonzero currents of the form\n$j(x,y) := \\rho(x)\\,k(x,y) - \\rho(y)\\,k(y,x)\\neq 0$ for some pairs $x\\neq\ny\\in K$, so that the stationary process is not time-reversible.\\\\\nFor physical models the rates carry a specific meaning. Following the condition of local detailed balance the ratio\n\\[\n\\log\\frac{k(x,y)}{k(y,x)} =\\sigma(x,y)\n\\]\nshould be the entropy flux (in units of $k_B$) in the transition $x\\rightarrow y$. \nConsider now again the path $\\omega := (x_s, 0\\leq s\\leq t)$. It consists of jumps (transitions) at specific times $s_i$ \nand waiting times over $s_{i+1} - s_i$. The total entropy flux $S$ (in units of $k_B$) is\n\\begin{equation}\\label{set}\nS(\\omega) = \\sum_{s_i} \\sigma(x_{s_i^-},x_{s_{i}}) \n\\end{equation}\nwhere the sum takes the two states of the transition $x_{s_i^-} \\longrightarrow x_{s_{i}}$, with $x_{s_i^-}$ being the state \njust before the jump time $s_i$ to $x_{s_i}$.\\\\\nFor the dynamical activity we need a reference process. Writing \n\\[\n\tk(x,y) = \\psi(x,y) e^{\\sigma(x,y)\/2} \\mbox{ with } \\psi(x,y) =\\psi(y,x) \\mbox{ and } \\sigma(x,y)=-\\sigma(y,x),\n\\]\nwe take the reference rates $k_o(x,y) =1$ whenever $\\psi(x,y)\\neq 0$ and zero otherwise. That reference process corresponds to an infinite \ntemperature limit but it will not matter \nin the end. With respect to that reference we do not only have a change \nin ``potential barrier'' $-\\log 1 = 0 \\rightarrow -\\log \\psi(x,y)$ for each transition, but also a change in the escape \nrates for each state $x$:\n\\[\n\\xi(x) = \\sum_{y: \\psi(x,y) > 0} [k(x,y)-1]\n\\]\nWe then take the dynamical activity $D$ over the path $\\omega$ be the combination\n\\begin{equation}\\label{setd}\nD(\\omega) = \\int_0^t\\id s \\,\\xi(x_s) - \\sum_{s_i} \\log \\psi(x_{s_i^-},x_{s_{i}}).\n\\end{equation}\nPerturbations change $S$ and $D$. Let us look at a specific example of\nperturbed transition rates considered in\n\\cite{diez,mprf}:\n\\begin{equation}\\label{general}\nk_s(x,y) =\nk(x,y)\\,e^{h_s[b V(y)-a V(x)]}, \\quad t>s\\geq 0\n\\end{equation}\nwhere the $a,b \\in \\bbR$ are independent of the perturbing potential $V$ and \nthe $h_s \\ll 1$ is small. The coresponding perturbed Master \nequation for the \ntime-dependent probability law $\\rho_t$ is\n\\[\n\\frac{\\id}{\\id t} \\rho_t(x) = \\sum_y \\big[k_t(y,x) \\rho_t(y) - k_t(x,y) \\rho_t(x)\\big];\n\\]\nwhile the unperturbed equations of motion are obtained by making $h_s=0$.\nOne standard possible choice of perturbation is\ntaking $a= b = 1\/2T$ where $T$ is the temperature of the environment which exchanges the energy $V$ with the system. 
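\nTo see in which sense this choice is standard, suppose for a moment that the unperturbed rates satisfy detailed balance with respect to some potential $U$ at temperature $T$, i.e., $k(x,y)\/k(y,x)=e^{[U(x)-U(y)]\/T}$. Then, with $a=b=1\/2T$, the perturbed rates \\eqref{general} satisfy\n\\[\n\\frac{k_s(x,y)}{k_s(y,x)} = e^{[U(x)-U(y)]\/T}\\, e^{h_s[V(y)-V(x)]\/T} = e^{[(U-h_sV)(x)-(U-h_sV)(y)]\/T}\n\\]\nso that the perturbation is the familiar replacement $U\\rightarrow U-h_s V$ of the potential; note that only the combination $a+b$ enters this ratio.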
\nIn general $a,b$ could be \narbitrary; however, for the perturbed rates in \\eqref{general} to satisfy \nthe condition of local detailed balance, one requires that $a+b=1\/T$.\n\nWe continue however with the more general perturbation \\eqref{general}.\nIt is instructive to rewrite the perturbation \\eqref{general} as\n\\begin{eqnarray}\nk_s(x,y) &=&\nk(x,y)\\,e^{h_s\\frac{b-a}{2}(V(x) + V(y))}\\,e^{h_s \\frac{a+b}{2}(V(y) - V(x))}\\nonumber\\\\\n&=& \\big[\\psi(x,y)e^{h_s\\frac{b-a}{2}(V(x) + V(y))}\\big]\\;e^{\\sigma(x,y)\/2 + h_s (a+b)(V(y) - V(x))\/2}\\label{ins}\n\\end{eqnarray}\nagain being split in a symmetric prefactor (between square brackets) and an anti-symmetric part in the exponential. \nFrom here it is easy to see the excess in entropy flux at a transition $x\\rightarrow y$ to be\n\\begin{equation}\\label{se}\n h_s (a+b)(V(y) - V(x))\n\\end{equation}\nWe use \\eqref{se} to find the perturbation to $S$ in \\eqref{set} yielding\n\\begin{eqnarray}\\label{en}\n\\mbox{Ent}^{\\left[ 0,t\\right] }(\\omega) &=& (a+b)\\, \\sum_{s_i} h_{s_i} [V(x_{s_{i}}) - V(x_{s_i^-})] \\nonumber\\\\\n&=& (a+b) \\{h_t V(x_t) - h_0 V(x_0) - \\int_0^t \\id s \\,\\dot{h}_s V(x_s)\\}.\n\\end{eqnarray}\nFor the dynamical activity we should use again the reference process with rates $k_o(x,y)$ as above. Then,\nat least for the change in escape rates (first term in \\eqref{setd}) at state $x$,\n\\begin{eqnarray}\\label{dy}\n\\sum_y [k_s(x,y) - k(x,y)] &=& h_s \\sum_y k(x,y)\\{\\frac{b-a}{2}(V(x) + V(y)) + \\frac{a+b}{2}(V(y) - V(x))\\}\\nonumber\\\\\n&=& h_s \\,\\sum_y k(x,y)[bV(y) - aV(x)]\n\\end{eqnarray}\nto first order in $h_s$.\nThe total change to $D$ of \\eqref{setd} is thus\n\\begin{equation}\\label{es}\n\\mbox{Esc}^{\\left[ 0,t\\right] }(\\omega) = \\int_0^t\\id s\\, h_s\\,\\sum_y k(x_s,y)\\,[bV(y) - aV(x_s)] + \\frac{a-b}{2}\\sum_{s_i} h_{s_i} [V(x_{s_{i}}) + V(x_{s_i^-})]\n\\end{equation}\nwhere the last term corresponds to the change in the second term of $D$ in \\eqref{setd}, as from \\eqref{ins}.\nIn all, the expressions \\eqref{en} and \\eqref{es} completely specify the response \\eqref{eq:xsusc} for the example \\eqref{general}.\\\\\n\n \nWe can still rewrite the previous formul{\\ae}, losing somewhat the physical interpretation but gaining somewhat formal elegance. To start, let us restrict ourselves to the simpler situation where the observable $O$ is just a state function $O(x), x\\in K$. The response then investigates the change\n\\[\n\\langle O(x_t)\\rangle^h - \\langle O(x_t)\\rangle = \\langle O(x_t)\\rangle^h - \\langle O\\rangle\n\\]\nto first order in the $h_s$, where the first expectation $\\langle \\cdot\\rangle^h$ is under the perturbed Markov dynamics ($s\\geq 0$) and the second $\\langle \\cdot \\rangle$ is the original steady expectation. To say it differently, linear response wants to compute the generalized susceptibility $R(t,s)$ in\n\\[\n\\langle O(x_t)\\rangle^h = \\langle O \\rangle +\n\\int_0^t\\id s\\,\nh_s\\, R(t,s) + o(h)\\]\nThe nonequilibrium answer can be written in a variety of ways, many of which are rather formal, \nbut they should in the end all coincide with \\eqref{eq:xsusc} for \\eqref{en}--\\eqref{es}.
For example,\nin terms of the backward generator $L$ of the jump process,\n\\[\nLf(x) = \\left.\\frac{\\id}{\\id s}\\right|_{s=0}\\langle\nf(x_s)\\rangle_{x_0=x}\\; = \\;\\sum_y k(x,y)[f(y)-f(x)]\n\\]\nwe have for \\eqref{en} that\n\\begin{eqnarray}\n\\langle \\{h_t V(x_t) - h_0 V(x_0) - \\int_0^t \\id s \\,\\dot{h}_s V(x_s)\\} \\,O(x_t)\\rangle &=&\n \\int_0^t \\id s \\,h_s \\frac{\\id}{\\id s}\\langle V(x_s) \\,O(x_t)\\rangle\\nonumber\\\\\n&=& -\\int_0^t \\id s \\,h_s \\langle V(x_s) \\,LO(x_t)\\rangle\\nonumber\n \\end{eqnarray}\nOn the other hand, for \\eqref{es},\n \\[\n \\mbox{Esc}^{\\left[ 0,t\\right] }(\\omega) = \nb\\,\\int_0^t\\id s\\, h_s\\, LV(x_s) + (b-a)\\{\\int_0^t\\id s\\, h_s\\, \n\\sum_y k(x_s,y)\\,V(x_s) - \\sum_{s_i} h_{s_i} \\frac{V(x_{s_{i}}) + V(x_{s_i^-})}{2}\\}\n \\]\nWe must substitute that expression together with \\eqref{en} into \\eqref{eq:xsusc}, which leads to\n\\begin{equation}\\label{stationary}\n R(t,s) = a\\frac{\\partial}{\\partial s}\\langle V(x_s)O(x_t)\\rangle - b\\langle LV(x_s)\\,O(x_t)\\rangle\n\\end{equation}\nfor all times $0\\leq s < t$. In equilibrium, i.e., under the condition of detailed balance, we have for $t > s$\n\\[\n\\langle LV(x_s)\\,O(x_t)\\rangle = \\langle V(x_s)\\,LO(x_t)\\rangle = \\frac{\\partial}{\\partial t}\\langle V(x_s)O(x_t)\\rangle \n\\]\nand hence the two terms in the right-hand side of\n\\eqref{stationary}\ncoincide and we recover the Kubo-formula, \\cite{kubo66},\n\\begin{equation}\\label{eqform}\nR^{\\mbox{eq}}(t,s) = \\frac{1}{T}\\,\\frac{\\partial}{\\partial s}\\left<\nV(s)\\,O(t)\n \\right>_{\\textrm{eq}},\\quad 0 < s < t .\n\\end{equation}\n\n\\section{The zero range process}\\label{zrp}\nWe consider the open, boundary driven zero range process on the linear chain with sites $i=1,\\ldots,N$. A configuration $x=(x(1),\\ldots,x(N))$ specifies the occupation $x(i)\\in\\{0,1,2,\\ldots\\}$ of every site. In the bulk a particle jumps from site $i$ to each of its nearest neighbors $i\\pm 1$ at rate $w(x(i))$, a given function of the occupation of the departure site only, with $w(k)>0$ \nfor $k>0$. At the boundary site $i=1$ a particle is added at rate $\\alpha$ and at $i=N$ \nis added at rate $\\delta$, while a particle moves out from $i=1$ at rate $\\gamma\\, w(x(1))$ and moves out from $i=N$ \nat rate $\\beta\\, w(x(N))$. As reference for more details using mostly the same notation, we refer to \\cite{EH}.\\\\\nIt is well-known that the product distribution $\\rho = \\rho_{N,\\alpha,\\beta,\\gamma,\\delta}$ is invariant,\n\\begin{eqnarray}\\label{ste}\n\\rho(x) &=& \\prod_{i=1}^N \\nu_i(x(i)), \\quad \\nu_i(k) = \\frac{z_i^{k}}{{\\cal Z}_i}\\,\\frac 1{w(1)\\,w(2)\\,\\ldots w(k)},\\; k>0\\nonumber\\\\\n{\\cal Z}_i &=& 1 + \\sum_{k=1}^\\infty\\frac{z_i^{k}}{w(1)\\,w(2)\\,\\ldots w(k)}\n\\end{eqnarray}\nThe ``fugacities'' $z_i$ are of the form $z_i = Ci + B = z_1 + C(i-1)$ where\n\\[\nB :=\\frac{\\alpha + (1-\\gamma)C}{\\gamma},\\quad C:=\\frac{\\delta \\gamma - \\beta \\alpha}{\\beta \\gamma N + \\beta(1-\\gamma) + \\gamma}=\n\\frac{e^{\\mu_r\/T} - e^{\\mu_\\ell\/T}}{N}\\,\\big(1 + \\frac{\\beta +\\gamma-\\beta \\gamma}{\\beta\\gamma N}\\big)^{-1} \n\\]\nWe have introduced ``chemical potentials'' $\\mu_\\ell := T\\log \\alpha\/\\gamma$ and $\\mu_r := T\\log \\delta\/\\beta$ with \n$T$ the environment temperature.\nWhen $\\mu_\\ell=\\mu_r, \\alpha\/\\gamma = \\delta\/\\beta$, then $C=0, B=z_i=\\alpha\/\\gamma$; and detailed balance is satisfied. \nIf not, we get a stationary particle current (to the right) equal to $\\langle J_i\\rangle = -C = \\alpha -\\gamma z_1 = \\beta z_N - \\delta$ and thermodynamic \ndriving force $(\\mu_\\ell - \\mu_r)\/T = \\log \\alpha\/\\gamma - \\log \\delta\/\\beta$. Note however that for $C\\neq 0$ the (then nonequilibrium) stationary \ndistribution $\\rho$ also depends on purely kinetic (and not only on thermodynamic) aspects; they will again enter the \nresponse in terms of the dynamical activity.
\nFor example, fixing $\\alpha\/\\gamma$ and $\\delta\/\\beta$ does not determine $C$, trivially but importantly.\\\\\n\n\n\\subsection{Time-reversal}\nTo make explicit use of the formula \\eqref{rrr}, we need to know the time-reversed process, which is interesting in itself.\\\\ \nIn general for a Markov process as we had it described in Section \\ref{nonr} the time-reversed process is again a Markov jump \nprocess with generator $L^*$ in \\eqref{ster} and with rates\n\\[\nk^{\\text rev}(x,y) = k(y,x)\\,\\frac{\\rho(y)}{\\rho(x)}\n\\]\nfor the stationary distribution $\\rho$. Because we know the stationary distribution $\\rho$ of the zero range process as the\nproduct distribution \\eqref{ste}, it is actually easy to determine explicitly the time-reversed process. This is interesting \nalso because, \nby time-reversing, the particle current will be reversed\/change sign but the stationary density profile, as given in terms of the \nfugacities $z_i$, will remain the same. As can be guessed, that only works because by time-reversing one actually generates \nan external field. Let us see the details.\\\\\n\nFirst we take a bulk transition in which a particle hops to a neighboring site. \nTake $y=x-e_i+e_{i+1}$ where $e_i$ stands for the particle configuration with exactly one particle at site $i$. Then,\n\\[\n\\frac{\\rho(y)}{\\rho(x)} = \\frac{z_{i+1}}{z_i}\\frac{w(x(i))}{w(x(i+1))},\\quad k(y,x) = w(x(i+1)) \n\\]\nwhich means that in the time-reversed process a particle moves from site $i$ to $i+1$ at rate \n$k^{\\text rev}(x,x-e_i+e_{i+1}) = z_{i+1} \\,w(x(i))\/z_{i}$ while \nsimilarly for a jump from $i$ to $i-1$, $k^{\\text rev}(x,x-e_i+e_{i-1}) = z_{i-1} \\,w(x(i))\/z_{i}$. \nWe have therefore for the time-reversed process again a zero range process but now in an inhomogeneous bulk field \n\\[\nE_i := 2\\log \\frac{z_{i+1}}{z_{i}}\n\\]\nover the bond $(i,i+1)$, having the sign of $C$, i.e., pushing the particles towards the boundary {\\it where the chemical potential \nwas largest}. 
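\n\nA simple instance to keep in mind is $w(k)=k$, for which the zero range process describes independent random walkers. The stationary marginals \\eqref{ste} are then Poissonian, $\\nu_i(k)= e^{-z_i}\\,z_i^{k}\/k!$, so that the stationary density profile $\\langle x(i)\\rangle = z_i = Ci+B$ is linear, and the field generated by time-reversal becomes explicitly\n\\[\nE_i = 2\\log \\frac{z_{i+1}}{z_{i}} = 2\\log\\Big(1+\\frac{C}{z_i}\\Big)\n\\]\nwhich is indeed inhomogeneous, with the largest magnitude where the fugacity is smallest.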
\nAt the boundaries we find the creation and annihilation parameters for the time-reversed process to be\n\\[\n\\alpha^{\\text rev} = \n\\gamma z_1,\\quad \\beta^{\\text rev} = \\frac{\\delta}{z_N},\\quad \\gamma^{\\text rev}= \\frac{\\alpha}{z_1},\\quad \\delta^{\\text rev} =\\beta z_N.\n\\]\nThat means that the chemical potentials for the reversed process have become \n\\begin{eqnarray}\n\\mu_\\ell^{\\text rev} &=& - \\mu_\\ell +2 T\\log (e^{\\mu_\\ell\/T} + C\/\\gamma)\\nonumber\\\\\n\\mu_r^{\\text rev} &=& -\\mu_r + 2T \\log(e^{\\mu_r\/T} - C\/\\beta)\\nonumber\n\\end{eqnarray}\n\n\nNote of course that in the case of detailed balance $E_i\\equiv 0 $ and $\\alpha^{\\text rev} =\\alpha$ etc., \nso that the equilibrium process is unchanged\nby time-reversal.\\\\\n\nWe can now write down the explicit expression for the second term in \\eqref{rrr}:\n\\begin{eqnarray}\n(L - L^*)V\\,(x) &=& \\big(\\gamma-\\frac{\\alpha}{z_1}\\big)\\,w(x(1))\\,[V(x-e_1)-V(x)] + \\big(\\alpha-\\gamma z_1\\big)\\,[V(x+e_1)-V(x)] \\nonumber\\\\\n&+& \\big(\\beta-\\frac{\\delta}{z_N}\\big)\\, w(x(N))\\,[V(x-e_N)-V(x)] + \\big(\\delta -\\beta z_N\\big)\\,[V(x+e_N)-V(x)]\\nonumber\\\\\n&&-C\\sum_{i=1}^{N-1} \\frac{w(x(i))}{z_i} [V(x-e_i+e_{i+1})-V(x)]\\nonumber\\\\\n&&+C\\sum_{i=2}^{N} \\frac{w(x(i))}{z_i} [V(x-e_i+e_{i-1})-V(x)]\\nonumber\n\\end{eqnarray}\nApplying that for $V(x) = {\\cal N}(x):= x(1) + x(2) +\\ldots + x(N)$ the total number of particles in the system, we get\n\\begin{equation}\\label{lminl}\n(L - L^*){\\cal N}\\,(x) =\\big(\\frac{\\alpha}{z_1}-\\gamma\\big)\\,w(x(1)) + \\big(\\frac{\\delta}{z_N} - \\beta\\big)\\, w(x(N))\n\\end{equation}\nwhere we have also used that $\\gamma z_1 + \\beta z_N = \\alpha + \\delta$.\n\n\\section{Responses in the zero range process}\\label{rzrp}\nLet us consider the perturbation\n\\begin{equation}\\label{mape}\n\\alpha \\rightarrow q\\,\\alpha,\\quad \\beta\\rightarrow p'\\,\\beta,\\quad \\gamma \\rightarrow p\\,\\gamma,\\quad \\delta\\rightarrow q'\\,\\delta\n\\end{equation}\nto the parameters governing the entrance and exit rates at the boundaries of the system.\nTheir thermodynamic meaning is to shift the chemical potentials by $h_\\ell = T\\log q\/p$ for the left and by \n$h_r = T\\log q'\/p'$ for the right reservoir. 
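\nExplicitly, the left chemical potential $\\mu_\\ell = T\\log \\alpha\/\\gamma$ becomes under \\eqref{mape}\n\\[\n\\mu_\\ell \\rightarrow T\\log \\frac{q\\,\\alpha}{p\\,\\gamma} = \\mu_\\ell + T\\log\\frac{q}{p} = \\mu_\\ell + h_\\ell\n\\]\nand similarly $\\mu_r \\rightarrow \\mu_r + h_r$; only the ratios $q\/p$ and $q'\/p'$ have a thermodynamic meaning.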
Depending on the remaining freedom how to choose the $p, p'$ we can distinguish still several ``kinetic'' possibilities.\n\n\\subsection{``Potential'' perturbation}\\label{potp}\nA first possible perturbation that we consider is that\n\\begin{equation}\\label{shi}\n\\frac{q}{p} = \\frac{q'}{p'} = e^{ h\/T}\n\\end{equation}\nwith $h$ the small (equal) shift in left and right chemical potential.\nEven while the zero range process is not formulated directly in terms of a potential, even at detailed balance, \nit is still easy to fit \\eqref{shi} into the scheme of \\eqref{general}, in particular by choosing \n$h_t\\equiv h$ (time-independent), $a=b=1\/(2T)$, potential $V={\\cal N}$ equal to the particle number, and\n\\begin{equation}\\label{potper}\ne^{h\/(2T)}=q=q', \\quad e^{-h\/(2T)}=p=p'.\n\\end{equation}\n We can thus apply \\eqref{rrr} with formula \\eqref{lminl} to give the correct modification of the Kubo formula as \n \\begin{eqnarray}\\label{rrrtotc}\n\\frac{\\langle O(x_t)\\rangle^{h}- \\langle O \\rangle}{h} &=& \\frac 1{T} \\langle {\\cal N}\\, O\\rangle - \\frac 1{T}\\,\\langle {\\cal N}(x_0)\\, O(x_t)\\rangle \\nonumber\\\\\n&& +\\frac 1{2T}\\int_0^t\\id s\\{\\big(\\frac{\\alpha}{z_1}-\\gamma\\big)\\,\\langle w(x_0(1))\\,O(x_s)\\rangle + \\big(\\frac{\\delta}{z_N}-\\beta\\big)\\, \\langle w(x_0(N))\\,O(x_s)\\rangle\\} \n \\end{eqnarray}\nOf course we could also have used \\eqref{stationary} with $L{\\cal N}(x) = \\alpha + \\delta - \\gamma w(x(1)) -\\beta w(x(N))$ to obtain\n\\begin{eqnarray}\\label{rpo}\n\\frac{\\langle O(x_t)\\rangle^{h}- \\langle O \\rangle}{h} &=& \\frac 1{2T} \\langle {\\cal N}(x_t) - {\\cal N}(x_0); O(x_t)\\rangle \\nonumber\\\\\n&& +\\frac{1}{2T}\n\\int_0^t\\id s\\{\\gamma\\,\\langle w(x_0(1));O(x_s)\\rangle + \\beta\\, \\langle w(x_0(N));O(x_s)\\rangle\\} \n \\end{eqnarray}\nwhere we have used connected correlation functions $\\langle A;B\\rangle := \\langle A\\,B\\rangle - \\langle A\\rangle\\,\\langle B\\rangle$. The first term in the right-hand side \n is the entropic or dissipative part of the response, since in that correlation\none sees the observable $O$ correlated with the particle loss; the last term may be called the frenetic part of the response, since one meets there the correlation with the time-integrated escape rates.\\\\\n\n\nFinally one finds place for the Agarwal-Kubo formula \\eqref{ak}, which here is explicit because the stationary density \n$\\rho$ is given in \\eqref{ste}. 
For the ``potential'' perturbation \\eqref{shi}--\\eqref{potper} given \nby $\\alpha \\rightarrow (1+ h\/(2T)) \\alpha, \\beta \\rightarrow (1- h\/(2T)) \\beta, \\gamma \\rightarrow (1- h\/(2T)) \\gamma, \\delta \\rightarrow (1+ h\/(2T)) \\delta$ and under discussion so far, that gives\n\\begin{eqnarray}\\label{ak12}\n\\frac{{L^h}^+\\rho - L^+\\rho}{\\rho}(x) \n= \\alpha \\frac{h}{2T}\\, [\\frac{\\rho(x-e_1)}{\\rho(x)} - 1] + \\delta \\frac{h}{2T}\\,[\\frac{\\rho(x-e_N)}{\\rho(x)} -1]\n+ \\gamma\\frac{h}{2T}\\,w(x(1)) \\nonumber\\\\\n- \\gamma\\frac{h}{2T}w(x(1)+1)\\frac{\\rho(x+e_1)}{\\rho(x)} +\\beta\\frac{h}{2T}\\,w(x(N)) \n- \\beta\\frac{h}{2T}w(x(N)+1)\\frac{\\rho(x+e_N)}{\\rho(x)}\\nonumber\\\\\n= \\frac{h}{2T}\\{\\frac{\\alpha}{z_1}(w(x(1)) - z_1) + \\frac{\\delta}{z_N}(w(x(N)) - z_N)\n+\\gamma( w(x(1)) - z_1) + \\beta (w(x(N)) - z_N) \\}\n\\end{eqnarray}\nThis calculation results in the linear response formula\n\\begin{equation}\\label{lak}\n\\frac{\\langle O(x_t)\\rangle^{h}- \\langle O \\rangle}{h} = \\frac 1{2T}\n\\int_0^t\\id s\\{\\big(\\frac{\\alpha}{z_1}+ \\gamma\\big)\\,\\langle w(x_0(1));O(x_s)\\rangle \n+ \\big(\\frac{\\delta}{z_N}+ \\beta\\big)\\, \\langle w(x_0(N));O(x_s)\\rangle\\} \n\\end{equation}\n\n\\subsection{General perturbation}\n\nWe emphasize that the three response formul{\\ae} \\eqref{rrrtotc}--\\eqref{rpo}--\\eqref{lak} are mathematically identical. \nThey all start from the ``potential perturbation'' \\eqref{general} as realized in \\eqref{shi}--\\eqref{potper}. They are \nhowever not to be applied for other perturbations even consistent with \\eqref{shi}, except in equilibrium where the \nresponse does not pick up the detailed kinetics.\nLet us therefore do better (more general) and illustrate the systematic interpretation with unique formula \\eqref{eq:xsusc} to \nthe perturbation \\eqref{mape}.\\\\\n\n\nWe only need experience with entropy and no calculation to find the first term in \\eqref{eq:xsusc}. For the perturbation \\eqref{mape} \nthe entropic part in the response follows the usual (irreversible) thermodynamics and we must have the excess in entropy flux given by\n\\begin{equation}\\label{fer}\n \\mbox{Ent}^{\\left[ 0,t \\right]}(\\omega) = -\\frac{h_r}{T}\\,J_r(\\omega) -\\frac{h_\\ell}{T}\\,J_\\ell(\\omega)\n\\end{equation}\nwhere $J_r$ ($J_\\ell$) is the net number of particles that have exited to the right (left) reservoir (time-integrated current).\nWhen we specify to a perturbation like \\eqref{shi} in which the chemical potentials get shifted together, $h=h_r=h_\\ell$, we can use that $J_\\ell(\\omega) + J_r(\\omega) = {\\cal N}(x_0) - {\\cal N}(x_t)$ so that the excess in entropy flux becomes\n\\begin{equation}\\label{gfer}\n \\mbox{Ent}^{\\left[ 0,t \\right]}(\\omega) = \\frac{h}{T}\\,({\\cal N}_t - {\\cal N}_0)\n\\end{equation}\n proportional to the change over time in particle number.\\\\\n For the second term in \\eqref{eq:xsusc} we lack the experience and calculation will guide us. The point is that the \ndynamical activity \\eqref{setd} exactly picks up the time-symmetric part in the action for path-integration. More specifically, \nlet us now call $P^h$ the process started from the unperturbed stationary zero range process \\eqref{ste} but under the \nperturbed dynamics for a time $[0,t]$. The unperturbed stationary process is denoted by $P$. 
We can compute the action \n${\\cal A^h}$ for which\n\\[\nP^h = e^{-{\\cal A}^h}\\,P \\simeq (1-{\\cal A}^h)\\,P\n\\]\nwith\n\\begin{eqnarray}\n{\\cal A}^h &=& - I^\\ell_{\\shortleftarrow}\\,\\log p- I^\\ell_{\\shortrightarrow}\\,\\log q - I^r_{\\shortrightarrow}\\,\\log p' - \nI^r_{\\shortleftarrow}\\,\\log q'\\nonumber\\\\\n &+& \\int_0^t \\id s \\{(p-1)\\,\\gamma\\, w(x_s(1)) + (p'-1)\\,\\beta\\, w(x_s(N)) + (q-1)\\alpha + (q'-1) \\delta \\}\n\\end{eqnarray}\nwhere for example $I^\\ell_{\\shortrightarrow}$ equals the total number of particles that have entered the system from \nthe left, and $I^r_{\\shortrightarrow}$ is the total number of particles that have escaped to the right reservoir.\nWe decompose this action with the time-reversal $\\theta$ which makes $(\\theta x)_s= x_{t-s}$, so that\nthe response (up to higher order in $h$) can be obtained from\n\\begin{eqnarray}\nP^h - P &=& \\frac 1{2}[{\\cal A}^h\\theta - {\\cal A}^h]\\,P - \\frac 1{2}[{\\cal A}^h\\theta + {\\cal A}^h]\\,P\\nonumber\\\\\n&=& \\{\\frac 1{2} \\mbox{Ent}^{\\left[ 0,t \\right]}\n- \\mbox{Esc}^{\\left[ 0,t\\right] }\\}\\,P\n\\end{eqnarray}\nwhere we indicate the general relation with \\eqref{eq:xsusc}.\\\\\nIn particular, we verify that\n\\[\n{\\cal A}^h\\theta - {\\cal A}^h = \\log\\frac{q}{p} \\,(I^\\ell_{\\shortrightarrow} - I^\\ell_{\\shortleftarrow} ) +\n\\log\\frac{p'}{q'} \\,(I^r_{\\shortrightarrow} - I^r_{\\shortleftarrow} ) \n\\]\nindeed exactly equals \\eqref{fer} (using for example $I^\\ell_{\\shortrightarrow} - I^\\ell_{\\shortleftarrow} = -J_\\ell$). \nOn the other hand, for the time-symmetric part\n\\begin{eqnarray}\n{\\cal A}^h\\theta + {\\cal A}^h &= & -\\log (pq) \\,I^\\ell - \\log (p'q') \\,I^r \n+ 2(p-1) \\gamma \\int_0^t \\id s \\,w(x_s(1))\\nonumber\\\\ &+& 2(p'-1) \n\\beta \\int_0^t \\id s\\, w(x_s(N)) + 2(q-1) \\alpha t + 2 (q'-1) \\delta \\,t\n\\end{eqnarray}\nwith left activity $I^\\ell := I^\\ell_{\\shortleftarrow} + I^\\ell_{\\shortrightarrow}$ the total number of \ntransitions at the left boundary and similarly for $I^r$ at site $N$. \nThe excess in dynamical activity $\\mbox{Esc}^{\\left[ 0,t\\right] } = ({\\cal A}^h\\theta + {\\cal A}^h)\/2$ that we need for \nthe general response in \\eqref{eq:xsusc} is thus\n\\begin{eqnarray}\\label{fes}\n\\mbox{Esc}^{\\left[ 0,t\\right] }(\\omega) &= & -\\log \\sqrt{pq} \\,I^\\ell - \\log \\sqrt{p'q'} \\,I^r \n+ (p-1) \\gamma \\int_0^t \\id s \\,w(x_s(1))\\nonumber\\\\ &+& (p'-1) \\beta \\int_0^t \\id s\\, w(x_s(N)) + (q-1) \\alpha t + (q'-1) \\delta \\,t\n\\end{eqnarray}\nNote that of course here the separate $p,p'$ and $q,q'$ play a role, and not just their ratio\n$p\/q, p'\/q'$ as for \\eqref{fer} --- that is how the frenetic contribution picks up kinetic information, while \nthe entropic part is purely thermodynamic.\nSubstituting \\eqref{fer} and \\eqref{fes} into \\eqref{eq:xsusc} gives the general response of the zero range process under \\eqref{mape}.\nA natural application is to look at how the current into the left reservoir changes when $h_r=0, h_\\ell = -a$ or $q'=p=p'=1$ but $q=1 -a\/T$, \ndecreasing (for $a>0$) the chemical potential of the left reservoir. 
Then, for that choice, \\eqref{fer} and \\eqref{fes} give\n\\begin{equation}\\label{gkr}\n\\langle J_\\ell\\rangle^h - \\langle{J_\\ell}\\rangle = \\frac{a}{2T}\\langle J_\\ell;J_\\ell\\rangle -\\frac{a}{2T}\\langle J_\\ell;I^\\ell\\rangle\n\\end{equation}\nwhich is the modification to the Green-Kubo relation \\cite{yan10} , for all times $t>0$, for the boundary driven zero range process. \nObserve that it is the correlation between current $J_\\ell$ and dynamical activity $I^\\ell$ that governs the correction. \nWhen $t\\uparrow +\\infty$, the conductivity will of course coincide with the change of $C$ in \\eqref{ste} under $\\alpha$.\nThere is a similar relation for the change in expected dynamical activity, so that in fact\n\\[\n\\langle J_\\ell + I^\\ell \\rangle^h - \\langle{J_\\ell + I^\\ell}\\rangle = \n\\frac{a}{2T}\\langle J_\\ell;J_\\ell\\rangle -\\frac{a}{2T}\\langle I^\\ell;I^\\ell\\rangle\n\\]\nis given by a difference between variances of the current and dynamical activity,\nwhere still $\\langle J_\\ell\\rangle = C =-\\alpha+\\gamma z_1,\\langle I^\\ell\\rangle = \\alpha + \\gamma z_1$.\\\\\nFormul{\\ae} \\eqref{gfer}--\\eqref{fes} in \\eqref{eq:xsusc} will of course also lead again to a formula equal \nto each of the \\eqref{rrrtotc}--\\eqref{rpo}--\\eqref{lak} when restricting to \\eqref{shi}--\\eqref{potper}.\n\n\\subsection{``External'' perturbation} \nShifting the chemical potentials (from the outside) realistically means to change $\\alpha\\rightarrow q\\,\\alpha$ and \n$\\delta \\rightarrow q'\\, \\delta$ but not the exit rates $\\beta$ and $\\gamma$. That is thermodynamically the same \n(in the shift of chemical potentials) as for the ``potential'' perturbation in Section \\ref{potp} but it is kinetically different. \nThe response formul{\\ae} \\eqref{rrrtotc}--\\eqref{rpo}--\\eqref{lak} are then invalid except at equilibrium.\n Here we look when we change only the rates of the incoming particles in \\eqref{mape} but restricting ourselves to \\eqref{shi}:\n\\begin{equation}\n\\label{rep}\np=1=p',\\quad q=q'=1+ h\/T\n\\end{equation}\n Note that the expected total activity in the unperturbed steady regime equals\n\\[\n\\langle I^\\ell+I^r \\rangle = (\\alpha + \\gamma z_1 + \\beta z_N + \\delta)t = 2(\\alpha + \\delta)t\n\\]\nbecause the stationary current equals $\\alpha-\\gamma z_1 = \\beta z_N - \\delta$.\nThat means that the excess dynamical activity \\eqref{fes} (for perturbation \\eqref{rep}) simply equals\n\\begin{equation}\n\\mbox{Esc}^{\\left[ 0,t\\right] }(\\omega) = \\frac{h}{2T} \\{ \\langle I^\\ell+I^r \\rangle - [I^\\ell + I^r]\\} \n\\end{equation}\nwhich is now very visibly related to the dynamical activity. We therefore find the linear response formula \\eqref{eq:xsusc} to become\n\\begin{equation}\\label{LINR}\n\\frac{\\langle O(\\omega)\\rangle^h - \\langle O(\\omega)\\rangle}{h} = \\frac{1}{2T} \\,\\langle({\\cal N}_t - {\\cal N}_0);O(\\omega)\\rangle \n+\\frac{1}{2T}\\langle (I^\\ell+I^r); O(x)\\rangle\n\\end{equation}\nwhich is another result for the linear response of the boundary driven zero range model when both left and right entrance rates have been \nincreased with the same small amount. Note that from \\eqref{eq:xsusc} it is here also possible to take a general path-observable \n$O(\\omega)$ that depends on the whole trajectory $\\omega$. 
The first term is entropic corresponding to the dissipation of particles and \nthe second term is frenetic with the total dynamical activity \n$I := I^\\ell+I^r = I^\\ell_{\\shortleftarrow} + I^\\ell_{\\shortrightarrow} +I^r_{\\shortleftarrow} + I^r_{\\shortrightarrow}$.\\\\ \nLet us check the formula \\eqref{LINR} for the linear response around equilibrium ($C=0$, detailed balance), \nand with $O = I$ the total activity. Then, since the first term $\\langle({\\cal N}_t - {\\cal N}_0);O(\\omega)\\rangle^{\\text eq} = 0$ \nfor time-symmetric $O$, we have a Green-Kubo type formula for the linear response of the dynamical activity around equilibrium:\n\\begin{equation}\n\\frac{\\langle I \\rangle^h - \\langle I\\rangle^{\\text eq}}{h} =\\frac{1}{2T}\\,\\mbox{Var} I > 0\n\\end{equation}\nwith, in the right-hand side, the unperturbed equilibrium variance of the dynamical activity giving the expected change in \nthat same dynamical activity when the left and right chemical potentials get slightly shifted. Whether, say for positive $h$, \nthe change in dynamical activity remains positive also for boundary driven zero range processes depends apparently on whether \nthe dynamical activity is positively or negatively correlated with the dissipation of particles. One could guess that \nfor very small $\\alpha, \\delta \\ll 1$ while keeping $ \\gamma,\\beta w(k) \\simeq 1$ (low temperature reservoirs) there is \na negative correlation between ${\\cal N}_t - {\\cal N}_0$ and $I$ which would make at least the first \nterm in \\eqref{LINR} for $O=I$ negative.\\\\\nIn any event however, be it equilibrium or nonequilibrium, we have the positivity of\n\\begin{equation}\n \\frac{\\langle {\\cal N}(x_t) + I\\rangle^h - \\langle {\\cal N}(x_0)+ I\\rangle}{h} =\\frac{1}{2T} \\,\\,\\mbox{Var} ({\\cal N}_t - {\\cal N}_0 + I) \n> 0\n\\end{equation}\nby taking the observable $O = {\\cal N}_t - {\\cal N}_0 + I$ in \\eqref{LINR}.\\\\\n\n \nLet us further simplify and take $O$ in \\eqref{LINR} a state function. It is then relevant to see how the stationary \ndistribution \\eqref{ste} gets modified under \\eqref{rep}. It is straightforward to check that $C, B \\rightarrow qC, qB$ so that \nthe new ``fugacities'' become equal to $qz_i$. The stationary distribution thus simply changes by multiplying $\\exp[h{\\cal N}(x)\/T]$ \nto the weights $\\rho(x)$. It is therefore not so surprising that the linear response drastically simplifies. \nTo check it we take the opportunity to illustrate again the Agarwal-Kubo procedure \\eqref{ak} but now for the perturbation \\eqref{rep}:\n\\begin{eqnarray}\n\\frac{{L^h}^+\\rho - L^+\\rho}{\\rho}(x) &=& \n\\alpha (q-1)\\, [\\frac{\\rho(x-e_1)}{\\rho(x)} - 1] + \\delta (q'-1) \\,[\\frac{\\rho(x-e_N)}{\\rho(x)} -1]\\nonumber\\\\\n&=& \\alpha \\frac{h}{T}\\, [\\frac{w(x(1))}{z_1} - 1] + \\delta \\frac{h}{T} \\,[\\frac{w(x(N))}{z_N} -1]\\nonumber\n\\end{eqnarray}\nwhere we substituted the known stationary distribution $\\rho$ from \\eqref{ste}. 
On the other hand, the backward generator \nof the time-reversed process equals\n\\[\nL^*{\\cal N}\\,(x) = -\\frac{\\alpha}{z_1} \\,w(x(1)) + \\gamma z_1 + \\beta z_N -\\frac{\\delta}{z_N}\\,w(x(N)) \n\\]\nand $\\alpha-\\gamma z_1 + \\delta -\\beta z_N=0$.\nTherefore,\n\\begin{equation}\\label{acc}\n\\frac{{L^h}^+\\rho - L^+\\rho}{\\rho} = -\\frac{h}{T}\\,L^* {\\cal N}\n\\end{equation}\nAs a consequence, using \\eqref{ak} results in the linear response exactly of the same form \\eqref{eqform} as in equilibrium, because\n (with $V = {\\cal N}$ in \\eqref{stra}),\n \\begin{eqnarray}\n\\frac{\\id}{\\id s} \\langle {\\cal N}(x_s)\\,O(x_t)\\rangle &=& - \\frac{\\id}{\\id t} \\langle {\\cal N}(x_0)\\,O(x_{t-s})\\rangle\\nonumber\\\\\n&=& -\\langle {\\cal N}(x_0) LO(x_{t-s})\\rangle = -\\langle L^*{\\cal N}(x_0)\\,O(x_{t-s})\\rangle\n\\label{kuboform}\n\\end{eqnarray} \nIn other words, for state observables the linear response of any boundary driven zero range process \nto ``external'' perturbations \\eqref{rep} \nhas always the same equilibrium Kubo-form \\eqref{eqform}, independent of being close or far from detailed balance.\n\n\n\n\n\\section{Intersections of equilibrium and nonequilibrium evolutions}\\label{eqd}\nThe difference between equilibrium and nonequilibrium processes is not always so crystal clear. For exampe, if one starts \nwith a dynamics for which the Gibbs distribution $\\sim e^{-\\beta H}$ is invariant, for some Hamiltonian $H$, then that \ndistribution is also obviously unchanged when adding extra transformations or updating that leave the Hamiltonian $H$ invariant. \nOn a more formal level, suppose we modify the Liouville equation to\n\\begin{equation}\n\\frac{\\partial}{\\partial t} \\rho(x,t) + \\{\\rho,H\\} = \\int\\id x [k(y,x)\\,\\rho(y) - k(x,y)\\, \\rho(x)]\n\\end{equation}\nwhere the right-hand side involves transition rates $k(x,y)$ between states $x\\rightarrow y$. \nIf these $k(x,y)$ are zero unless $H(x) = H(y)$, then $\\rho\\sim \\exp[-\\beta H]$ remains of course invariant. \nOn the other hand, the modified dynamics need not at all to satisfy detailed balance and then the resulting \nstationary regime will not be time-reversal invariant.\n\nThe Kubo formula \\eqref{eqform} summarizes equilibrium linear response in terms of a fluctuation-dissipation formula. As we have seen in the previous \nsection with the combination \n\\eqref{acc}--\\eqref{kuboform}, \nthe Kubo formula extends to the zero range process and for \nexternal perturbations \\eqref{rep} to the nonequilibrium case. \nIn the present section we look at that from a more general perspective.\n\n\n\n\n\\subsection{Special perturbations}\\label{spp}\nA special case arises when $b=0$ and $a=1\/T$ in (\\ref{general}),\nbecause then the response is of the equilibrium form \\eqref{eqform}.\\\\\nSuppose we have (quite arbitrary) a Markov jump process with rates $k(x,y)$ that we perturb\nby adding a time-dependent potential into\n\\begin{equation}\\label{suft}\nk_t(x,y) = k(x,y)\\, e^{-h_t V(x)\/T}\n\\end{equation}\nwhere $h_t$ is the small parameter. \nThe linear response formula is obtained by putting $b=0$ in \\eqref{stationary} \nwhich gives the Kubo-equilibrium formula.\\\\\nThat can also be seen\nfrom the following consideration. Take $h$ to be constant; the law\n$\\rho^h$ defined by $\\rho^h(x) \\propto \\rho(x)e^{ h V(x)\/T}$ is\nstationary for the new dynamics (to all orders in $h$). 
In other\nwords, here the resulting behavior under this perturbation is like\nin equilibrium, even though the unperturbed dynamics can be far\nfrom equilibrium.\\\n\nThe case of perturbation \eqref{rep} for zero range is just slightly different and is summarized in \eqref{acc}, which \nis the condition that there exists a function $V$ for which\n\[\n({L^h}^+ - L^+)\rho = h\,\rho\,L^*V = h\, L^+(V\rho)\n\]\n for the stationary density $\rho$. That\nis equivalent to finding a potential $V$ so that for all functions $f$\n\begin{equation}\label{suf}\n\sum_x ((L^h-L)f )(x)\,\rho(x) = h\sum_x (Lf)(x)\, V(x)\, \rho(x)\n\end{equation}\nIt is easily seen that \eqref{suf} exactly follows when $L^h = (1+ hV)\,L$ which (basically) is \n\eqref{suft}. Therefore, \eqref{acc} or \eqref{suf} is only slightly weaker than \eqref{suft}.\n\n\n\n\n\n\n\n\subsection{Density response in the boundary driven Lorentz gas}\label{lg}\n\begin{figure}\begin{center}\includegraphics[scale=0.9]{lorentzfig.eps}\caption{The boundary driven Lorentz gas. \nA flat rectangular slab is placed between two thermo-chemical reservoirs and contains \nan array of fixed discs, which scatter particles (red dots) via elastic collisions. \nThe centers of the scatterers of radius $R$ are placed in a regular triangular lattice with {\it finite horizon}; that is, \nthe distance between the centers\nof contiguous disks ($4R\/\sqrt 3$) ensures that a particle cannot cross the distance of a unit \ncell without \ncolliding at least once with a scatterer. There is a uniform temperature $T$ in the \nreservoirs, which determines the velocities of all gas particles. \nIn the molecular dynamics simulation, when a particle hits a boundary wall \nit disappears from the system, while other particles are injected \nto the system at given rates, proportional to each reservoir density.\n}\label{fig:lgs}\end{center}\end{figure}\nThe Lorentz gas is a well-known mechanical model of particle scattering that reproduces \nelectron transport in metals \cite{lor05, dru00}. For our present purposes, what becomes \nimportant is the fact that on the appropriate scales of time and energy the \nLorentz gas is diffusive, see \cite{sza00, kla07} and references therein. \nMoreover, when the system is connected to reservoirs, \nthe ``external'' perturbations \eqref{rep} become very natural. \nThus, one can expect that the response for the density profile follows the zero range process \nas studied in the previous sections. \nWe have performed \nextensive numerical experiments on such a model to corroborate our expectations. \\\n\nTo be more precise, consider the two-dimensional slab containing a Lorentz gas illustrated in Fig.~\ref{fig:lgs}. \nThere is a cloud of point particles which \nmove freely in the space between the array of scatterers and collide elastically with them. \nThe vertical coordinate is periodic and in the horizontal direction \nthere are left and right boundary walls, which connect the system to thermo-chemical reservoirs, \ncharacterized by chemical potentials $\mu_\ell, \mu_r$ with uniform \ntemperature $T$. In terms of the mean reservoir density $\rho $, the reservoir chemical potential \n$\mu \propto T\ln (\rho\/T)$.
During time evolution, as a particle hits the boundaries, it moves into a reservoir; \nadditionally, other particles are emitted to the system at given rates \n$\\pi _{\\ell,r}\\sim \\rho _{\\ell,r}\\sqrt{T}$ and incoming velocities taken \nfrom Maxwellians at temperature $T$. \nThe complete model of stochastic thermal and particle reservoirs connected to \nthe Lorentz slab is \nborrowed from a similar work on a modified Lorentz gas; a detailed description about the choice \nof emission rates and chemical potential, temperature and incoming particle velocities from the reservoirs \ncan be found there \\cite{splg03}. In our present case we are interested in independent particles \nwith constant temperature $T$; with this setting in mind the planar Lorentz gas slab of \nFig. \\ref{fig:lgs} evolves to a nonequilibrium stationary state with diffusive transport of particles, \nwhenever $\\Delta \\mu \\equiv \\mu_\\ell - \\mu_r\\neq 0$.\\\\\n\nWe now wish to connect this model with the zero range model. \nThe rates at which particles enter (like $\\alpha$ and $\\delta$ in \nthe zero range process) are controlled externally by the nominal reservoir \ntemperature and the chemical potentials. \nFor the rates at which individual particles leave, that is only controlled by the temperature and the local \n(boundary) density. Thus, one is under perturbation \\eqref{rep}. We have therefore proceeded to test \nwhether our boundary driven Lorentz gas satisfies the response as predicted by the Kubo-formula \\eqref{eqform} \nindependent of the distance to equilibrium. The simulation result is indeed positive. \n\n\\begin{center}\\begin{figure}\\includegraphics[scale=1.2]{neqfdtsol.eps}\n\\caption{The response in the number of particles \n$\\mathcal{N}$ of the driven Lorentz gas when both reservoir chemical potentials are shifted, \n$\\mu_{\\ell,r}\\rightarrow \\mu_{\\ell,r} + h$. \nThe full curve is the Kubo-equilibrium formula, calculated with $\\Delta \\mu \/T=0.2$, $T=150$. \nThe dotted curve \ncorresponds to direct measurements of $\\mathcal{N}$ while performing the shift at $t=0$. These curves are \nobtained from averages over $1.5\\times 10^6$ initial conditions. Also in the plot, \nthe crosses (blue) show the response obtained by solving \nthe diffusion equation $\\partial _t \\rho(x,t)= \\lambda \\partial _{xx}\\rho(x,t)$, with $\\lambda $ diffusivity, \ntaking the stationary unperturbed particle density profile as initial condition, and perturbed densities \nas boundary conditions. \n}\\label{fig:responselgs}\n\\end{figure}\\end{center}\n\nWe have carried out nonequilibrium molecular dynamics simulations of the system in Fig. \\ref{fig:lgs} and have taken \nas observable the total number ${\\cal N}$ of particles in the system. \nThe perturbation simply consists of modifying the reservoir densities, so that the entrance rates $\\pi _{\\ell,r}$ \nare shifted by the same small amount $\\pi _{\\ell,r} \\rightarrow \\pi _{\\ell,r}e^{h\/T}$ (depending also on the constant \ntemperature). The response of $\\mathcal{N}_t$ to this perturbation is shown in Fig \\ref{fig:responselgs} \nfor a nonequilibrium stationary regime with moderate driving of $\\Delta \\mu\/T=0.2$ and $T=150$, which relaxes to \na new stationary regime with different chemical potentials. The perturbation \nis applied at time $t=0$ and the system is then observed in transient states which evolve to the new stationary state. 
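\nBecause the gas particles are independent, the prediction plotted with crosses in Fig.~\ref{fig:responselgs} can be reproduced by integrating the linear diffusion equation with shifted boundary densities. The following minimal sketch illustrates that calculation; the diffusivity $\lambda$, the densities, the grid and the time step are illustrative assumptions and are not the parameters of the molecular dynamics simulations reported here.
\begin{verbatim}
# Minimal sketch (illustrative parameters, not the simulation values):
# predict the particle-number response of a boundary-driven diffusive slab
# by integrating  d rho/dt = lam * d^2 rho/dx^2  with Dirichlet boundary
# values set by the reservoir densities, shifted at t = 0 by exp(h/T).
import numpy as np

lam = 1.0                      # diffusivity (assumed)
L, nx = 1.0, 201               # slab length and number of grid points (assumed)
rho_l, rho_r = 1.2, 1.0        # unperturbed reservoir densities (assumed)
h, T = 0.05, 150.0             # perturbation strength and reservoir temperature

x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
dt = 0.4 * dx**2 / lam         # stable explicit Euler step
rho = rho_l + (rho_r - rho_l) * x / L   # unperturbed stationary (linear) profile
N0 = rho.sum() * dx            # initial total number of particles

# mu_{l,r} -> mu_{l,r} + h corresponds to multiplying both densities by exp(h/T)
bl, br = rho_l * np.exp(h / T), rho_r * np.exp(h / T)

t, t_end = 0.0, 1.0
while t < t_end:
    rho[0], rho[-1] = bl, br   # perturbed boundary conditions
    rho[1:-1] += dt * lam * (rho[2:] - 2.0 * rho[1:-1] + rho[:-2]) / dx**2
    t += dt

print("response (N(t_end) - N(0)) / h =", (rho.sum() * dx - N0) / h)
\end{verbatim}
Any standard discretization would do here; the explicit scheme above is chosen only for brevity.\n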
\nEach response curve consists of averages over an ensemble of $1.5\\times 10^3$ initial conditions from the steady regime; \nrelaxation to the final (stationary) state takes about $9.15\\times 10^4$ collisions in the gas. \nThe response for a similar setting with a higher driving $\\Delta \\mu \/T = 2.0$, and using either \nof the terms in \\eqref{kuboform}, gives similar outcomes: indeed we see that \nthe Kubo-relation \\eqref{eqform} follows no matter how far from equilibrium we are. That is not surprising because of the \nindependence of the particles; actually we can predict all density \nresponses simply from solving the linear diffusion equation. This is also shown in \nFig. \\ref{fig:responselgs} with the curve in crosses. Yet, one must note that this interesting example \nis just a special case of what happens more generally \nin the zero range model (possibly showing non-linear hydrodynamics).\n\n\\section{Conclusions}\nOne of the less understood facts of nonequilibrium physics is that the regime of linear response around equilibrium appears \nto extend sometimes quite beyond its theoretical boundaries. Depending on the situation, that is the case for certain \ntransport equations like the Fourier or even sometimes Ohm's law, but also for the more general regime of hydrodynamics where local \nequilibrium often appears to be a very good approximation. In nonequilibrium and irreversible thermodynamics, \nGreen-Kubo relations and general principles like the minimum\/maximum \nentropy production principle often continue to work \nand are used beyond their theoretical limits of validity. \n\nIn fact, one of the reasons for not having yet an established nonequilibrium statistical mechanics may well be the lack of \nurgent questions as irreversible thermodynamics continues to work surprisingly well in a large range of transport and \nrate processes in physical or chemical systems. Much of standard thermodynamics can even be mimicked for relatively \nsmall systems without feeling the urge for new concepts beyond those available in close-to-equilibrium regimes. \nOnly with turbulence and very-far-from-equilibrium processes where new phenomena such as pattern formation and \nself-organization appear, do we really see major modifications with respect to the traditional approach. \\\\\n\nIn this paper we have studied response in the nonequilibrium zero range process, \ngiving explicit expressions of the entropic and frenetic terms in which such response is formally decomposed.\nThat was done for various types of perturbations to the boundary rates. We have found systematic contributions\nof correlation functions with the dynamical activity to correct in general the Kubo-equilibrium formula.\nThere are in particular modified Green-Kubo relations where the current and the dynamical activity complement their responses.\nThere is however also an important case of ``external'' perturbations where the response retains the equilibrium form; \nthat can also be checked for the driven Lorentz gas, which is a microscopic mechanical model.\nWe may expect similar behavior for other boundary driven systems with diffusive transport \nfor which the analogy with certain aspects of the zero range process can be argued. 
\\\\\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzdckf b/data_all_eng_slimpj/shuffled/split2/finalzzdckf new file mode 100644 index 0000000000000000000000000000000000000000..10340d63ab55a77664679e2593f5c8c989a5f9f9 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzdckf @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nHidden-variable theories allege that a state of a quantum system, even\nif it is pure and thus contains as much information as quantum\nmechanics permits, actually describes an ensemble of systems with\ndistinct values of some hidden variables. Once the values of these\nvariables are specified, the system becomes determinate or at least\nmore determinate than quantum mechanics says. Thus the randomness in\nquantum predictions results, entirely or partially, from the\nrandomness involved in selecting a member of the ensemble.\n\nNo-go theorems assert that, under reasonable assumptions, a hidden-variable interpretation cannot reproduce the predictions of quantum mechanics. In this paper, we examine two species of such theorems, \\emph{value} no-go theorems and \\emph{expectation} no-go theorems.\nThe value approach originated in the work of Bell \\cite{bell64,bell66} and of Kochen and Specker \\cite{ks} in the 1960's. Value no-go theorems establish that, under suitable hypotheses, hidden-variable theories cannot reproduce the predictions of quantum mechanics concerning the possible results of the measurements of observables.\n\nThe expectation approach was developed in the last decade by Spekkens \\cite{spekkens} and by Ferrie, Emerson, and Morris \\cite{ferrieA,ferrieB,ferrieC}, with \\cite{ferrieC} giving the sharpest result. In this approach, the discrepancy between hidden-variable theories and quantum mechanics appears in the predictions of the expectation values of the measurements of effects, i.e.\\ the elements of POVMs, positive operator-valued measures. There is no need to consider the actual values obtained by measurements or the probability distributions over these values.\n\nIn both cases, measurements are associated to Hermitian operators, but\nthey are different sorts of measurements. In the value approach,\nHermitian operators serve as observables, and measuring one of them\nproduces a number in its spectrum. In the expectation approach,\ncertain Hermitian operators serve as effects, and measuring one of\nthem produces 0 or 1, even if the spectrum consists entirely of other points.\nThe only Hermitian operators for which these two uses coincide are\nprojections.\n\nWe sharpen the results of both approaches so that only projection\nmeasurements are used. Regarding the expectation approach, we\nsubstantially weaken the hypotheses. We do not need arbitrary effects,\nbut only rank-1 projections. Accordingly, we need convex-linearity\nonly for the hidden-variable picture of states, not for that of\neffects. Regarding the value approach, it turns out that rank-1\nprojections are sufficient in the finite dimensional case but not in\ngeneral. 
Finally, using a successful hidden-variable theory of John\nBell for a single qubit, we demonstrate that the expectation approach does not subsume\nthe value approach.\n\n\\section{Expectation No-Go Theorem}\n\\label{sec:exp}\n\n\\begin{definition}\\label{def:exp}\\rm\nAn \\emph{expectation representation} for quantum systems described by a Hilbert space $\\H$ is a triple $(\\Lambda,\\mu,F)$ where\n\\begin{itemize}\n\\item $\\Lambda$ is a measurable space,\n\\item $\\mu$ is a convex-linear map assigning to each density operator $\\rho$ on $\\H$ a probability measure $\\mu(\\rho)$ on $\\Lambda$, and\n\\item $F$ is a map assigning to each rank-1 projection $E$ in $\\H$ a measurable function $F(E)$ from $\\Lambda$ to the real interval $[0,1]$.\n\\end{itemize}\nIt is required that for all density matrices $\\rho$ and all rank-1 projections $E$\n\\begin{equation}\\label{eq1}\n\\Tr{\\rho\\cdot E}=\\int_\\Lambda F(E)\\,d\\mu(\\rho)\n\\end{equation}\n\\end{definition}\n\nThe convex linearity of $\\mu$ means that $\\mu(a_1\\rho_1 + a_2\\rho_2) = a_1\\mu(\\rho_1) + a_2\\mu(\\rho_2)$ whenever $a_1,a_2$ are nonnegative real numbers with sum 1.\n\nThe definition of expectation representation is similar to Ferrie-Morris-Emerson's definition of the probability representation \\cite{ferrieC} except that (i)~the domain of $F$ contains only rank-1 projections, rather than arbitrary effects, and (ii)~we do not (and cannot) require that $F$ be convex-linear.\n\nIntuitively an expectation representation $(\\Lambda,\\mu,F)$ attempts to predict the expectation value of any rank-1 projection $E$ in a given mixed state $\\rho$. The hidden variables are combined into one variable ranging over $\\Lambda$. Further, $\\mu(\\rho)$ is the probability measure on $\\Lambda$ determined by $\\rho$, and $(F(E))(\\lambda)$ is the probability of determining the effect $E$ at the subensemble of $\\rho$ determined by $\\lambda$. The left side of \\eqref{eq1} is the expectation of $E$ in state $\\rho$ predicted by quantum mechanics and the right side is the expectation of $F(E)$ in the ensemble described by $\\mu(\\rho)$.\n\nBut why is $\\mu$ supposed to be convex linear? Well, mixed states have\nphysical meaning and so it is desirable that $\\mu$ be defined on mixed\nstates as well. If you are a hidden-variable theorist, it is most\nnatural for you to think of a mixed state as a classical probabilistic\ncombination of the component states. This leads you to the convex\nlinearity of $\\mu$. For example, if $\\rho = \\sum_{i=1}^k p_i\\rho_i$\nwhere $p_i$'s are nonnegative reals and $\\sum p_i = 1$ then, by the\nrules of probability theory, $(\\mu(\\rho))(S) = \\sum p_i(\\mu(\\rho_i))(S)$\nfor any measurable $S\\subseteq\\Lambda$. Note, however, that you cannot\nstart with any wild probability distribution $\\mu$ on pure states and\nthen extend it to mixed states by convex linearity. There is an\nimportant constraint on $\\mu$. The same mixed state $\\rho$ may have\ndifferent representations as a convex combination of pure states; all\nsuch representations must lead to the same probability measure\n$\\mu(\\rho)$.\n\n\\begin{theorem}[First Bootstrapping Theorem]\\label{thm:boot1}\nLet $\\H$ be a closed subspace of a Hilbert space $\\H'$. 
From any expectation representation for quantum systems described by $\\H'$, one can directly construct such a representation for systems described by $\\H$.\n\\end{theorem}\n\n\\begin{proof}\nWe construct an expectation representation $(\\Lambda,\\mu,F)$ for quantum systems described by $\\H$ from any such representation $(\\Lambda',\\mu',F')$ for the larger Hilbert space $\\H'$. To begin, we set $\\Lambda=\\Lambda'$.\n\nTo define $\\mu$ and $F$, we use the inclusion map $i:\\H\\to\\H'$,\nsending each element of $\\H$ to itself considered as an element of\n$\\H'$, and we use its adjoint $p:\\H'\\to\\H$, which is the orthogonal\nprojection of $\\H'$ onto $\\H$. Any density operator $\\rho$ over $\\H$,\ngives rise to a density operator $\\bar\\rho=i\\circ\\rho\\circ p$ over\n$\\H'$. Note that this expansion is very natural: If $\\rho$\ncorresponds to a pure state $\\ket\\psi\\in\\H$, i.e., if\n$\\rho=\\ket\\psi\\bra\\psi$, then $\\bar\\rho$ corresponds to the same\n$\\ket\\psi\\in\\H'$. If, on the other hand, $\\rho$ is a mixture of\nstates $\\rho_i$, then $\\bar\\rho$ is the mixture, with the same\ncoefficients, of the $\\overline{\\rho_i}$. Define\n$\\mu(\\rho)=\\mu'(\\bar\\rho)$.\n\nThe definition of $F$ is similar. For any rank-1 projection $E$ in $\\H$, $\\bar E=i\\circ E\\circ p$ is a rank-1 projection in $\\H'$, and so we define $F(E)=F'(\\bar E)$. If $E$ projects to the one-dimensional subspace spanned by $\\ket\\psi\\in\\H$, then $\\bar E$ projects to the same subspace, now considered as a subspace of $\\H'$.\n\nThis completes the definition of $\\Lambda$, $\\mu$ , and $F$.\nMost of\nthe requirements in Definition~\\ref{def:exp} are trivial to verify.\nFor the last requirement, the agreement between the expectation\ncomputed as a trace in quantum mechanics and the expectation computed\nas an integral in the expectation representation, it is useful to\nnotice first that $p\\circ i$ is the identity operator on $\\H$. We can then\ncompute, for any density operator $\\rho$ and any rank-1 projection\n$E$ on $\\H$,\n\\begin{align*}\n \\int_\\Lambda F(E)\\,d\\mu(\\rho)&=\\int_\\Lambda F'(\\bar E)\\,d\\mu'(\\bar\\rho)\n =\\Tr{\\bar\\rho\\bar E}\n =\\Tr{i\\circ\\rho\\circ p\\circ i\\circ E\\circ p}\\\\\n &=\\Tr{i\\circ\\rho\\circ E\\circ p}\n =\\Tr{\\rho\\circ E\\circ p\\circ i} = \\Tr{\\rho\\circ E},\n\\end{align*}\nas required.\n\\end{proof}\n\n\\begin{theorem}[Expectation no-go theorem]\\label{thm:exp}\n If the dimension of the Hilbert space $\\H$ is at least 2 then there is no expectation representation for quantum systems described by $\\H$.\n\\end{theorem}\n\nWe cannot expect any sort of no-go result in lower dimensions, because quantum theory in Hilbert spaces of dimensions 0 and 1 is trivial and therefore classical.\nBy the First Bootstrapping Theorem, it suffices to prove Theorem~\\ref{thm:exp} just in the case $\\Dim{\\H}=2$.\nBut we find Ferrie-Morris-Emerson's proof that works directly for\nall dimensions \\cite{ferrieC} instructive, and we adjust it to prove\nTheorem~\\ref{thm:exp}.\nThe adjustment involves adding some details and observing that a\ndrastically reduced domain of $F$ suffices. The adjustment also\ninvolves making a little correction. Ferrie et al.\\ quoted an\nerroneous\nresult of Bugajski \\cite{bugajski} which needs some additional\nhypotheses to become correct. 
Fortunately for Ferrie et al., those\nhypotheses hold in their situation.\n\n\\begin{proof}\nThe proof involves several normed vector spaces.\n\\begin{itemize}\n\\item $\\mathcal B$ is the real Banach space of bounded self-adjoint operators\n $\\H\\to\\H$ with norm\\\\ $\\nm A=\\sup\\{\\nm{Ax}:x\\in \\H,\\,\\nm x=1\\}$.\n\\item $\\mathcal F$ is the real vector space of bounded, measurable, real-valued functions on $\\Lambda$ with norm\\\\ $\\nm f = \\sup \\{|f(\\lambda)|: \\lambda\\in\\Lambda\\}$.\n\\item $\\mathcal M$ is the real vector space of bounded, signed, real-valued measures on $\\Lambda$ with the total variation norm $\\nm\\mu = \\mu_+(\\Lambda) + \\mu_-(\\Lambda)$ where $\\mu=\\mu_+-\\mu_-$ and $\\mu_+$ and $\\mu_-$ are positive measures with disjoint supports.\n\\item $\\mathcal T$ is the vector subspace of $\\mathcal B$ consisting of the trace-class operators. These are the operators $A$ whose spectrum consists of real eigenvalues $\\alpha_i$ such that the sum $\\sum_i|\\alpha_i|$ is finite; eigenvalues with multiplicity $>1$ are repeated in this list, and the continuous spectrum is empty or $\\{0\\}$. The sum $\\sum_i|\\alpha_i|$ serves as the norm of $A$ in $\\mathcal T$. The sum $\\sum_i\\alpha_i$ of eigenvalues themselves (rather than their absolute values) is the trace of $A$. Note that density operators are positive trace-class operators of trace 1.\n\\end{itemize}\n\nIn the rest of the proof, by default, operators, transformations and\nfunctionals are bounded and of course linear. Suppose, toward a contradiction, that we have an expectation representation $(\\Lambda,\\mu,F)$ for some $\\H$ with $\\Dim{\\H}\\ge2$.\n\n\\begin{lemma}\\label{lem:e1}\n$\\mu$ can be extended in a unique way to a transformation, also denoted $\\mu$, from all of $\\mathcal T$ into $\\mathcal M$.\n\\end{lemma}\n\n\\begin{proof}[Proof of Lemma~\\ref{lem:e1}]\nEvery $A\\in\\mathcal T$ can be written as a linear combination of two density operators. Indeed, if $\\nm A>0$ and $A$ is positive then $\\Tr{A}=\\nm A$ and $A={\\nm A}\\rho$ where $\\rho=\\frac{A}{\\nm A}$. In general, it suffices to represent $A$ as the difference $B-C$ of positive trace-class operators. Choose $A_+$ (resp.\\ $A_-$) to have the same positive (resp.\\ negative) eigenvalues and corresponding eigenspaces as $A$ and be identically zero on all the eigenspaces corresponding to the remaining eigenvalues. The desired $B = A_+$ and $C = - A_-$.\n\n\nIf $A$ is a linear combination $b\\rho+c\\sigma$ of two density\noperators, define $\\mu(A)=b\\mu(\\rho)+c\\mu(\\sigma)$.\nUsing the convex linearity of $\\mu$ on the density operators, it is easy to check that if $A$ has another such representation $b'\\rho'+c'\\sigma'$ then\n$b\\mu(\\rho) + c\\mu(\\sigma) = b'\\mu(\\rho') + c'\\mu(\\sigma')$ which means that $\\mu(A)$ is well-defined.\n\nThe uniqueness of the extension is obvious. It remains to check that\nthe extended $\\mu$ is bounded. In fact, we show more, namely that\n$\\nm{\\mu(A)} \\le1$ if $\\nm A \\le1$. \nSo let $A\\in\\mathcal T$ and $\\nm A \\le1$. As we saw above, there are\npositive trace-class operators $B,C$ such that $A = B-C$. Then $A =\n{\\nm B}\\rho - {\\nm C}\\sigma$ for some density operators $\\rho, \\sigma$\nwhere $b,c\\ge0$ and $b+c={\\nm A}\\le1$. Now, $\\mu(\\rho)$ and\n$\\mu(\\sigma)$ are measures with norm 1. 
So $\\nm{\\mu(A)} \\le b{\\mu(\\rho)}\n+ c{\\mu(\\sigma)}\\le b+c \\le 1$.\n\\end{proof}\n\nLet $\\mathcal M'$ be the space of the functionals $\\mathcal M\\to\\mathbb R$ where $\\mathbb R$ is the\nset of real numbers. Similarly let $\\mathcal T'$ be the space of the\nfunctionals $\\mathcal T\\to\\mathbb R$. $\\mu$ gives rise to a dual transformation\n$\\mu': \\mathcal M'\\to\\mathcal T'$ that sends any $h\\in\\mathcal M'$ to $\\mu'(h) = h\\circ\\mu$ so\nthat \n\\begin{equation}\\label{eq2}\n\\mu'(h)(A) = h(\\mu(A))\\quad \\text{for all $h\\in\\mathcal M'$ and all $A\\in\\mathcal T$}.\n\\end{equation}\nEvery measurable function $f\\in\\mathcal F$ induces a functional $\\bar f \\in\\mathcal M'$ by integration: $\\bar f(\\mu)=\\int_\\Lambda f\\,d\\mu$. This gives rise to a transformation $\\nu: \\mathcal F\\to\\mathcal T'$ that sends every $f$ to $\\mu'(\\bar f)$. Specifying $h$ to $\\bar f$ in Equation~\\ref{eq2} gives\n\\begin{equation}\\label{eq3}\n(\\nu f)(A) = \\int_\\Lambda f\\,d\\mu(A)\\quad\n\\text{for all $f\\in\\mathcal F$ and all $A\\in\\mathcal T$}.\n\\end{equation}\nHere and below we omit the parentheses around the argument of $\\nu$.\n\n\\begin{lemma}\\label{lem:e2}\nFor every $f\\in\\mathcal F$, there is a unique $B\\in\\mathcal B$ with $(\\nu f)(\\rho) =\n\\Tr{B\\cdot\\rho}$ for all density operators $\\rho$. \n\\end{lemma}\n\n\\begin{proof}[Proof of Lemma~\\ref{lem:e2}]\nEvery $B\\in\\mathcal B$ induces a functional $\\bar B\\in\\mathcal T'$ by $\\bar\nB(A)=\\Tr{B\\cdot A}$. Here $\\Tr{B\\cdot\\rho}$ is well-defined because\nthe product of a bounded operator and a trace-class operator is again\nin the trace class \\cite[Lemma~3, p.\\ 38]{schatten}.\n\nThe map $B\\mapsto\\bar B$ is an isometric isomorphism between $\\mathcal B$ and $\\mathcal T'$ \\cite[Theorem~2, p.~47]{schatten}.\nSo, for every $X\\in\\mathcal T'$, there is a unique $B_X\\in\\mathcal B$ such that $X(A) = \\Tr{B_X\\cdot A}$ for all $A\\in\\mathcal T$. Furthermore, there is a unique $B_X\\in\\mathcal B$ such that $X(\\rho) = \\Tr{B_X\\cdot\\rho}$ for all density operators $\\rho$. This is because, as we showed above, the linear span of the density matrices is the whole space $\\mathcal T$. The lemma follows because every $\\nu f$ belongs to $\\mathcal T'$.\n\\end{proof}\n\n\nFor any $f\\in\\mathcal F$, the unique operator $B$ with $(\\nu f)(\\rho) =\n\\Tr{B\\cdot\\rho}$ for all $\\rho$ will be denoted $[\\nu f]$. \n\n\\begin{lemma}\\label{lem:e3}\n$[\\nu F(E)] = E$ for every rank-1 projection $E$, and $[\\nu1]=I$ where 1 is the constant function with value 1 and $I$ is the unit matrix.\n\\end{lemma}\n\n\\begin{proof}[Proof of Lemma~\\ref{lem:e3}]\nLemma~\\ref{lem:e2} and equation~\\eqref{eq3} give\n\n\\begin{equation}\\label{eq4}\n\\Tr{[\\nu f]\\cdot\\rho}=\\int_\\Lambda f\\,d\\mu(\\rho)\\quad\n\\text{for every density operator }\\rho.\n\\end{equation}\n\nEquations~\\eqref{eq1} and \\eqref{eq4} imply\n\\begin{equation}\\label{eq5}\n\\Tr{\\rho\\cdot E} = \\int_\\Lambda F(E)\\,d\\mu(\\rho)\n = \\Tr{[\\nu F(E)]\\cdot\\rho}\n\\end{equation}%\nfor every density operator $\\rho$.\n\nThe right sides of Equations~\\eqref{eq1} and \\eqref{eq4} coincide if\nwe specify $f$ to $F(E)$. 
\nTherefore their left sides are equal.\n\n\begin{equation*}\n\Tr{E\rho} = \Tr{[\nu F(E)]\rho}\n\end{equation*}\n\nWe now invoke the last clause in Definition~\ref{def:exp} to find that, for\nall rank-1 projections $E$ and all density matrices $\rho$,\n\[\n\Tr{E\rho}=\int_\Lambda F(E)\,d\mu(\rho) = \Tr{[\nu F(E)]\rho}.\n\]\nBut this is, as we saw in the proof of Lemma~\ref{lem:e2}, enough to show that\n$[\nu F(E)]=E$.\n\nBy Lemma~\ref{lem:e2}, we see that $[\nu1]$ is the unique operator that satisfies, for all $\rho$,\n\[\n\Tr{[\nu1]\rho}=\int_\Lambda\,d\mu(\rho) = (\mu(\rho))(\Lambda)=1=\Tr\rho\n=\Tr{I\rho},\n\]\nwhere the third equality comes from the fact that $\mu$ maps density matrices to probability measures. Thus, $[\nu1]=I$.\n\end{proof}\n\n\begin{lemma}\label{lem:e4}\n For any two rank-1 projections $A,B$ of $\H$, there exists an operator $H\in\mathcal B$ such that all four of $H$, $A-H$, $B-H$, and $I-A-B+H$ are positive operators.\n\end{lemma}\n\n\begin{proof} [Proof of Lemma~\ref{lem:e4}]\nRecall that an operator $A$ is said to be\npositive if $\bra\psi A\ket\psi\geq0$ for all $\ket\psi\in\H$ and\nthat $A\leq B$ means that $B-A$ is positive. A function $f\in\mathcal F$ is \emph{nonnegative} if $f(\lambda)\geq0$ for all $\lambda\in\Lambda$.\n\n\begin{claim} \label{cla:e1} If $f\in\scr F$ is nonnegative then $[\nu f]$ is a\n positive operator. Therefore, if $f\leq g$ pointwise in $\mathcal F$ then\n $[\nu f]\le [\nu g]$ in $\mathcal B$.\n\end{claim}\n\n\begin{proof}[Proof of Claim~\ref{cla:e1}]\n The second assertion follows immediately from the first applied to\n $g-f$, because $\nu$ is linear. To prove the first assertion,\n suppose $f\in\scr F$ is nonnegative, and let $\ket\psi$ be any\n vector in $\H$. The conclusion we want to deduce, $\bra\psi\n [\nu f]\ket\psi\geq0$, is obvious if $\ket\psi=0$, so we may assume\n that \ket\psi\ is a non-zero vector. Normalizing it, we may assume\n further that its length is 1. Then $\ket\psi\bra\psi$ is a density operator\n and therefore $\mu(\ket\psi\bra\psi)$ is a measure. Using\n equation~\eqref{eq5}, we compute\n\[\n\bra\psi [\nu f]\ket\psi=\Tr{[\nu f]\ket\psi\bra\psi}\n=\int_\Lambda f\,d\mu(\ket\psi\bra\psi)\geq0,\n\]\nwhere we have used that both the measure $\mu(\ket\psi\bra\psi)$ and\nthe integrand $f$ are nonnegative.\n\end{proof}\n\nLet $\mathcal F_{[0,1]}$ be the subset of $\mathcal F$ comprising the functions all of whose values are in the interval $[0,1]$.\n\n\begin{claim}\label{cla:e2}\n For any $f,g\in\mathcal F_{[0,1]}$ there exists $h\in\mathcal F_{[0,1]}$ such that all four of $h$, $f-h$, $g-h$, and $1-f-g+h$ are nonnegative.\n\end{claim}\n\n\begin{proof}[Proof of Claim~\ref{cla:e2}]\n Define $h(\lambda)=\min\{f(\lambda),g(\lambda)\}$ for all\n $\lambda\in\Lambda$. Then the first three of the assertions in the\n claim are obvious, and the fourth becomes obvious if we observe that \n $f+g-h=\max\{f,g\}\leq 1$.\n\end{proof}\n\nNow we are ready to complete the proof of Lemma~\ref{lem:e4}.\nApply Claim~\ref{cla:e2} \nwith $f=F(A)$ and $g=F(B)$,\nlet $h$ be the function given by the claim, and let $H=[\nu(h)]$.
\nThe nonnegativity of $h$, $f-h$, $g-h$, and $1-f-g+h$ implies, by \nClaim~\ref{cla:e1}, \nthe positivity of $[\nu(h)]=H$, $[\nu(F(A)-h)]=A-H$,\n$[\nu(F(B)-h)]=B-H$, \nand $[\nu(1-F(A)-F(B)+h)]=I-A-B+H$, where we have also used the\nlinearity of $\nu$, the fact that $[\nu(1)]=I$, and the formula\n$[\nu(F(A))]=A$ for all $A$ in the domain of $F$.\n\end{proof}\n\nNow we are ready to prove Theorem~\ref{thm:exp}.\nLet us apply Lemma~\ref{lem:e3} to two specific rank-1 projections. Fix\ntwo orthonormal vectors \ket0 and \ket1. (This is where we use that\n$\H$ has dimension at least 2.) Let $\ket+=(\ket0+\ket1)\/\sqrt2$.\nWe use the projections $A=\ket0\bra0$ and $B=\ket+\bra+$ to the\nsubspaces spanned by $\ket0$ and $\ket+$. Let $H$ be as in\nLemma~\ref{lem:e4} for these projections $A$ and $B$.\n\nFrom the positivity of $H$ and of $A-H$, we get that\n$0\leq\bra1H\ket1$ and that\n\[\n0\leq\bra1(A-H)\ket1=\bra1A\ket1-\bra1H\ket1=-\bra1H\ket1,\n\]\nwhere we have used that \ket1, being orthogonal to \ket0, is\nannihilated by $A$. Combining the two inequalities, we infer that\n$\bra1H\ket1=0$ and therefore, since $H$ is positive, $H\ket1=0$.\nSimilarly, using the orthogonal vectors $\ket+$ and\n$\ket-=(\ket0-\ket1)\/\sqrt2$ in place of \ket0 and \ket1, we obtain\n$H\ket-=0$. So, being linear, $H$ is identically zero on the subspace\nof $\H$ spanned by \ket1 and \ket-; note that \ket0 is in this\nsubspace, so we have $H\ket0=0$.\n\nNow we use the positivity of $I-A-B+H$. Since $H\ket0=0$, we\ncan compute\n\[\n0\leq\bra0(I-A-B+H)\ket0=\sq{0|0}-\bra0A\ket0-\bra0B\ket0=\n1-1-\frac12=-\frac12.\n\]\nThis contradiction completes the proof of the theorem.\n\end{proof}\n\n\begin{remark}[Symmetry or the lack thereof]\rm\nIn view of the idea of symmetry or even-handedness suggested by\nSpekkens \cite{spekkens}, one might ask whether there is a dual\nversion of Theorem~\ref{thm:exp}, that is, a version that requires\nconvex-linearity for effects but looks only at pure states and does not\nrequire any convex-linearity for states.\nThe answer is no; with such requirements there is a trivial example of a successful hidden-variable theory, regardless of the dimension of the Hilbert space. The theory can be concisely described as taking the quantum state itself as the ``hidden'' variable. In more detail, let $\Lambda$ be the set of all pure states. Let $\mu$ assign to each operator $\ket\psi\bra\psi$ the probability measure on $\Lambda$ concentrated at the point $\lambda_{\ket\psi}$ that corresponds to the vector $\ket\psi$. Let $F$ assign to each effect $E$ the function on $\Lambda$ defined by\n\[\nF(E)(\lambda_{\ket\psi})=\bra\psi E\ket\psi.\n\]\nWe have trivially arranged for this to give the correct expectation\nfor any effect $E$ and any pure state \ket\psi. The formula for\n$F(E)$ is clearly convex-linear (in fact, linear) as a function of\n$E$. Of course, $\mu$ cannot be extended convex-linearly to mixed\nstates, so that Theorem~\ref{thm:exp} does not apply.\n\end{remark}\n\n\section{Value No-Go Theorems}\n\label{sec:val}\nValue no-go theorems assert that hidden-variable theories cannot even produce the correct outcomes for individual measurements, let alone the correct probabilities or expectation values. Such theorems considerably predated the expectation no-go theorems considered in the preceding section.
Value no-go theorems were first established by Bell \\cite{bell64,bell66} and then by Kochen and Specker \\cite{ks}; we shall also refer to the user-friendly exposition given by Mermin \\cite{mermin}. To formulate value no-go theorems, one must specify what ``correct outcomes for individual measurements'' means.\n\n\\begin{definition} \\label{valmap}\nLet $\\H$ be a Hilbert space, and let $\\O$ be a set of observables, i.e., self-adjoint operators on $\\H$. A \\emph{valuation} for $\\O$ in $\\H$ is a function $v$ assigning to each observable $A\\in\\O$ a number $v(A)$ in the spectrum of $A$, in such a way that $(v(A_1),\\dots,v(A_n))$ is in the joint spectrum $\\sigma(A_1,\\dots,A_n)$ of $(A_1,\\dots,A_n)$\nwhenever $A_1,\\dots,A_n$ are pairwise commuting.\n\\end{definition}\n\nThe intention behind this definition is that, in a hidden-variable theory, a quantum state represents an ensemble of individual systems, each of which has definite values for observables. That is, each individual system has a valuation associated to it, describing what values would be obtained if we were to measure observable properties of the system. A believer in such a hidden-variable theory would expect a valuation for the set of all self-adjoint operators on $\\H$, unless there were superselection rules rendering some such operators unobservable.\n\nBefore we proceed, we recall the notion of joint spectra \\cite[Section~6.5]{spectral}.\n\n\\begin{definition}\nThe \\emph{joint spectrum} $\\sigma(A_1,\\dots,A_n)$ of pairwise\ncommuting, self-adjoint operators $A_1,\\dots,A_n$ on a Hilbert space\n$\\H$ is a subset of $\\mathbb R^n$. If $A_1,\\dots,A_n$ are simultaneously\ndiagonalizable then $(\\lambda_1,\\dots,\\lambda_n)\n\\in\\sigma(A_1,\\dots,A_n)$ iff there is a non-zero vector $\\ket\\psi$\nwith $A_i\\ket\\psi=\\lambda_i\\ket\\psi$ for $i=1,\\dots,n$. In general,\n$(\\lambda_1,\\dots,\\lambda_n) \\in\\sigma(A_1,\\dots,A_n)$ iff for every\n$\\varepsilon>0$ there is a unit vector $\\ket\\psi\\in\\H$ with\n$\\nm{A_i\\ket\\psi-\\lambda_i\\ket\\psi}<\\varepsilon$ for $i=1,\\dots,n$.\n\\end{definition}\n\n\\begin{proposition}\\label{pro:jspec}\\\nFor any continuous function $f:\\mathbb R^n\\to\\mathbb R$,\\\\ $f(A_1,\\dots,A_n)=0$ if and only if $f$ vanishes identically on $\\sigma(A_1,\\dots,A_n)$.\n\\end{proposition}\n\nThe proposition is implicit in the statement, on page~155 of\n\\cite{spectral}, that ``most of Section~1, Subsection~4, about\nfunctions of one operator,'' \ncan be repeated in the context of\nseveral commuting operators. We give a detailed proof of the\nproposition in \\cite[\\S4.1]{G228}. \n\n\\begin{theorem}[\\cite{bell66,ks,mermin}]\\label{thm:dim3}\nIf $\\Dim{\\H}=3$ then there is a finite set $\\O$ of rank~1 projections for which no valuation exists.\n\\end{theorem}\n\nThe proof of Theorem~\\ref{thm:dim3} can be derived from the work of\nBell \\cite[Section~5]{bell66}, and we do that explicitly in\n\\cite[\\S4.3]{G228}. The construction given by Kochen and Specker\n\\cite{ks} provides the desired $\\O$ more directly. The proof of\nTheorem~1 in \\cite{ks} uses a Boolean algebra generated by a finite\nset of one-dimensional subspaces of $\\H$, and it shows that the\nprojections to those subspaces constitute an $\\O$ of the required\nsort. 
Mermin's elegant exposition \\cite[Section~IV]{mermin} deals\ninstead with squares $S_i^2$ of certain spin-components of a spin-1\nparticle, but these are projections to 2-dimensional subspaces of\n$\\H$, and the complementary rank-1 projections $I-S_i^2$ serve as the\ndesired $\\O$.\n\n\\begin{theorem}[Second Bootstrapping Theorem]\\label{thm:boot2}\nSuppose $\\H\\subseteq\\H'$ are finite-dimensional Hilbert spaces. Suppose further that $\\O$ is a finite set of rank-1 projections of $\\H$ for which no valuation exists. Then there is a finite set $\\O'$ of rank-1 projections of $\\H'$ for which no valuation exists.\n\\end{theorem}\n\nThis is our second bootstrapping theorem. Intuitively, such dimension bootstrapping results are to be expected. If hidden-variable theories could explain the behavior of quantum systems described by the larger Hilbert space, say $\\H'$, then they could also provide an explanation for systems described by the subspace $\\H$. The latter systems are, after all, just a special case of the former, consisting of the pure states that happen to lie in $\\H$ or mixtures of such states. But often no-go theorems give much more information than just the impossibility of matching the predictions of quantum-mechanics with a hidden-variable theory. They establish that hidden-variable theories must fail in very specific ways. It is not so obvious that these specific sorts of failures, once established for a Hilbert space $\\H$, necessarily also apply to its\nsuperspaces $\\H'$.\n\n\\begin{proof}\n Clearly, if two Hilbert spaces are isomorphic and if one of them has\n a finite set $\\O$ of rank-1 projections with no valuation, then\n the other also has such a set. It suffices to conjugate the\n projections in $\\O$ by any isomorphism between the two\n spaces. Thus, the existence of such a set $\\O$ depends only on the\n dimension of the Hilbert space, not on the specific space.\n\n Proceeding by induction on the dimension of $\\H'$, we see that\n it suffices to prove the theorem in the case where $\\dim(\\H')=\\dim(\\H)+1$. Given such $\\H$ and $\\H'$, let \\ket\\psi\\\n be any unit vector in $\\H'$, and observe that its orthogonal\n complement, $\\ket\\psi^\\bot$, is a subspace of $\\H'$ of the same\n dimension as $\\H$ and thus isomorphic to $\\H$. By the induction\n hypothesis, this subspace $\\ket\\psi^\\bot$ has a finite set $\\O$ of\n rank-1 projections for which no valuation exists. Each element of\n $\\O$ can be regarded as a rank-1 projection of $\\H'$; indeed,\n if the projection was given by $\\ket\\phi\\bra\\phi$ in\n $\\ket\\psi^\\bot$, then we can just interpret the same formula\n $\\ket\\phi\\bra\\phi$ in $\\H'$, using the same unit vector\n $\\ket\\phi\\in\\ket\\psi^\\bot$\n\n Let $\\O_1$ consist of all the projections from $\\O$,\n interpreted as projections of $\\H'$, together with one\n additional rank-1 projection, namely $\\ket{\\psi}\\bra{\\psi}$. What\n can a valuation $v$ for $\\O_1$ look like? It must send\n $\\ket{\\psi}\\bra{\\psi}$ to one of its eigenvalues, 0 or 1.\n\nSuppose first that $v(\\ket{\\psi}\\bra{\\psi})=0$. Then, using the fact\nthat $\\ket{\\psi}\\bra{\\psi}$ commutes with all the other elements of\n$\\O_1$, we easily compute that what $v$ does to those other\nelements amounts to a valuation for $\\O$. But $\\O$ was chosen so\nthat it has no valuation, and so we cannot have\n$v(\\ket{\\psi}\\bra{\\psi})=0$. Therefore $v(\\ket{\\psi}\\bra{\\psi})=1$. 
(It\nfollows that $v$ maps the projections associated to all the other\nelements of $\\O'$ to zero, but we shall not need this fact.)\n\nWe have thus shown that any valuation for the finite set $\\O_1$\nmust send $\\ket\\psi\\bra\\psi$ to 1. Repeat the argument for another\nunit vector $\\ket{\\psi'}$ that is orthogonal to \\ket\\psi. There is a\nfinite set $\\O_2$ of rank-1 projections such that any valuation\nfor $\\O_2$ must send \\ket{\\psi'}\\bra{\\psi'} to 1. No valuation\ncan send both \\ket\\psi\\bra\\psi\\ and \\ket{\\psi'}\\bra{\\psi'} to 1,\nbecause their joint spectrum consists of only $(1,0)$ and $(0,1)$.\nTherefore, there can be no valuation for the union $\\O_1\\cup\\O_2$, which thus serves as the $\\O'$ required by the theorem.\n\\end{proof}\n\n\\begin{theorem}[Value no-go theorem]\\label{thm:val}\nSuppose that the dimension of the Hilbert space is at least 3.\n\\begin{enumerate}\n\\item There is a finite set $\\O$ of projections for which no valuation exists.\n\\item If the dimension is finite then there is a finite set $\\O$\n of rank~1 projections for which no valuation exists.\n\\end{enumerate}\n\\end{theorem}\n\nThe desired finite sets of projections are constructed explicitly in the proof. The finiteness assumption in part (2) of the theorem cannot be omitted. If $\\Dim{\\H}$ is infinite, then the set $\\O$ of all finite-rank projections admits a valuation, namely the constant zero function. This works because the definition of ``valuation'' imposes constraints on only finitely many observables at a time.\n\n\n\\begin{proof}\nWhen the dimension of $\\H$ is greater than 3, but still finite, we use our Second Bootstrapping Theorem. Notice that, if one merely wants a no-go theorem saying that some $\\O$ has no valuation, then this bootstrapping is easy, as noted in \\cite{bell64,ks,mermin}. Work is needed only to get all the operators in $\\O$ to be rank~1 projections.\n\nIt remains to treat the case of infinite-dimensional $\\H$. Let $\\mathcal K$ and $\\L$ be Hilbert spaces, with $\\dim(\\mathcal K)=3$ and $\\dim(\\L)=\\dim(\\H)$. Note that then their tensor product $\\mathcal K\\otimes\\L$ has the same dimension as $\\H$, so it can be identified with $\\H$.\n\nLet $\\O$ be as in Theorem~\\ref{thm:dim3} for the 3-dimensional $\\mathcal K$. Let\n$\\O'=\\{P\\otimes I_{\\L}:P\\in\\O\\}$, where $I_{\\L}$ is the identity operator on $\\L$. Then $\\O'$ is a set of infinite-rank projections of $\\mathcal K\\otimes\\L=\\H$, having the same algebraic structure as $\\O$. It follows that there is no valuation for $\\O'$.\n\\end{proof}\n\nLet's say that a projection $A$ on Hilbert space $\\H$ is a \\emph{rank-$n$ projection modulo identity} if either $A$ is of rank $n$ or else $\\H$ splits into a tensor product $\\mathcal K\\otimes\\L$ such that $\\mathcal K$ is finite-dimensional and $A$ has the form $P\\otimes I_{\\L}$ where $P$ is of rank $n$ and $I_{\\L}$ is the identity operator on $\\L$. The proof of Theorem~\\ref{thm:val} gives us the following corollary.\n\n\\begin{corollary}\nIf the dimension of the Hilbert space is at least 3 then there is a finite set of rank-1 projections modulo identity for which no valuation exists.\n\\end{corollary}\n\n\\section{One successful hidden-variable theory}\n\\label{sec:bell}\n\nBy reducing both species of no-go theorems to projection measurement,\nwhere measurement as observable and measurement as effect coincide, we\nmade it easier to see similarities and differences. 
No, the\nexpectation no-go theorem does not imply the value no-go theorem. But\nthe task of proving this claim formally, say for a given dimension\n$d=\\Dim{\\H}$, is rather thankless. You have to construct a\ncounter-factual physical world where the expectation no-go theorem\nholds but the value no-go theorem fails. There is, however, one\nexceptional case, that of dimension~2. Theorem~\\ref{thm:exp} assumes\n$\\Dim{\\H}\\ge2$ while Theorem~\\ref{thm:val} assumes $\\Dim{\\H}\\ge3$. So\nwhat about dimension 2?\n\nBell developed, in \\cite{bell64} and \\cite{bell66}, a hidden-variable theory for a two-dimensional Hilbert space $\\H$. Here we summarize the improved version of Bell's theory due to Mermin \\cite{mermin}, we simplify part of Mermin's argument, and we explain why the theory doesn't contradict Theorem~\\ref{thm:exp}.\n\nIn the rest of this section, we work in the two-dimensional Hilbert\nspace $\\H$. Let $\\mathcal V$ be the set of value maps $v$ for all the\nobservables on $\\H$. In each pure state $\\psi$, the hidden variables\nshould determine a particular member of $\\mathcal V$. \n\n\\begin{definition}\\label{def:val}\\rm\nA \\emph{value representation} for quantum systems described by $\\H$ is a pair $(\\Lambda,V)$ where\n\\begin{itemize}\n\\item $\\Lambda$ is a probability space and\n\\item $V$ a function $\\psi\\to V_\\psi$ on the pure states such that every $V_\\psi$ is a map $\\lambda\\to V_\\psi^\\lambda$ from (the sample space of) $\\Lambda$ onto $\\mathcal V$.\n\\end{itemize}\nFurther, we require that, for any pure state $\\psi$ and any observable $A$, the expectation $\\int_\\Lambda V_\\psi^\\lambda(A)\\: d\\lambda$ of the eigenvalue of $A$ agrees with the prediction $\\bra{\\psi}A\\ket{\\psi}$ of quantum theory:\n\\begin{equation}\\label{bell1}\n \\int_\\Lambda V_\\psi^\\lambda(A)\\: d\\lambda = \\bra{\\psi}A\\ket{\\psi}\n\\end{equation}\n\\end{definition}\n\n\\medskip\nDefinition~\\ref{def:val} is narrowly tailored for our goals in this section; in the full paper we will give a general definition of value representation. Notice that, if a random variable (in our case, the eigenvalue of $A$ in $\\psi$) takes only two values, then the expected value determines the probability distribution. A priori we should be speaking about commuting operators and joint spectra but things trivialize in the 2-dimensional case. Recall Proposition~\\ref{pro:jspec} and notice that, in the 2-dimensional Hilbert space, if operators $A,B$ commute, then one of them is a polynomial function of the other.\n\n\\begin{theorem}\\label{thm:bell}\nThere exists a value representation for the quantum systems described by the two-dimensional Hilbert system $\\H$.\n\\end{theorem}\n\n\\begin{proof}\nLet $\\vec\\sigma$ be the triple of the Pauli matrices\n$\\displaystyle \\sigma_x=\n\\begin{pmatrix}\n 0&1\\\\1&0\n\\end{pmatrix}, \\sigma_y=\n\\begin{pmatrix}\n 0&-i\\\\i&0\n\\end{pmatrix}, \\sigma_z=\n\\begin{pmatrix}\n 1&0\\\\0&-1\n\\end{pmatrix}$.\nFor any unit vector $\\vec n\\in\\mathbb R^3$, \nthe dot product $\\vec n\\cdot\\vec\\sigma$ is a Hermitian operator with\neigenvalues $\\pm1$. \nEvery pure state of $\\H$ is an eigenstate, for eigenvalue $+1$, of\n$\\vec n\\cdot\\vec\\sigma$ for a unique $\\vec n$. We use the notation\n\\ket{\\vec n} for this eigenstate. 
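For concreteness (this explicit form is standard and is not needed for the argument): writing $\vec n = (\sin\theta\cos\varphi,\,\sin\theta\sin\varphi,\,\cos\theta)$, in the basis in which the Pauli matrices above are written one may take\n\[\n\ket{\vec n} = \begin{pmatrix} \cos(\theta\/2)\\ e^{i\varphi}\sin(\theta\/2) \end{pmatrix},\n\]\nwhich is, up to an overall phase, the unique unit eigenvector of $\vec n\cdot\vec\sigma$ with eigenvalue $+1$.\n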
\n\nIf \\scr H represents the states of a spin-$\\frac12$ particle, then the operator $\\frac12\\vec n\\cdot\\vec\\sigma$ represents the spin component in the direction $\\vec n$, and so \\ket{\\vec n} represents the state in which the spin is definitely aligned in the direction $\\vec n$. It is a special property of spin $\\frac12$ that all pure states are of this form; for higher spins, a superposition of states with definite spin directions need not have a definite spin direction.\n\nOn $\\H$, any Hermitian operator $A$ has the form $a_0I + \\a\\cdot\\vec\\sigma$ for some scalar\n$a_0\\in\\mathbb R$ and vector $\\a\\in\\mathbb R^3$. The eigenvalues of $A$ are\n$a_0\\pm\\nm{\\a}$. Observables $a_0I+\\a\\cdot\\vec\\sigma$ and $b_0I+\\vec b\\cdot\\vec\\sigma$\ncommute if and only if $\\a$ and $\\vec b$ are either parallel or\nantiparallel.\n\nThe desired probability space is the set $S^2$ of unit vectors in $\\mathbb R^3$ with the uniform probability measure. Let $\\vec m$ range over $S^2$. Then\n\\[\n V_{\\vec n}^{\\vec m} (a_0 I + \\a\\cdot\\vec\\sigma) =\n \\begin{cases}\n a_0 + \\nm{\\a}&\\text{if }(\\vec m+\\vec n)\\cdot\\a \\geq 0,\\\\\n a_0 - \\nm{\\a}&\\text{if }(\\vec m+\\vec n)\\cdot\\a < 0.\n \\end{cases}\n\\]\nIt remains to check that\n\\begin{equation}\\label{bell2}\n\\int_{S^2} V_\\psi^{\\vec m}(a_0 I + \\a\\cdot\\vec\\sigma)\\: d\\vec m =\n\\bra{\\vec n}(a_0 I + \\a\\cdot\\vec\\sigma)\\ket{\\vec n}.\n\\end{equation}\n\nWe begin with a couple of simplifications. First, we may assume that $a_0=0$, because a general $a_0$ would just be added to both sides of Equation~\\eqref{bell2}. Second, thanks to the rotational symmetry of the situation (where rotations are applied to all three of $\\a$, $\\vec n$ and $\\vec m$), we may assume that the vector $\\a$ points in the $z$-direction. Finally, by scaling, we may assume that $\\a=(0,0,1)$, so that the right side of Equation~\\eqref{bell2} is $n_z$.\n\nSo our task is to prove that the average over $\\vec m$ of the values assigned to $\\sigma_z$ is $n_z$. By definition, the value\nassigned to $\\sigma_z$ is $\\pm1$, where the sign is chosen to agree\nwith that of $m_z+n_z$. In view of how $\\vec m$ is chosen, this\n$m_z+n_z$ is the $z$-coordinate of a random point on the unit sphere\ncentered at $\\vec n$. So the question reduces to determining what\nfraction of this sphere lies above the $x$-$y$ plane.\n\nThis plane cuts $S^2$ horizontally at a level $n_z$ below the sphere's\ncenter. By a theorem of Archimedes, when a sphere is cut\nby a plane, its area is divided in the same ratio as the length of the\ndiameter perpendicular to the plane. So the plane divides the sphere's area in\nthe ratio of $1+n_z$ (above the plane) to $1-n_z$ (below the plane).\nThat is, the value assigned to $\\sigma_z$ is $+1$ with probability\n$(1+n_z)\/2$ and $-1$ with probability $(1-n_z)\/2$. Thus, the average\nvalue of $\\sigma_z$ is $n_z$, as required.\n\\end{proof}\n\nFinally, we explain why Bell's theory doesn't contradict Theorem~\\ref{thm:exp}. To obtain an expectation representation, we must extend the map $V$\nconvex-linearly to all density matrices. But no such extension exists. Here is an example showing what goes wrong.\nConsider the four pure states corresponding to spin in the directions\nof the positive $x$, negative $x$, positive $z$ and negative $z$\naxes. The corresponding density operators are the projections\n\\[\n\\frac{I+\\sigma_x}2,\\quad\\frac{I-\\sigma_x}2,\\quad\n\\frac{I+\\sigma_z}2,\\quad \\frac{I-\\sigma_z}2,\n\\]\nrespectively. 
Averaging the first two with equal weights, we get\n$\frac12 I$; averaging the last two gives the same result. So a\nconvex-linear extension $T$ would have to assign to the density\noperator $\frac12I$ the average of the probability measures assigned\nto the pure states with spins in the $\pm x$ directions and also the\naverage of the probability measures assigned to pure states with spins\nin the $\pm z$ directions. But these two averages are visibly very\ndifferent. The first is concentrated on the union of two unit spheres\ntangent to the $y$-$z$-plane at the origin, while the second is\nconcentrated on the union of two unit spheres tangent to the\n$x$-$y$-plane at the origin.\n\n\nThus, Bell's example of a hidden-variable theory for 2-dimensional\n\scr H does not fit the assumptions in any of the expectation no-go\ntheorems. It does not, therefore, clash with the fact that those\ntheorems, unlike the value no-go theorems, apply in the 2-dimensional\ncase.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{section:intro}\n\n\\noindent\nDiscretizing results in metric geometry is important for many applications,\nranging from discrete differential geometry to numerical methods. The discrete\nresults are stronger as they typically imply the continuous results in the limit.\nUnfortunately, more often than not, straightforward discretizations fall apart;\nnew tools and ideas are needed to even formulate these extensions; see e.g.~\cite{BS,Lin}\nand~\cite[$\S$21--$\S$24]{Pak}.\n\nIn this paper we introduce a new notion of \emph{$G$-Kirszbraun graphs},\nwhere~$G$ is a vertex-transitive graph. The idea is to discretize the classical\n\emph{Kirszbraun theorem} in metric geometry~\cite{kirszbraun1934}\n(see also~\cite[$\S$1.2]{BL}). Our main goal\nis to explain the variational principle for the height functions of tilings introduced\nby the third author in~\cite{Tas} and further developed in~\cite{MR3530972,TassyMenz2016}\n(see Section~\ref{section: motivation}); we also aim to lay a proper foundation\nfor future work.\n\nOur second goal is to clarify the connection to the \emph{Helly theorem},\na foundational result in convex and discrete geometry~\cite{helly1923}\n(see also~\cite{MR0157289,Mat}). Graphs that satisfy the \emph{Helly's property}\nhave been intensely studied in recent years~\cite{MR2405677}, and we establish\na connection between the two areas. Roughly, we show that $\mathbb{Z}^d$-Kirszbraun\ngraphs are somewhat rare, and are exactly the graphs that satisfy the\nHelly's property with certain parameters.\n\n\smallskip\n\n\subsection{Main results}\nLet $\ell_2$ denote the usual Euclidean metric on $\mathbb R^n$ for all $n$. Given a metric\nspace~$X$ and a subset $A$, we write $A\subset X$ to mean that the subset~$A$ is\nendowed with the restricted metric from~$X$. The \emph{Kirszbraun theorem} says\nthat for all \hskip.03cm $A\subset (\mathbb R^n,\ell_2)$, and all Lipschitz functions \hskip.03cm\n$f: A \longrightarrow (\mathbb R^n,\ell_2)$, there is an extension to a\nLipschitz function on $\mathbb R^n$ with the same Lipschitz constant.\n\nRecall now the \emph{Helly theorem}: Suppose a collection of convex sets\n$B_1, B_2, \ldots, B_k$ in $\mathbb R^n$ satisfies the property that every $(n+1)$-subcollection has a\nnonempty intersection; then $\cap B_i\neq \varnothing$.
Valentine in~\\cite{MR0011702}\nfamously showed how the Helly theorem can be used to obtain the Kirszbraun theorem.\nThe connection between these two theorems is the key motivation behind this paper.\n\n\\smallskip\n\nGiven metric spaces $X$ and $Y$, we say that $Y$ is \\emph{$X$-Kirszbraun}\nif all $A\\subset X$, every $1$-Lipschitz maps \\hskip.03cm $f: A \\longrightarrow Y$ \\hskip.03cm\nhas a $1$-Lipschitz extension from~$A$ to~$X$. In this notation, the Kirszbraun\ntheorem says that \\hskip.03cm $(\\mathbb R^n,\\ell_2)$ \\hskip.03cm is \\hskip.03cm $(\\mathbb R^n,\\ell_2)$-Kirszbraun.\n\nLet $m\\in \\mathbb N$ and $n\\in \\mathbb N\\cup\\{\\infty\\}$, $n>m$. Metric space $X$\nis said to have \\emph{$(n, m)$-Helly's property} if for every collection of closed balls\n$B_1, B_2, \\ldots, B_n$ of radius~$\\ge~1$ whenever every $m$-subcollection has a\nnonempty intersection, we have $\\cap_{i=1}^n B_i\\neq \\varnothing$.\nSince balls in $\\mathbb R^n$ with the Euclidean metric are convex,\nthe Helly theorem can be restated to say that $(\\mathbb R^n,\\ell_2)$ is $(\\infty, n+1)$-Helly.\nNote that the metric is important here, i.e.\\\n$(\\mathbb R^n,\\ell_\\infty)$ is $(\\infty, 2)$-Helly, see e.g.~\\cite{MR0157289,Pak}.\n\nGiven a graph $H$, we endow the set of vertices (also denoted by $H$) with the path metric.\nBy~$\\mathbb{Z}^d$ we mean the Cayley graph of the group $\\mathbb{Z}^d$ with respect to standard generators.\nAll graphs in this paper are nonempty, connected and simple (no loops and multiple edges).\nThe following is the main result of this paper.\n\n\\begin{thm}[Main theorem]\\label{thm:Zd Kirszbraun}\nGraph $H$ is $\\mathbb{Z}^d$-Kirszbraun if and only if $H$ is $(2d, 2)$-Helly.\\label{theorem: main result}\n\\end{thm}\n\nLet $K_n$ denote the \\emph{complete graph} on $n$-vertices. Clearly, $K_n$ is $G$-Kirszbraun\nfor all graphs $G$, since all maps $f: G\\longrightarrow K_n$ are $1$-Lipschitz. On\nthe other hand, $\\mathbb{Z}^2$ is not $\\mathbb{Z}^2$-Kirszbraun, see Figure~\\ref{f:Z2}. This example\ncan be modified to satisfy a certain extendability property, see $\\S$\\ref{ss:ext-bip}.\n\n\\begin{figure}[hbt]\n \\begin{center}\n \\includegraphics[height=4.2cm]{Z2-ex.eps}\n \\caption{Here $A=\\{a,b,c,d\\}\\subset \\mathbb{Z}^2$. Define $f: a\\to a'$,\n $b\\to b'$, $c\\to c'$, $d\\to d'$. Then $f: A\\to \\mathbb{Z}^2$ is $1$-Lipschitz\n but not extendable to $\\{O,a,b,c,d\\}$.}\n \\label{f:Z2}\n \\end{center}\n\\end{figure}\n\n\n\\smallskip\n\n\\subsection{Structure of the paper}\nWe begin with a short non-technical Section~\\ref{section: motivation}\ndescribing some background results and ideas. In essence, it is a\nremark which is too long to be in the introduction. It reflects the\nauthors' different points of view on the subject, which includes\nergodic theory, geometric combinatorics and discrete probability.\nWhile in principle this section it can be skipped, we do\nrecommend reading it as it motivates other parts of the paper.\n\nWe then proceed to prove Theorem~\\ref{thm:Zd Kirszbraun} in\nSection~\\ref{s:proof-main}. In Section~\\ref{section: extensions of main},\nwe present several extensions and applications of the main theorem. 
This\nsection is a mixed bag: we include a continuous analogue of the main\ntheorem (Theorem~\ref{thm: rd version}), the extension to larger\nintegral Lipschitz constants (Theorem~\ref{thm: about t lipschitz maps}),\nand the bipartite extension useful for domino tilings\n(Theorem~\ref{thm: bipartite version of main theorem}).\nIn a short Section~\ref{section:recognisability and an application},\nwe discuss computational aspects of the $\mathbb{Z}^d$-Kirszbraun property,\nmotivated entirely by applications to tilings. We conclude with final\nremarks and open problems in Section~\ref{s:finrem}.\n\n\bigskip\n\n\section{Motivation and background}\label{section: motivation}\nA \emph{graph homomorphism} from a graph $G$ to a graph $H$ is an adjacency-preserving \nmap between the respective vertices. Let $\textup{\textrm{Hom}}(G, H)$ denote the set of all\ngraph homomorphisms from $G$ to~$H$. We refer to~\cite{HN} for background on\ngraph homomorphisms and connections to coloring and complexity problems.\n\nOur motivation comes from two very distinct sources:\n\begin{enumerate}\n\item\nFinding `fast' algorithms to determine whether a given graph homomorphism from the boundary of a box in $\mathbb{Z}^d$ to $H$ extends to the entire box.\label{motivation: 1}\n\item\nFinding a natural parametrization of the so-called ergodic Gibbs measures on the space of graph homomorphisms $\textup{\textrm{Hom}}(\mathbb{Z}^d, H)$ (see \cite{MR2251117,TassyMenz2016}).\n\label{motivation: 2}\n\end{enumerate}\n\nFor~$(1)$, roughly, suppose we are given a certain simple set of tiles~$\textrm{T}$,\nsuch as dominoes or more generally bars \hskip.03cm $\{k\times 1, 1 \times \ell\}$. It turns out that\n$\textrm{T}$-tileability of a simply-connected region $\Gamma$ corresponds to the existence of a graph\nhomomorphism with given boundary conditions on $\partial\Gamma$. We refer to~\cite{Pak-horizons,Thu}\nfor the background, and to~\cite{MR3530972,Tas} for further details. Our\nTheorem~\ref{prop: hole filling} is motivated by these problems.\n\nFor both these problems, the $\mathbb{Z}^d$-Kirszbraun property of the graph $H$ (or a related graph)\nis critical and motivates this line of research; the space of\n$1$-Lipschitz maps is the same as the space of graph homomorphisms if and only if $H$ is\n\emph{reflexive}, that is, every vertex has a self-loop. The study of\nKirszbraun-type theorems among metric spaces and its relationship to Helly-like properties\nis an old one and goes back to the original paper by Kirszbraun~\cite{kirszbraun1934}.\nA short and readable proof is given in \cite[p.~201]{federer1969}.\nThis was later rediscovered in \cite{MR0011702} where it was generalized to the cases\nwhere the domain and the range are spheres in Euclidean space or Hilbert spaces.\nThe effort of understanding which metric spaces\nsatisfy Kirszbraun properties culminated in the theorem by\nLang and Schroeder~\cite{langschroder1997} that identified\nthe right curvature assumptions on the\nunderlying spaces for which the theorem holds.\n\nIn metric graph theory, research has focused largely on a certain universality property.\nFormally, a graph is called \emph{Helly} if it is $(\infty, 2)$-Helly. An easy deduction,\nfor instance following the discussion in \cite[Page 153]{MR0157289}, shows that $H$ is\nHelly if and only if for all graphs $G$, $H$ is $G$-Kirszbraun.
Some nice\ncharacterizations of Helly graphs can be found in the survey~\\cite[$\\S$3]{MR2405677}.\nHowever, we are not aware of any other study of $G$-Kirszbraun graphs for fixed~$G$.\n\n\\bigskip\n\n\\section{Proof of the main theorem}\\label{s:proof-main}\n\n\\subsection{Geodesic extensions}\nLet $d_H$ denote the path metric on the graph $H$.\nA \\emph{walk}~$\\gamma$ in the graph $H$ of \\emph{length}~$k$\nis a sequence of $k+1$ vertices $(v_0, v_1, \\ldots,v_k)$, s.t.\n$d_H(v_{i}, v_{i+1})\\leq 1$, for all $0\\leq i \\leq k-1$.\nWe say that $\\gamma$ starts at $v_0$ and ends at $v_k$.\nA \\emph{geodesic} from vertex $v$ to $w$ in a graph~$G$\nis a walk~$\\gamma$ from $v$ to $w$ of shortest length.\n\nConsider a graph $G$, a subset $A\\subset G$ and $b\\in G\\setminus A$. Define\nthe \\emph{geodesic extension of $A$ with respect to $b$} \\hskip.03cm as the following set:\n$$\\aligned\n\\textup{\\textrm{Ext}}(A,b)\\, := & \\, \\, \\{a\\in A~:~\\text{there does not exist \\hskip.03cm $a'\\in A\\setminus \\{a\\}$ \\hskip.03cm s.t.\\ there is a }\\\\\n& \\hskip1.86cm \\text{geodesic~$\\gamma$ from $a$ to $b$ which passes through $a'$}\\}.\n\\endaligned\n$$\nFor example, let $A\\subset \\mathbb{Z}^2\\setminus \\{(0,0)\\}$. If $(i,j), (k,l)\\in \\textup{\\textrm{Ext}}(A,\\vec 0)$\nare elements of the same quadrant, then $|i|> |k|$ if and only if $|j|<|l|$.\n\n\\begin{remark}\\label{remark: zd culling coordinates}{\\rm\nIf $A\\subset \\mathbb{Z}^d$ is contained in the coordinate axes, then $|\\textup{\\textrm{Ext}}(A,\\vec 0)|\\leq 2d$. }\n\\end{remark}\n\nThe notion of geodesic extension allows us to prove that certain $1$-Lipschitz\nmaps can be extended:\n\n\\begin{prop}\\label{prop:culling of extras}\nLet $A\\subset G$, let the map $f: A\\longrightarrow H$ be $1$-Lipschitz, and let $b\\in G \\setminus A$.\nThe map $f$ has a $1$-Lipschitz extension to \\hskip.03cm $A\\cup \\{b\\}$ \\hskip.03cm if and only if \\hskip.03cm\n$f|_{\\textup{\\textrm{Ext}}(A,b)}$ \\hskip.03cm has a $1$-Lipschitz extension to \\hskip.03cm $\\textup{\\textrm{Ext}}(A,b)\\cup\\{b\\}$.\n\\end{prop}\n\n\\begin{proof}\nThe forward direction of the proof is immediate because $\\textup{\\textrm{Ext}}(A,b)\\subset A$. For the backward direction, let $\\widetilde f: \\textup{\\textrm{Ext}}(A,b)\\cup\\{b\\}\\subset G\\longrightarrow\nH$ be a $1$-Lipschitz extension of $f|_{\\textup{\\textrm{Ext}}(A,b)}$ and consider the map\n$\\hat f: A\\cup \\{b\\}\\subset G\\longrightarrow H$\ngiven by\n$$\\hat f(a):=\\begin{cases}\nf(a)&\\text{ if }a\\in A\\\\\n\\widetilde f(b)&\\text{ if }a=b.\n\\end{cases}$$\nTo prove that $\\hat f$ is $1$-Lipschitz we need to verify that for all $a\\in A$, $d_ H(\\hat f(a), \\hat f(b))\\leq d_ G(a,b)$.\nFrom the hypothesis, this holds for $a\\in \\textup{\\textrm{Ext}}(A,b)$. Now suppose $a\\in A\\setminus \\textup{\\textrm{Ext}}(A,b)$. Then there exists $a'\\in \\textup{\\textrm{Ext}}(A,b)$ such that there exists a\ngeodesic from $a$ to $b$ passing through $a'$. This implies that $d_{ G}(a,b)=d_{ G}(a,a')+d_ G(a',b)$.
But\n\\begin{eqnarray*}\nd_ G(a,a')\\geq d_{ H}(\\hat f(a), \\hat f (a'))=d_{ H}(f(a), f (a'))\\text{ because }f\\text{ is $1$-Lipschitz}\\\\\nd_ G(a',b)\\geq d_{ H}(\\hat f(a'), \\hat f (b))=d_{ H}(\\widetilde f(a'),\\widetilde f (b))\\text{ because }\\widetilde f\\text{ is $1$-Lipschitz}.\n\\end{eqnarray*}\nBy the triangle inequality, the proof is complete.\n\\end{proof}\n\n\\smallskip\n\n\\subsection{Helly's property}\nGiven a graph $H$, a vertex $v\\in H$ and $n\\in \\mathbb N$ denote by $B^H_n(v)$,\nthe ball of radius $n$ in $H$ centered at $v$. We will now interpret\nthe $(n,2)$-Helly's property in a different light.\n\n\\begin{prop}\\label{prop: small culled vertices}\nLet $H$ be a graph satisfying the $(n,2)$-Helly's property.\nFor all 1-Lipschitz maps $f: A\\subset G\\longrightarrow\nH$ and $b\\in G\\setminus A$ such that $|\\textup{\\textrm{Ext}}(A, b)|\\leq n$,\nthere exists a $1$-Lipschitz extension of $f$ to $A\\cup \\{b\\}$.\n\\end{prop}\n\n\\begin{proof}\tConsider the extension $\\widetilde f$ of $f$ to the set $A\\cup \\{b\\}$,\n where $\\widetilde f(b)$ is any vertex in\n$$\n\\bigcap _{b'\\in \\textup{\\textrm{Ext}}(A, b)}B^H_{d_G(b,b')}(f(b'));\n$$\nthe intersection is nonempty because $|\\textup{\\textrm{Ext}}(A, b)|\\leq n$ and for all $a, a'\\in \\textup{\\textrm{Ext}}(A, b)$, we have:\n$$d_H\\bigl(f(a), f(a')\\bigr) \\, \\leq \\, d_G(a, a')\\, \\leq \\, d_G(a, b)\\. + \\. d_{G}(b, a')\n$$\nwhich implies\n$$B^H_{d_G(a,b)}(f(a))\\cap B^H_{d_G(b,a')}\\bigl(f(a')\\bigr)\\, \\neq \\, \\varnothing.\n$$\nThe function \\hskip.03cm $\\widetilde f|_{\\textup{\\textrm{Ext}}(A,b)\\cup\\{b\\}}$ is $1$-Lipschitz, so\nProposition~\\ref{prop:culling of extras} completes the proof.\n\\end{proof}\n\n\n\\subsection{Examples}\nLet $C_n$ and $P_n$ denote the \\emph{cycle graph} and the \\emph{path graph} with $n$ vertices, respectively.\n\n\\begin{corollary}\nAll connected graphs are \\hskip.03cm $P_n$--, \\hskip.03cm $C_n$-- and \\hskip.03cm $\\mathbb{Z}$--Kirszbraun.\n\\end{corollary}\n\nIn the case when $G=P_n, C_n$ or $\\mathbb{Z}$ we have for all $A\\subset G$ and\n$b\\in G\\setminus A$, $|\\textup{\\textrm{Ext}}(A,b)|\\leq 2$; the corollary follows from\nProposition~\\ref{prop: small culled vertices} and the fact that\nall graphs are $(2,2)$-Helly.\n\nLet $\\mathbf r = (r_1, r_2, \\ldots r_n)\\in \\mathbb N^n$. Denote by $T_{\\mathbf r}$ the \\emph{star-shaped tree}\nwith a central vertex $b_0$ and $n$ disjoint walks of lengths\n$r_1,\\ldots,r_n$ emanating from it and ending in vertices $b_1, b_2, \\ldots, b_n$.\n\\begin{corollary}\\label{cor: helly and test tree}\nGraph $H$ has the $(n,2)$-Helly's property if and only if $H$ is $T_{\\mathbf r}$-Kirszbraun, for all $\\mathbf r\\in \\mathbb N^n$.\n\\end{corollary}\n\\begin{proof}\nFor all $\\mathbf r\\in \\mathbb N^n$, $A\\subset T_{\\mathbf r}$ and $b\\in T_{\\mathbf r}\\setminus \\{A\\}$, we have \\hskip.03cm $|\\textup{\\textrm{Ext}}(A,b)|\\leq n$. Thus by Proposition \\ref{prop: small culled vertices} if $H$ has the $(n,2)$-Helly's property then $H$ is $T_{\\mathbf r}$-Kirszbraun. For the other direction, let $H$ be $T_{\\mathbf r}$-Kirszbraun for all $\\mathbf r\\in \\mathbb N^n$. Suppose that $$B^H_{r_1}(v_1), B^H_{r_2}(v_2), \\ldots, B^H_{r_n}(v_n)$$ are balls in $H$ such that $B^H_{r_i}(v_i)\\cap B^H_{r_j}(v_j)\\neq \\emptyset$ for all $1\\leq i, j \\leq n$. 
Then $f: \\{b_1, b_2, \\ldots, b_n\\}\\subset T_{\\mathbf r}\\to H$ given by $f(b_i):= v_i$ is $1$-Lipschitz with a $1$-Lipschitz extension $\\widetilde f: T_{\\mathbf r}\\to H$. It follows that $\\widetilde f (b_0)\\in \\bigcap_{i=1}^n B^H_{r_i}(v_i)$, proving that $H$ has the $(n,2)$-Helly's property.\n\\end{proof}\n\\smallskip\n\n\\subsection{Proof of Theorem~\\ref{thm:Zd Kirszbraun}}\nWe will first prove the ``only if'' direction. Let $H$ be a graph which is\n$\\mathbb{Z}^d$-Kirszbraun. For all $\\mathbf r\\in \\mathbb N^{2d}$ there is an isometry from $T_{\\mathbf r}$ to\n$\\mathbb{Z}^d$ mapping the walks emanating from the central vertex to the coordinate axes.\nHence $H$ is $T_{\\mathbf r}$-Kirszbraun for all $\\mathbf r\\in \\mathbb N^{2d}$. By Corollary~\\ref{cor: helly and test tree}, we have proved the $(2d,2)$-Helly's property for~$H$.\n\nIn the ``if'' direction, suppose $H$ has the $(2d,2)$-Helly's property.\nWe need to prove that for all $A\\subset \\mathbb{Z}^d$, every\n$1$-Lipschitz map $f:A\\to H$ has a $1$-Lipschitz extension.\nIt is sufficient to prove this for finite subsets~$A$.\nWe proceed by induction on $|A|$. Namely, we prove the following property $\\textup{\\textrm{St}}(n)$:\n\n\\smallskip\n\n\\noindent\n\\qquad Let $f:A\\subset \\mathbb{Z}^d\\longrightarrow H$ be $1$-Lipschitz with $|A|=n$. Let $b\\in \\mathbb{Z}^d\\setminus A$. Then the function $f$ has\n\n\\noindent\n\\qquad a $1$-Lipschitz extension to $A\\cup \\{b\\}$.\n\n\\smallskip\n\n\\noindent\nWe know $\\textup{\\textrm{St}}(n)$ for $n\\leq 2d$ by the $(2d,2)$-Helly's property.\nLet us assume $\\textup{\\textrm{St}}(n)$ for some $n\\geq 2d$; we want to prove $\\textup{\\textrm{St}}(n+1)$.\nLet \\hskip.03cm $f:A\\longrightarrow H$, $A\\subset \\mathbb{Z}^d$, be $1$-Lipschitz with $|A|=n+1$\nand \\hskip.03cm $b\\in \\mathbb{Z}^d\\setminus A$. Without loss of generality assume that $b=\\vec 0$.\nAlso assume that \\hskip.03cm $\\textup{\\textrm{Ext}}(A, \\vec 0)=A$; otherwise we can use the induction\nhypothesis and Proposition~\\ref{prop:culling of extras} to obtain\nthe required extension to \\hskip.03cm $A\\cup \\{\\vec 0\\}$.\n\nWe will prove that there exists a set \\hskip.03cm $\\widetilde A\\subset \\mathbb{Z}^d$ and a\n$1$-Lipschitz function \\hskip.03cm $\\widetilde f: \\widetilde A\\longrightarrow H$, such that\n\\begin{enumerate}\n\\item\nIf $\\widetilde f$ has an extension to $\\widetilde A\\cup\\{\\vec 0\\}$ then $f$ has an extension\nto \\hskip.03cm $A\\cup \\{\\vec 0\\}$.\n\\item\nEither the set $\\widetilde A$ is contained in the coordinate axes of $\\mathbb{Z}^d$ or \\hskip.03cm $|\\widetilde A|\\leq 2d$.\n\\end{enumerate}\nBy Remark~\\ref{remark: zd culling coordinates}, if $\\widetilde A$ is contained in the coordinate axes then $|\\textup{\\textrm{Ext}}(\\widetilde A, \\vec 0)|\\leq 2d$; thus in either case $|\\textup{\\textrm{Ext}}(\\widetilde A, \\vec 0)|\\leq 2d$. Since $H$ has the $(2d,2)$-Helly's property, by Proposition \\ref{prop: small culled vertices} it follows that $\\widetilde f$ has an extension to $\\widetilde A\\cup\\{\\vec 0\\}$, which completes the proof.\n\n\nSince $|A|\\geq n+1>2d$, there exist ${\\vec{i}},{\\vec{j}}\\in A$ and a coordinate $1\\leq k \\leq d$ such that $i_k, j_k$ are non-zero and have the same sign. Assume without loss of generality that $|i_k|\\leq\n|j_k|$. Then there is a geodesic from ${\\vec{j}}$ to ${\\vec{i}} - i_k \\vec e_k$ which passes through ${\\vec{i}}$.
Since $A=\\textup{\\textrm{Ext}}(A, \\vec 0)$ we have that $${\\vec{i}} - i_k \\vec e_k\\notin \\{\\vec\n0\\}\\cup A.$$\nThus ${\\vec{j}} \\notin \\textup{\\textrm{Ext}}(A, {\\vec{i}}-i_k\\vec e_k)$ and hence $|\\textup{\\textrm{Ext}}(A, {\\vec{i}}-i_k\\vec e_k)|\\leq n$. By $\\textup{\\textrm{St}}(n)$ there exists a $1$-Lipschitz extension of $f|_{\\textup{\\textrm{Ext}}(A, {\\vec{i}}-i_k\\vec\ne_k)}$ to $\\textup{\\textrm{Ext}}(A, {\\vec{i}}-i_k\\vec e_k)\\cup\\{{\\vec{i}}-i_k\\vec e_k\\}$. By Proposition~\\ref{prop:culling of extras}\nthere is a $1$-Lipschitz extension of $f$ to $f': A\\cup\n\\{{\\vec{i}}-i_k\\vec e_k\\}\\longrightarrow H$. But there is a geodesic from ${\\vec{i}}$ to $\\vec 0$ which passes through ${\\vec{i}} -i_k \\vec e_k$. Thus\n$$\\textup{\\textrm{Ext}}\\bigl(A\\cup\\{{\\vec{i}}-i_k \\vec e_k\\}, \\vec 0\\bigr)\\subset \\bigl(A\\setminus\\{{\\vec{i}}\\}\\bigr)\\cup\\{{\\vec{i}} -i_k \\vec e_k\\}.$$\nSet \\hskip.03cm $A':=\\bigl(A\\setminus\\{{\\vec{i}}\\}\\bigr)\\cup\\{{\\vec{i}} -i_k \\vec e_k\\}$. By Proposition~\\ref{prop:culling of extras},\nmap~$f'$ has a $1$-Lipschitz extension to $A\\cup \\{{\\vec{i}}-i_k\\vec e_k\\}\n\\cup \\{\\vec 0\\}$ if and only if $f'|_{A'}$ has a $1$-Lipschitz extension to $A'\\cup \\{\\vec 0\\}$.\n\nThus we have obtained a set $A'$ and a $1$-Lipschitz map $f':A'\\subset \\mathbb{Z}^d\\longrightarrow H$ for which\n\\begin{enumerate}\n\\item\nIf $f'$ has an extension to $A'\\cup\\{\\vec 0\\}$ then $f$ has an extension to $A\\cup \\{\\vec 0\\}$.\n\\item\nThe sum of the number of non-zero coordinates of elements of $A'$ is less than the sum of the number of non-zero coordinates of elements of $A$.\n\\end{enumerate}\nBy repeating this procedure (formally this is another induction) we get the required set $\\widetilde A\\subset \\mathbb{Z}^d$ and $1$-Lipschitz map $\\widetilde f: \\widetilde A\\longrightarrow H$.\nThis completes the proof.\n\n\\bigskip\n\n\\section{Applications of the main theorem}\\label{section: extensions of main}\n\n\n\\subsection{Back to continuous setting} \\label{ss:ext-cont}\nThe techniques involved in the proof of Theorem~\\ref{thm:Zd Kirszbraun}\nextend to the continuous case with only minor modifications. The following\nresult might be of interest in metric geometry.\n\n\\smallskip\n\nA metric space $(X,\\textbf{m})$ is \\emph{geodesically complete} if for all\n$x,y\\in X$ there exists a continuous function $f: [0,1]\\to X$ such that, for all $t\\in[0,1]$,\n$$\n\\textbf{m}(x, f(t))\\. = \\. t \\hskip.03cm \\textbf{m}(x,y) \\quad \\text{ and }\\quad \\textbf{m}(f(t), y) \\. =\\. (1-t)\\hskip.03cm \\textbf{m}(x,y).\n$$\n\n\\begin{thm}\\label{thm: rd version}\nLet $Y$ be a metric space such that every closed ball in $Y$ is compact.\nThen $Y$ is $(\\mathbb R^d, \\ell_1)$-Kirszbraun if and only if\n$Y$ is geodesically complete and $(2d,2)$-Helly.\n\\end{thm}\n\nFirst, we need the following result.\n\n\\begin{lemma}\\label{lemma: observation}\nLet $(X, \\textbf{m})$ be separable and the closed balls in $(Y, \\textbf{m}')$ be compact.\nThen $(Y, \\textbf{m}')$ is $(X, \\textbf{m})$-Kirszbraun if and only if for all finite sets\n$A\\subset X$, all $1$-Lipschitz maps $f:A\\to Y$, and all $x\\in X\\setminus A$, the map $f$\nhas a $1$-Lipschitz extension to $A\\cup\\{x\\}$.\n\\end{lemma}\n\n\\begin{proof}\nThe ``only if'' part is obvious. For the ``if'' part, let\n$A\\subset X$ and $f:A\\to Y$ be $1$-Lipschitz. Since $X$ is separable,\n$A$ is also separable. Let\n$$\n\\{x_i~:~i\\in \\mathbb N\\} \\. \\subset \\.
X \quad \text{ and } \quad \{a_i~:~i\in\mathbb N\}\. \subset \. A\n$$\n be countable dense sets. By the hypothesis, we have\n$$\n\\bigcap _{1 \\leq i\\leq n}B_{\\textbf{m}(x_1, a_i)}(f(a_i))\\, \\neq \\. \\varnothing.\n$$\nSince closed balls in $Y$ are compact, it follows that\n$$\n\\bigcap_{i\\in \\mathbb N}B_{\\textbf{m}(x_1, a_i)}(f(a_i))\\, \\neq \\. \\varnothing.\n$$\nThus $f$ has a $1$-Lipschitz extension to \\hskip.03cm $A\\cup\\{x_1\\}$.\nBy induction we get that $f$ has a $1$-Lipschitz extension \\hskip.03cm\n$\\widetilde f: A\\cup\\{x_i~:~i\\in \\mathbb N\\}\\to Y$. Let $g: X\\to Y$ be the map given by\n$$g(x)\\. := \\. \\lim_{j\\to \\infty} \\. \\widetilde f(x_{i_j}) \\quad \\text{ for all } \\ \\, x\\in X,\n$$\nwhere ${x_{i_j}}$ is some sequence such that $\\lim_{j\\to \\infty} x_{i_j}=x$.\nThe limit above exists since closed balls in $Y$ are compact, and hence $Y$\nis a complete metric space. By the continuity of $\\widetilde f$ it follows that\n$g|_A=f$ and by the continuity of the distance function it follows that\n$g$ is $1$-Lipschitz.\n\\end{proof}\n\nFor the proof of Theorem~\\ref{thm: rd version}, note that the main property\nof $\\mathbb{Z}^d$ exploited in the proof of Theorem~\\ref{theorem: main result}\nis that the graph metric is the same as the $\\ell_1$ metric. From the lemma\nabove, the proof proceeds analogously. We omit the details.\n\n\\smallskip\n\n\\subsection{Lipschitz constants} \\label{ss:ext-Lip}\nThe following extension deals with other Lipschitz constants. In the continuous case this is trivial; however, it is more delicate in the discrete setting. Since we are interested in Lipschitz maps between\ngraphs we restrict our attention to integral Lipschitz constants.\n\n\\begin{thm}\\label{thm: about t lipschitz maps}\nLet $t\\in \\mathbb N$ and $H$ be a connected graph. Then every $t$-Lipschitz map\n$f: A\\longrightarrow H$, $A\\subset \\mathbb{Z}^d$, has a $t$-Lipschitz extension to\n$\\mathbb{Z}^d$ if and only if\n$$\n\\text{balls }B_1, B_2, \\ldots, B_{2d} \\text{ of radii that are multiples\nof }t\\text{ mutually intersect }\\ \\Longrightarrow\\ \\cap B_i\\neq \\varnothing.\n$$\n\\end{thm}\n\nThe proof of Theorem~\\ref{thm: about t lipschitz maps} follows verbatim\nthe proof of Theorem~\\ref{thm:Zd Kirszbraun}; we omit the details.\n\n\\smallskip\n\nLet $H$ be a $\\mathbb{Z}^d$-Kirszbraun graph. The theorem implies that\nall $t$-Lipschitz maps $f: A\\longrightarrow H$, $A\\subset \\mathbb{Z}^d$,\nhave a $t$-Lipschitz extension. On the other hand, it is easy\nto construct graphs $G$ and $H$ for which $H$ is $G$-Kirszbraun\nbut there exists a $2$-Lipschitz map \\hskip.03cm $f: A\\longrightarrow H$,\n$A\\subset G$, which does not have a $2$-Lipschitz extension.\nFirst, we need the following result.\n\n\\begin{prop}\\label{prop: small diameter Kirszbraun graph}\nLet $G$ be a finite graph with diameter $n$ and $H$ be a connected\ngraph such that $B^H_{n}(v)$ is $G$-Kirszbraun for all $v\\in H$.\nThen $H$ is a $G$-Kirszbraun graph.\n\\end{prop}\n\n\\begin{proof}\nLet $f: A\\subset G\\longrightarrow H$ be $1$-Lipschitz and pick $a\\in A$. Then \\hskip.03cm Image$(f)\\subset B^H_n(f(a))$. Since $B^H_{n}(f(a))$ is $G$-Kirszbraun, the\nresult follows.\n\\end{proof}\n\nSince trees are Helly graphs, we have as an immediate application of the above that $C_{n}$ is $G$-Kirszbraun if \\hskip.03cm diam$(G)\\leq n-1$.\nFor instance, let $\\mathbf r = (1,1,1,1,1,1)\\in \\mathbb N^6$ and consider the \\emph{star}~$T_{\\mathbf r}$.
We obtain that $C_6$ is $T_\\mathbf r$-Kirszbraun.\n Now label the leaves of $T_\\mathbf r$ as $b_i$, $1\\leq i \\leq 6$. For $A=\\{b_1, \\ldots, b_6\\}\\subset T_\\mathbf r$,\n consider the map\n$$f: A \\to C_6 \\ \\ \\text{ given by } \\ \\. f(b_i)=i\\hskip.03cm.\n$$\nThe function $f$ is $2$-Lipschitz but it has no $2$-Lipschitz extension to~$T_\\mathbf r$.\n\n\\smallskip\n\n\\subsection{Hyperoctahedron graphs} \\label{ss:ext-ex}\nThe \\emph{hyperoctahedron graph} $O_d$ is the graph obtained by removing a perfect\nmatching from the complete graph~$K_{2d}$. Theorem~\\ref{theorem: main result}\ncombined with the following proposition implies that $O_d$ is $\\mathbb{Z}^{d-1}$-Kirszbraun\nbut not $\\mathbb{Z}^{d}$-Kirszbraun. When $d=2$ this is the example in the introduction\n(see Figure~\\ref{f:Z2}).\n\n\\begin{prop}\\label{p:cross-pol}\nGraph $O_d$ is $(2d-1,2)$-Helly but not $(2d,2)$-Helly.\n\\end{prop}\n\n\\begin{proof}\nLet \\hskip.03cm $B_1, B_2, \\ldots, B_{2d-1}$ \\hskip.03cm be balls of radius $\\ge 1$. Then, for all\n$1\\leq i\\leq 2d-1$, $B_i\\supset O_{d}\\setminus\\{j_i\\}$ for some $j_i\\in O_{d}$. Thus:\n$$\n\\bigcap \\hskip.03cm B_i \\, \\supseteq \\, O_d\\setminus\\{j_i~:~ 1\\leq i \\leq 2d-1\\} \\, \\neq \\, \\varnothing\\hskip.03cm.\n$$\nThis implies that $O_d$ is $(2d-1,2)$-Helly.\n\nIn the opposite direction, let \\hskip.03cm $B_1, B_2, \\ldots, B_{2d}$ \\hskip.03cm\nbe distinct balls of radius one in~$O_d$. It is easy to see that they intersect pairwise,\nbut \\hskip.03cm $\\cap B_{i}=\\varnothing$. Thus, graph $O_d$ is not $(2d,2)$-Helly, and\nTheorem~\\ref{thm:Zd Kirszbraun} proves the claim.\n\\end{proof}\n\nLet us mention that the hyperoctahedron graph \\hskip.03cm $O_d\\simeq K_{2,\\ldots,2}$ \\hskip.03cm\nis a well-known obstruction to the Helly's property, see e.g.~\\cite{MR2405677}.\n\n\\smallskip\n\n\\subsection{Bipartite version} \\label{ss:ext-bip}\nIn the study of Helly graphs it is well-known (see e.g.~\\cite[$\\S$3.2]{MR2405677}) that results which are true with regard to $1$-Lipschitz\nextensions usually carry forward to graph homomorphisms in the bipartite case after some small technical modifications. This is also true in our case.\n\nA bipartite graph $H$ is called \\emph{bipartite $(n,m)$-Helly} if for all balls $B_1, B_2, B_3, \\ldots, B_n$ (when $n\\neq \\infty$; for any finite collection of balls otherwise)\nand every partite class $H_1$, the following holds:\nif every subcollection of size $m$ among $B_1\\cap H_1, B_2\\cap H_1, \\ldots, B_n\\cap H_1$ has a nonempty intersection, then\n$$\\bigcap_{i=1}^n B_i\\cap H_1\\neq \\varnothing.$$\n\nLet $G, H$ be bipartite graphs with partite classes $G_1, G_2$ and $H_1,H_2$ respectively. The graph $H$ is called \\emph{bipartite $G$-Kirszbraun} if for all\n$1$-Lipschitz maps $f: A\\subset G\\longrightarrow H$ for which $f(A\\cap G_1)\\subset H_1$ and $f(A\\cap G_2)\\subset H_2$ there exists $\\widetilde f\\in \\textup{\\textrm{Hom}}(G, H)$ extending it.\n\n\\begin{thm}\\label{thm: bipartite version of main theorem}\nGraph $H$ is bipartite $\\mathbb{Z}^d$-Kirszbraun if and only if $H$ is bipartite $(2d,2)$-Helly.\n\\end{thm}\n\nAs noted in the introduction, Theorem~\\ref{theorem: main result} implies that graph $\\mathbb{Z}^2$ is not $(4,2)$-Helly. However, it is\nbipartite $(\\infty,2)$-Helly (see below).\n\n\\smallskip\n\nGiven a graph $H$ we write $v\\sim_H w$ to mean that\n$(v,w)$ forms an edge in the graph. Let $H_1, H_2$ be graphs with vertex sets $V_1, V_2$ respectively.
Define:\n\\begin{enumerate}\n\\item\n\\emph{Strong product} $H_1\\boxtimes H_2$ as the graph with the vertex set $V_1\\times V_2$, and edges given by\n$$\\aligned\n(v_1,v_2)\\, \\sim_{H_1\\boxtimes H_2}(w_1, w_2) \\quad & \\text{ if } \\ \\ v_1=w_1 \\ \\ \\text{and} \\ \\ v_2\\sim_{H_2}w_2\\hskip.03cm, \\\\\n& \\text{ or } \\ \\ v_1\\sim_{H_1}w_1 \\ \\ \\text{and} \\ \\ v_2=w_2\\hskip.03cm, \\\\\n& \\text{ or }\\ \\ v_1\\sim_{H_1}w_1 \\ \\ \\text{and} \\ \\ v_2\\sim_{H_2}w_2\\hskip.03cm.\n\\endaligned\n$$\n\\item\n\\emph{Tensor product} $H_1\\times H_2$ as the graph with the vertex set $V_1\\times V_2$, and edges given by\n$$\n(v_1,v_2)\\sim_{H_1\\times H_2}(w_1, w_2)\\quad \\text{ if } \\ \\ \\. v_1\\sim_{H_1}w_1\\ \\ \\text{and} \\ \\ v_2\\sim_{H_2}w_2\\hskip.03cm.\n$$\n\\end{enumerate}\n\n\\smallskip\n\n\\begin{prop}\\label{prop: products} If for a graph $G$, graphs $H_1$ and $H_2$ are $G$-Kirszbraun then $H_1\\boxtimes H_2$ is $G$-Kirszbraun. If for a bipartite\ngraph $G$, bipartite graphs\n$H_1'$ and $H_2'$ are bipartite $G$-Kirszbraun then the connected components of $H_1'\\times H_2'$ are bipartite $G$-Kirszbraun.\n\\end{prop}\n\n\n\\begin{proof} We will prove this in the non-bipartite case; the bipartite case follows similarly. Let $f:=(f_1, f_2): A\\subset G\\longrightarrow H_1\\boxtimes H_2$\nbe $1$-Lipschitz. It follows that the functions $f_1$ and $f_2$ are $1$-Lipschitz as well; hence they have $1$-Lipschitz extensions $\\widetilde f_1: G\\longrightarrow\nH_1$ and $\\widetilde f_2: G\\longrightarrow H_2$. Thus $(\\widetilde f_1, \\widetilde f_2): G\\longrightarrow H_1\\boxtimes H_2$ is $1$-Lipschitz and extends $f$.\n\\end{proof}\n\n\\begin{corollary}[{cf.~\\cite[$\\S$3.2]{MR2405677}}]\nGraph $\\mathbb{Z}^2$ is bipartite $(\\infty,2)$-Helly.\n\\end{corollary}\n\n\\begin{proof}\nAs we mentioned above, it is easy to see that all trees are Helly graphs.\nBy Proposition~\\ref{prop: products}, so are the connected components of\n$\\mathbb{Z}\\times \\mathbb{Z}$ which are graph isomorphic to $\\mathbb{Z}^2$. Now\nTheorem~\\ref{thm: bipartite version of main theorem} implies the result.\n\\end{proof}\n\n\n\\bigskip\n\n\n\\section{Complexity aspects}\\label{section:recognisability and an application}\n\n\n\\subsection{The recognition problem} Below we give a polynomial time algorithm\nto decide whether a given graph is $\\mathbb{Z}^d$-Kirszbraun. We assume\nthat the graph is presented by its adjacency matrix.\n\n\\begin{prop}\\label{prop:recognition}\nFor all fixed $n,m\\in \\mathbb N$, the recognition problem of\n$(n,m)$-Helly graphs and bipartite $(n,m)$-Helly graphs can be\ndecided in \\hskip.03cm \\textrm{poly}$(|H|)$ time.\n\\end{prop}\n\nFor $n=\\infty$ and $m=2$, the recognition problem was solved\nin~\\cite{MR1029165}; that result does not follow from\nthe proposition.\n\n\\begin{proof}\nLet us seek the algorithm in the case of $(n,m)$-Helly graphs; as always,\nthe bipartite case is similar. In the following, for a function $g:\\mathbb R\\to \\mathbb R$ by\n$t=O(g(|H|))$ we mean $t\\leq k g(|H|)$, where $k$ is independent\nof $|H|$ but might depend on $m,n$.\n\\begin{enumerate}\n\\item\nDetermine the distance between the vertices of the graph. 
This takes $O(|H|^3)$ time.\n\\item\nNow make a list of all the collections of balls, each collection being of cardinality~$n$.\nSince the diameter of the graph $H$ is bounded by $|H|$, listing\nthe centers and the radii of the balls takes time $O(|H|^{2n})$.\n\\item\nFind the collections for which all the subcollections of cardinality $m$ intersect.\nFor each collection, this step takes $O(|H|)$ time.\n\\item\nCheck if the intersection of the balls in the collections found in the previous\nstep is nonempty. This step again takes $O(|H|)$ time.\n\\end{enumerate}\nThus, the total time is $O(|H|^{2n+3})$, as desired.\n\\end{proof}\n\n\\smallskip\n\n\\subsection{The hole filling problem} The following application\nis motivated by the tileability problems, see Section~\\ref{section: motivation}.\n\nFix $d\\geq 2$. By a \\emph{box} $B_n$ in $\\mathbb{Z}^d$ we mean a subgraph $\\{0,1,\\ldots,n\\}^d$.\nBy the \\emph{boundary} $\\partial_n$ we mean the internal vertex boundary of $B_n$,\nthat is, vertices of $B_n$ where at least one of the coordinates is either $0$ or~$n$.\nThe \\emph{hole-filling problem} asks: \\hskip.03cm Given a graph $H$ and a graph homomorphism \\hskip.03cm\n$f\\in \\textup{\\textrm{Hom}}(\\partial_n, H)$, does it extend to a graph homomorphism \\hskip.03cm $\\widetilde f\\in \\textup{\\textrm{Hom}}(B_n,H)$?\n\n\\begin{thm}\\label{prop: hole filling}\nFix $d\\ge 1$. Let $H$ be a finite bipartite $(2d,2)$-Helly graph, $B_n\\subset \\mathbb{Z}^d$\na box and $f\\in \\textup{\\textrm{Hom}}(\\partial_n, H)$ be a graph homomorphism as above. Then the\nhole-filling problem for $f$ can be decided in \\hskip.03cm \\textrm{poly}$(n+|H|)$ time.\n\\end{thm}\n\nThe same result holds in the context of $1$-Lipschitz maps for $(2d,2)$-Helly graphs;\nthe algorithm is similar. For general~$H$, without the $(2d,2)$-Helly assumption,\nthe problem is a variation on the existence of graph homomorphisms, see~\\cite[Ch.~5]{HN}.\nThe latter is famously~\\textup{\\textsf{NP}}-complete in almost all nontrivial cases, which makes the\ntheorem above even more surprising.\n\n\\begin{proof} In the following, for a function $g:\\mathbb R^2\\to \\mathbb R$, by $t=O(g(|H|,n))$\nwe mean $t\\leq k g(|H|,n)$ where $k$ is independent of $|H|$ and $n$.\nLet $f\\in \\textup{\\textrm{Hom}}(\\partial_n, H)$ be given. Since $H$ is a bipartite $(2d,2)$-Helly graph,\nby Theorem~\\ref{thm: bipartite version of main theorem}, $f$ extends to $B_n$ if and only if\n$f$ is $1$-Lipschitz. Thus to decide the hole-filling problem we need to\ndetermine whether or not $f$ is $1$-Lipschitz. This can be decided in\npolynomial time:\n\\begin{enumerate}\n\\item\nDetermine the distances between all pairs of vertices in~$H$. This costs $O(|H|^3)$.\n\\item\nFor each pair of vertices in the graph $\\partial_n$, determine the distance\nbetween the pair and their image under $f$ and verify the Lipschitz condition.\nThis costs $O(n^{2d-2})$.\n\\end{enumerate}\nThe total cost is $O(n^{2d-2}+|H|^3)$, which completes the proof.\n\\end{proof}\n\nFor $d=2$ and $H=\\mathbb{Z}$, the above algorithm can be modified to give\nan $O(n^2)$ time complexity for the hole-filling problem of an $[n\\times n]$\nbox. This algorithm can be improved to nearly linear time\n$O(n \\log n)$ by using the tools in~\\cite{MR3530972}.\nWe omit the details.\n\n\\bigskip\n\n\\section{Final remarks and open problems} \\label{s:finrem}\n\n\\subsection{}\nIn view of our motivation, we focus on the $\\mathbb{Z}^d$-Kirszbraun property\nthroughout the paper.
It would be interesting to find characterizations\nfor other bipartite domain graphs such as the hexagonal and the\nsquare-octahedral lattice (cf.~\\cite{Ken,Thu}). Similarly, it would be\ninteresting to obtain a sharper time bound on the recognition\nproblem as in Proposition~\\ref{prop:recognition}, to obtain applications\nsimilar to~\\cite{MR3530972}.\n\nNote that a vast majority of tileability problems are computationally hard,\nwhich makes the search for tractable sets of tiles even more interesting\n(see~\\cite{Pak-horizons}). The results in this paper, especially the bipartite\nversions, give a guideline for such a search.\n\n\\subsection{}\nAs we mention in Section~\\ref{section: motivation}, there are\nintrinsic curvature properties of the underlying spaces,\nwhich allow for the Helly-type theorems~\\cite{langschroder1997}.\nIn fact, there are more general local hyperbolic properties\nwhich can also be used in this setting, see~\\cite{CE,CDV,lang1999}.\nSee also a curious ``local-to-global'' characterization\nof Helly graphs in~\\cite{CC+}, and a generalization of\nHelly's properties to hypergraphs~\\cite{BeD,DPS}.\n\n\\subsection{}\nIn literature, there are other ``discrete Kirszbraun theorems'' stated in different contexts.\nFor example, papers~\\cite{AT,Bre} give a PL-version of the result for finite\n$A,B\\subset \\mathbb R^d$ and 1-Lipschitz $f:A\\to B$. Such results are related to the classical\n\\emph{Margulis napkin problem} and other isometric embedding\/immersion problems,\nsee e.g.~\\cite[$\\S$38--$\\S$40]{Pak} and references therein.\n\nIn a different direction, two more related problems are worth mentioning. \nFirst, the \\emph{carpenter's rule problem} can be viewed as a problem of finding \na ``discrete $1$-Lipschitz homotopy'', see~\\cite{CD,MR2007962}. This is also \n(less directly) related to the well-known \\emph{Kneser--Poulsen conjecture}, \nsee e.g.~\\cite{Bez}.\n\n\\bigskip\n\n\\subsection*{Acknowledgements} We would like to thank the referee for several useful comments and suggestions. We are grateful to Arseniy Akopyan,\nAlexey Glazyrin, Georg Menz, Alejandro Morales and Adam Sheffer\nfor helpful discussions. Victor Chepoi kindly read the draft\nof the paper and gave us many useful remarks, suggestions and\npointers to the literature.\n\nWe thank the organizers of the thematic\nschool ``Transversal Aspects of Tiling'' at Ol\\'{e}ron, France in~2016,\nfor inviting us and giving us an opportunity to meet and interact\nfor the first time. The first author has been funded by ISF grant\nNos.~1599\/13, 1289\/17 and ERC grant No.~678520. The second author was\npartially supported by the NSF.\n\n\n\\vskip1.1cm\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nFinding optimal treatment strategies that can incorporate patient heterogeneity is a cornerstone of personalized medicine. When treatment options change over time, optimal sequential \ntreatment rules (STR) can be learned using longitudinal patient\ndata. With increasing availability of large-scale longitudinal data such as electronic health records (EHR) data in recent years, reinforcement learning (RL) has found much success in estimating such optimal STR \\citep{2019PM}. Existing RL methods include G-estimation \\citep{robins2004}, Q-learning \\citep{Watkins1989,murphy2005}, A-learning \\citep{MurphyAlearning} and directly maximizing the value function \\citep{Zhao2015}. 
Both G-estimation and $A$-learning attempt to model only the component of the outcome regression relevant to the treatment contrast, while $Q$-learning posits complete models for the outcome regression. Although G-estimation and $A$-learning models can be more efficient and robust to mis-specification, $Q$-learning is widely adopted due to its ease of implementation, flexibility and interpretability \\citep{Watkins1989,DTRbook,schulte2014}.\n\nLearning STR with EHR data, however, often faces an additional challenge of whether outcome information is readily available. Outcome information, such as development of a clinical event or whether a patient is considered as a responder, is often not well coded but rather embedded in clinical notes. Proxy variables such as diagnostic codes or mentions of relevant clinical terms in clinical notes via natural language processing (NLP), while predictive of the true outcome, are often not sufficiently accurate to be used directly in place of the outcome \\citep{hong2019semi,zhang2019high,cheng2020robust}. On the other hand, extracting precise outcome information often requires manual chart review, which is resource intensive, particularly when the outcome needs to be annotated over time. This indicates the need for a semi-supervised learning (SSL) approach that can efficiently leverage a small sized labeled data $\\Lsc$ with true outcome observed and a large sized unlabeled data $\\Usc$ for predictive modeling. It is worthwhile to note that the SSL setting differs from the standard missing data setting in that the probability of missing tends to 1 asymptotically, which violates the positivity assumption required by the classical missing data methods \\citep{chakrabortty}.\n\nWhile SSL methods have been well developed for prediction, classification and regression tasks \\citep[e.g.][]{Chapelle2006,zhu05,BlitzerZ08,Wang2011,qiao2018,chakrabortty}, there is a paucity of literature on SSL methods for estimating optimal treatment rules. Recently, \\cite{cheng2020robust} and \\cite{kallus2020role} proposed SSL methods for estimating an average causal treatment effect. \\cite{Finn2016} proposed a semi-supervised RL method which achieves impressive empirical results and outperforms simple approaches such as direct imputation of the reward. However, there are no theoretical guarantees and the approach lacks causal validity and interpretability within a domain context. Additionally, this method does not leverage available surrogates. In this paper, we fill this gap by proposing a theoretically justified SSL approach to Q-learning using a large unlabeled data $\\Usc$ which contains sequential observations on features $\\bO$, treatment assignment $A$, and surrogates $\\bW$ that are imperfect proxies of $Y$, as well as a small set of labeled data $\\Lsc$ which contains true outcome $Y$ at multiple stages along with $\\bO$, $A$ and $\\bW$. We will also develop robust and efficient SSL approach to estimating the value function of the derived optimal STR, defined as the expected counterfactual outcome under the derived STR.\n\nTo describe the main contributions of our proposed SSL approach to RL, we first note two important distinctions between the proposed framework and classical SSL methods. First, existing SSL literature often assumes that $\\mathcal{U}$ is large enough that the feature distribution is known \\citep{Wasserman2007}. However, under the RL setting, the outcome of the stage $t-1$, denoted by $Y_{t-1}$, becomes a feature of stage $t$ for predicting $Y_t$. 
As such, the feature distribution for predicting $Y_t$ can not be viewed as known in the $Q$-learning procedure. Our methods for estimating an optimal STR and its associated value function, carefully adapt to this sequentially missing data structure. Second, we modify the SSL framework to handle the use of surrogate variables $\\bW$ which are predictive of the outcome through the joint law $\\Pbb_{Y,\\bO,A,\\bW}$, but are not part of the conditional distribution of interest $\\Pbb_{Y|\\bO,A}$. To address these issues, we propose a two-step fitting procedure for finding an optimal STR and for estimating its value function in the SSL setting. Our method consists of using the outcome-surrogates ($\\bW$) and features ($\\bO,A$) for non-parametric estimation of the missing outcomes ($Y$). We subsequently use these imputations to estimate $Q$ functions, learn the optimal treatment rule and estimate its associated value function. We provide theoretical results to understand when and to what degree efficiency can be gained from $\\bW$ and $\\bO,A$.\n\nWe further show that our approach is robust to mis-specification of the imputation models. To account for potential mis-specification in the models for the $Q$ function, we provide a double robust value function estimator for the derived STR. If either the regression models for the Q functions or the propensity score functions are correctly specified, our value function estimators are consistent for the true value function. \n\n\nWe organize the rest of the paper as follows. In Section \\ref{section: set up} we formalize the problem mathematically and provide some notation to be used in the development and analysis of the methods. In Section \\ref{section: SS Q learning} we discuss traditional $Q$-learning and propose an SSL estimation procedure for the optimal STR. Section \\ref{section: SS value function} details an SSL doubly robust estimator of the value function for the derived STR. In Section \\ref{theory} we provide theoretical guarantees for our approach and discuss implications of our assumptions and results. Section \\ref{section: sims and application} is devoted for numerical experiments as well as real data analysis with an inflammatory bowel disease (IBD) data-set. We end with a discussion of the methods and possible extensions in Section \\ref{section: discussion}. The proposed method has been implemented in R and the code can be found at \\url{github.com\/asonabend\/SSOPRL}. Finally all the technical proofs and supporting lemmas are collected in Appendices \\ref{sec:proof_main_results} and \\ref{sec:proof_technical_lemmas}.\n\n\\section{Problem setup}\\label{section: set up}\n\nWe consider a longitudinal observational study with outcomes, confounders and treatment indices potentially available over multiple stages. Although our method is generalizable for any number of stages, for ease of presentation we will use two time points of (binary) treatment allocation as follows. For time point $t\\in \\{1,2\\}$, let $\\bO_t\\in\\mathbb{R}^{d^o_t}$ denote the vector of covariates measured prior at stage $t$ of dimension $d^o_t$; $A_t\\in\\{0,1\\}$ a treatment indicator variable; and $Y_{t+1}\\in\\mathbb{R}$ the outcome observed at stage $t+1$, for which higher values of $Y_{t+1}$ are considered beneficial. Additionally we observe surrogates $\\bW_t \\in\\mathbb{R}^{d^\\omega_ t}$, a $d^\\omega_ t$-dimensional vector of post-treatment covariates potentially predictive of $Y_{t+1}$. 
In the labeled data where $\\bY = (Y_2,Y_3)\\trans$ is annotated, we observe a random sample of $n$ independent and identically distributed (iid) random vectors, denoted by \n$$\n\\mathcal{L}=\\{\\bL_i = (\\bUvec_i\\trans,\\bY_i\\trans)\\trans\\}_{i=1}^n, \\quad \\mbox{where $\\bU_{ti}=(\\bO_{ti}\\trans, A_{ti},\\bW_{ti}\\trans)\\trans$ and $\\bUvec_i=(\\bU_{1i}\\trans,\\bU_{2i}\\trans)\\trans$.}\n$$ \nWe additionally observe an unlabeled set consisting of $N$ iid random vectors, \n$$\\mathcal{U}=\\{\\bUvec_{j}\\}_{j=1}^N$$\nwith $N \\gg n$. We denote the entire data as $\\mathbb{S}=(\\mathcal{L}\\cup\\mathcal{U})$. To operationalize our statistical arguments we denote the joint distribution of the observation vector $\\bL_i$ in $\\mathcal{L}$ as $\\Pbb$. In order to connect to the unlabeled set, we assume that any observation vector $\\bUvec_{j}$ in $\\mathcal{U}$ has the distribution induced by $\\Pbb$.\n\nWe are interested in finding the optimal STR and estimating its {\\em value function} to be defined as expected counterfactual outcomes under the derived regime. To this end, let $Y_{t+1}^{(a)}$ be the potential outcome for a patient at time $t+1$ had the patient been assigned at time $t$ to treatment $a\\in \\{0,1\\}$. A dynamic treatment regime is a set of functions $\\mathcal{D}=(d_1,d_2)$, where $d_t(\\cdot)\\in\\{0,1\\}$ , $t=1,2$ map from the patient's history up to time $t$ to the treatment choice $\\{0,1\\}$.\nWe define the patient's history as $\\bH_1\\equiv [\\bH_{10}\\trans, \\bH_{11}\\trans]\\trans$ with $\\bH_{1k} = \\bphi_{1k}(\\bO_1)$, $\\bH_2=[\\bH_{20}\\trans, \\bH_{21}\\trans]\\trans$ with $\\bH_{2k}=\\bphi_{2k}(\\bO_1,A_1,\\bO_2)$, where $\\{\\bphi_{tk}(\\cdot), t=1,2, k=0,1\\}$ are pre-specified basis functions. We then define features derived from patient history for regression modeling as $\\bX_1\\equiv[\\bH_{10}\\trans,A_1\\bH_{11}\\trans]\\trans$ and $\\bX_2\\equiv[\\bH_{20}\\trans,A_{2}\\bH_{21}\\trans]\\trans$. For ease of presentation, we also let $\\bHcheck_1 = \\bH_1\\trans$, $\\bHcheck_2 = (Y_2, \\bH_2\\trans)\\trans$, $\\bXcheck_1 = \\bX_1$, $\\bXcheck_2 = (Y_2, \\bX_2\\trans)\\trans$, and $\\bSigma_t=\\mathbb{E}[\\bXcheck_t\\bXcheck_t\\trans]$. \n\nLet $\\Ebb_\\Dsc$ be the expectation with respect to the measure that generated the data under regime $\\Dsc$. Then these sets of rules $\\Dsc$ have an associated value function which we can write as $V(\\Dsc)=\\mathbb{E}_\\Dsc\\left[Y_2^{(d_1)}+Y_3^{(d_2)}\\right]$. Thus, an optimal dynamic treatment regime is a rule $\\Dscbar=(\\dbar_1,\\dbar_2)$ such that $\\Vbar=V\\left(\\Dscbar\\right)\\ge V\\left(\\Dsc\\right)$ for all $\\Dsc$ in a suitable class of admissible decisions \\citep{DTRbook}. To identify $\\Dscbar$ and $\\Vbar$ from the observed data we will require the following sets of standard assumptions \\citep{robins1997, schulte2014}: (i) consistency -- $Y_{t+1}=Y_{t+1}^{(0)}I(A_t=0)+Y_{t+1}^{(1)}I(A_t=1)\\text{ for }t=1,2$, (ii) no unmeasured confounding -- $Y_{t+1}^{(0)},Y_{t+1}^{(1)}\\indep A_t|\\bH_t\\text{ for }t=1,2$ and (iii) positivity -- $\\Pbb(A_t|\\bH_t)>\\nu$, $\\text{ for }t=1,2,\\:A_t\\in\\{0,1\\}$, for some fixed $\\nu>0$. \n\nWe will develop SSL inference methods to derive optimal STR $\\Dscbar$ as well the associated value function $\\Vbar$ by leveraging the richness of the unlabeled data and the predictive power of surrogate variables which allows us to gain crucial statistical efficiency. Our main contributions in this regard can be described as follows. 
First, we provide a systematic generalization of the $Q$-learning framework with theoretical guarantees to the semi-supervised setting with improved efficiency. Second, we provide a doubly robust estimator of the value function in the semi-supervised setup. Third, our $Q$-learning procedure and value function estimator are flexible enough to allow for standard off-the-shelf machine learning tools and are shown to perform well in finite-sample numerical examples. \n\n\n\n\\section{Semi-Supervised \\texorpdfstring{$Q$}{Lg}-learning}\\label{section: SS Q learning}\n\nIn this section we propose a semi-supervised Q-learning approach to deriving an optimal STR. To this end, we first recall the basic mechanism of traditional linear parametric $Q$-learning \\citep{DTRbook} and then detail our proposed method. We defer the theoretical guarantees to Section \\ref{theory}. \n\n\\subsection{Traditional \\texorpdfstring{$Q$}{Lg}-learning}\n\n$Q$-learning is a backward recursive algorithm that identifies optimal STR by optimizing two stage Q-functions defined as: $$Q_2(\\bHcheck_2,A_2)\\equiv\\mathbb{E}[Y_3|\\bHcheck_2,A_2], \\quad \\mbox{and}\\quad\nQ_1(\\bHcheck_1,A_1)\\equiv\\mathbb{E}[Y_2+\\underset{a_2}{\\text{max}}\\:Q_2(\\bHcheck_2,a_2)|\\bHcheck_1,A_1]\n$$ \n\\citep{sutton2018,murphy2005}.\nIn order to perform inference one typically proceeds by positing models for the $Q$ functions. In its simplest form one assumes a (working) linear model for some parameters $\\btheta_t=(\\bbeta_t\\trans,\\bgamma_t\\trans)\\trans$, $t=1,2$, as follows: \n\\begin{align}\\label{linear_Qs}\n\\begin{split}\nQ_1(\\bHcheck_1,A_1;\\btheta_1^0)=& \\bXcheck_1\\trans\\btheta_1^0=\\bH_{10}\\trans \\bbeta_1^0+A_{1}(\\bH_{11}\\trans \\bgamma_1^0) ,\\\\\nQ_2(\\bHcheck_2,A_2;\\btheta_2^0)=&\\bXcheck_2\\trans\\btheta_2^0 = Y_2 \\beta_{21}^0 +\n\\bH_{20}\\trans\\bbeta_{22}^0\n+A_{2}(\\bH_{21}\\trans\\bgamma_2^0).\n\\end{split}\n\\end{align}\nTypical $Q$-learning consists of performing a least squares regression for the second stage to estimate $\\bthetahat_2$ followed by defining the stage 1 pseudo-outcome for $i=1,...,n$ as \n\\[\n\\Yhat_{2i}^*=Y_{2i}+\\underset{a_2}{\\text{max}}\\:Q_2(\\bHcheck_{2i},a_2;\\bthetahat_2)=Y_{2i}(1+\\hat\\beta_{21})+\\bH_{20i}\\trans{\\bbetahat}_{22}\n+[\\bH_{21i}\\trans{\\bgammahat}_2]_+,\n\\]\nwhere $[x]_+=xI(x>0)$. One then proceeds to estimate $\\bthetahat_1$ using least squares again, with $\\Yhat_2^*$ as the outcome variable. Indeed, valid inference on $\\Dscbar$ using the method described above crucially depends on the validity of the model assumed. However as we shall see, even without validity of this model we will be able to provide valid inference on suitable analogues of the $Q$-function working model parameters, and on the value function using a double robust type estimator. To that end it will be instructive to define the least square projections of $Y_3$ and $Y_2^*$ onto $\\bXcheck_2$ and $\\bXcheck_1$ respectively. The linear regression working models given by \\eqref{linear_Qs} have $\\btheta_1^0,\\:\\btheta_2^0$ as unknown regression parameters. 
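\n\nBefore introducing these projections, we note for concreteness that the two-stage supervised procedure described above is straightforward to implement. The following minimal sketch is written in R, the language of our released implementation, but it is purely illustrative: it assumes the labeled data have already been arranged into design matrices, and all object names (\\texttt{X1}, \\texttt{X2check}, \\texttt{H20}, \\texttt{H21}, and so on) are hypothetical placeholders rather than objects from our code.\n\\begin{verbatim}\n## Illustrative two-stage supervised Q-learning fit with working linear models.\n## Hypothetical inputs: X1 = cbind(H10, A1 * H11); X2check = cbind(Y2, H20, A2 * H21);\n## H20, H21 = stage-2 history bases; Y2, Y3 = observed outcomes (labeled data).\nqlearn_sup <- function(X1, X2check, H20, H21, Y2, Y3) {\n  # Stage 2 least squares: regress Y3 on (Y2, H20, A2 * H21)\n  theta2 <- drop(solve(crossprod(X2check), crossprod(X2check, Y3)))\n  beta21 <- theta2[1]\n  beta22 <- theta2[2:(1 + ncol(H20))]\n  gamma2 <- theta2[-(1:(1 + ncol(H20)))]\n  # Stage 1 pseudo-outcome: Y2 * (1 + beta21) + H20' beta22 + [H21' gamma2]_+\n  Y2star <- Y2 * (1 + beta21) + drop(H20 %*% beta22) + pmax(drop(H21 %*% gamma2), 0)\n  # Stage 1 least squares: regress the pseudo-outcome on X1\n  theta1 <- drop(solve(crossprod(X1), crossprod(X1, Y2star)))\n  # The estimated rules are d_t(H_t) = 1{H_t1' gamma_t > 0}\n  list(theta1 = theta1, theta2 = theta2)\n}\n\\end{verbatim}\nWith fully labeled data this is ordinary $Q$-learning; the semi-supervised procedure of Section~\\ref{sec: ssQ} replaces the outcomes that are unavailable in $\\Usc$ with carefully calibrated imputations.\n\n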
To account for the potential mis-specification of the working models in \\eqref{linear_Qs}, we define the target population parameters $\\bthetabar_1,\\bthetabar_2$ as the population solutions to the expected normal equations \n\\[\n\\mathbb{E}\\left\\{\\bXcheck_1(\\Ybar_2^*-\\bXcheck_1\\trans\\bthetabar_1)\\right\\}=\\bzero, \\quad \\mbox{and}\\quad \\mathbb{E}\\left\\{ \\bXcheck_2\\left(Y_3-\\bXcheck_2\\trans\\bthetabar_2\\right)\\right\\}=\\bzero,\n\\]\nwhere $\\Ybar_2^*=Y_2+\\underset{a_2}{\\text{max}}\\:Q_2(\\bHcheck_2,a_2;\\bthetabar_2)$. As these equations are linear in the parameters, existence and uniqueness of\n$\\bthetabar_1,\\bthetabar_2$ are well defined. In fact, $Q_1(\\bHcheck_1,A_1;\\bthetabar_1)=\\bXcheck_1\\trans\\bthetabar_1,Q_2(\\bHcheck_2,A_2;\\bthetabar_2)=\\bXcheck_2\\trans\\bthetabar_2$ are the $L_2$ projections of $\\mathbb{E}(\\Ybar_2^*|\\bXcheck_1)\\in\\mathcal{L}_2\\left(\\Pbb_{\\bXcheck_1}\\right),\\:\\mathbb{E}(Y_3|\\bXcheck_2)\\in\\mathcal{L}_2\\left(\\Pbb_{\\bXcheck_2}\\right)$ onto the subspace of all linear functions of $\\bXcheck_1,\\bXcheck_2$ respectively. Therefore, the $Q$ functions in \\eqref{linear_Qs} are the best linear predictors of $\\Ybar_2^*$ conditional on $\\bXcheck_1$ and of $Y_3$ conditional on $\\bXcheck_2$. \n\n\n\nTraditionally, one only has access to labeled data $\\mathcal{L}$, and hence proceeds by estimating $(\\btheta_1,\\btheta_2)$ in \\eqref{linear_Qs} by solving the following sample versions of the normal equations:\n\\begin{align}\\label{full_EE}\n\\begin{split}\n\\Pbb_n \n\\left[\\begin{matrix}\n\\bXcheck_2(Y_3-\\bXcheck_2\\trans\\btheta_2)\n\\end{matrix}\\right] \\equiv\n\\Pbb_n\\left[\\begin{matrix}\nY_2\\{Y_3-(Y_2,\\bX_2\\trans)\\btheta_2\\}\\\\\n\\bX_2\\{Y_3-(Y_2,\\bX_2\\trans)\\btheta_2\\}\n\\end{matrix}\\right] \n=&\\textbf{0},\\\\\n\\Pbb_n \\left[\n\\bX_1\n\\{Y_2(1+\\beta_{21})+\\bH_{20}\\trans{\\bbeta}_{22}\n+[\\bH_{21}\\trans{\\bgamma}_2]_+-\\bX_1\\trans\\btheta_1\\}\\right]=&\\bf0.\n\\end{split}\n\\end{align}\n\\citep{DTRbook}, where $\\Pbb_n$ denotes the empirical measure: i.e. for a measurable function $f:\\mathbb{R}^p\\mapsto\\mathbb{R}$ and random sample $\\{\\bL_i\\}_{i=1}^n$, $\\Pbb_nf=\\frac{1}{n}\\sum_{i=1}^nf(\\bL_i)$. \nThe asymptotic distribution for the $Q$ function parameters in the fully-supervised setting has been well studied \\cite[see][]{laber2014}.\n\n\n\\subsection{Semi-supervised \\texorpdfstring{$Q$}{Lg}-learning}\\label{sec: ssQ}\nWe next detail our robust imputation-based semi-supervised $Q$-learning that leverages the unlabeled data $\\mathcal{U}$ to replace the unobserved $Y_{t}$ in \\eqref{full_EE} with their properly imputed values for subjects in $\\Usc$. Our SSL procedure includes three key steps: (i) imputation, (ii) refitting, and (iii) projection to the unlabeled data. \nIn step (i), we develop flexible imputation models for the conditional mean functions $\\{\\mu_t(\\cdot), \\mu_{2t}(\\cdot), t = 2, 3\\}$, where $\\mu_t(\\bUvec) = \\mathbb{E}(Y_{t}|\\bUvec)$ and $\\mu_{2t}(\\bUvec) = \\mathbb{E}(Y_{2}Y_{t}|\\bUvec)$. \nThe refitting in step (ii) will ensure the validity of the SSL estimators under potential mis-specifications of the imputation models. \n\n\\subsubsection*{Step I: Imputation.}\n\nOur first imputation step involves weakly parametric or non-parametric prediction modeling to approximate the conditional mean functions $\\{\\mu_t(\\cdot), \\mu_{2t}(\\cdot), t = 2, 3\\}$.
Commonly used models such as non-parametric kernel smoothing, basis function expansion or kernel machine regression can be used. We denote the corresponding estimated mean functions as $\\{\\mhat_t(\\cdot), \\mhat_{2t}(\\cdot), t = 2, 3\\}$ under the corresponding imputation models $\\{m_t(\\bUvec), m_{2t}(\\bUvec), t=2,3\\}$. Theoretical properties of our proposed SSL estimators on specific choices of the imputation models are provided in section \\ref{theory}. We also provide additional simulation results comparing different imputation models in section \\ref{section: sims and application}.\n\n\n\n\n\\subsubsection*{Step II: Refitting.}\nTo overcome the potential bias in the fitting from the imputation model, especially under model mis-specification, we update the imputation model with an additional refitting step by expanding it to include linear effects of $\\{\\bX_t, t=1,2\\}$ with cross-fitting to control overfitting bias.\nSpecifically, to ensure the validity of the SSL algorithm from the refitted imputation model, we note that the final imputation models for $\\{Y_t, Y_{2t}, t=2,3\\}$, denoted by $\\{\\mubar_t(\\bUvec), \\mubar_{2t}, t=2,3\\}$, need to satisfy \n\\begin{equation*}\n\\begin{alignedat}{3}\n \\Ebb\\left[\\bXvec\\{Y_2 - \\mubar_2(\\bUvec)\\}\\right] = \\bzero, &\\quad&\n\\Ebb\\left\\{Y_2^2 - \\mubar_{22}(\\bUvec)\\right\\} =&0 &, \\\\\n\\Ebb\\left[\\bX_2\\{Y_3 - \\mubar_3(\\bUvec)\\}\\right] = \\bzero, &\\quad&\n\\Ebb\\left\\{Y_2Y_3 - \\mubar_{23}(\\bUvec)\\right\\} =&0 &.\n\\end{alignedat}\n\\end{equation*}\nwhere $\\bXvec = (1,\\bX_1\\trans,\\bX_2\\trans)\\trans$.\nWe thus propose a refitting step that expands $\\{m_t(\\bUvec), m_{2t}(\\bUvec), t=2,3\\}$ to additionally adjust for linear effects of $\\bX_1$ and\/or $\\bX_2$ to ensure the subsequent projection step is unbiased. To this end, let $\\{\\mathcal{I}_k,k=1,...,K\\}$ denote $K$ random equal sized partitions of the labeled index set $\\{1,...,n\\}$, and let $\\{\\mhat_{t}\\supnk(\\bUvec), \\mhat_{2t}\\supnk(\\bUvec), t=2,3\\}$ be the counterpart of $\\{\\mhat_t(\\bUvec), \\mhat_{2t}(\\bUvec),t=2,3\\}$ with labeled observations in $\\{1,..,n\\}\\setminus\\mathcal{I}_k$. 
We then obtain $\\bEtahat_2$, $\\etahat_{22}$, $\\bEtahat_3$, $\\etahat_{23}$ respectively as the solutions to\n\\begin{equation}\\label{eta_EE_Q1}\n\\begin{alignedat}{3}\n\\sum_{k=1}^{K}\\sum_{i\\in\\mathcal{I}_k}\\bXvec_i\\left\\{Y_{2i}- \\mhat_2\\supnk(\\bUvec_i)-\\bEta_2\\trans\\bXvec_i\\right\\} = &\\bzero, \\ & \\sum_{k=1}^{K}\\sum_{i\\in\\mathcal{I}_k}\n\\left\\{Y_{2i}^2-\\mhat_{22}\\supnk(\\bUvec_i)-\n\\eta_{22}\\right\\} = &0& , \\\\\n\\sum_{k=1}^{K}\\sum_{i\\in\\mathcal{I}_k}\n\\bX_{2i}\\left\\{Y_{3i}-\\mhat_{3}\\supnk(\\bUvec_i)-\n\\bEta_3\\trans\\bX_{2i}\n\\right\\} = &\\bzero, \\ & \n\\sum_{k=1}^{K}\\sum_{i\\in\\mathcal{I}_k}\n\\left\\{Y_{2i}Y_{3i}-\\mhat_{23}\\supnk(\\bUvec_i)-\n\\eta_{23}\\right\\} =&0& .\n\\end{alignedat}\n\\end{equation}\n\nFinally, we impute $Y_2$, $Y_3$, $Y_{2}^2$ and $Y_{2}Y_3$ respectively as\n$\\muhat_2(\\bUvec) = K^{-1}\\sum_{k=1}^K \\mhat_2\\supnk(\\bUvec) + \\bEtahat_2\\trans\\bXvec$, \n$\\muhat_3(\\bUvec) = K^{-1}\\sum_{k=1}^K \\mhat\\supnk_3(\\bUvec) + \\bEtahat_3\\trans\\bX_2$, \n$\\muhat_{22}(\\bUvec)=K^{-1}\\sum_{k=1}^K \\mhat\\supnk_{22}(\\bUvec) + \\etahat_{22}$, and $\\muhat_{23}(\\bUvec)=K^{-1}\\sum_{k=1}^K \\mhat\\supnk_{23}(\\bUvec) + \\etahat_{23}$.\n\n\n\n\\subsubsection*{Step III: Projection}\\label{section: SSQL}\n\nIn the last step, we proceed to estimate $\\bthetahat$ \nby replacing $\\{Y_t, Y_{2}Y_{t}, t=2,3\\}$ in (\\ref{full_EE}) with their the imputed values $\\{ \\muhat_t(\\bUvec), \\muhat_{2t}(\\bUvec), t= 2,3\\}$ and project to the unlabeled data. Specifically, we obtain the final SSL estimators for $\\btheta_1$ and $\\btheta_2$ via the following steps:\n\\begin{enumerate} \n\n\t\\item Stage 2 regression:\n\twe obtain the SSL estimator for $\\btheta_2$ as \n \\begin{align*}\n \\bthetahat_2=(\\bbetahat_2\\trans,\\bgammahat_2\\trans)\\trans: \n \\mbox{the solution to}\\quad\n \\begin{split}\n \\Pbb_N&\n \\begin{bmatrix}\n \\muhat_{23}(\\bUvec)-\n [\\muhat_{22}(\\bUvec),\\muhat_2(\\bUvec)\\bX_2\\trans]\\btheta_2\\\\\n \\bX_2\\{\\muhat_3(\\bUvec)-\n [\\muhat_2(\\bUvec),\\bX_2\\trans]\\btheta_2\\}\n \\end{bmatrix}\n =\\textbf{0}\n \\end{split}\n \\end{align*}\n \n\\item We compute the imputed pseudo-outcome: \n\\[\n\\Ytilde_{2}^*=\\muhat_{2}(\\bUvec)+\\underset{a\\in\\{0,1\\}}{\\text{max }}Q_2\\left(\\bH_{2},\\muhat_{2}(\\bUvec),a;\\bthetahat_2\\right),\n\\]\n\\item Stage 1 regression: we estimate $\\bthetahat_1=(\\bbetahat_1\\trans,\\bgammahat_1\\trans)\\trans$ as the solution to:\n\\begin{align*}\n\\Pbb_N \\left\\{\\bX_1\n(\\Ytilde_2^*-\\bX_1\\trans\\btheta_1)\\right\\}=\\bf0.\n\\end{align*}\n\n\\end{enumerate}\n\nBased on the SSL estimator for the Q-learning model parameters, we can then obtain an estimate for the optimal treatment protocol as:\n\\[\n\\dhat_t \\equiv \\dhat_t(\\bH_t)\\equiv d_t(\\bH_t; \\bthetahat_t), \\mbox{ where } d_t(\\bH_t,\\btheta_t) = \\argmax{a \\in \\{0,1\\}}Q_t(\\bH_t,a;\\btheta_t)=I\\left(\\bH_{t1}\\trans\\bgamma_t>0\\right), \\: t = 1, 2.\n\\]\nTheorems \\ref{theorem: unbiased theta2} and \\ref{theorem: unbiased theta1} of Section \\ref{theory} demonstrate\nthe consistency and asymptotic normality of the SSL estimators $\\{\\bthetahat_t,t=1,2\\}$ for their respective population parameters $\\{\\bthetabar_t,t=1,2 \\}$ even in the possible mis-specification of \\eqref{linear_Qs}. 
As we explain next, the guarantees of Theorems~\\ref{theorem: unbiased theta2} and~\\ref{theorem: unbiased theta1} in turn yield desirable statistical results for evaluating the resulting policy $\\dbar_t \\equiv \\dbar_t(\\bH_t) \\equiv d_t(\\bH_t,\\bthetabar_t) = \\argmax{a \\in \\{0,1\\}}Q_t(\\bHcheck_t,a;\\bthetabar_t)$ for $t=1,2$.\n\n\\section{Semi-Supervised Off-Policy Evaluation of the Policy}\\label{section: SS value function}\n\nTo evaluate the performance of the optimal policy $\\Dscbar = \\{\\dbar_t(\\bH_t), t=1,2\\}$, derived under the Q-learning framework, one may estimate the expected population outcome under the policy $\\Dscbar$: \n\\[\n\\Vbar\\equiv\n\\mathbb{E}\\left[\\mathbb{E}\\{Y_2+\\mathbb{E}\\{Y_3|\\bHcheck_2,A_2=\\dbar_2(\\bH_2)\\}|\\bH_1,A_1=\\dbar_1(\\bH_1)\\}\\right].\n\\]\n\nIf models in \\eqref{linear_Qs} are correctly specified, then under standard causal assumptions (consistency, no unmeasured confounding, and positivity), an asymptotically consistent supervised estimator for the value function can be obtained as\n\\[\n\\Vhat_Q=\\Pbb_n\\left[\\Qopt_1(\\bHcheck_1;\\bthetahat_1)\\right],\n\\]\nwhere $\\Qopt_t(\\bHcheck_t;\\btheta_t)\\equiv Q_t\\left(\\bHcheck_t,d_t(\\bH_t;\\btheta_t);\\btheta_t\\right)$. However, $\\Vhat_Q$ is likely to be biased when the outcome models in \\eqref{linear_Qs} are mis-specified. This occurs frequently in practice since $Q_1(\\bHcheck_1,A_1)$ is especially difficult to specify. \n\n\nTo improve the robustness to model mis-specification, we augment $\\Vhat_Q$ via propensity score weighting. This gives us an SSL doubly robust (SSL$\\subDR$) estimator for $\\Vbar$. To this end, we define propensity scores: $$\\pi_t(\\bHcheck_t)=\\Pbb\\{A_t=1|\\bHcheck_t\\}, \\quad t=1,2.$$ To estimate $\\{\\pi_t(\\cdot),t=1,2\\}$, we impose the following generalized linear models (GLM):\n\\begin{align}\\label{logit_Ws}\n\\pi_t(\\bHcheck_t;\\bxi_t)=&\\sigma\\left(\\bHcheck_t\\trans\\bxi_t\\right), \\quad \\mbox{with}\\quad \\sigma(x)\\equiv1\/(1+e^{-x}) \\quad \\mbox{for}\\quad t=1,2.\n\\end{align}\nWe use the logistic model with potentially non-linear basis functions $\\bHcheck$ for simplicity of presentation, but one may choose other GLMs or alternative basis expansions to incorporate non-linear effects in the propensity model. We estimate $\\bxi=(\\bxi_1\\trans,\\bxi_2\\trans)\\trans$ based on the standard maximum likelihood estimators using labeled data, denoted by $\\bxihat = (\\bxihat_1\\trans,\\bxihat_2\\trans)\\trans$.\nWe denote the limit of $\\bxihat$ as $\\bxibar = (\\bxibar_1\\trans,\\bxibar_2\\trans)\\trans$. Note that $\\bxibar$ need not equal a true model parameter unless \\eqref{logit_Ws} is correctly specified; in general, it corresponds to the population solution of the fitted models. \n\nOur framework is flexible enough to allow an SSL approach to estimating the propensity scores. As these are nuisance parameters needed for estimation of the value function, and SSL for GLMs has been widely explored \\citep[see][Ch. 2]{ChakraborttyThesis}, we proceed with the usual GLM estimation to keep the discussion focused.
However, SSL for propensity scores can be beneficial in certain cases, as we show in Proposition \\ref{lemma: v funcion var}.\n\n\n\\subsection{SUP\\texorpdfstring{$\\subDR$}{Lg} Value Function Estimation}\nTo derive a supervised doubly robust (SUP$\\subDR$) estimator for $\\Vbar$ that overcomes confounding in the observed data, we let $\\bTheta = (\\btheta\\trans,\\bxi\\trans)\\trans$ and define the inverse probability weights (IPW) using the propensity scores\n\\begin{align*}\n\\omega_1(\\bHcheck_1,A_1,\\bTheta)\n&\\equiv\n\\frac{d_1(\\bH_1;\\btheta_1)A_1}{\\pi_1(\\bHcheck_1;\\bxi_1)}+\\frac{\\{1-d_1(\\bH_1;\\btheta_1)\\}\\{1-A_1\\}}{1-\\pi_1(\\bHcheck_1;\\bxi_1)}, \\quad \\mbox{and}\\\\ \\omega_2(\\bHcheck_2,A_2,\\bTheta)\n&\\equiv\n\\omega_1(\\bHcheck_1,A_1,\\bTheta)\\left(\\frac{d_2(\\bH_2;\\btheta_2)A_2}{\\pi_2(\\bHcheck_2;\\bxi_2)}+\\frac{\\{1-d_2(\\bH_2;\\btheta_2)\\}\\{1-A_2\\}}{1-\\pi_2(\\bHcheck_2;\\bxi_2)}\\right).\n\\end{align*}\nThen we augment $\\Qopt_1(\\bH_1;\\bthetahat_1)$ based on the estimated propensity scores via\n\\begin{align*}\n\\begin{split}\n\\Vsc\\subSUPDR(\\bL; \\bThetahat)=\n\\Qopt_1(\\bH_1;\\bthetahat_1)\n+&\\omega_1(\\bHcheck_1,A_1,\\bThetahat)\n\\left[\nY_2-\\left\\{\\Qopt_1(\\bH_1; \\bthetahat_1)- \\Qopt_2(\\bHcheck_2;\\bthetahat_2)\n\\right\\}\\right]\\\\\n+&\\omega_2(\\bHcheck_2,A_2,\\bThetahat)\\left\\{\nY_3-\\Qopt_2(\\bHcheck_2; \\bthetahat_2)\n\\right\\}\n\\end{split}\n\\end{align*}\nand estimate $\\Vbar$ as \n\\begin{align}\\label{eq: lab value fun}\n\\Vhat\\subSUPDR = \\Pbb_n\\left\\{\\Vsc\\subSUPDR(\\bL; \\bThetahat)\\right\\} .\n\\end{align}\n\n\\begin{remark}\nThe importance sampling estimators previously proposed in \\citet{Jiang2016} and \\citet{WDR} for value function estimation employ similar augmentation strategies. However, they consider a fixed policy, whereas we account for the fact that the STR is estimated with the same data. The construction of the augmentation in $\\Vhat\\subSUPDR$ also differs from the usual augmented IPW estimators \\citep{DTRbook}. As we are interested in the value had the population been treated according to the regime $\\Dscbar$, and not with a fixed sequence $(A_1,A_2)$, we augment the weights for a fixed treatment (i.e. $A_t=1$) with the propensity score weights for the estimated regime $I(A_t = \\dbar_t)$. Finally, we note that this estimator can easily be extended to incorporate non-binary treatments.\n\\end{remark}\n\nThe supervised value function estimator $\\Vhat\\subSUPDR$ is doubly robust in the sense that if either the outcome models or the propensity score models are correctly specified, then $\\Vhat\\subSUPDR\\stackrel{\\Pbb}{\\rightarrow} \\Vbar$ in probability. Moreover, under certain reasonable assumptions, $\\Vhat\\subSUPDR$ \nis asymptotically normal. Theoretical guarantees and proofs for this procedure are shown in Appendix \\ref{app_DR_Vfun}.\n\n\\subsection{SSL\\texorpdfstring{$\\subDR$}{Lg} Value Function Estimation}\\label{section: SS value function est}\nAnalogous to semi-supervised $Q$-learning, we propose a procedure for adapting the augmented value function estimator to leverage $\\mathcal{U}$, by imputing suitable functions of the unobserved outcome in \\eqref{eq: lab value fun}. Since $\\bHcheck_2$ involves $Y_2$, both $\\omega_2(\\bHcheck_2,A_2;\\bTheta)$ and $\\Qopt_2(\\bHcheck_2;\\btheta_2)=Y_2 \\beta_{21}+\\Qopt_{2-}(\\bH_2;\\btheta_2)$ are not available in the unlabeled set, where $\\Qopt_{2-}(\\bH_2;\\btheta_2)=\\bH_{20}\\trans\\bbeta_{22} + [\\bH_{21}\\trans\\bgamma_2]_+$.
By writing $\\Vsc\\subSUPDR(\\bL; \\bThetahat)$ as\n\\begin{align*}\n\\begin{split}\n\\Vsc\\subSUPDR(\\bL; \\bThetahat)=\n\\Qopt_1(\\bH_1;\\bthetahat_1)\n+&\\omega_1(\\bHcheck_1,A_1,\\bThetahat)\n\\left\\{\n(1+\\betahat_{21})Y_2-\\Qopt_1(\\bH_1, \\bthetahat_1)+ \\Qopt_{2-}(\\bH_2;\\bthetahat_2)\n\\right\\}\\\\\n+&\\omega_2(\\bHcheck_2,A_2,\\bThetahat)\\left\\{\nY_3-\\betahat_{21}Y_2 -\\Qopt_{2-}(\\bH_2; \\bthetahat_2)\n\\right\\},\n\\end{split}\n\\end{align*}\nwe note that to impute $\\Vsc\\subSUPDR(\\bL; \\bThetahat)$ for subjects in $\\Usc$, we need to impute $Y_2$, $\\omega_2(\\bHcheck_2,A_2; \\bThetahat)$, and $Y_t \\omega_2(\\bHcheck_2,A_2; \\bThetahat)$ for $t=2,3$. We define the conditional mean functions \\[\\mu^v_2(\\bUvec)\\equiv\\mathbb{E}[Y_2|\\bUvec], \\quad \\mu^v_{\\omega_2}(\\bUvec)\\equiv\\mathbb{E}[\\omega_2(\\bHcheck_2,A_2; \\bThetabar)|\\bUvec],\\quad \\mu^v_{t\\omega_2}(\\bUvec)\\equiv\\mathbb{E}[Y_t\\omega_2(\\bHcheck_2,A_2; \\bThetabar)|\\bUvec], \n\\]\nfor $t=2,3$, where\n$\\bThetabar = (\\bthetabar\\trans,\\bxibar\\trans)\\trans$.\nAs in Section \\ref{sec: ssQ} we approximate these expectations using a flexible imputation model followed by a refitting step for bias correction under possible mis-specification of the imputation models.\n\n\\subsubsection*{Step I: Imputation}\nWe fit flexible weakly parametric or non-parametric models to the labeled data to approximate the functions $\\{\\mu^v_2(\\bUvec), \\mu^v_{\\omega_2}(\\bUvec), \\mu^v_{t\\omega_2}(\\bUvec),\\:t=2,3\\}$ with unknown parameter $\\bTheta$ estimated via the SSL $Q$-learning as in Section \\ref{sec: ssQ} and the propensity score modeling as discussed above. Denote the respective imputation models as $\\{ m_2(\\bUvec), \nm_{\\omega_2}(\\bUvec),m_{t\\omega_2}(\\bUvec),\\:t=2,3\\}$ and their fitted values as\n$\\{\\mhat_2(\\bUvec), \n\\mhat_{\\omega_2}(\\bUvec),\\mhat_{t\\omega_2}(\\bUvec),\\:t=2,3\\}$. \n\n\\subsubsection*{Step II: Refitting}\nTo correct for potential biases arising from finite sample estimation and model mis-specifications, we perform refitting to obtain final imputed models for $\\{Y_2,\\omega_2(\\bHcheck_2, A_2; \\bThetabar),$ $Y_t\\omega_2(\\bHcheck_2, A_2; \\bThetabar),$ $t=2,3\\}$ as $\\{\\mubar^v_2(\\bUvec)=m_2(\\bUvec)+\\eta_2^v, \\mubar^v_{\\omega_2}(\\bUvec)=m_{\\omega_2}(\\bUvec)+\\eta_{\\omega_2}^v, \\mubar^v_{t\\omega_2}(\\bUvec)=m_{t\\omega_2}(\\bUvec)+\\eta_{t\\omega_2}^v,\\:t=2,3\\}$. 
As for the estimation of $\\btheta$ for $Q$-learning training, these refitted models are not required to be correctly specified but need to satisfy the following constraints:\n\\begin{equation*}\n\\begin{alignedat}{3}\n\\Ebb\\left[\\omega_1(\\bHcheck_1,A_1;\\bThetabar)\\left\\{Y_2-\\mubar_2^v(\\bUvec)\\right\\}\\right] &=0, \\\\\n\\Ebb\\left[\\Qopt_{2-}(\\bUvec;\\btheta_2)\\left\\{\\omega_2(\\bHcheck_2,A_2;\\bThetabar)-\\mubar_{\\omega_2}^v(\\bUvec)\\right\\}\\right] &= 0,\\\\\n\\Ebb\\left[\\omega_2(\\bHcheck_2,A_2;\\bThetabar)Y_t- \\mubar^v_{t\\omega_2}(\\bUvec)\\right] &= 0,\\:t=2,3.\n\\end{alignedat}\n\\end{equation*}\nTo estimate $\\eta_2^v$ $\\eta_{\\omega_2}^v$, and $\\eta_{t\\omega_2}^v$ under these constraints, we again employ cross-fitting and obtain $\\etahat_2^v$ $\\etahat_{\\omega_2}^v$, and $\\etahat_{t\\omega_2}^v$ as the solution to the following estimating equations\n\\begin{equation}\\label{V function reffitting}\n\\begin{alignedat}{3}\n\\sum_{k=1}^{K}\\sum_{i\\in\\mathcal{I}_k}\\omega_1(\\bHcheck_{1i},A_{1i};\\bThetahat)\\left\\{Y_2-\\mhat_2\\supnk(\\bUvec_i)-\n\\etahat_2^v\\right\\} &=0, \\\\\n\\sum_{k=1}^{K}\\sum_{i\\in\\mathcal{I}_k}{ \\Qopt_{2-}(\\bUvec_i;\\bthetahat_2)}\\left\\{\\omega_2(\\bHcheck_{2i},A_{2i};\\bThetahat)-\\mhat_{\\omega_2}\\supnk(\\bUvec_i)-\n\\etahat_{\\omega_2}^v\\right\\} &=0, \\\\\n\\sum_{k=1}^{K}\\sum_{i\\in\\mathcal{I}_k}\\left\\{\\omega_2(\\bHcheck_{2i},A_{2i};\\bThetahat)Y_{ti}-\\mhat_{t\\omega_2}\\supnk(\\bUvec_i)-\n\\etahat_{t\\omega_2}^v\\right\\} &=0, \\:t=2,3.\n\\end{alignedat}\n\\end{equation}\n\nThe resulting imputation functions for $Y_2,\\omega_2(\\bHcheck_2, A_2; \\bThetabar)$ and $Y_t\\omega_2(\\bHcheck_2, A_2; \\bThetabar)$ are respectively constructed as $\\muhat^v_2(\\bUvec)= K^{-1}\\sum_{k=1}^{K}\n\\mhat_2\\supnk(\\bUvec)+\n\\etahat_2^v,$ \n$\\muhat^v_{\\omega_2}(\\bUvec)= K^{-1}\\sum_{k=1}^{K}\n\\mhat_{\\omega_2}(\\bUvec)+\n\\etahat_{\\omega_2}^v,$ and \n$\\muhat^v_{t\\omega_2}(\\bUvec)= K^{-1}\\sum_{k=1}^{K}\n\\mhat_{t\\omega_2}\\supnk(\\bUvec)+\n\\etahat_{t\\omega_2}^v,$ for $t=2,3$.\n\n\\subsubsection*{Step III: Semi-supervised augmented value function estimator.}\n\nFinally, we proceed to estimate the value of the policy $\\Vbar$, using the following semi-supervised augmented estimator:\n\t\\begin{align}\\label{SS_value_fun}\n\t\\Vhat\\subSSLDR=\\Pbb_N\\left\\{\n\t\\Vsc\\subSSLDR(\\bUvec;\\bThetahat,\\muhat)\\right\\},\n\t\\end{align}\n\t where $\\Vschat\\subSSLDR(\\bUvec)$ is the semi-supervised augmented estimator for observation $\\bUvec$ defined as:\n\n\t\\begin{align*}\n\t\\begin{split}\n\t\\Vsc\\subSSLDR(\\bUvec;\\bThetahat,\\muhat)\n\t=&\n\t\\Qopt_1(\\bHcheck_1;\\bthetahat_1)\n\t+\\omega_1(\\bHcheck_1,A_1,\\bThetahat)\n\t\\left[(1+\\betahat_{21})\\muhat_2^v(\\bUvec)-\n\t \\Qopt_1(\\bHcheck_1;\\bthetahat_1)+\\Qopt_{2-}(\\bH_2;\\bthetahat_2)\n\t\\right]\\\\\n\t+&\n\t\\muhat_{3\\omega_2}(\\bUvec)-\\betahat_{21}\\muhat_{2\\omega_2}(\\bUvec)-\\Qopt_{2-}(\\bH_2;\\bthetahat_2)\\muhat_{\\omega_2}(\\bUvec) .\n\t\\end{split}\n\t\\end{align*}\n\t\n\t\nThe above SSL estimator uses both labeled and unlabeled data along with outcome surrogates to estimate the value function, which yields a gain in efficiency as we show in Proposition \\ref{lemma: v funcion var}. As its supervised counterpart, $\\Vhat\\subSSLDR$ is doubly robust in the sense that if either the $Q$ functions or the propensity scores are correctly specified, the value function will converge in probability to the true value $\\Vbar$. 
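It is worth noting that $\\Vsc\\subSSLDR(\\bUvec;\\bThetahat,\\muhat)$ is obtained from the rewritten form of $\\Vsc\\subSUPDR(\\bL;\\bThetahat)$ displayed above by substituting the refitted imputations $\\muhat_2^v(\\bUvec)$, $\\muhat_{\\omega_2}(\\bUvec)$, $\\muhat_{2\\omega_2}(\\bUvec)$ and $\\muhat_{3\\omega_2}(\\bUvec)$ for the unavailable quantities $Y_2$, $\\omega_2(\\bHcheck_2,A_2,\\bThetahat)$, $\\omega_2(\\bHcheck_2,A_2,\\bThetahat)Y_2$ and $\\omega_2(\\bHcheck_2,A_2,\\bThetahat)Y_3$, respectively; replacing each imputation by the corresponding observed quantity recovers $\\Vsc\\subSUPDR(\\bL;\\bThetahat)$ exactly.\n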
Additionally, it does not assume that the estimated treatment regime was derived from a different sample. These properties are summarized in Theorem \\ref{thrm_ssV_fun} and Proposition \\ref{cor_dr_V} of the following section.\n\n\\section{Theoretical Results}\\label{theory}\n\nIn this section we discuss our assumptions and theoretical results for the semi-supervised $Q$-learning and value function estimators. Throughout, we define the norm $\\|g(x)\\|_{L_2(\\Pbb)}\\equiv\\sqrt{\\int g(x)^2d\\Pbb(x)}$ for any real valued function $g(\\cdot)$. Additionally, let $\\{U_n\\}$ and $\\{V_n\\}$ be two sequences of random variables. We will use $U_n=O_\\Pbb(V_n)$ to denote stochastic boundedness of the sequence $\\{U_n\/V_n\\}$, that is, for any $\\epsilon>0$, $\\exists M_\\epsilon,n_\\epsilon\\in\\mathbb{R}$ such that $\\Pbb\\left(|U_n\/V_n|>M_\\epsilon\\right)<\\epsilon$ $\\forall n>n_\\epsilon$. We use $U_n=o_\\Pbb(V_n)$ to denote that $U_n\/V_n\\stackrel{\\Pbb}{\\rightarrow}0.$\n\n\\subsection{Theoretical Results for SSL Q-learning}\n\n\\begin{assumption}\\label{assumption: covariates}\n\t(a) The sample sizes of $\\mathcal{U}$ and $\\mathcal{L}$ are such that $n\/N\\longrightarrow 0$ as $N,n\\longrightarrow\\infty$; (b) $\\bHcheck_t\\in\\mathcal{H}_t$, $\\bXcheck_t\\in\\mathcal{X}_t$ have finite second moments and compact support in $\\mathcal{H}_t\\subset\\mathbb{R}^{q_t}$, $\\mathcal{X}_t\\subset\\mathbb{R}^{p_t}$, $t=1,2$, respectively; (c) $\\bSigma_1,\\:\\bSigma_2$ are nonsingular.\n\\end{assumption}\n\n\\begin{assumption}\\label{assumption: Q imputation}\n\tFunctions $m_s$, $s\\in\\{2,3,22,23\\}$ are such that (i) $\\sup_{\\bUvec}|m_s(\\bUvec)|<\\infty$, and (ii) the estimated functions $\\hat m_s$ satisfy $\\sup_{\\bUvec}|\\mhat_s(\\bUvec)-m_s(\\bUvec)|=o_\\Pbb(1)$. \n\\end{assumption}\n\n\n\\begin{assumption}\\label{assumption SS linear model}\nSuppose $\\Theta_1,\\Theta_2$ are open bounded sets, and $p_1,p_2$ are fixed under \\eqref{linear_Qs}.\nWe define the following class of functions:\n\\begin{align*}\n\\begin{split}\n\\mathcal{Q}_t&\\equiv\\left\\{Q_t:\\mathcal{X}_t\\mapsto\\mathbb{R}|\\btheta_t\\in\\Theta_t\\subset\\mathbb{R}^{p_t}\\right\\},\\:t=1,2.\n\\end{split}\n\\end{align*}\nFurther suppose that, for $t=1,2$, the population equations $\\mathbb{E}[S^\\theta_t(\\btheta_t)]=\\bzero$ admit solutions $\\bthetabar_1$ and $\\bthetabar_2$, where\n\n\\be\nS^\\theta_2(\\btheta_2)=\\frac{\\partial}{\\partial\\btheta_2\\trans}\n\\|Y_3-Q_2(\\bXcheck_2;\\btheta_2)\\|_2^2,\\:\nS^\\theta_1(\\btheta_1)=\\frac{\\partial}{\\partial\\btheta_1\\trans}\n\\|Y_2^*-Q_1(\\bXcheck_1;\\btheta_1)\\|_2^2.\n\\ee\n\nThe target parameters satisfy $\\bthetabar_t\\in\\Theta_t$, $t=1,2$. We write $\\bbetabar_t,\\bgammabar_t$ for the components of $\\bthetabar_t$, according to equation \\eqref{full_EE}.\n\\end{assumption}\nAssumption \\ref{assumption: covariates} (a) distinguishes our setting from the standard missing data context. Theoretical results for the missing completely at random (MCAR) setting generally assume that the missingness probability is bounded away from zero \\citep{tsiatis2006}, which enables the use of standard semiparametric theory. In our setting, by contrast, one can intuitively view the probability of observing an outcome as $\\frac{n}{n+N}$, which converges to $0$. \n\nAssumption \\ref{assumption: Q imputation} is fairly standard as it only requires boundedness of the imputation functions, which is natural to expect given the boundedness of the covariates. 
We also require uniform convergence of the estimated functions to their limit. This allows for the normal equations targeting the imputation residuals in \\eqref{eta_EE_Q1} and \\eqref{V function reffitting} to be well defined. Moreover, several off-the-shelf flexible imputation models for estimation can satisfy these conditions. See for example, local polynomial estimators, basis expansion regression like natural cubic splines or wavelets \\citep{tsybakov2009}. In particular, it is worth noting that we do not require any specific rate of convergence. As a result, the required condition is typically much easier to verify for many off-the-shelf algorithms. It is likely that other classes of models such as random forests can satisfy Assumption \\ref{assumption: Q imputation}. Recent work suggests that it is plausible to use the existing point-wise convergence results to show uniform convergence. \\citep[see][]{scornet2015,biau2008}.\n\nAssumption \\ref{assumption SS linear model} is fairly standard in the literature and ensures well-defined population level solutions for $Q$-learning regressions $\\bthetabar$ exist, and belong to that parameter space. In this regard, we differentiate between population solutions $\\bthetabar$ and true model parameters $\\btheta^0$ shown in equation \\eqref{linear_Qs}. If the working models are mis-specified, Theorems \\ref{theorem: unbiased theta2} and \\ref{theorem: unbiased theta1} still guarantee the $\\bthetahat$ is consistent and asymptotically normal centered at the population solution $\\bthetabar$. However, when equation \\eqref{linear_Qs} is correct, $\\bthetahat$ is asymptotically normal and consistent for the true parameter $\\btheta^0$. Now we are ready to state the theoretical properties of the semi-supervised $Q$-learning procedure described in Section \\ref{sec: ssQ}. \n\n\\begin{theorem}[Distribution of $\\bthetahat_2$]\\label{theorem: unbiased theta2} \nUnder Assumptions \\ref{assumption: covariates}-\\ref{assumption SS linear model}, $\\bthetahat_2$ satisfies\n\\[\n\\sqrt n\n(\\bthetahat_2-\\bthetabar_2)\n=\n\\bSigma_2^{-1}\\frac{1}{\\sqrt n}\\sum_{i=1}^n\n\\bpsi_2(\\bL_i;\\bthetabar_2)\n+o_\\Pbb\\left(1\\right)\\stackrel{d}{\\rightarrow}\\mathcal{N}\\bigg({\\bf 0},\\bV_{2 \\scriptscriptstyle \\sf SSL}(\\bthetabar_2)\\bigg),\n\\]\n\nwhere $\\bSigma_2=\\Ebb[\\bXcheck_2\\bXcheck_2\\trans]$ is defined in Section \\ref{section: set up}, the influence function $\\bpsi_2$ is given by\n\\[\n\\bpsi_2(\\bL;\\bthetabar_2)\n=\n\\begin{bmatrix}\n\\{Y_2Y_3-\\mubar_{23}(\\bUvec)\\}-\\bar\\beta_{21}\n\\{Y_2^2-\\mubar_{22}(\\bUvec)\\}-\nQ_{2-}(\\bH_2,A_2;\\bthetabar_2)\n\\{Y_2-\\mubar_2(\\bUvec)\\}\\\\\n\\bX_2\n\\{Y_3-\\mubar_3(\\bUvec)\\}-\\bar\\beta_{21}\n\\bX_2\n\\{Y_2-\\mubar_2(\\bUvec)\\}\n\\end{bmatrix},\n\\]\nand $\n\\bV_{2 \\scriptscriptstyle \\sf SSL}(\\bthetabar_2)=\\bSigma_2^{-1}\\Ebb\\left[\\bpsi_2(\\bL;\\bthetabar_2)\\bpsi_2(\\bL;\\bthetabar_2)\\trans\\right]\\left(\\bSigma_2^{-1}\\right)\\trans$. \\end{theorem}\n\n\nWe hold off remarks until the end of the results for the $Q$-learning parameters. Since the first stage regression depends on the second stage regression through a non-smooth maximum function, we make the following standard assumption \\citep{laber2014} in order to provide valid statistical inference. \n\\begin{assumption}\\label{assumption: non-regularity} Non-zero estimable population treatment effects $\\bgammabar_t$, $t=1,2$: i.e. 
the population solution to \\eqref{full_EE} is such that (a) $\\bH_{21}\\trans\\bgammabar_2\\neq0$ for all $\\bH_{21}\\neq\\bzero$, and (b) $\\bgammabar_1$ is such that $\\bH_{11}\\trans\\bgammabar_1\\neq0$ for all $\\bH_{11}\\neq\\bzero$. \n\\end{assumption}\n\nAssumption \\ref{assumption: non-regularity} yields regular estimators for the stage one regression and the value function, which depend on non-smooth components of the form $[x]_+$. This is needed to achieve asymptotic normality of the $Q$-learning parameters for the first stage regression. Note that the estimating equation for the stage one regression in Section \\ref{section: SSQL} includes $\\left[\\bH_{21}\\trans\\bgammahat_2\\right]_+$. Thus, for the asymptotic normality of $\\bthetahat_1$, we require $\\sqrt n\\Pbb_n\\left(\\left[\\bH_{21}\\trans\\bgammahat_2\\right]_+-\\left[\\bH_{21}\\trans\\bar{\\bgamma}_2\\right]_+\\right)$ to be asymptotically normal.\nThe latter holds automatically if $\\bH_{21}$ contains continuous covariates, since then $\\Pbb\\left(\\bH_{21}\\trans\\bgammabar_2=0\\right)=0$. Violation of Assumption \\ref{assumption: non-regularity} will yield non-regular estimates, which translate into poor coverage of the confidence intervals (see \\citet{laber2014} for a thorough discussion of this topic).\n\n\\begin{theorem}[Distribution of $\\hat{\\btheta}_{1}$]\\label{theorem: unbiased theta1} \n\tUnder Assumptions \\ref{assumption: covariates}-\\ref{assumption SS linear model}, and \\ref{assumption: non-regularity} (a), $\\bthetahat_1$ satisfies\n\t\n\t\\[\n\t\\sqrt n(\\hat{\\btheta}_1-\\bthetabar_1)=\\bSigma_1^{-1}\\frac{1}{\\sqrt n}\\sum_{i=1}^n\\bpsi_1(\\bL_i;\\bthetabar_1)+o_\\Pbb(1)\\stackrel{d}{\\rightarrow}\\mathcal{N}\\bigg({\\bf 0},\\bV_{1 \\scriptscriptstyle \\sf SSL}(\\bthetabar_1)\\bigg)\n\t\\]\n\twhere $\\bSigma_1=\\Ebb[\\bXcheck_1\\bXcheck_1\\trans]$, the influence function $\\bpsi_1$ is given by\n\t\\begin{align*}\n\t\\bpsi_1(\\bL;\\bthetabar_1)=\n\t&\\bX_1(1+\\bar\\beta_{21})\\{Y_2-\\mubar_2(\\bUvec)\\}\n\t+\n\t\\mathbb{E}\\left[\\bX_1\n\t\\left(Y_2,\\bH_{20}\\trans\\right)\\right]\n\t\\bpsi_{\\beta_2}(\\bL;\\bthetabar_2)\\\\\n\t+&\n\t\\mathbb{E}\\left[\\bX_1\\bH_{21}\\trans|\\bH_{21}\\trans\\bgammabar_2>0\\right]\\Pbb\\left(\\bH_{21}\\trans\\bgammabar_2>0\\right)\n\t\\bpsi_{\\gamma_2}(\\bL;\\bthetabar_2),\n\t\\end{align*} \n\t$\\bV_{1 \\scriptscriptstyle \\sf SSL}(\\bthetabar_1)=\\bSigma_1^{-1}\\Ebb\\left[\\bpsi_1(\\bL;\\bthetabar_1)\\bpsi_1(\\bL;\\bthetabar_1)\\trans\\right]\\left(\\bSigma_1^{-1}\\right)\\trans$, and $\\bpsi_{\\beta_2}$, $\\bpsi_{\\gamma_2}$ are the elements corresponding to $\\bbetabar_2$, $\\bgammabar_2$ of the influence function $\\bpsi_2$ defined in Theorem \\ref{theorem: unbiased theta2}.\n\\end{theorem}\n\n\n\\begin{remark}\n1) Theorems \\ref{theorem: unbiased theta2} and \\ref{theorem: unbiased theta1} establish the $\\sqrt n$-consistency and asymptotic normality (CAN) of $\\bthetahat_1,\\bthetahat_2$ for any $K\\geq2$. Beyond asymptotic normality at the $\\sqrt n$ scale, these theorems also provide an asymptotic linear expansion of the estimators with influence functions $\\bpsi_1$ and $\\bpsi_2$ respectively. \n\n2) $\\bV_{1 \\scriptscriptstyle \\sf SSL}(\\bthetabar)$, $\\bV_{2 \\scriptscriptstyle \\sf SSL}(\\bthetabar)$ reflect an efficiency gain over the fully supervised approach due to the unlabeled sample $\\mathcal{U}$ and the surrogates' contribution to prediction performance. 
This gain is formalized in Proposition \\ref{lemma: Q parameter var}, which quantifies how the correlation between surrogates and outcomes increases efficiency.\\\\\n3) Let $\\bpsi=[\\bpsi_1\\trans,\\bpsi_2\\trans]\\trans$ and collect the $Q$-learning parameters into $\\btheta=(\\btheta_1\\trans,\\btheta_2\\trans)\\trans$. Then, under Assumptions \\ref{assumption: covariates}-\\ref{assumption SS linear model} and \\ref{assumption: non-regularity} (a), we have\n \\begin{align*}\n\t\\sqrt n\n\t(\\bthetahat-\\bthetabar)\n\t=&\n\t\\bSigma^{-1}\\frac{1}{\\sqrt n}\\sum_{i=1}^n\n\t\\bpsi(\\bL_i;\\bthetabar)\n\t+o_\\Pbb(1)\\stackrel{d}{\\rightarrow}\\mathcal{N}\\bigg({\\bf 0},\\bV\\subSSL\\left(\\bthetabar\\right)\\bigg)\n\\end{align*}\nwith $\\bV\\subSSL(\\bthetabar)=\\bSigma^{-1}\\Ebb\\left[\\bpsi(\\bL;\\bthetabar)\\bpsi(\\bL;\\bthetabar)\\trans\\right]\\left(\\bSigma^{-1}\\right)\\trans$.\\\\\n4) Theorems \\ref{theorem: unbiased theta2} and \\ref{theorem: unbiased theta1} hold even when the $Q$ functions are mis-specified, that is, $\\bthetahat_1,\\bthetahat_2$ are CAN for $\\bthetabar_1,\\bthetabar_2$. Furthermore, if model \\eqref{linear_Qs} is correctly specified then we can simply replace $\\bthetabar$ with $\\btheta^0$ in the above result.\\\\\n5) We estimate $\\bV\\subSSL(\\bthetabar)$ via sample-splitting as\n\\begin{align*}\n\\widehat\\bV\\subSSL(\\bthetahat)&=\\widehat\\bSigma^{-1}\\widehat\\bA(\\bthetahat)\\left(\\widehat\\bSigma^{-1}\\right)\\trans,\\text{ where}\\\\\n\\widehat\\bA(\\bthetahat)&=\nn^{-1}\\sum_{k=1}^{K}\\sum_{i\\in\\mathcal{I}_k}\n\\bpsi\\supnk\\left(\\bL_i;\\bthetahat\\right)\\bpsi\\supnk\\left(\\bL_i;\\bthetahat\\right)\\trans,\\\\\n\\widehat\\bSigma_t&=\n\\Pbb_n\n\\left\\{\\bX_t\\bX_t\\trans\\right\\},\\:\\:t=1,2.\n\\end{align*}\n\\end{remark}\n\n\nNote that we can decompose $\\bpsi$ into the influence function for each set of parameters. For example, we have $\\bpsi_2=\\left(\\bpsi_{\\beta_2}\\trans,\\bpsi_{\\gamma_2}\\trans\\right)\\trans$ where \n$\n\\bpsi_{\\gamma_2}(\\bL;\\bthetabar_2)=\\bH_{21}A_2\\left[\\{Y_3-\\mubar_3(\\bUvec)\\}-\\bar\\beta_{21}\\{Y_2-\\mubar_2(\\bUvec)\\}\\right].\n$\nTherefore, we can decompose the variance-covariance matrix into a component for each parameter; for instance, the variance-covariance matrix for the stage 2 treatment effect $\\bgamma_2$ is \\[\n\\mathbb{E}\\left[\\bpsi_{\\gamma_2}(\\bL;\\bthetabar_2)\\bpsi_{\\gamma_2}(\\bL;\\bthetabar_2)\\trans\\right]=\n\\mathbb{E}\\left[\\bH_{21}\\bH_{21}\\trans A_{2}^2\\left\\{\n Y_3-\\mubar_3(\\bUvec)-\\bar\\beta_{21}\n \\left(Y_2-\\mubar_2(\\bUvec)\\right)\\right\\}^2\\right].\n\\]\nThis gives us some insight into how the predictive power of $\\bUvec$, which contains the surrogates $\\bW_1,\\bW_2$, decreases parameter standard errors. The same holds for the influence functions for estimating $\\bthetabar_1$ and $\\bthetabar_2$. We formalize this result with the following proposition. Let $\\bthetahat_{\\subSUP}$ be the estimator from the fully supervised $Q$-learning procedure (i.e. using only the labeled data), with influence function and asymptotic variance denoted by $\\bpsi\\subSUP$ and $\\bV\\subSUP$ respectively (see Appendix \\ref{section: Q learning result proofs} for the exact form of $\\bpsi\\subSUP$ and $\\bV\\subSUP$). \n\nFor the following proposition we need the imputation models $\\mubar_s$, $s\\in\\{2,3,22,23\\}$ to satisfy additional constraints of the form $\\Ebb\\left[\\bX_2\\bX_2\\trans\\{Y_2Y_3-\\mubar_{23}(\\bUvec)\\}\\right]=\\bzero$. 
We list them in Assumption \\ref{assumption: additional constraints}, Appendix \\ref{section: Q learning result proofs}. One can construct estimators which satisfy such conditions by simply augmenting $\\bEta_2,$ $\\eta_{22},$ $\\bEta_3,$ $\\eta_{23}$ in \\eqref{eta_EE_Q1} with additional terms in the refitting step.\n\n\n\\begin{proposition}\\label{lemma: Q parameter var}\nUnder Assumptions \\ref{assumption: covariates}-\\ref{assumption SS linear model}, \\ref{assumption: non-regularity} (a), and \\ref{assumption: additional constraints},\n\\[\n\\bV\\subSSL(\\bthetabar)=\\bV\\subSUP(\\bthetabar)-\\bSigma^{-1}\\text{Var}\\left[\\bpsi\\subSUP(\\bL;\\bthetabar)-\\bpsi\\subSSL(\\bL;\\bthetabar)\\right]\\left(\\bSigma^{-1}\\right)\\trans.\n\\]\n\n\\end{proposition}\n\\begin{remark}\nProposition \\ref{lemma: Q parameter var} illustrates how the estimates of the semi-supervised $Q$-learning parameters are at least as efficient as, if not more efficient than, the supervised ones. Intuitively, the difference in efficiency is explained by how much information is gained by incorporating the surrogates $\\bW_1,\\bW_2$ into the estimation procedure. If there is no new information in the surrogate variables, then the residuals in $\\bpsi\\subSSL(\\bL;\\btheta)$ will be of similar magnitude to those in $\\bpsi\\subSUP(\\bL;\\btheta)$, and thus the difference in efficiency will be small: $\\text{Var}\\left[\\bpsi\\subSUP(\\bL;\\bthetabar)-\\bpsi\\subSSL(\\bL;\\bthetabar)\\right]\\approx0$. In this case both methods will yield equally efficient parameter estimates. The gain in precision is especially relevant for the treatment interaction coefficients $\\bgamma_1,\\bgamma_2$ used to learn the dynamic treatment rules. Finally, note that Proposition \\ref{lemma: Q parameter var} does not require correct specification of the $Q$-functions or of the imputation models.\n\\end{remark}\n\n\n\\subsection{Theoretical Results for SSL Estimation of the Value Function}\n\nIf model \\eqref{linear_Qs} is correct, one only needs to add Assumption \\ref{assumption: non-regularity} (b)\nfor $\\Pbb_N\\{\\Qopt_1(\\bH_1;\\bthetahat_1)\\}$ to be a consistent estimator of the value function $\\Vbar$ \\citep{zhu2019}. However, as we discussed earlier, \\eqref{linear_Qs} is likely mis-specified. Therefore, we show that our semi-supervised value function estimator is doubly robust. We also show that it is asymptotically normal and more efficient than its supervised counterpart. 
To that end, define the following class of functions:\n\t\\[\n\t\\mathcal{W}_t\\equiv\\left\\{\\pi_t:\\mathcal{H}_t\\mapsto\\mathbb{R}|\\bxi_t\\in\\Omega_t\\right\\},\\:t=1,2,\n\t\\]\nunder the propensity score models $\\pi_1,\\:\\pi_2$ in \\eqref{logit_Ws}.\n\\begin{assumption}\\label{assumption: donsker w}\n\t\nLet the population equations $\\Ebb\\left[S^\\xi_t(\\bHcheck_t;\\bxi_t)\\right]=\\bzero$, $t=1,2$, have solutions $\\bxibar_1,\\bxibar_2$, where\n\\be\nS^\\xi_t(\\bHcheck_t;\\bxi_t)=&\\frac{\\partial}{\\partial\\bxi_t}\\log \\left[\\pi_t(\\bHcheck_t;\\bxi_t)^{A_t}\\{1-\\pi_t(\\bHcheck_t;\\bxi_t)\\}^{(1-A_t)}\\right],\\:t=1,2,\n\\ee\nand assume further that:\\\\\n(i) $\\Omega_1,\\Omega_2$ are open, bounded sets and the population solutions satisfy $\\bxibar_t\\in\\Omega_t$, $t=1,2$,\\\\\n\t(ii) for $\\bxibar_t$, $t=1,2$, $\\inf\\limits_{\\bHcheck_t\\in\\mathcal{H}_t}\\pi_t(\\bHcheck_t;\\bxibar_t)>0$,\\\\\n\t(iii) Finite second moments: $\\Ebb\\left[S^\\xi_t(\\bHcheck_t;\\bxibar_t)^2\\right]<\\infty$, and the Fisher information matrix $\\Ebb\\left[\\frac{\\partial}{\\partial\\bxi_t}S^\\xi_t(\\bHcheck_t;\\bxibar_t)\\right]$ exists and is nonsingular,\\\\\n\t(iv) Second-order partial derivatives of $S^\\xi_t(\\bHcheck_t;\\bxi_t)$ with respect to $\\bxi_t$ exist for every $\\bHcheck_t$ and satisfy $|\\partial^2S^\\xi_t(\\bHcheck_t;\\bxi_t)\/\\partial\\bxi_i\\partial\\bxi_j|\\leq \\tilde S_t(\\bHcheck_t)$ for some integrable measurable function $\\tilde S_t$ in a neighborhood of $\\bxibar$.\n\n\\end{assumption}\n\n\\begin{assumption}\\label{assumption: V imputation}\nFunctions $m_2,m_{\\omega_2},m_{t\\omega_2}$, $t=2,3$, are such that (i) $\\sup_{\\bUvec}|m_s(\\bUvec)|<\\infty$, and (ii) the estimated functions $\\hat m_s$ satisfy $\\sup_{\\bUvec}|\\mhat_s(\\bUvec)-m_s(\\bUvec)|=o_\\Pbb(1)$, $s\\in\\{2,\\omega_2,2\\omega_2,3\\omega_2\\}$.\n\\end{assumption}\nAssumption \\ref{assumption: donsker w} is standard for Z-estimators \\citep[see][Ch.~5.6]{vaart_donsker}. Assumption \\ref{assumption: V imputation} is the value function analogue of Assumption \\ref{assumption: Q imputation}. Finally, we use $\\bpsi^\\xi$ and $\\bpsi^\\theta$ to denote the influence functions for $\\bxihat$ and $\\bthetahat$, respectively. We are now ready to state our theoretical results for the value function estimator in equation \\eqref{SS_value_fun}. 
The proof, and the exact form of $\\bpsi^\\xi$ can be found in Appendix \\ref{appendix: value fun}.\n\n\\begin{theorem}[Asymptotic Normality for $\\Vhat\\subSSLDR$]\\label{thrm_ssV_fun}\n\tUnder Assumptions \\ref{assumption: covariates}-\\ref{assumption: V imputation}, $\\Vhat\\subSSLDR$ defined in \\eqref{SS_value_fun} satisfies\n\t\\[\n\t\\sqrt n\n\t\\left\\{\n\t\\Vhat\\subSSLDR-\\mathbb{E}_\\mathbb{S}\\left[\\Vsc\\subSSLDR(\\bL;\\bThetabar,\\mubar)\\right]\n\t\\right\\}\n\t=\n\t\\frac{1}{\\sqrt n}\\sum_{i=1}^n\\psi^{v}_{\\subSSLDR}(\\bL_i;\\bThetabar)\n\t+o_\\Pbb\\left(1\\right),\n\t\\]\n\n\twhere\n\t\\[\n\t\\frac{1}{\\sqrt n}\\sum_{i=1}^n\\psi^{v}_{\\subSSLDR}(\\bL_i;\\bThetabar)\n\t\\stackrel{d}{\\longrightarrow}\\mathcal{N}\\left(0,\\sigma^2\\subSSLDR\\right).\n\t\\]\n\tHere\n\t\\begin{align*}\n\t\\psi^{v}\\subSSLDR(\\bL;\\bThetabar)\n\t=&\n\t\\nu\\subSSLDR(\\bL;\\bThetabar)\n\t+\n\t\\bpsi^\\theta(\\bL)\\trans\n\t\\frac{\\partial}{\\partial \\btheta}\\int\\Vsc\\subSUPDR(\\bL;\\bTheta)d\\Pbb_{\\bL}\\bigg|_{\\bTheta=\\bThetabar}\\\\\n\t&\\hspace{2.1cm}+\n\t\\bpsi^\\xi(\\bL)\\trans\n\t\\frac{\\partial}{\\partial \\bxi}\\int\\Vsc\\subSUPDR(\\bL;\\bTheta)d\\Pbb_{\\bL}\\bigg|_{\\bTheta=\\bThetabar},\\\\\n\t\\nu_{\\subSSLDR}(\\bL;\\bThetabar)\n\t=&\n\t\\omega_1(\\bHcheck_1,A_1;\\bThetabar_1)\n\t(1+\\bar\\beta_{21})\n\t\\left\\{\n\tY_2-\\mubar_2^v(\\bUvec)\n\t\\right\\}\n\t+\n\t\\omega_2(\\bHcheck_2,A_2,\\bThetabar_2)Y_3-\n\t\\mubar_{3\\omega_2}(\\bUvec)\\\\\n\t-&\n\t\\bar\\beta_{21}\\left\\{\\omega_2(\\bHcheck_2,A_2,\\bThetabar_2)Y_2-\n\t\\mubar_{2\\omega_2}(\\bUvec)\\right\\}\n\t-\n\t\\Qopt_{2-}(\\bH_2; \\bthetabar_2)\n\t\\left\\{\n\t\\omega_2(\\bHcheck_2,A_2,\\bThetabar_2)-\\mubar_{\\omega_2}(\\bUvec)\n\t\\right\\}\n\t,\n\t\\end{align*}\n\t$\\sigma\\subSSLDR^2=\\Ebb\\left[\\psi^{v}_{\\subSSLDR}(\\bL;\\bThetabar)^2\\right],$ and $\\Vsc\\subSUPDR(\\bL;\\bTheta)$ is as defined in \\eqref{eq: lab value fun}.\n\\end{theorem}\n\n\n\\begin{proposition}[Double Robustness of $\\Vhat\\subSSLDR$ as an estimator of $\\Vbar$]\\label{cor_dr_V} (a) If either $\\|Q_t(\\bHcheck_t, A_t; \\bthetahat_t)-Q_t(\\bHcheck_t,A_t)\\|_{L_2(\\Pbb)}\\rightarrow 0$, or $\\| \\pi_t(\\bHcheck_t; \\bxihat_t)-\\pi_t(\\bHcheck_t)\\|_{L_2(\\Pbb)}\\rightarrow 0$ for $t=1,2$, then under Assumptions \\ref{assumption: covariates}-\\ref{assumption: V imputation}, $\\Vhat\\subSSLDR$ satisfies\n\n\t\\[\n\t\\Vhat\\subSSLDR\\stackrel{\\Pbb}{\\longrightarrow}\\Vbar.\n\t\\]\n\t\n\t(b) If $\\|Q_t(\\bHcheck_t, A_t; \\bthetahat_t)-Q_t(\\bHcheck_t,A_t)\\|_{L_2(\\Pbb)}\\| \\pi_t(\\bHcheck_t; \\bxihat_t)-\\pi_t(\\bHcheck_t)\\|_{L_2(\\Pbb)}=o_\\Pbb\\left(n^{-\\frac{1}{2}}\\right)$ for $t=1,2$, then under Assumptions \\ref{assumption: covariates}-\\ref{assumption: V imputation}, $\\Vhat\\subSSLDR$ satisfies\n\n\t\\[\n\t\\sqrt n\\left(\\Vhat\\subSSLDR-\\Vbar\\right)\\stackrel{d}{\\longrightarrow}\\mathcal{N}\\left(0,\\sigma^2\\subSSLDR\\right).\n\t\\]\n\\end{proposition}\n\nNext we define the supervised influence function for estimator $\\Vhat\\subSUPDR$. Let $\\bpsi^\\theta\\subSUP$, be the influence function for the supervised estimator $\\bthetahat\\subSUP$ for model \\eqref{linear_Qs}. 
The influence function for SUP$\\subDR$ Value Function Estimation estimator \\eqref{eq: lab value fun} and its variance is (see Theorem \\ref{thrm_supV_fun} in Appendix \\ref{app_DR_Vfun}):\n\n\\begin{align*}\n\t\\psi^{v}\\subSUPDR(\\bL;\\bThetabar)\n\t=&\n\t\\Vsc\\subSUPDR(\\bL;\\bThetabar)-\\mathbb{E}_\\mathbb{S}\\left[\\Vsc\\subSUPDR(\\bL;\\bThetabar)\\right]\\\\\n\t+&\n\t\\bpsi\\subSUP^\\theta(\\bL)\\trans\n\t\\frac{\\partial}{\\partial \\btheta}\\int\\Vsc\\subSUPDR(\\bL;\\bTheta)d\\Pbb_{\\bL}\\bigg|_{\\bTheta=\\bThetabar}\n\t+\n\t\\bpsi^\\xi(\\bL)\\trans\n\t\\frac{\\partial}{\\partial \\bxi}\\int\\Vsc\\subSUPDR(\\bL;\\bTheta)d\\Pbb_{\\bL}\\bigg|_{\\bTheta=\\bThetabar},\\\\\n\t\\sigma\\subSUPDR^2=&\\mathbb{E}\\left[\\psi^{v}_{\\subSUPDR}(\\bL;\\bThetabar)^2\\right].\n\\end{align*}\n\nThe flexibility of our SSL value function estimator $V\\subSSLDR$, allows the use of either supervised or SSL approach for estimation of propensity score nuisance parameters $\\bxi$. For SSL estimation, we can use an approach similar to Section \\ref{sec: ssQ}, \\citep[see][Ch. 2]{chakrabortty} for details. This can be beneficial in that we can then quantify the efficiency gain of $V\\subSSLDR$ vs. $V\\subSUPDR$ by comparing the asymptotic variances. In light of this, we assume SSL is used for $\\bxi$ when estimating $V\\subSSLDR$.\n\nBefore stating the result we discuss an additional requirement for the imputation models. As for Proposition \\ref{lemma: Q parameter var}, models $\\mubar^v_2(\\bUvec),$ $\\mubar^v_{\\omega_2}(\\bUvec),$ $\\mubar^v_{t\\omega_2}(\\bUvec),$ $t=2,3$ need to satisfy a few additional constraints of the form \n\\[\n\\Ebb\\left[\\omega_1(\\bHcheck_1,A_1;\\bThetabar_1)\\Qopt_{2-}(\\bH_2;\\bthetabar_1)\\{Y_2-\\mubar^v_2(\\bUvec)\\}\\right]=\\bzero.\n\\]\nAs there are several constraints, we list them in Appendix \\ref{appendix: value fun}, and condense them in Assumption \\ref{assumption: value additional constraints}, Appendix \\ref{appendix: value fun}. Again, one can construct estimators which satisfy such conditions by simply augmenting $\\eta_2^v,$ $\\eta_{\\omega_2}^v,$ $\\eta_{t\\omega_2}^v,$ $t=2,3$ in \\eqref{V function reffitting} with additional terms in the refitting step.\n\n\\begin{proposition}\\label{lemma: v funcion var}\nUnder Assumptions \\ref{assumption: covariates}-\\ref{assumption: V imputation}, and \\ref{assumption: value additional constraints}, asymptotic variances\n$\\sigma\\subSSLDR^2$, $\\sigma\\subSUPDR^2$ satisfy \n\\[\n\\sigma\\subSSLDR^2=\\sigma\\subSUPDR^2-\\text{Var}\\left[\\psi^{v}\\subSUPDR(\\bL;\\bThetabar)-\\psi^{v}\\subSSLDR(\\bL;\\bThetabar)\\right].\n\\]\n\\end{proposition}\n\n\n\\begin{remark}\\label{remark: se for V}\n\n\n1) Proposition \\ref{cor_dr_V} illustrates how $\\Vhat\\subSSLDR$ is asymptotically unbiased if either the $Q$ functions or the propensity scores are correctly specified.\\\\\n2) An immediate consequence of Proposition \\ref{lemma: v funcion var} is that the semi-supervised estimator is at least as efficient (or more) as its supervised counterpart, that is $\\text{Var}\\left[\\psi\\subSSLDR(\\bL;\\bTheta)\\right]\\le\\text{Var}\\left[\\psi\\subSUPDR(\\bL;\\bTheta)\\right]$. 
As with Proposition \\ref{lemma: Q parameter var}, the difference in efficiency is explained by the information gain from incorporating surrogates.\\\\\n3) To estimate standard errors for $V\\subSSLDR(\\bUvec;\\bThetabar)$, we will approximate the derivatives of the expectation terms $\\frac{\\partial}{\\partial \\bTheta}\\int\\Vsc\\subSUPDR(\\bL;\\bThetabar)d\\Pbb_{\\bL}$ using kernel smoothing to replace the indicator functions. In particular, let $\\mathbb{K}_h(x)=\\frac{1}{h}\\sigma(x\/h)$, $\\sigma$ defined as in \\eqref{logit_Ws}, we approximate $d_t(\\bH_t,\\btheta_2)=I(\\bH_{t1}\\trans\\bgamma_t>0)$ with $\\mathbb{K}_h(\\bH_{t1}\\trans\\bgamma_t)$ $t=1,2$, and define the smoothed propensity score weights as\n\\begin{align*}\n\\tilde\\omega_1(\\bHcheck_1,A_1,\\bTheta)\n&\\equiv\n\\frac{A_1\\mathbb{K}_h(\\bH_{11}\\trans\\bgamma_1)}{\\pi_1(\\bHcheck_1;\\bxi_1)}\n+\n\\frac{\\left\\{1-A_1\\right\\}\\left\\{1-\\mathbb{K}_h(\\bH_{11}\\trans\\bgamma_1)\\right\\}}{1-\\pi_1(\\bHcheck_1;\\bxi_1)}, \\quad \\mbox{and}\\\\ \\tilde\\omega_2(\\bHcheck_2,A_2,\\bTheta)\n&\\equiv\n\\tilde\\omega_1(\\bHcheck_1,A_1,\\bTheta)\\left[\\frac{A_2\\mathbb{K}_h(\\bH_{21}\\trans\\bgamma_2)}{\\pi_2(\\bHcheck_2;\\bxi_2)}+\n\\frac{\\left\\{1-A_2\\right\\}\\left\\{1-\\mathbb{K}_h(\\bH_{21}\\trans\\bgamma_2)\\right\\}}{1-\\pi_2(\\bHcheck_2;\\bxi_2)}\\right].\n\\end{align*}\n\nWe simply replace the propensity score functions with these smooth versions in $\\psi^{v}_{\\subSSLDR}(\\bL;\\bThetabar)$, detail is given in Appendix \\ref{variance estimation of Vss}. To estimate the variance we use the sample-split estimators:\n\\[\n\\hat\\sigma^2\\subSSLDR=\nn^{-1}\\sum_{k=1}^{K}\\sum_{i\\in\\mathcal{I}_k}\\psi^{v{\\scriptscriptstyle (\\text{-}k)}}\\subSSLDR(\\bUvec_i;\\bThetahat)^2.\n\\]\n\n\n\n\\end{remark}\n\n\n\n\\section{Simulations and application to EHR data:}\\label{section: sims and application}\nWe perform extensive simulations to evaluate the finite sample performance of our method. Additionally we apply our methods to an EHR study of treatment response for patients with inflammatory bowel disease to identify optimal treatment sequence. These data have treatment response outcomes available for a small subset of patients only. \n\n\\subsection{Simulation results}\nWe compare our SSL Q-learning methods to fully supervised $Q$-learning using labeled datasets of different sizes and settings. We focus on the efficiency gains of our approach. First we discuss our simulation settings, then go on to show results for the $Q$ function parameters under correct and incorrect working models for \\eqref{linear_Qs}. We then show value function summary statistics under correct models, and mis-specification for the $Q$ models in \\eqref{linear_Qs} and the propensity score function $\\pi_2$ in \\eqref{logit_Ws}.\n\nFollowing a similar set-up as in \\citet{schulte2014}, we first consider a simple scenario with a single confounder variable at each stage with $\\bH_{10}=\\bH_{11}=(1,O_1)\\trans$, $\\bHcheck_{20}=(Y_2,1,O_1,A_1,O_1A_1,O_2)\\trans$, and $\\bH_{21}=(1,A_1,O_2)\\trans$. 
Specifically, we sequentially generate \n\\begin{alignat*}{3}\n& O_1\\sim \\Bern(0.5), &\\quad& A_1\\sim \\Bern(\\sigma\\left\\{\\bH_{10}\\trans\\bxi_1^0\\right\\}),&\\quad& Y_2\\sim\\Nsc(\\bXcheck_1\\trans\\btheta^0_1, 1), \\\\\n& O_2 \\sim \\Nsc(\\bHcheck_{20}\\trans\\pmb\\delta^0,2),&\\quad&\nA_2 \\sim\\Bern\\left(\\sigma\\left\\{\\bH_{20}\\trans\\bxi_2^0+\\xi_{26}^0O_2^2\\right\\}\\right), \\quad \\mbox{and} &\\quad& Y_3 \\sim\\Nsc(m_3\\left\\{\\bHcheck_{20}\\right\\}, 2). \n\\end{alignat*}\nwhere $m_3\\{\\bHcheck_{20}\\}=\\bH_{20}\\trans\\bbeta^0_2+A_2(\\bH_{21}\\trans\\bgamma^0_2)+\\beta_{27}^0O_2^2Y_2\\sin\\{[O_2^2(Y_2+1)]^{-1}\\}$. \nSurrogates are generated as $W_t=\\floor{Y_{t+1}+Z_t},$ $Z_t\\sim\\Nsc(0,\\sigma^2_{z,t})$, $t=1,2$ where $\\floor{x}$ corresponds to the integer part of $x\\in\\mathbb{R}$. Throughout, we let $\\bxi_1^0=(0.3,-0.5)\\trans$, $\\bbeta_1^0=(1,1)\\trans$, $\\bgamma_1^0=(1,-2)\\trans$\n$\\pmb\\delta^0=(0,0.5,-0.75,0.25)\\trans$, $\\bxi^0_2=(0, 0.5, 0.1,-1,-0.1)\\trans$ $\\bbeta_2^0=(.1, 3, 0,0.1,-0.5,-0.5)\\trans$, $\\bgamma_2^0=(1, 0.25, 0.5)\\trans$. \n\n\nWe consider an additional case to mimic the structure of the EHR data set used for the real-data application. Outcomes $Y_t$ are binary, and we use a higher number of covariates for the $Q$ functions and multivariate count surrogates $\\bW_t$ $t=1,2$. Data is simulated with $\\bH_{10}=(1,O_1,\\dots,O_6)\\trans$, $\\bH_{11}=(1,O_2,\\dots,O_6)\\trans$, $\\bHcheck_{20}=(Y_2,1,O_1,\\dots,O_6,A_1,Z_{21},Z_{22})\\trans$, and $\\bH_{21}=(1,O_1,\\dots,O_4,A_1,Z_{21},Z_{22})\\trans$, generated according to\n\\begin{alignat*}{3}\n& \\bO_1\\sim\\Nsc(\\bzero,I_6), &\\quad& A_1\\sim\\Bern(\\sigma\\{\\bH_{10}\\trans\\bxi_1^0\\}),&\\quad& Y_2\\sim\\Bern(\\sigma\\{\\bXcheck_1\\trans\\btheta^0_1\\}), \\\\\n& \\bO_2=\\left[I\\left\\{Z_1>0\\right\\},I\\left\\{Z_2>0\\right\\}\\right]\\trans&\\quad&\nA_2 \\sim\\Bern\\left(\\tilde m_2\\{\\bHcheck_{20}\\}\\right), \\quad \\mbox{and} &\\quad& Y_3 \\sim\\Bern(\\tilde m_3\\left\\{\\bHcheck_{20}\\right\\}),\n\\end{alignat*}\nwith $\\tilde m_2=\\sigma\\left\\{\\bH_{20}\\trans\\bxi_2^0+\\tilde\\bxi_2\\trans\\bO_2\\right\\}$, $\\tilde m_3(\\bHcheck_{20})=\\bH_{20}\\trans\\bbeta^0_2+A_2(\\bH_{21}\\trans\\bgamma^0_2)+\\tilde\\bbeta_2\\trans\\bO_2Y_2\\sin\\{\\|\\bO_2\\|^2_2\/(Y_2+1)\\}$ and $Z_l=O_{1l}\\delta_l^0+\\epsilon_z$, $\\epsilon_z\\sim\\Nsc(0,1)$ $l=1,2$. The dimensions for the $Q$ functions are 13 and 37 for the first and second stage respectively, which match with our IBD dataset discussed in Section \\ref{section: IBD}. The surrogates are generated according to $\\bW_t=\\floor{\\bZ_t}$, with $\\bZ_t\\sim\\Nsc\\left(\\balpha\\trans(1,\\bO_t,A_t,Y_t),I\\right)$. Parameters are set to $\\bxi_1^0=(-0.1,1,-1,0.1)\\trans$, $\\bbeta_1^0=(0.5, 0.2,-1,-1,0.1,-0.1,0.1)\\trans$, $\\bgamma_1^0=(1, -2,-2,-0.1,0.1,-1.5)\\trans$, $\\bxi^0_2=(0, 0.5, 0.1,-1,1,-0.1)\\trans$, $\\bbeta_2^0=(1,\\bbeta_1^0, 0.25,-1,-0.5)\\trans$, $\\bgamma_2^0=(1, 0.1,-0.1,0.1,-0.1,0.25,-1,-0.5)\\trans$, and $\\balpha=(1,\\bzero,1)\\trans$.\n\nFor all settings, we fit models $Q_1(\\bH_1,A_1)=\\bH_{10}\\trans\\bbeta^0_1+A_1(\\bH_{11}\\trans\\bgamma^0_1)$, $Q_2(\\bHcheck_2,A_2)=\\bHcheck_{20}\\trans\\bbeta^0_2+A_2(\\bH_{21}\\trans\\bgamma^0_2)$ for the $Q$ functions, $\\pi_1(\\bH_1)=\\sigma\\left(\\bH_{10}\\trans\\bxi_1\\right)$ and $\\pi_2(\\bHcheck_2)=\\sigma\\left(\\bH_{20}\\trans\\bxi_2\\right)$ for the propensity scores. 
The parameters $\\xi_{26}^0$ and $\\beta_{27}^0$ and $\\tilde\\bxi_2,\\tilde\\bbeta_2$ index mis-specification in the fitted Q-learning outcome models and the propensity score models with a value of 0 corresponding to a correct specification. In particular, we set $\\xi_{26}^0=1$, $\\tilde\\bxi_2=\\frac{1}{\\|(1,\\dots,1)\\|_2}(1,\\dots,1)\\trans$, and $\\beta_{27}^0=1$, $\\tilde\\bbeta_2=\\frac{1}{\\|(1,\\dots,1)\\|_2}(1,\\dots,1)\\trans$ for mis-specification of propensity score $\\pi_2$ and $Q_1,$ $Q_2$ functions respectively. We set $\\xi_{26}^0=\\beta_{27}^0=0$ and $\\tilde\\bxi=\\bzero$, $\\tilde\\bbeta_2=\\bzero$ for correct model specification. Under mis-specification of the outcome model or propensity score model, the term omitted by the working models is highly non-linear, in which case the imputation model will be mis-specified as well. We note that our method does not need correct specification of the imputation model. For the imputation models, we considered both random forest (RF) with 500 trees and basis expansion (BE) with piecewise-cubic splines with 2 equally spaced knots on the quantiles 33 and 67 \\citep{HastieT}. Finally, we consider two choices of $(n,N)$: $(135,1272)$ which are similar to the sizes of our EHR study and larger sizes of $(500,10000)$. For each configuration, we summarize results based on $1,000$ replications. \n\n\\begin{table}[ht]\n\\centering\n\\centerline{(a) $n=135$ and $N=1272$}\n\\scalebox{0.8}{\n\\begin{tabular}{@{}lccccccclcccccl@{}}\n\\toprule\n & \\multicolumn{2}{c}{Supervised} & & \\multicolumn{11}{c}{Semi-Supervised} \\\\ \\cmidrule(lr){2-3} \\cmidrule(lr){5-15} \n & \\multicolumn{2}{c}{} & & \\multicolumn{5}{c}{Random Forests} & & \\multicolumn{5}{c}{Basis Expansion} \\\\ \\cmidrule(l){5-9} \\cmidrule(l){11-15} \nParameter & Bias & ESE & & Bias & ESE & ASE & CovP & RE & & Bias & ESE & ASE & CovP & RE \\\\ \\cmidrule(l){1-3} \\cmidrule(l){5-9} \\cmidrule(l){11-15} \n $\\gamma_{11}$=1.4 & -0.03 & 0.41 & & 0.00 & 0.26 & 0.24 & 0.93 & 1.57 & & 0.00 & 0.24 & 0.23 & 0.93 & 1.68 \\\\ \n $\\gamma_{12}$=-2.6 & 0.04 & 0.58 & & -0.01 & 0.36 & 0.34 & 0.94 & 1.61 & & -0.02 & 0.35 & 0.31 & 0.90 & 1.69 \\\\ \n $\\gamma_{21}$=0.8 & 0.00 & 0.34 & & 0.01 & 0.21 & 0.20 & 0.93 & 1.61 & & 0.00 & 0.20 & 0.19 & 0.94 & 1.71 \\\\ \n $\\gamma_{22}$=0.2 & -0.02 & 0.45 & & -0.01 & 0.28 & 0.28 & 0.95 & 1.60 & & -0.01 & 0.27 & 0.26 & 0.94 & 1.70 \\\\ \n $\\gamma_{23}$=0.5 & 0 & 0.18 & & 0.01 & 0.11 & 0.11 & 0.94 & 1.59 & & 0.00 & 0.11 & 0.11 & 0.94 & 1.68 \\\\ \\bottomrule\n\\end{tabular}}\\vspace{.1in}\n\\centerline{(b) $n=500$ and $N=10,000$}\n\\scalebox{0.8}{\n\\begin{tabular}{@{}lccccccclcccccl@{}}\n\\toprule\n & \\multicolumn{2}{c}{Supervised} & & \\multicolumn{11}{c}{Semi-Supervised} \\\\ \\cmidrule(lr){2-3} \\cmidrule(lr){5-15} \n & \\multicolumn{2}{c}{} & & \\multicolumn{5}{c}{Random Forests} & & \\multicolumn{5}{c}{Basis Expansion} \\\\ \\cmidrule(l){5-9} \\cmidrule(l){11-15} \nParameter & Bias & ESE & & Bias & ESE & ASE & CovP & RE & & Bias & ESE & ASE & CovP & RE \\\\ \\cmidrule(l){1-3} \\cmidrule(l){5-9} \\cmidrule(l){11-15} \n $\\gamma_{11}$=1.4 & 0.01 & 0.22 & & 0.01 & 0.12 & 0.11 & 0.92 & 1.76 & & 0.01 & 0.12 & 0.11 & 0.92 & 1.80 \\\\ \n $\\gamma_{12}$=-2.6 & 0 & 0.29 & & 0 & 0.17 & 0.16 & 0.93 & 1.73 & & -0.01 & 0.16 & 0.15 & 0.93 & 1.80 \\\\ \n $\\gamma_{21}$=0.8 & 0.00 & 0.17 & & 0.00 & 0.10 & 0.09 & 0.93 & 1.80 & & 0.00 & 0.09 & 0.09 & 0.93 & 1.86 \\\\ \n $\\gamma_{22}$=0.2 & -0.01 & 0.23 & & 0 & 0.13 & 0.12 & 0.93 & 1.81 & & 0 & 0.13 & 0.12 & 0.94 & 
1.83 \\\\ \n $\\gamma_{23}$=0.5 & 0.00 & 0.09 & & 0.00 & 0.05 & 0.05 & 0.94 & 1.78 & & 0.00 & 0.05 & 0.05 & 0.95 & 1.81 \\\\ \\bottomrule\n\\end{tabular}}\n\\caption{Bias, empirical standard error (ESE) of the supervised and the SSL estimators with either random forest imputation or basis expansion imputation strategies for $\\bgammabar_1,\\bgammabar_2$ when (a) $n=135$ and $N=1272$ and (b) $n=500$ and $N=10,000$. For the SSL estimators, we also obtain the average of the estimated standard errors (ASE) as well as the empirical coverage probabilities (CovP) of the 95\\% confidence intervals.}\n\\label{tab:simgamma}\n\\end{table}\n\nWe start discussing results under correct specification of the $Q$ functions. In Table \\ref{tab:simgamma}, we present the results for the estimation of treatment interaction coefficients $\\bgammabar_1,\\bgammabar_2$, under the correct model specification, continuous outcome setting with $\\beta_{27}^0=\\xi_{26}^0=0$. The complete tables for all $\\bthetabar$ parameters for the continuous and EHR-like settings can be found in Appendix \\ref{appendix: alt sims}. We report bias, empirical standard error (ESE), average standard error (ASE), 95\\% coverage probability (CovP) and relative efficiency (RE) defined as the ratio of supervised ESE over SSL estimate ESE. Overall, compared to the supervised approach, the proposed semi-supervised $Q$-learning approach has substantial gains in efficiency while maintaining comparable or even lower bias. This is likely due to the refitting step which helps take care of the finite sample bias, both from the missing outcome imputation and $Q$ function parameter estimation. Imputation with BE yields slightly better estimates than when using RF, both in terms of efficiency and bias. Coverage probabilities are close to the nominal level due to the good performance of the standard error estimation.\n\nWe next turn to $Q$-learning parameters under mis-specification of \\eqref{linear_Qs}. Figure \\ref{fig_misspec_Q_sin} shows the bias and root mean square error (RMSE) for the treatment interaction coefficients in the 2-stage $Q$ functions. We focus on the continuous setting, where we set $\\beta_{27}^0\\in\\{-1,0,1\\}$. Note that $\\beta_{27}^0\\neq0$ implies that both $Q$ functions are mis-specified as the fitting of $Q_1$ depends on formulation of $Q_2$ as seen in \\eqref{full_EE}. Semi-supervised $Q$-learning is more efficient for any degree of mis-specification for both small and large finite sample settings. As the theory predicts, there is no real difference in efficiency gain of SSL across mis-specification of the $Q$ function models. This is because asymptotic distribution of $\\bgammahat\\subSSL$ shown in Theorems \\ref{theorem: unbiased theta2} \\& \\ref{theorem: unbiased theta1} are centered on the target parameters $\\bgammabar$. Thus, both SSL and SUP have negligible bias regardless of the true value of $\\beta_{27}^0$.\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=1.05\\textwidth]{figures\/gamma_stats.png}\n\t\\caption{Monte Carlo estimates of bias and RMSE ratios for estimation of $\\gamma_{11},\\:\\gamma_{12}$, $\\gamma_{21},\\:\\gamma_{22},\\:\\gamma_{23}$ under mis-specification of the $Q$-functions through $\\beta_{27}^0$. 
Results are shown for the large ($N=10,000$, $n=500$) and small ($N=1,272$, $n=135$) data samples for the continuous setting over 1,000 simulated datasets.}\n\t\\label{fig_misspec_Q_sin} \n\\end{figure}\n\n \n\\begin{table}[ht]\n\\centering\n\\centerline{(a) $n=135$ and $N=1272$}\n\\scalebox{0.7}{\n\\begin{tabular}{rlrlrrlrrrrrlrrrrr}\n\\hline\n\\multicolumn{1}{l}{} & & \\multicolumn{1}{l}{} & & \\multicolumn{2}{c}{Supervised} & & \\multicolumn{11}{c}{Semi-Supervised} \\\\ \\cmidrule{5-6} \\cmidrule{8-18} \n\\multicolumn{1}{l}{} & & \\multicolumn{1}{l}{} & & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & & \\multicolumn{5}{c}{Random Forests} & & \\multicolumn{5}{c}{Basis Expansion} \\\\ \\cmidrule{8-12} \\cmidrule{14-18} \nSetting & Model & $\\Vbar$ & & Bias & ESE & & Bias & ESE & ASE & CovP & RE & & Bias & ESE & ASE & CovP & RE \\\\ \\cmidrule{1-3} \\cmidrule{5-6} \\cmidrule{8-12} \\cmidrule{14-18} \n & Correct & 6.08 & & 0.02 & 0.27 & & 0.04 & 0.21 & 0.24 & 0.97 & 1.27 & & 0.02 & 0.23 & 0.25 & 0.97 & 1.18 \\\\ \n Continuous & Missp. $Q$ & 6.34 & & 0.01 & 0.24 & & 0.03 & 0.19 & 0.22 & 0.97 & 1.27 & & 0.00 & 0.20 & 0.22 & 0.97 & 1.20 \\\\ \n & Missp. $\\pi$ & 6.08 & & 0.01 & 0.28 & & 0.02 & 0.22 & 0.24 & 0.97 & 1.24 & & 0.01 & 0.25 & 0.25 & 0.97 & 1.12 \\\\ \n & Correct & 1.38 & & 0.09 & 0.15 & & 0.05 & 0.12 & 0.12 & 0.94 & 1.24 & & 0.04 & 0.13 & 0.12 & 0.95 & 1.12 \\\\ \n EHR & Missp. $Q$ & 1.43 & & 0.09 & 0.14 & & 0.04 & 0.12 & 0.12 & 0.96 & 1.12 & & 0.03 & 0.14 & 0.12 & 0.95 & 1.02 \\\\ \n & Missp. $\\pi$ & 1.38 & & 0.09 & 0.15 & & 0.05 & 0.14 & 0.13 & 0.96 & 1.13 & & 0.04 & 0.14 & 0.13 & 0.96 & 1.05 \\\\ \\end{tabular}}\\vspace{.1in}\n\\centerline{(b) $n=500$ and $N=10,000$}\n\\scalebox{0.7}{\n\\begin{tabular}{rlrlrrlrrrrrlrrrrr}\n\\toprule\n\\multicolumn{1}{l}{} & & \\multicolumn{1}{l}{} & & \\multicolumn{2}{c}{Supervised} & & \\multicolumn{11}{c}{Semi-Supervised} \\\\ \\cmidrule{5-6} \\cmidrule{8-18} \n\\multicolumn{1}{l}{} & & \\multicolumn{1}{l}{} & & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & & \\multicolumn{5}{c}{Random Forests} & & \\multicolumn{5}{c}{Basis Expansion} \\\\ \\cmidrule{8-12} \\cmidrule{14-18} \nSetting & Model & $\\Vbar$ & & Bias & ESE & & Bias & ESE & ASE & CovP & RE & & Bias & ESE & ASE & CovP & RE \\\\ \\cmidrule{1-3} \\cmidrule{5-6} \\cmidrule{8-12} \\cmidrule{14-18} \n & Correct & 6.08 & & 0.02 & 0.15 & & 0.03 & 0.11 & 0.12 & 0.96 & 1.32 & & 0.02 & 0.13 & 0.13 & 0.95 & 1.16 \\\\ \n Continuous & Missp. $Q$ & 6.34 & & 0.01 & 0.13 & & 0.03 & 0.10 & 0.10 & 0.96 & 1.31 & & 0.01 & 0.11 & 0.11 & 0.96 & 1.16 \\\\ \n & Missp. $\\pi$ & 6.08 & & 0.01 & 0.14 & & 0.03 & 0.11 & 0.12 & 0.96 & 1.28 & & 0.02 & 0.12 & 0.12 & 0.95 & 1.16 \\\\ \n & Correct & 1.38 & & 0.02 & 0.07 & & 0.01 & 0.04 & 0.06 & 0.99 & 1.55 & & 0.00 & 0.06 & 0.06 & 0.98 & 1.23 \\\\ \n EHR & Missp. $Q$ & 1.43 & & 0.01 & 0.07 & & 0.00 & 0.04 & 0.05 & 0.99 & 1.66 & & 0.00 & 0.05 & 0.06 & 0.98 & 1.35 \\\\ \n & Missp. $\\pi$ & 1.38 & & 0.02 & 0.08 & & 0.01 & 0.06 & 0.07 & 0.99 & 1.22 & & 0.00 & 0.07 & 0.07 & 0.97 & 1.03 \\\\ \\bottomrule\n\\end{tabular}}\n\\caption{Bias, empirical standard error (ESE) of the supervised estimator $\\Vhat\\subSUPDR$ and bias, ESE, average standard error (ASE) and coverage probability (CovP) for $\\Vhat\\subSSLDR$ with either random forest imputation or basis expansion imputation strategies when (a) $n=135$ and $N=1272$ and (b) $n=500$ and $N=10,000$. 
We show performance and relative efficiency across both simulation settings for estimation under correct models, and under mis-specification of the $Q$ function or the propensity score function.}\n\\label{tab:simvalues}\n\\end{table}\n\nNext, we analyze the performance of the doubly robust value function estimators for both the continuous and the EHR-like settings. Table \\ref{tab:simvalues} shows bias and RMSE across different sample sizes, comparing the SSL and SUP estimators. Results are shown for correct specification of the $Q$ functions and propensity scores, and when either is mis-specified. Bias across simulation settings is relatively similar between $\\Vhat\\subSSLDR$ and $\\Vhat\\subSUPDR$, and appears to be small relative to RMSE. The low magnitude of the bias suggests both estimators are robust to model mis-specification. There is an exception in the EHR setting with small sample size, for which the bias is non-negligible. This is likely because the $Q$ functions involve $13+37$ parameters and the propensity score functions another 12, which adds up to a large number relative to the labeled sample size of $n=135$. The SSL bias is lower in this case, which could be due to the refitting step helping to reduce the finite sample bias. Efficiency gains of $\\Vhat\\subSSLDR$ are consistent across model specifications. We next illustrate our approach using an IBD dataset.\n\n\\subsection{Application to an EHR Study of Inflammatory Bowel Disease}\\label{section: IBD}\n\n\nAnti-tumor necrosis factor (anti-TNF) therapy has greatly changed the management and improved the outcomes of patients with inflammatory bowel disease (IBD) \\citep{peyrin2010anti}. However, it remains unclear whether a specific anti-TNF agent has any advantage in efficacy over other agents, especially at the individual level. Few randomized clinical trials have been performed to directly compare anti-TNF agents for treating IBD patients \\citep{sands2019vedolizumab}. Retrospective studies comparing infliximab and adalimumab for treating IBD have found limited and sometimes conflicting evidence of their relative effectiveness \\citep{inokuchi2019long,lee2019comparison,osterman2017infliximab}. There is even less evidence regarding optimal STRs for choosing between these treatments over time \\citep{ashwin2016}. To explore this, we performed RL using data from a cohort of IBD patients previously identified via machine learning algorithms from the EHR systems of two tertiary referral academic centers in the Greater Boston metropolitan area \\citep{ashwin2012}. We focused on the subset of $N=1,272$ patients who initiated either Infliximab ($A_1=0$) or Adalimumab ($A_1=1$) and continued to be treated by either of these two therapies during the next 6 months. The observed treatment sequence distributions are shown in Table \\ref{Table: obs As}. The outcomes of interest are the binary indicators of treatment response at 6 months ($t=2$) and at 12 months ($t=3$), both of which were available only for a subset of $n=135$ patients whose outcomes were manually annotated via chart review. \n\nTo derive the STR, we included gender, age, Charlson co-morbidity index \\citep{charlson}, prior exposure to anti-TNF agents, as well as mentions of clinical terms associated with IBD, such as bleeding complications, extracted from the clinical notes via natural language processing (NLP) as confounding variables at both time points. 
To improve the imputation of $Y_t$, we use 15 relevant NLP features, such as mentions of rectal or bowel resection surgery, as surrogates at $t=1,2$. We transformed all count variables using $x\\mapsto \\log(1+x)$ to decrease skewness in the distributions, and centered continuous features. We used RF with 500 trees to carry out the imputation step, and 5-fold cross-validation (CV) to estimate the value function.\n\n\nThe supervised and semi-supervised estimates are shown in Table \\ref{Table: IBD results} for the $Q$-learning models and in Table \\ref{Table: V IBD results} for the value functions associated with the estimated STR. Similar to what we observed in the simulation studies, semi-supervised $Q$-learning has more power to detect significant predictors of treatment response. The relative efficiency for almost all $Q$ function estimates is near or above 2. Supervised $Q$-learning does not have the power to detect predictors such as prior use of anti-TNF agents, which are clearly relevant to treatment response \\citep{ashwin2016}. Semi-supervised $Q$-learning is able to detect that the efficacy of Adalimumab wears off as patients get older, meaning that younger patients in the first stage experienced a higher rate of treatment response to Adalimumab, a finding that cannot be detected with supervised $Q$-learning. Additionally, supervised $Q$-learning does not pick up that there is a higher rate of response to Adalimumab among patients who are male or have experienced an abscess. This translates into a far from optimal treatment rule, as seen in the cross-validated value function estimates. Table \\ref{Table: V IBD results} shows that using our semi-supervised approach both to find the regime and to estimate the value function of the resulting treatment rule yields a more efficient estimate, as the semi-supervised value function estimate $\\Vhat\\subSSLDR$ yielded a smaller standard error than that of the supervised estimate $\\Vhat\\subSUPDR$. However, the standard errors are large relative to the point estimates. 
On the upside, they both yield estimates very close in numerical value which is reassuring: both should be unbiased as predicted by theory and simulations.\n\n\\begin{table}[ht]\n\\centering\n\\begin{tabular}{clcc}\n\\hline\n & & \\multicolumn{2}{c}{$A_1$} \\\\ \\cline{2-4} \n\\multicolumn{1}{c}{} & & 0 & 1 \\\\\n\\multicolumn{1}{c}{\\multirow{2}{*}{$A_2$}} & 0 & 912 & 327 \\\\\n\\multicolumn{1}{c}{} & 1 & 27 & 183 \\\\ \\hline\n\\end{tabular}\\caption{Distribution of treatment trajectories for observed sample of size 1407.}\\label{Table: obs As}\n\n\\end{table}\n\n\n\\begin{table}[ht]\n\\scalebox{0.55}{\n\\begin{tabular}{lcccccccccclccccccccc}\n\\cmidrule{1-10} \\cmidrule{12-21}\n\\multicolumn{10}{c}{Stage 1 Regression} & & \\multicolumn{10}{c}{Stage 2 Regression} \\\\ \\cmidrule{1-10} \\cmidrule{12-21} \n & \\multicolumn{3}{c}{Supervised} & & \\multicolumn{3}{c}{Semi-Supervised} & & & & \\multicolumn{4}{c}{Supervised} & & \\multicolumn{3}{c}{Semi-Supervised} & & \\\\ \\cmidrule{2-4} \\cmidrule{6-8} \\cmidrule{12-15} \\cmidrule{17-19}\nParameter & Estimate & SE & P-val & & Estimate & SE & P-val & & RE & & Parameter & Estimate & SE & P-val & & Estimate & SE & P-val & & RE \\\\ \\cmidrule{1-4} \\cmidrule{6-8} \\cmidrule{10-10} \\cmidrule{12-15} \\cmidrule{17-19} \\cmidrule{21-21} \nIntercept & \\textbf{0.424} & \\textbf{0.082} & \\textbf{0.00} & & \\textbf{0.518} & \\textbf{0.028} & \\textbf{0.00} & & 2.937 & & $Y_1$ & \\textbf{0.37} & \\textbf{0.11} & \\textbf{0.00} & & \\textbf{0.55} & \\textbf{0.05} & \\textbf{0.00} & & 2.08 \\\\ \n Female & -0.237 & 0.167 & 0.16 & & \\textbf{-0.184} & \\textbf{0.067} & \\textbf{0.007} & & 2.514 & & Intercept & 0.08 & 0.06 & 0.17 & & 0.04 & 0.02 & 0.14 & & 2.40 \\\\ \n Age & 0.155 & 0.088 & 0.081 & & \\textbf{0.18} & \\textbf{0.034} & \\textbf{0.00} & & 2.588 & & Female & -0.01 & 0.10 & 0.92 & & -0.00 & 0.05 & 0.98 & & 2.21 \\\\ \n Charlson Score & 0.006 & 0.072 & 0.929 & & -0.047 & 0.026 & 0.075 & & 2.776 & & Age & 0.05 & 0.06 & 0.35 & & \\textbf{0.07} & \\textbf{0.02} & \\textbf{0.00} & & 2.33 \\\\ \n Prior anti-TNF & -0.038 & 0.06 & 0.524 & &\\textbf{ -0.085} & \\textbf{0.019} & \\textbf{0.00} & & 3.177 & & Charlson Score & 0.04 & 0.04 & 0.33 & & \\textbf{0.06} & \\textbf{0.02} & \\textbf{0.01} & & 2.06 \\\\ \n Perianal & \\textbf{0.138} & \\textbf{0.06} & \\textbf{0.022} & & \\textbf{0.179} & \\textbf{0.022} & \\textbf{0.00} & & 2.688 & & Prior anti-TNF & -0.05 & 0.05 & 0.29 & & \\textbf{-0.09} & \\textbf{0.02} & \\textbf{0.00} & & 2.39 \\\\ \n Bleeding & 0.049 & 0.08 & 0.54 & & 0.058 & 0.03 & 0.055 & & 2.675 & & Perianal & -0.01 & 0.04 & 0.80 & & \\textbf{-0.03} & \\textbf{0.02} & \\textbf{0.06} & & 2.31 \\\\ \n A1 & 0.163 & 0.488 & 0.739 & & 0.148 & 0.206 & 0.473 & & 2.374 & & Bleeding & -0.04 & 0.05 & 0.49 & & -0.03 & 0.03 & 0.29 & & 2.14 \\\\ \n Female$\\times A_1$ & 0.168 & 0.696 & 0.81 & & -0.042 & 0.287 & 0.886 & & 2.424 & & A1 & 0.11 & 0.25 & 0.67 & & 0.03 & 0.10 & 0.74 & & 2.60 \\\\ \n Age$\\times A_1$ & -0.177 & 0.264 & 0.503 & & \\textbf{-0.278} & \\textbf{0.109} & \\textbf{0.013} & & 2.418 & & Abscess$_2$ & 0.06 & 0.04 & 0.16 & & \\textbf{0.05} & \\textbf{0.01} & \\textbf{0.00} & & 2.68 \\\\ \n Charlson Score$\\times A_1$ & 0.136 & 0.391 & 0.728 & & 0.195 & 0.178 & 0.276 & & 2.194 & & Fistula$_2$ & 0.02 & 0.05 & 0.67 & & 0.01 & 0.02 & 0.62 & & 2.33 \\\\ \n Perianal$\\times A_1$ & -0.113 & 0.226 & 0.618 & & -0.019 & 0.08 & 0.808 & & 2.838 & & Female$\\times A_1$ & 0.13 & 0.38 & 0.74 & & 0.17 & 0.16 & 0.30 & & 2.37 \\\\ \n Bleeding$\\times 
A_1$ & 0.262 & 0.364 & 0.474 & & 0.127 & 0.161 & 0.431 & & 2.267 & & Age$\\times A_1$ & -0.02 & 0.12 & 0.88 & & -0.09 & 0.06 & 0.17 & & 1.94 \\\\ \n & & & & & & & & & & & Charlson Score$\\times A_1$ & -0.02 & 0.16 & 0.89 & & 0.04 & 0.07 & 0.55 & & 2.19 \\\\ \n & & & & & & & & & & & Perianal$\\times A_1$ & -0.14 & 0.09 & 0.15 & & \\textbf{-0.17} & \\textbf{0.04} & \\textbf{0.00} & & 2.34 \\\\ \n & & & & & & & & & & & Bleeding$\\times A_1$ & 0.13 & 0.20 & 0.51 & & 0.03 & 0.09 & 0.76 & & 2.17 \\\\ \n & & & & & & & & & & & A2 & 0.07 & 0.17 & 0.69 & & \\textbf{0.22} & \\textbf{0.07} & \\textbf{0.00} & & 2.55 \\\\ \n & & & & & & & & & & & Female$\\times A_2$ & -0.39 & 0.28 & 0.16 & & \\textbf{-0.51} & \\textbf{0.11} & \\textbf{0.00} & & 2.53 \\\\ \n & & & & & & & & & & & Age$\\times A_2$ & 0.09 & 0.10 & 0.40 & & \\textbf{0.15} & \\textbf{0.04} & \\textbf{0.00} & & 2.27 \\\\ \n & & & & & & & & & & & Charlson Score$\\times A_2$ & 0.01 & 0.07 & 0.84 & & -0.03 & 0.03 & 0.42 & & 2.08 \\\\ \n & & & & & & & & & & & Perianal$\\times A_2$ & \\textbf{0.20} & \\textbf{0.09} &\\textbf{ 0.04} & & \\textbf{0.23} & \\textbf{0.04} &\\textbf{ 0.00} & & 2.23 \\\\ \n & & & & & & & & & & & Bleeding$\\times A_2$ & 0.03 & 0.08 & 0.77 & & 0.02 & 0.04 & 0.49 & & 2.34 \\\\ \n & & & & & & & & & & & Abscess$_2\\times A_2$ & -0.13 & 0.07 & 0.06 & & \\textbf{-0.09} & \\textbf{0.03} & \\textbf{0.00} & & 2.31 \\\\ \n & & & & & & & & & & & Fistula$_2\\times A_2$ & -0.04 & 0.06 & 0.56 & & -0.03 & 0.03 & 0.36 & & 2.17 \\\\ \\bottomrule\n\\end{tabular}}\n\\caption{Results of Inflammatory Bowel Disease data set, for first and second stage regressions. Fully supervised $Q$-learning is shown on the left and semi-supervised is shown on the right. Last columns in the panels show relative efficiency (RE) defined as the ratio of standard errors of the semi-supervised vs. supervised method, RE greater than one favors semi-supervised. Significant coefficients at the 0.05 level are in bold.}\\label{Table: IBD results}\n\\end{table}\n\n\n\\begin{table}[ht]\n\\centering\n\\begin{tabular}{lll}\n & Estimate & SE \\\\ \\hline\n$\\Vhat\\subSUPDR$ & 0.851 & 0.486 \\\\\n$\\Vhat\\subSSLDR$ & 0.871 & 0.397 \\\\ \\hline\n\\end{tabular}\\caption{Value function estimates for Inflammatory Bowel Disease data set, the first row has the estimate for treatment rule learned using $\\mathcal{U}$ and its respective value function, the second row shows the same for a rule estimated using $\\mathcal{L}$ and its estimated value.}\\label{Table: V IBD results}\n\\end{table}\n\\section{Discussion}\\label{section: discussion} \n\nWe have proposed an efficient and robust strategy for estimating optimal dynamic treatment rules and their value function, in a setting where patient outcomes are scarce. In particular, we developed a two step estimation procedure amenable to non-parametric imputation of the missing outcomes. This helped us establish $\\sqrt n$-consistency and asymptotic normality for both the $Q$ function parameters $\\bthetahat$ and the doubly robust value function estimator $\\Vhat\\subSSLDR$. We additionally provided theoretical results which illustrate if and when the outcome-surrogates $\\bW$ contribute towards efficiency gain in estimation of $\\bthetahat\\subSSL$ and $\\Vhat\\subSSLDR$. 
This lets us conclude that our procedure is always preferable to using the labeled data only: since the estimation is robust to mis-specification of the imputation models, our approach is safe to use and will be at least as efficient as the supervised methods.\n\nWe focused on the two-time-point, binary-action setting for simplicity, but all of our theoretical results and algorithms extend readily to longer finite time horizons and to multiple actions, at the cost of more careful bookkeeping of notation. In practice, one would need to be mindful of the variability of the IPW value function, which increases substantially with the length of the horizon. However, the SSL approach could also be used to estimate the propensity scores, providing an efficiency gain that would help stabilize the IPW estimator over longer horizons. \n\nWe are interested in extending this framework to handle missing at random (MAR) sampling mechanisms. In the EHR setting, it is feasible to sample a subset of the data completely at random in order to annotate the records; hence, we argue that the MCAR assumption holds by design in our context. The MAR setting, however, would allow us to leverage different data sources for $\\Lsc$ and $\\Usc$. For example, we could use an annotated EHR cohort together with a large unlabeled registry data repository for our inference, ultimately making the estimated policies and value functions more efficient and robust. We believe this line of work has the potential to leverage massive observational cohorts, which will help improve personalized clinical care for a wide range of diseases. \n\n\n\\newpage\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nLiver cancer is the second leading cause of global cancer mortality\n(after lung cancer), and is one of the most rapidly increasing cancers\nin terms of incidence and mortality worldwide and in the United States\n\\cite{Ferlay2015,Petrick2016}. Although contrast-enhanced computed\ntomography (CT) has been widely used for liver cancer screening, diagnosis,\nprognosis, and the assessment of treatment response, proper\ninterpretation of CT images is time-consuming and prone to\ninter- and intra-observer variability. Therefore, computerized\nanalysis methods have been developed to assist radiologists and oncologists\nin the interpretation of liver CT images.\n\nAutomatically segmenting the liver and viable tumors from other tissue\nis an essential step in quantitative image analysis of abdominal CT\nimages. However, automatic liver segmentation is a challenging task\ndue to the low contrast inside the liver, fuzzy boundaries with adjacent\norgans and a highly varying shape. Meanwhile, automatic tumor segmentation\nwithin the liver suffers from large variability of appearance in\nsize, shape, location, intensity and texture, as well as in the number\nof occurrences. Although researchers have developed various methods\nto address these challenges \\cite{Heimann:2009,Yuan:2015,Linguraru:2012},\ninteractive approaches are still the only way to achieve acceptable\ntumor segmentation.\n\nIn this paper, we present a fully automatic framework based on deep\nfully convolutional-deconvolutional neural networks (CDNN) \\cite{long2015fully,noh2015learning,yuan2017ieee}\nfor liver and liver tumor segmentation on contrast-enhanced abdominal\nCT images. Similar to \\cite{christ2016automatic}, our framework is\nhierarchical and includes three steps. 
In the first step, a simple\nCDNN model is trained to obtain a quick but coarse segmentation of\nthe liver on the entire 3D CT volume; then another CDNN is applied\nto the liver region for fine liver segmentation; finally, the segmented\nliver region is enhanced by histogram equalization and serves as an\nadditional input to the third CDNN for tumor segmentation. Instead\nof developing sophisticated pre- and post-processing methods and hand-crafted\nfeatures, we focus on designing an appropriate network architecture and\nefficient learning strategies such that our framework can handle images\nacquired under various conditions.\n\n\\section{Dataset and Preprocessing}\n\nOnly the LiTS challenge datasets were used for model training and validation.\nThe LiTS datasets consist of $200$ contrast-enhanced abdominal CT\nscans provided by various clinical sites around the world, of which\n$130$ cases were used for training and the remaining $70$ for testing.\nThe datasets have significant variations in image quality, spatial\nresolution and field-of-view, with in-plane resolution ranging from\n$0.6\\times0.6$ to $1.0\\times1.0$ mm and slice thickness from $0.45$\nto $6.0$ mm. Each axial slice has an identical size of $512\\times512$,\nbut the number of slices in each scan varies from $42$ to $1026$.\n\nFor pre-processing, we simply truncated the voxel values of all\nCT scans to the range of {[}-100, 400{]} HU to eliminate irrelevant\nimage information. While comprehensive 3D contextual information\ncould potentially improve the segmentation performance, our\nlimited hardware resources made it infeasible to run a fully 3D\nCDNN on the volumetric CT scans in our experimental environment. Thus,\nour CDNN model operates on 2D slices and the CT volume is processed\nslice-by-slice, with the two immediately adjacent slices concatenated as\nadditional input channels to the CDNN model. Different resampling\nstrategies were applied at the different hierarchical levels and are\ndescribed below.\n\n\\section{Method}\n\n\\subsection{CDNN model}\n\nOur CDNN model \\cite{yuan2017ieee} belongs to the family of fully\nconvolutional networks (FCN) that extend the convolution process across\nthe entire image and predict the segmentation mask as a whole. The\nmodel performs a pixel-wise classification and essentially serves\nas a filter that projects a 2D CT slice to a map in which each element\nrepresents the probability that the corresponding input pixel belongs\nto the liver (or tumor). The model consists of two pathways: contextual\ninformation is aggregated via convolution and pooling in the convolutional\npath, while full image resolution is recovered via deconvolution and\nup-sampling in the deconvolutional path. In this way, the CDNN model\ncan take both global information and fine details into account for\nimage segmentation; an illustrative sketch of such a two-pathway network is given below.
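The sketch below uses Lasagne and Theano, which are also the packages used for the implementation described later. It only mirrors the general two-pathway structure described above; the number of layers, the filter counts and the input size are placeholders for illustration and do not correspond to the actual CDNN-I or CDNN-II configurations.

\begin{verbatim}
# Illustrative two-pathway conv-deconv sketch (not the exact architecture
# of this work); layer depths and filter counts are placeholders.
from lasagne.layers import (InputLayer, Conv2DLayer, MaxPool2DLayer,
                            Upscale2DLayer, batch_norm)
from lasagne.nonlinearities import rectify, sigmoid

def build_toy_cdnn(in_channels=3, size=128):
    # input: a CT slice plus its two adjacent slices as extra channels
    net = InputLayer(shape=(None, in_channels, size, size))
    # convolutional path: aggregate context via convolution and pooling
    net = batch_norm(Conv2DLayer(net, 32, 3, pad='same', nonlinearity=rectify))
    net = MaxPool2DLayer(net, pool_size=2)
    net = batch_norm(Conv2DLayer(net, 64, 3, pad='same', nonlinearity=rectify))
    net = MaxPool2DLayer(net, pool_size=2)
    # deconvolutional path: up-sample and densify the coarse maps
    net = Upscale2DLayer(net, scale_factor=2)
    net = batch_norm(Conv2DLayer(net, 64, 3, pad='same', nonlinearity=rectify))
    net = Upscale2DLayer(net, scale_factor=2)
    net = batch_norm(Conv2DLayer(net, 32, 3, pad='same', nonlinearity=rectify))
    # 1x1 convolution with a sigmoid gives the per-pixel probability map
    return Conv2DLayer(net, 1, 1, nonlinearity=sigmoid)
\end{verbatim}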
We fix the stride as $1$ and use Rectified Linear Units (ReLUs) \\cite{krizhevsky2012imagenet}\nas the activation function for each convolutional\/deconvolutional\nlayer. For the output layer, we use the sigmoid activation function.\nBatch normalization is added to the output of every convolutional\/deconvolutional\nlayer to reduce the internal covariate shift \\cite{ioffe2015batch}.\n\nWe employ a loss function based on the Jaccard distance proposed in \\cite{yuan2017ieee}\nin this study: \n\\begin{equation}\nL_{d_{J}}=1-\\frac{\\underset{i,j}{\\sum}(t_{ij}p_{ij})}{\\underset{i,j}{\\sum}t_{ij}^{2}+\\underset{i,j}{\\sum}p_{ij}^{2}-\\underset{i,j}{\\sum}(t_{ij}p_{ij})},\\label{eq:ja-loss}\n\\end{equation}\nwhere $t_{ij}$ and $p_{ij}$ are the target and the network output for pixel $(i,\\,j)$,\nrespectively. Compared to the cross entropy used in previous work\n\\cite{christ2016automatic,ronneberger2015u}, the proposed loss function\nis directly related to the image segmentation task, because the Jaccard index\nis a commonly used metric to assess medical image segmentation.\nMoreover, this loss function is well adapted to problems with\na high imbalance between foreground and background classes, as it does\nnot require any class re-balancing. We trained the network using Adam\noptimization \\cite{kingma2014adam} to adjust the learning rate based\non the first- and second-order moments of the gradient at each\niteration. The initial learning rate was set to $0.003$.\n\nTo reduce overfitting, we added two dropout layers with $p=0.5$,\none at the end of the convolutional path and the other right before\nthe last deconvolutional layer. We also employed two types of image\naugmentation to further improve the robustness of the proposed model\nunder a wide variety of image acquisition conditions. One consists\nof a series of geometric transformations: random flipping,\nshifting, rotation and scaling. The other randomly\nnormalizes the contrast of each input channel in the training image\nslices. Note that these augmentations require only little extra computation,\nso the transformed images are generated from the original images for\nevery mini-batch within each iteration.\n\n\\subsection{Liver localization}\n\nThis step aims to locate the liver region by performing a fast but\ncoarse liver segmentation on the entire CT volume, so we designed\na relatively simple CDNN model for this task. This model, named CDNN-I,\nincludes $19$ layers with $230,129$ trainable parameters, and its\narchitectural details can be found in \\cite{yuan2017ieee}. For each\nCT volume, the axial slice size was first reduced to $128\\times128$\nby down-sampling and then the entire image volume was resampled with\na slice thickness of $3$ mm. We found that not all the slices in a\nCT volume were needed to train this CDNN model, so only the slices\ncontaining liver, as well as the $5$ slices superior and inferior to the\nliver, were included in the model training. For liver localization\nand segmentation, the liver and tumor labels were merged into a single\nliver label to provide the ground-truth liver masks during model training.\n\nDuring testing, the new CT images were pre-processed following the\nsame procedure as for the training data, then the trained CDNN-I\nwas applied to each slice of the entire CT volume. Once all slices\nwere segmented, a threshold of $0.5$ was applied to the output of the\nCDNN and a 3D connected-component labeling was performed. The largest\nconnected component was selected as the initial liver region; a minimal sketch of this post-processing step is given below.
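The sketch assumes that the slice-wise CDNN outputs have already been stacked into a single 3D array; the use of scipy.ndimage is an illustrative choice, as the text does not state which implementation was used.

\begin{verbatim}
# Sketch of the liver-localization post-processing: threshold the stacked
# per-slice probability maps at 0.5 and keep the largest 3D connected
# component (scipy.ndimage is an illustrative choice of implementation).
import numpy as np
from scipy import ndimage

def largest_component_mask(prob_volume, threshold=0.5):
    """prob_volume: 3D array of per-voxel liver probabilities."""
    binary = prob_volume > threshold
    labels, n = ndimage.label(binary)        # 3D connected-component labeling
    if n == 0:
        return binary                        # nothing detected: empty mask
    sizes = ndimage.sum(binary, labels, index=range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)  # keep the largest component
\end{verbatim}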
\\subsection{Liver segmentation}\n\nAn accurate liver localization enables us to perform a fine liver\nsegmentation with a more advanced CDNN model while reducing the computational\ntime. Specifically, we first resampled the original image with a slice\nthickness of $2$ mm, then the bounding box of the liver was extracted\nand expanded by $10$ voxels in the $x$, $y$ and $z$ directions to\ncreate a liver volume of interest (VOI). The axial dimensions of the VOI\nwere further adjusted to $256\\times256$, either by down-sampling if\nany dimension was greater than $256$, or by expanding in the $x$ and\/or\n$y$ direction otherwise. All slices in the VOI were used for model\ntraining.\n\nThe CDNN model used for liver segmentation (named CDNN-II) includes\n$29$ layers with about $5\\,M$ trainable parameters. Compared\nto CDNN-I, the size of the \\emph{local receptive field} (LRF), or filter\nsize, is reduced in CDNN-II such that the network can go deeper, i.e.\nhave more layers, which allows more non-linearities to be applied\nwhile being less prone to overfitting \\cite{simonyan2014very}. Meanwhile,\nthe number of feature channels is doubled in each layer. Please refer\nto \\cite{yuan2017automatic} for more details.\n\nDuring testing, the liver VOI was extracted based on the initial liver\nmask obtained in the liver localization step, then the trained CDNN-II\nwas applied to each slice in the VOI to yield a 3D probability map\nof the liver. We used the same post-processing as in liver localization to\ndetermine the final liver mask.\n\n\\subsection{Tumor segmentation}\n\nThe VOI extraction for tumor segmentation was similar to that for liver\nsegmentation, except that the original image resolution was used to\navoid potentially missing small lesions due to image blurring from\nresampling. Instead of using all the slices in the VOI, we only collected\nthose slices containing tumor as training data, so as to focus the training\non the liver lesions and reduce the training time. Besides the original\nimage intensity, a 3D regional histogram equalization was performed\nto enhance the contrast between tumors and the surrounding liver tissue,\nin which only the voxels within the 3D liver mask were considered\nwhen constructing the intensity histogram. The enhanced image served as\nan additional input channel to another CDNN-II model for tumor segmentation.\nWe found that this additional input channel could further boost tumor segmentation\nperformance.\n\nDuring testing, the liver VOI was extracted based on the liver mask from\nthe liver segmentation step. A threshold of $0.5$ was applied to\nthe output of the CDNN-II model and the liver tumors were determined as all\ntumor voxels within the liver mask.\n\n\\subsection{Implementation}\n\nOur CDNN models were implemented in Python using the Theano \\cite{team2016theano}\nand Lasagne\\footnote{http:\/\/github.com\/Lasagne\/Lasagne} packages.\nThe experiments were conducted using a single Nvidia GTX 1060 GPU\nwith 1280 cores and 6 GB of memory.\n\nWe used five-fold cross validation to evaluate the performance of\nour models on the challenge training datasets. The total number of\nepochs was set to $200$ for each fold; an illustrative sketch of the loss and training\nfunction is given below.
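The sketch shows one possible Theano and Lasagne implementation of the Jaccard-distance loss of Eq.~(\\ref{eq:ja-loss}) together with a compiled Adam training step using the stated initial learning rate of $0.003$; the variable names and the small smoothing constant eps are additions for illustration and are not part of the original formulation.

\begin{verbatim}
# Possible Theano and Lasagne implementation of the Jaccard-distance loss
# and of a compiled Adam training step (learning rate 0.003). The eps term
# is an added smoothing constant for numerical stability.
import theano
import theano.tensor as T
import lasagne

def jaccard_distance(p, t, eps=1e-7):
    # p: predicted probability map, t: binary ground-truth mask
    inter = T.sum(p * t)
    union = T.sum(t * t) + T.sum(p * p) - inter
    return 1.0 - inter / (union + eps)

def compile_train_fn(network, input_var, target_var):
    prediction = lasagne.layers.get_output(network, inputs=input_var)
    loss = jaccard_distance(prediction, target_var)
    params = lasagne.layers.get_all_params(network, trainable=True)
    updates = lasagne.updates.adam(loss, params, learning_rate=0.003)
    return theano.function([input_var, target_var], loss, updates=updates)
\end{verbatim}

A function compiled in this way would be called once per augmented mini-batch, with input_var holding the stacked slice channels and target_var the corresponding binary masks.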
When applying the trained\nmodels to the challenge testing datasets, a bagging-type ensemble\nstrategy was implemented to combine the outputs of six models to further\nimprove the segmentation performance \\cite{yuan2017ieee}.\n\nAn epoch of training the CDNN-I model for liver localization took about\n$70$ seconds, but the average time per epoch increased to 610 seconds and\n500 seconds when training the CDNN-II models for liver segmentation and\ntumor segmentation, respectively. This increase was primarily due\nto the larger slice size and the more complicated CDNN models. Applying the\nentire segmentation framework to a new test case was, however, very\nefficient, taking about $33$ seconds on average ($8$, $8$ and $17$\ns for liver localization, liver segmentation and tumor segmentation,\nrespectively).\n\n\\section{Results and Discussion}\n\nWe applied the trained models to the $70$ LiTS challenge test cases (team: deepX).\nBased on the results from the challenge organizers, our method achieved\nan average Dice similarity coefficient (DSC) of $0.963$ for liver\nsegmentation, a DSC of $0.657$ for tumor segmentation, and a root\nmean square error (RMSE) of $0.017$ for tumor burden estimation,\nwhich ranked our method in the first, fifth and third place, respectively.\nThe complete evaluation results are shown in Tables \\ref{tab:Liver-segmentation-results}--\\ref{tab:Tumor-burden-results}.\n\nTo summarize, we developed a fully automatic framework for\nliver and liver tumor segmentation on contrast-enhanced abdominal CT\nscans based on three steps: liver localization by a simple CDNN model\n(CDNN-I), fine liver segmentation by a deeper CDNN model with doubled\nfeature channels in each layer (CDNN-II), and tumor segmentation by\na CDNN-II model with the enhanced liver region as an additional input feature.\nOur CDNN models are fully trained in an end-to-end fashion with minimal\npre- and post-processing effort.\n\nWhile sharing some similarities with previous work such as U-Net \\cite{ronneberger2015u}\nand Cascaded-FCN \\cite{christ2016automatic}, our CDNN model differs\nfrom them in the following aspects: 1) the loss function used in the CDNN\nmodel is based on the Jaccard distance, which is directly related to the image\nsegmentation task and eliminates the need for sample re-weighting;\n2) instead of recovering image details by long skip connections as\nin U-Net, the CDNN model constructs a deconvolutional path in which deconvolution\nis employed to densify the coarse activation map obtained from up-sampling.\nIn this way, feature map concatenation and cropping are not needed.\n\nDue to the limited hardware resources, training a complex CDNN model\nis very time-consuming and we had to restrict the total number of\nepochs to $200$ in order to meet the LiTS challenge submission deadline.\nWhile upgrading the hardware is clearly one way to speed up\nmodel training, we plan to improve our network architectures and\nlearning strategies in future work so that the models can be\ntrained more effectively and efficiently. 
Other post-processing\nmethods, such as level sets \\cite{cha2016urinary} and conditional\nrandom field (CRF) \\cite{chen2014semantic}, can also be potentially\nintegrated into our model to further improve the segmentation performance.\n\n\\begin{table}[htbp]\n\\caption{Liver segmentation results (deepX) on LiTS testing cases\\label{tab:Liver-segmentation-results}}\n\n\\centering{\n\\begin{tabular}{|c|c|c|c|c|c|c|}\n\\hline \nDice \/ case & Dice global & VOE & RVD & ASSD & MSSD & RMSD\\tabularnewline\n\\hline \n\\hline \n$0.9630$ & $0.9670$ & $0.071$ & $-0.010$ & $1.104$ & $23.847$ & $2.303$\\tabularnewline\n\\hline \n\\end{tabular}\n\\end{table}\n\n\\begin{table}[htbp]\n\\caption{Tumor segmentation results (deepX) on LiTS testing cases\\label{tab:Tumor-segmentation-results}}\n\n\\centering{\n\\begin{tabular}{|c|c|c|c|c|c|c|}\n\\hline \nDice \/ case & Dice global & VOE & RVD & ASSD & MSD & RMSD\\tabularnewline\n\\hline \n\\hline \n$0.6570$ & $0.8200$ & $0.378$ & $0.288$ & $1.151$ & $6.269$ & $1.678$\\tabularnewline\n\\hline \n\\end{tabular}\n\\end{table}\n\n\\begin{table}[htbp]\n\\caption{Tumor burden results (deepX) on LiTS testing cases\\label{tab:Tumor-burden-results}}\n\n\\centering{\n\\begin{tabular}{|c|c|}\n\\hline \nRMSE & Max Error\\tabularnewline\n\\hline \n\\hline \n$0.0170$ & $0.0490$\\tabularnewline\n\\hline \n\\end{tabular}\n\\end{table}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nAs a classic estimation algorithm, the standard Kalman filter is extensively used. However, it may perform poorly in the case that the actual model does not coincide with the nominal one. For such a reason, manifold robust versions of the Kalman filter have been considered, see for instance \\cite{HASSIBI_SAYED_KAILATH_BOOK,speyer_2008,GHAOUI_CALAFIORE_2001,kim2020robust,Li2020}.\n\nIn particular, risk sensitive Kalman filters \\cite{RISK_WHITTLE_1980,RISK_PROP_BANAVAR_SPEIER_1998,LEVY_ZORZI_RISK_CONTRACTION,huang2018distributed,OPTIMAL_SPEYER_FAN_BANAVAR_1992} have been proposed in order to address model uncertainty. The latter consider an exponential quadratic loss function rather than the standard quadratic loss function. This means that large errors are severely penalized according to the so called risk sensitivity parameter. Hereafter, these robust Kalman filters have been proved to be equivalent to solve a minimax problem, \\cite{boel2002robustness,HANSEN_SARGENT_2005,YOON_2004,HANSEN_SARGENT_2007}. More precisely, there are two players. One player, say nature, selects the least favorable model in a prescribed ambiguity set which is a ball about the nominal model and whose radius reflects the amount of uncertainty in respect to the nominal model. The other player designs the optimum filter according to the least favorable model.\n\nRecently, instead of concentrating the entire model uncertainty in a single relative entropy constraint, a new paradigm of risk sensitive filters has been proposed in \\cite{ROBUST_STATE_SPACE_LEVY_NIKOUKHAH_2013,zorzi2018robust,abadeh2018wasserstein,zorzi2019distributed,RKDISTR_opt,zenere2018coupling}. The latter characterizes the uncertainty using separate constraints to each time increment of the model. In other words, the ambiguity set is specified at each time step by forming a ball, in the Kullback-Leibler (KL) topology, about the nominal model, \\cite{ROBUST_STATE_SPACE_LEVY_NIKOUKHAH_2013,robustleastsquaresestimation}. 
It is worth noting this ball can be defined by using also the Tau-divergence family, \\cite{STATETAU_2017,OPTIMALITY_ZORZI}. These filters, however, are well defined only in the case that the nominal and the actual transition probability density functions are non-degenerate. This guarantees that the corresponding distorted Riccati iteration evolves on the cone of the positive definite matrices.\n\nUnfortunately, in many practical applications, like weather forecasting and oceanography, the standard Kalman filter fails to work. More precisely, the Riccati iteration produces numerical covariance matrices which are indefinite because of their large dimension. This issue is typically addressed by resorting to a low-rank Kalman algorithm \\cite{dee1991simplification}.\n\n\nThe contribution of this paper is to extend the robust paradigm in \\cite{ROBUST_STATE_SPACE_LEVY_NIKOUKHAH_2013} to the case in which the transition probability density is possibly degenerate. Some preliminary results can be found in \\cite{yi2020low}. Within our framework, degenerate Gaussian probability densities could be also involved in the dynamic game. Accordingly, the resulting robust Kalman filter corresponds to a low-rank risk sensitive Riccati iteration. Although low-rank and distorted Riccati iterations have been widely studied in the literature, e.g. \\cite{bonnabel2013geometry,Ferrante20141176,ferrante2013generalised,Baggio-Ferrante-TAC-19,Baggio-Ferrante-TAC-16}, our iteration appears to be new. Then, we also derive the least favorable model over a finite simulation horizon. Last but not least, we study the convergence of the distorted Riccati iteration in the case that the nominal model has constant parameters by means of the contraction analysis, \\cite{BOUGEROL_1993}. It turns out that the convergence is guaranteed if the nominal model is stabilizable, the reachable subspace is observable and the ambiguity set is ``small''.\n\n\nThe outline of the paper is as follows. Section \\ref{sec_2} discusses the low-rank robust static estimation problem where the ambiguity set is characterized by a relative entropy constraint and possibly contains degenerate densities. The robust Kalman filtering problem is presented in Section \\ref{sec_3}. The latter is then reformulated as a static minimax game in Section \\ref{sec_4}. In Section \\ref{sec_5}, we derive the corresponding least favorable model. Section \\ref{sec_6} regards the convergence of the proposed low-rank robust Kalman filter in the case of constant parameters. In Section \\ref{sec_7} some numerical examples are provided. Finally, we draw the conclusions in Section \\ref{sec_8}.\n\n{\\em Notation:} The image, the kernel and the trace of matrix $K$ are denoted by $\\mathrm{Im}(K)$, $\\mathrm{ker}(K)$ and $\\mathrm{tr}(K)$, respectively. Given a symmetric matrix $K$: $K>0$ ($K\\geq 0$) means that $K$ is positive (semi)definite; $\\sigma_{max}(K)$ is the maximum eigenvalue of $K$; $K^+$ and $\\det^+(K)$ denote the pseudo inverse and the pseudo determinant of $K$, respectively. The Kronecker product between two matrices $K$ and $V$ is denoted by $K\\otimes V$. $x\\sim \\mathcal N(m,K)$ means that $x$ is a Gaussian random variable with mean $m$ and covariance matrix $K$. $\\mathcal Q^n$ is the vector space of symmetric matrices of dimension $n$; $\\mathcal Q_+^n$ denotes the cone of positive definite symmetric matrices of dimension $n$, while $ \\overline{\\mathcal Q_+^n}$ denotes its closure. 
The diagonal matrix whose elements in the main diagonal are $a_1,a_2,\\ldots , a_n$ is denoted by $\\mathrm{diag}(a_1,a_2, \\ldots a_n)$; $\\mathrm{Tp}(A_1, A_2,\\ldots ,A_n)$ denotes the block upper triangular Toeplitz matrix whose first block row is $[\\,A_1 \\; A_2\\; \\ldots \\; A_n \\, ]$.\n \\section{Low-rank robust static estimation}\\label{sec_2}\nConsider the problem to estimate a random vector $x\\in\\mathbb R^n$ from an observation vector $y\\in\\mathbb R^p$. We assume that the nominal probability density function of $z:=[\\, x^{\\top}\\; y^{\\top} \\,]^{\\top}$ is $f(z) \\sim \\mathcal{N}\\left(m_{z}, K_{z}\\right)$ where the mean vector $m_z\\in\\mathbb R^{n+p}$ and the covariance matrix $K_z\\in \\overline{\\mathcal Q^{n+p}_+}$ are such that\n$$\nm_{z}=\\left[\\begin{array}{c}{m_{x}} \\\\ {m_{y}}\\end{array}\\right], \\quad K_{z}=\\left[\\begin{array}{cc}{K_{x}} & {K_{x y}} \\\\ {K_{y x}} & {K_{y}}\\end{array}\\right].\n$$\nMoreover, we assume that $K_z$ is possibly a singular matrix and such that $\\mathrm{rank}(K_z)=r+p$ with $r\\leq n$ and $K_y>0$. Therefore, $f(z)$ is possibly a degenerate density whose support is the $r+p$-dimensional affine subspace\n$$\n\\mathcal{A}=\\left\\{m_{z}+v, \\quad v \\in \\mathrm {Im} \\left(K_{z}\\right)\\right\\}\n$$\nand\n\\begin{equation} \\label{pdf_f}\n\\begin{aligned} f(z) =&\\left[(2 \\pi)^{r+p} \\operatorname{det}^{+}\\left(K_{z}\\right)\\right]^{-1 \/ 2} \\\\\n& \\times \\exp \\left[-\\frac{1}{2}\\left(z-m_{z}\\right)^{\\top} K_{z}^{+}\\left(z-m_{z}\\right)\\right]. \\end{aligned}\n\\end{equation}\nLet $\\tilde f(z) \\sim \\mathcal{N}(\\tilde m_{z}, \\tilde K_{z})$ denote the actual probability density function of $z$ and we assume that $\\mathrm{rank}(\\tilde K_z)=r+p$. Accordingly,\n\\begin{equation}\\label{pdf_tildef}\n\\begin{aligned} \\tilde f(z) =&\\left[(2 \\pi)^{{r+p}} \\operatorname{det}^{+}\\left(\\tilde K_{z}\\right)\\right]^{-1 \/ 2} \\\\\n& \\times \\exp \\left[-\\frac{1}{2}\\left(z-\\tilde m_{z}\\right)^{ \\top} \\tilde K_{z}^{+}\\left(z-\\tilde m_{z}\\right)\\right] \\end{aligned}\n\\end{equation}\nwith support\n$\\mathcal{\\tilde A}= \\{\\tilde m_{z}+v, \\quad v \\in \\mathrm {Im}(\\tilde{ K}_{z}) \\}.$ We use the KL-divergence to measure the deviation between the nominal probability density function $f(z)$ and the actual one $\\tilde f(z)$. Since the KL-divergence is not able to measure the ``deterministic'' deviations, we have to assume that the two probability density functions have the same support, i.e. $\\mathcal{A}=\\mathcal{\\tilde A}$. In other words, we have to impose that:\n\\begin{equation} \\mathrm {Im}\\left(K_{z}\\right)=\\mathrm {Im} (\\tilde{ K}_{z}), \\; \\; \\Delta m_z \\in \\mathrm {Im}\\left(K_{z}\\right) \\label{KL}\\end{equation}\nwhere $\\Delta m_z=\\tilde m_z-m_z$. 
Hence, under assumption (\\ref{KL}), the KL-divergence between $\\tilde f(z)$ and $f(z)$ is defined as\n\\begin{equation} \\label{def_DL_ddeg}\nD (\\tilde{f}, f )=\\int_\\mathcal{A} \\ln \\left(\\frac{\\tilde{f}(z)}{f(z)}\\right) \\tilde{f}(z) d z.\n\\end{equation}\nThen, if we substitute (\\ref{pdf_f}) and (\\ref{pdf_tildef}) in (\\ref{def_DL_ddeg}), we obtain\n\\begin{equation*}\n\\begin{aligned} D(\\tilde f, {f})=& \\frac{1}{2}\\left[\\Delta{m}_{z}^{\\top}K_{z}^{+} \\Delta m_z+\\ln \\operatorname{det}^{+} (K_{z})\\right.\\\\ &\\left.-\\ln \\operatorname{det}^{+} (\\tilde{K}_{z})+\\operatorname{tr}\\left(K_{z}^{+} \\tilde{K}_{z}\\right)-(r+p)\\right] .\\end{aligned}\n\\end{equation*}\n\n\\begin{lemma}\n\\label{lemma1}\nLet $f(z) \\sim \\mathcal{N}\\left(m_{z}, K_{z}\\right)$ and $\\tilde f(z) \\sim \\mathcal{N}(\\tilde m_{z}, \\tilde K_{z})$ be Gaussian and possibly degenerate probability density functions with the same $r+p$-dimensional support $\\mathcal{A}$. Let \\begin{align*}\\mathcal U&=\\{\\tilde m_z \\in\\mathbb R^{n+p} \\hbox{ s.t. } \\tilde m_z-m_z\\in \\mathrm{Im}(K_z)\\}\\\\\n\\mathcal{V}&={\\{\\tilde K_{z} \\in \\mathcal{Q}^{n+p} ~\\text {s.t.}~ \\operatorname{Im}(K_{z} )=\\operatorname{Im} (\\tilde{K}_{z} )}\\}. \\end{align*} Then, $D(\\tilde f,f)$ is strictly convex with respect to $\\tilde m_z\\in\\mathcal U$ and $\\tilde K_z\\in\\mathcal V \\cap \\overline{\\mathcal Q_+^{n+p}}$. Moreover, $D(\\tilde f, f) \\geq 0$ and equality holds if and only if $f=\\tilde f$.\n\\end{lemma}\n\n\n\n\n\\IEEEproof\nLet $U_{\\mathfrak{r}}^\\top$ be a matrix whose columns form an orthonormal basis for $\\mathrm {Im}(K_z)$. Moreover, we define $ K_z^{\\mathfrak{r}}= U_{\\mathfrak{r}} K_zU_{\\mathfrak{r}}^\\top$ and $\\tilde K_z^{\\mathfrak{r}}= U_{\\mathfrak{r}} \\tilde K_zU_{\\mathfrak{r}}^\\top$. Since $f(z)$ and $\\tilde f(z)$ have the same support, then $\\tilde K_z$ and $\\tilde m_z$ are such that $\\mathrm {Im}(\\tilde K_z)=\\mathrm {Im}( K_z)$ and $\\Delta m_z \\in \\mathrm {Im}(K_z)$. Thus, $D(\\tilde f,f)$ is strictly convex in $\\tilde K_z \\in\\mathcal V\\cap \\overline{\\mathcal Q_+^{n+p}}$ if and only if it is strictly convex in $\\tilde K_z^{\\mathfrak{r}} \\in \\mathcal Q_+^{r+p}$. Then, it is not difficult to see that\n\\begin{equation*}\n\\begin{aligned} D(\\tilde f, {f})=& \\frac{1}{2}\\left[\\Delta m^{\\top}_z U^{\\top}_{\\mathfrak{r}} (K^{\\mathfrak{r}}_{z})^{-1} U_{\\mathfrak{r}} \\Delta m_z +\\operatorname{tr}((K^{\\mathfrak{r}}_{z})^{-1} \\tilde{K}^{\\mathfrak{r}}_{z}) \\right.\\\\ &\\left.-\\ln \\operatorname{det} ((K^{\\mathfrak{r}}_{z})^{-1} \\tilde{K}^{\\mathfrak{r}}_{z})-(r+p)\\right]\\end{aligned}\n\\end{equation*}\nwhich is strictly convex in $K^{\\mathfrak{r}}_{z} \\in \\mathcal Q_+^{n+p}$, see \\cite{COVER_THOMAS}. Hence, we proved the strict convexity of $D(\\tilde f,f)$ in $\\tilde K_z \\in \\mathcal V\\cap \\overline {\\mathcal Q_+^{n+p}}$. Using similar reasonings, we can conclude that $D(\\tilde f, {f})$ is strictly convex in $\\tilde m_z\\in \\mathcal U$.\nFinally, the unique minimum of $D(\\tilde f,f)$ with respect to $\\tilde f$ is given by the stationary conditions $\\tilde m_z= m_z$ and $\\tilde K _{z}={K}_{z}$, i.e. $\\tilde f=f$. Since $D(f,f)=0$, we conclude that $D(\\tilde f,f)\\geq 0$ and equality holds if and only if $\\tilde f=f$.\n\\qed \\\\\n\nIn what follows, we assume that $f$ is known while $\\tilde f$ is not and both have the same support. Then, we assume that $\\tilde f$ belongs to the ambiguity set, i.e. 
a ``ball'':\n$$\n\\mathcal{B}=\\{\\tilde{f} \\sim \\mathcal N(\\tilde m_z,\\tilde K_z)\\hbox{ s.t. } D(\\tilde{f}, f) \\leq c\\}\n$$\nwhere $c>0$, hereafter called tolerance, represents the radius of this ball.\nIt is worth noting that $f$ is usually estimated from measured data. More precisely, we fix a parametric and Gaussian model class $\\mathcal M$, then we select $f\\in\\mathcal M$ according to the maximum likelihood principle. Thus, when the length of the data is sufficiently large, we have\n\\al{f \\approx \\underset{f\\in\\mathcal M}{\\mathrm{argmin}}\\, D(\\tilde f,f)\\nonumber}\nunder standard hypotheses. Therefore, the uncertainty on $f$ is naturally defined by $\\mathcal B$, i.e. the actual model $\\tilde f$ satisfies the constraint $\\tilde f\\in\\mathcal B$ with $c=D(\\tilde f,f)$. Finally, an estimate of $c$ is given by $\\hat c=D(\\check f,f )$ where $\\check f$ is estimated from measured data using a model class $\\check{\\mathcal M}$ sufficiently ``large'', i.e. it contains many candidate models having diverse features.\n\n\nIn view of Lemma \\ref{lemma1}, $\\mathcal B$ is a convex set. We consider the robust estimator $\\hat x=g^0(y)$ solving the following minimax problem\n\\begin{equation} \\label{robust_p}\n(\\tilde f^0, g^0)=\\operatorname{arg}\\min _{g \\in \\mathcal{G}} \\max _{\\tilde{f} \\in \\mathcal{B}} J(\\tilde{f}, g)\n\\end{equation}\nwhere$$\n\\begin{aligned} J(\\tilde{f}, g) &=\\frac{1}{2} E_{\\tilde{f}}\\left[\\| x-g(y )\\|^{2}\\right] \\\\ &=\\frac{1}{2} \\int_\\mathcal{A}\\| x-g(y) \\|^{2} \\tilde{f}(z) d z. \\end{aligned}\n$$$\\mathcal{G}$ is the set of estimators for which $E_{\\tilde{f}}\\left[\\|x-g(y)\\|^{2}\\right]$ is bounded with respect to all the Gaussian densities in $\\mathcal B$.\n\n\\begin{theorem} \\label{theo1} Let $f$ be a Gaussian (possibly degenerate) density defined as in (\\ref{pdf_f}) with $K_y>0$. Then, the least favorable Gaussian density $\\tilde f^0$ is with mean vector and covariance matrix\n\\al{\\label{def_m_K_tilde}\n\\tilde{m}_{z}^0=m_{z}=\\left[\\begin{array}{c}{m_{x}} \\\\ {m_{y}}\\end{array}\\right], \\quad \\tilde{K}_{z}^0=\\left[\\begin{array}{cc}{\\tilde{K}_{x}} & {K_{x y}} \\\\ {K_{y x}} & {K_{y}}\\end{array}\\right]}\nso that, only the covariance of $x$ is perturbed.\nThen, the Bayesian estimator\n\\begin{equation*}\ng^{0}(y)=G_{0}\\left(y-m_{y}\\right)+m_{x},\n\\end{equation*}\nwith $G_0=K_{x y} K_{y}^{-1}$, solves the robust estimation problem.\nThe nominal posterior covariance matrix of $x$ given $y$ is given by \\begin{align} \\label{def_P_stat}P&:=K_{x}-K_{x y} K_{y}^{-1} K_{y x}.\\end{align} while the least favorable one is:\n\\begin{equation*} \\tilde P=(P^+-\\lambda ^{-1}H^{ \\top} H)^+\\end{equation*}\nwhere $H^\\top\\in\\mathbb R^{n\\times r}$ is a matrix whose columns form an orthonormal basis for $ \\mathrm {Im} (P)$.\nMoreover, there exists a unique Lagrange multiplier $\\lambda >\\sigma_{max}(P)>0$ such that $c=D(\\tilde f^0, f)$.\n\n\\end{theorem}\n\n\\IEEEproof\nThe saddle point of $J$ must satisfy the conditions\n\\begin{equation} \\label{ineq_saddle}\nJ (\\tilde{f}, g^{0} ) \\leq J (\\tilde{f}^{0}, g^{0} ) \\leq J(\\tilde{f}^{0}, g )\n\\end{equation}\nfor all $\\tilde f \\in \\mathcal{B}$ and $g \\in \\mathcal{G}$.\nThe second inequality in (\\ref{ineq_saddle}) is based on the fact that the Bayesian estimator $g^0$ minimizes $J(\\tilde f^0,\\cdot)$. 
Therefore, it remains to prove the first inequality in (\\ref{ineq_saddle}).\n\nNotice that the minimizer of $J(\\tilde f, \\cdot)$ is $$g^{\\star}(y)=\\tilde K_{xy}\\tilde K_y^{-1}(y-\\tilde m_y)+\\tilde m_x$$ where\n$$\n\\tilde m_{z}:=\\left[\\begin{array}{c}{\\tilde m_{x}} \\\\ {\\tilde m_{y}}\\end{array}\\right], \\quad \\tilde K_{z}:=\\left[\\begin{array}{cc}{\\tilde K_{x}} & {\\tilde K_{x y}} \\\\ {\\tilde K_{y x}} & {\\tilde K_{y}}\\end{array}\\right].\n$$ Moreover, $J(\\tilde f,g^\\star)=1\/2\\operatorname{tr}(\\tilde P)$ where $\\tilde P:=\\tilde K_{x}-\\tilde K_{x y} \\tilde K_{y}^{-1} \\tilde K_{y x}$. Since $\\mathrm{Im}(\\tilde K_z)=\\mathrm{Im}(K_z)$, then $\\mathrm{Im}(\\tilde P)=\\mathrm{Im}(P)=\\mathrm{Im}(H^\\top)$. Let $\\check H^\\top\\in\\mathbb R^{n\\times (n-r)}$ be a matrix whose columns form an orthonormal basis for the orthogonal complement of $\\mathrm{Im}(P)$ in $\\mathbb R^n$. Then, $\\check H^\\top \\check H+H^\\top H=I_n$. Therefore,\n\\al{ J(\\tilde f,g^\\star )&= \\frac{1}{2}\\operatorname{tr}(\\tilde P(\\check H^\\top \\check H+H^\\top H))= \\frac{1}{2} \\operatorname{tr}(\\tilde P H^\\top H)\\nonumber\\\\\n&=\\frac{1}{2} \\mathbb{E}_{\\tilde f}[\\|H(x-g(y))\\|^2].\\nonumber}\nThis means that the maximization problem can be formulated by reducing the dimension of the random vector $z$, i.e. we take\n\\al{z_{\\mathfrak{r}}:= \\left[\\begin{array}{c}{ x_{\\mathfrak{r} }} \\\\ y \\end{array}\\right]=U_{\\mathfrak{r}}z, \\quad U_{\\mathfrak{r}}:=\n\\left[\\begin{array}{cc}\nH & 0 \\\\\n0 & I\n\\end{array}\\right],\\nonumber}\nand $z=U_{\\mathfrak{r}}^\\top z_{\\mathfrak{r}}$. The nominal and actual reduced densities are, respectively,\n$f_{\\mathfrak{r}}\\sim \\mathcal N( m_z^{\\mathfrak{r}}, K_z^{\\mathfrak{r}})$ and\n$\\tilde{f}_{\\mathfrak{r}} \\sim \\mathcal N(\\tilde m_z^{\\mathfrak{r}},\\tilde K_z^{\\mathfrak{r}})$ where $m_z^{\\mathfrak{r}}= U_{\\mathfrak{r}} m_z$, $\\tilde m_z^{\\mathfrak{r}}= U_{\\mathfrak{r}}\\tilde m_z$,\n$K_z^{\\mathfrak{r}}= U_{\\mathfrak{r}} K_zU_{\\mathfrak{r}}^\\top$,\n$\\tilde K_z^{\\mathfrak{r}}= U_{\\mathfrak{r}} \\tilde K_zU_{\\mathfrak{r}}^\\top$.\nAccordingly, the maximization of $J(\\cdot,g^0)$ is equivalent to the maximization $J(\\tilde f_{\\mathfrak{r}},g_{\\mathfrak{r}}^0)=\\mathbb{E}_{\\tilde f_{\\mathfrak{r}}}[\\| x_{\\mathfrak{r}} -g^0_{\\mathfrak{r}}(y)\\|^2]$ where $\\tilde f_{\\mathfrak{r}}\\in \\mathcal B_{\\mathfrak{r}}:=\\{\\tilde{f}_{\\mathfrak{r}} \\sim \\mathcal N(\\tilde m_z^{\\mathfrak{r}},\\tilde K_z^{\\mathfrak{r}})\\hbox{ s.t. 
} D(\\tilde{f}_{\\mathfrak{r}}, f_{\\mathfrak{r}}) \\leq c\\}$, $g^0_{\\mathfrak{r}}:=Hg^0$ and\n\\begin{equation*}\n\\begin{aligned} D(\\tilde f_{\\mathfrak{r}}, {f_{\\mathfrak{r}}})=& \\frac{1}{2}\\left[(\\Delta m^{\\mathfrak{r}}_z)^{\\top} (K^{\\mathfrak{r}}_{z})^{-1} \\Delta m^{\\mathfrak{r}}_z +\\operatorname{tr}((K^{\\mathfrak{r}}_{z})^{-1} \\tilde{K}^{\\mathfrak{r}}_{z}) \\right.\\\\ &\\left.-\\ln \\operatorname{det} ((K^{\\mathfrak{r}}_{z})^{-1} \\tilde{K}^{\\mathfrak{r}}_{z})-(r+p)\\right]\\end{aligned}\n\\end{equation*}\nwhere $\\Delta m^{\\mathfrak{r}}_{z}=\\tilde m^{\\mathfrak{r}}_z-m^{\\mathfrak{r}}_z$.\nSince $K_z^{\\mathfrak{r}}>0$ and $\\tilde K_z^{\\mathfrak{r}}>0$, by Theorem 1 in \\cite{robustleastsquaresestimation, ROBUST_STATE_SPACE_LEVY_NIKOUKHAH_2013} (see also \\cite{yi2020low}), we have that the maximizer $\\tilde f^0_{\\mathfrak{r}}$ has mean $m_z^{\\mathfrak{r}}$ and covariance matrix\n\\begin{equation} \\label{tildeK_z}\n\\tilde{K}^{\\mathfrak{r}}_{z}=\\left[\\begin{array}{cc}\\tilde K_x^{\\mathfrak{r}}& K_{xy}^{\\mathfrak{r}} \\\\ K_{yx}^{\\mathfrak{r}} & K_y\\end{array}\\right] \\nonumber\n\\end{equation} where $\\tilde K_x^{\\mathfrak{r}}= \\tilde P^{\\mathfrak{r}}+\\tilde K_{xy}^{\\mathfrak{r}} K_y^{-1} \\tilde K_{yx}^{\\mathfrak{r}}$ with\n\\begin{equation} \\label{tilde_P}\n\\tilde P^{\\mathfrak{r}}=\\left( (P^{\\mathfrak{r}})^{-1}-\\lambda^{-1} I_r\\right)^{-1},\\nonumber\n\\end{equation} $P^{\\mathfrak{r}}:= HPH^\\top >0$ and $\\lambda >\\sigma_{max}( P^{\\mathfrak{r}})>0$ is the unique solution to\n\\al{\\label{eq_lambda}\\frac{1}{2}\\{ \\ln \\det P^{\\mathfrak{r}} -\\ln \\det \\tilde P^{\\mathfrak{r}} +\\operatorname{tr}((P^{\\mathfrak{r}})^{-1}\\tilde P^{\\mathfrak{r}}-I_r)\\}=c. } We conclude that the least favorable density $\\tilde f^0(z)$ has mean and covariance matrix as in (\\ref{def_m_K_tilde}). Moreover,\n\\al{\\tilde K_x=H^\\top \\tilde K_x^{\\mathfrak{r}}H = \\underbrace{H^\\top \\tilde P^{\\mathfrak{r}}H }_{=:\\tilde P} +\\underbrace{ H^\\top K_{xy}^{\\mathfrak{r}}}_{=K_{xy}}K_y^{-1} K_{yx}^{\\mathfrak{r}} H \\nonumber}\nand \\al{\\tilde P=H^\\top((P^{\\mathfrak{r}})^{-1}-\\lambda^{-1}I_r)^{-1}H=(P^+-\\lambda^{-1}H^\\top H)^+.\\nonumber}\nFinally, Equation (\\ref{eq_lambda}) can be written in terms of $P$ and $\\tilde P$:\n\\al{ \\frac{1}{2}\\{ \\ln {\\det}^{+} P -\\ln {\\det}^{+} \\tilde P +\\operatorname{tr}(P ^{+}\\tilde P -I_r)\\}=c\\nonumber}\nwhere $\\lambda >\\sigma_{max}(H P H^\\top )=\\sigma_{max}(P ).$\n\n\\qed \\\\\n\n\\begin{remark} If $P>0$ then $f$ is a non-degenerate density. In such a case, Theorem \\ref{theo1} still holds and: the pseudo inverse is replaced by the inverse; moreover, $H^{ \\top} H$ becomes the identity matrix. Therefore, we recover the robust static estimation problem proposed in \\cite{robustleastsquaresestimation}.\n\\end{remark}\n\n\\begin{remark}\n In Problem (\\ref{robust_p}) we could add the constraint $\\operatorname{rank}(G)\\leq q$, with $q