\\section{Introduction} \\label{sec:intro}\nSeveral complex phenomena, such as those involving surface tension, can be interpreted in terms of perimeters. In general, perimeters provide a good local description of these intrinsically nonlocal phenomena. The study of fractional minimal surfaces, which can be interpreted as a non-infinitesimal version of classical minimal surfaces, began with the seminal works by Imbert \\cite{Imbert09} and Caffarelli, Roquejoffre and Savin \\cite{CaRoSa10}. \n\nAs a motivation for the notion of fractional minimal sets let us show how it arises in the study of a nonlocal version of the Ginzburg-Landau energy, extending a well-known result for classical minimal sets \\cite{Modica87, ModicaMortola77}. Let $\\Omega \\subset \\mathbb{R}^d$ be a bounded set with Lipschitz boundary, $\\varepsilon > 0$ and define the energy\n\\[\n\\mathcal{J}_{\\varepsilon}[u;\\Omega] = \\frac{\\varepsilon^2}{2} \\int_\\Omega |\\nabla u(x)|^2 \\; dx + \\int_{\\Omega} W(u(x)) \\; dx,\n\\]\nwhere $W(t) = \\frac14(1-t^2)^2$ is a double-well potential. 
Then, for every sequence $\\{ u_\\varepsilon \\}$ of minimizers of the rescaled functional $\\mathcal{F}_{\\varepsilon}[u;\\Omega]=\\varepsilon^{-1}\\mathcal{J}_{\\varepsilon}[u;\\Omega]$ with uniformly bounded energies, there exists a subsequence $\\{ u_{\\varepsilon_k} \\}$ such that\n\\[ u_{\\varepsilon_k} \\to \\chi_E - \\chi_{E^c} \\quad \\mbox{in } L^1(\\Omega),\n\\]\nwhere $E$ is a set with minimal perimeter in $\\Omega$.\nAnalogously, given $s \\in (0,1)$, consider the energy\n\\[\n\\mathcal{J}^s_{\\varepsilon}[u;\\Omega] = \\frac{\\varepsilon^{2s}}{2}\\iint_{Q_{\\Omega}} \\frac{|u(x) - u(y)|^2}{|x-y|^{d+2s}} \\; dx dy + \\int_{\\Omega} W(u(x)) \\; dx,\n\\]\nwhere ${Q_{\\Omega} = \\left( \\mathbb{R}^d \\times \\mathbb{R}^d \\right) \\setminus \\left( {\\Omega}^c \\times {\\Omega}^c \\right)}$,\nand rescale it as\n\\[\n\\mathcal{F}^s_\\varepsilon[u;\\Omega] =\n\\left\\lbrace \\begin{array}{ll}\n\\varepsilon^{-2s} \\mathcal{J}^s_{\\varepsilon}[u;\\Omega] & \\mbox{if } s \\in (0,1\/2); \\\\\n\\varepsilon^{-1} | \\log \\varepsilon|^{-1} \\mathcal{J}^s_{\\varepsilon}[u;\\Omega] & \\mbox{if } s = 1\/2; \\\\\n\\varepsilon^{-1} \\mathcal{J}^s_{\\varepsilon}[u;\\Omega] & \\mbox{if } s \\in (1\/2,1) .\n\\end{array}\n\\right.\n\\]\nNote that the first term in the definition of $\\mathcal{J}^s_\\varepsilon$ involves the $H^s(\\mathbb{R}^d)$-norm of $u$, except that the interactions over $\\Omega^c \\times \\Omega^c$ are removed; in a minimization problem posed in $\\Omega$, those interactions are fixed. 
As proved in \\cite{SaVa12Gamma}, for every sequence $\\{ u_\\varepsilon \\}$ of minimizers of $\\mathcal{F}^s_{\\varepsilon}$ with uniformly bounded energies, there exists a subsequence $\\{ u_{\\varepsilon_k} \\}$ such that\n\\[ u_{\\varepsilon_k} \\to \\chi_E - \\chi_{E^c} \\quad \\mbox{in } L^1(\\Omega) \\quad \\mbox{as } \\varepsilon_k \\to 0^+.\n\\]\nIf $s \\in [1\/2,1)$, then $E$ has minimal classical perimeter in $\\Omega$, whereas if $s \\in (0,1\/2)$, then $E$ minimizes the nonlocal $s$-perimeter functional given by \\Cref{def:s-perimeter}.\n\nOther applications of nonlocal perimeter functionals include motions of fronts by nonlocal mean curvature \\cite{CaSo10,ChaMorPon12,ChaMorPon15} and nonlocal free boundary problems \\cite{CaSaVa15,DipiSavinVald15,DipiVald18}. We also refer the reader to \n\\cite[Chapter 6]{BucurValdinoci16} and \\cite{CoFi17} for nice introductory expositions to the topic.\n\nThe goal of this work is to design and analyze finite element schemes in order to compute fractional minimal sets over cylinders $\\Omega\\times\\mathbb{R}$ in $\\mathbb{R}^{d+1}$, provided the external datum is a subgraph. In such a case, minimal sets turn out to be subgraphs in the interior of the domain $\\Omega$ as well, and the minimization problem for minimal sets can be equivalently stated as a minimization problem for a functional acting on functions $u \\colon \\mathbb{R}^d \\to \\mathbb{R}$, given by\n\\begin{equation}\\label{Is}\nI_s[u] = \\iint_{Q_{\\Omega}} F_s\\left(\\frac{u(x)-u(y)}{|x-y|}\\right) \\frac{1}{|x-y|^{d+2s-1}} \\;dxdy,\n\\end{equation}\nwhere $Q_\\Omega = (\\mathbb{R}^d\\times\\mathbb{R}^d) \\setminus(\\Omega^c\\times\\Omega^c)$, $\\Omega^c=\\mathbb{R}^d\\setminus\\Omega$ is the complement of $\\Omega$ in $\\mathbb{R}^d$ and $F_s:\\mathbb{R}\\to\\mathbb{R}$ is a suitable convex nonnegative function. 
This is the $s$-fractional version of the classical graph area functional\n\\[\nI [u] = \\int_\\Omega \\sqrt{1 + |\\nabla u (x)|^2 } \\, dx \n\\]\namong suitable functions $u:\\Omega\\to\\mathbb{R}$ satisfying the Dirichlet condition $u=g$ on $\\partial\\Omega$. A crucial difference between the two problems is that the Dirichlet condition for $I_s[u]$ must be imposed in $\\Omega^c$, namely\n\\[\nu = g \\quad\\text{in }\\Omega^c.\n\\]\nWe propose a discrete counterpart of $I_s[u]$ based on piecewise linear Lagrange\nfinite elements on shape-regular meshes, and prove a few properties of the discrete\nsolution $u_h$, including convergence in $W^{2r}_1(\\Omega)$ for any $0 0.\n\\]\nThus, if we set the constant $C_3 = \\int_0^\\infty \\frac{1}{\\left( 1+r^2\\right)^{(d+1+2s)\/2}} dr$, we deduce that\n\\[\nF_s(\\rho) \\le C_3 \\rho \\quad \\forall \\rho \\ge 0.\n\\]\nThis implies the second inequality in \\eqref{eq:norm_bound}.\n\t\nOn the other hand, the first estimate in \\eqref{eq:norm_bound}, with constant $C_1 = \\iint_{\\Omega\\times\\Omega} \\frac{dxdy}{|x-y|^{d+2s-1}} < \\infty$ because $\\Omega \\subset \\mathbb{R}^d$ is bounded, is a consequence of the bound \n\\begin{equation} \\label{eq:lower-bound-Fs}\n\\rho \\le 1 + C_2 F_s(\\rho) \\quad \\forall \\rho \\ge 0.\n\\end{equation}\nIt is obvious that such a bound holds for $0 \\le \\rho \\le 1$, whereas if $\\rho > 1$, we have $F'_s (\\rho) \\ge F'_s (1)$ and therefore, $F_s(\\rho) > F'_s(1) (\\rho - 1)$. 
The desired inequality follows with constant $C_2 = 1\/ F'_s(1)$.\n\\end{proof}\n\nTaking into account the lemma we have just proved, we introduce the natural spaces in which to look for nonlocal minimal graphs.\n\n\\begin{Definition}[space $\\mathbb{V}^g$] \\label{Def:space-Vg} \nGiven $g \\colon \\Omega^c \\to \\mathbb{R}$, we consider\n\\begin{equation*}\n\\mathbb{V}^g = \\{ v \\colon \\mathbb{R}^d \\to \\mathbb{R} \\; \\colon \\; v\\big|_\\Omega \\in W^{2s}_1(\\Omega), \\ v = g \\text{ in } {\\Omega}^c, \\ |v|_{\\mathbb{V}^g} < \\infty \\}, \n\\end{equation*}\nequipped with the norm\n\\[\n\\| v \\|_{\\mathbb{V}^g } = \\| v \\|_{L^1(\\Omega)} + | v |_{\\mathbb{V}^g },\n\\]\nwhere\n\\[\n| v |_{\\mathbb{V}^g} := \\iint_{Q_{\\Omega}} \\frac{|v(x)-v(y)|}{|x-y|^{d+2s}} dxdy. \n\\]\n\\end{Definition}\n\nIn the specific case where $g$ is the zero function, we denote the resulting space $\\mathbb{V}^g$ by $\\mathbb{V}^0$. The set $\\mathbb{V}^g$ can also be understood as that of functions in $W^{2s}_1 (\\Omega)$ with `boundary value' $g$. Indeed, we point out that in Definition \\ref{Def:space-Vg} we do not require $g$ to be a function in $W^{2s}_1(\\Omega^c)$ (in particular, $g$ need not decay at infinity). The seminorm $|\\cdot |_{\\mathbb{V}^g}$ does not take into account interactions over $\\Omega^c \\times \\Omega^c$, because these are fixed for the applications we consider.\n\n\nAs stated in the next proposition, given a Dirichlet datum $g$, the space $\\mathbb{V}^g$ is the natural domain of the energy $I_s$.\n\n\\begin{Proposition}[energy domain] \\label{Prop:domain-energy} Let $s \\in (0,1\/2)$ and $\\Omega, g$ be given according to \\eqref{E:assumptions}. Let $v \\colon \\mathbb{R}^d \\to \\mathbb{R}$ be such that $v = g$ in $\\Omega^c$. Then, $v \\in \\mathbb{V}^g $ if and only if $I_s[v] < \\infty$.\n\\end{Proposition}\n\\begin{proof}\nThe claim follows easily from Lemma \\ref{lemma:W2s}. Let $v$ be a function that coincides with $g$ in $\\Omega^c$. 
Then, if $v \\in \\mathbb{V}^g$, the second estimate in \\eqref{eq:norm_bound} gives $I_s[v] < \\infty$, because $|v|_{\\mathbb{V}^g} < \\infty$. \n\nConversely, if $I_s[v] < \\infty$, the first inequality in \\eqref{eq:norm_bound} implies that $| v \\big|_\\Omega|_{W^{2s}_1(\\Omega)} < \\infty$. Fix $R > 0$ such that $\\Omega \\subset B_{R\/2}$; because of \\eqref{eq:lower-bound-Fs} and the Lipschitz continuity of $F_s$, integrating over $\\Omega \\times (B_R \\setminus \\Omega)$, we obtain\n\\begin{equation} \\label{eq:L1-bound}\n\\begin{aligned}\n\\frac{\\| v \\|_{L^1(\\Omega)}}{R^{2s}} & \\lesssim \\iint_{\\Omega \\times (B_R \\setminus \\Omega)} \\left(1 + F_s \\left( \\frac{|v(x)|}{|x-y|} \\right)\\right) \\frac{dx dy}{|x-y|^{d+2s-1}} \\\\\n& \\lesssim I_s[v] + \\iint_{\\Omega \\times (B_R \\setminus \\Omega)} \\frac{1+|g(y)|}{|x-y|^{d+2s}} dx dy. \n\\end{aligned}\n\\end{equation}\nThe last integral on the right-hand side above is finite because $\\| g \\|_{L^\\infty} < \\infty$. Therefore, $v \\in L^1(\\Omega)$. 
To deduce that $|v|_{\\mathbb{V}^g} < \\infty$, we split the integral, use the triangle inequality, integrate in polar coordinates and apply Hardy's inequality \\cite[Theorem 1.4.4.4]{Grisvard} to derive\n\\begin{equation*}\n\\begin{aligned}\n|v|_{\\mathbb{V}^g} & \\le | v \\big|_\\Omega|_{W^{2s}_1(\\Omega)} + 2 \\iint_{\\Omega\\times\\Omega^c} \\frac{|v(x)|}{|x-y|^{d+2s}} dx dy + 2 \\iint_{\\Omega\\times\\Omega^c} \\frac{|g(y)|}{|x-y|^{d+2s}} dx dy \\\\\n& \\lesssim | v \\big|_\\Omega|_{W^{2s}_1(\\Omega)} + \\int_{\\Omega} \\frac{|v(x)|}{\\textrm{dist}(x,\\partial\\Omega)^{2s}} dx + \\| g \\|_{L^\\infty(\\Omega^c)} \\lesssim \\| v \\big|_\\Omega\\|_{W^{2s}_1(\\Omega)} + \\| g \\|_{L^\\infty(\\Omega^c)}.\n\\end{aligned} \n\\end{equation*}\nThis proves that $v \\in \\mathbb{V}^g$ and concludes the proof.\n\\end{proof}\n\nTaking into account \\Cref{prop:perimeter-energy} and \\Cref{Prop:domain-energy}, we obtain a characterization of nonlocal minimal graphs (see also \\cite[Theorem 4.1.11]{Lombardini-thesis}).\n\n\\begin{Corollary}[relation between minimization problems] \\label{Cor:characterization-NMS}\nLet $s \\in (0,1\/2)$ and $\\Omega, g$ satisfy \\eqref{E:assumptions}. Given a function $u \\colon \\mathbb{R}^d \\to \\mathbb{R}$ that coincides with $g$ in $\\Omega^c$, consider the set $E$ given by \\eqref{E:Def-E}. Then, $E$ is locally $s$-minimal in $\\Omega' = \\Omega \\times \\mathbb{R}$ if and only if $u$ minimizes the energy $I_s$ in the space $\\mathbb{V}^g$.\n\\end{Corollary}\n\nThe functional $I_s$ is strictly convex, because the weight $F_s$ appearing in its definition (cf. \\eqref{E:def_Fs}) is strictly convex as well. 
Therefore, we straightforwardly deduce the next result.\n\\begin{Corollary}[uniqueness]\nUnder the same hypotheses as in Corollary \\ref{Cor:characterization-NMS}, there exists a unique locally $s$-minimal set.\n\\end{Corollary}\n\nWe conclude this section with a result about the regularity of the minimizers of $I_s$.\nAlthough minimal graphs are prone to discontinuities across the boundary, they are smooth in the interior of the domain. The following theorem is stated in \\cite[Theorem 1.1]{CabreCozzi2017gradient}, where an estimate for the gradient of the minimal function is derived. Once such an estimate is obtained, the claim follows by the arguments from \\cite{Barrios2014bootstrap} and \\cite{Figalli2017regularity}.\n\n\\begin{Theorem}[interior smoothness of nonlocal minimal graphs] \\label{thm:smoothness}\nAssume $E \\subset \\mathbb{R}^{d+1}$ is an $s$-minimal surface in $\\Omega' = \\Omega \\times \\mathbb{R}$, given by the subgraph of a measurable function $u$ that is bounded in an open set $\\Lambda \\supset \\Omega$.\nThen, $u \\in C^\\infty (\\Omega)$.\n\\end{Theorem}\n\n\n\\subsection{Weak formulation}\nIn order to define the proper variational setting to study the nonlocal minimal graph problem, we introduce the function $G_s \\colon \\mathbb{R} \\to \\mathbb{R}$,\n\\begin{equation} \\label{E:DEF-Gs}\nG_s(\\rho) = \\int_0^\\rho (1+r^2)^{-(d+1+2s)\/2} dr = F'_s(\\rho).\n\\end{equation}\nWe recall that $s \\in (0,1\/2)$. Clearly, $G_s$ is an odd and uniformly bounded function:\n\\begin{equation} \\label{E:bound-Gs}\n| G_s (\\rho) | \\le K := \\int_{0}^\\infty (1+r^2)^{-(d+1+2s)\/2} dr = \\frac{\\Gamma\\left( \\frac{d+2s}{2}\\right) \\sqrt{\\pi}}{2 \\Gamma \\left(\\frac{d+1+2s}{2}\\right)}.\n\\end{equation}\nThe constant $K$ has already appeared in the proof of \\Cref{lemma:W2s}, under the label $C_3(d,s)$. 
The last equality above follows from the substitution $t = (1+r^2)^{-1}$ and the basic relation between the beta and gamma functions, $B(x,y) =\\frac{\\Gamma(x)\\Gamma(y)}{\\Gamma(x+y)}$. \n\nGiven a function $u \\in \\mathbb{V}^g$, we take the bilinear form $a_u \\colon \\mathbb{V}^g \\times \\mathbb{V}^0 \\to \\mathbb{R}$,\n\\begin{equation} \\label{E:def-a}\na_u(w,v) := \\iint_{Q_{\\Omega}} \\widetilde{G}_s\\left(\\frac{u(x)-u(y)}{|x-y|}\\right) \\frac{(w(x)-w(y))(v(x)-v(y))}{|x-y|^{d+1+2s}}dx dy, \n\\end{equation}\nwhere $\\widetilde{G}_s(\\rho) = \\int_0^1 (1+ \\rho^2 r^2)^{-(d+1+2s)\/2} dr$, and hence $\\rho \\widetilde{G}_s(\\rho) = G_s(\\rho)$. \n\nTo obtain a weak formulation of our problem, we compute the first variation of \\eqref{E:NMS-Energy-Graph}, which yields\n\\[\n\\delta I_s [u] (v) = a_u(u,v) \\quad \\mbox{for all } v \\in \\mathbb{V}^0.\n\\]\nThus, we seek a function $u \\in \\mathbb{V}^g$ such that \n\\begin{equation}\\label{E:WeakForm-NMS-Graph}\na_u(u,v) = 0 \\quad \\mbox{ for all } v \\in \\mathbb{V}^0.\n\\end{equation}\n\n\nAnother approach to derive problem \\eqref{E:WeakForm-NMS-Graph}, at least formally, is to write it as the weak form of a suitable Euler-Lagrange equation. More precisely, assuming that the set $E$ is the subgraph of a function $u$, this can be written as the following nonlocal and nonlinear equation \\cite{Barrios2014bootstrap}\n\\begin{equation}\\label{E:EL-eq-Graph}\nH_s[E](x) = \\text{P.V.} \\int_{\\mathbb{R}^d} G_s\\left(\\frac{u(x)-u(y)}{|x-y|}\\right) \\frac{1}{|x-y|^{d+2s}} \\;dy = 0,\n\\end{equation}\nin a viscosity sense, for every $x \\in \\Omega$.\nWith some abuse of notation, we let $H_s[u]$ represent $H_s[E]$ when $E$ is the subgraph of $u$. 
Therefore, $u$ solves the Dirichlet problem\n\\begin{equation}\\label{E:PDE-NMS}\n\\left\\{\\begin{array}{rl}\nH_s[u](x) = 0 \\ \\quad & \\quad x \\in \\Omega, \\\\\nu(x) = g(x) & \\quad x \\in \\mathbb{R}^d \\setminus \\Omega.\n\\end{array}\\right.\n\\end{equation}\nIn this regard, the weak formulation of \\eqref{E:PDE-NMS} is obtained by multiplying it by a test function, integrating, and taking advantage of the fact that $G_s$ is an odd function; this yields \\eqref{E:WeakForm-NMS-Graph}. \n\nWe finally point out that \\eqref{E:WeakForm-NMS-Graph} can be interpreted as a fractional diffusion problem of order $s+1\/2$ with weights depending on the solution $u$ and fixed nonhomogeneous boundary data; this is in agreement with the local case \\eqref{eq:classical-MS}. As in the classical minimal graph problem, the nonlinearity degenerates wherever the Lipschitz modulus of continuity of $u$ blows up. We expect this degeneracy to occur as $\\textrm{dist}(x,\\partial\\Omega) \\to 0$, since this has been shown to be the generic behavior in one-dimensional problems \\cite{DipiSavinVald19}.\n\n\n\\section{Numerical Method} \\label{sec:numerical-method}\n\nThis section introduces the framework for the discrete nonlocal minimal graph problems under consideration. We set the notation regarding discrete spaces and analyze their approximation properties by resorting to a quasi-interpolation operator of Cl\\'ement type. We include a brief discussion on the solution of the resulting nonlinear discrete problems.\n\n\\subsection{Finite element discretization} \\label{ss:FE_discretization}\nAs discussed in \\Cref{R:assumptions}, in this work we assume that $g$ is a function with bounded support. Concretely, we assume that \n\\begin{equation} \\label{E:support}\n\\mbox{supp}(g) \\subset \\Lambda \\mbox{ for some bounded set } \\Lambda.\n\\end{equation}\nThe approximation of data with unbounded support is discussed in a forthcoming paper by the authors \\cite{BoLiNo19computational}. 
Without loss of generality, we may assume that $\\Lambda = B_R(0)$ is a ball of radius $R$ centered at the origin.\n\nWe consider a family $\\{\\mathcal{T}_h \\}_{h>0}$ of conforming and simplicial meshes of $\\Lambda$, that we additionally require to exactly mesh $\\Omega$.\nMoreover, we assume this family to be shape-regular, namely:\n\\[\n \\sigma = \\sup_{h>0} \\max_{T \\in \\mathcal{T}_h} \\frac{h_T}{\\rho_T} <\\infty,\n\\]\nwhere $h_T = \\mbox{diam}(T)$ and $\\rho_T $ is the diameter of the largest ball contained in $T$. As usual, the subscript $h$ denotes the mesh size, $h = \\max_{T \\in \\mathcal{T}_h} h_T$.\nThe set of vertices of $\\mathcal{T}_h$ will be denoted by $\\mathcal{N}_h$, and $\\varphi_i$ will denote the standard piecewise linear Lagrangian basis function associated to the node $\\texttt{x}_i \\in \\mathcal{N}_h$. In this work we assume that the elements are closed sets. Thus, the star or first ring of an element $T \\in \\mathcal{T}_h$ is given by\n\\[\n S^1_T = \\bigcup \\left\\{ T' \\in \\mathcal{T}_h \\colon T \\cap T' \\neq \\emptyset \\right\\}.\n\\]\nWe also introduce the star or second ring of $S^1_T$,\n\\[\n S^2_T = \\bigcup \\left\\{ T' \\in \\mathcal{T}_h \\colon S^1_T \\cap T' \\neq \\emptyset \\right\\},\n\\]\nand the star of the node $\\texttt{x}_i \\in \\mathcal{N}_h$, $S_i = \\mbox{supp}(\\varphi_i)$.\nWe split the mesh nodes into two disjoint sets, according to whether they lie in $\\Omega$ or in $\\Omega^c$:\n\\[\n\\mathcal{N}_h^\\circ = \\left\\{ \\texttt{x}_i \\colon \\texttt{x}_i \\in \\Omega \\right\\}, \\qquad \\mathcal{N}_h^c = \\left\\{ \\texttt{x}_i \\colon \\texttt{x}_i \\in \\Omega^c \\right\\}.\n\\]\nWe emphasize that, because $\\Omega$ is an open set, nodes on $\\partial\\Omega$ belong to $\\mathcal{N}_h^c$. \n\n\nThe discrete spaces we consider consist of continuous piecewise linear functions in $\\Lambda$. 
Indeed, we set\n\\[\n\\mathbb{V}_h = \\{ v \\in C(\\Lambda) \\colon v|_T \\in \\mathcal{P}_1 \\; \\forall T \\in \\mathcal{T}_h \\}.\n\\]\nFor this work, we make use of certain Cl\\'ement-type interpolation operators on $\\mathbb{V}_h$. To account for boundary data, given an integrable function $g \\colon \\Lambda \\setminus \\Omega \\to \\mathbb{R}$, we define \n\\[\n\\mathbb{V}_h^g = \\{ v \\in \\mathbb{V}_h \\colon \\ v|_{\\Lambda \\setminus \\Omega} = \\Pi_h^c g\\}.\n\\]\nHere, $\\Pi_h^c$ denotes the {\\em exterior} Cl\\'ement interpolation operator in $\\Omega^c$, defined as\n\\[\n\\Pi_h^c g := \\sum_{\\texttt{x}_i \\in \\mathcal{N}_h^c} (\\Pi_h^{\\texttt{x}_i} g) (\\texttt{x}_i) \\; \\varphi_i,\n\\]\nwhere $\\Pi_h^{\\texttt{x}_i} g$ is the $L^2$-projection of $g\\big|_{S_i \\cap \\Omega^c}$ onto $\\mathcal{P}_1(S_i \\cap \\Omega^c)$. Thus, $\\Pi_h^c g (\\texttt{x}_i)$ coincides with the standard Cl\\'ement interpolation of $g$ at $\\texttt{x}_i$ for all nodes $\\texttt{x}_i$ such that $S_i \\subset \\mathbb{R}^d \\setminus \\overline{\\Omega}$. On the other hand, for nodes on the boundary of $\\Omega$, $\\Pi_h^c$ only averages over the elements in $S_i$ that lie in $\\Omega^c$. Although $\\Pi_h^c g$ only takes into account values of $g$ in $\\Omega^c$, the support of $\\Pi_h^c g$ is not contained in $\\Omega^c$, because $\\varphi_i$ attains nonzero values in $\\Omega$ for $\\texttt{x}_i \\in \\partial\\Omega$.\n\nUsing the same convention as before, in case $g$ is the zero function, we write the corresponding space as $\\mathbb{V}_h^0$. 
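To fix ideas, the following one-dimensional sketch (our own illustration with a made-up mesh and hypothetical names; it is not the implementation of \cite{BoLiNo19computational}) computes the nodal value $(\Pi_h^{\texttt{x}_i} g)(\texttt{x}_i)$ at a boundary node by $L^2$-projection onto $\mathcal{P}_1$ over the exterior part of the star. Since the projection reproduces affine data, for an affine $g$ the nodal value agrees with $g(\texttt{x}_i)$.

```python
# Toy 1D setup (our own choice): Omega = (0, 1), boundary node x_i = 1, and
# S_i \cap Omega^c given by the single exterior element [1, 1.5] of a
# hypothetical mesh. Pi_h^{x_i} g is the L^2-projection of g onto P_1 there.

def simpson(f, a, b):
    # Simpson's rule: exact for polynomials of degree <= 3, hence exact for
    # every integral below when g is affine.
    return (b - a) / 6.0 * (f(a) + 4.0 * f((a + b) / 2.0) + f(b))

def l2_projection_p1(g, a, b):
    """Return the L^2(a,b)-projection of g onto span{1, x} as a callable."""
    m00 = b - a                          # entries of the Gram matrix of {1, x}
    m01 = simpson(lambda x: x, a, b)
    m11 = simpson(lambda x: x * x, a, b)
    r0 = simpson(g, a, b)                # right-hand side: moments of g
    r1 = simpson(lambda x: x * g(x), a, b)
    det = m00 * m11 - m01 * m01
    c0 = (r0 * m11 - r1 * m01) / det     # solve the 2x2 system by Cramer's rule
    c1 = (m00 * r1 - m01 * r0) / det
    return lambda x: c0 + c1 * x

g = lambda x: 2.0 * x + 1.0              # affine exterior datum near x_i = 1
proj = l2_projection_p1(g, 1.0, 1.5)     # projection over S_i \cap Omega^c
nodal_value = proj(1.0)                  # equals g(1) = 3: P_1 data are reproduced
```

The same averaging applied to a non-affine $g$ returns a genuine local mean rather than the pointwise value, which is the reason the operator is well defined for merely integrable data.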
Also, we define the {\\em interior} Cl\\'ement interpolation operator $\\Pi_h^\\circ \\colon L^1(\\Omega) \\to \\mathbb{V}^0_h$,\n\\[\n\\Pi_h^\\circ v := \\sum_{\\texttt{x}_i \\in \\mathcal{N}_h^\\circ} (\\Pi_h^{\\texttt{x}_i} v) (\\texttt{x}_i) \\; \\varphi_i,\n\\]\nwhere $\\Pi_h^{\\texttt{x}_i} v$ is the $L^2$-projection of $v \\big|_{\\Omega}$ onto $\\mathcal{P}_1(S_i)$.\n\n\n\n\\begin{Remark}[discrete functions are continuous]\nEven though nonlocal minimal surfaces can develop discontinuities across $\\partial\\Omega$\n--recall the sticky behavior discussed in \\Cref{R:stickiness}-- the discrete spaces we consider consist of continuous functions. This does not preclude the convergence of the numerical scheme we propose, which takes place in `trace blind' fractional Sobolev spaces. \nFurthermore, the strong imposition of the Dirichlet data simplifies both the method and its analysis. The use of discrete spaces that capture discontinuities across the boundary of the domain is the subject of ongoing work by the authors.\n\\end{Remark}\n\nWith the notation introduced above, the discrete counterpart of \\eqref{E:WeakForm-NMS-Graph} reads: find $u_h \\in \\mathbb{V}^g_h$ such that\n\\begin{equation}\\label{E:WeakForm-discrete}\na_{u_h}(u_h, v_h) = 0 \\quad \\mbox{for all } v_h \\in \\mathbb{V}^0_h. \n\\end{equation}\n\nDue to our assumption \\eqref{E:assumptions} on the datum $g$, it follows immediately that $u_h$ is a solution of \\eqref{E:WeakForm-discrete} if and only if $u_h$ minimizes the strictly convex energy $I_s$ over the discrete space $\\mathbb{V}_h^g$. This leads to the existence and uniqueness of solutions to the discrete problem \\eqref{E:WeakForm-discrete}.\n\n\\subsection{Localization} An obvious difficulty when trying to prove interpolation estimates in \nfractional Sobolev spaces is that their seminorms are non-additive with respect to disjoint domain partitions. Here we state a localization result, proved by Faermann \\cite{Faermann2,Faermann} in the case $p=2$. 
For brevity, since the proof for $p \\neq 2$ follows by the same arguments as in those references, we omit it. \n\n\n\\begin{Proposition}[localization of fractional-order seminorms]\n\\label{prop:faermann}\nLet $s \\in (0,1)$, $p \\in [1,\\infty)$, and $\\Omega$ be a bounded Lipschitz domain. Let $\\mathcal{T}_h$ denote a mesh as above. Then, for any $v \\in W^s_p(\\Omega)$ there holds\n\\begin{equation}\n\\label{E:faermann}\n |v|_{W^s_p(\\Omega)} \\leq \\left[\\sum_{T \\colon T \\subset \\overline\\Omega} \\iint_{T \\times (S^1_T \\cap \\Omega)} \\frac{|v (x) - v (y)|^p}{|x-y|^{d+sp}} dy dx \n + C(\\sigma) \\, \\frac{2^p \\omega_{d-1}}{s p h_T^{s p}} \\, \\| v \\|^p_{L^p(T)} \\right]^{1\/p}.\n\\end{equation}\nAbove, $\\omega_{d-1}$ denotes the measure of the $(d-1)$-dimensional unit sphere.\n\\end{Proposition}\n\nThis localization of fractional-order seminorms is instrumental for our error analysis. It implies that, in order to prove approximability estimates in $W^s_p(\\Omega)$, it suffices to produce {\\em local} estimates in patches of the form $T\\times S^1_T$ and scaled local $L^p(T)$ estimates for every $T \\in \\mathcal{T}_h$.\n\n\\subsection{Interpolation operator}\nHere we define a quasi-interpolation operator that plays an important role in the analysis of the discrete scheme proposed in this paper. Such an operator combines the two Cl\\'ement-type interpolation operators introduced in the previous subsection. More precisely, we set $\\mathcal I_h \\colon L^1(\\mathbb{R}^d) \\to \\mathbb{V}_h^g$,\n\\begin{equation} \\label{E:interpolation}\n\\mathcal I_h v = \\Pi_h^\\circ \\left(v \\big|_\\Omega \\right) + \\Pi_h^c g .\n\\end{equation}\n\nUsing standard arguments for Cl\\'ement interpolation, we obtain local approximation estimates in the interior of $\\Omega$.\n\n\\begin{Proposition}[local interpolation error]\n\\label{prop:local_interpolation_estimate}\nLet $s \\in (0,1)$, $p \\ge 1$, $s < t \\le 2$. 
Then, for all $T \\in \\mathcal{T}_h$ it holds\n\\[\n\\| v - \\mathcal I_h v \\|_{L^p(T)} \\lesssim h^{t} |v|_{W^t_p(S^1_T)},\n\\]\nand\n\\[\n\\left(\\iint_{T \\times S_T^1} \\frac{|(v - \\mathcal I_h v)(x) - (v - \\mathcal I_h v)(y)|^p}{|x - y|^{d + sp}} dy dx \\right)^\\frac1p \\lesssim h^{t-s} |v|_{W^t_p(S^2_T)} . \n\\]\n\\end{Proposition}\n\nFrom \\Cref{Cor:characterization-NMS} and \\Cref{thm:smoothness} we know that, under suitable assumptions, minimal graphs are $W^{2s}_1$-functions that are locally smooth in $\\Omega$. These conditions are sufficient to prove the convergence of the interpolation operator $\\mathcal I_h$. \n\n\n\\begin{Proposition}[interpolation error] \\label{prop:interpolation_estimate}\n\tLet $s \\in (0,1)$, and $p \\ge 1$ be such that $sp < 1$. Assume that $\\Omega$ and $g$ satisfy \\eqref{E:assumptions} and \\eqref{E:support}. Then, for all \n\t$v \\colon \\mathbb{R}^d \\to \\mathbb{R}$\tsatisfying $v \\big|_\\Omega \\in W^s_p(\\Omega)$ and $v = g$ in $\\Omega^c$, \n\\[\n\\iint_{Q_{\\Omega}} \\frac{\\left| (\\mathcal I_h v -v)(x) - (\\mathcal I_h v -v)(y) \\right|^p}{|x-y|^{d+sp}} \\; dxdy \\to 0 \\ \\textrm{ as } h \\to 0.\n\\]\n\\end{Proposition}\n\\begin{proof}\nFirst, we split\n\\[ \\begin{aligned}\n\t\\iint_{\\Omega\\times\\Omega^c} \\frac{\\left| (\\mathcal I_h v -v)(x) - (\\mathcal I_h v -v)(y) \\right|^p}{|x-y|^{d+sp}} \\; dxdy \\lesssim & \\iint_{\\Omega\\times\\Omega^c} \\frac{\\left| \\mathcal I_h v (x) -v(x) \\right|^p}{|x-y|^{d+sp}} \\; dxdy \\\\ \n\t& + \\iint_{\\Omega\\times\\Omega^c} \\frac{\\left| \\Pi_h^c g (y) -g (y) \\right|^p}{|x-y|^{d+sp}} \\; dxdy .\n\\end{aligned} \n\\]\n\t\nGiven $x \\in \\Omega$, we have $\\int_{\\Omega^c} \\frac{1}{|x-y|^{d+sp}} dy \\lesssim d(x,\\partial\\Omega)^{-sp},$ and since $\\mathcal I_h v -v \\in W^s_p(\\Omega)$, we invoke the Hardy inequality \\cite[Theorem 1.4.4.4]{Grisvard} to deduce that\n\\[\n\\iint_{\\Omega\\times\\Omega^c} \\frac{\\left| (\\mathcal I_h v 
-v)(x)\\right|^p}{|x-y|^{d+sp}} \\; dxdy \\lesssim \\int_{\\Omega} \\frac{\\left| (\\mathcal I_h v -v)(x)\\right|^p}{d(x,\\partial\\Omega)^{sp}} \\; dx \\lesssim \\| \\mathcal I_h v -v \\|_{W^s_p(\\Omega)}^p.\n\\]\nSince $g$ is uniformly bounded, we first claim that $\\Pi_h^c g \\to g$ a.e. in $\\Omega^c$ as $h \\to 0$. Indeed, for every $y \\in T \\subset \\Omega^c$, we express $\\Pi_h^c g(y)$ as a linear combination of $\\Pi_h^c g(\\texttt{x}_i)$, where $\\{ \\texttt{x}_i \\}$ are the vertices of $T$, and deduce that\n\\[\n\\Pi_h^c g(y) = \\int_{S_T^1} \\varphi^*_{y}(x) g(x) dx, \n\\]\nfor some function $\\varphi^*_{y}$ satisfying $\\int_{S_T^1} \\varphi^*_{y}(x) dx = 1$ and $\\Vert \\varphi^*_{y} \\Vert_{L^{\\infty}(S_T^1)} \\le C(d,\\sigma) h^{-d}$. Since $S_T^1 \\subset \\overline{B}_{2h}(y)$, for every Lebesgue point $y$ of $g$ we have\n\\[\\begin{aligned}\t\n\\big| (\\Pi_h^c g)(y) - g(y) \\big| &= \\left| \\int_{S_T^1} \\varphi^*_{y}(x) \\big( g(x) - g(y) \\big) dx\\right| \\le \\Vert \\varphi^*_{y} \\Vert_{L^{\\infty}(S_T^1)} \\int_{B_{2h}(y)} |g(x) -g(y)|dx \\\\\n&\\lesssim \\frac{1}{\\left| B_{2h}(y) \\right| }\\int_{B_{2h}(y)} |g(x) -g(y)|dx \\to 0 \\textrm{ as } h \\to 0.\n\\end{aligned} \n\\]\nBy the Lebesgue differentiation theorem, almost every $y \\in \\Omega^c$ is a Lebesgue point of $g$, and therefore \n\t\\begin{equation}\\label{E:OmegaC-ae-converge}\n\t\\Pi_h^c g \\to g \\; \\textrm{ a.e. 
in } \\Omega^c \\textrm{ as } h \\to 0.\n\t\\end{equation}\n\tIn addition, it follows from $\\Vert \\varphi^*_{y} \\Vert_{L^{\\infty}(S_T^1)} \\le C(d,\\sigma) h^{-d}$ that\n\t$\\Vert \\Pi_h^c g \\Vert_{L^{\\infty}(\\mathbb{R}^d)} \\lesssim \\Vert g \\Vert_{L^{\\infty}(\\Omega^c)}$, and hence\n\\[\n\\iint_{\\Omega\\times\\Omega^c} \\frac{\\left| \\Pi_h^c g (y) -g (y) \\right|^p}{|x-y|^{d+sp}} dxdy \\lesssim \\iint_{\\Omega\\times\\Omega^c} \\frac{\\Vert g \\Vert^p_{L^{\\infty}(\\Omega^c)}}{|x-y|^{d+sp}} dxdy < \\infty.\n\\]\nApplying the Lebesgue dominated convergence theorem, we obtain\n\\[\n\\iint_{\\Omega\\times\\Omega^c} \\frac{\\left| \\Pi_h^c g (y) -g (y) \\right|^p}{|x-y|^{d+sp}} dxdy \\to 0 \\textrm{ as } h \\to 0.\n\\]\nTherefore, we have shown that\n\\[\n\\iint_{\\Omega\\times\\Omega^c} \\frac{\\left| (\\mathcal I_h v -v)(x) - (\\mathcal I_h v -v)(y) \\right|^p}{|x-y|^{d+sp}} \\; dxdy \\lesssim \\| \\mathcal I_h v -v \\|_{W^s_p(\\Omega)}^p + o(1),\n\\]\nand thus we just need to bound the interpolation error in $W^s_p(\\Omega)$.\n\n\t\nWe write $\\mathcal I_h v -v = \\Big( \\Pi_h^\\circ \\left(v \\big|_\\Omega \\right) - v \\Big) + \\Pi_h^c g$, and split\n\\[\n\\big\\|\\mathcal I_h v -v \\big\\|_{W^s_p(\\Omega)}^p \\lesssim\n\\big\\| \\Pi_h^\\circ \\left(v \\big|_\\Omega \\right) - v \\big\\|_{W^s_p(\\Omega)}^p + \n\\big\\| \\Pi_h^c g \\big\\|_{W^s_p(\\Omega)}^p.\n\\]\t\nUsing the localization estimate \\eqref{E:faermann}, we bound $\\big\\| \\Pi_h^c g \\big\\|_{W^s_p(\\Omega)}^p$ by\n\\[ \\begin{aligned}\n\\big\\| \\Pi_h^c g \\big\\|_{W^s_p(\\Omega)}^p \\leq\n\\sum_{T \\colon T \\subset \\overline\\Omega} \\bigg[ & \\iint_{T \\times (S^1_T \\cap \\Omega)} \\frac{| \\Pi_h^c g (x) - \\Pi_h^c g (y)|^p}{|x-y|^{d+sp}} dy dx \\\\ & + \\left(1 + C(\\sigma) \\, \\frac{2^p \\omega_{d-1}}{s p h_T^{s p}} \\right) \\, \\| \\Pi_h^c g \\|^p_{L^p(T)} \\bigg].\n\\end{aligned} \\]\nRecalling $\\Vert \\Pi_h^c g \\Vert_{L^{\\infty}(\\mathbb{R}^d)} \\lesssim \\Vert g 
\\Vert_{L^{\\infty}(\\Omega^c)}$ and using an inverse inequality, we have\n\\[ \\begin{aligned}\n\\big\\| \\Pi_h^c g \\big\\|_{W^s_p(\\Omega)}^p \\lesssim\n\\sum_{T \\subset \\overline \\Omega \\colon S^1_T \\cap \\Omega^c \\neq \\emptyset} \\bigg[ h_T^{-sp+d} \\Vert \\Pi_h^c g \\Vert_{L^{\\infty}(\\mathbb{R}^d)}^p\t\\bigg] \\lesssim \\Vert g \\Vert^p_{L^{\\infty}(\\Omega^c)} \\!\\!\\!\\!\\!\n\\sum_{T \\subset \\overline \\Omega \\colon S^1_T \\cap \\Omega^c \\neq \\emptyset} \nh_T^{-sp+d},\n\\end{aligned} \\]\nfor $h$ small enough. The sum on the right-hand side above can be straightforwardly estimated by \n\\[ \\begin{aligned}\n\\sum_{T \\subset \\overline \\Omega \\colon S^1_T \\cap \\Omega^c \\neq \\emptyset} \nh_T^{-sp+d}\t& \\lesssim \\sum_{T \\subset \\overline{\\Omega}: \\; T \\cap \\Omega^c \\neq \\emptyset} h_T^{-sp+d} \\le h^{1-sp} \\sum_{T \\subset \\overline{\\Omega}: \\; T \\cap \\Omega^c \\neq \\emptyset} h_T^{d-1} \\\\ \n& \\lesssim h^{1-sp}\\; \\mathcal{H}^{d-1}(\\partial\\Omega),\n\\end{aligned} \\]\nwhere $\\mathcal{H}^{d-1}$ is the $(d-1)$-dimensional Hausdorff measure. This establishes that $\\big\\| \\Pi_h^c g \\big\\|_{W^s_p(\\Omega)}^p \\to 0$ as $h \\to 0$.\t\t\n\t\nIt only remains to show that $\\big\\| \\Pi_h^\\circ \\left(v \\big|_\\Omega \\right) - v \\big\\|_{W^s_p(\\Omega)}^p \\to 0$ as $h \\to 0$. For simplicity, we write $\\Pi_h^\\circ v$ instead of $\\Pi_h^\\circ \\left(v \\big|_\\Omega \\right)$. Since $\\Pi_h^\\circ$ is a continuous linear operator from $W^s_p(\\Omega)$ to $W^s_p(\\Omega)$ with\n\\[\n\\| \\Pi_h^\\circ v \\|_{W^s_p(\\Omega)} \\le C(d,s,p,\\sigma,\\Omega) \\| v \\|_{W^s_p(\\Omega)},\n\\]\nit suffices to prove the convergence for $v \\in C^{\\infty}(\\overline{\\Omega})$. 
We use the localization estimate \\eqref{E:faermann} for $\\big\\| \\Pi_h^\\circ v - v \\big\\|_{W^s_p(\\Omega)}^p$ and write\n\\[ \\begin{aligned}\n\\big\\| \\Pi_h^\\circ v - v \\big\\|_{W^s_p(\\Omega)}^p & \\leq \n\\sum_{T \\colon T \\subset \\overline\\Omega} \\bigg[ \\iint_{T \\times (S^1_T \\cap \\Omega)} \\frac{| (\\Pi_h^\\circ v - v) (x) - (\\Pi_h^\\circ v - v) (y)|^p}{|x-y|^{d+sp}} dy dx \\\\\n& \\qquad \\qquad \\qquad + \\left(1 + C(\\sigma) \\, \\frac{2^p \\omega_{d-1}}{s p h_T^{s p}} \\right) \\, \\| \\Pi_h^\\circ v - v \\|^p_{L^p(T)} \\bigg] \\\\\n& =: \\sum_{T \\colon S_T^2 \\subset \\Omega} I_T(v) \\ + \\sum_{T \\subset \\overline\\Omega \\colon S_T^2 \\cap \\Omega^c \\neq \\emptyset} I_T(v).\n\\end{aligned} \\]\n\t\nOn the one hand, we point out that, because\n\\[\n\\left| \\bigcup_{T \\subset \\overline\\Omega \\colon S^2_T \\cap \\Omega^c \\neq \\emptyset} T \\right| \\approx h,\n\\]\nand $v \\in W^s_p(\\Omega)$, we have\n\\[\n\\sum_{T \\subset \\overline\\Omega \\colon S_T^2 \\cap \\Omega^c \\neq \\emptyset} I_T(v) \\le C(d,s,p,\\sigma) \\sum_{T \\subset \\overline\\Omega \\colon S_T^2 \\cap \\Omega^c \\neq \\emptyset} \\|v\\|_{W^s_p(S^2_T \\cap \\Omega)}^p \\to 0 \\mbox{ as } h \\to 0.\n\\]\n\nOn the other hand, over the elements $T$ such that $S_T^2 \\subset \\Omega$, \\Cref{prop:local_interpolation_estimate} gives\n\\[\n\\sum_{T \\colon S_T^2 \\subset \\Omega} I_T(v) \\lesssim h^{p(t-s)} \\sum_{T \\colon S_T^2 \\subset \\Omega} |v|_{W^t_p(S^2_T)}^p \\lesssim h^{p(t-s)} |v|^p_{W^t_p(\\Omega)},\n\\]\nfor every $v$ smooth in $\\Omega$. This finishes the proof.\n\\end{proof}\n\nIf, under the same conditions as in \\Cref{prop:interpolation_estimate}, we add the hypothesis that $v$ is smoother than $W^s_p(\\Omega)$, then it is possible to derive interpolation rates.\n\n\\begin{Proposition}[interpolation rate] \\label{prop:interpolation_estimate-2}\nAssume that $v \\in W^t_p(\\Lambda)$ for some $t \\in (s,2]$. 
Then, \n\\[\n\\iint_{Q_{\\Omega}} \\frac{\\left| (\\mathcal I_h v -v)(x) - (\\mathcal I_h v -v)(y) \\right|^p}{|x-y|^{d+sp}} \\; dxdy \\lesssim h^{p(t-s)} |v|^p_{W^t_p(\\Lambda)} .\n\\]\n\\end{Proposition}\n\\begin{proof}\nWe cover $Q_{\\Omega}$ by the sets\n\\[\nQ_{\\Omega} \\subset \\big( \\Lambda \\times \\Lambda \\big)\n\\cup \\Big( \\big( \\Omega^c \\setminus \\Lambda \\big) \\times \\Omega \\Big) \\; \\cup \\; \\Big( \\Omega \\times \\big( \\Omega^c \\setminus \\Lambda \\big) \\Big).\n\\]\nThe estimate in $\\Lambda \\times \\Lambda$ is standard and follows along the lines of the estimates for $\\Vert \\mathcal I_h v - v \\Vert_{W^s_p(\\Omega)}$ in \\Cref{prop:interpolation_estimate}. The estimate in $\\big( \\Omega^c \\setminus \\Lambda \\big) \\times \\Omega$ reduces to one for $\\Vert \\mathcal I_h v - v \\Vert_{L^p(\\Omega)}$ because $\\textrm{dist}(\\Omega^c \\setminus \\Lambda, \\Omega) > 0$ and $v$ is zero in $\\Omega^c \\setminus \\Lambda$; the integral over $\\Omega \\times \\big( \\Omega^c \\setminus \\Lambda \\big)$ is treated identically, by the symmetry of the integrand.\n\\end{proof}\n\n\\subsection{Numerical schemes} \\label{SS:schemes}\nWe briefly include some details about the implementation and solution of the discrete problem \\eqref{E:WeakForm-discrete}. First, we point out that we can compute $a_{u_h}(u_h, v_h)$ for any given $u_h \\in \\mathbb{V}_h^g, \\ v_h \\in \\mathbb{V}_h^0$ by following the implementation techniques from \\cite{AcosBersBort2017short, AcosBort2017fractional}.\nFurther details on the quadrature rules employed and the treatment of the discrete form $a_{u_h}(u_h, v_h)$ can be found in \\cite{BoLiNo19computational}.\n\nIn order to solve the nonlinear discrete problem we resort to two different approaches: a semi-implicit $H^\\alpha$-gradient flow and a damped Newton algorithm. 
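Both approaches reduce, at each iteration, to a single linear solve: the gradient flow lags the nonlinear coefficient at the current iterate, while Newton linearizes around it and damps the update by a line search. The following sketch illustrates only this iteration structure on a classical local 1D analogue (the graph-area energy with data $u(0)=0$, $u(1)=1$); the helper names (`energy`, `gradient`) and the tridiagonal matrices are hypothetical stand-ins, and none of the nonlocal assembly of $a_{u_h}$ is reproduced here.

```python
import numpy as np

# Illustrative sketch ONLY: a classical (local) 1D graph-area energy stands in
# for the nonlocal energy I_s, and all helper names are hypothetical.
# Both solvers reduce each iteration to one linear solve on the interior nodes.

n = 20                                   # number of subintervals of (0,1)
h = 1.0 / n
exact = np.linspace(0.0, 1.0, n + 1)     # minimizer for data u(0)=0, u(1)=1

def energy(u):
    s = np.diff(u) / h
    return h * np.sum(np.sqrt(1.0 + s ** 2))

def gradient(u):
    # derivative of the energy at interior nodes: analogue of a_u(u, phi_i)
    s = np.diff(u) / h
    flux = s / np.sqrt(1.0 + s ** 2)
    return flux[:-1] - flux[1:]

# --- semi-implicit gradient flow (alpha = 0, lumped L^2 inner product) ---
tau = 1.0
v = exact ** 2                           # initial guess with correct boundary values
for _ in range(400):
    w = 1.0 / np.sqrt(1.0 + (np.diff(v) / h) ** 2)   # coefficient lagged at u^k
    A = (np.diag((w[:-1] + w[1:]) / h)
         - np.diag(w[1:-1] / h, 1) - np.diag(w[1:-1] / h, -1))
    rhs = (h / tau) * v[1:-1]
    rhs[0] += w[0] / h * v[0]            # Dirichlet contributions moved to rhs
    rhs[-1] += w[-1] / h * v[-1]
    v[1:-1] = np.linalg.solve(np.diag(np.full(n - 1, h / tau)) + A, rhs)

# --- damped Newton with backtracking line search ---
u = exact ** 2
for _ in range(50):
    jw = (1.0 + (np.diff(u) / h) ** 2) ** (-1.5)     # weight from the Jacobian
    J = (np.diag((jw[:-1] + jw[1:]) / h)
         - np.diag(jw[1:-1] / h, 1) - np.diag(jw[1:-1] / h, -1))
    step = np.linalg.solve(J, -gradient(u))
    t = 1.0
    for _ in range(30):                              # halve t until energy decreases
        unew = u.copy()
        unew[1:-1] += t * step
        if energy(unew) <= energy(u):
            break
        t *= 0.5
    u = unew
```

For the true nonlocal problem the matrices are dense; their assembly and the choice of $\alpha$ and $\tau$ are discussed in \cite{BoLiNo19computational}.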
For the former, we consider $\\alpha \\in [0,1)$ (with the convention that $H^0 (\\Omega) = L^2(\\Omega)$), fix a step size $\\tau > 0$ and, given an initial guess $u_h^0$, we solve the following equation in every step,\n\\begin{equation}\\label{E:semi-implicit-GF}\n\\frac1{\\tau} \\langle {u^{k+1}_h - u^k_h} \\ , \\ v_h \\rangle_{H^{\\alpha}(\\Omega)} = -a_{u^k_h}(u^{k+1}_h \\ , \\ v_h), \\qquad \\forall v_h \\in \\mathbb{V}^0_h .\n\\end{equation}\n\nFor the damped Newton method, we take the first variation of $a_{u_h}(u_h,v_h)$, $\\frac{\\delta a_{u_h}(u_h,v_h)}{\\delta u_h}(w_h)$, which is well-defined for all $u_h \\in \\mathbb{V}^g_h$, $v_h, w_h \\in \\mathbb{V}^0_h$. We point out that the analogue of this variation at the continuous level is not well-defined. The resulting step is obtained by solving for $w_h^k$ the equation\n\\begin{equation}\\label{E:damped-Newton-wh}\n\\frac{\\delta a_{u_h}(u_h^k, v_h)}{\\delta u_h^k}(w_h^k)\n= -a_{u_h^k}(u_h^k, v_h), \\qquad \\forall v_h \\in \\mathbb{V}^0_h\n\\end{equation}\nand performing a line search to determine the step size. We refer the reader to \\cite{BoLiNo19computational} for full details on these algorithms and further discussion on their performance.\n\n\n\\section{Convergence} \\label{sec:convergence}\n\nIn this section, we prove the convergence of the discrete solution $u_h$ without assumptions on the regularity of the nonlocal minimal graphs.\n\nWe first prove that the discretization proposed in \\Cref{ss:FE_discretization} is energy-consistent. 
Due to our assumption that $g \\in C_c(\\Omega^c)$, we know from Proposition \\ref{Prop:domain-energy} that the energy-minimizing function $u$ belongs to $W^{2s}_1(\\Omega)$, while Theorem \\ref{thm:smoothness} guarantees that for any region $\\widetilde{\\Omega} \\Subset \\Omega$, $u \\in W^{2t}_1(\\widetilde{\\Omega})$ for every $t > 0$.\nThese two properties are sufficient to guarantee the consistency of the discrete energy.\n\n\n\\begin{Theorem}[energy consistency] \\label{thm:consistency}\nLet $s \\in (0,1\/2)$, and assume that $\\Omega$ and $g$ satisfy \\eqref{E:assumptions} and \\eqref{E:support}. Let $u$ and $u_h$ be, respectively, the solutions to \\eqref{E:WeakForm-NMS-Graph} and \\eqref{E:WeakForm-discrete}. Then, \n\\begin{equation*}\n \\lim_{h \\to 0} I_s[u_h] = I_s[u].\n\\end{equation*}\n\\end{Theorem}\n\\begin{proof}\nLet $\\mathcal I_h$ be defined according to \\eqref{E:interpolation}. Since \n$\\mathcal I_h u \\in \\mathbb{V}_h^g$, it follows that $I_s[\\mathcal I_h u] \\geq I_s[u_h]$ and hence\n\\begin{equation*}\n\\begin{aligned}\n 0 &\\leq I_s[u_h] - I_s[u] \\leq I_s[\\mathcal I_h u] - I_s[u] \\\\\n &= \\iint_{Q_{\\Omega}} \\left( F_s\\left(\\frac{\\mathcal I_h u(x)-\\mathcal I_h u(y)}{|x-y|}\\right)\n - F_s\\left(\\frac{u(x)-u(y)}{|x-y|}\\right) \\right) \\frac{dxdy}{|x-y|^{d+2s-1}}.\n\\end{aligned}\n\\end{equation*}\nBecause $F_s$ is Lipschitz continuous, we deduce\n\\begin{equation*}\n\\begin{aligned}\n0 \\le I_s[u_h] - I_s[u] &\\lesssim \\iint_{Q_{\\Omega}} \n \\frac{\\left| (\\mathcal I_h u -u)(x) - (\\mathcal I_h u -u)(y) \\right|}{|x-y|^{d+2s}} \\; dxdy,\n\\end{aligned}\n\\end{equation*}\nand using \\Cref{prop:interpolation_estimate} we conclude that $\\lim_{h \\to 0} I_s[u_h] = I_s[u]$.\n\\end{proof}\n\nFinally, we prove the convergence of the finite element approximations to the nonlocal minimal graph as the maximum element size $h \\to 0$.\n\n\\begin{Theorem}[convergence] \nUnder the same hypotheses as in \\Cref{thm:consistency}, 
it holds that\n\\[\n\\lim_{h \\to 0} \\| u - u_h \\|_{W^{2r}_1(\\Omega)} = 0 \\quad \\forall r \\in [0,s).\n\\]\n\\end{Theorem}\n\\begin{proof}\nDue to our assumptions on $g$, we apply Theorem \\ref{thm:consistency} to deduce that the finite element discretization is energy-consistent. Thus, the family $\\{I_s[u_h]\\}_{h>0}$ is uniformly bounded.\n\nSimilarly to the first formula in \\eqref{eq:norm_bound}, we obtain\n\\begin{equation*}\n|u_h|_{W^{2s}_1(\\Omega)} \\le C_1 + C_2 I_s[u_h] ,\n\\end{equation*}\nwhereas $\\Vert u_h \\Vert_{L^1(\\Omega)}$ is bounded as in \\eqref{eq:L1-bound}. It thus follows that $\\Vert u_h \\Vert_{W^{2s}_1(\\Omega)}$ is uniformly bounded.\n\nThis fact, combined with the compactness of the embedding $W^{2s}_1(\\Omega) \\subset L^1(\\Omega)$, allows us to extract a subsequence $\\{u_{h_n}\\}$ that converges to some $\\widetilde{u}$ in $L^1(\\Omega)$. According to \\eqref{E:OmegaC-ae-converge} in the proof of \\Cref{prop:interpolation_estimate}, $\\{u_{h_n}\\}$ converges to $g$ a.e. in $\\Omega^c$; then, extending $\\widetilde{u}$ as $g$ onto $\\Omega^c$, we have by Fatou's Lemma that\n\\begin{equation*}\nI_s[\\widetilde{u}] \\leq \\liminf_{n \\to \\infty} I_s[u_{h_n}] = I_s[u].\n\\end{equation*}\nAs a consequence, $I_s[\\widetilde{u}]$ is finite and, by \\Cref{Prop:domain-energy}, it follows that $\\widetilde{u} \\in \\mathbb{V}^g$. Because $\\widetilde{u}$ minimizes the energy $I_s$ over $\\mathbb{V}^g$,\nit is a solution of \\eqref{E:WeakForm-NMS-Graph},\nand by uniqueness it must be $\\widetilde{u} = u$. Since any subsequence of $\\{ u_h \\}$\nhas a subsequence converging to $u$ in $L^1(\\Omega)$, it follows immediately that $u_h$ converges to $u$ in $L^1(\\Omega)$ as $h \\to 0$. 
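The upgrade from $L^1$ to intermediate orders relies on the standard interpolation bound between $L^1(\Omega)$ and $W^{2s}_1(\Omega)$; in sketch form (constants depending on $\Omega$, $r$ and $s$ are omitted):

```latex
% For 0 < r < s and \theta = r/s \in (0,1),
\| u - u_h \|_{W^{2r}_1(\Omega)}
  \lesssim \| u - u_h \|_{L^1(\Omega)}^{1-\theta} \,
           \| u - u_h \|_{W^{2s}_1(\Omega)}^{\theta} ,
\qquad \theta = \frac{r}{s} .
```

Since $\| u - u_h \|_{W^{2s}_1(\Omega)}$ is uniformly bounded while $\| u - u_h \|_{L^1(\Omega)} \to 0$, the left-hand side vanishes as $h \to 0$.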
\n\nFinally, the convergence in the $W^{2r}_1(\\Omega)$-norm for $r \\in (0,s)$ is obtained by\ninterpolation between the spaces $L^1(\\Omega)$ and $W^{2s}_1(\\Omega)$.\n\\end{proof}\n\n\\section{A geometric notion of error} \\label{sec:geometric-error}\nIn this section, we introduce a geometric notion of error and prove the convergence of the discrete approximations proposed in \\Cref{ss:FE_discretization} according to it. The error estimate for this novel quantity mimics the estimates in the classical Plateau problem for the error\n\\begin{equation}\\label{eq:def-e}\n\\begin{aligned}\ne^2(u,u_h) & =\\int_{\\Omega} \\ \\Big| \\widehat{\\nu}(\\nabla u) - \\widehat{\\nu}(\\nabla u_h) \\Big|^2 \\;\\frac{Q(\\nabla u) + Q(\\nabla u_h)}{2} \\ dx , \\\\\n& = \\int_{\\Omega} \\ \\Big( \\widehat{\\nu}(\\nabla u) - \\widehat{\\nu}(\\nabla u_h) \\Big) \\cdot \\ \\Big( \\nabla (u-u_h), 0 \\Big) dx ,\n\\end{aligned}\n\\end{equation}\nwhere $Q(\\pmb{a}) = \\sqrt{1+|\\pmb{a}|^2}$, $\\widehat{\\nu}(\\pmb{a}) = \\frac{(\\pmb{a},-1)}{Q(\\pmb{a})}$. Since, in this context, $\\widehat{\\nu}(\\nabla u)$ is the normal unit vector on the graph of $u$, $e(u,u_h)$ represents a weighted $L^2$-error for the normal vectors of the corresponding graphs given by $u$ and $u_h$, where the weight is the average of the area\nelements of the graphs of $u$ and $u_h$. An estimate for $e(u,u_h)$ was derived by Fierro and Veeser \\cite{FierroVeeser03} in the framework of a posteriori error estimation for prescribed mean curvature equations. Geometric notions of errors like $e(u,u_h)$ have also been considered in the setting of mean curvature flows \\cite{DeckelnickDziuk00, DeDzEl05} and surface diffusion \\cite{BaMoNo04}.\n\n\nFor the nonlocal minimal surface problem, let $u$ and $u_h$ be the solutions to \\eqref{E:WeakForm-NMS-Graph} and \\eqref{E:WeakForm-discrete}, respectively. 
We introduce the quantity\n\\begin{equation}\\label{eq:def-es} \n{\n\\begin{aligned}\ne_s(u,u_h) := \\left( C_{d,s} \\iint_{Q_{\\Omega}} \\Big( G_s\\left(d_u(x,y)\\right) - G_s\\left(d_{u_h}(x,y)\\right) \\Big) \\frac{d_{u-u_h}(x,y)}{|x-y|^{d-1+2s}} dxdy \\right)^{1\/2} ,\n\\end{aligned}}\n\\end{equation}\nwhere $G_s$ is given by \\eqref{E:DEF-Gs}, $C_{d,s} = \\frac{1 - 2s}{\\alpha_{d}}$ with $\\alpha_{d}$ denoting the volume of the $d$-dimensional unit ball, and, for any function $v$, $d_v(x,y)$ is defined as\n\\begin{equation}\\label{eq:def-d_u}\nd_v(x,y) := \\frac{v(x)-v(y)}{|x-y|}.\n\\end{equation}\nThe integrand in \\eqref{eq:def-es} is non-negative: since $d_{u-u_h} = d_u - d_{u_h}$, it has the form $\\left( G_s(a) - G_s(b) \\right)(a-b)$, which is non-negative because $G_s$ is non-decreasing on $\\mathbb{R}$. We include the constant $C_{d,s}$ in the definition of $e_s$ in order to have asymptotic compatibility in the limit $s \\to \\frac12^-$ (cf. \\Cref{Thm:asymptotics-es} below). \n\n\\Cref{ss:error} derives an estimate for $e_s(u,u_h)$ that does not rely on regularity assumptions. Although the proof of such an error estimate is simple, providing an interpretation of the quantity $e_s$ is not a straightforward task. Thus, in \\Cref{ss:asymptotics} we study the behavior of $e_s$ and related quantities in the limit $s \\to \\frac12^-$.\n\n\\subsection{Error estimate} \\label{ss:error}\nIn this section we derive an upper bound for the geometric discrepancy $e_s(u,u_h)$ between the continuous and discrete minimizers $u$ and $u_h$, without additional assumptions on the regularity of $u$. More precisely, the next theorem states that $e_s(u,u_h)$ can be bounded in terms of the approximability of $u$ by the discrete spaces $\\mathbb{V}^g_h$, measured in the $\\mathbb{V}^g$-seminorm.\n\n\\begin{Theorem}[geometric error]\\label{thm:geometric-error}\nLet $s \\in (0,1\/2)$ and let $\\Omega$ and $g$ satisfy \\eqref{E:assumptions} and \\eqref{E:support}. 
Let $u$ and $u_h$ be the solutions to \\eqref{E:WeakForm-NMS-Graph} and \\eqref{E:WeakForm-discrete}, respectively. Then, it holds that\n\\begin{equation} \\label{eq:geometric-error} \\begin{aligned}\ne_s(u,u_h) &\\le \\inf_{v_h \\in \\mathbb{V}_h^g} \\sqrt{2 C_{d,s} K} \\ | u-v_h |_{\\mathbb{V}^g}^{1\/2} \\\\\n& = \\inf_{v_h \\in \\mathbb{V}_h^g} \\left( 2 C_{d,s} K \\iint_{Q_{\\Omega}} \\frac{|(u-v_h)(x)-(u-v_h)(y)|}{|x-y|^{d+2s}} dxdy \\right) ^{1\/2},\n\\end{aligned} \\end{equation}\nwhere $K$ is the constant from \\eqref{E:bound-Gs}.\n\\end{Theorem}\n\\begin{proof}\nThe proof follows by `Galerkin orthogonality'. Indeed, let $v_h \\in \\mathbb{V}_h^g$ and use $u_h - v_h$ as a test function in \\eqref{E:WeakForm-NMS-Graph} and \\eqref{E:WeakForm-discrete} to obtain\n\\[\n\\iint_{Q_{\\Omega}} \\Big( G_s\\left(d_u(x,y)\\right) - G_s\\left(d_{u_h}(x,y)\\right) \\Big) \\; \\frac{d_{u_h}(x,y) - d_{v_h}(x,y)}{|x-y|^{d-1+2s}} dxdy = 0.\n\\]\nThe identity above immediately implies that\n\\begin{equation} \\label{eq:orthogonality-es} \\begin{aligned}\ne^2_s(u,u_h)\n&= C_{d,s} \\iint_{Q_{\\Omega}} \\Big( G_s\\left(d_u(x,y)\\right) - G_s\\left(d_{u_h}(x,y)\\right) \\Big) \\frac{d_u(x,y) - d_{u_h}(x,y)}{|x-y|^{d-1+2s}} dxdy \\\\\n&= C_{d,s} \\iint_{Q_{\\Omega}} \\Big( G_s\\left(d_u(x,y)\\right) - G_s\\left(d_{u_h}(x,y)\\right) \\Big) \\frac{d_u(x,y) - d_{v_h}(x,y)}{|x-y|^{d-1+2s}} dxdy.\n\\end{aligned}\n\\end{equation} \nEstimate \\eqref{eq:geometric-error} follows immediately from the bound $|G_s| \\le K$ (cf. \\eqref{E:bound-Gs}).\n\\end{proof}\n\nIn case the fractional minimal graph possesses additional regularity, a convergence rate follows straightforwardly by applying \\Cref{prop:interpolation_estimate-2}.\n\n\\begin{Corollary}[convergence rate]\nLet the same conditions as in \\Cref{thm:geometric-error} be valid and further assume that $u \\in W^t_1(\\Lambda)$ for some $t \\in (2s,2]$, where $\\Lambda$ is given by \\eqref{E:support}. 
Then,\n\\[\ne_s(u,u_h) \\lesssim h^{t\/2-s} |u|^{1\/2}_{W^t_1(\\Lambda)} .\n\\]\n\\end{Corollary}\n\n\\begin{Remark}[BV regularity]\nAlthough minimal graphs are expected to be discontinuous across the boundary, they are smooth in the interior of $\\Omega$ and, naturally, possess the same regularity as the datum $g$ over $\\Omega^c$. Therefore, in general, we expect that $u \\in BV(\\Lambda)$, whence the error estimate\n\\[\ne_s(u,u_h) \\lesssim h^{1\/2-s} |u|^{1\/2}_{BV(\\Lambda)}.\n\\]\n\\end{Remark}\n\n\\subsection{Asymptotic behavior} \\label{ss:asymptotics}\nOur goal in this section is to show that, for $u$ and $v$ smooth enough, $e_s(u,v)$ converges to the geometric notion of error $e(u,v)$ defined in \\eqref{eq:def-e} in the limit $s \\to {\\frac{1}{2}}^-$. To this aim, we first introduce a nonlocal normal vector. \n\n\\begin{Definition}[nonlocal normal vector] \\label{def:normal-vector}\nLet $s \\in (0, 1\/2)$ and $E \\subset \\mathbb{R}^d$ be an open, bounded, measurable set. The nonlocal inward normal vector of order $s$ at a point $x \\in \\partial E$ is defined as\n\\begin{equation}\\label{eq:def-nonlocal-normal}\n \\nu_s(x;E) = \\frac{C_{d-1,s}}{2} \\; \\lim_{R \\to \\infty} \\int_{B_R(x)}\n \\frac{\\chi_E(y) - \\chi_{E^c}(y)}{|x-y|^{d+2s}} \\; (y - x) \\; dy,\n\\end{equation}\nwhere $C_{d-1,s} = \\frac{1-2s}{\\alpha_{d-1}}$ as in \\eqref{eq:def-es}, except that $d$ is replaced by $d-1$.\n\\end{Definition}\n\n\\begin{Remark}[dimensions]\nWe point out that definition \\eqref{eq:def-es} aims to measure the normal vector discrepancies over graphs in $\\mathbb{R}^{d+1}$, whereas \\Cref{def:normal-vector} deals with the normal vector to a subset of $\\mathbb{R}^d$. 
This is why in \\eqref{eq:def-nonlocal-normal} we use the constant $C_{d-1,s}$ instead of $C_{d,s}$.\n\\end{Remark}\n\nNotice that, by symmetry, \n\\begin{equation*}\n\\int_{\\partial B_R(x)} \\frac{y - x}{|x-y|^{d+2s}} dS(y) = 0 \\quad \\forall R>0.\n\\end{equation*}\nConsequently, because $\\chi_{E^c} = 1 - \\chi_{E}$, if $E \\subset B_R(x)$ for some $R > 0$, then\n\\begin{equation*} \\begin{aligned}\n \\nu_s(x;E) & = \\frac{C_{d-1,s}}{2} \\int_{B_R(x)}\n \\frac{\\chi_E(y) - \\chi_{E^c}(y)}{|x-y|^{d+2s}} (y - x) \\; dy \\\\\n & = C_{d-1,s} \\int_{B_R(x)}\n \\frac{\\chi_E(y)}{|x-y|^{d+2s}} (y - x) \\; dy.\n\\end{aligned} \\end{equation*}\n\nThe following lemma justifies that the nonlocal normal vector defined in \\eqref{eq:def-nonlocal-normal} is indeed an extension of the classical notion of normal vector. The scaling factor in the definition of $\\nu_s$ yields the convergence to the normal derivative as $s \\to \\frac12^-$.\n\n\\begin{Lemma}[asymptotic behavior of $\\nu_s$]\\label{lem:asymp-nonlocal-normal}\nLet $E$ be a bounded set in $\\mathbb{R}^d$, $x$ be a point on $\\partial E$, the surface $\\partial E$ be locally $C^{1,\\gamma}$ for some $\\gamma > 0$ and $\\nu(x)$ be the inward normal vector to $\\partial E$ at $x$. Then, the following holds:\n\\begin{equation}\\label{eq:asymp-nonlocal-normal}\n\\lim_{s \\to {\\frac{1}{2}}^-} \\nu_s(x;E) = \\nu(x) .\n\\end{equation}\n\\end{Lemma}\n\\begin{proof}\nWithout loss of generality, we assume $x = 0$. Let $\\widetilde{E} := \\{y: y \\cdot \\nu(x) \\ge 0\\}$ and for simplicity we write $B_r = B_r(x)$. Then, since $\\partial E$ is locally $C^{1,\\gamma}$, there exists some $r_0 > 0$ such that\n\\begin{equation}\\label{eq:proof-asymp-nonlocal-normal}\n \\left| \\int_{\\partial B_r} \\chi_{E \\triangle \\widetilde{E}}(y) \\; dS(y) \\right| \\lesssim r^{d+\\gamma-1}\n\\end{equation}\nfor any $r \\in (0,r_0]$, where $\\triangle$ denotes the symmetric difference between sets. 
Fix $R > r_0$ large enough so that $E \\subset B_R(x)$. Then, we can write $\\nu_s(x;E)$ as\n\\begin{equation}\\label{eq:proof-asymp-nonlocal-normal-2}\n\\begin{aligned}\n\\nu_s(x;E) &= C_{d-1,s} \\int_{B_R} \\frac{\\chi_E(y)}{|y|^{d+2s}} \\; y \\; dy \\\\\n&= C_{d-1,s} \\left(\\int_{B_R \\setminus B_{r_0}} \\frac{\\chi_E(y)}{|y|^{d+2s}} \\; y \\; dy + \\int_{B_{r_0}} \\frac{\\chi_E(y)}{|y|^{d+2s}} \\; y \\; dy \\right).\n\\end{aligned}\n\\end{equation}\nFor the first integral on the right-hand side, since the surface measure of the $(d-1)$-dimensional unit sphere $\\partial B_1 \\subset \\mathbb{R}^d$ equals $d \\alpha_d$, we have\n\\[ \\begin{aligned}\n\\left| \\int_{B_R \\setminus B_{r_0}} \\frac{\\chi_E(y)}{|y|^{d+2s}} \\; y \\; dy \\right| & = \\left| \\int_{r_0}^R dr \\int_{\\partial B_r} \\frac{\\chi_E(y)}{r^{d+2s}} \\; y \\; dS(y) \\right| \\\\\n& \\le \\int_{r_0}^R dr \\int_{\\partial B_r} \\frac{1}{r^{d+2s}} \\; r \\; dS = d \\alpha_d \\int_{r_0}^R r^{-2s} \\; dr \\\\\n& = \\frac{d \\, \\alpha_d}{1-2s} (R^{1-2s} - r_0^{1-2s}).\n\\end{aligned} \\]\nTherefore, in the limit $s\\to\\frac12^-$, we obtain \n\\begin{equation} \\label{eq:lim-E-far}\nC_{d-1,s} \\left| \\int_{B_R \\setminus B_{r_0}} \\frac{\\chi_E(y)}{|y|^{d+2s}} \\; y \\; dy \\right| \\le\n\\frac{d \\alpha_d}{\\alpha_{d-1}} \\left(R^{1-2s} - r_0^{1-2s} \\right) \\to 0.\n\\end{equation}\n\nWe now deal with the second term on the right-hand side of \\eqref{eq:proof-asymp-nonlocal-normal-2}. Without loss of generality, we additionally assume $\\nu(x) = e_1$.\nIf we replace $E$ by the set $\\widetilde{E}$ defined above, which coincides with the half-space $\\{ y \\colon y_1 \\ge 0 \\}$, it follows by symmetry that all components but the first one in the integral vanish. 
The first component can be calculated explicitly by writing it as an iterated integral along the $(d-1)$-dimensional slices $\\Pi_t = \\{ y_1 = t \\}$ and integrating in polar coordinates on these:\n\\[ \\begin{aligned}\n\\int_{B_{r_0}} \\frac{\\chi_{\\widetilde{E}}(y)}{|y|^{d+2s}} \\; y_1 \\; dy & =\n\\int_0^{r_0} dt \\int_{\\Pi_t \\cap \\{ |z|^2 \\le r_0^2 - t^2\\} } \\frac{t}{\\left( t^2 + |z|^2 \\right)^\\frac{d+2s}{2}} \\; dz \\\\\n& = \\int_0^{r_0} dt \\int_0^{\\sqrt{r_0^2 - t^2}} dr \\int_{\\partial B^{(d-2)}_r} \\frac{t}{\\left( t^2 + r^2 \\right)^\\frac{d+2s}{2}} \\; dS(z) \\\\\n& = (d-1) \\alpha_{d-1} \\int_0^{r_0} dt \\int_0^{\\sqrt{r_0^2 - t^2}} \\frac{t r^{d-2}}{\\left( t^2 + r^2 \\right)^\\frac{d+2s}{2}} \\; dr .\n\\end{aligned} \\]\n\nThe iterated integral above can be calculated with elementary manipulations (Fubini's theorem, change of variables $t \\mapsto w = \\left(\\frac{t}{r}\\right)^2$ and explicit computation of integrals) to give\n\\[\n\\int_0^{r_0} dt \\int_0^{\\sqrt{r_0^2 - t^2}} \\frac{t r^{d-2}}{\\left( t^2 + r^2 \\right)^\\frac{d+2s}{2}} \\; dr = \\frac{r_0^{1-2s}}{(d-1)(1-2s)},\n\\]\nand therefore, as $s \\to \\frac12^-$,\n\\[\nC_{d-1,s} \\int_{B_{r_0}} \\frac{\\chi_{\\widetilde{E}}(y)}{|y|^{d+2s}} \\; y_1 \\; dy = C_{d-1,s} \\; (d-1) \\alpha_{d-1} \\frac{r_0^{1-2s}}{(d-1)(1-2s)} = r_0^{1-2s} \\to 1.\n\\]\nThis shows that\n\\begin{equation} \\label{eq:lim-E-tilde}\n\\lim_{s \\to \\frac12^-} \\frac{C_{d-1,s}}{2} \\int_{B_{r_0}} \\frac{2\\chi_{\\widetilde{E}}(y)}{|y|^{d+2s}} \\; y \\; dy = \\nu(x) .\n\\end{equation} \n\nUsing \\eqref{eq:proof-asymp-nonlocal-normal}, the difference between the integrals over $E$ and $\\widetilde{E}$ can be bounded as\n\\[ \\begin{aligned}\nC_{d-1,s} \\left| \\int_{B_{r_0}} \\frac{\\chi_{E} (y) - \\chi_{\\widetilde{E}}(y)}{|y|^{d+2s}} \\; y \\; dy \\right|\n& \\le C_{d-1,s} \\int_0^{r_0} dr \\int_{\\partial B_r} \\chi_{E \\triangle \\widetilde{E}}(y) \\; r^{-d-2s} \\; r \\; dS(y) \\\\\n& \\lesssim C_{d-1,s} \\int_0^{r_0} 
r^{\\gamma-2s} \\; dr = C_{d-1,s} \\; \\frac{r_0^{\\gamma+1-2s}}{\\gamma+1-2s},\n\\end{aligned} \\]\nwhere the right-hand side above tends to $0$ because $C_{d-1,s} = \\frac{1-2s}{\\alpha_{d-1}}$ and $\\gamma > 0$ is fixed. Combining this estimate with \\eqref{eq:proof-asymp-nonlocal-normal-2}, \\eqref{eq:lim-E-far} and \\eqref{eq:lim-E-tilde}, we finally get\n\\[\n\\lim_{s \\to {\\frac{1}{2}}^-} \\nu_s(x;E)\n= \\nu(x),\n\\]\nthereby finishing the proof.\n\\end{proof}\n\n\\begin{Remark}[localization]\\label{rem:asymp-nonlocal-normal}\nFrom the preceding proof, it follows that only the part of the integral near $x$ remains in the limit when $s \\to {\\frac{1}{2}}^-$. \nThus, for any neighborhood $\\mathcal{N}_x$ of $x$, we could similarly prove\n\\[\n\\lim_{s \\to {\\frac{1}{2}}^-} \\frac{C_{d-1,s}}{2} \\; \\int_{\\mathcal{N}_x} \\frac{\\chi_E(y) - \\chi_{E^c}(y)}{|x-y|^{d+2s}} \\; (y - x) \\; dy = \\nu(x)\n\\]\nwithout assuming that $E$ is bounded.\n\\end{Remark}\n\nWe now turn to the graph setting and consider \n\\[\nE = \\left\\{ (x, x_{d+1}): x_{d+1} \\le u(x), x \\in \\mathbb{R}^d \\right\\} \\subset \\mathbb{R}^{d+1},\n\\] \nwhere $u \\in L^{\\infty}(\\mathbb{R}^d)$. For such a set $E$ it is clear that our definition \\eqref{eq:def-nonlocal-normal} is not adequate: the limit of the integral therein does not exist. However, the only issue with such a definition is that the last component of the nonlocal normal vector in $\\mathbb{R}^{d+1}$ tends to $-\\infty$, and this can be remedied in a simple way. Indeed, we introduce the projection operator $P$ that maps \n\\[\n\\mathbb{R}^{d+1} \\ni \\widetilde{x} = (x,x_{d+1}) \\mapsto P(\\widetilde{x}) = x \\in \\mathbb{R}^d.\n\\] \nThen we can define the normal vector for this type of unbounded set $E$ as the projection $P\\left(\\nu_s(\\widetilde{x};E)\\right)$, where $\\widetilde{x} = (x,u(x))$. 
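To make the degeneracy of the last component explicit, we note the following heuristic bound, where $R_0 \ge 2(1 + \Vert u \Vert_{L^{\infty}(\mathbb{R}^d)})$ and $c, C > 0$ are constants depending on $u$ but not on $R$ (this sketch is not used in the proofs below):

```latex
% Outside the slab \{ |\widetilde{y}_{d+1}| \le \| u \|_{L^\infty} \} the last
% component of the integrand has a fixed (negative) sign, while the slab
% contributes at most C in absolute value. Hence
- \int_{B_R(\widetilde{x})}
    \frac{(\chi_E - \chi_{E^c})(\widetilde{y}) \,
          \big( \widetilde{y}_{d+1} - u(x) \big)}
         {|\widetilde{x}-\widetilde{y}|^{d+1+2s}} \, d\widetilde{y}
  \ge c \int_{R_0}^{R} \rho^{-2s} \, d\rho - C
  = c \, \frac{R^{1-2s} - R_0^{1-2s}}{1-2s} - C
  \longrightarrow \infty
  \quad \text{as } R \to \infty .
```

The horizontal components, by contrast, remain finite; this is precisely what the projection $P$ exploits.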
\n\nMore precisely, given $\\widetilde{x} = \\left(x,u(x) \\right)$, we define the projected nonlocal normal vector $\\widetilde{\\nu}_s(\\widetilde{x};E) = P\\left(\\nu_s(\\widetilde{x};E)\\right)$ as\n\\begin{equation}\\label{eq:def-nonlocal-normal-proj}\n \\widetilde{\\nu}_s(\\widetilde{x};E) = \\frac{C_{d,s}}{2} \\lim_{R \\to \\infty} \\int_{B_R(\\widetilde{x})}\n \\frac{\\chi_E(\\widetilde{y}) - \\chi_{E^c}(\\widetilde{y})}{|\\widetilde{x}-\\widetilde{y}|^{d+1+2s}} \\; P(\\widetilde{y} - \\widetilde{x}) \\; d\\widetilde{y},\n\\end{equation}\nwhere $x = P(\\widetilde{x})$ and $y = P(\\widetilde{y})$.\nTo show that this limit exists, consider the sets\n\\[ \\begin{aligned}\n& B_R^+\\left(\\widetilde{x}\\right) := \\left\\{ \\widetilde{y} = (y,y_{d+1}) \\in \\mathbb{R}^{d+1} : |\\widetilde{y} - \\widetilde{x}| \\le R, \\; y_{d+1} \\ge u(x) \\right\\}, \\\\\n& B_R^-(\\widetilde{x}) := B_R(\\widetilde{x}) \\setminus B_R^+(\\widetilde{x}).\n\\end{aligned}\\]\nSince the reflection $\\widetilde{y} = (y, y_{d+1}) \\mapsto (y, 2u(x) - y_{d+1})$ maps $B_R^+\\left(\\widetilde{x}\\right)$ onto $B_R^-\\left(\\widetilde{x}\\right)$ (up to a null set) while preserving both $P(\\widetilde{y} - \\widetilde{x})$ and $|\\widetilde{x}-\\widetilde{y}|$, we have the cancellation\n\\[\n\\int_{B_R^+(\\widetilde{x})} \\frac{P(\\widetilde{y} - \\widetilde{x})}{|\\widetilde{x}-\\widetilde{y}|^{d+1+2s}} d\\widetilde{y} - \\int_{B_R^-(\\widetilde{x})} \\frac{P(\\widetilde{y} - \\widetilde{x})}{|\\widetilde{x}-\\widetilde{y}|^{d+1+2s}} d\\widetilde{y} = 0,\n\\]\nwhere both integrals converge absolutely because $|P(\\widetilde{y} - \\widetilde{x})| \\le |\\widetilde{x}-\\widetilde{y}|$. 
Therefore, using that $\\chi_{E} = 1 - \\chi_{E^c}$, we can express\n\\[\\begin{aligned}\n\\int_{B_R(\\widetilde{x})} \\frac{\\chi_E(\\widetilde{y}) - \\chi_{E^c}(\\widetilde{y})}{|\\widetilde{x}-\\widetilde{y}|^{d+1+2s}} & \\; P(\\widetilde{y} - \\widetilde{x}) \\; d\\widetilde{y} \\\\\n & = 2\\int_{B_R^+(\\widetilde{x}) \\cap E}\n \\frac{(y - x)}{|\\widetilde{x}-\\widetilde{y}|^{d+1+2s}} d\\widetilde{y} - 2\\int_{B_R^-(\\widetilde{x}) \\setminus E}\n \\frac{(y - x)}{|\\widetilde{x}-\\widetilde{y}|^{d+1+2s}} d\\widetilde{y}.\n\\end{aligned} \\]\nThe two integrands above have enough decay at infinity because we are assuming $u \\in L^\\infty(\\mathbb{R}^d)$. Thus, as $R \\to \\infty$, we may replace $B_R^+(\\widetilde{x}) \\cap E$ by $E \\setminus H^-\\left(\\widetilde{x}\\right)$ and $B_R^-(\\widetilde{x}) \\setminus E$ by $H^-\\left(\\widetilde{x}\\right) \\setminus E$, where $H^-\\left(\\widetilde{x}\\right) := \\{(y,y_{d+1}) \\in \\mathbb{R}^{d+1} : y_{d+1} < u(x)\\}$ denotes the lower half-space. Hence, the vector defined in \\eqref{eq:def-nonlocal-normal-proj} can be written as\n\\[ \\begin{aligned}\n\\widetilde{\\nu}_s(\\widetilde{x};E) &= C_{d,s} \\int_{E \\setminus H^-\\left(\\widetilde{x}\\right)}\n \\frac{(y-x)}{|\\widetilde{x}-\\widetilde{y}|^{d+1+2s}} d\\widetilde{y} - C_{d,s} \\int_{H^-\\left(\\widetilde{x}\\right) \\setminus E}\n \\frac{(y-x)}{|\\widetilde{x}-\\widetilde{y}|^{d+1+2s}} d\\widetilde{y} \\\\\n &= C_{d,s} \\int_{\\mathbb{R}^d} dy \\int_{u(x)}^{u(y)} \\frac{(y-x)}{\\left(|x-y|^2+(y_{d+1}-u(x))^2 \\right)^{\\frac{d+1+2s}{2}}} \\; dy_{d+1} .\n\\end{aligned} \\]\n\nMaking the substitution $t = \\frac{y_{d+1}-u(x)}{|x-y|}$, recalling the definitions of $G_s$ and $d_u$ (cf. 
\\eqref{E:DEF-Gs} and \\eqref{eq:def-d_u}, respectively), and noticing that $d_u(x,y) = -d_u(y,x)$ and that $G_s$ is odd, we conclude that\n\\[ \\begin{aligned}\n\\widetilde{\\nu}_s(\\widetilde{x};E) & = C_{d,s} \\int_{\\mathbb{R}^d} (y-x) \\; dy \\int_{0}^{\\frac{u(y)-u(x)}{|x-y|}} \\frac{1}{|x-y|^{d+2s} \\left( 1+t^2 \\right)^{\\frac{d+1+2s}{2}}} \\; dt \\\\\n &= C_{d,s} \\int_{\\mathbb{R}^d} \\frac{G_s \\left(d_u(x,y) \\right)}{|x-y|^{d+2s}} \\; (x-y) \\; dy.\n\\end{aligned} \\]\n\nAs we mentioned above, $\\widetilde{\\nu}_s(\\widetilde{x};E)$ can be regarded as the projection of $\\nu_s(\\widetilde{x};E)$ under $P$. Therefore, following similar steps as in \\Cref{lem:asymp-nonlocal-normal} and \\Cref{rem:asymp-nonlocal-normal}, it is possible to prove the following result.\n\n\\begin{Lemma}[asymptotics of $\\widetilde{\\nu}_s$]\\label{lem:asymp-nonlocal-normal-proj}\nLet $E = \\left\\{ (x, x_{d+1}): x_{d+1} \\le u(x), x \\in \\mathbb{R}^d \\right\\}$, where $u \\in L^{\\infty}(\\mathbb{R}^d)$ and $u$ is locally $C^{1,\\gamma}$ around a point $x$ for some $\\gamma > 0$. Then, the following asymptotic behavior holds:\n\\begin{equation}\\label{eq:asymp-nonlocal-normal-proj}\n\\begin{aligned}\n\\lim_{s \\to {\\frac{1}{2}}^-} \\widetilde{\\nu}_s(\\widetilde{x};E) &= \n\\lim_{s \\to {\\frac{1}{2}}^-} C_{d,s} \\int_{\\mathbb{R}^d} \\frac{G_s \\left(d_u(x,y) \\right)}{|x-y|^{d+2s}} \\; (x-y) \\; dy \\\\\n&= \\frac{\\nabla u(x)}{\\sqrt{1+|\\nabla u(x)|^2}},\n\\end{aligned}\n\\end{equation}\nwhere $\\widetilde{x} = (x,u(x))$. 
In addition, we also have\n\\begin{equation}\\label{eq:asymp-nonlocal-normal-proj2}\n\\lim_{s \\to {\\frac{1}{2}}^-} C_{d,s} \\int_{\\mathcal{N}_x} \\frac{G_s \\left(d_u(x,y) \\right)}{|x-y|^{d+2s}} \\; (x-y) \\; dy\n= \\frac{\\nabla u(x)}{\\sqrt{1+|\\nabla u(x)|^2}},\n\\end{equation}\nfor any neighborhood $\\mathcal{N}_x$ of $x$.\n\\end{Lemma}\n\nOur next lemma deals with the interaction between the nonlocal normal vector to the graph of $u \\colon \\mathbb{R}^d \\to \\mathbb{R}$ and a function $v\\colon \\mathbb{R}^d \\to \\mathbb{R}$. For that purpose, we redefine $a_u$ so as to include the proper scaling factor for $s \\to \\frac12^-$.\nIndeed, given $u \\in \\mathbb{V}^g$, we set $a_u \\colon \\mathbb{V}^g \\times \\mathbb{V}^0 \\to \\mathbb{R}$ to be\n\\begin{equation} \\label{E:def-a-scaled}\na_u(w,v) := C_{d,s} \\iint_{Q_{\\Omega}} \\widetilde{G}_s\\left(\\frac{u(x)-u(y)}{|x-y|}\\right) \\frac{(w(x)-w(y))(v(x)-v(y))}{|x-y|^{d+1+2s}}dx dy.\n\\end{equation}\n\n\\begin{Lemma}[asymptotics of $a_u$ with H\\\"older regularity]\\label{lem:asymp-nonlocal-normal-interaction}\nLet $u,v \\in C^{1,\\gamma}_c(\\Lambda)$ for some $\\gamma > 0$ and a bounded set $\\Lambda$ containing $\\Omega \\subset \\mathbb{R}^d$. Then, it holds that\n\\[\n\\lim_{s \\to {\\frac{1}{2}}^-}\na_{u}(u,v) = \\int_{\\Omega} \\frac{\\nabla u(x) \\cdot \\nabla v(x)}{\\sqrt{1+|\\nabla u(x)|^2}} dx,\n\\]\nwhere $a_u(u,v)$ is the form defined in \\eqref{E:def-a-scaled}.\n\\end{Lemma}\n\n\\begin{Remark}[heuristic interpretation of \\Cref{lem:asymp-nonlocal-normal-interaction}]\nSuppose $v$ was a linear function. 
Then, for all $x,y$ we have $v(x) - v(y) = (x-y) \\cdot \\nabla v(x)$, and thus we can write\n\\[\na_u(u,v) = C_{d,s} \\iint_{Q_{\\Omega}} G_s\\left(d_u(x,y) \\right) \\frac{(x-y) \\cdot \\nabla v(x)}{|x-y|^{d+2s}} \\; dx dy,\n\\] \nwhile\n\\[\n\\widetilde{\\nu}_s(\\widetilde{x};E) \\cdot \\nabla v (x) = \nC_{d,s} \\left(\\int_{\\mathbb{R}^d} \\frac{G_s \\left(d_u(x,y) \\right)}{|x-y|^{d+2s}} (x-y) \\; dy \\right) \\cdot \\nabla v (x) .\n\\]\nTherefore, taking into account the asymptotic behavior in \\eqref{eq:asymp-nonlocal-normal-proj2},\n\\Cref{lem:asymp-nonlocal-normal-interaction} would follow upon integration of the identity above over $\\Omega$. \nHowever, for an arbitrary (nonlinear) $v$, we can only interpret $a_u(u,v)$ as a certain interaction between the nonlocal normal vector $\\widetilde{\\nu}_s$ and the `nonlocal gradient' $d_v$. Nevertheless, in the limit $s \\to {\\frac{1}{2}}^-$, only the interaction between nearby points $x$ and $y$ survives, and the asserted result follows because any $C^1$ function is locally well approximated by linear functions.\n\n\\end{Remark}\n\n\\begin{proof}[Proof of \\Cref{lem:asymp-nonlocal-normal-interaction}]\nWe first split the domain of integration using symmetry:\n\\begin{equation} \\label{eq:proof-asymp-nonlocal-normal-interaction-1}\n\\begin{aligned}\na_u(u,v) &= C_{d,s} \\iint_{\\Omega \\times \\Omega} G_s\\left(d_u(x,y)\\right) \\frac{d_v(x,y) }{|x-y|^{d-1+2s}} dxdy \\\\\n&+2 C_{d,s} \\iint_{\\Omega \\times \\Omega^c} G_s\\left(d_u(x,y)\\right) \\frac{d_v(x,y) }{|x-y|^{d-1+2s}} dxdy.\n\\end{aligned}\\end{equation}\n\nConsider the first integral in \\eqref{eq:proof-asymp-nonlocal-normal-interaction-1}. For a fixed $x \\in \\Omega$, we expand $v(y) = v(x) - \\nabla v(x) \\cdot (x-y) + O(|x-y|^{1+\\gamma})$ and exploit the fact that $|G_s|$ is uniformly bounded (cf. 
\\eqref{E:bound-Gs}) to obtain\n\\begin{equation}\\label{eq:proof-asymp-nonlocal-normal-interaction-2}\n{\\begin{aligned}\n& \\int_{\\Omega} G_s\\left(d_u(x,y)\\right) \\frac{d_v(x,y) }{|x-y|^{d-1+2s}} dy \\\\\n& = \\int_{\\Omega} G_s\\left(d_u(x,y)\\right) \\frac{\\nabla v(x) \\cdot (x-y) + O\\left(|x-y|^{1+\\gamma}\\right) }{|x-y|^{d+2s}} dy \\\\\n& = \\left( \\int_{\\Omega} G_s\\left(d_u(x,y)\\right) \\frac{x - y}{|x-y|^{d+2s}} dy \\right) \\cdot \\nabla v(x) + \nO\\left( \\int_{\\Omega} \\frac{1}{|x-y|^{d+2s-1-\\gamma}} dy \\right).\n\\end{aligned}}\n\\end{equation}\nLet us define $D = \\sup_{x,y \\in \\Lambda} |x-y|$. Then, it is clear that $\\Omega \\subset \\Lambda \\subset \\overline{B}_D(x)$ and integrating in polar coordinates we get\n\\[ \\begin{aligned}\nC_{d,s} \\int_{\\Omega} \\frac{1}{|x-y|^{d+2s-1-\\gamma}} dy & \\lesssim \\frac{1-2s}{\\alpha_{d}} \\int_0^D r^{-2s+\\gamma} dr \\\\% \\to 0, \\quad \\text{ as } s \\to {\\frac{1}{2}}^-.\n& \\lesssim \\frac{1-2s}{(\\gamma+1-2s) \\alpha_{d}} D^{\\gamma+1-2s} \\to 0, \\quad \\text{ as } s \\to {\\frac{1}{2}}^-.\n\\end{aligned} \\]\nIdentity \\eqref{eq:asymp-nonlocal-normal-proj2} guarantees that\n\\[\n\\lim_{s \\to {\\frac{1}{2}}^-} C_{d,s} \\left( \\int_{\\Omega} G_s\\left(d_u(x,y)\\right) \\frac{x - y}{|x-y|^{d+2s}} dy \\right) \\cdot \\nabla v(x) = \\frac{\\nabla u(x) \\cdot \\nabla v(x)}{\\sqrt{1+|\\nabla u(x)|^2}},\n\\]\nso that it follows from \\eqref{eq:proof-asymp-nonlocal-normal-interaction-2} that\n\\[\n\\lim_{s \\to {\\frac{1}{2}}^-} C_{d,s} \\int_{\\Omega} G_s\\left(d_u(x,y)\\right) \\frac{d_v(x,y) }{|x-y|^{d-1+2s}} dy =\n\\frac{\\nabla u(x) \\cdot \\nabla v(x)}{\\sqrt{1+|\\nabla u(x)|^2}},\n\\]\nfor every $x \\in \\Omega$. 
Since for all $x \\in \\Omega$ we have\n\\[ \\begin{aligned}\n\\left| C_{d,s} \\int_{\\Omega} G_s\\left(d_u(x,y)\\right) \\frac{d_v(x,y) }{|x-y|^{d-1+2s}} dy \\right| & \\lesssim\nC_{d,s} \\, |v|_{C^{0,1}(\\overline{\\Omega})} \\int_{\\overline{\\Omega}} \\frac{1}{|x-y|^{d-1+2s}} dy \\\\\n& \\lesssim d \\, (1-2s) \\, |v|_{C^{0,1}(\\overline{\\Omega})} \\int_0^D r^{-2s} dr\n= d \\, |v|_{C^{0,1}(\\overline{\\Omega})} \\, D^{1-2s},\n\\end{aligned} \\]\nwe can apply the Lebesgue Dominated Convergence Theorem to deduce that\n\\begin{equation} \\label{E:convergence-Omega}\n\\lim_{s \\to {\\frac{1}{2}}^-} C_{d,s} \\iint_{\\Omega\\times \\Omega} G_s\\left(d_u(x,y)\\right) \\frac{d_v(x,y)}{|x-y|^{d-1+2s}} dxdy\n= \\int_{\\Omega} \\frac{\\nabla u(x) \\cdot \\nabla v(x)}{\\sqrt{1+|\\nabla u(x)|^2}} dx.\n\\end{equation}\n\nIt remains to prove that the last term in \\eqref{eq:proof-asymp-nonlocal-normal-interaction-1} converges to $0$ as $s \\to {\\frac{1}{2}}^-$. This is a consequence of the Dominated Convergence Theorem as well. For $x \\in \\Omega$, we write $\\delta(x) = \\mbox{dist}(x,\\partial\\Omega)$. 
We first use that $|G_s|$ is uniformly bounded, according to \\eqref{E:bound-Gs}, and integrate in polar coordinates to obtain, for every $x \\in \\Omega$,\n\\[\n\\left| C_{d,s} \\int_{\\Omega^c} G_s\\left(d_u(x,y)\\right) \\frac{d_v(x,y) }{|x-y|^{d-1+2s}} dy \\right|\n\\lesssim (1-2s) \\; |v|_{C^{0,1}(\\overline{\\Omega})} \\int_{\\delta(x)}^D r^{-2s} \\;dr \\to 0.\n\\]\n\nTo prove that the integrands are uniformly bounded, we invoke again the uniform boundedness of $G_s$ and split the integral with respect to $y$ into two parts:\n\\[{\\begin{aligned}\n& C_{d,s} \\int_{\\Omega^c} G_s\\left(d_u(x,y)\\right) \\frac{d_v(x,y) }{|x-y|^{d-1+2s}} dy \\lesssim (1-2s) \\int_{\\Omega^c} \\frac{d_v(x,y) }{|x-y|^{d-1+2s}} dy\\\\\n& \\lesssim (1-2s) \\left( \\int_{\\{y \\colon |y-x| \\le 1 \\}} \\frac{d_v(x,y) }{|x-y|^{d-1+2s}} dy\n+ \\int_{\\{ y \\colon |y-x| > 1\\} }\\frac{d_v(x,y) }{|x-y|^{d-1+2s}} dy \\right) \\\\\n& \\leq (1-2s) \\left( \\int_{\\{y \\colon |y-x| \\le 1 \\}} \\frac{|v|_{C^{0,1}(\\Lambda)} }{|x-y|^{d-1+2s}} dy + \\int_{\\{ y \\colon |y-x| > 1\\} } \\frac{2|v|_{L^{\\infty}(\\Lambda)}}{|x-y|^{d+2s}} dy \\right) \\\\\n& \\lesssim (1-2s) \\left( \\int_0^1 r^{-2s} dr + \\int_1^{\\infty} r^{-2s-1} dr \\right) = 1 + \\frac{1-2s}{2s}.\n\\end{aligned}} \\]\nConsequently, we have proved that\n\\[\n\\lim_{s \\to {\\frac{1}{2}}^-} C_{d,s} \\iint_{\\Omega\\times \\Omega^c} G_s\\left(d_u(x,y)\\right) \\frac{d_v(x,y)}{|x-y|^{d-1+2s}} dxdy\n= 0.\n\\]\nThis, together with \\eqref{eq:proof-asymp-nonlocal-normal-interaction-1} and \\eqref{E:convergence-Omega}, finishes the proof.\n\\end{proof}\n\nActually, the regularity assumptions in \\Cref{lem:asymp-nonlocal-normal-interaction} can be weakened by a density argument. 
To this aim, we recall the following stability\nresult proved in \\cite[Theorem 1]{BourBrezMiro2001another}: given $f \\in W^1_p(\\Omega)$, $1 \\le p < \\infty$ and $\\rho \\in L^1(\\mathbb{R}^d)$ such that $\\rho \\ge 0$,\n\\begin{equation}\\label{eq:Bourgain-stab}\n\\iint_{\\Omega \\times \\Omega} \\frac{|f(x)-f(y)|^p}{|x-y|^p} \\rho(x-y) \\;dxdy \\le C \\Vert f \\Vert^p_{W^1_p(\\Omega)} \\Vert \\rho \\Vert_{L^1(\\mathbb{R}^d)}.\n\\end{equation}\nThe constant $C$ depends only on $p$ and $\\Omega$. We next state and prove a modified version of \\Cref{lem:asymp-nonlocal-normal-interaction}.\n\n\\begin{Lemma}[asymptotics of $a_u$ with Sobolev regularity]\\label{lem:asymp-nonlocal-normal-interaction-2}\nLet $u,v \\in H^1_0(\\Lambda)$, for some bounded set $\\Lambda$ containing $\\Omega$. Then, it holds that\n\\[\n\\lim_{s \\to {\\frac{1}{2}}^-} a_u(u,v) = \\int_{\\Omega} \\frac{\\nabla u(x) \\cdot \\nabla v(x)}{\\sqrt{1+|\\nabla u(x)|^2}} dx.\n\\]\n\\end{Lemma}\n\n\\begin{proof}\nFirst we point out that the double integral in the definition of $a_u$ is stable in the $H^1$-norm. More specifically, for $u_1,u_2,v_1,v_2 \\in H^1_0(\\Lambda)$, we have\n\\[ \\begin{aligned}\n |a_{u_1}&(u_1,v_1) - a_{u_2}(u_2,v_2)|\n \\le |a_{u_1}(u_1,v_1) - a_{u_1}(u_1,v_2) + a_{u_1}(u_1,v_2) - a_{u_2}(u_2,v_2) | \\\\\n&\\lesssim (1-2s) \\iint_{\\mathbb{R}^d \\times \\mathbb{R}^d} \\frac{|d_{u_1}(x,y)| \\; |d_{v_1-v_2}(x,y)| + |d_{u_1-u_2}(x,y)| \\; |d_{v_2}(x,y)|}{|x-y|^{d-1+2s}} \\; dx dy.\n\\end{aligned} \\]\n\nAs before, set $D = \\sup_{x,y \\in \\Lambda} |x-y|$. 
Using the Cauchy-Schwarz inequality and choosing\n\\[\n\\rho(x) = \\left\\{\n\\begin{array}{ll}\n|x|^{-d+1-2s}, \\quad & |x| \\le D \\\\\n0, \\quad & |x| > D\n\\end{array}\n\\right.\n\\]\nin \\eqref{eq:Bourgain-stab}, we obtain that \n\\[ \\begin{aligned}\n |a_{u_1}& (u_1,v_1) - a_{u_2}(u_2,v_2)| \\\\\n& \\lesssim (1-2s) \\left(|u_1|_{H^1(\\mathbb{R}^d)} |v_1 - v_2|_{H^1(\\mathbb{R}^d)} + |u_1-u_2|_{H^1(\\mathbb{R}^d)} |v_2|_{H^1(\\mathbb{R}^d)} \\right)\n\\| \\rho \\|_{L^1(\\mathbb{R}^d)}.\n\\end{aligned} \\]\n\n\nFor the function $\\rho$ we have chosen, it holds that \n\\[\n \\| \\rho \\|_{L^1(\\mathbb{R}^d)} = \\frac{d \\alpha_d D^{1-2s}}{1-2s}.\n\\]\nThus, we obtain the following stability result for the form $a$:\n\\begin{equation} \\label{E:stability-a}\n{|a_{u_1}(u_1,v_1) - a_{u_2}(u_2,v_2)|\n\\le C \\left(|u_1|_{H^1(\\mathbb{R}^d)} |v_1 - v_2|_{H^1(\\mathbb{R}^d)} + |u_1-u_2|_{H^1(\\mathbb{R}^d)} |v_2|_{H^1(\\mathbb{R}^d)} \\right),}\n\\end{equation}\nwhere the constant $C$ is independent of $s \\in (0,\\frac{1}{2})$ and the functions involved. \n\nA standard argument now allows us to conclude the proof. Given $u,v \\in H^1_0(\\Lambda)$, consider sequences $\\{u_n\\}, \\{v_n\\} \\in C^{\\infty}_c(\\Lambda)$ such that $u_n \\to u$ and $v_n \\to v$ in $H^1(\\mathbb{R}^d)$. Due to \\Cref{lem:asymp-nonlocal-normal-interaction}, we have, for every $n$,\n\\[\n\\lim_{s \\to {\\frac{1}{2}}^-} a_{u_n}(u_n,v_n) = \\int_{\\Omega} \\frac{\\nabla u_n(x) \\cdot \\nabla v_n(x)}{\\sqrt{1+|\\nabla u_n(x)|^2}} dx.\n\\]\nApplying \\eqref{E:stability-a}, the claim follows.\n\\end{proof}\n\nWe are finally in position to show the asymptotic behavior of the notion of error $e_s$ introduced at the beginning of this section (cf. \\eqref{eq:def-es}). 
Notice that, with the rescaling \\eqref{E:def-a-scaled}, \n\\[\ne_s^2(u,v) = a_u(u,u) - a_u(u,v) - a_v(v,u) + a_v(v,v),\n\\]\nwhile for its local counterpart \\eqref{eq:def-e},\n\\[\ne^2(u,v) = \\int_\\Omega \\frac{\\nabla u(x) \\cdot \\nabla u(x)}{\\sqrt{1+|\\nabla u(x)|^2}}\n- \\frac{\\nabla u(x) \\cdot \\nabla v(x)}{\\sqrt{1+|\\nabla u(x)|^2}}\n- \\frac{\\nabla v(x) \\cdot \\nabla u(x)}{\\sqrt{1+|\\nabla v(x)|^2}}\n+ \\frac{\\nabla v(x) \\cdot \\nabla v(x)}{\\sqrt{1+|\\nabla v(x)|^2}}.\n\\]\nApplying \\Cref{lem:asymp-nonlocal-normal-interaction-2} term by term in the expansions above, we conclude that effectively, $e_s$ recovers $e$ in the limit. \n\n\\begin{Theorem}[asymptotics of $e_s$] \\label{Thm:asymptotics-es}\nLet $u,v \\in H^1_0(\\Lambda)$, for some bounded set $\\Lambda$ containing $\\Omega$. Then, we have\n\\[\n\\lim_{s \\to {\\frac{1}{2}}^-} e_s(u,v) = e(u,v).\n\\]\n\\end{Theorem}\n\n\n\\section{Numerical experiments} \\label{sec:numerics}\nThis section presents some numerical results that illustrate the properties of the algorithms discussed in \\Cref{SS:schemes}. As an example, we consider $\\Omega = B_1 \\setminus \\overline{B}_{1\/2}$, where $B_r$ denotes an open ball with radius $r$ centered at the origin. For the Dirichlet data, we simply let $g = 0$ in $\\mathbb{R}^d \\setminus B_1$ and $g = 0.4$ in $\\overline{B}_{1\/2}$. Our computations are performed on an Intel Xeon E5-2630 v2 CPU (2.6 GHz), 16 GB RAM using MATLAB R2016b. More numerical experiments will be presented in an upcoming paper by the authors \\cite{BoLiNo19computational}.\n\n\\begin{Remark}[classical minimal graph in a symmetric annulus]\\label{R:NMS-classical-annulus}\nWe consider the classical graph Plateau problem in the same domain as our example above, with $g = 0$ on $\\partial B_1$ and $g = \\gamma$ on $\\partial B_{1\/2}$.\nWhen $\\gamma > \\gamma^* := \\frac{1}{2}\\ln(2+\\sqrt{3}) \\approx 0.66$, the minimal surface consists of two parts. 
The first part is given by the graph of the function $u(x,y) = \\gamma^* - \\frac{1}{2} \\cosh^{-1}(2\\sqrt{x^2+y^2})$ and the second part is given by $\\{(x,y,z): \\gamma^* \\le z \\le \\gamma, (x,y) \\in \\partial B_{1\/2} \\}$. In this situation, a stickiness phenomenon occurs and $u$ is discontinuous across $\\partial B_{1\/2}$. Notice that with our choice of Dirichlet data, $\\gamma = 0.4 < \\gamma^*$, stickiness should not be observed for the classical minimal graph.\n\\end{Remark}\n\nWe first compute the solution $u_h$ of the nonlinear system \\eqref{E:WeakForm-discrete} using the $L^2$-gradient flow mentioned in \\Cref{SS:schemes}. For $s = 0.25$ and mesh size $h = 2^{-4}$, we choose the initial solution $u_h^0 = 0$ and time step $\\tau = 1$. The computed discrete solution $u_h$ is plotted in \\Cref{F:GF-s025}. By symmetry, the continuous solution $u$ should be radially symmetric, and we almost recover this property on the discrete level, except in the region very close to $\\partial B_{1\/2}$ where the norm of $\\nabla u_h$ is large. It is also seen that $0 \\le u_h \\le 0.4$, which shows computationally that the numerical scheme is stable in $L^{\\infty}$ for this example. To justify convergence of the $L^2$-gradient flow, we consider the hat functions $\\{ \\varphi_i \\}_{i=1}^N$ forming a basis of $\\mathbb{V}^0_h$, where $N$ is the number of degrees of freedom. Considering the residual vector $\\{ r_i \\}_{i=1}^N$, where $r_i := a_{u_h^k}(u_h^k, \\varphi_i)$, we plot the Euclidean norm $\\Vert r \\Vert_{l^2}$ along the iterations for different time steps $\\tau$ in \\Cref{F:GF-properties} (left). In the picture, the lines for $\\tau = 1$ and $\\tau = 10$ almost coincide, and we get faster convergence (fewer iterations) for larger time steps $\\tau$. For every choice of time step, we computationally observe linear convergence of the gradient flow iteration. 
We have also tried different choices of the initial solution $u_h^0$, and always observe a similar linear convergence behavior. \nWe also plot the energy $I_s[u_h^k]$ along the iterations in \\Cref{F:GF-properties} (right); this shows that the energy $I_s[u_h^k]$ monotonically decreases along the gradient flow iterations independently of the step size $\\tau > 0$. This energy decay property will be proved in the upcoming paper \\cite{BoLiNo19computational}.\n\n\\begin{figure}[!htb]\n\t\\begin{center}\n\t\t\\includegraphics[width=0.425\\linewidth]{Ex3_s025_white.png}\n\t\\end{center}\n\t\\caption{\\small Plot of $u_h$ computed by $L^2$-gradient flow for $s = 0.25$ and $h = 2^{-4}$.}\n\t\\label{F:GF-s025}\n\\end{figure}\n\n\\begin{figure}[!htb]\n\t\\begin{center}\n\t\t\\includegraphics[width=0.376\\linewidth]{Ex3_residue_new.eps}\n\t\t%\n\t\t\\includegraphics[width=0.38\\linewidth]{Ex3_Energy_new.eps}\n\t\\end{center}\n\t\\caption{\\small Gradient flow for $s=0.25$ for different choices of $\\tau$. Left: norm of the residual vector $r$ in the iterative process. Right: energy $I_s[u_h^k]$ in the iterative process.}\n\t\\label{F:GF-properties}\n\\end{figure}\n\n\n\\begin{figure}[!htb]\n\t\\begin{center}\n\t\t\\includegraphics[width=0.6\\linewidth]{Ex3_multis_white.png}\n\t\\end{center}\n\t\\caption{\\small Plot of $u_h$ computed by the damped Newton method for $s = 0.05, 0.15, 0.25, 0.35, 0.45$ (from left to right) and $h = 2^{-4}$.}\n\t\\label{F:NMS-Ex3_multis}\n\\end{figure}\n\nThe solution $u_h$ of the nonlinear system \\eqref{E:WeakForm-discrete} can also be computed using the damped Newton method mentioned in \\Cref{SS:schemes}. We choose the initial solution $u_h^0 = 0$; the plots of $u_h$ for several values of $s \\in (0, 1\/2)$ are shown in \\Cref{F:NMS-Ex3_multis}. The computed discrete solution for $s = 0.25$ is almost the same as the one computed by the gradient flow in \\Cref{F:GF-s025}. 
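The damped Newton method referred to above can likewise be sketched in a simplified setting. The following toy example is an assumption-laden illustration: the nonlinear map $F(u) = Au + u^3 - f$ is a stand-in, not the discrete minimal-graph system. It shows the standard structure of the iteration: solve for the Newton direction, then halve the step until the residual norm decreases.

```python
import numpy as np

# Hedged sketch of a damped Newton iteration for F(u) = 0, with step-halving
# line search on the residual norm; F is a toy cubic perturbation of a linear
# system, chosen so that the Jacobian stays symmetric positive definite.
rng = np.random.default_rng(1)
n = 30
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
f = rng.standard_normal(n)

def F(u):
    return A @ u + u**3 - f

def JF(u):
    return A + np.diag(3 * u**2)            # Jacobian of F

u = np.zeros(n)
iters = 0
while np.linalg.norm(F(u)) > 1e-10 and iters < 50:
    d = np.linalg.solve(JF(u), -F(u))       # Newton direction
    alpha = 1.0
    # damping: halve the step until the residual norm strictly decreases
    while np.linalg.norm(F(u + alpha * d)) >= np.linalg.norm(F(u)):
        alpha *= 0.5
    u = u + alpha * d
    iters += 1
print(iters, np.linalg.norm(F(u)))
```

On this well-conditioned toy problem, full Newton steps are quickly accepted and only a handful of iterations are needed, consistent with the iteration counts reported in the text.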
However, the damped Newton method is more efficient than the gradient flow: it requires only $4$ iterations and $243$ seconds, compared with $26$ iterations and $800$ seconds for the gradient flow with $\\tau = 1$. \n\nAs shown in \\Cref{F:NMS-Ex3_multis}, the graph of $u_h$ near $\\partial B_{1\/2}$ is steeper and the norm of $\\nabla u_h$ is larger for smaller $s$, while the graph becomes smoother and the norm of $\\nabla u_h$ smaller as $s$ increases. This seems to suggest a stickiness phenomenon (see \\Cref{R:stickiness}) for small $s$ in this example. We also notice that on the other part of the boundary, $\\partial B_{1}$, the stickiness seems to be small or to vanish (i.e., the gap of $u$ across $\\partial\\Omega$ is small or zero), which is expected since there is no stickiness on $\\partial B_{1}$ in the classical case (\\Cref{R:NMS-classical-annulus}).\n\nDue to the Gamma-convergence result for the fractional perimeter in \\cite[Theorem 3]{AmbrPhilMart2011Gamma}, the $s$-nonlocal minimal graph $u$ converges to the classical minimal graph $u^*$ in $L^1(\\Omega)$ as $s \\to \\frac12^-$. Since we know the analytical solution $u^{*}$ of the classical minimal graph in our example, we can verify our computation by comparing the discrete nonlocal minimal graph $u_h$ for $s = 0.499999 \\approx \\frac12$ with $u^*$. \\Cref{F:Newton-multi-h} shows that $\\Vert u_h - u^* \\Vert_{L^1(\\Omega)}$ converges to a small number, which indicates the convergence of $u_h$ as $h \\to 0$, with a second-order convergence rate. Although we cannot prove this rate theoretically, it might be due to the fact that $s$ is very close to $0.5$, so that the nonlocal graph is almost the same as the classical one. 
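The empirical convergence rate quoted above can be extracted from the errors $\Vert u_h - u^*\Vert_{L^1(\Omega)}$ at a sequence of mesh sizes by least-squares regression in log-log scale. A minimal sketch, using synthetic $O(h^2)$ placeholder data rather than the paper's measurements:

```python
import numpy as np

# Sketch of a least-squares rate estimate: fit e_h ~ C h^p through the points
# (log h, log e_h).  The error values below are synthetic placeholders that
# mimic second-order behavior; they are not the paper's data.
h = np.array([2.0**-k for k in range(2, 7)])
err = 0.5 * h**2 * (1 + 0.05 * np.sin(np.arange(5)))   # synthetic ~O(h^2) data

# the slope of the least-squares line in log-log scale is the rate p
p, logC = np.polyfit(np.log(h), np.log(err), 1)
print(round(p, 2))
```

With clean $O(h^2)$ data the fitted slope is close to $2$, which is how a regressed rate such as the $1.96$ reported later should be read.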
In fact, this $O(h^2)$ convergence rate has been proved for the classical minimal graph problems in $L^1$ norm under proper assumptions for dimension $d=2$ in \\cite[Theorem 2]{JoTh75}.\n\n\\begin{figure}[!htb]\n\t\\begin{center}\n\t\t\\includegraphics[width=0.45\\linewidth]{NMS_error_l1.eps}\n\t\\end{center}\n\t\\caption{\\small Plot of $\\Vert u_h - u^* \\Vert_{L^1(\\Omega)}$ for different mesh size $h$, where $u_h$ is the discrete solution for $s = 0.499999$ and $u^*$ is the exact solution of classical minimal graph. Least square regression suggests a convergence rate $1.96$, which is close to $O(h^2)$.}\n\t\\label{F:Newton-multi-h}\n\\end{figure}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Introduction}\n\n\n\nWe consider a linear stochastic bandit problem in high dimension $K$. At each round $t$, from $1$ to $n$, the player chooses an arm $x_t$ in a fixed set of arms and receives a reward $r_t= \\langle x_t, \\theta + \\eta_t \\rangle $, where $\\theta \\in \\mathbb R^K$ is an unknown parameter and $\\eta_t$ is a noise term. Note that $r_t$ is a (noisy) projection of $\\theta$ on $x_t$. The goal of the learner is to maximize the sum of rewards.\n\nWe are interested in cases where the number of rounds is much smaller than the dimension of the parameter, i.e.~$n \\ll K$. This is new in bandit literature but useful in practice, as illustrated by the problem of gradient ascent for a high-dimensional function, described later.\n\n\nIn this setting it is in general impossible to estimate $\\theta$ in an accurate way (since there is not even one sample per dimension). It is thus necessary to restrict the setting, and the assumption we consider here is that $\\theta$ is $S$-\\textit{sparse} (i.e., at most $S$ components of $\\theta$ are non-zero). 
We assume also that the set of arms to which $x_t$ belongs is the unit ball with respect to the $||.||_2$ norm induced by the inner product.\n\n\n\n\\vspace{-0.2cm}\n\n\\paragraph{Bandit Theory meets Compressed Sensing} \n\nThis problem poses the fundamental question at the heart of bandit theory, namely the exploration\\footnote{Exploring all directions makes it possible to build a good estimate of all the components of $\\theta$ in order to deduce which arms are the best.} versus exploitation\\footnote{Pulling the empirically best arms in order to maximize the sum of rewards.} dilemma. Usually, when the dimension $K$ of the space is smaller than the budget $n$, it is possible to project the parameter $\\theta$ \\textit{at least once} onto \\textit{each} direction of a basis (e.g.~the canonical basis), which enables efficient exploration. However, in our setting where $K\\gg n$, this is not possible anymore, and we use the sparsity assumption on $\\theta$ to build a clever exploration strategy.\n\n\n\\textit{Compressed Sensing} (see e.g.~\\citep{candes2007dantzig,chen1999atomic,blumensath2009iterative}) provides us with an exploration technique that makes it possible to estimate $\\theta$, or more simply its support, \\textit{provided that $\\theta$ is sparse}, with few measurements. The idea is to project $\\theta$ on random (isotropic) directions $x_t$ such that each reward sample provides equal information about \\textit{all} coordinates of $\\theta$. This is the reason why we choose the set of arms to be the unit ball. Then, using a regularization method (Hard Thresholding, Lasso, Dantzig selector...), one can recover the support of the parameter. \nNote that although Compressed Sensing enables us to build a good estimate of $\\theta$, it is not designed for the purpose of maximizing the sum of rewards. Indeed, this exploration strategy is uniform and non-adaptive (i.e., the sampling direction $x_t$ at time $t$ does not depend on the previously observed rewards $r_1,\\dots,r_{t-1}$). 
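As a concrete, purely illustrative toy version of the Compressed Sensing step described above, one can back-project $n \ll K$ noisy isotropic measurements and hard-threshold the result to recover the support. The dimensions, noise level, and threshold below are arbitrary choices for the example, not the paper's.

```python
import numpy as np

# Toy sketch of Compressed-Sensing-style support recovery (illustration only):
# observe n << K noisy projections r_t = <x_t, theta + noise> on random
# isotropic directions, back-project, then hard-threshold to find the support.
rng = np.random.default_rng(2)
K, n, S = 5000, 800, 3
theta = np.zeros(K)
support = rng.choice(K, S, replace=False)
theta[support] = [1.0, -0.8, 0.7]              # S-sparse unknown parameter

X = rng.standard_normal((n, K)) / np.sqrt(K)   # isotropic sampling directions
r = X @ theta + 0.01 * rng.standard_normal(n)  # noisy linear rewards

theta_hat = (K / n) * X.T @ r                  # crude unbiased back-projection
support_hat = np.flatnonzero(np.abs(theta_hat) > 0.35)  # hard thresholding
print(set(support_hat) == set(support))
```

Since $\mathbb{E}[(K/n)X^\top X] = I$, the back-projection concentrates around $\theta$, and for entries well separated from zero the thresholding step isolates the support with high probability.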
\n\nOn the contrary, \\textit{Linear Bandit Theory} (see e.g.~\\citet{rusmevichientong2008linearly,dani2008stochastic,filippiparametric} and the recent work by \\citet{Csaba}) addresses this issue of maximizing the sum of rewards by efficiently balancing exploration and exploitation. The main idea of our algorithm is to use Compressed Sensing to estimate the (small) support of $\\theta$, and to combine this with a linear bandit algorithm whose set of arms is restricted to the estimated support of $\\theta$. \n\n\n\n\n\n\n\n\n{\\bf Our contributions} are the following:\n\n\\vspace{-0.2cm}\n\\begin{itemize}\n \\item We provide an algorithm, called SL-UCB (for Sparse Linear Upper Confidence Bound), that mixes ideas from Compressed Sensing and Bandit Theory, and we provide a regret bound\\footnote{We define the notion of regret in Section \\ref{s:setting}.} of order $O(S\\sqrt{n})$.\n\n\\item We detail an application of this setting to the problem of gradient ascent of a high-dimensional function that depends on a small number of relevant variables only (i.e., its gradient is sparse). We explain why gradient ascent can be seen as a bandit problem and report numerical experiments showing the efficiency of SL-UCB for this high-dimensional optimization problem.\n\\end{itemize}\n\n\\vspace{-0.2cm}\n\nThe topic of sparse linear bandits is also considered in the paper~\\citep{Yasin}, published simultaneously. Their regret bound scales as $O(\\sqrt{K S n})$ (whereas ours does not show any dependence on $K$), but they do not make the assumption that the set of arms is the Euclidean ball, and their noise model is different from ours. \n\n\nIn Section \\ref{s:setting} we describe our setting and recall a result on linear bandits. Then in Section~\\ref{s:SL-UCB} we describe the SL-UCB algorithm and provide the main result. 
In Section~\\ref{s:grad.asc} we detail the application to gradient ascent and provide numerical experiments.\n\n\\vspace{-0.2cm}\n\n\n\\section{Setting and a useful existing result} \\label{s:setting}\n\\vspace{-0.1cm}\n\n\\subsection{Description of the problem}\n\nWe consider a linear bandit problem in dimension $K$. An algorithm (or strategy) $\\mathcal Alg$ is given a budget of $n$ pulls. At each round $1\\leq t \\leq n$ it selects an arm $x_t$ in the set of arms $\\mathcal B_{K}$, which is the unit ball for the $||.||_2$-norm induced by the inner product. It then receives a reward\n\\begin{equation*}\nr_t = \\langle x_t,\\theta+\\eta_t\\rangle ,\n\\end{equation*}\nwhere $\\eta_t\\in \\mathbb R^K$ is an i.i.d.~white noise\\footnote{This means that $\\mathbb{E}_{\\eta_t}(\\eta_{k,t}) = 0$ for every $(k,t)$, that the $(\\eta_{k,t})_k$ are independent and that the $(\\eta_{k,t})_t$ are i.i.d..} that is independent from the past actions, i.e.~from $\\Big\\{(x_{t'})_{t'\\leq t}\\Big\\}$, and $\\theta \\in \\mathbb R^K$ is an unknown parameter.\n\nWe define the \\textit{performance} of algorithm $\\mathcal Alg$ as\n\n\\vspace{-0.2cm}\n\\begin{equation}\\label{def:performance}\nL_n(\\mathcal Alg) = \\sum_{t=1}^n \\langle \\theta,x_t\\rangle .\n\\end{equation}\n\nNote that $L_n(\\mathcal Alg)$ differs from the sum of rewards $\\sum_{t=1}^n r_t$ but is close (up to a $O(\\sqrt{n})$ term) in high probability. 
Indeed, $\\sum_{t=1}^n \\langle \\eta_t,x_t\\rangle$ is a Martingale, thus if we assume that the noise $\\eta_{k,t}$ is bounded by $\\frac{1}{2}\\sigma_k$ (note that this can be extended to sub-Gaussian noise), Azuma's inequality implies that with probability $1-\\delta$, we have $\\sum_{t=1}^n r_t = L_n(\\mathcal Alg)+ \\sum_{t=1}^n \\langle \\eta_t,x_t\\rangle \\leq L_n(\\mathcal Alg) + \\sqrt{2\\log(1\/\\delta)} ||\\sigma||_2 \\sqrt{n}$.\n\n\n\nIf the parameter $\\theta$ were known, the best strategy $\\mathcal Alg^*$ would always pick $x^* = \\arg\\max_{x \\in \\mathcal B_{K}} \\langle \\theta,x\\rangle = \\frac{\\theta}{||\\theta||_2}$ and obtain the performance:\n\n\\vspace{-0.3cm}\n\\begin{equation}\\label{def:performance*}\nL_n(\\mathcal Alg^*) = n ||\\theta||_2.\n\\end{equation}\n\n\nWe define the \\textit{regret} of an algorithm $\\mathcal Alg$ with respect to this optimal strategy as\n\n\\vspace{-0.3cm}\n\\begin{equation}\\label{def:regret}\nR_n(\\mathcal Alg) = L_n(\\mathcal Alg^*) - L_n(\\mathcal Alg).\n\\end{equation}\n\n\n\nWe consider the class of algorithms that do not know the parameter $\\theta$. Our objective is to find an adaptive strategy $\\mathcal Alg$ (i.e.~that makes use of the history $\\{(x_1,r_1),\\ldots,(x_{t-1},r_{t-1})\\}$ at time $t$ to choose the next state $x_t$) with smallest possible regret.\n\nFor a given $t$, we write $X_t = (x_1;\\ldots;x_t)$ the matrix in $\\mathbb R^{K\\times t}$ of all chosen arms, and $R_t = (r_1, \\ldots, r_t)^T$ the vector in $\\mathbb R^t$ of all rewards, up to time $t$.\n\nIn this paper, we consider the case where the dimension $K$ is much larger than the budget, i.e., $n \\ll K$.\nAs already mentioned, in general it is impossible to estimate accurately the parameter and thus achieve a sub-linear regret. 
This is the reason why we make the assumption that $\\theta$ is $S-$sparse with $S0$ and (ii) $t\\geq \\frac{\\sqrt{n}}{\\max_k |\\hat{\\theta}_{k,t}| - \\frac{b}{\\sqrt{t}}}$.\n\n\\paragraph{Step 1: A result on the empirical best arm}\n\nOn the event $\\xi$, we know that for any $t$ and any $k$, $|\\theta_{k}| - \\frac{b}{\\sqrt{t}} \\leq |\\hat{\\theta}_{k,t}| \\leq |\\theta_{k}| + \\frac{b}{\\sqrt{t}}$. In particular for $k^* = \\arg\\max_k |\\theta_k|$ we have\n\\begin{align}\\label{eq:criterion}\n|\\theta_{k^*}| - \\frac{b}{\\sqrt{t}} \\leq \\max_k |\\hat{\\theta}_{k,t}| \\leq |\\theta_{k^*}| + \\frac{b}{\\sqrt{t}}.\n\\end{align}\n\n\\vspace{-0.3cm}\n\n\\paragraph{Step 2: Maximum length of the Support Exploration Phase.}\n\nIf $|\\theta_{k^*}| - \\frac{3b}{\\sqrt{t}} >0$ then by Equation~\\ref{eq:criterion}, the first (i) criterion is verified on $\\xi$. If $t \\geq \\frac{1}{\\theta_{k^*} - \\frac{3b}{\\sqrt{t}}}\\sqrt{n}$ then by Equation~\\ref{eq:criterion}, the second (ii) criterion is verified on $\\xi$.\n\nNote that both those conditions are thus verified if $t \\geq \\max\\big(\\frac{9b^2}{|\\theta_{k^*}|^2}, \\frac{4\\sqrt{n}}{3|\\theta_{k^*}|}\\big)$. The Support Exploration Phase stops thus before this moment. Note that as the budget of the algorithm is $n$, we have on $\\xi$ that $T \\leq \\max\\big(\\frac{9b^2}{|\\theta_{k^*}|^2}, \\frac{4\\sqrt{n}}{3|\\theta_{k^*}|}, n\\big) \\leq \\frac{9\\sqrt{S}b^2}{||\\theta||_2} \\sqrt{n}$. We write $T_{\\max} = \\frac{9\\sqrt{S}b^2}{||\\theta||_2} \\sqrt{n}$.\n\n\n\\paragraph{Step 3: Minimum length of the Support Exploration Phase.}\n\n\nIf the first (i) criterion is verified then on $\\xi$ by Equation~\\ref{eq:criterion} $|\\theta_{k^*}| - \\frac{b}{\\sqrt{t}} >0$. 
If the second (ii) criterion is verified then on $\\xi$ by Equation~\\ref{eq:criterion} we have $t \\geq \\frac{\\sqrt{n}}{|\\theta_{k^*}|}$.\n\nCombining those two results, we have on the event $\\xi$ that $T \\geq \\max\\big(\\frac{b^2}{\\theta_{k^*}^2}, \\frac{\\sqrt{n}}{|\\theta_{k^*}|}\\big) \\geq \\frac{b^2}{||\\theta||_2}\\sqrt{n}$. We write $T_{\\min} = \\frac{b^2}{||\\theta||_2} \\sqrt{n}$.\n\n\n\n\n\n\\subsection{Description of the set $\\mathcal A$}\n\n\nThe set $\\mathcal A$ is defined as $\\mathcal A = \\Big\\{k: |\\hat{\\theta}_{k, T}| \\geq \\frac{2b}{\\sqrt{T}}\\Big\\}$.\n\n\n\\paragraph{Step 1: Arms that are in $\\mathcal A$}\n\n\n\nLet us consider an arm $k$ such that $|\\theta_k| \\geq \\frac{3b\\sqrt{||\\theta||_2}}{n^{1\/4}}$. Note that $T \\geq T_{\\min} = \\frac{b^2}{||\\theta||_2} \\sqrt{n}$ on $\\xi$. We thus know that on $\\xi$\n\\vspace{-0.3cm}\n\\begin{align*}\n|\\hat{\\theta}_{k,T}| &\\geq |\\theta_{k}| - \\frac{b}{\\sqrt{T}}\\geq \\frac{3b\\sqrt{||\\theta||_2}}{n^{1\/4}} - \\frac{b\\sqrt{||\\theta||_2}}{n^{1\/4}} \\geq \\frac{2b}{\\sqrt{T}}.\n\\end{align*}\nThis means that $k \\in \\mathcal A$ on $\\xi$. We thus know that $|\\theta_k| \\geq \\frac{3b\\sqrt{||\\theta||_2}}{n^{1\/4}}$ implies on $\\xi$ that $k \\in \\mathcal A$.\n\n\\paragraph{Step 2: Arms that are not in $\\mathcal A$}\n\n\nNow let us consider an arm $k$ such that $|\\theta_k| < \\frac{b}{2\\sqrt{n}}$. Then on $\\xi$, we know that\n\\vspace{-0.3cm}\n\\begin{align*}\n|\\hat{\\theta}_{k,T}| &< |\\theta_{k}| + \\frac{b}{\\sqrt{T}}< \\frac{b}{2\\sqrt{n}} + \\frac{b}{\\sqrt{T}} < \\frac{3b}{2\\sqrt{T}} < \\frac{2b}{\\sqrt{T}}.\n\\end{align*}\n\nThis means that $k \\in \\mathcal A^c$ on $\\xi$. 
This implies that on $\\xi$, if $\\theta_k=0$, then $k \\in \\mathcal A^c$.\n\n\n\\paragraph{Step 3: Summary.}\n\nFinally, we know that $\\mathcal A$ contains all the indices $k$ such that $|\\theta_k| \\geq \\frac{3b\\sqrt{||\\theta||_2}}{n^{1\/4}}$, and that it contains only indices of nonzero components $\\theta_k$, i.e.~at most $S$ elements since $\\theta$ is $S$-sparse. We write $\\mathcal A_{\\min} = \\{ k: |\\theta_k| \\geq \\frac{3b\\sqrt{||\\theta||_2}}{n^{1\/4}}\\}$.\n\n\n\n\\subsection{Comparison of the best element on $\\mathcal A$ and on $\\mathcal B_{K}$.}\n\nNow let us compare $\\max_{x_t \\in Vec(\\mathcal A)\\cap\\mathcal B_{K}} \\langle \\theta,x_t\\rangle$ and $\\max_{x_t \\in \\mathcal B_{K}} \\langle \\theta,x_t\\rangle$.\n\nFirst, note that $\\max_{x_t \\in \\mathcal B_{K}} \\langle \\theta,x_t\\rangle = ||\\theta||_2$ and that $\\max_{x_t \\in Vec(\\mathcal A)\\cap\\mathcal B_{K}} \\langle \\theta,x_t\\rangle = ||\\theta_{\\mathcal A}||_2 = \\sqrt{\\sum_{k=1}^K \\theta_k^2 \\ind{k \\in \\mathcal A}}$, where $\\theta_{\\mathcal A,k} = \\theta_k$ if $k \\in \\mathcal A$ and $\\theta_{\\mathcal A,k} = 0$ otherwise. This means that\n\\vspace{-0.2cm}\n\\begin{align}\n &\\max_{x_t \\in \\mathcal B_{K}} \\langle \\theta,x_t\\rangle - \\max_{x_t \\in Vec(\\mathcal A)\\cap\\mathcal B_{K}} \\langle \\theta,x_t\\rangle \\nonumber \\\\ \n&= ||\\theta||_2 - ||\\theta\\ind{k\\in\\mathcal A}||_2 = \\frac{||\\theta||_2^2 - ||\\theta\\ind{k\\in\\mathcal A}||_2^2}{||\\theta||_2 + ||\\theta\\ind{k\\in\\mathcal A}||_2} \\nonumber \\\\\n&\\leq \\frac{\\sum_{k\\in\\mathcal A^c} \\theta_k^2}{||\\theta||_2}\\leq \\frac{\\sum_{k\\in\\mathcal A_{\\min}^c} \\theta_k^2}{||\\theta||_2} \\leq \\frac{9Sb^2}{\\sqrt{n}}. 
\\label{eq:distthetbig}\n\\end{align}\n\\vspace{-0.4cm}\n\n\n\\subsection{Expression of the regret of the algorithm}\n\n\nAssume that at time $T$ we run the algorithm $CB_2(Vec(\\mathcal A)\\cap\\mathcal B_{K},\\delta, T)$, where $\\mathcal A \\subset Supp(\\theta)$, with a budget of $n_1 = n-T$ samples. In the paper~\\citep{dani2008stochastic}, it is proved that on an event $\\xi_2(Vec(\\mathcal A)\\cap\\mathcal B_{K}, \\delta, T)$ of probability $1-\\delta$ the regret of algorithm $CB_2$ is bounded by $R_n(\\mathcal Alg_{CB_2(Vec(\\mathcal A)\\cap\\mathcal B_{K},\\delta, T)}) \\leq 64 |\\mathcal A|\\Big(||\\theta||_2 + ||\\sigma||_2\\Big)(\\log(n^2\/\\delta))^2\\sqrt{n_1}$.\n\nNote that since $\\mathcal A \\subset Supp(\\theta)$, we have $\\xi_2(Vec(\\mathcal A)\\cap\\mathcal B_{K}, \\delta, T) \\subset \\xi_2(Vec(Supp(\\theta))\\cap\\mathcal B_{K}, \\delta, T)$ (see the paper~\\citep{dani2008stochastic} for more details on the event $\\xi_2$). We thus know that, conditionally on $T$, with probability $1-\\delta$, the regret is bounded for any $\\mathcal A \\subset Supp(\\theta)$ as $R_n(\\mathcal Alg_{CB_2(Vec(\\mathcal A)\\cap\\mathcal B_{K},\\delta,T)}) \\leq 64 S\\Big(||\\theta||_2 + ||\\sigma||_2\\Big)(\\log(n^2\/\\delta))^2\\sqrt{n_1}$.\n\nBy a union bound on all possible values of $T$ (i.e.~from $1$ to $n$), we obtain that on an event $\\xi_2$ whose probability is larger than $1-\\delta$, $R_n(\\mathcal Alg_{CB_2(Vec(\\mathcal A)\\cap\\mathcal B_{K},\\delta, T)}) \\leq 64 S\\Big(||\\theta||_2 + ||\\sigma||_2\\Big)(\\log(n^3\/\\delta))^2\\sqrt{n}$.\n\nWe thus have on $\\xi \\cap \\xi_2$, i.e.~on an event with probability larger than $1-2\\delta$, that\n\n\\vspace{-0.4cm}\n\\begin{align}\n R_n(\\mathcal Alg_{SL-UCB}, \\delta) &\\leq 2T_{\\max}||\\theta||_2 \\nonumber\\\\ \n&+ \\max_t R_n(\\mathcal Alg_{CB_2(Vec(\\mathcal A)\\cap\\mathcal B_{K},\\delta, t)}) \\nonumber\\\\\n &+ n \\Big( \\max_{x \\in \\mathcal B_{K}} \\langle x,\\theta\\rangle - 
\\hspace{-6mm}\\max_{x \\in \\mathcal B_{K}\\cap Vect(\\mathcal A_{\\min})} \\langle x,\\theta\\rangle \\Big). \\nonumber\n\\end{align}\n\\vspace{-0.4cm}\n\nBy using this Equation, the maximal length of the support exploration phase $T_{\\max}$ deduced in Step 2 of Subsection~\\ref{ss:length}, and Equation~\\ref{eq:distthetbig}, we obtain on $\\xi$ that\n\\begin{eqnarray*}\n R_n &\\leq& 64 S\\big(||\\theta||_2 + ||\\sigma||_2\\big)(\\log(n^2\/\\delta))^2\\sqrt{n} \\\\\n& & + 18 Sb^2\\sqrt{n}+ 9Sb^2 \\sqrt{n}\\\\\n&\\leq& 118 (\\bar{\\theta}_2+ \\bar{\\sigma}_2)^2 \\log(2K\/\\delta)S \\sqrt{n}.\n\\end{eqnarray*}\nby using $b=(\\bar{\\theta}_2+ \\bar{\\sigma}_2) \\sqrt{2\\log(2K\/\\delta)}$ for the third step.\n\n\n\n\n\\section*{Conclusion}\n\n\n\nIn this paper we introduced the SL-UCB algorithm for sparse linear bandits in high dimension. It has been designed using ideas from Compressed Sensing and Bandit Theory. Compressed Sensing is used in the support exploration phase, in order to select the support of the parameter. A linear bandit algorithm is then applied to the small dimensional subspace defined in the first phase. We derived a regret bound of order $O(S\\sqrt{n})$. Note that the bound scales with the sparsity $S$ of the unknown parameter $\\theta$ instead of the dimension $K$ of the parameter (as is usually the case in linear bandits). We then provided an example of application for this setting, the optimization of a function in high dimension. Possible further research directions include:\n\\vspace{-0.2cm}\n\\begin{itemize}\n\\item The case when the support of $\\theta$ changes with time, for which it would be important to define assumptions under which sub-linear regret is achievable. 
One idea would be to use techniques developed for \\textit{adversarial bandits} (see \\citep{abernethy2008competing, bartlett2008high, cesa2009combinatorial, koolen2010hedging, audibert2011minimax}, but also \\citep{flaxman2005online} for a more gradient-specific modeling) or also from \\textit{restless\/switching bandits} (see e.g.~\\citep{whittle1988restless,nino2001restless, slivkins2008adapting,garivier} and many others). This would be particularly interesting to model gradient ascent for e.g.~convex function where the support of the gradient is not constant.\n\\item Designing an improved analysis (or algorithm) in order to achieve a regret of order $O(\\sqrt{Sn})$, which is the lower bound for the problem of linear bandits in a space of dimension $S$. Note that when an upper bound $S'$ on the sparsity is available, it seems possible to obtain such a regret by replacing condition (ii) in the algorithm by $t < \\frac{\\sqrt{n}}{||\\big(\\hat{\\theta}_{t,k}\\ind{\\hat{\\theta}_{t,k} \\geq \\frac{b}{\\sqrt{t}}}\\big)_k||_2 - \\frac{\\sqrt{S'b}}{\\sqrt{t}}}$, and using for the Exploitation phase the algorithm in~\\citep{rusmevichientong2008linearly}. The regret of such an algorithm would be in $O(\\sqrt{S'n})$. But it is not clear whether it is possible to obtain such a result when no upper bound on $S$ is available (as is the case for SL-UCB).\n\\end{itemize}\n\n\\subsection*{Acknowledgements}\n\n\\vspace{-3mm}\nThis research was partially supported by Region Nord-Pas-de-Calais Regional Council, French ANR EXPLO-RA (ANR-08-COSI-004), the European Community's Seventh Framework Programme (FP7\/2007-2013) under grant agreement 231495 (project CompLACS), and by Pascal-2.\n\n\n\n\n\n\n\n\n\n\n\n\\newpage\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThis paper aims to provide a short, simple, and complete proof of existence of equilibrium prices under the same set of assumptions made in Mas-Colell et al. (1995). 
The existence result is derived from two lemmas, which define, respectively, the non-empty compact convex set and the continuous function required by the Brouwer fixed point theorem. As usual, it is shown that the fixed point is an equilibrium price vector.\\par\nIt would be pretentious to say that the proof is new, as it closely follows some results of Neuefeind (1980) and Geanakoplos (2003). However, the approach developed here highlights clearly the central role of fixed point theorems in proving the existence of equilibrium prices. This can be interesting from a pedagogical and historical point of view, as the two results are strongly linked. I further discuss this point in the last section.\\par\nLet me briefly outline the results of Neuefeind (1980) and Geanakoplos (2003). Neuefeind (1980) showed that, under a proper boundary behaviour of the excess demand function, it is possible to consider a trimmed simplex which does not contain prices equal to zero. In the first lemma I prove the same result by considering a different boundary condition. Geanakoplos (2003) added a perturbation to a correspondence considered by Debreu (1959) to make it single-valued and to apply the Brouwer fixed point theorem. The second lemma states this key result and I report its proof for completeness. Furthermore, by using the result of Neuefeind (1980), the boundary condition of the excess demand function is used in a different way from Geanakoplos (2003) (for a detailed analysis of the boundary condition see Ruscitti, 2012).\n\n\\section{Mathematical model}\nConsider an exchange economy with $L$ commodities, $l=1,\\dots,L$. Let $z:\\mathbb{R}_{++}^L\\rightarrow\\mathbb{R}^L$ be the exchange economy's excess demand function defined over the set of positive price vectors. I make the same assumptions used in Proposition 17.C.1 of Mas-Colell et al.
(1995):\n\\begin{enumerate}\n\\item[(A1)]$z(\\cdot)$ is continuous;\n\\item[(A2)]$z(\\cdot)$ is homogeneous of degree zero;\n\\item[(A3)]$p\\cdot z(p)=0$ for all $p\\in\\mathbb{R}_{++}^L$ (Walras' law);\n\\item[(A4)]There is an $s>0$ such that $z_l(p)>-s$ for all $l$ and for all $p\\in\\mathbb{R}_{++}^L$;\n\\item[(A5)]If $p^n\\rightarrow p$, where $p\\neq 0$ and $p_l=0$ for some $l$,\\footnote{The symbol $0$ denotes the origin in $\\mathbb{R}^L$ as well as the real number zero.} then $$\\mbox{max}\\{z_1(p^n),\\dots, z_L(p^n)\\}\\rightarrow+\\infty.$$\\end{enumerate}\nExchange economies with a positive aggregate endowment of each commodity and with consumers having continuous, strongly monotone, and strictly convex preferences satisfy Assumptions A1-A5 (see Proposition 17.B.2 in Mas-Colell et al., 1995).\\par\nLet $\\Delta=\\{p\\in \\mathbb{R}_+^L:\\sum_l p_l=1\\}$ be the unit simplex. Since the excess demand function is homogeneous of degree zero, its domain can be restricted to the interior of the unit simplex $\\mbox{int}\\Delta$. However, this is an open set, and to apply the fixed point theorem a closed set containing only positive prices is required. For this reason, I define a trimmed simplex $\\Delta^\\epsilon=\\{p\\in \\Delta:p_l\\geq\\epsilon, \\mbox{ for all }l\\}$, with $\\epsilon\\in(0,\\frac{1}{L}]$. In the paper it is always assumed that $\\epsilon$ lies in $(0,\\frac{1}{L}]$.
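As a quick numerical illustration of Assumptions A1-A3 (the economy, tastes, and endowments below are an invented toy example of mine, not from the paper), the following sketch evaluates the excess demand function of a two-consumer Cobb-Douglas exchange economy and checks Walras' law and homogeneity of degree zero:

```python
import numpy as np

def excess_demand(p, alphas, endowments):
    """Aggregate excess demand z(p) of a Cobb-Douglas exchange economy.

    Consumer i with utility prod_l x_l^{a_il} (each row of `alphas` sums
    to 1) demands x_il = a_il * (p . w_i) / p_l, so
    z_l(p) = sum_i a_il (p . w_i) / p_l - sum_i w_il.
    """
    p = np.asarray(p, dtype=float)
    wealth = endowments @ p                  # p . w_i for each consumer i
    demand = alphas * wealth[:, None] / p    # Cobb-Douglas demand system
    return demand.sum(axis=0) - endowments.sum(axis=0)

# Two goods, two consumers, symmetric tastes, "crossed" endowments.
alphas = np.array([[0.5, 0.5], [0.5, 0.5]])
endow = np.array([[1.0, 0.0], [0.0, 1.0]])

p = np.array([0.7, 0.3])
z = excess_demand(p, alphas, endow)

assert abs(p @ z) < 1e-12                                   # (A3) Walras' law
assert np.allclose(excess_demand(2 * p, alphas, endow), z)  # (A2) homogeneity
assert np.allclose(excess_demand([0.5, 0.5], alphas, endow), 0.0)
```

Preferences of this Cobb-Douglas type are continuous, monotone, and strictly convex on the interior, so such economies fall within the scope of the standing assumptions (cf.\ Proposition 17.B.2 cited above).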
Finally, in the proof I will repeatedly use the price vector $\\bar{q}=(\\frac{1}{L},\\dots,\\frac{1}{L})$, at which all prices are equal, and the following straightforward result.\n\\begin{proposition} The sets $\\Delta$ and $\\Delta^\\epsilon$ are non-empty, compact, and convex.\\end{proposition}\nI finally state the theorem of existence of equilibrium prices.\n\\begin{theorem}Under Assumptions A1-A5, there exists a $p^*\\in\\mathbb{R}_{++}^L$ such that $z(p^*)=0$.\\end{theorem}\nIt is immediate to see that $z(p)=0$ corresponds to the system of $L$ equations in $L$ unknowns introduced by Walras (1874-7), and the solution $p^*$ is the equilibrium price vector at which all markets clear.\n\n\\section{Proof}\nThe first step consists in defining the non-empty, compact, and convex set required by the fixed point theorem. To this end, the result of the next lemma implies that if an equilibrium price vector $p^*$ exists, then there is an $\\epsilon$ such that $p^*$ belongs to the interior of the set $\\Delta^\\epsilon$.\n\\begin{lemma}Let $Q=\\{p\\in \\mbox{int}\\Delta: \\sum_l z_l(p)\\leq 0\\}$. Under Assumptions A1-A5, there exists an $\\epsilon\\in(0,\\frac{1}{L}]$ such that $Q\\subseteq \\mbox{int}\\Delta^\\epsilon$.\\end{lemma}\n\\begin{proof}First, the set $Q$ is non-empty as the price vector $\\bar{q}$ belongs to $Q$ by Walras' law, i.e., $\\frac{1}{L}\\sum_l z_l(\\bar{q})=0$. Next, I show, by contradiction, that there exists an $\\epsilon\\in(0,\\frac{1}{L}]$ such that $Q\\subseteq\\mbox{int}\\Delta^\\epsilon$. Suppose that for all $\\epsilon\\in(0,\\frac{1}{L}]$ there exists a price vector $p\\in Q$ such that $p\\notin\\mbox{int}\\Delta^\\epsilon$. Consider a sequence $\\{\\epsilon^n\\}$ with $\\epsilon^n=\\frac{1}{nL}$. Then, there exists a sequence of price vectors $\\{p^n\\}$ such that $p^n\\in Q$ and $p^n\\notin \\mbox{int}\\Delta^{\\epsilon^n}$ for all $n$.
Since the sequence $\\{p^n\\}$ belongs to the compact set $\\Delta$, there is a subsequence $p^{k_n}\\rightarrow p$ with $p\\in\\Delta$. As $p^{k_n}\\notin\\mbox{int}\\Delta^{\\epsilon^{k_n}}$ for all $n$, it follows that, for all $n$, $p_l^{k_n}\\leq\\epsilon^{k_n}$ for some $l$. Then, I can conclude that $p_l=0$ for some $l$ because $\\epsilon^{k_n}\\rightarrow 0$. Hence, since $p\\in\\Delta$ implies $p\\neq 0$, $\\mbox{max}\\{z_1(p^{k_n}),\\dots,z_L(p^{k_n})\\}\\rightarrow+\\infty$ by A5. Moreover, since $z(\\cdot)$ has a lower bound by A4, the inequality $\\mbox{max}\\{z_1(p^{k_n}),\\dots, z_L(p^{k_n})\\}-s(L-1)<\\sum_{l}z_l(p^{k_n})$ holds for all $n$. As the left hand side converges to infinity by the previous result, there exists an $m$ such that $\\mbox{max}\\{z_1(p^{k_n}),\\dots, z_L(p^{k_n})\\}-s(L-1)>0$ for all $n>m$. But then, it follows that $\\sum_l z_l(p^{k_n})>0$ for all $n>m$. As $p^{k_n}\\in Q$ for all $n$, it also follows that $\\sum_l z_l(p^{k_n})\\leq 0$ for all $n$, a contradiction. Therefore, there exists an $\\epsilon\\in(0,\\frac{1}{L}]$ such that $Q\\subseteq \\mbox{int}\\Delta^\\epsilon$.\\end{proof}\nThe next step consists in defining the continuous function from $\\Delta^\\epsilon$ to itself required by the Brouwer fixed point theorem. A possible approach would be to consider the correspondence\n$$\\mu(p)=\\underset{q\\in \\Delta^\\epsilon}{\\mbox{arg max}}\\{q\\cdot z(p)\\},$$\nwhich may be multivalued, and apply the Kakutani fixed point theorem. As noted in Debreu (1959), the use of this correspondence is motivated by the idea that if there is excess demand (supply) in a market, the price should increase (decrease).
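Numerically, the perturbed map introduced next is easy to compute: since q·z(p) − ‖q−p‖² = −‖q − (p + z(p)/2)‖² + const, its maximizer over the trimmed simplex is the Euclidean projection of p + z(p)/2 onto that set. The sketch below (my illustration, not part of the paper, using an assumed symmetric two-good Cobb-Douglas economy whose excess demand on the simplex is z_l(p) = 1/(2p_l) − 1) iterates this projected price-adjustment map and converges to the equal-price equilibrium:

```python
import numpy as np

def project_trimmed_simplex(v, eps):
    """Euclidean projection onto {q : sum_l q_l = 1, q_l >= eps}.

    Shift by eps, then project onto the scaled simplex
    {r >= 0, sum r = 1 - L*eps} with the standard sort-based algorithm.
    """
    v = np.asarray(v, dtype=float)
    L = len(v)
    s = 1.0 - L * eps
    w = v - eps
    u = np.sort(w)[::-1]
    css = np.cumsum(u) - s
    rho = np.nonzero(u * np.arange(1, L + 1) > css)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return eps + np.maximum(w - theta, 0.0)

def z(p):
    """Excess demand of a symmetric two-good Cobb-Douglas economy."""
    return 0.5 / p - 1.0

eps = 0.05
p = np.array([0.8, 0.2])
for _ in range(200):
    # phi(p) = argmax_{q in trimmed simplex} { q.z(p) - ||q - p||^2 }
    #        = projection of p + z(p)/2 onto the trimmed simplex
    p = project_trimmed_simplex(p + z(p) / 2.0, eps)

assert np.allclose(p, [0.5, 0.5], atol=1e-8)  # fixed point = equilibrium price
assert np.max(np.abs(z(p))) < 1e-6            # all markets clear
```

The iteration itself is only a heuristic for this toy economy; the paper uses the map solely to apply the Brouwer fixed point theorem, which guarantees a fixed point without any convergence argument.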
Instead, I follow the approach of Geanakoplos (2003), who introduces a perturbation, $-\\|q-p\\|^2$, to make the correspondence above single-valued.\\footnote{The symbol $\\|x-y\\|$ denotes the Euclidean distance between the vectors $x$ and $y$.} The next lemma deals with the continuous function $\\phi(\\cdot)$ on which I will apply the Brouwer fixed point theorem.\n\\begin{lemma}Let $\\phi:\\Delta^\\epsilon\\rightarrow \\Delta^\\epsilon$ be such that\n$$\n\\phi(p)=\\underset{q\\in \\Delta^\\epsilon}{\\mbox{arg\\ max}}\\left\\{q\\cdot z(p)-\\|q-p\\|^2\\right\\}.$$\nUnder Assumptions A1-A5, $\\phi(\\cdot)$ is a continuous function.\\end{lemma}\n\\begin{proof}Define the function $g(q,p)=q\\cdot z(p)-\\|q-p\\|^2$. Since I consider only prices belonging to $\\Delta^\\epsilon$, $g(\\cdot,\\cdot)$ is continuous as it is a sum of continuous functions. Fix $p$. Then, $g(\\cdot,p)$ has a maximum point on the non-empty compact set $\\Delta^\\epsilon$ by the Weierstrass Theorem. It is immediate to verify that the square of the Euclidean distance is strictly convex. In fact, its Hessian matrix is a diagonal matrix with positive entries and it is then positive definite. Then, $g(\\cdot,p)$ is strictly concave because it is a sum of a linear function and a strictly concave function. But then, the maximum is unique and $\\phi(\\cdot)$ is a function. Finally, I prove that $\\phi(\\cdot)$ is continuous. Let $q^*=\\phi(p)$ be the unique maximum point of $g(\\cdot,p)$. Since $\\Delta^\\epsilon$ is compact, I can consider a sequence $p^n\\rightarrow p$ and a corresponding subsequence $\\{p^{k_n}\\}$ such that $\\phi(p^{k_n})\\rightarrow r$ with $r\\in\\Delta^\\epsilon$. By the definition of $\\phi(\\cdot)$, the following inequality holds: $g(q^*,p^{k_n})\\leq g(\\phi(p^{k_n}),p^{k_n})$. As the function $g(\\cdot,\\cdot)$ is continuous, it follows that $g(q^*,p^{k_n})\\rightarrow g(q^*,p)$ and $g(\\phi(p^{k_n}),p^{k_n})\\rightarrow g(r,p)$.
Then, $g(q^*,p)\\leq g(r,p)$ by the properties of sequences. Since the maximum point is unique, it follows that $r=q^*$. But then, if the subsequence $p^{k_n}\\rightarrow p$, it follows that $\\phi(p^{k_n})\\rightarrow q^*=\\phi(p)$. Hence, $\\phi(\\cdot)$ is continuous.\\end{proof}\nI finally prove the existence theorem.\n\\begin{proof} By the results in Lemmas 1 and 2, I can apply the Brouwer fixed point theorem and then there exists a fixed point $p^*$ of the function $\\phi(\\cdot)$, i.e., $p^*=\\phi(p^*)$. Next, I show that $p^*\\in\\mbox{int}\\Delta^\\epsilon$. Since $p^*$ is the unique maximum point of $g(\\cdot,p^*)$ on $\\Delta^\\epsilon$ and $g(p^*,p^*)=0$ by Walras' law, it follows that \n$$g(\\alpha p+(1-\\alpha)p^*,p^*)=(\\alpha p+(1-\\alpha)p^*)\\cdot z(p^*)-\\|(\\alpha p+(1-\\alpha)p^*)-p^*\\|^2<0,$$\nfor each $\\alpha\\in (0,1]$ and $p\\in\\Delta^\\epsilon$ with $p\\neq p^*$. By applying Walras' law, the previous inequality simplifies to\n$$p\\cdot z(p^*)<\\alpha\\|p-p^*\\|^2,$$\nfor each $\\alpha\\in (0,1]$. Suppose now that there exists a $p\\in \\Delta^\\epsilon$ such that $p\\cdot z(p^*)>0$. Then, there exists a sufficiently small $\\bar{\\alpha}\\in(0,1]$ such that $p\\cdot z(p^*)>\\bar{\\alpha}\\|p-p^*\\|^2$, a contradiction. Hence, $p\\cdot z(p^*)\\leq 0$ for all $p\\in\\Delta^\\epsilon$. Since $\\bar{q}\\in \\Delta^\\epsilon$, it follows that $\\bar{q}\\cdot z(p^*)=\\frac{1}{L}\\sum_lz_l(p^*)\\leq 0$, which implies that $p^*\\in Q$. But then, $p^*\\in \\mbox{int}\\Delta^{\\epsilon}$ by Lemma 1. Furthermore, note that $p^*$ maximises $p\\cdot z(p^*)$ on $\\Delta^\\epsilon$ because $p^*\\cdot z(p^*)=0$, by Walras' law, and $p\\cdot z(p^*)\\leq 0$ for all $p\\in\\Delta^\\epsilon$, by the previous result. Finally, I show that the fixed point $p^*$ is an equilibrium price vector, i.e., $z(p^*)=0$. I proceed by contradiction and suppose that there exists a commodity $l$ such that $z_l(p^*)<0$.
As $p^*$ maximises $p\\cdot z(p^*)$ on $\\Delta^\\epsilon$, it follows that $p_l^*=\\epsilon$. But $p^*\\in \\mbox{int}\\Delta^{\\epsilon}$, a contradiction. Hence, $z_l(p^*)\\geq 0$ for all $l$. By the fact that $\\sum_l z_l(p^*)\\leq 0$, I can conclude that $z(p^*)=0$.\\end{proof}\n\n\\section{Fixed point theorems and equilibrium prices}\nIn the literature there are alternative approaches to prove the existence of equilibrium prices which do not rely on fixed point theorems (see Greenberg (1977) and Quah (2008) among others). However, they require stronger assumptions, such as weak gross substitutability or the weak axiom of revealed preference. This is due to the fact that there is an equivalence between fixed point theorems and the existence of equilibrium prices under the classical assumptions, as shown by Uzawa (1962). Debreu (1982) also pointed out that the proof of existence of equilibrium prices, under the classical assumptions, requires mathematical tools of the same power as fixed point theorems.\\par\nAs the Brouwer fixed point theorem was published only in 1911, it is not surprising that the problem of existence of equilibrium prices formulated by Walras (1874-7) was first solved in a general way by McKenzie (1954) and Arrow and Debreu (1954).\\par\nUnder more restrictive assumptions some rigorous existence results were also obtained by A. Wald and J. von Neumann in the 1930s (for a modern explanation of Wald's result see John, 1999). It is also worth mentioning that A. Wald wrote another paper on the existence problem in 1935 which was unfortunately lost.
Duppe and Weintraub (2016) gave a detailed history of this lost proof and clarified that it applied a fixed-point theorem to show the existence of equilibrium prices in exchange economies.\n\n\n{\\small ","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe \\textsc{Set Cover}\\ problem is one of the most fundamental and well-studied problems in approximation algorithms, and was one of Karp's original 21 NP-complete problems~\\cite{karp}. It can be stated quite plainly: given a finite set $X$ of $n$ items and a collection $\\mathcal S \\subseteq 2^X$ of $m$ subsets\nof $X$ called ``cover-sets'', the \\textsc{Set Cover}\\ problem on instance $(X, \\mathcal S)$\nis the problem of finding the smallest collection $\\mathcal C$ of cover-sets in $\\mathcal S$ such that $X = \\bigcup_{S \\in \\mathcal C} S$.\nThat is, every item in $X$ must appear in at least one cover-set in $\\mathcal C$. Here, we will consider the minimum cost (or weighted) version of the problem, where each\ncover-set $S$ has a nonnegative cost $c(S)$, and the goal is to find a collection of cover-sets with minimum total cost, subject to the above constraint.\n\n\\iffalse\nOne can also consider a minimum cost version of the problem where each $S \\in \\mathcal S$ has a cost $c(S)$ and the the goal is to find a subset $\\mathcal C$ of $\\mathcal S$\nof minimum total cost $\\sum_{S \\in \\mathcal C} c(S)$ such that $X = \\bigcup_{S \\in \\mathcal C} S$. This generalizes the former version of the problem by setting all costs $c(S)$ to 1, and we will\nwork with this version of \\textsc{Set Cover}\\ throughout the paper.\n\\fi\n\nAs is well known, the problem can be approximated within a logarithmic factor. 
For instance, Johnson~\\cite{johnson} showed that for uniform costs,\nthe simple greedy algorithm that iteratively chooses the cover-set containing the maximum number of uncovered elements\ngives an $H_n$-approximation (where $H_n=\\ln n + O(1)$ is the $n$th harmonic number $\\sum_{k=1}^{\\lfloor n\\rfloor} 1\/k$). %\nLater, Lov\\'asz~\\cite{lovasz} showed that the cost of the solution found by this greedy algorithm is at most an $H_n$-factor larger than the optimum value of the natural linear programming (LP) relaxation.\n\nChv\\'atal~\\cite{chvatal} extended these results to a greedy algorithm for the weighted case. He also proved that the approximation guarantee of this algorithm is actually only $H_b$, where\n$b$ is the size of the largest cover-set in $\\mathcal S$, and moreover that this algorithm gives an $H_b$-factor approximation relative to the optimum of the natural {\\textsf{LP}} relaxation (thus %\nextending the integrality gap bound of Lov\\'asz~\\cite{lovasz}).\nSlav\\'ik~\\cite{slavik} refined the lower-order terms in the analysis of the greedy algorithm's approximation guarantee, giving a tight bound of $\\ln n - \\ln\\ln n + O(1)$.\nSrinivasan~\\cite{srinivasan} also improved the lower-order terms in the upper bound on the integrality gap. On the other hand,\n the integrality gap of the standard {\\textsf{LP}} relaxation for \\textsc{Set Cover}\\ is at least $(1-o(1))\\ln n$. Recently, Cygan, Kowalik, and Wykurz \\cite{cygan:kowalik:wykurz} demonstrated that \\textsc{Set Cover}\\ can be approximated\nwithin $(1 - \\varepsilon)\\cdot \\ln n + O(1)$ in time $2^{n^\\varepsilon + O(\\log m)}$.
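The weighted greedy rule analyzed by Chv\'atal above — repeatedly pick the cover-set of minimum cost per newly covered item — can be sketched in a few lines (an illustrative implementation with invented instance data, not code from the cited papers):

```python
def greedy_set_cover(universe, sets, cost):
    """Chvatal-style weighted greedy: repeatedly pick the cover-set with
    the smallest cost per newly covered item. Its cost exceeds the LP
    optimum by at most H_b, where b is the largest cover-set size."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = min(
            (i for i in range(len(sets)) if uncovered & sets[i]),
            key=lambda i: cost[i] / len(uncovered & sets[i]),
        )
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

X = range(6)
S = [{0, 1, 2, 3}, {3, 4}, {4, 5}, {0, 1, 2, 3, 4, 5}]
c = [1.0, 1.0, 1.0, 3.5]

sol = greedy_set_cover(X, S, c)
assert set().union(*(S[i] for i in sol)) == set(X)  # feasibility
```

On this toy instance the greedy solution happens to be optimal; in general its cost can exceed the optimum by a logarithmic factor, matching the bounds above.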
It is interesting to note that this time-approximation tradeoff is essentially optimal assuming Moshkovitz's Projection Games Conjecture~\\cite{moshkovitz} and the Exponential Time Hypothesis (ETH)~\\cite{IP01}.\n\n\nFrom the hardness perspective,\n Feige~\\cite{feige} showed that for every constant $\\varepsilon > 0$, there is no\n$(1 - \\varepsilon)\\ln n$-approximation algorithm for \\textsc{Set Cover}\\ unless all problems in NP can be solved deterministically in time $n^{O(\\log\\log n)}$.\n To date, the strongest hardness of approximation for \\textsc{Set Cover}\\ assuming only ${\\mathrm{P}}\\neq{\\mathrm{NP}}$ gives a $c\\ln n$-hardness %\n for $c \\approx 0.2267$ (Alon et al.~\\cite{alon:moshkovitz:safra}).\n\n\\iffalse\nRecently, Moshkovitz~\\cite{moshkovitz} put forward the {\\em Projection Games Conjecture} regarding the maximum necessary size of an alphabet for a certain PCP. Specifically, for every~$\\varepsilon > 0$,\nshe conjectures that an almost-linear size PCP exists for projection games with perfect completeness, soundness~$\\varepsilon$, and alphabet size polynomial in~$1\/\\varepsilon$ (such a PCP exists\nif the alphabet size is allowed to be exponential in~$1\/\\varepsilon$, cf.\\ Moshkovitz and Raz~\\cite{moshkovitz:raz}).\n Moshkovitz also shows that if the Projection Games Conjecture is true,\nthen for every constant $\\varepsilon > 0$ it is NP-hard to approximate \\textsc{Set Cover}\\ within a $(1-\\varepsilon)\\ln n$ factor.\n\\fi\n\n\n\\subsection{Hierarchies of convex relaxations and connection to \\textsc{Knapsack}}\n\nOne of the most powerful and ubiquitous tools in approximation algorithms has been the use of mathematical programming relaxations, such as linear programming (\\textsf{LP}) and semidefinite programming ({\\textsf{SDP}}). 
The common approach is as follows: solve a convex (\\textsf{LP}\\ or {\\textsf{SDP}}) relaxation for the 0-1 program, and ``round\" the relaxed solution to give a (possibly suboptimal) feasible 0-1 solution. Since the approximation ratio is usually analyzed by comparing the value of the relaxed solution to the value of the output (note that the 0-1 optimum is always sandwiched between these two), a natural obstacle is the worst-case ratio between the relaxed optimum and the 0-1 optimum, known as the {\\em integrality gap}.\n\nWhile for many problems this approach gives optimal approximations (e.g., Raghavendra~\\cite{ragh08} shows this for all CSPs, assuming the \\textsc{Unique Games}\\ Conjecture), there are still many cases where natural {\\textsf{LP}} and {\\textsf{SDP}} relaxations have large integrality gaps. This limitation can be circumvented by considering more powerful relaxations. In particular, Sherali and Adams~\\cite{SA90}, Lov\\'asz and Schrijver~\\cite{LS91}, and Lasserre~\\cite{Las} each devised different systems, collectively known as {\\em hierarchies} or {\\em lift-and-project techniques}, by which a simple relaxation can be strengthened until the polytope (or the convex body) it defines converges to the convex hull of the feasible 0-1 solutions. It is known that, for each of these hierarchies, if the original relaxation has $n$ variables, then the relaxation at level $t$ of the hierarchy can be solved optimally in time $n^{O(t)}$. Thus, to achieve improved approximations for a problem in polynomial (resp.\\ sub-exponential) time, we would like to know if we can beat the integrality gap of the natural relaxation by using a relaxation at level $O(1)$ (resp.\\ $o(n\/\\log n)$) of some hierarchy.
Positive results have also recently emerged (e.g.~\\cite{Chl07,MK10,BRS11,GS11}), where improved approximations were given using constant-level relaxations from various hierarchies. For a survey on both positive and negative results, see~\\cite{CM-chapter}.\n\nRecently, Karlin et al.~\\cite{KMN11} considered how the use of {\\textsf{LP}} and {\\textsf{SDP}} hierarchies affects the integrality gap of relaxations for~\\textsc{Knapsack}. This is of particular relevance to us, since their approach relies on a well-known PTAS for \\textsc{Knapsack}\\ which is similar in structure to our own combinatorial algorithm for~\\textsc{Set Cover}. They showed that, while the \\textsf{Sherali-Adams}\\ {\\textsf{LP}} hierarchy requires $\\Omega(n)$ levels to bring the integrality gap below $2-o(1)$, level $k$ of the \\textsf{Lasserre}\\ {\\textsf{SDP}} hierarchy brings the integrality gap down to $1+O(1\/k)$. While we would like to emulate the success of their {\\textsf{SDP}}-hierarchy-based approach (and give an alternative sub-exponential algorithm for \\textsc{Set Cover}),\n we note that for \\textsc{Set Cover}\\, Alekhnovich et al.~\\cite{alekhnovich:arora:tourlakis} have shown that the {\\textsf{SDP}} hierarchy ${\\rm LS}_+$, due to Lov\\'asz and Schrijver, requires $\\Omega(n)$ levels to bring the integrality gap below $(1-o(1))\\ln n$. Nevertheless, the comparison is not perfect, since the \\textsf{Lasserre}\\ hierarchy is much stronger than ${\\rm LS}_+$. 
In particular, previously it was not known whether ${\\rm LS}_+$ also reduces the integrality gap for \\textsc{Knapsack}\\ (and indeed, the algorithm of Karlin et al.~\\cite{KMN11} relied on a powerful decomposition theorem for the \\textsf{Lasserre}\\ hierarchy, which does not seem to be applicable to ${\\rm LS}_+$).\n\n\\subsection{Our Results}\nTo facilitate our lift-and-project based approach, we start by giving in Section~\\ref{sec:combapprox sketch} a simple new sub-exponential time combinatorial algorithm for \\textsc{Set Cover}, which nearly matches the time-approximation tradeoff guarantee in~\\cite{cygan:kowalik:wykurz}.\n\\begin{theorem}\\label{thm:approx}\nFor any (not necessarily constant) $1 \\leq d \\leq n$, there is an $H_{n\/d}$-approximation algorithm for \\textsc{Set Cover}\\ running in time ${\\rm poly}(n,m) \\cdot m^{O(d)}$.\n\\end{theorem}\nWhile this theorem is slightly weaker than the previous best known guarantee, our algorithm is remarkably simple, and will be instrumental in designing a similar lift-and-project based \\textsc{Set Cover}\\ approximation.\nThe algorithm is combinatorial and does not rely on linear programming techniques.\nBy choosing $d = n^\\varepsilon$, we get a sub-exponential time algorithm whose approximation guarantee is better than $\\ln n$ by a constant factor.\n\n\n\n\n\nNext, in Section~\\ref{sec:hierarchy}, we show that using level $d$ of the linear programming hierarchy of {\\textsf{Lov\\'asz-Schrijver}}~\\cite{LS91}, we can match the performance of the algorithm we use to prove Theorem \\ref{thm:approx},\nthough only if the ``lifting\" is done after guessing the value of the objective function (using a binary search), and adding this as a constraint a priori.\nIn this case, the rounding algorithm is quite fast, and avoids the extensive combinatorial guessing of our first algorithm, while the running time is dominated by the time it takes to solve the {\\textsf{LP}} relaxation.\n\nOn the other hand, without the
trick of ``lifting the objective function\", we show in Section~\\ref{sec:sa_lower} that even the stronger {\\textsf{LP}} hierarchy of {\\textsf{Sherali-Adams}}~\\cite{SA90} has an integrality gap of at least $(1-\\varepsilon)\\ln n$ at level $\\Omega(n)$. Specifically, we show the following\n\\begin{theorem}\\label{thm: SA linear tight integrality gap}\nFor every $0<\\varepsilon, \\gamma \\leq \\frac{1}{2}$, and for sufficiently large values of $n$, there are instances of \\textsc{Set Cover}\\ on $n$ cover-sets (over a universe of $n$ items) for which the integrality gap of the level-$\\lfloor \\frac{\\gamma(\\varepsilon-\\varepsilon^2)}{1+\\gamma} n \\rfloor$ \\textsf{Sherali-Adams}\\ \\textsf{LP}\\ relaxation is at least $\\frac{1-\\varepsilon}{1+\\gamma}\\ln n$.\n\\end{theorem}\nAs we have mentioned, the prospect of showing a positive result using {\\textsf{SDP}} hierarchies is unlikely due to the work of Alekhnovich et al.~\\cite{alekhnovich:arora:tourlakis} which gives a similar integrality gap for ${\\rm LS}_+$.\n\nFor completeness, we also show in Section~\\ref{sec:knapsack} that the {\\textsf{Lasserre}}-hierarchy-based \\textsc{Knapsack}\\ algorithm of Karlin et al.~\\cite{KMN11} can be matched using the weaker ${\\rm LS}_+$ hierarchy and a more complex rounding algorithm. Specifically, we show that the integrality gap of the natural relaxation for \\textsc{Knapsack}\\ can be reduced to $1+\\varepsilon$ using $O(\\varepsilon^{-3})$ rounds of ${\\rm LS}_+$. This highlights a fundamental difference between \\textsc{Knapsack}\\ and \\textsc{Set Cover}, despite the similarity between our combinatorial algorithm and the PTAS for \\textsc{Knapsack}\\ on which Karlin et al.\\ rely. 
Formally, we prove the following:\n\\begin{theorem}\\label{thm:knapsack_ig} The integrality gap of level $k$ of the ${\\rm LS}_+$ relaxation for \\textsc{Knapsack}\\ is at most $1+O(k^{-1\/3})$.\n\\end{theorem}\n\nIn what follows, and before we start the exposition of our results, we present in Section~\\ref{sec: preliminaries on ls} the \\textsf{Lov\\'asz-Schrijver}\\ system along with some well-known facts that we will need later on. We end with a discussion of future directions in Section~\\ref{sec:conclusion}.\n\n\n\\section{Preliminaries on the \\textsf{Lov\\'asz-Schrijver}\\ System}\\label{sec: preliminaries on ls}\n\nFor any polytope $P$, this system begins by introducing a nonnegative %\n auxiliary variable $x_0$, so that in every constraint of $P$, constants are multiplied by $x_0$. This yields %\n the cone\n\n $K_0(P):=\\{(x_0, x_0{\\bf x})\\mid x_0\\geq 0~\\&~{\\bf x} \\in P\\}$.\n For an $n$-dimensional polytope $P$, the \\textsf{Lov\\'asz-Schrijver}\\ system finds a hierarchy of nested cones $K_0(P) \\supseteq K_1(P) \\supseteq \\ldots \\supseteq K_n(P)$ (in the \\textsf{SDP}\\ variant, we will write $K^+_t(P)$), defined recursively, and which enjoy remarkable algorithmic properties. In what follows, let $\\mathcal{P}_k$ denote the space of vectors indexed by subsets of $[n]$ of size at most $k$,\n and for any ${\\bf y}\\in\\mathcal{P}_k$, define the moment matrix $Y^{[{\\bf y}]}$ to be the square matrix with rows and columns indexed by sets of size at most $\\lfloor k\/2\\rfloor$, where the entry at row $A$ and column $B$ is $y_{A\\cup B}$. 
%\n Also we denote by ${\\bf e}_0, {\\bf e}_1, \\ldots, {\\bf e}_n$ the standard orthonormal basis of dimension $n+1$, such that $Y^{[{\\bf y}]} {\\bf e}_i$ is the $i$-th column of the moment matrix.\n\n\\begin{definition}[The \\textsf{Lov\\'asz-Schrijver}\\ (\\iLS) and \\textsf{Lov\\'asz-Schrijver}\\ \\textsf{SDP}\\ (\\textsf{LS}$_+$) systems]\\label{def: LS definition} \nConsider the conified polytope $K_0(P)$ defined earlier (let us also write $K^+_0(P)=K_0(P)$). The level-$t$ \\textsf{Lov\\'asz-Schrijver}\\ cone (relaxation or tightening) $K_t(P)$ (resp.\\ $K^+_t(P)$) of ${\\rm LS}$ (resp.\\ ${\\rm LS}_+$) is recursively defined as all $n+1$ dimensional vectors $(x_0, x_0{\\bf x})$ for which there exist ${\\bf y} \\in \\mathcal{P}_2$ such that $Y^{[{\\bf y}]} {\\bf e}_i, Y^{[{\\bf y}]} \\left( {\\bf e}_0 - {\\bf e}_i\\right) \\in K_{t-1}(P)$ (resp.\\ $K^+_{t-1}(P)$) and $(x_0,x_0{\\bf x}) = Y^{[{\\bf y}]} {\\bf e}_0$. The level-$t$ \\textsf{Lov\\'asz-Schrijver}\\ \\textsf{SDP}\\ tightening of ${\\rm LS}_+$ asks in addition that $Y^{[{\\bf y}]}$ is a positive-semidefinite matrix.\n\\end{definition}\n\nIn the original work of Lov\\'asz and Schrijver~\\cite{LS91} it is shown that the cone $K_n(P)$ (even in the \\iLS\\ system) projected on $x_0=1$ is exactly the integral hull of the original \\textsf{LP}\\ relaxation, while one can optimize over $K_t(P)$ in time $n^{O(t)}$, given that the original relaxation admits a (weak) polytime separation oracle. \n\\iffalse\nIn particular, the former property is derived by the following observation,\n\n\\begin{fact}\\label{fact: ls implications}\nSuppose that $(x_0,{\\bf x}) \\in K_t$ of the \\iLS\\ system, which is witnessed by ${\\bf y} \\in \\mathcal{P}_2$. 
Then for every $i=1,\\ldots,n$, \n$$(x_0,{\\bf x}) = Y^{[{\\bf y}]} {\\bf e}_i + Y^{[{\\bf y}]} \\left( {\\bf e}_0 - {\\bf e}_i\\right).$$\nMoreover, the two vectors $Y^{[{\\bf y}]} {\\bf e}_i, Y^{[{\\bf y}]} \\left( {\\bf e}_0 - {\\bf e}_i\\right)$ satisfy all constraints of $K_{t-1}$ (hence all constraints of the original relaxation as well) and their $i$-th coordinate equals $x_i$ and $0$ respectively. \n\nIn particular, if $x_i>0$, the rescaled column vector $\\frac1{x_i}Y^{[{\\bf y}]} {\\bf e}_i$ is in $K_{t-1}(P)\\cap\\{(1,{\\bf x}')\\mid{\\bf x}'\\in[0,1]^n\\}$.\n\\end{fact}\n\\begin{fact}\\label{fact: ls integrality} For any vector ${\\bf x}\\in[0,1]^n$ such that $(1,{\\bf x})\\in K_t(P)$, and any coordinate $j$ such that $x_j$ is integral, for all $t'0 \\}$. We introduce some structure in the resulting instance $(Y, \\mathcal T)$ of \\textsc{Set Cover}\\ by choosing the indices we condition on as in the proof of Lemma~\\ref{lem:order}. This gives the following lemma, whose proof is similar to that of Lemma~\\ref{lem:order}.\n\\begin{lemma}\\label{lemma: small cover-sets in subinstance}\nIf at each step of the Conditioning Phase we choose the set $S$ in the support of the current solution ${\\bf x}^{(d')}$\ncontaining the most uncovered elements in $X$, then for all $T \\in \\mathcal T$ we have $|T| \\leq \\frac{n}{d}$.\n\\end{lemma}\n\\begin{proof}\nFor $1 \\leq i \\leq d$ let $S_i$ denote the cover-set chosen in the Conditioning Phase for level $i$ and for $0 \\leq i \\leq d$ let $\\mathcal C_i = \\{S_d, S_{d-1}, \\ldots, S_{i+1}\\}$\n(with $\\mathcal C_d = \\emptyset$).\nWe show that at every iteration $1 \\leq i \\leq d$ we have $|S \\setminus \\bigcup_{T \\in \\mathcal C_i} T| \\leq \\frac{n}{d-i+1}$ for every $S \\in \\mathcal S \\setminus \\mathcal C_i$\nwith ${\\bf x}^{(i)}_S > 0$.\n\nFor $1 \\leq i \\leq d$, let $\\alpha_i := |S_i \\setminus \\bigcup_{T \\in \\mathcal C_i} T|$.\nSince we chose the largest (with respect to the uncovered items) cover-set
$S_i$ in the support of ${\\bf x}^{(i)}$ we have $|S' \\setminus \\bigcup_{T \\in \\mathcal C_i} T| \\leq \\alpha_i$\nfor every cover-set $S' \\in \\mathcal S \\setminus \\mathcal C_i$ with ${\\bf x}^{(i)}_{S'} > 0$. For $2 \\leq i \\leq d$ we also have\n$\\alpha_{i-1} \\leq \\alpha_i$ because $\\mathcal C_i \\subseteq \\mathcal C_{i-1}$ and because we chose $S_i$ instead of $S_{i-1}$ in the Conditioning Phase for level $i$.\n\nSo, $\\alpha_d \\geq \\alpha_{d-1} \\geq \\ldots \\geq \\alpha_1$. Now, each item $j$ covered by $\\mathcal C_0$ contributes 1 to $\\alpha_i$ for the earliest index $i$ for which $j \\in S_i$,\nso $\\sum_{i=1}^d \\alpha_i \\leq n$.\nThis implies that $\\alpha_i \\leq \\frac{n}{d-i+1}$ for each $1 \\leq i \\leq d$. Therefore, every set in the support of ${\\bf x}^{(0)}$ has at most\n$\\frac{n}{d}$ elements that are not already covered by $\\mathcal C_0$. So, the instance $(Y, \\mathcal T)$ has $|T| \\leq \\frac{n}{d}$ for any $T \\in \\mathcal T$.\n\\end{proof}\n\n\n\nLet $\\mathcal D$ be the collection of cover-sets chosen as in Lemma~\\ref{lemma: small cover-sets in subinstance}. Observe that the vector ${\\bf x}^{(0)}$ projected on the cover-sets $\\mathcal S \\setminus \\mathcal D$ that were \\textit{not} chosen in the Conditioning Phase is feasible for the \\textsf{LP}\\ relaxation of the instance $(Y, \\mathcal T)$. In particular, the cost of the \\textsf{LP}\\ is at most $q - \\sum_{S \\in \\mathcal D} c(S)$, and by Lemma~\\ref{lemma: small cover-sets in subinstance} all cover-sets have size at most $\\frac{n}{d}$. By Theorem~\\ref{thm:greedy}, the greedy algorithm will find a solution for $(Y, \\mathcal T)$ of cost at most $H_\\frac{n}{d} \\cdot \\left( q - \\sum_{S \\in \\mathcal D} c(S) \\right)$. 
Altogether, this gives a feasible solution for $(X, \\mathcal S)$ of cost \n$$ H_\\frac{n}{d} \\cdot \\bigg( q - \\sum_{S \\in \\mathcal D} c(S) \\bigg) \n + \\sum_{S \\in \\mathcal D} c(S) \\leq H_\\frac{n}{d} \\cdot q \\leq H_\\frac{n}{d} \\cdot \\textsc{OPT}.$$\n\n\n\n\n\n\\section{Linear \\textsf{Sherali-Adams}\\ Integrality Gap for \\textsc{Set Cover}} \\label{sec:sa_lower}\n\nThe level-$\\ell$ \\textsf{Sherali-Adams}\\ relaxation is a tightened \\textsf{LP}\\ that can be derived systematically starting with any 0-1 \\textsf{LP}\\ relaxation. While in this work we are interested in tightening the \\textsc{Set Cover}\\ polytope, the process we describe below is applicable to any other relaxation. \n\n\\begin{definition}[The \\textsf{Sherali-Adams}\\ system]\\label{def: SA definition}\nConsider a polytope over the variables $y_1,\\ldots, y_n$ defined by finitely many constraints (including the box-constraints $0\\leq y_i \\leq 1$). The level-$\\ell$ \\textsf{Sherali-Adams}\\ relaxation is an \\textsf{LP}\\ over the variables $\\{y_A\\}$ where $A$ is any subset of $\\{1,2,\\ldots,n\\}$ of size at most $\\ell+1$, and where $y_\\emptyset =1$. For every constraint $\\sum_{i=1}^n a_i y_i \\geq b$ of the original polytope and for every disjoint $P,E \\subseteq \\{1,\\ldots, n\\}$ with $|P|+|E|\\leq \\ell$, the following is a constraint of the level-$\\ell$ \\textsf{Sherali-Adams}\\ relaxation\n$$\n\\sum_{i=1}^n a_i \\sum_{\\emptyset \\subseteq T \\subseteq E} (-1)^{|T|} y_{P \\cup T \\cup \\{i\\} }\n \\geq b \\sum_{\\emptyset \\subseteq T \\subseteq E} (-1)^{|T|} y_{P \\cup T }.\n$$\n\\end{definition}\n\nWe will prove Theorem~\\ref{thm: SA linear tight integrality gap} in this section. For this we will need\n two ingredients: (a) appropriate instances, and (b) a solution of the \\textsf{Sherali-Adams}\\ \\textsf{LP}\\ as described in Definition~\\ref{def: SA definition}. 
Our hard instances are described in the following lemma, which is due to Alekhnovich et al.~\\cite{alekhnovich:arora:tourlakis}.\n\\begin{lemma}[\\textsc{Set Cover}\\ instances with no small feasible solutions]~\n\\label{lem: hard SA instances}\\\\\nFor every $\\varepsilon> \\eta>0$, and for all sufficiently large $n$, there exist \\textsc{Set Cover}\\ instances over a universe of $n$ elements and $n$ cover-sets, such that:\\\\\n(i) Every element of the universe appears in exactly $(\\varepsilon-\\eta)n$ cover-sets, and\\\\\n(ii) There is no feasible solution that uses less than $\\log_{1+\\varepsilon} n$ cover-sets.\n\\end{lemma}\nIn order to prove Theorem~\\ref{thm: SA linear tight integrality gap} we will invoke Lemma~\\ref{lem: hard SA instances} with appropriate parameters. Then we will define a vector solution for the level-$\\ell$ \\textsf{Sherali-Adams}\\ relaxation as described.\n\n\\begin{lemma}\\label{lem: sa feasible solution}\nConsider a \\textsc{Set Cover}\\ instance on $n$ cover-sets as described in Lemma~\\ref{lem: hard SA instances}. Let $f$ denote the number of cover-sets covering every element of the universe. For $f\\geq 3 \\ell$, the vector ${\\bf y}$ indexed by subsets of $\\{1,\\ldots,n\\}$ of size at most $\\ell+1$ defined as \n$ y_A := \\frac{(f-\\ell-1)!}{(f-\\ell-1+|A|)!}, ~\\forall A \\subseteq \\{1,\\ldots,n\\}, ~|A|\\leq \\ell+1$,\nsatisfies the level-$\\ell$ \\textsf{Sherali-Adams}\\ \\textsf{LP}\\ relaxation of the \\textsc{Set Cover}\\ polytope. \n\\end{lemma}\n\nThe proof of Lemma~\\ref{lem: sa feasible solution} involves a number of extensive calculations which we give in Section~\\ref{sec: proof of lemma feasible sa solution}. Assuming the lemma,\n we are ready to prove Theorem~\\ref{thm: SA linear tight integrality gap}. 
\n\n\\begin{proof}[Proof of Theorem~\\ref{thm: SA linear tight integrality gap}]\nFix $\\varepsilon>0$ and invoke Lemma~\\ref{lem: hard SA instances} with $\\eta = \\varepsilon^2$ to obtain a \\textsc{Set Cover}\\ instance on $n$ universe elements and $n$ cover-sets for which (i) every universe element is covered by exactly $(\\varepsilon-\\varepsilon^2)n$ cover-sets, and (ii) no feasible solution exists of cost less than $\\log_{1+\\varepsilon} n$. Note that in particular (i) implies that in the \\textsc{Set Cover}\\ \\textsf{LP}\\ relaxation, every constraint has support exactly $f = (\\varepsilon-\\varepsilon^2)n$. \n\nSet $\\ell = \\frac{\\gamma(\\varepsilon-\\varepsilon^2)}{1+\\gamma} n$ and note that $f\/\\ell \\geq 3$, since $\\gamma \\leq \\frac{1}{2}$. This means we can define a feasible level-$\\ell$ \\textsf{Sherali-Adams}\\ solution as described in Lemma~\\ref{lem: sa feasible solution}. The values of the singleton variables are set to \n$$ y_{\\{i\\}} = \\frac{1}{(\\varepsilon-\\varepsilon^2)n-\\ell} = \\frac{1+\\gamma}{(\\varepsilon-\\varepsilon^2)n}.$$\nBut then, the integrality gap is at least\n$$ \t\\frac{\\textsc{OPT}}{\\sum_{i=1}^n y_{\\{i\\}}}\n\t\\geq \n\t\\frac{\\varepsilon-\\varepsilon^2}{1+\\gamma}\\cdot{ \\log_{1+\\varepsilon} n }\n\t=\n\t\\frac{ \\varepsilon-\\varepsilon^2 }{(1+\\gamma) \\ln(1+\\varepsilon)} \\ln n.\n\t$$\nThe theorem follows once we observe that $\\ln(1+\\varepsilon) = \\varepsilon-\\frac{1}{2}\\varepsilon^2 + \\Theta(\\varepsilon^3)$.\n\\end{proof}\n\n\\subsection{Proof of Lemma~\\ref{lem: sa feasible solution}}\\label{sec: proof of lemma feasible sa solution}\n\n\\iffalse\nIn this section we prove the following:\n\\begin{lemma-solution}\nConsider a \\textsc{Set Cover}\\ instance on $n$ cover-sets as described in Lemma~\\ref{lem: hard SA instances}. Let $f$ denote the number of cover-sets covering every element of the universe. 
For $f\\geq 3 \\ell$, the vector ${\\bf y}$ indexed by subsets of $\\{1,\\ldots,n\\}$ of size at most $\\ell+1$ defined as \n$$ y_A := \\frac{(f-\\ell-1)!}{(f-\\ell-1+|A|)!}, ~\\forall A \\subseteq \\{1,\\ldots,n\\}, ~|A|\\leq \\ell+1$$\nsatisfies the level-$\\ell$ \\textsf{Sherali-Adams}\\ \\textsf{LP}\\ relaxation of the \\textsc{Set Cover}\\ polytope. \n\\end{lemma-solution}\n\\fi\n\nRecall that $f$ denotes the support of every cover constraint in the \\textsc{Set Cover}\\ relaxation (i.e.\\ the number of cover-sets every element belongs to in the instance described in Lemma~\\ref{lem: hard SA instances}). Also recall that\n$$ y_A := \\frac{(f-\\ell-1)!}{(f-\\ell-1+|A|)!}, ~\\forall A \\subseteq \\{1,\\ldots,n\\}, ~|A|\\leq \\ell+1.$$\nTo show feasibility of the above vectors, we need to study two types of constraints, i.e. the so called box-constraints\n\\begin{equation}\\label{eqn: explicit box-constraints}\n0 \\leq \\sum_{\\emptyset \\subseteq T \\subseteq E} (-1)^{|T|} y_{P \\cup T } \\leq 1, ~\\forall P,E\\subseteq \\{1,\\ldots,n\\},~ |P|+|E|\\leq \\ell+1, \n\\end{equation}\nas well as the covering constraints\n\\begin{equation}\\label{eqn: explicit cover-constraints}\n\\sum_{i \\in D} \\sum_{\\emptyset \\subseteq T \\subseteq E} (-1)^{|T|} y_{P \\cup T \\cup \\{i\\} }\n \\geq \\sum_{\\emptyset \\subseteq T \\subseteq E} (-1)^{|T|} y_{P \\cup T },~\\forall P,E\\subseteq \\{1,\\ldots,n\\},~ |P|+|E|\\leq \\ell,\n\\end{equation}\nwhere $D\\subseteq \\{1,\\ldots,n\\}$ is a set of $f$ many cover-sets covering some element of the universe. 
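Before the calculations, the proposed vector can be sanity-checked exactly for small parameters. The sketch below (not part of the proof) uses the symmetry of ${\bf y}$, under which both families of constraints reduce to alternating sums indexed by $e=|E|$ and $p=|P|$, and it checks the worst case of the cover constraints, where $D$ is disjoint from $P$ and contains $E$.

```python
from fractions import Fraction
from math import comb, factorial

def H(e, p, f, l):
    # alternating sum  sum_t (-1)^t C(e,t) (f-l-1)! / (f-l-1+p+t)!,
    # i.e. sum over T subseteq E of (-1)^|T| y_{P cup T} for |P|=p, |E|=e
    return sum(Fraction((-1) ** t * comb(e, t) * factorial(f - l - 1),
                        factorial(f - l - 1 + p + t)) for t in range(e + 1))

def feasible(f, l):
    # exact check of the box constraints and of the reduced cover
    # constraints (f - e) * H(e, p+1) >= H(e, p), for all p + e <= l
    for p in range(l + 1):
        for e in range(l + 1 - p):
            if not (0 <= H(e, p, f, l) <= 1):
                return False
            if (f - e) * H(e, p + 1, f, l) < H(e, p, f, l):
                return False
    return True
```

Using `Fraction` keeps the check exact, since the factorial ratios quickly fall below floating-point resolution.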
The symmetry of the proposed solutions allows us to significantly simplify the above expressions, by noting that for $|P|=p$ and $|E|=e$ we have \n$$\\sum_{\\emptyset \\subseteq T \\subseteq E} (-1)^{|T|} y_{P \\cup T } \n\t= \n\t\\sum_{t=0}^e (-1)^{t} \\binom{e}{t} \\frac{(f-\\ell-1)!}{(f-\\ell-1+p+t)!}\n$$\n\nFor the sake of exposition, we show that our \\textsf{LP}\\ solution satisfies the two different kinds of constraints\n in two different lemmata. Set $x = f- \\ell-1+p$, and note that if $f\\geq 3\\ell$, then since $e\\leq \\ell$ we have $x \\geq \\frac{5}{3} e$. Thus, the feasibility of the box constraints~\\eqref{eqn: explicit box-constraints} (for our solution)\n is implied by the following combinatorial lemma. \n\n\\begin{lemma}\\label{lem: abstract box constraints}\nFor $x\\geq \\frac{5}{3} e$ we have\n$$ 0 \\leq \\sum_{t=0}^e (-1)^t \\binom{e}{t} \\frac{1}{(x+t)!} \\leq \\frac{1}{(x-p)!} $$\n\\end{lemma}\n\n\\begin{proof}\nFirst we show the lower bound, for which we study two consecutive summands. We note that \n\\begin{eqnarray*}\n\\binom{e}{2t} \\frac{1}{(x+2t)!} - \\binom{e}{2t+1} \\frac{1}{(x+2t+1)!}\n& = &\t\n\\binom{e}{2t} \\frac{1}{(x+2t)!} \\left( 1 - \\frac{\\binom{e}{2t+1}}{\\binom{e}{2t}} \\frac{1}{x+2t+1}\\right) \\\\\n& = &\t\n\\binom{e}{2t} \\frac{1}{(x+2t)!} \\left( 1 - \\frac{e-2t}{2t+1} \\frac{1}{x+2t+1}\\right) \\\\\n& \\geq &\t\n\\binom{e}{2t} \\frac{1}{(x+2t)!} \\left( 1 - \\frac{e}{2t+1} \\frac{1}{x+2t+1}\\right) \\\\\n& \\geq &\t\n\\binom{e}{2t} \\frac{1}{(x+2t)!} \\left( 1 - \\frac{e}{x} \\right).\n\\end{eqnarray*}\nSince $x\\geq e$, every two consecutive summands add up to a non-negative value. Thus, if the number of summands is even (that is, if $e$ is odd), the lower bound follows, and if the number of summands is odd, the bound also follows, since for even $e$, the last summand is positive.\n\n\nNow we show the upper bound. 
Note that, since $p$ does not appear in the sum, it suffices to bound the sum by the smaller value $\\frac{1}{x!}$. This is facilitated by noting that\n$$ \\sum_{t=0}^e (-1)^t \\binom{e}{t} \\frac{1}{(x+t)!} \n\t=\n \\frac{1}{x!} \\sum_{t=0}^e (-1)^t \\frac{1}{t!} \\frac{\\binom{x+e}{x+t}}{\\binom{x+e}{e}}. $$\nHence, it suffices to show that \n\\begin{equation}\n\\sum_{t=0}^e (-1)^t \\frac{1}{t!} \\binom{x+e}{x+t} \\leq \\binom{x+e}{e}. \\label{equa: upper bound sum in box constraints}\n\\end{equation}\nAs before, we analyze the sum of two consecutive terms (this time in the above sum).\n This is done in the next claim. \n\\begin{claim}\\label{cla: upper bound binom}\nFor $x \\geq \\frac{5}{3}e$ we have \n$$ \\frac{1}{(2t)!} \\binom{x+e}{x+2t} - \\frac{1}{(2t+1)!} \\binom{x+e}{x+2t+1}\n\\leq \\binom{x+e}{x+2t} - \\binom{x+e}{x+2t+1}$$\n\\end{claim}\n\\begin{proof}\nWe divide both sides of the desired inequality by $\\binom{x+e}{x+2t}$ to obtain the equivalent statement\n\\begin{eqnarray*}\n&& \\frac{1}{(2t)!} - \\frac{1}{(2t+1)!} \\frac{e-2t}{x+2t+1}\n\\leq 1 - \\frac{e-2t}{x+2t+1} \\\\\n&\\Leftrightarrow& \n\t\\frac{e-2t}{x+2t+1} \\left( 1- \\frac{1}{(2t+1)!} \\right)\n\t\t\\leq \n\t1 - \\frac{1}{(2t)!}\n\\end{eqnarray*}\nNote that the above is tight for $t=0$. For $t>0$, and since $x \\geq \\frac{5}{3} e$, we have $\\frac{e-2t}{x+2t+1} < \\frac{3}{5}$ which is small enough to compensate for the worst-case ratio of the expressions involving factorials, which occurs for $t=1$. \n\\end{proof}\n\nContinuing our proof of Lemma~\\ref{lem: abstract box constraints}, first suppose that $e$ is odd. Then Claim~\\ref{cla: upper bound binom} implies that \n$$\\sum_{t=0}^e (-1)^t \\frac{1}{t!} \\binom{x+e}{x+t} \n \\leq \n\t\\sum_{t=0}^e (-1)^t \\binom{x+e}{x+t} \n = \\binom{x+e-1}{e} \\leq \\binom{x+e}{e}, $$\nwhich gives the required condition~\\eqref{equa: upper bound sum in box constraints} for odd $e$. 
If $e$ is a positive even integer, then again Claim~\\ref{cla: upper bound binom} implies that \n$$\\sum_{t=0}^e (-1)^t \\frac{1}{t!} \\binom{x+e}{x+t} \n \\leq \n\t\\sum_{t=0}^{e-1} (-1)^t \\binom{x+e}{x+t} + \\frac{1}{e!}\n = \\binom{x+e-1}{e} - 1 + \\frac{1}{e!} < \\binom{x+e}{e}. $$\nFinally, if $e=0$ then~\\eqref{equa: upper bound sum in box constraints} holds with equality. This concludes\n the proof.\n\\end{proof}\n\nNow we turn our attention to cover constraints \\eqref{eqn: explicit cover-constraints}.\n Recall that the values $y_A$ only depend on the size of $A$. Since $f$ and $\\ell$ are fixed in the context of Lemma~\\ref{lem: sa feasible solution}, for $|P|=p$ and $|E|=e$\n we can define\n$$ H_{e,p}:=\\sum_{\\emptyset \\subseteq T \\subseteq E} (-1)^{|T|} y_{P \\cup T } = \\sum_{t=0}^e (-1)^{t} \\binom{e}{t} \\frac{(f-\\ell-1)!}{(f-\\ell-1+p+t)!}.$$\nThus\n the left-hand-side of \\eqref{eqn: explicit cover-constraints} (for fixed $P$ and $E$) involves only\n expressions of the form $H_{e,p}, H_{e,p+1}$, or 0,\ndepending on the relationship between the sets $P,E$ and $\\{i\\}$, while the right-hand-side equals $H_{e,p}$. \n\nMore concretely, let $|D \\cap P| = p_1$, $|D \\cap E| = e_1$, $|P\\setminus D| = p_0$ and $|E\\setminus D| = e_0$, where $p_0+p_1 = |P|=p$ and $e_0+e_1 = |E|=e$, and recall that $E$ and $P$ are disjoint. Then observe that the $p_1$\n indices in $D\\cap P$ each contribute $H_{e,p}$ to the left-hand-side of \\eqref{eqn: explicit cover-constraints}. In addition, the $e_1$ indices $i\\in D\\cap E$ each contribute $0$ to the left-hand-side. This follows because each $T \\subseteq E \\setminus \\{i\\}$ can be paired with $T' := T \\cup \\{i\\} \\subseteq E$ and the terms $y_{P \\cup T \\cup \\{i\\}}$ and\n$y_{P \\cup T' \\cup \\{i\\}}$ are identical, while they appear in the sum with opposite signs.\nFinally, the remaining $f-p_1-e_1$ indices each contribute $H_{e,p+1}$. 
Overall, for our proposed solution, Constraint~\\eqref{eqn: explicit cover-constraints} can be rewritten as\n$$\n(f - p_1 - e_1 ) H_{e,p+1} + p_1 H_{e,p} \\geq H_{e,p}.\n$$\nClearly, for $p_1>0$, the above constraint is satisfied. Hence, we may assume that $P \\cap D = \\emptyset$, and so $p_1=0$ and $p_0=|P|=p$. Note also that the value $e_1$ does not affect $H_{e,p}$, thus the above inequality holds for all $e_1\\leq e$ iff it holds for $e_1=e$ and $e_0=0$.\nTo summarize, to show that Constraint~\\eqref{eqn: explicit cover-constraints}\n is satisfied, we need only show that \n\\begin{equation}\\label{eqn: explicit cover-constraints evaluated}\n(f - e ) H_{e,p+1} \\geq H_{e,p}\n\\end{equation}\nfor $H_{e,p} = \\sum_{t=0}^e (-1)^{t} \\binom{e}{t} \\frac{(f-\\ell-1)!}{(f-\\ell-1+p+t)!}$, and $e+p \\leq \\ell$. Note that the value $(f-\\ell-1)!$ in the numerator appears on both sides. Also $f \\geq 3 \\ell$ and $e \\leq \\ell$ imply that $e<2\\ell\\leq f- \\ell$. Finally recall that in~\\eqref{eqn: explicit cover-constraints} we have\n$p+e =|P|+|E|\\leq \\ell$. Hence, to show that~\\eqref{eqn: explicit cover-constraints evaluated} (and thus Constraint~\\eqref{eqn: explicit cover-constraints}) is satisfied,\nit remains only to show the following lemma. \n\n\\begin{lemma}\\label{lem: abstract cover constraints}\nFor $e < f-\\ell$ and $e+p \\leq \\ell$ we have\n$$ (f-e) \\sum_{t=0}^e (-1)^{t} \\binom{e}{t} \\frac{1}{(f-\\ell+p+t)!} \\geq \\sum_{t=0}^e (-1)^{t} \\binom{e}{t} \\frac{1}{(f-\\ell-1+p+t)!}. $$\n\\end{lemma}\n\n\\begin{proof}\nSince $\\frac{1}{(f-\\ell-1+p+t)!} = \\frac{f-\\ell+p+t}{(f-\\ell+p+t)!}$, the difference between the left-hand-side and the right-hand-side equals\n$$ \\sum_{t=0}^e (-1)^{t} \\binom{e}{t} \\frac{(\\ell-e-p)-t}{(f-\\ell+p+t)!}\n = (\\ell-e-p) \\sum_{t=0}^e (-1)^{t} \\binom{e}{t} \\frac{1}{(f-\\ell+p+t)!}\n - \\sum_{t=0}^e (-1)^{t} \\binom{e}{t} \\frac{t}{(f-\\ell+p+t)!}. $$\nThe first sum is non-negative by Lemma~\\ref{lem: abstract box constraints} (applicable since $f-\\ell+p \\geq 2\\ell \\geq \\frac{5}{3}e$), and $\\ell-e-p \\geq 0$ by assumption. Hence it remains to show that the above expression is non-negative, and for this it suffices to show that $\\sum_{t=0}^e (-1)^{t} \\binom{e}{t} \\frac{t}{(f-\\ell+p+t)!} \\leq 0$.\n\nWe proceed again by analyzing every two consecutive terms. 
We observe that for $t\\geq 1$ we have\n\\begin{align*}\n- \\binom{e}{2t-1} &\\frac{2t-1}{(f-\\ell+p+2t-1)!} + \\binom{e}{2t} \\frac{2t}{(f-\\ell+p+2t)!} \\\\\n&=\n\\frac{\\binom{e}{2t-1}}{(f-\\ell+p+2t-1)!}\n\t\\left( -(2t-1) + \\frac{e-2t+1}{f-\\ell+p+2t} \\right) \\\\\n&<\n\\frac{\\binom{e}{2t-1}}{(f-\\ell+p+2t-1)!}\n\t\\left( -(2t-1) + \\frac{e}{f-\\ell} \\right) \\\\\n&< \\frac{\\binom{e}{2t-1}}{(f-\\ell+p+2t-1)!}\n\t\\left( -1 + \\frac{e}{f-\\ell} \\right) \n\\end{align*}\nwhich is non-positive, since $f-\\ell \\geq e$. This argument shows that every two consecutive summands of $\\sum_{t=0}^e (-1)^{t} \\binom{e}{t} \\frac{t}{(f-\\ell+p+t)!}$ add up to a non-positive value. If $e$ is even, then we are done, while for odd $e$ the unmatched summand (for $t=e$) is negative, and so the lemma follows.\n\\end{proof}\n\n\n\n\\section{An ${\\rm LS}_+$-based PTAS for \\textsc{Knapsack}} \\label{sec:knapsack}\n\nWe consider the \\textsc{Knapsack}\\ problem: We are given $n$ items which we identify with the integers~$[n]$, and each item $i\\in[n]$ has some associated (nonnegative) reward $r_i$ \n and cost (or size) $c_i$.\n The goal is to choose a set of items which fit in the knapsack, i.e.\\ whose total cost does not exceed some bound $C$, so as to maximize the total reward.\nIn what follows we will use the \\textsf{LP}\\ $\\{ \\max \\sum_{i=1}^n r_i x_i: ~\\sum_{i=1}^n c_i x_i \\leq C ~\\&~0 \\leq x_i \\leq 1 \\forall~ i \\in [n] \\}$, which is the natural relaxation for \\textsc{Knapsack}. \n\nDenote the polytope associated with the \\textsc{Knapsack}\\ \\textsf{LP}\\ relaxation \nby $P$. We will consider the \\textsf{SDP}\\ derived by applying sufficiently many levels of ${\\rm LS}_+$ (as defined in Section~\\ref{sec:hierarchy}) to the above \\textsf{LP}. 
That is, for some $\\ell>0$, we consider the \\textsf{SDP}\\ \n\\begin{align*}\n{\\rm maximize} \\qquad& \\sum_{i=1}^n r_i x_i & \\\\%\\nonumber\\\\\n{\\rm subject~to} \\qquad & (1,{\\bf x})\\in K^+_\\ell(P)\n\\end{align*}\n\n\\iffalse\nWe remind the reader that the ${\\rm LS}_+$ hierarchy, applied to the above relaxation, yields an SDP relaxation defined as follows. Denote by $\\mathrm{KS}_0$ the feasibility \\textsf{LP}\\ given by all constraints of the\\textsc{Knapsack}\\ \\textsf{LP}\\ relaxation.\nFor any integer $k\\geq 1$, the ${\\rm LS}_+$ relaxation at level $k$, which we denote by $\\mathrm{KS}_k$, is defined recursively as follows: we say that $(y_i)_{i\\in[n]}\\in\\mathrm{KS}_k$ if\n\\begin{align}\n\\exists y_0,y_{ij}=y_{ji}\\in[0,1] \\text{ (for }0\\leq i,j\\leq n\\text{)}\\quad & \\mathrm{s.t.} &\\nonumber\\\\\n& y_0 = 1 \\label{eq:normalization}&\\\\\n& y_{0i}=y_i=y_{ii} & \\forall 0 \\leq i \\leq n\\\\\n& y_i+y_j-1 \\leq y_{ij} \\leq y_i,y_j & \\forall i,j\\in[n] \\label{eq:y_i-y_ij-bounds}\\\\\n&Y=(y_{ij})_{i,j\\in\\{0,1,\\ldots,n\\}} \\succeq 0 &\\label{eq:psdness} \\\\\ny_i > 0 \\quad \\Rightarrow \\quad & (y_{ij}\/y_i)_{j\\in[n]} \\in \\mathrm{KS}_{k-1} &\\forall i\\in[n]\\label{eq:1-condition}\\\\\ny_i < 1 \\quad \\Rightarrow \\quad & ((y_j-y_{ij})\/(1-y_i))_{j\\in[n]} \\in \\mathrm{KS}_{k-1} &\\forall i\\in[n]\\label{eq:0-condition}\n\\end{align}\n\nConstraint~\\eqref{eq:psdness} says that the matrix $Y$ is positive-semidefinite. That is, that there exist vectors~$v_i$ such that $v_i\\cdot v_j = y_{ij}$. In the current setting, we actually will not need constraint~\\eqref{eq:0-condition} at all. It would suffice to have constraints~\\eqref{eq:normalization} through~\\eqref{eq:1-condition} plus the constraint that the vector of values $(y_i)_{i\\in[n]}$ satisfies the simple \\textsf{LP}\\ relaxation $\\mathrm{KS}_0$. 
This condition is implied by the above SDP (as is), and moreover any solution for KS$_k$ is also a solution for KS$_{k'}$ for any $k' < k$.\n\\fi\n\nWe will show that for every $\\varepsilon>0$, there is a constant $L_\\varepsilon$ such that the \\textsf{SDP}\\ relaxation for \\textsc{Knapsack}\\ arising from level $L_\\varepsilon$ of ${\\rm LS}_+$ has integrality gap at most $1+O(\\varepsilon)$. For the Lasserre hierarchy, this has been shown for level $1\/\\varepsilon$~\\cite{KMN11}.\n We will show this for $L_\\varepsilon=1\/\\varepsilon^3$ in the case of ${\\rm LS}_+$.\n\nOur rounding algorithm will take as input the values of the \\textsc{Knapsack}\\ instance $(r_i)_i$, $(c_i)_i$, and $C$, an optimal solution ${\\bf x}$ s.t.\\ $(1,{\\bf x})\\in K^+_\\ell(P)$ (for some level $\\ell>0$, initially $\\ell=L_{\\varepsilon}$), and parameters $\\varepsilon$ and $\\rho$.\n\n The parameter $\\rho$ is intended to be the threshold $\\varepsilon\\cdot{\\sf OPT}$ in the set $S_{\\varepsilon{\\sf OPT}}=\\{i\\mid r_i > \\varepsilon\\cdot{\\sf OPT}\\}.$\n Rather than guessing a value for ${\\sf OPT}$, though, we will simply try all values of $\\rho\\in\\{r_i\\mid i\\in[n]\\}\\cup\\{0\\}$ and note that for exactly one of those values, the set $S_{\\varepsilon{\\sf OPT}}$ coincides with the set %\n $\\{i\\mid r_i > \\rho\\}$ (also note that $\\rho$ is a parameter of the rounding, and not involved at all in the \\textsf{SDP}\\ relaxation).\n \nThe intuition behind our rounding algorithm is as follows: As we did for \\textsc{Set Cover}, we would like to repeatedly ``condition'' on setting some variable to $1$, by using Fact~\\ref{fact: ls conditioning}. 
If we condition only on (variables corresponding to) items in $S_{\\varepsilon{\\sf OPT}}$, then after at most $1\/\\varepsilon$ iterations, the \\textsf{SDP}\\ solution will be integral on that set, and then by Corollary~\\ref{cor:greedy} the modified greedy algorithm will give a $(1+O(\\varepsilon))$-approximation relative to the value of the objective function (since items outside $S_{\\varepsilon{\\sf OPT}}$ have reward at most $\\varepsilon{\\sf OPT}$). The problem with this approach (and the reason why \\textsf{LP}\\ hierarchies do not work) is the same problem as for \\textsc{Set Cover}: the conditioning step does not preserve the value of the objective function. While the optimum value of the \\textsf{SDP}\\ is at least ${\\sf OPT}$ by definition, after conditioning, the value of the new solution may be much smaller than ${\\sf OPT}$, which then makes the use of Corollary~\\ref{cor:greedy} meaningless. The key observation is that the use of \\textsf{SDP}{s} ensures that we can choose some item to condition on without any decrease in the objective function (see Lemma~\\ref{lem:increase-obj-fun}).\\footnote{Note that this is crucial for a maximization problem like \\textsc{Knapsack}, while for a minimization problem like \\textsc{Set Cover}\\ it does not seem helpful (and indeed, by the integrality gap of Alekhnovich et al.~\\cite{alekhnovich:arora:tourlakis}, we know it does not help).} A more refined analysis shows that we will be able to condition {\\em specifically on items in $S_{\\varepsilon{\\sf OPT}}$} without losing too much. Counter-intuitively, we then need to show that the algorithm does not perform an unbounded number of conditioning steps which {\\em increase} the objective value (see Lemma~\\ref{lem:recurse-depth}). Our rounding algorithm {\\bf{KS-Round}} is described in Algorithm~\\ref{alg:knapsack}, while the performance guarantee is described in Section~\\ref{sec:knapsack_app}. 
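The ``modified greedy algorithm'' of Corollary~\ref{cor:greedy} is not restated here; the following is one standard density-greedy sketch with the additive $\max_i r_i$ guarantee, written under our own assumptions (strictly positive costs, an index-based interface), not the paper's exact procedure.

```python
def density_greedy(rewards, costs, C):
    # Scan items by decreasing reward/cost ratio and take each one that
    # still fits.  The resulting value is within max_i rewards[i] of the
    # LP optimum: truncating at the first item that overflows already
    # achieves this, and skipping-and-continuing can only do better.
    order = sorted(range(len(rewards)),
                   key=lambda i: rewards[i] / costs[i], reverse=True)
    chosen, used = [], 0
    for i in order:
        if used + costs[i] <= C:
            chosen.append(i)
            used += costs[i]
    return chosen
```

Since every item outside $S_{\varepsilon{\sf OPT}}$ has reward at most $\varepsilon\cdot{\sf OPT}$, the additive loss translates into the $(1+O(\varepsilon))$ factor used in the analysis.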
\n\\begin{algorithm*}[ht]\n \\caption{~\\bf{KS-Round}$((r_i)_i,(c_i)_i,C,{\\bf x},\\varepsilon, \\rho)$} \\label{alg:knapsack} \n\\begin{algorithmic}[1] \n\\State Let ${\\bf y}\\in\\mathcal{P}_2$ be the moment vector associated with $(1,{\\bf x})$.\n\\State Let $S_\\rho \\leftarrow \\{i\\mid r_i>\\rho\\}$, and let $S^b \\leftarrow \\{i\\mid x_i =b\\}$ for $b=0,1$. \\label{step:knapsack:S-rho}\n\\If{$S_\\rho\\subseteq S^0\\cup S^1$}\n\\State Run the modified greedy algorithm. \\label{step:knapsack:0-1}\n\\ElsIf{$\\displaystyle\\sum_{i\\in S_\\rho\\setminus S^1}r_ix_i < \\varepsilon\\cdot\\sum_{i=1}^n r_ix_i$}\n\\State Run the modified greedy algorithm on items in $([n]\\setminus S_\\rho)\\cup S^1$. \\label{step:knapsack:small-rewards}\n\\ElsIf{there is some $i\\in S_\\rho\\setminus(S^0\\cup S^1)$ s.t.\\ $\\displaystyle\\sum_{j=1}^n r_jy_{\\{i,j\\}}\\geq (1-\\varepsilon^2)x_i\\cdot\\sum_{j=1}^nr_jx_j$}\n\\State Run {\\bf{KS-Round}}$((r_i)_i,(c_i)_i,C,\\frac1{x_i}Y^{[{\\bf y}]} {\\bf e}_i,\n\\varepsilon, \\rho)$. \\Comment See Fact~\\ref{fact: ls conditioning}\n\\label{step:knapsack:recurse-S-rho}\n\\Else\\State Choose $i\\in[n]\\setminus (S_\\rho\\cup S^0)$ s.t.\\ $\\displaystyle\\sum_{j=1}^n r_jy_{\\{i,j\\}}>(1+\\varepsilon^3)x_i\\cdot\\sum_{j=1}^nr_jx_j$ \\Comment See Lemma~\\ref{lem:strict-increase} \\label{step:knapsack:choose-increase}\n\\State Run {\\bf{KS-Round}}$((r_i)_i,(c_i)_i,C,\\frac1{x_i}Y^{[{\\bf y}]} {\\bf e}_i,\n\\varepsilon, \\rho)$. 
\\Comment See Fact~\\ref{fact: ls conditioning}\n\\label{step:knapsack:recurse-increase}\n\\EndIf\n\\end{algorithmic}\n\\end{algorithm*}\n\n\n\n\n\\subsection{Analysis of Algorithm~\\ref{alg:knapsack}}\n\\label{sec:knapsack_app}\n\nBefore we analyze the performance guarantee of the algorithm, there is one more simple fact about both ${\\rm LS}$ and ${\\rm LS}_+$ which we will use in this section.\n\n\\begin{fact}\\label{fact:LS-for-x_i=1}\nGiven a solution $(1,{\\bf x})\\in K_{\\ell}(P)$ for some $\\ell\\geq 1$ and corresponding moment vector ${\\bf y}\\in\\mathcal{P}_2$, if $x_i=1$ for some $i\\in[n]$, then $Y^{[{\\bf y}]}{\\bf e}_i=Y^{[{\\bf y}]}{\\bf e}_0(=(1,{\\bf x}))$.\n\\end{fact} \nThis follows easily from the fact that $Y^{[{\\bf y}]} \\left( {\\bf e}_0 - {\\bf e}_i\\right) \\in K_{\\ell-1}(P)$, and since if $x_i=y_{\\{i\\}}=1$ then the first entry in the above vector is $0$, which for the conified polytope $K_{\\ell-1}(P)$ only holds for the all-zero vector.\n\nNow, consider the set $S_\\rho$ defined in Step~\\ref{step:knapsack:S-rho}. As we have pointed out, for the appropriate choice of $\\rho$, this set coincides with the set $S_{\\varepsilon{\\sf OPT}}$. Note that, by Corollary~\\ref{cor:greedy}, if all the $x_i$ values in $S_\\rho$ are integral, then Step~\\ref{step:knapsack:0-1} returns a solution with value $R'_G$ satisfying\n$${\\sf OPT} \\geq R'_G \\geq \\sum_ir_ix_i - \\max_{i\\not\\in S_\\rho}r_i \\geq \\sum_ir_ix_i - \\varepsilon\\cdot{\\sf OPT}.$$\nNow, if ${\\bf x}$ is the original \\textsf{SDP}\\ solution given to the rounding algorithm, this gives an upper bound of $1+\\varepsilon$ on the integrality gap (as well as a $(1+\\varepsilon)$-approximation). However, in Steps~\\ref{step:knapsack:recurse-S-rho} and~\\ref{step:knapsack:recurse-increase}, we recurse with a new \\textsf{SDP}\\ solution (to a lower level in the ${\\rm LS}_+$ hierarchy). 
Thus, our goal will be to arrive at an \\textsf{SDP}\\ solution which is integral on $S_\\rho$ (or assigns so little weight to $S_\\rho$ that we can ignore it), but we need to show that the objective function does not decrease too much during these recursive steps. To show this, we will rely crucially on the following easy lemma, which is also the only place where we use positive-semidefiniteness.\n\n\\begin{lemma}\\label{lem:increase-obj-fun} Let $(1,{\\bf x})$ be a solution to $K^+_\\ell(P)$ for some $\\ell\\geq 1$, with the corresponding moment vector ${\\bf y}$. Then the solution satisfies $$\\sum_{i=1}^nr_i\\sum_{j=1}^nr_jy_{\\{i,j\\}}\\geq\\left(\\sum_{i=1}^n r_ix_i\\right)^2 \\left(= \\sum_{i=1}^n r_i \\sum_{j=1}^n x_ir_jx_j\\right).$$\n\\end{lemma}\n\\begin{proof} By the positive semidefiniteness of the moment matrix $Y^{[{\\bf y}]}$, we have ${\\bf a}^{\\top} Y^{[{\\bf y}]} {\\bf a}\\geq 0$ for any vector ${\\bf a}\\in{\\mathbb R}^{n+1}$. Then the lemma follows immediately from this inequality, by letting ${\\bf a}=(a_i)_i$ be the vector defined by $a_i=r_i$ for $i\\in[n]$ and $a_0=-\\sum_{i=1}^nr_ix_i$.\n\\end{proof}\n\nThus, there is some item $i\\in[n]$ on which we can condition (by taking the new solution $\\frac1{x_i}Y^{[{\\bf y}]} {\\bf e}_i$)\n without any decrease in the value of the objective function. 
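The inequality of Lemma~\ref{lem:increase-obj-fun} can be seen concretely on the PSD moment matrix of a product distribution (a choice of moments that is ours, purely for illustration): there the two sides differ by exactly $\sum_i r_i^2 x_i(1-x_i) \geq 0$.

```python
def product_moments(x):
    # Moment matrix y[i][j] of the product distribution with marginals x:
    # y[0][0]=1, y[0][i]=y[i][i]=x_i, y[i][j]=x_i*x_j for i != j.
    # It is PSD, being E[z z^T] for z = (1, X_1, ..., X_n), X_i independent.
    n = len(x)
    y = [[0.0] * (n + 1) for _ in range(n + 1)]
    y[0][0] = 1.0
    for i in range(1, n + 1):
        y[0][i] = y[i][0] = y[i][i] = x[i - 1]
        for j in range(1, n + 1):
            if i != j:
                y[i][j] = x[i - 1] * x[j - 1]
    return y

def lemma_gap(r, x):
    # LHS - RHS of Lemma lem:increase-obj-fun; non-negative for PSD moments
    y = product_moments(x)
    n = len(x)
    lhs = sum(r[i - 1] * r[j - 1] * y[i][j]
              for i in range(1, n + 1) for j in range(1, n + 1))
    rhs = sum(r[i] * x[i] for i in range(n)) ** 2
    return lhs - rhs
```

An LP moment vector without the PSD requirement need not satisfy this inequality, which is why the argument genuinely needs ${\rm LS}_+$ rather than ${\rm LS}$.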
Moreover, using the above lemma, we can now show that the algorithm is well-defined (assuming we start with an \\textsf{SDP}\\ at a sufficiently high level of ${\\rm LS}_+$ for all the recursive steps).\n\n\\begin{lemma}\\label{lem:strict-increase}\nIf Step~\\ref{step:knapsack:choose-increase} is reached, then there exists an item $i\\in[n]\\setminus (S_\\rho\\cup S^0)$ satisfying \n\\begin{equation}\\label{eq:lem:strict-increase}\\displaystyle\\sum_{j=1}^n r_jy_{\\{i,j\\}}>(1+\\varepsilon^3)x_i\\cdot\\sum_{j=1}^nr_jx_j.\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nNote that if Step~\\ref{step:knapsack:choose-increase} is reached then we must have \n\\begin{equation}\\label{eq:lem-increase-1}\\displaystyle\\sum_{i\\in S_\\rho\\setminus S^1}r_ix_i \\geq \\varepsilon\\cdot\\sum_{i=1}^n r_ix_i.\n\\end{equation}\nMoreover, for all $i\\in S_\\rho\\setminus(S^0\\cup S^1)$ we have $\\displaystyle\\sum_{j=1}^n r_jy_{\\{i,j\\}}< (1-\\varepsilon^2)x_i\\cdot\\sum_{j=1}^nr_jx_j$. In particular, we have \n\\begin{equation}\\label{eq:lem-increase-2}\\sum_{i\\in S_\\rho\\setminus S^1}r_i\\sum_{j=1}^n r_jy_{\\{i,j\\}}< (1-\\varepsilon^2)\\sum_{i\\in S_\\rho\\setminus S^1}r_ix_i\\cdot\\sum_{j=1}^nr_jx_j.\n\\end{equation} \nThus (noting that all $i\\in S^0$ contribute nothing to the following sums), we have\n\\begin{align*} \\sum_{i\\in ([n]\\setminus (S_\\rho\\cup S^0))\\cup S^1}r_i\\sum_{j=1}^n r_jy_{\\{i,j\\}} &= \\sum_{i=1}^n r_i\\sum_{j=1}^n r_jy_{\\{i,j\\}} - \\sum_{i\\in S_\\rho\\setminus S^1}r_i\\sum_{j=1}^n r_jy_{\\{i,j\\}}\\\\\n&\\geq \\left(\\sum_{i=1}^n r_ix_i\\right)^2 - \\sum_{i\\in S_\\rho\\setminus S^1}r_i\\sum_{j=1}^n r_jy_{\\{i,j\\}} &\\text{by Lemma~\\ref{lem:increase-obj-fun}}\\\\\n&> \\left(\\sum_{i=1}^n r_ix_i\\right)^2 - (1-\\varepsilon^2)\\sum_{i\\in S_\\rho\\setminus S^1}r_ix_i\\cdot\\sum_{j=1}^nr_jx_j &\\text{by~\\eqref{eq:lem-increase-2}}\\\\\n&=\\sum_{i\\in ([n]\\setminus S_\\rho)\\cup S^1}r_ix_i\\cdot\\sum_{j=1}^nr_jx_j + 
\\varepsilon^2\\sum_{i\\in S_\\rho\\setminus S^1}r_ix_i\\cdot\\sum_{j=1}^nr_jx_j\\\\\n&\\geq\\sum_{i\\in ([n]\\setminus S_\\rho)\\cup S^1}r_ix_i\\cdot\\sum_{j=1}^nr_jx_j + \\varepsilon^3\\sum_{i=1}^nr_ix_i\\cdot\\sum_{j=1}^nr_jx_j &\\text{by~\\eqref{eq:lem-increase-1}}\\\\\n&\\geq (1+\\varepsilon^3)\\cdot \\sum_{i\\in ([n]\\setminus S_\\rho)\\cup S^1}r_ix_i\\cdot\\sum_{j=1}^nr_jx_j.\n\\end{align*}\n\nTherefore, there is some $i\\in ([n]\\setminus (S_\\rho\\cup S^0))\\cup S^1$ satisfying~\\eqref{eq:lem:strict-increase}. We only need to show that $i\\not\\in S^1$, that is, that $x_i < 1$. However, by Fact~\\ref{fact:LS-for-x_i=1},\n if $x_i=1$ then $y_{\\{i,j\\}}=x_j$, making inequality~\\eqref{eq:lem:strict-increase} impossible for such $i$.\n\\end{proof}\n\nNow that we have shown the algorithm to be well-defined, let us start to bound the depth of the recursion. We will use the following lemma to show that after a bounded number of recursive calls in Step~\\ref{step:knapsack:recurse-S-rho}, the \\textsf{SDP}\\ solution becomes integral on $S_\\rho$:\n\\begin{lemma}\\label{lem:knapsack-0-1} Let $(1,{\\bf x})$ be a solution to $K^+_\\ell(P)$ for some $\\ell\\geq 1$. Then for all items $i\\in[n]\\setminus S^1$ such that $c_i > C - \\sum_{j\\in S^1}c_j$, we have $x_i=0$.\n\\end{lemma}\n\\begin{proof} Suppose, for the sake of contradiction, that there is some item $i$ satisfying this property, with $0 < x_i$. Conditioning on item $i$, that is, taking the vector $\\frac1{x_i}Y^{[{\\bf y}]} {\\bf e}_i\\in K^+_{\\ell-1}(P)$, yields a solution in which item $i$ has value $1$, while every item $j\\in S^1$ retains value $1$ (indeed, $x_j=1$ forces $y_{\\{i,j\\}}=x_i$). Since this conditioned solution must satisfy the \\textsc{Knapsack}\\ constraint of $P$, we obtain the contradiction\n$$C \\geq \\sum_{j\\in S^1}c_j + c_i > C.$$\n\\end{proof}\n\nSince no $1\/\\varepsilon$ items from $S_{\\varepsilon{\\sf OPT}}$ can fit simultaneously in the knapsack (their total reward would exceed ${\\sf OPT}$), we have the following:\n\n\\begin{corollary}\\label{cor:knapsack-0-1} For $\\rho$ s.t.\\ $S_\\rho=S_{\\varepsilon{\\sf OPT}}$, the recursion in Step~\\ref{step:knapsack:recurse-S-rho} cannot be repeated $\\lceil 1\/\\varepsilon \\rceil$ times. 
Moreover, if this step is repeated $(\\lceil 1\/\\varepsilon\\rceil-1)$ times, then after these recursive calls, we have $x_i\\in\\{0,1\\}$ for all $i\\in S_\\rho$.\n\\end{corollary}\n\nWe can now bound the total depth of the recursion in the algorithm.\n\n\\begin{lemma}\\label{lem:recurse-depth} For all positive $\\varepsilon\\leq0.3$, the algorithm performs the recursion in Step~\\ref{step:knapsack:recurse-increase} less than $\\lceil1\/\\varepsilon^3\\rceil$ times.\n\\end{lemma}\n\\begin{proof}\nThis follows by tracking the changes to the value of the objective function. Each time Step~\\ref{step:knapsack:recurse-S-rho} is performed, the value of the objective function changes by a factor at least $(1-\\varepsilon^2)$, while each time Step~\\ref{step:knapsack:recurse-increase} is performed, this value changes by a factor greater than $(1+\\varepsilon^3)$. Assuming, for the sake of contradiction, that Step~\\ref{step:knapsack:recurse-increase} is performed $\\lceil1\/\\varepsilon^3\\rceil$ times, then by Corollary~\\ref{cor:knapsack-0-1} the total change is at least a factor $(1-\\varepsilon^2)^{\\lceil 1\/\\varepsilon\\rceil-1}(1+\\varepsilon^3)^{\\lceil1\/\\varepsilon^3\\rceil}>2$. However, this is a contradiction, since initially the value of the objective function is at least ${\\sf OPT}$, and the integrality gap is always at most $2$.\n\\end{proof}\n\nFinally, we can prove that the algorithm gives a $(1+O(\\varepsilon))$-approximation, thus bounding the integrality gap. 
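The arithmetic behind Lemma~\ref{lem:recurse-depth} is easy to check numerically. The sketch below is a sanity check, not part of the proof: it evaluates the worst-case multiplicative change of the objective value under the stated numbers of recursive calls.

```python
from math import ceil

def total_factor(eps):
    # worst-case total change of the objective value if Step
    # recurse-S-rho runs ceil(1/eps)-1 times (each a factor >= 1-eps^2)
    # and Step recurse-increase runs ceil(1/eps^3) times (each > 1+eps^3)
    return ((1 - eps ** 2) ** (ceil(1 / eps) - 1)
            * (1 + eps ** 3) ** ceil(1 / eps ** 3))
```

For $\varepsilon = 0.3$ this evaluates to roughly $2.07$, just above the threshold $2$ used in the contradiction, which is why the lemma is stated for $\varepsilon \leq 0.3$.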
Theorem~\\ref{thm:knapsack_ig} follows immediately from\nthe following.\n\n\\begin{theorem}\\label{thm:knapsack-alg} For sufficiently small $\\varepsilon>0$, given an optimal (maximum feasible) solution $(1,{\\bf x})$ to $K^+_\\ell(P)$ for $\\ell=O(1\/\\varepsilon^3)$, there is a value $\\rho\\in\\{r_i\\mid i\\in[n]\\}\\cup\\{0\\}$ such that algorithm {\\bf{KS-Round}} finds a solution to the \\textsc{Knapsack}\\ instance with total value (reward) at least $(1-O(\\varepsilon))\\cdot\\sum_{i=1}^nr_ix_i$.\n\\end{theorem}\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:knapsack-alg}.] It is easy to see that for (exactly one) $\\rho\\in\\{r_i\\mid i\\in[n]\\}\\cup\\{0\\}$ we have $S_\\rho=S_{\\varepsilon{\\sf OPT}}$. Choose this value of $\\rho$.\n\nNote that the only time the value of the objective function can decrease is in Step~\\ref{step:knapsack:recurse-S-rho}. Let $\\Phi$ be the initial value of the objective function. Then by Corollary~\\ref{cor:knapsack-0-1}, at the end of the recursion, the new value of the objective function will be at least $(1-\\varepsilon^2)^{1\/\\varepsilon}\\cdot\\Phi = (1-O(\\varepsilon))\\Phi$. Thus, it suffices to show the theorem relative to the final value of the objective function (as opposed to $\\Phi$)\\footnote{In fact, this will also show that the integrality gap remains at most $1+O(\\varepsilon)$ throughout the algorithm (as opposed to $2$, which we used in the proof of Lemma~\\ref{lem:recurse-depth}), thus bounding the depth of the recursion by $O(1\/\\varepsilon^2)$ rather than $O(1\/\\varepsilon^3)$.}.\n\nLet us consider the algorithm step-by-step. 
If the algorithm terminates in Step~\\ref{step:knapsack:0-1}, then since for all $i\\in[n]\\setminus S_{\\varepsilon{\\sf OPT}}$ we have $r_i\\leq\\varepsilon\\cdot{\\sf OPT}$, by Corollary~\\ref{cor:greedy} the rounding loses at most a $(1+O(\\varepsilon))$-factor relative to the value of the objective function.\n\nIf the algorithm terminates in Step~\\ref{step:knapsack:small-rewards}, then similarly, the rounding loses at most a $(1+O(\\varepsilon))$-factor relative to\n$\\sum_{i\\in[n]\\setminus S_\\rho}r_ix_i>(1-\\varepsilon)\\sum_{i=1}^nr_ix_i$.\n\nFinally, note that by Corollary~\\ref{cor:knapsack-0-1} and Lemma~\\ref{lem:recurse-depth}, we have sufficiently many levels of the hierarchy to justify the recursive Steps~\\ref{step:knapsack:recurse-S-rho} and~\\ref{step:knapsack:recurse-increase}, and note that the choice of item $i$ in Step~\\ref{step:knapsack:choose-increase} is well-defined by Lemma~\\ref{lem:strict-increase}.\n\\end{proof}\n\n\n\n\\iffalse\n\\section{Proof of Lemma~\\ref{lem:order}} \\label{sec:orderlemma}\n\n\\begin{lemma-order}\nIf $\\mathcal C$ is solution to a \\textsc{Set Cover}\\ instance $(X, \\mathcal S)$, then there is an ordering $S_1, \\ldots, S_k$ of the cover-sets in $\\mathcal C$\nsuch that $|S_i \\setminus \\bigcup_{j=1}^{i'-1} S_j| \\leq \\frac{n}{i'}$ for any $1 \\leq i' \\leq i \\leq k$.\n\\end{lemma-order}\n\n\\fi\n\n\n\\iffalse\n\\section{Proof of Claim~\\ref{claim: small cover-sets in subinstance}}\\label{sec: small cover-sets in subinstance}\n\n\\begin{claim-subinstance}\nIf at each step of the Conditioning Phase we choose the set $S$ in the support of the current solution ${\\bf x}^{(d')}$\ncontaining the most uncovered elements in $X$, then for all $T \\in \\mathcal T$ we have $|T| \\leq \\frac{n}{d}$.\n\\end{claim-subinstance}\n\\begin{proof}\nLet $\\mathcal C_i$ denote the cover-sets that are chosen up to the $i$-th step of the Conditioning-Phase. 
We show that at every iteration $i$ we have $|S \\setminus \\bigcup_{T \\in \\mathcal C_i} T| \\leq \\frac{n}{d-i+1}$ for every $S \\in \\mathcal S \\setminus \\mathcal C_i$\nwith $x^{(i)}_S > 0$.\n\nFor $1 \\leq i \\leq d$, let $\\alpha_i := |S_i \\setminus \\bigcup_{T \\in \\mathcal C_i} T|$.\nSince we chose the largest (with respect to the uncovered items) cover-set $S_i$ in the support of ${\\bf z}^{(i)}$, we have $|S \\setminus \\bigcup_{T \\in \\mathcal C_i} T| \\leq \\alpha_i$\nfor every cover-set $S \\in \\mathcal S \\setminus \\mathcal C_i$. Since $\\mathcal C_{i-1}$ covers a subset of the items that are covered by $\\mathcal C_i$,\nwe see $\\alpha_{i-1} \\leq \\alpha_i$. So, $\\alpha_d \\geq \\alpha_{d-1} \\geq \\ldots \\geq \\alpha_1$. Now, each item $j$ covered by $\\mathcal C_0$ contributes 1 to the earliest index $i$ for which $j \\in S_i$,\nso $\\sum_{i=1}^d \\alpha_i \\leq n$.\n\nThis implies that $\\alpha_i \\leq \\frac{n}{d-i+1}$ for each $1 \\leq i \\leq d$. Therefore, every set in the support of ${\\bf z}^{(0)}$ has at most\n$\\frac{n}{d}$ elements that are not already covered by $\\mathcal C_0$. So, the instance $(Y, \\mathcal T)$ has $|T| \\leq \\frac{n}{d}$ for any $T \\in \\mathcal T$.\n\\end{proof}\n\\fi\n\n\n\n\n\n\n\n\n\\section{Conclusion} \\label{sec:conclusion}\n\nThe known sub-exponential time approximation algorithms place \\textsc{Set Cover}\\ in a distinct category from\nother optimization problems like \\textsc{Max-3SAT}\\ in the following sense. 
Though one can achieve provably hard approximation factors for \\textsc{Set Cover}\\ in sub-exponential time, Moshkovitz and Raz~\\cite{moshkovitz:raz} show that improving over the easy $\\frac87$-approximation for \\textsc{Max-3SAT}{}~\\cite{johnson} by any constant requires time $2^{n^{1-o(1)}}$, assuming the ETH.\n\nRather, \\textsc{Set Cover}\\ lies in the same category of problems as \\textsc{Chromatic Number}\\ and \\textsc{Clique}\\ both of which admit $n^{1-\\varepsilon}$ approximations in time $2^{\\tilde{O}(n^\\varepsilon)}$ (by partitioning the graph into sets of size $n^{\\varepsilon}$ and solving the problem optimally on these sets), despite the known $n^{1-o(1)}$ hardness of approximation for both problems~\\cite{hastad,FK,zuckerman}. This may also be taken as evidence that the recent subexponential time algorithm of Arora, Barak, and Steurer~\\cite{arora:barak:steurer} for \\textsc{Unique Games}\\ does not necessarily imply that \\textsc{Unique Games}\\ is not hard, or not as hard as other NP-hard optimization problems.\n\n\\iffalse\nOn a related note, Arora, Barak, and Steurer have discovered a sub-exponential $2^{\\tilde O(n^\\varepsilon)}$-time algorithm for distinguishing between instances of \\textsc{Unique Games}\\\nfor which at least a $(1-\\varepsilon)$-fraction of the constraints can be satisfied or at most an $\\varepsilon$-fraction of the constraints can be satisfied~\\cite{arora:barak:steurer}.\nIn~\\cite{arora:barak:steurer}, they mention that their algorithm shows that the \\textsc{Unique Games}\\ problem is ``significantly easier'' than other NP-hard optimization problems. 
Our result reinforces the\nfact that some NP-hard optimization problems can be approximated better than their hardness thresholds in sub-exponential time, so it is still a conceivable possibility that \\textsc{Unique Games}\\\nis NP-hard or, at least, quasi-NP-hard.\n\\fi\n\n\\iffalse\nIt is conceivable that lower-bounds for other problems can be defeated with sub-exponential approximation algorithms.\nFor instance, this is very easy to do with the independent set problem. Simply partition the nodes into groups of size $n^\\varepsilon$, and find the largest independent set in each of these groups\nwith a brute force $2^{O(n^\\varepsilon)}$ algorithm. The best solution among the $n^{1-\\varepsilon}$ groups will be within a $n^{1-\\varepsilon}$-factor of the optimum.\nThus, the threshold established by H\\r{a}stad~\\cite{hastad} and Zuckerman~\\cite{zuckerman} can be broken with a sub-exponential time algorithm.\nSimilarly, the chromatic number of a graph can be approximated in sub-exponential time by partitioning the nodes into groups of size $n^\\varepsilon$ and optimally coloring each\ngroup with a brute force $n^{O(n^\\varepsilon)} = 2^{\\tilde O(n^\\varepsilon)}$ algorithm, using a different set of colors for different partitions. This is also an $n^{1-\\varepsilon}$-approximation.\n\\fi\n\n\n\\iffalse\nFor another example, one could consider the maximum $k$-cover problem which is similar to \\textsc{Set Cover}\\, except the cover-sets do not have costs and the goal is to cover as many items as possible\nusing only $k$ cover-sets. Running the \\textsc{Set Cover}\\ greedy algorithm for only $k$ iterations is a $(1-1\/e)^{-1}$-approximation\n\\cite{hochbaum} and Feige's tight hardness for \\textsc{Set Cover}\\ can be extended to a lower bound of $(1-1\/e)^{-1} - \\varepsilon$~\\cite{feige}. A direct translation of our approach to this problem\nwould try guessing $n^\\varepsilon$ of the cover-sets and then run the greedy algorithm for $k-n^\\varepsilon$ iterations. 
However, this doesn't seem to give constant factor improvements leading one to wonder\nif a different sub-exponential time algorithm would have more success or if there is a more efficient hardness reduction ruling out constant-factor improvements in sub-exponential time.\n\\fi\n\n\\iffalse\nIt would also be interesting to see if other variants of \\textsc{Set Cover}\\ have improved approximation algorithms. We did not present improved approximation algorithms for instances with bounded\ncover-set size or ones where each item appears in at most $f$ of cover-sets. The latter has a lower bound of $f-1-\\varepsilon$ for any constant $\\varepsilon > 0$ assuming NP is not\ncontained in ${\\rm DTIME}(n^{{\\rm polylog}~n})$~\\cite{dinur:guruswami:khot:regev},\nor even the stronger bound of $f-\\varepsilon$ under the \\textsc{Unique Games}\\ conjecture~\\cite{khot:regev}. Considering the recent sub-exponential time algorithm for \\textsc{Unique Games}{}~\\cite{arora:barak:steurer}, improved sub-exponential-time approximation algorithms for these variants may be possible.\n\\fi\n\n\\iffalse\nTo summarize, an interesting general research question is to determine which NP-hard problems admit better approximation algorithms with sub-exponential time algorithms and which problems\ncannot be approximated better than their known lower bounds by sub-exponential time algorithms without refuting the Exponential Time Hypothesis.\nInstances of the former include \\textsc{Set Cover}, \\textsc{Unique Games}, and \\textsc{Group Steiner Tree}\\ tree while instances of the latter type include many CSPs such as Max-3SAT and classifying other problems is an intriguing direction for research.\n\\fi\n\nFinally, turning to lift-and-project methods, we note here that our ${\\rm LS}_+$-based algorithm for \\textsc{Knapsack}\\ shows that in some instances reduced integrality gaps which rely heavily on properties of the \\textsf{Lasserre}\\ hierarchy can be achieved using the weaker ${\\rm LS}_+$ 
hierarchy. This raises the question of whether the problems discussed in the recent series of \\textsf{Lasserre}-based approximation algorithms~\\cite{BRS11,GS11,RT} also admit similar results using ${\\rm LS}_+$. On the flip side, it would also be interesting to see whether any such problems have strong integrality gap lower bounds for ${\\rm LS}_+$, which would show a separation between the two hierarchies.\n\n\n\n\\paragraph{Acknowledgements} We would like to thank Dana Moshkovitz for pointing out the blowup in her \\textsc{Set Cover}\\ reduction. We would also like to thank Mohammad R. Salavatipour for preliminary discussions on sub-exponential time approximation algorithms in general, and Claire Mathieu for insightful past discussions of the \\textsc{Knapsack}-related results in~\\cite{KMN11}. Finally, we would like to thank Marek Cygan for bringing~\\cite{cygan:kowalik:wykurz} to our attention. \n\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{INTRODUCTION}\n\t\n\tAs researchers seek to introduce robotic technology into various aspects of our society, the robots must be able to preform increasingly complex tasks with enhanced automation and generality. \n\tThese new tasks are often complex, with multiple implicit subgoals that vary depending on the environment. As such, it is common for only the target end goal to be specified explicitly. \n\tFor example, if we ask the robot to bring us a cup of coffee, the robot will need to know where we are, as well as where the kitchen is, the tools, and the procedure for making coffee. The effort of learning such a complex composite task is enormous. \n\tMore problematic is the fact that even if we provide explicit subgoal guidance: i.e. where our kitchen is, where the coffee machine is and how to use our coffee machine, this knowledge won't transfer to robots in other houses. 
Even for the individual robot the solution may be brittle, as simply moving the location of the coffee cups may cause the task to fail.\n\t\n\tThe biggest learning challenge for complex tasks is the complexity itself.\n\tAny complex task almost always requires a large number of steps to complete an episode. This is particularly true for tasks with terminal-only sparse rewards. The longer the average trajectory is, the broader we can expect an unbounded state-space to become, and the lower our sample efficiency will be.\n\tIn an Imitation Learning setting, the use of expert trajectories helps alleviate the ``vanishing reward'' problem by providing feedback at each step of the trajectory. However, the exploration and data efficiency problems remain.\n\t\n\t\t\\begin{figure}[t]\n\t \\centering\n\t\\includegraphics[width=\\linewidth]{Image\/first_image3.png}\n\n\t\\vspace{-5mm}\n\t \\caption{In the ``make coffee and bring it back'' task, the traditional subgoal approach segments the complex task into smaller, manageable subtasks with clear end points and boundaries. In contrast, the proposed CASE approach generates novel compositional subgoals which change at each step, gradually guiding the agent towards the final goal.}\n\t \\vspace{-5mm}\n\t \\label{fig:teaser}\n\t \n\t\\end{figure}\n\n\tThe second challenge we seek to address is generalization. \n\tIn an Imitation Learning setting, the data efficiency challenge mentioned above will often manifest as a relatively restricted set of expert trajectories.\n\tAs such, learning to perform a complex task often involves repetitively training on a small set of sample tasks. This can easily lead to over-fitting on the training task set or the specific training examples of the tasks.\n\tA common approach to mitigate this is to design the model hierarchically, as shown in figure~\\ref{fig:teaser}. In this case each stage of the model is intended to specialize in solving a certain class of problems. 
This can simplify generalization within a subtask, but also exacerbates problems with data sparsity, as each submodel will only be exposed to a small portion of the training data.\n\t\n\tIn contrast, our approach to solving these complex goal-oriented tasks is to fully exploit the dataset to build a compositional task representation space, from which we can generate novel subgoals on the fly.\n\tWe treat a single complex task as a sequence of smaller implicit tasks or subtasks. Each of these subtasks still requires multiple steps and some effort to learn. However, the need for long term planning (and the brittleness to divergences) is alleviated.\n\tImportantly, unlike previous approaches, we do not explicitly define a finite set of subtasks with hard boundaries (i.e. ``navigate to kitchen'', ``make coffee'', etc.). Instead, a subtask can be any small sub-trajectory towards the overall goal (e.g. ``move 3 meters towards the kitchen'', ``pick up the coffee beans'', etc.) and is generated on-the-fly through compositionality. These sub-goals do not need to correspond to any task defined by the developer, nor do they need to have been previously observed during training. We refer to this approach as Compositional Adaptive Subgoal Estimation (CASE).\n\t\n\n\t\n\n\tFinally, an Imitation Learning policy is trained, using the learned compositional representation as its state space, and with targets set via the adaptive subgoal estimation. These subgoals adapt as the rollout progresses, providing additional flexibility. 
Unlike traditional rigid and non-overlapping subgoals, our approach enables the agent to adapt to errors and drift, following alternative routes to the overall goal and avoiding deadlock with unachievable subtasks.\n\tIn addition, the learning task is simplified, as long term planning happens via the compositional space and the agent focuses on short term execution.\n\tThis also allows the agent to perform one-shot generalization over unseen tasks in the same environment. \n\tWith this approach, we are able to outperform standard imitation learning policies (including those using the same compositional state space) by over 30\\% in unseen task generalization.\n\n\t\n\t\n\t\n\tIn summary, the contributions of this paper are:\n\t\\begin{enumerate}\n\t\t\\item A novel approach to estimating subgoal waypoints via a compositional task embedding space, referred to as CASE\n\t\t\\item An Imitation Learning approach for complex compound tasks, based on online subgoal estimation\n\t\t\\item An evaluation of one-shot task generalization for the policy, based on subgoal generalization\n\t\\end{enumerate}\n\n\t\n\t\n\t\\section{LITERATURE REVIEW}\n\tThis section will first provide a broad overview of prior works on Subgoal Search from all areas of machine learning. It will then discuss the current state-of-the-art in Imitation Learning, with a particular focus on compositional Imitation Learning approaches.\n\t\n\t\\subsection{Subgoal Search}\n\tThe most common use of Subgoal Search is in the context of path planning \\cite{bengio2018advances}. 
Here, specific landmarks or semantic regions are recognized as a subgoal for the robot to navigate towards.\n\tThese tasks often use existing content within the observation as the subgoal, especially in vision-related tasks \\cite{gao2017intention, savinov2018semi, liu2020hallucinative, kim2021landmark}.\n\t\n\tMore recently, there has been significant research on extending Subgoal Search beyond landmark based navigation, particularly exploring the interaction with reinforcement learning (RL). \n\t\\cite{bakker2004hierarchical} was one of the earliest works combining hierarchical reinforcement learning with subgoal discovery. Here, high level policies discover subgoals while low level policies specialize in executing each subgoal.\n\tIn a similar vein, \\cite{kim2021landmark} trains a high-level policy with a reduced action space guided by landmarks.\n\t\\cite{zhang2020generating} generalized this by looking for a sub-goal in a k-step adjacent region of the current state, within a general reinforcement learning environment.\n\t\\cite{li2021active} builds upon this stable representation of subgoals, and proposes an active hierarchical exploration strategy which seeks out new promising subgoals with novelty and potential.\n\tAll of these works discovered subgoals automatically, removing the need for hand-crafted subgoals. However, they all maintained hard boundaries between the subgoals.\n\t\n\tThe majority of the research in subgoal estimation focuses on optimizing the generation or assessment of a candidate sub-goal state.\n\tHowever, \\cite{czechowski2021subgoal} show a simple approach for efficiently generating subgoals which are precisely k steps ahead in reasoning tasks.\n\tIn contrast, our CASE approach chooses a possible ``near future'' subgoal, which adapts over time. This allows the policy network to focus on learning a more generalized skill for solving the task rather than attaining any specific subgoal. 
This in turn improves the performance of our technique when encountering unseen tasks, and provides robustness by allowing recovery from errors.\n\t\n\t\\subsection{Imitation Learning}\n\tImitation Learning (IL) approaches \\cite{schaal1999imitation, silver2008high, chernova2009interactive, hester2018deep}\n\tutilize a dataset of expert demonstrations to guide the learning process.\n\tInitially, imitation learning relied heavily on supervision. \n\tHowever, these approaches perform poorly with increasing episode length, and have issues with generalization. To improve upon this, the work of Levine et al.~\\cite{levine2013guided} uses expert trajectories to optimize the action policy rather than to imitate the expert exactly. \n\tLater works move away from a passive collection of demonstrations. Instead, they exploit the active collection of demonstrations from the expert. \n\tTo this end, \\cite{syed2007game, ho2016generative, song2018multi} create an interactive expert which provides demonstrations in response to the actions taken by the agent. This helps partially mitigate the issues with training data efficiency. \n\n\tIn more recent works, \\cite{sun2017deeply, sun2018truncated} aim to combine IL with RL, by using IL as a pre-training step for RL. \\cite{cheng2018fast} perform randomized switching from IL to RL during policy training to enable faster learning. \\cite{murali2016tsc} uses expert trajectories to learn the reward function rather than the action policy. \n\n\t\n\tTo the best of our knowledge, the only prior work combining subgoal search and imitation learning is \\cite{paul2019learning}. This approach uses a form of clustering on the expert trajectories to decompose the complex task into sub-goals. By learning to generate sub-goal states, the network obtains reward functions which direct the RL agent to move from one subgoal to another. 
However, the lack of compositionality in the model and the rigidity of the sub-goal prediction limit its generalization capability.\n\t\n\t\\section{METHODOLOGY}\n\t\n\tNext, we detail our CASE method, utilizing adaptive subgoal waypoints for learning complex tasks and generalising to previously unseen tasks. First, we clarify terminology.\n\tA ``task'' is defined as a singular goal the agent must complete through a series of interactions with the environment. A ``sequence'' of tasks means a collection of multiple separate tasks, some of which may be independent and some of which may depend on each other. Disregarding task dependencies, we allow the individual tasks within a sequence to be completed in any order during the completion of the sequence.\n\tWe further specify a ``complex task'' within this work as a singular specified goal task, which nevertheless engenders an entire sequence of implicitly defined subtasks to be completed, due to the implicit dependencies within the environment.\n\tIn our imitation learning framework, we define the ``subgoal waypoint'' as a state in the expert reference trajectory, located in the ``near future'' of the current agent's state. Note that the current trajectory and reference trajectory are both solving the same sequence of tasks, but are operating in different environments. Thus, the subgoal waypoint cannot be used directly to guide the agent's trajectory.\n\n\t\n\n\tWe learn a compositional latent space to represent tasks and sequences of tasks. \n\tMore specifically, a singular task maps to a unique point in the latent space. An unordered sequence of tasks (as defined above) also maps to a unique point in the latent space, which is the summation of the embeddings of all the subtasks within the sequence. This helps to draw a connection between ``complex tasks'' and ``task sequences'' as defined above. 
Both the singular complex task, and the explicit sequence of all dependent subtasks, should map to the same point within the latent space.\n\tThis compositional approach makes manifest the lack of ordering specified above. The summation of subtask embeddings is a commutative operation; therefore, changing the order of the summation does not change the final embedding.\n\n\tIn order to learn this compositional task embedding, this constraint is codified as a number of regularization losses on the state encoder. \n\n\n\t\n\tFinally, we train agents to select actions using the learned task embedding as their state representation. This provides a compact definition of both the current environment and the tasks to be completed.\n\tWe tested our CASE framework for imitation learning. Taking the full set of states from timestep $t=0$ to the final timestep $t=T$, let $O_t$ be the observation of a state at time $t$ in a fully observable environment. We similarly specify $O_0^{ref}...O_T^{ref}$ as an expert reference trajectory which completes the same task sequence in a different environment. \n\tThe expert reference trajectories are extracted by greedy search over the environment for the optimal solution.\n\t\\vspace{-2mm}\n\t\\subsection{Compositional representation}\n\tA compositional representation is an embedding which encodes structural relationships between the items in the space \\cite{mikolov2013distributed}.\n\n\n\n\tConsider a compositional representation $\\vec{v}_{0,N}$ which encodes the trajectory and tasks required to progress from state $s_0$ to state $s_N$. 
More explicitly, $\\vec{v}_{0,N}$ can be defined with a parameterized encoding function $g_{\\phi}$ as:\n\t\\begin{equation}\n\t \\vec{v}_{0,N}=g_{\\phi}(s_0,s_N)=g_{\\phi}(s_0,s_1)+...+g_{\\phi}(s_{N-1},s_N),\n\t \\label{eq0}\n\t\\end{equation}\n\twhere $\\phi$ are the encoding parameters.\n\tThis representation defines a sequence of tasks as the sum of the representations of all subtasks within the sequence. \n\tTo prevent accidentally enforcing a specific ordering during the completion of these subtasks, the representation is built with commutativity, i.e. $\\vec{v}_{A,B} + \\vec{v}_{C,D} = \\vec{v}_{C,D}+\\vec{v}_{A,B}$.\n\tThis is a very powerful representation for computing encodings of implicit groups of subtasks. As an example, the embedding of all tasks that have yet to be accomplished at timestep $t$ of a sequence can be calculated as $\\vec{v} = \\vec{v}_{0,N} - \\vec{v}_{0,t}$.\n\tHowever, in a complex task sequence, $\\vec{v}$ often embeds a long trajectory which consists of many tasks. \n\tThis makes the learning process difficult, as information about far future tasks is a distraction from completing the current task.\n\n\n\t\n\t\n\t\n\t\\subsection{Plan Arithmetic and Subgoal Waypoints}\n\tIn one-shot imitation learning, the agent must perform a task (or sequence of tasks) conditioned on one reference example of the same task. In our work we further generalize this by allowing the current and reference task to be performed in different environments. \n\tThe agent is trained with many sequences of other tasks in other environments and then provided with an expert trajectory as reference to guide the new task, with no additional learning. Humans are adept at this: generalizing previous experiences to newly defined problems. 
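To make the plan arithmetic above concrete, here is a minimal sketch in which a hypothetical linear map stands in for the learned encoder $g_{\phi}$ (a toy illustration, not our actual CNN encoder). Under this stand-in the identities of the compositional representation hold exactly; in the learned setting they hold only approximately and are enforced softly by regularization losses:

```python
import numpy as np

# Hypothetical stand-in for the learned encoder g_phi: each state is a feature
# vector, and a pair embedding is a fixed linear map of (s_b - s_a).  This
# choice makes the compositional identities hold exactly, by construction.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 4))          # projection into an 8-d latent space

def g_phi(s_a, s_b):
    """Embed the sub-trajectory from state s_a to state s_b."""
    return A @ (s_b - s_a)

states = rng.standard_normal((5, 4))      # states s_0 ... s_4 of one episode

# The sum of single-step embeddings equals the whole-trajectory embedding.
stepwise = sum(g_phi(states[i], states[i + 1]) for i in range(4))
whole = g_phi(states[0], states[4])
assert np.allclose(stepwise, whole)

# Commutativity: reordering the summands leaves the sequence embedding unchanged.
v1 = g_phi(states[0], states[1]) + g_phi(states[2], states[3])
v2 = g_phi(states[2], states[3]) + g_phi(states[0], states[1])
assert np.allclose(v1, v2)

# Tasks still to be done at timestep t: subtract the completed prefix.
t = 2
remaining = g_phi(states[0], states[4]) - g_phi(states[0], states[t])
assert np.allclose(remaining, g_phi(states[t], states[4]))
```

Because the stand-in is linear in $s_b - s_a$, the telescoping sum, commutativity, and prefix subtraction all hold by construction here, whereas the learned encoder must be regularized towards them.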
However, for machine learning this is extremely challenging, and represents an important stepping stone towards general AI.\n\t\n\tDuring training, the agent is given two trajectories: the training trajectory $U$ and the expert trajectory $R$, with matching task lists. It then learns a policy to perform online prediction of the actions in one trajectory, conditioned on the other trajectory as the reference. \n\tIn the running example `getting coffee', the agent will be provided with trajectories of retrieving coffee from a different office with a different floor layout.\n\tLearning how to make coffee without relying on specific meta-knowledge about a particular environment is vital for improving generalization.\n\n\n\n\tTo be more specific, a visual approach to task specification is taken. During both training and testing, the agent is given an image of the desired goal state for the current episode ($U_N$), as well as the goal state of the reference episode ($R_T$). It is also given an image of the current state ($U_t$), and an image of a future subgoal state ($R_I$) from the reference trajectory $\\{R_0, \\ldots, R_T\\}$.\n\tIt is important to emphasise that the agent is not provided with any future knowledge about the current trajectory, beyond the target goal state which specifies the task to be completed. 
Subgoals are drawn from the future of the reference trajectory, not the current trajectory.\n\t\n\tTo choose the subgoal waypoint, we assume the agent is always on the optimal path; its progress through the task is therefore proportional to that of the expert trajectory.\n\tAs such, when we choose the waypoint, we take the state $R_p$ in the reference trajectory which has the same percentage of completion as the current state in the training episode, i.e. $\\frac{p}{T} = \\frac{t}{N}$, and then add a fixed number of steps $k$ to ensure the waypoint is in the ``near future'' ($I=p+k$).\n\tThe value of $k$ is determined experimentally in our work, as in previous works in the broader subgoal selection literature \\cite{czechowski2021subgoal}.\n\tOne potential issue with this approach is that the length of each subtask is unknown. If the current subtask in the training episode is significantly longer or shorter than in the expert trajectory, then the waypoint may fall into a different subtask. \n\tHowever, we expect the agent to be able to adapt to this situation, as any state from the following subtask will already reflect the completion of the current subtask.\n\t\n Given our selected subgoal waypoint for this timestep, the model will first encode the compositional representation of the current state to the goal state ($\\vec{U}=g_{\\phi}(U_t,U_N)$). 
It will also encode the compositional representation of the reference sub-goal to the goal state of the reference episode ($\\vec{V}=g_{\\phi}(R_I,R_T)$).\n\n\n\n\tWe can thus calculate a waypoint embedding $\\vec{W}$ for the current trajectory with the following subtraction in the latent domain:\n\n\t\\begin{equation} \n\t\t\\vec{W} = \\vec{U} - \\vec{V} = g_{\\phi}(U_t, U_N) - g_{\\phi}(R_I, R_T).\n\t\t\\label{eq1}\n\t\\end{equation}\n\tThis estimates an embedded representation $\\vec{W}$ of the trajectory from the current state ($t$) of the agent to the corresponding subgoal waypoint in the ``near future''.\n\tThis short-term task embedding is then used as the input for the policy network $\\pi \\left(a_t|U_t, \\vec{W}\\right) $ to determine the actions of the agent.\n\t\n\t\\subsection{Policy and encoder learning}\n\tThe compositional state encoder and the subgoal-based policy network are trained concurrently.\n\tBoth models are updated according to the policy loss:\n\t \\begin{equation}\n\t\tL_a(U_{t}, U_N, R_I, R_T) = -\\log\\left( \\pi\\left(\\hat{a}_t \\,\\middle|\\, U_{t}, g_{\\phi}(U_t, U_N) - g_{\\phi}(R_I, R_T)\\right)\\right)\n\t\t\\label{eq2}\n\t \\end{equation}\n\t where $\\hat{a}_t$ is the expert reference action at the current state $U_t$.\\looseness=-1\n\t\n\tAdditionally, there are two regularization losses using a triplet margin loss function $l_m$~\\cite{chechik2010large}. The first ensures that arithmetic operations are correctly preserved within the latent space. To this end, compositionality is enforced by the loss $L_H$, which ensures that the sum of the embedding of the past states and the embedding of the future states is equal to the embedding of the entire task:\n\t\\begin{equation}\n\tL_H(U_0, U_t, U_N) = 
l_m(g_{\\phi}({U_0, U_t}) + g_{\\phi}({U_t, U_N}), g_{\\phi}({U_0, U_N})).\n\t\\label{eq3}\n\t\\end{equation}\n\tThe second regularization loss ensures that similarity in the latent space corresponds to semantically similar tasks. The full training and reference pair ($U, R$) should have similar embedded representations:\n\t\\begin{equation}\n\tL_P(U_0, U_N, R_0, R_T) = l_m(g_{\\phi}({U_0, U_N}), g_{\\phi}({R_0, R_T})).\n\t\\label{eq4}\n\t\\end{equation}\n\tThus the loss function for the framework is expressed as the weighted sum of the three losses: $L = L_a + \\lambda_H L_H + \\lambda_P L_P$.\n\t\n\t\n\t\\section{EVALUATION}\n\tOur experiments aim to show the improvement in the agent's generalisation capability when trained using our CASE approach to online compositional subgoals.\n\tTherefore, we evaluate the performance on tasks which were previously unseen during training, and for which only a single reference episode is provided.\n\n\tIn both training and testing, the observation is provided as a pair of images. The state encoder $g_{\\phi}$ is a 4-layer CNN, and is shared by the reference and training episodes. This encodes the current state to goal state sub-trajectory, as well as the sub-goal state to reference goal state sub-trajectory.\n\tThe resulting latent will then be processed according to Eq.~(\\ref{eq1}), and the output is fed into the policy network to estimate the action.\n\tIn each experiment, we contrast several variants of our own approach, including the effect of the current image branch and the additional compositionality losses. We also compare against the current state-of-the-art in compositional IL \\cite{devin2019compositional}.\n\tAdditionally, we include an ablation study on the ``near future'' subgoal lookahead parameter $k$. In all other experiments we set $k=4$. 
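As a concrete illustration of the waypoint rule $I = p + k$ (with $p\/T = t\/N$) and the latent subtraction defining $\vec{W}$, the following toy sketch again uses a hypothetical linear stand-in for the encoder $g_{\phi}$ rather than the learned CNN; the index clamping at the end of the reference episode is our own safeguard for the sketch, not part of the formal description:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 4))           # hypothetical linear stand-in for g_phi

def g_phi(s_a, s_b):
    return A @ (s_b - s_a)

def waypoint_index(t, N, T, k):
    """Map progress t/N in the current episode onto the reference episode
    (p/T = t/N), then look k steps into the 'near future' (I = p + k)."""
    p = round(t / N * T)
    return min(p + k, T)                  # clamp so the index stays valid

# Toy current episode U (N+1 states) and reference episode R (T+1 states).
N, T, k = 10, 14, 4
U = rng.standard_normal((N + 1, 4))
R = rng.standard_normal((T + 1, 4))

t = 5                                     # current timestep
I = waypoint_index(t, N, T, k)            # subgoal waypoint in the reference

# W = g_phi(U_t, U_N) - g_phi(R_I, R_T): embedding of the sub-trajectory from
# the current state to the corresponding near-future waypoint.
W = g_phi(U[t], U[N]) - g_phi(R[I], R[T])
# W (together with U_t) is what the policy network pi(a_t | U_t, W) consumes.
```

With $t=5$, $N=10$, $T=14$ and $k=4$, the rule gives $p=7$ and $I=11$; the resulting $\vec{W}$ is the latent that conditions the policy at this step.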
We also set the loss weightings $\\lambda_H = \\lambda_P = 1$.\n\t\n\t\\subsection{Environment}\n\tIn order to examine generalization and long-term IL problems, we trained our agent in the Craft World~\\cite{devin_craftingworld_2020} environment. This also facilitated fair comparison with the state-of-the-art compositional IL approach of \\cite{devin2019compositional}.\n\tThis environment is a 2D discrete-action world with a top-down grid view, where the agent is able to move in one of four directions at each step. \n\tThere are different types of objects in the environment, as shown in figure~\\ref{fig:data_example}, including trees, rocks, axes, wheat, bread, etc. \n\tSome objects can be interacted with by the agent via pick-up and drop-off actions. \n\tAn object moves with the agent when it has been picked up, and can cause transformations to other objects in the environment. \n\tFor example, if the agent carries an axe to a tree, the tree will be transformed into a log, which can then be transformed into a house once the agent picks up a hammer and brings it to the log. \n\tIt is apparent that this environment makes it possible to define complex long-horizon tasks such as ``make bread'' or ``build house'' which include many implicit subgoals. Furthermore, these tasks can be combined into sequences such as [``make bread'', ``eat bread'', ``build house'']. This provides a good selection of unique tasks to generate for training data. It also liberates the agent from skill list labels \\cite{oh2017zero} or language-based skill descriptions \\cite{shu2017hierarchical}, which limit generalisation to unseen tasks and sequences. 
\n\tAlso of interest is the fact that this allows the task sequence to be generated with no explicit ordering, giving more freedom in both data generation and generalization.\n\t\n\t\\begin{figure}[h]\n\t\\vspace{-3mm}\n\t \\centering\n\t \\includegraphics[width=0.9\\linewidth]{Image\/env.png}\n\t \\vspace{-2mm}\n\t \\caption{An example of the Craft World~\\cite{devin_craftingworld_2020} environment: on the left is a rendering of the environment; on the right, an unrendered input state.}\n\t \\label{fig:data_example}\n\t\\end{figure}\n\t\n\tThe training and testing datasets are generated with a random map, upon which the agent is required to perform 2-8 different tasks in sequence with no particular ordering.\n\tThe expert trajectory is generated with greedy search to ensure an optimal solution.\n\tThe agent is trained on 150,000 episodes and tested on the same number of unseen episodes.\n\tTo test one-shot generalization, the set of training tasks is different from the set of testing tasks, requiring generalization from the reference trajectory.\n\n\t\\subsection{One-shot task generalization and ablation study}\\label{sec:ablation}\n\t\n\n\tResults from the generalization test are shown in table~\\ref{tab:my-table}.\n\tThe CASE agent is able to outperform the original SOTA benchmark \\cite{devin2019compositional} by over 30\\% in unseen task success rate. The additional assistive losses (CASE+L) only improved performance by a small amount at later training steps.\n\n\tThe SOTA (CPV-FULL) in its original form \\cite{devin2019compositional} does not take the desired goal state as input at test time. Instead, the approach relies on the compositionality of the current and reference trajectory to produce a goal state. 
\n\tTherefore, we also evaluate an enhanced version of \\cite{devin2019compositional} (CPV+Goal Guidance), where the reference trajectory is replaced with the ground-truth end goal of the current trajectory at test time.\n\tAs shown in table \\ref{tab:my-table}, the CASE approach still consistently outperforms the modified SOTA on unseen tasks. \n\t\n\t\n\n\t\n\tThe ablation study is performed over the different components of the framework. More specifically, we explore the benefits of the assistive losses ($L_H$, eq.~\\ref{eq3}, and $L_P$, eq.~\\ref{eq4}) and the current state image input ($U_t$).\n\tAs shown in table \\ref{tab:my-table}, the current state image branch and the assistive losses each increased the performance of the model. \n\tAs expected, the current image provided additional information more relevant to the current task, while the assistive losses increased generalization capability by encouraging a more regularized representation.\n\t\n\tDuring experimentation, we observed that the difference in performance between CASE and the enhanced SOTA grows as the number of tasks in a sequence increases.\n\tWhen the number of tasks in each sequence is chosen from the range 2--4, the CASE approach outperforms the baseline by around 3--4\\%. As sequence lengths increase, the performance gap becomes much wider, as shown in figure~\\ref{fig:20220213result}. 
It is worth mentioning that the CASE agent is able to maintain a success rate above 60\\% throughout the experiments.\n\t\n\t\\begin{figure}\n\t\t\\centering\n\t\t\\includegraphics[width=0.75\\linewidth]{Image\/diff_result.png}\n\t\t\\vspace{-2mm}\n\t\t\\caption{A graph of the difference in performance for CASE, and the original \\& enhanced baselines, as the number of tasks in a sequence increases.\n\t\t}\n\t\t\\vspace{-5mm}\n\t\t\\label{fig:20220213result}\n\t\\end{figure}\n\t\n\\begin{table}[h]\n\\centering\n\\begin{tabular}{llll}\nModel &\n \\begin{tabular}[c]{@{}l@{}}Best\\\\ Performance\\end{tabular} &\n \\begin{tabular}[c]{@{}l@{}}Average \\\\ Performance\\end{tabular} &\n \\begin{tabular}[c]{@{}l@{}}Standard\\\\ Deviation\\end{tabular} \\\\ \\hline\nCPV-FULL\\cite{devin2019compositional} & 0.432 & 0.392 & 0.0166 \\\\\n\\begin{tabular}[c]{@{}l@{}}CPV+Goal\\\\ Guidance\\end{tabular} & 0.670 & 0.647 & 0.0146 \\\\\n\\textbf{CASE} & 0.689 & 0.641 & 0.0133 \\\\\n\\textbf{CASE+CI} & 0.701 & 0.676 & 0.0139 \\\\\n\\textbf{CASE+CI+L} & \\textbf{0.712} & \\textbf{0.687} & 0.0167\n\\end{tabular}\n\\caption{{\\footnotesize An ablation study on the different components of the network: Current state image (CI) and assistive losses (L).\n}}\n\\vspace{-10mm}\n\\label{tab:my-table}\n\\end{table}\n\n\t\n\tFinally, we tested several settings for the ``near future'' lookahead parameter $k$. \n\tAs shown in figure \\ref{fig:k-step}, when $k=4$ the agent's performance is maximized.\n\n\tThe graph shows some sensitivity to the parameter $k$, and performance can be unstable at lower values. The setting $k=1$ has a middling performance and the lowest standard deviation, while $k=2$ and $k=3$ have the worst performance and highest variance.\n\tWe suspect this is due to the inconsistency in the length of the randomized subtasks between the training episodes and expert trajectories.\n\tAs an example, imagine the first task in the training episode is to pick up an axe. 
In the agent's training episode the axe may be 10 steps away from the current state, while it is 2 steps away in the expert reference trajectory. If the $k$ value is greater than 2, the generated subgoal will lie beyond the completion of the current subtask within the expert trajectory. \n\tThe reverse can also occur if the distances are swapped.\n\tIn most cases the agent is able to deal with this: a subgoal for the following task is still easier to learn from than the entire remaining trajectory. Nevertheless, it may be interesting for future work to explore the automatic computation of the optimal $k$ parameter during compositional subgoal estimation.\n\t\n\t\\vspace{-3mm}\n\t\\begin{figure}[th]\n\t\t\\centering\n\t\t\\includegraphics[width=0.9\\linewidth]{Image\/k-step2}\n\t\t\t\\vspace{-2mm}\n\t\t\\caption{An ablation study over the $k$ parameter for choosing the ``near future'' waypoint. The average performance for each $k$ is calculated from 8 trials.}\n\t\t\\label{fig:k-step}\n\t\t\t\\vspace{-6mm}\n\t\\end{figure}\n\n\t\n\t\\section{CONCLUSIONS}\n\n\tIn this work, we proposed CASE, an approach that learns a compositional task representation enabling novel subgoal estimation from reference trajectories in IL. This makes it significantly easier to learn long and complex sequences of tasks, including those with implicit or poorly defined subtasks.\n\n\tWith this technique, we developed an IL agent that generalizes to previously unseen tasks with a success rate of around 70\\%, an improvement of around 30\\% over the previous SOTA.\n\t\n\tHowever, this approach can be developed further in future work.\n\tAs discussed in section~\\ref{sec:ablation}, using a fixed value for the $k$-step lookahead parameter may be suboptimal. 
Experiments indicate that performance and stability may be improved by developing an adaptive lookahead window, based on recent developments in the broader field of subgoal search \\cite{czechowski2021subgoal}.\n\t\n\t\\addtolength{\\textheight}{-8cm} \n\t\n\t\\newpage\n\t\\bibliographystyle{plain}