\section{Methods}

\textbf{Device Fabrication.} Nanowires were selected by imaging with scanning electron microscopy. A double resist layer, consisting of PMMA AR-P 669.04 and PMMA AR-P 672.02, was spun onto the chip and patterned with e-beam lithography. After developing the resist in MIBK:IPA (1:2), an Ar reactive ion etch was performed to remove the native oxide from the PbTe nanowires \cite{Yang2008}. Immediately after the Ar etch, the chip was loaded into an e-beam evaporator, where 5~nm of Ti and 50~nm of Au were deposited. Lift-off was then carried out in acetone.

\textbf{$g$-factor fitting.} By fitting the complete set of $g$-factors extracted for all magnetic field orientations with Eq.~\ref{eq:2}, we determined the principal $g$-factors and the principal axes of the effective $g$-factor tensor $|g^*|(\vec{B})$. The magnetic field components $B_\mathrm{x}$, $B_\mathrm{y}$, $B_\mathrm{z}$ and the measured $g$-factors formed the set of input parameters for the fit. The fit parameters were the principal $g$-factors $g_1$, $g_2$, $g_3$ and the Euler angles of rotation $\phi$, $\theta$, $\psi$ \cite{Liles2021}. With these angles, the magnetic field components were transformed from the Cartesian coordinate system to the coordinate system of the principal axes of $|g^*|(\vec{B})$. This fitting procedure was repeated for two gate voltage regimes of Device~1 and for one regime of Device~2.
Subsequently, with the Euler angles of rotation found by the fits, we transformed the principal $g$-factors to spherical coordinates to determine the orientation of each principal $g$-factor.

\section{Acknowledgments}

We thank W.~Riess, G.~Salis, E.~G.~Kelly, F.~J.~Schupp and the Cleanroom Operations Team of the Binnig and Rohrer Nanotechnology Center (BRNC) for their help and support. A.~Fuhrer acknowledges support from NCCR SPIN, funded by the SNSF under grant number 51NF40-180604. The work in Eindhoven is supported by the European Research Council (ERC TOCINA 834290). F.~Nichele acknowledges support from the European Research Council, grant number 804273, and the Swiss National Science Foundation, grant number 200021\_201082.

\section{Introduction}

For a function $f:X\rightarrow\mathbb{R}$, we say that $x$ is a \emph{critical point} if $\nabla f(x)=\mathbf{0}$, and that $y$ is a \emph{critical value} if there is some critical point $x$ such that $f(x)=y$. A critical point $x$ is a \emph{saddle point} if it is neither a local minimizer nor a local maximizer. In this paper, we present algorithms based on the multidimensional mountain pass theorem to find saddle points numerically.

The main purpose of critical point theory is the study of variational problems. These are problems (P) for which there exists a smooth functional $\Phi:X\rightarrow\mathbb{R}$ whose critical points are solutions of (P). Variational problems occur frequently in the study of partial differential equations.

At this point, we make a remark about saddle points in the study of min-max problems. Such saddle points occur in problems in game theory and in constrained optimization using the Lagrangian, and have the splitting structure
\[
\min_{x\in X}\max_{y\in Y}f(x,y).
\]
In min-max problems, this splitting structure is exploited in numerical procedures.
See \\cite{RH03} for a survey of algorithms for min-max\nproblems. In the general case, for example in finding weak solutions\nof partial differential equations, such a splitting structure may\nonly be obtained after the saddle point is located, and thus is not\nhelpful for finding the saddle point.\n\nA critical point $x$ is \\emph{nondegenerate} if its Hessian $\\nabla^{2}f(x)$\nis nonsingular and it is \\emph{degenerate} otherwise. The \\emph{Morse\nindex} of a critical point is the maximal dimension of a subspace\nof $X$ on which the Hessian $\\nabla^{2}f(x)$ is negative definite.\nIn the finite dimensional case, the Morse index is the number of negative\neigenvalues of the Hessian.\n\nLocal maximizers and minimizers of $f:X\\rightarrow\\mathbb{R}$ are\neasily found using optimization, while saddle points are harder to\nfind. To find saddle points of Morse index 1, one can use algorithms\nmotivated by the mountain pass theorem. Given points $a,b\\in X$ ,\ndefine a \\emph{mountain pass} $p^{*}\\in\\Gamma(a,b)$ to be a minimizer\nof the problem \\[\n\\inf_{p\\in\\Gamma(a,b)}\\sup_{0\\leq t\\leq1}f(p(t)),\\]\nif it exists. Here, $\\Gamma(a,b)$ is the set of continuous paths\n$p:[0,1]\\rightarrow X$ such that $p(0)=a$ and $p(1)=b$. Ambrosetti\nand Rabinowitz's \\cite{AR73} mountain pass theorem states that under\nadded conditions, there is a critical value of at least $\\max\\{f(a),f(b)\\}$.\nTo find saddle points of higher Morse index, it is instructive to\nlook at theorems establishing the existence of critical points of\nMorse index higher than 1. Rabinowitz \\cite{R77} proved the multidimensional\nmountain pass theorem which in turn motivated the study of linking\nmethods to find saddle points. 
We shall recall theoretical material\nrelevant for finding saddle points of higher Morse index in this paper\nas needed.\n\nWhile the study of numerical methods for the mountain pass problem\nbegan in the 70's or earlier to study problems in computational chemistry,\nChoi and McKenna \\cite{CM93} were the first to propose a numerical\nmethod for the mountain pass problem to solve variational problems.\nMost numerical methods for finding critical points of mountain pass\ntype rely on discretizing paths in $\\Gamma(a,b)$ and perturbing paths\nto lower the maximum value of $f$ on the path. There are a few other\nmethods of finding saddle points of mountain pass type that do not\ninvolve perturbing paths, for example \\cite{H04,BT07}. \n\nSaddle points of higher Morse index are obtained with modifications\nof the mountain pass algorithm. Ding, Costa and Chen \\cite{DCC99}\nproposed a numerical method for finding critical points of Morse index\n2, and Li and Zhou \\cite{LZ01} proposed a method for finding critical\npoints of higher Morse index. \n\nIn \\cite{LP08}, we suggested a numerical method for finding saddle\npoints of mountain pass type. The key observation is that the value\n\\[\n\\sup\\big\\{l\\geq\\max\\big(f(a),f(b)\\big)\\mid a,b\\mbox{ lie in different path components of }\\{x\\mid f(x)\\leq l\\}\\big\\}\\]\nis a critical value. In other words, the supremum of all levels $l$\nsuch that there is no path connecting $a$ and $b$ in the level set\n$\\{x\\mid f(x)\\leq l\\}$ is a critical value. See Figure \\ref{fig:mtn-contrast}\nfor an illustration of the difference between the two approaches.\nAn extensive theoretical analysis and some numerical results of this\napproach were provided in \\cite{LP08}. 
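
To make the level-set observation concrete, it can be sketched in a few dozen lines (a simplified two-dimensional model of our own, not the implementation of \cite{LP08}): sample $f$ on a grid, test path-connectedness of $\{x\mid f(x)\leq l\}$ by flood fill, and bisect on the level $l$.

```python
from collections import deque

def connected(f, a, b, level, lo=-2.0, hi=2.0, n=101):
    """Test on an n-by-n grid over [lo, hi]^2 whether a and b lie in the
    same path component of the sublevel set {(x, y) : f(x, y) <= level}."""
    h = (hi - lo) / (n - 1)
    idx = lambda p: (round((p[0] - lo) / h), round((p[1] - lo) / h))
    inside = [[f(lo + i * h, lo + j * h) <= level for j in range(n)]
              for i in range(n)]
    start, goal = idx(a), idx(b)
    if not (inside[start[0]][start[1]] and inside[goal[0]][goal[1]]):
        return False
    seen, queue = {start}, deque([start])
    while queue:                          # breadth-first flood fill
        i, j = queue.popleft()
        if (i, j) == goal:
            return True
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < n and 0 <= nj < n and inside[ni][nj] \
                    and (ni, nj) not in seen:
                seen.add((ni, nj))
                queue.append((ni, nj))
    return False

def mountain_pass_level(f, a, b, l_lo, l_hi, tol=1e-3):
    """Bisect on l: the supremum of levels at which a and b lie in
    different components of {f <= l} approximates the critical value."""
    while l_hi - l_lo > tol:
        mid = 0.5 * (l_lo + l_hi)
        if connected(f, a, b, mid):
            l_hi = mid   # a path already exists at this level
        else:
            l_lo = mid   # still separated: mid is below the critical value
    return 0.5 * (l_lo + l_hi)

# Double well: minima at (-1, 0) and (1, 0), saddle at the origin, value 1.
double_well = lambda x, y: (x * x - 1) ** 2 + y * y
```

For the double well above, the computed level agrees with the saddle value $1$ to within the bisection tolerance.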

In this paper, we extend three of the themes in the level set approach to find saddle points of higher Morse index, namely the convergence of the basic algorithm (Sections \ref{sec:Alg-desc} and \ref{sec:Conv-ppties}), the optimality conditions for the sub-problem (Section \ref{sec:Opti-condns}), and a fast locally convergent method in $\mathbb{R}^{n}$ (Sections \ref{sec:Fast-local-convergence} and \ref{sec:Proof-of-superlinear}). Section \ref{sec:Other-local-analysis} presents an alternative result on convergence to a critical point similar to that of Section \ref{sec:Conv-ppties}.

We refer the reader to \cite{LP08} for examples reflecting the limitations of the level set approach for finding saddle points of mountain pass type, which will be relevant for the design of level set methods for finding saddle points of general Morse index.

\begin{figure}[h]
\begin{tabular}{|c|c|}
\hline 
\includegraphics[scale=0.4]{perturb} & \includegraphics[scale=0.3]{converge2}\tabularnewline
\hline
\end{tabular}

\caption{\label{fig:mtn-contrast}The diagram on the left shows the classical method of perturbing paths for the mountain pass problem, while the diagram on the right shows convergence to the critical point by looking at level sets.}

\end{figure}

\section*{Notation}
\begin{description}
\item [{$\mbox{\rm lev}_{\geq b}f$}] This is the level set $\{x\mid f(x)\geq b\}$, where $f:X\rightarrow\mathbb{R}$. The interpretations of $\mbox{\rm lev}_{\leq b}f$ and $\mbox{\rm lev}_{=b}f$ are similar.
\item [{$\mathbb{B}$}] The ball of center $\mathbf{0}$ and radius $1$. $\mathbb{B}(x,r)$ stands for the ball of center $x$ and radius $r$. $\mathbb{B}^{n}$ denotes the closed $n$-dimensional unit ball in $\mathbb{R}^{n}$.
\item [{$\mathbb{S}^{n}$}] The $n$-dimensional sphere in $\mathbb{R}^{n+1}$.
\item [{$\partial$}] Subdifferential of a real-valued function, or the relative boundary of a set. 
If $h:\\mathbb{B}^{n}\\to S$ is a homeomorphism\nbetween $\\mathbb{B}^{n}$ and $S$, then the relative boundary of\n$S$ is $h(\\mathbb{S}^{n-1})$.\n\\item [{$\\mbox{lin}(A)$}] For an affine space $A$, the lineality space\n$\\mbox{lin}(A)$ is the space $\\{a-a^{\\prime}\\mid a,a^{\\prime}\\in A\\}$.\n\\end{description}\n\n\\section{\\label{sec:Alg-desc}Algorithm for critical points}\n\nWe look at the critical point existence theorems to give an insight\non our algorithm for finding critical points of higher Morse index\nbelow. Here is the definition of linking sets. We take our definition\nfrom \\cite[Section II.8]{Str08}.\n\\begin{defn}\n(Linking) Let $A$ be a subset of $\\mathbb{R}^{n}$, $B$ a submanifold\nof $\\mathbb{R}^{n}$ with relative boundary $\\partial B$. Then we\nsay that $A$ and $\\partial B$ \\emph{link} if \n\n(a) $A\\cap\\partial B=\\emptyset$, and \n\n(b) for any continuous $h:\\mathbb{R}^{n}\\to\\mathbb{R}^{n}$ such that\n$h\\mid_{\\partial B}=id$ we have $h(B)\\cap A\\neq\\emptyset$.\n\\end{defn}\nFigure \\ref{fig:Linking-subsets} illustrates two examples of linking\nsubsets in $\\mathbb{R}^{3}$. In the diagram on the left, the set\n$A$ is the union of two points inside and outside the sphere $B$.\nIn the diagram on the right, the sets $A$ and $B$ are the interlocking\n'rings'. Note however that $A$ and $B$ link does not imply that\n$B$ and $A$ link, though this will be true with additional conditions.\nWe hope this does not cause confusion. \n\nWe now recall the Palais-Smale condition.\n\\begin{defn}\n(Palais-Smale condition) Let $X$ be a Banach space and $f:X\\rightarrow\\mathbb{R}$\nbe $\\mathcal{C}^{1}$. 
We say that a sequence $\{x_{i}\}_{i=1}^{\infty}\subset X$ is a \emph{Palais-Smale sequence} if $\{f(x_{i})\}_{i=1}^{\infty}$ is bounded and $\nabla f(x_{i})\rightarrow\mathbf{0}$, and that $f$ satisfies the \emph{Palais-Smale condition} if any Palais-Smale sequence admits a convergent subsequence.
\end{defn}
The classical multidimensional mountain pass theorem, originally due to Rabinowitz \cite{R77}, states that under added conditions, if there are linking sets $A$ and $B$ such that $\max_{A}f<\min_{B}f$ and the Palais-Smale condition holds, then there is a critical value of at least $\max_{A}f$ when $f$ is smooth. (See Theorem \ref{thm:Rabinowitz-77} for a statement of the multidimensional mountain pass theorem.) Generalizations to the nonsmooth case are also well known in the literature; see for example \cite{Jab03}.

To find saddle points of Morse index $m$, we consider finding a sequence of linking sets $\{A_{i}\}_{i=1}^{\infty}$ and $\{B_{i}\}_{i=1}^{\infty}$ such that $\mbox{diam}(A_{i})$, the diameter of the set $A_{i}$, decreases to zero, and the set $A_{i}$ is a subset of an $m$-dimensional affine space. 
This motivates the following algorithm.

\begin{figure}
\begin{tabular}{|c|c|}
\hline 
\includegraphics[scale=0.5]{morse1b} & \includegraphics[scale=0.5]{morse2}\tabularnewline
\hline
\end{tabular}

\caption{\label{fig:Linking-subsets}Linking subsets}

\end{figure}

\begin{algorithm}
\label{alg:saddle-points-outer}First algorithm for finding saddle points of Morse index $m\geq1$.
\begin{enumerate}
\item Set the iteration count $i$ to $0$, and let $l_{i}$ be a lower bound of the critical value and $u_{i}$ an upper bound.
\item Find $x_{i}$ and $y_{i}$, where $(S_{i},x_{i},y_{i})$ is an optimizing triple of \begin{equation}
\min_{S\in\mathcal{S}}\max_{x,y\in S\cap(\scriptsize\mbox{\rm lev}_{\geq\frac{1}{2}(l_{i}+u_{i})}f)\cap U_{i}}|x-y|,\label{eq:keyexp1}\end{equation}
where $U_{i}$ is some open set. Here, $\mathcal{S}$ is the set of $m$-dimensional affine subspaces of $\mathbb{R}^{n}$ intersecting $U_{i}$. In the inner maximum problem above, we take the value to be $0$ if $S\cap(\mbox{\rm lev}_{\leq\frac{1}{2}(l_{i}+u_{i})}f)\cap U_{i}$ is empty, that is, if $f>\frac{1}{2}(l_{i}+u_{i})$ on all of $S\cap U_{i}$, making the objective function above equal to $0$. For simplicity, we shall just assume that minimizers and maximizers of the above problem exist.
\item (Bisection) If the objective of \eqref{eq:keyexp1} is zero, then $\frac{1}{2}(l_{i}+u_{i})$ is a lower bound of the critical value: set $l_{i+1}=\frac{1}{2}(l_{i}+u_{i})$ and $u_{i+1}=u_{i}$. Otherwise, set $l_{i+1}=l_{i}$ and $u_{i+1}=\frac{1}{2}(l_{i}+u_{i})$.
\item Increase $i$ and go back to step 2.
\end{enumerate}
\end{algorithm}
The critical step of Algorithm \ref{alg:saddle-points-outer} lies in step 2. We elaborate on optimality conditions that provide a useful approximation for this step in Section \ref{sec:Opti-condns}. One may think of the set $A_{i}$ as the relative boundary (relative to the affine space $S_{i}$) of $S_{i}\cap(\mbox{\rm lev}_{\geq l_{i}}f)\cap U_{i}$. 
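
The bisection in steps 3 and 4 can be sketched in a few lines, treating step 2 as a black-box oracle. The toy oracle below, switching at a hypothetical critical value of $1$, is a stand-in of our own and not a subproblem solver.

```python
def bisect_critical_value(subproblem_value, l, u, tol=1e-6):
    """Bisection loop (steps 3-4 of the algorithm). `subproblem_value(level)`
    stands in for step 2: it returns the optimal value of the min-max
    subproblem at the given level."""
    while u - l > tol:
        mid = 0.5 * (l + u)
        # An exact comparison with 0 suffices for the toy oracle; a real
        # solver would use a small numerical tolerance here.
        if subproblem_value(mid) == 0.0:
            l = mid   # zero objective certifies a lower bound
        else:
            u = mid   # positive objective: tighten the upper bound
    return 0.5 * (l + u)

# Hypothetical stand-in oracle whose switch happens at the critical value 1.0.
toy_oracle = lambda level: 0.0 if level < 1.0 else 0.5
```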
A frequent assumption we will make is nondegeneracy.
\begin{defn}
We say that a critical point is \emph{nondegenerate} if its Hessian is invertible.
\end{defn}
Algorithm \ref{alg:saddle-points-outer} requires $m>0$, but when $m=0$, nondegenerate critical points of Morse index zero are just strict local minimizers, which are easily found by optimization. We illustrate two special cases of Algorithm \ref{alg:saddle-points-outer}.
\begin{example}
(Particular cases of Algorithm \ref{alg:saddle-points-outer}) (a) For the case $m=1$, $\mathcal{S}$ is the set of lines. The inner maximization problem in \eqref{eq:keyexp1} attains its solution at the two endpoints of $S_{i}\cap(\mbox{\rm lev}_{\geq\frac{1}{2}(l_{i}+u_{i})}f)\cap U_{i}$. This means that \eqref{eq:keyexp1} is equivalent to finding the local closest points between two components of $(\mbox{\rm lev}_{\leq\frac{1}{2}(l_{i}+u_{i})}f)\cap U_{i}$, as was analyzed in \cite{LP08}.

(b) For the case $m=n$, $\mathcal{S}$ contains the whole of $\mathbb{R}^{n}$. Hence the outer minimization problem in \eqref{eq:keyexp1} is superfluous. The level set $(\mbox{\rm lev}_{\geq\frac{1}{2}(l_{i}+u_{i})}f)\cap U_{i}$ shrinks as $\frac{1}{2}(l_{i}+u_{i})$ approaches the maximum value, until it becomes a single point if the maximizer is unique.
\end{example}

\section{\label{sec:Conv-ppties}Convergence properties}

In this section, we prove that $x_{i}$ and $y_{i}$ in Algorithm \ref{alg:saddle-points-outer} converge to a critical point when they converge to a common limit. We recall some facts about nonsmooth analysis needed for the rest of the paper. 
It is more economical to\nprove our result for nonsmooth critical points because the proofs\nare not that much harder, and nonsmooth critical points are also of\ninterest in applications.\n\nLet $X$ be a Banach space, and $f:X\\rightarrow\\mathbb{R}$ be a locally\nLipschitz function at a given point $x$.\n\\begin{defn}\n(Clarke subdifferential) \\cite[Section 2.1]{Cla83} Suppose $f:X\\rightarrow\\mathbb{R}$\nis locally Lipschitz at $x$. The \\emph{Clarke generalized directional\nderivative} of $f$ at $x$ in the direction $v\\in X$ is defined\nby\\[\nf^{\\circ}(x;v)=\\limsup_{t\\searrow0,y\\rightarrow x}\\frac{f(y+tv)-f(y)}{t},\\]\nwhere $y\\in X$ and $t$ is a positive scalar. The \\emph{Clarke subdifferential}\nof $f$ at $x$, denoted by $\\partial_{C}f(x)$, is the subset of\nthe dual space $X^{*}$ given by\\[\n\\left\\{ \\zeta\\in X^{*}\\mid f^{\\circ}(x;v)\\geq\\left\\langle \\zeta,v\\right\\rangle \\mbox{ for all }v\\in X\\right\\} .\\]\nThe point $x$ is a \\emph{Clarke (nonsmooth) critical point} if $\\mathbf{0}\\in\\partial_{C}f(x)$.\nHere, $\\left\\langle \\cdot,\\cdot\\right\\rangle :X^{*}\\times X\\rightarrow\\mathbb{R}$\ndefined by $\\left\\langle \\zeta,v\\right\\rangle :=\\zeta(v)$ is the\ndual relation.\n\\end{defn}\nFor the particular case of $\\mathcal{C}^{1}$ functions, $\\partial_{C}f(x)=\\{\\nabla f(x)\\}$.\nTherefore critical points of smooth functions are also nonsmooth critical\npoints. From the definitions above, it is clear that an equivalent\ndefinition of a nonsmooth critical point is $f^{\\circ}(x;v)\\geq0$\nfor all $v\\in X$. 
This property allows us to prove that a point is\nnonsmooth critical without appealing to the dual space $X^{*}$.\n\nWe now prove our result of convergence to nonsmooth critical points.\n\\begin{prop}\n\\label{pro:triples-conv}(Convergence to saddle point) Let $\\bar{z}\\in X$.\nSuppose there is a ball $\\mathbb{B}(\\bar{z},r)$, a sequence of triples\n$\\{(S_{i},x_{i},y_{i})\\}_{i=1}^{\\infty}$ and a sequence $l_{i}$\nmonotonically increasing to $f(\\bar{z})$ such that $(x_{i},y_{i})\\to(\\bar{z},\\bar{z})$\nand $(S_{i},x_{i},y_{i})$ is an optimizing triple of \\eqref{eq:keyexp1}\nin Algorithm \\ref{alg:saddle-points-outer} for $l_{i}$ with $U_{i}=\\mathbb{B}(\\bar{z},r)$.\nThen $\\bar{z}$ is a Clarke critical point.\\end{prop}\n\\begin{proof}\nSeeking a contradiction, suppose there exists some direction $\\bar{v}$\nsuch that $f^{\\circ}(\\bar{z};\\bar{v})<0$. This means that there is\nsome $\\bar{\\epsilon}>0$ such that if $|z-\\bar{z}|<\\bar{\\epsilon}$\nand $\\epsilon<\\bar{\\epsilon}$, then \\begin{eqnarray*}\n\\frac{f(z+\\epsilon\\bar{v})-f(z)}{\\epsilon} & < & \\frac{1}{2}f^{\\circ}(\\bar{z};\\bar{v})\\\\\n\\Rightarrow f(z+\\epsilon\\bar{v}) & < & f(z)+\\epsilon\\frac{1}{2}f^{\\circ}(\\bar{z};\\bar{v}).\\end{eqnarray*}\nSuppose $i$ is large enough so that $x_{i},y_{i}\\in\\mathbb{B}(\\bar{z},\\frac{\\bar{\\epsilon}}{2})$,\nand that $x_{i},y_{i}\\in A_{i}:=S_{i}\\cap(\\mbox{\\rm lev}_{\\geq l_{i}}f)\\cap\\mathbb{B}(\\bar{z},r)$\nare such that $|x_{i}-y_{i}|=\\mbox{\\rm diam}(A_{i})$. Consider the set $\\tilde{A}:=(S_{i}+\\epsilon_{1}\\bar{v})\\cap(\\mbox{\\rm lev}_{\\geq l_{i}}f)\\cap\\mathbb{B}(\\bar{z},r)$,\nwhere $\\epsilon_{1}>0$ is arbitrarily small. Let $\\tilde{x}_{i},\\tilde{y}_{i}\\in\\tilde{A}$\nbe such that $|\\tilde{x}_{i}-\\tilde{y}_{i}|=\\mbox{\\rm diam}(\\tilde{A})$. From\nthe minimality of the outer minimization, we have $|\\tilde{x}_{i}-\\tilde{y}_{i}|\\geq|x_{i}-y_{i}|$.\nNote that $f(\\tilde{x}_{i})=f(\\tilde{y}_{i})=l_{i}$. 
Then \\begin{eqnarray*}\nf(\\tilde{x}_{i}) & < & f(\\tilde{x}_{i}-\\epsilon_{1}\\bar{v})+\\epsilon_{1}\\frac{1}{2}f^{\\circ}(\\bar{z};\\bar{v})\\\\\n\\implies f(\\tilde{x}_{i}-\\epsilon_{1}\\bar{v}) & > & f(\\tilde{x}_{i})-\\epsilon_{1}\\frac{1}{2}f^{\\circ}(\\bar{z};\\bar{v})\\\\\n & > & l_{i}.\\end{eqnarray*}\nThe continuity of $f$ implies that we can find some $\\epsilon_{2}>0$\nsuch that $\\hat{x}_{i}:=\\tilde{x}_{i}-\\epsilon_{1}\\bar{v}+\\epsilon_{2}(\\tilde{x}_{i}-\\tilde{y}_{i})$\nlies in $A_{i}$. Similarly, $\\hat{y}_{i}:=\\tilde{y}_{i}-\\epsilon_{1}\\bar{v}$\nlie in $A_{i}$ as well. But\\begin{eqnarray*}\n|\\hat{x}_{i}-\\hat{y}_{i}| & > & |\\tilde{x}_{i}-\\tilde{y}_{i}|\\\\\n & \\geq & |x_{i}-y_{i}|.\\end{eqnarray*}\nThis contradicts the maximality of $|x_{i}-y_{i}|$ in $A_{i}$, and\nthus $\\bar{z}$ must be a critical point. \n\\end{proof}\n\n\\section{\\label{sec:Opti-condns}Optimality conditions}\n\nWe now reduce the min-max problem \\eqref{eq:keyexp1} to a condition\non the gradients $\\nabla f(x_{i})$ and $\\nabla f(y_{i})$ that is\neasy to verify numerically. This condition will help in the numerical\nsolution of \\eqref{eq:keyexp1}. We use methods in sensitivity analysis\nof optimization problems (as is done in \\cite{BS00}) to study how\nvarying the $m$-dimensional affine space $S$ in an $(m+1)$-dimensional\nsubspace affects the optimal value in the inner maximization problem\nin \\eqref{eq:keyexp1}. We conform as much as possible to the notation\nin \\cite{BS00} throughout this section. 
\n\nConsider the following parametric optimization problem $(P_{u})$\nin terms of $u\\in\\mathbb{R}$ as an $m+1$ dimensional model in $\\mathbb{R}^{m+1}$\nof the inner maximization problem in \\eqref{eq:keyexp1}:\\begin{eqnarray}\n(P_{u}):\\qquad v(u):= & \\min & F(x,y,u):=-|x-y|^{2}\\nonumber \\\\\n & \\mbox{s.t.} & G(x,y,u)\\in K,\\nonumber \\\\\n & & x,y\\in\\mathbb{R}^{m+1},\\label{eq:stylized-problem}\\end{eqnarray}\nwhere $G:(\\mathbb{R}^{m+1})^{2}\\times\\mathbb{R}\\rightarrow\\mathbb{R}^{4}$\nand $K\\subset\\mathbb{R}^{4}$ are defined by \\[\nG(x,y,u):=\\left(\\begin{array}{c}\n-f(x)+b\\\\\n-f(y)+b\\\\\n(0,0,\\dots,0,u,1)x\\\\\n(0,0,\\dots,0,u,1)y\\end{array}\\right),\\qquad K:=\\mathbb{R}_{-}^{2}\\times\\{0\\}^{2}.\\]\nThe problem $(P_{u})$ reflects the inner maximization problem of\n\\eqref{eq:keyexp1}. Due to the standard practice of writing optimization\nproblems as minimization problems, \\eqref{eq:stylized-problem} is\na minimization problem instead. We hope this does not cause confusion. \n\nLet $S(u)$ be the $m$-dimensional subspace orthogonal to $(0,\\dots,0,u,1)$.\nThe first two components of $G(x,y,u)$ model the constraints $f(x)\\geq b$\nand $f(y)\\geq b$, while the last two components enforce $x,y\\in S(u)$.\nDenote an optimal solution to $(P_{u})$ to be $(\\bar{x}(u),\\bar{y}(u))$,\nand let $(\\bar{x},\\bar{y}):=(\\bar{x}(0),\\bar{y}(0))$. We make the\nfollowing assumption throughout.\n\\begin{assumption}\n\\label{ass:uniqueness}(Uniqueness of optimizers) $(P_{0})$ has a\nunique solution $\\bar{x}=\\mathbf{0}$ and $\\bar{y}=(0,\\dots,0,1,0)$\nat $u=0$. 
\n\\end{assumption}\nWe shall investigate how the set of minimizers of $(P_{u})$ behaves\nwith respect to $u$ at $0$.\n\nThe derivatives of $F$ and $G$ with respect to $x$ and $y$, denoted\nby $D_{x,y}F$ and $D_{x,y}G$, are\\begin{eqnarray}\nD_{x,y}F(x,y,u) & = & 2\\big(\\begin{array}{cc}\n(y-x)^{T} & (x-y)^{T}\\end{array}\\big),\\nonumber \\\\\n\\mbox{ and }D_{x,y}G(x,y,u) & = & \\left(\\begin{array}{cc}\n-\\nabla f(x)^{T}\\\\\n & -\\nabla f(y)^{T}\\\\\n(0,0,\\dots,0,u,1)\\\\\n & (0,0,\\dots,0,u,1)\\end{array}\\right),\\label{eq:D-x-y}\\end{eqnarray}\nwhere the blank terms in $D_{x,y}G(x,y,u)$ are all zero. \n\nThe \\emph{Lagrangian} is the function $L:\\mathbb{R}^{m+1}\\times\\mathbb{R}^{m+1}\\times\\mathbb{R}^{4}\\times\\mathbb{R}\\rightarrow\\mathbb{R}$\ndefined by \\[\nL(x,y,\\lambda,u):=F(x,y,u)+\\sum_{i=1}^{4}\\lambda_{i}G_{i}(x,y,u).\\]\nWe say that $\\lambda:=(\\lambda_{1},\\lambda_{2},\\lambda_{3},\\lambda_{4})$,\ndepending on $u$, is a \\emph{Lagrange multiplier }if $D_{x,y}L(x,y,\\lambda,u)=\\mathbf{0}$\nand $\\lambda\\in N_{K}(G(x,y,u))$, and the set of all Lagrange multipliers\nis denoted by $\\Lambda(x,y,u)$. Here, $N_{K}(G(x,y,u))$ stands for\nthe \\emph{normal cone} defined by \\[\nN_{K}\\big(G(x,y,u)\\big):=\\{v\\in\\mathbb{R}^{4}\\mid v^{T}[w-G(x,y,u)]\\leq0\\mbox{ for all }w\\in K\\}.\\]\nWe are interested in the set $\\Lambda(\\bar{x},\\bar{y},0)$. It is\nclear that optimal solutions must satisfy $G(\\bar{x},\\bar{y},0)=\\mathbf{0}$,\nso $\\lambda\\in N_{K}(\\mathbf{0})=\\mathbb{R}_{+}^{2}\\times\\mathbb{R}^{2}$. 
\n\nThe condition $D_{x,y}L(\\bar{x},\\bar{y},\\lambda,0)=\\mathbf{0}$ reduces\nto \\begin{eqnarray*}\nD_{x,y}\\left(F(x,y,0)+\\sum_{i=1}^{4}\\lambda_{i}G_{i}(x,y,0)\\right)\\mid_{x=\\bar{x},y=\\bar{y}} & = & \\mathbf{0}\\\\\n\\Rightarrow2\\left({\\bar{y}-\\bar{x}\\atop \\bar{x}-\\bar{y}}\\right)+\\lambda_{1}\\left({-\\nabla f(\\bar{x})\\atop \\mathbf{0}}\\right)+\\lambda_{2}\\left({\\mathbf{0}\\atop -\\nabla f(\\bar{y})}\\right)\\qquad\\qquad\\qquad\\\\\n+\\lambda_{3}\\left({(0,0,\\dots,0,0,1)^{T}\\atop \\mathbf{0}}\\right)+\\lambda_{4}\\left({\\mathbf{0}\\atop (0,0,\\dots,0,0,1)^{T}}\\right) & = & \\mathbf{0}.\\end{eqnarray*}\n\n\nHere, $G_{i}(\\bar{x},\\bar{y},0)$ is the $i$th row of $G(\\bar{x},\\bar{y},0)$\nfor $1\\leq i\\leq4$. This is exactly the KKT conditions, and can be\nrewritten as\\begin{eqnarray}\n2(\\bar{y}-\\bar{x})-\\lambda_{1}\\nabla f(\\bar{x})+\\lambda_{3}(0,0,\\dots,0,0,1)^{T} & = & 0,\\nonumber \\\\\n2(\\bar{x}-\\bar{y})-\\lambda_{2}\\nabla f(\\bar{y})+\\lambda_{4}(0,0,\\dots,0,0,1)^{T} & = & 0.\\label{eq:small-KKT}\\end{eqnarray}\n\n\nIt is clear that $\\lambda_{1}$ and $\\lambda_{2}$ cannot be zero,\nand so we have \\begin{eqnarray*}\n\\nabla f(\\bar{x})^{T} & = & \\left(0,0,\\dots,0,\\frac{2}{\\lambda_{1}},\\frac{\\lambda_{3}}{\\lambda_{1}}\\right),\\\\\n\\nabla f(\\bar{y})^{T} & = & \\left(0,0,\\dots,0,-\\frac{2}{\\lambda_{2}},\\frac{\\lambda_{4}}{\\lambda_{2}}\\right).\\end{eqnarray*}\nRecall that $\\lambda_{1},\\lambda_{2}\\geq0$, so this gives more information\nabout $\\nabla f(\\bar{x})$ and $\\nabla f(\\bar{y})$.\n\nWe next discuss the optimality of the outer minimization problem of\n\\eqref{eq:keyexp1}, which can be studied by perturbations in the\nparameter $u$ of \\eqref{eq:stylized-problem}, but we first recall\na result on the first order sensitivity of optimal solutions.\n\\begin{defn}\n(Robinson's constraint qualification) (from \\cite[Definition 2.86]{BS00})\nWe say that \\emph{Robinson's constraint qualification }holds at 
$(\\bar{x},\\bar{y})\\in\\mathbb{R}^{m+1}\\times\\mathbb{R}^{m+1}$\n if the regularity condition \\[\n\\mathbf{0}\\in\\mbox{\\rm int}\\left\\{ G(\\bar{x},\\bar{y},0)+\\mbox{\\rm Range}\\big(D_{x,y}G(\\bar{x},\\bar{y},0)\\big)-K\\right\\} \\]\nis satisfied.\\end{defn}\n\\begin{thm}\n\\label{thm:from-BS-4.26}(Parametric optimization) (from \\cite[Theorem 4.26]{BS00})\nFor problem\\eqref{eq:stylized-problem}, let $(\\bar{x}(u),\\bar{y}(u))$\nbe as defined earlier. Suppose that \n\\begin{enumerate}\n\\item [(i)] Robinson's constraint qualification holds at $(\\bar{x}(0),\\bar{y}(0))$,\nand\n\\item [(ii)] if $u_{n}\\rightarrow0$, then $(P_{u_{n}})$ possesses an\noptimal solution $(\\bar{x}(u_{n}),\\bar{y}(u_{n}))$ that has a limit\npoint $(\\bar{x},\\bar{y})$.\n\\end{enumerate}\nThen $v(\\cdot)$ is directionally differentiable at $u=0$ and \\[\nv^{\\prime}(0)=D_{u}L(x,y,\\lambda,0).\\]\n\n\\end{thm}\nWe proceed to prove our result.\n\\begin{prop}\n(Optimality condition on $\\nabla f(\\bar{y})$) \\label{pro:perturb-decrease}Consider\nthe setup so far in this section and suppose Assumption \\ref{ass:uniqueness}\nholds. If $\\nabla f(\\bar{y})$ is not a positive multiple of $(0,0,\\dots,0,1,0)^{T}$\nat $u=0$, then we can perturb $u$ so that \\eqref{eq:stylized-problem}\nhas an increase in objective. \\end{prop}\n\\begin{proof}\nWe first obtain first order sensitivity information from Theorem \\ref{thm:from-BS-4.26}.\nRecall that by definition, Robinson's constraint qualification holds\nat $(\\bar{x},\\bar{y})$ if \\[\n\\mathbf{0}\\in\\mbox{int}\\left\\{ G(\\bar{x},\\bar{y},0)+\\mbox{Range}\\big(D_{x,y}G(\\bar{x},\\bar{y},0)\\big)-K\\right\\} .\\]\nFrom \\eqref{eq:small-KKT}, it is clear that $\\nabla f(\\bar{x})$\nand $(0,\\dots,0,0,1)$ are linearly independent, and so are $\\nabla f(\\bar{y})$\nand $(0,\\dots,0,0,1)$. 
From the formula of $D_{x,y}G(\\bar{x},\\bar{y},0)$\nin \\eqref{eq:D-x-y}, we see immediately that $\\mbox{Range}\\left(D_{x,y}G(\\bar{x},\\bar{y},0)\\right)=\\mathbb{R}^{4}$,\nthus the Robinson's constraint qualification indeed holds.\n\nSuppose that $\\lim_{n\\to\\infty}t_{n}=0$. We prove that part (ii)\nof Theorem \\ref{thm:from-BS-4.26} holds by proving that $(\\bar{x}(t_{n}),\\bar{y}(t_{n}))$\ncannot have any other limit points. Suppose that $(x^{\\prime},y^{\\prime})$\nis a limit point of $\\{(\\bar{x}(t_{n}),\\bar{y}(t_{n}))\\}_{n=1}^{\\infty}$.\nIt is clear that $x^{\\prime},y^{\\prime}\\in S(0)$.\n\nWe can find $y_{n}\\rightarrow\\bar{y}$ such that $y_{n}\\in S(t_{n})$\nand $f(y_{n})=b$. For example, we can use the Implicit Function Theorem\nwith the constraints \\begin{eqnarray*}\nf(y) & = & b,\\\\\ng(y,u) & = & 0,\\end{eqnarray*}\nwhere $g(y,u)=(0,0,\\dots,0,u,1)^{T}y$. The derivatives with respect\nto $y_{m}$ and $y_{m+1}$ are \\[\n\\begin{array}{ccc}\n\\frac{\\partial}{\\partial y_{m}}f(\\bar{y})=-\\frac{2}{\\lambda_{2}}, & & \\frac{\\partial}{\\partial y_{m+1}}f(\\bar{y})=\\frac{\\lambda_{4}}{\\lambda_{2}},\\\\\n\\frac{\\partial}{\\partial y_{m}}g(\\bar{y},0)=0, & & \\frac{\\partial}{\\partial y_{m+1}}g(\\bar{y},0)=1.\\end{array}\\]\nTherefore, for $y_{1}=y_{2}=\\cdots=y_{m-2}=y_{m-1}=0$ and any choice\nof $u$ close to zero, there is some $y_{m}$ and $y_{m+1}$ such\nthat $y\\in S(u)$ and $f(y)=b$.\n\nClearly $|\\bar{x}-y_{n}|\\leq|\\bar{x}(t_{n})-\\bar{y}(t_{n})|$. Taking\nlimits as $n\\rightarrow\\infty$, we have $|\\bar{x}-\\bar{y}|\\leq|x^{\\prime}-y^{\\prime}|$.\nSince $(\\bar{x},\\bar{y})$ minimize $F$, it follows that $|\\bar{x}-\\bar{y}|=|x^{\\prime}-y^{\\prime}|$,\nand by the uniqueness of solutions to $(P_{0})$, we can assume that\n$x^{\\prime}=\\bar{x}$ and $y^{\\prime}=\\bar{y}$.\n\nTheorem \\ref{thm:from-BS-4.26} implies that $v^{\\prime}(0)=D_{u}L(x,y,\\lambda,0)$.\nWe now calculate $D_{u}L(x,y,\\lambda,0)$. 
It is clear that $D_{u}G(x,y,\\lambda,0)=(0,0,0,1)^{T}$,\nand so $D_{u}L(x,y,\\lambda,0)=\\lambda_{4}$. Since $\\nabla f(\\bar{y})$\nis not a multiple of $(0,0,\\dots,0,1,0)^{T}$ at $u=0$, $\\lambda_{4}\\neq0$,\nand this gives the conclusion we need.\n\\end{proof}\nA direct consequence of Proposition \\ref{pro:perturb-decrease} is\nthe following easily checkable condition.\n\\begin{thm}\n(Gradients are opposite) \\label{pro:opposite-directions}Let $(S_{i},x_{i},y_{i})$\nbe an optimizing triple to \\eqref{eq:keyexp1} for some $l_{i}$ such\nthat $S_{i}\\cap(\\mbox{\\rm lev}_{\\geq l_{i}}f)\\cap U_{i}$ is closed, and $(x_{i},y_{i})$\nis the unique pair of points in $S_{i}\\cap(\\mbox{\\rm lev}_{\\geq l_{i}}f)\\cap U_{i}$\nsatisfying $|x_{i}-y_{i}|=\\mbox{\\rm diam}(S_{i}\\cap(\\mbox{\\rm lev}_{\\geq l_{i}}f)\\cap U_{i})$.\nThen $\\nabla f(x_{i})$ and $\\nabla f(y_{i})$ are nonzero and point\nin opposite directions.\\end{thm}\n\\begin{proof}\nWe can look at an $m+1$ dimensional subspace which reduces to the\nsetting that we are considering so far in this section. By Proposition\n\\ref{pro:perturb-decrease}, $\\nabla f(y_{i})$ is a positive multiple\nof $x_{i}-y_{i}$ at optimality. Similarly, $\\nabla f(x_{i})$ is\na positive multiple of $y_{i}-x_{i}$ at optimality, and the result\nfollows.\n\\end{proof}\n\n\nWe remark on how to start the algorithm. We look at critical points\nof Morse index 1 first. In this case, two local minima $\\bar{x}_{1}$,\n$\\bar{x}_{2}$ are needed before the mountain pass algorithm can guarantee\nthe existence of a critical point $\\bar{x}_{3}$. For any value above\nthe critical value corresponding to the critical point of Morse index\n1, the level set contains a path connecting $\\bar{x}_{1}$ and $\\bar{x}_{2}$\npassing through $\\bar{x}_{3}$. 
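
The opposite-gradient condition of Theorem \ref{pro:opposite-directions} is easy to test numerically. A minimal sketch on a toy saddle $f(x,y)=x^{2}-y^{2}$ (our illustration, not the paper's method): at level $l=-0.01$ the optimal pair on the minimizing line, the $y$-axis, is $(0,\pm0.1)$, and the gradients there are nonzero and antiparallel.

```python
def grad_f(p):
    # Gradient of the toy saddle f(x, y) = x^2 - y^2 (illustration only).
    x, y = p
    return (2.0 * x, -2.0 * y)

def point_in_opposite_directions(g1, g2, tol=1e-9):
    """Check that two nonzero vectors in R^2 are antiparallel:
    negative inner product and (numerically) zero cross product."""
    dot = g1[0] * g2[0] + g1[1] * g2[1]
    cross = g1[0] * g2[1] - g1[1] * g2[0]
    return dot < 0.0 and abs(cross) <= tol

# Optimal pair at level l = -0.01 on the y-axis.
x_opt, y_opt = (0.0, 0.1), (0.0, -0.1)
```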

To find the next critical point, of Morse index 2, we remark that under mild conditions, if $\mbox{\rm lev}_{\leq a}f$ contains a closed path homeomorphic to $\mathbb{S}^{1}$, the boundary of the $2$-dimensional disc, then the linking principle guarantees the existence of a critical point through the multidimensional mountain pass theorem. Theorem \ref{thm:Rabinowitz-77}, which we quote later, gives an idea of how this is possible. We refer the reader to \cite{Sch99} and \cite[Chapter 19]{Jab03} for more details on linking methods.

We now illustrate with an example that without the assumption that $(x_{i},y_{i})$ is the unique pair of points satisfying $|x_{i}-y_{i}|=\mbox{\rm diam}(S_{i}\cap(\mbox{\rm lev}_{\geq l_{i}}f)\cap U_{i})$, the conclusion in Theorem \ref{pro:opposite-directions} need not hold.

\begin{figure}[h]
\begin{tabular}{|c|c|}
\hline 
\includegraphics[scale=0.6]{wedge} & \includegraphics[scale=0.4]{wedge2}\tabularnewline
\hline
\end{tabular}

\caption{\label{fig:The-lines}The diagram on the left illustrates the setting of Lemma \ref{lem:intersecting-rays}, while the diagram on the right illustrates Example \ref{exa:4-lines}.}

\end{figure}

\begin{lem}
(Shortest line segments) \label{lem:intersecting-rays}Suppose lines $l_{1}$ and $l_{2}$ intersect at the origin in $\mathbb{R}^{2}$, and let $P$ be a point on the angle bisector, as shown in the diagram on the left of Figure \ref{fig:The-lines}. The minimum length of the line segment $AB$, where $A$ is a point on $l_{1}$, $B$ is a point on $l_{2}$, and $AB$ passes through $P$, is attained when $OAB$ is an isosceles triangle with $AB$ as its base. \end{lem}
\begin{proof}
Much of this is high school trigonometry and plane geometry, but we present full details for completeness. Let $\alpha$ be the angle $\measuredangle AOP$, $\theta$ be the angle $\measuredangle PAO$, and $d=|OP|$. 
By using the sine rule, we get \\[\n|AB|=d\\left(\\frac{\\sin\\alpha}{\\sin\\theta}+\\frac{\\sin\\alpha}{\\sin(\\pi-2\\alpha-\\theta)}\\right).\\]\nThe problem is now reduced to finding the $\\theta$ that minimizes\nthe value above. Continuing the arithmetic gives:\\begin{eqnarray*}\nd\\left(\\frac{\\sin\\alpha}{\\sin\\theta}+\\frac{\\sin\\alpha}{\\sin(\\pi-2\\alpha-\\theta)}\\right) & = & d\\sin\\alpha\\left(\\frac{1}{\\sin\\theta}+\\frac{1}{\\sin(2\\alpha+\\theta)}\\right)\\\\\n & = & d\\sin\\alpha\\left(\\frac{\\sin\\theta+\\sin(2\\alpha+\\theta)}{\\sin(\\theta)\\sin(2\\alpha+\\theta)}\\right)\\\\\n & = & d\\sin\\alpha\\left(\\frac{2\\sin(\\alpha+\\theta)\\cos\\alpha}{\\frac{1}{2}[\\cos(2\\alpha)-\\cos(2\\alpha+2\\theta)]}\\right)\\\\\n & = & 2d\\sin(2\\alpha)\\left(\\frac{\\sin(\\alpha+\\theta)}{\\cos(2\\alpha)-\\cos(2\\alpha+2\\theta)}\\right).\\end{eqnarray*}\nWe now differentiate the $\\frac{\\sin(\\alpha+\\theta)}{\\cos(2\\alpha)-\\cos(2\\alpha+2\\theta)}$\nterm above, which gives\\begin{eqnarray*}\n & & \\frac{d}{d\\theta}\\left(\\frac{\\sin(\\alpha+\\theta)}{\\cos(2\\alpha)-\\cos(2\\alpha+2\\theta)}\\right)\\\\\n & = & \\frac{1}{[\\cos(2\\alpha)-\\cos(2\\alpha+2\\theta)]^{2}}\\left[\\cos(\\alpha+\\theta)[\\cos(2\\alpha)-\\cos(2\\alpha+2\\theta)]-2\\sin(2\\alpha+2\\theta)\\sin(\\alpha+\\theta)\\right].\\end{eqnarray*}\nThe numerator simplifies to:\\begin{eqnarray*}\n & & \\cos(\\alpha+\\theta)[\\cos(2\\alpha)-\\cos(2\\alpha+2\\theta)]-2\\sin(2\\alpha+2\\theta)\\sin(\\alpha+\\theta)\\\\\n & = & \\cos(\\alpha+\\theta)[\\cos(2\\alpha)-2\\cos^{2}(\\alpha+\\theta)+1]-4\\sin^{2}(\\alpha+\\theta)\\cos(\\alpha+\\theta)\\\\\n & = & \\cos(\\alpha+\\theta)[\\cos(2\\alpha)-2\\cos^{2}(\\alpha+\\theta)+1-4\\sin^{2}(\\alpha+\\theta)]\\\\\n & = & 
\\cos(\\alpha+\\theta)[\\cos(2\\alpha)-2\\cos^{2}(\\alpha+\\theta)+4\\cos^{2}(\\alpha+\\theta)-3]\\\\\n & = & \\cos(\\alpha+\\theta)[\\cos(2\\alpha)+2\\cos^{2}(\\alpha+\\theta)-3].\\end{eqnarray*}\nWith this formula, we see that the condition for $\\frac{d}{d\\theta}\\left(\\frac{\\sin(\\alpha+\\theta)}{\\cos(2\\alpha)-\\cos(2\\alpha+2\\theta)}\\right)=0$\nis that $\\cos(\\alpha+\\theta)=0$ or $\\cos(2\\alpha)+2\\cos^{2}(\\alpha+\\theta)=3$.\nThe first case gives us $\\theta=\\frac{\\pi}{2}-\\alpha$, which yields\nthe required conclusion. The second case requires $\\alpha=0$ or\n$\\alpha=\\pi$ and $\\theta=0$ or $\\theta=\\pi$, which are degenerate\ncases. This gives all optimal solutions to our problem and concludes\nthe proof.\n\\end{proof}\nWe now construct an example in $\\mathbb{R}^{3}$ illustrating that,\nwithout the uniqueness condition, the optimizing points need not\nhave gradients pointing in opposite directions.\n\\begin{example}\n(Gradients need not be opposite) \\label{exa:4-lines}Define the four\nlines $L_{1}$ to $L_{4}$ by \\begin{eqnarray*}\nL_{1} & := & \\{(0,0,-1)+\\lambda(1,0,1)\\mid\\lambda\\in\\mathbb{R}_{+}\\}\\\\\nL_{2} & := & \\{(0,0,-1)+\\lambda(-1,0,1)\\mid\\lambda\\in\\mathbb{R}_{+}\\}\\\\\nL_{3} & := & \\{(0,0,1)+\\lambda(0,1,-1)\\mid\\lambda\\in\\mathbb{R}_{+}\\}\\\\\nL_{4} & := & \\{(0,0,1)+\\lambda(0,-1,-1)\\mid\\lambda\\in\\mathbb{R}_{+}\\}.\\end{eqnarray*}\nThe lines $L_{1}$ and $L_{2}$ lie in the $x$-$z$ plane, while\nthe lines $L_{3}$ and $L_{4}$ lie in the $y$-$z$ plane. See the\ndiagram on the right of Figure \\ref{fig:The-lines}.\n\nConsider first the problem of finding a plane $S$ that minimizes\nthe maximum of the distances between the points where $S$ intersects\nthe $L_{i}$'s. We now show that $S$ has\nto be the $x$-$y$ plane. The plane $S$ intersects the $z$ axis\nat some point $(0,0,p)$. When $S$ is the $x$-$y$ plane, the maximum\ndistance between the points is $2$. 
By Lemma \\ref{lem:intersecting-rays},\nthe distance between the points $S\\cap L_{1}$ and $S\\cap L_{2}$\nis at least $2(1+p)$, while the distance between the points $S\\cap L_{3}$\nand $S\\cap L_{4}$ is at least $2(1-p)$. This tells us that the $x$-$y$\nplane is optimal.\n\nWith this observation, we now construct our example. Consider the\nfunction $f:\\mathbb{R}^{3}\\rightarrow\\mathbb{R}$ defined by\\[\nf(x,y,z)=-\\left(\\frac{x}{1+z}+\\frac{y}{1-z}\\right)^{4\/3}-\\left(\\frac{x}{1+z}-\\frac{y}{1-z}\\right)^{4\/3}.\\]\nThe level set $\\mbox{\\rm lev}_{\\geq-2}f$ contains the lines $L_{1}$ to $L_{4}$.\nThis means that $\\mbox{diam}(S\\cap\\mbox{\\rm lev}_{\\geq-2}f)\\geq2$. This is\nin fact an equality when $S$ is the $x$-$y$ plane, with the maximizers\nbeing the pairs $\\{\\pm(1,0,0)\\}$ and $\\{\\pm(0,1,0)\\}$.\n\nThe gradient $\\nabla f(x,y,z)$ is\\[\n\\nabla f(x,y,z)=\\left(\\begin{array}{c}\n-\\frac{4}{3(1+z)}\\left(\\frac{x}{1+z}+\\frac{y}{1-z}\\right)^{1\/3}-\\frac{4}{3(1+z)}\\left(\\frac{x}{1+z}-\\frac{y}{1-z}\\right)^{1\/3}\\\\\n-\\frac{4}{3(1-z)}\\left(\\frac{x}{1+z}+\\frac{y}{1-z}\\right)^{1\/3}+\\frac{4}{3(1-z)}\\left(\\frac{x}{1+z}-\\frac{y}{1-z}\\right)^{1\/3}\\\\\n-\\frac{4}{3}\\left(\\frac{x}{1+z}+\\frac{y}{1-z}\\right)^{1\/3}\\left(-\\frac{x}{(1+z)^{2}}+\\frac{y}{(1-z)^{2}}\\right)-\\frac{4}{3}\\left(\\frac{x}{1+z}-\\frac{y}{1-z}\\right)^{1\/3}\\left(-\\frac{x}{(1+z)^{2}}-\\frac{y}{(1-z)^{2}}\\right)\\end{array}\\right).\\]\nWith this, we can evaluate $\\nabla f$ at $\\pm(1,0,0)$ and $\\pm(0,1,0)$\nto be \\begin{eqnarray*}\n\\nabla f(1,0,0) & = & \\left(-\\frac{8}{3},0,\\frac{8}{3}\\right),\\\\\n\\nabla f(-1,0,0) & = & \\left(\\frac{8}{3},0,\\frac{8}{3}\\right),\\\\\n\\nabla f(0,1,0) & = & \\left(0,-\\frac{8}{3},-\\frac{8}{3}\\right),\\\\\n\\nabla f(0,-1,0) & = & \\left(0,\\frac{8}{3},-\\frac{8}{3}\\right).\\end{eqnarray*}\nNeither of the pairs $\\{\\pm(1,0,0)\\}$ and $\\{\\pm(0,1,0)\\}$ has\noppositely pointing gradients, which concludes our 
example.\n\\end{example}\n\n\\section{\\label{sec:Other-local-analysis}Another convergence property of\ncritical points}\n\nIn this section, we look at a condition on critical points, similar\nto Proposition \\ref{pro:triples-conv}, that can be helpful in numerical\nmethods for finding critical points. Theorem \\ref{thm:crit-from-linking}\nbelow does not seem to be easily found in the literature, and can\nbe seen as a local version of the mountain pass theorem. \n\nWe prove Theorem \\ref{thm:crit-from-linking} in the more general\nsetting of metric spaces. Such a treatment includes the case of nonsmooth\nfunctions. We recall the following definitions in metric critical\npoint theory from \\cite{DM94,IS96,Katriel94}.\n\\begin{defn}\n\\label{def:Deformation-critical}Let $(X,d)$ be a metric space. We\ncall the point $x$ \\emph{Morse regular} for the function $f:X\\rightarrow\\mathbb{R}$\nif, for some numbers $\\gamma,\\sigma>0$, there is a continuous function\n\\[\n\\phi:\\mathbb{B}(x,\\gamma)\\times[0,\\gamma]\\rightarrow X\\]\nsuch that all points $u\\in\\mathbb{B}(x,\\gamma)$ and $t\\in[0,\\gamma]$\nsatisfy the inequality \\[\nf\\big(\\phi(u,t)\\big)\\leq f(u)-\\sigma t,\\]\nand that $\\phi(\\cdot,0):\\mathbb{B}(x,\\gamma)\\to\\mathbb{B}(x,\\gamma)$\nis the identity map. The point $x$ is \\emph{Morse critical }if it\nis not Morse regular. \n\nIf for some $\\phi$, there is some $\\kappa>0$ such that $\\phi$ also\nsatisfies the inequality \\[\nd\\big(\\phi(u,t),u\\big)\\leq\\kappa t\\mbox{ for all }u\\in\\mathbb{B}(x,\\gamma)\\mbox{ and }t\\in[0,\\gamma],\\]\nthen we call $x$ \\emph{deformationally regular}. The point $x$ is\n\\emph{deformationally critical }if it is not deformationally regular.\n\\end{defn}\nIt is a fact that if $X$ is a Banach space and $f$ is locally Lipschitz,\nthen deformationally critical points are Clarke critical. 
The following\ntheorem gives a strategy for identifying deformationally critical\npoints.\n\\begin{thm}\n\\label{thm:crit-from-linking}(Critical points from sequences of linking\nsets) Let $X$ be a metric space and $f:X\\to\\mathbb{R}$. Suppose\nthere is some open neighborhood $U$ of $\\bar{x}$ and sequences of sets $\\{\\Phi_{i}\\}_{i=1}^{\\infty}$\nand $\\{\\Gamma_{i}\\}_{i=1}^{\\infty}$ such that \n\\begin{enumerate}\n\\item $\\Phi_{i}$ and $\\partial\\Gamma_{i}$ link.\n\\item $\\Gamma_{i}$ are homeomorphic to $\\mathbb{B}^{m}$ for all $i$,\nand $\\max_{x\\in\\partial\\Gamma_{i}}f(x)<\\inf_{x\\in\\Phi_{i}\\cap U}f(x)$. \n\\item For any open set $V$ containing $\\bar{x}$, there is some $I>0$\nsuch that $\\Gamma_{i}\\subset V$ for all $i>I$. \n\\item $f$ is Lipschitz in $U$.\n\\end{enumerate}\nThen $\\bar{x}$ is deformationally critical.\\end{thm}\n\\begin{proof}\nSuppose $\\bar{x}$ is deformationally regular. Then there are $\\gamma,\\sigma,\\kappa>0$\nand $\\phi:\\mathbb{B}(\\bar{x},\\gamma)\\times[0,\\gamma]\\to X$ such that\nthe inequalities\\[\nf\\big(\\phi(x,t)\\big)\\leq f(x)-\\sigma t\\mbox{ and }d\\big(\\phi(x,t),x\\big)\\leq\\kappa t\\]\nhold for all points $x\\in\\mathbb{B}(\\bar{x},\\gamma)$ and $t\\in[0,\\gamma]$,\nand $\\phi(\\cdot,0):\\mathbb{B}(\\bar{x},\\gamma)\\to\\mathbb{B}(\\bar{x},\\gamma)$\nis the identity map. We may reduce $\\gamma$ as necessary and assume\nthat $U=\\mathbb{B}(\\bar{x},\\gamma)$. \n\nCondition (3) implies that for any $\\alpha>0$, there is some\n$I_{1}$ such that $\\Gamma_{i}\\subset\\mathbb{B}(\\bar{x},\\alpha)$\nfor all $i>I_{1}$. Consider $\\Gamma_{i,t}:=\\phi((\\Gamma_{i}\\times\\{t\\})\\cup(\\partial\\Gamma_{i}\\times[0,t]))$.\nProvided $0<t\\leq\\gamma$ and $\\alpha+\\kappa t<\\gamma$, we have $\\Gamma_{i,t}\\subset U$\nfor all $i>I_{1}$. Let $\\bar{\\kappa}$ be a Lipschitz constant of $f$\non $U$. By condition (3), there is some $I_{2}>0$ such that if $i>I_{2}$,\nthen $\\mbox{\\rm diam}(\\Gamma_{i})<\\frac{\\sigma t}{\\bar{\\kappa}}$, so that\n$f(x)<\\min_{y\\in\\partial\\Gamma_{i}}f(y)+\\sigma t$ for all $x\\in\\Gamma_{i}$. 
So for $i>\\max(I_{1},I_{2})$,\nwe have \\[\n\\max_{x\\in\\Gamma_{i,t}}f(x)=\\max_{x\\in\\partial\\Gamma_{i}}f(x)<\\inf_{x\\in\\Phi_{i}\\cap U}f(x).\\]\n\n\nHowever, the fact that $\\partial\\Gamma_{i}$ and $\\Phi_{i}$ link\nimplies that $\\Gamma_{i,t}$ and $\\Phi_{i}$ must intersect, and since\n$\\Gamma_{i,t}\\subset U$, $\\Gamma_{i,t}$ and $\\Phi_{i}\\cap U$ must\nintersect. This is a contradiction, so $\\bar{x}$ is deformationally\ncritical.\n\\end{proof}\nIt is reasonable to choose $\\Gamma_{i}$ to be a simplex (that is,\nthe convex hull of $m+1$ points) and $\\Phi_{i}$ to be an affine space.\nIf the sequence of sets $\\{\\Gamma_{i}\\}_{i=1}^{\\infty}$ converges\nto the single point $\\bar{x}$ and $f$ is $\\mathcal{C}^{2}$ there,\na quadratic approximation of $f$ using only the knowledge of the\nvalues of $f$ and $\\nabla f$ on the vertices of the simplex would\nbe a good approximation of $f$ on the simplex. We outline our strategy\nbelow.\n\\begin{algorithm}\n\\label{alg:quad-approx}(Obtaining unknowns in quadratic) Let $h:\\mathbb{R}^{m}\\to\\mathbb{R}$\nbe defined by $h(x)=\\frac{1}{2}x^{T}Ax+b^{T}x+c$ with $A$ symmetric, and let $p_{1},\\dots,p_{m+1}$\nbe $m+1$ points in $\\mathbb{R}^{m}$. Suppose that the values of\n$h(p_{i})$ and $\\nabla h(p_{i})$ are known for all $i=1,\\dots,m+1$.\nWe seek to obtain the values of $A$, $b$ and $c$.\n\\begin{enumerate}\n\\item Let $P\\in\\mathbb{R}^{m\\times m}$ be the matrix such that the $i$th\ncolumn is $p_{i+1}-p_{1}$, and let $D\\in\\mathbb{R}^{m\\times m}$\nbe the matrix such that the $i$th column is $\\nabla h(p_{i+1})-\\nabla h(p_{1})$.\nCalculate $A$ with $A=DP^{-1}$.\n\\item Calculate $b$ with $b=\\nabla h(p_{1})-Ap_{1}$.\n\\item Calculate $c$ with $c=h(p_{1})-\\frac{1}{2}p_{1}^{T}Ap_{1}-b^{T}p_{1}$.\n\\end{enumerate}\n\\end{algorithm}\nIf $h$ is $\\mathcal{C}^{2}$ instead of being a quadratic, then the\nprocedure in Algorithm \\ref{alg:quad-approx} can be used to approximate\nthe values of $h$ on a simplex. 
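The three steps of Algorithm \ref{alg:quad-approx} translate directly into code. The following sketch (our own, in Python with NumPy, for the square case where $P$ is invertible) recovers $A$, $b$ and $c$ exactly when $h$ is a quadratic with symmetric $A$, so that $\nabla h(x)=Ax+b$.

```python
import numpy as np

def recover_quadratic(points, values, grads):
    """Steps 1-3 of the algorithm: recover A, b, c of
    h(x) = 0.5 x^T A x + b^T x + c from the values h(p_i) and the
    gradients grad h(p_i) = A p_i + b at the m+1 points p_1, ..., p_{m+1}."""
    p1, h1, g1 = points[0], values[0], grads[0]
    # Step 1: columns of P are p_{i+1} - p_1; columns of D are the
    # gradient differences.  Since D = A P, we get A = D P^{-1}.
    P = np.column_stack([p - p1 for p in points[1:]])
    D = np.column_stack([g - g1 for g in grads[1:]])
    A = D @ np.linalg.inv(P)
    # Step 2: b = grad h(p_1) - A p_1.
    b = g1 - A @ p1
    # Step 3: c from the function value at p_1.
    c = h1 - 0.5 * p1 @ A @ p1 - b @ p1
    return A, b, c
```

When $m<n$ (the setting of the next lemma), `np.linalg.inv(P)` would be replaced by the pseudoinverse `np.linalg.pinv(P)`.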
\n\nIn Lemma \\ref{lem:quad-est} below, given $m+1$ points in $\\mathbb{R}^{n}$,\nwe need to approximate a quadratic function on the simplex $\\Delta$ spanned\nby these points, viewed as a subset of $\\mathbb{R}^{n}$. Even though $m<n$,\nthe procedure in Algorithm \\ref{alg:quad-approx} still applies, with $P^{-1}$\nin step 1 replaced by the pseudoinverse $P^{\\dagger}$.\n\\begin{lem}\n(Quadratic approximation on a simplex) \\label{lem:quad-est}Suppose $f:\\mathbb{R}^{n}\\rightarrow\\mathbb{R}$\nis $\\mathcal{C}^{2}$ and $\\bar{x}\\in\\mathbb{R}^{n}$. Given affinely independent\npoints $p_{1},\\dots,p_{m+1}$, let $\\Delta:=\\mbox{\\rm conv}\\{p_{1},\\dots,p_{m+1}\\}$,\nlet $P,D\\in\\mathbb{R}^{n\\times m}$ be the matrices whose $i$th columns are\n$p_{i+1}-p_{1}$ and $\\nabla f(p_{i+1})-\\nabla f(p_{1})$ respectively, and let\n$f_{e}$ be the quadratic function obtained by applying Algorithm \\ref{alg:quad-approx}\nwith $P^{-1}$ replaced by $P^{\\dagger}$. Then for any $\\epsilon>0$, there\nis some $\\delta>0$ such that if $p_{1},\\dots,p_{m+1}\\in\\mathbb{B}(\\bar{x},\\delta)$,\nthen \\[\n|f_{e}(x)-f(x)|<\\frac{1}{2}\\mbox{\\rm diam}(\\Delta)^{2}\\epsilon(1+\\kappa\\|P\\|\\|P^{\\dagger}\\|)\\mbox{ for all }x\\in\\Delta,\\]\nwhere $\\|\\cdot\\|$ stands for the matrix 2-norm, $P^{\\dagger}$ is\nthe pseudoinverse of $P$, and $\\kappa$ is some constant dependent\nonly on $n$ and $m$.\\end{lem}\n\\begin{proof}\nThe first step of this proof is to show that step 1 of Algorithm \\ref{alg:quad-approx}\ngives a matrix in $\\mathbb{R}^{n\\times n}$ which is a good approximation\nof how $A=\\nabla^{2}f(\\bar{x})$ acts on the lineality space of the\naffine hull of $\\Delta$. Since $f$ is $\\mathcal{C}^{2}$, for any\n$\\epsilon>0$, there exists $\\delta>0$ such that $|\\nabla f(x)-\\nabla f(x^{\\prime})-A(x-x^{\\prime})|<\\epsilon|x-x^{\\prime}|$\nfor all $x,x^{\\prime}\\in\\mathbb{B}(\\bar{x},\\delta)$. Thus, there\nis some $\\kappa>0$ depending only on $m$ and $n$ such that if $p_{1},\\dots,p_{m+1}\\in\\mathbb{B}(\\bar{x},\\delta)$,\nthen \\begin{equation}\n\\|D-AP\\|<\\kappa\\epsilon\\|P\\|.\\label{eq:|D-AP|}\\end{equation}\n\n\nLet $P=QR$, where $Q\\in\\mathbb{R}^{n\\times m}$ has orthonormal columns\nand $R\\in\\mathbb{R}^{m\\times m}$, be a QR decomposition of $P$.\nFor any $v\\in\\mathbb{R}^{n}$ in the range of $P$, or equivalently,\n$v=Qv^{\\prime}$ for some $v^{\\prime}\\in\\mathbb{R}^{m}$, we want\nto show that $\\|Av-DR^{-1}Q^{T}v\\|$ is small. We note that $|v|=|v^{\\prime}|$,\nand we have the following calculation. 
\\begin{eqnarray*}\n\\|Av-DR^{-1}Q^{T}v\\| & = & \\|AQv^{\\prime}-DR^{-1}Q^{T}Qv^{\\prime}\\|\\\\\n & = & \\|AQv^{\\prime}-DR^{-1}v^{\\prime}\\|\\\\\n & \\leq & \\|AQ-DR^{-1}\\||v^{\\prime}|\\\\\n & \\leq & \\|AQR-D\\|\\|R^{-1}\\||v|\\\\\n & = & \\|D-AP\\|\\|R^{-1}\\||v|\\\\\n & \\leq & \\kappa\\epsilon\\|P\\|\\|R^{-1}\\||v|.\\end{eqnarray*}\nNext, for $x,x^{\\prime}\\in\\mathbb{B}(\\bar{x},\\delta)$, let $d=\\mbox{\\rm unit}(x^{\\prime}-x)$.\nThen\\begin{eqnarray*}\nf(x^{\\prime})-f(x) & = & \\int_{0}^{|x^{\\prime}-x|}\\nabla f(x+sd)^{T}d\\,\\mathbf{d}s\\\\\n & = & \\int_{0}^{|x^{\\prime}-x|}\\int_{0}^{s}d^{T}\\nabla^{2}f(x+td)d\\,\\mathbf{d}t+\\nabla f(x)^{T}d\\,\\mathbf{d}s.\\end{eqnarray*}\nSince $f$ is $\\mathcal{C}^{2}$, we may reduce $\\delta$ if necessary\nso that $\\|A-\\nabla^{2}f(x+td)\\|<\\epsilon$ for all $0\\leq t\\leq|x^{\\prime}-x|$.\nThis tells us that \\begin{eqnarray*}\n|d^{T}(DR^{-1}Q^{T})d-d^{T}\\nabla^{2}f(x)d| & \\leq & |d|\\|DR^{-1}Q^{T}d-\\nabla^{2}f(x)d\\|\\\\\n & \\leq & \\|DR^{-1}Q^{T}d-Ad\\|+\\|Ad-\\nabla^{2}f(x)d\\|\\\\\n & \\leq & \\kappa\\epsilon\\|P\\|\\|R^{-1}\\||d|+\\|A-\\nabla^{2}f(x)\\|\\\\\n & \\leq & \\epsilon(1+\\kappa\\|P\\|\\|R^{-1}\\|).\\end{eqnarray*}\nWe have \\begin{eqnarray*}\n & & \\int_{0}^{|x^{\\prime}-x|}\\int_{0}^{s}d^{T}(DR^{-1}Q^{T})d\\,\\mathbf{d}t+\\nabla f(x)^{T}d\\,\\mathbf{d}s\\\\\n & = & \\nabla f(x)^{T}(x^{\\prime}-x)+\\int_{0}^{|x^{\\prime}-x|}d^{T}(DR^{-1}Q^{T})ds\\,\\mathbf{d}s\\\\\n & = & \\nabla f(x)^{T}(x^{\\prime}-x)+\\frac{|x^{\\prime}-x|^{2}}{2}d^{T}(DR^{-1}Q^{T})d\\\\\n & = & \\nabla f(x)^{T}(x^{\\prime}-x)+\\frac{1}{2}(x^{\\prime}-x)^{T}(DR^{-1}Q^{T})(x^{\\prime}-x).\\end{eqnarray*}\nContinuing with the arithmetic earlier, we obtain\\begin{eqnarray*}\n & & \\left|f(x^{\\prime})-f(x)-\\left(\\nabla f(x)^{T}(x^{\\prime}-x)+\\frac{1}{2}(x^{\\prime}-x)^{T}(DR^{-1}Q^{T})(x^{\\prime}-x)\\right)\\right|\\\\\n & \\leq & 
\\int_{0}^{|x^{\\prime}-x|}\\int_{0}^{s}\\epsilon(1+\\kappa\\|P\\|\\|R^{-1}\\|)\\,\\mathbf{d}t\\,\\mathbf{d}s\\\\\n & = & \\frac{1}{2}|x^{\\prime}-x|^{2}\\epsilon(1+\\kappa\\|P\\|\\|R^{-1}\\|).\\end{eqnarray*}\nLet $x=p_{1}$ and $x^{\\prime}$ be any point in $\\Delta$. Define\n$f_{e}(x^{\\prime})$ by \\[\nf_{e}(x^{\\prime})=f(x)+\\nabla f(x)^{T}(x^{\\prime}-x)+\\frac{1}{2}(x^{\\prime}-x)^{T}(DR^{-1}Q^{T})(x^{\\prime}-x),\\]\nwhich is the quadratic function obtained using Algorithm \\ref{alg:quad-approx}.\nWe have \\begin{eqnarray*}\n|f_{e}(x^{\\prime})-f(x^{\\prime})| & \\leq & \\frac{1}{2}|x^{\\prime}-x|^{2}\\epsilon(1+\\kappa\\|P\\|\\|R^{-1}\\|)\\\\\n & \\leq & \\frac{1}{2}\\mbox{\\rm diam}(\\Delta)^{2}\\epsilon(1+\\kappa\\|P\\|\\|R^{-1}\\|),\\end{eqnarray*}\nwhich gives what we need since $\\|P^{\\dagger}\\|=\\|R^{-1}\\|$.\n\\end{proof}\nIn the statement of Lemma \\ref{lem:quad-est}, we chose the domain\nof $f$ to be $\\mathbb{R}^{n}$ so that the inequality \\eqref{eq:|D-AP|}\nfollows from the equivalence of finite dimensional norms. Next, the\naccuracy of the computed values of $\\nabla f(p_{i+1})-\\nabla f(p_{1})$\nmight be poor, which makes the quadratic approximation strategy ineffective\nonce we are too close to the critical point $\\bar{x}$. We remark\non how we can overcome this problem by exploiting concavity.\n\\begin{rem}\n(Exploiting concavity) The lineality space of the affine hull of $\\Delta$\nmay span the eigenspaces of the $m$ negative eigenvalues of $\\nabla^{2}f(\\bar{x})$\nonce we are close to the critical point $\\bar{x}$. This can be checked\nby calculating the Hessian as was done earlier. If this is the case,\n$f$ would be concave in $\\Delta$ when $p_{1},\\dots,p_{m+1}$ are\nsufficiently close to $\\bar{x}$. 
The estimate $f(x)\\leq f(p_{i})+\\nabla f(p_{i})^{T}(x-p_{i})$\nwould hold for all $x\\in\\Delta$ and $1\\leq i\\leq m+1$, which can\ngive a sufficiently good estimate of $\\max_{x\\in\\partial\\Delta}f(x)$\nthrough linear programming.\n\\end{rem}\n\n\\section{Fast local convergence\\label{sec:Fast-local-convergence}}\n\nIn this section, we discuss how we can find good lower bounds that\nallow us to achieve better convergence if $f$ is $\\mathcal{C}^{2}$\nand $X=\\mathbb{R}^{n}$. Our method extends the local superlinearly\nconvergent method in \\cite{LP08} for finding smooth critical points\nof mountain pass type when $X=\\mathbb{R}^{n}$.\n\nLet us recall the multidimensional mountain pass theorem due to Rabinowitz\n\\cite{R77}.\n\\begin{thm}\n(Multidimensional mountain pass theorem) \\cite{R77}\\label{thm:Rabinowitz-77}\nLet $X=Y\\oplus Z$ be a Banach space with $Z$ closed in $X$ and\n$\\dim(Y)<\\infty$. For $\\rho>0$ define\\[\n\\mathcal{M}:=\\{u\\in Y\\mid\\|u\\|\\leq\\rho\\},\\quad\\mathcal{M}_{0}:=\\{u\\in Y\\mid\\|u\\|=\\rho\\}.\\]\nLet $f:X\\to\\mathbb{R}$ be $\\mathcal{C}^{1}$, and \\[\nb:=\\inf_{u\\in Z}f(u)>a:=\\max_{u\\in\\mathcal{M}_{0}}f(u).\\]\nIf $f$ satisfies the Palais--Smale condition and \\[\nc:=\\inf_{\\gamma\\in\\Gamma}\\max_{u\\in\\mathcal{M}}f\\big(\\gamma(u)\\big)\\mbox{ where }\\Gamma:=\\left\\{ \\gamma:\\mathcal{M}\\to X\\mbox{ is continuous}\\mid\\gamma|_{\\mathcal{M}_{0}}=id\\right\\} ,\\]\nthen $c$ is a critical value of $f$. \n\\end{thm}\n For the case $X=\\mathbb{R}^{3}$, Figure \\ref{fig:multi-MPT-fig}\nillustrates the function $f:\\mathbb{R}^{3}\\rightarrow\\mathbb{R}$\ndefined by $f(x)=x_{3}^{2}-x_{1}^{2}-x_{2}^{2}$. The critical point\n$\\mathbf{0}$ has critical value $0$. Choose $Y$ to be $\\mathbb{R}^{2}\\times\\{0\\}$\nand $Z$ to be $\\{\\mathbf{0}\\}\\times\\mathbb{R}$. 
The union of the\ntwo blue cones is the level set $\\mbox{lev}_{=0}f:=f^{-1}(0)$, while\nthe bold red ring denotes $\\mathcal{M}_{0}$ and the red disc denotes\na possible image of $\\mathcal{M}$ under $\\gamma$. \n\n\\begin{figure}\n\n\n\\includegraphics[scale=0.5]{multimtn}\\caption{\\label{fig:multi-MPT-fig}Illustration of the multidimensional mountain\npass theorem}\n\n\\end{figure}\n\n\nIt seems intuitively clear that $\\gamma(\\mathcal{M})$ has to intersect\nthe vertical axis. This is indeed the case, since $\\mathcal{M}_{0}$\nand $Z$ link. (See for example \\cite[Example II.8.2]{Str08}.)\n\nWith this observation, we easily see that $\\max_{u\\in\\mathcal{M}}f(\\gamma(u))\\geq\\inf_{z\\in Z}f(z)$.\nThus the critical value $c=\\inf_{\\gamma\\in\\Gamma}\\max_{u\\in\\mathcal{M}}f(\\gamma(u))$\nfrom Theorem \\ref{thm:Rabinowitz-77} is bounded from below by $\\inf_{z\\in Z}f(z)$.\nThis gives a lower bound for the critical value. In the mountain pass\ncase when $m=1$, the set $\\mathcal{M}_{0}$ consists of two points,\nand the space $Z$ separates the two points in $\\mathcal{M}_{0}$\nso that any path connecting the two points in $\\mathcal{M}_{0}$ must\nintersect $Z$. \n\nA first try for a fast locally convergent algorithm is as follows:\n\\begin{algorithm}\n\\label{alg:first-local-algorithm}A first try for a fast locally convergent\nalgorithm to find saddle points of Morse index $m$ for $f:X\\to\\mathbb{R}$.\\end{algorithm}\n\\begin{enumerate}\n\\item Set the iteration count $i$ to $0$, and let $l_{i}$ be a lower\nbound of the critical value. \n\\item Find $x_{i}$ and $y_{i}$, where $(S_{i},x_{i},y_{i})$ is an optimizing\ntriple of\\begin{equation}\n\\min_{S\\in\\mathcal{S}}\\max_{x,y\\in S\\cap(\\scriptsize\\mbox{\\rm lev}_{\\geq l_{i}}f)\\cap U_{i}}|x-y|,\\label{eq:key-exp2}\\end{equation}\nwhere $U_{i}$ is an open set. Here $\\mathcal{S}$ is the set of $m$-dimensional\naffine subspaces of $\\mathbb{R}^{n}$ intersecting $U_{i}$. 
(The\ndifference between this formula and \\eqref{eq:keyexp1} is that we\ntake level sets of level $l_{i}$ instead of $\\frac{1}{2}(l_{i}+u_{i})$.)\n\n\\item For an optimal solution $(S_{i},x_{i},y_{i})$, let $l_{i+1}$ be\nthe infimum of $f$ on the $(n-m)$-dimensional affine space passing\nthrough $z_{i}:=\\frac{1}{2}(x_{i}+y_{i})$ whose lineality space is\northogonal to the lineality space of $S_{i}$.\n\\item Increase $i$ and go back to step 2.\n\\end{enumerate}\nWhile Algorithm \\ref{alg:first-local-algorithm} as stated works fine\nfor the case $m=1$ to find critical points of mountain pass type,\nthe $l_{i}$'s calculated in this manner need not increase monotonically\nto the critical value when $m>1$. We first present a lemma on the\nmin-max problem \\eqref{eq:key-exp2} for the case of a quadratic.\n\\begin{lem}\n(Analysis on exact quadratic) \\label{lem:quadratic-min-max} Consider\n$f:\\mathbb{R}^{n}\\rightarrow\\mathbb{R}$ defined by $f(x)=\\sum_{j=1}^{n}a_{j}x_{j}^{2}$,\nwhere the $a_{j}$ are in decreasing order, with $a_{j}>0$ for $1\\leq j\\leq n-m$\nand $a_{j}<0$ for $n-m+1\\leq j\\leq n$. The function $f$ has exactly one\ncritical point $\\mathbf{0}$, and $f(\\mathbf{0})=0$. Given $l<0$,\nan optimizing triple $(\\bar{S},\\bar{x},\\bar{y})$ of the problem \\begin{equation}\n\\min_{S\\in\\mathcal{S}}\\max_{x,y\\in S\\cap\\scriptsize\\mbox{\\rm lev}_{\\geq l}f}|x-y|,\\label{eq:quadratic-min-max}\\end{equation}\nwhere $\\mathcal{S}$ is the set of affine spaces of dimension $m$,\nsatisfies \\[\n\\bar{x}=\\left(0,0,\\dots,0,\\pm\\sqrt{\\frac{l}{a_{n-m+1}}},0,\\dots,0\\right),\\]\nwhere the nonzero term is in the $(n-m+1)$th position, and $\\bar{y}=-\\bar{x}$. \\end{lem}\n\\begin{proof}\nLet $S_{\\bar{z},V}:=\\{\\bar{z}+Vw\\mid w\\in\\mathbb{R}^{m}\\}$, where\n$V\\in\\mathbb{R}^{n\\times m}$ is a matrix with orthonormal columns.\nLet the matrix $A\\in\\mathbb{R}^{n\\times n}$ be the diagonal matrix\nwith entries $a_{j}$ in the $(j,j)$th position. 
The ellipse $S_{\\bar{z},V}\\cap\\mbox{\\rm lev}_{\\geq l}f$\ncan be written as a union of elements of the form $\\bar{z}+Vw$, where\n$w$ satisfies \\begin{eqnarray*}\n(\\bar{z}+Vw)^{T}A(\\bar{z}+Vw) & \\geq & l\\\\\n\\Leftrightarrow w^{T}V^{T}AVw+2\\bar{z}^{T}AVw+\\bar{z}^{T}A\\bar{z} & \\geq & l.\\end{eqnarray*}\nIf the matrix $V^{T}AV$ has a nonnegative eigenvalue, then $S_{\\bar{z},V}\\cap\\mbox{\\rm lev}_{\\geq l}f$\nis unbounded. Otherwise, the set \\[\n\\{\\bar{z}+Vw\\mid w^{T}V^{T}AVw+2\\bar{z}^{T}AVw+\\bar{z}^{T}A\\bar{z}\\geq l\\}\\]\nis bounded. Therefore the inner maximization problem of \\eqref{eq:quadratic-min-max}\ncorresponding to $S=S_{\\bar{z},V}$ has a (not necessarily unique)\npair of maximizers. We continue completing the square with respect\nto $w$ and let the symmetric matrix $C$ be the square root $C=[-V^{T}AV]^{\\frac{1}{2}}$.\\begin{eqnarray*}\n-w^{T}C^{2}w+2\\bar{z}^{T}AVw+\\bar{z}^{T}A\\bar{z} & \\geq & l\\\\\n\\Leftrightarrow-(Cw-C^{-1}V^{T}A\\bar{z})^{T}(Cw-C^{-1}V^{T}A\\bar{z})+\\bar{z}^{T}A\\bar{z}+\\bar{z}^{T}AVC^{-2}V^{T}A^{T}\\bar{z} & \\geq & l.\\end{eqnarray*}\nThe maximum distance between two points of an ellipse is twice the distance\nbetween the center and the furthest point on the ellipse. (This fact\nis easily proved by reducing to, and examining, the two dimensional\ncase.) 
The distance between the center and the furthest point on\nthe ellipse $S_{\\bar{z},V}\\cap\\mbox{\\rm lev}_{\\geq l}f$ can be calculated to\nbe\\[\n\\sqrt{\\frac{1}{\\alpha}(\\bar{z}^{T}A\\bar{z}+\\bar{z}^{T}AVC^{-2}V^{T}A^{T}\\bar{z}-l)},\\]\nwhere $\\alpha$ is the square of the smallest eigenvalue in $C$,\nor equivalently the negative of the largest eigenvalue of $V^{T}AV$.\nThe term $(\\bar{z}^{T}A\\bar{z}+\\bar{z}^{T}AVC^{-2}V^{T}A^{T}\\bar{z})$\nis $\\max\\{f(x)\\mid x\\in S_{\\bar{z},V}\\}$, which we refer to as $\\max_{S_{\\bar{z},V}}f$.\nWe now proceed to minimize $\\max_{S_{\\bar{z},V}}f$ and maximize $\\alpha$\nseparately.\n\n\\textbf{Claim 1: $\\max_{S_{\\bar{z},V}}f\\geq0$.}\n\nWe first prove that the subspace \\[\nZ:=\\{z\\mid z_{n}=z_{n-1}=\\cdots=z_{n-m+1}=0\\}\\]\nmust intersect $S_{\\bar{z},V}$. Recall that $V^{T}AV$ is negative\ndefinite. Therefore for any $w\\neq\\mathbf{0}$, $w^{T}V^{T}AVw<0$.\nSince the first $n-m$ eigenvalues of $A$ are positive, the last $m$\ncomponents of $Vw$ cannot all be zero, for otherwise $w^{T}V^{T}AVw\\geq0$.\nThis shows that the $m\\times m$\nmatrix $V((n-m+1):n,1:m)$ is invertible. We can find some $\\bar{w}$\nsuch that the last $m$ components of $\\bar{z}+V\\bar{w}$ are zeros.\nThis shows that $S_{\\bar{z},V}\\cap Z\\neq\\emptyset$, so $\\max_{S_{\\bar{z},V}}f\\geq\\min\\{f(x)\\mid x\\in Z\\}=0$.\n\n\\textbf{Claim 2: $\\alpha\\leq-a_{n-m+1}$.}\n\nTo find the maximum value of $\\alpha$, we recall that it is the negative\nof the largest eigenvalue of $V^{T}AV$. Since $V\\in\\mathbb{R}^{n\\times m}$,\nthe Courant--Fischer Theorem gives $\\alpha\\leq-a_{n-m+1}$.\n\nChoose the affine space $\\bar{S}:=\\{\\mathbf{0}\\}\\times\\mathbb{R}^{m}$.\nThis choice minimizes $\\max_{S_{\\bar{z},V}}f$ and maximizes $\\alpha$\nsimultaneously, giving\nthe optimal solution in the statement of the lemma.\n\\end{proof}\nIt should be noted, however, that the minimizing subspace need not be\nunique, even if the values of $a_{j}$ are distinct. 
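The closed-form diameter in the proof lends itself to a quick numerical sanity check (our own illustration, not part of the argument): for a small diagonal quadratic, the subspace $\{\mathbf{0}\}\times\mathbb{R}^{m}$ attains the optimal diameter $2\sqrt{l/a_{n-m+1}}$, and no random subspace through the origin does better.

```python
import numpy as np

rng = np.random.default_rng(0)
a = np.array([1.0, -1.0, -3.0])   # a_1 > 0 > a_2 > a_3, so n = 3 and m = 2
A = np.diag(a)
l = -1.0                          # level l < 0

def diameter_through_origin(V):
    """Diameter of S ∩ lev_{>= l} f for f(x) = x^T A x and the subspace
    S = range(V) through the origin.  From the proof, with z̄ = 0 the
    diameter is 2 * sqrt(-l / alpha), where alpha is the negative of the
    largest eigenvalue of V^T A V; the set is unbounded otherwise."""
    lam_max = np.linalg.eigvalsh(V.T @ A @ V).max()
    if lam_max >= 0:
        return np.inf
    return 2.0 * np.sqrt(-l / (-lam_max))

def random_orthonormal(n, m):
    Q, _ = np.linalg.qr(rng.standard_normal((n, m)))
    return Q

# Optimal subspace {0} x R^2 from the lemma.
V_opt = np.array([[0.0, 0.0],
                  [1.0, 0.0],
                  [0.0, 1.0]])
```

By eigenvalue interlacing, every two-dimensional $V$ satisfies $\alpha\leq-a_{2}$, so the sampled diameters can never fall below the optimal value $2\sqrt{l/a_{2}}=2$.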
The example below\nhighlights how Algorithm \\ref{alg:first-local-algorithm} can fail.\n\\begin{example}\n(Failure of Algorithm \\ref{alg:first-local-algorithm}) \\label{exa:failure-first-try}Suppose\n$f(x)=x_{1}^{2}-x_{2}^{2}-3x_{3}^{2}$. The subspace $S=\\{x\\mid x_{1}=x_{3}\\}$\nintersects the level set $\\mbox{\\rm lev}_{\\geq-1}f$ in the disc \\[\n\\left\\{ \\lambda\\left(\\frac{1}{\\sqrt{2}}\\sin\\theta,\\cos\\theta,\\frac{1}{\\sqrt{2}}\\sin\\theta\\right)\\mid0\\leq\\theta\\leq2\\pi,0\\leq\\lambda\\leq1\\right\\} .\\]\nThe largest distance between two points on the disc is $2$, and the\nsubspace $S$ can be verified to give the optimal value to the min-max\nproblem \\eqref{eq:key-exp2} by Lemma \\ref{lem:quadratic-min-max}.\n\nOn the line $S^{\\perp}=\\{\\lambda(1,0,-1)\\mid\\lambda\\in\\mathbb{R}\\}$,\nthe function $f$ is concave, hence there is no minimum. This example\nillustrates that Algorithm \\ref{alg:first-local-algorithm} can fail\nin general. See Figure \\ref{fig:bad-3d}.\n\\end{example}\n\\begin{figure}\n\\includegraphics[scale=0.4]{line_plane}\n\n\\caption{\\label{fig:bad-3d}An example where Algorithm \\ref{alg:first-local-algorithm}\nfails.}\n\n\n\n\\end{figure}\n\n\nExample \\ref{exa:failure-first-try} shows that even if there are\nonly 2 negative eigenvalues, it might be possible to find a two-dimensional\nsubspace $S$ such that the Hessian is negative definite on both $S$\nand $S^{\\perp}$. Therefore, we amend Algorithm \\ref{alg:first-local-algorithm}\nby determining the eigenspace corresponding to the $m$ smallest eigenvalues. \n\\begin{algorithm}\n\\label{alg:fast-local-method} Fast local method to find saddle points\nof Morse index $m$.\\end{algorithm}\n\\begin{enumerate}\n\\item Set the iteration count $i$ to $0$, and let $l_{i}$ be a lower\nbound of the critical value. 
\n\\item Find $x_{i}$ and $y_{i}$, where $(S_{i}^{\\prime},x_{i},y_{i})$\nis an optimizing triple of\\begin{equation}\n\\min_{S\\in\\mathcal{S}}\\max_{x,y\\in S\\cap(\\scriptsize\\mbox{\\rm lev}_{\\geq l_{i}}f)\\cap U_{i}}|x-y|,\\label{eq:key-exp}\\end{equation}\nwhere $U_{i}$ is an open set. Here $\\mathcal{S}$ is the set of $m$-dimensional\naffine subspaces of $\\mathbb{R}^{n}$ intersecting $U_{i}$. We emphasize\nthat the space where minimality is attained in the outer minimization\nproblem is $S_{i}^{\\prime}$. After solving the above problem, find\nthe subspace $S_{i}$ that approximates the eigenspace corresponding\nto the $m$ smallest eigenvalues using Algorithm \\ref{alg:find-S-prime}\nbelow.\n\\item For the points $x_{i},y_{i}$ and subspace $S_{i}$ found in step 2, let\n$l_{i+1}$ be the infimum of $f$ on the $(n-m)$-dimensional\naffine space passing through $z_{i}:=\\frac{1}{2}(x_{i}+y_{i})$ whose\nlineality space is orthogonal to the lineality space of $S_{i}$.\n\\item Increase $i$ and go back to step 2 until convergence.\n\\end{enumerate}\nA local algorithm is needed in Algorithm \\ref{alg:fast-local-method}\nto find the subspace $S_{i}$ in step 2.\n\\begin{algorithm}\n\\label{alg:find-S-prime}Finding the subspace $S_{i}$ in step 2 of\nAlgorithm \\ref{alg:fast-local-method}:\\end{algorithm}\n\\begin{enumerate}\n\\item Let $X_{1}=\\mbox{span}\\{x_{i}-y_{i}\\}$, where $x_{i}$, $y_{i}$\nare found in step 2 of Algorithm \\ref{alg:fast-local-method}, and\nlet $j$ be $1$.\n\\item Find the closest point from $z_{i}:=\\frac{1}{2}(x_{i}+y_{i})$ to\n$\\mbox{\\rm lev}_{\\leq l_{i}}f\\cap(z_{i}+X_{j}^{\\perp})$, which we call $\\bar{p}_{j+1}$.\n\\item Let $X_{j+1}=\\mbox{span}\\{X_{j},\\bar{p}_{j+1}-z_{i}\\}$ and increase\n$j$ by $1$. If $j=m$, let $S_{i}$ be $z_{i}+X_{m}$ and the algorithm\nends. 
Otherwise, go back to step 2.\n\\end{enumerate}\nStep 2 of Algorithm \\ref{alg:find-S-prime} finds the negative eigenvalues\nand eigenvectors, starting from the eigenvalues furthest from zero.\nOnce all the eigenvectors are found, $S_{i}$ is the span of\nthese eigenvectors. \n\nIn some situations, the lineality space of $S_{i}$ is known in advance,\nor does not differ much from that of the previous iteration. In this case,\nwe can bypass Algorithm \\ref{alg:find-S-prime} and use\nthat estimate instead.\n\nWe are now ready to prove the convergence of Algorithm \\ref{alg:fast-local-method}. \n\n\n\\section{Proof of superlinear convergence of local algorithm\\label{sec:Proof-of-superlinear}}\n\n We prove our result on the convergence of Algorithm \\ref{alg:fast-local-method}\nin steps. The first step is to look closely at a model problem. \n\\begin{assumption}\n\\label{ass:condns-on-model}Given $\\delta>0$, suppose $h:\\mathbb{R}^{n}\\rightarrow\\mathbb{R}$\nis $\\mathcal{C}^{2}$, \\begin{eqnarray*}\n|\\nabla h(x)-Ax| & \\leq & \\delta|x|,\\\\\n\\mbox{{\\rm and }}|h(x)-\\frac{1}{2}x^{T}Ax| & \\leq & \\frac{1}{2}\\delta|x|^{2}\\mbox{ {\\rm for all }}x\\in\\mathbb{B},\\end{eqnarray*}\nwhere $A\\in\\mathbb{R}^{n\\times n}$ is an invertible diagonal matrix\nwhose diagonal entries $a_{i}=A_{ii}$ are ordered decreasingly, with \\[\na_{1}>a_{2}>\\cdots>a_{n-m}>0>a_{n-m+1}>\\cdots>a_{n}.\\]\n\n\\end{assumption}\nIt is clear that $\\nabla h(\\mathbf{0})=\\mathbf{0}$, $h(\\mathbf{0})=0$,\n$\\nabla^{2}h(\\mathbf{0})=A$, and the critical point $\\mathbf{0}$ has Morse index $m$. 
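The sandwich $h_{\min}\leq h\leq h_{\max}$ on $\mathbb{B}$ that drives the level set property below can be checked numerically on a toy perturbed quadratic (our own illustration; the cubic perturbation $0.05|x|^{3}$ is chosen so that the bound of Assumption \ref{ass:condns-on-model} holds with $\delta=0.1$ on the unit ball).

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.diag([2.0, -1.0])   # n = 2, m = 1: one positive, one negative entry
delta = 0.1
I2 = np.eye(2)

def h(x):
    # |h(x) - 0.5 x^T A x| = 0.05 |x|^3 <= 0.5 * delta * |x|^2 when |x| <= 1.
    return 0.5 * x @ A @ x + 0.05 * np.linalg.norm(x)**3

def h_min(x):
    return 0.5 * x @ (A - delta * I2) @ x

def h_max(x):
    return 0.5 * x @ (A + delta * I2) @ x

def random_point_in_unit_ball():
    v = rng.standard_normal(2)
    return v / np.linalg.norm(v) * rng.uniform(0.0, 1.0)
```

Sampling points of $\mathbb{B}$ confirms $h_{\min}(x)\leq h(x)\leq h_{\max}(x)$, from which the level set inclusions follow immediately.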
Here is\na simple observation that bounds the level sets of $h$:\n\\begin{prop}\n(Level set property) The level sets of $h$ satisfy\\begin{eqnarray*}\n & & \\mathbb{B}\\cap\\mbox{\\rm lev}_{\\geq l}h_{\\min}\\subset\\mathbb{B}\\cap\\mbox{\\rm lev}_{\\geq l}h\\subset\\mathbb{B}\\cap\\mbox{\\rm lev}_{\\geq l}h_{\\max},\\\\\n & \\mbox{{\\rm and}} & \\mathbb{B}\\cap\\mbox{\\rm lev}_{\\leq l}h_{\\max}\\subset\\mathbb{B}\\cap\\mbox{\\rm lev}_{\\leq l}h\\subset\\mathbb{B}\\cap\\mbox{\\rm lev}_{\\leq l}h_{\\min}.\\end{eqnarray*}\n\\end{prop}\n\\begin{proof}\nThis follows easily from $|h(x)-\\frac{1}{2}x^{T}Ax|\\leq\\frac{1}{2}\\delta|x|^{2}$\nfor all $x\\in\\mathbb{B}$.\n\\end{proof}\nFor convenience, we highlight the standard problem below:\n\\begin{problem}\n\\label{pro:standard-prob}Suppose $g:\\mathbb{R}^{n}\\rightarrow\\mathbb{R}$\nis $\\mathcal{C}^{2}$, with critical point $\\mathbf{0}$ of Morse\nindex $m$, $g(\\mathbf{0})=0$, and the Hessian $\\nabla^{2}g(\\mathbf{0})$\nhas distinct eigenvalues that are all nonzero. Consider the problem\n\\[\n\\min_{S\\in\\mathcal{S}}\\max_{x,y\\in S\\cap(\\scriptsize\\mbox{\\rm lev}_{\\geq l}g)\\cap\\mathbb{B}}|x-y|,\\]\nwhere $\\mathcal{S}$ is the set of $m$-dimensional affine subspaces.\n\\end{problem}\nNote that in Problem \\ref{pro:standard-prob}, we have restricted\n$x$ and $y$ to lie in $\\mathbb{B}$. Here is a result\non the optimizing pair $(\\bar{x},\\bar{y})$ of the inner maximization\nproblem in Problem \\ref{pro:standard-prob}.\n\\begin{lem}\n(Convergence to eigenvector and saddle point)\\label{lem:min-maximizer-ppties}For\nall $\\delta>0$ sufficiently small, suppose that $h:\\mathbb{R}^{n}\\rightarrow\\mathbb{R}$\nis such that Assumption \\ref{ass:condns-on-model} holds. 
Assume that\nfor the optimizing triple $(\\bar{S},\\bar{x},\\bar{y})$ of Problem\n\\ref{pro:standard-prob} for $g=h$, $(\\bar{x},\\bar{y})$ is the unique\npair of points in $\\bar{S}\\cap(\\mbox{\\rm lev}_{\\geq l}h)\\cap\\mathbb{B}$ such\nthat $|\\bar{x}-\\bar{y}|=\\mbox{\\rm diam}(\\bar{S}\\cap(\\mbox{\\rm lev}_{\\geq l}h)\\cap\\mathbb{B})$.\nThen there exists $\\epsilon>0$ such that if $l<0$ satisfies $-\\epsilon0$. Then \\begin{eqnarray*}\n\\lambda_{1}\\nabla h(\\bar{x})+\\lambda_{2}\\nabla h(\\bar{y}) & = & \\mathbf{0}\\\\\n|\\lambda_{1}A\\bar{x}+\\lambda_{2}A\\bar{y}| & \\leq & \\lambda_{1}|A\\bar{x}-\\nabla h(\\bar{x})|+\\lambda_{2}|A\\bar{y}-\\nabla h(\\bar{y})|\\\\\n & \\leq & \\delta(\\lambda_{1}|\\bar{x}|+\\lambda_{2}|\\bar{y}|)\\\\\n|A(\\lambda_{1}\\bar{x}+\\lambda_{2}\\bar{y})| & \\leq & \\delta(\\lambda_{1}|\\bar{x}|+\\lambda_{2}|\\bar{y}|)\\\\\n\\Rightarrow|\\lambda_{1}\\bar{x}+\\lambda_{2}\\bar{y}| & \\leq & |A^{-1}||A(\\lambda_{1}\\bar{x}+\\lambda_{2}\\bar{y})|\\\\\n & \\leq & |A^{-1}|\\delta(\\lambda_{1}|\\bar{x}|+\\lambda_{2}|\\bar{y}|).\\end{eqnarray*}\nThis means that there are points $x^{\\prime}$ and $y^{\\prime}$ such\nthat $\\lambda_{1}x^{\\prime}+\\lambda_{2}y^{\\prime}=\\mathbf{0}$, $|\\bar{x}-x^{\\prime}|\\leq|A^{-1}|\\delta|\\bar{x}|$\nand $|\\bar{y}-y^{\\prime}|\\leq|A^{-1}|\\delta|\\bar{y}|$. 
With this,\nwe now concentrate on pairs of points that are negative multiples\nof each other.\n\nNow,\\begin{eqnarray*}\n\\lambda_{1}\\nabla h(\\bar{x}) & = & \\bar{y}-\\bar{x}\\\\\n\\Rightarrow\\nabla h(\\bar{x}) & = & \\frac{1}{\\lambda_{1}}(\\bar{y}-\\bar{x})\\\\\n\\Rightarrow\\left|A\\bar{x}-\\frac{1}{\\lambda_{1}}(\\bar{y}-\\bar{x})\\right| & \\leq & \\delta|\\bar{x}|.\\end{eqnarray*}\nSimilarly, this gives us\\begin{eqnarray*}\n\\left|A(-\\bar{y})-\\frac{1}{\\lambda_{2}}(\\bar{y}-\\bar{x})\\right| & \\leq & \\delta|\\bar{y}|.\\end{eqnarray*}\nTherefore, \\[\n\\left|A(\\bar{y}-\\bar{x})+\\left(\\frac{1}{\\lambda_{1}}+\\frac{1}{\\lambda_{2}}\\right)(\\bar{y}-\\bar{x})\\right|\\leq\\delta(|\\bar{x}|+|\\bar{y}|).\\]\nThis gives:\\begin{eqnarray}\n & & \\left|A(y^{\\prime}-x^{\\prime})+\\left(\\frac{1}{\\lambda_{1}}+\\frac{1}{\\lambda_{2}}\\right)(y^{\\prime}-x^{\\prime})\\right|\\nonumber \\\\\n & \\leq & |A(y^{\\prime}-x^{\\prime})-A(\\bar{y}-\\bar{x})|+\\left(\\frac{1}{\\lambda_{1}}+\\frac{1}{\\lambda_{2}}\\right)|(y^{\\prime}-x^{\\prime})-(\\bar{y}-\\bar{x})|\\nonumber \\\\\n & & \\qquad+\\left|A(\\bar{y}-\\bar{x})+\\left(\\frac{1}{\\lambda_{1}}+\\frac{1}{\\lambda_{2}}\\right)(\\bar{y}-\\bar{x})\\right|\\nonumber \\\\\n & \\leq & |A||A^{-1}|\\delta(|\\bar{x}|+|\\bar{y}|)+\\left(\\frac{1}{\\lambda_{1}}+\\frac{1}{\\lambda_{2}}\\right)|A^{-1}|\\delta(|\\bar{x}|+|\\bar{y}|)+\\delta(|\\bar{x}|+|\\bar{y}|)\\nonumber \\\\\n & = & \\delta(|\\bar{x}|+|\\bar{y}|)\\underbrace{\\left(|A||A^{-1}|+\\left(\\frac{1}{\\lambda_{1}}+\\frac{1}{\\lambda_{2}}\\right)|A^{-1}|+1\\right)}_{(1)}.\\label{eq:y_minus_x_eigenvalue}\\end{eqnarray}\nNext, we relate $|\\bar{y}-\\bar{x}|$ and $|\\bar{x}|+|\\bar{y}|$. We\nhave \\begin{eqnarray*}\n|\\bar{x}| & \\leq & |x^{\\prime}|+|A^{-1}|\\delta|\\bar{x}|\\\\\n\\Rightarrow(1-|A^{-1}|\\delta)|\\bar{x}| & \\leq & |x^{\\prime}|,\\end{eqnarray*}\nand similarly, $(1-|A^{-1}|\\delta)|\\bar{y}|\\leq|y^{\\prime}|$. 
It\nis clear that $|y^{\\prime}-x^{\\prime}|=|x^{\\prime}|+|y^{\\prime}|$,\nand we get\\begin{eqnarray*}\n|\\bar{x}|+|\\bar{y}| & \\leq & \\frac{1}{1-|A^{-1}|\\delta}(|x^{\\prime}|+|y^{\\prime}|)\\\\\n & = & \\frac{1}{1-|A^{-1}|\\delta}|y^{\\prime}-x^{\\prime}|\\\\\n & \\leq & \\frac{1}{1-|A^{-1}|\\delta}[|\\bar{y}-\\bar{x}|+|A^{-1}|\\delta(|\\bar{x}|+|\\bar{y}|)]\\\\\n\\Rightarrow\\left(1-\\frac{|A^{-1}|\\delta}{1-|A^{-1}|\\delta}\\right)(|\\bar{x}|+|\\bar{y}|) & \\leq & \\frac{1}{1-|A^{-1}|\\delta}|\\bar{y}-\\bar{x}|\\\\\n\\Rightarrow(1-2|A^{-1}|\\delta)(|\\bar{x}|+|\\bar{y}|) & \\leq & |\\bar{y}-\\bar{x}|.\\end{eqnarray*}\nTo show that $(1)$ in \\eqref{eq:y_minus_x_eigenvalue} converges\nto $0$ as $\\delta\\searrow0$, we need to show that $(\\frac{1}{\\lambda_{1}}+\\frac{1}{\\lambda_{2}})$\nremains bounded as $\\delta\\searrow0$. Note that \\begin{eqnarray*}\n\\nabla h(\\bar{x})-\\nabla h(\\bar{y}) & = & \\left(\\frac{1}{\\lambda_{1}}+\\frac{1}{\\lambda_{2}}\\right)(\\bar{y}-\\bar{x})\\\\\n\\Rightarrow\\left|\\frac{1}{\\lambda_{1}}+\\frac{1}{\\lambda_{2}}\\right| & = & \\frac{\\left|\\nabla h(\\bar{x})-\\nabla h(\\bar{y})\\right|}{|\\bar{y}-\\bar{x}|}\\\\\n & \\leq & \\frac{1}{|\\bar{y}-\\bar{x}|}[|A(\\bar{x}-\\bar{y})|+\\delta(|\\bar{x}|+|\\bar{y}|)]\\\\\n & \\leq & \\frac{1}{|\\bar{y}-\\bar{x}|}\\left(|A||\\bar{x}-\\bar{y}|+\\frac{1}{1-2|A^{-1}|\\delta}\\delta|\\bar{y}-\\bar{x}|\\right)\\\\\n & = & |A|+\\frac{\\delta}{1-2|A^{-1}|\\delta}.\\end{eqnarray*}\nSince the eigenvectors depend continuously on the entries of a matrix\nwhen the eigenvalues remain distinct, we see that $\\frac{1}{|\\bar{y}-\\bar{x}|}(y^{\\prime}-x^{\\prime})$\nconverges to an eigenvector of $A$ as $\\delta\\rightarrow0$ from\nformula \\eqref{eq:y_minus_x_eigenvalue}. 
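In the unperturbed case $\delta=0$, the alignment claimed next can be seen directly: restricted to the span of the negative eigenvectors, the level set $\{\frac{1}{2}x^{T}Ax\geq l\}$ is an ellipse, and its diameter pair lies along the eigenvector whose eigenvalue is closest to zero. A brute-force check on a hypothetical two-dimensional instance (eigenvalues and level chosen for illustration):

```python
import math

# Hypothetical 2-D instance on the negative subspace: a1 = -1.5 plays
# the role of a_{n-m+1} (closest negative eigenvalue to zero), a2 = -3.0,
# level l = -0.1.  The set {(1/2)(a1 x^2 + a2 y^2) >= l} is the ellipse
# |a1| x^2 + |a2| y^2 <= 2|l|, which lies well inside the unit ball.
a1, a2, l = -1.5, -3.0, -0.1
r1 = math.sqrt(2.0 * abs(l) / abs(a1))  # semi-axis along e1
r2 = math.sqrt(2.0 * abs(l) / abs(a2))  # semi-axis along e2

# The farthest pair of points of a convex set lies on its boundary,
# so sampling the boundary suffices.
pts = [(r1 * math.cos(2.0 * math.pi * k / 360),
        r2 * math.sin(2.0 * math.pi * k / 360)) for k in range(360)]

diam, pair = 0.0, None
for i in range(len(pts)):
    for j in range(i + 1, len(pts)):
        d = math.dist(pts[i], pts[j])
        if d > diam:
            diam, pair = d, (pts[i], pts[j])

xbar, ybar = pair
u = ((ybar[0] - xbar[0]) / diam, (ybar[1] - xbar[1]) / diam)
# diam is 2*r1 and u is (up to sign) the elementary vector e1.
```

The diameter is attained along the axis of the eigenvalue closest to zero, matching the role of $a_{n-m+1}$ in the argument.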
\n\nNext, we show that $\\frac{1}{|\\bar{y}-\\bar{x}|}(\\bar{y}-\\bar{x})$\nconverges to an eigenvector corresponding to the eigenvalue $a_{n-m+1}$.\nRecall that $2\\sqrt{\\frac{l}{a_{n-m+1}-\\delta}}\\leq|\\bar{x}-\\bar{y}|$\nand $\\left|(\\bar{x}-\\bar{y})-(x^{\\prime}-y^{\\prime})\\right|\\leq2|A^{-1}|\\delta$.\n So $\\frac{1}{|\\bar{y}-\\bar{x}|}(\\bar{y}-\\bar{x})$ has the same\nlimit as $\\frac{1}{|\\bar{y}-\\bar{x}|}(y^{\\prime}-x^{\\prime})$. If\n$x^{\\prime}$ and $y^{\\prime}$ are such that $\\frac{1}{|\\bar{y}-\\bar{x}|}(y^{\\prime}-x^{\\prime})$\nconverges to an eigenvector corresponding to $a_{k}$, then Lemma \\ref{lem:bounds-on-primes}\nbelow gives us the following chain of inequalities: \\begin{eqnarray*}\n|\\bar{x}-\\bar{y}| & \\leq & |\\bar{x}|+|\\bar{y}|\\\\\n & \\leq & 2\\sqrt{[(1+\\theta)^{2}+(n-1)\\theta^{2}]\\frac{l}{(a_{k}+\\delta)(1-\\theta)^{2}+(n-1)(a_{1}+\\delta)\\theta^{2}}},\\end{eqnarray*}\nwhere $\\theta\\to0$ as $\\delta\\searrow0$. We note that $k\\geq n-m+1$\nbecause $a_{k}$ cannot be nonnegative. As $\\delta\\searrow0$, the\nlimit of the RHS of the above is $2\\sqrt{\\frac{l}{a_{k}}}$. This\ngives a contradiction if $k>n-m+1$, so $k=n-m+1$.\n\n\\textbf{To show $\\frac{1}{2}(\\bar{x}+\\bar{y})\\to\\mathbf{0}$ as $\\delta\\searrow0$:\n}We now work out an upper bound for $\\left|\\frac{1}{2}(\\bar{x}+\\bar{y})\\right|$\nusing Lemma \\ref{lem:bounds-on-primes}. 
We get\\begin{eqnarray*}\n\\left|\\frac{1}{2}(\\bar{x}+\\bar{y})\\right| & \\leq & \\left|\\frac{1}{2}(x^{\\prime}+y^{\\prime})\\right|+\\frac{1}{2}|A^{-1}|\\delta(|\\bar{x}|+|\\bar{y}|)\\\\\n & = & \\frac{1}{2}\\big||x^{\\prime}|-|y^{\\prime}|\\big|+\\frac{1}{2}|A^{-1}|\\delta(|\\bar{x}|+|\\bar{y}|)\\\\\n & \\leq & \\frac{1}{2}\\big||\\bar{x}|-|\\bar{y}|\\big|+|A^{-1}|\\delta(|\\bar{x}|+|\\bar{y}|)\\\\\n & \\leq & \\frac{1}{2}\\sqrt{[(1+\\theta)^{2}+(n-1)\\theta^{2}]\\frac{l}{(a_{n-m+1}+\\delta)(1-\\theta)^{2}+(n-1)(a_{1}+\\delta)\\theta^{2}}}\\\\\n & & \\qquad-\\frac{1}{2}[1-\\theta]\\sqrt{\\frac{l}{(a_{n-m+1}-\\delta)(1+\\theta)^{2}+(n-1)(a_{n}-\\delta)\\theta^{2}}}+|A^{-1}|\\delta(|\\bar{x}|+|\\bar{y}|).\\end{eqnarray*}\nHere, $\\theta>0$ is such that $\\theta\\to0$ as $\\delta\\to0$. At\nthis point, we note that the final formula above can be written as\n$\\frac{1}{2}\\left(\\sqrt{\\frac{l}{c_{1}}}-\\sqrt{\\frac{l}{c_{2}}}\\right)+|A^{-1}|\\delta(|\\bar{x}|+|\\bar{y}|)$,\nwhere $c_{1},c_{2}<0$, with $|c_{1}|<|c_{2}|$, and $c_{1},c_{2}\\rightarrow a_{n-m+1}$\nas $\\delta\\rightarrow0$. 
Therefore\\begin{eqnarray*}\n\\left|\\frac{1}{2}(\\bar{x}+\\bar{y})\\right| & \\leq & \\frac{1}{2}\\left(\\sqrt{\\frac{l}{c_{1}}}-\\sqrt{\\frac{l}{c_{2}}}\\right)+|A^{-1}|\\delta(|\\bar{x}|+|\\bar{y}|)\\\\\n & = & \\frac{1}{2}\\frac{\\frac{l}{c_{1}}-\\frac{l}{c_{2}}}{\\sqrt{\\frac{l}{c_{1}}}+\\sqrt{\\frac{l}{c_{2}}}}+|A^{-1}|\\delta(|\\bar{x}|+|\\bar{y}|)\\\\\n & \\leq & \\frac{1}{2c_{1}c_{2}}\\frac{l(c_{2}-c_{1})}{2\\sqrt{\\frac{l}{c_{2}}}}+|A^{-1}|\\delta(|\\bar{x}|+|\\bar{y}|)\\\\\n & \\leq & \\frac{c_{2}-c_{1}}{4c_{1}}\\sqrt{\\frac{l}{c_{2}}}+|A^{-1}|\\delta(|\\bar{x}|+|\\bar{y}|).\\end{eqnarray*}\nIt is clear that as $\\delta\\rightarrow0$, the above formula goes\nto zero, so $|\\frac{1}{2}(\\bar{x}+\\bar{y})|^{2}\\/|l|\\rightarrow0$\nas $\\delta\\rightarrow0$ as needed.\n\\end{proof}\nIn Lemma \\ref{lem:bounds-on-primes} below, we say that $e_{i}$ is\nthe $i$th \\emph{elementary vector} if it is the $i$th column of\nthe identity matrix. It is also the eigenvector corresponding to the\neigenvalue $a_{i}$ of $A$. \n\\begin{lem}\n(Length estimates of vectors) \\label{lem:bounds-on-primes} Let $h:\\mathbb{R}^{n}\\rightarrow\\mathbb{R}$\nand $A\\in\\mathbb{R}^{n\\times n}$ satisfy Assumption \\ref{ass:condns-on-model}\nfor some $\\delta>0$. Suppose $h(\\bar{x})=h(\\bar{y})=l<0$. Suppose\n$\\theta>0$ is such that $|d_{\\bar{x}}-e_{k}|_{\\infty}<\\theta$ for\nsome $d_{\\bar{x}}$ pointing in the same direction as $\\bar{x}$,\nand that the same relation holds for $\\bar{y}$. 
\n\nThen $|\\bar{x}|$ and $|\\bar{y}|$ are bounded from below and above\nby\\[\n(1-\\theta)\\sqrt{\\frac{l}{(a_{k}-\\delta)(1+\\theta)^{2}+(n-1)(a_{n}-\\delta)\\theta^{2}}}\\leq|\\bar{x}|,|\\bar{y}|,\\]\n\\[\n|\\bar{x}|,|\\bar{y}|\\leq\\sqrt{[(1+\\theta)^{2}+(n-1)\\theta^{2}]\\frac{l}{(a_{k}+\\delta)(1-\\theta)^{2}+(n-1)(a_{1}+\\delta)\\theta^{2}}}.\\]\n\\end{lem}\n\\begin{proof}\nNecessarily we must have $k\\geq n-m+1$ because if $k0$, there exists\n$\\delta>0$ such that if $V_{k}\\in\\mathbb{R}^{n\\times k}$ has orthonormal\ncolumns and $|V_{k}-E_{k}|<\\delta$ , then $V_{k}$ can be completed\nto an orthogonal matrix $V_{n}\\in\\mathbb{R}^{n\\times n}$ such that\n$|I-V_{n}|<\\epsilon$.\n\\end{lem}\nThe above lemma is an easy consequence of the following result.\n\\begin{lem}\n(Finding orthogonal vector) \\label{lem:matrix-approx}For all $\\epsilon>0$,\nthere exists a $\\delta>0$ such that if $V_{k}\\in\\mathbb{R}^{n\\times k}$\nhas orthonormal columns and $|V_{k}-E_{k}|<\\delta$ , then there is\na vector $v_{k+1}\\in\\mathbb{R}^{n}$ such that $|v_{k+1}|_{2}=1$\nand is orthogonal to all columns of $V_{k}$, and the concatenation\n$V_{k+1}:=[V_{k},v_{k+1}]$ satisfies $|V_{k+1}-E_{k+1}|<\\epsilon$.\\end{lem}\n\\begin{proof}\nSince all finite dimensional norms are equivalent, we can assume that\nthe norm $|\\cdot|$ on $\\mathbb{R}^{n\\times k}$, $\\mathbb{R}^{n\\times(k+1)}$\nis the $\\infty$-norm for vectors, that is $|M|=\\max_{i,j}|M(i,j)|$.\nSuppose $|V_{k}-E_{k}|<\\delta$. Then $|V_{k}(i,j)|<\\delta$ if $i\\neq j$\nand $|V_{k}(i,i)-1|<\\delta$. We now construct the vector $v_{k+1}$\nusing the Gram-Schmidt process. 
\n\nThe direction of $v_{k+1}$ obtained by the Gram-Schmidt process is:\\begin{eqnarray*}\n(I-V_{k}V_{k}^{T})e_{k+1} & = & e_{k+1}-V_{k}V_{k}^{T}e_{k+1}\\\\\n & = & e_{k+1}-\\sum_{i=1}^{k}V_{k}(k+1,i)V_{k}(:,i).\\end{eqnarray*}\nSince $|V_{k}(i,j)|<\\delta$ for all $i\\neq j$, the sum $\\alpha_{k+1}\\in\\mathbb{R}^{n}$\ndefined by $\\alpha_{k+1}=\\sum_{i=1}^{k}V_{k}(k+1,i)V_{k}(:,i)$ has\ncomponents obeying the bounds\\[\n|\\alpha_{k+1}(j)|\\leq\\left\\{ \\begin{array}{ll}\nk\\delta^{2} & \\mbox{ if }j\\geq k+1,\\\\\n(k-1)\\delta^{2}+\\delta & \\mbox{ if }j\\leq k.\\end{array}\\right.\\]\nNormalizing $e_{k+1}-\\alpha_{k+1}$ gives the required vector $v_{k+1}$, and\nthe bounds above show that $|V_{k+1}-E_{k+1}|<\\epsilon$ once $\\delta$\nis small enough.\n\\end{proof}\nThe next lemma bounds from below the minimum of $h$ on an affine subspace\nclose to the span of the first $n-m$ elementary vectors.\n\\begin{lem}\n(Lower bound on affine subspaces) \\label{lem:bdd-on-l-(i+1)}Let $\\delta>0$\nbe sufficiently small, and suppose $h:\\mathbb{R}^{n}\\rightarrow\\mathbb{R}$\nsatisfies Assumption \\ref{ass:condns-on-model}. Let $S_{\\bar{z},V}:=\\{\\bar{z}+Vw\\mid w\\in\\mathbb{R}^{n-m}\\}$,\nand $V\\in\\mathbb{R}^{n\\times(n-m)}$ be such that $V$ has orthonormal\ncolumns, with $|\\bar{z}-\\mathbf{0}|<\\delta$ and $|V-E_{n-m}|<\\delta$,\nwhere $E_{n-m}\\in\\mathbb{R}^{n\\times(n-m)}$ is the first $n-m$ columns\nof the identity matrix. Then\\[\n-\\frac{1}{2}|A-\\delta I|\\left(1+|[V^{T}(A-\\delta I)V]^{-1}||V^{T}||A-\\delta I|\\right)^{2}|\\bar{z}|^{2}\\leq\\min_{s\\in S_{\\bar{z},V}\\cap\\mathbb{B}}h(s).\\]\n\\end{lem}\n\\begin{proof}\nWe find a lower bound for the smallest value of $h$ on $S_{\\bar{z},V}$.\nThe function $h_{\\bar{z},V}:\\mathbb{R}^{n-m}\\rightarrow\\mathbb{R}$\ndefined by $h_{\\bar{z},V}(w):=h(\\bar{z}+Vw)$ satisfies\\begin{eqnarray*}\nh_{\\bar{z},V}(w) & = & h(\\bar{z}+Vw)\\\\\n & \\geq & \\frac{1}{2}(\\bar{z}+Vw)^{T}(A-\\delta I)(\\bar{z}+Vw).\\end{eqnarray*}\nLet us denote $h_{\\bar{z},V,\\min}:\\mathbb{R}^{n-m}\\rightarrow\\mathbb{R}$\nby $h_{\\bar{z},V,\\min}(w)=\\frac{1}{2}(\\bar{z}+Vw)^{T}(A-\\delta I)(\\bar{z}+Vw)$.\nThe Hessian of $h_{\\bar{z},V,\\min}$ is $V^{T}(A-\\delta I)V$, which\ntells us that $h_{\\bar{z},V,\\min}$ is strictly convex. 
Therefore,\nwe seek to find the minimizer of $h_{\\bar{z},V,\\min}$.\n\nThe minimizing value of $w$, which we denote as $\\bar{w}_{\\min}$,\nsatisfies $\\nabla h_{\\bar{z},V,\\min}(\\bar{w}_{\\min})=\\mathbf{0}$.\nThis gives us \\begin{eqnarray*}\nV^{T}(A-\\delta I)\\bar{z}+V^{T}(A-\\delta I)V\\bar{w}_{\\min} & = & \\mathbf{0}\\\\\n\\Rightarrow\\bar{w}_{\\min} & = & -[V^{T}(A-\\delta I)V]^{-1}V^{T}(A-\\delta I)\\bar{z}.\\end{eqnarray*}\nAn easy bound on $\\left|\\bar{w}_{\\min}\\right|$ is $\\left|\\bar{w}_{\\min}\\right|\\leq|[V^{T}(A-\\delta I)V]^{-1}||V^{T}||A-\\delta I||\\bar{z}|$.\n So $h_{\\bar{z},V,\\min}$ is bounded from below by \\begin{eqnarray}\n\\min_{w}h_{\\bar{z},V,\\min}(w) & = & \\min_{w}\\frac{1}{2}(\\bar{z}+Vw)^{T}(A-\\delta I)(\\bar{z}+Vw)\\nonumber \\\\\n & = & \\frac{1}{2}(\\bar{z}+V\\bar{w}_{\\min})^{T}(A-\\delta I)(\\bar{z}+V\\bar{w}_{\\min})\\nonumber \\\\\n & \\geq & -\\frac{1}{2}|\\bar{z}+V\\bar{w}_{\\min}||A-\\delta I||\\bar{z}+V\\bar{w}_{\\min}|\\nonumber \\\\\n & = & -\\frac{1}{2}|A-\\delta I||\\bar{z}+V\\bar{w}_{\\min}|^{2}\\nonumber \\\\\n & \\geq & -\\frac{1}{2}|A-\\delta I|[|\\bar{z}|+\\left|V\\bar{w}_{\\min}\\right|]^{2}\\nonumber \\\\\n & = & -\\frac{1}{2}|A-\\delta I|[|\\bar{z}|+|\\bar{w}_{\\min}|]^{2}\\nonumber \\\\\n & \\geq & -\\frac{1}{2}|A-\\delta I|\\left(|\\bar{z}|+\\left|[V^{T}(A-\\delta I)V]^{-1}\\right||V^{T}||A-\\delta I||\\bar{z}|\\right)^{2}\\nonumber \\\\\n & = & -\\frac{1}{2}|A-\\delta I|\\left(1+\\left|[V^{T}(A-\\delta I)V]^{-1}\\right||V^{T}||A-\\delta I|\\right)^{2}|\\bar{z}|^{2}.\\label{eq:bdd-below-h-subspace}\\end{eqnarray}\n\n\\end{proof}\nWe shall prove Lemma \\ref{lem:on-S-prime} about the approximation\nof the eigenvectors corresponding to the smallest eigenvalues. This\nlemma analyzes Algorithm \\ref{alg:find-S-prime}. 
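The minimizer formula $\bar{w}_{\min}=-[V^{T}(A-\delta I)V]^{-1}V^{T}(A-\delta I)\bar{z}$ and the lower bound just derived can be sanity-checked on a small instance. The sketch below takes the exact case $V=E_{n-m}$, so the linear solve reduces to componentwise division; all numerical values are hypothetical, and the general case would replace the componentwise step by a solve against $V^{T}(A-\delta I)V$.

```python
import random

# Hypothetical instance: n = 3, m = 1, A = diag(2.0, 1.0, -1.5),
# delta = 0.1, V = E_{n-m} (first n-m columns of the identity), and
# zbar a small offset of the affine subspace.
a = [2.0, 1.0, -1.5]
delta = 0.1
c = [ai - delta for ai in a]   # diagonal of A - delta I
zbar = [0.03, -0.02, 0.05]
nm = 2                          # n - m

def val(w):
    # h_min restricted to the subspace: (1/2)(zbar + Vw)^T C (zbar + Vw)
    s = [zbar[0] + w[0], zbar[1] + w[1], zbar[2]]
    return 0.5 * sum(ci * si * si for ci, si in zip(c, s))

# w_min = -[V^T C V]^{-1} V^T C zbar; with V = E_{n-m} and C diagonal
# this is simply w_i = -zbar_i for i <= n-m.
w_min = [-zbar[0], -zbar[1]]
min_val = val(w_min)

# Lemma's lower bound, with operator norms of the diagonal case:
# |C| = max |c_i|, |[V^T C V]^{-1}| = 1 / min_{i <= n-m} |c_i|, |V^T| = 1.
normC = max(abs(ci) for ci in c)
inv_norm = 1.0 / min(abs(c[i]) for i in range(nm))
z2 = sum(zi * zi for zi in zbar)
lower = -0.5 * normC * (1.0 + inv_norm * normC) ** 2 * z2

# Check that w_min really minimizes: random perturbations never go lower.
random.seed(1)
is_min = all(val([w_min[0] + random.uniform(-1, 1),
                  w_min[1] + random.uniform(-1, 1)]) >= min_val - 1e-12
             for _ in range(500))
```

For this instance the minimum value lies above the lemma's lower bound, as expected.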
We clarify our notation.\nIn the case of an exact quadratic, Algorithm \\ref{alg:fast-local-method}\nfirst finds $e_{n-m+1}$, then invokes Algorithm \\ref{alg:find-S-prime}\nto find the eigenvector $e_{n}$, followed by $e_{n-1}$, $e_{n-2}$\nand so on, all the way to $e_{n-m+2}$. \n\nWe define $I_{k}$ and $I_{k}^{\\perp}$ as subsets of $\\{1,\\dots,n\\}$\nby \\begin{eqnarray*}\nI_{k} & := & \\{n-m+1\\}\\cup\\{n-k+2,n-k+3,\\dots,n\\}\\\\\nI_{k}^{\\perp} & := & \\{1,\\dots,n\\}\\backslash I_{k}.\\end{eqnarray*}\nNext, we define $E_{k}^{\\prime}$ and $E_{k}^{\\perp}$. The matrix\n$E_{k}^{\\prime}\\in\\mathbb{R}^{n\\times k}$ has the $k$ columns $e_{n-m+1}$,\n$e_{n}$, $e_{n-1}$, ..., $e_{n-k+2}$, while the matrix $E_{k}^{\\perp}\\in\\mathbb{R}^{n\\times(n-k)}$\ncontains all the other columns in the $n\\times n$ identity matrix.\nThe columns of $E_{k}^{\\prime}$ and $E_{k}^{\\perp}$ are chosen from\nthe $n\\times n$ identity matrix from the index sets $I_{k}$ and\n$I_{k}^{\\perp}$ respectively.\n\nWe will need to analyze the eigenvalues of $(V_{k}^{\\perp})^{T}(A\\pm\\delta I)V_{k}^{\\perp}$\nin the proof of Lemma \\ref{lem:on-S-prime}, where $|V_{k}^{\\perp}-E_{k}^{\\perp}|$\nis small. 
Note that the matrix $(E_{k}^{\\perp})^{T}(A\\pm\\delta I)E_{k}^{\\perp}$\nis a principal minor of $A\\pm\\delta I$, and its eigenvalues are the\neigenvalues of $A\\pm\\delta I$ chosen according to the index set $I_{k}^{\\perp}$.\nFurthermore, \\begin{eqnarray*}\n & & |(V_{k}^{\\perp})^{T}(A\\pm\\delta I)V_{k}^{\\perp}-(E_{k}^{\\perp})^{T}(A\\pm\\delta I)E_{k}^{\\perp}|\\\\\n & \\leq & |(V_{k}^{\\perp})^{T}(A\\pm\\delta I)V_{k}^{\\perp}-(V_{k}^{\\perp})^{T}(A\\pm\\delta I)E_{k}^{\\perp}|\\\\\n & & \\quad+|(V_{k}^{\\perp})^{T}(A\\pm\\delta I)E_{k}^{\\perp}-(E_{k}^{\\perp})^{T}(A\\pm\\delta I)E_{k}^{\\perp}|\\\\\n & \\leq & |(V_{k}^{\\perp})^{T}(A\\pm\\delta I)||V_{k}^{\\perp}-E_{k}^{\\perp}|\\\\\n & & \\quad+|(V_{k}^{\\perp})^{T}-(E_{k}^{\\perp})^{T}||(A\\pm\\delta I)E_{k}^{\\perp}|.\\end{eqnarray*}\nIt is clear that as $|V_{k}^{\\perp}-E_{k}^{\\perp}|\\to0$, $|(V_{k}^{\\perp})^{T}(A\\pm\\delta I)V_{k}^{\\perp}-(E_{k}^{\\perp})^{T}(A\\pm\\delta I)E_{k}^{\\perp}|\\to0$.\nThe eigenvalues of a matrix vary continuously with respect to the\nentries when the eigenvalues are distinct, so we shall let $\\hat{a}_{i}$\ndenote the eigenvalue of $(V_{k}^{\\perp})^{T}(A+\\delta I)V_{k}^{\\perp}$\nthat is closest to $a_{i}$, and $\\tilde{a}_{i}$ denote the eigenvalue\nof $(V_{k}^{\\perp})^{T}(A-\\delta I)V_{k}^{\\perp}$ that is closest\nto $a_{i}$. \n\\begin{lem}\n(Estimates of eigenvectors to negative eigenvalues) \\label{lem:on-S-prime}Let\n$h:\\mathbb{R}^{n}\\to\\mathbb{R}$. Given a fixed $l<0$ sufficiently\nclose to $0$, let $p$ be the closest point to $\\bar{z}$ in the\nset $\\mbox{\\rm lev}_{\\leq l}h\\cap S_{\\bar{z},V_{k}^{\\perp}}$, where $S_{\\bar{z},V_{k}^{\\perp}}:=\\{\\bar{z}+V_{k}^{\\perp}w\\mid w\\in\\mathbb{R}^{n-k}\\}$\nand $V_{k}^{\\perp}\\in\\mathbb{R}^{n\\times(n-k)}$. 
Then for all $\\epsilon>0$,\nthere exists $\\delta>0$ such that if \n\\begin{enumerate}\n\\item $h:\\mathbb{R}^{n}\\rightarrow\\mathbb{R}$ satisfies Assumption \\ref{ass:condns-on-model},\n\\item $|\\bar{z}|<\\delta$ and\n\\item $|V_{k}^{\\perp}-E_{k}^{\\perp}|<\\delta$, where $V_{k}^{\\perp}$ has\northonormal columns,\n\\end{enumerate}\nthen $|\\mbox{\\rm unit}(p-\\bar{z})-e_{n-k+1}|<\\epsilon$. As a consequence,\n$|V_{m}^{\\perp}-E_{m}^{\\perp}|\\to0$ as $\\delta\\to0$.\\end{lem}\n\\begin{proof}\nThe first step is to find an upper bound on the distance between $\\bar{z}$\nand $p$. The upper bound is obtained from looking at the closest\ndistance between $\\bar{z}$ and $\\mbox{\\rm lev}_{\\leq l}h_{\\max}\\cap S_{\\bar{z},V_{k}^{\\perp}}$.\n\nWe look at the function $h_{\\bar{z},V_{k}^{\\perp},\\max}:\\mathbb{R}^{n-k}\\rightarrow\\mathbb{R}$\ndefined by $h_{\\bar{z},V_{k}^{\\perp},\\max}(w):=h_{\\max}(\\bar{z}+V_{k}^{\\perp}w)$.\nWe have\\begin{eqnarray*}\nh_{\\bar{z},V_{k}^{\\perp},\\max}(w) & = & h_{\\max}(\\bar{z}+V_{k}^{\\perp}w)\\\\\n & = & \\frac{1}{2}(\\bar{z}+V_{k}^{\\perp}w)^{T}(A+\\delta I)(\\bar{z}+V_{k}^{\\perp}w)\\\\\n\\Rightarrow\\nabla h_{\\bar{z},V_{k}^{\\perp},\\max}(w) & = & (V_{k}^{\\perp})^{T}(A+\\delta I)\\bar{z}+(V_{k}^{\\perp})^{T}(A+\\delta I)V_{k}^{\\perp}w.\\end{eqnarray*}\nThe critical point of $h_{\\bar{z},V_{k}^{\\perp},\\max}$ is thus $\\bar{w}_{\\max}:=-[(V_{k}^{\\perp})^{T}(A+\\delta I)V_{k}^{\\perp}]^{-1}(V_{k}^{\\perp})^{T}(A+\\delta I)\\bar{z}$.\nThe critical value corresponding to this is $h_{\\bar{z},V_{k}^{\\perp},\\max}(\\bar{w}_{\\max})$.\nAn upper bound for this critical value is $\\frac{1}{2}|\\bar{z}+V_{k}^{\\perp}\\bar{w}|\\left|A+\\delta I\\right||\\bar{z}+V_{k}^{\\perp}\\bar{w}|$,\nwhich is in turn bounded by:\\begin{eqnarray*}\n & & \\frac{1}{2}|\\bar{z}+V_{k}^{\\perp}\\bar{w}||A+\\delta I||\\bar{z}+V_{k}^{\\perp}\\bar{w}|\\\\\n & = & \\frac{1}{2}\\left|A+\\delta 
I\\right||\\bar{z}+V_{k}^{\\perp}\\bar{w}|^{2}\\\\\n & \\leq & \\frac{1}{2}\\left|A+\\delta I\\right|\\left(1+\\left|[(V_{k}^{\\perp})^{T}(A+\\delta I)V_{k}^{\\perp}]^{-1}\\right||(V_{k}^{\\perp})^{T}|\\left|A+\\delta I\\right|\\right)^{2}|\\bar{z}|^{2}.\\\\\n & & \\mbox{(following calculations similar to that of \\eqref{eq:bdd-below-h-subspace}).}\\end{eqnarray*}\nThen an upper bound of the distance $d(\\bar{z},\\mbox{\\rm lev}_{\\leq l}h_{\\max}\\cap S_{\\bar{z},V_{k}^{\\perp}})$\ncan be calculated by:\n\n\\begin{eqnarray*}\n & & d(\\bar{z},S_{\\bar{z},V_{k}^{\\perp}}\\cap\\mbox{\\rm lev}_{\\leq l}h_{\\max})\\\\\n & \\leq & \\left|\\bar{z}-\\left(\\bar{z}+V_{k}^{\\perp}\\bar{w}_{\\max}\\right)\\right|+d(\\bar{z}+V_{k}^{\\perp}\\bar{w}_{\\max},S_{\\bar{z},V_{k}^{\\perp}}\\cap\\mbox{\\rm lev}_{\\leq l}h_{\\max})\\\\\n & \\leq & \\underbrace{|\\bar{w}_{\\max}|+\\sqrt{\\frac{l-\\frac{1}{2}\\left|A+\\delta I\\right|\\left(1+\\left|[(V_{k}^{\\perp})^{T}(A+\\delta I)V_{k}^{\\perp}]^{-1}\\right||(V_{k}^{\\perp})^{T}|\\left|A+\\delta I\\right|\\right)^{2}|\\bar{z}|^{2}}{\\hat{a}_{n-k+1}+\\delta}}.}_{\\beta}\\end{eqnarray*}\n\n\nThe extra term $-\\frac{1}{2}\\left|A+\\delta I\\right|\\cdots|\\bar{z}|^{2}$\ncompensates for the fact that the critical value of $h_{\\bar{z},V_{k}^{\\perp},\\max}$\nis not necessarily zero. To simplify notation, let $\\beta$ be the\nright hand side of the above formula as marked.\n\nWe now figure the possible intersection between $\\mathbb{B}(\\bar{z},\\beta)$\nand $S_{\\bar{z},V_{k}^{\\perp}}\\cap\\mbox{\\rm lev}_{\\leq l}h$. 
Again, since $S_{\\bar{z},V_{k}^{\\perp}}\\cap\\mbox{\\rm lev}_{\\leq l}h\\subset S_{\\bar{z},V_{k}^{\\perp}}\\cap\\mbox{\\rm lev}_{\\leq l}h_{\\min}$,\nwe look at the intersection of $\\mathbb{B}(\\bar{z},\\beta)$ and $S_{\\bar{z},V_{k}^{\\perp}}\\cap\\mbox{\\rm lev}_{\\leq l}h_{\\min}$.\nWe find the critical point of $h_{\\bar{z},V_{k}^{\\perp},\\min}:\\mathbb{R}^{n-k}\\rightarrow\\mathbb{R}$\ndefined by $h_{\\bar{z},V_{k}^{\\perp},\\min}(w):=\\frac{1}{2}(\\bar{z}+V_{k}^{\\perp}w)^{T}(A-\\delta I)(\\bar{z}+V_{k}^{\\perp}w)$.\nThe gradient of $h_{\\bar{z},V_{k}^{\\perp},\\min}$ can be found to\nbe \\[\n\\nabla h_{\\bar{z},V_{k}^{\\perp},\\min}(w)=(V_{k}^{\\perp})^{T}(A-\\delta I)\\bar{z}+(V_{k}^{\\perp})^{T}(A-\\delta I)V_{k}^{\\perp}w.\\]\nOnce again, the critical point is $\\bar{w}_{\\min}=[(V_{k}^{\\perp})^{T}(A-\\delta I)V_{k}^{\\perp}]^{-1}(V_{k}^{\\perp})^{T}(A-\\delta I)\\bar{z}$.\nSo $\\mathbb{B}(\\bar{z},\\beta)\\subset\\mathbb{B}(\\bar{z}+V_{k}^{\\perp}\\bar{w}_{\\min},\\beta+|\\bar{w}_{\\min}|)$.\n\nConsider $p\\in\\mathbb{B}(\\bar{z}+V_{k}^{\\perp}\\bar{w}_{\\min},\\beta+|\\bar{w}_{\\min}|)\\cap(S_{\\bar{z},V_{k}^{\\perp}}\\cap\\mbox{\\rm lev}_{\\leq l}h_{\\min})$.\nLet us introduce a change of coordinates such that $p=\\sum_{i\\in I_{k}^{\\perp}}\\tilde{p}_{i}\\tilde{v}_{i}+\\bar{w}_{\\min}$,\nwhere $\\tilde{v}_{i}\\in\\mathbb{R}^{n-k}$ correspond to the eigenvectors\nof $(V_{k}^{\\perp})^{T}(A-\\delta I)V_{k}^{\\perp}$ (in turn corresponding\nto the eigenvalues $\\tilde{a}_{i}$) and $\\tilde{p}_{i}\\in\\mathbb{R}$\nare the multipliers. 
Then the condition $\\bar{z}+V_{k}^{\\perp}p\\in\\mathbb{B}(\\bar{z}+V_{k}^{\\perp}\\bar{w}_{\\min},\\beta+|\\bar{w}_{\\min}|)$\nand $\\bar{z}+V_{k}^{\\perp}p\\in S_{\\bar{z},V_{k}^{\\perp}}\\cap\\mbox{\\rm lev}_{\\leq l}h_{\\min}$\ncan be represented as the following constraints respectively:\\begin{eqnarray}\n\\sum_{i\\in I_{k}^{\\perp}}\\tilde{p}_{i}^{2} & \\leq & (\\beta+|\\bar{w}_{\\min}|)^{2},\\nonumber \\\\\n\\sum_{i\\in I_{k}^{\\perp}}\\tilde{a}_{i}\\tilde{p}_{i}^{2} & \\leq & l+\\frac{1}{2}(\\bar{z}+V_{k}^{\\perp}\\bar{w}_{\\min})^{T}(A-\\delta I)(\\bar{z}+V_{k}^{\\perp}\\bar{w}_{\\min}).\\label{eq:linear-constraints}\\end{eqnarray}\nAs $\\delta\\rightarrow0$, the only admissible solution is $\\tilde{p}_{n-k+1}=\\sqrt{\\frac{l}{\\tilde{a}_{n-k+1}}}$\nand the rest of the $\\tilde{p}_{i}$'s are zero. The above constraints\nare linear in $\\tilde{p}_{i}^{2}$. We consider the minimum possible\nvalue of $\\frac{\\tilde{p}_{n-k+1}}{\\sqrt{\\sum\\tilde{p}_{i}^{2}}}=\\sqrt{\\frac{\\tilde{p}_{n-k+1}^{2}}{\\sum\\tilde{p}_{i}^{2}}}$,\nwhich is the dot product between the unit vectors in the direction\nof $\\tilde{v}_{n-k+1}$ and $p-(\\bar{z}+V_{k}^{\\perp}\\bar{w}_{\\min})$.\nThis is equivalent to the linear fractional program in $\\tilde{p}_{i}^{2}$\nof minimizing $\\frac{\\tilde{p}_{n-k+1}^{2}}{\\sum\\tilde{p}_{i}^{2}}$\nsubject to the constraints in \\eqref{eq:linear-constraints}.\n\nThis linear fractional program can be transformed into a linear program\nby $q=\\frac{1}{\\sum\\tilde{p}_{i}^{2}}$ and $q_{i}=\\frac{\\tilde{p}_{i}^{2}}{\\sum\\tilde{p}_{i}^{2}}$,\nwhich gives:\\begin{eqnarray}\n\\min & q_{n-k+1}\\nonumber \\\\\n\\mbox{s.t.} & q & \\geq\\frac{1}{(\\beta+|\\bar{w}_{\\min}|)^{2}},\\label{eq:linear-constraint-2.1}\\\\\n & \\sum_{i\\in I_{k}^{\\perp}}\\tilde{a}_{i}q_{i} & \\leq\\Big[l+\\frac{1}{2}(\\bar{z}+V_{k}^{\\perp}\\bar{w}_{\\min})^{T}(A-\\delta I)(\\bar{z}+V_{k}^{\\perp}\\bar{w}_{\\min})\\Big]q,\\label{eq:linear-constraint-2.2}\\\\\n & \\sum_{i\\in 
I_{k}^{\\perp}}q_{i} & =1,\\label{eq:linear-constraint-2.3}\\\\\n & q_{i} & \\geq0\\mbox{ for all }i\\in I_{k}^{\\perp}.\\nonumber \\end{eqnarray}\nThe constraints of the linear program above give \\begin{eqnarray*}\n\\sum_{i\\in I_{k}^{\\perp}}-\\tilde{a}_{i}q_{i}+\\tilde{a}_{n-k}\\sum_{i\\in I_{k}^{\\perp}}q_{i} & \\geq & -\\Big[l+\\frac{1}{2}(\\bar{z}+V_{k}^{\\perp}\\bar{w}_{\\min})^{T}(A-\\delta I)(\\bar{z}+V_{k}^{\\perp}\\bar{w}_{\\min})\\Big]q+\\tilde{a}_{n-k}\\\\\n\\Rightarrow\\sum_{i\\in I_{k}^{\\perp}}(-\\tilde{a}_{i}+\\tilde{a}_{n-k})q_{i} & \\geq & -\\Big[l+\\frac{1}{2}(\\bar{z}+V_{k}^{\\perp}\\bar{w}_{\\min})^{T}(A-\\delta I)(\\bar{z}+V_{k}^{\\perp}\\bar{w}_{\\min})\\Big]q+\\tilde{a}_{n-k}\\\\\n & \\geq & -\\frac{l+\\frac{1}{2}(\\bar{z}+V_{k}^{\\perp}\\bar{w}_{\\min})^{T}(A-\\delta I)(\\bar{z}+V_{k}^{\\perp}\\bar{w}_{\\min})}{(\\beta+|\\bar{w}_{\\min}|)^{2}}+\\tilde{a}_{n-k}.\\end{eqnarray*}\nSince only $-\\tilde{a}_{n-k+1}+\\tilde{a}_{n-k}$ is positive and the\nother $-\\tilde{a}_{i}+\\tilde{a}_{n-k}$ are nonpositive, we have\\begin{eqnarray*}\n & & (-\\tilde{a}_{n-k+1}+\\tilde{a}_{n-k})q_{n-k+1}\\\\\n & \\geq & \\sum_{i\\in I_{k}^{\\perp}}(-\\tilde{a}_{i}+\\tilde{a}_{n-k})q_{i}\\\\\n & \\geq & -\\frac{l+\\frac{1}{2}(\\bar{z}+V_{k}^{\\perp}\\bar{w}_{\\min})^{T}(A-\\delta I)(\\bar{z}+V_{k}^{\\perp}\\bar{w}_{\\min})}{(\\beta+|\\bar{w}_{\\min}|)^{2}}+\\tilde{a}_{n-k}\\end{eqnarray*}\n\\[\n\\Rightarrow q_{n-k+1}\\geq\\frac{1}{-\\tilde{a}_{n-k+1}+\\tilde{a}_{n-k}}\\left(-\\frac{l+\\frac{1}{2}(\\bar{z}+V_{k}^{\\perp}\\bar{w}_{\\min})^{T}(A-\\delta I)(\\bar{z}+V_{k}^{\\perp}\\bar{w}_{\\min})}{(\\beta+|\\bar{w}_{\\min}|)^{2}}+\\tilde{a}_{n-k}\\right).\\]\nThe limit of the right hand side goes to $1$ as $\\delta\\rightarrow0$,\nso this means that $p-\\bar{z}$ is close to the direction of the eigenvector\ncorresponding to the eigenvalue $\\tilde{a}_{n-k+1}$ of $(V_{k}^{\\perp})^{T}AV_{k}^{\\perp}$,\nwhich in turn converges to $e_{n-k+1}$. 
The proof of this lemma is\ncomplete.\n\nThe conclusion that $|V_{m}^{\\perp}-E_{m}^{\\perp}|\\to0$ as $\\delta\\to0$\nfollows from the first part of this lemma and Lemma \\ref{lem:first-matrix-approx}.\n\\end{proof}\nWith these lemmas set up, we are now ready to prove the fast local\nconvergence of Algorithm \\ref{alg:fast-local-method} to the critical\npoint and critical value. We recall that \\emph{Q-linear convergence}\nof a sequence of positive numbers $\\{\\alpha_{i}\\}_{i=1}^{\\infty}$\nconverging to zero is defined by $\\limsup_{i\\to\\infty}\\frac{\\alpha_{i+1}}{\\alpha_{i}}<1$,\nwhile \\emph{Q-superlinear convergence} is defined by $\\lim_{i\\to\\infty}\\frac{\\alpha_{i+1}}{\\alpha_{i}}=0$.\nNext, \\emph{R-linear convergence} and \\emph{R-superlinear convergence}\nof a sequence are defined by being bounded by a Q-linearly convergent\nsequence and a Q-superlinearly convergent sequence respectively.\n\\begin{thm}\n(Fast convergence of Algorithm \\ref{alg:fast-local-method}) \\label{thm:Wrap-up}Suppose\nthat $f:\\mathbb{R}^{n}\\to\\mathbb{R}$ is $\\mathcal{C}^{2}$ and $\\mathbf{0}$\nis a nondegenerate critical point of $f$ of Morse index $m$, and\n$f(\\mathbf{0})=0$. \n\nThere is some $R>0$ such that if $0<|z_{1}|<R$, then the values $|l_{i}|$\ngenerated by Algorithm \\ref{alg:fast-local-method} converge Q-superlinearly\nto zero, and the corresponding iterates converge to the critical point $\\mathbf{0}$.\n\\end{thm}\n\\begin{proof}\nFor any $\\delta>0$, we can find\n$R>0$, such that \\[\n|f(x)-\\frac{1}{2}x^{T}Ax|<\\frac{1}{2}\\delta|x|^{2}\\mbox{ for all }x\\in\\mathbb{B}(\\mathbf{0},R).\\]\nThe function $f_{R}:\\mathbb{R}^{n}\\to\\mathbb{R}$ defined by $f_{R}(x):=\\frac{1}{R^{2}}f(Rx)$\nsatisfies Assumption \\ref{ass:condns-on-model} with $A:=\\nabla^{2}f_{R}(\\mathbf{0})=\\nabla^{2}f(\\mathbf{0})$. \n\nWe want to show that if $\\delta>0$ is sufficiently small, then for\nall $l<0$ sufficiently small, a step in Algorithm \\ref{alg:fast-local-method}\ngives good convergence to the critical value. 
Given an iterate $l_{i}$,\nthe next iterate $l_{i+1}$ is \\[\n\\min_{x\\in S_{z_{i},V_{m}^{\\perp}}\\cap\\mathbb{B}(\\mathbf{0},R)}f(x)=\\min_{x\\in S_{\\frac{1}{R}z_{i},V_{m}^{\\perp}}\\cap\\mathbb{B}}R^{2}f_{R}(x),\\]\nwhere $V_{m}^{\\perp}$, which approximates the first $n-m$ eigenvectors,\nis defined before Lemma \\ref{lem:on-S-prime}. \n\nWe seek to find $\\frac{|l_{i+1}|}{|l_{i}|}$. The value of $l_{i+1}$\ndepends on how well the last $m$ eigenvectors are approximated, and\nhow well the critical point is estimated, which in turn depends on\n$\\delta$. The ratio $\\frac{|l_{i+1}|}{|l_{i}|}$ is bounded from\nabove by\\[\n-\\frac{1}{2}|A-\\delta I|\\left(1+\\left|[(V_{m}^{\\perp})^{T}(A-\\delta I)V_{m}^{\\perp}]^{-1}\\right||(V_{m}^{\\perp})^{T}||A-\\delta I|\\right)^{2}|z_{i}|^{2}\/l_{i},\\]\nwhich converges to $0$ as $\\delta\\to0$ by Lemmas \\ref{lem:min-maximizer-ppties},\n\\ref{lem:first-matrix-approx}, \\ref{lem:bdd-on-l-(i+1)} and \\ref{lem:on-S-prime}. \n\nThe conclusion in the second part of the theorem follows a similar\nanalysis.\n\\end{proof}\n\n\n\n\\section{Conclusion and conjectures}\n\nIn this paper, we present a strategy to find saddle points of general\nMorse index, extending the algorithms for finding critical points\nof mountain pass type as was done in \\cite{LP08}. Algorithms \\ref{alg:fast-local-method}\nand \\ref{alg:find-S-prime} may not be easily implementable, especially\nwhen $m$ is large. However, Algorithm \\ref{alg:find-S-prime} can\nbe performed only as needed in a practical implementation. It is hoped\nthat this strategy can augment current methods for finding saddle\npoints, and can serve as a foundation for further research on effective\nmethods of finding saddle points.\n\nHere are some conjectures:\n\\begin{itemize}\n\\item How do the algorithms presented fare in real problems? 
Are there difficulties\nin the infinite dimensional case when implementing Algorithm \\ref{alg:fast-local-method}?\n\\item Are there ways to integrate Algorithms \\ref{alg:fast-local-method}\nand \\ref{alg:find-S-prime} to give a better algorithm?\n\\item Are there better algorithms than Algorithm \\ref{alg:find-S-prime}\nto approximate $S_{i}$?\n\\item Can the uniqueness assumption in Theorem \\ref{thm:Wrap-up} be lifted?\nIf not, how does it affect the design of algorithms?\\end{itemize}\n\n\\begin{acknowledgement*}\nI thank the Fields Institute in Toronto, where much of this paper\nwas written. They have provided a wonderful environment for working\non this paper.\\end{acknowledgement*}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\nFractional calculus appeared in 1695, when Leibniz described the derivative \nof order $\\alpha=1\/2$ \\cite{OS,SKM,Ross}.\nDerivatives and integrals of noninteger order were studied by \nLeibniz, Liouville, Grunwald, Letnikov and Riemann.\nMany books have now been written about fractional calculus and \nfractional differential equations\n\\cite{OS,SKM,MR,Podlubny,KST,K,N}.\nDerivatives and integrals of noninteger order and \nfractional integro-differential equations \nhave found many applications in recent studies in physics\n(see, e.g., \\cite{Zaslavsky1,GM,WBG,H} and \\cite{Zaslavsky2,MS,MK1,MK2}).\n\nIn quantum mechanics, \nobservables are given by self-adjoint operators.\nThe dynamical description of a quantum system is given by superoperators.\nA superoperator is a map that assigns one operator some other operator.\n\nThe motion of a system is naturally described in terms of the \ninfinitesimal change of the system.\nThe equation for a quantum observable is called the Heisenberg equation. 
\nFor Hamiltonian quantum systems, the infinitesimal superoperator \nis defined by some form of derivation.\nA derivation is a linear map ${\\cal L}$ \nthat satisfies the Leibnitz rule\n${\\cal L}(AB)=({\\cal L}A)B+ A({\\cal L}B)$ for any operators $A$ and $B$.\nA fractional derivative can be defined as the fractional power\nof the derivative (see, e.g., \\cite{IJM}).\nIt is known that the infinitesimal generator ${\\cal L}=(1\/i\\hbar)[H, \\ . \\ ]$, \nwhich is used for Hamiltonian systems, is a derivation of quantum observables.\nIn \\cite{Heis}, we regarded a fractional power ${\\cal L}^{\\alpha}$ \nof the derivative operator ${\\cal L}=(1\/i\\hbar)[H, \\ . \\ ]$\nas a fractional derivative on a set of observables. \nAs a result, we obtain a fractional generalization of the Heisenberg equation,\nwhich allows generalizing the notion of Hamiltonian quantum systems. \nWe note that a fractional generalization of classical Hamiltonian systems \nwas suggested in \\cite{FracHam} (also see \\cite{JPA2006}).\nIn the general case, quantum systems are non-Hamiltonian\nand ${\\cal L}$ is not a derivation operator.\nFor a wide class of quantum systems, the infinitesimal generator ${\\cal L}$ is \ncompletely dissipative \\cite{Kossakowski,Dav,IngKos,kn3}.\nTherefore, it is interesting to consider a fractional generalization\nof the equation of motion for non-Hamiltonian quantum systems using\na fractional power of a completely dissipative superoperator.\n\nThe most general change of state of a non-Hamiltonian quantum system\nis a quantum operation \\cite{Kr1,Kr2,Kr3,Kr4,Schu,JPA}.\nA quantum operation for a quantum system can be described starting\nfrom a unitary evolution of some closed Hamiltonian system if\nthe quantum system is a part of the closed system \\cite{ALV,Weiss}.\nBut situations can arise where it is difficult or impossible \nto find a Hamiltonian system that includes the given quantum system.\nAs a result, the theory of non-Hamiltonian quantum systems \ncan be 
considered a fundamental generalization \nof the quantum mechanics of Hamiltonian systems \\cite{Kossakowski,Dav,IngKos,kn3}.\nThe quantum operations that describe the dynamics of non-Hamiltonian\nsystems can be regarded as real completely positive\ntrace-preserving superoperators on some operator space.\nThese superoperators form a completely positive semigroup.\nThe infinitesimal generator of this semigroup is completely dissipative. \nThe problem of non-Hamiltonian dynamics \nis to obtain an explicit form for the infinitesimal generator, \nwhich is in turn connected with the problem of determining \nthe most general explicit form of this superoperator.\nThis problem was investigated in \\cite{K1,K2,Lind1}. \nHere, we consider superoperators that are fractional powers of \ncompletely dissipative superoperators.\nWe prove that the suggested superoperators are infinitesimal generators \nof completely positive semigroups.\nThe quantum Markovian equations with a completely dissipative superoperator\nare the most general form of the Markovian master equation \ndescribing the nonunitary evolution of a density operator\nthat is trace preserving and completely positive. \nWe consider a fractional generalization of the quantum Markovian equation,\nwhich is solved for the harmonic oscillator with friction.\nWe can assume that other solutions and properties described in\n\\cite{Lind2,SS,ISSSS,ISS,N0,N1,N2,N3,TN2} \ncan also be considered for fractional generalizations of the quantum Markovian equation\nand the Gorini-Kossakowski-Sudarshan equation \\cite{K1,K2}.\n\nA fractional power of an infinitesimal generator can be considered \na parameter describing a measure of \"screening\" of the environment. \nUsing the interaction representation of the quantum Markovian equation,\nwe consider a fractional power $\\alpha$ \nof the non-Hamiltonian part of the infinitesimal generator.\nWe obtain the Heisenberg equation for Hamiltonian systems\nin the limit as $\\alpha \\rightarrow 0$. 
\nIn the case $\\alpha=1$, we have the usual quantum Markovian equation.\nAs a result, we can distinguish the following cases: \n(1) absence of the environmental influence ($\\alpha=0$), \n(2) complete environmental influence ($\\alpha=1$), and\n(3) powerlike screening of the environmental influence ($0<\\alpha<1$). \nThe physical interpretation of the fractional quantum Markovian equation \ncan be connected with the existence of \na powerlike \"screening\" of the environmental influence.\nQuantum computations by quantum operations with mixed states \n(see, e.g., \\cite{JPA}) can be controlled by this parameter.\nWe assume that there exist stationary states of open quantum systems \n\\cite{Dav1,Lind2,Spohn,Spohn2,AH,ISS,TN2} \nthat depend on the fractional parameter.\nWe note that it is possible to consider quantum dynamics with \na low fractionality by a generalization of the\nmethod proposed in \\cite{TZ2} (also see \\cite{TP,TG}).\n\n\nIn Section 2, we briefly review superoperators\non an operator Hilbert space and quantum operations and introduce the notation.\nIn Section 3, we consider the fractional power of a superoperator.\nIn Section 4, we suggest a fractional generalization of the quantum Markovian equation.\nIn Section 5, we describe the properties of the fractional semigroup.\nIn Sections 6 and 7, we solve the fractional equations \nfor the quantum harmonic oscillator with and without friction.\n\n\n\n\n\n\\section{Superoperator and quantum operations}\n\n\n\n$ \\quad \\ $\nQuantum theories essentially consist of two structures: \na kinematic structure describing the initial \nstates and observables of the system,\nand a dynamical structure describing the change of \nthese states and observables with time.\nIn quantum mechanics, \nthe states and observables can be given by operators.\nThe dynamical description of the quantum system is given by \na superoperator, which is\na map from a set of operators into itself.\n\nLet ${\\cal M}$ be an operator 
space. \nWe let ${\\cal M}^{*}$ denote the space dual to ${\\cal M}$.\nHence, ${\\cal M}^{*}$ is the set of all \nlinear functionals on ${\\cal M}$. \nThe standard notations for an element of ${\\cal M}$ \nare $|B)$ and $B$.\nThe symbols $(A|$ and $\\omega$ denote the elements of ${\\cal M}^{*}$. \nBy the Riesz-Frechet theorem, any linear continuous functional $\\omega$\non an operator Hilbert space ${\\cal M}$ has the form\n$\\omega(B)=(A|B)$ for all $B \\in {\\cal M}$,\nwhere $|A)$ is an element in ${\\cal M}$.\nTherefore, the element $A$ can be considered not \nonly an element $|A)$ of ${\\cal M}$, but also \nan element $(A|$ of the dual space ${\\cal M}^{*}$. \nThe symbol $(A|B)$ for a value of \nthe functional $(A|$ on the operator $|B)$ \nis the graphical combination of the symbols $(A|$ and $|B)$. \\\\\n\n\n\\noindent {\\large Definition 1.} \n{\\it A linear superoperator is a map ${\\cal L}$ \nfrom an operator space ${\\cal M}$ into itself such that \nthe relation\n\\[ {\\cal L} (aA+bB) = a{\\cal L} (A) + b{\\cal L} (B) \\]\nis satisfied for all $A, B \\in D ({\\cal L}) \\subset {\\cal M} $, \nwhere $D({\\cal L})$ is the domain of ${\\cal L}$ \nand $a,b \\in \\mathbb{C}$. } \\\\\n\nA superoperator ${\\cal L}$ assigns to each operator $A \\in D ({\\cal L})$ \nthe operator ${\\cal L}(A)$. \\\\\n\n\\noindent {\\large Definition 2.} \n{\\it Let ${\\cal L}$ be a superoperator on ${\\cal M}$.\nAn adjoint superoperator of ${\\cal L}$ \nis a superoperator $\\Lambda=\\bar {\\cal L}$ on ${\\cal M}^{*}$ \nsuch that \n\\begin{equation} \\label{Lam}\n(\\Lambda (A) |B) = (A | {\\cal L} (B)) \\end{equation}\nfor all $B \\in D ({\\cal L}) \\subset {\\cal M}$ \nand $A \\in D(\\Lambda) \\subset {\\cal M}^{*}$.} \\\\\n\nLet ${\\cal M}$ be an operator Hilbert space and\n${\\cal L}$ be a superoperator on ${\\cal M}$. \nThen $(A|B)=Tr[A^{\\dagger} B]$, and equation (\\ref{Lam}) becomes\n\\[ Tr [(\\Lambda (A))^{\\dagger} B] =Tr [A^{\\dagger} {\\cal L} (B)] . 
\\]\n\nIf ${\\cal M}$ is an operator Hilbert space, \nthen by the Riesz-Frechet theorem, \n${\\cal M}$ and ${\\cal M}^{*}$ are isomorphic, and\nwe can define the self-adjoint superoperators. \\\\\n\n\\noindent\n{\\large Definition 3.} \n{\\it A self-adjoint superoperator is a superoperator ${\\cal L}$ \non a Hilbert operator space ${\\cal M}$ such that\n$({\\cal L} (A) |B) = (A | {\\cal L} (B))$ for all \n$A, B \\in D ({\\cal L}) \\subset {\\cal M}$\nand $D ({\\cal L}) =D (\\bar {\\cal L})$.} \\\\\n\n\nLet ${\\cal M}$ be a normed operator space.\nThe superoperator ${\\cal L}$ is said to be bounded if\n$\\|{\\cal L}(A)\\|_{\\cal M} \\le c \\|A\\|_{\\cal M}$ \nfor some constant $c$ and all $A \\in {\\cal M}$.\nThe value \n\\[ \\|{\\cal L}\\|=\\sup_{A \\ne 0} \n\\frac{\\|{\\cal L}(A)\\|_{\\cal M}}{\\|A\\|_{\\cal M}} \\]\nis called the norm of the superoperator ${\\cal L}$.\nIf ${\\cal M}$ is a normed space and \n${\\cal L}$ is a bounded superoperator, then \n$\\| \\bar {\\cal L} \\| = \\| {\\cal L} \\|$. \n\nIn quantum theory, the class of \nreal superoperators is the most important. \\\\\n\n\\noindent {\\large Definition 4.} \n{\\it Let ${\\cal M}$ be an operator space and \n$A^{\\dagger}$ be an adjoint operator of $A\\in {\\cal M}$.\nA real superoperator is a superoperator \n${\\cal L}$ on ${\\cal M}$ such that\n\\[ [{\\cal L} (A)]^{\\dagger} = {\\cal L} (A^{\\dagger}) \\] \nfor all $A \\in D ({\\cal L}) \\subset {\\cal M}$ \nand $A^{\\dagger}\\in D ({\\cal L})$.} \\\\\n\nIf ${\\cal L}$ is a real superoperator, then $\\Lambda=\\bar {\\cal L}$ is real.\nIf ${\\cal L}$ is a real superoperator\nand $A$ is a self-adjoint operator $A^{\\dagger}=A \\in D({\\cal L})$, \nthen the operator $B={\\cal L}(A)$ is self-adjoint.\nTherefore, superoperators from a set of quantum observables ${\\cal M}$ \ninto itself should be real. \nAll possible dynamics of quantum systems\nmust be described by a set of real superoperators. 
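These definitions become concrete in finite dimensions. The sketch below is our own illustration (not part of the original text): for a hypothetical superoperator ${\cal L}(A)=XAY$ it builds the matrix representation $Y^{T}\otimes X$ acting on vectorized operators, computes the superoperator norm induced by the Hilbert-Schmidt norm $\|A\|=(A|A)^{1\/2}$, and checks the adjoint relation $(\Lambda(A)|B)=(A|{\cal L}(B))$ of Definition 2.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
X, Y = rng.standard_normal((n, n)), rng.standard_normal((n, n))

# Hypothetical superoperator L(A) = X A Y; vec(X A Y) = (Y^T kron X) vec(A),
# so the matrix representation acts on column-vectorized operators.
L = lambda A: X @ A @ Y
Lmat = np.kron(Y.T, X)

# Superoperator norm with respect to the Hilbert-Schmidt norm
# = largest singular value of the matrix representation.
norm_L = np.linalg.norm(Lmat, 2)

# Adjoint superoperator: for L(A) = X A Y and (A|B) = Tr[A^† B],
# one finds Lambda(A) = X^† A Y^†.
Lam = lambda A: X.conj().T @ A @ Y.conj().T

inner = lambda A, B: np.trace(A.conj().T @ B)
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

lhs = inner(Lam(A), B)          # (Lambda(A)|B)
rhs = inner(A, L(B))            # (A|L(B))
bound_ok = np.linalg.norm(L(A)) <= norm_L * np.linalg.norm(A) + 1e-12
```

The boundedness inequality $\|{\cal L}(A)\|\le\|{\cal L}\|\,\|A\|$ then holds by construction of the induced norm.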
\\\\\n\n\\noindent {\\large Definition 5.} \n{\\it A nonnegative superoperator is a map \n${\\cal L}$ from ${\\cal M}$ into ${\\cal M}$, such that \n${\\cal L} (A^2) \\ge 0$\nfor all $A^2=A^{\\dagger}A \\in D ({\\cal L}) \\subset {\\cal M}$.\nA positive superoperator is a map \n${\\cal L}$ from ${\\cal M}$ into itself such that\n${\\cal L}$ is nonnegative and ${\\cal L}(A)=0$ \nif and only if $A=0$.} \\\\\n\n\nLet ${\\cal M}$ denote an operator algebra.\nA left superoperator corresponding to $A \\in {\\cal M}$ \nis a superoperator $L_A$ on ${\\cal M}$ such that $L_A C=AC$ \nfor all $C \\in {\\cal M}$. \nWe can think of $L_A$ as meaning left multiplication by $A$. \nA right superoperator corresponding to $A \\in {\\cal M}$ \nis a superoperator $R_A$ on ${\\cal M}$ such that \n$R_A C=CA$ for all $C \\in {\\cal M}$. \n\nThe most general state change of a quantum system\nis called a {\\it quantum operation} \\cite{Kr1,Kr2,Kr3,Kr4,Schu,JPA}.\nA quantum operation is described by a superoperator $\\hat{\\cal E}$\nthat is a map on a set of density operators.\nIf $\\rho$ is a density operator, then $\\hat{\\cal E}(\\rho)$\nshould also be a density operator.\nAny density operator $\\rho_t=\\rho(t)$ is\na self-adjoint ($\\rho^{\\dagger}_{t}=\\rho_{t}$),\npositive ($\\rho_{t}>0$) operator with unit trace ($ Tr\\rho_{t}=1$).\nTherefore, for a superoperator $\\hat{\\cal E}$\nto be a quantum operation, \nthe following conditions must be satisfied:\n\\begin{enumerate}\n\\item\nThe superoperator $\\hat{\\cal E}$ is a real superoperator, i.e.\n$\\Bigl(\\hat{\\cal E}(A)\\Bigr)^{\\dagger}=\\hat{\\cal E}(A^{\\dagger})$\nfor all $A$.\nThe real superoperator $\\hat{\\cal E}$ maps the self-adjoint operator\n$\\rho$ to the self-adjoint operator $\\hat{\\cal E}(\\rho)$:\n\\ $(\\hat{\\cal E}(\\rho))^{\\dagger}=\\hat{\\cal E}(\\rho)$.\n\\item\nThe superoperator $\\hat{\\cal E}$ is a positive superoperator,\ni.e. 
$\\hat{\\cal E}$ maps positive operators to positive operators:\n\\ $\\hat{\\cal E}(A^{2}) >0$ for all $A\\not=0$ or\n$\\hat{\\cal E}(\\rho)\\ge 0$.\n\\item\nThe superoperator $\\hat{\\cal E}$ is a trace-preserving map, i.e.\n$(I|\\hat{\\cal E}|\\rho)=(\\hat{\\cal E}^{\\dagger}(I)|\\rho)=1$\nor $\\hat{\\cal E}^{\\dagger}(I)=I$.\n\\end{enumerate}\n\nMoreover, we assume that the superoperator $\\hat{\\cal E}$\nis not only positive but also completely positive \\cite{Arveson}.\nThe superoperator $\\hat{\\cal E}$ is a {\\it completely positive}\nmap from an operator space ${\\cal M}$ into itself if \n\\[ \\sum^{n}_{k=1} \\sum^{n}_{l=1} B^{\\dagger}_{k}\n\\hat{\\cal E}(A^{\\dagger}_kA_l)B_l \\ge 0 \\]\nfor all operators $A_k$, $B_k \\in {\\cal M}$ and any integer $n$. \\\\\n\nLet the superoperator $\\hat{\\cal E}$ be a convex linear map\non the set of density operators, i.e.\n\\[ \\hat{\\cal E}\\Bigl(\\sum_{s} \\lambda_{s} \\rho_{s}\\Bigr)=\n\\sum_{s} \\lambda_{s} \\hat{\\cal E}(\\rho_{s}), \\]\nwhere $0<\\lambda_{s}<1$ for all $s$, and $\\sum_{s} \\lambda_{s}=1$.\nAny convex linear map of density operators\ncan be uniquely extended to a {\\it linear} map on self-adjoint operators.\nWe note that any linear completely positive superoperator can be represented by\n\\[ \\hat{\\cal E}=\\sum^{m}_{k=1} \\hat L_{A_{k}} \\hat R_{A^{\\dagger}_{k}}: \\quad\n\\hat{\\cal E}(\\rho)=\\sum^{m}_{k=1} A_k \\rho A^{\\dagger}_k. \\]\nIf this superoperator is trace-preserving, then\n\\[ \\sum^{m}_{k=1} A^{\\dagger}_{k} A_{k}=I. \\]\n\nBecause all processes occur in time, it is natural to consider \nquantum operations $\\hat {\\cal E}(t,t_{0})$\nthat depend on time. 
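The representation $\hat{\cal E}(\rho)=\sum_k A_k\rho A^{\dagger}_k$ with $\sum_k A^{\dagger}_k A_k=I$ can be checked numerically. The sketch below is our own illustration with hypothetical Kraus operators of amplitude-damping type (the parameter $g$ and the test density operator are assumptions, not from the text); it verifies that the map is real, positive, and trace-preserving on a density operator.

```python
import numpy as np

g = 0.3  # hypothetical damping parameter, 0 <= g <= 1
A1 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - g)]])
A2 = np.array([[0.0, np.sqrt(g)], [0.0, 0.0]])
Ks = [A1, A2]

# Completeness sum_k A_k^† A_k = I, which guarantees trace preservation.
S = sum(K.conj().T @ K for K in Ks)

def E(rho):
    # Quantum operation in the form E(rho) = sum_k A_k rho A_k^†.
    return sum(K @ rho @ K.conj().T for K in Ks)

# A density operator: self-adjoint, positive, unit trace.
rho = np.array([[0.6, 0.2 - 0.1j], [0.2 + 0.1j, 0.4]])
out = E(rho)
```

The output `out` is again self-adjoint, positive, and of unit trace, as required of a quantum operation.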
\nLet the linear superoperators $\\hat {\\cal E}(t,t_{0})$ form a completely\npositive quantum semigroup \\cite{AL} such that \n\\begin{equation} \\label{qo1}\n\\frac{d}{dt}\\hat{\\cal E}(t,t_{0})= \\hat\\Lambda_t \\hat{\\cal E}(t,t_{0}), \\end{equation}\nwhere $\\hat \\Lambda_t$ is an infinitesimal generator of \nthe semigroup \\cite{Lind1,AL,kn3}. \nThe evolution of a density operator $\\rho$ is described by\n\\[ \\hat{\\cal E}(t,t_{0}) \\rho(t_0)= \\rho(t) . \\]\nWe consider quantum operations $\\hat{\\cal E}(t,t_0)$ \nwith an infinitesimal generator $\\hat \\Lambda$ such that \nthe adjoint superoperator $ {\\cal L}$ is completely dissipative, i.e.\n\\[ {\\cal L} (A_{k}A_{l})-\n{\\cal L}(A_{k}) A_{l}-A_{k} {\\cal L}(A_{l}) \\ge 0, \\]\nfor all $A_1,...,A_n \\in D({\\cal L})$ such that $A_kA_l \\in D({\\cal L})$.\nThe superoperator ${\\cal L}$ describes the dynamics of observables \nof a non-Hamiltonian quantum system.\nThe completely dissipative superoperators are infinitesimal generators \nof completely positive semigroups \n$\\{\\Phi_t | \\ t>0\\}$ that are adjoint to $\\{ \\hat{\\cal E}_t | \\ t>0\\}$, \nwhere $\\hat{\\cal E}_t= \\hat{\\cal E}(t,0)$. \n\n\n\n\n\\section{Fractional power of a superoperator}\n\n\n$ \\quad \\ $ \nLet ${\\cal L}$ be a closed linear superoperator\nwith an everywhere dense domain $D({\\cal L})$\nand a resolvent $R(z,{\\cal L})$ on the negative semiaxis, \nsatisfying the condition\n\\begin{equation} \\label{Rcond}\n\\| R(-z,{\\cal L}) \\| \\le M \/ z, \\quad (z>0, M>0) . \\end{equation}\nWe note that\n\\[ R(-z,{\\cal L})= (zL_I+{\\cal L})^{-1} . 
\\]\nThe superoperator\n\\begin{equation} \\label{LaA1}\n{\\cal L}^{\\alpha}=\\frac{\\sin \\pi \\alpha }{\\pi} \n\\int^{\\infty}_0 dz\\, z^{\\alpha-1} R(-z,{\\cal L}) \\, {\\cal L}\n\\end{equation}\nis defined on $D({\\cal L})$ for $0< \\alpha <1$ and \nis called a fractional power of the superoperator ${\\cal L}$ \\cite{HP,Yosida}.\nWe note that the superoperator ${\\cal L}^{\\alpha}$ admits a closure.\nIf a closed superoperator ${\\cal L}$ satisfies condition (\\ref{Rcond}), \nthen ${\\cal L}^{\\alpha} {\\cal L}^{\\beta}={\\cal L}^{\\alpha+\\beta}$ \nfor $\\alpha, \\beta>0$, and $\\alpha+\\beta<1$. \n\nLet ${\\cal L}$ be a closed generating superoperator of the semigroup \n$\\{\\Phi_{t} | \\ t \\ge 0\\}$.\nThen the fractional power ${\\cal L}^{\\alpha}$ of ${\\cal L}$ is given by\n\\[ {\\cal L}^{\\alpha}= \n\\frac{1}{\\Gamma(-\\alpha)} \\int^{\\infty}_0 dz\\, z^{-\\alpha-1} (\\Phi_z-L_I) , \\]\nwhich is called the {\\it Balakrishnan formula}.\n\nThe resolvent for the superoperator ${\\cal L}^{\\alpha}$ \ncan be found from the equation \n\\[ R(-z,{\\cal L}^{\\alpha})=\n(zL_I+ {\\cal L}^{\\alpha})^{-1}= \\]\n\\[ =\\frac{\\sin \\pi \\alpha }{\\pi} \\int^{\\infty}_0 dx\\, \n\\frac{x^{\\alpha}}{z^2+2 zx^{\\alpha} \\cos \\pi \\alpha +x^{2 \\alpha} } \n\\, R(-x,{\\cal L}) , \\]\ncalled {\\it Kato's formula}. \nIt follows from this formula that the inequality\n\\[ \\| R(-z, {\\cal L}^{\\alpha}) \\| \\le M \/ z, \\quad (z>0) , \\]\nis satisfied with the same constant $M$ as in inequality (\\ref{Rcond})\nfor the superoperator ${\\cal L}$. 
\nIt follows from the inequality\n\\[ \\|z R(-z,{\\cal L}) \\|=\\|z(zL_I+{\\cal L})^{-1}\\| \\le M \\]\nfor all $z>0$ that the superoperator $z(zL_I+{\\cal L})^{-1}$\nis uniformly bounded in every sector of the complex plane \ngiven by the relation $|\\arg z| \\le \\phi$ \nfor $\\phi$ not greater than some number $\\pi - \\psi$, ($0<\\psi<\\pi$).\nThen the superoperator $zR(-z, {\\cal L}^{\\alpha})$\nis uniformly bounded in every sector of the complex plane\nsuch that $|\\arg z| \\le \\phi$ for $\\phi<\\pi - \\alpha \\psi$.\n\nLet ${\\cal L}$ be a closed generating superoperator of \nthe semigroup $\\{\\Phi_{t} | \\ t \\ge 0\\}$.\nThen the superoperators\n\\begin{equation} \\label{BPf}\n\\Phi^{(\\alpha)}_t=\\int^{\\infty}_0 ds \\, f_{\\alpha}(t,s) \\, \\Phi_s , \n\\quad (t>0) , \\end{equation}\nform a semigroup such that\n${\\cal L}^{\\alpha}$ is an infinitesimal generator of $\\Phi^{(\\alpha)}_t$. \nEquation (\\ref{BPf}) is called the Bochner-Phillips formula. \n\nIn equation (\\ref{BPf}), we use the function\n\\begin{equation} \\label{fats}\nf_{\\alpha}(t,s)=\\frac{1}{2\\pi i} \\int^{a+i\\infty}_{a-i\\infty}\ndz \\, \\exp (sz-tz^{\\alpha}) ,\n\\end{equation}\nwhere $a,t>0$, $s \\ge 0 $, and $0<\\alpha <1$.\nThe branch of $z^{\\alpha}$ is chosen such that $Re(z^{\\alpha})>0$\nfor $Re(z)>0$.\nThis branch is a one-valued function in the $z$ plane \ncut along the negative real axis.\nThis integral obviously converges by virtue of the factor $\\exp(-tz^{\\alpha})$.\nThe function $f_{\\alpha}(t,s)$ has the following properties: \n\n\\begin{enumerate}\n\\item\nFor all $s>0$, the function $f_{\\alpha}(t,s)$ is nonnegative:\n$f_{\\alpha}(t,s)\\ge 0$. \n\\item\nWe have the identity\n\\[ \\int^{\\infty}_0 ds \\, f_{\\alpha}(t,s) =1 . \\]\n\\item\nFor $t>0$ and $x>0$, \n\\[ \\int^{\\infty}_0 ds \\, e^{-sx} \\, f_{\\alpha}(t,s) = e^{-tx^{\\alpha}}. 
\\]\n\\item\nPassing from the integration contour in (\\ref{fats})\nto the contour consisting of the two rays $r\\, \\exp(-i \\theta)$ and \n$r\\, \\exp(+i \\theta)$, where $r \\in (0,\\infty)$, \nand $\\pi\/2 \\le \\theta \\le \\pi$, we obtain\n\\[\nf_{\\alpha}(t,s)=\\frac{1}{\\pi} \\int^{\\infty}_0 dr \\,\n\\exp (sr \\cos \\theta - t r^{\\alpha} \\cos (\\alpha \\theta)) \\cdot \\]\n\\begin{equation} \\label{freal}\n \\cdot \\sin (sr \\sin \\theta - \nt r^{\\alpha} \\sin (\\alpha \\theta)+ \\theta) . \\end{equation}\n\\item\nIf $\\alpha=1\/2$, then $\\theta = \\pi$, and\n\\[ f_{1\/2}(t,s)=\\frac{1}{\\pi} \\int^{\\infty}_0 dr \\,\ne^{-sr} \\sin ( t \\sqrt{r} )=\n\\frac{t}{2 \\sqrt{\\pi} s^{3\/2}} e^{-t^2\/4s} , \\]\nwhich is a corollary of equation (\\ref{freal}).\n\\end{enumerate}\n\n\n\n\\section{Fractional quantum Markovian equation}\n\n\n$ \\quad \\ $\nThe motion of a system is naturally described in terms of the\ninfinitesimal change.\nThis change can be described by an infinitesimal generator. \nOne problem of non-Hamiltonian dynamics \nis to obtain an explicit form \nof the infinitesimal generator. \nFor this, it is necessary to find \nthe most general explicit form of this superoperator.\nThe problem was investigated in \\cite{K1,K2,Lind1}\nfor completely dissipative superoperators. \nLindblad showed that there exists a one-to-one correspondence \nbetween the completely positive norm-continuous semigroups and \ncompletely dissipative generating superoperators \\cite{Lind1}.\nLindblad's structural theorem gives \nthe most general form of a completely dissipative superoperator. 
\\\\\n\n\n\\noindent {\\large Theorem 1.} \n{\\it A generating superoperator ${\\cal L}_V$ of a completely positive \nunity-preserving semigroup $\\{\\Phi_t=\\exp(-t{\\cal L}_V)| \\ t \\ge 0\\}$\non an operator space ${\\cal M}$ can be represented in the form \n\\begin{equation} \\label{s153}\n-{\\cal L}_V (A) =- \\frac{1}{i\\hbar}[H, A] + \n\\frac{1}{2 \\hbar} \\sum^{\\infty}_{k=1} \n\\Bigl(V^{\\dagger}_{k}[A, V_{k}] + [V^{\\dagger}_{k}, A] V_{k} \\Bigr) ,\n\\end{equation}\nwhere $H$, $V_{k}$, $\\sum_k V^{\\dagger}_{k} V_{k} \\in {\\cal M}$.} \\\\\n\nWe note that the form of ${\\cal L}_V$ is not uniquely fixed by (\\ref{s153}). \nIndeed, formula (\\ref{s153}) preserves its form under the changes\n\\[ V_k \\ \\rightarrow \\ V_k+ a_k I , \\quad\nH \\ \\rightarrow \\ H+\\frac{1}{2i\\hbar} \n\\sum^{\\infty}_{k=1} (a^{*}_kV_k-a_kV^{\\dagger}_k) , \\]\nwhere $a_k$ are arbitrary complex numbers. \n\nUsing $A_t=\\Phi_t(A)$, where $\\Phi_t=\\exp(-t{\\cal L}_V)$, we obtain the equation \n\\begin{equation} \\label{LindA1}\n\\frac{d}{dt} A_t=-\\frac{1}{i\\hbar}[H,A_t]+ \n\\frac{1}{2 \\hbar} \\sum^{\\infty}_{k=1} \\Bigl(V^{\\dagger}_k [A_t, V_k] +\n[V^{\\dagger}_k, A_t] V_k \\Bigr) , \\end{equation}\nwhere ${\\cal L}_V$ is defined by (\\ref{s153}). \nThis is called the {\\it quantum Markovian equation} for the observable $A$. \n\nThe Lindblad theorem gives an explicit form of\nthe equations of motion if the following restrictions are satisfied \n(here $\\Lambda_V$ is adjoint to ${\\cal L}_V$): \n\n\\begin{enumerate}\n\\item\n${\\cal L}_V$ and $\\Lambda_V$ are bounded superoperators and \n\\item \n${\\cal L}_V$ and $\\Lambda_V$ are completely dissipative superoperators. \n\\end{enumerate}\n\nDavies extended the Lindblad result \nto a class of quantum dynamical semigroups \nwith unbounded generating superoperators \\cite{Davies2}. 
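In a finite-dimensional matrix realization, the right-hand side of (\ref{LindA1}) can be coded directly. The sketch below is our own illustration (the random matrices are hypothetical test data); it checks two structural properties implied by Theorem 1: the generator annihilates the identity (the semigroup is unity-preserving), and it is a real superoperator, i.e., it maps self-adjoint operators to self-adjoint operators.

```python
import numpy as np

def markovian_rhs(A, H, Vs, hbar=1.0):
    # dA/dt = -(1/(i hbar)) [H, A]
    #         + (1/(2 hbar)) sum_k ( V_k^† [A, V_k] + [V_k^†, A] V_k )
    comm = lambda X, Y: X @ Y - Y @ X
    out = -comm(H, A) / (1j * hbar)
    for V in Vs:
        Vd = V.conj().T
        out += (Vd @ comm(A, V) + comm(Vd, A) @ V) / (2.0 * hbar)
    return out

rng = np.random.default_rng(0)
n = 3
G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = G + G.conj().T                           # self-adjoint Hamiltonian
V = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

ident_out = markovian_rhs(np.eye(n), H, [V])  # should vanish identically
B = rng.standard_normal((n, n))
A = B + B.T                                   # self-adjoint observable
A_dot = markovian_rhs(A, H, [V])              # should be self-adjoint
```

Both checks hold for arbitrary $V_k$, since $[I,V_k]=0$ and the dissipator is invariant under the adjoint for self-adjoint $A$ and $H$.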
\n\n\n\nWe consider quantum Markovian equation (\\ref{LindA1})\nfor an observable $A_t$.\nWe rewrite this equation in the form\n\\begin{equation} \\label{Lind1a}\n\\frac{d}{dt} A_t= -{\\cal L}_V A_t , \\end{equation}\nwhere ${\\cal L}_V$ denotes the Markovian superoperator\n\\begin{equation} \\label{Lind1b}\n{\\cal L}_V =L^{-}_{H}+ \\frac{i}{2} \\sum^{\\infty}_{k=1} \n\\Bigl(L_{V^{\\dagger}_k} L^{-}_{V_k}-R_{V_k} L^{-}_{V^{\\dagger}_k} \\Bigr) . \\end{equation}\nHere, we use the superoperators of left multiplication $L_V$ and\nright multiplication $R_V$ determined by the relations $L_VA=VA$ and $R_VA=AV$.\nThe superoperator $L^{-}_H$ is a left Lie multiplication by $H$ such that \n\\begin{equation} \\label{Lminus}\nL^{-}_HA=\\frac{1}{i \\hbar} [H,A] . \\end{equation}\nIf all operators $V_k$ are equal to zero, then ${\\cal L}_V=L^{-}_H$,\nand equations (\\ref{Lind1a}) and (\\ref{Lind1b}) give \nthe Heisenberg equations for a Hamiltonian system. \nIn the general case, the quantum system is non-Hamiltonian \\cite{kn3}.\n\nWe obtain a fractional generalization\nof the quantum Markovian equation. \nFor this, we define a fractional power for \nthe Markovian superoperator ${\\cal L}_V$ in the form\n\\begin{equation} \\label{LaA3}\n({\\cal L}_V)^{\\alpha}=\\frac{\\sin \\pi \\alpha }{\\pi} \n\\int^{\\infty}_0 dz \\, \nz^{\\alpha-1} R(-z,{\\cal L}_V) \\, {\\cal L}_V \\quad (0< \\alpha <1) .\n\\end{equation}\nThe superoperator $({\\cal L}_V)^{\\alpha}$ is called\na {\\it fractional power of the Markovian superoperator}.\nWe note that $({\\cal L}_V)^{\\alpha}({\\cal L}_V)^{\\beta}=\n({\\cal L}_V)^{\\alpha+\\beta}$ \nfor $\\alpha, \\beta>0$, and $\\alpha+\\beta<1$. \nAs a result, we obtain the equation\n\\begin{equation} \\label{Lind2} \n\\frac{d}{dt} A_t=-({\\cal L}_V)^{\\alpha} A_t , \\end{equation}\nwhere $t$, $H\/\\hbar$ and $V_k\/ \\sqrt{\\hbar}$ \nare dimensionless variables.\nWe call this the {\\it fractional quantum Markovian equation}. 
\n\nIf $V_k=0$, then equation (\\ref{Lind2}) \ngives the fractional Heisenberg equation \\cite{Heis} of the form\n\\begin{equation} \\label{Heis2b} \n\\frac{d}{dt} A_t=-(L^{-}_H)^{\\alpha} A_t . \\end{equation}\nThe superoperator $(L^{-}_H)^{\\alpha}$ is \na {\\it fractional power of the left Lie superoperator} (\\ref{Lminus}). \nWe note that this equation cannot be represented in the form\n\\[ \\frac{d}{dt} A_t=-L^{-}_{H_{new}} A_t=\n\\frac{i}{\\hbar} [H_{new}, A_t] \\]\nwith some operator $H_{new}$.\nTherefore, quantum systems described by (\\ref{Heis2b}) \nare not Hamiltonian systems.\nThese systems are called {\\it fractional Hamiltonian quantum systems} (FHQS). \nUsual Hamiltonian quantum systems can be considered a special case of FHQS. \nWe note that a fractional generalization of classical Hamiltonian systems \nwas suggested in \\cite{FracHam,JPA2006}.\n\nUsing the operators\n\\[ A_U(t) = U(t) A_t U^{\\dagger}(t) , \\quad W_k(t) = U(t) V_k U^{\\dagger}(t) , \\]\nwhere $U(t)= \\exp \\{ (1\/i\\hbar) tH \\}$, \nwe can write the quantum Markovian equation in the form\n\\begin{equation} \\label{Lind-I}\n\\frac{d}{dt} A_U(t)= - \\tilde{\\cal L}_W A_U(t) . \\end{equation}\nThe superoperator\n\\begin{equation} \\label{Lind-W}\n\\tilde{\\cal L}_W =\\frac{i}{2} \\sum^{\\infty}_{k=1} \n\\Bigl(L_{W^{\\dagger}_k} L^{-}_{W_k}-R_{W_k} L^{-}_{W^{\\dagger}_k} \\Bigr) \\end{equation}\ndescribes the non-Hamiltonian part of the evolution.\nEquation (\\ref{Lind-I}) is the quantum Markovian equation \nin the interaction representation.\nThe fractional generalization of this equation is\n\\begin{equation} \\label{Lind-I2}\n\\frac{d}{dt} A_U(t)= - (\\tilde{\\cal L}_W)^{\\alpha} A_U(t) . 
\\end{equation}\nEquation (\\ref{Lind-I2}) is the fractional quantum Markovian equation \nin the interaction representation.\nThe parameter $\\alpha$ can be considered \na measure of the influence of the environment.\nFor $\\alpha=1$, we have quantum Markovian equation (\\ref{Lind-I}).\nIn the limit as $\\alpha \\rightarrow 0$, we obtain the Heisenberg equation\nfor the quantum observable $A_t$ of a Hamiltonian system.\nAs a result, the physical interpretation of equations \nwith a fractional power of the Markovian superoperator \ncan be given in terms of the influence of the environment.\nThe following cases can be considered in quantum theory: \n(1) absence of the environmental influence ($\\alpha=0$), \n(2) complete environmental influence ($\\alpha=1$), and \n(3) powerlike screening of the environmental influence ($0<\\alpha<1$). \nThe physical interpretation of fractional equation (\\ref{Lind-I2})\ncan be connected with the existence of a powerlike screening \nof the environmental influence on the system.\n\n\n\n\\section{Fractional semigroup}\n\n$ \\quad \\ $\nIf we consider the Cauchy problem for equation (\\ref{Lind1a}) \nwith the initial condition given at the time $t=0$ by $A_0$,\nthen its solution can be written in the form\n$A_t=\\Phi_t A_0$. \nThe one-parameter superoperators $\\Phi_t$, $t \\ge 0$ \nhave the properties\n\\[ \\Phi_t \\Phi_s=\\Phi_{t+s}, \\quad (t,s >0) , \\quad \\Phi_0=L_I . 
\\] \nAs a result, the superoperators $\\Phi_t$ form a semigroup, \nand the superoperator ${\\cal L}_V$ is a generating superoperator \nof the semigroup $\\{\\Phi_{t} | \\ t\\ge 0\\}$.\n\nWe consider the Cauchy problem for \nfractional quantum Markovian equation (\\ref{Lind2}) \nwith the initial condition given by $A_0$.\nThen its solution can be represented in the form\n\\[ A_t(\\alpha)=\\Phi^{(\\alpha)}_t A_0, \\]\nwhere the superoperators $\\Phi^{(\\alpha)}_t$, $t>0$, \nform a semigroup, \nwhich we call the {\\it fractional semigroup}.\nThe superoperator $-({\\cal L}_V)^{\\alpha}$ is a \ngenerating superoperator of \nthe semigroup $\\{\\Phi^{(\\alpha)}_t| \\ t\\ge 0\\}$.\nWe consider some properties of the fractional semigroups\n$\\{\\Phi^{(\\alpha)}_t | \\ t>0\\}$. \n\nThe superoperators $\\Phi^{(\\alpha)}_t$ can be constructed\nin terms of $\\Phi_t$ by Bochner-Phillips formula (\\ref{BPf}), \nwhere $f_{\\alpha}(t,s)$ is defined in (\\ref{fats}). \nIf $A_t$ is a solution of quantum Markovian equation (\\ref{Lind1a}),\nthen formula (\\ref{BPf}) gives the solution\n\\[ A_t(\\alpha)=\\int^{\\infty}_0 ds \\, f_{\\alpha}(t,s) A_s , \\quad (t>0) \\]\nof fractional quantum Markovian equation (\\ref{Lind2}). \n\nA linear superoperator $\\Phi^{(\\alpha)}_t$ is completely positive if \n\\[ \\sum_{i,j} B^{\\dagger}_i \\Phi^{(\\alpha)}_t(A^{\\dagger}_i A_j) B_j \\ge 0 \\]\nfor any $A_i, B_i \\in {\\cal M}$. \nThe following theorem states that the fractional semigroup\n$\\{\\Phi^{(\\alpha)}_t| \\ t>0\\}$ is completely positive.
\\\\\n\n\\noindent\n{\\large Theorem 2.}\n{\\it If $\\{\\Phi_{t} | \\ t>0\\}$ is a completely positive semigroup of \nsuperoperators $\\Phi_t$ on ${\\cal M}$, \nthen the fractional superoperators $\\Phi^{(\\alpha)}_t$ \nform a completely positive semigroup $\\{ \\Phi^{(\\alpha)}_t | \\ t>0\\}$ .} \\\\\n\n\\noindent {\\it Proof.}\nBochner-Phillips formula (\\ref{BPf}) gives \n\\[ \\sum_{i,j} B^{\\dagger}_i \\Phi^{(\\alpha)}_t(A^{\\dagger}_i A_j) B_j =\n\\int^{\\infty}_0 ds \\, f_{\\alpha}(t,s) \n\\sum_{i,j} B^{\\dagger}_i \\Phi_s(A^{\\dagger}_i A_j) B_j \\]\nfor $t>0$.\nUsing \n\\[ \\sum_{i,j} B^{\\dagger}_i \\Phi_s(A^{\\dagger}_i A_j) B_j \\ge 0 , \\quad\nf_{\\alpha}(t,s)\\ge 0 \\quad (s>0) , \\]\nwe obtain\n\\[ \\sum_{i,j} B^{\\dagger}_i \\Phi^{(\\alpha)}_t(A^{\\dagger}_i A_j) B_j \\ge 0 . \\ \\ \\ \\Box \\]\n\n\n\\noindent\n{\\large Corollary.}\n{\\it If $\\Phi_t$, $t>0$, is a nonnegative one-parameter superoperator, i.e.,\n$\\Phi_t (A) \\ge 0$ for $A \\ge 0$, then\nthe superoperator $\\Phi^{(\\alpha)}_t$ is nonnegative, i.e., \n$\\Phi^{(\\alpha)}_t (A) \\ge 0 $ for $A\\ge 0$. } \\\\\n\nUsing the Bochner-Phillips formula and the property\n$f_{\\alpha}(t,s) \\ge 0$, $s>0$, \nwe can easily prove that the superoperator $\\Phi^{(\\alpha)}_t$\nis nonnegative, \nif $\\Phi_t$, $t>0$ is a nonnegative one-parameter superoperator. \nThis corollary can also be proved by using \n$B_1=I$, $A_1=A$, and $A_i=B_i=0$ ($i=2,...$) \nin the proof of the theorem. \\\\\n\nIn quantum theory, the class of real superoperators is the most important. \nLet $A^{\\dagger} \\in {\\cal M}$ be adjoint to $A\\in {\\cal M}$.\nA {\\it real superoperator} is a superoperator \n$\\Phi_t$ on ${\\cal M}$, such that\n$(\\Phi_t A)^{\\dagger} = \\Phi_t (A^{\\dagger})$\nfor all $A \\in D (\\Phi_t) \\subset {\\cal M}$. \nA quantum observable is a self-adjoint operator. 
\nIf $\\Phi_t$ is a real superoperator\nand $A$ is a self-adjoint operator, $A^{\\dagger}=A$, \nthen the operator $A_t=\\Phi_t A$ is self-adjoint, i.e., \n$(\\Phi_t A)^{\\dagger}=\\Phi_t A$. \nLet ${\\cal M}$ be a set of quantum observables. \nThen superoperators from ${\\cal M}$ into ${\\cal M}$ must be real because \nquantum dynamics, i.e., \ntemporal evolutions of quantum observables,\nmust be described by real superoperators. \\\\\n\n\\noindent\n{\\large Theorem 3.}\n{\\it If $\\Phi_t$ is a real superoperator, \nthen the superoperator $\\Phi^{(\\alpha)}_t$ is also real. } \\\\\n\n\\noindent {\\it Proof.}\nThe Bochner-Phillips formula gives\n\\[ (\\Phi^{(\\alpha)}_t A)^{\\dagger} =\n\\int^{\\infty}_0 ds f^{*}_{\\alpha}(t,s) \\, (\\Phi_s A)^{\\dagger} , \n\\quad (t>0) . \\]\nUsing (\\ref{freal}), we can easily see \nthat $f^{*}_{\\alpha}(t,s)=f_{\\alpha}(t,s)$ \nis a real-valued function.\nThen $(\\Phi_t A)^{\\dagger}=\\Phi_t A^{\\dagger}$ leads to \n$(\\Phi^{(\\alpha)}_t A)^{\\dagger} = \\Phi^{(\\alpha)}_t (A^{\\dagger})$\nfor all $A \\in D (\\Phi^{(\\alpha)}_t) \\subset {\\cal M}$. $\\ \\ \\ \\Box$ \\\\\n\n\n \nIf $\\Phi_t$ is a superoperator on a Hilbert operator space ${\\cal M}$, \nthen an {\\it adjoint superoperator} of $\\Phi_t$ \nis a superoperator $\\hat{\\cal E}_t$ on ${\\cal M}^{*}$ such that \n\\begin{equation} \\label{TrPhi} (\\hat{\\cal E}_t (A) |B) = (A | \\Phi_t (B)) \\end{equation}\nfor all $B \\in D (\\Phi_t) \\subset {\\cal M}$ \nand all $A \\in D(\\hat{\\cal E}_t) \\subset {\\cal M}^{*}$. \nUsing the Bochner-Phillips formula, we obtain\nthe following theorem. 
\\\\\n\n\\noindent\n{\\large Theorem 4.}\n{\\it If $\\hat{\\cal E}_t$ is an adjoint superoperator of $\\Phi_t$, \nthen the superoperator\n\\[ \\hat{\\cal E}^{(\\alpha)}_t =\n\\int^{\\infty}_0 ds \\, f_{\\alpha}(t,s) \\ \\hat{\\cal E}_s , \n\\quad (t>0) , \\]\nis an adjoint superoperator of $\\Phi^{(\\alpha)}_t$.} \\\\\n\n\\noindent {\\it Proof.}\nLet $\\hat{\\cal E}_t$ be adjoint to $\\Phi_t$, i.e.\nequation (\\ref{TrPhi}) is satisfied. \nThen\n\\[ ( \\hat{\\cal E}^{(\\alpha)}_t A| B) =\n\\int^{\\infty}_0 ds \\, f_{\\alpha}(t,s) (\\hat{\\cal E}_s A|B) = \\]\n\\[ =\\int^{\\infty}_0 ds \\, f_{\\alpha}(t,s) \n(A| \\Phi_s B) = (A| \\Phi^{(\\alpha)}_t B) . \\ \\ \\ \\Box \\]\n\n\nIt is known that $\\hat{\\cal E}_t$ is a real superoperator if\n$\\Phi_t$ is real. \nAnalogously, if $\\Phi^{(\\alpha)}_t$ is a real superoperator, then\n$\\hat{\\cal E}^{(\\alpha)}_t$ is real. \n\n\nLet $\\{ \\hat{\\cal E}_t | t>0\\}$ be a completely positive semigroup \nsuch that the density operator $\\rho_t=\\hat{\\cal E}_t \\rho_0$\nis described by\n\\begin{equation} \\label{Lindrho}\n\\frac{d}{dt} \\rho_t=- \\hat \\Lambda_V \\rho_t , \\end{equation}\nwhere $\\hat \\Lambda_V$ is adjoint to \nthe Markovian superoperator ${\\cal L}_V$.\nThe superoperator $\\hat \\Lambda_V$ can be represented in the form\n\\[ \\hat \\Lambda_V \\rho_t=-\\frac{1}{i\\hbar}[H,\\rho_t]-\n\\frac{1}{2\\hbar} \\sum^{\\infty}_{k=1} \\Bigl(2V_k \\rho_t V^{\\dagger}_k - \n\\rho_t V^{\\dagger}_k V_k-V^{\\dagger}_k V_k \\rho_t \\Bigr) . \\]\nWe note that equation (\\ref{Lindrho}) with $V_k=0$\ngives the von Neumann equation\n\\[ \\frac{d}{dt} \\rho_t=\\frac{1}{i\\hbar}[H,\\rho_t] . \\]\nThe semigroup $\\{ \\hat{\\cal E}^{(\\alpha)}_t | \\ t>0\\}$\ndescribes the evolution of the density operator\n$\\rho_t(\\alpha)=\\hat{\\cal E}^{(\\alpha)}_t \\rho_0$\nby the fractional equation\n\\[ \\frac{d}{dt} \\rho_t(\\alpha)=- (\\hat \\Lambda_V)^{\\alpha} \\rho_t(\\alpha) . 
\\]\nThis is the {\\it fractional quantum Markovian equation for the density operator}.\nFor $V_k=0$, this equation gives \n\\[ \\frac{d}{dt} \\rho_t=-(-L^{-}_H)^{\\alpha} \\rho_t , \\] \nwhich can be called the {\\it fractional von Neumann equation}. \n\n\n\n\\section{Fractional equation for the harmonic oscillator}\n\n$ \\quad \\ $ \nWe consider a quantum harmonic oscillator such that\n\\begin{equation} \\label{oscHam}\nH=\\frac{1}{2m} P^2 +\\frac{m\\omega^2}{2} Q^2, \\quad V_k=0 , \\end{equation}\nwhere $t$ and $P$ are dimensionless variables.\nThen equation (\\ref{Lind2}) (also see (\\ref{Heis2b})) describes a harmonic oscillator.\nFor $A=Q$ and $A=P$, equation (\\ref{Lind2}) for $\\alpha=1$ gives\n\\[ \\frac{d}{dt} Q_t=\\frac{1}{m} P_t, \\quad \n\\frac{d}{dt} P_t=-m \\omega^2 Q_t . \\]\nThe well-known solutions of these equations are\n\\[ Q_t=Q_0 \\cos (\\omega t) +\\frac{1}{m \\omega} P_0 \\sin (\\omega t) , \\]\n\\begin{equation} \\label{osc1}\nP_t=P_0 \\cos (\\omega t) - m \\omega Q_0 \\sin (\\omega t) . \\end{equation}\nUsing these solutions and the Bochner-Phillips formula, \nwe can obtain solutions of the fractional equations\n\\begin{equation} \\label{ex2}\n\\frac{d}{dt} Q_t=- (L^{-}_{H})^{\\alpha} Q_t , \\quad \n\\frac{d}{dt} P_t=- (L^{-}_{H})^{\\alpha} P_t , \\end{equation}\nwhere $H$ is given by (\\ref{oscHam}). \nThe solutions of fractional equations (\\ref{ex2}) have the forms\n\\[ Q_t(\\alpha)=\\Phi^{(\\alpha)}_t Q_0=\n\\int^{\\infty}_0 ds f_{\\alpha}(t,s) Q_s , \\]\n\\begin{equation} \\label{osc2}\nP_t(\\alpha)=\\Phi^{(\\alpha)}_t P_0=\n\\int^{\\infty}_0 ds f_{\\alpha}(t,s) P_s . 
\\end{equation}\nSubstituting (\\ref{osc1}) in (\\ref{osc2}) gives \\cite{Heis} the equations\n\\begin{equation} \\label{Hsol2a}\nQ_t=Q_0 C_{\\alpha}(t) +\\frac{1}{m \\omega} P_0 S_{\\alpha}(t) , \\quad\nP_t=P_0 C_{\\alpha}(t) - m \\omega Q_0 S_{\\alpha}(t) , \\end{equation}\nwhere\n\\[ C_{\\alpha}(t)=\\int^{\\infty}_0 ds \\, f_{\\alpha}(t,s)\\, \\cos(\\omega s) , \n\\quad \nS_{\\alpha}(t)=\\int^{\\infty}_0 ds \\, f_{\\alpha}(t,s)\\, \\sin(\\omega s) . \\]\nEquations (\\ref{Hsol2a}) describe solutions of \nfractional equations (\\ref{ex2}) for the quantum harmonic oscillator. \nFor $\\alpha=1\/2$, we have \n\\[ C_{1\/2}(t)=\\frac{t}{2 \\sqrt{\\pi}} \\int^{\\infty}_0 ds \\, \n\\frac{\\cos(\\omega s)}{s^{3\/2}} \\, e^{-t^2\/4s} , \\]\n\\[ S_{1\/2}(t)=\\frac{t}{2 \\sqrt{\\pi}} \\int^{\\infty}_0 ds \\, \n\\frac{\\sin(\\omega s)}{s^{3\/2}} \\, e^{-t^2\/4s} . \\]\nThese functions can be represented in terms of the Macdonald function\n(see Sec. 2.5.37.1 in \\cite{Prudnikov}), \nwhich is also called the modified Bessel function of the third kind.\n\nIt is easy to obtain the expectations \n\\[ \\langle Q_t \\rangle =x_0 C_{\\alpha}(t) +\\frac{1}{m \\omega} p_0 S_{\\alpha}(t) , \\]\n\\[ \\langle P_t \\rangle =p_0 C_{\\alpha}(t) - m \\omega x_0 S_{\\alpha}(t) , \\]\nand the dispersions \n\\[ D_t(Q)=\\frac{a^2}{2} C^2_{\\alpha}(t) + \n\\frac{\\hbar^2}{2a^2 m^2 \\omega^2} S^2_{\\alpha}(t) , \\]\n\\[ D_t(P)=\\frac{\\hbar^2}{2a^2}\nC^2_{\\alpha}(t) +\\frac{a^2 m^2 \\omega^2}{2} S^2_{\\alpha}(t) . \\]\nHere, we use the coordinate representation and the pure state\n\\begin{equation} \\label{Psi0}\n\\Psi(x)= \\frac{1}{\\sqrt{a\\sqrt{\\pi}}} \\exp\\Bigl\\{-\\frac{(x-x_0)^2}{2a^2} +\n\\frac{i}{\\hbar} p_0x \\Bigr\\} . 
\\end{equation}\nThe expectation and dispersion are defined as usual.\n\n\n\\section{Fractional quantum Markovian equation for the oscillator with friction}\n\n$ \\quad \\ $ \nWe consider the fractional quantum Markovian equation with $V_k \\ne 0$.\nThe basic assumption is that the general form of a \nbounded completely dissipative superoperator \ngiven by the quantum Markovian equation\nalso holds for an unbounded completely \ndissipative superoperator ${\\cal L}_V$.\nAnother condition imposed on the operators $H$ and $V_k$\nis that they are functions of the operators $Q$ and $P$ \nsuch that the obtained model is exactly solvable \\cite{Lind2,SS}\n(also see \\cite{ISSSS,ISS}).\nWe assume that $V_k=V_k(Q,P)$ are \nthe first-degree polynomials in $Q$ and $P$,\nand that $H=H(Q,P)$ is a second-degree polynomial in $Q$ and $P$.\nThese assumptions are analogous to those used in classical\ndynamics when friction forces proportional to the velocity are considered.\nThen $V_k$ and $H$ are given in the forms:\n\\begin{equation} \\label{v-h} \nH = \\frac{1}{2m} P^2 + \\frac{m \\omega^2}{2} Q^2 + \\frac{\\mu}{2} (PQ+QP) ,\n\\quad V_k=a_kP+b_kQ , \n\\end{equation} \nwhere $a_k$ and $b_k $, $k=1,2$, are complex numbers.\nIt is easy to obtain\n\\[ {\\cal L}_V Q = \\frac{1}{m} P + \\mu Q - \\lambda Q , \\]\n\\[ {\\cal L}_V P = -m \\omega^2 Q - \\mu P - \\lambda P , \\] \nwhere\n\\[ \\lambda =Im\\Bigl (\\sum^{2}_{k=1} a_kb^*_k\\Bigr) =\n-Im \\Bigl(\\sum^{2}_{k=1} a^*_k b_k\\Bigr) . \\]\nUsing the matrices \n\\[ A = \n\\left (\\begin{array}{c} Q \\\\ P \\\\\n\\end{array}\n\\right) , \\quad \nM = \\left (\\begin{array}{cc} \n\\mu - \\lambda & \\frac{1}{m} \\\\\n-m \\omega^2 & - \\mu - \\lambda\n\\end{array} \\right) , \\]\nwe write the quantum Markovian equation for $A_t$ as\n\\begin{equation} \\label{Lineq} \n\\frac{d}{dt} A_{t} =MA_{t} , \\end{equation}\nwhere ${\\cal L}_V A_t =MA_t$. 
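As a quick numerical sanity check of equation (\ref{Lineq}) (a sketch with hypothetical parameter values, not part of the original derivation), the eigenvalues of the matrix $M$ follow from its trace $-2\lambda$ and determinant $\lambda^2-\mu^2+\omega^2$, giving $-\lambda \pm \sqrt{\mu^2-\omega^2}$:

```python
import math

# Hypothetical parameter values, chosen for illustration only
m, omega, mu, lam = 1.0, 0.3, 0.5, 0.2

# The matrix M from the quantum Markovian equation dA_t/dt = M A_t
M = [[mu - lam, 1.0 / m],
     [-m * omega ** 2, -mu - lam]]

# Eigenvalues from the characteristic polynomial x^2 - tr(M) x + det(M) = 0
tr = M[0][0] + M[1][1]                        # = -2*lam
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]   # = lam**2 - mu**2 + omega**2
disc = math.sqrt(tr * tr - 4.0 * det)         # = 2*nu, where nu**2 = mu**2 - omega**2
eig = sorted([(tr - disc) / 2.0, (tr + disc) / 2.0])

nu = math.sqrt(mu ** 2 - omega ** 2)
print(eig)  # the two eigenvalues -(lam + nu) and -(lam - nu)
```

The computed eigenvalues are consistent with the diagonal matrix $F$ obtained below in the diagonalization $M=N^{-1}FN$.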
\nThe solution of (\\ref{Lineq}) is \n\\[ A_{t} = \\Phi_t A_0 = \n\\sum^{\\infty}_{n=0} \\frac{t^n}{n!}{\\cal L}^n_V A_0 =\n\\sum^{\\infty}_{n=0} \\frac{t^n}{n!} M^n A_0 . \\] \nThe matrix $M$ can be represented in the form $M=N^{-1} FN$, \nwhere $F$ is a diagonal matrix. \nLet $\\nu$ be a complex parameter such that\n$\\nu^2 = \\mu^2 - \\omega^2$. Then we have\n\\[ N =\n\\left (\\begin{array}{cc} \nm \\omega^2 & \\mu + \\nu \\\\ \nm \\omega^2 & \\mu - \\nu\n\\end{array} \\right), \\quad\nN^{-1} = \\frac{1}{2m \\omega^2 \\nu}\n\\left (\\begin{array}{cc} \n- (\\mu - \\nu) & \\mu + \\nu \\\\ \nm \\omega^2 & -m \\omega^2\n\\end{array} \\right) , \\]\n\\[ F = \\left (\n\\begin{array}{cc}\n- (\\lambda + \\nu) & 0 \\\\ \n0 & - (\\lambda - \\nu)\n\\end{array} \\right) . \\]\nTaking\n\\[ \\Phi_t = \\sum^{\\infty}_{n=0} \\frac{t^n}{n!} M^n =\nN^{-1} \\left(\\sum^{\\infty}_{n=0} \\frac{t^n}{n!} F^n\\right) N , \\]\ninto account, we obtain the superoperator $\\Phi_t$ in the form\n\\[ \\Phi_t=e^{tM} =N^{-1} e^{tF} N = \\]\n\\[ =e^{-\\lambda t}\n\\left (\\begin{array}{cc} \n\\cosh (\\nu t) + (\\mu\/\\nu) \\sinh (\\nu t) &\n(1\/m \\nu) \\sinh (\\nu t) \\\\ \n- (m \\omega^2\/\\nu) \\sinh (\\nu t) & \n\\cosh(\\nu t) - (\\mu\/\\nu) \\sinh (\\nu t)\n\\end{array} \\right) . \\]\nAs a result, we obtain\n\\[\nQ_t=e^{-\\lambda t}[\\cosh (\\nu t) + \\frac{\\mu}{\\nu} \\sinh (\\nu t)] Q_0 +\n\\frac{1}{m \\nu} e^{-\\lambda t} \\sinh (\\nu t) P_0 , \\]\n\\begin{equation} \\label{LindP1}\nP_t = - \\frac{m \\omega^2}{\\nu} e^{-\\lambda t} \\sinh (\\nu t) Q_0 +\ne^{-\\lambda t}[\\cosh (\\nu t) - \\frac{\\mu}{\\nu} \\sinh (\\nu t)] P_0 . 
\\end{equation}\n\n\n\nThe fractional quantum Markovian equations for $Q_t$ and $P_t$ are\n\\begin{equation} \\label{Lind3} \n\\frac{d}{dt} Q_t=-({\\cal L}_V)^{\\alpha} Q_t , \\quad\n\\frac{d}{dt} P_t=-({\\cal L}_V)^{\\alpha} P_t , \\end{equation}\nwhere $t$ and $V_k\/ \\sqrt{\\hbar}$ are dimensionless variables.\nThe solutions of these fractional equations\nare given by the Bochner-Phillips formula, \n\\[\nQ_t(\\alpha)=\\Phi^{(\\alpha)}_t Q_0=\n\\int^{\\infty}_0 ds f_{\\alpha}(t,s) Q_s , \n\\quad (t>0) , \\]\n\\begin{equation} \\label{LindP2}\nP_t(\\alpha)=\\Phi^{(\\alpha)}_t P_0=\n\\int^{\\infty}_0 ds f_{\\alpha}(t,s) P_s , \n\\quad (t>0) , \\end{equation}\nwhere $Q_s$ and $P_s$ are given by (\\ref{LindP1}) and \nthe function $f_{\\alpha}(t,s)$ is defined in (\\ref{fats}). \nSubstituting (\\ref{LindP1}) in (\\ref{LindP2}) gives\n\\[\nQ_t(\\alpha)=[Ch_{\\alpha}(t) + \\frac{\\mu}{\\nu} Sh_{\\alpha}(t)] Q_0 +\n\\frac{1}{m \\nu} Sh_{\\alpha}(t) P_0 , \\]\n\\begin{equation} \\label{LindP3}\nP_t(\\alpha) = - \\frac{m \\omega^2}{\\nu} Sh_{\\alpha}(t) Q_0 +\n[Ch_{\\alpha}(t) - \\frac{\\mu}{\\nu} Sh_{\\alpha}(t)] P_0 , \\end{equation}\nwhere\n\\[ Ch_{\\alpha}(t)=\\int^{\\infty}_0 ds \\, f_{\\alpha}(t,s)\\, \ne^{-\\lambda s} \\cosh (\\nu s) , \\]\n\\[ Sh_{\\alpha}(t)=\\int^{\\infty}_0 ds \\, f_{\\alpha}(t,s)\\, \ne^{-\\lambda s} \\sinh (\\nu s) . \\]\nFor $\\alpha=1\/2$, we have\n\\[ Ch_{1\/2}(t)=\\frac{t}{2 \\sqrt{\\pi}} \\int^{\\infty}_0 ds \\, \n\\frac{\\cosh(\\nu s)}{s^{3\/2}} \\, e^{-t^2\/4s - \\lambda s} , \\]\n\\[ Sh_{1\/2}(t)=\\frac{t}{2 \\sqrt{\\pi}} \\int^{\\infty}_0 ds \\, \n\\frac{\\sinh (\\nu s)}{s^{3\/2}} \\, e^{-t^2\/4s-\\lambda s} . \\]\nThese functions can be represented in terms of the Macdonald function\n(see Sec. 
2.4.17.2 in \\cite{Prudnikov}) such that\n\\[ Ch_{1\/2}(t)=\\frac{t}{2 \\sqrt{\\pi}} \\Bigl[\nV(t,\\lambda,-\\nu)+V(t,\\lambda,\\nu) \\Bigr] , \\]\n\\[ Sh_{1\/2}(t)=\\frac{t}{2 \\sqrt{\\pi}} \\Bigl[\nV(t,\\lambda,-\\nu)-V(t,\\lambda,\\nu) \\Bigr] , \\]\nwhere we use the notation\n\\[ V(t,\\lambda,\\nu)=\\Bigl(\\frac{t^2+4\\nu}{4 \\lambda}\\Bigr)^{1\/4} \nK_{-1\/2} \\Bigl( 2 \\sqrt{\\frac{\\lambda(t^2+4\\nu)}{4}} \\Bigr) , \\]\nwhere $Re(t^2)> Re(\\nu)$, $Re(\\lambda)>0$,\nand $K_{\\alpha}(z)$ is the Macdonald function \\cite{OS,SKM}. \n\nAs a result, equations (\\ref{LindP3}) define a solution\nof the fractional quantum Markovian equation for \nthe harmonic oscillator with friction. \n\n\n\\section{Conclusion}\n\nQuantum dynamics can be described by superoperators.\nA map assigning each operator exactly one operator is called a superoperator.\nIt is natural to describe motion in terms of the \ninfinitesimal change of a system.\nThe equation of motion for a quantum observable\nis called the Heisenberg equation. \nFor Hamiltonian quantum systems, the infinitesimal superoperator \nis some form of derivation. \nA linear map ${\\cal L}$ satisfying the Leibniz rule\n${\\cal L}(AB)=({\\cal L}A)B+ A({\\cal L}B)$ for all operators $A$ and $B$\nis called a derivation.\nIt is known that the infinitesimal generator ${\\cal L}=(1\/i\\hbar)[H, \\ . \\ ]$, \nwhich is used for Hamiltonian systems, is a derivation of quantum observables.\nWe can regard a fractional power ${\\cal L}^{\\alpha}$ of the derivation \n${\\cal L}=(1\/i\\hbar)[H, \\ . \\ ]$ as a fractional derivative on\na set of quantum observables \\cite{Heis}. \nAs a result, we obtain a fractional generalization of \nthe Heisenberg equation \\cite{Heis}, which allows generalizing \nthe notion of Hamiltonian quantum systems. 
\nIn the general case, quantum systems are non-Hamiltonian\nand ${\\cal L}$ is not a derivation.\nFor a wide class of quantum systems, the infinitesimal generator ${\\cal L}$ is \ncompletely dissipative \\cite{Kossakowski,Dav,IngKos,kn3}.\n\nHere, we consider a fractional generalization of \nthe equation of motion for non-Hamiltonian quantum systems using\na fractional power of a completely dissipative superoperator.\nWe suggested a generalization of the quantum Markovian equation \nfor quantum observables.\nIn this equation, we used a superoperator that is a fractional power of \na completely dissipative superoperator.\nWe proved that the suggested superoperator \nis an infinitesimal generator of a completely positive semigroup and \ndescribed properties of this semigroup.\nWe solved the proposed fractional quantum Markovian equation exactly \nfor the harmonic oscillator with linear friction.\nA fractional power $\\alpha$ of the quantum Markovian superoperator \ncan be considered a parameter \ndescribing a measure of ``screening'' of the environment.\nWe can separate the cases where $\\alpha=0$, \nabsence of the environmental influence;\nwhere $\\alpha=1$, complete environmental influence; \nand where $0<\\alpha<1$, a powerlike environmental influence. 
\nA one-parameter description of a screening of the coupling between\nthe quantum system and the environment is thus \na physical interpretation of a fractional power of \nthe quantum Markovian superoperator.\n\nWe note that the quantum Markovian equation describes\na coupling between a quantum system and an environment (see \\cite{ALV}).\nAnother physical interpretation of a fractional power of \nthe infinitesimal generator is connected with \nthe Bochner-Phillips formula (\\ref{BPf}) as follows.\nUsing the properties\n\\[ \\int^{\\infty}_0 ds \\, f_{\\alpha}(t,s)=1 , \\quad \\quad f_{\\alpha}(t,s) \\ge 0 \\quad \n(\\mbox{for all } s>0) , \\]\nwe can assume that $f_{\\alpha}(t,s)$ is the density of a probability distribution.\nThen the Bochner-Phillips formula (\\ref{BPf}) can be considered \na smoothing of the evolution $\\Phi_t$ with respect to the time $s>0$.\nThis smoothing can be considered a screening of \nthe environment of the quantum system. \n\nThe function $f_{\\alpha}(t,s)$\ncan be represented as a Levy distribution using a reparametrization. \nWe note that Levy distributions are solutions of fractional \nequations (see, e.g., \\cite{Zaslavsky2,SZ,CNSNS2008-1,Y})\nthat describe anomalous diffusion.\nIt is known that quantum Markovian equations are used to describe\nthe Brownian motion of quantum systems \\cite{Lind2}.\nPerhaps the fractional generalization of \nquantum Markovian equations can be used to describe \nanomalous processes and random walks \n\\cite{Zaslavsky2,MS,MK1,MK2} in quantum systems.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:introduction}\n\nMany knowledge-driven applications are built on knowledge graphs (KGs), which store massive amounts of structured facts about the real world~\\cite{KG_survey}.\nThroughout the life-cycle of KG construction and application, \nnew facts, unseen entities, and unseen relations continually emerge into the KG on a regular basis. 
\nAs a result, the real-world KG is rarely a static graph, but rather evolves and grows alongside its development.\nFigure~\\ref{fig:illustration} illustrates an example excerpted from Wikidata~\\cite{Wikidata}, which shows the growth of the KG along with the continuous knowledge extraction. \nHowever, KG embedding, a critical task for downstream applications, has primarily focused on static KGs over the years~\\cite{KGE_survey}. \nLearning from scratch every time is inefficient and wastes previously acquired knowledge, while simply fine-tuning on new facts would quickly disrupt previously acquired knowledge. \nHence, this paper proposes to investigate lifelong embedding learning and transfer for growing KGs, with the goal of learning new facts while retaining old knowledge without re-training from scratch.\n\nThe key idea of this paper comes from the human learning process.\nHumans are typical lifelong learners, with knowledge transfer and retention being the most important aspects of lifelong learning. \nHumans, in particular, can continually learn new knowledge given new facts and use previously learned knowledge to help new knowledge learning (\\textit{knowledge learning and transfer}),\nas well as update old knowledge while retaining useful knowledge (\\textit{knowledge update and retention}). \nMotivated by this, we seek to build a lifelong KG embedding model, namely \\textbf{LKGE\\xspace}, which is capable of learning, transferring and retaining knowledge for growing KGs efficiently.\nExisting related work, such as inductive KG embedding~\\cite{MEAN,LAN}, mainly focuses on knowledge transfer, ignoring new knowledge learning and old knowledge update.\n\n\\begin{figure}[!t]\n\\includegraphics[width=\\columnwidth]{figs_new\/illustration2.pdf}\n\\caption{An example of a growing KG. 
In each snapshot $i$, new facts are added into the KG, and previously unseen entities and relations emerge with the new facts.}\n\\label{fig:illustration}\n\\vspace{-10pt}\n\\end{figure}\n\nThe proposed lifelong KG embedding task faces two major challenges.\nFirst, how to strike a balance between new knowledge learning and old knowledge transfer? \nLearning embeddings for new entities and relations from scratch cannot leverage previously learned knowledge, and inductively generating embeddings for them ignores new knowledge in the new snapshot.\nSecond, how to update old knowledge while retaining useful knowledge?\nLearning new facts about an old entity usually requires updating the previously learned embeddings, which can be harmful to the old model.\nThis is because updating an old entity embedding would affect many other old embeddings of related entities.\nThis would cause the catastrophic forgetting issue, and therefore affect the applications built on the old KG snapshot.\n\nTo resolve the above challenges, we propose three solutions in our LKGE\\xspace model.\nFirst, as the base embedding model for new knowledge learning and old knowledge update, we design a masked KG autoencoder that masks and reconstructs the entities or relations in new facts. 
\nIt builds connections between locally-related old and new entities, acting as a bridge for knowledge transfer.\nSecond, to aid in the learning of new knowledge, we propose a knowledge embedding transfer strategy that uses previously learned knowledge to initialize the embeddings of new entities and relations.\nThese embeddings are used by the KG autoencoder to learn new facts.\nThird, to avoid catastrophic forgetting in the old knowledge update, \nwe propose embedding regularization to balance the learning of new facts and the update of old embeddings.\n\nWe build four datasets to assess the lifelong KG embedding performance, including the\nentity-centric, relation-centric, fact-centric, and hybrid growth.\nEach dataset examines a different aspect of KG growth.\nBy contrast, existing datasets \\cite{MEAN,DiCGRL,CKGE} all assume that a KG grows in an ideal way, with balanced new entities or facts in each new snapshot. \nIn our experiments, we compare the link prediction accuracy, knowledge transfer ability, and learning efficiency of the proposed model against baselines on the four datasets. \nThe results show that the proposed LKGE\\xspace not only achieves the best performance on the four datasets, but also has the best forward knowledge transfer ability and learning efficiency.\nThe main contributions of this paper are summarized as follows:\n\\begin{itemize}\n\\item We study \\textit{lifelong KG embedding learning and transfer}. It is a practical task since the real-world KGs continually evolve and grow, which requires the embedding model to be capable of handling the knowledge growth.\n\n\\item We propose \\textit{a novel lifelong learning model}, LKGE\\xspace. It includes a masked KG autoencoder as the basis of embedding learning and update, an embedding transfer strategy for knowledge transfer, and an embedding regularization to prevent catastrophic forgetting in knowledge update.\n\n\\item We conduct \\textit{extensive experiments on four new datasets}. 
The results demonstrate the effectiveness and efficiency of LKGE\\xspace against a variety of state-of-the-art models.\n\\end{itemize}\n\n\\section{Related Work}\n\\label{sec:related_work}\n\nIn this section, we review two lines of related work, i.e., KG embedding and lifelong learning.\n\n\\subsection{Knowledge Graph Embedding}\n\\label{subsec:knowledge_graph_embedding}\n\nKG embedding seeks to encode the symbolic representations of KGs into vector space to foster the application of KGs in downstream tasks. \nMost existing KG embedding models \\cite{TransE,TransH,ConvE,R-GCN,RSN,InteractE,CompGCN} focus on static graphs and cannot continually learn new knowledge on the growing KGs.\n\nTo embed unseen entities, inductive KG embedding models learn to represent an entity by aggregating its existing neighbors in the previous KG snapshot. \nMEAN \\cite{MEAN} uses a graph convolutional network (GCN) \\cite{GCN} for neighborhood aggregation. \nWhen an unseen entity emerges, the GCN would aggregate its previously seen neighboring entities to generate an embedding. \nLAN \\cite{LAN} adopts an attention mechanism to attentively aggregate different neighbors. \nAs MEAN and LAN rely on the entity neighborhood for embedding learning, they cannot handle the new entities that have no neighbors in the previous snapshot.\nFurthermore, inductive KG embedding disregards learning the facts about new entities.\n\nOur work is also relevant to dynamic KG embedding. \npuTransE \\cite{puTransE} trains several new models when facts are added. 
\nDKGE \\cite{DKGE} learns contextual embeddings for entities and relations, which can be automatically updated as the KG grows.\nThey both need partial re-training on old facts, but our model does not.\nIn addition, some subgraph-based models, such as GraIL \\cite{GraIL}, INDIGO \\cite{INDIGO}, and TACT \\cite{TACT}, can also represent unseen entities using the entity-independent features and subgraph aggregation.\nTheir subgraph-building process is time-consuming, making them only applicable to small KGs.\nIn order to run on large-scale KGs, NBFNet \\cite{NBFNet} proposes a fast node pair embedding model based on the Bellman-Ford algorithm, and NodePiece \\cite{NodePiece} uses tokenized anchor nodes and relational paths to represent new entities. However, they do not consider learning new knowledge and cannot support new relations.\n\n\\subsection{Lifelong Learning}\n\\label{subsec:lifelong_learning}\n\nLifelong learning seeks to solve new problems quickly without catastrophically forgetting previously acquired knowledge. \nLifelong learning models are broadly classified into three categories.\n(\\romannumeral1) Dynamic architecture models \\cite{PNN,CWR} extend the network to learn new tasks and avoid forgetting acquired knowledge. \n(\\romannumeral2) Regularization-based models \\cite{EWC,SI} capture the importance of model parameters for old tasks and limit the update of important parameters. \n(\\romannumeral3) Rehearsal-based models \\cite{GEM,EMR} memorize some data from old tasks and replay them when learning new knowledge. \n\nFew lifelong learning models focus on KG embedding. \nDiCGRL \\cite{DiCGRL} is a disentangle-based lifelong graph embedding model. \nIt splits node embeddings into different components and replays related historical facts to avoid catastrophic forgetting. \nThe work \\cite{CKGE} combines class-incremental learning models with TransE~\\cite{TransE} for continual KG embedding. 
\nHowever, it does not propose a specific lifelong KG embedding model. \n\n\n\n\\section{Lifelong Knowledge Graph Embedding}\n\\label{sec:method}\n\nIn this section, we first introduce our problem setting. \nThen, we present our model, LKGE\\xspace, in detail. \n\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[width=0.9\\textwidth]{figs_new\/overview_new.pdf}\n\\caption{Overview of the proposed model for lifelong KG embedding.}\n\\label{fig:overview}\n\\centering\n\\vspace{-5pt}\n\\end{figure*}\n\n\\subsection{Preliminaries}\n\n\\textbf{Growing KG.} \nThe growth process of a KG yields a snapshot sequence, i.e., $\\mathcal{G}=\\{\\mathcal{S}_1, \\mathcal{S}_2, \\ldots, \\mathcal{S}_t\\}$. \nEach snapshot $\\mathcal{S}_i$ is defined as a triplet $(\\mathcal{T}_i,\\mathcal{E}_i, \\mathcal{R}_i)$, \nwhere $\\mathcal{T}_i, \\mathcal{E}_i$ and $\\mathcal{R}_i$ denote the fact, entity and relation sets, respectively.\nWe have $\\mathcal{T}_i \\subseteq \\mathcal{T}_{i+1}$, $\\mathcal{E}_i \\subseteq \\mathcal{E}_{i+1}$ and $\\mathcal{R}_i \\subseteq \\mathcal{R}_{i+1}$.\nWe use $\\mathcal{T}_{\\Delta i}=\\mathcal{T}_{i}-\\mathcal{T}_{i-1}$, $\\mathcal{E}_{\\Delta i}=\\mathcal{E}_{i}-\\mathcal{E}_{i-1}$, and $\\mathcal{R}_{\\Delta i}=\\mathcal{R}_{i}-\\mathcal{R}_{i-1}$ to denote the new facts, entities and relations, respectively.\nEach fact is in the form of $(s, r, o)\\in\\mathcal{T}_i$, where $s,o\\in\\mathcal{E}_i$ are the subject and object entities, respectively, and $r\\in\\mathcal{R}_i$ is their relation. \n\n\\smallskip\n\\noindent\\textbf{Lifelong KG embedding.} \nKG embedding seeks to encode the symbolic representations of entities and relations into vector space and capture KG semantics using vector operations. \nFor a growing KG, a lifelong KG embedding model learns to represent the snapshot sequence $\\mathcal{G}=\\{\\mathcal{S}_1, \\mathcal{S}_2, \\ldots, \\mathcal{S}_t\\}$ continually. 
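The snapshot and delta-set definitions above can be sketched with plain Python sets (toy facts with hypothetical identifiers, for illustration only):

```python
# Toy sketch of a growing KG: snapshot 2 accumulates the facts of snapshot 1,
# and the delta sets are derived exactly as in the definitions above.
S1 = {("Carles_Puigdemont", "educated_at", "University_of_Girona")}
S2 = S1 | {("Carles_Puigdemont", "position_held", "President_of_Catalonia"),
           ("University_of_Girona", "located_in", "Girona")}

def entities(facts):
    # Entity set: all subjects and objects appearing in the facts
    return {s for s, _, _ in facts} | {o for _, _, o in facts}

def relations(facts):
    return {r for _, r, _ in facts}

# New facts, entities, and relations of snapshot 2 (the Delta sets)
delta_T = S2 - S1
delta_E = entities(S2) - entities(S1)
delta_R = relations(S2) - relations(S1)
```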
\nWhen a new fact set $\\mathcal{T}_{\\Delta i}$ emerges, the current KG embedding model $\\mathcal{M}_{ i-1}$ needs to be updated to fit the new facts and learn embeddings for the new entities $\\mathcal{E}_{\\Delta i}$ and new relations $\\mathcal{R}_{\\Delta i}$. \nThe resulting model is denoted by $\\mathcal{M}_i$.\n\n\\smallskip\n\\noindent\\textbf{Lifelong link prediction.} \nThe link prediction task asks the KG embedding model to predict the missing subject or object entity in an incomplete fact like $(s, r, ?)$ or $(?, r, o)$.\nFor each snapshot $\\mathcal{S}_i$, the new fact set $\\mathcal{T}_{\\Delta i}$ is divided into a training set $\\mathcal{D}_i$, a validation set $\\mathcal{V}_i$ and a test set $\\mathcal{Q}_i$. \nIn the lifelong setting, the model is required to learn the training data sets, $\\mathcal{D}_1,\\mathcal{D}_2,\\ldots,\\mathcal{D}_t$, in turn. \nAfter finishing the learning on $\\mathcal{D}_i$, the model is evaluated on the accumulated test data, which is $\\cup_{j=1}^i\\mathcal{Q}_j$, to assess the overall learning performance.\nAfter learning on $\\mathcal{D}_i$, both $\\mathcal{D}_i$ and $\\mathcal{V}_i$ are no longer available in the subsequent learning.\nNote that the goal of lifelong KG embedding is to improve the overall performance on all snapshots, which requires the KG embedding model to continually learn new knowledge and retain the learned knowledge.\n\n\\subsection{Model Overview}\nThe overview of our model, LKGE\\xspace, is shown in Figure~\\ref{fig:overview}.\nIt continually learns knowledge over a sequence of KG snapshots without re-training on previously seen data. 
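The accumulated evaluation protocol described above amounts to taking the union of all past test sets; a minimal sketch (toy test sets, hypothetical identifiers):

```python
# After finishing snapshot i, the model is evaluated on Q_1 ∪ ... ∪ Q_i.
Q = [{"q1", "q2"}, {"q3"}, {"q4", "q5"}]  # toy test sets, one per snapshot

def accumulated_test(Q, i):
    # Accumulated test data after learning snapshot i (1-indexed)
    out = set()
    for q in Q[:i]:
        out |= q
    return out
```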
\nThe foundation is a masked KG autoencoder that can reconstruct the entity and relation embeddings from the masked related subgraph.\nTo enable knowledge transfer from old snapshots to the new one, we propose embedding transfer to inject learned knowledge into unseen entities and relations and then iteratively optimize the model to fit the new facts.\nWe also propose a lightweight regularization method to retain old knowledge. \n\n\\subsection{Masked Knowledge Graph Autoencoder}\nIn lifelong KG embedding learning, the new facts, e.g., $\\mathcal{T}_{\\Delta i}$, bring new entities $\\mathcal{E}_{\\Delta i}$, and new relations $\\mathcal{R}_{\\Delta i}$. \nIt would also involve some old entities from $\\mathcal{E}_{i-1}$, and old relations from $\\mathcal{R}_{i-1}$.\nThus, a new entity may be connected with both new and old entities (referred to as new knowledge).\nThe involved old entities also receive more facts, and therefore their previously learned embeddings (referred to as old knowledge) need to be updated to fit the new facts.\nHence, the base KG embedding model for lifelong learning should be capable of capturing new knowledge as well as updating old knowledge.\nTo this end, we propose a masked KG autoencoder motivated by the recent success of self-supervised learning~\\cite{GraphMAE}.\nThe key idea is to reconstruct the embedding for an entity or relation based on its masked subgraph, which may include both other new entities and some old entities.\nSpecifically, we use the first-order subgraph of an entity or relation to reconstruct its embedding $\\bar{\\mathbf{x}}_i$:\n\\begin{equation}\n \\bar{\\mathbf{x}}_i = \\text{MAE}\\big(\\cup_{j=1}^i\\mathcal{N}_{j}(x)\\big),\n\\end{equation}\nwhere $x$ denotes either an entity or a relation, and $\\mathcal{N}_{j}\\subseteq\\mathcal{D}_j$ denotes the involved facts of $x$ in the $j$-th snapshot. 
$\\text{MAE}()$ is an encoder to represent the input subgraph.\nThe objective of our KG encoder is to align the entity or relation embedding with the reconstructed representation as follows:\n\\begin{equation}\n \\mathcal{L}_\\text{MAE} = \\sum_{e\\in\\mathcal{E}_i}\\Vert\\mathbf{e}_i-\\bar{\\mathbf{e}}_i\\Vert_2^2 + \\sum_{r\\in\\mathcal{R}_i}\\Vert\\mathbf{r}_i-\\bar{\\mathbf{r}}_i\\Vert_2^2.\n\\end{equation}\n\nThe key then becomes how to design an effective and efficient encoder for lifelong learning.\nGCN~\\cite{Kipf_GCN} and Transformer~\\cite{Transformer} are two common choices for the encoder. \nThe two encoders both introduce additional model parameters (e.g., the weight matrices).\nIn our lifelong learning setting, the encoder needs to be updated to fit new facts or subgraphs.\nIn this case, once the GCN or Transformer is updated, the changed model parameters would affect the embedding generation of \\emph{all} old entities (not just the involved old entities in new facts), increasing the risk of catastrophically forgetting previous snapshots. \nTo avoid this issue, we use the entity and relation embedding transition functions as encoders, which do not introduce additional parameters.\nWe borrow the idea of TransE \\cite{TransE} and interpret a relation embedding as the translation vector between the subject and object entity embeddings, i.e., $\\mathbf{s} + \\mathbf{r} \\approx \\mathbf{o}$,\nwhere $\\mathbf{s}, \\mathbf{r}, \\mathbf{o}$ denote the embeddings of subject entity, relation and object entity, respectively. 
\nBased on this, we can deduce two transition functions for entity and relation embeddings.\nThe subject entity of $(s,r,o)$ can be represented by $f_{\\rm{sub}}(\\mathbf{r}, \\mathbf{o})= \\mathbf{o} - \\mathbf{r}$,\nand the relation embedding is $f_{\\rm{rel}}(\\mathbf{s}, \\mathbf{o})= \\mathbf{o} - \\mathbf{s}$.\nWe can define the encoders as\n\\begin{align}\n\\label{eq:auto_ent_lifelong_1}\n \\bar{\\mathbf{e}}_i &= \\frac{\\sum_{j=1}^i\\sum_{(s, r, o) \\in \\mathcal{N}_j(e)} f_{\\rm{sub}}(\\mathbf{r}_i, \\mathbf{o}_i)}{\\sum_{j=1}^{i} |\\mathcal{N}_j(e)|},\\\\\n\\label{eq:auto_rel_lifelong_1}\n\\bar{\\mathbf{r}}_i &= \\frac{\\sum_{j=1}^i\\sum_{(s, r, o) \\in \\mathcal{N}_j(r)} f_{\\rm{rel}}(\\mathbf{s}_i, \\mathbf{o}_i)}{\\sum_{j=1}^{i} |\\mathcal{N}_j(r)|},\n\\end{align}\nwhere $\\mathbf{s}_i, \\mathbf{r}_i, \\mathbf{o}_i$ are the embeddings of $s,r,o$ during the training on $\\mathcal{D}_i$. \n$\\mathcal{N}_j(x)\\subseteq\\mathcal{D}_j$ is the set of facts containing $x$.\n\nIn lifelong learning, the model learns from a snapshot sequence. \nEqs.~(\\ref{eq:auto_ent_lifelong_1}) and (\\ref{eq:auto_rel_lifelong_1}) require training samples from the first $i$ snapshots, which are not in line with lifelong learning. 
\nTo reduce the reliance on learned data, we use $\\mathbf{e}_{i-1}$ and $\\mathbf{r}_{i-1}$ as the approximate average embeddings of $e$ and $r$ in the first $i-1$ snapshots, respectively, and rewrite the encoders as\n\\begin{equation}\n\\resizebox{.9\\columnwidth}{!}{$\n\\label{eq:auto_ent_lifelong_2}\n\\bar{\\mathbf{e}}_i \\approx \\frac{\\sum_{j=1}^{i-1}|\\mathcal{N}_j(e)| \\mathbf{e}_{i-1}+\\sum_{(s, r, o) \\in \\mathcal{N}_i(e)} f_{\\rm{sub}}(\\mathbf{r}_i, \\mathbf{o}_i)}\n {\\sum_{j=1}^{i-1}|\\mathcal{N}_j(e)|+ |\\mathcal{N}_i(e)|},\n$}\n\\end{equation}\n\\begin{equation}\n\\resizebox{.9\\columnwidth}{!}{$\n\\label{eq:auto_rel_lifelong_2}\n\\bar{\\mathbf{r}}_i \\approx \\frac{\\sum_{j=1}^{i-1}|\\mathcal{N}_j(r)| \\mathbf{r}_{i-1}+\\sum_{(s, r, o) \\in \\mathcal{N}_i(r)} f_{\\rm{rel}}(\\mathbf{s}_i, \\mathbf{o}_i)}\n {\\sum_{j=1}^{i-1}|\\mathcal{N}_j(r)| + |\\mathcal{N}_i(r)|}.\n$}\n\\end{equation}\n\nThe encoders use the facts involving both old and new entities and relations for embedding reconstruction and they build a bridge for knowledge transfer.\n\nFor each snapshot, to learn the knowledge from the new data and update the learned parameters, we leverage TransE \\cite{TransE} to train the embedding model:\n\\begin{equation}\n\\resizebox{.9\\columnwidth}{!}{$\n \\mathcal{L}_\\text{new} = \\sum\\limits_{(s, r, o)\\in\\mathcal{D}_i} \\max\\big(0,\\gamma+f(\\mathbf{s}, \\mathbf{r}, \\mathbf{o})-f(\\mathbf{s}', \\mathbf{r}, \\mathbf{o}')\\big),\n$}\n\\end{equation}\nwhere $\\gamma$ is the margin. $(\\mathbf{s}', \\mathbf{r}, \\mathbf{o}')$ is the embedding of a negative fact. \nFor each positive fact, we randomly replace the subject or object entity with a random entity $e'\\in\\mathcal{E}_i$.\n\n\n\\subsection{Embedding Transfer}\nDuring the lifecycle of a growing KG, there are abundant unseen entities and some unseen relations emerge with the new facts. 
\nLearning effective embeddings for them is an essential aspect of lifelong KG embedding learning.\nHowever, these unseen ones are not included in any learned snapshots, so only inheriting the learned parameters cannot transfer the acquired knowledge to their embeddings.\nTo avoid learning from scratch, we propose embedding transfer that seeks to leverage the learned embeddings to help represent unseen entities and relations.\nSpecifically, we initialize the embeddings of each unseen entity by aggregating its facts:\n\\begin{equation}\n \\mathbf{e}_{i} = \\frac{1}{|\\mathcal{N}_i(e)|}\\sum_{(e, r, o) \\in \\mathcal{N}_i(e)} f_{\\rm{sub}}(\\mathbf{r}_{i-1}, \\mathbf{o}_{i-1}) ,\n\\end{equation}\nwhere $\\mathcal{N}_i(e)\\subseteq\\mathcal{D}_i$ is the set of facts containing $e$. \nFor the new entities that do not have common facts involving existing entities, we randomly initialize their embeddings.\nWe also use this strategy to initialize the embeddings of unseen relations:\n\\begin{equation}\n \\mathbf{r}_i = \\frac{1}{|\\mathcal{N}_i(r)|}\\sum_{(s, r, o) \\in \\mathcal{N}_i(r)} f_{\\rm{rel}}(\\mathbf{s}_{i-1}, \\mathbf{o}_{i-1}),\n\\end{equation}\nwhere $\\mathcal{N}_i(r)\\subseteq\\mathcal{D}_i$ is the set of facts containing $r$.\n\n\\subsection{Embedding Regularization}\nLearning new snapshots is likely to overwrite the learned knowledge from old snapshots. 
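The embedding transfer above reduces to averaging translated neighbor embeddings; a minimal sketch (toy embeddings with hypothetical values, using $f_{\rm{sub}}(\mathbf{r}, \mathbf{o})=\mathbf{o}-\mathbf{r}$ as defined earlier):

```python
# Initialize an unseen entity embedding by averaging f_sub(r, o) = o - r over
# its new facts (e, r, o), using embeddings learned on the previous snapshot.
rel_emb = {"r1": [0.2, 0.0], "r2": [0.0, 0.4]}   # learned relation embeddings
ent_emb = {"o1": [1.0, 1.0], "o2": [0.6, 0.2]}   # learned entity embeddings

new_facts_of_e = [("e_new", "r1", "o1"), ("e_new", "r2", "o2")]

def f_sub(r, o):
    # f_sub(r, o) = o - r, from the TransE-style translation s + r ≈ o
    return [oi - ri for oi, ri in zip(o, r)]

vecs = [f_sub(rel_emb[r], ent_emb[o]) for _, r, o in new_facts_of_e]
e_init = [sum(c) / len(vecs) for c in zip(*vecs)]
```

If no fact of the unseen entity involves previously learned embeddings, the initialization falls back to random, as stated above.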
\nTo avoid catastrophic forgetting, some regularization methods \\cite{EWC,GEM} constrain the updates of parameters that are important to old tasks.\nThe loss function of regularization methods is\n\\begin{equation}\n\\resizebox{.88\\columnwidth}{!}{$\n \\mathcal{L}_\\text{old} = \\sum\\limits_{e\\in\\mathcal{E}_{i-1}}\\omega(e)\\Vert\\mathbf{e}_i-\\mathbf{e}_{i-1}\\Vert_2^2 + \\sum\\limits_{r\\in\\mathcal{R}_{i-1}}\\omega(r)\\Vert\\mathbf{r}_i-\\mathbf{r}_{i-1}\\Vert_2^2,\n $}\n\\end{equation}\nwhere $\\omega(x)$ is the regularization weight for $x$.\n\nConventional regularization-based methods for classification tasks such as \\cite{EWC} model the importance of each parameter at a high cost based on the gradient or parameter change during training. \nThis problem is even more severe for KG embedding models that have a large number of embedding parameters (i.e. entity and relation embeddings).\nTo resolve this problem, we propose a lightweight embedding regularization method, which calculates the regularization weight of each entity or relation by the number of new and old facts containing it:\n\\begin{equation}\n \\omega(x) = 1 - \\frac{|\\mathcal{N}_{i}(x)|}{\\sum_{j=1}^i|\\mathcal{N}_{j}(x)|}.\n\\end{equation}\n\nAs a lightweight technique, it only keeps the total number of involved trained facts for each entity or relation and only updates regularization weights once per snapshot.\n\n\\subsection{Overall Learning Objective}\nTo learn new knowledge while retaining acquired knowledge, the overall lifelong learning objective $\\mathcal{L}$ is defined as follows:\n\\begin{equation}\n \\mathcal{L} = \\mathcal{L}_\\text{new} + \\alpha\\,\\mathcal{L}_\\text{old} + \\beta\\,\\mathcal{L}_\\text{MAE},\n\\end{equation}\nwhere $\\alpha,\\beta$ are hyperparameters for balancing the objectives.\n\n\\subsection{Complexity Analysis}\nCompared with fine-tuning, the proposed model requires few extra resources. 
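The lightweight weighting scheme above illustrates why: per entity or relation, only one cumulative fact count has to be stored and updated once per snapshot (a minimal sketch; function names are ours):

```python
def reg_weight(new_count, cumulative_count):
    """omega(x) = 1 - |N_i(x)| / sum_{j<=i} |N_j(x)|.

    `cumulative_count` is the stored total over snapshots 1..i-1;
    the model only keeps this single number per entity/relation.
    """
    total = cumulative_count + new_count
    return 1.0 - new_count / total

# an entity seen in 3 old facts and 1 new fact is strongly anchored ...
w_old = reg_weight(new_count=1, cumulative_count=3)
# ... while a freshly appeared entity is left free to move
w_new = reg_weight(new_count=4, cumulative_count=0)
```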
\nIt does not increase the size of training samples like the rehearsal models \\cite{EMR, GEM, DiCGRL}. \nIn addition to fine-tuning, the proposed model calculates the loss of masked autoencoder and embedding regularization. \nThe additional time complexity in each iteration is $O(|\\mathcal{E}|+|\\mathcal{R}|+|\\mathcal{D}|)$. \nIn practice, we find that the loss of autoencoder can accelerate learning, and its time consumption is close to that of fine-tuning. \nThe space complexity of fine-tuning is $O((|\\mathcal{E}|+|\\mathcal{R}|)\\times d)$, and the space complexity of the proposed model is $O((|\\mathcal{E}|+|\\mathcal{R}|)\\times(d+1))$, where $d$ is the dimension of embeddings.\n\n\\begin{table*}\n\\centering\n\\resizebox{\\linewidth}{!}{\n \\begin{tabular}{lcccccccccccccccccccc}\n \\toprule\n \\multirow{2}{*}{Datasets} & \\multicolumn{3}{c}{Snapshot 1} & \\multicolumn{3}{c}{Snapshot 2} & \\multicolumn{3}{c}{Snapshot 3} & \\multicolumn{3}{c}{Snapshot 4} & \\multicolumn{3}{c}{Snapshot 5} \\\\ \n \\cmidrule(lr){2-4} \\cmidrule(lr){5-7} \\cmidrule(lr){8-10} \\cmidrule(lr){11-13} \\cmidrule(lr){14-16} & $|\\mathcal{T}_{\\Delta 1}|$ & $|\\mathcal{E}_1|$ & $|\\mathcal{R}_1|$ & $|\\mathcal{T}_{\\Delta 2}|$ & $|\\mathcal{E}_2|$ & $|\\mathcal{R}_2|$ & $|\\mathcal{T}_{\\Delta 3}|$ & $|\\mathcal{E}_3|$ & $|\\mathcal{R}_3|$ & $|\\mathcal{T}_{\\Delta 4}|$ & $|\\mathcal{E}_4|$ & $|\\mathcal{R}_4|$ & $|\\mathcal{T}_{\\Delta 5}|$ & $|\\mathcal{E}_5|$ & $|\\mathcal{R}_5|$ \\\\\n \\midrule\n \\textsc{Entity} & 46,388 & \\ \\ 2,909 & 233 & 72,111 & \\ \\ 5,817 & 236 & 73,785 & \\ \\ 8,275 & 236 & \\ \\ 70,506 & 11,633 & 237 & 47,326 & 14,541 & 237 \\\\\n \\textsc{Relation} & 98,819 & 11,560 & \\ \\ 48 & 93,535 & 13,343 & \\ \\ 96 & 66,136 & 13,754 & 143 & \\ \\ 30,032 & 14,387 & 190 & 21,594 & 14,541 & 237 \\\\\n \\textsc{Fact} & 62,024 & 10,513 & 237 & 62,023 & 12,779 & 237 & 62,023 & 13,586 & 237 & \\ \\ 62,023 & 13,894 & 237 & 62,023 & 14,541 & 237 \\\\\n \\textsc{Hybrid} 
& 57,561 & \\ \\ 8,628 & \\ \\ 86 & 20,873 & 10,040 & 102 & 88,017 & 12,779 & 151 & 103,339 & 14,393 & 209 & 40,326 & 14,541 & 237 \\\\\n \\bottomrule\n\\end{tabular}}\n\\caption{Statistical data of the four constructed growing KG datasets. \nFor the $i$-th snapshot, $\\mathcal{T}_{\\Delta i}$ denotes the set of new facts in this snapshot, and $\\mathcal{E}_i, \\mathcal{R}_{i}$ denote the sets of cumulative entities and relations in the first $i$ snapshots, respectively.}\n\\label{tab:datasets}\n\\end{table*} \n\n\\section{Dataset Construction}\n\\label{sec:datasets}\n\nTo simulate a variety of aspects of KG growth, we create four datasets based on FB15K-237 \\cite{FB237}, which are entity-centric, relation-centric, fact-centric, and hybrid.\nWe denote them by \\textsc{Entity}, \\textsc{Relation}, \\textsc{Fact} and \\textsc{Hybrid}, respectively. \nGiven a KG $\\mathcal{G}=\\{\\mathcal{E}, \\mathcal{R}, \\mathcal{T}\\}$, we construct five snapshots with the following steps:\n\\begin{enumerate}\n\\item \\textbf{Seeding.} \nWe randomly sample 10 facts from $\\mathcal{T}$ and add them into $\\mathcal{T}_1$ for initialization. \nThe entities and relations in the 10 facts form the initial $\\mathcal{E}_1$ and $\\mathcal{R}_1$, respectively.\n\n\\item \\textbf{Expanding.} \nTo build \\textsc{Entity}, \\textsc{Relation} and \\textsc{Fact}, we iteratively sample a fact containing at least one seen entity in $\\mathcal{E}_i$, add it into $\\mathcal{T}_i$, and extract the unseen entity and relation from it to expand $\\mathcal{E}_i$ and $\\mathcal{R}_i$. \nFor \\textsc{Entity}, once $|\\mathcal{E}_i| \\geq \\frac{i+1}{5}|\\mathcal{E}|$, we add all new facts $\\big\\{(s, r, o)\\,|\\,s\\in\\mathcal{E}_i\\wedge o\\in\\mathcal{E}_i \\big\\}$ into $\\mathcal{T}_i$ and start building the next snapshot. \nIn the same way, we construct \\textsc{Relation} and \\textsc{Fact}. 
\nAs for \textsc{Hybrid}, we uniformly sample an entity, relation or fact without replacement from $\mathcal{U}=\mathcal{E}\cup\mathcal{R}\cup\mathcal{T}$ to join $\mathcal{E}_i$, $\mathcal{R}_i$ and $\mathcal{T}_i$. \nNote that when a sampled fact contains an unseen entity or relation, we re-sample a fact that only contains seen entities and relations to replace it.\nAfter each iteration, we terminate the expansion of the current snapshot with probability $\frac{5}{|\mathcal{U}|}$. \nConsequently, the expansion of \textsc{Hybrid} is uneven, making it more realistic and challenging.\nFor all datasets, we take the whole KG as the last snapshot, i.e., $\mathcal{T}_5=\mathcal{T}$, $\mathcal{E}_5=\mathcal{E}$ and $\mathcal{R}_5=\mathcal{R}$.\n\n\item \textbf{Dividing.} \nFor each snapshot, we randomly divide the new fact set $\mathcal{T}_{\Delta i}$ into a training set $\mathcal{D}_i$, a validation set $\mathcal{V}_i$ and a test set $\mathcal{Q}_i$ by a split ratio of 3:1:1. \n\end{enumerate}\n\nThe statistics of the four datasets are presented in Table~\ref{tab:datasets}. 
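The dividing step can be sketched as a seeded shuffle-and-slice (sizes follow the 3:1:1 ratio, with any remainder assigned to the test set; the exact tie-breaking is our assumption):

```python
import random

def divide(new_facts, seed=0):
    """Split a snapshot's new fact set into train : valid : test = 3:1:1."""
    facts = list(new_facts)
    random.Random(seed).shuffle(facts)   # deterministic given the seed
    n = len(facts)
    n_train, n_valid = 3 * n // 5, n // 5
    return (facts[:n_train],
            facts[n_train:n_train + n_valid],
            facts[n_train + n_valid:])

train, valid, test = divide(range(100))
```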
\n\n\section{Experimental Results}\n\label{sec:experiments}\n\nWe conduct experiments regarding link prediction accuracy, knowledge transfer capability, and learning efficiency to validate the proposed model, LKGE.\nThe datasets and source code are available at \url{https:\/\/github.com\/nju-websoft\/LKGE}.\n\n\subsection{Experiment Settings}\n\noindent\textbf{Competitors.} We compare our model with 12 competitors, including \n(\romannumeral1) three baseline models: snapshot only, re-training, and fine-tuning; \n(\romannumeral2) two inductive models: MEAN \cite{MEAN}, and LAN \cite{LAN}; \n(\romannumeral3) two dynamic architecture models: PNN \cite{PNN}, and CWR \cite{CWR};\n(\romannumeral4) two regularization-based models: SI \cite{SI}, and EWC \cite{EWC}; \nand (\romannumeral5) three rehearsal-based models: GEM \cite{GEM}, EMR \cite{EMR}, and DiCGRL \cite{DiCGRL}.\n\n\smallskip\n\noindent\textbf{Evaluation metrics.} Following the convention, we conduct the experiments on link prediction. \nGiven a snapshot $\mathcal{S}_i$, for each test fact $(s, r, o)\in\mathcal{Q}_i$, we construct two queries $(s, r, ?)$ and $(?, r, o)$. \nWhen evaluating on $\mathcal{Q}_i$, we set all seen entities in $\mathcal{E}_i$ as candidate entities. \nWe select seven metrics to evaluate all models, including \n(\romannumeral1) Four metrics on link prediction accuracy: mean reciprocal rank (MRR) and Hits@$k$ ($k=1,3,10$, and H@$k$ for short). 
\nWe conduct the model $\\mathcal{M}_5$ trained on the last snapshot to evaluate on the union of the test sets in all snapshots.\n(\\romannumeral2) Two metrics on knowledge transfer capability: forward transfer (FWT) and backward transfer (BWT) \\cite{GEM}.\nFWT is the influence of learning a task to the performance on the future tasks, while BWT is the influence of learning to the previous tasks: \n\\begin{align}\n\\resizebox{.88\\columnwidth}{!}{$\n\\text{FWT} = \\frac{1}{n-1}\\sum\\limits_{i=2}^{n} h_{i-1,i},\\ \\text{BWT} = \\frac{1}{n-1}\\sum\\limits_{i=1}^{n-1} (h_{n,i} - h_{i,i}),\n$}\n\\end{align}\nwhere $n$ is the number of snapshots, $h_{i,j}$ is the MRR scores on $\\mathcal{Q}_j$ after training the model $\\mathcal{M}_i$ on the $i$-th snapshot.\nHigher scores indicate better performance.\n(\\romannumeral3) Time cost: The cumulative time cost of the learning on each snapshot.\n\n\n\\begin{table*}[!t]\n\\centering\n\\resizebox{1.0\\textwidth}{!}{\\Large\n\\begin{tabular}{lcccccccccccccccc}\n\\toprule\n\\multirow{2}{*}{Models} & \\multicolumn{4}{c}{\\textsc{Entity}} & \\multicolumn{4}{c}{\\textsc{Relation}} & \\multicolumn{4}{c}{\\textsc{Fact}} & \\multicolumn{4}{c}{\\textsc{Hybrid}} \\\\\n\\cmidrule(lr){2-5} \\cmidrule(lr){6-9} \\cmidrule(lr){10-13} \\cmidrule(lr){14-17} & MRR & H@1 & H@3 & H@10 & MRR & H@1 & H@3 & H@10 & MRR & H@1 & H@3 & H@10 & MRR & H@1 & H@3 & H@10 \\\\\n\\midrule\nSnapshot\t& $.084_{\\pm .001}$ \t& $.028_{\\pm .000}$ \t& $.107_{\\pm .002}$ \t& $.193_{\\pm .003}$ \t& $.021_{\\pm .000}$ \t& $.010_{\\pm .000}$ \t& $.023_{\\pm .000}$ \t& $.043_{\\pm .001}$ \t& $.082_{\\pm .001}$ \t& $.030_{\\pm .001}$ \t& $.095_{\\pm .002}$ \t& $.191_{\\pm .006}$ \t& $.036_{\\pm .001}$ \t& $.015_{\\pm .001}$ \t& $.043_{\\pm .001}$ \t& $.077_{\\pm .003}$ \t\\\\\nRe-train\t& $.236_{\\pm .001}$ \t& $.137_{\\pm .001}$ \t& $.274_{\\pm .001}$ \t& $.433_{\\pm .001}$ \t& $.219_{\\pm .001}$ \t& $.128_{\\pm .001}$ \t& $.250_{\\pm .001}$ \t& $.403_{\\pm .002}$ \t& 
$.206_{\\pm .001}$ \t& $.118_{\\pm .001}$ \t& $.232_{\\pm .001}$ \t& $.385_{\\pm .001}$ \t& $.227_{\\pm .001}$ \t& $.134_{\\pm .001}$ \t& $.260_{\\pm .002}$ \t& $.413_{\\pm .001}$ \t\\\\\nFine-tune\t& $.165_{\\pm .002}$ \t& $.085_{\\pm .002}$ \t& $.188_{\\pm .003}$ \t& $.321_{\\pm .003}$ \t& $.093_{\\pm .003}$ \t& $.039_{\\pm .002}$ \t& $.106_{\\pm .003}$ \t& $.195_{\\pm .007}$ \t& $.172_{\\pm .003}$ \t& $.090_{\\pm .002}$ \t& $.193_{\\pm .004}$ \t& $.339_{\\pm .005}$ \t& $.135_{\\pm .002}$ \t& $.069_{\\pm .001}$ \t& $.151_{\\pm .003}$ \t& $.262_{\\pm .005}$ \t\\\\\n\\midrule\nMEAN\t& $.117_{\\pm .005}$ \t& $.068_{\\pm .003}$ \t& $.123_{\\pm .006}$ \t& $.212_{\\pm .007}$ \t& $.039_{\\pm .004}$ \t& $.024_{\\pm .005}$ \t& $.040_{\\pm .005}$ \t& $.067_{\\pm .008}$ \t& $.084_{\\pm .008}$ \t& $.051_{\\pm .005}$ \t& $.088_{\\pm .008}$ \t& $.146_{\\pm .015}$ \t& $.046_{\\pm .004}$ \t& $.029_{\\pm .003}$ \t& $.049_{\\pm .003}$ \t& $.080_{\\pm .004}$ \t\\\\\nLAN\t& $.141_{\\pm .004}$ \t& $.082_{\\pm .004}$ \t& $.149_{\\pm .003}$ \t& $.256_{\\pm .005}$ \t& $.052_{\\pm .003}$ \t& $.033_{\\pm .003}$ \t& $.052_{\\pm .004}$ \t& $.092_{\\pm .008}$ \t& $.106_{\\pm .007}$ \t& $.056_{\\pm .006}$ \t& $.113_{\\pm .007}$ \t& $.200_{\\pm .011}$ \t& $.059_{\\pm .005}$ \t& $.032_{\\pm .005}$ \t& $.062_{\\pm .005}$ \t& $.113_{\\pm .007}$ \t\\\\\n\\midrule\nPNN\t& $.229_{\\pm .001}$ \t& $.130_{\\pm .001}$ \t& $.265_{\\pm .001}$ \t& $\\textbf{.425}_{\\pm .001}$ \t& $.167_{\\pm .002}$ \t& $.096_{\\pm .001}$ \t& $.191_{\\pm .001}$ \t& $.305_{\\pm .001}$ \t& $.157_{\\pm .000}$ \t& $.084_{\\pm .002}$ \t& $.188_{\\pm .001}$ \t& $.290_{\\pm .001}$ \t& $.185_{\\pm .001}$ \t& $.101_{\\pm .001}$ \t& $.216_{\\pm .001}$ \t& $.349_{\\pm .001}$ \t\\\\\nCWR\t& $.088_{\\pm .002}$ \t& $.028_{\\pm .001}$ \t& $.114_{\\pm .004}$ \t& $.202_{\\pm .007}$ \t& $.021_{\\pm .000}$ \t& $.010_{\\pm .000}$ \t& $.024_{\\pm .000}$ \t& $.043_{\\pm .000}$ \t& $.083_{\\pm .001}$ \t& $.030_{\\pm .002}$ \t& $.095_{\\pm .002}$ 
\t& $.192_{\\pm .005}$ \t& $.037_{\\pm .001}$ \t& $.015_{\\pm .001}$ \t& $.044_{\\pm .002}$ \t& $.077_{\\pm .002}$ \t\\\\\n\\midrule\nSI\t& $.154_{\\pm .003}$ \t& $.072_{\\pm .003}$ \t& $.179_{\\pm .003}$ \t& $.311_{\\pm .004}$ \t& $.113_{\\pm .002}$ \t& $.055_{\\pm .002}$ \t& $.131_{\\pm .002}$ \t& $.224_{\\pm .002}$ \t& $.172_{\\pm .004}$ \t& $.088_{\\pm .003}$ \t& $.194_{\\pm .004}$ \t& $.343_{\\pm .005}$ \t& $.111_{\\pm .004}$ \t& $.049_{\\pm .003}$ \t& $.126_{\\pm .006}$ \t& $.229_{\\pm .006}$ \t\\\\\nEWC\t& $.229_{\\pm .001}$ \t& $.130_{\\pm .001}$ \t& $.264_{\\pm .002}$ \t& $.423_{\\pm .001}$ \t& $.165_{\\pm .005}$ \t& $.093_{\\pm .005}$ \t& $.190_{\\pm .005}$ \t& $.306_{\\pm .006}$ \t& $.201_{\\pm .001}$ \t& $.113_{\\pm .001}$ \t& $.229_{\\pm .001}$ \t& $.382_{\\pm .001}$ \t& $.186_{\\pm .004}$ \t& $.102_{\\pm .003}$ \t& $.214_{\\pm .004}$ \t& $.350_{\\pm .004}$ \t\\\\\n\\midrule\nGEM\t& $.165_{\\pm .002}$ \t& $.085_{\\pm .002}$ \t& $.188_{\\pm .002}$ \t& $.321_{\\pm .002}$ \t& $.093_{\\pm .001}$ \t& $.040_{\\pm .002}$ \t& $.106_{\\pm .002}$ \t& $.196_{\\pm .002}$ \t& $.175_{\\pm .004}$ \t& $.092_{\\pm .003}$ \t& $.196_{\\pm .005}$ \t& $.345_{\\pm .007}$ \t& $.136_{\\pm .003}$ \t& $.070_{\\pm .001}$ \t& $.152_{\\pm .004}$ \t& $.263_{\\pm .005}$ \t\\\\\nEMR\t& $.171_{\\pm .002}$ \t& $.090_{\\pm .001}$ \t& $.195_{\\pm .002}$ \t& $.330_{\\pm .003}$ \t& $.111_{\\pm .002}$ \t& $.052_{\\pm .002}$ \t& $.126_{\\pm .003}$ \t& $.225_{\\pm .004}$ \t& $.171_{\\pm .004}$ \t& $.090_{\\pm .003}$ \t& $.191_{\\pm .004}$ \t& $.337_{\\pm .006}$ \t& $.141_{\\pm .002}$ \t& $.073_{\\pm .001}$ \t& $.157_{\\pm .002}$ \t& $.267_{\\pm .003}$ \t\\\\\nDiCGRL\t& $.107_{\\pm .009}$ \t& $.057_{\\pm .009}$ \t& $.110_{\\pm .008}$ \t& $.211_{\\pm .009}$ \t& $.133_{\\pm .007}$ \t& $.079_{\\pm .005}$ \t& $.147_{\\pm .009}$ \t& $.241_{\\pm .012}$ \t& $.162_{\\pm .007}$ \t& $.084_{\\pm .007}$ \t& $.189_{\\pm .008}$ \t& $.320_{\\pm .007}$ \t& $.149_{\\pm .005}$ \t& $.083_{\\pm .004}$ \t& 
$.168_{\\pm .005}$ \t& $.277_{\\pm .008}$ \t\\\\\n\\midrule\nLKGE\t& $\\textbf{.234}_{\\pm .001}$ \t& $\\textbf{.136}_{\\pm .001}$ \t& $\\textbf{.269}_{\\pm .002}$ \t& $\\textbf{.425}_{\\pm .003}$ \t& $\\textbf{.192}_{\\pm .000}$ \t& $\\textbf{.106}_{\\pm .001}$ \t& $\\textbf{.219}_{\\pm .001}$ \t& $\\textbf{.366}_{\\pm .002}$ \t& $\\textbf{.210}_{\\pm .002}$ \t& $\\textbf{.122}_{\\pm .001}$ \t& $\\textbf{.238}_{\\pm .002}$ \t& $\\textbf{.387}_{\\pm .002}$ \t& $\\textbf{.207}_{\\pm .002}$ \t& $\\textbf{.121}_{\\pm .002}$ \t& $\\textbf{.235}_{\\pm .002}$ \t& $\\textbf{.379}_{\\pm .003}$ \t\\\\\n\\bottomrule\n\\end{tabular}}\n\\caption{Result comparison of link prediction on the union of the test sets in all snapshots.}\n\\label{tab:main_results}\n\\vspace{-5pt}\n\\end{table*}\n\n\\smallskip\n\\noindent\\textbf{Implementation details.}\nWe use TransE \\cite{TransE} as the base model and modify the competitors to do our task:\n\\begin{itemize}\n \\item \\textit{Snapshot only}. For the $i$-th snapshot, we reinitialize and train a model only on the training set $\\mathcal{D}_i$.\n \n \\item \\textit{Re-training}. For the $i$-th snapshot, we reinitialize and train a model on the accumulated training data $\\cup_{j=1}^i \\mathcal{D}_j$.\n \n \\item \\textit{Fine-tuning}. For the $i$-th snapshot, the model inherits the learned parameters of the model trained on the previous snapshots, and we incrementally train it on $\\mathcal{D}_i$.\n \n \\item \\textit{Inductive} models. We train each model on the first snapshot and obtain the embeddings of unseen entities in the following snapshots by neighborhood aggregation.\n \n \\item \\textit{Dynamic architecture} models. For PNN, \n following the implementation of \\cite{CKGE}, \n \n we freeze the parameters learned on previous snapshots and update new parameters. For CWR, after training on $\\mathcal{D}_1$, we replicate a model as the consolidated model. 
For the following $i$-th snapshot, we reinitialize and train a temporary model on $\mathcal{D}_i$, and merge the temporary model into the consolidated model by copying new parameters or averaging old ones.\n \n \item \textit{Regularization} models. Since the base model parameters increase with the emergence of unseen entities and relations, we only use the parameters learned from the previous snapshot to calculate the regularization loss.\n \n \item \textit{Rehearsal} models. We store 5,000 training facts from previous snapshots and add them to the current training set of the $i$-th snapshot. After learning, we randomly replace half of these facts with those in $\mathcal{D}_i$.\n\end{itemize}\n\nFor a fair comparison, we first tune the hyperparameters of the base model using grid search: learning rate in \{0.0005, 0.0001, 0.001\}, batch size in \{1024, 2048\}, embedding dimension in \{100, 200\}.\nThen, we use the same base model for LKGE\xspace and all competitors, and tune other hyperparameters.\nFor the regularization models, the $\alpha$ of regularization loss is in \{0.01, 0.1, 1.0\}. \nFor our model, the $\beta$ of MAE loss is in \{0.01, 0.1, 1.0\}.\nFor all competitors, we use the Adam optimizer and set the patience of early stopping to 3.\nAll experiments are conducted with two NVIDIA RTX 3090 GPUs, two Intel Xeon Gold 5122 CPUs, and 384GB RAM.\n\n\n\subsection{Link Prediction Accuracy}\n\nIn Table~\ref{tab:main_results}, we run experiments with 5 random seeds for all models on our four link prediction datasets and report the means and standard deviations. \nThe results show that:\n(\romannumeral1) our model consistently achieves the best performance across all datasets. Some results of our model on \textsc{Fact} even outperform the re-training model. \nThis is because our masked KG autoencoder effectively improves information propagation based on both old and new embeddings, and the embedding regularization avoids catastrophic forgetting. 
\nFurthermore, most competitors only work well on \textsc{Entity}, while our model shows stable and promising results on all these datasets.\n(\romannumeral2) Re-training is far superior to most other baseline models on \textsc{Relation} and \textsc{Hybrid}, while the gaps on \textsc{Entity} and \textsc{Fact} are small. \nWe believe this is because the KG embedding model learns two aspects of knowledge: relational patterns and entity embeddings. \nIn \textsc{Entity} and \textsc{Fact}, the relational patterns are stable, while in \textsc{Relation} and \textsc{Hybrid}, the relational patterns are constantly changing due to unseen relations. \nThese phenomena illustrate that the variation of relational patterns is more challenging for lifelong KG embedding.\n(\romannumeral3) The inductive models are only trained on the first snapshot, and cannot transfer knowledge to unseen relations. \nSo their results are lower than those of other models, especially on \textsc{Relation} and \textsc{Hybrid}, which contain many unseen relations. \n(\romannumeral4) Since the learned parameters are not updated, PNN preserves the learned knowledge well. \nBut on \textsc{Fact}, which has only a few unseen entities, it lacks new learnable parameters and does not perform well. \nConversely, CWR averages the old and new model parameters, which does not work well on the embedding learning task. \n(\romannumeral5) EWC performs well because it can model the importance of each parameter using Fisher information matrices computed on the learned snapshots to avoid catastrophic forgetting. \n(\romannumeral6) Unlike in classification tasks, most parameters of KG embedding models correspond to specific entities or relations, so the training data cannot be divided into a few types, and we cannot use 5,000 old samples to replay the learned facts for all entities. 
\nTherefore, the performance of GEM, EMR and DiCGRL is limited.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=.82\\columnwidth]{figs_new\/mrr.pdf}\n\\caption{MRR changes. $\\mathcal{M}_i$ is trained for the $i$-th snapshot and evaluated using the test data of previous snapshots 1 to $i$.}\n\\label{fig:mrr}\n\\vspace{-5pt}\n\\end{figure}\n\n\\begin{table*}[!t]\n\\centering\n\\resizebox{1.0\\textwidth}{!}{\n\\begin{tabular}{lcccccccccccccccc}\n\\toprule\n\\multirow{2}{*}{Variants} & \\multicolumn{4}{c}{\\textsc{Entity}} & \\multicolumn{4}{c}{\\textsc{Relation}} & \\multicolumn{4}{c}{\\textsc{Fact}} & \\multicolumn{4}{c}{\\textsc{Hybrid}} \\\\\n\\cmidrule(lr){2-5} \\cmidrule(lr){6-9} \\cmidrule(lr){10-13} \\cmidrule(lr){14-17} & MRR & H@1 & H@3 & H@10 & MRR & H@1 & H@3 & H@10 & MRR & H@1 & H@3 & H@10 & MRR & H@1 & H@3 & H@10 \\\\\n\\midrule\nLKGE (full) & 0.234 \t& 0.136 \t& 0.269 \t& 0.425 \t& 0.192 \t& 0.106 \t& 0.219 \t& 0.366 \t& 0.210 \t& 0.122 \t& 0.238 \t& 0.387 \t& 0.207 \t& 0.121 \t& 0.235 \t& 0.379 \t\\\\\n$-$ fine-tuning\t& 0.123\t& 0.068\t& 0.136\t& 0.225\t& 0.126\t& 0.073\t& 0.146\t& 0.231\t& 0.154\t& 0.091\t& 0.176\t& 0.269\t& 0.109\t& 0.060\t& 0.126\t& 0.201\t\\\\\n$-$ masked KG autoencoder\t& 0.222\t& 0.124\t& 0.255\t& 0.415\t& 0.185\t& 0.100\t& 0.212\t& 0.355\t& 0.191\t& 0.105\t& 0.215\t& 0.369\t& 0.198\t& 0.111\t& 0.227\t& 0.367\t\\\\\n$-$ embedding transfer\t& 0.240\t& 0.141\t& 0.275\t& 0.433\t& 0.174\t& 0.091\t& 0.201\t& 0.339\t& 0.210\t& 0.123\t& 0.237\t& 0.390\t& 0.200\t& 0.112\t& 0.229\t& 0.372\t\\\\\n$-$ regularization\t& 0.166\t& 0.089\t& 0.184\t& 0.316\t& 0.040\t& 0.014\t& 0.049\t& 0.089\t& 0.175\t& 0.095\t& 0.195\t& 0.338\t& 0.154\t& 0.079\t& 0.171\t& 0.300\t\\\\\n\\bottomrule\n\\end{tabular}}\n\\caption{Ablation results of link prediction on the union of the test sets in all snapshots.}\n\\label{tab:ablation}\n\\vspace{-5pt}\n\\end{table*}\n\nTo show the performance evolution of our model during the learning process, we 
evaluate the model $\mathcal{M}_i$ trained for the $i$-th snapshot using the test data from previous snapshots.\nWe report the MRR results in Figure~\ref{fig:mrr}. \nWe can see that LKGE\xspace can stably maintain the learned knowledge during lifelong learning. \nMoreover, on some snapshots like \textsc{Entity} Snapshot 3, the knowledge update improves the performance on old test data,\nwhich shows that our update of old knowledge has the potential for backward knowledge transfer.\n\n\subsection{Knowledge Transfer Capability}\n\nTo evaluate the knowledge transfer and retention capability of all models, we report the FWT and BWT scores of MRR in Figure~\ref{fig:knowledge_transfer}. \nBecause of the embedding transfer, the FWT of LKGE\xspace is higher than that of all lifelong learning competitors. \nEven on \textsc{Relation} and \textsc{Hybrid} where the KG schema changes, LKGE\xspace still retains a good FWT capability. \nMEAN and LAN are designed to transfer knowledge forward to embed new entities. \nSo, they work well on \textsc{Entity}.\nHowever, their FWT capability is limited on other datasets since they cannot update the old embeddings to adapt to new snapshots.\n\nOn the other hand, BWT is usually negative due to the overwriting of learned knowledge. \nPNN, MEAN, and LAN do not update the learned parameters, so their BWT scores are ``NA''. \nThe poor BWT scores of CWR show the harmful effects of the averaging operation.\nThe BWT capability of rehearsal models is also not good because they cannot store enough facts. \nLKGE\xspace achieves good BWT scores as the embedding regularization balances learning new embeddings against updating old ones.\n\n\begin{figure}[t]\n\includegraphics[width=\columnwidth]{figs_new\/FWT_BWT.pdf}\n\caption{Forward transfer and backward transfer of MRR.}\n\label{fig:knowledge_transfer}\n\vspace{-5pt}\n\end{figure}\n\n\subsection{Learning Efficiency}\nHere, we compare the training time. 
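For reference, the FWT and BWT scores defined earlier can be computed from the matrix $h$ of MRR results ($h_{i,j}$: score on $\\mathcal{Q}_j$ after training $\\mathcal{M}_i$) as follows (0-indexed sketch with toy numbers):

```python
def fwt_bwt(h):
    """FWT = mean of h[i-1][i]; BWT = mean of h[n-1][i] - h[i][i]."""
    n = len(h)
    fwt = sum(h[i - 1][i] for i in range(1, n)) / (n - 1)
    bwt = sum(h[n - 1][i] - h[i][i] for i in range(n - 1)) / (n - 1)
    return fwt, bwt

# toy 3-snapshot run; the diagonal is the score right after training
h = [[0.30, 0.10, 0.05],
     [0.25, 0.35, 0.12],
     [0.24, 0.33, 0.40]]
fwt, bwt = fwt_bwt(h)
```

A negative BWT, as here, reflects forgetting: the final model scores below each snapshot's just-trained performance.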
\nWe report the time cost on \textsc{Fact} as all snapshots of \textsc{Fact} have the same training set size, which makes comparison easier.\nFigure~\ref{fig:time_cost} shows the results.\nUnsurprisingly, re-training is the most time-consuming. \nSnapshot only is also costly because it cannot inherit knowledge from previous snapshots. \nBy contrast, our model is the most efficient, and its advantage is even more significant in the final snapshot. \nThis is because the embedding transfer can use the learned knowledge to accelerate the learning of new facts.\n\n\n\begin{figure}[t]\n\includegraphics[width=\columnwidth]{figs_new\/time.pdf}\n\caption{Cumulative time cost on \textsc{Fact}.}\n\label{fig:time_cost}\n\vspace{-5pt}\n\end{figure}\n\n\n\subsection{Ablation Study}\n\label{subsec:ablation}\nWe conduct an ablation study to validate the effectiveness of each model component. \nWe design four variants of LKGE\xspace: ``w\/o fine-tuning'', ``w\/o masked KG autoencoder'', ``w\/o embedding transfer'' and ``w\/o regularization''. \nThe ``w\/o fine-tuning'' variant is trained on $\mathcal{D}_1$ and performs the embedding transfer on the other $\mathcal{D}_i$. \nThe latter three variants each disable the corresponding component of our model. \nThe results are shown in Table~\ref{tab:ablation}. \nWe observe that \n(\romannumeral1) although fine-tuning is disabled, ``w\/o fine-tuning'' can still perform well with only the knowledge from the first snapshot, showing that embedding transfer can effectively transfer learned knowledge to unseen entities and relations. \n(\romannumeral2) Compared with the full model, both ``w\/o masked KG autoencoder'' and ``w\/o regularization'' drop significantly, demonstrating the effects of the masked KG autoencoder and knowledge retention.\n(\romannumeral3) Embedding transfer enables the model to be trained at a starting point closer to the optimal parameters and stabilizes the embedding space. 
There are declines when using the embedding transfer on \\textsc{Entity}. This is because the \\textsc{Entity} contains massive new entities and needs more plasticity rather than stability. However, \non \\textsc{Relation} and \\textsc{Hybrid}, the results of ``w\/o embedding transfer'' are lower than the full model, showing that embedding transfer can reduce the interference caused by the KG schema changes.\nOn \\textsc{Fact}, the results of ``w\/o embedding transfer'' are similar to the full model. \nThis shows that, even without embedding transfer, the model can still capture the knowledge.\n \n\n\\section{Conclusion and Future Work}\nThis paper proposes and studies lifelong embedding learning for growing KGs. \nAiming at better knowledge transfer and retention, we propose a lifelong KG embedding model consisting of a masked KG autoencoder, embedding transfer, and embedding regularization. \nThe experimental results on four newly-constructed datasets show better link prediction accuracy, knowledge transfer capability, and learning efficiency of our model. \nIn future work, we plan to investigate lifelong embedding learning in long-tail and low-resource settings.\n\n\\section*{Acknowledgments} \nThis work was supported by National Natural Science Foundation of China (No. 62272219).\n \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nConsider the system\n\\begin{equation}\\label{eq:1}\n\\ddot{u}=-\\dfrac{u}{|u|^{3}}+ \\varepsilon \\nabla_{u} U(t, u, \\varepsilon), u \\in \\mathbb{R}^{3}\n\\end{equation}\nwhere $U$ is a $C^{\\infty}$-function defined on $\\mathbb{R} \\times \\mathcal{U} \\times [0, \\varepsilon_{*}]$ for an open set $\\mathcal{U} \\subset \\mathbb{R}^{3}$ containing the origin $u=0$ and some $\\varepsilon_{*}>0$. The function $U(t, u, \\varepsilon)$ is assumed to be $T$-periodic in $t$. This system models a Kepler problem with an external periodic force. 
\\par\n\nIt is a classical problem in the theory of perturbations to look for periodic solutions for small $\\varepsilon$. {Traditionally, due to the proper-degeneracy of the Kepler problem, all the collisionless bounded orbits are closed and some non-degeneracy condition has to be imposed on the function $U(t,u,\\varepsilon )$. This condition has to be verified for concrete $U(t,u,\\varepsilon )$}. In typical situations the periodic orbits are found in the neighbourhood of a prescribed\nclosed orbit of the unperturbed Kepler problem and in particular they have no collisions.\\par \n\nA result of different nature was obtained in \\cite{BOZ}, in which it has been shown that for any smooth function $U$, the equation \\eqref{eq:1} has an arbitrarily large number of $T$-periodic solutions if $\\varepsilon$\nhas been accordingly chosen small enough. {As it is usually the case in the study of periodic orbits with singular potentials,} in the framework of \\cite{BOZ} the function $U$ was supposed to be globally defined; that is,\n\\begin{equation}\\label{global}\n\\mathcal{U}=\\mathbb{R}^{3}.\n\\end{equation} Also, the periodic solutions are understood in a generalized sense. They can have collisions but the energy and the direction of motion must be preserved after each bouncing at a time $t_*$ with $u(t_* )=0$.\n\\par There are good topological reasons for the introduction of generalized solutions. Let $\\mathcal{M} $ be the set of $T$-periodic solutions of the Kepler problem ($\\varepsilon =0$). After including solutions with collisions it becomes a manifold $$\\tilde{\\mathcal{M}}=\\bigcup_{n=1}^{\\infty} \\mathcal{M}_n$$ with infinitely many\nconnected components $\\mathcal{M}_n$, where each of them is compact. The periodic solutions of \\eqref{eq:1} for $\\varepsilon \\neq 0$ are obtained as bifurcations from these manifolds. 
The compactness is essential if we want to guarantee that\nthis bifurcation is always produced.\\par On the other hand, as we willexplain in this note, since our method is perturbative, the condition \\eqref{global} is not\nso essential. Since the components $\\mathcal{M}_n$ converge to the origin as $n\\to \\infty$, the projection of $\\mathcal{M}_n$ on the configuration space will lie inside any neighbourhood $\\mathcal{U}$ of the origin for $n$ large enough. In consequence, when \\eqref{global}\ndoes not hold, it is still possible\nto obtain bifurcations from $\\mathcal{M}_n$ when $n$ is large and the projection of $\\mathcal{M}_{n}$ to the configuration space has a neighborhood in which the function $U$ is well-defined. This observation may seem minor but it is of importance if we want to apply this technique to some classical problems in Celestial Mechanics. A typical situation arises in the so-called circular or elliptic restricted three-body problem, where the perturbation $U$ on the asteroid is due to a periodic gravitational force from another massive body. {The function $U$} will then be singular at the position of this massive body which is at some distance of $u=0$. Note that for the circular problem we shall consider the problem in a fixed inertial frame, and shall not go to some rotating frame to reduce the system to an autonomous one.\n\\par\nWe shall then apply this to show that an arbitrarily large number of generalized $T$-periodic solutions of a general class of circular or elliptic restricted three body problems with any eccentricity $e \\in [0, 1)$ can be found when the mass ratio between the primaries is taken as an external parameter that we assume to be sufficiently small. These solutions may collide with the big primary. In fact we will find periodic motions of the small body in a neighbourhood of this big primary. 
The common year of the primaries will also be a period for the third body and each year this small body will make a large number of revolutions around the big primary. There are many other results on the existence of periodic solutions of the circular or elliptic restricted problem and many families of periodic solutions have been identified. See for instance \\cite{CPS, AL, {PYFN}}. A possible novelty of our result is mainly in the absence of additional non-degeneracy conditions and the result is more global in the sense that the continuation is made from manifolds made of periodic orbits of the Kepler problem instead of particular periodic orbits of an approximating system.\nIt seems reasonable to expect other applications in similar models. For instance, in an elliptic restricted N-body problem where the primaries are assumed to move on an elliptic homographic motion and the infinitesimal body is assumed to stay close to a primary. \n\\par\nThe rest of the paper is organized in three sections. In Section \\ref{22} we work with the equation \\eqref{eq:1} and go back to the main theorem in \\cite{BOZ}. We explain the modifications in the proof allowing to eliminate the condition (\\ref{global}). The result in \\cite{BOZ} was obtained via the use of Levi-Civita regularization in dimension 2 and Kustaanheimo-Stiefel regularization in dimension 3, together with the application of a theorem of Weinstein \\cite{Weinstein}. Generalized periodic solutions can be characterized equivalently by periodic orbits of the regularized system. This was proved in \\cite{BOZ} in dimension 2 and we will prove that it is also the case in dimension 3. This proof will require a delicate topological result on the lifting of piecewise smooth paths via the Hopf fibration. For smooth paths this is a direct consequence of general results in the theory of Ehresmann connections, and the modifications required for the piecewise smooth case are explained in Section \\ref{33}. 
Finally, the application to the circular or elliptic restricted three-body problem is presented in Section \\ref{44}.\\par\nIn this note we will work in the space $u \\in \\mathbb{R}^{3}$ but the results are easily adapted to the simpler case of the plane $u \\in \\mathbb{R}^{2}$ or the line $u \\in \\mathbb{R}$. \n\n\\section{Periodic Solutions via Regularization}\\label{22}\nFollowing \\cite{BOZ} we say that a continuous and $T$-periodic function $u: \\mathbb{R} \\to \\mathbb{R}^{3}$ is a generalized $T$-periodic solution of \\eqref{eq:1} if it satisfies the following conditions:\n\\begin{enumerate}\n\\item $\\mathcal{Z}=\\{t \\in \\mathbb{R}: u(t)=0\\}$ is a discrete set;\n\\item for any open interval $I \\subset \\mathbb{R} \\setminus \\mathcal{Z}$ the function $u$ belongs to $C^{\\infty}(I, \\mathbb{R}^{3})$ and satisfies \\eqref{eq:1} on this interval;\n\\item for any $t_{0} \\in \\mathcal{Z}$ the limits below exist\n$$\\lim_{t \\to t_{0}} \\dfrac{u(t)}{|u(t)|}, \\qquad \\qquad \\lim_{t \\to t_{0}} \\Bigl( \\dfrac{1}{2} |\\dot{u}(t)|^{2}-\\dfrac{1}{|u(t)|} \\Bigr).$$\n\\end{enumerate}\n\nNote that for any classical solution of \\eqref{eq:1} tending to a collision at $t_{0}$, the left and right limits $t \\to t_{0}^{-}$ and $t \\to t_{0}^{+}$ always exist (see \\cite{Sperling}). The crucial point in the above definition is that they coincide. \n\nThe above definition extends in an obvious way to the case of sub-harmonic solutions having period $\\eta T$ with $\\eta \\in \\mathbb{Z}, \\eta \\ge 2$.\n\\par We will prove the existence of generalized $T$-periodic solutions using a regularization technique. We refer to \\cite{BDP} and \\cite{BOV} for an alternative use of variational techniques. \nFollowing \\cite{ZhaoKS, BOZ} we consider the Kustaanheimo-Stiefel regularization. The skew-field of quaternions is denoted by $\\mathbb{H}$. 
The space of purely imaginary quaternions $\\mathbb{IH}=\\{z \\in \\mathbb{H}: \\Re(z)=0\\}$ is naturally a three-dimensional vector space over the reals. The map \n$$ \\Pi: \\mathbb{H} \\to \\mathbb{IH}, \\qquad z \\mapsto \\bar{z} i z$$ \nplays an important role. \n\nWe set $\\mathbb{T}=\\mathbb{R}\/T \\mathbb{Z}$ for the quotient space of the real line of time by the lattice $T \\mathbb{Z}$, so for $t \\in \\mathbb{R}$ we denote by $\\bar{t}$ its corresponding class. The manifold \n$$\\mathcal{M}=\\mathbb{H} \\times \\mathbb{H} \\times \\mathbb{T} \\times \\mathbb{R}$$\nwith points $(z=z_{0}+z_{1} i + z_{2} j + z_{3} k, w=w_{0}+w_{1} i + w_{2} j + w_{3} k, \\bar{t}, \\tau)$ is endowed with the symplectic two-form\n$$\\omega=\\sum_{l=0}^{3} d z_{l} \\wedge d w_{l} + d t \\wedge d \\tau.$$\nOn the symplectic manifold $(\\mathcal{M}, \\omega)$ we consider the Hamiltonian function\n$$K_{\\varepsilon}: \\mathcal{M} \\to \\mathbb{R}, \\quad K_{\\varepsilon}(z, w, \\bar{t},\\tau)=\\dfrac{|w|^{2}}{8}+\\tau |z|^{2} -1 + \\varepsilon P(t, z, \\varepsilon)$$\nwith $P(t,z,\\varepsilon)=|z|^{2} U(t, \\bar{z} i z, \\varepsilon)$. This is the regularized system of the spatial forced Kepler problem in extended phase space \\cite{BOZ}. \n\nThis function is invariant under the Hamiltonian $S^{1}$-action\n$$g \\ast (z, w, \\bar{t}, \\tau)=(g z, g w, \\bar{t}, \\tau),$$\nwhere we realize $S^{1}$ as $\\{g \\in \\mathbb{C}: |g|=1\\}$. By standard theory, the corresponding moment map\n$$BL(z, w, \\bar{t}, \\tau)=\\bar{z} i w$$\nis a first integral of the system. \n\nAssume now that $X(s)=(z(s), w(s), \\bar{t}(s), \\tau(s))$ is a solution of the Hamiltonian system $(\\mathcal{M}, \\omega, K_{\\varepsilon})$ which lies in $K_{\\varepsilon}^{-1}(0) \\cap BL^{-1}(0)$ and satisfies $X(s+S)=g \\ast X(s)$ for all $s \\in \\mathbb{R}$, for some $S >0$ and $g \\in S^{1}$. 
Then $X(s)$ gives a generalized periodic solution of \\eqref{eq:1}, as a consequence of Lemma 5.1 and Remark 5.2 in \\cite{BOZ}, with period $\\eta T$ where $\\eta$ is the degree of the map $\\bar{t}(s)$, given via a lift of this mapping as \n$$t(s+S)=t(s)+\\eta T.$$\n\nThese solutions were produced by a perturbation argument in the symplectically reduced manifold\n$$\\mathcal{M}_{0}=BL^{-1}(0)\/S^{1}.$$\nIndeed, by applying a result of Weinstein \\cite{Weinstein}, we obtained a continuation from the sets $\\bar{\\Lambda}_{k}$ of periodic orbits of $K_{0}$ which are obtained as quotients from the sets\n$$\\Lambda_{k}=\\{X=(z, w, \\bar{t}, \\tau) \\in K_{0}^{-1}(0) \\cap BL^{-1}(0): \\tau=\\tau_{k}\\}$$\nwith $\\tau_{k}=\\Bigl(\\dfrac{\\sqrt{2} k \\pi}{T}\\Bigr)^{\\frac{2}{3}}, \\quad k=1,2,\\dots$\n\nIn principle the set \n$$E_{k}=\\{\\bar{z} i z: X=(z, w, \\bar{t}, \\tau) \\in \\Lambda_{k}\\}$$\ncan occupy any region in $\\mathbb{IH}$. For this reason, it was assumed in \\cite{BOZ} that \\eqref{global} holds. Nevertheless, by Weinstein's result, the only requirement for the continuation of periodic orbits from the periodic manifold $\\overline{\\Lambda}_{k}$ is that the perturbation be defined in a neighborhood of $\\overline{\\Lambda}_{k}$ and be sufficiently small. The computations in \\cite{BOZ} therefore remain valid when $\\mathcal{U} \\subsetneq \\mathbb{R}^{3}$ as long as $E_{k} \\subset \\mathcal{U}$ holds.\n\nOtherwise the set $\\mathcal{U}$ can be chosen arbitrarily; for instance, it may be small, and then the above-mentioned inclusion might not hold for all $k \\ge 1$. Nevertheless we now remark that this inclusion does hold for $k$ sufficiently large.\n\nFrom $X \\in K_{0}^{-1} (0)$ we deduce that \n$$\\tau_{k} |z|^{2}+\\dfrac{|w|^{2}}{8}=1,$$\nwhich implies that $|z| \\le \\tau_{k}^{-1\/2} \\to 0$ as $k \\to + \\infty$. 
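For orientation, let us indicate where the values $\\tau_{k}$ come from. The following computation, carried out for the circular solutions in $\\Lambda_{k}$, is only a sketch and is not needed in the sequel. For $\\varepsilon=0$ the equations of motion read\n$$z'=\\dfrac{w}{4}, \\quad w'=-2 \\tau z, \\quad t'=|z|^{2}, \\quad \\tau'=0,$$\nso $z''=-\\dfrac{\\tau}{2} z$ and $z$ oscillates with frequency $\\omega=\\sqrt{\\tau\/2}$, that is, with period $2 \\pi \\sqrt{2\/\\tau}$ in the variable $s$. For a circular solution $|z|$ is constant and the relation $\\dfrac{|w|^{2}}{8}+\\tau |z|^{2}=1$ together with $|w|=4 \\omega |z|$ gives $|z|^{2}=\\dfrac{1}{2 \\tau}$. Over one oscillation period the time $t$ thus advances by\n$$\\Delta t=|z|^{2} \\cdot 2 \\pi \\sqrt{2\/\\tau}=\\dfrac{\\sqrt{2} \\pi}{\\tau^{3\/2}},$$\nand the resonance condition $\\Delta t=T\/k$ yields exactly $\\tau=\\tau_{k}$.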
\n\nThe Hamiltonian function\n$$\\bar{K}_{\\varepsilon}(\\bar{X})=K_{\\varepsilon}(X)$$\nis not well-defined on the whole manifold $\\mathcal{M}_{0}$ but it is well-defined on the open set\n$$\\mathcal{D}_{\\delta}=\\{ \\bar{X}\\in \\mathcal{M}_0 :\\; X=(z,w,\\bar{t},\\tau),\\; |z| < \\delta \\},$$\nwhere $\\delta >0$ is such that\n$$\\{u \\in \\mathbb{R}^{3}: |u| < \\delta^{2}\\} \\subset \\mathcal{U}.$$\n\nThe periodic manifold $\\bar{\\Lambda}_{k}$ is thus contained in $\\mathcal{D}_{\\delta}$ for $k$ large enough. For these values of $k$, the argument in \\cite{BOZ} holds without change.\n\nWe sum up the previous discussions in the following theorem:\n\\begin{theo} \\label{theo: local} Assume that $U: \\mathbb{R} \\times \\mathcal{U} \\times [0, \\varepsilon_{*}] \\to \\mathbb{R}$ is a $C^{\\infty}$-function satisfying\n$$U(t+T, u, \\varepsilon)=U(t, u, \\varepsilon)$$\nfor some fixed $T >0$ and an open set $\\mathcal{U} \\subset \\mathbb{R}^{3}$ containing $u=0$. Given an integer $l \\ge 1$ there exists $\\varepsilon_{l}>0$ such that \\eqref{eq:1} has at least $l$ generalized $T$-periodic solutions (lying in $\\mathcal{U}$) for each $\\varepsilon \\in ]0, \\varepsilon_{l}[$.\n\\end{theo}\n\n\\begin{rem} The same result holds in the plane and on a line as well; it can be obtained by following the same line of argument, using the Levi-Civita regularization instead. \n\\end{rem}\n\nIn the previous proof we have used that the solutions $X(s)=(z(s), w(s), t(s), \\tau(s))$ of the Hamiltonian system $(\\mathcal{M}, \\omega, K_{\\varepsilon})$ lying in \n$$\\{K_{\\varepsilon}(X)=0, BL(X)=0\\}$$ \nand satisfying \n$$X(s+S)=g \\ast X(s), t(s+S)=t(s)+\\eta T, s \\in \\mathbb{R}$$\nfor some $S>0, g \\in S^{1}$ and $\\eta \\in \\mathbb{Z}$ lead to generalized periodic solutions of \\eqref{eq:1} with period $\\eta T$. These are given by \n$$u(t)=\\overline{z(s(t))} i z(s(t)),$$\nwhere $s=s(t)$ is the inverse of the homeomorphism $t=t(s)$. 
Note that in the previous proof $\\eta=1$ for $\\varepsilon=0$ and so $\\eta=1$ also for small $\\varepsilon$, since $\\eta=\\eta_{\\varepsilon}$ is a continuous function taking values in $\\mathbb{Z}$.\n\nThe rest of the section is devoted to showing that it is possible to go from generalized solutions of \\eqref{eq:1} to solutions $X(s)$ of $(\\mathcal{M},\\omega, K_{\\varepsilon})$ satisfying the above conditions. \n\nA similar discussion for planar solutions and Levi-Civita regularization can be found in Section 4 of \\cite{BOZ}. We now explain that the same holds for the spatial case as well. \n\nNow assume that $u(t)$ is a generalized $T$-periodic solution of \\eqref{eq:1} and consider the function $\\sigma(t)=\\dfrac{u(t)}{|u(t)|}.$ In principle $\\sigma(t)$ is only defined for $t \\in \\mathbb{R} \\setminus \\mathcal{Z}$ but the notion of generalized solution implies that it has a continuous extension to the whole real line. At collisions, the function $\\sigma$ is not necessarily smooth but there is some control on its velocity. More precisely, for each $t_{0} \\in \\mathcal{Z}$ there holds\n$$\\dot{\\sigma}(t)=O((t-t_{0})^{-1\/3})\\,\\, \\quad \\hbox{ as } t \\to t_{0}.$$\nThis asymptotic estimate follows from the formula\n$$\\dot{\\sigma}(t)=\\dfrac{|u(t)|^{2} \\dot{u}(t) - \\langle u(t), \\dot{u}(t) \\rangle u(t)}{|u(t)|^{3}}, \\qquad t \\in \\mathbb{R} \\setminus \\mathcal{Z}$$\ntogether with the classical estimates at collisions (see \\cite{Sperling})\n$$u(t)=a (t-t_{0})^{2\/3} + b(t) (t-t_{0})^{4\/3},$$\n$$\\dot{u}(t)=\\dfrac{2}{3}a (t-t_{0})^{-1\/3} + c(t) (t-t_{0})^{1\/3},$$\nwhere $a \\in \\mathbb{R}^{3} \\setminus \\{0\\}$ and $b(t), c(t)$ are bounded functions defined in a neighborhood of $t_{0}$.\nWe now identify $\\mathbb{R}^{3}$ with $\\mathbb{IH}$, writing $u=u_{1} i + u_{2} j + u_{3} k$ for $u=u(t)$, and now view $S^{2}$ and $S^{3}$ as unit spheres in $\\mathbb{IH}$ and $\\mathbb{H}$ respectively. 
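For completeness we indicate how the estimate $\\dot{\\sigma}(t)=O((t-t_{0})^{-1\/3})$ follows from the expansions of $u$ and $\\dot{u}$ at a collision; we write $s=t-t_{0}$ (say $t>t_{0}$, the case $t<t_{0}$ being analogous) and only use that $b(t)$ and $c(t)$ are bounded. A direct computation gives\n$$|u|^{2} \\dot{u}=\\dfrac{2}{3} |a|^{2} a s+O(s^{5\/3}), \\qquad \\langle u, \\dot{u} \\rangle u=\\dfrac{2}{3} |a|^{2} a s+O(s^{5\/3}),$$\nso the leading terms in the numerator of the formula for $\\dot{\\sigma}$ cancel and the numerator is $O(s^{5\/3})$. Since $|u|^{3}=|a|^{3} s^{2}+O(s^{8\/3})$, the claimed estimate follows.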
The properties of $\\sigma: \\mathbb{R} \\to S^{2}$ suggest the following definition.\n\n\\begin{defi} Given an interval $[0, T]$ and a partition of it \n$$\\mathcal{P}: t_{0}=0 < t_{1} < \\cdots < t_{N}=T,$$\na path $\\alpha:[0, T] \\to S^{d}$, $d=2$ or $3$, is a $\\mathcal{P}$-path if it is continuous, the restriction $\\alpha |_{]t_{i}, t_{i+1}[}$ is $C^{\\infty}$ and the integral below is finite:\n$$\\int_{t_{i}}^{t_{i+1}} |\\dot{\\alpha} (t) | d t < \\infty, \\quad i=0, \\dots, N-1.$$\n\\end{defi}\nThe map $\\Pi :z \\mapsto \\bar{z} i z$ sends $S^{3}$ onto $S^{2}$ and we can define the Hopf map as the restriction \n$$\\pi: S^{3} \\to S^{2}, z \\mapsto \\bar{z} i z.$$\nNote that, in the notation of \\cite[p. 376]{Cushman}, $x_{1}=-z_{0}, x_{2}=z_{1}, x_{3}=z_{2}, x_{4}=z_{3}$.\n\nWe will need the following result on the lifting of $\\mathcal{P}$-paths.\n\\begin{lem} \\label{lem: ehresmann} Assume that $\\gamma: [0, T] \\to S^{2}$ is a $\\mathcal{P}$-path. Then there exists a $\\mathcal{P}$-path $\\Gamma: [0, T] \\to S^{3}$ satisfying $\\pi \\circ \\Gamma=\\gamma$ and\n$$\\Re\\{\\bar{\\Gamma}(t) i \\dot{\\Gamma}(t)\\}=0, \\hbox{ when } t \\neq t_{i}.$$\nMoreover, if $\\gamma(0)=\\gamma(T)$ then $\\Gamma(T)=g \\Gamma(0)$ for some $g \\in S^{1}$.\n\\end{lem}\n\nThe proof of this result is postponed to the next Section. It is worth observing that the last conclusion on closed paths follows easily from the structure of the fibers of the Hopf map\n$$\\pi^{-1}(\\gamma(0))=\\{g \\Gamma(0): g \\in S^{1}\\}.$$\n\nLet us take our generalized $T$-periodic solution $u(t)$. The set $[0, T] \\cap \\mathcal{Z}$ defines a partition $\\mathcal{P}$. Then $\\sigma$ is a $\\mathcal{P}$-path and Lemma \\ref{lem: ehresmann} can be applied to $\\gamma=\\sigma$. In consequence there exists a $\\mathcal{P}$-path $\\Sigma: [0, T] \\to S^{3}$ with $\\pi \\circ \\Sigma=\\sigma$ and $\\Re\\{\\bar{\\Sigma} i \\Sigma\\}=\\Re\\{\\bar{\\Sigma} i \\dot{\\Sigma}\\}=0$. 
Note that $\\bar{\\Sigma} i \\Sigma \\in S^{2} \\subset \\mathbb{IH}$.\n\nSince $\\sigma$ is $T$-periodic we extend $\\Sigma$ to the whole real line via the formula\n$\\Sigma(t+T)=g \\Sigma(t), t \\in \\mathbb{R}.$\nThen $\\Sigma: \\mathbb{R} \\to S^{3}$ is continuous.\n\nWe will define $X(s)=(z(s), w(s), t(s), \\tau(s))$ in terms of $\\Sigma$. The coordinate $t(s)$ is defined from the Sundman integral as in \\cite{BOZ}. The set $\\mathcal{Z}^{*}=t^{-1} (\\mathcal{Z})$ is discrete. Then we have\n$$z(s)=r(s) \\Sigma(t(s)), s \\in \\mathbb{R},$$\nwhere \n$$r(s)=|u(t(s))|^{1\/2}, w(s)=4 z'(s), \\tau(s)=-E(t(s))+\\varepsilon U(t(s), \\bar{z}(s) i z(s), \\varepsilon), s \\in \\mathbb{R} \\setminus \\mathcal{Z}^{*},$$ \nand $E(t)=\\dfrac{1}{2} |\\dot{u}(t)|^{2}-\\dfrac{1}{|u(t)|}$ is the energy, extended by continuity to $\\mathcal{Z}$.\n\nAfter this definition the proof follows along the same lines as in \\cite{BOZ}. The only essential difference is the verification of the additional condition $BL(X(s))=0$ which is equivalent to \n$$\\Re\\{\\bar{z}(s) i z'(s)\\}=0.$$\nTo check this condition, it is sufficient to observe that\n$$z'(s)=r'(s) \\Sigma(t(s))+r(s)^{3} \\dot{\\Sigma} (t(s)) $$\nand thus the condition follows directly from $\\Re\\{\\bar{\\Sigma} i \\Sigma\\}=\\Re\\{\\bar{\\Sigma} i \\dot{\\Sigma}\\}=0$.\n\n\\section{Lifting of paths via the Hopf map}\\label{33}\nFor each $z \\in S^{3}$ we consider the tangent space\n$$T_{z}(S^{3})=\\{w \\in \\mathbb{H}: w \\perp z\\}$$\nwith vertical and horizontal subspaces\n$$\\hbox{Vert}_{z}=\\ker (d \\Pi )_{z} \\cap T_{z} (S^{3}),$$\n$$\\hbox{Hor}_{z}=[\\ker (d \\Pi )_{z}]^{\\perp} \\cap T_{z} (S^{3}).$$\nThe real vector space $\\hbox{Vert}_{z}$ has dimension one and is spanned by $i z$. The space $\\hbox{Hor}_{z}$ has dimension two and is spanned by the vectors $jz, kz $. Moreover\n$$(d \\pi)_{z} (\\hbox{Hor}_{z})=T_{\\pi (z)}(S^{2}).$$\nThe splitting\n$$T_{z}(S^{3})=\\hbox{Vert}_{z} \\oplus \\hbox{Hor}_{z}$$\nis clearly smooth with respect to the base points and thus defines an Ehresmann connection associated to the submersion $\\pi$. 
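As a quick check of this description, note first that the fibers of $\\pi$ are the orbits of the $S^{1}$-action: for $g \\in S^{1}$ we have\n$$\\Pi(g z)=\\bar{z} \\bar{g} i g z=\\bar{z} i z=\\Pi(z),$$\nsince $g \\in \\mathbb{C}$ commutes with $i$. Differentiating along the action at $g=1$ shows that $i z \\in \\ker (d \\Pi)_{z}$, in agreement with $\\hbox{Vert}_{z}=\\hbox{span}\\{i z\\}$. Moreover, using $i j=k$ and $i k=-j$,\n$$\\Re\\{\\bar{z} i (j z)\\}=\\Re\\{\\bar{z} k z\\}=0, \\qquad \\Re\\{\\bar{z} i (k z)\\}=-\\Re\\{\\bar{z} j z\\}=0,$$\nsince $\\bar{z} j z$ and $\\bar{z} k z$ lie in $\\mathbb{IH}$. This computation anticipates the characterization of $\\hbox{Hor}_{z}$ used below.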
We refer to \\cite[Chapter VIII]{Cushman} for more details.\n\nThe results in \\cite{Cushman} imply that this connection is ``good''. This means that, given a $C^{\\infty}$ curve $\\alpha: [t_{0}, t_{1}] \\to S^{2}$ and a point $\\xi \\in \\pi^{-1} (\\alpha (t_{0}))$, there exists a horizontal lift starting at $\\xi$. This lift is a $C^{\\infty}$ curve $A:[t_{0}, t_{1}] \\to S^{3}$ satisfying $A(t_{0})=\\xi, \\pi \\circ A = \\alpha, \\dot{A}(t) \\in \\hbox{Hor}_{A(t)}$ for each $t \\in [t_{0}, t_{1}]$. \n\nFrom our point of view, the key point is the following characterization of the horizontal component\n$$\\hbox{Hor}_{z}=\\{w \\in \\mathbb{H}: w \\perp z, \\Re\\{\\bar{z} i w\\}=0\\}.$$\nFrom this observation we could derive a version of Lemma \\ref{lem: ehresmann} in the class of $C^{\\infty}$-paths, or even in the class of piecewise $C^{\\infty}$-paths. This second case would follow from an iterative application of the above-mentioned lifting principle for good Ehresmann connections. However we must work in the larger class of $\\mathcal{P}$-paths and the proof uses specific properties of the Hopf map. To prove Lemma \\ref{lem: ehresmann} we will restrict to the case $\\mathcal{P}: t_{0}=0<t_{1}=T$ and assume first that $\\gamma_{1}(t) \\neq 1$ everywhere. This allows us to use the equations in \\eqref{eq: one} to solve for $\\lambda_{1}$ and $\\lambda_{2}$. 
\n\nPlugging the corresponding formulas in \\eqref{eq: two} and taking into account the equations in \\eqref{eq: arbol} we obtain\n\\begin{equation}\\label{eq: liason}\n(1-\\gamma_1 )\\begin{pmatrix} \\dot{\\Gamma}_{2} \\\\ \\dot{\\Gamma}_{3}\\end{pmatrix}=-M\\begin{pmatrix} \\dot{\\Gamma}_{1} \\\\ \\dot{\\Gamma}_{0}\\end{pmatrix},\\; \\; M=\\begin{pmatrix} \\gamma_{2}&\\gamma_3 \\\\ \\gamma_3 &-\\gamma_2 \\end{pmatrix}.\n\\end{equation}\nAlso, from the second identity in \\eqref{eq: arbol},\n\\begin{equation}\\label{eq:deuxl}\n(1-\\gamma_1 )\\begin{pmatrix} \\Gamma_{1} \\\\ \\Gamma_{0}\\end{pmatrix}= \\begin{pmatrix} \\Gamma_{2}&\\Gamma_3 \\\\ -\\Gamma_3 &\\Gamma_2 \\end{pmatrix}\n\\begin{pmatrix}\\gamma_2 \\\\ \\gamma_3 \\end{pmatrix} =M\\begin{pmatrix} \\Gamma_{2} \\\\ \\Gamma_3 \\end{pmatrix}.\n\\end{equation}\nDifferentiating with respect to $t$ and substituting the result in \\eqref{eq: liason} we find\n$$\\begin{pmatrix} \\dot{\\Gamma}_{2} \\\\ \\dot{\\Gamma}_{3}\\end{pmatrix}=-\\frac{1}{2(1-\\gamma_1)} \\Bigl[\\dot{\\gamma}_1 M\\begin{pmatrix} \\Gamma_1 \\\\ \\Gamma_0 \\end{pmatrix}+M\\dot{M} \\begin{pmatrix} \\Gamma_{2} \\\\ \\Gamma_{3}\\end{pmatrix}\\Bigr].$$\nIn these computations we have used that $M^2 =(1-\\gamma_1^2)I$. Combining this identity with \\eqref{eq:deuxl}, we are led to a planar linear system of the type\n\\begin{equation}\\label{eq: ps}\n\\begin{pmatrix} \\dot{\\Gamma}_{2} \\\\ \\dot{\\Gamma}_{3}\\end{pmatrix}=B(t) \\begin{pmatrix} \\Gamma_{2} \\\\ \\Gamma_{3}\\end{pmatrix},\n\\end{equation}\nwhere the coefficients of the matrix $B(t)$ are linear combinations of functions of the type $\\dfrac{\\gamma_{i} \\dot{\\gamma}_{j}}{1-\\gamma_{1}}$ and $ \\dfrac{\\dot{\\gamma}_{i}}{1-\\gamma_{1}}$. \n\nIn general this matrix will not be continuous at $t=t_0=0$ and $t=t_1=T$. 
\nNevertheless, since we know that $\\gamma $ is $C^{\\infty}$ in $]0, T[$ and $\\int_{0}^{T} |\\dot{\\gamma}(t)| d t < \\infty$, we deduce that the coefficients of $B(t)$ belong to the Lebesgue space $L^{1}(]0,T[)$; recall that we are assuming $\\gamma_{1}(t) \\neq 1$ for each $t \\in [0, T]$. In consequence, the matrix $B$ is integrable and the system \\eqref{eq: ps} satisfies the conditions of Carath\\'eodory's theorem (see for instance \\cite[Chapter 2]{CL}). Given $\\xi \\in \\pi^{-1}(\\gamma(0))$, we impose the initial condition \n$$\\Gamma_{2}(0)=\\xi_{2}, \\Gamma_{3}(0)=\\xi_{3}$$\nto obtain a unique solution of the Cauchy problem for \\eqref{eq: ps} defined on the whole interval $[0,T]$. These functions $\\Gamma_{2}$ and $\\Gamma_{3}$ are absolutely continuous in $[0, T]$. The functions $\\Gamma_0$ and $\\Gamma_1$ are\ndefined from the identity \\eqref{eq:deuxl} and they are also absolutely continuous. In particular there holds \n$$\\int_{0}^{T} |\\dot{\\Gamma}(t)| d t < \\infty$$\nand $\\Gamma$ is a $\\mathcal{P}$-path. Going back to the previous construction we observe that $\\Gamma$ is the desired lift. \n\\par\nTo complete the proof we must remove the extra assumption $\\gamma_1 \\neq 1$. Assume now that $\\gamma (t)$ is an arbitrary $\\mathcal{P}$-path. Since $\\gamma$ is smooth on $]t_0,t_1[$, the set $\\gamma ([t_0 ,t_1])$ has zero measure in $S^2$.\nLet us take a point $\\xi \\in S^2$ such that $\\gamma (t)\\neq \\xi$ for each $t\\in [t_0 ,t_1]$. We select a rotation of $S^2$ sending $\\xi$ into $i$. This rotation can be expressed in the form $z'=qz\\overline{q}$ for some $q\\in S^3$. Then $\\gamma_* (t)\n=q\\gamma (t)\\overline{q}$ is a $\\mathcal{P}$-path with $\\gamma_* (t)\\neq i$ for each $t\\in [t_0 ,t_1]$. The possible lifts of $\\gamma$ and $\\gamma_*$ are linked, for if $\\Gamma =\\Gamma (t)$ satisfies $\\pi \\circ \\Gamma =\\gamma$ then\n$\\pi \\circ \\Gamma_* =\\gamma_*$, where $\\Gamma_* (t)=\\Gamma (t)\\overline{q}$. 
Moreover, it is easy to check that $\\Re\\{\\bar{\\Gamma}(t) i \\dot{\\Gamma}(t)\\}=0$ is equivalent to $\\Re\\{\\bar{\\Gamma}_*(t) i \\dot{\\Gamma}_*(t)\\}=0.$ \n\n\\section{A restricted three-body problem}\\label{44}\nLet us assume that the $C^{\\infty}$ functions $X, x: \\mathbb{R} \\to \\mathbb{R}^{3}, X=X(t), x=x(t),$ are $T$-periodic and satisfy \n$$X(t) \\neq x(t)\\qquad \\forall t \\in \\mathbb{R}.$$\nIn addition we assign positive masses $M$ and $m$ to them, normalized so that\n$$M+m=1.$$\nWe consider the system\n\\begin{equation}\\label{eq: 3R}\n\\ddot{\\xi}=\\dfrac{M(X(t)-\\xi)}{|X(t)-\\xi|^{3}} + \\dfrac{m(x(t)-\\xi)}{|x(t)-\\xi|^{3}}. \n\\end{equation}\nThis system describes the motion of an infinitesimal body attracted by two moving centers $X(t)$ and $x(t)$. When $(X(t), x(t))$ solves the corresponding two-body problem, the system describes a restricted spatial three-body problem. By assuming that $(X(t), x(t))$ move on a circular or elliptic Keplerian orbit we obtain a periodic system. Note that for the restricted circular three-body problem many studies have been made in a suitable rotating coordinate system, and for the elliptic problem in a suitable rotating-pulsating coordinate system. \nWe shall just consider the problems in the inertial frame. \n\nIn principle we could have collisions of the infinitesimal particle $\\xi$ with either of the two primaries: $X=\\xi$ or $x=\\xi$. Nevertheless we shall only consider collisions with the first primary $X$. 
\n\n A continuous and $T$-periodic function $\\xi : \\mathbb{R} \\to \\mathbb{R}^{3},\\; \\xi=\\xi(t)$ is called a generalized periodic solution of the first kind if the following conditions hold:\n \\begin{itemize}\n \\item $\\mathcal{Z}_{M}=\\{t \\in \\mathbb{R}: \\xi(t)=X(t)\\}$ is discrete;\n \\item $\\mathcal{Z}_{m}=\\{t \\in \\mathbb{R}: \\xi(t)=x(t)\\}$ is empty;\n \\item in each open interval $I \\subset \\mathbb{R} \\setminus \\mathcal{Z}_{M}$, the function $\\xi(t)$ is $C^{\\infty}$ and satisfies \\eqref{eq: 3R};\n \\item for each $t_{0} \\in \\mathcal{Z}_{M}$ the limits below exist\n $$\\lim_{t \\to t_{0}} \\dfrac{\\xi(t)-X(t)}{|\\xi(t)-X(t)|}, \\quad \\lim_{t \\to t_{0}} \\Bigl(\\dfrac{1}{2} |\\dot{\\xi}(t)-\\dot{X}(t)|^{2}-\\dfrac{M}{|\\xi(t)-X(t)|}\\Bigr).$$\n \\end{itemize}\n\nLet us now assume that the primaries depend upon a parameter $\\varepsilon \\in [0, 1]$, $X_{\\varepsilon}=X_{\\varepsilon}(t), x_{\\varepsilon}=x_{\\varepsilon}(t)$. We assume that $(X_{\\varepsilon}(t), x_{\\varepsilon}(t))$ is a circular or elliptic solution of the two-body problem with masses $M_{\\varepsilon}, m_{\\varepsilon}$ and with center of mass placed at the origin,\n\n\\begin{equation}\\label{eq:cm}\nM_{\\varepsilon} X_{\\varepsilon} + m_{\\varepsilon} x_{\\varepsilon}=0. \n\\end{equation}\n\nFrom the general theory of the Kepler problem we know that $X_{\\varepsilon}(t)$ satisfies\n\\begin{eqnarray*}\nX_{\\varepsilon}(t)= R_{\\varepsilon} \\left(a_{\\varepsilon} (\\cos u(t)-e_{\\varepsilon}), a_{\\varepsilon} \\sqrt{1-e_{\\varepsilon}^{2}} \\sin u(t), 0 \\right)^*, \\\\\nu(t)-e_{\\varepsilon} \\sin u(t)=\\dfrac{m_{\\varepsilon}^{3\/2}}{a_{\\varepsilon}^{3\/2}} t.\n\\end{eqnarray*}\nWe are assuming that $t=0$ is the time of passage through the pericenter and that the matrix $R_{\\varepsilon}$ belongs to the group of rotations $SO(3)$. From $X_{\\varepsilon}(t)$ one determines $x_{\\varepsilon}(t)$ from the center of mass condition \\eqref{eq:cm}. 
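Let us record a direct consequence of \\eqref{eq:cm} that will be used later: since $M_{\\varepsilon}+m_{\\varepsilon}=1$,\n$$x_{\\varepsilon}-X_{\\varepsilon}=-\\dfrac{M_{\\varepsilon}}{m_{\\varepsilon}} X_{\\varepsilon}-X_{\\varepsilon}=-\\dfrac{1}{m_{\\varepsilon}} X_{\\varepsilon},$$\nso that $|x_{\\varepsilon}(t)-X_{\\varepsilon}(t)|=\\dfrac{|X_{\\varepsilon}(t)|}{m_{\\varepsilon}}$ and the relative orbit of the primaries is an ellipse with semi-major axis $a_{\\varepsilon}\/m_{\\varepsilon}$ and eccentricity $e_{\\varepsilon}$.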
\n\nFrom now on it will be assumed that the functions\n$$\\varepsilon \\in [0, 1] \\mapsto R_{\\varepsilon} \\in SO(3)$$\nand\n$$\\varepsilon \\in [0, 1] \\mapsto m_{\\varepsilon}, M_{\\varepsilon}, a_{\\varepsilon}, e_{\\varepsilon}$$\nare all $C^{\\infty}$ and that, for each $\\varepsilon>0$,\n$$m_{\\varepsilon}, M_{\\varepsilon}>0, \\quad m_{\\varepsilon}+M_{\\varepsilon}=1, \\quad a_{\\varepsilon}>0, \\quad 0 \\le e_{\\varepsilon} <1.$$\n\nAccording to Kepler's third law, the system \\eqref{eq: 3R} is periodic in time with period\n\n\\begin{equation}\\label{eq:per}\nT_{\\varepsilon}=\\dfrac{2 \\pi}{m_{\\varepsilon}^{3\/2}} a_{\\varepsilon}^{3\/2}.\n\\end{equation}\n\nIn the following result we will assume that the primary $X_{0}$ is fixed at the origin while $x_{0}$, with mass $m_{0}=0$, describes a circular or elliptic Keplerian orbit. \n\n\\begin{theo}\\label{theo: g3R}\nAssume in addition that \n$$m_{0} = 0, \\quad e_0<1, \\quad T_{\\varepsilon} \\to T_{0} >0 \\hbox{ as } \\varepsilon \\to 0^{+}. $$\nThen, for any given integer $l \\ge 1$, there exists $\\varepsilon_{l}>0$ such that the equation \\eqref{eq: 3R} has at least $l$ generalized $T_{\\varepsilon}$-periodic solutions of the first kind for $\\varepsilon \\in ]0, \\varepsilon_{l}[$.\n\\end{theo}\n\nIn contrast to many other results \\cite{CPS, AL, PYFN}, we do not impose any further resonance or non-degeneracy conditions, and the dependence on the parameters is very general. 
On the other hand, our solutions are understood in a generalized sense.\n\n\\begin{proof} \nTo fix a uniform period we change the time variable according to the relation\n$$t=T_{\\varepsilon} s$$\nand set $\\eta(s)=\\xi(t)$,\nso that \\eqref{eq: 3R} is transformed into\n$$\\eta''=\\dfrac{T_{\\varepsilon}^{2} M_{\\varepsilon} (\\phi_{\\varepsilon}(s)-\\eta)}{|\\phi_{\\varepsilon}(s)-\\eta|^{3}}+ \\dfrac{T_{\\varepsilon}^{2} m_{\\varepsilon} (\\psi_{\\varepsilon}(s)-\\eta)}{|\\psi_{\\varepsilon}(s)-\\eta|^{3}},$$\nwhere $\\phi_{\\varepsilon}(s)=X_{\\varepsilon} (T_{\\varepsilon} s), \\psi_{\\varepsilon}(s)=x_{\\varepsilon} (T_{\\varepsilon} s)$.\n\nWe then introduce \n$$u=\\lambda^{-1} (\\eta-\\phi_{\\varepsilon}(s)),$$\nwhere $\\lambda>0$ is a normalization parameter to be adjusted. In this way we obtain an equation of the form of \\eqref{eq:1} if $\\lambda=\\lambda_{\\varepsilon}$ is given by\n\\begin{equation}\\label{eq:para}\n\\lambda_{\\varepsilon}^{3} =T_{\\varepsilon}^{2} M_{\\varepsilon}.\n\\end{equation}\nNamely,\n$$u''=-\\dfrac{u}{|u|^{3}}+\\varepsilon \\nabla_{u} U(s, u, \\varepsilon)$$\nwith\n$$U(s, u, \\varepsilon)=\\dfrac{1}{\\varepsilon} \\lambda_{\\varepsilon}^{-3} T_{\\varepsilon}^{2} m_{\\varepsilon} \\dfrac{1}{|\\lambda_{\\varepsilon}^{-1} (\\psi_{\\varepsilon} (s)-\\phi_{\\varepsilon}(s)) - u|} - \\dfrac{1}{\\varepsilon} \\lambda_{\\varepsilon}^{-1} \\langle \\phi''_{\\varepsilon}(s), u \\rangle.$$\nThis function belongs to $C^{\\infty} (\\mathbb{R} \\times \\mathcal{U} \\times [0, \\varepsilon_{*}])$ where $\\mathcal{U}$ is a neighborhood of $u=0$ and $\\varepsilon_{*}>0$ is small enough. 
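Let us sketch the computation behind this change of variables. Writing $\\eta=\\phi_{\\varepsilon}(s)+\\lambda u$ and using $|\\psi_{\\varepsilon}-\\phi_{\\varepsilon}-\\lambda u|=\\lambda |\\lambda^{-1}(\\psi_{\\varepsilon}-\\phi_{\\varepsilon})-u|$, the transformed equation becomes\n$$u''=-\\dfrac{T_{\\varepsilon}^{2} M_{\\varepsilon}}{\\lambda^{3}} \\dfrac{u}{|u|^{3}}+\\dfrac{T_{\\varepsilon}^{2} m_{\\varepsilon}}{\\lambda^{3}} \\dfrac{\\lambda^{-1}(\\psi_{\\varepsilon}(s)-\\phi_{\\varepsilon}(s))-u}{|\\lambda^{-1}(\\psi_{\\varepsilon}(s)-\\phi_{\\varepsilon}(s))-u|^{3}}-\\lambda^{-1} \\phi''_{\\varepsilon}(s).$$\nThe coefficient of the first term equals one precisely when \\eqref{eq:para} holds, and the remaining two terms are $\\varepsilon \\nabla_{u} U(s, u, \\varepsilon)$ for the function $U$ defined above. It remains to justify the smoothness of $U$ up to $\\varepsilon=0$.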
\n\nTo prove this, we first observe that\n$$a_{\\varepsilon} (1-e_{\\varepsilon}) \\le |\\phi_{\\varepsilon} (s)| \\le a_{\\varepsilon} (1+e_{\\varepsilon}), s \\in \\mathbb{R}.$$\nThen, in view of \\eqref{eq:cm}, for small enough $\\varepsilon$ we have\n$$|\\lambda_{\\varepsilon}^{-1} (\\psi_{\\varepsilon}(s)-\\phi_{\\varepsilon}(s))| \\ge |\\lambda_{\\varepsilon}^{-1} \\phi_{\\varepsilon}(s)| \\ge \\lambda_{\\varepsilon}^{-1} \\dfrac{1}{m_{\\varepsilon}} a_{\\varepsilon} (1-e_{\\varepsilon}).$$ \nUsing \\eqref{eq:per} and \\eqref{eq:para} we see that the lower bound converges to $\\dfrac{1-e_{0}}{(2 \\pi)^{2\/3}}$ as $\\varepsilon \\to 0$. Consequently there exist $\\varepsilon_{*}>0$ and $\\delta>0$ such that\n$$|\\lambda_{\\varepsilon}^{-1} (\\psi_{\\varepsilon} (s) -\\phi_{\\varepsilon}(s)) | \\ge \\delta \\hbox{ for } \\varepsilon \\in [0, \\varepsilon_{*}], s \\in \\mathbb{R}.$$ \n\nWe define\n$$\\mathcal{U}=\\{u \\in \\mathbb{R}^{3}: |u| < \\delta\\}.$$\nFrom the explicit definition of $\\phi_{\\varepsilon}$ and $\\psi_{\\varepsilon}$ it is clear that $U$ is smooth on $\\mathbb{R} \\times \\mathcal{U} \\times ]0, \\varepsilon_{*}]$. It remains to analyze $\\varepsilon=0$.\n\nFirst we observe that \n$$\\phi_{\\varepsilon} (s)=a_{\\varepsilon} R_{\\varepsilon} (\\cos u-e_{\\varepsilon}, \\sqrt{1-e_{\\varepsilon}^{2}} \\sin u, 0)^*, u-e_{\\varepsilon}\\sin u=2 \\pi s$$\nand therefore $$\\lambda_{\\varepsilon}^{-1} (\\psi_{\\varepsilon} (s) -\\phi_{\\varepsilon}(s)) =-\\lambda_{\\varepsilon}^{-1} (\\frac{T_{\\varepsilon}}{2\\pi})^{2\/3} R_{\\varepsilon} (\\cos u-e_{\\varepsilon}, \n\\sqrt{1-e_{\\varepsilon}^{2}} \\sin u, 0)^*$$ is $C^{\\infty}$ in $\\mathbb{R}\\times [0,1]$. 
\nAlso, the function $f(\\varepsilon)=\\dfrac{1}{\\varepsilon} \\lambda_{\\varepsilon}^{-3} T_{\\varepsilon}^{2} m_{\\varepsilon}=\\dfrac{m_{\\varepsilon}}{ \\varepsilon M_{\\varepsilon}}$ belongs to $C^{\\infty}[0, \\varepsilon_{*}]$ (recall that $m_{0}=0$) and therefore the first summand in the definition of $U$ is smooth. \n\nTo analyze the second summand we differentiate the function $\\phi_{\\varepsilon}$ twice. \nThen we have\n$$\\phi''_{\\varepsilon}(s)=a_{\\varepsilon} \\chi(s, \\varepsilon)$$ \nwith\n$$\\chi(s, \\varepsilon)=u'' R_{\\varepsilon} (-\\sin u, \\sqrt{1-e_{\\varepsilon}^{2}} \\cos u,0)^*-(u')^{2} R_{\\varepsilon} (\\cos u, \\sqrt{1-e_{\\varepsilon}^{2}} \\sin u, 0)^* $$\nand\n$$u'=\\dfrac{\\partial u}{\\partial s}=\\dfrac{2 \\pi}{1-e_{\\varepsilon} \\cos u}, \\quad u''=\\dfrac{\\partial^{2} u}{\\partial s^{2}}=-\\dfrac{2 \\pi e_{\\varepsilon} \\sin u}{(1-e_{\\varepsilon} \\cos u)^{2}} u'. $$\nThe function $\\chi(s, \\varepsilon)$ thus belongs to $C^{\\infty} (\\mathbb{R} \\times [0, \\varepsilon_{*}], \\mathbb{R}^{3})$. In addition, \n$$g(\\varepsilon)=\\dfrac{1}{\\varepsilon} \\lambda_{\\varepsilon}^{-1} a_{\\varepsilon}=\\dfrac{m_{\\varepsilon}}{\\varepsilon M_{\\varepsilon}^{1\/3} (2 \\pi)^{2\/3}}$$ is in $C^{\\infty}[0, \\varepsilon_{*}]$.\n\nTherefore $U$ is smooth in $\\mathbb{R} \\times \\mathcal{U} \\times [0, \\varepsilon_{*}]$ and Theorem \\ref{theo: local} is applicable. Undoing the change of variables we obtain generalized $T_{\\varepsilon}$-periodic solutions. To check the continuity of the energy\nat collisions it is convenient to use the identity $$\\frac{1}{2} |u'(s)|^2 -\\frac{1}{|u(s)|} =\\frac{T_{\\varepsilon}^2}{\\lambda_{\\varepsilon}^2}\\Bigl(\\frac{1}{2} |\\dot{\\xi}(t)-\\dot{X}_{\\varepsilon} (t)|^2 -\\frac{M_{\\varepsilon}}{|\\xi (t)-X_{\\varepsilon}(t)|}\\Bigr).$$ \n\\end{proof}\n\n\\begin{rem} After the change of variables $\\xi =R_{\\varepsilon} \\xi_1$ we can assume that the primaries lie on the fixed plane $x_3=0$. 
Then we can apply the planar version of Theorem \\ref{theo: local} to obtain, for each $\\varepsilon$, generalized periodic solutions for which the three bodies move in a common plane. \n\\end{rem}\n\nWe end this note with a comparison of our result with classical results concerning periodic orbits of the planar circular restricted three-body problem of the first and second kind in a rotating frame, where first and second kind refer to continuations of periodic orbits from circular and elliptic periodic orbits of the limiting Kepler problem in a rotating frame, respectively. Indeed the period of rotation of the reference frame is derived from the corresponding period $T$ of the Keplerian elliptic motions of the primaries. When a periodic orbit has minimal period $T\/n$ in the rotating frame, then it will also have period $T$, and therefore this period is preserved in the initial fixed reference frame. The existence of such orbits of the first kind has been obtained by Poincar\\'e \\cite{P} and has been explained in \\cite{MZ}. \n \n \\begin{ack} R. O. is supported by MTM2017-82348-C2-1-P, L. Z. is supported by DFG ZH 605\/1-1. \n \\end{ack}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}