diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzeqat" "b/data_all_eng_slimpj/shuffled/split2/finalzzeqat" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzeqat" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nAbout one hundred years have passed since R.~Fuchs showed that \nthe sixth Painlev\\'e equation is represented as a monodromy preserving deformation \\cite{Fuchs1}. \nGarnier showed that every type of the Painlev\\'e equations is also \nrepresented as a monodromy preserving deformation \\cite{Garnier}. \nAlthough the monodromy data is preserved by the deformations, \nit is difficult to calculate in general. \n\nRiemann calculated the monodromy group of the Euler-Gauss hypergeometric equation.\nMoreover, the monodromy groups of the Euler-Gauss hypergeometric equation \nare polyhedral groups when it has algebraic solutions \\cite{Schw}. \nSchwarz' fifteen algebraic solutions are obtained from simple hypergeometric \nequations by rational transformations of the independent variable. \nWe will study an analogue of Schwarz' solutions in the Painlev\\'e case.\n\nIn this paper we call a linear equation whose isomonodromic \ndeformation gives a Painlev\\'e equation a linear equation of {\\it the Painlev\\'e type}. \nFor a linear equation of the Painlev\\'e type, we can calculate the monodromy data \nexplicitly if we substitute particular solutions of the Painlev\\'e equations.\nHistorically, the first example of such solutions is given by \nPicard's solutions of the sixth Painlev\\'e equation \\cite{Pi}.\nR.~Fuchs calculated the monodromy group of the linear equation \ncorresponding to Picard's solutions \\cite{Fuchs:11}. \nIt seems that R.~Fuchs' paper \\cite{Fuchs:11}, whose title is the same as \nthe famous paper \\cite{Fuchs1}, has been forgotten for many years.\nRecently Mazzocco rediscovered his result independently, but \nhis paper was not referred to in her paper \\cite{Ma1}. \nIn \\cite{Fuchs:11}, R.~Fuchs proposed the following problem: \n\\begin{quote}\n{\\bf R.~Fuchs' Problem}\\quad \nLet $y(t)$ be an algebraic solution of a Painlev\\'e equation. \nFind a suitable transformation $x=x(z,t)$ such that the corresponding \nlinear differential equation \n$$\\frac{d^2 v}{dz^2}=Q(t,y(t),z)v$$\nis changed to the form without the deformation parameter $t$:\n$$\\frac{d^2 u}{dx^2}=\\tilde{Q}(x)u.$$\nHere $v= \\sqrt{ {dz}\/{dx}}\\ u.$ (See Lemma \\ref{keylemma}.)\n\\end{quote}\nPicard's solutions are algebraic if they correspond to rational points of \nelliptic curves. For three, four and six division points, \nthe genus of the algebraic Picard solutions is zero. \n R.~Fuchs showed that for the algebraic Picard solutions whose genus is zero, \nthe corresponding linear equations are reduced to the Euler-Gauss hypergeometric \nequation by suitable rational transformations.\n \nIf R.~Fuchs' problem is true, algebraic solutions of the Painlev\\'e equations can be considered as\na kind of generalization of Schwarz' solutions. Schwarz' solutions are \nconstructed by rational transformations which change hypergeometric equations \nto hypergeometric equations. Algebraic solutions of the Painlev\\'e equations are \nconstructed by rational transformations which change hypergeometric equations \nto linear equations of the Painlev\\'e type, and we can calculate the monodromy data of \nthe linear equations explicitly.
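\n\nFor illustration (this classical identity is recalled here only as an example and is not taken from the papers cited above), the simplest rational transformation of this kind is the quadratic transformation of Gauss: the substitution $x \\mapsto 4x(1-x)$ changes one hypergeometric equation into another, and on solutions it reads\n$${}_2F_{1}\\left(a,b;a+b+\\frac12; 4x(1-x)\\right)= {}_2F_{1}\\left(2a,2b;a+b+\\frac12;x\\right).$$\nThe transformations studied in this paper are confluent analogues of such identities.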
\n\n\nRecently, Kitaev, jointly with Andreev, constructed many algebraic solutions of the \nsixth Painlev\\'e equation, which include both known ones and new ones, by rational transformations \n of the hypergeometric equations \\cite{AK}, \\cite{Kitaev:05}. \nAt present, we do not know whether R.~Fuchs' problem is true or not for the \nsixth Painlev\\'e equation, but we do not have a negative example for R.~Fuchs' problem. \nKitaev's transformations are a generalization of classical work by Goursat \n\\cite{Goursat}. \nGoursat found many rational transformations which preserve the class of hypergeometric equations. \nHis list is incomplete, and Vid\\=unas made a complete list of such rational transformations \\cite{Vidunas}.\n\nIn this paper we study a confluent version of \\cite{AK} and \nwe show that R.~Fuchs' problem is true for the Painlev\\'e equations from the first \nto the fifth. \nWe will classify all rational transformations which change \nthe confluent hypergeometric equations to linear equations of \nthe Painlev\\'e type from the first to the fifth. Compared with the sixth Painlev\\'e equation,\nwe obtain fewer transformations, since the Poincar\\'e rank of the irregular singularities\nof the linear equations \nof the Painlev\\'e type is at most three. The rational functions which change \nthe confluent hypergeometric equations to linear equations of \nthe Painlev\\'e type have order at most six. \n\nWe show that such rational transformations correspond to almost all algebraic solutions \nof the Painlev\\'e equations from the first to the fifth, up to the B\\\"acklund transformations.\nThe cases of the degenerate fifth Painlev\\'e equation \nand the Laguerre type solution of the fifth Painlev\\'e equation are exceptional. \nFor these cases we need exponential type transformations, since the monodromy data is decomposable. \n\nThe parabolic cylinder equation (the Weber equation),\nthe Bessel equation and the Airy equation are reduced to the confluent hypergeometric \nequations by rational transformations of the independent variable (see \\eqref{bessel},\n\\eqref{weber} and \\eqref{airy}). These classical formulas can be considered \nas confluent versions of the Goursat transformations. Our result is \nan analogue of such classical formulas in the Painlev\\'e analysis.\n\n\nMoreover, we obtain four non-algebraic solutions \nby rational transformations from the confluent hypergeometric equations.\nThey are called symmetric solutions of the Painlev\\'e equations \nand are fixed points of simple transformations of the Painlev\\'e equations. \nThe symmetric solutions are not classical solutions in Umemura's sense, \nbut they are a kind of generalization of classical solutions.\n\n\nIn Section \\ref{PA}, we review the Painlev\\'e equations and their special solutions. \nIn Section \\ref{chg}, we review the confluent hypergeometric equations, since \nwe also treat degenerate confluent hypergeometric equations. \nIn Section \\ref{iso}, we give a list of the linear equations which correspond \nto the Painlev\\'e equations. In Section \\ref{rational}, we show that \nR.~Fuchs' problem is true for the Painlev\\'e equations from the first to the fifth.\n\n\nThe authors thank Professor Kazuo Okamoto, Professor Alexander Kitaev and Mr.~Kazuo Kaneko for \nfruitful discussions.
\n\n\\section{Remarks on the Painlev\\'e equations}\\label{PA}\nIn order to fix the notation, we will review the Painlev\\'e equations.\n\n\\subsection{Remarks on P2 and P34}\nThe thirty-fourth Painlev\\'e equation P34($a$)\n$$y^{\\prime\\prime}=\\frac{(y^{\\prime })^2}{2y}+2y^2 -ty -\\frac{a}{2y} $$\nappeared in Gambier's list of second order nonlinear equations without \nmovable singularities \\cite{Gambier}. It is known that P34 is equivalent to\nthe second Painlev\\'e equation P2($\\alpha$)\n$$y^{\\prime\\prime}= 2y^3 + t y +\\alpha.$$\nP2($\\alpha$) can be written in the Hamiltonian form \n\\begin{equation}\\label{2:ha2}\n{\\cal H}_{II}: \\\n\\left\\{\\begin{array}{l}\n q^{\\prime } = -q^2 +p -\\frac t2, \\\\\n p^{\\prime } = 2pq+ \\left(\\alpha+\\frac12\\right), \\\\\n\\end{array}\n\\right.\n\\end{equation}\nwith the Hamiltonian\n $$ H_{II}= {1 \\over 2}p^2- \\left(q^2+{t \\over 2} \\right) p-\\left(\\alpha+\\frac12\\right) q.$$\nIf we eliminate $p$ from \\eqref{2:ha2}, we obtain P2($\\alpha$). \nIf we eliminate $q$ from \\eqref{2:ha2}, we obtain the thirty-fourth \nPainlev\\'e equation P34($(\\alpha+1\/2)^2$).\nTherefore P2($\\alpha$) and P34($(\\alpha+1\/2)^2$) are equivalent as nonlinear \ndifferential equations, but we will distinguish these two equations from the isomonodromic viewpoint. \n\n\\subsection{Remarks on P3 and P5}\n\nIn the following we distinguish the three types of \nthe third Painlev\\'e equation P3$(\\alpha, \\beta, \\gamma, \\delta)$:\n\\begin{eqnarray*}\n y^{\\prime\\prime}&=& \\frac{1}{ y}{y^{\\prime}}^2-\\frac{y^{\\prime}}{ t}+\n\\frac{\\alpha y^2 + \\beta} {t}+\\gamma y^3 + \\frac{\\delta }{ y},\n\\end{eqnarray*}\nbecause they have different forms of isomonodromic deformations:\n\\begin{itemize}\n\t\\item $D_8^{(1)}$ if $\\alpha\\not=0, \\beta\\not=0$, $\\gamma=0, \\delta=0$,\n\t\\item $D_7^{(1)}$ if $\\delta=0$, $\\gamma\\not=0$, $\\beta\\not=0$, or $\\gamma=0$, $\\delta\\not=0$, $\\alpha\\not=0$,\n\t\\item $D_6^{(1)}$ if $\\gamma\\delta\\not=0$.\n\\end{itemize}\nIn the case $\\beta=0, \\delta=0$ (or $\\alpha=0, \\gamma=0$), the third Painlev\\'e equation \ncan be solved by quadratures, and we exclude this case from the Painlev\\'e family. $D_j^{(1)}\\ (j=6,7,8)$ denote \nthe affine Dynkin diagrams corresponding to Okamoto's initial value spaces. \nIn this paper we omit the upper index $(1)$ for simplicity and write $D_6, D_7, D_8$.\nFor details, see \\cite{KOOS}.\n By suitable scale transformations\n$t \\to ct, y \\to dy$, we can fix $\\gamma=4, \\delta=-4$ for $D_6$ \nand $\\gamma=2$ for $D_7$. \nWe will use another form of the third Painlev\\'e equation P3${}^\\prime(\\alpha, \\beta, \\gamma, \\delta)$ \n\\begin{eqnarray*}\nq^{\\prime\\prime}&=& \n\\frac{1}{q}{q^{\\prime}}^2-\\frac{q^{\\prime}}{x}+\n\\frac{q^2(\\alpha+\\gamma q)} {4x^2}+\\frac{\\beta}{4x} + \\frac{\\delta }{4q},\n\\end{eqnarray*}\nsince P3${}^\\prime$ is better suited to isomonodromic deformations than P3. \nWe can change P3 to P3${}^\\prime$ by $x=t^2, ty = q.$\n\\par\\bigskip\n\nFor the fifth equation P5$(\\alpha, \\beta, \\gamma, \\delta)$ \n\\begin{eqnarray*}\ny^{\\prime\\prime}&=& \\left(\\frac{1}{2y}+\\frac{1}{y-1} \\right){y^{\\prime}}^2-\\frac{1}{t}\n{y^{\\prime}}+\\frac{(y-1)^2}{t^2}\\left(\\alpha y +\\frac{\\beta}{y}\\right)\\\\\n&{}& \\hskip 2cm +\\gamma {y\\over t}+\\delta {y(y+1)\\over y-1},\n\\end{eqnarray*}\nwe assume that $\\delta\\not=0$. \nWhen $\\delta=0, \\gamma\\not=0$, the fifth equation is equivalent to the third equation of \nthe $D_6$ type \\cite{Gromak}. 
We denote by deg-P5$(\\alpha, \\beta, \\gamma, 0)$ the case $\\delta=0$. \nBy a suitable scale transformation \n$t \\to ct$, we can fix $\\delta=-1\/2$ for P5 and $\\gamma=-2$ for deg-P5. \nLet $q$ be a solution of P3${}^\\prime (4(\\alpha_1-\\beta_1),-4(\\alpha_1+\\beta_1-1), 4, \\ -4)$.\nThen\n$$ y =\\frac{tq'-q^2-(\\alpha_1+\\beta_1)q-t}{tq'+q^2-(\\alpha_1+\\beta_1)q-t}$$\nis a solution of deg-P5$( {\\alpha_1^2}\/2, -{\\beta_1^2}\/2, -2,0)$. \n\n\nP5$(\\alpha, \\beta, 0, 0)$ can be solved by quadratures, and we exclude this case from the Painlev\\'e family.\n\n\\subsection{Special solutions from P1 to P5}\nWe will review special solutions of the Painlev\\'e equations. \nAlthough generic solutions of the Painlev\\'e equations are transcendental, \nthere exist some special solutions which reduce to known functions.\nUmemura defined a class of `known functions', which are\ncalled {\\it classical solutions} \\cite{U:stras}. He also gave a method \nto classify the classical solutions of the Painlev\\'e equations. \n\nThere exist two types of classical solutions of the Painlev\\'e equations: \none is algebraic and the other is of Riccati type. \nUp to now, all classical solutions have been classified \nexcept the algebraic solutions of the sixth Painlev\\'e equation. \nWe list all classical solutions of P1 to P5. \n\n\\begin{theorem}\\label{classical}\n1) All solutions of P1 are transcendental.\n\n\\par\\medskip\\noindent\n2) P2($0$) has a rational solution $y=0$. \nP2($-1\/2$) has a Riccati type solution $y=-u'\/u$. \nHere $u$ is any solution of the Airy equation $ u^{\\prime\\prime}+tu\/2=0$.\n\n\\par\\medskip\\noindent\n3) P34($(\\alpha+1\/2)^2$) is equivalent to P2($\\alpha$). \nP34($1\/4$) has a rational solution $y=t\/2$. P34($1$) has Riccati type solutions. \n\n\\par\\medskip\\noindent\n4) P4($0,-2\/9$) has a rational solution $y=-2t\/3$. \nP4($1-s, -2s^2$) has a Riccati type solution $y=-u'\/u$. \nHere $u$ is any solution of the Hermite-Weber equation \n$ u^{\\prime\\prime}+2t u^{\\prime}+2s u =0$. If $s=1$, \nP4($0,-2$) has a rational solution $y=-2t$, which reduces to the \nHermite polynomial.\n\n\\par\\medskip\\noindent\n5) P3${}^{\\prime}(D_6)$($a,-a, 4,-4$) has an algebraic solution $y=-\\sqrt{t}$.\nP3$^{\\prime}(D_6)$($4h, 4(h+1),4,-4$) has a Riccati type solution $y=u'\/u$.\nHere $u$ is any solution of\n$ tu^{\\prime\\prime}+(2h+1) u^{\\prime}-4t u =0$. \n\n\\par\\medskip\\noindent\n6) P3${}^\\prime$($D_7$)($\\alpha, \\beta, \\gamma, 0$) does not have Riccati type solutions. \nP3${}^\\prime$($D_7$)($0, -2, 2,0$) has an algebraic solution $y= t^{1\/3}$. \n\n\\par\\medskip\\noindent\n7) P3${}^\\prime$($D_8$)($\\alpha, \\beta, 0, 0$) does not have Riccati type solutions. \nP3${}^\\prime$($D_8$) ($8h,$ $-8h,0,0$) has an algebraic solution $y= -\\sqrt{t} $. \n\n\\par\\medskip\\noindent\n8) P5($a, -a, 0, \\delta$) has a rational solution $y=-1$. \nP5($ (\\kappa_0+s)^2\/2, -\\kappa_0^2\/2, -(s+1), -1\/2$) has Riccati type solutions \n$y=-tu'\/ (\\kappa_0+s)u$. \nHere $u$ is any solution of \n$ t^2 u^{\\prime\\prime}+ t( t-s-2\\kappa_0+1) u^{\\prime}+\\kappa_0(\\kappa_0+s) u =0$. \nIf $\\kappa_0=1 $, \nP5($ (s+1)^2\/2, -1\/2, -(s+1), -1\/2$) has a rational solution $y= t\/(s+1)+1$, \nwhich reduces to the Laguerre polynomial.\n\n\\par\\medskip\\noindent\n9) deg-P5$( {\\alpha_1^2}\/2, -{\\beta_1^2}\/2, -2,0)$ is equivalent to \nP3${}^\\prime$($D_6$)$(4(\\alpha_1-\\beta_1),-4(\\alpha_1+\\beta_1-1), 4, \\ -4)$. 
\ndeg-P5($h^2\/2,-8,-2,0$) has an algebraic solution $y=1+2\\sqrt{t}\/h$.\ndeg-P5($\\alpha, 0, \\gamma, 0$) has the Riccati type solutions. \n\n\\par\\medskip\\noindent\n10) All of classical solutions of P1 to P5 are equivalent to the \nabove solutions up to the B\\\"acklund transformations.\n\\end{theorem} \n\n\n\\subsection{Symmetric solutions for P1, P2, P34 and P5}\nWe will review symmetric solutions \\cite{AVK}, \\cite{KK}. \nThe first, second, thirty-fourth and fourth Painlev\\'e equations\n\\begin{eqnarray*}\n&\\textrm{P1}\\qquad &y^{\\prime\\prime}=6y^2 + t,\\\\\n&\\textrm{P2}(\\alpha)\\qquad &y^{\\prime\\prime}= 2y^3+ t y + \\alpha,\\\\\n&\\textrm{P34}(a)\\qquad & y^{\\prime\\prime}=\\frac{(y^{\\prime })^2}{2y}+2y^2 -ty -\\frac{a}{2y},\\\\\n&\\textrm{P4}(\\alpha,\\beta)\\qquad &y^{\\prime\\prime}= \\frac{1}{2y}{y^{\\prime}}^2+\\frac{3}{ 2}y^3+ 4 t y^2 \n+2(t^2-\\alpha)y +\\frac{\\beta}{y},\\\\\n\\end{eqnarray*}\nhave a simple symmetry:\n\\begin{eqnarray*}\n&\\textrm{P1}\\qquad &y \\to \\zeta^3 y, \\quad t \\to \\zeta t, \\quad (\\zeta^5=1)\\\\\n&\\textrm{P2}\\qquad &y \\to \\omega y, \\quad t \\to \\omega^2 t, \\quad (\\omega^3=1)\\\\\n&\\textrm{P34}\\qquad &y \\to \\omega y, \\quad t \\to \\omega t, \\quad (\\omega^3=1)\\\\\n&\\textrm{P4}\\qquad &y \\to - y, \\quad t \\to - t, \n\\end{eqnarray*}\nThere exist symmetric solutions which are invariant under the action of the cyclic group.\nThe symmetric solutions are studied by Kitaev \\cite{AVK} for P1 and P2 and by Kaneko \n\\cite{KK} for P4. Since these symmetric solutions exist for any parameter of the Painlev\\'e equations,\nthey are not algebraic for generic parameters. \n\n\n\\begin{theorem}\\label{symmetric} 1) For P1, we have two symmetric solutions\n\\begin{eqnarray*}\ny &=& \\frac16 t^3 +\\frac1{336}t^8+\\frac1{26208}t^{13}+\\frac{95}{224550144}t^{18}+\\cdots,\\\\\ny &=& t^{-2} -\\frac1{6}t^3+\\frac1{264}t^{8}-\\frac1{19008}t^{13}+\\cdots.\n\\end{eqnarray*}\n\n2) For P2($\\alpha$), we have three symmetric solutions\n\\begin{eqnarray*}\ny &=& \\frac{\\alpha}2 t^2 +\\frac{\\alpha}{40} t^5 +\\frac{10\\alpha^3+\\alpha}{40}t^{8}+\\cdots,\\\\\ny &=& t^{-1}-\\frac{\\alpha+1}{4} t^3 +\\frac{(\\alpha+1)(3\\alpha+1)}{112}t^{5}+\\cdots,\\\\\ny &=& - t^{-1} -\\frac{\\alpha-1}{4} t^3 -\\frac{(\\alpha-1)(3\\alpha-1)}{112}t^{5}+\\cdots.\n\\end{eqnarray*}\nThey are equivalent to each other by the B\\\"acklund transformations.\n\n2) For P34($a^2$), we have three symmetric solutions\n\\begin{eqnarray*}\ny &=& a t +\\frac{a(2a-1)}{8} t^4 +\\frac{a(2a-1)(10a-3)}{560}t^{7}+\\cdots,\\\\\ny &=& -a t +\\frac{a(2a+1)}{8} t^4 -+\\frac{a(2a+1)(10a+3)}{560}t^{7}+\\cdots,\\\\\ny &=& \\frac2{t^2}+\\frac t2 -\\frac{4a^2-9}{224} t^4 -\\frac{4a^2-9}{5600}t^7+\\cdots.\n\\end{eqnarray*}\nEach solution corresponds to the symmetric solutions of P2, respectively, by \n$$y_{34}=y_2^\\prime+y_2^2+\\frac t2.$$\n\n\n4) For P4($\\alpha, -8\\theta_0^2$), we have four symmetric solutions\n\\begin{eqnarray*}\ny &=& \\pm 4\\theta_0\\left(t -\\frac{2 \\alpha}{3} t^3 \n +\\frac{2}{15} (\\alpha^2+12\\theta_0^2\\pm \\theta_0+1 )t^{5}+\\cdots\\right),\\\\\ny &=& \\pm t^{-1}+\\frac{2}{3}(\\pm\\alpha-2) t^3 \n \\mp \\frac{2}{45}(-7\\alpha^2\\pm 16\\alpha+36\\theta_0^2-4 )t^{5}+\\cdots.\n\\end{eqnarray*}\nThey are equivalent to each other by the B\\\"acklund transformations.\n\\end{theorem}\n\n\\par\\noindent\nWe remark that there are no B\\\"acklund transformation for P1.\nWe may think symmetric solutions as a generalization of Umemura's classical \nsolutions. 
We notice that symmetric solutions are not classical functions \nexcept for special parameters. For example, the first solution of P2($\\alpha$) \nis transcendental for a generic parameter, but it is a rational solution $y=0$ \nfor $\\alpha=0$. It is rational if and only if $\\alpha\/3 $ is an integer. \nThe symmetric solutions exist for \nall parameters, while classical solutions exist only for special parameters.\n\n\\section{Confluent hypergeometric equation}\\label{chg}\nIn this section we review Kummer's confluent hypergeometric equation. \nIt is known that there exist two standard forms, Kummer's form and Whittaker's form, \nfor the confluent hypergeometric equation. At first we will use Kummer's standard form.\n\nWe review irregular singularities to fix our notations. \nFor a rational function $a(x)$, we set\n$${\\rm ord}_{x=c}a(x)=\\ ({\\rm pole\\ order\\ of}\\ a(x)\\ {\\rm at}\\ x=c).$$\nA linear differential equation \n$$\\frac{d^2u}{dx^2}+p(x)\\frac{d u}{dx }+q(x)u=0$$\nhas an irregular singularity at $x=c$ if and only if\n${\\rm ord}_{x=c}p(x)>1$ or ${\\rm ord}_{x=c}q(x)>2$. \nThe Poincar\\'e rank $r$ of the irregular singularity $x=c$ is \n$$r=\\max \\{{\\rm ord}_{x=c}p(x), 1\/2 \\cdot{\\rm ord}_{x=c}q(x) \\}-1.$$\nWe regard a regular singularity as having the Poincar\\'e rank $r=0$. \n\n\nKummer's confluent hypergeometric equation is\n\\begin{equation}\\label{2:chg}\nx\\frac{d^2u}{dx^2}+(c-x)\\frac{du}{dx}-au=0, \n\\end{equation}\nwhich has a regular singularity at $x=0$ and an irregular singularity \nwith the Poincar\\'e rank $1$ at $x=\\infty$. \nWe set $x\\to \\varepsilon x, a \\to 1\/\\varepsilon$ in \\eqref{2:chg} and \ntake the limit $\\varepsilon \\to 0$. Then we get a degenerate confluent \nhypergeometric equation (the confluent hypergeometric limit equation)\n\\begin{equation}\\label{2:dchg}\nx\\frac{d^2u}{dx^2}+c\\frac{du}{dx}-u=0, \n\\end{equation}\nwhich has a regular singularity at $x=0$ and an irregular singularity \nwith the Poincar\\'e rank $1\/2$ at $x=\\infty$. \nThe general solution of \\eqref{2:dchg} is\n$$u=C\\ {}_0F_{1}(c;x)+D\\ x^{1-c}{}_0F_{1}(2-c;x).$$\n\\eqref{2:dchg} is reduced to \\eqref{2:chg} by Kummer's second formula\n$${}_0F_{1}\\left(c; {x^2}\/{16}\\right)= e^{-x\/2}\\ {}_{1}F_1 (c-1\/2,2c-1;x).$$\nIt is also related to the Bessel function \n\\begin{equation}\\label{bessel}\n{}_0F_{1}\\left(c; {x^2}\/{16}\\right)= \\Gamma(c)(-ix\/4)^{1-c}J_{c-1}(-ix\/2).\n\\end{equation}\n\nLater we will use equations of $SL$-type in Section \\ref{pa_type}. \nThe $SL$-type forms of the confluent hypergeometric equations are the Whittaker equation and its degenerate analogue:\n\\begin{eqnarray}\nW_{k,m}: & \\dfrac{d^2u}{dx^2}= \\left(\\dfrac{ 1}{4}-\\dfrac{k}{x}+\\dfrac{m^2-\\frac{1}{4}}{x^2} \\right)u,\\label{sl:chg}\\\\\nDW_{m}:& \\dfrac{d^2u}{dx^2}= \\left(\\dfrac{ 1}{x}+\\dfrac{m^2-\\frac{1}{4}}{x^2} \\right)u.\\label{sl:dchg}\n\\end{eqnarray}\nIn $W_{k,m}$, the parameters $k, m$ correspond to \n $k=c\/2-a, \\quad m= (c-1)\/2$ \nin \\eqref{2:chg}. In $DW_{m}$, the parameter $m$ corresponds to \n $ m=(c-1)\/2$ in \\eqref{2:dchg}. \n\n\n\\section{Linear equations of the Painlev\\'e type}\\label{iso}\nWe call a linear equation of the second order whose isomonodromic \ndeformation gives a Painlev\\'e equation \na linear equation of {\\it the Painlev\\'e type}. \nIn this section we list all linear equations of the Painlev\\'e type.\nSince a linear equation of the second order is equivalent to a $2\\times 2$ system of \nlinear equations of the first order, we use a single equation. 
It is easy to \nrewrite it as a $2\\times 2$ system. \n\n\\subsection{Singularity type}\n\nLinear differential equations of the Painlev\\'e type have \nthe following types of singular points. \n\\begin{center}\n\\begin{tabular}{ r|l }\nP6\\ & $(0)^4$ \\\\\n\\hline\nP5\\ & $(0)^2(1)$ \\\\\n\\hline\nP3$(D_6)$\\ & $(1)^2$ or $(0)^2(1\/2)$ \\\\\n\\hline\nP3$(D_7)$\\ & $(1)(1\/2)$ \\\\\n\\hline\nP3$(D_8)$\\ & $(1\/2)^2$ \\\\\n\\hline\nP4\\ & $(0)(2)$ \\\\\n\\hline\nP34\\ & $(0)(3\/2)$ \\\\\n\\hline\nP2\\ & $(3)$ \\\\\n\\hline\nP1\\ & $(5\/2)$ \n \\end{tabular} \n\\end{center}\nHere $(m)^k$ means $k$ singular points with the Poincar\\'e rank $m$, and \n$(0)$ means a regular singularity. The third Painlev\\'e equation of the $D_6$ type\nhas two types of linear equations: $(1)^2$ is the standard one, and $(0)^2(1\/2)$ comes from \ndeg-P5. \n\nTwo different types of linear equations are used for the isomonodromic deformation of \nthe second Painlev\\'e equation. One is Garnier's form \\cite{Garnier}, which is the same one used by \nOkamoto \\cite{Okamoto}\nand Miwa-Jimbo \\cite{JM2}. The second one is Flaschka-Newell's form \\cite{FN}, which \nis equivalent to the type $(0)(3\/2)$ by a rational transformation \\cite{Kapaev}.\nIt is natural to regard Flaschka-Newell's form as an isomonodromic deformation for P34. \nWe will study Garnier's form and Flaschka-Newell's form in a succeeding paper.\n\n\n\n\\subsection{List of equations of the Painlev\\'e type}\\label{pa_type}\n\nIn the following, we list all linear equations of the Painlev\\'e type.\nWe use linear equations of $SL$-type:\n\\begin{equation}\\label{SL} \n\\frac{d^2 u}{d z^2}=p(z,t)u.\n\\end{equation}\nIsomonodromic deformation for a linear equation of $SL$-type is given\nby the compatibility condition of the following system \\cite{Okamoto}:\n\\begin{equation}\\label{SL:mpd}\\begin{aligned} \n\\frac{\\partial^2 u}{\\partial z^2}&=V(z,t)u,\\\\\n\\frac{\\partial u}{\\partial t}&= A(z,t)\\frac{\\partial u}{\\partial z}\n -\\frac12\\frac{\\partial A(z,t)}{\\partial z }u.\n\\end{aligned}\\end{equation}\nWe will list $V(z,t)$ and $A(z)$ for the linear equations of the Painlev\\'e type. We corrected \nmisprints in \\cite{Okamoto}. 
The compatibility condition gives the Painlev\\'e equation, \nwhich turn out a Hamiltonian system with the Hamiltonian $K_J$ in the following list.\n$(q,p)$ are canonical coordinates and $q$ satisfies the Painlev\\'e equation.\n\\par\\bigskip\\noindent\n{\\it Type $(5\/2)$}\\ : the first Painlev\\'e equation P1 \\\\\n$$V(z,t)=4z^3 +2t z +2 K_{\\rm I} +\\frac{3}{4(z-q)^2} -\\frac{p}{z-q}$$\n$$A(z)= \\frac 12\\cdot \\frac{1}{z-q}$$\n$$K_{\\rm I} =\\frac12 p^2 -2q^3 -t q.$$\n\n\\par\\bigskip\\noindent\n{\\it Type $(3)$}\\ : the second Painlev\\'e equation P2($\\alpha$)\\\\\n$$V(z,t)=z^4 +t z^2 +2\\alpha z +2 K_{\\rm II} +\\frac{3}{4(z-q)^2} -\\frac{p}{z-q}$$\n$$A(z)= \\frac 12\\cdot \\frac{1}{z-q}$$\n$$K_{\\rm II} =\\frac12 p^2 -\\frac12 q^4 -\\frac12 t q^2 -\\alpha q.$$\n\n\\par\\bigskip\\noindent\n{\\it Type $(1)(3\/2)$}\\ : the thirty-fourth Painlev\\'e equation P34($\\alpha$)\\\\\n$$V(z,t)=\\frac z2 -\\frac t2 +\\frac{\\alpha-1}{4z^2} -\\frac{ K_{\\rm XXXIV}}z\n +\\frac{3}{4(z-q)^2} -\\frac{pq}{z(z-q)}$$\n$$A(z)= - \\frac{z}{z-q}$$\n$$K_{\\rm XXXIV} =-qp^2 +p +\\frac{q^2}2 -\\frac{tq}{2} +\\frac{\\alpha-1}{4q}.$$\n\n\n\\par\\bigskip\\noindent\n{\\it Type $(1)(2)$}\\ : the fourth Painlev\\'e equation P4($\\alpha, \\beta$)\\\\\n$$V(z,t)=\\frac{{a_0}}{z^2}+\\frac{K_{\\rm VI}}{2z} +a_1 +\\left(\\frac{z+2t}4\\right)^2 \n+\\frac{3}{4(z-q)^2} -\\frac{pq}{z(z-q)},$$\n$$A(z)= \\frac{2z}{z-q},$$\n$$K_{\\rm VI} =2qp^2-2p -\\frac{{a_0}}{q} -2a_1 q-2q\\left(\\frac{q+2t}4\\right)^2,$$\n$$a_0= -\\frac\\beta 8-\\frac14,\\ a_1=-\\frac\\alpha4.$$\n\n\\par\\bigskip\\noindent\n{\\it Type $(1)^2$}\\ : the third Painlev\\'e equation P3${}^\\prime$($\\alpha, \\beta, \\gamma, \\delta$)\\\\\n$$V(z,t)=\\frac{a_0t^2}{z^4}+\\frac{a_0^\\prime t }{z^3}-\\frac{t K_{\\rm III}^\\prime}{ z^2}+\n\\frac{a_\\infty^\\prime}{ z } + a_\\infty\n+\\frac{3}{4(z-q)^2}-\\frac{pq}{z(z-q)},$$\n$$A(z)= \\frac{ qz}{t(z-q)},$$ \n$$t K_{\\rm III}^\\prime = q^2p^2 -qp -\\frac{a_0 t^2}{ q^2}\n-\\frac{a_0^\\prime t}{ q}- {a_\\infty^\\prime q} -a_\\infty {q^2},$$\n$$a_0= -\\frac\\delta{16},\\ a_0^\\prime=-\\frac\\beta8,\\ \na_\\infty= \\frac\\gamma{16},\\ a_\\infty^\\prime=\\frac\\alpha8.$$\n\n\\par\\bigskip\\noindent\n{\\it Type $(0)^2(1)$}\\ : the fifth Painlev\\'e equation P5($\\alpha, \\beta, \\gamma, \\delta$)\\\\\n$$V(z,t)=\\frac{a_1 t^2}{(z-1)^4}+\\frac{K_{\\rm V} t}{(z-1)^2 z}+\\frac{a_2 t}{(z-1)^3}-\\frac{p\n (q -1) q }{ z (z-1)(z-q )}+\\frac{a_\\infty}{(z-1)^2}+\\frac{{a_0}}{z^2}+\\frac{3}{4\n (z-q )^2}$$\n$$A(z)=\\frac{q-1}t \\cdot \\frac{z(z-1) }{ z-q }$$\n$$ tK_{\\rm V}= q(q -1)^2 \\left[-\\frac{a_1 t^2}{(q-1)^4}-\\frac{a_2 t}{(q -1)^3}+p^2-\n \\left(\\frac{1}{q }+\\frac{1}{q -1}\\right)p -\\frac{a_\\infty}{(q -1)^2}-\\frac{a_0}{q \n ^2}\\right] $$\n$$a_0= -\\frac\\beta 2-\\frac14,\\ a_1=-\\frac\\delta2 ,\\ a_2=-\\frac\\gamma2,\\ a_\\infty=\\frac12(\\alpha+\\beta)-\\frac34$$\n\n\\par\\bigskip\nIf $\\gamma=0$ or $\\delta=0$ for P3, the type of the linear equation is $(1)(1\/2)$. \nWe will take $\\delta=0$ as a standard form, which reduces to P3($D_7$)($\\alpha, \\beta, \\gamma,0$).\nIf $\\gamma=0$ and $\\delta=0$ for P3, the type of the linear equation is $ (1\/2)^2$, which reduces to\nP3($D_8$)($\\alpha, \\beta,0,0$). \nIf $\\delta=0$ for P5, the type of the linear equation is $(0)^2(1\/2)$, which reduces to\ndeg-P5($\\alpha, \\beta, \\gamma,0$). 
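\n\nAs an illustration of how the above data encode the Painlev\\'e equations (this computation is standard and is recorded here only for the reader's convenience), the compatibility condition of \\eqref{SL:mpd} reads\n$$\\frac{\\partial V}{\\partial t}= A\\frac{\\partial V}{\\partial z}+2V\\frac{\\partial A}{\\partial z}-\\frac12\\frac{\\partial^3 A}{\\partial z^3}.$$\nFor example, substituting the data of the type $(5\/2)$ and expanding in powers of $z-q$, this identity reduces to\n$$q^{\\prime}=p, \\qquad p^{\\prime}=6q^2+t,$$\nwhich is the Hamiltonian system for $K_{\\rm I}$, so that $q$ satisfies P1. The other cases can be checked in the same way.\n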
\nWe omit P6 since we treat only the cases with irregular singularities in this paper.\n\n\n\n\\section{Rational transform of $W_{k,m}$ and $DW_m$}\\label{rational}\n\nIn this section, we will classify all rational transformations \nof the independent variable which change $W_{k,m}$ or $DW_{m}$ into linear \ndifferential equations of the Painlev\\'e type.\n\nThe following simple lemma is the key to the classification.\n\\begin{lemma}\nFor a linear differential equation \n\\begin{equation}\\label{2ndnormal}\n\\frac{d^2u}{dx^2}=p(x)u(x),\n\\end{equation}\nwe take a new independent variable $z$ defined by a rational transform $x=x(z)$.\nIf $x=c$ is a regular singularity, $z_c=x^{-1}(c)$ is also a regular singularity \nof the transformed equation. When $z_c$ is an $n$-th order branch point and the difference\nof the two exponents at $x=c$ is $1\/n$, $z=z_c$ is an apparent singularity and \n$z=z_c$ can be reduced to a regular point by a suitable change $y=v(z)u$.\n\nIf $x=c$ is an irregular singularity of the Poincar\\'e rank $m$ and \n$z_c$ is an $n$-th order branch point, $z=z_c$ is an irregular singularity of the Poincar\\'e \nrank $mn$.\n\\end{lemma}\n\nThe proof is obvious. Since we start from the confluent hypergeometric equations, \nwe keep track of the branch points over $x=0$ and $x=\\infty$. For $x=x(z)$, \nif the points of $x^{-1}(0)$ are branch points of orders $\\mu_1,\\mu_2,\\cdots$ and \nthe points of $x^{-1}(\\infty)$ are branch points of orders $\\nu_1,\\nu_2,\\cdots$, \nwe say that $x(z)$ is of type $(\\mu_1+\\mu_2+\\cdots |\\nu_1+\\nu_2+\\cdots)$.\nWe allow $\\mu_j$ or $\\nu_j$ to equal one. $\\sum \\mu_j= \\sum\\nu_j$ is \nthe order of the rational function $x(z)$. \n\n\\begin{theorem}\\label{th:cover} By a rational transform $x=x(z)$, $W_{k,m}$ or $DW_m$\nchanges to a linear equation of the Painlev\\'e type or a confluent hypergeometric equation \nif and only if one of the following cases occurs. \n\\par\\medskip\n\n1) Double cover \n\\par\\medskip\n\\begin{tabular}{ c|c|c|c }\n $W_{k,m}$\\ & $(2|2)$ & $(0)(2)$ & P4-sym \\\\ \n $W_{k, 1\/4}$\\ & $(2|2)$ & $(2)$ & Weber\\\\\t\n $W_{k, 1\/4}$ \\ & $(2|1+1)$ & $(1)^2 $ & D6-alg \\\\\t\n $W_{0, 1\/2}$\\ & $(1+1|2)$ & $(0)(2) $ & P4-Her \\\\\t\n $DW_{m}$\\ & $(2|2)$ & $(0)(1) $ & Kummer \\\\\t\n $DW_{m}$\\ & $(1+1|2)$ & $(0)^2(1) $ & P5-rat \\\\\t\n $DW_{1\/4}$\\ & $(2|1+1)$ & $(1\/2)^2$ & D8-alg \\\\\t\n\\end{tabular}\n\n\\par\\medskip\n2) Cubic cover\n\\par\\medskip\n\\begin{tabular}{ r|c|c|c }\n $W_{k,1\/3}$\\ & $(3|3)$ & $(3)$ & P2-sym \\\\\t\t\n $DW_{m}$\\ & $(3|3)$ & $(0)(3\/2)$ & P34-sym \\\\\t\n $DW_{1\/6}$\\ & $(3|3)$ & $(3\/2)$ & Airy \\\\\t\n $DW_{1\/4}$\\ & $(2+1|3)$ & $(0)(3\/2)$ & P34-rat \\\\\t\n $DW_{1\/6}$\\ & $(3|2+1)$ & $(1)(1\/2)$ & D7-alg \t\n\\end{tabular}\n\\par\\medskip\n\n\n3) Quartic cover\n\\par\\medskip\n\\begin{tabular}{ r|c|c|c }\n $DW_{1\/6}$\\ & $(3+1|4)$ & \\hskip 5mm $(0)(2) \\hskip 5mm $ & P4-rat \n\\end{tabular}\n\\par\\medskip\n\n4) Quintic cover \n\\par\\medskip\n\\begin{tabular}{ r|c|c|c }\n $DW_{1\/5}$\\ & \\hskip 3mm $(5|5)$ \\hskip 3mm {} &\\hskip 3mm $(5\/2) $ \\hskip 3mm & P1-sym \\\\\n $DW_{1\/10}$\\ & \\hskip 3mm $(5|5)$ \\hskip 3mm {} &\\hskip 3mm $(5\/2) $ \\hskip 3mm & P1-sym \t\t\t\n\\end{tabular}\n\n5) Sextic cover \n\\par\\medskip\n\\begin{tabular}{ r|c|c|c }\n $DW_{1\/6}$\\ & $(3+3|6)$ &\\hskip 5mm $(3)\\hskip 5mm $ & P2-rat \n\\end{tabular}\n\\par\\medskip\\noindent\nHere the first column is the starting linear equation. The second column is \nthe type of the rational transform. 
The third column is the singularity type \nof the transformed linear equation. The fourth column is the solution of \nthe Painlev\\'e equation.\n\\end{theorem}\n\\par\\noindent\n{\\it Proof.} The Poincar\\'e rank of the irregular singularities of\n equations of the Painlev\\'e type is at most three. Therefore we can take \n up to a cubic cover of $W_{k,m}$ and up to a sextic cover of $DW_{m}$.\n In each case, we can classify the covering maps directly.\n \\hfill \\boxed{} \n \\par\\bigskip\\noindent \nOther types of confluent equations, such as the Weber (parabolic cylinder) equation\nand the Airy equation, also appear in the table. Therefore coverings of such equations are\nincluded in the coverings of $W_{k,m}$ or $DW_{m}$. We consider the Bessel equation as a \nspecial case of $W_{0,m}$. \nWe will explain each case in the next subsection. Theorem \\ref{th:cover} shows that \nit is necessary to distinguish P2 and P34 from the isomonodromic viewpoint. \n \n\n\n\\subsection{Exact form}\\label{exact}\nIn this section we will write down explicitly the algebraic solutions obtained by rational transformations\nfrom $W_{k,m}$ or $DW_{m}$.\nThe following lemma is the key to calculating the exact transformations.\n\n\\begin{lemma}\\label{keylemma} For the equation\n\\begin{equation*} \n \\frac{d^2u}{dx^2}=Q(x)u,\n\\end{equation*}\nwe set \n$$x=x(z), \\ u(x)= \\sqrt{\\frac{dx}{dz}} v(z).$$\nThen $v$ satisfies\n\\begin{equation}\\label{transformed}\n\\frac{d^2v}{dz^2} =\\left( Q(x(z))(x'(z))^2- \\frac12\\{x,z\\} \\right)v.\n\\end{equation}\nHere $\\{x,z\\}$ is the Schwarzian derivative\n$$\\{x,z\\}=\\frac{x^{\\prime\\prime\\prime}}{x'}\n -\\frac32 \\left(\\frac{x^{\\prime\\prime }}{x'} \\right)^2,$$\nwhere the primes denote derivatives with respect to $z$.\n\\end{lemma}\n\\noindent \nWe will take $Q(x)$ as \\eqref{sl:chg} or \\eqref{sl:dchg}. Then we calculate $V(z,t)$ \nin the right hand side of \\eqref{transformed} in each case of Theorem \\ref{th:cover}. \nWe should take suitable coefficients of the rational function $x=x(z)$ \nso that the result coincides with an equation of the Painlev\\'e type. In the following $(q,p)$ are the \ncanonical coordinates of Section \\ref{pa_type}.\n\n\\subsubsection{Double cover}\n1) P4-sym $(2|2)$ \\\\ \\noindent\nFor $W_{k,m}$, we set $x=z^2\/4$. Then \n$$V(z,t)= -k+ \\frac{16m^2-1}{4 z^2}+\\frac {z^2}{16},$$\nwhich is the case $t=0$ for the symmetric solution $ q(t)=2(4m+1)t+O(t^3)$ \nof P4$(4k,-2(4m+1)^2)$.\n\n\\par\\bigskip\\noindent\n2) Weber $(2|2)$ \\\\ \\noindent\nThis is a special case of P4-sym.\nFor $W_{k,1\/4}$, we set $x=z^2\/2$. Then \n$$V(z,t)= \\frac{z^2}{4} -2k,$$\nwhich is the parabolic cylinder equation for $D_{2k-1\/2}(z)$. \nThis shows the well-known formula\n\\begin{equation}\\label{weber}\nD_{2k-1\/2}(z)= 2^k z^{-1\/2}W_{k,-1\/4}\\left(\\frac{z^2}2 \\right).\n\\end{equation}\n\n\\par\\bigskip\\noindent\n3) $D_6$-alg $(2|1+1)$ \\\\ \\noindent\nFor $W_{k,1\/4}$, we set $x=(z-\\sqrt{t})^2\/z$. Then \n\\begin{eqnarray*}\nV(z,t)= \\frac{1}{4} + \\frac{t^2}{4z^4}-\\frac{kt}{z^3}\n-\\frac{8t+32k\\sqrt{t}+3}{16 z^2}\n-\\frac{k}{z }+ \\frac{3}{ 4(z+\\sqrt{t})^2}-\\frac{3}{ 4z(z+\\sqrt{t})},\n\\end{eqnarray*}\nwhich gives the algebraic solution $ q(t)=-\\sqrt{t}$ of P3${}^\\prime(-8k,8k,4,-4)$.\n\n\\par\\bigskip\\noindent\n4) P4-Her $(1+1|2)$ \\\\ \\noindent\nFor $W_{0, 1\/2}$, we set $x =z (z+4t)\/4$. Then \n$$V(z,t)= \\left(\\frac{z+2t}{4} \\right)^2+\\frac{3}{ 4(z+2t)^2},$$\nwhich gives the rational solution $ q(t)=-2t$ of P4($0,-2$). 
\n\n\\par\\bigskip\\noindent\n5) Kummer's second formula $(2|2)$ \\\\ \\noindent\nFor $DW_{m}$, we set $x=z^2\/16$. Then \n$$V(z,t)= \\frac{1}{4} + \\frac{16m^2-1}{4 z^2},$$\nwhich is $W_{0, 2m}$. This is Kummer's second formula.\n\n\\par\\bigskip\\noindent\n6) P5-rat $(1+1|2)$ \\\\ \\noindent\nFor $DW_{m}$, we set $x= h t^2 z \/(4(z-1)^2)$. Then \n$$V(z,t)= \\frac{h t^2}{(z-1)^4} + \\frac{16m^2-1+h t^2}{4 z(z-1)^2}\n-\\frac3{4z} + \\frac{4m^2-1}{4z^2} + \\frac{3(z+2)}{4(z+1)^2},$$\nwhich gives the rational solution $ q(t)=-1$ of P5($2m^2, -2m^2, 0, -2h$).\n\n\\par\\bigskip\\noindent\n7) $D_8$-alg $(2|1+1)$ \\\\ \\noindent\nFor $DW_{1\/4}$, we set $x=h (z-\\sqrt{t})^2\/z$. Then \n$$V(z,t)= \\frac {h t}{z^3} +\\frac {32 h \\sqrt{t}-3}{16z^2}\n+\\frac{h}z+ \\frac{3}{4(z+\\sqrt{t})^2}-\\frac{3}{4z(z+\\sqrt{t})},$$\nwhich gives the algebraic solution $ q(t)=-\\sqrt{t}$ of P3${}^\\prime(8h,-8h,0,0)$.\n\n\n\\subsubsection{Cubic cover}\n1) P2-sym $(3|3)$\\\\ \\noindent\nFor $W_{k,1\/3}$, we set $x=2z^3\/3$. Then \n$$V(z,t)= z^4 -6kz +\\frac{3}{4z^2},$$\nwhich is the symmetric solution $ q(0)=0, p(0)=0$ of P2$(-3k)$.\n\n\\par\\bigskip\\noindent\n2) P34-sym $(3|3)$\\\\ \\noindent\nFor $DW_{m}$, we set $x= z^3\/18 $. Then \n$$V(z,t)=\\frac z 2 +\\frac{36m^2-1}{4z^2},$$\nwhich is the symmetric solution $ q(0)=0$ of P34$(3 (12m^2 -1))$. \nWe have two symmetric solutions with $ q(0)=0$, and both give the same \nequation when $t=0$.\n\n\\par\\bigskip\\noindent\n3) Airy $(3|3)$\\\\ \\noindent \nThis is a special case of P34-sym.\nFor $DW_{1\/6}$, we set $x= z^3\/9 $. Then \n$$V(z,t)= z,$$\nwhich is the Airy equation. It is known that \n\\begin{equation}\\label{airy}\n{\\rm Ai}(z)=\\frac1{ 3^{2\/3} \\Gamma(\\frac23) } {}_0F_1\\left(\\frac23; \\frac{z^3}9 \\right)\n-\\frac{z}{3^{1\/3}\\Gamma(\\frac13)}{}_0F_1 \\left(\\frac43; \\frac{z^3}9 \\right).\n\\end{equation}\n\n\n\\par\\bigskip\\noindent\n4) P34-rat $(2+1|3)$\\\\ \\noindent \nFor $DW_{1\/4}$, we set $x= z(z-3t\/2)^2\/18 $. Then \n$$V(z,t)= \\frac z 2-\\frac t2 +\\frac{t^3\/4+1}{2tz}-\\frac{3}{16z^2} \n +\\frac{3}{4(z-t\/2)^2}-\\frac{1}{2t(z-t\/2)},$$\nwhich gives the rational solution $ q(t)=t\/2$ of P34$( 1\/4)$.\n\n\n\\par\\bigskip\\noindent\n5) $D_7$-alg $(3|2+1)$\\\\ \\noindent\nFor $DW_{1\/6}$, we set $x =\\left(z+2 t^{1\/3} \\right)^3\/32z$. Then \n$$V(z,t)= \\frac{ t}{4z^3}-\\frac{16+27 t^{2\/3}}{72z^2}\n+\\frac{2}{ 3 t^{1\/3} z }+ \\frac18 +\\frac{3}{4 ( z- t^{1\/3})^2}\n- \\frac{2}{3t^{1\/3}( z- t^{1\/3}) },$$\nwhich is the algebraic solution $ q(t)= t^{1\/3} $ of P3${}^\\prime$($0, -2, 2,0$).\n\n\\subsubsection{Quartic cover}\nIf the covering degree is more than three, we start only from $DW_{m}$. \nIf a quartic covering map $x=x(z)$ splits as $x= (\\bar{x}(z))^2$, \nthe covering factors through $W_{0,2m}$, which is the Kummer case 5) of the double cover above. \nSuch a case occurs exactly when all of $\\mu_j, \\nu_j$ are even.\n$(4|4), (4|2+2), (2+2|4)$ correspond to $(2|2), (2|1+1), (1+1|2)$, respectively.\n\n\\par\\bigskip\\noindent\n1) P4-rat $(3+1|4)$ \\\\ \\noindent\nFor $DW_{1\/6}$, we set $ x=\\frac{1}{256} z \\left(z+ 8 t\/3 \\right)^3$.\nThen \n$$V(z,t)= -\\frac{2}{9 z^2}+\\frac{2t^3}{27 z} +\\left(\\frac{z+2t}4\\right)^2\n+\\frac{27}{4(3z+2t)^2}-\\frac1{z(3z+2t)}$$ \nwhich gives the rational solution $q(t)=-2t\/3$ of P4$(0, -2\/9)$.\n\n\\subsubsection{Quintic cover}\n1) P1-sym $(5|5)$ \\\\ \\noindent\nFor $DW_{1\/5}$, we set $x =4z^5\/25$. 
Then \n$$V(z,t)= \\frac{3}{4 z^2}+4 z^3,$$\nwhich is the symmetric solution $ y(0)=0, y'(0)=0, t=0$ of P1.\n\\par\\bigskip \\noindent\nFor $DW_{1\/10}$, we set $x =4z^5\/25$. Then \n$$V(z,t)= 4 z^3,$$\nwhich is the symmetric solution $ y(t)=1\/t^2+\\cdots$ and $t=0$ of P1. \nIf we substitute $ y(t)=1\/t^2+\\cdots$ in $Q(z,t)$, $Q(z,t)$ may have a pole \n$t=0$ but this pole is apparent. \n\n\\subsubsection{Sextic cover}\nAs the same as the quartic cover, we omit the case $(6|6), (4+2|6)$. \n\\par\\bigskip\\noindent\n1) P2-rat $(3+3|6)$ \\\\ \\noindent\nFor $DW_{1\/6}$, we set $x =\\frac1{36} (z^2+t)^3$. Then \n$$V(z,t)= \\frac{3}{4 z^2}+tz^2+ z^4,$$\nwhich is the rational solution $q=0$ of P2$(0)$.\n\n\n\\subsection{R.~Fuchs' Problem}\n\nCompared with the theorem \\ref{classical} and the theorem \\ref{th:cover}, \nwe obtain all of algebraic solutions and symmetric solutions except \nalgebraic solutions of deg-P5 and the Laguerre solutions of P5. \nThese two solutions are not obtained from $W_{k,m}$ or $DW_{m}$ by rational transformations, \nbut it is obtained by exponential type transformations. \n\n\\subsubsection{Split case}\n1) For deg-P5-alg, we start from\n$$\\frac{d^2 u}{dx^2} =\\frac {h^2-1}{4 x^2}u.$$\nWe set $x=e^{4\\sqrt{tz\/(z-1)}\/h }(\\sqrt{z}+\\sqrt{z-1} )\/ (\\sqrt{z}-\\sqrt{z-1} )$. Then \n\\begin{eqnarray*}\nV(z,t)=\\frac{t}{ (z-1)^3}-\\frac{4h^2-13 }{16(z-1)^2}-\\frac3{16z^2}\n -\\frac{2(h+2\\sqrt{t})^2-5}{8z(z-1)^2}\\\\\n +\\frac{3}{4(z-2\\sqrt{t}\/h-1)^2}-\\frac{3+8\\sqrt{t}\/h}{4z(z-1)(z-1-2\\sqrt{t}\/h)},\n\\end{eqnarray*} \nwhich gives an algebraic solution $y=1+2\\sqrt{t}\/h$ for P5($h^2\/2,-8,-2,0$).\n\n\\par\\bigskip\\noindent\n2) For P5-Lag, we start from\n$$\\frac{d^2 u}{dx^2} =\\frac {h^2-1}{4 x^2}u.$$\nWe set $x=e^{t\/(h(z-1))}(z-1) $. Then \n$$V(z,t)=\\frac{t^2}{4(z-1)^4}-\\frac{ht}{2(z-1)^3}+\\frac{h^2\/4-1}{(z-1)^2}\n -\\frac{3}{4(z-t\/h-1)^2}-\\frac{ht}{(z-1)^2(z-t\/h-1)},$$\nwhich gives a rational solution $y= t\/h+1$ for P5($h^2\/2,-1\/2,-h,-1\/2$).\n\nIn these two cases, the monodromy group of \n\\begin{equation}\\label{split}\n\\frac{d^2u}{dz^2}= V(z,t)u(z)\n\\end{equation} \nis diagonal. Therefore we cannot reduce them to $W_{k,m}$ or $DW_{m}$.\n\n\\subsubsection{Summary}\nWe summarize our result:\n\\begin{theorem}\\label{final} By any rational transformation from confluent hypergeometric \nequations $W_{k,m}$ or $DW_{m}$ to equations of the Painlev\\'e type, we get algebraic solutions or symmetric \nsolutions of P1, P2, P34, P3, P4, P5 except deg-P5-alg and P5-Lag. \nConversely, any algebraic solution except deg-P5-alg and P5-Lag or symmetric solution \nfrom P1 to P5 is obtained by a rational transformation of $W_{k,m}$ or $DW_{m}$. \nFor deg-P5-alg and P5-Lag, we can reduce them to differential equations with \n constant coefficients by exponential type transformations.\n\\end{theorem} \n\n\n\\par\\noindent\nIn this sense, R.~Fuchs' problem is true for P1 to P5. We obtain rational \nsolutions and symmetric solutions of P2 and P34 in different way. \nIn \\cite{AVK}, Kitaev studied a symmetric solution of P2, but his result is \non a symmetric solution of P34 from our viewpoint since he used Flaschka-Newell's form. \nSymmetric solutions of P2 is studied in \\cite{KK2} by using Miwa-Jimbo's form.\n\nThe authors do not know R.~Fuchs' problem is true or not in the case the sixth Painlev\\'e equations. 
\nRecent work by A.~V.~Kitaev \\cite{Kitaev:05} give partial, but affirmative \nanswers to R.~Fuchs' problem.\nWe do not know any negative examples of R.~Fuchs' problem.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{sec:intro}\n\nArtificial neural networks are often ill-conditioned systems in that a small change in the inputs can cause significant changes in the outputs \\citep{szegedy}.\nThis results in poor robustness and vulnerability under adversarial attacks which has been reported on a variety of networks including image classification \\citep{carlini,goodfellow}, speech recognition \\citep{kreuk2018fooling,alzantot2018did,carlini2018audio}, image captioning \\citep{chen2017show} and natural language processing \\citep{gao2018black,ebrahimi2017hotflip}.\nThese issues bring up both theoretical questions of how neural networks generalize \\citep{kawaguchi2017generalization,xu2012robustness} and practical concerns of security in applications \\citep{akhtar2018threat}.\n\nA number of remedies have been proposed for these issues and will be discussed in Section~\\ref{sec:liter}.\nWhite-box defense is particularly difficult and many proposals have failed.\nFor example, \\citet{athalye2018obfuscated} reported that out of eight recent defense works, only \\citet{mit} survived strong attacks.\nSo far the mainstream and most successful remedy is that of adversarial training \\citep{mit}.\nHowever, as will be shown in Tables~\\ref{tbl:mnist} and \\ref{tbl:cifar}, the robustness by adversarial training diminishes when a white-box attacker \\citep{carlini} is allowed to use more iterations.\n\nThis paper explores a different approach and demonstrates that a combination of the following three conditions results in enhanced robustness:\n1) the Lipschitz constant of a network from inputs to logits is no greater than 1 with respect to the $L_2$-norm;\n2) the loss function explicitly maximizes \\emph{confidence gap}, which is the difference between the largest and second largest logits of a classifier;\n3) the network architecture restricts confidence gaps as little as possible. 
We will elaborate.\n\nThere are previous works that achieve the first condition \\citep{cisse,hein} or bound responses to input perturbations by other means \\citep{kolter,raghu,haber}.\nFor example, Parseval networks \\citep{cisse} bound the Lipschitz constant by requiring each linear or convolution layer be composed of orthonormal filters.\nHowever, the reported robustness and guarantees are often under weak attacks or with low noise magnitude, and none of these works has demonstrated results that are comparable to adversarial training.\n\nIn contrast, we are able to build MNIST and CIFAR-10 classifiers, without needing any adversarial training, that exceed the state of the art \\citep{mit} in robustness against white-box $L_2$-bounded adversarial attacks.\nThe defense is even stronger if adversarial training is added.\nWe will refer to these networks as \\emph{$L_2$-nonexpansive neural networks} (L2NNNs).\nOur advantage comes from a set of new techniques:\nour weight regularization, which is key in enforcing the first condition, allows greater degrees of freedom in parameter training than the scheme in \\citet{cisse}; \na new loss function is specially designed for the second condition;\nwe adapt various layers in new ways for the third condition, for example norm-pooling and two-sided ReLU, which will be presented later.\n\nLet us begin with intuitions behind the second and third conditions.\nConsider a multi-class classifier.\nLet $g\\left({\\bf x}\\right)$ denote its confidence gap for an input data point ${\\bf x}$.\nIf the classifier is a single L2NNN,\\footnote{This is only an example and we recommend building a classifier as multiple L2NNNs, see Section~\\ref{sec:loss}.} we have a guarantee\\footnote{See Lemma~\\ref{th:sqrt2} in Appendix~\\ref{sec:proofs} for the proof of the guarantee.} that the classifier will not change its answer as long as the input ${\\bf x}$ is modified by no more than an $L_2$-norm of $g\\left({\\bf x}\\right)\/\\sqrt{2}$.\nTherefore maximizing the average confidence gap directly boosts robustness and this motivates the second condition.\nTo explain the third condition, let us introduce the notion of preserving distance: the distance between any pair of input vectors with two different labels ought to be preserved as much as possible at the outputs, while we do not care about the distance between a pair with the same label.\nLet $d\\left({\\bf x_1},{\\bf x_2}\\right)$ denote the $L_2$-distance between the output logit-vectors for two input points ${\\bf x_1}$ and ${\\bf x_2}$ that have different labels and that are classified correctly.\nIt is straightforward to verify the condition\\footnote{See Lemma~\\ref{th:gapsqrt2} in Appendix~\\ref{sec:proofs} for the proof of the condition.} of $g\\left({\\bf x_1}\\right)+g\\left({\\bf x_2}\\right) \\leq \\sqrt{2} \\cdot d\\left({\\bf x_1},{\\bf x_2}\\right)$.\nTherefore a network that maximizes confidence gaps well must be one that preserves distance well.\nUltimately some distances are preserved while others are lost, and ideally the decision of which distance to lose is made by parameter training rather than by artifacts of network architecture.\nHence the third condition involves distance-preserving architecture choices that leave the decision to parameter training as much as possible, and this motivates many of our design decisions such as Sections~\\ref{sec:relu} and \\ref{sec:pool}.\n\nIn practice we employ the strategy of divide and conquer and build each layer as a nonexpansive map with respect to the 
$L_2$-norm.\nIt is straightforward to see that a feedforward network composed of nonexpansive layers must implement a nonexpansive map overall.\nHow to adapt subtleties like recursion and splitting-reconvergence is included in the appendix.\n\nBesides being robust against adversarial noises, L2NNNs have other desirable properties.\nThey generalize better from noisy training labels than ordinary networks: for example, when 75\\% of MNIST training labels are randomized, an L2NNN still achieves 93.1\\% accuracy on the test set, in contrast to 75.2\\% from the best ordinary network.\nThe problem of exploding gradients, which is common in training ordinary networks, is avoided because the gradient of any output with respect to any internal signal is bounded between -1 and 1.\nUnlike ordinary networks, the confidence gap of an L2NNN classifier is a quantitatively meaningful indication of confidence on individual data points, and the average gap is an indication of generalization.\n\n\n\n\\section{Conclusions and future work}\n\nIn this work we have presented $L_2$-nonexpansive neural networks which are well-conditioned systems by construction.\nPractical techniques are developed for building these networks.\nTheir properties are studied through experiments and benefits demonstrated, including that our MNIST and CIFAR-10 classifiers exceed the state of the art in robustness against white-box adversarial attacks, that they are robust against partially random training labels, and that they output confidence gaps which are strongly correlated with robustness and generalization.\nThere are a number of future directions, for example, other applications of L2NNN, L2NNN-friendly neural network architectures, and the relation between L2NNNs and interpretability.\n\n\n\\section{Related work} \\label{sec:liter}\n\nAdversarial defense is a well-known difficult problem \\citep{szegedy,goodfellow,carlini,athalye2018obfuscated,gilmer2018adversarial}.\nThere are many avenues to defense \\citep{carlini2017adversarial,meng2017magnet}, and here we will focus on defense works that fortify a neural network itself instead of introducing additional components.\n\nThe mainstream approach has been adversarial training, where examples of successful attacks on a classifier itself are used in training \\citep{tramer2017ensemble,zantedeschi2017efficient}.\nThe work of \\citet{mit} has the best results to date and effectively flattens gradients around training data points, and, prior to our work, it is the only work that achieves sizable white-box defense.\nIt has been reported in \\citet{mindistort} that, for a small network, adversarial training indeed increases the average minimum $L_1$-norm and $L_{\\infty}$-norm of noise needed to change its classification.\nHowever, in view of results of Tables~\\ref{tbl:mnist} and \\ref{tbl:cifar}, adversarial-training results may be susceptible to strong attacks.\nThe works of \\citet{drucker1992improving,ross2017improving} are similar to adversarial training in aiming to flatten gradients around training data set but use different mechanisms.\n\nWhile the above approaches fortify a network around training data points, others aim to bound a network's responses to input perturbations over the entire input space.\nFor example, \\citet{haber} models ResNet as an ordinary differential equation and derive stability conditions.\nOther examples include \\citet{kolter,raghu,wong2018scaling} which achieved provable guarantees against $L_{\\infty}$-bounded attacks.\nHowever there exist 
scalability issues with respect to network depth, and the reported results so far are against relatively weak attacks or low noise magnitude.\nAs shown in Table~\\ref{tbl:li}, we can match their measured $L_{\\infty}$-bounded defense.\n\nControlling Lipschitz constants also regularizes a network over the entire input space.\n\\citet{szegedy} is the seminal work that brings attention to this topic.\n\\citet{bartlett} proposes the notion of spectrally-normalized margins as an indicator of generalization, which are strongly related to our confidence gap.\n\\citet{pascanu2013difficulty} studies the role of the spectral radius of weight matrices in the vanishing and the exploding gradient problems.\n\\citet{yoshida2017spectral} proposes a method to regularize the spectral radius of weight matrices and shows its effect in reducing generalization gap.\nThe work on Parseval networks \\citep{cisse} shows that it is possible to control Lipschitz constants of neural networks through regularization.\nThe core of their work is to constrain linear and convolution layer weights to be composed of Parseval tight frames, i.e., orthonormal filters, and thereby force the Lipschitz constant of these layers to be 1; they also propose to restrict aggregation operations.\nThe reported robustness results of \\citet{cisse}, however, are much weaker than those by adversarial training in \\citet{mit}.\nWe differ from Parseval networks in a number of ways.\nOur linear and convolution layers do not require filters to be orthogonal to each other and subsume Parseval layers as a special case, and therefore provide more freedom to parameter training.\nWe use non-standard techniques, e.g. two-sided ReLU, to modify various network components to maximize confidence gaps while keeping the network nonexpansive, and we propose a new loss function for the same purpose.\nWe are unable to obtain Parseval networks for a direct comparison, however it is possible to get a rough idea of what the comparison might be by looking at Table~\\ref{tbl:les} which shows the impacts of those new techniques.\nThe work of \\citet{hein} makes an important point regarding guarantees provided by local Lipschitz constants, which helps explain many observations in our results, including why adversarial training on L2NNNs leads to lasting robustness gains.\nThe regularization proposed by \\citet{hein} however is less practical and again introduces reliance on the coverage of training data points.\n\n\n\\section{$L_2$-nonexpansive neural networks} \\label{sec:tech}\n\nThis section describes how to adapt some individual operators in neural networks for L2NNNs.\nDiscussions on splitting-reconvergence, recursion and normalization are in the appendix.\n\n\\subsection{Weights} \\label{sec:w}\n\nThis section covers both the matrix-vector multiplication in a fully connected layer and the convolution calculation between input tensor and weight tensor in a convolution layer. The convolution calculation can be viewed as a set of vector-matrix multiplications: we make shifted copies of the input tensor and shuffle the copies into a set of small vectors such that each vector contains input entries in one tile; we reshape the weight tensor into a matrix by flattening all but the dimension of the output filters; then convolution is equivalent to multiplying each of the said small vectors with the flattened weight matrix. 
Therefore, in both cases, a basic operator is ${\\bf y} = W {\\bf x}$.\nTo be a nonexpansive map with respect to the $L_2$-norm, a necessary and sufficient condition is\n\\begin{equation} \\label{eq:radius}\n\\begin{aligned}\n{\\bf y}^{\\mathrm T}{\\bf y} \\leq {\\bf x}^{\\mathrm T}{\\bf x} \\quad \\Longrightarrow \\quad {\\bf x}^{\\mathrm T}W^{\\mathrm T}W{\\bf x} & \\leq {\\bf x}^{\\mathrm T}{\\bf x},\\quad \\forall {\\bf x}\\in \\mathbb{R}^N \\\\\n\\rho\\left( W^{\\mathrm T}W \\right) & \\leq 1\n\\end{aligned}\n\\end{equation}\nwhere $\\rho$ denotes the spectral radius of a matrix.\n\nThe exact condition of (\\ref{eq:radius}) is difficult to incorporate into training. Instead we use an upper bound:\\footnote{The spectral radius of a matrix is no greater than its natural $L_{\\infty}$-norm. $W^{\\mathrm T}W$ and $WW^{\\mathrm T}$ have the same non-zero eigenvalues and hence the same spectral radius.}\n\\begin{equation} \\label{eq:bound}\n\\rho\\left( W^{\\mathrm T}W \\right) \\leq b\\left(W\\right) \\triangleq \\min \\left( r(W^{\\mathrm T}W), r(WW^{\\mathrm T}) \\right)\n, \\quad\n\\textrm{where } r \\left( M \\right) = \\max_i\\sum_j \\left| M_{i,j} \\right|\n\\end{equation}\nThe above is where our linear and convolution layers differ from those in \\citet{cisse}: they require $WW^{\\mathrm T}$ to be an identity matrix, and it is straightforward to see that their scheme is only one special case that makes $b\\left(W\\right)$ equal to 1.\nInstead of forcing filters to be orthogonal to each other, our bound of $b\\left(W\\right)$ provides parameter training with greater degrees of freedom.\n\nOne simple way to use (\\ref{eq:bound}) is replacing $W$ with $W^\\prime = W \/ \\sqrt{b\\left(W\\right)}$ in weight multiplications, and this would enforce that the layer is strictly nonexpansive.\nAnother method is described in the appendix.\n\nAs mentioned, convolution can be viewed as a first layer of making copies and a second layer of vector-matrix multiplications.\nWith the above regularization, the multiplication layer is nonexpansive.\nHence we only need to ensure that the copying layer is nonexpansive.\nFor filter size of $K_1$ by $K_2$ and strides of $S_1$ by $S_2$, we simply divide the input tensor by a factor of $\\sqrt{ \\lceil K_1 \/ S_1 \\rceil \\cdot \\lceil K_2 \/ S_2 \\rceil }$.\n\n\\subsection{ReLU and others} \\label{sec:relu}\n\nLet us turn our attention to the third condition from Section~\\ref{sec:intro}.\nReLU, tanh and sigmoid are nonexpansive but do not preserve distance well.\nThis section presents a method that improves ReLU and is generalizable to other nonlinearities.\nA different approach to improve sigmoid is in the appendix.\n\nTo understand the weakness of ReLU, let us consider two input data points A and B, and suppose that a ReLU in the network receives two different negative values for A and B and outputs zero for both.\nComparing the A-B distance before and after this ReLU layer, there is a distance loss and this particular ReLU contributes to it.\nWe use \\emph{two-sided ReLU} which is a function from $\\mathbb{R}$ to $\\mathbb{R}^2$ and simply computes ReLU($x$) and ReLU($-x$).\nTwo-sided ReLU has been studied in \\citet{shang2016understanding} in convolution layers for accuracy improvement.\nIt is straightforward to verify that two-sided ReLU is nonexpansive with respect to any $L_p$-norm and that it preserves distance in the above scenario.\nWe will empirically verify its effectiveness in increasing confidence gaps in 
Section~\\ref{sec:results}.\n\nTwo-sided ReLU is a special case of the following general technique.\nLet $f(x)$ be a nonexpansive and monotonically increasing scalar function, and note that ReLU, tanh and sigmoid all fit these conditions.\nWe can define a function from $\\mathbb{R}$ to $\\mathbb{R}^2$ that computes $f(x)$ and $f(x)-x$.\nSuch a new function is nonexpansive with respect to any $L_p$-norm\\footnote{See Lemma~\\ref{th:twoside} in Appendix~\\ref{sec:proofs} for the proof of nonexpansiveness.} and preserves distance better than $f(x)$ alone.\n\n\\subsection{Pooling} \\label{sec:pool}\n\nThe popular max-pooling is nonexpansive, but does not preserve distance as much as possible.\nConsider a scenario where the inputs to pooling are activations that represent edge detection, and consider two images A and B such that A contains an edge that passes a particular pooling window while B does not.\nInside this window, A has positive values while B has all zeroes.\nFor this window, the A-B distance before pooling is the $L_2$-norm of A's values, yet if max-pooling is used, the A-B distance after pooling becomes the largest of A's values, which can be substantially smaller than the former.\nThus we suffer a loss of distance between A and B while passing this pooling layer.\n\nWe replace max-pooling with norm-pooling, which was reported in \\citet{boureau2010theoretical} to occasionally increase accuracy.\nInstead of taking the max of values inside a pooling window, we take the $L_2$-norm of them.\nIt is straightforward to verify that norm-pooling is nonexpansive\\footnote{See Lemma~\\ref{th:pool} in Appendix~\\ref{sec:proofs} for the proof of nonexpansiveness.} and would entirely preserve the $L_2$-distance between A and B in the hypothetical scenario above.\nOther $L_p$-norms can also be used.\nWe will verify its effectiveness in increasing confidence gaps in Section~\\ref{sec:results}.\n\nIf pooling windows overlap, we divide the input tensor by $\\sqrt{K}$ where $K$ is the maximum number of pooling windows in which an entry can appear, similar to convolution layers discussed earlier.\n\n\\subsection{Loss function} \\label{sec:loss}\n\nFor a classifier with $K$ labels, we recommend building it as $K$ overlapping L2NNNs, each of which outputs a single logit for one label.\nIn an architecture with no split layers, this simply implies that these $K$ L2NNNs share all but the last linear layer and that the last linear layer is decomposed into $K$ single-output linear filters, one in each L2NNN.\nFor a multi-L2NNN classifier, we have a guarantee\\footnote{See Lemma~\\ref{th:multi2} in Appendix~\\ref{sec:proofs} for the proof of the guarantee. 
The guarantee in either Lemma~\\ref{th:sqrt2} or Lemma~\\ref{th:multi2} is only a loose guarantee and it has been shown in \\citet{hein} that a larger guarantee exists by analyzing local Lipschitz constants, though it is expensive to compute.} that the classifier will not change its answer as long as the input ${\\bf x}$ is modified by no more than an $L_2$-norm of $g\\left({\\bf x}\\right)\/2$, where again $g\\left({\\bf x}\\right)$ denotes the confidence gap.\nAs mentioned in Section~\\ref{sec:intro}, a single-L2NNN classifier has a guarantee of $g\\left({\\bf x}\\right)\/\\sqrt{2}$.\nAlthough this seems better on the surface, it is more difficult to achieve large confidence gaps.\nWe will assume the multi-L2NNN approach.\n\nWe use a loss function with three terms, with trade-off hyperparameters $\\gamma$ and $\\omega$:\n\\begin{equation} \\label{eq:lossall}\n\\mathcal{L} = \\mathcal{L}_a + \\gamma\\cdot\\mathcal{L}_b + \\omega\\cdot\\mathcal{L}_c\n\\end{equation}\n\nLet $y_1,y_2,\\cdots,y_K$ be outputs from the L2NNNs. The first loss term is\n\\begin{equation} \\label{eq:loss0}\n\\mathcal{L}_a = \\textrm{softmax-cross-entropy}\\left( u_1y_1, u_2y_2, \\cdots, u_Ky_K, \\textrm{label} \\right)\n\\end{equation}\nwhere $u_1,u_2,\\cdots,u_K$ are trainable parameters. The second loss term is\n\\begin{equation} \\label{eq:loss1}\n\\mathcal{L}_b = \\textrm{softmax-cross-entropy}\\left( vy_1, vy_2, \\cdots, vy_K, \\textrm{label} \\right)\n\\end{equation}\nwhere $v$ can be either a trainable parameter or a hyperparameter.\nNote that $u_1,u_2,\\cdots,u_K$ and $v$ are not part of the classifier and are not used during inference. The third loss term is\n\\begin{equation} \\label{eq:loss2}\n\\mathcal{L}_c = \\frac{\\textrm{average}\\left( \\log\\left( 1 - \\textrm{softmax}\\left(zy_1, zy_2, \\cdots, zy_K\\right)_\\textrm{label}\\right) \\right)}{z}\n\\end{equation}\nwhere $z$ is a hyperparameter.\n\nThe rationale for the first loss term (\\ref{eq:loss0}) is that it mimics cross-entropy loss of an ordinary network.\nIf an ordinary network has been converted to L2NNNs by multiplying each layer with a small constant, its original outputs can be recovered by scaling up L2NNN outputs with certain constants, which is enabled by the formula (\\ref{eq:loss0}).\nHence this loss term is meant to guide the training process to discover any feature that an ordinary network can discover.\nThe rationale for the second loss term (\\ref{eq:loss1}) is that it is directly related to the classification accuracy.\nMultiplying L2NNN outputs uniformly with $v$ does not change the output label and only adapts to the value range of L2NNN outputs and drive towards better nominal accuracy.\nThe third loss term (\\ref{eq:loss2}) approximates average confidence gap: the log term is a soft measure of a confidence gap (for a correct prediction), and is asymptotically linear for larger gap values.\nThe hyperparameter $z$ controls the degree of softness, and has relatively low impact on the magnitude of loss due to the division by $z$; if we increase $z$ then (\\ref{eq:loss2}) asymptotically becomes the average of minus confidence gaps for correct predictions and zeroes for incorrect predictions.\nTherefore loss (\\ref{eq:loss2}) encourages large confidence gaps and yet is smooth and differentiable.\n\nA notable variation of (\\ref{eq:lossall}) is one that combines with adversarial training.\nOur implementation applies the technique of \\citet{mit} on the first loss term (\\ref{eq:loss0}): we use distorted inputs in calculating 
$\\mathcal{L}_a$.\nThe results are reported in Tables~\\ref{tbl:mnist} and \\ref{tbl:cifar} as Model 4.\nAnother possibility is to use distorted inputs in calculating $\\mathcal{L}_a$ and $\\mathcal{L}_b$, while $\\mathcal{L}_c$ should be based on original inputs.\n\n\n\\section{Experiments} \\label{sec:results}\n\nExperiments are divided into three groups to study different properties of L2NNNs.\nOur MNIST and CIFAR-10 classifiers are available at\\\\\n\\url{http:\/\/researcher.watson.ibm.com\/group\/9298}\n\n\\begin{figure}[th]\n\\centering\n\\includegraphics[width=0.1\\columnwidth]{1k}\n\\includegraphics[width=0.1\\columnwidth]{10k}\n\\caption{Attacks on Model 2 found after 1K and 10K iterations: the same 0 recognized as 5.}\n\\label{fig:zero}\n\\end{figure}\n\n\\subsection{Robustness} \\label{sec:defense}\n\nThis section evaluates robustness of L2NNN classifiers for MNIST and CIFAR-10 and compares against the state of the art \\citet{mit}.\nThe robustness metric is accuracy under white-box non-targeted $L_2$-bounded attacks. The attack code of \\citet{carlini} is used.\nWe downloaded the classifiers\\footnote{At {\\scriptsize\\rurl{github.com\/MadryLab\/mnist_challenge}} and {\\scriptsize\\rurl{github.com\/MadryLab\/cifar10_challenge}}.\nThese models (Model 2's in Tables~\\ref{tbl:mnist} and \\ref{tbl:cifar}) were built by adversarial training with $L_{\\infty}$-bounded adversaries \\citep{mit}.\nTo the best of our knowledge, \\citet{mittradeoff} from the same lab is the only paper in the literature that reports on models trained with $L_2$-bounded adversaries, and it reports that training with $L_2$-bounded adversaries resulted in weaker $L_2$ robustness than the $L_2$ robustness results from training with $L_{\\infty}$-bounded adversaries in \\citet{mit}.\nTherefore we choose to compare against the best available models, even though they were trained with $L_{\\infty}$-bounded adversaries.\nNote also that our own Model 4's in Tables~\\ref{tbl:mnist} and \\ref{tbl:cifar} are trained with the same $L_{\\infty}$-bounded adversaries.} of \\citet{mit} and report their robustness against $L_2$-bounded attacks in Tables~\\ref{tbl:mnist} and \\ref{tbl:cifar}.\\footnote{In reading Tables~\\ref{tbl:mnist} and \\ref{tbl:cifar}, it is worth remembering that the norm of after-attack accuracy is zero, and for example the 7.6\\% on MNIST is currently the state of the art.}\nNote that their defense diminishes as the attacks are allowed more iterations.\nFigure~\\ref{fig:zero} illustrates one example of this effect: the first image is an attack on MNIST Model 2 (0 recognized as 5) found after 1K iterations, with noise $L_2$-norm of 4.4, while the second picture is one found after 10K iterations, the same 0 recognized as 5, with noise $L_2$-norm of 2.1.\nWe hypothesize that adversarial training alone provides little absolute defense at the noise levels used in the two tables: adversarial examples still exist and are only more difficult to find.\nThe fact that in Table~\\ref{tbl:cifar} Model 2 accuracy is lower in the 1000x10 row than the 10K row further supports our hypothesis.\n\n\\begin{table}[th]\n\\caption{Accuracies of MNIST classifiers under white-box non-targeted attacks with noise $L_2$-norm limit of 3.\nMaxIter is the max number of iterations the attacker uses.\nModel 1 is an ordinarily trained model.\nModel 2 is the model from \\citet{mit}.\nModel 3 is L2NNN without adversarial training.\nModel 4 is L2NNN with adversarial 
training.}\n\label{tbl:mnist}\n\centering\n\begin{tabular}{lcccc}\n\toprule\nMaxIter & Model1 & Model2 & Model3 & Model4 \\\\\n\midrule\nNatural & 99.1\% & 98.5\% & 98.7\% & 98.2\% \\\\\n100 & 70.2\% & 91.7\% & 77.6\% & 75.6\% \\\\\n1000 & 0.05\% & 51.5\% & 20.3\% & 24.4\% \\\\\n10K & 0\% & 16.0\% & 20.1\% & 24.4\% \\\\\n100K & 0\% & 9.8\% & 20.1\% & 24.4\% \\\\\n1M & 0\% & 7.6\% & 20.1\% & 24.4\% \\\\\n\bottomrule\n\end{tabular}\n\end{table}\n\n\begin{table}[th]\n\caption{Accuracies of CIFAR-10 classifiers under white-box non-targeted attacks with noise $L_2$-norm limit of 1.5.\nMaxIter is the max number of iterations the attacker uses, and 1000x10 indicates 10 runs each with 1000 iterations.\nModel 1 is an ordinarily trained model.\nModel 2 is the model from \citet{mit}.\nModel 3 is L2NNN without adversarial training.\nModel 4 is L2NNN with adversarial training.}\n\label{tbl:cifar}\n\centering\n\begin{tabular}{lcccc}\n\toprule\nMaxIter & Model1 & Model2 & Model3 & Model4 \\\\\n\midrule\nNatural & 95.0\% & 87.1\% & 79.2\% & 77.2\% \\\\\n100 & 0\% & 13.9\% & 10.2\% & 20.8\% \\\\\n1000 & 0\% & 9.4\% & 10.1\% & 20.4\% \\\\\n10K & 0\% & 9.0\% & 10.1\% & 20.4\% \\\\\n1000x10 & 0\% & 8.7\% & 10.1\% & 20.4\% \\\\\n100K & 0\% & NA & 10.1\% & 20.4\% \\\\\n\bottomrule\n\end{tabular}\n\end{table}\n\nIn contrast, the defense of the L2NNN models remains constant when the attacks are allowed more iterations, specifically MNIST Models beyond 10K iterations and CIFAR-10 Models beyond 1000 iterations.\nThe reason is that L2NNN classifiers achieve their defense by creating a confidence gap between the largest logit and the rest, and that half of this gap is a lower bound on the $L_2$-norm of any distortion to the input data that changes the classification.\nHence L2NNN's defense comes from a minimum-distortion guarantee.\nAlthough adversarial training alone may also increase the minimum distortion limit for misclassification, as suggested in \citet{mindistort} for a small network, that limit likely does not reach the levels used in Tables~\ref{tbl:mnist} and \ref{tbl:cifar} and hence the defense depends on how likely the attacker is to find a lower-distortion misclassification.\nConsequently, when the attacks are allowed to make more attempts, the defense with a guarantee stands while the other diminishes.\n\nFor both MNIST and CIFAR-10, adding adversarial training boosts the robustness of Model 4.\nWe hypothesize that adversarial training lowers local Lipschitz constants in certain parts of the input space, specifically around the training images, and therefore makes local robustness guarantees larger \citep{hein}.\nTo test this hypothesis on MNIST Models 3 and 4, we measure the average $L_2$-norm of their Jacobian matrices, averaged over the first 1000 images in the test set, and the results are 1.05 for Model 3 and 0.83 for Model 4.\nNote that the $L_2$-norm of the Jacobian can be greater than 1 for multi-L2NNN classifiers.\nThese measurements are consistent with, albeit do not prove, the hypothesis.\n\n\begin{table}[th]\n\caption{Ablation studies: MNIST model without weight regularization;\none without $\mathcal{L}_c$ loss;\none with max-pooling instead of norm-pooling;\none without two-sided ReLU.\nGap is the average confidence gap.\nR-Accu is the accuracy under attacks with 1000 iterations and with noise $L_2$-norm limit of 3.}\n\label{tbl:les}\n\centering\n\begin{tabular}{lccc}\n\toprule\n & Accu. & Gap & R-Accu. \\\\\n\midrule\nno weight reg.
& 99.4\\% & 68.3 & 0\\% \\\\\nno $\\mathcal{L}_c$ loss & 99.2\\% & 2.2 & 8.9\\% \\\\\nno norm-pooling & 98.8\\% & 1.3 & 9.9\\% \\\\\nno two-sided ReLU & 98.0\\% & 2.5 & 15.1\\% \\\\\n\\bottomrule\n\\end{tabular}\n\\end{table}\n\nTo test the effects of various components of our method, we build models for each of which we disable a different technique during training.\nThe results are reported in Table~\\ref{tbl:les}.\nTo put the confidence gap values in context, our MNIST Model 3 has an average gap of 2.8.\nThe first one is without weight regularization of Section~\\ref{sec:w} and it becomes an ordinary network which has little defense against adversarial attacks; its large average confidence gap is meaningless.\nFor the second one we remove the third loss term (\\ref{eq:loss2}) and for the third one we replace norm-pooling with regular max-pooling, both resulting in smaller average confidence gap and less defense against attacks.\nFor the fourth one, we replace two-sided ReLU with regular ReLU, and this leads to degradation in nominal accuracy, average confidence gap and robustness.\nParseval networks \\citep{cisse} can be viewed as models without $\\mathcal{L}_c$ term, norm-pooling or two-sided ReLU, and with a more restrictive scheme for weight matrix regularization.\n\nModel 3 in Table~\\ref{tbl:mnist} and the second row of Table~\\ref{tbl:les} are two points along a trade-off curve that are controllable by varying hyperparameter $\\omega$ in loss function (\\ref{eq:lossall}).\nOther trade-off points have nominal accuracy and under-attack accuracy of (98.8\\%,19.1\\%), (98.4\\%,22.6\\%) and (97.9\\%,24.7\\%) respectively.\nSimilar trade-offs have been reported by other robustness works including adversarial training \\citep{mittradeoff} and adversarial polytope \\citep{wong2018scaling}.\nIt remains an open question whether such trade-off is a necessary part of life, and please see Section~\\ref{sec:random} for further discussion on the L2NNN trade-off.\n\n\\begin{table}[th]\n\\caption{Accuracy of L2NNN classifiers under white-box non-targeted attacks with 1000 iterations and with noise $L_{\\infty}$-norm limit of $\\epsilon$.}\n\\label{tbl:li}\n\\centering\n\\begin{tabular}{lccc}\n\\toprule\n & $\\epsilon$ & Model3 & Model4 \\\\\n\\midrule\nMNIST & 0.1 & 90.9\\% & 92.4\\% \\\\\nMNIST & 0.3 & 7.0\\% & 44.0\\% \\\\\nCIFAR-10 & $\\nicefrac{8}{256}$ & 32.3\\% & 42.5\\% \\\\\n\\bottomrule\n\\end{tabular}\n\\end{table}\n\nAlthough we primarily focus on defending against $L_2$-bounded adversarial attacks in this work,\nwe achieve some level of robustness against $L_{\\infty}$-bounded attacks as a by-product.\nTable~\\ref{tbl:li} shows our results, again measured with the attack code of \\citet{carlini}.\nThe $\\epsilon$ values match those used in \\citet{raghu,kolter,mit}.\nOur MNIST $L_{\\infty}$ results are on par with \\citet{raghu,kolter} but not as good as \\citet{mit}.\nOur CIFAR-10 Model 4 is on par with \\citet{mit} for $L_{\\infty}$ defense.\n\n\\subsection{Meaningful outputs} \\label{sec:interp}\n\nThis section discusses how to understand and utilize L2NNNs' output values. 
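To make these outputs concrete, the following minimal Python sketch (illustrative only; the helper names and the example logits are hypothetical and not part of our released code) shows how the confidence gap of a multi-L2NNN classifier and the certified $L_2$ radius of Lemma~\ref{th:multi2} can be computed from the logits of a single input.\n\begin{verbatim}\n# Minimal sketch (not from the released code): confidence gap and\n# certified L2 radius g(x)\/2 of a multi-L2NNN classifier.\nimport numpy as np\n\ndef confidence_gap(logits):\n    # gap = largest logit minus second-largest logit\n    top2 = np.sort(np.asarray(logits, dtype=float))[-2:]\n    return float(top2[1] - top2[0])\n\ndef certified_l2_radius(logits):\n    # the prediction cannot change under any input perturbation\n    # whose L2 norm is below half of the confidence gap\n    return confidence_gap(logits) \/ 2.0\n\nlogits = [3.1, 0.4, -1.2]            # hypothetical L2NNN outputs\nprint(confidence_gap(logits))        # 2.7\nprint(certified_l2_radius(logits))   # 1.35\n\end{verbatim}\nThe per-example gap computed this way is the quantity that the following paragraphs correlate with empirical robustness.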
We observe strong correlation between the confidence gap of L2NNN and the magnitude of distortion needed to force it to misclassify, and images are included in appendix.\n\nIn the next experiment, we sort test data by the confidence gap of a classifier on each image.\nThen we divide the sorted data into 10 bins and report accuracy separately on each bin in Figure~\\ref{fig:bars}.\nWe repeat this experiment for Model 2 \\citep{mit} and our Model 3 of Tables~\\ref{tbl:mnist} and \\ref{tbl:cifar}.\nNote that the L2NNN model shows better correlation between confidence and robustness:\nfor MNIST our first bin is 95\\% robust and second bin is 67\\% robust.\nThis indicates that the L2NNN outputs are much more quantitatively meaningful than those of ordinary neural networks.\n\n\\begin{figure}[th]\n\\centering\n\\includegraphics[width=\\columnwidth]{bars}\n\\caption{Accuracy percentages of classifiers on test data bin-sorted by the confidence gap.}\n\\label{fig:bars}\n\\end{figure}\n\nIt is an important property that an L2NNN has an easily accessible measurement on how robust its decisions are.\nSince robustness is easily measurable, it can be optimized directly, and we believe that this is the primary reason that we can demonstrate the robustness results of Tables~\\ref{tbl:mnist} and \\ref{tbl:cifar}.\nThis can also be valuable in real-life applications where we need to quantify how reliable a decision is.\n\nOne of the other practical implications of this property is that we can form hybrid models which use L2NNN outputs when the confidence is high and a different model when the confidence of the L2NNN is low.\nThis creates another dimension of trade-off between nominal accuracy and robustness that one can take advantage of in an application.\nWe built such a hybrid model for MNIST with the switch threshold of 1.0 and achieved nominal accuracy of 99.3\\%, where only 6.9\\% of images were delegated to the alternative classifier.\nWe built such a hybrid model for CIFAR-10 with the switch threshold of 0.1 and achieved nominal accuracy of 89.4\\%, where 25\\% of images were delegated.\nTo put these threshold values in context, MNIST Model 3 has an average gap of 2.8 and CIFAR-10 Model 3 has an average gap of 0.34.\nIn other words, if for a data point the L2NNN confidence gap is substantially below average, the classification is delegated to the alternative classifier, and this way we can recover nominal accuracy at a moderate cost of robustness.\n\n\\subsection{Generalization versus memorization} \\label{sec:random}\n\nThis section studies L2NNN's generalization through a noisy-data experiment where we randomize some or all MNIST training labels.\nThe setup is similar to \\citet{zhang_iclr17}, except that we added three scenarios where 25\\%, 50\\% and 75\\% of training labels are scrambled.\n\n\\begin{table}[th]\n\\caption{Accuracy comparison of MNIST classifiers that are trained on noisy data.\nRand is the percentage of training labels that are randomized.\nWD is weight decay.\nDR is dropout.\nES is early stopping.\nGap1 is L2NNN's average confidence gap on training set and Gap2 is that on test set.}\n\\label{tbl:rand}\n\\centering\n\\begin{tabular}{lcccccccc}\n\\toprule\nRand & \\multicolumn{5}{c}{Ordinary network} & \\multicolumn{3}{c}{L2NNN} \\\\\n & Vanilla & WD & DR & ES & WD+DR+ES & & Gap1 & Gap2 \\\\\n\\midrule\n 0 & 99.4\\% & 99.0\\% & 99.2\\% & 99.0\\% & 99.3\\% & 98.7\\% & 2.84 & 2.82 \\\\\n 25\\% & 90.4\\% & 91.1\\% & 91.8\\% & 96.2\\% & 98.0\\% & 98.5\\% & 0.64 & 0.63 \\\\\n 50\\% & 
65.5\\% & 67.7\\% & 72.6\\% & 81.0\\% & 88.3\\% & 96.0\\% & 0.58 & 0.60 \\\\\n 75\\% & 41.5\\% & 44.9\\% & 41.8\\% & 75.2\\% & 66.4\\% & 93.1\\% & 0.86 & 0.89 \\\\\n 100\\%& 9.7\\% & 9.1\\% & 9.4\\% & NA & NA & 11.9\\% & 0.09 & 0.01 \\\\\n\\bottomrule\n\\end{tabular}\n\\end{table}\n\n\\begin{table}[th]\n\\caption{Training-accuracy-versus-confidence-gap trade-off points of L2NNNs on 50\\%-scrambled MNIST training labels.}\n\\label{tbl:fifty}\n\\centering\n\\begin{tabular}{cccc}\n\\toprule\n\\multicolumn{2}{c}{on training set} & \\multicolumn{2}{c}{on test set} \\\\\n Accu. & Gap & Accu. & Gap \\\\\n\\midrule\n 98.7\\% & 0.17 & 79.0\\% & 0.12 \\\\\n 96.5\\% & 0.21 & 79.3\\% & 0.18 \\\\\n 89.4\\% & 0.22 & 86.3\\% & 0.20 \\\\\n 70.1\\% & 0.36 & 93.4\\% & 0.37 \\\\\n 66.1\\% & 0.45 & 93.7\\% & 0.47 \\\\\n 59.8\\% & 0.58 & 96.0\\% & 0.60 \\\\\n\\bottomrule\n\\end{tabular}\n\\end{table}\n\nTable~\\ref{tbl:rand} shows the comparison between L2NNNs and ordinary networks.\nDropout rate and weight-decay weight are tuned for each WD\/DR run, and each WD+DR+ES run uses the combined hyperparameters from its row.\nIn early-stopping runs, 5000 training images are withheld as validation set and training stops when loss on validation set stops decreasing.\nThe L2NNNs do not use weight decay, dropout or early stopping.\nL2NNNs achieve the best accuracy in all three partially-scrambled scenarios, and it is remarkable that an L2NNN can deliver 93.1\\% accuracy on test set when three quarters of training labels are random.\nMore detailed data and discussions are in the appendix.\n\nTo illustrate why L2NNNs generalize better than ordinary networks from noisy data, we show in Table~\\ref{tbl:fifty} trade-off points between accuracy and confidence gap on the 50\\%-scrambled training set.\nThese trade-off points are achieved by changing hyperparameters $\\omega$ in (\\ref{eq:lossall}) and $v$ in (\\ref{eq:loss1}).\nIn a noisy training set, there exist data points that are close to each other yet have different labels.\nFor a pair of such points, if an L2NNN is to classify both points correctly, the two confidence gaps must be small.\nTherefore, in order to achieve large average confidence gap, an L2NNN must misclassify some of the training data.\nIn Table~\\ref{tbl:fifty}, as we adjust the loss function to favor larger average gap, the L2NNNs are forced to make more and more mistakes on the training set.\nThe results suggest that loss is minimized when an L2NNN misclassifies some of the scrambled labels while fitting the 50\\% original labels with large gaps, and parameter training discovers this trade-off automatically.\nHence we see in Table~\\ref{tbl:fifty} increasing accuracies and gaps on the test set.\nThe above is a trade-off between memorization (training-set accuracy) and generalization (training-set average gap), and we hypothesize that L2NNN's trade-off between nominal accuracy and robustness, reported in Section~\\ref{sec:defense}, is due to the same mechanism.\nTo be fair, dropout and early stopping are also able to sacrifice accuracy on a noisy training set, however they do so through different mechanisms that tend to be brittle, and Table~\\ref{tbl:rand} suggests that L2NNN's mechanism is superior.\nMore discussions and the trade-off tables for 25\\% and 75\\% scenarios are in the appendix.\n\nAnother interesting observation is that the average confidence gap dramatically shrinks in the last row of Table~\\ref{tbl:rand} where the training is pure memorization.\nThis is not surprising again due to 
training data points that are close to each other yet have different labels.\nThe practical implication is that after an L2NNN model is trained, one can simply measure its average confidence gap to know whether and how much it has learned to generalize rather than to memorize the training data.\n\n\n\n\\section{$L_2$-nonexpansive network components} \\label{sec:parts}\n\n\\subsection{Additional methods for weight regularization}\n\nThere are numerous ways to utilize the bound of (\\ref{eq:bound}).\nThe main text describes a simple method of using $W^\\prime = W \/ \\sqrt{b\\left(W\\right)}$ to enforce strict nonexpansiveness.\nThe following is an alternative.\n\nApproximate nonexpansiveness can be achieved by adding a penalty to the loss function whenever $b\\left(W\\right)$ exceeds 1, for example:\n\\begin{equation} \\label{eq:loss}\n\\mathcal{L}_W = \\min \\left( l(W^{\\mathrm T}W), l(WW^{\\mathrm T}) \\right)\n, \\textrm{ where } l \\left( M \\right) = \\sum_i \\max\\left(\\sum_j \\left| M_{i,j} \\right|-1,0\\right)\n\\end{equation}\nThe sum of (\\ref{eq:loss}) losses over all layers becomes a fourth term in the loss function (\\ref{eq:lossall}), multiplied with one additional hyperparameter. This would lead to an approximate L2NNN with trade-offs between how much its layers violate (\\ref{eq:radius}) with surrogate (\\ref{eq:bound}) versus other objectives in the loss function.\n\nIn practice, we have found that it is beneficial to begin neural network training with the regularization scheme of (\\ref{eq:loss}), which allows larger learning rates, and switch to the first scheme of using $W^\\prime$, which avoids artifacts of an extra hyperparameter, when close to convergence.\nOf course if the goal is building approximate L2NNNs one can use (\\ref{eq:loss}) all the way.\n\n\\subsection{Sigmoid and others}\n\nSigmoid is nonexpansive as is, but does not preserve distance as much as possible. A better way is to replace sigmoid with the following operator\n\\begin{equation} \\label{eq:sigmoid}\ns\\left(x\\right) = t \\cdot \\mathrm{sigmoid} \\left( \\frac {4x}{t} \\right)\n\\end{equation}\nwhere $t>0$ is a trainable parameter and each neuron has its own $t$.\nIn general, the requirement for any scalar nonlinearity is that its derivative is bounded between -1 and 1. If a nonlinearity violates this condition, a shrinking multiplier can be applied. If the actual range of derivative is narrower, as in the case of sigmoid, an enlarging multiplier can be applied to preserve distance.\n\nFor further improvement, (\\ref{eq:sigmoid}) can be combined with the general form of the two-sided ReLU of Section~2.2.\nThen the new nonlinearity is a function from $\\mathbb{R}$ to $\\mathbb{R}^2$ that computes $s(x)$ and $s(x)-x$.\n\n\\subsection{Splitting and reconvergence}\n\nThere are different kinds of splitting in neural networks. Some splitting is not followed by reconvergence. For example, a classifier may have common layers followed by split layers for each label, and such an architecture can be viewed as multiple L2NNNs that overlap at the common layers and each contain one stack of split layers. In such cases, no modification is needed because there is no splitting within each individual L2NNN.\n\nSome splitting, however, is followed by reconvergence. In fact, convolution and pooling layers discussed earlier can be viewed as splitting, and reconvergence happens at the next layer. Another common example is skip-level connections such as in ResNet. 
Such splitting should be viewed as making two copies of a certain vector. Let the before-split vector be ${\\bf x}_0$, and we make two copies as\n\\begin{equation} \\label{eq:split}\n\\begin{aligned}\n{\\bf x}_1 & = t \\cdot {\\bf x}_0 \\\\\n{\\bf x}_2 & = \\sqrt{1-t^2} \\cdot {\\bf x}_0\n\\end{aligned}\n\\end{equation}\nwhere $t \\in \\left[0,1\\right]$ is a trainable parameter.\n\nIn the case of ResNet, the reconvergence is an add operator, which should be treated as vector-matrix multiplication as in Section~2.1, but with much simplified forms. Let ${\\bf x}_1$ be the skip-level connections and $f\\left({\\bf x}_2\\right)$ be the channels of convolution outputs to be added with ${\\bf x}_1$, we perform the addition as\n\\begin{equation} \\label{eq:reconv}\n{\\bf y} = t \\cdot {\\bf x}_1 + \\sqrt{1-t^2} \\cdot f\\left({\\bf x}_2\\right)\n\\end{equation}\nwhere $t \\in \\left[0,1\\right]$ is a trainable parameter and could be a common parameter with (\\ref{eq:split}).\n\nResNet-like reconvergence is referred to as aggregation layers in \\citet{cisse} and a different formula was used:\n\\begin{equation}\n{\\bf y} = \\alpha \\cdot {\\bf x}_1 + \\left(1-\\alpha\\right) \\cdot f\\left({\\bf x}_2\\right)\n\\end{equation}\nwhere $\\alpha \\in \\left[0,1\\right]$ is a trainable parameter.\nBecause splitting is not modified in \\citet{cisse}, their scheme may seem approximately equivalent to ours if a common $t$ parameter is used for (\\ref{eq:split}) and (\\ref{eq:reconv}).\nHowever, there is a substantial difference: in many ResNet blocks, $f\\left({\\bf x}_2\\right)$ is a subset of rather than all of the output channels of convolution layers, and our scheme does not apply the shrinking factor of $\\sqrt{1-t^2}$ on channels that are not part of $f\\left({\\bf x}_2\\right)$ and therefore better preserve distances.\nIn contrast, because splitting is not modified, at reconvergence the scheme of \\citet{cisse} must apply the shrinking factor of $1-\\alpha$ on all outputs of convolution layers, regardless of whether a channel is part of the aggregation or not.\nTo state the difference in more general terms, our scheme enables splitting and reconvergence at arbitrary levels of granularity and multiplies shrinking factors to only the necessary components.\nWe can also have a different $t$ per channel or even per entry.\n\nTo be fair, the scheme of \\citet{cisse} has an advantage of being nonexpansive with respect to any $L_p$-norm.\nHowever, for $L_2$-norm, it is inferior to ours in preserving distances and maximizing confidence gaps.\n\n\\subsection{Recursion} \\label{sec:rnn}\n\nThere are multiple ways to interpret recurrent neural networks (RNN) as L2NNNs. One way is to view an unrolled RNN as multiple overlapping L2NNNs where each L2NNN generates the output at one time step. Under this interpretation, nothing special is needed and recurrent inputs to a neuron are simply treated as ordinary inputs.\n\nAnother way to interpret an RNN is to view unrolled RNN as a single L2NNN that generates outputs at all time steps. 
Under this interpretation, recurrent connections are treated as splitting at their sources and should be handled as in (\\ref{eq:split}).\n\n\\subsection{Normalization}\n\nNormalization operations are limited in an L2NNN.\nSubtracting mean is nonexpansive and allowed, and subtract-mean operation can be performed on arbitrary subsets of any layer.\nSubtracting batch mean is also allowed because it can be viewed as subtracting a bias parameter.\nHowever, scaling, e.g., division by standard deviation or batch standard deviation is only allowed if the multiplying factors are between -1 and 1.\nTo satisfy this in practice, one simple method is to divide all multiplying factors in a normalization layer by the largest of their absolute values.\n\n\\section{MNIST images}\n\n\\begin{figure}[th]\n\\centering\n\\includegraphics[width=0.87\\columnwidth]{biggap}\\\\\nGap $\\quad$\\,\\, 5.1 $\\quad$\\,\\, 4.4 $\\quad$\\,\\, 5.1 $\\quad$\\,\\, 4.6 $\\quad$\\,\\, 5.0 $\\quad$\\,\\, 4.4 $\\quad$\\,\\, 4.6 $\\quad$\\,\\, 4.5 $\\quad$\\,\\, 4.0 $\\quad$\\,\\, 3.1 \\\\\n\\includegraphics[width=0.87\\columnwidth]{biggap2}\\\\\nMstk $\\quad$\\, 5 $\\quad\\quad$\\, 8 $\\quad\\quad$\\, 3 $\\quad\\quad$\\, 5 $\\quad\\quad$\\, 9 $\\quad\\quad$\\, 8 $\\quad\\quad$ 2 $\\quad\\quad$\\, 5 $\\quad\\quad$ 0 $\\quad\\quad$ 7 \\, \\\\\nDist $\\quad$\\, 4.8 $\\quad$\\,\\, 3.6 $\\quad$\\,\\, 4.8 $\\quad$\\,\\,\\, 3.4 $\\quad$\\,\\, 4.1 $\\quad$\\,\\, 3.7 $\\quad$\\,\\, 4.0 $\\quad$\\,\\, 4.5$\\quad$\\,\\,\\, 3.8 $\\quad$\\,\\, 2.3\n\\caption{Original and distorted images of MNIST digits in test set with the largest confidence gaps.\nMstk denotes the misclassified labels. Dist denotes the $L_2$-norm of the distortion noise.}\n\\label{fig:digits1}\n\\end{figure}\n\n\\begin{figure}[th]\n\\centering\n\\includegraphics[width=0.9\\columnwidth]{smallgap}\\\\\nGap $\\quad$\\,\\,\\,\\, 0.03 $\\quad$\\,\\, 0.2 $\\quad$\\,\\, 0.1 $\\quad$\\,\\, 0.03 $\\quad$\\, 0.03 $\\quad$ 0.001 $\\quad$\\, 0.3 $\\quad$\\, 0.06 \\,\\,\\,\\,\\, 0.01 \\,\\,\\,\\, 0.005\\\\\n\\includegraphics[width=0.9\\columnwidth]{smallgap2}\\\\\nMstk $\\quad$\\,\\,\\,\\, 8 $\\quad\\quad$\\,\\, 6 $\\quad\\quad$\\, 8 $\\quad\\quad$\\, 5 $\\quad\\quad$\\,\\, 9 $\\quad\\quad$\\,\\, 3 $\\quad\\quad$\\,\\,\\, 5 $\\quad\\quad$\\, 2 $\\quad\\quad$\\, 3 $\\quad\\quad$ 7 \\,\\,\\, \\\\\nDist $\\quad$\\,\\,\\, 0.04 $\\quad$\\,\\, 0.3 $\\quad$\\,\\, 0.1 $\\quad$\\,\\, 0.02 $\\quad$\\, 0.03 $\\quad$ 0.001 $\\quad$\\, 0.3 $\\quad$\\, 0.05 \\,\\,\\,\\,\\, 0.02 \\,\\,\\,\\, 0.01\n\\caption{Original and distorted images of MNIST digits in test set with the smallest confidence gaps.\nMstk denotes the misclassified output label. 
Dist denotes the $L_2$-norm of the distortion noise.}\n\\label{fig:digits2}\n\\end{figure}\n\nLet us begin by showing MNIST images with the largest confidence gaps in Figure~\\ref{fig:digits1} and those with the smallest confidence gaps in Figure~\\ref{fig:digits2}.\nThey include images before and after attacks as well as Model 3's confidence gap, the misclassified label and $L_2$-norm of the added noise.\nThe images with large confidence gaps seem to be ones that are most different from other digits, while some of the images with small confidence gaps are genuinely ambiguous.\nIt's worth noting the strong correlation between the confidence gap of L2NNN and the magnitude of distortion needed to force it to misclassify.\nAlso note that our guarantee states that the minimum $L_2$-norm of noise is half of the confidence gap, but in reality the needed noise is much stronger than the guarantee.\nThe reason is that the true local guarantee is in fact larger due to local Lipschitz constants, as pointed out by \\citet{hein}.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.1\\columnwidth]{orig}\n\\includegraphics[width=0.1\\columnwidth]{1k}\n\\includegraphics[width=0.1\\columnwidth]{10k}\n\\includegraphics[width=0.1\\columnwidth]{mine_1m}\n\\caption{Original image of 0; attack on Model 2 \\citep{mit} found after 1K iterations; attack on Model 2 found after 10K iterations; attack on Model 3 (L2NNN) found after 1M iterations. The latter three all lead to misclassification as 5.}\n\\label{fig:zeroes}\n\\end{figure}\n\nFigure~\\ref{fig:zeroes} shows additional details regarding the example in Figure~\\ref{fig:zero}.\nThe first image is the original image of a zero.\nThe second image is an attack on Model 2 \\citep{mit} found after 1K iterations, with noise $L_2$-norm of 4.4.\nThe third is one found after 10K iterations for Model 2, with noise $L_2$-norm of 2.1.\nThe last image is the best attack on our Model 3 found after one million iterations, with noise $L_2$-norm of 3.5.\nThese illustrates the trend shown in Table~\\ref{tbl:mnist} that the defense by adversarial training diminishes as the attacks are allowed more iterations, while L2NNNs withstand strong attacks and it requires more noise to fool an L2NNN.\nIt's worth noting that the slow degradation of Model 2's accuracy is an artifact of the attacker \\citep{carlini}: when gradients are near zero in some parts of the input space, which is true for MNIST Model 2 due to adversarial training, it takes more iterations to make progress.\nIt is conceivable that, with a more advanced attacker, Model 2 could drop quickly to 7.6\\%.\nWhat truly matter are the robust accuracies where we advance the state of the art from 7.6\\% to 24.4\\%.\n\n\\section{Details of scrambled-label experiments}\n\nFor ordinary networks in Table~\\ref{tbl:rand}, we use two network architectures.\nThe first has 4 layers and is the architecture used in \\citet{mit}.\nThe second has 22 layers and is the architecture of Models 3 and 4 in Table~\\ref{tbl:mnist}, which includes norm-pooling and two-sided ReLU.\nResults of ordinary networks using these two architectures are in Tables~\\ref{tbl:shallow} and \\ref{tbl:deep} respectively.\nThe ordinary-network section of Table~\\ref{tbl:rand} is entry-wise max of Tables~\\ref{tbl:shallow} and \\ref{tbl:deep}.\n\n\\begin{table}[th]\n\\caption{Accuracies of non-L2NNN MNIST classifiers that use a 4-layer architecture and that are trained on training data with various amounts of scrambled labels.\nRand is the percentage of 
training labels that are randomized.\nWD is weight decay.\nDR is dropout.\nES is early stopping.}\n\\label{tbl:shallow}\n\\centering\n\\begin{tabular}{lccccc}\n\\toprule\nRand & \\multicolumn{5}{c}{Ordinary network} \\\\\n & Vanilla & WD & DR & ES & WD+DR+ES \\\\\n\\midrule\n 0 & 98.9\\% & 99.0\\% & 99.2\\% & 99.0\\% & 99.3\\% \\\\\n 25\\% & 82.5\\% & 91.1\\% & 91.8\\% & 79.1\\% & 98.0\\% \\\\\n 50\\% & 57.7\\% & 67.7\\% & 72.6\\% & 66.4\\% & 88.3\\% \\\\\n 75\\% & 32.1\\% & 44.9\\% & 41.8\\% & 52.7\\% & 66.4\\% \\\\\n 100\\%& 9.5\\% & 8.9\\% & 9.4\\% & NA & NA \\\\\n\\bottomrule\n\\end{tabular}\n\\end{table}\n\n\\begin{table}[th]\n\\caption{Accuracies of non-L2NNN MNIST classifiers that use a 22-layer architecture and that are trained on training data with various amounts of scrambled labels.\nRand is the percentage of training labels that are randomized.\nWD is weight decay.\nDR is dropout.\nES is early stopping.}\n\\label{tbl:deep}\n\\centering\n\\begin{tabular}{lccccc}\n\\toprule\nRand & \\multicolumn{5}{c}{Ordinary network} \\\\\n & Vanilla & WD & DR & ES & WD+DR+ES \\\\\n\\midrule\n 0 & 99.4\\% & 99.0\\% & 99.0\\% & 99.0\\% & 99.0\\% \\\\\n 25\\% & 90.4\\% & 86.5\\% & 89.8\\% & 96.2\\% & 90.3\\% \\\\\n 50\\% & 65.5\\% & 62.5\\% & 63.7\\% & 81.0\\% & 83.1\\% \\\\\n 75\\% & 41.5\\% & 38.2\\% & 40.2\\% & 75.2\\% & 61.9\\% \\\\\n 100\\%& 9.7\\% & 9.1\\% & 8.8\\% & NA & NA \\\\\n\\bottomrule\n\\end{tabular}\n\\end{table}\n\nIn Tables~\\ref{tbl:shallow} and \\ref{tbl:deep}, dropout rate and weight-decay weight are tuned for each WD\/DR run, and each WD+DR+ES run uses the combined hyperparameters from its row.\nIn early-stopping runs, 5000 training images are withheld as validation set and training stops when loss on validation set stops decreasing.\nEach ES or WD+DR+ES entry is an average over ten runs to account for randomness of the validation set.\nThe L2NNNs do not use weight decay, dropout or early stopping.\n\n\\begin{table}[th]\n\\caption{Training-accuracy-versus-confidence-gap trade-off points of L2NNNs on 25\\%-scrambled MNIST training labels.}\n\\label{tbl:twentyfive}\n\\centering\n\\begin{tabular}{cccc}\n\\toprule\n\\multicolumn{2}{c}{on training set} & \\multicolumn{2}{c}{on test set} \\\\\n Accu. & Gap & Accu. & Gap \\\\\n\\midrule\n 99.6\\% & 0.12 & 92.6\\% & 0.10 \\\\\n 97.6\\% & 0.20 & 95.7\\% & 0.17 \\\\\n 78.6\\% & 0.31 & 98.2\\% & 0.30 \\\\\n 77.2\\% & 0.64 & 98.5\\% & 0.63 \\\\\n\\bottomrule\n\\end{tabular}\n\\end{table}\n\n\\begin{table}[th]\n\\caption{Training-accuracy-versus-confidence-gap trade-off points of L2NNNs on 75\\%-scrambled MNIST training labels.}\n\\label{tbl:seventyfive}\n\\centering\n\\begin{tabular}{cccc}\n\\toprule\n\\multicolumn{2}{c}{on training set} & \\multicolumn{2}{c}{on test set} \\\\\n Accu. & Gap & Accu. 
& Gap \\\\\n\\midrule\n 97.9\\% & 0.07 & 49.8\\% & 0.03 \\\\\n 93.0\\% & 0.09 & 59.2\\% & 0.05 \\\\\n 75.9\\% & 0.10 & 70.0\\% & 0.08 \\\\\n 58.0\\% & 0.18 & 80.4\\% & 0.17 \\\\\n 46.2\\% & 0.29 & 86.8\\% & 0.30 \\\\\n 40.1\\% & 0.44 & 89.8\\% & 0.46 \\\\\n 34.7\\% & 0.86 & 93.1\\% & 0.89 \\\\\n\\bottomrule\n\\end{tabular}\n\\end{table}\n\nTable~\\ref{tbl:twentyfive} shows L2NNN trade-off points between accuracy and confidence gap on the 25\\%-scrambled training set.\nTable~\\ref{tbl:seventyfive} shows L2NNN trade-off points between accuracy and confidence gap on the 75\\%-scrambled training set.\nLike Table~\\ref{tbl:fifty}, they demonstrate the trade-off mechanism between memorization (training-set accuracy) and generalization (training-set average gap).\n\nTo be fair, dropout and early stopping are also able to sacrifice accuracy on a noisy training set.\nFor example, the DR run in the 50\\%-scrambled row in Table~\\ref{tbl:shallow} has 67.5\\% accuracy on the training set and 72.6\\% on the test set.\nHowever, the underlying mechanisms are very different from that of L2NNN.\nDropout \\citep{dropout} has an effect of data augmentation, and, with a noisy training set, dropout can create a situation where the effective data complexity exceeds the network capacity.\nTherefore, the parameter training is stalled at a lowered accuracy on the training set, and we get better performance if the model tends to fit more of original labels and less of the scrambled labels.\nThe mechanism of early stopping is straightforward and simply stops the training when it is mostly memorizing scrambled labels.\nWe get better performance from early stopping if the parameter training tends to fit the original labels early.\nThese mechanisms from dropout and early stopping are both brittle and may not allow parameter training enough opportunity to learn from the useful data points with original labels.\nThe comparison in Table~\\ref{tbl:rand} suggests that they are inferior to L2NNN's trade-off mechanism as discussed in Section~\\ref{sec:random} and illustrated in Tables~\\ref{tbl:fifty}, \\ref{tbl:twentyfive} and \\ref{tbl:seventyfive}.\nThe L2NNNs in this paper do not use weight decay, dropout or early stopping, however it is conceivable that dropout may be complementary to L2NNNs.\n\n\\section{Proofs}\\label{sec:proofs}\n\n\\newtheorem{lemma}{Lemma}\n\\begin{lemma}\\label{th:sqrt2}\nLet $g\\left({\\bf x}\\right)$ denote a single-L2NNN classifier's confidence gap for an input data point ${\\bf x}$.\nThe classifier will not change its answer as long as the input ${\\bf x}$ is modified by no more than an $L_2$-norm of $g\\left({\\bf x}\\right)\/\\sqrt{2}$.\n\\end{lemma}\n\n\\begin{proof}\nLet ${\\bf y}\\left({\\bf x}\\right) = \\left[ y_1\\left({\\bf x}\\right),y_2\\left({\\bf x}\\right),\\cdots,y_K\\left({\\bf x}\\right) \\right]$ denote logit vector of a single-L2NNN classifier for an input data point ${\\bf x}$.\nLet ${\\bf x_1}$ and ${\\bf x_2}$ be two input vectors such that the classifier outputs different labels $i$ and $j$.\nBy definitions, we have the following inequalities:\n\\begin{equation}\n\\begin{aligned}\ny_i\\left({\\bf x_1}\\right) - y_j\\left({\\bf x_1}\\right) & \\geq g\\left({\\bf x_1}\\right) \\\\\ny_i\\left({\\bf x_2}\\right) - y_j\\left({\\bf x_2}\\right) & \\leq 0\n\\end{aligned}\n\\end{equation}\nBecause the classifier is a single L2NNN, it must be true that:\n\\begin{equation}\n\\begin{aligned}\n\\| {\\bf x_2} - {\\bf x_1} \\|_2 & \\geq \\| {\\bf y}\\left({\\bf x_2}\\right) - {\\bf 
y}\\left({\\bf x_1}\\right) \\|_2 \\\\\n & \\geq \\sqrt{ \\left( y_i\\left({\\bf x_2}\\right) - y_i\\left({\\bf x_1}\\right) \\right)^2 + \\left( y_j\\left({\\bf x_2}\\right) - y_j\\left({\\bf x_1}\\right) \\right)^2 } \\\\\n & = \\sqrt{ \\left( y_i\\left({\\bf x_1}\\right) - y_i\\left({\\bf x_2}\\right) \\right)^2 + \\left( y_j\\left({\\bf x_2}\\right) - y_j\\left({\\bf x_1}\\right) \\right)^2 } \\\\\n & \\geq \\sqrt{\\frac{ \\left( y_i\\left({\\bf x_1}\\right) - y_i\\left({\\bf x_2}\\right) + y_j\\left({\\bf x_2}\\right) - y_j\\left({\\bf x_1}\\right) \\right)^2 }{2}} \\\\\n & = \\sqrt{\\frac{ \\left( \\left( y_i\\left({\\bf x_1}\\right) - y_j\\left({\\bf x_1}\\right) \\right) + \\left( y_j\\left({\\bf x_2}\\right) - y_i\\left({\\bf x_2}\\right) \\right) \\right)^2 }{2}} \\\\\n & \\geq \\sqrt{\\frac{ \\left( g\\left({\\bf x_1}\\right) + 0 \\right)^2 }{2}} \\\\\n & = g\\left({\\bf x_1}\\right)\/\\sqrt{2}\n\\end{aligned}\n\\end{equation}\n\\end{proof}\n\n\n\\begin{lemma} \\label{th:gapsqrt2}\nLet $g\\left({\\bf x}\\right)$ denote a classifier's confidence gap for an input data point ${\\bf x}$.\nLet $d\\left({\\bf x_1},{\\bf x_2}\\right)$ denote the $L_2$-distance between the output logit-vectors for two input points ${\\bf x_1}$ and ${\\bf x_2}$ that have different labels and that are classified correctly.\nThen this condition holds: $g\\left({\\bf x_1}\\right)+g\\left({\\bf x_2}\\right) \\leq \\sqrt{2} \\cdot d\\left({\\bf x_1},{\\bf x_2}\\right)$.\n\\end{lemma}\n\n\\begin{proof}\nLet ${\\bf y}\\left({\\bf x}\\right) = \\left[ y_1\\left({\\bf x}\\right),y_2\\left({\\bf x}\\right),\\cdots,y_K\\left({\\bf x}\\right) \\right]$ denote logit vector of a classifier for an input data point ${\\bf x}$.\nLet $i$ and $j$ be the labels for ${\\bf x_1}$ and ${\\bf x_2}$.\nBy definitions, we have the following inequalities:\n\\begin{equation}\n\\begin{aligned}\ny_i\\left({\\bf x_1}\\right) - y_j\\left({\\bf x_1}\\right) & \\geq g\\left({\\bf x_1}\\right) \\\\\ny_j\\left({\\bf x_2}\\right) - y_i\\left({\\bf x_2}\\right) & \\geq g\\left({\\bf x_2}\\right)\n\\end{aligned}\n\\end{equation}\nTherefore,\n\\begin{equation}\n\\begin{aligned}\nd\\left({\\bf x_1},{\\bf x_2}\\right) & \\triangleq \\| {\\bf y}\\left({\\bf x_2}\\right) - {\\bf y}\\left({\\bf x_1}\\right) \\|_2 \\\\\n & \\geq \\sqrt{ \\left( y_i\\left({\\bf x_2}\\right) - y_i\\left({\\bf x_1}\\right) \\right)^2 + \\left( y_j\\left({\\bf x_2}\\right) - y_j\\left({\\bf x_1}\\right) \\right)^2 } \\\\\n & = \\sqrt{ \\left( y_i\\left({\\bf x_1}\\right) - y_i\\left({\\bf x_2}\\right) \\right)^2 + \\left( y_j\\left({\\bf x_2}\\right) - y_j\\left({\\bf x_1}\\right) \\right)^2 } \\\\\n & \\geq \\sqrt{\\frac{ \\left( y_i\\left({\\bf x_1}\\right) - y_i\\left({\\bf x_2}\\right) + y_j\\left({\\bf x_2}\\right) - y_j\\left({\\bf x_1}\\right) \\right)^2 }2} \\\\\n & = \\sqrt{\\frac{ \\left( \\left( y_i\\left({\\bf x_1}\\right) - y_j\\left({\\bf x_1}\\right) \\right) + \\left( y_j\\left({\\bf x_2}\\right) - y_i\\left({\\bf x_2}\\right) \\right) \\right)^2 }2} \\\\\n & \\geq \\sqrt{\\frac{ \\left( g\\left({\\bf x_1}\\right) + g\\left({\\bf x_2}\\right) \\right)^2 }2} \\\\\n & = \\frac{ g\\left({\\bf x_1}\\right) + g\\left({\\bf x_2}\\right) }{\\sqrt{2}}\n\\end{aligned}\n\\end{equation}\n\\end{proof}\n\n\\begin{lemma} \\label{th:ineq}\nFor any $a \\geq 0$, $b \\geq 0$, $p \\geq 1$, the following inequality holds: $a^p + b^p \\leq \\left( a+b \\right)^p$.\n\\end{lemma}\n\n\\begin{proof}\nIf $a$ and $b$ are both zero, the inequality holds.\nIf at least one of $a$ and 
$b$ is nonzero:\n\\begin{equation}\n\\begin{aligned}\na^p + b^p & = \\left( a+b \\right)^p\\cdot\\left( \\frac{a}{a+b} \\right)^p + \\left( a+b \\right)^p\\cdot\\left( \\frac{b}{a+b} \\right)^p \\\\\n & \\leq \\left( a+b \\right)^p\\cdot\\frac{a}{a+b} + \\left( a+b \\right)^p\\cdot\\frac{b}{a+b} \\\\\n & = \\left( a+b \\right)^p\n\\end{aligned}\n\\end{equation}\n\\end{proof}\n\n\n\\begin{lemma} \\label{th:twoside}\nLet $f(x)$ be a nonexpansive and monotonically increasing scalar function.\nDefine a function from $\\mathbb{R}$ to $\\mathbb{R}^2$: ${\\bf h}(x)=[f(x),f(x)-x]$.\nThen ${\\bf h}(x)$ is nonexpansive with respect to any $L_p$-norm.\n\\end{lemma}\n\n\\begin{proof}\nFor any $x_1>x_2$, by definition we have the following inequalities:\n\\begin{equation}\n\\begin{aligned}\nf(x_1)-f(x_2) & \\geq 0 \\\\\nf(x_1)-f(x_2) & \\leq x_1-x_2\n\\end{aligned}\n\\end{equation}\nFor any $p \\geq 1$, invoking Lemma~\\ref{th:ineq} with $a=f(x_1)-f(x_2)$ and $b=x_1-x_2-f(x_1)+f(x_2)$, we have:\n\\begin{equation}\n\\begin{aligned}\n\\left((f(x_1)-f(x_2)\\right)^p + \\left(x_1-x_2-f(x_1)+f(x_2)\\right)^p & \\leq \\left( x_1-x_2 \\right)^p \\\\\n\\left(\\left((f(x_1)-f(x_2)\\right)^p + \\left(x_1-x_2-f(x_1)+f(x_2)\\right)^p\\right)^{1\/p} & \\leq x_1-x_2 \\\\\n\\left(\\lvert f(x_1)-f(x_2) \\rvert^p + \\lvert (f(x_1)-x_1)-(f(x_2)-x_2) \\rvert^p\\right)^{1\/p} & \\leq x_1-x_2 \\\\\n\\| {\\bf h}(x_1) - {\\bf h}(x_2) \\|_p & \\leq x_1-x_2\n\\end{aligned}\n\\end{equation}\n\\end{proof}\n\n\\begin{lemma} \\label{th:pool}\nNorm-pooling within each pooling window is a nonexpansive map with respect to $L_2$-norm.\n\\end{lemma}\n\n\\begin{proof}\nLet ${\\bf x_1}$ and ${\\bf x_2}$ be two vectors with the size of a pooling window.\nBy triangle inequality, we have\n\\begin{equation}\n\\begin{aligned}\n\\| {\\bf x_1} - {\\bf x_2} \\|_2 + \\| {\\bf x_1} \\|_2 & \\geq \\| {\\bf x_2} \\|_2 \\\\\n\\| {\\bf x_1} - {\\bf x_2} \\|_2 + \\| {\\bf x_2} \\|_2 & \\geq \\| {\\bf x_1} \\|_2\n\\end{aligned}\n\\end{equation}\nTherefore,\n\\begin{equation}\n\\begin{aligned}\n\\| {\\bf x_1} - {\\bf x_2} \\|_2 & \\geq \\| {\\bf x_2} \\|_2 - \\| {\\bf x_1} \\|_2 \\\\\n\\| {\\bf x_1} - {\\bf x_2} \\|_2 & \\geq \\| {\\bf x_1} \\|_2 - \\| {\\bf x_2} \\|_2 \n\\end{aligned}\n\\end{equation}\nTherefore,\n\\begin{equation}\n\\| {\\bf x_1} - {\\bf x_2} \\|_2 \\geq \\lvert \\| {\\bf x_1} \\|_2 - \\| {\\bf x_2} \\|_2 \\rvert\n\\end{equation}\n\\end{proof}\n\n\n\\begin{lemma} \\label{th:multi2}\nLet $g\\left({\\bf x}\\right)$ denote a multi-L2NNN classifier's confidence gap for an input data point ${\\bf x}$.\nThe classifier will not change its answer as long as the input ${\\bf x}$ is modified by no more than an $L_2$-norm of $g\\left({\\bf x}\\right)\/2$.\n\\end{lemma}\n\n\\begin{proof}\nLet ${\\bf y}\\left({\\bf x}\\right) = \\left[ y_1\\left({\\bf x}\\right),y_2\\left({\\bf x}\\right),\\cdots,y_K\\left({\\bf x}\\right) \\right]$ denote logit vector of a multi-L2NNN classifier for an input data point ${\\bf x}$.\nLet ${\\bf x_1}$ and ${\\bf x_2}$ be two input vectors such that the classifier outputs different labels $i$ and $j$.\nBy definitions, we have the following inequalities:\n\\begin{equation}\n\\begin{aligned}\ny_i\\left({\\bf x_1}\\right) - y_j\\left({\\bf x_1}\\right) & \\geq g\\left({\\bf x_1}\\right) \\\\\ny_i\\left({\\bf x_2}\\right) - y_j\\left({\\bf x_2}\\right) & \\leq 0\n\\end{aligned}\n\\end{equation}\nFor a multi-L2NNN classifier, each logit is a nonexpansive function of the input, and it must be true 
that:\n\\begin{equation}\n\\begin{aligned}\n\\| {\\bf x_2} - {\\bf x_1} \\|_2 & \\geq \\lvert y_i\\left({\\bf x_1}\\right) - y_i\\left({\\bf x_2}\\right) \\rvert \\\\\n\\| {\\bf x_2} - {\\bf x_1} \\|_2 & \\geq \\lvert y_j\\left({\\bf x_2}\\right) - y_j\\left({\\bf x_1}\\right) \\rvert\n\\end{aligned}\n\\end{equation}\nTherefore,\n\\begin{equation}\n\\begin{aligned}\n\\| {\\bf x_2} - {\\bf x_1} \\|_2 & \\geq \\frac{ \\lvert y_i\\left({\\bf x_1}\\right) - y_i\\left({\\bf x_2}\\right) \\rvert + \\lvert y_j\\left({\\bf x_2}\\right) - y_j\\left({\\bf x_1}\\right) \\rvert }{2} \\\\\n & \\geq \\frac{ \\lvert y_i\\left({\\bf x_1}\\right) - y_i\\left({\\bf x_2}\\right) + y_j\\left({\\bf x_2}\\right) - y_j\\left({\\bf x_1}\\right) \\rvert }{2} \\\\\n & = \\frac{ \\lvert \\left( y_i\\left({\\bf x_1}\\right) - y_j\\left({\\bf x_1}\\right) \\right) + \\left( y_j\\left({\\bf x_2}\\right) - y_i\\left({\\bf x_2}\\right) \\right) \\rvert }{2} \\\\\n & \\geq \\frac{ \\lvert g\\left({\\bf x_1}\\right) + 0 \\rvert }{2} \\\\\n & = g\\left({\\bf x_1}\\right)\/2\n\\end{aligned}\n\\end{equation}\n\\end{proof}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{\\label{sec:level1}Introduction}\n$Introduction-$ In recent years, several acheivements have been reported~\\cite{Displacement, Goetz1, Goetz2, Pogorzalek, SanzIEEE, Federov}, which are now the potential building blocks of microwave quantum communication protocols. This novel research area is not only useful for free space communications, but also a candidate for chip-to-chip communication, required by distributed quantum computing. The latter represents an alternative paradigm to increasing the number of qubits in a single quantum processor, and aims at solving larger quantum algorithms in a distributed form between different processors with lower number of qubits. Nowadays, one of the best quantum platforms suited for quantum computing is superconducting circuits, which interact via microwave photons. The main challenge concerns the efficient distribution of such microwave states between circuits, the two main options being direct state transfer and teleportation. Concerning the former, several experiments have been done in a single cryogenic environment~\\cite{Axline, Campagne, Kurpiers, Leung, Roch, Narla, Dickel}, as well as with microwave to optical conversion~\\cite{Forsch, Rueda}. Recently, a successful transfer of transmon qubits has been reported, via a cryogenic waveguide coherently linking two dilution refrigerators separated by five meters, with average transfer and target state fidelities of 85.8 \\% and 79.5 \\%, respectively, in terms of discrete variables~\\cite{SuperQLAN}. \n\nSo far, microwave teleportation between two fridges has not been investigated experimentally, neither in terms of discrete variables nor continuous variables (CVs). However, a teleportation scheme in the microwave regime in terms of CVs has been proposed before~\\cite{Roberto}. Taking into account the limitations of the previous protocol, we investigate the feasibility of an experimental implementation of microwave quantum teleportation of Gaussian states in real conditions, e.g. fridge and free space, in terms of CVs based on a clear physical formalism. Indeed, in quantum information processing with CVs, Gaussian states play important roles~\\cite{Adesso1, Ferraro}, which are well-known and commonly-used experimentally. 
Here, we focus on teleportation of single-mode Gaussian states via entangled two-mode Gaussian states.\n\n$Quantum\\;\\;Teleportation\\;\\;with\\;\\;CVs-$A preliminary CV model for teleportation was proposed by Vaidman~\\cite{Vaidman}, and then developed by Braunstein and Kimble~\\cite{Braunstein}. The latter represented a conditional teleportation protocol, whereas an unconditional one was proposed by Furusawa et al.~\\cite{Furusawa1}. \n\nQuantum teleportation uses quantum entanglement and classical communication to transfer quantum information between two distant parties, Alice and Bob. The ideal scenario involves Alice and Bob sharing a maximally-entangled state, which in CVs translates to a two-mode squeezed vacuum (TMSV) state with infinite squeezing $r$. In a realistic scenario, the squeezing $r$ has technological limitations. In such protocol, Alice attempts to send a Gaussian state, with covariance matrix $ V'_{\\text{in}}$ and first moments $ \\bar{x}^{\\prime}_{\\text{in}}=( x_{\\text{in}}, p_{\\text{in}})$ (see Appendix) to Bob. The input state can also be described by a harmonic oscillator mode $\\alpha_{\\text{in}}=\\frac{x_{\\text{in}}+ip_{\\text{in}}}{\\sqrt{2}}$. If this process is successful, then Alice and Bob share a Gaussian entangled state with covariance matrix ${ V_{\\text{TMSS}}}$ and null first moments~\\cite{Serafini}. Then, after entangling the input state with TMSS, Alice makes a double homodyne measurement of both quadratures (i.e. a heterodyne detection) and modulates the classical results, $X_{\\text{u}}$ and $P_{\\text{v}}$, in the form of a single mode $\\delta=\\frac{X_{\\text{u}}+iP_{\\text{v}}}{\\sqrt{2}}$ and then sends it to Bob. Finally, Bob applies a unitary displacement with a function of $\\delta$ to his share of the original entangled state to reconstruct the input state (see Fig.~\\ref{fig1}). Different teleportation protocols in terms of CVs are discussed in Ref.~\\cite{Mancini}. In the following, we focus basically on the Braunstein-Kimble protocol~\\cite{Braunstein} for teleportation of Gaussian states.\n\n\\begin{figure}\n\\includegraphics[scale=0.3]{Teleportation2.pdf}\n\\caption{The Braunstein-Kimble Protocol for the Teleportation of Gaussian States. In this protocol, two parties, Alice and Bob, share an entangled quantum state with two modes $\\alpha_1$ and $\\alpha_2$. In fact, Alice wants to send a single-mode Gaussian state (i.e. mode $\\alpha_{\\text{in}}$) to Bob via the teleportation mechanism where she cannot send the original mode $\\alpha_{\\text{in}}$ to Bob directly but a classical mode $\\delta$ is sent instead. Finally, Bob can reconstruct the input state by applying the displacement $D(\\delta)$ operator on his mode $\\alpha_2$ to construct the input state as $\\alpha_{\\text{out}}\\approx \\alpha_{\\text{in}}$.}\n\\label{fig1}\n\\end{figure}\n\n\n\n$Measurement\\;\\;by\\;\\;Alice-$When Alice receives the mixture of input mode and one mode of the entangled state, she performs a double homodyne detection (see Fig.~\\ref{fig2}), which is the optimal measurement for the teleportation protocol. In this case, the results of the measurement (which are two classical values, $X_u$ and $P_v$) will be modulated as a single mode $\\delta$ to be sent to Bob. 
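Before describing the measurement circuit in detail, a minimal numerical sketch may help fix ideas. The following Python fragment is purely illustrative: the numerical values are placeholders, and it simply combines the idealized correlations $x_2=-x_1$, $p_2=p_1$ with the expressions for $X_{\text{u}}$ and $P_{\text{v}}$ derived below to check that Bob's displaced mode reproduces Alice's input mode.\n\begin{verbatim}\n# Toy check (illustrative only) of the reconstruction step of the\n# Braunstein-Kimble protocol in the idealized limit x2 = -x1, p2 = p1.\nimport numpy as np\n\nr = 1.5                                 # resource squeezing (arbitrary)\nx_in, p_in = 0.7, -0.3                  # input-mode quadratures\nx1, p1 = 0.4, 1.1                       # Alice's share of the resource\nx2, p2 = -x1, p1                        # idealized EPR-type correlations\n\nalpha_in = (x_in + 1j * p_in) \/ np.sqrt(2)\nalpha_2 = (np.exp(r) * x2 + 1j * np.exp(-r) * p2) \/ np.sqrt(2)\n\nX_u = x_in + np.exp(r) * x1             # Alice's measurement outcomes\nP_v = p_in - np.exp(-r) * p1\ndelta = (X_u + 1j * P_v) \/ np.sqrt(2)   # classical mode sent to Bob\n\nalpha_out = alpha_2 + delta             # Bob's displacement D(delta)\nprint(np.isclose(alpha_out, alpha_in))  # True\n\end{verbatim}\nThe remainder of this section derives $X_{\text{u}}$ and $P_{\text{v}}$ explicitly from the double homodyne circuit.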
As seen in Fig.~\\ref{fig2}, two modes $\\alpha_{\\text{in}}$ and $\\alpha_1$ are passing through a 50:50 beamsplitter, and then again each output beam again passes through two other 50:50 beamsplitters while a classical local oscillator mode $\\alpha_{\\text{LO}_x}$ enters in the top beamsplitter and another classical local oscillator mode $\\alpha_{\\text{LO}_p}$ passes through the other beamsplitter. According to Fig.~\\ref{fig2} in each line for quantum modes we use the annihiliation operators, and $\\alpha$ as classical coherent state for the local oscillator. Finally, there would be four outputs at the end, where there is a detector at each output which can measure the current produced due to the collision of photons to the detector.\n\\begin{figure}\n\\includegraphics[scale=0.35]{Heterodyne.pdf}\n\\caption{Alice's measurement is performed via double homodyne detection. First, two modes $\\alpha_{\\text{in}}$ and $\\alpha_1$ passing through a 50:50 beamsplitter, and then again each output beam again passes through two other 50:50 beamsplitters while a classical local oscillator mode $\\alpha_{\\text{LO}_x}$ enters in the top beamsplitter and another classical local oscillator mode $\\alpha_{\\text{LO}_p}$ passes through the other beamsplitter. Then, Alice performs a double homodyne detection where the classical result of the measurement will be modulated as a single mode $\\delta$ to be sent to Bob.}\n\\label{fig2}\n\\end{figure}\n\n\nIn the symplectic representation (see Appendix), we consider two symplectic operators in the double-homodyne detection circuit depicted in Fig.~\\ref{fig2}, $ S_{\\text{h1}}= \\mathbb{I}_2 \\oplus B_{\\text{S}}(1\/2)_4 \\oplus \\mathbb{I}_2$ and $ S_{\\text{h2}}= B_{\\text{S}}(1\/2)_4 \\oplus B_{\\text{S}}(1\/2)_4$ where $\\mathbb{I}_2$ is the identity 2$\\times$2matrix, and $B_{\\text{S}}(1\/2)_4$ is the 50:50 beamsplitter operator, i.e. a 4$\\times$4 matrix. The input first moment in the double-homodyne detection is ${ {\\bf \\bar{x'}}_{\\text{in}}}=( x_{\\text{LO}_x}, p_{\\text{LO}_x}, x_{\\text{in}}, p_{\\text{in}}, e^{r}x_1, e^{-r}p_1, x_{\\text{LO}_p}, p_{\\text{LO}_p})^{ T}$, and therefore the output's first moments at the photodetectors right before the measurement are $ { \\bar{x'}}_{\\text{out}}= S_{\\text{h2}}S_{\\text{h1}}{ \\bar{x'}}_{\\text{in}}$, which gives ${ { \\bar{x'}}_{\\text{out}}}=\\sqrt{2}( {(x_{u'}), (p_{u'}), (x_{u''}), (p_{u''}), (x_{v'}), (p_{v'}), (x_{v''}), (p_{v''}))^T}$.\n\nHere, we assume that the detectors are ideal, therefore we define the produced current as $i= \\langle \\hat{n}\\rangle=\\langle \\hat{a}^\\dagger\\hat{a}\\rangle=\\langle 1\/2(\\hat{x}^2+\\hat{p}^2)-1\/2\\rangle=1\/2(x^2+p^2)-1\/2$. Then, the difference between the currents from each detector after the beam splitters is measured~\\cite{Braunstein2}. The current from the top beamsplitter is the difference between the currents in $u'$ and $u''$, i.e. \n$i_1=\\langle\\hat{a}^{\\dagger}_{u'}\\hat{a}_u'\\rangle-\\langle\\hat{a}^{\\dagger}_{u''}\\hat{a}_{u''}\\rangle$,\nand in the other two arms $i_2=\\langle\\hat{a}^{\\dagger}_{v'}\\hat{a}_{v'}\\rangle-\\langle\\hat{a}^{\\dagger}_{v''}\\hat{a}_{v''}\\rangle$\nAssuming $\\alpha_{\\text{LO}_x}=|\\alpha_{\\text{LO}_x}|e^{i\\theta_x}$, one obtains $i_1=|\\alpha_{\\text LO_x}|(x_{\\text{in}}+e^rx_1)$ by letting $\\theta_x=0$ that consequently can obtain $X_u=\\frac{i_1}{|\\alpha_{\\text LO_x}|}=x_{\\text{in}}+e^rx_1=\\text{Re}(\\delta)$. 
Similarly $\\alpha_{\\text{LO}_p}=|\\alpha_{\\text{LO}_p}|e^{i\\theta_p}$, if we let $\\theta_p=\\pi\/2$ then it turns to $i_2=|\\alpha_{\\text LO_p}|(p_{\\text{in}}-e^{-r}p_1)$, and therefore $P_v=\\frac{i_2}{|\\alpha_{\\text LO_p}|}=p_{\\text{in}}-e^{-r}p_1=\\text{Im}(\\delta)$. By using classical coherent light with similar amplitudes for both local oscillators, one can write $|\\alpha_{\\text LO_x}|=|\\alpha_{\\text LO_p}|=|\\alpha_{\\text LO}|$, so the currents $i_1$ and $i_2$ can be modulated into a single mode as $\\delta=X_u+iP_v$, to be sent and received by Bob. On the other side, at the same time that Alice measures her state, the state at Bob collapses to $\\alpha_2=\\frac{e^rx_2+ie^{-r}p_2}{\\sqrt{2}}$ with $x_2=-x_1$ and $p_2=p_1$. Comparing $\\delta$ and $\\alpha_2$ we realize that $\\delta+\\alpha_2=\\alpha_{\\text{in}}$. Once Bob receives this information, he can reconstruct Alice's input state by applying the displacement $D(\\delta)$ on his mode, i.e. $D(\\delta) |\\alpha_2\\rangle=|\\delta+\\alpha_2\\rangle=|\\alpha_{\\text{out}}\\rangle\\approx |\\alpha_{\\text{in}}\\rangle$. In the symplectic representation, this means that the first moments of Bob, ${\\bar{x}}_2=(x_2, p_2)^T$ should be displaced with ${ \\Delta}=(X_u, P_v)^T$, which means\n\\begin{equation}\n{ \\bar{x}}_2 \\rightarrow { \\bar{x}}_2+{ \\Delta}.\n\\end{equation}\nThe performance of the teleportation protocol can be measured by the teleportation fidelity $F$. It can be computed via a symplectic approach~\\cite{Weedbrook, Mancini}, i.e. $F=2\/\\sqrt{\\text{det}({ \\Gamma})}$, in which $ \\Gamma=2{ V'_{\\text{in}}+\\mathbb{Z}A\\mathbb{Z}+B-C\\mathbb{Z}-\\mathbb{Z}^TC^T}$ where $A$, $B$, and $C$ are the block matrices of two-mode squeezed states (TMSS) in symplectic representation, i.e. shown by a block matrix as\n$ V_{\\text{TMSS}}=\\left[A, C; C^T, B\\right]$, where $ A=A^T$, $ B=B^T$ and $ C$ is a $2\\times 2$ real matrix, and $T$ denotes transpose~\\cite{Serafini}. If the input is a squeezed coherent state, i.e. ${V'_{\\text{in}}}=\\big(\\begin{smallmatrix} e^{2y} & 0\\\\ 0 & e^{-2y} \\end{smallmatrix}\\big)$ where $y$ is the squeezing level of the input, the fidelity in general form is obtained as \n\\begin{equation}\t\nF=\\frac{1}{\\sqrt{(e^{-2y}+(2n+1)\\sigma)(e^{+2y}+(2n+1)\\sigma)}}.\n\\end{equation}\nwhere $\\sigma=\\exp{(-2r)}$ is the variance of the resource, and $n$ is the number of thermal photons in the two-mode squeezed thermal states (TMSTS) resource as a general form for TMSS. The particular case $n=0$ represents the fidelity for the general case when the resource is TMSV (see Appendix). One can compute the average fidelity for the teleportation of an arbitrary squeezed displaced vacuum state by integration over fidelity in the range 0 and 1.\n\n\n$\\;Microwave\\;\\;Quantum\\;\\;Teleportation-$ Microwave quantum communication, as an exciting line of research with potentially broad applications in science and industry, is an accessible technology due to the recent achievements of circuit quantum electrodynamics (cQED). Since thermal noise in the microwave domain are much larger than in the optical one, losses have to be taken into account, which can significantly affect the quality of the teleportation protocol. In order to suppress thermal fluctuations, macroscopic superconducting circuit operate at low temperatures, i.e. $T<10-100$mK~\\cite{Roberto}. 
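As a quick numerical check of the fidelity expression given earlier in this section, the following sketch evaluates $F$ for a coherent-state input ($y=0$, $n=0$) at several values of the resource squeezing $r$; only the closed-form formula is used, so the numbers are direct consequences of it.
\\begin{verbatim}
import numpy as np

def fidelity(r, y=0.0, n=0):
    # Teleportation fidelity for a squeezed input (squeezing y) with a
    # TMSTS resource of squeezing r and n thermal photons.
    sigma = np.exp(-2.0 * r)
    return 1.0 / np.sqrt((np.exp(-2 * y) + (2 * n + 1) * sigma) *
                         (np.exp(+2 * y) + (2 * n + 1) * sigma))

for r in [0.0, 0.5, 1.32, 3.0]:
    print(f"r = {r:4.2f}: F = {fidelity(r):.3f}")
# r = 0 (no entanglement) gives F = 1/2 for a coherent input, the
# classical benchmark; F -> 1 as the resource squeezing grows.
\\end{verbatim}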
In cQED, superconducting Josephson junctions are nonlinear elements which have essential applications in quantum computation and quantum information. Recently, path-entanglement between propagating quantum microwaves in the form of a TMSS was generated via Josephson parametric amplifiers (JPAs) and a hybrid ring \\cite{Pogorzalek}, to be used to perform a microwave protocol equivalent to the traditional ones in optical quantum teleportation. In this protocol, a TMSS is generated from two single-mode squeezed states with squeezing in orthogonal quadratures, produced by two JPAs, J$_1$ and J$_2$, at the same squeezing level $r$ (with possible endogenous thermal photons), which are then sent through a hybrid ring, a microwave beam splitter~\\cite{Roberto, Pogorzalek}. In general, the quality of the entanglement between the two modes is affected by thermal fluctuations on the JPAs during the generation of the single-mode squeezed states. Then, after the interaction of one mode with the input state via a 50:50 beamsplitter, the outputs are connected to two other JPAs, J$_3$ and J$_4$, which operate as amplifiers with equal gain $g_{\\text J}=\\exp(2r_{\\text J})$, where $r_{\\text J}$ is the corresponding squeezing parameter (see Fig.~\\ref{fig3}). The final step is the measurement by Alice via heterodyne detection. The method is the same as explained above (and in the Appendix), but taking amplification and losses into account. Transfer efficiencies are modelled by beamsplitters with reflectivities $\\epsilon$, $\\eta$, $\\kappa$, and $\\nu$, so that the losses are $1- \\epsilon$, $1-\\eta$, $1-\\kappa$, and $1-\\nu$, respectively. The reflectivity $\\eta$ represents the propagation of the TMSS in free space. The other losses are related to the JPAs and amplifiers. For simplicity, we choose $\\epsilon_1=\\epsilon$, $\\epsilon_2=1$, $\\eta_1=\\eta$, and $\\eta_2=1$. If the single-mode operator J$_{\\text {in}}$ squeezes the input coherent state with squeezing parameter $y$, i.e. ${ J}_{\\text {in}}=[e^{-y}, 0; 0, e^{y}]$, it produces a squeezed coherent state $\\alpha_{\\text{in}}=(e^{-y}x_{\\text{in}}+ie^{y}p_{\\text{in}})\/\\sqrt{2}$, which is the state to be teleported. Following the heterodyne detection procedure, the output components can be obtained as $ X_u = |\\alpha_{\\text{LO}}| \\sqrt{\\nu\\kappa g_{\\text {J}}} [ e^{-y} x_{\\text{in}}+(e^rx_1 \\sqrt{\\eta\\epsilon}+\\zeta_{x_1})]\\cos (\\theta_x)$ and $P_u = |\\alpha_{\\text{LO}}| \\sqrt{\\frac{\\nu\\kappa}{g_{\\text {J}}}} [e^{ y} p_{\\text{in}}+(e^{-r}p_1 \\sqrt{\\eta\\epsilon}+\\zeta_{p_1})] \\sin (\\theta _x)$\nas the real and imaginary parts, respectively. By letting $\\theta_x=0$, the current $i_1$ gives \n\n\\begin{equation}\nX_u= [|\\alpha_{\\text{LO}}|(\\sqrt{\\nu\\kappa g_{\\text {J}}} (e^rx_1\\sqrt{\\eta\\epsilon}+e^{- y} x_{\\text{in}})+\\zeta_{x})] \n\\end{equation}\n\nand $P_u=0$, where $\\zeta_{x}$ is the noise term, i.e. $\\zeta_{x}=x_{\\text{th-1}}\\sqrt{\\nu\\kappa\\eta(1-\\epsilon)}+x_{\\text{th-2}}\\sqrt{\\nu\\kappa(1-\\eta)}+x_{\\text{th-3}}\\sqrt{\\nu(1-\\kappa)}+x_{\\text{th-4}}\\sqrt{1-\\nu}$, where $x_{\\text{th-i}}$ ($i$=1, 2, 3, 4) are thermal quadratures at temperatures $T_1$, $T_2$, $T_3$, and $T_4$, respectively. If there are no losses ($\\epsilon=\\eta=\\kappa=\\nu=1$), then the current is $i_1= |\\alpha_{\\text{LO}}|\\sqrt{g_{\\text {J}}} (e^rx_1+e^{- y} x_{\\text{in}})$.
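To make the loss bookkeeping concrete, the sketch below evaluates the four weights entering the noise term $\\zeta_{x}$ and the measured quadrature $X_u$ for toy first moments; the transfer efficiencies are those of the parameter table further below (fridge scenario), the thermal quadratures are placeholder standard-normal samples (their true variances depend on $T_1,\\dots,T_4$), and the local-oscillator amplitude is normalised to one.
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
eps, eta, kappa, nu = 0.95, 0.90, 0.65, 0.75   # transfer efficiencies
g_J = 1e2                                      # JPA gain
r, y = 1.32, 0.0                               # resource / input squeezing
aLO = 1.0                                      # LO amplitude (normalised)

# Weights of the four thermal quadratures in the noise term zeta_x
w = np.array([np.sqrt(nu * kappa * eta * (1 - eps)),
              np.sqrt(nu * kappa * (1 - eta)),
              np.sqrt(nu * (1 - kappa)),
              np.sqrt(1 - nu)])
x_th = rng.normal(size=4)        # placeholder thermal quadrature samples
zeta_x = w @ x_th

x_in, x1 = 0.3, 0.5              # toy first moments
X_u = aLO * (np.sqrt(nu * kappa * g_J)
             * (np.exp(r) * x1 * np.sqrt(eta * eps) + np.exp(-y) * x_in)
             + zeta_x)
print(w.round(3), round(X_u, 3))
\\end{verbatim}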
Similarly for current $i_2$, one can obtain $X_v= |\\alpha_{\\text{LO}}|\\sqrt{\\frac{\\nu\\kappa}{g_{\\text {J}}}} [e^{- y} x_{\\text{in}}-(e^{r}x_1 \\sqrt{\\eta\\epsilon}-\\zeta_{x_2}) ] \\cos (\\theta_p)$ and $P_v= |\\alpha_{\\text{LO}}| \\sqrt{\\nu\\kappa g_{\\text {J}}} [e^{ y} p_{\\text{in}}-(e^{-r}p_1 \\sqrt{\\eta\\epsilon}-\\zeta_{p_2})] \\sin (\\theta _p)$. Again, under similar conditions, but letting $\\theta_p=\\pi\/2$ the current $i_2$ turns to \n\\begin{equation}\nP_v= |\\alpha_{\\text{LO}}|[\\sqrt{\\nu\\kappa g_{\\text {J}}} ( e^{ y} p_{\\text{in}}-e^{-r}p_1\\sqrt{\\eta\\epsilon})+\\zeta_{p}]\n\\end{equation}\nand $X_v=0$, where the noise term $\\zeta_{p}$ is $\\zeta_{p}=p_{\\text{th-1}}\\sqrt{\\nu\\kappa\\eta(1-\\epsilon)}+p_{\\text{th-2}}\\sqrt{\\nu\\kappa(1-\\eta)}+p_{\\text{th-3}}\\sqrt{\\nu(1-\\kappa)}+p_{\\text{th-4}}\\sqrt{1-\\nu}$. If there is no any loss, then the noise term turns to zero, and $i_2=|\\alpha_{\\text{LO}}|\\sqrt{g_{\\text {J}}}( e^{ y} p_{\\text{in}}-e^{-r}p_1)$. \n \n The above results are in agreement with the heterodyne outputs (see the Appendix) by letting $g_{\\text {J}}=1$ and $y=0$ for coherent states in the BK protocol. In reality, there are always losses for microwave signals. So, it is desirable to reduce the losses as far as possible to have a successful quantum protocol, otherwise the noisy signal received by Bob makes it harder to reconstruct the input state. In fact, the realization of a microwave single-photon detector is a difficult task due to the low energy of microwave photons, therefore measuring a quadrature of a weak microwave signal is hard. Thus, amplification of the signal is required. Cryogenic high electronic mobility transistor (HEMT) amplifiers are routinely used in quantum microwave experiments because of their large gains in a relatively broad frequency band. Basically, HEMT amplifiers are phase insensitive and add a significant amount of noise photons that may disrupt the protocol (see section E. on Amplification). These signals are digitized with analog-to-digital (ADC) converters and sent to a computer for digital data processing~\\cite{Pogorzalek}. In terms of ADC in heterodyne detection, quadrature moments are $I_1$, $ I_2$, $Q_1$, and $Q_2$ (see Fig.~\\ref{fig3}) which are calculated and averaged in the computer. The $I$ and $Q$ components can be described in terms of continuous variables $x$ and $p$, so they can be written as $I_i=\\sqrt{\\hbar \\omega_iBRg_{\\text H}}x_i$ and $Q_i=\\sqrt{\\hbar \\omega_iBRg_{\\text H}}p_i$, where $R=50\\Omega$, $B$ is the measurement bandwidth set by a digital filter, and $g_{\\text H}$ is the HEMT gain.\n \n\\begin{figure}\n\\includegraphics[scale=0.32]{Teleportationex3.pdf}\n\\caption{Microwave quantum teleportation circuit for Gaussian states}\n\\label{fig3}\n\\end{figure}\nThe modulated classical signal $\\delta=I_1+iQ_2=\\sqrt{\\hbar \\omega_iBRg_{\\text H}}(X_u+iP_v)$ is communicated classically with Bob, where\n\\begin{eqnarray}\n\\nonumber I_1 &=& \\sqrt{\\hbar \\omega_iBRg_{\\text H}} [|\\alpha_{\\text{LO}}|(\\sqrt{\\nu\\kappa g_{\\text {J}}} (e^{- y} x_{\\text{in}}+\\sqrt{\\eta\\epsilon}e^rx_1+\\zeta'_{x})], \\\\\nQ_2 &=& \\sqrt{\\hbar \\omega_iBRg_{\\text H}} [|\\alpha_{\\text{LO}}|(\\sqrt{\\nu\\kappa g_{\\text {J}}} ( e^{ y} p_{\\text{in}}-\\sqrt{\\eta\\epsilon}e^{-r}p_1+\\zeta'_{p})].\n\\end{eqnarray}\n\nwhere $\\zeta'_{x}=\\frac{\\zeta_{x}}{\\sqrt{\\nu\\kappa g_{\\text J}}}$, and $\\zeta'_{p}=\\frac{\\zeta_{p}}{\\sqrt{\\nu\\kappa g_{\\text J}}}$. 
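The digitised quadrature moments can be sketched in the same spirit: the factor $\\sqrt{\\hbar \\omega_iBRg_{\\text H}}$ converts the dimensionless heterodyne outputs into the measured $I$ and $Q$ components. In the fragment below the parameter values are those of the table further below, whether $\\omega_i$ includes the $2\\pi$ factor is left open there (here $2\\pi\\times 5$\\,GHz is used), and $X_u$, $P_v$ are toy numbers.
\\begin{verbatim}
import numpy as np

hbar = 1.054e-34
omega = 2 * np.pi * 5e9     # 5 GHz carrier (2*pi factor assumed)
B, R = 420e3, 50.0          # measurement bandwidth and resistance
g_H = 1e4                   # HEMT gain

scale = np.sqrt(hbar * omega * B * R * g_H)
X_u, P_v = 0.7, -0.4        # toy heterodyne outputs
I1, Q2 = scale * X_u, scale * P_v
delta = I1 + 1j * Q2        # classical signal communicated to Bob
print(scale, delta)
\\end{verbatim}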
The above states $I_1$ and $Q_2$ can be sent (or prepared at a distance) to Bob via remote state preparation (RSP)\\cite{Pogorzalek}. Then Bob displaces his mode, i.e. $\\alpha_2=(e^rx_2+ie^{-r}p_2)\/\\sqrt{2}$, according to the received signal. In fact, after the measurement the entangled-state values have already collapsed to $x_2=-x_1$ and $p_2=p_1$. The displacement on Bob's side is implemented with a directional coupler and is described as an asymmetric beam splitter with transmissivity $\\tau$, which is $B_S(\\tau)_4=[\\sqrt{1-\\tau}\\mathbb{I}_2, \\sqrt{\\tau}\\mathbb{I}_2; -\\sqrt{\\tau}\\mathbb{I}_2, \\sqrt{1-\\tau}\\mathbb{I}_2]$ with $\\tau=1-10^{\\beta\/10}$, where $\\beta$ is the coupling strength expressed in decibels (dB)\\cite{Pogorzalek, Roberto}. We write the first moments before the beam splitter as ${ x_{\\tau}}=(I_1, Q_2, e^rx_2, e^{-r}p_2 )^T$ and adjust the parameter $\\tau$ to\n\n\\begin{equation}\n\\tau=\\frac{\\epsilon\\eta}{2}=1-\\frac{1}{|\\alpha_{\\text{LO}}|^2(\\hbar \\omega_iBR\\nu\\kappa g_{\\text {J}}g_{\\text H} )}.\n\\end{equation}\nLetting $\\Lambda=|\\alpha_{\\text{LO}}|^2\\hbar \\omega_iBR\\nu\\kappa g_{\\text {J}}g_{\\text H}$, this gives $\\tau=1-\\frac{1}{\\Lambda}$. Since the maximum value of $\\tau=\\frac{\\epsilon\\eta}{2}$ is $1\/2$, this restricts $\\Lambda$ to $1<\\Lambda \\leq 2$ for a feasible teleportation protocol.\n\nThe coupling strength then becomes\n\n\\begin{equation}\n\\beta=10\\log_{10}\\frac{1}{\\Lambda}.\n\\end{equation}\n\nFinally, after the operation $B_S(\\tau)_4{ x_{\\tau}}$, the state is reconstructed in the upper arm after the beam splitter as $\\sqrt{2}\\alpha_{\\text{in}}=e^{- y}x_{\\text{in}}+ie^{y}p_{\\text{in}}$ with noise term $\\zeta=\\zeta'_{x}+i\\zeta'_{p}$, where the components are \n \\begin{eqnarray} \n\\nonumber e^{-y}x_{\\text{in}} +\\zeta'_{x}= \\frac{1 }{\\sqrt{\\Lambda}}I_1+ \\sqrt{\\tau} e^rx_2 \\\\\ne^{y}p_{\\text{in}}+\\zeta'_{p} = \\frac{1 }{\\sqrt{\\Lambda}}Q_2+\\sqrt{\\tau} e^{-r}p_2 \n\\end{eqnarray}\n\n In a lossless protocol, $\\eta=\\epsilon=\\kappa=\\nu=1$, the noise term disappears and the state can be reconstructed perfectly, but in reality the amount of loss is significant in the microwave regime. The magnitudes of the noise terms $\\zeta'_{x}$ and $\\zeta'_{p}$ can be obtained experimentally via calibration of the setup by setting the input to zero, i.e. $e^{-y}x_{\\text{in}}=0$ and $e^{y}p_{\\text{in}}=0$.
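As a consistency check, the coupler settings follow directly from $\\Lambda$: the sketch below computes $\\tau=1-1\/\\Lambda$ and $\\beta=10\\log_{10}(1\/\\Lambda)$ for the two $\\Lambda$ values quoted in the table below, and reproduces its transmissivity and coupling-strength entries up to rounding.
\\begin{verbatim}
import numpy as np

for Lam in [1.10, 1.74]:            # free-space and fridge table values
    tau = 1 - 1 / Lam               # directional-coupler transmissivity
    beta = 10 * np.log10(1 / Lam)   # coupling strength in dB
    print(f"Lambda = {Lam:.2f}: tau = {tau:.3f}, beta = {beta:.2f} dB")
\\end{verbatim}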
\n\n\n\\begin{widetext}\n\n\\begin{table}\n\\centering\n\\begin{tabular}{l c c}\n\\hline\nParameter &Symbol &Value\\\\\n\\hline\nFrequency & $\\omega_i$ & 5 GHz\\\\\nMeasurement bandwidth & $B$ & 420 kHz\\\\\nResistance & $R$ & 50 $\\Omega$\\\\\nHEMT gain & $g_{\\text H}$ & $10^4$\\\\\nAmplification gain of JPA & $g_{\\text J}$ & $10^2$ \\\\\nTransfer efficiency (at T$_1$=40mK) & $\\epsilon$ & 0.95 \\\\\nTransfer efficiency (at T$_2$=300K, Free space) & $\\eta$ & 0.10\\\\\nTransfer efficiency (at T$_2$=4K, Fridge) & $\\eta$ & 0.90\\\\\nTransfer efficiency (at T$_3$=4K) & $\\kappa$ & 0.65\\\\\nTransfer efficiency (at T$_4$=100mK) & $\\nu$ & 0.75\\\\\n\nLocal oscillator mode amplitude & $|\\alpha_{\\text{LO}}|$ & $10^6$V\/m\\\\\nAmplification squeezing& $r_{\\text{J}}$ & 2.30\\\\ \nSqueezing parameter of the TMSS & $r$ & 1.32 \\\\\nTransmissivity (for T$_2$=300K, Free space) & $\\tau$ & 0.095\\\\\nTransmissivity (for T$_2$=4K, Fridge) & $\\tau$ & 0.427\\\\\nCoupling strength (for T$_2$=300K, Free space)& $\\beta$ & -0.41\\\\\nCoupling strength (for T$_2$=4K, Fridge)& $\\beta$ & -2.40\\\\\nCoefficient (for T$_2$=300K, Free space) & $\\Lambda$ & 1.10 \\\\\nCoefficient (for T$_2$=4K, Fridge) & $\\Lambda$ & 1.74\\\\\nNoise (zero input, for T$_2$=300K, Free space)& $\\zeta'_x$ & (0.954)$$+(1.152)$$\\\\\nNoise (zero input, for T$_2$=300K, Free space)& $\\zeta'_p$ & (0.954)$$+(0.082)$$ \\\\\nNoise (zero input, for $_2$=4K, Fridge)& $\\zeta'_x$ & (0.758)$$+(2.444)$$\\\\\nNoise (zero input, for T$_2$=4K, Fridge)& $\\zeta'_p$ & (0.758)$$+(0.174)$$\\\\\n\n\\hline\n\\end{tabular}\n\\caption{Some approximations for the values in the microwave teleportation circuit}\n\\end{table}\n\n\\end{widetext}\n\n\n \n\n\n\n\n\n\n$Amplification-$ The amplification of signals is an essential feature in microwave communication in open air. In the quantum regime, HEMT amplifiers are suited for experiments in the microwave regime. A commercial HEMT usually has $g_{H}=10^{4}$, and working at 5 GHz frequencies it introduces between $n\\sim 10-100$ thermal photons~\\cite{Roberto} which can have destructive effects on the protocol. One alternative for amplification is the replacement of each HMET with two additional JPAs in the same arm (see Fig.~\\ref{fig4}) to reach the same gain of HEMT but with significant reduction of noise. In this case, the parameter $\\Lambda$ turns to $\\Lambda'$ where $\\Lambda'_{\\text{J}}=|\\alpha_{\\text{LO}}|^2\\hbar \\omega_iBR\\nu\\kappa (g_{\\text {J}})^3$ and therefore the state can be reconstructed based on the circuit in Fig.~\\ref{fig4}, as follows\n\n\\begin{eqnarray} \n\\nonumber e^{-y}x_{\\text{in}} +\\zeta'_{x}= \\frac{1 }{\\sqrt{\\Lambda'_{\\text{J}}}}I_1+ \\sqrt{\\tau} e^rx_2 \\\\\ne^{y}p_{\\text{in}}+\\zeta'_{p} = \\frac{1 }{\\sqrt{\\Lambda'_{\\text{J}}}}Q_2+\\sqrt{\\tau} e^{-r}p_2 \n\\end{eqnarray}\n\n\n\\begin{figure}\n\\includegraphics[scale=0.3]{Teleportationex4.pdf}\n\\caption{Replacement of each HEMT with two JPAs in each arm to reach the same gain but with significant lower thermal noise.}\n\\label{fig4}\n\\end{figure}\n\nBased on the above theoretical proposed protocols and physical formalism, an unconditional microwave quantum teleportation is applicable in real conditions. However, quantum microwave signals are very fragile and there are experimental limitations in free space as well as some fundamental bounds \\cite{SanzIEEE, Pirandolanew} that should be studied as the next research prospect. 
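A back-of-the-envelope check of the JPA-based amplification discussed above, using the gains listed in the table: with $g_{\\text J}=10^2$ and $g_{\\text H}=10^4$ one has $g_{\\text J}^3=g_{\\text J}g_{\\text H}$, so $\\Lambda'_{\\text J}=\\Lambda$ and the reconstruction formulas are unchanged while far fewer noise photons are added.
\\begin{verbatim}
g_J, g_H = 1e2, 1e4
print(g_J * g_H, g_J ** 3)   # 1e6 and 1e6: three JPAs per arm reach the
                             # same overall gain as one JPA plus a HEMT.
\\end{verbatim}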
\n\n\n\n\n\n\n\n\n\n\\begin{acknowledgments}\nThe author greatly acknowledges financial support from the projects QMiCS (820505) of the EU Flagship on Quantum Technologies as well as EU project EPIQUS (899368). Also, the author is very grateful for very helpful and constructive discussions with Mikel Sanz, Yasser Omar, Frank Deppe, and Shabir Barzanjeh.\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Introduction}\n\nThe recent upgrade to the Compact Array Broadband Backend (CABB) correlator on the Australia Telescope Compact Array (ATCA) has increased the bandwidth from 128\\,MHz to 2\\,GHz (for more details see paper by Emil Lenc in these proceedings). The increased sensitivity this provides allows us not only to go deeper, but to also observe large samples of sources in smaller amounts of time. To this effect, we have observed a sample of 1138 X-ray selected QSOs at 20\\,GHz as a comprehensive study of the high-frequency radio luminosity distribution of QSOs.\n\nQSOs are often classified into two broad categories; radio-loud and radio-quiet, but the underlying distribution of radio luminosity has long been debated in the literature. There are two opposing views - the first is that the distribution of radio-loudness is bimodal, i.e.\\ there are distinct radio-loud and radio-quiet populations, with 10--20\\% of QSOs being radio-loud\\cite{kellermann}. The second view is that there is a broad, continuous distribution with no clear dividing line between radio-loud and radio-quiet QSOs\\cite{cirasuolo}. Understanding this distribution can provide new insights into the high frequency radio properties of these sources while also allowing us to investigate the difference between X-ray selected and radio-selected QSO populations.\n\nWe are currently studying two samples of QSOs; one selected in X-rays using the {\\it ROSAT} All Sky Survey (RASS), and the other selected in the radio from the Australia Telescope 20\\,GHz (AT20G) survey\\cite{murphy}. The two samples overlap in redshift, but only a small number of objects appear in both samples. We aim to compare the physical properties of radio and X-ray-selected QSOs in the redshift range $z<1$. As a first step, we have used the ATCA to make deep 20\\,GHz observations of the $1138$ QSOs from the RASS X-ray sample which were not detected at the 40--50\\,mJy flux limit of the AT20G survey. This will allow us to determine whether their radio luminosity distribution is continuous with that of radio-selected QSOs in the same redshift range, or is clumped at much lower radio luminosities (implying a bimodal distribution). \n\nBy observing at 20\\,GHz we pick up the central core component of the AGN and hence see the most recent activity. At lower frequencies we observe a higher fraction of emission from radio lobes; relics of past activity integrated over large timescales which therefore confuse our sample.\n\n\\section*{Sample Selection}\n\nTargets were seleced from the RASS--6dFGS catalogue\\cite{mahony}; a catalogue of 3405 AGN selected from the ROSAT ALL Sky Survey (RASS) Bright Source Catalogue\\cite{voges} that were observed as part of the 6dF Galaxy Survey (6dFGS)\\cite{jones}. Sources were selected if the 6dFGS spectrum exhibited broad emission features indicative of a QSO or Type 1 AGN. We also set a redshift cutoff of z<1 to minimize any evolutionary effects. 
We then searched the Australia Telescope 20\\,GHz (AT20G) survey for any known 20 GHz radio sources which were excluded from our target list. This leaves a final sample of 1138 X-ray selected QSOs at z<1. Example 6dFGS spectra and corresponding 20 GHz observations are shown in Figure \\ref{example}. Since this is a large, low-redshift QSO sample which spans a wide range in optical luminosity, it can provide a definitive test of whether the $z<1$ QSO population is bimodal in its radio properties. Selecting sources that have 6dFGS spectroscopic information not only provides a uniform sample, but will also give us a wealth of extra information. Hence we can also study the optical spectral line properties, black hole masses, multiwavelength properties (X-ray -- optical --radio) and how these vary with redshift. \n\n\\begin{figure}[h]\n\\begin{minipage}{0.6\\linewidth}\n\\centerline{\\epsfig{file=g0150-5322.eps, width=\\linewidth}}\n\\end{minipage}\n\\begin{minipage}{0.4\\linewidth}\n\\centerline{\\epsfig{file=overlay0150-5322.eps, width=0.8\\linewidth}}\n\\end{minipage}\n\\begin{minipage}{0.6\\linewidth}\n\\centerline{\\epsfig{file=g0311-2046.eps, width=\\linewidth}}\n\\end{minipage}\n\\begin{minipage}{0.4\\linewidth}\n\\centerline{\\epsfig{file=overlay0311-2046.eps, width=0.8\\linewidth}}\n\\end{minipage}\n\\caption{Example RASS selected QSOs. The left image shows the 6dFGS spectrum. The arrows denote where spectral features would occur at that redshift, but not all of these are necessarily observed. On the right are the corresponding optical (B-band) images with 20\\,GHz contours overlaid. The top source was observed in October 2008 (47.3\\,mJy) with the old correlator and the bottom source was observed in October 2009 (1.64\\,mJy) using CABB. In both images the contours represent 10\\% intervals in flux, with the central contour corresponding to 90\\%. }\\label{example}\n\\end{figure}\n\n\\section*{Observations}\n\nWe observed 1138 X-ray selected QSOs at 20 GHz with the ATCA in a compact, hybrid configuration from 2008 -- 2010. The observations were carried out in a two-step process; all objects were observed for 2$\\times$40s cuts and the objects not detected in that time were then reobserved for 2$\\times$5min. These observations are summarised in Table 1. \n\\begin{table}\n\\begin{center}\n\\caption{List of observations for this program. All the observations from 2008--2010 used the Hybrid 168m array configuration to obtain better u,v coverage. The October 2008 run was using the old correlator (128\\,MHz bandwidth) and all other runs used CABB. \\label{tab1}}\n\\begin{tabular}{lcccc}\n\\hline\n{\\bf Date of Observations} & \\multicolumn{2}{c}{\\bf No. Sources} & {\\bf Time on Source} & {\\bf Detection limit} \\\\\n& {\\bf Observed} & {\\bf Detected} & & \\\\\n\\hline\nOctober 2008 & 135 & 16 (12\\%) & 2$\\times$40s & 3\\,mJy \\\\\nApril 2009 & 417 & 103 (25\\%) & 2$\\times$40s & 0.9\\,mJy \\\\\nOctober 2009 & 586 & 100 (17\\%) & 2$\\times$40s & 0.9\\,mJy \\\\\n& 377 & 58 (15\\%) & 2$\\times$5m & 0.5\\,mJy \\\\\nMarch 2010 & 122 & 21(17\\%) & 2$\\times$5m & 0.5\\,mJy \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\begin{figure}[h]\n \\centerline{\\epsfig{file=zrlum_split2.ps, width=0.85\\linewidth}}\n\\caption{20 GHz radio luminosity against redshift for our sample of X-ray selected QSOs. 
The green diamonds are sources that were observed using the old correlator on the Compact Array, including sources observed as part of the AT20G survey, while the blue circles show the sources that were detected at the 5 sigma threshold using CABB. The red triangles denote the upper limits for sources that weren't detected with CABB. }\n\\label{zrlum}\n\\end{figure}\n\nFigure \\ref{zrlum} shows preliminary results from these observations. The increased sensitivity provided by CABB is immediately obvious when comparing the 20\\,GHz luminosities of sources detected using the old correlator (green diamonds) and those detected with CABB (blue circles). Figure \\ref{hist} shows preliminary distributions of both the radio luminosity, and the R-parameter\\cite{kellermann} which is often used as a measure of the `radio-loudness' of a QSO. This is defined as the ratio of the radio to optical flux; in this case the 20\\,GHz radio flux divided by the optical B--band flux. In these figures, the red solid line indicates the distribution that was obtained using the old correlator on the ATCA whilst the solid black line shows the distributions obtained using CABB. The dashed line indicates the upper limits of the sources that were not detected. \n\n\\begin{figure}[h]\n\\begin{minipage}{0.5\\linewidth}\n \\centerline{\\epsfig{file=split_rlumdist.ps, width=\\linewidth}}\n\\end{minipage}\n\\begin{minipage}{0.5\\linewidth}\n \\centerline{\\epsfig{file=split_Rhist.ps, width=\\linewidth}}\n\\end{minipage}\n\\caption{{\\it(Left):} The radio luminosity distribution of our X-ray selected sample of QSOs at z<1. {\\it(Right):} The `radio-loudness' distribution of this sample. The `radio-loudness' was determined using the Kellermann R-parameter which is defined as the ratio of the radio to optical fluxes. Radio-loud QSOs have R>10, while radio-quiet QSOs generally have R<1. Since all of our objects have optical spectroscopy and reliable redshifts, this allows us to compare both ways of determining whether a QSO is radio-loud and investigate any differences between them.}\n\\label{hist}\n\\end{figure}\n\nThese figures, along with the detection limits noted in Table \\ref{tab1} highlight the significant improvement that CABB has achieved.\n\n\\section*{Future Work}\nThe results presented here are very preliminary and the data reduction and analysis is ongoing\\cite{mahony2}. However, the strict selection criteria and multiwavelength information provides a wealth of data allowing for a comprehensive study of these sources. In particular, work in progress includes:\n\\begin{itemize}\n\\item A statistical analysis of the data and resulting radio luminosity distributions.\n\\item Stacking experiments of the non-detected sources to study the average properties of radio-quiet X-ray QSOs. \n\\item We also have data at 5 and 9 GHz for a subset of this sample, allowing us to investigate whether the radio luminosity distributions change with frequency.\n\\end{itemize}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nTime series exhibiting spatial dependencies are present in many domains including ecology, meteorology, biology, medicine, economics, traffic, and vision. The observations can come from multiple sources e.g. GPS, satellite imagery, video cameras, etc. Two main difficulties when modeling spatio-temporal data come from their size - sensors can cover very large space and temporal lags - and from the complexity of the data generation process. 
Reducing the dimensionality and uncovering the underlying data generation process naturally leads to considering latent dynamic models. This has been exploited both in statistics \\cite{cressie2011} and in machine learning (ML) \\cite{bahadori2014fast,koppula2013learning}.\n\nDeep learning has developed a whole range of models for capturing relevant information representations for different tasks and modalities. For dynamic data, recurrent neural networks (RNNs) handle complex sequences for tasks like classification, sequence-to-sequence prediction, sequence generation and many others \\cite{Bengio2008, chung2015recurrent, li2015gated}.\n\nThese models are able to capture meaningful features of sequential data generation processes, but the spatial structure, essential in many applications, has seldom been considered in Deep Learning. Very recently, convolutional RNNs \\cite{NIPS2015_5955,srivastava2015} and video pixel networks \\cite{kalchbrenner2017} have been used to handle both spatiality and temporality, but for video applications only.\n\nWe explore a general class of deep spatio-temporal models by focusing on the problem of time series forecasting of spatial processes for different types of data. Although the target in this paper is forecasting, the model can easily be extended to cover related tasks like spatial forecasting (kriging \\cite{stein2012interpolation}) or data imputation.\n\n\nThe model, denoted Spatio-Temporal Neural Network (STNN), has been designed to capture the dynamics and correlations in multiple series at the spatial and temporal levels. This is a dynamical system model with two components: one for capturing the spatio-temporal dynamics of the process into latent states, and one for decoding these latent states into actual series observations.\n\nThe model is tested and compared to state-of-the-art alternatives, including recent RNN approaches, on spatio-temporal series forecasting problems: disease prediction, traffic forecasting, meteorology and oceanography. Besides a comparative evaluation on forecasting tasks, the ability of the model to discover relevant spatial relations between series is also analyzed.\n\nThe paper is organized as follows: in section \\ref{rw} we introduce the related work in machine learning and spatio-temporal statistics. The model is presented in section \\ref{model} with its different variants. The experiments are described in section \\ref{xp} for both forecasting (section \\ref{xp-forecast}) and relation discovery (section \\ref{xp-rel}).\n \n\\section{Related Work}\n\\label{rw}\nThe classical topic of time series modeling and forecasting has given rise to an extensive literature, both in statistics and machine learning. In statistics, classical linear models are based on auto-regressive and moving average components. Most assume linear and stationary time dependencies with a noise component \\cite{de200625}. In machine learning, non linear extensions of these models based on neural networks were proposed as early as the nineties, opening the way to many other non linear models developed both in statistics and ML, like kernel methods \\cite{muller1999using} for instance.\n\nDynamical state space models, such as recurrent neural networks, have been used for time series forecasting in different contexts since the early nineties \\cite{connor1994recurrent}.
Recently, these models have witnessed important successes in different areas of sequence modeling problems, leading to breakthrough in domains like speech \\cite{graves2013speech}, language generation \\cite{Sutskever2011}, translation \\cite{cho2014learning} and many others. A model closely related to our work is the dynamic factor graph model \\cite{mirowski2009dynamic} designed for multiple series modeling. Like ours, it is a generative model with a latent component that captures the temporal dynamics and a decoder for predicting the series. However, spatial dependencies are not considered in this model, and the learning and inference algorithms are different.\n\nRecently, the development of non parametric generative models has become a very popular research direction in Deep Learning, leading to different families of innovative and promising models. For example, the Stochastic Gradient Variational Bayes algorithm (SGVB) \\cite{kingma2013auto} provides a framework for learning stochastic latent variables with deep neural networks, and has recently been used by some authors to model time series \\cite{bayer2014learning, Chung2015, krishnan2015deep}. In our context, which requires to model explicitly both spatial and temporal dependencies between multiple time series, variational inference as proposed by such models is still intractable, especially when the number of series grows, as in our experiments.\n\nSpatio-temporal statistics have already a long history \\cite{cressie2011,wikle2010general}. The traditional methods rely on a descriptive approach using the first and second-order moments of the process for modeling the spatio-temporal dependencies. More recently, dynamical state space models, where the current state is conditioned on the past have been explored \\cite{Wikle2015}. For these models, time and space can be either continuous or discrete. The usual way is to consider discrete time, leading to the modeling of time series of spatial processes. When space is continuous, models are generally expressed by linear integro-difference equations, which is out of the scope of our work. With discrete time and space, models come down to general vectorial autoregressive formulations. These models face a curse of dimensionality in the case of a large number of sources. Different strategies have been adopted to solve this problem, such as embedding or parameter reduction. This leads to model families that are close to the ones used in machine learning for modeling dynamical phenomena, and incorporate a spatial components. An interesting feature of these approaches is the incorporation of prior knowledge inspired from physical models of space-time processes. This consists in taking inspiration from prior background of physical phenomena, e.g. diffusion laws in physics, and using this knowledge as guidelines for designing dependencies in statistical models. In climatology, models taking into account both temporal and geographical components have also been used such as Gaussian Markov Random Fields\\cite{rue2005gaussian} \n\nIn the machine learning domain, spatio-temporal modeling has been seldom considered, even though some spatio-temporal models have been proposed \\cite{ceci2017predictive}. \\cite{bahadori2014fast} introduce a tensor model for kriging and forecasting. \\cite{koppula2013learning} use conditional random fields for detecting activity in video, where time is discretized at the frame level and one of the tasks is the prediction of future activity. 
Brain Computer Interface (BCI) is another domain for spatio-temporal data analysis with some work focused on learning spatio-temporal filters \\cite{Dornhege2005,ren2014convolutional}, but this is a very specific and different topic.\n\n\\section{The STNN Model}\n\\label{model}\n\\subsection{Notations and Task}\nLet us consider a set of $n$ temporal series, $m$ is the dimensionality of each series and $T$ their length \\footnote{We assume that all the series have the same dimensionality and length. This is often the case for spatio-temporal problems otherwise this restriction can be easily removed.}. $m=1$ means that we consider $n$ univariate series, while $m>1$ correspond to $n$ multivariate series each with $m$ components. We will denote $X$ the values of all the series between time $1$ and time $T$. $X$ is then a $\\mathbb{R}^{ T \\times n \\times m}$ 3-dimensional tensor, such that $X_{t,i,j}$ is the value of the j-th component of series $i$ at time $t$. $X_t$ will denote a slice of $X$ at time $t$ such that $X_t \\in \\mathbb{R}^{n \\times m}$ denotes the values of all the series at time $t$.\n\nFor simplicity, we first present our model in a mono-relational setting. An extension to multi-relational series where different relations between series are observed is described in section \\ref{multi-modal-rel}. We consider that the spatial organization of the sources is captured through a matrix $W \\in \\mathbb{R}^{n \\times n}$. Ideally, $W$ would indicate the mutual influence between sources, given as a prior information. In practice, it might be a proximity or similarity matrix between the sources: for geo-spatial problems, this might correspond to the inverse of a physical distance - e.g. geodesic - between sources. For other applications, this might be provided through local connections between sources using a graph structure (e.g. adjacency matrix for connected roads in a traffic prediction application or graph kernel on the web). In a first step, we make the hypothesis that $W$ is provided as a prior on the spatial relations between the series. An extension where weights on these relations are learned is presented in section \\ref{learn-spt-rel}. \n\nWe consider in the following the problem of spatial time series forecasting i.e predicting the future of the series, knowing their past. We want to learn a model $f : \\mathbb{R}^{ T \\times n \\times m} \\times \\mathbb{R}^{n \\times n} \\rightarrow \\mathbb{R}^{\\tau \\times n \\times m}$ able to predict the future at $\\tau$ time-steps of the series based on $X$ and on their spatial dependency.\n\n\\subsection{Modeling Time Series with Continuous Latent Factors}\n\\label{mts}\nLet us first introduce the model in the simpler case of multiple time series prediction, without considering spatial relations. The model has two components.\n\nThe first one captures the dynamic of the process and is expressed in a latent space. Let $Z_t$ be the latent representation, or latent factors, of the series at time $t$. The dynamical component writes $Z_{t+1}=g(Z_t)$. The second component is a decoder which maps latent factors $Z_t$ onto a prediction of the actual series values at $t$: $\\tilde{X_t} = d(Z_t)$, $\\tilde{X_t}$ being the prediction computed at time $t$. In this model, both the representations $Z_t$ and the parameters of the dynamical and decoder components are learned. Note that this model is different from the classical RNN formulations \\cite{hochreiter1997long, cho2014learning}. 
The state space component of a RNN with self loops on the hidden cells writes $Z_{t+1}=g(Z_t,X'_t)$, where $X'_t$ is the ground truth $X_t$ during training, and the predicted value $\\tilde{X_t}$ during inference. In our approach, latent factors $Z_t$ are learned during training and are not an explicit function of past inputs as in RNNs: the dynamics of the series are then captured entirely in the latent space.\n\nThis formal definition makes the model more flexible than RNNs since not only the dynamic transition function $g(.)$, but also the state representations $Z_t$ are learned from data. A similar argument is developed in \\cite{mirowski2009dynamic}. It is similar in spirit to Hidden Markov models or Kalman filters.\n\n\n\n\n\n\n\n\n\\paragraph{Learning problem}\nOur objective is to learn the two mapping functions $d$ and $g$ together with the latent factors $Z_t$, directly from the observed series. We formalize this learning problem with a bi-objective loss function that captures the dynamics of the series in the latent space and the mapping from this latent space to the observations. Let $\\mathcal{L}(g,d,Z)$ be this objective function:\n\\begin{equation}\n\\begin{gathered}\n \\mathcal{L}(d,g,Z) = \\frac{1}{T}\\sum\\limits_t \\Delta(d(Z_t),X_t) \\hfill \\text{ (i)}\\\\+ \\lambda \\frac{1}{T} \\sum\\limits_{t=1}^{T-1} || Z_{t+1} - g(Z_t) ||^2 \\hfill \\text{ (ii)}\n\\end{gathered}\n\\label{eqfold}\n\\end{equation}\n\nThe first term (i) measures the ability of the model to reconstruct the observed values $X_t$ from the latent factor $Z_t$. It is based on loss function $\\Delta$ which measures the discrepancy between predictions $d(Z_t)$ and ground truth $X_t$. The second term (ii) aims at capturing the dynamicity of the series in the latent space. This term forces the system to learn latent factors $Z_{t+1}$ that are as close as possible to $g(Z_t)$. Note that in the ideal case, the model converges to a solution where $Z_{t+1}=g(Z_t)$, which is the classical assumption made when using RNNs. The hyper-parameter $\\lambda$ is used here to balance this constraint and is fixed by cross-validation. The solution $d^*,g^*,Z^*$ to this problem is computed by minimizing $\\mathcal{L}(d,g,Z)$:\n\\begin{equation}\nd^*,g^*,Z^* = \\arg \\min\\limits_{d,g,Z} \\mathcal{L}(d,g,Z)\n\\label{eqf}\n\\end{equation}\n\n\\paragraph{Learning algorithm}\nIn our setting, functions $d$ and $g$, described in the next section, are differentiable parametric functions. Hence, the learning problem can be solved end-to-end with Stochastic Gradient Descent (SGD) techniques\\footnote{In the experiments, we used the Nesterov's Accelerated Gradient (NAG) method \\cite{sutskever2013importance}.} directly from \\eqref{eqf}. At each iteration, a pair $(Z_t, Z_{t+1})$ is sampled, and $Z_t$, $Z_{t+1}$, $g$ and $d$ are updated according to the gradient of \\eqref{eqfold}. Training can also be performed via mini-batch, meaning that for each iteration several pairs $(Z_t, Z_{t+1})$ are sampled, instead of a single pair. This results in a high learning speed-up when using GPUs which are the classical configuration for running such methods.\n\\paragraph{Inference} Once the model is learned, it can be used to predict future values of the series. The inference method is the following: the latent factors of any future state of the series is computed using the $g$ function, and the corresponding observations is predicted by using $d$ on these factors. 
Formally, let us denote $\\tilde{Z}_\\tau$ the predicted latent factors at time $T+\\tau$. The forecasting process computes $\\tilde{Z}_{\\tau}$ by successively applying the $g$ function $\\tau$ times on the learned vector $Z_T$:\n\\begin{equation}\n\\tilde{Z}_{\\tau} = g \\circ g \\circ ... \\circ g(Z_T)\n\\end{equation}\nand then computes the predicted outputs $\\tilde{X}_{\\tau}$:\n\\begin{equation}\n\\tilde{X}_{\\tau}=d(\\tilde{Z_{\\tau}})\n\\end{equation}\n\n\\subsection{Modeling Spatio-Temporal Series}\n\\label{msts}\nLet us now introduce a spatial component in the model. We consider that each series has its own latent representation at each time step. $Z_t$ is thus a $n \\times N$ matrix such that $Z_{t,i} \\in \\mathbb{R}^N$ is the latent factor of series $i$ at time $t$, $N$ being the dimension of the latent space. This is different from approaches like \\cite{mirowski2009dynamic} or RNNs for multiple series prediction, where $Z_t$ would be a single vector common to all the series. The decoding and dynamic functions $d$ and $g$ are respectively mapping $\\mathbb{R}^{n \\times N}$ to $\\mathbb{R}^{n \\times m}$ and $\\mathbb{R}^{n \\times N}$ to $\\mathbb{R}^{n \\times N}$.\n\nThe spatial information is integrated in the dynamic component of our model through a matrix $W \\in \\mathbb{R}_+^{n \\times n}$. In a first step, we consider that $W$ is provided as prior information on the series' mutual influences. In \\ref{ref}, we remove this restriction, and show how it is possible to learn the weights of the relations, and even the spatial relations themselves, directly from the observed data. The latent representation of any series at time $t+1$ depends on its own latent representation at time $t$ (intra-dependency) and on the representations of the other series at $t$ (inter-dependency). Intra-dependency will be captured through a linear mapping denoted $\\Theta^{(0)} \\in \\mathbb{R}^{N \\times N}$ and inter-dependency will be captured by averaging the latent vector representations of the neighboring series using matrix $W$, and computing a linear combination denoted $\\Theta^{(1)} \\in \\mathbb{R}^{N \\times N}$ of this average. Formally, the dynamic model $g(Z_t)$ is designed as follow:\n\\begin{equation}\n\tZ_{t+1} = h(Z_t \\Theta^{(0)}+ W Z_t \\Theta^{(1)})\n \\label{dynamic}\n\\end{equation}\nHere, $h$ is a non-linear function. In the experiments we set $h=tanh$ but $h$ could also be a more complex parametrized function like a multi-layer perceptron (MLPs) for example -- see section \\ref{xp}. 
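A minimal NumPy sketch of the dynamic component in equation \\eqref{dynamic} and of the forecasting loop described above is given below; the parameters stand in for the learned $d$, $g$ and $Z_T$ (here they are random toy values), the binary $W$ is a synthetic spatial prior, and the decoder is the linear map used in the experiments.
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, m, N = 29, 1, 10            # number of series, series dim, latent dim

W = (rng.random((n, n)) < 0.1).astype(float)   # toy spatial prior
Theta0 = rng.normal(scale=0.1, size=(N, N))    # intra-series transition
Theta1 = rng.normal(scale=0.1, size=(N, N))    # inter-series transition
D = rng.normal(scale=0.1, size=(N, m))         # linear decoder d

def g(Z):
    # One latent step: Z_{t+1} = tanh(Z Theta0 + W Z Theta1)
    return np.tanh(Z @ Theta0 + W @ Z @ Theta1)

Z = rng.normal(size=(n, N))    # latent factors at time T (toy values)
for step in range(1, 6):       # forecast horizon T+1, ..., T+5
    Z = g(Z)
    X_pred = Z @ D             # decoded predictions, shape (n, m)
    print(step, X_pred.shape)
\\end{verbatim}
In the full model these quantities are of course obtained by minimizing the objective above with SGD rather than drawn at random.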
The resulting optimization problem over $d$, $Z$, $\\Theta^{(0)}$ and $\\Theta^{(1)}$ writes:\n\n\\begin{equation}\n\\begin{gathered}\nd^*, Z^*, \\Theta^{(0)*}, \\Theta^{(1)*} =\\\\ \\argmin_{d,Z,\\Theta^{(0)}, \\Theta^{(1)}} \\frac{1}{T}\\sum\\limits_t \\Delta(d(Z_t),X_t) +\\\\\n\\lambda \\frac{1}{T} \\sum\\limits_{t=1}^{T-1} || Z_{t+1} - h(Z_t \\Theta^{(0)}+ W Z_t \\Theta^{(1)}) ||^2\\\\\n\\text{ with } Z_t \\in \\mathbb{R}^{n \\times N}\n\\end{gathered}\n\\label{eqm}\n\\end{equation}\n\n\\begin{table*}[ht]\n\\centering\n\\begin{tabular}{|c||c||c|c|c|c|c|c|c|} \\hline\nDataset & Task & $n$ & $m$ & nb relations & time-step & total length & training length & \\#folds \\\\ \\hline \\hline\nGoogle Flu & Flu trends & 29 & 1 & 1 to 3 & weeks& $\\approx$ 10 years & 2 years & 50 \\\\ \nGHO (25 datasets) & Number of deaths & 91 & 1 & 1 to 3 & years& 45 years & 35 years & 5 \\\\ \\hline \nWind & Wind speed and orientation & 500 & 2 & 1 to 3 & hours& 30 days & 10 days & 20 \\\\\nPST & Temperature & 2520 & 1 & 8 & months & $\\approx$ 33 years& 10 years & 15 \\\\ \n\\hline\nBejing & Traffic Prediction & 5000 & 1 & 1 to 3 & 15 min& 1 week & 2 days & 20 \\\\ \\hline\n\\end{tabular}\n\n\\caption{Datasets statistics. $n$ is the number of series, $m$ is the dimension of each series, $timestep$ corresponds to the duration of one time-step and \\textit{\\#folds} corresponds to the number of temporal folds used for validation. For each fold, evaluation has been made on the next 5 values at $T+1,T+2,...,T+5$. The relation columns specifies the number of different relation types used in the experiments i.e the number of $W^{(r)}$ matrices used in each dataset. 1 to 3 means that the best among 1 to 3 relations was selected using cross validation}\n\\label{datasets}\n\\end{table*}\n\n\n\\subsection{Modeling different types of relations}\n\\label{multi-modal-rel}\nThe model in section \\ref{msts} considers that all the spatial relations are of the same type (e.g. source proximity). For many problems, we will have to consider different types of relations. For instance, when sensors correspond to physical locations and the target is some meteorological variable, the relative orientation or position of two sources may imply a different type of dependency between the sources. In the experimental section, we consider problems with relations based on the relative position of sources, $north, south, west, east, ...$. The multi-relational framework generalizes the previous formulation of the model, and allows us to incorporate more abstract relations, like different measures of proximity or similarity between sources. For instance, when sources are spatiality organized in a graph, it is possible to define different graph kernels, each one of them modeling a specific similarity. The following multi-relational formulation is based on adjacency matrices, and can directly incorporate different graph kernels. \n\n\n\n\nEach possible relation type is denoted \\textit{r} and is associated to a matrix $W^{(r)} \\in \\mathbb{R}_+^{n \\times n}$. For now, and as before, we consider that the $W^{(r)}$ are provided as prior knowledge. Each type of relation \\textit{r} is associated to a transition matrix $\\Theta^{(r)}$. This learned matrix captures the spatio-temporal relationship between the series for this particular type of relation. 
The model dynamics writes:\n\\begin{equation}\nZ_{t+1} = h( Z_t \\Theta^{(0)}+\\sum\\limits_{r \\in \\mathcal{R}} W^{(r)} Z_t \\Theta^{(r)})\n\\label{dynamic-multi-rel}\n\\end{equation}\nwhere $\\mathcal{R}$ is the set of all possible types of relations. The learning problem is similar to equation \\eqref{eqm} with $Z_{t+1}$ replaced by the expression in \\eqref{dynamic-multi-rel}. The corresponding model is illustrated in figure \\ref{networks}. This dynamic model aggregates the latent representations of the series for each type of relation, and then applies $\\Theta^{(r)}$ on this aggregate. Each $\\Theta^{(r)}$ is able to capture the dynamics specific to relation $(r)$.\n\n\n\n\n\\begin{figure}[ht]\n \\begin{center}\n \t\\includegraphics[width=1\\linewidth]{STNN_2.png} \\\\\n \\end{center}\n \\caption{Architecture of the STNN model as described in Section \\ref{multi-modal-rel}}.\n \\label{networks}\n\\end{figure}\n\n\\section{Learning the relation weights and capturing spatio-temporal correlations}\n\\label{learn-spt-rel}\n\\label{ref}\nIn the previous sections, we made the hypothesis that the spatial relational structure and the strength of influence between series were provided to the model through the $W^{(r)}$ matrices. We introduce below an extension of the model where weights on these relations are learned. This model is denoted STNN-R(efining). We further show that, with a slight modification, this model can be extended to learn both the relations and their weights directly from the data, without any prior. This extension is denoted STNN-D(iscovering).\n\n\n\n\\begin{table*}[t]\n\\centering\n\\begin{tabular}{|c||c|c|c|c|c|c|}\n\\hline\n\\multirow{2}{*}{Models} & \\multicolumn{2}{c|}{Disease}& Car Traffic &\\multicolumn{2}{|c}{Geographical}\\\\\n\\hhline{~------}\n& Google Flu & GHO\\footnote{Results shown here correspond to the average for all diseases in the dataset. The detail is in the supplementary material for space convenience.} & Beijing &Speed&Direction&PST \\\\\n\\hhline{=:=:=:=:=:=:=}\nMEAN & .175& .335&.201&0.191&0.225&.258\\\\\\hline\nAR & .101$\\pm .004$&$.299 \\pm .008$&$.075 \\pm .003$&$.082 \\pm .005$&0.098$\\pm .016$&$.15\\pm .002$\\\\\\hline\nVAR-MLP & $.095 \\pm .004$& $.291 \\pm .004$ & $.07 \\pm .002$&$.071 \\pm .005$&0.111$\\pm 0.14$&$.132\\pm .003$\\\\\\hline\nDFG & $.095 \\pm .008$& $.288 \\pm .002$ & $.068 \\pm .005$&$.07 \\pm .004$&$.092 \\pm .006$&$.99\\pm .019$\\\\\\hline\nRNN-tanh & $.082 \\pm .008$& $.287 \\pm .011$ & $.075 \\pm .006$&$.064 \\pm .003$&$.09 \\pm .005$&$.141 \\pm .01$\\\\\\hline\nRNN-GRU & $.074 \\pm .007$ &$.268 \\pm .07$&$.074\\pm .002$&$.059 \\pm .009$&$.083 \\pm .005$&$.104\\pm .008$\\\\\\hline\nSTNN & $.066 \\pm .006$ &$\\textbf{.261}\\pm .009$&$.056\\pm .003$&$\\textbf{.047}\\pm .008$&$\\textbf{.061}\\pm .008$&$.095\\pm .008$\\\\\\hline\nSTNN-R & $\\textbf{.061} \\pm .008$ &$\\textbf{.261}\\pm .01$&$\\textbf{.055}\\pm .004$&$\\textbf{.047}\\pm .008$&$\\textbf{.061}\\pm .008$&$\\textbf{.08}\\pm .014$\\\\\\hline\nSTNN-D & $.073 \\pm .007$ & $.288 \\pm .09$&$.069\\pm .01$&$.059\\pm .008$&$.073\\pm .008$&$.109\\pm .015$ \\\\\\hline\n\n\\end{tabular}\n\\caption{\\label{tab:widgets}Average RMSE for the different datasets computed for T+1, T+2,...,T+5. 
Standard deviation was computed by re-training the models on different seeds.}\n\\label{table1}\n\\end{table*}\n\n\\begin{table*}[h]\n\\begin{center}\n\\begin{tabular}{|p{4.5cm}|c|c|c|c|c|c|}\n\n\\hline\nDisease \/ Model & AR & VAR-MLP& RNN-GRU &Mean&DFG&STNN-R\\\\\n\\hline\n\nAll causes&0.237&0.228&0.199&0.35&0.291&\\textbf{0.197}\\\\\n\\hline\nTuberculosis&0.407&0.418&\\textbf{0.37}&0.395&0.421&0.377\\\\\n\\hline\nCongenital syphilis&0.432&0.443&0.417&0.459&0.422&\\textbf{0.409}\\\\\n\\hline\nDiphtheria&0.406&0.396&0.387&0.404&0.419&\\textbf{0.385}\\\\\n\\hline\nMalignant neoplasm of esophagus&0.355&\\textbf{0.341}&\\textbf{0.341}&0.363&0.372&0.345\\\\\n\\hline\nMalignant neoplasm of stomach&0.44&0.434&0.431&0.455&0.452&\\textbf{0.43}\\\\\n\\hline\n&0.267&0.254&0.282&0.303&0.301&\\textbf{0.253}\\\\\n\\hline\nMalignant neoplasm of intestine&0.281&0.29&0.278&0.314&0.305&\\textbf{0.275}\\\\\n\\hline\nMalignant neoplasm of rectum&0.501&0.499&\\textbf{0.481}&0.504&0.509&0.498\\\\\n\\hline\nMalignant neoplasm of larynx&0.321&0.313&0.32&0.314&0.329&\\textbf{0.310}\\\\\n\\hline\nMalignant neoplasm of breast&0.375&0.375&0.382&0.394&0.38&\\textbf{0.36}\\\\\n\\hline\nMalignant neoplasm of prostate&0.111&0.113&\\textbf{0.109}&0.184&0.138&\\textbf{0.109}\\\\\n\\hline\nMalignant neoplasm of skin&0.253&0.243&0.227&0.264&0.256&\\textbf{0.221}\\\\\n\\hline\nMalignant neoplasm of bones&0.103&0.099&0.097&0.204&0.173&\\textbf{0.08}\\\\\n\\hline\nMalignant neoplasm of all other and unspecified sites &0.145&0.157&\\textbf{0.147}&0.164&0.169&0.156\\\\\n\\hline\nLymphosarcoma&0.15&0.132&0.13&0.231&0.135&\\textbf{0.122}\\\\\n\\hline\nBenign neoplasms &0.366&0.362&0.332&0.398&0.331&\\textbf{0.331}\\\\\n\\hline\nAvitaminonsis&0.492&0.474&0.449&0.571&0.58&\\textbf{0.414}\\\\\n\\hline\nAllergic disorders&0.208&0.217&0.221&0.342&0.24&\\textbf{0.202}\\\\\n\\hline\nMultiple sclerosis&0.061&0.057&0.061&0.242&0.152&\\textbf{0.056}\\\\\n\\hline\nRheumatic fever&0.325&0.31&0.287&0.345&0.313&\\textbf{0.256}\\\\\n\\hline\nDiseases of arteries&0.302&0.301&0.269&0.345&0.328&\\textbf{0.238}\\\\\n\\hline\nInfluenza&0.141&0.141&0.155&0.23&0.217&\\textbf{0.125}\\\\\n\\hline\nPneumonia&0.119&0.128&\\textbf{0.1}&0.187&0.187&\\textbf{0.1}\\\\\n\\hline\nPleurisy&0.246&0.246&0.247&0.29&0.272&\\textbf{0.245}\\\\\n\\hline\nGastro-enteritis&0.386&0.369&\\textbf{0.291}&0.394&0.398&0.295\\\\\n\\hline\nDisease of teeth&0.344&0.312&0.305&0.413&0.361&\\textbf{0.302}\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\caption{RMSE of STNN-R over the 25 datasets in GHO for T+1, T+2,...,T+5}\n\\label{fl4}\n\\end{table*}\n\nWe will first introduce the STNN-R extension. Let $\\Gamma^{(r)} \\in \\mathbb{R}^{n \\times n}$ be a matrix of weights such that $\\Gamma^{(r)}_{i,j}$ is the strength of the relation between series $i$ and $j$ in the relation $r$. Let us extend the formulation in Equation \\eqref{dynamic-multi-rel} as follows:\n\\begin{equation}\nZ_{t+1} = h( Z_t \\Theta^{(0)}+\\sum\\limits_{r \\in \\mathcal{R}} (W^{(r)}\\odot \\Gamma^{(r)}) Z_t \\Theta^{(r)})\n\\label{gamma-multi-rel}\n\\end{equation}\nwhere $\\Gamma^{(r)}$ is a matrix to be learned, $W^{(r)}$ is a prior i.e a set of observed relations, and $\\odot$ is the element-wise multiplication between two matrices. 
\n The learning problem can be now be written as:\n\\begin{equation}\n\\begin{gathered}\nd^*, Z^*, \\Theta^*, \\Gamma^* =\\\\ \\argmin_{d,Z,\\Gamma} \\frac{1}{T}\\sum\\limits_t \\Delta(d(Z_t),X_t)+ \\gamma |\\Gamma| \\\\+ \\lambda \\frac{1}{T} \\sum\\limits_{t=1}^{T-1} || Z_{t+1} - h(\\sum\\limits_{r \\in \\mathcal(R)} (W^{(r)} \\odot \\Gamma^{(r)}) Z_t \\Theta^{(r)}) ||^2\n\\end{gathered}\n\\label{eqf}\n\\end{equation}\n\nwhere $|\\Gamma^{(r)}|$ is a $l_1$ regularizing term that aims at sparsifying $\\Gamma^{(r)}$. We thus add an hyper-parameter $\\gamma$ to tune this regularization factor.\n\nIf no prior is available, then simply removing the $W^{(r)}s$ from equation \\eqref{gamma-multi-rel} leads to the following model:\n\\begin{equation}\nZ_{t+1} = h( Z_t \\Theta^{(0)}+\\sum\\limits_{r \\in \\mathcal{R}} \\Gamma^{(r)} Z_t \\Theta^{(r)})\n\\label{gamma-multi-rel-no-W}\n\\end{equation}\n\nwhere $\\Gamma^{(r)}$ is no more constrained by the prior $W^{(r)}$ so that it will represent both the relational structure and the relation weights. Both models are learned with SGD, in the same way as described in \\ref{mts}. The only difference is that a gradient step on the $\\Gamma^{(r)}s$ is added.\n\n\\section{Experiments}\n\\label{xp}\n\\begin{figure*}[t]\n\\begin{center}\n\\begin{tabular}{ccc}\nGround Truth & RNN-GRU & STNN-R \\\\\n\\includegraphics[width=0.3\\linewidth]{truth2} &\n\\includegraphics[width=0.3\\linewidth]{predicrecur.png} & \n\\includegraphics[width=0.3\\linewidth]{predicted_selec.png} \n\\end{tabular}\n\\end{center}\n\\caption{Prediction of wind speed over around 500 stations on the US territory. prediction is shown at time-step $T+1$ for RNN-GRU (centre) and STNN-R (right).}\n\\label{meteo}\n\\end{figure*}\n\n\\begin{figure*}\n \\centering\n \\begin{tabular}{ccc}\n Ground Truth & RNN-GRU & STNN-R \\\\\n \\includegraphics[width=0.30\\linewidth]{t1.png} \n & \\includegraphics[width=0.30\\linewidth]{r1.png}\n & \\includegraphics[width=0.30\\linewidth]{o1.png}\\\\ \n \\includegraphics[width=0.30\\linewidth]{t2.png} \n & \\includegraphics[width=0.30\\linewidth]{r2.png}\n & \\includegraphics[width=0.30\\linewidth]{o2.png}\\\\ \n \\includegraphics[width=0.30\\linewidth]{t3.png} \n & \\includegraphics[width=0.30\\linewidth]{r3.png}\n & \\includegraphics[width=0.30\\linewidth]{o3.png}\n \\end{tabular}\n \\caption{Example of a 3 months prediction of Pacific temperature. Left column is the ground truth, central and right columns correspond respectively to RNN-GRU and STNN-R predictions at horizon $T+1$, $T+2$ and $T+3$ (top to bottom).}\n\\label{oceano}\n\\end{figure*}\n\nExperiments are performed on a series of spatio-temporal forecasting problems representative of different domains. We consider predictions within a $+5$ horizon i.e. given a training series of size $T$, the evaluation of the quality of the model will be made over $T+1$ to $T+5$ time steps. The different model hyper-parameters are selected using a time-series cross-validation procedure called rolling origin as in \\cite{ben2014boosting,ganeshapillai2013learning}. This protocol makes use of a sliding window of size $T'$: on a series of length $T$, a window of size $T'$ shifted several times in order to create a set of train\/test folds. The beginning of the $T'$ window is used for training and the remaining for test. The value of $T'$ is fixed so that it is large enough to capture the main dynamics of the different series. 
Each series was re-scaled between $0$ and $1$.\n\nWe performed experiments with the following models:\\\\\n(i) \\textbf{Mean}: a simple heuristic which predicts future values of a series with the mean of its observed past values computed on the $T'$ training steps of each training fold.\\\\\n(ii) \\textbf{AR}: a classical univariate Auto-Regressive model. For each series and each variable of the series, the prediction is a linear function of $R$ past lags of the variable, $R$ being a hyper-parameter tuned on a validation set.\\\\\n(iii) \\textbf{VAR-MLP}: a vectorial auto-regressive model where the predicted values of the series at time $t+1$ depend on the past values of all the series for a lag of size $R$. The predictive model is a multi-layer perceptron with one hidden layer. Its performance was uniformly better than that of a linear VAR. Here again the hidden layer size and the lag $R$ were set by validation.\\\\\n(iv) \\textbf{RNN-tanh}: a vanilla recurrent neural network with one hidden layer of recurrent units and tanh non-linearities. As for the \\textbf{VAR-MLP}, one considers all the series simultaneously, i.e. at time $t$ the RNN receives as input $X_{t-1}$, the values of all the series at $t-1$, and predicts $X_{t}$. An RNN is a dynamical state-space model, but its latent state $Z_t$ depends, through an explicit functional dependency, both on the preceding values of the series $X_{t-1}$ and on the preceding state $Z_{t-1}$. Note that this model has the potential to capture the spatial dependencies since all the series are considered simultaneously, but does not model them explicitly.\\\\\n(v) \\textbf{RNN-GRU}: same as the \\textbf{RNN-tanh}, but the recurrent units are replaced with gated recurrent units (GRU), which are considered state of the art for many sequence prediction problems today \\footnote{We also performed tests with LSTM and obtained results similar to GRU.}. We have experimented with several architectures, but using more than one layer of GRU units did not improve the performance, so we used 1 layer in all the experiments.\\\\\n(vi) \\textbf{Dynamic Factor Graph (DFG)}: the model proposed in \\cite{mirowski2009dynamic} is the closest to ours but uses a joint vectorial latent representation for all the series, as in the RNNs, and does not explicitly model the spatial relations between series.\\\\\n(vii) \\textbf{STNN}: our model where $g$ is the function described in equation \\eqref{dynamic-multi-rel}, $h$ is the $tanh$ function, and $d$ is a linear function. Note that other architectures for $d$ and $g$ have been tested (e.g. multi-layer perceptrons) without improving the quality of the prediction. The $\\lambda$ value has been set by cross validation.\\\\\n(viii and ix) \\textbf{STNN-R} and \\textbf{STNN-D}: For the forecasting experiments, the $\\gamma$ value of the $L_1$ penalty (see equation \\eqref{eqf}) was set to $0$, since higher values decreased the performance, a phenomenon often observed in other models such as $L_1$-regularized SVMs. The influence of $\\gamma$ on the discovered spatial structure is further discussed and illustrated in figure \\ref{sparsity}. \n\nThe complete set of hyper-parameter values for the different models is given in the appendix. 
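One possible implementation of the rolling-origin folds used above is sketched below; the function and parameter names are ours, the window of size $T'$ is split into a training part and a 5-step test part, and the exact spacing of the fold origins used in the experiments is not specified in the text.
\\begin{verbatim}
import numpy as np

def rolling_origin_folds(T, T_train, horizon=5, n_folds=20):
    # Start indices of rolling-origin folds: each fold trains on
    # [s, s + T_train) and is evaluated on the next `horizon` steps.
    last_start = T - T_train - horizon
    starts = np.linspace(0, last_start, n_folds).astype(int)
    return [(s, s + T_train, s + T_train + horizon) for s in starts]

# e.g. Google Flu: ~10 years of weekly data, 2-year training window
folds = rolling_origin_folds(T=520, T_train=104, horizon=5, n_folds=50)
print(folds[:3])
\\end{verbatim}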
\n\n\\subsection{Datasets}\n\\label{datasets}\n\\begin{figure}[t]\n\\centering\n\\begin{tikzpicture}\n \\begin{axis}[\n ybar=2pt,\n width=7cm,\n enlargelimits=0.25,\n legend style={at={(0.5,-0.15)},\n anchor=north,legend columns=-1},\n ylabel={RMSE},\n symbolic x coords={0.0001,0.001,0.01,0.1,1,10,100},\n xtick=data,\n ticklabel style = {font=\\tiny},\n nodes near coords align={vertical},\n ]\n \\addplot coordinates {(0.0001,0.034) (0.001,0.026) (0.01,0.025) (0.1,0.021) (1,0.027) (10,0.031) (100,0.040)};\n \\legend{$\\lambda$}\n \\end{axis}\n\\end{tikzpicture}\n\\caption{RMSE on the Google Flu dataset w.r.t. $\\lambda$}\n\\label{fl1}\n\\end{figure}\n\nThe different forecasting problems and the corresponding datasets are described below. The dataset characteristics are provided in table \\ref{datasets}.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.9\\linewidth]{flucourbe}\n\\end{center}\n\\caption{RMSE on the Google Flu dataset at horizons $T+1$ to $T+13$}\n\\label{curve}\n\\end{figure}\n\n\\begin{itemize}\n\\item \\textbf{Disease spread forecasting:} The \\textbf{Google Flu} dataset contains, for 29 countries, about ten years of weekly estimates of influenza activity computed by aggregating Google search queries (see \\url{http:\/\/www.google.org\/flutrends}). We extract binary relations between the countries, depending on whether or not they share a border, as a prior $W$.\n\\item \\textbf{Global Health Observatory (GHO):} This dataset, made available by the Global Health Observatory (\\url{http:\/\/www.who.int\/en\/}), provides the number of deaths for several diseases. We picked 25 diseases corresponding to \\textbf{25 different datasets}, each one composed of 91 time series corresponding to 91 countries (see table \\ref{datasets}). Results are averaged over all the datasets. As for Google Flu, we extract binary relations $W$ based on the borders between countries.\n\\item \\textbf{Geo-Spatial datasets:} The goal is to predict the evolution of geophysical phenomena measured on the surface of the Earth. \\\\The \\textbf{Wind} dataset (\\url{www.ncdc.noaa.gov\/}) consists of hourly summaries of meteorological data. We predict wind speed and orientation at approximately 500 land stations across the U.S. In this dataset, the relations correspond to a thresholded spatial proximity between the series. Given a selected threshold value $d$, two sources are connected ($w_{i,j}=1$) if their distance is below $d$ and not connected ($w_{i,j}=0$) otherwise. \\\\The \\textbf{Pacific Sea Temperature (PST)} dataset contains monthly Sea Surface Temperature (SST) on the Pacific, gridded at a 2 by 2 degrees resolution (corresponding to 2520 spatial locations), for 399 consecutive months from January 1970 through March 2003. The goal is to predict future temperatures at the different spatial locations. Data were obtained from the Climate Data Library at Columbia University (\\url{http:\/\/iridl.ldeo.columbia.edu\/}). Since the series are organized on a 2D grid, we extract 8 different relations: one for each cardinal direction (north, north-west, west, etc.). For instance, the relation $north$ is associated with a binary adjacency matrix $W^{(north)}$ such that $W^{(north)}_{i,j}$ is set to $1$ if and only if source $j$ is located 2 degrees north of source $i$ (the pixel just above on the satellite image); a minimal construction of these relation matrices is sketched in section \\ref{xp-forecast}.\n\\item \\textbf{Car Traffic Forecasting:} The goal is to predict car traffic on a network of streets or roads. 
We use the \\textbf{Beijing dataset} provided in \\cite{yuan2011driving,yuan2010t}, which consists of GPS trajectories for $\\sim 10500$ taxis during a week, for a total of 17 million points corresponding to road segments in Beijing. From this dataset, we extracted the traffic volume aggregated over 15-minute windows for 5,000 road segments. The objective is to predict the traffic at each segment. We connect two sources if they correspond to road segments with a shared crossroad.\n\\end{itemize}\n\nFor all the datasets but PST (i.e. Google Flu, GHO, Wind and Beijing), we defined the relational structure using a simple adjacency matrix $W$. Based on this matrix, we defined $K$ different relations by introducing its powers: $W^{(1)}=W$, $W^{(2)}=W \\times W$, etc. (this construction is also sketched below). In our setting, $K$ took values from 1 to 3 and the optimal value for each dataset was selected during the validation process. \n\n\n\\subsection{Forecasting Results}\n\\label{xp-forecast}\n\n\\begin{figure}[t]\n\\begin{center}\n\\begin{tabular}{c}\n\\includegraphics[width=0.75\\linewidth]{correl_all} \\\\\n\\includegraphics[width=0.75\\linewidth]{correl_l1strong} \\\\\n\\includegraphics[width=0.75\\linewidth]{correl_l1_verystrong.png} \n\\end{tabular}\n\\end{center}\n\\caption{Illustrations of the correlations $\\Gamma$ discovered by the STNN-D model, with $\\gamma$ in $\\{0.01,0.1,1\\}$ (from top to bottom).}\n\\label{sparsity}\n\\end{figure}\n\n\\begin{figure}[t]\n\\includegraphics[width=\\linewidth]{pst_static_refined_1.png}\n\\caption{Spatial correlations extracted by the STNN-R model on the PST dataset. The color of each pixel corresponds to the principal relation extracted by the model.}\n\\label{alpah_refined_pst}\n\\end{figure}\n\n\n\nA quantitative evaluation of the different models and baselines on the different datasets is provided in table \\ref{table1}. All the results are average prediction errors over the $T+1$ to $T+5$ predictions. The score function used is the Root Mean Squared Error (RMSE). A first observation is that the STNN and STNN-R models, which make use of prior spatial information, significantly outperform all the other models on all the datasets. For example, on the challenging PST dataset, our models improve the performance of the RNN-GRU baseline by $23\\%$. The improvement is larger when the number of series is high (geo-spatial and traffic datasets) than when it is small (disease datasets). In these experiments, STNN-D is on par with RNN-GRU; neither model uses prior information on spatial proximity. STNN-D relies on a more compact formulation than RNN-GRU for expressing the mutual dependencies between the series, but the results are comparable. The vectorial AR logically improves on the univariate AR (not shown here), and the non-linear VAR-MLP improves on a linear VAR.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.4\\textwidth]{pst_dyn_angle_icdm.png}\n\\caption{Dynamic spatio-temporal relations extracted from the PST dataset on the training set. The color represents the actual sea surface temperature. The arrows represent the extracted spatial relations that evolve through time.}\n\\label{arrows}\n\\end{figure}\n\nWe also provide in table \\ref{fl4} the score for each of the 25 diseases of the GHO dataset. STNN-R obtains the best performance among the STNN variants. It outperforms the other methods on 20 of the 25 datasets, and is very close to the RNN-GRU model on the 5 remaining diseases, for which RNN-GRU performs best. This shows that our model is able to benefit from the neighbour information in the proximity graph.
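\n\nAs a complement to the dataset descriptions of section \\ref{datasets}, the following sketch illustrates how the relation matrices used above could be built: cardinal-direction adjacencies for a gridded dataset such as PST, and powers of the adjacency matrix $W$ for the graph-based datasets. The function names and the convention that row $0$ of the grid is the northernmost one are illustrative assumptions, not the exact code used in the experiments.\n\\begin{verbatim}\nimport numpy as np\n\n# Binary adjacency matrices for the 8 directions of a height x width grid\n# (PST-like setting): W['north'][i, j] = 1 iff series j lies just north\n# of series i.\ndef grid_direction_relations(height, width):\n    offsets = {'north': (-1, 0), 'south': (1, 0),\n               'west': (0, -1), 'east': (0, 1),\n               'north-west': (-1, -1), 'north-east': (-1, 1),\n               'south-west': (1, -1), 'south-east': (1, 1)}\n    n = height * width\n    W = {name: np.zeros((n, n)) for name in offsets}\n    for i in range(height):\n        for j in range(width):\n            for name, (di, dj) in offsets.items():\n                k, l = i + di, j + dj\n                if 0 <= k < height and 0 <= l < width:\n                    W[name][i * width + j, k * width + l] = 1.0\n    return W\n\n# Relations W^(1), ..., W^(K) defined as successive powers of W\n# (graph-based datasets such as Google Flu, GHO, Wind and Beijing).\ndef power_relations(W, K):\n    relations, P = [], np.eye(W.shape[0])\n    for _ in range(K):\n        P = P @ W\n        relations.append(P.copy())\n    return relations\n\\end{verbatim}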
\n\nFigures \\ref{meteo} and \\ref{oceano} illustrate the predictions of \\textit{STNN-R} and \\textit{RNN-GRU} on the meteorology and oceanography datasets, along with the ground truth. On these datasets, STNN qualitatively performs much better than RNNs by using explicit spatial information: STNN is able to predict fine details corresponding to local interactions, while RNNs produce a much noisier prediction. These illustrations are representative of the general behavior of the two models.\n\nWe also provide the performance of the models at different prediction horizons $T+1, T+2, \\dots, T+13$ in figure \\ref{curve} for the Google Flu dataset. The results show that STNN performs better than the other approaches for all prediction horizons and is thus able to better capture longer-term dependencies.\n\nFigure \\ref{fl1} illustrates the RMSE of the STNN-R model when predicting at $T+1$ on the Google Flu dataset for different values of $\\lambda$. One can see that the best performance is obtained for an intermediate value of $\\lambda$: low values, corresponding to weak temporal constraints, do not allow the model to learn the dynamics of the series, while high values degrade the performance of STNN. \n\n\\subsection{Discovering the Spatial Correlations}\n\\label{xp-rel}\n\nIn this subsection, we illustrate the ability of STNN to discover relevant spatial correlations on different datasets. Figure \\ref{sparsity} shows the values of $\\Gamma$ obtained by STNN-D on the PST dataset, where no structure (i.e. no adjacency matrix $W$) is provided to the model. Each pixel corresponds to a particular time series, and the figure shows the correlation $\\Gamma_{i,j}$ discovered between each series $j$ and a series $i$ roughly located at the center of the picture. The darker a pixel is, the higher the absolute value of $\\Gamma_{i,j}$ (black pixels correspond to land and not sea). Different levels of sparsity are illustrated, from low (top) to high (bottom). Even though the model does not have any knowledge about the spatial organization of the series (no $W$ matrix provided), it is able to re-discover this spatial organization by detecting strong correlations between close series and low ones between distant series. \n\nFigure \\ref{alpah_refined_pst} illustrates the correlations discovered on the PST dataset when the prior structure is used. We used as priors 8 types of relations corresponding to the 8 cardinal directions (South, South-West, etc.). In this case, STNN-R learns weights (i.e. $\\Gamma^{(r)}$) for each relation based on the prior structure. For each series, we plot the direction with the highest learned weight, illustrated by a specific color in the figure: for instance, a dark blue pixel indicates that the strongest spatial correlation learned for the corresponding series is the North-West direction. The model automatically extracts relations corresponding to temperature propagation directions in the Pacific, providing relevant information about the spatio-temporal dynamics of the system.
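\n\nThe dominant direction shown in figure \\ref{alpah_refined_pst} can be obtained from the learned parameters along the lines of the following sketch, which assumes that each relation is summarized by one scalar score per series (here the $l_1$ norm of the corresponding row of $\\Gamma^{(r)}$); this is an illustrative choice rather than a description of the exact plotting procedure.\n\\begin{verbatim}\nimport numpy as np\n\n# Gamma: array of shape (R, n, n) with the learned relation weights,\n# directions: list of the R direction names, in the same order.\ndef dominant_direction(Gamma, directions):\n    scores = np.abs(Gamma).sum(axis=2)    # one score per relation and series\n    best = scores.argmax(axis=0)          # index of the strongest relation\n    return [directions[r] for r in best]  # dominant direction per series\n\\end{verbatim}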
\n\nThe model can also be adapted to relations that evolve over time. Figure \\ref{arrows} represents the temporal evolution of the spatial relations on the PST dataset. For this experiment, we have slightly changed the STNN-R model by making the $\\Gamma^{(r)}$ time dependent according to:\n\\begin{equation}\n\\Gamma^{(r)}_{t,j,i} = f_r(Z_t^i)\n\\end{equation}\nWith this modified model, the spatial relation weights depend on the current latent state of the corresponding series and may thus evolve over time. In this experiment, $f_r$ is a logistic function. In figure \\ref{arrows}, the different plots correspond to successive time steps. The colors represent the actual sea surface temperatures, and the arrows indicate the direction of the strongest relation weight $\\Gamma^{(r)}_{t}$ among the eight possible directions (N, NE, etc.). One can see that the model captures coherent dynamic spatial correlations, such as global current directions or rotating motions that gradually evolve with time.\n\n\\section{Conclusion}\nWe proposed a new latent model for multivariate spatio-temporal time series forecasting problems. In this model, the dynamics are captured in a latent space and the predictions are produced by a decoder mechanism. Extensive experiments on datasets representative of different domains show that this model is able to capture spatial and temporal dynamics, and that it performs better than state-of-the-art competing models. The model is amenable to different variants concerning the formulation of the spatial and temporal dependencies between the sources.\n\nFor the applications, we have concentrated on forecasting (time-based prediction). The same model could be used for interpolation (space-based prediction, or kriging) or for data imputation when dealing with time series with missing values.\n\n\\section{Acknowledgments}\nThis work was supported by the Locust project (ANR-15-CE23-0027-01), funded by the Agence Nationale de la Recherche.\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}