diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzhnkh" "b/data_all_eng_slimpj/shuffled/split2/finalzzhnkh"
new file mode 100644
--- /dev/null
+++ "b/data_all_eng_slimpj/shuffled/split2/finalzzhnkh"
@@ -0,0 +1,5 @@
+{"text":"\\section{Introduction}
The nonlinear Schr\\\"odinger (NLS) equation, as one of the universal equations that describe the evolution of slowly varying packets of quasi-monochromatic waves in weakly nonlinear dispersive media, has been very successful in many applications such as nonlinear optics and water waves \\cite{Kodamabook,Agrawalbook,Boydbook,Yarivbook}. The NLS equation is integrable and can be solved by the inverse scattering transform \\cite{Zakharov}.

However, in the regime of ultra-short pulses, where the width of the optical pulse is on the order of a femtosecond ($10^{-15}$ s), the NLS equation becomes less accurate \\cite{Rothenberg}. The description of such ultra-short processes requires a modification of the standard slowly varying envelope models based on the NLS equation. There are usually two approaches to meet this requirement in the literature. The first is to add several higher-order dispersive terms to obtain a higher-order NLS equation \\cite{Agrawalbook}. The second is to construct a suitable fit to the frequency-dependent dielectric constant $\\epsilon(\\omega)$ in the desired spectral range. Several models have been proposed by this approach, including the short-pulse (SP) equation \\cite{SPE_Org,KimPRL,KimPRA,Bandelow}.

Recently, Sch\\\"{a}fer and Wayne derived the so-called short pulse (SP) equation \\cite{SPE_Org}
\\begin{equation}
u_{xt}=u+ \\frac 16 \\left(u^{3}\\right)_{xx}\\,, \\label{SPE}
\\end{equation}
to describe the propagation of ultra-short optical pulses in nonlinear media. Here, $u=u(x,t)$ is a real-valued function representing the magnitude of the electric field, and the subscripts $t$ and $x$ denote partial differentiation. Apart from the context of nonlinear optics, the SP equation has also been derived as an integrable differential equation associated with pseudospherical surfaces \\cite{Robelo}. The SP equation has been shown to be completely integrable \\cite{Robelo,Beals,Sakovich,Brunelli1,Brunelli2}. Periodic and soliton solutions of the SP equation were found in \\cite{Sakovich2,Kuetche,Parkes}. The connection between the SP equation and the sine-Gordon equation through the hodograph transformation was clarified, and the $N$-soliton solutions, including multi-loop and multi-breather ones, were then given in \\cite{Matsuno_SPE,Matsuno_SPEreview} by using the Hirota bilinear method \\cite{Hirota}. The integrable discretization of the SP equation was studied in \\cite{SPE_discrete1}; the geometric interpretation of the SP equation, as well as its integrable discretization, was given in \\cite{SPE_discrete2}. Higher-order corrections to the SP equation were studied very recently in \\cite{Schafer13}.

Similar to the case of the NLS equation \\cite{Manakov1974}, it is necessary to consider two-component or multi-component generalizations of the SP equation in order to describe the effects of polarization or anisotropy. As a matter of fact, several integrable coupled short pulse equations have been proposed in the literature \\cite{PKB_CSPE,Sakovich3,Hoissen_CSPE,Matsuno_CSPE,Feng_CSPE,ZengYao_CSPE}. 
Most recently, the bi-Hamiltonian structures for the above two-component SP equations were obtained by Brunelli \\cite{Brunelli_CSPE}.

In the present paper, we propose and study a complex short pulse (CSP) equation
\\begin{equation}
q_{xt}+q+ \\frac{1}{2} \\left(|q|^{2}q_x \\right)_{x}=0\\,, \\label{CSP}
\\end{equation}
and its two-component generalization
\\begin{eqnarray}
&& q_{1,xt}+q_1+\\frac{1}{2} \\left((|q_1|^2+|q_2|^2)q_{1,x}\\right)_{x}=0\\,, \\label{CCSP1} \\\\
&& q_{2,xt}+q_2+\\frac{1}{2} \\left((|q_1|^2+|q_2|^2)q_{2,x}\\right)_{x}=0\\,. \\label{CCSP2}
\\end{eqnarray}
As will be revealed in the present paper, both the CSP equation and its two-component generalization are integrable, which is guaranteed by the existence of Lax pairs and an infinite number of conservation laws. They admit $N$-soliton solutions which can be constructed via Hirota's bilinear method.

The outline of the present paper is as follows. In section 2, we derive the CSP equation and the coupled complex short pulse (CCSP) equation from the physical context. In section 3, by providing the Lax pairs, the integrability of the two proposed equations is confirmed and, further, the conservation laws, both local and nonlocal ones, are investigated. Then $N$-soliton solutions to both the CSP and CCSP equations are constructed in terms of pfaffians by Hirota's bilinear method in section 4. In section 5, soliton interactions for the coupled complex short pulse equation are investigated in detail; they show rich phenomena similar to those of the coupled nonlinear Schr\\\"odinger equation. In particular, the solitons may undergo either elastic or inelastic collisions depending on the initial conditions. For inelastic collisions, there is an energy exchange between the solitons, which can allow the generation or vanishing of a soliton. The dynamics is thus much richer compared with the single-component case. The paper is concluded by comments and remarks in section 6.

\\section{The derivation of the complex short pulse and coupled complex short pulse equations}

In this section, following the procedure in \\cite{Agrawalbook,SPE_Org}, we derive the complex short pulse equation (\\ref{CSP}) and its two-component generalization, which govern the propagation of ultra-short pulse packets along optical fibers.

\\subsection{The complex short pulse equation}

We start with the wave equation for the electric field
\\begin{equation} \\label{E-wave-equation}
\\nabla^2 \\mathbf{E} -\\frac{1}{c^2} \\mathbf{E}_{tt} = \\mu_0 \\mathbf{P}_{tt}\\,,
\\end{equation}
which originates from the Maxwell equations. Here $\\mathbf{E} (\\mathbf{r},t)$ and $\\mathbf{P} (\\mathbf{r},t)$ represent the electric field and the induced polarization, respectively, $\\mu_0$ is the vacuum permeability, and $c$ is the speed of light in vacuum. 
If we assume the local medium response and only\nthe third-order nonlinear effects governed by $\\chi^{(3)}$, the induced\npolarization consists of two parts, $\\mathbf{P} ( \\mathbf{r},t)=\\mathbf{P\n_{L} ( \\mathbf{r},t)+\\mathbf{P}_{NL} ( \\mathbf{r},t)$, where the linear part\n\\begin{equation} \\label{P_L}\n\\mathbf{P}_{L} ( \\mathbf{r},t)=\\epsilon_0 \\int_{-\\infty}^{\\infty} \\chi^{(1)}\n(t-t^{\\prime })\\cdot \\mathbf{E} ( \\mathbf{r},t^{\\prime })\\,dt^{\\prime }\\,,\n\\end{equation}\nand the nonlinear part\n\\begin{equation} \\label{P_NL}\n\\mathbf{P}_{NL}( \\mathbf{r},t)=\\epsilon_0 \\int_{-\\infty}^{\\infty} \\chi^{(3)}\n(t-t_1,t-t_2,t-t_3)\\times \\mathbf{E} ( \\mathbf{r},t_1) \\mathbf{E} ( \\mathbf{\n},t_2) \\mathbf{E} ( \\mathbf{r},t_3)\\,dt_1dt_2dt_3\\,.\n\\end{equation}\nHere $\\epsilon_0$ is the vacuum permittivity and $\\chi^{(j)}$ is the $j\nth-order susceptibility. Since the nonlinear effects are relatively small in\nsilica fibers, $\\mathbf{P}_{NL}$ can be treated as a small perturbation.\nTherefore, we first consider (\\ref{E-wave-equation}) with $\\mathbf{P}_{NL}=0\n. Furthermore, we restrict ourselves to the case that the optical pulse\nmaintains its polarization along the optical fiber, and the transverse\ndiffraction term $\\Delta_{\\perp} \\mathbf{E}$ can be neglected. In this case,\nthe electric field can be considered to be one-dimensional and expressed as\n\\begin{equation} \\label{E-field}\n\\mathbf{E} = \\frac 12 \\mathbf{e_1} \\left(E(z,t)+c.c. \\right)\\,,\n\\end{equation}\nwhere $\\mathbf{e_1}$ is a unit vector in the direction of the polarization,\n$E(z,t)$ is the complex-valued function, and $c.c.$ stands for the complex\nconjugate. Conducting a Fourier transform on (\\ref{E-wave-equation}) leads\nto the Helmholtz equation\n\\begin{equation} \\label{Waveequation-Fourier}\n\\tilde {E}_{zz} (z,\\omega) + \\epsilon(\\omega) \\frac{\\omega^2}{c^2} \\tilde {E}\n(z,\\omega)=0\\,,\n\\end{equation}\nwhere $\\tilde {E} (z,\\omega)$ is the Fourier transform of $E(z,t)$ defined\nas\n\\begin{equation} \\label{E_Fourier}\n\\tilde E (z,\\omega)=\\int_{-\\infty}^{\\infty} {\\ E} (z,t) e^{\\mathrm{i} \\omega\nt}\\,dt\\,,\n\\end{equation}\n$\\epsilon(\\omega)$ is called the frequency-dependent dielectric constant\ndefined as\n\\begin{equation} \\label{Dielectric}\n\\epsilon(\\omega)=1+ \\tilde \\chi^{(1)} (\\omega)\\,,\n\\end{equation}\nwhere $\\tilde \\chi^{(1)} (\\omega)$ is the Fourier transform of $\\chi^{(1)}\n(t)$\n\\begin{equation} \\label{Chi_Fourier}\n\\tilde \\chi^{(1)} (\\omega)=\\int_{-\\infty}^{\\infty} \\chi^{(1)} (t) e^{\\mathrm\ni} \\omega t}\\,dt\\,.\n\\end{equation}\nNow we proceed to the consideration of the nonlinear effect. 
Assuming the\nnonlinear response is instantaneous so that $P_{NL}$ is given by \nP_{NL}(z,t)= \\epsilon_0 \\epsilon_{NL} E(z,t)$ \\cite{Agrawalbook} where the\nnonlinear contribution to the dielectric constant is defined as\n\\begin{equation} \\label{Epsnl}\n\\epsilon_{NL}= \\frac 34 \\chi^{(3)}_{xxxx} |E(z,t)|^2\\,.\n\\end{equation}\nIn this case, the Helmholtz equation (\\ref{Waveequation-Fourier}) can be\nmodified as\n\\begin{equation} \\label{Helmholtz}\n\\tilde {E}_{zz} (z,\\omega) + \\tilde \\epsilon(\\omega) \\frac{\\omega^2}{c^2}\n\\tilde {E} (z,\\omega)=0\\,,\n\\end{equation}\nwhere\n\\begin{equation} \\label{Modified_Dielectric}\n\\tilde \\epsilon(\\omega)=1+ \\tilde \\chi^{(1)} (\\omega)+ \\epsilon_{NL}\\,.\n\\end{equation}\nAs pointed out in \\cite{SPE_Org,Boydbook,KimPRL}, the Fourier transform \n\\tilde \\chi^{(1)}$ can be well approximated by the relation $\\tilde\n\\chi^{(1)}=\\tilde \\chi_0^{(1)} - \\tilde \\chi_2^{(1)} \\lambda^2$ if we\nconsider the propagation of optical pulse with the wavelength between 1600\nnm and 3000 nm. It then follows that the linear equation (\\re\n{Waveequation-Fourier}) written in Fourier transformed form becomes\n\\begin{equation} \\label{Wave_Fourier2}\n\\tilde {E}_{zz} + \\frac{1+\\tilde \\chi_0^{(1)}}{c^2} \\omega^2 \\tilde {E} -\n(2\\pi)^2 \\tilde \\chi_2^{(1)} \\tilde {E} + \\epsilon_{NL} \\frac{\\omega^2}{c^2}\n\\tilde {E} =0\\,.\n\\end{equation}\n\nApplying the inverse Fourier transform to (\\ref{Wave_Fourier2}) yields a\nsingle nonlinear wave equation\n\\begin{equation} \\label{Wave_nonlinear1}\nE_{zz} - \\frac{1}{c_1^2} E_{tt} = \\frac{1}{c_2^2} E +\\frac 34\n\\chi^{(3)}_{xxxx} \\left(|E|^2 E\\right)_{tt} =0\\,.\n\\end{equation}\n\nSimilar to \\cite{SPE_Org}, we focus on only a right-moving wave packet and\nmake a multiple scales \\textrm{ansatz}\n\\begin{equation} \\label{E_ansatz}\nE(z,t)=\\epsilon E_0(\\phi, z_1, z_2, \\cdots)+ \\epsilon^2 E_1(\\phi, z_1, z_2,\n\\cdots) + \\cdots\\,,\n\\end{equation}\nwhere $\\epsilon$ is a small parameter, $\\phi$ and $z_n$ are the scaled\nvariables defined by\n\\begin{equation} \\label{Multiple_ansatz}\n\\phi= \\frac{t-\\frac{x}{c_1}}{\\epsilon}, \\quad z_n=\\epsilon^n z\\,.\n\\end{equation}\nSubstituting (\\ref{E_ansatz}) with (\\ref{Multiple_ansatz}) into (\\re\n{Wave_nonlinear1}), we obtain the following partial differential equation\nfor $E_0$ at the order $O(\\epsilon)$:\n\n\\begin{equation} \\label{Wave_nonlinear2}\n- \\frac{2}{c_1} \\frac{\\partial^2 E_0}{\\partial{\\phi}\\partial{z_1}} = \\frac{\n}{c_2^2} E_0 +\\frac 34 \\chi^{(3)}_{xxxx} \\frac{\\partial}{\\partial{\\phi}}\n\\left(|E_0|^2 \\frac{\\partial E_0}{\\partial {\\phi}} \\right)\\,.\n\\end{equation}\n\n\nFinally, by a scale transformation\n\\begin{equation} \\label{Scaling}\nx= \\frac{c_1}{2} \\phi, \\quad t={c_2} z_1, \\quad q = \\frac{c_1\\sqrt\n6c_2\\chi^{(3)}_{xxxx}}}{4} E_0\\,,\n\\end{equation}\nwe arrive at the normalized form of the complex short pulse equation (\\re\n{CSP}).\n\n\\subsection{Coupled complex short pulse equation}\n\nIn the previous subsection, a major simplification made in the derivation of\nthe complex short pulse equation is to assume that the polarization is\npreserved during its propagating inside an optical fiber. However, this is\nnot really the case in practice. For birefringent fibers, two orthogonally\npolarized modes have to be considered. 
Therefore, similar to the extension\nof coupled nonlinear Schr{\\\"o}dinger equation from the NLS equation, an\nextension to a two-component version of the complex short pulse equation \n\\ref{CSP}) is needed to describe the propagation of ultra-short pulse in\nbirefringent fibers. In fact, several generalizations have been proposed for\nthe short pulse equation \\cit\n{PKB_CSPE,Sakovich3,Hoissen_CSPE,Matsuno_CSPE,Feng_CSPE,ZengYao_CSPE}.\nParticularly, by taking into account the effects of anisotropy and\npolarization, Pietrzyk \\textit{et. al.} have derived a general two-component\nshort-pulse equation from the physical context \\cite{PKB_CSPE}. We follow\nthe approach by Pietrzyk \\textit{et. al.} to derive a two-component complex\nshort pulse equation. However, as shown in subsequent section, the\ntwo-component complex short pulse equation admits multi-soliton solutions\nwhich reveals richer dynamics in soliton interactions in compared with the\nreal SP equation.\n\nWe first consider the linear birefringent fiber such that the electric field\nwith an arbitrarily polarized optical fiber can be expressed as\n\\begin{equation} \\label{E-field2}\n\\mathbf{E} = \\frac 12 \\left( \\mathbf{e_1} E_1(z,t)+ \\mathbf{e_2} E_2(z,t)\n\\right)+ c.c.\\,,\n\\end{equation}\nwhere $\\mathbf{e_1}$, $\\mathbf{e_2}$ are two unit vectors along positive $x\n- and $y$-direction in the transverse plane perpendicular to the optical\nfiber, respectively, $E_1$ and $E_2$ are the complex amplitudes of the\npolarization components correspondingly. Without the presence of nonlinear\npolarization ($P_{NL}=0$) and the transverse diffraction, the Fourier\ntransform converts (\\ref{E-wave-equation}) into a pair of Helmholtz\nequations\n\\begin{equation} \\label{CCSPE_Helmholtz1}\n\\tilde {E}_{1,zz} (z,\\omega) + \\epsilon(\\omega) \\frac{\\omega^2}{c^2} \\tilde \nE_1} (z,\\omega)=0\\,,\n\\end{equation}\n\n\\begin{equation} \\label{CCSPE_Helmholtz2}\n\\tilde {E}_{2,zz} (z,\\omega) + \\epsilon(\\omega) \\frac{\\omega^2}{c^2} \\tilde \nE_2} (z,\\omega)=0\\,.\n\\end{equation}\nSame as the scalar case, the frequency-dependent dielectric constant \n\\epsilon(\\omega)=1+ \\tilde \\chi^{(1)} (\\omega)$, where $\\tilde \\chi^{(1)}$\ncan be well approximated by the relation $\\tilde \\chi^{(1)}=\\tilde\n\\chi_0^{(1)} - \\tilde \\chi_2^{(1)} \\lambda^2$ for the propagation of optical\npulse with the wavelength between 1600 nm and 3000 nm.\n\nAs indicated in \\cite{Agrawalbook}, the nonlinear part of the induced\npolarization $\\mathbf{P}_{NL}$ can be written as\n\\begin{equation}\n\\mathbf{P}_{NL}=\\frac{1}{2}\\left( \\mathbf{e_{1}}P_{1}(z,t)+\\mathbf{e_{2}\nP_{2}(z,t)\\right) +c.c.\\,, \\label{E-field22}\n\\end{equation\nwhere\n\\begin{equation}\nP_{1}=\\frac{3\\epsilon _{0}}{4}\\chi _{xxxx}^{(3)}\\left[ \\left( |E_{1}|^{2}\n\\frac{2}{3}|E_{2}|^{2}\\right) E_{1}+\\frac{1}{3}(E_{1}^{\\ast }E_{2})E_{2\n\\right] \\,, \\label{Nonlinear_Polarization1}\n\\end{equation\n\\begin{equation}\nP_{2}=\\frac{3\\epsilon _{0}}{4}\\chi _{xxxx}^{(3)}\\left[ \\left( |E_{2}|^{2}\n\\frac{2}{3}|E_{1}|^{2}\\right) E_{2}+\\frac{1}{3}(E_{2}^{\\ast }E_{1})E_{1\n\\right] \\,. \\label{Nonlinear_Polarization2}\n\\end{equation\nThe last term in Eqs. (\\ref{Nonlinear_Polarization1}) and (\\re\n{Nonlinear_Polarization2}) leads to the degenerate four-wave mixing. In\nhighly birefringent fibers, the four-wave-mixing term can often be\nneglected. 
In this case, we arrive at a coupled nonlinear wave equation\n\\begin{equation} \\label{Coupled_equations1}\nE_{1,zz}-\\frac{1}{c_{1}^{2}}E_{1,tt}=\\frac{1}{c_{2}^{2}}E_{1}+\\frac{3}{4\n\\chi _{xxxx}^{(3)}\\left[ \\left( |E_{1}|^{2}+\\frac{2}{3}|E_{2}|^{2}\\right)\nE_{1}\\right] _{tt}\\,,\n\\end{equation}\n\n\\begin{equation} \\label{Coupled_equations2}\nE_{2,zz}-\\frac{1}{c_{1}^{2}}E_{2,tt}=\\frac{1}{c_{2}^{2}}E_{2}+\\frac{3}{4\n\\chi _{xxxx}^{(3)}\\left[ \\left( |E_{2}|^{2}+\\frac{2}{3}|E_{1}|^{2}\\right)\nE_{2}\\right] _{tt}\\,.\n\\end{equation}\n\nSimilar to the scalar case, by a multiple scales expansion and an\nappropriate scaling transformation, a couple complex short pulse equation\ncan be obtained from (\\ref{Coupled_equations1})--(\\ref{Coupled_equations2})\n\\begin{equation} \\label{CCSP_linear1}\nq_{1,xt}+q_{1}+\\frac{1}{2} \\left((|q_{1}|^{2}+\\frac{2}{3}|q_{2}|^{2})q_{1,x\n\\right) _{x}=0\\,,\n\\end{equation}\n\n\\begin{equation} \\label{CCSP_linear2}\nq_{2,xt}+q_{2}+\\frac{1}{2} \\left((|q_{2}|^{2}+\\frac{2}{3}|q_{1}|^{2})q_{2,x\n\\right) _{x}=0\\,.\n\\end{equation}\n\nMore generally, we can consider the coupled short pulse equation for\nelliptically birefringent fibers. In this case, the electric field can be\nwritten as\n\\begin{equation}\n\\mathbf{E}=\\frac{1}{2}\\left( \\mathbf{e_{x}}E_{x}(z,t)+\\mathbf{e_{y}\nE_{y}(z,t)\\right) +c.c.\\,, \\label{E-field3}\n\\end{equation\nwhere $\\mathbf{e_{x}}$ and $\\mathbf{e_{y}}$ are orthonormal polarization\neigenvectors\n\\begin{equation}\n\\mathbf{e_{x}}=\\frac{\\mathbf{e_{1}}+ir\\mathbf{e_{2}}}{\\sqrt{1+r^{2}}},\\quad\n\\mathbf{e_{y}}=\\frac{r\\mathbf{e_{1}}-i\\mathbf{e_{2}}}{\\sqrt{1+r^{2}}}\\,.\n\\label{Elliptical-unit}\n\\end{equation\nThe parameter $r$ represents the ellipticity. It is common to introduce the\nellipticity angle $\\theta $ as $r=\\tan (\\theta \/2)$. The case $\\theta =0$\nand $\\pi \/2$ correspond to linearly and circularly birefringent fibers,\nrespectively.\n\nFollowing a procedure similar to the case of linearly birefringent fibers,\none can drive the normalized form for the coupled complex short pulse\nequation\n\\begin{equation} \\label{CCSP_general1}\nq_{1,xt}+ q_1+ \\frac 12 \\left((|q_1|^2+ B|q_2|^2) q_{1,x} \\right)_x = 0\\,,\n\\end{equation}\n\\begin{equation} \\label{CCSP_general2}\nq_{2,xt}+ q_2+ \\frac 12 \\left((|q_2|^2+ B|q_1|^2) q_{2,x} \\right)_x = 0\\,.\n\\end{equation}\nwhere the parameter $B$ is related to the ellipticity angle $\\theta$ as\n\\begin{equation} \\label{B_theta}\nB=\\frac{2+2\\sin^2 \\theta}{2+\\cos^2 \\theta}\\,.\n\\end{equation}\nFor a linearly birefringent fiber ($\\theta=0$), $B=\\frac 23$, and Eqs. (\\re\n{CCSP_general1})-- (\\ref{CCSP_general2}) reduces to Eqs. (\\ref{CCSP_linear1\n)-- (\\ref{CCSP_linear2}). For a circularly birefringent fiber ($\\theta=\\pi\/2\n), $B=2$. In general, the coupling parameter $B$ depends on the ellipticity\nangle $\\theta$ and can vary from $\\frac 23$ to $2$ for values of $\\theta$ in\nthe range from $0$ to $\\pi\/2$. Note that $B=1$ when $\\theta \\approx 35^\\circ\n. 
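For reference, the value $\\theta \\approx 35^\\circ$ follows directly from (\\ref{B_theta}): setting $B=1$ gives $2\\sin^{2}\\theta=\\cos^{2}\\theta$, i.e. $\\tan\\theta=1\/\\sqrt{2}$, so that $\\theta=\\arctan(1\/\\sqrt{2})\\approx 35.26^{\\circ}$. A minimal Python check (ours, not part of the derivation) evaluating (\\ref{B_theta}) at the special angles reads:
\\begin{verbatim}
import numpy as np

def B(theta):
    # coupling parameter B as a function of the ellipticity angle theta
    return (2 + 2*np.sin(theta)**2) / (2 + np.cos(theta)**2)

print(B(0.0))                         # 2/3 : linearly birefringent fiber
print(B(np.pi/2))                     # 2   : circularly birefringent fiber
theta1 = np.arctan(1/np.sqrt(2))      # angle at which B = 1
print(np.degrees(theta1), B(theta1))  # ~35.26 degrees, 1.0
\\end{verbatim}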
As discussed in the subsequent section, this case is of particular\ninterest because the coupled system is integrable and admits $N$-soliton\nsolution.\n\\section{Lax pairs and conservation laws for the complex and\ncoupled complex short pulse equations}\n\n\\subsection{Lax pairs and integrability}\nIn \\cite{PKB_CSPE}, a matrix generalization for the SP equation is given\nbased on zero-curvature representation, from which the Lax pairs for several\nintegrable two-component SP equations are explicitly provided. In this\nsubsection, we will show the integrability of the complex short pulse and\ncoupled complex short pulse equations by finding their Lax pairs constructed\nfrom another matrix generalization of the SP equation.\n\nThe Lax pair for the complex short pulse equation (\\ref{CSP}) can be\nexpressed as\n\\begin{equation} \\label{comSPE_Laxa}\n\\Psi_x=U \\Psi, \\quad \\Psi_t=V\\Psi\\,,\n\\end{equation}\nwith\n\\begin{equation} \\label{comSPE_Laxb}\n\\displaystyle U= \\lambda \\left\n\\begin{array}{cc}\n1 & q_x \\\\\nq^{*}_x & -\n\\end{array\n\\right), \\quad V= \\left\n\\begin{array}{cc}\n-\\frac {\\lambda}{2} |q|^2-\\frac{1}{4\\lambda} & -\\frac{\\lambda}{2\n|q|^2q_x+\\frac q2 \\\\\n-\\frac{\\lambda}{2}|q|^2q^{*}_x-\\frac {q^{*}}{2} & \\frac {\\lambda}{2}|q|^2+\\frac{1}{4\\lambda\n\\end{array\n\\right)\\,.\n\\end{equation}\nIt can be easily shown that the compatibility condition $U_t-V_x+[U,\\,V]=0$\ngives the complex short pulse equation (\\ref{CSP}).\n\nThe Lax pair for the coupled complex short pulse equation (\\ref{CCSP1})--\n\\ref{CCSP2}) is found to be of the form:\n\\begin{equation}\n\\Psi _{x}=U\\Psi ,\\quad \\Psi _{t}=V\\Psi \\,, \\label{ccomSPE_Laxa}\n\\end{equation\nwith\n\\begin{equation}\nU=\\lambda \\left(\n\\begin{array}{cc}\nI_{2} & Q_{x} \\\\\nR_{x} & -I_{2\n\\end{array\n\\right) ,\\quad V=\\left(\n\\begin{array}{cc}\n-\\frac{\\lambda }{2}QR-\\frac{1}{4\\lambda }I_{2} & -\\frac{\\lambda }{2}QRQ_{x}\n\\frac{1}{2}Q \\\\\n-\\frac{\\lambda }{2}RQR_{x}-\\frac{1}{2}R & \\frac{\\lambda }{2}QR+\\frac{1}\n4\\lambda }I_{2\n\\end{array\n\\right) \\,, \\label{ccomSPE_Laxb}\n\\end{equation\nwhere $I_{2}$ is a $2\\times 2$ identity matrix, $Q$, $R$ are $2\\times 2$\nmatrices defined as\n\\begin{equation}\nQ=\\left(\n\\begin{array}{cc}\nq_{1} & q_{2} \\\\\n-q_{2}^{\\ast } & q_{1}^{\\ast \n\\end{array\n\\right) ,\\quad R=\\left(\n\\begin{array}{cc}\nq_{1}^{\\ast } & -q_{2} \\\\\nq_{2}^{\\ast } & q_{1\n\\end{array\n\\right) \\,. \\label{ccomSPE_Laxc}\n\\end{equation\nNote that $R=Q^{\\dag }$, thus,\n\\begin{equation}\nQR=RQ=(|q_{1}|^{2}+|q_{2}|^{2})I_{2}\\,,\n\\end{equation\nthe compatibility condition $U_{t}-V_{x}+[U,\\,V]=0$ for (\\ref{ccomSPE_Laxa})\ngives the coupled complex short pulse equation (\\ref{CCSP1})--(\\ref{CCSP2}).\n\nAs a matter of fact, the coupled complex short pulse equation can be\ngeneralized into a multi-component, or a vector complex short pulse equation\n\\begin{equation}\nq_{i,xt}+q_{i}+\\frac{1}{2}\\left( |\\mathbf{q}|^{2}q_{i,x}\\right)\n_{x}=0\\,,\\quad i=1,\\cdots ,n, \\label{NCSPE}\n\\end{equation\nwhere $\\mathbf{q}=(q_1,q_2, \\cdots, q_n)$. The integrability of Eq. 
(\\re\n{NCSPE})\ncan be guaranteed by the Lax pair constructed in a similar way as in \\cit\n{TsuchidaJPSJ}.\n\\begin{equation}\n\\Psi _{x}=U\\Psi ,\\quad \\Psi _{t}=V\\Psi \\,, \\label{vcomSPE_Laxa}\n\\end{equation\nwith\n\\[\nU=\\lambda \\left(\n\\begin{array}{cc}\nI_{2^{n-1}} & Q_{x}^{(n)} \\\\\nR_{x}^{(n)} & -I_{2^{n-1}\n\\end{array\n\\right) ,\n\\\n\\[\nV=\\left(\n\\begin{array}{cc}\n-\\frac{1}{2}Q^{(n)}R^{(n)}-\\frac{1}{4\\lambda }I_{2^{n-1}} & -\\frac{\\lambda }{2\nQ^{(n)}R^{(n)}Q_{x}^{(n)}+\\frac{1}{2}Q^{(n)} \\\\\n-\\frac{\\lambda }{2}R^{(n)}Q^{(n)}R_{x}^{(n)}-\\frac{1}{2}R^{(n)} & \\frac{1}{2\nQ^{(n)}R^{(n)}+\\frac{1}{4\\lambda }I_{2^{n-1}\n\\end{array\n\\right) \\,,\n\\\nwhere $I_{2^{n-1}}$ is a $2^{n-1}\\times 2^{n-1}$ identity matrix, $Q^{(n)}$\nand $R^{(n)}$ are $2^{n-1}\\times 2^{n-1}$ matrices can be constructed\nrecursively as follows\n\\begin{equation}\nQ^{(1)}=q_{1},\\quad R^{(1)}=q_{1}^{\\ast }\\,, \\label{NCSPE_Laxb1}\n\\end{equation\n\\begin{equation}\nQ^{(n+1)}=\\left(\n\\begin{array}{cc}\nQ^{(n)} & q_{n+1}I_{2^{n-1}} \\\\\n-q_{n+1}^{\\ast }I_{2^{n-1}} & R^{(n)\n\\end{array\n\\right) \\,, \\label{NCSPE_Laxb2}\n\\end{equation}\n\n\\begin{equation} \\label{NCSPE_Laxb3}\nR^{(n+1)}= \\left\n\\begin{array}{cc}\nR^{(n)} & -q_{n+1} I_{2^{n-1}} \\\\\nq^*_{n+1}I_{2^{n-1}} & Q^{(n)\n\\end{array\n\\right) \\,.\n\\end{equation}\nBy the above construction, we have $R^{(n+1)}=(Q^{(n+1)})^\\dag$, and further\n\\begin{equation}\nQ^{(n)} R^{(n)}=R^{(n)}Q^{(n)}=\\sum_{i=1}^n|q_i|^2I_{2^{n-1}}\\,.\n\\end{equation}\nTherefore, the zero curvature condition $U_t-V_x+[U,\\,V]=0$ gives the vector\ncomplex coupled short pulse equation (\\ref{NCSPE}).\n\n\\subsection{Local and nonlocal conservation laws}\n\nFollowing a systematic method developed by in \\cit\n{TsuchidaJPSJ,WadatiPTP75,WadatiJPSJ79,Zimerman}, we construct conservation\nlaws for the vector complex short pulse equation, the conservation laws for\nthe complex and coupled short pulse equations can be treated as special\ncases for $n=1,2$, respectively. To this end, let us rewrite the Lax pair\nfor the vector complex short pulse equation as follows:\n\\begin{equation} \\label{Laxpair_vcspe1}\n\\left\n\\begin{array}{c}\n\\Psi_1 \\\\\n\\Psi_\n\\end{array}\n\\right)_x = \\left\n\\begin{array}{cc}\n\\lambda I & \\lambda Q_x \\\\\n\\lambda R_x & - \\lambda \n\\end{array\n\\right) \\left\n\\begin{array}{c}\n\\Psi_1 \\\\\n\\Psi_\n\\end{array}\n\\right)\\,,\n\\end{equation}\n\n\\begin{equation} \\label{Laxpair_vcspe2}\n\\left\n\\begin{array}{c}\n\\Psi_1 \\\\\n\\Psi_\n\\end{array}\n\\right)_t = \\left(\\begin{array}{cc}\n-\\frac{\\lambda }{2}QR-\\frac{1}{4\\lambda }I_{2} & -\\frac{\\lambda }{2}QRQ_{x}\n\\frac{1}{2}Q \\\\\n-\\frac{\\lambda }{2}RQR_{x}-\\frac{1}{2}R & \\frac{\\lambda }{2}QR+\\frac{1}\n4\\lambda }I_{2\n\\end{array\n\\right) \\left\n\\begin{array}{c}\n\\Psi_1 \\\\\n\\Psi_\n\\end{array}\n\\right)\\,.\n\\end{equation}\nHere the size of matrices in the entries of Eqs. (\\ref{Laxpair_vcspe1})--\n\\ref{Laxpair_vcspe2}) is of $2^{n-1} \\times 2^{n-1}$ and is omitted for\nbrevity. 
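Before proceeding, we note that the algebraic properties $R^{(n)}=(Q^{(n)})^{\\dag}$ and $Q^{(n)}R^{(n)}=R^{(n)}Q^{(n)}=\\sum_{i=1}^{n}|q_i|^2\\,I_{2^{n-1}}$, which underlie the zero-curvature computation above, are easily checked numerically from the recursion (\\ref{NCSPE_Laxb1})--(\\ref{NCSPE_Laxb3}). A minimal NumPy sketch (ours, not part of the analysis; the entries $q_i$ are random complex samples) reads:
\\begin{verbatim}
import numpy as np

def build_QR(qs):
    # recursion (Q^(1),R^(1)) -> (Q^(n),R^(n)) for scalar entries qs[0..n-1]
    Q = np.array([[qs[0]]], dtype=complex)
    R = np.array([[np.conj(qs[0])]], dtype=complex)
    for q in qs[1:]:
        I = np.eye(Q.shape[0], dtype=complex)
        # tuple assignment so the old Q, R are used on the right-hand side
        Q, R = (np.block([[Q, q*I], [-np.conj(q)*I, R]]),
                np.block([[R, -q*I], [np.conj(q)*I, Q]]))
    return Q, R

rng = np.random.default_rng(1)
qs = rng.standard_normal(3) + 1j*rng.standard_normal(3)   # n = 3 components
Q, R = build_QR(qs)
print(np.allclose(R, Q.conj().T))                          # R = Q^dagger
print(np.allclose(Q @ R, np.sum(np.abs(qs)**2)*np.eye(len(Q))))
\\end{verbatim}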
If we define\n\\begin{equation} \\label{Cons_law1}\n\\Gamma \\equiv \\Psi_2 \\Psi_1^{-1}\\,\n\\end{equation}\nthen we have\n\\begin{equation} \\label{Cons_law2}\n2 \\lambda Q_x \\Gamma = \\lambda Q_x R_x - Q_x ((Q_x)^{-1} \\cdot Q_x \\Gamma)_x\n- \\lambda (Q_x \\Gamma)^2\\,\n\\end{equation}\n\n\nExpanding $Q_x \\Gamma$ in terms of the spectral parameter $\\lambda$ as\nfollows\n\\begin{equation} \\label{Cons_law4}\nQ_x \\Gamma = \\sum_{n=0}^\\infty F_n \\lambda^{-n}\\,,\n\\end{equation}\nand substituting into Eq. (\\ref{Cons_law2}), we obtain the following\nrelation\n\\begin{equation} \\label{Cons_law5}\n2 \\lambda F_n = Q_x R_x \\delta_{n,0} - Q_x((Q_x)^{-1} F_{n-1})_x -\n\\sum_{l=0}^n F_l F_{n-l}.\n\\end{equation}\nThe first local conserved density turns out to be\n\\begin{equation} \\label{Cons_law6}\nF_0=\\left(-1+\\sqrt{1+\\sum|q_{i,x}|^2}\\right)I\\,,\n\\end{equation}\n\nwhich is associated with a Hamiltonian of\n\\begin{equation} \\label{Cons_law7a}\nH_0=\\int \\sqrt{1+|q_x|^2}\\, dx \\,,\n\\end{equation}\nfor the complex short pulse equation (\\ref{CSP}) and\n\\begin{equation} \\label{Cons_law8a}\nH_0=\\int \\sqrt{1+|q_{1,x}|^2+|q_{2,x}|^2}\\, dx \\,,\n\\end{equation}\nfor the coupled complex short pulse equation (\\ref{CCSP1})--(\\ref{CCSP2}).\n\nFollowing the procedure in \\cite{Zimerman}, we can find the nonlocal\nconservation laws for vector complex short pulse equation. To this end, we\nexpand $Q_x \\Gamma$ as follows\n\\begin{equation} \\label{Cons_law9}\nQ_x \\Gamma = \\sum_{n=1}^\\infty F_{-n} (2\\lambda)^{n}\\,.\n\\end{equation}\nThe first two orders in $\\lambda$ yield the following equations\n\\begin{equation} \\label{Cons_law10}\n0 = Q_x R_x -Q_x \\left((Q_x)^{-1}F_{-1}\\right)_x\\,,\n\\end{equation}\n\\begin{equation} \\label{Cons_law11}\n2F_{-1} = -Q_x \\left((Q_x)^{-1}F_{-2}\\right)_x\\,,\n\\end{equation}\nfrom which, the first two nonlocal conserved densities can be calculated as\n\\begin{equation} \\label{Cons_law12}\nF_{-1}= \\frac 12 Q_x R\\,,\n\\end{equation}\n\\begin{equation} \\label{Cons_law13}\nF_{-2}= \\frac 12 Q R\\ -\\frac 12 \\partial_x\\left(Q\\partial_x R \\right).\n\\end{equation}\nThe first one turns out to be a trivial one, the second one accounts for a\nHamiltonian\n\\begin{equation} \\label{Cons_law7c}\nH_{-1}= \\frac 12 \\int |q|^2\\, dx \\,,\n\\end{equation}\nfor the complex short pulse equation (\\ref{CSP}) and\n\\begin{equation} \\label{Cons_law8c}\nH_{-1}= \\frac 12\\int (|q_{1}|^2+|q_{2}|^2)\\, dx \\,,\n\\end{equation}\nfor the coupled complex short pulse equation (\\ref{CCSP1})--(\\ref{CCSP2}).\n\\section{Multi-soliton solutions by Hirota's bilinear method}\n\n\n\\subsection{Bilinear equations and $N$-soliton solution to the complex short\npulse equation}\n\n\\begin{proposition}\nThe complex short pulse equation is derived from the following bilinear\nequations.\n\\begin{equation} \\label{CSPE_bilinear1}\nD_sD_y f \\cdot g =fg\\,,\n\\end{equation}\n\\begin{equation} \\label{CSPE_bilinear2}\nD^2_s f \\cdot f =\\frac{1}{2} |g|^2\\,,\n\\end{equation}\nby dependent variable transformation\n\\begin{equation} \\label{CSP_dtrf}\nq=\\frac{g}{f}\\,,\n\\end{equation}\nand hodograph transformation\n\\begin{equation}\nx = y -2(\\ln f)_s\\,, \\quad t=-s \\,, \\label{CSP_hodograph}\n\\end{equation}\nwhere $D$ is called Hirota $D$-operator defined by\n\\[\nD_s^n D_y^m f\\cdot g=\\left(\\frac{\\partial}{\\partial s} -\\frac{\\partial}\n\\partial s^{\\prime }}\\right)^n \\left(\\frac{\\partial}{\\partial y} -\\frac\n\\partial}{\\partial y^{\\prime 
}}\\right)^m f(y,s)g(y^{\\prime },s^{\\prime\n})|_{y=y^{\\prime }, s=s^{\\prime }}\\,.\n\\]\n\\end{proposition}\n\n.\n\n\\begin{proof}\nDividing both sides by $f^2$, the bilinear equations (\\ref{CSPE_bilinear1\n)-- (\\ref{CSPE_bilinear2}) can be cast into\n\\begin{equation}\n\\left\\\n\\begin{array}{l}\n\\displaystyle \\left(\\frac{g}{f} \\right)_{sy} + 2\\frac{g}{f} \\left( \\ln\nf\\right)_{sy} = \\frac{g}{f}\\,, \\\\[5pt]\n\\displaystyle \\left( \\ln f\\right)_{ss} =\\frac{1}{4} \\frac{|g|^2}{f^2}\\,\n\\end{array\n\\right. \\label{CSP_BL2}\n\\end{equation}\nFrom the hodograph transformation and dependent variable transformation, we\nthen have\n\\[\n\\frac{\\partial x}{\\partial s} = -2(\\ln f)_{ss} = -\\frac 12 |q|^2\\,, \\qquad\n\\frac{\\partial x}{\\partial y} = 1-2(\\ln f)_{sy}\\,,\n\\]\nwhich implies\n\\begin{equation} \\label{CSP_BL3}\n{\\partial_y} = \\rho^{-1} {\\partial_x}\\,, \\qquad {\\partial_s} = -{\\partial_t}\n- \\frac 12 |q|^2 {\\partial_x}\\,\n\\end{equation}\nby letting $1-2(\\ln f)_{sy} = \\rho^{-1}$.\n\nNotice that the first equation in (\\ref{CSP_BL2}) can be rewritten as\n\\begin{equation}\n\\left(\\frac{g}{f} \\right)_{sy} = \\left(1-2(\\ln f)_{sy} \\right) \\frac{g}{f}\\,,\n\\end{equation}\nor\n\\begin{equation} \\label{CSP_BL4}\n\\rho \\left(\\frac{g}{f} \\right)_{sy} = \\frac{g}{f}\\,,\n\\end{equation}\nwhich is converted into\n\\begin{equation} \\label{CSPE1}\n\\partial_x \\left(-\\partial_t - \\frac 12 |q|^2 \\partial_x \\right)q = q\\,,\n\\end{equation}\nby using (\\ref{CSP_BL3}). Eq. (\\ref{CSPE1}) is nothing but the complex short\npulse equation (\\ref{CSP}).\n\\end{proof}\n\n$N$-soliton solution to the bilinear equations (\\ref{CSPE_bilinear1})--(\\re\n{CSPE_bilinear2}) can be expressed by pfaffians similar to the ones for\ncoupled modified KdV equation \\cite{IwaoHirota}. To this end, we need to\ndefine two sets: $B_\\mu$ ($\\mu=1,2$): $B_1 = \\{b_1, b_2, \\cdots, b_{N} \\}$, \nB_2 = \\{b_{N+1}, b_2, \\cdots, b_{2N} \\}$, and an index function of $b_j$ by \nindex(b_j) = \\mu$ \\ if $b_j \\in B_\\mu$.\n\n\\begin{theorem}\nThe pfaffians\n\\begin{eqnarray}\nf &=& \\mathrm{Pf} (a_1, \\cdots, a_{2N}, b_1, \\cdots, b_{2N})\\,,\n\\label{CSP_sl1} \\\\\ng &=& \\mathrm{Pf} (d_0, \\beta_1, a_1, \\cdots, a_{2N}, b_1, \\cdots, b_{2N})\\,.\n\\label{CSP_sl2}\n\\end{eqnarray}\nsatisfy the bilinear equations (\\ref{CSPE_bilinear1})--(\\ref{CSPE_bilinear2\n) provided that the elements of the pfaffians are defined by\n\\begin{equation} \\label{CSPE_pf1}\n\\mathrm{Pf}(a_j,a_k)= \\frac{p_j-p_k}{p_j+p_k} e^{\\eta_j+\\eta_k}\\,, \\quad\n\\mathrm{Pf}(a_j,b_k)=\\delta_{j,k}\\,,\n\\end{equation}\n\\begin{equation} \\label{CSPE_pf2}\n\\mathrm{Pf}(b_j,b_k)=\\frac 14 \\frac{\\alpha_j \\alpha_k}{p^{-2}_j-p^{-2}_{k}}\n\\delta_{\\mu+1, \\nu}\\,, \\quad \\mathrm{Pf}(d_l,a_k)= p_k^{l} e^{\\eta_k}\\,,\n\\end{equation}\n\n\\begin{equation} \\label{CSPE_pf4}\n\\mathrm{Pf}(b_j,\\beta_1)=\\alpha_j \\delta_{\\mu, 1}\\,, \\quad \\mathrm{Pf\n(d_0,b_j) =\\mathrm{Pf}(d_0,\\beta_1) = \\mathrm{Pf}(a_j,\\beta_1)=0\\,.\n\\end{equation}\nHere $\\mu=index(b_j)$, $\\nu=index(b_k)$, $\\eta_j=p_j y + p_j^{-1} s$ which\nsatisfying $p_{j+N}=\\bar{p}_j$, $\\alpha_{j+N}=\\bar{\\alpha}_{j}$, $\\bar{p_j}$\nand $\\bar{\\alpha}_{j}$ represent the complex conjugates of $p_j$ and ${\\alph\n}_{j}$, respectively. The same notation will be used hereafter.\n\\end{theorem}\n\nThe proof of the Theorem is given in Appendix. 
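Although the proof is deferred to the Appendix, the bilinear equations (\\ref{CSPE_bilinear1})--(\\ref{CSPE_bilinear2}) can also be checked independently for small $N$. A minimal SymPy sketch (ours, not part of the proof; the values of $p_1$ and $\\alpha_1$ below are arbitrary samples) verifies them for the one-soliton tau functions written out explicitly in the next subsection:
\\begin{verbatim}
import sympy as sp

y, s = sp.symbols('y s', real=True)
p1 = 1 + sp.Rational(3, 2)*sp.I           # sample p_1
a1 = 2 + sp.I                             # sample alpha_1
p1c, a1c = sp.conjugate(p1), sp.conjugate(a1)
eta1, eta1c = p1*y + s/p1, p1c*y + s/p1c

f = -1 - sp.Rational(1, 4)*a1*a1c*(p1*p1c)**2/(p1 + p1c)**2*sp.exp(eta1 + eta1c)
g = -a1*sp.exp(eta1)
gc = -a1c*sp.exp(eta1c)                   # complex conjugate of g

# D_s D_y f.g - f g  and  D_s^2 f.f - |g|^2/2  should both vanish
DsDy = (sp.diff(f, s, y)*g - sp.diff(f, s)*sp.diff(g, y)
        - sp.diff(f, y)*sp.diff(g, s) + f*sp.diff(g, s, y))
print(sp.simplify(DsDy - f*g))            # -> 0
Ds2 = 2*(f*sp.diff(f, s, 2) - sp.diff(f, s)**2)
print(sp.simplify(Ds2 - g*gc/2))          # -> 0
\\end{verbatim}
Both printed expressions reduce to zero, in agreement with the theorem above.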
Combined with dependent and\nhodograph transformations (\\ref{CSP_dtrf})--(\\ref{CSP_hodograph}), the above\npfaffians (\\ref{CSP_sl1})--(\\ref{CSP_sl2}) give $N$-soliton solution to the\ncomplex short pulse equation (\\ref{CSP}) in parametric form.\n\n\\subsection{One- and two-soliton solutions for the complex short pulse\nequation}\n\nIn this subsection, we provide one- and two-soliton to the complex short\npulse equation (\\ref{CSP}) and give a detailed analysis for their properties.\n\n\\subsubsection{One-soliton solution}\n\nBased on (\\ref{CSP_sl1})--(\\ref{CSP_sl2}), the tau-functions for one-soliton\nsolution ($N=1$) are\n\\begin{eqnarray}\n&& f = -1-\\frac 14 \\frac {|\\alpha_1|^2(p_1\\bar{p}_1)^2}{(p_1+\\bar{p}_1)^2}\ne^{\\eta_1+\\bar{\\eta}_1} \\,,\n\\end{eqnarray}\n\\begin{equation}\ng = -\\alpha_1 e^{\\eta_1}\\,.\n\\end{equation}\n\nLet $p_1=p_{1R}+ \\mathrm{i}p_{1I}$, and we assume $p_{1R} >0$ without loss\nof generality, then the one-soliton solution can be expressed in the\nfollowing parametric form\n\\begin{equation} \\label{CSP1solitona}\nq=\\frac{\\alpha_1}{|\\alpha_1|}\\frac{2p_{1R}}{|p_1|^2}e^{\\mathrm{i} \\eta_{1I}}\n\\mbox{sech} \\left(\\eta_{1R}+\\eta_{10}\\right) \\,,\n\\end{equation}\n\\begin{equation} \\label{CSP1solitonb}\nx=y-\\frac{2p_{1R}}{|p_1|^2}\\left(\\tanh\n\\left(\\eta_{1R}+\\eta_{10}\\right)+1\\right)\\,, \\quad t=-s\\,,\n\\end{equation}\nwhere\n\\begin{equation}\n\\eta_{1R}=p_{1R}y+\\frac{p_{1R}}{|p_1|^2}s , \\quad \\eta_{1I}=p_{1I} y-\\frac\np_{1I}}{|p_1|^2}s \\,,\\quad \\eta_{10}=\\ln \\frac{|\\alpha_1||p_1|^2}{4p_{1R}}\\,.\n\\end{equation}\n\nEq. (\\ref{CSP1solitona}) represents an envelope soliton of amplitude \n2p_{1R}\/|p_1|^2$ and phase $\\eta_{1I}$. To analyze the property for the\none-soliton solution, we calculate out\n\\begin{equation}\n\\frac{\\partial x}{\\partial y} = 1- \\frac{2p^2_{1R}}{|p_1|^2} {\\mbox{sech}\n^2(\\eta_{1R}+\\eta_{10})\\,.\n\\end{equation}\nTherefore, $\\partial x\/\\partial y \\to 1$ as $y \\to \\pm \\infty$. Moreover, it\nattains a minimum value of $({p^2_{1I}-p^2_{1R}})\/({p^2_{1I}+p^2_{1R}})$\nat the peak point of envelope soliton where $\\eta_{1R}+\\eta_{10}=0$. Since $\n\\partial |q|}\/{\\partial x}=\\frac{\\partial |q|\/\\partial y}{\\partial\nx\/\\partial y}$, we can classify this one-soliton solution as follows:\n\n\\begin{itemize}\n\\item \\textbf{smooth soliton:} when $|p_{1R}| < |p_{1I}|$, ${\\partial x}\/\n\\partial y}$ is always positive, which leads to a smooth envelope soliton\nsimilar to the envelope soliton for the nonlinear Schr{\\\"o}dinger equation.\nAn example with $p_1=1+1.5\\mathrm{i}$ is illustrated in Fig. 1 (a).\n\n\\item \\textbf{loop soliton:} when $|p_{1R}| > |p_{1I}|$, the minimum value\nof ${\\partial x}\/{\\partial y}$ at the peak point of the soliton becomes\nnegative. In view of the fact that $\\partial x\/\\partial y \\to 1$ as $y \\to\n\\pm \\infty$, ${\\partial x}\/{\\partial y}$ has two zeros at both sides of the\npeak of the envelope soliton. Moreover, ${\\partial x}\/{\\partial y}< 0$\nbetween these two zeros. This leads to a loop soliton for the envelope of $q\n. An example is shown in Fig. (b) with $p_1=1+0.5\\mathrm{i}$.\n\n\\item \\textbf{cuspon soliton:} when $|p_{1R}| = |p_{1I}|$, ${\\partial x}\/\n\\partial y}$ has a minimum value of zero at $\\eta_{1R}+\\eta_{10}=0$, which\nmakes the derivative of the envelope $|q|$ with respect to $x$ going to\ninfinity at the peak point. Thus, we have a cusponed envelope soliton, which\nis illustrated in Fig. 
1 (c) with $p_1=1+\\mathrm{i}$.\n\\end{itemize}\n\\begin{figure}[htbp]\n\\centerline{\n\\includegraphics[scale=0.35]{onesolitona1b1p5.eps}\\quad\n\\includegraphics[scale=0.35]{onesolitona1b0p5.eps}} \\kern-0.3\\textwidth\n\\hbox to\n\\textwidth{\\hss(a)\\kern16em\\hss(b)\\kern11em} \\kern+0.3\\textwidth\n\\centerline{\n\\includegraphics[scale=0.35]{onesolitona1b1.eps}} \\kern-0.3\\textwidth\n\\hbox to\n\\textwidth{\\hss(c)\\kern21em} \\kern+0.3\\textwidth\n\\caption{Envelope soliton for the complex short pulse equation (\\protect\\re\n{CSP}), solid line: $Re(q)$, dashed line: $|q|$; (a) smooth soliton with \np_1=1+1.5\\mathrm{i}$, (b) loop soliotn with $p_1=1+0.5\\mathrm{i}$, (c)\ncuspon soliton with $p_1=1+\\mathrm{i}$.}\n\\label{figure:cspe1soliton}\n\\end{figure}\n\n\\begin{remark}\nThe one-soliton solution to the short pulse equation (\\ref{SPE}) is of\nloop-type, which lacks physical meaning in the context of nonlinear optics.\nHowever, the one-soliton solution to the complex short pulse equation (\\re\n{CSP}) is of breather-type, which allows physical meaning for optical pulse.\n\\end{remark}\n\n\\begin{remark}\nWhen $|p_{1R}| <|p_{1I}|$, there is no singularity for one-soliton solution.\nMoreover, in view of $\\eta_{1R}$ associated with the width of envelope\nsoliton and $\\eta_{1I}$ associated with the phase, it is obvious that this\nnonsingular envelope soliton can only contain a few optical cycle. This\nproperty coincides with the fact that the complex short pulse equation is\nderived for the purpose of describing ultra-short pulse propagation. When \n|p_{1R}| =|p_{1I}|$, the soliton becomes cuspon-like one, which agrees with\nthe results in \\cite{Bandelow} derived from a bidirectional model.\n\\end{remark}\n\n\\subsubsection{Two-soliton solution}\n\nBased on the $N$-soliton solution of the complex short pulse equation from \n\\ref{CSP_sl1})--(\\ref{CSP_sl2}), the tau-functions for two-soliton solution\ncan be expanded for $N=2$\n\\begin{eqnarray}\n&&f=\\mathrm{Pf}(a_{1},a_{2},a_{3},a_{4},b_{1},b_{2},b_{3},b_{4}) \\nonumber\n\\\\\n&&\\quad =1+a_{1\\bar{1}}e^{\\eta _{1}+\\bar{\\eta}_{1}}+a_{1\\bar{2}}e^{\\eta _{1}\n\\bar{\\eta}_{2}}+a_{2\\bar{1}}e^{\\eta _{2}+\\bar{\\eta}_{1}}+a_{2\\bar{2}}e^{\\eta\n_{2}+\\bar{\\eta _{2}}} \\nonumber \\\\\n&&\\qquad +|P_{12}|^{2}\\left( a_{1\\bar{1}}a_{2\\bar{2}}P_{1\\bar{2}}P_{2\\bar{1\n}-a_{1\\bar{2}}a_{2\\bar{1}}P_{1\\bar{1}}P_{2\\bar{2}}\\right) e^{\\eta _{1}+\\eta\n_{2}+\\bar{\\eta}_{1}+\\bar{\\eta}_{2}}\\,,\n\\end{eqnarray}\n\n\\begin{eqnarray}\n&&g=\\mathrm{Pf}(d_{0},\\beta\n_{1},a_{1},a_{2},a_{3},a_{4},b_{1},b_{2},b_{3},b_{4}) \\nonumber \\\\\n&&\\quad =\\alpha _{1}e^{\\eta _{1}}+\\alpha _{2}e^{\\eta _{2}}+P_{12}\\left(\n\\alpha _{1}P_{1\\bar{1}}a_{2\\bar{1}}-\\alpha _{2}P_{2\\bar{1}}a_{1\\bar{1\n}\\right) e^{\\eta _{1}+\\eta _{2}+\\bar{\\eta}_{1}} \\nonumber \\\\\n&&\\qquad +P_{12}\\left( \\alpha _{1}P_{1\\bar{2}}a_{2\\bar{2}}-\\alpha _{2}P_{\n\\bar{2}}a_{1\\bar{2}}\\right) e^{\\eta _{1}+\\eta _{2}+\\bar{\\eta}_{2}}\\,,\n\\end{eqnarray\nwhere\n\\begin{equation}\nP_{ij}=\\frac{p_{i}-p_{j}}{p_{i}+p_{j}}\\,,\\quad P_{i\\bar{j}}=\\frac{p_{i}-\\bar\np}_{j}}{p_{i}+\\bar{p}_{j}}\\,,\\quad a_{i\\bar{j}}=\\frac{\\alpha _{i}\\bar{\\alpha\n_{j}(p_{i}\\bar{p}_{j})^{2}}{4(p_{i}+\\bar{p}_{j})^{2}}\\,,\n\\end{equation\nand $\\eta _{j}=p_{j}y+p_{j}^{-1}s$, 
$\\bar{\\eta}_{j}=\\bar{p}_{j}y+\\bar{p\n_{j}^{-1}s$.\n\n\\begin{figure}[htbp]\n\\centerline{\n\\includegraphics[scale=0.35]{CSPE_2soliton_ct.eps}\\quad\n\\includegraphics[scale=0.35]{CSPE_2soliton_elastic.eps}} \\kern-0.31\n\\textwidth \\hbox to\n\\textwidth{\\hss(a)\\kern4em\\hss(b)\\kern2em} \\kern+0.315\\textwidth\n\\caption{Two-soliton solution to the complex short pulse equation (a)\ncontour plot; (b) profiles at $t=-80$, $80$.}\n\\label{f:1com2soliton}\n\\end{figure}\nTo avoid the singularity of the envelope solitons, the conditions $|p_{1R}|<\n|p_{1I}|$ and $|p_{2R}|< |p_{2I}|$ need to be satisfied.\nWhen two solitons stay apart, the amplitude of each soliton is of $2|p_{iR}|\/|p_{i}|^2$, and the velocity is of $-1\/|p_i|^2$ in the $ys$-coordinate system. Therefore, the soliton of larger velocity will catch up with and collide with the soliton of smaller velocity if it is initially located on the left. Furthermore, the\ncollision is elastic, and there is no change in shape and amplitude of\nsolitons except a phase shift. In Fig. 2, we illustrate\nthe contour plot for the collision of two solitons (a), as well as the\nprofiles (b) before and after the collision. The parameters are taken as \n\\alpha_1=\\alpha_2=1.0$, $p_1=1+1.2\\mathrm{i}$ and $p_2=1+2\\mathrm{i}$.\n\nSince the velocity of single envelope soliton is $-1\/|p_i|^2$ in the $ys\n-coordinate system, a bound state can be formed under the condition of \n|p_1|^2 = |p_2|^2$ if two solitons stay close enough and move with the same\nvelocity. Such a bound state is shown in Fig. 3 for\nparameters chosen as $\\alpha_1=\\alpha_2=1.0$, $p_1=1.3+1.8193\\mathrm{i}$, \np_2=1+2\\mathrm{i}$.\nIt is interesting that the envelope of the bound state oscillates\nperiodically as it moves along $x$-axis.\n\\begin{figure}[htbp]\n\\centerline{\n\\includegraphics[scale=0.35]{1com_bd3d.eps}\\quad\n\\includegraphics[scale=0.35]{1com_bdprofile.eps}} \\kern-0.315\\textwidth\n\\hbox to\n\\textwidth{\\hss(a)\\kern4em\\hss(b)\\kern2em} \\kern+0.315\\textwidth\n\\caption{Bound state to the complex short pulse equation: (a) 3D plot (b)\nprofiles at $t=-100$, $40$.}\n\\label{f:1comboundstate}\n\\end{figure}\n\\subsection{Bilinear equations and $N$-soliton solutions to the coupled\ncomplex short pulse equation}\n\n\\begin{proposition}\nThe coupled complex short pulse equation is derived from bilinear equations\n\\begin{equation} \\label{CCSPE_bilinear1}\nD_sD_y f \\cdot g_i =fg_i, \\quad i=1,2 \\,,\n\\end{equation}\n\\begin{equation} \\label{CCSPE_bilinear2}\nD^2_s f \\cdot f =\\frac{1}{2} \\left(|g_1|^2+|g_2|^2\\right)\\,,\n\\end{equation}\nby dependent variable transformation\n\\begin{equation} \\label{CCSPE_vartrf}\nq_1=\\frac{g_1}{f}, \\quad q_2=\\frac{g_2}{f}\\,,\n\\end{equation}\nand hodograph transformation\n\\begin{equation}\nx = y -2(\\ln f)_s\\,, \\quad t=-s \\,, \\label{CCSP_hodograph}\n\\end{equation}\n\\end{proposition}\n\n\\begin{proof}\nDividing both sides of Eqs. (\\ref{CCSPE_bilinear1})--(\\ref{CCSPE_bilinear2})\nby $f^2$, we have\n\\begin{equation}\n\\left(\\frac{g_i}{f} \\right)_{sy} + 2\\frac{g_i}{f} \\left( \\ln f\\right)_{sy} =\n\\frac{g_i}{f}\\,, \\label{CCSP_BL2a}\n\\end{equation}\n\\begin{equation}\n\\left( \\ln f\\right)_{ss} =\\frac{1}{4} \\left( \\frac{|g_1|^2}{f^2}+\\frac\n|g_2|^2}{f^2} \\right)\\,. 
\\label{CCSP_BL2b}\n\\end{equation}\n\nFrom dependent variable and hodograph transformations (\\ref{CCSPE_vartrf})--\n\\ref{CCSP_hodograph}), we obtain\n\\[\n\\frac{\\partial x}{\\partial s} = -2(\\ln f)_{ss} = -\\frac 12\n\\left(|q_1|^2+|q_2|^2 \\right)\\,, \\qquad \\frac{\\partial x}{\\partial y} =\n1-2(\\ln f)_{sy}\\,,\n\\]\nwhich implies\n\\begin{equation} \\label{CCSP_BL3}\n{\\partial_y} = \\rho^{-1} {\\partial_x}\\,, \\qquad {\\partial_s} = -{\\partial_t}\n- \\frac 12 \\left(|q_1|^2+|q_2|^2 \\right) {\\partial_x}\\,\n\\end{equation}\nby letting $1-2(\\ln f)_{sy} = \\rho^{-1}$.\n\nWith the use of (\\ref{CCSP_BL3}), Eq. (\\ref{CCSP_BL2a}) can be recast into\n\\begin{equation} \\label{CCSP_BL4}\n\\rho \\left(\\frac{g_i}{f} \\right)_{sy} = \\frac{g_i}{f}\\,, \\quad i=1,2\\,,\n\\end{equation}\nwhich can be further converted into\n\\begin{equation} \\label{CCSPE_alt}\n\\partial_x \\left(-\\partial_t - \\frac 12 (|q_1|^2+|q_2|^2) \\partial_x\n\\right)q_i = q_i\\,, \\quad i=1,2\\,.\n\\end{equation}\nEq. (\\ref{CCSPE_alt}) is, obviously, equivalent to the coupled complex short\npulse equation (\\ref{CCSP1})--(\\ref{CCSP2}).\n\\end{proof}\n\n\n$N$-soliton solution for the coupled complex short pulse equation is given\nin a similar way as the complex short pulse equation by the following\ntheorem.\n\n\n\\begin{theorem}\nThe coupled complex short pulse equation admits the following $N$-soliton\nsolution\n\\[\nq_i=\\frac{g_i}{f}, \\quad x = y -2(\\ln f)_s\\,, \\quad t=-s \\,,\n\\]\nwhere $f$, $g_i$ are pfaffians defined as\n\\begin{eqnarray} \\label{CCSPE_Nsoliton1}\nf &=& \\mathrm{Pf} (a_1, \\cdots, a_{2N}, b_1, \\cdots, b_{2N})\\,, \\\\\ng_i &=& \\mathrm{Pf} (d_0, \\beta_{i}, a_1, \\cdots, a_{2N}, b_1, \\cdots,\nb_{2N})\\,, \\label{CCSPE_Nsoliton2}\n\\end{eqnarray}\nand the elements of the pfaffians are determined as\n\\begin{equation} \\label{NCSPE_pf1}\n\\mathrm{Pf}(a_j,a_k)= \\frac{p_j-p_k}{p_j+p_k} e^{\\eta_j+\\eta_k}\\,, \\quad\n\\mathrm{Pf}(a_j,b_k)=\\delta_{j,k}\\,,\n\\end{equation}\n\\begin{equation} \\label{NCSPE_pf2}\n\\mathrm{Pf}(b_j,b_k)=\\frac 14 \\frac{\\sum^2_{i=1} \\alpha^{(i)}_j\n\\alpha^{(i)}_k }{p^{-2}_j-p^{-2}_{k}} \\delta_{\\mu+1, \\nu}\\,, \\quad \\mathrm{P\n}(d_l,a_k)= p_k^{l} e^{\\eta_k}\\,,\n\\end{equation}\n\n\\begin{equation} \\label{NCSPE_pf4}\n\\mathrm{Pf}(b_j,\\beta_i)=\\alpha^{(i)}_j \\delta_{\\mu, i}\\,,\\quad \\mathrm{Pf\n(d_0,b_j) =\\mathrm{Pf}(d_0,\\beta_i) = \\mathrm{Pf}(a_j,\\beta_i)=0\\,.\n\\end{equation}\nHere $\\mu=index(b_j)$, $\\nu=index(b_k)$, $\\eta_j=p_j y + p_j^{-1} s +\n\\eta_{j,0}$ which satisfying $p_{j+N}=\\bar{p}_j$, $\\alpha_{j+N}=\\bar{\\alpha\n_{j}$.\n\\end{theorem}\n\nThe proof of the Theorem is given in the Appendix. 
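We also remark that all envelope profiles shown in the figures of this paper can be reproduced directly from the parametric solutions: one samples $y$ at fixed $s=-t$, evaluates the tau functions (or the explicit formulas), and plots $|q_i|$ against $x=y-2(\\ln f)_{s}$. For instance, a minimal NumPy sketch (ours; $\\alpha_1=1$ is a sample value) evaluating the one-soliton (\\ref{CSP1solitona})--(\\ref{CSP1solitonb}) of the scalar equation for the smooth case of Fig. 1(a) reads:
\\begin{verbatim}
import numpy as np

p1, alpha1, t = 1 + 1.5j, 1.0, 0.0   # Fig. 1(a) uses p1 = 1 + 1.5i
p1R, p1I, a2 = p1.real, p1.imag, abs(p1)**2
s = -t
y = np.linspace(-20.0, 20.0, 2001)
eta1R = p1R*y + (p1R/a2)*s
eta1I = p1I*y - (p1I/a2)*s
eta10 = np.log(abs(alpha1)*a2/(4*p1R))
q = (alpha1/abs(alpha1))*(2*p1R/a2)*np.exp(1j*eta1I)/np.cosh(eta1R + eta10)
x = y - (2*p1R/a2)*(np.tanh(eta1R + eta10) + 1)
# plotting (x, abs(q)) and (x, q.real) reproduces the dashed and solid curves
\\end{verbatim}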
In the subsequent\nsection, based on the $N$-soliton solution of coupled complex short pulse\nequation, we will investigate the dynamics of one- and two-solitons in\ndetails.\n\n\\begin{remark}\nThrough the transformations\n\\begin{equation} \\label{NCSPE_trfs}\nx = y -2(\\ln f)_s\\,, \\quad t=-s \\,, \\quad q_i=\\frac{g_i}{f}\\,,\n\\end{equation}\nthe vector complex short pulse equation (\\ref{NCSPE}) can be decomposed into\nthe following bilinear equations\n\\begin{equation} \\label{NCSPE_bilinear1}\nD_sD_y f \\cdot g_i =fg_i, \\quad i=1, \\cdots, n \\,,\n\\end{equation}\n\\begin{equation} \\label{NCSPE_bilinear2}\nD^2_s f \\cdot f =\\frac{1}{2} \\left(\\sum^n_{i=1}|g_i|^2\\right)\\,.\n\\end{equation}\nThe parametric form of $N$-soliton solution in terms of pfaffians to the\nvector complex short pulse equation (\\ref{NCSPE}) can be given in a very\nsimilar from as to to the coupled complex short pulse equation. Here, we\nomit the details and will report the results later on.\n\\end{remark}\n\n\\section{Dynamics of solitons to the coupled complex short pulse equation}\n\\subsection{One-soliton solution}\nThe tau-functions for one-soliton solution to the coupled complex short\npulse equation are obtained from (\\ref{CCSPE_Nsoliton1})--(\\re\n{CCSPE_Nsoliton2}) for $N=1$\n\\begin{equation}\nf = -1-\\frac 14 \\frac {\\sum_{i=1}^2|\\alpha^{(i)}_1|^2(p_1\\bar{p}_1)^2}{(p_1\n\\bar{p}_1)^2} e^{\\eta_1+\\bar{\\eta}_1} \\,,\n\\end{equation}\n\\begin{equation}\ng_1 = -\\alpha^{(1)}_1 e^{\\eta_1}\\,, \\quad g_2 = -\\alpha^{(2)}_1 e^{\\eta_1}\\,.\n\\end{equation}\n\nLet $p_{1}=p_{1R}+\\mathrm{i}p_{1I}$, the one-soliton solution can be\nexpressed in the following parametric form\n\\begin{equation}\n\\left(\n\\begin{array}{c}\nq_{1} \\\\\nq_{2\n\\end{array\n\\right) =\\left(\n\\begin{array}{c}\nA_{1} \\\\\nA_{2\n\\end{array\n\\right) \\frac{2p_{1R}}{|p_{1}|^{2}}e^{\\mathrm{i}\\eta _{1I}}{\\mbox{sech}\n\\left( \\eta _{1R}+\\eta _{10}\\right) \\,, \\label{1soliton_ay}\n\\end{equation\n\\begin{equation}\nx=y-\\frac{2p_{1R}}{|p_{1}|^{2}}\\left( \\tanh (\\eta _{1R}+\\eta _{10})+1\\right)\n\\,,\\quad t=-s\\,, \\label{CCSP1solitonb}\n\\end{equation\nwhere\n\\begin{equation}\n\\eta _{1R}=p_{1R}\\left( y+\\frac{1}{|p_{1}|^{2}}s\\right) ,\\quad \\eta\n_{1I}=p_{1I}\\left( y-\\frac{1}{|p_{1}|^{2}}s\\right) \\,,\n\\end{equation\n\\begin{equation}\nA_{i}=\\frac{\\alpha _{1}^{(i)}}{\\sqrt{\\sum_{i=1}^{2}|\\alpha _{1}^{(i)}|^2}\n\\,,\\quad \\eta_{10}=\\ln \\frac{\\sqrt{\\sum_{i=1}^{2}|\\alpha _{1}^{(i)}|^2\n|p_{1}|^{2}}{4|p_{1R}|}\\,.\n\\end{equation\nThe amplitudes of the single soliton in each component are ${2|A_{1}|p_{1R}}\n{|p_{1}|^{2}}$ and ${2|A_{2}|p_{1R}}\/{|p_{1}|^{2}}$, respectively. Note that\n$|A_{1}|^{2}+|A_{2}|^{2}=1$. Same as the analysis for one-soliton solution\nof complex short pulse equation, if $|p_{1R}|<|p_{1I}|$, the envelope for\none-soliton in each of the component is smooth, whereas, if \n|p_{1R}|>|p_{1I}| $, it becomes a loop (multi-valued) soliton, if \n|p_{1R}|=|p_{1I}|$, it is a cuspon.\n\n\\subsection{Soliton interactions}\nTwo-soliton solution for coupled complex short pulse equation is obtained\nfrom (\\ref{CCSPE_Nsoliton1})--(\\ref{CCSPE_Nsoliton2}) for $N=2$. 
By\nexpanding the pfaffians, the tau-functions for two-soliton solution are\nexpressed by\n\\begin{eqnarray}\n&&f=1+e^{\\eta _{1}+\\bar{\\eta}_{1}+r_{1\\bar{1}}}+e^{\\eta _{1}+\\bar{\\eta\n_{2}+r_{1\\bar{2}}}+e^{\\eta _{2}+\\bar{\\eta}_{1}+r_{2\\bar{1}}}+e^{\\eta _{2}\n\\bar{\\eta}_{2}+r_{2\\bar{2}}} \\nonumber \\\\\n&&\\qquad +|P_{12}|^{2}|P_{1\\bar{2}}|^{2}P_{1\\bar{1}}P_{2\\bar{2}}\\left( B_{\n\\bar{1}}B_{2\\bar{2}}-B_{2\\bar{1}}B_{1\\bar{2}}\\right) e^{\\eta _{1}+\\eta _{2}\n\\bar{\\eta}_{1}+\\bar{\\eta}_{2}}\\,,\n\\end{eqnarray}\n\n\\begin{eqnarray}\n&& g_1= \\alpha^{(1)}_{1} e^{\\eta_1} + \\alpha^{(1)}_2 e^{\\eta_2} +P_{12} P_{\n\\bar{1}} P_{2\\bar{1}} \\left( \\alpha^{(1)}_2 B_{1\\bar{1}} - \\alpha^{(1)}_1\nB_{2\\bar{1}} \\right) e^{\\eta_1+\\eta_2+\\bar{\\eta}_1} \\nonumber \\\\\n&& \\qquad +P_{12} P_{1\\bar{2}} P_{2\\bar{2}} \\left( \\alpha^{(1)}_2 B_{1\\bar{2\n} - \\alpha^{(1)}_1 B_{2\\bar{2}} \\right) e^{\\eta_1+\\eta_2+\\bar{\\eta}_2} \\,,\n\\end{eqnarray}\n\n\\begin{eqnarray}\n&&g_{2}=\\alpha _{1}^{(2)}e^{\\eta _{1}}+\\alpha _{2}^{(2)}e^{\\eta\n_{2}}+P_{12}P_{1\\bar{1}}P_{2\\bar{1}}\\left( \\alpha _{2}^{(2)}B_{1\\bar{1\n}-\\alpha _{1}^{(2)}B_{2\\bar{1}}\\right) e^{\\eta _{1}+\\eta _{2}+\\bar{\\eta}_{1}}\n\\nonumber \\\\\n&&\\qquad +P_{12}P_{1\\bar{2}}P_{2\\bar{2}}\\left( \\alpha _{2}^{(2)}B_{1\\bar{2\n}-\\alpha _{1}^{(2)}B_{2\\bar{2}}\\right) e^{\\eta _{1}+\\eta _{2}+\\bar{\\eta\n_{2}}\\,,\n\\end{eqnarray\nwhere\n\\[\nP_{ij}=\\frac{p_{i}-p_{j}}{p_{i}+p_{j}}\\,,\\quad P_{i\\bar{j}}=\\frac{p_{i}-\\bar\np}_{j}}{p_{i}+\\bar{p}_{j}}\\,,\n\\\n\\[\nB_{i\\bar{j}}=\\frac{\\alpha _{i}^{(1)}\\bar{\\alpha}_{j}^{(1)}+\\alpha _{i}^{(2)\n\\bar{\\alpha}_{j}^{(2)}}{4(p_{i}^{-2}-\\bar{p}_{j}^{-2})}\\,,\\quad e^{r_{i\\bar{\n}}}=\\frac{\\alpha _{i}^{(1)}\\bar{\\alpha}_{j}^{(1)}+\\alpha _{i}^{(2)}\\bar\n\\alpha}_{j}^{(2)}}{4(p_{i}^{-1}+\\bar{p}_{j}^{-1})^2}\\,.\n\\\nand $\\eta _{j}=p_{j}y+p_{j}^{-1}s$, $p_{3}=\\bar{p}_{1}$, $p_{4}=\\bar{p}_{2}\n, thus, $\\eta _{3}=\\bar{\\eta}_{1}$, $\\eta _{4}=\\bar{\\eta}_{2}$.\n\nNext, we investigate the asymptotic behavior of two-soliton solution. To\nthis end, we assume $p_{1R}>p_{2R}>0$, $p_{1R}\/|p_{1}|^{2}>p_{2R}\/|p_{2}|^{2}\n$ without loss of generality. For the above choice of parameters, we have\n(i) $\\eta _{1R}\\approx 0$, $\\eta _{2R}\\rightarrow \\mp \\infty $ as \nt\\rightarrow \\mp \\infty $ for soliton 1 and (ii) $\\eta _{2R}\\approx 0$, \n\\eta _{2R}\\rightarrow \\pm \\infty $ as $t\\rightarrow \\mp \\infty $ for soliton\n2. 
This leads to the following asymptotic forms for two-soliton solution.\n\\newline\n(i) Before collision ($t\\rightarrow -\\infty $) \\newline\nSoliton 1 ($\\eta _{1R}\\approx 0$, $\\eta _{2R}\\rightarrow -\\infty $):\n\\begin{eqnarray}\n\\left(\n\\begin{array}{c}\nq_{1} \\\\\nq_{2\n\\end{array\n\\right) &\\rightarrow &\\left(\n\\begin{array}{c}\n\\alpha _{1}^{(1)} \\\\\n\\alpha _{1}^{(2)\n\\end{array\n\\right) \\frac{e^{\\eta _{1}}}{1+e^{\\eta _{1}+\\bar{\\eta}_{1}+r_{1\\bar{1}}}}\\,,\n\\nonumber \\label{soliton1_aybf} \\\\\n&\\rightarrow &\\left(\n\\begin{array}{c}\nA_{1}^{1-} \\\\\nA_{2}^{1-\n\\end{array\n\\right) \\frac{2p_{1R}}{|p_{1}|^{2}}e^{i\\eta _{1I}}{\\mbox{sech}}\\left( \\eta\n_{1R}+\\frac{r_{1\\bar{1}}}{2}\\right) \\,,\n\\end{eqnarray\nwhere\n\\begin{equation}\n\\left(\n\\begin{array}{c}\nA_{1}^{1-} \\\\\nA_{2}^{1-\n\\end{array\n\\right) =\\left(\n\\begin{array}{c}\n\\alpha _{1}^{(1)} \\\\\n\\alpha _{1}^{(2)\n\\end{array\n\\right) \\frac{1}{\\sqrt{|\\alpha _{1}^{(1)}|^{2}+|\\alpha _{1}^{(2)}|^{2}}}\\,.\n\\end{equation}\n\nSoliton 2 ($\\eta_{2R} \\approx 0$, $\\eta_{1R} \\to \\infty$):\n\\begin{equation} \\label{soliton2_aybf}\n\\left\n\\begin{array}{c}\nq_1 \\\\\nq_\n\\end{array\n\\right) \\to \\left\n\\begin{array}{c}\nA^{2-}_1 \\\\\nA^{2-}_\n\\end{array\n\\right)\\frac{2p_{2R}}{|p_{2}|^2} e^{i\\eta_{2I}} {\\mbox{sech}}\n\\left(\\eta_{2R}+\\frac {r_{1\\bar{1}2\\bar{2}}-r_{1\\bar{1}}}{2} \\right)\\,,\n\\end{equation}\nwhere\n\\begin{equation}\n\\left\n\\begin{array}{c}\nA^{2-}_1 \\\\\nA^{2-}_\n\\end{array\n\\right) = \\left\n\\begin{array}{c}\ne^{r^{(1)}_{1\\bar{1}2}} \\\\\ne^{r^{(2)}_{1\\bar{1}2}\n\\end{array\n\\right) \\frac{e^{-(r_{1\\bar{1}2\\bar{2}}+r_{1\\bar{1}}-r_{2\\bar{2}})\/{2}}} \n\\sqrt{|\\alpha^{(1)}_2|^2+|\\alpha^{(2)}_2|^2}} \\,,\n\\end{equation}\nwith\n\\begin{equation}\ne^{r^{(i)}_{1\\bar{1}2}}= P_{12} P_{1\\bar{1}} P_{2\\bar{1}} \\left(\n\\alpha^{(i)}_2 B_{1\\bar{1}} - \\alpha^{(i)}_1 B_{2\\bar{1}} \\right)\\,, \\quad\n(i=1,2)\n\\end{equation}\n\\begin{equation}\ne^{r_{1\\bar{1}2\\bar{2}}}=|P_{12}|^2 |P_{1\\bar{2}}|^2 P_{1\\bar{1}} P_{2\\bar{2\n} \\left(B_{1\\bar{1}} B_{2\\bar{2}} -B_{2\\bar{1}}B_{1\\bar{2}}\\right)\\,.\n\\end{equation}\n\\newline\nAfter collision ($t \\to \\infty$) \\newline\nSoliton 1 ($\\eta_{1R} \\approx 0$, $\\eta_{2R} \\to \\infty$):\n\n\\begin{equation}\n\\left(\n\\begin{array}{c}\nq_{1} \\\\\nq_{2\n\\end{array\n\\right) \\rightarrow \\left(\n\\begin{array}{c}\nA_{1}^{1+} \\\\\nA_{2}^{1+\n\\end{array\n\\right) \\frac{2p_{1R}}{|p_{1}|^{2}}e^{i\\eta _{1I}}{\\mbox{sech}}\\left( \\eta\n_{2R}+\\frac{r_{1\\bar{1}2\\bar{2}}-r_{2\\bar{2}}}{2}\\right) \\,,\n\\label{soliton2_ayafter}\n\\end{equation\nwhere\n\\begin{equation}\n\\left(\n\\begin{array}{c}\nA_{1}^{1+} \\\\\nA_{2}^{1+\n\\end{array\n\\right) =\\left(\n\\begin{array}{c}\ne^{r_{12\\bar{1}}^{(1)}} \\\\\ne^{r_{12\\bar{1}}^{(2)}\n\\end{array\n\\right) \\frac{e^{-(r_{1\\bar{1}2\\bar{2}}-r_{1\\bar{1}}+r_{2\\bar{2}})\/{2}}}\n\\sqrt{|\\alpha _{1}^{(1)}|^{2}+|\\alpha _{1}^{(2)}|^{2}}}\\,,\n\\end{equation\nwith\n\\begin{equation}\ne^{r_{12\\bar{1}}^{(i)}}=P_{12}P_{1\\bar{2}}P_{2\\bar{2}}\\left( \\alpha\n_{2}^{(i)}B_{1\\bar{2}}-\\alpha _{1}^{(i)}B_{2\\bar{2}}\\right) \\,,\\quad\n(i=1,2)\\,.\n\\end{equation}\n\nSoliton 2 ($\\eta _{2R}\\approx 0$, $\\eta _{1R}\\rightarrow -\\infty $):\n\\begin{equation}\n\\left(\n\\begin{array}{c}\nq_{1} \\\\\nq_{2\n\\end{array\n\\right) \\rightarrow \\left(\n\\begin{array}{c}\nA_{1}^{2+} \\\\\nA_{2}^{2+\n\\end{array\n\\right) \\frac{2p_{2R}}{|p_{2}|^{2}}e^{i\\eta 
_{2I}}{\\mbox{sech}}\\left( \\eta\n_{2R}+\\frac{r_{2\\bar{2}}}{2}\\right) \\,, \\label{soliton2_ayaf}\n\\end{equation\nwhere\n\\begin{equation}\n\\left(\n\\begin{array}{c}\nA_{1}^{2+} \\\\\nA_{2}^{2+\n\\end{array\n\\right) =\\left(\n\\begin{array}{c}\n\\alpha _{2}^{(1)} \\\\\n\\alpha _{2}^{(2)\n\\end{array\n\\right) \\frac{1}{\\sqrt{|\\alpha _{2}^{(1)}|^{2}+|\\alpha _{2}^{(2)}|^{2}}}\\,.\n\\end{equation}\n\nSimilar to the analysis for the CNLS equations \\cit\n{Lakshmanan1997,Lakshmanan2001,Lakshmanan2003}, the change in the amplitude\nof each of the solitons in each component can be obtained by introducing the\ntransition matrix $T^k_j$ by $A_j^{k+}= T_j^k A_j^{k-}$, $j,k=1,2$. The\nelements of transition matrix is obtained from the above asymptotic analysis\nas\n\\begin{equation} \\label{trasition_matrix1}\nT_j^1=\\left(\\frac{P_{12}P_{1\\bar{2}}}{\\bar{P}_{12}\\bar{P}_{1\\bar{2}}}\n\\right)^{1\/2} \\frac{1}{\\sqrt{1-\\lambda_1 \\lambda_2}} \\left(1-\\lambda_2\\frac\n\\alpha_2^{(j)}}{\\alpha_1^{(j)}} \\right)\\,, \\quad j=1,2\\,,\n\\end{equation}\n\\begin{equation} \\label{trasition_matrix2}\nT_j^2=\\left(\\frac{\\bar{P}_{12}P_{1\\bar{2}}}{P_{12}\\bar{P}_{1\\bar{2}}}\n\\right)^{1\/2} \\sqrt{1-\\lambda_1 \\lambda_2} \\left(1-\\lambda_1\\frac\n\\alpha_1^{(j)}}{\\alpha_2^{(j)}} \\right)^{-1}\\,, \\quad j=1,2\\,,\n\\end{equation}\nwhere $\\lambda_1=B_{2\\bar{1}}\/B_{1\\bar{1}}$, $\\lambda_2=B_{1\\bar{2}}\/B_{\n\\bar{2}}$.\n\nTherefore, in general, there is an exchange of energies between two components of two solitons\nafter the collision.\nAn example is shown in Fig. 4 for the\nparameters taken as follows\n$p_{1}=1+1.2\\mathrm{i}$, $p_{2}=1+2\\mathrm{i}$, $\\alpha^{(1)}_{1}\n\\alpha^{(2)}_{1}=1.0$, $\\alpha^{(1)} _{2}=2.0$, $\\alpha^{(2)}_{2}=1.0$.\n\\begin{figure}[tbph]\n\\centerline{\n\\includegraphics[scale=0.35]{Inelastic_ct_q1.eps}\\quad\n\\includegraphics[scale=0.35]{Inelastic_ct_q2.eps}} \\kern-0.315\\textwidth\n\\hbox to\n\\textwidth{\\hss(a)\\kern6.5em\\hss(b)\\kern-1.5em} \\kern+0.315\\textwidth\n\\centerline{\n\\includegraphics[scale=0.35]{Inelastic_q1.eps}\\quad\n\\includegraphics[scale=0.35]{Inelastic_q2.eps}} \\kern-0.315\\textwidth\n\\hbox to\n\\textwidth{\\hss(c)\\kern6.5em\\hss(d)\\kern-1.5em} \\kern+0.315\\textwidth\n\\caption{Inelastic collision in coupled complex short\npulse equation. (a)-(b): contour plot; (c)-(d): profiles before and after the collision.}\n\\label{f:inelastic1}\n\\end{figure}\nHowever, only for the special case\n\\begin{equation}\n\\frac{\\alpha _{1}^{(1)}}{\\alpha _{2}^{(1)}}=\\frac{\\alpha _{1}^{(2)}}{\\alpha\n_{2}^{(2)}}\\,,\n\\end{equation\nthere is no energy exchange between two compoents of solitons\nafter the collision. An example is shown in Fig. 
5 for the\nparameters\n$p_{1}=1+1.2\\mathrm{i}$,\\ $p_{2}=1+2\\mathrm{i}$,\\ $\\alpha^{(1)}_{1}\n\\alpha^{(2)}_{1}=1.0$, \\ $\\alpha^{(1)}_{2}=\\alpha^{(2)}_{2}=1.0$.\n\\begin{figure}[htbp]\n\\centerline{\n\\includegraphics[scale=0.35]{Elastic_ct_q1.eps}\\quad\n\\includegraphics[scale=0.35]{Elastic_ct_q2.eps}} \\kern-0.315\\textwidth\n\\hbox to\n\\textwidth{\\hss(a)\\kern6.5em\\hss(b)\\kern-1.5em} \\kern+0.315\\textwidth\n\\centerline{\n\\includegraphics[scale=0.35]{Elastic_q1.eps}\\quad\n\\includegraphics[scale=0.35]{Elastic_q2.eps}} \\kern-0.315\\textwidth\n\\hbox to\n\\textwidth{\\hss(c)\\kern6.5em\\hss(d)\\kern-1.5em} \\kern+0.315\\textwidth\n\\caption{Elastic collision in coupled complex short pulse equation.}\n\\label{f:elastic}\n\\end{figure}\n\nIt is interesting to note that if we just change the parameters in previous\ntwo examples as $\\alpha^{(1)}_{2}=0$, $\\alpha^{(2)}_{2}=1.0$,\nthe energy of one soliton is concentrated in component $q_2$ before the collision. However, component $q_1$ gains some energy after the collision.\nSuch an example is shown in Fig. 6.\n\n\\begin{figure}[htbp]\n\\centerline{\n\\includegraphics[scale=0.35]{Inelastic2_ct_q1.eps}\\quad\n\\includegraphics[scale=0.35]{Inelastic2_ct_q2.eps}} \\kern-0.315\\textwidth\n\\hbox to\n\\textwidth{\\hss(a)\\kern6.5em\\hss(b)\\kern-1.5em} \\kern+0.315\\textwidth\n\\centerline{\n\\includegraphics[scale=0.35]{Inelastic2_q1.eps}\\quad\n\\includegraphics[scale=0.35]{Inelastic2_q2.eps}} \\kern-0.315\\textwidth\n\\hbox to\n\\textwidth{\\hss(c)\\kern6.5em\\hss(d)\\kern-1.5em} \\kern+0.315\\textwidth\n\\caption{Inelastic collision in coupled\ncomplex short pulse equation for $p_{1}=1+1.2{\\rm i}$, $p_{2}=1+2{\\rm i}$, $\\alpha^{(1)}_{1}=\\alpha^{(2)}_{1}=1.0$, $\\alpha^{(1)}_{2}=0$, $\\alpha^{(2)}_{2}=1.0$. (a)-(b): contour plot; (c)-(d): profiles before and after the collision.}\n\\label{f:inelastic2}\n\\end{figure}\nOn the other hand, if we change the parameters as $\\alpha^{(1)}_{2}=1.0$, \n\\alpha^{(2)}_{2}=0$, then the energy of one soliton, which are distributed between two components before the\ncollision is concentrated into one component $q_2$ after the collision.\n The example is\nshown in Fig. 7.\n\\begin{figure}[htbp]\n\\centerline{\n\\includegraphics[scale=0.35]{Inelastic3_ct_q1.eps}\\quad\n\\includegraphics[scale=0.35]{Inelastic3_ct_q2.eps}} \\kern-0.315\\textwidth\n\\hbox to\n\\textwidth{\\hss(a)\\kern6.5em\\hss(b)\\kern-1.5em} \\kern+0.315\\textwidth\n\\centerline{\n\\includegraphics[scale=0.35]{Inelastic3_q1.eps}\\quad\n\\includegraphics[scale=0.35]{Inelastic3_q2.eps}} \\kern-0.315\\textwidth\n\\hbox to\n\\textwidth{\\hss(c)\\kern6.5em\\hss(d)\\kern-1.5em} \\kern+0.315\\textwidth\n\\caption{Inelastic collision in coupled\ncomplex short pulse equation for $p_{1}=1+1.2\\mathrm{i}$, $p_{2}=1+2\\mathrm{\n}$, $\\protect\\alpha^{(1)}_{1}=\\alpha^{(2)}_{1}=1.0$, \\ $\\alpha^{(1)}_{2}=1.0$, $\\alpha^{(2)}_{2}=0$. (a)-(b): contour plot; (c)-(d): profiles before and after the collision.}\n\\label{f:inelastic3}\n\\end{figure}\n\\section{Concluding Remarks}\nIn this paper, we proposed a complex short pulse equation and its\ntwo-component generalization. Both of the equations can be used to model\nthe propagation of ultra-short pulses in optical fibers. We have shown their\nintegrability by finding the Lax pairs and infinite numbers of conservation\nlaws. Furthermore, multi-soliton solutions are constructed via Hirota's\nbilinear method. 
In particular, the one-soliton solution of the CSP equation is\nan envelope soliton with a few optical cycles under certain conditions, which\nperfectly matches the requirement for ultra-short pulses.\nThe $N$-soliton solution for the complex short pulse equation and its\ntwo-component generalization is a benchmark for the study of soliton\ninteractions in ultra-short pulse propagation in optical fibers. It is expected that these analytical\nsolutions can be confirmed by experiments.\n\nSimilar to our previous results for the integrable discretizations of the short pulse equation \\cite{SPE_discrete1}, how to construct integrable discretizations of the CSP and coupled CSP equations and how to\napply them to numerical simulations is also an interesting topic to be\nstudied. Since this is beyond the scope of the present paper, we\nwill report the results on this aspect in a forthcoming paper.\n\n\n\n\n\n\n\\apptitle\n\\section{}\n\\appeqn\n\\textbf{Proof of Theorem 4.2}\n\n\\begin{proof}\nFirst we define\n\\[\n(b_j, \\bar{\\beta}_1)= \\bar{\\alpha}_j\\delta_{\\mu,1} \\,, \\quad (b_j, \\bar{\\beta\n}_2)= \\bar{\\alpha}_j\\delta_{\\mu,2}\\,,\n\\]\nwhere $\\mathrm{index}(b_j)=\\mu$, then from the fact\n\\[\n\\mathrm{Pf}(\\bar{a}_j,a_k)= \\mathrm{Pf} (a_{N+j},a_{N+k})\\,, \\mathrm{Pf}\n(\\bar{b}_j,b_k)= \\mathrm{Pf} (b_{N+j},b_{N+k})\\,,\n\\]\nwe obtain\n\\[\n\\bar{f}=f\\,, \\quad \\bar{g}= \\mathrm{Pf} (d_0, \\bar{\\beta}_1, a_1, \\cdots,\na_{2N}, b_1, \\cdots, b_{2N})\\,.\n\\]\nSince\n\\[\n\\frac{\\partial} {\\partial y} \\mathrm{Pf} (a_j,a_k)= (p_j -\np_k)e^{\\eta_j+\\eta_k} = \\mathrm{Pf} (d_0, d_1, a_j,a_k)\\,,\n\\]\n\n\\[\n\\frac{\\partial} {\\partial s} \\mathrm{Pf} (a_j,a_k)= (p^{-1}_k - p^{-1}_j)\ne^{\\eta_j+\\eta_k} = \\mathrm{Pf} (d_{-1}, d_0, a_j,a_k)\\,,\n\\]\n\n\\[\n\\frac{\\partial^2} {\\partial s^2} \\mathrm{Pf} (a_j,a_k)= (p^{-2}_k -\np^{-2}_j) e^{\\eta_j+\\eta_k} = \\mathrm{Pf} (d_{-2}, d_0, a_j,a_k)\\,,\n\\]\n\\[\n\\frac{\\partial^2} {\\partial y \\partial s}\\mathrm{Pf} (a_j,a_k)= (p_jp^{-1}_k\n- p_k p^{-1}_j) e^{\\eta_j+\\eta_k} = \\mathrm{Pf} (d_{-1}, d_1, a_j,a_k)\\,,\n\\]\nwe then have\n\\[\n\\frac{\\partial f} {\\partial y} = \\mathrm{Pf} (d_0, d_1, \\cdots)\\,,\n\\]\n\n\\[\n\\frac{\\partial f} {\\partial s} = \\mathrm{Pf} (d_{-1}, d_0, \\cdots)\\,,\n\\]\n\n\\[\n\\frac{\\partial^2 f} {\\partial s^2} = \\mathrm{Pf} (d_{-2}, d_0, \\cdots)\\,,\n\\]\n\n\\[\n\\frac{\\partial^2 f} {\\partial y \\partial s} = \\mathrm{Pf} (d_{-1}, d_1,\n\\cdots)\\,.\n\\]\nHere $\\mathrm{Pf} (d_0, d_1, a_1, \\cdots, a_{2N}, b_1, \\cdots, b_{2N})$ is\nabbreviated as $\\mathrm{Pf} (d_0, d_1, \\cdots)$, as are other similar\npfaffians.\n\nFurthermore, it can be shown that\n\\begin{eqnarray*}\n&& \\frac{\\partial g} {\\partial y} = \\frac{\\partial} {\\partial y} \\left[\n\\sum_{j=1}^{2N} (-1)^{j} \\mathrm{Pf} (d_0, a_j) \\mathrm{Pf} (\\beta_1, \\cdots\n,\\hat{a}_j, \\cdots)\\right] \\\\\n&& =\\sum_{j=1}^{2N} (-1)^{j} \\left[ \\left( {\\partial_y} \\mathrm{Pf} (d_0,\na_j) \\right) \\mathrm{Pf} (\\beta_1, \\cdots ,\\hat{a}_j, \\cdots) + \\mathrm{Pf}\n(d_0, a_j) {\\partial_y} \\mathrm{Pf} (\\beta_1, \\cdots ,\\hat{a}_j, \\cdots)\n\\right] \\\\\n&& =\\sum_{j=1}^{2N} (-1)^{j} \\left[ \\mathrm{Pf} (d_1, a_j) \\mathrm{Pf}\n(\\beta_1, \\cdots ,\\hat{a}_j, \\cdots) + \\mathrm{Pf} (d_0, a_j) \\mathrm{Pf}\n(\\beta_1, d_0, d_1, \\cdots ,\\hat{a}_j, \\cdots) \\right] \\\\\n&& = \\mathrm{Pf} ( d_1, \\beta_1, \\cdots)+ \\mathrm{Pf} ( d_0, \\beta_1, d_0,\nd_1, \\cdots) \\\\\n&& = \\mathrm{Pf} (d_1, \\beta_1, 
\\cdots)\\,.\n\\end{eqnarray*}\n\nHere $\\hat{a}_j$ means that the index $j$ is omitted. Similarly, we can show\n\\[\n\\frac{\\partial g} {\\partial s} = \\mathrm{Pf} (d_{-1}, \\beta_1, \\cdots)\\,,\n\\]\n\n\\begin{eqnarray*}\n&& \\frac{\\partial^2 g} {\\partial y \\partial s} = \\frac{\\partial} {\\partial y}\n\\left[\\sum_{j=1}^{2N} (-1)^{j} \\mathrm{Pf} (d_{-1}, a_j) \\mathrm{Pf}\n(\\beta_1, \\cdots ,\\hat{a}_j, \\cdots)\\right] \\\\\n&& =\\sum_{j=1}^{2N} (-1)^{j} \\left[ \\left( {\\partial_y} \\mathrm{Pf} (d_{-1},\na_j) \\right) \\mathrm{Pf} (\\beta_1, \\cdots ,\\hat{a}_j, \\cdots) + \\mathrm{Pf}\n(d_{-1}, a_j) {\\partial_y} \\mathrm{Pf} (\\beta_1, \\cdots ,\\hat{a}_j, \\cdots)\n\\right] \\\\\n&& =\\sum_{j=1}^{2N} (-1)^{j} \\left[ \\mathrm{Pf} (d_0, a_j) \\mathrm{Pf}\n(\\beta_1, \\cdots ,\\hat{a}_j, \\cdots) + \\mathrm{Pf} (d_{-1}, a_j) \\mathrm{Pf}\n(\\beta_1, d_0, d_1, \\cdots ,\\hat{a}_j, \\cdots) \\right] \\\\\n&& = \\mathrm{Pf} (d_0, \\beta_1, \\cdots)+ \\mathrm{Pf} (d_{-1}, \\beta_1, d_0,\nd_1, \\cdots)\\,.\n\\end{eqnarray*}\n\nAn algebraic identity of pfaffian \\cite{Hirota}\n\\begin{eqnarray*}\n&& \\mathrm{Pf} (d_{-1}, \\beta_1, d_0, d_1, \\cdots) \\mathrm{Pf} (\\cdots)=\n\\mathrm{Pf} (d_{-1}, d_0, \\cdots) \\mathrm{Pf} (d_1, \\beta_1, \\cdots) \\\\\n&& \\quad - \\mathrm{Pf} (d_{-1}, d_1, \\cdots) \\mathrm{Pf} (d_0, \\beta_1,\n\\cdots) + \\mathrm{Pf} (d_{-1}, \\beta_1, \\cdots) \\mathrm{Pf} (d_0, d_1,\n\\cdots)\\,,\n\\end{eqnarray*}\nimplies\n\\[\n( {\\partial_s} {\\partial_y} g-g) \\times f = {\\partial_s} f \\times {\\partial_\n} g - {\\partial_s} {\\partial_y} f \\times g + {\\partial_s} g \\times \n\\partial_y} f \\,.\n\\]\nTherefore, the first bilinear equation is approved.\n\nThe second bilinear equation can be proved in the same way by Iwao and\nHirota \\cite{IwaoHirota}.\n\\begin{eqnarray}\n&& \\frac{\\partial^2 f} {\\partial s^2} \\times 0 - \\frac{\\partial f} {\\partial\ns} \\frac{\\partial f} {\\partial s} \\nonumber \\\\\n&& = \\mathrm{Pf} (d_{-2}, d_0, \\cdots) \\mathrm{Pf} (d_{0}, d_0, \\cdots) -\n\\mathrm{Pf} (d_{-1}, d_0, \\cdots) \\mathrm{Pf} (d_{-1}, d_0, \\cdots)\n\\nonumber \\\\\n&& = \\sum_{i=1}^{2N} (-1)^i \\mathrm{Pf} (d_{-2}, a_i) \\mathrm{Pf} (d_0,\n\\cdots, \\hat{a}_i, \\cdots) \\sum_{j=1}^{2N} (-1)^j \\mathrm{Pf} (d_{0}, a_j)\n\\mathrm{Pf} (d_0, \\cdots, \\hat{a}_j, \\cdots) \\nonumber \\\\\n&& -\\sum_{i=1}^{2N} (-1)^i \\mathrm{Pf} (d_{-1}, a_i) \\mathrm{Pf} (d_0,\n\\cdots, \\hat{a}_i, \\cdots) \\sum_{j=1}^{2N} (-1)^j \\mathrm{Pf} (d_{-1}, a_j)\n\\mathrm{Pf} (d_0, \\cdots, \\hat{a}_j, \\cdots) \\nonumber \\\\\n&& =\\sum_{i,j=1}^{2N} (-1)^{i+j} \\left[ \\mathrm{Pf} (d_{-2}, a_i) \\mathrm{Pf}\n(d_{0}, a_j) -\\mathrm{Pf} (d_{-1}, a_i) \\mathrm{Pf} (d_{-1}, a_j) \\right]\n\\nonumber \\\\\n&& \\quad \\times \\mathrm{Pf} (d_0, \\cdots, \\hat{a}_i, \\cdots) \\mathrm{Pf}\n(d_0, \\cdots, \\hat{a}_j, \\cdots) \\nonumber \\\\\n&&=\\sum_{i,j=1}^{2N} (-1)^{i+j+1} \\left[p_i^{-2} + p_i^{-1}p_j^{-1} \\right]\n\\mathrm{Pf} (a_i, a_j) \\mathrm{Pf} (d_0, \\cdots, \\hat{a}_i, \\cdots) \\mathrm\nPf} (d_0, \\cdots, \\hat{a}_j, \\cdots) \\nonumber\n\\end{eqnarray}\n\nThe summation over the second term within the bracket vanishes due to the\nfact that\n\\begin{eqnarray*}\n&& \\sum_{i,j=1}^{2N} (-1)^{i+j+1} p_i^{-1}p_j^{-1} \\mathrm{Pf} (a_i, a_j)\n\\mathrm{Pf} (d_0, \\cdots, \\hat{a}_i, \\cdots) \\mathrm{Pf} (d_0, \\cdots, \\hat{\n}_j, \\cdots) \\\\\n&& = \\sum_{j,i=1}^{2N} (-1)^{j+i+1} p_j^{-1}p_i^{-1} \\mathrm{Pf} (a_j, a_i)\n\\mathrm{Pf} (d_0, \\cdots, \\hat{a}_j, 
\\cdots) \\mathrm{Pf} (d_0, \\cdots, \\hat{\n}_i, \\cdots) \\\\\n&& = -\\sum_{i,j=1}^{2N} (-1)^{i+j+1} p_i^{-1}p_j^{-1} \\mathrm{Pf} (a_i, a_j)\n\\mathrm{Pf} (d_0, \\cdots, \\hat{a}_i, \\cdots) \\mathrm{Pf} (d_0, \\cdots, \\hat{\n}_j, \\cdots)\\,.\n\\end{eqnarray*}\nTherefore,\n\\begin{eqnarray}\n&& - \\frac{\\partial f} {\\partial s} \\frac{\\partial f} {\\partial s}\n=\\sum_{i,j=1}^{2N} (-1)^{i+j+1} p_i^{-2} \\mathrm{Pf} (a_i, a_j) \\mathrm{Pf}\n(d_0, \\cdots, \\hat{a}_i, \\cdots) \\mathrm{Pf} (d_0, \\cdots, \\hat{a}_j, \\cdots)\n\\nonumber \\\\\n&&=\\sum_{i=1}^{2N} (-1)^{i+1} p_i^{-2} \\mathrm{Pf} (d_0, \\cdots, \\hat{a}_i,\n\\cdots) \\left[ \\sum_{j=1}^{2N} (-1)^{j} \\mathrm{Pf} (a_i, a_j) \\mathrm{Pf}\n(d_0, \\cdots, \\hat{a}_j, \\cdots) \\right] \\nonumber \\label{CSP1_proof5}\n\\end{eqnarray}\nFurther, we note that the following identity can be substituted into the\nterm within bracket\n\\begin{eqnarray*}\n&& \\sum_{j=1}^{2N} (-1)^{j} \\mathrm{Pf} (a_i, a_j) \\mathrm{Pf} (d_0, \\cdots,\n\\hat{a}_j, \\cdots) \\\\\n&& = \\mathrm{Pf} (d_{0}, a_i) \\mathrm{Pf} (\\cdots) + (-1)^{i+1} \\mathrm{Pf}\n(d_0, \\cdots, \\hat{b}_i, \\cdots)\\,\n\\end{eqnarray*}\nwhich is obtained from the expansion of the following vanishing pfaffian \n\\mathrm{Pf} (a_i, d_0, \\cdots)$ on $a_i$. Consequently, we have\n\\begin{eqnarray}\n&& - \\frac{\\partial f} {\\partial s} \\frac{\\partial f} {\\partial s} =\n\\nonumber \\\\\n&& \\sum_{i=1}^{2N} (-1)^{i+1} p_i^{-2} \\mathrm{Pf} (d_0, \\cdots, \\hat{a}_i,\n\\cdots) \\left[\\mathrm{Pf} (d_{0}, a_i) \\mathrm{Pf} (\\cdots) + (-1)^{i+1}\n\\mathrm{Pf} (d_0, \\cdots, \\hat{b}_i, \\cdots)\\right]\\,, \\nonumber \\\\\n&& = -\\mathrm{Pf} (\\cdots) \\mathrm{Pf} (d_{-2}, d_0, \\cdots)+\n\\sum_{i=1}^{2N} p_i^{-2} \\mathrm{Pf} (d_0, \\cdots, \\hat{a}_i, \\cdots)\n\\mathrm{Pf} (d_0, \\cdots, \\hat{b}_i, \\cdots)\\,,\n\\end{eqnarray}\nwhich can be rewritten as\n\\begin{equation}\n\\frac{\\partial^2 f} {\\partial s^2} f- \\frac{\\partial f} {\\partial s} \\frac\n\\partial f} {\\partial s}= \\sum_{i=1}^{2N} p_i^{-2} \\mathrm{Pf} (d_0, \\cdots,\n\\hat{a}_i, \\cdots) \\mathrm{Pf} (d_0, \\cdots, \\hat{b}_i, \\cdots)\\,.\n\\end{equation}\n\nNow, we work on the r.h.s of the second bilinear equation.\n\\begin{eqnarray} \\label{CSP1_proof1}\n&& \\frac 12 |g|^2 = \\frac 12 \\mathrm{Pf} (d_0,\\beta_1, \\cdots) \\mathrm{Pf}\n(d_0, \\bar{\\beta}_1, \\cdots) \\nonumber \\\\\n&& = \\frac 12 \\sum_{i,j}^{2N} (-1)^{i+j}\\mathrm{Pf} (b_i, \\beta_1) \\mathrm{P\n} (d_0, \\cdots,\\hat{b}_i, \\cdots) \\mathrm{Pf} (b_j,\\bar{\\beta}_1) \\mathrm{Pf}\n(d_0, \\cdots,\\hat{b}_j, \\cdots) \\nonumber \\\\\n&& = \\frac 14 \\sum_{i,j}^{2N} (-1)^{i+j} (\\alpha_i \\bar{\\alpha}_j) \\mathrm{P\n} (d_0, \\cdots,\\hat{b}_i, \\cdots) \\mathrm{Pf} (d_0, \\cdots,\\hat{b}_j, \\cdots)\n\\nonumber \\\\\n&& = \\sum_{i,j}^{2N} (-1)^{i+j} \\left(p_i^{-2}-p_{j}^{-2}\\right) \\mathrm{Pf}\n(b_i,b_j)\\mathrm{Pf}(d_0, \\cdots,\\hat{b}_i, \\cdots) \\mathrm{Pf} (d_0, \\cdots\n\\hat{b}_j, \\cdots) \\nonumber \\\\\n\\end{eqnarray}\nNext, the expansion of the vanishing pfaffian $\\mathrm{Pf} (b_i, d_0,\n\\cdots) $ on $b_i$ yields\n\\begin{equation}\n\\sum_{j=1}^{2N} (-1)^{i+j}\\mathrm{Pf} (b_i, b_j) \\mathrm{Pf} (d_0, \\cdots,\n\\hat{b}_j, \\cdots) = \\mathrm{Pf} (d_0, \\cdots,\\hat{a}_i, \\cdots)\\,,\n\\end{equation}\nwhich subsequently leads to\n\\begin{eqnarray}\n&& \\sum_{i,j}^{2N} (-1)^{i+j} p_i^{-2} \\mathrm{Pf} (b_i,b_j)\\mathrm{Pf}(d_0,\n\\cdots,\\hat{b}_i, \\cdots) \\mathrm{Pf} (d_0, \\cdots,\\hat{b}_j, 
\\cdots)\n\\nonumber \\\\\n&& = \\sum_{i}^{2N} p_i^{-2} \\mathrm{Pf} (d_0, \\cdots,\\hat{a}_i, \\cdots)\n\\mathrm{Pf} (d_0, \\cdots,\\hat{b}_i, \\cdots)\\,. \\label{CSP1_proof2}\n\\end{eqnarray}\nSimilarly, we can show that\n\\begin{eqnarray}\n&& -\\sum_{i,j}^{2N} (-1)^{i+j} p_j^{-2} \\mathrm{Pf} (b_i,b_j)\\mathrm{Pf\n(d_0, \\cdots,\\hat{b}_i, \\cdots) \\mathrm{Pf} (d_0, \\cdots,\\hat{b}_j, \\cdots)\n\\nonumber \\\\\n&& = \\sum_{j}^{2N} p_j^{-2} \\mathrm{Pf} (d_0, \\cdots,\\hat{a}_j, \\cdots)\n\\mathrm{Pf} (d_0, \\cdots,\\hat{b}_j, \\cdots)\\,. \\label{CSP1_proof3}\n\\end{eqnarray}\nSubstituting Eqs. (\\ref{CSP1_proof2})--(\\ref{CSP1_proof2}) into Eq. (\\re\n{CSP1_proof1}), we arrive at\n\\begin{equation} \\label{CSP1_proof4}\n\\frac 12 |g|^2= 2\\sum_{i}^{2N} p_i^{-2} \\mathrm{Pf} (d_0, \\cdots,\\hat{a}_i,\n\\cdots) \\mathrm{Pf} (d_0, \\cdots,\\hat{b}_i, \\cdots)\\,.\n\\end{equation}\nConsequently we have\n\\begin{equation}\n2\\frac{\\partial^2 f} {\\partial s^2} f- 2\\frac{\\partial f} {\\partial s} \\frac\n\\partial f} {\\partial s}= \\frac 12 |g|^2\\,,\n\\end{equation}\nwhich is nothing but the second bilinear equation. Therefore, the proof is\ncomplete.\n\\end{proof}\n\n\\textbf{The proof of Theorem 4.6}\n\n\\begin{proof}\nThe proof of the first bilinear equation can be done exactly in the same way\nas for the complex short pulse equation. In what follows, we prove the\nsecond equation by starting from the r.h.s of this equation. Because\n\\[\n\\bar{g}_1= \\mathrm{Pf} (d_0, \\bar{\\beta}_1, a_1, \\cdots, a_{2N}, b_1,\n\\cdots, b_{2N})\\,,\n\\]\n\\[\n\\bar{g}_2= \\mathrm{Pf} (d_0, \\bar{\\beta}_2, a_1, \\cdots, a_{2N}, b_1,\n\\cdots, b_{2N})\\,,\n\\]\nthe r.h.s of the bilinear equation turns out to be\n\\begin{eqnarray} \\label{CCSP1_proof1}\n&& \\frac 12 \\left( g_{1} \\bar{g}_{1} + g_{2} \\bar{g}_{2} \\right) \\nonumber\n\\\\\n&& = \\frac 12 \\sum^2_{k=1} \\sum_{i,j}^{2N} (-1)^{i+j}\\mathrm{Pf} (b_i,\n\\beta_k) \\mathrm{Pf} (d_0, \\cdots,\\hat{b}_i, \\cdots) \\mathrm{Pf} (b_j,\\bar\n\\beta}_k) \\mathrm{Pf} (d_0, \\cdots,\\hat{b}_j, \\cdots) \\nonumber \\\\\n&& = \\frac 14 \\sum_{i,j}^{2N} (-1)^{i+j} \\sum^2_{k=1}(\\alpha^{(k)}_i \\bar\n\\alpha}^{(k)}_j) \\mathrm{Pf} (d_0, \\cdots,\\hat{b}_i, \\cdots) \\mathrm{Pf}\n(d_0, \\cdots,\\hat{b}_j, \\cdots) \\nonumber \\\\\n&& = \\sum_{i,j}^{2N} (-1)^{i+j} \\left(p_i^{-2}-p_{j}^{-2}\\right) \\mathrm{Pf}\n(b_i,b_j)\\mathrm{Pf}(d_0, \\cdots,\\hat{b}_i, \\cdots) \\mathrm{Pf} (d_0, \\cdots\n\\hat{b}_j, \\cdots) \\nonumber \\\\\n\\end{eqnarray}\nSimilar to the complex short pulse equation, we can show\n\n\\begin{equation} \\label{CCSP1_proof4}\n\\frac 12 \\left(|g_{1}|^2 + |g_{2}|^2 \\right) = 2\\sum_{i}^{2N} p_i^{-2}\n\\mathrm{Pf} (d_0, \\cdots,\\hat{a}_i, \\cdots) \\mathrm{Pf} (d_0, \\cdots,\\hat{b\n_i, \\cdots)\\,.\n\\end{equation}\nRegarding the r.h.s of the bilinear equation, exactly the same as the proof\nof the Theorem 4.2, we have\n\\begin{equation} \\label{CCSP1_proof5}\n\\frac{\\partial^2 f} {\\partial s^2} f- \\frac{\\partial f} {\\partial s} \\frac\n\\partial f} {\\partial s} =\\sum_{i}^{2N} p_i^{-2} \\mathrm{Pf} (d_0, \\cdots\n\\hat{a}_i, \\cdots) \\mathrm{Pf} (d_0, \\cdots,\\hat{b}_i, \\cdots)\\,.\n\\end{equation}\nTherefore the second bilinear equation is proved.\n\\end{proof}\n\n\\thank\n\\section{}\nThe author is grateful for the useful discussions with Dr. Yasuhiro Ohta\n(Kobe University) and Dr. Kenichi Maruno at Waseda University. This work is partially supported by the National Natural Science Foundation of China (No. 
11428102).\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nWith the exponential growth of Internet usage, online users massively publish textual content on online media. For instance, a micro-blogging website, Twitter, allows users to post their content in 140-characters length. A popular social media like Facebook allows users to interact and share content in their communities, as known as ``Friends''. An electronic commercial website, Amazon, allows users to ask questions on their interested items and give reviews on their purchased products. While these textual data have been broadly studied in various research areas (e.g. automatic text summarization, information retrieval, information extraction, etc.), online debate domain, which recently becomes popular among Internet users, has not yet largely explored. For this reason, there are no sufficient resources of annotated debate data available for conducting research in this genre. This motivates us to explore online debate data. \n\nIn this paper, we collected and annotated debate data for an automatic summarization task. There are 11 debate topics collected. Each topic consists of different number of debate comments. In total, there are 341 debate comments collected, accounting for 2518 sentences. In order to annotate online debate data, we developed a web-based system which simply runs on web browsers. We designed the user interface for non-technical users. When participants logged into the system, a debate topic and a comment which is split to a list of consecutive sentences were shown at a time. The annotators were asked to select salient sentences from each comment which summarize it. The number of salient sentences chosen from each comment is controlled by a compression rate of 20\\% which is automatically calculated by the web-based system. For instance, Table \\ref{table_annotation} shows a debate comment to be annotated by an annotator. Based on the compression rate of 20\\%, the annotator needs to choose 1 sentence that summarizes the comment. This compression rate was also used in \\cite{Neto2002ATS} and \\cite{Morris199217}. In total, we obtained 5 sets of annotated debate data. Each set of data consists of 341 comments with total 519 annotated salient sentences. \n\n\nInter-annotator agreement in terms of Cohen's Kappa and Krippendorff's alpha are 0.28 and 0.27 respectively. For social media data such low agreements have been also reported by related work. For instance, \\cite{Mitrat} reports Kappa scores between 0.20 and 0.50 for human constructed newswire summaries. \\cite{Liu:2008:CRH:1557690.1557747} reports again Kappa scores between 0.10 and 0.35 for the conversation transcripts. Our agreement scores are based on strict conditions where agreement is achieved when annotators have selected exact the same sentences. However, such condition does not consider syntactically different sentences bearing the same semantic meaning. Thus we also experimented with a more relaxed version that is based on semantic similarity between sentences. We regard two sentences as identical when their semantic similarity is above a threshold. Our results revealed that after applying such an approach the averaged Cohen's Kappa and Krippendorff's alpha increase to 35.71\\% and 48.15\\% respectively. \n\n\n\n\nFinally we report our results of automatic debate data summarization. We implemented an extractive text summarization system that extracts salience sentences from user comments. 
Among the features the most contributing ones are sentence position, debate titles, and cosine similarity of the debate title words and sentences. \n\n\nThe paper is structured as follows. First we describe the nature of our online debate data. In Section \\ref{data_annotation} we discuss the procedures of data annotation and discuss our experiments with semantic similarity applied on inter-annotator agreement computation. In Section \\ref{experiment_salient}, we present our first results on automatically performing debate data summarization. We conclude in Section \\ref{conclusion}.\n\n\n\n\n\\begin{table}[ht]\n\\begin{flushleft}\n\\begin{framed}\n\\noindent\\textbf{Task 02: Is global warming fictitious?}\\\\\n\\emph{$[1]$} I do not think global warming is fictitious.\\\\\n\\emph{$[2]$} I understand a lot of people do not trust every source and they need solid proof.\\\\\n\\emph{$[3]$} However, if you look around us the proof is everywhere.\\\\\n\\emph{$[4]$} It began when the seasons started getting harsh and the water levels were rising.\\\\\n\\emph{$[5]$} I do not need to go and see the ice caps melting to know the water levels are rising and the weather is changing.\\\\\n\\emph{$[6]$} I believe global warming is true, and we should try and preserve as much of the Earth as possible.\n\\end{framed}\n\\end{flushleft}\n\\caption{Examples of the debate data to be annotated.}\\label{table_annotation} \n\\end{table}\n\n\n\n\n\n\\begin{table}[ht]\n\\begin{flushleft}\n\\begin{framed}\n\\textbf{Example 1: Propositions from the proponents} \n\\\\ - Global warming is real.\n\\\\ - Global warming is an undisputed scientific fact. \n\\\\ - Global warming is most definitely not a figment of anyone's imagination, because the proof is all around us.\n\\\\ - I believe that global warming is not fictitious, based on the observational and comparative evidence that is currently presented to us.\n\\vskip 0.2in\n\\textbf{Example 2: Propositions from the opponents} \n\\\\ - Global warming is bull crap.\n\\\\ - Global Warming isn't a problem at all.\n\\\\ - Just a way for the government to tax people on more things by saying their trying to save energy.\n\\\\ - Yes, global warming is a myth, because they have not really proven the science behind it. \n\\\\ \n\\end{framed}\n\\end{flushleft}\n\\caption{Examples of Paraphrased Arguments.}\\label{table_pargument} \n\\end{table}\n\\FloatBarrier\n\n\n\\section{Online Debate Data and Their Nature} \\label{nature_debate}\n\nThe nature of online debate is different from other domains. It gives opportunities to users to discuss ideological debates in which users can choose a stance of a debate, express their opinions to support their stance, and oppose other stances. To conduct our experiments we collected debate data from the Debate discussion forum.\\footnote{http:\/\/www.debate.org} The data are related to an issue of the existence of global warming. In the data, there are two main opposing sides of the arguments. A side of proponents believes in the existence of global warming and the other side, the opponents, says that global warming is not true. When the proponents and the opponents express their sentiments, opinions, and evidences to support their propositions, the arguments between them arise. Moreover, when the arguments are referred across the conversation in the forum, they are frequently paraphrased. Table \\ref{table_pargument} illustrates examples of the arguments being paraphrased. Sentences expressing related meaning are written in different context. 
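As a rough illustration of how such paraphrased propositions can be detected automatically, the sketch below scores proposition pairs with TF-IDF cosine similarity. It is only a simplified stand-in for the Doc2Vec-based semantic similarity adopted later for the relaxed agreement computation; the example propositions are taken from the table above, and the 0.5 threshold is an assumed value used purely for illustration.
\\begin{verbatim}
# Minimal sketch (illustrative only): flag near-paraphrases among debate
# propositions using TF-IDF cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

propositions = [
    "Global warming is real.",
    "Global warming is an undisputed scientific fact.",
    "Global warming is bull crap.",
    "Global Warming isn't a problem at all.",
]

tfidf = TfidfVectorizer().fit_transform(propositions)
scores = cosine_similarity(tfidf)

for i in range(len(propositions)):
    for j in range(i + 1, len(propositions)):
        if scores[i, j] >= 0.5:  # assumed threshold, not tuned
            print("possible paraphrase:", propositions[i], "|", propositions[j])
\\end{verbatim}
A purely lexical measure of this kind misses paraphrases that share little vocabulary, which motivates the distributed sentence representations used in the relaxed agreement computation described in the next section.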
\n\n\n\\section{Annotation Procedures} \\label{data_annotation}\nIn this paper, we collected and annotated debate data for an automatic summarization task. There are 11 debate topics collected. Each topic consists of a different number of debate comments as shown in Table \\ref{table_stats_corpus}. The annotation was guided through a web-based application. The application was designed for non-technical users. When participants logged in to the system, a debate topic and a comment which is split to a list of sentences were shown at a time. The annotators were given a guideline to read and select salient sentences that summarize the comments. From each comment we allowed the participants to select only 20\\% of the comment sentences. These 20\\% of the sentences are treated as the summary of the shown comment. In the annotation task, all comments in the 11 debate topics were annotated. We recruited 22 participants: 10 males and 12 participants to annotate salient sentences. The participants' backgrounds were those who are fluent in English and aged above 18 years old. We aimed to have 5 annotations sets for each debate topic. Due to a limited number of annotators and a long list of comments to be annotated in each debate topic, 11 participants were asked to complete more than one debate topic, but were not allowed to annotate the same debate topics in which they had done before. In total, 55 annotation sets were derived: 11 debate topics and each with 5 annotation sets. Each annotation set consists of 341 comments with total 519 annotated salient sentences.\\footnote{This dataset can be downloaded at https:\/\/goo.gl\/3aicDN.}\n\n\\begin{table}[ht]\n\\centering\n\\begin{tabular}{@{}clccc@{}}\n\\toprule\n\\textbf{Topic ID} & \\multicolumn{1}{c}{\\textbf{Debate Topics}} & \\textbf{Comments} & \\textbf{Sentences} & \\textbf{Words} \\\\ \\midrule\n01 & Is global warming a myth? & 18 & 128 & 2701 \\\\\n02 & Is global warming fictitious? & 28 & 173 & 3346 \\\\\n03 & Is the global climate change man made? & 10 & 47 & 1112 \\\\\n04 & Is global climate change man-made? & 103 & 665 & 12054 \\\\\n05 & Is climate change man-made? & 9 & 46 & 773 \\\\\n06 & Do you believe in global warming? & 21 & 224 & 3538 \\\\\n07 & Does global warming exist? & 68 & 534 & 9178 \\\\\n08 & \\begin{tabular}[c]{@{}l@{}}Can someone prove that climate \\\\ change is real (yes) or fake (no)?\\end{tabular} & 8 & 49 & 1127 \\\\\n09 & Is global warming real? & 51 & 434 & 6749 \\\\\n10 & Is global warming true? & 5 & 26 & 375 \\\\\n11 & \\begin{tabular}[c]{@{}l@{}}Is global warming real (yes) or just a bunch \\\\ of scientist going to extremes (no)?\\end{tabular} & 20 & 192 & 2988 \\\\\\midrule\n\\textbf{} & \\multicolumn{1}{r}{\\textbf{Average}} & \\textbf{31} & \\textbf{229} & \\textbf{3995} \\\\\n\\textbf{} & \\multicolumn{1}{r}{\\textbf{Total}} & \\textbf{341} & \\textbf{2518} & \\textbf{43941} \\\\ \\bottomrule\n\\end{tabular}\n\\caption{Statistical information of the online debate corpus.}\n\\label{table_stats_corpus}\n\\end{table}\n\n\n\n\\subsection{Inter-Annotator Agreement}\n\nIn order to compute inter-annotator agreement between the annotators we calculated the averaged Cohen's Kappa and Krippendorff's alpha with a distant metric, Measuring Agreement on Set-valued Items metric (MASI). The scores of averaged Cohen's Kappa and Krippendorff's alpha are 0.28 and 0.27 respectively. 
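For reference, agreement figures of this kind can be computed with NLTK's AnnotationTask, which accepts a set-valued distance such as MASI. The sketch below uses hypothetical toy annotations (frozensets of selected sentence indices), not the actual annotation sets collected in this work.
\\begin{verbatim}
# Sketch with assumed toy data: Krippendorff's alpha (MASI distance) and
# averaged pairwise Cohen's kappa over set-valued sentence selections.
from nltk.metrics.agreement import AnnotationTask
from nltk.metrics.distance import masi_distance

# (coder, item, label) triples; each label is the set of sentence indices
# an annotator selected from one comment.
triples = [
    ("a1", "comment_01", frozenset({1})),
    ("a2", "comment_01", frozenset({1, 6})),
    ("a3", "comment_01", frozenset({6})),
    ("a1", "comment_02", frozenset({2, 3})),
    ("a2", "comment_02", frozenset({2, 3})),
    ("a3", "comment_02", frozenset({3})),
]

task = AnnotationTask(data=triples, distance=masi_distance)
print("Krippendorff's alpha (MASI):", task.alpha())
print("Averaged pairwise Cohen's kappa:", task.kappa())
\\end{verbatim}
Here each comment is an item and each annotator's selection is a single set-valued label, following the MASI-based formulation mentioned above.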
According to the scale of \\cite{krippendorff-2004}, our alpha did neither accomplish the reliability scale of 0.80, nor the marginal scales between 0.667 and 0.80. Likewise, our Cohen's Kappa only achieved the agreement level of \\emph{fair agreement}, as defined by \\cite{Landis77}. However, such low agreement scores are also reported by others who aimed creating gold standard summaries from news texts or conversational data \\cite{Mitrat} \\cite{Liu:2008:CRH:1557690.1557747} .\n\n\nOur analysis shows that the low agreement is caused by different preferences of annotators in the selection of salient sentences. As shown in Table \\ref{table_pargument} the sentences are syntactically different but bear the same semantic meaning. In a summarization task with a compression threshold, such situation causes the annotators to select one of the sentences but not all. Depending on each annotator's preference the selection leads to different set of salient sentences. To address this we relaxed the agreement computation by treating sentences equal when they are semantically similar. We outline details in the following section.\n\n\n\n\n\\subsection{Relaxed Inter-Annotator Agreement}\n\nWhen an annotator selects a sentence, other annotators might select other sentences expressing similar meaning. In this experiment, we aim to detect sentences that are semantically similar by applying Doc2Vec from the Gensim package \\cite{rehurek_lrec}. Doc2Vec model simultaneously learns the representation of words in sentences and the labels of the sentences. The labels are numbers or chunks of text which are used to uniquely identify each sentence. We used the debate data and a richer collections of sentences related to climate change to train the Doc2Vec model. In total, there are 10,920 sentences used as the training set. \n\nTo measure how two sentences are semantically referring to the same content, we used a function provided in the package to calculate cosine similarity scores among sentences. A cosine similarity score of 1 means that the two sentences are semantically equal and 0 is when it is opposite the case. In the experiment, we manually investigated pairs of sentences at different threshold values and found that the approach is stable at the threshold level above 0.44. The example below shows a pair of sentences obtained at 0.44 level. \\\\\n\n\n\\indent \\textbf{S1: }\\emph{Humans are emitting carbon from our cars, planes and factories, which is a heat trapping particle.}\\\\\n\\indent \\textbf{S2: }\\emph{So there is no doubt that carbon is a heat trapping particle, there is no doubt that our actions are emitting carbon into the air, and there is no doubt that the amount of carbon is increasing.}\\\\\n\n\nIn the pair, the two sentences mention the same topic (i.e. \\emph{carbon emission}) and express the idea in the same context. We used the threshold 0.44 to re-compute the agreement scores. By applying the semantic approach, the inter-annotator agreement scores of Cohen's Kappa and Krippendorff's alpha increase from 0.28 to 35.71\\% and from 0.27 to 48.15\\% respectively. The inter-annotator agreement results are illustrated in Table \\ref{iaa}. Note that, in the calculation of the agreement, we incremented the threshold by 0.02. 
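A minimal sketch of this similarity computation with Gensim's Doc2Vec is given below. The hyper-parameters (vector size, window, epochs) are assumed values for illustration rather than the settings used to obtain the reported results, and the two training sentences shown merely stand in for the 10,920-sentence training collection.
\\begin{verbatim}
# Sketch (assumed hyper-parameters): train Doc2Vec on the sentence
# collection and compare two sentences by cosine similarity.
import numpy as np
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

training_sentences = [
    "I do not think global warming is fictitious.",
    "However, if you look around us the proof is everywhere.",
]  # stand-in for the full training collection
corpus = [TaggedDocument(words=s.lower().split(), tags=[i])
          for i, s in enumerate(training_sentences)]
model = Doc2Vec(corpus, vector_size=100, window=5, min_count=1, epochs=40)

def semantically_same(s1, s2, threshold=0.44):
    """Treat two sentences as the same selection when cosine >= threshold."""
    v1 = model.infer_vector(s1.lower().split())
    v2 = model.infer_vector(s2.lower().split())
    cos = float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))
    return cos >= threshold
\\end{verbatim}
With such a predicate, two annotators' selections can be matched sentence by sentence before the agreement scores are recomputed, which is the relaxation applied above.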
Only particular thresholds are shown in the table due to the limited space.\n\n\n\n\n\n\\begin{table}[ht]\n\\centering\n\\begin{tabular}{@{}lcccc@{}}\n\\toprule\n\\multicolumn{1}{c}{\\textbf{Trial}} & \\textbf{\\begin{tabular}[c]{@{}c@{}}Threshold\\\\ ($\\ge$)\\end{tabular}} & \\textbf{$\\kappa$} & &\\textbf{$\\alpha$} \\\\ \\midrule\n\\multicolumn{1}{c}{Before} & & 0.28 && 0.27 \\\\\\midrule\n\\multicolumn{1}{c}{After} & 0.00 & 0.81 && 0.83 \\\\ \n & 0.10 & 0.62 & & 0.65 \\\\\n & 0.20 & 0.46 & & 0.50 \\\\\n & 0.30 & 0.40 & & 0.43 \\\\\n & 0.40 & 0.39 & & 0.41 \\\\\n & 0.42 & 0.38 & & 0.41 \\\\\n & \\textbf{0.44} & \\textbf{0.38} & & \\textbf{0.40} \\\\\n & 0.46 & 0.38 & & 0.40 \\\\\n & 0.48 & 0.38 & & 0.40 \\\\\n & 0.50 & 0.38 & & 0.40 \\\\\n & 0.60 & 0.38 & & 0.40 \\\\\n & 0.70 & 0.38 & & 0.40 \\\\\n & 0.80 & 0.38 & & 0.40 \\\\\n & 0.90 & 0.38 & & 0.40 \\\\\n & 1.00 & 0.38 & & 0.40 \\\\\\bottomrule\n\\end{tabular}\n\\caption{Inter-Annotator Agreement before and after applying the semantic similarity approach.}\n\\label{iaa}\n\\end{table}\n\n\n\n\n\n\\section{Automatic Salient Sentence Selection} \\label{experiment_salient} \n\\subsection{Support Vector Regression Model}\n\n In this experiment, we work on extractive summarization problem and aim to select sentences that are deemed important or that summarize the information mentioned in debate comments. Additionally, we aim to investigate the keys features which play the important roles in the summarization of the debate data. We view this salient sentence selection as a regression task. A regression score for each sentence is ranged between 1 to 5. It is derived by the number annotators selected that sentence divided by the number of all annotators. In this experiment, a popular machine learning package which is available in Python, called Scikit-learn \\cite{scikitLearn} is used to build a support vector regression model. We defined 8 different features and the support vector regression model combines the features for scoring sentences in each debate comment. From each comment, sentences with the highest regression scores are considered the most salient ones. \n\n\\subsection{Feature Definition}\n\\begin{enumerate}\n \\item \\textbf{Sentence Position (SP).}\nSentence position correlates with the important information in text \\cite{Baxendale,EdmundsonRatingSummary,Goldstein}. In general, humans are likely to mention the first topic in the earlier sentence and they express more information about it in the later sentences. We prove this claim by conducting a small experiment to investigate which sentence positions frequently contain salient sentences. From our annotated data, the majority votes of the sentences are significantly at the first three positions (approximately 60\\%), shaping the assumption that the first three sentences are considered as containing salient pieces of information. Equation \\ref{eq_sentence_position} shows the calculation of the score obtained by the sentence position feature. \\\\\n \\begin{equation}\t \\label{eq_sentence_position}\n SP=\\left\\{\n \\begin{array}{@{}ll@{}}\n \\frac{1}{sentence \\; position}, & \\text{if}\\ position <4 \\\\\n 0, & \\text{otherwise}\n \\end{array}\\right.\n \\end{equation} \n \n \n\\item \\textbf{Debate Titles (TT).}\nIn writing, a writer tends to repeat the title words in a document. For this reason, a sentence containing title words is likely to contain important information. We collected 11 debate titles as shown in Table \\ref{table_stats_corpus}. 
In our experiment, a sentence is considered as important when it contains mutual words as in debate titles. Equation \\ref{eq_titleword} shows the calculation of the score by this feature. \\\\\n \\begin{equation} \\label{eq_titleword}\n TT = \\frac{\\; number \\; of \\; title \\; words \\; in \\; sentence}{number \\; of \\; words \\;in \\;debate \\;titles}\n \\end{equation}\n \n \n \\item \\textbf{Sentence Length (SL).} \nSentence length also indicates the importance of sentence based on the assumption that either very short or very long sentences are unlikely to be included in the summary. Equation \\ref{eq_sentencelength} is used in the process of extracting salient sentences from debate comments. \\\\\n \\begin{equation} \\label{eq_sentencelength}\n SL = \\frac{\\; number \\; of \\; words \\; in \\; a \\; sentence}{number \\; of \\; words\\; in \\; the \\; longest \\; sentence}\n \\end{equation}\t\n\n\n \\item \\textbf{Conjunctive Adverbs (CJ).}\nOne possible feature that helps identify salient sentence is to determine conjunctive adverbs in sentences. Conjunctive adverbs were proved that they support cohesive structure of writing. For instance, ``the conjunctive adverb \\emph{moreover} has been used mostly in the essays which lead to a conclusion that it is one of the best accepted linkers in the academic writing process.\" \\cite{januliene2015use}. The NLTK POS Tagger\\footnote{http:\/\/www.nltk.org\/api\/nltk.tag.html} was used to determine conjunctive adverbs in our data. \\\\\n\n \\item \\textbf{Cosine Similarity.}\nCosine similarity has been used extensively in Information Retrieval, especially in the vector space model. Documents will be ranked according to the similarity of the given query. Equation \\ref{cosinesim} illustrates the equation of cosine similarity, where: \\emph{q} and \\emph{d} are n-dimensional vectors \\cite{Manning:1999:FSN:311445}. Cosine similarity is one of our features that is used to find similarity between two textual units. The following features are computed by applying cosine similarity. \n\n \\begin{equation} \\label{cosinesim}\t\n cos(q,d) = \\frac{\\sum\\limits_{i=1}^n q_{i} d_{i}}{\\sqrt{\\sum\\limits_{i=1}^n q^2_{i}}\\sqrt{\\sum\\limits_{i=1}^n d^2_{i}}} \n \\end{equation}\n\n \\begin{enumerate}\n \\item \\textbf{Cosine similarity of debate title words and sentences (COS\\_TTS).} For each sentence in debate comments we compute its cosine similarity score with the title words. This is based on the assumption that a sentence containing title words is deemed as important. \\\\\n \n \\item \\textbf{Cosine similarity of climate change terms and sentences (COS\\_CCTS)}. The climate change terms were collected from news media about climate change. We calculate cosine similarity between the terms and sentences. In total, there are 300 most frequent terms relating to location, person, organization, and chemical compounds.\\\\\n \n \\item \\textbf{Cosine similarity of topic signatures and sentences (COS\\_TPS).} Topic signatures play an important role in automatic text summarization and information retrieval. It helps identify the presence of complex concepts or the importance in text. In a process of determining topic signatures, words appearing occasionally in the input text but rarely in other text are considered as topic signatures. They are determined by an automatic predefined threshold which indicates descriptive information. 
Topic signatures are generated by comparing with pre-classified text on the same topic using a concept of likelihood ratio \\cite{nenkova-mckeown-2011,Lin:2000:AAT:990820.990892}, $\\lambda$ presented by \\cite{Dunning1993}. It is a statistical approach which calculates a likelihood of a word. For each word in the input, the likelihood of word occurrence is calculated in pre-classified text collection. Another likelihood values of the same word is calculated and compared in another out-of-topic collection. The word, on the topic-text collection that has higher likelihood value than the out-of-topic collection, is regarded as topic signature of a topic. Otherwise the word is ignored. \\\\ \n \\end{enumerate}\n \\item \\textbf{Semantic Similarity of Sentence and Debate Titles (COS\\_STT).} Since the aforementioned features do not semantically capture the meaning of context, we create this feature for such purpose. We compare each sentence to the list of debate titles based on the assumption that forum users are likely to repeat debate titles in their comments. Thus, we compare each sentence to the titles and then calculate the semantic similarity score by using Doc2Vec \\cite{rehurek_lrec}. \n\\end{enumerate}\n\n\n\n\n\n\n\\begin{table}[ht]\n\\centering\n\\scalebox{0.85}{\n\\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}\n\\hline\n\n\n\\textbf{ROUGE-N} & \\textbf{CB} & \\textbf{CJ} & \\textbf{COS\\_CCT} & \\textbf{COS\\_TTS} & \\textbf{COS\\_TPS} & \\textbf{SL} & \\textbf{SP} & \\textbf{COS\\_STT} & \\textbf{TT} \\\\ \\hline\n\\textbf{R-1} & 0.4773 & 0.4988 & 0.3389 & 0.5630 & 0.3907 & 0.4307 & \\textbf{0.6124} & 0.4304 & 0.5407 \\\\ \\hline\n\\textbf{R-2} & 0.3981 & 0.4346 & 0.2558 & 0.5076 & 0.2986 & 0.3550 & \\textbf{0.5375} & 0.3561 & 0.4693 \\\\ \\hline\n\\textbf{R-SU4} & 0.3783 & 0.4147 & 0.2340 & 0.4780 & 0.2699 & 0.3335 & \\textbf{0.4871} & 0.3340 & 0.4303 \\\\ \\hline\n\\end{tabular}}\n\\caption{ROUGE scores after applying Doc2Vec to the salient sentence selection.}\\label{table_rouge_scores}\n\\end{table}\n\n\n\n\n\n\n\n\n\\begin{table}[ht]\n\\centering\n\\scalebox{0.85}{\n\\begin{tabular}{lcccccc}\n\\hline\n\\multicolumn{1}{|c|}{\\multirow{2}{*}{\\textbf{Comparison Pairs}}} & \\multicolumn{2}{c|}{\\textbf{ROUGE-1}} & \\multicolumn{2}{c|}{\\textbf{ROUGE-2}} & \\multicolumn{2}{c|}{\\textbf{ROUGE SU4}} \\\\ \\cline{2-7} \n\\multicolumn{1}{|c|}{} & \\multicolumn{1}{c|}{\\textbf{Z}} & \\multicolumn{1}{c|}{\\textbf{\\begin{tabular}[c]{@{}c@{}}Asymp. Sig.\\\\ (2-tailed)\\end{tabular}}} & \\multicolumn{1}{c|}{\\textbf{Z}} & \\multicolumn{1}{c|}{\\textbf{\\begin{tabular}[c]{@{}c@{}}Asymp. Sig.\\\\ (2-tailed)\\end{tabular}}} & \\multicolumn{1}{c|}{\\textbf{Z}} & \\multicolumn{1}{c|}{\\textbf{\\begin{tabular}[c]{@{}c@{}}Asymp. 
Sig.\\\\ (2-tailed)\\end{tabular}}} \\\\ \\hline\n\\multicolumn{1}{|l|}{SP VS CB} & \\multicolumn{1}{c|}{$-4.246^b$} & \\multicolumn{1}{c|}{0*} & \\multicolumn{1}{c|}{$-3.962^b$} & \\multicolumn{1}{c|}{0*} & \\multicolumn{1}{c|}{$-3.044^b$} & \\multicolumn{1}{c|}{0.002} \\\\ \\hline\n\\multicolumn{1}{|l|}{SP VS CJ} & \\multicolumn{1}{c|}{$-3.570^b$} & \\multicolumn{1}{c|}{0*} & \\multicolumn{1}{c|}{$-3.090^b$} & \\multicolumn{1}{c|}{0.002} & \\multicolumn{1}{c|}{$-2.192^b$} & \\multicolumn{1}{c|}{0.028} \\\\ \\hline\n\\multicolumn{1}{|l|}{SP VS COS\\_CCTS} & \\multicolumn{1}{c|}{$-6.792^b$} & \\multicolumn{1}{c|}{0*} & \\multicolumn{1}{c|}{$-6.511^b$} & \\multicolumn{1}{c|}{0*} & \\multicolumn{1}{c|}{$-6.117^b$} & \\multicolumn{1}{c|}{0*} \\\\ \\hline\n\\multicolumn{1}{|l|}{SP VS COS\\_TTS} & \\multicolumn{1}{c|}{$-1.307^b$} & \\multicolumn{1}{c|}{0.191} & \\multicolumn{1}{c|}{$-.789^b$} & \\multicolumn{1}{c|}{0.43} & \\multicolumn{1}{c|}{$-.215^b$} & \\multicolumn{1}{c|}{0.83} \\\\ \\hline\n\\multicolumn{1}{|l|}{SP VS COS\\_TPS} & \\multicolumn{1}{c|}{$-6.728^b$} & \\multicolumn{1}{c|}{0*} & \\multicolumn{1}{c|}{$-6.663^b$} & \\multicolumn{1}{c|}{0*} & \\multicolumn{1}{c|}{$-6.384^b$} & \\multicolumn{1}{c|}{0*} \\\\ \\hline\n\\multicolumn{1}{|l|}{SP VS SL} & \\multicolumn{1}{c|}{$-4.958^b$} & \\multicolumn{1}{c|}{0*} & \\multicolumn{1}{c|}{$-4.789^b$} & \\multicolumn{1}{c|}{0*} & \\multicolumn{1}{c|}{$-4.110^b$} & \\multicolumn{1}{c|}{0*} \\\\ \\hline\n\\multicolumn{1}{|l|}{SP VS COS\\_STT} & \\multicolumn{1}{c|}{$-4.546^c$} & \\multicolumn{1}{c|}{0*} & \\multicolumn{1}{c|}{$-4.322^c$} & \\multicolumn{1}{c|}{0*} & \\multicolumn{1}{c|}{$-3.671^c$} & \\multicolumn{1}{c|}{0*} \\\\ \\hline\n\\multicolumn{1}{|l|}{SP VS TT} & \\multicolumn{1}{c|}{$-3.360^c$} & \\multicolumn{1}{c|}{0.001*} & \\multicolumn{1}{c|}{$-2.744^c$} & \\multicolumn{1}{c|}{0.006} & \\multicolumn{1}{c|}{$-2.641^c$} & \\multicolumn{1}{c|}{0.008} \\\\ \\hline\n\\multicolumn{7}{l}{a) Wilcoxon Signed Ranks Test.} \\\\\n\\multicolumn{7}{l}{b) Based on negative ranks.} \\\\\n\\multicolumn{7}{l}{c) Based on positive ranks.}\n\\end{tabular}}\n\\caption{The statistical information of comparing sentence position and other features after applying Doc2Vec.}\n\\label{table_sig_position}\n\\end{table}\n\n\n\n\n\\subsection{Results} \\label{results}\nIn order to evaluate the system summaries against the reference summaries, we apply ROUGE-N evaluation metrics. We report ROUGE-1 (unigram), ROUGE-2 (bi-grams) and ROUGE-SU4 (skip-bigram with maximum gap length of 4). The ROUGE scores as shown in Table \\ref{table_rouge_scores} indicate that sentence position feature outperforms other features. The least performing feature is the cosine similarity of climate change terms and sentences feature.\n\n\nTo measure the statistical significance of the ROUGE scores generated by the features, we calculated a pairwise Wilcoxon signed-rank test with Bonferroni correction. We report the significance p = .0013 level of significance after the correct is applied. Our results indicate that there is statistically significance among the features. Table \\ref{table_sig_position} illustrates the statistical information of comparing sentence position and other features. The star indicates that there is a statistical significance difference between each comparison pair. \n\n\n\n\n\n\n\\section{Conclusion} \\label{conclusion}\nIn this paper we worked on an annotation task for a new annotated dataset, online debate data. 
We have manually collected reference summaries for comments on global warming topics. The data consists of 341 comments with a total of 519 annotated salient sentences. We have performed five annotation sets on these data, so that in total we have $5\\times 519$ annotated salient sentences. We also implemented an extractive text summarization system on this debate data. Our results revealed that the key feature that plays the most important role in the selection of salient sentences is sentence position. Other useful features are the debate title words feature and the cosine similarity of debate title words and sentences. \n\nIn future work, we aim to investigate further features for summarization purposes. We also plan to integrate stance information so that summaries with pro and contra sides can be generated.\n\n\\section*{Acknowledgments}\nThis work was partially supported by the UK EPSRC Grant No. EP\/I004327\/1 and the European Union under Grant Agreement No. 611233 PHEME, and the authors would like to thank Bangkok University for their support. \n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nDeep learning methods are quite successful in many fields such as image analytics and natural language processing.\nDeep learning uses several stacked layers of neural networks, which are optimised using loss functions such as cross entropy with stochastic gradient descent.\nAn example is presented in eq.~\\ref{eq:empirical-risk}, where\n$\\alpha$ represents the possible configurations of the learning machine, $z_{i}$, $i = 1, ..., l$, is a set of examples and $\\mathcal{Q}$ is a loss function.\nMinimising the empirical risk amounts to minimising eq.~\\ref{eq:empirical-risk}.\n\n\\begin{equation}\nR_{emp}(\\alpha) = \\frac{1}{l} \\sum_{i=1}^l \\mathcal{Q}(z_{i},\\alpha)\n\\label{eq:empirical-risk}\n\\end{equation}\n\nWe could consider several ways in which to reduce the risk, and several advances have been made in improving stochastic gradient descent~\\cite{kingma2014adam}.\nNeural networks have a large number of neurons, which implies that they have a large capacity capable of modeling a large set of problems, and several sets of network weight values could be identified during learning that have minimal empirical risk but might have different generalisation capabilities.\nTheir capacity in terms of VC-dimension is large due to the large number of neurons, even though it is finite.\n\nAs indicated in equation~\\ref{eq:risk-generalization}, the bound on the generalisation performance of a trained model depends on the performance on the training set $R_{emp}(\\alpha_{l})$, the VC-dimension $h$ of the classifier and the number $l$ of examples used as training data.\n\n\\begin{equation}\nR(\\alpha_{l}) \\leq R_{emp}(\\alpha_{l}) + \\dfrac{B \\mathcal{E} (l) }{2} \\left(1+ \\sqrt{1 + \\dfrac{4 R_{emp} (\\alpha_{l}) }{B \\mathcal{E} (l)}}\\right) \n\\label{eq:risk-generalization}\n\\end{equation}\n\n\\begin{equation}\n \\mathcal{E} (l) = 4 \\dfrac{h (\\ln \\dfrac{2l}{h}+1)- \\ln \\dfrac{\\eta}{4}}{l}\n\\end{equation}\n\nAssuming that the empirical risk on the training set is the same for several functions, in order to improve the predictive performance, the function with a lower VC-dimension or the one trained with more data should have a lower risk on an unseen test set.\n\nFor controlling the VC-dimension of a function, constraining the effective VC-dimension has been proposed~\\cite{vapnik1998statistical}.\nAmong existing work, regularisation is a way to 
constrain the search for functions that follow certain properties.\nLinear models have relied on Tikhonov regularisation~\\citep{tikhonov1977solutions}.\nNeural networks are made of a set of non-linearities, thus regularisation such as $L_2$ have been applied~\\cite{elsayed2018large}.\nThere have been several proposals, which include recent work to the models using knowledge~\\citep{roychowdhury2021regularizing}.\n\nThe structural risk minimisation framework~\\cite{vapnik1998statistical} intends to control parameters that minimise the VC-dimension and offers certain guarantees about the performance of the trained model.\nA good example of learning algorithm that implements the structural risk minimisation is support vector machines (SVM)~\\citep{vapnik2013nature}.\nSVM is a large margin classifier that aims separating the classes defining it as a constraint problem, whose solution builds on the Karush\u2013Kuhn\u2013Tucker approach that generalises the Lagrange multipliers.\nThe vectors from the training set that are on the margin are the named the support vectors.\nIf we have a training set defined by pairs $\\{x_i, y_i\\}$ where $x_i$ is a vector in $R^n$ and $y_i = \\{1,-1\\}$ defines the class of the instance where $i=1,...,l$.\nWhere the margin is defined by the $1\/w \\in R^n$ and $b \\in R^1$ is the bias term, the constraint optimisation problem for SVMs is defined as:\n\n\\begin{equation}\n\\begin{aligned}\nmin & & \\frac{1}{2}w^2 \\\\\n\\\\\ns.t. & & y_i (w x_i + b) \\geq 1\n\\end{aligned}\n\\end{equation}\n\nUsing Lagrange multipliers $\\alpha = \\{ \\alpha_1,..., \\alpha_l \\}$ with $\\alpha \\geq 0$ and $\\sum_{i=1}^l \\alpha_i y_i = 0$, the separating hyperplane has the formulation:\n\n\\begin{equation}\n f(x,\\alpha) = \\sum_{i=1}^l y_i \\alpha_i (x_i*x) + b\n\\label{eq:svm-hyperplane}\n\\end{equation}\n\nEq.~\\ref{eq:svm-hyperplane} depends on the inner product of $x_i$ and $x$. The elements of the training set on the margin are named the support vectors.\n\nIf the data is not linearly separably in input space, a mapping into a feature space in which the data is linearly separable and an inner product exists would be a solution. 
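A standard textbook example, not specific to the method proposed in this paper, makes this idea concrete: for $x=(x_1,x_2)$ the quadratic map
\\[
z(x) = \\left(x_1^2,\\ \\sqrt{2}\\,x_1 x_2,\\ x_2^2\\right), \\qquad z(x)\\cdot z(x') = (x\\cdot x')^2,
\\]
sends data that are only quadratically separable in the input space to a three-dimensional feature space where they can be separated linearly, while the inner product in the feature space is still evaluated directly in the input space.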
For instance, in eq.~\\ref{eq:mapping-svm} function $z$ maps the input space into a feature space.\nAs we can see, what is important is the calculation of the inner product and not the dimensionality of the space.\n\n\\begin{equation}\n f(x,\\alpha) = \\sum_{i=1}^l y_i \\alpha_i (z(x_i)*z(x)) + b\n\\label{eq:mapping-svm}\n\\end{equation}\n\nFrom the formulation of SVMs, the inner product between vectors is what is needed to define the solution to the large margin classifier.\nUsing Mercer's theorem allows using kernels to calculate the inner product in a Hilbert space.\nEq.~\\ref{eq:kernel-svm} shows how the SVM formulation would be defined using kernel $K$.\n\n\\begin{equation}\n f(x,\\alpha) = \\sum_{i=1}^l y_i \\alpha_i K(x_i,x) + b\n\\label{eq:kernel-svm}\n\\end{equation}\n\nUsing kernels in this way allows working in high-dimensional feature spaces without having to work in the high-dimensional space explicitly, this is known as the \\textit{kernel trick}.\nSpecialised kernels have been developed to use SVMs in several problem types.\n\n\nIn this paper, we explore using non-linear functions similar to deep neural networks as mapping functions from input space to feature space.\nWe show examples of the expected performance based on structural risk minimisation principles.\nWe evaluate the proposed method on several data sets and baseline methods.\nWhen the training data is small, the proposed method largely improves over the baseline methods.\nAs expected, this improvement is reduced when more training data is provided.\n\nThe code used in this experiments is available from the following GitHub repository:\\\\\\url{https:\/\/github.com\/ajjimeno\/nn-hyperplane-bounds}\n\n\\section{Methods}\n\nIn this section, we define a large margin linear classifier and we introduce the structural risk minimisation principle.\nThen, we provide a way to define a large margin classifier using a feature space defined by a set of non-linear functions.\nThese non-linear functions are the equivalent of a deep neural network.\n\n\\subsection{Large margin classifier}\n\n\n\nThere are several kernels that have been developed over time that turn the input space into a feature space that captures relations among the features, from image analytics (e.g.~\\cite{szeliski2010computer,camps2006composite}) to text analytics (e.g. string kernel~\\cite{lodhi2002text}).\nThe kernel trick mentioned above allows working in high dimensional feature spaces without the cost of working directly in the feature space.\nThis is achieved by using the kernel to calculate the dot product in feature space without having to map the instances into the feature space.\nOn the other hand, it is difficult to design a kernel that will work well with all sorts of data when compared to the recent success of deep learning.\n\n\nDeep neural network seem to be effective at approximating many different functions, thus it is interesting to map our input feature space into a feature space in which an optimal hyperplane could be identified.\nThis means, using a neural network $z$ as the mapping function between the input space and the feature space, so the optimisation problem should consider now $z(x_i) \\in R^m$ instead of $x_i$ and now the constraints look like $y_i (w z(x_i) + b) \\geq 1$. 
and $w \\in R^m$.\n\nOne problem is that current neural networks have a large number of parameters, which are needed to be effective in the current tasks where they are successful.\nThis implies as well that they have a large capacity or VC-dimension.\nIn the next sections, we explore how to search for the mapping that has better generalisation guarantees. \n\n\n\n\n\n\n\n\\subsection{Hyperplane bounds}\n\n\nIn this section, we explore properties of the separating hyperplane and what constrains are needed to identify a configuration of the neural network used as mapping function that has better generalisation properties.\n\n\nIf we consider a training set $\\{Y,X\\}$, as defined above, the following inequality holds true for vector $w_0$ and $\\rho_0$,\n\n\\begin{equation}\n min_{(y,x) \\in \\{Y,X\\}} \\frac{y(w_0 x)}{|w_0|} \\geq \\rho_0\n\\end{equation}\n\nwhich assumes that the training set is separable by a hyperplane with margin $\\rho_0$, then the following theorem holds true.\n\n\n\\paragraph{Novikoff theorem}\n\nGiven and infinite sequence of training examples $u_i$ with elements satisfying the inequality $|x_i| < D$, there is a hyperplane with coefficients $w_0$ that separates the training examples currently and satisfies conditions as above.\nUsing an iterative procedure, e.g. stochastic gradient descent, to construct such a hyperplane takes at most\n\n\\begin{equation}\n M = [\\frac{D^2}{\\rho^2_0}]\n\\end{equation}\n\nAs a follow up theorem, we accept that for the algorithm constructing hyperplanes in the regime that separates training data without error, the following bound on error rate is valid\n\n\\begin{equation}\n ER(w_l) \\leq \\frac{E[\\frac{D^2_l}{\\rho^2_l}]}{l+1}\n\\end{equation}\n\nwhere $[\\frac{D^2_l}{\\rho^2_l}]$ is estimated from training data.\n\nThese two theorems provide already a bound on the error of the separating hyperplane, which relies on parameters that can be estimated from the training data.\nIn the following two sections, we show theorems for the bounds on the VC-dimension of the separating hyperplane and properties of the optimal separating hyperplane that will be used to define the optimisation problem for the neural network mapping function. 
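Before moving on, note that the quantity $D_l^2 / \\rho_l^2$ appearing in these bounds can be estimated directly from the training data once a separating hyperplane is available. The sketch below (plain NumPy, illustrative only and not taken from the released code) computes this plug-in estimate.
\\begin{verbatim}
# Sketch: plug-in estimate of D^2 / rho^2 for a separating hyperplane (w, b).
import numpy as np

def novikoff_quantity(X, y, w, b=0.0):
    """X: (l, n) training vectors; y: labels in {+1, -1}."""
    D = np.max(np.linalg.norm(X, axis=1))          # radius of the data ball
    margins = y * (X @ w + b) / np.linalg.norm(w)  # signed geometric margins
    rho = margins.min()                            # margin of the hyperplane
    if rho <= 0:
        raise ValueError("hyperplane does not separate the training set")
    return (D / rho) ** 2  # bounds the number of corrections M

# Dividing this estimate by l + 1 gives the plug-in analogue of the
# error-expectation bound stated above.
\\end{verbatim}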
\n\n\\subsection{Bounds on the VC-dimension for \\texorpdfstring{$\\Delta$-margin}{TEXT} separating hyperplanes}\n\nIn this section, we explore theorems that define bounds on the VC-dimension of the separating hyperplane.\n\n\\paragraph{Theorem} Let vectors $x \\in X$ belong to a sphere of radius $R$, the set of $\\Delta$-margin separating hyperplanes has the following VC-dimension h bounded by the inequality\n\n\\begin{equation}\n h \\leq min([\\frac{R^2}{\\Delta^2}], n)+1 \n\\end{equation}\n\n\\paragraph{Corollary} With probability $1-\\eta$ the probability that a test example will not be separated correctly by the $\\Delta$-margin hyperplane has the bound\n\n\\begin{equation}\n P_{error}\\leq \\frac{m}{l} + \\frac{\\xi}{2}(1 + \\sqrt{1+\\frac{4m}{l\\xi}})\n\\end{equation}\n\nwhere \n\n\\begin{equation}\n \\xi = 4 \\frac{h(ln \\frac{2l}{h} + 1) -ln \\frac{\\eta}{4}}{l}\n\\end{equation}\n\nwhere $m$ is the number of examples not separated correctly by this $\\Delta$-margin hyperplane, $h$ is the bound in the VC-dimension, where a good generalisation is dependent on $\\Delta$.\n\nSo, we have already a bound on the VC-dimension of the separating hyperplane.\nIn the next section, we present the idea of identifying the optimal hyperplane, which links the hyperplane optimisation problem with structural risk minimisation.\n\n\\subsection{Optimal hyperplane properties}\n\nThe optimal hyperplane is the one that separates the training examples from classes $y={1,-1}$ with the maximal margin.\nIt has been previously shown that the optimal hyperplane is unique~\\cite{vapnik1998statistical}.\nThe optimal hyperplane has some interesting properties that are relevant to our work.\nOne of them is that the generalisation ability of the optimal hyperplane is better than the general bounds obtained for methods that minimise the empirical risk.\nLet's define the set $X = {x_1, ..., x_l}$ in space $R^n$, where we have our training examples. Within the elements of $X$ that have the following property:\n\n\\begin{equation}\n \\inf_{x \\in X} | w x + b | = 1\n\\end{equation}\n\nThese elements $x \\in X$ are the essential support vectors and are on the margin.\nHaving defined the essential support vectors, we define the number of essential support vectors $K_l$\n\n\\begin{equation}\n K_l = K((x_1, y_1), ... (x_l, y_l))\n\\end{equation}\n\nAnd the maximum norm $D_l$ of the essential support vectors.\n\n\\begin{equation}\nD_l = D((x_1, y_1), ... (x_l, y_l)) = \\max_i |x_i|\n\\end{equation}\n\nBased on the definitions above for the essential support vectors, the following properties have been proved for the optimal hyperplane.\nThe following inequality $K_l \\leq n$ holds true, which implies that the number of essential support vectors is smaller than the dimensionality of the elements of $X$ and $w$.\nLet $ER(\\alpha_l)$ be defined as the expectation of the probability of error of the optimal hyperplane defined using the training data, then the following inequalities hold for the optimal hyperplane considering the values estimated on the essential support vectors. 
\n\n\n\n\\begin{equation}\n ER(\\alpha_l) \\leq \\frac{EK_{l+1}}{l+1}\n\\end{equation}\n\n\nAdditionally, considering an optimal hyperplane passing through the origin:\n\n\\begin{equation}\n ER(\\alpha_l) \\leq \\frac{E(\\frac{D_{l+1}}{\\rho_{l+1}})^2}{l+1}\n \\label{eq:novikoff-error-expectation}\n\\end{equation}\n\nCombining the two previous inequalities, we obtain a bound on the expectation of the probability of error that depends not only on the number of examples in the training data, but also on the number of essential support vectors and on the relation between the ball containing the support vectors and the margin, as shown below\n\n\\begin{equation}\n ER(\\alpha_l) \\leq \\frac{E \\min(K_l, (\\frac{D_{l+1}}{\\rho_{l+1}})^2)}{l+1}\n\\end{equation}\n\nThe leave-one-out error has been used as an unbiased estimator to prove the bounds on the optimal hyperplane~\\citep{luntz1969estimation}.\n\n\\begin{equation}\n E\\frac{\\mathcal{L}(z_1,...,z_{l+1})}{l+1}=ER(\\alpha_l)\n\\end{equation}\n\nFirst, the number of leave-one-out errors does not exceed the number of support vectors~\\citep{vapnik2000bounds}.\nIf a vector $x_i$ is not an essential support vector, then there is an expansion of the vector $\\phi_0$ that defines the optimal hyperplane that does not contain the vector $x_i$.\nThe optimal hyperplane is unique, so removing this vector from the training set does not change it.\nLeave-one-out therefore recognizes correctly all the vectors that are not in the set of essential support vectors.\nThe number $\\mathcal{L}(z_1,...,z_{l+1})$ of errors in leave-one-out thus does not exceed $K_{l+1}$, which implies\n\n\n\\begin{equation}\n ER(\\alpha_l) = \\frac{E \\mathcal{L}(z_1,...,z_{l+1})}{l+1} \\leq \\frac{EK_{l+1}}{l+1}\n\\end{equation}\n\nTo prove eq.~\\ref{eq:novikoff-error-expectation}, one uses the fact that the number of errors in leave-one-out does not exceed the number of corrections $M$ needed to find the optimal hyperplane, as given by Novikoff's theorem presented above.\n\n\n\n\\subsection{Mapping into feature space using neural networks}\n\nUp to this point, the formulations rely on an input space defined by the training data instances $x_1, ..., x_l$ in a given space $R^n$.\nIf a hyperplane separating the data instances into the classes $y=\\{1,-1\\}$ does not exist in input space, the instances are mapped into a feature space $R^m$.\n\n\\begin{equation}\n z(x) : R^n \\mapsto R^m\n\\end{equation}\n\nIn the case of support vector machines, the~\\textit{kernel trick} is used to calculate the dot product in feature space using a kernel, without having to perform the explicit mapping, which makes it possible to work with feature spaces of much higher, even infinite, dimension.\nKernels have been designed to map the input space into separable feature spaces.\n\nOnce the kernel is designed, there is a point in the feature space for each training instance.\nThe properties of the optimal hyperplane described in the previous sections still apply, but now in the generated feature space.\nSpecific formulations are used to profit from the kernel trick mentioned above.\n\nDeveloping the best kernel for a specific problem has proven to be a difficult task, and many kernels have been proposed for different tasks.\nRecently, neural networks have shown that they can learn a classifier with comparatively little manual effort, despite the increase in computational power required to train these classifiers.\nNeural networks in a sense map the input space to another space using a 
concatenation of linear operators followed by a non-linearity (e.g. sigmoid, ReLU), as shown in equation~\\ref{eq:neural-network}.\n\n\\begin{equation}\nz(x) = \\sigma f_{k}(\\sigma f_{k-1}(\\dots \\sigma f_1(x)))\n\\label{eq:neural-network}\n\\end{equation}\n\nEach function $f \\in \\{f_1, ..., f_k\\}$ is an affine map of the form $f(x) = Wx+B$, where $W$ and $B$ are the weights and biases.\nThese are parameters that need to be optimised for each function.\nStochastic gradient descent is typically used to optimize such functions, so we formulate the optimisation accordingly.\n\nRevisiting the classification constraints that we intend to optimise and considering the mapping function $z$, we obtain eq.~\\ref{eq:neural-network-svm}.\nThe parameters to optimize are the vector $w$, the bias $b$ and the weights and biases of the neural network $z$.\nThe dimension of the weight vector $w$ is defined by $z$.\n\n\\begin{equation}\ny_i(w z(x_i) + b) \\geq 1\n\\label{eq:neural-network-svm}\n\\end{equation}\n\n\nIn a support vector machine in feature space, a way to compare different kernels is to measure the radius $D_l$ of the smallest ball in which the support vectors fit and to multiply its square by $\\lVert w_{l} \\rVert ^2$, as in $[ D_l^2 \\lVert w_{l} \\rVert^2 ]$.\nConsidering that $D_l^2 \\lVert w_{l} \\rVert^2 = \\frac{D_l^2}{\\rho_l^2}$, the lower this ratio, the lower the probability of error, which provides a means to control the VC-dimension of the separating hyperplane in the setup defined in this work.\n\n\n\\subsection{Optimisation}\n\nTo find the values of the parameters of the system, we use stochastic gradient descent.\nOther approaches, such as Lagrange multipliers, are not applicable to neural networks.\nOn the other hand, using stochastic gradient descent guarantees finding a local optimum, which hopefully yields an approximation with reasonable generalisation performance.\n\nFor optimisation, we use AdamW~\\citep{loshchilov2017decoupled} with eps=1e-08 and weight decay set to 0.1 (which already implies using an $L_{2}$ regularisation) and betas=(0.9, 0.999).\n\nWe have adapted the formulation of~\\citep{zhang2004solving} using a large margin loss.\nWe use the modified Huber loss, which has convenient smoothness properties, even though other large margin losses could be explored.\n\nModified Huber loss~\\citep{zhang2004solving}, where $h$ denotes the output of the classifier for an example $x$ with label $y$:\n\\begin{equation}\n l(y)=\n \\begin{cases}\n -4hy & \\text{if } hy \\leq -1\\\\\n (1-hy)^2 & \\text{if } -1 < hy \\leq 1\\\\\n 0 & \\text{if } hy > 1\n \\end{cases}\n\\end{equation}\n\nGradient of the modified Huber loss:\n\n\\begin{equation}\n \\frac{\\partial l}{\\partial w_i}=\n \\begin{cases}\n -4yx_i & \\text{if } hy \\leq -1\\\\\n -2 (1-hy)yx_i & \\text{if } -1 < hy \\leq 1\\\\\n 0 & \\text{if } hy > 1\n \\end{cases}\n\\end{equation}\n\nWe need to estimate $w$ and $z$ subject to the constraints above.\n$z$ will be defined using a neural network and the size of the feature space derived from $z$ will define the size of the vector $w$.\n$\\lVert w \\rVert$ will be minimised, and so will the norms $\\lVert z(x_i) \\rVert$ of the mapped training vectors.\nThe loss function of the optimisation problem is defined as follows\n\n\\begin{equation}\n\\mathcal{L}(x_1, ..., x_l)_{inv}= \\mathcal{L} (x_1, ..., x_l) + \\alpha \\lVert w \\rVert^2 + \\beta \\sum^l_{i=1} \\lVert z(x_i) \\rVert^2\n\\end{equation}\n\nThe development above addresses binary classification.\nIn a multi-class setting, as many classifiers with $y \\in \\{1, -1\\}$ as there are classes are trained and, during prediction, the 
classifier with the maximum value is returned as prediction as shown in equation \\ref{eq:argmax-multi-class}.\n\n\\begin{equation}\n \\arg \\max_{c \\in 1..n} \\{ f_{1}(x), ..., f_{n}(x) \\}\n\\label{eq:argmax-multi-class}\n\\end{equation}\n\nThe proposed method works in the multi class setting, but it has the advantage that it might be able of deciding when an instance does not belong to any class if all the classifier functions predict the -1 class.\n\n\n\n\n\n\\section{Results}\n\nWe have evaluated the proposed method using the MNIST and CIFAR 10 data sets.\nThe different methods have been evaluated using several algorithms.\n$\\alpha$ and $\\beta$ of the loss function have been set to the same value which is the best set up during the experiments.\nWe evaluate the both losses and the performance of dropout~\\citep{srivastava2014dropout} and augmentation based on affine transformations.\n\n\\subsection{MNIST}\n\nMNIST is a collection of hand written digits from 0 to 9.\nThe training set has a total of 60k examples while the test set contains a total of 10k examples.\nAll images are 28x28 with black background pixels and different white level pixels to define the numbers using just one channel.\nImages were normalised using a mean of 0.1307 and standard deviation of 0.3081.\n\nWe have used LeNet as base neural network, which has been adapted to be used in our approach.\nIn order to use LeNet in our approach, we have considered the last layer as the margin weights $w$, which defines each each one of the 10 MNIST classes functions.\nThe rest of the network has been considered as the function $z(x_i)$.\nThis means that both the vector $w$ and $z(x_i)$ belong to $R^{84}$.\nIf the parameters $\\alpha$ and $\\beta$ are set to zero, we are effectively using LeNet.\n\nTable~\\ref{tab:mnist-results-percentage} shows the results of different experimental setups, which includes dropout $p=0.5$, augmentation and a combination of several configurations.\nAugmentation was done using random affine transformations with a maximum rotation of 20 degrees, translation maximum of 0.1 on the x and y axis and scale between 0.9 and 1.1, as well brightness and contrast were randomly altered with a maximum change of 0.2.\nWe have used several partitions of the training set to simulate training the different configurations with data sets of several sizes, which evaluates as well the impact of the number of training examples in addition to the bounded VC-dimension of the separating hyperplane.\nExperiments have been run 5 times per configuration and results show the average and the standard deviation.\n\n\n\n\\begin{sidewaystable}\n\\begin{tabular}{l|c|c|c|c|c|c|c|c}\n\\hline\nMethod&1&5&10&20&40&60&80&100 \\\\\n\\hline\nce&91.52$\\pm$0.28&97.20$\\pm$0.16&98.07$\\pm$0.10&98.58$\\pm$0.10&99.04$\\pm$0.10&99.20$\\pm$0.06&99.25$\\pm$0.05&99.20$\\pm$0.03 \\\\\nce+do&94.36$\\pm$0.22&97.78$\\pm$0.09&98.54$\\pm$0.07&98.81$\\pm$0.07&99.17$\\pm$0.03&99.22$\\pm$0.03&99.32$\\pm$0.03&99.37$\\pm$0.04 \\\\\nce+lm-0.001&94.89$\\pm$0.23&97.95$\\pm$0.10&98.42$\\pm$0.09&98.61$\\pm$0.08&99.05$\\pm$0.10&99.01$\\pm$0.03&99.22$\\pm$0.07&99.25$\\pm$0.08 \\\\\nce+lm-0.001+do&95.12$\\pm$0.17&97.84$\\pm$0.15&98.53$\\pm$0.08&98.77$\\pm$0.04&98.98$\\pm$0.06&99.13$\\pm$0.02&99.22$\\pm$0.05&99.19$\\pm$0.04 \\\\\n\\hline\nce+aug&95.82$\\pm$0.17&98.37$\\pm$0.09&98.94$\\pm$0.09&99.23$\\pm$0.08&99.36$\\pm$0.04&99.40$\\pm$0.05&99.52$\\pm$0.04&99.50$\\pm$0.02 
\\\\\nce+aug+do&95.18$\\pm$0.25&97.96$\\pm$0.10&98.56$\\pm$0.07&98.89$\\pm$0.11&99.11$\\pm$0.07&99.09$\\pm$0.06&99.16$\\pm$0.05&99.22$\\pm$0.06 \\\\\nce+aug+lm-0.001&96.82$\\pm$0.17&98.68$\\pm$0.12&99.08$\\pm$0.09&99.24$\\pm$0.06&99.37$\\pm$0.05&99.41$\\pm$0.04&99.47$\\pm$0.06&99.53$\\pm$0.01 \\\\\nce+aug+lm-0.001+do&95.57$\\pm$0.36&97.91$\\pm$0.09&98.55$\\pm$0.08&98.78$\\pm$0.06&98.95$\\pm$0.06&98.99$\\pm$0.04&99.05$\\pm$0.06&99.16$\\pm$0.05 \\\\\n\\hline\n\\hline\nmh&91.66$\\pm$0.48&97.22$\\pm$0.22&98.03$\\pm$0.15&98.49$\\pm$0.08&98.99$\\pm$0.08&99.13$\\pm$0.03&99.16$\\pm$0.03&99.27$\\pm$0.06 \\\\\nmh+do&94.75$\\pm$0.32&97.99$\\pm$0.10&98.51$\\pm$0.04&98.82$\\pm$0.05&99.16$\\pm$0.05&99.22$\\pm$0.08&99.33$\\pm$0.04&99.36$\\pm$0.08 \\\\\nmh+lm-0.001&94.17$\\pm$0.25&97.53$\\pm$0.12&98.26$\\pm$0.13&98.57$\\pm$0.10&98.89$\\pm$0.04&98.97$\\pm$0.07&99.05$\\pm$0.05&99.23$\\pm$0.04 \\\\\nmh+lm-0.001+do&95.03$\\pm$0.31&97.95$\\pm$0.13&98.50$\\pm$0.13&98.79$\\pm$0.07&99.01$\\pm$0.09&99.14$\\pm$0.03&99.14$\\pm$0.07&99.12$\\pm$0.08 \\\\\n\\hline\nmh+aug&96.20$\\pm$0.18&98.45$\\pm$0.05&98.96$\\pm$0.03&99.22$\\pm$0.04&99.40$\\pm$0.06&99.44$\\pm$0.02&99.53$\\pm$0.04&99.51$\\pm$0.04 \\\\\nmh+aug+do&95.25$\\pm$0.45&98.08$\\pm$0.13&98.60$\\pm$0.07&98.76$\\pm$0.07&99.01$\\pm$0.04&99.11$\\pm$0.06&99.19$\\pm$0.03&99.22$\\pm$0.04 \\\\\nmh+aug+lm-0.001&96.53$\\pm$0.12&98.62$\\pm$0.13&99.05$\\pm$0.08&99.19$\\pm$0.06&99.41$\\pm$0.02&99.46$\\pm$0.04&99.46$\\pm$0.06&99.50$\\pm$0.01 \\\\\nmh+aug+lm-0.001+do&95.14$\\pm$0.43&98.03$\\pm$0.06&98.52$\\pm$0.09&98.78$\\pm$0.12&98.90$\\pm$0.09&99.00$\\pm$0.06&99.09$\\pm$0.09&98.99$\\pm$0.06 \\\\\n\\hline\n\\end{tabular}\n\\caption{MNIST results using LeNet using cross-entropy (ce) vs Modified Huber Loss (ml), dropout (do), hyperplane bound loss factor (lm) and augmented (aug)}\n\\label{tab:mnist-results-percentage}\n\\end{sidewaystable}\n\n\\subsection{CIFAR10}\n\nCIFAR10 is a data set of 32x32 images with 10 classes of objects with 3 channels. There a total of 60k training images and 10k testing images, with an equal split between images.\nEach image channel was normalised using a mean of 0.5 and standard deviation of 0.5.\n\nFor CIFAR10, we have considered the vgg19~\\citep{simonyan14vgg19} network.\nThe last layer is considered the margin weights $w$ and the output of the rest of the network as the function $z(x_i)$, which belong to $R^{4096}$. 
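\nThe following PyTorch-style sketch illustrates how a backbone network can be split into the feature mapping $z$ and the margin layer $w$, and how the terms $\\alpha \\lVert w \\rVert^2$ and $\\beta \\sum_i \\lVert z(x_i) \\rVert^2$ can be added to a base loss; the class and function names are illustrative and are not part of a specific library API.\n\n\\begin{verbatim}\nimport torch.nn as nn\n\nclass MarginNet(nn.Module):\n    # split of a classifier into the feature mapping z (all layers but\n    # the last one) and the margin layer w (the final linear layer)\n    def __init__(self, z, feat_dim, n_classes):\n        super().__init__()\n        self.z = z\n        self.w = nn.Linear(feat_dim, n_classes)\n\n    def forward(self, x):\n        feats = self.z(x)\n        return self.w(feats), feats\n\ndef regularised_loss(base_loss, scores, feats, target, model, alpha, beta):\n    # L_inv = L + alpha * ||w||^2 + beta * sum_i ||z(x_i)||^2 (per batch)\n    reg_w = model.w.weight.pow(2).sum()\n    reg_z = feats.pow(2).sum()\n    return base_loss(scores, target) + alpha * reg_w + beta * reg_z\n\\end{verbatim}\n\nIn our experiments $z$ corresponds to all layers of LeNet or vgg19 except the last linear one, $\\alpha$ and $\\beta$ are set to the same value, and the base loss is either cross-entropy or the modified Huber loss.\n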
\n\n\n\\begin{sidewaystable}\n\\begin{tabular}{l|c|c|c|c|c|c|c|c}\n\\hline\nMethod&1&5&10&20&40&60&80&100 \\\\\n\\hline\nce&28.77$\\pm$1.74&43.60$\\pm$0.76&51.51$\\pm$0.90&60.98$\\pm$0.61&70.99$\\pm$0.64&76.23$\\pm$0.49&78.98$\\pm$0.60&81.16$\\pm$0.33 \\\\\nce+do&29.92$\\pm$1.22&43.54$\\pm$0.94&52.05$\\pm$0.75&61.52$\\pm$0.81&71.11$\\pm$0.68&75.99$\\pm$0.68&79.15$\\pm$0.56&80.96$\\pm$0.37 \\\\\nce+lm-1e-05&29.73$\\pm$1.57&44.01$\\pm$0.80&51.54$\\pm$0.98&61.11$\\pm$0.91&70.92$\\pm$0.89&76.34$\\pm$0.28&79.68$\\pm$0.48&81.23$\\pm$0.21 \\\\\nce+lm-1e-05+do&30.55$\\pm$1.46&43.21$\\pm$1.36&51.37$\\pm$1.28&61.15$\\pm$0.73&71.16$\\pm$0.24&76.20$\\pm$0.55&79.64$\\pm$0.58&81.45$\\pm$0.56 \\\\\n\\hline\nce+aug&32.50$\\pm$1.44&49.81$\\pm$1.00&61.55$\\pm$0.65&70.58$\\pm$0.59&79.18$\\pm$0.32&83.03$\\pm$0.19&85.47$\\pm$0.16&87.09$\\pm$0.16 \\\\\nce+aug+do&32.77$\\pm$1.53&51.53$\\pm$0.92&61.38$\\pm$0.59&71.00$\\pm$0.48&79.22$\\pm$0.32&82.91$\\pm$0.29&85.34$\\pm$0.27&87.22$\\pm$0.19 \\\\\nce+aug+lm-1e-05&32.53$\\pm$1.56&50.03$\\pm$1.02&60.85$\\pm$0.73&70.81$\\pm$0.27&79.38$\\pm$0.38&83.21$\\pm$0.31&85.25$\\pm$0.22&87.12$\\pm$0.14 \\\\\nce+aug+lm-1e-05+do&33.31$\\pm$1.27&50.94$\\pm$1.93&61.29$\\pm$0.63&71.07$\\pm$0.64&79.50$\\pm$0.39&83.14$\\pm$0.32&85.66$\\pm$0.29&87.38$\\pm$0.14 \\\\\n\\hline\n\\hline\nmh&30.26$\\pm$1.59&45.11$\\pm$1.40&54.14$\\pm$0.93&63.39$\\pm$0.49&71.93$\\pm$0.71&76.83$\\pm$0.64&79.49$\\pm$0.48&81.58$\\pm$0.20 \\\\\nmh+do&30.63$\\pm$1.43&46.16$\\pm$1.01&54.66$\\pm$0.99&63.70$\\pm$0.94&71.99$\\pm$0.22&76.51$\\pm$0.54&79.13$\\pm$0.46&81.54$\\pm$0.32 \\\\\nmh+lm-1e-05&30.72$\\pm$1.35&45.22$\\pm$0.96&53.80$\\pm$1.19&63.46$\\pm$0.80&71.89$\\pm$0.59&76.78$\\pm$0.52&79.35$\\pm$0.56&81.14$\\pm$0.37 \\\\\nmh+lm-1e-05+do&31.47$\\pm$1.22&45.71$\\pm$0.96&53.98$\\pm$0.77&62.86$\\pm$0.82&71.56$\\pm$0.67&76.43$\\pm$0.50&79.29$\\pm$0.27&81.22$\\pm$0.51 \\\\\n\\hline\nmh+aug&37.25$\\pm$0.84&54.38$\\pm$0.42&63.54$\\pm$0.57&71.64$\\pm$0.45&79.66$\\pm$0.43&82.98$\\pm$0.40&85.25$\\pm$0.27&87.10$\\pm$0.36 \\\\\nmh+aug+do&36.37$\\pm$0.35&54.66$\\pm$0.66&63.11$\\pm$0.69&72.06$\\pm$0.53&79.46$\\pm$0.29&83.06$\\pm$0.40&85.32$\\pm$0.18&87.03$\\pm$0.21 \\\\\nmh+aug+lm-1e-05&36.91$\\pm$0.98&54.54$\\pm$0.76&63.53$\\pm$1.00&71.35$\\pm$0.51&79.49$\\pm$0.41&82.90$\\pm$0.37&85.36$\\pm$0.21&86.94$\\pm$0.23 \\\\\nmh+aug+lm-1e-05+do&36.30$\\pm$1.34&54.61$\\pm$0.70&63.34$\\pm$0.60&72.08$\\pm$0.37&79.44$\\pm$0.40&83.14$\\pm$0.21&85.45$\\pm$0.34&87.09$\\pm$0.16 \\\\\n\\hline\n\\end{tabular}\n\\caption{CIFAR10 results using vgg19 cross-entropy (ce) vs Modified Huber Loss (ml), dropout (do), hyperplane bound loss factor (lm) and augmented (aug).}\n\\label{tab:cifar-results-percentage}\n\\end{sidewaystable}\n\n\n\\section{Discussion}\n\nWe have evaluated the proposed approach using two well-known image classification data sets and compared againt several baselines.\nThe results are compared against several baselines, which includes dropout and augmentation using affine transformations.\nIn addition to the modified Huber loss, we compare as well the performance of cross entropy loss, which is typically used in deep learning.\n\nWe have mentioned at the beginning that there are two factors that are relevant for the risk of generalisation of machine learning algorithms.\nOne is the VC-dimension, which we have tried to influence in this work.\nThe second one is additional training data.\nProviding additional training data has been simulated in two ways.\nThe first one is by selecting portions of the training data available from 
the training sets.\nThe second one is data augmentation with affine transformations, which reuses the examples in the portion of the data set used for training.\n\nWe observe that the proposed method shows a strong performance improvement when less training data is used.\nWe also observe that providing additional training data improves performance on the test set, and that this can be combined with the proposed method.\nDropout performed well when neither the proposed loss modification nor data augmentation was used.\n\nWhen the values of $D_l^2$ and $\\lVert w_l \\rVert^2$ are considered in the loss function, changes in the size of the training set seem to have limited impact. The values of the margin are very similar, while the squared radius $D_l^2$ of the support vectors is around 1.\nWhen these variables are considered in the loss function, their values become more stable between training runs.\nWhen these terms are not part of the loss function, the values tend to change significantly between experiments.\n\n\\section{Related work}\n\nDeep learning has achieved tremendous success in many practical tasks, but\nthe neural networks used in deep learning have a high number of parameters, which implies a high VC-dimension.\nThese networks are optimised to reduce the empirical risk, and having large data sets helps reduce the generalisation risk linked to this.\n\nNeural networks are trained using stochastic gradient descent, which guarantees reaching an optimum even if it is a local one.\nThere is recent work in which neural networks are studied within kernel theory, which has provided interesting insights into the global optimisation of neural networks~\\citep{du2019gradient} and the convergence of wide neural networks through\nNeural Tangent Kernels~\\citep{jacot2018neural}.\nThat line of work complements what we have studied here and also provides directions for further research.\nOn the other hand, this previous research could be used to define directions for better approximation functions that combine well-known properties of kernels with the adaptability of neural networks. 
\n\nFrom the point of view of regularisation, which is also related to what we propose in this work, several means of regularising neural networks have been proposed, including $L^2$ regularisation and several variants where different layers are regularised independently.\nTransfer learning is a way to pre-train neural networks on data similar to the training data, which has been quite successful in recently proposed systems to improve the capability of existing systems.\nIn addition, knowledge has been proposed as a means of regularisation~\\citep{borghesi2020improving,roychowdhury2021regularizing}, which would reuse existing resources and reduce the need for training data.\n\nIn this work, we have used hyperplane bounds to study how they contribute to finding a better hyperplane when a neural network is used as the feature space mapping.\nSeveral recent research directions could further improve the definition of mapping functions that have better guarantees in terms of optimisation.\n\n\\section{Conclusions and future work}\n\nThe proposed approach improves over the baseline, most significantly when a small portion of the training data is used.\nWhen more training data is made available, the improvement is reduced, which is an expected behaviour as shown by the bounds of the VC-dimension for $\\Delta$-margin separating hyperplanes and the formulation of the empirical risk.\n\nWe have considered well-known convolutional neural networks for the experiments; it might be interesting to explore additional network configurations to understand the impact of the network architecture on the proposed approach.\nThere has also been recent work on better understanding the optimisation and behaviour of neural networks that we would like to explore as future work.\n\n\\section{Acknowledgements}\n\nThis research was undertaken using the LIEF HPC-GPGPU Facility hosted at the University of Melbourne. This Facility was established with the assistance of LIEF Grant LE170100200.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec1}\n\\label{sec0}\nThe Lieb--Thirring inequalities~\\cite{LT} give estimates for $\\gamma$-moments\nof the negative eigenvalues of the Schr\\\"odinger operator $-\\Delta-V$ in $L_2(\\mathbb{R}^d)$,\nwhere $V=V(x)\\ge0$:\n\\begin{equation}\\label{LT}\n\\sum_{\\lambda_i\\le0}|\\lambda_i|^\\gamma\\le\\mathrm{L}_{\\gamma,d}\n \\int_{\\mathbb{R}^d} V(x)^{\\gamma+ d\/2}dx.\n\\end{equation}\nIn the case $\\gamma=1$ estimate~\\eqref{LT} is equivalent to the dual\ninequality\n\\begin{equation}\\label{orth}\n \\int_{\\mathbb{R}^d} \\rho(x)^{1+2\/d}dx\\le\\mathrm{k}_d\\sum_{j=1}^N\\|\\nabla\\psi_j\\|^2,\n\\end{equation}\nwhere $\\rho(x)$ is as in \\eqref{rho}, and $\\{\\psi_j\\}_{j=1}^N\\in H^1(\\mathbb{R}^d)$\nis an arbitrary orthonormal system. Furthermore the (sharp) constants $\\mathrm{k}_d$ and\n $\\mathrm{L}_{1,d}$ satisfy\n\\begin{equation}\\label{kL}\n \\mathrm{k}_d=(2\/d)(1+d\/2)^{1+2\/d}\\mathrm{L}_{1,d}^{2\/d}.\n\\end{equation}\n\nSharp constants in \\eqref{LT} were found in~\\cite{Lap-Weid} for $\\gamma\\ge 3\/2$,\nwhile for a long time the best available estimates for $1\\le\\gamma<3\/2$ were those found in\n\\cite{D-L-L}. 
Very recently an important improvement in the area was made\nin~\\cite{Frank-Nam}, where the original idea of~\\cite{Rumin2}\nwas developed and extended in a substantial way.\n\n\nInequality \\eqref{orth} plays an important role in the theory of the\n Navier--Stokes equations\n\\cite{Lieb, B-V, T}, where the constant $ \\mathrm{k}_2$ enters the estimates\nof the fractal dimension of the global attractors\nof the Navier--Stokes system in various two-dimensional formulations.\n(In the three-dimensional case the corresponding results are of a conditional character.)\n\nAlong with the problem in a bounded domain\n$\\Omega\\subset\\mathbb{R}^2$ with Dirichlet boundary conditions\nthe Navier--Stokes system is also studied with periodic boundary conditions,\nthat is, on a two-dimensional torus. In this case for the system to be dissipative\none has to impose the zero mean condition on the components of the velocity vector\nover the torus.\n\nAnother physically relevant model is the Navier--Stokes system on the sphere.\nIn this case the system is dissipative without extra orthogonality conditions.\nHowever, if we want to study the system in the form of the scalar vorticity equation, then the\nscalar stream function of a divergence free vector field is defined up\nto an additive constant, and without loss of generality we can (and always) assume that\nthe integral of the stream function over the sphere vanishes.\n\n\nWe can formulate our main result as follows.\n\n\n\n\n\\begin{theorem}\\label{Th:1}\nLet ${M}$ denote either $\\mathbb{S}^2$ or $\\mathbb{T}^2$,\nand let $\\dot H^1({M})$ be the Sobolev space of functions with\nmean value zero.\nLet $\\{\\psi_j\\}_{j=1}^N \\in\\dot H^1({M})$ be an orthonormal family\nin $L_2({M})$. Then\n\\begin{equation}\\label{rho}\n\\rho(x):=\\sum_{j=1}^N|\\psi_j(x)|^2\n\\end{equation}\nsatisfies the inequality\n\\begin{equation}\\label{M}\n\\int_{{M}}\\rho(x)^2d{M}\\le\\mathrm{k}\\sum_{j=1}^N\\|\\nabla\\psi_j\\|^2,\n\\end{equation}\nwhere\n$$\n\\mathrm{k}\\le\\frac{3\\pi}{32}=0.2945\\dots\\,.\n$$\n\\end{theorem}\n\n\\begin{corollary} Setting $N=1$ and $\\psi=\\varphi\/\\|\\varphi\\|$ we obtain\nthe interpolation inequality which is often called the Ladyzhenskaya\ninequality (in the context of the Navier--Stokes equations) or the\nKeller--Lieb--Thirring one-bound-state inequality (in the context of the spectral theory):\n$$\n\\|\\varphi\\|_{L_4}^4\\le \\mathrm{k}_{\\mathrm{Lad}}\\|\\varphi\\|^2\\|\\nabla\\varphi\\|^2,\n\\qquad \\mathrm{k}_{\\mathrm{Lad}}\\le\\mathrm{k}_{\\mathrm{LT}}.\n$$\n\\end{corollary}\n\\begin{remark}\n{\\rm\nThe previous estimate of the Lieb--Thirring constant\non $\\mathbb{T}^2$ and $\\mathbb{S}^2$ obtained in~\\cite{Zel-Il-Lap2019} and \\cite{I-L-AA} by\nmeans of the discrete version of the method of \\cite{Rumin2} was:\n$$\n\\mathrm{k}\\le\\frac{3}{2\\pi}=0.477\\,.\n$$\n}\n\\end{remark}\n\\begin{remark}\n{\\rm\nIn all cases $M=\\mathbb{S}^2, \\mathbb{T}^2$, or $\\mathbb{R}^2$ the Lieb--Thirring constant\nsatisfies the (semiclassical) lower bound\n$$\n0.1591\\dots=\\frac1{2\\pi}\\le\\mathrm{k}_{\\mathrm{LT}}.\\\n$$\nIn $\\mathbb{R}^2$ the sharp value of $\\mathrm{k}_{\\mathrm{Lad}}$ was found in\n\\cite{Weinstein} by the numerical solution of the corresponding Euler--Lagrange equation\n$$\n\\mathrm{k}_{\\mathrm{Lad}}=\\frac1{\\pi\\cdot 1.8622\\dots}=0.1709\\dots\\,,\n$$\nwhile the best to date closed form estimate for this constant was obtained in\n\\cite{Nasibov}\n$$\n\\mathrm{k}_{\\mathrm{Lad}}\\le\\frac{16}{27\\pi}=0.188\\dots,\n$$\nsee 
also \\cite[Theorem 8.5]{Lieb--Loss} where the equivalent result is obtained for the inequality\nin the additive form.\n}\n\\end{remark}\n\n\n\n\n\n\n\\setcounter{equation}{0}\n\\section{Lieb--Thirring inequalities on $\\mathbb{S}^2$ }\\label{sec2}\n\nWe begin with the case of a sphere and first consider the scalar\ncase. We recall the basic facts concerning the spectrum of the\nscalar Laplace operator $\\Delta=\\div\\nabla$ on the sphere\n$\\mathbb{S}^{2}$:\n\\begin{equation}\\label{harmonics}\n-\\Delta Y_n^k=n(n+1) Y_n^k,\\quad\nk=1,\\dots,2n+1,\\quad n=0,1,2,\\dots.\n\\end{equation}\nHere the $Y_n^k$ are the orthonormal real-valued spherical\nharmonics and each eigenvalue $\\Lambda_n:=n(n+1)$ has multiplicity $2n+1$.\n\n\nThe following identity is essential in what\nfollows \\cite{S-W}: for any $s\\in\\mathbb{S}^{2}$\n\\begin{equation}\\label{identity}\n\\sum_{k=1}^{2n+1}Y_n^k(s)^2=\\frac{2n+1}{4\\pi}.\n\\end{equation}\n\n\n\n\n\n\n\n\\begin{theorem}\\label{Th:S2}\nLet $\\{\\psi_j\\}_{j=1}^N\\in H^1(\\mathbb{S}^2)$ be an orthonormal family of scalar functions\nwith zero average: $\\int_{\\mathbb{S}^2}\\psi_j(s)dS=0$. Then $\\rho(s):=\\sum_{j=1}^N|\\psi_j(s)|^2$\nsatisfies the inequality\n\\begin{equation}\\label{LTS2}\n\\int_{\\mathbb{S}^2}\\rho(s)^2dS\\le\\frac{3\\pi}{32}\\sum_{j=1}^N\\|\\nabla\\psi_j\\|^2.\n\\end{equation}\n\\end{theorem}\n\\begin{proof}\nWe use the discrete version of the recent important far-going improvement \\cite{Frank-Nam}\nof the approach of \\cite{Rumin2}.\n\nLet $f$ be a smooth non-negative function on $\\mathbb{R}^+$ with\n\\begin{equation}\\label{f}\n\\int_0^\\infty f(t)^2dt=1,\n\\end{equation}\nand therefore for any $a>0$\n\\begin{equation}\\label{fa}\na=\\int_0^\\infty f(E\/a)^2dE.\n\\end{equation}\nExpanding a function $\\psi$ with $\\int_{\\mathbb{S}^2}\\psi(s)dS=0$ in spherical harmonics\n$$\n\\psi(s)=\\sum_{n=1}^\\infty\\sum_{k=1}^{2n+1}\\psi_n^kY_n^k(s),\\qquad\n\\psi_n^k=\\int_{\\mathbb{S}^2}\\psi(s)Y_n^k(s)dS=(\\psi,Y_n^k)\n$$\nand observing that the summation starts with $n=1$ we see using\n\\eqref{fa} that\n\\begin{equation}\\label{chain}\n\\aligned\n\\|\\nabla\\psi\\|^2=\\int_{\\mathbb{S}^2}|\\nabla\\psi(s)|^2dS=\n\\sum_{n=1}^\\infty n(n+1)\\sum_{k=1}^{2n+1}|\\psi_n^k|^2=\\\\=\n\\int_0^\\infty \\sum_{n=1}^\\infty f\\biggl(\\frac E{ n(n+1)}\\biggr)^2\\,\\sum_{k=1}^{2n+1}|\\psi_n^k|^2dE=\\\\=\n\\int_0^\\infty\\int_{\\mathbb{S}^2}|\\psi^E(s)|^2dSdE=\n\\int_{\\mathbb{S}^2}\\int_0^\\infty|\\psi^E(s)|^2dEdS,\n\\endaligned\n\\end{equation}\nwhere\n$$\n\\psi^E(s)=\\sum_{n=1}^\\infty\\sum_{k=1}^{2n+1}f\\biggl(\\frac E{ n(n+1)}\\biggr)\n\\psi_n^k\\,Y_n^k(s).\n$$\nReturning to the family $\\{\\psi_j\\}_{j=1}^N$ we have for any $\\varepsilon>0$\n$$\n\\aligned\n\\rho(s)&=\\sum_{j=1}^N|\\psi_j(s)|^2=\\\\&=\\sum_{j=1}^N|\\psi^E_j(s)|^2\n+2\\sum_{j=1}^N\\psi^E_j(s)(\\psi_j(s)-\\psi_j^E(s))+\\sum_{j=1}^N|\\psi_j(s)-\\psi^E_j(s)|^2\\le\\\\&\\le\n(1+\\varepsilon)\\sum_{j=1}^N|\\psi^E_j(s)|^2+\n(1+\\varepsilon^{-1})\\sum_{j=1}^N|\\psi_j(s)-\\psi^E_j(s)|^2.\n\\endaligned\n$$\nFor each term in the second sum we have\n$$\n\\psi(s)-\\psi^E(s)=\\sum_{n=1}^\\infty\\sum_{k=1}^{2n+1}\\psi_n^k\\biggl(1-f\\biggl(\\frac E{ n(n+1)}\\biggr)\\biggr)\n\\,Y_n^k(s)=\\bigl(\\psi(\\cdot),\\chi^E(\\cdot,s)\\bigr),\n$$\nwhere\n$$\n\\chi^E(s',s)=\\sum_{n=1}^\\infty\\sum_{k=1}^{2n+1}\\biggl(1-f\\biggl(\\frac E{ n(n+1)}\\biggr)\\biggr)\n\\,Y_n^k(s')Y_n^k(s).\n$$\nSince the $\\psi_j$'s are orthonormal, we have by Bessel's 
inequality\n$$\n\\sum_{j=1}^N|\\psi_j(s)-\\psi^E_j(s)|^2=\\sum_{j=1}^N\\bigl(\\psi_j(\\cdot),\\chi^E(\\cdot,s)\\bigr)^2\n\\le\\|\\chi^E(\\cdot,s)\\|^2,\n$$\nwhere in view of \\eqref{identity} $\\|\\chi^E(\\cdot,s)\\|^2$, in fact, is independent of $s$:\n\\begin{equation}\\label{indeps}\n\\aligned\n\\|\\chi^E(\\cdot,s)\\|^2=\n\\sum_{n=1}^\\infty\\sum_{k=1}^{2n+1}\\biggl(1-f\\biggl(\\frac E{ n(n+1)}\\biggr)\\biggr)^2\nY_n^k(s)^2=\\\\=\\frac1{4\\pi}\\sum_{n=1}^\\infty(2n+1)\\biggl(1-f\\biggl(\\frac E{ n(n+1)}\\biggr)\\biggr)^2.\n\\endaligned\n\\end{equation}\n\nWe now specify the choice of $f$ by setting (see~\\cite{Frank-Nam}, \\cite{lthbook})\n\\begin{equation}\\label{choice}\nf(t)=\\frac1{1+\\mu t^2},\\qquad\\\n\\mu=\\frac{\\pi^2}{16}\\,.\n\\end{equation}\nThe function $f$ so chosen solves the minimization problem\n$$\n\\aligned\n\\int_{\\mathbb{R}^2}\\left(1-f(1\/|\\xi|^2)\\right)^2d\\xi=&\\pi\\int_0^\\infty(1-f(t))^2t^{-2}dt\\to\\min\\\\\n\\text{under condition}\\quad&\\int_0^\\infty f(t)^2dt=1,\n\\endaligned\n$$\nand the above integral over $\\mathbb{R}^2$ corresponds to the series\non the right-hand side in \\eqref{indeps} (see also~\\eqref{tor}).\n\nWe first observe that \\eqref{f} is satisfied and secondly,\nin view of the estimate for the series in the Appendix\n\\begin{equation}\\label{series1}\n\\aligned\n\\|\\chi^E(\\cdot,s)\\|^2=\\frac1{4\\pi}\\sum_{n=1}^\\infty\\frac\n{(2n+1)}{\\biggl({1+\\left(\\frac1{\\sqrt{\\mu}E}n(n+1)\\right)^2}\\biggr)^2}<\\\\<\n\\frac1{4\\pi}\\sqrt{\\mu}E\\int_0^\\infty\\frac{dt}{(1+t^2)^2}=\n\\frac1{4\\pi}\\sqrt{\\mu}E\\frac\\pi4=\\frac\\pi{64}E=:AE\\,.\n\\endaligned\n\\end{equation}\nHence\n\\begin{equation}\\label{epseps}\n\\rho(s)\\le(1+\\varepsilon)\\sum_{j=1}^N|\\psi^E_j(s)|^2+\n(1+\\varepsilon^{-1})AE.\n\\end{equation}\nOptimizing with respect to $\\varepsilon$ we obtain\n$$\n\\rho(s)\\le\\left(\\sqrt{\\sum_{j=1}^N|\\psi^E_j(s)|^2}+\\sqrt{AE}\\right)^2,\n$$\nwhich gives that\n$$\n\\sum_{j=1}^N|\\psi^E_j(s)|^2\\ge\\left(\\sqrt{\\rho(s)}-\\sqrt{AE}\\right)^2_+.\n$$\nSumming equalities \\eqref{chain} from $j=1$ to $N$ we\nobtain\n$$\n\\aligned\n&\\sum_{j=1}^N\\|\\nabla\\psi_j\\|^2=\\int_{\\mathbb{S}^2}\\int_0^\\infty\n\\sum_{j=1}^N|\\psi_j^E(s)|^2dEdS\\ge\\\\&\n\\int_{\\mathbb{S}^2}\\int_0^\\infty\\left(\\sqrt{\\rho(s)}-\\sqrt{AE}\\right)^2_+dEdS=\n\\frac1{6A}\\int_{\\mathbb{S}^2}\\rho(s)^2dS=\\frac{32}{3\\pi}\\int_{\\mathbb{S}^2}\\rho(s)^2dS.\n\\endaligned\n$$\nThe proof is complete.\n\\end{proof}\n\\begin{remark}\\label{R:semi}\n{\\rm\nThe constant $\\mathrm{k}$ in the theorem satisfies the (semiclassical) lower bound\n\\begin{equation}\\label{lb}\nk\\ge\\frac1{2\\pi}\\,,\n\\end{equation}\nwhich can easily be proved in our particular case of $\\mathbb{S}^2$. 
In fact, we\ntake for the orthonormal family the eigenfunctions $Y_n^k$ with $n=1,\\dots,N-1$,\nand $k=1,\\dots,2n+1$, so that\n$$\n\\sum_{n=1}^{N-1}(2n+1)=N^2-1\\ \\ \\text{and}\\ \\ \\sum_{n=1}^{N-1}(2n+1)n(n+1)=\\frac12N^2(N^2-1),\n$$\nthen \\eqref{M} and the Cauchy inequality give \\eqref{lb}, since\n$$\n(N^2-1)^2=\\left(\\int_{\\mathbb{S}^2}\\rho(s)dS\\right)^2\\le\n4\\pi\\|\\rho\\|^2\\le\n2\\pi \\mathrm{k}N^2(N^2-1).\n$$\n\n}\n\\end{remark}\n\n\n\\subsection{The vector case}\n\nThe vector case is similar, and the key identity~\\eqref{identity} is replaced by\nvector analogue\n(see \\cite{I93}): for any $s\\in\\mathbb{S}^{2}$\n\\begin{equation}\\label{identity-vec}\n\\sum_{k=1}^{2n+1}|\\nabla Y_n^k(s)|^2=n(n+1)\\frac{2n+1}{4\\pi}.\n\\end{equation}\n\n\n\n\nIn the vector case by the Laplace operator\nacting on (tangent) vector\nfields on $\\mathbb{S}^2$ we mean the Laplace--de Rham\noperator $-d\\delta-\\delta d$ identifying $1$-forms and\nvectors. Then for a two-dimensional manifold\n(not necessarily $\\mathbb{S}^2$) we have\n\\cite{I93}\n\\begin{equation}\\label{vecLap}\n\\mathbf{\\Delta} u=\\nabla\\div u-\\mathop\\mathrm{rot}\\rot u,\n\\end{equation}\nwhere the operators $\\nabla=\\mathop\\mathrm{grad}$ and $\\div$ have the\nconventional meaning. The operator $\\mathop\\mathrm{rot}$ of a vector $u$ is a\nscalar and for a scalar $\\psi$,\n$\\mathop\\mathrm{rot}\\psi$ is a vector:\n\\begin{equation}\\label{divrot}\n\\mathop\\mathrm{rot} u:=\\div(u^\\perp),\\qquad\n\\mathop\\mathrm{rot}\\psi:=\\nabla^\\perp\\psi,\n\\end{equation}\nwhere in the local frame $u^\\perp=(u_2,-u_1)$.\n\n Integrating by parts\nwe obtain\n\\begin{equation}\\label{byparts}\n(-\\mathbf{\\Delta} u,u)=\\|\\mathop\\mathrm{rot} u\\|^2+\\|\\div u\\|^2.\n\\end{equation}\n\n\nThe vector Laplacian has a complete in $L_2(T\\mathbb{S}^2)$ orthonormal basis\nof vector eigenfunctions: corresponding\nto the eigenvalue\n$\\Lambda_n=n(n+1)$, where $n=1,2,\\dots$, there are two families of $2n+1$\northonormal vector-valued eigenfunctions $w_n^k(s)$ and $v_n^k(s)$\n\\begin{equation}\\label{bases}\n\\aligned\nw_n^k(s)&=(n(n+1))^{-1\/2}\\,\\nabla^\\perp Y_n^k(s),\\ -\\mathbf{\\Delta}w_n^k=n(n+1)w_n^k,\\\n\\div w_n^k=0;\\\\\nv_n^k(s)&=(n(n+1))^{-1\/2}\\,\\nabla Y_n^k(s),\\ \\ -\\mathbf{\\Delta}v_n^k=n(n+1)v_n^k,\\ \\mathop\\mathrm{rot} v_n^k=0,\n\\endaligned\n\\end{equation}\nwhere\n$k=1,\\dots,2n+1$, and~(\\ref{identity-vec}) gives the\nfollowing important identities: for any $s\\in\\mathbb{S}^2$\n\\begin{equation}\\label{id-vec}\n\\sum_{k=1}^{2n+1}|w_n^k(s)|^2=\\frac{2n+1}{4\\pi},\\qquad\n\\sum_{k=1}^{2n+1}|v_n^k(s)|^2=\\frac{2n+1}{4\\pi}.\n\\end{equation}\nWe finally observe that $-\\mathbf{\\Delta}$ is strictly\npositive $-\\mathbf{\\Delta}\\ge \\Lambda_1I=2I.$\n\n\n\n\\begin{theorem}\\label{Th:LT-vec}\nLet $\\{u_j\\}_{j=1}^N\\in H^1(T\\mathbb{S}^2)$\nbe an orthonormal family of vector fields in $L^2(T\\mathbb{S}^2)$. 
Then\n\\begin{equation}\\label{orthvec}\n\\int_{\\mathbb{S}^2}\\rho(s)^2dS\\le\n \\frac{3\\pi}{16}\\sum_{j=1}^N(\\|\\mathop\\mathrm{rot} u_j\\|^2\n+\\|\\div u_j\\|^2),\n\\end{equation}\nwhere $\\rho(s)=\\sum_{j=1}^N|u_j(s)|^2$.\nIf, in addition, $\\div u_j=0$ $($or $\\mathop\\mathrm{rot} u_j=0$$)$,\nthen\n\\begin{equation}\\label{orthvecsol}\n\\int_{\\mathbb{S}^2}\\rho(s)^2dS\\le\\frac{3\\pi}{32}\\cdot\n\\begin{cases}\\displaystyle\n\\sum_{j=1}^N\\|\\mathop\\mathrm{rot} u_j\\|^2,\n\\quad \\ \\div u_j=0,\n\\\\\\displaystyle\n\\sum_{j=1}^N\\|\\div u_j\\|^2,\n\\quad \\mathop\\mathrm{rot} u_j=0.\n\\end{cases}\n\\end{equation}\n\\end{theorem}\n\\begin{proof} We prove the first inequality in~\\eqref{orthvecsol},\nthe proof of the second is similar. Expanding a vector function\n$u$ with $\\div u=0$ in the basis $w_n^k$\n$$\nu(s)=\\sum_{n=1}^\\infty\\sum_{k=1}^{2n+1} u_n^kw_n^k(s),\\qquad\nu_n^k=(u,w_n^k),\n$$\nwe have instead of \\eqref{chain}\n\\begin{equation}\\label{chain1}\n\\aligned\n\\|\\mathop\\mathrm{rot} u\\|^2&=\n\\sum_{n=1}^\\infty n(n+1)\\sum_{k=1}^{2n+1}|u_n^k|^2=\\\\&=\n\\int_0^\\infty \\sum_{n=1}^\\infty f\\biggl(\\frac E{ n(n+1)}\\biggr)^2\\,\\sum_{k=1}^{2n+1}|u_n^k|^2dE=\n\\int_{\\mathbb{S}^2}\\int_0^\\infty|u^E(s)|^2dEdS,\n\\endaligned\n\\end{equation}\nwhere\n$$\nu^E(s)=\\sum_{n=1}^\\infty\\sum_{k=1}^{2n+1}f\\biggl(\\frac E{ n(n+1)}\\biggr)\nu_n^k\\,w_n^k(s).\n$$\nAs before\n$$\n\\aligned\n\\rho(s)\\le\n(1+\\varepsilon)\\sum_{j=1}^N|u^E_j(s)|^2+\n(1+\\varepsilon^{-1})\\sum_{j=1}^N|u_j(s)-u^E_j(s)|^2.\n\\endaligned\n$$\nWe now imbed $\\mathbb{S}^2$ into $\\mathbb{R}^3$\nin the natural way and use the standard basis $\\{e_1,e_2,e_3\\}$\nand the scalar product $\\langle \\cdot,\\cdot\\rangle$ in $\\mathbb{R}^3$.\nThen we see that\n$$\n\\aligned\n\\langle u(s)-u^E(s),e_1\\rangle=\\\\\\sum_{n=1}^\\infty\\sum_{k=1}^{2n+1}u_n^k\\biggl(1-f\\biggl(\\frac E{ n(n+1)}\\biggr)\\biggr)\n\\,\\langle w_n^k(s),e_1\\rangle=\\bigl(u(\\cdot),\\chi^E_1(\\cdot,s)\\bigr),\n\\endaligned\n$$\nwhere the vector function\n$$\n\\chi^E_1(s',s)=\\sum_{n=1}^\\infty\\sum_{k=1}^{2n+1}\\biggl(1-f\\biggl(\\frac E{ n(n+1)}\\biggr)\\biggr)\n\\,w_n^k(s')\\langle w_n^k(s),e_1\\rangle.\n$$\nBy orthonormality and Bessel's inequality\n$$\n\\aligned\n\\sum_{j=1}^N|u_j(s)-u^E_j(s)|^2=\\sum_{j=1}^N\\sum_{l=1}^3|\\langle u_j(s)-u^E_j(s),e_l\\rangle|^2=\\\\\n=\\sum_{l=1}^3\\sum_{j=1}^N\\bigl(u_j(\\cdot),\\chi^E_l(\\cdot,s)\\bigr)^2\n\\le\\sum_{l=1}^3\\|\\chi_l^E(\\cdot,s)\\|^2.\n\\endaligned\n$$\nHowever, in view of~\\eqref{id-vec}, the right hand side is again independent of $s$\n$$\n\\aligned\n\\sum_{l=1}^3\\|\\chi^E_l(\\cdot,s)\\|^2&=\n\\sum_{n=1}^\\infty\\biggl(1-f\\biggl(\\frac E{ n(n+1)}\\biggr)\\biggr)^2\\sum_{k=1}^{2n+1}\\sum_{l=1}^3\n|\\langle w_n^k(s),e_l\\rangle|^2=\\\\&=\n\\sum_{n=1}^\\infty\\biggl(1-f\\biggl(\\frac E{ n(n+1)}\\biggr)\\biggr)^2\\sum_{k=1}^{2n+1}\n| w_n^k(s)|^2=\\\\&=\\frac1{4\\pi}\\sum_{n=1}^\\infty(2n+1)\\biggl(1-f\\biggl(\\frac E{ n(n+1)}\\biggr)\\biggr)^2,\n\\endaligned\n$$\nand we complete the proof in exactly the same way as we have\ndone in the proof of Theorem~\\ref{Th:S2} after \\eqref{indeps}.\nFinally, in the proof of inequality~\\eqref{orthvec} both families\nof vector eigenfunctions \\eqref{bases} play equal roles,\nand the constant is increased by the factor of two.\n\\end{proof}\n\nThis, however, does not happen for a single vector function.\n\n\\begin{corollary}\\label{C:vec}\nLet $u\\in H^1(T\\mathbb{S}^2)$. 
Then\n\\begin{equation}\\label{vecLad}\n\\|u\\|^4_{L_4}\\le \\vec{\\mathrm{k}}_\\mathrm{Lad}\n\\|u\\|^2\\left(\\|\\mathop\\mathrm{rot} u\\|^2+\\|\\div u\\|^2\\right),\\qquad\n\\vec{\\mathrm{k}}_\\mathrm{Lad}\\le\\frac{3\\pi}{32}\\,.\n\\end{equation}\n\\end{corollary}\n\\begin{proof}\nThe proof is based on the equivalence\n\\eqref{LT}$_{\\gamma=1}\\Leftrightarrow$\\eqref{orth} with equality\nfor the constants~\\eqref{kL} and the fact that the eigenvalues of\nthe vector Schr\\\"odinger operator on $\\mathbb{S}^2$\n\\begin{equation}\\label{vecSchr}\nAv=-\\mathbf{\\Delta} v-Vv\n\\end{equation}\nhave even multiplicities as the following equality implies\n(see~\\eqref{vecLap}, \\eqref{divrot})\n$$\n\\mathbf{\\Delta}(v^\\perp)=(\\mathbf{\\Delta} v)^\\perp.\n$$\nNow let $u$ in \\eqref{vecLad} be normalized, $\\|u\\|=1$, let\n$V(s)=\\alpha|u(s)|^2$, $\\alpha>0$, and let $E$ be the lowest\neigenvalue of~\\eqref{vecSchr}. If $E<0$, then since $E$ is counted\nat least twice in the sum $\\sum_{\\lambda_j\\le0}\\lambda_j$, it\nfollows that\n\\begin{equation}\\label{E}\nE\\ge \\frac12\\sum_{\\lambda_j\\le0}\\lambda_j\\ge-\\frac12\\mathrm{L_1}\\int_{\\mathbb{S}^2}V(s)^2dS=\n-\\alpha^2\\frac12\\mathrm{L_1}\\|u\\|_{L_4}^4,\n\\end{equation}\nwhere the second inequality is~\\eqref{LT} with\n$$\n\\mathrm{L_1}\\le \\frac14\\cdot\\frac{3\\pi}{16},\n$$\nin view of \\eqref{kL} and \\eqref{orthvec}. If $E\\ge0$, then\n\\eqref{E} also formally holds.\n\nNext, by the variational principle\n\\begin{equation}\\label{Eless}\n\\aligned\nE\\le(Au,u)=\\|\\mathop\\mathrm{rot} u\\|^2+\\|\\div u\\|^2-\\int_{\\mathbb{S}^2}V(s)|u(s)|^2dS=\\\\\n\\|\\mathop\\mathrm{rot} u\\|^2+\\|\\div u\\|^2-\\alpha\\|u\\|^4_{L_4}.\n\\endaligned\n\\end{equation}\nCombining \\eqref{E} and \\eqref{Eless} and setting optimal\n$\\alpha=1\/\\mathrm{L}_1$ we finally obtain\n$$\n\\|u\\|_{L_4}^4\\le 2\\mathrm{L}_1\\left(\\|\\mathop\\mathrm{rot} u\\|^2+\\|\\div u\\|^2\\right)\\le\n\\frac{3\\pi}{32}\\left(\\|\\mathop\\mathrm{rot} u\\|^2+\\|\\div u\\|^2\\right).\n$$\n\\end{proof}\n\n\n\\begin{remark}\n{\\rm\nIt is worth pointing out that\n and for any domain on the sphere\n$\\Omega\\subseteq\\mathbb{S} ^2$ and an orthonormal family\n$\\{u_j\\}_{j=1}^N\\in H^1_0(\\Omega,T\\mathbb{S}^2)$ extension by zero\nshows that the corresponding Lieb--Thirring constants are uniformly\nbounded by the constants on the whole sphere whose estimates were\nfound in Theorem~\\ref{Th:LT-vec} and Corollary~\\ref{C:vec}. }\n\\end{remark}\n\n\n\\setcounter{equation}{0}\n\\section{Lieb--Thirring inequalities on $\\mathbb{T}^2$ }\\label{sec3}\nWe now prove Theorem~\\ref{Th:1} for the 2D torus. 
We first consider the torus with equal periods\nand without loss of generality\nwe set $\\mathbb{T}^2=[0,2\\pi]^2$.\n\\begin{proof}[Proof of Theorem~\\ref{Th:1} for $\\mathbb{T}^2$]\nWe use the Fourier series\n$$\n\\psi(x)=\\frac1{2\\pi}\\sum_{k\\in\\mathbb{Z}^2_0}\\psi_k e^{ik\\cdot x},\\qquad\n\\psi_k=\\frac1{2\\pi}\\int_{\\mathbb{T}^2}\\psi(x)e^{-ik\\cdot x}dx,\\quad\n\\mathbb{Z}^2_0=\\mathbb{Z}^2\\setminus\\{0,0\\},\n$$\nso that\n$$\n\\|\\psi\\|^2=\\sum_{k\\in\\mathbb{Z}^2_0}|\\psi_k|^2,\n\\qquad\n\\|\\nabla\\psi\\|^2=\\sum_{k\\in\\mathbb{Z}^2_0}|k|^2|\\psi_k|^2.\n$$\nThen as before we have\n$$\n\\|\\nabla\\psi\\|^2=\n\\int_0^\\infty \\sum_{k\\in\\mathbb{Z}^2_0} f\\biggl(\\frac E{|k|^2}\\biggr)^2|\\psi_k|^2dE=\n\\int_{\\mathbb{T}^2}\\int_0^\\infty|\\psi^E(x)|^2dEdx,\n$$\nwhere\n$$\n\\psi^E(x)=\\frac1{2\\pi}\\sum_{k\\in\\mathbb{Z}^2_0} f\\biggl(\\frac E{|k|^2}\\biggr)\\psi_ke^{ik\\cdot x},\n$$\nand therefore\n$$\n\\psi(x)-\\psi^E(x)=\n\\frac1{2\\pi}\\sum_{k\\in\\mathbb{Z}^2_0}\n\\left(1-f\\biggl(\\frac E{|k|^2}\\biggr)\\right)\\psi_ke^{ik\\cdot x}=\n(\\psi(\\cdot), \\chi^E(\\cdot,x)),\n$$\nwhere\n$$\n\\chi^E(x',x)=\\frac1{2\\pi}\n\\sum_{k\\in\\mathbb{Z}^2_0}\n\\left(1-f\\biggl(\\frac E{|k|^2}\\biggr)\\right)e^{ik\\cdot x'}e^{-ik\\cdot x}.\n$$\nWith the choice of $f$ given in \\eqref{choice} and setting $a=\\sqrt{\\mu}E$ below, we have\n\\begin{equation}\\label{tor}\n\\aligned\n\\|\\chi^E(\\cdot,x)\\|^2=\\frac1{4\\pi^2}\\sum_{k\\in\\mathbb{Z}^2_0}\n\\left(1-f\\biggl(\\frac E{|k|^2}\\biggr)\\right)^2=\\\\=\n\\frac1{4\\pi^2}\\sum_{k\\in\\mathbb{Z}^2_0}\\frac1{\\left(\\left(\\frac{|k|}{\\sqrt{a}}\\right)^4+1\\right)^2}\n<\\frac a{16}=\\frac\\pi{64}E=:AE,\n\\endaligned\n\\end{equation}\nwhere the key inequality for series is proved in the Appendix.\n\nAt this point we can complete the proof as in Theorem~\\ref{Th:S2}.\n\\end{proof}\n\n\\subsection{Elongated torus.}\nWe now briefly discuss the Lieb--Thirring constant on a 2D torus\nwith aspect ratio $\\alpha$. Since the Lieb--Thirring constant\ndepends only on $\\alpha$, we consider the torus $\\mathbb{T}^2_\\alpha=[0,2\\pi\/\\alpha]\\times\n[0,2\\pi]$. Furthermore, it suffices to consider the case $\\alpha\\le1$, since otherwise\nwe merely interchange the periods.\n\n\\begin{theorem}\\label{Th:alpha}\nThe Lieb--Thirring constant on the elongated torus $\\mathbb{T}^2_\\alpha$\nsatisfies the bound\n\\begin{equation}\\label{alpha}\n\\mathrm{k}_\\mathrm{LT}(\\mathbb{T}^2_\\alpha)\\le\\frac1\\alpha\\frac{3\\pi}{32}\\\n\\text{as}\\ \\alpha\\to0.\n\\end{equation}\n\\end{theorem}\n\\begin{proof} We shall prove \\eqref{alpha} under an additional technical assumption that\n$k=1\/\\alpha\\in\\mathbb{N}$. Given the orthonormal family\n$\\{\\psi_j\\}_{j=1}^N \\in\\dot H^1(\\mathbb{T}^2_\\alpha)$, we extend each $\\psi_j$\n by periodicity in the $x_2$ direction $k$ times, multiply the result by $\\sqrt{\\alpha}$\n and denote the resulting function defined on the square\n torus $\\mathbb{T}^2=[0,2\\pi k]^2$ by $\\widetilde\\psi_j$. 
Then the family\n $\\{\\widetilde\\psi_j\\}_{j=1}^N$ is orthonormal in $L_2(\\mathbb{T}^2)$ and\n for $\\rho_{\\widetilde\\psi}(x)=\\sum_{j=1}^N|\\widetilde\\psi_j(x)|^2$\n and $\\rho_{\\psi}(x)=\\sum_{j=1}^N|\\psi_j(x)|^2$ it holds\n $$\n \\int_{\\mathbb{T}^2}\\rho_{\\widetilde\\psi}(x)^2dx=\n \\alpha\\int_{\\mathbb{T}^2_\\alpha}\\rho_{\\psi}(x)^2dx,\\qquad\n \\int_{\\mathbb{T}^2}|\\nabla\\widetilde\\psi_j(x)|^2dx=\n \\int_{\\mathbb{T}^2_\\alpha}|\\nabla\\psi_j(x)|^2dx,\n $$\n which gives \\eqref{alpha}.\n \\end{proof}\n \\begin{remark}\n{\\rm\nThe rate of growth $1\/\\alpha$ of the Lieb--Thirring constant is sharp\nas $\\alpha\\to0$. To see this we set $N=1$ and consider a function\non $\\mathbb{T}^2_\\alpha$ depending on the long coordinate $x_1$ only.\nFor example, let $\\psi(x_1,x_2)=\\sin(2\\pi\\alpha x_1)$.\nThen $\\|\\psi\\|^4_{L_4}\\sim 1\/\\alpha$ $(=\\frac{3\\pi^2}{2\\alpha})$,\n$\\|\\psi\\|^2_{L_2}\\sim 1\/\\alpha$ $(=\\frac{2\\pi^2}{\\alpha})$,\n$\\|\\nabla\\psi\\|^2_{L_2}\\sim \\alpha$ $(=2\\pi^2\\alpha)$.\nTherefore $\\mathrm{k}_\\mathrm{LT}(\\mathbb{T}^2_\\alpha)\\succeq 1\/\\alpha$\n$(\\ge\\frac1\\alpha\\frac3{8\\pi^2})$.\n}\n\\end{remark}\n\\begin{remark}\n{\\rm\nThe orthogonal complement to the subspace of functions depending only on the long coordinate\n$x_1$ consists of functions $\\psi(x_1,x_2)$ with mean value\nzero with respect to the short coordinate $x_2$:\n\\begin{equation}\\label{T2cond}\n\\int_0^{2\\pi}\\psi(x_1,x_2)dx_2=0\\quad\\forall x_1\\in[0,2\\pi\/\\alpha].\n\\end{equation}\nThe Lieb--Thirring constant on this subspace is bounded uniformly\nwith respect to $\\alpha$ as $\\alpha\\to0$. The similar\nresult holds for the multidimensional torus with different periods.\nSee \\cite{I-L-MS} for the details.\n}\n\\end{remark}\n\n\\begin{remark}\n{\\rm The lifting argument of \\cite{Lap-Weid} was used in\n\\cite{I-L-MS} to derive the Lieb--Thirring inequalities on the\nmultidimensional with pointwise orthogonality condition of the type\n\\eqref{T2cond}. It is not clear how to use the lifting argument\nin the case of a global (and weaker) orthogonality contition\n$\\int_{\\mathbb{T}^d}\\psi(x)dx=0$.\n\nFinally, we do not know whether the lifting argument\ncan in some form be used for the Lieb--Thirring inequalities\non the sphere, say, when going over from $\\mathbb{S}^{d-1}$ to\n$\\mathbb{S}^{d}$. }\n\\end{remark}\n\n\n\n\n\\setcounter{equation}{0}\n\\section{Appendix. Estimates of the series}\\label{sec4}\n\n\\subsection*{Estimate for the sphere.} The series estimated in \\eqref{series1} is precisely of the type\n\\begin{equation}\\label{G}\nG(\\nu):=\\sum_{n=1}^\\infty(2n+1)g\\left(\\nu\\, n(n+1)\\right),\n\\end{equation}\nwhere $g$ is sufficiently smooth and sufficiently fast decays at\ninfinity. We need to find the asymptotic behavior of $G(\\nu)$ as\n$\\nu\\to0$. This has been done in~\\cite{IZ} where the following\nresult was proved.\n\\begin{lemma}\\label{L:E-M}\nThe following asymptotic expansion holds as $\\nu\\to0$:\n\\begin{equation}\\label{as2}\nG(\\nu)=\\frac1{\\nu}\\int_0^\\infty g(t)dt-\\frac23g(0)-\n\\frac1{15}\\nu g'(0)+\\frac4{315}\\nu^2g''(0)+\nO(\\nu^3).\n\\end{equation}\n\\end{lemma}\n\nThe series in \\eqref{series1} is of the form~\\eqref{G}\nwith\n$$\ng(t)=\\frac1{(1+t^2)^2}, \\qquad \\nu=\\frac1{\\sqrt{\\mu}E},\n$$\nso that $g(0)=1$, $g'(0)=0$, $g''(0)=-4$ and $\\int_0^\\infty g(t)dt=\\pi\/4$. 
Therefore\n\\eqref{as2} gives\n$$\n\\sum_{n=1}^\\infty\\frac\n{(2n+1)}{\\biggl({1+\\left(\\frac{n(n+1)}a\\right)^2}\\biggr)^2}=\n a\\frac\\pi 4-\\frac23-\\frac{16}{315}a^{-2}+O(a^{-3}),\\ \\text{as}\\ a\\to\\infty,\n$$\nwhich shows that\ninequality \\eqref{series1}\nclearly holds for large energies $E>E_0$. The proof of inequality \\eqref{series1} for\nall $E\\in[0,\\infty)$ amounts to showing that the inequality\n$$\nH_{\\mathbb{S}^2}(a):=\\frac4\\pi\\,a^3\\sum_{n=1}^\\infty\\frac{2n+1}{\\bigl(\\bigl(n(n+1)\\bigr)^2+a^2\\bigr)^2}\n\\,<\\,1, \\quad a=\\sqrt{\\mu}E=\\frac\\pi 4E\n$$\nholds on a \\emph{finite} interval $ a\\in[0,a_0]$.\nThe value of $a_0$ (say, $20$) can be specified similarly to \\cite{I12JST}.\nFurthermore, the sum of the series\ncan be found in an explicit form in terms of the (digamma) $\\psi$-function.\nThe function $H_{\\mathbb{S}^2}(a)$ and the third-order remainder term are shown in Fig.~\\ref{fig:S2}.\n\\begin{figure}[htb]\n\\centerline{\\psfig{file=S2.eps,width=7.5cm,height=6cm,angle=0}\n\\psfig{file=remainderS2.eps,width=7.5cm,height=6cm,angle=0}}\n\\caption{The graph of $H_{\\mathbb{S}^2}(a)$ is on the left;\nthe remainder term $\\left(H_{\\mathbb{S}^2}(a)-1-\\frac8{3\\pi a}\\right)\\cdot a^3$\nis shown on the right, the horizontal red line is $-64\/(315\\pi)=-0.064$.}\n\\label{fig:S2}\n\\end{figure}\n\\subsection*{Estimate for the torus.}\n\n\\begin{lemma}\\label{L:Poisson}\nThe following asymptotic expansion holds as $a\\to\\infty$:\n\\begin{equation}\\label{at2}\n\\sum_{k\\in\\mathbb{Z}^2_0}\\frac1{\\left(\\left(\\frac{|k|}{\\sqrt{a}}\\right)^4+1\\right)^2}\n=\\frac{\\pi^2}4 a-1+O(e^{-C\\sqrt{a}}).\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nWe use the Poisson summation formula\n(see, e.\\,g., \\cite{S-W}):\n$$\n\\sum_{m\\in\\mathbb{Z}^d}f(m\/\\nu)=\n(2\\pi)^{d\/2}\\nu^d\n\\sum_{m\\in\\mathbb{Z}^d}\\widehat{f}(2\\pi m \\nu),\n$$\nwhere\n$\\widehat{f}(\\xi)=(2\\pi)^{-d\/2}\\int_{\\mathbb{R}^d}\nf(x)e^{-i\\xi x}dx$.\n\nTaking into account that the term with $k=(0,0)$ is missing in\n\\eqref{at2}, the Poisson summation formula gives\n\\begin{equation}\\label{expsmall}\n\\aligned\n\\sum_{k\\in\\mathbb{Z}^2_0}\\frac1{\\left(\\left(\\frac{|k|}{\\sqrt{a}}\\right)^4+1\\right)^2}+1=\\\\=\na\\int_{\\mathbb{R}^2}\\frac{dxdy}{((x^2+y^2)^2+1)^2}+2\\pi a\\sum_{k\\in\\mathbb{Z}^2_0}\n\\widehat h(2\\pi\\sqrt{a}|k|)=\\frac{\\pi^2}4a+O(e^{-C\\sqrt{a}}),\n\\endaligned\n\\end{equation}\nwhere $h(x,y)=\\frac1{((x^2+y^2)^2+1)^2}$ is analytic\nand therefore its Fourier transform\n\\begin{equation}\\label{Fourier}\n\\widehat h(\\xi)=\\frac1{2\\pi}\\int_{\\mathbb{R}^2}\ne^{-ix\\xi_1-iy\\xi_2}h(x,y)dxdy\n\\end{equation}\nis exponentially decaying.\n\\end{proof}\n\nThe graph of the function\n\\begin{equation}\\label{lessthen1}\nH_{\\mathbb{T}^2}(a):=\\frac4{\\pi^2}\\,a^3\\sum_{k\\in\\mathbb{Z}^2_0}\\frac{1}{\\bigl(|k|^4+a^2\\bigr)^2}\n\\,<\\,1,\n\\end{equation}\nand the exponentially small remainder term\n$$\nR(a)=2\\pi a\\sum_{k\\in\\mathbb{Z}^2_0}\n\\widehat h(2\\pi\\sqrt{a}|k|)\n$$\nare shown in Fig.~\\ref{fig:T2}\n\\begin{figure}[htb]\n\\centerline{\\psfig{file=T2.eps,width=7.5cm,height=6cm,angle=0}\n\\psfig{file=remainderT2.eps,width=7.5cm,height=6cm,angle=0}}\n\\caption{The function $H_{\\mathbb{T}^2}(a)$ is shown on the left;\nthe exponentially small term $R(a)$ is shown on the right.}\n\\label{fig:T2}\n\\end{figure}\n\nWe now give an explicit estimate for the exponentially small remainder term in~\\eqref{expsmall}.\nThe function $h(z)$ is analytic in the domain 
$\\Omega\\subset\\mathbb{C}^2$:\n$\\Omega=\\{z\\in \\mathbb{C}^2,\\ |\\mathrm{Im} z_1|<\\frac12,\\ |\\mathrm{Im} z_2|<\\frac12\\}$.\nIn fact, the equation\n$$\n(x+i\\alpha)^2+(y+i\\alpha)^2=\\pm i\n$$\nhas real solutions $x$ and $y$ only for $\\alpha\\ge\\frac12$.\n\nFor $F(x,y,\\alpha)=((x+i\\alpha)^2+(y+i\\alpha)^2)^2+1$ we have\n$$\n\\aligned\n&\\mathrm{Re}\\,F=\n(x^2+y^2)^2-8\\alpha^2(x^2+y^2+xy)+4\\alpha^4+1\\ge t^2-12\\alpha^2t+4\\alpha^4+1,\\\\\n&\\mathrm{Im}\\,F=\n4\\alpha(x+y)(x^2+y^2-2\\alpha^2),\\quad|\\mathrm{Im}\\,F|\\le4\\sqrt{2}\\alpha t^{1\/2}|t-2\\alpha^2|,\n\\endaligned\n$$\nwhere $t:=x^2+y^2$. Next, by a direct inspection we verify that\nfor $t\\ge0$\n$$\n\\aligned\n|F^2|\\ge\n (\\mathrm{Re}\\,F)^2-(\\mathrm{Im}\\,F)^2=\\\\\n(t^2-12\\alpha^2t+4\\alpha^4+1)^2-32\\alpha^2t(t-2\\alpha^2)^2>\\frac1b(t^4+1),\n\\endaligned\n$$\nwhere $\\alpha=4.6^{-1}$ and $b=4.75$.\nThis gives that for $|\\mathrm{Im} z_1|\\le\\alpha,\\ |\\mathrm{Im} z_2|\\le\\alpha$\n$$\n|h(x+i\\alpha,y+i\\alpha)|\\le\\frac b{(x^2+y^2)^4+1}\\,.\n$$\nBy the Cauchy integral theorem we can shift the $x$ and $y$ integration in \\eqref{Fourier}\nin the complex plane by $\\pm i\\alpha$ (depending on the sign of $\\xi_1$ and $\\xi_2$) and find that\n$$\n|\\widehat h(\\xi)|\\!\\le\\!\\frac b{2\\pi}e^{-(|\\xi_1|+|\\xi_2|)\\alpha}\\!\\int_{\\mathbb{R}^2}\n\\frac{dxdy}{(x^2+y^2)^4+1}\\!=e^{-(|\\xi_1|+|\\xi_2|)\\alpha}\\frac{b\\pi\\sqrt{2}}8\\le\ne^{-\\alpha|\\xi|}\\frac{b\\pi\\sqrt{2}}8\\,.\n$$\n\nWe write the numbers $|k|^2$ over the lattice $\\mathbb{Z}^2_0$\nin non-decreasing order and denote them by $\\{\\lambda_j\\}^\\infty_{j=1}$.\nUsing that $\\lambda_j\\ge j\/4$ (see \\cite{I-L-AA}) and setting\n$L:=\\frac{\\alpha\\pi\\sqrt{a}}2$ and $A:=\\frac{\\pi^2\\sqrt{2}ab}4$ we\nestimate the series in \\eqref{expsmall} as follows\n$$\n\\aligned\n|R(a)|\\le\n2\\pi a\\sum_{k\\in\\mathbb{Z}^2_0}\n|\\widehat h(2\\pi\\sqrt{a}|k|)|=2\\pi a\\sum_{j=1}^\\infty\n|\\widehat h(2\\pi\\sqrt{a}\\lambda_j^{1\/2})|\\le\\\\\nA\\sum_{j=1}^\\infty\ne^{-2\\pi\\alpha\\sqrt{a}\\lambda_j^{1\/2}}\\le\nA\\sum_{j=1}^\\infty\ne^{-2Lj^{1\/2}}=Ae^{-L}\\sum_{j=1}^\\infty e^{-L(2j^{1\/2}-1)}\\le\\\\\nAe^{-L}\\sum_{j=1}^\\infty e^{-Lj^{1\/2}}<\nAe^{-L}\\int_0^\\infty e^{-L\\sqrt{x}}dx=\nAe^{-L}\\frac2{L^2}=\\frac{2^{3\/2}b}{\\alpha^2}e^{-\\frac{\\alpha\\pi\\sqrt{a}}2}\\,.\n\\endaligned\n$$\nInequality~\\eqref{lessthen1} holds if $R(a)<1$. The above estimates show that\n$|R(a)|<1$ for\n$$\na>\\left[\\frac2{\\alpha\\pi}\\log\\left(\\frac{2^{3\/2}b}{\\alpha^2}\\right)\\right]^2=\n273.8\\,.\n$$\n\nA more optimistic estimate follows from the fact that $h(x)$ is radial and therefore\nso is its Fourier transform\n $$\\widehat h(\\xi)=\\int_0^\\infty J_0(|\\xi|r)h(r)rdr,\n$$\nwhere $J_0$ is the Bessel function. The latter integral is expressed in terms of the\nMeijer G-function and satisfies $|\\widehat h(\\xi)|<e^{-|\\xi|\/2}$, which shows that $|R(a)|<1$ already for\n$$\na>\\left[\\frac4\\pi\\log\\frac{64}\\pi\\right]^2=14.73\\,.\n$$\n\n\n\\subsection*{Acknowledgements}\\label{SS:Acknow}\nThe work of A.\\,I. and S.\\,Z. is supported in part by the Russian\nScience Foundation grant 19-71-30004 (sections 1,2). Research of A.\\,L. is\nsupported by the Russian Science Foundation grant 19-71-30002 (sections 3,4).\n\n\n\n\n ","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:Introduction}\n\nPartial least squares (PLS) is a regularized regression technique developed by \\citet{Wol84} to deal with collinearities in the regressor matrix. 
It is an iterative algorithm where the covariance between response and regressor is maximized at each step, see \\citet{Hel88} for a detailed description. Regularization in the PLS algorithm is obtained by stopping the iteration process early.\n\nSeveral studies showed that the partial least squares algorithm is competitive with other regression methods such as ridge regression and principal component regression and that it generally needs fewer iterations than the latter to achieve comparable estimation and prediction, see, e.g., \\citet{FrankFriedman} and \\citet{Krae07b}. For an overview of further properties of PLS we refer to \\citet{Ros06}.\n\nReproducing kernel Hilbert spaces (RKHS) have a long history in probability and statistics \\citep[see e.g.][]{Berlinet}. \nHere we focus on the supervised kernel based learning approach for the solution of non-parametric regression problems. RKHS methods are both computationally and theoretically attractive, due to the kernel trick \\citep{Sch98} and the representer theorem \\citep{Wah99} as well as its generalization \\citep{Sch01}. Within the reproducing kernel Hilbert space framework one can adapt linear regularized regression techniques like ridge regression and principal component regression to a non-parametric setting, see \\citet{Sau98} and \\citet{Ros00a}, respectively. We refer to \\citet{bSch} for more details on the kernel based learning approach.\n\nKernel PLS was introduced in \\citet{Ros01}, who reformulated the algorithm presented in \\citet{Lin93}. The relationship to kernel conjugate gradient (KCG) methods was highlighted in \\citet{Blan10a}. It can be seen in \\citet{Hanke} that conjugate gradient methods are well suited for handling ill-posed problems, as they arise in kernel learning, see, e.g., \\citet{Vit06}.\n\n\\citet{Ros03} investigated the performance of kernel partial least squares (KPLS) for non-linear discriminant analysis.\n\\citet{Blan10a} proved the consistency of KPLS when the algorithm is stopped early, but without giving convergence rates. \n\n\\citet{Cap07} showed that kernel ridge regression (KRR) attains optimal probabilistic rates of convergence for independent and identically distributed data, using a source condition and a polynomial effective dimensionality condition. A generalization of these results to a wider class of effective dimensionality conditions and an extension to kernel principal component regression can be found in \\citet{Dicker17}.\n\nFor a variant of KCG, \\citet{Blan10b} obtained probabilistic convergence rates for independent identically distributed data. They pointed out explicitly that their approach and results are not directly applicable to KPLS.\n\nWe study the convergence of the kernel partial least squares estimator to the true regression function when the algorithm is stopped early. \nSimilarly to \\citet{Blan10b}, we derive explicit probabilistic convergence rates. In contrast to previously cited works on kernel regression, our input data are not independent and identically distributed but rather stationary time series. We derive probabilistic convergence results that can be applied for arbitrary temporal dependence structures, given that certain concentration inequalities for these data hold.\nThe derived convergence rates depend not only on the complexity of the target function and of the data mapped into the kernel space, but also on the persistence of the dependence in the data. 
In the stationary setting we prove that the short range dependence still leads to optimal rates, but if the dependence is more persistent, the rates become slower.\n\n\\section{Kernel Partial Least Squares}\n\\label{sec:problem}\nConsider the non-parametric regression problem \n\\begin{equation}\n\\label{eq:model}\n\ty_t = f^\\ast(X_t) + \\varepsilon_t,~~t \\in {\\mathbb Z}.\n\\end{equation}\nHere $\\{X_t\\}_{t \\in {\\mathbb Z}}$ is a $d$-dimensional, $d \\in {\\mathbb N}$, stationary time series on a probability space $(\\Omega,\\mathcal A,\\mathrm{P})$ and $\\{\\varepsilon_t\\}_{t \\in {\\mathbb Z}}$ is an independent and identically distributed sequence of real valued random variables with expectation zero and variance $\\sigma^2 > 0$ that is independent of $\\{X_t\\}_{t \\in {\\mathbb Z}}$. Let $X$ be a random vector that is independent of $\\{X_t\\}_{t \\in {\\mathbb Z}}$ and $\\{\\varepsilon_t\\}_{t \\in {\\mathbb Z}}$ with the same distribution as $X_0$. The target function we seek to estimate is $f^\\ast \\in \\mathcal L^2\\left(\\Prob^{X} \\right)$.\n\nFor the purpose of supervised learning assume that we have a training sample $\\{(X_t,y_t)\\}_{t=1}^n$ for some $n \\in {\\mathbb N}$. In the following we introduce some basic notation for the kernel based learning approach.\n\nDefine with $(\\mathcal{H},\\langle \\cdot,\\cdot\\rangle_\\mathcal H)$ the RKHS of functions on ${\\mathbb R}^d$ with reproducing kernel $k:{\\mathbb R}^d \\times {\\mathbb R}^d \\rightarrow {\\mathbb R}$, i.e., it holds\n\\begin{equation}\n\\label{eq:rep.property}\n\tg(x) = \\langle g, k(\\cdot,x) \\rangle_\\mathcal H, ~~x \\in {\\mathbb R}^d, g\\in \\mathcal H.\n\\end{equation} \n\nThe corresponding inner product and norm in $\\mathcal H$ is denoted by $\\langle \\cdot,\\cdot \\rangle_\\mathcal H$ and $\\|\\cdot\\|_\\mathcal H$, respectively. We refer to \\citet{Berlinet} for examples of Hilbert spaces and their reproducing kernels. In the following we deal with reproducing kernel Hilbert spaces which fulfill the following, rather standard, conditions:\n\\begin{enumerate}[label={(K\\arabic*})]\n\\item \\label{con:k1}\n$\\mathcal H$ is separable,\n\\item \\label{con:k2}\nThere exists a $\\kappa>0$ such that $|k(x,y)| \\leq \\kappa$ for all $x,y \\in {\\mathbb R}^d$ and $k$ is measurable.\n\\end{enumerate}\nUnder \\ref{con:k1} the Hilbert-Schmidt norm $\\|\\cdot\\|_{\\mathrm{HS}}$ for operators mapping from $\\mathcal H$ to $\\mathcal H$ is well defined.\nIf condition \\ref{con:k2} holds, all functions in $\\mathcal H$ are bounded, see \\citet{Berlinet}, chapter 2. \nThe conditions are satisfied for a variety of popular kernels, e.g., Gaussian or triangular. \n\nThe main principle of RKHS methods is the mapping of the data $X_t$ into $\\mathcal H$ via the feature maps $\\phi_t = k(\\cdot,X_t)$, $t=1,\\dots,n$. This mapping can be done implicitly by using the kernel trick $\\langle \\phi_t,\\phi_s \\rangle_\\mathcal H = k(X_t,X_s)$ and thus only the $n \\times n$ dimensional kernel matrix $K_n = n^{-1}[k(X_t,X_s)]_{t,s=1}^n$ is needed in the computations. 
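As a small illustration of this construction, the following purely illustrative Python sketch computes the matrix $K_n$ for simulated inputs; the Gaussian kernel and the $n^{-1}$ rescaling follow the conventions above, while the function names and the toy data are generic and not tied to any particular implementation.
\begin{verbatim}
import numpy as np

def gaussian_kernel(x, y, l=2.0):
    # bounded kernel k(x, y) = exp(-l * ||x - y||^2); (K2) holds with kappa = 1
    return np.exp(-l * np.sum((x - y) ** 2))

def kernel_matrix(X, kernel=gaussian_kernel):
    # rescaled kernel matrix K_n = n^{-1} [k(X_t, X_s)]_{t,s=1}^n
    n = len(X)
    K = np.array([[kernel(X[t], X[s]) for s in range(n)] for t in range(n)])
    return K / n

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 1))   # n = 50 observations in R^d with d = 1
K_n = kernel_matrix(X)         # only this n x n matrix enters the computations
\end{verbatim}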
Then the task for RKHS methods is to find coefficients $\\alpha_1,\\dots,\\alpha_n$ such that $f_\\alpha = \\sum_{t=1}^n \\alpha_t \\phi_t$ is an adequate approximation of $f^\\ast$ in $\\mathcal H$, measured in the $\\mathcal L^2\\left(\\Prob^{X} \\right)$ norm $\\|\\cdot\\|_2$.\n\nThere are a variety of different approaches to estimate the coefficients $\\alpha_1,\\dots,\\alpha_n$, including kernel ridge regression, kernel principal component regression and, of course, kernel partial least squares. The latter method was introduced by \\citet{Ros01} and is the focus of the current work.\n\nIt was shown by \\citet{Krae07b} that the KPLS algorithm solves\n\\begin{equation}\n\\label{eq:kpls.optim}\n\t\\widehat{\\alpha}_i = \\arg\\min\\limits_{v \\in \\mathcal K_i(K_n,y)} \\|y - K_n v\\|^2,~~i=1,\\dots,n,\n\\end{equation}\nwith $y=(y_1,\\dots,y_n)^{ \\mathrm{\\scriptscriptstyle T} }$. Here $\\mathcal K_i(K_n,y) = \\mathrm{span}\\left\\{\n\ty,K_n y, K_n^2y,\\dots,K_n^{i-1}y\n\\right\\}$, $i=1,\\dots,n$, is the $i$th order Krylov space with respect to $K_n$ and $y$, and $\\| \\cdot\\|$ denotes the Euclidean norm rescaled by $n^{-1}$. The dimension $i$ of the Krylov space is the regularization parameter for KPLS.\n\nWe will introduce several operators that will be crucial for our further analysis. First, define two integral operators: the kernel integral operator $T^\\ast:\\mathcal L^2\\left(\\Prob^{X} \\right) \\rightarrow \\mathcal H, g \\mapsto\\mathrm{E}\\{k(\\cdot,X) g(X)\\}$ and the change of space operator $T:\\mathcal H \\rightarrow \\mathcal L^2\\left(\\Prob^{X} \\right), g \\mapsto g$, which is well defined if \\ref{con:k2} holds.\nIt is easy to see that $T, T^\\ast$ are adjoint, i.e., for $u \\in \\mathcal H$ and $v \\in \\mathcal L^2\\left(\\Prob^{X} \\right)$ it holds $\\langle T^\\ast v, u \\rangle_\\mathcal H = \\langle v, T u \\rangle_2$ with $\\langle\\cdot,\\cdot \\rangle_2$ being the inner product in $\\mathcal L^2\\left(\\Prob^{X} \\right)$. \n\nThe sample analogues of $T,T^\\ast$ are $T_n:\\mathcal H \\rightarrow {\\mathbb R}^n, g \\mapsto \\{g(X_1),\\dots,g(X_n)\\}^{ \\mathrm{\\scriptscriptstyle T} }$ and $T_n^\\ast:{\\mathbb R}^n \\rightarrow \\mathcal H, (v_1,\\dots,v_n)^{ \\mathrm{\\scriptscriptstyle T} } \\mapsto n^{-1} \\sum_{t=1}^n v_t k(\\cdot,X_t)$, respectively. Both operators are adjoint with respect to \nthe rescaled Euclidean product $\\langle u,v\\rangle = n^{-1} u^{ \\mathrm{\\scriptscriptstyle T} } v$, $u,v \\in {\\mathbb R}^n$.\n\nFinally, we define the sample kernel covariance operator $S_n = T^\\ast_n T_n:\\mathcal H \\rightarrow \\mathcal H$ and the population kernel covariance operator $S = T^\\ast T:\\mathcal H \\rightarrow \\mathcal H$. Note that it holds $K_n = T_n T_n^\\ast$. Under \\ref{con:k1} and \\ref{con:k2}, $S$ is a self-adjoint compact operator with operator norm $\\|S\\|_{\\mathcal L} \\leq \\kappa$, see \\citet{Cap07}.\n\nWith this notation we can restate (\\ref{eq:kpls.optim}) for the function $f_\\alpha$ as\n\\begin{equation}\n\\label{eq:func.representation}\nf_{\\widehat{\\alpha}_i} = \\arg \\min_{g \\in \\mathcal K_i(S_n,T_n^\\ast y )} \\|y- \\{g(X_1),\\dots,g(X_n)\\}^{ \\mathrm{\\scriptscriptstyle T} }\\|^2=\\arg \\min_{g \\in \\mathcal K_i(S_n,T_n^\\ast y )}\\|y-T_ng\\|^2.\n\\end{equation}\nHence, we are looking for functions that minimize the squared distance to $y$ constrained to a sequence of Krylov spaces.
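To make (\\ref{eq:kpls.optim}) concrete, the following deliberately naive Python sketch builds the Krylov basis $\\{y, K_n y,\\dots,K_n^{i-1}y\\}$ explicitly and solves the restricted least squares problem; this is only an illustration of the definition and not the KPLS iterations of \\citet{Ros01}, which compute the same solution in a numerically more stable way.
\begin{verbatim}
import numpy as np

def kpls_alpha(K_n, y, i):
    # alpha_i = argmin over v in K_i(K_n, y) of ||y - K_n v||^2  (cf. eq:kpls.optim)
    n = len(y)
    B = np.empty((n, i))
    v = y.astype(float).copy()
    for j in range(i):            # columns y, K_n y, ..., K_n^{i-1} y
        B[:, j] = v
        v = K_n @ v
    # coordinates c of v = B c in the Krylov basis via ordinary least squares
    c, *_ = np.linalg.lstsq(K_n @ B, y, rcond=None)
    return B @ c                  # coefficient vector alpha_i in R^n
\end{verbatim}
The fitted values at the training points are then $K_n\\widehat{\\alpha}_i$, and increasing the Krylov dimension $i$ weakens the regularization.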
\n\nIn the literature on ill-posed problems it is well known that without further conditions on the target function $f^\\ast$ the convergence rate of the conjugate gradient algorithm can be arbitrarily slow, see \\citet{Hanke}, chapter 3.2. One common a priori assumption on the regression function $f^\\ast$ is \na source condition:\n\\begin{enumerate}[label={(S)}]\n\\item\n\\label{eq:source}\nThere exist $r \\geq 0$, $R>0$ and $u \\in \\mathcal L^2\\left(\\Prob^{X} \\right)$ such that $f^\\ast = (T T^\\ast)^{r} u$ and $\\|u\\|_2 \\leq R$.\n\\end{enumerate}\n\nIf $r \\geq 1\/2$, then the target function $f^\\ast \\in \\mathcal L^2\\left(\\Prob^{X} \\right)$ coincides almost surely with a function $f \\in \\mathcal H$ and we can write $f^\\ast = T f$, see \\citet{Cuc02}.\nWith this, the kernel partial least squares estimator $f_{\\widehat{\\alpha}_i}$ estimates the correct target function, not only its best approximation in $\\mathcal{H}$. This case is known as the inner case. \n\nThe situation with $r<1\/2$ is referred to as the outer case. Under additional assumptions, e.g., the availability of additional unlabeled data, it is still possible that an estimator of $f^\\ast$ converges to the true target function in the $\\mathcal L^2\\left(\\Prob^{X} \\right)$ norm with optimal rates (with respect to the number $n$ of labeled data points). See \\citet{Vit06} for a detailed description of this semi-supervised approach for kernel ridge regression in the independent and identically distributed case. We do not treat the case $r<1\/2$ in this work.\n\nA source condition is often interpreted as an abstract smoothness condition. This can be seen as follows.\nLet $\\eta_1 \\geq \\eta_2 \\geq \\dots$ be the eigenvalues and $\\psi_1,\\psi_2,\\dots$ the corresponding eigenfunctions of the compact operator $S$. \nThen it is easy to see that the source condition \\ref{eq:source} is equivalent to $f = \\sum_{j=1}^\\infty b_j \\psi_j \\in \\mathcal L^2\\left(\\Prob^{X} \\right)$ with $b_j$ such that $\\sum_{j=1}^\\infty \\eta_j^{-2(r+1\/2)} b_j^2 < \\infty$. Hence, the higher $r$ is chosen, the faster the sequence $\\{b_j\\}_{j=1}^\\infty$ must converge to zero. Therefore, the sets of functions for which source conditions hold are nested, i.e., the larger $r$ is, the smaller the corresponding set will be. The set with $r=1\/2$ is the largest one and corresponds to a zero smoothness condition, i.e., $\\sum_{j=1}^\\infty \\eta_j^{-2} b_j^2 < \\infty$, which is equivalent to $f \\in \\mathcal H$. For more details we refer to \\citet{Dicker17}.\n\n\\section{Consistency of Kernel Partial Least Squares}\n\\label{sec:kpls.convergence}\nThe KCG algorithm as described by \\citet{Blan10b} is consistent when stopped early, and convergence rates can be obtained when a source condition \\ref{eq:source} holds. Here we will prove the same property for KPLS. Early stopping in this context means that we stop the algorithm at some $a=a(n) \\leq n$ and consider the estimator $f_{\\widehat{\\alpha}_a}$ for $f^\\ast$.\n\nThe difference between KCG and KPLS is the norm which is optimized.
The kernel conjugate gradient algorithm studied in \\citet{Blan10b} estimates the coefficients $\\alpha \\in {\\mathbb R}^n$ of $f_\\alpha$ via $\\widehat{\\alpha}_i^{CG} = \\arg\\min_{v \\in \\mathcal K_i(K_n,y)} \\langle y- K_n v, K_n(y- K_n v)\\rangle$.\nIt is easy to see that this optimization problem can be rewritten for the function $f_\\alpha$ as \n\\[\n\\min_{g \\in \\mathcal K_i(S_n,T_n^\\ast y )} \\|T_n^\\ast y-S_ng\\|_\\mathcal H^2=\t\\min_{g \\in \\mathcal K_i(S_n,T_n^\\ast y )} \\|T_n^\\ast\\left(y-T_ng\\right)\\|_\\mathcal H^2,\n\\] \ncompared to (\\ref{eq:func.representation}) for KPLS. Thus, KCG obtains the least squares approximation $g$ in the $\\mathcal H$-norm for the normal equation $T_n^\\ast y = T^\\ast _nT_n g$ and KPLS finds a function that minimizes the residual sum of squares. In both methods the solutions are restricted to functions $g \\in \\mathcal K_i(S_n,T_n^\\ast y )$.\n\nAn advantage of the kernel conjugate gradient estimator is that concentration inequalities can be established for both $T_n^\\ast y$ and $S_n$ and applied directly as the optimization function contains both quantities. The stopping index for the regularization can be chosen by a discrepancy principle as ${a^\\ast} = \\min\\{1\\leq i \\leq n: \\|S_n f_{\\widehat{\\alpha}_i^{CG}} - T_n^\\ast y\\| \\leq \\Lambda_n\\}$ with $\\Lambda_n$ being a threshold sequence that goes to zero as $n$ increases.\n\nOn the other hand, the function to be optimized for KPLS contains only $y$ and $T_n g = \\{g(X_1),\\dots,g(X_n)\\}^{ \\mathrm{\\scriptscriptstyle T} }$ for which statistical properties are not readily available. Thus, we need to find a way to apply the concentration inequalities for $T_n^\\ast y$ and $S_n$ to this slightly different problem. This leads to complications in the proof of consistency and a rather different and more technical stopping rule for choosing the optimal regularization parameter $a^\\ast$ is used, as can be seen in Theorem \\ref{th:kpls}. 
This stopping rule has its origin in \\citet{Hanke}.\\\\\n\n In the following $\\|\\cdot\\|_{\\cal{L}}$ denotes the operator norm and $\\|\\cdot\\|_{HS}$ is the Hilbert-Schmidt norm.\n\n\\begin{theorem}\n\\label{th:kpls}\nAssume that conditions \\ref{con:k1}, \\ref{con:k2}, \\ref{eq:source} hold with $r \\geq 3\/2$ and there are constants $C_\\delta(\\nu),C_\\epsilon(\\nu)>0$ and a sequence $\\{\\gamma_n\\}_{n \\in {\\mathbb N}} \\subset [0,\\infty)$, $\\gamma_n \\rightarrow 0$, such that we have for $\\nu \\in (0,1]$\n\\begin{align*}\n\t\\mathrm{P}\\left(\n\t\\|S_n-S\\|_{\\mathcal L} \\leq C_\\delta(\\nu) \\gamma_n\n\t\\right) &\\geq 1 - \\nu\/2,\\\\\n\t\\mathrm{P}\\left(\n\t\\|T_n^\\ast y - S f\\|_\\mathcal H \\leq C_\\epsilon(\\nu) \\gamma_n\n\t\\right) &\\geq 1- \\nu\/2.\n\\end{align*}\nDefine the stopping index $a^\\ast$ by\n\\begin{equation}\n\\label{eq:stopping}\na^\\ast = \\min\\left\\{1 \\leq a \\leq n: \\sum_{i=0}^a \\|S_n f_{\\widehat{\\alpha}_i} - T_n^\\ast y\\|^{-2}_\\mathcal H \\geq (C \\gamma_n)^{-2}\n\t\\right\\},\n\\end{equation}\nwith $C = C_\\epsilon(\\nu) + \\kappa^{r-1\/2}(r+1\/2) R \\{1 + C_\\delta(\\nu)\\}$.\n\nThen it holds with probability at least $1-\\nu$ that\n\\begin{align*}\n\t\\|f_{\\widehat{\\alpha}_{a^\\ast}} - f^\\ast\\|_2 &= O\\left\\{\\gamma_n^{2r\/(2r+1)}\\right\\},\\\\\n\t\\|f_{\\widehat{\\alpha}_{a^\\ast}} - f\\|_\\mathcal H &= O\\left\\{\\gamma_n^{(2r-1)\/(2r+1)}\\right\\},\n\\end{align*}\nwith $f^\\ast = T f$.\n\\end{theorem}\n\n\nIt can be shown that the stopping rule (\\ref{eq:stopping}) always determines a finite index, i.e., the set the minimum is taken over is not empty, see \\citet{Hanke}, chapter 4.3.\n\nThe theorem yields two convergence results, one in the $\\mathcal H$-norm and one in the $\\mathcal L^2\\left(\\Prob^{X} \\right)$-norm. It holds that $\\|v\\|_2 = \\|S^{1\/2}v\\|_\\mathcal H$. These are the endpoints of a continuum of norms $\\|v\\|_\\beta = \\|S^\\beta v\\|_\\mathcal H$, $\\beta \\in [0,1\/2]$ that were considered in \\citet{Nemirovskii86} for the derivation of convergence rates for KCG algorithms in a deterministic setting. \n\nThe convergence rate of the kernel partial least squares estimator depends crucially on the sequence $\\gamma_n$ and the source parameter $r$. If $\\gamma_n = O(n^{-1\/2})$, this yields the same convergence rate as Theorem 2.1 of \\citet{Blan10b} for kernel conjugate gradient or \\citet{Vito05} for kernel ridge regression with independent and identically distributed data. For stationary Gaussian time series we will derive concentration inequalities in the next section and obtain convergence rates depending on the source parameter $r$ and the range of dependence. Note that Theorem \\ref{th:kpls} is rather general and it can be applied to any kind of dependence structure, as long as the necessary concentration inequalities can be established.\n\nThe next theorem derives faster convergence rates under assumptions on the effective dimensionality of operator $S$, which is defined as $d_\\lambda = \\mathrm{tr}\\{(S+\\lambda)^{-1} S\\}$. The concept of effective dimensionality was introduced in \\citet{Zho02} to get sharp error bounds for general learning problems considered there. If $\\mathcal H$ is a finite dimensional space it was shown in \\citet{Zho02} that $d_\\lambda \\leq \\mathrm{dim}(\\mathcal H)$. For infinite dimensional spaces it describes the complexity of the interactions between data and reproducing kernel. 
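As a small numerical aside, the effective dimensionality can be approximated from a sample: since $K_n = T_nT_n^\\ast$ and $S_n = T_n^\\ast T_n$ share their nonzero eigenvalues, the spectrum of the kernel matrix can serve as a plug-in for that of $S$. The following schematic Python sketch, whose names are generic, is only meant to give intuition for $d_\\lambda$.
\begin{verbatim}
import numpy as np

def effective_dimension(K_n, lam):
    # empirical analogue of d_lambda = tr{(S + lambda)^{-1} S}
    eig = np.linalg.eigvalsh(K_n)
    eig = np.clip(eig, 0.0, None)    # guard against tiny negative eigenvalues
    return float(np.sum(eig / (eig + lam)))

# for fast-decaying spectra d_lambda grows only slowly as lambda decreases, e.g.
# [effective_dimension(K_n, lam) for lam in (1e-1, 1e-2, 1e-3)]
\end{verbatim}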
\n\nIf $d_\\lambda = O(\\lambda^{-s})$ for some $s \\in (0,1]$, \\citet{Cap07} showed that the order optimal convergence rates $n^{-r\/(2r+s)}$ are attained for KRR with independent and identically distributed data.\n\nThe effective dimensionality clearly depends on the behaviour of eigenvalues of $S$. If these converge sufficiently fast to zero, nearly parametric rates of convergence can be achieved for reproducing kernel Hilbert space methods, see, e.g., \\citet{Dicker17}. In particular, the behaviour of $d_\\lambda$ around zero is of interest, since it determines how ill-conditioned the operator $(S+\\lambda)^{-1}$ becomes. In the following theorem we set $\\lambda = \\lambda_n$ for a sequence $\\{\\lambda_n\\}_{n\\in {\\mathbb N}} \\subset (0,\\infty)$ that converges to zero. \n\n\\begin{theorem}\n\\label{th:kpls2}\nAssume that conditions \\ref{con:k1}, \\ref{con:k2}, \\ref{eq:source} hold with $r \\geq 1\/2$ and that the effective dimensionality $d_\\lambda$ is known. Additionally, there are constants $C_\\delta(\\nu),C_\\epsilon(\\nu),C_\\psi\n>0$ and a sequence $\\{\\gamma_n\\}_{n \\in {\\mathbb N}} \\subset [0,\\infty)$, $\\gamma_n \\rightarrow 0$, such that for $\\nu \\in (0,1]$ and $n$ sufficiently large\n\\begin{align*}\n\t\\mathrm{P}\\left\\{\n\t\\|S_n-S\\|_{\\mathcal L} \\leq C_\\delta(\\nu) \\gamma_n\n\t\\right\\} &\\geq 1 - \\nu\/3,\\\\\n\t\\mathrm{P}\\left\\{\n\t\\|(S+\\lambda_n)^{-1\/2}(T_n^\\ast y - S f)\\|_\\mathcal H \\leq C_\\epsilon(\\nu) \\sqrt{d_{\\lambda_n}}\\gamma_n\n\t\\right\\} &\\geq 1- \\nu\/3,\\\\\n\t\\mathrm{P}\\left\\{\n\t\\|(S+\\lambda_n)^{1\/2}(S_n+\\lambda_n)^{-1\/2}\\|_{\\mathcal L} \\leq C_\\psi\n\t\\right\\} &\\geq 1 - \\nu\/3,\n\\end{align*}\nHere $\\{\\lambda_n\\}_{n \\in {\\mathbb N}} \\subset (0,\\infty)$ is a sequence converging to zero such that for $n$ large enough \n\\begin{equation}\n\t\\label{eq:lambda.inequ}\n\t\\gamma_n \\leq \\lambda_n^{r-1\/2}.\n\\end{equation}\n\nTake $\\zeta_n = \\max\\{\\sqrt{\\lambda_n d_{\\lambda_n}} \\gamma_n, \\lambda_n^{r+1\/2}\\}$ \nDefine the stopping index $a^\\ast$ by\n\\begin{equation}\n\\label{eq:stopping2}\na^\\ast = \\min\\left\\{1 \\leq a \\leq n: \\sum_{i=0}^a \\|S_n f_{\\widehat{\\alpha}_i} - T_n^\\ast y\\|^{-2}_\\mathcal H \\geq (C \\zeta_n)^{-2}\\right\\},\n\\end{equation}\nwith $C=4 R \\max\\{1, C_\\psi^2,(r-1\/2)\\kappa^{r-3\/2} C_\\delta(\\nu),2^{-1\/2}R^{-1} C_\\psi C_\\epsilon(\\nu)\\}$.\n\nThen it holds with probability at least $1-\\nu$ that\n\\begin{align*}\n\t\\|f_{\\widehat{\\alpha}_{a^\\ast}} - f^\\ast\\|_2 &= O\\left\\{\\lambda_n^{-1\/2}\\zeta_n\\right\\},\\\\\n\\|f_{\\widehat{\\alpha}_{a^\\ast}} - f\\|_\\mathcal H &= O\\left\\{\\lambda_n^{-1}\\zeta_n\\right\\},\n\\end{align*}\nwith $f^\\ast = T f$.\n\\end{theorem}\nThe condition (\\ref{eq:lambda.inequ}) holds trivially for $r=1\/2$ as $\\gamma_n$ converges to zero. For $r >1\/2$ the sequence $\\lambda_n$ must not converge to zero arbitrarily fast. \n\nIn its general form Theorem \\ref{th:kpls2} does not give immediate insight in the probabilistic convergence rates of the kernel partial least squares estimator. Therefore, we state two corollaries, where the function $d_\\lambda$ is specified. 
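Before turning to the corollaries we remark that, computationally, the stopping rules (\\ref{eq:stopping}) and (\\ref{eq:stopping2}) only require the $\\mathcal H$-norms of the residuals $S_n f_{\\widehat{\\alpha}_i} - T_n^\\ast y$, which are available through the kernel matrix because $S_n f_{\\widehat{\\alpha}_i} - T_n^\\ast y = T_n^\\ast(T_n f_{\\widehat{\\alpha}_i} - y)$ and $\\|T_n^\\ast u\\|^2_\\mathcal H = n^{-1}u^{ \\mathrm{\\scriptscriptstyle T} } K_n u$. The Python sketch below is only schematic: the names are generic, the fitted values are identified with $K_n\\widehat{\\alpha}_i$ as in (\\ref{eq:kpls.optim}), and the residual list is assumed to start at $i=0$ with $f_{\\widehat{\\alpha}_0}=0$.
\begin{verbatim}
import numpy as np

def h_norm_residual(K_n, alpha, y):
    # ||S_n f_alpha - T_n^* y||_H via ||T_n^* u||_H^2 = u^T K_n u / n
    u = K_n @ alpha - y
    return float(np.sqrt(max(u @ K_n @ u / len(y), 0.0)))

def stopping_index(residuals, threshold):
    # smallest a with sum_{i=0}^{a} residuals[i]^(-2) >= threshold^(-2),
    # where threshold plays the role of C*gamma_n (or C*zeta_n)
    acc = 0.0
    for a, r in enumerate(residuals):
        acc += r ** (-2.0)
        if acc >= threshold ** (-2.0):
            return a
    return len(residuals) - 1    # fallback: all computed iterations are used
\end{verbatim}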
In both corollaries we explicitly state the choice of the sequence $\\lambda_n$ that yields the corresponding rates.\n\n\\begin{corollary}\n\\label{cor:pol.ed}\nAssume that there exists $s \\in (0,1]$ such that $\n\td_\\lambda = O(\\lambda^{-s})\n$ for $\\lambda \\rightarrow 0$.\nThen under the conditions of Theorem \\ref{th:kpls2} with $\\lambda_n = \\gamma_n^{2\/(2r + s)}$ it holds with probability at least $1-\\nu$ that\n\\begin{align*}\n\t\\|f_{\\widehat{\\alpha}_{a^\\ast}} - f^\\ast\\|_2 &= O\\left\\{\\gamma_n^{2r\/(2r+s)}\\right\\}.\n\\end{align*}\n\\end{corollary}\nPolynomial decay of the effective dimensionality $d_\\lambda = \\mathrm{tr}\\{(S+\\lambda)^{-1}S\\}$ occurs if the eigenvalues of $S$ also decay polynomially fast, that is, $\\mu_i = c_s i^{-1\/s}$ for $s \\in (0,1]$, since in this case $d_\\lambda = \\sum\\limits_{i=1}^\\infty \\{1+\\lambda i^{1\/s}\/c_s\\}^{-1} = O(\\lambda^{-s})$. This holds, for example, for the Sobolev kernel $k(x,y) = \\min(x,y)$, $x,y \\in [0,1]$, and data that are uniformly distributed on $[0,1]$, see \\citet{Ras14}.\n\nIf $\\gamma_n = n^{-1\/2}$, then the KPLS estimator converges in the $\\mathcal L^2\\left(\\Prob^{X} \\right)$-norm with a rate of $n^{-r\/(2r+s)}$. This rate is shown to be optimal in \\citet{Cap07} for KRR with independent identically distributed data.\n\nNote that the rate obtained in Theorem \\ref{th:kpls} corresponds to $\\gamma_n^{2r\/(2r+s)}$ with $s=1$, i.e., the worst case rate with respect to the parameter $s \\in (0,1]$.\n\nIn the next corollary to Theorem \\ref{th:kpls2} we assume that the effective dimensionality behaves in a logarithmic fashion.\n\n\\begin{corollary}\n\\label{cor:log.ed}\nLet $d_\\lambda = O\\{\\log(1+a\/\\lambda)\\}$ for $\\lambda \\rightarrow 0$ and $a>0$. Then under the conditions of Theorem \\ref{th:kpls2} with $\\lambda_n = \\gamma_n^2 \\log\\{ \\gamma_n^{-2}\\}$ and $r=1\/2$ it holds with probability at least $1-\\nu$ that\n\\begin{align*}\n\t\\|f_{\\widehat{\\alpha}_{a^\\ast}} - f^\\ast\\|_2 &= O\\left\\{\\gamma_n \\log^{1\/2}(\\gamma_n^{-2})\\right\\}.\n\\end{align*}\n\\end{corollary}\nIndeed, for $r=1\/2$ and $\\lambda_n = \\gamma_n^2 \\log\\{\\gamma_n^{-2}\\}$ we have $d_{\\lambda_n} = O\\{\\log(\\gamma_n^{-2})\\}$, so that both $\\sqrt{\\lambda_n d_{\\lambda_n}}\\,\\gamma_n$ and $\\lambda_n^{r+1\/2}$ are of the order $\\lambda_n$, and Theorem \\ref{th:kpls2} gives the rate $\\lambda_n^{-1\/2}\\zeta_n = O(\\lambda_n^{1\/2}) = O\\{\\gamma_n \\log^{1\/2}(\\gamma_n^{-2})\\}$.\nThe effective dimensionality takes the special form considered in this corollary, for example, when the eigenvalues of $S$ decay exponentially fast. This holds, for example, if the data are Gaussian and the Gaussian kernel is used, see Section \\ref{sec:gauss.kern}.\nIf $\\gamma_n =O(n^{-1\/2})$, then the convergence rate is of order $O\\{n^{-1\/2}\\log^{1\/2}(n)\\}$, so that the squared error decays like $n^{-1}\\log(n)$, which is nearly parametric. It is noteworthy that the source condition only impacts the choice of the sequence $\\lambda_n$, not the convergence rates of the estimator in the $\\mathcal L^2\\left(\\Prob^{X} \\right)$-norm. Therefore, we stated the corollary for $r=1\/2$, which is a minimal smoothness condition on $f^\\ast$, i.e., that $f^\\ast = T f$ almost surely for an $f \\in \\mathcal H$.\n\nThe rates obtained in Corollaries \\ref{cor:pol.ed} and \\ref{cor:log.ed} were derived in \\citet{Dicker17} for kernel ridge regression and kernel principal component regression under the assumption of independent and identically distributed data.\n\n\\section{Concentration Inequalities for Gaussian Time Series}\n\\label{sec:concentration}\nCrucial assumptions of Theorems \\ref{th:kpls} and \\ref{th:kpls2} are the concentration inequalities\nfor $S_n$ and $T_n^\\ast y$ and the convergence of the sequence $\\{\\gamma_n\\}_{n \\in {\\mathbb N}}$. Here we establish such inequalities in a Gaussian setting for stationary time series.
At the end of this section we will state explicit convergence rates for $f_{\\widehat{\\alpha}_{a^\\ast}}$ that depend not only on the source parameter $r \\geq 1\/2$ and the effective dimensionality $d_\\lambda$, but also on the persistence of the dependence in the data.\n\nThe Gaussian setting is summarized in the following assumptions\n\\begin{enumerate}[label={(D\\arabic*})]\n\\item\n\\label{D1}\n$(X_h,X_0)^{ \\mathrm{\\scriptscriptstyle T} } \\sim \\mathcal{N}_{2d}(0,\\Sigma_h)$, $h=1,\\dots,n-1$, with \n\\[\n \\Sigma_h = \\left[\n\\begin{matrix}\n\t\\tau_0 & \\tau_h\\\\\n\t\\tau_h & \\tau_0\n\\end{matrix}\n\\right] \\otimes \\Sigma.\n\\]\nHere $\\Sigma \\in {\\mathbb R}^{d \\times d}$ and $V=[\\tau_{|i-j|}]_{i,j=1}^{n} \\in {\\mathbb R}^{n \\times n}$ are positive definite, symmetric matrices and $\\otimes$ denotes the Kronecker product between matrices. Furthermore $X_0 \\sim \\mathcal{N}_d(0,\\tau_0 \\Sigma)$.\n\\item\n\\label{D2}\nFor the autocorrelation function $\\rho_h = \\tau^{-1}_0\\tau_h$ there exists a $q>0$ such that $|\\rho_h| \\leq (h+1)^{-q}$ for $h =0,\\dots,n-1$.\n\\end{enumerate}\nCondition \\ref{D1} is a separability condition for the covariance matrices $\\Sigma_h$, $h = 0,\\dots,n-1$. Due to \\ref{D1} the effects (on the covariance) over time and between the different variables can be treated separately. Under condition \\ref{D2} it is easy to see that from $q>1$ follows the absolute summability of the autocorrelation function $\\rho$ and thus $\\{X_t\\}_{t \\in {\\mathbb Z}}$ is a short memory process. Stationary short memory processes keep many of the properties of independent and identically distributed data, see, e.g., \\citet{bBrock}.\n\nOn the other hand $q \\in (0,1]$ yields a long memory process, see, e.g., Definition 3.1.2 in \\citet{Giraitis}. Examples of long memory processes are the fractional Gaussian noise with an autocorrelation function that behaves like $(h+1)^{-2(1-H)}$, with $H \\in [0,1)$ being the Hurst coefficient. Stationary long memory processes exhibit dependencies between observations that are more persistent and many statistical results that hold for independent and identically distributed data turn out to be false, see \\citet{Samoro} for details.\n\nThe next theorem gives concentration inequalities for both estimators $S_n$ and $T_n^\\ast y$ in a Gaussian setting with convergence rates depending on the parameter $q>0$. These inequalities are the ones needed in Theorem \\ref{th:kpls} and Theorem \\ref{th:kpls2}. 
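For intuition, data satisfying \\ref{D1} and \\ref{D2} with $d=1$ are straightforward to simulate. The Python sketch below is only schematic (the names are generic and the eigenvalue clipping is merely a numerical safeguard); it draws one path through a symmetric square root of the correlation matrix, anticipating the construction $X = VN$ used in the simulation study of Section \\ref{sec:simulations}.
\begin{verbatim}
import numpy as np
from scipy.linalg import toeplitz

def gaussian_series(n, rho, sigma2=1.0, rng=None):
    # stationary Gaussian time series (d = 1) with autocorrelation rho(h)
    rng = np.random.default_rng() if rng is None else rng
    R = toeplitz([rho(h) for h in range(n)])                # correlation matrix
    w, U = np.linalg.eigh(R)
    V = U @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ U.T   # symmetric square root
    return np.sqrt(sigma2) * (V @ rng.standard_normal(n))

x_short = gaussian_series(500, lambda h: (h + 1.0) ** -2.0)   # short memory, q = 2
x_long  = gaussian_series(500, lambda h: (h + 1.0) ** -0.25)  # long memory, q = 1/4
\end{verbatim}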
Recall that $d_\\lambda = \\mathrm{tr}\\{(S+\\lambda)^{-1}S\\}$ denotes the effective dimensionality of $S$.\n\n\\begin{theorem}\n\\label{th:conc.equality}\n(i) Define $\\mathrm{d}\\mu_h(x,y) = \\mathrm{d}\\mathrm{P}^{X_h,X_0}(x,y) - \\mathrm{d} \\mathrm{P}^{X_0}(x)\\mathrm{d} \\mathrm{P}^{X_0}(y)$.\nUnder Assumptions \\ref{con:k1} and \\ref{con:k2} it holds for $\\nu \\in (0,1]$ with probability at least $1-\\nu$ that\n\\begin{align*}\n\\|S_n - S\\|^2_{\\mathcal L} &\\leq\n\\frac{2 \\nu^{-1}}{n^2}\\sum\\limits_{h=1}^{n-1} (n-h) \\int\\limits_{{\\mathbb R}^{2d}} k^2(x,y) \\mathrm{d}\\mu_h(x,y) + \\frac{\\nu^{-1}}{n} \\left\\{\n\t\\mathrm{E} k^2(X_0,X_0) - \\|S\\|^2_{\\mathrm{HS}}\n\\right\\},\\\\\n\\|T_n^\\ast y - S f\\|^2_\\mathcal H &\\leq\n\\frac{2\\nu^{-1}}{n^2}\\sum\\limits_{h=1}^{n-1} (n-h) \\int\\limits_{{\\mathbb R}^{2d}} k(x,y)f(x)f(y)\\mathrm{d}\\mu_h(x,y)\\\\\n&+ \\frac{\\nu^{-1}}{n} \\left[\n\t\\mathrm{E} \\left\\{k(X_0,X_0) f^2(X_0)\\right\\} - \\| S f\\|^2_\\mathcal H + \\sigma^2 \\mathrm{E}\\{ k(X_0,X_0)\\}\n\\right].\n\\end{align*}\n\n\n(ii) Assume that, in addition to \\ref{con:k1} and \\ref{con:k2}, also \\ref{D1} and \\ref{D2} are fulfilled for some $q >0$. Denote $M=\\sup_{x \\in {\\mathbb R}^d} |f(x)|$.\n\nThen there exists a constant $C(q)>0$ such that it holds with probability at least $1-\\nu$ that\n\\begin{align*}\n\\|S_n - S\\|_\\mathcal L\n\t&\\leq \n\t \\nu^{-1\/2} \\{\\gamma_n^2(q) \\kappa C_\\gamma\t\n\t\t+ n^{-1}(\\kappa^2-\\|S\\|_\\mathrm{HS}^2)\\}^{1\/2},\\\\\n \\|T_n^\\ast y - S f\\|_\\mathcal H\n\t&\\leq \n\t\\nu^{-1\/2} \\left[\\gamma_n^2(q) M C_\\gamma \n\t\t+ n^{-1}\\left\\{\n\t\t\\kappa (M + \\sigma^2) - \\| S f\\|^2_\\mathcal H\t\\right\\}\\right]^{1\/2}\n\t\t,\n\\end{align*}\nfor $C_\\gamma = C(q)\\{(2\\pi)^d \\mathrm{det}(\\Sigma)\\}^{-1\/2} \\kappa d^{1\/2}(1-4^{-q})^{-1\/4(d+2)}$. The function $\\gamma_n(q)$, $q>0$, is defined as\n\\[\n\t\\gamma_n(q) = \\left\\{\n\t\t\t\t\\begin{array}{clc}\n\t\t\t n^{-1\/2} &,& q>1\\\\\n\t\tn^{-1\/2} \\log^{1\/2}(n)\t&,& q=1\\\\\n\t\t\tn^{-q\/2} \n\t\t\t&,& q \\in (0,1).\n\t\t\\end{array}\n\t\\right.\n\\]\n\n(iii) Let \\ref{con:k1}, \\ref{con:k2} and \\ref{eq:source} hold. Let $\\gamma_n(q)$ be the function as defined in (ii). Then there exists a constant $\\tilde{C}_\\epsilon>0$ such that it holds with probability at least $1-\\nu$ for $\\lambda>0$ that\n\\[\n \\|(S+\\lambda)^{-1\/2}(T_n^\\ast y - S_n f)\\|_\\mathcal H \\leq \\nu^{-1\/2} \\tilde{C}_\\epsilon \\sigma \\sqrt{d_\\lambda}\\gamma_n(q).\n\\] \n\n(iv) Let \\ref{con:k1}, \\ref{con:k2}, \\ref{eq:source}, \\ref{D1} and \\ref{D2} hold. Let $\\lambda_n^{-1\/2}d_{\\lambda_n}^{1\/2} \\gamma_n(q) \\rightarrow 0$ for a sequence $\\lambda_n \\rightarrow 0$ and $\\gamma_n(q)$ the function defined in (ii). Then there exists an $n_0 = n_0(\\nu,q) \\in {\\mathbb N}$ such that with probability at least $1-\\nu$ we have for all $n \\geq n_0$\n\\[\\|(S+\\lambda_n)^{1\/2}(S_n+\\lambda_n)^{-1\/2}\\|_\\mathcal L \\leq \\sqrt{2}.\n\\]\n\\end{theorem}\n\nThe first part of the theorem is general, can be used to derive concentration inequalities beyond the Gaussian setting, and is of interest in itself. The convergence rate is controlled by the sums appearing on the right hand side.\n If these sums are of order $O(n)$, then the mean squared errors of both $S_n$ and $T_n^\\ast y$ will converge to zero with a rate of $n^{-1}$.
On the other hand, if the sums are of order $O(n^{2-q})$ for some $q\\in (0,1)$, the mean squared errors will converge with the reduced rate $n^{-q}$.\n\nThe second part derives explicit concentration inequalities in the Gaussian setting described by \\ref{D1} and \\ref{D2}, with rates depending on the range of the dependence measured by $q>0$. These inequalities appear in Theorem \\ref{th:kpls}.\n\nParts (iii) and (iv) give the additional probabilistic bounds needed to apply Theorem \\ref{th:kpls2}. The condition $\\lambda_n^{-1\/2} d^{1\/2}_{\\lambda_n} \\gamma_n(q) \\rightarrow 0$ in Theorem \\ref{th:conc.equality} (iv) is fulfilled in the settings of Corollary \\ref{cor:pol.ed} and Corollary \\ref{cor:log.ed}.\n\n Theorem \\ref{th:kpls}, Corollary \\ref{cor:pol.ed}, Corollary \\ref{cor:log.ed} and Theorem \\ref{th:conc.equality} together imply the following corollary.\n\\begin{corollary}\n\\label{cor:convergence}\nLet the conditions of Theorem \\ref{th:kpls2} and \\ref{D1}, \\ref{D2} hold.\n\n(i) Assume that there exists $s\\in (0,1]$ such that $d_\\lambda = O(\\lambda^{-s})$ for $\\lambda \\rightarrow 0$. Then with probability at least $1-\\nu$\n\t\\[\n\t\t\\|f_{\\widehat{\\alpha}_{a^\\ast}} - f^\\ast\\|_2 = \\left\\{\n\t\t\t\\begin{array}{cc}\n\t\t\t\tO\\{n^{-r\/(2r+s)}\\}, & q>1,\\\\\n\t\t\t\tO\\{n^{-q r\/(2r+s)}\\}, & q \\in (0,1).\n\t\t\t\\end{array}\n\t\t\t\\right.\n\t\\]\nIf the conditions of Theorem \\ref{th:kpls} are assumed instead of those of Theorem \\ref{th:kpls2}, then the convergence rates above hold with $s=1$. \n\t\n(ii) Assume that there exists $a>0$ such that $d_\\lambda = O\\{\\log(1+a\/\\lambda)\\}$ for $\\lambda \\rightarrow 0$ and $r=1\/2$. Then with probability at least $1-\\nu$\n\t\\[\n\t\t\\|f_{\\widehat{\\alpha}_{a^\\ast}} - f^\\ast\\|_2 = \\left\\{\n\t\t\t\\begin{array}{cc}\n\t\t\t\tO\\{n^{-1\/2}\\log^{1\/2}(n)\\}, & q>1,\\\\\n\t\t\t\tO\\{n^{-q\/2} \\log^{1\/2}(n^q)\\}, & q \\in (0,1).\n\t\t\t\\end{array}\n\t\t\t\\right.\n\t\\]\n\\end{corollary}\nHence, for $q>1$ the kernel partial least squares algorithm achieves the same rates as if the data were independent and identically distributed. For $q \\in (0,1)$ the convergence rates become substantially slower, highlighting that dependence structures that persist over a long time can influence the convergence rates of the algorithm.\n\n\\section{Source condition and effective dimensionality for Gaussian kernels}\n\\label{sec:gauss.kern}\nThe source condition \\ref{eq:source} and the effective dimensionality $d_\\lambda$ are of great importance in the convergence rates derived in the previous sections. Here we investigate these conditions for the reproducing kernel Hilbert space corresponding to the Gaussian kernel $k(x,y) = \\exp(-l\\|x-y\\|^2)$, $x,y \\in {\\mathbb R}^d$, $l>0$, for $d=1$. Hence, the space $\\mathcal H$ is the space of all analytic functions that decay exponentially fast, see \\citet{Steinwart05anexplicit}.\n\nWe also impose the normality conditions \\ref{D1} and \\ref{D2} on $\\{X_t\\}_{t\\in {\\mathbb Z}}$, where now $\\sigma^2_x =\\Sigma \\in {\\mathbb R}$ due to $d=1$. The following proposition derives a more explicit representation for $f \\in \\mathcal H$.\n\n\\begin{proposition}\n\\label{prop:source}\nAssume that \\ref{con:k1}, \\ref{con:k2} and \\ref{eq:source} hold for $r \\geq 1\/2$. Let $d=1$, $X_0 \\sim \\mathcal{N}(0,\\sigma^2_x)$, $\\sigma^2_x > 0$, and consider the Gaussian kernel $k(x,y) = \\exp\\{-l (x-y)^2\\}$ for $x,y \\in {\\mathbb R}$, $l>0$.
Then $f$ can be expressed for $\\mu = r - 1\/2 \\in {\\mathbb N}$\n via $f(x) = \\sum_{i=1}^\\infty c_i L_\\mu(x,z_i)$ for fixed $\\{z_i\\}_{i=1}^\\infty,\\{c_i\\}_{i=1}^\\infty \\subset {\\mathbb R}$ such that $\\sum_{i,j=1}^\\infty c_i c_j k(z_i,z_j) \\leq R^2$, $R>0$. Here we have for $x,z \\in {\\mathbb R}$ \n\t\\begin{align*}\n\tL_\\mu(x,z) &= \\exp\n\t\t\\left[\n\t\t\t-1\/2 \\left\\{\n\t\t\t\t\\frac{\\det(\\Lambda)(x^2+z^2)-2 l^{\\mu+1} x z}{\\det(\\Lambda_{1:\\mu})}\n\t\t\t\\right\\}\n\t\t\\right],\n\t\t\\end{align*}\n\twith $\\Lambda \\in {\\mathbb R}^{(\\mu+1)\\times (\\mu+1)}$ being a tridiagonal matrix with elements \n\t\\[\n\t\t\\Lambda_{i,j} = \\left\\{\n\t\\begin{array}{cll}\n\t\t\\sigma^{-2}_x + 2 l &, & i=j<\\mu+1\\\\\n\t\tl &, & i=j=\\mu+1\\\\\n\t\t-l &, & |i-j| = 1\\\\\n\t\t0&, & else\n\t\\end{array}\\right.\n\t\\]\n\tfor $i,j=1,\\dots,\\mu+1$ and $\\Lambda_{1:\\mu}$ is the $\\mu \\times \\mu$-dimensional sub-matrix of $\\Lambda$ including the first $\\mu$ columns and rows.\n\t\n\tConversely, any function $f^\\ast = T f$ with $f$ of the above form fulfills a source condition $\\ref{eq:source}$ with $r = \\mu + 1\/2$, $\\mu \\in {\\mathbb N}$.\n\\end{proposition}\nHence, if we fix an $r \\geq 1\/2$ with $r-1\/2 \\in {\\mathbb N}$, this proposition gives us a way to construct functions $f \\in \\mathcal H$ with $f^\\ast = Tf$ that fulfill \\ref{eq:source}.\n\nThe next proposition derives the effective dimensionality $d_\\lambda$ in this setting:\n\\begin{proposition}\n\\label{prop:ed}\nLet $d=1$, $X_0 \\sim \\mathcal{N}(0,\\sigma^2_x)$ for some $\\sigma^2_x>0$ and consider the Gaussian kernel $k(x,y) = \\exp\\{-l(x-y)^2\\}$, $x,y \\in {\\mathbb R}$, $l>0$. \n\nThen there is a constant $D>0$ such that it holds for any $\\lambda \\in (0,1]$\n\\[\n\td_\\lambda = \\mathrm{tr}\\{(S+\\lambda)^{-1} S\\} \\leq \tD\\log(1+a\/\\lambda),\n\\]\nwith $a =\\sqrt{2}(1+\\beta+\\sqrt{1+\\beta})^{-1\/2} $, $\\beta = 4 l \\sigma^2_x$.\n\\end{proposition}\n\nWith the latter result Corollary \\ref{cor:log.ed} is applicable and we expect convergence rates for the kernel partial least squares algorithm of order $O\\{\\gamma_n \\log^{1\/2}(\\gamma_n^{-2})\\}$ for a sequence $\\{\\gamma_n\\}_n$ as in Theorem \\ref{th:kpls2}.\n\n\n\\section{Simulations}\n\\label{sec:simulations}\nTo validate the theoretical results of the previous sections we conducted a simulation study. The reproducing kernel Hilbert space is chosen to correspond to the Gaussian kernel $k(x,y) = \\exp(-l\\|x-y\\|^2)$, $x,y \\in {\\mathbb R}^d$, $l=2$, for $d=1$.\n\nThe source parameter is taken to be $r=4.5$ and we consider the function \n\\[\n\tf(x) = 4.37^{-1}\\{3 {L}_4(x,-4) - 2 {L}_4(x,3)+ 1.5 {L}_4(x,9)\\}, ~~ x \\in {\\mathbb R}.\n\t\\] \n\tThe normalization constant is chosen such that $f$ takes values in $[-0.35,0.65]$ and $L_4$ is the exponential function given in Proposition \\ref{prop:source}. The function $f$ is shown in Figure \\ref{fig:func}.\n\\begin{figure}\n \\begin{center}\n \t\\includegraphics[width=.5\\linewidth]{f_plot}\n \t\\caption{\\label{fig:func}\n\t\tThe function $f$ evaluated on $[-7.5,7.5]$ (black) and one realisation of the noisy data $y = f(x) + \\varepsilon$ (grey).\n \t\t }\n \\end{center}\n\\end{figure}\n\nIn condition \\ref{D1} we set $\\sigma^2_x =\\Sigma = 4$. For the matrix $V^2 = [\\tau_{|i-j|}]_{i,j=1}^n \\in {\\mathbb R}^{n \\times n}$ we choose three different structures for $n\\in\\{200,400,1000\\}$. In the first setting $\\tau_h = \\mathbb{I}(h=0)$, which corresponds to independent data.
The second setting with $\\tau_h = 0.9^{-h}$ implies an autoregressive process of order one. Finally, the third setting with $\\tau_h = (1+h)^{-q}$, $q=1\/4$, $h =0,\\dots,n-1$ leads to the long range dependent case. \n\nIn a Monte Carlo simulation with $M=1000$ repetitions the time series $\\{X_t^{(j)}\\}_{t=1}^n$ are generated via $X^{(j)} = V N^{(j)}$ with $N^{(j)} \\sim \\mathcal{N}_n(0,\\sigma^2 I_n)$, $j=1,\\dots,M$, where $I_n$ is the $n \\times n$-dimensional identity matrix. \n\nThe residuals $\\varepsilon_1^{(j)},\\dots,\\varepsilon_n^{(j)}$ are generated as independent standard normally distributed random variables and independent of $\\{X_t^{(j)}\\}_{t=1}^n$ . The response is defined as $y_t^{(j)} = f(X_t^{(j)}) + \\eta\\, \\varepsilon_t^{(j)}$, $t=1,\\dots,n$, $j=1,\\dots,M$, with $\\eta = 1\/16$.\n\nThe kernel partial least squares and kernel conjugate gradient algorithms are run for each sample $\\{(X_t^{(j)},y_t^{(j)})^{ \\mathrm{\\scriptscriptstyle T} }\\}_{t=1}^n$, $j=1,\\dots,M$, with a maximum of $40$ iteration steps. We denote the estimated coefficients with $\\widehat{\\alpha}_1^{(j,m)},\\dots,\\widehat{\\alpha}_{40}^{(j,m)}$, $j=1,\\dots,M$, with $m=CG$ meaning that the kernel conjugate gradient algorithm was employed and $m=PLS$ that kernel partial least squares was used to estimate $\\alpha_1,\\dots,\\alpha_n$. \n\nThe squared error in the $\\mathcal L^2\\left(\\Prob^{X} \\right)$-norm is calculated via\n\\[\n\t\\widehat{e}_{n,\\tau}^{(j,m)} = \\min\\limits_{a=1,\\dots,40} \\left[ \\frac{1}{\\sqrt{2\\pi\\sigma_x^2}}\\int\\limits_{-\\infty}^\\infty \\left\\{f_{\\widehat{\\alpha}_a^{(j,m)}}(x) - f(x) \\right\\}^2 \\exp\n\t\\left(-\\frac{1}{2\\sigma_x^2} x^2 \\right) \\mathrm{d} x \\right],\n\\]\nfor $j=1,\\dots,M$, $n = 200,400,\\dots,1000$ and $m \\in \\{CG, PLS\\}$.\n\nThe results of the Monte-Carlo simulations are depicted in the boxplots of Figure \\ref{fig:box}.\n\\begin{figure}\n \\begin{center}\n\t \t\\begin{minipage}{.33\\textwidth}\n\t \t\t\\centering\n\t \t\t\\includegraphics[width=.98\\linewidth]{MSE_box_1}\n\t\t\\end{minipage}%\n\t\t\\begin{minipage}{.33\\textwidth}\n\t \t\t\\centering\n\t \t\t\\includegraphics[width=.98\\linewidth]{MSE_box_2}\n\t\t\\end{minipage}%\n\t\t\\begin{minipage}{.33\\textwidth}\n\t \t\t\\centering\n\t \t\t\\includegraphics[width=.98\\linewidth]{MSE_box_3}\n\t\t\\end{minipage}\n\t\t\\caption{\\label{fig:box}\n\t\t\tBoxplots of the $\\mathcal L^2\\left(\\Prob^{X} \\right)$-errors $\\{\\widehat{e}_{n,\\tau}^{(j,m)}\\}_{j=1}^{M}$ of kernel partial least squares (left side of each panel) and kernel conjugate gradient (right side of each panel) for different autocovariance functions $\\tau$ and $n = 200,400,1000$. On the left is $\\tau_h = \\mathbb{I}(h=0)$, in the middle $\\tau_h = 0.9^{-h}$ and on \n\t\t\tthe right $\\tau_h = (h+1)^{-1\/4}$.\n\t\t}\n\t\\end{center}\n\\end{figure}\nFor kernel partial least squares (left panels) one observes that independent and autoregressive dependent data have roughly the same convergence rates, although the latter have a somewhat higher error. In contrast, the long range dependent data show slower convergence with the larger interquartile range, supporting the theoretical results of Corollary \\ref{cor:convergence}.\n\nThe $\\mathcal L^2\\left(\\Prob^{X} \\right)$-error of kernel conjugate gradient estimators is generally slightly higher than that of kernel partial least squares. 
Nonetheless, both of them have a similar behaviour.\n\nWe also investigated the stopping indices $a=1,\\dots,40$ for which the errors $\\widehat{e}_{n,\\tau}^{(j,m)}$ were attained. These are shown in Figure \\ref{fig:a} for independent and identically distributed data.\n\\begin{figure}\n \\begin{center}\n \t\\begin{minipage}{.45\\textwidth}\n\t\\center \t\n \t\\includegraphics[width=.98\\linewidth]{MSE_sim_a_200}\n \t\\end{minipage}\n \t\\begin{minipage}{.45\\textwidth}\n\t\\center \t\n \t\\includegraphics[width=.98\\linewidth]{MSE_sim_a_1000}\n \t\\end{minipage}\n \t\\caption{\\label{fig:a}\n \t\tBoxplots of the optimal indices $a\\in\\{1,\\dots,40\\}$ for which the $\\mathcal L^2\\left(\\Prob^{X} \\right)$-errors $\\{\\widehat{e}_{n,\\tau}^{(j,m)}\\}_{j=1}^{M}$ were attained. Kernel partial least squares is on the left of each panel and kernel conjugate gradient on the right. On the left is $n = 200$, on the right $n=1000$. The data were assumed to be independent and identically distributed.\n \t}\n \\end{center}\n\\end{figure}\nIt can be seen that the optimal indices for both algorithms have a rather similar behaviour. Kernel conjugate gradient stops slightly later, but overall the differences seem negligible.\n\nFigure \\ref{fig:lines} shows the mean (over $j$) of the estimated $\\mathcal L^2\\left(\\Prob^{X} \\right)$ errors $\\{\\widehat{e}_{n,\\tau}^{(j,m)}\\}_{j=1}^{M}$ for different $n$, $\\tau$ and $m \\in \\{CG,PLS\\}$. The errors were multiplied by $n\/\\log(n)$ to illustrate the convergence rates. According to Proposition \\ref{prop:ed} and Corollary \\ref{cor:convergence} (ii) we expect the rates for the independent and autoregressive cases to be $n^{-1}\\log(n)$, which is verified by the fact that the solid black and grey lines are roughly constant.\nFor the long range dependent case we expect worse convergence rates, which are also illustrated by the divergence of the dashed black line.\n\\begin{figure}\n \\begin{center}\n \t\\begin{minipage}{.45\\textwidth}\n\t\\center \t\n \t\\includegraphics[width=.98\\linewidth]{MSE_sim_pls_mean}\n \t\\end{minipage}\n \t\\begin{minipage}{.45\\textwidth}\n\t\\center \t\n \t\\includegraphics[width=.98\\linewidth]{MSE_sim_cg_mean}\n \t\\end{minipage}\n \t\\caption{\\label{fig:lines}\n \t\tMean of the $\\mathcal L^2\\left(\\Prob^{X} \\right)$-errors $\\{\\widehat{e}_{n,\\tau}^{(j,m)}\\}_{j=1}^{M}$ of kernel partial least squares (left) and kernel conjugate gradient (right) for $n = 200,400,\\dots,1000$ multiplied by $n\/\\log(n)$. The solid black line is for $\\tau_h = \\mathbb{I}(h=0)$, the grey line for $\\tau_h = 0.9^{-h}$ and the dashed black line for $\\tau_h = (h+1)^{-1\/4}$.\n \t}\n \\end{center}\n\\end{figure}\n\n\\section{Application to Molecular Dynamics Simulations}\n\\label{sec:protein}\nThe collective motions of the atoms of a protein are responsible for its biological function, and molecular dynamics simulation is a popular tool to explore them \\citep{Henz07}. \n\nTypically, the $p \\in {\\mathbb N}$ backbone atoms of a protein are considered for the analysis, with the relevant dynamics happening on time scales of nanoseconds. Although the dynamics are available exactly, the high dimensionality of the data and the large number of observations can be cumbersome for regression analysis, e.g., due to the high collinearity in the columns of the covariate matrix.
Many function-dynamics relationships are also non-linear \\citep{Hub09}.\nA further complication is the fact that the motions of different backbone atoms are highly correlated, making additive non-parametric models for the target function $f^\\ast$ less suitable. \n\nWe consider T4 Lysozyme (T4L) of the bacteriophage T4, a protein responsible for the hydrolysis of 1,4-beta-linkages in peptidoglycans and chitodextrins from bacterial cell walls.\nThe number of available observations is $n=4601$ and T4L consists of $p=486$ backbone atoms.\n\nDenote by $A_{i,t} \\in {\\mathbb R}^3$ the position of the $i$-th backbone atom, $i=1,\\dots,p$, at time $t=1,\\dots,n$, and by $c_i \\in {\\mathbb R}^3$ the position of the $i$-th atom in the (apo) crystal structure of T4L. A usual representation of the protein in a regression setting is the Cartesian one, i.e., we take as the covariate $X_t = (A_{1,t}^{ \\mathrm{\\scriptscriptstyle T} },\\dots,A^{ \\mathrm{\\scriptscriptstyle T} }_{p,t})^{ \\mathrm{\\scriptscriptstyle T} }$, $t=1,\\dots,n$, see \\citet{Bro83}.\nThe functional quantity to predict is the root mean square deviation of the protein configuration $X_t$ at time $t = 1,\\dots,n$ from the (apo) crystal structure $C = (c_1^{ \\mathrm{\\scriptscriptstyle T} },\\dots,c_p^{ \\mathrm{\\scriptscriptstyle T} })^{ \\mathrm{\\scriptscriptstyle T} }$, i.e.,\n\\[\n\ty_t = \\left\\{\n\t\tp^{-1} \\sum\\limits_{i=1}^{p} \\|A_{i,t}-c_i\\|^2\n\t\\right\\}^{1\/2}.\n\\] \nThis nonlinear function was previously considered in \\citet{Hub09}, where it was established that linear models are insufficient for the prediction.\n\nFigure \\ref{fig:sample.series} shows the time series corresponding to $X_{t,1}$ (i.e., the first coordinate of the first atom of T4L) on the left and the functional quantity $y_t$ on the right. These plots reveal certain persistent dependence over time.\n\n\\begin{figure}\n \\begin{center}\n\t \t\\begin{minipage}{.48\\textwidth}\n\t \t\t\\centering\n\t \t\t\\includegraphics[width=.98\\linewidth]{T4L_X1}\n\t\t\\end{minipage}%\n\t\t\\begin{minipage}{.48\\textwidth}\n\t \t\t\\centering\n\t \t\t\\includegraphics[width=.98\\linewidth]{T4L_Y}\n\t\t\\end{minipage}\n\t\t\\caption{\\label{fig:sample.series}\n\t\t\tTime series of $X_{t,1}$, i.e., the first coordinate of the first backbone atom of T4L (left), and the root mean square deviation $y_t$ between the protein configuration at time $t$ and the (apo) crystal structure (right).\n\t\t}\n\t\\end{center}\n\\end{figure}\n\n\nFitting autoregressive moving average models of order $(3,2)$ ($ARMA(3,2)$) to $y_t$ and $ARMA(5,2)$ to $X_{t,1}$ shows that the smallest root of the respective characteristic polynomial is close to one ($1.009$ for $y_t$ and $1.003$ for $X_{t,1}$), highlighting that we are on the border of stationarity, see, e.g., \\citet{bBrock}.\n\nFigure \\ref{fig:acf} depicts the autocorrelation functions of $X_{t,1}$ and $y_t$, the theoretical autocorrelation function of the corresponding autoregressive moving average process, and $\\rho_h \\propto (h+1)^{-q}$ for $q=0.134$ for $X_{t,1}$ and $q=0.066$ for $y_t$.
The latter, as highlighted in Section \\ref{sec:concentration}, is an autocorrelation function for a stationary long range dependent process.\n\\begin{figure}\n \\begin{center}\n\t \t\\begin{minipage}{.48\\textwidth}\n\t \t\t\\centering\n\t \t\t\\includegraphics[width=.98\\linewidth]{T4L_acf_X}\n\t\t\\end{minipage}%\n\t\t\\begin{minipage}{.48\\textwidth}\n\t \t\t\\centering\n\t \t\t\\includegraphics[width=.98\\linewidth]{T4L_acf_y}\n\t\t\\end{minipage}\n\t\t\\caption{\\label{fig:acf}\n\t\t\tAutocorrelation \n\t\t\tplots of $X_{t,1}$ (left) and $y_t$ (right). The estimated autocorrelation function is grey, the theoretical one of a fitted $ARMA(3,2)$ process is solid black and $\\rho_h \\propto (h+1)^{-q}$ for a suitable choice of $q>0$ is dashed black.\n\t\t}\n\t\\end{center}\n\\end{figure}\nThese plots suggest that $X_{t,1}$ and $y_t$ follow some long-range stationary process.\n\nWe apply kernel partial least squares to this data set with the Gaussian kernel $k(x,y) = \\exp(-l \\|x-y\\|^2)$, $x,y \\in {\\mathbb R}^{3p}$, $l>0$. The function $f$ we aim to estimate is a distance between protein configurations, so using a distance based kernel seems reasonable. Moreover, we also investigated the impact of other bounded kernels such as triangular and Epanechnikov and obtained similar results. The first $25\\%$ of the data form a training set to calculate the kernel partial least squares estimator and the remaining data are used for testing.\n\nThe parameter $l>0$ is calculated via cross validation on the training set. \nIn our evaluation we obtained $l = 0.0189$.\n\nFigure \\ref{fig:prot} compares the observed response in the test set with the prediction on the test set obtained by kernel partial least squares, kernel principal component regression and linear partial least squares.\n\\begin{figure}[t!]\n \\begin{center}\n\t \t\\begin{minipage}{.48\\textwidth}\n\t \t\t\\centering\n\t \t\t\\includegraphics[width=.98\\linewidth]{T4L_pred_cor}\n\t\t\\end{minipage}%\n\t\t\\begin{minipage}{.48\\textwidth}\n\t \t\t\\centering\n\t \t\t\\includegraphics[width=.98\\linewidth]{T4L_pred_rss}\n\t\t\\end{minipage}\n\t\t\\caption{\\label{fig:prot}\n\t\t\tCorrelation (left) and residual sum of squares (right) between predicted values and the observed response on the test set depending on the number of used components for kernel partial least squares (solid black), partial least squares (grey) and kernel principal component regression (dashed black).\n\t\t}\n\t\\end{center}\n\\end{figure}\nApparently, kernel partial least squares show the best performance and the kernel principal components algorithm is able to achieve comparable prediction with more components only. Obviously, linear partial least squares can not cope with the non-linearity of the problem. \n\nThis application highlights that kernel partial least squares still delivers a robust prediction even when the dependence in the data is more persistent, if enough observations are available.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}