diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzjymb" "b/data_all_eng_slimpj/shuffled/split2/finalzzjymb" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzjymb" @@ -0,0 +1,5 @@ +{"text":"\\section{Fixed-Period Problems: The Sublinear Case}\n\nIt is widely accepted that an era of \nBig Bang nucleosynthesis (hereafter; BBN) between cosmic temperatures\n$T\\sim 1$ MeV and $T\\sim 10$ keV is the production site of the bulk of $^4$He \nand $^3$He,\na good fraction of $^7$Li, and essentially all $^2$H \nobserved in nebulae and stars in\nthe present universe. Generally good \nagreement between observationally inferred \nprimordial abundances and theoretical predictions for $^2$H, $^4$He, and \n$^7$Li may be\nobtained within a standard homogeneous BBN scenario with baryon-to-photon\nratio $\\eta$ in the range $2\\times 10^{-10} - 6\\times\n10^{-10}$~\\cite{BBN} \n(most likely\ntowards the upper end of this range), \nthough details of the observational determination \nof primordial abundances and the question about consistency within a\nstandard BBN scenario remain under investigation.\nPrimordial conditions as envisioned in a standard BBN scenario yield only\nproduction of minute amounts of isotopes with nucleon number\n$A>7$. For a baryon-to-photon\nratio of $\\eta\\approx 4\\times 10^{-10}$ synthesis of a $^{12}$C \nmass fraction of only\n${\\rm X_{^{12}{\\rm C}}}\\approx 5\\times 10^{-15}$ results. \nThere is also production of trace amounts\nof $^6$Li, $^9$Be, and $^{11}$B, with typical mass fractions $\\sim 10^{-16} -\n10^{-13}$. \nIn contrast to stellar nucleosynthesis, the three-body\ntriple alpha process is not operative in standard BBN, \nmainly due to the small densities ($\\sim 10^{-5}$g\/cm$^3$ at $T\\approx 100$\nkeV) \nand short expansion time ($\\sim 100$ s). A potentially different way of bridging \nbetween light and \\lq\\lq heavy\\rq\\rq\\ ($A\\geq 12$) \nelements across the mass eight\ngap is via the reaction sequence $^7$Be $\\to$ $^{11}$C $\\to$ \n$^{11}$B $\\to$ $^{12}$C. This reaction\nchain is, nevertheless,\nineffective in a standard BBN since build-up of $^7$Be occurs late at \n$T\\sim 50-100$ keV where Coulomb barrier effects \nprevent further processing of this isotope\ninto heavier ones.\n\nOne may pose the following question: {\\it Which alternative BBN scenarios\nwould give substantial initial metal production, without violating\nobservational constraints on the light elements?} Here \\lq\\lq metal\\rq\\rq \nrefers to isotopes with nucleon number $A\\geq 12$ and by the term \\lq\\lq\nsubstantial\\rq\\rq\\ it is meant that production would result in abundance which\ncould either be directly observable in the not-to-distant future within the \natmospheres of metal-poor stars, \nor would be sufficient for the operation of the CNO\ncycle within the first stars (${\\rm X_{CNO}}>10^{-10}$). The only scenario for \npre-stellar production of metals known to the author \nis a BBN scenario characterized by an inhomogeneous baryon distribution.\nSuch inhomogeneities in the baryon-to-photon ratio may result from the\nout-of-equilibrium dynamics of the early universe prior to the BBN era, \nas possibly during a\nfirst-order QCD phase transition ($T\\approx 100$ MeV) or an era of \ninhomogeneous\nelectroweak baryogenesis ($T\\approx 100$ GeV). 
Another possibility may be the\ncreation of baryon \\lq\\lq lumps\\rq\\rq\\ via the evaporation of baryon\nnumber carrying solitons formed in the early universe, \nsuch as strange quark matter nuggets or B-balls. Given the wide variety of\npossibilities and the large intrinsic uncertainties of \nscenarios for the generation\nof baryon number fluctuations in the early universe, it is probably best to initially\nregard the question about primordial abundance yields and possible metallicity\nproduction in \ninhomogeneous BBN scenarios as disconnected from the speculative\nmechanism for\nthe creation of baryon number fluctuations. \n\nIn the past, there have been a number of investigations of the evolution of\npre-existing $\\eta$ fluctuations from high cosmic temperatures ($T\\approx\n100$ GeV)\nthrough the epoch of BBN~\\cite{EVO}, and of the resulting BBN\nabundance yields~\\cite{IBBN}.\nBaryon number fluctuations only impact BBN yields if they survive\ncomplete dissipation by neutron diffusion\nbefore the epoch of weak freeze-out at $T\\approx 1$ MeV. \nThis is the case for fluctuations containing baryon number in excess\nof $N_b\\, ^>_{\\sim}\\, 10^{35}(\\eta_h\/10^{-4})^{-1\/2}$, \nwhere $\\eta_h$ is the baryon-to-photon ratio within the fluctuation,\ncorresponding to a baryon mass of $M_b\\, ^>_{\\sim}\\, 10^{-22}M_{\\odot}\n(\\eta_h\/10^{-4})^{-1\/2}$. \nFor reference, the baryon masses\ncontained within the QCD and electroweak horizons are \n$\\sim 10^{-8}M_{\\odot}$ and $\\sim 10^{-18}M_{\\odot}$, respectively.\nThe evolution of fluctuations after the epoch of weak freeze-out,\nfor a large part of parameter space, is \ncharacterized by differential neutron-proton diffusion resulting in \nhigh-density, proton-rich\nand low-density, neutron-rich regions. \nIt has been suggested that such scenarios may not only\nresult in consistency between\nobservationally inferred and theoretically predicted primordial abundances\nfor much larger horizon-average baryon-to-photon ratios than $\\eta\\approx\n4\\times 10^{-10}$, but also lead to possible\nefficient r-process nucleosynthesis~\\cite{Apple} \nin the neutron-rich regions. When all\ndissipative and hydrodynamic processes during inhomogeneous BBN are \ntreated properly, one finds that, except for a fairly small parameter\nspace, the\naverage $\\eta$ in inhomogeneous BBN scenarios may not be larger than in\nstandard BBN due to overabundant $^7$Li and\/or $^4$He \nsynthesis~\\cite{Result}. A detailed\ninvestigation of possible r-process nucleosynthesis in the neutron-rich regions\nshows that an r-process is utterly ineffective in inhomogeneous \nBBN~\\cite{r-process}. \nThis is mostly due to r-process yields being a sensitive function of the local \nbaryon-to-photon ratio\nin the neutron-rich region, which generically is too low in inhomogeneous BBN.\n\nNevertheless, BBN metal synthesis via $p$- and $\\alpha$-burning reactions\nof cosmologically interesting magnitude\nmay occur if a small fraction, $f_b$, of the cosmic baryons reside in regions \nwith very high baryon-to-photon ratio, \n$\\eta_h\\geq 10^{-4}$~\\cite{metals}. Whereas\ninhomogeneous BBN scenarios are typically plagued by overproduction of\n$^7$Li,\nthe comparatively early production of $^7$Be \n(which would decay into $^7$Li) in regions with\n$\\eta_h\\geq 10^{-4}$ allows for further processing of $^7$Be \ninto heavier isotopes.\nThe net nucleosynthesis of such regions is a $^4$He mass fraction $Y_p\\approx\n0.36$, some metals, and virtually no $^2$H, $^3$He, and $^7$Li. 
\nA baryon-to-photon ratio $\\eta\\approx 10^{-4}$ is interesting as it represents\nthe asymptotic value attained after partial dissipation \nof initially very high-density regions\n$\\eta_i\\gg 10^{-4}$ by the action of \nneutrino heat conduction at temperature $T\\gg\n1$ MeV, for a wide part of parameter space. \nBaryon \\lq\\lq lumps\\rq\\rq\\ with initial baryon-to-photon ratio\n$\\eta_i$ and baryonic mass $M_b\\leq 10^{-14}M_{\\odot}\n\\eta_i$ will have evolved to an asymptotic $\\eta_f\\approx 10^{-4}$\nby the time of weak freeze-out. They are furthermore almost unaffected by neutron\ndiffusion during BBN provided their mass exceeds $M_b \\, ^>_{\\sim}\\, \n10^{-19}M_{\\odot}$. \nIn this case, horizon-averaged BBN abundance yields may be determined by\nan appropriate average of the yields of two homogeneous universes, one at\n$\\eta_h\\approx 10^{-4}$ and one at $\\eta_l$, close to the observationally \ninferred value for a standard BBN scenario. The resultant cosmic \naverage metallicity in such an inhomogeneous scenario\nmay be approximated by~\\cite{metals}\n\\begin{equation}\n\\nonumber\n[{\\rm Z}]\\sim -6.5 + {\\rm log}_{10}(f_b\/10^{-2}) + 2\\,\n{\\rm log}_{10}(\\eta_h\/10^{-4})\\, ,\n\\end{equation}\nwhere [Z] denotes the logarithm of ${\\rm X}_{A\\geq 12}$ \nrelative to the solar value\n(${\\rm X}_{\\odot}\\approx 2\\times 10^{-2}$). \nThe most stringent constraint on\nthe fraction of baryons residing in high-$\\eta$ regions comes from a possible\noverproduction of $^4$He, with a change $\\Delta Y_p\\approx 0.12f_b$ relative to\na homogeneous BBN at $\\eta_l$, allowing for $f_b\\sim 10^{-2}-3\\times 10^{-2}$,\nand [Z]$\\sim -6$, possibly even larger for $\\eta_h > 10^{-4}$.\nGiven $\\eta_h\\approx 10^{-4}$, a nucleosynthetic signature is given\nby [O\/C] $\\sim 1.3$\nand [C\/$Z_{A\\geq 28}]\\sim -1.5$. \nFor $\\eta_h\\gg 10^{-4}$, synthesis of mostly iron-group elements results.\n\nIn conclusion, it is possible, though not necessarily probable, that the universe\n\\lq\\lq started\\rq\\rq\\ with some initial metallicity if a small fraction of\nthe cosmic baryon number existed in high-density regions at the epoch of BBN.\nThis metallicity may be large enough to have the first stars operate \non the CNO cycle, even though it probably would fall\nbelow the metallicities observed in the most metal-poor stars currently \nknown ([Z] $\\sim -5$).\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:intro}\n\nA recent body of research seeks to understand the acceleration\nphenomenon of first-order discrete optimization methods by means of\nmodels that evolve in continuous time. Roughly speaking, the idea is\nto study the behavior of ordinary differential equations (ODEs) which\narise as continuous limits of discrete-time accelerated\nalgorithms. The basic premise is that the availability of the powerful\ntools of the continuous realm, such as differential calculus, Lie\nderivatives, and Lyapunov stability theory, can then be brought to\nbear to analyze and explain the accelerated behavior of these flows,\nproviding insight into their discrete counterparts. Fully closing the\ncircle to provide a complete description of the acceleration\nphenomenon requires solving the outstanding open question of how to\ndiscretize the continuous flows while retaining their accelerated\nconvergence properties. 
However, the discretization of accelerated\nflows has proven to be a challenging task, where retaining\nacceleration seems to depend largely on the particular ODE and the\ndiscretization method employed. This paper develops a resource-aware\napproach to the discretization of accelerated optimization flows.\n\n\n\\subsubsection*{Literature Review}\n\n\nThe acceleration phenomenon goes back to the seminal\npaper~\\cite{BTP:64} introducing the so-called heavy-ball method, which\nemployed momentum terms to speed up the convergence of the classical\ngradient descent method.\nThe heavy-ball method achieves an optimal convergence rate in a\nneighborhood of the minimizer for arbitrary convex functions and a\nglobally optimal convergence rate for quadratic objective functions.\nLater on, the work~\\cite{YEN:83} proposed Nesterov's accelerated\ngradient method and, employing the technique of estimating sequences,\nshowed that it converges globally with optimal convergence rate for\nconvex and strongly-convex smooth functions. The algebraic nature of\nthe technique of estimating sequences does not fully explain the\nmechanisms behind the acceleration phenomenon, and this has motivated\nmany approaches in the literature to provide fundamental understanding\nand insights. These include coupling dynamics~\\cite{ZAZ-LO:17},\ndissipativity theory~\\cite{BH-LL:17}, integral quadratic\nconstraints~\\cite{LL-BR-AP:16,BVS-RAF-KML:18}, and geometric\narguments~\\cite{SB-YTL-MS:15}.\n\nOf specific relevance to this paper is a recent line of research\ninitiated by~\\cite{WS-SB-EJC:16} that seeks to understand the\nacceleration phenomenon in first-order optimization methods by means\nof models that evolve in continuous time. The work~\\cite{WS-SB-EJC:16}\nintroduced a second-order ODE as the continuous limit of Nesterov's\naccelerated gradient method and characterized its accelerated\nconvergence properties using Lyapunov stability analysis. The ODE\napproach to acceleration now includes the use of Hamiltonian dynamical\nsystems~\\cite{MB-MJ-AW:18,CJM-DP-YWT-BO-AD:18}, inertial systems with\nHessian-driven damping~\\cite{HA-ZC-JF-HR:19}, and high-resolution\nODEs~\\cite{BS-SSD-MIJ-WJS:18-arxiv,BS-JG-SK:20}. This body of\nresearch is also reminiscent of the classical dynamical systems\napproach to algorithms in optimization, see~\\cite{RWB:91,UH-JBM:94}.\nThe question of how to discretize the continuous flows while\nmaintaining their accelerated convergence rates has also attracted\nsignificant attention, motivated by the ultimate goal of fully\nunderstanding the acceleration phenomenon and taking advantage of it\nto design better optimization algorithms. Interestingly,\ndiscretizations of these ODEs do not necessarily lead to\nacceleration~\\cite{BS-SSD-MIJ-WJS:19-arxiv}. In fact, explicit\ndiscretization schemes, like forward Euler, can even become\nnumerically unstable after a few iterations~\\cite{AW-ACW-MIJ:16}.\nMost of the discretization approaches found in the literature are\nbased on the study of well-known integrators, including symplectic\nintegrators~\\cite{MB-MJ-AW:18,AW-LM-AW:19}, Runge-Kutta\nintegrators~\\cite{JZ-AM-SS-AJ:18}, or modifications of Nesterov's\n\\emph{three sequences}~\\cite{AW-ACW-MIJ:16,AW-LM-AW:19,ACW-BR-MIJ:18}.\nOur previous work~\\cite{MV-JC:19-nips} instead developed a\nvariable-stepsize discretization using zero-order holds and\nstate-triggers based on the derivative of the Lyapunov function of the\noriginal continuous flow. 
Here, we provide a comprehensive approach\nbased on powerful tools from resource-aware control, including\nperformance-based triggering and state holds that more effectively use\nsampled information. Other recent approaches to the acceleration\nphenomenon and the synthesis of optimization algorithms using\ncontrol-theoretic notions and techniques include~\\cite{ASK-PME-TK:18},\nwhich employs hybrid systems to design a continuous-time dynamics with\na feedback regulator of the viscosity of the heavy-ball ODE to\nguarantee arbitrarily fast exponential convergence,\nand~\\cite{DH-RGS:19}, which introduces an algorithm that alternates\nbetween two continuous heavy-ball dynamics (one fast when far from the\nminimizer but unstable, and another slower but stable around the\nminimizer).\n\n\\subsubsection*{Statement of Contributions}\n\nThis paper develops a resource-aware control framework for the\ndiscretization of accelerated optimization flows that fully exploits\ntheir dynamical properties. Our approach relies on the key\nobservation that resource-aware control provides a principled way to\ngo from continuous-time control design to real-time implementation\nwith stability and performance guarantees by opportunistically\nprescribing when a certain resource should be employed. In our\ntreatment, the resource to be aware of is the last sampled state of\nthe system, and hence what we seek to maximize is the stepsize of the\nresulting discrete-time algorithm.\nOur first contribution is the introduction of a second-order\ndifferential equation which we term heavy-ball dynamics with displaced\ngradient. This dynamics generalizes the continuous-time heavy-ball\ndynamics analyzed in the literature by evaluating the gradient of the\nobjective function in a way that takes into account the second-order nature of the\nflow. We establish that the proposed dynamics retains the same\nconvergence properties as the original one while providing additional\nflexibility in the form of a design parameter.\n\nOur second contribution uses trigger design concepts from\nresource-aware control to synthesize criteria that determine the\nvariable stepsize of the discrete-time implementation of the\nheavy-ball dynamics with displaced gradient. We refer to these\ncriteria as event- or self-triggered, depending on whether the\nstepsize is implicitly or explicitly defined. We employ derivative-\nand performance-based triggering to ensure the algorithm retains the\ndesired decrease of the Lyapunov function of the continuous flow. In\ndoing so, we face the challenge that the evaluation of this function\nrequires knowledge of the unknown optimizer of the objective\nfunction. To circumvent this hurdle, we derive bounds on the evolution\nof the Lyapunov function that can be evaluated without knowledge of\nthe optimizer. We characterize the convergence properties of the\nresulting discrete-time algorithms, establishing the existence of a\nminimum inter-event time and performance guarantees with regard to\nthe objective function. \n\nOur last two contributions provide ways of exploiting the sampled\ninformation to enhance the algorithm performance. Our third\ncontribution provides an implementation of the algorithms\nthat adaptively adjusts the value of the gradient displacement\nparameter depending on the region of the space to which the state\nbelongs. 
Our fourth and last contribution builds on the fact that the\ncontinuous-time heavy-ball dynamics can be decomposed as the sum of a\nsecond-order linear dynamics and a nonlinear forcing term\ncorresponding to the gradient of the objective function. Building on\nthis observation, we provide a more accurate hold for the\nresource-aware implementation by using the samples only in the\nnonlinear term, and integrating exactly the resulting linear system\nwith constant forcing. We establish the existence of a minimum\ninter-event time and characterize the performance of the resulting\nhigh-order-hold algorithm with regard to the objective function.\nFinally, we illustrate the proposed optimization algorithms in\nsimulation, comparing them against the heavy-ball and Nesterov's\naccelerated gradient methods and showing superior performance to other\ndiscretization methods proposed in the literature.\n\n\\section{Preliminaries}\\label{sec:preliminaries}\nThis section presents basic notation and preliminaries.\n\n\\subsection{Notation}\\label{assumptions}\nWe denote by ${\\mathbb{R}}$ and $\\mathbb{R}_{>0}$ the sets of real and positive\nreal numbers, resp. All vectors are column vectors. We denote their\nscalar product by $\\langle \\cdot,\\cdot\\rangle$. We use $\\norm{\\cdot{}}$\nto denote the $2$-norm in Euclidean space. Given $\\mu \\in\n\\mathbb{R}_{>0}$, \na continuously differentiable function $f$ is $\\mu$-strongly convex if\n$f(y) - f(x) \\geq\\langle \\nabla f(x), y - x\\rangle +\n\\frac{\\mu}{2}\\norm{x - y}^2$ for $x$, $y \\in {\\mathbb{R}}^n$. Given $L \\in\n\\mathbb{R}_{>0}$ and a function $f:X \\rightarrow Y$ between two normed spaces\n$(X,\\norm{\\cdot{}}_X)$ and ($Y,\\norm{\\cdot{}}_Y$), $f$ is\n$L$-Lipschitz if $\\norm{f(x) - f(x')}_{Y}\\leq L\\norm{x - x'}_{X}$ for\n$x$, $x' \\in X$. \nThe functions we consider here are continuously differentiable,\n$\\mu$-strongly convex, and have an $L$-Lipschitz continuous gradient. We\ndenote the set of functions with all these properties\nby~$\\mathcal{S}_{\\mu,L}^1({\\mathbb{R}}^n)$. A function\n$f:{\\mathbb{R}}^n\\rightarrow{\\mathbb{R}}$ is positive definite relative to $x_*$ if\n$f(x_*)=0$ and $f(x)>0$ for $x\\in{\\mathbb{R}}^n\\setminus\\{x_*\\}$.\n\n\\subsection{Resource-Aware Control}\\label{sec:resource-aware}\n\nOur work builds on ideas from resource-aware control to develop\ndiscretizations of continuous-time accelerated flows. Here, we provide\na brief exposition of its basic elements and refer\nto~\\cite{WPMHH-KHJ-PT:12,CN-EG-JC:19-auto} for further details.\n\nGiven a controlled dynamical system $\\dot{p} = X(p,u)$, with $p \\in\n{\\mathbb{R}}^n$ and $u \\in {\\mathbb{R}}^m$, assume we are given a stabilizing\ncontinuous state-feedback $\\map{\\mathfrak{k}}{{\\mathbb{R}}^n}{{\\mathbb{R}}^m}$ so that the\nclosed-loop system $\\dot{p} = X(p,\\mathfrak{k}(p))$ has $p_*$ as a globally\nasymptotically stable equilibrium point. Assume also that a Lyapunov\nfunction $\\map{V}{{\\mathbb{R}}^n}{{\\mathbb{R}}}$ is available as a certificate of\nthe globally stabilizing nature of the controller. Here, we assume\nthis takes the form\n\\begin{equation}\\label{eq:lyapunov_decay}\n \\dot{V} = \\langle \\nabla V(p), X(p,\\mathfrak{k}(p)) \\rangle \\leq - \\frac{\\sqrt{\\mu}}{4} V(p), \n\\end{equation}\nfor all $p \\in {\\mathbb{R}}^n$. 
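As a toy illustration of this setup (introduced here only for intuition and not used in the sequel), consider the scalar integrator $\\dot{p} = u$ with feedback $\\mathfrak{k}(p) = -p$ and candidate $V(p) = p^2$. Then
\\begin{equation*}
 \\dot{V} = \\langle \\nabla V(p), X(p,\\mathfrak{k}(p)) \\rangle = 2p\\,(-p) = -2\\,V(p),
\\end{equation*}
so the closed loop comes with an exponential decay certificate of the form~\\eqref{eq:lyapunov_decay} whenever $\\sqrt{\\mu} \\leq 8$. 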
Although exponential decay of $V$ along the\nsystem trajectories is not necessary, we restrict our attention to\nthis case as it arises naturally in our treatment.\n\nSuppose we are given the task of implementing the controller signal\nover a digital platform, meaning that the actuator cannot be\ncontinuously updated as prescribed by the specification\n$u=\\mathfrak{k}(p)$. In such case, one is forced to discretize the\ncontrol action along the execution of the dynamics, while making sure\nthat stability is still preserved. A simple-to-implement approach is\nto update the control action \\emph{periodically}, i.e., fix $h >0$,\nsample the state as $\\{p(kh)\\}_{k=0}^\\infty $ and implement\n\\begin{align*}\n \\dot{p}(t) = X(p(t),\\mathfrak{k}(p(kh))) , \\quad t \\in [kh,(k +1)h].\n\\end{align*}\nThis approach requires $h$ to be small enough to ensure that $V$\nremains a Lyapunov function and, consequently, the system remains\nstable. By contrast, in \\emph{resource-aware control}, one employs\nthe information generated by the system along its trajectory to update\nthe control action in an opportunistic fashion. Specifically, we seek\nto determine in a state-dependent fashion a sequence of times\n$\\{t_k\\}_{k=0}^\\infty$, not necessarily uniformly spaced, such that\n$p_*$ remains a globally asymptotically stable equilibrium for the\nsystem\n\\begin{align}\\label{eq:triggered-design}\n \\dot{p}(t) &= X(p(t),\\mathfrak{k}(p(t_k))) , \\quad t \\in\n [t_k,t_{k+1}] .\n\\end{align}\nThe main idea to accomplish this is to let the state sampling be\nguided by the principle of maintaining the same type of exponential\ndecay~\\eqref{eq:lyapunov_decay} along the new dynamics. To do this,\none defines triggers to ensure that this decay is never violated by\nprescribing a new state sampling. Formally, one sets $t_0 = 0$ and $\nt_{k + 1} = t_k + \\operatorname{step} (p(t_k))$, where the stepsize is defined by\n\\begin{align}\\label{eq:step-generic}\n \\operatorname{step} (\\hat{p}) &= \\min \\setdef{t > 0}{b(\\hat {p},t) = 0}.\n\\end{align}\nWe refer to the criteria as \\emph{event-triggering} or\n\\emph{self-triggering} depending on whether the evaluation of the\nfunction $b$ requires monitoring of the state $p$ along the trajectory\nof~\\eqref{eq:triggered-design} (ET) or just knowledge of its initial\ncondition $\\hat{p}$ (ST). The more stringent requirements to implement\nevent-triggering lead to larger stepsizes versus the more conservative\nones characteristic of self-triggering. In order for the state\nsampling to be implementable in practice, the inter-event times\n$\\{t_{k + 1} - t_k\\}_{k=0}^\\infty$ must be uniformly lower bounded by\na positive minimum inter-event time, abbreviated MIET. In particular,\nthe existence of a MIET rules out the existence of Zeno behavior,\ni.e., the possibility of an infinite number of triggers in a finite\namount of time.\n\nDepending on how the evolution of the function $V$ is examined, we\ndescribe two types of triggering conditions\\footnote{In both cases,\n for a given $z \\in {\\mathbb{R}}^n$, we let $p(t;\\hat{p})$ denote the\n solution of $\\dot{p}(t) = X(p(t),\\mathfrak{k}(\\hat{p}))$ with initial condition\n $p(0) = \\hat{p}$.}:\n\\begin{LaTeXdescription}\n\\item[Derivative-based trigger:] In this case, $b^{\\operatorname{d}}$ is defined as\n an upper bound of the expression $\\frac{d}{dt} V(p(t;\\hat{p})) +\n \\frac{\\sqrt{\\mu}}{4} V(p(t;\\hat{p}))$. 
This definition ensures\n that~\\eqref{eq:lyapunov_decay} is maintained\n along~\\eqref{eq:triggered-design};\n\n\\item[Performance-based trigger:] In this case, $b^{\\operatorname{p}}$ is defined as an\n upper bound of the expression $V(p(t;\\hat{p})) - e^{-\\frac{\\sqrt{\\mu}}{4}\n t}V(\\hat{p})$. Note that this definition ensures that the integral\n version of~\\eqref{eq:lyapunov_decay} is maintained\n along~\\eqref{eq:triggered-design}.\n\\end{LaTeXdescription}\nIn general, the performance-based trigger gives rise to stepsizes that\nare at least as large as the ones determined by the derivative-based\napproach, cf.~\\cite{PO-JC:18-cdc}. This is because the latter\nprescribes an update as soon as the exponential decay is about to be\nviolated, and therefore, does not take into account the fact that the\nLyapunov function might have been decreasing at a faster rate since\nthe last update. Instead, the performance-based approach reasons over\nthe \\emph{accumulated decay} of the Lyapunov function since the last\nupdate, potentially yielding longer inter-sampling times.\n\n\nA final point worth mentioning is that, in the event-triggered control\nliterature, the notion of \\emph{resource} to be aware of can be many\ndifferent things, beyond the actuator described above, including the\nsensor, sensor-controller communication, communication with other\nagents, etc. This richness opens the way to explore more elaborate\nuses of the sampled information beyond the zero-order hold\nin~\\eqref{eq:triggered-design}, something that we also leverage later\nin our presentation.\n\n\n\n\\section{Problem Statement}\\label{se:problem-statement}\n\nOur motivation here is to show that principled approaches to\ndiscretization can retain the accelerated convergence properties of\ncontinuous-time dynamics, fill the gap between the continuous and\ndiscrete viewpoints on optimization algorithms, and lead to the\nconstruction of new ones. Throughout the paper, we focus on the\ncontinuous-time version of the celebrated heavy-ball\nmethod~\\cite{BTP:64}. Let $f$ be a function in\n$\\mathcal{S}_{\\mu,L}^1({\\mathbb{R}}^n)$ and let $x_*$ be its unique\nminimizer. The heavy-ball method is known to have an optimal\nconvergence rate in a neighborhood of the minimizer. For its\ncontinuous-time counterpart, consider the following $s$-dependent\nfamily of second-order differential equations, with $s \\in \\mathbb{R}_{>0}$,\nproposed in~\\cite{BS-SSD-MIJ-WJS:18-arxiv},\n\\begin{subequations}\\label{eq:continuous-hb-dynamics}\n \\begin{align}\n \\begin{bmatrix}\n \\dot{x} \\\\\n \\dot{v}\n \\end{bmatrix}\n & =\n \\begin{bmatrix}\n v\n \\\\\n - 2\\sqrt{\\mu}v - (1+\\sqrt{\\mu s})\\nabla f(x))\n \\end{bmatrix},\n \\\\\n x(0) & =x_0, \\quad v(0)=-\\frac{2\\sqrt{s}\\nabla\n f(x_0)}{1+\\sqrt{\\mu s}} .\n \\label{eq:initial-state}\n \n \n \n \n \n \n \n \n \n \n \n \n \\end{align}\n\\end{subequations}\nWe refer to this dynamics as~$X_{\\operatorname{hb}}$. The following result\ncharacterizes the convergence properties\nof~\\eqref{eq:continuous-hb-dynamics} to $p_*=[x_*, 0]^T$.\n\n\\begin{theorem}[\\cite{BS-SSD-MIJ-WJS:18-arxiv}]\\label{th:hb}\n Let $\\map{V}{{\\mathbb{R}}^n \\times {\\mathbb{R}}^n}{{\\mathbb{R}}}$ be\n \\begin{align}\\label{eq:continuous-hb-lyapunov}\n V(x,v) & = (1+\\sqrt{\\mu s})(f(x) - f(x_*)) +\n \\displaystyle\\frac{1}{4}\\norm{v}^2 \\nonumber\n \\\\\n & \\quad + \\displaystyle\\frac{1}{4}\\norm{v +2\\sqrt{\\mu}(x-x_*)}^2,\n \\end{align}\n which is positive definite relative to $[x_*, 0]^T$. 
Then $\\dot{V}\n \\leq-\\frac{\\sqrt{\\mu}}{4}V$ along the\n dynamics~\\eqref{eq:continuous-hb-dynamics} and, as a consequence,\n $p_*=[x_*, 0]^T$ is globally asymptotically stable. Moreover, for\n $s\\leq 1\/L$, the exponential decrease of $V$ implies\n \\begin{equation}\\label{eq:decay_fun}\n f(x(t))-f(x_*)\\leq\n \\frac{7\\norm{x(0) -\n x_*}^2}{2s}e^{-\\frac{\\sqrt{\\mu}}{4}t} .\n \\end{equation}\n\\end{theorem}\n\nThis result, along with analogous\nresults~\\cite{BS-SSD-MIJ-WJS:18-arxiv} for the Nesterov's accelerated\ngradient descent, serves as an inspiration to build Lyapunov\nfunctions that help to explain the accelerated convergence rate of the\ndiscrete-time methods.\n\n\n\nInspired by the success of resource-aware control in developing\nefficient closed-loop feedback implementations on digital systems,\nhere we present a discretization approach to accelerated optimization\nflows using resource-aware control. At the basis of the approach\ntaken here is the observation that the convergence\nrate~\\eqref{eq:decay_fun} of the continuous flow is a direct\nconsequence of the Lyapunov nature of the\nfunction~\\eqref{eq:continuous-hb-lyapunov}. In fact, the integration\nof $\\dot{V} \\leq-\\frac{\\sqrt{\\mu}}{4}V$ along the system trajectories\nyields\n\\begin{equation*}\n V(x(t),v(t)) \\leq\n e^{-\\frac{\\sqrt{\\mu}}{4}t} V(x(0),v(0)).\n\\end{equation*} \nSince $ f(x(t)) - f(x_*) \\leq V(x(t),v(t))$, we deduce\n\\begin{align*}\n f(x(t)) - f(x_*) \\leq e^{-\\frac{\\sqrt{\\mu}}{4}t} V(x(0),v(0)) =\n \\mathcal{O}(e^{-\\frac{\\sqrt{\\mu}}{4}t}) .\n\\end{align*}\nThe characterization of the convergence rate via the decay of the\nLyapunov function is indeed common among accelerated optimization\nflows. This observation motivates the resource-aware approach to\ndiscretization pursued here, where the resource that we aim to use\nefficiently is the sampling of the state itself. By doing so, the\nultimate goal is to give rise to large stepsizes that take maximum\nadvantage of the decay of the Lyapunov function (and consequently of\nthe accelerated nature) of the continuous-time dynamics in the\nresulting discrete-time implementation.\n\n\n\\section{Resource-Aware Discretization of Accelerated Optimization\n Flows}\\label{sec:performance-based}\n\nIn this section we propose a discretization of accelerated\noptimization flows using state-dependent triggering and analyze the\nproperties of the resulting discrete-time algorithm. For convenience,\nwe use the shorthand notation $p = [x,v]^T$. In following with the\nexposition in Section~\\ref{sec:resource-aware}, we start by\nconsidering the zero-order hold implementation $\\dot p = X_{\\operatorname{hb}}\n(\\hat{p})$, $p(0) = \\hat{p}$ of the heavy-ball\ndynamics~\\eqref{eq:continuous-hb-dynamics},\n\\begin{subequations}\\label{eq:forward-euler}\n \\begin{align}\n \\dot{x} & = \\hat{v},\n \\\\\n \\dot{v} & = -2\\sqrt{\\mu}\\hat{v} - (1 + \\sqrt{\\mu s}) \\nabla\n f(\\hat{x}).\n \\end{align} \n\\end{subequations}\nNote that the solution trajectory takes the form $p(t) = \\hat{p} + t\nX_{\\operatorname{hb}} (\\hat p)$, which in discrete-time terminology corresponds to a\nforward-Euler discretization\nof~\\eqref{eq:continuous-hb-dynamics}. 
Component-wise, we have\n\\begin{align*}\n x(t) & = \\hat{x} + t\\hat{v},\n \\\\\n v(t) & =\\hat{v} -t \\big( 2\\sqrt{\\mu}\\hat{v} + (1 + \\sqrt{\\mu s})\n \\nabla f(\\hat{x}) \\big).\n\\end{align*}\n\nAs we pointed out in Section~\\ref{sec:resource-aware}, the use of\nsampled information opens the way to more elaborate constructions than\nthe zero-order hold in~\\eqref{eq:forward-euler}. As an example, given\nthe second-order nature of the heavy-ball dynamics, it would seem\nreasonable to leverage the (position, velocity) nature of the pair\n$(\\hat{x},\\hat{v})$ (meaning that, at position $\\hat{x}$, the system\nis moving with velocity $\\hat{v}$) by employing the modified zero-order\nhold\n\\begin{subequations}\\label{eq:forward-euler-a}\n \\begin{align}\n \\dot{x} & = \\hat{v},\n \\\\\n \\dot{v} & = -2\\sqrt{\\mu}\\hat{v} - (1 + \\sqrt{\\mu s}) \\nabla\n f(\\hat{x} + a \\hat{v}),\n \\end{align} \n\\end{subequations}\nwhere $a \\ge 0$. Note that the trajectory\nof~\\eqref{eq:forward-euler-a} corresponds to the forward-Euler\ndiscretization of the continuous-time dynamics\n\\begin{align}\\label{eq:continuous-hb-dynamics_a}\n \\begin{bmatrix}\n \\dot{x} \\\\\n \\dot{v}\n \\end{bmatrix}\n & =\n \\begin{bmatrix}\n v\n \\\\\n - 2\\sqrt{\\mu}v - (1+\\sqrt{\\mu s})\\nabla f(x + av)\n \\end{bmatrix}.\n\\end{align}\nWe refer to this as the {\\it heavy-ball dynamics with displaced\n gradient} and denote it by~$X^a_{\\operatorname{hb}}$ (note\nthat~\\eqref{eq:forward-euler-a}\nand~\\eqref{eq:continuous-hb-dynamics_a} with $a=0$\nrecover~\\eqref{eq:forward-euler}\nand~\\eqref{eq:continuous-hb-dynamics}, respectively). In order to\npursue the resource-aware approach laid out in\nSection~\\ref{sec:resource-aware} with the modified zero-order hold\nin~\\eqref{eq:forward-euler-a}, we need to characterize the asymptotic\nconvergence properties of the heavy-ball dynamics with displaced\ngradient, which we tackle next.\n\n\\begin{remark}\\longthmtitle{Connection between the use of sampled\n information and high-resolution ODEs} {\\rm A number of\n works~\\cite{ML-AMD:19,MM-MIJ:19,IS-JM-GD-GH:13} have explored\n formulations of Nesterov's accelerated gradient method that employ\n displaced-gradient-like terms similar to the one used above. Here,\n we make this connection explicit. Given Nesterov's algorithm\n \\begin{align*}\n y_{k+1} & = x_k -s \\nabla f(x_k),\n \\\\\n x_{k+1} &= y_{k+1} + \\displaystyle\\frac{1-\\sqrt{\\mu s}}{1 +\n \\sqrt{\\mu s}}(y_{k+1} - y_k),\n \\end{align*}\n the work~\\cite{BS-SSD-MIJ-WJS:18-arxiv} obtains the following\n limiting high-resolution ODE\n \\begin{equation}\\label{hr-ODE}\n \\ddot{x} + 2\\sqrt{\\mu}\\dot{x} +\n \\sqrt{s}\\nabla^2 f(x)\\dot{x} + (1+\\sqrt{\\mu s})\\nabla f(x) = 0.\n \\end{equation}\n Interestingly, considering instead the evolution of the $y$-variable\n and applying similar arguments to the ones\n in~\\cite{BS-SSD-MIJ-WJS:18-arxiv}, one instead obtains\n \\begin{equation}\\label{hr-ODE_new}\n \\ddot{y} + 2\\sqrt{\\mu}\\dot{y} + (1+\\sqrt{\\mu s})\\nabla f \\big(y +\n \\frac{\\sqrt{s}}{1 +\\sqrt{\\mu s}}\\dot{y}\\big) = 0 , \n \\end{equation}\n which corresponds to the continuous heavy-ball dynamics\n in~\\eqref{eq:continuous-hb-dynamics} evaluated with a displaced\n gradient, i.e.,~\\eqref{eq:continuous-hb-dynamics_a}. 
Even further,\n if we Taylor expand the last term in~\\eqref{hr-ODE_new} as\n \\[\n \\nabla f(y + \\displaystyle\\frac{\\sqrt{s}}{1 +\\sqrt{\\mu s}}\\dot{y}) =\n \\nabla f(y) + \\nabla^2 f(y) \\displaystyle\\frac{\\sqrt{s}}{1\n +\\sqrt{\\mu s}}\\dot{y} + \\mathcal{O}(s)\n \\]\n and disregard the $\\mathcal{O}(s)$ term, we recover~\\eqref{hr-ODE}.\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n This shows that~\\eqref{hr-ODE_new} is just~\\eqref{hr-ODE} with extra\n higher-order terms in $s$, and provides evidence of the role of\n gradient displacement in enlarging the modeling capabilities of\n high-resolution ODEs.} \\relax\\ifmmode\\else\\unskip\\hfill\\fi\\oprocendsymbol\n\\end{remark}\n\n\n\\subsection{Asymptotic Convergence of Heavy-Ball Dynamics with\n Displaced Gradient}\\label{sec:hbdg-asymptotic}\n\nIn this section, we study the asymptotic convergence of heavy-ball\ndynamics with displaced gradient. Interestingly, for $a$ sufficiently\nsmall, this dynamics enjoys the same convergence properties as the\ndynamics~\\eqref{eq:continuous-hb-dynamics}, as the following result\nshows.\n\n\\begin{theorem}\\longthmtitle{Global asymptotic stability of\n heavy-ball dynamics with\n displaced gradient}\\label{th:continuous-sampled}\n Let $\\beta_1,\\dots,\\beta_4>0$ be\n \\begin{align*}\n \\beta_1 &= \\sqrt{\\mu_s}\\mu, \\quad \\beta_2 =\n \\displaystyle\\frac{\\sqrt{\\mu_s} L}{\\sqrt{\\mu}},\n \\\\\n \\beta_3 &= \\frac{13\\sqrt{\\mu}}{16}, \\quad \\beta_4 = \\frac{4\n \\mu^{2}\\sqrt{s} + 3 L \\sqrt{\\mu} \\sqrt{\\mu_s} }{8 L^2},\n \n \n \n \n \n \n \n \\end{align*}\n where, for brevity, $\\sqrt{\\mu_s} = 1 + \\sqrt{\\mu s}$,\n and define\n \\begin{align}\\label{eq:a1}\n a^*_1 = \\frac{2}{\\beta_2^2} \\Big( \\beta_1 \\beta_4 +\n \\sqrt{\\beta_2^2 \\beta_3 \\beta_4 + \\beta_1^2 \\beta_4^2} \\Big) .\n \\end{align}\n Then, for $0 \\leq a \\leq a^*_1$, $\\dot{V}\n \\leq-\\frac{\\sqrt{\\mu}}{4}V$ along the\n dynamics~\\eqref{eq:continuous-hb-dynamics_a} and, as a consequence,\n $p_*=[x_*, 0]^T$ is globally asymptotically stable. Moreover, for\n $s\\leq 1\/L$, the exponential decrease of $V$\n implies~\\eqref{eq:decay_fun} holds along the trajectories\n of~$X^a_{\\operatorname{hb}}$.\n\\end{theorem}\n\\begin{proof\n Note that\n \\begin{align*}\n & \\langle \\nabla V(p), X^a_{\\operatorname{hb}}(p) \\rangle +\n \\frac{\\sqrt{\\mu}}{4} V(p) =\n \\\\\n & = (1 \\!+\\! \\sqrt{\\mu s})\\langle \\nabla f(x),v \\rangle \\!-\\!\n \\sqrt{\\mu}\\norm{v}^2 \\!-\\! \\sqrt{\\mu_s}\\langle \\nabla f(x \\!+\\! a\n v),v \\rangle\n \\\\\n & \\quad - \\sqrt{\\mu}\\sqrt{\\mu_s}\\langle \\nabla f(x + av),x - x_*\n \\rangle + \\frac{\\sqrt{\\mu}}{4} V(x,v)\n \\\\\n \n \n \n \n \n \n \n \n \n \n \n & = \\underbrace{-\\sqrt{\\mu}\\norm{v}^2 - \\sqrt{\\mu}\\sqrt{\\mu_s}\\langle \\nabla f(x), x - x_* \\rangle +\\frac{\\sqrt{\\mu}}{4}\n V(x,v)}_{\\textrm{Term~I}}\n \\\\\n & \\quad \\underbrace{-\\sqrt{\\mu_s}\\langle \\nabla f(x + av) -\n \\nabla f(x),v \\rangle}_{\\textrm{Term~II}}\n \\\\\n & \\quad \\underbrace{ -\\sqrt{\\mu}\\sqrt{\\mu_s}\\langle \\nabla\n f(x + av) - \\nabla f(x),x - x_*\\rangle}_{\\textrm{Term~III}},\n \\end{align*}\n where in the second equality, we have added and subtracted\n $\\sqrt{\\mu} \\sqrt{\\mu_s}\\langle \\nabla f(x ),x - x_*\n \\rangle$. Observe that ``Term~I'' corresponds to $\\langle \\nabla\n V(p),X_{\\operatorname{hb}}(p)\\rangle + \\frac{\\sqrt{\\mu}}{4}V(p)$ and\n is therefore negative by\n Theorem~\\ref{th:hb}. 
From~\\cite{MV-JC:19-nips}, this term can be\n bounded as\n \\begin{align*}\n \\textrm{Term~I} & \\leq \\frac{-13\\sqrt{\\mu}}{16} \\norm{v}^2\n \\\\\n & \\quad+ \\Big(\\frac{4 \\mu^{2}\\sqrt{s}+3 L \\sqrt{\\mu}\\sqrt{\\mu_s}}{8\n L^2} \\Big)\\norm{\\nabla f(x)}^2.\n \n \n \n \n \n \n \n \\end{align*}\n Let us study the other two terms. By strong convexity, we have $ -\n \\langle \\nabla f(x + av) - \\nabla f(x), v \\rangle \\leq - a \\mu\n \\norm{v}^2$, and therefore\n \\begin{align*}\n \\textrm{Term~II} & \\leq -a\\sqrt{\\mu_s}\\mu \\norm{v}^2 \\le 0 .\n \\end{align*}\n \n \n \n \n \n \n \n \n \n \n Regarding Term~III, one can use the $L$-Lipschitzness of $\\nabla f$\n and strong convexity to obtain\n \\begin{align*}\n \\textrm{Term~III} & \\leq \\displaystyle\\frac{a}{\\mu}\\sqrt{\\mu} \\sqrt{\\mu_s} L\\norm{v}\\norm{\\nabla f(x)}.\n \\end{align*}\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n Now, using the notation in the statement, we can write\n \\begin{align}\\label{eq:a_condition}\n & \\langle \\nabla V(p), X^a_{\\operatorname{hb}}(p) \\rangle +\n \\frac{\\sqrt{\\mu}}{4} V(p)\n \\\\\n & \\leq a \\big(\\!-\\!\\beta_1\\norm{v}^2 + \\beta_2\\norm{v}\\norm{\\nabla\n f(x)} \\big) \\!-\\! \\beta_3\\norm{v}^2 \\!-\\!\\beta_4\\norm{\\nabla\n f(x)}^2 . \\notag\n \\end{align}\n If $-\\beta_1\\norm{v}^2 + \\beta_2\\norm{v}\\norm{\\nabla f(x)} \\leq 0$,\n then the RHS of~\\eqref{eq:a_condition} is negative for any $a \\geq\n 0$. If $-\\beta_1\\norm{v}^2 + \\beta_2\\norm{v}\\norm{\\nabla f(x)} > 0$,\n the RHS of~\\eqref{eq:a_condition} is negative if and only if \n \\[\n a \\leq \\displaystyle\\frac{\\beta_3\\norm{v}^2 + \\beta_4\\norm{\\nabla\n f(x)}^2}{-\\beta_1\\norm{v}^2 + \\beta_2\\norm{v}\\norm{\\nabla\n f(x)}}. \n \\]\n The RHS of this equation corresponds to $g({\\norm{\\nabla\n f(x)}}\/{\\norm{\\nabla v}})$, with the function $g$ defined\n in~\\eqref{eq:g}. From Lemma~\\ref{lemma:bound}, as long as\n $-\\beta_1\\norm{v}^2 + \\beta_2\\norm{v}\\norm{\\nabla f(x)} > 0$, this\n function is lower bounded by\n \\begin{align*}\n a_1^* = \\frac{\\beta_3 + \\beta_4(z_{\\root}^+)^2}{-\\beta_1 + \\beta_2\n z_{\\root}^+} >0 ,\n \\end{align*}\n where $z_{\\root}^+$ is defined in~\\eqref{eq:zroot}. 
This exactly\n corresponds to~\\eqref{eq:a1}, concluding the\n result.\n\\end{proof}\n\n\\begin{remark}\\longthmtitle{Adaptive displacement along the\n trajectories of heavy-ball dynamics with displaced\n gradient}\\label{rem:a-over-region}\n {\\rm From the proof of Theorem~\\ref{th:continuous-sampled}, one can\n observe that if $(x,v)$ is such that $\\underline{n} \\leq\n \\norm{\\nabla f(x)} < \\overline{n}$ and $\\underline{m} \\leq\n \\norm{v} < \\overline{m}$, for $\\underline{n}, \\overline{n},\n \\underline{m}, \\overline{m} \\in \\mathbb{R}_{>0}$, then one can upper\n bound the LHS of~\\eqref{eq:a_condition}~by\n \\[\n a (-\\beta_1 \\underline{m}^2 + \\beta_2 \\overline{m} \\, \\overline{n}) -\n \\beta_3 \\underline{m}^2 - \\beta_4 \\underline{n}^2.\n \\]\n If $-\\beta_1 \\underline{m}^2 + \\beta_2 \\overline{m} \\,\n \\overline{n} \\le 0$, any $a \\ge 0$ makes this expression negative.\n If instead $ -\\beta_1 \\underline{m}^2 + \\beta_2 \\overline{m} \\,\n \\overline{n} > 0$, then $a$ must satisfy\n \\begin{align}\\label{eq:adaptive-a}\n a \\leq \\Big| \\frac{\\beta_3 \\underline{m}^2 + \\beta_4\n \\underline{n}^2}{-\\beta_1 \\underline{m}^2 + \\beta_2\n \\overline{m} \\, \\overline{n}} \\Big| .\n \\end{align}\n This argument shows that over the region $R =\n \\setdef{(x,v)}{\\underline{n} \\leq \\norm{\\nabla f(x)} <\n \\overline{n} \\text{ and } \\underline{m} \\leq \\norm{v} <\n \\overline{m}}$, any $a \\ge 0$ satisfying~\\eqref{eq:adaptive-a}\n ensures that $\\dot V \\le -\\frac{\\sqrt{\\mu}}{4} V$, and hence the\n desired exponential decrease of the Lyapunov function. This\n observation opens the way to modifying the value of the parameter $a$\n adaptively along the execution of the heavy-ball dynamics with\n displaced gradient, depending on the region of state space visited\n by its trajectories. } \\relax\\ifmmode\\else\\unskip\\hfill\\fi\\oprocendsymbol\n\\end{remark}\n\n\\subsection{Triggered Design of Variable-Stepsize\n Algorithms}\\label{sec:trigger-design}\n\n\nIn this section we propose a discretization of the continuous\nheavy-ball dynamics based on resource-aware control. To do so, we\nemploy the approaches to trigger design described in\nSection~\\ref{sec:resource-aware} on the dynamics~$X^a_{\\operatorname{hb}}$, whose\nforward-Euler discretization corresponds to the modified zero-order\nhold~\\eqref{eq:forward-euler-a} of the heavy-ball dynamics.\n\nOur starting point is the characterization of the asymptotic\nconvergence properties of~$X^a_{\\operatorname{hb}}$ developed in\nSection~\\ref{sec:hbdg-asymptotic}. The trigger design necessitates\nbounding the evolution of the Lyapunov function $V$\nin~\\eqref{eq:continuous-hb-lyapunov} for the continuous-time\nheavy-ball dynamics with displaced gradient along its zero-order hold\nimplementation. However, this task presents the challenge that the\ndefinition of $V$ involves the minimizer $x_*$ of the optimization\nproblem itself, which is unknown (in fact, finding it is the ultimate\nobjective of the discrete-time algorithm we seek to design). Synthesizing\ncomputable triggers therefore raises the issue of bounding\nthe evolution of $V$ as accurately as possible while avoiding any\nrequirement on the knowledge of~$x_*$. 
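The basic mechanism behind such computable bounds can be illustrated with standard consequences of $\\mu$-strong convexity (we record these inequalities here only for intuition; the precise bounds we employ are those in Proposition~\\ref{prop:upper-bound-derivative}): for $f \\in \\mathcal{S}_{\\mu,L}^1({\\mathbb{R}}^n)$,
\\begin{equation*}
 f(x) - f(x_*) \\leq \\frac{1}{2\\mu}\\norm{\\nabla f(x)}^2, \\qquad \\norm{x - x_*} \\leq \\frac{1}{\\mu}\\norm{\\nabla f(x)},
\\end{equation*}
so the terms of $V$ involving $x_*$ can be over-approximated by quantities that depend only on gradients evaluated at sampled states. 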
The following result, whose\nproof is presented in Appendix~\\ref{app:appendix},\naddresses this point.\n\n\\begin{proposition}\\longthmtitle{Upper bound for derivative-based\n triggering with zero-order hold}\\label{prop:upper-bound-derivative}\n Let $a \\ge 0$ and\n define\n \\begin{align*}\n b^{\\operatorname{d}}_{\\operatorname{ET}}(\\hat{p}, t;a) &= A_{\\operatorname{ET}}(\\hat{p}, t;a) +\n B_{\\operatorname{ET}}(\\hat{p}, t;a) + C_{\\operatorname{ET}}(\\hat{p};a),\n \\\\\n b^{\\operatorname{d}}_{\\operatorname{ST}}(\\hat{p}, t;a) &= B^q_{\\operatorname{ST}}(\\hat{p};a)t^2 +\n (A_{\\operatorname{ST}}(\\hat{p};a) + B^l_{\\operatorname{ST}}(\\hat{p};a))t\n \\\\\n & \\quad + C_{\\operatorname{ST}}(\\hat{p};a),\n \\end{align*}\n where\n \\begin{align*}\n & A_{\\operatorname{ET}}(\\hat{p}, t;a) = 2 \\mu t \\norm{\\hat{v}}^2 +\n \\sqrt{\\mu_s}\\big(\\langle \\nabla f(\\hat{x}+t \\hat{v}) - \\nabla\n f(\\hat{x}), \\hat{v} \\rangle\n \\\\\n &\\quad + 2t \\sqrt{\\mu} \\langle \\nabla f(\\hat{x} + a \\hat{v}),\n \\hat{v} \\rangle + t\\sqrt{\\mu_s} \\norm{\\nabla f(\\hat{x} + a\n \\hat{v})}^2 \\big),\n \\\\\n \n & B_{\\operatorname{ET}}(\\hat{p}, t;a) = \\frac{\\sqrt{\\mu}t^2}{16} \\norm{2\n \\sqrt{\\mu} \\hat{v} + \\sqrt{\\mu_s}\\nabla f(\\hat{x} + a \\hat{v})}^2\n \\\\\n & \\quad - \\frac{t\\mu}{4} \\norm{\\hat{v}}^2\n \n \n + \\frac{\\sqrt{\\mu}\\sqrt{\\mu_s}}{4}\\big(\n f(\\hat{x} + t \\hat{v}) - f(\\hat{x}) +\n \\\\\n &\\quad -t \\langle \\hat{v}, \\nabla f(\\hat{x}+a\n \\hat{v}) \\rangle\n \n \n + \\frac{ t^2 \\sqrt{\\mu_s}}{4}\\norm{\\nabla f(\\hat{x} +\n a \\hat{v})}^2\n \\\\\n &\\quad - \\frac{t \\sqrt{\\mu}}{L} \\norm{\\nabla f(\\hat{x} + a\n \\hat{v})}^2 \n \n \n +t\\sqrt{\\mu} \\langle a\\hat{v} , \\nabla\n f(\\hat{x} + a\\hat{v})\\rangle\\big),\n \\\\\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n & C_{\\operatorname{ET}}(\\hat{p};a) = -\\frac{13\\sqrt{\\mu}}{16}\\norm{\\hat{v}}^2\n -\\frac{\\mu^2\\sqrt{s}}{2}\\displaystyle\\frac{\\norm{\\nabla\n f(\\hat{x})}^2}{L^2}\n \\\\\n & \\quad + \\sqrt{\\mu_s}\\big(\\frac{-3\\sqrt{\\mu}}{8L}\\norm{\\nabla\n f(\\hat{x})}^2\n \\\\\n & \\quad + \\sqrt{\\mu}(f(\\hat x) - f(\\hat{x} + a\\hat{v})) +\n \\sqrt{\\mu}\\norm{\\nabla f(\\hat{x})}\\norm{a \\hat{v}}\n \\\\\n & \\quad -\\frac{\\mu^{3\/2}}{2}\\norm{a\\hat{v}}^2 -\\langle \\nabla\n f(\\hat{x} + a\\hat{v}) - \\nabla f (\\hat{x}) , \\hat{v}\\rangle\n \\\\\n & \\quad + \\sqrt{\\mu}\\langle \\nabla f(\\hat{x} + a\\hat{v}), a\\hat{v}\n \\rangle\\big) ,\n \\\\\n \n \n \n \n \n \n \n \n \n \n \n \n & A_{\\operatorname{ST}}(\\hat{p};a) = 2 \\mu \\norm{\\hat{v}}^2+ \\sqrt{\\mu_s}\n \\big(L\\norm{\\hat{v}}^2 + 2 \\sqrt{\\mu} \\langle \\nabla f(\\hat{x} + a\n \\hat{v}), \\hat{v} \\rangle\n \\\\\n &\\quad + \\sqrt{\\mu_s} \\norm{\\nabla f(\\hat{x} + a \\hat{v})}^2\\big),\n \\\\\n \n & B^l_{\\operatorname{ST}}(\\hat{p};a) = \\frac{\\sqrt{\\mu}}{4}\\big(\n -\\sqrt{\\mu}\\norm{\\hat{v}}^2 + \\sqrt{\\mu_s}( \\langle \\nabla\n f(\\hat{x})- \\nabla f(\\hat{x} + a\\hat{v}),\\hat{v} \\rangle\n \n \\\\\n & \\quad - \\frac{\\sqrt{\\mu}}{L}\\norm{\\nabla\n f(\\hat{x} + a \\hat{v})}^2\n \n \n + \\sqrt{\\mu}\\langle a\\hat{v} , \\nabla\n f(\\hat{x} + a\\hat{v})\\rangle)\\big),\n \\\\\n \n & B^q_{\\operatorname{ST}}(\\hat{p};a) =\n \\frac{\\sqrt{\\mu}}{16}\\norm{2\\sqrt{\\mu}\\hat{v} +\\sqrt{\\mu_s}\\nabla\n f(\\hat{x} + a \\hat{v})}^2\n \\\\\n & \\quad +\n \\frac{\\sqrt{\\mu}\\sqrt{\\mu_s}}{4}\\big(\\frac{L}{2}\\norm{\\hat{v}}^2\n +\\frac{\\sqrt{\\mu_s}}{4}\\norm{\\nabla 
f(\\hat{x} + a \\hat{v})}^2\\big),\n \\\\\n & C_{\\operatorname{ST} }(\\hat{p};a) = C_{\\operatorname{ET}}(\\hat{p};a).\n \\end{align*}\n Let $t\\mapsto p(t) = \\hat{p} + t X^a_{\\operatorname{hb}}(\\hat{p})$ be the trajectory\n of the zero-order hold dynamics $\\dot p= X^a_{\\operatorname{hb}} (\\hat{p})$, $p(0) =\n \\hat{p}$. Then, for $t\\ge 0$,\n \\begin{align*}\n \\frac{d}{dt} V(p(t)) +\\frac{\\sqrt{\\mu}}{4} V({p}(t))\n \n \n \n & \\leq b^{\\operatorname{d}}_{\\operatorname{ET}}(\\hat{p},t;a) \\leq b^{\\operatorname{d}}_{\\operatorname{ST}}(\\hat{p}, t;a) .\n \\end{align*}\n\\end{proposition}\n\nThe importance of Proposition~\\ref{prop:upper-bound-derivative} stems\nfrom the fact that the triggering conditions defined by $b^{\\operatorname{d}}_\\#$,\n$\\# \\in \\{\\operatorname{ET},\\operatorname{ST}\\}$, can be evaluated without knowledge of the\noptimizer~$x_*$. We build on this result next to establish an upper\nbound for the performance-based triggering condition.\n\n\\begin{proposition}\\longthmtitle{Upper bound for performance-based\n triggering with zero-order hold}\\label{prop:upper-bound-performance}\n Let $a \\ge 0$ and\n \\begin{align*}\n b^{\\operatorname{p}}_{\\#}(\\hat{p},t;a) &= \\int_0^te^{\\frac{\\sqrt{\\mu}}{4}\n \\zeta}b^{\\operatorname{d}}_{\\#}(\\hat p,\\zeta;a) d\\zeta ,\n \\end{align*}\n for $\\# \\in \\{\\operatorname{ET},\\operatorname{ST}\\}$. Let $t\\mapsto p(t) = \\hat{p} + t\n X^a_{\\operatorname{hb}}(\\hat{p})$ be the trajectory of the zero-order hold dynamics\n $\\dot p= X^a_{\\operatorname{hb}} (\\hat{p})$, $p(0) = \\hat{p}$. Then, for $t \\ge 0 $,\n \\begin{align*}\n V(p(t)) \\!-\\! e^{-\\frac{\\sqrt{\\mu}}{4} t} V(\\hat p) \\!\\leq\\!\n e^{-\\frac{\\sqrt{\\mu}}{4} t} b^{\\operatorname{p}}_{\\operatorname{ET}} (\\hat{p},t;a) \\!\\leq\\!\n e^{-\\frac{\\sqrt{\\mu}}{4} t} b^{\\operatorname{p}}_{\\operatorname{ST}} (\\hat{p},t;a).\n \\end{align*}\n\\end{proposition}\n\\begin{proof\n We rewrite $ V(p(t)) - e^{-\\frac{\\sqrt{\\mu}}{4} t} V(\\hat p) =\n e^{-\\frac{\\sqrt{\\mu}}{4} t} (e^{\\frac{\\sqrt{\\mu}}{4} t} V(p(t)) -\n V(\\hat p))$, and note that\n \\begin{align*}\n & e^{\\frac{\\sqrt{\\mu}}{4} t} V(p(t)) - V(\\hat p)\n \\\\\n & \\quad = \\int_0^t \\frac{d}{d\\zeta} \\big( e^{\\frac{\\sqrt{\\mu}}{4} \\zeta}V(p(\\zeta))\n - V(\\hat p)\\big) d\\zeta\n \n \n \n \\\\\n & \\quad = \\int_0^te^{\\frac{\\sqrt{\\mu}}{4} \\zeta} \\Big( \\frac{d}{d\\zeta} V(p(\\zeta)) +\n \\frac{\\sqrt{\\mu}}{4} V(p(\\zeta)\\Big) d\\zeta.\n \\end{align*}\n Note that the integrand corresponds to the derivative-based\n criterion bounded in\n Proposition~\\ref{prop:upper-bound-derivative}. Therefore, \n \\begin{align*}\n e^{\\frac{\\sqrt{\\mu}}{4} t} V(p(t)) - V(\\hat p) & \\leq \\int_0^t e^{\\frac{\\sqrt{\\mu}}{4} \\zeta}\n b^{\\operatorname{d}}_{\\operatorname{ET}}(\\hat p,\\zeta;a) d\\zeta\n \\\\\n & = b^{\\operatorname{p}}_{\\operatorname{ET}}(\\hat{p},t;a) \\leq b^{\\operatorname{p}}_{\\operatorname{ST}}(\\hat{p},t;a)\n \\end{align*}\n for $t \\ge 0$, and the result follows.\n \n \n \n \n \n \n \n \n \n \n \n\\end{proof}\n\nPropositions~\\ref{prop:upper-bound-derivative}\nand~\\ref{prop:upper-bound-performance} provide us with the tools to\ndetermine the stepsize according to the derivative- and\nperformance-based triggering criteria, respectively. 
For convenience,\nand following the notation in~\\eqref{eq:step-generic}, we define the\nstepsizes\n\\begin{subequations}\n \\begin{align}\n \\operatorname{step}^{\\operatorname{d}}_{\\#}(\\hat{p};a) & = \\min \\setdef{t >\n 0}{b^{\\operatorname{d}}_{\\#}(\\hat{p},t;a) = 0} ,\n \\label{eq:step_derivative}\n \\\\\n \\operatorname{step}^{\\operatorname{p}}_{\\#}(\\hat{p};a) & = \\min \\setdef{t > 0}{b^{\\operatorname{p}}_{\\#}(\\hat{p},t;a) = 0} ,\n \\label{eq:step_performance}\n \\end{align}\n\\end{subequations}\nfor $\\# \\in \\{\\operatorname{ET},\\operatorname{ST}\\}$. Observe that, as long as $\\hat{p} \\neq p_*\n= [x_*,0]^T$ and $0\\leq a \\leq a^*_1$, we have $C_{\\#}(\\hat{p};a) < 0$\nfor $\\#\\in\\{\\operatorname{ST},\\operatorname{ET}\\}$ and, as a consequence, $\nb^{\\operatorname{d}}_{\\#}(\\hat{p},0;a)< 0$. The ET\/ST terminology is justified by\nthe following observation: in the ET case, the equation defining the\nstepsize is in general implicit in~$t$. Instead, in the ST case, the\nequation defining the stepsize is explicit in~$t$.\nEquipped with this notation, we define the variable-stepsize algorithm\ndescribed in Algorithm~\\ref{algo:DG}, which consists of following\nthe dynamics~\\eqref{eq:forward-euler-a} until the exponential decay of\nthe Lyapunov function is violated, as estimated by the derivative-based\n($\\diamond = {\\operatorname{d}}$) or the performance-based ($\\diamond = {\\operatorname{p}}$) triggering condition.\nWhen this happens, the algorithm re-samples the state before continuing\nto flow along~\\eqref{eq:forward-euler-a}.\n\n\\begin{algorithm}[h]\n \\SetAlgoLined\n %\n \\textbf{Design Choices:} $\\diamond \\in \\{{\\operatorname{d}},{\\operatorname{p}}\\}$, $\\# \\in \\{\\operatorname{ET},\\operatorname{ST}\\}$\n %\n \\textbf{Initialization:} Initial point ($p_0$),\n objective function ($f$), tolerance ($\\epsilon$), $a \\ge 0$, $k=0$\n \\\\\n \\While{$\\norm{\\nabla f(x_k)}\\geq \\epsilon$}{\n \n Compute stepsize $\\Delta_k = \\operatorname{step}^\\diamond_{\\#}(p_k;a)$\n \\\\\n Compute next iterate $p_{k+1} = p_k +\\Delta_k X^a_{\\operatorname{hb}}(p_k)$\n \\\\\n Set $k = k+1$ }\n \\caption{Displaced-Gradient Algorithm}\\label{algo:DG}\n\\end{algorithm}\n\n\\subsection{Convergence Analysis of Displaced-Gradient\n Algorithm}\\label{sec:analysis}\n\nHere we characterize the convergence properties of the derivative- and\nperformance-based implementations of the Displaced-Gradient\nAlgorithm. In each case, we show that the algorithm is implementable\n(i.e., it admits a MIET) and inherits the convergence rate from the\ncontinuous-time dynamics. 
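To make the structure of Algorithm~\\ref{algo:DG} concrete, the following Python sketch illustrates the self-triggered, derivative-based instance ($\\diamond = {\\operatorname{d}}$, $\\# = \\operatorname{ST}$). It is an illustration under stated assumptions rather than the implementation used in our simulations: the helper \\texttt{quad\\_coeffs} is a user-supplied callable returning the coefficients $(B^q_{\\operatorname{ST}}, A_{\\operatorname{ST}} + B^l_{\\operatorname{ST}}, C_{\\operatorname{ST}})$ of Proposition~\\ref{prop:upper-bound-derivative} evaluated at the sampled state, and all function names are hypothetical.
\\begin{verbatim}
import numpy as np

def X_hb(x, v, grad_f, mu, s, a):
    # Heavy-ball vector field with displaced gradient, X^a_hb.
    g = grad_f(x + a * v)
    return v, -2.0 * np.sqrt(mu) * v - (1.0 + np.sqrt(mu * s)) * g

def st_stepsize(Aq, Al, C):
    # Smallest positive root of Aq*t^2 + Al*t + C = 0 (C < 0 away from
    # the equilibrium, so a positive root exists whenever Aq > 0).
    if Aq <= 0.0:
        return -C / Al if Al > 0.0 else np.inf
    disc = Al ** 2 - 4.0 * Aq * C
    return (-Al + np.sqrt(disc)) / (2.0 * Aq)

def displaced_gradient(grad_f, quad_coeffs, x0, v0, mu, s, a=0.0,
                       eps=1e-6, max_iter=10000):
    # Sketch of the Displaced-Gradient Algorithm (self-triggered,
    # derivative-based trigger); quad_coeffs(x, v) -> (Aq, Al, C).
    x, v = np.array(x0, dtype=float), np.array(v0, dtype=float)
    for _ in range(max_iter):
        if np.linalg.norm(grad_f(x)) < eps:
            break
        dt = st_stepsize(*quad_coeffs(x, v))   # variable stepsize Delta_k
        dx, dv = X_hb(x, v, grad_f, mu, s, a)  # zero-order hold direction
        x, v = x + dt * dx, v + dt * dv        # forward-Euler update
    return x, v
\\end{verbatim}
The stepsize computation exploits the fact that $b^{\\operatorname{d}}_{\\operatorname{ST}}(\\hat{p},t;a)$ is a quadratic polynomial in $t$ with negative constant term away from $p_*$, so its smallest positive root is available in closed form. 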
The following result deals with the\nderivative-based implementation of Algorithm~\\ref{algo:DG}.\n\n\\begin{theorem}\\longthmtitle{Convergence of derivative-based\n implementation of Displaced-Gradient \n Algorithm}\\label{non-zeno-hb-db}\n Let $\\hat{\\beta}_1,\\dots,\\hat{\\beta}_5>0$ be\n \\begin{alignat*}{2}\n \\hat{\\beta}_1 & = \\sqrt{\\mu_s}(\\frac{3\\sqrt{\\mu}}{2} + L), &\n \\hat{\\beta}_2 & = \\sqrt{\\mu}\\sqrt{\\mu_s}\\frac{3}{2},\n \\\\\n \\hat{\\beta}_3& = \\frac{13\\sqrt{\\mu}}{16}\n & \\hat{\\beta}_4& = \\frac{4 \\mu^{2}\\sqrt{s}+3 L \\sqrt{\\mu}\n \\sqrt{\\mu_s}}{8 L^2},\n \\\\\n \\hat{\\beta}_5& = \\sqrt{\\mu_s}\\big(\\frac{5\\sqrt{\\mu}L}{2} -\n \\frac{\\mu^{3\/2}}{2}\\big), &&\n \\end{alignat*}\n and define\n \\begin{align}\\label{eq:a2}\n a^*_2 = \\alpha \\min \\Big\\{\\frac{-\\hat \\beta_1 + \\sqrt{\\hat\n \\beta_1^2 + 4 \\hat \\beta_5\\hat \\beta_3}}{2\\hat\n \\beta_5},\\frac{\\hat{\\beta}_4}{\\hat{\\beta}_2} \\Big\\} ,\n \\end{align}\n with $0<\\alpha<1$. Then, for $0 \\leq a \\leq a^*_2$, $\\diamond =\n {\\operatorname{d}}$, and $\\# \\in \\{\\operatorname{ET},\\operatorname{ST}\\}$, the variable-stepsize strategy in\n Algorithm~\\ref{algo:DG} has the following properties\n \\begin{enumerate}\n \\item[(i)] the stepsize is uniformly lower bounded by the positive\n constant $\\operatorname{MIET}(a)$, where\n \\begin{equation}\\label{g-definition}\n \\operatorname{MIET}(a)= -\\nu + \\sqrt{\\nu^2 + \\eta}, \n \\end{equation}\n $\\eta= \\min\\{\\eta_1,\\eta_2\\}$, $\\nu = \\max\\{\\nu_1,\\nu_2\\}$, and\n \\begin{align*}\n \\eta_1 &=\\frac{8 a \\sqrt{\\mu_s} \\left(a (\\mu -5 L)-\\frac{2\n L}{\\sqrt{\\mu }}-3\\right)+13}{2 \\sqrt{\\mu_s} L \\left(3 a^2\n \\sqrt{\\mu_s} L+1\\right)+8 \\mu } ,\n \\\\\n \n \n \n \n \n \n \\eta_2 &=-\\frac{3 \\sqrt{\\mu_s} \\sqrt{\\mu } L (4 a L-1)-4 \\mu ^2\n \\sqrt{s}}{3 \\mu_s \\sqrt{\\mu } L^2},\n \\\\\n \\nu_1 &=\\frac{\\mu \\left(2 a^3 \\sqrt{\\mu_s} L^2+a\n \\sqrt{\\mu_s}+16\\right)+8 \\sqrt{\\mu_s} L \\left(2 a^2\n \\sqrt{\\mu_s} L+1\\right)}{2 \\sqrt{\\mu } \\left(\\sqrt{\\mu_s} L\n \\left(3 a^2 \\sqrt{\\mu_s} L+1\\right)+4 \\mu \\right)}\n \\\\\n &\\quad+ \\frac{\\sqrt{\\mu_s} (a L (8 a L+1)+4)}{\n \\sqrt{\\mu_s} L \\left(3 a^2 \\sqrt{\\mu_s}\n L+1\\right)+4 \\mu},\n \n \n \n \n \n \n \n \n \n \\\\\n \\nu_2 &=\\frac{a \\mu +8 \\sqrt{\\mu_s}+8 \\sqrt{\\mu}}{3 \\sqrt{\\mu_s}\n \\sqrt{\\mu}};\n \\end{align*}\n \\item[(ii)] $ \\frac{d}{dt} V(p_{k}+t X^a_{\\operatorname{hb}} (p_k)) \\leq\n -\\frac{\\sqrt{\\mu}}{4} V(p_k+t X^a_{\\operatorname{hb}} (p_k))$ for all $t \\in\n [0,\\Delta_k]$ and $k \\in \\{0\\} \\cup \\mathbb{N}$.\n \\end{enumerate}\n As a consequence, $ f(x_{k+1})-f(x_*) =\n \\mathcal{O}(e^{-\\frac{\\sqrt{\\mu}}{4}\\sum_{i=0}^k \\Delta_i})$.\n\\end{theorem}\n\\begin{proof\n Regarding fact (i), we prove the result for the $\\operatorname{ST}$-case, as the\n $\\operatorname{ET}$-case follows from $\\operatorname{step}^{\\operatorname{d}}_{\\operatorname{ET}}(\\hat{p};a) \\geq\n \\operatorname{step}^{\\operatorname{d}}_{\\operatorname{ST}}(\\hat{p}; a)$.\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n We start by upper bounding $C_{\\operatorname{ST}}(\\hat{p};a)$ by a negative\n quadratic function of $\\norm{\\hat{v}}$ and $\\norm{\\nabla\n f(\\hat{x})}$ as follows,\n \\begin{align*}\n & C_{\\operatorname{ST}}(\\hat{p};a) = -\\frac{13\\sqrt{\\mu}}{16}\\norm{\\hat{v}}^2 +\n \\sqrt{\\mu_s}\\frac{-3\\sqrt{\\mu}}{8L}\\norm{\\nabla\n f(\\hat{x})}^2\n \\\\\n & \\quad 
-\\frac{\\mu^2\\sqrt{s}}{2L^2}\\displaystyle\\norm{\\nabla\n f(\\hat{x})}^2\n +\\sqrt{\\mu_s}\\big(\\sqrt{\\mu}\\underbrace{(f(\\hat x) - f(\\hat{x} +\n a\\hat{v}))}_{\\textrm{(a)}}\n \\\\\n & \\quad + \\sqrt{\\mu}\\underbrace{\\norm{\\nabla f(\\hat{x})}\\norm{a\n \\hat{v}}}_{\\textrm{(b)}} -\\frac{\\mu^{3\/2}}{2}\\norm{a\\hat{v}}^2\n \\\\\n & \\quad +\\underbrace{\\langle \\nabla f(\\hat{x}) - \\nabla f (\\hat{x}\n + a\\hat{v} ) , \\hat{v}\\rangle}_{\\textrm{(c)}} +\n \\sqrt{\\mu}\\underbrace{\\langle \\nabla f(\\hat{x} + a\\hat{v}),\n a\\hat{v} \\rangle}_{\\textrm{(d)}}\\big).\n \\end{align*}\n Using the $L$-Lipschitzness of the gradient and Young's inequality,\n we can easily upper bound\n \\begin{align*}\n \\textrm{(a)} & \\le \\underbrace{\\langle \\nabla f(\\hat x + a \\hat\n v), - a \\hat v \\rangle + \\frac{L}{2}a^2\\norm{\\hat v}^2\n }_{\\textrm{Using~\\eqref{eq:aux-d}}}\n \n \n \n \n \\\\\n & = \\langle \\nabla f(\\hat x + a \\hat v) - \\nabla f(\\hat x), - a\n \\hat v \\rangle + \\frac{L}{2}a^2\\norm{\\hat v}^2\n \\\\\n & \\quad + \\langle \\nabla f(\\hat x), - a \\hat v \\rangle\n \n \n \n \\\\\n & \\leq La^2\\norm{\\hat v}^2 + \\frac{L}{2}a^2\\norm{\\hat v}^2 + a\n \\big(\\frac{\\norm{\\nabla f(\\hat x)}^2}{2} + \\frac{\\norm{\\hat\n v}^2}{2} \\big)\n \\\\\n \n \n \n & = \\frac{3La^2 + a}{2}\\norm{\\hat v}^2 + \\frac{a}{2} \\norm{\\nabla\n f(\\hat x)}^2,\n \\\\\n \\textrm{(b)} & \\le a \\big (\\frac{\\norm{\\nabla f(\\hat x)}^2}{2} +\n \\frac{\\norm{\\hat v}^2}{2} \\big),\n \n \n \\\\\n \\textrm{(c)} & \\leq L a \\norm{\\hat v}^2,\n \\\\\n \\textrm{(d)} & = \\langle \\nabla f(\\hat{x} + a\\hat{v}) - \\nabla\n f(\\hat x) + \\nabla f(\\hat x), a\\hat{v} \\rangle\n \\\\\n & \\leq L a^2\\norm{\\hat v}^2 + \\langle \\nabla f(\\hat x), a \\hat v\n \\rangle\n \n \n \n \\\\\n & = \\frac{2La^2 + a}{2}\\norm{\\hat v}^2 + \\frac{a}{2}\\norm{\\nabla\n f(\\hat z)}^2.\n \\end{align*}\n Note that, with the definition of the constants\n $\\hat{\\beta}_1,\\dots,\\hat{\\beta}_5>0$ in the statement, we can write\n \n \n \n \n \n \n \n \n \n \n \n \n \\begin{align*}\n C_{\\operatorname{ST}}(\\hat p;a)& \\leq a \\hat{\\beta}_1\\norm{\\hat{v}}^2 +\n a^2\\hat{\\beta}_5\\norm{\\hat v}^2 + a\\hat{\\beta}_2\\norm{\\nabla\n f(\\hat x)}^2\n \\\\\n & \\quad -\\hat{\\beta}_3\\norm{\\hat{v}}^2 -\\hat{\\beta}_4\\norm{\\nabla\n f(\\hat{x})}^2 .\n \\end{align*}\n \n \n \n \n \n \n \n \n \n \n \n \n Therefore, for $a \\in [0,a^*_2]$, we have\n \\begin{align*}\n a \\hat{\\beta}_1 + a^2\\hat \\beta_5 - \\hat{\\beta}_3 & \\le a^*_2\n \\hat{\\beta}_1 + (a^*_2)^2\\hat \\beta_5 - \\hat{\\beta}_3 = -\\gamma_1\n < 0\n \\\\\n a \\hat{\\beta}_2 - \\hat{\\beta}_4 & \\le a^*_2 \\hat{\\beta}_2 -\n \\hat{\\beta}_4 = -\\gamma_2< 0,\n \\end{align*}\n and hence $C_{\\operatorname{ST}}(\\hat{p};a)\\leq - \\gamma_1\\norm{\\hat{v}}^2 -\n \\gamma_2\\norm{\\nabla f(\\hat{x})}^2$. 
Similarly, introducing\n \\begin{align*}\n \\gamma_3 &= 2 a^2 \\mu_s L^2+2 a^2 \\sqrt{\\mu_s} \\sqrt{\\mu }\n L^2+\\sqrt{\\mu_s} \\sqrt{\\mu }+\\sqrt{\\mu_s} L +2 \\mu,\n \\\\\n \\gamma_4 & = 2 \\mu_s + 2 \\sqrt{\\mu_s} \\sqrt{\\mu}, \\; \\gamma_5 \\!=\\!\n \\frac{1}{8} a \\sqrt{\\mu_s} \\left(2 a^2 \\mu L^2+\\mu +2 \\sqrt{\\mu }\n L\\right),\n \\\\\n \\gamma_6 & = \\frac{a\\mu\\sqrt{\\mu_s}}{4}, \\; \\gamma_7 = \\frac{3}{8}\n a^2 \\mu_s \\sqrt{\\mu } L^2+\\frac{1}{8} \\sqrt{\\mu_s} \\sqrt{\\mu }\n L+\\frac{\\mu ^{3\/2}}{2},\n \\\\\n \\gamma_8 & = \\frac{3 \\mu_s\\sqrt{\\mu}}{8},\n \\end{align*}\n one can show that\n \\begin{align*}\n A_{\\operatorname{ST}}(\\hat p;a) \\le \\hat{A}_{\\operatorname{ST}}(\\hat p;a) &= \\gamma_3\n \\norm{\\hat v}^2 + \\gamma_4 \\norm{\\nabla f(\\hat x)}^2,\n \\\\\n B^l_{\\operatorname{ST}}(\\hat p;a) \\le \\hat{B}^l_{\\operatorname{ST}}(\\hat p;a) & = \\gamma_5\n \\norm{\\hat v}^2 + \\gamma_6 \\norm{\\nabla f(\\hat x)}^2,\n \\\\\n B^q_{\\operatorname{ST}}(\\hat p;a) \\le \\hat{B}^q_{\\operatorname{ST}}(\\hat p;a) & =\\gamma_7\n \\norm{\\hat v}^2 + \\gamma_8 \\norm{\\nabla f(\\hat x)}^2.\n \\end{align*}\n Thus, from~\\eqref{eq:step_derivative}, we have\n \\begin{align}\\label{eq:step_derivative_explicit}\n & \\operatorname{step}^{\\operatorname{d}}_{\\operatorname{ST}}(\\hat{p};a) \\geq \\frac{-(\\hat A_{\\operatorname{ST}}(\\hat{p};a) +\n \\hat B^l_{\\operatorname{ST}}(\\hat{p};a))}{2 \\hat B^q_{\\operatorname{ST}}(\\hat{p};a)}\n \\\\\n &\\quad + \\sqrt{\\left(\\frac{\\hat A_{\\operatorname{ST}}(\\hat{p};a) +\n \\hat B^l_{\\operatorname{ST}}(\\hat{p};a)}{ 2 \\hat B^q_{\\operatorname{ST}}(\\hat{p};a) }\\right)^2\n -\\frac{C_{\\operatorname{ST}}(\\hat{p};a)}{\\hat B^q_{\\operatorname{ST}}(\\hat{p};a)}}. \\notag\n \\end{align}\n\n\n\n\n\n\n\n\n\n\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n Using now \\cite[supplementary material, Lemma~1]{MV-JC:19-nips}, we\n deduce\n \\begin{align*}\n \\eta \\leq \n \\frac{-C_{\\operatorname{ST}}(\\hat{p};a)}{\\hat B^q_{\\operatorname{ST}}(\\hat{p};a)} ,\n \n \n \n \\quad\n \n \n \n \n \n \\frac{\\hat A_{\\operatorname{ST}}(\\hat{p};a) + \\hat B^l_{\\operatorname{ST}}(\\hat{p};a)}{2\\hat\n B^q_{\\operatorname{ST}}(\\hat{p};a)} \\leq \\nu,\n \\end{align*}\n where\n \\begin{align*}\n \\eta &=\n \\min\\{\\frac{\\gamma_1}{\\gamma_7},\\frac{\\gamma_2}{\\gamma_8}\\}, \\quad\n \\nu = \\max\\{\\frac{\\gamma_3 + \\gamma_5}{2\\gamma_7},\\frac{\\gamma_4 +\n \\gamma_6}{2\\gamma_8}\\}.\n \\end{align*}\n With these elements in place and referring\n to~\\eqref{eq:step_derivative_explicit}, we have\n \\begin{align*}\n \\operatorname{step}^{\\operatorname{d}}_{\\operatorname{ST}}(\\hat{p};a) & \\geq \\frac{-(\\hat A_{\\operatorname{ST}}(\\hat{p};a) +\n \\hat B^l_{\\operatorname{ST}}(\\hat{p};a))}{2\\hat B^q_{\\operatorname{ST}}(\\hat{p};a)}\n \\\\\n & \\quad + \\sqrt{\\left(\\frac{\\hat A_{\\operatorname{ST}}(\\hat{p};a) +\n \\hat B^l_{\\operatorname{ST}}(\\hat{p};a)}{ 2\\hat B^q_{\\operatorname{ST}}(\\hat{p};a) }\\right)^2 +\n \\eta}.\n \\end{align*}\n We observe now that $z \\mapsto g(z) = -z + \\sqrt{z^2 + \\eta} $ is\n monotonically decreasing and lower bounded. So, if $z$ is upper\n bounded, then $g(z)$ is lower bounded by a positive\n constant. Taking $z = \\frac{(\\hat A_{\\operatorname{ST}}(\\hat{p};a) +\n \\hat B^l_{\\operatorname{ST}}(\\hat{p};a))}{2\\hat B^q_{\\operatorname{ST}}(\\hat{p};a)} \\leq \\nu$ gives\n the bound of the stepsize. 
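More explicitly, monotonicity of $g$ yields $\\operatorname{step}^{\\operatorname{d}}_{\\operatorname{ST}}(\\hat{p};a) \\geq g(z) \\geq g(\\nu) = -\\nu + \\sqrt{\\nu^2+\\eta}$ uniformly in $\\hat{p}$, which is precisely the constant $\\operatorname{MIET}(a)$ in~\\eqref{g-definition}, establishing fact (i).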
Finally, the algorithm design together\n with Proposition~\\ref{prop:upper-bound-derivative} ensure fact\n (ii) throughout its evolution.\n\\end{proof}\n\nIt is worth noticing that the derivative-based implementation of the\nDisplaced-Gradient Algorithm generalizes the algorithm proposed in our\nprevious work~\\cite{MV-JC:19-nips} (in fact, the strategy proposed\nthere corresponds to the choice $a=0$). The next result characterizes\nthe convergence properties of the performance-based implementation of\nAlgorithm~\\ref{algo:DG}.\n\n\\begin{theorem}\\longthmtitle{Convergence \n of performance-based implementation of Displaced-Gradient\n Algorithm}\\label{non-zeno-hb-pb} \n For $0 \\leq a \\leq a^*_2$, $\\diamond = {\\operatorname{p}}$, and $\\# \\in\n \\{\\operatorname{ET},\\operatorname{ST}\\}$, the variable-stepsize strategy in\n Algorithm~\\ref{algo:DG} has the following properties\n \\begin{enumerate}\n \\item[(i)] the stepsize is uniformly lower bounded by the positive\n constant $\\operatorname{MIET}(a)$;\n \\item[(ii)] $ V(p_{k }+t X^a_{\\operatorname{hb}} (p_k)) \\leq e^{-\\frac{\\sqrt{\\mu}}{4}\n t} V(p_k)$ for all $t \\in [0,\\Delta_k]$ and $k \\in \\{0\\} \\cup\n \\mathbb{N}$.\n \\end{enumerate}\n As a consequence, $ f(x_{k+1})-f(x_*) =\n \\mathcal{O}(e^{-\\frac{\\sqrt{\\mu}}{4}\\sum_{i=0}^k \\Delta_i})$.\n\\end{theorem}\n\\begin{proof}\n To show (i), notice that it is sufficient to prove that\n $\\operatorname{step}^{\\operatorname{p}}_{\\operatorname{ST}}$ is uniformly lower bounded away from zero. This is\n because of the definition of stepsize in~\\eqref{eq:step_performance}\n and the fact that $b^{\\operatorname{p}}_{\\operatorname{ET}} (\\hat p,t;a) \\le b^{\\operatorname{p}}_{\\operatorname{ST}} (\\hat\n p,t;a)$ for all $\\hat p$ and all $t$. For an arbitrary fixed $\\hat\n p$, note that $t \\mapsto b^{\\operatorname{d}}_{\\operatorname{ST}}(\\hat p,t;a)$ is strictly\n negative in the interval $[0,\\operatorname{step}^{\\operatorname{d}}_{\\operatorname{ST}}(p;a))$ given the\n definition of stepsize in~\\eqref{eq:step_derivative}. Consequently,\n the function $t \\mapsto b^{\\operatorname{p}}_{\\operatorname{ST}} (\\hat p,t;a)=\\int_0^t\n e^{\\frac{\\sqrt{\\mu}}{4} \\zeta} b^{\\operatorname{d}}_{\\operatorname{ST}}(\\hat p;\\zeta,a) d\\zeta$\n is strictly negative over $(0,\\operatorname{step}^{\\operatorname{d}}_{\\operatorname{ST}}(\\hat p;a))$. From the\n definition of $\\operatorname{step}^{\\operatorname{p}}_{\\operatorname{ST}}$, it then follows that\n $\\operatorname{step}^{\\operatorname{p}}_{\\operatorname{ST}}(\\hat p;a)\\geq \\operatorname{step}^{\\operatorname{d}}_{\\operatorname{ST}}(\\hat p;a)$. The result now\n follows by noting that $ \\operatorname{step}^{\\operatorname{d}}_{\\operatorname{ST}}$ is uniformly lower bounded\n away from zero by a positive constant,\n cf. 
Theorem~\\ref{non-zeno-hb-db}(i).\n\n To show (ii), we recall that $\\Delta_k = \\operatorname{step}^{\\operatorname{p}}_{\\#}(p_k;a)$ for $\\#\n \\in \\{\\operatorname{ET},\\operatorname{ST}\\}$ and use\n Proposition~\\ref{prop:upper-bound-performance} for $\\hat p = p_k$ to\n obtain, for all $t \\in [0,\\Delta_k]$,\n \\begin{align*}\n V(p(t)) - e^{-\\frac{\\sqrt{\\mu}}{4} t} V(p_k) & \\leq e^{-\\frac{\\sqrt{\\mu}}{4} t} b^{\\operatorname{p}}_{\\#}\n (p_k,t;a)\n \\\\\n & \\le e^{-\\frac{\\sqrt{\\mu}}{4} t} b^{\\operatorname{p}}_{\\#} (p_k,\\Delta_k;a) = 0 ,\n \\end{align*}\n as claimed.\n\\end{proof}\n\nThe proof of Theorem~\\ref{non-zeno-hb-pb} brings up an interesting\ngeometric interpretation of the relationship between the stepsizes\ndetermined according to the derivative- and performance-based\napproaches. In fact, since\n\\begin{align*}\n \\frac{d}{dt} b^{\\operatorname{p}}_{\\#}(\\hat p,t;a) = e^{\\frac{\\sqrt{\\mu}}{4} t}\n b^{\\operatorname{d}}_{\\#} (\\hat p,t;a) ,\n\\end{align*}\nwe observe that $\\operatorname{step}^{\\operatorname{d}}_{\\#}(\\hat{p};a)$ is precisely the (positive)\ncritical point of $t \\mapsto b^{\\operatorname{p}}_{\\#}(\\hat p,t;a)$. Therefore,\n$\\operatorname{step}^{\\operatorname{p}}_{\\operatorname{ST}}(\\hat p;a)$ is the smallest nonzero root of $t \\mapsto\nb^{\\operatorname{p}}_{\\#}(\\hat p,t;a)$, whereas $\\operatorname{step}^{\\operatorname{d}}_{\\operatorname{ST}}(\\hat p;a)$ is the time\nwhere $t \\mapsto b^{\\operatorname{p}}_{\\#}(\\hat p,t;a)$ achieves its smallest value,\nand consequently is furthest away from zero. This confirms the fact\nthat the performance-based approach obtains larger stepsizes than the\nderivative-based approach.\n\n\n\n\\section{Exploiting Sampled Information to Enhance Algorithm\n Performance}\\label{sec:exploit-sampled-info}\n\nHere we describe two different refinements of the implementations\nproposed in Section~\\ref{sec:performance-based} to further enhance\ntheir performance. Both of them are based on further exploiting the\nsampled information about the system. The first refinement,\ncf. Section~\\ref{sec:adaptive}, looks at the possibility of adapting\nthe value of the gradient displacement as the algorithm is\nexecuted. The second refinement, cf. Section~\\ref{sec:HOH}, develops a\nhigh-order hold that more accurately approximates the evolution of the\ncontinuous-time heavy-ball dynamics with displaced gradient.\n\n\\subsection{Adaptive Gradient Displacement}\\label{sec:adaptive}\n\nThe derivative- and performance-based triggered implementations \nin Section~\\ref{sec:trigger-design} both employ a constant value of\nthe parameter~$a$. Here, motivated by the observation made in\nRemark~\\ref{rem:a-over-region}, we develop triggered implementations\nthat adaptively adjust the value of the gradient displacement\ndepending on the region of the space to which the state belongs.\nRather than relying on the condition~\\eqref{eq:adaptive-a}, which\nwould require partitioning the state space based on bounds on~$\\nabla\nf(x)$ and~$v$, we seek to compute on the fly a value of the\nparameter~$a$ that ensures the exponential decrease of the Lyapunov\nfunction at the current state. 
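Informally, at each iterate the displacement parameter is shrunk by the decrease rate $r_d$ until the certificate $C_{\\#}(p_k;a)<0$ holds and the corresponding stepsize clears a prescribed lower bound $\\tau$, and it is enlarged by the increase rate $r_i$ for the next iterate whenever no shrinking was needed. A schematic sketch of one such adaptation step is given below (an informal illustration only; the callables stand for the triggering quantities $C_{\\#}$ and $\\operatorname{step}^\\diamond_{\\#}$ defined earlier, and the parameter names follow Algorithm~\\ref{algo:ADG}).\n\\begin{verbatim}\n# Schematic sketch of one adaptive iteration (informal version of\n# Algorithm ADG). C_trigger(p, a) and step_size(p, a) stand for the\n# triggering quantities C_#(p;a) and step_#(p;a) defined above;\n# 0 < r_d < 1, r_i > 1 and tau > 0 as in Algorithm ADG.\ndef adaptive_iteration(p, a, C_trigger, step_size, r_d, r_i, tau):\n    increase = True\n    while True:\n        while C_trigger(p, a) >= 0:   # enforce the decrease certificate\n            a *= r_d\n            increase = False\n        if step_size(p, a) >= tau:    # stepsize clears the floor: accept\n            break\n        a *= r_d                      # otherwise shrink a and re-check\n        increase = False\n    delta = step_size(p, a)           # accepted stepsize Delta_k\n    a_next = a * r_i if increase else a\n    return delta, a, a_next\n\\end{verbatim}\n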
Formally, the strategy is stated in\nAlgorithm~\\ref{algo:ADG}.\n\n\\begin{algorithm}[h]\n \\SetAlgoLined\n \n \\textbf{Design Choices:} $\\diamond \\in \\{{\\operatorname{d}},{\\operatorname{p}}\\}$, $\\# \\in \\{\\operatorname{ET},\\operatorname{ST}\\}$\n \n \\textbf{Initialization:} Initial point ($p_0$),\n \n objective function ($f$), tolerance ($\\epsilon$), increase rate\n ($r_i > 1$), decrease rate ($0 < r_d < 1$), stepsize lower bound\n ($\\tau$), $a\\ge 0$, $k=0$\n \\\\\n \n \\While{$\\norm{\\nabla f(x_k)}\\geq \\epsilon$}{\n increase = True\n\n exit = False\n\n \\While{ {\\rm exit} = {\\rm False}}{\n \\While{$C_{\\#}(p_k;a) \\geq 0$ }{\n \n $a = a r_d$\n %\n\n increase = False\n \n }\n \\uIf{$\\operatorname{step}^\\diamond_{\\#}(p_k;a) \\geq \\tau$}{\n exit = True\n %\n }\n \\uElse{\n $a = a r_d$\n \n\n increase = False\n }\n }\n \n Compute stepsize $\\Delta_k = \\operatorname{step}^\\diamond_{\\#}(p_k;a)$\n \\\\\n \n Compute next iterate $p_{k+1} = p_k +\\Delta_k X_{\\operatorname{hb}}^a(p_k)$\n \\\\\n \n Set $k = k+1$\n\n \\uIf{\\rm increase = True}{ $a = a r_i$ }\n\n }\n \\caption{Adaptive Displaced-Gradient Algorithm}\\label{algo:ADG}\n\\end{algorithm}\n\n\\begin{proposition}\\longthmtitle{Convergence of Adaptive Displaced-Gradient\n Algorithm}\\label{prop:non-zeno-sampled-algorithm}\n For\n $\\diamond \\in \\{{\\operatorname{d}},{\\operatorname{p}}\\}$, $\\# \\in \\{\\operatorname{ET},\\operatorname{ST}\\}$, and $\\tau \\le\n \\min_{a\\in [0,a^*_2]} \\operatorname{MIET}(a)$, the variable-stepsize strategy in\n Algorithm~\\ref{algo:ADG} has the following properties:\n \\begin{enumerate}\n \\item[(i)] it is executable (i.e., at each iteration, the parameter\n $a$ is determined in a finite number of steps);\n \\item[(ii)] the stepsize is uniformly lower bounded by~$\\tau$;\n \\item[(iii)] it satisfies $ f(x_{k+1})\\!-\\!f(x_*) \\!=\\!\n \\mathcal{O}(e^{-\\frac{\\sqrt{\\mu}}{4}\\sum_{i=0}^k \\Delta_i} )$, for\n $k \\in \\{0\\} \\cup \\mathbb{N}$.\n \\end{enumerate}\n \\end{proposition}\n\\begin{proof}\n Notice first that the function~$a \\mapsto \\operatorname{MIET}(a)>0$ defined\n in~\\eqref{g-definition} is continuous and therefore attains its\n minimum over a compact set. At each iteration,\n Algorithm~\\ref{algo:ADG} first ensures that $C_{\\#}(\\hat{p};a)< 0 $,\n decreasing $a$ if this is not the case. We know this process is\n guaranteed as soon as $a < a_2^*$ (cf. proof of\n Theorem~\\ref{non-zeno-hb-db}) and hence only takes a finite number\n of steps. Once $C_{\\#}(\\hat{p};a)< 0 $, the stepsize could be\n computed to guarantee the desired decrease of the Lyapunov function\n $V$. The algorithm next checks if the stepsize is lower bounded\n by~$\\tau$. If that is not the case, then the algorithm reduces $a$\n and re-checks if $C_{\\#}(\\hat{p};a) < 0$. With this process and in\n a finite number of steps, the algorithm eventually either computes a\n stepsize lower bounded by~$\\tau$ with $a > a^*_2$ or $a$ decreases\n enough to make $a \\leq a^*_2$, for which we know that the stepsize\n is already lower bounded by~$\\tau$. These arguments establish facts\n (i) and (ii) at the same time. 
Finally, fact (iii) is a consequence\n of the prescribed decreased of the Lyapunov function along the\n algorithm execution.\n\\end{proof}\n\n\\subsection{Discretization via High-Order Hold}\\label{sec:HOH}\n\nThe modified zero-order hold based on employing displaced gradients\ndeveloped in Section~\\ref{sec:performance-based} is an example of the\npossibilities enabled by more elaborate uses of sampled\ninformation. In this section, we propose another such use based on the\nobservation that the continuous-time heavy-ball dynamics can be\ndecomposed as the sum of a linear term and a nonlinear\nterm. Specifically, we have\n\\begin{align*}\n X_{\\operatorname{hb}}^a (p) & =\n \\begin{bmatrix}\n v\n \\\\\n - 2\\sqrt{\\mu}v\n \\end{bmatrix}\n + \n \\begin{bmatrix}\n 0\n \\\\\n - \\sqrt{\\mu_s}\\nabla f(x + av)\n \\end{bmatrix} .\n\\end{align*}\nNote that the first term in this decomposition is linear, whereas the\nother one contains the potentially nonlinear gradient term that\ncomplicates finding a closed-form solution. Keeping this in mind when\nconsidering a discrete-time implementation, it would seem reasonable\nto perform a zero-order hold only on the nonlinear term while exactly\nintegrating the resulting differential equation. Formally, a\nzero-order hold at $\\hat{p}=[\\hat{x},\\hat{v}]$ of the nonlinear term\nabove yields a system of the form\n\\begin{align}\\label{eq:linear-inhomo}\n \\begin{bmatrix}\n \\dot x\n \\\\\n \\dot v\n \\end{bmatrix}\n & =\n A\n \\begin{bmatrix}\n x\n \\\\\n v\n \\end{bmatrix}\n + b ,\n\\end{align}\nwith $p(0) = \\hat p$, and where\n\\begin{align*}\n A=\n\\begin{bmatrix}\n 0 & 1\n \\\\\n 0 & -2\\sqrt{\\mu}\n\\end{bmatrix},\n\\quad b = \\begin{bmatrix}\n 0\n \\\\\n - \\sqrt{\\mu_s}\\nabla f(\\hat x + a\\hat v)\n \\end{bmatrix} .\n\\end{align*}\nEquation~\\eqref{eq:linear-inhomo} is an in-homogeneous linear\ndynamical system, which is integrable by the method of variation of\nconstants~\\cite{LP:00}. Its solution is given by $ p(t) = e^{At}\n\\big(\\int_0^t e^{-A\\zeta} b d\\zeta + p(0) \\big) $, or equivalently,\n\\begin{subequations}\\label{eq:hoh-flow}\n \\begin{align}\n x(t) & = \\hat{x} -\\frac{\\sqrt{\\mu_s}\\nabla f(\\hat{x} + a \\hat{v})t }{ 2\n \\sqrt{\\mu} } \n \\\\\n & \\quad + (1-e^{-2 \\sqrt{\\mu } t}) \\frac{\\sqrt{\\mu_s}\\nabla f(\\hat{x} +\n a \\hat{v})+2 \\sqrt{\\mu } \\hat{v}}{4\\mu} , \\notag\n \n \n \n \n \n \n \n \\\\\n v(t) & = e^{-2 \\sqrt{\\mu } t}\\hat{v} + (e^{-2 \\sqrt{\\mu } t} -1)\n \\frac{\\sqrt{\\mu_s}\\nabla f(\\hat{x} + a \\hat{v}) }{2 \\sqrt{\\mu\n }} .\n \\end{align}\n\\end{subequations}\nWe refer to this trajectory as a \\emph{high-order-hold integrator}.\nIn order to develop a discrete-time algorithm based on this type of\nintegrator, the next result provides a bound of the evolution of the\nLyapunov function $V$ along the high-order-hold integrator\ntrajectories. 
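Before stating the result, we note that the update~\\eqref{eq:hoh-flow} is straightforward to evaluate numerically. The following minimal sketch (an illustration only, assuming NumPy and a gradient oracle whose name is of our choosing) computes one high-order-hold step from $\\hat{p}=[\\hat{x},\\hat{v}]$.\n\\begin{verbatim}\nimport numpy as np\n\ndef hoh_step(x_hat, v_hat, t, a, mu, mu_s, grad_f):\n    # Hold the nonlinear gradient term fixed at the displaced point\n    # x_hat + a*v_hat and integrate the remaining linear dynamics\n    # exactly, cf. eq. (hoh-flow).\n    g = grad_f(x_hat + a * v_hat)\n    sqmu = np.sqrt(mu)\n    decay = np.exp(-2.0 * sqmu * t)\n    x_t = (x_hat\n           - np.sqrt(mu_s) * g * t / (2.0 * sqmu)\n           + (1.0 - decay) * (np.sqrt(mu_s) * g + 2.0 * sqmu * v_hat) / (4.0 * mu))\n    v_t = decay * v_hat + (decay - 1.0) * np.sqrt(mu_s) * g / (2.0 * sqmu)\n    return x_t, v_t\n\\end{verbatim}\n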
The proof is presented in\nAppendix~\\ref{app:appendix}.\n\n\\begin{proposition}\\longthmtitle{Upper bound for derivative-based\n triggering with high-order\n hold}\\label{prop:upper-bound-derivative-hoh}\n Let $a\\ge 0$\n and define\n \\begin{align*}\n \\mathfrak{b}^{\\operatorname{d}}_{\\operatorname{ET}}(\\hat{p}, t;a) &= \\mathfrak{A}_{\\operatorname{ET}}(\\hat{p}, t;a) +\n \\mathfrak{B}_{\\operatorname{ET}}(\\hat{p}, t;a)\n \\\\\n & \\quad + \\mathfrak{C}_{\\operatorname{ET}}(\\hat{p};a) +\\mathfrak{D}_{\\operatorname{ET}}(\\hat{p},t;a),\n \\\\\n \\mathfrak{b}^{\\operatorname{d}}_{\\operatorname{ST}}(\\hat{p}, t;a) & = (\\mathfrak{A}_{\\operatorname{ST}}^q(\\hat{p};a) + \\mathfrak{B}^q_{\\operatorname{ST}}\n (\\hat{p};a) ) t^2 + (\\mathfrak{A}_{\\operatorname{ST}}^l(\\hat{p};a)\n \\\\\n & \\quad + \\mathfrak{B}^l_{\\operatorname{ST}}(\\hat{p};a) + \\mathfrak{D}_{\\operatorname{ST}} (\\hat{p};a) )t\n +\\mathfrak{C}_{\\operatorname{ST}}(\\hat{p};a) ,\n \\end{align*}\n where\n \\begin{align*}\n & \\mathfrak{A}_{\\operatorname{ET}}(\\hat p,t;a) =\\sqrt{\\mu_s}(\\langle \\nabla f(x(t)) -\n \\nabla f(\\hat{x}),v(t) \\rangle\n \\\\\n & \\quad - \\langle v(t) - \\hat{v}, \\nabla f(\\hat{x} + a\\hat{v})\n \\rangle\n \\\\\n & \\quad - \\sqrt{\\mu}\\langle x(t) - \\hat{x}, \\nabla f(\\hat{x} +\n a\\hat{v}) \\rangle)\n \\\\\n &\\quad -\\sqrt{\\mu}\\langle v(t) - \\hat{v},v(t)\\rangle,\n \\\\\n \n & \\mathfrak{B}_{\\operatorname{ET}}(\\hat p,t;a) = \\frac{\\sqrt{\\mu}}{4}\\big(\\sqrt{\\mu_s}\n (f(x(t)) - f(\\hat{x}))\n \\\\\n &\\quad - \\sqrt{\\mu}\\sqrt{\\mu_s} t \\frac{\\norm{\\nabla f(\\hat{x} +\n a\\hat{v})}^2}{L}\n \\\\\n &\\quad + \\sqrt{\\mu}\\sqrt{\\mu_s} t \\langle \\nabla f(\\hat{x} + a\n \\hat{v}), a \\hat{v}\\rangle + \\frac{1}{4} (\\norm{v(t)}^2 -\n \\norm{\\hat{v}}^2)\n \\\\\n &\\quad + \\frac{1}{4} \\norm{v(t)-\\hat{v} + 2 \\sqrt{\\mu}\n (x(t)-\\hat{x})}^2\n \\\\\n & \\quad + \\frac{1}{2}\\langle v(t)-\\hat{v} + 2 \\sqrt{\\mu}\n (x(t)-\\hat{x}), \\hat{v} \\rangle \\big),\n \\\\\n \n & \\mathfrak{C}_{\\operatorname{ET}}(\\hat p;a) = C_{\\operatorname{ET}}(\\hat{p};a) ,\n \\\\\n & \\mathfrak{D}_{\\operatorname{ET}} (\\hat p,t;a) = \\sqrt{\\mu_s}\\langle \\nabla f(\\hat{x}),\n v(t) - \\hat{v}\\rangle\n \\\\\n & \\quad -\\sqrt{\\mu}\\langle \\hat{v} , v(t) - \\hat{v} \\rangle ,\n \\end{align*}\n and \n \\begin{align*}\n & \\mathfrak{A}_{\\operatorname{ST}}^l(\\hat p;a) = \\norm{2 \\sqrt{\\mu}\\hat v\n +\\sqrt{\\mu_s}\\nabla f(\\hat x + a \\hat v) } \\Big( \\sqrt{\\mu}\n \\norm{\\hat v}\n \\\\\n & \\quad + \\frac{L\\sqrt{\\mu_s}}{2\\sqrt{\\mu}} \\norm{\\hat v} +\n \\frac{3\\sqrt{\\mu_s}}{2} \\norm{\\nabla f(\\hat x + a \\hat v)} \\Big)\n \\\\\n & \\quad + \\frac{\\mu_s}{2} \\norm{\\nabla f(\\hat x + a \\hat\n v)} \\Big( \\frac{L}{\\sqrt{\\mu}} \\norm{\\hat v} + \\norm{\\nabla\n f(\\hat x + a \\hat v)} \\Big) ,\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \\\\\n \n & \\mathfrak{A}_{\\operatorname{ST}}^q(\\hat p;a) = \\norm{2 \\sqrt{\\mu}\\hat v +\n \\sqrt{\\mu_s}\\nabla f(\\hat x + a \\hat v) }\n \\\\\n & \\quad \\cdot{}\\Big( \\big( \\frac{L\\sqrt{\\mu_s}}{2\\sqrt{\\mu}} +\n \\sqrt\\mu \\big)\\norm{2 \\sqrt{\\mu}\\hat v + \\sqrt{\\mu_s}\\nabla f(\\hat\n x + a \\hat v) }\n \\\\\n & \\quad + \\frac{L\\mu_s}{2\\sqrt{\\mu}} \\norm{\\nabla f(\\hat x\n + a \\hat v)} \\Big) ,\n \n \n \n \n \n \n \n \n \n \n \\\\\n \n & \\mathfrak{B}^l_{\\operatorname{ST}}(\\hat p;a) = \\frac{\\sqrt{\\mu} \\sqrt{\\mu_s}}{4} \\Big(\n \\frac{\\sqrt{\\mu_s}}{2\\sqrt{\\mu}}\\norm{\\nabla f(\\hat x + a \\hat\n 
v)}\\norm{\\nabla f(\\hat x)}\n \\\\\n & \\quad + \\frac{1}{2} \\norm{2 \\sqrt{\\mu}\\hat v \\!+\\!\n \\sqrt{\\mu_s}\\nabla f(\\hat x + a \\hat v) } \\big(\n \\frac{\\norm{\\nabla f(\\hat x)}}{\\sqrt{\\mu}} \\!+\\! \\frac{\\norm{\\hat\n v}}{\\sqrt{\\mu_s}}\\big)\n \\\\\n &\\quad - \\sqrt{\\mu} \\frac{\\norm{\\nabla f(\\hat{x} + a\n \\hat{v})}^2}{L} + (a\\sqrt{\\mu} - \\frac{1}{2}) \\langle \\nabla\n f(\\hat{x} + a \\hat{v}), \\hat{v} \\rangle \\Big),\n \\\\\n \n & \\mathfrak{B}^q_{\\operatorname{ST}}(\\hat p;a) =\n \n \\frac{10 \\mu ^2+L^2 \\sqrt{\\mu_s}}{32 \\mu ^{3\/2}}\n\\\\\n& \\quad \\cdot{}\\norm{2 \\sqrt{\\mu}\\hat v + \\sqrt{\\mu_s}\\nabla f(\\hat x + a \\hat v) }^2\n \\\\\n & \\quad \n \n \n \\frac{\\mu_s \\left(4 \\mu ^2+L^2 \\sqrt{\\mu_s}\\right)}{32 \\mu\n ^{3\/2}} \\norm{\\nabla f(\\hat x + a \\hat v)}^2\n \\\\\n & \\quad+\\frac{\\sqrt{\\mu_s} \\left(4 \\mu ^2+L^2\n \\sqrt{\\mu_s}\\right)}{16 \\mu ^{3\/2}}\\norm{2 \\sqrt{\\mu}\\hat v +\n \\sqrt{\\mu_s}\\nabla f(\\hat x + a \\hat v)}\n \\\\\n & \\quad \\cdot{}\\norm{\\nabla f(\\hat x + a \\hat v)}),\n \\\\\n \n & \\mathfrak{C}_{\\operatorname{ST}}(\\hat p;a) = C_{\\operatorname{ST}}(\\hat p;a) ,\n \n \\\\\n & \\mathfrak{D}_{\\operatorname{ST}}(\\hat p;a) = \\norm{2 \\sqrt{\\mu} \\hat v +\n \\sqrt{\\mu_s}\\nabla f(\\hat{x} + a \\hat{v})} \\cdot\n \\\\\n & \\quad \\Big( \\sqrt{\\mu_s}\\norm{\\nabla f(\\hat x)} +\n \\sqrt{\\mu}\\norm{\\hat v} \\Big).\n \\end{align*}\n Let $t\\mapsto p(t)$ be the high-order-hold integrator\n trajectory~\\eqref{eq:hoh-flow} from $p(0) = \\hat{p}$. Then, for\n $t\\ge 0$,\n \\begin{align*}\n \\frac{d}{dt} V(p(t)) +\\frac{\\sqrt{\\mu}}{4} V({p}(t))\n \n \n \n & \\leq \\mathfrak{b}^{\\operatorname{d}}_{\\operatorname{ET}}(\\hat{p},t;a) \\leq \\mathfrak{b}^{\\operatorname{d}}_{\\operatorname{ST}}(\\hat{p}, t;a) .\n \\end{align*}\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\\end{proposition}\n\nAnalogously to what we did in Section~\\ref{sec:trigger-design}, we\nbuild on this result to establish an upper bound for the\nperformance-based triggering condition with the high-order-hold\nintegrator.\n\n\\begin{proposition}\\longthmtitle{Upper bound for performance-based\n triggering with high-order\n hold}\\label{prop:upper-bound-performance-hoh}\n Let $0 \\leq a$ and\n \\begin{align}\\label{eq:bpb}\n \\mathfrak{b}^{\\operatorname{p}}_{\\#}(\\hat{p},t;a) &= \\int_0^te^{\\frac{\\sqrt{\\mu}}{4} \\zeta}\\mathfrak{b}^{\\operatorname{d}}_{\\#}(\\hat\n p,\\zeta;a) d\\zeta ,\n \\end{align}\n for $\\# \\in \\{\\operatorname{ET},\\operatorname{ST}\\}$. Let $t\\mapsto p(t)$ be the\n high-order-hold integrator trajectory~\\eqref{eq:hoh-flow} from $p(0)\n = \\hat{p}$. Then, for $t\\ge 0$,\n \\begin{align*}\n V(p(t)) \\!-\\! e^{-\\frac{\\sqrt{\\mu}}{4} t} V(\\hat p) \\!\\leq\\!\n e^{-\\frac{\\sqrt{\\mu}}{4} t} \\mathfrak{b}^{\\operatorname{p}}_{\\operatorname{ET}} (\\hat{p},t;a) \\!\\leq\\!\n e^{-\\frac{\\sqrt{\\mu}}{4} t} \\mathfrak{b}^{\\operatorname{p}}_{\\operatorname{ST}} (\\hat{p},t;a).\n \\end{align*}\n\\end{proposition}\n\nUsing Proposition~\\ref{prop:upper-bound-derivative-hoh}, the proof of\nthis result is analogous to that of\nProposition~\\ref{prop:upper-bound-performance}, and we omit it for\nspace reasons. Propositions~\\ref{prop:upper-bound-derivative-hoh}\nand~\\ref{prop:upper-bound-performance-hoh} are all we need to fully\nspecify the variable-stepsize algorithm based on high-order-hold\nintegrators. 
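Note that $\\mathfrak{b}^{\\operatorname{d}}_{\\operatorname{ST}}(\\hat{p},t;a)$ is a quadratic polynomial in $t$, so the smallest positive zero invoked in the stepsize definition below is available in closed form, whereas the performance-based counterpart requires evaluating the integral in~\\eqref{eq:bpb}.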
Formally, we set\n\\begin{align}\n \\frak{step}^\\diamond_{\\#}(\\hat{p};a) = \\min \\setdef{t >\n 0}{\\mathfrak{b}^\\diamond_{\\#}(\\hat{p},t;a) = 0} ,\n\\end{align}\nfor $\\diamond \\in \\{{\\operatorname{d}},{\\operatorname{p}} \\}$ and $\\# \\in \\{\\operatorname{ET},\\operatorname{ST}\\}$. With this in place,\nwe design Algorithm~\\ref{algo:HOH}, which is a higher-order\ncounterpart to Algorithm~\\ref{algo:ADG}, and whose convergence\nproperties are characterized in the following result.\n\n\\begin{algorithm}[h]\n \\SetAlgoLined\n %\n\n\n\\textbf{Design Choices:} $\\diamond \\in \\{{\\operatorname{d}},{\\operatorname{p}}\\}$, $\\# \\in \\{\\operatorname{ET},\\operatorname{ST}\\}$\n \\textbf{Initialization:} Initial point ($p_0$),\n objective function ($f$), tolerance ($\\epsilon$), increase rate\n ($r_i > 1$), decrease rate ($0 0.1$. The relative occurrence is shown in the last row.}\n\\label{fig:pt}\n\\end{figure*}\n\n\n\n\\subsection{Crystal diffusion variational autoencoder}\n\nThe CDVAE combines a variational autoencoder\\cite{kingma2013autoencoder} and a diffusion model to generate new periodic materials. The crystal is represented by a tuple consisting of the atomic number of the $N$ atoms, their respective coordinates, and the unit cell basis vectors. CDVAE consist of three networks: the encoder, a property predictor, and the decoder. The encoder is a SE(3) equivariant periodic graph neural network (PGNN), which encodes the material onto a lower dimensional latent space from which the property predictor predicts the number of atoms $N$, the lattice vectors, and the composition, which is the fraction present of each element. The decoder is a noise conditional score network diffusion model\\cite{song2019generative} that takes a structure with noise added to the atom types and coordinates and learns to denoise it into the original stable structure. Here the score is an estimate of the gradient of the underlying probability distribution of the materials and is predicted by another SE(3) equivariant PGNN. The three networks are trained concurrently.\n\nNew materials can be generated after training by using the property predictor to sample the latent space. A unit cell with the predicted basis vectors is then initialised with the predicted atoms placed at random positions. Using the decoder, the atom types and coordinates of the initial random unit cell are then gradually denoised into a material that is similar to the data distribution of the training data. CDVAE utilizes that adding noise to a stable material will likely decrease its stability and, thus, by learning to denoise the noisy stable structure, the decoder learns to increase the stability of the structure. Therefore CDVAE should be trained only on stable materials.\nAn in-depth description of CDVAE can be found in Xie \\textit{et al.} \\cite{xie2021crystal}. \n\n\nThe set of materials used as training data for the CDVAE and seed structures for the lattice decoration protocol (LDP), respectively, consists of 2615 unique 2D materials from the C2DB\\cite{haastrup2018computational,gjerding2021recent}. As our aim is to discover new stable materials we limited the initial set of materials to the subset of C2DB with energy above the convex hull $\\Delta H_{\\mathrm{hull}}< \\SI{0.3}{eV\/atom}$. This was done because both the CDVAE (LDP) are more likely to generate stable materials when trained on (seeded by) stable materials. 
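In practice this selection amounts to a simple query on the database. A hypothetical sketch is shown below; it assumes the C2DB is available as an ASE database that exposes the energy above the convex hull under a key named ehull, which may differ in the released version.\n\\begin{verbatim}\n# Hypothetical selection of seed/training structures: keep only\n# materials within 0.3 eV/atom of the convex hull. The file name and\n# the key name 'ehull' are assumptions about the database schema.\nfrom ase.db import connect\n\nrows = connect('c2db.db').select('ehull<0.3')\nstructures = [row.toatoms() for row in rows]\n\\end{verbatim}\n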
We did not exclude dynamically unstable materials.\n\nAfter training the CDVAE model, 10.000 structures were generated of which 1106 failed CDVAE's basic validity check (charge neutrality and bond lengths above 0.5 \u00c5). Of the remaining 8894 structures, 3891 are duplicate structures which are sorted out (see section \\ref{sec: Method} for more details) and the rest are relaxed using DFT.\n\n\n\\subsection{Lattice decoration protocol}\n\nThe lattice decoration protocol (LDP) substitutes the atoms in the seed structures by atoms of similar chemical nature. As a measure of chemical similarity we use the probability matrix $P_{AB}$ introduced by Glawe \\textit{et al.}\\cite{glawe2016optimal}, which describes the likelihood that a stable material containing a chemical element $A$ remains stable after the substitution $A\\to B$. Glawe \\textit{et al.} constructed this probability matrix based on an analysis of materials in the Inorganic Crystal Structure Database\\cite{belsky2002new}. We choose a substitution probability of 10 \\% ($P_{AB}>0.1$), which generates the substitutions shown in Fig. \\ref{fig:pt}. Based on these substitution relations, we perform all possible single and double substitutions for all seed structures. For example, the seed structure MoS$_2$ generates six MX$_2$ structures with M=Mo,W and X=O, S, Se (the seed structure itself included). The total set of resulting materials are analysed for structures that share the same reduced formula and space group. Such structures are considered as duplicate structures and are filtered out. After removal of duplicates, we are left with 14192 unique 2D crystals (the seed structures excluded) which are relaxed using DFT. \n\n\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=1\\linewidth]{figures\/hist_combined.pdf}\n\\caption{Histogram of the heat of formation and energy above convex hull for the DFT-relaxed structures resulting from the CDVAE (a, b) and LDP (c, d) methods. The inset shows the energy above convex hull with respect to the number of unique elements in the structure.}\n\\label{fig:hist_thermo}\n\\end{figure*}\n\n\\subsection{Workflow}\nOur workflow is illustrated in Fig. \\ref{fig:workflow}. Starting with the initial set of 2D materials, we generate two new sets of crystal structures using CDVAE and LDP, respectively. Duplicate structures within each set are removed (see section \\ref{sec: Method} for more details). The now unique crystal structures are relaxed using DFT calculations employing the PBE xc-functional (see section \\ref{sec: Method} for more details). After the relaxation, any new duplicate structures are removed again and as are materials that have relaxed into non 2D structures (we refer to Ref. \\cite{gjerding2021recent} for details on the dimensionality analysis). Finally the heat of formation, $\\Delta H$, and the energy above convex hull, $\\Delta H_{\\mathrm{hull}}$, are calculated. \n\n\n\\begin{table}[]\n\\begin{tabularx}{0.45 \\textwidth}{ \n >{\\raggedright\\arraybackslash\\hsize=.6\\hsize}X \n \n >{\\centering\\arraybackslash\\hsize=.2\\hsize}X \n >{\\raggedleft\\arraybackslash\\hsize=.2\\hsize}X }\n\\hline\n & LDP & CDVAE \\\\ \\hline\nSuccess rate & 82 \\% & 69 \\% \\\\\nAvg. number of steps & 40.1 & 55.5\\\\\nAvg. 
energy decrease $\\left[ \\si{eV\/atom} \\right]$ & 0.62 & 0.51 \\\\ \\hline\n\\end{tabularx}\n\\caption{Summary statistics for the DFT relaxation of the two methods for generating initial structures.}\n \\label{tab:statistics}\n\\end{table}\n\n\n\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=1\\linewidth]{figures\/kde_ehull.pdf}\n \\caption{Kernel density estimate showing the distribution of the convex hull energies for the stable and unstable CDVAE generated dataset as well as their training data.}\n \\label{fig:kde}\n\\end{figure}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=1\\linewidth]{figures\/cdvae_stable_exambles.png}\n\\caption{(a-g) Examples of CDVAE generated materials with negative convex hull energies. (h, i) Examples of CDVAE generated stable materials with the new discovered combination of stoichiometry ABC$_2$D$_2$, space group number 25 and occupied Wyckoff positions a,b,c,d.}\n\\label{fig:cdvae_examples}\n\\end{figure*}\n\nIn Table \\ref{tab:statistics} we report the success rates for the DFT relaxations of the structures generated by CDVAE and LDP, respectively, together with the average number of relaxation steps and the average energy decrease from the initial to the relaxed structure. All three parameters are assumed to describe \nhow close the initial structures are to the final DFT relaxed structures - e.g. a structure from a perfect generative method would only need one relaxation step and the energy decrease would be zero. As expected, neither LDP or CDVAE generate stable relaxed structures. However, while the LDP on average requires less steps to relax, the CDVAE structures are closer in energy to the relaxed structure. The fact that the number of relaxation steps and reduction in energy upon relaxation is comparable for LDP and CDVAE, suggest that the CDVAE-generated crystals are as close to relaxed structures as the LPD-generated structures. \n\nWe observe that the DFT relaxation fails for about 18 \\% of the LDP-generated and about 31 \\% of the CDVAE-generated structures. The vast majority of these failures are due to problems in converging the Kohn-Sham SCF cycle. We suspect that a large fraction of the convergence problems occur for materials with magnetic ground state (all calculations are performed with spin polarisation). This is supported by the fact that 30 \\% of the materials containing one or more of the magnetic 3d-metals (V, Cr, Mn, Fe, Co, Ni), fails due to convergence errors, while this is only happens for 10 \\% of other materials. \n\n\n\\subsection{Thermodynamic stability}\n\nA histogram of the heat of formation and the energy above the convex hull for the (DFT-relaxed) structures resulting from the CDVAE and LDP are shown in Fig. \\ref{fig:hist_thermo}. The distributions of both $\\Delta H$ and $\\Delta H_{\\mathrm{hull}}$ obtained for the two structure generation methods are remarkably similar. For example, 73.8 \\% of the CDVAE materials have $\\Delta H_{\\text{hull}}$ below $\\SI{0.3}{eV\/atom}$ (as the training data) while this is the case for 74.0 \\% of the LDP materials. It should, however, be noted that the smaller success rate of the DFT relaxation of the CDVAE generated materials could influence these statistics as it likely that many of the structures which could not be converged would have resulted in unstable structures. The inset of Fig. \\ref{fig:hist_thermo} shows how the energy above the convex hull is distributed depending on the number of different elements in the structure. 
First of all it is evident that CDVAE is able to create structures with a larger number of unique elements than is present in the training data (5 unique elements is the maximum in the seed structures), while LDP is limited to the stoichiometries present in the seed materials. However, generally the thermodynamic stability is lower for the materials with larger number of unique elements. Examples of some of the most stable CDVAE generated structures is shown in Fig. \\ref{fig:cdvae_examples}. The material Zr$_2$CCl$_2$ shown in c) is one of the 22 materials which are found both by the CDVAE and LDP method.\n\nTo predict whether a given 2D material can be synthesized is a complex problem that involves many factors. Often the size of $\\Delta H_{\\mathrm{hull}}$ is used a soft criterion for synthesizability as it determines the material's thermodynamic stability relative to other competing phases (this criterion neglects growth kinetics and substrate interactions both of which can be important for 2D materials). A previous study of 700 polymorphs in 41 common inorganic bulk material systems showed that a threshold of $\\Delta H_{\\mathrm{hull}}<0.1$ eV\/atom will exclude 26\\% of the known synthesized polymorphs\\cite{aykol2018thermodynamic}. We also note that the T-phase of MoS$_2$ was synthesised both as a monolayer\\cite{kappera2014phase} and a layered bulk\\cite{bell1957preparation}, despite having $\\Delta H_{\\mathrm{hull}}=0.18$ eV\/atom\\cite{urlc2db}. These examples demonstrate that many of the predicted 2D materials with $\\Delta H_{\\mathrm{hull}}<50$ meV\/atom (2004) or even $\\Delta H_{\\mathrm{hull}}<100$ meV\/atom (3400), are likely to be synthesizable. \n\nWhile the $\\Delta H_{\\mathrm{hull}}$-distributions in Fig. \\ref{fig:hist_thermo} are clearly peaked close to zero they also have a tail of less stable materials. In particular, about 26 \\% of the materials have $\\Delta H_{\\text{hull}}>\\SI{0.3}{eV\/atom}$ (the threshold to select the training structures). A natural question to ask is then to what extent the structures produced by the CDVAE are in fact biased towards high stability structures? To answer this question, we trained a CDVAE model on 988 2D materials with a $\\Delta H_{\\text{hull}}>\\SI{0.4}{eV\/atom}$ and used it to generate another 10.000 structures from which we randomly selected 1000 non-duplicate structures, which we relaxed following the same workflow as described before. The distribution of the energy above the convex hull of the relaxed structures for both the stable and unstable CDVAE models are shown in Fig. \\ref{fig:kde} together with the distribution of their respective training sets. We clearly see that the CDVAE model trained to generate unstable materials produces structures that are significantly further from the convex hull than the stable model. This illustrates that CDVAE successfully learns the chemistry of the materials in the training data.\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=1\\linewidth]{figures\/Structure_combined_no_TSNE.pdf}\n\\caption{Relative frequency of the stoichiometry, space group number and occupied Wyckoff positions for each of the data set.}\n\\label{fig:structure}\n\\end{figure*}\n\n\n\\subsection{Structural diversity}\n\nHaving established the capability of the CDVAE to produce materials with good stability properties, we now turn to its ability to generate crystals of high chemical and structural diversity. 
While the LDP is restricted to stoichiometries and crystal structures already present in the seed structures, the CDVAE (in principle) has no such limitations. Fig. \\ref{fig:pt} shows the relative occurrence of each element in the seed\/training structures. The corresponding plots for the materials generated by LDP and CDVAE (after relaxation) are shown in Fig. 1 in the Supplemental Information. Both LDP and CDVAE produces diverse compositions with elements covering most of the periodic table. However, CDVAE has a significantly higher occurrence of oxygen and chalcogens (S and Se) as well as halogens (Cl, Br and I). This trend is also present for the materials prior to relaxation and, thus does not originate from a potential higher DFT convergence rate for these elements. Instead, the six elements are also more prevalent, albeit slightly, in the seed structures which could indicate an overfitting of the model.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=1\\linewidth]{figures\/TSNE_compare_annotated.pdf}\n\\caption{t-SNE embedding of the structure\nrepresented as a individual one-hot encoding of the stoichiometry, space group, and occupied Wyckoff positions. Selected clusters are highlighted as 'stoichiometry'-'space group'-'Wyckoff position'. 'X' corresponds to an arbitrary stoichiometry.}\n\\label{fig:tSNE}\n\\end{figure}\n\n\n\nThe CDVAE generates significantly different chemical compositions and crystal structures as compared to the seed structures and those generated by the LDP. Fig. \\ref{fig:structure} shows the relative frequencies of stoichiometry, space group number and occupied Wyckoff positions, respectively. Only the most common classes of the seed structures are shown. We find 239 unique stoichiometries among the CDVAE-generated materials, while there is only 87 and 103 unique stoichiometries in the seed structures and LDP-generated structures, respectively. The higher number of unique stoichiometries in the LDP-generated structures than in the seed structures is due to new stoichiometries being created when two different elements are substituted by the same element, or when an element is being substituted with an element already present in the seed material. For example, the seed materials Te$_2$Cu$_4$O$_{12}$ (stoichiometry AB$_2$C$_6$) becomes Cu$_4$S$_{14}$ (stoichiometry A$_2$B$_7$) under the double substitution O$\\to$S and Te$\\to$S.\nThe significantly larger number of unique stoichiometries generated by CDVAE compared to the LDP shows that the former is able to produce new classes of structures that are not present in the training data.\n\nThe CDVAE tends to generate rather complex, low-symmetry structures, which is illustrated by the large fraction of materials with space group number 1. Moreover, the average number of different elements in the unit cell is 4.0 for the CDVAE generated materials while it is only 2.6 for the C2DB seed structures. The larger number of different elements is part of the reason for the higher fraction of materials with low symmetry. This tendency of CDVAE to generate structures with more complex composition is also noted by Xie \\textit{et al.}, who attributes this to a non-Gaussian distribution of the underlying structure of the materials. Thus, when CDVAE generates new materials it samples from a Gaussian distribution $\\mathcal{N}(0,1)$ from which it predicts the number of atoms and composition. However if $\\mathcal{N}(0,1)$ is not representative of the latent space, out of distribution materials can be generated. 
For materials discovery this could, however, be advantageous as this makes CDVAE able to generate new crystal types which are not present in the training data.\n\nTo give a global overview of the structural distribution of the three data sets, a t-SNE embedding is shown in Fig. \\ref{fig:tSNE}. The t-SNE analysis is made for 2500 materials sampled randomly from each data set. Here the structure is represented as a tuple given by the space group, occupied Wyckoff positions, and stoichiometry, where each is one-hot encoded before the t-SNE embedding. We see that most of the training data form clear clusters, which represent the most common stoichiometries, space group and Wyckoff positions. The LDP generated materials mostly follow the same pattern as the seed structures. However, the CDVAE generated structures are more spread out, which is partly due to the large variation in their stoichiometries, while a few clusters appear due to the large fraction of low symmetry materials with space group number 1. One noteworthy example is the cluster of CDVAE generated materials with stoichiometry ABC$_2$D$_2$, space group number 25 and occupied Wyckoff positions a,b,c,d. For this specific combination, CDVAE discovered 123 new materials of which 30 lies within 50 meV of the convex hull, while there is no examples of such materials in the training set nor in the LDP generated structures. Two of the most stable discovered materials of this type can be seen in Fig. \\ref{fig:cdvae_examples} (h, i). The new class of structures have broken out-of-plane symmetry either due to the outermost atoms (b) or the innermost atoms (a). The fact that the CDVAE is able to generate new classes of stable materials, which are not present in the training data, is very promising and a clear advantage of deep generative models compared to lattice decoration protocols.\n\n\\section{Conclusions}\nIn conclusion, we have successfully employed a deep generative model in combination with a systematic lattice decoration protocol (LDP) to generate more than 8500 unique 2D crystals with formation energies ($\\Delta H$) within 0.3 eV\/atom of the convex hull. Out of these, more than 2000 have $\\Delta H$ within 50 meV\/atom of the convex hull, and could potentially be synthesized. This represents at least a doubling of the known stable 2D materials. \n\nIn addition to the very significant expansion of the known space of 2D materials, our work provides a quantitative assessment of the crystal diffusion variational autoencoder (CDVAE)\\cite{xie2021crystal}, and establishes its excellent performance with respect to the two key criteria: ability to learn the stability properties of the training structures, and ability to generate crystals with high chemical and structural diversity. In fact, only 25\\% of the generated materials had $\\Delta H_{\\mathrm{hull}}$ above the 0.3 eV\/atom threshold used to select the training structures, and the stoichiometries of the generated materials span 239 types versus 87 present in the training structures. Generally, the crystal structures generated by CDVAE have higher complexity and lower symmetries than the training structures. We found the method of lattice decoration to be complementary to the CDVAE generator with the two methods yielding only 22 common crystals out of the 11630 structures generated in total. While the LDP is limited to the structural blueprint of the seed materials, CDVAE is able to generate new classes of materials, which are not present in the initial data set. 
This is promising for an autonomous materials discovery method as it adds new genes to pool of trial materials and thus goes beyond the lattice decoration paradigm. \n\nThe fact that CDVAE is comparable to lattice decoration (with substitution by chemically similar elements) in terms of stability while producing new and diverse crystal structures, is a testimony to the prospect of using deep generative models in materials discovery. \n\nAll the structures are available in the C2DB database\\cite{urlc2db}, and their basic properties will also be made available as the execution of the C2DB property workflow proceeds.\n\n\n\n\\section{Method} \\label{sec: Method}\n\n\\subsection{Workflow}\nTo set up and manage the workflow we use the Atomic Simulation Recipes\\cite{gjerding2021atomic}, which has implemented tools for DFT relaxation, duplicate removal, dimensionality check, and for calculating the thermodynamic properties. The DFT calculations are performed using the GPAW code \\cite{Mortensen2005real} with the PBE xc-functional, a plane wave cut-off energy of $ \\SI{800}{eV}$ and a \\textit{k}-point density of at least $\\SI{4}{\u00c5}$. The relaxation is stopped when the maximum force is below $ \\SI{0.01}{eV\/\u00c5}$ and the maximum stress is below $ \\SI{0.002}{eV\/\u00c5^3}$.\n\nThe duplicate removal recipe finds duplicate structures using the root mean square distance (RMSD) between the structures which is calculated using the Python library pymatgen\\cite{ong2013Python}. We consider structures to be duplicate if RMSD$<0.3$ \u00c5 and only keep the structure with the lowest heat of formation. See Ref. \\cite{gjerding2021recent} for more information. For the initial LDP generated materials (before the DFT relaxation) a more crude duplicate sorting of the structures is employed, where two materials with the same reduced formula and space group are considered duplicates.\\\\\n\n\nTo determine the convex hull we use as reference databases the C2DB as well as a database of reference structures comprising 9590 elementary, binary, and ternary crystals that all lie within 20 meV of the convex hull in the Open Quantum Materials Database (OQMD)\\cite{saal2013materials}. These reference structures were relaxed using the VASP\\cite{kresse1993ab} code at the PBE level (PBE+U for selected transition metal oxides) as part of the OQMD project. Since we use the GPAW code to relax and evaluate the energy of the 2D materials, we have re-calculated the total energy of the reference structures (without re-optimisation) using the GPAW code. \n\n\\subsection{CDVAE}\nCDVAE is designed to generate 3D bulk crystals, where the unit cell is periodic in all three directions. This introduces a problem when generating 2D materials, which are non-periodic in one direction. We solve this issue by introducing an artificial periodicity in the non-periodic direction with a lattice vector which is an order of magnitude larger than those in the periodic directions. This ensures that the graph networks only connect atoms in the 2D layer and thus CDVAE learns to generate 2D materials. When training the model, we used 70 \\% of the materials in the training set, while 15 \\% was used for validation and 15 \\% for test. We used the same hyperparameters as employed by Xie \\textit{et al.} for their MP-20 data set. See Ref. \\cite{xie2021crystal} for more information.\n\n\\section{Acknowledgements}\nWe thank Morten N. Gjerding for assistance with setting up the lattice decoration protocol. 
\nWe acknowledge funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program Grant No. 773122 (LIMA) and Grant agreement No. 951786 (NOMAD CoE). K. S. T. is a Villum Investigator supported by VILLUM FONDEN (grant no. 37789).\n\n\\section{Author contributions}\nP.L. and K.S.T. developed the initial concept. P.L. ran the generative models, the DFT simulations and performed the data analysis. K.S.T. supervised the project and aided with the interpretation of the results. P.L. and K.S.T wrote and discussed the paper together.\n\n\\section{Competing interests}\nThe authors declare no competing interests.\n\n\\section{Data availability}\nAll the discovered crystal structures and their properties will be available as a part of C2DB (\\url{https:\/\/cmr.fysik.dtu.dk\/c2db\/c2db.html})\n\n\n\\section{Periodic table heat map}\n\\vspace{-0.7cm}\n\n\n\n\n\n\n\\begin{figure}[H]\n\\centering\n\\begin{subfigure}[b]{0.99\\textwidth}\n \\includegraphics[width=1\\linewidth]{figures\/Periodic_table_CDVA.pdf}\n \\caption{}\n \\label{fig:pt_CDVAE}\n\\end{subfigure}\n\n\\begin{subfigure}[b]{0.99\\textwidth}\n \\includegraphics[width=1\\linewidth]{figures\/Periodic_table_LDP.pdf}\n \\caption{}\n \\label{fig:pt_decorated}\n\\end{subfigure}\n\n\\caption{Heat map of the relative occurrence of each element in the CVAE and LDP generated and relaxed data set. The bottom number is the relative occurrence.}\n\\label{fig:pt}\n\\end{figure}\n\n\n\\vspace{-0.2cm}\n\n\n\\section{Acknowledgements}\n\\vspace{-0.2cm}\nWe acknowledge funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program Grant No. 773122 (LIMA) and Grant agreement No. 951786 (NOMAD CoE). K. S. T. is a Villum Investigator supported by VILLUM FONDEN (grant no. 37789).\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Proof for Theorem \\ref{thm:hsequence}}\n\\label{app:thm:hsequence}\n\n\\textbf{Theorem \\ref{thm:hsequence}: }\n\\textit{\nFor any $\\hsic_0$, there exists a set of bandwidths $\\sigma_l$ and a \\KS $\\{\\fm_{l^{\\circ}}\\}_{l=1}^L$ parameterized by $W_l = W_s$ in \\eq{eq:trivial_W} such that: }\n\\begin{enumerate}[topsep=0pt, partopsep=0pt, label=\\Roman*.]\n \\item \n $\\hsic_L$ can approach arbitrarily close to $\\hsic^*$ such that for any $L>1$ and $\\delta>0$ we can achieve\n \\begin{equation}\n \\hsic^{*} - \\hsic_L \\le \\delta,\n \\label{arb_close_append}\n \\end{equation} \n \\item \n as $L \\rightarrow \\infty$, the \\RS converges to the global optimum where\n \\begin{equation}\n \\lim_{L \\rightarrow \\infty} \\hsic_L = \\hsic^*,\n \\end{equation} \n \\item \n the convergence is strictly monotonic where \n \\begin{equation}\n \\hsic_{l} > \\hsic_{l-1} \\quad \\forall l \\ge 1.\n \\end{equation} \n\\end{enumerate}\n\n\\begin{lemma}\n\\label{app:lemma:lowerbound}\nGiven $\\sigma_0$ and $\\sigma_1$ as the $\\sigma$ values from the last layer and the current layer, then there exists a lower bound for $\\hsic_l$, denoted as $\\lb\\Lsigma$ such that \n\n\n\\begin{equation}\n \\hsic_l \\ge \\lb\\Lsigma.\n\\end{equation}\n\\end{lemma}\n\n\\textbf{Basic Background, Assumptions, and Notations. }\n\\begin{enumerate}\n \\item\n The simulation of this theorem for Adversarial and Random data is also publicly available on \\url{https:\/\/github.com\/anonymous}.\n \\item Here we show that this bound can be established given the last 2 layers. 
\n \\item $\\sigma_0$ is the $\\sigma$ value of the previous layer\n \\item $\\sigma_1$ is the $\\sigma$ value of the current layer\n \\item $\\nclass$ is the number of classes\n \\item $n$ is total number of samples\n \\item $n_i$ is number of samples in the $i^{th}$ class\n \\item $\\cS$ is a set of all $i,j$ sample pairs where $r_i$ and $r_j$ belong to the same class.\n \\item $\\cS^c$ is a set of all $i,j$ sample pairs where $r_i$ and $r_j$ belong to different same classes.\n \\item $\\cS^{\\beta}$ is a set of all $i,j$ sample pairs that belongs to the same $\\beta^{th}$ classes.\n \\item $r_i^{(\\alpha)}$ is the $i^{th}$ sample in the $\\alpha^{th}$ class among $\\nclass$ classes.\n \\item We assume no $r_i \\ne r_j$ pair are equal $\\forall i \\ne j$. \n \\item \n Among all $r_i \\ne r_j$ pairs, there exists an optimal $r_i^*, r_j^*$ pair where \n $\\langle r_i^*, r_j^* \\rangle \\ge \\langle r_i, r_j \\rangle$ $\\forall r_i \\ne r^*_i$ and $r_j \\ne r^*_j$. We denote this maximum inner product as \n \\begin{equation}\n \\ub = \\langle r_i^*, r_j^* \\rangle.\n \\end{equation}\n \\item Here, each $r_i$ sample is assumed to be a sample in the RKHS of the Gaussian kernel, therefore all inner products are bounded such that\n \\begin{equation}\n 0 \\le \\langle r_i, r_j \\rangle \n \\le\n \\ub.\n \\end{equation}\n \\item We let $W$ be \n \\begin{equation}\n W_s = \\frac{1}{\\sqrt{\\zeta}} \\W.\n \\end{equation}\n Instead of using an optimal $W^*$ defined as $W^{*} = \\argmax_{W} H_{l}(W)$, we use a suboptimal $W_s$ where each dimension is simply the average direction of each class: $\\frac{1}{\\sqrt{\\zeta}}$ is a unnecessary normalizing constant $\\zeta = ||W_{s}||_{2}^{2}$. By using $W_s$, this implies that the $\\hsic$ we obtain is already a lower bound compare $\\hsic$ obtained by $W^*$. But, we will use this suboptimal $W_s$ to identify an even lower bound. Note that based on the definition $W^{*}$, we have the property $\\hsic(W^{*}) \\geq \\hsic(W) \\, \\forall W$.\n\n \\item We note that the objective $\\hsic$ is\n \\begin{equation}\n \\hsic = \n \\underbrace{\n \\sums \\Gij \\ISMexpC{}{}}_{\\within}\n -\n \\underbrace{\n \\sumsc |\\Gij| \\ISMexpC{}{}}_{\\between}\n \\end{equation}\n where we let $\\within$ be the summation of terms associated with the within cluster pairs, and let $\\between$ be the summation of terms associated with the between cluster pairs.\n\\end{enumerate}\n \n \n \n\\begin{proof} $\\hspace{1pt}$\n\\begin{adjustwidth}{0.5cm}{0.0cm}\n The equation is further divided into smaller parts organized into multiple sections.\n \n \\textbf{For sample pairs in $\\cS$. } The first portion of the function can be split into multiple classes where\n \\begin{equation}\n \\within = \n \\underbrace{\n \\sum_{\\cS^1} \\Gij \\ISMexpC{(1)}{(1)}\n }_\\text{$\\within_1$}\n + ... + \n \\underbrace{\n \\sum_{\\cS^{\\nclass}} \\Gij \\ISMexpC{(\\nclass)}{(\\nclass)}}_\\text{$\\within_{\\nclass}$}\n \\end{equation}\n Realize that to find the lower bound, we need to determine the minimum possible value of each term which translates to \\textbf{maximum} possible value of each exponent. Without of loss of generality we can find the lower bound for one term and generalize its results to other terms due to their similarity. Let us focus on the numerator of the exponent from $\\within_1$. Given $W_s$ as $W$, our goal is identify the \\textbf{maximum} possible value for\n \\begin{equation}\n \\underbrace{\n \\rij{1}{1}^TW}_{\\Pi_1}\n \\underbrace{\n W^T\\rij{1}{1}}_{\\Pi_2}. 
\n \\end{equation}\n Zoom in further by looking only at $\\Pi_1$, we have \n the following relationships\n \\begin{equation}\n \\Pi_1 = \n \\underbrace{\n r_i^{(1)^T} W}_{\\xi_1}\n - \n \\underbrace{r_j^{(1)^T} W}_{\\xi_2} \n \\end{equation}\n \\begin{align}\n \\xi_1 & = \\frac{1}{\\sqrt{\\zeta}} r_i^{(1)^T} \\W \\\\\n & = \\frac{1}{\\sqrt{\\zeta}}\n r_i^{(1)^T} \n \\begin{bmatrix}\n (r_1^{(1)} + ... + r_{n_1}^{(1)}) &\n ... &\n (r_1^{(\\nclass)} + ... + r_{n_{\\nclass}}^{(\\nclass)})\n \\end{bmatrix}\n \\end{align}\n \\begin{align}\n \\xi_2 & = \n \\frac{1}{\\sqrt{\\zeta}}\n r_j^{(1)^T} \\W \\\\\n & = \n \\frac{1}{\\sqrt{\\zeta}}\n r_j^{(1)^T} \n \\begin{bmatrix}\n (r_1^{(1)} + ... + r_{n_1}^{(1)}) &\n ... &\n (r_1^{(\\nclass)} + ... + r_{n_{\\nclass}}^{(\\nclass)})\n \\end{bmatrix}\n \\end{align} \n By knowing that the inner product is constrained between $[0,\\ub]$, we know the maximum possible value for $\\xi_1$ and the minimum possible value for $\\xi_2$ to be\n \\begin{align}\n \\xi_1 & = \n \\frac{1}{\\sqrt{\\zeta}}\n \\begin{bmatrix}\n 1 + (n_1 - 1)\\ub &\n n_2 \\ub &\n n_3 \\ub &\n ... &\n n_{\\nclass} \\ub\n \\end{bmatrix} \\\\\n \\xi_2 & = \n \\frac{1}{\\sqrt{\\zeta}}\n \\begin{bmatrix}\n 1 &\n 0 &\n 0 &\n ... &\n 0\n \\end{bmatrix}. \n \\end{align} \n Which leads to \n \\begin{equation}\n \\Pi_1 = \n \\frac{1}{\\sqrt{\\zeta}}\n (\\xi_1 - \\xi_2) = \n \\frac{1}{\\sqrt{\\zeta}}\n \\begin{bmatrix}\n (n_1 - 1)\\ub &\n n_2 \\ub &\n n_3 \\ub &\n ... &\n n_{\\nclass} \\ub\n \\end{bmatrix}\n \\end{equation}\n Since $\\Pi_2^{T} = \\Pi_1$ we have \n \\begin{align}\n \\Pi_1 \\Pi_2 & = \n \\frac{1}{\\zeta}\n [\n (n_1 - 1)^2\\ub^2 + \n n_2^2 \\ub^2 +\n n_3^2 \\ub^2 +\n ... +\n n_{\\nclass}^2 \\ub^2] \\\\\n & = \n \\frac{1}{\\zeta}\n [(n_1 - 1)^2+ \n n_2^2 +\n n_3^2 +\n ... +\n n_{\\nclass}^2] \\ub^2\n \\end{align}\n The lower bound for just the $\\within_1$ term emerges as \n \\begin{equation}\n \\within_1 \\ge \n \\sum_{\\cS^1} \\Gij e^{\n - \\frac{ \n [(n_1 - 1)^2+ \n n_2^2 +\n n_3^2 +\n ... +\n n_{\\nclass}^2] \\ub^2 }{2 \\zeta \\sigma_1^2}}. \n \\end{equation}\n To further condense the notation, we define the following constant\n \\begin{equation}\n \\mathscr{N}_g = \n \\frac{1}{2 \\zeta}\n [n_1^2+ \n n_2^2 +\n ... +\n (n_g - 1)^2 + \n ... + \n n_{\\tau}^2].\n \\end{equation}\n Therefore, the lower bound for $\\within_1$ can be simplified as \n \\begin{equation}\n \\within_1 \\ge \\sum_{\\cS^1} \\Gij e^{-\\frac{\\mathscr{N}_1 \\ub^2}{\\sigma_1^2}}\n \\end{equation}\n and the general pattern for any $\\within_g$ becomes\n \\begin{equation}\n \\within_g \\ge \\sum_{\\cS^i} \\Gij e^{-\\frac{\\mathscr{N}_g \\ub^2}{\\sigma_1^2}}. \n \\end{equation}\n The lower bound for the entire set of $\\cS$ then becomes\n \\begin{equation}\n \\sums \\Gij \\ISMexpC{}{} = \n \\within_1 + ... + \\within_{\\nclass} \\ge \n \\underbrace{\n \\sum_{g=1}^{\\nclass} \\sum_{\\cS^g} \\Gij\n e^{-\\frac{\\mathscr{N}_g \\ub^2}{\\sigma_1^2}}}_{\\text{Lower bound}}.\n \\end{equation}\n \n \\textbf{For sample pairs in $\\cS^c$. 
} \n To simplify the notation, we note that\n \\begin{align}\n -\\between_{g_1,g_2} & = \n -\n \\sum_{i \\in \\cS^{g_1}} \n \\sum_{j \\in \\cS^{g_2}}\n |\\Gij| \\ISMexpC{(g_1)}{(g_2)} \\\\\n & = -\n \\sum_{i \\in \\cS^{g_1}} \n \\sum_{j \\in \\cS^{g_2}}\n |\\Gij| e^{-\\frac\n {\\Tr(W^T (\\rij{g_1}{g_1})(\\rij{g_1}{g_2})^T W)}\n {2 \\sigma_1^2}} \\\\ \n & = -\n \\sum_{i \\in \\cS^{g_1}} \n \\sum_{j \\in \\cS^{g_2}}\n |\\Gij| e^{-\\frac{\\Tr(W^T A_{i,j}^{(g_1, g_2)} W)}\n {2 \\sigma_1^2}} \\\\ \n \\end{align}\n We now derived the lower bound for the sample pairs in $\\cS^c$. We start by writing out the entire summation sequence for $\\between$. \n \\begin{equation}\n \\begin{split}\n \\between & = \n -\\underbrace{\n \\sum_{i \\in \\cS^1} \n \\sum_{j \\in \\cS^2}\n |\\Gij| \n e^{-\\frac{\\Tr(W^T A_{i,j}^{(1, 2)} W)}\n {2 \\sigma_1^2}}\n }_\\text{$\\between_{1,2}$} \n - \\underbrace{...}_{\\between_{g_1 \\ne g_2}}\n -\\underbrace{\n \\sum_{i \\in \\cS^1} \n \\sum_{j \\in \\cS^{\\nclass}}\n |\\Gij| \n e^{-\\frac{\\Tr(W^T A_{i,j}^{(1, \\nclass)} W)}\n {2 \\sigma_1^2}}\n }_\\text{$\\between_{1,\\nclass}$} \n \\\\\n & \n -\\underbrace{\n \\sum_{i \\in \\cS^2} \n \\sum_{j \\in \\cS^1}\n |\\Gij| \n e^{-\\frac{\\Tr(W^T A_{i,j}^{(2, 1)} W)}\n {2 \\sigma_1^2}}\n }_\\text{$\\between_{2,1}$} \n - \\underbrace{...}_{\\between_{g_1 \\ne g_2}}\n -\\underbrace{\n \\sum_{i \\in \\cS^2} \n \\sum_{j \\in \\cS^{\\nclass}}\n |\\Gij| \n e^{-\\frac{\\Tr(W^T A_{i,j}^{(2, \\nclass)} W)}\n {2 \\sigma_1^2}}\n }_\\text{$\\between_{2,\\nclass}$} \n \\\\ \n & ... \\\\\n & \n -\\underbrace{\n \\sum_{i \\in \\cS^\\nclass} \n \\sum_{j \\in \\cS^1}\n |\\Gij| \n e^{-\\frac{\\Tr(W^T A_{i,j}^{(\\nclass, 1)} W)}\n {2 \\sigma_1^2}}\n }_\\text{$\\between_{\\nclass,1}$} \n - \\underbrace{...}_{\\between_{g_1 \\ne g_2}}\n -\\underbrace{\n \\sum_{i \\in \\cS^{\\nclass-1}} \n \\sum_{j \\in \\cS^{\\nclass}}\n |\\Gij| \n e^{-\\frac{\\Tr(W^T A_{i,j}^{(\\nclass-1, \\nclass)} W)}\n {2 \\sigma_1^2}}\n }_\\text{$\\between_{\\nclass-1,\\nclass}$} \n \\end{split}\n \\end{equation}\n \n Using a similar approach with the terms from $\\within$, note that $\\between$ is a negative value, so we need to maximize this term to obtain a lower bound. Consequently, the key is to determine the \\textbf{minimal} possible values for each exponent term. Since every one of them will behave very similarly, we can simply look at the numerator of the exponent from $\\between_{1,2}$ and then arrive to a more general conclusion. Given $W_s$ as $W$, our goal is to identify the \\textbf{minimal} possible value for\n \\begin{equation}\n \\underbrace{\n \\rij{1}{2}^TW}_{\\Pi_1}\n \\underbrace{\n W^T\\rij{1}{2}}_{\\Pi_2}. \n \\end{equation}\n \n Zoom in further by looking only at $\\Pi_1$, we have \n the following relationships\n \\begin{equation}\n \\Pi_1 = \n \\underbrace{\n r_i^{(1)^T} W}_{\\xi_1}\n - \n \\underbrace{r_j^{(2)^T} W}_{\\xi_2} \n \\end{equation}\n \\begin{align}\n \\xi_1 & = \n \\frac{1}{\\sqrt{\\zeta}}\n r_i^{(1)^T} \\W \\\\\n & = \n \\frac{1}{\\sqrt{\\zeta}}\n r_i^{(1)^T} \n \\begin{bmatrix}\n (r_1^{(1)} + ... + r_{n_1}^{(1)}) &\n ... &\n (r_1^{(\\nclass)} + ... + r_{n_{\\nclass}}^{(\\nclass)})\n \\end{bmatrix}\n \\end{align}\n \\begin{align}\n \\xi_2 & = \n \\frac{1}{\\sqrt{\\zeta}}\n r_j^{(2)^T} \\W \\\\\n & = \n \\frac{1}{\\sqrt{\\zeta}}\n r_j^{(2)^T} \n \\begin{bmatrix}\n (r_1^{(1)} + ... + r_{n_1}^{(1)}) &\n ... &\n (r_1^{(\\nclass)} + ... 
+ r_{n_{\\nclass}}^{(\\nclass)})\n \\end{bmatrix}\n \\end{align} \n By knowing that the inner product is constrained between $[0,\\ub]$, we know the \\textbf{minimum} possible value for $\\xi_1$ and the \\textbf{maximum} possible value for $\\xi_2$ to be\n \\begin{align}\n \\xi_1 & = \n \\frac{1}{\\sqrt{\\zeta}}\n \\begin{bmatrix}\n 1 &\n 0 &\n 0 &\n ... &\n 0\n \\end{bmatrix} \\\\\n \\xi_2 & = \n \\frac{1}{\\sqrt{\\zeta}}\n \\begin{bmatrix}\n n_1\\ub &\n 1 + (n_2 - 1) \\ub &\n n_3 \\ub &\n ... &\n n_{\\nclass} \\ub\n \\end{bmatrix} \n \\end{align} \n Which leads to \n \\begin{equation}\n \\Pi_1 = \n \\frac{1}{\\sqrt{\\zeta}}\n (\\xi_1 - \\xi_2) = \n \\frac{1}{\\sqrt{\\zeta}}\n \\begin{bmatrix}\n 1 - n_1\\ub &\n -(1 + (n_2 - 1) \\ub) &\n -n_3 \\ub &\n ... &\n -n_{\\nclass} \\ub\n \\end{bmatrix}\n \\end{equation}\n Since $\\Pi_2^{T} = \\Pi_1$ we have \n \\begin{align}\n \\Pi_1 \\Pi_2 & = \n \\frac{1}{\\zeta}\n [\n (1- n_1 \\ub)^2 + \n (1 + (n_2-1)\\ub)^2 + \n n_3^2 \\ub^2 +\n ... +\n n_{\\nclass}^2 \\ub^2].\n \\end{align}\n The lower bound for just the $\\between_{1,2}$ term emerges as \n \\begin{equation}\n -\\between_{1,2} \\ge \n - \\sum_{\\cS^1} \\sum_{\\cS^2} |\\Gij| e^{\n - \\frac{ \n (1- n_1\\ub)^2 + \n (1 + (n_2-1)\\ub)^2 + \n n_3^2 \\ub^2 +\n ... +\n n_{\\nclass}^2 \\ub^2}\n {2 \\zeta \\sigma_1^2}}. \n \\end{equation}\n To further condense the notation, we define the following function\n \\begin{equation}\n \\begin{split}\n \\mathscr{N}_{g_1, g_2}(\\ub) & = \n \\frac{1}{2 \\zeta}\n [ n_1^2 \\ub^2 + n_2^2 \\ub^2 + ... \\\\ \n & + (1- n_{g_1}\\ub)^2 + ... + \n (1 + (n_{g_2}-1)\\ub)^2 \\\\\n & + ... + n_{\\tau}^2 \\ub^2].\n \\end{split}\n \\end{equation}\n Note that while for $\\cS$, the $\\ub$ term can be separated out. But here, we cannot, and therefore $\\mathscr{N}$ here must be a function of $\\ub$. Therefore, the lower bound for $\\between_{1,2}$ can be simplified into\n \\begin{equation}\n - \\between_{1,2} \\ge - \n \\sum_{\\cS^1} \n \\sum_{\\cS^2}\n |\\Gij| e^{-\\frac{\\mathscr{N}_{1,2}(\\ub)}{\\sigma_1^2}}\n \\end{equation}\n and the general pattern for any $\\between_{g_1, g_2}$ becomes\n \\begin{equation}\n -\\between_{g_1,g_2} \\ge -\n \\sum_{\\cS^{g1}} \n \\sum_{\\cS^{g2}}\n \\Gij e^{-\\frac{\\mathscr{N}_{g_1,g_2}(\\ub)}\n {\\sigma_1^2}}. \n \\end{equation} \n The lower bound for the entire set of $\\cS^c$ then becomes\n \\begin{align}\n -\\sumsc |\\Gij| \\ISMexpC{}{} & = \n - \\between_{1,2} \n - \\between_{1,3} \n - ... - \n \\between_{\\nclass-1, \\nclass} \\\\\n & \\ge \n - \\underbrace{\n \\sum_{g_1 \\ne g_2}^{\\nclass}\n \\sum_{i \\in \\cS^{g_1}} \n \\sum_{j \\in \\cS^{g_2}} \n |\\Gij|\n e^{-\\frac{\\mathscr{N}_{g_1,g_2} (\\ub)}{\\sigma_1^2}}}_{\\text{Lower bound}}.\n \\end{align} \n \n \\textbf{Putting $\\cS$ and $\\cS^c$ Together. 
} \n \\begin{align}\n \\hsic & = \\within + \\between \\\\\n & \\ge\n \\underbrace{\n \\sum_{g=1}^{\\nclass} \\sum_{\\cS^g} \\Gij\n e^{-\\frac{\\mathscr{N}_g \\ub^2}{\\sigma_1^2}}}_{\\text{Lower bound of } \\within}\n - \\underbrace{\n \\sum_{g_1 \\ne g_2}^{\\nclass} \n \\sum_{i \\in \\cS^{g_1}} \n \\sum_{j \\in \\cS^{g_2}} \n |\\Gij|\n e^{-\\frac{\\mathscr{N}_{g_1,g_2} (\\ub)}{\\sigma_1^2}}}_{\\text{Lower bound of } \\between}.\n \\end{align}\n Therefore, we have identified a lower bound that is a function of $\\sigma_0$ and $\\sigma_1$ where\n \\begin{equation}\n \\lb\\Lsigma = \n \\sum_{g=1}^{\\tau} \\sum_{\\cS^g} \\Gij\n e^{-\\frac{\\mathscr{N}_g \\ub^2}{\\sigma_1^2}}\n - \n \\sum_{g_1 \\ne g_2}^{\\tau} \n \\sum_{i \\in \\cS^{g_1}} \n \\sum_{j \\in \\cS^{g_2}} \n |\\Gij|\n e^{-\\frac{\\mathscr{N}_{g_1,g_2} (\\ub)}{\\sigma_1^2}}.\n \\end{equation}\n From the lower bound, it is obvious why it is a function of $\\sigma_1$. The lower bound is also a function of $\\sigma_{0}$ because $\\ub$ is actually a function of $\\sigma_0$. To specifically clarify this point, we have the next lemma. \n\\end{adjustwidth}\n\\end{proof}\n\\hspace{1pt}\n\n\\begin{lemma}\n\\label{app:lemma:ub_goes_to_zero}\nThe $\\ub$ used in \\citelemma{app:lemma:lowerbound} is a function of $\\sigma_0$ where $\\ub$ approaches to zero as $\\sigma_{0}$ approaches to zero, i.e.\n\\begin{equation}\n \\lim_{\\sigma_0 \\rightarrow 0} \n \\ub = 0.\n\\end{equation}\n\\end{lemma}\n\n\\textbf{Assumptions and Notations. }\n\\begin{enumerate}\n \\item \n We use Fig.~\\ref{app:fig:two_layer_img} to help clarify the notations. We here only look at the last 2 layers. \n \\item \n We let $\\hsic_0$ be the $\\hsic$ of the last layer, and $\\hsic_1$, the $\\hsic$ of the current layer.\n \\item\n The input of the data is $X$ with each sample as $x_i$, and the output of the previous layer are denoted as $r_i$. $\\psi_{\\sigma_0}$ is the feature map of the previous layer using $\\sigma_0$ and $\\psi_{\\sigma_1}$ corresponds to the current layer. \n \\begin{figure}[h]\n \\center\n \\includegraphics[width=7cm]{img\/TwoLayer.png}\n \\caption{Figure of a 2 layer network.}\n \\label{app:fig:two_layer_img}\n \\end{figure} \n \\item \n As defined from \\citelemma{app:lemma:lowerbound}, \n among all $r_i \\ne r_j$ pairs, there exists an optimal $r_i^*, r_j^*$ pair where $\\langle r_i^*, r_j^* \\rangle \\ge \\langle r_i, r_j \\rangle$ $\\forall r_i \\ne r^*_i$ and $r_j \\ne r^*_j$. 
We denote this maximum inner product as
 \begin{equation}
 \ub = \langle r_i^*, r_j^* \rangle.
 \end{equation}
\end{enumerate}

\begin{proof} \hspace{1pt}
\begin{adjustwidth}{0.5cm}{0.0cm}
Given Fig.~\ref{app:fig:two_layer_img}, the equation for $\hsic_0$ is
\begin{align}
 \hsic_0
 & = \sums \Gij
 e^{-\frac{(x_i - x_j)^TW W^T (x_i - x_j)}{2 \sigma_0^2}}
 -
 \sumsc |\Gij|
 e^{-\frac{(x_i - x_j)^TW W^T (x_i - x_j)}{2 \sigma_0^2}} \\
 & =
 \sums \Gij
 \langle
 \psi_{\sigma_0}(x_i),
 \psi_{\sigma_0}(x_j)
 \rangle
 - \sumsc |\Gij|
 \langle
 \psi_{\sigma_0}(x_i),
 \psi_{\sigma_0}(x_j)
 \rangle.
\end{align}
Notice that as $\sigma_0 \rightarrow 0$, we have
\begin{equation}
 \lim_{\sigma_0 \rightarrow 0}
 \langle
 \psi_{\sigma_0}(x_i),
 \psi_{\sigma_0}(x_j)
 \rangle
 =
 \begin{cases}
 0 \quad \forall i \ne j
 \\
 1 \quad \forall i = j
 \end{cases}.
\end{equation}
In other words, as $\sigma_0 \rightarrow 0$, each sample $r_i$ in the RKHS of the Gaussian kernel becomes orthogonal to all other samples. This also implies that $\sigma_0$ controls the magnitude of the inner product in the RKHS for the maximal sample pair $r_i^*, r_j^*$. We define this maximum inner product via
\begin{equation}
 \langle
 \psi_{\sigma_0}(x_i^*),
 \psi_{\sigma_0}(x_j^*)
 \rangle
 \ge
 \langle
 \psi_{\sigma_0}(x_i),
 \psi_{\sigma_0}(x_j)
 \rangle
\end{equation}
or equivalently
\begin{equation}
 \langle
 r_i^*,
 r_j^*
 \rangle
 \ge
 \langle
 r_i,
 r_j
 \rangle.
\end{equation}

Therefore, a given $\sigma_0$ controls the upper bound of the inner product. Notice that as $\sigma_0 \rightarrow 0$, every sample in the RKHS becomes orthogonal to every other sample. Therefore, the upper bound of $\langle r_i, r_j \rangle$ also approaches 0 when $r_i \ne r_j$. From this, we see the relationship
\begin{equation}
 \lim_{\sigma_0 \rightarrow 0}
 \ub = \lim_{\sigma_0 \rightarrow 0} e^{-|\cdot|/(2\sigma^{2}_{0})} = 0,
\end{equation}
where $|\cdot|$ denotes the corresponding squared distance; it is bounded and, since we have a finite number of distinct samples, bounded away from zero.
\end{adjustwidth}
\end{proof}

\begin{lemma}
\label{app:lemma:as_ub_zero_L_approaches}
Given any fixed $\sigma_1 > 0 $, the lower bound $\lb\Lsigma$ is a function of $\sigma_0$, and as $\sigma_0 \rightarrow 0$, $\lb\Lsigma$ approaches the function
 \begin{equation}
 \lb(\sigma_1) =
 \sum_{g=1}^{\nclass} \sum_{\cS^g} \Gij
 -
 \sum_{g_1 \ne g_2}^{\nclass}
 \sum_{i \in \cS^{g_1}}
 \sum_{j \in \cS^{g_2}}
 |\Gij|
 e^{-\frac{1}{\zeta \sigma_1^2}}.
 \end{equation}
At this point, if we let $\sigma_1 \rightarrow 0$, we have
 \begin{align}
 \lim_{\sigma_1 \rightarrow 0}
 \lb(\sigma_1) & = \sums \Gij \\
 & = \hsic^*.
 \end{align}
\end{lemma}

\begin{proof} \hspace{1pt}
\begin{adjustwidth}{0.5cm}{0.0cm}
 Given \citelemma{app:lemma:ub_goes_to_zero}, we know that
 \begin{equation}
 \lim_{\sigma_0 \rightarrow 0}
 \ub = 0.
 \end{equation}
 Therefore, letting $\sigma_0 \rightarrow 0$ is equivalent to letting $\ub \rightarrow 0$. Since
 \citelemma{app:lemma:lowerbound} provides the equation of a lower bound that is a function of $\ub$, this lemma is proven by simply evaluating $\lb\Lsigma$ as $\ub \rightarrow 0$.
Following these steps, we have
 \begin{align}
 \lb(\sigma_1) & =
 \lim_{\ub \rightarrow 0}
 \sum_{g=1}^{\nclass} \sum_{\cS^g} \Gij
 e^{-\frac{\mathscr{N}_g \ub^2}{\sigma_1^2}}
 -
 \sum_{g_1 \ne g_2}^{\nclass}
 \sum_{i \in \cS^{g_1}}
 \sum_{j \in \cS^{g_2}}
 |\Gij|
 e^{-\frac{\mathscr{N}_{g_1,g_2} (\ub)}{\sigma_1^2}}, \\
 & =
 \sum_{g=1}^{\nclass} \sum_{\cS^g} \Gij
 -
 \sum_{g_1 \ne g_2}^{\nclass}
 \sum_{i \in \cS^{g_1}}
 \sum_{j \in \cS^{g_2}}
 |\Gij|
 e^{-\frac{1}{\zeta \sigma_1^2}}.
 \end{align}
At this point, as $\sigma_1 \rightarrow 0$, our lower bound reaches the global maximum
 \begin{align}
 \lim_{\sigma_1 \rightarrow 0}
 \lb(\sigma_1) & = \sum_{g=1}^{\nclass} \sum_{\cS^g} \Gij
 = \sums \Gij \\
 & = \hsic^*.
 \end{align}

\end{adjustwidth}
\end{proof}

\begin{lemma}
\label{app:lemma:arbitrarily_close}
Given any $\hsic_{l-2}$, $\delta > 0$, there exists a $\sigma_0 > 0 $ and $\sigma_1 > 0$ such that
\begin{equation}
 \hsic^{*} - \hsic_l \le \delta.
 \label{app:eq:proof_I}
\end{equation}
\end{lemma}
\begin{proof} $\hspace{1pt}$

\begin{adjustwidth}{0.5cm}{0.0cm}
\textbf{Observation 1. }

Note that the objective of $\hsic_l$ is
\begin{equation}
\begin{split}
 \hsic_l =
 \max_W
 &
 \sums \Gij
 e^{-\frac{\rij{\cS}{\cS}^T WW^T \rij{\cS}{\cS}}{2\sigma_1^2}} \\
 - & \sumsc |\Gij|
 e^{-\frac{\rij{\cS^c}{\cS^c}^T WW^T \rij{\cS^c}{\cS^c}}{2\sigma_1^2}}.
\end{split}
\end{equation}
Since the Gaussian kernel is bounded between 0 and 1, the theoretical maximum $\hsic^*$ is attained when the kernel equals 1 for all pairs in $\cS$ and 0 for all pairs in $\cS^c$, giving $\hsic^* = \sums \Gij$. Therefore, the inequality in \eq{app:eq:proof_I} is equivalent to
\begin{equation}
 \sums \Gij - \hsic_l \le \delta.
\end{equation}
\end{adjustwidth}

\begin{adjustwidth}{0.5cm}{0.0cm}
\textbf{Observation 2. }

If we choose a $\sigma_1$ and a $\sigma_0$ such that
\begin{equation}
 \lb^*(\sigma_1) - \lb\Lsigma \le \frac{\delta}{2}
 \quad \text{and} \quad
 \hsic^* - \lb^*(\sigma_1) \le \frac{\delta}{2},
\end{equation}
then, by adding the two inequalities, we have identified $\sigma_0 > 0$ and $\sigma_1 > 0$ such that
\begin{equation}
 \sums \Gij - \lb\Lsigma \le \delta.
\end{equation}
Since $\hsic_l \ge \lb\Lsigma$ (the lower bound was derived using the suboptimal $W_s$), this yields \eq{app:eq:proof_I}.
Note that $\lb^*(\sigma_1)$ is a continuous function of $\sigma_1$ and, by \citelemma{app:lemma:as_ub_zero_L_approaches}, it approaches $\hsic^*$ as $\sigma_1 \rightarrow 0$. Therefore, a $\sigma_1$ exists such that $\lb^*(\sigma_1)$ is arbitrarily close to $\hsic^{*}$.
Hence, we choose an $\\sigma_{1}$ that has the following property:\n\\begin{equation}\n \\hsic^* - \\lb^*(\\sigma_1) \\le \\frac{\\delta}{2}.\n\\end{equation}\nWe next fix $\\sigma_{1}$, we also know $\\lb\\Lsigma$ is a continuous function of $\\sigma_{0}$, and it has a limit $\\lb^*(\\sigma_1)$ as $\\sigma_{0}$ approaches to 0, hence there exits a $\\sigma_{0}$, where \n\\begin{equation}\n \\lb^*(\\sigma_1) - \\lb\\Lsigma \\le \\frac{\\delta}{2}\n\\end{equation}\nThen we have:\n\\begin{equation}\n \\lb^*(\\sigma_1) - \\lb\\Lsigma \\le \\frac{\\delta}{2}\n \\quad \\text{and} \\quad\n \\hsic^* - \\lb^*(\\sigma_1) \\le \\frac{\\delta}{2}.\n\\end{equation}\nBy adding the two $\\frac{\\delta}{2}$, we conclude the proof.\n\\end{adjustwidth}\n\\end{proof}\n\n\n\n\n\n\\begin{lemma}\n \\label{app:lemma:approach_optimal_H1}\n There exists a \\KS $\\{\\fm_{l^{\\circ}}\\}_{l=1}^L$ parameterized by a set of weights $W_l$ and a set of bandwidths $\\sigma_l$ such that \n \\begin{equation}\n \\lim_{l \\rightarrow \\infty} \\hsic_l = \\hsic^* , \\quad \\hsic_{l+1} > \\hsic_l \\quad \\forall l\n \\end{equation}\n\\end{lemma}\n\n\nBefore, the proof, we use the following figure, Fig.~\\ref{app:fig:all_layers}, to illustrate the relationship between \\KS $\\{\\phi_{l^\\circ}\\}_{l=1}^L$ that generates the \\RS $\\{\\hsic_l\\}_{l=1}^L$. By solving a network greedily, we separate the network into $L$ separable problems. At each additional layer, we rely on the weights learned from the previous layer. At each network, we find $\\sigma_{l-1}$, $\\sigma_l$, and $W_l$ for the next network. We also note that since we only need to prove the existence of a solution, this proof is done by \\textit{\\textbf{Proof by Construction}}, i.e, we only need to show an example of its existence. Therefore, this proof consists of us constructing a \\RS which satisfies the lemma.\n \\begin{figure}[h]\n \\center\n \\includegraphics[width=7cm]{img\/kchain_layers.png}\n \\caption{Relating \\KS to \\RS.}\n \\label{app:fig:all_layers}\n \\end{figure} \n\n\\begin{proof} \\hspace{1pt}\n\\begin{adjustwidth}{0.5cm}{0.0cm}\nWe first note that from \\citelemma{app:lemma:arbitrarily_close}, we have previously proven given any $\\hsic_{l-2}$, $\\delta > 0$, there exists a $\\sigma_0 > 0 $ and $\\sigma_1 > 0$ such that\n\\begin{equation}\n \\hsic^{*} - \\hsic_{l} \\leq \\delta_{l}.\n \\label{eq:cond1}\n\\end{equation}\nThis implies that based on Fig.~\\ref{app:fig:all_layers}, at any given layer, we could reach arbitrarily close to $\\hsic^*$. Given this, we list the 2 steps to build the \\RS.\n\n\n\n\n\\textbf{Step 1: } Define $\\{\\mathcal{E}_{n}\\}_{n = 1}^{\\infty}$ as a sequence of numbers $\\hsic^{*} - \\frac{\\hsic^{*} - \\hsic_0}{n}$ on the real line. We have the following properties for this sequence:\n\\begin{equation}\n \\lim_{n\\rightarrow\\infty} \\mathcal{E}_{n} = \\hsic^{*}\n , \\quad \\mathcal{E}_{1} = \\mathcal{H}_{0}. 
\n\\end{equation}\n\n\nUsing these two properties, for any $\\hsic_{l-1} \\in [\\hsic_{0},\\hsic^{*}]$ there exist an unique $n$, where \\begin{equation}\n \\mathcal{E}_{n} \\leq \\hsic_{l-1} < \\mathcal{E}_{n+1}.\n \\label{eq:bounding_box}\n\\end{equation}\n\n\\textbf{Step 2: } \n For any given $l$, we choose $\\delta_{l}$ to satisfies \\eq{eq:cond1} by the following procedure, First find an $n$ that satisfies\n \\begin{equation}\n \\mathcal{E}_{n} \\leq \\hsic_{l-1} < \\mathcal{E}_{n+1}, \n \\label{ineq:differential}\n \\end{equation}\n and second define $\\delta_l$ to be \n \\begin{equation}\n \\delta_{l} = \\hsic^{*} - \\mathcal{E}_{n+1}.\n \\label{eq:delta_define}\n \\end{equation}\n \n \n \n\n \n \nTo satisfy \\eq{eq:cond1}, the following must be true. \\begin{equation}\n \\hsic^{*} - \\hsic_{l-1} \\leq \\delta_{l-1}.\n \\label{eq:delta_greater_than_l1}\n\\end{equation}\nand further we found $n$ such that\n\\begin{equation}\n \\mathcal{E}_{n} \\leq \\hsic_{l-1} < \\mathcal{E}_{n+1} \\implies \\hsic^{*} - \\mathcal{E}_{n} \\geq \\hsic^{*} - \\hsic_{l-1} > \\hsic^{*} - \\mathcal{E}_{n+1}.\n \\label{eq:all_ineqals}\n\\end{equation}\nThus combining \\eq{eq:delta_define}, \\eq{eq:delta_greater_than_l1}, and \\eq{eq:all_ineqals} we have\n\\begin{equation}\n \\delta_{l-1} > \\delta_{l}.\n\\end{equation}\nTherefore, $\\{ \\delta_l \\}$ is a decreasing sequence.\n\n \n\n\\textbf{Step 3: } \nNote that $\\{\\mathcal{E}_{n}\\}$ is a converging sequence where \n\\begin{equation}\n \\lim_{n \\rightarrow \\infty}\n \\hsic^{*} - \\frac{\\hsic^{*} - \\hsic_0}{n} = \\hsic^*.\n\\end{equation}\nTherefore, $\\{\\Delta_n\\} = \\hsic^* - \\{\\mathcal{E}_n\\}$ is also a converging sequence where \n\\begin{equation}\n \\lim_{n \\rightarrow \\infty}\n \\hsic^* - \\hsic^{*} + \\frac{\\hsic^{*} - \\hsic_0}{n} = 0\n\\end{equation}\nand $\\{\\delta_{l}\\}$ is a subsequence of $\\{\\Delta_{l}\\}$. Since any subsequence of a converging sequence also converges to the same limit, we know that\n\\begin{equation}\n \n \n \n \\lim_{l \\rightarrow \\infty} \\delta_l = 0.\n\\end{equation}\n \n\n \n\nFollowing this construction, if we always choose $\\hsic_l$ such that \n\\begin{equation}\n \\hsic^{*} - \\hsic_{l} \\leq \\delta_{l}.\n \\label{eq:key_inequality}\n\\end{equation}\nAs $l \\rightarrow \\infty$, the inequality becomes\n\\begin{align}\n \\hsic^{*} - \n \\lim_{l \\rightarrow \\infty} \n \\hsic_{l} \n & \\leq \n \\lim_{l \\rightarrow \\infty} \n \\delta_{l}, \\\\\n & \\leq \n 0.\n\\end{align}\nSince we know that \n\\begin{equation}\n \\hsic^{*} - \\hsic_l \\geq 0\\, \\forall l. \n\\end{equation}\nThe condition of \n\\begin{equation}\n 0 \\leq\n \\hsic^{*} - \\lim_{l \\rightarrow \\infty} \\hsic_l \\leq 0\n\\end{equation}\nis true only if \n\\begin{equation}\n \\hsic^{*} - \\lim_{l \\rightarrow \\infty} \\hsic_l = 0.\n\\end{equation}\nThis allows us to conclude \n\\begin{equation}\n \\hsic^{*} = \\lim_{l \\rightarrow \\infty} \\hsic_l.\n\\end{equation}\n\n\n\n\n\n\n\\textbf{Proof of the Monotonic Improvement. } \n\nGiven \\eq{eq:bounding_box} and \\eq{eq:delta_define}, \nat each step we have the following: \n\\begin{align}\n \\hsic_{l-1} & < \\mathcal{E}_{n+1} \\\\\n & \\leq \n \\hsic^{*} - \\delta_{l}. 
\n\\end{align}\nRearranging this inequality, we have\n\\begin{equation}\n \\delta_{l} < \\hsic^{*} - \\hsic_{l-1}.\n \\label{eq:keyIneq}\n\\end{equation}\nBy combining the inequalities from \\eq{eq:keyIneq} and \\eq{eq:key_inequality}, we have the following relationships.\n\\begin{align}\n \\hsic^{*} - \\hsic_{l} \\leq \\delta_{l} & < \\hsic^{*} - \\hsic_{l-1} \\\\\n \\hsic^{*} - \\hsic_{l} & < \\hsic^{*} - \\hsic_{l-1} \\\\\n - \\hsic_{l} & < - \\hsic_{l-1} \\\\\n \\hsic_{l} & > \\hsic_{l-1}, \\\\\n\\end{align}\nwhich concludes the proof of theorem.\n\n\n\\end{adjustwidth}\n\\end{proof}\n\n\n\n\\end{appendices}\n\n\n\n\\section{Proof for Theorem \\ref{thm:geometric_interpret}}\n\\label{app:thm:geometric_interpret}\n\n\\textbf{Theorem \\ref{thm:geometric_interpret}: }\n\\textit{\nAs $l \\rightarrow \\infty$ and $\\hsic_l \\rightarrow \\hsic^*$, \nthe following properties are satisfied: }\n\n\\begin{enumerate}[label=\\Roman*]\n \\item \n \n the scatter ratio approaches 0 where\n \\begin{equation}\n \\lim_{l \\rightarrow \\infty} \\frac{\\Tr(S_w^{l})}{\\Tr(S_b^{l})} = 0\n\\end{equation}\n\n\\item the \\KS converges to the following kernel:\n\\begin{equation}\n \n \n \\lim_{l \\rightarrow \\infty} \n \\kf(x_{i},x_{j})^{l} = \n \\kf^* = \n \\begin{cases} \n 0 \\quad \\forall i,j \\in \\mathcal{S}^c\n \\\\\n 1 \\quad \\forall i,j \\in \\mathcal{S}\n \\end{cases}.\n\\end{equation}\n\\end{enumerate}\n\n\\begin{proof}\n\n\n\n\n\n\n\n\n\n\n\n\n\nWe start by proving condition II starting from the $\\hsic$ objective using a \\rbfk\n\\begin{align}\n \\max_{W} \n \\sums \\Gij \\kf_W(r_i, r_j) \n -\n \\sumsc |\\Gij| \\kf_W(r_i, r_j)\\\\\n \\max_{W} \n \\sums \\Gij \\ISMexp\n -\n \\sumsc |\\Gij| \\ISMexp \n\\end{align}\nGiven that $\\hsic_l \\rightarrow \\hsic^*$,\nand the fact that $0 \\leq \\kf_W \\leq 1$,\nthis implies that the following condition must be true:\n\\begin{equation}\n \\hsic^{*} = \\sums \\Gij = \n \\sums \\Gij (1) \n -\n \\sumsc |\\Gij| (0).\n\\end{equation}\n\nBased on \\eq{eq:cond1}, our construction at each layer ensures to satisfy \n\\begin{equation}\n \\hsic^{*} - \\hsic_{l} \\leq \\delta_{l}.\n\\end{equation}\nSubstituting the definition of $\\hsic^*$ and $\\hsic_l$, we have\n\\begin{align}\n \\sums \\Gij (1) \n -\\left[\\sums \\Gij \\kf_W(r_i, r_j) \n -\n \\sumsc |\\Gij| \\kf_W(r_i, r_j) \\right] \\leq \\delta_{l}\n \\\\\n \\sums \\Gij (1-\\kf_W(r_i, r_j)) \n +\n \\sumsc |\\Gij| \\kf_W(r_i, r_j) \\leq \\delta_{l}.\n \\label{eq:kernel_inequal}\n\\end{align}\nSince every term within the summation in \\eq{eq:kernel_inequal} is positive, this implies\n\\begin{align}\\label{eq:limit kernal behaviour}\n 1-\\kf_W(r_i, r_j) \\leq \\delta_{l} \\quad i,j \\in \\mathcal{S}\n \\\\\n \\kf_W(r_i, r_j) \\leq \\delta_{l}\\quad i,j \\in \\mathcal{S}^{c}.\n \\label{eq:limit kernal behaviour2}\n\\end{align}\n\nSo as $l \\rightarrow \\infty$ and $\\delta_{l} \\rightarrow 0$, every component getting closer to limit Kernel, i.e, taking the limit from both sides and using the fact that is proven is theorem 1 $\\lim_{l\\rightarrow \\infty} \\delta_{l} = 0$ leads to\n\\begin{align}\n \\lim_{l\\rightarrow \\infty} 1 \\leq \\kf_W(r_i, r_j) \\quad i,j \\in \\mathcal{S}\n \\\\\n \\lim_{l\\rightarrow \\infty}\\kf_W(r_i, r_j) \\leq 0 \\quad i,j \\in \\mathcal{S}^{c}\n\\end{align}\nboth terms must instead be strictly equality. 
Therefore, we see that at the limit point $\\kf_W$ would have the form \n\\begin{equation}\n \\kf^* = \n \\begin{cases} \n 0 \\quad \\forall i,j \\in \\mathcal{S}^c\n \\\\\n 1 \\quad \\forall i,j \\in \\mathcal{S}\n \\end{cases}. \n\\end{equation}\n\n\\textbf{First Property}:\n\nUsing \\eq{eq:limit kernal behaviour} and \\eq{eq:limit kernal behaviour2} we have:\n\n\\begin{align}\n 1-\\delta_{l}\\leq \\ISMexp \\quad i,j \\in \\mathcal{S}\n \\\\\n \\ISMexp \\leq \\delta_{l}\\quad i,j \\in \\mathcal{S}^{c}.\n\\end{align}\n\nAs $\\lim_{l\\rightarrow \\infty} \\delta_{l} = 0$, taking the limit from both side leads to:\n\n\n\n\n\n \n\\begin{equation}\n \\begin{cases} \n \\ISMexp = 1 \\quad \\forall i,j \\in \\mathcal{S}\\\\\n \\ISMexp = 0 \\quad \\forall i,j \\in \\mathcal{S}^c\n \\end{cases}. \n\\end{equation}\nIf we take the log of the conditions, we get\n\\begin{equation}\n \\begin{cases} \n \\frac{1}{2\\sigma^2}\n (r_i - r_j)^T W W^T(r_i - r_j) =0 \n \\quad \\forall i,j \\in \\mathcal{S}\\\\ \n \\frac{1}{2\\sigma^2}\n (r_i - r_j)^T W W^T(r_i - r_j) = \\infty\n \\quad \\forall i,j \\in \\mathcal{S}^c\n \\end{cases}. \n\\end{equation}\nThis implies that as $l \\rightarrow \\infty$ we have\n\\begin{equation}\n \\lim_{l \\rightarrow \\infty}\n \\sums\n \\frac{1}{2\\sigma^2}\n (r_i - r_j)^T W W^T(r_i - r_j) \n = \n \\lim_{l \\rightarrow \\infty}\n \\Tr(S_w) = 0.\n\\end{equation}\n\\begin{equation}\n \\lim_{l \\rightarrow \\infty}\n \\sumsc \\frac{1}{2\\sigma^2}\n (r_i - r_j)^T W W^T(r_i - r_j) \n = \n \\lim_{l \\rightarrow \\infty}\n \\Tr(S_b)\n =\n \\infty,\n\\end{equation}\nThis yields the ratio\n\\begin{equation}\n \\lim_{\\hsic_l \\rightarrow \\hsic^*} \\frac{\\Tr(S_w)}{\\Tr(S_b)} = \\frac{0}{\\infty} = 0.\n\\end{equation}\n\n\\end{proof}\n\\end{appendices}\n\\section{Proof for \\texorpdfstring{$W_s$}\\xspace Optimality}\n\\label{app:lemma:W_not_optimal}\n\\textit{Given $\\hsic_l$ as the empirical risk at layer $l \\ne L$, we have}\n\\begin{equation}\n \\frac{\\partial}{ \\partial W_l}\\hsic_l(W_s) \\ne 0\n\\end{equation}\n\n\\begin{proof}\n Given $\\frac{1}{\\sqrt{\\zeta}}$ as a normalizing constant for $W_s = \\frac{1}{\\sqrt{\\zeta}} \\sum_{\\alpha} r_{\\alpha}$ such that $W^TW=I$. We start with the Lagrangian\n \\begin{equation}\n \\mathcal{L} = \n - \\sum_{i,j} \\Gij \\ISMexp - \\Tr(\\Lambda(W^TW - I)).\n \\end{equation}\n If we now take the derivative with respect to the Lagrange, we get\n \\begin{equation}\n \\nabla \\mathcal{L} = \n \\frac{1}{\\sigma^2}\n \\sum_{i,j} \\Gij \\ISMexp\n (r_i - r_j)(r_i - r_j)^TW \n - 2W\\Lambda.\n \\end{equation}\n By setting the gradient to 0, we have\n \\begin{align}\n \\left[\n \\frac{1}{2\\sigma^2}\n \\sum_{i,j} \\Gij \\ISMexp\n (r_i - r_j)(r_i - r_j)^T \n \\right]\n W \n =& W\\Lambda. \\\\\n \\mathcal{Q}_l W =& W \\Lambda.\n \\label{app:eq:ism_conclusion}\n \\end{align} \n From \\eq{app:eq:ism_conclusion}, we see that $W$ is only the optimal solution when $W$ is the eigenvector of $Q_l$. 
Therefore, by setting $W$ to \n $W_s = \\frac{1}{\\sqrt{\\zeta}} \\sum_{\\alpha} r_{\\alpha}$, it is not guaranteed to yield an optimal for all $\\sigma_l$.\n\\end{proof}\n\n\n\\end{appendices}\n\\section{Proof for Corollary \\ref{corollary:mse} and \\ref{corollary:ce}}\n\\label{app:corollary:ce}\n\n\\textbf{Corollary} \\ref{corollary:mse}: \n \\textit{Given $\\hsic_l \\rightarrow \\hsic^*$, the network output in IDS solves MSE via a translation of labels.}\n \n\\begin{proof} \\hspace{1pt}\n\\begin{adjustwidth}{0.5cm}{0.0cm}\n As $\\hsic_l \\rightarrow \\hsic^*$, Thm.~\\ref{thm:geometric_interpret} shows that sample of the same class are mapped into the same point. Assuming that $\\fm$ has mapped the sample into $c$ points $\\alpha = [\\alpha_1, ..., \\alpha_c]$ that's different from the truth label\n $\\xi = [\\xi_1, ..., \\xi_c]$. Then the $\\mse$ objective is minimized by translating the $\\fm$ output by \n \\begin{equation}\n \\xi - \\alpha.\n \\end{equation}\n\\end{adjustwidth}\n\\end{proof}\n\n\\textbf{Corollary} \\ref{corollary:ce}: \n \\textit{Given $\\hsic_l \\rightarrow \\hsic^*$, the network output in RKHS solves $\\ce$ via a change of bases.}\n \n\\textbf{Assumptions, and Notations. }\n\\begin{enumerate}\n \\item \n $n $ is the number of samples.\n \\item\n $\\nclass$ is the number of classes.\n \\item\n $y_i \\in \\mathbb{R}^{\\nclass}$ is the ground truth label for the $i^{th}$ sample. It is one-hot encoded where only the $j^{th}$ element is 1 if $x_i$ belongs to the $j^{th}$ class, all other elements would be 0.\n \\item\n We denote $\\fm$ as the network, and $\\hat{y}_i \\in \\mathbb{R}^{\\nclass}$ as the network output where $\\hat{y}_i = \\fm(x_i)$. We also assume that $\\hat{y}_i$ is constrained on a probability simplex where $1 = \\hat{y}_i^T \\mathbf{1}_n$.\n \n \n \n \n \\item\n We denote the $j^{th}$ element of $y_i$, and $\\hat{y}_i$ as $y_{i,j}$ and $\\hat{y}_{i,j}$ respectively.\n \\item\n We define\n \\begin{addmargin}[1em]{2em\n \\textbf{Orthogonality Condition: }\n A set of samples $\\{\\hat{y}_1, ..., \\hat{y}_n\\}$ satisfies the orthogonality condition if\n \\begin{equation} \n \\begin{cases}\n \\langle \\hat{y_{i}}, \\hat{y_{j}}\\rangle =1 & \\forall\\quad i,j \\textrm{ same class} \\\\\n \\langle \\hat{y_{i}}, \\hat{y_{j}}\\rangle=0 & \\forall\\quad i,j \\textrm{ not in the same class}\n \\end{cases}.\n \\end{equation}\n \\end{addmargin}\n \\item\n We define the Cross-Entropy objective as \n \\begin{equation}\n \\underset{\\fm}{\\argmin} -\\sum_{i=1}^{n} \\sum_{j=1}^{\\nclass} y_{i,j} \\log(\\fm(x_{i})_{i,j}).\n \\end{equation}\n \\end{enumerate}\n\\begin{proof}\\hspace{1pt}\n\\begin{adjustwidth}{0.5cm}{0.0cm}\nFrom Thm.~\\ref{thm:geometric_interpret}, we know that the network $\\fm$ output, $\\{ \\hat{y}_1, \\hat{y}_2, ..., \\hat{y}_n \\}$, satisfy the orthogonality condition at $\\hsic^*$. Then there exists a set of orthogonal bases represented by $\\Xi = [\\xi_1, \\xi_2, ..., \\xi_c]$ that maps $\\{ \\hat{y}_1, \\hat{y}_2, ..., \\hat{y}_n \\}$ to simulate the output of a softmax layer. Let $\\xi_{i} = \\hat{y}_{j} , j\\in \\cS^{i}$, i.e., for the $i_{th}$ class we arbitrary choose one of the samples from this class and assigns $\\xi_i$ of that class to be equal to the sample's output. 
Realize that in our problem we have $\langle \hat{y}_{i},\hat{y}_{i}\rangle = 1$, so if $\langle \hat{y}_{i},\hat{y}_{j}\rangle = 1$, then
$\|\hat{y}_{i}-\hat{y}_{j}\|^2 = \langle \hat{y}_{i},\hat{y}_{i}\rangle - 2\langle \hat{y}_{i},\hat{y}_{j}\rangle + \langle \hat{y}_{j},\hat{y}_{j}\rangle = 0$, which is the same as $\hat{y}_{i}=\hat{y}_{j}$.
So this representation is well-defined and is independent of the choice of sample from each class, provided the samples satisfy the orthogonality condition.
Now we define the transformed labels $Y$ as
\begin{equation}
 Y = \hat{Y} \Xi.
\end{equation}
Note that $Y = [y_1, y_2, ..., y_n]^T$, where each $y_{i}$ is a one-hot vector representing the class membership of the $i^{th}$ sample among the $c$ classes.
Since, given $\Xi$ as the change of basis, we can match $\hat{Y}$ to $Y$ exactly, $\ce$ is minimized.
\end{adjustwidth}
\end{proof}
\end{appendices}

\section{Dataset Details}
\label{app:data_detail}

No samples were excluded from any of the datasets.

\textbf{Wine. }
 This dataset has 13 features, 178 samples, and 3 classes. The features are continuous and heavily unbalanced in magnitude. The dataset can be downloaded at \url{https://archive.ics.uci.edu/ml/datasets/wine}.

 \textbf{Divorce. }
 This dataset has 54 features, 170 samples, and 2 classes. The features are discrete and balanced in magnitude. The dataset can be downloaded at \url{https://archive.ics.uci.edu/ml/datasets/Divorce+Predictors+data+set}.

 \textbf{Car. }
 This dataset has 6 features, 1728 samples, and 2 classes. The features are discrete and balanced in magnitude. The dataset can be downloaded at \url{https://archive.ics.uci.edu/ml/datasets/Car+Evaluation}.

\textbf{Cancer. }
 This dataset has 9 features, 683 samples, and 2 classes. The features are discrete and unbalanced in magnitude. The dataset can be downloaded at \url{https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+(Diagnostic)}.

 \textbf{Face. }
 This dataset consists of images of 20 people in various poses. The 624 images are vectorized into 960 features.
 The dataset can be downloaded at
 \url{https://archive.ics.uci.edu/ml/datasets/CMU+Face+Images}.

\textbf{Random. }
 This dataset has 2 features, 80 samples, and 2 classes. It is generated from a Gaussian distribution where half of the samples are randomly labeled as 1 or 0; a minimal generation sketch is given at the end of this section.

\textbf{Adversarial. }
 This dataset has 2 features, 80 samples, and 2 classes. It is generated with the following code:
 \begin{lstlisting}
 #!/usr/bin/env python
 import numpy as np

 n = 40
 X1 = np.random.rand(n,2)
 X2 = X1 + 0.01*np.random.randn(n,2)

 X = np.vstack((X1,X2))
 Y = np.vstack(( np.zeros((n,1)), np.ones((n,1)) ))
 \end{lstlisting}

\textbf{CIFAR10 Test. } The test set images from CIFAR10 are preprocessed with a convolutional layer that outputs vectorized samples of $x_i \in \mathbb{R}^{10}$. This dataset has 10 features and 10,000 samples. The preprocessing code to map the images to $\mathbb{R}^{10}$ data is included in the supplementary. The link to download the data is at \url{https://www.cs.toronto.edu/~kriz/cifar.html}.

\textbf{Raman. } The dataset consists of 4306 samples, 700 frequencies, and 35 different cell types. Since this is proprietary data, a download link is not included.
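For reference, the Random dataset above is described only in prose; the following is a minimal sketch of one way to generate it, in the same style as the Adversarial listing. The exact generation script is not reproduced here, so details such as the feature scale and the precise labeling rule (here, a random half of the samples is assigned label 1) are assumptions rather than the released code.
 \begin{lstlisting}
 #!/usr/bin/env python
 import numpy as np

 n = 80
 X = np.random.randn(n, 2)        # 80 samples with 2 Gaussian features
 Y = np.zeros((n, 1))             # start with all labels 0
 half = np.random.permutation(n)[: n // 2]
 Y[half] = 1                      # randomly label half of the samples as 1
 \end{lstlisting}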
\n\n\\end{appendices}\n\\section{Optimal Gaussian \\texorpdfstring{$\\sigma $ }\\xspace for Maximum Kernel Separation}\n\\label{app:opt_sigma}\nAlthough the Gaussian kernel is the most common kernel choice for kernel methods, its $\\sigma$ value is a hyperparameter that must be tuned for each dataset. This work proposes to set the $\\sigma$ value based on the maximum kernel separation. The source code is made publicly available on \\url{https:\/\/github.com\/anonamous}.\n\n\nLet $X \\in \\mathbb{R}^{n \\times d}$ be a dataset of $n$ samples with $d$ features and let $Y \\in \\mathbb{R}^{n \\times \\nclass}$ be the corresponding one-hot encoded labels where $\\nclass$ denotes the number of classes. Given $\\kappa_X(\\cdot, \\cdot)$ and $\\kappa_Y(\\cdot,\\cdot)$ as two kernel functions that applies respectively to $X$ and $Y$ to construct kernel matrices $K_X \\in \\mathbb{R}^{n \\times n}$ and $K_Y \\in \\mathbb{R}^{n \\times n}$. Given a set $\\mathcal{S}$, we denote $|\\mathcal{S}|$ as the number of elements within the set. Also let $\\mathcal{S}$ and $\\mathcal{S}^c$ be sets of all pairs of samples of $(x_i,x_j)$ from a dataset $X$ that belongs to the same and different classes respectively, then the average kernel value for all $(x_i,x_j)$ pairs with the same class is\n\\begin{equation}\n d_{\\mathcal{S}} = \\frac{1}{|\\mathcal{S}|}\\sum_{i,j \\in \\mathcal{S}} e^{-\\frac{||x_i - x_j||^2}{2\\sigma^2}}\n\\end{equation}\nand the average kernel value for all $(x_i,x_j)$ pairs between different classes is\n\\begin{equation}\n d_{\\mathcal{S}^c} = \n \\frac{1}{|\\mathcal{S}^c|}\\sum_{i,j \\in \\mathcal{S}^c} e^{-\\frac{||x_i - x_j||^2}{2\\sigma^2}}. \n\\end{equation}\nWe propose to find the $\\sigma$ that maximizes the difference between $d_{\\mathcal{S}}$ and $d_{\\mathcal{S}^c}$ or \n\\begin{equation}\n \\underset{\\sigma}{\\max} \\quad \n \\frac{1}{|\\mathcal{S}|}\\sum_{i,j \\in \\mathcal{S}} e^{-\\frac{||x_i - x_j||^2}{2\\sigma^2}} - \n \\frac{1}{|\\mathcal{S}^c|}\\sum_{i,j \\in \\mathcal{S}^c} e^{-\\frac{||x_i - x_j||^2}{2\\sigma^2}}.\n \\label{eq:main_objective}\n\\end{equation}\nIt turns out that is expression can be computed efficiently. Let $g = \\frac{1}{|\\mathcal{S}|}$ and $\\bar{g} = \\frac{1}{|\\mathcal{S}^c|}$, and let $\\textbf{1}_{n \\times n} \\in \\mathbb{R}^{n \\times n}$ be a matrix of 1s, then we can define $Q$ as\n\\begin{equation}\n Q = -g K_Y + \\bar{g} (\\textbf{1}_{n \\times n} - K_Y).\n\\end{equation}\nOr $Q$ can be written more compactly as\n\\begin{equation}\n Q = \\bar{g} \\textbf{1}_{n \\times n} - (g + \\bar{g})K_Y. \n\\end{equation}\nGiven $Q$, Eq.~(\\ref{eq:main_objective}) becomes\n\\begin{equation}\n \\underset{\\sigma}{\\min} \\quad \n \\Tr(K_X Q).\n \\label{eq:obj_compact}\n\\end{equation}\nThis objective can be efficiently solved with BFGS. \n\nBelow in Fig.~\\ref{fig:max_kernel_separation}, we plot out the average within cluster kernel and the between cluster kernel values as we vary $\\sigma$. From the plot, we can see that the maximum separation is discovered via BFGS. \n \\begin{figure}[h]\n \\centering\n \\includegraphics[width=10cm,height=7cm]{img\/opt_kernel_separation.png}\n \\caption{Maximum Kernel separation.}\n \\label{fig:max_kernel_separation}\n \\end{figure} \n \n \n\\textbf{Relation to HSIC. } \nFrom Eq.~(\\ref{eq:obj_compact}), we can see that the $\\sigma$ that causes maximum kernel separation is directly related to HSIC. 
Given that the HSIC objective is normally written as
\begin{equation}
 \underset{\sigma}{\max} \quad
 \Tr(K_X H K_Y H),
\end{equation}
by comparing $H K_Y H$ with $-Q$, we can see how the two formulations are related. While the maximum kernel separation weights each sample pair equally, HSIC weights the pairs differently.
We also notice that the elements of $H K_Y H$ are positive/negative for $(x_i,x_j)$ pairs that are within/between classes respectively. Therefore, the optimal $\sigma$ should be relatively close for both objectives. Figure~\ref{fig:max_HSIC} below shows the HSIC value as we vary $\sigma$. Notice how the optimal $\sigma$ is almost equivalent to the solution from maximum kernel separation. For the purpose of \RS, we use the $\sigma$ that maximizes the HSIC value.
 \begin{figure}[h]
 \centering
 \includegraphics[width=10cm,height=7cm]{img/opt_HSIC.png}
 \caption{Maximal HSIC.}
 \label{fig:max_HSIC}
 \end{figure}
\end{appendices}
\section{\texorpdfstring{$W_l$ }\xspace Dimensions for each 10 Fold of each Dataset}
\label{app:W_dimensions}
We report the input and output dimensions of each $W_l$ for every layer of each dataset in the form of $(\alpha, \beta)$; the corresponding dimension becomes $W_l \in \mathbb{R}^{\alpha \times \beta}$. Since each dataset is split into 10 folds, the network structure for each fold is reported. We note that the input dimension of the 1st layer is the dimension of the original data. After the first layer, the input dimension of each layer is the width of the RFF; here we use 300.

The $\beta$ value is chosen during the ISM algorithm. By keeping only the dominant eigenvectors of the $\Phi$ matrix, the output dimension of each layer corresponds to the rank of $\Phi$. It can be seen from each dataset that the first layer significantly expands the rank. The expansion is generally followed by a compression to fewer and fewer dominant eigenvalues.
These results conform with the observations made by \\citet{montavon2011kernel} and \\citet{ansuini2019intrinsic}.\n\n\\begin{tabular}{ll}\n\\centering\n\\tiny\n\\setlength{\\tabcolsep}{7.0pt}\n\\renewcommand{\\arraystretch}{1.2}\n\\begin{tabular}{cccccc|}\n\t\\hline\nData & Layer 1 & Layer 2 & Layer 3 & Layer 4 \\\\ \n\t\\hline\nadversarial 1 & (2, 2) & (300, 61) & (300, 35) \\\\ \nadversarial 2 & (2, 2) & (300, 61) & (300, 35) \\\\ \nadversarial 3 & (2, 2) & (300, 61) & (300, 8) & (300, 4) \\\\ \nadversarial 4 & (2, 2) & (300, 61) & (300, 29) \\\\ \nadversarial 5 & (2, 2) & (300, 61) & (300, 29) \\\\ \nadversarial 6 & (2, 2) & (300, 61) & (300, 7) & (300, 4) \\\\ \nadversarial 7 & (2, 2) & (300, 61) & (300, 34) \\\\ \nadversarial 8 & (2, 2) & (300, 12) & (300, 61) & (300, 30) \\\\ \nadversarial 9 & (2, 2) & (300, 61) & (300, 33) \\\\ \nadversarial 10 & (2, 2) & (300, 61) & (300, 33) \\\\ \n\t\\hline\n\\end{tabular}\n&\n\\centering\n\\tiny\n\\setlength{\\tabcolsep}{3.0pt}\n\\renewcommand{\\arraystretch}{1.2}\n\\begin{tabular}{ccccc|}\n\t\\hline\nData & Layer 1 & Layer 2 & Layer 3 \\\\ \n\t\\hline\nRandom 1 & (3, 3) & (300, 47) & (300, 25) \\\\ \nRandom 2 & (3, 3) & (300, 46) & (300, 25) \\\\ \nRandom 3 & (3, 3) & (300, 46) & (300, 25) \\\\ \nRandom 4 & (3, 3) & (300, 47) & (300, 4) \\\\ \nRandom 5 & (3, 3) & (300, 47) & (300, 25) \\\\ \nRandom 6 & (3, 3) & (300, 45) & (300, 23) \\\\ \nRandom 7 & (3, 3) & (300, 45) & (300, 25) \\\\ \nRandom 8 & (3, 3) & (300, 45) & (300, 21) \\\\ \nRandom 9 & (3, 3) & (300, 45) & (300, 26) \\\\ \nRandom 10 & (3, 3) & (300, 47) & (300, 25) \\\\ \n\t\\hline\n\\end{tabular}\n\\end{tabular}\n\n\n\n\n\\begin{tabular}{ll}\n\\tiny\n\\setlength{\\tabcolsep}{3.0pt}\n\\renewcommand{\\arraystretch}{1.2}\n\\begin{tabular}{cccccccc|}\n\t\\hline\nData & Layer 1 & Layer 2 & Layer 3 & Layer 4 & Layer 5 & Layer 6 \\\\ \n\t\\hline\nspiral 1 & (2, 2) & (300, 15) & (300, 6) & (300, 7) & (300, 6) \\\\ \nspiral 2 & (2, 2) & (300, 13) & (300, 6) & (300, 7) & (300, 6) & (300, 6) \\\\ \nspiral 3 & (2, 2) & (300, 12) & (300, 6) & (300, 7) & (300, 6) & (300, 6) \\\\ \nspiral 4 & (2, 2) & (300, 13) & (300, 6) & (300, 7) & (300, 6) & (300, 6) \\\\ \nspiral 5 & (2, 2) & (300, 13) & (300, 6) & (300, 7) & (300, 6) \\\\ \nspiral 6 & (2, 2) & (300, 14) & (300, 6) & (300, 7) & (300, 6) \\\\ \nspiral 7 & (2, 2) & (300, 14) & (300, 6) & (300, 7) & (300, 6) \\\\ \nspiral 8 & (2, 2) & (300, 14) & (300, 6) & (300, 7) & (300, 6) & (300, 6) \\\\ \nspiral 9 & (2, 2) & (300, 13) & (300, 6) & (300, 7) & (300, 6) \\\\ \nspiral 10 & (2, 2) & (300, 14) & (300, 6) & (300, 7) & (300, 6) \\\\ \n\t\\hline\n\\end{tabular}\n&\n\\tiny\n\\setlength{\\tabcolsep}{3.0pt}\n\\renewcommand{\\arraystretch}{1.2}\n\\begin{tabular}{cccccccc}\n\t\\hline\nData & Layer 1 & Layer 2 & Layer 3 & Layer 4 & Layer 5 & Layer 6 \\\\ \n\t\\hline\nwine 1 & (13, 11) & (300, 76) & (300, 6) & (300, 7) & (300, 6) & (300, 6) \\\\ \nwine 2 & (13, 11) & (300, 76) & (300, 6) & (300, 6) & (300, 6) & (300, 6) \\\\ \nwine 3 & (13, 11) & (300, 75) & (300, 6) & (300, 7) & (300, 6) & (300, 6) \\\\ \nwine 4 & (13, 11) & (300, 76) & (300, 6) & (300, 6) & (300, 6) & (300, 6) \\\\ \nwine 5 & (13, 11) & (300, 74) & (300, 6) & (300, 7) & (300, 6) & (300, 6) \\\\ \nwine 6 & (13, 11) & (300, 74) & (300, 6) & (300, 6) & (300, 6) & (300, 6) \\\\ \nwine 7 & (13, 11) & (300, 74) & (300, 6) & (300, 6) & (300, 6) & (300, 6) \\\\ \nwine 8 & (13, 11) & (300, 75) & (300, 6) & (300, 7) & (300, 6) & (300, 6) \\\\ \nwine 9 & (13, 11) & (300, 75) & (300, 6) & (300, 8) 
& (300, 6) & (300, 6) \\\\ \nwine 10 & (13, 11) & (300, 76) & (300, 6) & (300, 7) & (300, 6) & (300, 6) \\\\ \n\t\\hline\n\\end{tabular}\n\\end{tabular}\n\n\\begin{tabular}{ll}\n\\tiny\n\\setlength{\\tabcolsep}{3.0pt}\n\\renewcommand{\\arraystretch}{1.2}\n\\begin{tabular}{cccccccc|}\n\t\\hline\nData & Layer 1 & Layer 2 & Layer 3 & Layer 4 & Layer 5 & Layer 6 \\\\ \n\t\\hline\ncar 1 & (6, 6) & (300, 96) & (300, 6) & (300, 8) & (300, 6) \\\\ \ncar 2 & (6, 6) & (300, 96) & (300, 6) & (300, 8) & (300, 6) \\\\ \ncar 3 & (6, 6) & (300, 91) & (300, 6) & (300, 8) & (300, 6) \\\\ \ncar 4 & (6, 6) & (300, 88) & (300, 6) & (300, 8) & (300, 6) & (300, 6) \\\\ \ncar 5 & (6, 6) & (300, 94) & (300, 6) & (300, 8) & (300, 6) \\\\ \ncar 6 & (6, 6) & (300, 93) & (300, 6) & (300, 7) \\\\ \ncar 7 & (6, 6) & (300, 92) & (300, 6) & (300, 8) & (300, 6) \\\\ \ncar 8 & (6, 6) & (300, 95) & (300, 6) & (300, 7) & (300, 6) \\\\ \ncar 9 & (6, 6) & (300, 96) & (300, 6) & (300, 9) & (300, 6) \\\\ \ncar 10 & (6, 6) & (300, 99) & (300, 6) & (300, 8) & (300, 6) \\\\ \n\t\\hline\n\\end{tabular}\n&\n\\tiny\n\\setlength{\\tabcolsep}{3.0pt}\n\\renewcommand{\\arraystretch}{1.2}\n\\begin{tabular}{ccccccc|}\n\t\\hline\nData & Layer 1 & Layer 2 & Layer 3 & Layer 4 & Layer 5 \\\\ \n\t\\hline\ndivorce 1 & (54, 35) & (300, 44) & (300, 5) & (300, 5) \\\\ \ndivorce 2 & (54, 35) & (300, 45) & (300, 4) & (300, 4) \\\\ \ndivorce 3 & (54, 36) & (300, 49) & (300, 6) & (300, 6) \\\\ \ndivorce 4 & (54, 36) & (300, 47) & (300, 7) & (300, 6) \\\\ \ndivorce 5 & (54, 35) & (300, 45) & (300, 6) & (300, 6) \\\\ \ndivorce 6 & (54, 36) & (300, 47) & (300, 6) & (300, 6) \\\\ \ndivorce 7 & (54, 35) & (300, 45) & (300, 6) & (300, 6) & (300, 4) \\\\ \ndivorce 8 & (54, 36) & (300, 47) & (300, 6) & (300, 7) & (300, 4) \\\\ \ndivorce 9 & (54, 36) & (300, 47) & (300, 5) & (300, 5) \\\\ \ndivorce 10 & (54, 36) & (300, 47) & (300, 6) & (300, 6) \\\\ \n\t\\hline\n\\end{tabular}\n\\end{tabular}\n\n\n\\begin{table}[h]\n\\tiny\n\\setlength{\\tabcolsep}{3.0pt}\n\\renewcommand{\\arraystretch}{1.2}\n\\begin{tabular}{cccccccccccc|}\n\t\\hline\nData & Layer 1 & Layer 2 & Layer 3 & Layer 4 & Layer 5 & Layer 6 & Layer 7 & Layer 8 & Layer 9 & Layer 10 \\\\ \n\t\\hline\ncancer 1 & (9, 8) & (300, 90) & (300, 5) & (300, 6) & (300, 6) & (300, 5) & (300, 4) & (300, 5) & (300, 6) & (300, 6) \\\\ \ncancer 2 & (9, 8) & (300, 90) & (300, 6) & (300, 7) & (300, 8) & (300, 11) & (300, 8) & (300, 4) \\\\ \ncancer 3 & (9, 8) & (300, 88) & (300, 5) & (300, 6) & (300, 7) & (300, 7) & (300, 6) & (300, 4) \\\\ \ncancer 4 & (9, 8) & (300, 93) & (300, 6) & (300, 7) & (300, 9) & (300, 11) & (300, 8) \\\\ \ncancer 5 & (9, 8) & (300, 93) & (300, 9) & (300, 10) & (300, 10) & (300, 11) & (300, 9) & (300, 7) \\\\ \ncancer 6 & (9, 8) & (300, 92) & (300, 7) & (300, 8) & (300, 8) & (300, 7) & (300, 7) \\\\ \ncancer 7 & (9, 8) & (300, 90) & (300, 4) & (300, 4) & (300, 5) & (300, 6) & (300, 6) & (300, 6) & (300, 6) \\\\ \ncancer 8 & (9, 8) & (300, 88) & (300, 5) & (300, 6) & (300, 7) & (300, 8) & (300, 7) & (300, 6) \\\\ \ncancer 9 & (9, 8) & (300, 88) & (300, 5) & (300, 7) & (300, 7) & (300, 7) & (300, 7) \\\\ \ncancer 10 & (9, 8) & (300, 97) & (300, 9) & (300, 11) & (300, 12) & (300, 13) & (300, 6) \\\\ \n\t\\hline\n\\end{tabular}\n\\end{table}\n\n\\begin{table}[h]\n\\tiny\n\\setlength{\\tabcolsep}{3.0pt}\n\\renewcommand{\\arraystretch}{1.2}\n\\begin{tabular}{cccccc|}\n\t\\hline\nData & Layer 1 & Layer 2 & Layer 3 & Layer 4 \\\\ \n\t\\hline\nface 1 & (960, 233) & (300, 74) & (300, 73) & (300, 46) \\\\ 
\nface 2 & (960, 231) & (300, 75) & (300, 73) & (300, 43) \\\\ \nface 3 & (960, 231) & (300, 76) & (300, 73) & (300, 44) \\\\ \nface 4 & (960, 232) & (300, 76) & (300, 74) & (300, 44) \\\\ \nface 5 & (960, 231) & (300, 77) & (300, 73) & (300, 43) \\\\ \nface 6 & (960, 232) & (300, 74) & (300, 72) & (300, 47) \\\\ \nface 7 & (960, 232) & (300, 76) & (300, 73) & (300, 45) \\\\ \nface 8 & (960, 230) & (300, 74) & (300, 74) & (300, 44) \\\\ \nface 9 & (960, 233) & (300, 76) & (300, 76) & (300, 45) \\\\ \nface 10 & (960, 231) & (300, 76) & (300, 70) & (300, 43) \\\\ \n\t\\hline\n\\end{tabular}\n\\end{table}\n\n\\end{appendices}\n\n\\section{Sigma Values used for Random and Adversarial Simulation}\n\\label{app:sigma_values}\n\nThe simulation of Thm.~\\ref{thm:hsequence} as shown in Fig.~\\ref{fig:thm_proof} spread the improvement across multiple layers. The $\\sigma_l$ and $\\hsic_l$ values are recorded here. We note that $\\sigma_l$ are reasonably large and not approaching 0 and the improvement of $\\hsic_l$ is monotonic. \n\n\\begin{figure}[h]\n\\center\n \\includegraphics[width=9cm]{img\/Random_sigma.png}\n \\caption{}\n\\end{figure} \n\n\\begin{figure}[h]\n\\center\n \\includegraphics[width=9cm]{img\/adv.png}\n \\caption{}\n\\end{figure} \n\n\n\nGiven a sufficiently small $\\sigma_0$ and $\\sigma_1$, Thm.~\\ref{thm:hsequence} claims that it can come arbitrarily close to the global optimal using a minimum of 2 layers. We here simulate 2 layers using a relatively small $\\sigma$ values ($\\sigma_0 = 10^{-5}$) on the Random (left) and Adversarial (right) data and display the results of the 2 layers below. Notice that given 2 layer, it generated a clearly separable clusters that are pushed far apart.\n\n \\begin{figure}[!h]\n \\begin{minipage}[H]{6.0cm}\n \\includegraphics[width=6cm]{img\/random_2_layer.png}\n \\caption{Random Dataset with 2\\\\layers and $\\sigma=10^{-5}$}\n \\end{minipage}%\n \\begin{minipage}[H]{6.0cm}\n \\includegraphics[width=6cm]{img\/adversarial_2_layer.png}\n \\caption{Adversarial Dataset with 2 \\\\layers and $\\sigma=10^{-5}$}\n \\end{minipage}\n \\end{figure} \n\n\\end{appendices}\n\\section{Evaluation Metrics Graphs}\n\\label{app:metric_graphs}\n\n\\begin{figure}[h]\n\\center\n \\includegraphics[width=10cm]{img\/r1.png}\n \\caption{}\n\\end{figure} \n\n\\begin{figure}[h]\n\\center\n \\includegraphics[width=10cm]{img\/r2.png}\n \\caption{Figures of key metrics for all datasets as samples progress through the network. It is important to notice the uniformly and monotonically increasing \\RS for each plot\n since this guarantees a converging kernel\/risk sequence. As the $\\mathcal{T}$ approach 0, samples of the same\/difference classes in IDS are being pulled into a single point or pushed maximally apart respectively. As $C$ approach 0, samples of the same\/difference classes in RKHS are being pulled into 0 or $\\frac{\\pi}{2}$ cosine similarity respectively.}\n\\end{figure} \n\n\\end{appendices}\n\n\\section{Graphs of Kernel Sequences}\n\\label{app:kernel_sequence_graph}\n\nA representation of the \\KS are displayed in the figures below for each dataset. The samples of the kernel matrix are previously organized to form a block structure by placing samples of the same class adjacent to each other. Since the Gaussian kernel is restricted to values between 0 and 1, we let white and dark blue be 0 and 1 respectively where the gradients reflect values in between. 
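For completeness, a minimal sketch of how each such kernel matrix can be computed and rendered is given below, assuming the layer outputs are stored in an array \texttt{R} and the labels in \texttt{y}; the specific color map is an assumption made only for this illustration.
\begin{lstlisting}
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial.distance import pdist, squareform

def plot_class_sorted_kernel(R, y, sigma):
    """Gaussian kernel of the layer outputs R, with samples grouped by class."""
    order = np.argsort(y)                        # place same-class samples adjacent
    R = R[order]
    D2 = squareform(pdist(R, 'sqeuclidean'))     # pairwise squared distances
    K = np.exp(-D2 / (2.0 * sigma ** 2))         # Gaussian kernel, values in [0, 1]
    plt.imshow(K, cmap='Blues', vmin=0, vmax=1)  # white = 0, dark blue = 1
    plt.colorbar()
    plt.show()
\end{lstlisting}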
Our theorems predict that the \\KS will evolve from an uninformative kernel into a highly discriminating kernel of perfect block structures. \n\n\\begin{figure}[h]\n\\center\n \\includegraphics[width=9cm]{img\/wine_kernel.png}\n \\caption{The kernel sequence for the wine dataset.}\n\\end{figure} \n\n\\begin{figure}[h]\n\\center\n \\includegraphics[width=9cm]{img\/cancer_kernel.png}\n \\caption{The kernel sequence for the cancer dataset.}\n\\end{figure} \n\n\\begin{figure}[h]\n\\center\n \\includegraphics[width=9cm]{img\/adv_kernel.png}\n \\caption{The kernel sequence for the Adversarial dataset.}\n\\end{figure} \n\n\\begin{figure}[h]\n\\center\n \\includegraphics[width=9cm]{img\/car_kernel.png}\n \\caption{The kernel sequence for the car dataset.}\n\\end{figure} \n\n\\begin{figure}[h]\n\\center\n \\includegraphics[width=11cm]{img\/face_kernel.png}\n \\caption{The kernel sequence for the face dataset.}\n\\end{figure} \n\n\\begin{figure}[h]\n\\center\n \\includegraphics[width=9cm]{img\/divorce_kernel.png}\n \\caption{The kernel sequence for the divorce dataset.}\n\\end{figure} \n\n\\begin{figure}[h]\n\\center\n \\includegraphics[width=11cm]{img\/spiral_kernel.png}\n \\caption{The kernel sequence for the spiral dataset.}\n\\end{figure} \n\n\n\n\\begin{figure}[h]\n\\center\n \\includegraphics[width=11cm]{img\/random_kernel.png}\n \\caption{The kernel sequence for the Random dataset.}\n\\end{figure} \n\\end{appendices}\n\\section{On Generalization. } Besides being an optimum solution, $W_l^*$ exhibits many advantages over $W_s$. For example, while $W_s$ experimentally performs well, $W^*$ converges with fewer layers and superior generalization. This raises a well-known question on generalization. It is known that overparameterized MLPs can generalize even without any explicit regularizer \\citep{Zhang2017UnderstandingDL}. This observation contradicts classical learning theory and has been a longstanding puzzle \\citep{cao2019generalization,brutzkus2017sgd,allen2019learning}. \nTherefore, by being overparameterized with an infinitely wide network, \\kn's ability under HSIC to generalize raises similar questions. In both cases, $W_s$ and $W^*$, the HSIC objective employs an infinitely wide network that should result in overfitting. We ask theoretically, under our framework, what makes HSIC and $W^*$ special? \n\nRecently, \\citet{poggio2020complexity} have proposed that traditional MLPs generalize because gradient methods implicitly regularize the normalized weights given an exponential objective (like our HSIC). We discovered a similar impact the process of finding $W^*$ has on HSIC, i.e., HSIC can be reformulated to isolate out $n$ functions $[D_1(W_l), ..., D_n(W_l)]$ that act as a penalty term during optimization. Let $\\cS_i$ be the set of samples that belongs to the $i_{th}$ class and let $\\cS^c_i$ be its complement, then each function $D_i(W_l)$ is defined as \\begin{equation}\n D_i(W_l) = \n \\frac{1}{\\sigma^2}\n \\sum_{j \\in \\cS_i}\n \\Gij \\kf_{W_l}(r_i,r_j)\n -\n \\frac{1}{\\sigma^2}\n \\sum_{j \\in \\cS^c_i}\n |\\Gij| \\kf_{W_l}(r_i,r_j).\n\\end{equation}\nNotice that $D_i(W_l)$ is simply \\eq{eq:similarity_hsic} for a single sample scaled by $\\frac{1}{\\sigma^2}$. Therefore, improving $W_l$ also leads to an increase and decrease of $\\kf_{W_l}(r_i,r_j)$ associated with $\\cS_i$ and $\\cS^c_i$ in \\eq{eq:penalty_term}, thereby increasing the size of the penalty term $D_i(W_l)$. 
To appreciate how $D_i(W_l)$ penalizes $\\hsic$, we propose an equivalent formulation in the theorem below with its derivation in App~\\ref{app:thm:regularizer}.\n\\begin{theorem}\n\\eq{eq:similarity_hsic} is equivalent to \n\\begin{equation}\n \\max_{W_l} \n \\sum_{i,j}\n \\frac{\\Gij}{\\sigma^2}\n \\ISMexp\n (r_i^TW_lW_l^Tr_j) \n -\n \n \\sum_{i}\n D_i(W_l)\n ||W_l^Tr_i||_2.\n\\end{equation}\n\\end{theorem}\nBased on Thm.~\\ref{thm:regularizer}, $D_i(W_l)$ adds a negative variable cost to the sample norm, $||W_l^Tr_i||_2$, prescribing an implicit regularizer on HSIC. As $W_l$ improve HSIC, it also imposes a heavier penalty on \\eq{eq:generalization_formulation}, severely constraining $W_l$.\n\n\\end{appendices}\n\n\\section{Proof for Theorem \\ref{thm:regularizer}}\n\\label{app:thm:regularizer}\n\n\\textbf{Theorem \\ref{thm:regularizer}: }\n\\textit{\\eq{eq:similarity_hsic} objective is equivalent to }\n\\begin{equation}\n \\sum_{i,j}\n \\Gij \\ISMexp\n (r_i^TWW^Tr_j)\n -\n \\sum_{i}\n D_i(W)\n ||W^Tr_i||_2.\n\\end{equation}\n\n\\begin{proof}\nLet $A_{i,j} = (r_i - r_j)(r_i - r_j)^T$. Given the Lagranian of the HSIC objective as \n\\begin{equation}\n \\mathcal{L} = -\\sum_{i,j} \\Gij \\ISMexp - \\Tr[\\Lambda(W^TW - I)].\n\\end{equation}\nOur layer wise HSIC objective becomes \n\\begin{equation}\n \\min_W -\\sum_{i,j} \\Gij \\ISMexp - \\Tr[\\Lambda(W^TW - I)].\n \\label{obj:hsic}\n\\end{equation}\nWe take the derivative of the Lagrangian, the expression becomes\n \\begin{equation} \\label{eq:gradient_of_lagrangian}\n \\nabla_W \\mathcal{L} ( W, \\Lambda) = \\sum_{i, j} \\frac{\\Gamma_{i,\n j}}{\\sigma^2} e^{- \\frac{\\Tr (W^T A_{i, j} W)}{2 \\sigma^2}} A_{i, j} W\n - 2 W \\Lambda. \n \\end{equation}\nSetting the gradient to 0, and consolidate some scalar values into $\\hat{\\Gamma}_{i,j}$, we get the expression\n \\begin{align} \n \\left[\n \\sum_{i, j} \\frac{\\Gamma_{i,\n j}}{2\\sigma^2} e^{- \\frac{\\Tr (W^T A_{i, j} W)}{2 \\sigma^2}} A_{i, j} \n \\right]\n W\n & = W \\Lambda \\\\\n \\left[\\frac{1}{2}\n \\sum_{i,j} \\hat{\\Gamma}_{i,j} A_{i,j}\n \\right] W\n & = W \\Lambda \\\\\n \\mathcal{Q} W\n & = W \\Lambda. \n \\end{align}\nFrom here, we see that the optimal solution is an eigenvector of $\\mathcal{Q}$. Based on ISM, it further proved that the optimal solution is not just any eigenvector, but the eigenvectors associated with the smallest values of $\\mathcal{Q}$. 
From this logic, ISM solves objective~(\\ref{obj:hsic}) with a surrogate objective\n\\begin{equation}\n    \\min_W \\quad\n    \\Tr \\left( W^T\n    \\left[\\frac{1}{2}\n    \\sum_{i,j} \\hat{\\Gamma}_{i,j} A_{i,j}\n    \\right] W \\right) \\quad \\st W^TW=I.\n    \\label{obj:min_2}\n\\end{equation}\nGiven $D_{\\hat{\\Gamma}}$ as the degree matrix of $\\hat{\\Gamma}$ and $R = [r_1, r_2, ...]^T$, ISM further shows that Eq.~(\\ref{obj:min_2}) can be rewritten as\n\\begin{align}\n    \\min_W \\quad\n    \\Tr \\left( W^T R^T\n    \\left[\n        D_{\\hat{\\Gamma}} - \\hat{\\Gamma}\n    \\right] R W \\right) &\\quad \\st W^TW=I \\\\\n    \\max_W \\quad\n    \\Tr \\left( W^T R^T\n    \\left[\n        \\hat{\\Gamma} - \n        D_{\\hat{\\Gamma}}\n    \\right] R W \\right) &\\quad \\st W^TW=I \\\\ \n    \\max_W \\quad\n    \\Tr \\left( W^T R^T\n        \\hat{\\Gamma} \n     R W \\right) \n     -\n    \\Tr \\left( W^T R^T\n        D_{\\hat{\\Gamma}} \n     R W \\right) \n     &\\quad \\st W^TW=I \\\\ \n    \\max_W \\quad\n    \\Tr \\left( \n        \\hat{\\Gamma} \n     R W  W^T R^T\\right) \n     -\n    \\Tr \\left( \n        D_{\\hat{\\Gamma}} \n     R W  W^T R^T\\right) \n     &\\quad \\st W^TW=I \\\\ \n    \\max_W \\quad\n    \\sum_{i,j}\n        \\hat{\\Gamma}_{i,j}\n        [R W  W^T R^T]_{i,j}\n     -\n    \\sum_{i,j}\n        [D_{\\hat{\\Gamma}}]_{i,j}\n        [R W  W^T R^T]_{i,j} \n     &\\quad \\st W^TW=I. \n\\end{align}\nSince the jump from \\eq{obj:min_2} can be intimidating for those not familiar with the literature, we include a more detailed derivation in App.~\\ref{app:matrix_derivation}.\n\nNote that the degree matrix $D_{\\hat{\\Gamma}}$ only has non-zero diagonal elements; all of its off-diagonal elements are 0. Given $[RWW^TR^T]_{i,j} = (r_i^TWW^Tr_j)$, the objective becomes\n\\begin{equation}\n    \\max_W \\quad\n    \\sum_{i,j}\n        \\hat{\\Gamma}_{i,j}\n        (r_i^TWW^Tr_j)\n     -\n    \\sum_{i}\n        D_i(W)\n        ||W^Tr_i||_2^2\n     \\quad \\st W^TW=I. \n\\end{equation}\nHere, we treat $D_{i}(W)$ as a penalty weight on the squared norm $||W^Tr_i||_2^2$ of every sample. \n\\end{proof}\n\n\nTo better understand the behavior of $D_i(W)$, note that the $\\hat{\\Gamma}$ matrix looks like\n\\begin{equation}\n    \\hat{\\Gamma} = \\frac{1}{\\sigma^2} \\begin{bmatrix}\n        \\begin{bmatrix}\n            \\Gamma_{\\mathcal{S}} \\ISMexp\n        \\end{bmatrix}\n        & \n        \\begin{bmatrix}\n            -|\\Gamma_{\\mathcal{S}^c}| \\ISMexp\n        \\end{bmatrix}\n        &\n        ... \\\\\n        \\begin{bmatrix}\n            -|\\Gamma_{\\mathcal{S}^c}| \\ISMexp\n        \\end{bmatrix}\n        & \n        \\begin{bmatrix}\n            \\Gamma_{\\mathcal{S}} \\ISMexp\n        \\end{bmatrix}        \n        & \n        ... \\\\\n        ... &\n        ... &\n        ...\n    \\end{bmatrix}.\n\\end{equation}\nThe diagonal blocks contain all $\\Gij$ elements that belong to $\\cS$, and the off-diagonal blocks contain the elements that belong to $\\cS^c$. Each penalty term is the summation of its corresponding row. Hence, we can write out the penalty term as\n\\begin{equation}\n    D_i(W_l) = \n    \\frac{1}{\\sigma^2}\n    \\sum_{j \\in \\cS|i}\n    \\Gij \\kf_{W_l}(r_i,r_j)\n    -\n    \\frac{1}{\\sigma^2}\n    \\sum_{j \\in \\cS^c|i}\n    |\\Gij| \\kf_{W_l}(r_i,r_j).\n\\end{equation}\nThis shows that as $W$ improves the objective, the penalty term also increases. In fact, at the extreme as $\\hsic_l \\rightarrow \\hsic^*$, all of the negative terms vanish, all of the positive terms are maximized, and this matrix approaches \n\\begin{equation}\n    \\hat{\\Gamma}^* = \\frac{1}{\\sigma^2} \\begin{bmatrix}\n        \\begin{bmatrix}\n            \\Gamma_{\\mathcal{S}} \n        \\end{bmatrix}\n        & \n        \\begin{bmatrix}\n            0\n        \\end{bmatrix}\n        &\n        ... \\\\\n        \\begin{bmatrix}\n            0\n        \\end{bmatrix}\n        & \n        \\begin{bmatrix}\n            \\Gamma_{\\mathcal{S}} \n        \\end{bmatrix}        \n        & \n        ... \\\\\n        ... &\n        ... &\n        ...\n    \\end{bmatrix}.\n\\end{equation}\nFrom the matrix $\\hat{\\Gamma}^*$ and the definition of $D_i(W_l)$, we see that as the kernel values $\\mathcal{K}_W$ between samples in $\\mathcal{S}$ increase, the positive terms of $D_i(W_l)$ grow while the negative terms associated with $\\mathcal{S}^c$ vanish. Since $D_i(W)$ is the $i^{th}$ diagonal entry of the degree matrix of $\\hat{\\Gamma}$, we see that as $\\hsic_l \\rightarrow \\hsic^*$, we have\n\\begin{equation}\n    D^*_i(W) > D_i(W). \n\\end{equation}\n\\end{appendices} \n\\section{How This Work Relates to Climate Change}\n\\label{app:motivation} \n    Finding an alternative to BP also has significant climate implications. \\citet{strubell2019energy} have shown that some standard AI models can emit over 626,000 pounds of carbon dioxide, a carbon footprint five times greater than the lifetime emissions of an average car. This level of emission is simply not sustainable in light of the field's continual explosive growth. Therefore, the environmental impact of BP necessitates a cheaper alternative. Looking at nature, we can be inspired by the brain's ability to learn using only a fraction of the energy. Perhaps artificial neurons can also train without the high energy cost to the environment. This is the moral and foundational motivation for this work in identifying the existence of $W_s$. A closed-form solution holds the potential to significantly reduce the computational requirement and carbon footprint. Even if our work ultimately fails to mimic the brain, we hope to inspire the community to identify other closed-form solutions and go beyond BP. \\\\ \\\\\n    This paper aims to promote the discussion of viewing backpropagation alternatives not only as an academic exercise but also as a climate imperative. Yet, this topic is largely ignored by the community. The authors believe the energy costs of training Neural Networks are having a detrimental climate impact and should be an added topic of interest. The earth also needs an advocate; why not us? Therefore, we, as a community, must begin addressing how we can ameliorate our own carbon footprint. This exploratory work aims to share a potential path forward for further research that may address these concerns with the community. While $W_s$ is still not ready for commercial usage, we sincerely hope that the community begins to build novel algorithms on top of our work on \\RS and identifies a simpler and cheaper path to train our networks. 
\\\\\n  \n\\end{appendices} \n\\section{Derivation for \\texorpdfstring{$\\sum_{i,j} \\Psi_{i,j} (x_i - x_j)(x_i - x_j)^T = 2X^T(D_\\Psi - \\Psi)X$ }\\xspace }\n\\label{app:matrix_derivation}\nSince $\\Psi$ is a symmetric matrix, and $A_{i, j} = ( x_i - x_j) ( x_i -\nx_j)^T $, we can\nrewrite the expression as\n\\[ \\begin{array}{lll}\n     \\sum_{i, j} \\Psi_{i, j} A_{i, j} & = & \\sum_{i, j} \\Psi_{i, j} ( x_i -\n     x_j) ( x_i - x_j)^T\\\\\n     & = & \\sum_{i, j} \\Psi_{i, j} ( x_i x_i^T - x_j x_i^T - x_i x_j^T +\n     x_j x_j^T)\\\\\n     & = & 2 \\sum_{i, j} \\Psi_{i, j} ( x_i x_i^T - x_j x_i^T)\\\\\n     & = & \\left[ 2 \\sum_{i, j} \\Psi_{i, j} ( x_i x_i^T) \\right] - \\left[ 2\n     \\sum_{i, j} \\Psi_{i, j} ( x_i x_j^T) \\right] .\n   \\end{array} \\]\nIf we expand the 1st term, we get\n\\begin{align}\n    2\\sum_{i}^n \\sum_{j}^n \n        \\Psi_{i, j} ( x_i x_i^T)\n    & = \n    2 \\sum_i\n        \\Psi_{i, 1} ( x_i x_i^T) +\n        \\ldots + \\Psi_{i, n} ( x_i x_i^T) \\\\\n    &= \n    2 \\sum_{i}^n \n        [\\Psi_{i, 1} + \\Psi_{i, 2} + ...]\n        x_i x_i^T \\\\\n    &= \n    2\\sum_{i}^n \n        d_i \n        x_i x_i^T \\\\ \n    &=\n    2 X^TD_\\Psi X\n\\end{align}\nwhere $d_i = \\sum_j \\Psi_{i, j}$ is the $i$th diagonal entry of the degree matrix $D_\\Psi$. Given $\\Psi_i$ as the $i$th row of $\\Psi$, next we look at the 2nd term\n\\begin{align}\n    2 \\sum_i \\sum_j \\Psi_{i, j} x_i x_j^T\n    &=\n    2 \\sum_i \\Psi_{i, 1} x_i x_1^T\n    + \\Psi_{i, 2} x_i x_2^T\n    + \\Psi_{i, 3} x_i x_3^T + ...\\\\\n    &=\n    2 \\sum_i x_i (\\Psi_{i, 1}  x_1^T)\n    + x_i (\\Psi_{i, 2} x_2^T)\n    + x_i (\\Psi_{i, 3} x_3^T) + ...\\\\\n    &=\n    2 \\sum_i x_i \n    \\left[\n    (\\Psi_{i, 1}  x_1^T)\n    + (\\Psi_{i, 2} x_2^T)\n    + (\\Psi_{i, 3} x_3^T) + ...\n    \\right]\\\\\n    &=\n    2 \\sum_i x_i \n    \\left[\n        X^T \\Psi_i^T\n    \\right]^T\\\\ \n    &=\n    2 \\sum_i x_i \n    \\left[\n        \\Psi_i X\n    \\right]\\\\ \n    &=\n    2 \\left[ \n        x_1 \\Psi_1 X + \n        x_2 \\Psi_2 X + \n        x_3 \\Psi_3 X + ...\n    \\right]\\\\ \n    &=\n    2 \\left[ \n        x_1 \\Psi_1 + \n        x_2 \\Psi_2 + \n        x_3 \\Psi_3 + ...\n    \\right]X \\\\ \n    &=\n    2X^T \\Psi X\n\\end{align}\nPutting both terms together, we get\n\\begin{align}\n    \\sum_{i,j} \\Psi_{i,j} A_{i,j} &= 2 X^TD_\\Psi X - 2X^T \\Psi X \\\\\n    &= 2 X^T[ D_\\Psi - \\Psi] X \n\\end{align}\n\n\\end{appendices} \n\\subsubsection*{\\bibname}}\n\n\\usepackage[round]{natbib}\n\\bibliographystyle{plainnat}\n\n\\begin{document}\n\n\n\n\\twocolumn[\n\n\\aistatstitle{Deep Layer-wise Networks Have Closed-Form Weights}\n\n\\aistatsauthor{ Chieh Wu* \\And Aria Masoomi* \\And  Arthur Gretton \\And Jennifer Dy }\n\n\n\\aistatsaddress{ Northeastern University \\And  Northeastern University \\And University College London \\And Northeastern University} ]\n\n\\input{tex\/a_abstract}\n\\input{tex\/b_intro}\n\\input{tex\/c_related_work}\n\\input{tex\/d_model}\n\\input{tex\/e_thm1}\n\\input{tex\/f_network_behavior}\n\\input{tex\/f2_generalization}\n\\input{tex\/g_experiments}\n\\input{tex\/i_limitations}\n\n\\clearpage\n\n\n\\section{INTRODUCTION}\nDue to the brain-inspired architecture of Multi-layered Perceptrons (MLPs), the relationship between MLPs and our brains has been a topic of significant interest \\citep{zador2019critique,walker2020Deep}. This line of research triggered a debate \\citep{whittington2019theories} around the neural plausibility of backpropagation (BP). While some contend that brains cannot simulate BP \\citep{crick1989recent,grossberg1987competitive}, others have proposed counterclaims with a new generation of models \\citep{hinton2007backpropagation,Lilli2020Backpropagation,liao2016important,bengio2017stdp,guerguiev2017towards,sacramento2018dendritic,whittington2017approximation}. 
This debate has inspired the search for alternative optimization strategies beyond BP. To better mimic the brain, learning the network \\textit{one layer at a time} over a single forward pass (we call it a layer-wise network) has been proposed as a more likely candidate to match existing understandings in neuroscience \\citep{ma2019hsic,Pogodin2020KernelizedIB,Oord2018RepresentationLW}. Our theoretical work contributes to this debate by answering two open questions regarding layer-wise networks.\n\\begin{enumerate}\n[noitemsep,topsep=0pt,leftmargin=5mm]\n    \\item \\textit{Do they have a closed-form solution? }\n    \\item \\textit{How do we know when to stop adding more layers? }\n\\end{enumerate}\nQuestion 1 asks if easily computable and \\textit{closed-form weights} can theoretically yield networks as powerful as traditional MLPs, bypassing both BP and Stochastic Gradient Descent (SGD). This question is answered by characterizing the expressiveness of layer-wise networks using only \\textit{\"trivially learned weights\"}. Currently, the \\textit{Universal Approximation Theorem} states that a network can approximate any continuous function \\citep{cybenko1989approximation, hornik1991approximation,zhou2020universality}. However, it is not obvious that the weights of \"layer-wise networks\" can be computed in closed form using only basic operations, a simplicity constraint inspired by biology. As our contribution, we prove that layer-wise networks can classify \\textit{any pattern} with trivially obtainable closed-form weights using only \\textit{addition}. Surprisingly, these weights turn out to be the \\kme.\n\n\nIdentifying the network depth has been an open question for traditional networks. For layer-wise networks, this question reduces to \"\\textit{when should we stop adding layers?}\". We posit that additional layers become unnecessary if layer-wise networks exhibit a \\textit{limiting behavior} where adding more layers ceases to meaningfully change the network. We prove that this is theoretically possible by using the \\kme as weights. In fact, we show that these networks can be modeled as a \\textit{mathematical sequence} of functions that converge by \\textit{intentional design}. Indeed, not only can these networks converge, they can be induced to converge towards a highly desirable kernel for classification; we call it the \\textit{Neural Indicator Kernel} (NIK).\n\n\n\n\\section{MODELING LAYER-WISE NETWORKS}\n\\textbf{Layer Construction. } Let $X \\in \\mathbb{R}^{n \\times d}$ be a dataset of $n$ samples with $d$ features and let $Y \\in \\mathbb{R}^{n \\times \\nclass}$ be its one-hot encoded labels with $\\nclass$ classes. The $l^{th}$ layer consists of linear weights $W_l \\in \\mathbb{R}^{m \\times q}$ followed by an activation function $\\af:\\mathbb{R}^{n \\times q} \\rightarrow \\mathbb{R}^{n \\times m}$. We interpret each layer as a function $\\fm_l$ parameterized by $W_l$ with an input\/output denoted as $R_{l-1} \\in \\mathbb{R}^{n \\times m}$ and $R_{l} \\in \\mathbb{R}^{n \\times m}$ where $R_l = \\fm_l(R_{l-1}) = \\af(R_{l-1}W_l)$. The entire network $\\fm$ is the composition of all layers, where $\\fm = \\fm_L \\circ ... \\circ \\fm_1$. \n\n\\textbf{Network Objective. 
} \nGiven $x_i, y_i$ as the $i^{th}$ sample and label of the dataset, the network output is used to minimize an empirical risk $(\\hsic)$ defined by a loss function $(\\ell)$, with the general objective\n  \\begin{equation}\n    \\underset{\\fm}{\\min} \\hspace{0.3cm} \\hsic \n    \\coloneqq\n    \\underset{\\fm}{\\min} \\hspace{0.3cm} \\frac{1}{n} \\sum_{i=1}^n \\ell(\\fm(x_i), y_i).\n    \\label{eq:basic_empirical_risk}\n  \\end{equation}\nAs Eq.~(\\ref{eq:basic_empirical_risk}) shows, our construction is structurally identical to conventional MLPs, where each layer consists of linear weights and an activation function. Yet, we differ by introducing the composition of the first $l$ layers as $\\fm_{l^\\circ} = \\fm_l \\circ ... \\circ \\fm_1$ where $l \\leq L$. This notation $(\\fm_{l^\\circ})$ connects the data directly to the $l$th layer output where $R_l = \\fm_{l^{\\circ}}(X)$ and leads to the \\textit{key novelty of our theoretical contribution}. Namely, we propose to optimize Eq.~(\\ref{eq:basic_empirical_risk}) layer-wise as a sequence of growing networks by replacing $\\fm$ in \\eq{eq:basic_empirical_risk} incrementally with a sequence of functions $\\{\\fm_{l^\\circ}\\}_{l=1}^L$. This results in a sequence of empirical risks $\\{\\hsic_l\\}_{l=1}^L$ which we incrementally solve. We refer to $\\{\\fm_{l^\\circ}\\}_{l=1}^L$ and $\\{\\hsic_{l}\\}_{l=1}^L$ as the \\KS and the \\RS. \n\nSolving Eq.~(\\ref{eq:basic_empirical_risk}) as an \\RS is where we differ from tradition; this approach enables us to easily represent, analyze, and optimize \"layer-wise networks\". In contrast to using BP with SGD, we now can use \\textit{\"closed-form solutions\"} for $W_l$ to construct \\textit{Kernel Sequences} that drive the \\RS to automatically minimize Eq.~(\\ref{eq:basic_empirical_risk}). To visualize the network structure and how the layers form the sequences, refer to Fig.~\\ref{fig:notation}.\n\n\\begin{figure*}[t]\n\\center\n    \n    \\includegraphics[width=12cm,height=3.7cm]{img\/notations.png}\n    \\caption{\\textbf{Left - Distinguishing } $\\phi_l$ vs $\\phi_{l^\\circ}$\\textbf{:} $\\fm_l$ is a single layer while $\\fm_{l^{\\circ}} = \\fm_l \\circ ... \\circ \\fm_{1^\\circ}$ is a composition of the first $l$ layers.\n    \\textbf{Right - Visualize the \\textit{Kernel} and $\\mathcal{H}$-\\textit{Sequences}: } Note that the \\KS is a converging sequence of \"\\textit{functions}\" $\\{\\fm_{l^\\circ}\\}_{l=1} = \\{\\fm_{1^\\circ}, \\fm_{2^\\circ}, ... \\}$ and the \\RS is a converging sequence of scalar values $\\{\\hsic_{l}\\}_{l=1} = \\{\\hsic_{1}, \\hsic_{2}, ... \\}$.\n    To minimize Eq.~(\\ref{eq:basic_empirical_risk}) at $\\hsic_l$, all weights before $W_l$ are already identified and held fixed; only $W_l$ is unknown. \n    As $l\\rightarrow \\infty$, the \\KS converges to the \\textit{Neural Indicator Kernel}. }\n    \\label{fig:notation}\n\\end{figure*} \n\n\\textbf{Performing Classification. } \nClassification tasks typically use objectives like Mean Squared Error ($\\mse$) or Cross-Entropy ($\\ce$) to match the network output $\\fm(X)$ to the label $Y$. While this approach achieves the desired outcome, it also constrains the space of potential solutions, since $\\fm(X)$ must match $Y$. Yet, if $\\fm$ maps $X$ to the labels $\\{0,1\\}$ instead of the true labels $\\{-1,1\\}$, \n$\\fm(X)$ may not match $Y$, but the solution is the same. Therefore, enforcing $\\fm(X) = Y$ ignores an entire space of equivalently optimal classifiers. 
We posit that by relaxing this constraint and accepting a larger space of potential global optima, it will be easier during optimization to collide with this space. This intuition motivates us to depart from the tradition of label matching and instead seek alternative objectives that focus on solving the underlying prerequisite of classification, i.e., learning a mapping where samples from \\textit{similar and different} classes become easily distinguishable.\n\nHowever, since there are many ways to define \\textit{similarity}, how do we choose the best one that leads to classification? We demonstrate that the Hilbert Schmidt Independence Criterion (HSIC \\citep{gretton2005measuring}) is a highly advantageous objective for this purpose. As we'll later show in corollaries \\ref{corollary:mse} and \\ref{corollary:ce}, its maximization indirectly minimizes both Mean Square Error (MSE) and Cross-Entropy (CE) under different notions of \"distance\", enabling classification. Moreover, this is made possible because maximizing HSIC automatically learns the optimal notion of \\textit{similarity} as a kernel function. Using HSIC objective as $\\hsic_l$ for each element of \n$\\{\\hsic_{l}\\}_{l=1}^L$, the layer-wise formulation of Eq.~(\\ref{eq:basic_empirical_risk}) becomes\n\\begin{equation}\n\\begin{aligned}\n \\max_{W_l} \\quad &\n \\Tr \\left(\n \\Gamma \\:\n \\left[\n \\af(R_{l-1}W_l) \\af^T(R_{l-1}W_l)\n \\right]\n \\right) \n \\\\\n \\st \\quad & \n W_l^TW_l=I,\n \\label{eq:main_obj}\n\\end{aligned}\n\\end{equation}\n\nwhere we let $\\Gamma = HK_YH = HYY^TH$ with centering matrix $H$ defined as $H = I_n - \\frac{1}{n} \\mathbf{1}_n \\mathbf{1}_n^T$. $I_n$ is an identity matrix of size $n \\times n$ and $\\textbf{1}_n$ is a column vector of 1s also of length $n$.\n\nNote that the HSIC objective is more familiarly written as $\\Tr(HK_YHK_X)$ where $K_X = \\af(R_{l-1}W_l) \\af^T(R_{l-1}W_l)$. Yet, we purposely present it as Eq.~(\\ref{eq:main_obj}) to highlight how the network structure leads to the objective. Namely, the layer input $R_{l-1}$ first multiplies the weight $W_l$ before passing through the activation function $\\af$. The layer output is then multiplied by itself and $\\Gamma$ to form the HSIC objective.\n\nLastly, since HSIC can be trivially maximized by setting each element of $W$ as $\\infty$, setting $W^TW=I$ is a constraint commonly used with HSIC while learning a projection \\citep{wu2018iterative, Wu2019SolvingIK,niu2010multiple}.\n\n\n\n\\textbf{Key Difference From Traditional MLPs. } \\textit{We generalize the concept of the activation function to a kernel feature map}. Therefore, instead of using the traditional Sigmoid or ReLU activation functions, we use the feature map of a Gaussian kernel as the activation function $\\af$. Conveniently, HSIC leverages the kernel trick to spare us the direct computation of the inner product $\\af(R_{l-1}W_l) \\af^T(R_{l-1}W_l)$. Therefore, for each $(r_i, r_j)$ pair we only need to compute \n\\begin{equation}\n \\begin{aligned}\n \\kf(W_l^T r_i, W^T_lr_j) & = \n \\langle\n \\af(W_l^T r_i), \\af(W_l^T r_j) \n \\rangle\n \\\\\n & = \n \\text{exp}\\{-\n \\frac{||W_l^T r_i - W_l^T r_j||^2}{2\\sigma^2_l} \\}.\n \\label{eq:kernel_def}\n \\end{aligned}\n\\end{equation}\n\n\\textbf{How Does HSIC Learn the Kernel? } \nLet $\\cS$ be a set of $i,j$ sample pairs that belong to the same class. Its complement, $\\cS^c$ contains all sample pairs from different classes. 
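To make these sets concrete, the following minimal NumPy sketch (our own illustrative code with hypothetical names, not the released implementation) builds $\\Gamma$ from the one-hot labels and evaluates the objective of Eq.~(\\ref{eq:main_obj}) for one layer, splitting it into a same-class and a different-class contribution:\n\\begin{verbatim}\nimport numpy as np\n\ndef hsic_layer_objective(R_prev, W, Y, sigma):\n    # Gamma = H K_Y H, built from the one-hot labels Y\n    n = R_prev.shape[0]\n    H = np.eye(n) - np.ones((n, n)) / n\n    Gamma = H @ Y @ Y.T @ H\n    # Gaussian kernel of the layer output psi(R_{l-1} W_l)\n    Z = R_prev @ W\n    sq = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)\n    K = np.exp(-sq / (2 * sigma ** 2))\n    hsic_obj = np.sum(Gamma * K)           # Tr(H K_Y H K_X)\n    same = (Y @ Y.T) > 0                   # sample pairs in S\n    pos = np.sum(Gamma[same] * K[same])    # same-class contribution\n    neg = np.sum(np.abs(Gamma[~same]) * K[~same])\n    # hsic_obj equals pos - neg under the sign pattern of Gamma\n    # described in the reformulation below.\n    return hsic_obj, pos, neg\n\\end{verbatim}\nThe boolean mask plays the role of $\\cS$, and the two partial sums foreshadow the attraction and repulsion terms of the reformulation that follows.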
By reinterpreting the kernel $\\kf(W_l^T r_i, W^T_lr_j)$ from Eq.~(\\ref{eq:kernel_def}) as a kernel function $\\kf_{W_l}(r_i, r_j)$ parameterized by $W_l$, \\eq{eq:main_obj} can be reformulated into the following objective to see how HSIC learns the kernel.\n\\begin{equation}\n \\begin{aligned}\n \\max_{W_l} &\n \\sums \\Gij \\kf_{W_l}(r_i, r_j) \n -\n \\sumsc |\\Gij| \\kf_{W_l}(r_i, r_j)\n \\\\\n \\st & \\quad W_l^TW_l=I.\n \\label{eq:similarity_hsic}\n \\end{aligned}\n\\end{equation}\nWhile \\eq{eq:main_obj} and \\eq{eq:similarity_hsic} are equivalent objectives, \\eq{eq:similarity_hsic} reveals how an optimal similarity measure is learned as a kernel function $\\kf_{W_l}(r_i,r_j)$ parameterized by $W_l$. First note that $\\Gamma$ came directly from the label with $\\Gamma=HYY^TH$ such that the $i,j_{th}$ element of $\\Gamma$, denoted as $\\Gamma_{i,j}$, is a positive value for samples pairs in $\\cS$ and negative for $\\cS^c$. The objective leverages the sign of $\\Gamma_{i,j}$ as labels to guide the choice of $W_l$ such that it increases $\\kf_{W_l}(r_i,r_j)$ when $r_i,r_j$ belongs to the same class in $\\cS$ while decreasing $\\kf_{W_l}(r_i,r_j)$ otherwise. Therefore, by finding a $W_l$ matrix that best parameterizes $\\kf_{W_l}$, HSIC identifies the optimal kernel function $\\kf_{W_l}(r_i, r_j)$ that separates samples into similar and dissimilar partitions to enable classification.\n\n\n\\section{ANALYZING LAYER-WISE NETWORKS} \nInstead of maximizing Eq.~(\\ref{eq:main_obj}) with traditional strategies like SGD, can we completely bypass the optimization step? Our analysis proves that this is possible. In fact, the weights $W_l$ simply need to be set to the \\kme \\citep{muandet2016kernel} at each layer. The stacking of layers with these known weights automatically drives \\RS towards its theoretical global optimum. Specifically, if we let $r^{j}_{\\iota}$ be the $\\iota^{th}$ input sample in class $j$ for layer $l$ with $\\zeta$ as a normalizer that can be ignored in practice, then the closed-form solution is \n \\begin{equation}\n W_s = \\frac{1}{\\sqrt{\\zeta}} \\W\n \n \n \n \n \n \n \n .\n \\label{eq:trivial_W}\n \\end{equation}\nWe prove that as long as no two identical samples have conflicting labels, setting $W_l = W_s$ at each layer can generate a monotonically increasing \\RS towards the global optimal, $\\hsic^*$. Comparing to BP, this finding suggests that instead of using gradients or needing to propagate error backward, a deep classifier can be globally optimized with \"only a single forward pass\" by \\textit{simple addition}. 
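To illustrate how little computation this requires, here is a minimal NumPy sketch (our own illustrative code; reading the $\\W$ term of \\eq{eq:trivial_W} as stacking the per-class sums of the layer input as columns, and dropping the normalizer $\\zeta$ as the text permits) that builds $W_s$ by simple addition and applies one layer with a random Fourier feature (RFF) approximation of the Gaussian feature map:\n\\begin{verbatim}\nimport numpy as np\n\ndef closed_form_Ws(R_prev, labels):\n    # One column per class: the sum (simple addition) of that\n    # class's rows of the layer input R_{l-1}.\n    classes = np.unique(labels)\n    cols = [R_prev[labels == c].sum(axis=0) for c in classes]\n    return np.stack(cols, axis=1)\n\ndef rff_layer(R_prev, Ws, sigma, width=300, seed=0):\n    # Project onto W_s, then approximate the Gaussian feature map\n    # psi with random Fourier features of the stated width.\n    rng = np.random.default_rng(seed)\n    Z = R_prev @ Ws\n    Omega = rng.normal(scale=1.0 / sigma, size=(Z.shape[1], width))\n    b = rng.uniform(0.0, 2.0 * np.pi, size=width)\n    return np.sqrt(2.0 / width) * np.cos(Z @ Omega + b)\n\\end{verbatim}\nStacking such layers, each forming its own $W_s$ from the previous layer's output, constitutes the single forward pass referred to above.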
We formally prove this discovery in App.~\\ref{app:thm:hsequence} as the following theorem.\n\\begin{theorem}\n\\label{thm:hsequence}\nFrom any initial risk $\\hsic_0$, there exists a set of bandwidths $\\sigma_l$ and a \\KS $\\{\\fm_{l^{\\circ}}\\}_{l=1}^L$ parameterized by $W_l = W_s$ in \\eq{eq:trivial_W} such that:\n\\begin{enumerate}[topsep=0pt, partopsep=0pt, label=\\Roman*.]\n \\item \n $\\hsic_L$ can approach arbitrarily close to $\\hsic^*$ such that for any $L>1$ and $\\delta>0$ we can achieve\n \\begin{equation}\n \\hsic^{*} - \\hsic_L \\le \\delta,\n \\label{arb_close}\n \\end{equation} \n \\item \n as $L \\rightarrow \\infty$, the \\RS converges to the global optimum where \n \\begin{equation}\n \\lim_{L \\rightarrow \\infty} \\hsic_L = \\hsic^*,\n \\end{equation} \n \\item \n the convergence is strictly monotonic where \n \\begin{equation}\n \\hsic_{l} > \\hsic_{l-1} \\quad \\forall l \\ge 1.\n \\end{equation} \n\\end{enumerate}\n\\end{theorem}\nTo summarize the proof, we identified a lower bound for HSIC that is guaranteed to increase at each layer by adjusting $\\sigma_l$ while using $W_s$. Since HSIC has a known global theoretical upper bound ($\\sums \\Gij$), we show that the lower bound can approach arbitrarily close to the upper bound by adding more layers. \n\nIntuitively, the network incrementally discovers a better feature map to improve the \\kme. Imagine classifying a set of dogs\/cats, since \\kme defines the \\textit{average} dog and cat as two points in a high dimensional space. As $L\\rightarrow \\infty$, \\RS pushes these two points maximally apart from each other with representations that enable easy distinction. The continuity of the feature map ensures that dogs are closer to the average dog than the average cat, enabling classification.\n\nUsing the \\kme has a secondary implication; the layer-wise network is performing \\textit{Kernel Density Estimation} \\citep{davis2011remarks}. That is, $W_s$ empowers these networks to predict the likelihood of an outcome without knowing the underlying distribution, relying solely on observations. For classification, multiplying the output of $\\fm_{l-1^\\circ}$ by its corresponding $W_s$ is equivalent to approximating the probability of a sample being in each class. Moreover, by summarizing the data into the $W_s$, all past examples can be discarded while allowing new examples to update the network by adjusting only the embedding itself, enabling layer-wise networks to self-adapt given new information.\n\n\nTheoretically, viewing a network as an \\RS is the key contribution that enables these interpretations. Its usage raises new questions with tantalizing possibilities in the debate over the brain's likelihood of performing BP. Is the brain more likely computing derivatives with BP, or is it simply repeating \\textit{addition}? Can the brain use mechanisms similar to \\textit{Kernel Density Estimation} to approximate the likelihood of an event? This work only presents the theoretical feasibility and does not claim to have the answer.\n\n\n\n\\section{GENERALIZATION} \nBesides being an optimum solution, $W_l^*$ exhibits many advantages over $W_s$. For example, while $W_s$ experimentally performs well, $W^*$ converges with fewer layers and superior generalization. This raises a well-known question on generalization. It is known that overparameterized MLPs can generalize even without any explicit regularizer \\citep{Zhang2017UnderstandingDL}. 
This observation contradicts classical learning theory and has been a longstanding puzzle \\citep{cao2019generalization,brutzkus2017sgd,allen2019learning}. \nTherefore, being overparameterized with an infinitely wide network, \\kn's ability to generalize under HSIC raises similar questions. In both cases, $W_s$ and $W^*$, the HSIC objective employs an infinitely wide network that should result in overfitting. We ask theoretically, under our framework, what makes HSIC and $W^*$ special? \n\nRecently, \\citet{poggio2020complexity} have proposed that traditional MLPs generalize because gradient methods implicitly regularize the normalized weights given an exponential objective (like our HSIC). We discovered that the process of finding $W^*$ has a similar effect on HSIC, i.e., HSIC can be reformulated to isolate $n$ functions $[D_1(W_l), ..., D_n(W_l)]$ that act as a penalty term during optimization. Let $\\cS_i$ be the set of samples that belong to the $i^{th}$ class and let $\\cS^c_i$ be its complement; then each function $D_i(W_l)$ is defined as \\begin{equation}\n\\begin{split}\n    D_i(W_l) = &\n    \\frac{1}{\\sigma^2}\n    \\sum_{j \\in \\cS_i}\n    \\Gij \\kf_{W_l}(r_i,r_j)\n    - \\\\\n    &\n    \\frac{1}{\\sigma^2}\n    \\sum_{j \\in \\cS^c_i}\n    |\\Gij| \\kf_{W_l}(r_i,r_j).\n    \\label{eq:penalty_term}\n\\end{split}\n\\end{equation}\nNotice that $D_i(W_l)$ is simply \\eq{eq:similarity_hsic} for a single sample scaled by $\\frac{1}{\\sigma^2}$. Therefore, improving $W_l$ increases the $\\kf_{W_l}(r_i,r_j)$ terms associated with $\\cS_i$ and decreases those associated with $\\cS^c_i$ in \\eq{eq:penalty_term}, thereby increasing the size of the penalty term $D_i(W_l)$. To appreciate how $D_i(W_l)$ penalizes $\\hsic$, we propose an equivalent formulation in the theorem below with its derivation in App.~\\ref{app:thm:regularizer}.\n\\begin{theorem}\n\\label{thm:regularizer}\n\\eq{eq:similarity_hsic} is equivalent to \n\\begin{equation}\n    \\label{eq:generalization_formulation}\n\\begin{split}\n    \\max_{W_l} \n    \\sum_{i,j} &\n    \\frac{\\Gij}{\\sigma^2}\n    \\ISMexp\n    (r_i^TW_lW_l^Tr_j) \n    \\\\\n    \n    & -\n    \\sum_{i}\n    D_i(W_l)\n    ||W_l^Tr_i||_2^2.\n\\end{split}\n\\end{equation}\n\\end{theorem}\nBased on Thm.~\\ref{thm:regularizer}, $D_i(W_l)$ adds a negative variable cost to the squared sample norm, $||W_l^Tr_i||_2^2$, prescribing an implicit regularizer on HSIC. Therefore, as $W_l$ improves HSIC, it also imposes an incrementally heavier penalty on \\eq{eq:generalization_formulation}, severely constraining $W_l$.\n\n\n\\section{EXPERIMENTS}\n\\textbf{Experiment Settings. }\n    Our analysis from Thm.~\\ref{thm:hsequence} shows that the Gaussian kernel's $\\sigma_l$ can be incrementally decreased to guarantee a monotonic increase in HSIC. Our experiments learn the $\\sigma_l$ by incrementally decreasing its value until an improved HSIC is achieved. For results that involve $W^*$, the Gaussian kernel's $\\sigma_l$ is set to the value that maximizes HSIC; see App.~\\ref{app:opt_sigma}. \n    \n    The activation function is approximated with RFF of width 300 \\citep{rahimi2008random}. The convergence threshold for \\RS is set at $\\hsic_l > 0.99$. The network structures discovered by $W^*$ for every dataset are recorded and provided in App.~\\ref{app:W_dimensions}. The MLPs that use $\\mse$ and $\\ce$ have weights initialized via the method used in \\citet{he2015delving}. All datasets are centered to 0 and scaled to a standard deviation of 1. 
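As a concrete illustration of this $\\sigma_l$ search (a simplified sketch with hypothetical helper names, not the exact procedure in the released code), the bandwidth can be shrunk geometrically until the layer improves on the previous element of the \\RS, reusing the \\texttt{hsic\\_layer\\_objective} sketch from above:\n\\begin{verbatim}\ndef fit_sigma(R_prev, W, Y, hsic_prev,\n              sigma0=1.0, shrink=0.9, max_iter=50):\n    # Incrementally decrease sigma_l until the layer's HSIC\n    # improves on the previous layer's value.\n    sigma = sigma0\n    for _ in range(max_iter):\n        hsic_l, _, _ = hsic_layer_objective(R_prev, W, Y, sigma)\n        if hsic_l > hsic_prev:\n            break\n        sigma *= shrink\n    return sigma, hsic_l\n\\end{verbatim}\nThe geometric shrink factor is an arbitrary choice for this sketch; any schedule that keeps decreasing $\\sigma_l$ until $\\hsic_l$ improves fits the description above. 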
All sources are written in Python using Numpy, Sklearn and Pytorch \\citep{numpy,sklearn_api,paszke2017automatic}, and ran on an Intel Xeon(R) CPU E5-2630 v3 @ 2.40GHz x 16 with 16 total cores.\n\\begin{figure*}[t]\n\\center\n \\includegraphics[width=14cm,height=3.5cm]{img\/output_summary.png}\n \\caption{Thm.~\\ref{thm:hsequence} is simulated on two highly complex datasets, Random and Adversarial. The 2D representation is shown, and next to it, the 1D output of each layer is displayed over each line. As we add more layers, we can see the samples from the two classes gradually separate from each other. Both datasets achieved the global optimum $\\hsic^*$ at the $12_{th}$ layers. For additional results, see App.~\\ref{app:sigma_values}}.\n \\label{fig:thm_proof}\n\\end{figure*} \n\n\n\\begin{figure*}[t]\n\\center\n \n \\includegraphics[width=14cm,height=4cm]{img\/progression_kdiscovery.png}\n \\caption{\\textbf{Left - Key evaluation metrics at each layer: } Showing key statistics computed from the output of each layer. Notice $\\hsic$ monotonically increases while $\\sil$ and $\\csr$ decrease. \n \\textbf{Right - A visual confirmation of Thm.~\\ref{thm:geometric_interpret}: } The top row plots out the kernel matrix from the output of the first 4 layers. Layer 1 shows the original kernel, while layer 4 represents the kernel when it has nearly converged to \\kn. The perfect block diagonal structure at layer 4 confirms that the \\KS is indeed converging toward \\kn. Below the kernels, we also plot out the convergent pattern in \\textit{preactivation}. Samples from the same class converge towards a single point while being pushed away from samples from different classes.}\n \\label{fig:progression_kdiscovery}\n\\end{figure*} \n\n\n\\textbf{Datasets. }\nThe experiments are divided into two sections. Since our contributions are completely theoretical in nature, the majority of the experiments will focus on supporting our thesis by verifying the various components of the theoretical claims. We use three synthetic datasets (Random, Adversarial, and Spiral) and five popular UCI benchmark datasets: wine, cancer, car, face, and divorce ~\\citep{Dua:2017}. They are included along with the source code in the supplementary, and their download link and statistics are in App.~\\ref{app:data_detail}. All theoretical claims are experimentally reproducible with its source code and datasets publicly available in the Supplementary and at \\url{https:\/\/github.com\/endsley}.\n\n\\textbf{Competing Kernel Methods. } We next compare $W^*$ and $W_s$ to several popular kernel networks that have publicly available code: NTK \\citep{Arora2020HarnessingTP, Yu2019}, GP \\citep{maka892017, Wilson2016DeepKL}, Arc-cos \\citep{gionuno2017,Cho2009KernelMF}. \nFor this comparison, we additionally included CIFAR10 \\citep{Krizhevsky09learningmultiple}.\nThe images are preprocessed with a convolutional layer that outputs vectorized samples of $x_i \\in \\mathbb{R}^{10}$. 
The preprocessing code, the network output, and the network weights are included in the supplementary.\n\n\n\n\n\n\\begin{table*}[t]\n\\tiny\n\\centering\n\\setlength{\\tabcolsep}{3.5pt} \n\\renewcommand{\\arraystretch}{1.1}\n\\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}\n\t\\hline\n\t & obj &\n\t\t Train Acc $\\uparrow$&\n\t\t Test Acc $\\uparrow$&\n\t\t Time(s) $\\downarrow$&\n\t\t $\\hsic$ $\\uparrow$&\n\t\t $\\mse$ $\\downarrow$&\n\t\t $\\ce$ $\\downarrow$&\n\t\t $C$ $\\downarrow$&\n\t\t $\\sil$ $\\downarrow$\\\\ \n\t\t\\hline \n\t\\parbox[t]{2mm}{\\multirow{4}{*}{\\rotatebox[origin=c]{90}{\\textbf{random}}}} &\n\t $W^*$ &\n\t\t \\textbf{\\black{1.00 $\\pm$ 0.00}} &\n\t\t \\black{0.38 $\\pm$ 0.21} &\n\t\t \\textbf{\\black{0.40 $\\pm$ 0.37}} &\n\t\t \\textbf{\\black{1.00 $\\pm$ 0.01}} &\n\t\t \\textbf{\\black{0.00 $\\pm$ 0.01}} &\n\t\t \\black{0.05 $\\pm$ 0.00} &\n\t\t \\textbf{\\black{0.00 $\\pm$ 0.06}} &\n\t\t \\black{0.02 $\\pm$ 0.0} \\\\ \n & $W_s$ &\n \t0.99 $\\pm$ 0.01 & \n \t0.45 $\\pm$ 0.20 & \n \t0.52 $\\pm$ 0.05 & \n \t0.98 $\\pm$ 0.01 & \n \t2.37 $\\pm$ 1.23 & \n \t0.06 $\\pm$ 0.13 & \n \t0.05 $\\pm$ 0.02 & \n \t0.13 $\\pm$ 0.01\\\\\n\t & $\\ce$ &\n\t\t \\textbf{\\black{1.00 $\\pm$ 0.00}} &\n\t\t \\black{0.48 $\\pm$ 0.17} &\n\t\t \\black{25.07 $\\pm$ 5.55} &\n\t\t \\textbf{\\black{1.00 $\\pm$ 0.00}} &\n\t\t \\black{10.61 $\\pm$ 11.52} &\n\t\t \\textbf{\\black{0.0 $\\pm$ 0.0}} &\n\t\t \\textbf{\\black{0.0 $\\pm$ 0.0}} &\n\t\t \\textbf{\\black{0.0 $\\pm$ 0.0}} \\\\ \n\t & $\\mse$ &\n\t\t \\black{0.98 $\\pm$ 0.04} &\n\t\t \\textbf{\\black{0.63 $\\pm$ 0.21}} &\n\t\t \\black{23.58 $\\pm$ 8.38} &\n\t\t \\black{0.93 $\\pm$ 0.12} &\n\t\t \\black{0.02 $\\pm$ 0.04} &\n\t\t \\black{0.74 $\\pm$ 0.03} &\n\t\t \\black{0.04 $\\pm$ 0.04} &\n\t\t \\black{0.08 $\\pm$ 0.1} \\\\ \n\t\t\\hline \n\t\\parbox[t]{2mm}{\\multirow{4}{*}{\\rotatebox[origin=c]{90}{\\textbf{Advers}}}} &\n $W^*$ &\n\t\t \\textbf{\\black{1.00 $\\pm$ 0.00}} &\n\t\t \\black{0.38 $\\pm$ 0.10} &\n\t\t \\textbf{\\black{0.52 $\\pm$ 0.51}} &\n\t\t \\textbf{\\black{1.00 $\\pm$ 0.00}} &\n\t\t \\textbf{\\black{0.00 $\\pm$ 0.00}} &\n\t\t \\textbf{\\black{0.04 $\\pm$ 0.00}} &\n\t\t \\textbf{\\black{0.01 $\\pm$ 0.08}} &\n\t\t \\textbf{\\black{0.01 $\\pm$ 0.0}} \\\\ \n & $W_s$ &\n \t0.99 $\\pm$ 0.04 & \n \t\\textbf{0.42 $\\pm$ 0.18} & \n \t2.82 $\\pm$ 0.81 & \n \t0.99 $\\pm$ 0.19 & \n \t15.02 $\\pm$ 11.97 & \n \t0.32 $\\pm$ 0.15 & \n \t0.30 $\\pm$ 0.18 & \n \t0.34 $\\pm$ 0.19\\\\\n\t & $\\ce$ &\n\t\t \\black{0.59 $\\pm$ 0.04} &\n\t\t \\black{0.29 $\\pm$ 0.15} &\n\t\t \\black{69.54 $\\pm$ 24.14} &\n\t\t \\black{0.10 $\\pm$ 0.07} &\n\t\t \\black{0.65 $\\pm$ 0.16} &\n\t\t \\black{0.63 $\\pm$ 0.04} &\n\t\t \\black{0.98 $\\pm$ 0.03} &\n\t\t \\black{0.92 $\\pm$ 0.0} \\\\ \n\t & $\\mse$ &\n\t\t \\black{0.56 $\\pm$ 0.02} &\n\t\t \\black{0.32 $\\pm$ 0.20} &\n\t\t \\black{113.75 $\\pm$ 21.71} &\n\t\t \\black{0.02 $\\pm$ 0.01} &\n\t\t \\black{0.24 $\\pm$ 0.01} &\n\t\t \\black{0.70 $\\pm$ 0.00} &\n\t\t \\black{0.99 $\\pm$ 0.02} &\n\t\t \\black{0.95 $\\pm$ 0.0} \\\\ \n\t\t\\hline \n\t\t\\hline \n\t\\parbox[t]{2mm}{\\multirow{4}{*}{\\rotatebox[origin=c]{90}{\\textbf{spiral}}}} &\n\t $W^*$ &\n\t\t \\textbf{\\black{1.00 $\\pm$ 0.00}} &\n\t\t \\textbf{\\black{1.00 $\\pm$ 0.00}} &\n\t\t \\textbf{\\black{0.87 $\\pm$ 0.08}} &\n\t\t \\black{0.98 $\\pm$ 0.01} &\n\t\t \\black{0.01 $\\pm$ 0.00} &\n\t\t \\black{0.02 $\\pm$ 0.01} &\n\t\t \\black{0.04 $\\pm$ 0.03} &\n\t\t \\black{0.02 $\\pm$ 0.0} \\\\ \n & $W_s$ &\n \t\\textbf{1.00 $\\pm$ 0.00} & \n \t\\textbf{1.00 
$\\pm$ 0.00} & \n \t13.54 $\\pm$ 5.66 & \n \t0.88 $\\pm$ 0.03 & \n \t38.60 $\\pm$ 25.24 & \n \t0.06 $\\pm$ 0.02 & \n \t0.08 $\\pm$ 0.04 & \n \t0.08 $\\pm$ 0\\\\\n\t & $\\ce$ &\n\t\t \\textbf{\\black{1.00 $\\pm$ 0}} &\n\t\t \\textbf{\\black{1.00 $\\pm$ 0}} &\n\t\t \\black{11.59 $\\pm$ 5.52} &\n\t\t \\textbf{\\black{1.00 $\\pm$ 0}} &\n\t\t \\black{57.08 $\\pm$ 31.25} &\n\t\t \\textbf{\\black{0 $\\pm$ 0}} &\n\t\t \\textbf{\\black{0 $\\pm$ 0}} &\n\t\t \\textbf{\\black{0 $\\pm$ 0}} \\\\ \n\t & $\\mse$ &\n\t\t \\textbf{\\black{1.00 $\\pm$ 0}} &\n\t\t \\black{0.99 $\\pm$ 0.01} &\n\t\t \\black{456.77 $\\pm$ 78.83} &\n\t\t \\textbf{\\black{1.00 $\\pm$ 0}} &\n\t\t \\textbf{\\black{0 $\\pm$ 0}} &\n\t\t \\black{1.11 $\\pm$ 0.04} &\n\t\t \\black{0.40 $\\pm$ 0.01} &\n\t\t \\textbf{\\black{0 $\\pm$ 0}} \\\\ \n\t\t\\hline \n\t\\parbox[t]{2mm}{\\multirow{4}{*}{\\rotatebox[origin=c]{90}{\\textbf{wine}}}} &\n\t $W^*$ &\n\t\t \\black{0.99 $\\pm$ 0} &\n\t\t \\textbf{\\black{0.99 $\\pm$ 0.05}} &\n\t\t \\textbf{\\black{0.28 $\\pm$ 0.04}} &\n\t\t \\black{0.98 $\\pm$ 0.01} &\n\t\t \\black{0.01 $\\pm$ 0} &\n\t\t \\black{0.07 $\\pm$ 0.01} &\n\t\t \\black{0.04 $\\pm$ 0.03} &\n\t\t \\black{0.02 $\\pm$ 0} \\\\ \n & $W_s$ &\n \t0.98 $\\pm$ 0.01 & \n \t0.94 $\\pm$ 0.04 & \n \t0.78 $\\pm$ 0.09 & \n \t0.93 $\\pm$ 0.01 & \n \t2.47 $\\pm$ 0.26 & \n \t0.06 $\\pm$ 0.01 & \n \t0.05 $\\pm$ 0.01 & \n \t0.08 $\\pm$ 0.01\\\\\n\t & $\\ce$ &\n\t\t \\textbf{\\black{1.00 $\\pm$ 0.00}} &\n\t\t \\black{0.94 $\\pm$ 0.06} &\n\t\t \\black{3.30 $\\pm$ 1.24} &\n\t\t \\textbf{\\black{1.00 $\\pm$ 0.00}} &\n\t\t \\black{40.33 $\\pm$ 35.5} &\n\t\t \\textbf{\\black{0 $\\pm$ 0}} &\n\t\t \\textbf{\\black{0 $\\pm$ 0}} &\n\t\t \\textbf{\\black{0 $\\pm$ 0}} \\\\ \n\t & $\\mse$ &\n\t\t \\textbf{\\black{1.00 $\\pm$ 0}} &\n\t\t \\black{0.89 $\\pm$ 0.17} &\n\t\t \\black{77.45 $\\pm$ 45.40} &\n\t\t \\textbf{\\black{1.00 $\\pm$ 0}} &\n\t\t \\textbf{\\black{0 $\\pm$ 0}} &\n\t\t \\black{1.15 $\\pm$ 0.07} &\n\t\t \\black{0.49 $\\pm$ 0.02} &\n\t\t \\textbf{\\black{0 $\\pm$ 0}} \\\\ \n\t\t\\hline \n\t\\parbox[t]{2mm}{\\multirow{4}{*}{\\rotatebox[origin=c]{90}{\\textbf{cancer}}}} &\n\t $W^*$ &\n\t\t \\black{0.99 $\\pm$ 0} &\n\t\t \\textbf{\\black{0.98 $\\pm$ 0.02}} &\n\t\t \\textbf{\\black{2.58 $\\pm$ 1.07}} &\n\t\t \\black{0.96 $\\pm$ 0.01} &\n\t\t \\black{0.02 $\\pm$ 0.01} &\n\t\t \\black{0.04 $\\pm$ 0.01} &\n\t\t \\black{0.02 $\\pm$ 0.04} &\n\t\t \\black{0.04 $\\pm$ 0.0} \\\\ \n & $W_s$ &\n \t0.98 $\\pm$ 0.01 & \n \t0.96 $\\pm$ 0.03 & \n \t6.21 $\\pm$ 0.36 & \n \t0.88 $\\pm$ 0.01 & \n \t41.31 $\\pm$ 56.17 & \n \t0.09 $\\pm$ 0.01 & \n \t0.09 $\\pm$ 0.02 & \n \t0.16 $\\pm$ 0.03\\\\\n\t & $\\ce$ &\n\t\t \\textbf{\\black{1.00 $\\pm$ 0}} &\n\t\t \\black{0.97 $\\pm$ 0.01} &\n\t\t \\black{82.03 $\\pm$ 35.15} &\n\t\t \\textbf{\\black{1.00 $\\pm$ 0}} &\n\t\t \\black{2330 $\\pm$ 2915} &\n\t\t \\textbf{\\black{0 $\\pm$ 0}} &\n\t\t \\textbf{\\black{0 $\\pm$ 0}} &\n\t\t \\textbf{\\black{0 $\\pm$ 0}} \\\\ \n\t & $\\mse$ &\n\t\t \\textbf{\\black{1.00 $\\pm$ 0.00}} &\n\t\t \\black{0.97 $\\pm$ 0.03} &\n\t\t \\black{151.81 $\\pm$ 27.27} &\n\t\t \\textbf{\\black{1.00 $\\pm$ 0}} &\n\t\t \\textbf{\\black{0 $\\pm$ 0}} &\n\t\t \\black{0.66 $\\pm$ 0.06} &\n\t\t \\textbf{\\black{0 $\\pm$ 0}} &\n\t\t \\textbf{\\black{0 $\\pm$ 0}} \\\\ \n\t\t\\hline \n\t\\parbox[t]{2mm}{\\multirow{4}{*}{\\rotatebox[origin=c]{90}{\\textbf{car}}}} &\n\t $W^*$ &\n\t\t \\textbf{\\black{1.00 $\\pm$ 0}} &\n\t\t \\textbf{\\black{1.00 $\\pm$ 0.01}} &\n\t\t \\textbf{\\black{1.51 $\\pm$ 0.35}} &\n\t\t \\black{0.99 
$\\pm$ 0} &\n\t\t \\textbf{\\black{0 $\\pm$ 0}} &\n\t\t \\black{0.01 $\\pm$ 0.00} &\n\t\t \\black{0.04 $\\pm$ 0.03} &\n\t\t \\black{0.01 $\\pm$ 0} \\\\ \n & $W_s$ &\n \t\\textbf{1.00 $\\pm$ 0} & \n \t\\textbf{1.00 $\\pm$ 0} & \n \t5.15 $\\pm$ 1.07 & \n \t0.93 $\\pm$ 0.02 & \n \t12.89 $\\pm$ 2.05 & \n \t0 $\\pm$ 0 & \n \t0.06 $\\pm$ 0.02 & \n \t0.08 $\\pm$ 0.02\\\\\n\t & $\\ce$ &\n\t\t \\textbf{\\black{1.00 $\\pm$ 0}} &\n\t\t \\textbf{\\black{1.00 $\\pm$ 0}} &\n\t\t \\black{25.79 $\\pm$ 18.86} &\n\t\t \\textbf{\\black{1.00 $\\pm$ 0}} &\n\t\t \\black{225.11 $\\pm$ 253} &\n\t\t \\textbf{\\black{0 $\\pm$ 0}} &\n\t\t \\textbf{\\black{0 $\\pm$ 0}} &\n\t\t \\textbf{\\black{0 $\\pm$ 0}} \\\\ \n\t & $\\mse$ &\n\t\t \\textbf{\\black{1.00 $\\pm$ 0}} &\n\t\t \\textbf{\\black{1.00 $\\pm$ 0}} &\n\t\t \\black{504 $\\pm$ 116.6} &\n\t\t \\textbf{\\black{1.00 $\\pm$ 0}} &\n\t\t \\textbf{\\black{0 $\\pm$ 0}} &\n\t\t \\black{1.12 $\\pm$ 0.07} &\n\t\t \\black{0.40 $\\pm$ 0} &\n\t\t \\textbf{\\black{0 $\\pm$ 0}} \\\\ \n\t\t\\hline \n\t\\parbox[t]{2mm}{\\multirow{4}{*}{\\rotatebox[origin=c]{90}{\\textbf{face}}}} &\n\t $W^*$ &\n\t\t \\textbf{\\black{1.00 $\\pm$ 0}} &\n\t\t \\textbf{\\black{0.99 $\\pm$ 0.01}} &\n\t\t \\textbf{\\black{0.78 $\\pm$ 0.08}} &\n\t\t \\black{0.97 $\\pm$ 0} &\n\t\t \\textbf{\\black{0 $\\pm$ 0}} &\n\t\t \\black{0.17 $\\pm$ 0} &\n\t\t \\black{0.01 $\\pm$ 0} &\n\t\t \\textbf{\\black{0 $\\pm$ 0}} \\\\ \n & $W_s$ &\n \t0.98 $\\pm$ 0.01 & \n \t0.94 $\\pm$ 0.26 & \n \t0.86 $\\pm$ 0.04 & \n \t3.15 $\\pm$ 3.05 & \n \t2.07 $\\pm$ 1.04 & \n \t0.28 $\\pm$ 0.51 & \n \t0.04 $\\pm$ 0.01 & \n \t0.01 $\\pm$ 0\\\\\n\t & $\\ce$ &\n\t\t \\textbf{\\black{1.00 $\\pm$ 0}} &\n\t\t \\black{0.79 $\\pm$ 0.31} &\n\t\t \\black{23.70 $\\pm$ 8.85} &\n\t\t \\textbf{\\black{1.00 $\\pm$ 0}} &\n\t\t \\black{16099 $\\pm$ 16330} &\n\t\t \\textbf{\\black{0 $\\pm$ 0}} &\n\t\t \\textbf{\\black{0 $\\pm$ 0}} &\n\t\t \\textbf{\\black{0 $\\pm$ 0}} \\\\ \n\t & $\\mse$ &\n\t\t \\black{0.92 $\\pm$ 0.10} &\n\t\t \\black{0.52 $\\pm$ 0.26} &\n\t\t \\black{745.2 $\\pm$ 282} &\n\t\t \\black{0.94 $\\pm$ 0.07} &\n\t\t \\black{0.11 $\\pm$ 0.12} &\n\t\t \\black{3.50 $\\pm$ 0.28} &\n\t\t \\black{0.72 $\\pm$ 0.01} &\n\t\t \\textbf{\\black{0 $\\pm$ 0}} \\\\ \n\t\t\\hline \n\t\\parbox[t]{2mm}{\\multirow{4}{*}{\\rotatebox[origin=c]{90}{\\textbf{divorce}}}} &\n\t $W^*$ &\n\t\t \\black{0.99 $\\pm$ 0.01} &\n\t\t \\black{0.98 $\\pm$ 0.02} &\n\t\t \\textbf{\\black{0.71 $\\pm$ 0.41}} &\n\t\t \\black{0.99 $\\pm$ 0.01} &\n\t\t \\black{0.01 $\\pm$ 0.01} &\n\t\t \\black{0.03 $\\pm$ 0} &\n\t\t \\textbf{\\black{0 $\\pm$ 0.05}} &\n\t\t \\black{0.02 $\\pm$ 0} \\\\ \n & $W_s$ &\n \t0.99 $\\pm$ 0 & \n \t0.95 $\\pm$ 0.06 & \n \t1.54 $\\pm$ 0.13 & \n \t0.91 $\\pm$ 0.01 & \n \t60.17 $\\pm$ 70.64 & \n \t0.04 $\\pm$ 0.01 & \n \t0.05 $\\pm$ 0.01 & \n \t0.08 $\\pm$ 0\\\\\n\t & $\\ce$ &\n\t\t \\textbf{\\black{1.00 $\\pm$ 0}} &\n\t\t \\textbf{\\black{0.99 $\\pm$ 0.02}} &\n\t\t \\black{2.62 $\\pm$ 1.21} &\n\t\t \\textbf{\\black{1.00 $\\pm$ 0}} &\n\t\t \\black{14.11 $\\pm$ 12.32} &\n\t\t \\textbf{\\black{0 $\\pm$ 0}} &\n\t\t \\textbf{\\black{0 $\\pm$ 0}} &\n\t\t \\textbf{\\black{0 $\\pm$ 0}} \\\\ \n\t & $\\mse$ &\n\t\t \\textbf{\\black{1.00 $\\pm$ 0}} &\n\t\t \\black{0.97 $\\pm$ 0.03} &\n\t\t \\black{47.89 $\\pm$ 24.31} &\n\t\t \\textbf{\\black{1.00 $\\pm$ 0}} &\n\t\t \\textbf{\\black{0 $\\pm$ 0}} &\n\t\t \\black{0.73 $\\pm$ 0.07} &\n\t\t \\textbf{\\black{0 $\\pm$ 0.01}} &\n\t\t \\black{0.01 $\\pm$ 0} \\\\ \n\t\\hline\n\\end{tabular}\n\\vspace{3pt}\n\\caption{Each dataset 
contains 4 rows comparing the $W^*$ and $W_s$ against traditional MLPs trained using MSE and CE via SGD given the same network width and depth. The best results are in bold with $\\uparrow\/\\downarrow$ indicating larger\/smaller values preferred.}\n\\label{table:main}\n\\end{table*}\n\\textbf{Evaluation Metrics.} We report the HSIC objective as $\\hsic$ at convergence along with the training\/test accuracy for each dataset. Here, $\\hsic$ is normalized to the range between 0 to 1 using the method proposed by \\citet{cortes2012algorithms}. To corroborate Corollaries~\\ref{corollary:mse} and \\ref{corollary:ce}, we also record $\\mse$ and $\\ce$. To evaluate the sample geometry predicted by \\eq{eq:scatter_ratio}, we recorded the scatter trace ratio $\\sil$ to measure the compactness of samples within and between classes. The angular distance between samples in $\\cS$ and $\\cS^c$ as predicted by \\eq{eq:converged_kernel} is evaluated with the Cosine Similarity Ratio ($\\csr$). The equations for normalized $\\hsic$ and $\\csr$ are\n $\\hsic = \\frac{\\hsic(\\fm(X),Y)}{\\sqrt{\\hsic(\\fm(X),\\fm(X)) \\hsic(Y,Y)}}$ and \n $\\csr = \\frac\n {\\sum_{i,j \\in \\mathcal{S}^c} \\langle \\fm(x_i), \\fm(x_j) \\rangle}\n {\\sum_{i,j \\in \\mathcal{S}} \\langle \\fm(x_i), \\fm(x_j) \\rangle}$.\n\n\\textbf{Confirming Theorem 1. }\nSince Thm.~\\ref{thm:hsequence} guarantees an optimal convergence for \\textit{any} dataset given $W_s$, we designed an Adversarial dataset of high complexity, i.e., the sample pairs in $\\cS^c$ are intentionally placed significantly closer than samples pairs in $\\cS$. We next designed a Random dataset with completely random labels. We then simulated Thm.~\\ref{thm:hsequence} in Python and plotted the sample behavior in Fig.~\\ref{fig:thm_proof}. The original 2-dimensional data is shown next to its 1-dimensional results: each line represents the 1D output at that layer. As predicted by the theorem, the \\RS for both datasets converges to the global optimum at the 12th layer and perfectly separated the samples based on labels. \\textit{This experiment simultaneously confirms Thm.~\\ref{thm:hsequence} and the idea that simple and repetitive patterns as weights can incrementally improve a network to classify any pattern during training. 
It also supports the argument that gradients might not be theoretically necessary to optimize deep networks.}\n\n\\begin{table*}[t]\n\\centering\n\\tiny\n\\setlength{\\tabcolsep}{1.5pt}\n\\renewcommand{\\arraystretch}{1.3}\n\\begin{tabular}{|c|c|c|c|c|c||c|c|c|c|c|}\n\t\\hline\n\t & \t \\multicolumn{5}{|c||}{Training Accuracy} & \n\t \\multicolumn{5}{|c|}{Test Accuracy} \\\\ \n\t\\hline\n\t & \tW* & Ws & GP & Arc-cos & NTK & W* & Ws & GP & Arc-cos & NTK \\\\ \n\t\\hline\n\t\\textbf{adversarial} & \n\t\\textbf{1.00 $\\pm$ 0.00} &\n\t\\textbf{1.00 $\\pm$ 0.00} &\n\t0.56 $\\pm$ 0.02 &\n\t0.53 $\\pm$ 0.02 &\n\t0.52 $\\pm$ 0.01&\n\t\\textbf{0.53 $\\pm$ 0.00} &\n\t0.50 $\\pm$ 0.16 &\n\t0.17 $\\pm$ 0.15 &\n\t0.25 $\\pm$ 0.08 &\n\t0.30 $\\pm$ 0.06\n\\\\ \n\t\\textbf{random} & \n\t\\textbf{1.00 $\\pm$ 0.00} &\n\t\\textbf{1.00 $\\pm$ 0.00} &\n\t0.95 $\\pm$ 0.02 &\n\t0.66 $\\pm$ 0.02 &\n\t0.63 $\\pm$ 0.03&\n\t0.40 $\\pm$ 0.02 &\n\t0.32 $\\pm$ 0.14 &\n\t\\textbf{0.55 $\\pm$ 0.22} &\n\t0.53 $\\pm$ 0.21 &\n\t0.37 $\\pm$ 0.21\n\\\\ \n\t\\textbf{spiral} & \n\t\\textbf{1.00 $\\pm$ 0.00} &\n\t\\textbf{1.00 $\\pm$ 0.00} &\n\t0.99 $\\pm$ 0.00 &\n\t0.99 $\\pm$ 0.00 &\n\t0.99 $\\pm$ 0.00&\n\t\\textbf{0.99 $\\pm$ 0.01} &\n\t0.97 $\\pm$ 0.00 &\n\t0.99 $\\pm$ 0.01 &\n\t\\textbf{0.99 $\\pm$ 0.01} &\n\t0.98 $\\pm$ 0.02\n\\\\ \n\t\\textbf{wine} & \n\t0.99 $\\pm$ 0.00 &\n\t1.00 $\\pm$ 0.00 &\n\t\\textbf{1.0 $\\pm$ 0.0} &\n\t\\textbf{1.0 $\\pm$ 0.0} &\n\t\\textbf{1.0 $\\pm$ 0.0} &\n\t\\textbf{0.99 $\\pm$ 0.03} &\n\t0.94 $\\pm$ 0.03 &\n\t0.86 $\\pm$ 0.11 &\n\t0.94 $\\pm$ 0.07 &\n\t0.96 $\\pm$ 0.04\n\\\\ \n\t\\textbf{cancer} & \n\t\\textbf{0.99 $\\pm$ 0.00} &\n\t\\textbf{0.99 $\\pm$ 0.00} &\n\t\\textbf{0.99 $\\pm$ 0.00} &\n\t\\textbf{0.99 $\\pm$ 0.00} &\n\t0.98 $\\pm$ 0.00&\n\t\\textbf{0.98 $\\pm$ 0.00} &\n\t0.98 $\\pm$ 0.02 &\n\t0.97 $\\pm$ 0.02 &\n\t0.97 $\\pm$ 0.02 &\n\t0.97 $\\pm$ 0.02\n\\\\ \n\t\\textbf{car} & \n\t\\textbf{1 $\\pm$ 0.0} &\n\t\\textbf{1 $\\pm$ 0.0} &\n\t\\textbf{1 $\\pm$ 0.0} &\n\t\\textbf{1 $\\pm$ 0.0} &\n\t\\textbf{1 $\\pm$ 0.0} &\n\t\\textbf{1 $\\pm$ 0.0} &\n\t0.99 $\\pm$ 0.00 &\n\t0.99 $\\pm$ 0.01 &\n\t\\textbf{1.0 $\\pm$ 0.0} &\n\t\\textbf{1.0 $\\pm$ 0.0} \n\\\\ \n\t\\textbf{face} & \n\t\\textbf{1.0 $\\pm$ 0.0} &\n\t\\textbf{1.0 $\\pm$ 0.0} &\n\t0.55 $\\pm$ 0.02 &\n\t\\textbf{1.0 $\\pm$ 0.0} &\n\t\\textbf{1.0 $\\pm$ 0.0} &\n\t\\textbf{1.0 $\\pm$ 0.0} &\n\t0.99 $\\pm$ 0.01 &\n\t0.22 $\\pm$ 0.05 &\n\t0.79 $\\pm$ 0.32 &\n\t0.61 $\\pm$ 0.39\n\\\\ \n\t\\textbf{divorce} & \n\t0.99 $\\pm$ 0.01 &\n\t0.99 $\\pm$ 0.01 &\n\t\\textbf{1.00 $\\pm$ 0.0} &\n\t\\textbf{1.00 $\\pm$ 0.0} &\n\t\\textbf{1.00 $\\pm$ 0.0} &\n\t\\textbf{0.99 $\\pm$ 0.01} &\n\t0.97 $\\pm$ 0.05 &\n\t0.94 $\\pm$ 0.06 &\n\t0.95 $\\pm$ 0.12 &\n\t0.97 $\\pm$ 0.07\n\\\\ \n\t\\textbf{cifar10} & \n\t\\textbf{0.99 $\\pm$ 0.01} &\n\t0.81 $\\pm$ 0.00 &\n\t0.77 $\\pm$ 0.01 &\n\t0.94 $\\pm$ 0.01 &\n\t0.94 $\\pm$ 0.01 &\n\t\\textbf{0.93 $\\pm$ 0.01} &\n\t0.74 $\\pm$ 0.01 &\n\t0.72 $\\pm$ 0.01 &\n\t\\textbf{0.93 $\\pm$ 0.01} &\n\t\\textbf{0.93 $\\pm$ 0.01}\n\\\\ \n\t\\hline\n\\end{tabular}\n\\caption{Comparing Train and test accuracy between recent kernel networks against $W_s\/W^*$. Notice that $W^*$ consistently achieves the highest test Accuracy while $W_s$ performs comparatively to GP. Also, note that $W_s$ and $W^*$ were the only models with a sufficiently large function class to shatter the Adversarial and random dataset. The 10-fold dataset is reshuffled to contrast against Table~\\ref{table:main}. 
}\n\\label{table:comparison_table}\n\\end{table*}\n \n\\textbf{Does $\\hsic$ Improve Monotonically? } Thm.~\\ref{thm:hsequence} also claims that the \\RS should be monotonically increasing. Fig.~\\ref{fig:progression_kdiscovery} (Left) plots out all key metrics during training \\textit{at each layer}. Here, the \\RS is clearly monotonic and converging towards $\\hsic^* \\approx 1$. Moreover, the trends for $\\sil$ and $\\csr$ indicate an incremental clustering of samples into separate partitions. Corresponding to low $\\sil$ and $\\csr$ values, the low MSE and $\\ce$ errors at convergence further reinforce the claims of Corollaries~\\ref{corollary:mse} and \\ref{corollary:ce}. \nThese patterns are highly repeatable, as shown across all datasets included in App.~\\ref{app:metric_graphs}.\n\n\n\\textbf{Can Layer-wise Networks with $W^*\/W_s$ Compete Against MLPs with BP? } We conduct 10-fold cross-validation spanning 8 datasets and report the mean and standard deviation for all key metrics in Table~\\ref{table:main}. Once our model is trained and has learned its structure, we use the same depth and width to train 2 additional MLPs via SGD, where instead of HSIC, $\\mse$ and $\\ce$ are used as $\\hsic$. \\textit{This allows us to compare \\kn to traditional networks of identical sizes that are trained by BP\/SGD.}\n\nObserving Table~\\ref{table:main}, first note that $\\mathbf{\\hsic} \\approx 1$ for both $W_s$ and $W^*$. This corroborates Thm.~\\ref{thm:hsequence}, where $\\hsic^{*}$ is guaranteed given enough layers. Next, notice that $\\boldsymbol{\\sil \\approx 0}$, $\\boldsymbol{\\csr \\approx 0}$, \\textbf{MSE} $\\boldsymbol{\\approx 0}$, and \\textbf{CE} $\\boldsymbol{\\approx 0}$. These results are aligned with Thm.~\\ref{thm:geometric_interpret} and its corollaries since they indicate that samples of the same class are merging into a single point while minimizing MSE and CE for classification. Moreover, note that MSE\/CE failed to shatter the adversarial dataset while $W_s\/W^*$ easily handled the data. This suggests that given the same network size, $W_s\/W^*$ is significantly more flexible.\n\nLastly, notice how $W_s$ and $W^*$ perform favorably against traditional MLPs on almost all benchmarks, confirming that layer-wise network weights can be obtained via basic operations to achieve comparable performance. These results affirmatively answer question 1 while validating layer-wise networks as a promising alternative to BP. \\textit{Our theoretical and experimental results strongly suggest that layer-wise networks with \"closed-form weights\" can match MLP performance. }\n\n\\textbf{What is the Speed Difference? } The execution time is also included for reference in Table~\\ref{table:main}. Since \\kn can be obtained via a single forward pass while SGD requires many iterations of backpropagation, \\kn \\textit{should be faster}. The Time column of Table~\\ref{table:main} confirms this expectation by a wide margin. The biggest difference can be observed on the face dataset: $W^*\/W_s$ finished in 0.78\/0.86 seconds while $\\mse$ required 745 seconds, \\textit{which is almost a 1000-fold difference.}\n\n\n\\textbf{Do $W_s$ and $W^*$ Experimentally Generalize? }\n    With the exception of the two random datasets, the Test Accuracy of $W_s$ consistently performed well against MLPs trained using BP\/SGD. 
This suggests that $W_s$ is a trivially obtainable network solution that generalizes.\n    From an optimization perspective, $W^*$ impressively generalized even better across all datasets. It further differentiates itself on the high-dimensional Face dataset, where it was the only method that avoided overfitting. While the generalizability of $W_s$ and $W^*$ is still ongoing research, the experimental results are promising.\n    \n    \n\\textbf{Is the Network Converging to the Neural Indicator Kernel? } \nA visual pattern of the \\KS converging toward the NIK is shown on the right of Fig.~\\ref{fig:progression_kdiscovery}. We rearrange the samples of the same class to be adjacent to each other. This allows us to quickly evaluate the kernel quality via its block diagonal structure. Since Gaussian kernels are restricted to values between 0 and 1, we let white and dark blue be 0 and 1 respectively, where the gradients reflect values in between. Our proof predicts that the \\KS converges to the NIK, evolving from an uninformative kernel into a highly discriminating kernel with a perfect \\textit{block diagonal structure}. Corresponding to the top row, the bottom row plots out the \\textit{preactivation} at each layer. As predicted by Thm.~\\ref{thm:geometric_interpret}, the samples of the same class incrementally converge towards a single point. This pattern is consistently observed on all datasets, and the complete collection of the \\textit{kernel sequences} for each dataset can be found in App.~\\ref{app:kernel_sequence_graph}. \n\n\n\n\n\\textbf{Comparing to Other Deep Kernel Frameworks. }\nThe development of the \\RS primarily focused on the biological motivation to model layer-wise networks; it was not designed to automatically outperform all existing networks. Table~\\ref{table:main} satisfies our primary objective by showing that our approach already performs comparably to traditional MLPs trained with BP. \nFor the kernel community, here we supply additional experiments comparing $W_s\/W^*$ against several recently proposed kernel networks to demonstrate surprisingly competitive results in Table~\\ref{table:comparison_table}. First, note that $W_s$ reliably achieves test accuracy comparable to GP while $W^*$ consistently outperforms all kernel networks. This suggests that the \\RS yields networks that not only generalize competitively against traditional MLPs but are also comparable to some recently proposed kernel networks. Next, notice that for the Training Accuracy, only $W_s$ and $W^*$ were sufficiently expressive to shatter both the adversarial and random datasets, confirming the expressiveness of layer-wise networks. \n\n\n\\textbf{Conclusion.} \nWe have comprehensively tested each theoretical claim, demonstrating how a layer-wise network modeled as an \\RS can yield closed-form solutions that perform comparably to MLPs. The convincing results from our experiments strongly align with the predictions made by our theorems, answering the two central questions of this exploratory work. \n\nIndeed, a repetition of simple rules can incrementally construct powerful networks capable of classifying any pattern, bypassing both BP and SGD. Modeling MLPs as a \\KS allows us to design convergent behaviors for layer-wise networks to achieve classification while identifying the network depth. 
\n\n\n\n\n\n\\section*{Checklist}\n\\begin{enumerate}\n\n\\item For all authors...\n\\begin{enumerate}\n \\item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?\n \\answerYes{} \\\\\n \\textbf{Justification. } \\textit{The central contribution of this work is presented as Theorem 1 and 2. These two findings answer the two main questions raised in the abstract\/Intro. Namely, we have first shown that SGD and backpropagation are not prerequisites to training a network for classification given the closed-form solution $W_s$. We have secondarily shown the mathematical viability of learning the network depth via a network convergence behavior. Moreover, we purposely designed the experiments to test each theoretical claim carefully. Therefore, both the theorems and the experiments are structured to reinforce our contributions as proposed in abstract\/Into.} \n \\item Did you describe the limitations of your work?\n \\answerYes{} \\\\\n \\textbf{Justification. } \\textit{See the last section of the last page under the title \"Limitations.\" }\n \\item Did you discuss any potential negative societal impacts of your work?\n \\answerYes{} \\\\\n \\textbf{Justification. } \\textit{Since our work is at such a fundamental and theoretical level, it would be difficult or even impossible to foresee its immediate implication. However, we felt that a key motivation of this work will have a significant societal impact and should be discussed by the community. \n \n } \\\\\n \\begin{adjustwidth}{0.2in}{0.2in}\n \\textbf{Broader Impact. } \n Finding an alternative to BP also has significant climate implications. \\citet{strubell2019energy} have shown that some standard AI models can emit over 626,000 pounds of carbon dioxide; a carbon footprint five times greater than the lifetime usage of a car. This level of emission is simply not sustainable in light of our continual explosive growth. Therefore, the environmental impact of BP necessitates a cheaper alternative. Looking at nature, we can be inspired by the brain's learning capability using only a fraction of the energy. Perhaps artificial neurons can also train without the high energy cost to the environment. This is the moral and the foundational motivation for this work in identifying the existence of $W_s$. A closed-form solution holds the potential to significantly reduce the computational requirement and carbon footprint. Even if our work ultimately failed to mimic the brain, we hope to inspire the community to identify other closed-form solutions and go beyond BP. \\\\ \\\\\n This paper aims to promote the discussion of viewing backpropagation alternatives not only as an academic exercise but also as a climate imperative. Yet, this topic is largely ignored by the community. The authors believe the energy costs of training Neural Networks are having a detrimental climate impact and should be an added topic of interest in NeurIPS along with fairness and racial equality. The earth also needs an advocate, why not us? Therefore, we as a community, must begin addressing how we can ameliorate our own carbon footprint. This exploratory work aims to share a potential path forward for further research that may address these concerns with the community. While $W_s$ is still not ready for commercial usage, we sincerely hope that the community begins to build novel algorithms over our work on \\RS and identify a simpler and cheaper path to train our networks. 
\\\\\n \\end{adjustwidth}\n \\item Have you read the ethics review guidelines and ensured that your paper conforms to them?\n \\answerYes{}\n\\end{enumerate}\n\n\\item If you are including theoretical results...\n\\begin{enumerate}\n \\item Did you state the full set of assumptions of all theoretical results?\n \\answerYes{} \\\\\n \\textbf{Justification. } \\textit{We have to the best of our ability state the necessary assumptions. Since the authors are not infallible, it is possible to have oversights. We would be happy to incorporate any suggestions from the reviewers. }\n\t\\item Did you include complete proofs of all theoretical results?\n \\answerYes{} \\\\\n \\textbf{Justification. } \\textit{We have proven Theorem 1 in Appendix A, and Theorem 2 in Appendix B. Corollary 1 and 2 are proven in Appendix D. For reproducibility, we have purposely written the proof to be instructional. This means that we have clearly defined each symbol and tediously wrote the proof step by step without skipping operations that may appear obvious to a more senior audience. We note that this is not intended to insult the intelligence of reviewers, but it is designed to help younger students to reproduce, understand, and build upon the proof.}\n\\end{enumerate}\n\n\\item If you ran experiments...\n\\begin{enumerate}\n \\item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?\n \\answerYes{} \\\\\n \\textbf{Justification. } \\textit{The source code to implement $W_s$ and $W^*$ are included in the Supplementary materials along with simple instructions to run them. We worked to ensure that the experimental results are easily reproducible in several manners. First, repeating each experiment is as simple as uncommenting the experiment you wish to run. Second, we also included the exact data used to conduct each experiment. Third, we also included the code we used for the competing methods to allow the comparisons easily reproducible. Therefore, the entire process is public, can be examined by any individual, and yields results that should closely track the paper. }\n \\item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?\n \\answerYes{} \\\\\n \\textbf{Justification. } \\textit{The training detail to conduct the experiments are describe in the Experiment section under the subtitle \"Evaluation Metrics\" and \"Experiment Settings\". There are also hyperparameter choices that are less obvious, e.g., how did we choose the $\\sigma$ value for the Gaussian kernel? For this, we included in App. \\ref{app:opt_sigma} a detailed description for the logic behind how they were identified. We even conducted additional experiments and provided figures (also included in the appendix) to demonstrate how it works in practice. The source code for $\\sigma$ identification is included with the Supplementary Material.}\n\t\\item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?\n \\answerYes{} \\\\\n \\textbf{Justification. } \\textit{Every single experiment is conducted with 10-fold cross validation. This implies that each dataset is randomly divided into 10 subsets. Every experiment is conducted 10 times using each 10\\% subset as the test set and the 90\\% as the training. 
We then collect the mean and the standard deviation for each 10-fold experiment and report these values in the format $\\mu \\pm \\sigma$.}\n\t\\item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?\n \\answerYes{} \\\\\n \\textbf{Justification. } \\textit{The computation device description is included under \"Experiment Settings\".}\n\\end{enumerate}\n\n\\item If you are using existing assets (e.g., code, data, models) or curating\/releasing new assets...\n\\begin{enumerate}\n \\item If your work uses existing assets, did you cite the creators?\n \\answerYes{} \\\\\n \\textbf{Justification. } \\textit{For every single dataset and every competing method, we cite the work immediately after mentioning the source. These citations can be found in the first two paragraphs of the Experimental section. Every dataset is also included in the Supplementary Material. The link to download the original dataset is also included in App.~\\ref{app:data_detail}. For each competing method, the link to download the source code from GitHub is included in the reference. } \n \\item Did you mention the license of the assets?\n \\answerNA{} \\\\\n \\textbf{Justification. } We have purposely used datasets that are in the public domain. The license information can be easily obtained from the link provided within the paper.\n \\item Did you include any new assets either in the supplemental material or as a URL?\n \\answerNA{} \n \\item Did you discuss whether and how consent was obtained from people whose data you're using\/curating?\n \\answerNA{} \n \\item Did you discuss whether the data you are using\/curating contains personally identifiable information or offensive content?\n \\answerNA{}\n \n \n\\end{enumerate}\n\n\\item If you used crowdsourcing or conducted research with human subjects...\n\\begin{enumerate}\n \\item Did you include the full text of instructions given to participants and screenshots, if applicable?\n \\answerNA{}\n \\item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?\n \\answerNA{}\n \\item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?\n \\answerNA{}\n\\end{enumerate}\n\n\\end{enumerate}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\\section*{List of model and transport parameters}\n\\vspace{-6.0cm}\n\\begin{table*}[h!]\n\\begin{center}\n\\begin{tabular}{ll}\n$\\begin{aligned} k_{F} \\end{aligned}$ & $\\qquad$ Fermi wavevector \\\\\n& \\\\ \n$\\begin{aligned} m \\end{aligned}$ & $\\qquad$ Electron's effective mass \\\\\n& \\\\ \n$\\begin{aligned} J \\end{aligned}$ & $\\qquad$ Exchange constant in the $s$-$d$ model \\\\\n& \\\\\n$\\begin{aligned} v_{i} \\end{aligned}$ & $\\qquad$ Impurity potential \\\\\n& \\\\\n$\\begin{aligned} n_{i} \\end{aligned}$ & $\\qquad$ Impurity concentration \\\\\n& \\\\\n$\\begin{aligned} \\xi_{SO} \\end{aligned}$ & $\\qquad$ Dimensionless spin-orbit coupling constant \\\\\n& \\\\\n$\\begin{aligned} \\varepsilon_{F}=\\frac{\\hbar^{2}k_{F}^{2}}{2m} \\end{aligned}$ & $\\qquad$ Fermi energy \\\\\n& \\\\\n$\\begin{aligned} v_{F}=\\frac{\\hbar k_{F}}{m} \\end{aligned}$ & $\\qquad$ Fermi velocity \\\\\n& \\\\\n$\\begin{aligned} \\beta=\\frac{J}{2\\varepsilon_{F}} \\end{aligned}$ & $\\qquad$ Spin polarization factor \\\\\n& 
\\\\\n$\\begin{aligned} D_{0}=\\frac{mk_{F}}{2\\pi^{2}\\hbar^{2}} \\end{aligned}$ & $\\qquad$ Spin independent density of states per spin at the Fermi level \\\\\n& \\\\\n$\\begin{aligned} \\frac{1}{\\tau_{0}}=\\frac{2\\pi v_{i}^{2}n_{i}D_{0}}{\\hbar} \\end{aligned}$ & $\\qquad$ Spin independent relaxation time \\\\\n& \\\\\n$\\begin{aligned} \\frac{1}{\\tau_{L}}=\\frac{2J}{\\hbar} \\end{aligned}$ & $\\qquad$ Larmor precession time \\\\\n& \\\\\n$\\begin{aligned} \\frac{1}{\\tau_{\\phi}}=\\frac{4J^{2}\\tau_{0}}{\\hbar^{2}} \\end{aligned}$ & $\\qquad$ Spin dephasing relaxation time \\\\\n& \\\\\n$\\begin{aligned} \\frac{1}{\\tau_{sf}}=\\frac{8}{9}\\frac{\\xi_{SO}^{2}}{\\tau_{0}} \\end{aligned}$ & $\\qquad$ Spin-flip relaxation time \\\\\n& \\\\\n$\\begin{aligned} l_{F}=\\tau_{0}v_{F} \\end{aligned}$ & $\\qquad$ Mean-free path \\\\\n& \\\\\n$\\begin{aligned} D=\\frac{\\tau_{0}v_{F}^{2}}{3} \\end{aligned}$ & $\\qquad$ Diffusion coefficient \\\\\n& \\\\\n$\\begin{aligned} \\alpha_{sw}=\\frac{2}{3}\\xi_{SO} \\end{aligned}$ & $\\qquad$ Dimensionless spin swapping constant \\\\\n& \\\\\n$\\begin{aligned} \\alpha_{sj}=\\frac{\\xi_{SO}}{l_{F}k_{F}} \\end{aligned}$ & $\\qquad$ Dimensionless side-jump constant \\\\\n& \\\\\n$\\begin{aligned} \\alpha_{sk}=\\frac{v_{i}mk_{F}}{3\\pi\\hbar^{2}}\\xi_{SO} \\end{aligned}$ & $\\qquad$ Dimensionless skew scattering constant \\\\\n& \\\\\n\\end{tabular}\n\\end{center}\n\\end{table*}\n\n\n\n\n\\pagebreak\n\\section{General formalism}\n\\par We start with a free-electron Hamiltonian $\\hat{\\mathcal{H}}_{0}$ and its Fourier transform $\\hat{\\mathcal{H}}_{\\boldsymbol{k}}$: \n\\begin{equation}\n\\hat{\\mathcal{H}}_{0}=-\\frac{\\hbar^{2}}{2m}\\nabla^{2}\\hat{\\sigma}_{0}+J\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}\\qquad\\xrightarrow{\\,\\,\\mathcal{F}\\,\\,}\\qquad\\hat{\\mathcal{H}}_{\\boldsymbol{k}}=\\frac{\\hbar^{2}_{}\\boldsymbol{k}^{2}_{}}{2m}\\hat{\\sigma}_{0}+J\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}.\n\\end{equation}\n\\noindent Here, the first term stands for the kinetic energy, where $m$ and $\\boldsymbol{k}$ are the electron's effective mass and wave vector, respectively; the second term refers to the exchange interaction in the so-called $s$-$d$ model, where $\\boldsymbol{m}=(\\cos{\\phi}\\sin{\\theta},\\sin{\\phi}\\sin{\\theta},\\cos{\\theta})$ is the magnetization unit vector parametrized in spherical coordinates, $J$ is the exchange coupling parameter, $\\hat{\\boldsymbol{\\sigma}}$ is the Pauli matrix vector, and $\\hat{\\sigma}_{0}$ is the identity matrix. The unperturbed Green's function is defined by $\\hat{\\mathcal{H}}_{\\boldsymbol{k}}$:\n\\begin{equation}\n\\hat{G}_{0,\\boldsymbol{k}E}^{R(A)}=\\left[E-\\hat{\\mathcal{H}}_{\\boldsymbol{k}}\\pm i\\eta\\right]^{-1}=\\sum\\limits_{s=\\pm}\\frac{|s\\rangle\\langle s|}{E-E_{\\boldsymbol{k}s}\\pm i\\eta}=\\frac{1}{2}\\sum\\limits_{s=\\pm}\\frac{\\hat{\\sigma}_{0}+s\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}}{E-E_{\\boldsymbol{k}s}\\pm i\\eta},\n\\label{eq:gzero}\n\\end{equation}\n\\noindent where $s$ refers to the spin index, $|s\\rangle$ is the corresponding eigenstate:\n\\begin{equation}\n|s\\rangle=\\left( \\begin{array}{c}\nse^{-i\\phi}\\sqrt{\\frac{1+s\\cos{\\theta}}{2}}\\\\\n\\sqrt{\\frac{1-s\\cos{\\theta}}{2}}\\end{array} \\right),\n\\end{equation}\n\\noindent $E_{\\boldsymbol{k}s}=E_{\\boldsymbol{k}}+sJ$, $E_{\\boldsymbol{k}}=\\hbar^{2}\\boldsymbol{k}^{2}\/2m$, and $\\eta$ is a positive infinitesimal. 
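\n\\par As a side remark, the spectral form of the unperturbed Green's function in Eq.~(\\ref{eq:gzero}) is easy to verify numerically. The short Python sketch below (a toy check; the values of $J$, $\\boldsymbol{m}$, $\\boldsymbol{k}$, $E$ and $\\eta$ are arbitrary illustrative choices, in units where $\\hbar=m=1$) confirms that $\\hat{\\mathcal{H}}_{\\boldsymbol{k}}$ has eigenvalues $E_{\\boldsymbol{k}\\pm}$ and that the projectors $\\frac{1}{2}(\\hat{\\sigma}_{0}+s\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m})$ resolve the matrix inverse:\n\\begin{verbatim}\nimport numpy as np\n\n# Pauli matrices and the 2x2 identity\ns0 = np.eye(2, dtype=complex)\nsigma = np.array([[[0, 1], [1, 0]],\n                  [[0, -1j], [1j, 0]],\n                  [[1, 0], [0, -1]]], dtype=complex)\ndot_sigma = lambda a: np.einsum('i,ijk->jk', a, sigma)\n\n# Illustrative parameters (hbar = m = 1); all values are arbitrary choices\nJ, eta, E = 0.3, 1e-3, 0.6\ntheta, phi = 0.7, 1.2\nm_vec = np.array([np.cos(phi)*np.sin(theta),\n                  np.sin(phi)*np.sin(theta),\n                  np.cos(theta)])\nk = np.array([0.4, -0.2, 0.9])\nEk = 0.5*np.dot(k, k)                       # E_k = hbar^2 k^2 / 2m\n\n# H_k = E_k sigma_0 + J sigma.m has eigenvalues E_k - J and E_k + J\nHk = Ek*s0 + J*dot_sigma(m_vec)\nprint(np.allclose(np.linalg.eigvalsh(Hk), [Ek - J, Ek + J]))\n\n# Spectral form of G_0^R: sum_s (sigma_0 + s sigma.m)/2 / (E - E_ks + i eta)\nGR_direct = np.linalg.inv((E + 1j*eta)*s0 - Hk)\nGR_spectral = sum(0.5*(s0 + s*dot_sigma(m_vec))/(E - (Ek + s*J) + 1j*eta)\n                  for s in (+1, -1))\nprint(np.allclose(GR_direct, GR_spectral))\n\\end{verbatim}\n\\noindent Both checks print \\texttt{True}.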
\n\\par Next, we consider the impurity Hamiltonian with spin-orbit coupling:\n\\begin{equation}\n\\label{eq:imppot}\n\\hat{\\mathcal{H}}_{\\mathrm{imp}}=\\sum\\limits_{\\boldsymbol{R}_{i}}V(\\boldsymbol{r}-\\boldsymbol{R}_{i})\\hat{\\sigma}_{0}+\\frac{\\xi_{SO}}{\\hbar k_{F}^{2}}\\sum_{\\boldsymbol{R}_{i}}\\left(\\nabla V(\\boldsymbol{r}-\\boldsymbol{R}_{i})\\times\\hat{\\boldsymbol{p}}\\right)\\cdot\\hat{\\boldsymbol{\\sigma}},\n\\end{equation}\n\\noindent where $\\hat{\\boldsymbol{p}}=-i\\hbar\\partial_{\\boldsymbol{r}}$ is the momentum operator, $V(\\boldsymbol{r}-\\boldsymbol{R}_{i})=v_{i}\\delta(\\boldsymbol{r}-\\boldsymbol{R}_{i})$ is the on-site potential at the impurity site $\\boldsymbol{R}_{i}$, $\\xi_{SO}$ is the spin-orbit coupling parameter (defined as a dimensionless quantity), and $k_{F}$ is the Fermi wavevector. Here, we neglect the localization effects and electron-electron correlations, and assume a short-range impurity potential. In the reciprocal space, it can be written as:\\cite{rammer1}\n\\begin{equation}\n\\hat{\\mathcal{H}}_{\\boldsymbol{k}\\boldsymbol{k'}}=\\Omega\\langle\\boldsymbol{k}|\\hat{\\mathcal{H}}_{\\mathrm{imp}}|\\boldsymbol{k'}\\rangle,\n\\end{equation}\n\\noindent where the momentum eigenstates are defined as $\\langle\\boldsymbol{r}|\\boldsymbol{k}\\rangle=\\Omega^{-1\/2}e^{i\\boldsymbol{k}\\cdot\\boldsymbol{r}}$, and $\\Omega$ is the volume of the system. Then, by using the following identities:\n\\begin{equation}\n\\int\\limits_{\\Omega}d\\boldsymbol{r}\\,f(\\boldsymbol{r})\\delta(\\boldsymbol{r}-\\boldsymbol{r}_{i})=f(\\boldsymbol{r}_{i}),\\qquad\\int\\limits_{\\Omega}d\\boldsymbol{r}\\,f(\\boldsymbol{r})\\nabla\\delta(\\boldsymbol{r}-\\boldsymbol{r}_{i})=-\\int\\limits_{\\Omega}d\\boldsymbol{r}\\,\\nabla f(\\boldsymbol{r})\\delta(\\boldsymbol{r}-\\boldsymbol{r}_{i}),\n\\end{equation}\n\\noindent we obtain:\n\\begin{equation}\n\\begin{aligned}\n\\hat{\\mathcal{H}}_{\\boldsymbol{k}\\boldsymbol{k'}}&=\\sum\\limits_{\\boldsymbol{R}_{i}}\\int\\limits_{\\Omega}d\\boldsymbol{r}\\,\\left[v_{i}\\delta(\\boldsymbol{r}-\\boldsymbol{R}_{i})\\hat{\\sigma}_{0}e^{-i(\\boldsymbol{k}-\\boldsymbol{k'})\\cdot\\boldsymbol{r}}-iv_{i}\\frac{\\xi_{SO}}{k_{F}^{2}}e^{-i\\boldsymbol{k}\\cdot\\boldsymbol{r}}\\Big(\\nabla\\delta(\\boldsymbol{r}-\\boldsymbol{R}_{i})\\times\\partial_{\\boldsymbol{r}}\\Big)\\cdot\\hat{\\boldsymbol{\\sigma}}e^{i\\boldsymbol{k'}\\cdot\\boldsymbol{r}} \\right]\\\\\n&=V(\\boldsymbol{k}-\\boldsymbol{k'})\\left[\\hat{\\sigma}_{0}+i\\frac{\\xi_{SO}}{k_{F}^{2}}\\hat{\\boldsymbol{\\sigma}}\\cdot(\\boldsymbol{k}\\times\\boldsymbol{k'})\\right],\n\\end{aligned}\n\\label{eq:potsoi}\n\\end{equation}\n\\noindent where $V(\\boldsymbol{k}-\\boldsymbol{k'})$ is the Fourier transform of the impurity on-site potential:\n\\begin{equation}\nV(\\boldsymbol{k}-\\boldsymbol{k'})=v_{i}\\sum_{\\boldsymbol{R}_{i}}e^{-i(\\boldsymbol{k}-\\boldsymbol{k'})\\cdot\\boldsymbol{R}_{i}}.\n\\end{equation}\n\\par We proceed to write a kinetic equation by means of the Keldysh formalism:\n\\begin{equation}\n\\underline{\\hat{G}}^{-1}=\\hat{G}_{0}^{-1}-\\underline{\\Sigma},\\qquad \n\\underline{\\hat{G}}=\\left( \\begin{array}{cc}\n\\hat{G}^{R} &\\hat{G}^{K}\\\\\n0&\\hat{G}^{A}\\end{array} \\right),\\qquad\n\\underline{\\hat{\\Sigma}}=\\left( \\begin{array}{cc}\n\\hat{\\Sigma}^{R} &\\hat{\\Sigma}^{K}\\\\\n0&\\hat{\\Sigma}^{A}\\end{array} \\right),\n\\end{equation}\n\\noindent where $\\underline{\\hat{G}}$ and $\\underline{\\hat{\\Sigma}}$ are the Green's function and self-energy in 
the Keldysh space; the indexes $R$, $A$ and $K$ stand for the retarded, advanced and Keldysh components, respectively, and $\\hat{G}_{0}^{-1}=i\\hbar\\partial_{t}-\\hat{\\mathcal{H}}_{0}$. In the semiclassical approximation, a set of diffusive equations for the non-equilibrium charge and spin densities can be derived through the distribution function $\\hat{g}_{\\boldsymbol{k}}\\equiv \\hat{g}_{\\boldsymbol{k}}(\\boldsymbol{R},T)$ defined as the Wigner representation of the Keldysh Green's function $\\hat{G}^{K}$:\\cite{rammer2}\n\\begin{equation}\n\\begin{aligned}\n\\hat{G}^{K}(\\boldsymbol{r}_{1},t_{1};\\boldsymbol{r}_{2},t_{2})&\\,\\,\\xrightarrow{\\mathcal{W}}\\hat{G}^{K}(\\boldsymbol{R}+\\frac{\\boldsymbol{r}}{2},T+\\frac{t}{2};\\boldsymbol{R}-\\frac{\\boldsymbol{r}}{2},T-\\frac{t}{2})\\equiv\\hat{G}^{K}(\\boldsymbol{r},t;\\boldsymbol{R},T)\\\\\n&\\,\\,\\xrightarrow{\\mathcal{F}}\\hat{G}^{K}(\\boldsymbol{r},t;\\boldsymbol{R},T)=\\int\\frac{dE}{2\\pi}\\frac{d\\boldsymbol{k}}{(2\\pi)^{3}}\\,\\,\\hat{g}^{K}_{\\boldsymbol{k}E}(\\boldsymbol{R},T)e^{-\\frac{i}{\\hbar}Et}e^{i\\boldsymbol{r}\\cdot\\boldsymbol{k}}\\\\\n&\\,\\,\\xrightarrow{\\,\\,}\\hat{g}_{\\boldsymbol{k}}=i\\int\\frac{dE}{2\\pi}\\,\\hat{g}^{K}_{\\boldsymbol{k}E}(\\boldsymbol{R},T),\n\\end{aligned}\n\\end{equation}\n\\noindent where the relative $\\boldsymbol{r}=\\boldsymbol{r}_{1}-\\boldsymbol{r}_{2}$, $t=t_{1}-t_{2}$ and center-of-mass $\\boldsymbol{R}=(\\boldsymbol{r}_{1}+\\boldsymbol{r}_{2})\/2$, $T=(t_{1}+t_{2})\/2$ coordinates are introduced. In the dilute limit, we can employ the Kadanoff-Baym anzats:\n\\begin{equation}\n\\underline{\\hat{G}}_{\\boldsymbol{k}E}(\\boldsymbol{R},T)=\\left( \\begin{array}{cc}\n\\hat{G}^{R}_{\\boldsymbol{k}E} &\\hat{g}^{K}_{\\boldsymbol{k}E}(\\boldsymbol{R},T)\\\\\n0&\\hat{G}^{A}_{\\boldsymbol{k}E}\\end{array} \\right)\n\\end{equation}\n\\noindent and\n\\begin{equation}\n\\hat{g}^{K}_{\\boldsymbol{k}E}(\\boldsymbol{R},T)=\\hat{G}^{R}_{\\boldsymbol{k}E}\\,\\hat{g}_{\\boldsymbol{k}}(\\boldsymbol{R},T)-\\hat{g}_{\\boldsymbol{k}}(\\boldsymbol{R},T)\\hat{G}^{A}_{\\boldsymbol{k}E}.\n\\label{eq:anzats}\n\\end{equation}\n\\noindent The Keldysh Green's function $\\hat{G}^{K}$ satisfies the Kadanoff-Baym equation:\\cite{rammer2}\n\\begin{equation}\n[\\hat{G}^{R}]^{-1}\\ast\\hat{G}^{K}-\\hat{G}^{K}\\ast[\\hat{G}^{A}]^{-1}=\\hat{\\Sigma}^{K}\\ast\\hat{G}^{A}-\\hat{G}^{R}\\ast\\hat{\\Sigma}^{K},\n\\end{equation}\n\\noindent Having applied the Wigner transformation, we use the so-called gradient approximation, where the convolution $\\mathcal{A}\\ast\\mathcal{B}$ of two functions is expressed as:\n\\begin{equation}\n\\begin{aligned}\n\\left(\\mathcal{A}\\ast\\mathcal{B}\\right)_{\\boldsymbol{k}E}(\\boldsymbol{R},T)&\\simeq\\mathcal{A}\\mathcal{B}-\\frac{i\\hbar}{2}\\left(\\partial_{T}\\mathcal{A}\\partial_{E}\\mathcal{B}-\\partial_{E}\\mathcal{A}\\partial_{T}\\mathcal{B}\\right)\\\\\n&-\\frac{i}{2}\\left(\\nabla_{\\boldsymbol{k}}\\mathcal{A}\\cdot\\nabla_{\\boldsymbol{R}}\\mathcal{B}-\\nabla_{\\boldsymbol{R}}\\mathcal{A}\\cdot\\nabla_{\\boldsymbol{k}}\\mathcal{B}\\right).\n\\end{aligned}\n\\end{equation}\n\\noindent Taking into account that $\\hat{G}^{R(A)}$ and $\\hat{\\Sigma}^{R(A)}$ do not depend on the center-of-mass coordinates, we 
obtain:\n\\begin{equation}\n\\begin{aligned}\ni\\hbar\\partial_{T}\\hat{g}^{K}+[\\hat{g}^{K},J\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}]&+\\frac{i}{2}\\left\\{\\nabla_{\\boldsymbol{k}}\\hat{\\mathcal{H}}_{\\boldsymbol{k}},\\nabla_{\\boldsymbol{R}}\\hat{g}^{K}\\right\\}=\\hat{\\Sigma}^{K}\\hat{G}^{A}-\\hat{G}^{R}\\hat{\\Sigma}^{K}+\\hat{\\Sigma}^{R}\\hat{g}^{K}-\\hat{g}^{K}\\hat{\\Sigma}^{A}\\\\\n&-\\frac{i\\hbar}{2}\\left(\\partial_{T}\\hat{\\Sigma}^{K}\\partial_{E}\\hat{G}^{A}+\\partial_{E}\\hat{G}^{R}\\partial_{T}\\hat{\\Sigma}^{K} \\right)\n+\\frac{i\\hbar}{2}\\left(\\partial_{E}\\hat{\\Sigma}^{R}\\partial_{T}\\hat{g}^{K}+\\partial_{T}\\hat{g}^{K}\\partial_{E}\\hat{\\Sigma}^{A} \\right)\\\\\n&-\\frac{i}{2}\\left(\\nabla_{\\boldsymbol{k}}\\hat{\\Sigma}^{R}\\cdot\\nabla_{\\boldsymbol{R}}\\hat{g}^{K}+\\nabla_{\\boldsymbol{R}}\\hat{g}^{K}\\cdot\\nabla_{\\boldsymbol{k}}\\hat{\\Sigma}^{A}\\right)\n+\\frac{i}{2}\\left(\\nabla_{\\boldsymbol{R}}\\hat{\\Sigma}^{K}\\cdot\\nabla_{\\boldsymbol{k}}\\hat{G}^{A}+\\nabla_{\\boldsymbol{k}}\\hat{G}^{R}\\cdot\\nabla_{\\boldsymbol{R}}\\hat{\\Sigma}^{K}\\right),\n\\label{eq:full}\n\\end{aligned}\n\\end{equation}\n\\noindent where $[\\cdot\\,,\\cdot]$ and $\\{\\cdot\\,,\\cdot\\}$ stand for a commutator and anticommutator, respectively. In steady state, we have:\n\\begin{equation}\n\\begin{aligned}\n\\,[\\hat{g}^{K},J\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}]&+\\frac{i}{2}\\left\\{\\nabla_{\\boldsymbol{k}}\\hat{\\mathcal{H}}_{\\boldsymbol{k}},\\nabla_{\\boldsymbol{R}}\\hat{g}^{K}\\right\\}=\\hat{\\Sigma}^{K}\\hat{G}^{A}-\\hat{G}^{R}\\hat{\\Sigma}^{K}+\\hat{\\Sigma}^{R}\\hat{g}^{K}-\\hat{g}^{K}\\hat{\\Sigma}^{A}\\\\\n&-\\frac{i}{2}\\left(\\nabla_{\\boldsymbol{k}}\\hat{\\Sigma}^{R}\\cdot\\nabla_{\\boldsymbol{R}}\\hat{g}^{K}+\\nabla_{\\boldsymbol{R}}\\hat{g}^{K}\\cdot\\nabla_{\\boldsymbol{k}}\\hat{\\Sigma}^{A}\\right)\n+\\frac{i}{2}\\left(\\nabla_{\\boldsymbol{R}}\\hat{\\Sigma}^{K}\\cdot\\nabla_{\\boldsymbol{k}}\\hat{G}^{A}+\\nabla_{\\boldsymbol{k}}\\hat{G}^{R}\\cdot\\nabla_{\\boldsymbol{R}}\\hat{\\Sigma}^{K}\\right).\n\\label{eq:kelfull}\n\\end{aligned}\n\\end{equation}\n\\noindent Finally, in the dilute limit, we can assume that the self-energy is almost constant and neglect its derivatives on the right-hand side:\n\\begin{equation}\n[\\hat{g}^{K},J\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}]+\\frac{i}{2}\\left\\{\\nabla_{\\boldsymbol{k}}\\hat{\\mathcal{H}}_{\\boldsymbol{k}},\\nabla_{\\boldsymbol{R}}\\hat{g}^{K}\\right\\}=\\hat{\\Sigma}^{K}\\hat{G}^{A}-\\hat{G}^{R}\\hat{\\Sigma}^{K}+\\hat{\\Sigma}^{R}\\hat{g}^{K}-\\hat{g}^{K}\\hat{\\Sigma}^{A},\n\\end{equation}\n\\noindent or\n\\begin{equation}\n[\\hat{g}^{K},J\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}]+i\\frac{\\hbar^{2}}{m}(\\boldsymbol{k}\\cdot\\nabla_{\\boldsymbol{R}})\\,\\hat{g}^{K}=\\hat{\\Sigma}^{K}\\hat{G}^{A}-\\hat{G}^{R}\\hat{\\Sigma}^{K}+\\hat{\\Sigma}^{R}\\hat{g}^{K}-\\hat{g}^{K}\\hat{\\Sigma}^{A}.\n\\label{eq:keldysh}\n\\end{equation}\n \n\n\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.55]{pic1.jpg}\n\\end{center}\n\\caption{$a)$ Diagrammatic expansion for scatterings off the static impurity potential. $b)$ Self-consistent Born approximation. $c)$ Skew-scattering diagrams.}\n\\end{figure}\n\n\n\\section{Self-energy}\n\\subsection{Self-consistent Born approximation}\n\\par Let us consider first and second orders of the diagrammatic expansion for the scattering off the static impurity potential (Fig. 1$a$). 
The Green's function $\\underline{\\hat{G}}$ and the corresponding self-energy $\\underline{\\hat{\\Sigma}}$ are defined in the wave vector representation as follows:\n\\begin{equation}\n\\begin{aligned}\n\\underline{\\hat{G}}(\\boldsymbol{k},t_{1};\\boldsymbol{k'},t_{2})&=\\underline{\\hat{G}}_{0}(\\boldsymbol{k},t_{1};\\boldsymbol{k'},t_{2})+\\sum\\limits_{\\{\\boldsymbol{k}_{i}\\}} \\underline{\\hat{G}}_{0}(\\boldsymbol{k},t_{1};\\boldsymbol{k}_{1},t_{2}) \\langle\\boldsymbol{k}_{1}|\\hat{\\mathcal{H}}_{\\mathrm{imp}}|\\boldsymbol{k}_{2}\\rangle\\underline{\\hat{G}}_{0}(\\boldsymbol{k}_{2},t_{1};\\boldsymbol{k'},t_{2})\\\\\n&+\\sum\\limits_{\\{\\boldsymbol{k}_{i}\\}} \\underline{\\hat{G}}_{0}(\\boldsymbol{k},t_{1};\\boldsymbol{k}_{1},t_{2}) \\langle\\boldsymbol{k}_{1}|\\hat{\\mathcal{H}}_{\\mathrm{imp}}|\\boldsymbol{k}_{2}\\rangle\\underline{\\hat{G}}_{0}(\\boldsymbol{k}_{2},t_{1};\\boldsymbol{k}_{3},t_{2})\\langle\\boldsymbol{k}_{3}|\\hat{\\mathcal{H}}_{\\mathrm{imp}}|\\boldsymbol{k}_{4}\\rangle\\underline{\\hat{G}}_{0}(\\boldsymbol{k}_{4},t_{1};\\boldsymbol{k'},t_{2})+...\\\\\n&=\\underline{\\hat{G}}_{0}(\\boldsymbol{k},t_{1};\\boldsymbol{k'},t_{2})+\\sum\\limits_{\\{\\boldsymbol{k}_{i}\\}} \\underline{\\hat{G}}_{0}(\\boldsymbol{k},t_{1};\\boldsymbol{k}_{1},t_{2})\\underline{\\hat{\\Sigma}}(\\boldsymbol{k}_{1},t_{1};\\boldsymbol{k}_{2},t_{2})\\underline{\\hat{G}}(\\boldsymbol{k}_{2},t_{1};\\boldsymbol{k'},t_{2}),\n\\end{aligned}\n\\label{eq:selfscat}\n\\end{equation}\n\\noindent where $\\underline{\\hat{G}}_{0}$ is a free propagator. To consider a particle moving in a random potential we take the average over different spatial configurations of the ensemble of $N$ impurities. Upon impurity-averaging the first order term in Eq.~(\\ref{eq:selfscat}) is a constant and can be renormalized away. For the second order term, we take into account all two-line irreducible diagrams corresponding to the double scattering off the same impurity and neglect the so-called crossing diagrams, where impurity lines cross and give a small contribution to the region of interest, $E\\simeq E_{F}$ and $k\\simeq k_{F}$.\\cite{rammer1} This is nothing else but the self-consistent Born approximation (Fig. 1$b$). 
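\n\\par The configuration average underlying this approximation can be illustrated with a toy numerical experiment (shown below; the box size, impurity number and momentum transfers are arbitrary choices). Averaging the product of two impurity potentials over random impurity positions leaves a contribution of order $v_{i}^{2}N$ only when the two momentum transfers balance, and a statistically vanishing result otherwise, which is precisely the averaging rule employed in the next step:\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\n\n# Toy check of impurity averaging: N impurities uniform in a box of side L,\n# V(q) = v_i * sum_i exp(-i q . R_i). All numbers are illustrative.\nN, L, n_conf, v_i = 200, 10.0, 2000, 1.0\n\ndef V(q, R):\n    return v_i*np.exp(-1j*(R @ q)).sum()\n\nq12 = (2*np.pi/L)*np.array([1.0, 0.0, 0.0])      # k1 - k2\nq34_bal = -q12                                   # k3 - k4 = -(k1 - k2)\nq34_unb = (2*np.pi/L)*np.array([0.0, 2.0, 0.0])  # unbalanced transfer\n\navg_bal = avg_unb = 0.0\nfor _ in range(n_conf):\n    R = rng.uniform(0.0, L, size=(N, 3))\n    avg_bal += (V(q12, R)*V(q34_bal, R)).real\n    avg_unb += (V(q12, R)*V(q34_unb, R)).real\n\nprint(avg_bal/n_conf, v_i**2*N)   # close to v_i^2 N\nprint(avg_unb/n_conf)             # close to zero (statistical noise)\n\\end{verbatim}\n\\par 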
Using Eq.~(\\ref{eq:potsoi}) for the impurity potential, the self-energy is given by three terms:\n\\begin{equation}\n\\begin{aligned}\n\\underline{\\hat{\\Sigma}}^{1a}(\\boldsymbol{k}_{1},t_{1};\\boldsymbol{k}_{4},t_{2})=\\frac{1}{\\Omega^{2}}\\sum\\limits_{\\boldsymbol{k}_{2}\\boldsymbol{k}_{3}}\\underline{\\hat{G}}(\\boldsymbol{k}_{2},t_{1};\\boldsymbol{k}_{3},t_{2})\\left\\langle V(\\boldsymbol{k}_{1}-\\boldsymbol{k}_{2})V(\\boldsymbol{k}_{3}-\\boldsymbol{k}_{4})\\right\\rangle,\n\\end{aligned}\n\\end{equation}\n\\begin{equation}\n\\begin{aligned}\n\\underline{\\hat{\\Sigma}}^{1b}(\\boldsymbol{k}_{1},t_{1};\\boldsymbol{k}_{4},t_{2})&=\\frac{i}{\\Omega^{2}}\\frac{\\xi_{SO}}{k_{F}^{2}}\\sum\\limits_{\\boldsymbol{k}_{2}\\boldsymbol{k}_{3}}\\left[[(\\boldsymbol{k}_{1}\\times\\boldsymbol{k}_{2})\\cdot\\hat{\\boldsymbol{\\sigma}}]\\underline{\\hat{G}}(\\boldsymbol{k}_{2},t_{1};\\boldsymbol{k}_{3},t_{2})\\right.\\\\\n&\\left.+\\,\\underline{\\hat{G}}(\\boldsymbol{k}_{2},t_{1};\\boldsymbol{k}_{3},t_{2})[(\\boldsymbol{k}_{3}\\times\\boldsymbol{k}_{4})\\cdot\\hat{\\boldsymbol{\\sigma}}]\\right]\\left\\langle V(\\boldsymbol{k}_{1}-\\boldsymbol{k}_{2})V(\\boldsymbol{k}_{3}-\\boldsymbol{k}_{4})\\right\\rangle,\n\\end{aligned}\n\\end{equation}\n\\begin{equation}\n\\begin{aligned}\n\\underline{\\hat{\\Sigma}}^{1c}(\\boldsymbol{k}_{1},t_{1};\\boldsymbol{k}_{4},t_{2})&=-\\frac{1}{\\Omega^{2}}\\frac{\\xi_{SO}^{2}}{k^{4}_{F}}\\sum\\limits_{\\boldsymbol{k}_{2}\\boldsymbol{k}_{3}}[(\\boldsymbol{k}_{1}\\times\\boldsymbol{k}_{2})\\cdot\\hat{\\boldsymbol{\\sigma}}]\\underline{\\hat{G}}(\\boldsymbol{k}_{2},t_{1};\\boldsymbol{k}_{3},t_{2})[(\\boldsymbol{k}_{3}\\times\\boldsymbol{k}_{4})\\cdot\\hat{\\boldsymbol{\\sigma}}]\\left\\langle V(\\boldsymbol{k}_{1}-\\boldsymbol{k}_{2})V(\\boldsymbol{k}_{3}-\\boldsymbol{k}_{4})\\right\\rangle,\\\\\n\\end{aligned}\n\\end{equation}\n\\noindent where impurity averaging leads to:\n\\begin{equation}\n\\begin{aligned}\n \\left\\langle V(\\boldsymbol{k}_{1}-\\boldsymbol{k}_{2})V(\\boldsymbol{k}_{3}-\\boldsymbol{k}_{4})\\right\\rangle&= v_{i}^{2} \\left\\langle\\sum_{\\boldsymbol{R}_{i}}e^{-i(\\boldsymbol{k}_{1}-\\boldsymbol{k}_{2})\\cdot\\boldsymbol{R}_{i}}e^{-i(\\boldsymbol{k}_{3}-\\boldsymbol{k}_{4})\\cdot\\boldsymbol{R}_{i}}\\right\\rangle=v_{i}^{2}N\\delta_{\\boldsymbol{k}_{1}+\\boldsymbol{k}_{3},\\boldsymbol{k}_{2}+\\boldsymbol{k}_{4}}.\n \\end{aligned}\n\\end{equation}\n\\noindent To proceed with the Wigner transformation, we change the variables:\n$$\n\\boldsymbol{k}_{1}=\\boldsymbol{k}+\\frac{\\boldsymbol{q}}{2}\\qquad\\boldsymbol{k}_{2}=\\boldsymbol{k'}+\\frac{\\boldsymbol{q'}}{2}\\qquad\\boldsymbol{k}_{3}=\\boldsymbol{k'}-\\frac{\\boldsymbol{q'}}{2} \\qquad\\boldsymbol{k}_{4}=\\boldsymbol{k}-\\frac{\\boldsymbol{q}}{2}\n$$\n\\noindent and\n$$\nT=\\frac{t_{1}+t_{2}}{2}\\qquad t=t_{1}-t_{2},\n$$\n\\noindent that 
gives:\n\\begin{equation}\n\\begin{aligned}\n\\underline{\\hat{\\Sigma}}^{1a}(\\boldsymbol{k},t;\\boldsymbol{q},T)=\\frac{v_{i}^{2}N}{\\Omega^{2}}\\sum\\limits_{\\boldsymbol{k'}\\boldsymbol{q'}}\\underline{\\hat{G}}(\\boldsymbol{k'},t;\\boldsymbol{q'},T)\\delta_{\\boldsymbol{q},\\boldsymbol{q'}},\n\\end{aligned}\n\\end{equation}\n\\begin{equation}\n\\begin{aligned}\n\\underline{\\hat{\\Sigma}}^{1b}(\\boldsymbol{k},t;\\boldsymbol{q},T)&=\\frac{iv_{i}^{2}N}{\\Omega^{2}}\\frac{\\xi_{SO}}{k_{F}^{2}}\\sum\\limits_{\\boldsymbol{k'}\\boldsymbol{q'}}\\left[\\big[((\\boldsymbol{k}+\\frac{\\boldsymbol{q}}{2})\\times(\\boldsymbol{k'}+\\frac{\\boldsymbol{q'}}{2}))\\cdot\\hat{\\boldsymbol{\\sigma}}\\big]\\underline{\\hat{G}}(\\boldsymbol{k'},t;\\boldsymbol{q'},T)\\right.\\\\\n&\\left.+\\,\\underline{\\hat{G}}(\\boldsymbol{k'},t;\\boldsymbol{q'},T)\\big[((\\boldsymbol{k'}-\\frac{\\boldsymbol{q'}}{2})\\times(\\boldsymbol{k}-\\frac{\\boldsymbol{q}}{2}))\\cdot\\hat{\\boldsymbol{\\sigma}}\\big]\\right]\\delta_{\\boldsymbol{q},\\boldsymbol{q'}},\n\\end{aligned}\n\\end{equation}\n\\begin{equation}\n\\begin{aligned}\n\\underline{\\hat{\\Sigma}}^{1c}(\\boldsymbol{k},t;\\boldsymbol{q},T)&=-\\frac{v_{i}^{2}N}{\\Omega^{2}}\\frac{\\xi_{SO}^{2}}{k^{4}_{F}}\\sum\\limits_{\\boldsymbol{k'}\\boldsymbol{q'}}\\big[((\\boldsymbol{k}+\\frac{\\boldsymbol{q}}{2})\\times(\\boldsymbol{k'}+\\frac{\\boldsymbol{q'}}{2}))\\cdot\\hat{\\boldsymbol{\\sigma}}\\big]\\underline{\\hat{G}}(\\boldsymbol{k'},t;\\boldsymbol{q'},T)\\big[((\\boldsymbol{k'}-\\frac{\\boldsymbol{q'}}{2})\\times(\\boldsymbol{k}-\\frac{\\boldsymbol{q}}{2}))\\cdot\\hat{\\boldsymbol{\\sigma}}\\big]\\delta_{\\boldsymbol{q},\\boldsymbol{q'}}.\n\\end{aligned}\n\\end{equation}\n\\noindent The Kronecker delta reflects that translational invariance is recovered upon impurity averaging, and in the continuum limit ($N\/\\Omega\\to n_{i}$) we have:\n\\begin{equation}\n\\begin{aligned}\n\\underline{\\hat{\\Sigma}}^{1a}(\\boldsymbol{k},t;\\boldsymbol{q},T)=v_{i}^{2}n_{i}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\,\\underline{\\hat{G}}(\\boldsymbol{k'},t;\\boldsymbol{q},T),\n\\end{aligned}\n\\end{equation}\n\\begin{equation}\n\\begin{aligned}\n\\underline{\\hat{\\Sigma}}^{1b}(\\boldsymbol{k},t;\\boldsymbol{q},T)&=\\underline{\\hat{\\Sigma}}^{sw}(\\boldsymbol{k},t;\\boldsymbol{q},T)+\\underline{\\hat{\\Sigma}}^{sj}(\\boldsymbol{k},t;\\boldsymbol{q},T)\\\\\n&=
iv_{i}^{2}n_{i}\\frac{\\xi_{SO}}{k_{F}^{2}}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\left[(\\boldsymbol{k}\\times\\boldsymbol{k'})\\cdot\\hat{\\boldsymbol{\\sigma}},\\underline{\\hat{G}}(\\boldsymbol{k'},t;\\boldsymbol{q},T)\\right]\\\\\n&+iv_{i}^{2}n_{i}\\frac{\\xi_{SO}}{2k_{F}^{2}}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\left\\{[\\boldsymbol{q}\\times(\\boldsymbol{k}'-\\boldsymbol{k})]\\cdot\\hat{\\boldsymbol{\\sigma}},\\underline{\\hat{G}}(\\boldsymbol{k'},t;\\boldsymbol{q},T)\\right\\},\n\\end{aligned}\n\\end{equation}\n\\begin{equation}\n\\begin{aligned}\n\\underline{\\hat{\\Sigma}}^{1c}(\\boldsymbol{k},t;\\boldsymbol{q},T)&=-v_{i}^{2}n_{i}\\frac{\\xi^{2}_{SO}}{k_{F}^{4}}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}[(\\boldsymbol{k}\\times\\boldsymbol{k'}+\\frac{1}{2}\\boldsymbol{q}\\times\\boldsymbol{k'}+\\frac{1}{2}\\boldsymbol{k}\\times\\boldsymbol{q})\\cdot\\hat{\\boldsymbol{\\sigma}}]\\\\\n&\\cdot\\underline{\\hat{G}}(\\boldsymbol{k'},t;\\boldsymbol{q},T)[(\\boldsymbol{k'}\\times\\boldsymbol{k}-\\frac{1}{2}\\boldsymbol{k'}\\times\\boldsymbol{q}-\\frac{1}{2}\\boldsymbol{q}\\times\\boldsymbol{k})\\cdot\\hat{\\boldsymbol{\\sigma}}].\n\\end{aligned}\n\\end{equation}\n\\noindent Having Fourier transformed with respect to $\\boldsymbol{q}$, that gives the Wigner coordinate $\\boldsymbol{R}$:\n\\begin{equation}\n\\begin{aligned}\n\\hat{G}(\\boldsymbol{r}_{1};\\boldsymbol{r}_{4})&=\\int\\frac{d\\boldsymbol{k}_{1}}{(2\\pi)^{3}}\\int\\frac{d\\boldsymbol{k}_{4}}{(2\\pi)^{3}}\\hat{G}(\\boldsymbol{k}_{1};\\boldsymbol{k}_{4})e^{i(\\boldsymbol{k}_{1}\\boldsymbol{r}_{1}-\\boldsymbol{k}_{4}\\boldsymbol{r}_{4})}\\\\\n&=\\int\\frac{d\\boldsymbol{k}}{(2\\pi)^{3}}\\int\\frac{d\\boldsymbol{q}}{(2\\pi)^{3}}\\hat{G}(\\boldsymbol{k}+\\frac{1}{2}\\boldsymbol{q};\\boldsymbol{k}-\\frac{1}{2}\\boldsymbol{q})e^{i(\\boldsymbol{k}+\\frac{1}{2}\\boldsymbol{q})\\cdot(\\boldsymbol{R}+\\frac{1}{2}\\boldsymbol{r})}e^{-i(\\boldsymbol{k}-\\frac{1}{2}\\boldsymbol{q})\\cdot(\\boldsymbol{R}-\\frac{1}{2}\\boldsymbol{r})}\\\\\n&=\\int\\frac{d\\boldsymbol{k}}{(2\\pi)^{3}}\\int\\frac{d\\boldsymbol{q}}{(2\\pi)^{3}}\\hat{G}(\\boldsymbol{k};\\boldsymbol{q})e^{i\\boldsymbol{q}\\cdot\\boldsymbol{R}}e^{i\\boldsymbol{k}\\cdot\\boldsymbol{r}}=\\hat{G}(\\boldsymbol{r};\\boldsymbol{R}),\n\\end{aligned}\n\\end{equation}\n\\noindent we get the final form for the self-energy in the mixed representation:\n\\begin{equation}\n\\begin{aligned}\n\\underline{\\hat{\\Sigma}}^{1a}_{\\boldsymbol{k}E}(\\boldsymbol{R},T)&=v_{i}^{2}n_{i}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\,\\underline{\\hat{G}}_{\\boldsymbol{k'}E}(\\boldsymbol{R},T),\\\\\n\\underline{\\hat{\\Sigma}}^{1b}_{\\boldsymbol{k}E}(\\boldsymbol{R},T)&=\\underline{\\hat{\\Sigma}}^{sw}_{\\boldsymbol{k}E}(\\boldsymbol{R},T)+\\underline{\\hat{\\Sigma}}^{sj}_{\\boldsymbol{k}E}(\\boldsymbol{R},T)\\\\\n&= 
iv_{i}^{2}n_{i}\\frac{\\xi_{SO}}{k_{F}^{2}}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\left[(\\boldsymbol{k}\\times\\boldsymbol{k'})\\cdot\\hat{\\boldsymbol{\\sigma}},\\underline{\\hat{G}}_{\\boldsymbol{k'}E}(\\boldsymbol{R},T)\\right]\\\\\n&+v_{i}^{2}n_{i}\\frac{\\xi_{SO}}{2k_{F}^{2}}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\nabla_{\\boldsymbol{R}}\\left\\{(\\boldsymbol{k}'-\\boldsymbol{k})\\times\\hat{\\boldsymbol{\\sigma}},\\underline{\\hat{G}}_{\\boldsymbol{k'}E}(\\boldsymbol{R};T)\\right\\},\\\\\n\\underline{\\hat{\\Sigma}}^{1c}_{\\boldsymbol{k}E}(\\boldsymbol{R},T)&=-v_{i}^{2}n_{i}\\frac{\\xi^{2}_{SO}}{k_{F}^{4}}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}[(\\boldsymbol{k}\\times\\boldsymbol{k'})\\cdot\\hat{\\boldsymbol{\\sigma}}]\\underline{\\hat{G}}_{\\boldsymbol{k'}E}(\\boldsymbol{R},T)[(\\boldsymbol{k'}\\times\\boldsymbol{k})\\cdot\\hat{\\boldsymbol{\\sigma}}].\n\\end{aligned}\n\\end{equation}\n\\noindent At the level of the self-consistent Born approximation, the self-energy is given by the following contributions. The first term $\\underline{\\hat{\\Sigma}}^{1a}$ stands for the standard elastic scattering off the on-site impurity potential. To first order of $\\xi_{SO}$, there are two terms, $\\underline{\\hat{\\Sigma}}^{sw}$ and $\\underline{\\hat{\\Sigma}}^{sj}$, related to the side-jump and spin swapping contributions, respectively. Finally, the second order of $\\xi_{SO}$ yields the Elliot-Yafet spin relaxation mechanism (all gradient terms $\\sim\\xi_{SO}^{2}\\nabla_{\\boldsymbol{R}}$ are neglected on account of their smallness). \n\\par Let us rewrite these contributions for the retarded, advanced and Keldysh Green's functions (taking into account that $\\hat{G}^{R(A)}$ does not depend on the center-of-mass coordinates):\n\\begin{equation}\n\\underline{\\hat{\\Sigma}}_{\\boldsymbol{k}E}(\\boldsymbol{R},T)=\\underline{\\hat{\\Sigma}}^{1a}_{\\boldsymbol{k}E}(\\boldsymbol{R},T)+\\underline{\\hat{\\Sigma}}^{1b}_{\\boldsymbol{k}E}(\\boldsymbol{R},T)+\\underline{\\hat{\\Sigma}}^{1c}_{\\boldsymbol{k}E}(\\boldsymbol{R},T),\n\\end{equation}\n\\noindent which is equivalent to:\n\\begin{equation}\n\\hat{\\Sigma}^{R}_{\\boldsymbol{k}E}=v_{i}^{2}n_{i}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\,\\left[\\hat{\\sigma}_{0}+i\\frac{\\xi_{SO}}{k_{F}^{2}}(\\boldsymbol{k}\\times\\boldsymbol{k'})\\cdot\\hat{\\boldsymbol{\\sigma}}\\right]\\hat{G}^{R}_{\\boldsymbol{k'}E}\\left[\\hat{\\sigma}_{0}-i\\frac{\\xi_{SO}}{k_{F}^{2}}(\\boldsymbol{k}\\times\\boldsymbol{k'})\\cdot\\hat{\\boldsymbol{\\sigma}}\\right],\n\\label{eq:sret}\n\\end{equation}\n\\begin{equation}\n\\hat{\\Sigma}^{A}_{\\boldsymbol{k}E}=v_{i}^{2}n_{i}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\,\\left[\\hat{\\sigma}_{0}+i\\frac{\\xi_{SO}}{k_{F}^{2}}(\\boldsymbol{k}\\times\\boldsymbol{k'})\\cdot\\hat{\\boldsymbol{\\sigma}}\\right]\\hat{G}^{A}_{\\boldsymbol{k'}E}\\left[\\hat{\\sigma}_{0}-i\\frac{\\xi_{SO}}{k_{F}^{2}}(\\boldsymbol{k}\\times\\boldsymbol{k'})\\cdot\\hat{\\boldsymbol{\\sigma}}\\right],\n\\label{eq:sadv}\n\\end{equation}\n\\noindent 
and\n\\begin{equation}\n\\begin{aligned}\n\\hat{\\Sigma}^{K}_{\\boldsymbol{k}E}(\\boldsymbol{R},T)&=v_{i}^{2}n_{i}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\,\\left[\\hat{\\sigma}_{0}+i\\frac{\\xi_{SO}}{k_{F}^{2}}(\\boldsymbol{k}\\times\\boldsymbol{k'})\\cdot\\hat{\\boldsymbol{\\sigma}}\\right]\\hat{g}^{K}_{\\boldsymbol{k'}E}(\\boldsymbol{R},T)\\left[\\hat{\\sigma}_{0}-i\\frac{\\xi_{SO}}{k_{F}^{2}}(\\boldsymbol{k}\\times\\boldsymbol{k'})\\cdot\\hat{\\boldsymbol{\\sigma}}\\right]\\\\\n&+v_{i}^{2}n_{i}\\frac{\\xi_{SO}}{2k_{F}^{2}}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\nabla_{\\boldsymbol{R}}\\left\\{(\\boldsymbol{k}'-\\boldsymbol{k})\\times\\hat{\\boldsymbol{\\sigma}},\\hat{g}^{K}_{\\boldsymbol{k'}E}(\\boldsymbol{R};T)\\right\\}.\n\\label{eq:skel}\n\\end{aligned}\n\\end{equation}\n\n\n\\subsection{Skew-scattering}\n\\par To take into account skew-scattering, one has to go beyond the Born approximation. Starting from third order diagrams (Fig.~1$c$), we obtain the following expressions for the self-energy to first order in $\\xi_{SO}$:\n\\begin{equation}\n\\begin{aligned}\n\\underline{\\hat{\\Sigma}}^{2a}(\\boldsymbol{k}_{1},t_{1};\\boldsymbol{k}_{6},t_{2})=\\frac{i}{\\Omega^{3}}\\frac{\\xi_{SO}}{k_{F}^{2}}\\sum\\limits_{\\boldsymbol{k}_{2},\\boldsymbol{k}_{3},\\boldsymbol{k}_{4},\\boldsymbol{k}_{5}}&[(\\boldsymbol{k}_{1}\\times\\boldsymbol{k}_{2})\\cdot\\hat{\\boldsymbol{\\sigma}}]\\underline{\\hat{G}}_{0}(\\boldsymbol{k}_{2},t_{1};\\boldsymbol{k}_{3},t_{2})\\underline{\\hat{G}}_{0}(\\boldsymbol{k}_{4},t_{1};\\boldsymbol{k}_{5},t_{2})\\\\\n&\\cdot\\left\\langle V(\\boldsymbol{k}_{1}-\\boldsymbol{k}_{2})V(\\boldsymbol{k}_{3}-\\boldsymbol{k}_{4})V(\\boldsymbol{k}_{5}-\\boldsymbol{k}_{6})\\right\\rangle,\n\\end{aligned}\n\\end{equation}\n\\begin{equation}\n\\begin{aligned}\n\\underline{\\hat{\\Sigma}}^{2b}(\\boldsymbol{k}_{1},t_{1};\\boldsymbol{k}_{6},t_{2})=\\frac{i}{\\Omega^{3}}\\frac{\\xi_{SO}}{k_{F}^{2}}\\sum\\limits_{\\boldsymbol{k}_{2},\\boldsymbol{k}_{3},\\boldsymbol{k}_{4},\\boldsymbol{k}_{5}}&\\underline{\\hat{G}}_{0}(\\boldsymbol{k}_{2},t_{1};\\boldsymbol{k}_{3},t_{2})[(\\boldsymbol{k}_{3}\\times\\boldsymbol{k}_{4})\\cdot\\hat{\\boldsymbol{\\sigma}}]\\underline{\\hat{G}}_{0}(\\boldsymbol{k}_{4},t_{1};\\boldsymbol{k}_{5},t_{2})\\\\\n&\\cdot\\left\\langle V(\\boldsymbol{k}_{1}-\\boldsymbol{k}_{2})V(\\boldsymbol{k}_{3}-\\boldsymbol{k}_{4})V(\\boldsymbol{k}_{5}-\\boldsymbol{k}_{6})\\right\\rangle,\n\\end{aligned}\n\\end{equation}\n\\begin{equation}\n\\begin{aligned}\n\\underline{\\hat{\\Sigma}}^{2c}(\\boldsymbol{k}_{1},t_{1};\\boldsymbol{k}_{6},t_{2})=\\frac{i}{\\Omega^{3}}\\frac{\\xi_{SO}}{k_{F}^{2}}\\sum\\limits_{\\boldsymbol{k}_{2},\\boldsymbol{k}_{3},\\boldsymbol{k}_{4},\\boldsymbol{k}_{5}}&\\underline{\\hat{G}}_{0}(\\boldsymbol{k}_{2},t_{1};\\boldsymbol{k}_{3},t_{2})\\underline{\\hat{G}}_{0}(\\boldsymbol{k}_{4},t_{1};\\boldsymbol{k}_{5},t_{2})[(\\boldsymbol{k}_{5}\\times\\boldsymbol{k}_{6})\\cdot\\hat{\\boldsymbol{\\sigma}}]\\\\\n&\\cdot\\left\\langle V(\\boldsymbol{k}_{1}-\\boldsymbol{k}_{2})V(\\boldsymbol{k}_{3}-\\boldsymbol{k}_{4})V(\\boldsymbol{k}_{5}-\\boldsymbol{k}_{6})\\right\\rangle,\n\\end{aligned}\n\\end{equation}\n\\noindent where impurity averaging in the triple scattering off the same impurity potential is implied:\n\\begin{equation}\n\\begin{aligned}\n\\left\\langle V(\\boldsymbol{k}_{1}-\\boldsymbol{k}_{2})V(\\boldsymbol{k}_{3}-\\boldsymbol{k}_{4})V(\\boldsymbol{k}_{5}-\\boldsymbol{k}_{6})\\right\\rangle&= v_{i}^{3}\\left\\langle 
\\sum_{\\boldsymbol{R}_{i}}e^{-i(\\boldsymbol{k}_{1}-\\boldsymbol{k}_{2})\\cdot\\boldsymbol{R}_{i}}e^{-i(\\boldsymbol{k}_{3}-\\boldsymbol{k}_{4})\\cdot\\boldsymbol{R}_{i}}e^{-i(\\boldsymbol{k}_{5}-\\boldsymbol{k}_{6})\\cdot\\boldsymbol{R}_{i}}\\right\\rangle\\\\\n &=v_{i}^{3}N\\delta_{\\boldsymbol{k}_{1}+\\boldsymbol{k}_{3}+\\boldsymbol{k}_{5},\\boldsymbol{k}_{2}+\\boldsymbol{k}_{4}+\\boldsymbol{k}_{6}}.\n\\end{aligned}\n\\end{equation}\n\\noindent Here, we do not consider triple scatterings off the on-site impurity potential without spin-orbit coupling, which gives a negligible correction to the elastic relaxation time ($\\sim\\frac{\\beta v_{i}}{\\tau_{0}}$). Changing the variables:\n\\begin{equation}\n\\begin{aligned}\n\\boldsymbol{k}_{1}&=\\boldsymbol{k}+\\frac{\\boldsymbol{q}}{2}\\qquad \\boldsymbol{k}_{2}=\\boldsymbol{k'}+\\frac{\\boldsymbol{q'}}{2}\\qquad \\boldsymbol{k}_{3}=\\boldsymbol{k'}-\\frac{\\boldsymbol{q'}}{2},\\\\\n\\boldsymbol{k}_{4}&=\\boldsymbol{k''}+\\frac{\\boldsymbol{q''}}{2}\\qquad \\boldsymbol{k}_{5}=\\boldsymbol{k''}-\\frac{\\boldsymbol{q''}}{2}\\qquad \\boldsymbol{k}_{6}=\\boldsymbol{k}-\\frac{\\boldsymbol{q}}{2}\n\\end{aligned}\n\\end{equation}\n\\noindent and\n\\begin{equation}\nT=\\frac{t_{1}+t_{2}}{2}\\qquad t=t_{1}-t_{2}\n\\end{equation}\n\\noindent leads in the continuum limit to:\n\\begin{equation}\n\\begin{aligned}\n\\underline{\\hat{\\Sigma}}^{2a}(\\boldsymbol{k},t;\\boldsymbol{q},T)&=iv_{i}^{3}n_{i}\\frac{\\xi_{SO}}{k_{F}^{2}}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\int\\frac{d\\boldsymbol{k''}}{(2\\pi)^{3}}\\int\\frac{d\\boldsymbol{q'}}{(2\\pi)^{3}}\\\\\n&\\left[\\left(\\boldsymbol{k}+\\frac{\\boldsymbol{q}}{2}\\right)\\times\\left(\\boldsymbol{k'}+\\frac{\\boldsymbol{q'}}{2}\\right)\\cdot\\hat{\\boldsymbol{\\sigma}}\\right]\\underline{\\hat{G}}(\\boldsymbol{k'},t;\\boldsymbol{q'},T)\\underline{\\hat{G}}(\\boldsymbol{k''},t;\\boldsymbol{q}-\\boldsymbol{q'},T),\n\\end{aligned}\n\\end{equation}\n\\begin{equation}\n\\begin{aligned}\n\\underline{\\hat{\\Sigma}}^{2b}(\\boldsymbol{k},t;\\boldsymbol{q},T)&=iv_{i}^{3}n_{i}\\frac{\\xi_{SO}}{k_{F}^{2}}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\int\\frac{d\\boldsymbol{k''}}{(2\\pi)^{3}}\\int\\frac{d\\boldsymbol{q'}}{(2\\pi)^{3}}\\\\\n&\\underline{\\hat{G}}(\\boldsymbol{k'},t;\\boldsymbol{q'},T)\\left[\\left(\\boldsymbol{k'}-\\frac{\\boldsymbol{q'}}{2}\\right)\\times\\left(\\boldsymbol{k''}+\\frac{\\boldsymbol{q}}{2}-\\frac{\\boldsymbol{q'}}{2}\\right)\\cdot\\hat{\\boldsymbol{\\sigma}}\\right]\\underline{\\hat{G}}(\\boldsymbol{k''},t;\\boldsymbol{q}-\\boldsymbol{q'},T),\n\\end{aligned}\n\\end{equation}\n\\begin{equation}\n\\begin{aligned}\n\\underline{\\hat{\\Sigma}}^{2c}(\\boldsymbol{k},t;\\boldsymbol{q},T)&=iv_{i}^{3}n_{i}\\frac{\\xi_{SO}}{k_{F}^{2}}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\int\\frac{d\\boldsymbol{k''}}{(2\\pi)^{3}}\\int\\frac{d\\boldsymbol{q'}}{(2\\pi)^{3}}\\\\\n&\\underline{\\hat{G}}(\\boldsymbol{k'},t;\\boldsymbol{q'},T)\\underline{\\hat{G}}(\\boldsymbol{k''},t;\\boldsymbol{q}-\\boldsymbol{q'},T)\\left[\\left(\\boldsymbol{k''}-\\frac{\\boldsymbol{q}}{2}+\\frac{\\boldsymbol{q'}}{2}\\right)\\times\\left(\\boldsymbol{k}-\\frac{\\boldsymbol{q}}{2}\\right)\\cdot\\hat{\\boldsymbol{\\sigma}}\\right].\n\\end{aligned}\n\\end{equation}\n\\noindent Let us assume that any inhomogeneity in a system is smooth, so we can neglect all gradient terms $\\sim\\boldsymbol{q}$. 
Having Fourier transformed with respect to $\\boldsymbol{q}$ and $t$, we obtain:\n\\begin{equation}\n\\underline{\\hat{\\Sigma'}}_{\\boldsymbol{k}E}(\\boldsymbol{R},T)=\\underline{\\hat{\\Sigma}}^{2a}_{\\boldsymbol{k}E}(\\boldsymbol{R},T)+\\underline{\\hat{\\Sigma}}^{2b}_{\\boldsymbol{k}E}(\\boldsymbol{R},T)+\\underline{\\hat{\\Sigma}}^{2c}_{\\boldsymbol{k}E}(\\boldsymbol{R},T),\n\\end{equation}\n\\noindent where \n\\begin{equation}\n\\underline{\\hat{\\Sigma}}^{2a}_{\\boldsymbol{k}E}(\\boldsymbol{R},T)=iv_{i}^{3}n_{i}\\frac{\\xi_{SO}}{k_{F}^{2}}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\int\\frac{d\\boldsymbol{k''}}{(2\\pi)^{3}}[(\\boldsymbol{k}\\times\\boldsymbol{k'})\\cdot\\hat{\\boldsymbol{\\sigma}}]\\underline{\\hat{G}}_{\\boldsymbol{k'}E}(\\boldsymbol{R},T)\\underline{\\hat{G}}_{\\boldsymbol{k''}E}(\\boldsymbol{R},T),\n\\end{equation}\n\\begin{equation}\n\\underline{\\hat{\\Sigma}}^{2b}_{\\boldsymbol{k}E}(\\boldsymbol{R},T)=iv_{i}^{3}n_{i}\\frac{\\xi_{SO}}{k_{F}^{2}}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\int\\frac{d\\boldsymbol{k''}}{(2\\pi)^{3}}\\underline{\\hat{G}}_{\\boldsymbol{k'}E}(\\boldsymbol{R},T)[(\\boldsymbol{k'}\\times\\boldsymbol{k''})\\cdot\\hat{\\boldsymbol{\\sigma}}]\\underline{\\hat{G}}_{\\boldsymbol{k''}E}(\\boldsymbol{R},T),\n\\end{equation}\n\\begin{equation}\n\\underline{\\hat{\\Sigma}}^{2c}_{\\boldsymbol{k}E}(\\boldsymbol{R},T)=iv_{i}^{3}n_{i}\\frac{\\xi_{SO}}{k_{F}^{2}}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\int\\frac{d\\boldsymbol{k''}}{(2\\pi)^{3}}\\underline{\\hat{G}}_{\\boldsymbol{k'}E}(\\boldsymbol{R},T)\\underline{\\hat{G}}_{\\boldsymbol{k''}E}(\\boldsymbol{R},T)[(\\boldsymbol{k''}\\times\\boldsymbol{k})\\cdot\\hat{\\boldsymbol{\\sigma}}].\n\\end{equation}\n\\par To first order in $\\xi_{SO}$, we can express the retarded and advanced Green's functions by using the Sokhotski formula:\n\\begin{equation}\n\\begin{aligned}\n\\hat{G}^{R(A)}_{\\boldsymbol{k}E}&=\\left(\\hat{E}-\\hat{\\mathcal{H}}_{\\boldsymbol{k}}-\\hat{\\Sigma}^{R(A)}_{\\boldsymbol{k}E}\\right)^{-1}\\approx\\frac{1}{2}\\sum\\limits_{s=\\pm}\\frac{\\hat{\\sigma}_{0}+s\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}}{E-E_{\\boldsymbol{k}s}\\pm i\\frac{\\hbar}{2\\tau_{s}}}\\\\\n&=\\frac{1}{2}\\sum\\limits_{s=\\pm}\\left(\\hat{\\sigma}_{0}+s\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}\\right)\\left[\\mp i\\pi\\delta(E-E_{\\boldsymbol{k}s})\\right],\n\\end{aligned}\n\\end{equation}\n\\noindent where $\\tau_{s}$ is the spin dependent relaxation time. 
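\n\\par The on-shell replacement made in the last step (keeping only the $\\delta$-function part of the Sokhotski formula) can be checked numerically. The sketch below (illustrative only; the Gaussian test function and the numerical values are arbitrary) verifies that $\\mathrm{Im}\\,[E-E_{0}+i\\eta]^{-1}$ integrated against a smooth function approaches $-\\pi f(E_{0})$ as $\\eta\\to 0^{+}$:\n\\begin{verbatim}\nimport numpy as np\n\n# Sokhotski formula: Im 1/(E - E0 + i*eta) -> -pi*delta(E - E0) as eta -> 0+.\n# Test function f and all numbers below are arbitrary illustrative choices.\nE0 = 0.3\nf = lambda E: np.exp(-(E - 0.1)**2)\n\nE = np.linspace(-20.0, 20.0, 400001)\ndE = E[1] - E[0]\nfor eta in (1e-1, 1e-2, 1e-3):\n    im_part = np.imag(1.0/(E - E0 + 1j*eta))   # = -eta/((E-E0)^2 + eta^2)\n    print(eta, np.sum(im_part*f(E))*dE, -np.pi*f(E0))\n\\end{verbatim}\n\\noindent The integral converges to $-\\pi f(E_{0})$ as the broadening is reduced.\n\\par 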
As a result, the retarded and advanced components of the skew-scattering self-energy vanish, and we deal with its Keldysh part, which survives for $\\hat{\\Sigma}^{2a}$ and $\\hat{\\Sigma}^{2c}$ only:\n\\begin{equation}\n\\begin{aligned}\n\\hat{\\Sigma'}^{K}_{\\boldsymbol{k}E}(\\boldsymbol{R},T)&=i\\frac{\\xi_{SO}}{k_{F}^{2}}v_{i}^{3}n_{i}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\int\\frac{d\\boldsymbol{k''}}{(2\\pi)^{3}}(\\boldsymbol{k}\\times\\boldsymbol{k'})\\cdot\\hat{\\boldsymbol{\\sigma}}\\left(\\hat{G}^{R}_{\\boldsymbol{k'}E}\\hat{g}^{K}_{\\boldsymbol{k''}E}(\\boldsymbol{R},T)+\\hat{g}^{K}_{\\boldsymbol{k'}E}(\\boldsymbol{R},T)\\hat{G}^{A}_{\\boldsymbol{k''}E} \\right)\\\\\n&+i\\frac{\\xi_{SO}}{k_{F}^{2}}v_{i}^{3}n_{i}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\int\\frac{d\\boldsymbol{k''}}{(2\\pi)^{3}}\\left(\\hat{G}^{R}_{\\boldsymbol{k'}E}\\hat{g}^{K}_{\\boldsymbol{k''}E}(\\boldsymbol{R},T)+\\hat{g}^{K}_{\\boldsymbol{k'}E}(\\boldsymbol{R},T)\\hat{G}^{A}_{\\boldsymbol{k''}E} \\right)(\\boldsymbol{k''}\\times\\boldsymbol{k})\\cdot\\hat{\\boldsymbol{\\sigma}}\n\\end{aligned}\n\\end{equation}\n\\noindent or\n\\begin{equation}\n\\begin{aligned}\n\\hat{\\Sigma'}^{K}_{\\boldsymbol{k}E}(\\boldsymbol{R},T)&=i\\frac{\\xi_{SO}}{k_{F}^{2}}v_{i}^{3}n_{i}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\int\\frac{d\\boldsymbol{k''}}{(2\\pi)^{3}}(\\boldsymbol{k}\\times\\boldsymbol{k'})\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{g}^{K}_{\\boldsymbol{k'}E}(\\boldsymbol{R},T)\\hat{G}^{A}_{\\boldsymbol{k''}E}\\\\\n&+i\\frac{\\xi_{SO}}{k_{F}^{2}}v_{i}^{3}n_{i}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\int\\frac{d\\boldsymbol{k''}}{(2\\pi)^{3}}\\hat{G}^{R}_{\\boldsymbol{k'}E}\\hat{g}^{K}_{\\boldsymbol{k''}E}(\\boldsymbol{R},T)(\\boldsymbol{k''}\\times\\boldsymbol{k})\\cdot\\hat{\\boldsymbol{\\sigma}}.\n\\end{aligned}\n\\end{equation}\n\\noindent This expression can be further simplified, as we integrate over $\\boldsymbol{k}$:\n\\begin{equation}\n\\begin{aligned}\n\\mp\\frac{i\\pi}{2}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\sum\\limits_{s=\\pm}\\left(\\hat{\\sigma}_{0}+s\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}\\right)\\delta(E-E_{\\boldsymbol{k}s})=\\mp\\frac{i\\pi}{2}(D^{\\uparrow}+D^{\\downarrow})\\left(\\hat{\\sigma}_{0}+\\delta\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}\\right),\n\\end{aligned}\n\\end{equation}\n\\noindent where $D^{\\uparrow(\\downarrow)}$ is the spin-dependent density of states and $\\delta=(D^{\\uparrow}-D^{\\downarrow})\/(D^{\\uparrow}+D^{\\downarrow})$, so the final form of $\\hat{\\Sigma'}^{K}_{\\boldsymbol{k}E}(\\boldsymbol{R},T)$ is given by:\n\\begin{equation}\n\\begin{aligned}\n\\hat{\\Sigma'}^{K}_{\\boldsymbol{k}E}(\\boldsymbol{R},T)&=-\\frac{\\pi}{2}\\frac{\\xi_{SO}}{k_{F}^{2}}v_{i}^{3}n_{i}(D^{\\uparrow}+D^{\\downarrow})\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}(\\boldsymbol{k}\\times\\boldsymbol{k'})\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{g}^{K}_{\\boldsymbol{k'}E}(\\boldsymbol{R},T)\\left[\\hat{\\sigma}_{0}+\\delta\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m} \\right]\\\\\n&\\,\\,\\,\\,\\,-\\frac{\\pi}{2}\\frac{\\xi_{SO}}{k_{F}^{2}}v_{i}^{3}n_{i}(D^{\\uparrow}+D^{\\downarrow})\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\left[\\hat{\\sigma}_{0}+\\delta\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m} \\right]\\hat{g}^{K}_{\\boldsymbol{k'}E}(\\boldsymbol{R},T)(\\boldsymbol{k}\\times\\boldsymbol{k'})\\cdot\\hat{\\boldsymbol{\\sigma}}.\n\\end{aligned}\n\\end{equation}\n\\noindent In the weak exchange coupling limit $(J\\ll\\varepsilon_{F})$, we can 
express $D^{\\uparrow(\\downarrow)}\\approx D_{0}(1\\mp\\beta)$, where $\\beta=\\frac{J}{2\\varepsilon_{F}}$ is the spin polarization factor and $D_{0}=\\frac{mk_{F}}{2\\pi^{2}\\hbar^{2}}$ is the spin independent density of states per spin at the Fermi level. Then, we obtain:\n\\begin{equation}\n\\begin{aligned}\n\\hat{\\Sigma'}^{K}_{\\boldsymbol{k}E}(\\boldsymbol{R},T)&=-\\pi v_{i}^{3}n_{i}D_{0}\\frac{\\xi_{SO}}{k_{F}^{2}}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}(\\boldsymbol{k}\\times\\boldsymbol{k'})\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{g}^{K}_{\\boldsymbol{k'}E}(\\boldsymbol{R},T)\\left[\\hat{\\sigma}_{0}-\\beta\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m} \\right]\\\\\n&\\,\\,\\,\\,\\,-\\pi v_{i}^{3}n_{i}D_{0}\\frac{\\xi_{SO}}{k_{F}^{2}}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\left[\\hat{\\sigma}_{0}-\\beta\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m} \\right]\\hat{g}^{K}_{\\boldsymbol{k'}E}(\\boldsymbol{R},T)(\\boldsymbol{k}\\times\\boldsymbol{k'})\\cdot\\hat{\\boldsymbol{\\sigma}}\n\\end{aligned}\n\\end{equation}\n\\noindent or \n\\begin{equation}\n\\begin{aligned}\n\\hat{\\Sigma'}^{K}_{\\boldsymbol{k}E}(\\boldsymbol{R},T)&=-\\frac{\\hbar}{2}\\frac{v_{i}}{\\tau_{0}}\\frac{\\xi_{SO}}{k_{F}^{2}}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}(\\boldsymbol{k}\\times\\boldsymbol{k'})\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{g}^{K}_{\\boldsymbol{k'}E}(\\boldsymbol{R},T)\\left[\\hat{\\sigma}_{0}-\\beta\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m} \\right]\\\\\n&\\,\\,\\,\\,\\,-\\frac{\\hbar}{2}\\frac{v_{i}}{\\tau_{0}}\\frac{\\xi_{SO}}{k_{F}^{2}}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\left[\\hat{\\sigma}_{0}-\\beta\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m} \\right]\\hat{g}^{K}_{\\boldsymbol{k'}E}(\\boldsymbol{R},T)(\\boldsymbol{k}\\times\\boldsymbol{k'})\\cdot\\hat{\\boldsymbol{\\sigma}},\n\\end{aligned}\n\\label{eq:sskew}\n\\end{equation}\n\\noindent where $\\frac{1}{\\tau_{0}}=2\\pi v_{i}^{2}n_{i}D_{0}\/\\hbar$ is the spin independent relaxation time.\n\n\n\n\\section{Relaxation time}\n\\par The imaginary part of the retarded and advanced self-energies is related to the momentum relaxation time, which is given by the elastic scattering off the on-site impurity potential and Elliot-Yafet mechanism:\n\\begin{equation}\n\\hat{\\Sigma}^{R(A)}_{\\boldsymbol{k}E}=\\mp i\\frac{\\hbar}{2\\hat{\\tau}_{\\boldsymbol{k}}}.\n\\end{equation}\n\\noindent Taking into account Eqs. (\\ref{eq:sret}) and (\\ref{eq:gzero}), we get:\n\\begin{equation}\n\\frac{1}{\\hat{\\tau}_{\\boldsymbol{k}}}=\\frac{\\pi v_{i}^{2}n_{i}}{\\hbar}\\sum_{s=\\pm}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\,\\left[\\hat{\\sigma}_{0}+i\\frac{\\xi_{SO}}{k_{F}^{2}}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\right]\\left(\\hat{\\sigma}_{0}+s\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m} \\right)\\left[\\hat{\\sigma}_{0}-i\\frac{\\xi_{SO}}{k_{F}^{2}}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\right]\\delta(E-E_{\\boldsymbol{k'}s}),\n\\end{equation}\n\\noindent where $\\boldsymbol{n}=\\boldsymbol{k}\\times\\boldsymbol{k'}$. In terms of spherical coordinates, $d\\boldsymbol{k}=k^{2}dk\\,\\sin{\\theta}d\\theta\\,d\\phi$ with $k\\in[0,k_{F}]$, $\\theta\\in[0,\\pi]$ and $\\phi\\in[0,2\\pi]$, it is easy to show:\n\\begin{equation}\n\\int k_{i}d\\boldsymbol{k}=0,\n\\end{equation}\n\\noindent where $k_{i}$ is the $i$th cartesian coordinate of $\\boldsymbol{k}$, and all terms linear in $\\boldsymbol{n}$ vanish. 
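\n\\par This vanishing of the odd angular moments is simple to confirm numerically; in the Monte Carlo sketch below (illustrative values; $\\boldsymbol{k}$ and the sample size are arbitrary) the first moments of $\\boldsymbol{k'}$ and of $\\boldsymbol{n}=\\boldsymbol{k}\\times\\boldsymbol{k'}$ over the directions of $\\boldsymbol{k'}$ average to zero:\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(1)\n\n# Directions of k' drawn uniformly on the unit sphere; odd moments vanish,\n# so all terms linear in n = k x k' drop out of the angular integral.\nn_samples = 200_000\nkp = rng.normal(size=(n_samples, 3))\nkp /= np.linalg.norm(kp, axis=1, keepdims=True)\nk = np.array([0.2, -0.5, 1.0])        # arbitrary fixed wavevector\n\nprint(kp.mean(axis=0))                # ~ (0, 0, 0)\nprint(np.cross(k, kp).mean(axis=0))   # ~ (0, 0, 0)\n\\end{verbatim}\n\\par\\noindent 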
Thus, we get:\n\\begin{equation}\n\\frac{1}{\\hat{\\tau}_{\\boldsymbol{k}}}=\\frac{\\pi v_{i}^{2}n_{i}}{\\hbar}\\sum_{s=\\pm}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\,\\left[\\hat{\\sigma}_{0}+s\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}+\\frac{\\xi_{SO}^{2}}{k_{F}^{4}}(\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}})(\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}})+s\\,\\frac{\\xi_{SO}^{2}}{k_{F}^{4}}(\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}})(\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m})(\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}) \\right]\\delta(E-E_{\\boldsymbol{k'}s}),\n\\end{equation}\n\\noindent or having used $(\\boldsymbol{a}\\cdot\\hat{\\boldsymbol{\\sigma}})(\\boldsymbol{b}\\cdot\\hat{\\boldsymbol{\\sigma}})=(\\boldsymbol{a}\\cdot\\boldsymbol{b})\\hat{\\sigma}_{0}+i(\\boldsymbol{a}\\times\\boldsymbol{b})\\cdot\\hat{\\boldsymbol{\\sigma}}$:\n\\begin{equation}\n\\begin{aligned}\n\\frac{1}{\\hat{\\tau}_{\\boldsymbol{k}}}&=\\frac{\\pi v_{i}^{2}n_{i}}{\\hbar}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\,\\left[(1+\\frac{\\xi_{SO}^{2}}{k_{F}^{4}}\\boldsymbol{n}^{2})\\hat{\\sigma}_{0}(\\delta(E-E_{\\boldsymbol{k'}+})+\\delta(E-E_{\\boldsymbol{k'}-}))\\right.\\\\\n&\\left.+\\left(\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}+\\frac{\\xi_{SO}^{2}}{k_{F}^{4}}\\hat{\\boldsymbol{\\sigma}}\\cdot(2\\boldsymbol{n}(\\boldsymbol{n}\\cdot\\boldsymbol{m})-\\boldsymbol{m}\\boldsymbol{n}^{2})\\right)(\\delta(E-E_{\\boldsymbol{k'}+})-\\delta(E-E_{\\boldsymbol{k'}-}))\\right].\n\\label{eq:tau2}\n\\end{aligned}\n\\end{equation}\n\\noindent Let us also consider the following expressions:\n\\begin{equation}\n\\begin{aligned}\n\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}k'_{i}k'_{j}\\,(\\delta(E-E_{\\boldsymbol{k'}+})\\pm\\delta(E-E_{\\boldsymbol{k'}-}))=0\\qquad\\mathrm{for}\\,\\,\\, i\\ne j,\\\\\n\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}{k'}_{i}^{2}\\,(\\delta(E-E_{\\boldsymbol{k'}+})\\pm\\delta(E-E_{\\boldsymbol{k'}-}))=\\frac{1}{6\\pi^{2}}\\int d{k'}{k'}^{4}\\,(\\delta(E-E_{\\boldsymbol{k'}+})\\pm\\delta(E-E_{\\boldsymbol{k'}-})),\\\\\n\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\boldsymbol{n}^{2}\\,(\\delta(E-E_{\\boldsymbol{k'}+})\\pm\\delta(E-E_{\\boldsymbol{k'}-}))=\\frac{1}{3\\pi^{2}}\\,k^{2}\\int d{k'}{k'}^{4}\\,(\\delta(E-E_{\\boldsymbol{k'}+})\\pm\\delta(E-E_{\\boldsymbol{k'}-})).\n\\end{aligned}\n\\end{equation}\n\\noindent We can rewrite Eq. (\\ref{eq:tau2}) as:\n\\begin{equation}\n\\begin{aligned}\n\\frac{1}{\\hat{\\tau}_{\\boldsymbol{k}}}&=\\frac{v_{i}^{2}n_{i}}{2\\pi\\hbar}\\int dk'\\,\\left[({k'}^{2}+\\frac{2}{3}\\frac{\\xi_{SO}^{2}}{k_{F}^{4}}k^{2}{k'}^{4})\\hat{\\sigma}_{0}(\\delta(E-E_{\\boldsymbol{k'}+})+\\delta(E-E_{\\boldsymbol{k'}-}))\\right.\\\\\n&\\left.+\\left(\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}{k'}^{2}-\\frac{2}{3}\\frac{\\xi_{SO}^{2}}{k_{F}^{4}}{k'}^{4}(\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{k})(\\boldsymbol{k}\\cdot\\boldsymbol{m})\\right)(\\delta(E-E_{\\boldsymbol{k'}+})-\\delta(E-E_{\\boldsymbol{k'}-}))\\right].\n\\end{aligned}\n\\end{equation}\n\\noindent Next, we can employ the following relation for the delta-function:\n\\begin{equation}\n\\delta(E-E_{\\boldsymbol{k}s})=\\frac{\\delta(k-k_{s})}{\\frac{\\hbar^{2}k_{s}}{m}},\n\\end{equation}\n\\noindent where $k_{\\pm}=\\sqrt{2m(\\varepsilon_{F}\\mp J)}\/\\hbar$, and $\\varepsilon_{F}=\\frac{\\hbar^{2}k_{F}^{2}}{2m}$ is the Fermi energy. 
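\n\\par Both the Pauli-matrix identity used above and the second angular moments behind these integrals can be verified with a short numerical sketch (the vectors and the sample size are arbitrary illustrative choices):\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(2)\ns0 = np.eye(2, dtype=complex)\nsigma = np.array([[[0, 1], [1, 0]],\n                  [[0, -1j], [1j, 0]],\n                  [[1, 0], [0, -1]]], dtype=complex)\ndot_sigma = lambda a: np.einsum('i,ijk->jk', a, sigma)\n\n# (a.sigma)(b.sigma) = (a.b) sigma_0 + i (a x b).sigma for real vectors a, b\na, b = rng.normal(size=3), rng.normal(size=3)\nprint(np.allclose(dot_sigma(a) @ dot_sigma(b),\n                  np.dot(a, b)*s0 + 1j*dot_sigma(np.cross(a, b))))\n\n# Second angular moments over the direction of k' (unit sphere):\n# <k'_i k'_j> = delta_ij/3   and   <|k x k'|^2> = (2/3) |k|^2\nkp = rng.normal(size=(200_000, 3))\nkp /= np.linalg.norm(kp, axis=1, keepdims=True)\nk = np.array([0.3, 0.1, -0.7])\nprint(np.round(np.einsum('ni,nj->ij', kp, kp)/kp.shape[0], 3))\nprint((np.cross(k, kp)**2).sum(axis=1).mean(), (2.0/3.0)*np.dot(k, k))\n\\end{verbatim}\n\\noindent These angular averages are the origin of the factors $1\/3$ and $2\/3$ appearing in the expressions above.\n\\par\\noindent 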
Then, integrating over $k'$ gives:\n\\begin{equation}\n\\begin{aligned}\n\\frac{1}{\\hat{\\tau}_{\\boldsymbol{k}}}=\\frac{v_{i}^{2}n_{i}}{2\\pi\\hbar}\\Big(\\frac{m}{\\hbar^{2}}(k_{+}+k_{-})\\hat{\\sigma}_{0}&+\\frac{2m}{3\\hbar^{2}}\\frac{\\xi_{SO}^{2}}{k_{F}^{4}}k^{2}(k^{3}_{+}+k^{3}_{-})\\hat{\\sigma}_{0} +\\frac{m}{\\hbar^{2}}(k_{+}-k_{-}) \\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}\\\\\n&-\\frac{2m}{3\\hbar^{2}}\\frac{\\xi_{SO}^{2}}{k_{F}^{4}}(k^{3}_{+}-k^{3}_{-})(\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{k})(\\boldsymbol{k}\\cdot\\boldsymbol{m}) \\Big).\n\\end{aligned}\n\\end{equation}\n\\noindent Finally, in the weak exchange coupling limit $(J\\ll\\varepsilon_{F})$, we can perform a Taylor expansion, $k_{\\pm}\\approx k_{F}(1\\mp\\beta)$. Neglecting higher order terms $\\sim\\beta\\xi_{SO}^{2}$, we obtain:\n\\begin{equation}\n\\frac{1}{\\hat{\\tau}_{\\boldsymbol{k}}}=\\frac{1}{\\tau_{0}}\\left(\\hat{\\sigma_{0}}-\\beta\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}+\\frac{2}{3}\\frac{\\xi_{SO}^{2}}{k_{F}^{2}}k^{2}\\hat{\\sigma}_{0} \\right),\n\\end{equation}\n\\noindent where $\\frac{1}{\\tau_{0}}=2\\pi v^{2}_{i}n_{i}D_{0}\/\\hbar$ is the spin-independent relaxation time due to scattering off the impurity potential.\n\n\n\n\n\n\\section{Averaged velocity operator}\n\\par For educational purposes, let us derive the averaged velocity operator in diffusive ferromagnets with extrinsic spin-orbit coupling.\\cite{velo} Within the Lippmann-Schwinger equation, the scattered state $\\parallel\\!\\boldsymbol{k},s\\rangle$ can be written in the first order of $\\hat{\\mathcal{H}}_{\\mathrm{imp}}$:\n\\begin{equation}\n\\begin{aligned}\n\\parallel\\!\\boldsymbol{k},s\\rangle&=|\\boldsymbol{k},s\\rangle+\\sum_{\\boldsymbol{k'}}\\hat{G}^{R}_{0,\\boldsymbol{k'}}\\langle\\boldsymbol{k'}|\\hat{\\mathcal{H}}_{\\mathrm{imp}}|\\boldsymbol{k}\\rangle|\\boldsymbol{k'},s\\rangle\\\\\n&=|\\boldsymbol{k},s\\rangle-\\frac{1}{\\Omega}\\frac{i\\pi}{2}\\sum_{\\boldsymbol{k'}s'}(\\hat{\\sigma}_{0}+s'\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m})(\\hat{\\sigma\n_{0}}-i\\frac{\\xi_{SO}}{k_{F}^{2}}\\hat{\\boldsymbol{\\sigma}}\\cdot(\\boldsymbol{k}\\times\\boldsymbol{k'}))V(\\boldsymbol{k'}-\\boldsymbol{k})\\delta(E-E_{\\boldsymbol{k'}s'})|\\boldsymbol{k'},s\\rangle,\n\\end{aligned}\n\\end{equation}\n\\noindent and \n\\begin{equation}\n\\begin{aligned}\n\\langle\\boldsymbol{k},s\\!\\parallel&=\\langle\\boldsymbol{k},s|+\\sum_{\\boldsymbol{k'}}\\langle\\boldsymbol{k'},s|\\langle\\boldsymbol{k}|\\hat{\\mathcal{H}}_{\\mathrm{imp}}|\\boldsymbol{k'}\\rangle\\hat{G}^{R}_{0,\\boldsymbol{k'}}\\\\\n&=\\langle\\boldsymbol{k},s|+\\frac{1}{\\Omega}\\frac{i\\pi}{2}\\sum_{\\boldsymbol{k'}s'}\\langle\\boldsymbol{k'},s| (\\hat{\\sigma\n_{0}}+i\\frac{\\xi_{SO}}{k_{F}^{2}}\\hat{\\boldsymbol{\\sigma}}\\cdot(\\boldsymbol{k}\\times\\boldsymbol{k'}))(\\hat{\\sigma}_{0}+s'\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m})V(\\boldsymbol{k}-\\boldsymbol{k'})\\delta(E-E_{\\boldsymbol{k'}s'}).\n\\end{aligned}\n\\end{equation}\n\\noindent The corresponding matrix elements of the velocity operator can be found 
as:\n\\begin{equation}\n\\begin{aligned}\n\\boldsymbol{v}_{\\boldsymbol{k}\\boldsymbol{k'}}^{ss'}=-\\frac{i}{\\hbar}\\langle\\boldsymbol{k},s\\!\\parallel[\\hat{\\boldsymbol{r}},\\hat{\\mathcal{H}}]\\parallel\\!\\boldsymbol{k'},s'\\rangle=-\\frac{i}{\\hbar}\\langle\\boldsymbol{k},s\\!\\parallel[\\hat{\\boldsymbol{r}},\\hat{\\mathcal{H}}_{0}+\\hat{\\mathcal{H}}_{\\mathrm{imp}}]\\parallel\\!\\boldsymbol{k'},s'\\rangle,\n\\end{aligned}\n\\end{equation}\n\\noindent where\n\\begin{equation}\n-\\frac{i}{\\hbar}[\\hat{\\boldsymbol{r}},\\hat{\\mathcal{H}}]=-\\frac{i}{\\hbar}[\\hat{\\boldsymbol{r}},\\hat{\\mathcal{H}}_{0}+\\hat{\\mathcal{H}}_{\\mathrm{imp}}]=-\\frac{i\\hbar}{m}\\nabla\\hat{\\sigma}_{0}+\\frac{\\xi_{SO}}{\\hbar k_{F}^{2}}\\sum_{\\boldsymbol{R}_{i}}\\hat{\\boldsymbol{\\sigma}}\\times\\nabla V(\\boldsymbol{r}-\\boldsymbol{R}_{i}).\n\\end{equation}\n\\noindent Let us express these terms separately neglecting higher order terms $\\sim\\xi_{SO}^{2}$:\n\\begin{equation}\n\\begin{aligned}\n&-\\frac{i}{\\hbar}\\langle\\boldsymbol{k},s\\!\\parallel[\\hat{\\boldsymbol{r}},\\hat{\\mathcal{H}}_{0}]\\parallel\\!\\boldsymbol{k'},s'\\rangle=-\\frac{i\\hbar}{m}\\langle\\boldsymbol{k},s\\!\\parallel\\nabla\\parallel\\!\\boldsymbol{k'},s'\\rangle\\\\\n&=-\\frac{i\\hbar}{m}\\langle\\boldsymbol{k}|\\nabla|\\boldsymbol{k'}\\rangle\\delta_{ss'} - \\frac{i\\pi}{2}\\sum_{s''}\\int\\frac{d\\boldsymbol{k''}}{(2\\pi)^{3}}V(\\boldsymbol{k''}-\\boldsymbol{k'})\\delta(E-E_{\\boldsymbol{k''}s''})\\langle\\boldsymbol{k}|\\nabla|\\boldsymbol{k''}\\rangle\\langle s|(\\hat{\\sigma}_{0}+s''\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m})(\\hat{\\sigma\n_{0}}-i\\frac{\\xi_{SO}}{k_{F}^{2}}\\hat{\\boldsymbol{\\sigma}}\\cdot(\\boldsymbol{k'}\\times\\boldsymbol{k''}))|s'\\rangle\\\\\n&\\qquad\\qquad\\qquad\\qquad\\,\\,\\,\\,\\,\\, + \\frac{i\\pi}{2}\\sum_{s''}\\int\\frac{d\\boldsymbol{k''}}{(2\\pi)^{3}}V(\\boldsymbol{k}-\\boldsymbol{k''})\\delta(E-E_{\\boldsymbol{k''}s''})\\langle s|(\\hat{\\sigma\n_{0}}+i\\frac{\\xi_{SO}}{k_{F}^{2}}\\hat{\\boldsymbol{\\sigma}}\\cdot(\\boldsymbol{k}\\times\\boldsymbol{k''}))(\\hat{\\sigma}_{0}+s''\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m})|s'\\rangle\\langle\\boldsymbol{k''}|\\nabla|\\boldsymbol{k'}\\rangle,\n\\end{aligned}\n\\end{equation}\n\\noindent and\n\\begin{equation}\n\\begin{aligned}\n&-\\frac{i}{\\hbar}\\langle\\boldsymbol{k},s\\!\\parallel[\\hat{\\boldsymbol{r}},\\hat{\\mathcal{H}}_{\\mathrm{imp}}]\\parallel\\!\\boldsymbol{k'},s'\\rangle=\\frac{\\xi_{SO}}{\\hbar k_{F}^{2}}\\sum_{\\boldsymbol{R}_{i}}\\langle\\boldsymbol{k},s\\!\\parallel\\hat{\n\\boldsymbol{\\sigma}}\\times\\nabla V(\\boldsymbol{r}-\\boldsymbol{R}_{i})\\parallel\\!\\boldsymbol{k'},s'\\rangle\\\\\n&=\\frac{i}{\\Omega}\\frac{\\xi_{SO}}{\\hbar k_{F}^{2}}\\langle s|\\hat{\\boldsymbol{\\sigma}}|s'\\rangle\\times(\\boldsymbol{k}-\\boldsymbol{k'})V(\\boldsymbol{k}-\\boldsymbol{k'})\\\\\n& +\\frac{1}{\\Omega} \\frac{\\pi}{2}\\frac{\\xi_{SO}}{\\hbar k_{F}^{2}}\\sum_{s''}\\int\\frac{d\\boldsymbol{k''}}{(2\\pi)^{3}}V(\\boldsymbol{k''}-\\boldsymbol{k'})V(\\boldsymbol{k}-\\boldsymbol{k''})\\delta(E-E_{\\boldsymbol{k''}s''})\\langle s|\\hat{\n\\boldsymbol{\\sigma}}\\times(\\boldsymbol{k}-\\boldsymbol{k''})(\\hat{\\sigma}_{0}+s''\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m})|s'\\rangle\\\\\n& -\\frac{1}{\\Omega} \\frac{\\pi}{2}\\frac{\\xi_{SO}}{\\hbar 
k_{F}^{2}}\\sum_{s''}\\int\\frac{d\\boldsymbol{k''}}{(2\\pi)^{3}}V(\\boldsymbol{k}-\\boldsymbol{k''})V(\\boldsymbol{k''}-\\boldsymbol{k'})\\delta(E-E_{\\boldsymbol{k''}s''})\\langle s|(\\hat{\\sigma}_{0}+s''\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m})\\hat{\n\\boldsymbol{\\sigma}}\\times(\\boldsymbol{k''}-\\boldsymbol{k'})| s'\\rangle.\n\\end{aligned}\n\\end{equation}\n\\noindent Upon impurity averaging we obtain:\n\\begin{equation}\n\\begin{aligned}\n \\boldsymbol{v}_{\\boldsymbol{k}}^{ss'}&=\\frac{\\hbar}{m}\\boldsymbol{k}\\,\\delta_{ss'}+v_{i}^{2}n_{i}\\frac{\\pi}{2}\\frac{\\xi_{SO}}{\\hbar k_{F}^{2}}\\sum_{s''}\\int\\frac{d\\boldsymbol{k''}}{(2\\pi)^{3}}\\delta(E-E_{\\boldsymbol{k''}s''})\\langle s|\\left\\{\\hat{\n\\boldsymbol{\\sigma}}\\times(\\boldsymbol{k}-\\boldsymbol{k''}),(\\hat{\\sigma}_{0}+s''\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m})\\right\\}|s'\\rangle\\\\\n&=\\frac{\\hbar}{m}\\boldsymbol{k}\\,\\delta_{ss'}+v_{i}^{2}n_{i}\\pi\\frac{\\xi_{SO}}{\\hbar k_{F}^{2}}\\sum_{s''}\\int\\frac{d\\boldsymbol{k''}}{(2\\pi)^{3}}\\delta(E-E_{\\boldsymbol{k''}s''})\\langle s|\\hat{\\boldsymbol{\\sigma}}\\times\\boldsymbol{k}+s''\\boldsymbol{m}\\times\\boldsymbol{k}|s'\\rangle,\n\\end{aligned}\n\\end{equation}\n\\noindent or in the limit $J\\ll\\varepsilon_{F}$:\n\\begin{equation}\n\\hat{\\boldsymbol{v}}_{\\boldsymbol{k}}=\\frac{\\hbar}{m}\\boldsymbol{k}\\,\\hat{\\sigma}_{0}+\\frac{1}{\\tau_{0}}\\frac{\\xi_{SO}}{k_{F}^{2}}\\hat{\\boldsymbol{\\sigma}}\\times\\boldsymbol{k}-\\frac{\\beta}{\\tau_{0}}\\frac{\\xi_{SO}}{k_{F}^{2}}\\boldsymbol{m}\\times\\boldsymbol{k}\\,\\hat{\\sigma}_{0}.\n\\label{eq:av}\n\\end{equation}\n\n\n\n\n\n\n\n\n\\section{Quantum transport equations}\n\\par Having integrated Eq.~(\\ref{eq:keldysh}) over energy, we arrive at the kinetic equation written for the distribution function $\\hat{g}_{\\boldsymbol{k}}$:\n\\begin{equation}\n-i[\\hat{g}_{\\boldsymbol{k}},J\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}]+\\frac{\\hbar^{2}}{m}(\\boldsymbol{k}\\cdot\\nabla_{\\boldsymbol{R}})\\,\\hat{g}_{\\boldsymbol{k}}=coll,\n\\label{eq:keld2}\n\\end{equation}\n\\noindent where the collision integral is defined as:\n\\begin{equation}\ncoll=\\mathcal{J}_{\\boldsymbol{k}}+\\mathcal{I}_{\\boldsymbol{k}},\n\\end{equation}\n\\noindent and\n\\begin{equation}\n\\mathcal{J}_{\\boldsymbol{k}}=\\int\\frac{dE}{2\\pi}\\left(\\hat{\\Sigma}^{K}_{\\boldsymbol{k}E}\\hat{G}^{A}_{\\boldsymbol{k}E}-\\hat{G}^{R}_{\\boldsymbol{k}E}\\hat{\\Sigma}^{K}_{\\boldsymbol{k}E} \\right),\n\\end{equation} \n\\begin{equation}\n\\mathcal{I}_{\\boldsymbol{k}}=\\int\\frac{dE}{2\\pi}\\left(\\hat{\\Sigma}^{R}_{\\boldsymbol{k}E}\\hat{g}^{K}_{\\boldsymbol{k}E}-\\hat{g}^{K}_{\\boldsymbol{k}E}\\hat{\\Sigma}^{A}_{\\boldsymbol{k}E} \\right).\n\\end{equation}\n\\noindent Let us proceed with its detailed derivation. 
Taking into account the Kadanoff-Baym anzats~(\\ref{eq:anzats}) for $\\hat{g}^{K}_{\\boldsymbol{k}E}$, the integration over energy (up to a given Fermi level $\\varepsilon_{F}$) can be performed by using the residue theorem:\n\\begin{equation}\n\\begin{aligned}\n\\int\\frac{dE}{2\\pi}\\hat{G}^{R}_{\\boldsymbol{k}E}\\hat{g}_{\\boldsymbol{k'}}\\hat{G}^{A}_{\\boldsymbol{k'}E}&=\\int\\frac{dE}{8\\pi}\\sum\\limits_{s,s'}\\frac{\\hat{\\sigma}_{0}+s\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}}{E-E_{\\boldsymbol{k}s}+i\\frac{\\hbar}{2\\tau_{s}}}\\hat{g}_{\\boldsymbol{k'}}\\frac{\\hat{\\sigma}_{0}+s'\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}}{E-E_{\\boldsymbol{k'}s'}-i\\frac{\\hbar}{2\\tau_{s'}}}\\\\\n&=-\\frac{i}{4}\\sum_{s,s'}\\frac{(\\hat{\\sigma}_{0}+s\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m})\\hat{g}_{\\boldsymbol{k'}}(\\hat{\\sigma}_{0}+s'\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m})}{\\varepsilon_{F}-E_{\\boldsymbol{k'}s'}-i\\frac{\\hbar}{2}\\left(\\frac{1}{\\tau_{s'}}+\\frac{1}{\\tau_{s}}\\right)},\n\\end{aligned}\n\\end{equation}\n\\noindent and\n\\begin{equation}\n\\begin{aligned}\n\\int\\frac{dE}{2\\pi}\\hat{G}^{R}_{\\boldsymbol{k'}E}\\hat{g}_{\\boldsymbol{k'}}\\hat{G}^{A}_{\\boldsymbol{k}E}&=\\int\\frac{dE}{8\\pi}\\sum\\limits_{s,s'}\\frac{\\hat{\\sigma}_{0}+s'\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}}{E-E_{\\boldsymbol{k'}s'}+i\\frac{\\hbar}{2\\tau_{s'}}}\\hat{g}_{\\boldsymbol{k'}}\\frac{\\hat{\\sigma}_{0}+s\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}}{E-E_{\\boldsymbol{k}s}-i\\frac{\\hbar}{2\\tau_{s}}}\\\\\n&=\\frac{i}{4}\\sum_{s,s'}\\frac{(\\hat{\\sigma}_{0}+s'\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m})\\hat{g}_{\\boldsymbol{k'}}(\\hat{\\sigma}_{0}+s\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m})}{\\varepsilon_{F}-E_{\\boldsymbol{k'}s'}+i\\frac{\\hbar}{2}\\left(\\frac{1}{\\tau_{s}}+\\frac{1}{\\tau_{s'}}\\right)},\n\\end{aligned}\n\\end{equation}\n\\noindent while \n\\begin{equation}\n\\int\\frac{dE}{2\\pi}\\hat{G}^{R(A)}_{\\boldsymbol{k}E}\\hat{g}_{\\boldsymbol{k'}}\\hat{G}^{R(A)}_{\\boldsymbol{k'}E}=0,\n\\end{equation}\n\\noindent where the retarded and advanced Green's functions are defined as:\n\\begin{equation}\n\\hat{G}_{\\boldsymbol{k}E}^{R(A)}=\\frac{1}{2}\\sum\\limits_{s=\\pm}\\frac{\\hat{\\sigma}_{0}+s\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}}{E-E_{\\boldsymbol{k}s}\\pm i\\frac{\\hbar}{2\\tau_{s}}}.\n\\end{equation}\n\\noindent Assuming the scattering term in the denominator to be small and transport properties to be described solely by the electrons close to the Fermi level, we can rewrite these expressions with the Sokhotski formula:\n\\begin{equation}\n\\begin{aligned}\n\\int\\frac{dE}{2\\pi}\\hat{G}^{R}_{\\boldsymbol{k}E}\\hat{g}_{\\boldsymbol{k'}}\\hat{G}^{A}_{\\boldsymbol{k'}E}&=\\frac{\\pi}{2}\\sum_{s'}\\hat{g}_{\\boldsymbol{k'}}(\\hat{\\sigma}_{0}+s'\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m})\\delta(\\varepsilon_{F}-E_{\\boldsymbol{k'}s'}),\n\\end{aligned}\n\\end{equation}\n\\noindent and\n\\begin{equation}\n\\begin{aligned}\n\\int\\frac{dE}{2\\pi}\\hat{G}^{R}_{\\boldsymbol{k'}E}\\hat{g}_{\\boldsymbol{k'}}\\hat{G}^{A}_{\\boldsymbol{k}E}=\\frac{\\pi}{2}\\sum_{s'}(\\hat{\\sigma}_{0}+s'\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m})\\hat{g}_{\\boldsymbol{k'}}\\delta(\\varepsilon_{F}-E_{\\boldsymbol{k'}s'}).\n\\end{aligned}\n\\end{equation}\n\\par Starting from the Born approximation~(\\ref{eq:skel}), we 
have:\n\\begin{equation}\n\\begin{aligned}\n\\hat{\\Sigma}^{K}_{\\boldsymbol{k}E}\\hat{G}^{A}_{\\boldsymbol{k}E}&=v_{i}^{2}n_{i}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\,\\left[\\hat{G}^{R}_{\\boldsymbol{k'}E}\\hat{g}_{\\boldsymbol{k'}}\\hat{G}^{A}_{\\boldsymbol{k}E}+i\\frac{\\xi_{SO}}{k^{2}_{F}}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{G}^{R}_{\\boldsymbol{k'}E}\\hat{g}_{\\boldsymbol{k'}}\\hat{G}^{A}_{\\boldsymbol{k}E} \\right.\\\\\n&\\left.-i\\frac{\\xi_{SO}}{k_{F}^{2}}\\hat{G}^{R}_{\\boldsymbol{k'}E}\\hat{g}_{\\boldsymbol{k'}}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{G}^{A}_{\\boldsymbol{k}E}+\\frac{\\xi^{2}_{SO}}{k_{F}^{4}}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{G}^{R}_{\\boldsymbol{k'}E}\\hat{g}_{\\boldsymbol{k'}}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{G}^{A}_{\\boldsymbol{k}E}\\right]\\\\\n&+\\frac{v_{i}^{2}n_{i}}{2}\\frac{\\xi_{SO}}{k_{F}^{2}}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\left\\{(\\boldsymbol{k'}-\\boldsymbol{k}\\}\\times\\hat{\\boldsymbol{\\sigma}},\\hat{G}^{R}_{\\boldsymbol{k'}E}\\nabla_{\\boldsymbol{R}}\\hat{g}_{\\boldsymbol{k'}}\\right\\}\\hat{G}^{A}_{\\boldsymbol{k}E},\n\\end{aligned}\n\\end{equation}\n\\noindent and \n\\begin{equation}\n\\begin{aligned}\n\\hat{G}^{R}_{\\boldsymbol{k}E}\\hat{\\Sigma}^{K}_{\\boldsymbol{k}E}&=-v_{i}^{2}n_{i}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\,\\left[\\hat{G}^{R}_{\\boldsymbol{k}E}\\hat{g}_{\\boldsymbol{k'}}\\hat{G}^{A}_{\\boldsymbol{k'}E}+i\\frac{\\xi_{SO}}{k_{F}^{2}}\\hat{G}^{R}_{\\boldsymbol{k}E}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{g}_{\\boldsymbol{k'}}\\hat{G}^{A}_{\\boldsymbol{k'}E} \\right.\\\\\n&\\left.-i\\frac{\\xi_{SO}}{k_{F}^{2}}\\hat{G}^{R}_{\\boldsymbol{k}E}\\hat{g}_{\\boldsymbol{k'}}\\hat{G}^{A}_{\\boldsymbol{k'}E}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}+\\frac{\\xi^{2}_{SO}}{k_{F}^{4}}\\hat{G}^{R}_{\\boldsymbol{k}E}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{g}_{\\boldsymbol{k'}}\\hat{G}^{A}_{\\boldsymbol{k'}E}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\right]\\\\\n&-\\frac{v_{i}^{2}n_{i}}{2}\\frac{\\xi_{SO}}{k_{F}^{2}}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\hat{G}^{R}_{\\boldsymbol{k}E}\\left\\{(\\boldsymbol{k'}-\\boldsymbol{k})\\times\\hat{\\boldsymbol{\\sigma}},\\nabla_{\\boldsymbol{R}}\\hat{g}_{\\boldsymbol{k'}}\\hat{G}^{A}_{\\boldsymbol{k'}E}\\right\\}.\n\\end{aligned}\n\\end{equation}\n\\noindent By using the following relations: \n\\begin{equation}\n\\begin{aligned}\n\\hat{\\sigma}_{a}\\hat{\\sigma}_{b}&=i\\varepsilon_{abc}\\,\\hat{\\sigma}_{c}+\\delta_{ab}\\hat{\\sigma}_{0},\\\\\n(\\boldsymbol{a}\\cdot\\hat{\\boldsymbol{\\sigma}})(\\boldsymbol{b}\\cdot\\hat{\\boldsymbol{\\sigma}})&=(\\boldsymbol{a}\\cdot\\boldsymbol{b})\\hat{\\sigma}_{0}+i(\\boldsymbol{a}\\times\\boldsymbol{b})\\cdot\\hat{\\boldsymbol{\\sigma}},\n\\end{aligned}\n\\end{equation}\n\\noindent these terms give:\n\\begin{equation}\n\\begin{aligned}\n\\int\\frac{dE}{2\\pi}\\left[\\hat{G}^{R}_{\\boldsymbol{k'}E}\\hat{g}_{\\boldsymbol{k'}}\\hat{G}^{A}_{\\boldsymbol{k}E}+\\hat{G}^{R}_{\\boldsymbol{k}E}\\hat{g}_{\\boldsymbol{k'}}\\hat{G}^{A}_{\\boldsymbol{k'}E}\\right]&=\\pi\\hat{g}_{\\boldsymbol{k'}}\\delta_{T}+\\pi\\{\\hat{g}_{\\boldsymbol{k'}},\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}\\}\\delta_{J},\n\\end{aligned}\n\\end{equation}\n\\begin{equation}\n\\begin{aligned}\n&\\int\\frac{dE}{2\\pi} 
i\\frac{\\xi_{SO}}{k_{F}^{2}}\\left[[\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}},\\hat{G}^{R}_{\\boldsymbol{k'}E}\\hat{g}_{\\boldsymbol{k'}}]\\hat{G}^{A}_{\\boldsymbol{k}E}+\\hat{G}^{R}_{\\boldsymbol{k}E}[\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}},\\hat{g}_{\\boldsymbol{k'}}\\hat{G}^{A}_{\\boldsymbol{k'}E}]\\right]=\\\\\n&=i\\pi\\frac{\\xi_{SO}}{k_{F}^{2}}[\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}},\\hat{g}_{\\boldsymbol{k'}}]\\delta_{T}+i\\pi\\frac{\\xi_{SO}}{k_{F}^{2}}\\left(\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{g}_{\\boldsymbol{k'}}\\boldsymbol{m}\\cdot\\hat{\\boldsymbol{\\sigma}}-\\boldsymbol{m}\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{g}_{\\boldsymbol{k'}}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}} \\right)\\delta_{J}\\\\\n&-\\pi\\frac{\\xi_{SO}}{k_{F}^{2}}\\{(\\boldsymbol{n}\\times\\boldsymbol{m})\\cdot\\hat{\\boldsymbol{\\sigma}},\\hat{g}_{\\boldsymbol{k'}}\\}\\delta_{J},\n\\end{aligned}\n\\end{equation}\n\\begin{equation}\n\\begin{aligned}\n&\\int\\frac{dE}{2\\pi} \\frac{\\xi^{2}_{SO}}{k_{F}^{4}}\\left[\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{G}^{R}_{\\boldsymbol{k'}E}\\hat{g}_{\\boldsymbol{k'}}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{G}^{A}_{\\boldsymbol{k}E}+\\hat{G}^{R}_{\\boldsymbol{k}E}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{g}_{\\boldsymbol{k'}}\\hat{G}^{A}_{\\boldsymbol{k'}E}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\right]=\\\\\n&=\\pi\\frac{\\xi^{2}_{SO}}{k_{F}^{4}}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{g}_{\\boldsymbol{k'}}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\delta_{T}\n+\\pi\\frac{\\xi^{2}_{SO}}{k_{F}^{4}}\\boldsymbol{n}\\cdot\\boldsymbol{m}\\{\\hat{g}_{\\boldsymbol{k'}},\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}} \\}\\delta_{J}\\\\\n&+i\\pi\\frac{\\xi^{2}_{SO}}{k_{F}^{4}}(\\boldsymbol{n}\\times\\boldsymbol{m})\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{g}_{\\boldsymbol{k'}}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\delta_{J}\n-i\\pi\\frac{\\xi^{2}_{SO}}{k_{F}^{4}}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{g}_{\\boldsymbol{k'}}(\\boldsymbol{n}\\times\\boldsymbol{m})\\cdot\\hat{\\boldsymbol{\\sigma}}\\delta_{J},\n\\end{aligned}\n\\end{equation}\n\\begin{equation}\n\\begin{aligned}\n&\\int\\frac{dE}{2\\pi} \\left[\\left\\{(\\boldsymbol{k'}-\\boldsymbol{k}\\}\\times\\hat{\\boldsymbol{\\sigma}},\\hat{G}^{R}_{\\boldsymbol{k'}E}\\nabla_{\\boldsymbol{R}}\\hat{g}_{\\boldsymbol{k'}}\\right\\}\\hat{G}^{A}_{\\boldsymbol{k}E}+\\hat{G}^{R}_{\\boldsymbol{k}E}\\left\\{(\\boldsymbol{k'}-\\boldsymbol{k})\\times\\hat{\\boldsymbol{\\sigma}},\\nabla_{\\boldsymbol{R}}\\hat{g}_{\\boldsymbol{k'}}\\hat{G}^{A}_{\\boldsymbol{k'}E}\\right\\} \\right]=\\\\\n&=\\pi\\left\\{(\\boldsymbol{k'}-\\boldsymbol{k})\\times\\hat{\\boldsymbol{\\sigma}},\\nabla_{\\boldsymbol{R}}\\hat{g}_{\\boldsymbol{k'}}\\right\\}\\delta_{T}\n+\\pi\\left\\{(\\boldsymbol{k'}-\\boldsymbol{k})\\times\\hat{\\boldsymbol{\\sigma}},\\{\\nabla_{\\boldsymbol{R}}\\hat{g}_{\\boldsymbol{k'}},\\boldsymbol{m}\\cdot\\hat{\\boldsymbol{\\sigma}}\\}\\right\\}\\delta_{J},\n\\end{aligned}\n\\end{equation}\n\\noindent where the following notations are used:\n\\begin{equation}\n\\begin{aligned}\n\\delta_{T}&=\\delta(\\varepsilon_{F}-E_{\\boldsymbol{k'}+})+\\delta(\\varepsilon_{F}-E_{\\boldsymbol{k'}-}),\\\\\n\\delta_{J}&=\\frac{1}{2}\\left[\\delta(\\varepsilon_{F}-E_{\\boldsymbol{k'}+})-\\delta(\\varepsilon_{F}-E_{\\boldsymbol{k'}-})\\right].\n\\end{aligned}\n\\end{equation}\n\\noindent For the 
skew-scattering self-energy Eq.~(\\ref{eq:sskew}), we have:\n\\begin{equation}\n\\begin{aligned}\n\\hat{\\Sigma'}^{K}_{\\boldsymbol{k}E}\\hat{G}^{A}_{\\boldsymbol{k}E}&=-\\frac{\\hbar}{2}\\frac{v_{i}}{\\tau_{0}}\\frac{\\xi_{SO}}{k_{F}^{2}}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{G}^{R}_{\\boldsymbol{k'}E}\\hat{g}_{\\boldsymbol{k'}}\\left[\\hat{\\sigma}_{0}-\\beta\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m} \\right]\\hat{G}^{A}_{\\boldsymbol{k}E}\\\\\n&-\\frac{\\hbar}{2}\\frac{v_{i}}{\\tau_{0}}\\frac{\\xi_{SO}}{k_{F}^{2}}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\left[\\hat{\\sigma}_{0}-\\beta\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m} \\right]\\hat{G}^{R}_{\\boldsymbol{k'}E}\\hat{g}_{\\boldsymbol{k'}}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{G}^{A}_{\\boldsymbol{k}E},\n\\end{aligned}\n\\end{equation}\n\\noindent and\n\\begin{equation}\n\\begin{aligned}\n\\hat{G}^{R}_{\\boldsymbol{k}E}\\hat{\\Sigma'}^{K}_{\\boldsymbol{k}E}&=\\frac{\\hbar}{2}\\frac{v_{i}}{\\tau_{0}}\\frac{\\xi_{SO}}{k_{F}^{2}}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\hat{G}^{R}_{\\boldsymbol{k}E}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{g}_{\\boldsymbol{k'}}\\hat{G}^{A}_{\\boldsymbol{k'}E}\\left[\\hat{\\sigma}_{0}-\\beta\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m} \\right]\\\\\n&+\\frac{\\hbar}{2}\\frac{v_{i}}{\\tau_{0}}\\frac{\\xi_{SO}}{k_{F}^{2}}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\hat{G}^{R}_{\\boldsymbol{k}E}\\left[\\hat{\\sigma}_{0}-\\beta\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}\\right]\\hat{g}_{\\boldsymbol{k'}}\\hat{G}^{A}_{\\boldsymbol{k'}E}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}},\n\\end{aligned}\n\\end{equation}\n\\noindent that gives:\n\\begin{equation}\n\\begin{aligned}\n&\\int\\frac{dE}{2\\pi}\\,\\left[\\left\\{\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}},\\hat{G}^{R}_{\\boldsymbol{k'}E}\\hat{g}_{\\boldsymbol{k'}}\\right\\}\\hat{G}^{A}_{\\boldsymbol{k}E}+\\hat{G}^{R}_{\\boldsymbol{k}E}\\left\\{\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}},\\hat{g}_{\\boldsymbol{k'}}\\hat{G}^{A}_{\\boldsymbol{k'}E}\\right\\}\\right]=\\\\\n&=\\pi\\{\\hat{g}_{\\boldsymbol{k'}},\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\}\\delta_{T}+\\pi\\left\\{\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}},\\{\\hat{g}_{\\boldsymbol{k'}},\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}\\}\\right\\}\\delta_{J},\n\\end{aligned}\n\\end{equation}\n\\begin{equation}\n\\begin{aligned}\n&\\int\\frac{dE}{2\\pi}\\,\\left[\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{G}^{R}_{\\boldsymbol{k'}E}\\hat{g}_{\\boldsymbol{k'}}\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}\\hat{G}^{A}_{\\boldsymbol{k}E}+\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}\\hat{G}^{R}_{\\boldsymbol{k'}E}\\hat{g}_{\\boldsymbol{k'}}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{G}^{A}_{\\boldsymbol{k}E}\\right.\\\\\n&\\qquad\\qquad\\left.+\\hat{G}^{R}_{\\boldsymbol{k}E}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{g}_{\\boldsymbol{k'}}\\hat{G}^{A}_{\\boldsymbol{k'}E}\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}+\\hat{G}^{R}_{\\boldsymbol{k}E}\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}\\hat{g}_{\\boldsymbol{k'}}\\hat{G}^{A}_{\\boldsymbol{k'}E}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\right]=\\\\\n&=\\pi\\left(\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}\\hat{g}_{\\boldsymbol{k'}}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}+\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{g}_{\\bo
ldsymbol{k'}}\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m} \\right) \\delta_{T}+\\pi\\boldsymbol{n}\\cdot\\boldsymbol{m}\\{\\hat{g}_{\\boldsymbol{k'}},\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m} \\}\\delta_{J}+\\pi\\boldsymbol{m}^{2}\\{\\hat{g}_{\\boldsymbol{k'}},\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}} \\}\\delta_{J}\\\\\n&\\qquad\\qquad+i\\pi\\left((\\boldsymbol{n}\\times\\boldsymbol{m})\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{g}_{\\boldsymbol{k'}}\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}-\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m} \\hat{g}_{\\boldsymbol{k'}} (\\boldsymbol{n}\\times\\boldsymbol{m})\\cdot\\hat{\\boldsymbol{\\sigma}} \\right)\\delta_{J}.\n\\end{aligned}\n\\end{equation}\n\\par Finally, we obtain:\n\\begin{equation}\n\\begin{aligned}\n\\mathcal{J}_{\\boldsymbol{k}}&=\\pi a_{1}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\Big[\\hat{g}_{\\boldsymbol{k'}}\\delta_{T}+\\{\\hat{g}_{\\boldsymbol{k'}},\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}\\}\\delta_{J}\\\\\n&\\qquad\\qquad+i\\frac{\\xi_{SO}}{k_{F}^{2}}[\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}},\\hat{g}_{\\boldsymbol{k'}}]\\delta_{T}+i\\frac{\\xi_{SO}}{k_{F}^{2}}\\left(\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{g}_{\\boldsymbol{k'}}\\boldsymbol{m}\\cdot\\hat{\\boldsymbol{\\sigma}}-\\boldsymbol{m}\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{g}_{\\boldsymbol{k'}}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}} \\right)\\delta_{J}-\\frac{\\xi_{SO}}{k_{F}^{2}}\\{(\\boldsymbol{n}\\times\\boldsymbol{m})\\cdot\\hat{\\boldsymbol{\\sigma}},\\hat{g}_{\\boldsymbol{k'}}\\}\\delta_{J}\\\\\n&\\qquad\\qquad+\\frac{\\xi^{2}_{SO}}{k_{F}^{4}}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{g}_{\\boldsymbol{k'}}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\delta_{T}\n+\\frac{\\xi^{2}_{SO}}{k_{F}^{4}}\\boldsymbol{n}\\cdot\\boldsymbol{m}\\{\\hat{g}_{\\boldsymbol{k'}},\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}} \\}\\delta_{J}\\\\\n&\\qquad\\qquad+i\\frac{\\xi^{2}_{SO}}{k_{F}^{4}}(\\boldsymbol{n}\\times\\boldsymbol{m})\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{g}_{\\boldsymbol{k'}}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\delta_{J}\n-i\\frac{\\xi^{2}_{SO}}{k_{F}^{4}}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{g}_{\\boldsymbol{k'}}(\\boldsymbol{n}\\times\\boldsymbol{m})\\cdot\\hat{\\boldsymbol{\\sigma}}\\delta_{J}\\\\\n&\\qquad\\qquad+\\frac{1}{2}\\frac{\\xi_{SO}}{k_{F}^{2}}\\left\\{(\\boldsymbol{k'}-\\boldsymbol{k})\\times\\hat{\\boldsymbol{\\sigma}},\\nabla_{\\boldsymbol{R}}\\hat{g}_{\\boldsymbol{k'}}\\right\\}\\delta_{T}\n+\\frac{1}{2}\\frac{\\xi_{SO}}{k_{F}^{2}}\\left\\{(\\boldsymbol{k'}-\\boldsymbol{k})\\times\\hat{\\boldsymbol{\\sigma}},\\{\\nabla_{\\boldsymbol{R}}\\hat{g}_{\\boldsymbol{k'}},\\boldsymbol{m}\\cdot\\hat{\\boldsymbol{\\sigma}}\\}\\right\\}\\delta_{J}\\\\\n&-\\pi a_{2}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\Big[\\{\\hat{g}_{\\boldsymbol{k'}},\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\}\\delta_{T}+\\left\\{\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}},\\{\\hat{g}_{\\boldsymbol{k'}},\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}\\}\\right\\}\\delta_{J}\\\\\n&\\qquad\\qquad-\\beta\\left(\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}\\hat{g}_{\\boldsymbol{k'}}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}+\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{g}_{\\boldsymbol{k'}}\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m} \\right) 
\\delta_{T}-\\beta\\boldsymbol{n}\\cdot\\boldsymbol{m}\\{\\hat{g}_{\\boldsymbol{k'}},\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m} \\}\\delta_{J}-\\beta\\boldsymbol{m}^{2}\\{\\hat{g}_{\\boldsymbol{k'}},\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}} \\}\\delta_{J}\\\\\n&\\qquad\\qquad-i\\beta\\left((\\boldsymbol{n}\\times\\boldsymbol{m})\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{g}_{\\boldsymbol{k'}}\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}-\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m} \\hat{g}_{\\boldsymbol{k'}} (\\boldsymbol{n}\\times\\boldsymbol{m})\\cdot\\hat{\\boldsymbol{\\sigma}} \\right)\\delta_{J}\\Big],\n\\end{aligned}\n\\end{equation}\n\\noindent where $a_{1}=n_{i}v_{i}^{2}$ and $a_{2}=\\frac{\\hbar}{2}\\frac{v_{i}}{\\tau_{0}}\\frac{\\xi_{SO}}{k_{F}^{2}}$. \n\\par In a similar manner, by using Eqs. (\\ref{eq:sret}) and (\\ref{eq:sadv}) we proceed with the second part of the collision integral $\\mathcal{I}_{\\boldsymbol{k}}$:\n\\begin{equation}\n\\begin{aligned}\n\\mathcal{I}_{\\boldsymbol{k}}&=-a_{1}\\int\\frac{dE}{2\\pi}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\,\\Big[\\big(\\hat{\\sigma}_{0}+i\\frac{\\xi_{SO}}{k_{F}^{2}}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\big)\\hat{G}^{R}_{\\boldsymbol{k'}E}(\\hat{\\sigma}_{0}-i\\frac{\\xi_{SO}}{k_{F}^{2}}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}})\\hat{g}_{\\boldsymbol{k}}\\hat{G}^{A}_{\\boldsymbol{k}E}\\\\\n&\\qquad\\qquad\\qquad\\qquad\\qquad+\\hat{G}^{R}_{\\boldsymbol{k}E}\\hat{g}_{\\boldsymbol{k}}\\big(\\hat{\\sigma}_{0}+i\\frac{\\xi_{SO}}{k_{F}^{2}}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\big)\\hat{G}^{A}_{\\boldsymbol{k'}E}(\\hat{\\sigma}_{0}-i\\frac{\\xi_{SO}}{k_{F}^{2}}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}})\\Big],\n\\end{aligned}\n\\end{equation}\n\\noindent where \n\\begin{equation}\n\\begin{aligned}\n\\int\\frac{dE}{2\\pi}\\left[\\hat{G}^{R}_{\\boldsymbol{k'}E}\\hat{g}_{\\boldsymbol{k}}\\hat{G}^{A}_{\\boldsymbol{k}E}+\\hat{G}^{R}_{\\boldsymbol{k}E}\\hat{g}_{\\boldsymbol{k}}\\hat{G}^{A}_{\\boldsymbol{k'}E}\\right]&=\\pi\\hat{g}_{\\boldsymbol{k}}\\delta_{T}+\\pi\\{\\hat{g}_{\\boldsymbol{k}},\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}\\}\\delta_{J},\n\\end{aligned}\n\\end{equation}\n\\begin{equation}\n\\begin{aligned}\n&\\int\\frac{dE}{2\\pi} i\\frac{\\xi_{SO}}{k_{F}^{2}}\\left[[\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}},\\hat{G}^{R}_{\\boldsymbol{k'}E}]\\hat{g}_{\\boldsymbol{k}}\\hat{G}^{A}_{\\boldsymbol{k}E}+\\hat{G}^{R}_{\\boldsymbol{k}E}\\hat{g}_{\\boldsymbol{k}}[\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}},\\hat{G}^{A}_{\\boldsymbol{k'}E}]\\right]=\\\\\n&=-\\pi\\frac{\\xi_{SO}}{k_{F}^{2}}(\\boldsymbol{n}\\times\\boldsymbol{m})\\cdot\\{\\hat{g}_{\\boldsymbol{k}},\\hat{\\boldsymbol{\\sigma}}\\}\\delta_{J},\n\\end{aligned}\n\\end{equation}\n\\begin{equation}\n\\begin{aligned}\n&\\int\\frac{dE}{2\\pi} \\frac{\\xi^{2}_{SO}}{k_{F}^{4}}\\left[\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{G}^{R}_{\\boldsymbol{k'}E}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{g}_{\\boldsymbol{k}}\\hat{G}^{A}_{\\boldsymbol{k}E}+\\hat{G}^{R}_{\\boldsymbol{k}E}\\hat{g}_{\\boldsymbol{k}}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{G}^{A}_{\\boldsymbol{k'}E}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\right]=\\\\\n&=\\pi \\frac{\\xi^{2}_{SO}}{k_{F}^{4}}\\boldsymbol{n}^{2}\\hat{g}_{\\boldsymbol{k}}\\delta_{T}+\\pi 
\\frac{\\xi^{2}_{SO}}{k_{F}^{4}}(2(\\boldsymbol{n}\\cdot\\boldsymbol{m})\\boldsymbol{n}-\\boldsymbol{n}^{2}\\boldsymbol{m})\\cdot\\{\\hat{g}_{\\boldsymbol{k}},\\hat{\\boldsymbol{\\sigma}}\\}\\delta_{J},\n\\end{aligned}\n\\end{equation}\n\\noindent so we obtain:\n\\begin{equation}\n\\begin{aligned}\n\\mathcal{I}_{\\boldsymbol{k}}&=-\\pi a_{1}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\Big[\\hat{g}_{\\boldsymbol{k}}\\delta_{T}+\\{\\hat{g}_{\\boldsymbol{k}},\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}\\}\\delta_{J}-\\frac{\\xi_{SO}}{k_{F}^{2}}(\\boldsymbol{n}\\times\\boldsymbol{m})\\cdot\\{\\hat{g}_{\\boldsymbol{k}},\\hat{\\boldsymbol{\\sigma}}\\}\\delta_{J}\\\\\n&\\qquad\\qquad+\\frac{\\xi^{2}_{SO}}{k_{F}^{4}}\\boldsymbol{n}^{2}\\hat{g}_{\\boldsymbol{k}}\\delta_{T}+\\frac{\\xi^{2}_{SO}}{k_{F}^{4}}(2(\\boldsymbol{n}\\cdot\\boldsymbol{m})\\boldsymbol{n}-\\boldsymbol{n}^{2}\\boldsymbol{m})\\cdot\\{\\hat{g}_{\\boldsymbol{k}},\\hat{\\boldsymbol{\\sigma}}\\}\\delta_{J}\\Big].\n\\end{aligned}\n\\end{equation}\n\\noindent Finally, the collision integral is written as:\n\\begin{equation}\n\\begin{aligned}\ncoll&=\\pi a_{1}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\Big[(\\hat{g}_{\\boldsymbol{k'}}-\\hat{g}_{\\boldsymbol{k}})\\delta_{T}+\\{\\hat{g}_{\\boldsymbol{k'}}-\\hat{g}_{\\boldsymbol{k}},\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}\\}\\delta_{J}\\\\\n&\\qquad\\qquad+i\\frac{\\xi_{SO}}{k_{F}^{2}}[\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}},\\hat{g}_{\\boldsymbol{k'}}]\\delta_{T}+i\\frac{\\xi_{SO}}{k_{F}^{2}}\\left(\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{g}_{\\boldsymbol{k'}}\\boldsymbol{m}\\cdot\\hat{\\boldsymbol{\\sigma}}-\\boldsymbol{m}\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{g}_{\\boldsymbol{k'}}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}} \\right)\\delta_{J}-\\frac{\\xi_{SO}}{k_{F}^{2}}\\{(\\boldsymbol{n}\\times\\boldsymbol{m})\\cdot\\hat{\\boldsymbol{\\sigma}},\\hat{g}_{\\boldsymbol{k'}}\\}\\delta_{J}\\\\\n&\\qquad\\qquad+\\frac{\\xi^{2}_{SO}}{k_{F}^{4}}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{g}_{\\boldsymbol{k'}}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\delta_{T}\n+\\frac{\\xi^{2}_{SO}}{k_{F}^{4}}\\boldsymbol{n}\\cdot\\boldsymbol{m}\\{\\hat{g}_{\\boldsymbol{k'}},\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}} \\}\\delta_{J}\\\\\n&\\qquad\\qquad+i\\frac{\\xi^{2}_{SO}}{k_{F}^{4}}(\\boldsymbol{n}\\times\\boldsymbol{m})\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{g}_{\\boldsymbol{k'}}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\delta_{J}\n-i\\frac{\\xi^{2}_{SO}}{k_{F}^{4}}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{g}_{\\boldsymbol{k'}}(\\boldsymbol{n}\\times\\boldsymbol{m})\\cdot\\hat{\\boldsymbol{\\sigma}}\\delta_{J}\\\\\n&\\qquad\\qquad-\\frac{\\xi^{2}_{SO}}{k_{F}^{4}}\\boldsymbol{n}^{2}\\hat{g}_{\\boldsymbol{k}}\\delta_{T}-\\frac{\\xi^{2}_{SO}}{k_{F}^{4}}(2(\\boldsymbol{n}\\cdot\\boldsymbol{m})\\boldsymbol{n}-\\boldsymbol{n}^{2}\\boldsymbol{m})\\cdot\\{\\hat{g}_{\\boldsymbol{k}},\\hat{\\boldsymbol{\\sigma}}\\}\\delta_{J}\\\\\n&\\qquad\\qquad+\\frac{1}{2}\\frac{\\xi_{SO}}{k_{F}^{2}}\\left\\{(\\boldsymbol{k'}-\\boldsymbol{k})\\times\\hat{\\boldsymbol{\\sigma}},\\nabla_{\\boldsymbol{R}}\\hat{g}_{\\boldsymbol{k'}}\\right\\}\\delta_{T}\n+\\frac{1}{2}\\frac{\\xi_{SO}}{k_{F}^{2}}\\left\\{(\\boldsymbol{k'}-\\boldsymbol{k})\\times\\hat{\\boldsymbol{\\sigma}},\\{\\nabla_{\\boldsymbol{R}}\\hat{g}_{\\boldsymbol{k'}},\\boldsymbol{m}\\cdot\\hat{\\boldsymbol{\\sigma}}\\}\\right\\}\\delta_{J}\\Big]\\\\\n&-\\pi 
a_{2}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\Big[\\{\\hat{g}_{\\boldsymbol{k'}},\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\}\\delta_{T}+\\left\\{\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}},\\{\\hat{g}_{\\boldsymbol{k'}},\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}\\}\\right\\}\\delta_{J}\\\\\n&\\qquad\\qquad-\\beta\\left(\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}\\hat{g}_{\\boldsymbol{k'}}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}+\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{g}_{\\boldsymbol{k'}}\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m} \\right) \\delta_{T}-\\beta\\boldsymbol{n}\\cdot\\boldsymbol{m}\\{\\hat{g}_{\\boldsymbol{k'}},\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m} \\}\\delta_{J}-\\beta\\boldsymbol{m}^{2}\\{\\hat{g}_{\\boldsymbol{k'}},\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}} \\}\\delta_{J}\\\\\n&\\qquad\\qquad-i\\beta\\left((\\boldsymbol{n}\\times\\boldsymbol{m})\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{g}_{\\boldsymbol{k'}}\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}-\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m} \\hat{g}_{\\boldsymbol{k'}} (\\boldsymbol{n}\\times\\boldsymbol{m})\\cdot\\hat{\\boldsymbol{\\sigma}} \\right)\\delta_{J}\\Big].\n\\end{aligned}\n\\label{eq:collision1}\n\\end{equation}\n\\noindent By neglecting higher order terms $\\beta\\delta_{J}\\sim\\beta^{2}$ in skew-scattering and introducing a more familiar distribution function $\\hat{g}_{\\boldsymbol{k}}=\\hat{\\sigma}_{0}-2\\hat{h}_{\\boldsymbol{k}}$, we get:\n\\begin{equation}\n\\begin{aligned}\ncoll&=-2\\pi a_{1}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\Big[(\\hat{h}_{\\boldsymbol{k'}}-\\hat{h}_{\\boldsymbol{k}})\\delta_{T}+\\{\\hat{h}_{\\boldsymbol{k'}}-\\hat{h}_{\\boldsymbol{k}},\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}\\}\\delta_{J}\\\\\n&\\qquad\\qquad+i\\frac{\\xi_{SO}}{k_{F}^{2}}[\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}},\\hat{h}_{\\boldsymbol{k'}}]\\delta_{T}+i\\frac{\\xi_{SO}}{k_{F}^{2}}(\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{h}_{\\boldsymbol{k'}}\\boldsymbol{m}\\cdot\\hat{\\boldsymbol{\\sigma}}-\\boldsymbol{m}\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{h}_{\\boldsymbol{k'}}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}})\\delta_{J}-\\frac{\\xi_{SO}}{k_{F}^{2}}\\{(\\boldsymbol{n}\\times\\boldsymbol{m})\\cdot\\hat{\\boldsymbol{\\sigma}},\\hat{h}_{\\boldsymbol{k'}}\\}\\delta_{J}\\\\\n&\\qquad\\qquad+\\frac{1}{2}\\frac{\\xi_{SO}}{k_{F}^{2}}\\{(\\boldsymbol{k'}-\\boldsymbol{k})\\times\\hat{\\boldsymbol{\\sigma}},\\nabla_{\\boldsymbol{R}}\\hat{h}_{\\boldsymbol{k'}}\\}\\delta_{T}\n+\\frac{1}{2}\\frac{\\xi_{SO}}{k_{F}^{2}}\\{(\\boldsymbol{k'}-\\boldsymbol{k})\\times\\hat{\\boldsymbol{\\sigma}},\\{\\nabla_{\\boldsymbol{R}}\\hat{h}_{\\boldsymbol{k'}},\\boldsymbol{m}\\cdot\\hat{\\boldsymbol{\\sigma}}\\}\\}\\delta_{J}\\\\\n&\\qquad\\qquad+\\frac{\\xi^{2}_{SO}}{k_{F}^{4}}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{h}_{\\boldsymbol{k'}}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\delta_{T}\n+\\frac{\\xi^{2}_{SO}}{k_{F}^{4}}\\boldsymbol{n}\\cdot\\boldsymbol{m}\\{\\hat{h}_{\\boldsymbol{k'}},\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}} 
\\}\\delta_{J}\\\\\n&\\qquad\\qquad+i\\frac{\\xi^{2}_{SO}}{k_{F}^{4}}(\\boldsymbol{n}\\times\\boldsymbol{m})\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{h}_{\\boldsymbol{k'}}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\delta_{J}\n-i\\frac{\\xi^{2}_{SO}}{k_{F}^{4}}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{h}_{\\boldsymbol{k'}}(\\boldsymbol{n}\\times\\boldsymbol{m})\\cdot\\hat{\\boldsymbol{\\sigma}}\\delta_{J}\\\\\n&\\qquad\\qquad-\\frac{\\xi^{2}_{SO}}{k_{F}^{4}}\\boldsymbol{n}^{2}\\hat{h}_{\\boldsymbol{k}}\\delta_{T}-\\frac{\\xi^{2}_{SO}}{k_{F}^{4}}(2(\\boldsymbol{n}\\cdot\\boldsymbol{m})\\boldsymbol{n}-\\boldsymbol{n}^{2}\\boldsymbol{m})\\cdot\\{\\hat{h}_{\\boldsymbol{k}},\\hat{\\boldsymbol{\\sigma}}\\}\\delta_{J}\\Big]\\\\\n&+2\\pi a_{2}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\Big[\\{\\hat{h}_{\\boldsymbol{k'}},\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\}\\delta_{T}+\\{\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}},\\{\\hat{h}_{\\boldsymbol{k'}},\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}\\}\\}\\delta_{J}-\\beta(\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}\\hat{h}_{\\boldsymbol{k'}}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}+\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{h}_{\\boldsymbol{k'}}\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m} ) \\delta_{T}\\Big],\n\\end{aligned}\n\\label{eq:collision2}\n\\end{equation}\n\\noindent while the Keldysh equation~(\\ref{eq:keld2}) is rewritten as:\n\\begin{equation}\n-2\\left(-i[\\hat{h}_{\\boldsymbol{k}},J\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}]+\\frac{\\hbar^{2}}{m}(\\boldsymbol{k}\\cdot\\nabla_{\\boldsymbol{R}})\\,\\hat{h}_{\\boldsymbol{k}} \\right)=coll.\n\\label{eq:keld3}\n\\end{equation}\n\n\n\n\n\n\\section{Ferromagnetic solution without spin-orbit coupling}\n\\par Let us consider Eq. (\\ref{eq:keld3}) without extrinsic spin-orbit coupling:\n\\begin{equation}\n\\begin{aligned}\n-i[\\hat{h}_{\\boldsymbol{k}}, J\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}]+\\frac{\\hbar^2}{m}(\\boldsymbol{k} \\cdot \\nabla_{\\boldsymbol{R}}) \\hat{h}_{\\boldsymbol{k}}= \\pi a_{1}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\big[(\\hat{h}_{\\boldsymbol{k'}}-\\hat{h}_{\\boldsymbol{k}})\\delta_{T}+\\{\\hat{h}_{\\boldsymbol{k'}}-\\hat{h}_{\\boldsymbol{k}},\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}\\}\\delta_{J}\\big].\n\\end{aligned}\n\\end{equation} \n\\noindent By introducing $\\Omega=i\\hbar\/\\tau_0$, $\\hat U = \\hat{\\boldsymbol{\\sigma}} \\cdot \\boldsymbol{m}$, and: \n\\begin{equation}\n\\begin{aligned}\n\\label{eq:k_fer}\n\\hat{K}=-\\frac{\\hbar^2}{m}(\\boldsymbol{k}\\cdot \\nabla_{\\boldsymbol{R}}) \\hat{h}_{\\boldsymbol{k}}+\\pi a_{1}\\int\\frac{d\\boldsymbol{k'}}{(2\\pi)^{3}}\\big[\\hat{h}_{\\boldsymbol{k'}}\\delta_{T}+\\{\\hat{h}_{\\boldsymbol{k'}},\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}\\}\\delta_{J}\\big],\n\\end{aligned}\n\\end{equation}\n\\noindent in the weak exchange coupling limit ($J\\ll\\varepsilon_{F}$) we have:\n\\begin{equation}\n\\begin{aligned}\n\\hat{h}_{\\boldsymbol{k}} = \\frac{i}{\\Omega} \\hat{K} + \\frac{\\beta}{2}\\{\\hat{h}_{\\boldsymbol{k}}, \\hat{U}\\} + \\frac{J}{\\Omega}[\\hat{U}, \\hat{h}_{\\boldsymbol{k}}].\n\\end{aligned}\n\\end{equation}\n\\noindent This equation is solved iteratively:\n\\begin{equation}\n\\begin{aligned}\n\\hat{h}_{\\boldsymbol{k}} &= \\frac{i}{\\Omega}\\left(1+2\\frac{J^{2}}{\\Omega^{2}}+8\\frac{J^{4}}{\\Omega^{4}}+32\\frac{J^{6}}{\\Omega^{6}}+... 
\\right) \\hat{K} + \\frac{i}{\\Omega}\\frac{\\beta^{2}}{2}\\left(1+\\beta^{2}+\\beta^{4}+... \\right)\\hat{K} \\\\\n&+ \\frac{i}{\\Omega}\\frac{\\beta}{2}\\left(1+\\beta^{2}+\\beta^{4}+... \\right)\\{\\hat{U},\\hat{K}\\}+\\frac{i}{\\Omega}\\frac{J}{\\Omega}\\left(1+4\\frac{J^{2}}{\\Omega^{2}}+16\\frac{J^{4}}{\\Omega^{4}}+... \\right)[\\hat{U},\\hat{K}]\\\\\n&+ \\frac{i}{\\Omega}\\frac{\\beta^{2}}{2}\\left(1+\\beta^{2}+\\beta^{4}+... \\right)\\hat{U}\\hat{K}\\hat{U}-2\\frac{i}{\\Omega}\\frac{J^{2}}{\\Omega^{2}}\\left(1+4\\frac{J^{2}}{\\Omega^{2}}+16\\frac{J^{4}}{\\Omega^{4}}+... \\right)\\hat{U}\\hat{K}\\hat{U},\n\\end{aligned}\n\\end{equation}\n\\noindent or by using $1+x+x^{2}+x^{3}...=\\frac{1}{1-x}$ for $x\\ll 1$:\n\\begin{equation}\n\\begin{aligned}\n\\hat{h}_{\\boldsymbol{k}} &=\\frac{i}{\\Omega}\\left(\\frac{\\Omega^{2}-2J^{2}}{\\Omega^{2}-4J^{2}}+\\frac{\\beta^{2}}{2(1-\\beta^{2})} \\right)\\hat{K}+\\frac{i}{\\Omega}\\frac{\\beta}{2}\\frac{1}{1-\\beta^{2}}\\{\\hat{U},\\hat{K}\\}\\\\\n&+\\frac{iJ}{\\Omega^{2}-4J^{2}}[\\hat{U},\\hat{K}]+\\frac{i}{\\Omega}\\left(\\frac{\\beta^{2}}{2(1-\\beta^{2})}-\\frac{2J^{2}}{\\Omega^{2}-4J^{2}} \\right)\\hat{U}\\hat{K}\\hat{U}.\n\\end{aligned}\n\\end{equation}\n\\noindent Since $J^{2}\/\\Omega^{2}\\ll1$ and $\\beta^{2}\\ll1$, this solution is well justified. By substituting $\\Omega$, $\\hat{K}$ and $\\hat{U}$ and removing the delta-functions, we have:\n\\begin{equation}\n\\begin{aligned}\n\\frac{\\hbar}{\\tau_{0}}\\hat{h}_{\\boldsymbol{k}}=& -\\frac{\\hbar^{2}}{m}(\\boldsymbol{k}\\cdot\\nabla_{\\boldsymbol{R}})\\hat{h}_{\\boldsymbol{k}}+\\frac{\\tau_{0}^{2}}{m}\\frac{2J^{2}}{1+\\frac{4J^{2}\\tau_{0}^{2}}{\\hbar^{2}}}(\\boldsymbol{k}\\cdot\\nabla_{\\boldsymbol{R}})\\left(\\hat{h}_{\\boldsymbol{k}}-\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m} \\hat{h}_{\\boldsymbol{k}}\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m} \\right) \\\\\n&-\\frac{\\beta^{2}}{2(1-\\beta^{2})}\\frac{\\hbar^{2}}{m}(\\boldsymbol{k}\\cdot\\nabla_{\\boldsymbol{R}})\\left(\\hat{h}_{\\boldsymbol{k}}+\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m} \\hat{h}_{\\boldsymbol{k}}\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m} \\right)-\\frac{\\beta}{2(1-\\beta^{2})}\\frac{\\hbar^{2}}{m}(\\boldsymbol{k}\\cdot\\nabla_{\\boldsymbol{R}})\\{\\hat{h}_{\\boldsymbol{k}},\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m} \\}\\\\\n&+\\frac{\\hbar}{\\tau_{0}}\\int\\frac{d\\check{\\boldsymbol{k'}}}{4\\pi}\\,\\hat{h}_{\\boldsymbol{k'}} +\\frac{\\tau_{0}}{\\hbar}\\frac{2J^{2}}{1+\\frac{4J^{2}\\tau_{0}^{2}}{\\hbar^{2}}}\\left(\\int \\frac{d\\check{\\boldsymbol{k'}}}{4\\pi}\\,\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m} \\hat{h}_{\\boldsymbol{k'}}\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}-\\int \\frac{d\\check{\\boldsymbol{k'}}}{4\\pi}\\,\\hat{h}_{\\boldsymbol{k'}}\\right)\\\\\n&-\\frac{iJ}{1+\\frac{4J^{2}\\tau_{0}^{2}}{\\hbar^{2}}} \\int\\frac{d\\check{\\boldsymbol{k'}}}{4\\pi}\\,[\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m},\\hat{h}_{\\boldsymbol{k'}}] +\\frac{\\tau_{0}\\hbar}{m}\\frac{iJ}{1+\\frac{4J^{2}\\tau_{0}^{2}}{\\hbar^{2}}} (\\boldsymbol{k}\\cdot\\nabla_{\\boldsymbol{R}})[\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m},\\hat{h}_{\\boldsymbol{k}}], \n\\label{eq:fer1}\n\\end{aligned}\n\\end{equation}\n\\noindent where $\\check{\\boldsymbol{k}}=\\boldsymbol{k}\/|\\boldsymbol{k}|$. 
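\n\\par As a brief consistency check of the resummation used in the iterative solution above, note that with $x=J^{2}\/\\Omega^{2}$ the exchange series multiplying $\\hat{K}$ indeed sums to a closed form,\n\\begin{equation}\n1+2x+8x^{2}+32x^{3}+...=1+\\frac{2x}{1-4x}=\\frac{1-2x}{1-4x}=\\frac{\\Omega^{2}-2J^{2}}{\\Omega^{2}-4J^{2}},\n\\end{equation}\n\\noindent while $1+4x+16x^{2}+...=1\/(1-4x)$ reproduces the coefficient $iJ\/(\\Omega^{2}-4J^{2})$ of the commutator $[\\hat{U},\\hat{K}]$.\n\\par 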
In the diffusive limit $v_{F}\\tau\\ll L$, where $L$ is the system size, we can partition the distribution function $\\hat{h}_{\\boldsymbol{k}}$ into the isotropic charge $\\mu_{c}$ and spin $\\boldsymbol{\\mu}$ and anisotropic $\\hat{\\boldsymbol{j}}\\cdot\\check{\\boldsymbol{k}}$ components, $\\hat{h}_{\\boldsymbol{k}}=\\mu_{c}\\hat{\\sigma}_{0}+\\boldsymbol{\\mu}\\cdot\\hat{\\boldsymbol{\\sigma}}+\\hat{\\boldsymbol{j}}\\cdot\\check{\\boldsymbol{k}}$. This form is nothing else but the generalized $p$-wave approximation for the distribution function. Upon integrating Eq. (\\ref{eq:fer1}) multiplied by $\\check{\\boldsymbol{k}}$ over $d\\check{\\boldsymbol{k}}\/4\\pi$ and neglecting higher order terms $\\sim \\beta^{2}$, we obtain the following expression for $\\hat{\\boldsymbol{j}}$: \n\\begin{equation}\n\\begin{aligned}\n\\label{eq:j_fer}\n\\frac{\\hbar}{\\tau_{0}}\\hat{\\boldsymbol{j}}=&-\\frac{\\hbar^{2}}{m}k\\nabla\\left(\\mu_{c}\\hat{\\sigma}_{0}+\\boldsymbol{\\mu}\\cdot\\hat{\\boldsymbol{\\sigma}}+\\beta\\mu_{c}\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}+\\beta\\boldsymbol{\\mu}\\cdot{\\boldsymbol{m}}\\hat{\\sigma}_{0}\\right)\\\\\n&-\\frac{\\tau_{0}\\hbar}{m}\\frac{2J}{1+\\frac{4J^{2}\\tau_{0}^{2}}{\\hbar^{2}}}k\\nabla\\,\\hat{\\boldsymbol{\\sigma}}\\cdot(\\boldsymbol{m}\\times\\boldsymbol{\\mu})-\\frac{\\tau_{0}^{2}}{m}\\frac{4J^{2}}{1+\\frac{4J^{2}\\tau_{0}^{2}}{\\hbar^{2}}}k\\nabla\\,\\hat{\\boldsymbol{\\sigma}}\\cdot(\\boldsymbol{m}\\times(\\boldsymbol{m}\\times\\boldsymbol{\\mu})).\n\\end{aligned}\n\\end{equation}\n\\par The charge and spin currents (its $j$th component in the spin space) can be defined as:\n\\begin{equation}\n\\tilde{\\boldsymbol{j}}^{C}=\\frac{1}{4}\\int\\frac{d\\check{\\boldsymbol{k}}}{4\\pi}\\mathrm{Tr}\\,\\{\\hat{\\boldsymbol{v}}_{\\boldsymbol{k}},\\hat{h}_{\\boldsymbol{k}}\\}=\\frac{v_{F}}{6}\\mathrm{Tr}\\,\\hat{\\boldsymbol{j}},\n\\end{equation}\n\\noindent and\n\\begin{equation}\n\\tilde{\\boldsymbol{J}}^{S}_{j}=\\frac{1}{4}\\int\\frac{d\\check{\\boldsymbol{k}}}{4\\pi}\\mathrm{Tr}\\,\\big[\\hat{\\sigma}_{j}\\{\\hat{\\boldsymbol{v}}_{\\boldsymbol{k}},\\hat{h}_{\\boldsymbol{k}}\\}\\big]=\\frac{v_{F}}{6}\\mathrm{Tr}\\,\\big[\\hat{\\sigma}_{j}\\,\\hat{\\boldsymbol{j}}\\big],\n\\end{equation}\n\\noindent where the velocity operator is defined as $\\hat{\\boldsymbol{v}}_{\\boldsymbol{k}}=\\frac{\\hbar}{m}\\boldsymbol{k}\\hat{\\sigma}_{0}$, and $v_{F}=\\frac{\\hbar}{m} k_{F}$ is the Fermi velocity. 
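To make the angular averaging explicit, one may insert the $p$-wave ansatz into the definition of the charge current and use $\\int\\frac{d\\check{\\boldsymbol{k}}}{4\\pi}\\,\\check{k}_{i}=0$ and $\\int\\frac{d\\check{\\boldsymbol{k}}}{4\\pi}\\,\\check{k}_{i}\\check{k}_{j}=\\delta_{ij}\/3$; taking $|\\boldsymbol{k}|\\approx k_{F}$ in the velocity operator, the isotropic terms average out and\n\\begin{equation}\n\\tilde{\\boldsymbol{j}}^{C}=\\frac{v_{F}}{2}\\int\\frac{d\\check{\\boldsymbol{k}}}{4\\pi}\\,\\check{\\boldsymbol{k}}\\,\\mathrm{Tr}\\,\\big[\\mu_{c}\\hat{\\sigma}_{0}+\\boldsymbol{\\mu}\\cdot\\hat{\\boldsymbol{\\sigma}}+\\hat{\\boldsymbol{j}}\\cdot\\check{\\boldsymbol{k}}\\big]=\\frac{v_{F}}{2}\\int\\frac{d\\check{\\boldsymbol{k}}}{4\\pi}\\,\\check{\\boldsymbol{k}}\\,\\big(\\check{\\boldsymbol{k}}\\cdot\\mathrm{Tr}\\,\\hat{\\boldsymbol{j}}\\big)=\\frac{v_{F}}{6}\\mathrm{Tr}\\,\\hat{\\boldsymbol{j}},\n\\end{equation}\n\\noindent and analogously for $\\tilde{\\boldsymbol{J}}^{S}_{j}$ with an extra $\\hat{\\sigma}_{j}$ inside the trace. 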
Thus, neglecting higher order terms $\\sim J^{3}$ gives:\n\\begin{equation}\n\\tilde{\\boldsymbol{j}}^{C} = - D\\nabla (\\mu_c + \\beta \\boldsymbol{\\mu} \\cdot \\boldsymbol{m}),\n\\label{eq:cc_fer}\n\\end{equation}\n\\noindent and\n\\begin{equation}\n\\frac{\\tilde{\\boldsymbol{J}}^{S}_{j}}{D}= -\\nabla( \\mu_{j}+\\beta\\mu_c m_{j})-\\frac{\\tau_{0}}{\\tau_{L}}\\nabla(\\boldsymbol{m}\\times\\boldsymbol{\\mu})_{j}-\\frac{\\tau_{0}}{\\tau_{\\phi}}\\nabla(\\boldsymbol{m}\\times(\\boldsymbol{m}\\times\\boldsymbol{\\mu}))_{j},\n\\label{eq:sc_fer}\n\\end{equation}\n\\noindent where $D=\\tau_{0}v_{F}^{2}\/3$ is the diffusion coefficient, $1\/\\tau_{L}=2J\/\\hbar$ is the Larmor precession rate, and $1\/\\tau_{\\phi}=4J^{2}\\tau_{0}\/\\hbar^{2}$ is the spin dephasing rate.\n\\par The corresponding equations for the charge and spin densities are obtained by integrating Eq.~(\\ref{eq:fer1}) over $\\check{\\boldsymbol{k}}$ and neglecting terms $\\sim J\\nabla^{2}$ and $\\sim\\beta\\nabla^{2}$:\n\\begin{equation}\n-\\frac{\\hbar^{2}}{m}\\frac{k}{3}\\nabla \\cdot \\hat{\\boldsymbol{j}}+\\frac{\\tau_{0}}{\\hbar}\\frac{4J^{2}}{1+\\frac{4J^{2}\\tau_{0}^{2}}{\\hbar^{2}}}\\hat{\\boldsymbol{\\sigma}}\\cdot(\\boldsymbol{m}\\times(\\boldsymbol{m}\\times\\boldsymbol{\\mu}))+\\frac{2J}{1+\\frac{4J^{2}\\tau_{0}^{2}}{\\hbar^{2}}}\\hat{\\boldsymbol{\\sigma}}\\cdot(\\boldsymbol{m}\\times\\boldsymbol{\\mu})=0.\n\\end{equation}\n\\noindent Taking $\\mathrm{Tr}\\,[...]$ and $\\mathrm{Tr}\\,[\\hat{\\boldsymbol{\\sigma}}...]$ and neglecting terms $\\sim J^{3}$ leads to:\n\\begin{equation}\n0=D\\nabla^{2}(\\mu_{c}+\\beta\\boldsymbol{\\mu}\\cdot\\boldsymbol{m})=-\\nabla\\cdot\\tilde{\\boldsymbol{j}}^{C}\n\\end{equation}\n\\noindent and \n\\begin{equation}\n0=-\\nabla\\cdot\\tilde{\\boldsymbol{J}}^{S}+\\frac{1}{\\tau_{L}}(\\boldsymbol{m}\\times\\boldsymbol{\\mu})+\\frac{1}{\\tau_{\\phi}}(\\boldsymbol{m}\\times(\\boldsymbol{m}\\times\\boldsymbol{\\mu}))\n\\end{equation}\n\\noindent for the charge and spin components, respectively. Finally, by restoring the time dependence from Eq.~(\\ref{eq:kelfull}) we obtain:\n\\begin{equation}\n\\partial_{T}\\mu_{c}=-\\nabla\\cdot\\tilde{\\boldsymbol{j}}^{C}\n\\label{eq:ca_fer}\n\\end{equation}\n\\noindent and \n\\begin{equation}\n\\partial_{T}\\boldsymbol{\\mu}=-\\nabla\\cdot\\tilde{\\boldsymbol{J}}^{S}+\\frac{1}{\\tau_{L}}(\\boldsymbol{m}\\times\\boldsymbol{\\mu})+\\frac{1}{\\tau_{\\phi}}(\\boldsymbol{m}\\times(\\boldsymbol{m}\\times\\boldsymbol{\\mu})).\n\\label{eq:sa_fer}\n\\end{equation}\n\\noindent Thus, Eqs. (\\ref{eq:cc_fer}), (\\ref{eq:sc_fer}), (\\ref{eq:ca_fer}) and (\\ref{eq:sa_fer}) define the set of drift-diffusion equations for ferromagnets in the absence of extrinsic spin-orbit coupling.\n\n\n\n\n\n\\section{Ferromagnetic solution with spin-orbit coupling}\n\\par To derive drift-diffusion equations including extrinsic spin-orbit coupling, we employ the same $p$-wave approximation for $\\hat{h}_{\\boldsymbol{k}}$. 
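Before doing so, it is worth spelling out the physical content of Eqs.~(\\ref{eq:cc_fer})--(\\ref{eq:sa_fer}) in a simple situation. For a spatially uniform magnetization ($\\nabla\\boldsymbol{m}=0$, $|\\boldsymbol{m}|=1$) and with $\\boldsymbol{\\mu}=\\mu_{\\parallel}\\boldsymbol{m}+\\boldsymbol{\\mu}_{\\perp}$, these equations give for the charge and longitudinal spin components\n\\begin{equation}\n\\partial_{T}\\mu_{c}=D\\nabla^{2}(\\mu_{c}+\\beta\\mu_{\\parallel}),\\qquad \\partial_{T}\\mu_{\\parallel}=D\\nabla^{2}(\\mu_{\\parallel}+\\beta\\mu_{c}),\n\\end{equation}\n\\noindent i.e. the familiar coupled diffusion equations for the charge and longitudinal spin accumulations, while, according to Eq.~(\\ref{eq:sa_fer}), the transverse component $\\boldsymbol{\\mu}_{\\perp}$ precesses around $\\boldsymbol{m}$ at the rate $1\/\\tau_{L}$ and is suppressed at the rate $1\/\\tau_{\\phi}$, since $\\boldsymbol{m}\\times(\\boldsymbol{m}\\times\\boldsymbol{\\mu})=-\\boldsymbol{\\mu}_{\\perp}$. With this picture in mind, we return to the full problem including spin-orbit scattering. 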
Then, we have:\n\\begin{gather}\n-i[\\hat{h}_{\\boldsymbol{k}},J\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}]=2J\\hat{\\boldsymbol{\\sigma}}\\cdot(\\boldsymbol{\\mu}\\times\\boldsymbol{m})-i[\\hat{\\boldsymbol{j}}\\cdot\\check{\\boldsymbol{k}},J\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m}],\\\\\n(\\boldsymbol{k}\\cdot\\nabla_{\\boldsymbol{R}})\\hat{h}_{\\boldsymbol{k}}=(\\boldsymbol{k}\\cdot\\nabla_{\\boldsymbol{R}})\\mu_{c}\\hat{\\sigma}_{0}+(\\boldsymbol{k}\\cdot\\nabla_{\\boldsymbol{R}})\\boldsymbol{\\mu}\\cdot\\hat{\\boldsymbol{\\sigma}}+(\\boldsymbol{k}\\cdot\\nabla_{\\boldsymbol{R}})\\hat{\\boldsymbol{j}}\\cdot\\check{\\boldsymbol{k}}\n\\end{gather}\n\\noindent for the left-hand side of the Keldysh equation~(\\ref{eq:keld3}), and:\n\\begin{gather}\n\\begin{aligned}\n\\{\\nabla_{\\boldsymbol{R}}\\hat{h}_{\\boldsymbol{k'}},(\\boldsymbol{k'}-\\boldsymbol{k})\\times\\hat{\\boldsymbol{\\sigma}}\\}&=2\\left(\\nabla_{\\boldsymbol{R}}\\times(\\boldsymbol{k'}-\\boldsymbol{k}) \\right)\\cdot\\hat{\\boldsymbol{\\sigma}}\\mu_{c}-2(\\boldsymbol{k'}-\\boldsymbol{k})\\cdot\\left(\\nabla_{\\boldsymbol{R}}\\times\\boldsymbol{\\mu}\\right)\\hat{\\sigma}_{0}\\\\\n&+\\{\\nabla_{\\boldsymbol{R}}\\,\\hat{\\boldsymbol{j}}\\cdot\\check{\\boldsymbol{k'}},(\\boldsymbol{k'}-\\boldsymbol{k})\\times\\hat{\\boldsymbol{\\sigma}}\\},\n\\end{aligned}\\\\\n\\begin{aligned}\n\\{(\\boldsymbol{k'}-\\boldsymbol{k})\\times\\hat{\\boldsymbol{\\sigma}},\\{\\nabla_{\\boldsymbol{R}}\\hat{h}_{\\boldsymbol{k'}},\\boldsymbol{m}\\cdot\\hat{\\boldsymbol{\\sigma}}\\}\\}&=4(\\boldsymbol{k'}-\\boldsymbol{k})\\cdot(\\boldsymbol{m}\\times\\nabla_{\\boldsymbol{R}} \\mu_{c})\\hat{\\sigma}_{0}+4\\hat{\\boldsymbol{\\sigma}}\\cdot(\\nabla_{\\boldsymbol{R}}\\times(\\boldsymbol{k'}-\\boldsymbol{k}))\\boldsymbol{\\mu}\\cdot\\boldsymbol{m}\\\\\n&+\\{(\\boldsymbol{k'}-\\boldsymbol{k})\\times\\hat{\\boldsymbol{\\sigma}},\\{\\nabla_{\\boldsymbol{R}}\\,\\hat{\\boldsymbol{j}}\\cdot\\check{\\boldsymbol{k'}},\\boldsymbol{m}\\cdot\\hat{\\boldsymbol{\\sigma}}\\}\\}\n\\end{aligned}\\\\\n\\begin{aligned}\n\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{h}_{\\boldsymbol{k'}}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}-\\boldsymbol{n}^{2}\\hat{h}_{\\boldsymbol{k}}=2(\\boldsymbol{n}\\times(\\boldsymbol{n}\\times\\boldsymbol{\\mu}))\\cdot\\hat{\\boldsymbol{\\sigma}}+\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}\\hat{\\boldsymbol{j}}\\cdot\\check{\\boldsymbol{k'}}\\boldsymbol{n}\\cdot\\hat{\\boldsymbol{\\sigma}}-\\boldsymbol{n}^{2}\\hat{\\boldsymbol{j}}\\cdot\\check{\\boldsymbol{k}}\n\\end{aligned}\n\\end{gather}\n\\noindent for the collision integral~(\\ref{eq:collision2}). Upon integrating Eq. 
(\\ref{eq:keld3}) over $d\\check{\\boldsymbol{k}}\/4\\pi$ and neglecting terms $\\sim \\xi_{SO}^{2}\\beta$, we obtain in the limit $J\\ll\\varepsilon_{F}$: \n\\begin{equation}\n\\begin{aligned}\n\\label{eq:gen1}\n2J\\hat{\\boldsymbol{\\sigma}}\\cdot(\\boldsymbol{\\mu}\\times\\boldsymbol{m})+\\frac{1}{3}\\frac{\\hbar^{2}k}{m}\\,\\nabla_{\\boldsymbol{R}}\\cdot\\hat{\\boldsymbol{j}}=&-\\frac{8}{9}\\frac{\\hbar}{\\tau_{0}}\\frac{{k}^{2}}{k_{F}^{2}}\\xi_{SO}^{2}\\boldsymbol{\\mu}\\cdot\\hat{\\boldsymbol{\\sigma}}\\\\\n&+\\frac{1}{6}\\frac{\\hbar}{\\tau_{0}}\\frac{\\xi_{SO}}{k_{F}}\\big(\\nabla_{\\boldsymbol{R}}\\cdot(\\hat{\\boldsymbol{j}}\\times\\hat{\\boldsymbol{\\sigma}})-\\nabla_{\\boldsymbol{R}}\\cdot(\\hat{\\boldsymbol{\\sigma}}\\times\\hat{\\boldsymbol{j}}) \\big)\\\\\n&+\\frac{1}{6}\\frac{\\hbar}{\\tau_{0}}\\frac{\\xi_{SO}}{k_{F}}\\beta\\big[\\nabla_{\\boldsymbol{R}}\\cdot(\\hat{\\boldsymbol{j}}\\times\\hat{\\boldsymbol{\\sigma}})+\\nabla_{\\boldsymbol{R}}\\cdot(\\hat{\\boldsymbol{\\sigma}}\\times\\hat{\\boldsymbol{j}}),\\boldsymbol{m}\\cdot\\hat{\\boldsymbol{\\sigma}}\\big]\\\\\n&+\\frac{2}{3}\\frac{\\hbar}{\\tau_{0}}\\frac{\\xi_{SO}}{k_{F}}\\beta\\,\\nabla_{\\boldsymbol{R}}\\cdot(\\boldsymbol{m}\\times\\hat{\\boldsymbol{j}}).\n\\end{aligned}\n\\end{equation}\n\\noindent One more equation is derived by averaging Eq.~(\\ref{eq:keld3}) over $\\check{\\boldsymbol{k}}$ multiplied by $\\check{\\boldsymbol{k}}$ and neglecting terms $\\sim \\xi_{SO}^{2}\\beta$:\n\\begin{equation}\n\\begin{aligned}\n\\label{eq:gen2}\n-iJ\\big[\\,\\hat{\\boldsymbol{j}},\\,\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m} \\big]+\\frac{\\hbar^{2}k}{m}\\nabla_{\\boldsymbol{R}}(\\mu_{c}\\hat{\\sigma}_{0}+\\boldsymbol{\\mu}\\cdot\\hat{\\boldsymbol{\\sigma}})=&-\\frac{\\hbar}{\\tau_{0}}\\Big(1+\\frac{2}{3}\\frac{k^{2}}{k_{F}^{2}}\\xi_{SO}^{2}\\Big)\\hat{\\boldsymbol{j}}+\\frac{1}{2}\\frac{\\hbar}{\\tau_{0}}\\beta\\big\\{\\hat{\\boldsymbol{j}},\\hat{\\boldsymbol{\\sigma}}\\cdot\\boldsymbol{m} \\big\\}\\\\\n&-\\frac{i}{3}\\frac{\\hbar}{\\tau_{0}}\\frac{k}{k_{F}}\\xi_{SO}\\big(\\hat{\\boldsymbol{j}}\\times\\hat{\\boldsymbol{\\sigma}}+\\hat{\\boldsymbol{\\sigma}}\\times\\hat{\\boldsymbol{j}} \\big)\\\\\n&+\\frac{i}{3}\\frac{\\hbar}{\\tau_{0}}\\frac{k}{k_{F}}\\xi_{SO}\\beta\\big(\\boldsymbol{m}\\cdot\\hat{\\boldsymbol{\\sigma}}\\,\\hat{\\boldsymbol{j}}\\times\\hat{\\boldsymbol{\\sigma}}+\\hat{\\boldsymbol{\\sigma}}\\times\\hat{\\boldsymbol{j}}\\,\\boldsymbol{m}\\cdot\\hat{\\boldsymbol{\\sigma}}\\big)\\\\\n&+\\frac{1}{3}\\frac{\\hbar}{\\tau_{0}}\\frac{k}{k_{F}}\\xi_{SO}\\beta\\big(\\hat{\\boldsymbol{\\sigma}}\\cdot\\hat{\\boldsymbol{j}}+\\hat{\\boldsymbol{j}}\\cdot\\hat{\\boldsymbol{\\sigma}})\\boldsymbol{m}\n-\\frac{1}{3}\\frac{\\hbar}{\\tau_{0}}\\frac{k}{k_{F}}\\xi_{SO}\\beta\\big\\{\\hat{\\boldsymbol{\\sigma}},\\hat{\\boldsymbol{j}}\\cdot\\boldsymbol{m} \\big\\}\\\\\n&+\\frac{\\hbar}{\\tau_{0}}\\frac{k}{k_{F}}\\frac{\\xi_{SO}}{k_{F}}\\big(\\nabla_{\\boldsymbol{R}}\\times\\hat{\\boldsymbol{\\sigma}}\\,(\\mu_{c}-\\beta \\boldsymbol{\\mu}\\cdot\\boldsymbol{m})+\\nabla_{\\boldsymbol{R}}\\times(\\boldsymbol{\\mu}-\\beta\\mu_{c}\\boldsymbol{m})\\,\\hat{\\sigma}_{0} \\big)\\\\\n&+\\frac{1}{6}\\frac{m}{\\pi\\hbar}\\frac{v_{i}}{\\tau_{0}}\\xi_{SO}k\\big(\\hat{\\boldsymbol{\\sigma}}\\times\\hat{\\boldsymbol{j}}-\\hat{\\boldsymbol{j}}\\times\\hat{\\boldsymbol{\\sigma}}\\big)\\\\\n&+\\frac{1}{3}\\frac{m}{\\pi\\hbar}\\frac{v_{i}}{\\tau_{0}}\\xi_{SO}\\beta 
k\\big(\\boldsymbol{m}\\cdot\\hat{\\boldsymbol{\\sigma}}\\,\\hat{\\boldsymbol{j}}\\times\\hat{\\boldsymbol{\\sigma}}-\\hat{\\boldsymbol{\\sigma}}\\times\\hat{\\boldsymbol{j}}\\,\\boldsymbol{m}\\cdot\\hat{\\boldsymbol{\\sigma}}\\big)\\\\\n&+\\frac{1}{6}\\frac{m}{\\pi\\hbar}\\frac{v_{i}}{\\tau_{0}}\\xi_{SO}\\beta k\\big(\\boldsymbol{m}\\cdot\\hat{\\boldsymbol{\\sigma}}\\,\\hat{\\boldsymbol{\\sigma}}\\times \\hat{\\boldsymbol{j}}-\\hat{\\boldsymbol{j}}\\times\\hat{\\boldsymbol{\\sigma}}\\,\\boldsymbol{m}\\cdot\\hat{\\boldsymbol{\\sigma}}\\big)\\\\\n&-\\frac{2}{3}\\frac{m}{\\pi\\hbar}\\frac{v_{i}}{\\tau_{0}}\\xi_{SO}\\beta k\\,\\boldsymbol{m}\\times \\hat{\\boldsymbol{j}}.\n\\end{aligned}\n\\end{equation}\n\\noindent The equations above define a set of the generalized drift-diffusion equations, which can now be solved approximately while keeping leading orders in $\\xi_{SO}$ and $\\beta$. Then, starting from a ferromagnetic solution given by Eq.~(\\ref{eq:j_fer}) the anisotropic component of the density matrix is obtained by solving Eq.~(\\ref{eq:gen2}):\n\\begin{equation}\n\\begin{aligned}\n\\hat{\\boldsymbol{j}}&=-\\tau_{0}v_{F}\\nabla\\hat{\\mu}_{0}+\\left(\\frac{\\xi_{SO}}{k_{F}}+\\frac{\\tau_{0}v_{i}k_{F}^{2}}{3\\pi\\hbar}\\xi_{SO} \\right)\\nabla\\times\\boldsymbol{\\mu}\\hat{\\sigma}_{0}-\\frac{\\tau_{0}v_{i}k_{F}^{2}}{3\\pi\\hbar}\\xi_{SO}\\beta (\\nabla\\times\\boldsymbol{m})\\mu_{c}\\hat{\\sigma}_{0}\\\\\n&-\\left(\\frac{2}{3}\\xi_{SO}\\beta\\tau_{0}v_{F}-\\frac{\\tau_{0}v_{i}k_{F}^{2}}{3\\pi\\hbar}\\xi_{SO}\\frac{\\tau_{0}}{\\tau_{L}} \\right)\\nabla\\times(\\boldsymbol{m}\\times\\boldsymbol{\\mu})\\hat{\\sigma}_{0}+\\left(\\frac{\\xi_{SO}}{k_{F}}+ \\frac{\\tau_{0}v_{i}k_{F}^{2}}{3\\pi\\hbar}\\xi_{SO}\\right)\\nabla\\times\\hat{\\boldsymbol{\\sigma}}\\mu_{c}\\\\\n&-\\frac{\\xi_{SO}}{k_{F}}\\beta\\nabla\\times(\\boldsymbol{m}\\times(\\hat{\\boldsymbol{\\sigma}}\\times\\boldsymbol{\\mu}))-\\frac{\\tau_{0}v_{i}k_{F}^{2}}{3\\pi\\hbar}\\xi_{SO}\\beta\\nabla\\times((\\boldsymbol{m}\\times\\hat{\\boldsymbol{\\sigma}})\\times\\boldsymbol{\\mu})-\\frac{\\tau_{0}v_{i}k_{F}^{2}}{3\\pi\\hbar}\\xi_{SO}\\beta(\\nabla\\times\\hat{\\boldsymbol{\\sigma}})\\boldsymbol{\\mu}\\cdot\\boldsymbol{m}\\\\\n&+\\left(\\frac{\\xi_{SO}}{k_{F}}+\\frac{\\tau_{0}v_{i}k_{F}^{2}}{3\\pi\\hbar} \\right)\\frac{\\tau_{0}}{\\tau_{L}}\\nabla\\times(\\hat{\\boldsymbol{\\sigma}}\\times\\boldsymbol{m})\\mu_{c} - \\frac{2}{3}\\tau_{0}v_{F}\\xi_{SO}\\nabla\\times(\\hat{\\boldsymbol{\\sigma}}\\times(\\boldsymbol{\\mu}-\\beta\\mu_{c}\\boldsymbol{m}))\\\\\n&-\\frac{2}{3}\\tau_{0}v_{F}\\xi_{SO}\\frac{\\tau_{0}}{\\tau_{L}}\\nabla\\times\\left((\\hat{\\boldsymbol{\\sigma}}\\times\\boldsymbol{m})\\times\\boldsymbol{\\mu}+\\hat{\\boldsymbol{\\sigma}}\\times(\\boldsymbol{m}\\times\\boldsymbol{\\mu}) \\right),\n\\end{aligned}\n\\end{equation}\n\\noindent where \n\\begin{equation}\n\\hat{\\mu}_{0}=(\\mu_{c}+\\beta\\boldsymbol{\\mu}\\cdot\\boldsymbol{m})\\hat{\\sigma}_{0}+(\\boldsymbol{\\mu}+\\beta\\mu_{c}\\boldsymbol{m})\\cdot\\hat{\\boldsymbol{\\sigma}}+\\frac{\\tau_{0}}{\\tau_{L}}(\\boldsymbol{m}\\times\\boldsymbol{\\mu})\\cdot\\hat{\\boldsymbol{\\sigma}}+\\frac{\\tau_{0}}{\\tau_{\\phi}}(\\boldsymbol{m}\\times(\\boldsymbol{m}\\times\\boldsymbol{\\mu}))\\cdot\\hat{\\boldsymbol{\\sigma}}.\n\\end{equation}\n\\noindent Here, the first term of $\\hat{\\boldsymbol{j}}$ comes from the ferromagnetic solution by moving the right-hand side of Eq.~(\\ref{eq:gen2}) into Eq.~(\\ref{eq:k_fer}). 
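It may be useful to note that the combination $\\frac{\\xi_{SO}}{k_{F}}+\\frac{\\tau_{0}v_{i}k_{F}^{2}}{3\\pi\\hbar}\\xi_{SO}$ appearing above equals $\\tau_{0}v_{F}(\\alpha_{sj}+\\alpha_{sk})$ in terms of the side-jump and skew-scattering coefficients introduced below, while $\\frac{2}{3}\\xi_{SO}=\\alpha_{sw}$. In particular, in the normal-metal limit ($\\beta=0$, $J=0$) the anisotropic component reduces to\n\\begin{equation}\n\\hat{\\boldsymbol{j}}=-\\tau_{0}v_{F}\\Big[\\nabla\\big(\\mu_{c}\\hat{\\sigma}_{0}+\\boldsymbol{\\mu}\\cdot\\hat{\\boldsymbol{\\sigma}}\\big)-(\\alpha_{sj}+\\alpha_{sk})\\big(\\nabla\\times\\boldsymbol{\\mu}\\,\\hat{\\sigma}_{0}+\\mu_{c}\\nabla\\times\\hat{\\boldsymbol{\\sigma}}\\big)+\\alpha_{sw}\\nabla\\times(\\hat{\\boldsymbol{\\sigma}}\\times\\boldsymbol{\\mu})\\Big],\n\\end{equation}\n\\noindent which displays the side-jump and skew-scattering (spin Hall) contributions alongside the spin swapping term $\\sim\\alpha_{sw}$. 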
Plugging this solution in Eq.~(\\ref{eq:gen1}) leads to:\n\\begin{equation}\n\\begin{aligned}\n\\label{eq:j_full}\n\\frac{2J}{\\hbar}(\\boldsymbol{\\mu}\\times\\boldsymbol{m})\\cdot\\hat{\\boldsymbol{\\sigma}}&+\\frac{8}{9}\\frac{\\xi_{SO}^{2}}{\\tau_{0}}\\boldsymbol{\\mu}\\cdot\\hat{\\boldsymbol{\\sigma}}=\\\\\n&=D\\nabla\\cdot\\Big[ \\nabla\\hat{\\mu}_{0}+\\alpha_{sj}\\big[\\hat{\\boldsymbol{\\sigma}}\\times\\nabla(2\\mu_{c}-\\beta\\boldsymbol{\\mu}\\cdot\\boldsymbol{m})-\\nabla\\times(2\\boldsymbol{\\mu}-\\beta\\mu_{c}\\boldsymbol{m})\\big]\\\\\n&+\\alpha_{sk}\\big[\\hat{\\boldsymbol{\\sigma}}\\times\\nabla(\\mu_{c}-\\beta\\boldsymbol{\\mu}\\cdot\\boldsymbol{m})-\\nabla\\times(\\boldsymbol{\\mu}-\\beta\\mu_{c}\\boldsymbol{m})\\big]+\\alpha_{sw}\\nabla\\times(\\hat{\\boldsymbol{\\sigma}}\\times\\boldsymbol{\\mu})\\\\\n&-\\nabla\\times\\big[(\\alpha_{sj}\\frac{\\tau_{0}}{\\tau_{L}}+\\alpha_{sk}\\frac{\\tau_{0}}{\\tau_{L}}+\\alpha_{sw}\\beta)(\\hat{\\boldsymbol{\\sigma}}\\times\\boldsymbol{m})\\mu_{c} + (\\alpha_{sj}\\frac{\\tau_{0}}{\\tau_{L}}+\\alpha_{sk}\\frac{\\tau_{0}}{\\tau_{L}}-\\alpha_{sw}\\beta)(\\boldsymbol{m}\\times\\boldsymbol{\\mu}) \\big]\\\\\n&+\\nabla\\times\\big[(\\alpha_{sj}\\beta+\\alpha_{sk}\\beta+\\alpha_{sw}\\frac{\\tau_{0}}{\\tau_{L}})\\hat{\\boldsymbol{\\sigma}}\\times(\\boldsymbol{m}\\times\\boldsymbol{\\mu}) - (\\alpha_{sj}\\beta+\\alpha_{sk}\\beta-\\alpha_{sw}\\frac{\\tau_{0}}{\\tau_{L}})(\\hat{\\boldsymbol{\\sigma}}\\times\\boldsymbol{m})\\times\\boldsymbol{\\mu} \\big]\\Big],\n\\end{aligned}\n\\end{equation}\n\\noindent where $\\alpha_{sw}=\\frac{2\\xi_{SO}}{3}$, $\\alpha_{sj}=\\frac{\\xi_{SO}}{l_{F}k_{F}}$ and $\\alpha_{sk}=\\frac{v_{i}mk_{F}}{3\\pi\\hbar^{2}}\\xi_{SO}$ are the spin swapping, side-jump and skew-scattering coefficients, respectively, and $l_{F}=\\tau_{0}v_{F}$ is the mean-free path. As seen, Eq.~(\\ref{eq:j_full}) can be regarded as a generalized continuity equation for the density matrix $\\mu_{c}\\hat{\\sigma}_{0}+\\boldsymbol{\\mu}\\cdot\\hat{\\boldsymbol{\\sigma}}$, and its right-hand side is nothing else but the divergence of the full current $\\boldsymbol{j}^{C}\\hat{\\sigma}_{0}+\\boldsymbol{J}^{S}\\cdot\\hat{\\boldsymbol{\\sigma}}$, where the dot product is over spin components. 
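As an explicit illustration of this identification, collecting the terms proportional to $\\hat{\\sigma}_{0}$ in the square bracket of Eq.~(\\ref{eq:j_full}) gives\n\\begin{equation}\n\\nabla(\\mu_{c}+\\beta\\boldsymbol{\\mu}\\cdot\\boldsymbol{m})-\\alpha_{sj}\\nabla\\times(2\\boldsymbol{\\mu}-\\beta\\mu_{c}\\boldsymbol{m})-\\alpha_{sk}\\nabla\\times(\\boldsymbol{\\mu}-\\beta\\mu_{c}\\boldsymbol{m})-\\Big(\\alpha_{sj}\\frac{\\tau_{0}}{\\tau_{L}}+\\alpha_{sk}\\frac{\\tau_{0}}{\\tau_{L}}-\\alpha_{sw}\\beta\\Big)\\nabla\\times(\\boldsymbol{m}\\times\\boldsymbol{\\mu})=-\\boldsymbol{j}^{C}\/D,\n\\end{equation}\n\\noindent so that the charge sector of the right-hand side of Eq.~(\\ref{eq:j_full}) is $-\\nabla\\cdot\\boldsymbol{j}^{C}$; the spin components of the current are extracted in the same way. 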
Thus, the corresponding expressions for the charge and spin currents (its $j$th spin component) can be readily written as:\n\\begin{equation}\n\\begin{aligned}\n\\label{eq:finalcharge}\n\\boldsymbol{j}^{C}\/D&=\\tilde{\\boldsymbol{j}}^{C}\/D+\\alpha_{sj}\\nabla\\times(2\\boldsymbol{\\mu}-\\beta\\mu_{c}\\boldsymbol{m})+\\alpha_{sk}\\nabla\\times(\\boldsymbol{\\mu}-\\beta\\mu_{c}\\boldsymbol{m})+(\\alpha_{sj}\\frac{\\tau_{0}}{\\tau_{L}}+\\alpha_{sk}\\frac{\\tau_{0}}{\\tau_{L}}-\\alpha_{sw}\\beta)\\nabla\\times(\\boldsymbol{m}\\times\\boldsymbol{\\mu})\n\\end{aligned}\n\\end{equation}\n\\noindent and\n\\begin{equation}\n\\begin{aligned}\n\\label{eq:finalspin}\n\\boldsymbol{J}_{j}^{S}\/D&=\\tilde{\\boldsymbol{J}}_{j}^{S}\/D+\\alpha_{sj}\\nabla\\times\\boldsymbol{e}_{j}(2\\mu_{c}-\\beta\\boldsymbol{\\mu}\\cdot\\boldsymbol{m})+\\alpha_{sk}\\nabla\\times\\boldsymbol{e}_{j}(\\mu_{c}-\\beta\\boldsymbol{\\mu}\\cdot\\boldsymbol{m})-\\alpha_{sw}\\nabla\\times(\\boldsymbol{e}_{j}\\times\\boldsymbol{\\mu})\\\\\n&+\\nabla\\times (\\alpha_{sj}\\frac{\\tau_{0}}{\\tau_{L}}+\\alpha_{sk}\\frac{\\tau_{0}}{\\tau_{L}}+\\alpha_{sw}\\beta)(\\boldsymbol{e}_{j}\\times\\boldsymbol{m})\\mu_{c} - \\nabla\\times(\\alpha_{sj}\\beta+\\alpha_{sk}\\beta+\\alpha_{sw}\\frac{\\tau_{0}}{\\tau_{L}})(\\boldsymbol{e}_{j}\\times(\\boldsymbol{m}\\times\\boldsymbol{\\mu})) \\\\\n& + \\nabla\\times(\\alpha_{sj}\\beta+\\alpha_{sk}\\beta-\\alpha_{sw}\\frac{\\tau_{0}}{\\tau_{L}})((\\boldsymbol{e}_{j}\\times\\boldsymbol{m})\\times\\boldsymbol{\\mu}),\n\\end{aligned}\n\\end{equation}\n\\noindent or\n\\begin{equation}\n\\begin{aligned}\nJ_{ij}^{S}\/D&=\\tilde{J}_{ij}^{S}\/D-\\alpha_{sj}\\epsilon_{ijk}\\nabla_{k}(2\\mu_{c}-\\beta\\mu_{n}m_{n})-\\alpha_{sk}\\epsilon_{ijk}\\nabla_{k}(\\mu_{c}-\\beta\\mu_{n}m_{n})-\\alpha_{sw}(\\delta_{ij}\\nabla_{k}\\mu_{k}-\\nabla_{j}\\mu_{i})\\\\\n&+(\\alpha_{sj}\\frac{\\tau_{0}}{\\tau_{L}}+\\alpha_{sk}\\frac{\\tau_{0}}{\\tau_{L}}+\\alpha_{sw}\\beta)(\\delta_{ij}\\nabla_{k}m_{k}-m_{i}\\nabla_{j})\\mu_{c} \\\\\n& - (\\alpha_{sj}\\beta+\\alpha_{sk}\\beta+\\alpha_{sw}\\frac{\\tau_{0}}{\\tau_{L}})\\epsilon_{ikn}\\nabla_{k}(m_{n}\\mu_{j}-\\mu_{n}m_{j}) \\\\\n& +(\\alpha_{sj}\\beta+\\alpha_{sk}\\beta-\\alpha_{sw}\\frac{\\tau_{0}}{\\tau_{L}})(\\epsilon_{ikn}\\nabla_{k}m_{n}\\mu_{j}+\\epsilon_{ijk}\\nabla_{k}m_{n}\\mu_{n}),\n\\end{aligned}\n\\end{equation}\n\\noindent where $\\epsilon_{ijk}$ is the Levi-Civita symbol, and summation over repeated indexes is implied. Here, the first and second subscripts correspond to the spatial and spin components, respectively. Finally, by recovering time dependence in Eq.~(\\ref{eq:j_full}) we obtain the remaining equations for the charge and spin densities:\n\\begin{equation}\n\\label{eq:finalchden}\n\\partial_{T}\\mu_{c}=D\\nabla^{2}(\\mu_{c}+\\beta\\boldsymbol{\\mu}\\cdot\\boldsymbol{m})=-\\nabla\\cdot\\boldsymbol{j}^{C}\n\\end{equation}\n\\noindent and\n\\begin{equation}\n\\label{eq:finalspden}\n\\partial_{T}\\boldsymbol{\\mu}=-\\nabla\\cdot\\boldsymbol{J}^{S}+\\frac{1}{\\tau_{L}}(\\boldsymbol{m}\\times\\boldsymbol{\\mu})+\\frac{1}{\\tau_{\\phi}}(\\boldsymbol{m}\\times(\\boldsymbol{m}\\times\\boldsymbol{\\mu}))-\\frac{1}{\\tau_{sf}}\\boldsymbol{\\mu},\n\\end{equation}\n\\noindent where $1\/\\tau_{sf}=8\\xi_{SO}^{2}\/9\\tau_{0}$ is the spin-flip relaxation time. 
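As a simple limiting case, in a normal metal ($\\beta=0$, $J=0$) all curl terms in Eqs.~(\\ref{eq:finalcharge}) and (\\ref{eq:finalspin}) drop out of the divergences, since $\\nabla\\cdot(\\nabla\\times\\boldsymbol{a})=0$ for any vector field $\\boldsymbol{a}$, and Eqs.~(\\ref{eq:finalchden}) and (\\ref{eq:finalspden}) reduce to the standard charge- and spin-diffusion equations\n\\begin{equation}\n\\partial_{T}\\mu_{c}=D\\nabla^{2}\\mu_{c},\\qquad \\partial_{T}\\boldsymbol{\\mu}=D\\nabla^{2}\\boldsymbol{\\mu}-\\frac{1}{\\tau_{sf}}\\boldsymbol{\\mu}.\n\\end{equation}\n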
The set of Eqs.~(\\ref{eq:finalcharge}), (\\ref{eq:finalspin}), (\\ref{eq:finalchden}) and (\\ref{eq:finalspden}) is the central result of this work.\n\n\n\n\n\n\\section{Spin swapping symmetry}\n\\par In this section we verify the symmetry of the spin swapping term and compare our results with those previously derived for normal metals. \n\\par In their original work, Dyakonov and Lifshits gave the following expression for the spin current $q_{ij}$ due to scattering off the spin-orbit coupling potential:\\cite{perel} \n\\begin{equation}\n\\label{lifshits}\nq_{ij}=q_{ij}^{(0)}-\\alpha_{sh}\\epsilon_{ijk}q^{(0)}_{k}+\\alpha_{sw}(q_{ji}^{(0)}-\\delta_{ij}q_{kk}^{(0)}),\n\\end{equation}\n\\noindent where $q^{(0)}_{k}$ and $q_{ij}^{(0)}$ stand for the primary charge and spin currents in the absence of spin-orbit coupling, respectively, and $\\alpha_{sh}$ parametrizes the overall spin Hall effect. It is thus argued that the spin swapping effect always appears in the form given above.\n\\par Let us consider our solution in the case of normal metals ($\\beta=0$ and $J=0$):\n\\begin{equation}\n\\begin{aligned}\n\\boldsymbol{J}_{j}^{S}&=-D\\nabla\\mu_{j}+D\\alpha_{sh}\\nabla\\times\\boldsymbol{e}_{j}\\mu_{c}-D\\alpha_{sw}\\nabla\\times(\\boldsymbol{e}_{j}\\times\\boldsymbol{\\mu}),\n\\end{aligned}\n\\end{equation}\n\\noindent where $\\alpha_{sh}=2\\alpha_{sj}+\\alpha_{sk}$ follows from Eq.~(\\ref{eq:finalspin}), or, in index notation,\n\\begin{equation}\nJ_{ij}^{S}=-D\\nabla_{i}\\mu_{j}-D\\alpha_{sh}\\epsilon_{ijk}\\nabla_{k}\\mu_{c}+D\\alpha_{sw}(\\nabla_{j}\\mu_{i}-\\delta_{ij}\\nabla_{k}\\mu_{k}).\n\\end{equation}\n\\noindent Taking into account that $q_{ij}^{(0)}\\approx-D\\nabla_{i}\\mu_{j}$ and $q_{k}^{(0)}\\approx-D\\nabla_{k}\\mu_{c}$, it is seen that Eq.~(\\ref{eq:finalspin}) displays the correct symmetry up to a sign coming from the definition of the spin-orbit coupling potential, Eq.~(\\ref{eq:imppot}). This form is also in agreement with some previously published results.\\cite{brataas, raimondi}\n\\par Finally, it is worth comparing our equations with those that fail to include spin swapping in the form given by Eq.~(\\ref{lifshits}). For example, in Ref.~[\\onlinecite{manchon1}] the spin swapping term appeared with the following symmetry:\n\\begin{equation}\n\\begin{aligned}\ne^{2}\\boldsymbol{J}_{j}^{S}\/\\sigma_{N}&=-\\nabla\\mu_{j}\/2+\\alpha_{sj}\\boldsymbol{e}_{j}\\times\\nabla\\mu_{c}-\\alpha_{sw}\\boldsymbol{e}_{j}\\times(\\nabla\\times\\boldsymbol{\\mu})\/2\\\\\n&=-\\nabla\\mu_{j}\/2+\\alpha_{sj}\\boldsymbol{e}_{j}\\times\\nabla\\mu_{c}-\\alpha_{sw}(\\nabla\\mu_{j}-\\nabla_{j}\\boldsymbol{\\mu})\/2,\n\\end{aligned}\n\\end{equation}\n\\noindent where $\\sigma_{N}$ is the bulk conductivity. The symmetry of the spin swapping term is clearly incorrect here: for example, $q_{xx}$ should contain a term $\\sim\\alpha_{sw}(-q^{(0)}_{yy}-q^{(0)}_{zz})$, which is absent in the expression above. The same symmetry problem appears in Eq.~(2) of Ref.~[\\onlinecite{brataas2}], where the spin swapping term reads as $-\\alpha_{sw}\\hat{\\boldsymbol{\\sigma}}\\times\\nabla\\times\\boldsymbol{\\mu}$ (or $-\\alpha_{sw}\\boldsymbol{e}_{j}\\times\\nabla\\times\\boldsymbol{\\mu}$ for the spin current component).\n