{"text":"\\section{Introduction}\n\n\\footnotetext[1]{pheligenius@yahoo.com}\\footnotetext[2]{farida\\_tahir@comsats.edu.pk}One of the most flummoxing problems in modern\nphysics, one that has kept scientists on alert and has been hotly debated since $1929$,\nis the realization of the expansion of the universe, established when\nEdwin Hubble published his revolutionary paper. Astronomical observations\nand studies of the universe in the past few decades have strongly invalidated the\nastronomers' viewpoint that the universe was entirely composed of \\emph{\\textquotedblleft baryonic matter\\textquotedblright }. The latest\nconfirmation of the accelerating universe \\cite{DN,AG,SP,SW,SE} endorsed the\nfact that the universe is infused with an unknown form of energy density\n(dubbed dark energy, $\\rho _{\\Lambda }$) which makes up\nabout $75\\%$ of the total energy density of the universe. It is this $75\\%$\nmysterious $\\rho _{\\Lambda }$, which conditions our three-dimensional\nspatial curvature to be zero, that is responsible for the acceleration of\nthe universe. This discovery provided the first direct evidence that $\\rho\n_{\\Lambda }$ is non-zero, with $\\rho _{\\Lambda }\\approx \\left( 2.3\\times\n10^{-3}eV\\right) ^{4}$ \\cite{FR,JZ}.\n\nHowever, the theoretical expectations for $\\rho _{\\Lambda }$ exceed\nobservational limits by some $120$ orders of magnitude \\cite{JC}. This huge\ndiscrepancy between theory and observation has, hitherto, constituted a serious\nproblem for the theoretical physics community. 
In fact, Steven Weinberg put it\nmore succinctly by saying that the small non-zero value of $\\rho _{\\Lambda }$\nis \\emph{\\textquotedblleft a bone in the throat of theoretical\nphysics\\textquotedblright .} Resolving this huge discrepancy may\nun-shroud something fundamental, yet to be unveiled, about the hidden nature\nof the universe. This paper is one such attempt.\n\nThe most elegant and comprehensible endeavour to solve this\nproblem, in our view, was put forward by F. R. Urban and A. R. Zhitnitsky\n\\cite{FR}. These authors approached the problem from the angle of the\neffective theory of gravity interacting with standard model fields by using\nthe solution of the $U(1)$ problem as put forward by G. Veneziano and E.\nWitten \\cite{GV,EW}. In this framework, the basic problem of why the dark\nenergy is $120$ orders of magnitude smaller than its Planck scale\n$M_{planck}^{4}$ is replaced by fundamentally different questions:\n\\textquotedblleft $(i)$ What is the relevant scale which enters the\neffective theory of gravitation? $(ii)$ How does this scale appear in the\neffective quantum field theory for gravity?\\textquotedblright\\ In their\nview, this effective scale has nothing to do with the cutoff ultraviolet\n$(UV)$ scale $M_{planck}$: the appropriate effective scale must emerge as a\nresult of a subtraction at which some infrared (IR) scale enters the\nphysics. They completely turned the problem on its head!\n\nThough their attempt is cogent, it fails to reproduce exactly\nthe measured value of $\\rho _{\\Lambda }$ \\cite{MT}. 
We observe here that\ntheir assumption $g\\equiv c=C_{QCD}\\times C_{grav}=1$ is debatable, since it\nis valid only for $C_{QCD}$ but not for $C_{grav}$, as proved in this paper.\nHere $g$ is the Minkowski metric in vacuum, $C_{QCD}$ is the Quantum\nChromodynamic $(QCD)$ coupling constant, and $C_{grav}$ is the gravitational\ncoupling constant.\n\nIn Ref.[7], the value of $C_{grav}$ was wrongly computed to be\n$C_{grav}=0.0588$ (which is approximately one-third of the value we prove\nin our calculation), but for obvious reasons the authors neglected this value\nand used a position dependent Minkowski metric distance $g(x^{2})$ instead.\nThey computed $g(x^{2})$ to be $g(x^{2})=1\/6.25.$ For no clear reason, they\napproximated the value of $g(x^{2})$ to $g(x^{2})\\approx 1\/6$ by truncating\n$0.25$ from their original value of $g(x^{2}).$ This approach is totally\nunacceptable in the field of computational cosmology, where every minuscule\nvalue counts.\n\nIn this paper, we prove the value of $C_{grav}$ to be an order of\nmagnitude less than one, i.e. $1.797\\times 10^{-1}$; this leads to the\nexact measured \\cite{MT} value of $\\rho _{\\Lambda }$. In order to get this\nvalue, we have used the finite temperature and density $(FTD)$ correction\ntechnique. Here, the $FTD$ background acts as a highly energetic medium\n$\\left( M_{planck}^{4}\\right) $ controlling the particle propagation. Our\nbasic guiding idea is that finite temperature field theory $(FTFT)$,\nsimilar to the physics of superconductivity (quantum field theory at $T=0$),\nis linked to the infrared sector of the effective theory of gravity\ninteracting with standard model fields, specifically with $QCD$ fields \\cite{FR}. 
In this case, the statistical background effects are incorporated in\npropagators through the Bose-Einstein distribution function \\cite{KR,FE}; it\nis worth noting that the Bose-Einstein distribution function is the\nmathematical tool for understanding the essential feature of the theory of\nsuperconductivity \\cite{FE}. The general attribute of a successful theory of\nsuperconductivity is the existence of a degenerate vacuum\/broken symmetry\nmechanism. A characteristic feature of such a theory is the possible\nexistence of \\textquotedblleft unphysical\\textquotedblright\\ zero-mass\nbosons which tend to preserve the underlying symmetry of the theory. The\nmasslessness of these singularities is protected in the limit\n$q\\longrightarrow 0$. This means that it should cost no energy to create a\nYang-Mills quantum at $q=0$ and thus the mass is zero \\cite{GS}. In the\npreceding reference, the Goldstone-Salam-Weinberg theorem is valid for a zero-mass\npole, which is protected. That pole is not physical and is purely gauge,\nhence unphysical. This is precisely the highly celebrated Veneziano ghost\n\\cite{FR}, which is analogous to the Kogut-Susskind \\emph{(KS)} ghost in the\nSchwinger model \\emph{(a distinctive unphysical degree of freedom which is\nmassless and can propagate to arbitrarily large distances)}.\n\n\\qquad It is imperative to note that this set of unphysical massless bosons\ntends to transform as a basis for a representation of a compact Lie group\n\\cite{FE}, thereby forming a compact manifold. We do not make any specific\nassumptions on the topological nature of the manifold; we only assume that\nthere is at least one Minkowski metric distance that defines a general\ncovariance of comoving coordinates \\cite{SW} with size $L_{M}=2\\times$ the\nEuclidean metric distance.\n\nIn the next section, we derive the finite temperature and density relation\nfor the Veneziano ghosts by using the Bose-Einstein distribution function. 
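Since the construction that follows rests on weighting vacuum modes with the Bose-Einstein distribution, a minimal numerical illustration may help. The function below is our own stdlib-Python sketch of the mean occupation number (with the chemical potential set to zero, as in the next section); it is not part of the original derivation.

```python
import math

def bose_einstein(eps_over_kT):
    """Mean occupation number of a boson mode with energy eps at temperature T,
    expressed through the dimensionless ratio eps/kT (chemical potential = 0)."""
    return 1.0 / (math.exp(eps_over_kT) - 1.0)

# Soft (infrared) modes are heavily populated, which is why the statistical
# background dominates the infrared sector:
print(bose_einstein(0.01))   # ~ 99.5
print(bose_einstein(10.0))   # ~ 4.5e-5
```

The steep growth of the occupation number as the mode energy drops below $kT$ is the quantitative sense in which the finite-temperature background acts as a reservoir for infrared physics.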
It\nshould be noted here that Veneziano ghosts are treated as unphysical\nmassless bosons due to the fact that they both have the same propagator\n$(+ig_{\\mu \\nu }\/q^{2})$ \\cite{FR}: the propagator for the unphysical massless\nboson is obtained from $(+\\left( 2\\pi \\right) ^{4}ie^{2}g_{\\mu \\nu\n}\/q^{2})\\langle \\phi \\rangle ^{2}$ \\cite{FE}.\n\n\\section{ Veneziano-Ghost Density}\n\nFrom the Fermi-Dirac and Bose-Einstein distribution functions, we have\n\\begin{equation}\nn_{r}=\\frac{g_{r}}{e^{\\alpha +\\beta \\varepsilon _{r}}\\pm 1} \\tag{1}\n\\end{equation}\n\nThe positive sign applies to fermions and the negative to bosons. $g_{r}$ is\nthe degeneracy parameter, $\\alpha $ is the coefficient of expansion of the\nboson gas inside the volume $(V)$, $\\beta $ is the Lagrange undetermined\nmultiplier, and $n_{r}$ and $\\varepsilon _{r}$ are the number of particles and\nthe energy of the $r$-th state, respectively. The value of $\\alpha $ for a\nboson gas at a given temperature is determined by the normalization\ncondition \\cite{PC}\n\\begin{equation}\nN=\\underset{r}{\\sum }\\frac{g_{r}}{e^{\\alpha +\\beta \\varepsilon _{r}}-1} \n\\tag{2}\n\\end{equation}\n\nThis sum can be converted into an integral, because for a particle in a box,\nthe states of the system are very close together, i.e.\n$\\left( \\Delta \\varepsilon _{vac}\\equiv d\\varepsilon \\rightarrow 0\\right) $.\nUsing the density of single-particle states function, $Eq.(2)$ reduces to\n\n\\begin{equation}\nN=\\overset{\\infty }{\\underset{0}{\\int }}\\frac{D\\left( \\varepsilon \\right)\nd\\varepsilon }{e^{\\alpha +\\beta \\varepsilon }-1} \\tag{3}\n\\end{equation}\n\nwhere $D\\left( \\varepsilon \\right) d\\varepsilon $ is the number of allowed\nstates in the energy range $\\varepsilon $ to $\\varepsilon +d\\varepsilon $\nand $\\varepsilon $ is the energy of the single-particle states. 
Using the\ndensity of states as a function of energy, we have \\cite{PC}\n\n\\begin{equation*}\nD\\left( \\varepsilon \\right) d\\varepsilon =\\frac{4\\pi V}{h^{3}}2m\\varepsilon\n\\left( \\frac{m}{p}\\right) d\\varepsilon\n\\end{equation*}\n\nwith $p=\\sqrt{2m\\varepsilon }$,\n\n\\begin{equation}\nD\\left( \\varepsilon \\right) d\\varepsilon =2\\pi V\\left( \\frac{2m}{h^{2}}\\right) ^{3\/2}\\varepsilon ^{1\/2}d\\varepsilon \\tag{4}\n\\end{equation}\n\nPutting $Eq.(4)$ into $Eq.(3),$ we get\n\n\\begin{equation}\nN=2\\pi V\\left( \\frac{2m}{h^{2}}\\right) ^{3\/2}\\overset{\\infty }{\\underset{0}{\\int }}\\frac{\\varepsilon ^{1\/2}d\\varepsilon }{e^{\\alpha +\\beta \\varepsilon\n}-1} \\tag{5}\n\\end{equation}\n\nwhere $m$ is the mass of the boson and $h$ is the Planck constant. Here $\\alpha\n=\\beta \\mu $ and $\\beta =1\/kT$, where $\\mu $ is the chemical potential, $k$ is the\nBoltzmann constant and $T$ is the temperature. Since there is no restriction on\nthe total number of bosons, the chemical potential is always equal to zero.\nThus $Eq.(5)$ reads as:\n\n\\begin{equation}\nN=2\\pi V\\left( \\frac{2m}{h^{2}}\\right) ^{3\/2}\\overset{\\infty }{\\underset{0}{\\int }}\\frac{\\varepsilon ^{1\/2}d\\varepsilon }{e^{\\varepsilon \/kT}-1} \\tag{6}\n\\end{equation}\n\nUsing the standard integral\n\n\\begin{equation*}\n\\overset{\\infty }{\\underset{0}{\\int }}\\frac{x^{z-1}dx}{e^{x}-1}=\\varsigma\n\\left( z\\right) \\Gamma \\left( z\\right)\n\\end{equation*}\n\nwhere $\\varsigma \\left( z\\right) $ is the Riemann zeta function and $\\Gamma\n\\left( z\\right) $ is the gamma function, 
$Eq.(6)$ takes the form\n\n\\begin{equation*}\nN=2.61V\\left( \\frac{2\\pi mkT}{h^{2}}\\right) ^{3\/2}\n\\end{equation*}\n\nLet $n_{gv}=N\/V$; then\n\n\\begin{equation}\nn_{gv}=2.61\\left( \\frac{2\\pi mkT}{h^{2}}\\right) ^{3\/2} \\tag{7}\n\\end{equation}\n\nRecall that $m=\\Delta \\varepsilon _{vac}\/c^{2}$ and the average\nkinetic energy of a gas in three-dimensional space is given by $\\Delta\n\\varepsilon _{vac}=\\frac{3kT}{2}$. Thus $Eq.(7)$ becomes\n\n\\begin{equation*}\nn_{gv}=\\left( \\frac{\\left( 2.61\\right) \\left( 3\\pi \\right) ^{3\/2}k^{3}}{\\left( hc\\right) ^{3}}\\right) T^{3}\n\\end{equation*}\n\nDefine\n\n\\begin{eqnarray*}\n\\xi &\\equiv &\\left( \\frac{\\left( 2.61\\right) \\left( 3\\pi \\right) ^{3\/2}k^{3}}{\\left( hc\\right) ^{3}}\\right) \\\\\n&=&2.522\\times 10^{7}\\left( m\\cdot K\\right) ^{-3}\n\\end{eqnarray*}\n\nHence, the Veneziano-ghost density $\\left( n_{gv}\\right) $ can be\nre-expressed in a more elegant form as:\n\n\\begin{equation}\nn_{gv}=\\xi T^{3} \\tag{8}\n\\end{equation}\n\n$Eq.(8)$ is the required result for the finite temperature and\ndensity relation for the Veneziano ghost(s).\n\n\\section{Gravitational Coupling Constant From Veneziano-Ghost Density}\n\nThe principle of general covariance tells us that the energy-momentum tensor\nin the vacuum must take the form\n\n\\begin{equation}\n\\left\\langle 0\\left\\vert \\widehat{T}_{\\mu \\nu }\\right\\vert 0\\right\\rangle\n=T_{\\mu \\nu }^{vac}=g\\left\\langle \\rho \\right\\rangle \\tag{9}\n\\end{equation}\n\nHere $\\left\\langle \\rho \\right\\rangle $ has the dimension of energy\ndensity and $g$ describes a real gravitational field \\cite{SE}. 
Thus $Eq.(9)$\ncan be written as\n\n\\begin{equation}\n\\left\\langle 0\\left\\vert \\widehat{T}_{\\mu \\nu }\\right\\vert 0\\right\\rangle\n=g\\left( \\Delta \\varepsilon _{vac}\\right) ^{4} \\tag{10}\n\\end{equation}\n\nwhere \\textquotedblleft $g$\\textquotedblright\\ in Refs.\\cite{FR,JZ} is\ndefined as $g\\equiv c=C_{QCD}\\times C_{grav}.$ Therefore, $Eq.(10)$ can be\nwritten as\n\n\\begin{equation*}\n\\left\\langle 0\\left\\vert \\widehat{T}_{\\mu \\nu }\\right\\vert 0\\right\\rangle\n=C_{QCD}\\times C_{grav}\\times \\left( \\Delta \\varepsilon _{vac}\\right) ^{4}\n\\end{equation*}\n\nwhere $C_{QCD}=1$, as quoted in \\cite{JZ} and references therein; thus\n\n\\begin{equation}\n\\left\\langle 0\\left\\vert \\widehat{T}_{\\mu \\nu }\\right\\vert 0\\right\\rangle\n=C_{grav}\\times \\left( \\Delta \\varepsilon _{vac}\\right) ^{4} \\tag{11}\n\\end{equation}\n\nNow, the energy density can be written as\n\n\\begin{equation}\n\\rho _{vac}=\\frac{\\Delta \\varepsilon _{vac}}{V}=V^{-1}\\times \\Delta\n\\varepsilon _{vac} \\tag{12}\n\\end{equation}\n\n$Eq.(12)$ is justified by the standard box-quantization procedure \\cite{SE}.\nBy comparing $Eq.(12)$ with $Eq.(8)$, we get\n\n\\begin{equation}\n\\rho _{vac}=n_{gv}\\times \\Delta \\varepsilon _{vac} \\tag{13}\n\\end{equation}\n\nwith $n_{gv}\\equiv V^{-1}.$ From the average kinetic energy of a gas in\nthree-dimensional space, we have $T=2\\Delta \\varepsilon _{vac}\/3k.$ Hence\n$Eq.(8)$ becomes\n\n\\begin{equation}\nn_{gv}=\\frac{8\\xi \\left( \\Delta \\varepsilon _{vac}\\right) ^{3}}{27k^{3}} \n\\tag{14}\n\\end{equation}\n\nPutting the value of $n_{gv}$ in $Eq.(13)$, we get\n\n\\begin{equation}\n\\rho _{vac}=\\frac{8\\xi \\left( \\Delta \\varepsilon _{vac}\\right) ^{4}}{27k^{3}}\n\\tag{15}\n\\end{equation}\n\n$Eq.(15)$ represents the energy density of a vacuum state.\n\nThe natural demand of Lorentz invariance of the vacuum state is embedded\nin the structure of (effective) quantum field theory in Minkowski 
space-time\ngeometry \\cite{SE,RM}. Hence, if $\\left\\vert 0\\right\\rangle $ is a vacuum\nstate in a reference frame $S$ and $\\left\\vert 0^{\\prime }\\right\\rangle $\nrefers to the same vacuum state observed from a reference\nframe $S^{\\prime },$ which moves with uniform velocity relative to $S$, then the quantum\nexpression for Lorentz invariance of the vacuum state reads\n\\begin{equation}\n\\left\\vert 0^{\\prime }\\right\\rangle =u\\left( L\\right) \\left\\vert 0\\right\\rangle =\\left\\vert\n0\\right\\rangle \\tag{16}\n\\end{equation}\n\nwhere $u\\left( L\\right) $ is the unitary transformation (acting on the\nquantum state $\\left\\vert 0\\right\\rangle $) corresponding to a Lorentz\ntransformation $L$. All the physical properties that can be extracted from\nthis vacuum state, such as the value of the energy density, should also remain\ninvariant under Lorentz transformations \\cite{SE}. If the Lorentz\ntransformation is initiated by $\\rho _{vac}$, then $2\\times \\rho _{vac}$ is\nneeded for a unitary transformation to take place. The logic behind this\nassumption is simple: if $\\rho _{vac}$ defines the Lorentz invariant length\n$(L)$ (Euclidean metric distance) of $\\left\\vert 0\\right\\rangle $, then the\nLorentz transformation from $\\left\\vert 0\\right\\rangle $ to $\\left\\vert 0^{\\prime }\\right\\rangle $ (with continuous excitation) requires $2\\times \\rho _{vac}$: $\\left\\vert 0\\right\\rangle \\overset{2\\times \\rho _{vac}}{\\longrightarrow }\\left\\vert 0^{\\prime }\\right\\rangle $. This leads to the principle of general covariance as\nstated a priori in the introduction \\cite{SE}. Thus,\n\n\\begin{equation}\n\\left\\langle 0\\left\\vert \\widehat{T}_{\\mu \\nu }\\right\\vert 0\\right\\rangle\n=2\\times \\rho _{vac}=\\frac{16\\xi \\left( \\Delta \\varepsilon _{vac}\\right) ^{4}}{27k^{3}} \\tag{17}\n\\end{equation}\n\n$Eq.(17)$ is also justified by the standard box-quantization\nprocedure \\cite{SE}. 
Now by combining $Eq.(11)$ and $Eq.(17)$, we have\n\n\\begin{equation*}\nC_{grav}=\\frac{16\\xi }{27k^{3}}=2.336\\times 10^{19}\\left( m\\cdot eV\\right) ^{-3}\n\\end{equation*}\n\nSince $1m=5.07\\times 10^{15}GeV^{-1}$, this leads to\n\n\\begin{equation}\nC_{grav}=1.797\\times 10^{-1} \\tag{18}\n\\end{equation}\n\nwhich is the required gravitational coupling constant.\n\n\\section{Dark Energy From The Veneziano-Ghost: A Review}\n\nThe major ingredient of the standard Witten-Veneziano resolution of the $U(1)$\nproblem is the existence of the topological susceptibility $\\chi $. In Ref.\\cite{FR}, it has been proved that the deviation in $\\chi $, i.e. $\\Delta \\chi $,\nrepresents the vacuum energy density \\emph{(dark energy)}. We review this\nresult by making use of $Eq.(9)$ and resolve the inherent hitch in this\napproach with the help of $Eq.(18)$. Thus from $Eq.(9)$ we have\n\n\\begin{equation}\ni\\int dx\\left\\langle 0\\left\\vert \\widehat{T}_{\\mu \\nu }\\right\\vert\n0\\right\\rangle =i\\int dxT_{\\mu \\nu }^{vac} \\tag{19}\n\\end{equation}\n\nBy using the standard Witten-Veneziano relations\n\n\\begin{equation*}\n\\widehat{T}_{\\mu \\nu }\\equiv T\\left\\{ Q\\left( x\\right) ,Q\\left( 0\\right)\n\\right\\}\n\\end{equation*}\n\nwhere\n\\begin{equation*}\nQ\\equiv \\frac{\\alpha _{s}}{16\\pi }\\epsilon ^{\\mu \\nu \\rho \\sigma }G_{\\mu \\nu\n}^{a}G_{\\rho \\sigma }^{a}\\equiv \\frac{\\alpha _{s}}{8\\pi }G_{\\mu \\nu }^{a}\\widetilde{G}^{\\mu \\nu a}\\equiv \\partial _{\\mu }K^{\\mu }\n\\end{equation*}\n\nand\n\n\\begin{equation}\nK^{\\mu }\\equiv \\frac{\\Gamma ^{2}}{16\\pi ^{2}}\\epsilon ^{\\mu \\nu \\lambda\n\\sigma }A_{\\nu }^{a}\\left( \\partial _{\\lambda }A_{\\sigma }^{a}+\\frac{\\Gamma \n}{3}f^{abc}A_{\\lambda }^{b}A_{\\sigma }^{c}\\right) \\tag{20}\n\\end{equation}\n\nwhere $A_{\\mu }^{a}$ are the conventional $QCD$ color gluon fields,\n$Q$ is the topological charge density, and $\\alpha _{s}=\\frac{\\Gamma ^{2}}{4\\pi }$. 
Thus we have\n\n\\begin{equation*}\ni\\int dx\\left\\langle 0\\left\\vert T\\left\\{ Q\\left( x\\right) ,Q\\left( 0\\right)\n\\right\\} \\right\\vert 0\\right\\rangle =i\\int dxT_{\\mu \\nu }^{vac}\n\\end{equation*}\n\n\\begin{equation}\n\\underset{q\\longrightarrow 0}{\\lim }i\\int dxe^{iqx}\\left\\langle 0\\left\\vert\nT\\left\\{ Q\\left( x\\right) ,Q\\left( 0\\right) \\right\\} \\right\\vert\n0\\right\\rangle =\\underset{q\\longrightarrow 0}{\\lim }i\\int dxe^{iqx}T_{\\mu\n\\nu }^{vac} \\tag{21}\n\\end{equation}\n\nLet\n\\begin{equation*}\n\\underset{q\\longrightarrow 0}{\\lim }i\\int dxe^{iqx}T_{\\mu \\nu }^{vac}=\\chi\n\\end{equation*}\n\nHence $Eq.(21)$ becomes\n\n\\begin{equation*}\n\\chi =\\underset{q\\longrightarrow 0}{\\lim }i\\int dxe^{iqx}\\left\\langle\n0\\left\\vert T\\left\\{ Q\\left( x\\right) ,Q\\left( 0\\right) \\right\\} \\right\\vert\n0\\right\\rangle\n\\end{equation*}\n\nand\n\n\\begin{equation}\n\\Delta \\chi =\\Delta \\left[ \\underset{q\\longrightarrow 0}{\\lim }i\\int\ndxe^{iqx}\\left\\langle 0\\left\\vert T\\left\\{ Q\\left( x\\right) ,Q\\left(\n0\\right) \\right\\} \\right\\vert 0\\right\\rangle \\right] \\tag{22}\n\\end{equation}\n\nUsing $\\Delta =c\\left( H\/m_{\\eta }\\right) $ and $\\left[ \\underset{q\\longrightarrow 0}{\\lim }i\\int dxe^{iqx}\\left\\langle 0\\left\\vert T\\left\\{\nQ\\left( x\\right) ,Q\\left( 0\\right) \\right\\} \\right\\vert 0\\right\\rangle\n\\right] =-\\left[ \\lambda _{YM}^{2}\\left( q^{2}-m_{0}^{2}\\right) \/\\left(\nq^{2}-m_{0}^{2}-\\frac{\\lambda _{\\eta }^{2}}{N_{c}}\\right) \\right] $ from\nRef.\\cite{FR}, $Eq.(22)$ can be written as\n\n\\begin{equation}\n\\Delta \\chi =-c\\left( \\frac{2H}{m_{\\eta }}\\right) .\\frac{\\lambda\n_{YM}^{2}\\left( q^{2}-m_{0}^{2}\\right) }{\\left( q^{2}-m_{0}^{2}-\\frac{\\lambda _{\\eta }^{2}}{N_{c}}\\right) } \\tag{23}\n\\end{equation}\n\nThe standard Witten-Veneziano solution of the $U(1)$ problem is based on the\nwell-established assumption (confirmed by 
various lattice computations) that\n$\\chi $ does not vanish, despite the fact that $Q$ is a total derivative,\n$Q\\equiv \\partial _{\\mu }K^{\\mu }$. This suggests that there is an unphysical\npole at $q=0$ in the correlation function of $K^{\\mu }$, similar to the \\emph{KS}\nghost in the Schwinger model \\cite{FR}. Thus $Eq.(23)$ becomes\n\n\\begin{equation}\n\\Delta \\chi =-c\\left( \\frac{2H}{m_{\\eta }}\\right) .\\frac{\\lambda\n_{YM}^{2}m_{0}^{2}}{m_{\\eta }^{2}} \\tag{24}\n\\end{equation}\n\nwhere $m_{\\eta }^{2}=m_{0}^{2}+\\frac{\\lambda _{\\eta }^{2}}{N_{c}}$ is the\nmass of the physical $\\eta $ field, and the factor of $2$ in\n$Eq.(24)$ follows from the principle of general covariance as we have\nalready established. Using the Witten-Veneziano relation $4\\lambda _{YM}^{2}=f_{\\pi }^{2}m_{\\eta }^{2}$ and the chiral condensate $m_{0}^{2}f_{\\pi\n}^{2}=-4m_{q}\\left\\langle \\overline{q}q\\right\\rangle ,$ $Eq.(24)$ can be\nwritten as\n\n\\begin{equation}\n\\Delta \\chi =c\\left( \\frac{2H}{m_{\\eta }}\\right) \\left\\vert\nm_{q}\\left\\langle \\overline{q}q\\right\\rangle \\right\\vert \\tag{25}\n\\end{equation}\n\nwhere $H$ is the Hubble constant and $m_{q}$ is the mass of a single light\nquark. 
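As a rough cross-check of the scale appearing in Eq.(25), one can insert representative values by hand. All the inputs below are our own assumptions (a Hubble constant of order $10^{-33}$ eV, the eta-prime mass, and a typical light-quark mass and chiral condensate); they are not values quoted in this paper:

```python
# Order-of-magnitude check of Eq.(25)-(28), everything in eV units.
# Assumed inputs: H ~ 1.4e-33 eV, m_eta' ~ 0.958 GeV, m_q ~ 5 MeV,
# |<qbar q>| ~ (240 MeV)^3.
H = 1.4e-33                           # eV
m_eta = 0.958e9                       # eV
mq_condensate = 5e6 * (240e6) ** 3    # |m_q <qbar q>| in eV^4

delta_chi_over_c = 2 * H / m_eta * mq_condensate   # eV^4
scale = delta_chi_over_c ** 0.25
print(scale)        # ~ 3.8e-3 eV, close to the quoted (3.6e-3 eV)^4 scale

C_grav = 0.1797     # Eq.(18)
rho_scale = (C_grav * delta_chi_over_c) ** 0.25
print(rho_scale)    # ~ 2.5e-3 eV, the (2.3e-3 eV)^4 ballpark of Eq.(28)
```

With these generic inputs the fourth root lands within about ten percent of the quoted numbers, which is the level of agreement one should expect from such an estimate.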
From Ref.\\cite{FR}, $c\\left( \\frac{2H}{m_{\\eta }}\\right) \\left\\vert\nm_{q}\\left\\langle \\overline{q}q\\right\\rangle \\right\\vert \\approx c\\left(\n3.6\\times 10^{-3}eV\\right) ^{4}$ leads to\n\n\\begin{equation}\n\\Delta \\chi \\approx c\\left( 3.6\\times 10^{-3}eV\\right) ^{4} \\tag{26}\n\\end{equation}\n\nBy using $c=C_{QCD}\\times C_{grav}\\approx C_{grav}$ from \\cite{JZ} and\nreferences therein, $Eq.(26)$ can be written as\n\n\\begin{equation}\n\\Delta \\chi \\approx C_{grav}\\left( 3.6\\times 10^{-3}eV\\right) ^{4} \\tag{27}\n\\end{equation}\n\nComparison of $Eq.(18)$ with $Eq.(27)$ gives\n\\begin{equation}\n\\rho _{\\Lambda }\\equiv \\Delta \\chi \\approx \\left( 2.3\\times 10^{-3}eV\\right)\n^{4} \\tag{28}\n\\end{equation}\n\n$Eq.(28)$ is the measured value of $\\rho _{\\Lambda }$ that is responsible\nfor the acceleration of the universe.\n\nUsing the Planck scale $M_{Pl}^{4}$ as the cutoff\ncorrection, $Eq.(8)$ becomes\n\n\\begin{equation}\nn_{gv}^{planck}=7\\times 10^{103}m^{-3} \\tag{29}\n\\end{equation}\n\nFrom the standard box-quantization procedure \\cite{SE}, we have\n\n\\begin{equation}\n2\\times \\rho _{\\Lambda }^{total}=\\frac{1}{V}\\underset{k}{\\sum }\\hbar \\omega _{k} \\tag{30}\n\\end{equation}\n\nBy imposing the Lorentz invariance of the vacuum state formalism on $Eq.(30)$, we\nhave\n\n\\begin{equation}\n2\\times \\rho _{\\Lambda }^{total}=\\frac{1}{V}n\\hbar \\omega =n\\left[ n_{gv}\\times \\Delta \\varepsilon _{vac}\\right] \\tag{31}\n\\end{equation}\n\nwhere $n_{gv}\\equiv \\frac{1}{V}$ and $\\Delta \\varepsilon _{vac}\\equiv \\hbar \\omega .$ Note that $Eq.(31)$ reduces to $Eq.(17)$ for $n=1$; therefore\n$Eq.(31)$ can be rewritten for the Planck scale cutoff correction (where 
Planck\nseries of energy $(n\\hbar \\omega )$ is taken to be the Planck energy $(E_{Pl})$):\n\n\\begin{equation}\nn\\left[ n_{gv}\\times \\Delta \\varepsilon _{vac}\\right] =n_{gv}^{Planck}\\times\nE_{Pl} \\tag{32}\n\\end{equation}\n\nFrom $Eqs.(13),(14),(15),(17)$ and $(28),$ we have\n\n\\begin{equation*}\nn\\left[ \\frac{8\\xi \\left( \\Delta \\varepsilon _{vac}\\right) ^{4}}{27k^{3}}\\right] =n_{gv}^{Planck}\\times E_{Pl}\n\\end{equation*}\n\n\\begin{equation}\nn\\left[ \\frac{\\rho _{\\Lambda }}{2}\\right] =n_{gv}^{Planck}\\times\nE_{Pl}=M_{Pl}^{4} \\tag{33}\n\\end{equation}\n\nwhere $\\rho _{vac}=\\rho _{\\Lambda }\/2$ is the energy density of each\ninfrared sector. $Eq.(33)$ shows how the cutoff $UV$ scale $M_{Pl}^{4}$\nmanifests itself as linearly independent infrared sectors of the effective\ntheory of gravity interacting with $QCD$ fields.\n\nBy combining $Eqs.(28),(29)$ and $(33)$, we have\n\n\\begin{equation}\nn=4\\times 10^{122}\\approx 10^{122} \\tag{34}\n\\end{equation}\n\nwhere $n_{gv}^{Planck}\\times E_{Pl}=M_{Pl}^{4}=1.4\\times 10^{113}J\/m^{3}.$\nThus $Eq.(34)$ suggests that there are $\\approx 10^{122}$ (degenerate) vacuum\nstates. These vacuum states ($n$-torus) are called \\emph{\\textquotedblleft\nsubuniverses or multiverse\\textquotedblright } \\cite{EB,SWH,SC,AV,SW97}. An $n$-torus is an\nexample of an $n$-dimensional compact manifold or a compact Abelian Lie group\n$U(1).$ In this sense, it is a product of $n$ circles, i.e. $T^{n}=S^{1}\\times\nS^{1}\\times \\cdots \\times S^{1}=T^{1}\\times T^{1}\\times \\cdots \\times\nT^{1}$ \\cite{BR,DL,RP}. 
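The numerical chain behind Eq.(8), Eq.(18), Eq.(29) and Eq.(34) can be reproduced with standard-library Python. The CODATA constants and the Planck temperature (T_Pl ~ 1.417e32 K) are inputs we supply ourselves; the script is a sanity check on the quoted figures, not part of the derivation:

```python
import math

# CODATA constants (SI), plus Boltzmann constant in eV/K
k = 1.380649e-23        # J/K
h = 6.62607015e-34      # J s
c = 2.99792458e8        # m/s
k_eV = 8.617333262e-5   # eV/K

zeta_32 = 2.612         # Riemann zeta(3/2), the 2.61 factor of Eq.(7)
xi = zeta_32 * (3 * math.pi) ** 1.5 * k ** 3 / (h * c) ** 3
print(xi)               # ~ 2.5e7 (m K)^-3, cf. the quoted 2.522e7

# Eq.(18): C_grav = 16 xi / (27 k^3), made dimensionless via 1 m = 5.07e15 GeV^-1
C_grav_meV = 16 * xi / (27 * k_eV ** 3)   # in (m eV)^-3
m3_to_eV3 = (5.07e15) ** -3 * 1e27        # m^-3 -> eV^3 (through GeV^3)
C_grav = C_grav_meV * m3_to_eV3
print(C_grav)           # ~ 0.18, cf. the quoted 1.797e-1

# Eq.(29): ghost density at the Planck temperature (assumed input)
n_planck = xi * (1.416784e32) ** 3
print(n_planck)         # ~ 7e103 m^-3

# Eq.(34): n = 2 M_Pl^4 / rho_Lambda, with M_Pl^4 = 1.4e113 J/m^3
hbar_c = 1.9732698e-7   # eV m
rho_lambda = (2.3e-3) ** 4 / hbar_c ** 3 * 1.602177e-19   # J/m^3
n = 2 * 1.4e113 / rho_lambda
print(n)                # ~ 5e122, within a factor of a few of the quoted 4e122
```

Each printed value reproduces the corresponding quoted figure to within about one percent, except the final count, which agrees at the order-of-magnitude level claimed in Eq.(34).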
In this paper, the $n$ circles, which are the elements\nof the $U(1)$ group, represent $n$ linearly independent infrared sectors or the\nunphysical massless gauge bosons dubbed Veneziano ghosts.\n\nIt is important to notice that the existence of the non-vanishing and linearly\nindependent infrared sectors of the effective theory of gravity interacting\nwith $QCD$ fields is parametrically proportional to the Planck cutoff\nenergy. Therefore, our simple extension of the Veneziano ghost theory of $QCD$\nto accommodate $FTFT$ has striking consequences: it accurately predicts the\nvalue of $C_{grav},$ which leads to $100\\%$ consistency between\ntheory and the experimental value of $\\rho _{\\Lambda }.$ As an offshoot, it\nfortifies the idea of the multiverse and paints a new picture of the quantum\ncosmological paradigm.\n\n\\section{Summary and Conclusion}\n\nThe computational analysis of the dark energy problem from the combined\nframeworks of the finite temperature-density correction technique and the\nVeneziano ghost theory of $QCD$ conditions the $FTD$ background to behave like a\nreservoir for the infrared sectors of the effective theory of gravity\ninteracting with $QCD$ fields. These infrared sectors (unphysical massless\nbosons) transform as a basis for a representation of a compact manifold.\nThis is analogous to the process of quantizing on a manifold $M$ (such as a\ntorus group $T^{n}=T^{1}\\times \\cdots \\times T^{1}=T^{10^{122}}$), in\nwhich all the submanifolds (tori) are linearly independent of each other.\nThis means that an \\emph{\\textquotedblleft observer\\textquotedblright }\ntrapped in one of such tori would think his torus is the whole Universe. An\nimportant prediction of this is that the vacuum energy $\\Delta \\varepsilon\n_{vac}$ owes its existence to the degenerate nature of the vacuum (or to the\nasymmetric nature of the universe). 
The effect of this is a direct\nconsequence of the embedding of our subuniverse in a non-trivial manifold $M$\nwith (minuscule) different linear sizes.\n\nThe main result of the present study is that the effective scales obviously\nhave something to do with the cutoff ultraviolet $(UV)$ scale $M_{Pl}$.\nBased on the standard box-quantization procedure, the $UV$ scale $M_{Pl}$ is\na collection of infrared $(IR)$ scales. Undoubtedly, the relevant effective\nscales appear as a result of energy differences (subtractions) at which the\n$IR$ scales enter the physics of the $UV$ scale $M_{Pl}$. It is therefore\nimpossible to compute the value of $\\rho _{\\Lambda }$ without taking into\nconsideration the statistical effect of the $UV$ scale $M_{Pl}$, which\nmanifests itself through the existence of the linearly independent $IR$\nsectors of the effective quantum field theory $(QFT)$: this is the\n\\emph{\\textquotedblleft stone\\textquotedblright } that confirms the\ninterrelationship between $FTFT$ and the theory of superconductivity ($QFT$\nat $T=0$).\n\nThus, if you buy the idea of the Lorentz invariance of the vacuum state formalism or\nthe degenerate vacuum mechanism, then $\\sim 10^{122}$ subuniverses come as\nfree gifts!\n\n\\subsubsection{\\textbf{Acknowledgement}}\n\nMr. O. F. Akinto is indebted to the Department of Physics, CIIT, Islamabad,\nand the National Mathematical Center Abuja, Nigeria, for their financial\nsupport.\n\n\\bigskip\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nStereo matching\\cite{DBLP:journals\/pami\/SunZS03,Ke2009Cross} is a core technique in many 3D computer vision applications, such as autonomous driving, robot navigation, and object detection and recognition\\cite{DBLP:conf\/nips\/ChenKZBMFU15,DBLP:conf\/iccv\/ZhangLCCCR15}. 
It aims to obtain the depth information for the reference image by calculating the disparity map between the two images (\\emph{i.e.} the left image and right image, each sized $H\\times W$) captured by the stereo camera. The reference image can be either the left image or the right image. In the rest of this manuscript, we assume the left image to be the reference image, and the right image is regarded as the source image accordingly. \n\nThe disparity of a target point in the reference image is the horizontal displacement between the target point and its most similar point in the source image\\cite{DBLP:journals\/ijcv\/ScharsteinS02,DBLP:conf\/cvpr\/LuoSU16,DBLP:conf\/cvpr\/SekiP17}. In order to find the most similar point for a target point, the similarity scores between this point and the candidate points in the source image are calculated (\\emph{i.e.} the similarity distribution)\\cite{DBLP:conf\/cvpr\/Hirschmuller05,DBLP:journals\/pami\/BrownBH03}. When there are $D_{max}$ candidate points (\\emph{i.e.} the matching range), a 3D cost volume of size $D_{max}\\times H \\times W$ containing all similarity scores for the $H \\times W$ target points is calculated. \n\nTo obtain such a 3D cost volume, recent cost aggregation network based learning methods\\cite{DBLP:conf\/iccv\/KendallMDH17,DBLP:conf\/cvpr\/ChangC18,DBLP:conf\/cvpr\/GuoYYWL19,DBLP:conf\/cvpr\/ZhangPYT19} first form a 4D volume of size $F\\times \\frac{1}{n}D_{max}\\times \\frac{1}{n}H \\times \\frac{1}{n}W$ (where $F$ is the dimension of the correlation feature and $n$ is the ratio of downsampling) by associating each unary feature with its corresponding unary from the opposite source image across $\\frac{1}{n}D_{max}$ disparity levels. They obtain a high quality low-resolution 3D cost volume by focusing on optimizing the low-resolution 4D volume with a cost aggregation network, and then achieve high precision on the final high-resolution disparity map ($H\\times W$). 
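The 3D cost volume just described can be made concrete with a short NumPy sketch: for each disparity $d$, the target pixel at column $x$ in the reference (left) image is compared with the candidate at column $x-d$ in the source (right) image. The cosine-similarity matcher below is our own toy illustration of the generic construction, not the implementation of any cited network:

```python
import numpy as np

def build_cost_volume(left_feat, right_feat, max_disp):
    """left_feat, right_feat: (F, H, W) unary feature maps.
    Returns a (max_disp, H, W) volume of cosine similarity scores."""
    F, H, W = left_feat.shape
    ln = left_feat / (np.linalg.norm(left_feat, axis=0, keepdims=True) + 1e-8)
    rn = right_feat / (np.linalg.norm(right_feat, axis=0, keepdims=True) + 1e-8)
    vol = np.full((max_disp, H, W), -1.0)   # -1 marks out-of-range candidates
    for d in range(max_disp):
        # the candidate for target pixel x sits at x - d in the source image
        if d == 0:
            vol[d] = (ln * rn).sum(axis=0)
        else:
            vol[d, :, d:] = (ln[:, :, d:] * rn[:, :, :-d]).sum(axis=0)
    return vol

# Toy check: shift a random feature map by 3 pixels; argmax over d recovers it.
rng = np.random.default_rng(0)
right = rng.standard_normal((8, 4, 16))
left = np.zeros_like(right)
left[:, :, 3:] = right[:, :, :-3]
disp = build_cost_volume(left, right, max_disp=6).argmax(axis=0)
print(disp[:, 5:])   # every entry is the true disparity, 3
```

The learned methods discussed next replace the fixed cosine score with aggregated correlation features, but the indexing over the $D_{max}$ candidates is the same.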
The process of most cost aggregation networks contains multiple identical 4D volume aggregation stages to refine the correlation features multiple times. These methods only output a low-resolution 3D cost volume containing the similarity scores of partial candidates. To obtain high-resolution disparity, the widely accepted method is to first use linear interpolation to get the complete 3D cost volume. However, the similarity score is not a linear function of the disparity, which causes inaccurate estimation of the final disparity. Although some methods add additional refinement modules to refine the disparity, they still cannot get satisfactory results due to the lack of correlation features between matching pairs.\n \nActually, due to the nature of CNNs, each point in the low-resolution features contains the information of all pixels in the patch of the original resolution images where it is located. Therefore, all $D_{max}$ correlation features between target points and candidates are included in the low-resolution 4D volume. Leveraging convolutional layers to decouple all $D_{max}$ similarity scores from the 4D volume is naturally a better solution for the complete 3D cost volume. Early methods, such as GC-Net\\cite{DBLP:conf\/iccv\/KendallMDH17}, decouple all $D_{max}$ similarity scores by \\emph{transposed convolution}. However, the implementation of \\emph{transposed convolution} introduces additional computation. More notably, as the network deepens, details in the 4D volume are lost. In addition, learning $D_{max}$ similarity scores from the optimized low-scale 4D volume, which contains $\\frac{1}{n}D_{max}$ correlation features for each target point, means that one correlation feature outputs $n$ similarity scores. This is an internally competitive task, because each feature essentially represents the degree of correlation between the target point and $n$ different candidates. 
It is too difficult for the network to compute a universal correlation feature that predicts the $n$ optimal similarity scores of $n$ different candidates simultaneously. \n\nBased on the above analysis, we design a new Multistage Full Matching scheme (MFM) in this work by simply decomposing the full matching task into multiple stages. Each stage estimates a different subset of $\\frac{1}{n}D_{max}$ similarity scores. With this decomposition, we not only learn all similarity scores directly from the low-resolution 4D volume, but also keep each similarity score learned from its own correlation feature. \n\nIt is noteworthy that we share a similar insight with the existing multistage matching methods\\cite{DBLP:conf\/cvpr\/TonioniTPMS19,DBLP:conf\/iccv\/DuggalWMHU19,DBLP:conf\/cvpr\/YangMAL20,DBLP:conf\/cvpr\/YinDY19} in decomposing the matching task into multiple stages. Such methods first obtain a coarse disparity and then perform a residual disparity search in the neighborhood of the current disparity by constructing a partial cost volume, so the later stages strongly depend on the previous ones. In contrast, the previous stage only provides a reference for the later stage in MFM. \n\nThe multiple tasks in the proposed MFM are equally important. However, the serial multistage framework, which is designed for more sufficient cost aggregation, results in unbalanced predictions across the stages. Aiming at this problem, we propose the \\emph{Stages Mutual Aid} strategy. Specifically, we take advantage of the close similarity distributions predicted at the stages, and merge the outputs of the other $(n-1)$ stages to obtain a voting result on the current similarity distribution for reference in the current stage. In this way, not only can the shallower network exploit the output of the deeper network, but the voting result also provides a correction message for the current prediction. 
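As a toy illustration of the proposed decomposition (the helper below is hypothetical, not from the paper's implementation): with the $D_{max}$ candidates grouped into $k=D_{max}/n$ cells of $n$ points each, stage $s$ is responsible for the $s$-th point of every cell, i.e. the interleaved disparity indices $m\times n+s$:

```python
def stage_candidates(d_max, n, s):
    """Disparity indices c_m^s = m*n + s handled by stage s when the
    d_max candidates are divided into k = d_max // n cells of n points."""
    k = d_max // n
    return [m * n + s for m in range(k)]

# e.g. d_max = 12, n = 3:
#   stage 0 covers [0, 3, 6, 9], stage 1 covers [1, 4, 7, 10], ...
#   and the n stages together cover the full matching range.
```

Each stage thus learns one similarity score per correlation feature, and the union of the $n$ stage outputs is the full $D_{max}$-candidate distribution.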
\n\nThe contributions of our work are summarized as follows:\n\\begin{itemize} \n\\item A Multistage Full Matching disparity estimation scheme (MFM) is proposed to decompose the full matching learning task into multiple stages and decouple all $D_{max}$ similarity scores from the low-resolution 4D volume step by step, which improves the stereo matching precision accordingly. \n\\item A \\emph{Stages Mutual Aid} strategy is proposed to solve the imbalance among the predictions of the stages in the serial multistage framework.\n\\end{itemize}\nWe evaluate the proposed method on three challenging datasets, \\emph{i.e.}, SceneFlow\\cite{DBLP:conf\/cvpr\/MayerIHFCDB16}, KITTI 2012\\cite{DBLP:conf\/cvpr\/GeigerLU12} and KITTI 2015 datasets\\cite{DBLP:conf\/cvpr\/MenzeG15}. The results demonstrate that our MFM scheme achieves state-of-the-art performance.\n\n\\section{Related Work}\nThis section reviews recent end-to-end supervised deep learning stereo matching methods.\n\\subsection{2D CNN Regression Module Based Methods}\nDispNetC\\cite{DBLP:conf\/cvpr\/MayerIHFCDB16} is the first end-to-end trainable disparity estimation network. It forms a low-resolution 3D cost volume by calculating the cosine distance of each unary feature with its corresponding unary from the opposite stereo image across each disparity level. Then the 3D cost volume is input to a 2D CNN together with the left features for disparity regression. Following DispNetC, CRL\\cite{DBLP:conf\/iccvw\/PangSRYY17} and iRes-Net\\cite{DBLP:conf\/cvpr\/LiangFGLCQZZ18} introduce stacked refinement sub-networks to further improve the performance. SegStereo\\cite{DBLP:conf\/eccv\/YangZSDJ18} and EdgeStereo\\cite{DBLP:journals\/corr\/abs-1903-01700} both design multi-task frameworks for the disparity regression task. The former introduces semantic information in the refinement stage and the latter applies edge information to guide disparity optimization. 
\n\nThe 2D CNN regression networks fail to make good use of the geometric principles of stereo matching to regress accurate disparity. More recent works therefore focus on directly optimizing and computing the 3D cost volume with cost aggregation networks.\n\n\\subsection{Cost Aggregation Network Based Methods}\nCost aggregation network based methods study how to optimize the low-resolution 4D volume to obtain more accurate similarity scores in the low-resolution 3D cost volume and output a better disparity map accordingly. Yu \\emph{et al.}\\cite{DBLP:conf\/aaai\/YuWWJ18} propose an explicit cost aggregation sub-network to provide better contextual information. PSMNet\\cite{DBLP:conf\/cvpr\/ChangC18} introduces a pyramid pooling module for incorporating global context information into image features, and stacked 3D CNN hourglasses to extend the regional support of context information in the cost volume. In order to make full use of the features, GwcNet\\cite{DBLP:conf\/cvpr\/GuoYYWL19} builds the cost volume by concatenating cost volumes constructed in different ways. GANet\\cite{DBLP:conf\/cvpr\/ZhangPYT19} proposes two new neural net layers to capture the local and the whole-image cost dependencies and to replace the 3D convolutional layer. AANet\\cite{DBLP:journals\/corr\/abs-2004-09548} proposes a sparse-points-based intra-scale cost aggregation method to achieve fast inference speed while maintaining comparable accuracy. \n\nThese methods only output a low-resolution 3D cost volume containing the similarity scores of part of the candidates from the iterative cost aggregation network. However, the low-resolution 3D cost volume is inadequate for calculating the high-resolution disparity without correlation features. Although the high-resolution 3D cost volume in GC-Net\\cite{DBLP:conf\/iccv\/KendallMDH17} is obtained by \\emph{Transposed convolution}, additional calculations are introduced because of the way \\emph{Transposed convolution} is implemented. 
In addition, it is an internally competitive task to calculate $D_{max}$ similarity scores for the high-resolution 3D cost volume from only one 4D volume with $\\frac{1}{n}D_{max}$ correlation features. The proposed MFM in this work decomposes the full matching task into multiple cost aggregation stages and decouples the high-resolution 3D cost volume directly from the correlation features, which solves the aforementioned problems.\n\n\\subsection{Multistage Matching Methods}\nMultistage matching methods\\cite{DBLP:conf\/cvpr\/TonioniTPMS19,DBLP:conf\/cvpr\/YinDY19} first obtain a coarse disparity and then perform a residual disparity search in the neighborhood of the current disparity by constructing a partial cost volume. DeepPruner\\cite{DBLP:conf\/iccv\/DuggalWMHU19} develops a differentiable PatchMatch module that allows discarding most disparities without requiring a full cost volume evaluation in the second stage. CVP-MVSNet\\cite{DBLP:conf\/cvpr\/YangMAL20} proposes a cost volume pyramid based multi-view stereo network for depth inference. \n\nThese methods improve the stereo matching speed and avoid the process of obtaining the high-resolution disparity from a low-resolution 3D cost volume. However, if the cues obtained in the first stage are wrong, the subsequent fine matching will also be wrong. In contrast, the previous stage in our multistage method only provides guidance for the latter stage and does not limit its estimation range, which guarantees the freedom of estimation in the latter stage.\n\n\\section{Multistage Full Matching Network}\n\n\\begin{figure*}[t]\n\t\\centering\n\t\\includegraphics[width=0.8\\textwidth]{architecture}\n\t\\caption{The architecture of the Multistage Full Matching network. LR denotes low-resolution and HR represents high-resolution. The multistage estimation module is composed of $n$ stages. Here, $n=3$ is taken as an example. 
Each stage estimates the similarity scores of different candidate points ($\\frac{1}{n}D_{max}$ points) in the matching range ($D_{max}$ points). The high-resolution similarity distribution ($D_{max}\\times H\\times W$) is obtained by combining the low-resolution similarity distributions output from each stage.}\n\t\\label{fig1}\n\\end{figure*}\n\nAs shown in Figure 1, the proposed framework has the following main structure: \n\nFirst, the MFM network extracts low-resolution feature maps of the left and right images with shared 2D convolutions. For the feature extraction network, we adopt a ResNet-like network. We control the scale of the output features by controlling the stride of each convolution layer. The scale of the features is $\\frac{1}{n}$ of the size of the input image. \n\nSecond, the low-resolution 4D volume is calculated by \\emph{Group-wise correlation}\\cite{DBLP:conf\/cvpr\/GuoYYWL19}. The left features and the right features are divided into groups along the channel dimension, and correlation maps are computed within each group to obtain multiple cost volumes, which are then packed into one cost volume called \\emph{g-cost}. Another cost volume called \\emph{cat-cost} is obtained by concatenating the features of each target point and the candidate points in the matching range. The final cost volume is obtained by concatenating the corresponding correlation features of \\emph{g-cost} and \\emph{cat-cost}.\n\nThen, the low-resolution 4D volume from the second step is fed to the multistage 3D cost volume estimation module, which performs cost aggregation and outputs the high-resolution 3D cost volume. 
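A minimal NumPy sketch of the group-wise correlation used in the second step, for a single disparity level (our own simplified rendering of the operation introduced in GwcNet; the function name and shapes are illustrative assumptions):

```python
import numpy as np

def groupwise_correlation(feat_l, feat_r, num_groups):
    """Group-wise correlation at one disparity level: split the C channels
    into num_groups groups and average the per-group inner products,
    yielding one correlation feature of dimension num_groups per pixel.
    feat_l, feat_r: arrays of shape (C, H, W) -> (num_groups, H, W)."""
    c, h, w = feat_l.shape
    assert c % num_groups == 0, "channels must divide evenly into groups"
    cpg = c // num_groups  # channels per group
    fl = feat_l.reshape(num_groups, cpg, h, w)
    fr = feat_r.reshape(num_groups, cpg, h, w)
    return (fl * fr).mean(axis=1)
```

Repeating this across the $\frac{1}{n}D_{max}$ disparity levels and stacking the results produces the \emph{g-cost} part of the low-resolution 4D volume.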
\n\nAt last, the final disparity is obtained by applying the modified parabolic fitting\\cite{Raffel1998Particle,DBLP:journals\/scjapan\/ShimizuO02,nishiguti} sub-pixel estimation method to the high-resolution 3D cost volume.\n\n\\subsection{Multistage 3D Cost Volume Estimation Module} \nThe proposed method divides the matching range into $k$ cells, and each cell contains $n$ points, where $k=D_{max}\/n$. $D_{max}$ is the length of the matching range. $n$ is the same as the downsampling ratio. In this way, the candidate set can be represented as $\\{c_{0}^{0}, c_{0}^{1},c_{0}^{2}, \\cdots, c_{0}^{n-1}, c_{1}^{0}, c_{1}^{1}, \\cdots, c_{m}^s, \\cdots, c_{k-1}^{n-1}\\}$. Each stage of the multistage full matching module learns the similarity scores of specific $k$ candidates, \\emph{i.e.} $\\{c_{0}^{s}, c_{1}^{s}, \\cdots, c_{k-1}^{s}\\}$ ($s$ is the stage number of the current stage), from $k$ correlation features of the low-resolution 4D volume. The candidates learned in the $s$-th stage are adjacent to those of the $(s-1)$-th and $(s+1)$-th stages. $D_{max}$ high-resolution similarity scores are obtained after $n$ stages.\n\nIn order to obtain the $D_{max}$ high-resolution similarity scores, we decouple $D_{max}$ similarity features from different low-resolution 4D volumes. Let $f(x)$-$C$ be the coordinate system set up with $f(x)$ as the origin in the $C$-dimensional feature space, where $f(x)$ is the feature corresponding to the point $I_r(x)$ of the reference image. The similarity features between $I_r(x)$ and its candidates, \\emph{i.e.} $\\{fc(x+0), fc(x+1), \\cdots, fc(x+d), \\cdots, fc(x+D_{max}-1)\\}$, are a series of points in $f(x)$-$C$. \n\nIn the camera coordinate system, the position difference between $c_m^s(x)$ and $c_m^s(x+\\Delta x)$ can be represented as $\\Delta x$. Similarly, in $f(x)$-$C$, the position difference between $fc(x+d)$ and $fc(x+d+\\Delta x)$ can be represented as $\\Delta fc$, where $\\Delta fc \\in \\mathbb{R}^C$. 
Therefore, when the position of $fc(x+d)$ is known, the position of $fc(x+d+\\Delta x)$ can be obtained by:\n\\begin{equation}\nfc(x+d+\\Delta x) = fc(x+d) + \\Delta fc\n\\end{equation}\nwhere $\\Delta x$ represents the position difference between $c_m^s(x)$ and $c_m^s(x+\\Delta x)$.\n\n$fc(x+d)$ in the high-dimensional feature space not only contains the information of $c_m^s(x)$, but also includes the features of the points surrounding it. Consequently, $\\Delta fc$ from $fc(x+d)$ to $fc(x+d+\\Delta x)$ can be decoupled from $fc(x+d)$ when $\\Delta x$ is small. The success of our experiments verifies the correctness of this inference.\n\nBased on the above analysis, we design the following framework:\n\\paragraph{Step 1} We first initialize a similarity feature $F^{ori}=\\{fc^{ori}_m | m \\in N, m < k\\}$ for all stages in the space. After that, we estimate $\\Delta F^{s}=\\{\\Delta fc^{s}_m | m \\in N, m < k\\}$ from $F^{ori}$ to $F^{s}=\\{fc^{s}_m | m \\in N, m < k\\}$ for each stage in the second step, where $s$ is the stage number.\n\nDue to the nature of CNNs, each point in the low-scale features contains the information of all pixels in the patch of the original-resolution images where it is located. Such features will first be associated across $\\frac{1}{n}D_{max}$ disparity levels as follows:\n\\begin{equation}\nF_{raw}(f_{r},f_{so}^{d}) = Corr(f_{r},f_{so}^{d})\n\\end{equation}\nwhere $Corr(\\cdot, \\cdot)$ represents \\emph{Group-wise correlation}. $f_{r}$ is the feature of a target point on the reference feature map, and $f_{so}^{d}$ is the feature of the $d$-th candidate point on the source feature map. $F_{raw}(f_{r},f_{so}^{d})$ is the raw correlation feature of $f_{r}$ and $f_{so}^{d}$.\n\nTherefore, all $D_{max}$ correlation features between target points and candidates are included in the raw 4D volume composed of $F_{raw}(f_{r},f_{so}^{d})$. Then, $F_{raw}(f_{r},f_{so}^{d})$ in this raw 4D volume is converted to $F^{ori}$ by two 3D convolutions. 
In this way, $F^{ori}$ can provide an excellent initial value for estimating a better $\\Delta F^{s}$.\n\n\\paragraph{Step 2} The multistage similarity feature estimation process is divided into $n$ stages. Except for the first stage, which takes $F^{ori}$ as input, each stage takes $F^{s-1}$ as the input to get the shift $\\Delta F^{s}$ between $F^{s}$ and $F^{ori}$.\n\\begin{equation}\n \\Delta F^{s} = De(F^{s-1} )\n\\end{equation}\nwhere $De(\\cdot)$ is the decoupling function, which is implemented by an hourglass network composed of 3D CNNs. The structure of $De(\\cdot)$ is the same as the basic block in most cost aggregation networks\\cite{DBLP:conf\/cvpr\/ChangC18,DBLP:conf\/cvpr\/GuoYYWL19}. Then, the similarity features of the candidates $\\{c_{0}^{s}, c_{1}^{s}, \\cdots, c_{k-1}^{s}\\}$ in the $s$-th stage can be obtained by\n\\begin{equation}\nF^{s} =F^{ori} + \\Delta F^{s}\n\\end{equation}\n\n\\paragraph{Step 3} Another role of $De(\\cdot)$ is to update each similarity feature by referring to the information in a larger receptive field. Therefore, a serial network composed of the $n$ $De(\\cdot)$ blocks is necessary to obtain more sufficient aggregation. However, a serial network will lead to an imbalance among the predictions of the stages, since each $De(\\cdot)$ is responsible for a different task. In this step, as shown in Fig. 1, a \\emph{Stages Mutual Aid} operation is conducted on $F^{s}$.\n \n We design formula (5) to construct similarity scores for supervising each stage:\n\\begin{equation}\nS\\left(m,i\\right)=e^{-(m \\times n+i-d_{gt})^2}\n\\end{equation}\nwhere $m$ stands for the $m$-th cell, and $m \\in \\{0,1,2,\\ldots, k-1\\}$. $n$ is the length of each cell. $i$ is the order of the candidate point in each cell, and it also represents the stage order. $d_{gt}$ is the ground-truth disparity. $S(m,i)$ represents the similarity score that the $i$-th point in the $m$-th cell should have in the $i$-th stage. 
The similarity distribution ground-truth of different stages can be obtained by changing the value of $i$. If the similarity peak falls in the $i$-th position of the $m$-th bin, the similarity peak of each stage will fall in the $m$-th bin or a bin adjacent to $m$. Therefore, the similarity distributions output by the stages are close to each other, with a peak position difference of 1 or 0. Accordingly, a voting result can be obtained from the similarity features output by the other $(n-1)$ stages for the estimation of the $s$-th stage.\n\nFirst, we obtain the voting result $V$ for the $s$-th stage by \n\\begin{equation}\nV= \\sum_{i=0, i\\ne s}^{n-1}(F^i)\n\\end{equation}\nThen, $V$ is optimized by one 3D convolution layer, obtaining $V^s$.\nBecause it is hard to manually decide the proportions of $V^s$ and $F^s$ when fusing the two features, we directly let the network learn how to merge them. The two features are concatenated along the $d$ dimension and fed to the distance network $D(\\cdot)$ to calculate the similarity scores. The $D(\\cdot)$ network consists of two 3D convolution layers.\n\\begin{equation}\nP(I_r(x), c^s(x)) = D(Concat(V^s, F^s)), \n\\end{equation}\nwhere $P(I_r(x), c^s(x))$ is the 3D cost volume output from the $s$-th stage, which represents the predicted similarity scores between $I_r(x)$ and $c^s(x)$ of the $s$-th stage. $Concat(\\cdot)$ is the concatenation operation. \n\n \\paragraph{Step 4}\n Dimensions $H$ and $W$ of the $n$ 3D cost volumes are restored to the original scale by linear interpolation. The full-resolution cost volume is obtained by combining the $n$ low-resolution 3D cost volumes along the similarity dimension. 
Finally, the high-resolution cost volume is normalized by a \\emph{softmax} operation along the similarity dimension and rearranged in the similarity dimension to obtain the final high-resolution cost volume.\n\n\\subsection{Supervision Strategy and Loss Function}\n\nWe use formula (5) to construct the similarity scores for supervising each stage. By using formula (5), our supervision strategy can guarantee the relationship between the output results of the multiple stages. \n\nThe full loss of the proposed MFM network can be represented as follows:\n\\begin{equation}\nLoss= L_{stage}+L_{1}\n\\end{equation}\n\nThe $L_{1}$ loss is utilized to optimize the final disparity calculated from the high-resolution similarity distribution. Denoting the predicted disparity as $d$ and the disparity ground-truth as $d_{gt}$, the $L_{1}$ loss can be represented as follows:\n\\begin{equation}\nL_{1} = \\sum|d-d_{gt}|\n\\end{equation}\n\n$L_{stage}$ is designed to guide each stage to learn its specific $k$ similarity scores from $k$ correlation features of the low-resolution 4D volume. We need to align the predicted similarity distributions of the $n$ stages with the aforementioned supervision similarity distributions of the $n$ stages, respectively; therefore, the cross-entropy error function is selected for supervision. The $L_{stage}$ loss is designed as follows:\n\\begin{equation}\nL_{stage}=-\\sum_{i=0}^{n-1}\\sum_{m=0}^{k-1}S(m,i)\\cdot \\log P(m,i)\n\\end{equation}\nwhere $P(m,i)$ represents the similarity score between the target point and the $i$-th candidate point in the $m$-th cell output from the $i$-th stage, and $k=D_{max}\/n$. \n\n\\begin{figure*}[htb]\n \\centering\n \\includegraphics[scale=0.43]{kitti12.pdf}\n \\caption{Error map visualization of AcfNet\\cite{DBLP:conf\/aaai\/0005C0YYLY20}, AANet+\\cite{DBLP:journals\/corr\/abs-2004-09548} and our method on KITTI 2012. 
Darker represents lower error.}\n \\end{figure*}\n\n\\begin{figure*}[htb]\n \\centering\n \\includegraphics[scale=0.43]{kitti15.pdf}\n \\caption{ Error map visualization of AcfNet\\cite{DBLP:conf\/aaai\/0005C0YYLY20}, AANet+\\cite{DBLP:journals\/corr\/abs-2004-09548} and our method on KITTI 2015. Darker blue represents lower error.}\n\\end{figure*}\n\\section{Experiments}\n\\subsection{Implementation Details}\n\\paragraph{Datasets.} The Scene Flow datasets\\cite{DBLP:conf\/cvpr\/MayerIHFCDB16} provide 35,454 training and 4,370 testing images of size $960\\times540$ with accurate ground-truth. We use the Finalpass of the Scene Flow datasets, since it contains more motion blur and defocus and is more realistic than the Cleanpass. KITTI 2012\\cite{DBLP:conf\/cvpr\/GeigerLU12} and KITTI 2015\\cite{DBLP:conf\/cvpr\/MenzeG15} are driving scene datasets. KITTI 2012 contains 194 training image pairs with sparse ground truth and 195 testing image pairs with ground-truth disparities held by the evaluation server for submission evaluation only. KITTI 2015 contains 200 training stereo image pairs with sparse ground-truth and 200 testing image pairs with ground-truth disparities held by the evaluation server for submission evaluation only. \n\n\\paragraph{Evaluation Indicators.} For the Scene Flow datasets, the evaluation metrics are the end-point error (EPE), which is the mean average disparity error in pixels, and the error rates $>1px$ and $>3px$, which are the percentages of pixels whose errors are greater than 1 pixel and 3 pixels, respectively. For KITTI 2015, we use the percentage of disparity outliers D1 as the evaluation indicator. The outliers are defined as the pixels whose disparity errors are larger than $max(3px, 0.05\\cdot d_{gt})$, where $d_{gt}$ denotes the ground-truth disparity. For KITTI 2012, we use the error rates $ >2px, >3px, >4px$ and $>5px$ as evaluation indicators.\n\n\\paragraph{Training.} Our network is implemented with PyTorch. 
We use the Adam optimizer, with $\\beta_{1} = 0.9, \\beta_{2} = 0.999$. The batch size is fixed to 8. For the Scene Flow datasets, we train the network for 16 epochs in total. The initial learning rate is set to 0.001 and is halved after epochs 10, 12, and 14. We set $D_{max}$ to 192 and $n$ to 4. For KITTI 2015 and KITTI 2012, we fine-tune the network pre-trained on the Scene Flow datasets for another 300 epochs. The learning rate is 0.001 and is divided by 10 after 210 epochs.\n\n\\subsection{Performance Comparison}\n\n\\paragraph{Quantitative Comparison.}\n\\begin{table}[t]\n\t\\centering\n\t\\caption{Performance comparison on Scene Flow datasets.}\n\t\\begin{tabular}{lccc}\n\t\t\\hline\n\t\tModel & EPE & \\textgreater{}1px & \\textgreater{}3px \\\\ \n\t\t\\hline\n\t\tiResNet-i3\\shortcite{DBLP:conf\/cvpr\/LiangFGLCQZZ18}&2.45&9.28\\%&4.57\\% \\\\\n\t\tCRL\\shortcite{DBLP:conf\/iccvw\/PangSRYY17}&1.32&-&6.20\\% \\\\\n\t\t\\hline\n\t\tStereoNet\\shortcite{DBLP:conf\/eccv\/KhamisFRKVI18}&1.10&21.33\\%&8.80\\% \\\\\n\t\tPSMNet\\shortcite{DBLP:conf\/cvpr\/ChangC18}&1.03& 10.32\\%&4.12\\% \\\\\n\t\tGANet\\shortcite{DBLP:conf\/cvpr\/ZhangPYT19}&0.81& 9.00\\%&3.49\\% \\\\\n\t\tGwcNet\\shortcite{DBLP:conf\/cvpr\/GuoYYWL19}&0.77&8.03\\% &3.30\\% \\\\\n\t\tAANet\\shortcite{DBLP:journals\/corr\/abs-2004-09548}&0.87&9.30\\%&- \\\\\n\t\tAcfNet\\shortcite{DBLP:conf\/aaai\/0005C0YYLY20}&0.87&-&4.31\\% \\\\ \n\t\t\\hline\n\t\tDeepPruner-Best\\shortcite{DBLP:conf\/iccv\/DuggalWMHU19}&0.86&-&- \\\\\n\t\t\\hline\n\t\tOurs&\\textbf{0.66}&\\textbf{4.95\\%}&\\textbf{2.50\\%} \\\\ \n\t\t\\hline\n\t\\end{tabular}\n \\label{table1}\n\\end{table}\n\n\n\\begin{table}[t]\n\t\\centering\n\t\\caption{Performance comparison on KITTI 2015 datasets.}\n\t\\begin{tabular}{llll}\n\t\t\\hline \n\t\tModel&D1-bg&D1-fg&D1-all \\\\ \n\t\t\\hline\n\t\tCRL\\shortcite{DBLP:conf\/iccvw\/PangSRYY17}&2.48\\%&3.59\\%&2.67\\% 
\\\\\n\t\tEdgeStereo-v2\\shortcite{DBLP:journals\/corr\/abs-1903-01700}&1.84\\%&3.30\\%&2.08\\% \\\\\n\t\tSegStereo\\shortcite{DBLP:conf\/eccv\/YangZSDJ18}&1.88\\%&4.07\\%&2.25\\% \\\\\n\t\t\\hline\n\t\tGCNet\\shortcite{DBLP:conf\/iccv\/KendallMDH17}&2.21\\%&6.16\\%&2.87\\% \\\\\n\t\tGwcNet-g\\shortcite{DBLP:conf\/cvpr\/GuoYYWL19}&1.74\\%&3.93\\%&2.11\\% \\\\\n\t\tAcfNet\\shortcite{DBLP:conf\/aaai\/0005C0YYLY20}&1.51\\%&3.80\\%&1.89\\% \\\\\n\t\tBi3D\\shortcite{DBLP:conf\/cvpr\/BadkiTKKSG20}&1.95\\%&\\textbf{3.48}\\%&2.21\\% \\\\\n\t\tAANet+\\shortcite{DBLP:journals\/corr\/abs-2004-09548}&1.65\\%&3.96\\%&2.03\\% \\\\\n\t\t\\hline\n\t\tHD$^3$\\shortcite{DBLP:conf\/cvpr\/YinDY19}&1.70\\%&3.63\\%&2.02\\% \\\\\n\t\tCSN\\shortcite{DBLP:conf\/cvpr\/GuFZDTT20}&1.59\\%&4.03\\%&2.00\\% \\\\\n\t\t\\hline\n\t\tOurs&\\textbf{1.51\\%}&3.67\\%&\\textbf{1.87}\\% \\\\\n\t\t\\hline\n\t\\end{tabular}\n \\label{table2}\n\\end{table}\n\\begin{table*}[t]\n\t\\centering\n\t\\caption{Performance comparison on KITTI 2012 datasets.}\n\t\\begin{tabular}{l|cc|cc|cc|cc}\n\t\t\\hline\n\t\t\\multirow{2}{*}{Model} & \\multicolumn{2}{c|}{ \\textgreater{}2px}&\\multicolumn{2}{c|}{ \\textgreater{}3px}&\\multicolumn{2}{c|}{ \\textgreater{}4px}& \\multicolumn{2}{c}{ \\textgreater{}5px} \\\\\n\t\t\\cline{2-9} \n\t\t&Noc&All&Noc&All&Noc&All&Noc&All 
\\\\\n\t\t\\hline\n\t\tEdgeStereo-v2\\shortcite{DBLP:journals\/corr\/abs-1903-01700}&2.32\\%&2.88\\%&1.46\\%&1.83\\%&1.07\\%&1.34\\%&0.83\\%&1.04\\%\\\\\n\t\tSegStereo\\shortcite{DBLP:conf\/eccv\/YangZSDJ18}&2.66\\%&3.19\\%&1.68\\%&2.03\\%&1.25\\%&1.52\\%&1.00\\%&1.21\\%\\\\\n\t\t\\hline\n\t\tGwcNet-gc\\shortcite{DBLP:conf\/cvpr\/GuoYYWL19}&2.16\\%&2.71\\%&1.32\\%&1.70\\%&0.99\\%&1.27\\%&0.80\\%&1.03\\%\\\\\n\t\tGANet-deep\\shortcite{DBLP:conf\/cvpr\/ZhangPYT19}&1.89\\%&2.50\\%&1.19\\%&1.60\\%&0.91\\%&1.23\\%&0.76\\%&1.02\\%\\\\\n\t\tAMNet\\shortcite{DBLP:journals\/corr\/abs-1904-09099}&2.12\\%&2.71\\%&1.32\\%&1.73\\%&0.99\\%&1.31\\%&0.80\\%&1.06\\%\\\\\n\t\tAcfNet\\shortcite{DBLP:conf\/aaai\/0005C0YYLY20}&1.83\\%&2.35\\%&1.17\\%&1.54\\%&0.92\\%&1.21\\%&0.77\\%&1.01\\%\\\\\n\t\tAANet+\\shortcite{DBLP:journals\/corr\/abs-2004-09548}&2.30\\%&2.96\\%&1.55\\%&2.04\\%&1.20\\%&1.58\\%&0.98\\%&1.30\\%\\\\\n\t\t\\hline\n\t HD$^3$\\shortcite{DBLP:conf\/cvpr\/YinDY19}&2.00\\%&2.56\\%&1.40\\%&1.80\\%&1.12\\%&1.43\\%&0.94\\%&1.19\\%\\\\\n\t\t\\hline\n\t\tOurs&\\textbf{1.68\\%}&\\textbf{2.16\\%}&\\textbf{1.15\\%}&\\textbf{1.47\\%}&\\textbf{0.91\\%}&\\textbf{1.16\\%}&\\textbf{0.76\\%}&\\textbf{0.97\\%}\\\\\n\t\t\\hline \n\t\\end{tabular}\n \\label{table3}\n\\end{table*}\n\nThe quantitative comparison focuses on deep learning stereo matching methods. Table 1, Table 2 and Table 3 show the performance of different methods on the Scene Flow datasets, the KITTI 2015 dataset, and the KITTI 2012 dataset, respectively. In each table from top to bottom, the methods are separated into four groups: (1) 2D CNN regression based methods, (2) cost aggregation network based methods, (3) multistage matching methods, and (4) our MFM method.\n\nThe (1) methods refine the low-resolution coarse disparity, which contains much error and noise, without referring to the correlation features. Thus, such methods have lower precision. 
The (2) methods obtain a high-quality 3D cost volume iteratively from the 4D volume, which provides satisfactory geometric context. Therefore, compared with the (1) methods, the EPE value of the (2) methods is significantly reduced to below 1.0. However, the $>1px$ error rate is still high, because the multistage cost aggregation module only outputs a low-resolution 3D cost volume, and the high-resolution disparity is obtained from the low-resolution 3D cost volume without the geometric correlation features. The (3) methods match within the narrow matching range obtained from the previous stage rather than obtaining all similarity scores directly. A miscalculated narrow matching range obtained in the previous stage results in misestimation in the latter stage, which causes their low accuracy. As shown in Table 1, Table 2 and Table 3, the proposed MFM method performs better, which demonstrates the effectiveness of the multistage 3D cost volume estimation module.\n\\begin{figure*}[t]\n\t\\centering\n\t\\includegraphics[width=0.8\\textwidth]{scene}\n\t\\caption{Error map visualization of GwcNet (top row)\\cite{DBLP:conf\/cvpr\/GuoYYWL19} and our method (bottom row) on Scene Flow datasets. Darker blue represents lower error.}\n\t\\label{figure2}\n\\end{figure*}\n\n\n\n\\paragraph{Qualitative Comparison.}\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[width=0.4\\textwidth]{similarity}\n\t\\caption{Peak position estimation comparison with existing classic cost aggregation network based methods. }\n\t\\label{figure3}\n\\end{figure}\nFigure 4 visualizes the error maps of our method and GwcNet\\cite{DBLP:conf\/cvpr\/GuoYYWL19} on the Scene Flow datasets. 
As shown in Figure 4, the light blue regions take up less area in the error map of our MFM method, which illustrates that our method achieves better prediction accuracy by decomposing the matching task into multiple stages to estimate the high-resolution 3D cost volume directly.\n\nFigure 2 and Figure 3 visualize the error maps of our method and other methods on KITTI 2012 and KITTI 2015, respectively. The error regions take up less area in the error maps of our MFM method, which also demonstrates the effectiveness of the proposed MFM mechanism.\n\nThe prediction of the peak position in the similarity distribution determines the accuracy of the disparity obtained through 3D cost volume refinement, which is the first and the most important phase of the disparity prediction task. We count the proportion of the points whose peak position deviates from the ground truth by more than 1 pixel and 3 pixels, respectively. As shown in Figure 5, PSMNet, GANet, and GwcNet all have a larger number of error points than our MFM method, especially in the more-than-1-pixel case. Well begun is half done: this remarkable advantage of our method shown in Figure 5 largely determines our final state-of-the-art performance.\n\n\\subsection{Detailed Analysis of Proposed Method}\n\\paragraph{Ablation Study.}\n\\begin{table}[t]\n\t\\centering\n\t\\caption{The ablative prediction results of different variants of MFM on Scene Flow datasets. 
B: Baseline, de: decouple, ms: \\emph{Multistage Matching}, sma: \\emph{Stages Mutual Aid}.}\n\n\t\\begin{tabular}{ccccccl}\n\t\t\\hline\n\t\tModel&EPE& \\textgreater{}1px & \\textgreater{}3 px \\\\\n\t\t\\hline\n\t\tBaseline &0.77&8.03\\% &3.30\\%\\\\\n\t\t\\hline\n\t\tB+de&0.86&6.30\\%&3.10\\%\\\\\n\t\t\\hline\n\t\tB+de+ms&0.74&5.18\\%&2.66\\%\\\\\n\t\tB+de+ms+sma&\\textbf{0.66}&\\textbf{4.95\\%}& \\textbf{2.50\\%} \\\\\n\t\t\\hline\n\t\\end{tabular}\n \\label{table4}\n\\end{table}\nWe conduct ablation studies to understand the influence of different designs in our proposed method. We design different runs on the Scene Flow datasets and report the results in Table 4. First, GwcNet\\cite{DBLP:conf\/cvpr\/GuoYYWL19} is adopted as our baseline. The \"Baseline\" refines the correlation features multiple times for a low-resolution 3D cost volume, and the high-resolution 3D cost volume is obtained by linear interpolation. When we introduce the learning mechanism (Baseline+decouple) to learn all similarity scores from the last stage of the cost volume aggregation module, the $> 1px$ and $> 3px$ prediction errors on the Scene Flow datasets improve by $1.73\\%$ and $0.20\\%$, respectively. However, the EPE degrades slightly because of the competition of simultaneously learning multiple similarity scores from one correlation feature, which may influence the prediction results in a large pixel error range. Then, we take into account the multistage decomposition of the learning task (Baseline+decouple+\\emph{Multistage matching}), and achieve better results on all accuracy metrics. This demonstrates that decoupling all similarity scores step by step from different 4D volumes makes the task easier to learn. Finally, the \\emph{Stages Mutual Aid} operation is added to Baseline+decouple+\\emph{Multistage matching}. The performance is significantly improved, which demonstrates the simplicity and effectiveness of the \\emph{Stages Mutual Aid} module. 
Ablation experiments verify that the proposed MFM scheme indeed learns more accurate similarity scores by decomposing the task into multiple stages, thus effectively improving the prediction performance of the high-resolution disparity map. \n\n\\section{Conclusion}\nIn this paper, we propose the Multistage Full Matching framework, which directly estimates the high-resolution 3D cost volume from the low-resolution 4D volume by decomposing the matching task into multiple stages. First, a serial network is designed for sufficient cost aggregation and multistage high-resolution 3D cost volume estimation. Then, \\emph{Stages Mutual Aid} is proposed to solve the unbalanced predictions of the multiple stages caused by the serial network. Last but most important, our MFM scheme achieves state-of-the-art performance on three popular datasets.\n\n\n\n\n\\bibliographystyle{aaai21}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nCurrent advances in robotics and autonomous systems have expanded the use of robots in a wide range of robotic tasks, including assembly, advanced manufacturing, and human-robot or robot-robot collaboration. In order for robots to perform these tasks efficiently, they need the ability to adapt to the changing environment while interacting with their surroundings, and a key component of this interaction is the reliable grasping of arbitrary objects. Consequently, a recent trend in robotics research has focused on object detection and pose estimation for the purpose of dynamic robotic grasping.\n\nHowever, identifying objects and recovering their poses are particularly challenging tasks, as objects in the real world vary widely in shape and appearance. Moreover, cluttered scenes, occlusion between objects, and variance in lighting conditions make it even more difficult. Additionally, the system needs to be sufficiently fast to facilitate real-time robotic tasks. 
As a result, a generic solution that can address all these problems remains an open challenge.\n\nWhile classification~\\cite{resnet, vgg, inception,flexnet, overfeat, spatial}, detection~\\cite{fast-rcnn, faster-rcnn, ssd, yolo, yolo-9000, fldnet}, and segmentation~\\cite{segnet, maskrcnn, unet} of objects from images have taken a significant step forward thanks to deep learning, the same has not yet happened to 3D localization and pose estimation. One primary reason was the lack of labeled data in the past, as it is not practical to manually annotate such data; as a result, the recent research trend in the deep learning community for such applications has shifted towards synthetic datasets~\\cite{butler,mayer,qiu,zhang,mccormac}. Several pose estimation methods leveraging deep learning techniques~\\cite{posecnn,dope,brachmann,wang,hu} use these synthetic datasets for training and have shown satisfactory accuracy. \n\nAlthough synthetic data is a promising alternative, capable of generating large amounts of labeled data, it requires photorealistic 3D models of the objects to mirror the real-world scenario. Hence, generating synthetic data for each newly introduced object requires photorealistic 3D models and thus significant effort from skilled 3D artists. Furthermore, training and running deep learning models is not feasible without high computing resources. As a result, object detection and pose estimation in real-time with computationally moderate machines remain a challenging problem. 
To address these issues, we have devised a simpler pipeline that does not rely on high computing resources and focuses on planar objects, requiring only an RGB image and the depth information in order to perform real-time object detection and pose estimation.\n\nIn this work, we present a feature-detector-descriptor based method for detection and a homography based pose estimation technique where, by utilizing the depth information, we estimate the pose of an object in terms of a 2D planar representation in 3D space. The robot is pre-trained to perform a set of canonical grasps; a canonical grasp describes how a robotic end-effector should be placed relative to an object in a fixed pose so that it can securely grasp it. Afterward, the robot is able to detect objects and estimate their poses in real-time, and then adapt the pre-trained canonical grasp to the new pose of the object of interest. We demonstrate that the proposed method can detect a well-textured planar object and accurately estimate its pose within a tolerable amount of out-of-plane rotation. We also conducted experiments with the humanoid PR2 robot to show the applicability of the framework where the robot grasped objects by adapting to a range of different poses.\n\n\n\\section{Related Work}\nOur work consists of three modules: object detection, planar pose estimation, and adaptive grasping. In the following sub-sections, several fields of research that are closely related to our work are reviewed.\n\\subsection{Object Detection}\nObject detection has been one of the fundamental challenges in the field of computer vision and in that aspect, the introduction of feature detectors and descriptors represents a great achievement. Over the past decades, many detectors, descriptors, and their numerous variants have been presented in the literature. 
These methods have been widely applied to numerous other vision tasks such as panorama stitching, tracking, visual navigation, etc.\n\nOne of the first feature detectors was proposed by Harris et al.~\\cite{harris} (widely known as the Harris corner detector). Later Tomasi et al.~\\cite{tomasi} developed the KLT (Kanade-Lucas-Tomasi) tracker based on the Harris corner detector. Shi and Tomasi introduced a new detection metric GFTT~\\cite{shi} (Good Features To Track) and argued that it offered superior performance. Hall et al. introduced the concept of saliency~\\cite{hall} in terms of the change in scale and evaluated the Harris method proposed in~\\cite{lindeberg} and the Harris Laplacian corner detector~\\cite{mikolajczyk} where a Harris detector and a Laplacian function are combined.\n\nMotivated by the need for a scale-invariant feature detector, in 2004 Lowe~\\cite{SIFT} published one of the most influential papers in computer vision, SIFT (Scale Invariant Feature Transform). SIFT is both a feature point detector and descriptor. H. Bay et al.~\\cite{SURF} proposed SURF (Speeded Up Robust Features) in 2008. But both of these methods are computationally expensive, as the SIFT detector leverages the difference of Gaussians (DoG) at different scales while the SURF detector uses a Haar wavelet approximation of the determinant of the Hessian matrix to speed up the detection process. Many variants of SIFT~\\cite{sift_v_1, sift_v_2, sift_v_3, sift_v_4} and SURF~\\cite{surf_v_1, surf_v_2, surf_v_3} were proposed, either targeting a different problem or reporting improvements in matching; however, the execution time remained a persistent problem for several vision applications.\n\nTo improve execution time, several other detectors such as FAST~\\cite{fast} and AGAST~\\cite{agast} have been introduced. Calonder et al. 
developed the BRIEF~\\cite{brief} (Binary Robust Independent Elementary Features) descriptor of binary strings that has a fast execution time and is very useful for matching images. E. Rublee et al. presented ORB~\\cite{ORB} (Oriented FAST and Rotated BRIEF) which is a combination of modified FAST (Features from Accelerated Segment Test) for feature detection and BRIEF for description. S. Leutenegger et al. designed BRISK~\\cite{BRISK} (Binary Robust Invariant Scalable Keypoints) that detects corners using AGAST and filters them using FAST. On the other hand, FREAK (Fast Retina Keypoint), introduced by Alahi et al.~\\cite{FREAK}, generates retinal sampling patterns using a circular sampling grid and uses a binary descriptor formed by one-bit comparisons of differences of Gaussians (DoG). Alcantarilla et al. introduced KAZE~\\cite{KAZE} features that exploit non-linear scale-space using non-linear diffusion filtering and later extended it to AKAZE~\\cite{AKAZE} where they replaced non-linear diffusion filtering with a more computationally efficient method called FED (Fast Explicit Diffusion)~\\cite{fed1, fed2}.\n\nIn our work, we have selected four methods to investigate: SIFT, SURF, FAST+BRISK, and AKAZE. \n\n\\subsection{Planar Pose Estimation}\n\nAmong the many techniques in the literature on pose estimation, we focus our review on those related to planar pose estimation. In recent years, planar pose estimation has become increasingly popular in many fields, such as robotics and augmented reality. \n\nSimon et al.~\\cite{simon} proposed a pose estimation technique for planar structures using homography projection and by computing camera pose from consecutive images. Changhai et al.~\\cite{changhai} presented a method to robustly estimate 3D poses of planes by applying a weighted incremental normal estimation method that uses Bayesian inference. 
Donoser et al.~\\cite{donoser} utilized the properties of Maximally Stable Extremal Regions (MSERs~\\cite{LMSER}) to construct a perspectively invariant frame on the closed contour to estimate the planar pose. In our approach, we applied perspective transformation to approximate a set of corresponding points on the test image for estimating the basis vectors of the object surface and used the depth information to estimate the 3D pose by computing the normal to the planar object.\n\n\\subsection{Adaptive Grasping}\n\nDesigning an adaptive grasping system is challenging due to the complex nature of the shapes of objects. In early times, analytical methods were used where the system would analyze the geometric structure of the object and would try to predict suitable grasping points. Sahbani et al.~\\cite{sahbani} provided an in-depth review of the existing analytical approaches for 3D object grasping. However, with the analytical approach it is difficult to compute forces, and it is not suitable for autonomous manipulation. Later, as the number of 3D models increased, numerous data-driven methods were introduced that would analyze grasps in the 3D model database and then transfer them to the target object. Bohg et al.~\\cite{bohg} reviewed data-driven grasping methods, dividing the approaches into three groups based on the familiarity of the object.\n\nKehoe et al. \\cite{Kehoe2013} used a candidate grasp from the candidate grasp set based on the feasibility score determined by the grasp planner. The grasps were not very accurate in situations where the objects had stable horizontal poses and were close to the width of the robot's gripper. Huebner et al. \\cite{Huebner2008} took a similar approach, performing grasp candidate simulation. They created a sequence of grasps by approximating the shape of the objects and then computed a random grasp evaluation for each object model. 
In both works, a grasp was chosen from a list of candidate grasps.\n\nThe recent advances in deep learning also made it possible to regress grasp configurations through deep convolutional networks. A number of deep learning-based methods were reviewed in~\\cite{caldera} where the authors also discussed how each element in deep learning-based methods enhances robotic grasp detection. \\cite{Yu2013} presented a system where deep neural networks were used to learn hierarchical features to detect and estimate the pose of an object, and then use the centers of the defined pose classes to grasp the objects. Kroemer et al.~\\cite{Kroemer2009} introduced an active learning approach where the robot observes a few good grasps by demonstration and learns a value function for these grasps using Gaussian process regression. Aleotti et al.~\\cite{Aleotti2011} proposed a grasping model that is capable of grasping objects by their parts which learns new tasks from human demonstration with automatic 3D shape segmentation for object recognition and semantic modeling. \\cite{Saxena2008} and \\cite{Montesano2012} used supervised learning to predict grasp locations from RGB images. In~\\cite{Nogueira2016}, as an alternative to a trial-and-error exploration strategy, the authors proposed a Bayesian optimization technique to address the robot grasp optimization problem of unknown objects. These methods emphasized developing and using learning models for obtaining accurate grasps. \n\nIn our work, we focus on pre-defining a suitable grasp relative to an object that can adapt to a new grasp based on the change of position and orientation of the object.\n\n\\section{Method}\nThe proposed method is divided into two parts. 
The first part outlines the process of simultaneous object detection and pose estimation of multiple objects and the second part describes the process of generating an adaptive grasp using the pre-trained canonical grasp and the object pose.\nThe following sections describe the architecture of the proposed framework (figure~\\ref{fig:sysarch}) in detail.\n\n\n\n\\subsection{Object Detection and Pose Estimation}\n\nWe present a planar pose estimation algorithm (algorithm \\ref{algorithm}) for adaptive grasping that consists of four phases: (i) feature extraction and matching, (ii) homography estimation and perspective transformation, (iii) estimation of directional vectors on the object surface, (iv) planar pose estimation using the depth data. In the following sections, we will focus on the detailed description of the aforementioned steps.\n\n\\subsubsection{Feature extraction and matching}\n\nOur object detection starts with extracting features from the images of the planar objects and then matching them with the features found in the images acquired from the camera. Image features are patterns in images based on which we can describe the image. A feature detection algorithm takes an image and returns the locations of these patterns: they can be edges, corners or interest points, blobs or regions of interest, ridges, etc. This feature information then needs to be transformed into a vector space using a feature descriptor, so that numerical operations can be performed on them. A feature descriptor encodes these patterns into a series of numerical values that can be used to match, compare, and differentiate one feature from another; for example, we can use these feature vectors to find the similarities in different images, which can lead us to detect objects in the image. In theory, this information would be invariant to image transformations. 
In our work, we have investigated SIFT~\\cite{SIFT}, SURF~\\cite{SURF}, AKAZE~\\cite{AKAZE}, and BRISK~\\cite{BRISK} descriptors. SIFT, SURF, and AKAZE are each both a feature detector and a descriptor, while BRISK uses the FAST~\\cite{fast} algorithm for feature detection. These descriptors were selected after carefully reviewing the comparisons done in the recent literature~\\cite{andersson2016comparison, karami2017image, tareen2018comparative}.\n\nOnce the features are extracted and transformed into vectors, we compare the features to determine the presence of an object in the scene. For non-binary feature descriptors (SIFT, SURF), we find matches using the Nearest Neighbor algorithm. However, finding the nearest neighbor matches within high dimensional data is computationally expensive, and with more objects introduced it can affect the process of updating the pose in real-time. To counter this issue to some extent, we used the FLANN~\\cite{muja_flann_2009} implementation of K-d Nearest Neighbor Search, which is an approximation of the K-Nearest Neighbor algorithm that is optimized for high dimensional features. For binary features (AKAZE, BRISK), we used the Hamming distance ratio method to find the matches. Finally, if we have at least ten matches, we presume the object is present in the scene. 
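The core of this matching step can be sketched in a few lines; the snippet below is a minimal brute-force illustration (with function names of our own) of nearest-neighbor matching with a distance-ratio test and the ten-match presence check, not the FLANN-accelerated implementation used in our system:

```python
import numpy as np

def match_ratio_test(desc_obj, desc_frame, ratio=0.75):
    """Brute-force nearest-neighbor matching with a distance-ratio test.

    desc_obj:   (N, D) descriptors from the reference object image.
    desc_frame: (M, D) descriptors from the current camera frame.
    Returns (obj_idx, frame_idx) pairs for unambiguous matches.
    """
    matches = []
    for i, d in enumerate(desc_obj):
        # Euclidean distance from this descriptor to every frame descriptor
        dists = np.linalg.norm(desc_frame - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]        # two nearest neighbors
        if dists[j1] < ratio * dists[j2]:     # reject ambiguous matches
            matches.append((i, j1))
    return matches

def object_present(matches, min_matches=10):
    # We presume the object is in the scene given at least ten matches.
    return len(matches) >= min_matches
```

For binary descriptors (AKAZE, BRISK) the Euclidean distance would be replaced by the Hamming distance over the bit strings.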
\n\n\\RestyleAlgo{boxruled}\n\\begin{algorithm}[ht]\n\\fontsize{8}{8}\\selectfont\n\\DontPrintSemicolon\n \n \\KwIn{Training images of planar objects, $\\mathcal{I}$}\n $Detector \\gets \\text{Define feature detector}$\\;\n $Descriptor \\gets \\text{Define feature descriptor}$\\;\n \\tcc{\\fontsize{8}{8}\\selectfont retrieve feature descriptor}\n \\tcc{\\fontsize{8}{8}\\selectfont for each image in $\\mathcal{I}$}\n \\For{i in $\\mathcal{I}$}{\n \\tcc{\\fontsize{7}{7}\\selectfont $\\mathcal{K}$ is set of detected keypoints for image i}\n \\fontsize{8}{8}\\selectfont $\\mathcal{K} \\gets \\texttt{DetectKeypoints($i, Detector$)}$\\;\n \\tcc{\\fontsize{7}{7}\\selectfont $\\mathcal{D}[i]$ is the corresponding descriptor set for image i }\n $\\mathcal{D}[i] \\gets \\texttt{GetDescriptors( $\\mathcal{K}, Descriptor$)}$\\;\n }\n\n \\While{$\\text{camera is on}$}\n { \\fontsize{8}{8}\\selectfont\n $f \\gets \\text{RGB image frame}$\\;\n $PC \\gets \\text{Point cloud data}$\\;\n \\tcc{\\fontsize{7}{7}\\selectfont $K_F$ is set of detected keypoints for image frame $f$}\n $K_F \\gets \\texttt{DetectKeypoints($f, Detector$)}$\\;\n \\tcc{\\fontsize{7}{7}\\selectfont $D_F$ is the corresponding descriptor set for rgb image $f$}\n $D_F \\gets \\texttt{GetDescriptors( $K_F, Descriptor$)}$\\;\n \\For{i in $\\mathcal{I}$}\n {\n $matches \\gets \\texttt{FindMatches( $\\mathcal{D}[i]$, $D_F$)}$\\;\n \\tcc{\\fontsize{7}{7}\\selectfont If there is at least 10 matches then we have the object (described in image $i$) in the scene}\n \\uIf{\\text{Total number of }$matches \\geq 10$}\n {\n \n \\tcc{\\fontsize{7}{7}\\selectfont extract matched keypoints pair $(kp_{i},kp_{f})$ from the corresponding descriptors matches.}\n $kp_{i}, kp_{f} \\gets \\texttt{ExtractKeypoints($matches$)}$\\;\n $\\mathbf{H} \\gets \\texttt{EstimateHomography($kp_{i}, kp_{f}$)}$\\;\n $p_c, p_x, p_y \\gets \\text{points on the planar object }\\newline \\text{~~~~~~~~~~~~~ obtained using equation 
(\\ref{eqn:axis})}$\\;\n $p_c^{'}, p_x^{'}, p_y^{'} \\gets \\text{corresponding projected points}\\newline \\text{~~~~~~~~~~~~~ of $p_c, p_x, p_y$ on image frame $f$}\\newline \\text{~~~~~~~~~~~~~ estimated using equations}\\newline \\text{~~~~~~~~~~~~~ (\\ref{eqn:homography}) and (\\ref{eqn:projection})}$\\;\n \\tcc{\\fontsize{7}{7}\\selectfont $\\vec{c}$ denotes the origin of the object frame with respect to the base\/world frame}\n $\\Vec{c}, \\Vec{x}, \\Vec{y} \\gets \\text{corresponding 3d locations }\\newline \\text{~~~~~~~~~ of $p_c^{'}, p_x^{'}, p_y^{'}$ from point cloud $PC$}$\\;\n \\tcc{\\fontsize{7}{7}\\selectfont shift $\\vec{x}, \\vec{y}$ to the origin of the base or the world frame}\n $\\vec{x} \\gets \\vec{x}-\\vec{c}$\\; \n $\\vec{y} \\gets \\vec{y}-\\vec{c}$\\;\n \\tcc{\\fontsize{7}{7}\\selectfont estimate the object frame in terms of three orthonormal vectors \n $\\hat{i}, \\hat{j}$, and $\\hat{k}$.}\n $\\hat{i}, \\hat{j}, \\hat{k} \\gets \\text{from equation (\\ref{eqn:unitv})}$\\;\n \\tcc{\\fontsize{7}{7}\\selectfont compute the rotation $\\phi_i,\\theta_i,\\psi_i$ of the object frame $\\hat{i}, \\hat{j}$, $\\hat{k}$ with respect to the base or the world frame $\\vec{X}, \\vec{Y}, \\vec{Z}$.}\n $\\phi_i,\\theta_i,\\psi_i \\gets \\text{from equation (\\ref{eqn:eulerangles})}$\\;\n \n \n \\tcc{\\fontsize{7}{7}\\selectfont finally, publish the position and orientation of the object.}\n \\texttt{publish$(\\vec{c},\\phi_i,\\theta_i,\\psi_i)$}\\;\n }\n }\n }\n \\caption{Planar Pose Estimation}\n \\label{algorithm}\n\\end{algorithm}\n\n\\subsubsection{Homography Estimation and Perspective Transformation}\nA homography is an invertible mapping of points and lines on the projective plane that describes a 2D planar projective transformation~(figure~\\ref{fig:homography}) that can be estimated from a given pair of images. In simple terms, a homography is a matrix that maps a set of points in one image to the corresponding set of points in another image. 
We can use a homography matrix $\\mathbf{H}$ to find the corresponding points using equation \\ref{eqn:homography} and~\\ref{eqn:projection}, which defines the relation of projected point $(x^{'}, y^{'})$ (figure \\ref{fig:homography}) on the rotated plane to the reference point $(x,y)$. \n\nA 2D point $(x,y)$ in an image can be represented as a 3D vector $(x, y, 1)$ which is called the homogeneous representation of a point that lies on the reference plane or image of the planar object. In equation (\\ref{eqn:homography}), $\\mathbf{H}$ represents the homography matrix and $[x~y~1]^{T}$ is the homogeneous representation of the reference point $(x,y)$ and we can use the values of $a,b,c$ to estimate the projected point $(x^{'},y^{'})$ in equation (\\ref{eqn:projection}). \n\n\\begin{align}\n \\left [ \\begin{matrix} a \\\\ b \\\\ c \\end{matrix} \\right ] = \\mathbf{H}\\begin{bmatrix} x\\\\ y\\\\ 1\\\\ \\end{bmatrix} = \\begin{bmatrix} h_{11}&h_{12}&h_{13}\\\\ h_{21}&h_{22}&h_{23}\\\\ h_{31}&h_{32}&h_{33}\\\\ \\end{bmatrix} \\begin{bmatrix} x\\\\ y\\\\ 1\\\\ \\end{bmatrix}\n\\label{eqn:homography} \n\\end{align}\n\n\\begin{equation}\n \\begin{aligned}\n\\left \\lbrace \\begin{aligned}\n x^{'} = \\frac{a}{c} \\\\\n y^{'} = \\frac{b}{c} \n\\end{aligned} \\right . \n\\end{aligned}\n\\label{eqn:projection}\n\\end{equation}\n\nWe estimate the homography using the matches found from the nearest neighbor search as input; often these matches can have completely false correspondences, meaning they don't correspond to the same real-world feature at all which can be a problem in estimating the homography. 
So, we chose RANSAC~\\cite{ransac} to robustly estimate the homography by considering only inlier matches, as it tries to estimate the underlying model parameters and detect outliers by generating candidate solutions through random sampling using a minimum number of observations.\n\nWhile the other techniques use as much data as possible to find the model parameters and then prune the outliers, RANSAC uses the smallest set of data points possible to estimate the model, thus making it faster and more efficient than the conventional solutions. \n\n\\begin{figure}[h]\n\\begin{center}\n\\graphicspath{ {.\/images\/} }\n\\includegraphics[height=6cm]{images\/homography.png}\n\\end{center}\n \\caption{Object in different orientation from the camera}\n\\label{fig:homography}\n\\end{figure}\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[width=0.85\\linewidth, height=0.50\\linewidth]{images\/sys_arch_2.png}\n\\end{center}\n \\caption{System architecture.}\n\\label{fig:sysarch}\n\n\\end{figure*}\n\n\\subsubsection{Finding directional vectors on the object}\n\nIn order to find the pose of a planar object, we need to find the three orthonormal vectors on the planar object that describe the object coordinate frame and consequently, the orientation of the object relative to the world coordinate system. We start by estimating the vectors on the planar object that form the basis of the plane, illustrated in~(\\ref{eqn:axis}). Then, we take the cross product of these two vectors to find the third directional vector which is the normal to the object surface. Let's denote the world coordinate system as $XYZ$, and the object coordinate system as $xyz$. 
We define the axes of the orientation in relation to a body as: \n\\\\\n\n\\qquad \\qquad \\qquad \\qquad $x \\to \\text{right}$\n\n\\qquad \\qquad \\qquad \\qquad $y \\to \\text{up}$ \n\n\\qquad \\qquad \\qquad \\qquad $z \\to \\text{towards the camera}$ \n\nFirst, we retrieve the locations of the three points $p_c, p_x, p_y$ on the planar object from the reference image using equation (\\ref{eqn:axis}) and then locate the corresponding points $p_{c}^{'}, p_{x}^{'}, p_{y}^{'}$ on the image acquired from the Microsoft Kinect sensor. We estimate the locations of these points using the homography matrix $\\mathbf{H}$ as shown in equation~\\ref{eqn:homography}, \\ref{eqn:projection}. Then we find the corresponding 3D locations of $p_{c}^{'}, p_{x}^{'}, p_{y}^{'}$ from the point cloud data also obtained from the Microsoft Kinect sensor. We denote them as vectors $\\vec{c}$,$\\vec{x}$, and $\\vec{y}$. Here, $\\vec{c}$ represents the translation vector from the object frame to the world frame and also the position of the object in the world frame. Next, we subtract $\\vec{c}$ from $\\vec{x}$, $\\vec{y}$ which essentially gives us two vectors $\\vec{x}$ and $\\vec{y}$ centered at the origin of the world frame. We take the cross product of these two vectors $\\vec{x}, \\vec{y}$ to find the third axis $\\vec{z}$. But, depending on the homography matrix the estimated axes $\\vec{x}$ and $\\vec{y}$ might not be exactly orthogonal, so we take the cross product of $\\vec{y}$ and $\\vec{z}$ to recalculate the vector $\\vec{x}$. Now that we have three orthogonal vectors, we compute the three unit vectors $\\hat{i}$, $\\hat{j}$, and $\\hat{k}$ along the $\\vec{x}$, $\\vec{y}$, and $\\vec{z}$ vectors respectively using equation~\\ref{eqn:unitv}. These three orthonormal vectors describe the object frame. 
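This frame construction can be sketched directly in NumPy; the snippet below is a minimal illustration of equations (\ref{eqn:unitv}) and (\ref{eqn:eulerangles}), assuming the three point-cloud locations $\vec{c}$, $\vec{x}$, $\vec{y}$ have already been retrieved (function names are our own):

```python
import numpy as np

def object_frame(c, x, y):
    """Object frame from the 3D point-cloud locations of p_c', p_x', p_y'."""
    x = np.asarray(x, float) - c          # shift basis vectors to the origin
    y = np.asarray(y, float) - c
    z = np.cross(x, y)                    # normal to the object plane
    j = y / np.linalg.norm(y)             # j-hat along y
    k = z / np.linalg.norm(z)             # k-hat along the plane normal
    i = np.cross(y, z)                    # re-orthogonalized x direction
    i /= np.linalg.norm(i)                # i-hat
    return i, j, k

def euler_angles(i, j, k):
    """Rotation (phi, theta, psi) of the object frame w.r.t. the world frame."""
    phi = np.arctan2(j[2], k[2])          # about X
    theta = np.arcsin(-i[2])              # about Y
    psi = np.arctan2(i[1], i[0])          # about Z
    return phi, theta, psi
```

Note that even when the estimated $\vec{x}$ and $\vec{y}$ are not exactly orthogonal, the two cross products guarantee an orthonormal, right-handed frame.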
These vectors were projected onto the image plane to give a visual confirmation of the methods applied; figure~\\ref{fig:posevizcam} shows the orthogonal axes projected onto the object plane.\n\n \n\n\\begin{equation}\n\\vcenter{\\hbox{\\begin{minipage}{5cm}\n\\centering\n\\includegraphics[width=4cm,height=4cm]{images\/box_axis.png}\n\\captionof{figure}{Axis on the reference plane}\n\\end{minipage}}}\n\\begin{aligned}\n\\left \\lbrace \\begin{aligned}\np_c &= (w\/2, h\/2)\n\\\\\np_x &= (w, h\/2)\n\\\\\np_y &= (w\/2, 0)\n\\end{aligned} \\right . \n\\end{aligned}\n\\label{eqn:axis}\n\\end{equation}\n\n\\begin{figure}[h] \n\\centering\n {\\includegraphics[width=78pt, height=78pt]{images\/pose1.png}} \n \\hspace{1px}\n {\\includegraphics[width=78pt, height=78pt]{images\/pose2.png}}\n \\hspace{1px}\n {\\includegraphics[width=78pt, height=78pt]{images\/pose3.png}}\n \\caption{Computed third directional axis projected onto image plane}\n \\label{fig:posevizcam}\n\\end{figure}\n\n\\begin{align}\n\\begin{split}\n\\hat{j} = \\frac{\\Vec{y}}{|\\Vec{y}|} = [j_X \\hspace{0.15cm} j_Y \\hspace{0.15cm} j_Z]\n\\\\\n\\hat{k} = \\frac{\\Vec{x} \\times \\Vec{y}}{|\\Vec{x} \\times \\Vec{y}|} = [k_X \\hspace{0.15cm} k_Y \\hspace{0.15cm} k_Z]\n\\\\\n\\hat{i} = \\frac{\\Vec{y} \\times \\Vec{z}}{|\\Vec{y} \\times \\Vec{z}|} = [i_X \\hspace{0.15cm} i_Y \\hspace{0.15cm} i_Z]\n\\end{split}\n\\label{eqn:unitv}\n\\end{align}\n\n\\subsubsection{Planar pose computation}\nWe compute the pose of the object in terms of the Euler angles. Euler angles are three angles that describe the orientation of a rigid body with respect to a fixed coordinate system. The rotation matrix $\\mathbf{R}$ in equation (\\ref{eqn:rotR}) rotates X axis to $\\hat{i}$, Y axis to $\\hat{j}$, and Z axis to $\\hat{k}$. 
\n\n\\begin{align}\n \\mathbf{R} = \\left [ \\begin{matrix} i_X & j_X & k_X \\\\ i_Y & j_Y & k_Y \\\\ i_Z & j_Z & k_Z \\end{matrix} \\right ]\n\\label{eqn:rotR} \n\\end{align}\n\nEuler angles are combinations of the three axis rotations (equation~\\ref{eqn:euler-axis}), where $\\phi$, $\\theta$, and $\\psi$ specify the intrinsic rotations around the X, Y, and Z axis respectively. The combined rotation matrix is a product of three matrices: $\\mathbf{R} = \\mathbf{R}_z \\mathbf{R}_y \\mathbf{R}_x$ (equation~\\ref{eqn:rotcomb}); the first intrinsic rotation rightmost, last leftmost.\n\n\\begin{align}\\medmath{\n \\left\\lbrace \\begin{aligned}\n\\mathbf{R}_x &= \\colvec {1 & 0 & 0 \\\\ 0 & \\cos\\phi & -\\sin\\phi \\\\ 0 & \\sin\\phi & \\cos\\phi } \\\\\n\\mathbf{R}_y &= \\colvec {\\cos\\theta & 0 & \\sin\\theta \\\\ 0 & 1 & 0 \\\\ -\\sin\\theta & 0 & \\cos\\theta } \\\\\n\\mathbf{R}_z &= \\colvec {\\cos\\psi & -\\sin\\psi & 0 \\\\ \\sin\\psi & \\cos\\psi & 0 \\\\ 0 & 0 & 1 }\n \\end{aligned} \\right .}\n \\label{eqn:euler-axis}\n\\end{align}\n\n\\begin{align}\n\\mathbf{R} = \n \\begin{bmatrix*}\n c\\theta c\\psi\n & s\\phi s\\theta c\\psi - c\\phi s\\psi\n & c\\phi s\\theta c\\psi + s\\phi s\\psi\n \\\\ c\\theta s\\psi\n & s\\phi s\\theta s\\psi + c\\phi c\\psi\n & c\\phi s\\theta s\\psi - s\\phi c\\psi\n \\\\ -s\\theta\n & s\\phi c\\theta\n & c\\phi c\\theta\n \\end{bmatrix*}\n\\label{eqn:rotcomb}\n\\end{align}\n\nIn equation~\\ref{eqn:rotcomb}, $c$ and $s$ represents $\\cos$ and $\\sin$ respectively.\n\nSolving for $\\phi, \\theta$, and $\\psi$ from (\\ref{eqn:rotR}) and (\\ref{eqn:rotcomb}), we get,\n\n\\begin{align}\\medmath{\n \\left\\lbrace \\begin{aligned}\n \\phi &= \\tan^{-1}\\left(\\frac{j_Z}{k_Z}\\right) \\\\\n \\theta &= \\tan^{-1}\\left(\\frac{-i_Z}{\\sqrt{1-i_Z^2}}\\right) = \\sin^{-1}\\left(-i_Z\\right) \\\\\n \\psi &= \\tan^{-1}\\left(\\frac{i_Y}{i_X}\\right)\n \\end{aligned} \\right .}\n 
\\label{eqn:eulerangles}\n\\end{align}\n\n\\begin{figure*}\n\\centering\n\\subfloat[]{\\includegraphics[width = 135pt, height=80pt]{{images\/box_robot-head_1.jpg}}}\\hspace{10px}\n\\subfloat[]{\\includegraphics[width = 135pt, height=80pt]{{images\/box_robot-head_2.jpg}} }\\hspace{10px}\n\\subfloat[]{\\includegraphics[width = 135pt, height=80pt]{{images\/box_robot-head_3.jpg}}}\n\n\\vspace{0pt}\n\\subfloat[]{\\includegraphics[width = 135pt, height=80pt]{{images\/box_rviz_1.jpg}}} \\hspace{10px}\n\\subfloat[]{\\includegraphics[width = 135pt, height=80pt]{{images\/box_rviz_2.jpg}}} \\hspace{10px}\n\\subfloat[]{\\includegraphics[width = 135pt, height=80pt]{{images\/box_rviz_3.jpg}}}\n\n\\vspace{-5pt}\n \\caption{(a),(b),(c) are recovered poses from robot's camera and (d),(e),(f) are corresponding poses visualized in RViz}\n\\label{fig:poseviz}\n\\end{figure*}\n\n\n\n\n\\subsection{Training Grasps for Humanoid Robots}\nTo ensure that the robot can grasp objects in an adaptive manner, we pre-train the robot to perform a set of canonical grasps. We place the object and the robot's gripper close to each other and record the relative pose. This essentially gives us the pose of the gripper with respect to the object. Figure~\\ref{fig:can_grasp} illustrates the training process in which the robot's gripper and a cracker box have been placed in close proximity and the relative poses have been recorded for grasping the objects from the side. 
\n\n\\begin{equation}\n \\textbf{T}_{s}^{d}\n = \\begin{bmatrix} \\textbf{R}_{s}^{d} & P_{s}^{d} \\\\ 0 & 1 \\end{bmatrix}\n =\\begin{bmatrix} r_{11} & r_{12} & r_{13} & X_t \\\\\n r_{21} & r_{22} & r_{23} & Y_t \\\\\n r_{31} & r_{32} & r_{33} & Z_t \\\\\n 0 & 0 & 0 & 1 \\end{bmatrix} \n\\label{eqn:transmat}\n\\end{equation}\n\nEquation~\\ref{eqn:transmat} outlines the structure of a transformation matrix $\\textbf{T}_{s}^{d}$ that describes the rotation and translation of frame $d$ with respect to frame $s$; $\\textbf{R}_{s}^{d}$ represents the rotation matrix similar to equation~\\ref{eqn:rotcomb} and $P_{s}^{d}=[X_{t},Y_{t},Z_{t}]^{T}$ is the translation vector, which is the 3D location of the origin of frame $d$ in frame $s$.\n\nDuring the training phase, we first formulate the transformation matrix $\\textbf{T}_{b}^{o}$ using the rotation matrix and the object location. We take the inverse of $\\textbf{T}_{b}^{o}$ which gives us the transformation matrix $\\textbf{T}_{o}^{b}$. We then use equation~\\ref{eqn:graspmat} to record the transformation $\\mathbf{T}_{o}^{g}$ of the robot's wrist relative to the object.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.65\\linewidth]{images\/canonical_grasp.png}\n \\caption{Pre-training canonical grasp}\n \\label{fig:can_grasp}\n\\end{figure}\n\n\\begin{equation} \\label{eqn:graspmat}\nT_{o}^{g} = T_{o}^{b} \\times T_{b}^{g} \\ \\text{where} \\ T_{o}^{b} = (T_{b}^{o})^{-1}\n\\end{equation}\n\\par\n\nIn equation~\\ref{eqn:graspmat}, $b$ refers to the robot's base, $o$ refers to the object, and $g$ refers to the wrist of the robot to which the gripper is attached. 
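The bookkeeping of equation~\ref{eqn:graspmat} can be sketched with $4\times4$ homogeneous matrices; the numeric poses below are made-up placeholders (in practice $\textbf{T}_{b}^{o}$ comes from the vision system and $\textbf{T}_{b}^{g}$ from the robot's forward kinematics):

```python
import numpy as np

def make_T(R, p):
    """Assemble a 4x4 homogeneous transform from rotation R and translation p."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

# Training: record the wrist pose relative to the object (placeholder poses).
T_b_o = make_T(np.eye(3), [0.5, 0.0, 0.2])   # object in the base frame
T_b_g = make_T(np.eye(3), [0.4, 0.0, 0.2])   # wrist in the base frame
T_o_g = np.linalg.inv(T_b_o) @ T_b_g         # canonical grasp: wrist w.r.t. object

# Testing: compose the recorded grasp with a new object pose from vision.
T_b_o_new = make_T(np.eye(3), [0.6, 0.1, 0.2])
T_b_g_new = T_b_o_new @ T_o_g                # adapted wrist pose in the base frame
```

The last composition is the re-targeting step of equation~\ref{eqn:fingrasp}: once $T_{o}^{g}$ is recorded, any new object pose yields the corresponding wrist pose by a single matrix product.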
Once we record the matrix, we get a new pose of the object from the vision system in the testing phase and generate the final matrix using equation~\\ref{eqn:fingrasp}, which gives the new position and orientation of the robot's wrist in matrix form.\n\\begin{equation} \\label{eqn:fingrasp}\nT_{b}^{g} = T_{b}^{o} \\times T_{o}^{g}\n\\end{equation}\n\\par\n\nWe then extract the rotational angles $\\gamma$, $\\beta$, $\\alpha$~(roll, pitch, yaw) of the grasp pose from matrix $\\mathbf{T}_{b}^{g}$ using equation~\\ref{eqn:grippereulerangles}:\n\n\\begin{align}\\medmath{\n \\left\\lbrace \\begin{aligned}\n \\gamma=\\tan^{-1}(r_{32}\/r_{33}) \\\\\n \\beta=\\tan^{-1}\\frac{-r_{31}}{\\sqrt {{r_{32}}^2+{r_{33}}^2}}\\\\\n \\alpha=\\tan^{-1}(r_{21}\/r_{11})\n \\end{aligned} \\right .}\n \\label{eqn:grippereulerangles}\n\\end{align}\n\n\\section{Evaluation}\nThe proposed object recognition and pose estimation algorithm was implemented on an Ubuntu 14.04 platform equipped with a 3.0 GHz Intel Core i5-7400 CPU and 8 GB of system memory. The RGB-D camera used in the experiments was a Microsoft Kinect sensor v1. We evaluated the proposed algorithm by comparing the accuracy of object recognition, pose estimation, and execution time of four different feature descriptors. We also validated the effectiveness of our approach for adaptive grasping by conducting experiments with the PR2 robot.\n\n\\subsection{Object detection and pose estimation}\n\nWithout enough observable features, the system would fail to find good matches that are required for accurate homography estimation. Consequently, our object detection and pose estimation approach has a constraint on the out-of-plane rotation $\\theta$, illustrated in figure~\\ref{fig:theta}. In other words, if the out-of-plane rotation of the object is more than $\\theta$, the system would not be able to recognize the object. Fast execution is also a crucial aspect to facilitate multiple object detection and pose estimation for real-time applications. 
We experimented with four different descriptors on several planar objects and the comparative result is shown in table~\\ref{tbl:comparisondescriptor}. The execution time was measured for the object detection and pose estimation step. AKAZE and BRISK had much lower processing time for detection and pose estimation, thus would have a better frame rate, but SIFT and SURF had larger out-of-plane rotational freedom.\n\n\\begin{figure}[h]\n\\begin{center}\n\\graphicspath{ {.\/images\/} }\n\\includegraphics[width=5cm]{images\/rot_all.png}\n\\end{center}\n \\caption{Out of plane rotation}\n\\label{fig:theta}\n\\end{figure}\n\n\\begin{table}[h!]\n\\centering\n \\caption{Comparison of feature descriptors}\n \n \\begin{tabular}{l|c|s}\n \\toprule\n \n \\textbf{Descriptor} & \\begin{tabular}[x]{@{}c@{}}\\textbf{Maximum out of}\\\\ \\textbf{plane rotation} (degree)\\end{tabular} & \\begin{tabular}[x]{@{}c@{}}\\textbf{Execution time}\\\\ (second)\\end{tabular}\\\\\n \n \\midrule\n \n SIFT & $48^{\\circ}\\pm2^{\\circ}$ & \\text{~~~~~~~0.21s}\\\\ \\hline\n SURF & $37^{\\circ}\\pm2^{\\circ}$ & \\text{~~~~~~~0.27s}\\\\ \\hline\n AKAZE & $18^{\\circ}\\pm1^{\\circ}$ & \\text{~~~~~~~0.05s}\\\\ \\hline\n BRISK & $22^{\\circ}\\pm2^{\\circ}$ & \\text{~~~~~~~0.06s}\\\\\n \\bottomrule\n \\end{tabular}\n \\label{tbl:comparisondescriptor}\n\\end{table}\n\nWe also compared the \\textit{RMS} difference $\\epsilon$~(equation~\\ref{eqn:epsilon}) of re-calculated $\\vec{x}$ to original $\\vec{x}$ ($\\vec{x}^{'}$ in the equation) for increasing out-of-plane rotation of the planar objects to assess the homography estimation. Ideally, the two estimated vectors $\\vec{x}$ and $\\vec{y}$, which describe the basis of the plane of the planar object, should be orthogonal to each other, but often they are not. So, the values of $\\epsilon$ in figure~\\ref{fig:epsilon} give us an indication of the average error in homography estimation for different out-of-plane rotations. 
In figure~\\ref{fig:epsilon}, we can see AKAZE has much higher $\\epsilon$ values while the rest remained within a close range. This tells us AKAZE results in a much larger error in estimating the homography than the other methods. \n\n\\begin{figure}[h]\n\\begin{center}\n\\graphicspath{ {.\/images\/} }\n\\includegraphics[width=\\linewidth]{images\/chart.png}\n\\end{center}\n \\caption{Out of plane rotation vs $\\epsilon$}\n\\label{fig:epsilon}\n\\end{figure}\n\nWe chose SIFT and SURF to evaluate how the execution time for detection scales up while increasing the number of objects. From table~\\ref{tbl:multobjcompdesc}, which shows the mean processing time for object detection, we can see that SURF had a detection time around 50\\% more than SIFT in all the cases. This outcome coupled with the previous results prompted us to select SIFT for the subsequent experiments.\n\nThe system was capable of detecting multiple objects in real-time and at the same time could estimate their corresponding poses. Figure~\\ref{fig:multobjdet} shows detected objects with estimated directional planar vectors. 
We can also observe that the system was robust to in-plane rotation and partial occlusion.\n\n\\begin{table}[h]\n\\centering\n\\caption{\\centering Execution time of SIFT and SURF for multiple object detection}\n\\begin{tabular}{|c|c|c|}\n\\hline\n\\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Number of\\\\ Objects\\end{tabular}} & \\multicolumn{2}{c|}{\\begin{tabular}[c]{@{}c@{}}Detection time\\\\ (second)\\end{tabular}} \\\\ \\cline{2-3} \n & SIFT & SURF \\\\ \\hline\n1 & 0.06s & 0.09s \\\\ \\hline\n2 & 0.11s & 0.17s \\\\ \\hline\n3 & 0.17s & 0.26s \\\\ \\hline\n4 & 0.22s & 0.35s \\\\ \\hline\n5 & 0.28s & 0.45s \\\\ \\hline\n6 & 0.34s & 0.54s \\\\ \\hline\n\\end{tabular}\n\\label{tbl:multobjcompdesc}\n\\end{table}\n\n\\begin{figure}[h] \n\\centering\n {\\includegraphics[width=78pt, height=78pt]{images\/col1.png}} \n \\hspace{1px}\n {\\includegraphics[width=78pt, height=78pt]{images\/col2.png}}\n \\hspace{1px}\n {\\includegraphics[width=78pt, height=78pt]{images\/col3.png}}\n \\caption{\\centering Multiple object detection with estimated planar vectors}\n \\label{fig:multobjdet}\n\\end{figure}\n\nWe used RViz~\\cite{RViz}, a 3D visualizer for the Robot Operating System (ROS)~\\cite{ros}, to validate the pose estimation. The calculated directional axes were projected onto the image and the estimated poses were visualized in RViz. As shown in figure~\\ref{fig:poseviz}, we qualitatively verified the accuracy of the detection and the estimated pose by comparing the two outputs. We can see that both outputs render similar results. We conducted experiments with multiple objects and human-held objects as well. 
Figure~\\ref{fig:pose_check} illustrates the simultaneous detection and pose estimation of two different boxes and an object held by a human, respectively.\n\n\\begin{figure}[h] \n\\centering\n {\\includegraphics[width=100pt, height=100pt]{images\/Pose_Detection.png}} \n \\hspace{1px}\n {\\includegraphics[width=100pt, height=100pt]{images\/pose_detection_human.png}}\n \n \n \\caption{(a) Pose estimation of multiple objects (b) Estimated pose of an object held by a human}\n \n \\label{fig:pose_check}\n\\end{figure}\n\n\n\\begin{equation}\n \\epsilon = \\frac{1}{N}\\sum_{i=1}^{N}||\\vec{x_i}^{'}-\\vec{x_i}||, \\text{\\fontsize{8}{8}\\selectfont where N is the number of frames}\n \\label{eqn:epsilon}\n\\end{equation}\n\n\\subsection{Adaptive grasping}\nWe assessed our approach for adaptive grasping keeping two different aspects of robotic applications in mind: robotic tasks that require 1) interacting with a static environment, and 2) interacting with humans.\n\nWe first tested our system on static objects, where the object was attached to a tripod. Next, we set up experiments where the object was held by a human. We used a sticker book and a cartoon book and evaluated our system on a comprehensive set of poses. In almost all the experiments, the robot successfully grasped the object in a manner consistent with its training. There were some poses that were not reachable by the robot; for instance, when the object was pointing inward along the X-axis in the robot reference frame, it was not possible for the end-effector to make a top grasp. 
Figure~\\ref{fig:tripod_results} and \\ref{fig:human_results} show the successful grasping of the robot for both types of experiments.\n\n\\begin{figure}\n \\centering\n \\captionsetup[subfigure]{labelformat=empty}\n \n \\subfloat[]{\\includegraphics[width=0.3\\linewidth, height=0.3\\linewidth]{images\/exp_3.1.jpg}}\n \\vspace{5px}\n \\subfloat[]{\\includegraphics[width=0.3\\linewidth, height=0.3\\linewidth]{images\/exp_3.2.jpg}}\n \\vspace{5px}\n \\subfloat[]{\\includegraphics[width=0.3\\linewidth, height=0.3\\linewidth]{images\/exp_3.3.jpg}}\\\\[-5ex]\n \n \\subfloat[]{\\includegraphics[width=0.3\\linewidth, height=0.3\\linewidth]{images\/exp_4.1.jpg}}\n \\vspace{5px}\n \\subfloat[]{\\includegraphics[width=0.3\\linewidth, height=0.3\\linewidth]{images\/exp_4.2.jpg}}\n \\vspace{5px}\n \\subfloat[]{\\includegraphics[width=0.3\\linewidth, height=0.3\\linewidth]{images\/exp_4.3.jpg}}\\\\[-5ex]\n \n \\subfloat[]{\\includegraphics[width=0.3\\linewidth, height=0.3\\linewidth]{images\/exp_5.1.jpg}}\n \\vspace{5px}\n \\subfloat[]{\\includegraphics[width=0.3\\linewidth, height=0.3\\linewidth]{images\/exp_5.2.jpg}}\n \\vspace{5px}\n \\subfloat[]{\\includegraphics[width=0.3\\linewidth, height=0.3\\linewidth]{images\/exp_5.3.jpg}}\n \n \\caption{Robot grasping an object from a tripod. 
Left: initial position of the robot's gripper, middle: gripper adapting to the object's pose, right: grasping of the object.}\n \\label{fig:tripod_results}\n\\end{figure}\n\n\n\\begin{figure}\n \\centering\n \\captionsetup[subfigure]{labelformat=empty}\n \n \\subfloat[]{\\includegraphics[width=0.30\\linewidth, height=0.35\\linewidth]{images\/exp_6.1.jpg}}\n \\vspace{5px}\n \\subfloat[]{\\includegraphics[width=0.30\\linewidth, height=0.35\\linewidth]{images\/exp_6.2.jpg}}\n \\vspace{5px}\n \\subfloat[]{\\includegraphics[width=0.30\\linewidth, height=0.35\\linewidth]{images\/exp_6.3.jpg}}\\\\[-5ex]\n \n \\subfloat[]{\\includegraphics[width=0.30\\linewidth, height=0.35\\linewidth]{images\/exp_7.1.jpg}}\n \\vspace{5px}\n \\subfloat[]{\\includegraphics[width=0.30\\linewidth, height=0.35\\linewidth]{images\/exp_7.2.jpg}}\n \\vspace{5px}\n \\subfloat[]{\\includegraphics[width=0.30\\linewidth, height=0.35\\linewidth]{images\/exp_7.3.jpg}}\\\\[-5ex]\n \n \\subfloat[]{\\includegraphics[width=0.30\\linewidth, height=0.35\\linewidth]{images\/exp_8.1.jpg}}\n \\vspace{5px}\n \\subfloat[]{\\includegraphics[width=0.30\\linewidth, height=0.35\\linewidth]{images\/exp_8.2.jpg}}\n \\vspace{5px}\n \\subfloat[]{\\includegraphics[width=0.30\\linewidth, height=0.35\\linewidth]{images\/exp_8.3.jpg}}\n \n \\caption{Robot grasping an object held by a human. Left: initial position of the robot's gripper, middle: gripper adapting to the object's pose, right: grasping of the object.}\n \\label{fig:human_results}\n\\end{figure}\n\n\n\n\n\\section{Conclusion and Future Work}\n\nThis work presents an approach that enables humanoid robots to grasp objects using planar pose estimation based on RGB image and depth data. We examined the performance of four feature-detector-descriptors for object recognition and found SIFT to be the best solution. We used FLANN's K-d Tree Nearest Neighbor implementation, and Bruteforce Hamming to find the keypoint matches and employed RANSAC to estimate the homography. 
The homography matrix was used to approximate the three orthonormal directional vectors on the planar object using perspective transformation. The pose of the planar object was estimated from the three directional vectors. The system was able to detect multiple objects and estimate the pose of the objects in real-time. We also conducted experiments with the humanoid PR2 robot to show the practical applicability of the framework, where the robot grasped objects by adapting to a range of different poses.\n\nIn the future, we plan to add GPU acceleration for the proposed algorithm, which would further improve the overall computational efficiency of the system. We would like to extend the algorithm to automatically prioritize certain objects and limit the number of objects needed for detection based on different scheduled tasks. Finally, we would like to incorporate transferring grasp configurations to familiar objects and explore other feature matching techniques, e.g., multi-probe LSH and hierarchical k-means trees.\n\n \\bibliographystyle{unsrt2}\n \n \\footnotesize{\n ","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nSupport Vector Data Description (SVDD) is a machine learning technique used\nfor single-class classification and outlier detection. The SVDD technique is\nsimilar to Support Vector Machines and was first introduced by Tax and Duin\n\\cite{tax2004support}. It can be used to build a flexible boundary around\nsingle-class data. The data boundary is characterized by observations designated\nas support vectors. SVDD is used in domains where the majority of data belongs to\na single class. Several researchers have proposed the use of SVDD for multivariate\nprocess control \\cite{sukchotrat2009one}. 
Other applications of SVDD involve\nmachine condition monitoring \\cite{widodo2007support, ypma1999robust} and image\nclassification \\cite{sanchez2007one}.\n\n\\subsection{\\bf Mathematical Formulation of SVDD}\n\\label{mfsvdd}\n\\paragraph*{\\bf Normal Data Description}\\mbox{}\\\\\nThe SVDD model for normal data description builds a minimum-radius hypersphere around the data.\n\\paragraph*{\\bf Primal Form}\\mbox{}\\\\\nObjective Function:\n\\begin{equation}\n\\min R^{2} + C\\sum_{i=1}^{n}\\xi _{i}, \n\\end{equation}\nsubject to: \n\\begin{align}\n\\|x _{i}-a\\|^2 \\leq R^{2} + \\xi_{i}, \\forall i=1,\\dots,n,\\\\\n\\xi _{i}\\geq 0, \\forall i=1,\\dots,n.\n\\end{align}\nwhere:\\\\\n$x_{i} \\in {\\mathbb{R}}^{m}, i=1,\\dots,n $ represents the training data,\\\\\n$ R:$ the radius, a decision variable,\\\\\n$\\xi_{i}:$ the slack variable for observation $i$,\\\\\n$a$: the center, a decision variable, \\\\\n$C=\\frac{1}{nf}:$ the penalty constant that controls the trade-off between the volume and the errors, and,\\\\\n$f:$ the expected outlier fraction.\n\\paragraph*{\\bf Dual Form}\\mbox{}\\\\\nThe dual formulation is obtained using the Lagrange multipliers.\\\\ \nObjective Function:\n\\begin{equation} \n\\max\\ \\sum_{i=1}^{n}\\alpha _{i}(x_{i}.x_{i}) - \\sum_{i,j}^{ }\\alpha _{i}\\alpha _{j}(x_{i}.x_{j}) ,\n\\end{equation}\nsubject to:\n\\begin{align}\n& & \\sum_{i=1}^{n}\\alpha _{i} = 1,\\label{sv:s}\\\\\n& & 0 \\leq \\alpha_{i}\\leq C,\\forall i=1,\\dots,n.\n\\end{align}\nwhere:\\\\\n$\\alpha_{i}\\in \\mathbb{R}$: are the Lagrange multipliers,\\\\\n$C=\\frac{1}{nf}:$ is the penalty constant.\n\\paragraph*{\\bf Duality Information}\\mbox{}\\\\\nDepending upon the position of the observation, the following results hold:\nCenter Position: \\begin{equation} \\sum_{i=1}^{n}\\alpha _{i}x_{i}=a. 
\\label{sv:0} \\end{equation}\nInside Position: \\begin{equation} \\left \\| x_{i}-a \\right \\| < R \\rightarrow \\alpha _{i}=0.\\end{equation}\nBoundary Position: \\begin{equation} \\left \\| x_{i}-a \\right \\| = R \\rightarrow 0< \\alpha _{i}< C.\\end{equation}\nOutside Position: \\begin{equation}\\left \\| x_{i}-a \\right \\| > R \\rightarrow \\alpha _{i}= C. \\label{sv:1} \\end{equation}\nThe radius of the hypersphere is calculated as follows:\\\\\n\\begin{equation} \nR^{2}=(x_{k}.x_{k})-2\\sum_{i}^{ }\\alpha _{i}(x_{i}.x_{k})+\\sum_{i,j}^{ }\\alpha _{i}\\alpha _{j}(x_{i}.x_{j}).\n\\label{eq:a}\n\\end{equation}\n using any $ x_{k} \\in SV_{<C} $, the set of support vectors with $ 0 < \\alpha_{k} < C $. Observations whose squared distance from the center exceeds $ R^{2} $ are designated as outliers.\n\nThe spherical data boundary can include a significant amount of space with a very sparse distribution of training\nobservations, which leads to a large number of false positives. The use of kernel functions leads to a more compact\nrepresentation of the training data.\n\\paragraph*{\\bf Flexible Data Description}\\mbox{}\\\\\nThe Support Vector Data Description is made flexible by replacing the inner product $ (x_{i}.x_{j}) $ in equation \\eqref{eq:a} with a \nsuitable kernel function $ K(x_{i},x_{j}) $. The Gaussian kernel function used in this paper is defined as:\n\\begin{equation} \nK(x_{i}, x_{j})= \\exp \\dfrac{ -\\|x_i - x_j\\|^2}{2s^2}\n\\label{eq:b}\n\\end{equation}\nwhere $s$ is the Gaussian bandwidth parameter.\n\nThe modified mathematical formulation of SVDD with kernel function is:\n\nObjective function:\n\\begin{equation} \\label{eq:1}\n\\max\\ \\sum_{i=1}^{n}\\alpha _{i}K(x_{i},x_{i}) - \\sum_{i,j}^{ }\\alpha _{i}\\alpha _{j}K(x_{i},x_{j}),\n\\end{equation}\nSubject to:\n\\begin{align}\n& &\\sum_{i=1}^{n}\\alpha _{i} = 1, \\label{eq:2} \\\\\n& & 0 \\leq \\alpha_{i}\\leq C = \\frac{1}{nf} , \\forall i=1,\\dots,n. 
\\label{eq:3}\n\\end{align}\nConditions similar to \\eqref{sv:0} to \\eqref{sv:1} continue to hold even when the kernel function is used.\\\\\nThe threshold $R^{2}$ is calculated as:\n\\begin{multline}\nR^{2} = K(x_{k},x_{k})-2\\sum_{i}^{ }\\alpha _{i}K(x_{i},x_{k})+\\sum_{i,j}^{ }\\alpha _{i}\\alpha _{j}K(x_{i},x_{j})\n\\end{multline}\nusing any $ x_{k} \\in SV_{<C} $, the set of support vectors with $ 0 < \\alpha_{k} < C $. Observations whose squared distance from the center, computed with the kernel, exceeds $ R^{2} $ are designated as outliers.\n\n\\section{Need for a Sampling-based Approach}\nAs outlined in Section \\ref{mfsvdd}, the SVDD of a data set is obtained\nby solving a quadratic programming problem. The time required to solve\nthe quadratic programming problem is directly related to the number of\nobservations in the training data set. The actual time complexity depends\nupon the implementation of the underlying quadratic programming solver.\nWe used LIBSVM~\\cite{chang2011libsvm} to evaluate SVDD training time as\na function of the training data set size.\nFor the examples in this paper we used C++ code based on the LIBSVM implementation of SVDD;\nwe have also provided a Python implementation, which uses Scikit-learn~\\cite{scikit-learn},\nat \\cite{smsvddg}.\nFigure~\\ref{fig:image_0} shows\nprocessing time as a function of training data set size for the Two-Donut\ndata set (see Figure \\ref{fig:image_3} for a scatter plot of the Two-Donut\ndata). In Figure~\\ref{fig:image_0} the x-axis indicates the training data set\nsize and the y-axis indicates processing time in minutes. As indicated in\nFigure~\\ref{fig:image_0}, the SVDD training time is low for small or moderately\nsized training data but gets prohibitively high for large data sets.\n\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[scale=0.25]{image0}\n\t\\caption{SVDD Training Time: Two Donut data}\n\t\\label{fig:image_0}\n\\end{figure}\n\nThere are applications of SVDD in areas such as process control and equipment\nhealth monitoring where the size of the training data set can be very large, consisting\nof a few million observations. 
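As a minimal illustration of SVDD training with the Gaussian kernel, the sketch below uses scikit-learn's `OneClassSVM`: for kernels with constant $K(x,x)$, such as the Gaussian kernel, its dual coincides with the SVDD dual, with `nu` playing the role of the outlier fraction $f$ and `gamma` $= 1/(2s^2)$. The data here is a synthetic stand-in, not one of the paper's data sets:

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Synthetic stand-in for a training data set.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))

# SVDD with a Gaussian kernel via One-Class SVM:
# nu ~ expected outlier fraction f; gamma = 1/(2 s^2) for bandwidth s.
s = 1.0
model = OneClassSVM(kernel="rbf", nu=0.05, gamma=1.0 / (2 * s ** 2)).fit(X)

n_sv = len(model.support_)               # support vectors describe the boundary
n_inliers = int((model.predict(X) == 1).sum())  # +1 = inside, -1 = outlier
```

The support vectors are typically a small fraction of the training data, which is the property the sampling method below exploits.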
The training data set consists of sensor readings\nmeasuring multiple key health or process parameters at a very high frequency.\nFor example, a typical airplane currently has $\\approx$7,000 sensors\nmeasuring critical health parameters and creates 2.5 terabytes of data per day.\nBy 2020, this number is expected to triple or quadruple to over 7.5 terabytes\n\\cite{ege2015IoT}. In such applications, multiple SVDD training models are\ndeveloped, each representing a separate operating mode of the equipment or process\nsettings. The success of SVDD in these applications requires algorithms which can\ntrain using huge amounts of training data in an efficient manner.\n\n\nTo improve the performance of SVDD training on large data sets, we propose a new\nsampling-based method. Instead of using all observations from the training data\nset, the algorithm computes the training data SVDD by iteratively computing SVDD\non independent random samples obtained from the training data set and combining\nthem. The method works well even when the random samples have few observations.\nWe also provide a criterion for detecting convergence. At convergence, our\nmethod provides a data description that compares favorably with the result obtained\nby using all the training data set observations.\n\n\nThe rest of this document is organized as follows: Section~\\ref{sbm} provides\ndetails of the proposed sampling-based iterative method. 
Results of training\nwith the proposed method are provided in section~\\ref{res}; the analysis of high\ndimensional data is provided in section~\\ref{pd}; the results of a simulation\nstudy on random polygons are provided in section~\\ref{ss}; and we provide\nour conclusions in section~\\ref{cn}.\n\n\\textbf{Note:} \\textit{In the remainder of this paper, we refer to the training method using all\nobservations in one iteration as the full SVDD method.}\n\n\\section{Sampling-based Method}\n\\label{sbm}\nThe Decomposition and Combination method of Luo et al.~\\cite{luo2010fast} and the\nK-means Clustering method of Kim et al.~\\cite{kim2007fast} both use sampling for\nfast SVDD training, but are computationally expensive. The first method, by Luo\net al., uses an iterative approach and requires one scoring action on the entire\ntraining data set per iteration. The second method, by Kim et al., is a classic\ndivide-and-conquer algorithm. It uses each observation from the training data\nset to arrive at the final solution.\n\n\nIn this section we describe our sampling-based method for fast SVDD training.\nThe method iteratively samples from the training data set with the objective\nof updating a set of support vectors called the master set of support\nvectors ($SV^{*}$). During each iteration, the method updates $SV^{*}$ and\nthe corresponding threshold value $R^{2}$ and center $a$. As the threshold value\n$R^{2}$ increases, the volume enclosed by the $SV^{*}$ increases. The method\nstops iterating and provides a solution when the threshold value $R^{2}$ and the\ncenter $a$ converge. At convergence,\nthe members of the master set of support vectors $SV^{*}$ characterize\nthe description of the training data set. 
For all test cases, our method\nprovided a good approximation to the solution that can be obtained by using all\nobservations in the training data set.\n\nOur method addresses drawbacks of the existing sampling-based methods proposed\nby Luo et al.~\\cite{luo2010fast} and Kim et al.~\\cite{kim2007fast}. In each\niteration, our method learns using a very small sample from the training data\nset, and overall it typically uses only a small subset of the training data set. The method\ndoes not require any scoring actions while it trains.\n\nThe sampling method works well across a range of sample sizes for the random draws in the iterations. \nIt also provides a better alternative to training SVDD on one large random sample from the training data\nset, since establishing the right sample size, especially with high-dimensional data, is\na challenge.\n\nThe important steps in this algorithm are outlined below:\\mbox{}\\\\\n\\textbf{Step 1:}\nThe algorithm is initialized by selecting a random sample $S_{0}$ of size $n$\nfrom the training data set of $M$ observations ($n \\ll M$). The SVDD of $S_{0}$ is\ncomputed to obtain the corresponding set of support vectors $SV_{0}$. The set\n$SV_{0}$ initializes the master set of support vectors $SV^{*}$. The iteration\nnumber $i$ is set to 1.\\\\\n\\textbf{Step 2:}\nDuring this step, the algorithm updates the master set of support vectors\n$SV^{*}$ until the convergence criteria are satisfied. In each iteration $i$, the\nfollowing steps are executed:\n\\begin{adjustwidth}{2mm}{0pt} \\textbf{Step 2.1:} A random sample $S_{i}$ of size $n$ is selected and its SVDD is computed. 
The corresponding support vectors are designated as $SV_{i}$.\\\\\n\\textbf{Step 2.2}: A union of $SV_{i}$ with the current master set of support vectors, $SV^{*}$, is taken to obtain a set $S_{i}^{'}$ ($S_{i}^{'}=SV_{i} \\bigcup SV^{*}$).\\\\\n\\textbf{Step 2.3: }The SVDD of $S_{i}^{'}$ is computed to obtain the corresponding support vectors $SV_{i}^{'}$, threshold value\n$R_{i}^{2}$ and ``center'' $a_{i}$ (which we define as $\\sum_{i}\\alpha_i x_i$ even when a kernel is used). The set $SV_{i}^{'}$ is designated as the new master set of support vectors $SV^{*}$. \n\\end{adjustwidth}\n\\textbf{ Convergence Criteria:} \nAt the end of each iteration $i$, the following conditions are checked to determine convergence:\n\\begin{adjustwidth}{2mm}{0pt}\n\\begin{enumerate}\n\\item $i$ = $maxiter$, where $maxiter$ is the maximum number of iterations; or\\\\\n\\item $ \\| a_{i} - a_{i-1} \\| \\le \\epsilon_1 \\|a_{i-1}\\|$, and\n $\\left \\| R_{i}^{2}-R_{i-1}^{2} \\right \\| \\le \\epsilon_2 R_{i-1}^{2}$, where $\\epsilon_1,\\epsilon_2$ are appropriately\nchosen tolerance parameters.\n\\end{enumerate}\n\\end{adjustwidth}\nIf the maximum number of iterations is reached or the second condition is satisfied for $t$ consecutive iterations,\nconvergence is declared. In many cases checking the convergence of just $R_i^2$ suffices.\n\nThe pseudo-code for this method is provided in algorithm~\\ref{alg:the_alg1}. 
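The iterative scheme of Steps 1 and 2 can be sketched in Python as follows. The `solve` argument stands in for a real SVDD solver returning (support vectors, $R^2$, center); the `toy_svdd` stub below is an illustrative placeholder only, not the quadratic-programming solution the paper uses:

```python
import numpy as np

def toy_svdd(points):
    # Illustrative stand-in for an SVDD solver: treats the sample mean as
    # the center and the points farthest from it as "support vectors".
    # A real implementation solves the SVDD quadratic program instead.
    center = points.mean(axis=0)
    d2 = ((points - center) ** 2).sum(axis=1)
    r2 = d2.max()
    return points[d2 >= 0.95 * r2], r2, center

def sampling_svdd(data, n, solve=toy_svdd, eps1=1e-3, eps2=1e-3,
                  t=5, maxiter=500, seed=0):
    # Step 1: initialize the master set SV* from one random sample.
    rng = np.random.default_rng(seed)
    sv_star, r2, center = solve(rng.choice(data, size=n, replace=False))
    stable = 0
    for _ in range(maxiter):
        # Step 2.1: SVDD of a fresh random sample.
        sv_i, _, _ = solve(rng.choice(data, size=n, replace=False))
        # Step 2.2: union with the current master set.
        union = np.unique(np.vstack([sv_i, sv_star]), axis=0)
        # Step 2.3: SVDD of the union becomes the new master set.
        sv_star, r2_new, center_new = solve(union)
        # Convergence: center and R^2 stable for t consecutive iterations.
        if (np.linalg.norm(center_new - center) <= eps1 * np.linalg.norm(center)
                and abs(r2_new - r2) <= eps2 * r2):
            stable += 1
        else:
            stable = 0
        r2, center = r2_new, center_new
        if stable >= t:
            break
    return sv_star, r2, center
```

With a real solver plugged in, each iteration performs exactly the two small SVDD computations and one union operation described in the text.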
The pseudo-code uses the following notation:\n\\begin{enumerate}\n\\item $S_{i} \\leftarrow SAMPLE (T, n)$ denotes the data set $S_{i}$ obtained by selecting a random sample of size $n$ from data set $T$.\n\\item $\\delta S_{i}$ denotes SVDD computation on data set $S_{i}$.\n\\item $\\langle SV_{i}, R_{i}^{2}, a_{i} \\rangle \\leftarrow \\delta S_{i} $ denotes the set of support vectors $SV_{i}$, threshold value $R_{i}^{2}$ and center $a_{i}$ obtained by performing SVDD computations on data set $S_{i}$.\n\\end{enumerate}\n\\begin{algorithm}\n\t\\caption{Sampling-based iterative method}\\label{euclid}\n\t\\label{alg:the_alg1}\n\t\\begin{algorithmic}[1]\n\t\t\\State $T$ (training data set), $n$ (sample size), convergence criteria, $s$ (Gaussian bandwidth parameter),\n$f$ (fraction of outliers) and $t$ (required number of consecutive iterations satisfying the convergence criteria).\n\t\t\\State $S_{0} \\leftarrow SAMPLE(T, n)$\n\t\t\\State $ \\langle SV_{0}, R_{0}^{2}, a_{0} \\rangle \\leftarrow \\delta S_{0}$ \n\t\t\\State $SV^{*} \\leftarrow SV_{0}$\n\t\t\\State $i=1$\n\t\t\\While {(Convergence criteria not satisfied for $t$ consecutive iterations)} \n\t\t\\State $S_{i} \\leftarrow SAMPLE(T, n)$\n\t\t\\State $\\langle SV_{i}, R_{i}^{2}, a_{i} \\rangle \\leftarrow \\delta S_{i}$\n\t\t\\State $S_{i}^{'} \\leftarrow SV_{i} \\bigcup SV^{*}$.\t\t\n\t\t\\State $\\langle SV_{i}^{'}, R_{i}^{2'}, a_{i}^{'}\\rangle \\leftarrow \\delta S_{i}^{'}$\n\t\t\\State Test for convergence\n\t\t\\State $SV^{*} \\leftarrow SV_{i}^{'}$\n\t\t\\State $i=i+1$\n\t\t\\EndWhile\n\t\t\\RETURN { $SV^{*}$}\n\t\t\n\t\\end{algorithmic}\n\\end{algorithm}\n\nAs outlined in steps 1 and 2, the algorithm obtains the final training data\ndescription by incrementally updating the master set of support vectors $SV^{*}$.\nDuring each iteration, the algorithm first selects a small random\nsample $S_{i}$, computes its SVDD and obtains the corresponding set of support\nvectors $SV_{i}$. 
The support vectors of set $SV_{i}$ are included in the\nmaster set of support vectors $SV^{*}$ to obtain $S_{i}^{'}$ ($S_{i}^{'}=SV_{i}\n\\bigcup SV^{*}$). The set $S_{i}^{'}$ thus represents an incremental expansion\nof the current master set of support vectors $SV^{*}$. Some members of $SV_{i}$\ncan potentially be ``inside'' the data boundary characterized by $SV^{*}$; the\nnext SVDD computation on $S_{i}^{'}$ eliminates such ``inside'' points.\nDuring the initial iterations, as $SV^{*}$ gets updated, its threshold value $R_{i}^{2'}$ typically increases and \nthe master set of support vectors expands to describe the entire data set.\n\nEach iteration of our algorithm involves two small SVDD computations and one union operation. The first SVDD\ncomputation is fast since it is performed on a small sample of the training data set. For the remaining two operations, our\nmethod exploits the fact that for most data sets the support vectors obtained from SVDD are a tiny fraction\nof the input data set, so both the union operation and the second SVDD computation are fast. Our method thus consists\nof three fast operations per iteration. For most large data sets we have experimented with, the time to convergence is short,\nand we achieve a reasonable approximation to the full SVDD in a fraction of the time needed to compute SVDD with the full data set.\n\n\n\n\\subsubsection{Distributed Implementation}\nFor extremely large training data sets, efficiency gains using a distributed\nimplementation are possible. Figure \\ref{fig:image_511} describes the SVDD solution\nusing the sampling method outlined in section \\ref{sbm} utilizing a distributed\narchitecture. The training data set with $M$ observations is first distributed\nover $p$ worker nodes. Each worker node computes the SVDD of its $\\dfrac{M}{p}$\nobservations using the sampling method to obtain its own master set of support\nvectors $SV_{i}^*$. 
Once the SVDD computations are completed, each worker node\npromotes its own master set of support vectors $SV_{i}^*$ to the controller\nnode. The controller node takes a union of all the worker nodes' master sets of\nsupport vectors $SV_{i}^*$ to create a data set $S^{'}$. Finally, the solution is\nobtained by performing an SVDD computation on $S^{'}$. The corresponding set of\nsupport vectors $SV^{*}$ is used to approximate the original training data set\ndescription.\n\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[scale=0.25]{image1}\n\t\\caption{Distributed Implementation}\n\t\\label{fig:image_511}\n\\end{figure}\n\n\\section{Results}\n\\label{res}\nTo test our method we experimented with three data sets of known geometry, which we call the Banana-shaped, Star-shaped,\nand Two-Donut-shaped\ndata. Figures \\ref{fig:image_1}-\\ref{fig:image_3} illustrate these three data sets. \n\\begin{figure}\n\t\\centering \n\t\\subfloat[Banana-shaped data]{\\label{fig:image_1}\\includegraphics[scale=0.11]{image2}}\n\t\\subfloat[Star-shaped data]{\\label{fig:image_2}\\includegraphics[scale=0.11]{image3}}\n\t\\subfloat[Two-donut-shaped data]{\\label{fig:image_3}\\includegraphics[scale=0.11]{image4}}\n\t\\caption{Scatter plots}\\label{fig:image_4}\n\\end{figure}\nFor each data set, we first obtained the SVDD using all observations.\nTable~\\ref{table:t2} summarizes the results.\\\\\nFor each data set, we varied the value of the sample size $n$ from 3 to 20\nand obtained multiple SVDDs using the sampling method. For each sample size\nvalue, the total processing time and the number of iterations until convergence were\nnoted. Figures \\ref{fig:image_5} to \\ref{fig:image_7} illustrate the results.\nThe vertical reference line indicates the sample size corresponding to the\nminimum processing time. 
Table~\\ref{table:t3} provides the minimum processing\ntime, the corresponding sample size, and other details for all three data sets.\nFigure \\ref{fig:image_106} shows the convergence of the threshold $R^2$ for the\nBanana-shaped data trained using the sampling method.\n\n\\begin{figure}\n\\centering \n\\subfloat[Run time vs. sample size]{\\label{fig:image_51}\\includegraphics[width=3.00in]{image5_1}}\\mbox{}\\\\\n\\subfloat[\\# iterations vs. sample size]{\\label{fig:image_52}\\includegraphics[width=3.00in]{image5_2}}\n\\caption{Banana-shaped data}\\label{fig:image_5}\n\\end{figure}\n\n\\begin{figure}\n\\centering \n\\subfloat[Run time vs. sample size]{\\label{fig:image_61}\\includegraphics[width=3.00in]{image6_1}}\\mbox{}\\\\\n\\subfloat[\\# iterations vs. sample size]{\\label{fig:image_62}\\includegraphics[width=3.00in]{image6_2}}\n\\caption{Star-shaped data}\\label{fig:image_6}\n\\end{figure}\n\n\\begin{figure}\n\\centering \n\\subfloat[Run time vs. sample size]{\\label{fig:image_71}\\includegraphics[width=3.00in]{image7_1}}\\\\\n\\subfloat[\\# iterations vs. sample size]{\\label{fig:image_72}\\includegraphics[width=3.00in]{image7_2}}\n\\caption{Two Donut data}\\label{fig:image_7}\n\\end{figure}\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=3.00in]{image106}\n\\caption{Plot of threshold $R^2$ - Banana-shaped data (Sample size = 6)}\n\\label{fig:image_106}\n\\end{figure}\n\n\\begin{table}[h!]\n\\begin{minipage}{.5\\textwidth}\n \\begin{tabular}{||c c c c c||} \n \\hline\n Data & \\#Obs & $R^{2}$ & \\#SV & Time \\\\ [0.5ex] \n \\hline\\hline\n Banana & 11,016 & 0.8789 & 21 & 1.98 sec \\\\ \n TwoDonut &1,333,334 & 0.8982 &178 & 32 min \\\\ \n Star & 64,000 & 0.9362 &76 &11.55 sec \\\\ [1ex] \n \\hline\n \\end{tabular}\n \\caption{SVDD Training using full SVDD method}\\label{table:t2}\n\\end{minipage}\\\\\n\\mbox{}\\\\\n\\begin{minipage}{.4\\textwidth}\n\t\\begin{tabular}{||ccccc||} \n\t\t\\hline\n\t\tData&Iterations & $R^{2}$ & \\#SV & Time\\\\ [0.5ex] \n\t\t\\hline\\hline\n\t\tBanana(6)&119&0.872&19&0.32 sec\\\\ \n\t\tTwoDonut(11)&157&0.897&37&0.29 sec\\\\\n\t\tStar(11)&141&0.932&44&0.28 sec\\\\[1ex] \n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{SVDD Results using Sampling Method (sample size in parenthesis)}\\label{table:t3} \n\\end{minipage}\n\\end{table}\nResults provided in Table \\ref{table:t2} and Table \\ref{table:t3} indicate\nthat our method provides an order of magnitude performance improvement as\ncompared to training using all observations in a single iteration. The threshold\n$R^{2}$ values obtained using the sampling-based method are approximately equal\nto the values that can be obtained by training using all observations in a\nsingle iteration. Although the radius values are the same, to confirm whether the data\nboundary defined by the support vectors is similar, we performed scoring on a\n$200\\times200$ data grid. Figure \\ref{fig:image_8} provides the scoring results\nfor all data sets. 
The scoring results for the Banana-shaped and the Two-Donut-shaped data are very similar\nfor both methods; the scoring results for the Star-shaped data are also similar for the two methods,\nexcept for a region near the center.\n\n\n\\begin{figure}\n\\begin{tabular}{cc}\n\\includegraphics[width=1.5in]{image100_1} & \\includegraphics[width=1.5in]{image100_2} \\\\\n(a) & (b) \\\\[6pt]\n\\includegraphics[width=1.5in]{image101_1} & \\includegraphics[width=1.5in]{image101_2} \\\\\n(a) & (b) \\\\[6pt]\n\\includegraphics[width=1.5in]{image102_1} & \\includegraphics[width=1.5in]{image102_2} \\\\\n(a) & (b) \\\\[6pt]\nFull SVDD Method & Sampling method \\\\[6pt]\n\\end{tabular}\n\\caption{Scoring results. The figures above show results of scoring on a 200x200 data grid. Light gray indicates outside points and black indicates inside points. Column (a) used the full SVDD method for training; column (b) used the sampling method. }\\label{fig:image_8}\n\\end{figure}\n\n\n\\section{Analysis of High Dimensional Data}\n\\label{pd}\nSection \\ref{res} provided a comparison of our sampling method with the full SVDD\nmethod. For two-dimensional data sets the performance of the sampling method can\nbe visually judged using the scoring results. We also tested the sampling method\nwith high-dimensional data sets, where such visual feedback about the classification\naccuracy of the sampling method is not available. We compared the classification\naccuracy of the sampling method with the accuracy of training with the full SVDD\nmethod. 
We use the $F_{1}$-measure to quantify the classification accuracy\n\\cite{zhuang2006parameter}.\nThe $F_{1}$-measure is defined as follows:\n\\begin{equation} \nF_{1}=\\dfrac{2\\times \\text{Precision}\\times \\text{Recall}}{\\text {Precision}+\\text {Recall}},\n\\end{equation}\nwhere:\n\\begin{align}\n\\text {Precision}=\\dfrac{\\text{true positives}}{\\text{true positives} + \\text{false positives}}\\\\\n\\text {Recall}=\\dfrac{\\text{true positives}}{\\text{true positives} + \\text{false negatives}}.\n\\end{align} \nThus high precision relates to a low false positive rate, and high recall\nrelates to a low false negative rate. We chose the $F_{1}$-measure because it is\na composite measure that takes into account both the Precision and the Recall.\nModels with higher values of the $F_{1}$-measure provide a better fit. \\\\\n\n\\subsection{Analysis of Shuttle Data}\nIn this section we provide results of our experiments with the Statlog (shuttle)\ndata set \\cite{Lichman:2013}. This is a high-dimensional data set consisting of nine\nnumeric attributes and one class attribute. Out of 58,000 total observations,\n80\\% of the observations belong to class one.\nWe created a training data set of 2,000 randomly selected observations belonging\nto class one. The remaining 56,000 observations were used to create a scoring\ndata set. An SVDD model was first trained using all observations in the training\ndata set. The training results were used to score the observations in the\nscoring data set to determine if the model could accurately classify an\nobservation as belonging to class one, and the accuracy of scoring was measured\nusing the $F_{1}$-measure. We then trained using the sampling-based method,\nfollowed by scoring to compute the $F_{1}$-measure again. The sample size for\nthe sampling-based method was set to 10 (number of variables + 1). 
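The $F_{1}$-measure defined above is computed directly from the true-positive, false-positive, and false-negative counts; a minimal sketch:

```python
def f1_measure(tp, fp, fn):
    # F1 = 2 * Precision * Recall / (Precision + Recall),
    # with Precision = tp/(tp+fp) and Recall = tp/(tp+fn).
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```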
We measured\nthe performance of the sampling method using the $F_{1}$-measure ratio, defined\nas $F_{\\text{Sampling}}\/F_{\\text{Allobs}}$, where\n$F_{\\text{Sampling}}$ is the $F_1$-measure obtained when the sampling method was used for training, and\n$F_{\\text{Allobs}}$ is the $F_1$-measure computed when all observations were used for training. A value close to 1 indicates that the sampling method is competitive with the full SVDD method.\nWe repeated the above steps, varying the training data set size from 3,000\nto 40,000 in increments of 1,000. The corresponding scoring data set size\nchanged from 55,000 to 18,000. Figure \\ref{fig:image_9_1_1} provides the plot of the\n$F_{1}$-measure ratio. The ratio is nearly constant and very close\nto 1 for all training data set sizes, providing evidence that our sampling\nmethod delivers near-identical classification accuracy compared to the full SVDD\nmethod. Figure \\ref{fig:image_9_1_2} provides the plot of the processing time\nfor the sampling method and for training using all observations. As the training\ndata set size increased, the processing time for the full SVDD method increased\nalmost linearly, to about 5 seconds for a training data set of 40,000\nobservations. In comparison, the processing time of the sampling-based method\nwas in the range of 0.24 to 0.35 seconds. These results show that the sampling-based\nmethod is efficient and provides near-identical results to the full SVDD method.\n\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=3.45in]{image9_1_b}\n\t\\caption{$F_1$-measure plot: Shuttle data. \\textit{Sample size for sampling method=10}}\n\t\\label{fig:image_9_1_1}\n\\end{figure}\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=3.45in]{image9_2_b}\n\\caption{Processing time plot: Shuttle data. 
\\textit{Sample size for sampling method=10}}\n\\label{fig:image_9_1_2}\n\\end{figure}\n\n\n\\subsection{Analysis of Tennessee Eastman Data}\nIn this section we provide results of our experiments with the high-dimensional\nTennessee Eastman data. The data was generated using the MATLAB simulation code\n\\cite{Ricker:2002}, which provides a model of an industrial chemical process\n\\cite{downs1993plant}. The data was generated for normal operations of the\nprocess and twenty faulty processes. Each observation consists of 41 variables:\n22 are measured continuously, on average every 6 seconds,\nand the remaining 19 are sampled at a specified interval of either 0.1 or 0.25\nhours. We interpolated the 22 continuously measured variables\nusing the SAS\\textsuperscript{\\textregistered} \\textit{EXPAND} procedure. The\ninterpolation increased the observation frequency and generated 20 observations\nper second. The interpolation ensured that we had adequate data volume to\ncompare the performance of our sampling method with the full SVDD method.\n\n\nWe created a training data set of 5,000 randomly selected observations belonging\nto the normal operations of the process. From the remaining observations, we\ncreated a scoring data set of 228,000 observations by randomly selecting 108,000\nobservations belonging to the normal operations and 120,000 observations\nbelonging to the faulty processes. An SVDD model was first trained using all\nobservations in the training data set. The training results were used to score\nthe observations in the scoring data set, to determine if the model could\naccurately classify an observation as belonging to the normal operations. The\naccuracy of scoring was measured using the $F_{1}$-measure. We then trained\nusing the sampling method, followed by scoring to compute the $F_{1}$-measure\nagain. The sample size for the sampling-based method was set to 42 (number\nof variables + 1). 
Similar to the Shuttle data analysis, we measured the\nperformance of the sampling method using the $F_{1}$-measure ratio, defined as\n$F_{\\text{Sampling}}\/F_{\\text{Allobs}}$, where $F_{\\text{Sampling}}$ is the\n$F_1$-measure obtained when the sampling method was used for\ntraining, and $F_{\\text{Allobs}}$ is the $F_1$-measure computed when\nall observations were used for training. A value close to 1 indicates that the\nsampling method is competitive with the full SVDD method.\n\nWe repeated the above steps, varying the training data set size from 10,000\nto 100,000 in increments of 5,000. The scoring data set was kept unchanged\nduring each iteration. Figure \\ref{fig:image_9_1} provides the plot of the\n$F_{1}$-measure ratio. The ratio was nearly constant and very\nclose to 1 for all training data set sizes, providing evidence that the\nsampling method delivers near-identical classification accuracy compared\nto the full SVDD method. Figure \\ref{fig:image_9_2} provides the plot of the\nprocessing time for the sampling-based method and the full SVDD method. As\nthe training data set size increased, the processing time for the full SVDD method\nincreased almost linearly, to about one minute for a training data set\nof 100,000 observations. In comparison, the processing time of the sampling-based\nmethod was in the range of 0.5 to 2.0 seconds. These results show that the\nsampling-based method is efficient and closely approximates\nthe results obtained from the full SVDD method.\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=3.45in]{image12_1}\n\t\\caption{$F_1$-measure ratio plot: Tennessee Eastman data. Sample size for sampling method=42}\n\t\\label{fig:image_9_1}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=3.45in]{image12_2}\n\\caption{Processing time plot: Tennessee Eastman data. 
Sample size for sampling method=42}\n\\label{fig:image_9_2}\n\\end{figure}\n\n\\section{Simulation Study}\n\\label{ss}\nIn this section we measure the accuracy of the sampling method when it is\napplied to randomly generated polygons. Given the number of vertices, $k$, we\ngenerate the vertices of a random polygon in the anticlockwise\nsense as $r_1\\exp i \\theta_{(1)}, \\dots, r_k \\exp i \\theta_{(k)}.$ Here\nthe $\\theta_{(i)}$'s are the order statistics of an i.i.d sample uniformly\ndrawn from $(0,2\\pi)$ and the $r_i$'s are uniformly drawn from an interval\n$[\\text{r}_{\\text{min}},\\text{r}_{\\text{max}}].$ For this simulation we chose\n$\\text{r}_{\\text{min}}=3$ and $\\text{r}_{\\text{max}}=5$ and varied the number\nof vertices from $5$ to $30$. We generated $20$ random polygons for each vertex\ncount. Figure \\ref{fig:image_10} shows two random polygons. Having determined a\npolygon, we randomly selected $600$ points uniformly from the interior of the\npolygon to construct a training data set.\n\nTo create the scoring data set, we divided the bounding rectangle of each\npolygon into a $200 \\times 200$ grid. We labeled each point on this grid\nas an ``inside'' or an ``outside'' point. We then fit SVDD on the training\ndata set, scored the corresponding scoring data set, and calculated the\n$F_1$-measure. The process of training and scoring was first performed using the\nfull SVDD method, followed by the sampling method. For the sampling method we\nused a sample size of 5. We trained and scored each instance of a polygon 10 times\nby changing the value of the Gaussian bandwidth parameter, $s$. 
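The random-polygon construction and interior sampling described above can be sketched as follows. This is an illustrative reimplementation, not the code used for the study; in particular, the rejection sampler is just one simple way to draw points uniformly from the polygon interior:

```python
import numpy as np

def random_polygon(k, r_min=3.0, r_max=5.0, seed=None):
    """Vertices r_i * exp(i * theta_(i)) in anticlockwise order: the theta_(i)
    are order statistics of k uniform draws on (0, 2*pi), and the r_i are
    uniform on [r_min, r_max]."""
    rng = np.random.default_rng(seed)
    theta = np.sort(rng.uniform(0.0, 2.0 * np.pi, k))
    r = rng.uniform(r_min, r_max, k)
    return np.column_stack([r * np.cos(theta), r * np.sin(theta)])

def in_polygon(point, verts):
    """Even-odd (ray casting) point-in-polygon test."""
    x, y = point
    inside = False
    n = len(verts)
    for i in range(n):
        x1, y1 = verts[i]
        x2, y2 = verts[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses the horizontal line at y.
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def sample_interior(verts, n, seed=None):
    """Rejection-sample n points uniformly from the polygon interior."""
    rng = np.random.default_rng(seed)
    lo, hi = verts.min(axis=0), verts.max(axis=0)
    pts = []
    while len(pts) < n:
        p = rng.uniform(lo, hi)
        if in_polygon(p, verts):
            pts.append(p)
    return np.array(pts)
```

Because the vertices are sorted by angle about the origin and all radii are positive, the polygon is star-shaped with respect to the origin and therefore simple, so the even-odd test suffices.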
We used $s$\nvalues from the following set:\\linebreak $s=[1, 1.44, 1.88, 2.33, 2.77, 3.22, 3.66,\n4.11, 4.55, 5].$\n\nAs in the previous examples, we used the $F_1$-measure ratio to judge the accuracy of the sampling method.\n\n\\begin{figure}\n\t\\centering \n\t\\subfloat[Number of Vertices = 5]{\\label{fig:image_10a}\\includegraphics[width=3.45in]{p5}}\\\\\n\t\\subfloat[Number of Vertices = 25]{\\label{fig:image_10b}\\includegraphics[width=3.45in]{p25}}\n\t\\caption{Random Polygons}\\label{fig:image_10}\n\\end{figure}\n\nThe box-whisker plots in Figures \\ref{fig:image_11_a} to \\ref{fig:image_11_c}\nsummarize the simulation study results. The x-axis shows the number of\nvertices of the polygon and the y-axis shows the $F_1$-measure ratio. The bottom and\nthe top of the box show the first and the third quartile values. The ends of\nthe whiskers represent the minimum and the maximum value of the $F_1$-measure\nratio. The diamond shape indicates the mean value and the horizontal line in the\nbox indicates the second quartile.\n\\subsection{Comparison of the best fit across $s$}\nFor each instance of a polygon we looked at the $s$ value which provides the best\nfit in terms of the $F_1$-measure ratio for each of the methods. Figure\n\\ref{fig:image_11_a} shows the $F_{1}$-measure ratio computed using the\nmaximum values of the $F_{1}$-measures. The plot shows that the $F_1$-measure ratio\nis greater than $\\approx 0.92$ across all numbers of vertices. The\n$F_1$-measure ratio in the top three quartiles is greater than $\\approx$ 0.97\nacross all numbers of vertices. Using the best possible value of $s$, the\nsampling method provides results comparable to the full SVDD method.\n\\begin{figure}\n\\centering\n\\includegraphics[width=3.45in]{image104}\n\\caption{Box-whisker plot: Number of vertices vs. 
Ratio of max $F_1$ measures \\label{fig:image_11_a}}\n\\end{figure}\n\n\\subsection{Results Using Same Value of $s$}\nWe evaluated the sampling method against the full SVDD method for the same value\nof $s$. The plots in Figure \\ref{fig:image_11_b} illustrate the results for\nsix different values of $s$. The plots show that, except for one outlier result in Figure \\ref{fig:image_11_b} (d), the $F_1$-measure\nratio is greater than 0.9 across all numbers of vertices and values of $s$. In Figures\n\\ref{fig:image_11_b} (c) to (f), the top three quartiles of the $F_{1}$-measure\nratio were consistently greater than $\\approx 0.95$. Training using the sampling method\nand the full SVDD method, with the same $s$ value, provides similar results.\n\n\\begin{figure}\n \\centering\n\t\\subfloat[$s$=1]{\\includegraphics[width=3.00in]{image105_1}}\\\\\n\t\\subfloat[$s$=1.4]{\\includegraphics[width=3.00in]{image105_2}}\\\\\n\t\\subfloat[$s$=2.3]{\\includegraphics[width=3.00in]{image105_3}}\\\\\n\t\\subfloat[$s$=3.4]{\\includegraphics[width=3.00in]{image105_4}}\n \\phantomcaption\n\\end{figure}\n\\begin{figure}\n \\ContinuedFloat\n \\centering\n\t\\subfloat[$s$=4.1]{\\includegraphics[width=3.00in]{image105_5}}\\\\\n\t\\subfloat[$s$=5.0]{\\includegraphics[width=3.00in]{image105_6}}\n \\caption[]{Box-whisker plot: Number of vertices vs. $F_1$-measure ratio for different $s$ values \\label{fig:image_11_b}}\n\\end{figure}\n\n\\subsection{Overall Results}\nFigure \\ref{fig:image_11_c} provides a summary of all simulations performed for\ndifferent polygon instances and varying values of $s$. The plot shows that,\nexcept for one outlier result, the $F_1$-measure ratio is greater than 0.9 across all numbers of vertices. The $F_1$-measure\nratio in the top three quartiles is greater than $\\approx 0.98$ across\nall numbers of vertices. 
The accuracy of the sampling method is\ncomparable to the full SVDD method.\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=3.45in]{image103}\n\\caption{Box-whisker plot: Number of vertices vs. $F_1$-measure ratio }\n\\label{fig:image_11_c}\n\\end{figure}\n\n\n\\section{Conclusion}\n\\label{cn}\nWe propose a simple sampling-based iterative method for training SVDD. The\nmethod incrementally learns during each iteration by utilizing information\ncontained in the current master set of support vectors and the new information\nprovided by the random sample. After a certain number of iterations, the\nthreshold $R^2$ value and the center $a$ start to converge. At this point, the\nSVDD of the master set of support vectors is close to the SVDD of the training data\nset. We provide a mechanism to detect convergence and establish a stopping\ncriterion. The simplicity of the proposed method ensures ease of implementation.\nThe implementation involves writing additional code for calling the SVDD training\ncode iteratively, maintaining a master set of support vectors, and implementing a\nconvergence criterion based on the threshold $R^2$ and the center $a$. We do not\npropose any changes to the core SVDD training algorithm as outlined in Section\n\\ref{mfsvdd}. The method is fast. The number of observations used for\nfinding the SVDD in each iteration can be a very small fraction of the number\nof observations in the training data set. The algorithm provides good results\nin many cases with a sample size as small as $m+1$, where $m$ is the number of variables in\nthe training data set. The small sample size ensures that each iteration of the\nalgorithm is extremely fast. The proposed method provides a fast alternative\nto the traditional SVDD training method, which uses information from all observations\nin one iteration. 
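The iterative procedure summarized above can be sketched as follows. This is a sketch of the control flow only; `train` stands in for any SVDD solver that returns the support vectors, the center $a$, and the threshold $R^2$, and the names and convergence test here are our own simplifications rather than the exact implementation:

```python
import numpy as np

def sample_svdd(X, train, sample_size, tol=1e-4, max_iter=100, seed=None):
    """Sampling-based iterative SVDD training (control-flow sketch).

    `train` is a placeholder for an SVDD solver: given an array of points,
    it returns (support_vectors, center, r2)."""
    rng = np.random.default_rng(seed)
    # Initialize the master set of support vectors from one random sample.
    sv, center, r2 = train(X[rng.choice(len(X), sample_size, replace=False)])
    for _ in range(max_iter):
        # Combine the current support vectors with a fresh random sample.
        sample = X[rng.choice(len(X), sample_size, replace=False)]
        new_sv, new_center, new_r2 = train(np.vstack([sv, sample]))
        # Stop when the threshold R^2 and the center a have converged.
        if abs(new_r2 - r2) < tol and np.linalg.norm(new_center - center) < tol:
            return new_sv, new_center, new_r2
        sv, center, r2 = new_sv, new_center, new_r2
    return sv, center, r2
```

Each call to `train` sees only the master set plus a small sample, which is why the per-iteration cost stays nearly independent of the training data set size.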
Even though the sampling-based method provides only an approximation\nof the data description, in applications where the training data set is large, a fast\napproximation is often preferred to an exact description that takes more time to determine.\nWithin the broader realm of the Internet of Things (IoT), we\nexpect to see multiple applications of SVDD, especially to monitor industrial\nprocesses and equipment health, and many of these applications will require fast periodic training\nusing large data sets. This can be done very efficiently with our\nmethod.\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe two gas giants of the \\replaced{solar}{Solar} system, Jupiter and Saturn, are host to a large number of satellites and rings. The satellites of both planets follow a similar progression pattern. The inner region of each system consists of small icy satellites, with an accompanying ring system \\citep{Thomas1998InnerSatJup, Throop2004JupiterRings, Porco2005CassiniRingSat,Thomas2013InnerSatSatu}. Further out, there are larger icy\/silicate satellites \\citep{Thomas2010SaturnSatsCassiniProps, Deienno2014OrbitGalileanSat}. In the outer system, both planets have a series of irregular satellites, small satellites with high eccentricities and inclinations \\citep{Nesvorny2003IrrSatEvol, Sheppard2003IrrSatNature, Jewitt2007IrregularSats}. It is thought that these satellites were captured from other populations of small \\replaced{solar}{Solar} system bodies \\citep{Colombo1971JupSatsForm, Heppenheimer1977Capture, Pollack1979SatGasDrag, Sheppard2003IrrSatNature, Nesvorny2004IrrSatFamilyOrigin, Johnson2005PhoebeKuiper, Nesvorny2007IrrSatCap, Nesvorny2014IrrCapture}. This is in contrast to the inner satellites, which are thought to have accreted in a circumplanetary disk \\citep[e.g.][]{Canup2002GalSatAcc, Canup2010SaturnSatOrigin}. 
Such a formation mechanism is thought to resemble the accretion of planets in a protoplanetary disk around a young star \\citep{Lissauer1987PlanetAccretion}, a conclusion that is supported by the recent discovery of the TRAPPIST-1 planetary system \\citep{Gillon2016Trapist1}. That system features at least seven Earth-mass planets orbiting a very low-mass star. The star itself, TRAPPIST-1, is within two orders of magnitude more massive than Jupiter, and similar in size. The seven planets span an area comparable to that of Jupiter's regular satellite system. Studying and understanding the gas giant systems in our own \\replaced{solar}{Solar} system can therefore provide context for future exploration of low-mass exoplanetary systems. \n\n\\subsection{The Jovian System}\nHistorically, \\citet{Galileo1610SidereusNuncius} discovered the first satellites in the Jovian system, the large Galileans, Io, Europa, Ganymede and Callisto. Our knowledge of these satellites has increased greatly, as a result of both improved ground-based instrumentation \\citep[e.g.][]{Sparks2016EuropaHST,Vasundhara2017GalileansGround} and spacecraft visitations \\citep[e.g.][]{Smith1979Jupiter, Grundy2007NewHorizonsJupiterSats, Greenberg2010IcyJovian}\\deleted{have given us a more detailed understanding of these objects}. \n\nAmalthea, one of the inner set of Jovian satellites, was discovered by \\citet{Barnard1892Amalthea}. A few years later, the first two small irregular satellites, Himalia \\citep{Perrine1905Himalia} and Elara \\citep{Perrine1905Elara}, were discovered in inclined, prograde orbits. The discovery of Pasiphae three years later by \\citet{Melotte1908Pasiphae} is significant as this was only the second satellite in the Solar system to be found on a retrograde orbit, and the first such object found in the Jovian system. 
Several other irregular satellites were discovered in the first half of the 20th century: Sinope \\citep{Nicholson1914Sinope}, Lysithea \\citep{Nicholson1938LysitheaCarme}, Carme \\citep{Nicholson1938LysitheaCarme} and Ananke \\citep{Nicholson1951Ananke}. Leda, another small prograde irregular, was discovered 20 years later by \\citet{Kowal1975Leda}. Themisto, the first Jovian satellite smaller than 10 km to be discovered, was found that same year \\citep{Kowal1975Themisto} and subsequently lost. Themisto was rediscovered by \\citet{Sheppard2000ThemistoReDis} nearly 20 years later. The Voyager visitations of Jupiter discovered the remaining three inner satellites, Metis \\citep{Synnott1981MetisDiscov}, Adrastea \\citep{Jewitt1979AdrasteaDiscov} and Thebe \\citep{Synnott1980ThebeDiscov}, along with a ring system \\citep{Smith1979Jupiter}. These three satellites, Amalthea and the ring system, would be imaged again by the Galileo \\citep{OckertBell1999JupiterRing} and Cassini \\citep{Porco2005CassiniRingSat} spacecraft during their missions. \n\nThe irregular Jovian satellites orbit the planet with semi-major axes an order of magnitude greater than those of the Galilean moons, and have large eccentricities and inclinations. In the early years of the 21st century, extensive surveys were carried out to search for the Jovian irregular satellites \\citep{Scotti2000Callirrhoe, Sheppard2001SatsJupDiscovery,\nSheppard2002JupSatDisc, Gladman2003JupIAU1, Gladman2003JupIAU2,\nSheppard2003JupIAU1, Sheppard2003JupIAU2, Sheppard2003JupIAU3,\nSheppard2003IrrSatNature,\nSheppard2004JupIAU1,Sheppard2004JupIAU2, Beauge2007IrregSatRsonance,\nJacobson2011NewJupSats, Sheppard2012JupIAU}. These surveys increased the number of known Jovian satellites from 14 after Voyager to the 67 known today. 
The inner five irregular satellites, Leda, Himalia, Lysithea, Elara and Dia, have prograde orbits and have previously been classified into the Himalia group \\citep{Nesvorny2003IrrSatEvol, Sheppard2003IrrSatNature}. Themisto and Carpo were proposed as single members of their own groups by \\citet{Sheppard2003IrrSatNature}. The remainder of the irregular satellites have retrograde orbits. Based on similarities in semi-major axis, inclination and eccentricity, these satellites have been grouped into families by \\citet{Sheppard2003IrrSatNature} and \\citet{Nesvorny2003IrrSatEvol}. These dynamical families are typified by their largest member, Himalia representing the inner prograde satellites, with the retrograde ones being broken down into the Ananke, Pasiphae and Carme families. Recently, several additional small irregular satellites have been discovered \\citep{Jacobson2011NewJupSats, Sheppard2012JupIAU}, which are yet to be named or classified. With the discovery of new satellites \\citep{Scotti2000Callirrhoe, Sheppard2001SatsJupDiscovery, Beauge2007IrregSatRsonance,\nJacobson2011NewJupSats, Sheppard2012JupIAU} and additional information from the Cassini spacecraft \\citep{Porco2005CassiniRingSat}, a revisitation of the classification of the Jovian irregular satellites \\citep{Nesvorny2003IrrSatEvol, Sheppard2003IrrSatNature, Jewitt2007IrregularSats} is warranted. \n\n\\subsection{The Saturnian System}\nThe Saturnian system is broadly similar to that of Jupiter, but exhibits greater complexity. One of the most striking features, visible to even the most modest telescope, is Saturn's ring system. First observed by Galileo in 1610, it was \\citet{Huygens1659systema} who recognized that the objects surrounding Saturn were in fact rings. The rings themselves are composed of individual particles, from micrometer to meter size \\citep{Zebker1985SaturnRingParticle}. 
Embedded within several of the main rings are a series of small moonlets \\citep{Tiscareno2006SaturnAringMoonlets} and several shepherd satellites \\citep{Showalter1991Pan, Porco2007SaturnSmallSats, Cuzzi2014FringPromethius}. The co-orbitals Janus and Epimetheus \\citep{Yoder1983SaturnCoorobiting, Yoder1989JanusEpiMassOrbit, Nicholson1992CoorbitalSaturn, Treffenstdt2015JanusEpiFormation, ElMoutamid2016JansSwapAring}, and their associated faint ring system \\citep{Winter2016JanusEpiRing}, are unique to the Saturn system. Just beyond the Janus\/Epimetheus orbit, there is a diffuse G-ring, the source of which is the satellite Aegaeon \\citep{Hedman2007Gring}.\n\n\\citet{Huygens1659systema} also discovered Saturn's largest satellite, Titan. Earth-based observations highlighted the methane-based atmosphere of Titan \\citep{Kuiper1944TitanAtmos, Karkoschka1994TitanESO}, with further characterization by the Cassini spacecraft \\citep{Niemann2005TitanAtmos} and the Huygens lander \\citep{Lebreton2005HuygensTiten}. The bulk composition of Titan is analogous to that of the other icy satellites, with an icy shell, subsurface water ocean and silicate core \\citep{Hemingway2013TitanIceshell}. There are seven other mid-sized icy satellites, with semi-major axes on a similar order of magnitude to \\added{that of} Titan. The five largest, Mimas, Enceladus, Tethys, Dione and Rhea, are large enough to be in hydrostatic equilibrium. All of the mid-sized satellites are thought to be predominantly composed of water ice, with some contribution from silicate rock, and may contain subsurface liquid oceans \\citep{Matson2009SaturnSat, Filacchione2012VIMS3}. Those satellites closer to Saturn than Titan, Mimas, Enceladus, Tethys, Dione and Rhea, are embedded in the E-ring \\citep{Feibelman1967SatEring, Baum1981SatEring, Hillier2007EringComp, Hedman2012EringStruc}. The Cassini mission identified the source of this ring as the southern cryo-plumes of Enceladus \\citep{Sphan2006EnceladusEring}. 
\n\nIn addition to the larger icy satellites, there are four small Trojan satellites \\citep{Porco2005CassiniRingSat}, situated at the leading and trailing Lagrange points, 60\\degree \\ ahead of or behind the parent satellites in their orbit. Tethys has Telesto and Calypso as Trojan satellites, while Helene and Polydeuces are Trojan satellites of Dione. So far, these Trojan satellites are unique to the Saturnian system. Between the orbits of Mimas and Enceladus, there are the Alkyonides, Methone, Anthe and Pallene, recently discovered by the Cassini spacecraft \\citep{Porco2005CassiniRingSat}. Each of the Alkyonides has its own faint ring arc \\citep{Hedman2009SatRingArcs}, comprised of material similar to that of the satellite. Dynamical modeling by \\citet{Sun2017MethoneDust} supports the theory of \\citet{Hedman2009SatRingArcs}, that the parent satellite is the source of the rings.\n\nIn the outer Saturnian system there are a large number of smaller irregular satellites, with 38 known to date. The first of these irregular satellites to be discovered was Phoebe, which was the first planetary satellite to be discovered photographically \\citep{Pickering1899Phoebe}. Phoebe was also the first satellite to be discovered moving on a retrograde orbit \\citep{Pickering1905Phoebe, Ross1905Phoebe}. Phoebe is the best-studied irregular satellite and the only one for which in-situ observations have been obtained \\citep{Clark2005Phoebe}. Recently, a large outer ring associated with Phoebe and the other irregular satellites has been discovered \\citep{Verbiscer2009SatLarRing}. It has been suggested that Phoebe may have originated in the Edgeworth-Kuiper Belt and been captured into orbit around Saturn \\citep{Johnson2005PhoebeKuiper}. The other Saturnian irregular satellites were discovered in extensive surveys during the early 21st century \\citep{Gladman200112Sat, Sheppard2003IAUJupSat, Jewitt2005IAUCSat,\nSheppard2006SatIAUC, Sheppard2007SatIAUC}. 
\\replaced{With}{Due to} the small size of the majority of these satellites, only their orbital information is available. There are nine prograde and 29 retrograde outer satellites, and attempts have been made to place them into families based on dynamical \\citep{Gladman200112Sat, Jewitt2007IrregularSats, Turrini2008IrregularSatsSaturn} and photometric \\citep{Grav2003IrregSatPhoto, Grav2007IrregSatCol} information. In the traditional naming convention \\citep{Grav2003IrregSatPhoto}, the Inuit family, Ijiraq, Kiviuq, Paaliaq, Siarnaq and Tarqeq, are small prograde satellites, whose inclination is between 45\\degree and 50\\degree . The Gallic family, Albiorix, Bebhionn, Erriapus and Tarvos, is a similar, prograde group, but with inclinations between 35\\degree and 40\\degree . The retrograde satellites are all grouped into the Norse family, including Phoebe. There is a possibility that the Norse family could be further split into subfamilies, based on photometric studies \\citep{Grav2003IrregSatPhoto, Grav2007IrregSatCol}. The convention of using names from respective mythologies for the satellite clusters \\citep{Jewitt2007IrregularSats} has become the default standard for the irregular satellite families of Saturn. \n\n\\subsection{Formation Theories}\n\nThe purpose of taxonomy and classification, beyond simple grouping, is to investigate the origin of objects. The origin of the irregular satellites is a major topic of ongoing study \\citep{Nesvorny2012JumpingJupiter, Nesvorny2014IrrCapture}. Here we present an overview for context. There are three main theories for the formation of the Jovian satellites: formation via disk accretion \\citep{Canup2002GalSatAcc}; via nebula drag \\citep{Pollack1979SatGasDrag}; or via dynamical capture \\citep{Nesvorny2003IrrSatEvol, Nesvorny2007IrrSatCap}. The satellites that are captured, either by nebula drag or through dynamical means, are thought to be from \\replaced{solar}{Solar} system debris, such as asteroids and comets. 
\n\nThe disk accretion theory has generally been accepted as the mechanism for the formation of the inner prograde satellites of Jupiter \\citep{Canup2002GalSatAcc}. The satellites form from dust surrounding proto-Jupiter in a process analogous to the formation of planets around a star \\citep{Lissauer1987PlanetAccretion}. This surrounding disk would have lain in the equatorial plane of Jupiter, with material being accreted to the planet itself through the disk. This would explain both the prograde, coplanar orbits of the regular satellites and their near-circular orbits.\n\nThe second theory requires satellites to be captured in the original Jovian nebula \\citep{Pollack1979SatGasDrag, Cuk2004HimaliaGasDrag}. Before it coalesced into a planet, Jupiter is proposed to have had a greater radius and a lower density than it does now. There was a `nebula' surrounding this proto-Jupiter. As other pieces of \\replaced{solar}{Solar} system debris crossed into the Hill sphere of this nebula, they would be slowed down by friction and captured as satellites. Related to this is the concept of a pull-down mechanism \\citep{Heppenheimer1977Capture}. As a gas giant increases in mass from accretion \\citep{Pollack1996GiantPlanetAccretion}, its Hill sphere increases. As a result, small \\replaced{solar}{Solar} system bodies can possibly be captured as irregular satellites.\n\nDynamical capture can explain the retrograde orbits of the Jovian satellites \\citep{Nesvorny2003IrrSatEvol}. The Hill sphere of a planet dictates the limit of its gravitational influence over other bodies. The theory \\citep{Nesvorny2003IrrSatEvol, Nesvorny2007IrrSatCap} states that it is impossible for a satellite to be captured in a three-body system (Sun, planet and satellite). 
The Nice model of the \\replaced{solar}{Solar} system \\citep{Tsiganis2005NICEplanets, Nesvorny2007IrrSatCap, Nesvorny2014IrrCapture} has a fourth body interaction placing the satellite into a stable orbit inside the Hill sphere of the gas giant. Recently the Nice model was updated to include a fifth giant planet \\citep{Nesvorny2012JumpingJupiter}. This updated theory has the new planet interacting with Jupiter and allowing for the capture of the satellites, before the fifth giant planet is ejected from the \\replaced{solar}{Solar} system. Collisions between objects could also play a part in the dynamical capture of the irregular satellites \\citep{Colombo1971JupSatsForm}.\n\nThe formation of the Saturnian satellite system is thought to be similarly complex. The inner satellites are possibly formed from accretion within the ring system \\citep{Charnoz2010SaturnMooletsfromMainRings} or from the breakup of a large, lost satellite \\citep{Canup2010SaturnSatOrigin}. Modeling of the Saturnian system by \\citet{Salmon2017SaturnMidAccretion} has \\replaced{indicated}{shown} that the mid-sized satellites could have formed from a large ice-dominated ring, with contamination of \\replaced{asteroids}{cometary material} during the Late Heavy Bombardment, delivering the requisite silicate rock. Being the largest satellite in the Saturnian system, Titan is thought to have formed from accretion of proto-satellites \\citep{Asphaug2013SatMerger}. The Saturnian irregular satellites are predicted to be captured objects \\citep{Jewitt2007IrregularSats}, though their origins are still in dispute. Collisions are thought to have played a part in the capture of the irregular satellites of Saturn \\citep{Turrini2009IrregularSatsSaturn}. 
The cratering data provided by the Cassini spacecraft \\citep{Giese2006PhoebeTopo} supports this hypothesis.\n\n\\subsection{This Project}\nWith the discovery of several new irregular satellites \\citep{Scotti2000Callirrhoe, Gladman200112Sat, Sheppard2001SatsJupDiscovery,\nSheppard2002JupSatDisc, Gladman2003JupIAU1,Gladman2003JupIAU2,\nSheppard2003IAUJupSat, Sheppard2003JupIAU1,Sheppard2003JupIAU2,Sheppard2003JupIAU3,\nSheppard2003IrrSatNature,\nSheppard2004JupIAU1,Sheppard2004JupIAU2, Jewitt2005IAUCSat, Sheppard2006SatIAUC,Sheppard2007SatIAUC, \nJacobson2011NewJupSats, Sheppard2012JupIAU}, along with the detailed examination of the Jovian and Saturnian systems by the Cassini spacecraft \\citep{Brown2003CassiniJupiter, \nPorco2005CassiniRingSat, Cooper2006CassiniAmaltheaThebe, Giese2006PhoebeTopo, Porco2006EnceladusPlume, Sphan2006EnceladusEring, Filacchione2007VIMS1, Nicholson2008VIMSRings, Matson2009SaturnSat, Buratti2010SatInnerSat, Filacchione2010VIMS2, Thomas2010SaturnSatsCassiniProps, \nClark2012VIMSIapetus, Filacchione2012VIMS3, Spitale2012s2009s1, Tosi2010IapetusDark, Hirtzig2013VIMSTitan, Brown2014Rayleigh, Filacchione2014VIMSrings, Filacchione2016VIMS4}, there is an opportunity to revisit the classification of the satellite systems of the gas giants. We apply a technique called \\textit{cladistics} to characteristics of the Jovian and Saturnian satellites, in order to examine the relationships between objects in the systems. The purpose of this is twofold. First, due to their \\replaced{well established}{well-established} classification systems, the Jovian and Saturnian satellite systems offer an opportunity to test the cladistical technique in a planetary science context. This project is an extension of \\citet{Holt2016JovSatCald} and together they form the first use of cladistics for planetary bodies. 
The second aim of the project is to classify recently discovered satellites, as well as to provide context for future work.\n\nIn Section \\ref{Methods}, we introduce the cladistical technique and how it is used in this paper. The resulting taxonomic trees for the Jovian and Saturnian systems, along with their implications for the taxonomy of the satellites, are presented in Sections \\ref{JupiterTax} and \\ref{SaturnTax} respectively. Section \\ref{Discussion} discusses the implications of cladistics in a planetary science context, along with some remarks on the origins of the gas giant satellites and possible future work.\n\n\\section{Methods}\n\\label{Methods}\nIn this section, we present an overview of the cladistical method and how it is applied to the Jovian and Saturnian satellite systems. Following a general overview of cladistics, the section progresses into the specifics of this study, including the characteristics used in the paper. The section concludes with an explanation of the specific matrices of the Jovian and Saturnian satellites and how they are applied to the cladistical method. \n\\subsection{Cladistics}\n\\label{cladistics}\n\nCladistics is an analytical technique, originally developed to examine the relationships between living organisms \\citep{Hennig1965PhylogeneticSystem}. A \\textit{clade} is the term used for a cluster of objects, or \\textit{taxa}, that are related to each other at some level. In astronomy\/astrophysics, the technique has been used to look at the relationships between stars \\citep{FraixBurnet2015StarClads,Jofre2017StarsClads}, gamma-ray bursts \\citep{Cardone2013GRBClads}, globular clusters \\citep{FraixBurnet2009GlobularClusters} and galaxies \\citep{FraixBurnet2006DwarfGalaxies, FraixBurnet2010EarlyGalx, FraixBurnet2012SixPermGal, FraixBurnet2015GalClad}. These works, along with this study, form a body of work in the new field of `Astrocladistics' \\citep{FraixBurnet2015GalClad}. 
There are good reasons to believe that cladistics can provide sensible groupings in a planetary science context. Objects that have similar formation mechanisms should have comparable characteristics. Daughter objects that are formed by breaking pieces off a larger object should also have similar characteristics. The advantage of this method over other multivariate analysis systems is the inclusion of a larger number of characteristics, enabling us to infer more detailed relationships. \n\nThe vast majority of work in cladistics and phylogenetics has been undertaken in the Biological and Paleontological sciences. Biologists and Paleontologists use cladistics as a method to investigate the common origins\\replaced{ or `tree', of life}{, or `tree' of life} \\citep{Darwin1859Origin, Hennig1965PhylogeneticSystem, Hug2016TreeLife}, and how different species are related to one another \\citep[e.g.][]{Van1993new, Salisbury2006originCrocs, Vrivcan2011two, Smith2017new, Aria2017burgess}. Historically, the investigation into relationships between different organisms reaches back to \\citet{Darwin1859Origin}. Early attempts at using tree analysis techniques occurred in the early 20th century \\citep{Mitchell1901BridsClads, Tillyard1926insects, Zimmermann1931arbeitsweise}. \\citet{Hennig1965PhylogeneticSystem} is regarded as one of the first to propose `phylogenetic systematics', the technique that would become modern cladistical\/phylogenetic analysis. The technique was quickly adopted by the biological community and used to analyze every form of life, from Bacteria \\citep[e.g.][]{Olsen1994winds} to Dinosauria \\citep[e.g.][]{Bakker1974dinosaur} and our own ancestors \\citep[e.g.][]{Chamberlain1987EarlyHominid}. Recently, the use of DNA led to the expansion of the technique to become molecular phylogenetics \\citep{Suarez2008HistoryPhylo}. 
As computing power improves, larger datasets can be examined, and our understanding of the `Tree of Life' improves \\citep{Hug2016TreeLife}. For a detailed examination of the history of cladistics and phylogenetics, we refer the interested reader to \\citet{Hamilton2014EvolSystem}.\n\n\nThe cladistical methodology begins with the creation of a taxon-character matrix. Each matrix is a 2-d array, with the taxa, the objects of interest, in the rows, and each characteristic in the columns. The taxa used in this study are the rings and satellites of the Jovian and Saturnian systems. The orbital, physical and compositional properties of the rings and satellites are used as characteristics; see Section \\ref{Characteristics}. For a given taxon, each corresponding characteristic is defined as a numerical state, usually a 0 or 1, though multiple, discrete states may be used. A 0 numerical state is used to indicate the original or `base' state. An \\textit{outgroup}, or a taxon outside the area of interest, is used to dictate the 0 base state of a characteristic. For this study, we use the Sun as an outgroup. An unknown character state can be accounted for with a question mark (?). This taxon-character matrix is created using the Mesquite software package \\citep{Mesquite}.\n\nA set of phylogenetic trees is subsequently created from the Mesquite taxon-character matrix, using Tree analysis using New Technology (TNT) 1.5 \\citep{Goloboff2008TNT, Golboff2016TNT15}, via the Zephyr Mesquite package \\citep{MesquiteZephyr}. The trees are created based on the concept of maximum parsimony \\citep{Maddison1984outgroup}: the tree with the shortest length, i.e. the smallest number of changes, is the most likely to show the true relationships. TNT uses a method of indirect tree length estimation \\citep{Goloboff1994Treelengths, Goloboff1996FastPasrimony} in its heuristic search for trees with the smallest length. 
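The taxon-character matrix described above can be illustrated with a toy example. The following Python sketch uses pandas purely for illustration; the taxa, characteristics, and state values are simplified placeholders, not the paper's actual 38-character matrix.

```python
import pandas as pd

# Minimal illustrative taxon-character matrix (not the paper's full
# 38-character set): rows are taxa, columns are characteristics.
# 0 is the base state, defined by the Sun outgroup; '?' marks an
# unknown state, which cladistical software treats as missing data.
matrix = pd.DataFrame(
    {
        "retrograde_orbit": [0, 0, 1],
        "water_ice_present": [0, 1, "?"],
        "albedo_bin": [0, 3, 1],  # binned continuous characteristic
    },
    index=["Sun", "Europa", "Ananke"],  # the Sun is the outgroup
)
print(matrix.loc["Ananke", "water_ice_present"])  # '?'
```

In a real analysis this matrix would be exported to a format readable by Mesquite or TNT (e.g. NEXUS), with one row per satellite or ring.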
TNT starts the drift algorithm \\citep{Goloboff1996FastPasrimony} search by generating 100 Wagner trees \\citep{Farris1970MethodsComp}, with 10 drifting trees per replicate. These starting trees are then checked using a Tree bisection and reconnection (TBR) algorithm \\citep{Goloboff1996FastPasrimony} to generate a block of \\replaced{equality}{equally} parsimonious trees. Closely related taxa are grouped together in the tree. Ideally, all equally parsimonious trees would be stored, but this is computationally prohibitive. For this analysis, 10000 equally parsimonious trees are requested from TNT to create the tree block. Once a tree block has been generated and imported into Mesquite \\citep{Mesquite} for analysis, a 0.5 majority-rules consensus tree can be constructed using \\replaced{the a well established}{a well-established} algorithm \\citep{Margush1981MajorityRules}. This tree is generated as a consensus of the block, with a tree branch being preserved if it is present in the majority of the trees. The resulting branching taxonomic tree is then a hypothesis for the relations between taxa, the satellites and rings of the gas giants. \n\nWe can assess how accurately a tree represents true relationships between taxa. The number of steps it takes to create a tree is called the \\textit{tree length}. A smaller tree length \\replaced{indicates}{implies} a more likely tree, as it is more parsimonious. Tree length estimation algorithms \\citep{Farris1970MethodsComp} continue to be improved, and are fully explored in a modern context by \\cite{Goloboff2015Parsimony}. Two other tree metrics, the consistency and retention indices, are measures of \\textit{homoplasy}, or the independent loss or gain of a characteristic \\citep{Givnish1997consistency}. A high amount of homoplasy in a tree is \\replaced{indicative}{suggestive} of random events, rather than the desired relationships between taxa \\citep{Brandley2009Homoplasy}. 
Mathematically, homoplasy can be represented by the consistency index ($CI$) of a tree (equation (\\ref{ConIndexEq}) \\citep{Kluge1969Cladistics}), which is related to the minimum number of changes ($M$) and the number of changes actually observed on the tree ($S$). \n\n\\begin{equation}\nCI = M\/S\n\\label{ConIndexEq}\n\\end{equation}\n\nA tree with no \\textit{homoplasy} would have a consistency index of 1. \\added{One of the criticisms of the consistency index is that it shows a negative correlation with the number of taxa and characteristics \\citep{Archie1989homoplasy, Naylor1995RetentionIndex}. In order to combat the issues with the consistency index, a new measure of homoplasy, the retention index, was created \\citep{Farris1989retention}.} The retention index ($RI$) \\citep{Farris1989retention} \\deleted{, a second measure of homoplasy} introduces the maximum number of changes ($G$) required into equation (\\ref{RetentionIndexEq}). \n\n\\begin{equation}\nRI = \\frac{G - S}{G - M}\n\\label{RetentionIndexEq}\n\\end{equation}\n\nAs with the consistency index, a retention index of 1 indicates a tree free of homoplasy. Both of these metrics \\replaced{give an indication of}{show} how confidently the tree represents the \\replaced{true}{most plausible} relationships between taxa. Values of both the consistency and retention indices closer to 1 indicate that the tree represents the true relationships between taxa \\citep{Sanderson1989VaiationHomoplasy}. 
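The two homoplasy metrics can be computed directly from the tree statistics. The following minimal Python sketch (an illustration, not part of the published pipeline) assumes the counts $M$, $S$ and $G$ have already been obtained from the tree software:

```python
def consistency_index(m, s):
    """Consistency index CI = M / S (Kluge & Farris 1969).

    m: minimum possible number of character-state changes.
    s: number of changes actually observed on the tree.
    """
    return m / s


def retention_index(m, s, g):
    """Retention index RI = (G - S) / (G - M) (Farris 1989).

    g: maximum possible number of changes on any tree.
    """
    return (g - s) / (g - m)


# A tree with no homoplasy (S == M) has CI = RI = 1.
print(consistency_index(10, 10))    # 1.0
print(retention_index(10, 10, 25))  # 1.0

# More observed changes than the minimum indicate homoplasy,
# and both indices fall below 1.
print(round(consistency_index(10, 22), 3))    # 0.455
print(round(retention_index(10, 22, 25), 3))  # 0.2
```

Note that the retention index also falls to 0 when the observed changes reach the maximum ($S = G$), which is why it is better behaved than $CI$ for large matrices.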
For a detailed examination of the mathematics behind the algorithms and statistics used in cladistical analysis, we direct the interested reader to \\cite{Gascuel2005MathEvolPhylogeny}.\n\nA traditional form of multivariate hierarchical clustering is used in the detection of asteroid collisional families \\citep{Zappala1990HierarchicalClustering1, Zappala1994HierarchicalClustering2}.\nThis method of clustering uses Gauss equations to find clusters in $n$-parameter space, typically using semi-major axis, eccentricity and inclination \\citep{Zappala1990HierarchicalClustering1}. Work has also been undertaken incorporating the known colors \\citep{Parker2008AsteroidFamSDSS} and albedo \\citep{Carruba2013AsteroidFamilies} of the asteroids \\citep{Milani2014AsteroidFamilies} into the classical method, though this reduces the dataset significantly. The classical method of multivariate hierarchical clustering was used by \\citet{Nesvorny2003IrrSatEvol} to identify the Jovian irregular satellite families. \\citet{Turrini2008IrregularSatsSaturn} expanded the classical method into the Saturnian irregular satellites, and utilized the Gauss equations, solved for velocities, in a similar way to \\cite{Nesvorny2003IrrSatEvol} to verify the families found, using semi-major axis ($a$), eccentricity ($e$) and inclination ($i$) of the satellites. The rationale behind these calculations is that the dispersal velocities of the clusters would be similar to the escape velocities of the parent body. In this work we use the inverse Gauss equations, equations (\\ref{InvGauss1}), (\\ref{InvGauss2}) and (\\ref{InvGauss3}), substituted into equation (\\ref{VelocityEq}), to test the dispersal velocities of the clusters found through cladistics. $\\delta a$, $\\delta e$ and $\\delta i$ are the differences in semi-major axis, eccentricity and inclination between the individual satellites and the reference object. $a_r$, $e_r$, $i_r$ and orbital frequency ($n_r$) are parameters of the reference object. 
In this case, the reference object is taken as the largest member of the cluster. The true anomaly ($f$) and perihelion argument ($w + f$) at the time of disruption are unknown. Only in special cases, such as for young asteroid families \\citep[e.g.][]{Nesvorny2002AsteroidBreakup}, can the values of $f$ and $(w + f)$ be inferred from observations. In this work we adopt $f = 90\\degree$ and $(w+f) = 45\\degree$ as reasonable assumptions. Previous works by \\cite{Nesvorny2003IrrSatEvol} and \\cite{Turrini2008IrregularSatsSaturn} using this method do not \\replaced{indicate}{specify} the true anomaly ($f$) and perihelion argument ($w + f$) used, nor the central reference point, making any comparisons between them and this work relative rather than absolute. The final $\\delta V_d$ for the cluster is composed of the velocities in the direction of orbital motion ($\\delta V_T$), the radial direction ($\\delta V_R$) and perpendicular to the orbital plane ($\\delta V_W$). \n\n\\begin{equation}\n\\delta V_T = \\frac{n_r a_r (1+e_r \\cos f)}{\\sqrt{1-e_r^2}} \\cdot \\left [ \\frac{\\delta a}{2 a_r} - \\frac{e_r \\delta e}{1-e_r^2} \\right ]\n\\label{InvGauss1}\n\\end{equation}\n\n\\begin{equation}\n\\delta V_R = \\frac{n_r a_r}{(\\sqrt{1-e_r^2})\\sin f} \\cdot \\left [ \\frac{\\delta e (1 + e_r \\cos f)^2}{1-e_r^2} - \\frac{\\delta a (e_r + e_r \\cos^2 f + 2\\cos f)}{2a_r} \\right ]\n\\label{InvGauss2}\n\\end{equation}\n\n\\begin{equation}\n\\delta V_W = \\frac{\\delta i \\cdot n_r a_r }{\\sqrt{1-e_r^2}} \\cdot \\frac{1+e_r \\cos f}{\\cos (w+f)}\n\\label{InvGauss3}\n\\end{equation}\n\n\\begin{equation}\n\\delta V_d = \\sqrt{\\delta V_T^2 + \\delta V_R^2 + \\delta V_W^2}\n\\label{VelocityEq}\n\\end{equation}\n\nCladistics offers a fundamental advantage over this primarily dynamics-based clustering, and that is the incorporation of unknown values. 
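The inverse Gauss equations translate directly into code. This Python sketch (an illustration, not the authors' program) evaluates equations (\ref{InvGauss1})--(\ref{VelocityEq}) for a single satellite relative to the reference object, with the $f = 90\degree$ and $(w+f) = 45\degree$ assumptions as defaults; the reference values in the usage lines are illustrative only, not the published cluster parameters.

```python
import math

def dispersal_velocity(da, de, di, a_r, e_r, n_r,
                       f=math.radians(90.0), wf=math.radians(45.0)):
    """Dispersal velocity from the inverse Gauss equations.

    da, de, di : differences in semi-major axis (m), eccentricity and
                 inclination (rad) between a satellite and the
                 reference (largest) member of the cluster.
    a_r, e_r, n_r : semi-major axis (m), eccentricity and orbital
                    frequency (rad/s) of the reference object.
    f, wf : assumed true anomaly and (w + f) at disruption.
    """
    root = math.sqrt(1.0 - e_r**2)
    # Component along the direction of orbital motion (InvGauss1).
    dv_t = (n_r * a_r * (1 + e_r * math.cos(f)) / root) * (
        da / (2 * a_r) - e_r * de / (1 - e_r**2))
    # Radial component (InvGauss2).
    dv_r = (n_r * a_r / (root * math.sin(f))) * (
        de * (1 + e_r * math.cos(f))**2 / (1 - e_r**2)
        - da * (e_r + e_r * math.cos(f)**2 + 2 * math.cos(f)) / (2 * a_r))
    # Component perpendicular to the orbital plane (InvGauss3).
    dv_w = (di * n_r * a_r / root) * (1 + e_r * math.cos(f)) / math.cos(wf)
    return math.sqrt(dv_t**2 + dv_r**2 + dv_w**2)

# Illustrative values only: a Himalia-like reference orbit, with
# n_r = sqrt(GM_Jupiter / a_r^3) and GM_Jupiter ~ 1.267e17 m^3 s^-2.
a_r = 1.14e10
n_r = math.sqrt(1.267e17 / a_r**3)
dv = dispersal_velocity(da=1.0e8, de=0.05, di=math.radians(1.0),
                        a_r=a_r, e_r=0.16, n_r=n_r)
print(round(dv, 1))  # dispersal velocity in m/s
```

A cluster's quoted $\delta V$ is then the mean and standard deviation of this quantity over the member satellites, excluding the reference object.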
Classical multivariate hierarchical clustering \\citep{Zappala1990HierarchicalClustering1} requires the use of a complete dataset, and as such a choice is required. The parameters are either restricted to only known dynamical elements, or the dataset is reduced to well-studied objects. Cladistical analysis can incorporate objects with large amounts of unknown information, originally fossil organisms \\citep{Cobbett2007FossilsCladistics}, without a reduction in the number of parameters. \n\n\n\\subsection{Characteristics}\n\\label{Characteristics}\nWe define 38 characteristics that can be broken into three broad categories: orbital, physical and compositional parameters. All numerical states are considered to have equal weight. The discrete character sets are unordered. Any continuous characteristics are broken into bins, as cladistical analysis requires discrete characteristics. We developed a Python program to establish the binning of continuous characteristics. \\added{The pandas Cut module \\citep{Mckinney2010Pandas} is used to create the bins.} Each characteristic is binned independently, and separately for each of the Jovian and Saturnian systems. The aforementioned \\replaced{python}{Python} program iterates the number of bins until \\replaced{an $r^2$ score of $>0.99$ is reached for that characteristic set}{a linear regression model between binned and unbinned sets achieves a coefficient of determination ($r^2$) score of $>0.99$. This is calculated using the stats package in SciPy \\citep{Jones2010SciPy}}. Thus each character set will have a different number of bins, $r^2$ score and delimiters. All characteristics are binned in a linear fashion, with the majority increasing in progression. The exception to the linear increase is the density character set, which has a reversed profile. All of the continuous, binned characteristic sets are ordered, as used by \\cite{FraixBurnet2006DwarfGalaxies}. 
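The iterative binning procedure described above can be sketched as follows. This is an illustrative reimplementation, not the authors' program: it uses pandas.cut as the text does, but substitutes NumPy's correlation coefficient for the SciPy stats call, and the iteration details (starting bin count, upper limit) are assumptions.

```python
import numpy as np
import pandas as pd

def bin_characteristic(values, r2_threshold=0.99, max_bins=50):
    """Increase the number of linear bins until the bin indices
    reproduce the original values with r^2 > r2_threshold.

    Returns the bin indices, the number of bins used, and the
    final coefficient of determination.
    """
    values = pd.Series(values, dtype=float)
    codes, r2 = None, 0.0
    for n_bins in range(2, max_bins + 1):
        # labels=False returns the integer bin index for each value.
        codes = pd.cut(values, bins=n_bins, labels=False)
        # Coefficient of determination between bin index and raw value.
        r2 = np.corrcoef(codes, values)[0, 1] ** 2
        if r2 > r2_threshold:
            break
    return codes.tolist(), n_bins, r2

# Example with synthetic semi-major axes (arbitrary units, not data
# from the paper's matrices).
a = [1.0, 1.1, 1.3, 5.0, 5.2, 11.0, 11.4, 23.0, 23.5]
codes, n_bins, r2 = bin_characteristic(a)
print(n_bins, round(r2, 3))
```

Because the bins are linear, the resulting character states are naturally ordered, which is why the binned continuous characteristics are treated as ordered in the analysis.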
A full list of the characteristics used, the $r^2$ score for each of the binned characteristics, and the delimiters, is given in Appendix \\ref{ListCharacters}.\n\nThe first broad category includes the five orbital characteristics (Appendix \\ref{listOrbitChar}). This category is comprised of two discrete characteristics: presence in orbit around the gas giant, and prograde or retrograde orbit. The three remaining characteristics, semi-major axis ($a$), orbital inclination ($i$) and eccentricity ($e$), are continuous and require binning using the aforementioned \\replaced{python}{Python} program. \n\nThe second category used to construct the matrix consists of two continuous physical characteristics, density and visual geometric albedo (Appendix \\ref{listPhysChar}). We chose not to include mass, or any properties related to mass, as characters in the analysis. The inclusion of these characteristics could hide any relationships between a massive object and any daughter objects formed as the result of collisions. \n\nThe third category describes the discrete compositional characteristics and details the presence or absence of 31 different chemical species (Appendix \\ref{listCompChar}). In order to account for any positional bias, the fundamental state (solid, liquid, gas or plasma) was not considered. In this analysis, we make no distinction between surface, bulk and trace compositions. This is to account for the potential of daughter objects to have their bulk composition comprised of surface material from the parent. The majority of compounds have absence as the base state (0), and presence as the derived state (1). The exceptions are the first three molecules, elemental hydrogen (eH), hydrogen (H$_2$) and helium (He), all of which are found in the Sun. As the Sun is the designated outgroup, the base state (0) indicates the presence of these species. With the exception of elemental hydrogen, the remaining single element species are those found in compounds. 
The spectroscopy of an object often only reports on the presence of an ion, as opposed to a full chemical analysis. \\deleted{As the full chemical composition of a body.} As more detailed analysis becomes available, characters may be added to the matrix. Several chemical species included in this particular matrix are either not present in any of the satellites or currently unknown. These are included for future comparisons with other orbital bodies.\n\n\\subsection{Matrices}\nThe Jovian taxon-character matrix holds 68 taxa consisting of: the Sun (outgroup), four inner satellites, the main ring, four Galilean satellites and 59 irregular satellites. Appendix \\ref{JupiterMatrix} contains the matrix, along with the references used in its construction.\n\nThe Saturnian matrix, presented in Appendix \\ref{SaturnMatrix}, is created with 76 taxa. These taxa are the Sun (outgroup), six main rings, nine inner small satellites, four minor rings, eight large icy satellites, four Trojan satellites, three Alkynoids and their associated rings, and the 38 irregular satellites. The references used in the construction of the Saturnian matrix are located in Appendix \\ref{SaturnMatrix}. Both matrices use the same characteristics, as discussed in Section \\ref{Characteristics}, and are available in machine-readable format. \n\n\n\\section{Results}\n\nIn this section we present the resulting taxonomic trees from the analysis of the Jovian and Saturnian satellites. The taxonomic trees are used to form the systematic classification of the Jovian (Table \\ref{JupiterClassTable}) and Saturnian (Table \\ref{SaturnClassTable}) satellite systems. 
Using inverse Gauss equations \\citep{Zappala1990HierarchicalClustering1}, in a similar method to \\cite{Nesvorny2003IrrSatEvol} and \\cite{Turrini2008IrregularSatsSaturn}, we show in Tables \\ref{JupiterClassTable} and \\ref{SaturnClassTable} dispersal velocities ($\\delta V$) for each of the taxonomic groups where a single origin object is hypothesized, namely the irregular satellites. For these calculations we assume the largest representative of the cluster as the origin point. See Section \\ref{cladistics} for further discussion.\n\n\\subsection{Jovian Taxonomy}\n\\label{JupiterTax}\nThe results of the cladistical analysis of the individual Jovian satellites are shown in Figure \\ref{JupiterTree}. This 0.5 majority-rules consensus tree has a tree length score of 128, with a consistency index of 0.46 and a retention index of 0.85. \\added{The low value of the consistency index is possibly due to the mixed use of ordered, multi-state, continuous characteristics and bi-modal compositional characteristics \\citep{Farris1990phenetics}.} \\replaced{These values indicate}{The high retention index suggests} that the consensus tree is robust and \\replaced{indicative of the true}{demonstrates the most likely} relationships between the satellites. \n\n\\begin{figure*}\n\\includegraphics[height=0.65\\paperheight]{Fig1JupiterCladColourResub.pdf} \n\\caption{Majority consensus taxonomic tree of objects in the Jovian system. This tree has a tree length score of 128, with a consistency index of 0.46 and a retention index of 0.85. Numbers indicate frequency of the node in the 10000 most parsimonious tree block. 
\\replaced{Colors are indicative of}{Colors represent terminology used in} traditional classification: \\textcolor{Amalthea}{Amalthea inner regular family} ; \n \\textcolor{Galilians}{Galilean family};\n \\textcolor{Themisto}{Themisto prograde irregular} ;\n \\textcolor{Himalia}{Himalia prograde irregular family};\n \\textcolor{Carpo}{Carpo prograde irregular} ; \n \\textcolor{Ananke}{Ananke irregular family};\n \\textcolor{Carme}{Carme irregular family} ; \n \\textcolor{Pasiphae}{Pasiphae irregular group};\n Unnamed and unclassified. Proposed groups and families are \\replaced{indicated}{shown} on the right.} \n\\label{JupiterTree}\n\\end{figure*}\n\n\\begin{deluxetable}{p{3cm}p{3.8cm}ccccccp{2cm}cc}\n\\label{JupiterClassTable}\n\n\\rotate\n\n\\tabletypesize{\\scriptsize}\n\n\n\\tablecaption{Jovian Satellite Systematic Classification}\n\n\n\\tablenum{1}\n\n\n\\tablehead{\\colhead{Taxonomy} & \\colhead{Members} & \\colhead{Orbit} & \\colhead{Semi-major Axis} & \\colhead{Inclination} & \\colhead{Eccentricity} & \\colhead{Density} & \\colhead{Albedo} & \\colhead{Composition} & \\colhead{Velocity ($\\delta V$)} & \\colhead{Ref.} \\\\ \n\\colhead{} & \\colhead{} & \\colhead{} & \\colhead{(km)} & \\colhead{} & \\colhead{} & \\colhead{($kg~m^{-3}$)} & \\colhead{} & \\colhead{} & \\colhead{($m~s^{-1}$)} & \\colhead{} } \n\n\\startdata\nAmalthea family & Thebe, Amalthea, Metis and Adrastea & Prograde & $< 3.0 \\times 10^5$ & $< 2\\degree$ & $< 0.02$ & $< 900$ & $< 0.1$ & predominately water ice and silicates & $3570.4 \\pm 491.8 $ & 1 \\\\\nGalilean family & Io, Ganymede, Europa, Callisto & Prograde & $4.0\\times 10^5$ \u2013 $2.0\\times 10^6$ & $< 0.5\\degree$ & $< 0.01$ & $> 1800$ & $> 0.18$ & water ice and silicates dominate; presence of SO$_2$; other chemical species present & \\nodata & 2 \\\\\nJovian Irregular Satellite group & & & & & & & & & & \\\\\nHimalia family & Leda, Elara, Lysithea, Himalia, Themisto and Carpo. 
& Prograde & $7.5 \\times 10^6$ - $1.8 \\times 10^7$ & $25\\degree$ - $55\\degree$ & $0.1$ - $0.3$ & \\nodata & $< 0.1$ & silicate based & $623.8 \\pm 750.3$ & 3,4 \\\\\nAnanke\/Carme Family & S\/2003 J3, S\/2003 J9, Ananke subfamily, Carme subfamily and Sinope subfamily. & Retrograde & $1.88 \\times 10^7$ - $2.5\\times 10^7$ & $143\\degree$ - $166\\degree$ & $0.2$ - $0.4$ & \\nodata & $< 0.07$ & \\nodata & $457.2 \\pm 445.7$ & 3,4 \\\\\nAnanke Subfamily & Euanthe, Thyone, Mneme, Harpalyke, Praxidike, Thelxinoe and Ananke. & Retrograde & $2.0 \\times 10^7$ - $2.15 \\times 10^7$ & $145\\degree$ - $152\\degree$ & $0.2$ - $0.25$ & \\nodata & $< 0.07$ & \\nodata & $61.0 \\pm 45.6$ & 3,4 \\\\\nCarme Subfamily & Arche, Pasithee, Chaldene, Isonoe, Kale, Aitne, Erinome, Taygete, Carme, Kalyke, Eukelade and Kallichore. & Retrograde & $2.2 \\times 10^7$ - $2.4 \\times 10^7$ & $164\\degree$ - $166\\degree$ & $0.24$ - $0.27$ & \\nodata & $< 0.07$ & \\nodata & $36.1 \\pm 13.1$ & 3,4 \\\\\nSinope Subfamily & Eurydome, Autonoe, Sinope and Callirrhoe. & Retrograde & $2.2 \\times 10^7$ - $2.42 \\times 10^7$ & $147\\degree$ - $159\\degree$ & $0.27$ - $0.35$ & \\nodata & $< 0.06$ & \\nodata & $323.9 \\pm 97.3$ & \\\\\nIocaste Family & Euporie, S\/2003 J18, Hermippe, Helike, Iocaste, S\/2003 J15, Herse, S\/2003 J4, Aoede, S\/2003 J5 and S\/2003 J10 & Retrograde & $1.9 \\times 10^7$ - $2.5 \\times 10^7$ & $140\\degree$ - $165\\degree$ & $0.1$ - $0.45$ & \\nodata & $< 0.05$ & \\nodata & $510.2 \\pm 303.3$ & \\\\\nPasiphae Family & S\/2003 J12, S\/2011 J1, S\/2010 J2, S\/2003 J19, S\/2010 J1, S\/2011 J2, Sponde, Pasiphae, Megaclite, Hegemone, S\/2003 J23, Cyllene, Kore and S\/2003 J2. 
& Retrograde & $1.9 \\times 10^7$ - $2.9 \\times 10^7$ & $145\\degree$ - $164\\degree$ & $0.30$ - $0.421$ & \\nodata & $< 0.1$ & \\nodata & $412.3 \\pm 224.5$ & 3,4 \\\\\n\\enddata\n\n\n\\tablerefs{(1) \\citet{Barnard1892Amalthea};\n(2) \\citet{Galileo1610SidereusNuncius};\n(3) \\citet{Nesvorny2003IrrSatEvol};\n(4) \\citet{Sheppard2003IrrSatNature}.}\n\n\\end{deluxetable}\n\n\nAs can be seen in the Jovian taxonomic tree in Figure \\ref{JupiterTree}, the satellites cluster into clades resembling the taxonomy proposed by \\citet{Nesvorny2003IrrSatEvol} and \\citet{Sheppard2003IrrSatNature}. The irregular satellites form a separate cluster from the prograde regular satellites. \n\nWe maintain the closest family to Jupiter, the Amalthea family, as a valid taxonomic cluster. The dispersal velocity is very large and may \\replaced{indicative}{suggest} that the Amalthea family did not form from a single object. This family, along with Jupiter's main ring, \\replaced{are}{is} associated with the well-known Galilean family. \n\nIn the analysis, we maintain the `irregular' satellite group. The Himalia family clusters with the retrograde satellites, separate from the other prograde satellites. The Himalia family has relatively low inclinations, in comparison with the Jovian retrograde satellites, and their high eccentricity could be explained by disruptions \\citep{Christou2005HimaliaScattering}. The small satellites Themisto and Carpo cluster together with the other prograde satellites in the Himalia family. We propose that Themisto and Carpo be included in the Himalia family, as they are the sole members of the groups proposed by \\citet{Sheppard2003IrrSatNature}, and show similar orbital characteristics. The large mean dispersal velocity calculated for the Himalia family (see Table \\ref{JupiterClassTable}) was also noticed by \\citet{Nesvorny2003IrrSatEvol} for the prograde satellites. The large mean dispersal velocity is due to the dispersal velocities of Themisto and Carpo. 
Without including these members, the mean dispersal velocity for the classical Himalia family is $154.6 \\pm 72.5 m\/s$, close to the escape velocity of Himalia, $121.14 m\/s$. This dispersal velocity of the classical Himalia family was explained via gravitational scattering from Himalia by \\citet{Christou2005HimaliaScattering}. Disruption and scattering could also be used to explain the large dispersal velocities of Themisto and Carpo, though further modeling is required. \n\nThe term `irregular' is maintained through the retrograde family for consistency with the literature \\citep{Nesvorny2003IrrSatEvol, Sheppard2003IrrSatNature, Nesvorny2004IrrSatFamilyOrigin, Beauge2007IrregSatRsonance, Jewitt2007IrregularSats}. The retrograde irregular satellites are a separate, but related, cluster to the Himalia prograde irregulars. The broad classifications introduced by \\citet{Sheppard2003IrrSatNature} and \\citet{Nesvorny2003IrrSatEvol} are preserved, though the Ananke\/Carme family is unresolved and may be split into subfamilies. Separating out the traditional families \\citep{Nesvorny2003IrrSatEvol, Sheppard2003IrrSatNature}, see colors in Figure \\ref{JupiterTree}, gives smaller dispersal velocities. The traditional Ananke (escape velocity (eV) $23.10 m\/s$) family has a $\\delta V$ of $61.0 \\pm 45.6 m\/s$, traditional Carme (eV $29.83 m\/s$) has $36.2 \\pm 13.1 m\/s$, and a created Sinope (eV $27.62 m\/s$) family has $323.9 \\pm 97.3 m\/s$. These are smaller than the $\\delta V$ of our unresolved Ananke\/Carme family ($457.2 \\pm 445.7 m\/s$, see Table \\ref{JupiterClassTable}). \\citet{Nesvorny2003IrrSatEvol} used similar small $\\delta V$ values to establish the Ananke and Carme dynamical families. The dynamical situation could be explained through a more recent capture and breakup event for Ananke, Carme and Sinope that disrupted the ancestral irregular satellites. 
The identified Iocaste and Pasiphae families also have large dispersal velocities, \\replaced{indicative}{suggestive} of disruptions. Following the nomenclature of \\citet{Sheppard2003IrrSatNature}, each of the families and subfamilies is represented by the name of the largest contained satellite. Satellites within families are related by their retrograde orbit, high inclinations and eccentricities. In addition to their linked orbital characteristics, the satellites of the retrograde irregular group all show a low albedo \\citep{Beauge2007IrregSatRsonance}. \n\nThe Ananke subfamily is tightly constrained in its orbital characteristics, with a small dispersal velocity. While the characteristics listed in Table \\ref{JupiterClassTable} would preclude them from being included in the Pasiphae family, their clustering around a common semi-major axis, inclination and eccentricity \\replaced{indicated}{suggests} that they are a distinct young dynamical family. The members we include in the Ananke family for this analysis are all historical members of the family \\citep{Jewitt2007IrregularSats}. Some of the satellites that have been historically included in the Ananke family \\citep{Jewitt2007IrregularSats} are moved to other families. We do not add any new satellites to this family. \n\nThe orbital characteristics of the Carme subfamily are tightly constrained. Satellites in this family orbit further from Jupiter, with higher orbital inclinations, but similar eccentricities to the Ananke family. As with the Ananke family, it is the highly constrained orbital characteristics and low mean dispersal velocity that justify the classification of this traditional family \\citep{Jewitt2007IrregularSats}. According to the tree presented in Figure \\ref{JupiterTree}, there is a continuum between the Ananke and Carme families. 
However, differences in orbital characteristics, broken down in Table \\ref{JupiterClassTable}, distinguish both of these families from each other.\n\nA new cluster, the Iocaste family, is defined as shown in Figure \\ref{JupiterTree} and Table \\ref{JupiterClassTable}. The semi-major axis of this family spans most of the orbital space where irregular satellites have been discovered. The lower eccentricities and albedo are used to separate this family from the Pasiphae family. As with the Pasiphae family, the Iocaste family has a high mean dispersal velocity ($510.2 \\pm 303.3 m\/s$ compared with an escape velocity of $3.16 m\/s$), \\replaced{indicative}{suggestive} of disruptions taking place at some point since the break-up of the original object \\citep{Christou2005HimaliaScattering}. Iocaste, being the largest member of this family, is proposed as the representative object. Also included are several members that have been previously included in other families \\citep{Jewitt2007IrregularSats}, along with new unnamed satellites. For full details on included satellites and the descriptive properties of the family, see Table \\ref{JupiterClassTable}.\n\nThe Pasiphae family shows a broad range of orbital characteristics that, along with the large dispersal velocity ($412.3 \\pm 224.5 m\/s$ compared with an escape velocity of $47.16 m\/s$), are \\replaced{indicative}{suggestive} of disruptions during the family's lifetime \\citep{Christou2005HimaliaScattering}. The Pasiphae family has a broad range of semi-major axes and inclinations, with its members orbiting further from Jupiter and having larger eccentricities on average than the new Iocaste family, see Table \\ref{JupiterClassTable}. A Pasiphae subfamily, see Figure \\ref{JupiterTree}, with a $\\delta V$ of $230.1 \\pm 174.3 m\/s$, can be identified. This may \\replaced{indicate}{imply} a secondary, more recent break-up from Pasiphae. 
In addition, many of the unnamed satellites from recent observations \\citep{Gladman2003JupIAU1, Gladman2003JupIAU2, Sheppard2003JupIAU1, Sheppard2003JupIAU2, Sheppard2003JupIAU3, Sheppard2003IAUJupSat,\nSheppard2004JupIAU1,Sheppard2004JupIAU2,\nJacobson2011NewJupSats, Sheppard2012JupIAU} are associated with this family, see Table \\ref{JupiterClassTable} and Figure \\ref{JupiterTree} for a complete list. \n\n\n\\subsection{Saturnian Taxonomy}\n\\label{SaturnTax}\nCladistical analysis of the Saturnian \\replaced{System}{system} yields the 0.5 majority-rules consensus tree, Figure \\ref{SaturnTree}, constructed from the 10000 parsimonious trees, with a tree length score of 186. The tree has a consistency index of 0.30 and a retention index of 0.81. The consistency index of the Saturnian tree is lower than that of the Jovian tree, though this could be due to the number of taxa used \\citep{Sanderson1989VaiationHomoplasy}. \\added{As with the Jovian tree, this low consistency index could be due to the mixed character states. This effect is to be explored further in a future paper.} The high retention index indicates that the tree \\replaced{is indicative}{is suggestive} of the true relationships \\citep{Farris1989retention}. \n\n\n\\begin{figure*}\n\n\\includegraphics[height=0.65\\paperheight]{Fig2SaturnCladColourReSub.pdf} \n\\caption{Majority Consensus taxonomic tree of objects in the Saturnian system. The tree has a consistency index of 0.30 and a retention index of 0.81. Numbers indicate frequency of the node in the 10000 most parsimonious tree block. 
\\replaced{Colors are indicative of}{Colors represent terminology used in} classical classification: \\textcolor{MainRing}{Main ring group, with associated shepherd satellites} ; \n \\textcolor{IcySats}{Mid-sized Icy satellites and Titan};\n \\textcolor{Trojans}{Trojan satellites} ;\n \\textcolor{Alkanoids}{Alkynoids and associated rings};\n \\textcolor{Inuit}{`Inuit' prograde irregular family} ; \n \\textcolor{Gallic}{`Gallic' prograde irregular family};\n \\textcolor{Norse}{`Norse' retrograde irregular family} ; \n Unnamed and unclassified. \n Proposed groups and families are \\replaced{indicated}{shown} to the right. }\n\\label{SaturnTree}\n\\end{figure*}\n\n\n\\begin{deluxetable}{p{3cm}p{3.8cm}ccccccp{2cm}cc}\n\n\\rotate\n\n\\tabletypesize{\\scriptsize}\n\n\\tablecaption{Saturnian Satellite Systematic Classification}\n\\label{SaturnClassTable}\n\n\\tablenum{2}\n\n\n\\tablehead{\\colhead{Taxonomy} & \\colhead{Members} & \\colhead{Orbit} & \\colhead{Semi-major Axis} & \\colhead{Inclination} & \\colhead{Eccentricity} & \\colhead{Density} & \\colhead{Albedo} & \\colhead{Composition} & \\colhead{Velocity ($\\delta V$)} & \\colhead{Ref.} \\\\ \n\\colhead{} & \\colhead{} & \\colhead{} & \\colhead{(km)} & \\colhead{} & \\colhead{} & \\colhead{($kg~m^{-3}$)} & \\colhead{} & \\colhead{} & \\colhead{($m~s^{-1}$)} & \\colhead{} } \n\n\\startdata\nSaturnian Inner system Group, Main ring and Icy satellites & Atlas, Janus, Epimetheus, Prometheus, Janus\/Epimetheus ring, G ring, D ring, Pan, Aegaeon, S\/2009 S1, F ring, B ring, Cassini Division, C ring, Daphnis and A ring. Possible members: Telesto, Calypso, Methone ring arc, Anthe ring arc, Pallene ring arc, Methone, Anthe, Pallene, Polydeuces, Mimas, Tethys, Enceladus family, Hyperion, Titan and Iapetus; see Section \\ref{SaturnTax} for discussion. & Prograde & $< 4.0 \\times 10^6$ & $< 15\\degree$ & $< 0.03$ & $550$ \u2013 $1900$ & $0.1$ - $1$ & Composition of water ice with silicates and presence of CO$_2$. 
Other chemical species may be present & \\nodata & 1, 2 \\\\\nEnceladus Family & E ring, Enceladus, Rhea, Dione and Helene. & Prograde & $1.8 \\times 10^5$ - $5.3 \\times 10^5$ & $< 0.5\\degree$ & $0$ & $1200$ -- $1700$ & $> 0.7$ & Complex composition, predominantly water ice and silicates, with hydrocarbons and CO$_2$ present & \\nodata & \\\\\nSaturnian Irregular Satellite group & & & & & & & & & & \\\\\nAlbiorix family & Bebhionn, Erriapus, Albiorix and Tarvos & Prograde & $1.6 \\times 10^7$ - $1.8 \\times 10^7$ & $30\\degree$ - $40\\degree$ & $0.4$ - $0.6$ & \\nodata & $< 0.1$ & \\nodata & $80.9 \\pm 1.6 $ & 3,4,5 \\\\\nSiarnaq family & Tarqeq, Kiviuq, Ijiraq, Paaliaq and Siarnaq & Prograde & $1.1 \\times 10^7$ - $1.9 \\times 10^7$ & $40\\degree$ - $50\\degree$ & $0.1$ -- $0.4$ & \\nodata & $< 0.1$ & \\nodata & $266.8 \\pm 60.0$ & 3,4,5 \\\\\nPhoebe family & Phoebe Ring, Phoebe, Fenrir, Loge, Aegir subfamily, and Ymir subfamily. & Retrograde & $1.1 \\times 10^7$ - $2.51 \\times 10^7$ & $> 145\\degree$ & $> 0.1$ & \\nodata & $< 0.1$ & \\nodata & $763.3 \\pm 259.0 $ & 3,4,5 \\\\\nAegir subfamily & S\/2007 S2, Mundilfari, Jarnsaxa, S\/2006 S1, Bergelmir, Suttungr, Farbauti, S\/2007 S3, Aegir and Fornjot. 
& Retrograde & $1.6 \\times 10^7$ - $2.51 \\times 10^7$ & $> 150\\degree$ & $0.1$ -- $0.25$ & \\nodata & \\nodata & \\nodata & $295.1 \\pm 125.0$ & 5 \\\\\nYmir subfamily & Skathi, Skoll, Greip, Hyrrokkin, S\/2004 S13, S\/2004 S17, Narvi, S\/2004 S12, S\/2004 S07, Hati, Bestla, Thrymr, S\/2006 S3, Kari, Surtur and Ymir & Retrograde & $1.55 \\times 10^7$ - $2.30 \\times 10^7$ & $> 145\\degree$ & $0.25$ - $0.6$ & \\nodata & $< 0.1$ & \\nodata & $497.5 \\pm 247.7$ & 5 \\\\\n\\enddata\n\n\n\\tablerefs{(1) \\citet{Huygens1659systema};\n(2) \\citet{Cassini1673Sat2Sats,Cassini1686Sat2Sats};\n(3) \\citet{Nesvorny2003IrrSatEvol};\n(4) \\citet{Sheppard2003IrrSatNature};\n(5) \\citet{Turrini2008IrregularSatsSaturn}.}\n\n\\end{deluxetable}\n\n\n\n\nThe tree shown in Figure \\ref{SaturnTree} highlights the diversity of structures found in the orbit of Saturn. Satellites cluster into two main groupings around Saturn: the Inner group, comprised of rings and icy satellites, and the Irregular satellite group; see Table \\ref{SaturnClassTable} for members and diagnostic properties of each clade. While the traditional classification nomenclature \\citep{Nesvorny2003IrrSatEvol, Sheppard2003IrrSatNature, Jewitt2007IrregularSats} is broadly conserved, several \\replaced{anomalies}{discrepancies} require discussion. Table \\ref{SaturnClassTable} shows our new taxonomy, along with included members of the families and their descriptive properties.\n\nThe Main ring and Icy satellite group form an unresolved, inner system group. This group includes the Saturnian ring system, the Alkyonides and their associated ring arcs, as well as the larger Icy satellites and their Trojans. We have confirmed the recently discovered S\/2009 S1 \\citep{Spitale2012s2009s1} is part of this group due to its orbital characteristics. 
Within this large group, there is the resolved Enceladus family.\n\nOur results suggest the traditionally classified Alkyonides, Methone, Anthe and Pallene, along with their associated rings, are not clustered with the Enceladus family, as would be expected by their orbital location, between Mimas and Enceladus, within the E-ring. Due to their bulk water ice composition, the Alkyonides associate with the Main ring objects; see Figure \\ref{SaturnTree}. The low density and mid-range albedo of Pallene and Methone \\citep{Hedman2009SatRingArcs} \\replaced{indicates}{suggests} that the association with the Main ring group is genuine. The dynamic resonances of both Methone and Anthe \\citep{Callegari2010SmallSaturnSatsDynamics} \\replaced{are also indicative of these objects being}{implies that these objects were} captured, rather than forming in-situ. As there is very little known about the composition of these objects, beyond their bulk water ice composition \\citep{Hedman2009SatRingArcs}, further study and dynamical modeling of the capture process is required to resolve their true origins.\n \nLike the Alkyonides, the Trojan satellites of Tethys, Calypso and Telesto, also form an association with the main rings. The reason for this could be that Calypso and Telesto, like the Alkyonides, are also possible captured main ring objects. The capture dynamics could be similar to that of the Jovian Trojan asteroids \\citep{Morbidelli2005TrojanCapture, Lykawka2010TrojanCaputer, Nesvorny2013TrojanCaptureJJ}. Both of the Tethys Trojans \\citep{Buratti2010SatInnerSat} and main ring objects are chiefly comprised of water ice, \\replaced{indicative of}{implying} a common origin. The bulk composition of Tethys is also predominantly water ice \\citep{Buratti2010SatInnerSat}, with a very small fraction of silicates. 
Trojans may instead have formed from the same material as Tethys itself, either during accretion \\citep{Charnoz2011SaturnSatAccretion} or in the same orbit from a large debris disk \\citep{Canup2010SaturnSatOrigin}. As Tethys is also in the unresolved Main ring and Satellite group, we cannot differentiate between the two scenarios. Further compositional information about the Tethys Trojans could shed light on this issue. Polydeuces, a Trojan of Dione, also forms an association with the Main ring group in our analysis. This could be due to overemphasis on orbital and physical characteristics, since the bulk composition of Polydeuces is unknown \\citep{Thomas2013InnerSatSatu}. Helene, the better-studied Trojan of Dione \\citep{Thomas2013InnerSatSatu}, is well within the Enceladus Family. Helene and Dione are closely associated in our analysis, \\replaced{indicating}{implying} that Helene is a daughter object of Dione.\n\nThe outer icy satellites, Titan, Hyperion and Iapetus, do not form a single cluster, and are therefore not considered a valid taxonomic group. They are associated with the Main ring and Icy Satellite group. The Enceladus family is formed by the known association between the E-ring, Enceladus and Icy Satellites \\citep{Verbiscer2007Enceladus}, which is mainly due to the detection of volatile chemicals, such as NH$_3$, CH$_4$ and other hydrocarbons. Plumes from Enceladus containing these chemicals \\citep{Porco2006EnceladusPlume}, thought to be representative of the subcrust ocean \\citep{Porco2006EnceladusPlume}, \\deleted{and} are the source of the E ring \\citep{Sphan2006EnceladusEring}. 
Titan itself also has an abundance of these volatiles \\citep{Hirtzig2013VIMSTitan}, \\replaced{indicating}{implying} a possible association between the Icy satellites of Saturn that remains unresolved in our analysis.\nMaterial from the outer satellites, particularly Phoebe and its associated ring \\citep{Tosi2010IapetusDark, Tamayo2011IapetusDust}, is thought to play a role in the observed hemispherical dichotomy on Iapetus \\citep{Tosi2010IapetusDark}. In Figure \\ref{SaturnTree}, Iapetus is unresolved in the Main ring and Icy Satellite group.\n\nThe irregular satellites form a major cluster with each other, separate from the inner Saturnian system, and are therefore collected under the Irregular satellite group. Along with their high inclinations, eccentricities and semi-major axes, the Irregular satellite group is characterized by a dark albedo, compared with the other objects in the Saturnian system. We follow the naming convention introduced with the Jovian satellites, Section \\ref{JupiterTax}, where each irregular satellite family is represented by the largest member \\citep{Jewitt2007IrregularSats}. We therefore rename the classical Inuit group \\citep{Blunck2010SaturnSats} to the Siarnaq family and the Gallic group \\citep{Blunck2010SaturnSats} to the Albiorix family. Though this does change the formal name of the clusters, we encourage the discoverers of the unnamed satellites \\citep{Gladman200112Sat,Sheppard2003IAUJupSat,Jewitt2005IAUCSat,Sheppard2006SatIAUC,Sheppard2007SatIAUC} and any future discoveries that are placed in these groups, to follow IAU convention and use names from Inuit and Gallic mythology for satellites in the Siarnaq and Albiorix families respectively. \nAs in \\cite{Turrini2008IrregularSatsSaturn}, the Albiorix family is distinct and has a low mean dispersal velocity ($\\delta V$). The Siarnaq family has a higher $\\delta V$, again \\replaced{indicative}{suggestive} of disruptions \\citep{Christou2005HimaliaScattering}. 
The mean $\\delta V$ of all prograde satellites is $364.8 \\pm 114.9 m\/s$, only slightly higher than that of the Siarnaq family \\citep{Turrini2008IrregularSatsSaturn}. This could \\replaced{be indicative of}{imply} a disruption scenario, with a more recent capture of the Albiorix family parent body disrupting the older Siarnaq family. Our cladistical analysis supports this scenario, as the Siarnaq family shows a more branching structure than the Albiorix family. Further compositional information about these bodies, as well as dynamical modeling, could resolve this complex situation.\n\n\\replaced{Our study shows that the classically named retrograde Norse group \\citep{Blunck2010SaturnSats} is determined to be the Phoebe family, which can be split into at least two subfamilies.}{In our analysis, we separate out the retrograde irregular satellites, including Phoebe, from the prograde irregular satellites. In previous taxonomy, this group has been classified as the `Norse' group \\citep{Blunck2010SaturnSats}. In our revised nomenclature, this group should be termed the Phoebe family. We further separate out two clades, distinct from Phoebe and its associated ring.} The first clade, the unresolved Aegir subfamily\\added{ (previously identified as the S\/2004 S10 group in \\cite{Turrini2008IrregularSatsSaturn})}, is characterized as having, on average, orbits further from Saturn, with low eccentricities and higher inclinations. The second clade is the Ymir subfamily and is categorized, on average, by being closer to Saturn, but with high eccentricities. This subfamily shows a branching structure and may be further split \\citep{Grav2007IrregSatCol}. \\added{This family was also identified by \\cite{Turrini2008IrregularSatsSaturn}.} We identify an association between Fenrir and Loge, with a low dispersal velocity ($\\delta V = 114.4 m\/s$), \\replaced{indicative}{suggestive} of a recent breakup. 
The high dispersal velocity ($\\delta V$) of the Phoebe family is due to the selection of Phoebe as a reference point. If Phoebe and the \\replaced{associate}{associated} ring are removed from the \\replaced{subfamily}{family}, \\added{and Ymir (with an escape velocity of $8.56 m\/s$) selected as the reference object,} the $\\delta V$ is halved from $763.3 \\pm 259.0 m\/s$ to $439.9 \\pm 215.1 m\/s$. The satellite with the lowest $\\delta V$ to Phoebe is S\/2007 S2, with $\\delta V = 248.0 m\/s$, still \\added{significantly} larger than the escape velocity of Phoebe ($100.8 m\/s$). \\deleted{If Phoebe is removed from the family, and Ymir (escape velocity $8.56 m\/s$) selected as the reference object, the mean $\\delta V$ of the cluster is $439.9 \\pm 215.1 m\/s$, lower than the $\\delta V$ of the Ymir subfamily.} \\added{\\cite{Turrini2008IrregularSatsSaturn} also found a dynamical separation between Phoebe and the other retrograde satellites.} This is supportive of the narrative \\replaced{for a different origin of Phoebe and}{that Phoebe has a different origin to} the other retrograde irregular satellites of Saturn \\citep{Turrini2008IrregularSatsSaturn}. The high $\\delta V$ among all the subfamilies shows \\added{that} a complex dynamical situation is present in the Saturnian irregular satellites. Phoebe has been shown to clear its orbital parameter space \\citep{Turrini2008IrregularSatsSaturn}, which could have had a major disruptive effect on those remaining satellites \\citep{Turrini2008IrregularSatsSaturn}. 
\\added{The similarities between our analysis and that of \\cite{Turrini2008IrregularSatsSaturn} further validate cladistics as a method suitable for applications in Solar system astronomy.} The addition of \\replaced{detail}{detailed} compositional information from the other irregular satellites to an updated cladistical analysis could solve some of the \\added{minor} discrepancies found between this analysis and that of \\cite{Turrini2008IrregularSatsSaturn}.\n\nWe assign the currently unnamed irregular satellites to each of the subfamilies. S\/2006 S1, S\/2007 S2 and S\/2007 S3 are part of the Aegir subfamily. We include S\/2004 S13, S\/2004 S17, S\/2004 S12, S\/2006 S3 and S\/2007 S7 in the Ymir subfamily. See Table \\ref{SaturnClassTable} for a full list of members in each subfamily. As with the Albiorix and Siarnaq families, we encourage discoverers of new satellites that fall within the Phoebe family to follow the Norse mythological naming convention as set by the IAU. \n\n\n\n\\section{Discussion}\n\\label{Discussion}\n\nIn this study we have shown, using the Jovian and Saturnian satellite systems, that cladistics can be used in a planetary science context. We have ensured that the technique is objective, by statistically creating bins for characteristics that are continuous in nature; see Section \\ref{Characteristics}. By thus ensuring the objectivity of our analysis, we increase the confidence that cladistics is a valid technique that can be applied in the planetary sciences. Our results largely support the traditional classifications used in both the Jovian and Saturnian systems. However, the power of cladistics is shown in the ease of classifying new satellites as well as identifying substructures within larger clusters. Cladistics also offers a method of analysis where limited information is available. In our study we have examined well studied satellites, as well as those where only dynamical information is available. 
In traditional methods of analysis, either only dynamical information is considered, or the dataset is truncated in favor of better-studied bodies. Cladistics offers a method that can incorporate as much information about an object as is available, while accounting for any unknown characteristics. As more detailed information becomes available, either of known or newly discovered satellites, cladistics offers a systematic method for inclusion or revision of the classification system. \n\nThe relationships that we noted between the satellites suggest common formation scenarios within the clusters. The prograde, inner families of Jupiter are the products of accretion from a circumplanetary disk \\citep{Canup2002GalSatAcc}. The association of the Amalthea and Galilean families, along with the Main ring of Jupiter, in our analysis supports this hypothesis. Clustering of the Himalia family with other 'irregular' satellites \\replaced{indicates}{implying} a capture scenario. The prograde nature of the Himalia family is possibly explained via a nebula drag capture mechanism \\citep{Cuk2004HimaliaGasDrag}. Further modeling of the Himalia family is required to ascertain their true origins, particularly in light of the Jovian pebble formation hypothesis that may not include an extended nebula \\citep{Levison2015GasGiantsPebbles}. \n\nWith the proposal that Sinope forms its own subfamily, each Jovian irregular satellite subfamily contains only a single large satellite. This strengthens the hypothesis that each of the families represents a capture event and subsequent breakup \\citep{Nesvorny2007IrrSatCap} of an object external to the Jovian system. Two of the subfamilies, the Pasiphae and Sinope subfamilies, show a broad range of orbital characteristics and larger dispersal velocities. The other two, the Ananke and Carme subfamilies, show much more constrained characteristics and smaller dispersal velocities. 
This dichotomy between the two types of subfamilies, broad versus constrained, could \\replaced{indicate}{imply} at least two capture events, with the earlier Pasiphae and Sinope families being disrupted by later Ananke and Carme captures. The Iocaste family does not contain a large progenitor satellite, but has high dispersal velocities. This is \\replaced{indicative}{suggestive} of a possible ejection scenario. An alternative hypothesis is that the capture events happened simultaneously, but there were multiple disruption events. Both scenarios are supported by the dichotomy in dispersal velocities. Future analysis and simulations of the origins of the irregular satellites could help determine which theory is more likely correct. \n\nAs with the Jovian satellites, there are multiple origin scenarios for the Saturnian rings and satellites. The results from our analysis support a growing body of work showing the complexity of formation scenarios in the Saturnian system. The rings themselves possibly formed after the breakup of an inner icy satellite \\citep{Canup2010SaturnSatOrigin}. \n\nThe unresolved nature of the inner Saturnian system shows a complexity of formation scenarios. The main ring satellites, along with the Alkyonides and Tethys Trojans, possibly formed via accretion from the current ring system \\citep{Charnoz2010SaturnMooletsfromMainRings}. The Alkyonides and Tethys Trojans were then secondarily captured in their current orbits. The major icy satellites, those in the E-ring and outer satellites, probably formed in an accretion scenario, with delivery of silicates from the outer system \\citep{Salmon2017SaturnMidAccretion}. Titan could be secondarily derived from multiple subsatellites that formed in the same disk \\citep{Asphaug2013SatMerger}. The volatiles are delivered from comets, with at least one, Phoebe, being captured in orbit. 
The size of Phoebe is not traditionally associated with comet nuclei, but at least one comet, C\/2002 VQ94, with a similar $\\sim$100 km diameter, has been observed \\citep{Korsun2014c2002vq94100kmComet}. The irregular satellite families and subfamilies form from collisional breakup events \\citep{Nesvorny2004IrrSatFamilyOrigin} resulting from the captured comet nuclei. The large dispersal velocities of the subfamilies \\replaced{indicate}{imply} that this capture and disruption process is complex and requires detailed modeling.\n\nWe have shown that cladistics can be used in the classification of the Jovian and Saturnian satellite systems. Consequently, several related studies may be attempted in the future. Uranus and Neptune have similarly complex satellite systems to those of Jupiter and Saturn \\citep{Jewitt2007IrregularSats}. These satellite systems could also be classified using cladistics, particularly the irregular satellites. Such a study is hampered by a lack of completeness in the irregular satellite dataset \\citep{Sheppard2005UransIrr, Sheppard2006NeptuneIrr}, but may become practical as observational technology improves and the hypothesized small irregular satellites are discovered. Cladistics could be used to further investigate the origins of the irregular satellites of Saturn and Jupiter. As the irregular satellites are thought to be captured bodies \\replaced{\\citep{Nesvorny2007IrrSatCap}}{\\citep[e.g.][]{Nesvorny2007IrrSatCap}}, the question becomes from \\replaced{what}{which} small body population they originated. Comparisons between the well studied irregular satellites and other \\replaced{solar}{Solar} system bodies could help constrain the origins of these satellites. \n\n\n\\section{Conclusions}\n\\label{Conclusion}\n\nWe have shown that the new application of cladistics on the Jovian and Saturnian satellite systems is valid for investigating the relationships between orbital bodies. 
In the Jovian system, the traditional classification categories \\citep{Nesvorny2003IrrSatEvol,Sheppard2003IrrSatNature,Jewitt2007IrregularSats} are preserved. We support the hypothesis put forward by \\cite{Nesvorny2007IrrSatCap} that each Jovian irregular satellite family can be represented by the largest member, and that each family is the remnant of a dynamical capture event, and subsequent breakup. We can also assign recently discovered, as yet unnamed, satellites to each of their respective Jovian families. Cladistical analysis of the Saturnian system broadly preserves the traditional classifications \\citep{Nesvorny2003IrrSatEvol, Sheppard2003IrrSatNature, Jewitt2007IrregularSats,Turrini2008IrregularSatsSaturn}, strengthening the validity of the cladistical method. In the Phoebe family of retrograde, irregular satellites, we assign two subfamilies\\added{, similar to those found by \\cite{Turrini2008IrregularSatsSaturn}}. We rename the classical mythological designations for the Saturnian irregular satellites, to represent the largest member of the subfamily, in order to be consistent with the Jovian naming convention. Newly discovered, unnamed Saturnian satellites are easily assigned to various subfamilies. Through the application of the technique to the Jovian and Saturnian systems, we show that cladistics can be used as a valuable tool in a planetary science context, providing a systematic method for future classification.\n\n\n\\acknowledgments\nThis research was in part supported by the University of Southern Queensland's Strategic Research Initiative program. We wish to thank an anonymous reviewer for \\replaced{their}{his\/her} comments, particularly on Multivariate Hierarchical Clustering. The AAS Statistics Reviewer provided valuable feedback on the methodology. Dr. Guido Grimm assisted with the cladistical methodology and terminology used in this paper. Dr. 
Pablo Goloboff provided assistance with TNT, which is subsidized by the Willi Hennig Society, as well as additional comments on the methodology. We would like to thank \\added{Dr.} Henry Throop for discussions regarding the Ring systems.\n\n\\software{\nMesquite 3.10 \\citep{Mesquite}, \nPython 3.5, \nSpyder 2.3.8 \\citep{Spyder238}, \nAnaconda Python distribution package 2.40 \\citep{Anaconda240},\n\\added{pandas Python package \\citep{Mckinney2010Pandas},\nSciPy Python package \\citep{Jones2010SciPy},}\nTexMaker 4.1.1, \nTree analysis using New Technology (TNT) 1.5 \\citep{Goloboff2008TNT, Golboff2016TNT15}.\nZephyr 1.1: Mesquite package \\citep{MesquiteZephyr}.\n}\n\n\\bibliographystyle{aasjournal}\n\n\\section{Basic Results} \\label{sec:basic_results}\n\nIn this section, we prove some basic results concerning the complexity of \\textsc{ListIso}\\xspace and \\textsc{ListAut}\\xspace.\n\n\\begin{lemma} \\label{lem:lautlgi_equiv}\nBoth problems \\textsc{ListAut}\\xspace and \\textsc{ListIso}\\xspace are polynomially equivalent.\n\\end{lemma}\n\n\\begin{proof}\nTo see that \\textsc{ListAut}\\xspace is polynomially reducible to \\textsc{ListIso}\\xspace, just set $H$ to be a copy of $G$ and keep the\nlists for all vertices of $G$. It is straightforward to check that these two instances are\nequivalent. For the other direction, we build an instance $G'$ and $\\frakL'$ of \\textsc{ListAut}\\xspace as follows. Let\n$G'$ be the disjoint union of $G$ and $H$, and let $\\frakL'(v) = \\frakL(v)$ for all $v\\in V(G)$ and set\n$\\frakL'(w) = V(G)$ for all $w \\in V(H)$. 
It is easy to see that there exists a list-compatible\nisomorphism from $G$ to $H$, if and only if there exists a list-compatible automorphism of $G'$.\\qed\n\\end{proof}\n\n\\begin{lemma} \\label{lem:list_size_two}\nThe problem \\textsc{ListIso}\\xspace can be solved in time $\\calO(n+m)$ when all lists are of size at\nmost two.\n\\end{lemma}\n\n\\begin{proof}\nWe construct a list-compatible isomorphism $\\pi : G \\to H$ by solving a 2-SAT formula, which can be\ndone in linear time~\\cite{even_2sat,aspvall_2sat}. When $w \\in \\frakL(v)$, we assume that $\\deg(v) =\n\\deg(w)$, otherwise we remove $w$ from $\\frakL(v)$. Notice that if $\\frakL(u) = \\{w\\}$, we can set $\\pi(u)\n= w$ and for every $v \\in N(u)$, we modify $\\frakL(v) := \\frakL(v) \\cap N(w)$. Now, for every vertex $u_i$\nwith $\\frakL(u_i) = \\{w^0_i, w^1_i\\}$, we introduce a variable $x_i$ such that $\\pi(u_i) = w^{x_i}_i$.\nClearly, the mapping $\\pi$ is compatible with the lists.\n\nWe construct a 2-SAT formula such that there exists a list-compatible isomorphism if and only if it\nis satisfiable. First, if $\\frakL(u_i) \\cap \\frakL(u_j) \\ne \\emptyset$, we add implications for $x_i$ and\n$x_j$ such that $\\pi(u_i) \\ne \\pi(u_j)$. Next, when $\\pi(u_i) = w^j_i$, we add implications that\nevery $u_j \\in N(u_i)$ is mapped to $N(w^j_i)$, which is possible only if $\\frakL(u_j) \\cap N(w^j_i) \\ne \\emptyset$;\notherwise $u_i$ cannot be mapped to $w^j_i$ and we force $x_i \\ne j$. Therefore, $\\pi$ obtained from a satisfiable\nassignment maps $N[u]$ bijectively to $N[\\pi(u)]$ and it is an isomorphism. 
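To make the construction concrete, here is a minimal Python sketch of the reduction (our illustration, not the paper's implementation: it generates the constraints over all vertex pairs and solves 2-SAT via Kosaraju's SCC algorithm, so it runs in quadratic rather than linear time; all function names are ours):

```python
def solve_2sat(n_vars, clauses):
    """Solve 2-SAT. Literal 2*i means x_i is True, 2*i+1 means x_i is False.
    Each clause is a pair of literals. Returns a truth assignment or None."""
    N = 2 * n_vars
    adj = [[] for _ in range(N)]
    radj = [[] for _ in range(N)]
    for a, b in clauses:           # (a or b) gives not-a -> b and not-b -> a
        adj[a ^ 1].append(b)
        radj[b].append(a ^ 1)
        adj[b ^ 1].append(a)
        radj[a].append(b ^ 1)
    order, seen = [], [False] * N  # Kosaraju pass 1: record finish order
    for s in range(N):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, 0)]
        while stack:
            v, i = stack.pop()
            if i < len(adj[v]):
                stack.append((v, i + 1))
                w = adj[v][i]
                if not seen[w]:
                    seen[w] = True
                    stack.append((w, 0))
            else:
                order.append(v)
    comp, c = [-1] * N, 0          # pass 2: SCC ids in topological order
    for s in reversed(order):
        if comp[s] != -1:
            continue
        comp[s], stack = c, [s]
        while stack:
            v = stack.pop()
            for w in radj[v]:
                if comp[w] == -1:
                    comp[w] = c
                    stack.append(w)
        c += 1
    if any(comp[2 * i] == comp[2 * i + 1] for i in range(n_vars)):
        return None                # x_i forced equal to its negation
    # a literal is True iff its SCC comes later in topological order
    return [comp[2 * i] > comp[2 * i + 1] for i in range(n_vars)]


def list_isomorphism_small_lists(G, H, lists):
    """G, H: dicts mapping each vertex to its set of neighbours.
    lists: dict mapping each vertex of G to its list (size at most two).
    Returns a list-compatible isomorphism as a dict, or None."""
    if len(G) != len(H):
        return None
    gv = sorted(G)
    idx = {u: i for i, u in enumerate(gv)}
    # drop candidates with mismatched degree, as in the proof
    L = {u: sorted(w for w in lists[u] if len(G[u]) == len(H[w])) for u in gv}
    if any(not L[u] for u in gv):
        return None
    def neq(u, a):                 # literal asserting "pi(u) != L[u][a]"
        return 2 * idx[u] + (1 if a == 1 else 0)
    clauses = []
    for u in gv:
        if len(L[u]) == 1:         # forced choice: x_u must be False
            clauses.append((2 * idx[u] + 1, 2 * idx[u] + 1))
    for i, u in enumerate(gv):
        for v in gv[i + 1:]:
            for a in range(len(L[u])):
                for b in range(len(L[v])):
                    same = L[u][a] == L[v][b]               # injectivity
                    mismatch = (v in G[u]) != (L[v][b] in H[L[u][a]])
                    if same or mismatch:                    # forbid the pair
                        clauses.append((neq(u, a), neq(v, b)))
    assign = solve_2sat(len(gv), clauses)
    if assign is None:
        return None
    return {u: L[u][1] if assign[idx[u]] else L[u][0] for u in gv}
```

For the path $a$-$b$-$c$ against $1$-$2$-$3$ with lists $\{1,3\}$, $\{2\}$, $\{1,3\}$, this returns one of the two valid mappings. The pairwise clause generation is the simplifying shortcut; the linear-time bound of the lemma requires emitting implications only along edges and along pairs with intersecting lists.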
The total number of\nvariables is $n$, and the total number of clauses is $\\calO(n+m)$, so the running time is\n$\\calO(n+m)$.\\qed\n\\end{proof}\n\n\\begin{lemma} \\label{lem:disconnected}\nLet $G_1,\\dots,G_k$ be the components of $G$ and $H_1,\\dots,H_k$ be the components of $H$.\nIf we can decide \\textsc{ListIso}\\xspace in polynomial time for all pairs $G_i$ and $H_j$, then we can\nsolve \\textsc{ListIso}\\xspace for $G$ and $H$ in polynomial time.\n\\end{lemma}\n\n\\begin{proof}\nLet $G_1,\\dots,G_k$ be the components of $G$ and $H_1,\\dots,H_k$ be the components of $H$. For each\ncomponent $G_i$, we find all components $H_j$ such that there exists a list-compatible isomorphism\nfrom $G_i$ to $H_j$. Notice that a necessary condition is that every vertex in $G_i$ contains one\nvertex of $H_j$ in its list. So we can go through all lists of $G_i$ and find all candidates $H_j$,\nin total time $\\calO(\\ell)$ for all components $G_1,\\dots,G_k$. Let $n' = |V(G_i)|$, $m' = |E(G_i)|$,\nand $\\ell'$ be the total size of lists of $G_i$ restricted to $H_j$. We test existence of a\nlist-compatible isomorphism in time $\\varphi(n',m',\\ell')$. Then we form the bipartite graph $B$\nbetween $G_1,\\dots,G_k$ and $H_1,\\dots,H_k$ such that $G_iH_j \\in E(B)$ if and only if there exists\na list-compatible isomorphism from $G_i$ to $H_j$. There exists a list-compatible isomorphism from\n$G$ to $H$, if and only if there exists a perfect matching in $B$. Using Lemma~\\ref{lem:bipmatch},\nthis can be tested in time $\\calO(\\sqrt k \\ell)$. The total running time depends on the running time of\ntesting \\textsc{ListIso}\\xspace of the components, and we note that the sum of the lengths of lists in these tests is at\nmost $\\ell$.\\qed\n\\end{proof}\n\n\\begin{lemma} \\label{lem:cycles}\nThe problem \\textsc{ListIso}\\xspace can be solved for cycles in time $\\calO(\\ell)$.\n\\end{lemma}\n\n\\begin{proof}\nWe may assume that $|V(G)| = |V(H)|$. 
Let $u \\in V(G)$ be a vertex with a smallest list and let $k\n= |\\frakL(u)|$. Since $\\ell = \\calO(kn)$, it suffices to show that we can find a list-compatible\nisomorphism in time $\\calO(kn)$. We test all the $k$ possible mappings $\\pi \\colon G \\to H$ with\n$\\pi(u) \\in \\frakL(u)$. For $u \\in V(G)$ and $v \\in \\frakL(u)$, there are at most two possible isomorphisms\nthat map $u$ to $v$. For each of these isomorphisms, we test whether it is list-compatible.\\qed\n\\end{proof}\n\n\\begin{lemma} \\label{lem:max_degree_2}\nThe problem \\textsc{ListIso}\\xspace can be solved for graphs of maximum degree 2 in time $\\calO(\\sqrt n \\ell)$.\n\\end{lemma}\n\n\\begin{proof}\nBoth graphs $G$ and $H$ are disjoint unions of paths and cycles of various lengths. For each two\nconnected components, we can decide in time $\\calO(\\ell')$ whether there exists a list-compatible\nisomorphism between them, where $\\ell'$ is the total size of lists when restricted to these\ncomponents: for paths trivially, and for cycles by Lemma~\\ref{lem:cycles}. The rest follows from\nLemma~\\ref{lem:disconnected}, where the running time of each test is $\\calO(\\ell')$, where $\\ell'$ is\nthe total length of lists restricted to two components.\\qed\n\\end{proof}\n\n\n\\section{Bounded Genus Graphs} \\label{sec:bounded_genus}\n\nIn this section, we describe an FPT algorithm solving \\textsc{ListIso}\\xspace when parameterized by the Euler genus\n$g$. We modify the recent paper of Kawarabayashi~\\cite{kawarabayashi} solving graph isomorphism in\nlinear time for a fixed genus $g$. The harder part of that paper is its structural results, described\nbelow, which transfer to list-compatible isomorphisms without any change. 
Using these structural\nresults, we can build our algorithm.\n\n\\begin{theorem} \\label{thm:bounded_genus}\nFor every integer $g$, the problem \\textsc{ListIso}\\xspace can be solved on graphs of Euler genus at most $g$ in time\n$\\calO(\\sqrt n \\ell)$.\n\\end{theorem}\n\n\\begin{proof}\nSee~\\cite[p.~14]{kawarabayashi} for an overview of the main steps. We show that these steps can be\nmodified to deal with lists. We prove this result by induction on $g$, where the base case for $g=0$\nis Theorem~\\ref{thm:planar}. Next, we assume that both graphs $G$ and $H$ are 3-connected; otherwise\nwe apply Theorem~\\ref{thm:3conn_reduction}. By~\\cite[Theorem 1.2]{kawarabayashi}, if $G$ and $H$\nhave no polyhedral embeddings, then the face-width is at most two.\n\t \n\\emph{Case 1: $G$ and $H$ have polyhedral embeddings.} Following~\\cite[Theorem 1.2]{kawarabayashi},\nwe have at most $f(g)$ possible embeddings of $G$ and $H$. We choose one embedding of $G$ and we\ntest all embeddings of $H$. It is known that the average degree is $\\calO(g)$. Therefore, we can apply\nthe same idea as in the proof of Lemma~\\ref{lem:planar_3conn} and test isomorphism of all these\nembeddings in time $\\calO(\\ell)$. \n\n\\emph{Case 2: $G$ and $H$ have no polyhedral embedding, but have embeddings of face-width exactly two.}\nThen we split $G$ into a pair of graphs $(G',L)$. The graph $L$\nis called a \\emph{cylinder} and the graph $G'$ corresponds to the remainder of $G$. 
The following\nproperties hold~\\cite[p.~5]{kawarabayashi}:\n\\begin{packed_itemize}\n\\item We have $G = G' \\cup L$ and for $\\bo L = V(G' \\cap L)$, we have $|\\bo L| = 4$.\n\\item The graph $G'$ can be embedded to a surface of genus at most $g-1$, and $L$ is\nplanar~\\cite[p.~4]{kawarabayashi}.\n\\item This pair $(G',L)$ is canonical, i.e., every isomorphism from $G$ to $H$ maps $(G',L)$ to another\npair $(H',L')$ in $H$.\n\\end{packed_itemize}\nIt is proved~\\cite[Theorem 5.1]{kawarabayashi} that there exists some function $q'(g)$ bounding the\nnumber of these pairs both in $G$ and $H$, and that all these pairs can be found in time $\\calO(n)$. We fix a pair $(G',L)$\nin $G$ and iterate over all pairs $(H'_i,L'_i)$ in $H$. Following~\\cite[p.~36]{kawarabayashi}, we\nget that $G \\cong H$, if and only if there exists a pair $(H'_i,L'_i)$ in $H$ such that $G' \\cong\nH'_i$, $L \\cong L'_i$, and $G' \\cap L$ is mapped to $H'_i \\cap L'_i$. To test this, we run at most\n$2q'(g)$ instances of \\textsc{ListIso}\\xspace on smaller graphs with modified lists.\n\nSuppose that we want to test whether $G' \\cong H'_i$ and $L \\cong L'_i$. First, we modify the lists:\nfor $u \\in V(G')$, put $\\frakL'(u) = \\frakL(u) \\cap H'_i$, and for $v \\in V(L)$, put $\\frakL'(v) = \\frakL(v) \\cap\nL'_i$, and similarly for lists of darts. Further, for all vertices $u \\in \\bo L$ in both $G'$ and\n$L$, we put $\\frakL'(u) = \\frakL(u) \\cap \\bo L$. We test existence of list-compatible isomorphisms from $G'$\nto $H'_i$ and from $L$ to $L'_i$. 
There exists a list-compatible isomorphism from $G$ to $H$, if and\nonly if these list-compatible isomorphisms exist for at least one pair $(H'_i,L'_i)$.\n\nWe note that when $g=2$, a special case is described in~\\cite[Theorem 5.3]{kawarabayashi}, which is\nslightly easier and can be modified similarly.\n\n\\emph{Case 3: $G$ and $H$ have no polyhedral embedding and have only embeddings of face-width one.}\nLet $V$ be the set of vertices in $G$ such that for each $u \\in V$, there exists a non-contractible\ncurve passing only through $u$. By~\\cite[Lemma 6.3]{kawarabayashi}, $|V| \\le q(g)$ for some function\n$q$. For $u$, the non-contractible curve divides its edges into two sides, so we can cut $G$ at $u$,\nand split the incident edges. We obtain a graph $G'$ which can be embedded in a surface of genus at\nmost $g-1$.\n\nBy~\\cite[Lemma 6.3]{kawarabayashi}, we can find all these vertices $V$ and $V'$ in $G$ and $H$ in\ntime $\\calO(n)$. We choose $u \\in V$ arbitrarily, and we test all possible vertices $v \\in V'$.\nLet $G'$ be constructed from $G$ by splitting $u$ into new vertices $u'$ and $u''$, and similarly\n$H'$ be constructed from $H$ by splitting $v$ into new vertices $v'$ and $v''$. \nIn~\\cite[p.~36]{kawarabayashi}, it is stated that $G \\cong H$, if and only if there exists a choice\nof $v \\in V'$ such that $G' \\cong H'$ and $\\{u',u''\\}$ is mapped to $\\{v',v''\\}$.\nTherefore, we run at most $q(g)$ instances of \\textsc{ListIso}\\xspace on smaller graphs with modified lists.\n\nIf $v \\notin \\frakL(u)$, clearly a list-compatible isomorphism is not possible for this choice of $v \\in\nV'$. If $v \\in \\frakL(u)$, we put $\\frakL'(u') = \\frakL'(u'') = \\{v',v''\\}$. Then there exists a list-compatible\nisomorphism from $G$ to $H$, if and only if there exists a list-compatible isomorphism from $G'$ to\n$H'$.\n\nThe correctness of our algorithm follows from~\\cite{kawarabayashi}. It remains to argue the\ncomplexity. 
Throughout the algorithm, we produce at most $w(g)$ subgraphs of $G$ and $H$, for some\nfunction $w$, for which we test list-compatible isomorphisms. Assuming the induction hypothesis, the\nreduction of graphs to 3-connected graphs can be done in time $\\calO(\\sqrt n \\ell)$. Case 1 can be\nsolved in time $\\calO(\\ell)$. Case 2 can be solved in time $\\calO(\\sqrt n \\ell)$. Case 3 can be solved in\ntime $\\calO(\\sqrt n \\ell)$.\\qed\n\\end{proof}\n\n\\section{Bounded Treewidth Graphs} \\label{sec:bounded_treewidth}\n\nIn this section, we prove that \\textsc{ListIso}\\xspace can be solved in \\hbox{\\rm \\sffamily FPT}\\xspace with respect to the parameter treewidth\n$\\tw(G)$. Unlike in Sections~\\ref{sec:planar_graphs} and~\\ref{sec:intersection}, the difficulty\nof graph isomorphism on bounded treewidth graphs arises from the fact that\nthe tree decomposition is not uniquely determined. We follow the approach of\nBodlaender~\\cite{iso_xp_treewidth}, which describes an \\hbox{\\rm \\sffamily XP}\\xspace algorithm for \\textsc{GraphIso}\\xspace of bounded treewidth\ngraphs, running in time $n^{\\calO(\\tw(G))}$. 
Then we show that the recent breakthrough by Lokshtanov\net al.~\\cite{iso_fpt_treewidth}, giving an \\hbox{\\rm \\sffamily FPT}\\xspace algorithm for \\textsc{GraphIso}\\xspace, translates as well.\n\n\\begin{definition}\nA \\emph{tree decomposition} of a graph $G$ is a pair ${\\cal T} =\n(\\{B_i\\colon i\\in I\\},T = (I,F)),$ where $T$ is a rooted tree and $\\{B_i\\colon\ni\\in I\\}$ is a family of subsets of $V(G),$ such that\n\\begin{enumerate}\n\\item for each $v\\in V(G)$ there exists an $i \\in I$ such that $v\\in B_i$,\n\\item for each $e\\in E(G)$ there exists an $i \\in I$ such that $e\\subseteq B_i$,\n\\item for each $v\\in V(G), I_v = \\{i \\in I\\colon v\\in B_i\\}$ induces a subtree of $T.$\n\\end{enumerate} \nWe call the elements $B_i$ the \\emph{nodes}, and the elements of the set $F$ the\n\\emph{decomposition edges}.\n\\end{definition}\n\nWe define the width of a tree decomposition ${\\cal T} = (\\{B_i\\colon i\\in I\\},\nT)$ as $\\max_{i\\in I}|B_i|-1$ and the \\emph{treewidth} $\\tw(G)$ of a graph $G$\nas the minimum width of a tree decomposition of the graph $G$. \n\n\\heading{Nice Tree Decompositions.}\nIt is common to define a {\\it nice tree decomposition} of the graph~\\cite{kloks}. We naturally\norient the decomposition edges towards the root and for an oriented decomposition edge $(B_j,B_i)$\nfrom $B_j$ to $B_i$ we call $B_i$ the {\\it parent} of $B_j$ and $B_j$ a {\\it child} of $B_i$. If\nthere is an oriented path from $B_j$ to $B_i$ we say that $B_j$ is a {\\it descendant} of $B_i$.\n\nWe also adjust a tree decomposition such that for each decomposition edge $(B_i,B_j)$ it holds that\n$\\big| |B_i|-|B_j| \\big| \\le 1$ (i.e. it joins nodes that differ in at most one vertex). The\nin-degree of each node is at most $2$ and if the in-degree of the node $B_k$ is $2$ then for its\nchildren $B_i,B_j$ holds that $B_i = B_j = B_k$ (i.e. 
they represent the same vertex set).\n\nWe classify the nodes of a nice decomposition into four classes---namely {\\it introduce nodes}, {\\it\nforget nodes}, {\\it join nodes} and {\\it leaf nodes}. We call the node $B_i$ an introduce node of\nthe vertex $v$, if it has a single child $B_j$ and $B_i\\setminus B_j = \\{v\\}$. We call the node\n$B_i$ a forget node of the vertex $v$, if it has a single child $B_j$ and $B_j\\setminus B_i =\n\\{v\\}$. If the node $B_k$ has two children, we call it a join node (of nodes $B_i$ and $B_j$).\nFinally, we call a node $B_i$ a leaf node if it has no child.\n\n\\heading{Bodlaender's Algorithm.}\nA graph $G$ has treewidth at most $k$ if either $|V(G)| \\le k$, or there exists a cut set $U\n\\subseteq V(G)$ such that $|U| \\le k$ and each component of $G \\setminus U$ together with $U$ has\ntreewidth at most $k$. The set $U$ corresponds to a bag in some tree decomposition of $G$.\nBodlaender's algorithm~\\cite{iso_xp_treewidth} enumerates all possible cut sets $U$ of size at most\n$k$ in $G$ (resp. $H$); we denote these $C_i$ (resp. $D_i$). Furthermore, it enumerates all\nconnected components of $G\\setminus C_i$ as $C_i^j$ (resp.~of $H \\setminus D_i$ as $D_i^j$). We\ndenote by $G[U,W]$ the graph induced by $U \\mathbin{\\dot\\cup} W$. The set $W$ is either a connected component\nor a collection of connected components. 
We call $U$ the \\emph{border set}.\n\n\\begin{lemma}[\\cite{ACP87:partialktrees,iso_xp_treewidth}]\n\\label{lem:partialKTree}\nA graph $G[U, W]$ with at least $k$ vertices has treewidth at most $k$ with the border set $U$ if\nand only if there exists a vertex $v\\in W$ such that for each connected component $A$ of $G[W\n\\setminus v]$, there is a $k$-vertex cut $C_s \\subseteq U\\cup\\{v\\}$ such that no vertex in $A$ is\nadjacent to the (unique) vertex in $(U \\cup\\{v\\})\\setminus C_s$, and $G[C_s, A]$ has treewidth at\nmost $k$.\n\\end{lemma}\n\n\\begin{lemma} \\label{lem:xp_treewidth}\nThe problem \\textsc{ListIso}\\xspace can be solved in \\hbox{\\rm \\sffamily XP}\\xspace with respect to the parameter treewidth.\n\\end{lemma}\n\n\\begin{proof}\nWe modify the algorithm of Bodlaender~\\cite{iso_xp_treewidth}. Let $k = \\tw(G) = \\tw(H)$. We\ncompute the sets $C_i, C_i^j$ for $G$ and the sets $D_{i'}, D_{i'}^{j'}$ for $H$; there are\n$n^{\\calO(k)}$ pairs $(C_i,C_i^j)$. The pair $(C_i, C_i^j)$ is \\emph{compatible} if $C_i^j$ is a\nconnected component of $G'\\setminus C_i$ for some $G'\\subseteq G$ that arises during the recursive\ndefinition of treewidth. Let $f \\colon C_i\\to D_{i'}$ be an isomorphism. We say that $(C_i,\nC_i^j)\\equiv_f(D_{i'},D_{i'}^{j'})$ if and only if there exists an isomorphism $\\varphi\\colon\nC_i\\cup C_i^j\\to D_{i'}\\cup D_{i'}^{j'}$ such that $\\varphi|_{C_i} = f$. In other words, $\\varphi$\nis a partial isomorphism from $G$ to $H$. The change for \\textsc{ListIso}\\xspace is that we also require that both $f$\nand $\\varphi$ are list-compatible.\n\nThe algorithm resolves $(C_i, C_i^j)\\equiv_f(D_{i'},D_{i'}^{j'})$ by dynamic programming,\naccording to the size of $D_{i'}^{j'}$. If $|C_i^j| = |D_{i'}^{j'}| \\le 1$, we can check it\ntrivially in time $k^{\\calO(k)}$. Otherwise, suppose that $|C_i^j| = |D_{i'}^{j'}| > 1$, and let\n$m$ be the number of components of $C_i^j$ (and thus $D_{i'}^{j'}$). 
We test whether $f : C_i \\to\nD_{i'}$ is a list-compatible isomorphism. Let $v\\in C_i^j$ be a vertex given by\nLemma~\\ref{lem:partialKTree} (with $U = C_i$ and $W = C_i^j$) and let $C_s$ be the corresponding\nextension of $v$ to a cut set. We compute for all $w \\in D_{i'}^{j'} \\cap \\frakL(v)$ all connected\ncomponents $B_q$. From the dynamic programming, we know for all possible extensions $D'$ of $w$ to a\ncut set whether $(C_m, A_p)\\equiv_{f'}(D',B_q)$ with $f'(x) = f(x)$ for $x\\in C_i$ and $f'(v) = w$.\nFinally, we decide whether there exists a perfect matching in the bipartite graph between $(C_m,\nA_p)$'s and $(D',B_q)$'s where the edges are according to the equivalence.\\qed\n\\end{proof}\n\n\\heading{Reducing The Number of Possible Bags.}\nOtachi and Schweitzer~\\cite{GIReductionTechniques} proposed the idea of pruning the family of\npotential bags, which finally led to an \\hbox{\\rm \\sffamily FPT}\\xspace algorithm~\\cite{iso_fpt_treewidth}. A family\n$\\mathcal{B}(G)$, whose definition depends on the graph, is called \\emph{isomorphism-invariant} if\nfor an isomorphism $\\phi : G \\to G'$, we get $\\mathcal{B}(G') = \\phi(\\mathcal{B}(G))$, where\n$\\phi(\\mathcal{B}(G))$ denotes the family $\\mathcal{B}(G)$ with all the vertices of $G$ replaced by\ntheir images under $\\phi$. \n\nFor a graph $G$, a pair $(A,B)$ with $A \\cup B = V(G)$ is called a {\\em\nseparation} if there are no edges between $A\\setminus B$ and $B\\setminus A$ in\n$G$. The order of $(A,B)$ is $|A\\cap B|$. For two vertices $u,v \\in V(G)$, by\n$\\mu_G(u,v)$ we denote the minimum order of a separation $(A,B)$ with $u\\in\nA\\setminus B$ and $v\\in B\\setminus A$. We say a graph $G$ is {\\em\n$k$-complemented} if $\\mu_G(u,v) \\ge k \\implies uv \\in E(G)$ holds for\nevery two vertices $u,v\\in V(G)$. 
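As an illustration of these definitions (this sketch is our own and is not part of the algorithm above), $\mu_G(u,v)$ and the $k$-complementation can be computed by brute force on small graphs; the function names and the adjacency-set representation are our own choices:

```python
from itertools import combinations

def connected(adj, u, v, removed):
    """Depth-first search from u to v avoiding the vertices in `removed`."""
    seen, stack = {u}, [u]
    while stack:
        w = stack.pop()
        if w == v:
            return True
        for x in adj[w]:
            if x not in removed and x not in seen:
                seen.add(x)
                stack.append(x)
    return False

def mu(adj, u, v):
    """Minimum order of a separation (A,B) with u in A\\B and v in B\\A.

    For non-adjacent u, v this equals the size of a smallest vertex set S
    (avoiding u and v) whose removal disconnects u from v; for adjacent
    u, v no such separation exists and we return None (mu = +infinity)."""
    if v in adj[u]:
        return None
    rest = [w for w in adj if w not in (u, v)]
    for size in range(len(rest) + 1):
        for S in combinations(rest, size):
            if not connected(adj, u, v, set(S)):
                return size

def k_complement(adj, k):
    """Return the k-complemented supergraph: add uv whenever mu_G(u,v) >= k."""
    new = {u: set(nbrs) for u, nbrs in adj.items()}
    for u, v in combinations(sorted(adj), 2):
        m = mu(adj, u, v)
        if m is not None and m >= k:
            new[u].add(v)
            new[v].add(u)
    return new
```

On a path $a$--$b$--$c$--$d$, for instance, $\mu_G(a,c)=1$, so $1$-complementing adds every missing edge, while $2$-complementing changes nothing.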
We may canonically modify the input graphs $G$ and $H$ of \\textsc{ListIso}\\xspace by\nadding these additional edges, making them $k$-complemented.\n\n\n\\begin{theorem}[\\cite{iso_fpt_treewidth}, Theorem~5.5] \\label{thm:pruned_bags}\nLet $k$ be a positive integer, and let $G$ be a graph on $n$ vertices that is connected and\n$k$-complemented. There exists an algorithm that computes in time $2^{\\calO(k^5\\log k)}\\cdot n^3$ an\nisomorphism-invariant family of bags $\\mathcal{B}$ with the following properties:\n\\begin{enumerate}\n \\item $|B|\\le \\calO(k^4)$ for each $B\\in\\mathcal{B}$,\n \\item $|\\mathcal{B}|\\le 2^{\\calO(k^5\\log k)}\\cdot n^2$,\n \\item Assuming $\\tw(G) 0 \"damping (D) should be >0\"\n\t@assert H > 0 \"inertia (H) should be >0\" \n\tSOmega_H = SOmega * 2pi \/ H\nend [[Somega, dSomega]] begin\n\tp = real(u * conj(i))\n\tdu = u * im * Somega\n\tdSomega = (P - D*Somega - p)*SOmega_H\nend\n\\end{lstlisting}\nAgain, the left part of the first line provides the name of the new type and the name of the parameters. Lines 2 to 4 are code that should be run only once. In this case, these are consistency checks (that the damping and inertia are positive) and the reduction of the rated frequency $\\mathtt{\\Omega}$ and the inertia $\\mathtt{H}$ to a single variable $\\mathtt{\\Omega\\_H}$. In line 5, the variable name of the internal variable $\\mathtt{\\omega}$ and the name for its derivative $\\mathtt{d\\omega}$ are given. 
Finally, lines 6 to 8 implement \\Cref{eq:swing-u,eq:swing-omega} by simply writing down the mathematical terms.\n\n\\subsection{Instantiating the power grid model}\n\\label{sec:implementation-grid}\n\nIn order to create the grid model, we first have to instantiate the bus models simply by calling them with the corresponding parameter values from \\Cref{tab:ieee14-bus-parameters}, e.g.:\n\\begin{lstlisting}[linewidth=\\columnlistingwidth]\nSwingEq(H=5.148, P=2.32, D=2, SOmega=50) # for bus 1\nPQAlgebraic(S=-0.295-0.166im) # for bus 9\n\\end{lstlisting}\nWithin the actual code we simply loaded the data from a \\texttt{.csv} file and automated this instantiation\\footnote{The full source code is not part of this paper as it would be too long, but it will be published along with \\pd{}.}. The instantiated bus models should then be saved in an array called e.g. \\texttt{node\\_list}. Similarly, the admittance Laplacian should be generated from the line data in \\Cref{tab:ieee14-line-parameters} and saved in a matrix called e.g. \\texttt{LY}. 
The actual grid model instantiation is then simply one line where the model is saved in the variable \\texttt{g}:\n\\begin{lstlisting}[linewidth=\\columnlistingwidth]\ng = GridDynamics(node_list, LY)\n\\end{lstlisting}\nNow, \\texttt{g} contains all the information of the power grid and in the following section we will show how to solve it.\n\n\n\\section{Modeling Results}\n\\label{sec:modeling}\n\n\\begin{figure*}[!t]\n\t\\centering\n\t\\subfloat{\\includegraphics[width=\\columnwidth]{ieee14-frequency-perturbation}\n\t\\label{fig:ieee14-frequency-perturbation}}\n\t\\subfloat{\\includegraphics[width=\\columnwidth]{ieee14-line-tripping}\n\t\\label{fig:ieee14-line-tripping}}\n\t\\hfil\n\t\\caption{The two subfigures show the outflowing power (top) at each node and the angular frequency (bottom) for the buses modeled by the swing equation (1, 3, 6, 8) for the two perturbation scenarios: (left) frequency perturbation at bus 1 and (right) line tripping of line 2 (between bus 1 and 5). The letter in brackets refers to the modeling type, i.e. $G$ for generator \/ synchronous compensator as swing equation, $S$ for the slack bus, and $L$ for a load as algebraic PQ-constraint.}\n\\end{figure*}\n\nWithin this section, we will analyze two simple cases: (a) a frequency perturbation at the largest generator at bus 1 and (b) a line tripping event on line 2 (between bus 1 and 5) (cf.\\ \\Cref{fig:ieee14grid}).\n\nFirst, we find the normal point of operation of the power grid for the IEEE 14-bus system, i.e.\\ the fixed point of \\Cref{eq:i-definition,eq:u-general,eq:x-general}, also called the synchronous state. For that, we use the grid model \\texttt{g} generated in the previous \\Cref{sec:implementation} and the function provided by \\pd{}:\n\\begin{lstlisting}[linewidth=\\columnlistingwidth]\nfp = operationpoint(g, ones(SystemSize(g)))\n\\end{lstlisting}\nwhere $\\mathtt{ones(SystemSize(g))}$ is a vector of the correct length for the initial condition of the fixed point search. 
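As a hedged aside (a toy illustration of our own, in Python, and not part of \pd{}): for a single swing-equation machine coupled to a slack bus through a lossless line of susceptance $B$, the synchronous state has $\omega = 0$ and a voltage angle $\theta$ at which the injected power $P$ balances the outflowing power $B\sin\theta$, which a Newton iteration finds directly. All names and the value of $B$ below are our own choices.

```python
import math

def operation_angle(P, B, theta0=0.0, tol=1e-12, itmax=50):
    """Newton iteration for f(theta) = B*sin(theta) - P = 0.

    With u = exp(1j*theta), slack voltage u0 = 1 and line admittance
    Y = -1j*B, the outflowing power Re(u * conj(Y*(u - u0))) reduces to
    B*sin(theta), so the swing equation's fixed point (omega = 0)
    requires B*sin(theta) = P."""
    theta = theta0
    for _ in range(itmax):
        f = B * math.sin(theta) - P
        if abs(f) < tol:
            return theta
        theta -= f / (B * math.cos(theta))  # Newton step
    raise RuntimeError("no synchronous state found (need |P| <= B)")

# P as for bus 1 above; B = 5.0 is an assumed susceptance for illustration.
theta = operation_angle(P=2.32, B=5.0)
```

\pd{}'s \texttt{operationpoint} solves the analogous root-finding problem for all buses and internal variables simultaneously.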
\\texttt{fp} is now a \\texttt{State} object that we can use as the initial condition for solving the differential equations corresponding to the power grid.\n\n\\subsection{Frequency perturbation}\n\nIn order to model a frequency perturbation, one can simply take a copy of the fixed point as found before and adjust the initial frequency value:\n\\begin{lstlisting}[linewidth=\\columnlistingwidth]\nx0 = copy(fp)\nx0[1, :int, 1] += 0.2 \n\\end{lstlisting}\nThe second line can be read as ``add 0.2 to the 1$^\\text{st}$ internal variable of the 1$^\\text{st}$ node'' which is the frequency $\\omega$ as there is only one internal variable. Note that the first \\texttt{1} refers to the node and the second \\texttt{1} to the internal variable counter. The power grid with the initial condition \\texttt{x0} can then be solved for a time span of 0.5 seconds by calling:\n\\begin{lstlisting}[linewidth=\\columnlistingwidth]\nsol = solve(g, x0, (0.0,.5))\n\\end{lstlisting}\nThe solution is shown in \\Cref{fig:ieee14-frequency-perturbation}. It shows that the system is very stable against frequency perturbations, so the resulting dynamics are not particularly exciting. Please note that this system was chosen as an example to demonstrate how easily one can model a power grid using \\pd{}, not to find new exciting dynamics.\n\n\\subsection{Line tripping}\n\nTo show some more dynamic behavior we simulated a line tripping as well. We model this effect by taking the operation point of the full power grid (\\texttt{fp}) as initial condition but defining a new admittance Laplacian where line 2 (between bus 1 and 5, see \\Cref{tab:ieee14-line-parameters,fig:ieee14grid}) has been taken out (i.e. the admittance is set to $0$). Running the model with this new Laplacian yields \\Cref{fig:ieee14-line-tripping}.\n\nIn the frequency plot we identify how the frequency of bus 1, where the line tripping happened, compensates for the momentary excess power at the bus. 
The missing power in the rest of the grid is provided by the synchronous compensators, whose frequency decreases in turn. Note that the angular frequency $\\omega$ is shown, so after division by $2\\pi$ the maximal frequency deviation $f$ is $\\approx 0.05\\,$Hz. After about one second, the system recovers to the normal state of operation.\n\n\n\\section{Conclusion \\& Outlook}\n\\label{sec:conclusion}\n\nWithin this paper, we have seen how one can use the Open-Source library \\pd{} in order to model the dynamics of a power grid with just a few lines of code. We have seen how the fundamental mathematical equations (given in \\Cref{sec:powerdynamics}) translate to source code that reads exactly the same. We employed \\pd{} for the IEEE 14-bus distribution grid feeder in order to demonstrate how one can easily simulate faults and analyze the transient reaction of the power grid dynamics. As example scenarios we used a frequency perturbation and a line tripping.\n\nFinally, this paper is really just a sneak preview with a simple example; the full publication of \\pd{} is planned for October 15, 2018. 
By then, we will have added more inverter control schemes and stochastic descriptions of intermittency due to renewable energy sources.\n\n\n\n\n\n\n\\section*{Acknowledgment}\n\nThis paper was presented at the 19th Wind Integration Workshop and published in the workshop's proceedings.\n\nWe would like to thank the German Academic Exchange Service for the opportunity to participate at the Wind Integration Workshop 2018 in Stockholm via the funding program ``Kongressreisen 2018''.\nThis paper is based on work developed within the Climate-KIC Pathfinder project ``elena -- electricity network analysis'' funded by the European Institute of Innovation \\& Technology.\nThis work was conducted in the framework of the Complex Energy Networks research group at the Potsdam Institute for Climate Impact Research.\n\nWe would like to thank Frank Hellmann and Paul Schultz for the discussions on structuring an Open-Source library for dynamic power grid modeling.\n\n\n\n\n\n\n\n\\bibliographystyle{IEEEtran}\n\n\\section{Introduction}\n\\label{intro-sec}\n\\setcounter{equation}{0}\n\nA significant stage in the formation of living systems was the \ntransition from a symmetric chemistry involving mirror-symmetric and \napproximately equal numbers of left- and right-handed chiral species \ninto a system involving just one-handedness of chiral molecules. \n\nIn this paper we focus on mathematical models of one example of a \nphysicochemical system which undergoes such a symmetry-breaking \ntransition, namely the crystal grinding processes investigated by \nViedma \\cite{viedma} and Noorduin {\\em et al.}\\ \\cite{wim}, \nwhich have been recently reviewed by McBride \\& Tully \n\\cite{mcbride-nature}. 
Our aim is to describe this process by way \nof a detailed microscopic model of the nucleation and growth processes \nand then to simplify the model, retaining only the bare essential \nmechanisms responsible for the symmetry-breaking bifurcation. \n\nWe start by reviewing the processes which are already known to \ncause a symmetry-breaking bifurcation. By this we mean that a system \nstarts off in a racemic state (one in which both left-handed and \nright-handed structures occur with approximately equal frequencies) \nand, as the system evolves, the two handednesses grow differently, \nso that at a later time, one handedness is predominant in the system. \n\n\\subsection{Models for homochiralisation}\n\nMany models have been proposed for the emergence of homochirality \nfrom an initially racemic mixture of precursors. Frank \\cite{frank} \nproposed an open system into which $R$ and $S$ particles are \ncontinually introduced, and combine to form one of two possible \nproducts: left- or right-handed species, $X,Y$. Each of these \nproducts acts as a catalyst for its own production (autocatalysis), and \neach combines with the oppositely handed product (cross-inhibition) to \nform an inert product ($P$) which is removed from the system at some \nrate. These processes are summarised by the following reaction scheme: \n\\begin{equation} \\begin{array}{rclcrclcl} \n&&&& \\hspace*{-9mm}{\\rm external \\;\\;\\; source} & \\rightarrow &R,S& \n\t\\;\\; & {\\rm input}, k_0, \\\\ \nR+S & \\rightleftharpoons & X && \nR+S & \\rightleftharpoons & Y &\\qquad &\\mbox{slow}, k_1 , \\\\ \nR+S+X & \\rightleftharpoons & 2 X && \nR+S+Y & \\rightleftharpoons & 2 Y &\\quad& \\mbox{fast, autocatalytic}, k_2 \\\\ \n&&&&X + Y & \\rightarrow & P &\\qquad& \\mbox{cross-inhibition}, k_3 , \\\\\n&&&& P &\\rightarrow & & \\qquad & {\\rm removal}, k_4 . 
\n\\end{array}\\end{equation} \nRetaining the reverse reactions (with rate constants $k_{-1},k_{-2}$) \nbut ignoring the reversibility of the other steps, \nthis system can be modelled by the differential equations \n\\begin{eqnarray}\n\\frac{{\\rm d} r}{{\\rm d} t} & =& k_0 - 2 k_1 r s - k_2 r s (x\\!+\\!y) \n\t+ k_{-1} (x\\!+\\!y) + k_{-2} (x^2\\!+\\!y^2) , \\\\\n\\frac{{\\rm d} s}{{\\rm d} t} & = & k_0 - 2 k_1 r s - k_2 r s (x\\!+\\!y) \n\t+ k_{-1} (x\\!+\\!y) + k_{-2} (x^2\\!+\\!y^2) , \\\\ \n\\frac{{\\rm d} x}{{\\rm d} t} & = & \n\tk_1 r s + k_2 r s x - k_3 x y - k_{-1} x - k_{-2} x^2 , \\\\ \n\\frac{{\\rm d} y}{{\\rm d} t} & = & \n\tk_1 r s + k_2 r s y - k_3 x y - k_{-1} y - k_{-2} y^2 , \\\\ \n\\frac{{\\rm d} p}{{\\rm d} t} & = & k_{3} x y - k_4 p , \n\\end{eqnarray}\nfrom which we note that at steady-state we have \n\\begin{equation}\nrs=\\frac{k_0+k_{-1}(x+y) + k_{-2}(x^2+y^2)}{2k_1+k_2(x+y)}.\n\\end{equation}\nWe write the absolute enantiomeric excess as $ee=x-y$ and the \ntotal concentration as $\\sigma=x+y$; adding and subtracting \nthe equations for ${\\rm d} x\/{\\rm d} t$ and ${\\rm d} y\/{\\rm d} t$, we find \n\\begin{equation} \n\\sigma^2 = \\frac{2k_0}{k_3} + ee^2 , \n\\end{equation} \n\\begin{equation} \nee \\left[ \n\\frac{k_2(k_{-2}ee^2+k_{-2}\\sigma^2+2k_{-1}\\sigma+2k_0)}\n{2(2k_1+k_2\\sigma)} - k_{-1} - k_{-2} \\sigma \\right] = 0 . \n\\end{equation} \nHence $ee=0$ is always a solution, and there are other solutions with \n$ee\\neq0$ if the rate constants $k_*$ satisfy certain conditions (these \ninclude $k_3>k_{-2}$ and $k_0$ being sufficiently large). \n\nThe important issues to note here are: \n\\begin{description}\n\\item[(i)] this system is {\\em open}: it requires the continual supply of \nfresh $R,S$ to maintain the asymmetric steady-state. 
Also, the \nremoval of products is required to avoid the input terms \ncausing the total amount of material to increase indefinitely; \n\\item[(ii)] the forcing input term drives the system away from \nan equilibrium solution, into a distinct steady-state solution; \n\\item[(iii)] the system has cross-inhibition which removes equal \nnumbers of $X$ and $Y$, amplifying any differences caused by \nrandom fluctuations in the initial data or in the input rates. \n\\end{description} \n\nSaito \\& Hyuga \\cite{saito} discuss a sequence of toy models \ndescribing homochirality caused by nonlinear autocatalysis and \nrecycling. Their family of models can be summarised by \n\\begin{eqnarray}\n\\frac{{\\rm d} r}{{\\rm d} t} & =& k_r (1-r-s) - \\lambda r , \\\\ \n\\frac{{\\rm d} s}{{\\rm d} t} & =& k_s (1-r-s) - \\lambda s , \n\\end{eqnarray}\nwhere $r$ and $s$ are the concentrations of the two enantiomers. \nInitially they consider $k_r=k_s=k$ and $\\lambda=0$ and find that \nthe enantiomeric excess, $r-s$, is constant. Next the case $k_r=kr$, \n$k_s=ks$, $\\lambda=0$ is analysed, wherein the relative enantiomeric \nexcess $\\frac{r-s}{r+s}$ is constant. Then the more complex case of \n$k_r=k r^2$, $k_s=k s^2$, $\\lambda=0$ is analysed, and amplification \nof the enantiomeric excess is obtained. This amplification persists \nwhen the case $\\lambda>0$ is finally analysed. This shows us that strong \nautocatalysis may cause homochiralisation, but in any given \nexperiment, it is not clear which form of rate coefficients \n($k_r,k_s,\\lambda$) should be used. \n\nSaito \\& Hyuga \\cite{saito2} analyse a series of models of crystallisation \nwhich include some of the features present in our more general model. \nThey note that a model truncated at tetramers exhibits different \nbehaviour from one truncated at hexamers. In particular, the \nsymmetry-breaking phenomenon is {\\em not} present in the tetramer \nmodel, but {\\em is} exhibited by the hexamer model. 
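The strongest-autocatalysis case above ($k_r = k r^2$ with $\lambda > 0$) is straightforward to integrate numerically. The following forward-Euler sketch is our own illustration (the parameter values are arbitrary choices, not taken from Saito \& Hyuga): an initial imbalance is amplified until the disfavoured enantiomer all but vanishes.

```python
# Forward-Euler integration of the strongest-autocatalysis case:
#   dr/dt = k r^2 (1 - r - s) - lam r,   ds/dt = k s^2 (1 - r - s) - lam s.
k, lam = 1.0, 0.1
dt, steps = 0.01, 50_000
r, s = 0.3, 0.2           # small initial enantiomeric excess

for _ in range(steps):
    free = 1.0 - r - s    # unconverted achiral material
    dr = k * r * r * free - lam * r
    ds = k * s * s * free - lam * s
    r, s = r + dt * dr, s + dt * ds

ee = (r - s) / (r + s)    # relative enantiomeric excess, grows towards 1
```

With these values the favoured concentration approaches the root of $k\,r(1-r)=\lambda$, i.e.\ $r \approx 0.887$, while $s \to 0$; replacing $k r^2$ by $k$ or $kr$ (with $\lambda = 0$) instead leaves the absolute (respectively relative) excess unchanged, as stated above.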
Hence, later, \nwe will consider models truncated at the tetramer and the hexamer \nlevels and investigate the differences in symmetry-breaking \nbehaviour (Sections \\ref{tetra-sec} and \\ref{hex-sec}). \n\nDenoting monomers by $c$, small and large left-handed clusters by \n$x_1,x_2$ respectively and right-handed by $y_1,y_2$, Uwaha \n\\cite{uwaha} writes down the scheme \n\\begin{eqnarray}\n\\frac{{\\rm d} c}{{\\rm d} t} & =& - 2 k_0 c^2 - k_1 c (x_2+y_2) + \n\t\\lambda_1(x_2+y_2) + \\lambda_0(x_1+y_1) , \n\\\\ \n\\frac{{\\rm d} x_1}{{\\rm d} t} & = & k_0 c^2 - k_u x_1 x_2 \n\t- k_c x_1^2 + \\lambda_u x_2 - \\lambda_0 x_1 , \n\\\\ \n\\frac{{\\rm d} x_2}{{\\rm d} t} & =& k_1 x_2 c + k_u x_1 x_2 + \n\tk_c x_1^2 - \\lambda_1 x_2 - \\lambda_u x_2 , \n\\\\ \n\\frac{{\\rm d} y_1}{{\\rm d} t} & = & k_0 c^2 - k_u y_1 y_2 \n\t- k_c y_1^2 + \\lambda_u y_2 - \\lambda_0 y_1 , \n\\\\ \n\\frac{{\\rm d} y_2}{{\\rm d} t} & =& k_1 y_2 c + k_u y_1 y_2 + \n\tk_c y_1^2 - \\lambda_1 y_2 - \\lambda_u y_2 , \n\\end{eqnarray}\nwhich models \n\\begin{itemize}\n\\item the formation of small chiral clusters \n\t($x_1,y_1$) from an achiral monomer ($c$) at rate $k_0$, \n\\item small chiral clusters ($x_1,y_1$) of the same handedness \n\tcombining to form larger chiral clusters (rate $k_c$), \n\\item small and larger clusters combining to form larger clusters \n(rate $k_u$), \n\\item large clusters combining with achiral monomers to form more \n\tlarge clusters at the rate $k_1$, \n\\item the break up of larger clusters into smaller clusters (rate $\\lambda_u$), \n\\item the break up of small clusters into achiral monomers (rate $\\lambda_0$), \n\\item the break up of larger clusters into achiral monomers (rate $\\lambda_1$). \n\\end{itemize}\n\nSuch a model can exhibit symmetry-breaking to a solution in which \n$x_1\\neq y_1$ and $x_2\\neq y_2$. 
Uwaha points out that the \nrecycling part of the model (the $\\lambda_*$ parameters) is crucial \nto the formation of a `completely' homochiral state. One problem with \nsuch a model is that since the variables are all total masses in the \nsystem, the size of clusters is not explicitly included. In asymmetric \ndistributions, the typical size of left- and right-handed clusters may \ndiffer drastically, hence the rates of reactions will proceed differently \nin the cases of a few large crystals or many smaller crystals. \n\nSandars has proposed a model of symmetry-breaking in the \nformation of chiral polymers \\cite{sandars}. His model has an \nachiral substrate ($S$) which splits into chiral monomers $L_1,R_1$ \nboth spontaneously at a slow rate and, at a faster rate, when catalysed \nby the presence of long homochiral chains. This catalytic effect has \nboth autocatalytic and crosscatalytic components, that is, for example, \nthe presence of long right-handed chains $R_n$ autocatalyses the \nproduction of right-handed monomers $R_1$ from $S$ (autocatalysis), \nas well as the production of left-handed monomers, $L_1$ (crosscatalysis). \nSandars assumes the growth rates of chains are linear and not \ncatalysed; the other mechanism required to produce a symmetry-breaking \nbifurcation to a chiral state is cross-inhibition, by which chains of \nopposite handednesses interact and prevent either from further \ngrowth. These mechanisms are summarised by \n\\begin{eqnarray}\nS \\rightarrow L_1 , && S \\rightarrow R_1 , \n\t\\qquad \\mbox{slow} , \\nonumber \\\\ \nS \\!+\\! L_n \\rightarrow L_1 \\!+\\! L_n , && \nS \\!+\\! R_n \\rightarrow R_1 \\!+\\! R_n , \\quad \n\\mbox{autocatalytic, rate $\\propto 1\\!+\\!f$} , \\nonumber \n\\\\\nS \\!+\\! R_n \\rightarrow L_1 \\!+\\! R_n , && \nS \\!+\\! L_n \\rightarrow R_1 \\!+\\!
L_n , \\quad \n\\mbox{cross-catalytic, rate $\\propto 1\\!-\\!f$} , \\nonumber \n\\\\\nL_n + L_1 \\rightarrow L_{n+1} , && \nR_n + R_1 \\rightarrow R_{n+1} , \\qquad \n\\mbox{chain growth, rate $=a$} , \\nonumber\n\\\\\nL_n + R_1 \\rightarrow Q_{n+1} , && \nR_n + L_1 \\rightarrow P_{n+1} , \\qquad \n\\mbox{cross-inhibition, rate $=a\\chi$} . \\nonumber\n\\end{eqnarray}\nThis model and generalisations of it have been analysed \nby Sandars \\cite{sandars}, Brandenburg {\\em et al.}\\ \\cite{axel,axel3}, \nMultimaki \\& Brandenburg \\cite{axel2}, \nWattis \\& Coveney \\cite{jw-sandars,jw-ch-rna-rev}, \nGleiser \\& Walker \\cite{sara-bup}, Gleiser {\\em et al.}\\ \\cite{sara-punc}. \nTypically a classic pitchfork bifurcation is found when the fidelity \n($f$) of the autocatalysis over the cross-catalysis is increased. \nOne counterintuitive effect is that increasing the cross-inhibition \neffect ($\\chi$) aids the bifurcation, allowing it to occur at lower \nvalues of the fidelity parameter $f$. \n\n\\subsection{Experimental results on homochiralisation}\n\nThe Soai reaction was one of the first experiments which \ndemonstrated that a chemical reaction could amplify initial small \nchiral imbalances; that is, a small enantiomeric excess in \ncatalyst at the start of the experiment led to a much larger imbalance \nin the chiralities of the products at the end of the reaction. Soai {\\em et al.}\\ \n\\cite{soai} were able to achieve an enantiomeric excess exceeding \n85\\% in the asymmetric autocatalysis of chiral pyrimidyl alkanol. \n\nThe first work showing that crystallisation experiments could exhibit \nsymmetry breaking was that of Kondepudi \\& Nelson \n\\cite{kon-sci}. Later Kondepudi {\\em et al.}\\ \\cite{kon-jacs} \nshowed that the stirring rate was a good bifurcation parameter to \nanalyse the final distribution of chiralities of crystals emerging from a \nsupersaturated solution of sodium chlorate. 
With no stirring, there \nwere approximately equal numbers of left- and right-handed crystals. \nAbove a critical (threshold) stirring rate, the imbalance in the numbers \nof each handedness increased, until, at large enough stirring rates, \ntotal chiral purity was achieved. This is due to all crystals in the \nsystem being derived from the same `mother' crystal, which is the first \ncrystal to become established in the system; all other crystals grow \nfrom fragments removed from it (either directly or indirectly). \nBefore this, Kondepudi \\& Nelson \\cite{kon-pla,kon-nat} \nworked on the theory of chiral symmetry-breaking mechanisms \nwith the aim of predicting how parity-violating perturbations could \nbe amplified to give an enantiomeric excess in prebiotic chemistry, \nand the timescales involved. Their results suggest \na timescale of approximately $10^4$ years. More recently, \nKondepudi and Asakura \\cite{kon+a} have summarised \nboth the experimental and theoretical aspects of this work. \n\nViedma \\cite{viedma} was the first to observe that grinding a \nmixture of chiral crystals eventually led to a distribution of crystals \nwhich were all of the same handedness. The crystalline material \nused was sodium chlorate, as used by Kondepudi {\\em et al.}\\ \n\\cite{kon-sci}. \nSamples of L and D crystals are mixed with water in round-bottomed \nflasks and the system is stirred by a magnetic bar (of length 3-20mm) \nat 600rpm. The system is maintained in a supersaturated state; \nsmall glass balls are added to continually crush the crystals. \nThe grinding is thus continuous, and crystals are maintained below \na size of 200 $\\mu$m. The chirality of the resulting crystals was \ndetermined by removing them from the flask, allowing them to grow \nand measuring their optical activity. 
\nThe results show that, over time, the percentages of left- and \nright-handed crystals steadily change from about 50\/50 to 100\/0 or \n0\/100 -- a state which is described as complete chiral purity. \nWith stirring only and no glass balls, the systems conserve their \ninitial chiral excesses; with glass balls present and stirring, the chiral\nexcess increases, and this occurs more rapidly if more balls are \npresent or the speed of stirring is increased. \n\nMore recently, Noorduin {\\em et al.}\\ \\cite{wim} have observed a \nsimilar effect with amino acids -- much more relevant molecules \nin the study of origins of life. This work has been reviewed by \nMcBride \\& Tully \\cite{mcbride-nature}, who add to the \nspeculation on the mechanisms responsible for the phenomenon. \nNoorduin {\\em et al.}\\ describe grinding as `dynamic \ndissolution\/crystallization processes that result in the conversion \nof one solid enantiomorph into the other'. They also note that `once \na state of single chirality is achieved, the system is ``locked'' \nbecause primary nucleation to form and sustain new crystals from \nthe opposite enantiomer is kinetically prohibited'. Both these quotes \ninclude the crucial fact that the process evolves {\\em not} towards \nan equilibrium solution (which would be racemic), but towards a \ndifferent, dynamic steady-state solution. As noted by Plasson \n(personal communication, 2008), this nonequilibrium state is \nmaintained due to the constant input of energy into the system \nthrough the grinding process. \n\nMcBride \\& Tully \\cite{mcbride-nature} discuss the growth \nof one enantiomorph, and the dissolution of the other, as a type of \nOstwald ripening process, with the large surface-area-to-volume ratio \nof smaller crystals giving a rapid dissolution rate, whilst larger crystals \nhave a lower surface-area-to-volume ratio, meaning that they dissolve \nmore slowly. 
However appealing such an argument may be, since 
surface area arguments can equally well be applied to the growth 
side of the process, it is not clear that this is either necessary or 
sufficient. In fact, the model analysed later in this paper will show 
that a critical cluster size is not necessary to explain homochiralisation 
through grinding. 

\subsection{Our aims}

We aim to describe the results of the crystal grinding phenomenon 
through a model which recycles mass through grinding, which 
causes crystals to fragment, rather than having explicit mass input 
and removal. Simultaneously we need crystal growth processes 
to maintain a distribution of sizeable crystals. 

We assume that the crystals are solids formed in an aqueous 
environment; however, we leave open the questions of whether 
they are crystals of some mineral of {\em direct} biological 
relevance (such as amino acids), or whether they are some other 
material which, after growing, will later provide a chirally 
selective surface for biomolecules to crystallise on, or be a catalyst 
for chiral polymerisation to occur. Following Darwin's 
\cite{darwin} ``warm little pond'', an attractive scenario might 
be a tidal rock pool, where waves agitating pebbles provide the 
energetic input for grinding. Taking more account of recent work, 
a more likely place is a suboceanic hydrothermal vent, where 
the rapid convection of hot water impels growing nuclei into the 
vent's rough walls as well as breaking particles off the walls and 
entraining them into the fluid flow, simultaneously grinding any 
growing crystals. 

In Section \ref{model-sec} we propose a detailed microscopic 
model of the nucleation and crystal growth of several species 
simultaneously. This has the form of a generalised Becker-D\"{o}ring 
system of equations \cite{bd}. Due to the complexity of the 
model we immediately simplify it, making assumptions on the rate 
coefficients. 
Furthermore, to elucidate those processes which 
are responsible for homochiralisation, we remove some processes 
completely so as to obtain a simple system of ordinary differential 
equations which can be analysed theoretically. 

The simplest model which might be expected to show 
homochiralisation is one which has small and large clusters of each 
handedness. Such a truncated model is considered in Section 
\ref{tetra-sec}, wherein it is shown that such a model might lead 
to amplification of enantiomeric excess at short times, but that 
in the long-time limit, only the racemic state can be approached. 
This model has a structure akin to that of Saito \& Hyuga 
\cite{saito2} truncated at the tetramer level. 

Hence, in Section \ref{hex-sec} we consider a more complex model 
with a cut-off at larger sizes (one can think of small, medium, and large 
clusters of each handedness). Such a model has a similar structure to 
the hexamer truncation analysed by Saito \& Hyuga \cite{saito2}. 
We find that such a model does allow a final steady-state in which one 
chirality dominates the system and the other is present only in 
vanishingly small amounts. 

However, as discussed earlier, there may be subtle effects whereby it 
is not just the {\em number} of crystals of each type that is important 
to the effect, but a combination of the size and number of each handedness 
of crystal that is important to the evolution of the process. Hence, in 
Section \ref{new-sec} we introduce an alternative reduction of the 
system of governing equations. In this, instead of truncating and 
keeping only clusters of a small size, we postulate a form for the 
distribution which includes information on both the number and size 
of crystals, and use these two quantities to construct a system of 
five ordinary differential equations for the system's evolution. 
\n\nWe discuss the results in Sections \\ref{disc-sec} and \\ref{conc-sec} \nwhich conclude the paper. The Appendix \\ref{app} shows how, \nby removing the symmetry in the growth rates of the two \nhandednesses, the model could be generalised to account for \nthe competitive nucleation of different polymorphs growing from \na common supply of monomer. \n\n\\section{The BD model with dimer interactions and \nan amorphous metastable phase} \n\\label{model-sec}\n\\setcounter{equation}{0}\n\n\\subsection{Preliminaries}\n\nSmoluchowski \\cite{smol} proposed a model in which clusters \nof any sizes could combine pairwise to form larger clusters. Chemically \nthis process is written $C_r + C_s \\rightarrow C_{r+s}$ where $C_r$ \nrepresents a cluster of size $r$. Assuming this process is reversible \nand occurs with a forward rate given by $a_{r,s}$ and a reverse rate \ngiven by $b_{r,s}$, the law of mass action yields the kinetic equations \n\\begin{eqnarray}\n\\frac{{\\rm d} c_r}{{\\rm d} t} &\\!=\\!&\\! \\mbox{$\\frac{1}{2}$} \\sum_{s=1}^{r-1} \n \\left( a_{s,r-s} c_s c_{r-s} - b_{s,r-s} c_r \\right) \n - \\sum_{s=1}^\\infty \\left( a_{r,s} c_r c_s - \n b_{r,s} c_{r+s} \\right) . \\nonumber \\\\ && \\lbl{smol-eq}\n\\end{eqnarray}\nThese are known as the coagulation-fragmentation equations. \nThere are simplifications in which only interactions between clusters \nof particular sizes are permitted to occur, for example when only \ncluster-monomer interactions can occur, the Becker-D\\\"{o}ring \nequations \\cite{bd} are obtained. Da Costa has formulated a \nsystem in which only clusters upto a certain size ($N$) are permitted \nto coalesce with or fragment from other clusters. In the case of \n$N=2$, which is pertinent to the current study, only cluster-monomer \nand cluster-dimer interactions are allowed, for example \n\\begin{equation}\nC_r + C_1 \\rightleftharpoons C_{r+1} , \n\\qquad \nC_r + C_2 \\rightleftharpoons C_{r+2} . 
\n\\end{equation} \nThis leads to a system of kinetic equations of the form \n\\begin{eqnarray}\n\\frac{{\\rm d} c_r}{{\\rm d} t} & = & J_{r-1} - J_r + K_{r-2} - K_r , \n\t\\qquad (r\\geq3) , \\lbl{gbd-eq1} \\\\ \n\\frac{{\\rm d} c_2}{{\\rm d} t} & = & J_1 - J_2 - K_2 \n\t- \\displaystyle\\sum_{r=1}^\\infty K_r , \\\\ \n\\frac{{\\rm d} c_1}{{\\rm d} t} & = & - J_1 - K_2 \n\t- \\displaystyle\\sum_{r=1}^\\infty J_r , \\\\ \nJ_r &=& a_r c_r c_1 - b_{r+1} c_{r+1} , \\qquad \nK_r = \\alpha_r c_r c_2 - \\beta_{r+2} c_{r+2} . \n\\lbl{gbd-eq4} \\end{eqnarray}\nA simple example of such a system has been analysed \npreviously by Bolton \\& Wattis \\cite{bw-dimers}. \n\nIn the next subsection we generalise the model (\\ref{smol-eq}) to \ninclude a variety of `species' or `morphologies' of cluster, \nrepresenting left-handed, right-handed and achiral clusters. We \nsimplify the model in stages to one in which only monomer and dimer \ninteractions are described, and then one in which only dimer \ninteractions occur. \n\n\\subsection{A full microscopic model of chiral crystallisation}\n\nWe start by outlining all the possible cluster growth, fragmentation \nand transformation processes. We denote the two handed clusters \nby $X_r$, $Y_r$, where the subscript $r$ specifies the size of cluster. \nAchiral clusters are denoted by $C_r$, and we allow clusters to \nchange their morphology spontaneously according to \n\\begin{equation} \\begin{array}{rclclccrclcl}\nC_r & \\rightarrow & X_r & \\quad& {\\rm rate} = \\mu_r , &&\nX_r & \\rightarrow & C_r & \\quad& {\\rm rate} = \\mu_r \\nu_r , \\\\ \nC_r & \\rightarrow & Y_r & \\quad& {\\rm rate} = \\mu_r , && \nY_r & \\rightarrow & C_r & \\quad& {\\rm rate} = \\mu_r \\nu_r . \n\\end{array} \\end{equation}\nWe allow clusters to grow by coalescing with clusters \nof similar handedness or an achiral cluster. In the case of \nthe latter process, we assume that the cluster produced \nis chiral with the same chirality as the parent. 
Thus 
\begin{equation} \begin{array}{rclcl}
X_r + X_s & \rightarrow & X_{r+s} , && {\rm rate} = \xi_{r,s}, \\ 
X_r + C_s & \rightarrow & X_{r+s} , && {\rm rate} = \alpha_{r,s},\\ 
C_r + C_s & \rightarrow & C_{r+s} , && {\rm rate} = \delta_{r,s},\\ 
Y_r + C_s & \rightarrow & Y_{r+s} , && {\rm rate} = \alpha_{r,s},\\ 
Y_r + Y_s & \rightarrow & Y_{r+s} , && {\rm rate} = \xi_{r,s} . 
\end{array} \end{equation}
We do not permit clusters of opposite chirality to merge. 
Finally we describe fragmentation: all clusters may 
fragment, producing two smaller clusters, each of 
the same chirality as the parent cluster 
\begin{equation} \begin{array}{rclcl}
X_{r+s} & \rightarrow & X_r + X_s && {\rm rate} = \beta_{r,s}, \\ 
C_{r+s} & \rightarrow & C_r + C_s && {\rm rate} = \epsilon_{r,s}, \\ 
Y_{r+s} & \rightarrow & Y_r + Y_s &\quad& {\rm rate} = \beta_{r,s} . 
\end{array} \end{equation}
Setting up concentration variables for each size and each type of 
cluster by defining $c_r(t) = [C_r]$, $x_r(t) = [X_r]$, $y_r(t) = [Y_r]$ 
and applying the law of mass action, we obtain 
\begin{eqnarray}
\frac{{\rm d} c_r}{{\rm d} t} &\!=\!& -2\mu_r c_r + \mu_r\nu_r(x_r+y_r) 
	- \sum_{k=1}^\infty \alpha_{k,r} c_r (x_k+y_k) \\ && \nonumber 
	+ \mbox{$\frac{1}{2}$} \sum_{k=1}^{r-1} \left( \delta_{k,r-k} c_k c_{r-k} 
	- \epsilon_{k,r-k} c_r \right) 
	- \sum_{k=1}^\infty \left( \delta_{k,r} c_k c_r 
	- \epsilon_{k,r} c_{r+k} \right) , 
\lbl{gbd1} \\ 
\frac{{\rm d} x_r}{{\rm d} t} &\!=\!&\! \mu_r c_r \!-\! \mu_r \nu_r x_r 
	+ \sum_{k=1}^{r-1} \alpha_{k,r-k} c_k x_{r-k} 
	\!-\! \mbox{$\frac{1}{2}$} \sum_{k=1}^{r-1} \left( \xi_{k,r-k} x_k x_{r-k} 
	\!-\! \beta_{k,r\!-\!k} x_r \right) \nonumber \\ && 
	- \sum_{k=1}^\infty \left( \xi_{k,r} x_k x_r 
	- \beta_{k,r} x_{r+k} \right) , 
\\ 
\frac{{\rm d} y_r}{{\rm d} t} &\!=\!&\! 
\\mu_r c_r \\!-\\! \\mu_r \\nu_r y_r \n\t+ \\sum_{k=1}^{r-1} \\alpha_{k,r-k} c_k y_{r-k} \n\t\\!-\\! \\mbox{$\\frac{1}{2}$} \\sum_{k=1}^{r-1} \\left( \\xi_{k,r-k} y_k y_{r-k} \n\t\\!-\\! \\beta_{k,r\\!-\\!k} y_r \\right) \\nonumber \\\\ && \n\t- \\sum_{k=1}^\\infty \\left( \\xi_{k,r} y_k y_r \n\t- \\beta_{k,r} y_{r+k} \\right) . \n\\lbl{gbd3} \\end{eqnarray}\nThe main problem with such a model is the vast number \nof parameters that have been introduced ($\\alpha_{r,k}$, \n$\\xi_{r,k}$, $\\beta_{r,k}$, $\\mu_r$, $\\nu_r$, $\\delta_{r,k}$, \n$\\epsilon_{r,k}$, for all $k,r$). \n\nHence we make several simplifications: \n\\begin{description} \n\\item[(i)] \nwe assume that the dominant coagulation and fragmentation \nprocesses are between large and very small clusters (rather \nthan large clusters and other large clusters). Specifically, we \nassume that only coalescences involving $C_1$ and $C_2$ \nneed to be retained in the model, and fragmentation always \nyields either a monomer or a dimer fragment. This assumption \nmeans that the system can be reduced to a generalised \nBecker-D\\\"{o}ring equation closer to the form of \n(\\ref{gbd-eq1})--(\\ref{gbd-eq4}) rather than (\\ref{smol-eq}); \n\n\\item[(ii)] \nwe also assume that the achiral clusters are unstable at larger size, \nso that their presence is only relevant at small sizes. \nTypically at small sizes, clusters are amorphous and do not take \non the properties of the bulk phase, hence at small sizes clusters \ncan be considered achiral. We assume that there is a regime of \ncluster sizes where there is a transition to chiral structures, and \nwhere clusters can take on the bulk structure (which is chiral) as well \nas exist in amorphous form. 
At even larger sizes, we assume that 
only the chiral forms exist, and no achiral structure can be adopted; 

\item[(iii)] 
furthermore, we assume that all rates are independent of cluster size, 
specifically, 
\begin{eqnarray}
\alpha_{_{k,1}} & = & a , \qquad \qquad 
\alpha_{_{k,2}} = \alpha , \qquad \quad 
\alpha_{_{k,r}} =0 , \quad (r\geq3) , 	\\ 
\mu_2 &=& \mu , \qquad \qquad 
\mu_r=0 , \quad (r\geq3) , 	\\ 
\nu_2 &=& \nu , \qquad \qquad 
\nu_r=0 , \quad (r\geq3) , 	\\ 
\delta_{1,1} & = & \delta , \qquad 
\delta_{k,r} = 0 , \quad ({\rm otherwise}) , 	\\ 
\epsilon_{1,1} & = & \epsilon , \qquad 
\epsilon_{k,r} = 0 , \quad ({\rm otherwise}) ,	\\ 
\xi_{k,2} &=& \xi_{2,k} = \xi , \qquad 
\xi_{k,r} = 0 , \quad ({\rm otherwise}) ,		\\
\beta_{k,1} & = & \beta_{1,k} = b , \qquad 
\beta_{k,2} = \beta_{2,k} = \beta , \qquad 
\beta_{k,r} = 0 , \quad ({\rm otherwise}) . 
\nonumber \\ && \end{eqnarray}
Ultimately we will set $a=b=0=\delta=\epsilon$ so that we have only 
five parameters to consider ($\alpha$, $\xi$, $\beta$, $\mu$, $\nu$). 
\n\n\\end{description} \n\n\\begin{figure}[!ht]\n\\begin{picture}(500,160)(35,-35)\n\\put(48,50){\\circle{15}}\n\\multiput(80,50)(40,0){6}{\\circle{15}}\n\\multiput(80,10)(40,0){8}{\\circle{15}}\n\\multiput(80,90)(40,0){8}{\\circle{15}}\n\\put(044,48){$c_1$}\n\\put(076,48){$c_2$}\n\\put(116,48){$c_3$}\n\\put(156,48){$c_4$}\n\\put(196,48){$c_5$}\n\\put(236,48){$c_6$}\n\\put(276,48){$c_7$}\n\\put(076,88){$x_2$}\n\\put(116,88){$x_3$}\n\\put(156,88){$x_4$}\n\\put(196,88){$x_5$}\n\\put(236,88){$x_6$}\n\\put(276,88){$x_7$}\n\\put(316,88){$x_8$}\n\\put(356,88){$x_9$}\n\\put(076,08){$y_2$}\n\\put(116,08){$y_3$}\n\\put(156,08){$y_4$}\n\\put(196,08){$y_5$}\n\\put(236,08){$y_6$}\n\\put(276,08){$y_7$}\n\\put(316,08){$y_8$}\n\\put(356,08){$y_9$}\n\\multiput(83,20)(40,0){6}{\\vector(0,1){20}}\n\\multiput(77,40)(40,0){6}{\\vector(0,-1){20}}\n\\multiput(77,60)(40,0){6}{\\vector(0,1){20}}\n\\multiput(83,80)(40,0){6}{\\vector(0,-1){20}}\n\\multiput(69,30)(40,0){6}{$\\mu$}\n\\multiput(85,30)(40,0){6}{$\\nu$}\n\\multiput(69,70)(40,0){6}{$\\mu$}\n\\multiput(85,70)(40,0){6}{$\\nu$}\n\\put(058,53){\\vector(1,0){10}}\n\\put(068,50){\\vector(-1,0){10}}\n\\multiput(095,53)(40,0){5}{\\vector(1,0){10}}\n\\multiput(105,50)(40,0){5}{\\vector(-1,0){10}}\n\\multiput(095,95)(40,0){7}{\\vector(1,0){10}}\n\\multiput(105,90)(40,0){7}{\\vector(-1,0){10}}\n\\multiput(095,15)(40,0){7}{\\vector(1,0){10}}\n\\multiput(105,10)(40,0){7}{\\vector(-1,0){10}}\n\\put(060,59){$\\delta$}\n\\put(060,40){$\\epsilon$}\n\\multiput(097,59)(40,0){5}{$\\delta$}\n\\multiput(097,40)(40,0){5}{$\\epsilon$}\n\\multiput(097,97)(40,0){5}{$a$}\n\\multiput(097,80)(40,0){5}{$b$}\n\\multiput(097,17)(40,0){5}{$a$}\n\\multiput(097,00)(40,0){5}{$b$}\n\\put(118,119){$\\beta$}\n\\put(127,115){\\vector(-1,0){10}}\n\\put(120,100){\\oval(80,30)[t]}\n\\put(120, 
0){\\oval(80,30)[b]}\n\\put(127,-15){\\vector(-1,0){10}}\n\\put(118,-27){$\\beta$}\n\\put(158,124){$\\beta$}\n\\put(167,120){\\vector(-1,0){10}}\n\\put(160,100){\\oval(80,40)[t]}\n\\put(160, 0){\\oval(80,40)[b]}\n\\put(167,-20){\\vector(-1,0){10}}\n\\put(158,-32){$\\beta$}\n\\put(198,119){$\\beta$}\n\\put(207,115){\\vector(-1,0){10}}\n\\put(200,100){\\oval(80,30)[t]}\n\\put(200, 0){\\oval(80,30)[b]}\n\\put(207,-15){\\vector(-1,0){10}}\n\\put(198,-27){$\\beta$}\n\\put(238,124){$\\beta$}\n\\put(247,120){\\vector(-1,0){10}}\n\\put(240,100){\\oval(80,40)[t]}\n\\put(240, 0){\\oval(80,40)[b]}\n\\put(247,-20){\\vector(-1,0){10}}\n\\put(238,-32){$\\beta$}\n\\put(278,119){$\\beta$}\n\\put(287,115){\\vector(-1,0){10}}\n\\put(280,100){\\oval(80,30)[t]}\n\\put(280, 0){\\oval(80,30)[b]}\n\\put(287,-15){\\vector(-1,0){10}}\n\\put(278,-27){$\\beta$}\n\\put(318,124){$\\beta$}\n\\put(327,120){\\vector(-1,0){10}}\n\\put(320,100){\\oval(80,40)[t]}\n\\put(320, 0){\\oval(80,40)[b]}\n\\put(327,-20){\\vector(-1,0){10}}\n\\put(318,-32){$\\beta$}\n\\end{picture} \n\\caption{Reaction scheme involving monomer and \ndimer aggregation and fragmentation of achiral clusters \nand those of both handednesses (right and left). \nThe aggregation of achiral and chiral clusters \nis not shown (rates $\\alpha$, $\\xi$). }\n\\label{rec-sch-fig}\n\\end{figure}\n\nThis scheme is illustrated in Figure \\ref{rec-sch-fig}. However, \nbefore writing down a further system of equations, we make one \nfurther simplification. We take the transition region described in (ii), \nabove, to be just the dimers. Thus the only types of achiral cluster \nare the monomer and the dimer ($c_1$, $c_2$); dimers exist in achiral, \nright- and left-handed forms ($c_2$, $x_2$, $y_2$); at larger sizes \nonly left- and right-handed clusters exist ($x_r$, $y_r$, $r\\geq2$). 
\n\nThe kinetic equations can be reduced to \n\\begin{eqnarray} \n\\frac{{\\rm d} c_1}{{\\rm d} t} & = & 2 \\varepsilon c_2 - 2 \\delta c_1^2 \n\t- \\sum_{r=2}^\\infty ( a c_1 x_r + a c_1 y_r - b x_{r+1} \n\t- b y_{r+1} ) , \\lbl{gbd-c1}\n\t\\\\ \n\\frac{{\\rm d} c_2}{{\\rm d} t} & = & \\delta c_1^2 - \\varepsilon c_2 - 2 \\mu c_2 \n\t+ \\mu\\nu (x_2+y_2) - \\sum_{r=2}^\\infty \\alpha c_2 (x_r+y_r) , \n\t\\\\\n\\frac{{\\rm d} x_r}{{\\rm d} t} & = & a c_1 x_{r-1} - b x_r \n\t- a c_1 x_r + b x_{r+1} + \\alpha c_2 x_{r-2} \n\t- \\alpha c_2 x_r \\nonumber\\\\ && \n\t- \\beta x_r + \\beta x_{r+2} + \\xi x_2 x_{r-2} \n\t- \\xi x_2 x_r , \\qquad \\hfill (r\\geq4) , \n\t\\\\ \n\\frac{{\\rm d} x_3}{{\\rm d} t} & = & a c_1 x_2 - b x_3 - a c_1 x_3 \n\t+ b x_4 - \\alpha c_2 x_3 - \\xi x_2 x_3 + \\beta x_5 , \n\t\\\\ \n\\frac{{\\rm d} x_2}{{\\rm d} t} & = & \\mu c_2 - \\mu\\nu x_2 + b x_3 \n\t- a c_1 x_2 - \\alpha x_2 c_2 \n\t+ \\beta x_4 \\nonumber \\\\ && + \\sum_{r=2}^\\infty \\beta x_{r+2} \n\t- \\sum_{r=2}^\\infty \\xi x_2 x_r - \\xi x_2^2 , \n\t\\\\ \n\\frac{{\\rm d} y_r}{{\\rm d} t} & = & a c_1 y_{r-1} - b y_r \n\t- a c_1 y_r + b y_{r+1} + \\alpha c_2 y_{r-2} \n\t- \\alpha c_2 y_r \\nonumber\\\\ && \n\t- \\beta y_r + \\beta y_{r+2} + \\xi y_2 y_{r-2} \n\t- \\xi y_2 y_r , \\qquad \\hfill (r\\geq4), \n\t\\\\ \n\\frac{{\\rm d} y_3}{{\\rm d} t} & = & a c_1 y_2 - b y_3 - a c_1 y_3 \n\t+ b y_4 - \\alpha c_2 y_3 - \\xi y_2 y_3 + \\beta y_5 , \n\t\\\\ \n\\frac{{\\rm d} y_2}{{\\rm d} t} & = & \\mu c_2 - \\mu\\nu y_2 + b y_3 \n\t- a c_1 y_2 - \\alpha y_2 c_2 \n\t+ \\beta y_4 \\nonumber \\\\ && + \\sum_{r=2}^\\infty \\beta y_{r+2} \n\t- \\sum_{r=2}^\\infty \\xi y_2 y_r - \\xi y_2^2 . \\lbl{gbd-y2}\n\\end{eqnarray} \n\n\\subsection{Summary and simulations of the macroscopic model}\n\\label{macro-sec} \n\nThe advantage of the above simplifications is that certain sums \nappear repeatedly; by defining new quantities as these sums, \nthe system can be written in a simpler fashion. 
We define $N_x = 
\sum_{r=2}^\infty x_r$, $N_y = \sum_{r=2}^\infty y_r$, then 
\begin{eqnarray}
\frac{{\rm d} c_1}{{\rm d} t} & = & 2 \varepsilon c_2 - 2 \delta c_1^2 
	- a c_1 (N_x+N_y) + b (N_x-x_2+N_y-y_2) , \lbl{macro-c1}\\ 
\frac{{\rm d} c_2}{{\rm d} t} & = & \delta c_1^2 - \varepsilon c_2 - 2 \mu c_2 
	+ \mu\nu (x_2+y_2) - \alpha c_2(N_x+N_y) ,\\
\frac{{\rm d} N_x}{{\rm d} t} & = & \mu c_2 - \mu\nu x_2 
	+ \beta (N_x-x_3-x_2) - \xi x_2 N_x , \\ 
\frac{{\rm d} x_2}{{\rm d} t} & = & \mu c_2 - \mu\nu x_2 + b x_3 
	- a c_1 x_2 - \alpha x_2 c_2 + \beta (x_4+N_x-x_2-x_3) 
	\nonumber \\ && -\xi x_2^2 - \xi x_2 N_x , \\ 
\frac{{\rm d} N_y}{{\rm d} t} & = & \mu c_2 - \mu\nu y_2 
	+ \beta (N_y-y_3-y_2) - \xi y_2 N_y , \\ 
\frac{{\rm d} y_2}{{\rm d} t} & = & \mu c_2 - \mu\nu y_2 + b y_3 
	- a c_1 y_2 - \alpha y_2 c_2 + \beta (y_4+N_y-y_2-y_3) 
	\nonumber\\ && - \xi y_2^2 - \xi y_2 N_y . \lbl{macro-y2}
\end{eqnarray} 
However, such a system of equations is not `closed'. The equations 
contain $x_3,y_3,x_4,y_4$, and yet we have no expressions for 
these; reintroducing equations for $x_3,y_3$ would introduce 
$x_5,y_5$, and so we would enter an infinite regression. 

\begin{figure}[!ht]
\vspace*{65mm}
\special{psfile=fig-fullt.eps 
	hscale=85 vscale=60 hoffset=-80 voffset=-160}
\caption{ Plot of the concentrations $c_1$, $c_2$, 
$N_x$, $N_y$, $N=N_x+N_y$, $\varrho_x$, $\varrho_y$, 
$\varrho_x+\varrho_y$ and $\varrho_x+\varrho_y+2c_2+c_1$ 
against time $t$ on a logarithmic timescale. Since the model equations 
are in nondimensional form, the time units are arbitrary. 
Parameter values $\mu=1.0$, $\nu=0.5$, $\delta=1$, $\varepsilon=5$, 
$a=4$, $b=0.02$, $\alpha=10$, $\xi=10$, $\beta=0.03$, with 
initial conditions $c_2(0)=0.49$, $x_4(0)=0.004$, $y_4(0)=0.006$, 
and all other concentrations zero. 
}
\label{fig-fullt}
\end{figure}

\begin{figure}[!ht]
\vspace*{73mm}
\special{psfile=fig-fullxy.eps 
	hscale=80 vscale=60 hoffset=-80 voffset=-145}
\caption{Plot of the cluster size distribution at 
$t=0$ (dashed line), $t=112$ (dotted line) and 
$t=9.4\times10^5$ (solid line). Parameters and initial conditions 
as in Figure \protect\ref{fig-fullt}. }
\label{fig-fullxy}
\end{figure}

Hence we need to find some suitable alternative expressions for 
$x_3,y_3,x_4,y_4$, or an alternative way of reducing the system to 
just a few ordinary differential equations that can easily be analysed. 
Such systems are considered in Sections \ref{tetra-sec}, 
\ref{hex-sec} and \ref{new-sec}. Before that, however, we illustrate 
the behaviour of the system by briefly presenting the results of some 
numerical simulations. In Figures \ref{fig-fullt} and \ref{fig-fullxy} 
we show the results of a simulation of (\ref{macro-c1})--(\ref{macro-y2}). 
The former shows the evolution of the concentrations: $c_1$ rises 
and then decays, while $c_2$ decays, since the parameters have 
been chosen to reflect a cluster-dominated system. Also plotted are 
the numbers of clusters $N_x,N_y$ and the mass of material in 
clusters $\varrho_x$, $\varrho_y$ defined by 
\begin{equation} 
\varrho_x = \sum_{j=2}^K j x_j , \qquad \varrho_y = \sum_{j=2}^K j y_j . 
\end{equation} 
Note that under this definition $\varrho_x+\varrho_y+c_1+2c_2$ 
is conserved, and this is plotted as {\sl rho}. Both the total number 
of clusters, $N_x+N_y$, and the total mass of material in handed 
clusters, $\varrho_x+\varrho_y$, appear to equilibrate by $t=10^2$; 
however, at a much later time ($t\sim 10^4 - 10^5$) a 
symmetry-breaking bifurcation occurs, and the system changes from 
almost racemic (that is, symmetric) to asymmetric. This is more 
clearly seen in Figure \ref{fig-fullxy}, where we plot the cluster size 
distribution at three time points. 
At $t=0$ there are only dimers \npresent (dashed line), and we impose a small difference in the \nconcentrations of $x_2$ and $y_2$. At a later time, $t=112$ \n(dotted line), there is almost no difference between the $X$- and \n$Y$-distributions, however by the end of the simulation ($t\\sim10^6$, \nsolid line) one distribution clearly completely dominates the other. \n\n\\subsection{Simplified macroscopic model}\n\nTo obtain the simplest model which involves three polymorphs \ncorresponding to right-handed and left-handed chiral clusters and \nachiral clusters, we now aim to simplify the processes of cluster \naggregation and fragmentation in (\\ref{macro-c1})--(\\ref{macro-y2}). \nOur aim is to retain the symmetry-breaking phenomenon but \neliminate physical processes which are not necessary for it to occur. \n\nOur first simplification is to remove all clusters of odd size from the \nmodel, and just consider dimers, tetramers, hexamers, {\\em etc}. \nThis corresponds to putting $a=0$, $b=0$ which removes $x_3$ \nand $y_3$ from the system. \nFurthermore, we put $\\varepsilon=0$ and make $\\delta$ large, so that the \nachiral monomer is rapidly and irreversibly converted to achiral dimer. \nSince the monomers do not then influence the evolution of any of \nthe other variables, we further simplify the system by ignoring $c_1$ \n(or, more simply, just impose initial data in which $c_1(0)=0$). \nThus we are left with \n\\begin{eqnarray}\n\\!\\!\\!\\frac{{\\rm d} c_2}{{\\rm d} t} & \\!=\\! & - 2 \\mu c_2 \n\t+ \\mu\\nu (x_2+y_2) - \\alpha c_2(N_x+N_y) , \\lbl{smm2} \\\\\n\\!\\!\\!\\frac{{\\rm d} N_x}{{\\rm d} t} & \\!=\\! & \\mu c_2 - \\mu\\nu x_2 \n\t+ \\beta (N_x-x_2) - \\xi x_2 N_x , \\\\ \n\\!\\!\\!\\frac{{\\rm d} x_2}{{\\rm d} t} & \\!=\\! & \\!\\mu c_2 - \\mu\\nu x_2 - \\alpha x_2 c_2 \n\t+ \\beta (N_x\\!-\\!x_2 \\!+\\! x_4 ) - \\xi x_2^2 - \\xi x_2 N_x , \\\\ \n\\!\\!\\!\\frac{{\\rm d} N_y}{{\\rm d} t} & \\!=\\! 
& \\mu c_2 - \\mu\\nu y_2 \n\t+ \\beta (N_y-y_2) - \\xi y_2 N_y , \\\\ \n\\!\\!\\!\\frac{{\\rm d} y_2}{{\\rm d} t} & \\!=\\! & \\!\\mu c_2 - \\mu\\nu y_2 - \\alpha y_2 c_2 \n\t+ \\beta (N_y\\!-\\!y_2 \\!+\\! y_4) - \\xi y_2^2 - \\xi y_2 N_y . \\lbl{smmy2}\n\\end{eqnarray} \nSince we have removed four parameters from the model, \nand halved the number of dependent variables, we show a couple \nof numerical simulations just to show that the system \nabove does still exhibit symmetry-breaking behaviour. \n\n\\begin{figure}[!ht]\n\\vspace*{69mm}\n\\special{psfile=fig-dimt.eps \n\thscale=85 vscale=58 hoffset=-80 voffset=-150}\n\\caption{ Plot of the concentrations $c_1$, $c_2$, \n$N_x$, $N_y$, $N=N_x+N_y$, $\\varrho_x$, $\\varrho_y$, \n$\\varrho_x+\\varrho_y$ and $\\varrho_x+\\varrho_y+2c_2+c_1$ \nagainst time, $t$ on a logarithmic timescale. Since model \nequations are in nondimensional form, the time units are \narbitrary. Parameter values $\\mu=1$, $\\nu=0.5$, \n$\\alpha=10$, $\\xi=10$, $\\beta=0.03$, with initial \nconditions $c_2=0.49$, $x_4(0)=0.004$, $y_4(0)=0.006$, \nall other concentrations zero. }\n\\label{fig-dimt}\n\\end{figure}\n\n\\begin{figure}[!ht]\n\\vspace*{71mm}\n\\special{psfile=fig-dimxy.eps \n\thscale=80 vscale=58 hoffset=-80 voffset=-140}\n\\caption{Plot of the cluster size distribution at \n$t=0$ (dashed line), $t=250$ (dotted line) and \n$t=6\\times10^5$. Parameters and initial conditions \nas in Figure \\protect\\ref{fig-dimt}. }\n\\label{fig-dimxy}\n\\end{figure}\n\n\\begin{figure}[!ht]\n\\vspace*{70mm}\n\\special{psfile=fig-dimtt.eps \n\thscale=80 vscale=60 hoffset=-80 voffset=-150}\n\\caption{ Plot of the concentrations $c_1$, $c_2$, $N_x$, $N_y$, \n$N=N_x+N_y$, $\\varrho_x$, $\\varrho_y$, $\\varrho_x+\\varrho_y$ and \n$\\varrho_x+\\varrho_y+2c_2+c_1$ against time, $t$ on a logarithmic timescale. \nParameters and initial conditions as in Figure \\protect\\ref{fig-dimt}. 
}
\label{fig-dimtt}
\end{figure}

Figure \ref{fig-dimt} appears similar to Figure \ref{fig-fullt}, 
suggesting that removing the monomer interactions has changed the 
underlying dynamics little. We still observe the characteristic 
equilibration of cluster numbers and cluster masses as $c_2$ decays, 
and then a period of quiescence ($t\sim10$ to $10^4$) before a later 
symmetry-breaking event, around $t\sim10^5$. 
At first sight, the distribution of $X$- and $Y$-clusters displayed in 
Figure \ref{fig-dimxy} is quite different to Figure \ref{fig-fullxy}; this 
is due to the absence of monomers from the system, meaning that 
only even-sized clusters can now be formed. Looking only at the 
even-sized clusters in Figure \ref{fig-dimxy}, we once again see only 
a slight difference at $t=0$ (dashed line), almost no difference at 
$t\approx250$ (dotted line), but a significant difference at 
$t=6\times10^5$ (solid line). 
We include one further graph here, Figure \ref{fig-dimtt}, which is 
similar to Figure \ref{fig-dimt} but on a linear rather than a 
logarithmic timescale. This should be compared with Figures 3 and 4 of 
Viedma \cite{viedma} and Figure 1 of Noorduin {\em et al.}\ \cite{wim}. 
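The simulations just described can be reproduced in outline with a short script. The following is an illustrative sketch (not the code used to produce the paper's figures), assuming explicit Euler timestepping, a truncation at a maximum cluster size $2K$ at which the largest clusters neither grow nor are produced by growth (so that the total mass is conserved exactly), and the parameter values quoted in the caption of Figure \ref{fig-dimt}; the function names `rhs`, `simulate` and `mass` are introduced here purely for illustration.

```python
# Sketch of the even-cluster system (smm2)-(smmy2): achiral dimers c2 and
# chiral clusters x[i], y[i] of size 2(i+1), truncated at size 2K so that
# the largest clusters do not grow further (keeping mass exactly conserved).

def rhs(c2, x, y, mu=1.0, nu=0.5, alpha=10.0, xi=10.0, beta=0.03):
    """Time derivatives of c2 and the even-sized cluster arrays."""
    K = len(x)
    dx, dy = [0.0] * K, [0.0] * K
    # interconversion of achiral and chiral dimers (rates mu, mu*nu)
    dc2 = -2.0 * mu * c2 + mu * nu * (x[0] + y[0])
    for z, dz in ((x, dx), (y, dy)):
        dz[0] += mu * c2 - mu * nu * z[0]
        for i in range(K - 1):
            # growth: size 2(i+1) plus an achiral dimer (rate alpha)
            # or a chiral dimer (rate xi) gives size 2(i+2)
            g = alpha * c2 * z[i] + xi * z[0] * z[i]
            dz[i] -= g
            dz[i + 1] += g
            dc2 -= alpha * c2 * z[i]
            dz[0] -= xi * z[0] * z[i]
            # fragmentation: size 2(i+2) loses a chiral dimer (rate beta)
            f = beta * z[i + 1]
            dz[i + 1] -= f
            dz[i] += f
            dz[0] += f
    return dc2, dx, dy

def simulate(T=50.0, dt=0.002, K=8):
    """Explicit Euler integration from the initial data of Figure 3."""
    c2 = 0.49
    x, y = [0.0] * K, [0.0] * K
    x[1], y[1] = 0.004, 0.006          # slight tetramer imbalance
    for _ in range(int(T / dt)):
        dc2, dx, dy = rhs(c2, x, y)
        c2 += dt * dc2
        x = [a + dt * d for a, d in zip(x, dx)]
        y = [a + dt * d for a, d in zip(y, dy)]
    return c2, x, y

def mass(c2, x, y):
    # total mass 2*c2 + sum over sizes r = 2(i+1) of r*(x_r + y_r)
    return 2.0 * c2 + sum(2 * (i + 1) * (x[i] + y[i])
                          for i in range(len(x)))
```

On the short time horizon used here the script only demonstrates mass conservation and the equilibration of the cluster populations; observing the late-time symmetry-breaking bifurcation of Figure \ref{fig-dimt} requires integrating to $t\sim10^5$.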
\n\n\\section{The truncation at tetramers} \n\\label{tetra-sec} \n\\setcounter{equation}{0}\n\n\\begin{figure}[!ht]\n\\begin{picture}(500,120)(0,-10)\n\\put(80,50){\\circle{17}}\n\\put(077,48){$c_2$}\n\\put(080,10){\\circle{17}}\n\\put(077,08){$y_2$}\n\\put(080,90){\\circle{17}}\n\\put(077,88){$x_2$}\n\\put(140,10){\\circle{20}}\n\\put(137,08){$y_4$}\n\\put(140,90){\\circle{20}}\n\\put(137,88){$x_4$}\n\\put(77,20){\\vector(0,1){20}}\n\\put(83,40){\\vector(0,-1){20}}\n\\put(83,60){\\vector(0,1){20}}\n\\put(77,80){\\vector(0,-1){20}}\n\\put(63,30){$\\nu\\mu$}\n\\put(85,30){$\\mu$}\n\\put(63,70){$\\nu\\mu$}\n\\put(85,70){$\\mu$}\n\\put(107,97){$\\beta$}\n\\put(125,95){\\vector(-1,0){30}}\n\\put(095,90){\\vector(1,0){30}}\n\\put(107,81){$\\xi$}\n\\put(110,85){\\oval(30,20)[b]}\n\\put(125,85){\\vector(0,1){5}}\n\\put(113,58){$\\alpha$}\n\\put(090,52){\\vector(3,+2){40}}\n\\put(090,48){\\vector(3,-2){40}}\n\\put(113,38){$\\alpha$}\n\\put(125,17){\\vector(0,-1){5}}\n\\put(110,17){\\oval(30,20)[t]}\n\\put(107,12){$\\xi$}\n\\put(095,10){\\vector(1,0){30}}\n\\put(125,05){\\vector(-1,0){30}}\n\\put(107,-5){$\\beta$}\n\\end{picture} \n\\caption{Simplest possible reaction scheme which might \nexhibit chiral symmetry-breaking. 
}
\label{simp-rec-sch-fig}
\end{figure}

The simplest possible reaction scheme of the form 
(\ref{gbd-c1})--(\ref{gbd-y2}) which we might expect to exhibit 
symmetry-breaking to homochirality is the system truncated at 
tetramers, namely 
\begin{eqnarray} 
\displaystyle\frac{{\rm d} c_2}{{\rm d} t} & = & - 2\mu c_2 + \mu\nu (x_2+y_2) 
	-\alpha c_2(x_2+y_2) , \lbl{dim-c2dot} \\ 
\displaystyle\frac{{\rm d} x_2}{{\rm d} t} & = & \mu c_2 - \mu\nu x_2 
	- \alpha c_2 x_2 - 2 \xi x_2^2 + 2 \beta x_4 , \\ 
\displaystyle\frac{{\rm d} y_2}{{\rm d} t} & = & \mu c_2 - \mu\nu y_2 
	- \alpha c_2 y_2 - 2 \xi y_2^2 + 2 \beta y_4 , \\ 
\displaystyle\frac{{\rm d} x_4}{{\rm d} t} & = & \alpha x_2 c_2 + \xi x_2^2 - \beta x_4 , \\ 
\displaystyle\frac{{\rm d} y_4}{{\rm d} t} & = & \alpha y_2 c_2 + \xi y_2^2 - \beta y_4 . 
\lbl{dim-y4dot} \end{eqnarray} 

We investigate the symmetry-breaking by transforming 
the variables $x_2$, $x_4$, $y_2$, $y_4$ according to 
\begin{eqnarray}
x_2 = \mbox{$\frac{1}{2}$} z (1+\theta) , &\quad& y_2 = \mbox{$\frac{1}{2}$} z (1-\theta) , 
\lbl{tet-ztheta} \\ 
x_4 = \mbox{$\frac{1}{2}$} w (1+\phi) , && y_4 = \mbox{$\frac{1}{2}$} w (1-\phi) , 
\lbl{tet-wphi} 
\end{eqnarray}
where $z=x_2+y_2$ is the total concentration of chiral dimers, 
$w=x_4+y_4$ is the total tetramer concentration, $\theta=(x_2-y_2)/z$ 
is the relative chirality of the dimers, and $\phi=(x_4-y_4)/w$ is the 
relative chirality of the tetramers. 
Hence 
\begin{eqnarray}
\frac{{\rm d} c_2}{{\rm d} t} & = & - 2\mu c_2 + \mu\nu z - \alpha c_2 z , 
	\lbl{c23} \\ 
\frac{{\rm d} z}{{\rm d} t} & = & 2 \mu c_2 - \mu\nu z - \alpha c_2 z 
	- \xi z^2 (1+\theta^2) + 2 \beta w , \\ 
\frac{{\rm d} w}{{\rm d} t} & = & \alpha z c_2 + \mbox{$\frac{1}{2}$} \xi z^2 (1+\theta^2) 
	- \beta w , \lbl{w3}\\ 
\frac{{\rm d} \theta}{{\rm d} t} & = & - \theta \left( \frac{2\mu c}{z} + 
	\frac{2\beta w}{z}+ \xi z (1-\theta^2) \right) + 
	\frac{2\beta w\phi}{z} , \\
\frac{{\rm d} \phi}{{\rm d} t} & = & \theta \frac{z}{w} ( \alpha c + \xi z ) 
	- \left( \alpha c + \mbox{$\frac{1}{2}$} \xi z (1+\theta^2) \right) \frac{z}{w} \phi . 
\end{eqnarray}
The stability of the evolving symmetric state ($\theta=\phi=0$) 
is given by the eigenvalues ($q$) of the matrix 
\begin{equation}
\left( \begin{array}{cc} 
- \left( \frac{2\mu c}{z} + \frac{2\beta w}{z} + \xi z \right) & 
\frac{2\beta w}{z} \\ 
(\alpha c + \xi z) \frac{z}{w} & 
- (\alpha c + \mbox{$\frac{1}{2}$} \xi z) \frac{z}{w} 
\end{array} \right) , 
\end{equation}
which are given by
\begin{eqnarray}
q^2 + q \left( \frac{\alpha c z}{w} + \frac{\xi z^2}{2w} 
+ \frac{2\mu c}{z} + \xi z + \frac{2\beta w}{z} \right) + && \nonumber \\ 
\frac{1}{w} \left( 2\mu c \alpha c + \mu c \xi z + 
\alpha c \xi z^2 + \mbox{$\frac{1}{2}$} \xi^2 z^3 - \beta \xi z w \right) &=&0 . 
\end{eqnarray}
Hence there is an instability if 
\begin{equation}
\beta \xi z w > 2\mu c \alpha c + \mu c \xi z + 
\alpha c \xi z^2 + \mbox{$\frac{1}{2}$} \xi^2 z^3 . 
\lbl{crude-instab}
\end{equation}
Using the steady-state result that $2\beta w = z(2\alpha c + \xi z)$ 
and factorising ($2\alpha c + \xi z$) out of the result reduces the 
instability (\ref{crude-instab}) to the contradictory $\xi z^2 > 
\xi z^2 + 2\mu c$. 
Hence the racemic steady-state of the system \nis stable for all choices of parameter values and is approached \nfrom all initial conditions. However, initial perturbations \nmay be amplified due to the presence of nonlinear terms. \n\n\\begin{figure}[!ht]\n\\vspace*{75mm}\n\\special{psfile=fig-tet3.eps \nhscale=80 vscale=67 hoffset=-60 voffset=-175}\n\\caption{The concentrations $c_2$, $z$ and $w$ \n(\\protect\\ref{tet-ztheta})--(\\protect\\ref{tet-wphi}) plotted against \ntime, for the tetramer-truncated system with the two sets of initial \ndata (\\protect\\ref{tet-ics}). Since the model equations \nare in nondimensional form, the time units are arbitrary. The \nparameter values are $\\mu=1$, $\\nu=0.5$, $\\alpha=\\xi=10$, \n$\\beta=0.1$. }\n\\label{fig-tet3}\n\\end{figure}\n\n\\begin{figure}[!ht]\n\\vspace*{75mm}\n\\special{psfile=fig-tet4.eps \nhscale=80 vscale=67 hoffset=-60 voffset=-175}\n\\caption{The chiralities $\\theta$, $\\phi$ \n(\\protect\\ref{tet-ztheta})--(\\protect\\ref{tet-wphi}) plotted against \ntime, for the tetramer-truncated system with \nthe two sets of initial data (\\protect\\ref{tet-ics}). Since the model equations \nare in nondimensional form, the time units are arbitrary. The \nparameter values are the same as in Figure \\ref{fig-tet3}.}\n\\label{fig-tet4}\n\\end{figure}\n\nEvolution from two sets of initial conditions of the system \n(\\ref{dim-c2dot})--(\\ref{dim-y4dot}) is shown in each of Figures \n\\ref{fig-tet3}, \\ref{fig-tet4}. The continuous and dotted lines \ncorrespond to the initial data \n\\begin{equation} \\begin{array}{c} \nc_2(0) = 0.29 , \\quad x_2(0) = 0.0051, \\quad y_2(0) = 0.0049, \\\\ \nx_4(0) = 0.051 , \\quad y_4(0) = 0.049 ; \\quad {\\rm and} \\\\ \nc_2(0) = 0 , \\quad x_2(0) = 0.051 , \\quad y_2(0) = 0.049, \\\\ \nx_4(0) = 0.1 , \\quad y_4(0) = 0.1 ; \n\\end{array} \\lbl{tet-ics} \\end{equation}\nrespectively. 
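As a cross-check on the behaviour reported in the figures, the truncated system (\ref{dim-c2dot})--(\ref{dim-y4dot}) can be integrated directly. The sketch below is an illustration only: it assumes a simple forward-Euler scheme (the step size and time horizon are choices made here, not values from the text), uses the quoted parameter values and the first data set of (\ref{tet-ics}), and checks that the chiralities decay while the conserved mass $2c_2+2z+4w$ stays fixed.

```python
# Illustrative forward-Euler integration of the tetramer-truncated system
# (dim-c2dot)-(dim-y4dot); parameters as quoted for the figures and the
# first initial data set of (tet-ics). Step size/horizon are assumptions.
mu, nu, alpha, xi, beta = 1.0, 0.5, 10.0, 10.0, 0.1
c2, x2, y2, x4, y4 = 0.29, 0.0051, 0.0049, 0.051, 0.049
dt = 1e-3

for _ in range(int(400/dt)):   # integrate to t = 400
    dc2 = -2*mu*c2 + mu*nu*(x2 + y2) - alpha*c2*(x2 + y2)
    dx2 = mu*c2 - mu*nu*x2 - alpha*c2*x2 - 2*xi*x2**2 + 2*beta*x4
    dy2 = mu*c2 - mu*nu*y2 - alpha*c2*y2 - 2*xi*y2**2 + 2*beta*y4
    dx4 = alpha*x2*c2 + xi*x2**2 - beta*x4
    dy4 = alpha*y2*c2 + xi*y2**2 - beta*y4
    c2 += dt*dc2; x2 += dt*dx2; y2 += dt*dy2; x4 += dt*dx4; y4 += dt*dy4

theta = (x2 - y2)/(x2 + y2)    # relative chirality of dimers
phi = (x4 - y4)/(x4 + y4)      # relative chirality of tetramers
mass = 2*c2 + 2*(x2 + y2) + 4*(x4 + y4)   # conserved total mass
print(theta, phi, mass)
```

Both chiralities approach zero, consistent with the racemic steady-state being stable, and the total mass remains at its initial value of unity.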
\nIn the former case, the system starts with a considerable amount \nof amorphous dimer, which is converted into clusters, and initially \nthere is a slight chiral imbalance in favour of $x_2$ and $x_4$ over \n$y_2$ and $y_4$. Over time this imbalance reduces (see Figure \n\\ref{fig-tet4}); although there is a region around $t=1$ where \n$\\theta$ increases, both $\\theta$ and $\\phi$ eventually \napproach the zero steady-state. \n\nFor both sets of initial conditions we note that the chiralities evolve \nover a significantly longer timescale than the concentrations, the \nlatter having reached steady-state before $t=10$ and the former \nstill evolving when $t={\\cal O}(10^2)$. In the second set of initial \ndata, there is no $c_2$ present initially and there are exactly equal \nnumbers of the two chiral forms of the larger cluster, but a slight \nexcess of $x_2$ over $y_2$. In time an imbalance in larger clusters \nis produced, but over longer timescales, both $\\theta$ and $\\phi$ \nagain approach the zero steady-state. \n\nHence, we observe that the truncated system \n(\\ref{dim-c2dot})--(\\ref{dim-y4dot}) does {\\em not} yield a chirally \nasymmetric steady-state. Even though in the early stages of the \nreaction chiral perturbations may be amplified, at the end of the \nreaction there is a slower timescale over which the system returns \nto a racemic state. In the next section we consider a system \ntruncated at hexamers to investigate whether that system allows \nsymmetry-breaking of the steady-state. \n\n\\section{The truncation at hexamers}\n\\label{hex-sec} \\setcounter{equation}{0}\n\nThe above analysis has shown that the truncation of the model \n(\\ref{gbd-c1})--(\\ref{gbd-y2}) to (\\ref{dim-c2dot})--(\\ref{dim-y4dot}) \nresults in a model which always ultimately approaches the \nsymmetric (racemic) steady-state. 
In this section, we show that a \nmore complex model, the truncation at hexamers, retains enough \ncomplexity to demonstrate the symmetry-breaking bifurcation which \noccurs in the full system. In this case the governing equations are \n\\begin{eqnarray} \n\\displaystyle\\frac{{\\rm d} c_2}{{\\rm d} t} &=& -2 \\mu c_2 + \\mu \\nu (x_2+y_2) \n- \\alpha c_2 (x_2+y_2) - \\alpha c_2 (x_4+y_4) , \n\\lbl{hex-c2} \\\\ \n\\displaystyle\\frac{{\\rm d} x_2}{{\\rm d} t} & = & \\mu c_2 - \\mu \\nu x_2 \n- \\alpha c_2 x_2 - 2 \\xi x_2^2 - \\xi x_2 x_4 + 2\\beta x_4 \n+ \\beta x_6 , \n\\\\ \n\\displaystyle\\frac{{\\rm d} x_4}{{\\rm d} t} & = & \\alpha x_2 c_2 + \\xi x_2^2 \n- \\beta x_4 - \\alpha c_2 x_4 - \\xi x_2 x_4 + \\beta x_6 , \n\\\\\n\\displaystyle\\frac{{\\rm d} x_6}{{\\rm d} t} & = & \\alpha x_4 c_2 + \\xi x_2 x_4 \n- \\beta x_6 , \n\\\\\n\\displaystyle\\frac{{\\rm d} y_2}{{\\rm d} t} & = & \\mu c_2 - \\mu \\nu y_2 \n- \\alpha c_2 y_2 - 2 \\xi y_2^2 - \\xi y_2 y_4 + 2\\beta y_4 \n+ \\beta y_6 , \n\\\\ \n\\displaystyle\\frac{{\\rm d} y_4}{{\\rm d} t} & = & \\alpha y_2 c_2 + \\xi y_2^2 \n- \\beta y_4 - \\alpha c_2 y_4 - \\xi y_2 y_4 + \\beta y_6 , \n\\\\\n\\displaystyle\\frac{{\\rm d} y_6}{{\\rm d} t} & = & \\alpha y_4 c_2 + \\xi y_2 y_4 \n- \\beta y_6 . \n\\lbl{hex-y6} \\end{eqnarray}\n\nTo analyse the symmetry-breaking in the system we transform the \ndependent coordinates from $x_2,x_4,x_6,y_2,y_4,y_6$ to total \nconcentrations $z,w,u$ and relative chiralities $\\theta,\\phi,\\psi$ \naccording to \n\\begin{equation} \\begin{array}{rclcrclcrcl}\nx_2 &=& \\mbox{$\\frac{1}{2}$} z (1 + \\theta) , & \\quad & \nx_4 &=& \\mbox{$\\frac{1}{2}$} w (1 + \\phi) , & \\quad & \nx_6 &=& \\mbox{$\\frac{1}{2}$} u (1 + \\psi) , \\\\[2ex] \ny_2 &=& \\mbox{$\\frac{1}{2}$} z (1 - \\theta) , & \\quad & \ny_4 &=& \\mbox{$\\frac{1}{2}$} w (1 - \\phi) , & \\quad & \ny_6 &=& \\mbox{$\\frac{1}{2}$} u (1 - \\psi) . 
\n\\end{array} \\end{equation}\n\nWe now separate the governing equations for the total concentrations \nof dimers ($c,z$), tetramers ($w$) and hexamers ($u$)\n\\begin{eqnarray}\n\\displaystyle\\frac{{\\rm d} c}{{\\rm d} t} & = & - 2 \\mu c \n+ \\mu \\nu z - \\alpha c z - \\alpha c w , \n\\lbl{hex-cdot}\\\\ \n\\displaystyle\\frac{{\\rm d} z}{{\\rm d} t} & =& 2\\mu c - \\mu \\nu z \n- \\alpha c z - \\xi z^2 (1+\\theta^2) - \\mbox{$\\frac{1}{2}$} z w (1+\\theta\\phi) \n+ \\beta u + 2 \\beta w , \\nonumber \\\\ && \n\\\\ \n\\displaystyle\\frac{{\\rm d} w}{{\\rm d} t} & = & \\alpha c z \n+ \\mbox{$\\frac{1}{2}$} \\xi z^2 (1+\\theta^2) - \\beta w + \\beta u \n- \\alpha c w - \\mbox{$\\frac{1}{2}$} \\xi z w (1+\\theta\\phi) , \n\\\\ \n\\displaystyle\\frac{{\\rm d} u}{{\\rm d} t} & = & \\alpha c w \n+ \\mbox{$\\frac{1}{2}$} \\xi z w (1+\\theta\\phi) - \\beta u , \n\\lbl{hex-udot}\\end{eqnarray}\nfrom those for the chiralities \n\\begin{eqnarray}\n\\displaystyle \\frac{{\\rm d} \\psi}{{\\rm d} t} & =& \n\\frac{\\alpha c w}{u} (\\phi-\\psi) + \n\\frac{\\xi z w}{2u} ( \\theta+\\phi-\\psi-\\psi\\phi\\theta )\n\\lbl{hex-psi-dot} \\\\ \n\\displaystyle \\frac{{\\rm d} \\phi}{{\\rm d} t} & = & \n\\frac{\\alpha c z }{w} (\\theta-\\phi) + \n\\frac{\\xi z^2}{2w} ( 2\\theta -\\phi-\\phi\\theta^2) + \n\\frac{\\beta u}{w} (\\psi-\\phi) - \n\\mbox{$\\frac{1}{2}$} \\xi z \\theta (1-\\phi^2) , \\nonumber \\\\ && \n\\\\ \n\\displaystyle \\frac{{\\rm d} \\theta}{{\\rm d} t} & = & \n-\\frac{2\\mu c \\theta}{z} - \\xi z \\theta(1\\!-\\!\\theta^2) - \n\\mbox{$\\frac{1}{2}$} \\xi w \\phi (1\\!-\\!\\theta^2) + \\frac{\\beta u\\psi}{z} - \n\\frac{\\beta u \\theta}{z} \\nonumber \\\\ && + \\frac{2\\beta w\\phi}{z} \n- \\frac{2\\beta w \\theta}{z} . \\lbl{hex-th-dot}\n\\end{eqnarray}\n\nIn applications, we expect $\\nu<1$, so that the small amorphous \nclusters (dimers) prefer to adopt one of their chiral states rather \nthan the achiral structure. 
In addition, we note that the grinding \nprocess observed in experiments takes place over a much longer \ntimescale than the crystallisation process, and that there are many \nlarger, macroscopic crystals; hence we consider two limits in which \n$\\beta \\ll \\alpha \\xi$. \nWe will consider the case of small $\\beta$ with all other parameters \nbeing ${\\cal O}(1)$ and then the case where $\\alpha\\sim\\xi\\gg1$ and \nall other parameters are ${\\cal O}(1)$. \n\n\\subsection{Symmetric steady-state for the concentrations}\n\nFirstly, let us solve for the symmetric steady-state. \nIn this case we assume $\\theta=0=\\phi=\\psi$, simplifying \nequations (\\ref{hex-cdot})--(\\ref{hex-udot}). One of \nthese equations is redundant; hence we have the solution \n\\begin{equation}\nw = \\frac{z}{\\beta}(\\alpha c + \\mbox{$\\frac{1}{2}$} \\xi z) , \\qquad \nu = \\frac{z}{\\beta^2}(\\alpha c+\\mbox{$\\frac{1}{2}$}\\xi z)^2 , \n\\lbl{hex-wu-sol} \\end{equation}\n\\begin{equation}\nc = \\frac{1}{\\alpha} \\left(\\sqrt{ \\left( \\frac{\\beta}{2} + \n\\frac{\\beta\\mu}{\\alpha z} + \\frac{\\xi z}{4} \\right)^2 \n+ \\beta\\mu\\nu} - \\frac{\\beta}{2} - \\frac{\\beta\\mu}\n{\\alpha z} - \\frac{\\xi z}{4} \\right) , \n\\lbl{hex-c-sol} \\end{equation}\nwith $z$ being determined by conservation of total mass in the system \n\\begin{equation} \n2c + 2 z + 4 w + 6 u = \\varrho . \\lbl{hex-roo} \n\\end{equation} \n\nIn the case of small grinding ($\\beta\\ll1$), with $\\varrho$ \nand all other parameters being ${\\cal O}(1)$, we find \n\\begin{equation} \\begin{array}{rclcrcl}\nz & = & \\left( \\displaystyle\\frac{2\\varrho \\beta^2}{3 (\\alpha\\nu+\\xi)^2} \n\t\\right)^{1\/3} , &\\qquad& \nc & = & \\nu \\left( \\displaystyle\\frac{\\varrho \\beta^2}{12 (\\alpha\\nu+\\xi)^2} \n\t\\right)^{1\/3} , \\\\ \nw & = & \\left( \\displaystyle\\frac{\\varrho^2 \\beta}{18 (\\alpha\\nu+\\xi)} \n\t\\right)^{1\/3} , &\\qquad& \nu & = & \\displaystyle\\frac{\\varrho}{6} . 
\n\\end{array} \\lbl{hex-ssss-asymp} \\end{equation} \nIn this case most of the mass is in hexamers with a little in \ntetramers and very little in dimers. \n\nIn the asymptotic limit of $\\alpha \\sim \\xi \\gg 1$ \nand all other parameters ${\\cal O}(1)$, we find \n\\begin{eqnarray} & \nc = \\displaystyle\\frac{\\mu\\nu}{\\alpha} \\left( \n\\displaystyle\\frac{12\\beta}{\\varrho\\xi} \\right)^{1\/3} , \\quad \nz = \\left( \\displaystyle\\frac{2\\beta^2\\varrho}{3\\xi^2} \\right)^{1\/3} , \\quad \nw = \\left( \\displaystyle\\frac{\\beta\\varrho^2}{18\\xi} \\right)^{1\/3} , \\quad\nu = \\displaystyle\\frac{\\varrho}{6} . \n& \\nonumber \\\\ && \\lbl{hex-sss2-asymp} \n\\end{eqnarray} \nThis differs significantly from the other asymptotic scaling: not only \nare $c$ and $z$ both small, they are now of different orders of magnitude, \nwith $c\\ll z$. We next analyse the stability of these symmetric states. \n\n\\subsection{Stability of symmetric state}\n\nIn deriving the above solutions (\\ref{hex-wu-sol})--(\\ref{hex-c-sol}), \nwe have assumed chiral symmetry, that is, $\\theta=0=\\psi=\\phi$. \nWe now turn to analyse the validity of this assumption. Linearising \nthe system of equations (\\ref{hex-psi-dot})--(\\ref{hex-th-dot}) \nwhich govern the chiralities, we determine whether the symmetric \nsolution is stable from \n\\begin{equation} \\!\\! \n\\frac{{\\rm d} }{{\\rm d} t} \\!\\! \\left( \\!\\! \\begin{array}{c} \n\\psi \\\\ \\phi \\\\ \\theta \\end{array} \\!\\! \\right) \\!=\\! \n\\left( \\begin{array}{ccc} \n\\!\\!\\!- \\displaystyle\\frac{\\alpha c w}{u} \\!-\\! \\displaystyle\\frac{\\xi z w}{2u} \\!\\!& \n\\displaystyle\\frac{\\alpha c w}{u} \\!+\\! \\displaystyle\\frac{\\xi z w}{2u} & \n\\displaystyle\\frac{\\xi z w}{2u} \\\\[2ex] \n\\displaystyle\\frac{\\beta u}{w} & -\\displaystyle\\frac{\\alpha c z}{w} \n\\!-\\! \\displaystyle\\frac{\\xi z^2}{2w} \\!-\\! \\displaystyle\\frac{\\beta u}{w} & \n\\displaystyle\\frac{\\alpha c z}{w} \\!+\\! \\displaystyle\\frac{\\xi z^2}{w} \n\\!-\\! \\mbox{$\\frac{1}{2}$} \\xi z \\\\[2ex] \n\\displaystyle\\frac{\\beta u}{z} & \n\\displaystyle\\frac{2\\beta w}{z} \\!-\\! \\displaystyle\\frac{\\xi w}{2} & \\!\\!\n- \\displaystyle\\frac{2\\mu c}{z} \\!-\\! \\xi z \\!-\\! \\frac{\\beta u}{z} \n\\!-\\! \\displaystyle\\frac{2\\beta w}{z} \\!\\!\n\\end{array} \\! \\right) \\!\\!\n\\left( \\!\\begin{array}{c} \\psi \\\\ \\phi \\\\ \\theta \n\\end{array} \\!\\right) \\!.\\!\\!\n\\lbl{hex-stab-mat} \\end{equation}\nFor later calculations it is useful to know the determinant of this matrix. \nUsing the steady-state solutions (\\ref{hex-wu-sol}), the determinant \nsimplifies to \n\\begin{equation}\nD = \\frac{3 c}{4 \\beta \\varrho} ( 2 \\alpha c + \\xi z )^2 \n( \\alpha \\xi z^2 - 4 \\beta \\mu ) . \n\\end{equation}\n\nFor general parameter values, the signs of the real parts of the \neigenvalues of the matrix in (\\ref{hex-stab-mat}) are not clear. \nHowever, using the asymptotic result (\\ref{hex-ssss-asymp}), \nfor $\\beta\\ll1$, we obtain the simpler matrix \n\\begin{equation}\n\\left( \\!\\!\\begin{array}{ccc} \n-\\beta & \\beta & \\displaystyle \\frac{\\beta\\xi}{\\xi\\!+\\!\\alpha\\nu} \n\\\\[2ex] \n\\left( \\displaystyle\\frac{\\beta^2 \\varrho (\\xi\\!+\\!\\alpha\\nu) }{12} \\right)^{1\/3} & \n- \\left( \\displaystyle\\frac{\\beta^2 \\varrho (\\xi\\!+\\!\\alpha\\nu) }{12} \\right)^{1\/3} & \n-\\frac{\\xi}{2} \\left( \\displaystyle\\frac{2\\beta^2\\varrho}{3(\\xi\\!+\\!\\alpha\\nu)^2} \\right) ^{1\/3} \n\\\\[2ex] \n\\beta^{1\/3} \\left( \\displaystyle\\frac{\\xi\\!+\\!\\alpha\\nu}{12\\varrho} \\right)^{2\/3} & \n- \\frac{\\xi}{2} \\left( \\displaystyle\\frac{\\beta\\varrho^2}{18(\\xi\\!+\\!\\alpha\\nu)} \\right)^{1\/3} & \n- \\mu \\nu - \\beta^{1\/3} \\left( \\displaystyle\\frac{\\xi\\!+\\!\\alpha\\nu}{12\\varrho} \\right)^{2\/3} \n\\end{array} \\!\\!\\right) \\! , \\lbl{hex-asy1-mat}\n\\end{equation}\nwhose characteristic polynomial is \n\\begin{equation} \n0 = q^3 + \\mu\\nu q^2 + \\mu\\nu \\left( \\rec{12} \\beta^2 \\varrho \n(\\xi\\!+\\!\\alpha\\nu) \\right)^{1\/3} q - D . \\lbl{hex-asy1-cp} \n\\end{equation} \nFormally $D$ is the determinant of the matrix in (\\ref{hex-asy1-mat}), \nwhich is zero; this gives a zero eigenvalue, indicating marginal \nstability. Hence, we return to the more accurate matrix in \n(\\ref{hex-stab-mat}), which gives $D \\sim -\\beta^2\\mu\\nu$. \nThe polynomial (\\ref{hex-asy1-cp}) thus has roots \n\\begin{equation} \nq_1 \\sim -\\mu\\nu, \\quad \nq_2 \\sim - \\left( \\frac{ \\beta^2 \\varrho (\\xi\\!+\\!\\alpha\\nu)}{12} \\right)^{1\/3} , \n\\quad \nq_3 \\sim - \\left( \\frac{12 \\beta^4}{\\varrho(\\alpha\\nu\\!+\\!\\xi)} \\right)^{1\/3} . \n\\lbl{hex-ev1} \\end{equation} \nThis means that the symmetric state is always linearly stable \nfor this asymptotic scaling. We expect to observe evolution on \nthree distinct timescales, one of ${\\cal O}(1)$, one of \n${\\cal O}(\\beta^{-2\/3})$ and one of ${\\cal O}(\\beta^{-4\/3})$. \n\nWe now consider the other asymptotic limit, namely, \n$\\alpha\\sim\\xi\\gg1$ and all other parameters are ${\\cal O}(1)$. 
\nIn this case, taking the leading order terms in each row, the stability \nmatrix in (\\ref{hex-stab-mat}) reduces to \n\\begin{equation}\n\\left( \\begin{array}{ccc} \n-6 \\mu \\nu \\left( \\frac{12\\beta}{\\varrho\\xi} \\right)^{2\/3} & \n6 \\mu \\nu \\left( \\frac{12\\beta}{\\varrho\\xi} \\right)^{2\/3} & 0\n\\\\\n\\left( \\frac{\\beta^2\\varrho\\xi}{12}\\right)^{1\/3} & \n-\\left( \\frac{\\beta^2\\varrho\\xi}{12}\\right)^{1\/3} & \n-\\left( \\frac{\\beta^2\\varrho\\xi}{12}\\right)^{1\/3} \n\\\\ \n\\left( \\frac{\\beta\\varrho^2 \\xi^2}{144} \\right)^{1\/3} & \n- \\left( \\frac{\\beta\\varrho^2 \\xi^2}{144} \\right)^{1\/3} & \n- \\left( \\frac{\\beta\\varrho^2 \\xi^2}{144} \\right)^{1\/3} \n\\end{array} \\right) , \n\\end{equation}\nwhich again formally has a zero determinant. \nThe characteristic polynomial is \n\\begin{equation}\n0 = q^3 + q^2 + 6 \\beta \\mu\\nu q - D , \n\\end{equation}\nwherein we again take the more accurate determinant \nobtained from a higher-order expansion of \n(\\ref{hex-stab-mat}), namely $D=\\beta^2\\mu\\nu$. \nThe eigenvalues are then given by \n\\begin{equation}\nq_1 \\sim - \\left( \\frac{\\beta\\varrho^2\\xi^2}{144} \\right)^{1\/3} , \n\\qquad \nq_{2,3} \\sim \\pm \\sqrt{\\beta\\mu\\nu} \n\\left( \\frac{12\\beta}{\\varrho\\xi} \\right)^{1\/3} . \n\\lbl{hex-ev2} \\end{equation}\nWe now observe that there is always one large stable eigenvalue \ntogether with a pair of small eigenvalues of opposite sign; the \npresence of an unstable eigenvalue means that the system \nbreaks symmetry in the case $\\alpha \\sim \\xi \\gg 1$. \nThe first eigenvalue is stable and corresponds to a faster timescale \nwhere $t\\sim {\\cal O}(\\xi^{-2\/3})$ whilst the latter two \ncorrespond to the slow timescale where $t={\\cal O}(\\xi^{1\/3})$. 
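These sign patterns can be verified numerically without the asymptotic approximations. The sketch below is an illustration (all parameter values are assumed here, not taken from the text): it assembles the full stability matrix (\ref{hex-stab-mat}) at the symmetric steady state (\ref{hex-wu-sol})--(\ref{hex-c-sol}), finding $z$ from the mass constraint (\ref{hex-roo}) by bisection.

```python
import numpy as np

# Cross-check of the stability results: assemble (hex-stab-mat) at the
# symmetric steady state, with z found from the mass constraint (hex-roo)
# by bisection. All parameter values below are illustrative assumptions.
mu, nu, rho = 1.0, 0.5, 1.0

def steady_state(alpha, xi, beta):
    def alpha_c_of(z):            # (hex-c-sol) multiplied through by alpha
        t = beta/2 + beta*mu/(alpha*z) + xi*z/4
        return np.sqrt(t**2 + beta*mu*nu) - t
    def mass_excess(z):           # 2c + 2z + 4w + 6u - rho, from (hex-wu-sol)
        c = alpha_c_of(z)/alpha
        s = alpha*c + xi*z/2
        return 2*c + 2*z + 4*z*s/beta + 6*z*s**2/beta**2 - rho
    lo, hi = 1e-12, 1.0
    for _ in range(200):
        mid = 0.5*(lo + hi)
        lo, hi = (lo, mid) if mass_excess(mid) > 0 else (mid, hi)
    z = 0.5*(lo + hi)
    c = alpha_c_of(z)/alpha
    s = alpha*c + xi*z/2
    return c, z, z*s/beta, z*s**2/beta**2        # c, z, w, u

def stability_eigs(alpha, xi, beta):
    c, z, w, u = steady_state(alpha, xi, beta)
    M = np.array([
        [-(alpha*c*w/u + xi*z*w/(2*u)), alpha*c*w/u + xi*z*w/(2*u),
         xi*z*w/(2*u)],
        [beta*u/w, -(alpha*c*z/w + xi*z**2/(2*w) + beta*u/w),
         alpha*c*z/w + xi*z**2/w - xi*z/2],
        [beta*u/z, 2*beta*w/z - xi*w/2,
         -(2*mu*c/z + xi*z + beta*u/z + 2*beta*w/z)]])
    return np.linalg.eigvals(M)

growth_big = max(q.real for q in stability_eigs(60.0, 60.0, 1.0))   # alpha~xi>>1
growth_small = max(q.real for q in stability_eigs(1.0, 1.0, 0.01))  # beta<<1
print(growth_big, growth_small)
```

With large aggregation rates the largest real part is positive (symmetry-breaking), whereas in the small-$\beta$ regime all real parts are negative (stability of the racemic state), in line with the analysis above.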
\n\n\\subsection{Simulation results} \n\n\\begin{figure}[!ht] \n\\vspace*{64mm} \n\\special{psfile=fig-hex1.eps \n\thscale=60 vscale=45 hoffset=00 voffset=-96} \n\\caption{Illustration of the evolution of the total concentrations \n$c_2,z,w,u$ for a numerical solution of the system truncated \nat hexamers (\\ref{hex-c2})--(\\ref{hex-y6}) in the limit \n$\\alpha\\sim\\xi \\gg1$. Since the model equations \nare in nondimensional form, the time units are arbitrary. \nThe parameters are $\\alpha=\\xi=30$, $\\nu=0.5$, $\\beta=\\mu=1$, \nand the initial data is $x_6(0)=y_6(0)=0.06$, \n$x_4(0)=y_4(0)=0.01$, $x_2(0) = 0.051$, $y_2(0) = 0.049$, \n$c_2(0) = 0$. Note the time axis has a logarithmic scale. } \n\\label{fig-hex-1} \n\\end{figure}\n\n\\begin{figure}[!ht]\n\\vspace*{64mm}\n\\special{psfile=fig-hex2.eps \n\thscale=60 vscale=45 hoffset=00 voffset=-96}\n\\caption{Graph of the evolution of the chiralities against time \non a log-log scale; results of numerical simulation of the same \nhexamer-truncated system, with identical initial data and \nparameters as in Figure \\protect\\ref{fig-hex-1}. }\n\\label{fig-hex-2}\n\\end{figure}\n\nWe briefly review the results of a numerical simulation of \n(\\ref{hex-c2})--(\\ref{hex-y6}) in the case $\\alpha\\sim\\xi\\gg1$ \nto illustrate the symmetry-breaking observed therein. \nAlthough the numerical simulation used the variables $x_k$ and \n$y_k$ ($k=2,4,6$) and $c_2$, we plot the total concentrations \n$z,w,u$ in Figure \\ref{fig-hex-1}. The initial conditions have a \nslight imbalance in the handedness of small crystals ($x_2,y_2$). \nThe chiralities of the small ($x_2,y_2,z$), medium ($x_4,y_4,w$), and \nlarger ($x_6,y_6,u$) clusters are plotted in Figure \\ref{fig-hex-2} on a \nlog-log scale. 
Whilst Figure \\ref{fig-hex-1} shows that the \nconcentrations in the system have equilibrated by $t=10$, at this \nstage the chiralities are in a metastable state: there is a long plateau \nin the chiralities between $t=10$ and $t=10^3$ where little appears \nto change. There then follows a period of equilibration of chirality \non the longer timescale when $t\\sim 10^4$. We have observed \nthis significant delay between the equilibration of concentrations \nand that of chiralities in a large number of simulations. The reason \nfor this difference in timescales is the difference in the sizes \nof the eigenvalues in (\\ref{hex-ev2}). \n\nWe have also investigated the case $\\beta \\ll1$ with all other \nparameters ${\\cal O}(1)$ to verify that this case does indeed approach \nthe racemic state at large times (that is, $\\theta,\\phi,\\psi \\rightarrow0$ \nas $t\\rightarrow\\infty$). However, once again the difference in \ntimescales can be observed, with the concentrations reaching \nequilibration on a faster timescale than the chiralities, due to the \ndifferent magnitudes of eigenvalues (\\ref{hex-ev1}).\n\n\\section{New simplifications of the system}\n\\label{new-sec}\n\\setcounter{equation}{0}\n\nWe return to the equations (\\ref{smm2})--(\\ref{smmy2}) \nin the case $\\delta=0$, now writing $x=x_2$ and $y=y_2$ \nto obtain \n\\begin{eqnarray}\n\\frac{{\\rm d} c}{{\\rm d} t} & = & - 2 \\mu c \n\t+ \\mu\\nu (x+y) - \\alpha c(N_x+N_y) , \\lbl{newcdot} \\\\\n\\frac{{\\rm d} x}{{\\rm d} t} & = & \\mu c - \\mu\\nu x - \\alpha x c \n\t+ \\beta (N_x-x + x_4) - \\xi x^2 - \\xi x N_x , \\lbl{newxdot} \\\\ \n\\frac{{\\rm d} y}{{\\rm d} t} & = & \\mu c - \\mu\\nu y - \\alpha y c \n\t+ \\beta (N_y-y + y_4) - \\xi y^2 - \\xi y N_y , \\lbl{newydot}\\\\ \n\\frac{{\\rm d} N_x}{{\\rm d} t} & = & \\mu c - \\mu\\nu x \n\t+ \\beta (N_x-x) - \\xi x N_x , \\lbl{newnxdot} \\\\ \n\\frac{{\\rm d} N_y}{{\\rm d} t} & = & \\mu c - \\mu\\nu y \n\t+ \\beta (N_y-y) - \\xi y N_y , 
\\lbl{newnydot} \n\\end{eqnarray} \nwhich are not closed, since $x_4,y_4$ appear \non the {\\sc rhs}'s of (\\ref{newxdot}) and (\\ref{newydot}), \nhence we need to find formulae to determine $x_4$ and $y_4$ \nin terms of $x,y,N_x,N_y$.\n\nOne way of achieving this is to expand the system to include other \nproperties of the distribution of cluster sizes. For example, equations \ngoverning the mass of crystals in each chirality can be derived as \n\\begin{equation}\n\\frac{{\\rm d} \\varrho_x}{{\\rm d} t}=2\\mu c-2\\mu\\nu x+2\\alpha c N_x ,\n\\quad\n\\frac{{\\rm d} \\varrho_y}{{\\rm d} t}=2\\mu c-2\\mu\\nu y+2\\alpha c N_y . \n\\lbl{new-roxy-dot} \\end{equation}\nThese introduce no new quantities into the \nmacroscopic system of equations, and do not rely on knowing \n$x_4$ or $y_4$ (although they do require knowledge of $x$ and $y$). \n\nIn the remainder of this section we consider various potential formulae \nfor $x_4$, $y_4$ in terms of macroscopic quantities so that a \nmacroscopic system can be constructed. We then analyse such \nmacroscopic systems in two specific limits to show that predictions \nrelating to symmetry-breaking can be made. \n\n\\subsection{Reductions}\n\nThe equations governing the larger cluster sizes $x_k$, $y_k$, are \n\\begin{equation} \n\\frac{{\\rm d} x_{2k}}{{\\rm d} t} = \\beta( x_{2k+2} - x_{2k} ) \n- (x_{2k}-x_{2k-2})(\\alpha c + \\xi x) ; \n\\lbl{5xkdot} \\end{equation} \nin general this has solutions of the form $x_{2k} = \\sum_j A_j(t) \n\\Lambda_j^{k-1}$, \nwhere the $\\Lambda_j$ are parameters, typically taking values between \nunity (corresponding to a steady-state in which mass is being \nadded to the distribution) and \n$\\mfrac{\\alpha c+\\xi x}{\\beta}$ (the equilibrium value), and the $A_j(t)$ \nare time-dependent; for some $\\Lambda_j$, $A_j$ will be constant. 
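The role of the equilibrium ratio can be checked directly: with illustrative numbers (assumed here, not values from the text), a pure geometric profile whose ratio of successive concentrations equals $(\alpha c+\xi x)\/\beta$ makes the right-hand side of (\ref{5xkdot}) vanish.

```python
# Check that x_{2k} = x * Lam**(k-1) with Lam = (alpha*c + xi*x)/beta is an
# equilibrium of (5xkdot); alpha, xi, beta, c, x below are illustrative.
alpha, xi, beta = 2.0, 3.0, 1.0
c, x = 0.2, 0.1
Lam = (alpha*c + xi*x)/beta    # equilibrium ratio of successive clusters

def x2k(k):                    # geometric cluster-size profile
    return x * Lam**(k - 1)

def rhs(k):                    # right-hand side of (5xkdot), interior k >= 2
    return beta*(x2k(k + 1) - x2k(k)) - (x2k(k) - x2k(k - 1))*(alpha*c + xi*x)

residual = max(abs(rhs(k)) for k in range(2, 12))
print(residual)
```

The residual is zero to rounding error, confirming that this value of the ratio gives a steady profile of the fragmentation--aggregation recurrence.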
\n\nWe assume that the distribution of each chirality of cluster is given by \n\\begin{equation} \nx_{2k} = x \\left( 1 - \\frac{1}{\\lambda_x} \\right)^{k-1} ,\\qquad\\qquad\ny_{2k} = y \\left( 1 - \\frac{1}{\\lambda_y} \\right)^{k-1} , \n\\end{equation}\nsince solutions of this form may be steady-states of the governing \nequations (\\ref{5xkdot}). However, in our approximations for \n$x_4$ and $y_4$ the parameters $\\lambda_x$, $\\lambda_y$ are \npermitted to vary with time in some way that depends on other \nquantities in the model equations. The resulting expressions for \nthe macroscopic number and mass quantities are \n\\begin{eqnarray}\nN_x = \\sum_{k=1}^\\infty x_{2k} = x \\lambda_x , &\\qquad& \nN_y = \\sum_{k=1}^\\infty y_{2k} = y \\lambda_y , \\\\ \n\\varrho_x = \\sum_{k=1}^\\infty 2 k x_{2k} = 2 x \\lambda_x^2 , &\\qquad& \n\\varrho_y = \\sum_{k=1}^\\infty 2 k y_{2k} = 2 y \\lambda_y^2 . \n\\end{eqnarray}\nOur aim is to find a simpler expression for the terms $x_4$ \nand $y_4$ which occur in (\\ref{newxdot})--(\\ref{newydot}); \nthese are given by $x_4=x(1-1\/\\lambda_x)$, where \n\\begin{equation}\n\\lambda_x = \\frac{N_x}{x} = \\frac{\\varrho_x}{2N_x} \n= \\sqrt{\\frac{\\varrho_x}{2x}} , \\lbl{lambda-eqs}\n\\end{equation}\nhence\n\\begin{equation}\nx_4 = x - \\frac{x^2}{N_x} , \\quad \nx_4 = x - \\frac{2 x N_x}{\\varrho_x} ,\\quad \n{\\rm or} \\;\\;\\; \nx_4 = x - x\\sqrt{\\frac{2x}{\\varrho_x}} . \n\\end{equation}\n\nThere are thus three possible reductions of the equations \n(\\ref{newcdot})--(\\ref{newnydot}), each eliminating one of \n$x,N_x,\\varrho_x$ (and the corresponding $y,N_y,\\varrho_y$). We consider \neach reduction in turn in the following subsections. Since some \nof these reductions involve $\\varrho_x, \\varrho_y$, we also use the \nevolution equations (\\ref{new-roxy-dot}) for these quantities. 
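A quick numerical sanity check of these moment formulae (with illustrative values of $x$ and $\lambda_x$, assumed here) confirms that the geometric sums give $N_x=x\lambda_x$ and $\varrho_x=2x\lambda_x^2$, so that the three closure expressions for $x_4$ coincide exactly.

```python
# For x_{2k} = x(1 - 1/lam)**(k-1): N_x = x*lam and rho_x = 2*x*lam**2,
# so the three closure formulae for x_4 coincide. x, lam are illustrative.
x, lam = 0.3, 4.0
r = 1.0 - 1.0/lam
K = 2000   # truncation of the infinite sums; the tail is negligible for r < 1

xs = [x * r**(k - 1) for k in range(1, K + 1)]
N_x = sum(xs)                                       # number of clusters
rho_x = sum(2*k*xk for k, xk in enumerate(xs, 1))   # mass of clusters

x4_a = x - x**2/N_x              # closure eliminating rho_x
x4_b = x - 2*x*N_x/rho_x         # closure eliminating N_x... via rho_x
x4_c = x - x*(2*x/rho_x)**0.5    # closure using rho_x only
print(N_x, rho_x, x4_a, x4_b, x4_c)
```

All three reproduce $x_4 = x(1-1\/\lambda_x)$ when the distribution really is geometric; the reductions below differ only in which moment they retain once $\lambda_x$ is allowed to vary in time.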
\n\n\\subsection{Reduction 1: to $x,y,N_x,N_y$}\n\nHere we assume $\\lambda_x = N_x\/x$, $\\lambda_y = N_y\/y$, \nso, in addition to (\\ref{newcdot}), (\\ref{newnxdot})--(\\ref{newnydot}) \nthe equations of motion are \n\\begin{eqnarray}\n\\frac{{\\rm d} x}{{\\rm d} t} & = & \\mu c - \\mu \\nu x + \\beta N_x \n\t- \\frac{\\beta x^2}{N_x} - \\xi x^2 - \\xi x N_x , \\\\\n\\frac{{\\rm d} y}{ {\\rm d} t} & = & \\mu c - \\mu \\nu y + \\beta N_y \n\t- \\frac{\\beta y^2}{N_y} - \\xi y^2 - \\xi y N_y ; \n\\end{eqnarray}\nwe have no need of the densities $\\varrho_x,\\varrho_y$ in this formulation. \n\nThe disadvantage of this reduction is that, due to \n(\\ref{lambda-eqs}), the total mass is given by \n\\begin{equation}\n\\varrho = 2c + \\varrho_x+\\varrho_y = 2 c + \\frac{2 N_x^2}{x} \n+ \\frac{2 N_y^2}{y} , \n\\end{equation}\nand there is no guarantee that this will be conserved. \n\nWe once again consider the system in terms of total concentrations \nand relative chiralities by applying the transformation \n\\begin{eqnarray}&\nx = \\mbox{$\\frac{1}{2}$} z (1\\!+\\!\\theta) , \\quad \ny = \\mbox{$\\frac{1}{2}$} z (1\\!-\\!\\theta) , \\quad \nN_x = \\mbox{$\\frac{1}{2}$} N (1\\!+\\!\\phi) , \\quad \nN_y = \\mbox{$\\frac{1}{2}$} N (1\\!-\\!\\phi) , & \\nonumber \\\\ && \n\\end{eqnarray}\nto obtain the equations\n\\begin{eqnarray}\n\\frac{{\\rm d} c}{{\\rm d} t} & =& - 2 \\mu c + \\mu \\nu z - \\alpha c N , \n\\lbl{R1cdot} \\\\ \n\\frac{{\\rm d} z}{{\\rm d} t} & =& 2\\mu c - \\mu \\nu z - \\alpha c z + \\beta N \n\t-\\frac{\\beta z^2(1+\\theta^2-2\\theta\\phi)}{N(1-\\phi^2)} \\nonumber \\\\ && \n\t- \\mbox{$\\frac{1}{2}$} \\xi z^2(1+\\theta^2) - \\mbox{$\\frac{1}{2}$} \\xi z N (1+\\theta\\phi) , \n\t\\lbl{R1zdot} \\\\\n\\frac{{\\rm d} N}{{\\rm d} t} & = & 2\\mu c - \\mu \\nu z + \\beta N - \\beta z \n\t- \\mbox{$\\frac{1}{2}$} \\xi z N (1+\\theta\\phi) . 
\\lbl{R1Ndot} \\\\ \n\\frac{{\\rm d} \\theta}{{\\rm d} t} & = & \n\t- \\left( \\mu \\nu + \\alpha c + \\xi z + \\mbox{$\\frac{1}{2}$} \\xi N \n\t\t+ \\frac{2\\beta z}{N(1\\!-\\!\\phi^2)} \n\t\t+ \\frac{1}{z} \\frac{{\\rm d} z}{{\\rm d} t} \\right) \\theta \\nonumber \\\\ && \n\t+ \\left( \\frac{\\beta N}{z} - \\mbox{$\\frac{1}{2}$} \\xi N \n\t+ \\frac{\\beta z (1\\!+\\!\\theta^2)}{N(1\\!-\\!\\phi^2)} \n\t\\right) \\phi , \\\\ \n\\frac{{\\rm d} \\phi}{{\\rm d} t} & =& \n\t- \\left( \\mu\\nu + \\beta + \\mbox{$\\frac{1}{2}$} \\xi N \\right) \\frac{z}{N} \\theta \n\t+ \\left( \\beta - \\mbox{$\\frac{1}{2}$} \\xi z - \\frac{1}{N}\\frac{{\\rm d} N}{{\\rm d} t} \\right) \\phi . \n\t\\nonumber \\\\ && \n\\end{eqnarray}\nThese equations have the symmetric steady-state given by \n$\\theta=0=\\phi$ and $c,z,N$ satisfying \n\\begin{equation}\nc = \\frac{\\mu\\nu z}{2\\mu+\\alpha N} , \\qquad\nz = \\frac{2\\beta N (2\\mu+\\alpha N) }\n{(2\\beta + \\xi N)(2\\mu+\\alpha N)+2\\alpha\\mu\\nu N} , \n\\lbl{R1sss} \\end{equation}\nfrom (\\ref{R1cdot}) and (\\ref{R1Ndot}). Note that the steady-state \nvalue of $N$ will depend upon the initial conditions; it is \nnot determined by (\\ref{R1zdot}). This is because the \nsteady-state equations obtained by setting the time derivatives \nin (\\ref{R1cdot})--(\\ref{R1Ndot}) to zero are not independent. \nThe difference (\\ref{R1zdot})--(\\ref{R1Ndot}) is equal \nto $z\/N$ times the sum (\\ref{R1cdot})$+$(\\ref{R1Ndot}). \n\nIn subsections \\ref{5R1A1-sec} and \\ref{5R1A2-sec} below, \nso as to discuss the stability of a solution in the two asymptotic \nregimes $\\beta\\ll1$ and $\\alpha\\sim\\xi\\gg1$, we augment the \nsteady-state equations (\\ref{R1cdot})--(\\ref{R1Ndot}) with the \ncondition $\\varrho=2N^2\/z$, with $\\varrho$ assumed to be ${\\cal O}(1)$.\n\nThe linear stability of $\\theta=0=\\phi$ is given by assuming \n$\\theta$ and $\\phi$ are small, yielding the system \n\\begin{equation} \n\\frac{{\\rm d}}{{\\rm d} t} \\!\\!\\left(\\!\\! 
\\begin{array}{c} \n\\theta \\\\[2ex] \\phi \\end{array}\\!\\! \\right) \\!\n= \\! \\left( \\begin{array}{cc} \n- \\left( \\displaystyle\\frac{2\\mu c}{z} + \\displaystyle\\frac{\\xi z}{2} \n\t+ \\displaystyle\\frac{\\beta z}{N} \n\t+ \\displaystyle\\frac{\\beta N}{z} \\right) &\n\\left(\\displaystyle\\frac{\\beta N}{z} + \\displaystyle\\frac{\\beta z}{N} \n\t- \\displaystyle\\frac{\\xi N}{2} \\right) \\\\ \n- ( \\mu \\nu + \\beta + \\mbox{$\\frac{1}{2}$} \\xi N ) \\displaystyle\\frac{z}{N} & \n\\left( \\beta + \\mu\\nu - \\displaystyle\\frac{2\\mu c}{z} \\right) \n\\displaystyle\\frac{z}{N} \\end{array} \\right) \\!\\! \\left( \\!\\!\n\\begin{array}{c} \\theta \\\\[2ex] \\phi \\end{array} \\!\\!\\right)\\! . \n\\lbl{5stabmat} \\end{equation}\nAn instability of the symmetric solution is indicated by the \ndeterminant of this matrix being negative. Substituting (\\ref{R1sss}) \ninto the determinant yields \n\\begin{equation}\n\\mbox{det} = \\frac{ \\beta \\mu \\nu ( 4 \\beta \\mu - \\alpha \\xi N^2 )}\n{4\\beta\\mu + 2 \\alpha \\beta N + 2 \\mu \\xi N + 2 \\alpha \\mu \\nu N \n+ \\alpha \\xi N^2} . \n\\lbl{5detsimp} \\end{equation}\nHence we find that the symmetric (racemic) state is unstable if \n$N > 2 \\sqrt{ \\mu\\beta \/ \\alpha \\xi }$, that is, large aggregation rates \n($\\alpha,\\xi$) and slow grinding ($\\beta$) are preferable for \nsymmetry-breaking. \n\nWe consider two specific asymptotic limits of parameter values \nso as to derive specific results for steady-states and conditions on \nstability. In both limits, we have that the aggregation rates dominate \nfragmentation ($\\alpha \\sim \\xi \\gg \\beta$), so that the system is \nstrongly biased towards the formation of crystals and the dimer \nconcentrations are small. 
In the first case we assume that the \nfragmentation is small and the aggregation rates are of a similar \nscale to the interconversion of dimers ($\\beta \\ll \\mu \\sim \\alpha \\sim \n\\xi = {\\cal O}(1)$); whilst the second has a fragmentation rate of \nsimilar size to the dimer conversion rates and larger aggregation rates \n($\\alpha \\sim \\xi \\gg \\mu \\sim \\beta = {\\cal O}(1)$). \n\n\\subsubsection{Asymptotic limit 1: $\\beta\\ll1$} \n\\label{5R1A1-sec} \n\nIn the case of asymptotic limit 1, $\\beta\\ll1$, we find the \nsteady-state solution \n\\begin{equation}\nN \\sim \\sqrt{\\frac{\\beta\\varrho}{\\xi+\\alpha\\nu}} , \\quad \nz \\sim \\frac{2\\beta}{\\xi+\\alpha\\nu} , \\quad \nc \\sim \\frac{\\beta\\nu}{\\xi+\\alpha\\nu} . \n\\end{equation}\nFrom (\\ref{5detsimp}), we find an instability if $\\varrho > \\varrho_c := \n4 \\mu (\\xi+\\alpha\\nu) \/ \\alpha\\xi$. That is, larger masses ($\\varrho$) \nfavour symmetry-breaking, as do larger aggregation rates \n($\\alpha,\\xi$). The eigenvalues of (\\ref{5stabmat}) in this limit are \n$q_1 = -\\mu\\nu$, a fast stable mode of the dynamics, and \n\\begin{equation}\nq_2 = \\frac{\\alpha \\xi \\beta^{3\/2}}{2\\mu \\sqrt{\\varrho} (\\xi+\\alpha\\nu)^{3\/2}} \n\\left( \\varrho - \\frac{4\\mu(\\xi+\\alpha\\nu)}{\\alpha\\xi} \\right) , \n\\end{equation}\nwhich indicates a slowly growing instability when $\\varrho>\\varrho_c$. Hence \nthe balance of achiral to chiral morphologies of smaller clusters ($\\nu$) \nalso influences the propensity for a non-racemic solution. However, \nsince the dynamics described by this model does not conserve total \nmass, the results from this reduction should be treated with some caution, \nand we now analyse models which do conserve total mass. 
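The threshold can be confirmed directly from (\ref{5detsimp}): substituting the small-$\beta$ steady-state value $N\sim\sqrt{\beta\varrho\/(\xi+\alpha\nu)}$, the determinant changes sign precisely at $\varrho_c$. A short sketch (with illustrative parameter values, assumed here):

```python
import math

# Sign change of the determinant (5detsimp) at
# rho_c = 4*mu*(xi + alpha*nu)/(alpha*xi), using the beta << 1 steady
# state N = sqrt(beta*rho/(xi + alpha*nu)). Parameters are illustrative.
mu, nu, alpha, xi, beta = 1.0, 0.5, 2.0, 3.0, 0.01

def det_5detsimp(rho):
    N = math.sqrt(beta*rho/(xi + alpha*nu))
    num = beta*mu*nu*(4*beta*mu - alpha*xi*N**2)
    den = (4*beta*mu + 2*alpha*beta*N + 2*mu*xi*N
           + 2*alpha*mu*nu*N + alpha*xi*N**2)
    return num/den

rho_c = 4*mu*(xi + alpha*nu)/(alpha*xi)     # predicted threshold
below, above = det_5detsimp(0.9*rho_c), det_5detsimp(1.1*rho_c)
print(below, above)
```

The determinant is positive (stable racemic state) just below $\varrho_c$ and negative (instability) just above it.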
\n\n\\subsubsection{Asymptotic limit 2: $\\alpha\\sim\\xi\\gg1$}\n\\label{5R1A2-sec}\n\nIn this case we find the steady-state solution is given by \n\\begin{equation}\nN \\sim \\sqrt{\\frac{\\beta\\varrho}{\\xi}} , \\quad \nz \\sim \\frac{2\\beta}{\\xi} , \\quad \nc \\sim \\frac{4\\mu\\nu}{\\alpha} \\sqrt{\\frac{\\beta}{\\xi\\varrho}} . \n\\end{equation}\nThe condition following from (\\ref{5detsimp}) then implies that we \nhave an instability if $\\varrho>\\varrho_c := 4\\mu\/\\alpha \\ll 1$. The eigenvalues \nof the stability matrix are $q_1 = - \\mbox{$\\frac{1}{2}$} \\sqrt{\\beta\\varrho\\xi}$, which is \nlarge and negative, indicating attraction to some lower-dimensional \nsolution over a relatively fast timescale; the corresponding eigenvector is \n$(1,0)^T$, showing that $\\theta\\rightarrow0$. The other eigenvalue \nis $q_2 = 2\\mu\\nu \\sqrt{\\beta\/\\varrho\\xi} \\ll 1$, and corresponds to a slow \ngrowth of the chirality of the solution, since it relates to the \neigenvector $(0,1)^T$. Assuming the system is initiated near its \nsymmetric solution ($\\theta=\\phi=0$), this shows that the distribution \nof clusters changes its chirality first, whilst the dimer concentrations \nremain, at least to leading order, racemic. We expect that at a later \nstage the chirality of the dimers too will become nonzero. 
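This separation of modes can be illustrated numerically. The sketch below assembles (\ref{5stabmat}) from the steady state (\ref{R1sss}) at large values of $\alpha=\xi$; the particular values, and the choice of $N$ (which is set by the initial conditions), are assumptions of this illustration, and only the signs and alignments of the modes are checked.

```python
import numpy as np

# Stability matrix (5stabmat) at the steady state (R1sss) for
# alpha ~ xi >> 1. N and all parameter values are illustrative.
mu, nu, beta = 1.0, 0.5, 1.0
alpha = xi = 1.0e4
N = 0.01

z = 2*beta*N*(2*mu + alpha*N)/((2*beta + xi*N)*(2*mu + alpha*N)
                               + 2*alpha*mu*nu*N)
c = mu*nu*z/(2*mu + alpha*N)

M = np.array([
    [-(2*mu*c/z + xi*z/2 + beta*z/N + beta*N/z),
     beta*N/z + beta*z/N - xi*N/2],
    [-(mu*nu + beta + xi*N/2)*z/N,
     (beta + mu*nu - 2*mu*c/z)*z/N]])
evals, evecs = np.linalg.eig(M)
fast, slow = np.argmin(evals.real), np.argmax(evals.real)
q_fast, q_slow = evals[fast].real, evals[slow].real
v_fast = np.abs(evecs[:, fast]) / np.abs(evecs[:, fast]).max()
v_slow = np.abs(evecs[:, slow]) / np.abs(evecs[:, slow]).max()
print(q_fast, q_slow, v_fast, v_slow)
```

The fast mode is large, negative and aligned with $(1,0)^T$ (so the dimer chirality $\theta$ collapses quickly), whilst the slow mode is small, positive and aligned with $(0,1)^T$ (slow growth of the cluster chirality $\phi$), as described above.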
\n\n\\subsection{Reduction 2: to $x,y,\\varrho_x,\\varrho_y$}\n\nHere we eliminate $x_4=x(1-1\/\\lambda_x)$, \n$y_4=y(1-1\/\\lambda_y)$ together with $N_x$ and $N_y$ using \n\\begin{equation}\n\\lambda_x=\\sqrt{\\frac{\\varrho_x}{2x}}, \\quad \n\\lambda_y=\\sqrt{\\frac{\\varrho_y}{2y}}, \\quad \nN_x = \\sqrt{\\frac{x\\varrho_x}{2}}, \\quad \nN_y = \\sqrt{\\frac{y\\varrho_y}{2}}, \n\\end{equation}\nleaving a system of equations for $(c,x,y,\\varrho_x,\\varrho_y)$ \n\\begin{eqnarray}\n\\frac{{\\rm d} c}{{\\rm d} t} & = & \\mu\\nu(x+y) - 2\\mu c \n- \\frac{\\alpha c}{\\sqrt{2}} \\left( \\sqrt{x\\varrho_x} + \\sqrt{y \\varrho_y} \\right) , \\\\ \n\\frac{{\\rm d} x}{{\\rm d} t} & =& \\mu c - \\mu \\nu x - \\alpha c x - \n\\xi x^2 - \\xi x \\sqrt{\\frac{x\\varrho_x}{2}} + \\beta \\sqrt{\\frac{x\\varrho_x}{2}} \n- \\beta x \\sqrt{\\frac{2x}{\\varrho_x}} , \\nonumber \\\\ && \\\\ \n\\frac{{\\rm d} \\varrho_x}{{\\rm d} t} & = & - 2 \\mu \\nu x + 2 \\mu c \n+ 2 \\alpha c \\sqrt{\\frac{x\\varrho_x}{2}} , \n\\end{eqnarray} \nwith similar equations for $y,\\varrho_y$. 
Transforming to total \nconcentrations and relative chiralities by way of \n\\begin{eqnarray}&\nx = \\mbox{$\\frac{1}{2}$} z (1+\\theta) , \\quad \ny = \\mbox{$\\frac{1}{2}$} z (1-\\theta) , \\quad \n\\varrho_x = \\mbox{$\\frac{1}{2}$} R (1+\\zeta) , \\quad \n\\varrho_y = \\mbox{$\\frac{1}{2}$} R (1-\\zeta) , \n&\\nonumber\\\\&&\n\\end{eqnarray}\nwe find \n\\begin{eqnarray}\n\\frac{{\\rm d} c}{{\\rm d} t} & =& \\mu \\nu z - 2 \\mu c \n\t- \\frac{\\alpha c \\sqrt{z R}}{2\\sqrt{2}} \\left[ \n\t\\sqrt{(1\\!+\\!\\theta)(1\\!+\\!\\zeta)} + \n\t\\sqrt{(1\\!-\\!\\theta)(1\\!-\\!\\zeta)} \\right] , \\lbl{new-r2-cdot} \n\\\\ \n\\frac{{\\rm d} z}{{\\rm d} t} & = & 2\\mu c - \\mu \\nu z - \\alpha c z \n\t- \\mbox{$\\frac{1}{2}$} \\xi z^2 (1\\!+\\!\\theta^2) \\nonumber \\\\ && \n\t+ \\frac{\\beta \\sqrt{zR}}{2\\sqrt{2}} \n\t\\left[ \\sqrt{(1\\!+\\!\\theta)(1\\!+\\!\\zeta)} + \\sqrt{(1\\!-\\!\\theta)(1\\!-\\!\\zeta)} \n\t\\right] \\nonumber \\\\ && \n\t- \\frac{\\xi z^{3\/2} R^{1\/2}}{4\\sqrt{2}} \\left[ \n\t(1\\!+\\!\\theta)^{3\/2} (1\\!+\\!\\zeta)^{1\/2} + (1\\!-\\!\\theta)^{3\/2} \n\t(1\\!-\\!\\zeta)^{1\/2} \\right] \\nonumber \\\\ && \n\t- \\frac{\\beta z^{3\/2} }{\\sqrt{2R}} \n\t\\left[ \\frac{(1\\!+\\!\\theta)^{3\/2}}{(1\\!+\\!\\zeta)^{1\/2}} + \n\t\\frac{(1\\!-\\!\\theta)^{3\/2}}{(1\\!-\\!\\zeta)^{1\/2}} \\right] , \n\\lbl{new-r2-zdot} \n\\\\ \n\\frac{{\\rm d} R}{{\\rm d} t} & = & - 2\\mu\\nu z + 4 \\mu c \n\t+ \\mbox{$\\frac{1}{2}$} \\alpha c \\sqrt{2zR} \\left[ \n\t\\sqrt{(1\\!+\\!\\theta)(1\\!+\\!\\zeta)} + \n\t\\sqrt{(1\\!-\\!\\theta)(1\\!-\\!\\zeta)} \\right] , \n\\nonumber \\\\ && \\lbl{new-r2-Rdot} \n\\end{eqnarray}\ntogether with the equations \n(\\ref{new-r2-thetadot})--(\\ref{new-r2-zetadot}) for the relative \nchiralities $\\theta$ and $\\zeta$, which will be analysed later. 
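The algebra of this change of variables can be spot-checked numerically: the sum of the ${\rm d}\varrho_x/{\rm d}t$ and ${\rm d}\varrho_y/{\rm d}t$ right-hand sides, written in the original variables, should reduce exactly to the ${\rm d}R/{\rm d}t$ expression in the $(z,R,\theta,\zeta)$ variables. A minimal sketch, with arbitrary positive test values:

```python
import math
import random

# Verify d(rho_x)/dt + d(rho_y)/dt == the transformed dR/dt expression.
# Parameter values are arbitrary positive numbers, chosen only for testing.
random.seed(1)
mu, nu, alpha = 0.7, 0.3, 1.3

for _ in range(5):
    x, y, rx, ry, c = (random.uniform(0.1, 2.0) for _ in range(5))
    # Original variables: sum of the two rho equations.
    rhs_orig = (-2*mu*nu*x + 2*mu*c + 2*alpha*c*math.sqrt(x*rx/2)
                - 2*mu*nu*y + 2*mu*c + 2*alpha*c*math.sqrt(y*ry/2))
    # Transformed variables.
    z, R = x + y, rx + ry
    th, ze = (x - y)/z, (rx - ry)/R
    S = math.sqrt((1+th)*(1+ze)) + math.sqrt((1-th)*(1-ze))
    rhs_new = -2*mu*nu*z + 4*mu*c + 0.5*alpha*c*math.sqrt(2*z*R)*S
    assert abs(rhs_orig - rhs_new) < 1e-12
print("dR/dt expressions agree")
```

The identity holds exactly since $z(1\pm\theta)=2x,2y$ and $R(1\pm\zeta)=2\varrho_x,2\varrho_y$, so the check passes to machine precision.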
\n\nSince ${\\rm d} R\/{\\rm d} t = -2\\,{\\rm d} c\/{\\rm d} t$, the total mass in the \nsystem is conserved; requiring it to equal its initial value $\\varrho$ \ngives the relation $\\varrho = R + 2c$ appearing below, so that \n$c=\\mbox{$\\frac{1}{2}$} (\\varrho-R)$, which we use in place \nof the equation for $c$. \n\nIn the symmetric case ($\\theta=\\zeta=0$) we obtain the \nsteady-state conditions \n\\begin{eqnarray}\n0 & = & 2\\mu\\nu z - 4\\mu c - \\alpha c \\sqrt{2zR} , \\qquad\\qquad \n\\varrho \\; = \\; R + 2 c , \\lbl{R2-ssss1} \\\\ \n0 & = & 2\\mu c - \\mu \\nu z - \\alpha c z - \\mbox{$\\frac{1}{2}$} \\xi z^2 \n\t+ \\mbox{$\\frac{1}{2}$} \\beta \\sqrt{2zR} - \\beta z \\sqrt{\\frac{2z}{R}} \n\t- \\frac{\\xi z}{2} \\sqrt{\\frac{zR}{2}} . \\nonumber \\\\ && \\lbl{R2-ssss2} \n\\end{eqnarray}\nFor small $\\theta,\\zeta$, the equations for the chiralities \ncan be approximated by \n\\begin{eqnarray}\n\\frac{{\\rm d} \\theta}{{\\rm d} t} & = & - \\left( \\frac{2\\mu c}{z} \n\t+ \\mbox{$\\frac{1}{2}$} \\xi z + \\mbox{$\\frac{1}{2}$} \\beta \\sqrt{\\frac{R}{2z}} \n\t+ \\mbox{$\\frac{1}{2}$} \\beta \\sqrt{\\frac{2z}{R}} \n\t+ \\rec{4} \\xi \\sqrt{\\frac{zR}{2}} \\right) \\theta \\nonumber \\\\ && \n\t+ \\left( \\frac{\\beta(R+2z)}{2\\sqrt{2zR}} \n\t- \\frac{\\xi}{4} \\sqrt{\\frac{Rz}{2}} \\right) \\zeta , \n\\lbl{new-r2-thetadot} \\\\ \n\\frac{{\\rm d} \\zeta}{{\\rm d} t} & = & \\left( \\frac{2\\mu\\nu z}{R} \n- \\alpha c \\sqrt{\\frac{zR}{2}} \\right) \\theta \n- \\left( \\frac{2\\mu\\nu z}{R} - \\frac{4\\mu c}{R} \\right) \\zeta . \n\\lbl{new-r2-zetadot} \n\\end{eqnarray}\nWe analyse the stability of the symmetric (racemic) state in the two \nlimits $\\beta\\ll1$ and $\\alpha\\sim\\xi\\gg1$ in the next subsections. 
\n\n\\subsubsection{Asymptotic limit 1: $\\beta\\ll1$}\n\nIn this case, solving the conditions \n(\\ref{R2-ssss1})--(\\ref{R2-ssss2}) asymptotically, \nwe find \n\\begin{equation}\nz \\sim \\frac{2\\beta}{\\xi+\\alpha\\nu} , \\qquad \nc \\sim \\frac{\\beta\\nu}{\\xi+\\alpha\\nu} , \\qquad \nR \\sim \\varrho - 2c . \n\\end{equation}\nSubstituting these values into the differential equations \nwhich determine the stability of the racemic state leads to \n\\begin{equation}\n\\frac{{\\rm d} }{{\\rm d} t} \n\\left( \\begin{array}{c} \\theta \\\\[3ex] \\zeta \\end{array} \\right) = \n\\left( \\begin{array}{cc} \n-\\mu\\nu & \n\\displaystyle\\frac{\\alpha\\nu}{4} \\sqrt{\\displaystyle\\frac{\\beta\\varrho}{\\xi+\\alpha\\nu}}\\\\ \n-\\displaystyle\\frac{4\\beta\\mu\\nu}{\\varrho(\\xi+\\alpha\\nu)} & \n\\displaystyle\\frac{\\alpha\\nu\\beta^{3\/2}}{(\\xi+\\alpha\\nu)^{3\/2} \\sqrt{\\varrho}} \n\\end{array} \\right) \n\\left( \\begin{array}{c} \\theta \\\\[3ex] \\zeta \\end{array} \\right) . \n\\end{equation}\nFormally this matrix has eigenvalues of zero and $-\\mu\\nu$. \nSince the zero eigenvalue indicates marginal stability of the \nracemic solution, we need to consider higher-order terms to \nobtain a more definite result. \n\nGoing to higher order gives the determinant of the resulting matrix \nas $-\\alpha \\xi \\nu \/ (\\alpha\\nu+\\xi)^2$, hence the eigenvalues are \n\\begin{equation}\nq_1 = -\\mu\\nu , \\qquad {\\rm and} \\quad \nq_2 = \\frac{ \\alpha \\xi }{\\mu (\\alpha\\nu+\\xi)^2 } , \n\\end{equation}\nthe former indicating a rapid decay of $\\theta$ (corresponding to the \neigenvector $(1,0)^T$), and the latter showing a slow divergence from \nthe racemic state in the $\\zeta$-direction, at leading order, according to \n\\begin{equation}\n\\left( \\begin{array}{c} \\theta \\\\ \\zeta \\end{array} \\right) \n\\sim C_1 \\left( \\begin{array}{c} 0 \\\\ 1 \\end{array} \\right) \n\\exp \\left( \\frac{ \\alpha \\xi t }{\\mu (\\alpha\\nu+\\xi)^2 } \\right) . 
\n\\end{equation}\nHence in the case $\\beta\\ll1$, we find an instability of the \nsymmetric solution for all other parameter values. \n\n\\subsubsection{Asymptotic limit 2: $\\alpha\\sim\\xi\\gg1$}\n\nIn this case, solving the conditions \n(\\ref{R2-ssss1})--(\\ref{R2-ssss2}) asymptotically, we find \n\\begin{equation} \nz \\sim \\frac{2\\beta}{\\xi} , \\qquad \nc \\sim \\frac{2\\mu\\nu}{\\alpha} \\sqrt{\\frac{\\beta}{\\varrho\\xi}} , \\qquad \nR \\sim \\varrho - 2c . \n\\end{equation} \nSubstituting these values into the differential equations \n(\\ref{new-r2-thetadot})--(\\ref{new-r2-zetadot}) \nwhich determine the stability of the racemic state leads to \n\\begin{equation} \n\\frac{{\\rm d} }{{\\rm d} t} \n\\left( \\begin{array}{c} \\theta \\\\[1ex] \\zeta \\end{array} \\right) = \n\\left( \\begin{array}{ccc} \n- \\mbox{$\\frac{1}{2}$} \\sqrt{\\beta\\xi\\varrho} && o(\\sqrt{\\xi}) \\\\[1ex] \n- \\displaystyle\\frac{4\\beta\\mu\\nu}{\\varrho\\xi} && \\displaystyle\\frac{4\\beta\\mu\\nu}{\\varrho\\xi} \n\\end{array} \\right) \n\\left( \\begin{array}{c} \\theta \\\\[1ex] \\zeta \\end{array} \\right) , \n\\end{equation} \nhence the eigenvalues are $q_1=-\\mbox{$\\frac{1}{2}$}\\sqrt{\\beta\\varrho\\xi}$ and \n$q_2 = 4\\mu\\nu\\beta\/\\varrho\\xi$ (in the above $o(\\sqrt{\\xi})$ means \na quantity $q$ satisfying $q\\ll\\sqrt{\\xi}$ as $\\xi\\rightarrow\\infty$). \nWhilst the former indicates the existence of a stable manifold (with \na fast rate of attraction), the latter shows that there is also an unstable \nmanifold. Although the timescale associated with this is much slower, \nit shows that the symmetric (racemic) state is unstable. 
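These asymptotic eigenvalues can be cross-checked against a direct computation on the $2\times2$ matrix above. In this sketch the $o(\sqrt{\xi})$ entry is simply set to $1$, an arbitrary choice consistent with $o(\sqrt{\xi})$, and the other parameter values are illustrative:

```python
import math

# Compare the asymptotic eigenvalues q1 ~ -sqrt(beta*rho*xi)/2 and
# q2 ~ 4*mu*nu*beta/(rho*xi) with a direct 2x2 eigenvalue calculation.
beta, mu, nu, rho, xi = 1.0, 1.0, 1.0, 2.0, 1.0e4   # xi >> 1

a11, a12 = -0.5 * math.sqrt(beta * xi * rho), 1.0   # a12 models o(sqrt(xi))
a21 = -4 * beta * mu * nu / (rho * xi)
a22 = 4 * beta * mu * nu / (rho * xi)

tr, det = a11 + a22, a11 * a22 - a12 * a21
disc = math.sqrt(tr * tr - 4 * det)
lam1, lam2 = 0.5 * (tr - disc), 0.5 * (tr + disc)   # exact eigenvalues

q1 = -0.5 * math.sqrt(beta * rho * xi)
q2 = 4 * mu * nu * beta / (rho * xi)
print(abs(lam1 / q1 - 1) < 0.01, abs(lam2 / q2 - 1) < 0.05)  # True True
```

The fast stable eigenvalue agrees to high accuracy, while the slow unstable one agrees to within a few per cent at this value of $\xi$, improving as $\xi\rightarrow\infty$.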
\n\n\\subsection{Reduction 3: to $N_x,N_y,\\varrho_x,\\varrho_y$}\n\nIn this case our aim is to retain only information on the number \nand typical size of crystal distribution, so we eliminate the dimer \nconcentrations $x,y$, using \n\\begin{equation} \n\\lambda_x = \\frac{\\varrho_x}{2 N_x} , \\quad \n\\lambda_y = \\frac{\\varrho_y}{2 N_y} , \\quad \nx = \\frac{2 N_x^2}{\\varrho_x} , \\quad \ny = \\frac{2 N_y^2}{\\varrho_y} . \n\\end{equation} \nThese transformations reformulate the governing equations \n(\\ref{newcdot})--(\\ref{new-roxy-dot}) to \n\\begin{eqnarray}\n\\frac{{\\rm d} N_x}{{\\rm d} t} & = & \\mbox{$\\frac{1}{2}$} \\mu (\\varrho -R) + \\beta N_x \n\t- 2 (\\mu\\nu+\\beta) \\frac{N_x^2}{\\varrho_x} \n\t- \\frac{2\\xi N_x^3}{\\varrho_x} , \\lbl{r3-nxdot} \\\\ \n\\frac{{\\rm d} N_y}{{\\rm d} t} & = & \\mbox{$\\frac{1}{2}$} \\mu (\\varrho - R) + \\beta N_y\n\t- 2 (\\mu\\nu+\\beta) \\frac{N_y^2}{\\varrho_y} \n\t- \\frac{2\\xi N_y^3}{\\varrho_y} , \\\\ \n\\frac{{\\rm d} \\varrho_x}{{\\rm d} t} & = & (\\varrho-R)(\\mu+\\alpha N_x) \n\t- \\frac{4\\mu\\nu N_x^2}{\\varrho_x} , \\lbl{r3-roxdot} \\\\ \n\\frac{{\\rm d} \\varrho_y}{{\\rm d} t} & = & (\\varrho-R)(\\mu+\\alpha N_y) \n\t- \\frac{4\\mu\\nu N_y^2}{\\varrho_y} , \\lbl{r3-roydot}\n\\end{eqnarray}\nwhere $R := \\varrho_x + \\varrho_y$. 
\nWe now transform to total concentrations ($N$, $R$) \nand relative chiralities ($\\phi$ and $\\zeta$) {\\em via} \n\\begin{equation} \nN_x = \\mbox{$\\frac{1}{2}$} N (1+\\phi) , \\quad \nN_y = \\mbox{$\\frac{1}{2}$} N (1-\\phi) , \\quad \n\\varrho_x = \\mbox{$\\frac{1}{2}$} R (1+\\zeta) , \\quad \n\\varrho_y = \\mbox{$\\frac{1}{2}$} R (1-\\zeta) , \n\\end{equation}\ntogether with $c = \\mbox{$\\frac{1}{2}$} (\\varrho - R)$, to obtain \n\\begin{eqnarray}\n\\frac{{\\rm d} R}{{\\rm d} t} & = & (\\varrho-R)(2\\mu+ \\alpha N) \n- \\frac{4\\mu\\nu N^2(1+\\phi^2-2\\phi\\zeta)}{R (1-\\zeta^2)} , \n\\lbl{r3Rd} \\\\ \\lbl{r3Nd} \n\\frac{{\\rm d} N}{{\\rm d} t} & = & \\!\\!\\mu (\\varrho \\! - \\! R) + \\beta N \n\t\\\\ && \\! - \\frac{N^2}{R(1\\!-\\!\\zeta^2)} \\left[ \n\t2(\\mu\\nu\\!+\\!\\beta) (1\\!+\\!\\phi^2\\!-\\!2\\phi\\zeta) + \n\t\\xi N (1\\!+\\!3\\phi^2\\!-\\!3\\phi\\zeta\\!-\\!\\phi^3\\zeta) \\right] ,\n\\nonumber \\\\ \n\\frac{{\\rm d}\\phi}{{\\rm d} t} &=& \\beta\\phi - \\frac{1}{N}\\frac{{\\rm d} N}{{\\rm d} t}\\phi\n\t\\\\&& \\!\\!- \\frac{N}{R(1\\!-\\!\\zeta^2)} \\left[ \n\t2(\\beta\\!+\\!\\mu\\nu)(2\\phi\\!-\\!\\zeta\\!-\\!\\phi^2\\zeta)\n\t+ \\xi N (3\\phi\\!-\\!\\zeta\\!+\\!\\phi^3\\!-\\!3\\phi^2\\zeta) \\right] , \\nonumber \n\\\\ \n\\frac{{\\rm d} \\zeta}{{\\rm d} t} & =& \\frac{\\alpha (\\varrho-R) N \\phi}{R} \n\t- \\frac{1}{R}\\frac{{\\rm d} R}{{\\rm d} t} \\zeta - \\frac{4\\mu\\nu N^2 \n\t(2\\phi-\\zeta-\\phi^2\\zeta)}{R^2 (1-\\zeta^2)} . \n\\end{eqnarray}\nWe now analyse this system in more detail, since this set of \nequations conserves mass, and is easier to analyse than \n(\\ref{new-r2-cdot})--(\\ref{new-r2-Rdot}) due to the absence of \nsquare roots. We consider the two asymptotic limits ($\\beta\\ll1$ \nand $\\alpha\\sim\\xi\\gg1$) in which, at steady-state, the majority of \nmass is in the form of clusters. 
\n\n\\subsubsection{The symmetric steady-state}\n\nPutting $\\zeta=0=\\phi$, we find the symmetric steady-state is given by \n\\begin{eqnarray} \n0 &=& (\\varrho-R)(2\\mu+\\alpha N) - \\frac{4\\mu\\nu N^2}{R} , \n\\lbl{r3ssss1} \\\\ \n0 &=& \\mu (\\varrho-R) + \\beta N \n- 2(\\mu\\nu+\\beta)\\frac{N^2}{R} - \\frac{\\xi N^3}{R} . \n\\lbl{r3ssss2} \\end{eqnarray} \nThe former may be solved either for $R$, giving \n\\begin{equation}\nR = \\mbox{$\\frac{1}{2}$} \\varrho \\left( 1 \\pm \\sqrt{ 1 - \\frac{16\\mu\\nu N^2} \n{ (2\\mu+\\alpha N) \\varrho^2 } } \\right) , \n\\end{equation}\nor for $N$, giving \n\\begin{equation}\nN = \\frac{\\alpha R(\\varrho-R)}{8\\mu\\nu} \\left( 1 + \n\\sqrt{1 + \\frac{32\\mu^2\\nu}{\\alpha^2 R(\\varrho-R)}} \\right) . \n\\end{equation}\nMore complete asymptotic solutions will be derived in Sections \n\\ref{r3-a1-sec} and \\ref{r3-a2-sec}. \n\n\\subsubsection{Stability of the symmetric state} \n\nWe now consider the stability of the \nsymmetric steady-state. For small $\\phi,\\zeta$ we have \n\\begin{eqnarray} \\!\\!\\!\\!\\!& \n\\displaystyle\\frac{R}{N} \\displaystyle\\frac{{\\rm d}}{{\\rm d} t} \\!\\!\n\\left( \\!\\!\\begin{array}{c} \\phi \\\\ \\\\ \\zeta \\end{array} \\!\\!\\right) \n\\!=\\!\\! \\left( \\!\\!\\begin{array}{cc} \n\\!\\! - \\! 2\\beta \\!-\\! 2\\mu\\nu \\!-\\! 2 \\xi N \n\t\\!-\\! \\displaystyle\\frac{\\mu (\\varrho\\!-\\!R) R}{N^2} \\!\\!&\\! \n2\\beta \\!+\\! 2\\mu\\nu \\!+\\! \\xi N \n\\\\ \n\\left( \\alpha (\\varrho\\!-\\!R) \n\t\\!-\\! \\displaystyle\\frac{8\\mu\\nu N}{R} \\right) \\! \\!&\\! \n\\!8\\mu\\nu \\!-\\! \\displaystyle\\frac{(\\varrho\\!\\!-\\!\\!R)(2\\mu\\!\\!+\\!\\!\\alpha N)R}{N^2} \\! \n\\end{array} \\!\\!\\right) \\!\\!\\!\n\\left(\\!\\! \\begin{array}{c} \\phi \\\\ \\\\ \\zeta \\end{array} \\!\\!\\right) \\!\\!,\\!\\! \n& \\nonumber \\\\ \\!\\!\\!\\!\\!\\! && \\!\\!\\!\\!\\lbl{r3-stab}\n\\end{eqnarray}\nand this is unstable if the determinant of this matrix is negative. 
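As an illustration, the determinant can be evaluated at the symmetric state, here approximated by its small-$\beta$ asymptotic form derived below. Parameter values are illustrative, and $\nu=0.5$ is an assumption:

```python
import math

# Evaluate the determinant of the (phi, zeta) stability matrix at the
# symmetric steady state, approximated by its beta << 1 asymptotic form.
alpha, xi, mu, nu, beta, rho = 1.0, 1.0, 1.0, 0.5, 0.01, 8.0

N = math.sqrt(beta * rho / (xi + alpha * nu))
R = rho - 2 * nu * beta / (xi + alpha * nu)

a11 = -2*beta - 2*mu*nu - 2*xi*N - mu*(rho - R)*R/N**2
a12 = 2*beta + 2*mu*nu + xi*N
a21 = alpha*(rho - R) - 8*mu*nu*N/R
a22 = 8*mu*nu - (rho - R)*(2*mu + alpha*N)*R/N**2

det = a11*a22 - a12*a21
print(det < 0)   # True: the racemic state is unstable for these values
```

A negative determinant of a real $2\times2$ matrix forces one positive and one negative eigenvalue, i.e. a saddle, which is the instability mechanism used throughout this section.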
\nNow we consider the two asymptotic limits in more detail. \n\n\\subsubsection{Asymptotic limit 1: $\\beta \\ll1$}\n\\label{r3-a1-sec}\n\nWhen fragmentation is slow, that is, $\\beta\\ll1$, at steady-state we \nhave $N={\\cal O}(\\sqrt{\\beta})$ and $R = \\varrho - {\\cal O}(\\beta)$. \nBalancing terms in (\\ref{r3ssss1})--(\\ref{r3ssss2}) we find the same \nleading order equation twice, namely $2\\nu N^2=\\varrho(\\varrho-R)$. \nTaking the difference of the two yields an independent equation \nfrom higher order terms, hence we obtain \n\\begin{equation} \nN \\sim \\sqrt{\\frac{\\beta \\varrho}{\\xi+\\alpha\\nu}} , \\qquad \nR \\sim \\varrho - \\frac{2\\nu\\beta}{\\xi+\\alpha\\nu} . \n\\end{equation} \nNote that this result implies that the dimer concentrations \nare small, with $c\\sim z$ and $c \\sim \\beta\\nu \/ (\\xi+\\alpha\\nu)$, \n$z\\sim 2\\beta\/(\\xi+\\alpha\\nu)$. \n\nSubstituting these expressions into those for the stability of the \nsymmetric steady-state (\\ref{r3-stab}), we find \n\\begin{equation}\n\\frac{R}{4\\mu\\nu N} \\frac{{\\rm d}}{{\\rm d} t} \n\\left( \\begin{array}{c} \\phi \\\\[1ex] \\zeta \\end{array} \\right) = \n\\left( \\begin{array}{cc} -1 & \\quad \\frac{1}{2} \\\\ \n-2\\sqrt{\\displaystyle\\frac{\\beta}{\\varrho(\\xi\\!+\\!\\alpha\\nu)}} & \\quad 1 \n\\end{array} \\right) \n\\left( \\begin{array}{c} \\phi \\\\[1ex] \\zeta \\end{array} \\right) . \n\\end{equation}\nThis matrix has one stable eigenvalue (corresponding to \n$(1,0)^T$ and hence the decay of $\\phi$ whilst $\\zeta$ remains \ninvariant); the unstable eigenvector is $(1,4)^T$, hence we find \n\\begin{equation}\n\\left( \\begin{array}{c} \\phi(t) \\\\ \\zeta(t) \\end{array} \\right) \\sim C \n\\left( \\begin{array}{c} 1 \\\\ 4 \\end{array} \\right) \\exp \\left( \n\\frac{4\\mu\\nu t \\sqrt{\\beta}}{\\sqrt{\\varrho(\\xi+\\alpha\\nu)}} \\right) . 
\n\\lbl{r3-chir-rate} \\end{equation} \nIf we compare the timescale of this solution to that over which the \nconcentrations $N,R$ vary, we find that symmetry-breaking \noccurs on a slower timescale than the evolution of cluster masses \nand numbers. This is illustrated in the numerical simulation of \nequations (\\ref{r3-nxdot})--(\\ref{r3-roydot}) shown in Figure \n\\ref{fig-r3alpha}. More specifically, the timescale increases with \nthe mass in the system, and with the ratio of aggregation to \nfragmentation rates, $(\\alpha\\nu+\\xi)\/\\beta$, and is inversely related \nto the chiral switching rate of small clusters ($\\mu\\nu$). \n\n\\begin{figure}[!ht]\n\\vspace*{68mm}\n\\special{psfile=fig-r3beta.eps \n\thscale=80 vscale=60 hoffset=-70 voffset=-150}\n\\caption{Graph of concentrations $N_x,N_y,\\varrho_x,\\varrho_y,c$ \nagainst time on a logarithmic timescale for the asymptotic limit 1, \nwith initial conditions $N_x=0.2=N_y$, $\\varrho_x=0.45$, $\\varrho_y=0.44$, \nother parameters given by $\\alpha=1=\\xi=\\mu$, $\\beta=0.01$, \n$\\varrho=8$. Since model equations are in nondimensional form, \nthe time units are arbitrary. }\n\\label{fig-r3alpha}\n\\end{figure}\n\n\\subsubsection{Asymptotic limit 2: $\\alpha \\sim \\xi \\gg 1$}\n\\label{r3-a2-sec}\n\nIn this case we retain the assumptions that $\\mu,\\nu={\\cal O}(1)$; \nhowever, we now impose $\\beta={\\cal O}(1)$ and \n$\\alpha \\sim \\xi \\gg1$. For a steady-state, we require the scalings \n$N ={\\cal O}(1\/\\sqrt{\\xi})$ and $\\varrho-R={\\cal O}(1\/\\xi^{3\/2})$. \nSpecifically, solving (\\ref{r3ssss1})--(\\ref{r3ssss2}) we find \n\\begin{equation}\nN \\sim \\sqrt{\\frac{\\beta\\varrho}{\\xi}} , \\qquad \nR \\sim \\varrho - \\frac{4\\mu\\nu}{\\alpha\\varrho} \\sqrt{\\frac{\\beta\\varrho}{\\xi}} , \n\\lbl{r3a2-sss} \\end{equation}\nhence the dimer concentrations $c = \\mbox{$\\frac{1}{2}$} (\\varrho-R) \\sim N^3 = \n{\\cal O}(1\/\\xi^{3\/2})$ and $z = 2 N^2\/\\varrho \\sim N^2 = {\\cal O}(1\/\\xi)$. 
\nMore precisely, $c\\sim (2\\mu\\nu\/\\alpha)\\sqrt{\\beta\/\\varrho\\xi}$ and \n$z\\sim 2\\beta\/\\xi$, in contrast with the previous asymptotic scaling \nwhich gave $z\\sim N^2$. \n\nTo determine the timescales for crystal growth and dissolution, \nwe use (\\ref{r3a2-sss}) to define\n\\begin{equation} \nN \\sim n(t) \\sqrt{\\beta \\varrho\/\\xi} , \\quad \nR \\sim \\varrho - \\frac{4\\mu\\nu r(t)}{\\alpha \\varrho} \n\\sqrt{\\frac{\\beta\\varrho}{\\xi}} , \n\\end{equation} \nand so rewrite the governing equations (\\ref{r3Rd})--(\\ref{r3Nd}) as \n\\begin{eqnarray}\n\\frac{{\\rm d} n}{{\\rm d} t} & = & \\beta n \\left( 1 - n^2 - \n\\frac{2 n (\\beta+\\mu\\nu)}{\\sqrt{\\varrho\\xi\\beta}} \\right) , \\\\ \n\\frac{{\\rm d} r}{{\\rm d} t} & = & \\alpha \\sqrt{\\frac{\\beta\\varrho}{\\xi}} \n\\left( n^2 -r - \\frac{2\\mu r}{\\alpha} \n\\sqrt{\\frac{\\xi}{\\beta\\varrho}} \\right) . \n\\end{eqnarray}\nHere, the former equation for $n(t)$ corresponds to the \nslower timescale, with a rate $\\beta$; the rate of \nequilibration of $r(t)$ is $\\alpha \\sqrt{\\beta\\varrho\/\\xi}$. \n\nThe stability of the symmetric state is determined by \n\\begin{equation}\n\\frac{R}{N} \\frac{{\\rm d} }{{\\rm d} t} \n\\left( \\begin{array}{c} \\phi(t) \\\\ \\zeta(t) \\end{array} \\right) = \n\\left( \\begin{array}{cc} -2 \\sqrt{\\beta\\varrho\\xi} & \\sqrt{\\beta\\varrho\\xi} \\\\ \n-4\\mu\\nu \\sqrt{\\beta \/ \\xi \\varrho} & 4\\mu\\nu \\end{array} \\right) \n\\left( \\begin{array}{c} \\phi \\\\ \\zeta \\end{array} \\right) . \n\\lbl{r3a2-phi-zeta-sys} \\end{equation} \nThis matrix has one large negative eigenvalue ($\\sim -2\\sqrt{\\beta\\varrho\\xi}$) \nand one (smaller) positive eigenvalue ($\\sim 4\\mu\\nu$); the former \ncorresponds to $(1,0)^T$ hence the decay of $\\phi$, whilst the latter \ncorresponds to the eigenvector $(1,2)^T$. 
Hence the system \n(\\ref{r3a2-phi-zeta-sys}) has the solution \n\\begin{equation} \n\\left( \\begin{array}{c} \\phi \\\\ \\zeta \\end{array} \\right) \\sim \nC \\left( \\begin{array}{c} 1 \\\\ 2 \\end{array} \\right) \n\\exp \\left( 4 \\mu \\nu t \\sqrt{ \\frac{\\beta}{\\varrho\\xi}} \\right) . \n\\lbl{r3a2urate} \\end{equation} \nThe chiralities evolve on two timescales, the faster being \n$2\\beta$ corresponding to the stable eigenvalue of \n(\\ref{r3a2-phi-zeta-sys}) and the slower unstable rate \nbeing $4\\mu\\nu\\sqrt{\\beta\/\\xi\\varrho}$. This timescale is \nsimilar to (\\ref{r3-chir-rate}), being dependent on mass and \nthe ratio of aggregation to fragmentation, and inversely \nproportional to the chiral switching rate of dimers ($\\mu\\nu$). \n\n\\begin{figure}[!ht]\n\\vspace*{68mm}\n\\special{psfile=fig-r3alpha.eps \n\thscale=80 vscale=60 hoffset=-70 voffset=-150}\n\\caption{Graph of the concentrations $N_x,N_y,\\varrho_x,\\varrho_y,c$ \nagainst time on a logarithmic timescale for the asymptotic limit 2, \nwith initial conditions $N_x=0.2=N_y$, $\\varrho_x=0.45$, $\\varrho_y=0.44$, \nother parameters given by $\\alpha=10=\\xi$, $\\beta=1=\\mu$, \n$\\nu=0.5$, $\\varrho=2$. Since model equations \nare in nondimensional form, the time units are arbitrary. }\n\\label{fig-r3beta}\n\\end{figure}\n\n\\subsection{The asymmetric steady-state}\n\nSince the symmetric state can be unstable, there must be some \nother large-time asymmetric attractor(s) for the system, \nwhich we now aim to find. From (\\ref{r3-nxdot}) and \n(\\ref{r3-roxdot}), at steady-state, we have \n\\begin{equation}\n2c_2 (\\mu+\\alpha N_x) = \\frac{4\\mu\\nu N_x^2}{\\varrho_x} , \n\\qquad \\mu c_2 + \\beta N_x = \n2 (\\mu\\nu+\\beta+\\xi N_x) \\frac{N_x^2}{\\varrho_x} . 
\n\\lbl{r3a-eqs} \\end{equation} \nTaking the ratio of these we find a single quadratic \nequation for $N_x$\n\\begin{equation}\n0 = \\alpha \\xi N_x^2 - \\left( \\frac{\\beta\\mu\\nu}{c_2} \n- \\alpha\\beta - \\alpha\\mu\\nu - \\xi\\mu \\right) N_x + \\beta\\mu , \n\\lbl{r3a-Neq}\n\\end{equation}\nwith an identical one for $N_y$. Hence there is the possibility \nof distinct solutions for $N_x$ and $N_y$ if both roots of \n(\\ref{r3a-Neq}) are positive; this occurs if \n\\begin{equation}\nc_2 < \\frac{\\beta\\mu\\nu}{\\alpha\\beta + \\xi\\mu \n+ \\alpha\\mu\\nu + 2\\sqrt{\\alpha\\beta\\xi\\mu} } . \n\\lbl{r3a-ineq} \\end{equation}\nGiven $N_x$ ($N_y$), we then have to solve one of \n(\\ref{r3a-eqs}) to find $\\varrho_x$ ($\\varrho_y$), {\\em via} \n\\begin{equation}\n\\varrho_x = \\frac{2 \\mu \\nu N_x^2}{c_2 (\\mu+\\alpha N_x)} , \n\\lbl{r3a-a1-rox} \\end{equation} \nand then satisfy the consistency condition that \n$\\varrho_x + \\varrho_y + 2 c_2 = \\varrho$. After some algebra, \nthis condition reduces to \n\\begin{eqnarray}\n\\mbox{$\\frac{1}{2}$} \\alpha^2 \\xi c_2^2 (\\beta \\!-\\! \\alpha c_2 ) (\\varrho\\!-\\!2c_2) \n&\\!=\\!& \\beta^2\\mu^2\\nu^2 - \\beta\\mu\\nu c_2 [ \\alpha\\beta \n+ 2\\alpha\\mu\\nu + 2\\xi\\mu ] \\nonumber \\\\ && \n+ \\mu c_2^2 [ \\mu (\\alpha\\nu\\!+\\!\\xi)^2 + \\alpha\\beta (\\alpha\\nu\\!-\\!\\xi) ] . \n\\lbl{r3a-consistency} \\end{eqnarray}\nBeing a cubic, it is not straightforward to write down explicit \nsolutions of this equation, hence we once again consider the two \nasymptotic limits ($\\beta\\ll1$ and $\\alpha\\sim\\xi\\gg1$). \n\n\\subsubsection{Asymptotic limit 1: $\\beta \\ll 1$}\n\nIn this case, $c_2 = {\\cal O}(\\beta)$ hence we put $c_2=\\beta C$ \nand the consistency condition (\\ref{r3a-consistency}) yields \n\\begin{equation} \n{\\cal O}(\\beta^3) = \\beta^2 \\left[ \\nu - (\\alpha\\nu+\\xi) C \\right]^2 , \n\\lbl{r3a-a1-C} \\end{equation} \nhence, to leading order, $C=\\nu\/(\\alpha\\nu+\\xi)$ . 
Unfortunately, the \nresulting value for $c_2$ causes all the leading-order terms in the \nquadratic equation (\\ref{r3a-Neq}) for $N_x$ to cancel. We thus have to \nfind higher order terms in the expansion for $c_2$; due to the form of \n(\\ref{r3a-a1-C}), the next correction term is ${\\cal O}(\\beta^{3\/2})$. \nPutting $c_2=\\beta C(1+\\tilde C \\sqrt{\\beta})$, we find \n\\begin{equation} \n\\tilde C^2 = \\frac{\\alpha\\xi \\,\\left[ \\, \\alpha\\xi\\varrho + 4 \\mu (\\alpha\\nu+\\xi) \n\\, \\right] }{2\\mu^2 (\\alpha\\nu+\\xi)^3} . \n\\end{equation} \nIn order to satisfy the inequality (\\ref{r3a-ineq}), we require the \nnegative root, that is, $\\tilde C<0$. \n\nAlthough the formulae for $N_x,N_y$ are lengthy, \ntheir sum and product simplify to \n\\begin{equation}\n\\Sigma = N_x + N_y = \n-\\frac{\\mu \\tilde C \\sqrt{\\beta} (\\alpha\\nu+\\xi)}{\\alpha\\xi} , \\qquad \n\\Pi = N_x N_y = \\frac{\\beta\\mu}{\\alpha\\xi} . \n\\end{equation}\nThe chirality $\\phi$ can be simplified using $\\phi^2=1-4\\Pi\/\\Sigma^2$ \nwhich implies \n\\begin{equation}\n\\phi^2 = \\frac{\\alpha\\varrho \\xi - 4\\mu(\\alpha\\nu+\\xi)}\n{\\alpha\\varrho\\xi+4\\mu (\\alpha\\nu+\\xi)} . \n\\end{equation}\nHence we require $\\varrho > \\varrho_c := 4\\mu(\\alpha\\nu+\\xi)\/\\alpha\\xi$ in \norder for the system to have nonsymmetric steady-states, that is, the \nsystem undergoes a symmetry-breaking bifurcation as $\\varrho$ increases \nthrough $\\varrho=\\varrho_c$. As the mass in the system increases further, \nthe chirality $\\phi$ approaches ($\\pm$) unity, indicating a state in \nwhich one handedness of crystal completely dominates the other. \n\n\\subsubsection{Asymptotic limit 2: $\\alpha \\sim \\xi \\gg 1$}\n\nIn this case, the left-hand side of the consistency condition \n(\\ref{r3a-consistency}) is ${\\cal O}(\\alpha^2\\xi c_2^2)$ whilst the \nright-hand side is ${\\cal O}(1)+{\\cal O}(\\alpha c_2^2)$, which implies \nthe balance $c_2={\\cal O}(\\xi^{-3\/2})$. 
Solving for $c_2$ leads to \n\\begin{equation} \nc_2 \\sim \\frac{\\mu\\nu}{\\alpha} \\sqrt{ \\frac{2\\beta}{\\varrho\\xi} } . \n\\end{equation} \nThe leading order equation for $N_x,N_y$ is then \n\\begin{equation} \n0 = \\alpha\\xi N^2 - \\alpha N \\sqrt{\\mbox{$\\frac{1}{2}$}\\beta\\varrho\\xi} + \\beta\\mu , \n\\end{equation} \nhence we find the roots \n\\begin{equation} \nN_x,N_y \\sim \\sqrt{\\frac{\\beta\\varrho}{2\\xi}} , \n\\frac{2\\mu}{\\alpha} \\sqrt{\\frac{\\beta}{2\\xi\\varrho}} , \\qquad \n\\varrho_x , \\varrho_y \\sim \\varrho , \\frac{2\\mu}{\\alpha} . \n\\end{equation} \nSince we have either $\\varrho_x \\gg N_x \\gg \\varrho_y \\gg N_y$ \nor $\\varrho_y \\gg N_y \\gg \\varrho_x \\gg N_x$, in this asymptotic limit, \nthe system is completely dominated by one species or the other. \nPutting $\\Sigma=N_x+N_y$ and $\\Pi=N_xN_y$ we have \n$\\phi^2=1-4\\Pi\/\\Sigma^2 \\sim 1 - 8 \\mu\/\\alpha\\varrho$. \n\n\\section{Discussion}\n\\label{disc-sec}\n\\setcounter{equation}{0}\n\nWe now try to use the above theory and experimental \nresults of Viedma \\cite{viedma} to estimate the relevant \ntimescales for symmetry-breaking in a prebiotic world. \nExtrapolating the data of time against grinding rate in rpm \nfrom Figure 2 of Viedma \\cite{viedma} \nsuggests times of $2\\times10^5$ hours using a straight line \nfit to log(time) against log(rpm) or 1000--3000 hours if \nlog(time) against rpm or time against log(rpm) is fitted. \nA reduction in the speed of grinding in prebiotic circumstances \nis expected since natural processes such as water waves \nare much more likely to operate at the order of a few cycles \nper second or per minute rather than 600 rpm. \n\nSimilar extrapolations on the number and mass of balls \nused to much lower amounts give a further reduction \nof about 3, using a linear fit to log(time) against mass of balls \nfrom Figure 1 of Viedma \\cite{viedma}. 
There is an \nequally good straight line fit to time against log(ball-mass) \nbut it is then difficult to know how small a mass of balls \nwould be appropriate in the prebiotic scenario. \nThere is an additional factor due to the experiments \nof Viedma being on a small volume of 10 ml, whereas a \nsensible volume for prebiotic chemistry is 1000 l, \ngiving an additional factor of $10^5$. \nCombining these three factors ($10^3$, 3, and $10^5$) with \nthe 10 days of the original experiment, we estimate that the \ntimescale for prebiotic symmetry breaking is ${\\cal O}(3\\times10^9)$ \ndays, which is of the order of ten million years. \n\nThis extrapolation ignores the time required to arrive \nat the initial enantiomeric excesses of 5\\% used by Viedma \n\\cite{viedma} from a small asymmetry caused by \neither a random fluctuation or by parity violation. \nAlthough the observed chiral structures are the minimum energy \nconfigurations as predicted by parity violation, there is an even \nprobability that the observed handedness could simply be the result \nof a random fluctuation which was amplified by the same mechanisms. \nIn order to perform an example calculation, we take a random \nfluctuation of the size predicted by parity violation, which is \nof the order of $10^{-17}$, as suggested by Kondepudi \\& Nelson \n\\cite{kon-pla}. Our goal is now to find the time taken to \namplify this to an ${\\cal O}(1)$ (5\\%) enantiomeric excess. \n\nThe models derived in this paper, for example in Section \n\\ref{r3-a2-sec}, predict that the chiral excess grows \nexponentially in time. Assuming, from (\\ref{r3a2urate}), \nthat $\\phi(t_0)=10^{-17}$ and $\\phi(t_1)= 0.1$, \nthe timescale for the growth of this small perturbation is \n\\[ t_1 - t_0 = \\frac{1}{4\\mu\\nu} \\sqrt{\\frac{\\xi\\varrho}{\\beta}} \n\\log \\frac{10^{-1}}{10^{-17}} . 
\\] \nSince the growth of enantiomeric excess is exponential, \nit only takes 16 times as long for the perturbation to grow \nfrom $10^{-17}$ to $10^{-1}$ as from $10^{-1}$ to 1. \nHence we only need to increase our estimate of the timescale \nby one power of ten, to 100 million years. \n\nThis should be regarded as a very rough estimate, since it \nrelies on extrapolating results by many orders of magnitude. \nAlso, given the vast differences in temperature from the \nputative subzero prebiotic world to a tentative hot hydrothermal \nvent, there could easily be changes in timescale by a factor of \nseveral orders of magnitude. \n\n\\section{Conclusions}\n\\label{conc-sec}\n\\setcounter{equation}{0}\n\nAfter summarising the existing models of chiral symmetry-breaking \nprocesses we have systematically derived a model in which chiral \nclusters compete for achiral material through aggregation and \nfragmentation. The model is closed, in that there is no input of mass \ninto the system, although the form of the aggregation and \nfragmentation rate coefficients means that there is an input of energy, \nkeeping the system away from equilibrium. Furthermore, there is no \ndirect interaction of clusters of opposite handedness; rather, \njust through a simple competition for achiral substrate, the system \ncan spontaneously undergo chiral symmetry-breaking. This model \nhelps explain the experimental results of Viedma \\cite{viedma} \nand Noorduin {\\em et al.}\\ \\cite{wim}. \n\nThe microscopic model originally derived has been simplified \nsuccessively to a minimalistic model, which, numerical results show, \nexhibits symmetry-breaking. Even after this reduction, the model is \nextremely complex to analyse due to the large number of cluster \nsizes retained in the model. 
Hence we construct two truncated \nmodels: one truncated at tetramers, which shows no \nsymmetry-breaking, and one at hexamers, which shows \nsymmetry-breaking under certain conditions on the parameter values. \nAlternative reductions are proposed: instead of retaining the \nconcentrations of just a few cluster sizes, we retain information \nabout the shape of the distribution, such as the number of clusters \nand the total mass of material in clusters of each handedness. \nThese reduced models are as simple to analyse as truncated models \nyet, since they more accurately account for the shape of the \nsize-distribution than a truncated model, are expected to fit \nexperimental data more easily. Of course, other \nansatzes for the shape of the size distributions could be made, \nand will lead to modified conditions for symmetry-breaking; \nhowever, we believe that the qualitative results outlined here will \nnot be contradicted by analyses of other macroscopic reductions. \n\nOne noteworthy feature of the results shown herein is that the \nsymmetry-breaking is inherently a product of the two handednesses \ncompeting for achiral material. The symmetry-breaking does not rely \non critical cluster sizes, which are a common feature of theories of \ncrystallisation, or on complicated arguments about surface area to \nvolume ratios to make the symmetric state unstable. We do not \ndeny that these aspects of crystallisation are genuine; these features \nare present in the phenomenon of crystal growth, but they are not \nthe fundamental cause of chiral symmetry-breaking. \n\nMore accurate fitting of the models to experimental data could be \nachieved if one were to fit the generalised Becker-D\\\"{o}ring model \n(\\ref{gbd1})--(\\ref{gbd3}) with realistic rate coefficients. Questions \nto address include elucidating how the number and size distribution \nat the start of the grinding influences the end state. 
For example, if one were to start with a few large right-handed crystals and many small left-handed crystals, would the system convert to entirely left- or entirely right-handed crystals? Answers to these more complex questions may rely on higher moments of the size distributions, surface-area-to-volume ratios and critical cluster nuclei sizes.

\subsection*{Acknowledgments}

I would particularly like to thank Professors Axel Brandenburg and Raphael Plasson for inviting me to an extended programme of study on homochirality at Nordita (Stockholm, Sweden) in February 2008. There I met and benefited greatly from discussions with Professors Meir Lahav, Mike McBride, Wim Noorduin, as well as many others. The models described here are a product of the stimulating discussions held there. I am also grateful for funding under EPSRC springboard fellowship EP/E032362/1.

\section{Introduction}\label{sec:intro}

\emph{Seriation} is an important ordering problem whose aim is to find the best enumeration order of a set of units, according to a given correlation function. The desired order can be characteristic of the data: a chronological order, a gradient, or any sequential structure of the data.
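To make the notion concrete, here is a minimal brute-force sketch (with invented data; this is an illustration, not part of any package discussed in this paper): given a small symmetric correlation matrix, we search for the permutation of the units that places highly correlated pairs in nearby positions.

```python
from itertools import permutations

# Invented 3-unit correlation matrix: units 0 and 2 are strongly
# correlated, units 1 and 2 weakly, units 0 and 1 not at all.
F = [[3, 0, 2],
     [0, 3, 1],
     [2, 1, 3]]

def penalty(order):
    # Sum of f_ij * (distance between i and j in the ordering)^2:
    # small when highly correlated units are placed close together.
    pos = {u: p for p, u in enumerate(order)}
    return sum(F[i][j] * (pos[i] - pos[j]) ** 2
               for i in range(len(F)) for j in range(len(F)))

best = min(permutations(range(len(F))), key=penalty)
# best places unit 2 between units 0 and 1 (or the reverse sequence,
# since a seriation order can be read in both directions).
```

Brute force is only feasible for a handful of units; the point of the spectral techniques reviewed later in the paper is to avoid this factorial search.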
The concept of seriation has been formulated in many different ways and appears in various fields, such as archaeology, anthropology, psychology, and biology~\cite{brusco06,genome,matharcheo,mirkin1984}. In this paper we use the archaeological setting as a metaphor for the seriation problem. An important aim of archaeological investigation is to date excavation sites on the basis of found objects and determine their relative chronology, i.e., a dating which indicates whether a given site chronologically precedes or follows another. In general, relative chronologies are devoid of a direction, in the sense that the units are placed in a sequence which can be read in both directions. Relative dating methods can be used where absolute dating methods, such as carbon dating, cannot be applied.

The available data are usually represented by a \emph{data matrix}, in which the rows are the archaeological units (e.g., the sites) and the columns represent the types (the archaeological finds). Each unit is characterized by the presence of certain artefacts, which are in turn classified in types. In~\cite{ps05}, the authors refer to the data matrix as either an \emph{incidence matrix} or an \emph{abundance matrix}, depending on the archaeological data representation. In the first case, the data are reported using a binary representation, i.e., the element in position $(i,j)$ is equal to $1$ if type $j$ is present in unit $i$, and $0$ otherwise. In the second case, the data matrix reports the number of objects belonging to a certain type in a given unit, or its percentage. In this paper, we will follow the usual terminology of \emph{complex networks theory} and refer to a binary representation as an \emph{adjacency matrix}, an example of which is given in Table~\ref{tab:adjacency}. More details can be found in Section~\ref{sec:mathback}.
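As a toy illustration of the two representations (invented counts, not the data of Table~\ref{tab:adjacency}), the same finds can be encoded either as an abundance matrix of counts or as a binary adjacency matrix:

```python
# Invented example: 3 units (rows) and 4 types of artefacts (columns).
# counts[i][j] = number of objects of type j found in unit i.
counts = [[2, 1, 0, 0],
          [0, 3, 4, 0],
          [0, 0, 1, 5]]

# Abundance matrix: the raw counts themselves (or, equivalently,
# each row rescaled to percentages).
abundance = counts

# Adjacency (incidence) matrix: 1 if type j is present in unit i, else 0.
adjacency = [[1 if c > 0 else 0 for c in row] for row in counts]
# -> [[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1]]
```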
If the data matrix represents types of found objects as columns and the locations (graves, pits, etc.) in which they are found as rows, we can find a chronological order for the locations by assuming that the types were produced, or were ``fashionable'', only for a limited period of time. In the light of this assumption, the task of determining a relative chronology reduces to finding an ordering of the rows and columns of the data matrix that places the nonzero entries close to the diagonal of the data matrix.

\begin{table}
\caption{Adjacency matrix for archaeological data originating from female burials at the Bornholm site, Denmark; see~\cite{ps05} and the references therein. The rows report the names of the tombs, the columns the identification codes of the found \emph{fibulae}.}
\label{tab:adjacency}
\scriptsize
\begin{center}
\begin{tabular}{rcccccccccccc}
\hline\noalign{\smallskip}
&G3&F27&S1&F26&N2&F24&P6&F25&P5&P4&N1&F23\\
\noalign{\smallskip}\hline\noalign{\smallskip}
\textsf{Mollebakken 2} & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
\textsf{Kobbea 11}&0&1&1&0&1&1&0&0&0&0&0&0\\
\textsf{Mollebakken 1}&1&1&0&1&1&0&1&1&0&0&0&0\\
\textsf{Levka 2}&0&1&1&0&1&0&0&1&1&0&0&0\\
\textsf{Grodbygard 324}&0&0&0&0&1&1&0&0&0&1&0&0\\
\textsf{Melsted 8}&0&0&1&1&0&0&1&1&0&1&0&0\\
\textsf{Bokul 7}&0&0&0&0&0&0&1&1&0&0&1&0\\
\textsf{Heslergaard 11}&0&0&0&0&0&0&0&1&0&1&0&0\\
\textsf{Bokul 12}&0&0&0&0&0&0&0&1&1&0&0&1\\
\textsf{Slamrebjerg 142}&0&0&0&0&0&0&0&0&0&1&0&1\\
\textsf{Nexo 6} &0&0&0&0&0&0&0&0&0&1&1&1\\
\noalign{\smallskip}\hline
\end{tabular}
\end{center}
\end{table}

A closely related problem is the \emph{consecutive ones problem} (C1P)~\cite{fulkerson65,or09}, whose aim is to find all the permutations of the rows of a binary matrix that place the $1$'s consecutively in each column.
If such permutations exist, then the matrix is said to have the \emph{consecutive ones property} for columns. The equivalent property for rows can be defined similarly. The problem of verifying whether a matrix possesses this property has applications in different fields, such as computational biology and the recognition of interval graphs~\cite{booth1976testing,cor98}. The connection between C1P and seriation has been investigated by Kendall in~\cite{kendall69}.

The first systematic formalization of the seriation problem was made by Petrie in 1899~\cite{petrie}, even though the term seriation had been used before in archaeology. The subject was later resumed by Brainerd and Robinson~\cite{brainerd1951place,robinson1951method}, who also proposed a practical method for its solution, and by Kendall~\cite{kendall63,kendall69,kendall70}. Nice reviews on seriation are given in~\cite{liiv2010},~\cite{ps05}, and~\cite{bl02}, where its application in stratigraphy is discussed, while~\cite{bb15,matharcheo} describe other applications of mathematics in archaeology.

Given the variety of applications, some software packages have been developed in the past to manipulate seriation data.
Some of these packages have not undergone regular maintenance and do not seem to be easily usable on modern computers, like the Bonn Archaeological Software Package (BASP) (\url{http://www.uni-koeln.de/~al001/}). A software package specifically designed for the seriation problem in bioinformatics has been developed by Caraux and Pinloche~\cite{permutsoft}, and is available for free download.

Other implementations of the spectral algorithm from~\cite{atkins1998spectral} have been discussed in~\cite{fogel2014}, in~\cite{hahsler2008}, which describes an R package available at \url{http://cran.r-project.org/web/packages/seriation/}, and in~\cite{seminaroti2016}. The paper~\cite{fogel2015} proposes an interesting method, based on quadratic programming, aimed at treating ``noisy cases''.

In this paper we present a Matlab implementation of a spectral method for the solution of the seriation problem which appeared in~\cite{atkins1998spectral}, based on the use of the Fiedler vector of the Laplacian associated to the problem, and which describes the results in terms of a particular data structure called a PQ-tree. We further develop some numerical aspects of the algorithm, concerning the detection of equal components in the Fiedler vector and the computation of the eigensystem of the Laplacian associated to a large-scale problem. We also provide a parallel version of the method. The package, named the \texttt{PQser} toolbox, also defines a data structure to store a PQ-tree and provides the Matlab functions to manipulate and visualize it. Finally, we discuss the implications of the presence of a multiple Fiedler value, an issue which has been disregarded up to now, and we illustrate some numerical experiments.

The plan of the paper is the following. Section \ref{sec:mathback} reviews the necessary mathematical background and sets up the terminology to be used in the rest of the paper.
Section~\ref{sec:pqtrees} describes the data structures used to store the solutions of the seriation problem. The spectral algorithm is discussed in Section~\ref{sec:seriation} and the special case of a multiple Fiedler value is analyzed in Section~\ref{sec:mult}. Section~\ref{sec:numexp} reports some numerical results and Section~\ref{sec:last} contains concluding remarks.

\section{Mathematical background}\label{sec:mathback}

With the aim of making this paper self-contained, we review some mathematical concepts that will be used in the following. We denote matrices by upper case roman letters and their elements by lower case double indexed letters.

Let $G$ be a simple graph formed by $n$ nodes. Each entry $f_{ij}$ of the adjacency matrix $F\in{\mathbb{R}}^{n\times n}$ associated with $G$ is taken to be the weight of the edge connecting node $i$ to node $j$. If the two nodes are not connected, then $f_{ij}=0$. A graph is unweighted if the weights are either 0 or 1. The adjacency matrix is symmetric if and only if the graph is undirected.

The (unnormalized) \emph{graph Laplacian} of a symmetric matrix $F\in{\mathbb{R}}^{n\times n}$ is the symmetric, positive semidefinite matrix
$$
L=D-F,
$$
where $D=\diag(d_1,\ldots,d_n)$ is the \emph{degree matrix}, whose $i$th diagonal element equals the sum of the weights of the edges starting from node $i$ in the undirected network defined by $F$, that is, $d_i=\sum_{j=1}^n f_{ij}$. In the case of an unweighted graph, $d_i$ is the number of nodes connected to node $i$.

Setting $\bm{e}=[1,\dots,1]^T\in{\mathbb{R}}^n$, it is immediate to observe that
$$
L\bm{e} = (D-F)\bm{e}= \bm{0},
$$
where $\bm{0}\in{\mathbb{R}}^n$ is the zero vector. Hence, $0$ is an eigenvalue of the graph Laplacian with eigenvector $\bm{e}$.

The Gershgorin circle theorem implies that all the eigenvalues are non-negative, so we order them as
$\lambda_1=0\leq\lambda_2\leq\dots\leq\lambda_n$, with corresponding eigenvectors $\bm{v}_1=\bm{e},\bm{v}_2,\dots, \bm{v}_n$. The smallest eigenvalue of $L$ with associated eigenvector orthogonal to $\bm{e}$ is called the \emph{Fiedler value}, or the \emph{algebraic connectivity}, of the graph described by $F$. The corresponding eigenvector is the \emph{Fiedler vector}~\cite{fiedler1973algebraic,fiedler1975property,fiedler1989laplacian}.

Alternatively, the Fiedler value may be defined by
$$
\min_{\bm{x}^{T}\bm{e}=0,\ \bm{x}^{T}\bm{x}=1}\bm{x}^{T}L\bm{x}.
$$
Then, a Fiedler vector is any vector $\bm{x}$ that achieves the minimum.

From the Kirchhoff matrix-tree theorem it follows that the Fiedler value is zero if and only if the graph is not connected~\cite{de2007old}; in particular, the number of times 0 appears as an eigenvalue of the Laplacian is the number of connected components of the graph. So, if the considered adjacency matrix is irreducible, that is, if the graph is connected, the Fiedler vector corresponds to the first non-zero eigenvalue of the Laplacian matrix.

A \emph{bipartite graph} $G$ is a graph whose vertices can be divided into two disjoint sets $U$ and $V$ containing $n$ and $m$ nodes, respectively, such that every edge connects a node in $U$ to one in $V$.

In our archaeological metaphor, the node sets $U$ (units) and $V$ (types) represent the excavation sites and the found artifacts, respectively. Then, as already outlined in the Introduction, the associated adjacency matrix $A$, of size $n\times m$, is obtained by setting $a_{ij}=1$ if unit $i$ contains objects of type $j$, and 0 otherwise. If the element $a_{ij}$ takes a value different from 1, we consider it as a weight indicating the number of objects of type $j$ contained in unit $i$, or their percentage.
In this case, we denote $A$ as the \emph{abundance matrix}.

The first mathematical definition of seriation was based on the construction of a symmetric matrix $S$ known as the \emph{similarity matrix}~\cite{brainerd1951place,robinson1951method}, where $s_{ij}$ describes, in some way, the likeness of the nodes $i,j\in U$. One possible definition is through the product $S=AA^T$, where $A$ is the adjacency matrix of the problem. In this case, $s_{ij}$ equals the number of types shared between unit $i$ and unit $j$. The largest value on each row is the diagonal element, which reports the number of types associated to each unit. By permuting the rows and columns of $S$ in order to cluster the largest values close to the main diagonal, one obtains a permutation of the corresponding rows of $A$ that places the units similar in types closer together. It is worth noting that this operation of permuting rows and columns of $S$ is not uniquely defined.

The \emph{Robinson method}~\cite{robinson1951method} is a statistical technique based on a different similarity matrix. It relies on the concept that each type of artifact used in a certain period eventually decreases in popularity until it is forgotten. This method is probably the first documented example of a practical procedure based on the use of the similarity matrix, so its description is interesting from a historical perspective.

The method, starting from an abundance matrix $A \in {\mathbb{R}}^{n \times m}$ whose entries are in percentage form (the sum of each row is 100), computes the similarity matrix $S$ by a particular rule, leading to a symmetric matrix of order $n$ with entries between $0$ (rows with no types in common) and $200$, which corresponds to units containing exactly the same types. Then, the method searches for a permutation matrix $P$ such that $PSP^T$ has its largest entries as close as possible to the main diagonal.
The same permutation is applied to the rows of the data matrix $A$ to obtain a chronological order for the archaeological units. Since, as already remarked, the sequence can be read in both directions, external information must be used to choose an orientation.

The procedure for finding a permutation matrix $P$ is not uniquely specified. One way to deal with it is given by the so-called \emph{Robinson's form}, which places larger values close to the main diagonal, and lets off-diagonal entries be nonincreasing moving away from the main diagonal. More precisely, a symmetric matrix $S$ is in Robinson's form, or is an R-matrix, if and only if
\begin{eqnarray}
s_{ij}\leqslant s_{ik}, \quad \text{if } j\leqslant k\leqslant i,
\label{rmatrix1} \\
s_{ij}\geqslant s_{ik}, \quad \text{if } i\leqslant j\leqslant k.
\label{rmatrix2}
\end{eqnarray}
A symmetric matrix is pre-$R$ if and only if there exists a simultaneous permutation of its rows and columns which transforms it into Robinson's form, so it corresponds to a well-posed ordering problem. For other interesting references on R-matrices and their detection, see~\cite{chepoi1997,laurent2017lex,laurent2017,prea2014,seston2008}.

\section{PQ-trees}\label{sec:pqtrees}

A \emph{PQ-tree} is a data structure introduced by Booth and Lueker~\cite{booth1976testing} to encode a family of permutations of a set of elements, and to solve problems connected with finding admissible permutations according to specific rules.

A PQ-tree $T$ over a set $U = \{u_1,u_2,\dots,u_n\}$ is a rooted tree whose leaves are elements of $U$ and whose internal (non-leaf) nodes are distinguished as either P-nodes or Q-nodes.
The only difference between them is the way in which their children are treated: for a Q-node only one order and its reverse are allowed, whereas in the case of a P-node all possible permutations of the children leaves are permitted. The root of the tree can be either a P-node or a Q-node.

We will represent a P-node graphically by a circle, and a Q-node by a rectangle. The leaves of $T$ will be displayed as triangles, and labeled by the elements of $U$. The frontier of $T$ is one possible permutation of the elements of $U$, obtained by reading the labels of the leaves from left to right.

We recall two definitions from~\cite{booth1976testing}.

\begin{definition}\label{def:proper}
A PQ-tree is \textit{proper} when the following conditions hold:
\begin{itemize}
\item[i)] every $u_{i}\in U$ appears precisely once as a leaf;
\item[ii)] every P-node has at least two children;
\item[iii)] every Q-node has at least three children.
\end{itemize}
\end{definition}

As we observed above, the only difference between a P-node and a Q-node is the treatment of their children, and in the case of exactly two children there is no real distinction between a P-node and a Q-node. This justifies the second and third conditions of Definition~\ref{def:proper}.

\begin{definition}\label{def:equiv}
Two PQ-trees are said to be \textit{equivalent} if one can be transformed into the other by applying a sequence of the following two transformations:
\begin{itemize}
\item[i)] arbitrarily permute the children of a P-node;
\item[ii)] reverse the children of a Q-node.
\end{itemize}
\end{definition}

A PQ-tree represents permutations of the elements of a set through admissible reorderings of its leaves. Each transformation in Definition~\ref{def:equiv} specifies an admissible reordering of the nodes within a PQ-tree. For example, a tree with a single P-node represents the equivalence class of all permutations of the elements of $U$, while
a tree with a single\nQ-node represents both the left-to-right and right-to-left orderings of the\nleaves.\nA tree with a mixed P-node and Q-node structure represents the equivalence\nclass of a constrained permutation, where the exact structure of the tree\ndetermines the constraints.\nFigure~\\ref{fig:pqtree} displays a PQ-tree and the admissible permutations\nit represents.\n\n\\begin{figure}\n\\begin{minipage}{.52\\textwidth}\n\\includegraphics[width=\\textwidth]{pqtree}\n\\end{minipage}\n\\begin{minipage}{.45\\textwidth}\n\\tiny\n\\begin{center}\n\\begin{tabular}{cccccc}\n\\hline\\noalign{\\smallskip}\n1 & 2 & 3 & 4 & 5 & 6 \\\\\n1 & 2 & 3 & 6 & 5 & 4 \\\\\n1 & 3 & 2 & 4 & 5 & 6 \\\\\n1 & 3 & 2 & 6 & 5 & 4 \\\\\n2 & 1 & 3 & 4 & 5 & 6 \\\\\n2 & 1 & 3 & 6 & 5 & 4 \\\\\n2 & 3 & 1 & 4 & 5 & 6 \\\\\n2 & 3 & 1 & 6 & 5 & 4 \\\\\n3 & 1 & 2 & 4 & 5 & 6 \\\\\n3 & 1 & 2 & 6 & 5 & 4 \\\\\n3 & 2 & 1 & 4 & 5 & 6 \\\\\n3 & 2 & 1 & 6 & 5 & 4 \\\\\n4 & 5 & 6 & 1 & 2 & 3 \\\\\n4 & 5 & 6 & 1 & 3 & 2 \\\\\n4 & 5 & 6 & 2 & 1 & 3 \\\\\n4 & 5 & 6 & 2 & 3 & 1 \\\\\n4 & 5 & 6 & 3 & 1 & 2 \\\\\n4 & 5 & 6 & 3 & 2 & 1 \\\\\n6 & 5 & 4 & 1 & 2 & 3 \\\\\n6 & 5 & 4 & 1 & 3 & 2 \\\\\n6 & 5 & 4 & 2 & 1 & 3 \\\\\n6 & 5 & 4 & 2 & 3 & 1 \\\\\n6 & 5 & 4 & 3 & 1 & 2 \\\\\n6 & 5 & 4 & 3 & 2 & 1 \\\\\n\\noalign{\\smallskip}\\hline\n\\end{tabular}\n\\end{center}\n\\end{minipage}\n\\caption{On the left, a PQ-tree over the set $U=\\{1,\\ldots,6\\}$; on the right,\nthe 24 admissible permutations encoded in the tree.}\n\\label{fig:pqtree}\n\\end{figure}\n\nThe PQ-tree data structure has been exploited in a variety of applications,\nfrom archaeology and chronology reconstruction~\\cite{atkins1998spectral} to\nmolecular biology with DNA mapping and sequence\nassembly~\\cite{greenberg1995physical}.\nThe first problem to which it was applied is the consecutive ones property\n(C1P) for matrices~\\cite{booth1976testing}, mentioned in\nSection~\\ref{sec:intro}.\n\nGiven a pre-R matrix, the spectral algorithm 
from~\cite{atkins1998spectral}, which will be discussed in Section~\ref{sec:seriation}, constructs a PQ-tree describing the set of all the permutations of rows and columns that lead to an R-matrix.

\subsection{Implementation of PQ-trees}

The \texttt{PQser} toolbox for Matlab is available as a compressed archive that can be downloaded from the authors' webpages (see, e.g., \url{http://bugs.unica.it/~gppe/soft/}). By uncompressing it, the directory \texttt{PQser} will be created. It must be added to the Matlab search path, either by the command \ttt{addpath} or using the graphical interface menus. The sub-directory \texttt{demo} contains a tutorial for the toolbox and the scripts which were used to construct the examples reported in the paper. The installation procedure and the toolbox content are described in detail in the README.txt file, which can be found in the main directory.

In the \texttt{PQser} toolbox, a PQ-tree $T$ is a \ttt{struct} variable (i.e., a record) composed of two fields. The first field, \ttt{T.type}, specifies the type of the node, i.e., P, Q, or a leaf, in the case of a trivial tree. The second field, \ttt{T.value}, is a vector which provides a list of PQ-trees, recursively defined. In the case of a leaf, this field contains the index of the unit it represents.

For example, the graph in Figure~\ref{fig:pqtree} was obtained by the following piece of code
\begin{quote}
\footnotesize
\begin{verbatim}
v(1) = pnode([1 2 3]);
v(2) = qnode([4 5 6]);
T = pnode(v);
pqtreeplot(T)
\end{verbatim}
\end{quote}
the resulting data structure for the PQ-tree is
\begin{quote}
\footnotesize
\begin{verbatim}
T =
  struct with fields:
     type: 'P'
    value: [1x2 struct]
\end{verbatim}
\end{quote}
and the permutations encoded in $T$ are computed by
\begin{quote}
\footnotesize
\begin{verbatim}
perms_matrix = pqtreeperms(T)
\end{verbatim}
\end{quote}
These instructions are
contained in the script \texttt{graf1.m}, in the \texttt{demo} sub-directory.

\begin{table}[htb]
\footnotesize
\centering
\begin{tabular}{ll}
\hline
\ttt{pnode} & create a P-node \\
\ttt{qnode} & create a Q-node \\
\ttt{lnode} & create a leaf \\
\ttt{mnode} & create an M-node \\
\ttt{pqtreeplot} & plot a PQ-tree \\
\ttt{pqtreeNperm} & number of admissible permutations in a PQ-tree \\
\ttt{pqtreeperms} & extract all admissible permutations from a PQ-tree \\
\ttt{pqtree1perm} & extract one admissible permutation from a PQ-tree \\
\ttt{pqtreegetnode} & extract a subtree from a PQ-tree \\
\ttt{pqtreenodes} & convert a PQ-tree to Matlab \ttt{treeplot} format \\
\hline
\end{tabular}
\caption{Functions in the \texttt{PQser} toolbox devoted to the manipulation of PQ-trees.}
\label{tab:pqfuncs}
\end{table}

The functions intended for creating and manipulating a PQ-tree are listed in Table~\ref{tab:pqfuncs}. The function \ttt{mnode} creates an additional type of node, an M-node, which is intended to deal with multiple Fiedler values; we will comment on it in Section~\ref{sec:mult}. The functions \ttt{pqtreegetnode} and \ttt{pqtreenodes} are utilities for \ttt{pqtreeplot}; they are not intended to be called directly by the user. All the functions are documented via the usual Matlab \ttt{help} command, e.g.,
\begin{quote}
\footnotesize
\begin{verbatim}
help pnode
help pqtreeplot
\end{verbatim}
\end{quote}

As an example, we report in Algorithm~\ref{alg:pqtreeNperm} the structure of \ttt{pqtreeNperm}, a function which returns the number $N$ of all the permutations contained in the tree whose root $T$ is given in input. In the particular case of a leaf, only one permutation is possible (lines~\ref{l1}--\ref{l2}). Otherwise, we consider the vector $\bm{c}$ of size $k$, containing the children nodes of the root of $T$ (line~\ref{l5}). The algorithm calls itself recursively on
each component of $\\bm{c}$\n(line~\\ref{l8}).\nIn the case of a Q-node the number of permutations is doubled, because only one\nordering and its reverse are admissible, whereas for a P-node the number is\nmultiplied by the factorial of $k$, since in this case all the possible\npermutations of the children are allowed.\nThe same procedure is applied to an M-node; see Section~\\ref{sec:mult} for\ndetails.\n\n\\begin{algorithm}[!ht]\n\\begin{algo}\n\\STATE \\Function $N = \\pqtreeNperm(T)$\n\\IF $T$ is a leaf\n\\label{l1}\n\\STATE $N =1$\n\\label{l2}\n\\ELSE \n\t\\STATE $\\bm{c}=T\\ttt{.value}$, $k=\\length(\\bm{c})$\n\\label{l5}\n\t\\STATE $p=1$\n\t\\FOR $i = 1,\\dots,k$\n\t\t\\STATE $p = p*\\pqtreeNperm(c_i)$\n\\label{l8}\n\t\\ENDFOR\n\t\\IF $T$ is a Q-node\n\t\t\\STATE $N = 2*p$\n\t\\ELSE \n\t\t\\STATE $N = \\factorial(k)*p$\n\t\\ENDIF\n\\ENDIF\t \n\\end{algo}\n\\caption{Compute the number of admissible permutations in a PQ-tree.}\n\\label{alg:pqtreeNperm}\n\\end{algorithm}\n\nThe toolbox includes an interactive graphical tool for exploring a PQ-tree $T$.\nAfter displaying $T$ by \\ttt{pqtreeplot}, it is possible to extract a subtree\nby clicking on one node with the left mouse button. 
In this case, the corresponding subtree is extracted, plotted in a new figure, and saved to the variable \ttt{PQsubtree} in the workspace. This feature is particularly useful when analyzing a large PQ-tree. The function \ttt{pqtreeplot} allows the user to set some attributes of the plot; see the help page.

\section{A spectral algorithm for the seriation problem}\label{sec:seriation}

In this section we briefly review the spectral algorithm for the seriation problem introduced in~\cite{atkins1998spectral}, and describe our implementation.

Given the set of units $U=\{u_1,u_2,\dots,u_n\}$, we will write $i\preccurlyeq j$ if $u_i$ precedes $u_j$ in a chosen ordering. In~\cite{atkins1998spectral}, the authors consider a symmetric bivariate \emph{correlation function} $f$ reflecting the desire for units $i$ and $j$ to be close to each other in the sought sequence. The goal is to find all index permutation vectors ${\boldsymbol{\pi}}=(\pi_1,\ldots,\pi_n)^T$ such that
\begin{equation}\label{fperm}
\pi_i\preccurlyeq\pi_j\preccurlyeq\pi_k \quad \iff \quad
f(\pi_i,\pi_j)\geq f(\pi_i,\pi_k) \quad \text{and} \quad
f(\pi_j,\pi_k)\geq f(\pi_i,\pi_k).
\end{equation}
It is natural to associate to such a correlation function a real symmetric matrix $F$, whose entries are defined by $f_{ij}=f(i,j)$. This matrix plays exactly the role of the similarity matrix $S$ discussed in Section~\ref{sec:mathback}, as the following theorem states.

\begin{theorem}\label{theo:fs}
A matrix $F$ is an R-matrix if and only if \eqref{fperm} holds.
\end{theorem}

\begin{proof}
Let us assume that the permutation ${\boldsymbol{\pi}}$ which realizes \eqref{fperm} has already been applied to the units.
Then, since a permutation of the units corresponds to a simultaneous permutation of the rows and columns of the matrix $F$, we obtain
$$
i\leq j\leq k \quad \iff \quad
f_{ij}\geq f_{ik} \quad \text{and} \quad
f_{jk}\geq f_{ik}.
$$
The first inequality $f_{ij}\geq f_{ik}$ is exactly \eqref{rmatrix2}. Taking into account the symmetry of $F$ and cyclically permuting the indexes, from the second inequality we get
$$
j\leq k\leq i \quad \iff \quad
f_{ij}\leq f_{ik},
$$
which corresponds to \eqref{rmatrix1}.
\end{proof}

If a seriation data set is described by an adjacency (or abundance) matrix $A$, we will set $F=AA^T$. If $F$ is pre-$R$ (see Section~\ref{sec:mathback}), there exists a rows/columns permutation that takes it to $R$-form. Unfortunately, in general this property cannot be stated in advance; it can be ascertained, e.g., after applying the algorithm discussed in this section; see below.

The authors' approach in~\cite{atkins1998spectral}, see also~\cite{estrada2010network}, is to consider the minimization of the following penalty function
$$
h(\bm{x}) = \frac{1}{2}\sum_{i,j=1}^{n} f_{ij}(x_i-x_j)^2,
\quad \bm{x}\in {\mathbb{R}}^n,
$$
whose value is small for a vector $\bm{x}$ such that each pair $(i,j)$ of highly correlated units is associated to components $x_i$ and $x_j$ with close values. Once the minimizing vector $\bm{x}_{\min}$ is computed, it is sorted in either nonincreasing or nondecreasing order, yielding $\bm{x}_{\boldsymbol{\pi}}=(x_{\pi_1},\ldots,x_{\pi_n})^T$. The permutation of the units ${\boldsymbol{\pi}}$ realizes \eqref{fperm}.

Note that $h$ does not have a unique minimizer, since its value does not change if a constant is added to each of the components $x_i$ of the vector $\bm{x}$. In order to ensure uniqueness and to rule out the trivial solution, it is necessary to impose two suitable constraints on the components of the
vector $\mathbf{x}$. The resulting minimization problem is:
$$
\begin{aligned}
&\text{minimize} & &
h(\bm{x}) = \frac{1}{2}\sum_{i,j=1}^{n} f_{ij}(x_i-x_j)^2 \\
&\text{subject to} & & \sum_i x_i = 0 \quad \text{and} \quad \sum_i x_i^2 = 1.
\end{aligned}
$$

The solution to this minimization problem may be obtained from the Fiedler vector of the Laplacian $L$ of the correlation matrix $F$. Letting $D = \diag(d_i)$ be the degree matrix, with $d_i=\sum_{j=1}^n f_{ij}$, it is immediate to observe that
$$
h(\bm{x}) = \frac{1}{2}\sum_{i,j=1}^{n} f_{ij}(x_i^2+x_j^2-2x_ix_j)
= \bm{x}^T D \bm{x} - \bm{x}^T F \bm{x}.
$$
This shows that the previous minimization problem can be rewritten as
\begin{eqnarray*}
\min_{\|\mathbf{x}\|=1,\ \bm{x}^T\bm{e} = 0}
\mathbf{x}^TL\mathbf{x}
\end{eqnarray*}
where $L=D-F$. The constraints require $\mathbf{x}$ to be a unit vector orthogonal to $\mathbf{e}$. Since $L$ is symmetric, all the eigenvectors except $\bm{e}$ satisfy the constraints. Consequently, a Fiedler vector is a solution to the constrained minimization problem.

In fact, Theorem~3.2 from~\cite{atkins1998spectral} proves that an R-matrix has a monotone Fiedler vector, while Theorem~3.3, under suitable assumptions, implies that a reordering of the Fiedler vector takes a pre-R matrix to R-form. This confirms that the problem is well posed only when $F$ is pre-R. Nevertheless, real data sets may be inconsistent, in the sense that they do not necessarily lead to pre-R similarity matrices. In such cases, it may be useful to construct an approximate solution to the seriation problem, and sorting the entries of the Fiedler vector generates an ordering that tries to keep highly correlated elements close to each other. This is relevant because techniques based on Fiedler vectors are being used for the solution of different
sequencing\nproblems~\\cite{barnard1995spectral,greenberg1995physical,higham2007,juvan1992optimal}. \nIn particular, they are employed in complex network analysis, e.g., for\ncommunity detection and partitioning of graphs~\\cite{estrada2012,estrada2015}.\n\nThe algorithm proposed in~\\cite{atkins1998spectral} is based upon the above\nidea, and uses a PQ-tree to store the permutations of the units that produce a\nsolution to the seriation problem; its implementation is described in\nAlgorithm~\\ref{alg:spectrsort}.\n\n\\begin{algorithm}[!ht]\n\\begin{algo}\n\\STATE \\Function $T = \\spectrsort(F,U)$\n\\STATE $n=$ row size of $F$\n\\STATE $\\alpha = \\min_{i,j} f_{i,j}$, \\If $\\alpha\\neq 0$, \n\t$\\bm{e}=(1,\\ldots,1)^T$, $F=F-\\alpha \\bm{e}\\bm{e}^{T}$, \\End\n\\label{line2}\n\\STATE call $\\getconcomp$ to construct the connected components \n$\\{F_1,\\dots,F_k\\}$ of $F$\n\\label{line3}\n\\STATE \\phantom{XXX} and the corresponding index sets $U=\\{U_1,\\dots,U_k\\}$ \n\\label{line4}\n\\IF {$k>1$}\n\t\\FOR {$j=1,\\dots,k$}\n\\label{line6}\n\t\t\\STATE $v(j) = \\spectrsort(F_j,U_j)$\n\t\\ENDFOR\n\t\\STATE $T = \\pnode(v)$\n\\label{line9}\n\\ELSE \n\t\\IF {$n=1$}\n\\label{line11}\n\t\t\\STATE $T=\\lnode(U)$\n\t\\ELSEIF {$n=2$}\n\t\t\\STATE $T=\\pnode(U)$\n\t\\ELSE\n\\label{line15}\n\t\t\\STATE $L=$ Laplacian matrix of $F$\n\\label{line16}\n\t\t\\STATE compute (part of) the eigenvalues and eigenvectors of $L$\n\\label{line17}\n\t\t\\STATE determine multiplicity $n_F$ of the Fiedler value\n\t\t\taccording to a tolerance $\\tau$\n\t\t\\IF {$n_F = 1$}\n\t\t\t\\STATE $\\bm{x}=$ sorted Fiedler vector\n\t\t\t\\STATE $t$ number of distinct values in $\\bm{x}$\n\t\t\t\taccording to a tolerance $\\tau$\n\\label{line19}\n\t\t\t\\FOR {$j=1,\\dots,t$}\n\t\t\t\t\\STATE $u_j$ indices of elements in $\\bm{x}$ \n\t\t\t\t\twith value $x_j$\n\t\t\t\t\\IF $u_j$ has just one element\n\t\t\t\t\t\\STATE $v_j=\\lnode(u_j)$\n\\label{line26}\n\t\t\t\t\\ELSE \n\t\t\t\t\t\\STATE $v(j) = 
\spectrsort(F(u_j,u_j),U(u_j,u_j))$\n\label{line28}\n\t\t\t\t\ENDIF\n\t\t\t\ENDFOR\n\t\t\t\STATE $T = \qnode(v)$\n\t\t\ELSE\n\t\t\t\STATE $T=\mnode(U)$\n\label{line33}\n\t\t\ENDIF\n\t\ENDIF\n\ENDIF\n\end{algo}\n\caption{Spectral sort algorithm.}\n\label{alg:spectrsort}\n\end{algorithm}\n\nThe algorithm starts by translating all the entries of the correlation matrix so\nthat the smallest is 0, i.e.,\n\begin{equation}\label{trans}\n\tilde{F}=F-\alpha \bm{e}\bm{e}^{T}, \qquad \alpha = \min_{i,j} f_{ij};\n\end{equation}\nsee line~\ref{line2} of Algorithm~\ref{alg:spectrsort}.\nThis is justified by the fact that $F$ and $\tilde{F}$ have the same Fiedler\nvectors and that, if $F$ is an irreducible R-matrix, such a translation ensures\nthat the Fiedler value is a simple eigenvalue of\n$L$~\cite[Lemma~4.1~and~Theorem~4.6]{atkins1998spectral}.\nOur software allows the user to disable this procedure (see\nTable~\ref{tab:opts} below), since the user may decide to\nsuitably preprocess the similarity matrix in order to reduce the\ncomputational load.\nIndeed, the translation procedure is repeated each time the algorithm calls\nitself recursively.\n\n\begin{algorithm}[!ht]\n\begin{algo}\n\STATE \Function $U = \getconcomp(F)$\n\STATE preallocate the cell-array $U$, $chlist=$empty vector\n\STATE $root=$\{node 1\}, $list=root$, $n=$row size of $F$\n\STATE $i=0$, $flag=true$ (logical variable)\n\WHILE $flag$\n\STATE $i=i+1$\n\STATE $list=\graphvisit(root,list)$\n\STATE $U\{i\}=list$\n\STATE update $chlist$ by adding the nodes in $list$ and sort the vector\n\STATE $flag=true$ if the number of elements in $chlist$ is different from $n$\n\STATE \phantom{XXX} otherwise $flag=false$\n\IF $flag$\n\t\STATE choose the $root$ for a new connected component\n\t\IF there are no connected components left\n\t\t\STATE exit\n\t\ENDIF\n\t\STATE $list=root$\n\ENDIF\n\ENDWHILE\n\end{algo}\n\caption{Detect the connected components of a 
graph.}\n\\label{alg:getconcomp}\n\\end{algorithm}\n\nIf the matrix $F$ is reducible, then the seriation problem can be\ndecoupled~\\cite[Lemma~4.2]{atkins1998spectral}.\nLines~\\ref{line3}--\\ref{line4} of the algorithm detect the irreducible blocks\nof the correlation matrix by using the function \\textsf{getconcomp.m}, which\nalso identifies the corresponding index sets.\nThe function, described in Algorithm~\\ref{alg:getconcomp}, constructs a\n\\emph{cell array} containing the indices which identify each connected\ncomponent of a graph.\nIt calls the function \\textsf{graphvisit.m}, which visits a graph starting from\na chosen node; see Algorithm~\\ref{alg:graphvisit}.\nNote that these two functions, in order to reduce the stack consumption due to\nrecursion, use a global variable to store the correlation matrix.\n\n\\begin{algorithm}[!ht]\n\\begin{algo}\n\\STATE \\Function $list = \\graphvisit(root,list)$\n\\STATE construct the list $l$ of the indices of the nodes connected to the root\n\\STATE initialize an empty list $nlist$\n\\STATE find the elements of $l$ which are not in $list$\n\\STATE add the new elements to $list$ and to $nlist$\n\\IF $nlist$ is not empty\n\t\\STATE sort $list$\n\t\\FOR each node $i$ in $nlist$\n\t\t\\STATE $list=\\graphvisit(nlist(i),list)$\n\t\\ENDFOR\n\\ENDIF\n\\end{algo}\n\\caption{Visit a graph starting from a node.}\n\\label{alg:graphvisit}\n\\end{algorithm}\n\nIf more than one connected component is found, then the function calls itself\non each component, and stores the returned lists of nodes as children of\na P-node (lines \\ref{line6}--\\ref{line9}).\nIf the matrix is irreducible, the dimension $n$ of the matrix is\nconsidered (lines~\\ref{line11}--\\ref{line15}).\nThe cases $n=1,2$ are trivial.\nIf $n>2$, the Laplacian matrix $L$ is computed, as well as the Fiedler value\nand vector (lines~\\ref{line16}--\\ref{line17}). 
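To make this step concrete, here is a simplified Python sketch of the small scale branch: build the Laplacian $L=D-F$, compute its eigendecomposition, estimate the multiplicity of the Fiedler value with a tolerance, and sort the Fiedler vector. This is an illustration only, not the toolbox's Matlab code: it uses a dense eigendecomposition, assumes $F$ symmetric and irreducible, and omits the recursion on tied entries.

```python
import numpy as np

def fiedler_permutation(F, tau=1e-8):
    """Sort the entries of the Fiedler vector of the Laplacian L = D - F.

    Returns the sorting permutation and the estimated multiplicity of the
    Fiedler value (eigenvalues closer than tau are considered equal).
    Simplified sketch: dense eigendecomposition only, no recursion on
    tied entries, F assumed symmetric and irreducible.
    """
    F = np.asarray(F, dtype=float)
    L = np.diag(F.sum(axis=1)) - F                   # graph Laplacian of F
    w, V = np.linalg.eigh(L)                         # eigenvalues in ascending order
    n_F = int(np.sum(np.abs(w[1:] - w[1]) < tau))    # multiplicity of the Fiedler value
    perm = np.argsort(V[:, 1])                       # sort the Fiedler vector
    return perm, n_F

# A small R-matrix, randomly permuted and then reordered again:
R = np.array([[4., 3., 2., 1.],
              [3., 4., 3., 2.],
              [2., 3., 4., 3.],
              [1., 2., 3., 4.]])
p = np.array([2, 0, 3, 1])                 # an arbitrary permutation
Fp = R[np.ix_(p, p)]                       # a pre-R version of R
perm, n_F = fiedler_permutation(Fp)
recovered = Fp[np.ix_(perm, perm)]         # back to R-form (up to reversal)
```

Here the Fiedler value is simple (`n_F` is 1) and `recovered` coincides with $R$; since this particular $R$ is symmetric and persymmetric, the reversed ordering produces the same matrix.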
\nDepending on whether the matrix is ``small'' or ``large'', different algorithms are\nused.\nFor a small scale problem, the full spectral decomposition of the Laplacian is\ncomputed by the \ttt{eig} function of Matlab.\nFor a large scale problem, only a small subset of the eigenvalues and\neigenvectors is evaluated using the \ttt{eigs} function, which is based on a\nKrylov space projection method.\nThe \texttt{PQser} toolbox computes by default the eigenpairs corresponding to the\nthree eigenvalues of smallest magnitude, since they are sufficient to\nunderstand if the Fiedler value is simple or multiple, but the default value\ncan be modified.\nThe choice between the two approaches is performed automatically, and it may be\ninfluenced by the user; see Table~\ref{tab:opts} in Section~\ref{sec:implser}.\n\nThen, the algorithm determines the multiplicity of the Fiedler value according\nto a given tolerance.\nIf the Fiedler value is a simple eigenvalue of $L$, the algorithm sorts the\nelements of the current list according to the reordering of the Fiedler vector\nand stores them as the children of a Q-node.\nIf two or more values of the Fiedler vector are repeated, the function invokes\nitself recursively (line~\ref{line28}), in accordance\nwith~\cite[Theorem~4.7]{atkins1998spectral}; otherwise, the corresponding\nnode becomes a leaf (line~\ref{line26}).\nIn our implementation we introduce a tolerance $\tau$ to distinguish between ``equal''\nand ``different'' numbers: $a$ and $b$ are considered ``equal'' if\n$|a-b|<\tau$. 
The default value for $\tau$ is $10^{-8}$.\n\nIn the case of a multiple Fiedler value, the algorithm conventionally\nconstructs an ``M-node'' (line~\ref{line33}).\nThis new type of node has been introduced in order to flag this particular\nsituation, which will be further discussed in Section~\ref{sec:mult}.\n\nAlgorithm~\ref{alg:spectrsort} produces a PQ-tree whether $F$ is a pre-R matrix\nor not.\nIf all the computed Fiedler values are simple, the starting matrix is pre-R and\nany permutation encoded in the PQ-tree will take it to R-form.\nIn the presence of a multiple Fiedler value, the problem is not well posed and\nan approximate solution is computed.\n\nThe number $N$ of all the admissible permutations generated by the algorithm\ncan be obtained by counting all the admissible boundaries of the tree.\nIn the case of a PQ-tree consisting of a single Q-node, $N$ is equal to 2,\nbecause only the left-to-right order of the child leaves and its reverse\nare possible.\nFor a single P-node, the number of all the permutations is the factorial of the\nnumber of its children.\nAn M-node is temporarily treated as a P-node, although we experimentally\nobserved that not all the permutations are admissible; this aspect is discussed\nin Section~\ref{sec:mult}.\n\n\n\subsection{Implementation of spectral seriation}\label{sec:implser}\n\nThe functions included in the \texttt{PQser} toolbox are listed in\nTable~\ref{tab:serfuncs}.\nBesides the function \ttt{spectrsort}, which implements\nAlgorithm~\ref{alg:spectrsort}, there is a parallel version of the same method,\n\ttt{pspectrsort}, which distributes the \ttt{for} loop at\nline~\ref{line6} of the algorithm among the available processing units.\nIn order to execute the function \ttt{pspectrsort}, the Parallel Computing\nToolbox must be present in the current Matlab installation.\n\n\begin{table}[htb]\n\footnotesize\n\centering\n\begin{tabular}{ll}\n\hline\n\ttt{spectrsort} & spectral sort for the 
seriation problem \\\\\n\\ttt{pspectrsort} & parallel version of \\ttt{spectrsort} \\\\\n\\ttt{fiedvecs} & compute the Fiedler vectors and values of a Laplacian \\\\\n\\ttt{getconcomp} & determine the connected components of a graph \\\\\n\\ttt{graphvisit} & visit a graph starting from a node \\\\\n\\ttt{distinct} & sort and level the elements of a vector \\\\\n\\ttt{lapl} & construct the graph Laplacian of a matrix \\\\\n\\ttt{testmatr} & test matrices for PQser \\\\\n\\hline\n\\end{tabular}\n\\caption{Functions in the \\texttt{PQser} toolbox devoted to the solution of the\nseriation problem.}\n\\label{tab:serfuncs}\n\\end{table}\n\nThe function \\ttt{testmatr} allows one to create some simple test problems.\nThe remaining functions of Table~\\ref{tab:serfuncs} are not likely to be used\nin the common use of the toolbox. \nThey are made available to the expert user, who may decide to call them\ndirectly or to modify their content.\n\n\\begin{table}[htb]\n\\footnotesize\n\\centering\n\\begin{tabular}{ll}\n\\hline\n\\ttt{tau} & tolerance used to distinguish between ``equal'' and ``different'' \\\\\n\t & values (\\ttt{spectrsort} and \\ttt{fiedvecs}, def. $10^{-8}$) \\\\\n\\ttt{translate} & applies translation \\eqref{trans} (\\ttt{spectrsort}, def. 1) \\\\\n\\ttt{lrg} & used to select small scale or large scale algorithm (\\ttt{fiedvecs}, \\\\\n\t& true if the input matrix is sparse) \\\\\n\\ttt{nlarge} & if matrix size is below this value, the small scale algorithm \\\\\n\t & is used (\\ttt{fiedvecs}, def. 1000) \\\\\n\\ttt{neig} & number of eigenpairs to be computed when the large scale \\\\\n\t & algorithm is used (\\ttt{fiedvecs}, def. 3) \\\\\n\\ttt{maxncomp} & maximum number of connected components (\\ttt{getconcomp}, def. 100) \\\\\n\\ttt{bw} & half bandwidth of test matrix (\\ttt{testmatr}, type 2 example, def. 2) \\\\\n\\ttt{spar} & construct a sparse test matrix (\\ttt{testmatr}, type 2 example, def. 
1) \\\\\n\\hline\n\\end{tabular}\n\\caption{Tuning parameters for the \\texttt{PQser} toolbox; the functions affected are\nreported in parentheses, together with the default value of each parameter.}\n\\label{tab:opts}\n\\end{table}\n\nThe toolbox has some tuning parameters, which are set to a default value, but\ncan be modified by the user. This can be done by passing to a function, as an\noptional argument, a variable of type \\ttt{struct} with fields chosen among the\nones listed in Table~\\ref{tab:opts}.\nFor example:\n\\begin{quote}\n\\footnotesize\n\\begin{verbatim}\nopts.translate = 0;\nT = spectrsort(F,opts);\n\\end{verbatim}\n\\end{quote}\napplies Algorithm~\\ref{alg:spectrsort} to a similarity matrix $F$ omitting\nthe translation process described in \\eqref{trans}.\n\nTo illustrate the use of the toolbox, we consider a similarity matrix $R$\nsatisfying the Robinson criterion \n$$\nR = \\begin{bmatrix}\n200 & 150 & 120 & 80 & 40 & 0 & 0 & 0 & 0 & 0\\\\\n150 & 200 & 160 & 120 & 80 & 40 & 0 & 0 & 0 & 0\\\\\n120 & 160 & 200 & 160 & 120 & 80 & 40 & 0 & 0 & 0\\\\\n80 & 120 & 160 & 200 & 160 & 120 & 80 & 40 & 0 & 0\\\\\n40 & 80 & 120 & 160 & 200 & 160 & 120 & 80 & 40 & 0\\\\\n0 & 40 & 80 & 120 & 160 & 200 & 160 & 120 & 80 & 40\\\\\n0 & 0 & 40 & 80 & 120 & 160 & 200 & 160 & 120 & 80\\\\\n0 & 0 & 0 & 40 & 80 & 120 & 160 & 200 & 160 & 120\\\\\n0 & 0 & 0 & 0 & 40 & 80 & 120 & 160 & 200 & 150\\\\\n0 & 0 & 0 & 0 & 0 & 40 & 80 & 120 & 150 & 200\n\\end{bmatrix}\n$$\nand the pre-R matrix obtained by applying to the rows and columns of $R$ a\nrandom permutation \n$$\nF = \\begin{bmatrix} \n200 & 0 & 0 & 150 & 120 & 0 & 160 & 40 & 0 & 80\\\\\n0 & 200 & 150 & 0 & 0 & 120 & 0 & 80 & 160 & 40\\\\ \n0 & 150 & 200 & 0 & 0 & 80 & 0 & 40 & 120 & 0\\\\ \n150 & 0 & 0 & 200 & 80 & 0 & 120 & 0 & 0 & 40\\\\ \n120 & 0 & 0 & 80 & 200 & 80 & 160 & 120 & 40 & 160\\\\ \n0 & 120 & 80 & 0 & 80 & 200 & 40 & 160 & 160 & 120\\\\ \n160 & 0 & 0 & 120 & 160 & 40 & 200 & 80 & 0 & 120\\\\ \n40 & 80 
& 40 & 0 & 120 & 160 & 80 & 200 & 120 & 160\\\\ \n0 & 160 & 120 & 0 & 40 & 160 & 0 & 120 & 200 & 80\\\\ \n80 & 40 & 0 & 40 & 160 & 120 & 120 & 160 & 80 & 200 \n\\end{bmatrix}.\n$$\n\nThe PQ-tree $T$ containing the solution of the reordering problem is\nconstructed by calling the function \\ttt{spectrsort}, which returns the\nresulting data structure:\n\\begin{quote}\n\\footnotesize\n\\begin{verbatim}\nT = spectrsort(F,opts)\nT = \n struct with fields:\n type: 'Q'\n value: [1x10 struct]\n\\end{verbatim}\n\\end{quote}\nUsing the function \\ttt{pqtreeplot} \n\\begin{quote}\n\\footnotesize\n\\begin{verbatim}\npqtreeplot(T)\n\\end{verbatim}\n\\end{quote}\nwe obtain the representation of the PQ-tree displayed in Figure~\\ref{pqtree2}.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=.52\\textwidth]{pqtree2}\n\\caption{A PQ-tree corresponding to a pre-R matrix of dimension 10.}\n\\label{pqtree2}\n\\end{center}\n\\end{figure}\n\nIn this particular case, the PQ-tree $T$ consists of just a Q-node as a root,\nso only two permutations of the leaves are allowed.\nThey can be extracted from the tree using the function \\ttt{pqtreeperms}, whose\noutput is\n\\begin{quote}\n\\footnotesize\n\\begin{verbatim}\nperms_matrix = pqtreeperms(T)\nperms_matrix =\n 4 1 7 5 10 8 6 9 2 3\n 3 2 9 6 8 10 5 7 1 4\n\\end{verbatim}\n\\end{quote} \nIn some occasions, a PQ-tree may contain a very large number of permutations.\nIn such cases, the function \\ttt{pqtree1perm} extracts just one of the\npermutations, in order to apply it to the rows and columns of the matrix $F$:\n\\begin{quote}\n\\footnotesize\n\\begin{verbatim}\nseq = pqtree1perm(T);\nAR = F(seq,seq);\n\\end{verbatim}\n\\end{quote} \nSince $F$ is pre-R, we clearly reconstruct the starting similarity matrix $R$.\n\nThis experiment is contained in the script \\texttt{graf2.m}, in the\n\\texttt{demo} sub-directory.\nThe script \\texttt{tutorial.m}, in the same directory, illustrates the use of\nthe toolbox on other numerical 
examples.\n\n\n\section{The case of a multiple Fiedler value}\label{sec:mult}\n\nIn this section we discuss the case where the Fiedler value is a multiple root\nof the characteristic polynomial of the Laplacian $L$.\nWhen this happens, the eigenspace corresponding to the smallest nonzero\neigenvalue of $L$ has dimension larger than one, so there is no uniqueness in\nthe choice of the Fiedler vector.\n\nWe conjecture that sorting the entries of a Fiedler vector, that is, of any\nvector in the eigenspace of the Fiedler value, does not necessarily lead to all\nthe $n!$ possible index permutations, where $n$ is the number of units.\nWe observed that there may be some constraints that limit the number of\npermutations deriving from the Fiedler vector, and this number does not appear\nto be related to the multiplicity of the Fiedler value by a simple formula.\nWe will illustrate this issue by a numerical experiment.\n\nAs we did not find any reference to this problem in the literature, we plan\nto study it in a subsequent paper.\nThis is the reason why the \texttt{PQser} toolbox conventionally associates an\n\emph{M-node} with the presence of a multiple Fiedler value.\nIn the software, the new type of node is temporarily treated as a P-node.\nThis leaves open the possibility of implementing the correct treatment of this\ncase, once the problem is understood.\n\n\n\begin{figure}[hbt]\n\begin{center}\n\begin{tikzpicture}[->,>=stealth',auto,node distance=1cm,\n\ton grid,semithick, every state\/.style={fill=cyan!20!white,draw=none,\n\tcircular drop shadow,text=black,inner sep=0pt}]\n\t\tiny\n\node[state](A){1};\n\node[state](B)[below=of A]{2};\n\node[state](C)[below=of B]{3};\n\node[state](D)[below=of C]{4};\n\node[state](E)[below=of D]{5};\n\node[state,fill=red!20!white,node distance=4cm](L)[right=of A]{1};\n\node[state,fill=red!20!white,node distance=4cm](M)[right=of B]{2};\n\node[state,fill=red!20!white,node distance=4cm](N)[right=of 
C]{3};\n\node[state,fill=red!20!white,node distance=4cm](O)[right=of D]{4};\n\node[state,fill=red!20!white,node distance=4cm](P)[right=of E]{5};\n\path (A) edge (L);\n\path (A) edge (M);\n\path (B) edge (M);\n\path (B) edge (N);\n\path (C) edge (N);\n\path (C) edge (O);\n\path (D) edge (O);\n\path (D) edge (P);\n\path (E) edge (P);\n\path (E) edge (L);\n\end{tikzpicture}\n\caption{The \emph{cycle} seriation problem; the units are on the left, the\ntypes on the right.}\n\label{fig:cycle}\n\end{center}\n\end{figure}\n\nHere we present a simple example to justify our conjecture.\nLet us consider the seriation problem described by the bipartite graph depicted\nin Figure~\ref{fig:cycle}.\nThe nodes on the left are the units, e.g., the excavation sites; on the right\nthere are the types, which may be seen as the archaeological findings. The\nrelationships between units and types are represented by edges connecting the\nnodes.\n\nThe problem is clearly unsolvable, as the associated graph describes a\n\emph{cycle}: each unit is related to the surrounding units by a connection to a\ncommon type, and the two extremal units are related to each other in the same\nway.\n\nAt the same time, not all permutations of the units are admissible.\nFor example, one may argue that the permutation ${\boldsymbol{\pi}}_1=(3,4,5,1,2)^T$ should\nbe considered partially feasible, as it breaks only one of the constraints\ncontained in the bipartite graph, while the ordering ${\boldsymbol{\pi}}_2=(1,4,2,5,3)^T$ has\nnothing to do with the problem considered.\n\nThe adjacency matrix associated with the graph in Figure~\ref{fig:cycle} is the\nfollowing\n\begin{equation}\label{incidCycle}\nA = \begin{bmatrix}\n1 & 1 & 0 & 0 & 0\\\n0 & 1 & 1 & 0 & 0\\\n0 & 0 & 1 & 1 & 0\\\n0 & 0 & 0 & 1 & 1\\\n1 & 0 & 0 & 0 & 1\n\end{bmatrix}.\n\end{equation}\nWe can associate with \eqref{incidCycle} the similarity matrix\n\begin{equation}\label{cycleF}\nF = AA^T = \begin{bmatrix}\n2 & 
1 & 0 & 0 & 1\\\\\n1 & 2 & 1 & 0 & 0\\\\\n0 & 1 & 2 & 1 & 0\\\\\n0 & 0 & 1 & 2 & 1\\\\\n1 & 0 & 0 & 1 & 2\n\\end{bmatrix},\n\\end{equation}\nwhose Laplacian is\n$$\nL = D - F = \\begin{bmatrix}\n2 & -1 & 0 & 0 & -1\\\\\n-1 & 2 & -1 & 0 & 0\\\\\n0 & -1 & 2 & -1 & 0\\\\\n0 & 0 & -1 & 2 & -1\\\\\n-1 & 0 & 0 & -1 & 2\n\\end{bmatrix}.\n$$\n\nThe matrix $L$ is circulant, that is, it is fully specified by its first\ncolumn, while the other columns are cyclic permutations of the first one\nwith an offset equal to the column index. \nA complete treatment of circulant matrices can be found\nin~\\cite{davis1979circulant}, while~\\cite{redivo2012smt} implements a Matlab\nclass for optimized circulant matrix computations.\n\nOne of the basic properties of circulant matrices is that their spectrum is\nanalytically known. In particular, the eigenvalues of $L$ are given by\n$$\n\\{\\widehat{L}(1), \\widehat{L}(\\omega), \\widehat{L}(\\omega^2),\n\\widehat{L}(\\omega^3), \\widehat{L}(\\omega^4) \\},\n$$\nwhere $\\widehat{L}(\\zeta)$ is the discrete Fourier transform of the first\ncolumn of $L$\n$$\n\\hat{L}(\\zeta) = 2 -\\zeta ^{-1}-\\zeta ^{-4},\n$$\nand $\\omega = {\\mathrm{e}}^{\\frac{2\\pi{\\mathrm{i}}}{5}}$ is the minimal phase $5$th root of\nunity; see~\\cite{davis1979circulant}.\nA simple computation shows that\n$$\n\\widehat{L}(1)=0, \\quad\n\\widehat{L}(\\omega)=\\widehat{L}(\\omega^4)=2-2\\cos\\frac{2\\pi}{5}, \\quad\n\\widehat{L}(\\omega^2)=\\widehat{L}(\\omega^3)=2-2\\cos\\frac{4\\pi}{5}, \n$$\nso that the Fiedler value $\\widehat{L}(\\omega)$ has multiplicity 2.\n\nTo explore this situation we performed the following numerical experiment.\nWe considered 10000 random linear combinations of an orthonormal basis for the\neigenspace corresponding to the Fiedler value. 
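This sampling can be condensed into a few lines of Python (an analogue, under a fixed random seed, of the Matlab script \texttt{mfiedval.m}; `L` below is the cycle Laplacian written above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Laplacian of the cycle similarity matrix F = A A^T
L = np.array([[ 2., -1.,  0.,  0., -1.],
              [-1.,  2., -1.,  0.,  0.],
              [ 0., -1.,  2., -1.,  0.],
              [ 0.,  0., -1.,  2., -1.],
              [-1.,  0.,  0., -1.,  2.]])
w, V = np.linalg.eigh(L)      # ascending eigenvalues: 0, then the double Fiedler value
B = V[:, 1:3]                 # orthonormal basis of the Fiedler eigenspace

perms = set()
for _ in range(10000):
    x = B @ rng.standard_normal(2)                    # a random "Fiedler vector"
    perms.add(tuple(int(i) for i in np.argsort(x)))   # its sorting permutation

n_orderings = len(perms)                              # 10 distinct permutations
n_up_to_reversal = len({min(p, p[::-1]) for p in perms})   # 5 up to reversal
```

The counts match the experiment described in this section: only 10 of the $5!=120$ permutations appear, 5 once reversals are identified.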
This produces a set of random\nvectors, belonging to a plane immersed in ${\mathbb{R}}^5$, which can all be considered\nas legitimate ``Fiedler vectors''.\n\nEach vector was sorted, and the corresponding permutations of indices were\nstored in the columns of a matrix.\nIn the end, all the repeated permutations were removed.\nWe obtained 10 permutations, reported in the columns of the following matrix\n\begin{equation}\label{permat}\n\begin{bmatrix}\n3 & 2 & 5 & 3 & 2 & 4 & 4 & 1 & 5 & 1 \\\n2 & 1 & 4 & 4 & 3 & 3 & 5 & 5 & 1 & 2 \\\n4 & 3 & 1 & 2 & 1 & 5 & 3 & 2 & 4 & 5 \\\n1 & 5 & 3 & 5 & 4 & 2 & 1 & 4 & 2 & 3 \\\n5 & 4 & 2 & 1 & 5 & 1 & 2 & 3 & 3 & 4\n\end{bmatrix}.\n\end{equation}\nThese are far fewer than the $5!=120$ possible permutations, and they reduce to 5 if\nwe remove the columns which are the reverse of another column.\nThis confirms our conjecture: when a Fiedler value is multiple, some constraints\nare imposed on the admissible permutations of the units.\n\nIt is worth noting that matrix \eqref{permat} does not contain the cyclic\npermutations of the units depicted in Figure~\ref{fig:cycle}.\nIndeed, the spectral algorithm aims at moving the nonzero components close to\nthe main diagonal, and this contrasts with the presence of nonzeros in the\nelements $f_{51}$ and $f_{15}$ of matrix \eqref{cycleF}.\nAll permutations contained in \eqref{permat}, when applied to the similarity\nmatrix $F$, produce the same matrix\n$$\n\tilde{F} = \begin{bmatrix}\n2 & 1 & 1 & 0 & 0 \\\n1 & 2 & 0 & 1 & 0 \\\n1 & 0 & 2 & 0 & 1 \\\n0 & 1 & 0 & 2 & 1 \\\n0 & 0 & 1 & 1 & 2\n\end{bmatrix},\n$$\nwhich exhibits a smaller bandwidth than \eqref{cycleF}.\n\nThis experiment can be reproduced by executing the script \texttt{mfiedval.m},\nwhich is found in the \texttt{demo} sub-directory.\n\n\n\section{Numerical experiments}\label{sec:numexp}\n\nIn this section we illustrate the application of the \texttt{PQser} toolbox to some\nnumerical 
examples.\nThe experiments can be repeated by running the related Matlab scripts located\nin the \\texttt{demo} sub-directory of the toolbox.\n\n\\begin{figure}[hbt]\n\\begin{center}\n\\includegraphics[width=.30\\textwidth]{bornmat}\\hfil \n\\includegraphics[width=.30\\textwidth]{bornmix}\\hfil \n\\includegraphics[width=.30\\textwidth]{bornord}\n\\caption{Processing of the Bornholm data set: the spy plot on the left shows\nthe original matrix, a permuted version is reported in the central graph, on\nthe right we display the matrix reordered by the spectral algorithm.}\n\\label{bornholm}\n\\end{center}\n\\end{figure}\n\nThe first example is the numerical processing of the Bornholm data set,\npresented in Table~\\ref{tab:adjacency}.\nWe randomly permute the rows of the adjacency matrix and apply the\nspectral algorithm to the similarity matrix associated to the permuted matrix. \nThe resulting PQ-tree contains just a Q-node, so there is only one solution\n(actually, this is a proof that the matrix is pre-R) which we use to reorder\nthe permuted matrix.\nThe computational code is contained in the file \\ttt{exper1.m}.\n\nFigure~\\ref{bornholm} reports the spy plots which represent the nonzero\nentries of the initial matrix, its permuted version, and the final reordering.\nIt is immediate to observe that the lower band of the reordered matrix is\nslightly narrower than the initial matrix, showing that the spectral algorithm\nwas able to improve the results obtained empirically by archaeologists.\n\nThe second example concerns the comparison of the spectral algorithm with its\nparallel version in the solution of a large scale problem.\nThe experiments were performed on a dual Xeon CPU E5-2620 system (12 cores),\nrunning the Debian GNU\/Linux operating system and Matlab 9.2.\n\nThe function \\ttt{testmatr} of the toolbox allows the user to create a block\ndiagonal matrix, formed by $m$ banded blocks whose size is chosen using a\nsecond input parameter.\nThe matrix is 
randomly permuted in order to hide its reducible structure.\n\n\\begin{figure}[hbt]\n\\begin{center}\n\\includegraphics[width=.49\\textwidth]{exper2time}\\hfil \n\\includegraphics[width=.47\\textwidth]{exper2spup}\n\\caption{Comparison between the sequential and the parallel versions of\nAlgorithm~\\ref{alg:spectrsort}: on the left the execution time in seconds, on\nthe right the parallel speedup, defined as the ratio between the sequential and\nthe parallel timings. The test matrix is of dimension $2^{15}=32768$, the size\nof each reducible block is $2^j$, where $j$ is the index reported in the\nhorizontal axis.}\n\\label{exper2}\n\\end{center}\n\\end{figure}\n\nWe let the size of the problem be $n=2^{15}=32768$ and, for $j=1,2\\ldots,15$,\nwe generate a sequence of test matrices containing $n\\cdot 2^{-j}$ blocks, each\nof size $2^j$.\n\nWe apply the function \\ttt{spectrsort} that implements\nAlgorithm~\\ref{alg:spectrsort} to the above problems, as well as its parallel\nversion \\ttt{pspectrsort}, and record the execution time; see the file\n\\ttt{exper2.m}.\nThe number of processors available on our computer was 12.\n\nThe graph on the left of Figure~\\ref{exper2} shows that there is a significant\nadvantage from running the toolbox on a parallel computing system when the\nnetwork associated to the problem is composed by a small number of large\nconnected components.\nThis is confirmed by the plot of the parallel speedup, that is, the ratio\nbetween the timings of the sequential and the parallel implementations,\ndisplayed in the graph on the right in the same figure.\n\n\\begin{figure}[hbt]\n\\begin{center}\n\\includegraphics[width=.30\\textwidth]{exper3data}\\hfil \n\\includegraphics[width=.30\\textwidth]{exper3spec}\\hfil \n\\includegraphics[width=.30\\textwidth]{exper3rcm}\n\\caption{Bandwidth reduction of a sparse matrix of size 1024: the three spy\nplots display the initial matrix, the reordered matrix resulting from the\nspectral algorithm, and the one 
produced by the \ttt{symrcm} function of\nMatlab.}\n\label{exper3}\n\end{center}\n\end{figure}\n\nTo conclude, we consider another important application of the reordering\ndefined by the Fiedler vector of the Laplacian, namely, the reduction of the\nbandwidth for a sparse matrix; see~\cite{barnard1995spectral}.\n\nWe generate a sparse symmetric matrix of size $n=1024$, having approximately\n0.2\% nonzero elements, and reorder its rows and columns by\nAlgorithm~\ref{alg:spectrsort}.\nNotice that, in this case, the spectral algorithm must be applied to a matrix\nwhose elements are taken in absolute value.\nThe computation is described in the script \ttt{exper3.m}.\n\nThe resulting matrix is depicted by displaying its nonzero pattern in\nFigure~\ref{exper3}, where it is compared to the reverse Cuthill-McKee\nordering, as implemented in the \ttt{symrcm} function of Matlab.\nThe spectral algorithm appears to be less effective than \ttt{symrcm},\nleading to a reordered matrix with a wider band.\nThis is due to the fact that \ttt{spectrsort} aims at placing the largest\nentries close to the diagonal, and this does not necessarily produce the\nmaximal bandwidth reduction.\nExperimenting with sparser matrices, we observed that the two methods often\nproduce similar results.\n\nWe remark that, in this application too, the presence of a multiple Fiedler\nvalue may constitute a problem.\nFor example, we were not able to correctly process the Matlab test matrix\n\ttt{bucky} (the connectivity graph of the Buckminster Fuller geodesic dome),\nbecause the associated Laplacian possesses a triple Fiedler value.\n\n\n\section{Conclusions}\label{sec:last}\n\nIn this paper we present a new Matlab toolbox principally aimed at the\nsolution of the seriation problem, but which can be applied to other related\nproblems.\n\nIt is based on a spectral algorithm introduced in~\cite{atkins1998spectral},\nand contains an implementation of PQ-trees as well as some tools for 
their\nmanipulation, including an interactive visualization tool.\nThe implemented algorithm includes the possibility to choose between a small\nscale and a large scale algorithm for the computation of the Fiedler vector,\nand to detect equal components in the same vector according to a chosen\ntolerance. Further, a parallel version of the method is provided.\n\nWe also point out the importance of the presence of multiple Fiedler values, a\nproblem which has not been considered before in the literature and which has a\nsignificant influence on the computation of an approximate solution to the\nseriation problem.\n\nThe use of the toolbox is illustrated by a few practical examples, and its\nperformance is investigated through a set of numerical experiments, both of\nsmall and large scale.\n\n\n\\section{Acknowledgements}\n\nWe would like to thank Matteo Sommacal for pointing our attention to the\nproblem of seriation and to its application in archaeology.\nThe paper~\\cite{ps05}, which he coauthored, was our principal source of\ninformation when the research which lead to this paper started.\n\n\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n Herbig-Haro (HH) jets are the optical manifestations of\ncollimated outflows from young stellar objects (YSOs) in their early\nstages of evolution and, along with typical HH objects, are apparent\ntracers of active star formation. YSOs, unfortunately, are usually deeply\nembedded in dense, extended envelopes and\/or opaque molecular cloud\ncores that strongly impede efforts for investigations in the optical \nregime and make our view of the early stages of star formation still \nquite a puzzle. In some cases, however, YSOs are located within\nthe confines of HII regions (Reipurth et al. 1998) and their power sources\nsuffer very low extinction and are therefore optically visible. 
Similar\njets immersed in a photoionized medium were identified in the Orion\nnebula and near the late B stars in NGC~1333 (Bally et al. 2000;\nBally \\& Reipurth 2001).\nHowever, some properties of photoionized jets in HII regions are\nfound to be highly dependent on their environment. Those immersed in the \noutskirts of HII regions, or near soft-UV radiation sources such as late B stars,\ntend to emerge as bipolar jets which usually share a ``C\" shaped appearance\n(Bally et al. 2000; Bally \\& Reipurth 2001), whereas jets deeply embedded \nin HII regions \nendure a more disruptive environment and seem to favor highly asymmetric\nor even monopolar jet formation (Reipurth et al. 1998).\n\nThe Rosette Nebula is a spectacular HII region excavated by the strong\nstellar winds from O stars in the center of the young open\ncluster NGC~2244, which has an age of about 3 x $10^6$ yr (Ogura \\& Ishida 1981).\nThis on-going star forming region (Meaburn \\& Walsh 1986;\nClayton et al. 1998; Phelps \\& Lada 1997; Li et al. 2002; Li 2003) is located\nat a distance of $\\sim$1500 pc at the tip of a giant molecular cloud\ncomplex with an extent of around 100 pc (Dorland \\& Montmerle 1987).\nIn this paper, we present the discovery of an optical jet system embedded \nin the Rosette Nebula with a fascinating morphology that displays exceptional \nnew features not known before.\n\n\\section{Observations and Data Reduction}\n\n\\subsection{Narrowband Imaging}\n\nNarrow band images of the Rosette Nebula were obtained 3 March 1999 with the\nKitt Peak National Observatory 0.9-meter telescope and the 8k$\\times$8k MOSAIC I\ncamera (Muller et al. 1998). Images were taken with the H${\\alpha}$,\n[OIII] and [SII] filters, whose central wavelengths\/FWHMs are 6569\\AA\/80\\AA, 5021\\AA\/55\\AA,\nand 6730\\AA\/80\\AA, respectively. For each filter, five exposures of 600~seconds\nwere taken, each image slightly offset to fill in physical gaps between the\nMOSAIC CCDs. 
The pixel scale is 0.423\\arcsec~pixel$^{-1}$, resulting in a 59\\arcmin~x~59\\arcmin\\\nfield of view. An astrometric solution for these images was determined with\nthe use of stars in the Guide Star Catalog 1.2 (Roser et al. 1998). The astrometric\naccuracy in the optical images presented is $\\sim$1\\arcsec.\n\n\\subsection{Optical Spectroscopy}\n\nOptical spectroscopy of the jet system, with a\n200\\AA~mm$^{-1}$, 4.8\\AA~pixel$^{-1}$ dispersion, and a 2.5\\arcsec slit,\nwas carried out with the 2.16m telescope of the National Astronomical\nObservatories of China on 10 and 16 January 2003, with slit positions\nalong and orthogonal to the jet direction. A possible\nphysical companion of the apparent power source was also observed with the\northogonal slit position to avoid contamination from the jet system.\nAn OMR (Optomechanics Research Inc.) spectrograph and a Tektronix 1024${\\times}$1024 CCD\ndetector were used in this run of observations.\n\nThe spectral data were reduced following\nstandard procedures in the NOAO Image Reduction and Analysis Facility\n(IRAF, version 2.11) software package. The CCD reductions included bias\nand flat-field correction, successful nebular background subtraction, and\ncosmic rays removal. Wavelength calibration was performed based on helium-argon\nlamps exposed at both the beginning and the end of the observations every night.\nFlux density calibration of each spectrum was conducted based on observations of at\nleast two of the KPNO spectral standards (Massey et al. 1988) per night.\n\n\\section{Results and Discussion}\n\n\\subsection{The Jet System}\n\nAn optical jet system with a striking morphology was identified\nwhen scrutinizing the narrow band images of the Rosette Nebula (Fig. 1). A\nprominent jet with a position angle of 312\\arcdeg\\ is clearly seen traced back\nto a faint visible star, indicating rather low extinction along the\nline of sight. 
The circumstellar envelope of the YSO
might have been stripped away, leaving a photoablating disk exposed
to the strong UV radiation field of the exciting sources in the HII region.
The optical jet remains highly collimated for a projected distance
of $>$8000~AU from the energy source if a mean distance
of 1500 pc to Rosette is adopted. At the end of the collimated jet is a 
side-impacted shock structure, probably due to severe radiation pressure from the
strong UV field present and strong stellar winds encountered. 
A prominent knot or compact mass bullet resembling a point source is immediately
noticed in the highly collimated part of the jet with a projected separation of
7\\arcsec\\ from the source. Another one or 
two knots can be marginally noticed in the bow shaped extension of the jet, 
indicating episodic eruption events from the driving source.

 We infer the possible existence of 
a counterjet from the presence of a promising bow shock
structure on the opposite side of the jet. Successive episodes
of counterjet ejection may have ceased, or the counterjet may now be too 
faint to detect. 
Alternatively, this structure may simply be shocked
interstellar material that was swept up and photoionized by the strong
radiation field, which happens to be projected onto the vicinity of
the jet exciting source. Another possible explanation is that it is
disrupted jet material from a possible T Tauri companion (see
Fig.~1) or from nearby young stars. If it is indeed a 
degenerate counterjet, however, it may be the only existing observational 
evidence for how bipolar jets evolve into monopolar or 
highly asymmetric jets.
We further argue that the long-puzzling high-velocity components from
the photoionized gas fronts aggregated in the Rosette (Meaburn \\& Walsh 1986; Clayton \\& Meaburn 1995; Clayton, Meaburn \\& Lopez et al. 1998) could
actually be from rapidly disrupted young disk-jet systems in the inner
part of the HII region.
Diffuse [OIII] emission is prominent from 
all parts of the jet system, which is likely due to its photo-dissipating nature. 
[SII] emission from the highly-collimated part
of the jet decreases rapidly from the base of the jet and is undetected
in the shocked structures, indicating a steep decline of internal
shock effects or overwhelming external ionization.

Spectroscopic observation along the jet direction reveals prominent H$\\alpha$ 
($W_{\\lambda} \\sim$~35\\AA) as well as [SII] emission. The presence of
significant [OIII] $\\lambda\\lambda$4959, 5007 line emission 
confirms our suspicion that the jet is in a high-excitation state.
Fig. 2 clearly shows the existence of a weak counterjet as discussed above.
Note that enhanced [NII] emission is detected along the jet, which is not
shown here.

\\subsection{The exciting source}

Spectroscopic observations of the exciting source reveal primarily the 
spectrum of a normal F8Ve star with weak H$\\alpha$ ($W_{\\lambda} \\sim$~6.7\\AA)
and prominent [OIII] emission. However, the spectrum also shows a 
significant red-displaced absorption component in H$\\alpha$, with 
a receding velocity of 500$\\pm$50~km~s$^{-1}$ (Fig. 3). Two weak
but significant emission features of unclear nature are also present immediately
to the red of the rest H$\\alpha$ emission and its red-displaced absorption. One is
centered at 6582.5 \\AA~ with an accuracy of $\\pm$ 1 \\AA~, and may be 
the pronounced [NII] $\\lambda$ 6583.6 emission also detected along the jet. 
This gives a high [NII] to H$\\alpha$ flux ratio of about 1:3.
The other, with a central wavelength of 6613.3 \\AA~, cannot be identified 
with any known emission line at its rest wavelength. A possible interpretation
is a red-displaced emission component of H$\\alpha$. This, however, would suggest
an extraordinarily high receding velocity of 2300$\\pm$50~km~s$^{-1}$ and should
be treated with great caution.
The possible ejection of high-velocity, compact 
mass bullets from the energy source is partially supported by the 
clear existence of a compact knot in the highly collimated part of the jet
as mentioned in the previous subsection, and, if real, may pose a 
significant challenge to 
currently available disk-jet models. This scenario, however, seems to match well
with Meaburn's (2003) recent speculations on the formation of ablated jets
in HII regions.

 Spectral energy distribution (Kurucz 1991) fitting of the energy source
based on spectroscopic observations and data extracted from 2MASS indicated a relic
disk mass of only 0.006 M$_{\\odot}$, consistent with a photoablating
origin of the system. It is noteworthy that Inverse P Cygni (IPC) profiles
seldom appear in the lower Balmer
emission lines such as H$\\alpha$. The best-known case is the YY Orionis
type, weak-lined T Tauri star T Cha (Alcala, Covino \\& Franchini et al. 1993), 
from which, at most, a relic circumstellar
disk can be expected. An IPC profile in H$\\alpha$ is usually attributed to its
optically thin nature, which normally leads to the conclusion of a high
inclination angle of the relic disk. Furthermore, this must happen under
stringent conditions such as a low density of residual circumstellar
material and weak H$\\alpha$ emission (Alcala, Covino \\& Franchini et al. 1993).
Both are consistent with properties of this jet system.

\\subsection{Implications}

A similar red-displaced absorption profile is also extracted 
around the power source from the spectra taken along the jet direction, 
which supports the reliability of the spectral reduction and indicates 
ongoing mass accretion onto the central YSO.
The simultaneous existence of 
signatures of mass inflow and outflow, along with the absence of apparent 
veiling or blue excess, the absence of signatures of chromospheric 
activity, and the weak or absent Balmer emission, is strong evidence 
that the forming star is highly affected by its location in the strong UV 
radiation fields. Indeed, Reipurth et al. (1998) state that external ionization may 
aid the feeding and\/or launching of the jet, allowing jet formation in such
systems to survive. We further speculate that the aforementioned effects eventually 
change the configuration of the stellar-disk magnetosphere 
such that photodissipated material in the relic disk is 
easily loaded onto the magnetic channels, especially on the side facing the
strong radiation fields. It is conceivable that most or all of this material could 
have been ejected in the form of a jet on the opposite side of the 
mass loading, instead of eventually accreting onto the
contracting, accretion-starved infant star. 
It is therefore difficult for the energy source to grow further in mass; its accretion
disk could be dissipated on a comparatively short time scale, and its evolution
toward the main sequence is either highly accelerated or, in some cases, leads 
to the formation of a failed star that would have evolved into a higher-mass
star under normal conditions.
Many young, very low mass stars and brown dwarfs 
are found to show evidence of mass accretion from their circumstellar environments 
(Fernandez \\& Comeron 2001; Luhman, Briceno \\& Stauffer 2003; Jayawardhana, 
Mohanty \\& Basri 2003; White \\& Basri, 2003). This study further illustrates 
how the emergence of 
massive stars in giant molecular clouds inhibits subsequent generations of low-mass 
star formation.
It may also serve as an instructive case of how isolated 
substellar\/planetary-mass objects in regions of massive star formation, especially 
those found in HII regions (Zapatero Osorio et al. 2000), were prevented from 
growing in mass, ceased their accretion process and came into being.

{\\flushleft \\bf Acknowledgments~}

We are grateful to an anonymous referee for the 
many helpful comments on the paper. We thank Prof. You-Hua Chu 
for her valuable comments and suggestions, and Prof. W. P. Chen and 
Prof. W. H. Ip for their kind accommodation and help 
during a two-year stay at NCU. This work has made use of the 2MASS database.

","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzgqn b/data_all_eng_slimpj/shuffled/split2/finalzgqn new file mode 100644 index 0000000000000000000000000000000000000000..d07a0f74d64c9da500dd013aa396755de3bd380d --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzgqn @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}
In recent years, significant advances have been made in learning representations of graph-structured data and predicting quantities of interest for nodes, edges or graphs themselves~\\cite{gcn,gat}. This new subfield has attracted an increasing amount of interest, leading to the development of numerous methods~\\cite{2019survey}. However, several earlier works have noted issues with existing standard benchmarks, which make it difficult to rigorously compare results and accurately distinguish between the performance of competing architectures~\\cite{oleks2018pitfalls,appnp}.

Our primary focus is semi-supervised node classification: given labels of a small subset of nodes (typically 1--5\\%) and features of all nodes, as well as their connectivity information, the task is to predict all other labels. This setup is often used to assess the performance of various Graph Neural Network (GNN) architectures~\\cite{gcn,gat}.
These methods are usually evaluated on three citation network datasets (Cora, CiteSeer, PubMed) introduced by Yang et al.~\\yrcite{planetoid}. Unfortunately, different training splits are used across studies, which makes comparisons challenging, especially since the performance on this task is very sensitive to the choice of training split~\\cite{oleks2018pitfalls}. Furthermore, all three benchmarks are derived from the same domain, with similar structural properties. In contrast, \\textsc{Wiki-CS} has a much higher connectivity rate, and hence provides a different kind of distribution for these methods to be tested on.

Similar datasets have also been used in single-relation link prediction~\\cite{vgae,graphstar}. We further use \\textsc{Wiki-CS} to benchmark relational methods for this task, along with a non-structural SVM baseline. 

\\section{Related Work}
The most commonly used semi-supervised node classification benchmarks are the previously-described citation network graphs, proposed by Yang et al.~\\yrcite{planetoid}. Larger datasets have also been used, such as Reddit and PPI~\\cite{graphsage}. However, due to the standard split sizes proposed for these benchmarks, state-of-the-art methods have already achieved F1 scores of 0.995 and 0.97 respectively~\\cite{graphsaint}, making it difficult for further improvements to be properly gauged.

Due to the issues with existing datasets, there has been significant concurrent work on establishing robust GNN benchmarks:
\\begin{itemize}
 \\item The Open Graph Benchmark~\\cite{open-graph-benchmark} (OGB) has recently developed a range of datasets, focusing on diversity of domains, graph sizes and types of tasks, and on unified evaluation methods. A Wikidata Knowledge Graph is included for a link prediction task---note that this source material is entirely different from the article hyperlink graph used for \\textsc{Wiki-CS}.
OGB also proposes challenging domain-specific splits based on some aspect of the data (for example, time or molecular structure), instead of selecting them randomly.
 \\item Dwivedi et al.~\\yrcite{dwivedi2020benchmarking} similarly proposed several datasets to rigorously distinguish the aspects of GNN architectures that significantly contribute to good performance on challenging benchmarks. To achieve this, they used largely synthetic graphs.
\\end{itemize} 
Our contribution complements the existing ones by providing a dataset and experimental results based on a new domain. We thus further establish the generality of GNN methods and extend the range of available benchmarks.

\\section{The Dataset}

\\subsection{Article Selection and Label Generation}
We processed Wikipedia data dumps from August 2019 to extract a subgraph where accurate class labels could be provided, based on the categories of each article. Unfortunately, these category tags were not suitable for direct use as class labels, as most of them are highly specific and inconsistently applied to a small number of pages---there were around 1.5 million different categories defined on the 6 million pages, at the time of the snapshot that we used.

This problem was mitigated by using the category sanitizer tool made available by Boldi \\& Monti~\\yrcite{category-sanitizer}, with some modifications. Their method relies on the subcategory relation to aggregate articles belonging to subcategories to their parent categories. A small set of prominent categories is selected based on harmonic centrality measures in the subcategory graph; other nodes in the subcategory graph are aggregated to one of their nearest ancestors (see Figure \\ref{fig:cs-categories} for an example subgraph). See Boldi \\& Monti~\\yrcite{category-sanitizer} for the details of the process.
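The nearest-ancestor aggregation just described can be illustrated with a short breadth-first walk up the subcategory relation. This is only our own rough sketch on a toy graph (the function name, data layout and tie-breaking are our assumptions; the actual sanitizer tool additionally performs the centrality-based selection of prominent categories):

```python
from collections import deque

def nearest_prominent_ancestor(category, parents, prominent):
    """Walk the subcategory relation upwards (breadth-first) and return
    the nearest prominent ancestor of `category`, or None if none is
    reachable. Ties at equal depth are broken arbitrarily by BFS order."""
    if category in prominent:
        return category
    queue = deque([category])
    seen = {category}
    while queue:
        node = queue.popleft()
        for parent in parents.get(node, []):
            if parent in prominent:
                return parent
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return None

# Toy subcategory graph: child -> list of parent categories.
parents = {
    "Neural networks": ["Machine learning"],
    "Machine learning": ["Computer science"],
    "Sorting algorithms": ["Algorithms"],
}
prominent = {"Machine learning", "Algorithms"}
print(nearest_prominent_ancestor("Neural networks", parents, prominent))  # Machine learning
```

Every article in an aggregated category is then assigned that prominent category as its label.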
This avoids aggregation through many indirect steps, which would often lead to articles being mapped to categories which they have little semantic overlap with.\n\nHowever, the output still required further clean-up: some aggregated categories still contained unrelated articles. Additionally, if the subcategory graph related to some topic is very dense, the selected prominent categories and the aggregation choices can be very arbitrary.\n\n\\textsc{Wiki-CS} was created by inspecting the list of $10,000$ prominent categories selected by the sanitizer and picking a subject area with few such issues. We identified three possible candidate subjects (branches of biology, US states, branches of CS), and sampled 20 pages from every class of these candidates. Although all of these had some issues, we were able to clean up the CS data by dropping some categories and manually disabling aggregation across specific subcategories to prune bad pages from others. This resulted in a dataset with 10 classes corresponding to branches of computer science, with very high connectivity. See Appendix \\ref{app:categories} for the set of prominent categories we used for each label. Finally, we dropped the articles that would have been mapped to multiple classes.\n\n\\begin{figure}[ht]\n\\vskip 0.2in\n\\centering\n\n\\includegraphics[width=0.9\\columnwidth]{cs-cat.pdf} \n\n\\caption{A subgraph of the subcategory relation graph. Nodes with dark borders are the prominent categories chosen based on centrality. The others were aggregated to the nearest marked ancestor as denoted by their colors, with ties broken arbitrarily.}\n\n\\label{fig:cs-categories}\n\\vskip -0.2in\n\\end{figure}\n\n\\subsection{Node Features}\n\nSimilarly to previous work~\\cite{planetoid}, our node features were derived from the text of the corresponding articles. However, they were calculated as the average of pre-trained GloVe word embeddings~\\cite{glove} instead of using binary bag-of-words vectors. 
This allowed us to encode rich features corresponding to a large vocabulary in relatively small 300-dimensional input vectors, which can be an advantage for training large models on a GPU.\n\n\\subsection{Training Splits}\n\\label{subsec:training-splits}\n\nIt has been shown that the choice of the training split can seriously affect model performance for semi-supervised node classification~\\cite{oleks2018pitfalls}. Therefore, using multiple training splits can improve the robustness of a benchmark~\\cite{appnp}. For this reason, we randomly selected 20 different training splits from the data that was not used for testing.\n\nMore specifically, we split the nodes in each class into two sets, 50\\% for the test set and 50\\% potentially visible. From the visible set, we generated 20 different splits of training, validation and early-stopping sets: 5\\% of the nodes in each class were used for training in each split, 22.5\\% were used to evaluate the early-stopping criterion, and 22.5\\% were used as the validation set for hyperparameter tuning. We stored the resulting mask vectors with the rest of the dataset, so that they can be used consistently across all future work.\n\n\\subsection{Statistics and Structural Properties}\n\n\\begin{table}[t]\n\\caption{Comparison of key dataset statistics between \\textsc{Wiki-CS} and standard citation network benchmarks. 
SP stands for shortest path length.}\n\\label{tab:dataset-statistics}\n\\vskip 0.15in\n\\begin{center}\n\\begin{small}\n\\begin{sc}\n\\begin{tabular}{l|ccc|c}\n\\toprule\n & \\bf{Cora} & \\bf{CiteSeer} & \\bf{PubMed} & \\bf{Wiki-CS}\\\\\n\\midrule\n \\bf{Classes} & 7 & 6 & 3 & 10 \\\\\n \\bf{Nodes} & 2708 & 3327 & 19717 & 11701 \\\\\n \\bf{Edges} & 5429 & 4732 & 44338 & 216123 \\\\\n \\bf{Features dim.} & 1433 & 3703 & 500 & 300 \\\\\n \\bf{Label rate} & 3.6\\% & 5.2\\% & 0.3\\% & 5\\% \\\\\n \\bf{Mean degree} & 4.00 & 2.84 & 4.50 & 36.94 \\\\\n \\bf{\\shortstack{Average SP}} & 6.31 & 9.32 & 6.34 & 3.01 \\\\\n\\bottomrule\n\\end{tabular}\n\\end{sc}\n\\end{small}\n\\end{center}\n\\vskip -0.1in\n\\end{table}\n\nTable \\ref{tab:dataset-statistics} summarises the key statistics of the citation network and the \\textsc{Wiki-CS} datasets. Note the significantly higher rate of connectivity compared to existing benchmarks and the short average distance between any two nodes. This suggests that progress could be made on the benchmark by designing more involved computations within the neighborhood of a node, rather than focusing on long-range connections. This makes \\textsc{Wiki-CS} a useful and complementary addition to existing node classification datasets.\n\nThis connectivity also leads to more varied node neighborhoods: for each node, we calculated the proportion of neighbors that belong to the same class as the node itself, and plotted this distribution for \\textsc{Wiki-CS} as well as the existing citation network benchmarks. 
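The per-node same-class ratio described above can be computed directly from an edge list. The following is a minimal sketch on a toy graph; the function and variable names are ours, not from the paper:

```python
def same_class_neighbor_ratios(edges, labels):
    """For each node with at least one neighbor, compute the fraction of
    its neighbors that share the node's class label (undirected edges)."""
    neighbors = {}
    for u, v in edges:  # record both directions of every undirected edge
        neighbors.setdefault(u, []).append(v)
        neighbors.setdefault(v, []).append(u)
    ratios = {}
    for node, nbrs in neighbors.items():
        same = sum(labels[n] == labels[node] for n in nbrs)
        ratios[node] = same / len(nbrs)
    return ratios

# Toy graph: node 0 links to 1 (same class) and 2 (different class).
edges = [(0, 1), (0, 2), (2, 3)]
labels = {0: "cs", 1: "cs", 2: "bio", 3: "bio"}
ratios = same_class_neighbor_ratios(edges, labels)
print(ratios[0])  # 0.5
```

Plotting the histogram of these ratios over all nodes yields the distributions compared below.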
The results, shown in Figure \\ref{fig:same-class-neighbors}, indicate that the existing datasets have a large share of nodes in homogeneous neighborhoods, while \\textsc{Wiki-CS} is significantly more varied.

\\begin{figure}
\\vskip 0.2in
\\centering

\\begin{subfigure}{0.49\\columnwidth}
\\includegraphics[width=\\linewidth]{same-class-neighbors\/cora.png} 
\\caption{Cora}
\\end{subfigure}
\\begin{subfigure}{0.49\\columnwidth}
\\includegraphics[width=\\linewidth]{same-class-neighbors\/citeseer.png} 
\\caption{CiteSeer}
\\end{subfigure}

\\begin{subfigure}{0.49\\columnwidth}
\\includegraphics[width=\\linewidth]{same-class-neighbors\/pubmed.png} 
\\caption{PubMed}
\\end{subfigure}
\\begin{subfigure}{0.49\\columnwidth}
\\includegraphics[width=\\linewidth]{same-class-neighbors\/wiki.png} 
\\caption{Wiki-CS}
\\label{fig:same-class-neighbors-wiki}
\\end{subfigure}

\\caption{Distribution of the ratio of neighbors belonging to the same class. In all three citation network datasets, almost two-thirds of all nodes have all neighbors belonging to the same class. The distribution of \\textsc{Wiki-CS} is considerably more balanced.}
\\label{fig:same-class-neighbors}
\\end{figure}


We also visualized the structure of all four datasets using Deep Graph Mapper~\\cite{dgm}, an unsupervised GNN-based visualisation technique.
The results shown in Figure \\ref{fig:dgm-vis} suggest that \\textsc{Wiki-CS} might have a more centralized, hierarchical structure than the citation networks, which seems plausible considering the different source domains.\n\\begin{figure}\n\\vskip 0.2in\n\\centering\n\n\\begin{subfigure}{0.49\\columnwidth}\n\\includegraphics[width=\\linewidth]{dgm\/dataset-vis\/cora.png} \n\\caption{Cora}\n\\end{subfigure}\n\\begin{subfigure}{0.49\\columnwidth}\n\\includegraphics[width=\\linewidth]{dgm\/dataset-vis\/citeseer.png} \n\\caption{CiteSeer}\n\\end{subfigure}\n\n\\begin{subfigure}{0.49\\columnwidth}\n\\includegraphics[width=\\linewidth]{dgm\/dataset-vis\/pubmed.png} \n\\caption{PubMed}\n\\end{subfigure}\n\\begin{subfigure}{0.49\\columnwidth}\n\\includegraphics[width=\\linewidth]{dgm\/dataset-vis\/wiki.png} \n\\caption{Wiki-CS}\n\\label{fig:dgm-vis-wiki-cs}\n\\end{subfigure}\n\n\\caption{Deep Graph Mapper (DGM) visualisation of benchmarks. Each node in the figure corresponds to a cluster of similar nodes in the original graph, with edge thickness representing the amount of connections between clusters. Colors represent the most frequent class in each cluster. The DGM unsupervised embedding process did not take labels into account, only relying on the node features and edges. The hyperparameters are described in Appendix \\ref{app:hyperparameters}.}\n\\label{fig:dgm-vis}\n\\vskip -0.2in\n\\end{figure}\n\n\\section{Experiments}\n\n\\subsection{Semi-Supervised Node Classification}\n\nAs described in Section \\ref{subsec:training-splits}, 20 different training splits were created for the node classification task, each consisting of 5\\% of nodes from each class. The same test set (50\\% of the nodes) was evaluated for all splits. 
In each split, a different 22.5\\% of nodes is used for early-stopping: we finish training when the loss calculated on this set has not improved for 100 epochs, and evaluate the model snapshot that produced the lowest loss.\n\nThis evaluation was performed 5 times on each of the 20 splits; we report the mean accuracy with a 95\\% confidence interval based on bootstrap resampling from these results with 1,000 samples.\n\nThree GNN models were evaluated: GCN~\\cite{gcn}, GAT~\\cite{gat} and APPNP~\\cite{appnp}. Hyperparameter tuning was performed using the same training setup and measuring validation performance on 22.5\\% of the nodes disjoint from the training and early-stopping sets. For efficiency, only the first 10 (out of 20) splits were used for hyperparameter tuning. The model configurations are described in Appendix \\ref{app:hyperparameters}.\n\nTwo non-structural baselines were also included: a multi-layer perceptron (MLP) and a support vector machine (SVM). These predicted the class for each node individually, based on the node features. Since SVMs are deterministic, we only had a single data point from each training split and report the mean accuracy.\n\nThe results are shown in Table \\ref{tab:node-classification}. The relative model performances align well with the results on citation network benchmarks, providing evidence that these are indeed good general-purpose methods. It is perhaps surprising that the attention mechanism of GAT improved very little on the GCN result despite the large neighborhoods---one reason might be that it is difficult to learn what to attend to in the semi-supervised setting, as discussed in-depth by Knyazev et al.~\\yrcite{attention}.\n\nThe model predictions were also visualised with Deep Graph Mapper, and are included in Appendix \\ref{app:dgm-preds}. This was based on training each model once, on the first of the 20 training splits. 
As expected, the mistakes and disagreements are largely located near boundaries of classes. This reinforces the idea that more complex neighborhood aggregation methods might be able to improve prediction accuracy. There are also some less connected clusters that seem to produce consistent incorrect predictions under all models---this might be due to not having good training samples in their proximity.\n\n\\begin{table}[ht]\n\\caption{Performance of semi-supervised node classification methods on the \\textsc{Wiki-CS} dataset. Accuracies are represented as the average over 100 runs, with 95\\% confidence intervals calculated by bootstrapping.}\n\\label{tab:node-classification}\n\\vskip 0.15in\n\\begin{center}\n\\begin{small}\n\\begin{sc}\n \\begin{tabular}{c|c}\n\\toprule\n & \\textbf{Accuracy} \\\\\n\\midrule\n \\textbf{SVM} & 72.63\\%\\\\\n \\textbf{MLP} & 73.17 $\\pm$ 0.19\\%\\\\\n\\midrule\n \\textbf{GCN} & 79.07 $\\pm$ 0.10\\%\\\\\n \\textbf{GAT} & 79.63 $\\pm$ 0.10\\%\\\\\n \\textbf{APPNP}&79.84 $\\pm$ 0.10\\%\\\\\n\\bottomrule\n\\end{tabular}\n\\end{sc}\n\\end{small}\n\\end{center}\n\\vskip -0.1in\n\\end{table}\n\n\\subsection{Link Prediction}\n\nFor the link prediction benchmark, we followed the experimental setup of studies performing single-relation link prediction on the Cora, CiteSeer and PubMed datasets~\\cite{vgae, graphstar}. We split the data as follows: 85\\% of the real edges for training, 5\\% for validation and 10\\% for testing. For each group, the same number of negative examples (non-edge node pairs) was sampled uniformly at random. \n\nTwo GNN methods were benchmarked for link prediction: GraphStar~\\cite{graphstar} and VGAE~\\cite{vgae}. They were trained using the configurations reported in the original works, except for the hidden layer size of GraphStar: a maximum size of 256 would fit on the GPU. Details are included in Appendix \\ref{app:hyperparameters}. An MLP baseline was also trained using concatenated pairs of node feature vectors. 
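The edge split described above, with uniformly sampled non-edges as negatives, can be sketched as follows. This is a minimal illustration under our own assumptions (function and parameter names, and rejection sampling for the negatives, are ours, not from the paper):

```python
import random

def split_edges_with_negatives(num_nodes, edges, fracs=(0.85, 0.05, 0.10), seed=0):
    """Shuffle the true edges into train, validation and test groups by the
    given fractions, then pair each group with an equal number of non-edge
    node pairs sampled uniformly at random (by rejection sampling)."""
    rng = random.Random(seed)
    edge_set = {frozenset(e) for e in edges}
    pos = list(edges)
    rng.shuffle(pos)
    n_train = int(fracs[0] * len(pos))
    n_val = int(fracs[1] * len(pos))
    groups = [pos[:n_train], pos[n_train:n_train + n_val], pos[n_train + n_val:]]
    splits = []
    for group in groups:
        negatives = []
        while len(negatives) < len(group):  # keep only genuine non-edges
            u, v = rng.randrange(num_nodes), rng.randrange(num_nodes)
            if u != v and frozenset((u, v)) not in edge_set:
                negatives.append((u, v))
        splits.append((group, negatives))
    return splits  # [(train_pos, train_neg), (val_pos, val_neg), (test_pos, test_neg)]

# Toy example: a 30-node path graph with 20 edges.
edges = [(i, i + 1) for i in range(20)]
train, val, test = split_edges_with_negatives(30, edges)
```

A model is then trained to score the positive pairs above the sampled negatives in each group.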
\n\nThe results are shown in Table \\ref{tab:link-prediction}. Note the extremely high performance of all models, even the MLP baseline. It appears that randomly selected false edges are very easy to distinguish from true edges in this dataset, and harder negative samples would be needed for more meaningful benchmarking. The large number of edges aggravates this, but it is not the main cause: we performed an experiment where we trained the models on just $10000$ examples of each class, and found the metrics to be still comfortably above $0.9$. See Table \\ref{tab:lp-10k} for the results.we\n\n\\begin{table}[t]\n\\caption{Performance of link prediction methods on the \\textsc{Wiki-CS} dataset. Metrics are represented as the average over 50 runs of VGAE, 20 runs of the MLP and 10 runs of GraphStar, with 95\\% confidence intervals calculated by bootstrapping.}\n\\label{tab:link-prediction}\n\\vskip 0.15in\n\\begin{center}\n\\begin{small}\n\\begin{sc}\n \\begin{tabular}{c|c c}\n\\toprule\n & \\textbf{ROC-AUC} & \\textbf{AP} \\\\\n\\midrule\n \\textbf{MLP} & $0.9785 \\pm 0.0001$ & $0.9761 \\pm 0.0002$ \\\\\n\\midrule\n \\textbf{VGAE} & $0.9553 \\pm 0.0008$ & $0.9608 \\pm 0.0007$ \\\\\n \\textbf{GraphStar} & $0.9793 \\pm 0.0002$ & $0.9896 \\pm 0.0001$\\\\\n\\bottomrule\n\\end{tabular}\n\\end{sc}\n\\end{small}\n\\end{center}\n\\vskip -0.1in\n\\end{table}\n\n\\begin{table}[t]\n\\caption{Performance of link prediction methods trained on only $10,000$ examples of each class.}\n\\label{tab:lp-10k}\n\\vskip 0.15in\n\\begin{center}\n\\begin{small}\n\\begin{sc}\n \\begin{tabular}{c|c c}\n\\toprule\n & \\textbf{ROC-AUC} & \\textbf{AP} \\\\\n\\midrule\n \\textbf{MLP} & $0.9192 \\pm 0.0004$ & $0.9119 \\pm 0.0006$ \\\\\n\\midrule\n \\textbf{VGAE} & $0.8546 \\pm 0.0024$ & $0.8427 \\pm 0.0032$ \\\\\n \\textbf{GraphStar} & $0.9577 \\pm 0.0006$ & $0.9795 \\pm 0.0003$\\\\\n\\bottomrule\n\\end{tabular}\n\\end{sc}\n\\end{small}\n\\end{center}\n\\vskip 
-0.1in\n\\end{table}\n\n\\section{Conclusion}\n\nWe have presented \\textsc{Wiki-CS}, a new benchmark for GNN methods. We have described how its structural properties are significantly different from commonly used datasets. Our experiments show existing GNN architectures for semi-supervised node classification and link prediction performing similarly to their results on other benchmarks, which is further evidence that they are good general-purpose methods for graph-learning tasks. Our dataset is available for further study, broadening the range of available benchmarks.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:introduction}\nSpacetime singularities are among the most spectacular predictions of classical general relativity.\nIn the context of cosmology, the presence of an initial singularity signals a fundamental incompleteness in our understanding of the Universe, whose origin and beginning can not be explained within a classical or semiclassical treatment.\nThe singularity theorems of general relativity require energy conditions which may be violated in the early Universe if inflation was present, but one can nevertheless show that inflationary spacetimes are past incomplete \\cite{BGV03}.\n\\footnote{See e.g.\\ \\cite{YQ18} for an analysis of conditions under which continuous or differentiable extensions of the spacetime metric beyond the past incomplete region may exist for such inflationary spacetimes.}\nIt is widely expected that a quantum theory of gravity is required to resolve the classical Big Bang singularity and give a fundamental basis to theoretical cosmology; a full quantum treatment of spacetime may indeed be needed to justify the assumption that the early Universe can be studied in terms of quantum fields on a classical background \\cite{FLT17,TFLT19}.\n\nOn the other hand, the spacetime geometry of the early Universe was presumably very simple, describable in terms of a homogeneous isotropic 
background with only small perturbations.\nThis simplicity must ultimately be explained more fundamentally, but it also results in significant practical simplifications: at least as a first step, one can study simple homogeneous, isotropic spacetimes in quantum gravity to learn about singularity resolution.\nOne could hope that, as in the classical theory, this does not require understanding the nonlinear and presumably complicated dynamics of full quantum gravity.\nThis philosophy, systematically applied to \\ac{lqg} as a candidate theory of quantum gravity, gave birth to the field of \\ac{lqc} where, for models of quantum gravity coupled to a massless scalar field, the classical Big Bang singularity is indeed resolved and replaced by a `big bounce' \\cite{Boj01,AS11}.\nMore recently \\ac{lqc} has made direct contact with inflation, providing a quantum-gravitational extension of the usual semiclassical framework \\cite{AAN12}.\nThe precise relation between \\ac{lqg} and \\ac{lqc} has been the focus of much work in recent years \\cite{AC15,ADLP18,DLP19}.\n\nThe origin of singularity resolution in \\ac{lqc} is the fundamental discreteness of the theory, manifest in discrete spectra for areas and volumes with a gap away from zero \\cite{Rov04,BH11}.\nThis key feature of the \\ac{lqg} kinematics is shared by its reformulation in terms of \\ac{gft} \\cite{Ori17}.\n\\ac{gft} interprets the {\\em quanta of spacetime} seen in \\ac{lqg} as excitations of a quantum field (also then a quantum field {\\em of}, not {\\em on} spacetime), while the dynamics of a \\ac{gft} are generally defined such that its perturbative expansion corresponds to a sum over {\\em spin foams}, or discrete spacetime histories of \\ac{lqg} states \\cite{RR01}.\nThe continuum limit of this sum, needed to obtain continuum quantum geometry, is to be taken in a way similar to matrix and tensor models \\cite{Ori12}.\nGiven the very close relation of \\ac{gft} to \\ac{lqg}, it is natural to ask whether 
cosmological models of \\ac{gft} dynamics can resolve the classical Big Bang singularity in a way similar to what is seen in \\ac{lqc}.\n\nThe key idea that led to the derivation of cosmological models in the \\ac{gft} approach was that a spatially homogeneous quantum geometry could be understood as a type of Bose--Einstein condensate in \\ac{gft} \\cite{GOS14,GS16,Ori17a,PS19}.\nAs in other quantum field theories, such a condensate can be understood as a nonperturbative vacuum of the theory, characterised by a common quantum state for a very large number of quanta with respect to the original Fock vacuum.\nThe idea that spacetime could be a kind of Bose--Einstein condensate of geometric quanta had been formulated in other approaches before \\cite{Hu05,KS12}, but the quantum field theory framework of \\ac{gft} allows studying such a condensate with relatively conventional methods, adapted to a background-independent quantum gravity context.\nIn the simplest (mean-field) approximation, the equation of motion of the condensate mean field is the analogue of the usual Gross--Pitaevskii equation in condensed matter physics.\nFrom a solution to this equation of motion, one can compute geometric observables such as the total volume of the condensate.\nDynamics are introduced, just as in \\ac{lqc} and many other models of quantum cosmology \\cite{BI75}, by coupling a relational matter clock, given by a free massless scalar field.\nConcretely, one extracts equations for the relational volume observable $V(\\phi)$, the three-dimensional volume of a condensate given a particular value of the relational clock field, and its derivatives.\nThese are then interpreted as effective Friedmann equations derived from the \\ac{gft} condensate dynamics.\nThese steps were first fully implemented in \\cite{OSW16,OSW17} where it was shown how such effective Friedmann equations, for a wide class of \\ac{gft} models and under various simplifying assumptions, are consistent with the classical 
Friedmann equations at large volume while showing a bouncing behaviour at high densities very similar to the one in \\ac{lqc}.\nIn particular, these effective Friedmann equations can reproduce the preferred `improved dynamics' form of \\ac{lqc} \\cite{APS06} whose derivation from Hamiltonian formulations of \\ac{lqg} is a largely outstanding challenge \\cite{DLP19}.\n\nThese very promising results for effective cosmological dynamics from \\ac{gft} relied on assuming the emergence of a condensate regime in which the mean-field approximation is valid and \\ac{gft} interactions are subdominant with respect to the quadratic (kinetic) term.\nIncluding interactions leads to interesting modifications to effective cosmological equations, which can serve as a starting point for \\ac{gft} phenomenology \\cite{CPS16,PS17}.\nTo better understand the dependence of \\ac{gft} cosmology on a mean-field approximation, a simple toy model was then studied in \\cite{AGW18}.\nIn this model, only a single mode of a \\ac{gft} field is excited and a {\\em squeezing} Hamiltonian generates evolution in relational matter time $\\phi$, so that a squeezed state emerges dynamically even from the Fock vacuum without assuming a mean-field regime.\nThe general features of a bounce at high densities and agreement with classical cosmology at large volume could be reproduced in this simpler setting.\nThe choice of Hamiltonian was rather ad-hoc, motivated by agreement with classical cosmology and the properties of squeezed states.\nIt was then shown in \\cite{WEw19} that a Legendre transformation of the free \\ac{gft} action, taking again the matter clock $\\phi$ to define time, leads essentially to a squeezing Hamiltonian, the latter representing the dynamics of an `upside-down' harmonic oscillator with negative quadratic potential.\nThe results of \\cite{WEw19} hence explained the agreement between the effective cosmological dynamics of a squeezing Hamiltonian and those of previous results for 
\\ac{gft} condensates.\nA different argument explaining the close connection of \\ac{gft} cosmology to \\ac{lqc} was given in \\cite{BBC19} where it was argued that the canonical \\ac{lqc} framework and the field-theoretic (bosonic) \\ac{gft} cosmology can be seen as different realisations of the same $\\liealg{su}(1,1)$ algebra.\n\nThe aim of this paper is to extend many of these recent results to further clarify the connection between fundamental \\ac{gft} dynamics and effective cosmological equations.\nRather than using mean-field approximations, we derive general dynamical equations for operators and therefore expectation values of these operators; we work similarly to the `toy model' analysis of \\cite{AGW18} but consider a more general form of the \\ac{gft} dynamics.\nIn particular, we will go beyond quadratic Hamiltonians and add simple interactions, allowing us to connect to results such as those of \\cite{CPS16} for \\ac{gft} interactions which were obtained in a mean-field approximation.\nWe will also extend the algebraic viewpoint of \\cite{BBC19} to the case of an interacting Hamiltonian.\nAn issue in the derivation of effective dynamical equations that we will encounter is that, in the general case, these equations involve additional expectation values or higher moments that are not directly identifiable with the variables of a classical \\ac{flrw} cosmology (which is fully characterised by the scale factor or volume and the energy density in the massless scalar field).\nIn other words, dynamical equations for quantum expectation values require knowledge of additional initial conditions compared to a classical cosmology.\nThis is of course the usual situation for effective equations for quantum systems.\n(See e.g.~\\cite{BHT11} for a systematic discussion.)\nChoosing a specific class of initial conditions, for instance by focussing on a class of coherent states, then simplifies these equations.\nWe will illustrate this trade-off between obtaining 
effective equations that are as general as possible but include a dependence on additional variables, and more specific equations in which a particular choice of state (or family of states) allows deriving equations with more direct (semiclassical) physical interpretation.\nAs concrete examples, we will discuss Fock coherent states which have been studied in most of the existing literature on \\ac{gft} cosmology, but also coherent states based on the $\\liealg{su}(1,1)$ algebra satisfied by basic \\ac{gft} operators (in particular the well-known \\ac{pg} coherent states).\n\nWe show a key property of simple Fock coherent states, which is that relative uncertainties of quantities like volume and energy can be made arbitrarily small at late times, so that these states are as semiclassical as desired.\nThis gives further justification to their interpretation as semiclassical macroscopic geometries \\cite{GOS14}.\n\\acl{pg} states that can be thought of as elements of a \\ac{gft}-like Fock space do not admit such a semiclassical interpretation and are therefore disfavoured for \\ac{gft} cosmology.\n \n\\section{Group field theory cosmology}\n\\label{sec:gft_cosmo}\nIn this section we summarise previous work on the derivation of effective cosmological equations from group field theory, in particular the previous papers we are building on in this work.\nFor more details and background we point to \\cite{GOS14,GS16,Ori17a,PS19}.\n\n\\subsection{The group field theory approach to quantum gravity}\n\\Acfp{gft} are a nonperturbative approach to quantum gravity, aiming to extend the successes of matrix and tensor models to a theory of quantum geometry in higher (in particular four) dimensions by incorporating the kinematical and dynamical structure of loop quantum gravity and spin foams \\cite{Ori17,Ori12,Ori06,Fre05,Kra11}.\n\nConcretely, rather than being based on a matrix or tensor with a number of discrete indices of finite range, a \\ac{gft} is defined in terms of a 
field $\\varphi$ depending on a number of continuous variables taking values in a Lie group.\nIn this sense, the purely combinatorial structure of matrix models is enriched by additional group-valued degrees of freedom, interpreted as parallel transports akin to the fundamental variables in lattice gauge theory.\nNevertheless, the main ideas are similar; a \\ac{gft} perturbative expansion should generate a sum over quantum geometries and admit a consistent continuum limit.\n\nThe prototype for \\ac{gft} as an approach to quantum gravity is the {\\em Boulatov model} in three dimensions \\cite{Bou92}.\nOne defines a real field\n\\begin{equation}\n \\varphi:\n G^3 \\rightarrow \\field{R}\n \\mathcomma\n \\quad\n \\varphi(g_1,g_2,g_3)\n = \\varphi(g_2,g_3,g_1)\n = \\varphi(g_3,g_1,g_2)\n\\end{equation}\nwith an action\n\\begin{equation}\n\\eqalign{\n S[\\varphi]\n =\n&\n \\frac{1}{2}\\int\n \\intmeasure[3]{g} \\,\n \\varphi(g_1,g_2,g_3)\n \\varphi(g_1,g_2,g_3)\n\\\\\n&\n -\\frac{\\lambda}{4!}\\int\n \\intmeasure[6]{g} \\,\n \\varphi(g_1,g_2,g_3)\\varphi(g_1,g_4,g_5)\\varphi(g_2,g_5,g_6)\\varphi(g_3,g_6,g_4)\n \\mathcomma\n}\n\\end{equation}\nwhere $G$ is a Lie group and $\\intmeasure{g}$ is the Haar measure on $G$.\nNotice the `nonlocal' combinatorial pairing of field arguments in the interaction term, which is the generalisation of trace invariants such as ${\\rm tr}M^n$ in the case of a matrix model.\nOne can then show that, for $G=\\liegroup{SU}(2)$, the \\ac{gft} partition function admits a perturbative expansion of the form\n\\begin{equation}\n \\label{eq:gft_boulatov_exp}\n \\int \\mathcal{D}\\varphi\\;e^{-S[\\varphi]}=\\sum_{C}\n \\lambda^{N_{T}(C)}\\sum_{\\{j_l\\}\\in{\\rm Irrep}}\\prod_{l\\in C} (2j_l+1)\\prod_{T\\in\n C}\\left\\{ \\matrix{\n j_{T_1} & j_{T_2} & j_{T_3}\n \\cr\n j_{T_4} & j_{T_5} & j_{T_6}\n }\\right\\}\n\\end{equation}\nwhere the first sum is over all oriented three-dimensional simplicial complexes $C$, $N_T(C)$ is the number of tetrahedra in
$C$, $j_l$ is an irreducible representation of ${\\liegroup{SU}(2)}$ assigned to each link $l\\in C$, and $\\{\\cdot\\}$ is the Wigner $6j$-symbol associated to a tetrahedron $T\\in C$ (involving its six links).\nUp to the factor $\\lambda^{N_{T}(C)}$, each complex $C$ appearing in \\eref{eq:gft_boulatov_exp} is weighted by its {\\em Ponzano--Regge state sum} \\cite{PR68}, a possible definition of discrete three-dimensional quantum gravity (see e.g.~\\cite{BN09}).\nIn this sense, the perturbative expansion of the Boulatov model generates all possible triangulations (including all topologies) each weighted by a partition function for quantum gravity on this triangulation.\nThis expansion is highly divergent without further regularisation \\cite{BN09}.\nThe \\ac{gft} programme aims to extend \\eref{eq:gft_boulatov_exp} to more complicated models, in particular candidates for quantum gravity in four dimensions, where the Ponzano--Regge state sum is replaced by a general {\\em spin foam amplitude} \\cite{Rov04}: the amplitudes of the Barrett--Crane model \\cite{BC98} can be obtained as Feynman amplitudes of a \\ac{gft} defined on the three-sphere $S^3=\\liegroup{SO}(4)\/\\liegroup{SO}(3)$ \\cite{PFKR00} and this extends to a one-to-one correspondence between general spin foam amplitudes and their realisations as the perturbative expansion of a \\ac{gft} \\cite{RR01}.\nThis correspondence extends from spin foam models for Euclidean quantum gravity to models with Lorentzian signature such as \\cite{BC00} which can be defined through a \\ac{gft} on a noncompact group such as $\\liegroup{SO}(3,1)$ (see e.g.~\\cite{LO07}).\nIn this sense one could say that a \\ac{gft} defines a completion of the spin foam programme in that it not only generates spin foam amplitudes for quantum gravity on a given discretisation, but also the weights in a sum over discretisations.\n\n\\subsection{Cosmology from group field theory}\nFor general \\ac{gft} models for quantum gravity, it is 
difficult to make sense of a perturbative expansion of the form \\eref{eq:gft_boulatov_exp}.\nThe number of terms quickly grows out of control as the number of building blocks is increased and there is no obvious physical meaning to truncating such an expansion to the first few terms, i.e. to discretisations with very few building blocks.\n\\Eref{eq:gft_boulatov_exp} is really an expansion around a `no-space' vacuum in which no geometry is present at all.\n\nHowever, there is more to quantum field theory than a perturbative expansion\naround vanishing field value: interacting field theories often exhibit phase transitions to a {\\em condensate} characterised by a nonvanishing field expectation value.\nWith respect to the original Fock vacuum in which the field vanishes, a condensate has a very large number of quanta all characterised by a single quantum state (the `condensate wavefunction').\nThis is a quantum state of high symmetry and quantum coherence.\nThe key idea of {\\em \\ac{gft} condensates} is that such a configuration in \\ac{gft} is a candidate for a macroscopic, nearly homogeneous Universe, and hence a starting point for effective cosmology.\nWe refer the reader to \\cite{GOS14} for details and arguments for this geometric interpretation.\n\nWe generally define a \\ac{gft} field for a four-dimensional quantum gravity model coupled to scalar matter by\n\\begin{equation}\n \\varphi:\n G^4\\times \\field{R} \\rightarrow \\field{K}\n \\mathcomma\n \\quad\\varphi(g_1,\\ldots,g_4,\\phi)=\\varphi(g_1h,\\ldots,g_4h,\\phi)\\;\\forall h\\in G\n \\mathcomma\n\\end{equation}\nwhere $G$ is the gauge group of gravity (often assumed to be $\\liegroup{SU}(2)$) and $\\field{K}$ is either the real or complex numbers.\nThe action takes the general form\n\\begin{equation}\n S\n =\n \\int\n \\intmeasure[4]{g}\\,\n \\intmeasure{\\phi}\\,\n \\bar\\varphi(g_I,\\phi)\\,\\mathcal{K}\\varphi(g_I,\\phi)\n + \\mathcal{V}[\\varphi]\n\\end{equation}\nwhere for a real field
$\\bar\\varphi=\\varphi$ (and, to obtain the usual normalisation of a kinetic term, one has to also rescale the field), $\\mathcal{K}$ is a kinetic operator which in general contains derivatives with respect to all arguments, and all terms that are higher order than quadratic are part of $\\mathcal{V}[\\varphi]$.\nIn general, $\\mathcal{V}[\\varphi]$ is also nonlocal in a way similar to the Boulatov \\ac{gft} defined above.\nIn its Feynman expansion, such a field theory will generate graphs whose edges are labelled by $g_I$ (interpreted as parallel transports of a $G$-connection) and whose vertices are labelled by $\\phi$ interpreted as the values of a matter scalar field.\n\nIf we denote the expectation value of the field operator by\n\\begin{equation}\n \\expval{\\op{\\varphi}(g_I, \\phi)}\n =\n \\sigma(g_I,\\phi)\n \\mathcomma\n\\end{equation}\na condensate phase is then characterised by a nonvanishing $\\sigma(g_I,\\phi)$.\n\nThe {\\em mean-field approximation} which is used in most work on \\ac{gft} cosmology so far requires that the mean field $\\sigma(g_I,\\phi)$, for a \\ac{gft} with complex field, satisfies the classical \\ac{gft} equation of motion\n\\begin{equation}\n \\label{eq:gft_cosmo_meanfield_eom}\n \\mathcal{K}\\sigma(g_I,\\phi)\n +\n \\frac{\n \\delta\\mathcal{V}[\\sigma]\n }{\n \\delta\\bar\\sigma(g_I,\\phi)\n }\n =\n 0\n \\mathcomma\n\\end{equation}\nthe analogue of the Gross--Pitaevskii equation for the condensate wavefunction in condensed matter physics.\nFrom a solution $\\sigma(g_I,\\phi)$ to the equation of motion, one can then extract an observable corresponding to the total condensate volume as a function of the matter field `clock',\n\\begin{equation}\n \\label{eq:gft_cosmo_gft_volumeop}\n \\expval{\\op{V}(\\phi)}\n \\equiv\n \\int\n \\intmeasure[4]{g}\\,\n \\intmeasure[4]{g'}\\,\n V(g_I,g'_I)\n \\bar\\sigma(g_I,\\phi)\n \\sigma(g'_I,\\phi)\n \\mathcomma\n\\end{equation}\nwhere $V(g_I,g'_I)$ are matrix elements of the \\ac{gft} volume 
operator between `single-particle' states $\\ket{g_I}$ and $\\ket{g'_I}$; such an operator can be defined from the action of a volume operator in \\ac{lqg} on open spin networks with a single vertex and four links.\nDynamical equations satisfied by $\\expval{\\op{V}(\\phi)}$ and its derivatives with respect to $\\phi$ are then interpreted as effective cosmological (Friedmann) equations for the three-volume of (a patch of) the Universe, derived directly from a prescription for the microscopic dynamics of a \\ac{gft}.\n\nThe most concrete derivation of this type, for models of quantum gravity coupled to massless scalar matter, was given in \\cite{OSW16}.\nFirst the nonlinear, nonlocal equation of motion \\eref{eq:gft_cosmo_meanfield_eom} was simplified by making an `isotropic' ansatz\n\\begin{equation}\n \\sigma(g_I,\\phi)\n =\n \\sum_{j\\in{\\rm Irrep}}\n \\sigma_j(\\phi){\\bf D}^j(g_I)\n \\mathcomma\n\\end{equation}\nwhere the \\ac{gft} gauge group is taken to be $\\liegroup{SU}(2)$ and ${\\bf D}^j(g_I)$ is a fixed convolution of four Wigner $D$-matrices for the irreducible representation $j$, encoding the `shape' of the \\ac{gft} building blocks.\n(${\\bf D}^j(g_I)$ requires a choice of intertwiner $j\\otimes j\\otimes j\\otimes j\\rightarrow {\\bf 0}$; in \\cite{OSW16} this is taken to be the intertwiner with maximum eigenvalue for the volume, see \\eref{eq:gft_volume_expv}.)\n\nAssuming a quintic potential as is done for many spin foam models related to \\ac{lqg}, this reduces \\eref{eq:gft_cosmo_meanfield_eom} to a decoupled form\n\\begin{equation}\n \\label{eq:gft_cosmo_isotropic_eom}\n A_j\n \\partial_\\phi^2\\sigma_j(\\phi)\n - B_j\\sigma_j(\\phi)\n + w_j\\bar\\sigma_j(\\phi)^4\n =\n 0\n \\mathcomma\n\\end{equation}\nwhere $A_j$, $B_j$ and $w_j$ are determined by the couplings in the \\ac{gft} action.\nSince the volume operator is diagonal when written in terms of $\\liegroup{SU}(2)$ representations, the volume of a condensate in such a state is given 
by\n\\begin{equation}\n \\expval{\n \\op{V}(\\phi)\n }\n =\n \\sum_{j\\in{\\rm Irrep}}\n V_j\\,|\\sigma_j(\\phi)|^2\n\\label{eq:gft_volume_expv}\n\\end{equation}\nwhere $V_j$ is the volume eigenvalue assigned to the spin $j$ (which in general depends on the intertwiner used to define ${\\bf D}^j(g_I)$).\nIn a regime in which interactions can be neglected, and assuming that the ratios $B_j\/A_j$ take a positive maximum for some $j=j_0$, it is easy to see that for almost any solution to \\eref{eq:gft_cosmo_isotropic_eom}\\footnote{The only cases for which $V(\\phi)$ does not have the given asymptotics are solutions for which one only uses the exponentially growing or the exponentially decaying solution to \\eref{eq:gft_cosmo_isotropic_eom} for $j=j_0$.} the volume $V(\\phi) \\equiv \\expval{\\op{V}(\\phi)}$ satisfies\n\\begin{equation}\n V(\\phi)\n \\stackrel{\\phi\\rightarrow -\\infty}{\\sim}\n c_1 \\exp\\left(-2\\sqrt{\\frac{B_{j_0}}{A_{j_0}}}\\phi\\right)\n \\mathcomma\n \\quad\n V(\\phi)\n \\stackrel{\\phi\\rightarrow +\\infty}{\\sim}\n c_2 \\exp\\left(2\\sqrt{\\frac{B_{j_0}}{A_{j_0}}}\\phi\\right)\n\\end{equation}\nfor some constants $c_1$ and $c_2$ \\cite{Gie16}.\nMoreover, $V(\\phi)$ can only ever reach zero for very special initial conditions (although this case becomes generic if the \\ac{gft} field is taken to be real-valued \\cite{PS17}).\nWith the identification $\\frac{B_{j_0}}{A_{j_0}}=:3\\pi G$, this corresponds to a bounce solution interpolating between the expanding and contracting solutions to the {\\em classical} Friedmann equations for a flat \\ac{flrw} Universe filled with a massless scalar field, $V(\\phi)=V_0\\exp(\\pm\\sqrt{12 \\pi G}\\phi)$.\nSimilar conclusions apply if one considers condensates only formed by a single $j$ component, again denoted by $j_0$.\nIn the latter case, one can show that the volume satisfies an effective Friedmann equation \\cite{OSW16}\n\\begin{equation}\n \\label{eq:gft_cosmo_friedmann}\n \\left(\n 
\\frac{V'(\\phi)}{V(\\phi)}\n \\right)^2\n =\n 12\\pi G\n \\left(\n 1-\\frac{\\rho(\\phi)}{\\rho_\\critical}\n \\right)\n + \\frac{4V_{j_0}E}{V(\\phi)}\n\\end{equation}\nwhere $\\rho=\\pi_\\phi^2\/(2V^2)$, with $\\pi_\\phi$ the conserved momentum conjugate to $\\phi$, is the energy density of the massless scalar field, and $\\rho_\\critical$ is a maximal (critical) energy density similar to the one found in \\ac{lqc} \\cite{Boj01,AS11} (and we have again set $\\frac{B_{j_0}}{A_{j_0}}=:3\\pi G$).\nThe last term, involving a conserved quantity $E$ (`\\ac{gft} energy'), represents a slight modification with respect to the usual \\ac{lqc} effective dynamics.\nAgain, clearly at large volumes or late times such effective dynamics reduce to the classical Friedmann equation $(V'\/V)^2=12\\pi G$.\n\nIn this article, we strengthen the foundations of these past results.\nWe aim to obtain effective cosmological dynamics from \\ac{gft} without several assumptions that were necessary to obtain \\eref{eq:gft_cosmo_friedmann}, namely: the validity of a mean-field regime in which one solves equations for the mean field; restriction of the effective equations to simple expectation values such as $\\expval{\\op{V}(\\phi)}$ without taking into account fluctuations around these expectation values; neglecting \\ac{gft} interactions by effectively setting $w_j=0$.\n\\footnote{The work of \\cite{CPS16} included \\ac{gft} interactions into the analysis, leading to additional terms on the right-hand side of \\eref{eq:gft_cosmo_friedmann}, while working in a mean-field regime.}\nIndirectly, the results outlined so far also assumed a given Fock space structure used to define a \\ac{gft} volume operator, which has not been derived from the canonical analysis of a \\ac{gft} action.\n\n\\subsection{Toy model for group field cosmology, and a Hamiltonian for \\ac{gft}}\nA first step towards deriving effective cosmological dynamics from \\ac{gft} outside of a mean-field regime was taken in 
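\\cite{AGW18}.\n\nBefore describing that step, it is worth making the bounce structure of \\eref{eq:gft_cosmo_friedmann} explicit. For $E=0$, that equation is solved by $V(\\phi)=V_{{\\rm b}}\\cosh(\\sqrt{12\\pi G}\\,\\phi)$ with minimal volume $V_{{\\rm b}}=\\pi_\\phi\/\\sqrt{2\\rho_\\critical}$, attained exactly when $\\rho=\\rho_\\critical$. The following sketch (our illustration, not part of the original analysis) verifies this with sympy:

```python
import sympy as sp

phi = sp.Symbol('phi', real=True)
G, pi_phi, rho_c = sp.symbols('G pi_phi rho_c', positive=True)

# candidate bounce solution of the E = 0 effective Friedmann equation
V = pi_phi / sp.sqrt(2 * rho_c) * sp.cosh(sp.sqrt(12 * sp.pi * G) * phi)
rho = pi_phi**2 / (2 * V**2)  # energy density of the massless scalar

lhs = (sp.diff(V, phi) / V)**2
rhs = 12 * sp.pi * G * (1 - rho / rho_c)
assert sp.simplify((lhs - rhs).rewrite(sp.exp)) == 0

# the bounce sits at phi = 0, where the density reaches rho_c
assert sp.simplify(rho.subs(phi, 0) - rho_c) == 0
```

At $\\phi\\rightarrow\\pm\\infty$ the cosh solution reduces to the two classical branches $V\\propto\\exp(\\pm\\sqrt{12\\pi G}\\,\\phi)$, so the bounce interpolates between a contracting and an expanding classical Universe. With this picture of the mean-field bounce in hand, we turn to the first analysis that went beyond it, given in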
\\cite{AGW18}.\nOne motivation for this work was to develop a toy model in which some of the successes of \\ac{gft} cosmology could be obtained in a simpler setting, but there was also a new technical assumption: the massless scalar field $\\phi$ was proposed as a (relational) time variable, with a Hamiltonian generating evolution with respect to this clock.\nThat is, the idea was to define a {\\em deparametrised} setting in which some degrees of freedom serve as coordinates parametrising the remaining ones, a strategy widely employed in canonical quantum gravity \\cite{BK95,DGKL10}.\\footnote{This strategy can be extended to a \\ac{gft} coupled to four massless scalar fields serving as relational coordinates for both time and space \\cite{Gie18}.}\nThis approach was different from previous work on \\ac{gft} cosmology in which the fundamental \\ac{gft} formalism treated all arguments of the field on the same footing.\nThe Hamiltonian itself was chosen so as to reproduce the correct cosmological dynamics at large volume.\n\nClassical \\ac{flrw} cosmology can be defined by a volume variable $V(\\phi)$ and conjugate momentum $p_V(\\phi)$ subject to a Hamiltonian\n\\begin{equation}\n \\label{eq:gft_cosmo_dilat}\n \\mathcal{H}\n =\n \\sqrt{12 \\pi G}\\,Vp_V\n \\mathcomma\n\\end{equation}\ngenerating a dilatation as its time evolution, i.e.~the exponential solutions in $\\phi$ mentioned above.\nIn \\cite{AGW18} it was then observed that, for a Fock space generated by annihilation operators $\\op{A}^i$ and creation operators $\\hermconj{\\op{A}}_j$ (here $i,j$ run from 1 to 5) with algebra\n\\begin{equation}\n \\commutator{\n \\op{A}^i\n }{\n \\hermconj{\\op{A}}_j\n }\n =\n \\delta^i_j\n \\mathcomma\n\\end{equation}\na discrete analogue of the dilatation operator is given by a {\\em squeezing Hamiltonian}\n\\begin{equation}\n \\label{eq:gft_cosmo_squeezing}\n \\op{\\mathcal{H}}\n =\n \\frac{\\mathup{i}}{2}\n \\lambda\n \\left(\n \\hermconj{\\op{A}}_i \\hermconj{\\op{A}}_j 
\\epsilon^{ij}\n - \\op{A}^i \\op{A}^j \\epsilon_{ij}\n \\right)\n \\mathcomma\n\\end{equation}\nwhere $\\epsilon^{ij}$ is an appropriate symmetric tensor.\nIndeed, for a volume operator taken to be the multiple of the number operator\n\\begin{equation}\n \\label{eq:gft_cosmo_volume_via_number_op}\n \\op{V}\n =\n v_0 \\op{N}\n :=\n v_0 \\hermconj{\\op{A}}_i \\op{A}^i\n\\end{equation}\none can show that, for suitable states characterised by the eigenvalues of $\\op{V}$, $\\op{\\mathcal{H}}$ acts as\n\\begin{equation}\n (\\op{\\mathcal{H} }\\Psi)(V)\n \\stackrel{v_0\\rightarrow 0}{\\rightarrow}\n - \\mathup{i}\\lambda\\left(V\\frac{\\partial}{\\partial V}\n + \\frac{\\partial}{\\partial V}V\\right)\\Psi(V)\n \\mathperiod\n\\end{equation}\nThus, with the identification $\\lambda:=\\sqrt{3\\pi G}$ the continuum limit of squeezing \\eref{eq:gft_cosmo_squeezing} is compatible with the classical dilatation Hamiltonian \\eref{eq:gft_cosmo_dilat}.\nThe picture of a Fock space of `quanta of geometry' in which each quantum carries a given volume mimics the Fock space structure of \\ac{gft}, with the simplification that here each quantum comes with a fixed $v_0$ rather than a general state-dependent volume as in \\eref{eq:gft_cosmo_gft_volumeop}.\n\nGiven that the Hamiltonian is quadratic, expressions for the time evolution of observables of interest can be computed analytically.\nOne finds that\n\\begin{equation}\n \\label{eq:gft_cosmo_toymodel_n}\n \\expval{\\op{N}(\\phi)}\n =\n - \\frac{5}{2}\n +\n \\left(\n N_0 + \\frac{5}{2}\n \\right)\n \\cosh(2\\lambda\\phi)\n + Q \\sinh(2\\lambda\\phi)\n\\end{equation}\nwith\n\\begin{equation}\n N_0\n :=\n \\left.\n \\expval{\\op{N}}\n \\right|_{\\phi=0}\n \\mathcomma\n \\quad\n Q\n :=\n \\frac{1}{2}\n \\left.\n \\left(\n \\epsilon^{ij} \\expval{\\hermconj{\\op{A}}_i \\hermconj{\\op{A}}_j}\n + \\epsilon_{ij} \\expval{\\op{A}^i \\op{A}^j}\n \\right)\n \\right|_{\\phi=0}\n\\end{equation}\n(computed equivalently in the Schr\u00f6dinger or 
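Heisenberg picture).\n\nThe solution \\eref{eq:gft_cosmo_toymodel_n} can be checked symbolically: it satisfies the Heisenberg-picture equation of motion $N''(\\phi)=4\\lambda^2(N(\\phi)+5\/2)$ together with the first-order constraint $N'(\\phi)^2=4\\lambda^2[(N+5\/2)^2-(N_0+5\/2)^2+Q^2]$, which for $Q=0$ becomes the effective Friedmann equation given below. The following sketch (our illustration, not part of \\cite{AGW18}) performs both checks with sympy:

```python
import sympy as sp

phi = sp.Symbol('phi', real=True)
lam = sp.Symbol('lambda', positive=True)
N0, Q = sp.symbols('N_0 Q', real=True)

# N(phi) as in the solution for the number operator expectation value
N = -sp.Rational(5, 2) + (N0 + sp.Rational(5, 2)) * sp.cosh(2 * lam * phi) \
    + Q * sp.sinh(2 * lam * phi)

# second-order (Heisenberg) equation of motion: N'' = 4 lambda^2 (N + 5/2)
eom = sp.diff(N, phi, 2) - 4 * lam**2 * (N + sp.Rational(5, 2))
assert sp.expand(eom.rewrite(sp.exp)) == 0

# first-order constraint, the effective Friedmann equation for Q = 0
constraint = sp.diff(N, phi)**2 \
    - 4 * lam**2 * ((N + sp.Rational(5, 2))**2
                    - (N0 + sp.Rational(5, 2))**2 + Q**2)
assert sp.expand(constraint.rewrite(sp.exp)) == 0
```

Dividing the constraint by $N^2$ and using $V=v_0N$ yields, for $Q=0$, precisely the effective Friedmann equation displayed below; such checks are insensitive to the choice of picture (they can be carried out equivalently in the Schr\u00f6dinger or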
Heisenberg picture).\nAt late or early times $\\phi\\rightarrow\\pm\\infty$ (and with $\\lambda:=\\sqrt{3\\pi G}$), the expectation value $V(\\phi)\\equiv\\expval{\\op{V}(\\phi)}$ then again reproduces the classical solution $V(\\phi)=V_0\\exp(\\pm\\sqrt{12\\pi G}\\phi)$\\,.\nMoreover, one can show that only special initial conditions (such as choosing the Fock vacuum as initial state) lead to a solution that ever encounters a singularity where $V(\\phi_0)=0$ for some $\\phi_0$.\nGeneric solutions avoid the classical singularity and describe a bounce connecting the classical expanding and contracting branches.\n\nThe quantity $Q$ cannot be directly interpreted in terms of the volume or energy density of the corresponding classical cosmology; its presence leads to an asymmetry in the solution \\eref{eq:gft_cosmo_toymodel_n}. For simplicity, the further analysis of \\cite{AGW18} only considered the case $Q=0$, for which one obtains the effective Friedmann equation\n\\begin{equation}\n \\left(\n \\frac{V'(\\phi)}{V(\\phi)}\n \\right)^2\n =\n 4\\lambda^2\n \\left(\n 1\n + \\frac{5v_0}{V(\\phi)}\n - \\frac{N_0(N_0+5)v_0^2}{V(\\phi)^2}\n \\right)\n \\mathperiod\n\\end{equation}\nThe similarity to the effective Friedmann equation \\eref{eq:gft_cosmo_friedmann} of \\ac{gft} in the mean-field setting is apparent.\nIn this sense, the toy model based on a squeezing Hamiltonian \\eref{eq:gft_cosmo_squeezing} already reproduced several previous results in \\ac{gft} cosmology.\n\nOne reason for this close connection was uncovered in \\cite{WEw19,GPW19} where, taking again a deparametrised viewpoint, a Hamiltonian formalism was derived from a Legendre transformation of the full (free) \\ac{gft} action in which the `matter' argument $\\phi$ of the \\ac{gft} field is taken as a time coordinate.\nThe starting point is an action for real \\ac{gft} fields of the form\n\\begin{equation}\n \\label{eq:gft_cosmo_gft_hamiltonian_action}\n S\n =\n \\frac{1}{2}\n \\int\n 
\\intmeasure{\\phi}\n \\sum_{\\vec{\\jmath},\\vec{m},\\iota}\n \\varphi^{\\vec{\\jmath},\\iota}_{-\\vec{m}}(\\phi)\n \\left[\n \\mathcal{K}_{\\vec{\\jmath},\\vec{m},\\iota}^{(0)}\n + \\mathcal{K}_{\\vec{\\jmath},\\vec{m},\\iota}^{(2)}\\partial_\\phi^2\n \\right]\n \\varphi^{\\vec{\\jmath},\\iota}_{\\vec{m}}(\\phi)\n + \\mathcal{V}[\\varphi]\n \\mathcomma\n\\end{equation}\nwhere the field $\\varphi(g_I,\\phi)$ has been decomposed into Peter--Weyl modes according to\n\\begin{equation}\n \\varphi(g_I,\\phi)\n =\n \\sum_{j_I\\in{\\rm Irrep}}\n \\sum_{m_I,n_I=-j_I}^{j_I}\n \\sum_{\\iota}\\varphi^{\\vec{\\jmath},\\iota}_{\\vec{m}}(\\phi)\\,\n \\intertwiner^{\\vec{\\jmath},\\iota}_{\\vec{n}}\n \\prod_{I=1}^4\n \\sqrt{2j_I+1}\\,\n D^{j_I}_{m_I n_I}(g_I)\n\\end{equation}\nand $\\iota$ labels a basis of intertwiners $\\intertwiner$ for the representation labels $\\{j_I\\}$ (and the sum over $(\\vec{\\jmath},\\vec{m},\\iota)$ in \\eref{eq:gft_cosmo_gft_hamiltonian_action} is a shorthand for the sums appearing in this decomposition).\nFor a real field, these Peter--Weyl coefficients satisfy the reality condition\n\\begin{equation}\n \\compconj*{\n \\varphi^{\\vec{\\jmath},\\iota}_{\\vec{m}}(\\phi)\n }\n =\n (-1)^{\\sum_I (j_I-m_I)}\\varphi^{\\vec{\\jmath},\\iota}_{-\\vec{m}}(\\phi)\n \\mathperiod\n\\end{equation}\nThe Hamiltonian can then be written as\n\\begin{equation}\n \\mathcal{H}\n =\n - \\frac{1}{2}\n \\sum_{\\vec{\\jmath},\\vec{m},\\iota}\n \\left[\n \\frac{\n \\pi^{\\vec{\\jmath},\\iota}_{\\vec{m}}\\pi^{\\vec{\\jmath},\\iota}_{-\\vec{m}}\n }{\n \\mathcal{K}_{\\vec{\\jmath},\\vec{m},\\iota}^{(2)}\n }\n +\n \\mathcal{K}_{\\vec{\\jmath},\\vec{m},\\iota}^{(0)}\n \\varphi^{\\vec{\\jmath},\\iota}_{\\vec{m}}\n \\varphi^{\\vec{\\jmath},\\iota}_{-\\vec{m}}\n \\right]\n - \\mathcal{V}[\\varphi]\n\\end{equation}\nwhose free part corresponds to either a harmonic oscillator or an `upside down' harmonic oscillator for each mode, depending on the signs of the couplings 
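$\\mathcal{K}_{\\vec{\\jmath},\\vec{m},\\iota}^{(0)}$ and $\\mathcal{K}_{\\vec{\\jmath},\\vec{m},\\iota}^{(2)}$.\n\nTo make the origin of this Hamiltonian transparent, consider a single real mode and suppress all labels (a standard Legendre transformation, spelled out here for convenience). After integrating by parts, the Lagrangian obtained from \\eref{eq:gft_cosmo_gft_hamiltonian_action} is $L=\\frac{1}{2}(\\mathcal{K}^{(0)}\\varphi^2-\\mathcal{K}^{(2)}(\\partial_\\phi\\varphi)^2)$, so that\n\\begin{equation}\n \\pi\n =\n \\frac{\\partial L}{\\partial(\\partial_\\phi\\varphi)}\n =\n -\\mathcal{K}^{(2)}\\partial_\\phi\\varphi\n \\mathcomma\n \\quad\n \\mathcal{H}\n =\n \\pi\\,\\partial_\\phi\\varphi-L\n =\n -\\frac{1}{2}\\left[\\frac{\\pi^2}{\\mathcal{K}^{(2)}}+\\mathcal{K}^{(0)}\\varphi^2\\right]\n \\mathcomma\n\\end{equation}\nwhich reproduces the expression above mode by mode; the overall minus sign is what allows `upside-down' oscillator modes. Whether a given mode is a standard or an inverted oscillator is determined by the relative signs of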
$\\mathcal{K}_{\\vec{\\jmath},\\vec{m},\\iota}^{(0)}$ and $\\mathcal{K}_{\\vec{\\jmath},\\vec{m},\\iota}^{(2)}$.\nAnnihilation and creation operators can then be defined by\n\\begin{eqnarray}\n \\label{eq:gft_cosmo_annihilation}\n \\op{a}_{\\vec{\\jmath},\\vec{m},\\iota}\n & = &\n \\frac{1}{\n \\sqrt{\n 2\n \\abs{\\mathcal{K}^{(2)}_{\\vec{j}, \\vec{m}, \\iota}}\n \\omega^{\\vec{\\jmath},\\iota}_{\\vec{m}}\n }\n }\n \\left(\n \\abs{\\mathcal{K}^{(2)}_{\\vec{j}, \\vec{m}, \\iota}}\n \\omega^{\\vec{\\jmath},\\iota}_{\\vec{m}}\n \\op\\varphi^{\\vec{\\jmath},\\iota}_{\\vec{m}}\n +\\mathup{i} (-1)^{\\sum_I (j_I-m_I)}\\op\\pi^{\\vec{\\jmath},\\iota}_{-\\vec{m}}\n \\right)\n \\\\\n \\label{eq:gft_cosmo_creation}\n \\hermconj{\\op{a}}_{\\vec{\\jmath},\\vec{m},\\iota}\n & = &\n \\frac{1}{\n \\sqrt{\n 2\n \\abs{\\mathcal{K}^{(2)}_{\\vec{j}, \\vec{m}, \\iota}}\n \\omega^{\\vec{\\jmath},\\iota}_{\\vec{m}}\n }\n }\n \\left(\n (-1)^{\\sum_I (j_I-m_I)}\n \\abs{\\mathcal{K}^{(2)}_{\\vec{j}, \\vec{m}, \\iota}}\n \\omega^{\\vec{\\jmath},\\iota}_{\\vec{m}}\n \\op\\varphi^{\\vec{\\jmath},\\iota}_{-\\vec{m}}\n -\\mathup{i} \\op\\pi^{\\vec{\\jmath},\\iota}_{\\vec{m}}\n \\right)\n \\mathcomma\n\\end{eqnarray}\nwhere\n$\n\\omega^{\\vec{\\jmath},\\iota}_{\\vec{m}}\n=\n\\sqrt{\n \\abs{\n \\mathcal{K}_{\\vec{\\jmath},\\vec{m},\\iota}^{(0)}\n \/ \\mathcal{K}_{\\vec{\\jmath},\\vec{m},\\iota}^{(2)}\n }\n}\n$.\nThe free Hamiltonian is then $\\op{\\mathcal{H}} = \\sum_{\\vec{\\jmath},\\vec{m},\\iota} \\op{\\mathcal{H}}_{\\vec{\\jmath},\\vec{m},\\iota}$ written as a sum of single-mode Hamiltonians $\\op{\\mathcal{H}}_{\\vec{\\jmath},\\vec{m},\\iota}$.\nFor a mode for which $\\mathcal{K}_{\\vec{\\jmath},\\vec{m},\\iota}^{(0)}$ and $\\mathcal{K}_{\\vec{\\jmath},\\vec{m},\\iota}^{(2)}$ have different signs the single-mode Hamiltonian is given by\n\\begin{equation}\n \\op{\\mathcal{H}}_{\\vec{\\jmath},\\vec{m},\\iota}\n =\n -\\frac{1}{2}\n {\\rm 
sgn}(\\mathcal{K}_{\\vec{\\jmath},\\vec{m},\\iota}^{(0)})\n \\omega_{\\vec{m}}^{\\vec{j}, \\iota}\n \\left(\n \\hermconj{\\op{a}}_{\\vec{\\jmath},\\vec{m},\\iota}\n \\hermconj{\\op{a}}_{\\vec{\\jmath},-\\vec{m},\\iota}\n + \\op{a}_{\\vec{\\jmath},\\vec{m},\\iota}\n \\op{a}_{\\vec{\\jmath},-\\vec{m},\\iota}\n \\right)\n\\end{equation}\nwhich is analogous to the squeezing operator \\eref{eq:gft_cosmo_squeezing} (after redefinition by a phase $\\op{A}^i \\rightarrow e^{\\mathup{i}\\pi\/4}\\op{A}^i$ and $\\hermconj{\\op{A}}_j \\rightarrow e^{-\\mathup{i}\\pi\/4}\\hermconj{\\op{A}}_j$, \\eref{eq:gft_cosmo_squeezing} becomes $\\op{\\mathcal{H}}=\\frac{1}{2}\\lambda(\\hermconj{\\op{A}}_i \\hermconj{\\op{A}}_j\\epsilon^{ij}+\\op{A}^i\\op{A}^j\\epsilon_{ij})$).\nIn this sense, at least for modes with magnetic indices $m_i=0$ for which there is no coupling between modes, the Hamiltonian dynamics coming from the quadratic part of the full \\ac{gft} action is exactly of squeezing type.\n \n\\section{A toy model revisited}\n\\label{sec:tm}\nIn this section, we study the dynamics of \\ac{gft} for a single field mode, in the approximation where \\ac{gft} interactions are neglected.\nThe observation that a squeezing operator as used in \\cite{AGW18} is already the (free) \\ac{gft} Hamiltonian for a mode in which all the magnetic indices are zero motivates us to revisit the model studied in \\cite{AGW18}.\nThe restriction of this model to a single field mode can be partially motivated by results in \\cite{Gie16} that suggest \\ac{gft} dynamics are generically dominated by a single value for the spin $j$.\nIn the next section, we will add interactions and go beyond the assumption of free dynamics.\n\nWe make use of the observation of \\cite{BBC19} that the fundamental operators appearing in this model (representing the Hamiltonian and volume operators) generate an $\\liealg{su}(1, 1)$ algebra.\nWe will extend some results both of the toy model analysis \\cite{AGW18} and of the 
algebraic discussion in \\cite{BBC19}: we will discuss general algebraic expressions representing effective Friedmann equations, and then specify by choosing different classes of coherent states.\nImportantly, we will compute relative uncertainties for the main physical quantities and use them as a criterion for the selection of good semiclassical states.\n\nThe Hamiltonian we consider is the one-mode squeezing Hamiltonian\n\\begin{equation}\n \\label{eq:tm_hamiltonian}\n \\op{H}\n =\n - \\frac{\\omega}{2}\n (\n \\op{a}^2\n + \\hermconj{\\op{a}}{}^2\n )\n \\mathperiod\n\\end{equation}\nAs in previous work the main observable of interest is the volume operator\n\\begin{equation}\n \\label{eq:tm_n_v_relation}\n \\op{V}\n =\n v_0\n \\op{N}\n :=\n v_0\n \\hermconj{\\op{a}}\n \\op{a}\n \\mathcomma\n\\end{equation}\nwhere $v_0$ would be the eigenvalue for the \\ac{gft} volume operator for the representation (and intertwiner) chosen for the model, i.e., the volume associated to a single quantum in this mode.\nWe are working in a deparametrised framework in which the Hamiltonian generates time evolution with respect to scalar field time $\\phi$.\nThe energy expectation value $\\expval{\\op{H}}$ thus physically represents the conjugate momentum $\\pi_\\phi$ of $\\phi$.\nWe can then define an effective energy density of the matter scalar field $\\phi$, at the level of expectation values, by\n\\begin{equation}\n \\label{eq:tm_energy_density}\n \\rho_\\phi(\\phi)\n =\n \\frac{\n \\expval{\\op{H}}^2\n }{\n 2 \\expval{\\op{V}(\\phi)}^2\n }\n \\mathcomma\n\\end{equation}\ngiven the classical expression $\\rho_\\phi=\\pi_\\phi^2\/(2V^2)$.\nThis definition extends the one used in the mean-field setting (see \\eref{eq:gft_cosmo_friedmann}) which also only included expectation values of elementary operators $\\op{H}$ and $\\op{V}$.\nOther definitions using composite operators would be possible and would differ from \\eref{eq:tm_energy_density} by higher moments such as 
$\\expval{\\op{H^2}}-\\expval{\\op{H}}^2$.\nNotice that inverse operators such as $\\op{V}(\\phi)^{-2}$ are not obviously well-defined in the \\ac{gft} formalism.\n\n\\subsection{\\texorpdfstring{$\\liealg{su}(1, 1)$}{su(1, 1)} structure of the system}\nAs was first pointed out for \\ac{gft} cosmology in \\cite{BBC19}, the operators $\\op{V}$ and $\\op{H}$ generate the Lie algebra $\\liealg{su}(1, 1)$, which appears frequently in the context of quantum cosmology for a flat \\ac{flrw} Universe filled with a free scalar field, see e.g.\\ \\cite{Boj07,EM12,AL19}.\nThe three independent quadratic products of creation and annihilation operators form a realisation of the $\\liealg{su}(1,1)$ algebra.\nIn particular the identifications\n\\begin{equation}\n \\label{eq:su11_bosonic_realisation}\n \\op{K}_0\n =\n \\frac{1}{4}\n \\left(\n \\hermconj{\\op{a}}\n \\op{a}\n +\n \\op{a}\n \\hermconj{\\op{a}}\n \\right)\n = \\frac{1}{2}\\op{N}+\\frac{1}{4}\\op{I}\n \\mathcomma\n \\qquad\n \\op{K}_+\n =\n \\frac{1}{2}\n \\hermconj{\\op{a}}{}^2\n \\mathcomma\n \\qquad\n \\op{K}_-\n =\n \\frac{1}{2}\n \\op{a}^2\n\\end{equation}\n(where $\\op{I}$ denotes the identity)\ngive the $\\liealg{su}(1,1)$ relations with the usual normalisation\n\\begin{equation}\n \\commutator{\n \\op{K}_0\n }{\n \\op{K}_\\pm\n }\n =\n \\pm \\op{K}_\\pm\n \\mathcomma\n \\qquad\n \\commutator{\n \\op{K}_-\n }{\n \\op{K}_+\n }\n =\n 2 \\op{K}_0\n \\mathperiod\n\\end{equation}\nThe Casimir of $\\liealg{su}(1, 1)$ is given by\n\\begin{equation}\n \\label{eq:su11_casimir}\n \\op{C}\n =\n (\\op{K}_0)^2\n -\n \\frac{1}{2}\n (\n \\op{K}_+\n \\op{K}_-\n +\n \\op{K}_-\n \\op{K}_+\n )\n \\mathperiod\n\\end{equation}\nIn terms of the $\\liealg{su}(1, 1)$ generators the Hamiltonian \\eref{eq:tm_hamiltonian} reads\n\\begin{equation}\n \\label{eq:tm_hamiltonian_su11}\n \\op{H}\n =\n - \\omega\n (\n \\op{K}_+\n +\n \\op{K}_-\n )\n \\mathperiod\n\\end{equation}\nAs one can see from \\eref{eq:su11_bosonic_realisation} the 
dynamics of the operator $\\op{K}_0$ are intimately related to the dynamics of the volume operator $\\op{V} = v_0 \\op{N}$.\nWe consider here only the $\\liealg{su}(1, 1)$ representations of the discrete ascending series in which the operator $\\op{K}_0$, and hence the volume, is bounded from below.\\footnote{In general, for such representations one can only say that $\\expval{\\op{N}}>-\\frac{1}{2}$.\nBelow we mostly focus on Fock representations, for which $\\op{N}$ is always nonnegative.}\n(A more general discussion would also include other types of representations, for which there is no such lower bound. See also the comments in \\cite[Sec.~4]{BBC19}.)\n\nThese representations are labelled by a real positive number $k$, the so-called Bargmann index, and satisfy\n\\begin{eqnarray}\n &\n \\op{K}_-\n \\ket{k, 0}\n =\n 0\n \\mathcomma\n \\\\\n &\n \\op{K}_0\n \\ket{k, m}\n =\n (k + m)\n \\ket{k, m}\n \\mathcomma\n \\\\\n &\n \\op{C}\n \\ket{k, m}\n =\n k (k-1)\n \\ket{k, m}\n \\mathcomma\n\\end{eqnarray}\nwhere the states $\\ket{k, m}$ are the normalised states proportional to\n$(\\op{K}_+)^m \\ket{k, 0}$.\nSee \\aref{app:su11} for more details.\n\nWhen one inserts the realisation of the $\\liealg{su}(1, 1)$ operators in terms of bosonic creation and annihilation operators \\eref{eq:su11_bosonic_realisation} into the Casimir \\eref{eq:su11_casimir}, one finds that the Casimir is $\\op{C} = - 3\/16 \\op{I}$ which implies a Bargmann index of either $k=1\/4$ or $k=3\/4$.\nThese two cases respectively correspond to representations spanned by the eigenstates of the number operator with even or odd eigenvalues.\nThe choice $k=1\/4$ appears more interesting physically since it contains the Fock vacuum (or cosmological `singularity') in which no quanta are present.\nSince we are interested in studying the Fock space representations of the \\ac{gft} field, we will mostly restrict the Bargmann index to these cases.\n\n\\subsection{Classes of coherent states and relative 
uncertainties}\n\\label{sec:tm_class_of_coh_stat_and_rel_uncert}\nThe time evolution of a system can be quite sensitive to its initial state.\nIn this section we discuss classes of coherent states and comment on their usefulness in the context of \\ac{gft} cosmology.\nThe coherent states we consider are the following:\n\\begin{itemize}\n \\item\n Fock coherent states,\n \\item\n \\acf{pg} coherent states of $\\liealg{su}(1, 1)$,\n \\item\n \\acf{bg} coherent states of $\\liealg{su}(1, 1)$.\n\\end{itemize}\n\nThe Fock coherent states correspond to the well-known coherent states of the harmonic oscillator, labelled by a complex number $\\sigma$.\nOne possible way to define the Fock coherent states is by acting with the displacement operator on the Fock vacuum,\n\\begin{equation}\n \\label{eq:tm_coh_state_fock}\n \\ket{\\sigma}\n =\n \\exp\\left(\n \\sigma \\hermconj{\\op{a}}\n - \\compconj{\\sigma} \\op{a}\n \\right)\n \\ket{0}\n \\mathperiod\n\\end{equation}\n\nThe \\ac{pg} coherent states are obtained by acting on the $\\liealg{su}(1, 1)$ ground state $\\ket{k, 0}$ with a squeezing operator\n\\begin{equation}\n \\op{S}(\\xi)\n =\n \\exp\\left(\n \\xi\n \\op{K}_+\n -\n \\compconj{\\xi}\n \\op{K}_-\n \\right)\n \\mathperiod\n\\end{equation}\nWe will denote the \\ac{pg} coherent states by $\\ket{\\zeta, k}$ and they are obtained by the following choice of squeezing parameter\n\\begin{equation}\n \\ket{\n \\zeta, k\n }\n =\n \\op{S}\\left(\n \\frac{\\zeta}{\\abs{\\zeta}}\n \\artanh{\\abs{\\zeta}}\n \\right)\n \\ket{k, 0}\n \\mathcomma\n \\qquad\n \\abs{\\zeta} < 1\n \\mathperiod\n\\end{equation}\n\nThe \\ac{bg} coherent states will be denoted by $\\ket{\\chi, k}$ and are defined to be the eigenstates of the lowering operator $\\op{K}_-$,\n\\begin{equation}\n \\op{K}_-\n \\ket{\\chi, k}\n =\n \\chi\n \\ket{\\chi, k}\n \\mathperiod\n\\end{equation}\n\nWe now turn to the question of which class of coherent states should be considered in the context of \\ac{gft} 
cosmology.\nIn the context of quantum cosmology a commonly studied quantity is the relative uncertainty of the volume operator.\nIt is argued that the magnitude of the relative uncertainty corresponds to a measure of `quantumness' of the system at some given time and it is therefore important that the theory allows for initial states which give a (comparatively) small value for the relative uncertainty at late times since then the system has become (semi-)classical.\nFor previous works commenting on this in the context of \\ac{lqc} see, e.g., \\cite{AG15} and in the context of \\ac{gft} see, e.g., \\cite{PS17}.\nIn addition one would also require the relative uncertainty of the energy (which is a constant of motion) to be very small.\n\nWe define the relative uncertainty of an operator $\\op{O}$ for a given state $\\ket{\\psi}$ as\n\\begin{equation}\n r(\\op{O}, \\ket{\\psi})\n =\n \\frac{\n \\bra{\\psi}\n \\op{O}^2\n \\ket{\\psi}\n -\n \\bra{\\psi}\n \\op{O}\n \\ket{\\psi}^2\n }{\n \\bra{\\psi}\n \\op{O}\n \\ket{\\psi}^2\n }\n \\mathperiod\n\\end{equation}\nIn the following we state the relative uncertainty of the Hamiltonian and the asymptotic relative uncertainty of the volume operator at large volumes (i.e., for $\\phi\\rightarrow\\pm\\infty$), for the three classes of coherent states that we are interested in.\n\nFor the Fock coherent states one obtains for the relative uncertainty of the Hamiltonian and the asymptotic relative uncertainty of the volume operator\n\\begin{eqnarray}\n \\label{eq:tm_rel_uncertainty_h_fock}\n r(\n \\op{H},\n \\ket{\\sigma}\n )\n =\n \\frac{\n 2 (1 + 2 \\abs{\\sigma}^2)\n }{\n (\\sigma^2 + \\compconj{\\sigma}^2)^2\n }\n \\mathcomma\n \\\\\n \\label{eq:tm_rel_uncertainty_v_fock}\n r(\n \\op{V}(\\pm \\infty),\n \\ket{\\sigma}\n )\n =\n \\frac{\n 2\n \\left(\n 1 \\mp 2 \\mathup{i} ( \\sigma \\pm \\mathup{i} \\compconj{\\sigma} )^2\n \\right)\n }{\n \\left(\n 1 \\mp \\mathup{i} ( \\sigma \\pm \\mathup{i} \\compconj{\\sigma} )^2\n 
\\right)^2\n }\n \\mathperiod\n\\end{eqnarray}\nIn principle the value of the parameter $\\sigma$ is arbitrary, and therefore for suitable choices of $\\sigma$ the asymptotic relative uncertainty in both energy and volume becomes arbitrarily small.\nThese states can hence be interpreted as becoming semiclassical, consistent with arguments from \\ac{gft} that suggest that such `condensates' are good candidates for effective semiclassical macroscopic geometries \\cite{GOS14,GS16,Ori17a,PS19}.\n\nFor the \\ac{pg} coherent states the relative uncertainties of interest are\n\\begin{eqnarray}\n r(\n \\op{H},\n \\ket{\\zeta, k}\n )\n =\n \\frac{1}{2k}\n \\frac{\n (1 + \\zeta^2)\n (1 + \\compconj{\\zeta}^2)\n }{\n (\\zeta + \\compconj{\\zeta})^2\n }\n \\mathcomma\n \\\\\n r(\n \\op{V}(\\pm \\infty),\n \\ket{\\zeta, k}\n )\n =\n \\frac{1}{2k}\n \\mathperiod\n\\end{eqnarray}\nThe asymptotic relative uncertainty of the volume operator is independent of the parameter labelling the different \\ac{pg} coherent states.\nThis suggests that for \\ac{pg} coherent states the classical limit is reached for $k \\rightarrow \\infty$.\nHowever, we saw before that if we want to consider coherent states living in a bosonic Fock representation (rather than a more general $\\liealg{su}(1,1)$ representation), this restricts the values of the Bargmann index to either $k = 1\/4$ or $k = 3\/4$.\nThus we conclude that in the context of \\ac{gft} cosmology the class of \\ac{pg} coherent states does not `classicalise' at late times and hence, even though these states are naturally suggested by the $\\liealg{su}(1,1)$ structure, they do not appear to be good candidate states for effective macroscopic cosmologies in \\ac{gft}.\n\nFor completeness we also state the relative uncertainties for the \\ac{bg} coherent states\n\\begin{eqnarray}\n &r(\n \\op{H},\n \\ket{\\chi, k}\n )\n =\n \\frac{2}{\n (\\chi + \\compconj{\\chi})^2\n }\n \\left[\n k\n +\n \\abs{\\chi}\n \\frac{\n I_{2k}(2 \\abs{\\chi})\n }{\n 
I_{2k-1}(2 \\abs{\\chi})\n }\n \\right]\n \\mathcomma\n \\\\\n&\n\\eqalign{\n r(\n \\op{V}(\\pm \\infty),\n \\ket{\\chi, k}\n )\n =\n &\n 2\n \\Big[\n - 2 \\abs{\\chi}^2\n I_{2k}(2 \\abs{\\chi})^2\n +\n (3 - 4k)\n \\abs{\\chi}\n I_{2k}(2\\abs{\\chi})\n I_{2k-1}(2\\abs{\\chi})\n \\\\\n &\n \\qquad\n +\n (k \\mp \\mathup{i}(\\chi - \\compconj{\\chi}) + 2 \\abs{\\chi}^2)\n I_{2k-1}(2\\abs{\\chi})^2\n \\Big]\n \\\\\n &\n \\times\n \\Big[\n 2 \\abs{\\chi}\n I_{2k}(2 \\abs{\\chi})\n +\n (2 k \\mp \\mathup{i}(\\chi - \\compconj{\\chi}))\n I_{2k-1}(2 \\abs{\\chi})\n \\Big]^{-2}\n \\mathcomma\n}\n\\end{eqnarray}\nwhere $I_\\alpha(x)$ is the modified Bessel function of the first kind.\nWe can use an asymptotic expansion of the modified Bessel functions to get the relative uncertainties for large values of $\\abs{\\chi}$,\n\\begin{eqnarray}\n &r(\n \\op{H},\n \\ket{\\chi, k}\n )\n \\stackrel{\\abs{\\chi} \\rightarrow \\infty}{\\sim}\n \\frac{\n 2\n \\abs{\\chi}\n }{\n (\\chi + \\compconj{\\chi})^2\n }\n \\mathcomma\n\\\\\n&\n r(\n \\op{V}(\\pm \\infty),\n \\ket{\\chi, k}\n )\n \\stackrel{\\abs{\\chi} \\rightarrow \\infty}{\\sim}\n \\frac{\n 2\n }{\n 2 \\abs{\\chi} \\mp \\mathup{i} (\\chi - \\compconj{\\chi})\n }\n \\mathcomma\n\\end{eqnarray}\nwhich shows that the asymptotic relative uncertainties for the \\ac{bg} coherent states are also arbitrarily small for large values of $\\abs{\\chi}$.\nHence these states would also be suitable states for \\ac{gft} cosmology.\nHowever, the ubiquitous appearance of the modified Bessel functions makes calculations with the \\ac{bg} states quite cumbersome.\nBelow we mostly focus on Fock coherent states which are easier to calculate with.\n\nIn \\fref{fig:tm_rel_uncertainty_comp} we provide an overview of the time dependence of the relative uncertainty of the various states discussed here.\nOne notable aspect is that the uncertainties are asymmetric with respect to time.\nWhile the uncertainty can be asymptotically small in the future, it 
might have been asymptotically large in the past.\nIn particular, one could in general not conclude from the emergence of a semiclassical regime at late times, in which the relative uncertainties remain small, that the same was true at early times in the collapsing pre-bounce phase.\nIn order to quantify this asymmetry of the asymptotic relative uncertainty, we define an `asymptotic asymmetry parameter'\n\\begin{equation}\n \\label{eq:tm_asymmetry_parameter}\n \\eta(\\ket{\\psi})\n =\n 1\n -\n \\min\n \\left\\{\n \\frac{\n r(\\op{V}(+\\infty), \\ket{\\psi})\n }{\n r(\\op{V}(-\\infty), \\ket{\\psi})\n }\n ,\n \\frac{\n r(\\op{V}(-\\infty), \\ket{\\psi})\n }{\n r(\\op{V}(+\\infty), \\ket{\\psi})\n }\n \\right\\}\n \\mathperiod\n\\end{equation}\nThe values this parameter can take lie between zero and one, where small values signify that the relative uncertainty is the same in the asymptotic past and future whereas values close to one correspond to a large past-future asymmetry.\nIn \\fref{fig:tm_asymmetry_parameter} the asymptotic asymmetry parameter is shown for Fock and \\ac{bg} coherent states as a function of the argument $\\theta$ of the complex parameters characterising the state, i.e.\\ $\\chi = \\abs{\\chi} \\exp(\\mathup{i} \\theta)$ and $\\sigma = \\abs{\\sigma} \\exp(\\mathup{i} \\theta)$.\n Note that even though the plot is done for some specific value of the absolute value of the coherent state parameters, the situation is generic.\n Only for very small absolute values ($\\abs{\\sigma} \\ll 1, \\abs{\\chi} \\ll 1$) is the asymmetry parameter close to one for all values of the argument $\\theta$.\nFrom this it becomes apparent that the past-future asymmetry is rather generic.\nSimilar questions have been discussed previously in the context of \\ac{lqc} \\cite{Boj07a,CP08,Boj08}; our analysis here extends them from the mean-field calculations in \\cite{PS17} to broader classes of states of interest for \\ac{gft} cosmology.\n\n\n\\begin{figure}[htpb]\n 
\\centering\n \\includegraphics{fig-tm_rel_uncertainty_comp.pdf}\n \\caption{The relative uncertainty of the volume operator as a function of $\\omega \\phi$ for $k=1\/4$.\n The complex parameter $x$ given in the figure is related to the parameters of the coherent states in the following manner.\n \\ac{pg}:\n $\\ket{\\zeta, k} \\equiv \\ket{(x\/\\abs{x}) \\tanh(\\abs{x}),1\/4}$,\n \\ac{bg}:\n $\\ket{\\chi, k} \\equiv \\ket{x, 1\/4}$,\n Fock:\n $\\ket{\\sigma} \\equiv \\ket{x}$.\n For the bottom-left plot, $|x|\\ll 1$ and these states have very small volume around $\\phi=0$, which leads to the large relative uncertainties.\n }\\label{fig:tm_rel_uncertainty_comp}\n\\end{figure}\n\n\\begin{figure}[htpb]\n \\centering\n \\includegraphics{fig-tm_asymmetry_parameter.pdf}\n \\caption{The asymptotic asymmetry parameter \\eref{eq:tm_asymmetry_parameter} as a function of the argument of the complex coherent state parameters for $k=1\/4$.\n The argument $\\theta$ is related to the parameters of the coherent states in the following way.\n \\ac{bg}:\n $\\ket{\\chi, k} \\equiv \\ket{100 \\exp(\\mathup{i} \\theta), 1\/4}$,\n Fock:\n $\\ket{\\sigma} \\equiv \\ket{100 \\exp(\\mathup{i} \\theta)}$.\n }\\label{fig:tm_asymmetry_parameter}\n\\end{figure}\n\n\\subsection{Effective Friedmann equations}\nIn order to understand the cosmological implications of the model, we derive in this section an effective Friedmann equation\n\\begin{equation}\n \\label{eq:tm_eff_fried_formal}\n \\left(\n \\frac{\n V'(\\phi)\n }{\n V(\\phi)\n }\n \\right)^2\n =\n f[V(\\phi)]\n \\mathcomma\n\\end{equation}\nwhere we introduced the compact notation $V(\\phi) \\equiv \\expval{\\op{V}(\\phi)}$ and $f[V(\\phi)]$ is some functional to be specified later.\nThe method we will be employing to solve this problem is an algebraic approach introduced in \\cite{BBC19}, which we extend to noncommuting variables and connect to the Fock representation underlying the kinematics of \\ac{gft}.\n\nWe work in the Heisenberg 
picture and assume that the Schr\u00f6dinger and Heisenberg pictures coincide at $\\phi = 0$.\nOperators without an argument denote Schr\u00f6dinger picture operators.\nThe equations of motion for $\\op{K}_0$ and $\\op{K}_+ - \\op{K}_-$ are given by ($\\op{K}_+ + \\op{K}_-$ is proportional to the Hamiltonian and therefore constant under time evolution)\n\\begin{eqnarray}\n &\n \\label{eq:tm_k0_eom}\n \\op{K}_0'\n (\\phi)\n =\n \\mathup{i} \\omega\n (\n \\op{K}_+\n -\n \\op{K}_-\n )\n (\\phi)\n \\mathcomma\n \\\\\n &\n (\\op{K}_+ - \\op{K}_-)'(\\phi)\n =\n - 4 \\mathup{i} \\omega \\op{K}_0\n\\end{eqnarray}\nwhich are solved by\n\\begin{eqnarray}\n \\label{eq:tm_k0_heisenberg}\n &\n \\op{K}_0(\\phi)\n =\n \\op{K}_0\n \\cosh(2 \\omega \\phi)\n +\n \\frac{\\mathup{i}}{2}\n (\\op{K}_+ - \\op{K}_-)\n \\sinh(2 \\omega \\phi)\n \\mathcomma\n \\\\\n &\n (\\op{K}_+ - \\op{K}_-)(\\phi)\n =\n (\\op{K}_+ - \\op{K}_-)\n \\cosh(2 \\omega \\phi)\n -\n 2 \\mathup{i} \\op{K}_0\n \\sinh(2 \\omega \\phi)\n \\mathperiod\n\\end{eqnarray}\nFrom this one gets for the time dependence of the number operator\n\\begin{equation}\n \\label{eq:tm_n_heisenberg}\n \\op{N}(\\phi)\n =\n -\\frac{1}{2}\n +\n \\left(\n \\op{N}\n + \\frac{1}{2} \\op{I}\n \\right)\n \\cosh(2 \\omega \\phi)\n +\n \\mathup{i}\n (\\op{K}_+ - \\op{K}_-)\n \\sinh(2 \\omega \\phi)\n \\mathperiod\n\\end{equation}\nThe expectation value of this is nonnegative for all Fock states and grows exponentially at early or late times ($|\\omega\\phi|\\gg 1$).\nA nonvanishing expectation value $\\expval{\\mathup{i}(\\op{K}_+ - \\op{K}_-)}$ implies a time asymmetry in the resulting effective cosmological history, i.e., different pre- and post-bounce phases.\nFor generic states $\\expval{\\op{N}(\\phi)}$ is positive for all $\\phi$; the only cases in which it becomes zero at some point during the evolution are states which satisfy $\\abs{\\expval{\\op{K}_+ - \\op{K}_-}}^2 \\geq \\expval{\\op{N}} (\\expval{\\op{N}} + 1)$.\nFor Fock 
coherent states this is only the case for the Fock vacuum, for which both sides are zero.\nFor both \\ac{pg} and \\ac{bg} coherent states it depends on the value of the Bargmann index $k$.\nFor $k > 1\/4$ this inequality never holds; for $k < 1\/4$, however, there are states for which the inequality is satisfied; and for $k=1\/4$ it holds only for the ground state (the analogue of the Fock vacuum).\n\\Eref{eq:tm_n_heisenberg} reproduces the result obtained in the `toy model' context of \\cite{AGW18} (cf.~\\eref{eq:gft_cosmo_toymodel_n} and below), the only difference being that a factor 5 is replaced by 1 since we consider a single field mode, not five modes as in the toy model.\n\nOne can derive an effective Friedmann equation directly by taking the expectation value of the explicit expression \\eref{eq:tm_n_heisenberg}.\nOne then finds\n\\begin{equation}\n \\label{eq:tm_eff_friedmann_from_heisenberg}\n \\fl\n \\left(\n \\frac{\n V'(\\phi)\n }{\n V(\\phi)\n }\n \\right)^2\n =\n 4\\omega^2\n \\left(\n 1\n +\n \\frac{v_0}{V(\\phi)}\n -\n \\frac{v_0^2}{V(\\phi)^2}\n \\left[\n \\expval{\\op{N}}^2 + \\expval{\\op{N}}\n -\n \\expval{\\mathup{i}(\\op{K}_+ - \\op{K}_-)}^2\n \\right]\n \\right)\n \\mathcomma\n\\end{equation}\nwhere $V(\\phi) \\equiv v_0 \\expval{\\op{N}(\\phi)}$ (cf.~\\eref{eq:tm_n_v_relation}).\n\nOne can, however, also obtain this effective Friedmann equation without knowing the time dependence of the number operator explicitly by using the algebraic structure of the system.\nThis was shown in \\cite{BBC19} for the corresponding classical system, where the variables commute.\nHere we extend this algebraic approach to the noncommutative case.\n\nStarting from \\eref{eq:tm_k0_eom} and the definition of the Casimir \\eref{eq:su11_casimir}, one arrives at\n\\begin{equation}\n \\op{K}_0' (\\phi)^2\n =\n 4 \\omega^2\n \\left[\n \\op{K}_0(\\phi)^2\n -\n \\left(\n \\frac{\\op{H}^2}{4 \\omega^2}\n +\n \\op{C}\n \\right)\n 
\\right]\n\\end{equation}\nor written in terms of the number operator\n\\begin{equation}\n \\label{eq:gft_alg_friedmann}\n \\op{N}' (\\phi)^2\n =\n 4 \\omega^2\n \\left[\n \\op{N}(\\phi)^2\n +\n \\op{N}(\\phi)\n -\n \\left(\n \\frac{\\op{H}^2}{\\omega^2}\n +\n 4 \\op{C}\n -\n \\frac{1}{4}\n \\op{I}\n \\right)\n \\right]\n\\mathperiod\n\\end{equation}\n\nIn order to get the effective Friedmann equation one has to take the expectation value of \\eref{eq:gft_alg_friedmann}.\nHowever, it is crucial to note that in \\eref{eq:tm_eff_fried_formal} the expectation value of the volume operator enters, rather than the expectation value of the volume operator squared.\nThe difference between the two is related to the variance of the volume, which is in general state-dependent.\nIndeed, rearranging the expectation value of \\eref{eq:gft_alg_friedmann} gives\n\\begin{equation}\n \\label{eq:tm_eff_friedmann_k0}\n N'(\\phi)^2\n =\n 4 \\omega^2\n \\left[\n N(\\phi)^2\n +\n N(\\phi)\n +\n X\n \\right]\n\\end{equation}\nwith $N(\\phi) \\equiv \\expval{\\op{N}(\\phi)}$ and $X$ being given by\n\\begin{equation}\n X\n =\n \\covariance{\n \\op{N}\n (\\phi)\n }{\n \\op{N}\n (\\phi)\n }\n -\n \\frac{1}{4\\omega^2}\n \\covariance{\n \\op{N}'\n (\\phi)\n }{\n \\op{N}'\n (\\phi)\n }\n -\n \\frac{\n \\expval{\\op{H}^2}\n }{\n \\omega^2\n }\n -\n 4 \\expval{\\op{C}}\n +\n \\frac{1}{4}\n \\mathcomma\n\\end{equation}\nwhere the covariance $\\covariance{\\op{A}}{\\op{B}}$ is defined as\n\\begin{equation}\n \\covariance{\n \\op{A}\n }{\n \\op{B}\n }\n =\n \\frac{1}{2}\n \\expval{\n \\op{A}\n \\op{B}\n +\n \\op{B}\n \\op{A}\n }\n -\n \\expval{\\op{A}}\n \\expval{\\op{B}}\n\\end{equation}\n(and for the case $\\op{A}=\\op{B}$ this would be called the variance of $\\op{A}$).\n\nWe will now show that the quantity $X$ is indeed time-independent as suggested by the notation.\nNoting that the variance of $\\op{N}'(\\phi)$ can be written as\n\\begin{equation}\n \\covariance{\n \\op{N}'\n (\\phi)\n }{\n 
\\op{N}'\n (\\phi)\n }\n =\n - 4 \\covariance{\\op{H}}{\\op{H}}\n + 16 \\omega^2\n \\covariance{\n \\op{K}_+\n (\\phi)\n }{\n \\op{K}_-\n (\\phi)\n }\n \\mathcomma\n\\end{equation}\none can write $X$ as\n\\begin{equation}\n X\n =\n \\covariance{\n \\op{N}\n (\\phi)\n }{\n \\op{N}\n (\\phi)\n }\n -\n 4\n \\covariance{\n \\op{K}_+\n (\\phi)\n }{\n \\op{K}_-\n (\\phi)\n }\n -\n \\frac{\n \\expval{\\op{H}}^2\n }{\n \\omega^2\n }\n -\n 4 \\expval{\\op{C}}\n +\n \\frac{1}{4}\n \\mathperiod\n\\end{equation}\nA quick calculation shows that\n\\begin{equation}\n \\dfrac{\\phi}\n \\covariance{\n \\op{K}_+\n (\\phi)\n }{\n \\op{K}_-\n (\\phi)\n }\n =\n \\frac{1}{2}\n \\covariance{\n \\op{N}'\n (\\phi)\n }{\n \\op{N}\n (\\phi)\n }\n\\end{equation}\nwhich shows that $X$ is time-independent, since both $\\op{H}$ and $\\op{C}$ are constants\nof motion.\nIt follows that $X$ can be written as\n\\begin{equation}\n X\n =\n \\covariance{\n \\op{N}\n }{\n \\op{N}\n }\n -\n 4\n \\covariance{\n \\op{K}_+\n }{\n \\op{K}_-\n }\n -\n \\frac{\n \\expval{\\op{H}}^2\n }{\n \\omega^2\n }\n -\n 4\n \\expval{\\op{C}}\n +\n \\frac{1}{4}\n \\mathperiod\n\\end{equation}\n\nUsing the definition of the Casimir \\eref{eq:su11_casimir} one can show that $X$ can be written in the alternative form\n\\begin{equation}\n X\n =\n - \\expval{\n \\op{N}\n }^2\n -\n \\expval{\n \\op{N}\n }\n +\n 4\n \\expval{\n \\op{K}_+\n }\n \\expval{\n \\op{K}_-\n }\n -\n \\frac{\n \\expval{\n \\op{H}\n }^2\n }{\n \\omega^2\n }\n \\mathperiod\n\\end{equation}\nYet another form can be obtained by inserting the Hamiltonian \\eref{eq:tm_hamiltonian_su11} to get\n\\begin{equation}\n X\n =\n -\n \\expval{\n \\op{N}\n +\n \\frac{1}{2}\n \\op{I}\n }^2\n +\n \\expval{\n \\mathup{i}\n (\n \\op{K}_+\n -\n \\op{K}_-\n )\n }^2\n +\n \\frac{1}{4}\n \\mathperiod\n\\end{equation}\nThe operators appearing in this last expression for $X$ are exactly those appearing in the formula for the operator $\\op{K}_0(\\phi)$ in the Heisenberg picture 
\\eref{eq:tm_n_heisenberg} and one recovers the form of the effective Friedmann equation given in \\eref{eq:tm_eff_friedmann_from_heisenberg}.\nFrom the exact solution \\eref{eq:tm_n_heisenberg}, one can see that $X\\le 0$ in all Fock states, since $X>0$ would be equivalent to the number operator $\\op{N}(\\phi)$ taking a negative expectation value somewhere.\nThere are Fock states with $X=0$; these states encounter a singularity in their geometric interpretation, in the sense that the expectation value of the volume reaches zero somewhere and hence the effective energy density defined according to \\eref{eq:tm_energy_density} diverges, even though the quantum evolution is completely regular for these states.\n\nRecalling that the volume operator is the rescaled number operator, i.e.\\ $\\op{V} = v_0 \\op{N}$, one arrives at the effective Friedmann equation\n\\begin{equation}\n \\label{eq:tm_eff_friedmann_full}\n \\fl\n \\left(\n \\frac{\n V'(\\phi)\n }{\n V(\\phi)\n }\n \\right)^2\n =\n 4 \\omega^2\n \\left(\n 1\n +\n \\frac{\n v_0\n }{\n V(\\phi)\n }\n -\n \\frac{\n v_0^2\n \\expval{\n \\op{N}\n }\n (\n \\expval{\n \\op{N}\n }\n +\n 1\n )\n }{\n V(\\phi)^2\n }\n +\n \\frac{\n 4\n v_0^2\n \\expval{\n \\op{K}_+\n }\n \\expval{\n \\op{K}_-\n }\n }{\n V(\\phi)^2\n }\n -\n \\frac{\n v_0^2\n \\expval{\n \\op{H}\n }^2\n }{\n \\omega^2\n V(\\phi)^2\n }\n \\right)\n \\mathperiod\n\\end{equation}\nTaking the late time limit (corresponding to large volumes) suggests that one should identify $12 \\pi G := 4 \\omega^2$ in order for the leading term to be compatible with the classical Friedmann dynamics.\nThis identification of fundamental couplings with an `emergent' Newton's constant is common in \\ac{gft} cosmology \\cite{OSW16,AGW18}.\nFurthermore, identifying an energy density as in \\eref{eq:tm_energy_density} and defining a critical energy density\n\\begin{equation}\n \\rho_\\critical\n =\n \\frac{\n \\omega^2\n }{\n 2 v_0^2\n }\n =\n \\frac{\n 3 \\pi G\n }{\n 2 v_0^2\n }\n =\n 
\\frac{3\\pi}{2}\n \\rho_\\planck\n \\left(\n \\frac{v_\\planck}{v_0}\n \\right)^2\n \\mathcomma\n\\end{equation}\nwhere $\\rho_\\planck$ is the Planck mass density and $v_\\planck$ is the Planck volume, the last term inside the parentheses in \\eref{eq:tm_eff_friedmann_full} takes the form $-\\rho\/\\rho_\\critical$ familiar from \\ac{lqc}.\nThe value for $\\rho_\\critical$ appearing here agrees with the critical density found in \\cite{OSW16,OSW17}.\nIn close analogy to the results obtained in \\ac{lqc} we then write for the effective Friedmann equation (see also \\eref{eq:gft_cosmo_friedmann})\n\\begin{equation}\n\\label{eq:tm_friedman_with_rho_eff}\n \\left(\n \\frac{\n V'(\\phi)\n }{\n V(\\phi)\n }\n \\right)^2\n =\n 4 \\omega^2\n \\left(\n 1\n -\n \\frac{\n \\rho_\\effective(\\phi)\n }{\n \\rho_\\critical\n }\n \\right)\n +\n 4 \\omega^2\n \\frac{\n v_0\n }{\n V(\\phi)\n }\n \\mathcomma\n\\end{equation}\nwhere the effective energy density $\\rho_\\effective(\\phi)$ is defined as\n\\begin{equation}\n \\rho_\\effective(\\phi)\n =\n \\rho_\\phi(\\phi)\n +\n \\frac{\n \\omega^2\n \\expval{\n \\op{N}\n }\n (\n \\expval{\n \\op{N}\n }\n +\n 1\n )\n }{\n 2V(\\phi)^2\n }\n -\n \\frac{\n 2\n \\omega^2\n \\expval{\n \\op{K}_+\n }\n \\expval{\n \\op{K}_-\n }\n }{\n V(\\phi)^2\n }\n \\mathperiod\n\\end{equation}\nThe first contribution to this effective energy density is given by the energy density $\\rho_\\phi(\\phi)=\\expval{\\op{H}}^2\/(2V(\\phi)^2)$ associated to a massless scalar field (as defined in \\eref{eq:tm_energy_density}), but there are two additional contributions depending on the expectation values $\\expval{\\op{N}},\\,\\expval{\\op{K}_+}$ and\n $\\expval{\n \\op{K}_-\n }$\nin the initial state.\nAs these additional contributions to $\\rho_\\effective$ also scale as $V(\\phi)^{-2}$, their effect is equivalent to a shift in the scalar field momentum compared to its classical value $\\expval{\\op{H}}$.\nThe last term in \\eref{eq:tm_friedman_with_rho_eff}, 
scaling as $1\/V(\\phi)$, is similar to a correction found in mean-field calculations and takes the form of an effective matter contribution for matter with equation of state $p=2\\rho$ (cf.~\\cite{CPS16}).\n\nDepending on the initial state, the quantity $X$ (or, alternatively, the additional contributions to the effective energy density) can take different forms.\nFor the \\ac{pg} coherent states one finds the following form\n\\begin{equation}\n \\label{eq:gft_su11_x_pg}\n X_{\\mathsubscript{PG}}\n =\n - 4 k^2\n \\frac{\n (1 + \\zeta^2)\n (1 + \\compconj{\\zeta}^2)\n }{\n (1 - \\abs{\\zeta}^2)^2\n }\n +\n \\frac{1}{4}\n =\n -\n 4 k^2\n -\n \\frac{\n \\expval[\\mathsubscript{PG}]{\n \\op{H}\n }^2\n }{\n \\omega^2\n }\n +\n \\frac{1}{4}\n \\mathperiod\n\\end{equation}\nWe note that for $k = 1\/4$ this reduces to\n$\n X_{\\mathsubscript{PG}}\n =\n -\n \\expval[\\mathsubscript{PG}]{\n \\op{H}\n }^2\n \/\n \\omega^2\n$, and the effective energy density and classical energy density exactly coincide.\nThe effective Friedmann equation for \\ac{pg} coherent states therefore reads\n(for general $k$)\n\\begin{equation}\n \\left(\n \\frac{\n V'(\\phi)\n }{\n V(\\phi)\n }\n \\right)^2\n =\n 4 \\omega^2\n \\left(\n 1\n +\n \\frac{\n v_0\n }{\n V(\\phi)\n }\n -\n \\frac{\n v_0^2\n \\expval[\\mathsubscript{PG}]{\n \\op{H}\n }^2\n }{\n \\omega^2\n V(\\phi)^2\n }\n - \\frac{\n v_0^2\\left(16 k^2\n - 1\\right)\n }{\n 4V(\\phi)^2\n }\n \\right)\n \\mathperiod\n\\end{equation}\n\nFor Fock coherent states one gets for $X$\n\\begin{equation}\n X_{\\mathsubscript{F}}\n =\n - \\frac{1}{4} (\\sigma^2 + \\compconj{\\sigma}^2)^2\n - \\abs{\\sigma}^2\n =\n -\n \\expval[\\mathsubscript{F}]{\n \\op{N}\n }\n -\n \\frac{\n \\expval[\\mathsubscript{F}]{\n \\op{H}\n }^2\n }{\n \\omega^2\n }\n \\mathperiod\n\\end{equation}\nTherefore the Friedmann equation for Fock coherent states is given by\n\\begin{equation}\n \\label{eq:tm_eff_fried_fock}\n \\left(\n \\frac{\n V'(\\phi)\n }{\n V(\\phi)\n }\n 
\\right)^2\n =\n 4 \\omega^2\n \\left(\n 1\n +\n \\frac{\n v_0\n }{\n V(\\phi)\n }\n -\n \\frac{\n v_0^2\n \\expval[\\mathsubscript{F}]{\n \\op{N}\n }\n }{\n V(\\phi)^2\n }\n -\n \\frac{\n v_0^2\n \\expval[\\mathsubscript{F}]{\n \\op{H}\n }^2\n }{\n \\omega^2\n V(\\phi)^2\n }\n \\right)\n \\mathperiod\n\\end{equation}\n\nFor completeness we also state the value of $X$ one gets for \\ac{bg} coherent states,\n\\begin{equation}\n X_{\\mathsubscript{BG}}\n =\n -\n (\\chi - \\compconj{\\chi})^2\n -\n 4\n \\left(\n k\n +\n \\abs{\\chi}\n \\frac{\n I_{2k}(2 \\abs{\\chi})\n }{\n I_{2k - 1}(2 \\abs{\\chi})\n }\n \\right)^2\n +\n \\frac{1}{4}\n \\mathperiod\n\\end{equation}\n\nThe Friedmann equations derived here are compatible with previous results in \\ac{gft} cosmology \\cite{OSW16,OSW17,AGW18,WEw19}, where either a mean-field approach was used or a simplifying assumption was imposed on the initial conditions.\nWe emphasise that no approximations were used, which resulted in the appearance of extra terms.\nIn particular, we were able to identify one of those extra terms with the energy density of the real scalar field acting as a relational clock variable.\nCorrections to the classical Friedmann dynamics of the \\ac{lqc}-like form $-\\rho\/\\rho_\\critical$, which lead to effective repulsive behaviour and a bounce at high energies, were found for all coherent states considered.\nWe also found that in general the `effective' energy density appearing in the Friedmann equation \\eref{eq:tm_friedman_with_rho_eff} is not equal to the classical energy density $\\rho_\\phi=\\pi_\\phi^2\/(2V^2)$ of a massless scalar field, but contains additional terms depending on the initial conditions chosen.\n\n\\section{Interacting toy model}\nIn this section we extend the toy model discussed in \\sref{sec:tm} by adding an interaction term to the Hamiltonian.\nThe resulting interacting model still represents a simplification of the dynamics of full \\ac{gft}, since we continue to assume that 
only one mode is relevant.\nWhile the general expectation is that the dynamics should depend on the coupling of different modes, studying this simpler model can provide insights on how \\ac{gft} interactions can change the interpretation of the dynamics in terms of effective cosmology.\nA similar model, which included polynomial interactions for a single \\ac{gft} field mode, was previously studied in \\cite{CPS16} in a mean-field approximation (see also \\cite{PS17}), leading to corrections to the effective Friedmann equations coming from these interactions.\nThese corrections become more significant at late times as the impact of \\ac{gft} interactions grows with the number of quanta.\nHere we will be able to contrast these mean-field results with effective modified Friedmann equations obtained in a more general setting.\n\nWe now consider a Hamiltonian given in terms of the $\\liealg{su}(1, 1)$ variables by\n\\begin{equation}\n \\label{eq:int_hamiltonian}\n \\op{H}\n =\n -\\omega\n (\n \\op{K}_+\n + \\op{K}_-\n )\n +\n \\lambda \\omega\n (\n \\op{K}_+\n + \\op{K}_-\n + 2 \\op{K}_0\n )^2\n \\mathperiod\n\\end{equation}\nIn terms of the bosonic realisation of the $\\liealg{su}(1, 1)$ algebra \\eref{eq:su11_bosonic_realisation} the Hamiltonian reads\n\\begin{equation}\n \\label{eq:int_hamiltonian_fock_vars}\n \\op{H}\n =\n - \\frac{\\omega}{2}\n (\n \\op{a}^2\n + \\hermconj{\\op{a}}{}^2\n )\n +\n \\frac{\n \\lambda \\omega\n }{\n 4\n }\n (\n \\op{a}\n + \\hermconj{\\op{a}}\n )^4\n \\mathperiod\n\\end{equation}\nRecalling the definitions of the creation and annihilation operators in terms of the \\ac{gft} field and its conjugate momentum \\eref{eq:gft_cosmo_annihilation}, \\eref{eq:gft_cosmo_creation} one can rewrite the Hamiltonian as\n\\begin{equation}\n \\label{eq:int_hamiltonian_gft_vars}\n \\op{H}\n =\n \\frac{1}{2 \\abs{\\mathcal{K}^{(2)}}}\n \\op{\\pi}^2\n -\n \\frac{1}{2}\n \\abs{\\mathcal{K}^{(0)}}\n \\op{\\varphi}^2\n +\n \\lambda\n 
\\abs{\\mathcal{K}^{(0)}}^{3\/2}\n \\abs{\\mathcal{K}^{(2)}}^{1\/2}\n \\op{\\varphi}^4\n \\mathcomma\n\\end{equation}\nwhere we suppressed the Peter--Weyl representation labels and we also assume the mode to be of the type discussed at the end of \\sref{sec:gft_cosmo} with magnetic indices $m_i = 0$.\nFrom this expression one sees that the interaction term would correspond to a $\\varphi^4$ interaction term in an appropriately defined \\ac{gft} action.\n\nThe dynamics of this system crucially depend on the sign of $\\lambda$.\nIndeed, for positive $\\lambda$ this Hamiltonian is bounded from below, whereas for negative $\\lambda$ it is unbounded, as in the free case.\nInterpreting \\eref{eq:int_hamiltonian_gft_vars} as a mechanical system with kinetic and potential terms, one sees that for positive $\\lambda$ one gets a `Mexican hat' type potential, whereas in the case of negative $\\lambda$ the potential is an `upside-down' anharmonic oscillator.\nIn the cosmological context one expects from this that for negative $\\lambda$ the Universe will undergo an enhanced exponential expansion and for positive $\\lambda$ the Universe will recollapse after some time, leading to a cyclic cosmology.\nSuch a cyclic cosmology was indeed found in \\cite{CPS16}, where $\\lambda > 0$ was assumed.\nIn \\sref{sec:int_alg_poisson} we argue that this expectation is correct if \\eref{eq:int_hamiltonian} is seen as the Hamiltonian of a classical system.\n\nWhen one tries to use the algebraic approach detailed in \\sref{sec:tm} to derive an effective Friedmann equation for the interacting model \\eref{eq:int_hamiltonian} one faces several challenges.\nFirstly, the noncommutativity of the operators does not allow the reduction to a small set of `basis operators'.\nSecondly, the expressions involved feature products of three operators and it is technically challenging to relate them to the expectation values of `simple' operators such as the Hamiltonian and number 
operator.\n\nTo begin with, we restrict ourselves to the classical case, where the variables commute and one can employ the algebraic approach to derive an effective Friedmann equation.\nWe find an exact Friedmann equation whose limits at early and late times are given.\nAfter that we turn to the general (quantum) case, where the operators do not commute.\nIn that case we resort to a perturbative treatment which is valid at early times.\nFurthermore, we perform a numerical analysis for Fock coherent states.\nWe find that a linear (perturbative) correction to the effective Friedmann equation can capture the effect of the interaction term for a short time, after which the dynamics become nonperturbative.\n\n\\subsection{Algebraic approach for classical analogue system}\n\\label{sec:int_alg_poisson}\nIn this section we study a classical dynamical system with time evolution generated by the Hamiltonian \\eref{eq:int_hamiltonian}.\nIn this approach the $\\liealg{su}(1,1)$ variables $K_0$, $K_+$ and $K_-$ are not viewed as quantum operators but as coordinates on a Poisson manifold, subject to an $\\liealg{su}(1,1)$ Poisson algebra which then defines the Hamiltonian dynamics.\nIn contrast to the full quantum case, these variables commute and we interpret the variables themselves as the observables of interest, i.e.\\ one does not have to take expectation values.\nIn this sense, this approximation neglects all quantum corrections coming from operator orderings and uncertainties in a quantum state.\nWe will identify the variable $K_0$ with the particle number $N$ such that $N \\equiv 2 K_0$.\nAs above, we assume the total volume to be proportional to the particle number, $V = v_0 N$, and switch between $N$ and $V$ freely.\n\nThe equation of motion of the variable $N$ is given by\n\\begin{equation}\n N'(\\phi)\n =\n 2 \\mathup{i} \\omega\n (K_+(\\phi) - K_-(\\phi))\n (\n 1 - 2 \\lambda (K_+(\\phi) + K_-(\\phi) + N(\\phi))\n )\n \\mathperiod\n\\end{equation}\nAfter squaring 
this equation one can use the Casimir \\eref{eq:su11_casimir} to replace the combination $(K_+ - K_-)$.\nOne is then left with an expression where only the combination $(K_+ + K_-)$ appears,\n\\begin{equation}\n \\label{eq:int_classical_eff_friedmann_intermediate}\n \\eqalign{\n N'(\\phi)^2\n =\n &\n 4 \\omega^2\n \\left(\n N(\\phi)^2\n - 4C\n - (K_+(\\phi) + K_-(\\phi))^2\n \\right)\n \\\\\n &\n \\qquad\n \\times\n \\left(\n 1\n - 2 \\lambda (K_+(\\phi) + K_-(\\phi) + N(\\phi))\n \\right)^2\n \\mathperiod\n }\n\\end{equation}\nFrom the Hamiltonian one can derive an explicit formula for $(K_+ + K_-)$\n\\begin{equation}\n \\label{eq:int_kP_plus_kM}\n K_+(\\phi)\n +\n K_-(\\phi)\n =\n \\frac{1}{2 \\lambda}\n \\left(\n 1\n - 2 \\lambda N(\\phi)\n -\n \\sqrt{\n 1\n + 4 \\lambda \\left( \\frac{H}{\\omega} - N(\\phi) \\right)\n }\n \\right)\n \\mathcomma\n\\end{equation}\nwhere we chose the solution connected to the free theory.\nInserting this into \\eref{eq:int_classical_eff_friedmann_intermediate} one gets the nonperturbative effective Friedmann equation\n\\begin{equation}\n \\label{eq:int_classical_eff_friedmann_nonpert}\n \\fl\n\\eqalign{\n N'(\\phi)^2\n =\n &\n -\n \\frac{\n 2\n \\omega^2\n }{\n \\lambda^2\n }\n \\left(\n 1 + 4 \\lambda\n \\left(\n \\frac{H}{\\omega} - N(\\phi)\n \\right)\n \\right)\n \\times\n \\Bigg[\n 1 - 4 \\lambda N(\\phi)\n \\\\\n &\n \\qquad\n \\qquad\n - (1 - 2 \\lambda N(\\phi))\n \\sqrt{\n 1 + 4 \\lambda\n \\left(\n \\frac{H}{\\omega} - N(\\phi)\n \\right)\n }\n + 2 \\lambda\n \\left(\n 4 \\lambda C\n + \\frac{H}{\\omega}\n \\right)\n \\Bigg]\n \\mathperiod\n}\n\\end{equation}\nWe already see that $\\lambda > 0$ implies an upper bound on the value of $N$ and hence the volume, since the right-hand side of \\eref{eq:int_classical_eff_friedmann_nonpert} has to be real and positive.\nIndeed, at exactly that upper limit the right-hand side of \\eref{eq:int_classical_eff_friedmann_nonpert} becomes zero leading to a recollapse.\nIn the case 
$\\lambda < 0$, for the right-hand side of \\eref{eq:int_classical_eff_friedmann_nonpert} to be real and positive $N$ has to be greater than some minimal value (which can be zero).\nThe right-hand side remains real and positive for all values of $N$ greater than that minimal value, implying that the Universe expands indefinitely.\nFor clarity, we recall that $H$ here and throughout the paper denotes the Hamiltonian or energy (interpreted as the canonical momentum conjugate to the scalar field $\\phi$), not a Hubble rate in cosmology.\n\nWe would like to interpret \\eref{eq:int_classical_eff_friedmann_nonpert} in terms of a cosmological model given by one or several matter components which contribute to the matter energy density on the right-hand side.\nFor a general such model in usual classical cosmology, with $n$ matter components labelled by an index $i$ viewed as perfect fluids with each having an equation of state $p_i=w_i \\rho_i$, the Friedmann equation for the volume as a function of relational time $\\phi$ would be of the form\n\\begin{equation}\n \\label{eq:int_friedmann_eos}\n \\left(\n \\frac{V'(\\phi)}{V(\\phi)}\n \\right)^2\n = \\sum_{i=1}^n\n A_i V(\\phi)^{1-w_i}\n\\end{equation}\nwhere $A_i$ are constants of motion.\nWhile \\eref{eq:int_classical_eff_friedmann_nonpert} is valid at all times, its interpretation in terms of cosmological models of the form \\eref{eq:int_friedmann_eos} is not clear due to the appearance of a square root on the right-hand side.\nMoreover, additional matter components as in \\eref{eq:int_friedmann_eos} would come with new conserved quantities $A_i$, whose values can be varied independently (or, in other words, are determined by the initial conditions).\nIn our model, the presence of \\ac{gft} interactions does not introduce new parameters set by initial conditions, only a new coupling constant $\\lambda$; these interactions therefore modify the dynamics of gravity rather than matter.\nHere we follow the convention in 
which quantum gravity corrections are written as modifying the right-hand side rather than the left-hand side of Friedmann equations (as is usually done in \\ac{lqc}) in order to give intuition for the effective dynamics.\\footnote{An alternative possible interpretation of effective Friedmann equations obtained from quantum gravity models is to view them as equivalent to Friedmann equations of a modified theory of gravity.\n A general reconstruction method of this type for mimetic gravity theories was developed in \\cite{Ces19}.\n}\n\nOne might be interested in the Friedmann equation valid at relatively small volumes where interactions have become relevant but not dominant, so that one can employ perturbation theory.\nExpanding \\eref{eq:int_classical_eff_friedmann_nonpert} as a series around $\\lambda=0$ one gets\n\\begin{equation}\n \\label{eq:int_classical_eff_friedmann_series}\n \\fl\n \\eqalign{\n N'(\\phi)^2\n =\n 4\\omega^2\n \\Bigg\\{\n &\n N(\\phi)^2\n - \\frac{H^2}{\\omega^2}\n - 4C\n \\left(\n 1\n -\n 4 \\lambda\n \\left(\n N(\\phi) - \\frac{H}{\\omega}\n \\right)\n \\right)\n \\\\\n &\n \\eqalign{\n -\n \\sum_{n=1}^\\infty\n \\bigg[\n &\n 12 \\lambda^n\n \\frac{\n (2n - 2)!\n }{\n (n - 1)!\n (n + 2)!\n }\n \\left(\n (2n - 1) \\frac{H}{\\omega}\n + (3 - n) N(\\phi)\n \\right)\n \\\\\n &\n \\times\n \\left(\n N(\\phi)\n - \\frac{H}{\\omega}\n \\right)^{n + 1}\n \\bigg]\n \\Bigg\\}\n \\mathperiod\n }\n }\n\\end{equation}\nFrom this form one can see that it is the product $\\lambda N(\\phi)$ that must be small for the perturbative expansion to make sense.\nComparing with \\eref{eq:int_friedmann_eos}, one could interpret the leading (linear) correction coming from the interaction term as an effective matter component with an equation of state parameter $w=0$, i.e., a dust component.\nThis result would then agree with the results of a mean-field calculation in \\cite{CPS16} where adding a $\\varphi^4$ interaction to the \\ac{gft} Lagrangian led to such a 
dust-like contribution in the effective cosmology.\nHowever, the full expansion given in \\eref{eq:int_classical_eff_friedmann_series} shows that such an interpretation would only be valid in an intermediate regime in which the product $\\lambda N(\\phi)$ is no longer negligible, but also not yet large enough for higher orders to contribute.\nIndeed, as the volume grows further, $\\lambda N(\\phi)$ would soon be $\\bigO{1}$ and the perturbative expansion receives contributions from \\emph{all} orders.\nIn particular, there is never a regime in which the effective Friedmann equation is dominated by the `dust-like' component, as it would be in the mean-field form obtained in \\cite{CPS16}.\nOne possible interpretation of this discrepancy is that mean-field methods are strictly only valid in the free theory, since they assume the absence of correlations between quanta.\nHence one would expect them to become inaccurate as the contribution to the effective dynamics coming from \\ac{gft} interactions becomes strong.\nCare has to be taken when interpreting \\eref{eq:int_classical_eff_friedmann_series} up to some order.\nFor instance, if one only considers terms up to order $\\lambda^2$ in \\eref{eq:int_classical_eff_friedmann_series}, one would conclude that there is always a recollapse (even for negative $\\lambda$), which is not true for the full solution.\n\nTo understand the late-time behaviour of the system, recall that $N(\\phi)$ grows without bound while the other dynamical quantities on the right-hand side of \\eref{eq:int_classical_eff_friedmann_nonpert}, $C$ and $H$, are constants of motion.\nIn the limit of large $N(\\phi)$ (corresponding to late times) the leading order contribution of \\eref{eq:int_classical_eff_friedmann_nonpert} is given by\n\\begin{equation}\n \\label{eq:int_classical_eff_friedmann_asymptotic}\n \\left(\n \\frac{N'(\\phi)}{N(\\phi)}\n \\right)^2\n =\n 32 \\omega^2\n \\sqrt{-\\lambda N(\\phi)}\n +\n \\bigO{1}\n 
\\mathperiod\n\\end{equation}\nInterpreting the right-hand side of \\eref{eq:int_classical_eff_friedmann_asymptotic} as an energy density of matter, one finds from \\eref{eq:int_friedmann_eos} that it corresponds to matter with an equation of state parameter $w = 1\/2$.\nThe solutions to this asymptotic form of the effective Friedmann equation behave as $N(\\phi)\\sim|\\phi-\\phi_0|^{-4}$, diverging at some $\\phi=\\phi_0$.\nWe would hence expect the evolution of our system to terminate at some finite value of $\\phi$, depending on initial conditions.\nNotice that in this interpretation the energy density of this new `matter' is fixed by the \\ac{gft} coupling $\\lambda$ and hence, as we mentioned before, the effect of adding \\ac{gft} interactions should rather be seen as modifying the dynamics of gravity on large scales.\nThe scale at which such a modification would become relevant depends on the chosen value of $\\lambda$.\n\nTo characterise the dynamics of the effective cosmology at arbitrary times, from \\eref{eq:int_friedmann_eos} we define an `effective equation of state parameter'\n\\begin{equation}\n \\label{eq:int_eos_eff}\n w_{\\effective}(\\phi)\n =\n 1\n -\n \\frac{\n \\mathrm{d} \\log \\left( N'(\\phi)\/N(\\phi) \\right)^2\n }{\n \\mathrm{d} \\log N(\\phi)\n }\n\\end{equation}\nwhich is plotted in \\fref{fig:int_eos_poisson} as a function of $N(\\phi)$ for various truncations of \\eref{eq:int_classical_eff_friedmann_series}.\nIn the plot one sees that for $\\abs{\\lambda N(\\phi)} \\gtrsim 10^{-2}$ the interactions become relevant and that for $\\abs{\\lambda N(\\phi)} \\gtrsim 1$ the difference between the exact result and the first order truncation becomes large, as expected.\nAnother observation is that while for small values of $\\abs{\\lambda N(\\phi)}$ the second order truncation is in better agreement with the nonperturbative results, this truncation quickly diverges for $\\abs{\\lambda N(\\phi)} \\gtrsim 1$.\n\n\\begin{figure}[htpb]\n 
\\centering\n \\includegraphics{fig-int_poisson.pdf}\n \\caption{The relative relational expansion rate squared \\eref{eq:int_classical_eff_friedmann_nonpert} and the `effective equation of state parameter' \\eref{eq:int_eos_eff} as functions of $N$.\n The solid lines correspond to a truncation at zeroth order in $\\lambda$.\n The dashed lines correspond to a truncation at first order in $\\lambda$.\n The dotted lines correspond to a truncation at second order in $\\lambda$.\n The dash-dotted lines correspond to the full nonperturbative case.\n The parameters are:\n $\\omega = 1$,\n $\\lambda = - 10^{-7}$,\n $H = -10040$,\n $C = - 3\/16$.\n (The choice of $H$ corresponds to $\\bra{\\sigma} \\op{H} \\ket{\\sigma}$ for $\\sigma = 100$.)\n }\n \\label{fig:int_eos_poisson}\n\\end{figure}\n\n\\subsection{Quantum calculation}\nWe are not able to derive an exact solution for $\\op{N}(\\phi)$ in the interacting quantum mechanical case, where operators do not commute and higher moments and simple expectation values are independent.\nTo obtain approximate solutions, we first present a perturbative analytical method which we then contrast with numerical results.\nWe expand the number operator as a series with expansion parameter $\\lambda$, i.e.\\\n\\begin{equation}\n \\label{eq:int_n_pert_series}\n \\op{N}(\\phi)\n =\n \\sum_{n=0}^\\infty\n \\lambda^n\n \\op{N}_n(\\phi)\n \\mathperiod\n\\end{equation}\nSplitting the Hamiltonian $\\op{H}$ into a $\\lambda$-independent part, $\\op{H}_0$, and a $\\lambda$-dependent part, $\\op{H}_1$, one can split the time evolution operator in a way similar to what is done in the interaction picture as $\\op{U}(\\phi) = \\op{U}_0(\\phi) \\op{U}_\\interaction(\\phi)$, where $\\op{U}_0(\\phi) = \\exp(-\\mathup{i} \\op{H}_0 \\phi)$ is the time evolution operator of the free system and the interaction time evolution operator is defined by the time-ordered exponential\n\\begin{equation}\n \\op{U}_\\interaction(\\phi)\n =\n \\timeordering\n \\exp\n 
\\left(\n -\\mathup{i}\n \\int_0^\\phi\n \\intmeasure{\\phi'}\\,\n \\op{U}_0^{-1}(\\phi')\n \\op{H}_1\n \\op{U}_0(\\phi')\n \\right)\n \\mathperiod\n\\end{equation}\nNote that the inverse interaction time evolution operator requires anti-time-ordering.\nExpanding the interaction time evolution operator as a series in $\\lambda$,\n\\begin{equation}\n \\op{U}_\\interaction(\\phi)\n =\n \\sum_{n=0}^\\infty\n \\lambda^n\n (\\op{U}_\\interaction)_n(\\phi)\n \\mathcomma\n\\end{equation}\nand inserting into \\eref{eq:int_n_pert_series} gives for the $\\op{N}_n(\\phi)$\n\\begin{equation}\n \\op{N}_n(\\phi)\n =\n \\sum_{m=0}^n\n (\\op{U}_\\interaction^{-1})_m(\\phi)\n \\op{U}_0^{-1}(\\phi)\n \\op{N}\n \\op{U}_0(\\phi)\n (\\op{U}_\\interaction)_{n-m}(\\phi)\n \\mathperiod\n\\end{equation}\nThe strategy would be to find the exact form of these $\\op{N}_n(\\phi)$ up to some order and then derive an effective Friedmann equation from these.\nWe will only do this to first order.\n\nThe leading term $\\op{N}_0(\\phi)$ is of course the same as in the free theory \\eref{eq:tm_n_heisenberg}.\nThe first order contribution $\\lambda \\op{N}_1(\\phi)$ is given by\n\\begin{equation}\n\\eqalign{\n \\fl\n \\lambda\n \\op{N}_1(\\phi)\n =\n \\frac{\\lambda}{2}\n \\Big\\{\n &\n \\mkern-12mu (\\op{K}_+)^2\n \\left[\n 3\n - 2 (2 + 3\\mathup{i}\\omega\\phi) \\cosh(2\\omega\\phi)\n + \\cosh(4\\omega\\phi)\n \\right]\n + \\hermconjtext\n \\\\\n &\n +\n \\op{K}_+\n (2\\op{N} + 3)\n \\left[\n (\\mathup{i} - 3\\omega\\phi) \\sinh(2\\omega\\phi)\n - \\mathup{i} \\sinh(4\\omega\\phi)\n \\right]\n + \\hermconjtext\n \\\\\n &\n -\n \\frac{1}{2}\n \\sinh^2(2\\omega\\phi)\n [\n 3 + 4 \\op{N}^2 + 8 \\op{N} + 8 \\op{K}_+ \\op{K}_-\n ]\n \\Big\\}\n \\mathperiod\n}\n\\end{equation}\n\nAs outlined above, what one would like to do is take the expectation values of the perturbative expansion \\eref{eq:int_n_pert_series} and derive an effective Friedmann equation for arbitrary states, valid up to some order in $\\lambda$.\nHowever, already 
the first order expression for $\\op{N}(\\phi)$ is quite complicated and we were not able to derive a corresponding effective Friedmann equation for general states.\nTo ameliorate this we resort to taking the expectation values for some specific classes of coherent states.\n\nFirstly, we turn to the Fock coherent states.\nWriting for the parameter of the Fock coherent states \\eref{eq:tm_coh_state_fock} $\\sigma = \\sigma_1 + \\mathup{i} \\sigma_2$, one finds for the case $\\sigma_2 = 0$, i.e.\\ for real $\\sigma$,\n\\begin{equation}\n \\label{eq:int_eff_fried_fock_real_order1}\n \\fl\n \\eqalign{\n N'(\\phi)^2\n =\n 4 \\omega^2\n \\bigg(\n &\n N(\\phi)^2\n + N(\\phi)\n - \\sigma_1^2 (1 + \\sigma_1^2)\n \\\\\n &\n \\eqalign{\n +\n \\frac{\n \\lambda\n }{\n (1 + 2 \\sigma_1^2)^2\n }\n &\n \\Big[\n -4 N(\\phi)^3 (3 + 12 \\sigma_1^2 + 4 \\sigma_1^4)\n \\\\\n &\n - 6 N(\\phi)^2 (3 + 15 \\sigma_1^2 + 12 \\sigma_1^4 + 4 \\sigma_1^6)\n \\\\\n &\n + 6 N(\\phi) (-1 - 5 \\sigma_1^2 + 4 \\sigma_1^6)\n \\\\\n &\n + 2 \\sigma_1^2 (3 + 24 \\sigma_1^2 + 51 \\sigma_1^4 + 48 \\sigma_1^6 +\n 20 \\sigma_1^8)\n \\Big]\n + \\bigO{\\lambda^2}\n \\bigg)\n \\mathperiod\n }\n }\n\\end{equation}\nFor the case $\\sigma_1 = 0$, i.e.\\ imaginary $\\sigma$, one finds the similar expression\n\\begin{equation}\n \\fl\n \\eqalign{\n N'(\\phi)^2\n =\n 4 \\omega^2\n \\bigg(\n &\n N(\\phi)^2\n + N(\\phi)\n - \\sigma_2^2 (1 + \\sigma_2^2)\n \\\\\n &\n \\eqalign{\n +\n \\frac{\n \\lambda\n }{\n (1 + 2 \\sigma_2^2)^2\n }\n &\n \\Big[\n -4 N(\\phi)^3 (3 + 12 \\sigma_2^2 + 4 \\sigma_2^4)\n \\\\\n &\n - 6 N(\\phi)^2 (3 + 9 \\sigma_2^2 - 4 \\sigma_2^4 - 4 \\sigma_2^6)\n \\\\\n &\n + 6 N(\\phi) (-1 + \\sigma_2^2 + 16 \\sigma_2^4 + 12 \\sigma_2^6)\n \\\\\n &\n + 2 \\sigma_2^2 (3 + 6 \\sigma_2^2 - 15 \\sigma_2^4 - 24 \\sigma_2^6\n - 4 \\sigma_2^8)\n \\Big]\n + \\bigO{\\lambda^2}\n \\bigg)\n \\mathperiod\n }\n }\n\\end{equation}\nThe expression for the general case, where $\\sigma$ can be any complex number, is 
quite involved and we do not state it here.\n\nSecondly, we turn to the \\ac{pg} coherent states.\nFor these states one finds the following general expression\n\\begin{equation}\n \\fl\n \\eqalign{\n N'(\\phi)^2\n =\n 4 \\omega^2\n \\Bigg\\{\n &\n \\left(\n N(\\phi)\n + \\frac{1}{2}\n \\right)^2\n - 4 k^2\n - \\frac{\\expval[\\mathsubscript{PG}]{\\op{H}}^2}{\\omega^2}\n \\\\\n &\n \\eqalign{\n +\n \\lambda\n \\frac{2k + 1}{4k}\n \\Big[\n &\n - 8 N(\\phi)^3\n + 12 N(\\phi)^2\n \\left(\n \\frac{\\expval[\\mathsubscript{PG}]{\\op{H}}}{\\omega} - 1\n \\right)\n \\\\\n &\n + 2 N(\\phi)\n \\left(\n 6 \\frac{\\expval[\\mathsubscript{PG}]{\\op{H}}}{\\omega}\n + 16 k^2\n - 3\n \\right)\n \\\\\n &\n -\n \\left(\n \\frac{\\expval[\\mathsubscript{PG}]{\\op{H}}}{\\omega}\n - 1\n \\right)\n \\left(\n 2 \\frac{\\expval[\\mathsubscript{PG}]{\\op{H}}^2}{\\omega^2}\n + \\frac{\\expval[\\mathsubscript{PG}]{\\op{H}}}{\\omega}\n + 16 k^2\n - 1\n \\right)\n \\Big]\n }\n \\\\\n &\n +\n \\bigO{\\lambda^2}\n \\Bigg\\}\n \\mathperiod\n }\n\\end{equation}\nIt is remarkable that it is possible to write the right-hand side only in terms of $N(\\phi)$ and $\\expval[\\mathsubscript{PG}]{\\op{H}}$.\nAs before, we would expect all higher order corrections to become relevant as soon as a regime is reached in which $\\abs{\\lambda N(\\phi)} \\gtrsim 1$.\n\n\\begin{figure}[htpb]\n \\centering\n \\includegraphics{fig-int_quantum.pdf}\n \\caption{\n The relative relational expansion rate squared and the `effective equation of state parameter' \\eref{eq:int_eos_eff} as functions of $N$ for Fock coherent states with real parameter $\\sigma$.\n The solid lines correspond to a truncation at zeroth order in $\\lambda$.\n The dashed lines correspond to a truncation at first order in $\\lambda$.\n The dotted lines correspond to a truncation at second order in $\\lambda$.\n The dash-dotted lines correspond to the full nonperturbative case.\n The parameters are:\n $\\omega = 1$,\n $\\lambda = - 10^{-3}$,\n 
$\\sigma = 10$.\n }\\label{fig:int_quantum}\n\\end{figure}\n\nIn \\fref{fig:int_quantum} the relative relational expansion rate squared and the effective equation of state parameter \\eref{eq:int_eos_eff} are plotted as functions of $N$ for different truncations of the perturbative expansion and for the result of a numerical calculation.\\footnote{The numerical results were obtained by solving the time-dependent Schr\u00f6dinger equation of the Hamiltonian \\eref{eq:int_hamiltonian_fock_vars} in the position representation.}\nThe zeroth order truncation corresponds to the free case and is given by \\eref{eq:tm_eff_fried_fock}; the first order truncation is given in \\eref{eq:int_eff_fried_fock_real_order1}.\nNote that we do not state the second order truncation explicitly, since the expression is rather convoluted.\nThe state considered in this plot is a Fock coherent state with real parameter $\\sigma$.\nThe numerical results are in good agreement with the second order truncation.\nHowever, the second order truncation diverges quickly for values $\\abs{\\lambda N(\\phi)} \\gtrsim 1$, whereas the first order truncation does not.\nNote that the parameters chosen do not accommodate a regime in which $\\abs{\\lambda N(\\phi)} \\ll 1$, explaining the mismatch with the linearised theory.\nDue to numerical limitations we were not able to enter the asymptotic regime and determine the corresponding effective equation of state parameter for late times.\n\nWe conclude the present discussion of the quantum behaviour of the interacting toy model by revisiting the relative uncertainties of the volume as a function of relational time presented in \\sref{sec:tm_class_of_coh_stat_and_rel_uncert} and illustrated in \\fref{fig:tm_rel_uncertainty_comp}.\nHere we restrict ourselves to the class of Fock coherent states.\nThe results of numerical calculations comparing the free and interacting cases for a range of parameters are given in \\fref{fig:int_rel_uncertainty_comp}.\nAn 
immediate effect of the interactions is that the expectation values diverge at finite relational time, resulting in different ranges of the relational time coordinate.\nThese divergences can already be anticipated from the discussion below \\eref{eq:int_classical_eff_friedmann_asymptotic}.\nThe key observation is that when interactions are present the relative uncertainties are not asymptotically constant but start growing as soon as the interactions become dominant.\nThe general statement that Fock coherent states become semiclassical at late times can therefore not be extended to this interacting case in an obvious way.\nAgain, this is also consistent with the expectation that mean-field methods break down in the interacting case when interactions begin to dominate over the quadratic Hamiltonian.\n\n\\begin{figure}[htbp]\n \\centering\n \\includegraphics{fig-int_rel_uncertainty_comp.pdf}\n \\caption{The relative uncertainty of the volume operator as a function of $\\omega \\phi$ for Fock coherent states.\n The solid line corresponds to the free case, $\\lambda = 0$.\n The dashed line corresponds to the interacting case with $\\lambda = -10^{-3}$.\n }\n \\label{fig:int_rel_uncertainty_comp}\n\\end{figure}\n \n\\section{Conclusions}\nOur aim was to present a general perspective on the derivation of reliable effective Friedmann equations from given quantum dynamics of a \\ac{gft} model, building on various recent developments in the derivation of effective cosmological dynamics from \\ac{gft}.\nWithin this general perspective, the most important assumption was to restrict the \\ac{gft} dynamics to those of a single field mode, i.e., fixed values of the representation labels in the Peter--Weyl decomposition.\nThis simplification of the full dynamics in which all modes would be present can be seen as the most important limitation of our work.\nWhile there are arguments suggesting the dynamical emergence of a regime dominated by a single field mode in \\ac{gft}, 
showing such an emergence in models of interest for four-dimensional quantum gravity remains an outstanding challenge.\nOn the other hand, we were able to derive effective cosmological dynamics without relying on a mean-field approximation, and in general no assumptions needed to be made on the initial state.\n\nWe first focussed on the case of dynamics defined by a free (quadratic) Hamiltonian.\nSuch a Hamiltonian can be of harmonic form or of `upside-down' harmonic form; the latter case in which the Hamiltonian is unbounded from below is most relevant to \\ac{gft} cosmology, since it admits solutions expanding to infinity \\cite{WEw19,GPW19}.\nFor this case, we recovered and extended results of \\cite{AGW18} and \\cite{BBC19} for the resulting effective Friedmann equations.\nGeneric solutions exhibit singularity resolution in the sense of a minimal non-zero value for the volume, and interpolate between a collapsing and an expanding branch which both at large volumes approach classical \\ac{flrw} solutions.\nThese solutions depend on a parameter determined by the initial conditions (with no obvious classical analogue) which generates an asymmetry between the solution before and after the minimum for the volume.\nThe main new result in this part was a discussion of relative uncertainties in the two main physical observables---volume and energy---in three different classes of coherent states: Fock coherent states which have been used previously in \\ac{gft} cosmology, and \\acf{pg} and \\acf{bg} coherent states of $\\liealg{su}(1, 1)$.\nWe found that Fock coherent states approach a semiclassical regime at large volume, where relative uncertainties can be arbitrarily small, while PG states of interest here never reach such a regime.\nThe difference in our treatment compared with works such as \\cite{EM12} was that we assumed the Fock space structure of \\ac{gft} and only considered PG states that live in bosonic Fock representations.\nIn \\ac{gft}, Fock coherent 
states are a good choice for initial conditions that become semiclassical at low curvature.\nThis part of our analysis could be broadened by going to more general types of coherent states for which only a Casimir condition and the saturation of uncertainty relations are assumed, as was done for $\\liealg{su}(1, 1)$ in \\cite{BT14}.\n\nWe then added a quartic interaction term to the Hamiltonian to extend the derivation of effective Friedmann equations for \\ac{gft} models with polynomial interactions given in \\cite{CPS16} to situations where no mean-field approximation is assumed.\nAs for the free case, one has a choice between a quartic term for which the Hamiltonian is bounded from below, which leads to a recollapse and cyclic cosmology \\cite{CPS16}, or an interaction that admits solutions escaping to infinity, corresponding to a Universe expanding forever.\nWe focussed on the second case, the more common one in usual cosmology.\nTo understand the dynamics for this system, we first used a classical approximation in which one considers the basic dynamical variables as commuting phase space functions, with commutators replaced by Poisson brackets.\nIn this case we could derive an exact nonperturbative Friedmann equation; its unusual feature is the appearance of a square root involving the volume and energy on the right-hand side, preventing its straightforward interpretation in terms of effective perfect fluids.\nLinearising this equation in the interaction constant leads to a Friedmann equation with an effective dust term, which would reproduce the mean-field result of \\cite{CPS16}.\nThis linear correction only describes an intermediate regime in the expansion history, after which all orders become relevant.\nWe interpret this as signifying a failure of mean-field methods as soon as interactions become strong.\nThe asymptotic form of the effective Friedmann equation at late times would correspond to a matter component with equation of state $p=\\frac{1}{2}\\rho$ 
(instead of dust with $p=0$), or a modification of gravity on large scales in this model.\nWe then turned to the full quantum case in which one has to resort to a perturbative or numerical treatment.\nFor Fock coherent states, we find qualitatively similar results to the classical case: we derived a linearised correction to the effective Friedmann equation, and the full numerical solution quickly deviates from this regime as interactions become stronger.\nWe found numerical evidence that relative uncertainties in the volume start growing for Fock coherent states when the interactions become relevant, spoiling the property of these states to become semiclassical at late times that we observed for a quadratic Hamiltonian.\n\nA main direction for future work will be lifting the assumption that only a single field mode contributes to the \\ac{gft} dynamics.\nSince the quadratic part of the full \\ac{gft} Hamiltonian only couples pairs of modes, in the free case it would not be difficult to include additional modes into the analysis.\nFor this case one question would be whether some modes would always dominate asymptotically, as in the mean-field analysis of \\cite{Gie16}.\nWhen adding interaction terms however, we would expect the interaction of different modes to lead to a substantial modification of the effective cosmological dynamics away from the effectively free regime described by an \\ac{lqc}-like bounce.\nIn particular this could apply to the recent proposal of \\cite{GO18} for the generation of cosmological perturbations through quantum fluctuations in \\ac{gft} cosmology.\nIn the long term, we would then also aim to bring the interacting \\ac{gft} `toy' models studied in cosmological applications closer to candidate theories for full quantum gravity.\n\nAn entirely different but conceptually important direction would be to contrast the deparametrised framework used here, in which the scalar field $\\phi$ is used as a clock from the beginning, with a covariant 
setting in which one is free to choose different clocks, following e.g. the ideas of \\cite{Van18,Hoe18}.\nWe plan to investigate this in models which include multiple candidate matter clocks.\n\n \n\\section*{Acknowledgments}\nWe thank Martin Bojowald, Marco de Cesare, Daniele Oriti, Andreas Pithis and Edward Wilson-Ewing for helpful comments on an earlier version of the manuscript.\nWe also thank the referees for helpful suggestions for improvement.\nThe work of SG was funded by the Royal Society through a University Research Fellowship (UF160622) and a Research Grant for Research Fellows (RGF\\textbackslash R1\\textbackslash 180030).\nAP was supported by the same Research Grant for Research Fellows awarded to SG.\n\n\\section{Introduction}\n\n\nIn \\cite{Lo2}, J.-L. Loday introduced a non-antisymmetric version of\nLie algebras, whose bracket satisfies the Leibniz relation (see\n(2.5)), therefore called {\\it Leibniz algebra}. The Leibniz\nrelation, combined with antisymmetry, is a variation of the Jacobi\nidentity, hence Lie algebras are anti-symmetric Leibniz algebras. In\n\\cite{Lo4}, Loday also introduced an 'associative' version of\nLeibniz algebras, called {\\it associative dialgebras}, equipped with\ntwo binary operations, $\\vdash$ and $\\dashv$, which satisfy the five\nrelations (see the axiom (Ass) in section 2). These identities are\nall variations of the associative law, so associative algebras are\ndialgebras for which the two products coincide. The peculiar point\nis that the bracket $[a, b]:=a\\dashv b-b\\vdash a$ defines a Leibniz\nalgebra which is not antisymmetric, unless the left and right\nproducts coincide. 
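To make this concrete, the sketch below (our own illustration, not part of the original paper) checks numerically that the bracket $[x, y] := x\dashv y - y\vdash x$ fails antisymmetry yet satisfies the Leibniz identity. It assumes Loday's standard bimodule construction of an associative dialgebra: for an associative algebra $A$ (here $2\times 2$ matrices), $M = A\oplus A$ with the bimodule map $f(m_1, m_2) = m_1 + m_2$ carries the products $m\dashv n := m\cdot f(n)$ and $m\vdash n := f(m)\cdot n$, taken componentwise.

```python
import numpy as np

# Illustrative example (not from the paper): Loday's bimodule construction.
# M = A (+) A over the matrix algebra A, with bimodule map f(m1, m2) = m1 + m2.

def f(m):
    return m[0] + m[1]

def ldash(m, n):   # m -| n  := m . f(n), componentwise
    return (m[0] @ f(n), m[1] @ f(n))

def rdash(m, n):   # m |- n  := f(m) . n, componentwise
    return (f(m) @ n[0], f(m) @ n[1])

def bracket(m, n):  # [m, n] := m -| n  -  n |- m
    return tuple(p - q for p, q in zip(ldash(m, n), rdash(n, m)))

E11 = np.array([[1., 0.], [0., 0.]])
E12 = np.array([[0., 1.], [0., 0.]])
E21 = np.array([[0., 0.], [1., 0.]])

x, y, z = (E11, E12), (E21, np.zeros((2, 2))), (E12, E21)

# The bracket is NOT antisymmetric: [x, y] + [y, x] is nonzero here.
assert any(np.any(p + q != 0) for p, q in zip(bracket(x, y), bracket(y, x)))

# ... yet the (right) Leibniz identity [x,[y,z]] = [[x,y],z] - [[x,z],y] holds.
lhs = bracket(x, bracket(y, z))
rhs = tuple(p - q for p, q in zip(bracket(bracket(x, y), z),
                                  bracket(bracket(x, z), y)))
assert all(np.allclose(p, q) for p, q in zip(lhs, rhs))
```

Componentwise the bracket reduces to $[x_i, f(y)]$ in $A$, so the Leibniz identity follows from the Jacobi identity for matrix commutators, while antisymmetry is lost because $f(x)$ and $f(y)$ enter asymmetrically.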
Hence dialgebras yield a commutative diagram of\ncategories and functors\n\\begin{eqnarray*}\n{\\bf Dias}&\\stackrel{-}{\\to}& {\\bf Leib}\\\\\n\\downarrow&&\\downarrow\\\\\n{\\bf Assoc}&\\stackrel{-}{\\to}& {\\bf Lie}\n\\end{eqnarray*}\n\n\nSteinberg Lie algebras come from Steinberg groups, which are closely\nconnected with K-theory, and play a key role in the study of Lie\nalgebras graded by finite root systems of type $A$. By definition,\nthe {\\it Steinberg Lie algebra} $\\frak{st}\\,(n, A)$ over a $K$-algebra $A$\nis a Lie algebra generated by symbols $v_{ij}(a)$, $1\\le i\\ne j\\le\nn$, $a\\in A$, subject to the relations\n\\begin{eqnarray}\n&&v_{ij}(k_1a+k_2b)=k_1v_{ij}(a)+k_2v_{ij}(b),\\ \\hbox{ for } \\ a, b\\in A, \\ k_1,\nk_2\\in K;\\\\\n&&[v_{ij}(a), v_{kl}(b)]=0\\ \\hbox{ if }\\ i\\ne l\\ \\hbox{ and } \\ j\\ne k;\\\\\n&&[v_{ij}(a), v_{kl}(b)]=v_{il}(ab)\\ \\hbox{ if } \\ i\\ne l\\\n\\hbox{and } \\ j= k.\n\\end{eqnarray}\nIt is clear that the relation (3) makes sense only if $n\\ge3$.\n\n From \\cite{Fa} we see that the map $\\eta: a\\to v_{ij}(a)$ is one-to-one if and\n only if $A$ is an associative algebra for $n\\ge 4$ and $A$ is an alternative\n algebra for $n=3$.\n\n\nIn 1992, S. Berman and R.V. Moody (\\cite{BM}) studied Lie algebras\ngraded by finite root systems of $A_l\\,(l\\ge 2)$, $D_l\\, (l\\ge 4)$,\n$E_l\\, (l=6, 7, 8)$ and obtained the structure of a Lie algebra over\n$K$ graded by the root system $\\Delta$ of type $X_l\\, ( l\\ge2)\\, (\nX_l=A_l, D_l, E_l)$.\n\n\nThe universal central extensions of Lie algebras graded by finite\nroot systems were studied in several papers (\\cite{B}, \\cite{Gar},\n\\cite{KL}, \\cite{Gao1}, \\cite{ABG1}, etc.).\n\n In this paper we shall consider Leibniz algebras graded by finite root systems of\n types $A, D$ and $E$. 
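Relations (2) and (3) mirror commutators of elementary matrices under the canonical map $v_{ij}(a)\mapsto a e_{ij}$ into $\frak{sl}\,(n,A)$; a quick numerical sanity check (an illustration with $A=\mathbb{R}$ and $n=3$, not part of the paper):

```python
import numpy as np

def v(i, j, a, n=3):
    # image of the generator v_ij(a) under the canonical map v_ij(a) -> a*e_ij
    m = np.zeros((n, n)); m[i, j] = a; return m

def br(x, y):  # matrix commutator
    return x @ y - y @ x

a, b = 2.0, 3.0
# relation (2): [v_ij(a), v_kl(b)] = 0 when i != l and j != k
assert np.array_equal(br(v(0, 1, a), v(0, 2, b)), np.zeros((3, 3)))
# relation (3): [v_ij(a), v_jl(b)] = v_il(ab) when j = k and i != l
assert np.array_equal(br(v(0, 1, a), v(1, 2, b)), v(0, 2, a * b))
```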
We also prove that\n\\begin{theo}{\\bf(Recognition Theorem).}\n Let $L$ be a Leibniz algebra over $K$ graded by the root system $\\Delta$ of\n type $X_l(l\\ge2)\\, (X_l=A_l, D_l, E_l)$.\n\n(1) If $X_l=A_l, l\\ge 3$, then there exists a unital associative\n$K$-dialgebra $R$ such that $L$ is centrally isogenous with\n$\\frak{sl}\\,(l+1, R)$;\n\n(2) If $X_l=A_l, l=2$, then there exists a unital alternative\n$K$-dialgebra $R$ such that $L$ is centrally isogenous with\n$\\frak{stl}\\,(l+1, R)$, where $\\frak{stl}\\,(n, R)$ is defined in Section 2.4;\n\n(3) If $X_l=D_l\\, (l\\ge 4), E_l\\, (l=6, 7, 8)$, then there exists a\nunital associative commutative $K$-dialgebra $R$ such that $L$ is\ncentrally isogenous with $\\dot{\\mg}\\otimes R$.\n\n\\end{theo}\n{\\bf Remark.} Two perfect Lie algebras $L_1$ and $L_2$ are called\n{\\it centrally isogenous} if they have the same universal central\nextension (up to isomorphism).\n\n\nThe paper is organized as follows. In Section 2, we recall some\nnotions of Leibniz algebras and dialgebras. In Section 3, we give\nthe definition of Leibniz algebras graded by finite root systems. In\nSections 4 and 5, we mainly prove the Recognition Theorem (Theorem\n1.1). Throughout this paper, $K$ denotes a field of characteristic\n0, $R$ a unital dialgebra over $K$.\n\n\n\n\n\\section{Dialgebras and Leibniz algebras}\n\n\nWe recall the notions of associative dialgebras, alternative\ndialgebras, Leibniz algebras and their (co)homology as defined in\n\\cite{Lo1}---\\cite{Lo4} and \\cite{L}.\n\n\n\\subsection{Dialgebras.}\n\n\\begin{defi}\\cite{Lo4} A {\\it dialgebra} $D$ over $K$ is a $K$-vector space\n$D$ with two operations $\\dashv, \\vdash:D\\otimes D\\to D$, called left\nand right products.\n\\end{defi}\n\nA dialgebra is called unital if it is given a specified bar-unit: an\nelement $1\\in D$ which is a unit for the left and right products\nonly on the bar-side, that is, $1\\vdash a=a=a\\dashv 1$, for any\n$a\\in D$. 
A morphism of dialgebras is a $K$-linear map $f:D\\to D'$\nwhich preserves the products, i.e. $f(a\\star b)=f(a)\\star f(b)$,\nwhere $\\star$ denotes either the product $\\dashv$ or the product\n$\\vdash$.\n\n\n\\begin{defi} \\cite{Lo4} A dialgebra $D$ over $K$ is called {\\it associative}\nif the two operators $\\dashv$ and $\\vdash$ satisfy the following\nfive axioms:\n$$\\cases{a\\dashv(b\\dashv c)=(a\\dashv b)\\dashv c=a\\dashv(b\\vdash c),\\cr\n (a\\vdash b)\\dashv c=a\\vdash(b\\dashv c),\\cr\n (a\\vdash b)\\vdash c=a\\vdash (b\\vdash c)=(a\\dashv b)\\vdash c. }\\leqno(Ass)$$\n\\end{defi}\n\n\nDenote by {\\bf Dias, Assoc} the categories of associative dialgebras\nand associative algebras over $K$ respectively. Then the category\n{\\bf Assoc} is a full subcategory of {\\bf Dias}.\n\nObviously, an associative dialgebra is an associative algebra if and\nonly if $a\\dashv b=a\\vdash b=ab$.\n\nThe concept of alternative dialgebras was introduced in \\cite{L} for\nthe study of the Steinberg Leibniz algebras.\n\n\\begin{defi}\\cite{L} A dialgebra $D$ over $K$ is called {\\it alternative}\nif the two operators $\\dashv$ and $\\vdash$ satisfy the following five axioms:\n$$\\cases{J_{\\dashv}(a, b, c)=-J_{\\vdash}(c, b, a), \\quad J_{\\dashv}(a, b, c)=J_{\\vdash}(b, c, a),\\cr\n J_{\\times}(a, b, c)=-J_{\\vdash}(a, c, b),\\cr\n (a\\vdash b)\\vdash c=(a\\dashv b)\\vdash c,\\quad a\\dashv (b\\vdash c)\n =a\\dashv (b\\dashv c), }\\leqno(Alt)$$\nwhere $J_{\\dashv}(a, b, c)=(a\\dashv b)\\dashv c-a\\dashv(b\\dashv c),\\ J_{\\vdash}(a, b, c)\n=(a\\vdash b)\\vdash c-a\\vdash(b\\vdash c)$ and $J_{\\times}(a, b, c)\n=(a\\vdash b)\\dashv c-a\\vdash(b\\dashv c)$.\n\\end{defi}\n\nObviously, an alternative dialgebra is an alternative algebra if\n$a\\dashv b=a\\vdash b=ab$. 
Moreover, the following formulae are clear\nfor an alternative dialgebra according to the definition.\n$$J_{\\dashv}(a, b, c)=-J_{\\dashv}(a, c, b),\\eqno(2.1)$$\n$$J_{\\vdash}(a, b, c)=-J_{\\vdash}(b, a, c),\\eqno(2.2)$$\n$$J_{\\times}(a, b, c)=-J_{\\times}(c, b, a).\\eqno(2.3)$$\nSo we also have $$J_{\\dashv}(a, b, b)=0, \\ J_{\\vdash}(a, a, b)=0, \\\nJ_{\\times}(a, b, a)=0.\\eqno(2.4)$$ {\\bf Examples.}\n\n1. Obviously, an associative (alternative) dialgebra is an\nassociative (alternative) algebra if and only if $a\\dashv b=a\\vdash\nb=ab$.\n\n2. {\\it Differential associative (alternative) dialgebra.} Let $(A, d)$ be\na differential associative (alternative) algebra. So by hypothesis,\n$d(ab)=(da)b+adb$ and $d^2=0$. Define left and right products on\n$A$ by the formulas\n$$x\\dashv y=xdy, \\quad x\\vdash y=(dx)y.$$\nThen $A$ equipped with these two products is an associative (alternative) dialgebra.\n\n3. Tensor product. Let $D$ and $D'$ be two associative dialgebras,\nthen $D\\otimes D'$ with multiplication $(a\\otimes a')\\star (b\\otimes b')=(a\\star\nb)\\otimes (a'\\star b')$, $\\star=\\dashv, \\vdash$, is also an associative\ndialgebra. In particular, if $D$ is a unital associative dialgebra,\nthen $M_n(D)=M_n(K)\\otimes D$ is also a unital associative dialgebra.\n\n4. Let $A$ be an associative (alternative) algebra. On the module $D=A^{\\otimes n}$ one puts\n$$(x\\dashv y)_i=x_i(\\sum_{j=1}^ny_j), \\ i=1, \\cdots, n\\quad \\hbox {and}$$\n$$(x\\vdash y )_i=(\\sum_{j=1}^nx_j)y_i, \\ i=1, \\cdots, n. $$\nThen $(D, \\dashv, \\vdash)$ is an associative (alternative) dialgebra. 
For $n=1$, this is example 1.\n\n\n\n\\subsection{Leibniz algebras.} A {\\it Leibniz algebra} \\cite{Lo2} $L$\nis a vector space over a field $K$ equipped with a $K$-bilinear\nmap\n$$[-,-]: L\\times L\\to L$$\nsatisfying the Leibniz identity\n$$[x, [y, z]]= [[x, y], z]-[[x, z], y], \\quad \\forall \\;x, \\,y, \\,z\\in L.\\eqno(2.5)$$\n\nObviously, a Lie algebra is a Leibniz algebra. A Leibniz algebra is a Lie algebra if\nand only if\n$[x, x]=0$ for all $x\\in L$.\n\n\nSuppose that $L$ is a Leibniz algebra over $K$. For any $z\\in L$,\nwe define $\\hbox{\\rm ad}\\, z\\in \\hbox{End}_KL$ by\n$$\\hbox{\\rm ad}\\, z(x)=-[x, z], \\quad\\forall x\\in L.$$\nIt follows from (2.5) that\n$$\\hbox{\\rm ad}\\, z([x, y])=[\\hbox{\\rm ad}\\, z(x), y]+[x, \\hbox{\\rm ad}\\, z(y)]$$\nfor all $x, y\\in L$. This says that $\\hbox{\\rm ad}\\, z$ is a derivation of\n$L$. We also call it an inner derivation of $L$.\n\n\nSimilarly, we also have the definition of a general derivation of a\nLeibniz algebra, and we denote by $\\hbox{\\rm Inn}\\,(L)$, $\\hbox{Der}\\,(L)$ the sets of all\ninner derivations and derivations of $L$, respectively. They are also\nLeibniz algebras.\n\n\nLet $L$ be a Leibniz algebra over $K$. Consider the boundary map: $\\delta_n:L^{\\otimes n}\\to L^{\\otimes (n-1)}$ defined by\n$$\\delta_n(x_1\\otimes\\cdots\\otimes x_n)=\\sum_{1\\le i
(Crab unit) is assumed here as \n$dF_{\\rm{cu}}\/dE=2.79\\cdot10^{-7}(E\/\\mathrm{TeV})^{-2.57}\\mathrm{photons}\/(\\mathrm{m}^{2}\\;\\mathrm{s}\\;\\mathrm{TeV})$\n\\cite{HEGRA-Crab}.\n\\label{fig:sensitivity-time}}\n\\end{figure}\n\n\n\\section {Conclusions and outlook}\n\nThe combination of the CORSIKA and sim\\_telarray programs is very well\nsuited for simulations of large telescope arrays, even with hundreds\nof telescopes. It is also very\nflexible and can be adapted to arbitrary array configurations and\ntelescope setups just by means of run{}-time configuration. The\nsubsequent analysis with its Hillas{}-parameter based shower\nreconstruction results in rather conservative performance\nestimates. Arrays consisting of multiple telescope types are easily\nhandled in all stages. First simulations included a range of\ndifferent test configurations, with emphasis on different energy\nranges, and demonstrated that the CTA sensitivity goal can be\nachieved \\cite{ICRC2007paper} {--} at least for energies above 50 to 100 GeV. Future\nsimulations will include more realistic CTA configurations (within\nbudget constraints) as well as some corner cases needed to improve the\nCTA design optimisation scheme.\n\n\n\n\\begin{theacknowledgments}\nI would like to thank Emiliano Carmona, Jim Hinton, and many others\nfor useful discussions on the CTA configurations. Stefan Funk has\ncomplemented my 9-telescope simulations for 2000 and 5000~m altitude\nwith corresonding ones for 3500~m altitude, carried out at the\nSLAC computing center.\n\\end{theacknowledgments}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:intro}\n\n\nThe large hadron collider (LHC) has discovered the Higgs boson and measured its properties consistent with the precision measurements of the Standard Model (SM). The discovery of the last particle of the standard model strengthened the foundation of the SM. 
However, overwhelming evidence and hints require physics beyond the Standard Model (BSM). While intensive searches for new particles at the weak scale or heavier have been performed experimentally, we have not found any convincing evidence of such new particles yet. In these circumstances, there is growing attention to feebly interacting light particles from both theoretical and experimental points of view. \n\n\nNew light particles with mass less than tens of GeV are a compelling addition to the SM as they can directly resolve the central issues of the SM, such as the strong CP problem and the finite neutrino masses, explain many flavor physics anomalies, and be a portal to the dark sector accommodating the dark matter and the baryon asymmetry of the universe. On experimental grounds, the light particles, if they exist, need to be feebly interacting with the SM sector due to various experimental constraints. However, one should be aware that the bounds are strongly model-dependent because the signature depends on the nature of the light degrees of freedom. This situation contrasts with the exploration of heavy new particles with mass beyond the reach of current collider energies, which can be probed through Effective Field Theories~(EFT). Therefore, there is growing interest in various scenarios based on the current experimental data and the potential for future high-intensity experiments. The status of recent progress is well summarized in the report of {\\it Physics Beyond Colliders}~\\cite{Beacham:2019nyx}. \n\n\nIn this paper, we study the physics potential of the dump facilities of the International Linear Collider (ILC) to hunt new light particles. The ILC is a proposed future $e^+e^-$ linear collider with a beam energy of 125~GeV, which can be upgraded to 500~GeV. The main goal of the ILC is to study $e^+e^-$ collision events to perform high precision measurements of the Higgs boson and search for new particles produced by the electroweak interaction. 
In a linear collider the beam has to go into the beam dump after the collision, and it provides an excellent opportunity for a high-intensity experiment to produce feebly interacting light particles. The number of electrons on the beam dump is enormous, $N_{\\rm EOT}=4\\times 10^{21}$ per year, and the electromagnetic (EM) shower also leads to high-intensity photons carrying $\\cal O$(10\\%) of the beam energy. The setup is equivalent to other fixed-target experiments if a detector is placed downstream of the beam dump. Hereafter we refer to such a setup as {\\it the ILC beam dump experiment}. One can search for light particles that are produced in the dump, fly beyond the muon shield, and then decay to SM particles or scatter off the detector material. The potential of the ILC beam dump project has been investigated in Refs.~\\cite{Izaguirre:2013uxa,Kanemura:2015cxa,Sakaki:2020mqb,Asai:2021ehn,Asai:2021xtg,Moroi:2022qwz}. \n\n\nPrevious electron beam dump experiments mainly searched for light particles dominantly interacting with electrons or photons, since these are produced by the primary electrons and positrons or by the secondary shower particles. In the ILC beam dump experiment, the initial beam energy is much higher than that of the past electron beam dump experiments. In addition to the EM showers, heavy mesons and $\\tau$ leptons can be produced by the shower photon hitting nuclei. The decay of the produced SM particles is another promising source of light particles. This paper examines the production yield and spectrum of the mesons and $\\tau$ lepton at the ILC beam dump setup for the first time. \n\n \nAs an application of this study, we investigate the projected sensitivity to heavy neutral leptons (HNLs, called sterile neutrinos in some literature). The HNLs mix with the SM neutrinos with an angle of $U_\\ell \\ (\\ell=e,\\mu, \\tau)$, resulting in a suppressed weak interaction with the SM. 
Therefore it is very natural to consider the HNL production from the meson and $\\tau$ lepton decays where the weak interaction dominates. The current experimental constraints are mostly for mixing with $\\nu_e$ and $\\nu_\\mu$, and the high intensity of mesons would extend the reach into the under-explored parameter space. Also the sensitivity to the $\\tau$ neutrino mixing angle $U_\\tau$ can be significantly improved because the $\\tau$ lepton is accessible. \n\n\nThe phenomenology of the HNLs is well reviewed in Refs.~\\cite{Alekhin:2015byh, Dasgupta:2021ies} and references therein. We stress that feebly interacting HNLs in a GeV mass range are a motivated and well-defined physics target. The seesaw mechanism with at least two HNLs can explain the neutrino masses observed in neutrino oscillations. If the two HNLs are almost degenerate, the sum of the mixing angles has a lower bound, approximately $U^2\\equiv \\sum_l |U_l|^2\\gtrsim m_{\\rm atm}\/m_N \\sim 10^{-11} (m_N\/{\\rm 1~GeV})^{-1}$~\\cite{Shaposhnikov:2006nn,Alekhin:2015byh}. Furthermore, the degenerate HNLs in the early universe can produce the baryon asymmetry via the HNL oscillations~\\cite{Akhmedov:1998qx, Asaka:2005pn}. The feeble interactions are necessary for the departure from thermal equilibrium, which is one of Sakharov's conditions for generating the baryon asymmetry. Together with the neutrino mass constraints, the interesting parameter space is in a range of $10^{-11}\\lesssim U^2\\lesssim10^{-6}$.~\\footnote{The benchmark values depend on the number of HNLs involved in the seesaw mechanism. Three or more HNLs will allow a smaller value of $|U_l|^2$. However, it is important to examine the parameter space of the minimal scenarios.}\n\n\nThis paper is organized as follows. In Sec.~\\ref{sec:dumpsetup}, we briefly review the ILC beam dump experiment, and in the following section, we evaluate the spectra of mesons and $\\tau$ lepton. 
In Sec.~\\ref{sec:HNL}, we study the HNLs at the ILC beam dump experiment. In addition to the HNL production from the SM particle decays, we consider an HNL production via deep inelastic scattering and another production from $Z$ decays at the interaction point. We finish with the discussion in Sec.~\\ref{sec:discussion}. \n\n\n\\section{ILC Beam Dump Experiment}\\label{sec:dumpsetup}\nThe ILC main beam dump has to absorb 2.6~MW ($125~{\\rm GeV}\\times 21~{\\rm \\mu A}$) of the $e^\\pm$ beam energy for 125~GeV in the initial stage and 13.6~MW ($500~{\\rm GeV}\\times 27.3~{\\rm \\mu A}$) for 500~GeV in an upgrade stage. Following a water dump designed with a length of $l_{\\rm dump}=11~{\\rm m}$ and a diameter of 1.8~m, a muon shield of length $l_{\\rm sh}=70~{\\rm m}$ would be placed as proposed in \\cite{Sakaki:2020mqb,Asai:2021ehn}, to remove the secondary muon background. The cylindrical decay volume of length $l_{\\rm dec}=50~{\\rm m}$ and radius $r_{\\rm det}=3$~m would lie between the muon shield and the downstream detector. A schematic view is shown in Fig.~\\ref{fig:exp}, and a similar design of the setup can be found in \\cite{Sakaki:2020mqb,Asai:2021ehn}. Here, we additionally assume a multi-layer tracker in the decay volume. We consider a time frame of a 10-year run for both ILC-250 and ILC-1000, where the beam energy is 125~GeV and 500~GeV, respectively. \n\n\nThe EM shower (photons, electrons, and positrons) starts in the beam dump. 
The ILC beam dump experiment is unique compared to the past electron beam dump experiments with regard to the higher beam energy and the high intensity, i.e., the number of electrons on the beam dump $N_{\\rm EOT}$ is about $4\\times 10^{21}$ per year\\footnote{In ILC-1000, we assume $N_{\\rm EOT}=4\\times 10^{21}$ per year in the numerical calculations.}.\nThe energetic beam creates photons at the $\\cal O$(1-10)~GeV energy scale (Fig.~9 of \\cite{Asai:2021ehn}), and the secondary interaction between the photon and the nucleus can produce light mesons ($\\pi$ and $K$), heavy mesons ($D$, $B$, and even $B_c$) and $\\tau$ leptons.\nThey lose energy in the beam dump and finally decay, and some of them produce the HNLs. The yield and energy spectrum of the SM particles at the decay are therefore important to study the sensitivity of the ILC beam dump experiment.\nWe show the evaluation in Sec.~\\ref{sec:spectrum}. \n\n\nThe secondary muons are stopped in the muon shield at ILC-250, but their penetration behind the shield cannot be neglected for $E_{\\rm beam}=500$~GeV at ILC-1000. In this case, an additional active muon shield behind the muon shield would be necessary, and we assume that the muon shield consists of the lead shield ($l^{\\rm lead}_{\\rm sh}=10$~m) and the active shield ($l^{\\rm active}_{\\rm sh}=70~{\\rm m}-l^{\\rm lead}_{\\rm sh}$). The HNL with a dominant mixing with the muon neutrino can be produced inside the muon shield by scattering of the shower muon, and we approximate that the HNLs produced behind the lead shield do not contribute to the signal events. In Appendix~\\ref{app:muonshield}, we study how a different depth of the muon shield affects the sensitivity for the HNL at ILC-1000. \n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.95\\textwidth]{figs\/exp.pdf}\n\\caption{A setup for ILC beam dump experiments. It consists of the main beam dump, a muon shield, and a decay volume. 
We assume a multi-layer tracker is placed in the decay volume so that the charged tracks are measured.}\n\\label{fig:exp}\n\\end{figure}\n\n\n\\section{Meson and $\\tau$ lepton Spectra}\\label{sec:spectrum}\n\n\nThis section presents the meson and $\\tau$ lepton spectra obtained by Monte-Carlo simulation at the ILC beam dump. We use PHITS~3.25~\\cite{Sato:2018imy} for production and transport of particles other than heavy mesons. PHITS (Particle and Heavy Ion Transport code System) is a general-purpose Monte Carlo particle transport simulation code developed under a collaboration between JAEA, RIST, KEK, and several other institutes. PHITS can transport most particle species for a given geometry of the materials, and it is tested thoroughly by benchmark studies~\\cite{iwamoto2017benchmark,matsuda2011benchmarking}. For heavy meson production, we implement their differential production cross sections obtained by PYTHIA8.3~\\cite{Bierlich:2022pfr} into PHITS. More details are given in the following. \n\n\n\\subsection{Light Mesons}\n\n\n\\begin{figure*}\n\\centering \n\\includegraphics[width=0.49\\textwidth]{figs\/decay_125GeV.pdf}~~\n\\includegraphics[width=0.49\\textwidth]{figs\/decay_500GeV.pdf}\n\\caption{The kinetic energy distribution of light mesons when they decay, where $\\pi^\\pm$ and $K^\\pm$ indicate the sum of charged pions and kaons.\nWe consider two beam energies, $E_{\\rm beam}=125,~500$~GeV at ILC-250 and ILC-1000, respectively.}\n\\label{fig:decay}\n\\end{figure*}\n\n\nThe light mesons are mainly produced by the interaction of real photons in the electromagnetic shower with the nucleons in the beam dump. If the decay length of the produced light mesons is of the same order of magnitude as, or greater than, the mean free path in the material, the particles reduce their energy or change into different flavors by (in-)elastic scattering or multiple scattering. 
We use the following codes and models which are available in PHITS to simulate the electromagnetic shower and the production and transport of the light mesons. For the electromagnetic shower, the simulation is performed by EGS5~\\cite{Hirayama:2005zm}. For the light meson production and transport, the JAM~\\cite{Nara:1999dz} and JQMD~\\cite{PhysRevC.52.2620} models modified for photoproduction (photonuclear interaction) are used. In addition to these models, INCL4.6~\\cite{PhysRevC.87.014606} is also employed to calculate the interaction of the mesons with nuclei during transport. The energy loss of the charged particles due to multiple scattering is evaluated by ATIMA~\\cite{ATIMA}.\n\n\nFig.~\\ref{fig:decay} shows the kinetic energy distribution of light mesons when they decay. The decay energy distribution is more important than the production energy distribution because the kinematics of new particles is determined by the parent particle distribution at the decay. For reference, the production energy distribution of light mesons is shown in Fig.~\\ref{fig:production}.\n\n\n\n\n\n\\subsection{Heavy Mesons}\n\n\nWe use PYTHIA8 to calculate the differential cross sections of the $\\gamma p(n)\\to B(D)+X$ process for the heavy mesons.\\footnote{We thank the Pythia team, especially Ilkka Helenius, for helping us understand the latest photoproduction feature of PYTHIA~8.3.}\nWe have checked that the sum of the direct and non-diffractive cross section of the $D$ meson production agrees very well with the photoproduction data~\\cite{SLACHybridFacilityPhoton:1983yfx,TaggedPhotonSpectrometer:1989bpi}, see Appendix~\\ref{app:Dmeson}. Therefore we regard the sum of the two cross sections as the total cross section, \n$\\sigma_{\\rm total}(\\gamma p(n))=\\sigma_\\text{non-diff}(\\gamma p(n))+ \\sigma_{\\rm direct}(\\gamma p(n))$. 
The total cross section of an atomic nucleus is obtained by taking into account the shadowing effect \\cite{Caldwell:1978ik,Kopeliovich:2012kw}, \n\\begin{equation}\n\\sigma_{\\rm total}(\\gamma A)= A^{l} \\left(\\frac{Z}{A}\\sigma_{\\rm total}(\\gamma p)+\\frac{A-Z}{A}\\sigma_{\\rm total}(\\gamma n)\\right) ,\n\\label{eq:atomic}\n\\end{equation}\nwhere $l=0.92$~\\cite{Kopeliovich:2012kw}. \n\n\nThe differential production cross sections for heavy meson photoproduction are obtained by PYTHIA8~\\cite{Bierlich:2022pfr} and implemented in PHITS. Since the decay length of the heavy mesons is much shorter than the mean free path in the material, the spectra at their production and decay are similar. In Fig.~\\ref{fig:production}, we show the production rate per electron injection in the beam dump for mesons and $\\tau$ lepton with respect to the kinetic energy at production, where the energy is normalized by the beam energy. The results for $\\pi^{\\pm}(\\pi^+, \\pi^-)$, $K(K^+, K^-, K_S^0, K_L^0)$, $D(D^+, D^-, D^0, \\overline{D}^0)$, $D_s(D_s^+, D_s^-)$, $B(B^+, B^-, B^0, \\overline{B}^0)$, $B_s(B_s^0, \\overline{B}_s^0)$, $B_c(B_c^+, B_c^-)$, and $\\tau(\\tau^+, \\tau^-)$ produced in the beam dump are shown, which represent the sum of the particles in the parentheses. We consider two beam energies, $E_{\\rm beam}=125,~500$~GeV at ILC-250 and ILC-1000, respectively. The overall yield of the heavy meson production increases as the beam energy gets higher, and $B_c$ becomes accessible at ILC-1000. For the sake of comparison, we also include $\\pi$, $K$ distributions at production. 
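Eq.~\eqref{eq:atomic} is straightforward to evaluate numerically; a minimal sketch (the function name is an illustrative assumption, not taken from the simulation code):

```python
def sigma_gamma_A(A, Z, sigma_p, sigma_n, l=0.92):
    """Total photoproduction cross section on a nucleus with mass number A
    and charge Z, built from the per-nucleon cross sections sigma_p and
    sigma_n, including the shadowing suppression A**l with l = 0.92."""
    return A**l * (Z / A * sigma_p + (A - Z) / A * sigma_n)

# hydrogen (A = Z = 1) reduces to the free-proton cross section
assert sigma_gamma_A(1, 1, 5.0, 4.0) == 5.0
```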
\n\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=0.45\\textwidth]{figs\/production_125GeV.pdf}\n\\includegraphics[width=0.45\\textwidth]{figs\/production_500GeV.pdf}\n\\caption{The production rate per electron injection in the beam dump for mesons and $\\tau$ lepton with respect to the kinetic energy at production, where the energy is normalized by the beam energy.\nThe results for $\\pi^{\\pm}(\\pi^+, \\pi^-)$, $K(K^+, K^-, K_S^0, K_L^0)$, $D(D^+, D^-, D^0, \\overline{D}^0)$, $D_s(D_s^+, D_s^-)$, $B(B^+, B^-, B^0, \\overline{B}^0)$, $B_s(B_s^0, \\overline{B}_s^0)$, $B_c(B_c^+, B_c^-)$, and $\\tau(\\tau^+, \\tau^-)$ produced in the beam dump are shown, which represent the sum of the particles in the parenthesis.\nWe consider two beam energies, $E_{\\rm beam}=125,~500$~GeV at ILC-250 and ILC-1000, respectively.}\n\\label{fig:production}\n\\end{figure*}\n\n\n\\subsection{$\\tau$ lepton}\nAs discussed in the previous literature~\\cite{2018}, the primary source of $\\tau$ lepton is the $D_s$ decay with approximately 5\\% branching ratio. PHITS simulates $D_s$ meson propagation and decay, which accounts for the $\\tau$ lepton production. The sub-dominant source of $\\tau$ lepton is the $\\tau$ lepton pair production, $\\gamma + {\\rm nucleus\/nucleon} \\to \\tau^+ + \\tau^- + X$. We implement a complete differential cross section calculated with the Born approximation in QED in the PHITS code and generate events for the process~\\cite{Tsai:1973py,Sakaki:2020cux}. The form factors for coherent (nucleus elastic), quasi-elastic and inelastic interactions are included. The spectrum of $\\tau$ lepton is shown in Fig.~\\ref{fig:production}.\n\n\nWe find that the number of $\\tau$ leptons from the pair production is about 20 times smaller than those from the decay of $D_s$. However, the pair production process becomes dominant in the high-energy region where the kinetic energy of $\\tau$ lepton is above 65\\% of the beam energy. 
So the pair production process will be necessary when considering the physics of high-energy $\\tau$ leptons or $\\tau$ neutrinos at the ILC beam dump.\n\n\n\n\n\\section{Heavy Neutral Leptons}\\label{sec:HNL}\n\nIf gauge singlet fermions $N$ exist in the BSM sector, a renormalizable interaction with the SM sector is possible\n\\begin{align}\n{\\cal L} = -\\lambda_{\\ell I} (\\bar L_\\ell \\tilde{H}) N_I -\\frac{1}{2}M_{I} {\\bar{N}_I^c} N_I +\\rm h.c. \\label{eq:HNL}, \n\\end{align}\nwhere $L_\\ell$ is the SM lepton doublet of flavor $\\ell=e,\\mu,\\tau$, and $I$ is the index of $N$. \nIf $M_I\\gg \\lambda_{\\ell I} v$, the standard seesaw mechanism makes the SM neutrino masses light, $m_{\\nu, \\ell\\ell'}\\sim \\sum_I (\\lambda_{\\ell I}\\lambda_{\\ell'I}) v^2\/M_I$. For $\\mathcal{O}(1)$ Yukawa coupling, the singlet fermion $N$ has to be extremely heavy to satisfy the bounds on the neutrino mass, $m_{\\nu} \\gtrsim 0.05~{\\rm eV}$~\\cite{ParticleDataGroup:2020ssz}. On the other hand, if the size of the Yukawa coupling $\\lambda$ is small, $M$ can be in the MeV-GeV mass range to satisfy the same condition. Such particles can be searched for directly in laboratories, and they are often called heavy neutral leptons (HNLs) or sterile neutrinos. The active neutrinos $\\nu$ from the $L$ doublet and the HNLs $N$ are almost in the mass eigenstates up to a small admixture between $\\nu_i$ and $N_I$ characterized by the mixing angle\n\\begin{align}\nU_{\\ell I} = \\frac{v \\lambda_{\\ell I}}{M_I}. \\label{eq:mixing}\n\\end{align}\nSince the mixing and mass determine the HNL interactions with the SM particles, we use the mixing parameter $U_{\\ell I}$ instead of the Yukawa couplings to discuss the HNL phenomenology. \n\n\n\\begin{figure*}[tt]\n\\centering\n\\includegraphics[width=0.6\\textwidth]{figs\/benchmarkplot.pdf}\n\\caption{\nThe target parameter space in a scenario of two degenerate HNLs with respect to the HNL mass and the sum of the mixing squared defined in Eq.\\eqref{eq:mixingsum}. 
The region between dashed (dotted) green lines is favored as the HNL can generate the baryon asymmetry of the universe~\\cite{Akhmedov:1998qx, Asaka:2005pn}, and the lines are adapted from Fig.~4.17 of \\cite{Alekhin:2015byh}. \nThe bottom shaded region cannot explain the neutrino oscillation data in the Type-I seesaw mechanism with the two degenerate HNLs. Another shaded region with the BBN label is excluded because the long-lived HNLs would spoil the success of big bang nucleosynthesis. The bound is obtained with respect to $U_e$, $U_\\mu$, or $U_\\tau$ in \\cite{Boyarsky:2020dzc}, and we take $U^2<\\min_{\\ell=e,\\mu,\\tau}[|U_{\\ell}^{(\\rm BBN)}|^{2}]$ for this plot. \n}\n\\label{fig:bench}\n\\end{figure*}\n\n\nThe HNLs in the GeV mass range have another exciting aspect beyond explaining the masses of the active neutrinos and their testability. They can be responsible for the baryon asymmetry by leptogenesis via HNL oscillation~\\cite{Akhmedov:1998qx, Asaka:2005pn}. Effective leptogenesis occurs for fast $N_I$ oscillation, and therefore two degenerate HNLs are an excellent benchmark model to investigate. \n\n\nIn the minimalistic scenario with two quasi-degenerate HNLs, the target parameter space is well-defined by the baryon asymmetry and the neutrino mass. We schematically show it in Fig.~\\ref{fig:bench}. Note that the vertical axis is the sum of the mixing angles over both the active neutrinos and the HNLs, \n\\begin{align}\nU^2\\equiv \\sum_{\\ell=e,\\mu,\\tau}\\sum_{I=1,2} |U_{\\ell I}|^2 . \n\\label{eq:mixingsum}\n\\end{align}\nWe also include a bound on $U^2$ from Big Bang Nucleosynthesis (BBN). The small mixing angle is disfavored as the HNLs are sufficiently long-lived to decay during or slightly before BBN and affect the ratio of the neutron and proton number densities~\\cite{Boyarsky:2020dzc}. \n\n\nIt is essential to probe all flavor mixings to cover the target parameter space for baryon asymmetry characterized by $U^2$. 
The sensitivity to $U_{e I}$ and $U_{\\mu I}$ of the current and proposed experiments is high, but the $\\tau$ neutrino mixing is poorly constrained. In the ILC beam dump experiment, the relevant region of $U_{\\tau I}$ can be probed because $\\tau$ leptons can be copiously produced from $D_s$ meson decay thanks to the higher beam energy.\n\n\nIn many phenomenological studies of HNLs, one assumes a single HNL (say $N_1$) at low energies because having two HNLs will have little impact on the search sensitivity as long as the two HNLs are degenerate. In the following we deal with one HNL for simplicity, and thus the index $I$ is omitted. Furthermore we turn on only one $U_\\ell$, a mixing with one of the active neutrinos in the flavor eigenstate ($\\nu_e, \\nu_\\mu, \\nu_\\tau$), at a time, which helps us to understand which underlying process matters. Under these assumptions, the phenomenology is well-described by the HNL mass and the single mixing in each benchmark model. Also, these benchmarks are used commonly in the literature, which allows us to compare our results with the previous works. \n\n\n\\subsection{Sensitivities at ILC beam dump}\n\n\nIn this subsection we evaluate the sensitivity of the ILC beam dump experiment to the HNLs. We consider the following two production mechanisms of HNL:\n\\begin{enumerate}[(i)]\n \\item Productions from meson and $\\tau$ lepton decays; \n \\item Direct productions from electrons and muons in EM shower interacting with nucleons.\n\\end{enumerate}\nIn both cases, we consider the HNLs decaying inside the decay volume as the signal. We adopt the decay widths of HNL to the SM particles based on Ref.~\\cite{2018}, and most decay patterns of the HNL would leave multiple tracks, which are distinguishable from the background. Then, we assume zero background and account for all the HNL decays inside the decay volume as signal except for the invisible decay mode, $\\rm HNL\\to \\nu\\nu\\bar\\nu$. 
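For orientation, the probability that a long-lived particle produced in the dump decays inside the decay volume follows from the boosted decay length $d=\beta\gamma c\tau\simeq (p/m)\,c\tau$ and the geometry of Sec.~\ref{sec:dumpsetup}; the sketch below is an illustrative estimate (hypothetical function name), not the signal-rate formula used in the analysis:

```python
import math

def p_decay_in_volume(p_GeV, m_GeV, ctau_m,
                      l_upstream_m=81.0, l_dec_m=50.0):
    """Probability that a particle of momentum p, mass m and proper decay
    length c*tau survives the dump (11 m) plus muon shield (70 m) and then
    decays inside the 50 m decay volume.  d = (p/m)*c*tau = beta*gamma*c*tau."""
    d = (p_GeV / m_GeV) * ctau_m
    return math.exp(-l_upstream_m / d) * (1.0 - math.exp(-l_dec_m / d))

# long-lived limit: the probability approaches l_dec / d
p_long = p_decay_in_volume(50.0, 1.0, 1.0e4)   # d = 5e5 m, so p ~ 1e-4
```

In the long-lifetime limit the signal rate scales as $l_{\rm dec}/d \propto U^2$, while for short lifetimes the exponential suppression of the shield dominates, which produces the characteristic wedge-shaped sensitivity regions.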
One of the possible backgrounds is hadrons produced by high-energy neutrinos, such as $K_{S,L}$ decaying to charged pions. This type of background was estimated with GEANT4 in the SHiP setup~\\cite{Bonivento:2013jag}. They found that the relevant hadrons are always produced at the edge of the muon shield and can be significantly reduced by requiring that the charged tracks be consistent with the HNL kinematics. Note that PHITS does not take into account high-energy neutrino interactions, which are therefore not included in our results. Although cosmic muons may produce beam-unrelated backgrounds, the deep underground location of the detector (100 m from the ground surface), the direction of tracks, and a coincidence time window based on the pulsed electron beam significantly reduce these backgrounds. The detector setup and event selections to reduce the background are beyond the scope of this paper.\n\n\nIn the HNL production process (i), evaluating the position and 4-momentum of the HNL is not straightforward because it involves various intermediate particles with possibly high multiplicities. Therefore, it is suitable to estimate the sensitivity using Monte-Carlo simulation. We include the SM particle decays to an HNL, which are summarized in Appendix~\\ref{sec:Br} and Ref.~\\cite{2018}.\n\n\nFor the other production mechanism (ii), the associated physical processes are tractable without simulation. We therefore evaluate its sensitivity by numerical integration and provide the relevant formulae. To aid quantitative understanding, we also provide an approximate formula for the signal rate of process (i) that can be integrated numerically. The signal rate from the numerical integration will be compared with the one obtained by the Monte-Carlo simulation. \n\n\nIn the following, we describe the details of each method. 
\n\n\n\\subsubsection{Monte-Carlo method} \nWe simulate particle production and transport by PHITS with the help of PYTHIA8 for electron injection as described in Sec.~\\ref{sec:spectrum}. We also modify the decay tables of mesons and the $\\tau$ lepton in PHITS to include decay modes to an HNL as discussed in Appendix~\\ref{sec:Br}. It is essential to perform the Monte-Carlo simulation to track how the system evolves from the incident electron, since the intermediate steps in production (i) are very involved. The result of the Monte-Carlo simulation also serves as a guide for the coarse-grained integration method described in Sec.~\\ref{sec:nummethod}.\n\n\nHowever, a naive Monte-Carlo simulation of the long-lived particle would suffer from technical challenges in obtaining sufficient statistics, in particular in the following cases:\n\\begin{enumerate}[(a)]\n \\item Small cross section of the new physics process or of meson photoproduction.\n \\item Small decay branching ratio from a SM particle to the new particle.\n \\item The decay length of the new particle much shorter than the shield length.\n \\item The decay length of the new particle much longer than the length of the experimental setup.\n\\end{enumerate}\n\n\nIn cases (a) and (b), the processes leading to the signal are so rare that it is difficult to obtain a sufficient number of events. The issue can be solved in the PHITS-based simulation using the {\\it biasing technique}, for which PHITS provides a biasing parameter, e.g., for photoproduction. The biased process occurs more frequently according to the biasing parameter, and an appropriate weight, lower than that of the incident particle, is assigned to the produced particle. We obtain the correct expected value of physical quantities by adding up the weights, rather than adding up the number of particles produced. 
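The logic of such weighted event counting can be illustrated with a minimal Python sketch (our own illustration, not the PHITS implementation; the probability and bias values are placeholders): the process probability is artificially enhanced by a bias factor, and each occurrence is compensated by the weight $1\/b$, so that the weighted sum reproduces the unbiased expectation while the sampled statistics grow by the bias factor.

```python
import random

def expected_yield(p_true, bias, n_events, seed=1):
    """Estimate the mean number of rare-process occurrences per event.

    The process with true probability p_true is sampled with the
    enhanced probability p_true * bias, and each occurrence carries
    the compensating weight 1 / bias.  Summing the weights (not the
    raw counts) gives an unbiased estimate of p_true.
    """
    rng = random.Random(seed)
    total_weight = 0.0
    for _ in range(n_events):
        if rng.random() < p_true * bias:
            total_weight += 1.0 / bias
    return total_weight / n_events

# With a bias of 100 the number of sampled occurrences grows by a
# factor of ~100, while the weighted estimate stays close to p_true.
estimate = expected_yield(1e-4, bias=100.0, n_events=100000)
```

The statistical error of the weighted estimate shrinks accordingly, which is the reason biasing is useful for cases (a) and (b).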
\n\n\nIn case (c), the new particles generated by the simulation predominantly decay inside the shield, so sufficient signal statistics are not easily obtained. This issue can be avoided by using the {\\it importance sampling technique} which PHITS supports. This technique allows us to assign an {\\it importance} to different regions of the simulation geometry. When a particle passes through the boundary between regions of increasing importance, several copies of the particle are created in an event, and their weights are reduced according to the importance ratio between the regions. We divide the shield region along the beam axis and increase the importance values exponentially, so that short-lifetime particles pass through the shield more efficiently without exponential loss of statistics. \n\n\nThe importance technique is also useful in the opposite case (d), as we can set a large importance value in the decay volume to increase the event sampling in the relevant region. In addition, the {\\it forced-decay technique} is more useful when the decay length of the long-lived particle $X$ is much longer than the length of the shield and decay volume. In this technique, we introduce a {\\it maximum decay length}, $l_{\\rm max}$. When the decay length of $X$, $l_X$, is sufficiently longer than the typical length of the experiment, $l_{\\rm exp} (\\sim l_{\\rm dump}+l_{\\rm sh}+l_{\\rm dec})$, the differential decay rate of $X$ with respect to the flight distance $z$ is\n\\begin{align}\n\\left. \\frac{dP}{dz}\\right\\vert_{l_X}\n= \\frac{1}{l_X} {\\rm e}^{-\\frac{z}{l_X}}\n\\simeq\n\\frac{1}{l_X},~~(z\\ll l_X).\n\\end{align}\nWe then rescale the decay probability of $X$ by a factor $b\\equiv l_X\/l_{\\rm max}\\ (>1)$ and the weight of $X$ by $1\/b$. The rescaled probability is\n\\begin{align}\n\\left. b \\frac{dP}{dz}\\right\\vert_{l_X}\n\\simeq\n\\frac{b}{l_X}\n\\simeq\n\\left. \\frac{dP}{dz}\\right\\vert_{l_{\\rm max}}.\n\\end{align}\nThis corresponds to the decay rate when the lifetime of $X$ is multiplied by $1\/b$. 
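A minimal numerical sketch of this reweighting (with illustrative geometry numbers, not the actual ILC setup): decay positions are drawn from the exponential with the shortened decay length l_max, and each sampled decay carries the ratio of the true to the forced decay densities as a weight, which reduces to the constant $1\/b$ in the regime where z is much smaller than l_max and l_X.

```python
import math
import random

def decay_prob_in_window(l_x, l_max, z1, z2, n, seed=2):
    """Importance-sampling estimate of the probability that a particle
    with decay length l_x decays inside the window [z1, z2].

    Decay positions are drawn from the 'forced' exponential with the
    shorter length l_max; each draw carries the weight
    pdf_true(z) / pdf_forced(z) = (l_max / l_x) * exp(z/l_max - z/l_x),
    which tends to the constant 1/b with b = l_x / l_max for z << l_max.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.expovariate(1.0 / l_max)  # forced decay position
        if z1 <= z <= z2:
            total += (l_max / l_x) * math.exp(z / l_max - z / l_x)
    return total / n

# Toy numbers: decay length 10 km, decay volume between 81 m and 131 m.
# The analytic answer is exp(-z1/l_x) - exp(-z2/l_x), i.e. roughly 50 m / 10 km.
p = decay_prob_in_window(l_x=10000.0, l_max=200.0, z1=81.0, z2=131.0, n=200000)
```

Without the forced decay, only about one in two hundred sampled particles would decay inside the window; with it, the window is populated efficiently while the weighted result remains unbiased.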
In summary, when $l_{\\rm max}\\ll l_X$, the forced-decay technique efficiently improves the signal statistics.\n\n\n\\begin{figure*}[t!]\n\\centering\n\\includegraphics[width=0.6\\textwidth]{figs\/Ue.pdf}\n\\caption{\nSensitivity reach of ILC beam dump experiment to HNLs mixing with the electron neutrino in the mass and mixing plane. The gray-shaded regions are excluded by past experiments, and the BBN-labeled region is excluded by the bound $\\tau_{\\rm HNL}>0.02$~s~\\cite{Boyarsky:2020dzc}. Sensitivity reach through $10^{9}$ $Z$-decays at ILC is shown as a blue solid line. See Sec.~\\ref{sec:ZatILC}. For a comparison, dashed lines show the sensitivity reach of the DUNE experiment~\\cite{Coloma:2020lgy} (brown), the FASER2 experiment~\\cite{Kling:2018wct} (purple), the NA62 experiment~\\cite{Beacham:2019nyx} (orange), the SHiP experiment~\\cite{Alekhin:2015byh} (magenta), the MATHUSLA experiment~\\cite{Curtin:2018mvb} (green), and $10^{12}$ $Z$-decays that could be realized at the FCC-ee experiment~(cyan). \\label{fig:sensitivityUe}\n}\n\\end{figure*}\n\n\n\n\\subsubsection{Projected sensitivities}\nWe give the prospects of the ILC beam dump experiment in ILC-1000 and ILC-250 for a 10-year run. We assume that the parameter region with more than three signal events can be probed, corresponding to the expected $95\\%$ C.L. exclusion sensitivity. After independently evaluating the signal events from productions (i) and (ii), we combine them into the sensitivity plots of the ILC.\n\n\n \n\n\\subsection*{$U_e$ dominance}\n\nFig.~\\ref{fig:sensitivityUe} shows the sensitivity of the ILC for the HNL whose mixing to active neutrinos is dominated by the $U_e$ mixing. The region above the red (black) curves is the expected $95\\%$ C.L. exclusion sensitivity that can be probed by ILC-1000 (ILC-250) with 10-year statistics. The gray-shaded regions are constrained by past experiments, which are discussed in Sec.~\\ref{sec:bounds}.\n\n\nLet us explore the results in Fig.~\\ref{fig:sensitivityUe}, highlighting each of the HNL production processes.\n\\begin{enumerate}[(a)]\n\\item Meson and $\\tau$ lepton decays\n\nIn the small $|U_e|^2$ regions where the lifetime of the HNL becomes longer, the decay probability in Eq.~\\eqref{eq:decP} is approximately $d P_{\\rm dec}^{\\rm dump}\/dz \\simeq 1\/l^{\\rm (lab)}_{X}$. 
Depending on the mass of the HNL, the HNLs are produced by the decays of mesons and the $\\tau$ lepton; see Figs.~\\ref{fig:brUeli}, \\ref{fig:brUeD}, \\ref{fig:brUeB}, and \\ref{fig:brUetau} in Appendix~\\ref{sec:Br}.\nBelow the kaon mass $m_{K}\\simeq 0.5~{\\rm GeV}$, the HNLs are mainly produced by kaon decay. For $m_{\\rm HNL}\\leq m_{D,\\tau}-m_e\\sim 2~{\\rm GeV}$, the $D(D_s)$ meson decays dominate the HNL production. In the region $1~{\\rm GeV}\\lesssim m_{\\rm HNL}\\lesssim 2~{\\rm GeV}$ there are many thresholds of production modes, such as $D\\to K+e+{\\rm HNL}$, $D_s\\to \\eta+e+{\\rm HNL}$, $D\\to \\pi+e+{\\rm HNL}$ and $\\tau\\to \\nu_{\\tau}+e+{\\rm HNL}$, see Fig.~\\ref{fig:brUeD}. \n\n\nThe limit obtained by the Monte-Carlo simulation can be checked using the approximate formula Eq.~\\eqref{eq:N1approx}. As explained in Appendix \\ref{app:Com}, the approximate formula agrees well with the results of the Monte-Carlo simulation. For the electron beam energy $E_{\\rm beam}=500~{\\rm GeV}$, the number of events from the leptonic $D$ meson decays $D^\\pm\\to e^\\pm +{\\rm HNL}$ is given by \n\\begin{align}\nN_{\\rm signal}^{\\rm (i)}&\\sim \\left(\\frac{N_{\\rm EOT}}{4\\times 10^{22}}\\right) \\left(\\frac{l_{\\rm dec}}{50~{\\rm m}}\\right)\\left(\\frac{r_{\\rm det}}{3~{\\rm m}}\\right)^2\\left(\\frac{81~{\\rm m}}{l_{\\rm dump}+l_{\\rm sh}}\\right)^2\\left(\\frac{|U_e|^2}{ 10^{-10}}\\right)^2 \\left(\\frac{m_{\\rm HNL}}{1~{\\rm GeV}}\\right)^4,\n\\end{align}\nwhere we used the following approximations:\n\\begin{align}\n {\\rm Br}(D^{\\pm}\\to e^{\\pm}+{\\rm HNL})\\propto |U_e|^2 , \\quad \n {\\rm Br}_{\\rm vis}\\simeq 1, \\quad \n (l^{\\rm lab}_X)^{-1}\\propto |U_e|^2 m_{\\rm HNL}^4\/E^{\\rm lab}_{\\rm HNL} \\ . \n\\end{align}\nNear the kinematic threshold, the number of events decreases rapidly due to the suppression of the branching ratio ${\\rm Br}(D^{\\pm}\\to e^{\\pm}+{\\rm HNL})$. 
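As a sanity check, the scaling formula above can be evaluated directly. The following sketch (our own illustration, not code from the analysis) encodes the quoted parameter dependence, normalized so that the reference values give about one event:

```python
def n_signal_meson(u2, m_hnl_gev, n_eot=4e22, l_dec=50.0, r_det=3.0, l_shield=81.0):
    """Approximate signal count for leptonic D-meson decays in the
    long-lifetime regime (E_beam = 500 GeV benchmark).  All reference
    values are the ones appearing in the formula in the text; the
    overall normalization is set so the reference point gives ~1 event.
    """
    return ((n_eot / 4e22) * (l_dec / 50.0) * (r_det / 3.0) ** 2
            * (81.0 / l_shield) ** 2 * (u2 / 1e-10) ** 2 * m_hnl_gev ** 4)

n_ref = n_signal_meson(1e-10, 1.0)    # reference point: one event
n_heavy = n_signal_meson(1e-10, 2.0)  # doubling the mass gains 2**4 = 16
```

The quartic mass dependence (from the decay length, with the branching ratio and mixing entering quadratically through $|U_e|^2$) is what makes the sensitivity improve quickly toward the kinematic threshold.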
For $m_D-m_e \\lesssim m_{\\rm HNL} \\lesssim m_B-m_e$, the $D$ meson decay channels are closed, and the $B$ meson decay channels become dominant in the HNL production up to around $m_{\\rm HNL}\\sim 3$ GeV.\n\n\n\n\n\\item Direct production\n\nAbove $m_{\\rm HNL}\\sim 3$ GeV, the direct production channel with the incoming electrons and positrons becomes significant. \nIn particular, the production with the high-energy primary electron is essential to extend the mass reach of the $U_e$ dominant scenario.\nIn the small $|U_e|^2$ regions, the decay probability in Eq.~\\eqref{eq:decP} is approximately $d P_{\\rm dec}^{\\rm dump}\/dz \\simeq 1\/l^{\\rm (lab)}_{X}$, and the energy of the produced HNL is approximately $E^{\\rm lab}_{\\rm HNL}\\simeq E_{e^{\\pm}}$. Then, the number of events for $E_{\\rm beam}=500~{\\rm GeV}$ is given by\n\\begin{align}\nN_{\\rm signal}^{\\rm (ii),e^{\\pm}}\\sim \\left(\\frac{N_{\\rm EOT}}{4\\times 10^{22}}\\right)\\left(\\frac{l_{\\rm dec}}{50~{\\rm m}}\\right)\\left(\\frac{r_{\\rm det}}{3~{\\rm m}}\\right)^2\\left(\\frac{81~{\\rm m}}{l_{\\rm dump}+l_{\\rm sh}}\\right)^2\\left(\\frac{|U_e|^2}{10^{-10}}\\right)^2 \\left(\\frac{m_{\\rm HNL}}{10~{\\rm GeV}}\\right)^6,\n\\end{align}\nwhere we use \n\\begin{align}\n &dl_{e^{\\pm}}\/d E_{e^{\\pm}}\\simeq \\left(dl_{e^{\\pm}}\/d E_{e^{\\pm}}\\right)_{\\rm primary}\\propto 1\/E_{e^{\\pm}},\\quad\n d^2\\sigma(e^{\\pm} N\\to {\\rm HNL})\/dx dy \\propto |U_e|^2 E_{e^{\\pm}},\n \\nonumber\\\\\n &(l^{(\\rm lab)}_X)^{-1}\\propto |U_e|^2 m_{\\rm HNL}^6\/E^{\\rm lab}_{\\rm HNL},\\quad\n {\\rm Br}_{\\rm vis}\\simeq 1,\\quad\n y\\lesssim x^{-1}E_{e^{\\pm}} r^2_{\\rm det}(l_{\\rm dump}+l_{\\rm sh})^{-2}.\n\\end{align}\n\n\nIn the larger $|U_e|^2$ regions, where $l^{\\rm (lab)}_{\\rm HNL}\\ll l_{\\rm dump}+l_{\\rm sh}$, the shape of the upper contour lines in Fig.~\\ref{fig:sensitivityUe} is determined by the probability to decay inside the decay volume given in Eq.~\\eqref{eq:decP}. 
\nThe contour is characterized by the exponent of the decay probability\n\\begin{align}\n\\frac{m_{\\rm HNL}\\Gamma_{\\rm HNL}}{E^{\\rm lab}_{\\rm HNL}}(l_{\\rm dump}+l_{\\rm sh})\\sim {\\rm const}.\\label{eq:diupp}\n\\end{align} \nCombining $E^{\\rm lab}_{\\rm HNL}\\simeq E_{\\rm beam}$ and $\\Gamma_{\\rm HNL}\\propto |U_e|^2\\cdot m_{\\rm HNL}^5$, Eq.~\\eqref{eq:diupp} becomes $|U_e|^2\\propto m_{\\rm HNL}^{-6}$ , which is consistent with the parameter dependence of Fig.~\\ref{fig:sensitivityUe}. \n\n\\end{enumerate}\n\n\n\n\\begin{figure*}[t!]\n\\centering\n\\includegraphics[width=0.6\\textwidth]{figs\/Umu.pdf}\n\\caption{\nSensitivity reach of ILC beam dump experiment to HNLs mixing with the mu-neutrino in the mass and mixing\nplane. For the description of this plot, see the caption of Fig.~\\ref{fig:sensitivityUe}. \\label{fig:sensitivityUmu}\n}\n\\vspace{20pt}\n\\includegraphics[width=0.6\\textwidth]{figs\/Utau.pdf}\n\\caption{\nSensitivity reach of ILC beam dump experiment to HNLs mixing with the tau-neutrino in the mass and mixing\nplane. For the description of this plot, see the caption of Fig.~\\ref{fig:sensitivityUe}. \n\\label{fig:sensitivityUtau}\n}\n\\end{figure*}\n\n\n\\subsection*{$U_\\mu$ dominance}\nFig.~\\ref{fig:sensitivityUmu} summarizes the prospects of the ILC beam dump experiment in ILC-1000 and ILC-250 for the HNL mixing dominantly with $\\nu_{\\mu}$.\nThe red (black) curves show the expected sensitivity for $95\\%$ C.L. exclusion of ILC-1000 (ILC-250) with 10-year statistics given by the meson and $\\tau$ lepton decays, and the DIS process for incoming muons from the EM shower. 
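The contour relation of Eq.~\\eqref{eq:diupp}, and its analogues for the other mixings, can be checked with a short numerical example (illustrative constants only): holding the exponent m_HNL * Gamma_HNL / E_lab * L fixed, with Gamma proportional to U^2 m^5, forces U^2 to scale as m^-6.

```python
def u2_on_contour(m_hnl, const=1.0, e_lab=500.0, length=81.0):
    """Solve m * Gamma / (E * 1) * L = const for U^2, using
    Gamma = U^2 * m**5; all proportionality constants are absorbed
    into `const`, so only the scaling with m_hnl is meaningful."""
    return const * e_lab / (m_hnl ** 6 * length)

# Doubling the mass along the contour reduces U^2 by 2**6 = 64.
ratio = u2_on_contour(1.0) / u2_on_contour(2.0)
```

This reproduces the slope of the upper edges of the sensitivity contours in the figures.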
\nWe provide approximate formulae for the number of signal events for each HNL production mode for ease of understanding.\n\n\n\\begin{enumerate}[(a)]\n\\item Meson and $\\tau$ lepton decays\n\nThe production from meson and $\\tau$ lepton decays of the HNL that mixes dominantly with $\\nu_{\\mu}$ can be calculated in parallel with the $U_e$ dominant case except for minor threshold differences.\nThe mass dependence of the dominant HNL production process is the same as that of the $U_e$ dominant case with $e$ replaced by $\\mu$.\n\n\nAs shown in Figs.~\\ref{fig:brUmuD} and \\ref{fig:brUmuB}, for $m_D-m_{\\mu} \\lesssim m_{\\rm HNL} \\lesssim m_B-m_{\\mu}$, \nthe $B$ meson decay gives a significant contribution to the HNL production.\nThe zigzag curves near $m_{\\rm HNL}\\sim 3~{\\rm GeV}$ correspond to the threshold of $B\\to D+{\\mu}+{\\rm HNL}$.\nAbove $m_{\\rm HNL}\\simeq 3~{\\rm GeV}$, leptonic $B$ meson decays such as $B^{\\pm}\\to \\mu^{\\pm} +{\\rm HNL}$ dominate the HNL production.\n\n\n\\item Direct production\n\nAs shown in Fig.~\\ref{fig:exp}, incident real photons produce muon pairs in the beam dump through the electromagnetic interaction with nuclei or nucleons~\\cite{Sakaki:2020cux}, and the HNL can be generated by the DIS process in the muon shield.\nThe muon pair production cross section is about $(m_{\\mu}\/m_e)^2\\simeq 10^{5}$ times smaller than that of electron pair production. 
\n\n\nIn the small $|U_{\\mu}|^2$ regions, the projected sensitivity scales with the ratio of the numbers of muons and electrons in the \nshower, and the projected sensitivity via the DIS process is less significant compared with the \n$U_e$ dominant case.\nOn the other hand, in the large $|U_{\\mu}|^2$ regions, the number of signal events is mostly determined by the muon shield length and less sensitive to the number of muon pairs, because the lifetime is shorter.\nThe shape of the upper side of the contours in Fig.~\\ref{fig:sensitivityUmu} is determined by the exponential factor in Eq.~\\eqref{eq:decP}, which is characterized by \n\\begin{align}\n \\frac{m_{\\rm HNL} \\Gamma_{\\rm HNL}}{E^{\\rm lab}_{\\rm HNL}}(l_{\\rm sh}-\\delta_{\\mu})\\sim {\\rm const}.\\label{eq:larUmu}\n\\end{align}\nCombining $E^{\\rm lab}_{\\rm HNL}\\simeq E_{\\rm beam}$ and $\\Gamma_{\\rm HNL}\\propto |U_{\\mu}|^2\\cdot m^5_{\\rm HNL}$, Eq.~\\eqref{eq:larUmu} becomes $|U_{\\mu}|^2\\propto m^{-6}_{\\rm HNL}$. \nSince the DIS process with a muon happens in the muon shield, the shorter distance from the HNL production point to the decay volume enhances the acceptance compared with the DIS process in the beam dump.\n \n\\end{enumerate}\n\n\n\n\\subsection*{$U_\\tau$ dominance}\n\nThe projected sensitivities in the $U_\\tau$ dominant scenario are shown in Fig.~\\ref{fig:sensitivityUtau} with a notation similar to the $U_{e,\\mu}$ dominant cases.\nThe red (black) curves show the expected $95\\%$ C.L. exclusion sensitivity with 10-year statistics in ILC-1000 (ILC-250) from $B$ and $D_s$ meson and $\\tau$ lepton decay production. 
The direct (DIS) production is absent in this case due to the very limited flux of $\\tau$ leptons.\n\n\nAs shown in Figs.~\\ref{fig:production}, \\ref{fig:brUtauD}, \\ref{fig:brUtauB}, and \\ref{fig:brUtautau}, below the threshold of $m_{D_s}-m_{\\tau}\\simeq 0.2~{\\rm GeV}$,\nthe main HNL production channel is the $D_s\\to \\tau+{\\rm HNL}$ decay.\nFor $0.2~{\\rm GeV}\\lesssim m_{\\rm HNL}\\leq m_{\\tau}-m_e\\simeq 1.8~{\\rm GeV}$, the production of the HNL is dominated by $\\tau$ lepton decay.\nMost of the $\\tau$ leptons are produced by $D_s$ meson decays, and the $\\tau$ pair production in the electromagnetic showers is subdominant, which is therefore dropped in the HNL study. \nThe branching ratio of $D_s\\to \\tau +\\nu_{\\tau}$ is $5.48\\%$, and the energy dependence of the $\\tau$ lepton spectrum is determined by that of the $D_s$ meson spectrum in Fig.~\\ref{fig:production}.\nThe main decay channels of $\\tau$ are $\\tau\\to \\rho +{\\rm HNL}$, $\\tau\\to \\mu+\\nu+{\\rm HNL}$, $\\tau\\to e+\\nu+{\\rm HNL}$, $\\tau\\to \\pi+{\\rm HNL}$, $\\tau\\to K^{\\ast}+{\\rm HNL}$, and $\\tau\\to K+{\\rm HNL}$, see Fig.~\\ref{fig:brUtautau}.\n\nIn the small $|U_{\\tau}|^2$ regions, where the lifetime of the HNL is longer, the decay probability in Eq.~\\eqref{eq:decP} is approximated by $d P_{\\rm dec}^{\\rm dump} \/dz\\simeq 1\/l^{\\rm (lab)}_X$.\nFor the electron beam energy $E_{\\rm beam}=500~{\\rm GeV}$, the number of events from the $\\tau^{\\pm}\\to \\rho^{\\pm} +{\\rm HNL}$ decay is approximately given by\n\\begin{align}\n N_{\\rm signal}^{\\rm (i)}&\\sim \\left(\\frac{N_{\\rm EOT}}{4\\times 10^{22}}\\right)\\left(\\frac{l_{\\rm dec}}{50~{\\rm m}}\\right)\\left(\\frac{r_{\\rm det}}{3~{\\rm m}}\\right)^2 \\left(\\frac{81~{\\rm m}}{l_{\\rm dump}+l_{\\rm sh}}\\right)^2 \\left(\\frac{|U_{\\tau}|^2}{5\\times 10^{-9}}\\right)^2 \\left(\\frac{m_{\\rm HNL}}{0.5~{\\rm GeV}}\\right)^4, \n\\end{align}\nwhere we use\n\\begin{align}\n {\\rm Br}(\\tau^{\\pm}\\to \\rho^{\\pm}+{\\rm HNL})\\propto 
|U_{\\tau}|^2,\\quad\n {\\rm Br}_{\\rm vis}\\simeq 1,\\quad\n (l^{\\rm lab}_X)^{-1}\\propto |U_{\\tau}|^2 \\cdot m^4_{\\rm HNL} \/E^{\\rm lab}_{\\rm HNL}.\n\\end{align}\nFor $1.8~{\\rm GeV}\\lesssim m_{\\rm HNL}$, the $B$ meson decays dominate the HNL production, see Figs.~\\ref{fig:brUtauD}, \\ref{fig:brUtauB}, and \\ref{fig:brUtautau}.\n\n\n\n\\subsection{Sensitivities from $Z$~decays at ILC and FCC-ee}\\label{sec:ZatILC}\n\nFuture $e^+ e^-$ colliders can serve as $Z$ boson factories, e.g., the Giga-$Z$ program of the ILC and the Tera-$Z$ program of the CERN Future $e^+ e^-$ Circular Collider, dubbed FCC-ee. The HNLs can leave a clear signal with displaced tracks at $e^+ e^-$ colliders once they are produced via $Z\\to N \\bar\\nu, \\bar N \\nu$. This type of signal was examined at the DELPHI detector of the Large Electron-Positron (LEP) collider \\cite{DELPHI:1996qcc}. We briefly study the future sensitivities of the HNL search using the displaced tracks from the $Z$ decay, which is complementary to the sensitivities of the ILC beam dump experiment. \n\n\nThe proposed ILC detector \\cite{Behnke:2013lya} has three layers of vertex detectors (VTX) from 1.5~cm to 19.5~cm surrounding the beam, a Time Projection Chamber (TPC) as the main tracking detector extending from 33~cm to 170~cm, and two layers of silicon strip detectors (SIT) arranged between the TPC and the VTX. We assume the ILC detector is sensitive to the HNL signal with negligible background if the HNL decays between 3~cm and 170~cm from the collision point. We adopt the radius-dependent track-detection efficiency $\\epsilon_{\\rm trk}$ linearly decreasing from $r = 3$~cm ($\\epsilon_{\\rm trk}=100\\%$) to $r = 170$~cm ($\\epsilon_{\\rm trk}=0\\%$)~\\cite{Bertholet:2021hjl}. Then, we assume $10^9$ $Z$ decays and zero background after requiring the displaced tracks, and consider all the HNL decays except for the fully invisible mode, $N\\to \\nu\\nu\\bar\\nu$. 
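The fiducial-decay probability implied by this linearly falling efficiency can be sketched as follows (a simple midpoint integration; the radii are the ones quoted above, the decay lengths are illustrative):

```python
import math

def displaced_acceptance(decay_len, r1=0.03, r2=1.70, steps=10000):
    """Probability that a particle with lab-frame decay length decay_len
    (meters) decays between r1 and r2, weighted by a tracking efficiency
    falling linearly from 1 at r1 to 0 at r2 (midpoint integration of
    the exponential decay profile)."""
    dr = (r2 - r1) / steps
    acc = 0.0
    for i in range(steps):
        r = r1 + (i + 0.5) * dr
        eff = (r2 - r) / (r2 - r1)
        acc += (1.0 / decay_len) * math.exp(-r / decay_len) * eff * dr
    return acc

# A decay length comparable to the tracker size gives a sizable
# acceptance, while a much longer decay length is strongly penalized.
acc_mid = displaced_acceptance(1.0)
acc_long = displaced_acceptance(100.0)
```

Convolving such an acceptance with the $Z\to N\bar\nu$ rate and the HNL boost gives the expected number of displaced-track signals.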
The $95\\%$~CL future sensitivity corresponds to three HNL decays with multiple displaced tracks in the fiducial volume, and the result is included in Figs.~\\ref{fig:sensitivityUe},~\\ref{fig:sensitivityUmu},~and~\\ref{fig:sensitivityUtau}.\n\n \nSimilarly, we study the future projection at the FCC-ee. For simplicity, we take the same detector setup as the ILC and rescale the statistics assuming $10^{12}$ $Z$ decays. This is also shown in Figs.~\\ref{fig:sensitivityUe},~\\ref{fig:sensitivityUmu},~and~\\ref{fig:sensitivityUtau}. Ref.~\\cite{Blondel:2014bra} also studied the projected sensitivities at the FCC-ee, but our estimate differs in the detector coverage and the signal efficiency. \n\n\n\\subsection{Existing constraints on HNL and projected sensitivities at other experiments}\\label{sec:bounds}\n\nThe presence of HNLs can have drastic consequences for early-universe observables and has been explored extensively in the literature. The HNL is thermally produced in the early universe, and its late decay can disrupt standard Big Bang Nucleosynthesis. The typical constraint is $\\tau_{\\rm HNL}\\lesssim 1~{\\rm s}$, see \\cite{Abazajian:2012ys} and references therein. In addition, Ref.~\\cite{Boyarsky:2020dzc} shows that pions from hadronic HNL decays alter the decoupling of neutrons from the standard BBN prediction, which affects the ratio of neutrons to protons. The upper limit on the HNL lifetime is obtained from the $^4$He abundance as $\\tau_{\\rm HNL}\\lesssim 0.02$~s (see also other recent studies \\cite{Alonso-Alvarez:2022uxp, Bondarenko:2021cpc}).\n\n\nThe long-lived HNLs are also extensively searched for in accelerator-based experiments. The HNLs may be produced directly in the beam-target collisions, or in the decays of secondary particles such as $B$ and $D$ mesons and $\\tau$ leptons. 
The searches are carried out either by detecting the displaced decay of the HNL or the significant missing momentum of the events. In the following, we briefly list the existing constraints included in Figs.~\\ref{fig:sensitivityUe}, \\ref{fig:sensitivityUmu}, and \\ref{fig:sensitivityUtau}. \n\n\n\\begin{itemize}\n\\item CHARM proton beam dump experiment --- \nSearches for HNLs produced at a proton beam dump operating at the 400 GeV CERN SPS are sensitive to all the scenarios we consider~\\cite{CHARM:1983ayi, CHARM:1985nku}. Recently, the result for the $U_\\tau$ dominant scenario was reanalyzed, and the bound was improved for $ 0.3~{\\rm GeV}\\lesssim m_{\\rm HNL}\\lesssim 1.5~{\\rm GeV}$~\\cite{Boiarska:2021yho}. \n\n\n\\item Other beam dump\/neutrino beam experiments --- \nIn addition to the CHARM experiment, HNL searches were conducted at a variety of fixed target and neutrino experiments, such as NuTeV~\\cite{NuTeV:1999kej}, BEBC~\\cite{WA66:1985mfx}, and PS191~\\cite{Bernardi:1987ek}. They are mostly sensitive to the $U_\\mu$ mixing, while PS191 placed a strong bound on the $U_e$ dominant scenario as well.\n \n\n\\item Long-baseline neutrino experiment --- \nAt the T2K experiment, the near detector ND280 was utilized to search for the HNL~\\cite{T2K:2019jwa}. The detection principle is similar to the one at PS191. \nWe include the bounds of the ``single-channel'' analysis, where only one of $U_\\ell$ is non-zero at a time. In the future, this type of search can be performed at the near detector of the DUNE experiment~\\cite{Coloma:2020lgy}.\n\n\n\\item Super-Kamiokande --- \nThe HNLs can be copiously produced from kaon and pion decays in atmospheric showers and decay in the Super-Kamiokande detector. In~\\cite{Coloma:2019htx}, the authors analyzed the Super-Kamiokande data~\\cite{Super-Kamiokande:2017yvm} and obtained bounds on the HNL in all the scenarios. The bound on $U_{\\tau}$ is as strong as the T2K one. 
\n \n\n\\item Pion decay --- \nPrecision measurements of charged pions can test $\\pi^+ \\to \\ell^+ +\\rm HNL$ with no subsequent HNL decay, which is sensitive to the $U_e$ and $U_{\\mu}$ mixings \\cite{PIENU:2017wbj, PIENU:2019usb}. It gives a unique bound in the low mass region $m_{\\rm HNL}\\sim 0.1~{\\rm GeV}$ of the $U_{e}$ dominant scenario. \n\n\n\\item Kaon decay --- \nSimilar to the searches with the charged pion, HNL searches have been conducted via $K^+\\to \\ell^+ +\\rm HNL(invisible)$ at NA62~\\cite{NA62:2020mcv, NA62:2021bji}, E949~\\cite{E949:2014gsn}, and KEK~\\cite{Yamazaki:1984sj}. Typically they give the best limit near the threshold $m_{\\rm HNL}\\lesssim m_{K^+}-m_{\\ell^+}$. \n\n\n\\item $B$ meson and $\\tau$ lepton decay --- \nHeavier HNLs can be searched for in $B$ meson decays. The HNL production via $B\\to X \\ell +{\\rm HNL}\\ (\\ell=e,\\mu)$ with a subsequent displaced decay of ${\\rm HNL}\\to \\ell \\pi$ was examined using the Belle data in~\\cite{Belle:2013ytx}. The $B$-factories also produce $\\tau$ leptons copiously, and the search for long-lived HNLs from $\\tau$ lepton decays at Belle and Babar placed a bound on $U_{\\tau}$~\\cite{Dib:2019tuj}.\n\n\n\\item $Z$ boson decay --- \nThe search for $Z$ bosons decaying into HNLs using the DELPHI data at the LEP collider constrains all the scenarios. It sets the strongest limit in the mass range around $m_{\\rm HNL}\\sim {\\cal O}(10)~\\rm GeV$ \\cite{DELPHI:1996qcc}. See Sec.~\\ref{sec:ZatILC} for more details and the projection at the future $e^+e^-$ colliders. \n\n\n\\item Large Hadron Collider --- \nAt the ATLAS and CMS detectors of the LHC, the $W$ boson mediated process efficiently produces the HNL. Depending on the final states and decay length, different analyses were performed at ATLAS~\\cite{ATLAS:2019kpx, ATLAS:2022atq} and CMS~\\cite{CMS:2018jxx, CMS:2018iaf, CMS:2022fut}. 
In particular, the searches for the displaced decays of the HNL~\\cite{CMS:2022fut, ATLAS:2022atq} exclude a large parameter space. \n \n\n\\end{itemize}\n\n\nAmong studies of future HNL searches, we include the projected sensitivities at FASER2~\\cite{FASER:2018eoc}, NA62~\\cite{Beacham:2019nyx}, DUNE~\\cite{Coloma:2020lgy}, SHiP~\\cite{Alekhin:2015byh}, and MATHUSLA~\\cite{Curtin:2018mvb} in Figs.~\\ref{fig:sensitivityUe}, \\ref{fig:sensitivityUmu}, and \\ref{fig:sensitivityUtau} to compare with the sensitivities of the ILC beam dump experiment. Other experiments, including LHCb~\\cite{Antusch:2017hhu, Cvetic:2019shl}, Codex-b~\\cite{Aielli:2019ivi}, Belle~II~\\cite{Dib:2019tuj}, Dark-Quest~\\cite{Batell:2020vqn}, IceCube~\\cite{Coloma:2017ppo}, and the ATLAS and CMS experiments~\\cite{ATLAS:2022atq, CMS:2022fut} at the HL-LHC, can also search for the HNLs beyond the current experimental constraints. \n\nIn Figs.~\\ref{fig:sensitivityUe}, \\ref{fig:sensitivityUmu}, and \\ref{fig:sensitivityUtau}, the estimated sensitivity contours of the ILC are better than those of the other proposed searches. We stress that the contours are calculated assuming zero background events, but it is very encouraging to explore the possibility further. The ILC sensitivity is very close to that of SHiP in the low mass region ($m_{\\rm HNL}<2$~GeV). Above 2~GeV, the sensitivity of the ILC beam dump experiment is better than SHiP because the initial electron energy is high enough to produce $B$ mesons at a higher rate. Thanks to the higher HNL mass, the HNL decay products are more clearly separated from the background, so the neutrino background would be less critical in this region. \n\n\n\n\n\\section{Discussions}\\label{sec:discussion}\n\nThe ILC beam dump experiment is a seamless extension of the ILC program, which provides a unique opportunity to test feebly interacting light particles. 
The electron beam energy of the ILC is much higher than those of current and past electron beam dump experiments, and therefore not only light mesons but also heavier SM particles, such as heavy flavor mesons and the $\\tau$ lepton, can be produced in the beam dump. Moreover, a large number of electrons on target is expected thanks to the high-intensity beam. In many BSM scenarios, new particles can be efficiently produced by decays of the SM particles. Therefore, it is essential to estimate the production rate of the SM particles at the ILC beam dump experiment. In this paper, we evaluate, for the first time, the spectra of light and heavy mesons and the $\\tau$ lepton at their decay using PHITS and PYTHIA8. PHITS is responsible for producing and transporting light SM particles, and we incorporate the heavy meson production calculated by PYTHIA8 into PHITS. The main results are in Fig.~\\ref{fig:decay} for the light mesons and in Fig.~\\ref{fig:production} for the heavy mesons and the $\\tau$ lepton. \n\n\nThe spectra of SM particles can be used to estimate the yield of BSM particles from SM particle decay. As a demonstration, we studied the projected sensitivity to the heavy neutral leptons at the ILC beam dump experiment. Figs.~\\ref{fig:sensitivityUe}, \\ref{fig:sensitivityUmu}, and \\ref{fig:sensitivityUtau} show that the ILC would explore HNLs with heavier masses and smaller mixings and cover the large parameter space motivated by the baryon asymmetry of the Universe. We use the Monte-Carlo simulation to evaluate the HNL signal from the SM particle decay, and the results are well reproduced by the coarse-grained integration method convolving the SM particle spectra given in Figs.~\\ref{fig:decay} and \\ref{fig:production}. \n\n\nApart from the SM particle decays, the HNLs can be produced through various processes at the ILC. The EM shower can directly create an HNL via DIS, and we account for this production by the coarse-grained integration method. 
Moreover, the copious $Z$ decays expected at the main detector of the ILC would enable a different search method for the mildly long-lived HNLs.\n\n\nOther than the HNLs, many motivated long-lived particles can be predominantly produced by SM particle decay. For example, dark scalars such as the QCD axion and the Higgs portal scalar can be efficiently produced via flavor-changing decays of mesons (especially $B\\to K X$ and $K\\to \\pi X$, where $X$ denotes a long-lived particle). The heavy mesons produced from the high energy electron beam allow us to probe long-lived particles heavier than several GeV. Another advantage of the abundant heavy mesons is that the experiment is sensitive to a new particle that preferably couples to the third generation fermions ($b$ quark or $\\tau$ lepton). One can estimate the sensitivity to these particles at the ILC beam dump experiment based on our results in Figs.~\\ref{fig:decay} and \\ref{fig:production} and approximate formulae like Eq.~\\eqref{eq:sig}. \n\n\n\n\n\\section*{Note added}\\label{sec:note}\nWhile completing this work, we became aware of \\cite{Giffin:2022}, which considers related topics.\n\n\\section*{Acknowledgement}\\label{sec:ackn}\nMN is supported by the Grant-in-Aid for Scientific Research on Innovative Areas (16H06492) and JSPS KAKENHI 22K03629. YS is supported by JSPS KAKENHI JP21H05466. KT is supported in part by the US Department of Energy grant DE-SC0010102 and JSPS KAKENHI 21H01086. 
\n\n\\section{Introduction}\nRecent observational studies of nearby star-forming regions with the {\\em Herschel Space Observatory} have convincingly shown that stars are born in self-gravitating filaments \n \\citep[e.g., ][]{Andre+2010,Arzoumanian+2011}. \nIn addition, the resultant mass function of star-forming dense cores is now explained by the mass distribution along filaments \\citep{Inutsuka2001,Andre+2014}. \nThis simplifies the question of the initial conditions of star formation, but poses the question of how such filamentary molecular clouds are created in the interstellar medium (ISM) prior to the star formation process. \nRecent high-resolution magneto-hydrodynamical simulations of two-fluid dynamics with cooling\/heating and thermal conduction by \\citet{InoueInutsuka2008,InoueInutsuka2009} have shown that the formation of molecular clouds requires multiple episodes of supersonic compression \\citep[see also][]{Heitsch+2009}. \n\\citet{InoueInutsuka2012} further investigated the formation of molecular clouds in the magnetized ISM \nand revealed the formation of a magnetized molecular cloud by the accretion of HI clouds created through thermal instability. \nSince the mean density of the initial multi-phase HI medium is an order of magnitude larger than the typical warm neutral medium (WNM) density, this formation timescale is shorter than that of molecular cloud formation solely through the accumulation of diffuse WNM \n\\citep[see, e.g.,][for the cases of WNM flows]{KoyamaInutsuka2002,Hennebelle+2008,HeitschHartmann2008,Banerjee+2009,Vazquez-Semadeni+2011}. 
\nThe resulting timescale of molecular cloud formation of $\\gtrsim$10 Myrs is consistent with the evolutionary timescale of molecular clouds in the LMC \\citep{Kawamura+2009}.\n\n\nWe have done numerical simulations of an additional compression of already-formed but low-mass molecular clouds, and found interesting features associated with realistic evolution.\nFigure 1 shows a snapshot of the face-on view of the layer created by compressing a non-uniform molecular cloud with a shock wave propagating at 10 km\/s. The direction of the shock compression is perpendicular to the layer. The magnetic field lines are mainly in the dense layer of compressed gas. \nThe strength of the initial magnetic field prior to the shock compression is $20\\mu$Gauss and that of the dense region created after compression is about $200\\mu$Gauss on average. \nMany dense filaments are created with axes perpendicular to the mean magnetic field lines. \nWe can also see many faint filamentary structures that mimic ``striations'' observed in the Taurus Dark Cloud and are almost parallel to the mean magnetic field lines \\citep[][]{Goldsmith+2008}. \nIn our simulations, these faint filaments appear to be feeding gas onto dense filaments (similar to what is observed for local clouds by \\citet[e.g.,][]{Sugitani+2011,Palmeirim+2013,Kirk+2013}). \nOnce the line-mass of a dense filament exceeds the critical value ($2C_{\\rm s}^2\/G$), star formation is expected to start \\citep{InutsukaMiyama1992,InutsukaMiyama1997,Andre+2010}. \nThis threshold of line-mass for star formation is equivalent to the column density threshold of molecular gas, $116 M_\\sun {\\rm pc}^{-2}$ \\citep[][]{Lada+2010}, if the widths of filaments are all close to 0.1pc \\citep[][]{Arzoumanian+2011,Andre+2014}.
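As a back-of-envelope sketch (ours, not part of the original analysis), the critical line-mass $2C_{\rm s}^2/G$ can be evaluated numerically for an assumed 10 K molecular gas with mean molecular weight 2.33:

```python
# Back-of-envelope check of the critical line-mass 2*C_s^2/G for an
# isothermal filament, assuming T = 10 K and mean molecular weight
# mu = 2.33 (our assumed fiducial values, not from the text).
G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
k_B = 1.380649e-23   # Boltzmann constant [J/K]
m_H = 1.6735e-27     # hydrogen mass [kg]
M_sun = 1.989e30     # solar mass [kg]
pc = 3.086e16        # parsec [m]

T, mu = 10.0, 2.33
c_s2 = k_B * T / (mu * m_H)      # isothermal sound speed squared [m^2/s^2]
m_line_crit = 2.0 * c_s2 / G     # critical line-mass [kg/m]
m_line_sunpc = m_line_crit * pc / M_sun

print(f"c_s ~ {c_s2**0.5:.0f} m/s, critical line-mass ~ {m_line_sunpc:.0f} Msun/pc")
```

For a filament width of 0.1 pc, this line-mass corresponds to a surface density of order $10^2\,M_\sun\,{\rm pc}^{-2}$, the same order as the $116 M_\sun\,{\rm pc}^{-2}$ column density threshold quoted above.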
\n\nAlthough further analysis is required for quantitative comparison between the simulation results and the observed structures, Figure 1 clearly shows that the structures created by multiple shock wave passages do match the characteristic structures observed in filamentary molecular clouds. \nThis motivates us to describe a basic scenario of molecular cloud formation.\nThe present paper is focused on the implications of this identification of the mechanism of molecular cloud formation.\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\hsize]{inutsuka_fig1.eps}\n\\caption{Face-on column density view of a shock-compressed dense layer of molecular clouds. \nWe set up low-mass molecular clouds by the compression of two-phase HI clouds. \nThis snapshot shows the result of an additional compression of low-mass molecular clouds by a shock wave propagating at 10 km\/s. \nThe magnetic field lines are mainly in the dense sheet of compressed gas. \nThe color scale for column density (in cm$^{-2}$) is shown on top. \nThe mean magnetic field is in the plane of the layer and its direction is shown by white bars.\nNote the formation of dense magnetized filaments whose axes are almost perpendicular to the mean magnetic field. \nFainter ``striation''-like filaments, which are almost perpendicular to the dense filaments, can also be seen. \n\\label{fig1}}\n\\end{figure}\n\n\\section{A Scenario of Cloud Formation Driven by Expanding Bubbles}\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\hsize]{inutsuka_fig2.eps}\n\\caption{\nA schematic picture of the sequential formation of molecular clouds by multiple compressions by overlapping dense shells driven by expanding bubbles. \nThe thick red circles correspond to magnetized dense multi-phase ISM where cold turbulent HI clouds are embedded in WNM.
\nMolecular clouds can be formed only in the limited regions where the compressional direction is almost parallel to the local mean magnetic field lines, or in regions experiencing an excessive number of compressions. \nAn additional compression of a molecular cloud tends to create multiple filamentary molecular clouds. \nOnce the line-mass of a filament exceeds the critical value, even in a less massive molecular cloud, star formation starts. \nIn general, star formation in a cloud accelerates with the growth in total mass of the cloud. \nGiant molecular clouds collide with one another at limited frequency. \nThis produces very unstable molecular gas and may trigger very active star formation. \n\\label{fig2}}\n\\end{figure}\n\nHI observations of our Galaxy reveal many shell-like structures near the galactic plane \\citep[e.g.,][]{HartmannBurton1997,Taylor+2003}. \nWe identify repeated interactions of expanding shock waves as a basic mechanism of molecular cloud formation, and we depict the overall scenario of cloud formation in our Galaxy as a schematic picture in Figure \\ref{fig2}. \nIn this picture, red circles correspond to the remnants of shock waves due to old and slow supernova remnants or expanding HII regions. \nCold HI clouds embedded in WNM are almost ubiquitously found in the shells of these remnants \n\\citep[e.g.,][]{HartmannBurton1997,Taylor+2003,2007ApJ...664..363H}. \nMolecular clouds are expected to be formed in limited regions where the mean magnetic field is parallel to the direction of shock wave propagation, or in regions that experience an excessive number of shock-wave sweepings. \nTherefore, molecular clouds can be found only in limited regions in shells. \nNote that the typical timescale of each shock wave is of order 1Myr, but the formation of molecular clouds requires many Myrs. \nSome bubbles become invisible as supernova remnants or HII regions many millions of years after their birth.
\nTherefore, this schematic picture corresponds to a ``very long exposure snapshot'' of the real structure of the ISM. \nEach molecular cloud may have a random velocity depending on its location in the most recent bubble that interacts with the cloud. \nInterestingly, this multi-generation picture of the evolution of molecular clouds seems to agree with the observational findings of \\citet{Dawson+2011a,Dawson+2011b,Dawson+2015}, who investigated the transition of atomic gas to molecular gas in the walls of Galactic supershells.\nIn the case of the LMC, \\citet{Dawson+2013} concluded that only $12\\sim25$\\% of the molecular mass can apparently be attributed to formation due to presently visible shell activity. \nThis may not be inconsistent with our scenario, since Dawson et al. (2013) only considered HI supergiant shells, whereas molecular clouds in our model can form at the interfaces of much smaller bubbles and shells (which observationally are more difficult to identify and characterize in the HI data), and the timescale for cloud-forming shells to become invisible is much shorter than the growth timescale of molecular mass.\n\nA typical velocity of the shock wave due to an expanding ionization-dissociation front is 10 km\/s, as shown by Hosokawa \\& Inutsuka (2006a), since it is essentially determined by the sound speed of ionized gas ($\\sim 10^4$K). \nIwasaki et al. (2011b) have shown that if a molecular cloud is swept up by a shock wave of $\\sim$10 km\/s, it moves with a velocity slightly less than the shock speed. \nThus, the mean velocity of each molecular cloud should be somewhat smaller than that of the most recent shock wave. \nWhen the shock velocity of a supernova remnant is much higher than 10 km\/s, the interaction would result in the destruction of molecular clouds.
\nTherefore, the cloud-to-cloud velocity dispersion of molecular clouds should be comparable to 10 km\/s.\nAccording to this acquisition mechanism of random velocity, the velocity of a cloud is not expected to depend strongly on its mass. \nIn other words, the random velocities of molecular clouds of different masses are not expected to be in equipartition of energy ($M \\delta v^2\/2=$const.). \nObservations by Stark \\& Lee (2005) have shown that the random velocities of low-mass molecular clouds ($< 2 \\times 10^5 M\\sun$) vary only by a factor of a few, with no dependence on cloud mass. \nThese observations are therefore more consistent with our picture than a model in which molecular clouds acquire their relative velocities via mutual gravitational interaction.\n\n\nUnder limited circumstances, the created molecular clouds collide with one another. \nThis produces highly gravitationally unstable molecular gas and may trigger very active star formation \\citep[e.g.,][]{Fukui+2014}. \nInoue \\& Fukui (2013) have done magnetohydrodynamical simulations of a cloud-cloud collision and argue that it may lead to active formation of massive stars \\citep[see also][]{Vaidya+2013}. \nThis mode of massive star formation is not, however, a prerequisite of our model.\n\n\n\\subsection{Formation Timescale of Molecular Clouds}\nLet us first model the growth of molecular clouds. \n\\cite{InoueInutsuka2012} have shown that we need multiple episodes of compression of HI clouds to create molecular clouds. \nAccording to the standard picture of supernova-regulated ISM dynamics (e.g., McKee \\& Ostriker 1977), the typical timescale between consecutive compressions by supernova remnants is about 1Myr. \nThe total creation rate of expanding bubbles is larger than the occurrence rate of supernova explosions, since bubbles can also be created by radiation from massive stars less massive than supernova progenitors.
\nTherefore, the actual timescale of compressions in the ISM, $T_{\\rm exp}$, should be somewhat smaller than 1Myr if it is averaged over the Galactic thin disk. \nObviously, the compression timescale is smaller in the spiral arms and larger in inter-arm regions, since star formation activity is concentrated in the spiral arms.\nThus, we have to consider the time evolution of cloud mass for much longer than 1 Myr. \n\nLet us estimate the typical timescale of molecular cloud growth. \n\\cite{InoueInutsuka2012} have shown that the angle between the converging flow direction and the average direction of the magnetic field should be less than a certain angle for molecular cloud formation. \nAlthough Inoue \\& Inutsuka (2009) show that this critical angle depends on the flow speed, we adopt a critical angle of 15 degrees ($=0.26$ radian) in the following discussion for simplicity. \nThis value is not so different from the angle ($\\sim 20$ degrees) for possible compression in the simpler one-dimensional model by \\cite{HennebellePerault2000}. \nFor simplicity, we assume that the magnetic field is uniform in the region we consider and that the direction of compression is isotropic. \nThe solid angle spanned by the possible directions of compression resulting in the formation of a molecular cloud is $0.26^2 \\pi$. \nThe anti-parallel directions are also possible.
\nTherefore, the probability, $p$, of successfully forming a molecular cloud in a single compression can be estimated by the ratio of the solid angle over which compressions lead to molecular cloud formation to the solid angle of the whole sphere, i.e., $p=2 \\cdot 0.26^2 \\pi\/(4\\pi)=0.034$.\nNote that Figure 1 is not the snapshot just after the birth of molecular clouds, but the result of one additional compression of the molecular clouds in which the direction of compression is perpendicular to the mean direction of the magnetic field lines.\nWe also emphasize that since the formation of a GMC requires many episodes of compression, our model does not predict a strong correlation between the present-day magnetic field direction and the orientation of the GMC.\n\nAfter each compression, a cloud may slightly expand because of the reduced pressure of the ambient medium, which may result in the loss of the diffuse components of cloud mass. \nObservationally, the average column densities of molecular clouds do not seem to change very much and always appear to correspond to a visual extinction of several magnitudes.\nThis means that the mass of a cloud is proportional to its cross-section. \nSince the compressional formation of molecular material is expected to be proportional to the cross-section of the pre-existing cloud, we can model the rate of increase of molecular cloud mass as \n\\begin{equation}\n \\frac{dM}{dt} = \\frac{M}{T_{\\rm f}} , \\label{eq:Eqformation}\n\\end{equation}\nwhere $T_{\\rm f}$ denotes the formation timescale. \nThis equation shows that the mass of each molecular cloud grows exponentially with a long timescale $T_{\\rm f}$ if we average in time over a few Myr.\nIf self-gravity increases the accumulation rate of mass into the molecular cloud, the right-hand side of Equation (1) may have a stronger dependence on mass.
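The probability estimate above is simple solid-angle arithmetic; the following sketch (ours) reproduces it, using the small-angle approximation $\pi\theta^2$ for the cone solid angle, as in the text:

```python
import math

# Probability of a single compression forming a molecular cloud:
# compressions within ~15 degrees (0.26 rad) of the mean field direction
# succeed, and the anti-parallel cone works equally well.
theta_crit = 0.26                             # critical angle [rad]
solid_angle = 2.0 * theta_crit**2 * math.pi   # two small cones, ~ pi*theta^2 each
p = solid_angle / (4.0 * math.pi)             # ratio to the full sphere
print(f"p = {p:.3f}")                         # -> p = 0.034
```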
\nFor example, the so-called ``gravitational focusing factor'' increases the cross section for coalescence by a factor proportional to the square of the mass in the large-mass limit. \nThis will produce a significantly steeper slope of the cloud mass function (see Section 4). \nA linear dependence on mass in our formulation implicitly assumes that the self-gravity of the whole molecular cloud does not significantly affect the cloud growth.\n\nBased on our investigation of molecular cloud formation described above,\nwe estimate the formation timescale as follows:\n\\begin{equation}\n T_{\\rm f} = \\frac{1}{p} \\cdot T_{\\rm exp}. \\label{eq:Tformation}\n\\end{equation}\nThe average value in spiral arm regions would be $T_{\\rm f} \\sim 10$ Myr, but it can be a factor of a few longer in the inter-arm regions. \nIn reality, many repeated compressions with large angles between the flow direction and mean magnetic field lines gradually increase the dense parts of clouds, and hence contribute to the formation of molecular clouds over a long timescale. \nThis may mean that the actual value of $T_{\\rm f}$ is somewhat smaller than the estimate of Equation (2). \n\nFukui et al. (2009) have shown that clouds with masses of a few $\\times 10^5 M_\\sun$ gain mass at a rate of 0.05 $M_\\sun$\/yr over a timescale of 10 Myr. \nThis means that the mass of a cloud in their sample doubles in $\\sim 10$Myr, which is consistent with our choice of $T_{\\rm f}=10$Myr. \nNote, however, that Fukui et al. (2009) argued that the atomic gas accretion is driven by the self-gravity of a GMC, which is not included in the present modelling, where we assume that gas accretion is essentially driven by the interaction with expanding bubbles. \nIf the gravitational force is significant for the HI accretion onto a GMC, it possibly enhances the growth rate of a molecular cloud (i.e., smaller $T_{\\rm f}$). \nFurther quantitative studies of the effect of self-gravity on the accretion of gas onto a GMC remain to be done.
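As a rough consistency check (ours, using the fiducial numbers quoted above), Equation (1) with $T_{\rm f} = 10$ Myr can be compared against the Fukui et al. (2009) accretion rate:

```python
import math

# Consistency check (ours, not the authors') of dM/dt = M / T_f against the
# accretion rate ~0.05 Msun/yr reported by Fukui et al. (2009) for clouds
# with masses of a few x 10^5 Msun.
T_f = 1.0e7           # formation timescale [yr], fiducial 10 Myr
M_cloud = 3.0e5       # cloud mass [Msun], "a few" x 10^5
dMdt = M_cloud / T_f  # growth rate [Msun/yr]
print(f"dM/dt ~ {dMdt:.2f} Msun/yr")   # ~0.03, same order as the observed 0.05

# Equation (1) integrates to M(t) = M0 * exp(t/T_f): the e-folding time is
# T_f, and the mass-doubling time is ln(2)*T_f.
t_double = math.log(2.0) * T_f / 1.0e6  # [Myr]
print(f"doubling time ~ {t_double:.1f} Myr")  # ~7 Myr, of order the observed ~10 Myr
```

If self-gravity enhances the accretion, $T_{\rm f}$ would be correspondingly smaller, as noted above.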
\nIn the present paper we neglect this effect and do not distinguish self-gravitationally bound and pressure-confined clouds, for simplicity.\n\nIn regions where the number density of molecular clouds is very large, cloud-cloud collisions may contribute to the increase of cloud mass, and hence, may also affect the mass function of molecular clouds. \nThe detailed modelling of cloud-cloud collisions will be given in our forthcoming paper. \nHere we ignore the contribution of cloud-cloud collisions to the change of the mass function and simply use a constant value of $T_{\\rm f}$.\n\n\n\\section{Quenching of Star Formation in Molecular Clouds}\nNext we consider the destruction of molecular clouds to determine how star formation is quenched. \nDale et al. (2012, 2013) have done extensive three-dimensional simulations of star cluster formation with ionization or stellar wind feedback and shown that the effects of photo-ionization and stellar winds are limited in quenching star formation in massive molecular clouds \\citep[see also][]{Walch+2012}. \n\\citet{Diaz-Miller+1998} calculated the steady-state structures of HII regions and pointed out that the photodissociation of hydrogen molecules due to FUV photons is much more important than photoionization due to UV photons for the destruction of molecular clouds. \n\\cite{2005ApJ...623..917H,2006ApJ...646..240H,2006ApJ...648L.131H,2007ApJ...664..363H} actually included photodissociation in detailed radiation hydrodynamical calculations of an expanding HII region in a non-magnetized ISM (by resolving photodissociative line radiation), found the limited effect of ionizing radiation, and essentially confirmed the critical importance of FUV radiation for the ambient molecular cloud.
\n\n\\subsection{Expanding HII Regions in Magnetized ISM}\nIn the case of non-magnetized molecular gas of density $10^2{\\rm cm}^{-3}$ around a massive star more massive than $\\sim 20 M_\\sun$, a large amount of gas ($\\sim 3\\times 10^4 M_\\sun$) is photodissociated and re-processed into molecular material in the dense shell around the expanding HII region within 5 Myrs \\citep{2006ApJ...648L.131H}. \nAccording to the series of papers by \\citet{InoueInutsuka2008,InoueInutsuka2009}, however, \nthe inclusion of the magnetic field is expected to reduce the density of the swept-up shell substantially. \nTherefore the magnetic field should significantly affect the actual structure of the compressed shell and the subsequent star formation process \\citep[cf. the 3D simulation by][]{Arthur+2011}. \n\nTo quantitatively analyze the consequence, we have done numerical magnetohydrodynamics simulations of an expanding bubble due to UV and FUV photons from the massive star. \nThe details of the method are the same as described in \n\\cite{2006ApJ...646..240H,2006ApJ...648L.131H} except for the inclusion of the magnetic field. \nSince the calculation assumes spherical symmetry, we include only a $13\\mu$Gauss magnetic field that is transverse to the radial direction, as a simplification. \nThe magnetic pressure due to the transverse field is accounted for in the Riemann solver as in \\cite{Sano+1999} \\citep[see][]{SuzukiInutsuka2006,IwasakiInutsuka2011}. \n\nThe upper panel of Figure \\ref{fig3} shows the resultant masses of ionized gas in the HII region and atomic gas in the photodissociation region transformed from cold molecular gas around an expanding HII region at the termination time, as a function of the mass of the central star ($M_*$). \nAlso plotted is the warm molecular gas in and outside the compressed shell around the HII region. \nThe temperature of the warm molecular gas exceeds 50K.
\nIts column density is smaller than $10^{21} {\\rm cm}^{-2}$, and hence, dust shielding of CO molecules is not effective and all the CO molecules are photo-dissociated.\nThis warm molecular gas without CO (so-called CO-dark H$_2$) is not expected to be gravitationally bound unless the mass of the parental molecular cloud is exceptionally large. \nTherefore subsequent star formation in this warm molecular gas is not expected. \nThe uppermost black solid line denotes the total mass ($M_{\\rm g}(M_*)$) of these non-star-forming gases. \n\nThe lower panel of Figure 3 shows the gas mass in the upper panel multiplied by $M_*^{-1.3}$ (blue dashed curve), $M_*^{-1.5}$ (black solid curve), and $M_*^{-1.7}$ (red dotted curve). \nThe areas under these curves are proportional to the mass affected by massive stars whose mass distribution follows \n$dn_*\/d(\\log M_*) \\propto M_*^{-1.3}, M_*^{-1.5}$, and $M_*^{-1.7}$, respectively. \nWe can see that the shape of the curve does not vary much, and stars with mass $20\\sim30 M_\\sun$ always dominate the disruption of the molecular cloud. \n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\hsize]{inutsuka_fig3.eps}\n\\caption{\nUpper Panel: \nMasses in various phases transformed from cold molecular gas around an expanding HII region at the termination time as a function of the mass of the central star. \nThe red dot-dashed line, the blue dotted line, and the purple dashed line correspond to \nionized hydrogen in the HII region, \nneutral hydrogen in the photodissociation region, \nand warm molecular hydrogen gas without CO, respectively. \nThe uppermost black solid line denotes the total mass of these non-star-forming gases. \nLower Panel: \nThe IMF-weighted mass of non-star-forming gas transformed from molecular gas by a massive star of mass $M_*$. \nThe area under the curve is proportional to the mass affected by massive stars whose mass function follows $dn\/d(\\log M_*) \\propto M_*^{-\\beta +1}$.
\nThe peak of the curve determines the inverse of the star formation efficiency $\\epsilon_{\\rm SF}$ (see the explanation below Equation \\ref{eq:SFE}). \n\\label{fig3}\n}\n\\end{figure}\n\n\nOur calculations include ionization, photodissociation, and magnetohydrodynamical shock waves, but are restricted to a spherical geometry. \nTherefore we should investigate the dispersal process in more realistic three-dimensional simulations. \nHowever, the inclusion of photo-dissociation requires the numerical calculation of FUV line transfer and hence remains extremely difficult in multi-dimensional simulations.\n\n\\subsection{Star Formation Efficiency}\nHereafter we assume that the power-law exponent of the initial mass function of stars at large mass ($M_* > 8 M_\\sun$) is $-\\beta$ with $2<\\beta<3$ \n($ dn_*\/dM_* \\propto M_*^{-\\beta} $). \nNow we calculate the total mass of non-star-forming gas disrupted by newborn stars in a cloud. \nOne might think that the total mass of non-star-forming gas in the stellar system can be calculated by \n$\n M_{\\rm g,total} = \\int_0^\\infty M_{\\rm g}(M_*) (dn_*\/dM_*) dM_*\n$.\nHowever, this estimation is meaningful only in the case where the number of massive stars in a cloud is very large, \n$\n \\int_{20 M_\\sun}^\\infty (dn_*\/dM_*) dM_* \\gg 1. \n$ \nIn reality, the number of massive stars in a molecular cloud of intermediate mass is quite small, and even a single massive star can destroy the whole parental molecular cloud. \nThus, to analyze the quenching of star formation in molecular clouds, it is more appropriate to determine the most likely mass of the star that is responsible for the destruction of molecular clouds.
\n\nSince we assume that the large-mass side of the stellar initial mass function can be approximated by a power law with exponent $-\\beta$, we can express the mass function in logarithmic mass of massive stars created in a cloud as \n\\begin{equation}\n \\frac{dn_*}{d\\log M_*} = M_* \\frac{dn_*}{dM_*} \n = N_* \\left( \\frac{M_*}{M_\\sun} \\right)^{-\\beta+1}\n ~~{\\rm for}~M_* > 8 M_\\sun .\n \\label{eq:IMF} \n\\end{equation}\nNote that the pre-factor $N_*$ is defined for the mass distribution of stars in the individual cloud we are analyzing. \nFor convenience, we define the effective minimum mass ($M_{\\rm *m}$) of a star in the hypothetical power-law mass function by the following formula for the total mass in the cloud, \n\\begin{eqnarray}\n M_{\\rm *,total} \n& = & \n \\int_{0} ^{\\infty} M_* \\frac{dn_*}{dM_*} dM_* \\nonumber \\\\ \n& \\equiv & \n \\int_{M_{\\rm *m}}^{\\infty} \n N_* \\left( \\frac{M_*}{M_\\sun} \\right)^{-\\beta+1} dM_*\n = \\left( \\frac{N_* M_\\sun}{\\beta-2} \\right)\n \\left( \\frac{M_\\sun}{M_{\\rm *m}}\\right)^{\\beta-2} .\n \\label{eq:Matotal}\n\\end{eqnarray}\n\nSuppose that a single massive star more massive than $M_{\\rm *d}$ is created in the molecular cloud. \nThis condition can be expressed as \n\\begin{equation}\n 1 =\\int_{M_{\\rm *d}}^{\\infty} \\frac{dn_*}{dM_*} dM_* = \n \\left( \\frac{N_*}{\\beta-1} \\right)\n \\left( \\frac{M_\\sun}{M_{\\rm *d}} \\right)^{\\beta-1} \n ~~{\\rm for}~M_* > 8 M_\\sun .\n \\label{eq:MdNa}\n\\end{equation}\nThis equation relates $M_{\\rm *d}$ and $N_*$.
\nWe can express the total mass of stars in the cloud as a function of $M_{\\rm *d}$ by eliminating $N_*$ from equations (\\ref{eq:Matotal}) and (\\ref{eq:MdNa}), \n\\begin{equation}\n M_{\\rm *,total} = M_\\sun\n \\left( \\frac{\\beta-1 }{\\beta-2 } \\right)\n \\left( \\frac{M_\\sun }{M_{\\rm *m}} \\right)^{\\beta-2} \n \\left( \\frac{M_{\\rm *d}}{M_\\sun } \\right)^{\\beta-1} \n ~~{\\rm for}~M_* > 8 M_\\sun .\n \\label{eq:MtMd}\n\\end{equation}\nThus, $M_{\\rm *,total} \\propto M_{\\rm *d}^{\\beta-1}$ for $M_* > 8 M_\\sun$.\nNow we suppose that a molecular cloud of mass $M_{\\rm cl}$ is eventually destroyed by UV and FUV photons from a star of mass $M_{\\rm *d}$ born in the cloud, and hence, star formation in the cloud is quenched. \nThe condition for this to occur can be written as \n$\n M_{\\rm cl} = M_{\\rm g}(M_{\\rm *d})\n$\n and \n$\n \\epsilon_{\\rm SF} M_{\\rm cl} = M_{\\rm *,total}, \n$\nwhere $\\epsilon_{\\rm SF}$ is the star formation efficiency (the ratio of the total mass of stars to the mass of the parental cloud).\n\nIf $\\epsilon_{\\rm SF}$ is smaller than the value that would satisfy the above condition, the cloud destruction is not sufficient and star formation continues using the remaining cold molecular material in the cloud, which in turn increases $\\epsilon_{\\rm SF}$.
\nThus, we expect that the actual evolution of a molecular cloud finally satisfies the above condition when star formation is eventually quenched.\nThis means that the star formation efficiency should be given by \n\\begin{equation}\n \\epsilon_{\\rm SF} = \\frac{M_{\\rm *,total}}{M_{\\rm g}(M_{\\rm *d})} \n= \\left( \\frac{\\beta-1 }{\\beta-2 } \\right)\n \\left( \\frac{M_\\sun }{M_{\\rm *m}} \\right)^{\\beta-2}\n \\left( \\frac{M_{\\rm *d}}{M_\\sun } \\right)^{\\beta-1}\n \\left( \\frac{M_{\\rm g} }{M_\\sun } \\right)^{-1}.\n \\label{eq:SFE} \n\\end{equation}\nThe preceding argument suggests that the star formation efficiency should take the minimum value of the right-hand side of this equation, \ni.e., the maximum value of $M_{\\rm g} M_{\\rm *}^{-\\beta+1}$, where $M_{\\rm g}$ is a function of $M_*$. \nFigure 3 shows that the maximum value of $M_{\\rm g} M_{\\rm *}^{-\\beta+1}$ is attained at $M_* \\sim 30M_\\sun$, where $M_{\\rm g}$ is about $10^5 M_\\sun$. \nTherefore we can conclude that once a massive star of $M_{\\rm *d} = 30 \\pm 10 M_\\sun$ is created, star formation is eventually quenched in a cloud of mass $M_{\\rm g}(M_{\\rm *d}) \\sim 10^5 M_\\sun$. \nThis corresponds to $\\epsilon_{\\rm SF} \\sim 10^{-2}$ if we adopt $\\beta=2.5$ and $M_{\\rm *m} = 0.1M_\\sun$.\nThe dependence of $\\epsilon_{\\rm SF}$ on $M_{\\rm *m}$ is quite weak ($\\beta-2 \\sim 0.5$), as shown in Equation (7). \nIt is also not sensitive to $\\beta$ in the limited range ($2.3 < \\beta < 2.7$), as shown in Figure 3. \nThus, we think that this value of $\\epsilon_{\\rm SF} \\sim 10^{-2}$ is robust in typical star-forming regions in our Galaxy. \nThis argument may explain the low star formation efficiency in molecular clouds observationally found many decades ago \\citep[e.g.,][]{ZuckermanEvans1974}. \n\nNote that the sharp increase of $M_{\\rm g} M_*^{-\\beta+1}$ is due to the sharp increase of UV\/FUV luminosity at $M_* \\sim 20 M_\\sun$.
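Equation (\ref{eq:SFE}) is easy to evaluate numerically; this sketch (ours) plugs in the fiducial parameters quoted in the text ($\beta=2.5$, $M_{\rm *m}=0.1M_\sun$, $M_{\rm *d}=30M_\sun$, $M_{\rm g}=10^5M_\sun$, all in solar units):

```python
# Numerical evaluation of the star formation efficiency, Eq. (7),
# with the fiducial parameters quoted in the text (masses in Msun).
beta = 2.5        # IMF slope, dn/dM ~ M^-beta
M_m = 0.1         # effective minimum stellar mass [Msun]
M_d = 30.0        # mass of the cloud-destroying star [Msun]
M_g = 1.0e5       # non-star-forming gas mass dispersed by that star [Msun]

eps_SF = ((beta - 1.0) / (beta - 2.0)) * (1.0 / M_m)**(beta - 2.0) \
         * M_d**(beta - 1.0) / M_g
print(f"eps_SF ~ {eps_SF:.3f}")   # -> eps_SF ~ 0.016, i.e. of order 10^-2
```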
\nTherefore a star much less massive than $20M_\\sun$ is not expected to be the main disrupter of the molecular cloud. \nFor example, the upper panel of Figure 3 shows that a $10M_\\sun$ star can disperse only $\\sim 10^3 M_\\sun$ of the surrounding molecular material. \nHowever, a $10^3 M_\\sun$ molecular cloud is not likely to produce a $10M_\\sun$ star unless $\\epsilon_{\\rm SF} \\sim 1$, as can be seen in equation (\\ref{eq:MtMd}), and hence, the destruction of a $10^3 M_\\sun$ cloud by a $10M_\\sun$ star is not expected, in general. \n\n\nIf the initial mass function does not depend on the parent cloud mass, as we assume here, the star formation efficiency is not expected to depend on mass for a cloud larger than $\\sim 10^5 M_\\sun$. \nThis can be understood as follows. \nThe number of UV\/FUV photons is proportional to the number of massive stars, which increases with the mass of the cloud. \nHowever, the required number of photons also increases with the mass of the cloud. \nFor example, a $10^6 M_\\sun$ cloud will produce 10 stars with mass $> 30M_\\sun$ \nif $\\epsilon_{\\rm SF} = 10^{-2}$, $\\beta=2.5$, and $M_{\\rm *m} = 0.1M_\\sun$.\nThen, these 10 massive stars will destroy $10 \\times 10^5 M_\\sun$ of molecular gas. \nTherefore star formation in the whole molecular cloud is quenched when $\\epsilon_{\\rm SF} = 10^{-2}$. \nThus we can conclude that the star formation efficiency does not depend on the mass of the cloud if the shape of the initial mass function does not depend on the mass of the cloud. \n\nNow we can estimate the timescale for the destruction of a molecular cloud.
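Before doing so, we note that the star-count scaling above can be checked by inverting Equations (\ref{eq:Matotal}) and (\ref{eq:MdNa}); the following sketch (ours) uses the same fiducial parameters, with all masses in solar units:

```python
# Expected number of stars above M_d in a cloud of mass M_cl, using the
# power-law IMF of Eqs. (3)-(5) with eps_SF = 1e-2, beta = 2.5, M_m = 0.1 Msun.
beta, M_m, M_d = 2.5, 0.1, 30.0
eps_SF, M_cl = 1.0e-2, 1.0e6          # fiducial values from the text

M_star_total = eps_SF * M_cl          # total stellar mass [Msun]
# Invert Eq. (4) for the normalization N_*:
N_norm = M_star_total * (beta - 2.0) / (1.0 / M_m)**(beta - 2.0)
# Eq. (5): expected number of stars more massive than M_d
n_massive = (N_norm / (beta - 1.0)) * (1.0 / M_d)**(beta - 1.0)
print(f"N(>30 Msun) ~ {n_massive:.1f}")   # ~6, of order the 10 quoted in the text
```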
\nOur calculation of the expanding ionization\/dissociation front in the magnetized molecular cloud shows that a $\\sim 10^5 M_\\sun$ molecular cloud can be destroyed within 4 Myrs.\nThe actual destruction timescale, $T_{\\rm d}$, should be the sum of the timescale of formation of a massive star and the expansion timescale of the HII region, i.e., $T_{\\rm d} \\approx T_* + 4$Myr, where $T_*$ denotes the timescale for a massive star to form once the cloud is created. \nAfter one cycle of molecular cloud destruction over a timescale $T_{\\rm d}$, only a fraction, $\\epsilon_{\\rm SF}$, of the molecular gas is transformed into stars. \nTherefore, the timescale to completely transform a molecular cloud into stars is $T_{\\rm d}\/\\epsilon_{\\rm SF} \\sim 1.4$Gyr for $T_* \\sim 10$Myrs and $T_{\\rm d} \\sim 14$Myrs. \nThis may explain the so-called ``depletion timescale'' of molecular clouds that manifests observationally in the Schmidt-Kennicutt Law \\citep[e.g.,][]{Bigiel+2011,Lada+2012,KennicuttEvans2012}. \n\n\\section{Mass Function of Molecular Clouds}\nIn order to describe the time evolution of the mass function of molecular clouds, $n_{\\rm cl}(M)=dN_{\\rm cl}\/dM$, over a timescale much longer than 1 Myr,\n we adopt coarse-graining of the short-timescale growth and destruction of clouds, and describe the continuity equation of molecular clouds in mass space as \n\\begin{equation}\n \\PD{n_{\\rm cl}}{t} + \\PD{}{M} \\left( n_{\\rm cl} \\frac{dM}{dt} \\right)\n = - \\frac{n_{\\rm cl}}{T_{\\rm d}} , \n\\end{equation}\nwhere \n$n_{\\rm cl}(dM\/dt)$ denotes the flux of the mass function in mass space, and \n$dM\/dt$ describes the growth rate of the molecular cloud as given in Equation (1). \nThe sink term on the right-hand side of this equation corresponds to the destruction rate of molecular clouds in the sense of an ensemble average.
\nIf dynamical effects such as shear and tidal stresses contribute to the cloud destruction \\citep[e.g.,][]{Koda+2009,DobbsPringle2013}, we should modify $T_{\\rm d}$ in this equation. \nSince the left-hand side of this equation should be regarded as the ensemble average, the term $1\/T_{\\rm d}$ represents the sum of the destruction rates of all possible processes. \nHere we simply assume that the resultant $T_{\\rm d}$ is not very different from our estimate of destruction due to radiation feedback from massive stars. \n\nAccording to our series of studies on the formation of molecular clouds, the molecular cloud as a whole is not necessarily created as a self-gravitationally bound object. \nTherefore, our modelling of the mass function of molecular clouds is not restricted to self-gravitationally bound clouds.\nHowever, our modelling is not intended to describe spatially extended diffuse molecular clouds much larger than the typical size of the bubbles ($\\lesssim100$pc).\n\nA steady-state solution of the above equation is \n\\begin{equation}\n n_{\\rm cl}(M) = \\frac{N_0}{M_{\\sun}} \\left( \\frac{M}{M_{\\sun}} \\right)^{-\\alpha}, \n\\end{equation}\nwhere $N_0$ is a constant and \n\\begin{equation}\n \\alpha = 1 + \\frac{T_{\\rm f}}{T_{\\rm d}} . \\label{eq:alpha}\n\\end{equation}\nFor conditions typical of spiral arm regions in our Galaxy, we expect $T_* \\sim T_{\\rm f}$ and thus $T_{\\rm f} \\la T_{\\rm d}$, which corresponds to $1 < \\alpha \\lesssim 2$.\nFor example, $T_{\\rm f}=T_*=10$Myrs corresponds to $\\alpha \\approx 1.7$, which agrees well with observations \\citep{Solomon+1987,Kramer+1998,Heyer+2001,RomanDuval+2010}. \n\nHowever, in a quiescent region away from spiral arms or in the outer disk, in which there is a very limited amount of dense material, $T_{\\rm f}$ is expected to be at least a factor of a few larger than in spiral arms.
\nIn contrast, $T_{\\rm d}$ is not necessarily expected to be large even in such an environment, since $T_{\\rm d}$ is the average timescale of cloud destruction after the cloud is created, and thus does not necessarily depend on the growth timescale of the cloud. \nTherefore, we expect that $T_{\\rm d}$ can be smaller than $T_{\\rm f}$ in such an environment, which may produce $\\alpha = 1+ T_{\\rm f}\/T_{\\rm d} > 2$. \nThis tendency is actually observed in the outer disk of the Milky Way, the LMC, M33, and M51 \\citep{Rosolowsky2005,Wong+2011,Gratier+2012,Hughes+2010,Colombo+2014}. \n\n\n\n\nThe total number of molecular clouds is calculated as \n\\begin{eqnarray}\n N_{\\rm total} &=& \\int_{M_1}^{M_2} n_{\\rm cl}(M) dM \n = \\frac{N_0}{\\alpha-1} \n \\left[ \n \\left( \\frac{M_{\\sun}}{M_1} \\right)^{\\alpha-1}\n - \\left( \\frac{M_{\\sun}}{M_2} \\right)^{\\alpha-1}\n \\right] \n \\nonumber\\\\ \n &\\sim& \\frac{N_0}{\\alpha-1} \n \\left( \\frac{M_{\\sun}}{M_1} \\right)^{\\alpha-1}, \n\\end{eqnarray}\nwhere we used $M_2 \\gg M_1$ in the final estimate. \nThe total number of clouds is essentially determined by the lower limit of the mass of the cloud. \nLikewise, the total mass of the molecular clouds is \n\\begin{eqnarray}\n M_{\\rm total} &=& \\int_{M_1}^{M_2} M n_{\\rm cl}(M) dM \n = \\frac{N_0 M_{\\sun}}{2-\\alpha} \n \\left[ \n \\left( \\frac{M_2}{M_{\\sun}} \\right)^{2-\\alpha}\n - \\left( \\frac{M_1}{M_{\\sun}} \\right)^{2-\\alpha}\n \\right]\n \\nonumber\\\\ \n &\\sim& \\frac{N_0 M_{\\sun}}{2-\\alpha} \n \\left( \\frac{M_2}{M_{\\sun}} \\right)^{2-\\alpha}, \n\\end{eqnarray}\nwhere we used $M_2 \\gg M_1$ and $\\alpha < 2$ in the final estimate. \nThus, the total mass of molecular clouds is essentially determined by the upper limit of the mass of the cloud.
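These closed-form expressions can be cross-checked numerically. The following sketch is our illustration (not from the paper): it compares both integrals against direct numerical quadrature for the sample values $\alpha=1.5$, $M_1=10^2$, $M_2=10^6$ (masses in units of $M_\sun$, with $N_0=1$), and then fixes $N_0$ from an assumed $M_{\rm total}\sim 10^9\,M_\sun$, the Galactic example discussed in the text.

```python
# Cross-check of N_total and M_total (our illustration, not the paper's code).
# Sample values: alpha = 1.5, M1 = 1e2, M2 = 1e6 (in units of Msun), N0 = 1.
alpha, M1, M2 = 1.5, 1.0e2, 1.0e6

def n_cl(M):                        # steady-state mass function with N0 = 1
    return M ** (-alpha)

def integrate(f, a, b, n=100000):   # trapezoid rule on a logarithmic grid
    xs = [a * (b / a) ** (i / n) for i in range(n + 1)]
    return sum(0.5 * (f(xs[i]) + f(xs[i + 1])) * (xs[i + 1] - xs[i])
               for i in range(n))

N_closed = (M1 ** (1 - alpha) - M2 ** (1 - alpha)) / (alpha - 1)
M_closed = (M2 ** (2 - alpha) - M1 ** (2 - alpha)) / (2 - alpha)
print(abs(integrate(n_cl, M1, M2) - N_closed) / N_closed)                  # ~0
print(abs(integrate(lambda M: M * n_cl(M), M1, M2) - M_closed) / M_closed) # ~0

# Fixing N0 from an assumed M_total ~ 1e9 Msun gives N_total and M_ave:
N0 = 1.0e9 / M_closed
print(N0 * N_closed, 1.0e9 / (N0 * N_closed))   # ~1e5 clouds, ~1e4 Msun each
```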
\nLet us assume $M_{\\rm total} \\sim 10^9 M_{\\sun}$ in the Galaxy; then our simple choice of $M_1=10^2 M_{\\sun}$, $M_2=10^6 M_{\\sun}$, and $\\alpha=1.5$ corresponds to $N_{\\rm total}\\sim10^5$, and the average mass of molecular clouds is $M_{\\rm ave} \\equiv M_{\\rm total}\/N_{\\rm total} \\sim 10^4 M_{\\sun}$. \nNote that these numbers depend on our choice of $M_1$. \n\n\\section{Summary}\nIn general, dense molecular clouds cannot be created in shock waves propagating in magnetized WNM without cold HI clouds. \nIn this paper we identify repeated interactions of shock waves in the dense ISM as a basic mechanism for creating filamentary molecular clouds, which are ubiquitously observed in the nearby ISM \\citep{Andre+2014}. \nThis suggests an expanding-bubble-dominated picture of the formation of molecular clouds in our Galaxy, which enables us to envision an overall picture of the statistics of molecular clouds and the resultant star formation. \nTogether with the findings of our previous work, our conclusions are summarized as follows: \n\\begin{enumerate}\n\\item \nTurbulent cold HI clouds embedded in WNM can be readily created in the expanding shells of HII regions or in the very late phase of supernova remnants. \nIn contrast, the formation of molecular clouds in a magnetized ISM needs many compression events. \nOnce low-mass molecular clouds are formed, an additional compression creates many filamentary molecular clouds. \nOne compression corresponds, on average, to a timescale of order 1 Myr in our Galaxy.\nThe timescale of cloud formation is a few times 10Myrs. \n\\item \nSince the galactic thin disk is occupied by many bubbles, molecular clouds are formed in the overlapping regions of (old and new) bubbles. \nHowever, since the average lifetime of each bubble is shorter than the timescale of cloud formation, it is difficult to observationally identify the multiple bubbles that created the molecular clouds.
\n\\item \nThe velocity dispersion of molecular clouds should originate in the expansion velocities of bubbles. \nThis is estimated to be $\\lesssim$10km\/s and should not strongly depend on the mass of the molecular cloud. \n\\item \nTo describe the growth of molecular cloud mass we can temporally smooth out the evolution over timescales larger than $\\sim$ 1Myr. \nThe resultant mass (smoothed over time) of each molecular cloud is an almost exponentially increasing function of time. \n\n\\item \nThe destruction of a molecular cloud is mainly due to UV\/FUV radiation from stars more massive than $20M_\\sun$. \nThe probability of cloud destruction is not a sensitive function of the mass of molecular clouds. \nIf the shape of the initial mass function does not vary much with the mass of parent molecular clouds, cloud destruction by $30 \\pm 10 M_\\sun$ stars results in a star formation efficiency of order 1\\%. \nThis property explains the observed constancy of the gas depletion timescale ($1$--$2$ Gyr) of giant molecular clouds in the solar neighborhood and possibly in some external galaxies where the normalizations for the Schmidt-Kennicutt Law obtained by high-density tracers are shown to be similar.\n\n\\item \nThe steady state of the evolution of the cloud mass function corresponds to a power law with exponent $-n$ in the range $1 < n \\lesssim 2$ in the spiral arm regions of our Galaxy. \nHowever, a larger value of the exponent, such as $n > 2$, is possible in the inter-arm regions. \n\\end{enumerate}\n\nNote that the first and third conclusions have been partly shown in our previous investigations \\citep{InoueInutsuka2009}. \nIn addition, we can suggest the following implications from these conclusions: \n\\begin{enumerate}\n\\setcounter{enumi}{6}\n\\item \nStar formation starts, even in small molecular clouds, once the line-mass of an internal self-gravitating filament exceeds the critical value \\citep{Andre+2010}.
\nOur analysis suggests that the mass of an individual molecular cloud increases roughly exponentially over $\\sim 10$ Myrs.\nAccording to the formation mechanism driven by repeated compressions, we expect that the total mass in filaments of sufficiently high line-mass increases with the number of compressional events. \nThis means that the mass of star-forming dense gas increases with the mass of the molecular cloud, and the star formation should accelerate over many million years. \nThis conjecture may provide a clue to understanding the star formation histories found by \\citet{PallaStahler2010} in seven individual molecular clouds such as Taurus-Auriga and $\\rho$ Ophiuchi. \n\\item \nMolecular clouds may collide over a timescale of a few times 10 Myrs, depending on the relative locations in adjacent (almost invisible) bubbles. \nSuch a molecular cloud collision may result in active star formation in a giant molecular cloud \\citep[e.g.,][]{Fukui+2014}. \n\\end{enumerate}\n\nThese implications should be investigated in more detail by numerical simulations. \nOur radiation magnetohydrodynamics simulations of an expanding bubble due to UV and FUV photons from a massive star show that most of the material in molecular clouds becomes warm molecular gas without CO molecules. \nAlthough we have to investigate the fate of the CO-dark gas in more detail, we expect that its total mass can be very large and may account for the dark gas indicated by various observations \\citep[e.g.,][]{Grenier+2005}. \n\nThere are many reports that the Kennicutt-Schmidt correlation varies with some properties of galaxies \n\\citep[e.g.,][]{Saintonge+2011,Saintonge+2012,Meidt+2013,Davis+2014}. \nIn addition, a simple relation does not fit the center of our Galaxy \\citep[e.g.,][]{Longmore+2013}. \nThe reasons for these deviations remain to be studied.\n\n\n\n\\begin{acknowledgements}\nSI thanks Hiroshi Kobayashi and Jennifer M. Stone for useful discussions and comments.
\nSI is supported by Grant-in-Aid for Scientific Research (23244027, 23103005).\n\\end{acknowledgements}\n\n\n\\section{Introduction}\nThe discrete memoryless interference channel (DM-IC) is the canonical model for studying the effect of interference in wireless systems. The capacity of this channel is only known in some special cases, e.g., the class of deterministic ICs \\cite{Gam82,Cho07}, strong interference conditions \\cite{Sat81,Cos87,Chu07}, degraded conditions \\cite{Ben79,Liu08}, and a class of semideterministic ICs \\cite{Cho09}. Characterizing the capacity region in the general case has been one of the long-standing open problems in information theory. The best known achievable rate region is the so-called Han-Kobayashi region, which can be achieved using schemes based on the concepts of rate-splitting and superposition coding \\cite{Han81,Cho06}. Rate-splitting refers to the technique of splitting the message at a transmitter into a common and a private part, where the common part is decoded at all the receivers and the private part is decoded only at the intended receiver. The two parts of the message are then combined into a single signal using superposition coding, first introduced in \\cite{Cov72} in the context of the broadcast channel.\nIn all the special cases where the capacity is known, the Han-Kobayashi region equals the capacity region. However, it has been very recently shown that this inner bound is not tight in general \\cite{Nai15}.\n\nThe first result we present in this paper shows that the Han-Kobayashi region can be achieved by a multicoding scheme. This scheme does not involve any explicit rate-splitting. Instead, the codebook at each encoder is generated as a multicodebook, i.e., there are multiple codewords corresponding to each message.
The auxiliary random variable in this scheme does not explicitly carry a part of the message; rather, it implicitly carries \\emph{some} part of the message, and it is not required to specify which part.\\footnote{A similar idea, combined with block-Markov operation, has been recently used in \\cite{Lim14} to develop an achievability scheme called distributed-decode-forward for broadcast traffic on relay networks.} In this sense, its role is different from that in the Han-Kobayashi scheme \\cite{Han81,Cho06}, and is reminiscent of the encoding for state-dependent channels in \\cite{Gel80} and the alternative proof of Marton's achievable rate region for the broadcast channel given in \\cite{Gam81}. A key advantage of\nthe multicoding nature of the new scheme is that it can be easily extended to obtain simple achievability schemes for setups in which the canonical interference channel model is augmented to incorporate additional node capabilities such as cognition and state-dependence, while extending the original Han-Kobayashi scheme to such setups can quickly become highly involved. We demonstrate this by constructing schemes for settings which augment the canonical interference channel model in different ways.\n \n\nThe first setting we consider is when the interference channel is state-dependent and the state information is available non-causally to one of the transmitters (the cognitive transmitter). For simplicity, we focus on the case when the cross-link between the non-cognitive transmitter and its undesired receiver is weak enough to be ignored, giving rise to the so-called $Z$-interference channel topology. We know that for a point-to-point state-dependent channel with non-causal state information at the encoder, the optimal achievability scheme due to Gelfand and Pinsker uses multicoding at the encoder. Hence, for state-dependent interference channels with noncausal state information at the encoders too, we would like to use the idea of multicoding.
Since the new achievability scheme that we present for the canonical interference channel already involves multicoding, it requires almost no change to be applicable to the state-dependent setting. Apart from being simple, we are also able to prove its optimality for the case of the deterministic $Z$-interference channel.\n\nWe then specialize our capacity characterization for the state-dependent deterministic $Z$-interference channel to the case where the channels are governed by the linear deterministic model of \\cite{Ave11}. In the recent literature, this model has proven extremely useful for approximating the capacity of wireless networks and developing insights for the design of optimal communication strategies. We consider a linear deterministic Z-interference channel, in which the state of the channel denotes whether the interference link is present or not. When the transmitters are base-stations and the receivers are end-users, this can model the scenario where one of the transmitters is cognitive; for example, it can be a central controller that knows when the other Tx-Rx pair will be scheduled to communicate on the same frequency band. When the two Tx-Rx pairs are scheduled to communicate on the same frequency band, this gives an interference channel; when they communicate on different frequency bands, each pair gets a clean channel free of interference. Moreover, the cognitive transmitter can know the schedule ahead of time, i.e., the times at which its transmission will be interfering with the second Tx-Rx pair. For this special case, we identify auxiliary random variables and provide an explicit expression for the capacity region. This explicit capacity characterization allows us to identify interesting properties of the optimal strategy.
In particular, with single bit level for the linear deterministic channels (which would imply low to moderate SNR for the corresponding Gaussian channels), the sum rate is maximized when the interfering transmitter remains silent (transmits $0$'s) at times when it interferes with the second transmission. It then treats these symbols as stuck to $0$ and performs Gelfand-Pinsker coding. The second transmitter observes a clean channel at all times and communicates at the maximal rate of $1$ bit per channel use. This capacity characterization also reveals that when all nodes are provided with the state information the sum-capacity cannot be further improved. Thus, for this\nchannel, the sum-capacity when all nodes have state information is the same as that when only the interfering encoder\nhas state information. \n\nMotivated by wireless applications, there has been significant recent interest in state-dependent interference channels (ICs), where the state information is known only to some of the transmitters. Given the inherent difficulty of the problem, many special cases have been considered \\cite{Zha13,Goo13,Dua13a,Dua13b}, for which different coding schemes have been proposed. However, exact capacity characterizations have proven difficult. Another line of related work has been the study of cognitive state-dependent ICs \\cite{Rin11,Som08,Dua12,Kaz13}. Here, the term ``cognitive'' is usually used to mean that the cognitive transmitters know not only the state of the channel but also messages of other transmitters. Note that this assumption is significantly stronger than assuming state information at the transmitter as we do here.\n\nThe second setting we consider is when one of the transmitters has the capability to overhear the signal transmitted by the other transmitter, which can be used to induce cooperation between the two transmitters. 
This is different from having orthogonal communication links (or conferencing) between the encoders, as studied in \\cite{Wan11b}. Instead, overhearing exploits the natural broadcast property of the wireless medium to establish cooperation without requiring any dedicated resources. A variety of different models have been used to capture overhearing \\cite{Pra11,Yan11,Car12}, and these are known by different names such as cribbing, source cooperation, generalized feedback, cognition, etc. We use ``partial cribbing'' to model the overhearing, in which some deterministic function of the signal transmitted by the non-cognitive transmitter is available at the cognitive transmitter in a strictly causal fashion. Again, for simplicity, we focus on the case of the $Z$-interference channel, where the cross-link between the non-cognitive transmitter and its undesired receiver is weak enough to be ignored. For this setting, we develop a simple achievability scheme by combining our multicoding-based scheme with block-Markov coding and show that it is optimal for deterministic configurations.\n\nFinally, to further illustrate the point that simple schemes can be obtained for augmented scenarios, we describe two extensions which introduce even more complexity in the model. In the first extension, a third message is introduced in the state-dependent Z-interference channel, which is to be communicated from the interfering transmitter to the interfered receiver. The second extension combines the state-dependent Z-IC and the Z-IC with unidirectional partial cribbing. In both extensions, we are able to obtain simple optimal schemes by naturally extending the multicoding-based achievability schemes.\n\n\\subsection*{Organization}\nWe describe the models considered in this paper formally in Section~\\ref{sec:model}.\nThe alternative achievability scheme that achieves the Han-Kobayashi region is presented in Section~\\ref{sec:outline}.
Section~\\ref{sec:state} describes the results concerning the state-dependent setup, and Section~\\ref{sec:cribbing} describes the results concerning the cribbing setup. The two extensions are described in Section~\\ref{sec:extensions}, and we end the paper with a short discussion in Section~\\ref{sec:conclude}.\n\n\\section{Model}\\label{sec:model}\n\nCapital letters, small letters and capital calligraphic letters denote random variables, realizations and alphabets, respectively. The tuple $(x(1),x(2),\\dots ,x(n))$ and the set $\\{a,{a+1},\\dots ,b\\}$ are denoted by $x^n$ and $[a:b]$ respectively, and $\\mc{T}_{\\epsilon}^{(n)}$ stands for the $\\epsilon$-strongly typical set of length-$n$ sequences.\n\nWe now describe the channel models considered in this paper.\n\n\\subsection{Canonical Interference Channel}\nThe two-user discrete memoryless interference channel $p_{Y_1,Y_2|X_1,X_2}(y_1,y_2|x_1,x_2)$ is depicted in Fig.~\\ref{fig:model}. Each sender $j\\in\\{1,2\\}$ wishes to communicate a message $M_j$ to the corresponding receiver.\n\nA $(n,2^{nR_1},2^{nR_2},\\epsilon)$ code for the above channel consists of the encoding and decoding functions:\n\\begin{IEEEeqnarray*}{rCl}\nf_{j,i} & : & [1:2^{nR_j}] \\rightarrow \\mc{X}_j, \\quad j\\in\\{1,2\\}, 1\\leq i\\leq n,\\\\ \ng_j & : & \\mc{Y}_j^n \\rightarrow [1:2^{nR_j}],\\quad j\\in\\{1,2\\},\n\\end{IEEEeqnarray*}\nsuch that \n$$\\text{Pr}\\left\\{g_j(Y_j^n)\\neq M_j\\right\\} \\leq \\epsilon,\\quad j\\in\\{1,2\\},$$\nwhere $M_1$ and $M_2$ are assumed to be distributed uniformly in $[1:2^{nR_1}]$ and $[1:2^{nR_2}]$ respectively. A rate pair $(R_1,R_2)$ is said to be \\emph{achievable} if for every $\\epsilon > 0,$ there exists a $(n,2^{nR_1},2^{nR_2},\\epsilon)$ code for sufficiently large $n$.
The capacity region is defined to be the closure of the achievable rate region.\n\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[scale=1.5]{model.pdf}\n\\caption{Two-User Discrete Memoryless Interference Channel (DM-IC)}\n\\label{fig:model}\n\\end{figure}\n\n\n\\subsection{State-Dependent Z-Interference Channel}\\label{subsec:model_state}\n\nThe discrete memoryless Z-interference channel $p(y_1|x_1,s)p(y_2|x_1,x_2,s)$ with discrete memoryless state $p(s)$ is depicted in Fig.~\\ref{fig:model_gen}. The states are assumed to be known noncausally at encoder 1. Each sender $j\\in\\{1,2\\}$ wishes to communicate a message $M_j$ at rate $R_j$ to the corresponding receiver. For this setting, a $(n,2^{nR_1},2^{nR_2},\\epsilon)$ code consists of the encoding and decoding functions:\n\\begin{IEEEeqnarray*}{rCl}\nf_{1,i}& : &[1:2^{nR_1}]\\times \\mc{S}^n \\rightarrow \\mc{X}_1, \\quad 1\\leq i\\leq n,\\\\ \nf_{2,i}& :& [1:2^{nR_2}] \\rightarrow \\mc{X}_2, \\quad 1\\leq i\\leq n,\\\\ \ng_j &: & \\mc{Y}_j^n \\rightarrow [1:2^{nR_j}],\\quad j\\in\\{1,2\\},\n\\end{IEEEeqnarray*}\nsuch that \n$$\\text{Pr}\\left\\{g_j(Y_j^n)\\neq M_j\\right\\} \\leq \\epsilon,\\quad j\\in\\{1,2\\}.$$ The probability of error, achievable rate pairs $(R_1,R_2)$ and the capacity region are defined in a similar manner as before.\n\n\\begin{figure}[!h]\n\\centering\n\\includegraphics[scale=1.5]{block_diag_gen.pdf}\n\\caption{The State-Dependent Z-Interference Channel (S-D Z-IC)}\n\\label{fig:model_gen}\\vspace{0mm}\n\\end{figure}\n\nThe deterministic S-D Z-IC is depicted in Fig.~\\ref{fig:model_state_det}. The channel output $Y_1$ is a deterministic function $y_1(X_1,S)$ of the channel input $X_1$ and the state $S$.
At receiver 2, the channel output $Y_2$ is a deterministic function $y_2(X_2,T_1)$ of the channel input $X_2$ and the interference $T_1$, which is assumed to be a deterministic function $t_1(X_1,S)$.\nWe also assume that if $x_2$ is given, $y_2(x_2,t_1)$ is an injective function of $t_1$, i.e., there exists some function $g$ such that $t_1=g(y_2,x_2).$ \n\n\\begin{figure}[!h]\n\\centering\n\\includegraphics[scale=1.5]{block_diag_2.pdf}\n\\caption{The Injective Deterministic S-D Z-IC}\n\\label{fig:model_state_det}\n\\end{figure}\n\nWe consider in detail a special case of the injective deterministic S-D Z-IC, namely the modulo-additive S-D Z-IC, depicted in Fig.~\\ref{fig:model_modulo}. All channel inputs and outputs come from a finite alphabet $\\mc{X}=\\{0,1,\\dots ,|\\mc{X}|-1\\}$. The channel has two states. In state $S=0$, there is no interference, while in state $S=1$, the cross-link is present. When the cross-link is present, the output at receiver~2 is the modulo-$|\\mc{X}|$ sum of $X_2$ and $X_1$. For all other cases, the output is equal to the input. We can describe this formally as: \n\\begin{equation*}\n\\begin{split}\nY_1 & = X_1,\\\\\nY_2 & = X_2 \\oplus (S\\cdot X_1).\n\\end{split}\n\\end{equation*}\nAssume that the state $S$ is i.i.d. Ber$(\\lambda)$. A generalization of this model that incorporates multiple levels is also considered subsequently.
At receiver 2, the channel output $Y_2$ is a deterministic function $y_2(X_2,T_1)$ of the channel input $X_2$ and the interference $T_1$, which is assumed to be a deterministic function $t_1(X_1)$. We also assume that if $x_2$ is given, $y_2(x_2,t_1)$ is an injective function of $t_1$, i.e., there exists some function $g$ such that $t_1=g(y_2,x_2).$ Each sender $j\\in\\{1,2\\}$ wishes to communicate a message $M_j$ at rate $R_j$ to the corresponding receiver. \n\nWe assume that encoder 1 can overhear the signal from transmitter 2 \\emph{strictly causally}, which is modeled as partial cribbing with a delay \\cite{Asn13}. The partial cribbing signal, which is a function of $X_2$, is denoted by $Z_2$. Thus $X_{1i}$ is a function of $(M_1,Z_2^{i-1})$ and $X_{2i}$ is a function of $M_2$.\n\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[scale=1.5]{block_diag.pdf}\n\\caption{Injective Deterministic Z-Interference Channel with Unidirectional Partial Cribbing}\n\\label{fig:model_crib}\n\\end{figure}\n\nA $(n,2^{nR_1},2^{nR_2},\\epsilon)$ code for this setting consists of \n\\begin{IEEEeqnarray*}{rCl}\nf_{1,i} & : & [1:2^{nR_1}]\\times \\mc{Z}_2^{i-1} \\rightarrow \\mc{X}_1, \\quad 1\\leq i\\leq n,\\\\ \nf_{2,i} & : & [1:2^{nR_2}] \\rightarrow \\mc{X}_2, \\quad 1\\leq i\\leq n,\\\\ \ng_j & : & \\mc{Y}_j^n \\rightarrow [1:2^{nR_j}],\\quad j\\in\\{1,2\\},\n\\end{IEEEeqnarray*}\nsuch that \n$$\\text{Pr}\\left\\{g_j(Y_j^n)\\neq M_j\\right\\} \\leq \\epsilon,\\quad j\\in\\{1,2\\}.$$ The probability of error, achievable rate pairs $(R_1,R_2)$ and the capacity region are defined in a similar manner as before.\n\n\\section{Canonical Interference Channel}\\label{sec:outline}\n\n\\subsection{Preliminaries}\\label{sec:prelim}\nThe currently best known achievable rate region for the 2-user DM-IC was provided by Han and Kobayashi in \\cite{Han81}, using a scheme based on rate-splitting and superposition coding.
An alternative achievable rate region that included the Han-Kobayashi rate region was proposed in \\cite{Cho06}, using another scheme that used rate-splitting and superposition coding. Using the terminology introduced in \\cite{Wan13}, the encoding in \\cite{Han81} can be described as employing \\emph{homogeneous} superposition coding, while that in \\cite{Cho06} can be described as employing \\emph{heterogeneous} superposition coding. It was then proved in \\cite{Cho08} that the two regions are, in fact, equivalent and given by the following compact representation (see also \\cite{Kra06,Kob07}).\n\n\\begin{thm}[Han-Kobayashi Region]\\label{thm:HK}\nA rate pair $(R_1,R_2)$ is achievable for the DM-IC $p(y_1,y_2|x_1,x_2)$ if\n\\begin{equation}\\label{eq:achreg_prelim}\n\\begin{split}\nR_1 & < I(X_1;Y_1|U_2,Q),\\\\\nR_2 & < I(X_2;Y_2|U_1,Q),\\\\\nR_1 + R_2 & < I(X_1;Y_1|U_1,U_2,Q) +I(X_2,U_1;Y_2|Q) ,\\\\\nR_1 + R_2 & < I(X_1,U_2;Y_1|U_1,Q) + I(X_2,U_1;Y_2|U_2,Q),\\\\\nR_1 + R_2 & < I(X_1,U_2;Y_1|Q) + I(X_2;Y_2|U_1,U_2,Q),\\\\\n2R_1 + R_2 & < I(X_1;Y_1|U_1,U_2,Q) + I(X_2,U_1;Y_2|U_2,Q) + I(X_1,U_2;Y_1|Q),\\\\\nR_1 + 2R_2 & < I(X_2;Y_2|U_1,U_2,Q) + I(X_1,U_2;Y_1|U_1,Q) + I(X_2,U_1;Y_2|Q),\n\\end{split}\n\\end{equation}\nfor some pmf $p(q)p(u_1,x_1|q)p(u_2,x_2|q),$ where ${|\\mc{U}_1|\\leq |\\mc{X}_1|+4}$, ${|\\mc{U}_2|\\leq |\\mc{X}_2|+4}$ and ${|\\mc{Q}|\\leq 4.}$\n\\end{thm}\n\n\\subsection{Outline of the new achievability scheme}\nWe first describe the alternative achievability scheme informally and discuss the similarities and differences with the existing achievability schemes. 
The later subsections describe and analyze the scheme formally.\n\nEncoder $j$, where $j\\in\\{1,2\\}$ prepares two codebooks: \n\\begin{itemize}\n\\item A transmission multicodebook\\footnote{The term ``multicodebook'' refers to the fact that there are multiple codewords corresponding to each message.}, which is a set of codewords $\\{x_j^n(\\cdot,\\cdot)\\}$ formed using the transmission random variable $X_j$. This set is partitioned into a number of bins (or subcodebooks), where the bin-index corresponds to the message, \n\\item A coordination codebook which is a set of codewords $\\{u_j^n(\\cdot)\\}$ formed using the auxiliary random variable $U_j$.\n\\end{itemize}\nGiven a message, one codeword $x_j^n$ from the corresponding bin in the transmission multicodebook is chosen so that it is jointly typical with some sequence $u_j^n$ in the coordination codebook. The codeword $x_j^n$ so chosen forms the transmission sequence.\n\nAt a decoder, the desired message is decoded by using joint typicality decoding, which uses the coordination codebook and the transmission multicodebook of the corresponding encoder and the coordination codebook of the other encoder. Thus, a receiver makes use of the interference via its knowledge of the coordination codebook at the interfering transmitter.\n\nFrom the above description, it can be seen that the coordination codebook does not carry any message. Its purpose is to ensure that the transmission sequence from a given bin is well-chosen, i.e. it is beneficial to the intended receiver and also the unintended receiver. To the best of our knowledge, this is the first time an auxiliary random variable (which is not the time-sharing random variable) appears in one of the best known achievability schemes without being explicitly associated with any message. 
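To make the codebook structure concrete, the following toy sketch (our illustration only; joint typicality is crudely approximated by an empirical-agreement test, and the blocklength and codebook sizes are arbitrary small values, not operating rates) mimics the multicodebook generation and the encoding at encoder 1:

```python
import random

random.seed(0)
n = 24                 # blocklength (illustrative)
num_messages = 16      # 2^{nR_1} bins
bin_size = 16          # 2^{nR_{1p}} codewords per bin (the multicodebook)
num_coord = 16         # 2^{nR_{1c}} coordination sequences u_1^n

def rand_word():
    return tuple(random.randint(0, 1) for _ in range(n))

# i.i.d. codebooks: coordination sequences and the binned multicodebook
coord = [rand_word() for _ in range(num_coord)]
codebook = {m: [rand_word() for _ in range(bin_size)]
            for m in range(num_messages)}

def jointly_typical(u, x, target=0.75, eps=0.05):
    # crude stand-in for joint typicality: empirical agreement near target
    agree = sum(ui == xi for ui, xi in zip(u, x)) / n
    return abs(agree - target) <= eps

def encode(m):
    # search bin m for a codeword jointly typical with some u^n; fall back
    # to the first codeword of the bin, as in the scheme, if none is found
    for x in codebook[m]:
        if any(jointly_typical(u, x) for u in coord):
            return x
    return codebook[m][0]

x_sent = encode(3)
print(x_sent in codebook[3])   # transmitted word always lies in bin m
```

Note that no message is attached to the coordination sequences; they only steer the choice of codeword within the bin indexed by the message.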
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Achievability scheme}\\label{subsec:achHK}\nChoose a pmf $p(u_1,x_1)p(u_2,x_2)$ and $0<\\epsilon'<\\epsilon$.\n\\subsubsection*{Codebook Generation}\n\\begin{itemize}\n\\item Encoder 1 generates a coordination codebook consisting of $2^{nR_{1c}}$ codewords\\footnote{Though there is no notion of a common message or a private message in this achievability scheme, we use the subscripts $c$ and $p$ to convey if the corresponding random variables are used for decoding at all destinations or only the desired destination respectively.} $u_1^n(l_{1c}),\\;l_{1c}\\in[1:2^{nR_{1c}}]$ i.i.d. according to $\\prod_{i=1}^np(u_{1i})$. It also generates a transmission multicodebook consisting of $2^{n(R_1+R_{1p})}$ codewords $x_1^n(m_1,l_{1p}),\\;m_1\\in[1:2^{nR_1}],\\; l_{1p}\\in[1:2^{nR_{1p}}]$ i.i.d. according to $\\prod_{i=1}^np(x_{1i})$.\n\\item Similarly, encoder 2 generates a coordination codebook consisting of $2^{nR_{2c}}$ codewords $u_2^n(l_{2c}),\\;l_{2c}\\in[1:2^{nR_{2c}}]$ i.i.d. according to $\\prod_{i=1}^np(u_{2i})$. It also generates a transmission multicodebook consisting of $2^{n(R_2+R_{2p})}$ codewords $x_2^n(m_2,l_{2p}),\\;m_2\\in[1:2^{nR_2}],\\; l_{2p}\\in[1:2^{nR_{2p}}]$ i.i.d. according to $\\prod_{i=1}^np(x_{2i})$.\n\\end{itemize}\n\n\\subsubsection*{Encoding}\n\\begin{itemize}\n\\item To transmit message $m_1$, encoder 1 finds a pair $(l_{1c},l_{1p})$ such that $$(u_1^n(l_{1c}),x_1^n(m_1,l_{1p}))\\in\\mc{T}^{(n)}_{\\epsilon'}$$ and transmits $x_1^n(m_1,l_{1p})$. If it cannot find such a pair, it transmits $x_1^n(m_1,1)$.\n\\item Similarly, to transmit message $m_2$, encoder 2 finds a pair $(l_{2c},l_{2p})$ such that $$(u_2^n(l_{2c}),x_2^n(m_2,l_{2p}))\\in\\mc{T}^{(n)}_{\\epsilon'}$$ and transmits $x_2^n(m_2,l_{2p})$. 
If it cannot find such a pair, it transmits $x_2^n(m_2,1)$.\n\\end{itemize}\n\nThe codebook generation and encoding process are illustrated in Fig.~\\ref{fig:encoding}.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[scale=1.5]{codebooks.pdf}\n\\vspace{1mm}\n\\caption{Codebook Generation and Encoding at Encoder~1. The independently generated $x_1^n$ sequences, lined up vertically in the figure, are binned into $2^{nR_1}$ bins. The independently generated coordination sequences $u_1^n$ are lined up horizontally. To transmit message $m_1$, a jointly typical pair $(x_1^n,u_1^n)$ is sought where $x_1^n$ falls into the $m_1$-th bin, and then $x_1^n$ is transmitted.}\n\\label{fig:encoding}\n\\end{figure}\n\n\\subsubsection*{Decoding}\n\\begin{itemize}\n\\item Decoder 1 finds the unique $\\hat{m}_1$ such that $$(u_1^n(l_{1c}),x_1^n(\\hat{m}_1,l_{1p}),u_2^n(l_{2c}),y_1^n)\\in\\mc{T}^{(n)}_{\\epsilon}$$ for some $(l_{1c},l_{1p},l_{2c})$. If none or more than one such $\\hat{m}_1$ are found, then decoder 1 declares an error.\n\\item Decoder 2 finds the unique $\\hat{m}_2$ such that $$(u_2^n(l_{2c}),x_2^n(\\hat{m}_2,l_{2p}),u_1^n(l_{1c}),y_2^n)\\in\\mc{T}^{(n)}_{\\epsilon}$$ for some $(l_{2c},l_{2p},l_{1c})$. If none or more than one such $\\hat{m}_2$ are found, then decoder 2 declares an error.\n\\end{itemize}\n\n\\subsubsection*{Discussion}\nBefore providing the formal analysis of the probability of error to show that the coding scheme described above achieves the Han-Kobayashi region, we discuss the connection between the new scheme and the scheme from \\cite{Cho06}, which motivates the equivalence of their rate regions.\n\nConsider the set of codewords used at encoder 1. While this set resembles a multicodebook, it can be reduced to a standard codebook (one codeword per message) by stripping away the codewords in each bin that are not jointly typical with any of the $u_1^n$ sequences, and therefore are never used by the transmitter.
In other words, after we generate the multicodebook in Fig.~\\ref{fig:encoding}, we can form a smaller codebook by only keeping one codeword per message which is jointly typical with one of the $u_1^n$ sequences (i.e., those codewords highlighted in Fig.~\\ref{fig:encoding}). Note that this reduced codebook indeed has a superposition structure. Each of the $ 2^{nR_1} $ remaining codewords $x_1^n$ is jointly typical with one of the $2^{nR_{1c}}$ $u_1^n$ codewords, and when $n$ is large there will be exactly $ 2^{n(R_1-R_{1c})} $ $x_1^n$ sequences that are typical with each $u_1^n$ sequence, i.e., these $ 2^{n(R_1-R_{1c})} $ $x_1^n$ sequences will look as if they were generated i.i.d. from $p(x_1|u_1)$. Therefore, the $u_1^n$ sequences can indeed be thought of as the cloud centers in this superposition codebook and the $x_1^n$'s as the satellite codewords. Thus, our multicodebook construction can be viewed as an equivalent way to generate a superposition codebook as in \\cite{Cho08}. This reveals that both the codebook structure and the decoding in our scheme are similar to those in the Han-Kobayashi scheme, and therefore the two achievable rate regions are, not surprisingly, equal.\n\nHowever, note that for broadcast channels, combining Marton coding (which employs multicoding) \\cite{Gam81} with Gelfand-Pinsker coding (which also employs multicoding) is more straightforward than combining superposition coding with Gelfand-Pinsker coding. The former has been shown to be optimal in some cases \\cite{Lap13}. Since our codebook construction for the interference channel also has the flavor of multicoding, extending this construction to setups where multicoding is required is also quite straightforward. As mentioned in the introduction, we exploit this to develop simple achievability schemes for more general setups described in later sections.
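The encoding step succeeds with high probability only when enough candidate pairs are available, which is where the rate constraint $R_{1c}+R_{1p} > I(U_1;X_1)$ of the subsequent error analysis comes from: a random pair is jointly typical with probability roughly $2^{-nI(U_1;X_1)}$, so with $K = 2^{n(R_{1c}+R_{1p})}$ independent pairs the failure probability behaves like $(1-2^{-nI})^K$. The following back-of-the-envelope sketch is our illustration, with an assumed value of $I(U_1;X_1)$:

```python
import math

I_UX = 0.5   # assumed mutual information I(U_1;X_1) in bits (illustrative)

def p_fail(n, rate_sum):
    # P(no jointly typical pair) ~ (1 - 2^{-n I})^K with K = 2^{n rate_sum}
    K = 2.0 ** (n * rate_sum)
    p = 2.0 ** (-n * I_UX)
    return math.exp(K * math.log1p(-p))    # stable evaluation of (1-p)^K

for n in (20, 40, 80):
    # rate_sum above I(U;X): failure -> 0; below I(U;X): failure -> 1
    print(n, p_fail(n, 0.6), p_fail(n, 0.4))
```

The sharp threshold at $R_{1c}+R_{1p} = I(U_1;X_1)$ is exactly the mutual covering condition invoked in the error analysis below.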
\n\n\n\\subsubsection*{Probability of Error}\nDue to the symmetry of the code, the average probability of error $\\msf{P}(\\mc{E})$ is equal to $\\msf{P}(\\mc{E}|M_1,M_2)$, so we can assume $(M_1,M_2) = (1,1)$ and analyze $\\msf{P}(\\mc{E}|1,1)$. Let $(L_{1c},L_{1p},L_{2c},L_{2p})$ denote the indices chosen during encoding by encoder 1 and encoder 2. \n\nWe now define events that cover the event of error in decoding message $m_1$:\n\\begin{IEEEeqnarray*}{rCl}\n\\mc{E}_1 & \\triangleq & \\{(U_1^n(l_{1c}),X_1^n(1,l_{1p}))\\notin\\mc{T}^{(n)}_{\\epsilon'} \\;\\text{ for all } l_{1c}, l_{1p}\\}, \\\\\n\\mc{E}_2 & \\triangleq & \\{(U_1^n(L_{1c}),X_1^n(1,L_{1p}),U_2^n(L_{2c}),Y_1^n)\\notin\\mc{T}^{(n)}_{\\epsilon}\\}, \\\\\n\\mc{E}_3 & \\triangleq & \\{(U_1^n(L_{1c}),X_1^n(m_1,l_{1p}),U_2^n(L_{2c}),Y_1^n)\\in\\mc{T}^{(n)}_{\\epsilon}\\text{ for some }m_1\\neq 1, \\text{ for some } l_{1p}\\}, \\\\\n\\mc{E}_4 & \\triangleq & \\{(U_1^n(L_{1c}),X_1^n(m_1,l_{1p}),U_2^n(l_{2c}),Y_1^n)\\in\\mc{T}^{(n)}_{\\epsilon} \\text{ for some }m_1\\neq 1, \\text{ for some } l_{1p},l_{2c}\\} ,\\\\\n\\mc{E}_5 & \\triangleq & \\{(U_1^n(l_{1c}),X_1^n(m_1,l_{1p}),U_2^n(L_{2c}),Y_1^n)\\in\\mc{T}^{(n)}_{\\epsilon} \\text{ for some }m_1\\neq 1, \\text{ for some } l_{1p},l_{1c}\\} ,\\\\\n\\mc{E}_6 & \\triangleq & \\{(U_1^n(l_{1c}),X_1^n(m_1,l_{1p}),U_2^n(l_{2c}),Y_1^n)\\in\\mc{T}^{(n)}_{\\epsilon} \\text{ for some }m_1\\neq 1, \\text{ for some } l_{1c},l_{1p},l_{2c}\\}.\n\\end{IEEEeqnarray*}\n\nConsider also the event $\\mc{E}'_1$, analogous to $\\mc{E}_1$, which is defined as follows.\n\\begin{IEEEeqnarray*}{rCl}\n\\mc{E}'_1 & \\triangleq & \\{(U_2^n(l_{2c}),X_2^n(1,l_{2p}))\\notin\\mc{T}^{(n)}_{\\epsilon'} \\;\\text{ for all } l_{2c}, l_{2p}\\}. 
\\label{eq:E'1}\n\\end{IEEEeqnarray*}\n\nSince an error for $m_1$ occurs only if at least one of the above events occurs, we use the union bound to get the following upper bound on the average probability of error in decoding $m_1$:\n$$ \\msf{P}(\\mc{E}_1) + \\msf{P}(\\mc{E}'_1)+ \\msf{P}(\\mc{E}_2\\cap\\mc{E}_1^c\\cap\\mc{E}'^{c}_1) + \\msf{P}(\\mc{E}_3) + \\msf{P}(\\mc{E}_4) + \\msf{P}(\\mc{E}_5) + \\msf{P}(\\mc{E}_6).$$\n\nBy the mutual covering lemma \\cite[Ch. 8]{Gam12}, $\\msf{P}(\\mc{E}_1)\\rightarrow 0$ as $n\\rightarrow\\infty$ if \n\\begin{IEEEeqnarray}{rCl}\nR_{1p} + R_{1c} & > & I(U_1;X_1) + \\delta(\\epsilon'),\\label{eq:ach1}\n\\end{IEEEeqnarray}\nwhere $\\delta(\\epsilon')\\rightarrow 0$ as $\\epsilon'\\rightarrow 0.$\n\nSimilarly, we get that \n$\\msf{P}(\\mc{E}'_1)\\rightarrow 0$ as $n\\rightarrow\\infty$ if \n\\begin{IEEEeqnarray}{rCl}\nR_{2p} + R_{2c} & > & I(U_2;X_2) + \\delta(\\epsilon').\\label{eq:ach1'}\n\\end{IEEEeqnarray}\n\nBy the conditional typicality lemma, $\\msf{P}(\\mc{E}_2\\cap\\mc{E}_1^c\\cap\\mc{E}'^{c}_1)$ tends to zero as $n\\rightarrow\\infty$.\n\nFor $\\msf{P}(\\mc{E}_3)\\rightarrow 0$, we can use the packing lemma from \\cite[Ch. 
3]{Gam12} to get the condition\n\\begin{equation}\\label{eq:ach2}\nR_1 + R_{1p} < I(X_1;U_1,U_2,Y_1) - \\delta(\\epsilon),\n\\end{equation}where $\\delta(\\epsilon)\\rightarrow 0$ as $\\epsilon\\rightarrow 0.$\n\nFor $\\msf{P}(\\mc{E}_4)\\rightarrow 0$, we can again use the packing lemma to get the condition\n\\begin{equation}\\label{eq:ach3}\nR_1 + R_{1p} + R_{2c} < I(X_1,U_2;U_1,Y_1)- \\delta(\\epsilon).\n\\end{equation}\n\nFor $\\msf{P}(\\mc{E}_5)\\rightarrow 0$, we apply the multivariate packing lemma from the Appendix as shown in \\eqref{eq:multipack_2} to get the condition\n\\begin{IEEEeqnarray}{LCl}\nR_1 + R_{1p} + R_{1c} < I(U_1;X_1) + I(U_1,X_1;U_2,Y_1) - \\delta(\\epsilon).\\label{eq:ach4}\n\\end{IEEEeqnarray} \n\nFinally, for $\\msf{P}(\\mc{E}_6)\\rightarrow 0$ as $n\\rightarrow\\infty$, another application of the multivariate packing lemma as shown in \\eqref{eq:multipack_3} gives the condition\n\\begin{IEEEeqnarray}{rCl}\nR_1 + R_{1p} + R_{1c} + R_{2c} & < & I(U_1;X_1) + I(U_2;Y_1) + I(U_1,X_1;U_2,Y_1)-\\delta(\\epsilon).\\label{eq:ach5}\n\\end{IEEEeqnarray}\n\nA similar analysis leads to the following additional conditions for the probability of error in decoding $m_2$ to vanish as $n\\rightarrow\\infty$.\n\\begin{IEEEeqnarray}{rCl}\nR_2 + R_{2p} & < & I(X_2;U_2,U_1,Y_2) - \\delta(\\epsilon),\\label{eq:ach7}\\\\\nR_2 + R_{2p} + R_{1c} & < & I(X_2,U_1;U_2,Y_2)- \\delta(\\epsilon),\\label{eq:ach8}\\\\\nR_2 + R_{2p} + R_{2c} & < & I(U_2;X_2) + I(U_2,X_2;U_1,Y_2)- \\delta(\\epsilon),\\label{eq:ach9}\\\\\nR_2 + R_{2p} + R_{2c} + R_{1c} & < & I(U_2;X_2) + I(U_1;Y_2) + I(U_2,X_2;U_1,Y_2)-\\delta(\\epsilon).\\label{eq:ach10}\n\\end{IEEEeqnarray}\n\nHence the probability of error vanishes as $n\\rightarrow\\infty$ if the conditions \\eqref{eq:ach1}-\\eqref{eq:ach10} are satisfied.\nFor the sake of brevity, let us first denote the RHS of the conditions \\eqref{eq:ach1}-\\eqref{eq:ach10} by $a,b,c,d,e,f,g,h,i,j$ respectively (ignoring the $\\delta(\\epsilon')$ 
and $\\delta(\\epsilon)$ terms). \n\nWe then note the following relations among these terms, which can be proved using the chain rule of mutual information, the Markov chains $U_1-X_1-(U_2,X_2,Y_1,Y_2)$ and $U_2-X_2-(U_1,X_1,Y_1,Y_2)$, and the independence of $(U_1,X_1)$ and $(U_2,X_2)$:\n\\begin{equation}\\label{eq:relFM}\n\\begin{gathered}\ne-a \\leq \\min\\{c,d\\},\\\\\nf-a \\leq d \\leq f,\\\\\nc \\leq e \\leq f,\\\\\ni-b \\leq \\min\\{g,h\\},\\\\\nj-b \\leq h \\leq j,\\\\\ng \\leq i \\leq j.\n\\end{gathered}\n\\end{equation}\n\nWe now employ Fourier-Motzkin elimination on the conditions \\eqref{eq:ach1}-\\eqref{eq:ach10} and $R_{1c},R_{1p},R_{2c},R_{2p}\\geq 0$ to eliminate $R_{1c},R_{1p},R_{2c},R_{2p}$. The set of relations \\eqref{eq:relFM} can be used to simplify this task by recognizing redundant constraints. At the end, we get the following achievable region:\n\\begin{equation}\\label{eq:achreg1}\n\\begin{split}\nR_1 & < e-a,\\\\\nR_2 & < i-b,\\\\\nR_1 + R_2 & < c+j-a-b,\\\\\nR_1 + R_2 & < d+h-a-b,\\\\\nR_1 + R_2 & < f+g-a-b,\\\\\n2R_1 + R_2 & < c+h+f-2a-b,\\\\\nR_1+2R_2 & < d+g+j-a-2b.\n\\end{split}\n\\end{equation}\n\nUsing the same facts as those used to prove \\eqref{eq:relFM}, we can show that the above region is the same as the Han-Kobayashi region. 
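The Fourier-Motzkin step lends itself to a mechanical check. Below is a minimal numerical sketch (ours, not part of the proof): it runs textbook Fourier-Motzkin elimination with exact rational arithmetic on the constraints \eqref{eq:ach1}-\eqref{eq:ach10}, for one arbitrary choice of values of $a,\dots,j$ satisfying \eqref{eq:relFM}, and compares feasibility in $(R_{1c},R_{1p},R_{2c},R_{2p})$ against the region \eqref{eq:achreg1} at sample points:

```python
from fractions import Fraction as F

def fm_feasible(ineqs):
    """Fourier-Motzkin feasibility test for a system coeffs . x <= rhs.
    ineqs: list of (coeffs, rhs). Eliminates every variable in turn;
    at the end only constraints of the form 0 <= rhs remain."""
    nvar = len(ineqs[0][0])
    for v in range(nvar):
        pos = [iq for iq in ineqs if iq[0][v] > 0]
        neg = [iq for iq in ineqs if iq[0][v] < 0]
        new = [iq for iq in ineqs if iq[0][v] == 0]
        for cp, dp in pos:
            for cn, dn in neg:
                # positive combination that cancels variable v
                new.append(([cp[k] * -cn[v] + cn[k] * cp[v] for k in range(nvar)],
                            dp * -cn[v] + dn * cp[v]))
        ineqs = new
    return all(rhs >= 0 for _, rhs in ineqs)

# One arbitrary choice of a..j satisfying the relations in (eq:relFM).
# (f_ and i_ just avoid shadowing common loop names.)
a, b, c, d, e, f_ = F(2), F(2), F(4), F(5), F(11, 2), F(6)
g, h, i_, j = F(4), F(5), F(11, 2), F(6)

def achievable(R1, R2):
    """Feasibility of (eq:ach1)-(eq:ach10) in x = (R1c, R1p, R2c, R2p) >= 0."""
    ineqs = [
        ([-1, -1, 0, 0], -a),       # R1p + R1c >= a   (covering)
        ([0, 0, -1, -1], -b),       # R2p + R2c >= b
        ([0, 1, 0, 0], c - R1),     # R1 + R1p <= c    (packing bounds)
        ([0, 1, 1, 0], d - R1),
        ([1, 1, 0, 0], e - R1),
        ([1, 1, 1, 0], f_ - R1),
        ([0, 0, 0, 1], g - R2),
        ([1, 0, 0, 1], h - R2),
        ([0, 0, 1, 1], i_ - R2),
        ([1, 0, 1, 1], j - R2),
    ]
    ineqs += [([-(k == v) for k in range(4)], F(0)) for v in range(4)]  # x >= 0
    return fm_feasible(ineqs)

def in_region(R1, R2):
    """Membership in the eliminated region (eq:achreg1)."""
    return (R1 <= e - a and R2 <= i_ - b
            and R1 + R2 <= min(c + j, d + h, f_ + g) - a - b
            and 2 * R1 + R2 <= c + h + f_ - 2 * a - b
            and R1 + 2 * R2 <= d + g + j - a - 2 * b)
```

For instance, $(R_1,R_2)=(3,2)$ is feasible while $(3.6,\,0.25)$ is not, and both verdicts agree with \eqref{eq:achreg1}; the relations \eqref{eq:relFM} hold for this choice of constants (e.g., $e-a=3.5\leq\min\{c,d\}=4$).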
For the sake of completeness, we show this explicitly.\n\\begin{itemize}\n\\item Consider the upper bound on $R_1$:\n\\begin{IEEEeqnarray}{rCl}\ne-a & = & I(U_1,X_1;U_2,Y_1)\\nonumber\\\\\n& \\stackrel{(a)}{=} & I(X_1;U_2,Y_1)\\nonumber\\\\\n& \\stackrel{(b)}{=} & I(X_1;Y_1|U_2),\\label{eq:achreg2}\n\\end{IEEEeqnarray}where step $(a)$ follows since $U_1-X_1-(U_2,Y_1)$ is a Markov chain, and step $(b)$ follows since $X_1$ is independent of $U_2$.\n\\item Similarly, \\begin{equation}\\label{eq:achreg3}i-b= I(X_2;Y_2|U_1).\\end{equation}\n\\item Consider the first upper bound on the sum-rate ${c+j-a-b}$:\n\\begin{IEEEeqnarray}{lCl}\nc+j-a-b \\nonumber\\\\\n = I(X_1;U_1,U_2,Y_1) + I(U_2;X_2) + I(U_1;Y_2) \\nonumber\\\\\n \\quad\\quad +\\> I(U_2,X_2;U_1,Y_2)- I(U_2;X_2)-I(U_1;X_1)\\nonumber\\\\\n \\stackrel{(a)}{=} I(X_1;U_2,Y_1|U_1) + I(U_1;Y_2) + I(U_2,X_2;U_1,Y_2)\\nonumber\\\\\n \\stackrel{(b)}{=} I(X_1;U_2,Y_1|U_1) + I(U_1;Y_2) + I(X_2;U_1,Y_2)\\nonumber\\\\\n \\stackrel{(c)}{=} I(X_1;U_2,Y_1|U_1) + I(U_1;Y_2) + I(X_2;Y_2|U_1)\\nonumber\\\\\n \\stackrel{(d)}{=} I(X_1;Y_1|U_1,U_2) + I(X_2,U_1;Y_2),\\label{eq:achreg4}\n\\end{IEEEeqnarray}where step $(a)$ follows by the chain rule of mutual information, step $(b)$ follows by the Markov chain $U_2-X_2-(U_1,Y_2)$, step $(c)$ follows since $U_1$ and $X_2$ are independent and step $(d)$ follows by the independence of $U_2$ and $(U_1,X_1)$. 
\n\\item By similar steps, $f+g-a-b =$ \\begin{equation}\\label{eq:achreg5}I(X_1,U_2;Y_1) + I(X_2;Y_2|U_1,U_2).\\end{equation}\n\\item The remaining upper bound on the sum-rate $d+h-a-b$ can be simplified as follows: \\begin{IEEEeqnarray}{lCl}\nd+h-a-b \\nonumber\\\\\n= I(X_1,U_2;U_1,Y_1) + I(X_2,U_1;U_2,Y_2) \\nonumber\\\\\n\\quad -\\> I(U_1;X_1) - I(U_2;X_2)\\nonumber\\\\\n = I(X_1,U_2;Y_1|U_1) + I(X_2,U_1;Y_2|U_2),\\label{eq:achreg6}\n\\end{IEEEeqnarray}which follows by the chain rule of mutual information and the independence of $(U_1,X_1)$ and $(U_2,X_2)$.\n\\item The upper bound on $2R_1+R_2$ can be simplified as follows:\n\\begin{IEEEeqnarray}{lCl}\nc+h+f-2a-b \\nonumber\\\\\n= I(X_1;U_1,U_2,Y_1) + I(X_2,U_1;U_2,Y_2) + I(U_1;X_1) \\nonumber\\\\\n\\quad +\\> I(U_2;Y_1) + I(U_1,X_1;U_2,Y_1) - 2I(U_1;X_1) - I(U_2;X_2)\\nonumber\\\\\n \\stackrel{(a)}{=} I(X_1;U_2,Y_1|U_1) + I(X_2,U_1;Y_2|U_2) + I(U_2;Y_1) + I(U_1,X_1;U_2,Y_1)\\nonumber\\\\\n \\stackrel{(b)}{=} I(X_1;U_2,Y_1|U_1) + I(X_2,U_1;Y_2|U_2) + I(U_2;Y_1) + I(X_1;Y_1|U_2)\\nonumber\\\\\n \\stackrel{(c)}{=} I(X_1;Y_1|U_1,U_2) + I(X_2,U_1;Y_2|U_2) + I(X_1,U_2;Y_1),\\label{eq:achreg7}\n\\end{IEEEeqnarray}where step $(a)$ holds by the chain rule of mutual information and the independence of $U_1$ and $(U_2,X_2)$, step $(b)$ follows by $U_1-X_1-(U_2,Y_1)$ and the independence of $X_1$ and $U_2$, and step $(c)$ follows by the chain rule of mutual information and the independence of $U_2$ and $(U_1,X_1)$.\n\\item Finally, $d+g+j-a-2b$ can be similarly shown to be equal to \\begin{equation}\\label{eq:achreg8} I(X_2;Y_2|U_1,U_2) + I(X_1,U_2;Y_1|U_1) + I(X_2,U_1;Y_2).\\end{equation}\n\\end{itemize}\n\nFrom \\eqref{eq:achreg1}-\\eqref{eq:achreg8} and including a time-sharing random variable $Q$, we get that the following region is achievable:\n\\begin{equation}\\label{eq:achreg}\n\\begin{split}\nR_1 & < I(X_1;Y_1|U_2,Q),\\\\\nR_2 & < I(X_2;Y_2|U_1,Q),\\\\\nR_1 + R_2 & < I(X_1;Y_1|U_1,U_2,Q) +I(X_2,U_1;Y_2|Q) 
,\\\\\nR_1 + R_2 & < I(X_1,U_2;Y_1|U_1,Q) + I(X_2,U_1;Y_2|U_2,Q),\\\\\nR_1 + R_2 & < I(X_1,U_2;Y_1|Q) + I(X_2;Y_2|U_1,U_2,Q),\\\\\n2R_1 + R_2 & < I(X_1;Y_1|U_1,U_2,Q) + I(X_2,U_1;Y_2|U_2,Q) + I(X_1,U_2;Y_1|Q),\\\\\nR_1 + 2R_2 & < I(X_2;Y_2|U_1,U_2,Q) + I(X_1,U_2;Y_1|U_1,Q) + I(X_2,U_1;Y_2|Q),\n\\end{split}\n\\end{equation}\nfor some pmf $p(q)p(u_1,x_1|q)p(u_2,x_2|q).$ This region is identical to the region in \\eqref{eq:achreg_prelim}.\\hfill\\IEEEQED\n\n\n\n\\section{State-Dependent Interference Channels}\\label{sec:state}\nIn this section, we focus on the particular setup of the state-dependent Z-interference channel (S-D Z-IC) with noncausal state information at the interfering transmitter, as depicted in Fig.~\\ref{fig:model_gen}. We provide a simple achievability scheme for this setup, which is obtained from the alternative achievability scheme for the general interference channel. This scheme is shown to be optimal for the deterministic case. The auxiliary random variable used for encoding at the interfering transmitter now implicitly captures some part of the message as well as some part of the state sequence realization. \nThe achievability scheme can also be viewed as a generalization of the schemes presented in \\cite{Cad09} and \\cite{Dua13b}.\n\n\n\nAfter characterizing the capacity region of the deterministic S-D Z-IC, we investigate a special case in detail: the modulo-additive S-D Z-IC. The modulo-additive channel is motivated by the linear deterministic model, which has gained popularity in recent years for studying wireless networks \\cite{Ave11}. For this case (which can be thought of as a linear deterministic model with only one \\emph{bit level}), we obtain an explicit description of the capacity region and, furthermore, show that the capacity region is also achieved by the standard Gelfand-Pinsker coding over the first link and treating interference as noise over the second link. 
Following this, the modulo-additive S-D Z-IC with multiple levels is considered and some discussion is provided about the capacity region and the performance of simple achievability schemes.\n\nTo summarize, this section contains the following contributions:\n\\begin{itemize}\n\\item An achievable rate region for the S-D Z-IC,\n\\item Capacity region of the injective deterministic S-D Z-IC,\n\\item Modulo-additive S-D Z-IC: optimality of treating interference-as-noise and other properties.\n\\end{itemize}\n\n\n\n\n\n\\subsection{Results for the State-Dependent Channel}\\label{subsec:main_res_state}\n\nThe following theorem provides an inner bound to the capacity region of the S-D Z-IC in Fig.~\\ref{fig:model_gen}.\n\\begin{thm}\\label{thm:gen_ach}\nA rate pair $(R_1,R_2)$ is achievable for the channel in Fig.~\\ref{fig:model_gen} if\n\\begin{equation}\\label{eq:gen_ach}\n\\begin{split}\nR_1 & < I(U;Y_1|Q)-I(U;S|Q),\\\\\nR_2 & < I(X_2;Y_2|V,Q),\\\\\nR_2 & < I(V,X_2;Y_2|Q) - I(V;S|Q),\\\\\nR_1 + R_2 & < I(U;Y_1|Q)+ I(V,X_2;Y_2|Q)\\\\\n & \\quad\\quad -I(U;S|Q)- I(U,S;V|Q),\n\\end{split}\n\\end{equation}for some pmf $p(q)p(u,v|s,q)p(x_1|u,v,s,q)p(x_2|q).$\n\\end{thm}\n\n\nFor the injective deterministic S-D Z-IC, we can identify natural choices for the auxiliary random variables in Theorem~\\ref{thm:gen_ach} that, in fact, yield the capacity region. 
This result is stated in the following theorem.\n\n\\begin{thm}\\label{thm:cap}\nThe capacity region of the injective deterministic S-D Z-IC in Fig.~\\ref{fig:model_state_det} is the set of rate pairs $(R_1,R_2)$ that satisfy\n\\begin{equation}\\label{eq:cap}\n\\begin{split}\nR_1 & \\leq H(Y_1|S,Q),\\\\\nR_2 & \\leq H(Y_2|T_1,Q),\\\\\nR_2 & \\leq H(Y_2|Q) - I(T_1;S|Q),\\\\\nR_1 + R_2 & \\leq H(Y_1|T_1,S,Q)+H(Y_2|Q)-I(T_1;S|Q),\n\\end{split}\n\\end{equation}\nfor some pmf $p(q)p(x_1|s,q)p(x_2|q),$ where $|\\mc{Q}|\\leq 4$.\n\\end{thm}\n\n\n\n\\begin{remark} Note that the capacity region remains unchanged even if the first receiver is provided with the state information. The proof of this theorem is presented in subsection~\\ref{subsec:proof_thm_cap}.\n\\end{remark}\n\n\n\n\\subsection{Proof of Theorem~\\ref{thm:gen_ach}}\\label{subsec:proofthm1}\nFix $p(u,v|s)p(x_1|u,v,s)p(x_2)$ and choose $0<\\epsilon'<\\epsilon$. \n\n\\subsubsection*{Codebook Generation}\n\\begin{itemize}\n\\item Encoder 2 generates $2^{nR_2}$ codewords $x_2^n(m_2), m_2\\in[1:2^{nR_2}]$ i.i.d. according to $p(x_2)$.\n\\item Encoder 1 generates $2^{n(R_1+R_1')}$ codewords $u^n(m_1,l_1)$ i.i.d. according to $p(u)$, where $m_1\\in[1:2^{nR_1}]$ and $l_1\\in[1:2^{nR_1'}]$. Encoder 1 also generates $2^{nR_2'}$ codewords $v^n(l_2), l_2\\in[1:2^{nR_2'}]$ i.i.d. according to $p(v)$.\n\\end{itemize}\n\n\\subsubsection*{Encoding}\n\\begin{itemize}\n\\item To transmit message $m_2$, encoder~2 transmits $x_2^n(m_2)$.\n\\item Assume that the message to be transmitted by encoder~1 is $m_1$. After observing $s^n$, it finds a pair $(l_1,l_2)$ such that $(u^n(m_1,l_1),v^n(l_2),s^n)\\in\\mc{T}^{(n)}_{\\epsilon'}$. Then it transmits $x_1^n$, which is generated i.i.d. 
according to $p(x_1|u,v,s)$. If no such pair is found, it declares an error.\n\\end{itemize}\n\n\\subsubsection*{Decoding}\n\\begin{itemize}\n\\item Decoder 1 finds the unique $\\hat{m}_1$ such that $(u^n(\\hat{m}_1,l_1),y_1^n)\\in\\mc{T}^{(n)}_{\\epsilon}$ for some $l_1$.\n\\item Decoder 2 finds the unique $\\hat{m}_2$ such that $(x_2^n(\\hat{m}_2),v^n(l_2),y_2^n)\\in\\mc{T}^{(n)}_{\\epsilon}$ for some $l_2$.\n\\end{itemize}\n\n\\subsubsection*{Probability of Error}\nDue to the symmetry of the code, the average probability of error $\\msf{P}(\\mc{E})$ is equal to $\\msf{P}(\\mc{E}|M_1,M_2)$, so we can assume $(M_1,M_2) = (1,1)$ and analyze $\\msf{P}(\\mc{E}|1,1)$. Let $(L_1,L_2)$ denote the pair of indices chosen by encoder 1 such that $(U^n(1,L_1),V^n(L_2),S^n)\\in\\mc{T}^{(n)}_{\\epsilon'}$. \n\nWe now define events that cover the error event:\n\\begin{IEEEeqnarray*}{rCl}\n\\mc{E}_1 & \\triangleq & \\{(U^n(1,l_1),V^n(l_2),S^n)\\notin\\mc{T}^{(n)}_{\\epsilon'} \\text{ for all } l_1, l_2\\}, \\label{eq:E1}\\\\\n\\mc{E}_2 & \\triangleq & \\{(U^n(1,L_1),Y_1^n)\\notin\\mc{T}^{(n)}_{\\epsilon}\\}, \\label{eq:E2}\\\\\n\\mc{E}_3 & \\triangleq & \\{(U^n(m_1,l_1),Y_1^n)\\in\\mc{T}^{(n)}_{\\epsilon} \\text{ for some }m_1\\neq 1, l_1\\}, \\label{eq:E3}\\\\\n\\mc{E}_4 & \\triangleq & \\{(X_2^n(1),V^n(L_2),Y_2^n)\\notin\\mc{T}^{(n)}_{\\epsilon}\\} \\label{eq:E4},\\\\\n\\mc{E}_5 & \\triangleq & \\{(X_2^n(m_2),V^n(l_2),Y_2^n)\\in\\mc{T}^{(n)}_{\\epsilon} \\text{ for some }m_2\\neq 1, l_2\\} \\label{eq:E5}.\n\\end{IEEEeqnarray*}\n\nSince an error occurs only if at least one of the above events occurs, we have the following upper bound on the average probability of error:\n$$\\msf{P}(\\mc{E}) \\leq \\msf{P}(\\mc{E}_1) + \\msf{P}(\\mc{E}_2\\cap\\mc{E}_1^c) + \\msf{P}(\\mc{E}_3) + \\msf{P}(\\mc{E}_4\\cap\\mc{E}_1^c) + \\msf{P}(\\mc{E}_5).$$\n\nSimilar to the proof of the mutual covering lemma \\cite[Ch. 
8]{Gam12}, we can show that $\\msf{P}(\\mc{E}_1)\\rightarrow 0$ as $n\\rightarrow\\infty$ if \n\\begin{IEEEeqnarray}{rCl}\nR_1' & > & I(U;S) + \\delta(\\epsilon'),\\label{eq:ach1_state}\\\\\nR_2' & > & I(V;S) + \\delta(\\epsilon'),\\label{eq:ach2_state}\\\\\nR_1' + R_2' & > & I(U;S) + I(U,S;V) + \\delta(\\epsilon'),\\label{eq:ach3_state}\n\\end{IEEEeqnarray}\nwhere $\\delta(\\epsilon')\\rightarrow 0$ as $\\epsilon'\\rightarrow 0.$\n\nBy the conditional typicality lemma \\cite[Ch. 2]{Gam12}, $\\msf{P}(\\mc{E}_2\\cap\\mc{E}_1^c)$ and $\\msf{P}(\\mc{E}_4\\cap\\mc{E}_1^c)$ both tend to zero as $n\\rightarrow\\infty$.\n\nBy the packing lemma \\cite[Ch. 3]{Gam12}, for $\\msf{P}(\\mc{E}_3)\\rightarrow 0$, we require\n\\begin{equation}\\label{eq:ach4_state} R_1+R_1' < I(U;Y_1) - \\delta(\\epsilon),\\end{equation}\nand for $\\msf{P}(\\mc{E}_5)\\rightarrow 0$, we require\n\\begin{IEEEeqnarray}{rCl}\nR_2 & < & I(X_2;Y_2|V) - \\delta(\\epsilon),\\label{eq:ach5_state}\\\\\nR_2 + R_2' & < & I(V,X_2;Y_2) - \\delta(\\epsilon),\\label{eq:ach6_state}\n\\end{IEEEeqnarray}\nwhere $\\delta(\\epsilon)\\rightarrow 0$ as $\\epsilon\\rightarrow 0.$ Hence, $\\msf{P}(\\mc{E})\\rightarrow 0$ if \\eqref{eq:ach1_state}, \\eqref{eq:ach2_state}, \\eqref{eq:ach3_state}, \\eqref{eq:ach4_state}, \\eqref{eq:ach5_state}, \\eqref{eq:ach6_state} are satisfied. \nAllowing coded time sharing with a time-sharing random variable $Q$ and eliminating $R_1',R_2'$ via Fourier-Motzkin elimination, we obtain the region \\eqref{eq:gen_ach}.\\hfill\\IEEEQED\n\n\n\\subsection{Proof of Theorem~\\ref{thm:cap}}\\label{subsec:proof_thm_cap}\nAchievability follows from Theorem~\\ref{thm:gen_ach} by choosing $U=Y_1$ and $V=T_1$. These choices are valid since encoder 1 knows $(M_1,S^n)$, which determines $T_1^n$ and $Y_1^n$. We now prove the converse.\n\nGiven a sequence of codes that achieves reliable communication (i.e., 
$P_e^{(n)}\\rightarrow 0$ as $n\\rightarrow\\infty$) at rates $(R_1,R_2)$, we have, by Fano's inequality:\n\\begin{IEEEeqnarray*}{c}\nH(M_1|Y_1^n) \\leq n\\epsilon_n,\\\\\nH(M_2|Y_2^n) \\leq n\\epsilon_n,\n\\end{IEEEeqnarray*}\nwhere $\\epsilon_n\\rightarrow 0$ as $n\\rightarrow\\infty.$\n\nUsing these, we can establish an upper bound on $R_1$ as follows,\n\\begin{IEEEeqnarray}{rCl}\nnR_1 & = & H(M_1) \\nonumber\\\\\n& = & H(M_1|S^n) \\nonumber\\\\\n& \\leq & I(M_1;Y_1^n|S^n) + n\\epsilon_n \\nonumber\\\\\n& \\leq & H(Y_1^n|S^n) + n\\epsilon_n \\nonumber\\\\\n& \\leq & \\sum_{i=1}^n H(Y_{1i}|S_i) + n\\epsilon_n. \\nonumber\n\\end{IEEEeqnarray}\n\nA simple upper bound on $R_2$ is established in the following:\n\\begin{IEEEeqnarray}{rCl}\nnR_2 & = & H(M_2)\\nonumber\\\\\n& = & H(M_2|T_1^n)\\nonumber\\\\\n& \\leq & I(M_2;Y_2^n|T_1^n) + n\\epsilon_n\\nonumber\\\\\n& \\leq & H(Y_2^n|T_1^n) + n\\epsilon_n\\nonumber\\\\\n& \\leq & \\sum_{i=1}^n H(Y_{2i}|T_{1i}) + n\\epsilon_n.\\nonumber\n\\end{IEEEeqnarray}\n\n\nFor the second upper bound on $R_2$, consider the following:\n\\begin{IEEEeqnarray}{rCl}\nnR_2 & = & H(M_2)\\nonumber\\\\\n& = & H(M_2) + H(Y_2^n|M_2) - H(Y_2^n|M_2) \\nonumber\\\\\n& = & H(Y_2^n) + H(M_2|Y_2^n) - H(Y_2^n|M_2) \\nonumber\\\\\n& \\leq & \\sum_{i=1}^nH(Y_{2i}) + n\\epsilon_n - H(Y_2^n|M_2)\\nonumber\\\\\n& \\stackrel{(a)}{=} & \\sum_{i=1}^nH(Y_{2i}) + n\\epsilon_n - H(T_1^n|M_2)\\nonumber\\\\\n& \\stackrel{(b)}{=} & \\sum_{i=1}^nH(Y_{2i}) + n\\epsilon_n - H(T_1^n)\\nonumber\\\\\n& \\leq & \\sum_{i=1}^nH(Y_{2i}) + n\\epsilon_n - I(T_1^n;S^n)\\nonumber\\\\\n& = & \\sum_{i=1}^nH(Y_{2i}) + n\\epsilon_n - H(S^n) + H(T_1^n|S^n)\\nonumber\\\\\n& \\leq & \\sum_{i=1}^nH(Y_{2i}) + n\\epsilon_n - H(S^n) + \\sum_{i=1}^nH(T_{1i}|S_i)\\nonumber\\\\\n& \\stackrel{(c)}{=} & \\sum_{i=1}^nH(Y_{2i}) + n\\epsilon_n - \\sum_{i=1}^nH(S_i) + \\sum_{i=1}^nH(T_{1i}|S_i)\\nonumber\\\\\n& = & \\sum_{i=1}^nH(Y_{2i}) + n\\epsilon_n - 
\\sum_{i=1}^nI(T_{1i};S_i),\\nonumber\n\\end{IEEEeqnarray}\nwhere step $(a)$ follows by the injectivity property, step $(b)$ follows because $T_1^n$ is independent of $M_2$, and step $(c)$ follows because $S^n$ is an i.i.d. sequence.\n\n\nWe now establish an upper bound on the sum-rate:\n\\begin{IEEEeqnarray}{rL}\nn(R_1+R_2) & = H(M_1|S^n) + H(M_2)\\nonumber\\\\\n& \\leq I(M_1;T_1^n,Y_1^n|S^n) + n\\epsilon_n + H(Y_2^n) + H(M_2|Y_2^n) - H(Y_2^n|M_2)\\nonumber\\\\\n& \\leq I(M_1;T_1^n,Y_1^n|S^n) + n\\epsilon_n + H(Y_2^n) + n\\epsilon_n - H(Y_2^n|M_2)\\nonumber\\\\\n& \\stackrel{(a)}{\\leq} H(T_1^n,Y_1^n|S^n) + H(Y_2^n) - H(T_1^n|M_2)+ 2n\\epsilon_n\\nonumber\\\\\n& \\stackrel{(b)}{=} H(T_1^n,Y_1^n|S^n) + H(Y_2^n) - H(T_1^n)+ 2n\\epsilon_n\\nonumber\\\\\n& = H(Y_1^n|S^n,T_1^n) + H(Y_2^n) - I(T_1^n;S^n)+ 2n\\epsilon_n\\nonumber\\\\\n& \\stackrel{(c)}{\\leq} \\sum_{i=1}^n H(Y_{1i}|S_i,T_{1i}) + \\sum_{i=1}^n H(Y_{2i}) - \\sum_{i=1}^nI(T_{1i};S_i)+ 2n\\epsilon_n,\\nonumber\n\\end{IEEEeqnarray}\nwhere, as before, steps $(a)$, $(b)$, and $(c)$ follow from the injectivity property, the independence of $T_1^n$ and $M_2$, and the i.i.d. state, respectively.\n\n\n\nFrom the four bounds established in this section, we can complete the converse by introducing an independent time-sharing random variable $Q$ uniformly distributed on $[1:n]$ and defining $X_1$, $T_1$, $S$, $X_2$, $Y_1$, $Y_2$ to be $X_{1Q}$, $T_{1Q}$, $S_{Q}$, $X_{2Q}$, $Y_{1Q}$, $Y_{2Q}$, respectively. 
\\hfill\\IEEEQED\n\n\n\\subsection{Example: Modulo-Additive State-Dependent Z-Interference Channel}\\label{subsec:modulo}\n\n\\begin{thm}\\label{thm:modulo}\nThe capacity region of the modulo-additive S-D Z-IC in Fig.~\\ref{fig:model_modulo} is given by the convex closure of the rate pairs $(R_1,R_2)$ satisfying\n\\begin{equation}\\label{eq:cap_modulo}\n\\begin{split}\nR_1 & < (1-\\lambda)\\log |\\mc{X}| + \\lambda H(\\bm{p}),\\\\\nR_2 & < \\log |\\mc{X}| - H\\left(\\lambda \\bm{p} + (1-\\lambda)\\bm{\\delta}_0\\right),\n\\end{split}\n\\end{equation}\nfor some $\\bm{p}\\in\\mc{P}_{\\mc{X}}$, where $\\mc{P}_{\\mc{X}}$ denotes the probability simplex corresponding to $\\mc{X}$, $H(\\bm{p})$ stands for the entropy of the pmf $\\bm{p}$ and $\\bm{\\delta}_0$ denotes the pmf that has unit mass at $0$.\n\\end{thm}\n\nThe capacity region when $\\mc{X}=\\{0,1\\}$ and $S$ is i.i.d. Ber$\\left(\\frac{1}{2}\\right)$ is shown in Figure~\\ref{fig:modulo}.\n\n\n\n\\subsection*{Proof of Theorem~\\ref{thm:modulo}}\n\n\n\\begin{figure}[!t]\n\\centering\n\\input{modulo.tikz}\n\\caption{Capacity Region with $\\mc{X}=\\{0,1\\}$ and $S$ i.i.d. Ber$\\left(\\frac{1}{2}\\right).$ The dotted line shows the capacity region when all nodes have state information. Note that the maximal sum-rate of $1.5$ bits\/channel use is achievable with state information only at the interfering Tx.}\n\\label{fig:modulo}\n\\end{figure}\n\nConsider the capacity region stated in Theorem~\\ref{thm:cap}. Let $\\bm{p}_{1,0}$, $\\bm{p}_{1,1}$ and $\\bm{p}_{2}$, all in $\\mc{P}_{\\mc{X}}$, be used to denote the pmf's ${p(x_1|s=0,q)}$, $p(x_1|s=1,q)$ and $p(x_2|q)$ respectively. 
Evaluating each of the constraints in \\eqref{eq:cap} gives us the following expression for the capacity region:\n\\begin{equation}\\label{eq:cap_modulo_1}\n\\begin{split}\nR_1 & < (1-\\lambda)H(\\bm{p}_{1,0}) + \\lambda H(\\bm{p}_{1,1}),\\\\\nR_2 & < H(\\bm{p}_{2}),\\\\ \nR_2 & < H\\left((1-\\lambda)\\bm{p}_{2} + \\lambda\\widetilde{\\bm{p}}\\right) + \\lambda H(\\bm{p}_{1,1}) \\\\\n&\\quad\\quad - H\\left(\\lambda \\bm{p}_{1,1} + (1-\\lambda)\\bm{\\delta}_0\\right),\\\\\nR_1 + R_2 & < (1-\\lambda)H(\\bm{p}_{1,0})+H\\left((1-\\lambda)\\bm{p}_{2} + \\lambda\\widetilde{\\bm{p}}\\right)\\\\\n&\\quad\\quad + \\lambda H(\\bm{p}_{1,1}) - H\\left(\\lambda \\bm{p}_{1,1} + (1-\\lambda)\\bm{\\delta}_0\\right),\n\\end{split}\n\\end{equation}\nwhere $\\widetilde{\\bm{p}}\\in\\mc{P}_{\\mc{X}}$ is a pmf that is defined as \n$$\\widetilde{\\bm{p}}(k) = \\sum_{i=0}^{|\\mc{X}|-1} \\bm{p}_{1,1}(i)\\bm{p}_{2}(k-i),\\quad 0\\leq k\\leq |\\mc{X}|-1,$$ and $k-i$ should be understood to be $(k-i)\\text{ mod } |\\mc{X}|$.\n\nFirstly, we note that $\\bm{p}_{1,0}$ should be chosen as the pmf of the uniform distribution to maximize $H(\\bm{p}_{1,0})$, thus maximizing the RHS of the constraints in \\eqref{eq:cap_modulo_1}. Similarly, $\\bm{p}_2$ should also be chosen to be the pmf of the uniform distribution. 
Then, we can also remove the first constraint on $R_2$, since it is rendered redundant by the other constraint on $R_2$.\nThus, the capacity region is given by the convex closure of $(R_1,R_2)$ satisfying\n\\begin{equation}\\label{eq:cap_modulo_3}\n\\begin{split}\nR_1 & < (1-\\lambda)\\log(|\\mc{X}|) + \\lambda H(\\bm{p}_{1,1}),\\\\\nR_2 & < \\log(|\\mc{X}|) + \\lambda H(\\bm{p}_{1,1}) - H\\left(\\lambda \\bm{p}_{1,1} + (1-\\lambda)\\bm{\\delta}_0\\right),\\\\\nR_1 + R_2 & < (2-\\lambda)\\log(|\\mc{X}|)+ \\lambda H(\\bm{p}_{1,1})\\\\\n&\\quad\\quad\\quad\\quad - H\\left(\\lambda \\bm{p}_{1,1} + (1-\\lambda)\\bm{\\delta}_0\\right),\n\\end{split}\n\\end{equation} for $\\bm{p}_{1,1}\\in\\mc{P}_{\\mc{X}}.$\n\nFor any $\\bm{p}$, the region in \\eqref{eq:cap_modulo} is contained in the region in \\eqref{eq:cap_modulo_3} for $\\bm{p}_{1,1}=\\bm{p}$. Hence, the convex closure of \\eqref{eq:cap_modulo} is contained in the convex closure of \\eqref{eq:cap_modulo_3}.\n\nHowever, also note that the region in \\eqref{eq:cap_modulo_3} for any $\\bm{p}_{1,1}$ is contained in the convex hull of two regions, one obtained by setting $\\bm{p} = \\bm{p}_{1,1}$ in \\eqref{eq:cap_modulo} and the other obtained by setting $\\bm{p}=\\bm{\\delta}_0$ in \\eqref{eq:cap_modulo}. Hence, the convex closure of \\eqref{eq:cap_modulo_3} is also contained in the convex closure of \\eqref{eq:cap_modulo}. This concludes the proof of Theorem~\\ref{thm:modulo}.\\hfill\\IEEEQED \n\n\n\\begin{remark}\nThe optimal sum-rate $(2-\\lambda)\\log |\\mc{X}|$ is achieved by choosing $\\bm{p}=\\bm{\\delta}_0$. This corresponds to setting the transmitted symbols of the first transmitter to $0$ when $S=1$ so that it does not interfere with the second transmission. The first transmitter then treats these symbols as stuck to $0$ and performs Gelfand-Pinsker coding. The second transmitter transmits at rate $\\log(|\\mc{X}|)$ bits\/channel use. 
It can be easily verified that this is also the optimal sum-rate when all nodes are provided with the state~information. Thus, for this channel, the sum-capacity when all nodes have state information is the same as that when only encoder~1 has state information.\n\\end{remark}\n\n\\begin{remark}\nFinally, we note that there is also another way to achieve the capacity region of the modulo-additive S-D Z-IC. For this, first recall that to get the capacity region expression in Theorem~\\ref{thm:cap}, we set the auxiliary random variables $U$ and $V$ in the expression in Theorem~\\ref{thm:gen_ach} to $Y_1$ and $T_1$, respectively. Another choice, which corresponds to standard Gelfand-Pinsker coding for the first transmitter-receiver pair and treating interference as noise at the second receiver, is to choose $V=\\phi$ in Theorem~\\ref{thm:gen_ach}. This gives us the following achievable region:\n\\begin{equation}\\label{eq:int_noise}\n\\begin{split}\nR_1 & < I(U;Y_1|Q)-I(U;S|Q),\\\\\nR_2 & < I(X_2;Y_2|Q),\n\\end{split}\n\\end{equation} for some pmf $p(q)p(u|s,q)p(x_1|u,s,q)p(x_2|q)$. We can now see that for the modulo-additive S-D Z-IC, the capacity region is also achieved by making the following choices in the above region: $p(u|s=0)$ to be the uniform pmf over $\\mc{X}$, $p(u|s=1)$ to be $\\bm{p}$, $p(x_1|u,s)$ to be $\\bm{\\delta}_u$ (i.e., $X_1=U$), and $p(x_2)$ to be the uniform pmf over $\\mc{X}$. Thus, the capacity region of the modulo-additive S-D Z-IC can also be achieved by treating interference as noise at the second receiver.\n\\end{remark}\n\n\\subsection{Multiple-level modulo-additive S-D Z-IC}\\label{subsec:multiplelevel}\n\nThe linear deterministic model introduced in \\cite{Ave11} consists of multiple \\emph{bit levels} that roughly correspond to bits communicated at different power levels. The modulo-additive S-D Z-IC that we looked at in the previous subsection is a special case in which the number of levels is one. 
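Before extending to multiple levels, the single-level binary region of Theorem~\ref{thm:modulo} is easy to evaluate numerically. The sketch below (our check, with $\lambda=1/2$, $\mc{X}=\{0,1\}$, and helper names of our choosing) sweeps $\bm{p}=\text{Ber}(q)$ over a grid and confirms that the sum-rate $1.5$ bits/channel use is attained at $\bm{p}=\bm{\delta}_0$:

```python
import math

def H2(q):
    """Binary entropy in bits, with H2(0) = H2(1) = 0."""
    if q in (0.0, 1.0):
        return 0.0
    return -q * math.log2(q) - (1 - q) * math.log2(1 - q)

def corner(q, lam=0.5):
    """Corner point of (eq:cap_modulo) for |X| = 2 and p = Ber(q):
    R1 < (1 - lam) + lam * H(p),
    R2 < 1 - H(lam * p + (1 - lam) * delta_0).
    The mixture pmf puts mass lam * q on the symbol 1."""
    return (1 - lam) + lam * H2(q), 1 - H2(lam * q)

qs = [k / 100 for k in range(101)]
sums = [sum(corner(q)) for q in qs]
best = max(range(len(qs)), key=sums.__getitem__)
# The maximal sum-rate 1.5 is attained at q = 0, i.e. p = delta_0.
```

This corroborates the remark above: the optimal sum-rate $(2-\lambda)\log|\mc{X}| = 1.5$ is achieved by $\bm{p}=\bm{\delta}_0$, i.e., the corner point $(R_1,R_2)=(0.5,1)$ of Fig.~\ref{fig:modulo}.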
Extending the model to have multiple bit levels raises some interesting questions, which we consider in this subsection. \n\nMore specifically, consider the model depicted in Fig.~\\ref{fig:model_modulo_multiple}, which can be thought of as three copies of the model in Fig.~\\ref{fig:model_modulo} that are, however, related by the common state affecting them. For simplicity, we restrict attention to the case when the alphabet on each level, denoted by $\\mc{X}$, is the binary alphabet, i.e., $\\{0,1\\}$, and the state is Ber$(0.5).$ Let $L$ denote the number of bit levels.\n\n\\begin{figure}[!th]\n\\centering\n\\includegraphics[scale=1.5]{modulo_multiple_levels.pdf}\n\\caption{The Modulo-Additive S-D Z-IC with multiple bit levels.}\n\\label{fig:model_modulo_multiple}\n\\end{figure}\n\n\\begin{figure}[!th]\n\\centering\n\\begin{subfigure}{\\linewidth}\n\\centering\n \\resizebox{.5\\linewidth}{!}{\\input{twolevel_binary_SD_ZIC.tikz}}\n\\caption{Comparison of the different rate regions for the 2-level binary modulo-additive S-D Z-IC}\n\\label{fig:modulo_multiple_2}\n\\end{subfigure}\n\\vspace{4mm}\n\\begin{subfigure}{\\linewidth}\n\\centering\n\\resizebox{.5\\linewidth}{!}{\\input{threelevel_binary_SD_ZIC.tikz}}\n\\caption{Comparison of the different rate regions for the 3-level binary modulo-additive S-D Z-IC}\n\\label{fig:modulo_multiple_3}\n\\end{subfigure}\n\\end{figure}\n\nThis model also falls under the injective deterministic setup for which we have completely characterized the capacity region, so the capacity region can be easily computed, as we indeed do in the following. This evaluation also allows us to immediately compare the capacity region with the rates achieved by some straightforward achievability schemes. In particular, consider the following two simple achievability schemes:\n\\begin{itemize}\n\\item ``Separation'': The simplest strategy one can employ is to consider each level separately and communicate over it independently of the other levels. 
This shows that the rate pairs $(R_1,R_2)$ satisfying\n\\begin{equation}\\label{eq:ach_separation}\n\\begin{split}\nR_1 & < \\frac{L}{2} + \\sum_{i=1}^{L}\\frac{1}{2} H(\\bm{p}_i),\\\\\nR_2 & < L - \\sum_{i=1}^{L}H\\left(\\frac{1}{2}\\bm{p}_i + \\frac{1}{2}\\bm{\\delta}_0\\right),\n\\end{split}\n\\end{equation}\nfor some $\\bm{p}_1,\\bm{p}_2,\\dots,\\bm{p}_L\\in\\mc{P}_{\\mc{X}}$ are achievable.\n\\item ``Communicate state'': Alternatively, by noticing that strictly better rates could have been achieved if decoder~2 also had access to the state information, we can reserve one level to communicate the state from encoder~1 to decoder~2. This is done by ensuring that encoder~1 transmits a 1 on this reserved level whenever the state is 1, and encoder~2 constantly transmits a 0 on this level. The nodes communicate on the remaining levels, keeping in mind that now decoder~2 also has state information. Note that while no communication can happen between encoder~2 and decoder~2 on the reserved level, encoder~1 can still communicate with decoder~1 at rate 0.5 on this level by treating it as a channel with stuck bits (the bit equals 1 whenever the state equals 1). This strategy provides us with the following achievable region:\n\\begin{equation}\\label{eq:ach_state}\n\\begin{split}\nR_1 & < \\frac{L}{2} + \\frac{1}{2} H(\\bm{p}),\\\\\nR_2 & < L-1 - \\frac{1}{2}H\\left( \\bm{p}\\right),\n\\end{split}\n\\end{equation}\nfor some $\\bm{p}\\in\\mc{P}_{\\mc{X}^{L-1}}$.\n\\end{itemize}\n\nWe can expect that the suboptimality of reserving one level for communicating the state should become relatively small as the number of levels increases, i.e., at high SNR. This is corroborated by the numerical analysis, shown in Figs.~\\ref{fig:modulo_multiple_2} and \\ref{fig:modulo_multiple_3}, in which we can see that there is a marked improvement in the rates achieved by this scheme relative to the capacity region as we increase the number of levels from 2 to 3. 
Indeed, since all the levels are affected by the same state, the entropy of the state becomes small compared to the communication rates as the SNR increases, so it is not a big overhead to explicitly communicate the state to decoder~2 at high SNR. However, at low SNR, the figures show that the overhead incurred is quite high, making this approach significantly suboptimal, while the simple scheme of treating the levels separately comes very close to achieving the entire capacity region.\n\n\n\n\n\n\\section{Interference Channels with Partial Cribbing}\\label{sec:cribbing}\n\n\nIn this section, we focus on deterministic Z-interference channels when the interfering transmitter can overhear the signal transmitted by the other transmitter after it passes through some channel. This channel is also modeled as a deterministic channel, dubbed \\emph{partial cribbing} in \\cite{Asn13}. Deterministic models, in particular linear deterministic models \\cite{Ave11}, have gained popularity due to the observation that they are simpler to analyze and are provably close in performance to Gaussian models. \n\nThere have been quite a few very sophisticated achievability schemes designed for interference channels with causal cribbing encoders; however, the optimality of the achievable rate regions has not been addressed. In the most general interference channel model with causal cribbing \\cite{Yan11}, each encoder needs to split its message into four parts: a common part to be sent cooperatively, a common part to be sent non-cooperatively, a private part to be sent cooperatively, and a private part to be sent non-cooperatively. Further, because of the causal nature of cribbing, achievability schemes usually involve block-Markov coding, so that each encoder also needs to consider the cooperative messages of both encoders from the previous block. 
Motivated by the alternative achievability scheme we have presented earlier for the general interference channel, we present a simple optimal achievability scheme that minimizes the rate-splitting that is required. Specifically, while encoder~2 only splits its message into a cooperative and non-cooperative private part, encoder~1 does not perform any rate-splitting at all. By focusing on the specific configuration of the Z-interference channel, we are able to prove the optimality of an achievability scheme that is simpler than the highly involved achievability schemes for the general case that are currently known.\n \n\n\n\n\n\n\n\n\\subsection{Result for Partial Cribbing}\\label{subsec:main_res_crib}\n\\begin{thm}\\label{thm:part_crib}\nThe capacity region of the injective deterministic Z-interference channel with unidirectional partial cribbing, depicted in Fig.~\\ref{fig:model_crib}, is given by the convex closure of $(R_1,R_2)$ satisfying \\begin{equation}\\label{eq:cap_part_crib}\n\\begin{split}\nR_1 & \\leq H(Y_1|W),\\\\\nR_2 & \\leq \\min\\Big(H(Y_2), H(Y_2,Z_2|T_1,W)\\Big),\\\\\nR_1 + R_2 & \\leq H(Y_1|T_1,W)+\\min\\Big(H(Y_2), H(Y_2,Z_2|W)\\Big),\n\\end{split}\n\\end{equation}\nfor $p(w)p(x_1|w)p(x_2|w),$ where $W$ is an auxiliary random variable whose cardinality can be bounded as\n$|\\mathcal{W}|\\leq |\\mathcal{Y}_2|+3.$\n\\end{thm}\n\nThe proof of this theorem is presented below.\n\n\\subsection{Proof of Theorem~\\ref{thm:part_crib}}\\label{subsec:proof_crib}\n\\emph{Achievability}\\\\\nChoose a pmf $p(w)p(u_d,u_c,x_1|w)p(x_2,z_2|w)$ and ${0<\\epsilon'<\\epsilon}$, where for the sake of generality, we use the auxiliary random variables $U_d$ and $U_c$. In the injective deterministic case at hand, they can be set to $Y_1$ and $T_1$ respectively.\n\\subsubsection*{Codebook Generation}\nThe communication time is divided into $B$ blocks, each containing $n$ channel uses, and an independent random code is generated for each block $b\\in[1:B]$. 
Whenever it is clear from the context, we suppress the dependence of codewords on $b$ to keep the notation simple. The messages in block $B$ are fixed a priori, so a total of $B-1$ messages are communicated over the $B$ blocks. The resulting rate loss can be made as negligible as desired by choosing a sufficiently large $B$.\n\nWe split $R_2$ as $R_2'+R_2''$, which corresponds to the split of message~2 into two parts, one that will be sent cooperatively by both transmitters to receiver~2 and the other non-cooperatively only by transmitter~2 to receiver~2. For each block $b$, let $m_{2,b}'\\in[1:2^{nR_2'}]$ and $m_{2,b}''\\in[1:2^{nR_2''}]$. For each block $b\\in [1:B]$, we generate $2^{nR_2'}$ sequences $w^n$ i.i.d. according to $p(w)$.\n\\begin{itemize}\n\\item For each $w^n$ in block $b$, we generate $2^{nR_{2}'}$ sequences $\\left\\{z_2^n(w^n,m'_{2,b})\\right\\}$ i.i.d. according to $p(z_2|w)$. Then for each $(w^n,z_2^n)$, we generate $2^{nR_2''}$ sequences $\\left\\{x_2^n(w^n,z_2^n,m''_{2,b})\\right\\}$ i.i.d. according to $p(x_2|z_2,w)$.\n\\item For each $w^n$ in block $b$, we generate $2^{nR_c}$ sequences $\\left\\{u_c^n(w^n,l_c)\\right\\}$ i.i.d. according to $p(u_c|w)$, where $l_c\\in[1:2^{nR_c}]$. We also generate $2^{n(R_1+R_d)}$ sequences $\\left\\{u_d^n(m_{1,b},l_d)\\right\\}$ i.i.d. according to $p(u_{d})$, where $m_{1,b}\\in[1:2^{nR_1}]$ and $l_d\\in[1:2^{nR_d}]$.\\footnote{Note that the $u_d^n$ sequences are generated independently of the $w^n$ sequences.}\n\\end{itemize}\n\n\\subsubsection*{Encoding}\nLet us assume for now that as a result of the cribbing, encoder 1 knows $m'_{2,b-1}$ at the end of block ${b-1}$. 
Then in block $b$, both encoders can encode $m'_{2,b-1}$ using $w^n(m'_{2,b-1})$ where $w^n$ is from the code for block $b$.\n\\begin{itemize}\n\\item To transmit message $m_{1,b}$, encoder 1 finds a pair $(l_{d},l_{c})$ such that $$(w^n(m'_{2,b-1}),u_c^n(w^n,l_c),u_d^n(m_{1,b},l_{d}))\\in\\mc{T}^{(n)}_{\\epsilon'}.$$ It transmits $x_1^n$ that is generated i.i.d. according to $p(x_1|w,u_d,u_c)$.\n\\item To transmit message $m_{2,b}=(m'_{2,b},m''_{2,b})$, encoder 2 encodes $m'_{2,b}$ as $z_2^n(w^n,m'_{2,b})$ and then transmits $x_2^n(w^n,z_2^n,m''_{2,b})$.\n\\end{itemize}\nWe fix the messages in block $B$ a priori to be $m_{1,B}=1$, $m'_{2,B}=1$ and $m''_{2,B}=1$. Also, to avoid mentioning edge cases explicitly, whenever $m_{1,0}$, $m'_{2,0}$ or $m''_{2,0}$ appear, we assume that all are fixed to 1.\n\n\\subsubsection*{Decoding}\n\\begin{itemize}\n\\item \\emph{Encoder~1:} At the end of block $b$, assuming it has already decoded $m'_{2,b-1}$ at the end of block $b-1$, encoder 1 decodes $m'_{2,b}$ by finding the unique $\\hat{m}'_{2,b}$ such that the sequence $z_2^n$ it has observed via cribbing is equal to $z_2^n(w^n,\\hat{m}'_{2,b})$.\n\\item \\emph{Decoder~1:} In each block $b$, decoder 1 finds the unique $\\hat{m}_{1,b}$ such that $(u_d^n(\\hat{m}_{1,b},l_{d}),y_1^n)\\in\\mc{T}^{(n)}_{\\epsilon}$ for some $l_{d}$.\n\\item \\emph{Decoder~2:} Decoder 2 performs backward decoding as follows: \n\\begin{itemize}\n\\item In block $B$, decoder 2 finds the unique $\\hat{m}'_{2,B-1}$ such that the condition \\eqref{eq:jtd_dec2_B} is satisfied for some $l_c.$\n\\begin{equation}\\label{eq:jtd_dec2_B}\n(w^n(\\hat{m}'_{2,B-1}),z_2^n(w^n,1),x_2^n(w^n,z_2^n,1),u_c^n(w^n,l_c),y_2^n)\\in\\mc{T}^{(n)}_{\\epsilon}\n\\end{equation}\n\\item In block $b$, assuming $m'_{2,b}$ has been decoded correctly, it finds the unique $(\\hat{m}'_{2,b-1}, \\hat{m}''_{2,b})$ such that the condition \\eqref{eq:jtd_dec2} is satisfied for some 
$l_{c}$.\n\\begin{equation}\\label{eq:jtd_dec2}\n(w^n(\\hat{m}'_{2,b-1}),z_2^n(w^n,m'_{2,b}),x_2^n(w^n,z_2^n,\\hat{m}''_{2,b}),u_c^n(w^n,l_c),y_2^n)\\in\\mc{T}^{(n)}_{\\epsilon}\n\\end{equation}\n\\end{itemize}\n\\end{itemize}\n\n\n\n\n\\subsubsection*{Probability of Error}\n\nTo get a vanishing probability of error, we can impose the conditions described in the following list.\n\\begin{itemize}\n\\item\nSimilar to the proof of the mutual covering lemma \\cite[Ch. 8]{Gam12}, we can show that the following conditions are sufficient for the success of encoding at the first transmitter:\n\\begin{IEEEeqnarray}{rCl}\nR_d & > & I(U_d;W) +\\delta(\\epsilon'),\\label{eq:dec1}\\\\\nR_d + R_c & > & I(U_d;U_c,W)+\\delta(\\epsilon')\\label{eq:dec2}.\n\\end{IEEEeqnarray}\n\\item For the decoding at encoder 1 to succeed:\n\\begin{equation}\nR'_2 < H(Z_2|W)-\\delta(\\epsilon).\\label{eq:dec3}\n\\end{equation}\n\\item For decoding at decoder 1 to succeed:\n\\begin{equation}\nR_1 + R_d < I(U_d;Y_1)-\\delta(\\epsilon).\\label{eq:dec4}\n\\end{equation}\n\\item For the backward decoding at decoder 2 to succeed, it is sufficient that the following conditions are satisfied:\n\\begin{IEEEeqnarray}{rCl}\nR''_2 & < & I(X_2;Y_2|W,U_c,Z_2)-\\delta(\\epsilon),\\label{eq:dec5}\\\\\nR''_2+ R_c & < & I(U_c,X_2;Y_2|W,Z_2)-\\delta(\\epsilon),\\label{eq:dec6}\\\\\nR'_2 + R''_2 + R_c & < & I(W,U_c,X_2;Y_2)-\\delta(\\epsilon).\\label{eq:dec7}\n\\end{IEEEeqnarray}\n\\end{itemize}\n\nNoting that $R'_2+R''_2=R_2$, eliminating $(R_d,R_c,R'_2,R''_2)$ from \\eqref{eq:dec1}-\\eqref{eq:dec7} via Fourier-Motzkin elimination, and substituting $U_d=Y_1$ and $U_c=T_1$, we get the achievable region in \\eqref{eq:cap_part_crib} with the following additional bound on $R_1$:\n$$R_1< H(Y_1|W,T_1) + H(Y_2|W,Z_2).$$\nTo conclude the proof of achievability, we show that this bound is rendered redundant by the remaining bounds.\n\nA Hamiltonian of distance $d$ satisfies $\\tilde{H}_{mn}=0$ for all $|m-n|>d$, $0\\leq m,n \\leq N$. 
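The block-Markov bookkeeping behind this scheme (forward encoding of $m'_{2,b-1}$, backward decoding at decoder~2) can be sketched as a toy index trace; the variable names are ours and no actual channel is simulated:

```python
# Toy trace of the block-Markov schedule: B blocks, the block-B
# messages fixed to 1 a priori, decoder 2 decoding backwards.
B = 5
m2p = {b: 1 if b in (0, B) else b for b in range(B + 1)}  # m'_{2,b}; values are placeholders

# In block b, both encoders encode m'_{2,b-1} via the cloud center w^n
# (encoder 1 learned m'_{2,b-1} by cribbing during block b-1).
cloud_center = {b: m2p[b - 1] for b in range(1, B + 1)}

# Decoder 2 works backwards: entering block b it already knows m'_{2,b}
# (decoded in block b+1) and recovers m'_{2,b-1} from the cloud center.
decoded = [cloud_center[b] for b in range(B, 0, -1)]
print(decoded)  # recovers m'_{2,B-1}, ..., m'_{2,0}
```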
From our numerical search (Fig.~\\ref{fig2}), we found that $d=1$ Hamiltonians generate only trivial error correcting codes. Therefore we will consider the $d=2$ case, which appears to be the minimal distance required for QEC to exceed break-even. More explicitly, a $d=2$ Hamiltonian satisfies\n\\begin{equation}\\label{eq:locality_constraints}\n \\begin{pmatrix}\n 0 & \\tilde{H}_{03} & \\tilde{H}_{04} & ... & \\tilde{H}_{0N} \\\\\n \\tilde{H}_{30} & 0 & \\tilde{H}_{14} & ... & \\tilde{H}_{1N} \\\\\n \\tilde{H}_{40} & \\tilde{H}_{41} & 0 & ... & \\tilde{H}_{2N} \\\\\n \\vdots & & & & \\tilde{H}_{N-3,N} \\\\\n \\tilde{H}_{N0} & \\tilde{H}_{N1} & ... & \\tilde{H}_{N,N-3} & 0\n \\end{pmatrix} =\\mathbf{0}.\n\\end{equation}\nThe goal here is to find $\\tilde{H}$; in other words, we will solve for the coefficients $\\beta_{ij}$ as well as the logical states while satisfying these locality constraints.\n\n\\subsection{Example of the $\\sqrt{3}$ code}\nWe demonstrate how to solve this problem with a concrete example. Numerically, the $\\sqrt{3}$ code we found with the $d=2$ Hamiltonian had logical states of the form\n\\begin{equation}\n \\begin{split}\n \\ket{\\psi_0} &= a_0 \\ket{0} + a_3 \\ket{3} \\\\\n \\ket{\\psi_1} &= a_1 \\ket{1} + a_4 \\ket{4} + a_6 \\ket{6} .\n \\end{split} \n\\end{equation}\nSince the two logical states don't share any Fock basis states, we can always make all coefficients $a_0,a_3,a_1,a_4,a_6$ real by doing the basis transformation $\\ket{n} \\rightarrow e^{i\\theta_n} \\ket{n}$. The error states are\n\\begin{equation}\n \\begin{split}\n \\ket{\\psi_2} &= \\ket{2} \\propto \\hat a\\ket{\\psi_0} \\\\\n \\ket{\\psi_3} &= \\mathcal{N}_1 ( a_1 \\ket{0} + 2 a_4 \\ket{3} + \\sqrt{6} a_6 \\ket{5} ) \\propto \\hat a\\ket{\\psi_1} .\n \\end{split} \n\\end{equation}\nNotice that if Eq.~(\\ref{eq:locality_constraints}) has a solution, then a solution exists no matter how we choose the basis for the orthogonal subspace $\\mathcal{H}_2$. 
In other words, we can always represent the new basis as linear combinations of the old basis, which together with the old solution $\\beta_{ij}$ gives the new solution $\\beta_{ij}'$. Therefore here we have complete freedom to select the basis $\\{ \\ket{\\psi_4},\\ket{\\psi_5},\\ket{\\psi_6} \\}$ for $\\mathcal{H}_2$ and for convenience of further analysis we make the following choice (notation $\\psi_i(n) = \\inp{n}{\\psi_i}$):\n\\begin{equation}\n \\begin{split}\n \\ket{\\psi_4} & = \\psi_4(1) \\ket{1} + \\psi_4(4) \\ket{4} + \\psi_4(6) \\ket{6} \\\\\n \\ket{\\psi_5} & = \\psi_5(1) \\ket{1} + \\psi_5(4) \\ket{4} + \\psi_5(6) \\ket{6} \\\\\n \\ket{\\psi_6} & = \\psi_6(0) \\ket{0} + \\psi_6(3) \\ket{3} + \\psi_6(5) \\ket{5} .\n \\end{split} \n\\end{equation}\nWe can make all $\\psi_i(n)$ real here, which leads to all $\\beta_{ij}$ also being real.\nWith this basis choice, many constraints in Eq.~(\\ref{eq:locality_constraints}) can be easily satisfied either automatically or by setting certain $\\beta_{ij}=0$. More specifically, for any $|m-n|>2$ such that $\\mele{m}{(\\ket{\\psi_2}\\bra{\\psi_0} + \\ket{\\psi_3}\\bra{\\psi_1})}{n} = 0$, there are two different cases:\n\\begin{enumerate}\n \\item $\\mele{m}{(\\ket{\\psi_i}\\bra{\\psi_j})}{n}=0, \\forall i,j$: in this case $\\tilde{H}_{mn}=0$ is already satisfied.\n \\item there exists $i,j$ such that $\\mele{m}{(\\ket{\\psi_i}\\bra{\\psi_j})}{n}\\neq 0$: in this case we just set $\\beta_{ij}=0$.\n\\end{enumerate}\nTherefore the only non-trivial constraints from Eq.~(\\ref{eq:locality_constraints}) are those with $\\mele{m}{(\\ket{\\psi_2}\\bra{\\psi_0} + \\ket{\\psi_3}\\bra{\\psi_1})}{n} \\neq 0$, which are $\\tilde{H}_{04},\\tilde{H}_{06},\\tilde{H}_{36},\\tilde{H}_{51}$. It is easy to see that the only terms in Eq.~(\\ref{eq:QEC_full}) that will contribute to these matrix elements are $\\ket{\\psi_6}\\bra{\\psi_4}$ and $\\ket{\\psi_6}\\bra{\\psi_5}$. 
With this analysis, the ansatz Hamiltonian Eq.~(\\ref{eq:QEC_full}) can be greatly simplified to the following:\n\\begin{equation}\\label{eq:H1}\n \\tilde{H} = \\ket{\\psi_2}\\bra{\\psi_0} + \\ket{\\psi_3}\\bra{\\psi_1} + \\beta_1 \\ket{\\psi_6} \\bra{\\psi_4} + \\beta_2 \\ket{\\psi_6} \\bra{\\psi_5} ,\n\\end{equation}\nwhere the two free parameters $\\beta_1$ and $\\beta_2$ satisfy a set of linear equations\n\\begin{subequations}\n \\begin{align}\n \\tilde{H}_{04} &= \\psi_3(0) \\psi_1(4) + \\beta_1 \\psi_6(0) \\psi_4(4) + \\beta_2 \\psi_6(0) \\psi_5(4) = 0 \\label{eq:a} \\\\\n \\tilde{H}_{06} &= \\psi_3(0) \\psi_1(6) + \\beta_1 \\psi_6(0) \\psi_4(6) + \\beta_2 \\psi_6(0) \\psi_5(6) = 0 \\label{eq:b} \\\\\n \\tilde{H}_{36} &= \\psi_3(3) \\psi_1(6) + \\beta_1 \\psi_6(3) \\psi_4(6) + \\beta_2 \\psi_6(3) \\psi_5(6) = 0 \\label{eq:c} \\\\\n \\tilde{H}_{51} &= \\psi_3(5) \\psi_1(1) + \\beta_1 \\psi_6(5) \\psi_4(1) + \\beta_2 \\psi_6(5) \\psi_5(1) = 0 \\label{eq:d} .\n \\end{align}\n\\end{subequations}\nThe crucial observation here is that the number of equations (four) is larger than the number of parameters (two), which means the equations must be linearly dependent for a solution to exist. Since their coefficients are essentially functions of $\\ket{\\psi_0}$ and $\\ket{\\psi_1}$, this eventually provides the extra constraints for determining the logical states. Here there should be $4-2=2$ constraints in total.\n\nBelow we show in detail how to obtain the two constraints and eventually the two logical states. 
Comparing Eq.~(\\ref{eq:b}) and Eq.~(\\ref{eq:c}), it is easy to see that the first constraint is\n\\begin{equation}\\label{eq:constraint1}\n \\frac{\\psi_3(0)}{\\psi_3(3)} = \\frac{\\psi_6(0)}{\\psi_6(3)} .\n\\end{equation}\nTo get the second constraint, let us multiply Eq.~(\\ref{eq:a}) by $\\psi_1(4)$, multiply Eq.~(\\ref{eq:b}) by $\\psi_1(6)$, and then add them together:\n\\begin{equation}\n \\begin{split}\n & \\psi_3(0) ( [\\psi_1(4)]^2 + [\\psi_1(6)]^2 ) \\\\\n & +\\beta_1 \\psi_6(0) [ \\psi_4(4) \\psi_1(4) + \\psi_4(6) \\psi_1(6) ] \\\\\n & + \\beta_2 \\psi_6(0) [ \\psi_5(4) \\psi_1(4) + \\psi_5(6) \\psi_1(6) ] = 0 .\n \\end{split}\n\\end{equation}\nUsing the fact that $\\ket{\\psi_1}$ is normalized and orthogonal to both $\\ket{\\psi_4}$ and $\\ket{\\psi_5}$, we have\n\\begin{equation}\n \\begin{split}\n & \\psi_3(0) ( 1 - [\\psi_1(1)]^2) + \\beta_1 \\psi_6(0) [ -\\psi_4(1) \\psi_1(1) ] \\\\\n & + \\beta_2 \\psi_6(0) [ -\\psi_5(1) \\psi_1(1) ] = 0 \\\\\n \\Rightarrow & - \\psi_3(0) \\frac{1 - [\\psi_1(1)]^2}{\\psi_1(1)} + \\beta_1 \\psi_6(0) \\psi_4(1) \\\\\n & + \\beta_2 \\psi_6(0) \\psi_5(1) = 0 .\n \\end{split}\n\\end{equation}\nComparing this with Eq.~(\\ref{eq:d}), we immediately obtain the second constraint:\n\\begin{equation}\\label{eq:constraint2}\n \\begin{split}\n & - \\psi_3(0) \\frac{1 - [\\psi_1(1)]^2}{\\psi_1(1) \\psi_6(0)} = \\frac{\\psi_3(5) \\psi_1(1)}{\\psi_6(5)} \\\\\n \\Rightarrow & \\psi_3(0) \\psi_6(5) (1 - [\\psi_1(1)]^2) + \\psi_3(5) \\psi_6(0) [\\psi_1(1)]^2 = 0 .\n \\end{split}\n\\end{equation}\nLet us explicitly list all the relevant states here:\n\\begin{equation}\n \\begin{split}\n \\ket{\\psi_0} &= a_0 \\ket{0} + a_3 \\ket{3} \\\\\n \\ket{\\psi_1} &= a_1 \\ket{1} + a_4 \\ket{4} + a_6 \\ket{6} \\\\\n \\ket{\\psi_3} &= \\mathcal{N}_1 ( a_1 \\ket{0} + 2 a_4 \\ket{3} + \\sqrt{6} a_6 \\ket{5} ) \\\\\n \\ket{\\psi_6} &= \\mathcal{N}_2 ( a_1 \\ket{0} + 2 a_4 \\ket{3} + \\beta \\ket{5} ) ,\n \\end{split} \n\\end{equation}\nwhere we 
have applied Eq.~(\\ref{eq:constraint1}) for $\\ket{\\psi_6}$ and $\\beta$ is another parameter. Combining the QEC criteria and Eq.~(\\ref{eq:constraint2}), we have\n\\begin{equation}\n \\begin{split}\n & a_0^2 + a_3^2 = 1 \\\\\n & a_1^2 + a_4^2 + a_6^2 = 1 \\\\\n & a_0 a_1 + 2 a_3 a_4 = 0 \\\\\n & 3 a_3^2 = a_1^2 + 4 a_4^2 + 6 a_6^2 \\\\\n & a_1^2 + 4 a_4^2 + \\sqrt{6}\\beta a_6 = 0 \\\\\n & \\beta (1 - a_1^2) + \\sqrt{6} a_6 a_1^2 = 0 .\n \\end{split}\n\\end{equation}\nWe have 6 equations and 6 parameters in total, and the solution is (there is some freedom in choosing the signs, which again is just a trivial basis transformation)\n\\begin{equation}\n \\begin{split}\n & a_0 = \\sqrt{1-\\frac{1}{\\sqrt{3}}}, a_3 = \\frac{1}{\\sqrt[4]{3}}, a_1 = \\sqrt{\\frac{2(6-\\sqrt{3})}{\\sqrt{3}+9}} \\\\\n & a_4 = -\\sqrt{\\frac{(\\sqrt{3}-1)(6-\\sqrt{3})}{2(\\sqrt{3}+9)}}, a_6 = \\sqrt{\\frac{3-\\sqrt{3}}{2(\\sqrt{3}+9)}} .\n \\end{split}\n\\end{equation}\nTherefore the logical states of the $\\sqrt{3}$ code are\n\\begin{equation}\\label{eq:logical_states}\n \\begin{split}\n \\ket{\\psi_0} =& \\sqrt{1-\\frac{1}{\\sqrt{3}}} \\ket{0} + \\frac{1}{\\sqrt[4]{3}} \\ket{3} \\\\\n \\ket{\\psi_1} =& \\sqrt{\\frac{2(6-\\sqrt{3})}{\\sqrt{3}+9}} \\ket{1} - \\sqrt{\\frac{(\\sqrt{3}-1)(6-\\sqrt{3})}{2(\\sqrt{3}+9)}} \\ket{4} \\\\\n & + \\sqrt{\\frac{3-\\sqrt{3}}{2(\\sqrt{3}+9)}} \\ket{6} .\n \\end{split}\n\\end{equation}\nNotice that the average number of photons in the codewords is $3|a_3|^2 = \\sqrt{3}$.\n\nNow we can complete all the basis states and the Hamiltonian Eq.~(\\ref{eq:H1}). 
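As a numerical sanity check (ours, not part of the derivation), the closed-form coefficients can be substituted back into the six equations:

```python
import math

# Closed-form coefficients of the sqrt(3) code.
s = math.sqrt(3)
a0 = math.sqrt(1 - 1 / s)
a3 = 3 ** (-0.25)
a1 = math.sqrt(2 * (6 - s) / (s + 9))
a4 = -math.sqrt((s - 1) * (6 - s) / (2 * (s + 9)))
a6 = math.sqrt((3 - s) / (2 * (s + 9)))
beta = -(a1**2 + 4 * a4**2) / (math.sqrt(6) * a6)  # from the fifth equation

eqs = [
    a0**2 + a3**2 - 1,                            # |psi_0> normalized
    a1**2 + a4**2 + a6**2 - 1,                    # |psi_1> normalized
    a0 * a1 + 2 * a3 * a4,                        # <psi_0|psi_3> = 0
    3 * a3**2 - (a1**2 + 4 * a4**2 + 6 * a6**2),  # equal mean photon number
    a1**2 + 4 * a4**2 + math.sqrt(6) * beta * a6,
    beta * (1 - a1**2) + math.sqrt(6) * a6 * a1**2,
]
print(max(abs(e) for e in eqs))   # all residuals at machine precision
print(3 * a3**2)                  # mean photon number sqrt(3)
```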
The basis of $\\mathcal{H}_2$:\n\\begin{equation}\n \\begin{split}\n \\ket{\\psi_6} &= \\mathcal{N}_2 ( a_1 \\ket{0} + 2 a_4 \\ket{3} + \\beta \\ket{5} ) , \\quad \\beta = -\\frac{a_1^2+4a_4^2}{\\sqrt{6} a_6} \\\\\n \\ket{\\psi_4} &= \\mathcal{N}_3 ( a_4 \\ket{1} - a_1 \\ket{4} ) \\\\\n \\ket{\\psi_5} &= \\mathcal{N}_4 ( a_1 \\ket{1} + a_4 \\ket{4} + \\beta' \\ket{6} ) , \\quad \\beta' = -\\frac{a_1^2+a_4^2}{a_6} \n \\end{split} \n\\end{equation}\nand the Hamiltonian parameters:\n\\begin{equation}\n \\beta_2 = -\\frac{\\mathcal{N}_1 a_6}{\\mathcal{N}_2 \\mathcal{N}_4 \\beta'} \\qquad \\beta_1 = \\frac{\\mathcal{N}_1 a_4 (1-a_6\/\\beta')}{\\mathcal{N}_2 \\mathcal{N}_3 a_1} .\n\\end{equation}\n\nThere are some extra complexities in constructing the AQEC Hamiltonian, and we actually need to keep more terms from the summation in Eq.~(\\ref{eq:QEC_full}) rather than just the $\\beta_1$ and $\\beta_2$ terms in Eq.~(\\ref{eq:H1}). To understand why this is required, let us study a simpler problem of stabilizing $\\ket{\\psi} = \\frac{1}{\\sqrt{2}}(\\ket{0}+\\ket{2})$ under photon loss error.\nEven though the Hamiltonian $\\hat{H} = (\\ket{0,e}+\\ket{2,e})\\bra{1,g} + \\text{h.c.}$ corrects the error after a single photon loss, it does not actually lead to state stabilization.\nThe reason is that when no photon loss happens, the state evolves within the subspace $\\{\\ket{0}, \\ket{2}\\}$ under the non-Hermitian Hamiltonian $\\hat{H}'=-i\\kappa \\hat{a}^\\dagger \\hat{a}\/2$ and eventually becomes $\\ket{0}$. This is because non-detection of a photon still provides us with information about the state, causing us to update it in a way that skews towards a lower number of photons. State stabilization must undo this effect.\nWe protect $\\ket{\\psi}$ against $\\hat{H}'$ by engineering a large detuning for $\\ket{\\psi}$ within the subspace $\\{\\ket{0}, \\ket{2}\\}$. 
For example, adding extra terms such as $\\Omega \\ket{\\psi} \\bra{\\psi}$ or $\\Omega \\ket{0}\\bra{2}+\\text{h.c.}$ to $\\hat{H}$ will stabilize $\\ket{\\psi}$. This new interaction can be seen as rapidly repopulating the $\\ket{2}$ component of the state vector as it decays through non-detection of photons.\n\nSimilarly, Eq.~(\\ref{eq:H1}) only protects the logical states against single photon loss error, but not the non-unitary dynamics under $\\hat{H}'$.\nFortunately, keeping a few extra terms from the summation in Eq.~(\\ref{eq:QEC_full}) is sufficient to generate the large detuning without changing the Hamiltonian distance or the above derivation.\nThe choices are not unique; one option is to add $\\ket{\\psi_4} \\bra{\\psi_2}$ as well as $(a_6 \\ket{4} - a_4 \\ket{6}) \\bra{5}$ in $\\tilde{H}$, which produces results similar to the discovered code in Fig.~\\ref{fig2}(c)i.\nOn the other hand, all these complications in constructing a proper AQEC Hamiltonian are automatically taken care of by \\texttt{AutoQEC} through numerical optimization of the average fidelity.\n\n\n\\section{Minimizing the emission bandwidth $\\mathcal{B}$}\nHere we prove the claim in the main text that for a harmonic oscillator coupled to a three-level qubit with Hamiltonian $\\hat{H}_{ab}$ in Eq.~(\\ref{eq:H_ab}), the emission bandwidth $\\mathcal{B}$ is minimized when $g_1^2\/\\Delta_1 \\approx g_2^2\/\\Delta_2$.\n\n\\begin{proof}\nThe Hamiltonian can be written in the subspace of $\\{\\ket{n+2,g},\\ket{n+1,e},\\ket{n,f}\\}$ as a matrix\n\\begin{equation}\n \\begin{pmatrix}\n 0 & \\sqrt{n+2} g_1 & 0 \\\\\n \\sqrt{n+2} g_1 & \\Delta_1 & \\sqrt{n+1} g_2 \\\\\n 0 & \\sqrt{n+1} g_2 & \\Delta_2\n \\end{pmatrix} \n\\end{equation}\nand the eigenvalues satisfy\n\\begin{equation}\n \\begin{split}\n & \\lambda^3 - (\\Delta_1+\\Delta_2) \\lambda^2 + \\left[\\Delta_1 \\Delta_2 - (n+2) g_1^2 - (n+1) g_2^2 \\right] \\lambda \\\\\n & + (n+2) g_1^2 \\Delta_2 = 0 .\n 
\\end{split}\n\\end{equation}\nIn the dispersive regime $\\Delta_{1,2} \\gg g_{1,2}$, the eigenvalues can be expanded perturbatively as\n\\begin{equation}\n \\begin{split}\n \\lambda =& \\lambda_0 + \\lambda_1 + \\lambda_2 + \\mathcal{O}\\left(\\frac{g^3}{\\Delta^3}\\right) g \\\\\n \\lambda_1 =& \\mathcal{O}\\left(\\frac{g}{\\Delta}\\right) g, \\quad \\lambda_2 = \\mathcal{O}\\left(\\frac{g^2}{\\Delta^2}\\right) g .\n \\end{split}\n\\end{equation}\nFor dressed eigenstates $\\ket{\\widetilde{n+2,g}}$, $\\lambda_0 = 0$ and\n\\begin{equation}\n \\begin{split}\n & \\left[\\Delta_1 \\Delta_2 - (n+2) g_1^2 - (n+1) g_2^2 \\right] \\lambda_1 + (n+2) g_1^2 \\Delta_2 = 0 \\\\ \n \\Rightarrow & \\quad \\lambda_1 = -\\frac{(n+2) g_1^2}{\\Delta_1}\n \\end{split}\n\\end{equation}\nwhich agrees with the dispersive coupling Hamiltonian and no level nonlinearity shows up at this order. To the next order,\n\\begin{equation}\n \\begin{split}\n & \\left[\\Delta_1 \\Delta_2 - (n+2) g_1^2 - (n+1) g_2^2 \\right] (\\lambda_1 + \\lambda_2) \\\\\n & - (\\Delta_1+\\Delta_2) \\lambda_1^2 + (n+2) g_1^2 \\Delta_2 = 0\n \\end{split}\n\\end{equation}\nwhich gives\n\\begin{equation}\n \\lambda_2 = \\frac{(n+2)g_1^2}{\\Delta_1^2} \\left[ (n+2) \\frac{g_1^2}{\\Delta_1} - (n+1) \\frac{g_2^2}{\\Delta_2} \\right] .\n\\end{equation}\nNotice that in general $\\lambda_2$ will induce nonlinearity for the dressed states $\\ket{\\widetilde{n,g}}$ since it depends on $n^2$. 
However, when $g_1^2\/\\Delta_1 = g_2^2\/\\Delta_2$, the dependence on $n^2$ is completely removed, which means that the nonlinearity, and therefore also the emission bandwidth $\\mathcal{B}$, is eliminated at this order.\n\\end{proof}\n\n\\subsection{Qubit choice for $\\hat{b}$}\nThe relevant dispersive coupling to the $e$ levels is\n\\begin{equation}\n \\chi_e = \\frac{2g_1^2}{\\Delta_1} - \\frac{g_2^2}{\\Delta_2 - \\Delta_1}\n\\end{equation}\nand at the minimal nonlinearity point, we have\n\\begin{equation}\n \\chi_e = \\frac{g_1^2}{\\Delta_1} \\frac{r-2}{r-1}\n\\end{equation}\nwhere $r=g_2^2\/g_1^2=\\Delta_2\/\\Delta_1$.\nIdeally, $\\chi_e$ should be as large as possible at this minimal nonlinearity point, such that we can selectively drive certain level transitions without introducing a large $\\mathcal{B}$.\nFor a transmon qubit, $r \\approx 2 \\Rightarrow \\chi_e \\approx 0$, so a transmon cannot be used as qubit $\\hat{b}$.\nFortunately, other qubit designs provide much more flexibility in engineering the coupling ratio $r$, and $r \\approx 1$ is favorable in terms of a larger $\\chi_e$.\n\nIn this work, we choose a fluxonium-type Hamiltonian\n\\begin{equation}\n \\hat{H} = 4 E_C \\hat{n}^2 - E_J \\cos (\\hat{\\phi} - \\phi_{\\text{ext}}) + \\frac{1}{2} E_L \\hat{\\phi}^2\n\\end{equation}\nfor qubit $\\hat{b}$. 
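For a feel of the trade-off, the normalized shift $\chi_e/(g_1^2/\Delta_1) = (r-2)/(r-1)$ is easy to tabulate (a minimal sketch; the function name is ours):

```python
# Normalized dispersive shift chi_e / (g_1^2 / Delta_1) at the
# minimal-nonlinearity point g_1^2/Delta_1 = g_2^2/Delta_2.
def chi_e_normalized(r):
    return (r - 2) / (r - 1)

for r in (1.2, 1.5, 2.0, 3.0):
    print(r, chi_e_normalized(r))
# A transmon-like r ~ 2 gives chi_e ~ 0 (no selective driving),
# while r -> 1 from above makes |chi_e| large, hence the r ~ 1.2 design.
```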
With realistic parameters $\\phi_{\\text{ext}}=0$, $E_C\/2\\pi=0.95~\\text{GHz}$, $E_J\/2\\pi=4.75~\\text{GHz}$ and $E_L\/2\\pi=0.65~\\text{GHz}$, the coupling ratio is $r = g_2^2\/g_1^2 = |\\mele{f}{\\hat{n}}{e}|^2\/|\\mele{e}{\\hat{n}}{g}|^2 \\approx 1.2$ with $\\omega_{ge}\/2\\pi \\approx 5.43~\\text{GHz}$ and $\\omega_{ef}\/2\\pi \\approx 3.87~\\text{GHz}$.\n\n\n\\section{Full circuit design}\nIn this section, we provide details for the full circuit simulation in Fig.~\\ref{fig4}(e).\nThe AQEC Hamiltonian Eq.~(\\ref{eq:AQEC_H}) can be implemented with a more physical Hamiltonian\n\\begin{equation}\\label{eq:H_physical}\n \\hat{H} = \\hat{H}_{ab} + \\left( f_1(t)\\hat{a}^\\dagger + f_2(t) \\hat{a} + f_3(t) \\hat{a}^2 \\right) \\hat{b}^\\dagger + f_4(t) \\hat{b}^\\dagger \\hat{c} + \\text{h.c.} \n\\end{equation}\nwhere\n\\begin{equation}\n \\begin{split}\n f_1(t) =& \\sum_n \\frac{\\alpha^{(1)}_n e^{-i(E_{n,e} - E_{n-1,g})t}}{\\mele{\\widetilde{n,e}}{\\hat{a}^\\dagger \\hat{b}^\\dagger}{\\widetilde{n-1,g}}} \\\\\n f_2(t) =& \\sum_n \\frac{\\alpha^{(2)}_n e^{-i(E_{n,e} - E_{n+1,g})t}}{\\mele{\\widetilde{n,e}}{\\hat{a} \\hat{b}^\\dagger}{\\widetilde{n+1,g}}} \\\\\n f_3(t) =& \\sum_n \\frac{\\alpha^{(3)}_n e^{-i(E_{n,e} - E_{n+2,g})t}}{\\mele{\\widetilde{n,e}}{\\hat{a}^2 \\hat{b}^\\dagger}{\\widetilde{n+2,g}}} \\\\\n f_4(t) =& \\Omega \\sum_n \\frac{e^{-i(E_{n,e} - E_{n,g})t}}{\\mele{\\widetilde{n,e}}{\\hat{b}^\\dagger}{\\widetilde{n,g}}} \n \\end{split}\n\\end{equation}\nand $\\ket{\\widetilde{n,g(e)}}$ are the dressed eigenstates of $\\hat{H}_{ab}$ with energies $E_{n,g(e)}$.\nNotice that the dressed states $\\ket{\\widetilde{n,g}}$ replace the bare Fock states $\\ket{n,g}$ in our definition of the logical basis in Eq.~(\\ref{eq:logical_states}), and the matrix elements such as $\\mele{\\widetilde{n,e}}{\\hat{a}^\\dagger \\hat{b}^\\dagger}{\\widetilde{n-1,g}}$ will be close but not equal to $\\sqrt{n}$.\nThe drivings $f_{1\\sim 4}(t)$ engineer 
couplings between dressed states rather than the bare Fock states.\n\nThe relevant dissipators are $\\{ \\sqrt{\\kappa} \\hat{a}, \\sqrt{\\kappa_q} \\hat{c} \\}$, but to be more realistic we also include an extra dissipator $\\sqrt{\\kappa} \\hat{b}$ in the simulation.\nThe coupling strength $\\Omega$ between $\\hat{b}$ and $\\hat{c}$ is chosen such that the effective decay rate for $\\hat{b}$ after adiabatically eliminating $\\hat{c}$~\\cite{Reiter2012a} is still $4\\Omega^2\/\\kappa_q=2\\pi \\times 20~\\text{MHz}$, same as the value used in the numerical optimization. We set the decay rate of $\\hat{c}$ as $\\kappa_q\/2\\pi=100~\\text{MHz}$.\n\nThe Hamiltonian Eq.~(\\ref{eq:H_physical}) can be furthermore implemented with a circuit model\n\\begin{equation}\\label{eq:H_circuit}\n \\begin{split}\n \\hat{H} =& \\omega_a \\hat{a}^\\dagger \\hat{a} + \\omega_{ge} \\ket{e} \\bra{e} + (\\omega_{ge} + \\omega_{ef}) \\ket{f} \\bra{f}+ \\omega_c \\hat{c}^\\dagger \\hat{c} \\\\\n &+ g_1 (\\hat{a}^\\dagger \\ket{g}\\bra{e} + \\hat{a}\\ket{e}\\bra{g}) + g_2 (\\hat{a}\\ket{f}\\bra{e} + \\hat{a}^\\dagger \\ket{e}\\bra{f}) \\\\\n &+ \\varepsilon_1 (t) \\left[ g_{ab}^{(1)} \\cos \\left( \\varphi_a (\\hat{a}+\\hat{a}^\\dagger) + \\varphi_b (\\hat{b}+\\hat{b}^\\dagger) \\right) \\right. \\\\\n &\\qquad \\qquad \\left. + g_{bc}^{(1)} \\cos \\left( \\varphi_b (\\hat{b}+\\hat{b}^\\dagger) + \\varphi_c (\\hat{c}+\\hat{c}^\\dagger) \\right) \\right] \\\\\n &+ \\varepsilon_2 (t) \\left[ g_{ab}^{(2)} \\sin \\left( \\varphi_a (\\hat{a}+\\hat{a}^\\dagger) + \\varphi_b (\\hat{b}+\\hat{b}^\\dagger) \\right) \\right. \\\\\n &\\qquad \\qquad \\left. 
+ g_{bc}^{(2)} \\sin \\left( \\varphi_b (\\hat{b}+\\hat{b}^\\dagger) + \\varphi_c (\\hat{c}+\\hat{c}^\\dagger) \\right) \\right] ,\n \\end{split}\n\\end{equation}\nwhere the drivings are given by\n\\begin{equation}\n \\begin{split}\n \\varepsilon_1 (t) =& - 2 \\text{Re} \\left\\{ \\frac{1}{\\varphi_a \\varphi_b g_{ab}^{(1)}} \\left[ e^{-2i\\omega_a t} f_1(t) + f_2(t) \\right] \\right. \\\\\n &\\left. + \\frac{1}{\\varphi_b \\varphi_c g_{bc}^{(1)}} e^{i(\\omega_c- \\omega_a) t} f_4(t) \\right\\} \\\\\n \\varepsilon_2 (t) =& - 2 \\text{Re} \\left\\{ \\frac{2}{\\varphi_a^2 \\varphi_b g_{ab}^{(2)}} e^{i \\omega_a t} f_3(t) \\right\\} ,\n \\end{split}\n\\end{equation}\nwhich are generated by the two independent flux pumps through the larger and smaller loops~\\cite{Kapit2016a} in Fig.~\\ref{fig4}(c).\nAfter Taylor expanding the $\\cos$ and $\\sin$ interactions and dropping fast-rotating terms in the rotating frame, we can show the equivalence of Eq.~(\\ref{eq:H_circuit}) to the Hamiltonian Eq.~(\\ref{eq:H_physical}) with $\\Delta_1 = \\omega_{ge}-\\omega_a$ and $\\Delta_2 = \\Delta_1 + \\omega_{ef}-\\omega_a$.\nTo ensure the validity of the rotating-wave approximation, we place the frequencies at $\\omega_a\/2\\pi=3.5~\\text{GHz}$ and $\\omega_c\/2\\pi=2.5~\\text{GHz}$ with qubit $\\hat{b}$ frequencies from the previous section. 
We also choose $\\varphi_a=\\varphi_b=\\varphi_c=0.1$ such that higher-order terms in the $\\cos$ and $\\sin$ expansions can be safely dropped.\nAll AQEC Hamiltonian parameters as well as the logical basis states are directly imported from the \\texttt{AutoQEC} optimization result instead of using the analytical results in Appendix~\\ref{appendix:b}.\nWe use QuTiP~\\cite{Johansson2012,Johansson2013} for the full circuit simulation.\n\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.4\\textwidth]{figures\/SI_figure_1.pdf}\n \\caption{(a-b) Wigner functions and photon number distributions for the discovered encoding in Fig.~\\ref{fig2}(c)ii. (c) Single state fidelity $F_{\\theta\\phi}$ at $t=10\\mu$s for the whole Bloch sphere. The white dashed line indicates the break-even fidelity. (d) $F_{\\theta\\phi}$ on the Bloch sphere for the $\\sqrt{3}$ code. For all Wigner function plots throughout this paper, the horizontal axis label is $x = \\ave{\\hat{a} + \\hat{a}^\\dagger}\/\\sqrt{2}$ and the vertical axis label is $p = i\\ave{\\hat{a}^\\dagger - \\hat{a}}\/\\sqrt{2}$.}\n \\label{fig_SI_1}\n\\end{figure}\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.4\\textwidth]{figures\/SI_figure_2.pdf}\n \\caption{(a-b) Wigner functions and photon number distributions for another variant of the $\\sqrt{3}$ code discovered with the $d=2$ Hamiltonian.}\n \\label{fig_SI_2}\n\\end{figure}\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.48\\textwidth]{figures\/SI_figure_3.pdf}\n \\caption{(a) Learning curve for results in Fig.~\\ref{fig_SI_2}. 
(b) Wigner functions of $\\hat{\\rho}_{\\text{code}}$ at different iterations during training, which shows a relatively good convergence after a few thousand iterations.}\n \\label{fig_SI_3}\n\\end{figure}\n\n\n\\section{Additional comments on the optimization results}\n\\subsection{Exceed break-even with partial protection}\nWe further investigate the result in Fig.~\\ref{fig2}(c)ii, which represents a class of optimization results that perform better than break-even fidelity but worse than full QEC codes.\nFig.~\\ref{fig_SI_1}(a) shows the Wigner functions for the code subspace as well as both logical states, and Fig.\\ref{fig_SI_1}(b) shows the photon number distribution for the logical states where $\\ket{\\psi_0} \\in \\{ \\ket{1}, \\ket{2}, \\ket{3} \\}$ only occupies low photon number states and $\\ket{\\psi_1} \\in \\{ \\ket{5}, \\ket{6}, \\ket{7} \\}$ only occupies high photon number states.\n\nTo understand how the logical subspace is preserved under the AQEC Hamiltonian, we plot the single state fidelity $F_{\\theta\\phi}$ over the Bloch sphere (Fig.~\\ref{fig_SI_1}(c)) at $t=10~\\mu$s. 
The logical state $\\ket{\\psi_0}$ is strongly stabilized by the AQEC Hamiltonian with a fidelity of 0.985, and $\\ket{\\psi_1}$ is preserved with a lower fidelity of 0.598.\nSome of their superposition states ($\\theta,\\phi$ within the region marked by the white dashed line) have fidelities below break-even, but the average fidelity over the whole Bloch sphere still exceeds break-even (Fig.~\\ref{fig2}(e) red dashed line) due to the partial protection in the logical subspace.\nIn comparison, we also plot the single state fidelity for the $\\sqrt{3}$ code (Fig.~\\ref{fig2}(c)i) in Fig.~\\ref{fig_SI_1}(d), which shows a relatively uniform protection for all logical states.\n\nWe can study a simplified example to demonstrate that a partially protected logical subspace exceeds break-even.\nStabilizing the Fock states $\\ket{0}$ and $\\ket{2}$ under photon loss can be implemented with a distance 1 Hamiltonian $\\hat{H}=\\ket{2,e} \\bra{1,g}+\\ket{1,g} \\bra{2,e}$.\nAt long times, both logical states are stabilized with single state fidelities $F_{\\theta=0}(t) \\approx F_{\\theta=\\pi}(t) \\approx 1$, but any coherent superposition state becomes a complete mixture of $\\{\\ket{0}\\bra{0}, \\ket{2}\\bra{2}\\}$.\nThis leads to an average fidelity of $\\frac{2}{3}$, which is better than the break-even fidelity of $\\frac{1}{2}$.\nIntuitively, stabilizing both $\\ket{\\psi_0}$ and $\\ket{\\psi_1}$ preserves strictly more information than collapsing the whole Bloch sphere to $\\ket{\\psi_0}=\\ket{0}$.\n\n\\subsection{A different $\\sqrt{3}$ code}\nBesides the $\\sqrt{3}$ code explained in the main text, \\texttt{AutoQEC} also discovered another variant of the $\\sqrt{3}$ code (Fig.~\\ref{fig_SI_2}(a)) protected by a distance 2 Hamiltonian.\nThe main difference is that $\\ket{\\psi_1} \\in \\{ \\ket{1}, \\ket{4}, \\ket{7} \\}$ instead of $\\{ \\ket{1}, \\ket{4}, \\ket{6} \\}$ (Fig.~\\ref{fig_SI_2}(b)).
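Returning briefly to the simplified example above, the $\frac{2}{3}$ average-fidelity claim can be checked numerically; a minimal sketch (the grid size is an arbitrary choice, and the dephasing model is the one described in the text):

```python
import numpy as np

# In the toy model the poles |0> and |2> are perfectly stabilized, while a
# superposition cos(t/2)|0> + e^{i phi} sin(t/2)|2> dephases to the mixture
# with populations cos^2(t/2) and sin^2(t/2); its fidelity with the initial
# pure state is F(t) = cos^4(t/2) + sin^4(t/2).
u = np.linspace(-1.0, 1.0, 200001)   # u = cos(theta), uniform on the sphere
c2 = (1.0 + u) / 2.0                 # cos^2(theta/2)
F = c2**2 + (1.0 - c2)**2            # single-state fidelity after dephasing
F_avg = float(np.mean(F))            # uniform average over the Bloch sphere
print(F_avg)                         # ~ 0.6667, above the 1/2 break-even
```

The average lands at $2/3$ because the fully dephased superpositions are exactly compensated by the perfectly preserved poles.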
Following the same procedures as in Appendix~\\ref{appendix:b}, this new code can also be analytically derived as ($F \\approx 99.8\\%$ compared to the numerical results in Fig.~\\ref{fig_SI_2}(a))\n\\begin{equation}\n    \\begin{split}\n        \\ket{\\psi_0} =& \\sqrt{1-\\frac{1}{\\sqrt{3}}} \\ket{0} + \\frac{1}{\\sqrt[4]{3}} \\ket{3} \\\\\n        \\ket{\\psi_1} =& \\sqrt{\\frac{4(7-\\sqrt{3})}{3(7 + \\sqrt{3})}} \\ket{1} - \\sqrt{\\frac{(\\sqrt{3}-1)(7-\\sqrt{3})}{3(7 + \\sqrt{3})}} \\ket{4} \\\\\n        & + \\sqrt{\\frac{3-\\sqrt{3}}{3(7 + \\sqrt{3})}} \\ket{7} .\n    \\end{split}\n\\end{equation}\n\n\n\\section{Training details}\nWe use the Adam optimizer~\\cite{Kingma2015} for gradient-based learning with a learning rate of about 0.001. Usually after a few hundred iterations, we can tell whether the training is stuck at a bad local minimum below break-even and decide whether to stop early. The training often achieves good convergence after a few thousand iterations, and we then lower the learning rate to about 0.0003 for the final learning stage.\n\nWe choose a Fock state cutoff of 20 for the bosonic mode, with a total Hilbert space dimension of 40. At the beginning of each \\texttt{AutoQEC} run, the real and imaginary parts of the logical states $\\ket{\\psi_0}$ and $\\ket{\\psi_1}$, represented as length-40 complex vectors, are randomly initialized.\nDuring the optimization, $\\ket{\\psi_0}$ and $\\ket{\\psi_1}$ will in general not be perfectly orthogonal to each other after an Adam update step, and we therefore maintain their orthogonality by manually setting $\\ket{\\psi_1} \\rightarrow \\ket{\\psi_1} - \\frac{\\inp{\\psi_0}{\\psi_1}}{\\inp{\\psi_0}{\\psi_0}} \\ket{\\psi_0}$ after each iteration.\n\nFigure~\\ref{fig_SI_3} shows the learning curve for the results in Fig.~\\ref{fig_SI_2} discovered with a $d=2$ Hamiltonian.
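The orthogonality-maintenance step above amounts to a single Gram-Schmidt projection; a minimal numpy sketch (the dimension 40 matches the setup above, while the random states and the seed are arbitrary):

```python
import numpy as np

# After an Adam update, project out the component of |psi_1> along |psi_0>:
# |psi_1>  ->  |psi_1> - (<psi_0|psi_1> / <psi_0|psi_0>) |psi_0>
def reorthogonalize(psi0, psi1):
    """Gram-Schmidt step: return psi1 minus its component along psi0."""
    return psi1 - (np.vdot(psi0, psi1) / np.vdot(psi0, psi0)) * psi0

rng = np.random.default_rng(0)
dim = 40
psi0 = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi1 = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi1 = reorthogonalize(psi0, psi1)
print(abs(np.vdot(psi0, psi1)))   # ~ 0: orthogonality is restored
```

Note that `np.vdot` conjugates its first argument, matching the inner product $\inp{\psi_0}{\psi_1}$.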
Similar learning curves occur frequently across many runs of \\texttt{AutoQEC}.\nRegarding computational cost, each iteration takes about 12 seconds on 3 CPUs (Intel Xeon CPU E5-2609 v4 @ 1.70GHz) for training with distance 2 Hamiltonians.\n\\texttt{AutoQEC} runs on 3 CPUs because $\\hat{\\rho}_{00}(t),\\hat{\\rho}_{11}(t),\\hat{\\rho}_{10}(t)$ in the definition of $\\bar{F}(t)$ can be evaluated in parallel with three independent master equation time evolutions.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe goal for the construction that I describe in this note was to lift the stability result of \\cite{vineyards} to the setting in which the simplicial filtrations are not necessarily defined on the same set. The ideas in this note were first presented at ATCMS in July 2012. A partial discussion appears in \\cite{facundo2012banff}. \n\nSection \\ref{sec:geodesic} proves that this construction defines a geodesic metric on the collection of finite filtered spaces. There I give an improvement of the stability of persistence which uses these geodesics.\n\n\n\n\\section{Simplicial Homology}\nGiven a simplicial complex $L$ and simplices $\\sigma,\\tau\\in L$, we write $\\sigma\\subseteq \\tau$ whenever $\\sigma$ is a face of $\\tau$. For each integer $\\ell\\geq 0$ we denote by $L^{(\\ell)}$ the $\\ell$-skeleton of $L$.\n\nRecall that given two finite simplicial complexes $L$ and $S$, a \\emph{simplicial map} between them arises from any map $f:L^{(0)}\\rightarrow S^{(0)}$ with the property that whenever $p_0,p_1,\\ldots,p_k$ span a simplex in $L$, then $f(p_0),f(p_1),\\ldots,f(p_k)$ span a simplex of $S$. One does not require that the vertices $f(p_0),f(p_1),\\ldots,f(p_k)$ be all distinct.
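The simplicial-map condition just recalled can be checked mechanically; a small Python sketch (the complexes and the vertex map below are hypothetical toy data):

```python
# A vertex map f extends to a simplicial map precisely when the image of
# every simplex of L spans a simplex of S; repeated images are allowed,
# so an edge may collapse to a vertex.
def is_simplicial(f, L, S):
    """True if {f(v) : v in sigma} is a simplex of S for every sigma in L."""
    return all(frozenset(f[v] for v in sigma) in S for sigma in L)

# L: boundary of a triangle on vertices {0, 1, 2}; S: a single edge {a, b}.
L = {frozenset(s) for s in [(0,), (1,), (2,), (0, 1), (1, 2), (0, 2)]}
S = {frozenset(s) for s in [('a',), ('b',), ('a', 'b')]}
f = {0: 'a', 1: 'b', 2: 'a'}   # collapses the edge {0, 2} onto the vertex a
print(is_simplicial(f, L, S))  # True
```

The edge $\{0,2\}$ maps to the single vertex $a$, which is still a simplex of $S$, illustrating that images need not be distinct.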
Given a map $f:L^{(0)}\\rightarrow S^{(0)}$ between the vertex sets of the finite simplicial complexes $L$ and $S$, we let $\\overline{f}:L\\rightarrow S$ denote the induced \\emph{simplicial map}.\n\n\n\n\n\n\nWe will make use of the following theorem in the sequel.\n\\begin{theorem}[Quillen's Theorem A in the simplicial category, \\cite{quillen}] Let $\\zeta:S\\rightarrow L$ be a simplicial map between two finite complexes. Suppose that the preimage of each closed simplex of $L$ is contractible. Then $\\zeta$ is a homotopy equivalence.\n\\end{theorem}\n\n\n\n\n \n\\begin{corollary}\\label{coro:eq-pd}\nLet $L$ be a finite simplicial complex and $\\varphi:Z\\rightarrow L^{(0)}$ be any surjective map with finite domain $Z$. Let $S:=\\{\\tau\\subseteq Z|\\,\\varphi(\\tau)\\in L\\}$. Then $S$ is a simplicial complex and the induced simplicial map\n$\\overline{\\varphi}:S\\rightarrow L$ is a homotopy equivalence.\n\\end{corollary}\n\n\\begin{proof}\nNote that $S = \\bigcup_{\\sigma\\in L}\\{\\tau\\subseteq Z|\\,\\varphi(\\tau)=\\sigma\\}$, so it is clear that $S$ is a simplicial complex with vertex set $Z$. That the preimage of each $\\sigma\\in L$ is contractible is trivially true, since those preimages are exactly the simplices in $S$. The conclusion follows directly from Quillen's Theorem A.\n\\end{proof}\n\nIn this paper we consider homology with coefficients in a field $\\mathbb{F}$, so that given a simplicial complex $L$, for each $k\\in\\mathbb{N}$, $H_k(L,\\mathbb{F})$ is a vector space. To simplify notation, we drop the argument $\\mathbb{F}$ and only write $H_k(L)$ for the homology of $L$ with coefficients in $\\mathbb{F}$.\n\n\\section{Filtrations and Persistent Homology}\n\nLet $\\mathcal{F}$ denote the set of all finite \\emph{filtered spaces}: that is, pairs $\\mathbf{X}=(X,F_X)$ where $X$ is a finite set and $F_X:\\mathrm{pow}(X)\\rightarrow \\mathbb{R}$ is a monotone function. Any such function is called a \\emph{filtration} over $X$.
Monotonicity in this context refers to the condition that $F_X(\\sigma)\\geq F_X(\\tau)$ whenever $\\sigma \\supseteq \\tau.$ Given a finite set $X$, by $\\mathcal{F}(X)$ we denote the set of all possible filtrations $F_X:\\pow{X}\\rightarrow \\mathbb{R}$ on $X$. Given a filtered space $\\mathbf{X}=(X,F_X)\\in\\mathcal{F}$, for each $\\varepsilon\\in \\mathbb{R}$ define the simplicial complex \n$$L_\\varepsilon(\\mathbf{X}):=\\big\\{\\sigma\\subseteq X|\\,F_X(\\sigma)\\leq \\varepsilon\\big\\}.$$\nOne then considers the nested family of simplicial complexes\n\n\n$$L(\\mathbf{X}):=\\big\\{L_{\\varepsilon}(\\mathbf{X})\\subset L_{\\varepsilon'}(\\mathbf{X})\\big\\}_{\\varepsilon\\leq \\varepsilon'}$$\n\n\nwhere each $L_{\\varepsilon}(\\mathbf{X})$ is, by construction, finite. At the level of homology, for each $k\\in \\mathbb{N}$ the above inclusions give rise to a system of vector spaces and linear maps\n\n$$\\mathbb{V}_k(\\mathbf{X}):=\\big\\{V_{\\varepsilon}(\\mathbf{X})\\stackrel{v_{\\varepsilon,\\varepsilon'}}{\\longrightarrow} V_{{\\varepsilon'}}(\\mathbf{X})\\big\\}_{\\varepsilon\\leq \\varepsilon'},$$\n\n\n\nwhich is called a \\emph{persistence vector space}. Note that each $V_{\\varepsilon}(\\mathbf{X})$ is finite dimensional.\n\n\nPersistence vector spaces admit a \\emph{classification up to isomorphism} in terms of collections of intervals, so that to the persistence vector space $\\mathbb{V}$ one assigns a multiset of intervals $I(\\mathbb{V})$ \\cite{zz}. These collections of intervals are referred to as \\emph{barcodes} or \\emph{persistence diagrams}, depending on the graphical representation that is adopted \\cite{comptopo-herbert}. We denote by $\\mathcal{D}$ the collection of all finite persistence diagrams. An element $D\\in\\mathcal{D}$ is a \\emph{finite} multiset of points $$D= \\{(b_\\alpha,d_\\alpha),\\,0\\leq b_\\alpha\\leq d_\\alpha,\\,\\alpha\\in A\\}$$ for some (finite) index set $A$.
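The sublevel-set complexes $L_\varepsilon(\mathbf{X})$ and the monotonicity condition above can be sketched directly in code (a toy two-point filtered space; the values chosen are illustrative):

```python
# A filtration on a finite set X assigns a real value to every simplex,
# monotonically under inclusion; L_eps collects the simplices that have
# appeared by time eps.
def is_monotone(F):
    """F(sigma) >= F(tau) whenever tau is a proper face of sigma."""
    return all(F[sigma] >= F[tau] for sigma in F for tau in F if tau < sigma)

def sublevel_complex(F, eps):
    return {sigma for sigma, val in F.items() if val <= eps}

# Toy filtration on X = {a, b}: vertices enter at 0, the edge at 1.
F = {frozenset({'a'}): 0.0, frozenset({'b'}): 0.0, frozenset({'a', 'b'}): 1.0}
print(is_monotone(F))                 # True
print(len(sublevel_complex(F, 0.5)))  # 2: only the two vertices so far
print(len(sublevel_complex(F, 1.0)))  # 3: the edge has now entered
```

Monotonicity is exactly what guarantees each $L_\varepsilon$ is closed under taking faces, i.e. a genuine simplicial complex.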
Given $k\\in\\mathbb{N}$, to any filtered set $\\mathbf{X}\\in\\mathcal{F}$ one can attach a persistence diagram via \n$$\\mathbf{X}\\longmapsto L(\\mathbf{X}) \\longmapsto \\mathbb{V}_k(\\mathbf{X}) \\longmapsto I\\big(\\mathbb{V}_k(\\mathbf{X})\\big).$$\nWe denote by $\\dk{k}:\\mathcal{F}\\rightarrow \\mathcal{D}$ the resulting composite map. Given $\\mathbf{X}=(X,F_X)$, we will sometimes write $\\dk{k}{(F_X)}$ to denote $\\dk{k}{(\\mathbf{X})}$.\n\n\n\\section{Stability of filtrations}\nThe \\emph{bottleneck distance} is a useful notion of distance between persistence diagrams, and we recall its definition next. We will follow the presentation in \\cite{carlsson_2014}. Let $\\Delta\\subset \\mathbb{R}^2_+$ consist of those points which sit on or above the diagonal: $\\Delta:=\\{(x,y)|\\,x\\leq y\\}.$ \n\n\n\nDefine the \\emph{persistence} of a point $P=(x_P,y_P)\\in\\Delta$ by $\\pers{P}:=y_P-x_P$. \n\n Let $D_1=\\{P_\\alpha\\}_{\\alpha\\in A_1}$ and $D_2=\\{Q_\\alpha\\}_{\\alpha\\in A_2}$ be two persistence diagrams indexed over the finite index sets $A_1$ and $A_2$, respectively.
Consider subsets $B_i\\subseteq A_i$ with $|B_1|=|B_2|$ and any bijection $\\varphi:B_1\\rightarrow B_2$, and define\n$$J(\\varphi):=\\max\\left(\\max_{\\beta\\in B_1}\\|P_\\beta-Q_{\\varphi(\\beta)}\\|_\\infty,\\max_{\\alpha\\in A_1\\backslash B_1}\\frac{1}{2}\\pers{P_\\alpha},\\max_{\\alpha\\in A_2\\backslash B_2}\\frac{1}{2}\\pers{Q_\\alpha}\\right).$$\nFinally, one defines the bottleneck distance between $D_1$ and $D_2$ by $$d_{\\mathcal{D}}(D_1,D_2):=\\min_{(B_1,B_2,\\varphi)} J(\\varphi),$$\nwhere $(B_1,B_2,\\varphi)$ ranges over all $B_1\\subset A_1$, $B_2\\subset A_2$, and bijections $\\varphi:B_1\\rightarrow B_2$.\n\n\nOne of the standard results about the stability of persistent homology invariants, which is formulated in terms of the bottleneck distance, is the result below, which we state in a weaker form that will suffice for our presentation:\\footnote{In \\cite{vineyards} the authors do not assume that the underlying simplicial complex is the full powerset.}\n \\begin{theorem}[\\cite{vineyards}]\\label{theo:stab-vineyards}\n For all finite sets $X$ and filtrations $F,G:\\mathrm{pow}(X)\\rightarrow \\mathbb{R}$, \n $$d_{\\mathcal{D}}(\\dk{k}(F),\\dk{k}(G))\\leq \\max_{\\sigma\\in\\mathrm{pow}(X)}|F(\\sigma)-G(\\sigma)|,$$\n for all $k\\in\\mathbb{N}.$\n \\end{theorem}\n\nThe proof of this theorem offered in \\cite{vineyards} is purely combinatorial and elementary. This result requires that the two filtrations be given on the \\emph{same} set. This restriction will be lifted using the ideas that follow.\n\n\n\\subsection{Filtrations defined over different sets}\n A \\emph{parametrization} of a finite set $X$ is any finite set $Z$ together with a surjective map $\\varphi_X:Z\\rightarrow X$.\nConsider a filtered space $\\mathbf{X} = (X,F_X)\\in\\mathcal{F}$ and a parametrization $\\varphi_X:Z\\rightarrow X$ of $X$. By $\\varphi_X^\\ast F_X$ we denote the \\emph{pullback filtration} induced by $F_X$ and the map $\\varphi_X$ on $Z$.
This filtration is given by $\\tau\\mapsto F_X(\\varphi_X(\\tau))$ for all $\\tau\\in\\mathrm{pow}(Z).$ \n\n\n\nA useful corollary of the persistence homology isomorphism theorem \\cite[pp. 139]{comptopo-herbert} and Corollary \\ref{coro:eq-pd} is that the persistence diagrams of the original filtration and the pullback filtration are identical.\n\\begin{corollary}\\label{coro:same}\nLet $\\mathbf{X}=(X,F_X)\\in \\mathcal{F}$ and $\\varphi:Z\\twoheadrightarrow X$ a parametrization of $X$. Then, for all $k\\in\\mathbb{N}$, $\\dk{k}(\\varphi^\\ast F_X)=\\dk{k}(F_X).$\n\\end{corollary}\n\n\n\\subsubsection{Common parametrizations of two spaces: tripods.}\nNow, given $\\mathbf{X} = (X,F_X)$ and $\\mathbf{Y}=(Y,F_Y)$ in $\\mathcal{F}$, the main idea in comparing filtrations defined on different spaces is to consider parametrizations \n$\\varphi_X:Z\\twoheadrightarrow X$ and $\\varphi_Y:Z\\twoheadrightarrow Y$ of $X$ and $Y$ from a \\emph{common} parameter space $Z$, i.e. \\emph{tripods}:\n\n$$\\xymatrix{ & Z \\ar@{->>}[dl]_{{\\varphi_X}} \\ar@{->>}[dr]^{\\varphi_Y} & \\\\ X\t& & Y}$$\n\nand compare the pullback filtrations $\\varphi_X^*F_X$ and $\\varphi_Y^*F_Y$ on $Z$. Formally, define\n\n\n\n\n\n\n \\begin{multline}\n d_{\\mathcal{F}}\\big(\\mathbf{X},\\mathbf{Y}\\big):=\\\\\\inf\\left\\{\\max_{\\tau\\in\\mathrm{pow}(Z)}\\big|\\varphi^\\ast_X F_X(\\tau)-\\varphi^\\ast_Y F_Y(\\tau)\\big|;\\,\\varphi_X:Z\\twoheadrightarrow X,\\, \\varphi_Y:Z\\twoheadrightarrow Y\\,\\,\\mbox{parametrizations}\\right\\}.\n \\end{multline}\n\n\n\n\\begin{remark} \\label{rem:dist-F-simple} Notice that in case $X=\\{\\ast\\}$ and $F_{\\{\\ast\\}}(\\ast) = c\\in\\mathbb{R}$, then $d_{\\mathcal{F}}(X,Y)=\\max_{\\sigma\\subset Y}\\big|F_Y(\\sigma)-c\\big|,$ for any filtered space $Y$. If $c=0$, $Y=\\{y_1,y_2\\}$ with $F_Y(y_1)=F_Y(y_2)=0$ and $F_{Y}(\\{y_1,y_2\\})=1$. 
Then, $d_{\\mathcal{F}}(X,Y)=1.$\n\nHowever, still with $c=0$ and $Y=\\{y_1,y_2\\}$, but $F_Y(y_1)=F_Y(y_2)=F_{Y}(\\{y_1,y_2\\})=0$, one has $d_\\mathcal{F}(X,Y)=0$. This means that $d_{\\mathcal{F}}$ is at best a pseudometric on filtered spaces.\n\\end{remark}\n\n\n\\begin{proposition}\n$d_{\\mathcal{F}}$ is a pseudometric on $\\mathcal{F}$.\n\\end{proposition}\n\\begin{proof}\nSymmetry and non-negativity are clear. We need to prove the triangle inequality. Let $\\mathbf{X} = (X,F_X)$, $\\mathbf{Y} = (Y,F_Y)$, and $\\mathbf{W} = (W,F_W)$ in $\\mathcal{F}$ be non-empty and $\\eta_1,\\eta_2>0$ be s.t. \n$$d_{\\mathcal{F}}(\\mathbf{X},\\mathbf{Y})<\\eta_1\\,\\,\\mbox{and}\\,\\,d_{\\mathcal{F}}(\\mathbf{Y},\\mathbf{W})<\\eta_2.$$ \nChoose, $\\psi_X:Z_1\\twoheadrightarrow X$, $\\psi_Y:Z_1\\twoheadrightarrow Y$, $\\zeta_Y:Z_2\\twoheadrightarrow Y$, and $\\zeta_W:Z_2\\twoheadrightarrow W$ surjective such that \n$$\\|F_X\\circ\\psi_X-F_Y\\circ\\psi_Y\\|_{\\ell^\\infty(\\pow{Z_1})}<\\eta_1$$\nand\n$$\\|F_Y\\circ\\zeta_Y-F_W\\circ\\zeta_W\\|_{\\ell^\\infty(\\pow{Z_2})}<\\eta_2.$$\nLet $Z\\subseteq Z_1\\times Z_2$ be defined by $Z:=\\{(z_1,z_2)\\in Z_1\\times Z_2|\\psi_Y(z_1)=\\zeta_Y(z_2)\\}$ and consider the following (pullback) diagram:\n\n$$\\xymatrix{& & Z \\ar[dl]_{{\\pi_1}} \\ar[dr]^{\\pi_2} & & \\\\ & Z_1\\ar[dl]_{\\psi_X} \\ar[dr]^{\\psi_Y}& & Z_2\\ar[dl]_{\\zeta_Y}\\ar[dr]^{\\zeta_W} &\\\\ X & & Y & & W}.$$\n\nClearly, since $\\psi_Y$ and $\\zeta_Y$ are surjective, $Z$ is non-empty.\nNow, consider the following three maps with domain $Z$: $\\phi_X := \\psi_X\\circ\\pi_1$, $\\phi_Y := \\psi_Y\\circ\\pi_1=\\zeta_Y\\circ\\pi_2$, and $\\phi_W:=\\zeta_W\\circ\\pi_2$. These three maps are surjective and therefore constitute parametrizations of $X$, $Y$, and $W$, respectively. 
Then, since $\\pi_i:Z\\rightarrow Z_i$, $i=1,2$, are surjective and $\\psi_Y\\circ {\\pi_1} = \\zeta_Y\\circ\\pi_2$, we have\n\\begin{align*}\nd_{\\mathcal{F}}(\\mathbf{X},\\mathbf{W})&\\leq \\|F_X\\circ\\phi_X-F_W\\circ\\phi_W\\|_{\\ell^\\infty(\\pow{Z})}\\\\\n&\\leq \\|F_X\\circ\\phi_X - F_Y\\circ\\phi_Y\\|_{\\ell^\\infty(\\pow{Z})}+\\|F_Y\\circ\\phi_Y - F_W\\circ\\phi_W\\|_{\\ell^\\infty(\\pow{Z})}\\\\\n&= \\|F_X\\circ\\psi_X - F_Y\\circ\\psi_Y\\|_{\\ell^\\infty(\\pow{Z_1})}+\\|F_Y\\circ\\zeta_Y - F_W\\circ\\zeta_W\\|_{\\ell^\\infty(\\pow{Z_2})}\\\\\n&\\leq \\eta_1+\\eta_2.\n\\end{align*}\nThe conclusion follows by letting $\\eta_1\\searrow d_{\\mathcal{F}}(X,Y)$ and $\\eta_2\\searrow d_{\\mathcal{F}}(Y,W)$.\n\\end{proof}\n\nWe now obtain a lifted version of Theorem \\ref{theo:stab-vineyards}.\n \\begin{theorem} \\label{theo:stab-pullback}\n For all finite filtered spaces $\\mathbf{X} =(X,F_X)$ and $\\mathbf{Y} = (Y,F_Y)$, and all $k\\in\\mathbb{N}$ one has:\n $$d_{\\mathcal{D}}(\\dk{k}(\\mathbf{X}),\\dk{k}(\\mathbf{Y}))\\leq d_{\\mathcal{F}}(\\mathbf{X},\\mathbf{Y}).$$\n \\end{theorem}\n\n\n\n \\begin{proof}[Proof of Theorem \\ref{theo:stab-pullback}]\n Assume $\\varepsilon>0$ is such that $d_{\\mathcal{F}}(\\mathbf{X},\\mathbf{Y})<\\varepsilon.$ Then, let $\\varphi_X:Z\\rightarrow X$ and $\\varphi_Y:Z\\rightarrow Y$ be surjective maps from the finite set $Z$ onto $X$ and $Y$, respectively, such that $|\\varphi_X^\\ast F_X(\\tau) - \\varphi_Y^\\ast F_Y(\\tau)|<\\varepsilon$ for all $\\tau\\in\\mathrm{pow}(Z)$.
Then, by Theorem \\ref{theo:stab-vineyards}, \n $$d_{\\mathcal{D}}(\\dk{k}(\\varphi_X^\\ast F_X),\\dk{k}(\\varphi_Y^\\ast F_Y))<\\varepsilon$$\n for all $k\\in\\mathbb{N}.$ Now apply Corollary \\ref{coro:same} and conclude by letting $\\varepsilon$ approach $d_{\\mathcal{F}}(\\mathbf{X},\\mathbf{Y})$.\n \\end{proof}\n\n\\begin{remark}\\label{rem:non-tight}\nConsider the case of $\\mathbf{Y}$ being the one point filtered space $\\{\\ast\\}$ such that $F_{\\{\\ast\\}}(\\{\\ast\\})=0$, and $\\mathbf{X}$ such that $X=\\{x_1,x_2\\}$, and $F_X(\\{x_1\\})=F_X(\\{x_2\\})=0$, $F_X(\\{x_1,x_2\\})=1$. In this case $d_{\\mathcal{F}}(\\mathbf{X},\\mathbf{Y})=1$. However, notice that for $k=0$, $\\mathrm{D}_0(\\mathbf{X}) = \\{[0,\\infty),[0,1)\\}$\n and $\\mathrm{D}_0(\\mathbf{Y}) = \\{[0,\\infty)\\}$. Additionally, for all $k\\geq 1$ one has $\\mathrm{D}_k(\\mathbf{X}) = \\mathrm{D}_k(\\mathbf{Y}) = \\emptyset$. This means that the lower bound provided by Theorem \\ref{theo:stab-pullback} is equal to $\\frac{1}{2}<1=d_{\\mathcal{F}}(\\mathbf{X},\\mathbf{Y})$.\n\\end{remark}\n\n\\section{Filtrations arising from metric spaces: Rips and \\v{C}ech}\nRecall \\cite{burago-book} that for two compact metric spaces $(X,d_X)$ and $(Y,d_Y)$, a correspondence between them is any subset $R$ of $X\\times Y$ such that the natural projections $\\pi_X:X\\times Y\\rightarrow X$ and $\\pi_Y:X\\times Y\\rightarrow Y$ satisfy $\\pi_X(R)=X$ and $\\pi_Y(R)=Y$.
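Correspondences as just defined can be enumerated by brute force for tiny sets; a sketch (the two-element sets below are hypothetical):

```python
from itertools import combinations, product

# A correspondence between finite sets X and Y is a subset R of X x Y
# whose projections onto X and onto Y are both surjective.
def correspondences(X, Y):
    pairs = list(product(X, Y))
    for r in range(1, len(pairs) + 1):
        for R in combinations(pairs, r):
            if {x for x, _ in R} == set(X) and {y for _, y in R} == set(Y):
                yield set(R)

X, Y = {0, 1}, {'a', 'b'}
n_corr = sum(1 for _ in correspondences(X, Y))
print(n_corr)   # 7 correspondences between two 2-point sets
```

(The seven are the two perfect matchings, the four 3-element subsets, and all of $X\times Y$.)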
The distortion of any such correspondence is given by \n$$\\mathrm{dis}(R):=\\sup_{(x,y),(x',y')\\in R}\\big|d_X(x,x')-d_Y(y,y')\\big|.$$\nThen, the Gromov-Hausdorff distance between $(X,d_X)$ and $(Y,d_Y)$ is defined as \n$$\\dgro{X}{Y} := \\frac{1}{2}\\inf_{R}\\mathrm{dis}(R),$$\nwhere the infimum is taken over all correspondences $R$ between $X$ and $Y$.\n\n\\subsection{The Rips filtration}\nRecall the definition of the \\emph{Rips filtration} of a finite metric space $(X,d_X)$: for $\\sigma\\in \\pow{X}$,\n$$F^{\\mathrm{R}}_X(\\sigma)=\\diamms{\\sigma}{X}:=\\max_{x,x'\\in \\sigma}d_X(x,x').$$\n\nThe following theorem was first proved in \\cite{dgw-topo-pers}. A different proof (also applicable to compact metric spaces) relying on the interleaving distance and multivalued maps was given in \\cite{chazal-geom}. Yet another proof avoiding multivalued maps is given in \\cite{dowker-ph}.\n\n\\begin{theorem}\\label{theo:stab-dD-R}\nFor all finite metric spaces $X$ and $Y$, and all $k\\in\\mathbb{N}$,\n$$d_{\\mathcal{D}}\\big(\\dk{k}(F^{\\mathrm{R}}_X),\\dk{k}(F^{\\mathrm{R}}_Y)\\big)\\leq 2\\,\\dgro{X}{Y}.$$\n\\end{theorem}\n\nA different proof of Theorem \\ref{theo:stab-dD-R} can be obtained by combining Theorem \\ref{theo:stab-pullback} and Proposition \\ref{prop:stab-R} below.\n\\begin{proposition}\\label{prop:stab-R}\nFor all finite metric spaces $X$ and $Y$, \n$$d_{\\mathcal{F}}\\big(F^{\\mathrm{R}}_X,F^{\\mathrm{R}}_Y\\big)\\leq 2\\,\\dgro{X}{Y}.$$\n\\end{proposition}\n\\begin{proof}[Proof of Proposition \\ref{prop:stab-R}]\nLet $X$ and $Y$ be s.t. $\\dgro{X}{Y}<\\eta$, and let $R\\subset X\\times Y$ be a correspondence with $|d_X(x,x')-d_Y(y,y')|\\leq 2\\eta$ for all $(x,y),(x',y')\\in R$. Consider the parametrization $Z=R$, with $\\varphi_X=\\pi_1:Z\\rightarrow X$ and $\\varphi_Y=\\pi_2:Z\\rightarrow Y$; then \n\\beq{eq:param}\n|d_X(\\varphi_X(t),\\varphi_X(t'))-d_Y(\\varphi_Y(t),\\varphi_Y(t'))|\\leq 2\\eta\n\\end{equation}\nfor all $t,t'\\in Z$.
Pick any $\\tau\\in \\pow{Z}$ and notice that\n\n$$\\varphi^*_XF_X^{\\mathrm{R}}(\\tau) = F_X^{\\mathrm{R}}(\\varphi_X(\\tau)) = \\max_{t,t'\\in \\tau}d_X(\\varphi_X(t),\\varphi_X(t')).$$\n\nNow, similarly, write \n\\begin{multline}\\varphi^*_YF_Y^{\\mathrm{R}}(\\tau) = \\max_{t,t'\\in \\tau}d_Y(\\varphi_Y(t),\\varphi_Y(t'))\\leq \\max_{t,t'\\in\\tau}d_X(\\varphi_X(t),\\varphi_X(t'))+2\\eta = \\varphi^*_XF_X^{\\mathrm{R}}(\\tau) + 2\\eta,\n\\end{multline}\n\nwhere the inequality follows from \\refeq{eq:param}. The proof follows by interchanging the roles of $X$ and $Y$.\n\\end{proof}\n\n\n\\subsection{The \\v{C}ech filtration}\nAnother interesting and frequently used filtration is the \\emph{\\v{C}ech filtration}: for each $\\sigma\\in\\pow{X}$, \n$$F^\\mathrm{C}_X(\\sigma) := \\mathbf{rad}_X(\\sigma)=\\min_{p\\in X}\\max_{x\\in \\sigma}d_X(x,p).$$ That is, the filtration value of each simplex corresponds to its \\emph{circumradius}.\n\\begin{proposition}\\label{prop:stab-C}\nFor all finite metric spaces $X$ and $Y$, \n$$d_{\\mathcal{F}}\\big(F^{\\mathrm{C}}_X,F^{\\mathrm{C}}_Y\\big)\\leq 2\\,\\dgro{X}{Y}.$$\n\\end{proposition}\n\nAgain, as a corollary of Theorem \\ref{theo:stab-pullback} and Proposition \\ref{prop:stab-C} we have the following \n\\begin{theorem}\\label{theo:stab-dD-C}\nFor all finite metric spaces $X$ and $Y$, and all $k\\in\\mathbb{N}$,\n$$d_{\\mathcal{D}}\\big(\\dk{k}(F^{\\mathrm{C}}_X),\\dk{k}(F^{\\mathrm{C}}_Y)\\big)\\leq 2\\,\\dgro{X}{Y}.$$\n\\end{theorem}\nA proof of this theorem via the interleaving distance and multi-valued maps has appeared in \\cite{chazal-geom}.\\footnote{The version in \\cite{chazal-geom} applies to compact metric spaces.} Another proof avoiding multivalued maps is given in \\cite{dowker-ph}.\n\n\\begin{proof}[Proof of Proposition \\ref{prop:stab-C}]\n\nThe proof is similar to that of Proposition \\ref{prop:stab-R}.
Pick any $\\tau\\in \\pow{Z}$; then,\n\n$$\\varphi^*_XF_X^{\\mathrm{C}}(\\tau) = F_X^{\\mathrm{C}}(\\varphi_X(\\tau)) = \\min_{p\\in X}\\max_{t\\in \\tau}d_X(p,\\varphi_X(t))=\\max_{t\\in \\tau}d_X(p_\\tau,\\varphi_X(t))$$\nfor some $p_\\tau\\in X$. Let $t_\\tau\\in Z$ be s.t. $\\varphi_X(t_\\tau)=p_\\tau$, and from the above obtain\n$$\\varphi^*_XF_X^{\\mathrm{C}}(\\tau) = \\max_{t\\in \\tau}d_X(\\varphi_X(t_\\tau),\\varphi_X(t)).$$\n\nNow, similarly, write \n\\begin{multline}\\varphi^*_YF_Y^{\\mathrm{C}}(\\tau) = \\min_{q\\in Y}\\max_{t\\in \\tau}d_Y(q,\\varphi_Y(t))\\leq \\max_{t\\in\\tau}d_Y(\\varphi_Y(t_\\tau),\\varphi_Y(t))\\leq \\max_{t\\in\\tau}d_X(\\varphi_X(t_\\tau),\\varphi_X(t))+2\\eta\\\\ = \\varphi^*_XF_X^{\\mathrm{C}}(\\tau) + 2\\eta,\n\\end{multline}\n\nwhere the last inequality follows from \\refeq{eq:param}. The proof follows by interchanging the roles of $X$ and $Y$.\n\\end{proof}\n\n\n\\section{$d_{\\mathcal{F}}$ is geodesic}\\label{sec:geodesic}\nIn this section we construct geodesics between any pair $\\mathbf{X}$ and $\\mathbf{Y}$ of filtered spaces and obtain a strengthening of Theorem \\ref{theo:stab-pullback}.\n\n\n\\subsection{Geodesics}\nGiven $\\mathbf{X}$ and $\\mathbf{Y}$ in $\\mathcal{F}$, consider the set $\\mathcal{T}^\\mathrm{opt}(\\mathbf{X},\\mathbf{Y})$ of all minimizing tripods: that is, those tripods $(Z,\\varphi_X,\\varphi_Y)\\in\\mathcal{T}(\\mathbf{X},\\mathbf{Y})$ for which $\\|\\varphi_X^\\ast F_X-\\varphi_Y^*F_Y\\|_{\\ell^\\infty(\\pow{Z})} = d_{\\mathcal{F}}(\\mathbf{X},\\mathbf{Y}).$\n\nFor each minimizing tripod $T=(Z,\\varphi_X,\\varphi_Y)\\in\\mathcal{T}^\\mathrm{opt}(\\mathbf{X},\\mathbf{Y})$ consider the curve $$\\mbox{$\\gamma_T:[0,1]\\rightarrow \\mathcal{F}$ defined by $t\\mapsto \\mathbf{Z_t}:=(Z,F_t)$}$$ where \n $$F_t:=(1-t)\\cdot \\varphi_X^*F_X+t\\cdot \\varphi_Y^* F_Y.$$\n\n\n\\begin{theorem}\nFor each $T\\in\\mathcal{T}^\\mathrm{opt}(\\mathbf{X},\\mathbf{Y})$ the curve $\\gamma_T$ is a geodesic between $\\mathbf{X}$ and $\\mathbf{Y}$.
Namely, for all $s,t\\in[0,1]$ one has:\n$$d_{\\mathcal{F}}(\\gamma_T(s),\\gamma_T(t))=|s-t|\\cdot d_{\\mathcal{F}}(\\mathbf{X},\\mathbf{Y}).$$\n\\end{theorem}\n\\begin{proof}\nLet $\\eta = d_{\\mathcal{F}}(\\mathbf{X},\\mathbf{Y})$. We check that $$(\\ast )\\,\\,\\,\\,d_{\\mathcal{F}}(\\gamma_T(s),\\gamma_T(t))\\leq |s-t|\\cdot \\eta$$ and notice that this is enough. Otherwise, let $s\\frac{1}{2}=d_{\\mathcal{D}}(\\alpha_T(0),\\alpha_T(1)),$ which is the bound provided by Theorem \\ref{theo:stab-pullback}.\n\n \\end{remark}\n\n\\begin{proof}[Proof of Theorem \\ref{theo:strengthening}]\nLet $T\\in \\mathcal{T}^\\mathrm{opt}(\\mathbf{X},\\mathbf{Y})$ and let $0=t_0<\\cdots 0 \\\\ 0 & , & \\theta < 0\n\\end{array}\\right. \\end{equation}\nSimilarly,\n\\begin{equation} \\nu_1 = \\int_{-\\pi}^{\\pi} \\frac{dk}{2\\pi} \\frac{1}{1 + i e^{ik} \/z} = \\left\\{ \\begin{array}{ccc}\n\t0 & , & \\theta > 0 \\\\ 1 & , & \\theta < 0\n\\end{array}\\right. \\end{equation}\nThus, we have two distinct topological phases with $\\nu_0 = 1$, $\\nu_1 = 0$ ($\\nu_0 = 0$, $\\nu_1 = 1$) for $\\theta > 0$ ($\\theta <0$).\n\nIn position space, the action of $W$ can be deduced directly from \\eqref{eq10a} and \\eqref{eq13}. 
We obtain\n\\begin{eqnarray}\\label{eq8} \\Psi_0 (x) &\\to& \\frac{\\cos\\theta}{2} \\left[ \\Psi_0 (x-1) - \\Psi_0 (x+1) \\right] \\nonumber\\\\\n&& - \\frac{1 + \\sin\\theta}{2} \\Psi_1 (x-1) - \\frac{1 - \\sin\\theta}{2} \\Psi_1 (x+1)\\nonumber\\\\\n\\Psi_1 (x) &\\to& \\frac{\\cos\\theta}{2} \\left[ \\Psi_1 (x-1) - \\Psi_1 (x+1) \\right] \\nonumber\\\\\n&& - \\frac{1 - \\sin\\theta}{2} \\Psi_0 (x-1) - \\frac{1 + \\sin\\theta}{2} \\Psi_0 (x+1)\n\\end{eqnarray}\n\n\\begin{figure}[htp]\n\t\\centering\n\t\\includegraphics[scale=1.5]{QW-trapped.png}\n\t\\caption{Trapped state at the boundary of two topological phases of a discrete simple-step walk.}\\label{fig:1}\n\\end{figure}\n\n\\begin{figure}[htp]\n\t\\centering\n\t\\includegraphics[scale=0.4]{QW.png}\n\t\\caption{State being reflected at the boundary of two topological phases of a discrete simple-step walk.}\\label{fig:2}\n\\end{figure}\n\nIn Fig.~\\ref{fig:1}, we see that a discrete-time simple-step topological walk gives rise to bound states at $x=0$ and $x=-1$, the sites chosen as the boundary between the two distinct topological phases. The initial state was centered around $x=0$ with a small spread $\\Delta x$, and $\\Psi_0(x) = \\Psi_1(x)$. As the system evolves in time, the parts of the state near the boundary diffuse ballistically away from the boundary, but the part at the boundary remains protected. In Fig.~\\ref{fig:2}, we choose the initial state around $x=50$, i.e., entirely within a single topological phase. In this case, we see that the quantum walk diffuses in both directions ballistically.
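The position-space update \eqref{eq8} can be iterated directly; a minimal sketch on a periodic lattice (the lattice size, coin angle $\theta=0.3$, step count, and localized initial state are arbitrary choices), verifying that the step is norm-preserving:

```python
import numpy as np

# One step of the uniform-coin simple-step walk in position space, Eq. (eq8),
# with periodic boundaries so the update is exactly unitary.
def step(psi0, psi1, theta):
    c, s = np.cos(theta), np.sin(theta)
    p0m, p0p = np.roll(psi0, 1), np.roll(psi0, -1)   # Psi_0(x-1), Psi_0(x+1)
    p1m, p1p = np.roll(psi1, 1), np.roll(psi1, -1)   # Psi_1(x-1), Psi_1(x+1)
    new0 = 0.5 * c * (p0m - p0p) - 0.5 * (1 + s) * p1m - 0.5 * (1 - s) * p1p
    new1 = 0.5 * c * (p1m - p1p) - 0.5 * (1 - s) * p0m - 0.5 * (1 + s) * p0p
    return new0, new1

N = 201
psi0 = np.zeros(N, dtype=complex)
psi1 = np.zeros(N, dtype=complex)
psi0[N // 2] = psi1[N // 2] = 1 / np.sqrt(2)   # state localized at one site
for _ in range(50):
    psi0, psi1 = step(psi0, psi1, theta=0.3)
norm = float(np.sum(np.abs(psi0)**2 + np.abs(psi1)**2))
print(norm)   # 1.0 up to rounding: the walk is unitary
```

With a spatially dependent $\theta(x)$ array in place of the constant angle, the same sketch reproduces the boundary experiments described above.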
The part of the walk that reaches the boundary of the other phase is reflected back and continues to diffuse away from the boundary ballistically without entering the region of the other topological phase.\n\n\\subsection{Continuous-time limit}\n\nTo go over to the continuous-time limit, we set $\\theta = \\frac{\\pi}{2} - \\epsilon$, and consider a scaling limit in which $\\epsilon \\to 0$ and $n\\to\\infty$, so that the product $n\\epsilon $ remains finite. In this limit, $\\omega_+ \\to \\pi$, and $\\omega_- \\to 0$.\nNotice that $W^2 = \\mathbb{I} + \\mathcal{O} (\\epsilon)$, which is not the case for $W$. We will therefore consider the limit of an \\emph{even} number of steps.\n\nSetting\n\\begin{equation}\\epsilon = \\gamma\\Delta t \\ , \\ \\ t = 2n\\Delta t \\end{equation}\nand applying \\eqref{eq8} twice, we obtain\n\\begin{eqnarray}\\label{eq8a} &&\\Psi_0 (x, n+2) - \\Psi_0 (x,n) \\nonumber\\\\\n&& = \\gamma \\Delta t \\left[ \\Psi_1 (x,n) - \\Psi_1 (x-2,n) \\right] \\nonumber\\\\\n&& \\Psi_1 (x, n+2) - \\Psi_1 (x,n) \\nonumber\\\\\n&&= -\\gamma \\Delta t \\left[ \\Psi_0 (x,n) - \\Psi_0 (x+2,n) \\right]\n\\end{eqnarray}\nfrom which we deduce the continuous-time quantum walk in the limit $\\Delta t\\to 0$,\n\\begin{eqnarray}\\label{eq8b} \\frac{\\partial\\Psi_0 (x, t)}{\\partial t} &=& \\gamma \\left[\n\\Psi_1 (x,t) - \\Psi_1 (x-2,t) \\right] \\nonumber\\\\\n\\frac{\\partial\\Psi_1 (x, t)}{\\partial t} &=& \\gamma \\left[\n- \\Psi_0 (x,t) + \\Psi_0 (x+2,t) \\right]\n\\end{eqnarray}\nDefining\n\\begin{equation}\\label{eq27} \\Phi_\\pm (x) = \\pm \\Psi_0 (x) + \\Psi_1 (x-1 ) \\end{equation}\nwe obtain the decoupled equations\n\\begin{equation}\\label{eq37} \\frac{\\partial\\Phi_\\pm (x, t)}{\\partial t} = \\pm \\gamma \\left[\n\\Phi_\\pm (x+1,t) - \\Phi_\\pm (x-1,t) \\right]\n\\end{equation}\nrelated to each other by time reversal.\n\nWorking similarly in the other phase (with $\\nu_0 = 0$), we obtain in the continuous-time
limit,\n\\begin{eqnarray}\\label{eq8bb} \\frac{\\partial\\Psi_0 (x, t)}{\\partial t} &=& \\gamma \\left[\n\\Psi_1 (x,t) - \\Psi_1 (x+2,t) \\right] \\nonumber\\\\\n\\frac{\\partial\\Psi_1 (x, t)}{\\partial t} &=& \\gamma \\left[\n- \\Psi_0 (x,t) + \\Psi_0 (x-2,t) \\right]\n\\end{eqnarray}\nIt is easy to see that these are equivalent to the decoupled Eqs.\\ \\eqref{eq37} under the definition $\\Phi_\\pm (x) = \\mp \\Psi_0 (x) + \\Psi_1 (x+1 )$ (\\emph{cf.}\\ with Eq.\\ \\eqref{eq27}).\n\nThe above results are no longer valid if the coin parameters are spatially dependent. In particular, we are interested in the case in which there are two regions in space which have different topological numbers. This can be achieved by having $\\theta > 0$ and $\\theta < 0$ in the respective regions.\nThen the above results are valid in the bulk of each region, but not along their boundaries.\n\nFor the continuous-time limit, we need to consider the limit of $4n$ steps as $n\\to\\infty$. This is because $W^2 \\ne \\mathbb{I} + \\mathcal{O} (\\epsilon)$, due to an obstruction at the boundary of the two regions, but we still have $W^4 = \\mathbb{I} + \\mathcal{O} (\\epsilon)$.\n\n\n\\begin{figure}[htp]\n\t\\centering\n\t\\includegraphics[scale=1.5]{CTQW-trapped.png}\n\t\\caption{Trapped state at the boundary of two topological phases of a continuous simple-step walk.}\\label{fig:3}\n\\end{figure}\n\n\n\\begin{figure}[htp]\n\t\\centering\n\t\\includegraphics[scale=0.4]{CTQW.png}\n\t\\caption{State being reflected at the boundary of two topological phases of a continuous simple-step walk.}\\label{fig:4}\n\\end{figure}\nIn the continuous-time limit, away from the boundary the results match those of single topological phases. 
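The decoupling of \eqref{eq8b} into \eqref{eq37} under the substitution \eqref{eq27} can be verified numerically on a ring (the random states, ring size, and value of $\gamma$ are arbitrary):

```python
import numpy as np

# Check that Phi_pm(x) = ±Psi_0(x) + Psi_1(x-1), Eq. (eq27), turns the
# coupled equations (eq8b) into the chiral equations (eq37).
rng = np.random.default_rng(1)
N, gamma = 64, 0.7
psi0 = rng.normal(size=N) + 1j * rng.normal(size=N)
psi1 = rng.normal(size=N) + 1j * rng.normal(size=N)

# Right-hand sides of the coupled equations (eq8b):
dpsi0 = gamma * (psi1 - np.roll(psi1, 2))    # psi1(x) - psi1(x-2)
dpsi1 = gamma * (-psi0 + np.roll(psi0, -2))  # -psi0(x) + psi0(x+2)

errs = []
for sign in (+1, -1):
    phi = sign * psi0 + np.roll(psi1, 1)      # Phi_pm(x)
    dphi = sign * dpsi0 + np.roll(dpsi1, 1)   # time derivative of Phi_pm
    rhs = sign * gamma * (np.roll(phi, -1) - np.roll(phi, 1))  # Eq. (eq37)
    errs.append(float(np.max(np.abs(dphi - rhs))))
print(max(errs))   # ~ 0: the two chiral sectors evolve independently
```

The residual is at floating-point precision, confirming the algebraic decoupling claimed in the text.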
More precisely, for $x\\ge 2$, we recover Eq.\\ \\eqref{eq8bb} whereas for $x\\le -3$, we recover Eq.\\ \\eqref{eq8b}.\nNear the boundary (for $-3 < x < 2$), we obtain\n\\begin{eqnarray} \\frac{\\partial\\Psi_0(1)}{\\partial t}\n&=& \\gamma \\Psi_1 (1) \\nonumber\n\\\\ \\frac{\\partial\\Psi_1 (1)}{\\partial t} &=& - \\gamma \\left[ \\Psi_0 (1) - \\Psi_0 (3) \\right] \\nonumber\n\\\\ \\frac{\\partial\\Psi_0(0)}{\\partial t}\n&=& 0 \\nonumber\\\\\n\\frac{\\partial\\Psi_1 (0)}{\\partial t} &=& \\gamma \\Psi_0 (2) \\nonumber\n\\\\\n\\frac{\\partial\\Psi_0 (-1)}{\\partial t} &=& 0 \\nonumber\n\\\\\n\\frac{\\partial\\Psi_1(-1)}{\\partial t}\n&=& \\gamma \\Psi_0 (-3) \\nonumber\\\\\n\\frac{\\partial\\Psi_0 (-2)}{\\partial t} &=& \\gamma \\Psi_1 (-2) \\nonumber\\\\\n\\frac{\\partial\\Psi_1(-2)}{\\partial t}\n&=& -\\gamma \\left[ \\Psi_0 (-2) - \\Psi_0 (-4) \\right]\n\\end{eqnarray}\nNotice that $\\Psi_0 (0)$ and $\\Psi_1 (-1)$ decouple, so if initially $\\Psi_0(0) =1$, then the walk is trapped at $x=0$, and similarly for $\\Psi_1(-1)$. Moreover, if a walk starts entirely within one of the topologically distinct regions (or at the boundary, $x=0$), it remains in it. This matches the asymptotic behavior observed in the discrete-time case (cf. Fig.~\\ref{fig:1}).\n\nBy defining\n$\\Phi_\\pm$ as in \\eqref{eq27}, we recover the equations of motion for decoupled walks\n\\eqref{eq37} away from the boundary (for $|x|\\ge 2$). In the continuous-time limit of the simple-step quantum walk, we see the same behavior as we did for the discrete quantum walk. We have the same two bound states near the boundary, at $x = 0,1$. This is shown in Fig.~\\ref{fig:3}. Away from the boundary, the system diffuses ballistically, and is reflected at the boundary, as shown in Fig.~\\ref{fig:4}.\n\n\\section{Split-step Walk}\n\\label{sec:2}\n\nAs in the previous section, we rederive the discrete-time case here for completeness.
The original results can be found in \\cite{Kit,Kitagawa2010,Obuse2015}; our new results on the continuous-time limit follow thereafter.\n\n\\subsection{Discrete-time walk}\nFor the split-step walk, we flip two coins, $T(\\theta_1)$ and $T(\\theta_2)$, where\n$T(\\theta)$ is defined in \\eqref{eqTtheta}.\nA step of the walker is represented by\n\\begin{equation} U = S_-T(\\theta_2)S_+ T(\\theta_1) \\end{equation}\nIt is convenient to define the repeated block\n\\begin{equation}\\label{eq10as} U' = e^{-i \\frac{\\theta_1}{2} Y} Z S_- e^{-i \\theta_2 Y} Z S_+ e^{-i\\frac{\\theta_1}{2} Y} \\end{equation}\nThe advantage of working with $U'$ instead of $U$ is that $U'$ is of the form\n\\begin{equation}\\label{eq10s} U' = -F X F^{-1} X \\end{equation}\nwhere\n\\begin{equation}\\label{eqFsplit} F = e^{-i\\frac{\\theta_1}{2} Y} ZS_- e^{-i\\frac{\\theta_2}{2} Y}\n\\end{equation}\nAfter switching to a frame in which $X$ is diagonal, we arrive at $W$ given by \\eqref{eq13}, and $W_c$ acting on a coin (Eq.\\ \\eqref{eq15}) as\n\\begin{equation} W_c\n= \\begin{pmatrix}\n\t\\beta_0 & \\beta_1 \\\\ -\\beta_1^\\ast & \\beta_0\n\\end{pmatrix} \\end{equation}\nwhere\n\\begin{eqnarray} \\beta_0 &=& \\cos k \\cos\\theta_1\\cos\\theta_2 + \\sin\\theta_1\\sin\\theta_2 \\nonumber\\\\\n\\beta_1 &=& -(i\\sin k + \\cos k \\sin\\theta_1)\\cos\\theta_2 + \\cos\\theta_1\\sin\\theta_2\\ \\ \\\n\\end{eqnarray}\nThe eigenvalues are\n\\begin{equation} e^{-i\\omega_\\pm} = \\beta_0 \\mp i\\sqrt{1-\\beta_0^2} \\end{equation}\nSimilarly, for $G_c$ defined by \\eqref{eq13} and \\eqref{eq15} with $F$ given by\n\\eqref{eqFsplit}, we obtain\n\\begin{equation} G_c\n= e^{-ik\/2}\\begin{pmatrix}\n\t\\gamma_0 & \\gamma_1 \\\\ \\gamma_1^\\ast & -\\gamma_0^\\ast\n\\end{pmatrix} \\end{equation}\nwhere\n\\begin{eqnarray} \\gamma_0 &=& \\cos \\frac{k}{2} \\cos\\frac{\\theta_-}{2} + i \\sin \\frac{k}{2} \\cos\\frac{\\theta_+}{2} \\nonumber\\\\\n\\gamma_1 &=& \\cos \\frac{k}{2} 
\\sin\\frac{\\theta_-}{2} - i \\sin \\frac{k}{2} \\sin\\frac{\\theta_+}{2}\\ \\ \\\n\\end{eqnarray}\nand we defined $\\theta_\\pm = \\theta_1 \\pm \\theta_2$.\n\nFor the two topological invariants, we obtain, respectively,\n\\begin{equation} \\nu_0 = \\int_{-\\pi}^{\\pi} \\frac{dk}{2\\pi}\\frac{1}{1-z_0 e^{ik} } \\ , \\ \\ z_0 = \\frac{\\cos\\frac{\\theta_+}{2} + \\sin\\frac{\\theta_-}{2}}{\\cos\\frac{\\theta_+}{2} - \\sin\\frac{\\theta_-}{2}} \\end{equation}\nand\n\\begin{equation} \\nu_1 = \\int_{-\\pi}^{\\pi} \\frac{dk}{2\\pi}\\frac{1}{1+ z_1 e^{ik} } \\ , \\ \\ z_1 = \\frac{\\cos\\frac{\\theta_-}{2} - \\sin\\frac{\\theta_+}{2}}{\\cos\\frac{\\theta_-}{2} + \\sin\\frac{\\theta_+}{2}}\n\\end{equation}\nIt is easy to see that\n\\begin{equation} \\nu_\\alpha = \\left\\{ \\begin{array}{ccc}\n\t1 & , & |z_\\alpha| < 1 \\\\ 0 & , & |z_\\alpha|>1\n\\end{array}\\right. \\ , \\ \\\n\\alpha = 0,1~.\n\\end{equation}\nThus we obtain four different topological phases,\n\\begin{align}\n\tI (\\nu_0,\\nu_1) &= (1, 1)\\\\\n\tII (\\nu_0,\\nu_1) &= (0, 0)\\\\\n\tIII (\\nu_0,\\nu_1) &= (0,1)\\\\\n\tIV (\\nu_0,\\nu_1) &= (1,0)\n\\end{align}\nIn position space, the action of $W$ yields\n\\begin{widetext}\n\\begin{eqnarray}\\label{eq8split} \\Psi_0 (x) &\\to& \\frac{\\cos\\theta_1\\cos\\theta_2}{2} \\left[ \\Psi_0 (x-1) + \\Psi_0 (x+1) \\right] + \\sin\\theta_1\\sin\\theta_2 \\Psi_0 (x)\\nonumber\\\\\n&& - \\frac{1 + \\sin\\theta_1}{2}\\cos\\theta_2 \\Psi_1 (x-1) + \\frac{1 - \\sin\\theta_1}{2}\\cos\\theta_2 \\Psi_1 (x+1) + \\cos\\theta_1 \\sin\\theta_2 \\Psi_1 (x)\\nonumber\\\\\n\\Psi_1 (x) &\\to& \\frac{\\cos\\theta_1\\cos\\theta_2}{2} \\left[ \\Psi_1 (x-1) + \\Psi_1 (x+1) \\right] + \\sin\\theta_1\\sin\\theta_2 \\Psi_1 (x) \\nonumber\\\\\n&& - \\frac{1 - \\sin\\theta_1}{2}\\cos\\theta_2 \\Psi_0 (x-1) + \\frac{1 + \\sin\\theta_1}{2}\\cos\\theta_2 \\Psi_0 (x+1) - \\cos\\theta_1 \\sin\\theta_2 \\Psi_0 
(x)\n\\end{eqnarray}\n\\end{widetext}\n\n\\begin{figure}[htp]\n\t\\centering\n\t\\includegraphics[scale=1.5]{SQW_type1-trapped.png}\n\t\\caption{Trapped state at the boundary of topological phases \\emph{III} and \\emph{IV} of a discrete split-step walk.}\\label{fig:5}\n\\end{figure}\n\n\\begin{figure}[htp]\n\t\\centering\n\t\\includegraphics[scale=0.35]{SQW1.png}\n\t\\caption{State being reflected at the boundary of topological phases \\emph{III} and \\emph{IV} of a discrete split-step walk.}\\label{fig:7}\n\\end{figure}\n\n\\begin{figure}[htp]\n\t\\centering\n\t\\includegraphics[scale=1.5]{SQW_type2-trapped.png}\n\t\\caption{Trapped state at the boundary of topological phases \\emph{I} and \\emph{III} of a discrete split-step walk.}\\label{fig:6}\n\\end{figure}\n\n\\begin{figure}[htp]\n\t\\centering\n\t\\includegraphics[scale=0.35]{SQW2.png}\n\t\\caption{State being reflected at the boundary of topological phases \\emph{I} and \\emph{III} of a discrete split-step walk.}\\label{fig:8}\n\\end{figure}\n\nFigures \\ref{fig:5} and \\ref{fig:7} show the behavior of a discrete split-step quantum walk with topological phases \\emph{III} for $x\\ge 0$ and \\emph{IV} for $x<0$. In Fig.~\\ref{fig:5}, we see the same two bound states near the boundary of the two phases as in the case of the simple-step walk. This is expected, because the boundary between phases \\emph{III} and \\emph{IV} is equivalent to the boundary between two topologically distinct phases of a simple-step quantum walk. This behavior has been demonstrated experimentally by Kitagawa \\emph{et al.}~\\cite{Kitagawa2012}. Figure \\ref{fig:7} shows the behavior of the split-step quantum walk away from the boundary. In particular, a system whose initial state is entirely within a single topological phase will never leave the region of that phase. 
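As an aside, the values quoted above for the invariants $\nu_\alpha$ can be verified numerically. A minimal sketch (our illustration, not from the paper): the integral $\frac{1}{2\pi}\int_{-\pi}^{\pi} dk\,(1 - z e^{ik})^{-1}$ is evaluated with the trapezoidal rule, which converges very rapidly for periodic integrands, and returns $1$ for $|z|<1$ and $0$ for $|z|>1$.

```python
import cmath
import math

def winding(z, n=4096):
    # nu = (1/2pi) * Integral_{-pi}^{pi} dk / (1 - z * e^{ik}).
    # For a periodic integrand the endpoints coincide, so the trapezoidal
    # rule reduces to a plain average over a uniform grid in k.
    total = sum(1.0 / (1.0 - z * cmath.exp(2j * math.pi * j / n))
                for j in range(n))
    return (total / n).real

nu_in = winding(0.5)   # |z| < 1: expect nu = 1
nu_out = winding(2.0)  # |z| > 1: expect nu = 0
```

Expanding the integrand as a geometric series in $e^{\pm ik}$ shows why the grid average is essentially exact: only the constant Fourier mode survives, up to terms of order $|z|^{\mp n}$.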
When such a state comes in contact with the boundary between two phases, it is reflected back.\n\nFigures \\ref{fig:6} and \\ref{fig:8} show the behavior of a discrete split-step quantum walk with topological phases \\emph{I} for $x\\ge 0$ and \\emph{III} for $x<0$. Unlike in the previous case, there is a single bound state at the boundary between phases \\emph{I} and \\emph{III}, at $x=-1$. This has also been observed experimentally~\\cite{Kitagawa2012}. Away from the boundary, we obtain the same qualitative behavior as in the previous case, including reflection of the state at the boundary.\n\n\\subsection{Continuous-time limit}\n\nThe continuous-time limit is obtained as $\\theta_2 \\to\\pm \\frac{\\pi}{2}$, $\\theta_1\\to 0$, or $\\theta_1 \\to\\pm \\frac{\\pi}{2}$, $\\theta_2\\to 0$. These limits correspond, respectively, to the four distinct topological phases listed above,\n\\begin{align}\n\tI (\\theta_1,\\theta_2) &= (0, \\frac{\\pi}{2})\\\\\n\tII (\\theta_1,\\theta_2) &= (0, -\\frac{\\pi}{2})\\\\\n\tIII (\\theta_1,\\theta_2) &= (\\frac{\\pi}{2},0)\\\\\n\tIV (\\theta_1,\\theta_2) &= (-\\frac{\\pi}{2},0)\n\\end{align}\n\\begin{figure}[htp]\n\t\\centering\n\t\\includegraphics[scale=1.5]{CTSQW_type1-trapped.png}\n\t\\caption{Trapped state at the boundary of topological phases \\emph{III} and \\emph{IV} of a continuous split-step walk.}\\label{fig:9}\n\\end{figure}\n\n\n\\begin{figure}[htp]\n\t\\centering\n\t\\includegraphics[scale=0.35]{CTSQW1.png}\n\t\\caption{State being reflected at the boundary of topological phases \\emph{III} and \\emph{IV} of a continuous split-step walk.}\\label{fig:10}\n\\end{figure}\nIn phase $I$, we set $\\theta_1 = \\epsilon_1$, $\\theta_2 = \\frac{\\pi}{2} - \\epsilon_2$, and consider the scaling limit in which $\\epsilon_{1,2}\\to 0$, $n\\to\\infty$, so that the products $n\\epsilon_{1,2}$ remain finite. In this limit, $\\omega_\\pm \\to \\pm \\frac{\\pi}{2}$. We have $W^2 = -\\mathbb{I} + \\mathcal{O} (\\epsilon_{1,2})$. 
We will therefore multiply the wavefunction by a phase $i$, and consider the limit of an even number of steps. Setting\n\\begin{equation} \\epsilon_{1,2} = \\gamma_{1,2} \\Delta t \\ , \\ \\ t = 2n\\Delta t \\end{equation}\nwe obtain from \\eqref{eq8split} the equations of motion in the limit $\\Delta t \\to 0$,\n\\begin{eqnarray}\\label{eqomI} \\frac{\\partial\\Psi_0 (x)}{\\partial t} &=& -2\\gamma_1 \\Psi_1 (x) - \\gamma_2 \\left[ \\Psi_1 (x-1) + \\Psi_1 (x+1) \\right] \\nonumber\\\\\n\\frac{\\partial\\Psi_1(x)}{\\partial t} &=& 2\\gamma_1 \\Psi_0 (x) + \\gamma_2 \\left[ \\Psi_0 (x-1) + \\Psi_0 (x+1) \\right]\\ \\ \\ \\\n\\end{eqnarray}\nWorking similarly, we obtain the same equations of motion \\eqref{eqomI} in the continuous-time limit in phase $II$. In phases $I$ and $II$, these equations are decoupled by the on-site combination $\\Phi_\\pm (x) = \\mp i \\Psi_0 (x) + \\Psi_1 (x)$, which obeys Eq.\\ \\eqref{eq48} below with the roles of $\\gamma_1$ and $\\gamma_2$ interchanged.\n\nIn phase $III$, we obtain the equations of motion\n\\begin{eqnarray}\\label{eqomIII} \\frac{\\partial\\Psi_0 (x)}{\\partial t} &=& \\gamma_1 \\left[ \\Psi_1 (x) +\\Psi_1 (x-2) \\right] +2\\gamma_2 \\Psi_1 (x-1)\\nonumber\\\\\n\\frac{\\partial\\Psi_1(x)}{\\partial t} &=& - \\gamma_1 \\left[ \\Psi_0 (x) +\\Psi_0 (x+2) \\right] - 2\\gamma_2 \\Psi_0 (x+1) \\ \\ \\ \\ \\\n\\end{eqnarray}\nand in phase $IV$,\n\\begin{eqnarray}\\label{eqomIV} \\frac{\\partial\\Psi_0 (x)}{\\partial t} &=& \\gamma_1 \\left[ \\Psi_1 (x) +\\Psi_1 (x+2) \\right] +2\\gamma_2 \\Psi_1 (x+1)\\nonumber\\\\\n\\frac{\\partial\\Psi_1(x)}{\\partial t} &=& - \\gamma_1 \\left[ \\Psi_0 (x) +\\Psi_0 (x-2) \\right] - 2\\gamma_2 \\Psi_0 (x-1) \\ \\ \\ \\ \\\n\\end{eqnarray}\nThey can be put into the decoupled form\n\\begin{equation}\\label{eq48} \\frac{\\partial\\Phi_\\pm (x)}{\\partial t} = \\pm 2i\\gamma_2 \\Phi_\\pm (x) \\pm i \\gamma_1 \\left[ \\Phi_\\pm (x-1) + \\Phi_\\pm (x+1) \\right] \\end{equation}\nif we define\n\\begin{equation}\\label{eq47} \\Phi_\\pm (x) = \\pm i \\Psi_0 (x) + \\Psi_1 (x-1) \\end{equation}\nin phase $III$, and $\\Phi_\\pm (x) 
= \\pm i \\Psi_0 (x) + \\Psi_1 (x+1)$ in phase $IV$.\n\nThere are six different boundaries, but only two are qualitatively different. We proceed to consider a representative from each type.\n\nFor a system in phase $III$ for $x\\ge 0$, and phase $IV$ for $x<0$,\nworking as before, we obtain for $x\\le -3$ the equations of motion \\eqref{eqomIV}, and for $x\\ge 1$, the equations of motion \\eqref{eqomIII}. Near the boundary of the two phases, we have\n\t\\begin{eqnarray}\\label{eq53}\n\t\\frac{\\partial\\Psi_0 (-2)}{\\partial t} &=& \\gamma_1 \\Psi _1(-2) + 2\\gamma_2\\Psi _1(-1)\\nonumber\\\\\n\t\\frac{\\partial\\Psi_1 (-2)}{\\partial t} &=& -\\gamma_1\\left( \\Psi _0(-2) + \\Psi_0(-4) \\right) - 2\\gamma_2\\Psi _0(-3)\\nonumber\\\\\n\t\t\\frac{\\partial\\Psi_0 (-1)}{\\partial t} &=& 0\\nonumber\\\\\n\t\t\\frac{\\partial\\Psi_1 (-1)}{\\partial t} &=& -\\gamma_1\\Psi_0(-3) - 2\\gamma_2\\Psi_0(-2)\\nonumber\\\\\n\t\t\\frac{\\partial\\Psi_0 (0)}{\\partial t} &=& 0 \\nonumber\\\\\n\t\t\\frac{\\partial\\Psi_1 (0)}{\\partial t} &=& -\\gamma_1\\Psi _0(2) - 2\\gamma_2\\Psi_0(1)\\nonumber\\\\\n\t\t\\frac{\\partial\\Psi_0(1)}{\\partial t} &=& \\gamma_1\\Psi_1(1) + 2\\gamma_2\\Psi_1(0) \\nonumber\\\\\n\t\t\\frac{\\partial\\Psi_1(1)}{\\partial t} &=& -\\gamma_1\\left( \\Psi_0(1) + \\Psi_0(3) \\right) - 2\\gamma_2\\Psi_0(2)\n\t\\end{eqnarray}\n\n\n\\begin{figure}[htp]\n\t\\centering\n\t\\includegraphics[scale=1.5]{CTSQW_type2-trapped.png}\n\t\\caption{Trapped state at the boundary of topological phases \\emph{I} and \\emph{III} of a continuous split-step walk.}\\label{fig:11}\n\\end{figure}\n\n\\begin{figure}[htp]\n\t\\centering\n\t\\includegraphics[scale=0.35]{CTSQW2.png}\n\t\\caption{State being reflected at the boundary of topological phases \\emph{I} and \\emph{III} of a continuous split-step walk.}\\label{fig:12}\n\\end{figure}\n\n\nSimilarly, for a system in phase $I$ for $x\\ge 0$, and phase $III$ for $x<0$,\nwe obtain for $x\\ge 1$, the equations of motion 
\\eqref{eqomI}, and for $x\\le -3$, the equations of motion \\eqref{eqomIII}.\nNear the boundary, we have\n\\begin{eqnarray}\\label{eq54}\n\\frac{\\partial\\Psi_0 (-2)}{\\partial t} &=& \\gamma_1\\left[\\Psi _1(-4)+\\Psi _1(-2)\\right] + 2\\gamma_2\\Psi _1(-3) \\nonumber\\\\\n\\frac{\\partial\\Psi_1 (-2)}{\\partial t} &=& -\\gamma_1\\Psi _0(-2)-2\\gamma_2 \\Psi _0(-1)\\nonumber\\\\\n\\frac{\\partial\\Psi_0 (-1)}{\\partial t} &=& \\gamma_1\\Psi _1(-3) +2\\gamma_2 \\Psi _1(-2)\\nonumber\\\\\n\\frac{\\partial\\Psi_1 (-1)}{\\partial t} &=& 0 \\nonumber\\\\\n\\frac{\\partial\\Psi_0 (0)}{\\partial t} &=& - \\gamma_2\\Psi _1(1)\\nonumber\\\\\n\\frac{\\partial\\Psi_1 (0)}{\\partial t} &=& \\gamma _2\\Psi _0(1)\n\\end{eqnarray}\nFigures \\ref{fig:9}, \\ref{fig:10}, \\ref{fig:11}, and \\ref{fig:12} depict the continuous-time limit of the discrete quantum walks shown in Figs.~\\ref{fig:5}, \\ref{fig:7}, \\ref{fig:6}, and \\ref{fig:8}, respectively. As expected, the observed behavior matches the asymptotic behavior at and away from the boundary in the discrete case. It should be noted that in the continuous-time limit, the bound states can be found analytically from \\eqref{eq53} for the boundary between phases \\emph{III} and \\emph{IV}, and \\eqref{eq54} for the boundary between phases \\emph{I} and \\emph{III}. These bound states are all in agreement with the asymptotic results obtained above in the corresponding discrete cases, as well as experimental results \\cite{Kitagawa2012}.\n\n\\begin{widetext}\n\n\\begin{figure}[htp]\n \\centering\n\t\\includegraphics[scale=1.0]{stability.png}\n\t\\caption{Bound states near the boundary between two phases. From left to right, the distributions of $|\\Psi_0(x)|^2, |\\Psi_1(x)|^2, |\\Psi_0(x)|^2 + |\\Psi_1(x)|^2$ after a short time $t=25$. In each plot, the value $R$ corresponds to the ratio between walk parameters $\\gamma_1, \\gamma_2$. 
(a) At the boundary between phases \\emph{III} and \\emph{IV} with the initial state centered at $\\Psi_0(-1)\\ \\text{and}\\ \\Psi_0(0)$. (b) At the boundary between phases \\emph{I} and \\emph{III} with the initial state centered at $\\Psi_1(-1)$.}\\label{fig:13}\n\\end{figure}\n\n\\end{widetext}\n\nAs discussed above, the split-step quantum walk with boundary between the topological phases \\emph{III} and \\emph{IV} gives rise to two topologically protected bound states at the boundary. In Fig.~\\ref{fig:13}, we show that these bound states are robust against small changes in the quantum walk parameters.\n\n\\begin{figure}[htp]\n \\centering\n\t\\includegraphics[scale=0.75]{ballistic.png}\n\t\\caption{Ballistic diffusion of continuous walks without boundary. (a) In phase \\emph{III} with no boundary the quantum walk diffuses ballistically. This diffusion can occur to the right or to the left depending on the initial state. Here the initial state was centered at $\\Psi_0(0)$ and $\\Psi_1(0)$ with $\\Psi_1(0) = \\pm \\Psi_0(0)$. (b) In phase \\emph{I} with no boundary the quantum walk diffuses ballistically with the same behavior regardless of the initial state.}\\label{fig:14}\n\\end{figure}\n\nFigure \\ref{fig:14} shows that in the case of a single topological phase, the state ballistically diffuses away from its initial position. In topological phase \\emph{III}, one can choose the initial state in such a way that the diffusion only occurs in a single direction. Due to the linearity of the system, it follows that the initial state can also be chosen so that the diffusion occurs in both directions. However, in phase \\emph{I}, regardless of the choice of initial state, the diffusion invariably occurs in both directions ballistically.\n\n\n\\section{Conclusions}\n\\label{sec:3}\n\nIn conclusion, we have investigated the continuous-time limit of discrete quantum walks with topological phases. 
In quantum walks, it is common to consider both discrete- and continuous-time evolution. In recent years much interest has been devoted to understanding how discrete-time quantum walks can simulate topological insulators. Here we have shown the existence of a continuous-time limit that preserves their topological phases. We considered both simple-step and split-step walks and derived analytically the equations of motion governing their behaviors. Through our analytical solutions we showed the existence of bound states at the boundary of two phases. We also solved the equations of motion numerically in the bulk. In terms of future work, it would be interesting to consider the alternative continuous-time limit approach given in \\cite{Dheeraj2015} to study topological properties of quantum walks.\n\n\\acknowledgments{We thank C.\\ M. Chandrashekar for illuminating discussions. D.\\ C.\\ and G.\\ S.\\ thank the Army Research Laboratory, where most of this work was performed, for its hospitality and financial support.}\n\n\\section{Introduction}\\label{Introduction}\n\nIn the past years, measurements of the absolute sky emission at\nfrequency $\\nu \\sim 1$ GHz have been carried out to evaluate the\nbrightness temperature ($T_{cmb}$) of the Cosmic Microwave\nBackground (CMB). Besides the non-trivial problem of assuring an\naccurate absolute calibration of the measured signal, we need to\nremember that the sky emission is a superposition of different\ncontributions. 
After subtracting the local emissions (mainly due to the atmosphere inside the main beam, the ground, and radio frequency interference in the far side-lobes) the sky brightness\ntemperature ($T_{sky}$) can be written as:\n\n\\begin{equation}\nT_{sky}(\\nu,\\alpha,\\delta) = T_{cmb}(\\nu) + T_{gal}(\\nu,\\alpha,\n\\delta) + T_{UERS}(\\nu)\n\\end{equation}\n\n\\noindent where $T_{gal}$ is the emission of our galaxy and\n$T_{UERS}(\\nu)$ the temperature of the unresolved extragalactic\nradio sources (UERS). In the present paper we evaluate the UERS\nbrightness temperature and its frequency dependence.\n\nThis paper follows a series of others describing the measurements\nof the sky brightness temperature at frequencies close to 1 GHz\ngathered by the TRIS experiment with an angular resolution\n$FWHM_{TRIS} \\sim 20$ deg (\\cite[]{TRIS-I}, \\cite[]{TRIS-II},\n\\cite[]{TRIS-III}). The results obtained in the present paper were\nused to disentangle the components of the sky brightness and to\nevaluate the CMB temperature at the frequencies $\\nu =$ 600, 820\nand 2500 MHz (\\cite[]{TRIS-II}). The aim of this work is to\nprovide a new estimate of the integrated contribution of UERS to\nthe diffuse brightness of the sky. An accurate estimate of\n$T_{UERS}(\\nu)$ is necessary for the TRIS experiment, but also for\nall the experiments aimed at the study of the spectral distortions\nin the Rayleigh-Jeans tail of the CMB spectrum. Deviations from\nthe black-body distribution can be present at low frequency, but\nthe amplitude of the distortions at frequencies around 1 GHz is\nnowadays constrained by past experiments at the level of a few tens\nof mK \\cite[]{Fixen_96}.\n\nExperiments like TRIS \\cite[]{TRIS-I} can reach a control of\nsystematics at the level of $\\sim$50 mK, a remarkable improvement\ncompared to previous measurements at the same frequencies. 
On\nthe other hand, relying on the current knowledge of both amplitude\nand spectrum of the UERS signal \\cite[]{Longair_66}, we can\nestimate that at 600, 820, 1400 and 2500 MHz (where CMB\nobservations have been carried out in the past) the extra-galactic\ncontribution is respectively $810\\pm 180$ mK, $340\\pm 80$ mK,\n$79\\pm 19$ mK and $16\\pm 4$ mK (see for example \\cite{Sironi_90}).\nUsing the current 178 MHz normalization \\cite[]{Longair_66}, for\nstate-of-the-art experiments, this means that the uncertainty\nassociated with the UERS at the lowest frequencies (which are the\nmost interesting when looking for CMB spectral distortions), is\npotentially higher than instrumental systematics. In this paper we\nshow that by exploiting all the data available in literature we\ncan significantly improve the present status of our knowledge\nabout the UERS contribution, and that TRIS-like experiments are\nessentially limited by the current technology. New and updated\nestimates of the brightness temperature of UERS will be useful\nalso for feasibility studies of future experiments in this field.\n\nThis paper is organized as follows: Section \\ref{Sources}\ndiscusses the general properties of the UERS and the data in\nliterature. In Section \\ref{Fit} we describe the procedure to fit\nthe available number counts; in Section \\ref{Brightness} we\ncalculate the UERS sky brightness and its frequency dependence.\nFinally in Section \\ref{Discussion} we discuss the implications of\nthe results obtained for astrophysics and cosmology.\n\n\n\\section{The extragalactic radio sources}\\label{Sources}\n\n\\subsection{The population of sources}\n\nThe unresolved extragalactic radio sources contribute as a blend\nof point sources to the measurements of diffuse emission,\nespecially with poor angular resolution. Actually UERS are an\ninhomogeneous collection of quasars, radio galaxies, and other\nobjects. 
These can be both compact and extended sources with\ndifferent local radio luminosity functions, lifetimes and cosmic\nevolution. An extensive discussion can be found in\n\\cite{Longair_78}.\n\nUsually we can distinguish two populations of radio sources: {\\it\nsteep spectrum} sources if $\\alpha > 0.5$, and {\\it flat spectrum}\nsources if $\\alpha < 0.5$, where $\\alpha$ is the spectral index of\nthe source spectrum ($S(\\nu)\\propto \\nu^{-\\alpha}$). Compact radio\nsources, like quasars, have mostly a flat spectrum ($\\alpha \\simeq\n0$) and are most commonly detected at higher frequencies. On the\nother hand, extended sources, like radio-galaxies, have a steep\nspectrum ($\\alpha \\simeq 0.7-0.8$) and dominate low frequency\ncounts (see \\cite[]{Peacock_81a} and \\cite[]{Longair_78}).\n\n{\\it Steep spectrum} sources and {\\it flat spectrum} sources\ncontribute in different ways to the number counts. {\\it Flat\nspectrum} sources are important only at high frequency ($\\nu\n\\gtrsim 2$ GHz). In the same range, {\\it flat spectrum} source\ncounts seem to be comparable to {\\it steep spectrum} source counts\nfor high fluxes, but at low fluxes {\\it steep spectrum} sources\nstill dominate, as shown for example by \\cite{Condon_84a}\nand \\cite{Kellermann_87}. The total number counts have been\nsuccessfully fitted using a luminosity evolution model by\n\\cite{Peacock_81a} and \\cite{Danese_87}.\n\n\\subsection{Isotropy}\n\nThe large scale isotropy of the extragalactic radio sources has\nbeen studied by \\cite{Webster_77}. He analyzed several samples\nof sources measured in a cube of 1 Gpc-side, getting an upper\nlimit $\\Delta N \/ N < 3 \\%$ on the fluctuation of the source\nnumber. 
This limit is set by the finite statistics of the sources\ncontained in the survey: $N \\sim 10^{4}$.\n\n\\cite{Franceschini_89} have evaluated the fluctuation of the\nsource counts assuming a Poissonian distribution: at $\\nu = 5$ GHz\nthey found fluctuations of the antenna temperature $\\Delta T_A \/\nT_A < 10^{-4}$ over an angular scale $\\theta \\sim 5$ deg. This\nfluctuation rapidly decreases at larger scales. At lower\nfrequencies the fluctuations increase, but we have at most $\\Delta\nT_A \/ T_A < 10^{-2}$ for $\\nu=408$ MHz and $\\Delta T_A \/ T_A <\n10^{-4}$ for $\\nu=2.5$ GHz at an angular scale $\\theta \\sim 20$\ndeg.\n\nDue to these considerations, the\ncontribution of UERS to the sky brightness at large angular scale\nis assumed isotropic in the following discussion.\nMoreover, the radio sources are assumed\nto be randomly distributed and the fluctuation in the number\ncounts is assumed to be Poissonian. A possible anisotropic\ncontribution of UERS is in most cases negligible and limited to\nthe region of the super-galactic plane (see \\cite[]{Shaver_89}).\n\n\\subsection{The data set}\n\nMany source-number versus flux distributions have been produced in the\npast years: radio source counts at low frequency have been\npublished since the 1960s, while deep surveys at higher\nfrequencies have been performed recently. Compilations of source\ncounts can be found in several papers (see for example\n\\cite[]{Condon_84a}, \\cite[]{Franceschini_89},\n\\cite[]{Toffolatti_98}, \\cite[]{Windhorst_93}). Most of the data\nwe used can be extracted from the references included in these\npapers.\n\nWe used the counts distributions at the frequencies between 150\nand 8000 MHz, spanning a frequency range larger than the one\ncovered by the TRIS experiment. In the literature we found data at\neight frequencies: $\\nu=$ 151 MHz, 178 MHz, 408 MHz, 610 MHz, 1.4\nGHz, 2.7 GHz, 5.0 GHz, 8.44 GHz. The complete list of the papers\nwe examined is reported in Table \\ref{tab1}. 
Some of those\nmeasurements were performed at a frequency slightly different from\nthe nominal one. In this case, the original measurements were\nscaled to the nominal frequency by assuming a dependence of the\nsource flux $S(\\nu) \\sim \\nu^{-0.7}$. Usually the correction is\nnegligible. The number counts extracted from Table \\ref{tab1} are\nshown in Figure \\ref{fig1}.\n\n\n\\section{Number counts distribution fit}\\label{Fit}\n\nThe fit was performed on the differential number counts normalized\nto the Euclidean distribution of sources $E(S) = S^{5\/2}(dN\/dS)$,\nusing the following analytical expression:\n\n\\begin{equation}\nQ(S) = Q_1(S) + Q_2(S) = \\frac{1}{A_1 S^{\\varepsilon_1} + B_1\nS^{\\beta_1}} + \\frac{1}{A_2 S^{\\varepsilon_2} + B_2 S^{\\beta_2}}\n\\label{Fitequation}\n\\end{equation}\n\nThe best values of the fit parameters are summarized in Tables\n\\ref{tab2} and \\ref{tab3}. This analytical fit function is\nempirical. It reproduces the distribution of the experimental data\nwith some advantages: 1) it is a simple analytical function; 2) it\nallows the extrapolation to be extended beyond the available data,\nbecause both the amplitude and slope of the tails are well\ndefined, both at low and high fluxes; 3) it is built as the sum of\ntwo \\textit{\"populations\"} of sources with different emission, but\nsimilar behavior and shape; 4) the fitting procedure is\nindependent of the source type and evolution, but it is applied to\nthe source counts as currently observed. In addition, this\nanalytical form is simply a power law, with various indices at\ndifferent values of flux, exactly as the observations suggest.\n\n\\subsection{High flux distribution fit}\n\nTo get the fit we first considered the source distribution at high\nfluxes (i.e. we evaluated the parameters of the component\n$Q_1(S)$). For this high flux distribution we have data at all the\nconsidered frequencies. The best fit parameters are listed in\nTable \\ref{tab2}. 
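The fit function above can be evaluated in code directly; the following is a minimal sketch (our illustration), where the parameter values are placeholders and NOT the best-fit values of Tables 2 and 3.

```python
def Q(S, A1, e1, B1, b1, A2, e2, B2, b2):
    # Empirical fit: sum of two smoothly-broken power laws. Each term
    # behaves as S**-e / A where the A-term dominates in the denominator,
    # and as S**-b / B where the B-term dominates.
    return 1.0 / (A1 * S ** e1 + B1 * S ** b1) + 1.0 / (A2 * S ** e2 + B2 * S ** b2)

def dN_dS(S, *p):
    # Q is the counts normalized to the Euclidean distribution,
    # E(S) = S**(5/2) dN/dS, hence dN/dS = S**(-5/2) Q(S)
    return S ** -2.5 * Q(S, *p)

# Illustrative placeholder parameters (A1, e1, B1, b1, A2, e2, B2, b2)
P = (1e-3, -0.85, 1e-2, 0.5, 4e-3, -0.85, 1e-2, 0.5)

# In the low-flux limit the A-terms dominate, so with e1 = e2 = -0.85 the
# fit approaches the power law Q(S) -> S**0.85 * (1/A1 + 1/A2)
S_low = 1e-8
asymptote = S_low ** 0.85 * (1 / P[0] + 1 / P[4])
q_low = Q(S_low, *P)
```

This makes explicit the advantage noted in the text: both tails have a well-defined amplitude and slope, so the extrapolation beyond the measured flux range is controlled.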
We found that the values of the spectral indices\n$\\varepsilon_1$ and $\\beta_1$ obtained at all the frequencies are\nvery similar. We then decided to take a single weighted average\nvalue for $\\varepsilon_1$ and $\\beta_1$. The parameter\n$\\varepsilon_1$ is particularly well constrained by the data at\n1400 MHz, where a large set of data with low scatter is available\n(see \\cite[]{White_97} and Figure \\ref{fig1}). Conversely, the\navailable data at 178 MHz do not span a flux range wide enough to\nconstrain the two slopes.\n\nSource counts at 2700 and 8440 MHz are the least accurate in the\nfull data set we used. At both frequencies there are sets of data\nthat do not completely overlap, and the statistics are poor. Therefore,\nto take into account these uncertainties, we assumed a uniform\ndistribution of the parameter $A_1$. At $\\nu=2700$ MHz we took\n$A_1 = (6 - 12) \\times 10^{-4}$, while at $\\nu=8440$ MHz we took\n$A_1 = (16 - 34) \\times 10^{-4}$ (see Table \\ref{tab2}). In this\nfitting procedure we used all the data collected from the papers\nlisted in Table \\ref{tab1}. We excluded from the fit only the\nseven data points at the lowest fluxes published by \\cite{White_97} at\n1400 MHz because these data present a roll-off, exactly in the\nregion where a change of slope is expected, but in the\nopposite direction. According to the authors, this roll-off is an\nindication of the incompleteness of the survey at the faint-flux\nlimit.\n\n\\subsection{Low flux distribution fit}\n\nWe extended the fitting procedure to the counts at low flux, in\norder to constrain the distribution $Q_2(S)$. The two parameters\n$A_2$ and $\\varepsilon_2$ fix the amplitude and slope of the\nlow-flux tail of the distribution, while $B_2$ and $\\beta_2$ are\nneeded to fit the distribution in the region showing a change in\nthe slope. 
Deep counts are available only at 0.61, 1.4, 5 and 8.44\nGHz, but at 8.44 GHz the number of experimental points and their\nscatter do not allow us to fit any parameter. In addition, we\nconsidered the model of the low-flux tail published by\n\\cite{Franceschini_89} at 1.4 and 5 GHz. This source evolution\nmodel still fits the data after the addition of the most recent\nmeasurements at both 1.4 and 5 GHz. The low-flux tail of the model\ncompared with the experimental data and our fit $Q(S)$ are shown\nin Figure \\ref{fig2}. This evolution model is able to predict\naccurately the slope of the low-flux tail and we used it to get\nthe value of $\\varepsilon_2$, which is independent of\nfrequency \\cite[]{Franceschini_07}. The values of $\\varepsilon_2$,\nobtained at 1.4 and 5 GHz, are fully compatible with the average\nvalue of $\\varepsilon_1$ previously evaluated. Combining the\nestimates at 1.4 and 5 GHz we get $\\varepsilon_2 = -0.856 \\pm\n0.021$.\n\nSince the model by \\cite{Franceschini_89} is not able to fix the\namplitude of the number counts, i.e. the parameter $A_2$, we\nevaluated this parameter by using the experimental data at 1.4 and\n5 GHz. We get: $A_2\/A_1 = 0.24 \\pm 0.02$ at 1.4 GHz; $A_2\/A_1 =\n0.30 \\pm 0.04$ at 5 GHz. The ratio $A_2\/A_1$ is almost independent\nof frequency. The small change is due to the different\ncontribution of the \\textit{flat-spectrum} sources in the total\ncounts at the two frequencies in the low flux region (below\n$10^{-5}$ Jy) and in the high flux region (10 mJy - 1 Jy).\n\nUsing the model of \\cite{Franceschini_89} and \\cite{Toffolatti_98}\nat 1.4, 5 and 8.4 GHz, we extrapolated the value of the ratio\n$A_2\/A_1$ also to the other frequencies. In order to evaluate\n$A_2\/A_1$ at lower frequencies we estimated the variation of\n$A_2\/A_1$ at 1.4 GHz excluding from the counts the\n\\textit{flat-spectrum} sources. This is the extreme condition,\nwhich holds at very low frequencies. 
In fact, the other sources\ncontributing to the number counts do not change significantly with\nfrequency. In this situation we obtain $0.23 \\leq A_2\/A_1\n\\leq 0.24$. The same result was obtained starting from data and\nmodel at 5 GHz. Therefore, for the frequencies 151, 178, 408 and\n610 MHz we take the value obtained at 1.4 GHz, but associate a\nlarger error bar with it, due to the uncertainty in the extrapolation\nprocedure: $A_2\/A_1 = 0.24 \\pm 0.04$.\n\nWe then evaluated the contribution of the {\\it flat spectrum}\nsources in the total counts at 8.44 GHz (see \\cite{Toffolatti_98})\nin comparison with the counts at 5 GHz. In this way we estimate at\n8.44 GHz the value $A_2\/A_1 = 0.31 \\pm 0.04$. At 2.7 GHz we took a\nvalue constrained by the results obtained at 1.4 and 5 GHz:\n$A_2\/A_1 = 0.24 - 0.30$.\n\nFinally, we estimated $B_2$ and $\\beta_2$. They are important\njust to define the shape of the distribution in the region showing\na change in the slope. At 1.4 GHz the data are accurate enough to\nconstrain both parameters, but $\\beta_2$ cannot be constrained at\nthe other frequencies. Since the accuracy of these two parameters\nis not important for the calculation of the integrated brightness\ntemperature, we assumed for them the average value for all the\nfrequencies.\n\nThe summary of the best values of all the parameters of $Q_2(S)$\nis shown in Table \\ref{tab3}. The number counts and the function\nwhich fits them are shown in Figure \\ref{fig1}. In conclusion we\ncan note that: 1) $A_1$ and $B_1$, the two frequency-dependent\nparameters of the fit, take different values at each frequency.\nThe same is true for the ratio $A_2\/A_1$. 
2) The power law indices\n($\varepsilon_1$, $\beta_1$, $\varepsilon_2$ and $\beta_2$) are\nfrequency independent and we take a common value at all the\nfrequencies.\n\n\n\section{The UERS contribution to the sky diffuse emission}\label{Brightness}\n\n\subsection{Evaluation of the diffuse emission}\n\nThe contribution of the UERS ($B_{UERS}(\nu)$) to the sky\nbrightness is evaluated by integrating the function $S(dN\/dS)$\nfrom the largest flux ($S_{max}$) of the measured sources down to\nthe lowest fluxes ($S_{min}$) corresponding to the faintest\nsources:\n\n\begin{equation}\nB_{UERS}(\nu) = \int^{S_{max}}_{S_{min}} \frac{dN}{dS}(\nu) \cdot\nS \ dS\n\end{equation}\n\nThe brightness temperature $T_{UERS}(\nu)$ is by definition:\n\n\begin{equation}\nT_{UERS}(\nu) = B_{UERS}(\nu) \frac{\lambda^2}{2 \ k_B},\n\end{equation}\n\n\noindent $k_B$ being the Boltzmann constant. The values of\n$T_{UERS}$ at the eight frequencies we considered are\nsummarized in Table \ref{tab4}.\n\nFrom the observations we have $S_{max} \sim 10^2$ Jy (as measured\nat 151, 408 and 1400 MHz) and $S_{min} \sim 10^{-6}$ Jy (as\nmeasured in the deepest counts at 5 GHz). While sources at higher\nfluxes, if present in the surveyed sky region, can be easily\nmeasured, the limit at low flux is set by the confusion limit or\nby the observation completeness. In other words, there is no sharp\nlimit at low flux in the population of the sources. We extended\nthe integration down to very faint limits, several orders of\nmagnitude below the faintest detected sources ($S_{min} \sim\n10^{-6}$ Jy). When the integration is extended down to $S_{min}\n\sim 10^{-12}$ Jy, the brightness increases by $3-4$ \% and then\nthe value converges. This increment is comparable with the total\nuncertainty we get on the value of the brightness, as shown in\nFigure \ref{fig3}.\n\nWe also extended the integration to higher values of the flux, in\norder to test this integration limit as well. 
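To make the two equations above concrete, here is a hedged numerical sketch in Python. The counts model is a purely illustrative single power law $dN\/dS = A\,S^{\epsilon}$, with placeholder values for $A$ and $\epsilon$ (they are NOT the fitted parameters of $Q(S)$); it also illustrates the convergence of the brightness when $S_{min}$ is pushed to very faint fluxes.

```python
import math

# Hedged sketch of B(nu) = Int S (dN/dS) dS and T = B * lambda^2 / (2 k_B).
# The counts model is a purely illustrative single power law
# dN/dS = A * S**eps; A and eps are placeholders, not fitted values.

K_B = 1.380649e-23   # Boltzmann constant [J/K]
JY = 1.0e-26         # 1 Jansky in W m^-2 Hz^-1

def brightness(A, eps, s_min, s_max, n=20000):
    """Trapezoidal integration of S * dN/dS on a log-spaced flux grid [Jy]."""
    lg_min, lg_max = math.log10(s_min), math.log10(s_max)
    xs = [10 ** (lg_min + (lg_max - lg_min) * i / n) for i in range(n + 1)]
    ys = [A * s ** eps * s for s in xs]   # integrand S * (dN/dS)
    return sum(0.5 * (ys[i] + ys[i + 1]) * (xs[i + 1] - xs[i])
               for i in range(n))

def temperature(b_jy_sr, freq_hz):
    """Brightness [Jy/sr] -> Rayleigh-Jeans brightness temperature [K]."""
    lam = 3.0e8 / freq_hz
    return b_jy_sr * JY * lam ** 2 / (2.0 * K_B)

# eps = -1.7 makes the integral converge at low flux, mirroring the
# observation that extending S_min from 1e-6 Jy down to 1e-12 Jy
# changes the brightness only at the percent level.
b_lo = brightness(A=100.0, eps=-1.7, s_min=1e-6, s_max=1e2)
b_deep = brightness(A=100.0, eps=-1.7, s_min=1e-12, s_max=1e2)
print(temperature(b_lo, 610e6), (b_deep - b_lo) / b_lo)
```

With these placeholder parameters the fractional change from the deeper integration limit stays well below the quoted uncertainties, which is the behaviour described in the text.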
Increasing $S_{max}$\nwell beyond the flux of the strongest sources observed, the\nintegral changes by less than $0.5\%$ and quickly converges. This\nis a consequence of the very low statistics of sources at the\nhighest fluxes. This confirms that the large-scale\nbrightness is actually not sensitive to the upper limit of\nintegration.\n\n\subsection{Evaluation of the uncertainty}\n\nThe error budget takes into account both the fit uncertainties and\nthe number counts fluctuations over the observed sky region. The\nfit uncertainties were evaluated by means of Monte Carlo\nsimulations. For each parameter we considered a Gaussian\ndistribution with the standard deviation reported in Tables\n\ref{tab2} and \ref{tab3}. Only for the values of $A_1$ at 2.7 and\n8.44 GHz and for $A_2\/A_1$ at 2.7 GHz did we assume a uniform\ndistribution inside the interval reported in Tables \ref{tab2} and\n\ref{tab3}. The error bars of the parameters of the fit (in\nparticular $A_2\/A_1$ and $\varepsilon_2$) include the uncertainty\nof the extrapolation to the lowest fluxes for the various\nfrequencies. We underline that, as shown in Figure \ref{fig3}, the\ncontribution to the brightness temperature of the low-flux tail\n(below $\sim 10^{-6}$ Jy) is lower than the overall uncertainty\nreported in Table \ref{tab4}.\n\nThe statistical fluctuations of the sources' number counts have a\nnegligible effect over a large part of the distribution because the number of\nsources is quite large. We concentrated on the effect of the\nfluctuation of the few sources with the highest flux. We\nconsidered these sources to be randomly distributed, so that their\nfluctuation is Poissonian. We evaluated the fluctuation of the\nbrightness in a patch of the sky corresponding to the beam of\nTRIS: $\Omega_{TRIS} \sim 0.1$ sr. 
The upper limit of the\ncontribution to the temperature uncertainty due to the fluctuation\nin the number of sources is directly measured by the maximum of\nthe function\n\n\begin{equation}\label{}\nC(S_{min})=\frac{\lambda^2}{2k_B\Omega_{TRIS}}\n\frac{\int_{S_{min}}^{100Jy}\frac{dN}{dS}SdS}\n{\int_{S_{min}}^{100Jy}\frac{dN}{dS}dS}\n\end{equation}\n\n\noindent plotted in Figure \ref{fig4} for the specific case of\nthe 1400 MHz data. For all the frequencies this maximum falls\naround $\sim 5$ Jy, and its value is from 2 to 6 times smaller\nthan the corresponding values reported in Table \ref{tab4}.\nTherefore, for every frequency, and over the full flux range, the\nerror of the brightness temperature is dominated by the\nstatistical uncertainties of the fit parameters.\n\nThe relative error bar of the brightness temperature is $6-7\%$ at\n151, 408, 610 and 1400 MHz, increasing up to $9\%$ at 5000 MHz. At\n178 MHz the available measurements are few and old, and the\nresulting error bar is $13\%$. At 2700 and 8440 MHz neither the\nquantity nor the quality of the data allows accurate\nestimates of the parameters of the fit, and we get an uncertainty\nof $25-30\%$.\n\n\subsection{Frequency dependence}\n\nThe integrated contribution of the UERS to the brightness of the\nsky at the various frequencies is shown in Figure \ref{fig5}. The\ndistribution can be fitted by a power law:\n\n\begin{equation}\nT_{UERS}(\nu) = T_0 \Bigl( \frac{\nu}{\nu_0}\Bigr)^{\gamma_0}\n\end{equation}\n\n\noindent Setting $\nu_0 = 610$ MHz (chosen because it is\nclose to one of the channels of the TRIS experiment), we obtain\nthe best fit of $T_0$ and $\gamma_0$ shown in Table \ref{tab5}\n(\textit{FIT1}). As shown in Figure \ref{fig5}, in spite of the\nlarge error bars at 2700 and 8440 MHz, the scatter of the data points is\nlimited. 
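As an illustration of how such a power-law fit can be performed, the sketch below does a linear least-squares fit in log-log space; the data points are synthetic placeholders, not the measured $T_{UERS}$ values of this work.

```python
import math

# Sketch of the single power-law fit T(nu) = T0 * (nu/nu0)**gamma0 as a
# linear least-squares problem in log-log space.  The data points below
# are synthetic placeholders, not measured T_UERS values.

NU0 = 610.0  # MHz, the reference frequency chosen in the text

def fit_power_law(freqs_mhz, temps_k):
    """Return (T0, gamma0) minimizing squared residuals of log T vs log nu."""
    xs = [math.log(nu / NU0) for nu in freqs_mhz]
    ys = [math.log(t) for t in temps_k]
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    gamma0 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    t0 = math.exp((sy - gamma0 * sx) / n)
    return t0, gamma0

# Synthetic points drawn from an exact power law with T0 = 0.8 K and
# gamma0 = -2.7; the fit recovers these values.
freqs = [151.0, 408.0, 610.0, 1400.0, 5000.0]
temps = [0.8 * (nu / NU0) ** -2.7 for nu in freqs]
print(fit_power_law(freqs, temps))
```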
The fit of a single power law, done excluding the data with\nthe largest uncertainty (at $\nu=178$, 2700 and 8440 MHz), gives\nthe values of $T_0$ and $\gamma_0$ shown in Table \ref{tab5}\n(\textit{FIT2}). Now the scatter of the experimental data is much\nsmaller, as shown by the value of the reduced $\chi^2$.\n\nIn both cases the results obtained are fully compatible with the\nslope of the {\it steep spectrum} sources, which are therefore the\nmain contributors to the source counts. However, it is interesting\nto check the contribution of the {\it flat spectrum} sources,\nespecially at high frequencies. We assumed a {\it flat spectrum}\ncomponent with fixed slope $\gamma_1 = -2.00$ and a dominant {\it\nsteep spectrum} component with slope $\gamma_0 = -2.70$:\n\n\begin{equation}\nT_{UERS}(\nu) = T_0 \Bigl( \frac{\nu}{\nu_0}\Bigr)^{\gamma_0} +\nT_1 \Bigl( \frac{\nu}{\nu_0}\Bigr)^{\gamma_1}\n\end{equation}\n\n\noindent The results of the fit of $T_0$ and $T_1$ are shown in Table\n\ref{tab5} (\textit{FIT3}). In this way we get an estimate of\nthe contribution of the {\it flat-spectrum} component to the\nnumber counts at the various frequencies. The value of\n$T_{UERS}(\nu) (\nu \/\nu_0)^{2.70}$ at the frequencies analyzed\nis shown in Figure \ref{fig6}, together with the power law best\nfits. Figure \ref{fig6} shows that the last two fits\n(\textit{FIT2} and \textit{FIT3}) are equivalent within the error\nbars.\n\n\section{Discussion}\label{Discussion}\n\n\subsection{Analytical form of the fit function}\n\nAs discussed in Section \ref{Fit}, the fit function $Q(S)$ is a\npower law distribution taking different slopes and amplitudes in\nthree different regions of the sources' flux. We prefer to deal\nwith a power law (like $Q(S)$) instead of a polynomial fit, as\nperformed by \cite{Katgert_88} and \cite{Hopkins_03}. 
A polynomial\nfit can be better adapted to the experimental points, but its\nvalidity range is restricted to the region covered by the experimental\npoints. Therefore, it is not possible to use a polynomial fit to\nextrapolate the distribution outside this range. Conversely, our\nfit function can be extrapolated, because the amplitudes and slopes of\nthe tails are well defined and take into account the shapes\nexpected from the counts evolution models\n\cite[]{Franceschini_89}.\n\nAccording to \cite{Longair_78} we should expect a broadened\nmaximum in the distribution of the source counts with increasing\nfrequency. This indication seems to be confirmed by the\ndifferential counts shown in \cite[]{Kellermann_87} and\n\cite[]{Condon_84a}. In spite of these expectations we have\nobtained a good fit of the counts at the various frequencies,\nusing the same function with the same slopes, as shown in Figure\n\ref{fig1}. Part of this effect is probably hidden by the larger\nscatter of the high frequency differential counts. In any case, the\nbroadening of the maximum could only be marginally observed above 1400\nMHz and the effect is not relevant for the calculation of the\ncontribution to the sky brightness.\n\n\subsection{Frequency dependence}\n\nThe spectral dependence of the brightness temperature $T_{UERS}$\nfollows the expectations at low frequency, where the number counts\nare dominated by the sources with a steep spectrum ($\alpha \sim\n0.7$). At high frequency the situation is more complex. We could\nexpect a flattening, because at these frequencies the number\ncounts of {\it flat-spectrum} sources begin to be important. This\nflattening will probably appear at frequencies higher than 1400 MHz.\nUnfortunately, the available data are not accurate enough to\nconstrain the fit parameters (in particular $A_1$) at 2700 and\n8440 MHz. 
The values obtained at these two frequencies, lower than\nexpected from the other frequencies, could also\nindicate incompleteness of the surveys (see Figure\n\ref{fig6}). Conversely, at 5000 MHz the data are better and more numerous.\nWe obtain a value for $T_{UERS}$ fully consistent with the slope\nof the data at low frequency, where {\it steep-spectrum} sources\ndominate, even if there is a marginal indication of a\nspectral flattening (see Figure \ref{fig6}). In fact, when we fit\nthe data including the contribution of the {\it flat-spectrum}\nsources, we get an estimate of this contribution of $T_1\/T_0\n\simeq 2\%$ at $\nu=610$ MHz and $T_1\/T_0 \simeq 9\%$ at $\nu=5$\nGHz (see \textit{FIT3} in Table \ref{tab5}).\n\n\n\subsection{Previous estimates of $T_{UERS}$}\n\nThe values of brightness temperature shown in Table \ref{tab4}\nhave error bars of roughly 7\% at most frequencies. The\nuncertainty is a bit larger at 178 MHz and even worse at 2700 and\n8440 MHz, because of the quality of the number counts data. So far\nvery few estimates of $T_{UERS}$ have been published (see\n\cite[]{Longair_66}, \cite[]{Wall_90} and \cite[]{Burigana_04});\nthe frequencies covered were limited and sometimes the\nuncertainty was not quoted. Our results are in agreement with the\nvalues previously estimated by \cite{Longair_66}, $T_{UERS}(178 \\nMHz)=23\pm5$ K, and by \cite{Wall_90}, $T_{UERS}(408 \\nMHz)\simeq2.6$ K, $T_{UERS}(1.4 \ GHz)\simeq0.09$ K, and\n$T_{UERS}(2.5 \ GHz)\simeq0.02$ K, but our error bars are\ndefinitely smaller. The accuracy of the estimated values of\n$T_{UERS}$ is particularly important if this contribution is to be\nsubtracted to calculate the value of the CMB temperature at low\nfrequency. 
Table \\ref{tab4} suggests that the value of $T_{UERS}$\nneeds to be accurately evaluated up to a frequency of several GHz,\nbecause its value is not negligible.\n\n\\section{Conclusions}\n\nWe used the source number - flux measurements in literature to\nevaluate the contribution of the Unresolved Extragalactic Radio\nSources to the diffuse brightness of the sky. We analyzed the\ncount distributions at eight frequencies between 150 and 8000 MHz,\nspanning over the frequency range partially covered by the TRIS\nexperiment (see \\cite[]{TRIS-I}): $\\nu=$ 151 MHz, 178 MHz, 408\nMHz, 610 MHz, 1.4 GHz, 2.7 GHz, 5.0 GHz, 8.44 GHz.\n\nWe optimized the fitting function of the experimental number\ncounts distribution. The differential number counts ($dN\/dS$) at\nthe various frequencies are well described by a multi power law\nempirical distribution $Q(S)$ (see Equation \\ref{Fitequation}).\nThe amplitudes ($A_1$ and $B_1$) are frequency dependent\nparameters of the fit and have different values at each frequency.\nConversely the power law indices ($\\varepsilon_1$, $\\beta_1$,\n$\\varepsilon_2$ and $\\beta_2$) have a common value at all the\nfrequencies.\n\nThe contribution of the UERS to the sky brightness was\nevaluated by integrating the function $S(dN\/dS)$ from the largest\nflux ($S_{max} = 10^{2}$) of the measured sources down to the\nlowest fluxes ($S_{min} = 10^{-12}$) corresponding to the expected\nfaintest sources. We got the brightness temperature with a\nrelative error bar of $\\delta T_{UERS}\/T_{UERS} \\simeq 6-7\\%$ at\n$\\nu =$ 151, 408, 610 and 1400 MHz, $\\delta T_{UERS}\/T_{UERS}\n\\simeq 9\\%$ at $\\nu =$ 5000 MHz, $\\delta T_{UERS}\/T_{UERS} \\simeq\n13\\%$ at $\\nu =$ 178 MHz and $\\delta T_{UERS}\/T_{UERS} \\simeq\n25-30\\%$ at $\\nu =$ 2700 and 8440 MHz.\n\nWe finally evaluated the spectral dependence of the point source\nintegrated brightness. 
As expected, this dependence can be\ndescribed using a power law with a spectral index $\gamma_0 \simeq\n-2.7$, in agreement with the frequency dependence of the flux\nemitted by the {\it steep-spectrum} sources. We have also tested\nthe contribution of the {\it flat-spectrum} sources, adding a\nsecond component with slope $\gamma_1 = -2.0$. The\ncontribution of these sources starts to be relevant only at\nfrequencies above several GHz. In fact we estimated a contribution\nby {\it flat-spectrum} sources of $\sim 2 \%$ at 610 MHz and $\sim 9\n\%$ at 5 GHz.\n\nThe above results were used to evaluate the CMB temperature\nat frequencies close to 1 GHz from absolute measurements of the\nsky temperature made by our group (see \cite[]{TRIS-I},\n\cite[]{TRIS-II}, \cite[]{TRIS-III}).\n\n\acknowledgments {\bf Acknowledgements}: This work is part of the\nTRIS activity, which has been supported by MIUR (Italian Ministry\nof University and Research), CNR (Italian National Council of\nResearch) and the Universities of Milano and of Milano-Bicocca.\nThe authors acknowledge A. Franceschini for useful discussions and\nthe anonymous referee for helpful comments on the draft.\n\n\vfill \eject\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\section{Introduction}\nOptimization problems are widespread in several domains of science and engineering. \nThe usual goal is to minimize or maximize some pre-defined objective(s). \nMost of the real-world scenarios place certain restrictions \non the variables of the problem, i.e. 
the variables need to satisfy \ncertain pre-defined constraints to realize an acceptable solution.\n \nThe most general form of a constrained optimization problem (with inequality constraints,\nequality constraints and variable bounds) can be written as a nonlinear programming\n(NLP) problem: \n\n{\small\begin{eqnarray}\nMinimize\t &f(\vec{x})\t\t&\t\nonumber\\\nSubject to\t &g_j(\vec{x}) \ge 0, & j= 1,...,J \nonumber\\\n\t\t &h_k(\vec{x}) = 0,\t& k= 1,...,K \nonumber\\\n\t \t & x_i^{(L)} \le x_i \le x_i^{(U)}, & i=1,...,n. \label{eq:NLPprob}\n\end{eqnarray}}\n\nThe NLP problem defined above contains $n$ decision variables (i.e. $\vec{x}$ is a vector of size $n$),\n$J$ greater-than-equal-to type inequality constraints (less-than-equal-to constraints can be expressed in this form by \nmultiplying both sides by $-1$), and $K$ equality-type constraints.\nThe problem variables $x_i$'s are bounded by the lower ($x_i^{(L)}$) and upper ($x_i^{(U)}$)\nlimits. When only the variable bounds are specified, the \nconstraint-handling strategies are often termed boundary-handling methods.\n\footnote{For the rest of the paper, by \textit{constraint-handling} we imply\ntackling all of the following: variable bounds, inequality constraints and equality constraints. \nAnd, by a \textit{feasible solution} it is implied that the solution satisfies all the variable bounds, inequality constraints,\nand equality constraints. 
The main contribution of the paper is to propose an efficient constraint-handling method\nthat operates on and generates only feasible solutions during optimization.}\n\nIn classical optimization, the task of constraint-handling has been addressed\nin a variety of ways: (i) \textit{using the penalty approach} developed by Fiacco and McCormick \cite{jensen2003operations},\nwhich degrades the function value\nin the regions outside the feasible domain, (ii) \textit{using barrier methods} which operate in a similar fashion \nbut strongly degrade the function values as the solution approaches a constraint boundary from \ninside the feasible space, (iii) \textit{performing search in the feasible directions} using methods\nsuch as gradient projection, reduced gradient and Zoutendijk's approach \cite{zoutendyk1960methods},\n(iv) \textit{using the augmented Lagrangian formulation} of the problem, as commonly \ndone in linear programming and sequential quadratic programming (SQP).\nFor a detailed account of these methods along with their implementation and \nconvergence characteristics, the reader is referred to \cite{rekl,debOptiBOOK,MichalewiczConstraint}.\nThe classical optimization methods reliably and effectively solve convex constrained optimization problems while ensuring convergence,\nand are therefore widely used in such scenarios. However, the same is not true in the presence of non-convexity.\nThe goal of this paper is to address the issue of constraint-handling for evolutionary algorithms in real-parameter optimization, \nwithout any limitations to convexity or a special form of constraints or objective functions. \n\nIn the context of evolutionary algorithms, constraint-handling has been addressed\nby a variety of methods, including borrowing ideas from the classical techniques. 
These include\n(i) \\textit{use of penalty functions} to degrade the fitness\nvalues of infeasible solutions such that the degraded solutions are given less emphasis during the evolutionary search. A\ncommon challenge in employing such penalty methods arises from\nchoosing an appropriate penalty parameter ($R$) that strikes the right balance between\nthe objective function value, the amount of constraint violation and the associated penalty. \nUsually, in EA studies, a trial-and-error method is employed to estimate $R$.\nA study \\cite{debpenalty} in 2000 suggested a parameter-less\napproach of implementing the penalty function concepts for population-based optimization method. \nA recent bi-objective method\n\\cite{deb-dutta} was reported to find the appropriate $R$ values adaptively during the optimization process.\nOther studies \\cite{wang2012dynamic,wang2012combining} have employed the concepts of multi-objective\noptimization by simultaneously considering the minimization of the constraint violation and optimization of the\nobjective function, (ii) \\textit{use of feasibility preserving operators},\nfor example, in \\cite{michalewicz1996genocop} specialized operators in the presence of linear constraints \nwere proposed to create new and feasible-only individuals from the feasible parents. In another example, \ngeneration of feasible child solutions within the variable bounds was achieved \nthrough Simulated Binary Crossover (SBX) \\cite{debSBX} and polynomial mutation\noperators \\cite{debBookMO}. The explicit feasibility of child solutions was ensured by redistributing the probability distribution \nfunction in such a manner that the infeasible regions were assigned a zero probability \nfor child-creation \\cite{debpenalty}. 
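As an aside, the parameter-less penalty idea mentioned in (i) can be sketched in a few lines (a minimal illustration in the spirit of the cited study; the function names and the toy problem are ours):

```python
# Minimal sketch in the spirit of the parameter-less penalty approach:
# feasible solutions keep their objective value, infeasible ones get the
# worst feasible objective plus their total constraint violation, so no
# penalty parameter R is needed.  Names and toy problem are ours.

def total_violation(x, ineq_constraints):
    """Sum of violations of g_j(x) >= 0 constraints (0 if feasible)."""
    return sum(max(0.0, -g(x)) for g in ineq_constraints)

def penalized_fitness(pop, f, ineq_constraints):
    """Minimization fitness: f(x) if feasible, else worst feasible f plus violation."""
    viols = [total_violation(x, ineq_constraints) for x in pop]
    feas_objs = [f(x) for x, v in zip(pop, viols) if v == 0.0]
    f_worst = max(feas_objs) if feas_objs else 0.0
    return [f(x) if v == 0.0 else f_worst + v
            for x, v in zip(pop, viols)]

# Toy example: minimize f(x) = x^2 subject to g(x) = x - 1 >= 0
f = lambda x: x * x
g = [lambda x: x - 1.0]
print(penalized_fitness([2.0, 1.0, 0.5], f, g))  # -> [4.0, 1.0, 4.5]
```

Note how the infeasible point 0.5 is ranked worse than every feasible point without any hand-tuned penalty weight.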
Although explicit creation of feasible-only solutions during an EA search is an\nattractive proposition, it may not always be possible, since generic crossover or mutation operators\nor other standard EAs do not guarantee creation of feasible-only solutions, (iii) \textit{deployment of repair strategies}\nthat bring an infeasible solution back into the feasible domain.\nRecent studies \cite{PadhyeBIC2012,Helwid-Constraing-handling-2013,amir,chu} investigated the \nissue of constraint-handling through repair techniques in the context of PSO and DE, and \nshowed that the repair mechanisms can introduce a bias in the search and \nhinder exploration. Several repair methods \nproposed in the context of PSO \cite{padhyeCEC2009,Sabina,JonathanPareto} \nexploit the information about the location of the optimum and fail to perform when the location of \nthe optimum changes \cite{padhye2010}. These issues are universal and\noften encountered with all EAs (as shown in later sections).\nFurthermore, the choice of the evolutionary optimizer, the constraint-handling strategy, and \nthe location of the optima with respect to the search space all play an important\nrole in the optimization task. To this end, the authors have realized a need\nfor a reliable and effective repair strategy that explicitly preserves feasibility.\nAn ideal evolutionary optimizer (evolutionary algorithm\nand its constraint-handling strategy) should be robust in terms of finding the \noptimum, irrespective of the location of the optimum in the search space. \nIn the rest of the paper, the term constraint-handling strategy refers to explicit feasibility\npreserving repair techniques. \n\nFirst, we review the existing constraint-handling strategies \nand then propose two new constraint-handling schemes, namely, Inverse Parabolic Methods (IPMs). 
\nSeveral existing and newly proposed constraint-handling strategies are first tested on a class \nof benchmark unimodal problems with variable bound constraints.\nStudying the performance of constraint-handling strategies on problems with variable bounds\nallows us to gain a better understanding of their operating principles in a simple setting. \nParticle Swarm Optimization, Differential Evolution and real-coded Genetic Algorithms are chosen \nas evolutionary optimizers to study the performance of different constraint-handling strategies. \nBy choosing different evolutionary optimizers, a better understanding of the functioning\nof constraint-handlers embedded in the evolutionary framework can be gained. \nBoth the search algorithm and the constraint-handling strategy must operate efficiently and synergistically in\norder to successfully carry out the optimization task. It is shown that the constraint-handling\nmethods possessing an inherent predisposition, in terms of bringing infeasible solutions back into\nspecific regions of the feasible domain, perform poorly. Deterministic constraint-handling strategies, such as \nthose setting the solutions on the constraint boundaries, result in the loss of population diversity. \nOn the other hand, random methods of bringing the solutions back into the search space\narbitrarily lead to a complete loss of all useful information carried by the solutions. \nA balanced approach that utilizes the useful information from the solutions\nand brings them back into the search space in a meaningful way is desired. The newly proposed IPMs are motivated\nby these considerations.\nThe stochastic and adaptive components of IPMs (utilizing the information of the solution's\nfeasible and infeasible locations), and a user-defined parameter ($\alpha$), render\nthem quite effective. 
\n \nThe rest of the paper is organized as follows:\nSection~\ref{sec:feasibility-preserving-existing} reviews existing constraint-handling techniques\ncommonly employed for problems with variable bounds.\nSection~\ref{sec:IP} provides a detailed description of the two newly proposed IPMs.\nSection~\ref{sec:ResultsDiscussion} provides a description of the benchmark test problems and \nseveral simulations performed on PSO, GAs and DE with different constraint-handling techniques. \nSection~\ref{sec:scale-up} considers \noptimization problems with a larger number of variables. \nSection~\ref{sec:Constraint-Programming} shows the extension and applicability of the proposed\nIPMs to generic constrained problems.\nFinally, conclusions and the scope for future work are discussed in Section~\ref{sec:Conclusion}.\n \n\n\n\n\n\section{Feasibility Preserving Constraint-Handling Approaches for Optimization Problems with Variable Bounds}\n\label{sec:feasibility-preserving-existing}\nSeveral constraint-handling strategies have been proposed to bring solutions back into the feasible region\nwhen constraints manifest as variable bounds. Some of these strategies can also be extended to the \npresence of general constraints. An exhaustive recollection and \ncomparison of all the constraint-handling techniques is beyond the scope of this study. \nRather, we focus our discussions on the popular and representative constraint-handling techniques. \n\nThe existing constraint-handling methods for problems with variable bounds can be broadly categorized into two groups: \nGroup~$A$ techniques that perform the feasibility check variable-wise, and Group~$B$ techniques that perform the feasibility\ncheck vector-wise. 
According to Group~$A$ techniques, for every solution, each variable is tested for its feasibility \nwith respect to its supplied bounds and made feasible if the corresponding bound is violated.\nHere, only the variables violating their corresponding bounds are altered, independently, and the\nother variables are kept unchanged. \nAccording to Group~$B$ techniques, if a solution (represented as a vector) \nis found to violate any of the variable bounds, it is brought back into\nthe search space along a vector direction into the feasible space. In\nsuch cases, the variables that explicitly do not violate their own\nbounds may also get modified.\n \nIt is speculated that for variable-wise separable problems, that is,\nproblems where variables are not linked to one another, \ntechniques belonging to Group~$A$ are likely to perform well. However, for the problems \nwith high correlation amongst the variables (usually referred to as {\em linked}-problems), Group~$B$ techniques are likely to be more useful. \nNext, we provide a description of these constraint-handling methods in detail \footnote{The implementation of several \nstrategies as C codes can be obtained by emailing npdhye@gmail.com or pulkitm.iitk@gmail.com}.\n\n\subsection{Random Approach}\n\n\begin{figure}[hbt]\n\begin{center}\n\includegraphics[scale=.35]{Random.eps} \n\end{center}\n\caption{Variable-wise random approach for handling bounds.}\n\label{fig:RandomBH}\n\end{figure}\nThis is one of the simplest and most commonly used approaches for handling boundary\nviolations in EAs \cite{chu}. This approach belongs to Group~$A$. \nEach variable is checked for a boundary violation, and if the variable bound is violated\nby the current position, say $x_i^c$, then $x_i^c$ is replaced with a randomly chosen value\n$y_i$ in the range $[x_i^{(L)}, x_i^{(U)}]$, as follows:\n\begin{equation}\ny_i = \mbox{random} [x_i^{(L)}, x_i^{(U)}].\n\end{equation}\nFigure~\ref{fig:RandomBH} illustrates this\napproach. 
Due to the random choice of the feasible location, this approach explicitly maintains\ndiversity in the EA population. \n\n\subsection{Periodic Approach}\n\begin{figure}[hbt]\n\begin{center}\n\includegraphics[scale=.35]{Periodic.eps} \n\end{center}\n\caption{Variable-wise periodic approach for handling bounds.}\n\label{fig:PeriodicBH}\n\end{figure}\nThis strategy assumes a periodic repetition of the objective function\nand constraints with a period $p=x_i^{(U)}-x_i^{(L)}$. This is carried out by\nmapping a violated variable $x_i^c$ to a value $y_i$ in the range $[x_i^{(L)},\nx_i^{(U)}]$, as follows:\n\begin{equation}\ny_i = \left\{ \n\begin{array}{ll}\n x_i^{(U)} - (x_i^{(L)}-x_i^c)\%p, & \quad \text{if $x_i^c < x_i^{(L)}$}, \\\n x_i^{(L)} + (x_i^c-x_i^{(U)})\%p, & \quad \text{if $x_i^c > x_i^{(U)}$}. \\\n\end{array} \n\right.\n\end{equation}\nIn the above equation, \% refers to the modulo operator.\nFigure~\ref{fig:PeriodicBH} describes the periodic approach. \nThe above operation brings back an infeasible solution in a structured\nmanner to the feasible region. \nIn contrast to the random method, the periodic approach is too methodical, and it is unclear \nwhether such a repair mechanism is supportive of\npreserving any meaningful information of the solutions that have created the\ninfeasible solution. This approach belongs to Group~$A$. \n\n\subsection{SetOnBoundary Approach}\nAs the name suggests, according to this strategy a violated variable\nis reset on the \nbound of the variable which it violates.\n\begin{equation}\ny_i = \left\{ \n\begin{array}{ll}\n x_i^{(L)}, & \quad \text{if $x_i^c < x_i^{(L)}$},\\\n x_i^{(U)}, & \quad \text{if $x_i^c > x_i^{(U)}$}.\\\n\end{array} \right.\n\end{equation}\nClearly, this approach forces all violated solutions to lie on the\nlower or on the upper boundaries, as the case may be. Intuitively, this approach will work\nwell on problems where the optimum lies exactly on one of the variable\nboundaries. 
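The variable-wise (Group~$A$) repairs described above can be sketched as follows (a minimal illustration; the function names are ours):

```python
import random

# Minimal sketches of the variable-wise (Group A) repairs discussed in
# this section: random reinitialization, periodic mapping and
# set-on-boundary.  x is the violated value, (lo, hi) the variable
# bounds; the function names are ours.

def repair_random(x, lo, hi):
    """Replace an out-of-bounds value with a uniform random feasible one."""
    return random.uniform(lo, hi) if x < lo or x > hi else x

def repair_periodic(x, lo, hi):
    """Map the violated value back assuming period p = hi - lo."""
    p = hi - lo
    if x < lo:
        return hi - (lo - x) % p
    if x > hi:
        return lo + (x - hi) % p
    return x

def repair_set_on_boundary(x, lo, hi):
    """Clip the violated value onto the bound it violates."""
    return min(max(x, lo), hi)

print(repair_periodic(-0.3, 0.0, 1.0),
      repair_set_on_boundary(1.4, 0.0, 1.0))
```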
This approach belongs to Group~$A$.\n\n\subsection{Exponentially Confined (Exp-C) Approach}\n\begin{figure}[hbt]\n\begin{minipage}{0.47\linewidth}\n\begin{center}\n\includegraphics[scale=.35]{FieldSend.eps} \n\end{center}\n\caption{Variable-wise exponentially confined (Exp-C) approach for handling\n bounds.}\n\label{fig:FieldSendDistBH}\n\end{minipage}\hfill\n\begin{minipage}{0.47\linewidth}\n\begin{center}\n\includegraphics[scale=.35]{expS.eps} \n\end{center}\n\caption{Variable-wise exponentially spread (Exp-S) approach for handling bounds.}\n\label{fig:expS}\n\end{minipage}\n\end{figure}\nThis method was proposed in \cite{JonathanPareto}. According to this \napproach, a particle is brought back inside the feasible search space variable-wise in the region between \nits old position and the violated bound. The new location is created\nin such a manner that higher sampling probabilities are assigned to the regions\nnear the violated boundary. The developers suggested the use of an\nexponential probability distribution, shown in\nFigure~\ref{fig:FieldSendDistBH}. \nThe motivation of this approach is based on the hypothesis \nthat a newly created infeasible point violates a particular variable\nboundary because the optimum solution lies closer to that variable\nboundary. Thus, this method will probabilistically \ncreate more solutions closer to the boundaries, unless the optimum lies well\ninside the restricted search space. This approach belongs to Group~$A$.\n\nAssuming that the exponential distribution is $p(x_i) =\nA\exp(|x_i-x_i^p|)$, the value of $A$ can be obtained by integrating\nthe probability from $x_i=x_i^p$ to $x_i=x_i^{(B)}$ (where $B=L$ or\n$U$, as the case may be). Thus, the probability distribution is given\nas $p(x) = \exp(|x_i-x_i^p|)\/(\exp(|x_i^{(B)} - x_i^p|)-1)$. 
For any\nrandom number $r$ within $[0,1]$, the feasible solution is calculated as follows:\n\begin{equation}\ny_i= \left\{\begin{array}{ll}\nx_i^p - \ln (1+r(\exp (x_i^p-x_i^{(L)})-1)) &\mbox{if $x_i^c < x_i^{(L)}$},\\\nx_i^p + \ln (1+r(\exp (x_i^{(U)}-x_i^p)-1)) &\mbox{if $x_i^c > x_i^{(U)}$}.\n\end{array}\right.\n\label{eq:exp}\n\end{equation}\n\n\n\subsection{Exponential Spread (Exp-S) Approach}\nThis is a variation of the above approach, in which, instead of \nconfining the probability to lie between $x_i^p$ and the violated\nboundary, the exponential probability is spread over the entire\nfeasible region, that is, the probability is distributed from the lower\nboundary to the upper boundary with an increasing probability towards\nthe violated boundary. This requires replacing $x_i^p$ with\n$x_i^{(U)}$ (when the lower boundary is violated) or $x_i^{(L)}$ \n(when the upper boundary is violated) in Equation~\ref{eq:exp} as follows: \n\begin{equation}\ny_i= \left\{\begin{array}{ll}\nx_i^{(U)} - \ln (1+r(\exp(x_i^{(U)}-x_i^{(L)})-1)) &\mbox{if $x_i^c < x_i^{(L)}$},\\\nx_i^{(L)} + \ln (1+r(\exp(x_i^{(U)}-x_i^{(L)})-1)) &\mbox{if $x_i^c > x_i^{(U)}$}.\n\end{array}\right.\n\end{equation}\nThe probability distribution is shown in Figure~\ref{fig:expS}.\nThis approach also belongs to Group~$A$.\n\n\subsection{Shrink Approach}\n\begin{figure}[hbt]\n\begin{center}\n\includegraphics[scale=.35]{SHR.eps} \n\end{center}\n\caption{Vector-based Shrink strategy for handling bounds.}\n\label{fig:SHRBH}\n\end{figure}\nThis is a vector-wise approach belonging to Group~$B$, in which the violated solution is\nset on the intersection point of the line joining the parent point\n($\vec{x}_{not}$), the child point ($\vec{x}^c$), and the violated boundary. 
Mathematically,\nthe mapped vector $\\vec{y}$ is created as follows:\n\\begin{equation}\n\\vec{y} = \\vec{x}_{not} +\\beta (\\vec{x}^c-\\vec{x}_{not}), \n\\end{equation}\nwhere $\\beta$ is computed as the minimum of all positive values of the intercepts\n$(x_i^{(L)}-x_{i,not})\/(x_i^c-x_{i,not})$ for a violated boundary\n$x_i^{(L)}$ and $(x_i^{(U)}-x_{i,not})\/(x_i^c-x_{i,not})$ for a violated boundary\n$x_i^{(U)}$.\nThis operation is shown in Figure~\\ref{fig:SHRBH}. In the case shown,\n$\\beta$ needs to be\ncomputed for the variable bound $x_2^{(U)}$ only.\n\n\\section{Proposed Inverse Parabolic (IP) Constraint-Handling Methods}\\label{sec:IP}\n\\begin{figure}[hbt]\n\\begin{center}\n\\includegraphics[scale=.75]{ProposedDistDeb.eps} \n\\end{center}\n\\caption{Vector-based inverse parabolic (IP) methods for handling bounds.}\n\\label{fig:ProbDistBH}\n\\end{figure}\nThe exponential probability distribution function described in the previous section brings \nviolated solutions back into the allowed range variable-wise, but \nignores the distance of the violated solution $x_i^c$ with respect to\nthe violated boundary. The distance from the violated boundary \ncarries useful information for remapping the violated solution into\nthe feasible region. One way to utilize this distance information is\nto bring solutions back into the allowed range with a higher\nprobability closer to the boundary when the \\textit{fallen-out}\ndistance ($d_v$, as shown in Figure~\\ref{fig:ProbDistBH}) is small. \nIn situations when points lie too far outside the allowable range,\nthat is, when the fallen-out\ndistance $d_v$ is large, particles are brought back more uniformly\ninside the feasible range. Importantly, when the fallen-out distance\n$d_v$ is small (meaning that the violated child solution is close to the\nvariable boundary), the repaired point is also close to the violated\nboundary but on the feasible side. 
Therefore, the nature of the probability\ndistribution should become more and more like a uniform distribution\nas the fallen-out distance $d_v$ becomes large.\n\nLet us consider Figure~\\ref{fig:ProbDistBH}, which shows a\nviolated solution $\\vec{x}^c$ and its parent solution $\\vec{x}^p$. Let\n$d_p=\\|\\vec{x}^c-\\vec{x}^p\\|$ denote the distance between the violated solution\nand the parent solution. Let $\\vec{v}$ and $\\vec{u}$ be the intersection points\nof the line joining $\\vec{x}^c$ and $\\vec{x}^p$ with the violated\nboundary and the non-violated boundary, respectively. The\ncorresponding distances of these two points from $\\vec{x}^c$ are $d_v$\nand $d_u$, respectively. Clearly, the violated distance is $d_v =\n\\|\\vec{x}^c-\\vec{v}\\|$. We now define an inverse\nparabolic probability distribution function from $\\vec{x}^c$ along \nthe direction $(\\vec{x}^p-\\vec{x}^c)$ as:\n\\begin{equation}\np(d) = \\frac{A}{(d-d_v)^2+\\alpha^2d_v^2}, \\quad d_v \\leq d \\leq a,\n\\end{equation}\nwhere $a$ is the upper bound of $d$ allowed by the constraint-handling\nscheme (we define $a$ later) and $\\alpha$ is a pre-defined parameter. By equating\nthe cumulative probability to one, we find:\n\\[A = \\frac{\\alpha d_v}{\\tan^{-1} \\frac{a-d_v}{\\alpha d_v}}.\\]\nThe probability is maximum at $d=d_v$ (at the violated boundary) and\nreduces as the solution enters the allowable range. Although this \ncharacteristic was also present in the exponential distribution, the\nabove probability distribution is also a function of the violated distance\n$d_v$, which acts\nlike a variance for the probability distribution. If $d_v$ is\nsmall, then the variance of the distribution is small, thereby resulting in a\nlocalized effect of creating a mapped solution. 
\nFor a random number $r\\in [0,1]$, the distance of the mapped solution\nfrom $\\vec{x}^c$ in the allowable\nrange $[d_v,d_u]$ is given as follows:\n\\begin{equation}\nd' = d_v + \\alpha d_v \\tan \\left(r \\tan^{-1} \\frac{a-d_v}{\\alpha d_v}\\right).\n\\label{eq:s}\n\\end{equation}\nThe corresponding mapped solution is as follows:\n\\begin{equation}\n\\vec{y} = \\vec{x}^c + d'\\, \\frac{\\vec{x}^p-\\vec{x}^c}{\\|\\vec{x}^p-\\vec{x}^c\\|}.\n\\label{eq:map}\n\\end{equation}\nNote that the IP method performs a vector-wise operation and is sensitive\nto the relative locations of the infeasible solution, the parent solution, and\nthe violated boundary. \n\nThe parameter $\\alpha$ has a direct {\\em external\\\/} effect of inducing a small or large\nvariance in the above probability distribution. If $\\alpha$ is large,\nthe variance is large, thereby yielding a uniform-like distribution. Later \nwe shall study the effect of the parameter $\\alpha$. A value of $\\alpha \\approx 1.2$ is\nfound to work well in most of the problems and is recommended. \nNext, we describe two particular constraint-handling schemes employing this \nprobability distribution. \n\n\\subsection{Inverse Parabolic Confined (IP-C) Method}\nIn this approach, the probability distribution is confined\nto $d \\in [d_v,d_p]$, thereby making $a=d_p$. Here, a mapped\nsolution $\\vec{y}$ lies strictly between the violated boundary location\n($\\vec{v}$) and the parent ($\\vec{x}^p$). \n\n\\subsection{Inverse Parabolic Spread (IP-S) Method} \nHere, the mapped solution is allowed to lie in the entire\nfeasible range between $\\vec{v}$ and\n$\\vec{u}$ along the vector $(\\vec{x}^p-\\vec{x}^c)$, but more emphasis is given to relocating the \nchild near the violated boundary. The solution can be found by using Equations~\\ref{eq:s} and\n\\ref{eq:map}, and by setting $a=d_u$. 
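To make the sampling rules above concrete, the following is a minimal Python sketch of the variable-wise Exp-C repair of Equation~\ref{eq:exp} and of a one-variable version of the IP repair of Equations~\ref{eq:s} and \ref{eq:map}; the function names are ours, and we assume the parent value is feasible.

```python
import math
import random

def exp_confined_repair(x, x_parent, lo, hi, rng=random):
    """Variable-wise Exp-C repair: remap a bound-violating value to a point
    between the parent value and the violated bound, with higher sampling
    probability near that bound."""
    r = rng.random()
    if x < lo:   # lower bound violated
        return x_parent - math.log(1.0 + r * (math.exp(x_parent - lo) - 1.0))
    if x > hi:   # upper bound violated
        return x_parent + math.log(1.0 + r * (math.exp(hi - x_parent) - 1.0))
    return x     # already feasible

def ip_repair_1d(x_c, x_p, lo, hi, alpha=1.2, spread=True, rng=random):
    """One-variable inverse parabolic repair (IP-S if spread, else IP-C)."""
    if lo <= x_c <= hi:
        return x_c
    sign = 1.0 if x_c < lo else -1.0               # direction toward x_p
    d_v = (lo - x_c) if x_c < lo else (x_c - hi)   # fallen-out distance
    d_u = (hi - x_c) if x_c < lo else (x_c - lo)   # to the far boundary
    d_p = abs(x_p - x_c)                           # to the parent
    a = d_u if spread else d_p                     # IP-S: a=d_u, IP-C: a=d_p
    r = rng.random()
    d = d_v + alpha * d_v * math.tan(r * math.atan((a - d_v) / (alpha * d_v)))
    return x_c + sign * d
```

For Exp-C, $r=0$ returns the parent value and $r=1$ the violated bound; for the IP repairs, $r=0$ places the solution on the violated bound and $r=1$ at the far end $a$ of the sampling range.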
\n\n\\section{Results and Discussions}\\label{sec:ResultsDiscussion}\nIn this study, we first choose four standard scalable test functions (in the presence of variable bounds): \nEllipsoidal ($F_{\\rm elp}$), Schwefel ($F_{\\rm sch}$), Ackley ($F_{\\rm ack}$),\nand Rosenbrock ($F_{\\rm ros}$), described as follows:\n\\begin{eqnarray}\nF_{\\rm elp} &=& \\sum_{i=1}^n ix_i^2 \\\\\nF_{\\rm sch} &=& \\sum_{i=1}^n\\left(\\sum_{j=1}^i x_j\\right)^2 \\\\\nF_{\\rm ack} &=& -20 \\exp \\left(-0.2 \\sqrt{ \\frac{1}{n} \\sum_{i=1}^{n} x_i^2}\\right) -\\exp\\left(\\frac{1}{n} \\sum_{i=1}^{n}\\cos(2\\pi x_i)\\right) +20 + e\\\\\nF_{\\rm ros} &=& \\sum_{i=1}^{n-1} (100(x_i^2-x_{i+1})^2 + (x_i-1)^2) \n\\end{eqnarray}\n\nIn the unconstrained space, $F_{\\rm elp}$, $F_{\\rm sch}$ and $F_{\\rm ack}$ have a minimum\nat $x_i^{\\ast}=0$, whereas $F_{\\rm ros}$ has a minimum at $x_i^{\\ast}=1$.\nAll functions have the minimum value $F^{\\ast}=0$. \n$F_{\\rm elp}$ is the only variable-separable problem.\n$F_{\\rm ros}$ is a challenging test problem that has a ridge which poses\ndifficulty for several optimizers. \nIn all cases, the number of variables is chosen to be $n=20$. \n\nFor each test problem, three different scenarios corresponding to the\nrelative location of the \noptimum with respect to the allowable search range are considered. 
This is done \nby selecting different variable bounds, as follows: \n \n\\begin{description}\n\\item[On the Boundary:] Optimum is exactly on one of the variable boundaries (for $F_{\\rm elp}$, $F_{\\rm sch}$ and $F_{\\rm ack}$, \n$x_i\\in [0, 10]$, and for $F_{\\rm ros}$, $x_i\\in [1, 10]$),\n\\item[At the Center:] Optimum is at the center of the allowable range (for $F_{\\rm elp}$, $F_{\\rm sch}$ and $F_{\\rm ack}$, \n$x_i\\in [-10, 10]$, and for $F_{\\rm ros}$, $x_i\\in [-8, 10]$), and \n\\item[Close to Boundary:] Optimum is near the variable boundary, \nbut not exactly on the boundary (for $F_{\\rm elp}$, $F_{\\rm sch}$ and $F_{\\rm ack}$, \n$x_i\\in [-1, 10]$, and for $F_{\\rm ros}$, $x_i\\in [0, 10]$).\n\\end{description}\nThese three scenarios are shown in Figure~\\ref{fig:3scenario} for a\ntwo-variable problem having variable bounds $x_i^{(L)}=0$ and $x_i^{(U)}=10$.\nAlthough in practice the optimum can lie anywhere in the allowable range,\nthe above three scenarios provide an adequate representation of the different\npossibilities that may exist in practice. \n\n\\begin{figure}\n\\centering\n\\begin{subfigure}{}\n \\centering\n \\includegraphics[width=.65\\linewidth]{boundary-ellp-optima}\n\n \\label{fig:boundary-ellp-optima}\n\\end{subfigure}\n\n\\begin{subfigure}{}\n \\centering\n \\includegraphics[width=.65\\linewidth]{center-ellp-optima}\n \n \\label{fig:center-ellp-optima}\n\\end{subfigure}\n\n\\begin{subfigure}{}\n \\centering\n \\includegraphics[width=.65\\linewidth]{close-to-boundary-ellp-optim}\n \n \\label{fig:close-to-boundary-ellp-optim}\n\\end{subfigure}\n\n\\caption{Location of the optimum for $F_{\\rm elp}$: (a) on the boundary, (b) at the center, and (c) close to the \nboundary, obtained by selecting different search domains.}\n\\label{fig:3scenario}\n\\end{figure}\n \nFor each test problem, the population is initialized uniformly in the\nallowable range. 
We count the number of function\nevaluations needed for the algorithm to find a solution whose function value\nlies within a threshold $S$ of the known optimum; we call $S$ our \nevaluation criterion. \nChoosing a high accuracy (i.e., a small value of $S$)\nas the termination criterion minimizes the chances of locating the optimum due to \nrandom effects, and provides a better insight into the behavior of a constraint-handling mechanism. \n\nTo eliminate random effects and gather statistically meaningful results, each algorithm \nis tested on a problem $50$ times (each run starting with a different\ninitial population). A particular run is terminated if the evaluation\ncriterion $S$ is met (noted as a successful run), \nor the number of function evaluations exceeds one million (noted as an\nunsuccessful run). If only a few out of the $50$ runs are\nsuccessful, then we report the number of successful runs in\nbrackets. In this case, the best, median and worst number of function\nevaluations are computed from the successful runs only. If none of the runs\nare successful, we denote this by marking \\textit{(DNC)} (Did Not\nConverge). In such cases, we report the best, median and worst\nattained function values of the best solution at the end of each\nrun. To distinguish the unsuccessful results from successful\nones, we present the fitness value information of the unsuccessful\nruns in italics.\n\nAn in-depth study of the constraint-handling techniques is carried out\nin this paper. \nDifferent locations of the optimum are selected and systematic comparisons are \ncarried out for PSO, DE and GAs in Sections~\\ref{subsec:psoresults}, \n\\ref{subsec:DEresults} and \\ref{subsec:GAresults}, respectively. \n \n\\subsection{Results with Particle Swarm Optimization (PSO)}\\label{subsec:psoresults}\nIn PSO, the decision variable and velocity terms are updated\nindependently. 
Let us say that the initial position is $\\vec{x}_{t}$, the newly created \nposition is infeasible and represented by $\\vec{x}_{t+1}$, and the repaired solution\nis denoted by $\\vec{y}_{t}$. \n\nIf the velocity update is based on the infeasible solution as:\n\n\\begin{equation}\n\\label{eqn:velocity-standard}\n\\vec{v}_{t+1}=\\vec{x}_{t+1} - \\vec{x}_{t}, \n\\end{equation}\n\nthen we refer to this as ``Velocity Unchanged''. However, if the velocity update \nis based on the repaired location as: \n\n\\begin{equation}\n\\label{eqn:velocity-recomputed}\n\\vec{v}_{t+1}=\\vec{y}_{t} - \\vec{x}_{t}, \n\\end{equation}\n\nthen we refer to this as ``Velocity Recomputed''. This terminology is used for the rest of the paper. \nFor the inverse parabolic (IP) and exponential (Exp) approaches, we use the\n``Velocity Recomputed'' strategy only. We have also tried the ``Velocity\nUnchanged'' strategy with the IP and exponential approaches, but the\nresults were not as good as those with the ``Velocity Recomputed'' strategy. \nFor the \\textit{SetOnBoundary} approach, we use the ``Velocity Recomputed''\nstrategy and two other strategies, discussed as follows. \n\nAnother strategy, named \n``Velocity Reflection'', is used, which simply implies \nthat if a particle is set on the $i$-th boundary, then $v_i^{t+1}$ \nis changed to $-v_i^{t+1}$. The goal of velocity\nreflection is to explicitly allow particles to move back into the search space.\nIn the ``Velocity Set to Zero'' strategy, if a particle is set\non the $i$-th boundary, then the corresponding velocity component is set to zero, i.e., $v_i^{t+1}=0$. \nFor the shrink approach, both the ``Velocity Recomputed'' and ``Velocity\nSet to Zero'' strategies are used. \n\nFor PSO, a recently proposed {\\em hyperbolic} \\cite{Helwid-Constraing-handling-2013} constraint-handling approach is also\nincluded in this study. 
This strategy operates by first calculating the velocity according to the standard mechanism (Eq.~\\ref{eqn:velocity-standard}), and \nin the case of a violation a linear normalization is performed on the velocity to restrict the solution from jumping out of the variable boundaries, as follows:\n\\begin{equation}\n\\label{eq:hyperbolic}\nv_{i,t+1} = \\frac{v_{i,t+1}}{1+\\frac{|v_{i,t+1}|}{\\min(x_i^{(U)}-x_{i,t},\\,x_{i,t}-x_{i}^{(L)})}}. \n\\end{equation}\nEssentially, the closer the particle gets to the boundary (e.g.,\n$x_{i,t}$ only slightly smaller than $x_{i}^{(U)}$), the more difficult it becomes to reach the boundary. In fact, the particle is never completely\nallowed to reach the boundary as the velocity tends to zero. We emphasize again that this strategy is only applicable to\nPSO. A standard PSO is employed in this study with a population size of 100. \nThe results for all the above scenarios with PSO are presented in\nTables~\\ref{tab:PSOEllp} to~\\ref{tab:PSORos}.\n\n\\begin{table*}[ht]\n\\begin{footnotesize}\n\\caption{Results on $F_{\\rm elp}$ with PSO for $10^{-10}$ termination criterion.}\n\\begin{minipage}[b]{1.0\\linewidth}\n\\begin{center}\n\\begin{tabular}{|lrrrr|} \\hline \n{Strategy} & Velocity update & {Best} &\n Median & Worst \\\\ \\hline \\hline \n\\multicolumn{5}{|c|}{\t{$F_{\\rm elp}$ in [0,10]: On the Boundary}} \\\\ \\hline\nIP Spread & Recomputed & 39,900 & 47,000 &\n67,000 \\\\ IP Confined & Recomputed & 47,900 (49) & 88,600 & 140,800 \\\\ \nExp. Spread & Recomputed &\\textit{3.25e-01} & \\textit{5.02e-01} & \\textit{1.08e+00} \\\\\nExp. 
Confined & Recomputed & {\\bf 4,600} & {\\bf 5,900} &\n{\\bf 7,500} \\\\ \nPeriodic & Recomputed &\\textit{3.94e+02} \\textit{(DNC)}& \\textit{6.63e+02} & \\textit{1.17e+03} \\\\ \nPeriodic & Unchanged & \\textit{8.91e+02} \\textit{(DNC)} &\n\\textit{1.03e+03} &\\textit{1.34e+03} \\\\ \nRandom & Recomputed & \\textit{1.97e+01} \\textit{(DNC)}& \\textit{3.37e+01} & \\textit{8.10e+01} \\\\ \nRandom & Unchanged & \\textit{5.48e+02} \\textit{(DNC)}&\\textit{6.69e+02} & \\textit{9.65e+02} \\\\ \nSetOnBoundary & Recomputed & 900 (44) & 1,300 & 5,100 \\\\\nSetOnBoundary & Reflected & 242,100 & 387,100 & 811,400 \\\\ \nSetOnBoundary & Set to Zero & 1,300 (48) & 1,900 & 4,100 \\\\ \nShrink & Recomputed & 8,200 (49) & 10,900 & 14,300 \\\\ \nShrink & Set to Zero & 33,000 & 40,700 & 53,900 \\\\ \nHyperbolic & Modified (Eq.~(\\ref{eq:hyperbolic}))\t\t & 14,100 &15,100 & 16,500\t\t\t\\\\\t\\hline \\hline\n\\multicolumn{5}{|c|}{\t{$F_{\\rm elp}$ in [-10,10]: At the Center}}\t\\\\ \\hline\nIP Spread & Recomputed & 31,600 & 34,000 &37,900 \\\\\nIP Confined & Recomputed & 30,900 & 33,800 & 38,500 \\\\\nExp. Spread & Recomputed & 30,500 & 34,700 & 38,300 \\\\\nExp. 
Confined & Recomputed & 31,900 & 35,100 & 38,200 \\\\\nPeriodic & Recomputed & 32,200 & 35,100 & 37,900 \\\\\nPeriodic & Unchanged & 33,800 & 36,600 & 41,200 \\\\%\\hline\nRandom & Recomputed & 31,900 & 34,800 & 37,400 \\\\\nRandom & Unchanged & 31,600 & 34,900 & 38,100 \\\\\nSetOnBoundary &Recomputed & 31,900 & 35,500 & 40,500 \\\\ \nSetOnBoundary & Reflected & 50,800 (38) & 83,200 & 484,100 \\\\\nSetOnBoundary & Set to Zero & 31,600 & 35,000 & 37,200 \\\\\nShrink & Recomputed & 32,000 & 34,400 & 48,200 \\\\% \\hline \nShrink & Set to Zero & 31,400 & 34,000 & 37,700 \\\\ \nHyperbolic & Modified (Eq.~(\\ref{eq:hyperbolic}))\n& {\\bf 29,400} & {\\bf 31,200} & {\\bf 34,700}\t\t\t\\\\ \\hline \\hline\n\\multicolumn{5}{|c|}{\t {$F_{\\rm elp}$ in [-1,10]: Close to Boundary}} \\\\ \\hline\nIP Spread & Recomputed & 28,200 & 31,900 & 35,300 \\\\\nIP Confined & Recomputed & 28,300 & 32,900 & 44,600 \\\\\nExp. Spread & Recomputed & 28,300 & 30,700 & 33,200 \\\\% \\hline\nExp. Confined & Recomputed & 29,500 & 33,000 & 44,700 \\\\\nPeriodic & Recomputed & \\textit{4.86e+01} \\textit{(DNC)} & \\textit{1.41e+02} & \n\\textit{4.28e+02} \\\\\nPeriodic & Unchanged & \\textit{2.84e+02} \\textit{(DNC)} & \\textit{5.46e+02} & \\textit{8.28e+02} \\\\\nRandom & Recomputed & 36,900 & 41,900 & 45,600 \\\\\nRandom & Unchanged & \\textit{1.13e+02} \\textit{(DNC)} & \\textit{2.26e+02} & \\textit{4.35e+02} \\\\ \nSetOnBoundary & Recomputed & \\textit{1.80e+01} \\textit{(DNC)} & \\textit{7.60e+01} & \n\\textit{3.00e+02} \\\\ \nSetOnBoundary & Reflected & \\textit{2.13e-01} \\textit{(DNC)}& \\textit{2.17e+01} & \n\t\\textit{1.06e+02} \\\\\nSetOnBoundary & Set to Zero & 31,700 (2) & 31,700 & 32,600 \\\\\nShrink & Recomputed & 29,500 (6) & 36,100 & 42,300 \\\\\nShrink & Set to Zero & 28,400 (36) & 32,700 & 65,600 \\\\ \nHyperbolic & Modified (Eq.~(\\ref{eq:hyperbolic}))\n& {\\bf 25,900} & {\\bf 29,200} & {\\bf 31,000}\t\\\\ 
\\hline\n\\end{tabular}\n\\end{center}\n\\label{tab:PSOEllp}\n\\end{minipage}\n\\end{footnotesize}\n\\end{table*}\n\n\\begin{table*}\n\\begin{footnotesize}\n\\begin{minipage}[b]{1.0\\linewidth}\n\\begin{center}\n\\caption{Results on $F_{\\rm sch}$ with PSO for $10^{-10}$ termination criterion.}\n\\begin{tabular}{|lrrrr|} \\hline \\hline\n{{Strategy}} & Velocity & {{Best}}& {{Median}}& {{Worst}} \\\\ \\hline \\hline\n\\multicolumn{5}{|c|}{{$F_{\\rm sch}$ in [0,10]: On the Boundary}} \\\\ \\hline\nIP Spread & Recomputed & 67,200 & 257,800 & 970,400 \\\\% \\hline\nIP Confined & Recomputed & 112,400 (6) & 126,500 & 145,900 \\\\% \\hline\nExp. Spread & Recomputed & \\textit{3.79e+00} \\textit{(DNC)}& \\textit{8.37e+00} & \\textit{1.49e+01} \\\\\nExp. Confined & Recomputed & {\\bf 4,900} & {\\bf 6,100} & {\\bf 13,500} \\\\% \\hline\nPeriodic & Recomputed & \\textit{4.85e+03} \\textit{(DNC)}& \\textit{7.82e+03}&\\textit{1.34e+04} \\\\\nPeriodic &Unchanged & \\textit{7.69e+03} \\textit{(DNC)}& \\textit{1.11e+04}& \\textit{1.51e+04} \\\\% \\hline\nRandom &Recomputed & \\textit{2.61e+02} \\textit{(DNC)}& \\textit{5.44e+02} & \\textit{1.05e+03} \\\\\n\nRandom &Unchanged & \\textit{5.30e+03} \\textit{(DNC)} & \\textit{7.60e+03} & \\textit{1.22e+04} \\\\% \\hline\n \nSetOnBoundary &Recomputed & 800 (30) & 1,100 & 3,900 \\\\ \nSetOnBoundary &Reflected & 171,500 & 241,700 & 434,200 \\\\\nSetOnBoundary &Set to Zero & 1,000 (40) & 1,600 & 5,300 \\\\\nShrink &Recomputed\t & 6,900 & 9,100 & 11,600 \\\\% \\hline\nShrink &Set to Zero & 17,900 & 31,900 & 49,800 \\\\ \nHyperbolic & Modified (Eq.~(\\ref{eq:hyperbolic}))\t\t\t& 36,400 & 41,700 & 48,700\t\t\t\\\\ \\hline \\hline\n\\multicolumn{5}{|c|}{{$F_{\\rm sch}$ in [-10,10]: At the Center}} \\\\ \\hline\nIP Spread & Recomputed & 106,700 & 127,500 & 144,300 \\\\\nIP Confined & Recomputed & 111,500 & 130,100 & 149,900 \\\\% \\hline\nExp. Spread & Recomputed & 112,300 & 131,400 & 149,000 \\\\\nExp. 
Confined & Recomputed & 116,400 & 131,300 & 148,200 \\\\\nPeriodic &Recomputed & 113,400 & 130,900 & 150,600 \\\\%\t\\hline\nPeriodic &Unchanged & 121,200 & 137,800 & 159,100 \\\\\nRandom &Recomputed & 112,900 & 129,800 & 151,100 \\\\\nRandom &Unchanged & 117,000 & 130,600 & 148,100 \\\\\nSetOnBoundary &Recomputed & 118,500 (49) & 132,300 & 161,100 \\\\\nSetOnBoundary &Reflected & \\textit{3.30e-06} \\textit{(DNC)}& \\textit{8.32e+01}\\textit{(DNC)}& \n\\textit{2.95e+02} \\textit{(DNC)} \\\\% \\hline \nSetOnBoundary & Set to Zero & 111,900 & 132,200 & 149,700 \\\\\nShrink.&Recomputed & 111,800 (49)\t& 131,800\t& 183,500 \\\\\nShrink.&Set to Zero & 108,400\t& 125,100\t& 143,600 \\\\ \nHyperbolic & Modified (Eq.~(\\ref{eq:hyperbolic}))\t\t\t& {\\bf 101,300}\t& \t{\\bf 117,700} & {\\bf 129,700} \\\\\\hline \\hline\n\\multicolumn{5}{|c|}{{$F_{\\rm sch}$ in [-1,10]: Close to\n Boundary}} \\\\ \\hline\nIP Spread & Recomputed & 107,200 & 130,400 & 272,400 \\\\\nIP Confined & Recomputed & 120,100 (44) & 171,200 & 301,200 \\\\\nExp. Spread & Recomputed & 92,800 & 109,200 & 126,400 \\\\\nExp. 
Confined & Recomputed & 110,200 & 127,400 & 256,100 \\\\\nPeriodic&Recomputed & \\textit{8.09e+02} \\textit{(DNC)}& \\textit{2.01e+03} \\textit{(DNC)}& \n\\textit{5.53e+03}\\textit{(DNC)} \\\\\nPeriodic&Unchanged & \\textit{2.16e+03} \\textit{(DNC)} & \\textit{4.36e+03} \\textit{(DNC)} & \\textit{6.87e+03} \\textit{(DNC)} \\\\\nRandom&Recomputed & 123,300 & 165,600\t& 280,000 \\\\\nRandom&Unchanged & \\textit{8.17e+02} \\textit{(DNC)} & \\textit{1.96e+03} \\textit{(DNC)} & \n\\textit{2.68e+03} \\textit{(DNC)} \\\\\n\t\t\nSetOnBoundary&Recomputed & \\textit{2.50e+00} \\textit{(DNC)} & \\textit{1.25e+01} \\textit{(DNC)} & \\textit{5.75e+02} \\textit{(DNC)} \\\\\nSetOnBoundary&Reflected & \\textit{1.86e+00} \\textit{(DNC)} & \\textit{7.76e+00} \\textit{(DNC)} & \\textit{5.18e+01} \\textit{(DNC)} \\\\\nSetOnBoundary&Set to Zero & \\textit{1.00e+00} \\textit{(DNC)} & \\textit{5.00e+00} \\textit{(DNC)} & \n\\textit{4.21e+02} \\textit{(DNC)} \\\\\nShrink & Recomputed & \\textit{5.00e-01} \\textit{(DNC)} & \\textit{3.00e+00} \\textit{(DNC)} & \\textit{1.60e+01} \\textit{(DNC)} \\\\\nShrink &Set to Zero & 108,300 (8) & 130,300 & 143,000 \\\\ \nHyperbolic \t& Modified (Eq.~(\\ref{eq:hyperbolic}))\t\t& {\\bf 93,100} \t\t&{\\bf \t108,300} & {\\bf 119,000}\t\\\\ \\hline\n\\end{tabular}\n\\label{tab:PSOSch}\n\\end{center}\n\\end{minipage}\n\\end{footnotesize}\n\\end{table*}\n\n\\begin{table*}[ht]\n\\begin{footnotesize}\n\\begin{minipage}[b]{1.0\\linewidth}\n\\caption{Results on $F_{\\rm ack}$ with PSO for $10^{-10}$ termination criterion. }\n\\begin{center}\n\\begin{tabular}{|lrrrr|} \\hline\n{{Strategy}} & {{Best}}& {{Median}}& {{Worst}} \\\\ \\hline \\hline\n\\multicolumn{5}{|c|}{{$F_{\\rm ack}$ in [0,10]: On the Boundary}} \\\\ \\hline\nIP Spread & Recomputed\t\t&\t150,600 (49)\t& 220,900\t & 328,000 \\\\\nIP Confined & Recomputed\t& \\textit{4.17e+00} \\textit{(DNC)} & \\textit{6.53e+00} & \\textit{8.79e+00}\\\\ \nExp. 
Spread & Recomputed\t&\\textit{2.76e-01} \\textit{(DNC)} & \\textit{9.62e-01} & \n\\textit{2.50e+00} \\\\ \nExp. Confined & Recomputed\t& 7,800\t& 9,600\t& 11,100 \\\\% \\hline \nPeriodic&Recomputed \t&\\textit{6.17e+00} \\textit{(DNC)} & \\textit{6.89e+00} & \n\\textit{9.22e+00} \\\\\n\t\nPeriodic&Unchanged \t & \\textit{8.23e+00} \\textit{(DNC)} & \\textit{9.10e+00} & \n\\textit{9.68e+00} \\\\\nRandom&Recomputed \t& \\textit{3.29e+00} \\textit{(DNC)} & \\textit{3.40e+00} & \\textit{4.19e+00} \\\\\nRandom&Unchanged &\\textit{6.70e+00} \\textit{(DNC)}& \\textit{7.46e+00}& \\textit{8.57e+00} \n\\\\ \nSetOnBoundary&Recomputed \t&{\\bf 800}\t& {\\bf 1,100}\t& {\\bf 2,100} \\\\% \\hline \nSetOnBoundary&Reflected \t & 420,600\t& 598,600\t& 917,400 \\\\\nSetOnBoundary&Set to Zero\t & 1,100\t& 1,800\t& 3,100\t \\\\% \\hline \nShrink.&Recomputed \t\t& 33,800 (5) & 263,100 & 690,400 \\\\% \\hline \nShrink.&Set Zero \t\t& \\textit{3.65e+00} \\textit{(DNC)} & \\textit{6.28e+00} & \\textit{8.35e+00} \\\\ \nHyperbolic & Modified (Eq.~(\\ref{eq:hyperbolic}))\t\t& 24,600 (25) &26,100 &28,000 \\\\\t\\hline \\hline\n\\multicolumn{5}{|c|}{ {$F_{\\rm ack}$ in [-10,10]: At the Center}} \\\\ \\hline\nIP Spread & Recomputed\t\t\t &\t53,900 (46)\t& 58,600\t& 66,500 \\\\% \\hline\nIP Confined & Recomputed\t\t\t& 54,800 (49)\t& 59,200\t& 64,700 \\\\\nExp. Spread & Recomputed\t\t& {\\bf 55,100}\t& {\\bf 59,300}\t& {\\bf 63,600} \\\\\nExp. 
Confined\t& Recomputed\t &\t 56,800 & 59,600\t& 65,000 \\\\\nPeriodic&Recomputed \t\t & 55,700 (48)\t& 59,900\t& 64,700 \\\\\nPeriodic&Unchanged \t\t &\t57,900 (49)\t& 62,100\t& 66,700 \\\\% \\hline\nRandom&Recomputed \t\t& 55,100 (47)\t& 59,400\t& 65,100 \\\\\nRandom&Unchanged \t\t&\t56,300\t & 59,700\t& 65,500 \\\\% \\hline\nSetOnBoundary&Recomputed \t & 55,100 (49)\t& 58,900\t& 65,400 \\\\\nSetOnBoundary&Reflected \t& 86,900 (4)\t& 136,400\t& 927,600 \\\\% \\hline\nSetOnBoundary&Set to Zero \t& 53,900 (49)\t& 59,600\t& 67,700 \\\\\nShrink &Recomputed \t\t& 55,800 (47)\t\t& 58,700\t& 65,800 \\\\\nShrink &Set to Zero \t\t\t& 55,700 (49)\t\t& 58,900\t& 62,000 \\\\ \nHyperbolic & Modified (Eq.~(\\ref{eq:hyperbolic}))\t\t\t& 52,900 (49) & 56,200 & 64,400\t\t\\\\\\hline \\hline\n\\multicolumn{5}{|c|}{ {$F_{\\rm ack}$ in [-1,10]: Close to\n Boundary}} \\\\ \\hline\nIP Spread & Recomputed\t\t\t &\t54,600 (5)\t& 55,100\t& 56,600 \\\\\nIP Confined & Recomputed\t\t\t& 63,200 (1)\t& 63,200\t& 63,200 \\\\\nExp. Spread & Recomputed\t\t& {\\bf 51,300}\t& {\\bf 55,200}\t& {\\bf 58,600} \\\\\nExp. 
Confined & Recomputed & \\textit{1.42e+00} \\textit{(DNC)} & \\textit{2.17e+00} \n& \\textit{2.92e+00} \\\\\nPeriodic&Recomputed & \\textit{2.88e+00} \\textit{(DNC)} & \\textit{4.03e+00} & \\textit{5.40e+00} \\\\\nPeriodic&Unchanged & \\textit{6.61e+00} \\textit{(DNC)} & \\textit{7.46e+00} & \\textit{8.37e+00} \\\\ \nRandom&Recomputed & 60,300 (45) & 66,200 & 72,200 \\\\% \\hline\nRandom&Unchanged \t& \\textit{4.21e+00} \\textit{(DNC)} &\n\\textit{4.93e+00} & \\textit{6.11e+00} \\\\ \nSetOnBoundary&Recomputed & \\textit{2.74e+00} \\textit{(DNC)} & \\textit{3.16e+00} & \\textit{3.36e+00} \\\\\nSetOnBoundary&Reflected \t & 824,700 (1)\t& 824,700\t& 824,700 \\\\% \\hline\nSetOnBoundary&Set to Zero \t& \\textit{1.70e+00} \\textit{(DNC)} & \\textit{2.63e+00} \n& \\textit{3.26e+00} \\\\ \nShrink&Recomputed & \\textit{1.45e+00} \\textit{(DNC)} & \\textit{2.34e+00} & \\textit{2.73e+00} \\\\ \nShrink&Set to Zero & \\textit{2.01e+00} \\textit{(DNC)} & \\textit{3.96e+00} & \\textit{6.76e+00} \\\\ \n\tHyperbolic & Modified (Eq.~(\\ref{eq:hyperbolic})) &\n 50,000 (39) & 53,500 & 58,100 \\\\ \\hline\n\\end{tabular}\n\\label{tab:PSOAck}\n\\end{center}\n\\end{minipage}\n\\end{footnotesize}\n\\end{table*}\n\n\\begin{table*}[ht]\n\\begin{footnotesize}\n\\caption{Results on $F_{\\rm ros}$ with PSO for $10^{-10}$ termination criterion.}\n\\begin{minipage}[b]{1.0\\linewidth}\n\\begin{center}\n\\begin{tabular}{|lrrrr|} \\hline \n{{Strategy}} & {{Best}}& {{Median}}& {{Worst}} \\\\ \\hline \\hline\n\\multicolumn{5}{|c|}{ {$F_{\\rm ros}$ in [1,10]: On the Boundary}} \\\\ \\hline\nIP Spread & Recomputed & 89,800 & 195,900 & 243,300\t \\\\\nIP Confined & Recomputed & 23,800 &164,300 & 209,300\t\\\\\nExp. Spread & Recomputed & \\textit{9.55e-01} \\textit{(DNC)} & \\textit{2.58e+00}\n& \\textit{7.64e+00} \\\\ \nExp. 
Confined & Recomputed & {\\bf 3,700} &\t{\\bf 128,100}\t& {\\bf 344,400}\t\\\\\nPeriodic&Recomputed & \\textit{1.24e+04} \\textit{(DNC)} & \\textit{2.35e+04}\n\t&\\textit{4.24e+04} \\\\\nPeriodic&Unchanged & \\textit{6.99e+04} \\textit{(DNC)} & \\textit{1.01e+05} \n& \\textit{1.45e+05} \t\t\\\\\nRandom&Recomputed & \\textit{6.00e+01} \\textit{(DNC)} & \\textit{1.37e+02} & \n\t\\textit{4.42e+02} \t\t \\\\\nRandom&Unchanged & \\textit{2.32e+04} \\textit{(DNC)} & \\textit{3.90e+04} \n& \\textit{8.22e+04} \\\\ \nSetOnBoundary&Recomputed \t& 900 (45) &\t1,600 \t&89,800\t\t \\\\\nSetOnBoundary&Reflected \t& \\textit{2.14e-03} \\textit{(DNC)} & \\textit{6.01e+02} \n& \\textit{5.10e+04} \t\t \\\\ \nSetOnBoundary&Set to Zero \t& 1,400 (48) &\t3,000\t& 303,700 \t\t \\\\\nShrink.&Recomputed \t& 3,900 (44) &\t5,100 \t& 406,000\t\t \\\\% \\hline\nShrink.&Set to Zero & 15,500 &136,200 & 193400\t\t \\\\\nHyperbolic & & 177,400 (45) & 714,300 & 987,500\\\\\t\t \\hline \\hline\n\\multicolumn{5}{|c|}{ {$F_{\\rm ros}$ in [-8,10]: Near the\n Center}} \\\\ \\hline\nIP Spread & Recomputed & 302,300 (28) & 774,900 & 995,000 \t \\\\% \\hline\nIP Confined & Recomputed & 296,600 (32) &729,000 &955,000 \t \\\\\nExp. Spread & Recomputed & 208,800 (24) & 754,700 & 985,200 \t \\\\\nExp. 
Confined & Recomputed & 301,100 (33) & 801,400 & 961,800 \\\\\nPeriodic&Recomputed & 26,200 (27) & 705,100 & 986,200 \\\\% \\hline\nPeriodic&Unchanged & 247,300 (32) & 776,800 & 994,900 \t \\\\% \\hline\nRandom&Recomputed & 311,200 (30) &809,300 & 990,800 \\\\\nRandom&Unchanged & 380,100 (29) & 793,300 & 968,300\t \\\\%\t\\hline\nSetOnBoundary&Recomputed & {\\bf 248,700} (35) & {\\bf 795,600} & {\\bf 973,900} \\\\\nSetOnBoundary&Reflected & 661,900 (01) & 661,900 & 661,900 \\\\% \\hline\nSetOnBoundary&Set to Zero & 117,400 (25) &\t858,400 \t& 995,400 \\\\% \\hline\nShrink.&Recomputed & 347,900 (33) & 790,500& 996,300 \\\\% \\hline\nShrink.&Set to Zero & 353,300 (26) & 788,700 & 986,800 \\\\ \nHyperbolic & Modified (Eq.~(\\ref{eq:hyperbolic}))\t & \\textit{6.47e-08 (DNC)} & \\textit{1.27e-04 (DNC)}& \\textit{6.78e+00 (DNC)} \t\\\\\t\\hline \\hline\n\\multicolumn{5}{|c|}{ {$F_{\\rm ros}$ in [1,10]: Close to Boundary}}\n\\\\ \\hline\nIP Spread & Recomputed & 184,600 (47) & 442,200 & 767,500 \\\\% \\hline\nIP Confined & Recomputed & 229,900 (40) & 457,600 & 899,200 \\\\% \\hline\nExp. Spread & Recomputed & 19,400 (47) & 378,200 & 537,300 \\\\\nExp. 
Confined & Recomputed & \\textit{6.79e-03} \\textit{(DNC)} & \\textit{4.23e+00} & \n\\textit{6.73e+01} \\\\ \nPeriodic&Recomputed & \\textit{1.51e-02} \\textit{(DNC)} \t& \\textit{3.73e+00} \n& \\textit{5.17e+02} \\\\\nPeriodic&Unchanged & \\textit{1.92e+04} \\textit{(DNC)} & \\textit{2.86e+04} & \n\\textit{6.71e+04} \\\\ \nRandom&Recomputed & {\\bf 103,800} \t& {\\bf 432,200}\t& {\\bf 527,200} \\\\\nRandom&Unchanged & \\textit{2.33e+02} \\textit{(DNC)} & \\textit{1.47e+03} \n& \\textit{4.23e+03} \\\\ \nSetOnBoundary&Recomputed & \\textit{1.71e+01} \\textit{(DNC)} & \\textit{1.87e+01}& \n\\textit{3.13e+02} \t\\\\\nSetOnBoundary&Reflected & \\textit{6.88e+00} \\textit{(DNC)} \t& \\textit{5.52e+02} & \n\t\\textit{2.14e+04} \\\\\nSetOnBoundary&Set to Zero & \\textit{6.23e+00} \\textit{(DNC)} & \\textit{1.80e+01}\n& \\textit{3.12e+02} \\\\ \nShrink &Recomputed & 350,300 (3) \t& 350,900\t& 458,400 \\\\ \nShrink &Set to Zero & 163,700 (26) & 418,000 &531,900 \\\\ \nHyperbolic & Modified (Eq.~(\\ref{eq:hyperbolic}))\t & 920,900 (1) & 920,900 & 920,900 \t\\\\ \t\\hline\n\\end{tabular}\n\\label{tab:PSORos}\n\\end{center}\n\\end{minipage}\n\\end{footnotesize}\n\\end{table*}\n\nThe extensive simulation results are summarized using the following method. For each (say $j$) of the 14\napproaches, the corresponding number of the successful applications ($\\rho_j$) are recorded. Here,\nan application is considered to be successful if more than 45 runs out of 50\nruns are able to find the optimum within the specified accuracy. It is\nobserved that IP-S is successful in 10 out of 12 problem instances. Exponential confined approach (Exp-C) is successful in 9\ncases. 
To investigate the number of function evaluations (FE) needed\nby an approach (say $j$) to find the optimum, we compute the average number of evaluations $\\bar{\\rm FE}_k$ \nneeded to solve a particular problem ($k$) across all approaches and construct the following metric for the\n$j$-th approach:\n\\begin{equation}\n\\mbox{FE-ratio}_j = \\frac{1}{\\rho_j}\\sum_{k=1}^{12} \\frac{\\mbox{FE}_k^j}{\\bar{\\rm FE}_k}, \n\\end{equation}\nwhere FE$_k^j$ is the number of FEs needed by the $j$-th approach to solve the\n$k$-th problem, and the sum is taken over the problems solved by that approach. Figure~\\ref{fig:pso_rank} shows the performance of\neach ($j$-th) of the 14 approaches on a two-axis plot ($\\rho_j$ and\n$\\mbox{FE-ratio}_j$). \n\\begin{figure}[hbt]\n\\begin{center}\n\\includegraphics[scale=1.0]{pso_rank.eps} \n\\end{center}\n\\caption{Performance comparison of 14 approaches for two metrics --\n number of problems solved successfully and function evaluation ratio\n -- with the PSO algorithm.}\n\\label{fig:pso_rank}\n\\end{figure}\n\nThe best approaches should have large $\\rho_j$ values and small $\\mbox{FE-ratio}_j$ values. \nThis results in a trade-off among the three best approaches, which are marked with filled circles. All other 11\napproaches are {\\em dominated\\\/} by these three approaches. The\n\\textit{SetOnBoundary} approach with velocity set to zero succeeds in only six out of 12\nproblem instances. Thus, we ignore this approach. There is a clear\ntrade-off between the IP-S and Exp-C approaches. IP-S solves one\nmore problem, but requires more FEs in comparison to Exp-C. Hence, we\nrecommend the use of both of these methods over all other methods used in this study. 
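For reproducibility, the two-metric summary above can be sketched in a few lines of Python; as an assumption (the text leaves the normalization implicit), $\bar{\rm FE}_k$ is taken here as the mean FE count over the approaches that solved problem $k$, and the function name is ours.

```python
def summarize_approaches(fe):
    """fe[j][k]: FEs needed by approach j on problem k, or None when the
    approach failed the 45-of-50 success criterion there.
    Returns {j: (rho_j, FE-ratio_j)}."""
    n_problems = len(next(iter(fe.values())))
    # Mean FEs per problem over the approaches that solved it (assumption).
    fe_bar = []
    for k in range(n_problems):
        solved = [row[k] for row in fe.values() if row[k] is not None]
        fe_bar.append(sum(solved) / len(solved))
    summary = {}
    for j, row in fe.items():
        solved_k = [k for k in range(n_problems) if row[k] is not None]
        rho_j = len(solved_k)
        ratio = (sum(row[k] / fe_bar[k] for k in solved_k) / rho_j
                 if rho_j else float("inf"))
        summary[j] = (rho_j, ratio)
    return summary
```

The non-dominated approaches are then those with a large $\rho_j$ and a small FE-ratio$_j$.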
\n\nOther conclusions of this extensive study of PSO with different\nconstraint-handling methods are summarized as follows:\n\\begin{enumerate}\n\\item The constraint-handling methods show a large variation in the performance\ndepending on the choice of test problem and location of the optimum in\nthe allowable variable range.\n\n\\item When the optimum is on the variable boundary, periodic and\n random allocation methods perform poorly. This\n is expected intuitively. \n\n\\item When the optimum is on the variable boundary, methods that set infeasible\n solutions on the violated boundary (\\textit{SetOnBoundary} methods) work very\n well for obvious reasons, but these methods do not perform well for other cases. \n\n\\item When the optimum lies near the center of the allowable range,\n most constraint-handling approaches work almost equally well. This can be understood intuitively from the fact that\ntendency of particles to fly out of the search space is small when the optimum is\nin the center of the allowable range. For example, the\nperiodic approaches fail in all the cases but are able to demonstrate\nsome convergence characteristics \nfor all test problems, when the optimum is at the center. When the optimum is on the boundary or close\nto the boundary, then the effect of the chosen \nconstraint-handling method becomes critical.\n\n\\item The shrink method (with ``Velocity Recomputed\" and ``Velocity Set\n Zero'' strategies) succeeded in 10 of the 12 cases.\n\\end{enumerate}\n\n\\subsection{Parametric Study of $\\alpha$}\\label{sec:alpha-effect}\nThe proposed IP approaches involve a parameter $\\alpha$ affecting the\nvariance of the probability distribution for the mapped variable. In\nthis section, we perform a parametric study of $\\alpha$ to determine\nits effect on the performance of the IP-S approach.\n \nFollowing $\\alpha$ values are chosen: $0.1$, $1$, $10$, and\n$1,000$. 
To have an idea of the effect of $\alpha$, we plot the
probability distribution of mapped values in the allowable range
($[1,10]$, optimum on the boundary) for $d=1.0$ in Figure~\ref{fig:Alpha-effect-figure}. It can
be seen that for $\alpha=10$ and $1,000$, the distribution is almost
uniform. 
\begin{figure}[hbt]
\begin{center}
\includegraphics[scale=.75]{alpha-effect-final.eps} 
\caption{Probability distribution function with different $\alpha$
  values. The $x$-axis denotes the increasing distance from the
  violated boundary. $x=1$ marks the violated boundary. The child is
  created at $x=0$.}
\label{fig:Alpha-effect-figure}
\end{center}
\end{figure}

Figure~\ref{fig:Alpha-effect-result} shows the effect
of $\alpha$ on the $F_{elp}$ problem. For the same termination criterion, 
we find that $\alpha=0.1$ and $1.0$ perform better than the other
values. With
larger values of $\alpha$, the IP-S method does not even find the
desired solution in all 50 runs. 
\begin{figure}[hbt]
\begin{center}
\includegraphics[scale=1.0]{Alpha-effect.eps} 
\caption{Performance of the PSO algorithm with the IP-S approach with
  different $\alpha$ values on the $F_{elp}$ problem.}
\label{fig:Alpha-effect-result}
\end{center}
\end{figure}

\subsection{Results with Differential Evolution (DE)}\label{subsec:DEresults}
Differential evolution, originally proposed in \cite{storn}, has gained 
popularity as an efficient evolutionary optimization algorithm. 
The developers of DE proposed a total of ten different strategies 
\cite{StornPriceBook}. In \cite{DE-Boundary-Handling} it was shown that the 
performance of DE depends largely upon the choice of constraint-handling mechanism.
We use Strategy~1 (where the offspring is created around the population-best solution), which is
most suited for solving unimodal problems \cite{padhyeJOGO2012}. 
A population size of $50$ was chosen with parameter values 
$CR=0.5$ and $F=0.7$. 
Other parameters are set the same as before. 
We use $S=10^{-10}$ as our termination criterion. Results are tabulated 
in Tables~\ref{tab:DEellp} to \ref{tab:DEros}. 
The following observations can be drawn:
\begin{enumerate}
\item For problems having the optimum at one of the boundaries,
  the {\it SetOnBoundary} approach performs best. This is not a surprising
  result.
\item For problems having the optimum near the center of the
  allowable range, almost all eight approaches perform in a similar
  manner.
\item For problems having their optimum close to one of the boundaries,
the proposed IP and existing exponential approaches perform better than
the rest of the approaches with DE.
\end{enumerate}
Despite the differences, the broadly similar performance of different constraint-handling
approaches with DE indicates
that DE is an efficient optimization algorithm whose performance
is less dependent on the choice of constraint-handling scheme
than that of the PSO algorithm.
\begin{table*}[ht]
\begin{footnotesize}
\begin{center}
\caption{Results on $F_{\rm elp}$ with DE for $10^{-10}$ termination criterion.}
\begin{tabular}{|lrrr|} \hline 
{{Strategy}} & {{Best}}& {{Median}}& {{Worst}} \\ \hline \hline
\multicolumn{4}{|c|}{$F_{\rm elp}$ in [0,10]: On the Boundary} \\ \hline
IP Spread & 25,600 & 26,850 & 27,650 \\ 
IP Confined & 22,400 & 23,550 & 24,200 \\ 
Exp. Spread & 38,350 & 39,800 & 41,500 \\ 
Exp. Confined & 19,200 & 20,700 & 21,900 \\ 
Periodic & 42,400 & 43,700 & 45,050 \\
Random & 40,650 & 43,050 & 44,250 \\ 
SetOnBoundary & {\bf 2,850} & {\bf 3,350} & {\bf 3,900} \\ 
Shrink & 4,050 & 4,900 & 5,850 \\ \hline \hline
\multicolumn{4}{|c|}{$F_{\rm elp}$ in [-10,10]: At the Center} \\ \hline 
IP Spread & 29,950 & 31,200 & 32,500 \\ 
IP Confined & {\bf 29,600} & {\bf 31,200} & {\bf 32,400} \\
Exp. Spread & 29,950 & 31,300 & 32,400 \\ 
Exp. 
Confined & 30,500& 31,400 & 32,250 \\\\% \\hline\nPeriodic & 29,650 & 31,300 & 32,400 \\\\\nRandom & 30,000 & 31,200 & 31,250 \\\\ \nSetOnBoundary & 29,850 & 31,200 & 32,700 \\\\\nShrink & 30,300 & 31,250 &32,750 \\\\ \\hline \\hline\n\\multicolumn{4}{|c|}{$F_{\\rm elp}$ in [-1,10]: Close to Boundary} \\\\ \\hline\nIP Spread & 28,550 & 29,600 & 30,550 \\\\\nIP Confined & 28,500 & 29,500 & 30,650 \\\\% \\hline\nExp. Spread & {\\bf 28,050} & {\\bf 28,900} & {\\bf 29,850} \\\\\nExp. Confined& 28,150 & 29,050 & 29,850 \\\\\nPeriodic & 29,850 & 30,850 & 32,100 \\\\% \\hline\nRandom & 28,900 & 30,200 & 31,000 \\\\\nSetOnBoundary & 28,650 & 29,600 & 30,500 \\\\ \nShrink & 28,800 & 29,900 & 31,200 \\\\ \\hline \\hline\n\\end{tabular}\n\\label{tab:DEellp}\n\\end{center}\n\\end{footnotesize}\n\\end{table*}\n\\begin{table*}[ht]\n\\begin{center}\n\\begin{footnotesize}\n\\caption{Results on $F_{\\rm sch}$ with DE for $10^{-10}$ termination criterion.}\n\\begin{tabular}{|lrrr|} \\hline \n{{Strategy}} & {{Best}}& {{Median}}& {{Worst}} \\\\ \\hline \\hline\n\\multicolumn{4}{|c|}{ {$F_{\\rm sch}$ in [0,10]: On the Boundary}} \\\\ \\hline\nIP Spread & 26,600 & 27,400 & 28,000 \\\\\nIP Confined & 22,450 & 23,350 & 24,300 \\\\ \nExp. Spread & 40,500 & 42,050 & 43,200 \\\\\nExp. Confined& 19,650 & 20,350 & 22,050 \\\\ \nPeriodic & 44,700 & 46,300 & 48,250 \\\\ \nRandom & 43,850 & 45,150 & 47,000 \\\\% \\hline\nSetOnBoundary & {\\bf 2,100 } &{\\bf 3,100 } & {\\bf 3,750 } \\\\\nShrink & 3,450 & 4,400 & 5,100 \\\\ \\hline \\hline\n\\multicolumn{4}{|c|}{ {$F_{\\rm sch}$ in [-10,10]: At the Center}} \\\\ \\hline\nIP Spread & {\\bf 258,750} & {\\bf 281,650} & {\\bf 296,300} \\\\\nIP Confined & 268,150 & 283,050 & 300,450 \\\\\nExp. Spread & 266,850 & 283,950 & 304,500 \\\\\nExp. 
Confined & 266,450 & 283,700 & 305,550 \\
Periodic & 269,700 & 284,100 & 310,100 \\
Random & 263,300 & 282,600 & 306,250 \\% \hline
SetOnBoundary & 267,750 & 284,550 & 298,850 \\
Shrink & 263,600 & 282,750 & 304,350 \\ \hline \hline
\multicolumn{4}{|c|}{ {$F_{\rm sch}$ in [-1,10]: Close to Boundary}} \\ \hline
IP Spread & {\bf 228,950} & {\bf 242,300} & {\bf 255,700} \\% \hline
IP Confined & 232,200 & 243,900 & 263,400 \\
Exp. Spread & 227,550 & 243,000 & 261,950 \\
Exp. Confined & 228,750 & 243,800 & 262,500 \\
Periodic & 231,950 & 247,150 & 260,700 \\ 
Random & 228,550 & 244,850 & 261,900 \\ 
SetOnBoundary & 237,100 & 255,750 & 266,400 \\ 
Shrink & 234,000 & 253,250 & 275,550 \\ \hline
\end{tabular}
\label{tab:DEsch}
\end{footnotesize}
\end{center}
\end{table*}
\begin{table*}[ht]
\begin{center}
\begin{footnotesize}
\caption{Results on $F_{\rm ack}$ with DE for $10^{-10}$ termination criterion.}
\begin{tabular}{|lrrr|} \hline
{{Strategy}} & {{Best}}& {{Median}}& {{Worst}} \\ \hline \hline
\multicolumn{4}{|c|}{ {$F_{\rm ack}$ in [0,10]: On the Boundary}} \\ \hline
IP Spread & 43,400 & 44,950 & 45,950 \\ 
IP Confined & 37,300 & 38,700 & 40,350 \\
Exp. Spread & 66,300 & 69,250 & 71,300 \\
Exp. Confined & 32,750 & 34,600 & 36,200 \\
Periodic & 72,500 & 74,250 & 75,900 \\
Random & 70,650 & 73,000 & 74,750 \\
SetOnBoundary & {\bf 2,550} & {\bf 3,250} & {\bf 3,950} \\ 
Shrink & 3,500 & 4,700 & 5,300 \\ \hline \hline
\multicolumn{4}{|c|}{ {$F_{\rm ack}$ in [-10,10]: At the Center}} \\ \hline
IP Spread & {\bf 50,650} & {\bf 52,050} & {\bf 53,450} \\
IP Confined & 51,050 & 52,200 & 53,800 \\ 
Exp. Spread & 51,200 & 52,150 & 53,400 \\% \hline
Exp. 
Confined & 51,100 & 52,300 & 53,850 \\ 
Periodic & 51,250 & 52,250 & 53,500 \\ 
Random & 50,950 & 52,200 & 53,450 \\
SetOnBoundary & 50,950 & 52,300 & 53,450 \\
Shrink & 50,450 & 52,300 & 53,550 \\ \hline \hline
\multicolumn{4}{|c|}{ {$F_{\rm ack}$ in [-1,10]: Close to Boundary}} \\ \hline
IP Spread & 49,100 & 50,650 & 51,650 \\ 
IP Confined & 48,650 & 50,400 & 52,100 \\% \hline
Exp. Spread & {\bf 48,300} & {\bf 49,900} & {\bf 51,750} \\
Exp. Confined & 48,900 & 50,000 & 51,250 \\ 
Periodic & 50,400 & 51,950 & 53,300 \\ 
Random & 50,250 & 51,200 & 52,150 \\
SetOnBoundary & 49,900 (33) & 51,100 & 53,150 \\ 
Shrink & 50,200 & 51,400 & 52,750 \\ \hline
\end{tabular}
\label{tab:DEack}
\end{footnotesize}
\end{center}
\end{table*}
\begin{table*}[ht]
\begin{center}
\begin{footnotesize}
\caption{Results on $F_{\rm ros}$ with DE for $10^{-10}$ termination criterion.}
\begin{tabular}{|lrrr|} \hline 
{{Strategy}} & {{Best}}& {{Median}}& {{Worst}} \\ \hline \hline
\multicolumn{4}{|c|}{ {$F_{\rm ros}$ in [1,10]: On the Boundary}} \\ \hline
IP Spread & 38,850 & 62,000 & 89,700 \\ 
IP Confined & 24,850 & 45,700 & 73,400 \\ 
Exp. Spread & 57,100 & 86,800 & 118,600 \\
Exp. Confined & 16,600 & 21,400 & 79,550 \\ 
Periodic & 69,550 & 93,500 & 181,150 \\
Random & 65,850 & 92,950 & 157,600 \\% \hline
SetOnBoundary & {\bf 2,950} & {\bf 4,700} & {\bf 30,450} \\ 
Shrink & 5,450 & 8,150 & 55,550 \\ \hline \hline
\multicolumn{4}{|c|}{ {$F_{\rm ros}$ in [-8,10]: At the Center}} \\ \hline
IP Spread & 133,350 (41) & 887,250 & 995,700 \\
IP Confined & 712,500 (44) & 854,800 & 991,400 \\
Exp. Spread & {390,700} (48) & {866,150} & {998,950} \\
Exp. 
Confined & 138,550 (40) & 883,500 & 994,350 \\ 
Periodic & 764,650 (39) & 874,700 & 999,650 \\% \hline
Random & {\bf 699,400} (49) & {\bf 885,450} & {\bf 999,600} \\ 
SetOnBoundary & 743,600 (38) & 865,450 & 995,500 \\
Shrink & 509,900 (40) & 873,450 & 998,450 \\ \hline \hline
\multicolumn{4}{|c|}{ {$F_{\rm ros}$ in [0,10]: Close to Boundary}} \\ \hline
IP Spread & 36,850 & 78,700 & 949,700 \\
IP Confined & 46,400 (46) & 95,900 & 891,450 \\ 
Exp. Spread & 49,550 (49) & 85,900 & 968,200 \\% \hline
Exp. Confined & 87,300 (43) & 829,200 & 973,350 \\ 
Periodic & {\bf 38,750} & {\bf 62,200} & {\bf 94,750} \\
Random & 41,200 & 61,300 & 461,500 \\
SetOnBoundary & \textit{8.23e+00} \textit{(DNC)} & \textit{1.62e+01} & \textit{1.89e+01} \\% \hline
Shrink & 252,650 (9) & 837,700 & 985,750 \\ \hline
\end{tabular}
\label{tab:DEros}
\end{footnotesize}
\end{center}
\end{table*}
\subsection{Results with Real-Parameter Genetic Algorithms (RGAs)}
\label{subsec:GAresults}
We have used two real-parameter GAs in our study:
\begin{enumerate}
\item {\em Standard-RGA\/} using the simulated binary crossover (SBX) \cite{debramb} operator
and the polynomial mutation operator
  \cite{debBookMO}. In this approach, variables are expressed as
  real numbers initialized within the allowable range of each
  variable. The SBX and polynomial mutation operators can create
infeasible solutions. A violated boundary, if any, is handled using one
of the approaches studied in this paper. Later we shall investigate a
rigid boundary implementation of these operators which ensures the
creation of feasible solutions in every recombination and mutation operation.

\item {\em Elitist-RGA} in which the two newly created offspring are compared against the two
parents, and the best two of these four 
solutions are retained as parents (thereby introducing elitism). 
Here, the 
offspring solutions are created using non-rigid
versions of the SBX and polynomial mutation operators. 
As before, we test eight different constraint-handling approaches and later
explore a rigid boundary implementation of the operators in the presence of 
elite preservation. 
\end{enumerate}

Parameters for the RGAs are chosen as follows: population size of 100, 
crossover probability $p_{c}=0.9$, mutation probability $p_{m}=0.05$, distribution index for crossover $n_{dist.,c}=2$, 
and distribution index for mutation $n_{dist.,m}=100$.
The results for the Standard-RGA are shown in Tables~\ref{tab:RGAellp} to \ref{tab:RGAros}
for four different test problems. Tables~\ref{tab:GA-elitist-elp}
to \ref{tab:GA-elitist-ros} show results using the Elitist-RGA.
The following key observations can be made:
\begin{enumerate}
\item For all four test problems, the Standard-RGA shows convergence
  \textit{only} when the optimum is on the boundary. 
 
\item The Elitist-RGA shows good convergence on $F_{elp}$ when the optimum is on the boundary, and
only some convergence is noted when the optimum is at the other locations.
For the other three problems, convergence is obtained only when the optimum is on the boundary. 

\item Overall, the performance of the Elitist-RGA is comparable to or slightly better than that of the Standard-RGA.
\end{enumerate}

The \textit{Did Not Converge} cases can be explained by the fact that
the SBX operator creates solutions around
the parents when the parents are close to each other. This property is a likely cause of premature
convergence as the population gets closer to the optimum. 
Furthermore, the results suggest that the elitism implemented in this study
(parent-child comparison) is not quite effective. 
 
Although the RGAs are able to locate the optima,
they are unable to fine-tune them due to undesired 
properties of the generation scheme. 
This emphasizes the fact that 
the generation scheme is primarily 
responsible for creating good solutions, and
the constraint-handling methods cannot act as a surrogate
for generating efficient solutions. 
Each step of the evolutionary 
search should be designed effectively in order to achieve overall success. 
On the other hand, one could argue that strategies such as increasing the mutation rate (in order to promote diversity and avoid
premature convergence) should be tried; however, the creation of good and meaningful solutions
by the generation scheme remains an important and desired fundamental feature.
 
As expected, when the optimum is on the boundary, \textit{SetOnBoundary} finds the optimum most efficiently, within a minimum number
of function evaluations. As in PSO, the performance of 
\textit{Exp. Confined} is better than that of 
\textit{Exp. Spread}.
\textit{Periodic} and \textit{Random} show comparable 
or slightly worse performance (these mechanisms have no preference
for creating solutions close to the boundary and actually promote the spread of 
the population). 

\begin{table*}[ht]
\begin{footnotesize}
\begin{minipage}[b]{1.0\linewidth}
\begin{center}
\caption{Results on $F_{\rm elp}$ with Standard-RGA for termination criterion of $10^{-10}$.}
\begin{tabular}{|lrrr|} \hline 
{{Strategy}} & {{Best}}& {{Median}}& {{Worst}} \\ \hline \hline
\multicolumn{4}{|c|}{ {$F_{\rm elp}$ in [0,10]}} \\
IP Spread & 9,200 & 10,500 & 12,900 \\% \hline
IP Confined & 7,900 & 9,400 & 10,900 \\% \hline
Exp. Spread & 103,100 (6) & 718,900 & 931,200 \\
Exp. 
Confined & 4,500 & 5,700 & 7,000 \\% \hline
Periodic & 15,200 (1) & 15,200 & 15,200 \\
Random & 68,300 (12) & 314,700 & 939,800 \\
SetOnBoundary & {\bf 1,800} & {\bf 2,400} & {\bf 2,800} \\
Shrink & 3,700 & 5,100 & 6,600 \\ \hline \hline
\multicolumn{4}{|c|}{ {$F_{\rm elp}$ in [-10,10]}} \\% \hline
\multicolumn{4}{|c|}{ {\textit{2.60e-02 (DNC)}}} \\ \hline \hline
\multicolumn{4}{|c|}{ {$F_{\rm elp}$ in [-1,10]}} \\ 
\multicolumn{4}{|c|}{ {\textit{1.02e-02 (DNC)}}} \\ \hline
\end{tabular}
\label{tab:RGAellp}
\end{center}
\end{minipage}
\end{footnotesize}
\end{table*}

\begin{table*}[ht]
\begin{footnotesize}
\begin{minipage}[b]{1.0\linewidth}
\begin{center}
\caption{Results on $F_{\rm sch}$ with Standard-RGA for termination criterion of $10^{-10}$.}
\begin{tabular}{|lrrr|} \hline
{{Strategy}} & {{Best}}& {{Median}}& {{Worst}} \\ \hline \hline
\multicolumn{4}{|c|}{ {$F_{\rm sch}$ in [0,10]}} \\% \hline
IP Spread & 6,800 & 9,800 & 11,800 \\% \hline
IP Confined & 6,400 & 8,200 & 10,300 \\
Exp. Spread & 21,200 (47) & 180,000 & 772,200 \\% \hline
Exp. Confined & 4,300 & 5,500 & 6,300 \\% \hline
Periodic & 14,800 (26) & 143,500 & 499,400 \\
Random & 8,700 (43) & 195,200 & 979,300 \\% \hline
SetOnBoundary & {\bf 1,800} & {\bf 2,300} & {\bf 2,900} \\% \hline
Shrink
& 3,600 & 4,600 & 5,500 \\\\ \\hline \\hline\n\\multicolumn{4}{|c|}{ {$F_{\\rm sch}$ in [-10,10]}} \\\\\n\\multicolumn{4}{|c|}{ {\t\\textit{1.20e-01 (DNC)}}} \\\\ \\hline \\hline\n\\multicolumn{4}{|c|}{ {$F_{\\rm sch}$ in [-1,10]}} \\\\% \\hline\n\\multicolumn{4}{|c|}{ {\\textit{8.54e-02 (DNC)}}} \\\\ \\hline\n\\end{tabular}\n\\label{tab:RGAsch}\n\\end{center}\n\\end{minipage}\n\\end{footnotesize}\n\\end{table*}\n\n\\begin{table*}[ht]\n\\begin{footnotesize}\n\\begin{minipage}[b]{1.0\\linewidth}\n\\begin{center}\n\\caption{Results on $F_{\\rm ack}$ with Standard-RGA for termination criterion of $10^{-10}$}\n\\begin{tabular}{|lrrr|} \\hline\n{{Strategy}} & {{Best}}& {{Median}}& {{Worst}} \\\\ \\hline \\hline\n\\multicolumn{4}{|c|}{ {$F_{\\rm ack}$ in [0,10]}} \\\\\nIP Spread & 12,100 & 22,600 & 43,400 \\\\\nIP Confined & 9,800 & 13,200 & 16,400 \\\\ \nExp. Spread & 58,100 (29) & 355,900 & 994,000 \\\\ \nExp. Confined& 6,300 & 9,100 & 11,900 \\\\ \nPeriodic & 19,600 (46) & 122,300 &870,200 \\\\ \nRandom &35,700 (38) & 229,200 & 989,500 \\\\ \nSetOnBoundary & {\\bf 1,800} & {\\bf 2,500} & {\\bf 3,100} \\\\% \\hline\nShrink &4,200 & 5,700 & 8,600 \\\\ \\hline \\hline\n\\multicolumn{4}{|c|}{ {$F_{\\rm ack}$ in [-10,10]}} \\\\\n\\multicolumn{4}{|c|}{ { \\textit{7.76e-02(DNC)}}} \\\\ \\hline \\hline\n \n\\multicolumn{4}{|c|}{ {$F_{\\rm ack}$ in [-1,10]}} \\\\% \\hline \\hline\n\\multicolumn{4}{|c|}{ {\\textit{4.00e-02 (DNC)}}} \\\\ \\hline\n \n\\end{tabular}\n\\label{tab:RGAack}\n\\end{center}\n\\end{minipage}\n\\end{footnotesize}\n\\end{table*}\n\\begin{table*}\n\\begin{footnotesize}\n\\begin{minipage}[b]{1.0\\linewidth}\n\\begin{center}\n\\caption{Results on $F_{\\rm ros}$ with Standard-RGA for termination criterion of $10^{-10}$}\n\\begin{tabular}{|lrrr|} \\hline\n{{Strategy}} & {{Best}}& {{Median}}& {{Worst}} \\\\ \\hline \\hline\n\\multicolumn{4}{|c|}{ {$F_{\\rm ros}$ in [1,10]: On the Boundary}} \\\\\nIP Spread & 12,400 (39)& 15,800& 20,000 \\\\\nIP Confined & 9,400 
(39) & 11,800 & 13,600 \\% \hline
Exp. Spread & \textit{9.73e+00} \textit{(DNC)} & \textit{1.83e+00} & \textit{2.43e+01} \\% \hline
Exp. Confined & 6,000 & 6,900 & 8,200 \\
Periodic & \textit{6.30e+01} \textit{(DNC)} & \textit{4.92e+02} & \textit{5.27e+04} \\
Random & \textit{3.97e+02} \textit{(DNC)} & \textit{9.28e+02} & \textit{1.50e+03} \\% \hline
SetOnBoundary & {\bf 1,900} & {\bf 2,700} & {\bf 3,400} \\% \hline
Shrink & 4,100 & 5,200 & 6,500 \\ \hline \hline
\multicolumn{4}{|c|}{ {$F_{\rm ros}$ in [-8,10]}} \\% \hline 
\multicolumn{4}{|c|}{ {\textit{3.64e+00 (DNC)}}} \\ \hline \hline
\multicolumn{4}{|c|}{ {$F_{\rm ros}$ in [0,10]: Close to Boundary}} \\
\multicolumn{4}{|c|}{ {\textit{1.04e+01 (DNC)}}} \\ \hline
\end{tabular}
\label{tab:RGAros}
\end{center}
\end{minipage}
\end{footnotesize}
\end{table*}

\clearpage
\begin{table*}[ht]
\begin{footnotesize}
\begin{center}
\caption{Results on $F_{\rm elp}$ with Elite-RGA for $10^{-10}$ termination criterion.}
\begin{tabular}{|lrrr|} \hline
{{Strategy}} & {{Best}}& {{Median}}& {{Worst}} \\ \hline \hline
\multicolumn{4}{|c|}{ {$F_{\rm elp}$ in [0,10]}} \\
IP Spread & 6,600 & 8,000 & 9,600 \\% \hline
IP Confined & 6,300 & 8,100 & 9,800 \\
Exp. Spread & 4,800 & 6,900 & 8,300 \\
Exp. Confined & 4,600 & 5,800 & 6,700 \\% \hline
Periodic & 6,500 & 8,800 & 11,500 \\
Random & 6,400 & 7,900 & 10,300 \\
SetOnBoundary & {\bf 2,200} & {\bf 2,600} & {\bf 3,500} \\
Shrink & 4,000 & 5,200 & 6,700 \\ \hline \hline
\multicolumn{4}{|c|}{ {$F_{\rm elp}$ in [-10,10]}} \\
IP Spread & 980,200 (1) & 980,200 & 980,200 \\% \hline
IP Confined & 479,000 (1) & 479,000 & 479,000 \\% \hline
Exp. Spread & \textit{2.06e-01} \textit{(DNC)} & \textit{4.53e-01} 
& \textit{4.86e-01} \\
Exp. 
Confined & 954,400 (1) & 954,400 & 954,400 \\% \hline
Periodic & \textit{1.55e-01} \textit{(DNC)} & \textit{2.48e-01} & \textit{2.36e-01} \\
Random & \textit{1.92e-01} \textit{(DNC)} & \textit{2.00e-01} & \textit{2.46e-01} \\
SetOnBoundary & \textit{2.11e-01} \textit{(DNC)} & \textit{2.95e-01} & \textit{1.94e-01} \\% \hline
Shrink & {\bf 530,900} (3) & {\bf 654,000} & {\bf 779,000} \\ \hline \hline
\multicolumn{4}{|c|}{ {$F_{\rm elp}$ in [-1,10]}} \\
IP Spread & 803,400 (5) & 886,100 & 947,600 \\
IP Confined & 643,300 (2) & 643,300 & 963,000 \\% \hline
Exp. Spread & 593,300 (3) & 628,900 & 863,500 \\
Exp. Confined & 655,400 (3) & 940,500 & 946,700 \\% \hline
Periodic & 653,800 (3) & 842,900 & 843,100 \\
Random & 498,500 (2) & 498,500 & 815,500 \\
SetOnBoundary & {\bf 593,800} (5) & {\bf 870,500} & {\bf 993,500} \\
Shrink & 781,000 (2) & 781,000 & 928,300 \\ \hline
\end{tabular}
\label{tab:GA-elitist-elp}
\end{center}
\end{footnotesize}
\end{table*}
\begin{table*}[ht]
\begin{footnotesize}
\begin{center}
\caption{Results on $F_{\rm sch}$ with Elite-RGA for termination criterion of $10^{-10}$.}
\begin{tabular}{|lrrr|} \hline \hline
{{Strategy}} & {{Best}}& {{Median}}& {{Worst}} \\ \hline \hline
\multicolumn{4}{|c|}{ {$F_{\rm sch}$ in [0,10]}} \\ \hline
IP Spread & 5,000 & 6,500 & 7,900 \\% \hline
IP Confined & 4,900 & 6,500 & 7,900 \\% \hline
Exp. Spread & 4,300 & 5,800 & 7,800 \\
Exp. 
Confined & 4,300 & 4,900 & 5,600 \\
Periodic & 5,400 & 7,000 & 11,300 \\
Random & 5,300 & 6,600 & 8,500 \\
SetOnBoundary & {\bf 1,600} & {\bf 2,200} & {\bf 2,600} \\% \hline
Shrink & 3,100 & 4,200 & 5,400 \\ \hline \hline
\multicolumn{4}{|c|}{ {$F_{\rm sch}$ in [-10,10]}} \\
\multicolumn{4}{|c|}{ {\textit{8.12e-05 (DNC)}}} \\ \hline \hline
\multicolumn{4}{|c|}{ {$F_{\rm sch}$ in [-1,10]}} \\% \hline
\multicolumn{4}{|c|}{ {\textit{1.61e-01 (DNC)}}} \\ \hline 
\end{tabular}
\label{tab:GA-elitist-sch}
\end{center}
\end{footnotesize}
\end{table*}
\begin{table*}[ht]
\begin{footnotesize}
\begin{center}
\caption{Results on $F_{\rm ack}$ with Elite-RGA for termination criterion of $10^{-10}$.}
\begin{tabular}{|lrrr|} \hline \hline
{{Strategy}} & {{Best}}& {{Median}}& {{Worst}} \\ \hline \hline
\multicolumn{4}{|c|}{ {$F_{\rm ack}$ in [0,10]}} \\ \hline
IP Spread & 6,300 & 8,700 & 46,500 \\% \hline
IP Confined & 6,800 & 9,200 & 32,000 \\
Exp. Spread & 5,600 & 6,800 & 8,700 \\
Exp. 
Confined & 5,200 & 7,800 & 9,900 \\
Periodic & 6,300 & 9,300 & 12,200 \\
Random & 6,200 & 8,300 & 53,700 \\
SetOnBoundary & {\bf 1,900} & {\bf 2,500} & {\bf 4,000} \\% \hline
Shrink & 3,900 & 5,100 & 7,700 \\ \hline \hline
\multicolumn{4}{|c|}{ {$F_{\rm ack}$ in [-10,10]}} \\
\multicolumn{4}{|c|}{ {\textit{1.03e-01 (DNC)}}} \\ \hline
\multicolumn{4}{|c|}{ {$F_{\rm ack}$ in [-1,10]}} \\
\multicolumn{4}{|c|}{ {\textit{1.15e+00 (DNC)}}} \\ \hline
\end{tabular}
\label{tab:GA-elitist-ack}
\end{center}
\end{footnotesize}
\end{table*}
 
\begin{table*}[ht]
\begin{footnotesize}
\begin{center}
\caption{Results on $F_{\rm ros}$ with Elite-RGA for termination criterion of $10^{-10}$.}
\begin{tabular}{|lrrr|} \hline \hline
{{Strategy}} & {{Best}}& {{Median}}& {{Worst}} \\ \hline \hline
\multicolumn{4}{|c|}{ {$F_{\rm ros}$ in [0,10]}} \\ \hline
IP Spread & 9,900 (13) & 12,500 & 14,000 \\ 
IP Confined & 10,100 (12) & 12,100 & 14,400 \\ 
Exp. Spread & 8,500 (10) & 11,000 & 15,400 \\ 
Exp. 
Confined & 6,600 (30) & 7,800 & 8,900 \\ 
Periodic & 9,500 (10) & 13,300 & 16,800 \\ 
Random & 14,000 (3) & 15,300 & 16,100 \\ 
SetOnBoundary & {\bf 2,300} (44) & {\bf 3,200} & {\bf 4,500} \\ 
Shrink & 4,500 (32) & 6,100 & 8,100 \\ \hline \hline
\multicolumn{4}{|c|}{ {$F_{\rm ros}$ in [-8,10]}} \\ 
\multicolumn{4}{|c|}{ {\textit{1.27e+00 (DNC)}}} \\ \hline \hline
\multicolumn{4}{|c|}{ {$F_{\rm ros}$ in [1,10]: On the Boundary}} \\
\multicolumn{4}{|c|}{ {\textit{1.49e+00 (DNC)}}} \\ \hline 
\end{tabular}
\label{tab:GA-elitist-ros}
\end{center}
\end{footnotesize}
\end{table*}

\subsubsection{RGAs with Rigid Boundary}\label{subsec:NonelitistGAresults}

\begin{table*}[htb]
\begin{footnotesize}
\caption{RGA with rigid boundary with termination criterion $10^{-10}$.}
{\footnotesize\begin{center}
\begin{tabular}{|lrrr|} \hline 
\multicolumn{4}{|c|}{ Optimum on the boundary } \\ \hline \hline
{{Strategy}} & {{Best}}& {{Median}}& {{Worst}} \\ \hline \hline
$F_{elp}$ & 8,100 & 8,500 & 8,800 \\
$F_{sch}$ & 7,800 & 8,100 & 8,300 \\
$F_{ack}$ & 9,500 & 10,100 & 10,800 \\
$F_{ros}$ & 10,100 (39) & 10,900 & 143,600 \\ \hline \hline 
\multicolumn{4}{|c|}{ Optimum in the center} \\
\multicolumn{4}{|c|}{ {\textit{3.88e-02 (DNC)}}} \\
\hline \hline 
\multicolumn{4}{|c|}{ Optimum close to the edge of the boundary } \\
\multicolumn{4}{|c|}{ {\textit{9.44e-03 (DNC)}}} \\
\hline \hline 
\end{tabular}
\label{tab:rga-standard-rigid-boundary}
\end{center}}
\end{footnotesize}
\end{table*}

We also tested the RGA (standard and its elite version) 
with rigid-bound versions of its operators. In this case, the
probability distributions of both
{SBX} and the polynomial mutation operator are changed so as to
always create a feasible solution. 
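The rigid-bound reshaping just described can be illustrated with the commonly used bounded form of polynomial mutation (following Deb's standard formulation). This is a sketch of the operator's well-known bounded variant, not necessarily the exact code used in our experiments; the distribution index default mirrors the $n_{dist.,m}=100$ setting used above:

```python
import random

def bounded_polynomial_mutation(x, xl, xu, eta_m=100.0):
    """Bounded polynomial mutation: the perturbation distribution is reshaped
    using the normalized distances to the two bounds, so the mutated value
    always lands inside [xl, xu] and no repair step is needed."""
    u = random.random()
    d1 = (x - xl) / (xu - xl)   # normalized distance to the lower bound
    d2 = (xu - x) / (xu - xl)   # normalized distance to the upper bound
    mpow = 1.0 / (eta_m + 1.0)
    if u < 0.5:
        dq = (2.0 * u + (1.0 - 2.0 * u) * (1.0 - d1) ** (eta_m + 1.0)) ** mpow - 1.0
    else:
        dq = 1.0 - (2.0 * (1.0 - u) + 2.0 * (u - 0.5) * (1.0 - d2) ** (eta_m + 1.0)) ** mpow
    return x + dq * (xu - xl)

random.seed(1)
y = bounded_polynomial_mutation(9.9, 0.0, 10.0)
print(0.0 <= y <= 10.0)   # True: the child is feasible by construction
```

Since $dq \in [-d_1, d_2]$ by construction, the offspring can never leave the variable range, which is exactly the property the rigid-boundary RGA variants rely on.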
\nIt is found that the Standard-RGA with rigid bounds shows \nconvergence only when optimum is on the boundary (Table~\\ref{tab:rga-standard-rigid-boundary}). \nThe performance of Elite-RGA with rigid bounds is slightly better \n(Table~\\ref{tab:rga-elite-rigid-boundary}).\nOverall, SBX operating within the rigid bounds is found to perform slightly better\ncompared to the RGAs employing boundary-handling mechanisms. \nHowever, as mentioned earlier, in the scenarios where the generation scheme cannot \nguarantee creation of feasible only solutions there is a necessary\nneed for constraint-handling strategies. \n\n\\begin{table*}[htb]\n\\begin{footnotesize}\n\\begin{minipage}[b]{1.0\\linewidth}\n\\caption{Elitist-RGA with rigid boundary with termination criterion $10^{-10}$.}\n\\begin{center}\n\\begin{tabular}{|lrrr|} \\hline\n{{Strategy}} & {{Best}}& {{Median}}& {{Worst}} \\\\ \\hline \\hline\n \\multicolumn{4}{|c|}{ Optimum on the boundary} \\\\ \n$F_{elp}$ & 7,300 & 7,900 \t& 8,400 \t\\\\ \n$F_{sch}$ & 6,500 & 6,900 & 7,500 \t \\\\ \n$F_{ack}$ & 9,400 & 10,400 & 12,200 \t\\\\ \n$F_{ros}$ & 11000 (10) & 12700 & 16400 \\\\ \\hline \\hline\n\\multicolumn{4}{|c|}{ Optimum in the center } \\\\ \n\\multicolumn{4}{|c|}{ {\\textit{1.24e-01(DNC)}}} \\\\\n\\hline \\hline \n\\multicolumn{4}{|c|}{ Optimum close to the boundary edge} \\\\ \n$F_{elp}$ & 579,800 (3) &885,900 & 908,600 \\\\ \n$F_{sch}$ & \\textit{2.73E-00} \\textit{DNC} & \\textit{6.18E-00} & \\textit{1.34E-00} \\\\ \n$F_{ack}$ & \\textit{1.75E-01} \\textit{DNC} & \\textit{8.38E-01} & \\textit{2.93E-00} \\\\ \n$F_{ros}$ & \\textit{3.29E-00} \\textit{DNC} & \\textit{4.91E+00} & \\textit{5.44E+00} \\\\ \\hline\n\\end{tabular}\n\\label{tab:rga-elite-rigid-boundary}\n\\end{center}\n\\end{minipage}\n\\end{footnotesize}\n\\end{table*}\n\\section{Higher-Dimensional Problems}\\label{sec:scale-up}\nAs the dimensionality of the search space increases it becomes\ndifficult for a search algorithm to locate the 
optimum.
Constraint-handling methods play an even more critical role
in such cases. 
So far in this study, DE has been found to be the best algorithm. Next, we consider all
four unimodal test problems with an increasing problem size: $n=20$, $50$, $100$, $200$, $300$,
and $500$. For all problems, the variable bounds were chosen such that the optimum occurred
near the center of the search space. No population scaling is used for DE. The DE parameters are chosen as $F=0.8$ and $CR=0.9$.
For $\alpha = 1.2$, we were able to achieve a high degree of convergence, and the
results are shown in Figure~\ref{fig:DE-scale-up}.
As seen from the figure, although it is expected that the required
number of function evaluations would increase with the 
number of variables, the increase is sub-quadratic.
Each case is run $20$ times and the termination criterion
is set as $10^{-10}$. All 20 runs are found to be successful in each case,
demonstrating the robustness of the method in terms of finding the
optimum with a high precision.
Particularly for problems with a large number of variables, complex search spaces, and 
highly nonlinear constraints, such a methodology should be useful 
in real-world applications. 
\begin{figure}[hbt]
\begin{center}
\includegraphics[scale=1.0]{scale.eps} 
\end{center}
\caption{Scale-up study on four test problems for the DE algorithm with
  the proposed IP-S approach shows sub-quadratic performance in all
  problems. The slope of a fitted polynomial line is marked within
  brackets for each problem. Linear and quadratic limit lines are shown with dashed lines.}
\label{fig:DE-scale-up}
\end{figure}

It is worth mentioning that the authors independently tested other scenarios for 
large-scale problems with the corresponding optimum on the boundary; in order to achieve convergence
with the IP methods, we had to significantly reduce the 
values of $\alpha$. 
Without lowering $\alpha$, PSO in particular did not show any convergence. 
As expected, in larger dimensions the probability of sampling a point close to the boundary
decreases, and hence a steeper distribution is needed. This highlights the usefulness
of the parameter $\alpha$ in modifying the behavior of the IP methods so as to yield the desired performance. 

\section{General Purpose Constraint-Handling} 
\label{sec:Constraint-Programming}

So far we have carried out simulations on problems where the constraints have 
manifested as variable bounds. The IP methods proposed in this paper can be easily
extended and employed for solving nonlinear constrained optimization problems (inclusive of 
variable bounds).\footnote{By \textit{General Purpose Constraint-Handling} we imply
the tackling of all variable bounds, inequality constraints and equality constraints.}

As an example, let us consider a generic inequality constraint function: $g_j(\vec{x})
\geq 0$ -- the $j$-th constraint in a set of $J$ inequality
constraints. In an optimization algorithm, every created (offspring) solution
$\vec{x}^c$ at an iteration must be
checked for its feasibility. If
$\vec{x}^c$ satisfies all $J$ inequality constraints, the solution is
feasible and the algorithm can proceed with the created solution. 
But if $\vec{x}^c$ does not
satisfy one or more of the $J$ constraints, the solution is
infeasible and should be repaired before proceeding further. 

Let us
illustrate the procedure using the inverse parabolic (IP) approach described in
Section~\ref{sec:IP}, though the other constraint-handling
methods discussed before may also be used. The IP approaches require us to locate the
intersection points $\vec{v}$ and $\vec{u}$: two bounds in the
direction of ($\vec{x}^p-\vec{x}^c$), where $\vec{x}^p$ is one of the
parent solutions that created the offspring solution (see Figure~\ref{fig:constr}). 
The critical intersection\npoint can be found by\ncomputing the roots of each constraint\n$g_j(\vec{x})$ along the direction vector and then choosing the\nsmallest root. \n\begin{figure}[hbt]\n\begin{center}\n\includegraphics[scale=1.0]{constr.eps} \n\caption{Extension of constraint-handling approaches to constrained optimization problems.}\n\label{fig:constr}\n\end{center}\n\end{figure}\nWe define a parameter\n$\alpha$ as the extent of a point from $\vec{x}^c$, as\nfollows:\n\begin{equation}\n\vec{x}(\alpha) = \vec{x}^c + \alpha\n\frac{\vec{x}^p-\vec{x}^c}{\|\vec{x}^p-\vec{x}^c\|}.\n\label{eq:mapping}\n\end{equation}\n\nSubstituting the above expression for $\vec{x}(\alpha)$\footnote{Note that $\alpha$\nhere for calculating points should not be confused with the parameter $\alpha$ introduced in\nthe proposed constraint-handling methods.} in the $j$-th constraint function, we have the following\nroot-finding problem for the $j$-th constraint:\n\begin{equation}\ng_j(\vec{x}(\alpha)) = 0.\n\end{equation}\nLet us say the roots of the above equation are $\alpha_k^j$ for\n$k=1,2,\ldots,K_j$. The above procedure can now be repeated for all $J$\ninequality constraints and the corresponding roots can be found one at a time. 
\nSince the extent of $\alpha$ to reach $\vec{x}^p$ from $\vec{x}^c$\nis given as \n\[\alpha^p = \|\vec{x}^p-\vec{x}^c\|,\]\nwe are now ready to compute the two closest bounds (lower and upper bounds) on\n$\alpha$ for our consideration, as\nfollows:\n\begin{eqnarray}\n\alpha^v &=& \max \{\alpha_k^j | 0 \leq \alpha_k^j \leq \alpha^p,\n \forall\nk, \forall j\}, \\ \n\alpha^u &=& \min \{\alpha_k^j | \alpha_k^j \geq \alpha^p, \forall\nk, \forall j\}.\n\end{eqnarray}\nThe IP-S and IP-C approaches presented in Section~\ref{sec:IP} can now be used to map the violated variable\nvalue $x_i^c$ into the feasible region using $d=\alpha^v$ (the lower bound),\n$d_u=\alpha^u$ (the upper bound) and $d^p=\alpha^p$ (location of the parent). \nIt is clear that the only difficult aspect of this method is to find\nmultiple intersection points in the presence of nonlinear constraints. \n\nFor the sake of completeness, we show here that the two bounds for each variable, $x_i\geq x_i^{(L)}$ and\n$x_i\leq x_i^{(U)}$, used in previous sections can also be treated\nuniformly using the above-described approach. The two bounds can be written together as follows:\n\begin{equation}\n\left(x_i-x_i^{(L)}\right)\left(x_i^{(U)}-x_i\right) \geq 0.\n\end{equation}\nNote that a simultaneous non-positive value of both bracketed terms\nis not possible, thus the only way to satisfy the above inequality is to\nmake each bracketed term non-negative. The above inequality can therefore be\ntreated as a single combined quadratic constraint function, instead of two\nindependent variable bounds, with both of its roots obtained from the resulting quadratic root-finding equation. \n\nFinally, the above procedure can also be extended to handle equality constraints ($h_k(\vec{x})=0$) with a\nrelaxation as follows: $-\epsilon_k \leq h_k(\vec{x}) \leq\n\epsilon_k$. 
Again, they can be combined together as follows:\n\begin{equation}\n\epsilon_k^2-(h_k(\vec{x}))^2 \geq 0.\n\end{equation}\nAlternatively, the above can also be written as $\epsilon_k -\n|h_k(\vec{x})| \geq 0$, which may be useful for non-gradient based\noptimization methods, such as evolutionary algorithms. We now show the\nworking of the above constraint-handling procedure on a number of\nconstrained optimization problems. \n \n\subsection{Illustrations on Nonlinear Constrained Optimization}\n\nFirst, we consider the three unconstrained problems used in previous\nsections as $f(\vec{x})$, but now impose a\nquadratic inequality constraint that makes the solutions fall within a radius of one unit of\na chosen point $\vec{o}$:\n\begin{equation}\n\begin{array}{rl}\n\mbox{Minimize} & f(\vec{x}), \\\n\mbox{subject to} & \sum_{i=1}^{n} (x_i-o_i)^2 \leq 1. \n\end{array}\n\label{eq:convex-opti-problem}\n\end{equation}\nThere are no explicit variable bounds in the above problem. \nBy choosing different locations of the center\nof the hyper-sphere ($\vec{o}$), we can have different scenarios of\nthe resulting constrained optimum. \nIf the minimum of the objective function (without constraints)\nlies at the origin, then with $o_i=0$ the unconstrained minimum is also the solution to the\nconstrained problem, and this case is similar to the ``Optimum at the\nCenter'' case (but in the context\nof constrained optimization now).\nThe optimal solution is at $x_i=0$ with $f^*=0$. 
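For this hyper-sphere constraint the intersection points needed by the IP approaches have a closed form, since substituting \eqref{eq:mapping} into the constraint gives a quadratic in $\alpha$ (a line--sphere intersection). A minimal Python sketch of this computation (ours, for illustration; the function name is not from the paper):

```python
import math

def sphere_intersection_alphas(xc, xp, center, radius=1.0):
    """Roots of ||x(alpha) - center||^2 = radius^2 along the unit direction
    from the child xc toward the parent xp (line-sphere intersection)."""
    norm = math.sqrt(sum((p - c) ** 2 for p, c in zip(xp, xc)))
    u = [(p - c) / norm for p, c in zip(xp, xc)]
    d = [c - o for c, o in zip(xc, center)]
    b = 2.0 * sum(ui * di for ui, di in zip(u, d))    # |u| = 1, so a = 1
    cq = sum(di * di for di in d) - radius ** 2
    disc = b * b - 4.0 * cq
    if disc < 0.0:
        return []                                     # line misses the sphere
    s = math.sqrt(disc)
    return sorted([(-b - s) / 2.0, (-b + s) / 2.0])
```

An infeasible child at $(2,0)$ with its parent toward the origin gives roots $\alpha=1$ and $\alpha=3$ for the unit sphere centered at the origin; these provide the candidate roots from which $\alpha^v$ and $\alpha^u$ are then selected.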
DE with IP-S and previous parameter \nsettings is applied to this new constrained problem, and the results from 50 different runs for this case are shown in\nTable~\ref{tab:convex-constraints-de-1}.\n\begin{table*}[ht]\n\begin{footnotesize}\n\begin{center}\n\caption{Results for test functions with DE for $o_i=0$ with\n $S=10^{-10}$.}\n\label{tab:convex-constraints-de-1} \n\begin{tabular}{|lrrr|} \hline\n{{Function}} & {{Best}} & {{Median}} & {{Worst}} \\\hline \hline\n$F_{\rm elp}$ & 22,800 & 23,750 & 24,950 \\ \n$F_{\rm sch}$ & 183,750 & 206,000 & 229,150 \\\n$F_{\rm ack}$ & 42,800 & 44,250 & 45,500 \\ \hline\n\end{tabular}\n\end{center}\n\end{footnotesize}\n\end{table*}\n\nAs the next case, we consider $o_i = 2$. The constrained\nminimum is now different from that of the unconstrained problem, as\nthe original unconstrained minimum is no longer feasible. This case is equivalent to ``Optima on \nthe Constraint Boundary''. Since the optimum value is not zero as before, instead of choosing\na termination criterion based on the $S$ value, we allocate a maximum of\none million function evaluations for a run\nand record the obtained optimized solution. The {best fitness} values of \n$f(\vec{x})$ for $F_{\rm elp}$, $F_{\rm sch}$ and $F_{\rm ack}$ are shown in\nTable~\ref{tab:convex-constraints-de-2}. For each function, we verified\nthat the obtained optimized solution satisfies the KKT optimality\nconditions \cite{rekl,debOptiBOOK}, suggesting that a true optimum solution has\nbeen found by this procedure. 
\n\begin{table*}[ht]\n\begin{footnotesize}\n\begin{center}\n\caption{Best fitness results for test functions with DE for $o_i=2$.}\n\begin{tabular}{|ccc|} \hline\n$F_{\rm elp}$ & $F_{\rm sch}$ & $F_{\rm ack}$ \\ \hline \hline\n 640.93 $\pm$ 0.00 & 8871.06 $\pm$ 0.39 & 6.56 $\pm$ 0.00 \\ \hline\n\end{tabular}\n\label{tab:convex-constraints-de-2} \n\end{center}\n\end{footnotesize}\n\end{table*}\n \nNext, we consider two additional nonlinear constrained optimization\nproblems (TP5 and TP8) from \cite{debpenalty} and a well-studied structural design and mechanics problem (`Weld') taken from \cite{ragsdell1976optimal}.\nThe details on the mechanics of the welded structure and the beam deformation can be found in \cite{shigley1963engineering,timoshenko1962theory}.\nThese problems have \nmultiple nonlinear inequality constraints and our goal is to demonstrate the performance of our proposed constraint-handling\nmethods. We used DE with the\nIP-S method with the following \nparameter settings: $NP=50$, $F=0.7$, $CR=0.5$, and $\alpha=1.2$ for all three problems.\nA maximum of 200,000 function evaluations was allowed and a termination criterion of $10^{-3}$\nfrom the known optimum was chosen. \nThe problem definitions of TP5, TP8 and `Weld' are as follows:\\\n\n\noindent \textbf{TP5:}\n\begin{equation}\n\label{eq:TP5}\n\begin{array}{rl}\n\mbox{Minimize} & f(\vec{x}) = (x_1-10)^2+5(x_2-12)^2+x_3^4+3(x_4-11)^2 \\\n& \qquad + 10x_5^6+7x_6^2+x^4_7-4x_6x_7-10x_6-8x_7, \\\n\mbox{subject to} & g_1(\vec{x}) \equiv 2x_1^2+3x_2^4+x_3+4x_4^2+5x_5 \leq 127, \\\n& g_2(\vec{x}) \equiv 7x_1+3x_2+10x_3^2+x_4-x_5 \leq 282, \\\n& g_3(\vec{x}) \equiv 23x_1+x_2^2+6x_6^2-8x_7 \leq 196, \\\n& g_4(\vec{x}) \equiv 4x_1^2+x_2^2-3x_1x_2+2x_3^2 +5x_6-11x_7 \leq 0,\\\n& -10 \leq x_i \leq 10, \quad i = 1,\ldots,7. 
\n\end{array} \n\end{equation} \n\n\noindent\textbf{TP8:}\n\begin{equation}\n\label{eq:TP8}\n\begin{array}{rl}\n\mbox{Minimize} & f(\vec{x}) = x_1^2+x_2^2+x_1x_2-14x_1-16x_2+2(x_9-10)^2\\\n & \qquad + 2(x_6-1)^2 + 5x_7^2+7(x_8-11)^2+45+(x_{10}-7)^2 \\\n & \qquad + (x_3-10)^2 + 4(x_4-5)^2 + (x_5-3)^2\\\n\mbox{subject to} & g_1(\vec{x}) \equiv 4x_1+5x_2-3x_7+9x_8 \leq 105,\\\n& g_2(\vec{x}) \equiv 10x_1-8x_2-17x_7+2x_8 \leq 0,\\\n& g_3(\vec{x}) \equiv -8x_1+2x_2+5x_9-2x_{10} \leq 12,\\\n& g_4(\vec{x}) \equiv 3(x_1-2)^2 + 4(x_2-3)^2+2x_3^2-7x_4 \leq 120,\\\n& g_5(\vec{x}) \equiv 5x_1^2+8x_2+(x_3-6)^2-2x_4 \leq 40, \\\n& g_6(\vec{x}) \equiv x_1^2+2(x_2-2)^2-2x_1x_2+14x_5-6x_6 \leq 0,\\\n& g_7(\vec{x}) \equiv 0.5(x_1-8)^2+2(x_2-4)^2+3x_5^2-x_6 \leq 30,\\\n& g_8(\vec{x}) \equiv -3x_1+6x_2+12(x_9-8)^2-7x_{10} \leq 0,\\\n& -10 \leq x_i\leq 10, \quad i = 1,\ldots,10. \n\end{array} \n\end{equation} \n\n\noindent\textbf{`Weld':}\n\begin{equation}\n\label{eq:weld-problem}\n\begin{array}{rl}\n\mbox{Minimize} & f(\vec{x}) = 1.10471h^2l+0.04811tb(14.0+l), \\\n\mbox{subject to} & g_1(\vec{x}) \equiv \tau(\vec{x}) \leq 13,600, \\\n& g_2(\vec{x}) \equiv \sigma(\vec{x}) \leq 30,000, \\\n& g_3(\vec{x}) \equiv h-b \leq 0, \\\n& g_4(\vec{x}) \equiv P_c(\vec{x}) \geq 6,000,\\\n& g_5(\vec{x}) \equiv \delta(\vec{x}) \leq 0.25, \\\n& 0.125\leq (h,b) \leq 5, \mbox{ and } 0.1 \leq (l,t) \leq 10, \n\end{array}\n\end{equation}\nwhere\n\begin{eqnarray*}\n\tau(\vec{x}) &=& \sqrt{ (\tau')^2 + (\tau'')^2 + (l\tau'\tau'')\/ \sqrt{0.25(l^2+(h+t)^2)}}, \\\n\tau' &=& \frac{6,000}{\sqrt{2}hl}, \\\n\tau'' &=& \frac{6,000(14+0.5l)\sqrt{0.25(l^2+(h+t)^2)}}{2[0.707hl(l^2\/12+0.25(h+t)^2)] }, \\\n\sigma(\vec{x}) &=& \frac{504,000}{t^2b}, \\\n\delta(\vec{x}) &=& \frac{2.1952}{t^3b}, \\\nP_c(\vec{x}) &=& 
64,746.022(1-0.0282346t)tb^3. \n\end{eqnarray*} \n \nThe feasible region in the above problems is quite complex, unlike the hypercubes in the case\nof problems with variable bounds, and since our methods require a feasible initial population,\nwe first identified a single feasible solution (the seed solution) and then generated several other \nrandom solutions. Amongst the randomly generated solutions, the infeasible ones\nwere brought into the feasible region using the IP-S method with the feasible seed solution as the reference.\nThe optimum results for TP5, TP8 and `Weld', thus found, are shown in\nTable~\ref{tab:Non-linear-opti}. \n\n\begin{table*}[htb]\n\caption{Results from TP5, TP8 and `Weld' problem. For each problem, the\n obtained solution also satisfies the KKT conditions.}\n\label{tab:Non-linear-opti}\n\begin{center}\n\begin{footnotesize}\n\begin{tabular}{|l|l|l|l|} \hline\n & \multirow{2}{1.5cm}{Optimum Value ($f^*$)}& {Corresponding Solution Vector ($\vec{x}^{\ast}$)}\n & \multirow{2}{1.2cm}{Active Constraints} \\ \n & & & \\ \hline\n\textbf{TP5} & $680.63$\n&$(2.330,1.953,-0.473,4.362,-0.628,1.035,1.591)^T$ & $g_1$, $g_4$ \\ \hline\n\textbf{TP8} & $24.33$ &\n$(2.160,2.393,8.777,5.088,0.999,1.437,1.298,9.810,8.209,8.277)^T$&\n$g_1$ to $g_6$\\ \hline \n\textbf{`Weld'} &\n$2.38$& $(0.244,6.219,8.291,0.244)^T$ & $g_1$ to $g_4$ \\ \hline \n\end{tabular}\n\end{footnotesize}\n\end{center}\n\end{table*}\n\nTo verify the optimality of our solutions, we employed MATLAB's sequential\nquadratic programming (SQP) toolbox with a\ntermination criterion of $10^{-6}$ to solve each of the three problems. 
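Independently of the SQP check, the tabulated TP5 solution can be verified by direct substitution into \eqref{eq:TP5}. The short Python check below is our own; the tolerances allow for the three-decimal rounding of the reported solution vector:

```python
def tp5_f(x):
    """Objective of TP5."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return ((x1 - 10) ** 2 + 5 * (x2 - 12) ** 2 + x3 ** 4 + 3 * (x4 - 11) ** 2
            + 10 * x5 ** 6 + 7 * x6 ** 2 + x7 ** 4
            - 4 * x6 * x7 - 10 * x6 - 8 * x7)

def tp5_slacks(x):
    """Constraint slacks of TP5, written so that slack >= 0 means feasible."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return [
        127 - (2 * x1 ** 2 + 3 * x2 ** 4 + x3 + 4 * x4 ** 2 + 5 * x5),   # g_1
        282 - (7 * x1 + 3 * x2 + 10 * x3 ** 2 + x4 - x5),                # g_2
        196 - (23 * x1 + x2 ** 2 + 6 * x6 ** 2 - 8 * x7),                # g_3
        -(4 * x1 ** 2 + x2 ** 2 - 3 * x1 * x2 + 2 * x3 ** 2
          + 5 * x6 - 11 * x7),                                           # g_4
    ]

reported = [2.330, 1.953, -0.473, 4.362, -0.628, 1.035, 1.591]
# f(reported) is approximately 680.63; the g_1 and g_4 slacks are
# close to zero, consistent with the active constraints in the table.
```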
The solutions obtained from the SQP\nmethod match our reported solutions, indicating that our proposed constraint-handling procedure is successful\nin solving the above problems.\n\nFinally, the convergence plots of a sample run on these test\nproblems are shown in Figures~\ref{fig:TP5}, \ref{fig:TP8}, and \ref{fig:WELD}, respectively.\nFrom the graphs it is clear that our proposed method is able to\nconverge to the final solution quite effectively. \n\n\n\begin{figure}[hbt]\n\begin{center}\n\includegraphics[scale=.35,angle=-90]{TP5-sep-2014.eps} \n\end{center}\n\caption{Convergence plot for TP5.}\n\label{fig:TP5}\n\end{figure}\n\n\begin{figure}[hbt]\n\begin{center}\n\includegraphics[scale=.35,angle=-90]{TP8-sep-2014.eps} \n\end{center}\n\caption{Convergence plot for TP8.}\n\label{fig:TP8}\n\end{figure}\n\n\n\n\begin{figure}[hbt]\n\begin{center}\n\includegraphics[scale=.35,angle=-90]{Weld-sep-2014.eps} \n\end{center}\n\caption{Convergence plot for the `Weld' problem.}\n\label{fig:WELD}\n\end{figure}\n\n\section{Conclusions}\n\label{sec:Conclusion}\nThe existing constraint-handling strategies that repair solutions by bringing\nthem back into the search space exhibit several\ninadequacies. This paper has addressed the task of studying existing, and proposing two new, explicit feasibility-preserving\nconstraint-handling methods for real-parameter optimization using evolutionary algorithms. 
\nOur proposed single-parameter inverse parabolic methods, with stochastic and adaptive components, \novercome limitations of existing methods and perform effectively\non several test problems.\nEmpirical comparisons on four different\nbenchmark test problems ($F_{elp}$, $F_{sch}$, $F_{ack}$, and $F_{ros}$) \nwith three different settings of the optimum relative to the\nvariable boundaries revealed key insights into the performance of PSO, GAs and DE\nin conjunction with different constraint-handling strategies.\nIt was noted that in addition to the critical role of the constraint-handling strategy (which\nis primarily responsible for bringing the infeasible solutions back into the search space),\nthe generation scheme (child-creation step) in an evolutionary algorithm must create good solutions \nin order for the search to proceed effectively.\nThe exponential and inverse parabolic methods were the most robust and never\nfailed to solve any problem. The other constraint-handling strategies\nwere either too deterministic and\/or operated without utilizing sufficient useful information from \nthe solutions. The probabilistic methods\nwere able to bring the solutions back into the\nfeasible part of the search space and showed a consistent\nperformance. In particular, scale-up studies on four problems, up to 500 variables, demonstrated\nsub-quadratic empirical run-time complexity with the proposed IP-S method.\n\nFinally, the success of the proposed IP-S scheme is demonstrated \non generalized nonlinear constrained optimization problems. \nFor such problems, the IP-S method requires finding the lower and upper bounds\nfor the feasible region along the direction of search by solving a series\nof root-finding problems. 
\nTo the best of our knowledge, the\nproposed constraint-handling approach is the first explicit feasibility-preserving method that has demonstrated\nsuccess on optimization problems with variable bounds and nonlinear constraints.\nThe approach is arguably general, and can be \napplied to complex real-parameter search spaces (non-convex, discontinuous, etc.), \nin addition to problems dealing with multiple conflicting objectives, multi-modalities, dynamic\nobjective functions, and other complexities.\nWe expect that the proposed constraint-handling scheme will find its utility in \nsolving complex real-world constrained optimization problems using evolutionary algorithms. \nAn interesting direction for the future would be to carry out a one-to-one comparison between\nevolutionary methods employing constraint-handling strategies and the classical\nconstrained optimization algorithms. Such studies would help practitioners in optimization \nto compare and evaluate different algorithms in a unified framework.\nIn particular, further development of a robust, parameter-less\nand explicit feasibility-preserving constraint-handling procedure can be attempted.\nOther probability distribution functions, and the utilization of information \nfrom other solutions of the population, can also be attempted. \n\section*{Acknowledgments}\nNikhil Padhye acknowledges past discussions with Dr. C.K. Mohan on swarm intelligence. \nProfessor Kalyanmoy Deb acknowledges \nthe support of the J. C. Bose National\nfellowship generously provided by the Department of Science and\nTechnology (DST), Government of India. \n{\footnotesize\n\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Introduction}\n\nBootstrap percolation on a hypergraph with infection threshold $r\geq 1$ is a deterministic infection process which evolves in rounds. In each round every vertex has exactly one of two possible states: it is either infected or uninfected. 
We denote the set of initially infected vertices by $\\mathcal{A}_r(0)$. We say that a vertex $u$ is a neighbour of $v$ if there exists an edge containing both $u$ and $v$. In each round of the process every uninfected vertex $v$ becomes infected if it has at least $r$ infected neighbours, otherwise it remains uninfected. Once a vertex has become infected it remains infected forever. The process stops once no more vertices become infected and we denote this time step by $T$. The final infected set is denoted by $\\mathcal{A}_r(T)$.\n\nBootstrap percolation was introduced by Chalupa, Leath, and Reich \\cite{bootstrapintr} in the context of\nmagnetic disordered systems. Since then bootstrap percolation processes (and extensions) have been used to describe several complex phenomena: from neuronal activity \\cite{MR2728841,inhbootstrap} to the dynamics of the Ising model at zero temperature \\cite{Fontes02stretchedexponential}.\n\nIn the context of social networks, bootstrap percolation provides a prototype model for the spread of ideas. In this setting infected vertices represent individuals who have already adopted a new belief and a person adopts a new belief if at least $r$ of his acquaintances have already adopted it.\n\nOn the $d$-dimensional grid $[n]^d$ bootstrap percolation has been studied by\nBalogh, Bollob{\\'a}s, Duminil-Copin, and Morris \\cite{MR2888224}, when the initial infected set contains every vertex independently with probability $p$. \nFor the size of the final infection set they showed the existence of a sharp threshold. 
More precisely, they established the threshold probability $p_\mathrm{c}$, such that if $p\leq (1-\varepsilon )p_\mathrm{c}$, then the probability that every vertex in $[n]^d$ becomes infected tends to 0, as $n\rightarrow\infty$, while if $p\geq (1+\varepsilon )p_\mathrm{c}$, then the probability that every vertex in $[n]^d$ becomes infected tends to one, as $n\rightarrow\infty$.\n\nBootstrap percolation has also been studied for several random graph models. For instance, Amini and Fountoulakis \cite{bootpower} considered the Chung-Lu model \cite{MR1955514}, where the vertex weights follow a power-law degree distribution and the probability that an edge $\{u,v\}$ is present is proportional to the product of the weights of $u$ and $v$. Taking into account that in this model a linear fraction of the vertices have degree less than $r$, and thus that at most a linear fraction of the vertices can become infected, the authors proved that the size of the final infected set $\mathcal{A}_r(T)$ exhibits a phase transition. \n\n\nJanson, \L uczak, Turova, and Vallier \cite{MR3025687} analysed bootstrap percolation on the binomial random graph $G(n,p)$, where every edge appears independently with probability $p$. For $r\geq 1$ and $n^{-1}\ll p \ll n^{-1\/r}$ they showed that there is a threshold such that if the initial number of infected vertices is below the threshold, then the process infects only a few additional vertices, while if the initial number of infected vertices exceeds the threshold, then almost every vertex becomes infected.\n\nIn this paper we investigate the binomial random hypergraph $H_k(n,p)$, where every edge ($k$-tuple of vertices) is present independently with probability $p$. 
We choose the initial infected set uniformly at random and consider bootstrap percolation with infection threshold $r> 1$ in the regime $n^{-1}\ll n^{k-2}p \ll n^{-1\/r}$.\nThe main contributions of this paper are:\n\begin{itemize}\n\item strengthening of the result in \cite{MR3025687}, by showing that the failure probability decreases {\em exponentially} (Theorem~\ref{thm:graph});\n\item extension of the original results from graphs to hypergraphs (Theorem~\ref{thm:hypergraph}).\n\end{itemize}\n\n\n\section{Main Results}\n\nWe extend the following result, which was originally proved in \cite{MR3025687}, to $H_k(n,p)$: Consider bootstrap percolation with infection threshold $r$ on $G(n,p)$, where $n^{-1}\ll p \ll n^{-1\/r}$. There is a threshold $b_r=b_r(n,p)$ such that if $|\mathcal{A}_r(0)|\le(1-\varepsilon)b_r$, then with probability tending to one as $n\rightarrow \infty$ (whp for short) only a few additional vertices become infected, while if $|\mathcal{A}_r(0)|\ge(1+\varepsilon)b_r$, then whp almost every vertex in the process becomes infected.\nFor integers $k\geq 2$ and $r> 1$ set\n\[\nb_{k,r}:=b_{k,r}(n,p)=\left\{ \begin{array}{ll}\n\left(1-\frac{1}{r}\right)\left(\frac{(r-1)!}{n\left(\binom{n}{k-2}p\right)^r}\right)^{1\/(r-1)} & \mbox{if } r > 2 \\\n\frac{1}{2(2k-3)}\frac{1}{n\left(\binom{n}{k-2}p\right)^2} & \mbox{if } r = 2,\n\end{array}\n\right.\n\]\nand note that the only difference in the $r=2$ case is a $1\/(2k-3)$ multiplier. Since $2k-3=1$ when $k=2$, this is consistent with the threshold in the graph case, i.e.\ $b_{2,r}=b_r$. \n\n\begin{theorem}\label{thm:hypergraph}\nFor $k\geq 2$ consider bootstrap percolation with infection threshold $r> 1$ on $H_{k}(n,p)$ when $n^{-1}\ll n^{k-2} p \ll n^{-1\/r}$. Assume the initial infection set is chosen uniformly at random from all sets of vertices of size $a=a(n)$. 
Then for any fixed $\varepsilon>0$ we have that\n\begin{itemize}\n\item if $a\leq(1-\varepsilon)b_{k,r}$ then whp $|\mathcal{A}_r(T)|= O(b_{k,r})$;\n\item if $a\geq (1+\varepsilon)b_{k,r}$ then whp $|\mathcal{A}_r(T)|=(1+o(1))n$.\n\end{itemize}\n\end{theorem}\n\nUsing the methods developed for this result we also obtain a strengthened form of the result for $G(n,p)$, establishing exponentially small bounds on the failure probability.\n\n\begin{theorem}\label{thm:graph}\nConsider bootstrap percolation with infection threshold $r > 1$ on $G(n,p)$ when $n^{-1}\ll p \ll n^{-1\/r}$. Assume the initial infection set is chosen uniformly at random from all sets of vertices of size $a=a(n)$. Then for any fixed $\varepsilon>0$ the following holds with probability $1-\exp(-\Omega(b_{2,r}))$:\n\begin{itemize}\n\item if $a\leq(1-\varepsilon)b_{2,r}$, then $|\mathcal{A}_r(T)|=O(b_{2,r})$;\n\item if $a\geq (1+\varepsilon)b_{2,r}$, then $|\mathcal{A}_r(T)|=(1+o(1))n$.\n\end{itemize}\n\end{theorem}\n\nThe proofs rely on surprisingly simple methods. When the number of vertices infected in the individual rounds is large, we apply Chebyshev's or Chernoff's inequality. However, when the process dies out, the number of vertices infected per round can become arbitrarily small. In this case we couple the infection process with a subcritical branching process which dies out very quickly.\n\n\n\n\n\section{Proof outlines}\n\nWe first show the outline for the proof of Theorem~\ref{thm:hypergraph}. For brevity we will only describe the $r>2$ case in detail and comment on the differences for $r=2$ at the end. \n\nStart with a given set of initially infected vertices $\mathcal{A}_r(0)$ and consider the infection process round by round. 
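The round-by-round dynamics itself is straightforward to simulate; the following Python sketch (ours, for illustration only) runs the process on an explicit neighbourhood structure until no further vertex becomes infected. For a hypergraph, the neighbour set of $v$ contains every $u$ that shares at least one edge with $v$; sampling $H_k(n,p)$ itself is omitted:

```python
def bootstrap_percolation(neighbors, initially_infected, r):
    """r-neighbour bootstrap percolation run to its fixed point.

    neighbors: dict mapping each vertex to the set of its neighbours.
    Returns the final infected set A_r(T) and the number of rounds in
    which at least one new vertex became infected.
    """
    infected = set(initially_infected)
    rounds = 0
    while True:
        newly = {v for v in neighbors
                 if v not in infected
                 and len(neighbors[v] & infected) >= r}
        if not newly:               # no vertex gains its r-th infected neighbour
            return infected, rounds
        infected |= newly           # infected vertices stay infected forever
        rounds += 1
```

On a path $0$--$1$--$2$--$3$ with $r=1$ and $\mathcal{A}_1(0)=\{0\}$ the infection sweeps the whole path in three rounds, while with $r=2$ no new vertex is ever infected.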
At the end of round $t\geq 1$ we partition the set of vertices into $\mathcal{A}_0(t),\mathcal{A}_1(t),...,\mathcal{A}_r(t)$, where the set $\mathcal{A}_i(t)$, for $i<r$, consists of all the uninfected vertices which have exactly $i$ infected neighbours (these neighbours are vertices in $\mathcal{A}_r(t-1)$), while $\mathcal{A}_r(t)$ denotes the set of infected vertices. The expected sizes of these classes are approximated by a deterministic sequence $a_i(t)$ given by \eqref{eq:seqchange} and \eqref{eq:seq}, and we write $\Delta(t)=a_r(t+1)-a_r(t)$ for its increments. In the subcritical regime one can show that for every $\eta>0$, there exists a $\tau$, which does not depend on $n$, such that $\Delta(\tau)\leq \eta b_{k,r}$.\nThe fact that $|\mathcal{A}_i(t)|$ is concentrated around $a_i(t)$ for $t<\tau$ follows from Chebyshev's inequality.\n\nSince we are in the subcritical regime the size of the individual generations will become small and the concentration will fail. In order to avoid this we attempt to analyse the remaining steps together. Consider the forest where every vertex in $\mathcal{A}_r(\tau+1)\backslash \mathcal{A}_r(\tau)$ is a root.\nRecall that in order for a vertex to become infected in round $t+1$ it must have a neighbour that got infected in round $t$. The children of a vertex $v \in \mathcal{A}_r(t+1)\backslash \mathcal{A}_r(t)$ will be the vertices $u \in \mathcal{A}_r(t+2)\backslash \mathcal{A}_r(t+1)$ which lie in an edge containing $v$, and should this relation not be unique for some vertex $u$, $u$ is assigned arbitrarily to one of the candidates. Clearly every vertex of $\mathcal{A}_r(T)\setminus \mathcal{A}_r(\tau)$ is contained in the forest and thus the size of this forest matches the number of vertices which got infected after round $\tau$.\n\nNote that for every $\delta>0$ there exists a $t_0$ such that $|\mathcal{A}_i(t)|\leq (1+\delta)a_i(\tau)$, for every $0\leq i\leq r$ and $\tau<t\leq t_0$. This allows us to dominate the number of children of each vertex in the forest by the offspring distribution of a subcritical branching process, and thus the probability that $|\mathcal{A}_r(T)|>(1+\delta)a_r(\tau)$ is dominated by the probability that the total size of the branching process exceeds $\delta a_r(\tau)$. However for properly chosen $\eta,\delta>0$ the probability that the total size of the branching process exceeds $\delta a_r(\tau)$ is sufficiently small. 
Therefore we have that there are at most $(1+\delta)a_{r}(\tau)$ infected vertices in total.\n\n\nNow for the supercritical case. Recall that \eqref{eq:seqchange} and \eqref{eq:seq} hold when \linebreak[4] $a_r=o\left(\left(n^{k-2}p\right)^{-1}\right)$. Again we consider the differences $\Delta(t)=a_{r}(t+1)-a_r(t)$. Although at the beginning of the process the values of $\Delta(t)$ decrease, there exists a value $t_1$ not depending on $n$ such that for every $t>t_1$ we have that $\Delta(t+1)>\Delta(t)$. In fact, there exists a $t_2$ not depending on $n$ such that for $t\geq t_2$ we have that $\Delta(t+1)>2\Delta(t)$.\nTherefore the probability of non-concentration is dominated by a geometric sequence and applying the union bound gives us concentration as long as $a_r(t)=o\left(\left({n}^{k-2}p\right)^{-1}\right)$.\nWhen $a_r(t)=\Omega\left(\left({n}^{k-2}p\right)^{-1}\right)$ the expected number of infected neighbours is $\Omega(1)$ and thus our approximation in \eqref{eq:seqchange} no longer holds. Refining these approximations shows that at most two rounds are required for almost every vertex to become infected, with $\Theta(n)$ vertices becoming infected in every required step.\n\nRecall that for $r>2$ the typical vertex became infected when it was contained in $r$ different edges, each containing a different infected vertex. When $r=2$ this is equivalent to finding two intersecting edges, each containing a different infected vertex. However, unlike the $r>2$ case, finding two such edges in step $t$ implies that every vertex in these edges is infected by step $t+1$. Two intersecting edges typically overlap in exactly one vertex and thus finding such an edge pair implies that $2k-3$ vertices will become infected, not just one. Taking this into account gives us the modified bound on the threshold.
In the random graph case, in round $t$ of the process only those edges are examined which contain exactly one vertex from $\\mathcal{A}(t)\\backslash \\mathcal{A}(t-1)$ and no vertices from $\\mathcal{A}(t-1)$. Since each of these edges can contain at most one uninfected vertex the behaviour of the individual vertices is independent. Thus we can replace Chebyshev's inequality with Chernoff's inequality and achieve a stronger bound on the failure probability.\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nThe large-scale distribution of dark matter halos is one of the key\ningredients of the theoretical description of large-scale structure (LSS). \nSince most observed tracers of LSS, such as galaxies, reside in halos,\nthe statistics of halos determine those of galaxies on large scales. \nSimilarly, the halo model description of the nonlinear matter density\nfield \\cite{cooray\/sheth} crucially relies on halo statistics. In the context of perturbation\ntheory, the statistics of halos are written in terms of bias parameters \nmultiplying operators constructed out of the matter density field. \nIn general, these operators consist of powers of the matter density\nand tidal field \\cite{fry\/gaztanaga,mcdonald\/roy}, as well as convective time derivatives of these quantities \\cite{MSZ,senatore:14}. \nHowever, the most well-studied and phenomenologically most important\nbias parameters on large scales are those multiplying powers of\nthe matter density field, i.e. \n\\begin{equation}\n\\d_h(\\v{x},\\tau) \\supset b_1(\\tau) \\delta_\\rho(\\v{x},\\tau) + \\frac12 b_2(\\tau) \\delta_\\rho^2(\\v{x},\\tau)\n+ \\frac16 b_3(\\tau) \\delta_\\rho^3(\\v{x},\\tau) + \\cdots\\,,\n\\label{eq:localbias}\n\\end{equation}\nwhere $\\d_h$ is the fractional number density perturbation of a given\nhalo sample,\nwhile $\\delta_\\rho$ is the matter density perturbation. 
More precisely,\nthe powers of $\\delta_\\rho$ should be understood as renormalized operators\n\\cite{mcdonald,assassi\/etal,PBSpaper}. \nThe $b_n$ are commonly called (nonlinear) \\emph{local bias parameters}. \nThe goal of this paper is to present precision measurements of\n$b_1,\\,b_2,\\,b_3$ using a novel technique, \\emph{separate universe simulations}.\n\nIn the separate universe approach \n\\citep{lemaitre:1933,sirko:2005,baldauf\/etal:2011,li\/hu\/takada:2014,Wagner:2014}, a long-wavelength density\nperturbation is included in an N-body simulation by changing the \ncosmological parameters, in particular $\\Omega_m,\\,\\Omega_\\Lambda,\\,\\Omega_K$ and $H_0$, \nfrom their fiducial values, and running the simulation to a different\nscale factor. As argued in \\cite{baldauf\/etal:11,jeong\/etal,PBSpaper}, \nthe (renormalized) local bias parameters defined in \\refeq{localbias}\ncorrespond to the response of the halo abundance, $\\bar n_h$, to a long-wavelength\ndensity perturbation, equivalent to a change in the background density, $\\bar\\rho$,\n\\begin{equation}\nb_{n} = \\frac{\\bar\\rho^{\\hskip 1pt n}}{\\bar n_h} \\frac{\\partial^n \\bar n_h}{\\partial\\bar\\rho^{\\hskip 1pt n}}\\, .\n\\end{equation}\nThis can be understood as an exact formulation of the peak-background split (PBS) \\cite{kaiser:1984,mo\/white:1996}. Thus, the $b_n$ can be measured through the mass function of halos in a suite\nof separate universe simulations. This technique has several advantages: \nfirst, it is guaranteed to recover the large-scale limit of the $b_n$, without\nscale-dependent or nonlinear corrections which affect measurements of the\nbias parameters from the halo power spectrum and bispectrum, or\nfrom the cross-correlation with smoothed fields. Note that, starting \nat second order, ``nonlocal'' bias parameters such as those with respect\nto powers of the tidal field will enter in these latter measurements at\nthe same level as the $b_n$. 
Second, \nwe can straightforwardly obtain measurements of higher order bias parameters\nsuch as $b_3$, which become cumbersome to measure using correlations. Finally,\nby using the same initial phases for simulations with different density,\nwe can cancel to a large extent the cosmic variance contribution to the measurement error. \n\nSeparate universe simulations are expected to estimate the same set of bias parameters as those obtained from matter-halo cross-correlations. We will thus compare the biases obtained from the separate universe simulations to\nthose determined by fitting to halo two- and three-point statistics. \nWe also compare the results to biases derived from universal mass functions\nusing the classic peak-background split argument, and \nrecent theoretical predictions from\nthe excursion set-peaks (ESP) approach \\cite{Paranjape:2012,Paranjape:2013}, which incorporates some aspects\nof the Gaussian peaks model into the excursion set framework. \n\nHigher order bias parameters have previously been measured in simulations\nby correlating the halo number with powers of the smoothed density field \nat the final time (Eulerian frame) \n\\cite{angulo\/baugh\/lacey:2008,manera\/gaztanaga:2011}\nor in the initial conditions \\cite{paranjape\/etal:2013}. \nHowever, the bias parameters measured in this way depend on the smoothing\nscale adopted, while the local bias parameters that are relevant for perturbation theory predictions, and that we are interested in here, correspond to\na smoothing scale of infinity. Further, all these references\nneglect the nonlocal bias terms mentioned above, \nwhich will affect the inferred values of $b_2$ and higher. \nFor these reasons, it is difficult to directly compare our measurements of\nnonlinear bias parameters with these previous results (although we find\nbroad agreement). 
\nWe stress again that in the separate universe approach we are guaranteed\nto obtain the local bias in the large-scale limit, without nonlinear\nor tidal corrections. Moreover, we simultaneously obtain both\nthe Eulerian ($b_n$) and Lagrangian ($b_n^L$) bias parameters. \n\nTwo related papers appeared on the preprint archive simultaneously with this paper. Ref.~\cite{li\/etal:15} measured the linear bias using\nseparate universe simulations through an abundance matching technique\nthat yields the integrated halo bias above a mass threshold. This\ntechnique reduces the shot noise in the bias measurement. \nRef.~\cite{baldauf\/etal:15} also measured the linear bias via\nthe mass function. In addition, they present measurements of $b_2$\nthrough the response of the halo power spectrum to a long-wavelength mode\n(as done in \cite{li\/hu\/takada:2014,Wagner:2015} for the matter power spectrum). \nOur results are consistent with the findings of both of these references. \nHowever, unlike these and other previously published results, we\nuse the fully nonlinear separate universe approach to obtain accurate\nmeasurements of the \emph{linear and nonlinear} local biases.\n\n\nIn this paper we adopt a flat $\Lambda$CDM fiducial cosmology with $\Omega_m=0.27$, $h=0.7$, $\Omega_b h^2=0.023$ and $\mathcal{A}_s=2.2\cdot 10^{-9}$. \nThe outline of the paper is as follows. In \refsec{theory}, we present\nthe theoretical predictions that we will compare our measurements with. \refSec{bsep}\ndescribes the technique of measuring bias parameters from separate universe\nsimulations, while \refsec{bcorr} presents the estimators for $b_1$ and $b_2$\nusing the conventional approach of measuring halo correlations. \nWe discuss the results in \refsec{res}. We conclude in \refsec{concl}. \nThe appendices contain more details on the ESP predictions as well as\nour bias measurements.
\n\n\\section{Theory predictions}\n\\label{sec:theory}\n\nIn this section we present several theoretical predictions for the large-scale bias from the literature. We first recap the PBS argument in \\refsec{PBS} and briefly present the ESP formalism in \\refsec{ESP}.\n\nBefore jumping into details, we briefly explain the definitions of \\textit{Lagrangian} and \\textit{Eulerian} halo bias. The Lagrangian bias links the abundance of dark matter halos to the density perturbations in Lagrangian space, i.e. it describes the relation of proto-halos in the initial conditions that correspond to halos identified at redshift $z$ to the initial linear density perturbation field. On the other hand, the Eulerian bias relates the halos identified at redshift $z$ to the nonlinear density field, $\\delta_\\rho$, at redshift $z$. \nIn the case of the local bias parameters considered here, there is\nan exact nonlinear mapping between the Lagrangian bias parameters $b_n^L$\nand their Eulerian counterparts $b_m$, see \\refapp{compbsep}. We will\nmake use of this mapping both for the theory predictions and measurements. \n\nIn the following, the top-hat filtered variance on a scale $R_{\\rm TH}$ \n(the Lagrangian radius of halos) is denoted as\n\\begin{equation}\n\\s{0}^2 \\equiv \\int {\\rm dln} k\\, \\Delta^2 (k) [W_{\\rm TH}(kR_{\\rm TH})]^2,\n\\label{eq:sigma0}\n\\end{equation}\nwhere $\\Delta^2(k) = k^3 P(k)\/2\\pi^2$ is the dimensionless linearly extrapolated matter power spectrum and \nthe top-hat filter in Fourier space $W_{\\rm TH}(k R_{\\rm TH})$ is given in \\refeq{WTH}. \n\n\\subsection{Peak-background split bias}\n\\label{sec:PBS}\n\nWe briefly recap how the bias parameters can be derived from the differential halo mass function using the PBS argument,\nas initially proposed in \\cite{kaiser:1984,cole\/kaiser:1989,mo\/white:1996}. 
\nFollowing the PBS argument, the effect of a long-wavelength mode $\d_0$ on small-scale halo formation can be seen as locally modulating the density threshold for halo formation, or barrier $B$, sending it to $B-\d_0$ (here we denote the barrier as $B$ to emphasize that this argument is not restricted to the constant spherical collapse threshold $\d_c$ and can be extended to barriers depending e.g. on the halo mass $M$ through $\s{0}$). Note that if stochasticity is introduced in the barrier, this shift does not modify the stochastic contribution, which is meant to capture the effect of small-scale modes. We define the differential mass function as \n\begin{equation}\nn(\nu_{\rm B}) = \frac{\bar{\rho}_m}{M}f(\nu_{\rm B}) \left|\frac{{\rm d}\ln \s{0}}{{\rm d}\ln M}\right|,\n\label{eq:nf}\n\end{equation}\nwith $\nu_{\rm B} \equiv B(\s{0})\/\s{0}$ (we reserve the notation $\nu$ for $\nu\equiv\d_c\/\s{0}$), $M$ the corresponding mass and $f(\nu_{\rm B})$ the mass fraction contained in halos of mass $M$. The scale-independent large-scale Lagrangian bias parameters are then defined by the well-known relation \n\begin{equation}\nb^L_n(\nu_{\rm B}) = \frac{1}{n(\nu_{\rm B})}\frac{\partial ^n n([B(\s{0})-\d_0]\/\s{0})}{\partial \d_0^n}\Bigg|_{\d_0=0}.\n\label{eq:biasPBS}\n\end{equation}\nAs we have indicated, this also applies if the deterministic part of the barrier is mass-dependent. We will use \refeq{biasPBS} both to derive the bias in the ESP model and from the fits to the mass function proposed in \cite{Sheth:1999} and \cite{Tinker:2008} (hereafter ST99 and T08, respectively).
All the results that we present here and in \\refapp{ESP} were already derived in these two references, but in a different way; here, we use the PBS argument to derive the bias parameters directly. Further, the ESP predictions\nfor $b_3$ and $b_4$ are computed here for the first time. \n\nThe ESP aims at unifying the peak model of Bardeen et al. in 1986 (hereafter BBKS) \\citep{Bardeen:1986} and the excursion set formalism of Bond et al. in 1991 \\citep{Bond:1991}. It can be seen either as addressing the cloud-in-cloud problem within the peak model, or as applying the excursion set formalism to a special subset of all possible positions (the peaks). \nWe follow \\citep{Paranjape:2013}, who chose a top-hat filter for the excursion\nset part, and a Gaussian filter to identify peaks (in order to ensure finite moments\nof derivatives of the smoothed density field). \n\nMore importantly, \\citep{Paranjape:2013} improved the model by adding a mass-dependent stochastic scatter to the threshold. Specifically, the barrier is\ndefined as \\citep{Paranjape:2012}\n\\begin{equation} \nB(\\s{0}) = \\d_c + \\beta \\s{0}\\,.\n\\label{eq:barrier}\n\\end{equation}\nHere, $\\beta$ is a stochastic variable and \\cite{Paranjape:2013} chose its PDF $p(\\beta)$ to be lognormal with mean and variance corresponding to $\\<\\beta\\> = 0.5$ and ${\\rm Var}(\\beta)=0.25$. This choice was made to match the peak height measured in simulations by \\cite{robertson\/etal}. Hence $\\beta$ takes only positive values. Note that \\refeq{barrier} then corresponds to a mass-dependent mean barrier $\\d_c + 0.5 \\s{0}$. 
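The lognormal scatter in $\beta$ with $\<\beta\> = 0.5$ and ${\rm Var}(\beta)=0.25$ can be reproduced by matching the parameters of the underlying normal distribution; a short sketch (seed and sample size arbitrary):

```python
import numpy as np

mean, var = 0.5, 0.25
# lognormal X = exp(mu + s Z): mean = exp(mu + s^2/2), var = (exp(s^2) - 1) mean^2
s2 = np.log(1.0 + var / mean**2)   # = ln 2 here
mu = np.log(mean) - s2 / 2.0

rng = np.random.default_rng(0)
beta = rng.lognormal(mean=mu, sigma=np.sqrt(s2), size=1_000_000)
print(beta.mean(), beta.var())  # ≈ 0.5, 0.25
print(beta.min() > 0)           # True: beta takes only positive values
```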
\n\nAs we show in \\refapp{ESP}, the Lagrangian bias parameters in the ESP\ncan be directly derived from \\refeq{biasPBS} by inserting the multiplicity\nfunction $f_{\\rm ESP}(\\nu)$ into \\refeq{nf}, and sending \n$\\nu = \\d_c\/\\s{0}$ to $\\nu_1 = \\nu\\left(1-\\d_0\/\\d_c\\right)$.\\footnote{Here one needs to take care not to shift one instance of $\\nu$ in the expression for $f_{\\rm ESP}(\\nu)$ that is actually unrelated to the barrier. See \\refapp{ESP}.} \nOur results for the bias, \\refeq{btheo}, are identical to the large-scale bias parameters derived using\na different approach in \\citep{Paranjape:2012,Paranjape:2013}. \nWe will see that the choice of barrier \\refeq{barrier} leads to significant differences from the standard PBS biases derived using $B = \\d_c$\nfrom the T08 and ST99 mass functions. \n\n\n\\section{Bias parameters from separate universe simulations}\n\\label{sec:bsep} \n\nOur results are based on the suite of separate universe simulations described in \\cite{Wagner:2014,Wagner:2015}, performed using the cosmological code GADGET-2 \\citep{Springel:2005}. The idea of the separate universe simulations is that a uniform matter overdensity $\\delta_\\rho$ of a scale larger than the simulation box can be absorbed in the background density $\\tilde{\\rho}_m$ of a modified cosmology simulation (throughout the whole paper, quantities in modified cosmologies will be denoted with a tilde), where\n\\begin{equation} \n\\tilde{\\rho}_m(t) = \\rho_m(t)\\left[1+\\delta_\\rho(t)\\right], \n\\label{eq:dr}\n\\end{equation}\nwith $\\rho_m$ the mean matter density in a simulation with no overdensity (which we call the fiducial cosmology). Indeed, a uniform density can only be included in this way, since the Poisson equation for the potential enforces a vanishing mean density perturbation over the entire box. Thus one can see a simulation with a constant overdensity $\\delta_\\rho$ as a separate universe simulation with a properly modified cosmology. 
Qualitatively, a positive overdensity causes slower expansion and enhances the growth of structure, i.e. more halos, whereas a negative one will have the opposite effect. The precise mapping of $\\delta_\\rho$ to modified cosmological parameters is described in \\cite{Wagner:2014}. Crucially, we work to fully nonlinear order in $\\delta_\\rho(t)$. \n\nWe use two sets of simulations denoted by ``lowres'' and ``highres'' throughout the paper. Both have a comoving box size of $500\\,h^{-1}{\\rm Mpc}$ in the fiducial cosmology. The ``lowres'' set uses $256^3$ particles in each simulation, while ``highres'' employs $512^3$ particles. For both sets, we run the fiducial cosmology, i.e. $\\delta_\\rho=0$, and simulations with values of $\\delta_\\rho$ corresponding to $\\d_L$ = \\{$\\pm$0.5, $\\pm$0.4, $\\pm$0.3, $\\pm$0.2, $\\pm$0.1, $\\pm$0.07, $\\pm$0.05, $\\pm$0.02, $\\pm$0.01\\}, where $\\d_L$ is the present-day linearly extrapolated matter density contrast. \nIn addition, we simulate separate universe cosmologies corresponding to $\\d_L$ = 0.15, 0.25, and 0.35 for both resolutions. \nThis makes the sampling in the final, nonlinear $\\delta_\\rho$ more symmetric around 0 which should help diminish the covariance between the bias parameters.\\footnote{We have not performed a systematic study on the number of $\\d_L$ values that are necessary to derive accurate measurements of the $b_n$ up to a given order. \nGiven the significant degeneracies between $b_n$ and $b_{n+2}$ we have found\n(\\refapp{cov}), this is a nontrivial question.} \nThe comoving box size in the modified cosmology simulations is adjusted to match that in the fiducial cosmology, $L=500\\,h^{-1}{\\rm Mpc}$. \nHence, in the high redshift limit ($z\\rightarrow \\infty$ for which $\\delta_\\rho\\rightarrow 0$) the physical size of the box is the same for all simulations whereas at the present time ($z=0$ in the fiducial cosmology) the physical size of the simulation box varies with $\\delta_\\rho$. 
However, this choice of the box size has the advantage that the physical mass resolution is the same within each set of simulations regardless of the simulated overdensity $\delta_\rho$ (i.e. $\tilde{m}_p = m_p$ where $m_p$ is the particle mass in the fiducial cosmology). \nSince the biases are determined by comparing halo abundances between different overdensities, this eliminates any possible systematic effects in the biases due to varying mass resolution. \nThe mass resolution is $m_p = 5.6\cdot 10^{11}h^{-1} M_\odot$ in the ``lowres'' set of simulations and $m_p=7\cdot 10^{10}h^{-1} M_\odot$ in the ``highres'' one. Furthermore, for the ``lowres'' set of simulations, we ran 64 realizations of the entire set of $\d_L$ values. For the ``highres'' one we ran only 16 realizations of each $\d_L$ value, as they are more costly in terms of computation time. \nEach simulation was initialized using 2LPT at $z_i = 49$. \nFor further details about the simulations, see \citep{Wagner:2015}.\n\n\subsection{Halo catalogs}\n\label{sec:HC}\n\nThe halos were identified using the Amiga Halo Finder (hereafter AHF) \cite{Gill:2004,Knollmann:2009}, which identifies halos with a spherical overdensity (SO) algorithm. We identify halos at a fixed proper time corresponding to $z=0$ in the fiducial cosmology. \nIn this paper, we only use the number of distinct halos and do not consider their sub-halos. \n\nThe key point in identifying halos with the spherical overdensity criterion is the setting of the density threshold. We choose here a value of $\Delta_{\rm SO}=200$ times the background matter density in the \emph{fiducial} cosmology. \nThus, our measured bias parameters are valid for this specific halo definition. \nFor the simulations with a different background density, the threshold must be rescaled in order to compare halos identified using the same physical density in each simulation.
Specifically, we need to use\n\begin{equation}\n\Delta_{\rm SO} = \frac{200}{1+\delta_\rho}\,.\n\label{eq:DSO}\n\end{equation}\nAnother point is the treatment of the particle unbinding in a halo. AHF has the ability to remove unbound particles, i.e.\ particles that are not gravitationally bound to the halo they are located in. However, in order to avoid having to implement the complicated matching of the unbinding criterion between the modified and fiducial cosmologies, we have turned unbinding off in all halo catalogs. \nNote that the effect of unbinding is very small (of order 1\% on the mass function), and\nthat we consistently use the same halo catalogs for all measurements,\nso that this choice does not affect our comparison between different methods\nfor measuring bias. \n\nWe count halos in top-hat bins given by \n\begin{equation}\nW_n(M,M_{{\rm center}})=\begin{cases} 1 &\mbox{if } \left|{\rm log}_{10}(M)-{\rm log}_{10}(M_{{\rm center}})\right| \leq 0.1 \\\n0 & \mbox{otherwise, } \end{cases}\n\label{eq:bins}\n\end{equation}\nwhere $M$ is the halo mass and $M_{{\rm center}}$ the center of the bin. \nFor the high resolution simulations, we count halos in 12 bins centered from ${\rm log}_{10}\left(M_{{\rm center}}\right) = 12.55$ to ${\rm log}_{10}\left(M_{{\rm center}}\right) = 14.75$, to ensure that we have enough halos in each bin. For the low resolution simulations, we have 7 bins from ${\rm log}_{10}\left(M_{{\rm center}}\right) = 13.55$ to ${\rm log}_{10}\left(M_{{\rm center}}\right) = 14.75$. With this binning choice, the lowest bin is centered around halos with 63 particles for the ``lowres'' set of simulations, with a lower limit at halos containing around 50 particles. For the ``highres'' set of simulations, the lowest mass bin is centered on halos with around 51 particles, with a lower limit around 40 particles. These numbers are quite low compared to more conservative values (e.g.
400 particles in T08). However $\\d_h$ is the \\emph{relative difference} of the number of halos between the fiducial and modified cosmology simulations (see \\refeq{DN_N} hereafter) and therefore that quantity should be less affected by resolution effects. For halos with a minimum number of 40 particles, we did not find any systematic difference between the bias parameters measured from the ``lowres'' and ``highres'' simulations. Thus, we present results for halos that are resolved by at least 40 particles. \n\n\\subsection{Eulerian biases}\n\\label{sec:BE}\n\nInstead of fitting the Eulerian bias parameters directly to the simulation results, we derive them from the measured Lagrangian biases for which the fitting is more robust, using the exact nonlinear evolution of $\\delta_\\rho$ (see \\refapp{compbsep} for the details of the mapping). \nIn order to obtain the Lagrangian bias parameters, we compute $\\d_h(M,\\d_L)$ versus $\\d_L$ where $\\d_h(M,\\d_L)$ is the overdensity of halos in a bin of mass $M$ compared to the fiducial case $\\d_L=0$,\n\\begin{equation}\n\\delta_h (M,\\d_L) = \\frac{\\tilde{N}(M,\\d_L) - N(M)}{N(M)},\n\\label{eq:DN_N}\n\\end{equation}\nwith $\\tilde N(M,\\d_L)$ the number of halos in a bin centered around mass $M$ in the presence of the linear overdensity $\\d_L$ and $N(M)=\\tilde N(M,\\d_L=0)$. \nNote that $\\d_h(M,\\d_L)$ is the overdensity of halos in Lagrangian space as the physical volumes of the separate universe simulations only coincide at high redshift.\n\nIn order to obtain the Lagrangian bias parameters $b_n^L$, we then fit \\refeq{DN_N} by \n\\begin{equation}\n\\d_h = \\sum_{n=1}^5 \\frac{1}{n!} b_n^L (\\d_L)^n\\,.\n\\label{eq:BiasExp}\n\\end{equation}\nAs indicated in \\refeq{BiasExp} we use a $5^{\\rm th}$ order polynomial in $\\d_L$ by default. 
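The fit \refeq{BiasExp} is linear in the $b_n^L$ and can therefore be done by weighted least squares. A sketch with synthetic, noise-free data (the input bias values are made up; in practice \texttt{weights} would be the inverse errors on $\d_h$):

```python
import numpy as np
from math import factorial

def fit_lagrangian_biases(delta_L, delta_h, weights=None, order=5):
    """Weighted least-squares fit of delta_h = sum_n b_n^L delta_L^n / n!
    (no constant term, since delta_h vanishes at delta_L = 0).
    `weights` ~ 1/sigma per point; returns [b_1^L, ..., b_order^L]."""
    X = np.column_stack([delta_L**n / factorial(n) for n in range(1, order + 1)])
    w = np.ones_like(delta_L) if weights is None else weights
    coef, *_ = np.linalg.lstsq(X * w[:, None], delta_h * w, rcond=None)
    return coef

# synthetic data with b_1^L = 1.5, b_2^L = -0.4, higher orders zero
dL = np.array([-0.5, -0.3, -0.1, -0.05, 0.05, 0.1, 0.3, 0.5])
dh = 1.5 * dL - 0.4 * dL**2 / 2.0
print(fit_lagrangian_biases(dL, dh, order=3))  # ≈ [1.5, -0.4, 0.0]
```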
In \\refapp{deg} we study the effect of the degree of the polynomial on the results; as a rough rule, if one is interested in $b_n^L$, then one should fit a polynomial up to order $n+2$. \n\nIn order to estimate the overall best-fit of and error bars on the bias parameters, we use a bootstrap technique. For each non zero $\\d_L$ value, we randomly produce $p$ resamples of the mass function. Each resample is composed of the same number of realizations as the original sample (i.e. 16 or 64) and we choose $p=100\\cdot 64$ ($100\\cdot 16$) for the low (high) resolution simulations. We then compute the average number of halos per mass bin for each resample. This gives us $p$ numbers $\\tilde{N}^i(M,\\d_L)$. \nFor a given $\\d_L$, we also create the same set of resamples for the fiducial cosmology \nand again compute the average number of halos, i.e. $N^i(M)$. We then compute $p$ times $\\d^i_h$ according to \\refeq{DN_N} for every $\\d_L$ value. \nSince we use the same resamples for the separate universe results, $\\tilde{N}^i(M,\\d_L)$, and the fiducial case, $N^i(M)$, the cosmic variance is canceled to leading order. The error on $\\d_h$ at fixed mass and $\\d_L$ is given by the sample variance and we use it as a weight for the fit. \nWe neglect, however, the covariance between $\\tilde{N}^i(M,\\d_L)$ for different $\\d_L$ values.\nWe then produce $p$ fits with a weighted least squares method. For every bias parameter, the value we report is the mean of the results of the $p$ fits while the corresponding error bar is given by the square root of the variance of the distribution. 
Within the mass range common to both sets of simulations ``lowres'' and ``highres'', the measurements are consistent with each other and hence we perform a volume-weighted average of the biases from the two sets of simulations.\n\n\\section{Bias parameters from correlations}\n\\label{sec:bcorr}\n\nTraditionally bias parameters are used for and measured from $n$-point correlation functions or $n$-spectra. The $n$-th order bias parameters enter the tree-level calculation of the $n+1$-point functions. For instance, $b_1$ appears at the leading order in the large-scale behavior of the halo power spectrum, $b_2$ in the large-scale limit of the bispectrum and $b_3$ in the large-scale limit of the trispectrum. For the comparison to $n$-point functions, we will restrict ourselves to the power spectrum and bispectrum at tree level here. The bispectrum also contains nonlocal bias parameters, i.e. biases with respect to the tidal field, that arise from triaxial collapse and gravitational evolution. The estimation of the first and second order bias parameters closely follows the steps outlined in \\cite{Baldauf:2012} (see also \\cite{Saito:2014}), with the difference that we are performing a joint fit for all the bias parameters, instead of first fitting $b_1$ to the halo power spectrum and then using its value in the bispectrum analysis.\n\nLet us start by discussing the power spectrum. We measure the halo-matter cross power spectrum $P_\\text{hm}$, which at tree level (on large scales) is given by\n\\begin{equation}\nP_\\text{hm}(k)=b_1 P_\\text{mm}(k).\n\\end{equation}\nWe refrain from explicitly including the loop corrections, since they contain third order biases not present in the bispectrum as well as scale-dependent biases $\\propto k^2$ \\cite{assassi\/etal}. \nThe advantage of the halo-matter cross power spectrum over the halo-halo power spectrum is that it is free of shot noise. 
To ensure that our measurements are not contaminated by higher order contributions or scale dependent bias, we will \nin fact fit $P_\\text{hm}(k)=(b_1 + b_{P,k^2} k^2 ) P_\\text{mm}(k)$ to the simulation\nresults, where $b_{P,k^2}$ is a free nuisance parameter. This term absorbs the\nloop corrections in the large-scale limit. \nWe measure the matter and halo power spectra in the same wavenumber bins in the simulation and take their ratio to cancel the leading cosmic variance, i.e. we define a quantity $q(k)=P_\\text{hm}(k)\/P_\\text{mm}(k)$ and the $\\chi^2$\n\\begin{equation}\n\\chi^2_P=\\sum_{k}^{k_\\text{max}} \\left(\\frac{q(k)-b_1-b_{P,k^2}k^2}{\\sigma[q(k)]}\\right)^2,\n\\end{equation}\nwhere the variance $\\sigma^2(q)$ is estimated from the box-to-box scatter between the simulation realizations. \n\nLet us now turn to the bispectrum. One can form three different bispectra containing the halo field, the halo-halo-halo, the halo-halo-matter and the halo-matter-matter bispectrum. We are using the latter, since it is the only bispectrum free of shot noise. Furthermore, we will employ the unsymmetrized bispectrum, where the halo mode is the one associated with the wavevector $\\vec k_3$. This unsymmetrized bispectrum measurement allows for a clear distinction of the second order local bias $b_2$ and tidal tensor bias $b_{s^2}$, once the matter bispectrum is subtracted out. 
The unsymmetrized tree-level bispectrum reads\n\begin{equation}\nB_\text{mmh}(k_1,k_2,k_3)=b_1 B_\text{mmm}(k_1,k_2,k_3) + b_2 P(k_1)P(k_2)+2 b_{s^2} S_2(\vec k_1,\vec k_2) P(k_1)P(k_2)\; ,\n\end{equation}\nwhere $B_\text{mmm}$ is the tree-level matter bispectrum (e.g., \cite{Baldauf:2012}), and we employed the tidal operator $S_2$ defined as\n\begin{equation}\nS_2(\vec k_1,\vec k_2)=\left(\frac{(\vec k_1\cdot \vec k_2)^2}{k_1^2 k_2^2}-\frac{1}{3}\right).\n\end{equation}\nAs for the power spectrum above, this bispectrum does not include loop corrections or scale-dependent biases. Thus, we again add a term of the form\n$b_{B,k^2} (k_1^2+k_2^2)P(k_1) P(k_2)$ with a free coefficient $b_{B,k^2}$,\ndesigned to absorb the loop corrections. \nTo cancel cosmic variance, we define the ratio of bispectrum and power spectrum measurements\n\begin{equation}\nQ(k_1,k_2,k_3;b_1)=\frac{B_\text{mmh}(k_1,k_2,k_3)-b_1 B_\text{mmm}(k_1,k_2,k_3)}{P_\text{mm}(k_1) P_\text{mm}(k_2)},\n\end{equation}\nand using this we define the corresponding $\chi^2$\n\begin{equation}\n\chi^2_B=\sum_{k_1,k_2,k_3}^{k_\text{max}} \left(\frac{Q(k_1,k_2,k_3;b_1)-b_2 -2b_{s^2}S_2-b_{B,k^2}(k_1^2+k_2^2)}{\sigma[Q(k_1,k_2,k_3;b_{1,\text{fid}})]}\right)^2\; ,\n\end{equation}\nwhere the variance of $Q$ is estimated from the box-to-box scatter between the simulation realizations for a fiducial $b_{1,\text{fid}}$. Equivalent results could have been obtained using the estimator presented in \cite{Schmittfull:2014tca}. We decided to stick with the more traditional bispectrum estimation for the following reasons: for their method, the smoothing scale of the fields needs to be chosen before the simulation data is reduced, complicating convergence tests. Furthermore, \cite{Schmittfull:2014tca} ignored two-loop corrections to their estimator and higher derivative terms, while we marginalize over an effective shape accounting for the onset of scale dependence.
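As an illustration of the model entering $\chi^2_B$, the following sketch generates noise-free toy values of $Q$ on random triangle configurations and recovers $(b_2, b_{s^2}, b_{B,k^2})$ by ordinary least squares. Here $S_2$ is the standard tidal kernel $\mu^2 - 1/3$, with $\mu$ the cosine between $\vec k_1$ and $\vec k_2$; all numbers are made up, and the paper itself samples the full likelihood with EMCEE rather than solving normal equations:

```python
import numpy as np

def S2(mu):
    """Standard tidal kernel S_2 = (k1.k2)^2 / (k1^2 k2^2) - 1/3 = mu^2 - 1/3."""
    return mu**2 - 1.0 / 3.0

# toy triangle configurations below k_max (all numbers made up)
rng = np.random.default_rng(0)
k1 = rng.uniform(0.01, 0.06, 50)
k2 = rng.uniform(0.01, 0.06, 50)
mu = rng.uniform(-1.0, 1.0, 50)

# noise-free synthetic Q with b_2 = -0.7, b_{s^2} = 0.2, b_{B,k^2} = 3.0
b2_true, bs2_true, bk2_true = -0.7, 0.2, 3.0
Q = b2_true + 2.0 * bs2_true * S2(mu) + bk2_true * (k1**2 + k2**2)

# ordinary least squares on the design matrix [1, 2 S_2, k1^2 + k2^2]
X = np.column_stack([np.ones_like(Q), 2.0 * S2(mu), k1**2 + k2**2])
coef, *_ = np.linalg.lstsq(X, Q, rcond=None)
print(coef)  # ≈ [-0.7, 0.2, 3.0]
```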
A detailed comparison of the two methods is, however, beyond the scope of this work.\n\nAll measurements are done on the ``lowres'' and ``highres'' sets of the fiducial cosmology. \nWe find the best-fit biases $b_1$ and $b_2$ by sampling the log-likelihood $\ln \mathcal{L}=-\chi^2_\text{tot}\/2$, where $\chi^2_\text{tot}=\chi^2_P+\chi^2_B$, using the Markov Chain code EMCEE \cite{emcee}. \nThe errors on the bias parameters are estimated from the posterior distribution of sampling points after marginalizing over the (for our purposes) nuisance parameters $b_{P,k^2}$, $b_{B,k^2}$ and $b_{s^2}$. \nWe have varied the maximum wavenumber $k_\text{max}$ to ensure that the tree-level bias parameters remain consistent as $k_\text{max}$ increases. Further, we demand that the total $\chi^2$ per degree of freedom is approximately unity. The results shown below use a conservative value of $k_\text{max} = 0.06 \; h \, {\rm Mpc}^{-1}$. This limits the number of modes to $\mathcal{O}(100)$ and thus also the number of power and bispectrum configurations. Due to the cancellation of the leading-order cosmic variance this is not of major concern. We have compared the clustering constraints with a larger $2400\; h^{-1}\text{Mpc}$ box providing a factor of 100 more modes to the same cutoff and found consistent results.\n\n\section{Results}\n\label{sec:res}\n\nThis section presents the results for the Eulerian bias parameters $b_1$ to $b_3$. For completeness, we also present results for $b_4$, which is poorly constrained, in \refapp{b4}.\n\nIn order to obtain a precise comparison between any theoretical prediction for the bias $b_n(M)$ (such as the ESP, \refeq{btheo}) and our data points, we convolve the theoretical prediction with the mass bins used in the simulation (see \refsec{bsep}).
I.e., the theory predictions we will show in the following are given by\n\\begin{equation}\nb_n^{\\rm conv}(M) = \\frac{\\int{W_n(M',M) n(M') b_n(M') {\\rm d} M'}}{\\int{ W_n(M',M) n(M'){\\rm d}M'}},\n\\label{eq:b1TbinConv}\n\\end{equation} \nwhere $W_n(M',M)$ is the window function of the mass bin given by \\refeq{bins}, and $n(M')$ is the differential halo mass function, parametrized by the fitting formula of Eq. (2) in T08. \nIn this way, we obtain smooth curves for the theory prediction whose\nvalue at the center of a given mass bin can be compared directly to the simulation results.\n\n\\subsection{Linear bias}\n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.55]{b1_comp_deg5.pdf} \n\\caption{\\textbf{Top panel:} comparison between the linear halo bias from separate universe simulations (green dots), and from clustering (red crosses; displaced slightly horizontally for clarity). Error bars that are not visible are within the marker size. The solid black curve is the Tinker et al. (2010) best fit curve for $b_1$, while the dot-dashed green curve is the ESP prediction \\refeq{btheo}. We also show the result obtained by applying the PBS argument [\\refeq{biasPBS}] to the T08 \nand ST99 mass functions (blue dashed curves). \\textbf{Bottom panel:} relative difference between the measurements and the Tinker et al. (2010) best fit.} \n\\label{fig:b1}\n\\end{figure}\n\n\\refFig{b1} presents the results for $b_1$. The green points show the results obtained from the separate universe simulations, while the red crosses show those from fitting $P_{\\text{hm}}$ and $B_{\\text{mmh}}$. The mutual agreement of the two measurements is very good (the only point with relative difference greater than the $1\\sigma$ uncertainty is at $\\log M=13.15$). The error bars of the separate universe measurements are significantly smaller. 
Note however that the effective volume used by these measurements is also larger, since the halo-matter power spectrum was only measured in the fiducial boxes. This is a first validation of the separate universe method and also proves its efficiency.\n\nThese results are consistent with the ones presented in \\cite{li\/etal:15} who derived the linear bias from abundance matching. Since Ref.~\\cite{li\/etal:15} used a linearized implementation of separate universe simulations, they are restricted to small overdensities (they take $\\delta_\\rho= \\pm 0.01$), resulting in very small changes in the halo abundance. For such small changes, abundance matching is much more efficient than binning halos into finite mass intervals. We circumvent this issue by using fully nonlinear separate universe simulations which allow us to simulate arbitrary values of $\\delta_\\rho$. \n\nWe also compare our data with several results from the literature. The solid black curve is the fit to $P_{\\text{hm}}$ measurements from Tinker et al. (2010) \\cite{Tinker:2010} [their Eq. (6)]. As shown in the lower panel of\n\\refFig{b1}, the agreement is better than 5\\%, the quoted accuracy of the\nfitting formula. Note that we do not remove unbound particles from our\nhalos, which we expect to lead to a slight underestimate of the bias at the few percent level at low masses. \nNext, we turn to the ``standard'' peak-background split \nargument \\refeq{biasPBS} applied to the universal mass functions of ST99 \nand T08 (blue dashed curves). At low masses, the T08 curve is at 1\\% level agreement but the ST99 prediction overestimates the bias by around 8\\%. The agreement is worse at high mass where these two curves underestimate the bias by around 8\\% and 11\\% respectively.\n\nThe green dot-dashed line finally shows the prediction from excursion set\npeaks \\refeq{btheo}. The agreement at high masses is excellent, where\nthe ESP matches the measured $b_1$ to better than 2\\%. 
The agreement is considerably worse at low masses, where the ESP prediction overestimates the bias by roughly 10\%. Note that the assumption that halos correspond to peaks in the\ninitial density field is not expected to be accurate at low masses\n\cite{ludlow\/porciani}. Part of the discrepancy might also come from the up-crossing criterion applied to derive the ESP prediction, which is only expected to be accurate at high masses \cite{Musso:2013}. It is worth emphasizing that \refeq{biasPBS} still \napplies in the case of the ESP. That is, the large-scale bias can still be\nderived directly from the mass function. The key difference from the\nPBS curves discussed previously is that, following \cite{Paranjape:2013},\nwe employ a stochastic moving barrier, which changes the relation between\nmass function and bias. This more realistic barrier leads to a\nsignificant improvement in the prediction of the bias for high-mass halos. \n\n\subsection{Higher order biases}\n\n\begin{figure}\n\centering\n\includegraphics[scale=0.5]{b2_comp_deg5.pdf} \n\caption{\textbf{Top panel:} same as \refFig{b1}, but for the quadratic bias $b_2$. The color code is as in \refFig{b1}. \textbf{Bottom panel:} relative difference between measurements and the theoretical prediction of the ESP. In each panel, the clustering points have been horizontally displaced as in \refFig{b1}.} \n\label{fig:b2}\n\end{figure}\n\begin{figure}\n\centering\n\includegraphics[scale=0.55]{b3_comp_deg5.pdf} \n\caption{As \refFig{b2} but for $b_3$.} \n\label{fig:b3}\n\end{figure}\n\n\refFigs{b2}{b3} present the analogous results to \refFig{b1} for $b_2$ and $b_3$, respectively. For $b_2$ at masses below $10^{13.5} h^{-1} M_\odot$, there is some scatter in the separate universe results that is apparently larger than what is expected given the error bars (a hint of a similar effect can be seen in $b_1$ as well).
\nNote however that there is significant residual degeneracy\nbetween the $b_n$ for a given mass bin, so that a ``$\\chi$-by-eye''\ncan be misleading. As an example, we show projections of the likelihood for one mass bin in \\refFig{contours}. \nThe covariance between the bias parameters is further explored in \\refapp{cov}. Covariance in the halo shot noise between different mass bins, which we do not take into account in the likelihood, could also contribute to the fluctuations in the bias parameters.\n\nIn the case of $b_2$, we can compare the separate universe results to the results of fitting to $P_{\\text{hm}}$ and $B_{\\text{mmh}}$. Again, we find good agreement, with all points being within $2\\sigma$ from each other. Note that $b_2$ is most difficult to constrain from correlations around its zero-crossing. The difference in constraining power between the two methods is now even larger than in the case of $b_1$. This is because, when using correlations, $b_2$ has to be measured from a higher order statistic which has lower signal-to-noise. \nIn the case of $b_3$, a measurement from correlations would have to\nrely on the trispectrum and accurate subtraction of 1-loop contributions in \nperturbation theory. We defer this significantly more involved measurement\nto future work. \nAs discussed in the introduction, it is difficult to rigorously compare these\nmeasurements to previously published results, since those were measured\nat a fixed smoothing scale and did not take into account nonlocal bias\nterms. Nevertheless, our results for $b_2$ and $b_3$ appear broadly consistent with those of \n\\cite{angulo\/baugh\/lacey:2008,paranjape\/etal:2013}\nand \\cite{angulo\/baugh\/lacey:2008}, respectively.\n\nWe again compare with the peak-background split results, now derived at\nsecond and third order from the ST99 and T08\nmass functions. For $b_2$, at low mass, both predictions deviate from our measurements by about 50\\%. 
At high mass, the deviation is at most 25\\% for T08 and 40\\% for ST99. In the low mass range, this apparently big discrepancy is also due to the smallness of the absolute value of $b_2$. \nIn the case of $b_3$, the PBS predictions using either the T08 or ST99\nmass functions are in fact completely consistent with the measurements \nat masses $\\gtrsim 10^{12.7} h^{-1} M_\\odot$ and $10^{13.5} h^{-1} M_\\odot$, respectively.\n\nTurning to the ESP prediction, we again find very good agreement at\nhigh masses, although for $b_2$ and $b_3$ the performance is not significantly better than the\nPBS-derived biases from the T08 mass function. At low masses,\nwe again find larger discrepancies, with the ESP now underpredicting\nthe magnitude of $b_2$ and $b_3$. The same caveats regarding the relation of low-mass halos to peaks and the efficiency of the up-crossing condition apply here, i.e. we do not expect the ESP\nprediction to work well for those masses.\\\\ \n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.37]{b2overb1_of_b1_zdep.pdf} \n\\includegraphics[scale=0.37]{b3overb1_of_b1_zdep.pdf}\n\\caption{$b_2$ and $b_3$ as a function of $b_1$ obtained from separate universe simulations and for different redshifts. The dashed curves present the third order best fit polynomial for each bias. See text for details about the fit.} \n\\label{fig:b2b3(b1)}\n\\end{figure}\n\nSo far, we have only shown results at redshift $0$. \\refFig{b2b3(b1)}\nshows results from various redshifts by plotting $b_2,\\,b_3$ as functions of\n$b_1$. If the bias parameters are uniquely determined by $\\sigma_0 = \\sigma(M)$, then this relation will be redshift-independent. Indeed, we find no\nevidence for a redshift dependence over the range $z = 0 \\dots 2$ and\n$b_1 = 1 \\dots 10$. Note that we have kept the overdensity criterion\n$\\Delta_{\\rm SO}=200$ fixed. 
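The redshift-independence of the $b_n(b_1)$ relation suggests fitting a single curve to results from all snapshots. The snippet below sketches such a joint weighted least-squares cubic fit of $b_2$ against $b_1$; the data arrays are hypothetical placeholders standing in for measured values and errors, not the paper's actual measurements.

```python
import numpy as np

# Hypothetical (b1, b2) measurements pooled from all redshift snapshots,
# with 1-sigma errors on b2. These numbers are illustrative only.
b1 = np.array([1.1, 1.5, 2.3, 3.8, 5.0, 7.2, 9.5])
b2 = np.array([-0.7, -0.8, -0.3, 3.0, 8.0, 25.0, 53.0])
sigma_b2 = np.array([0.1, 0.1, 0.2, 0.4, 0.8, 2.0, 5.0])

# Weighted least squares: minimize sum_i [(b2_i - P(b1_i)) / sigma_i]^2
# for a cubic polynomial P. The error on b1 is neglected, as it is much
# smaller than that on b2.
design = np.vander(b1, 4, increasing=True)   # columns: 1, b1, b1^2, b1^3
w = 1.0 / sigma_b2
coeffs, *_ = np.linalg.lstsq(design * w[:, None], b2 * w, rcond=None)
print(coeffs)  # [c0, c1, c2, c3] such that b2(b1) = c0 + c1*b1 + c2*b1^2 + c3*b1^3
```

Pooling all redshifts into one design matrix is what makes the fit "joint"; the weights simply downweight the noisier high-mass bins.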
\nSince the separate universe simulation measurements of $b_2$ and $b_3$ \nare very accurate, we provide fitting formulas in \nthe form of $b_n(b_1)$ for convenience. \nGiven the consistency with a universal behavior, we perform a joint fit to the results from all redshifts. We use a $3^{\\rm rd}$ order polynomial form for both $b_2$ and $b_3$. Again, we use a weighted least squares method for the fit but do not take into account the error on $b_1$ since it is much smaller than those in $b_2,\\,b_3$. We obtain\n\\begin{equation}\nb_2(b_1) = 0.412-2.143\\,b_1+0.929\\,b_1^2+0.008\\,b_1^3, \n\\label{eq:b2(b1)} \n\\end{equation}\nand\n\\begin{equation}\nb_3(b_1) = -1.028+7.646\\,b_1-6.227\\,b_1^2+0.912\\,b_1^3. \n\\label{eq:b3(b1)} \n\\end{equation}\nThe fits are shown as dashed lines in the two panels of \\refFig{b2b3(b1)}. Notice that we restricted ourselves to $b_1 < 9.6$ in these figures for clarity, but we used the full range of results to produce the fits.\nNote that one should be careful when using these formulas outside the fitting range $1\\lesssim b_1\\lesssim 10$.\n\\refeqs{b2(b1)}{b3(b1)} are similar to the fitting formulas provided in \\cite{Hoffmann:2015}, who fitted $2^{\\rm nd}$ and $3^{\\rm rd}$ order polynomials for\n$b_2(b_1)$ and $b_3(b_1)$, respectively, to PBS predictions, and found no redshift dependence of their results. Such universal relations were already apparent in \\cite{Saito:2014} (their figure 9). \n\n\\section{Conclusions}\n\\label{sec:concl}\n\nWe have presented a new method to measure the large-scale, renormalized\nlocal density bias parameters $b_n$ of dark matter halos, with $n=1,2,3$,\nby running simulations which incorporate an infinite-wavelength density\nperturbation of arbitrary amplitude. This method can be seen as an \nexact implementation of the peak-background split. \nIt has several advantages, including a simple implementation applicable,\nin principle, to arbitrarily high $n$.
The most important advantage,\nhowever, is that the measured biases are not affected\nby the modeling of scale-dependent or nonlinear corrections, and there\nis no ambiguous choice of $k_{\\rm max}$, with the associated risk of \noverfitting, as when\nfitting halo $N$-point functions. The most significant disadvantage\nof the method is that it needs a set of dedicated simulations with\nvarying cosmological parameters to generate a range of $\\d_L$ (note\nhowever that once the simulations are done, they can be used for\nvarious studies, such as for example the nonlinear power spectrum\nresponse \\cite{Wagner:2015}). \n\nWe have compared our results for $b_1$ and $b_2$ to those measured\nfrom the halo-matter power spectrum and halo-matter-matter bispectrum,\nand find excellent agreement overall. One necessary condition for this\nagreement is a careful fitting procedure for the halo statistics, including the choice of $k_{\\rm max}$. \n\nWe also compared our results to predictions based on the analytical peak-background\nsplit. Once a specific barrier $B$ is assumed, the PBS allows for\na derivation of all local bias parameters $b_n$ from a given halo mass\nfunction. The simplest and most common choice is $B=\\d_c$, which we have\napplied to the ST99 and T08 mass function prescriptions. \nWe found that even though the latter provides a very accurate mass function, the linear bias derived via the PBS and simple collapse threshold is only accurate at the $\\sim 10$\\% level, in agreement\nwith previous results \\cite{manera\/etal:2010}. The discrepancy is even larger for $b_2$, reaching up to 50\\% at low mass, although the absolute difference between the PBS predictions and the measurements is similar to that in $b_1$. For $b_3$, the simple PBS predictions are consistent with the measurements (at least at high masses), but this is not a very strong statement given the large error bars on $b_3$.
\n\nWe also derived the biases predicted in the excursion set-peaks\napproach, which includes a stochastic moving barrier motivated by \nsimulation results. At high mass, this performs much better, at least for $b_1$, showing\nthat the choice of barrier is a key ingredient in deriving accurate\nbias parameters. In this context, it is important to note that previous\nresults on the inaccuracy of PBS bias parameters \\cite{manera\/etal:2010}\nrelied on the simple constant threshold $B=\\d_c$. This shows that the cause of \nthese inaccuracies is not the peak-background split itself. The\ninaccuracy of the peak-background split thus depends on what one\ndefines PBS to mean, and can be summarized as follows:\n\\begin{itemize}\n\\item The PBS implemented via the separate universe approach is exact.\n\\item The PBS using a simulation-derived stochastic moving barrier \\cite{robertson\/etal,Paranjape:2013}, as in the ESP, is accurate to a few percent, at least at high masses. The discrepancy found at low mass can be explained by the failure of the peak assumption at such masses, an issue unrelated to the choice of the barrier.\n\\item The PBS using the constant spherical collapse barrier is accurate to no better than 10\\%\\,.\n\\end{itemize} \n\nWe also provide fitting formulas for $b_2,\\,b_3$ as a function of $b_1$\nwhich are valid over a range of redshifts and \ncan be useful for predictions and forecasts based on halo statistics,\nsuch as, for example, the halo model. \n\nIn the future, we plan to extend our analysis to accurately measure\nassembly bias, i.e. the dependence of bias on halo properties beyond\nthe mass (e.g., \\cite{gao\/etal,wechsler\/etal,dalal\/etal}). \nFurther, it will be interesting to extend this technique beyond the \ninfinite wavelength, spherically symmetric ``separate universe'' to allow for precision measurements of the \ntidal and scale-dependent biases.
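For quick numerical use of the fitting formulas \refeqs{b2(b1)}{b3(b1)}, a minimal Python sketch is given below; the coefficients are taken directly from the equations above, and the functions are only intended for the fitting range $1\lesssim b_1\lesssim 10$.

```python
def b2_of_b1(b1):
    # Cubic fit b2(b1) with the coefficients quoted in the text;
    # valid for roughly 1 <~ b1 <~ 10.
    return 0.412 - 2.143 * b1 + 0.929 * b1**2 + 0.008 * b1**3

def b3_of_b1(b1):
    # Cubic fit b3(b1), same range of validity.
    return -1.028 + 7.646 * b1 - 6.227 * b1**2 + 0.912 * b1**3

print(round(b2_of_b1(1.0), 3))  # -0.794
print(round(b3_of_b1(1.0), 3))  # 1.303
```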
\n\n\\acknowledgments{We thank Aseem Paranjape, Marcello Musso and Vincent~Desjacques for useful discussion about the ESP. F.S.~acknowledges support from the Marie Curie Career Integration Grant (FP7-PEOPLE-2013-CIG) ``FundPhysicsAndLSS''. T.B.~gratefully acknowledges support from the Institute for Advanced Study through a Corning Glass works foundation grant.}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:introduction}\nThe promise of automating tedious tasks and keeping humans away from hazardous environments has driven the development of robotic systems to reach tremendous capabilities.\nWith the increased maturity of the single agent systems, the interest in teams of robots collaborating with each other has been steadily rising.\nThe use of such a team of robots collaborating towards a common goal promises to increase the efficiency and robustness of a mission by distributing the tasks among the participating agents and has been proposed for a variety of different tasks, such as in search-and-rescue missions \\cite{srr}, archaeological mapping \\cite{col_arch_map}, precision agriculture \\cite{agri-robots}, and surveillance \\cite{surveillance}.\nSharing information across the agents not only enables the robots to perform a task faster but also enables the individual agents to make better-informed decisions as they can profit from information beyond their own gathered experience, as shown in \\cite{bartolomei2020multi}.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.48\\textwidth]{images\/expt_eurocv3.png}\n \\caption{Collaborative SLAM estimate for 5 agents (EuRoC MH Dataset) running different VIO front-ends. 
The COVINS-G back-end does not require map points (shown here only for visualization) and thus, is compatible with any arbitrary VIO front-end.\n }\n \\label{fig:irchel_5ag} \n \\vspace{-20pt}\n\\end{figure}\n\nHowever, in order for the robots to work towards any higher-level goal, they need to be aware of their surroundings and their pose in their workspace.\nMoreover, for the robots to be able to collaborate, the knowledge of the pose of all other robots within the team is crucial.\nThe use of external sensors, such as GPS or motion capture systems can provide such a shared reference frame enabling the coordination of the robotic team, however, for many practical applications, such data is not reliable or simply not available in the first place.\nFor example, the uncertainty of GPS measurements can be in the order of tens of meters close to larger structures (e.g. within a city) or not available at all inside buildings or underground.\nIn order to remove dependencies on external sensors, research into Simultaneous Localization And Mapping (SLAM) has made significant progress.\nIn particular, the use of cameras and Inertial Measurement Units (IMUs) for visual-inertial SLAM has proven to provide robustness and accuracy, which led to their deployment onboard products in the market already.\nWith the increasing maturity of single-agent vision-based SLAM techniques, the extension towards multi-agent SLAM as a core enabler for robotic collaboration in real-world scenarios has been increasingly gaining interest, sparking a variety of works addressing multi-agent SLAM \\cite{multi-uav-slam, ccm-slam, cvi-slam, door-slam, kimera-multi, covins}. \nWhile the capabilities and robustness of the developed systems have been steadily increasing, due to their tailored architectures, their modularity is often sacrificed. 
\nAs a result, any exchange and modification of the front-end onboard such systems most often requires significant effort to adapt the back-end to the structure.\nIn this spirit, this work addresses this issue by proposing a generic back-end solution built on top of the architecture of \\cite{covins}.\nAs the proposed approach requires only 2D features, it is agnostic to the front-end running onboard each agent and even allows mixing different front-end algorithms on the different agents during the same mission as illustrated in Figure \\ref{fig:irchel_5ag}.\nIn summary, the contributions of this work are the following:\n\\begin{itemize}[leftmargin=10pt]\n \\item a generalized collaborative SLAM back-end, which requires only 2D keypoints and a pose estimate to fuse the estimates from multiple agents, enabling the use of any arbitrary VIO and stereo front-ends onboard each agent, \n \\item a publicly available codebase\\footnote{\\href{https:\/\/github.com\/VIS4ROB-lab\/covins}{https:\/\/github.com\/VIS4ROB-lab\/covins}}, which is integrated with the framework of \\cite{covins}. Furthermore, a front-end wrapper is provided, to support any off-the-shelf front-end, and\n \\item an extensive evaluation of the proposed back-end on both the EuRoC dataset \\cite{euroc} as well as newly collected datasets. 
Our evaluation reveals the flexibility of the proposed approach, using and combining different types of front-ends onboard collaborating agents within the same mission.\n\\end{itemize}\n\n\\vspace{0pt}\n\\section{Related Work}\n\\label{sec:relatedwork}\nThe capability to process multiple trajectories sequentially ({\\it{aka}} the multi-session capability) can be seen as a special case of collaborative SLAM.\nRecent SLAM systems, such as ORB-SLAM3\\cite{ORBSLAM3_TRO} and VINS-mono \\cite{vins-mono}, have such multi-session capabilities, which enable them to achieve joint pose and scene estimates similar to collaborative SLAM estimates.\nWhile these approaches achieve greater accuracy and robustness when compared to single-agent SLAM, they are not designed to be used in real-time applications where multiple agents are operating at the same time. \n\n\nIn the multi-agent SLAM literature, systems are generally classified into decentralized and centralized architectures.\nOne of the first decentralized approaches to multi-agent SLAM is DDF-SAM \\cite{ddfsam}, which communicates and propagates condensed local graphs between the robots to distribute the information.\nCombining efficient decentralized place recognition \\cite{Cieslewski:Scaramuzza:MRS2017} with a Gauss-Seidel based distributed pose graph optimization \\cite{Coudhary:etal:ICRA2016}, a data-efficient and complete decentralized visual SLAM approach was proposed in \\cite{Cieslewski:etal:ICRA2018}. \nIn \\cite{dist-coslam}, a monocular vision-only distributed SLAM for mapping large-scale environments is presented. \nThe recent works \\cite{door-slam, kimera-multi} both make use of distributed pose graph optimization schemes along with a robust mechanism for identifying and rejecting incorrect loop closures.
\nWhile these distributed SLAM approaches have advantages in terms of scalability, the distribution of the information generally limits the extent of collaboration in order to keep the communication requirements feasible.\n\nOn the other hand, in centralized systems, all relevant information passes through a central node.\nIn \\cite{PG-slam}, the authors present a back-end for real-time multi-robot collaborative SLAM where the server combines the local pose graphs obtained from different agents into a global pose graph.\nIn MOARSLAM \\cite{moarslam}, each agent runs a full SLAM pipeline onboard, while the server is used to store the agents' maps and perform map merges across them.\nAs the agents perform all the computationally expensive tasks, in particular global optimization, this approach is not well-suited for resource-constrained platforms.\nOn the other end of the spectrum is C${}^2$TAM \\cite{riazuelo2014c}, which offloads all tasks onto a server platform, except pose tracking.\nWhile this allows for very limited computational load onboard the agents, it limits the autonomy of the agents, as a loss of connection to the server eventually causes the onboard tracking to fail.\nA middle ground between MOARSLAM and C${}^2$TAM was proposed in \\cite{multi-uav-slam}, which introduces a system architecture that enables the agents to function autonomously while still offloading heavy computations to a server and, crucially, enables two-way information flow between the agents and the server.\nThis work was extended in CCM-SLAM \\cite{ccm-slam}, shown to perform in real-time on real data, proposing redundancy detection that enabled scalability, which is key in large-scale missions.\nPushing for the incorporation of inertial cues onboard each agent, aside from the monocular cues, CVI-SLAM \\cite{cvi-slam} was demonstrated to achieve higher accuracy and metrically scaled SLAM estimates aligned with the gravity direction -- which are core to robotic autonomy in
reality.\nMost recently, COVINS \\cite{covins} was proposed, revisiting the most important components for centralized collaborative SLAM, shown to achieve significant gains in accuracy, while pushing the scalability of the architecture to up to 12 participating agents.\n\nDespite the improved performance, however, one of the major limitations of COVINS is that this performance is highly dependent on the choice of the VIO front-end employed onboard the agents.\nFor example, using an ORB-SLAM3\\cite{ORBSLAM3_TRO} front-end with the COVINS back-end yields exceptional performance, but the performance drops significantly when using an alternative state-of-the-art VIO front-end, such as VINS-mono \\cite{vins-mono}.\nThis is mainly due to the reliance of COVINS on large numbers of highly accurate map points for closing loops, which holds for ORB-SLAM3, for example, but not for VINS-mono. \nBy utilizing a generic multi-camera relative pose solver for map fusion and loop closures, we break the dependency on highly accurate map points and instead perform the operations using only 2D image observations.\nThis not only enables COVINS-G to perform well with virtually any VIO front-end, but also permits the use of more powerful image features, which, in turn, allows handling drastic viewpoint changes where previous systems failed.\n\n\n\\section{Methodology}\n\\label{sec:methodology}\nIn this section, we first provide an overview of the overall architecture in Section \\ref{sec:architecture}.\nSince our system is closely inspired by the COVINS framework, we then focus on the individual modules of the whole architecture, which are directly impacted by the contributions of this work.\n\n\\subsection{System Overview}\n\\label{sec:architecture}\nA summary of our system architecture can be seen in Figure \\ref{fig:architecture}.\nThe system consists of $n$ individual agents, which all run their own local VIO independently and are able to communicate their Keyframe
(KF) information to a central server.\nThe communication module is based on the approach of \\cite{covins}, and is shown to be resilient to potential message losses as well as network delays and occasional bottlenecks in the bandwidth.\nHowever, owing to the fact that each agent is running an independent VIO, even a complete loss of connection does not compromise the autonomy of an agent.\n\nThe central server, on the other hand, is responsible for maintaining the data of the individual agents, fusing the information from the agents, and carrying out computationally expensive tasks, such as global optimization.\nAfter decoding the received KF information, for every KF a visual descriptor based on the Bag-Of-Words (BoW) approach is computed and added to the KF Database, which contains the visual descriptors of all the collected KFs.\nFor every incoming KF, a place-recognition query is performed to find the closest visual matches (candidate KFs).\nBased on the visual similarity, the server attempts to compute the transformation between the newly received KF and the candidate KFs.\nIf the transformation between the KFs is successfully computed, it is used either to perform a loop closure within a map or, if the KFs belong to multiple maps, to fuse these maps based on the computed transformation.\n\nAt the start of a mission, each of the agents is initialized separately and has a dedicated map on the server.\nIn our system, the map closely resembles a Pose Graph, where nodes correspond to KFs and edges are based on the relative poses from the odometry and loop closures.\nAs soon as a loop is found across two agents, their maps are merged by transforming the map of one agent into the coordinate frame of the other based on the estimated loop transformation.\nUpon detection of any loop, a Pose Graph Optimization (PGO) is carried out over all connected maps.\n\n\nAs opposed to COVINS, which strongly relies on having a well-estimated and consistent set of map points, in our
approach we only operate on 2D keypoint information.\nThis opens up the possibility to use and combine all sorts of front-ends, even if no map points are accessible at all (e.g. in the case of an off-the-shelf tracking sensor such as the T265).\nIn order to be able to compute the loop constraints without access to map points, we make use of a multi-camera relative pose estimation algorithm \\cite{17PT} by treating neighboring KFs as a multi-camera system.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.47\\textwidth]{images\/covins_arch_v3.pdf}\n \\caption{Overview of the COVINS-G system architecture.\n }\n \\label{fig:architecture} \n \\vspace{-19pt}\n\\end{figure}\n\n\n\\subsection{Loop Closure and Map Fusion}\n\\label{sec:loop_closure}\nThe process of closing loops consists of two main steps: first, suitable candidates for the current query KF are detected, and second, the candidates are geometrically verified by estimating the relative pose between the candidate and the query KF.\nThe first step is handled by the Place Recognition module, which queries the KF Database for similar KFs based on the BoW \\cite{dbow2} image descriptor.\n\nThe second step, the geometrical verification, serves as an additional check beyond the visual appearance and yields a transformation between the query KF $\\text{KF}_q$ and the candidate KF $\\text{KF}_c$.\nHence, with $q$ being the sensor coordinate frame of the $\\text{KF}_q$ and $c$ being the coordinate frame of the sensor for $\\text{KF}_c$, the goal is to find the transformation $T_{cq}$, describing the transformation from frame $q$ into frame $c$.\nThe standard way of computing this transformation is to establish 3D-2D correspondences between the query KF and the map points associated with the candidate KF.
\nHowever, in COVINS-G the system does not have access to 3D map points, but only 2D keypoints.\nEstimating the transformation between $\\text{KF}_q$ and $\\text{KF}_c$ using standard 2D-2D relative pose estimation algorithms like the 5-point algorithm \\cite{5PT} would result in a scale ambiguity.\nInstead, we propose to not only use the query and candidate KFs but also use some of their neighboring KFs and the relative odometry transformations to form two sets of cameras which can be treated as a multi-camera system as illustrated in Figure \\ref{fig:17_PT}.\nUsing the 17-point algorithm \\cite{17PT}, the transformation between two such camera systems can be estimated at metric scale.\n\n\\subsubsection{The 17-point Algorithm}\n\\label{sec:seventeen_pt}\nThe 17-point (or 17-ray) algorithm \\cite{17PT} can be used to solve for the relative motion between two multi-camera systems. \nA multi-camera system consists of more than one camera where the relative transformations between the cameras are known. \nUsing 17 2D-2D point correspondences, the relative transformation between two viewpoints of a multi-camera system can be estimated. 
\nTo do so, the generalized epipolar constraints \\cite{generalized_epipolar} are used, which can be seen as an extension of the epipolar constraints to systems without a central point of projection.\nNote that in our setup we only have a monocular camera; however, we leverage the fact that we have a good estimate of the relative pose between adjacent KFs provided by the VIO front-end.\nHence for a $\\text{KF}_a$, we take the $m$ neighbors $\\text{KF}_{a_{n_i}}, i \\in [1, m]$ of $\\text{KF}_a$ and use the relative transformations $T_{aa_{n_{i}}}$ to build up a conceptual multi-camera system (Figure \\ref{fig:17_PT}).\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.45\\textwidth]{images\/17PT_v5.png}\n \\caption{The 17-point algorithm requires 17 2D point correspondences to estimate the relative transformation $T_{cq}$ between the multi-camera systems $S_{c}$ and $S_{q}$ (represented by blue dashed lines). }\n \\label{fig:17_PT}\n \\vspace{-15pt}\n\\end{figure}\n\n\n\\subsubsection{Implementation}\n\\label{sec:seventeen_pt_approach}\nTo estimate the transformation between the candidate KF ($\\text{KF}_c$) and the query KF ($\\text{KF}_q$), we extend the neighborhood to form two multi-camera systems $S_{q}$ and $S_{c}$ for the $\\text{KF}_q$ and the $\\text{KF}_c$, respectively.\nIn our setup, the two sets comprise one additional neighbor for $\\text{KF}_q$ and two additional neighbors for $\\text{KF}_c$, i.e.
$S_{q} = \\{ \\text{KF}_q, \\text{KF}_{q_{n_1}} \\}$ and $S_{c} = \\{ \\text{KF}_c, \\text{KF}_{c_{n_1}}, \\text{KF}_{c_{n_2}}\\}$.\nIn order to establish the 2D-2D candidate correspondences, we first perform a brute-force descriptor matching across all pairs of the two sets $S_{q}$ and $S_{c}$, leading to 6 sets of correspondences.\n\nInstead of using this set of correspondences directly with the 17-point algorithm inside a RANSAC framework, we first perform a pre-filtering step, as a high outlier ratio would render a RANSAC with 17 correspondences infeasible.\nThe pre-filtering consists of performing a standard 2D-2D RANSAC on each of the 6 sets of correspondences.\nTo reject bad candidates early on, we require a minimum number of inliers (30) in each of the 6 sets.\nOn the remaining inlier correspondences, we carry out a 17-point RANSAC, where during the sampling we ensure that candidates from all sets are present in order to prevent degenerate configurations (e.g. all 17 correspondences from a single set).\nFor both the pre-filtering as well as the 17-point RANSAC, we utilize the OpenGV library \\cite{opengv}.\n\nFor accepting a transformation, we require at least 100 inliers after RANSAC.\nTo quantify the uncertainty of the computed transformation, we use a sampling-based approach to compute a covariance matrix.\nThe samples are obtained by repeatedly selecting 17 inliers and computing the transformation using the 17-point algorithm.\nWe favor the sampling approach over an analytical one as it results in a more consistent covariance estimate, since unmodelled effects (e.g.
the relative pose uncertainty of the odometry) are better reflected in the samples.\n\n\\subsection{Pose Graph Optimization}\n\\label{sec:pgo}\nThe Pose Graph Optimization (PGO) in COVINS-G is where the information from the different agents gets fused and the drift in the corresponding trajectories can be corrected.\nThe PGO step is triggered every time a new constraint is added to the graph as a result of the loop detection.\nThe state that is optimized in the PGO consists of all KF poses that are in the corresponding graph. \nIn the following, we denote the pose of a KF $k$ by a rotation $q_{ws_k}$ and a translation $p_{ws_k}$, where $w$ represents the coordinate frame in which the poses are expressed and $s_k$ denotes the device coordinate frame at time-point $k$.\nHence, the state in PGO can be defined as $\\mathcal{S} = \\{ q_{ws_1}, p_{ws_1}, \\cdots, q_{ws_n}, p_{ws_n} \\}$, with $n$ being the number of keyframes in the graph.\n\nIn the actual PGO, we optimize the following objective:\n\\begin{equation}\n\\label{eq:pg}\n \\mathcal{S}^{*} = \\underset{\\mathcal{S}}{\\text{arg min}} \\{ \\sum_{k=1}^{n} \\sum_{l=1}^{q} \\norm{e_{kk+l}}^2_{W_{kk+l}} + \\sum_{i,j \\in \\mathcal{L}} \\varrho (\\norm{e_{ij}}^2_{W_{ij}}) \\},\n\\end{equation}\nwhere $\\norm{e}^2_{W} = e^TWe$ is the squared Mahalanobis distance with the information matrix $W$, $\\mathcal{L}$ denotes the set of all loop-closure edges and $\\varrho(\\cdot)$ denotes the use of a robust loss function, here, the Cauchy loss function.\nThe parameter $q$ represents the number of neighbor KFs added as odometry constraints (e.g. $q=1$ would correspond to having an edge only with the subsequent KF). 
\nThis is used to approximate correlations between poses, which are generally computed in a sliding-window fashion; in our implementation we set $q=4$.\nThe error terms $e_{ij}$ are of the form\n\\begin{equation}\ne_{ij} = \\begin{bmatrix}\n\\left(p_{ij} - \\hat{p_{ij}} \\right)^T & 2 \\cdot \\text{vec}(q_{ij}^{-1} \\cdot \\hat{q_{ij}})^T \n\\end{bmatrix}^{T},\n\\end{equation}\nwhere $\\text{vec}(q)$ extracts the vector part of a quaternion. \nThe variables denoted with $\\hat{\\cdot}$ correspond to measurements, e.g. from a loop closure transformation or the delta pose estimated by the odometry front-end.\nWe use the simplified notation $p_{ij}, q_{ij}$ to denote the translation and rotation of the delta pose between the pose of KF $i$ and $j$ given by $T_{ij} = T_{ws_i}^{-1} T_{ws_j}$.\nFor the optimization, Ceres' implementation of the Levenberg-Marquardt algorithm is used.\nThe information matrices $W$ for the loop constraints are obtained via the estimated covariance outlined in Section \\ref{sec:seventeen_pt_approach}.\nThe ones corresponding to the odometry edges are obtained using the expected accuracy of the corresponding front-end.\n\n\n\\section{Experiments and Discussions}\n\\label{sec:experiments}\nIn our evaluation, we demonstrate the generic nature of our collaborative back-end and its capability to use and combine all sorts of VIO front-ends.\nIn Sections \\ref{sec:tracking_cam} and \\ref{sec:feature_support} we show two applications where our back-end enables a collaborative estimation, which previous approaches cannot support.\nAll our experiments are performed using pre-recorded datasets which we play back in real-time.\nThe agents' front-ends are run on Intel NUC 7i7BNH @3.5 GHz hardware, while the server is run on a laptop with a Core i7-6700HQ @2.6 GHz.\nNote that in our setup the agents and the server are connected via a wireless network, in order for real communication to take place.\nThis setup allows us to have more comparable results over
multiple runs while still making use of a real wireless network, as would be the case during real-world deployment.\n\n\\subsection{Collaborative SLAM Estimation Accuracy}\n\\label{sec:euroc_eval}\nWe evaluate the accuracy of the collaborative SLAM estimate on the EuRoC Dataset\\cite{euroc} using the Machine Hall (MH) and the Vicon Room 1 (V1) sequences to establish a collaborative estimation scenario with three to five participating agents. \nWe use various combinations of front-ends with our back-end and compare the results against COVINS \\cite{covins} as well as the state-of-the-art multi-session capabilities of VINS-mono\\cite{vins-mono} and ORB-SLAM3 \\cite{ORBSLAM3_TRO}.\nAs multi-session methods only support one agent at a time, we process the datasets sequentially, in contrast to the collaborative SLAM approaches, where all agents' processes are run in parallel.\n\nTo showcase the flexibility of our approach, we perform the evaluation of our back-end with different front-end combinations.\nIn the first experimental setup, all agents are operated using the same front-end, which in our experiments is either the VINS-mono or the ORB-SLAM3 front-end.\nIn subsequent experiments, we test COVINS-G using different front-ends for the different agents, namely OpenVINS\\cite{Openvins}, VINS-mono, ORB-SLAM3 and SVO-pro\\cite{Forster17troSVO}.\nSince COVINS performs a Global Bundle Adjustment (GBA) as a post-processing step at the end of every run, we do not directly compare against it but rather include its performance here as a reference.\nFor more appropriate comparisons with COVINS-G, we include the COVINS result without the additional GBA step.\nThe averaged results over 5 runs are summarized in Table \\ref{tab:multi_ag}.\n\nUsing the ORB-SLAM3 front-end, we achieve on-par performance with the ORB-SLAM3 multi-session back-end, even though in our approach we do not perform map re-use as in ORB-SLAM3.\nAs ORB-SLAM3 performs a GBA upon every
loop detected, it can potentially reach very high accuracy (as COVINS demonstrates with GBA enabled); however, as long as it is able to find sufficient correspondences in the re-localization, no GBA gets triggered.\nTherefore, the overall accuracy is also coupled to the time at which the last GBA step was performed.\nCompared to COVINS without GBA, our approach reduces the error almost by a factor of two.\nThis can be explained by the fact that in COVINS-G we are able to close more loops, as also shown in Table \ref{tab:computation}, and also because the covariance of the loop transformations gets explicitly estimated instead of using fixed heuristics.\n\nThe comparison with the VINS-mono front-end showcases a similar outcome: COVINS-G performs similarly to the VINS-mono multi-session back-end.\nThe small gap in performance can be explained by the fact that VINS-mono incorporates a larger number of loops, whereas in COVINS-G we set a minimum number of KFs between consecutive loops in order to reduce the computational burden.\nCompared to COVINS with the VINS-mono front-end, COVINS-G shows a significant improvement by a factor of around 3.\nThis is because COVINS only detects and closes a few loops, because its loop-closure pipeline is tailored to a high-quality map with a large number of map points.\nDue to this weak connection within and across the trajectories, even the GBA is unable to reduce the error.\nWith COVINS-G, even with a mix of different front-ends, we can see that the performance is in a similar range as when using the VINS-mono front-end, demonstrating the effectiveness of our generic approach.\n\n\begin{table*}[]\n\caption{Evaluation of joint-trajectory estimates for different methods on the EuRoC Dataset\cite{euroc} (lowest error in bold) reported as the average trajectory error over 5 runs each. 
$^\#$For the heterogeneous front-end, agents 1-5 utilize OpenVINS\cite{Openvins}, VINS-mono\cite{vins-mono}, ORB-SLAM3\cite{ORBSLAM3_TRO}, SVO-Pro\cite{Forster17troSVO} and VINS-mono\cite{vins-mono} front-ends, respectively. As *COVINS\cite{covins} performs a Global Bundle Adjustment (GBA) step at the end of the run, it has been included here for reference only and is excluded from our comparison.\n}\n\label{tab:multi_ag}\n\centering\n\def1.2{1.2}\n\n\begin{tabular}{|cc|ccc|}\n\hline\n\multicolumn{2}{|c|}{\textbf{Method}} & \multicolumn{3}{c|}{\textbf{Translational RMSE (m)}} \\ \hline\n\multicolumn{1}{|c|}{} & & \multicolumn{1}{c|}{MH01-MH03} & \multicolumn{1}{c|}{MH01-MH05} & V101-V103 \\ \cline{3-5} \n\multicolumn{1}{|c|}{\multirow{-2}{*}{Front-end}} & \multirow{-2}{*}{Back-end} & \multicolumn{1}{c|}{(3 Agents)} & \multicolumn{1}{c|}{(5 Agents)} & (3 Agents) \\ \hline\n\n\multicolumn{1}{|c|}{ORB-SLAM3\cite{ORBSLAM3_TRO}} & ORB-SLAM3 MS\cite{ORBSLAM3_TRO} & \multicolumn{1}{c|}{0.041} & \multicolumn{1}{c|}{0.082} & \textbf{0.048} \\ \hline\n\multicolumn{1}{|c|}{ORB-SLAM3\cite{ORBSLAM3_TRO}} & COVINS (No GBA) \cite{covins} & \multicolumn{1}{c|}{0.075} & \multicolumn{1}{c|}{0.119} & 0.130 \\ \hline\n\multicolumn{1}{|c|}{{\color[HTML]{666666} ORB-SLAM3\cite{ORBSLAM3_TRO}}} & {\color[HTML]{666666} COVINS (GBA) \cite{covins}*} & \multicolumn{1}{c|}{{\color[HTML]{666666} 0.024}} & \multicolumn{1}{c|}{{\color[HTML]{666666} 0.036}} & {\color[HTML]{666666} 0.042} \\ \hline\n\multicolumn{1}{|c|}{ORB-SLAM3\cite{ORBSLAM3_TRO}} & Ours & \multicolumn{1}{c|}{\textbf{0.040}} & \multicolumn{1}{c|}{\textbf{0.064}} & 0.067 \\ \hline \hline\n\n\multicolumn{1}{|c|}{VINS-mono \cite{vins-mono}} & VINS MS\cite{vins-mono} & \multicolumn{1}{c|}{\textbf{0.062}} & \multicolumn{1}{c|}{0.100} & \textbf{0.076} \\ \hline\n\multicolumn{1}{|c|}{VINS-mono \cite{vins-mono}} & COVINS (No GBA) \cite{covins} & 
\multicolumn{1}{c|}{0.259} & \multicolumn{1}{c|}{0.305} & 0.183 \\ \hline\n\multicolumn{1}{|c|}{{\color[HTML]{666666} VINS-mono \cite{vins-mono}}} & { \color[HTML]{666666} COVINS (GBA) \cite{covins}*} & \multicolumn{1}{c|}{ {\color[HTML]{666666} 0.261}} & \multicolumn{1}{c|}{{\color[HTML]{666666} 0.321}} & {\color[HTML]{666666} 0.183} \\ \hline\n\multicolumn{1}{|c|}{VINS-mono\cite{vins-mono}} & Ours & \multicolumn{1}{c|}{0.081} & \multicolumn{1}{c|}{\textbf{0.095}} & 0.090 \\ \hline \hline\n\n\multicolumn{1}{|c|}{Heterogeneous$^\#$} & Ours & \multicolumn{1}{c|}{0.081} & \multicolumn{1}{c|}{0.090} & 0.088 \\ \hline\n\end{tabular}\n\vspace{-10pt}\n\end{table*}\n\n\subsection{Large-Scale Outdoor Experiments}\n\label{sec:outdoor_expt}\nTo demonstrate the applicability of our back-end to large-scale scenarios, we captured a dataset consisting of 4 sequences using a hand-held setup with an Intel Realsense D455.\nThe dataset was recorded by walking on the sidewalks around the campus of ETH Zurich in the center of Zurich; with a combined trajectory length of around 2400$m$, it covers an area of approximately 67500$m^2$.\nFor this experiment, the VINS-mono front-end was used on all sequences.\nAs we do not have a ground truth trajectory for the dataset, we superimposed the estimated combined trajectory on a satellite image of the area as illustrated in Figure \ref{fig:outdoor_expt}. \nAs can be seen, the different trajectories are well aligned with the road structure. 
\nA complete visualization of the whole experiment can be found in the supplementary video.\n\n\begin{figure}[h]\n \centering\n \includegraphics[width=0.48\textwidth]{images\/expt_outdoor_largescalev3.png}\n \caption{Joint trajectory estimates for 4 agents superimposed on the satellite image of our large-scale outdoor experiment with a combined trajectory length of 2400 $m$.}\n \label{fig:outdoor_expt} \n \vspace{-12pt}\n\end{figure}\n\n\subsection{Tracking Camera Support}\n\label{sec:tracking_cam}\nThis experiment aims at highlighting the generalization capabilities of our system with respect to the front-end that is used.\nTo do so, we captured a dataset inside an office floor with two different sensors: an Intel Realsense D455 camera and an Intel Realsense T265 tracking camera.\nFor the D455 we used VINS-mono as the front-end, whereas for the T265 we used the odometry estimate that is provided directly by the sensor.\nAs this sensor only provides its images and the corresponding poses but does not offer access to its internal state, we use our ROS-based front-end wrapper, which does a motion-based keyframe selection, detects feature points, and creates and communicates the KF messages to the back-end.\n\nThe outcome of the experiment is illustrated in Figure \ref{fig:tracking}, where one can see an example image for each of the sensors and the combined trajectory overlaid with the floor plan of the office (the complete experiment is visualized in the supplementary video).\nAs can be seen, the estimated trajectory fits the floor plan, indicating its consistency.\nIn addition to the different odometry sources, the two sensors also have two rather different lenses, rendering matching between the two systems more difficult; however, our proposed back-end is able to merge the estimates nonetheless.\n\n\begin{figure}[h]\n \centering\n \includegraphics[width=0.48\textwidth]{images\/expt_trackingv3.png}\n \caption{Joint trajectory estimate for a 
heterogeneous system of 2 agents superimposed on the floor plan of the \n building. Map points are shown for visualization purposes only.}\n \label{fig:tracking} \n \vspace{-9pt}\n\end{figure}\n\n\subsection{Alternative Feature Descriptor Support}\n\label{sec:feature_support}\nVisual features are designed to be tolerant to illumination changes, perspective distortions, as well as viewpoint changes.\nFor runtime efficiency, however, real-time SLAM systems are mainly restricted to highly computationally efficient binary features such as ORB \cite{ORB} or BRISK \cite{Leutenegger:etal:ICCV2011}.\nIt is accepted that, to date, binary features are not able to match the robustness of more expensive descriptors such as SIFT \cite{SIFT}.\nAs our approach is largely decoupled from the front-end, we are able to make use of more expensive features, also because we do not require them to be computed at frame rate, thus allowing us to potentially match trajectories with much larger viewpoint differences.\n\nIn this experiment, we recorded another outdoor dataset with two hand-held trajectories walking in parallel on two sides of a street.\nWe modified our front-end wrapper to detect SIFT instead of ORB features and run the back-end using these SIFT features as well.\nWhile no overlap could be detected to merge the trajectories using ORB features, the more powerful SIFT features allowed us to successfully detect loops across the agents and align the trajectories.\nThe comparison of this experiment can be found in the accompanying video.\nSuch an improved tolerance to large view-point changes allows the system to be used in scenarios where larger view-point changes are inherent to the use case, for example, the collaboration between aerial and ground robots.\n\n\subsection{Computational and Communication Requirements}\n\label{sec:computation}\n\begin{table}[]\n\caption{Comparison of the runtime for the loop computation and the average 
communication traffic.}\n\\label{tab:computation}\n\\centering\n\\def1.2{1.2}\n\\begin{tabular}{|c|c|c|c|}\n\\hline\n\\multicolumn{1}{|l|}{\\textbf{}} & \\textbf{\\begin{tabular}[c]{@{}c@{}}\\# Loops\\\\ detected\\end{tabular}} & \\textbf{\\begin{tabular}[c]{@{}c@{}}Computation Time \\\\ per loop\\end{tabular}} & \\textbf{\\begin{tabular}[c]{@{}c@{}}Network Traffic\\\\ per Agent\\end{tabular}} \\\\ \\hline\nOurs & 75 & 213 $\\pm$ 171 ms & 179.74 kB\/s \\\\ \\hline\nCOVINS \\cite{covins} & 57 & 35 $\\pm$ 23 ms & 486.41 kB\/s \\\\ \\hline\n\\end{tabular}\n\\vspace{-15pt}\n\\end{table}\n\nThe statistics for communication and loop transformation computation are generated and compared with COVINS for the experiment performed on EuRoC MH dataset with 5 agents, each running an ORB-SLAM3 front-end. \nThe computation time for estimating the loop transformation using our approach (including feature matching, 2D-2D pre-filtering, 17-point RANSAC and covariance matrix computation) is compared against the standard approach used in COVINS (feature matching + PnP RANSAC) and summarized in Table \\ref{tab:computation}. \nThough the computation time for our method is around an order of magnitude higher than for COVINS, the average computation time per loop is around $213 ms$ which still makes it possible to operate in real-time for up to 5 loop computations per second. \nFor this experiment, a total of 75 loops were detected for a mission time of around 135 seconds, resulting in 0.55 loops per second over all agents. 
\nThe network traffic for our approach is around three times lower than that of COVINS.\nThis is due to the fact that in our back-end we require only 2D keypoints and their descriptors for each KF, unlike COVINS, which also requires sending the map points as well as the data required to perform the IMU pre-integration.\n\n\n\section{Conclusion}\n\label{sec:conclusion}\nIn this work, we present a front-end-agnostic back-end for collaborative visual-inertial SLAM.\nBy making use of a multi-camera relative pose estimator for estimating loop-closure transformations, our system is able to work using only 2D keypoint information.\nThis allows our collaborative back-end to be compatible with virtually any VIO front-end with minimal to no modifications to it.\nIn our experimental evaluation, we achieve at least on-par accuracy with state-of-the-art multi-session and collaborative SLAM systems, which use back-ends specifically designed for the respective front-end.\nOwing to the decoupled nature of the data used in our back-end, we can make use of more powerful keypoint descriptors like SIFT, allowing us to close loops and merge trajectories with drastic viewpoint changes, which cannot be handled by state-of-the-art systems.\nOur open-source implementation of the system enables users to unlock the capabilities of a collaborative SLAM system without the need to change their state estimation pipeline of choice.\n\nIn future work, we would like to make more explicit use of the information coming from our VIO, in particular the gravity alignment, by replacing the 17-point algorithm with a minimal 4-point solver which leverages the gravity information \cite{4PT_gravity}.\nThis would speed up the relative pose estimation step significantly, as it requires drawing fewer samples within the RANSAC computation.\nIn the future, we would also like to deploy this system on a team of robots for multi-robot applications, such as exploration and 3D reconstruction of large 
areas.\n\n \n \n \n \n \n\n\n\n\n\n\n\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzalcv b/data_all_eng_slimpj/shuffled/split2/finalzzalcv new file mode 100644 index 0000000000000000000000000000000000000000..f8b4db5f2b63f13da641341187d3f1d5d0066e15 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzalcv @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nInteracting particle systems are relevant to wide-ranging phenomena in \nphysics, chemistry, biophysics, ecology, etc. The concept of\n'particles' is used in a broad sense, that is \n'particles' can be atoms, molecules, spins, individuals, etc, and\nwhilst attention is\ndrawn to the interactions among particles no attempt is made in order to\nachieve a detailed description (e. g. quantum mechanical) of the particle \nitself. Therefore, due to interactions, the occurrence of complex behavior, \nsuch as phase transitions, self-organization, chaos, bistability, etc, may \nbe observed \\cite{Ligget}. \n\nWithin this context, an active field of research is the study of\nfar-from-equilibrium reaction systems \\cite{ZGB,Eze1}. Irreversible\nphase \ntransitions (IPT) between active regimes (where reactions are\nsustained) and absorbing states (where \nreactions are no longer possible) have been reported in a great variety of \nmodels such as the Ziff, Gulari, and Barshad (ZGB) model for the\ncatalytic oxidation of CO \\cite{ZGB}, the \ndimer-dimer model \\cite{Eze2}, the contact process \\cite{Jens1},\nforest-fire models \\cite{Eze3}, etc (for a recent review see\ne. g. \\cite{Eze1}). According to the Janssen-Grassberger conjecture\n\\cite{Janss,Grass1}, irreversible reaction systems that exhibit a\nphase transition to a single absorbing state characterized by a scalar\norder parameter belong to the Directed Percolation (DP) universality\nclass. 
This conjecture, stated a long time ago for unique\nabsorbing states, has \nbeen further generalized for the cases where such states are\nnon-unique \n\cite{Jens2,Eze4}. A special case corresponds to non-equilibrium systems\nwhere, provided an IPT exists, there is in addition a local or global\nconservation of particles modulo two, such as the branching\nand annihilating random walks with an even number of offspring\n\cite{BARWO,Jens4}. In these cases a new universality class emerges, commonly called the\nparity-conserving (PC) class, which is due\nto the existence of two statistically equivalent absorbing states at\nthe critical point \cite{Kolea}. However, global conservation of\nparticles modulo two\nmay also lead to exponents in the PC class only when local spontaneous annihilation\n($1X \rightarrow 0$) is highly inhibited. Then, at a coarse-grained\nlevel, the relevant surviving processes are those conserving\nparity. In other words, parity conservation can \nbe restored at a coarse-grained level \cite{Haye,Roberto}. A nice example where global\nparity conservation still leads to DP exponents is given by Inui et al. \cite{Ponja}. \nIt is clear in this case that spontaneous annihilation must be taken\ninto account.\n\n\nIPT are studied largely by means of Monte Carlo simulations\nand mean-field approaches. Recent developments of field-theoretic\nrenormalization group techniques have provided a new theoretical framework\nwhere non-equilibrium phase transitions can be studied \cite{Uwe}. These\ntechniques are able to identify the relevant processes in a given\nuniversality class although the quantitative predictions are still\npoor. \n\nSo far, most of the simulations have been performed using\n\underline{discrete} lattices where each particle fills a single site\non the lattice and neighboring particles may react with a certain\nprobability. In contrast, our knowledge of the behavior of\nirreversible reaction systems in continuous media is rather poor. 
In\norder to stress some dramatic differences that may arise for a reaction system\nwhen it is simulated off lattice, let us consider the simplest case of\nthe $B + B \rightarrow 0$ irreversible reaction, which proceeds according\nto the Langmuir-Hinshelwood mechanism:\n\begin{eqnarray}\n & B(g) + S \rightarrow B(a) \\ \nonumber \n & B(g) + S \rightarrow B(a) \\ \n & B(a) + B(a) \rightarrow 0 + 2 S \nonumber \n\end{eqnarray}\nwhere $g$ and $a$ refer to the gas and adsorbed phases, respectively,\nwhile $S$ represents a site on the surface. At first, we assume that\n$B$-species adsorbed on nearest neighbor sites react with unitary\nprobability ($P_r = 1$). If we used a discrete lattice, reactions\nwould be sustained indefinitely, i. e. the system could not irreversibly\nevolve into an absorbing state. However, considering a continuous\nmedium, the random adsorption of $B$ particles of finite size $\sigma$\ncauses the formation of several interparticle gaps of size smaller\nthan $\sigma$. So, in the infinite time limit ($t \rightarrow \infty$) the\nsample becomes imperfectly covered by $B$-species separated by small\ninterparticle gaps. Reaction is no longer possible and the system\nbecomes irreversibly trapped into an absorbing state (infinitely\ndegenerate). The maximum jamming coverage attained in one dimension\nis $\Theta_j \approx 0.74759$, which corresponds to the so-called car\nparking problem \cite{Jim,car}. \n\nIn this paper we show that by introducing the adsorption of species\nwith transient mobility in a continuous one-dimensional medium, it is\npossible to open a window where reactions are sustained. However, by \ntuning the external parameter which controls the transient mobility\nof the particles it is possible to irreversibly drive the system into\nan absorbing state. 
\n\nIt should be mentioned that the study of reactions\nof atoms in the gas phase possessing thermal energy with adsorbed\natomic CO species on metal and \nsemiconductor surfaces is a topic of current interest. In contrast to\nthermally activated reactions among adsorbed species, i. e., the so\ncalled Langmuir-Hinshelwood mechanism, these kind of reactions take\nplace under far-from-equilibrium conditions. Consequently, the\ndetermination of the underlying mechanism as well as the understanding\nof the dynamic behavior is challenging work. Within this context,\nvery recently Kim {\\it et al} \\cite{Kim} have reported experimental studies\nof the reaction of hot $H$-atoms with adsorbed $D$-atoms (for further\nexperimental works see references in \\cite{Kim}).\n\nIt should be noted that from the theoretical point of view a number of \nrelated models for random sequential adsorption with diffusion\n\\cite{priv,eis} and desorption \\cite{stinc} have also been proposed and studied\n(for a review see also \\cite{Jim}). However, interest in\nsuch studies is addressed to the asymptotic approach to the jammed state.\nIn contrast, in this paper our interest is focused on the irreversible\ncritical behaviour of a reactive system.\n\nSo this work\nis devoted to the characterization of \nsuch IPT in continuous media and it is organized as follows: section\n2 gives the description of the model and the simulation technique. In\nsection 3 we discuss the results while conclusions and remarks are\npresented in section 4.\n\n\\section{The Model and the Monte Carlo simulation method}\n\nIn this paper, we study a 1D off-lattice\nadsorption-reaction model in which particles of size $\\sigma$ undergo a \nballistic flight just after depositing on the substrate.\nThe system evolves in time under the following local rules. (i) A position $x$\nis randomly selected on the substrate. 
If the interval $[x - \\sigma\/2, x + \\sigma\/2]$ is \nempty, then the adsorption trial is successful, otherwise it is rejected. So, it is\nclear that double occupancy of positions is forbidden. (ii) Right after a successful\nadsorption trial, a random direction is selected (left or right). Then, the particle \nundergoes a ballistic flight in the previously selected direction up to a distance $R$ \nfrom the adsorption position $x$, provided that no other already deposited particle is found \nalong the way. (iii) If during the flight one particle hits another\npreviously adsorbed particle which is \nalready at rest on the substrate, the following alternatives can occur: (1) the annihilation \n($B+B \\rightarrow 0$) occurs with probability $P_r$. Then, both particles react and leave \nthe system. (2) Particles do not react (with probability $(1- P_r)$), and the flying particle \nis frozen at the collision point.\n\nThe ballistic flight mimics 'hot monomer' adsorption, allowing the incoming particle to \ntransform its energy into degrees of freedom parallel to the\nsubstratum. The length of the flight $R$ is finite in order to account for\nfrictional dissipation. The model has two externally tunable\nparameters, namely $R$ and $P_r$. For $P_r = 0$ one recovers the 'hot\nmonomer' random sequential adsorption model \\cite{Daniel} while for\n$R = 0$ and $P_r = 0$ one has the 1d car parking problem \\cite{car}.\n\nIn order to simulate a continuous medium on a digital computer, one\nactually considers a discrete lattice. However, each site of size\n$\\sigma$ is subdivided into $2^{64}$ different adsorption\npositions. This high degree of discretization has provided excellent\nresults when compared with the exact analytic solution of a related\nproblem \\cite{Daniel}. \n\nPreliminary results show that the system can undergo continuous IPT\nbetween a stationary \nreactive state and an absorbing state without reactions when varying\nthe parameters. 
This can easily be tested by considering the case\n$P_r = 1$ and $R = 0$ ($R > 1$), which gives an absorbing (reactive)\nstate, respectively. \nIt should be pointed out that continuous IPT are dominated by\nfluctuations. Consequently, in a finite\nsystem and close to the critical point, the stationary state of the reactive\nphase can irreversibly evolve into the saturated state (absorbing state). Due\nto this circumstance, the precise determination of both critical points and\ncritical exponents is rather difficult. However, this shortcoming can be \navoided by performing an epidemic analysis. \nFor this purpose one starts, at $t=0$, with a configuration close to\none of the absorbing states. It is clear that different absorbing states will normally differ\nin the density of monomers. It should be pointed out that the dynamical critical behavior\nof systems with infinitely many absorbing configurations is expected to depend upon the\ninitial density of inactive particles \cite{Jens2,Mendes}. However, static critical behavior appears\nto be independent of it. \nFrom the set of possible initial densities, \na value $\rho_n$ is particularly important, namely the stationary density of inactive particles\nwhich results after the natural evolution of the system in the subcritical region has finished.\nThe value $\rho_n$ is relevant, since only for this value the natural dynamical critical \nbehavior emerges. Preliminary simulation results show that $\rho_n$ depends on the parameter \n$P_r$, but its values have not been included in this work for the sake of space.\nThe dependence of the critical behavior on the set of initial densities is \nthe subject of an ongoing investigation.\n\nConsequently, the initial state for the epidemic analysis is \ngenerated by the evolution of the system very close to the critical\npoint until poisoning is achieved. 
After generating this stationary\npoisoned state, we remove one or two particles \nfrom the middle of the system in order to create a small active\narea where adsorption is now possible. It should be noted that an\nempty area is considered to be active \nif its length is greater than or equal to $\sigma$. Then, the time evolution of\nthe system is analyzed by measuring the following properties: (i) the\naverage amount of active area at time $t$, $A(t)$; (ii) the survival\nprobability of the active area at time $t$, $P_s (t)$; and (iii) the average\ndistance over which the active area has spread at time $t$,\n$D(t)$. Finite-size effects are absent because the system is taken\nlarge enough to avoid the presence of active area at the boundaries. \nFor this purpose a sample of $10^4 \sigma$ is enough. Averages are taken over \n$10^5$ to $10^6$ different samples. Near the critical point, the\namount of active area is often very small. Then, we improve the\nefficiency of the algorithm by keeping a list of the positions where\nthere is active area. Time is incremented by $1\/a(t)$, where $a(t)$\nis the amount of active area at time $t$. The time evolution of the active\narea is monitored up to $t =10^5$. At criticality, the following\nscaling behavior holds: \n\begin{equation}\n\label{Eta}\nA(t) \propto t^{\eta},\n\end{equation}\n\begin{equation}\n\label{Delta}\nP_s(t) \propto t^{-\delta},\n\end{equation}\nand\n\begin{equation}\n\label{Zeta}\nD(t) \propto t^{z\/2}\n\end{equation}\nwhere $\delta $, $\eta $ and $z$ are \underline{dynamic} exponents. \n\n\section{Results and discussion}\nPreliminary simulations show that for $P_r = 1$ it is possible to\nachieve a stationary reactive state in the large $R$ limit (i. e. for\n$R \ge 2$) while in the opposite limit ($R \le 1.5$) the system\nbecomes irreversibly saturated by $B$-species. In order to obtain a\nquantitative description of the IPT we have performed epidemic studies\naround the critical edge. 
Figure 1 (a - c) shows log-log plots of $A$, $P_s$\nand $D$ versus the time $t$, obtained for different parameter values. The \nthree plots exhibit a power law behavior which is the signature of a\ncritical state. Using smaller (greater) $R$-values we observe slight\nupwards (downwards) deviations in three plots which indicate\nsupercritical (subcritical) behavior (these results are not shown for\nthe sake of clarity). Then, the critical exponents obtained by\nregressions are:\n\\begin{equation}\n\\label{expon}\n\\eta = 0.308 \\pm 0.004 \\; \\; \\; \\delta = 0.165 \\pm 0.003 \\; \\;\n\\; z\/2 = 0.625 \\pm 0.002 \n\\end{equation}\nThese values are in excellent agreement with the exponents\ncorresponding to the DP universality class in $1+1$\ndimensions \\cite{Janss,Grass1}. Recently, extended series expansions calculations \\cite{series}\nhave provided very accurate values for the DP critical exponents, namely\n\\begin{equation}\n\\label{dpexpon}\n\\eta = 0.31368(4) \\; \\; \\; \\delta = 0.15947(3) \\; \\; \\; z\/2 = 0.63261(2)\n\\end{equation}\nTherefore we conclude that the studied adsorption-reaction model on a\n\\underline{continuous medium} belongs to the DP universality class\nlike many other systems already studied on \\underline{discrete\nlattices}. It should be noticed that the present model has infinitely\nmany absorbing states, so as in the case of the dimer-dimer model\n\\cite{Eze2} the DP conjecture holds for non-unique absorbing states\n\\cite{Janss} at least as long as the absorbing states can be solely \ncharacterized by the vanishing of a single scalar order parameter. \n\nWe have also studied the case of imperfect reaction, i. e. $P_r <\n1$. Figure 2 shows a plot of the phase diagram. The phase boundary\ncurve was determined by means of an epidemic analysis, as is shown\nin figure 1. 
The obtained critical exponents are\n\begin{equation}\n\label{imexpon}\n\eta = 0.312 \pm 0.004 \; \; \; \delta = 0.157 \pm 0.003\; \; \; z\/2 =\n0.631 \pm 0.001 \n\end{equation}\nOnce again, these exponents are in good agreement with those corresponding\nto DP. Scanning the whole critical curve, we obtain second-order IPTs\nthat belong to the DP universality class. However, the special case\n$R \rightarrow \infty$ merits further discussion. For $P_r = 1$ ($P_r =\n0$) the system evolves towards a reactive (absorbing) state,\nrespectively. Then, a transition is expected at some intermediate\n$P_r$ value. In this case, the time evolution of the active area can\nbe described by means of a mean field equation. In fact, the active\narea $A(t)$ will grow (decrease) proportionally to $A(t) P_r$ ($A(t) (1 - P_r)$),\nrespectively; so\n\begin{equation}\n\label{mfield1}\n\frac{d A}{dt} = A(t) P_r - A(t) (1 - P_r)\n\end{equation}\nwhich leads to\n\begin{equation}\n\label{mfield3}\nA(t) = A_{0} \; e^{(2 P_r - 1)t} \n\end{equation}\nTherefore, $P_r = 1\/2$ is a critical value such that for $P_r > 1\/2$\n($P_r < 1\/2$) the active area will increase (decrease) exponentially\nin time, while just at criticality $A(t)$ will remain constant ($A(t)\n= A_0$), which is consistent with a mean field exponent $\eta_{MF} =\n0$. The predicted behavior is confirmed by means of simulations as is shown\nin figure 3. By means of linear regressions the following\nexponents are obtained for $P_{r} = \frac{1}{2}$:\n\begin{equation}\n\label{imexp}\n\eta \approx 0.0 \; \; \; \delta \approx 1.0 \; \; \; z\/2 \approx 1.0\n\end{equation}\nThen, our mean field estimate for $\eta$ is in good agreement with the\nsimulation results. 
Regrettably, we were unable to derive the mean\nfield values for the remaining exponents.\n\nWe conclude that the particular point $P_r =\n1\/2$, $R \rightarrow \infty$ is a first-order point (see figure 2), which is not in the DP class that characterizes the whole critical curve. \n\nIn the following, we give theoretical arguments by means of a coarse-grained \nLangevin description that support the result concerning the\nuniversality class of the model. First, note that the normalized variables\nneeded to characterize the configurations of the system are the amount\nof active area $a(x,t)$, the number of monomers in the system $n(x,t)$ and\nthe amount of inactive area $v(x,t)$. These variables are not\nindependent since we have the constraint $a(x,t) + n(x,t) + v(x,t) = 1$. It\nis clear from the above discussion that the time evolution of the\nsystem ends when $a(x,t) = 0$. Since $a(x) \rightarrow 0$ at\ncriticality, this quantity can be chosen as the order parameter of\nthe system. Then, we will try to describe the time\nevolution of the system near the critical point by means of two\ncoupled Langevin equations, for instance, one for $a(x,t)$ and the\nother for $n(x,t)$. Due to the nature of the absorbing configurations,\neach term of these equations must vanish when $a(x,t) \rightarrow\n0$. \\ Let us consider the microscopic processes which are \nrelevant to characterize the critical behavior of the system. First of\nall, both diffusion of $a(x)$ and $n(x)$ can be interpreted as\nsuccessive adsorption-reaction processes. Within a one-site\ndescription, both $n(x)$ and $a(x)$ will increase proportionally to\n$a(x)$. The reaction processes will contribute to the equations with a\ncoupling term proportional to $a(x)n(x)$. It is also clear that monomer\nflights will introduce terms proportional to $a(x)^2$, $a(x)^3$,\netc. 
Since only the lower-order terms are relevant for a\nrenormalization group treatment \cite{cardyb}, we just keep the term\nproportional to $a(x)^2$. Then, we can write down the following\nLangevin equations: \n\begin{equation}\n\label{lang}\n\partial n(x,t) \/ \partial t = k_{1} \nabla^{2} a(x,t) + k_{2} a(x,t)\n-k_{3} a(x,t)^{2} - k_{4} a(x,t) n(x,t) + \eta_{1} (x,t) \n\end{equation}\n\n\begin{equation}\n\label{lang1}\n\partial a(x,t) \/ \partial t = u_{1} \nabla^{2} a(x,t) + u_{2} a(x,t)\n-u_{3} a(x,t)^{2} - u_{4} a(x,t) n(x,t) + \eta_{2} (x,t) \n\end{equation}\nwhere $\eta_{1} (x,t)$ and $\eta_{2} (x,t)$ are uncorrelated noises\nproportional to $\sqrt{a(x,t)}$, and $k_{i}$ and $u_{i}$ are coefficients. \nThis system of coupled Langevin equations is similar to that obtained\nfor the `pair contact process' \cite{Jens1,Jens2}, which is one\nof the prototype systems with multiple absorbing states. Mu\~{n}oz\n{\it et al.} \cite{munioz1} have shown that for large $t$ the equation\ncorresponding to the activity (equation (\ref{lang1}) for the present model)\nreduces to the Langevin representation of DP. Then, our simulation\nresults are consistent with the above-presented theoretical\narguments. The same authors have also shown that systems with many\navailable absorbing configurations display strong memory effects\nthat may lead to anomalous scaling. In addition to this, Mendes {\it et al.} \cite{Mendes} have \nproposed a generalized hyperscaling relation which has proved to be valid in systems with \nmultiple absorbing configurations.\nSimulation results on several\nlattice models with infinitely-many absorbing states support both\ntheoretical arguments \cite{Mendes,Jens2,dick}. The role that initial states play in \nthe temporal evolution of the present model is under investigation. 
\n\n\\section{Conclusions and final remarks}\nA model for the irreversible adsorption-reaction of a single species\non a continuous medium is studied by means of numerical\nsimulations. We would like to stress and comment upon the following\ninteresting features of the system: \n(i) In contrast to standard (reversible) transitions, non-equilibrium\nIPT can happen in one dimension. (ii) The studied adsorption-reaction model\nclearly shows interesting new effects that may arise when a\nprocess is modelled on a continuous medium. Since the system always reaches a\nstationary reactive state when simulated on a discrete lattice but a\nfinal poisoned state can be observed on a continuous one (e. g. for\n$P_r = 1$ and $R = 0$), one may expect a crossover behaviour when the\n`discretization degree' of the surface is tuned from one extreme to\nthe other. This can be achieved by considering the adsorption on\n\\underline{discrete lattices} of species of arbitrary length $r$,\ni. e. $r$-mers. We found that the reactive random sequential\nadsorption (RRSA) of dimers ($r = 2$) always\nleads to a reactive steady state, whose stationary coverage is\nclose to $\\Theta \\approx 0.5$, and no poisoning is observed. However,\nthe RRSA of trimers ($r = 3$) causes irreversible poisoning of the lattice with\nan average saturation coverage close to $\\Theta_{ST} \\approx 0.7$. In\nthe case of dimers, two absorbing states of the type \n\\begin{eqnarray}\n...BBVBBVBBVBB... \\\\ \\nonumber\n...VBBVBBVBBVBBV... \\nonumber\n\\end{eqnarray}\nwhere $V$ is an empty site, can be expected. So, during the RRSA the formation\nof several interfaces of the type $...BBVBBVVBBVBB...$ takes place. Due to\ncoarsening, we expect that in the asymptotic limit ($t \\rightarrow\n\\infty$) both absorbing states will form two semi-infinite domains\nseparated by an interface. The competition between these domains will\nkeep the system in the reactive state forever. 
Just by increasing \n$r$ from $2$ to $3$, an infinite number of absorbing configurations\nappear in the system. So, the arguments developed above no longer hold\nand poisoning is observed. Consequently, an IPT is located between $r = 2$ and $r\n= 3$, and its precise location requires the study of the\nadsorption-reaction problem with particles of non-integer\nlength. However, adsorption of $r$-mers of length $r = 2 + \\epsilon$\nwould cause the argument of the two competing absorbing states to\nfail. Therefore, we expect that the critical size, i. e. the\n`discretization degree', is $2$. (iii) To the best of our knowledge, this is the\nfirst off-lattice model which exhibits a second-order IPT in the DP\nuniversality class. Consequently, this result once again supports the\nDP conjecture \cite{Janss}, which can now be generalized to off-lattice\nmodels with infinitely many absorbing states. \n\nWe expect that the interesting behavior of the present simple reaction\nmodel will stimulate further work on irreversible transitions and\ncritical phenomena on continuous media, a field that, to\nthe best of our knowledge, remains almost unexplored. \n\\vskip 1.5 true cm\n{\bf Acknowledgments}: This work was financially supported by CONICET,\nUNLP, CIC (Provincia Bs. As.), ANPCyT, the Fundaci\'on Antorchas,\nand the Volkswagen Foundation (Germany).\n\n\\section{Introduction}\n\t\n\tMulti-task representation learning~\cite{caruana1997multitask} is an important problem that aims to learn a common low-dimensional representation from multiple related tasks. 
Representation learning has received extensive attention in both empirical applications~\cite{ando2005framework,bengio2013representation,li2014joint} and theoretical study~\cite{maurer2016benefit,du2020few,tripuraneni2021provable}.\n\n\n\t\n\tRecently, a growing number of works~\cite{yang2020impact,yang2022nearly,hu2021near,cella2022multi} investigate representation learning for sequential decision making, and show that if all tasks share a joint low-rank representation, then by leveraging such a joint representation,\n\tit is possible to learn faster than by treating each task independently. \n\tDespite the accomplishments of these works, they mainly focus on the regret minimization setting, where performance is measured by the cumulative reward gap between the optimal option and the options actually chosen. \n\t\n\n\t\n\tHowever, in real-world applications where obtaining a sample is expensive and time-consuming, e.g., clinical trials~\cite{zhang2012multi},\n\n\tit is often desirable to identify the optimal option using as few samples as possible, i.e., we face the \emph{pure exploration} scenario rather than regret minimization.\n\n\tMoreover, in many decision-making applications, we often need to tackle multiple related tasks, e.g., treatment planning for different diseases~\cite{bragman2018uncertainty} and content optimization for multiple websites~\cite{agarwal2009explore}, and there\n\tusually exists a common representation among these tasks, e.g., the features of drugs and the representations of website items. Thus, we desire to exploit the shared representation among tasks to expedite learning.\n\tFor example, in clinical treatment planning, we want to identify the optimal treatment for multiple diseases, and there exists a joint representation of treatments. In this case, since conducting a clinical trial and collecting a sample is time-consuming, we desire to make use of the shared representation and reduce the number of samples required. 
\\looseness=-1\n\t\n\tMotivated by the above fact, in this paper, we study representation learning for multi-task pure exploration in sequential decision making. Following prior works~\cite{yang2020impact,yang2022nearly,hu2021near}, we consider the linear bandit setting, which is one of the most popular settings in sequential decision making and has various applications such as clinical trials and recommendation systems. \n\n\n\tSpecifically, we investigate two pure exploration problems, i.e., representation learning for best arm identification in linear bandits ($\\textup{RepBAI-LB}$) and best policy identification in contextual linear bandits ($\\textup{RepBPI-CLB}$). \n\t\n\tIn $\\textup{RepBAI-LB}$, an agent is given a confidence parameter $\\delta$, an arm set $\\mathcal{X}:=\\{\\boldsymbol{x}_1,\\dots,\\boldsymbol{x}_n\\} \\subseteq \\mathbb{R}^{d}$ and $M$ tasks. For each task $m \\in [M]$, the expected reward of each arm $\\boldsymbol{x} \\in \\mathcal{X}$ is generated by $\\boldsymbol{x}^\\top \\boldsymbol{\\theta}_m$, where $\\boldsymbol{\\theta}_m \\in \\mathbb{R}^{d}$ is an underlying reward parameter. There exists an unknown global feature extractor $\\boldsymbol{B} \\in \\mathbb{R}^{d \\times k}$ and an underlying prediction parameter $\\boldsymbol{w}_m$ such that $\\boldsymbol{\\theta}_m=\\boldsymbol{B} \\boldsymbol{w}_m$ \n\tfor any $m \\in [M]$, where $M \\gg d \\gg k$. The problem can be understood as all tasks sharing a joint representation $\\boldsymbol{f}(\\boldsymbol{x}):= \\boldsymbol{B}^\\top \\boldsymbol{x}$ of the arms, where the dimension of $\\boldsymbol{f}(\\boldsymbol{x})$ is much smaller than that of $\\boldsymbol{x}$. The agent sequentially selects arms and tasks to sample, and observes noisy rewards. The goal of the agent is to identify the best arm with the maximum expected reward for each task with confidence $1-\\delta$, using as few samples as possible. 
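To make the $\textup{RepBAI-LB}$ setup concrete, the following minimal simulator instantiates the sampling protocol just described. The dimensions and the Gaussian arm set are illustrative choices of ours (the paper only requires $M \gg d \gg k$ and that $\mathcal{X}$ spans $\mathbb{R}^d$):

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, M, n = 10, 2, 50, 20           # toy sizes; the paper assumes M >> d >> k

# Feature extractor B with orthonormal columns; reward parameters theta_m = B w_m.
B, _ = np.linalg.qr(rng.standard_normal((d, k)))
W = rng.standard_normal((k, M))      # prediction parameters w_m as columns
Theta = B @ W                        # every theta_m lies in the k-dim column span of B
X = rng.standard_normal((n, d))      # arm set; almost surely spans R^d

def sample(x_idx, m):
    """Pull arm x_idx in task m: observe x^T theta_m plus unit-variance noise."""
    return X[x_idx] @ Theta[:, m] + rng.standard_normal()

def best_arm(m):
    """The target x*_m = argmax_x x^T theta_m that the agent must identify."""
    return int(np.argmax(X @ Theta[:, m]))

print(best_arm(0))
```

Although each $\boldsymbol{\theta}_m$ lives in $\mathbb{R}^d$, all $M$ of them are confined to a $k$-dimensional subspace, which is exactly the structure the algorithms below exploit.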
\n\t\n\tThe $\\textup{RepBPI-CLB}$ problem is an extension of $\\textup{RepBAI-LB}$ to environments with random and varying contexts. \n\tIn $\\textup{RepBPI-CLB}$, there are a context space $\\mathcal{S}$, an action space $\\mathcal{A}$, a known feature mapping $\\boldsymbol{\\phi}:\\mathcal{S} \\times \\mathcal{A} \\mapsto \\mathbb{R}^d$ and an \\emph{unknown} context distribution $\\mathcal{D}$. \n\n\n\tFor each task $m \\in [M]$, the expected reward of each context-action pair $(s,a) \\in \\mathcal{S} \\times \\mathcal{A}$ is generated by $\\boldsymbol{\\phi}(s,a)^\\top \\boldsymbol{\\theta}_m$, where $\\boldsymbol{\\theta}_m=\\boldsymbol{B} \\boldsymbol{w}_m$.\n\n\tThe problem can similarly be interpreted as all tasks sharing a low-dimensional context-action representation $\\boldsymbol{B}^\\top \\boldsymbol{\\phi}(s,a) \\in \\mathbb{R}^k$. At each timestep, the agent first observes a context drawn from $\\mathcal{D}$, chooses an action and a task to sample, and then observes a random reward.\n\n\tGiven a confidence parameter $\\delta$ and an accuracy parameter $\\varepsilon$, the agent aims to identify an $\\varepsilon$-optimal policy (i.e., a mapping $\\mathcal{S} \\mapsto \\mathcal{A}$ whose suboptimality is within $\\varepsilon$)\n\n\tfor each task with confidence $1-\\delta$, while minimizing the number of samples used.\n\n\t\n\n\n\n\n\t\n\t\n\tIn contrast to existing representation learning works~\cite{yang2020impact,yang2022nearly,hu2021near,cella2022multi}, we focus on the pure exploration scenario and face several unique challenges: \n\n\t(i) The sample complexity minimization objective requires us to plan an optimal sample allocation for recovering the low-rank representation, in order to save as many samples as possible. 
\n\t(ii) Unlike prior works which either assume that the arm set is an ellipsoid\/sphere~\\cite{yang2020impact,yang2022nearly} or are computationally inefficient~\\cite{hu2021near}, we allow an arbitrary arm set that spans $\\mathbb{R}^d$, which poses challenges on how to efficiently schedule samples according to the shapes of arms.\n\t(iii) Different from prior works~\\cite{huang2015efficient,li2022instance}, we do not assume prior knowledge of the context distribution. This imposes additional difficulties in sample allocation planning and estimator construction.\n\t%\n\tTo handle these challenges, we design computationally and sample efficient algorithms, which effectively estimate the context distribution and employ the experimental design approaches to plan samples.\n\n\t\n\tWe summarize our contributions in this paper as follows.\n\t\\begin{itemize}\n\t\t\\item We formulate the problems of multi-task representation learning for best arm identification in linear bandits ($\\textup{RepBAI-LB}$) and best policy identification in contextual linear bandits ($\\textup{RepBPI-CLB}$). To the best of our knowledge, this is the first work to study representation learning in the multi-task pure exploration scenario.\n\t\t\\item For $\\textup{RepBAI-LB}$, we propose an efficient algorithm $\\mathtt{DouExpDes}$ equipped with \\emph{double experimental designs}. The first design optimally schedules samples to learn the joint representation according to arm shapes, and the second design minimizes the estimation error for rewards using low-dimensional representations.\n\t\n\t\n\t\n\t\tFurthermore, we establish a sample complexity guarantee $\\tilde{O}(\\frac{Mk}{\\Delta_{\\min}^2})$, which shows superiority over the baseline result $\\tilde{O}(\\frac{Md}{\\Delta_{\\min}^2})$ (i.e., solving each task independently). 
Here $\\Delta_{\\min}$ denotes the minimum reward gap.\n\t\n\t\t\\item For $\\textup{RepBPI-CLB}$, we develop $\\mathtt{C \\hyphen DouExpDes}$, an algorithm which efficiently estimates the context distribution and conducts double experimental designs under the estimated context distribution to learn the global representation. A sample complexity result $\\tilde{O}(\\frac{Mk^2}{\\varepsilon^2})$ is also provided for $\\mathtt{C \\hyphen DouExpDes}$, which significantly outperforms the baseline result $\\tilde{O}(\\frac{Md^2}{\\varepsilon^2})$, and demonstrates the power of representation learning.\n\t\\end{itemize}\n\t\n\t\n\t\n\t\\section{Related Work}\n\t\n\tIn this section, we introduce two lines of related works, and defer a more complete literature review to Appendix~\\ref{apx:related_work}. \\looseness=-1\n\t\n\t\\textbf{Representation Learning.}\n\tThe study of representation learning has been initiated and developed in the supervised learning setting, e.g., \\cite{baxter2000model,ando2005framework,maurer2016benefit,du2020few,tripuraneni2021provable}. Recently, representation learning for sequential decision making has attracted extensive attention.\n\n\n\t\\citet{yang2020impact,yang2022nearly} study representation learning for linear bandits with the regret minimization objective, where they assume that the arm set is an ellipsoid or sphere. \\citet{hu2021near} relax this assumption and allow arbitrary arm sets, but their algorithms based on a multi-task joint least-square estimator are computationally inefficient. \\citet{cella2022meta,cella2022multi} design algorithms which do not need to know the dimension of the underlying representation. 
\n\n\tDifferent from the above works, which consider regret minimization, we study representation learning for (contextual) linear bandits with the pure exploration objective, which brings the unique challenge of how to optimally allocate samples to learn the feature extractor, and motivates us to design algorithms building upon double experimental designs. \\looseness=-1\n\t\n\t\\textbf{Pure Exploration in (Contextual) Linear Bandits.}\n\tMost existing linear bandit works focus on regret minimization, e.g.,~\cite{dani2008stochastic,chu2011contextual,abbasi2011improved}. Recently, there has been a surge of interest in the pure exploration objective for (contextual) linear bandits.\n\n\tFor linear bandits, \citet{soare2014best} are the first to apply the experimental design approach to distinguish the optimal arm, and establish sample complexity that heavily depends on the minimum reward gap.\n\t\citet{tao2018best} design a novel randomized estimator for the underlying reward parameter, and achieve tighter sample complexity which depends on the reward gaps of the best $d$ arms.\n\n\n\t\citet{fiez2019sequential} provide the first near-optimal sample complexity upper and lower bounds for best arm identification in linear bandits. \n\n\tFor contextual linear bandits, \citet{zanette2021design} develop a non-adaptive policy to collect data, from which a near-optimal policy can be computed. \citet{li2022instance} establish instance-optimal sample complexity for best policy identification in contextual linear bandits, with prior knowledge of the context distribution. By contrast, our work studies a multi-task setting where tasks share a common representation,\n\tand does not assume any prior knowledge of the context distribution.\n\t\n\t\n\t\\section{Problem Formulation}\n\t\n\tIn this section, we present the formal problem formulations of $\\textup{RepBAI-LB}$ and $\\textup{RepBPI-CLB}$. 
Before describing the formulations, we first introduce some useful notations.\n\t\n\t\\textbf{Notations.}\n\tWe use bold lower-case letters to denote vectors and bold upper-case letters to denote matrices.\n\tFor any matrix $\\boldsymbol{A}$, $\\|\\boldsymbol{A}\\|$ denotes the spectral norm of $\\boldsymbol{A}$, and $\\sigma_{\\min}(\\boldsymbol{A})$ denotes the minimum singular value of $\\boldsymbol{A}$. For any positive semi-definite matrix $\\boldsymbol{A} \\in \\mathbb{R}^{d' \\times d'}$ and vector $\\boldsymbol{x} \\in \\mathbb{R}^{d'}$, $\\|\\boldsymbol{x}\\|_{\\boldsymbol{A}}:=\\sqrt{\\boldsymbol{x}^\\top \\boldsymbol{A} \\boldsymbol{x}}$. We use $\\textup{polylog}(\\cdot)$ to denote a polylogarithmic factor in given parameters, and $\\tilde{O}(\\cdot)$ to denote an expression that hides polylogarithmic factors in all problem parameters except $\\delta$ and $\\varepsilon$.\n\n\t\n\t\n\t\\textbf{Representation Learning for Best Arm Identification in Linear Bandits ($\\textup{RepBAI-LB}$).}\n\tAn agent is given a set of arms $\\mathcal{X}:=\\{\\boldsymbol{x}_1,\\dots,\\boldsymbol{x}_n\\} \\subseteq \\mathbb{R}^d$ and $M$ best arm identification tasks. Without loss of generality, we assume that $\\mathcal{X}$ spans $\\mathbb{R}^d$, as done in many prior works~\\cite{fiez2019sequential,katz2020empirical,degenne2020gamification}. For any $\\boldsymbol{x} \\in \\mathcal{X}$, $\\|\\boldsymbol{x}\\|\\leq L_{x}$ for some constant $L_{x}$. For each task $m \\in [M]$, the expected reward of each arm $\\boldsymbol{x} \\in \\mathcal{X}$ is $\\boldsymbol{x}^\\top \\boldsymbol{\\theta}_m$, where $\\boldsymbol{\\theta}_m \\in \\mathbb{R}^d$ is an unknown reward parameter. Among all tasks, there exists a common underlying feature extractor $\\boldsymbol{B} \\in \\mathbb{R}^{d \\times k}$, which satisfies that for each task $m \\in [M]$, $\\boldsymbol{\\theta}_m=\\boldsymbol{B} \\boldsymbol{w}_m$. 
Here $\\boldsymbol{B}$ has orthonormal columns, $\\boldsymbol{w}_m \\in \\mathbb{R}^k$ is an unknown prediction parameter, and $M \\gg d \\gg k$. For any $m \\in [M]$, $\\|\\boldsymbol{w}_m\\|\\leq L_{w}$ for some constant $L_{w}$. \n\t\n\tAt each timestep $t$, the agent chooses an arm $\\boldsymbol{x} \\in \\mathcal{X}$ and a task $m \\in [M]$, to sample arm $\\boldsymbol{x}$ in task $m$. Then, she observes a random reward $r_t=\\boldsymbol{x}^\\top \\boldsymbol{\\theta}_m+\\eta_t=\\boldsymbol{x}^\\top \\boldsymbol{B} \\boldsymbol{w}_m+ \\eta_t$, where $\\eta_t$ is an independent, zero-mean and sub-Gaussian noise. For simplicity of analysis, we assume that $\\mathbb{E}[\\eta_t^2]=1$, which can be easily relaxed by using a more carefully-designed estimator in our algorithm.\n\tGiven a confidence parameter $\\delta \\in (0,1)$, the agent aims to identify the best arms $\\boldsymbol{x}^{*}_m:=\\operatornamewithlimits{argmax}_{\\boldsymbol{x} \\in \\mathcal{X}} \\boldsymbol{x}^\\top \\boldsymbol{\\theta}_m$ for all tasks $m \\in [M]$ with probability at least $1-\\delta$, using as few samples as possible. 
We define sample complexity as the total number of samples used over all tasks, which is the performance metric considered in our paper.\n\t\n\tTo efficiently learn the underlying low-dimensional representation,\n\twe make the following standard assumptions.\n\t\n\t\\begin{assumption}[Diverse Tasks] \\label{assumption:diverse_task}\n\t\tWe assume that $\\sigma_{\\min}(\\frac{1}{M} \\sum_{m=1}^{M} \\boldsymbol{w}_m \\boldsymbol{w}_m^\\top) = \\Omega(\\frac{1}{k})$.\n\t\\end{assumption}\n\t\n\tThis assumption indicates that the prediction parameters $\\boldsymbol{w}_1,\\dots,\\boldsymbol{w}_{M}$ are uniformly spread out in all directions of $\\mathbb{R}^k$, which was also assumed in~\\cite{du2020few,tripuraneni2021provable,yang2020impact}, and is necessary for recovering the feature extractor $\\boldsymbol{B}$.\n\t\n\tFor any distribution $\\boldsymbol{\\lambda} \\in \\triangle_{\\mathcal{X}}$ and $\\boldsymbol{B} \\in \\mathbb{R}^{d \\times k}$, let $\\boldsymbol{A}(\\boldsymbol{\\lambda}, \\boldsymbol{B}):=\\sum_{i=1}^{n} \\lambda(\\boldsymbol{x}_i) \\boldsymbol{B}^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\boldsymbol{B}$.\n\tFor any task $m \\in [M]$, let\n\t\\begin{align*}\n\t\t\\boldsymbol{\\lambda}^*_m := & \\operatornamewithlimits{argmin}_{\\boldsymbol{\\lambda} \\in \\triangle_{\\mathcal{X}}} \\max_{\\boldsymbol{x} \\in \\mathcal{X} \\setminus \\{\\boldsymbol{x}^{*}_{m}\\}} \\frac{\\| \\boldsymbol{B}^\\top (\\boldsymbol{x}^{*}_{m} - \\boldsymbol{x}) \\|^2_{\\boldsymbol{A}(\\boldsymbol{\\lambda}, \\boldsymbol{B})^{-1}} }{ ((\\boldsymbol{x}^{*}_{m} - \\boldsymbol{x})^\\top \\boldsymbol{\\theta}_m)^2 } .\n\t\\end{align*}\n\tHere $\\boldsymbol{\\lambda}^*_m$ denotes the optimal sample allocation that minimizes prediction error of arms (i.e., the solution of G-optimal design~\\cite{pukelsheim2006optimal}) under the underlying low-dimensional representation.\n\t\n\t\\begin{assumption}[Eigenvalue of G-optimal Design Matrix] 
\\label{assumption:lambda^*_B_x_x_B_invertible}\n\t\tFor any task $m \\in [M]$, $\\sigma_{\\min}(\\boldsymbol{A}(\\boldsymbol{\\lambda}^*_m, \\boldsymbol{B})) \\geq \\omega$ for some constant $\\omega>0$.\n\t\\end{assumption}\n\t\n\tThis assumption implies that the covariance matrix $\\boldsymbol{A}(\\boldsymbol{\\lambda}^*_m, \\boldsymbol{B})$ under the optimal sample allocation is invertible, which is necessary for estimating $\\boldsymbol{w}_m$. \\looseness=-1\n\t\n\t\\textbf{Representation Learning for Best Policy Identification in Contextual Linear Bandits ($\\textup{RepBPI-CLB}$).}\n\tIn this problem, there are a context space $\\mathcal{S}$, an action space $\\mathcal{A}$, a feature mapping $\\boldsymbol{\\phi}(\\cdot,\\cdot):\\mathcal{S}\\times\\mathcal{A} \\mapsto \\mathbb{R}^d$ and an \\emph{unknown} context distribution $\\mathcal{D} \\in \\triangle_{\\mathcal{S}}$. For any $(s,a) \\in \\mathcal{S} \\times \\mathcal{A}$, $\\|\\boldsymbol{\\phi}(s,a)\\|\\leq L_{\\phi}$ for some constant $L_{\\phi}$.\n\tAn agent needs to solve $M$ best policy identification tasks. For each task $m \\in [M]$, the expected reward of each context-action pair $(s,a) \\in \\mathcal{S} \\times \\mathcal{A}$ is $\\boldsymbol{\\phi}(s,a)^\\top \\boldsymbol{\\theta}_m$, where $\\boldsymbol{\\theta}_m \\in \\mathbb{R}^d$ is an unknown reward parameter. Similar to $\\textup{RepBAI-LB}$, there exists a global feature extractor $\\boldsymbol{B} \\in \\mathbb{R}^{d \\times k}$ with orthonormal columns, such that for each task $m \\in [M]$, $\\boldsymbol{\\theta}_m=\\boldsymbol{B} \\boldsymbol{w}_m$. Here $\\boldsymbol{w}_m \\in \\mathbb{R}^k$ is an unknown prediction parameter, $\\|\\boldsymbol{w}_m\\|\\leq L_{w}$ for any $m \\in [M]$, and $M \\gg d \\gg k$.\n\t\n\tAt each timestep $t$, the agent first observes a random context $s_t$, which is i.i.d. drawn from $\\mathcal{D}$. 
Then, she selects an action $a_t \\in \\mathcal{A}$ and a task $m \\in [M]$, to sample action $a_t$ in context $s_t$ under task $m$. After sampling, she observes a random reward $r_t=\\boldsymbol{\\phi}(s_t,a_t)^\\top \\boldsymbol{\\theta}_m+\\eta_t=\\boldsymbol{\\phi}(s_t,a_t)^\\top \\boldsymbol{B} \\boldsymbol{w}_m+ \\eta_t$, where $\\eta_t$ is an independent, zero-mean and $1$-sub-Gaussian noise. \n\t\n\tWe define a policy $\\pi$ as a mapping from $\\mathcal{S}$ to $\\mathcal{A}$. For each task $m \\in [M]$, we say a policy $\\hat{\\pi}_m$ is $\\varepsilon$-optimal if\\looseness=-1\n\t\\begin{align*}\n\t\t\\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{ \\max_{a \\in \\mathcal{A}} \\sbr{\\boldsymbol{\\phi}(s,a) - \\boldsymbol{\\phi}(s,\\hat{\\pi}_m(s)) }^\\top \\boldsymbol{\\theta}_m } \\leq \\varepsilon .\n\t\\end{align*}\n\tGiven a confidence parameter $\\delta \\in (0,1)$ and an accuracy parameter $\\varepsilon>0$, the goal of the agent is to identify an $\\varepsilon$-optimal policy $\\hat{\\pi}_m$ for each task $m \\in [M]$ with probability at least $1-\\delta$, and to minimize the number of samples used, i.e., the sample complexity.\n\t\n\tWe also make two standard assumptions for $\\textup{RepBPI-CLB}$: Assumption~\\ref{assumption:diverse_task} and the following assumption on the context distribution and context-action features.\n\t\n\t\\begin{assumption}\\label{assumption:bpi_rho_E_D_is_finite}\n\t\tThere exists some $\\boldsymbol{\\lambda} \\in \\triangle_{\\mathcal{A}}$ such that \n\t\t$$\n\t\t\\sigma_{\\min}\\sbr{\\sum_{a \\in \\mathcal{A}} \\lambda(a) \\mathbb{E}_{s \\sim \\mathcal{D}}\\mbr{ \\boldsymbol{\\phi}(s,a) \\boldsymbol{\\phi}(s,a)^\\top }} \\geq \\nu \n\t\t\\vspace*{-1em}\n\t\t$$ \n\t\tfor some constant $\\nu>0$.\n\t\\end{assumption}\n\t\n\tAssumption~\\ref{assumption:bpi_rho_E_D_is_finite} states that there exists at least one sample allocation under which the expected covariance matrix with respect to random contexts is invertible. 
This assumption enables one to reveal the feature extractor $\\boldsymbol{B}$, despite stochastic and varying contexts. \n\tNote that Assumption~\\ref{assumption:bpi_rho_E_D_is_finite} only assumes the existence of a feasible sample allocation, rather than knowledge of this sample allocation. \n\n\t\n\t\n\tIt is worth mentioning that in this work, we do not assume that we can sample arbitrary vectors in an ellipsoid\/sphere as in \cite{yang2020impact,yang2022nearly}, or assume that each arm (action) has zero mean and identity covariance as in \cite{tripuraneni2021provable}. In contrast, we allow arbitrary shapes of arms (actions), and efficiently allocate samples according to their different shapes. Moreover, we do not assume prior knowledge of the context distribution as in \cite{huang2015efficient,li2022instance}.\n\tInstead, we design an effective scheme to estimate the context distribution, and carefully bound the estimation error in our analysis. \n\t\n\tBelow we introduce our algorithms and results. We defer all proofs to the Appendix due to the space limit.\n\t\n\t\n\t\\section{Representation Learning for Best Arm Identification in Linear Bandits}\n\t\n\t\n\t\n\t\n\tIn this section, we design a computationally efficient algorithm $\\mathtt{DouExpDes}$ for $\\textup{RepBAI-LB}$, which performs a delicate double experimental design to recover the feature extractor and distinguish the best arms using low-rank representations. Furthermore, we provide sample complexity guarantees that mainly depend on the underlying low dimension.\n\t\n\n\tTo better describe our algorithm, we first introduce the notion of \\emph{experimental design}.\n\tExperimental design is an important problem in statistics~\cite{pukelsheim2006optimal}. Consider a set of feature vectors and an unknown linear regression parameter. 
Sampling each feature vector produces noisy feedback of the inner product of this feature vector and the unknown parameter.\n\tExperimental design investigates how to schedule samples to maximize the statistical power of estimating the unknown parameter. \n\n\tIn our algorithm, we mainly use two popular types of experimental design, i.e., \\emph{E-optimal design}, which minimizes the spectral norm of the inverse of the sample covariance matrix, and \\emph{G-optimal design}, which minimizes the maximum prediction error for feature vectors. \\looseness=-1\n\t\n\t\n\t\n\t\\subsection{Algorithm $\\mathtt{DouExpDes}$}\n\t\n\t\n\t\n\t\n\n\t\n\tNow we present our algorithm $\\mathtt{DouExpDes}$, whose pseudo-code is provided in Algorithm~\\ref{alg:repbailb}. $\\mathtt{DouExpDes}$ is a phased elimination algorithm, which first conducts the E-optimal design to optimally schedule samples for learning the feature extractor $\\boldsymbol{B}$, and then performs the G-optimal design with low-dimensional representations to eliminate suboptimal arms.\n\n\n\t\n\t$\\mathtt{DouExpDes}$ uses a \\emph{rounding procedure} $\\mathtt{ROUND}$~\cite{allen2017near,fiez2019sequential}, which transforms a given continuous sample allocation (design) into a discrete sample sequence and maintains important properties (e.g., E-optimality and G-optimality) of the design. $\\mathtt{ROUND}(\\{(\\boldsymbol{q}_i,\\boldsymbol{Q}_i)\\}_{i=1}^{n'}, \\boldsymbol{\\lambda}, \\zeta, N)$ takes $n'$ arm-matrix pairs $(\\boldsymbol{q}_1,\\boldsymbol{Q}_1),\\dots,(\\boldsymbol{q}_{n'},\\boldsymbol{Q}_{n'}) \\in \\mathcal{X} \\times \\mathbb{R}^{{d'} \\times {d'}}$, a distribution $\\boldsymbol{\\lambda} \\in \\triangle_{\\{\\boldsymbol{q}_1,\\dots,\\boldsymbol{q}_{n'}\\}}$, a rounding approximation parameter $\\zeta>0$, and a number of samples $N$ such that $N \\geq \\frac{180d'}{\\zeta^2}$ as inputs. 
It returns a sample sequence $\\boldsymbol{s}_1, \\dots, \\boldsymbol{s}_N \\in \\mathcal{X}$, which correspond to feature matrices $\\boldsymbol{S}_1, \\dots, \\boldsymbol{S}_N \\in \\{\\boldsymbol{Q}_1,\\dots,\\boldsymbol{Q}_{n'}\\}$, such that $\\sum_{j=1}^N \\boldsymbol{S}_j$ has similar properties to the covariance matrix of the input design, $N \\sum_{i=1}^{n'} \\lambda(\\boldsymbol{q}_i) \\boldsymbol{Q}_i$ (see Appendix~\\ref{apx:rounding_procedure} for more details).\n\t\n\t\n\t\n\tThe procedure of $\\mathtt{DouExpDes}$ is as follows. At the beginning, $\\mathtt{DouExpDes}$ performs the E-optimal design with raw representations, to plan an optimal sample allocation $\\boldsymbol{\\lambda}^E$ for the purpose of recovering the feature extractor $\\boldsymbol{B}$ (Line~\\ref{line:bai_E_optimal_design}). \n\tThen, $\\mathtt{DouExpDes}$ calls $\\mathtt{ROUND}$ to convert the E-optimal sample allocation $\\boldsymbol{\\lambda}^E$ into a discrete sample batch $\\bar{\\boldsymbol{x}}_1,\\dots,\\bar{\\boldsymbol{x}}_p$, which satisfies\n\t$$\n\t\\bigg\\| \\Big(\\sum_{j=1}^p \\bar{\\boldsymbol{x}}_j \\bar{\\boldsymbol{x}}_j^\\top \\Big)^{-1} \\bigg\\| \\leq (1+\\zeta) \\bigg\\| \\Big(p \\sum_{i=1}^n \\lambda^E(\\boldsymbol{x}_i) \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\Big)^{-1} \\bigg\\| .\n\t$$\n\tNext, $\\mathtt{DouExpDes}$ enters multiple phases, and maintains a candidate arm set $\\hat{\\mathcal{X}}_{t,m}$ for each task. 
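The design-then-round pipeline above can be sketched numerically. The E-optimal objective $\min_{\boldsymbol{\lambda}} \|(\sum_i \lambda_i \boldsymbol{x}_i \boldsymbol{x}_i^\top)^{-1}\|$ is equivalent to maximizing the concave function $\lambda_{\min}(\sum_i \lambda_i \boldsymbol{x}_i \boldsymbol{x}_i^\top)$, so a Frank-Wolfe iteration applies. Both the solver and the naive rounding below are our own simplified stand-ins (the paper's $\mathtt{ROUND}$ of Allen-Zhu et al. carries the $(1+\zeta)$ guarantee displayed above, which this heuristic does not):

```python
import numpy as np

def e_optimal_design(X, iters=300):
    """Frank-Wolfe sketch for the E-optimal design:
    maximize lambda_min(sum_i lam_i x_i x_i^T), a concave objective.
    Each step moves mass toward the arm most aligned with the current
    minimal eigenvector (the steepest-ascent vertex of the simplex).
    """
    n, d = X.shape
    lam = np.full(n, 1.0 / n)
    for t in range(iters):
        A = X.T @ (lam[:, None] * X)           # sum_i lam_i x_i x_i^T
        _, eigvec = np.linalg.eigh(A)
        v = eigvec[:, 0]                       # eigenvector of lambda_min(A)
        i = int(np.argmax((X @ v) ** 2))       # supergradient is ((x_i^T v)^2)_i
        gamma = 2.0 / (t + 2.0)                # standard Frank-Wolfe step size
        lam *= (1.0 - gamma)
        lam[i] += gamma
    return lam

def naive_round(lam, N):
    """Simplified stand-in for ROUND: floor N*lam, then hand the leftover
    pulls to the arms with the largest fractional parts."""
    counts = np.floor(lam * N).astype(int)
    frac = lam * N - counts
    for i in np.argsort(-frac)[: N - counts.sum()]:
        counts[i] += 1
    return counts

rng = np.random.default_rng(0)
X = rng.standard_normal((25, 4))               # toy arm set spanning R^4
lam_E = e_optimal_design(X)
p = 7200                                       # toy batch size (paper: 180 d / zeta^2)
counts = naive_round(lam_E, p)
A_batch = X.T @ (counts[:, None] * X)          # sum_j xbar_j xbar_j^T
A_design = p * (X.T @ (lam_E[:, None] * X))
print(np.linalg.norm(np.linalg.inv(A_batch), 2),
      np.linalg.norm(np.linalg.inv(A_design), 2))
```

For large batch sizes the two printed spectral norms nearly coincide, which is the discrete-approximates-continuous property that the rounding guarantee formalizes.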
\n\tThe specific value of $T_t$ in Line~\\ref{line:bai_T_t} is presented in Eq.~\\eqref{eq:value_T_t} of Appendix~\\ref{apx:bai_feature_recover}.\n\t\n\t\\begin{algorithm}[t]\n\t\t\\caption{$\\mathtt{DouExpDes}$ (Double Experimental Design)} \\label{alg:repbailb}\n\t\t\\begin{algorithmic}[1]\n\t\t\t\\STATE {\\bfseries Input:} $\\mathcal{X}$, $\\delta$, rounding procedure $\\mathtt{ROUND}$, rounding approximation parameter $\\zeta:=\\frac{1}{10}$, and the size of sample batch $p:= \\frac{180d}{\\zeta^2}$.\n\t\t\n\t\t\t\\STATE Let $\\boldsymbol{\\lambda}^{E}$ and $\\rho^{E}$ be the optimal solution and the optimal value of the E-optimal design optimization:\n\t\t\t$$\n\t\t\t\\min_{\\boldsymbol{\\lambda} \\in \\triangle_{\\mathcal{X}}} \\Big\\| \\big( \\sum_{i=1}^n \\lambda(\\boldsymbol{x}_i) \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\big)^{-1} \\Big\\|\n\t\t\t\\vspace*{-0.5em}\n\t\t\t$$\\label{line:bai_E_optimal_design}\\\\\n\t\t\t\\STATE $\\bar{\\boldsymbol{x}}_1,\\dots,\\bar{\\boldsymbol{x}}_p \\leftarrow \\mathtt{ROUND}(\\{(\\boldsymbol{x}_i, \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top)\\}_{i=1}^{n}, \\boldsymbol{\\lambda}^{E}, \\zeta, p)$ \\label{line:bai_E_optimal_round}\n\t\t\t\\STATE $\\hat{\\mathcal{X}}_{1,m} \\leftarrow \\mathcal{X}$ for any $m \\in [M]$. 
$\\delta_t \\leftarrow \\frac{\\delta}{2 t^2}$ for any $t \\geq 1$\\; \n\t\t\t\\FOR{phase $t=1,2,\\dots$}\n\t\t\n\t\t\t\\STATE $T_t \\leftarrow \\lceil \\frac{c_1 \\sbr{1+\\zeta}^3 (\\rho^E)^2 k^4 L_{x}^4 L_{w}^4}{M} \\max\\{2^{2t},\\ \\frac{L_x^4}{\\omega^2}\\}\\cdot$\\\\$ \\textup{polylog}(\\zeta,\\rho^E,p,k,L_{x},L_{w},\\frac{1}{\\delta_t}, \\frac{1}{\\omega}) \\rceil$, where $c_1$ is an absolute constant \\label{line:bai_T_t}\n\t\t\t\\STATE $\\hat{\\boldsymbol{B}}_t \\leftarrow \\mathtt{FeatRecover}(T_t, \\{\\bar{\\boldsymbol{x}}_i\\}_{i \\in [p]})$\\;\n\t\t\n\t\t\t\\STATE $\\{\\hat{\\mathcal{X}}_{t+1,m}\\}_{m \\in [M]} \\leftarrow$\\\\$ \\mathtt{EliLowRep} (t, \\mathcal{X}, \\{\\hat{\\mathcal{X}}_{t,m}\\}_{m \\in [M]}, \\delta_t, \\mathtt{ROUND}, \\zeta, \\hat{\\boldsymbol{B}}_t)$\n\t\t\t\\IF{$|\\hat{\\mathcal{X}}_{t+1,m}|=1$, $\\forall m \\in [M]$}\n\t\t\t\\STATE {\\bfseries return} $\\hat{\\mathcal{X}}_{t+1,m}$ for all tasks $m \\in [M]$\\;\n\t\t\t\\ENDIF\n\t\t\t\\ENDFOR\n\t\t\\end{algorithmic}\n\t\\end{algorithm}\n\t\n\t\\begin{algorithm}[t] \n\t\t\\caption{$\\mathtt{FeatRecover}(T, \\{\\bar{\\boldsymbol{x}}_i\\}_{i \\in [p]})$} \\label{alg:feat_recover}\n\t\t\\begin{algorithmic}[1]\n\t\t\n\t\t\t\\FOR{task $m \\in [M]$} \\label{line:bai_stage2_sample_start}\n\t\t\t\\FOR{round $j \\in [T]$ }\n\t\t\t\\FOR{arm $i \\in [p]$}\n\t\t\t\\STATE Sample $\\bar{\\boldsymbol{x}}_i$, and observe random reward $\\alpha_{m,j,i}$\\; \\label{line:bai_stage2_sample}\n\t\t\t\\ENDFOR\n\t\t\t\\STATE $\\tilde{\\boldsymbol{\\theta}}_{m,j} \\leftarrow (\\sum_{i=1}^{p} \\bar{\\boldsymbol{x}}_i \\bar{\\boldsymbol{x}}_i^\\top)^{-1} \\sum_{i=1}^{p} \\bar{\\boldsymbol{x}}_i \\alpha_{m,j,i}$\n\t\t\t\\ENDFOR\n\t\t\t\\ENDFOR \\label{line:bai_stage2_sample_end}\n\t\t\t\\STATE $\\boldsymbol{Z} \\leftarrow \\frac{1}{M T} \\sum_{m=1}^{M} \\sum_{j=1}^{T} \\tilde{\\boldsymbol{\\theta}}_{m,j} (\\tilde{\\boldsymbol{\\theta}}_{m,j})^\\top - (\\sum_{i=1}^{p} \\bar{\\boldsymbol{x}}_i 
\\bar{\\boldsymbol{x}}_i^\\top)^{-1} $ \\label{line:bai_Z_t}\\;\n\t\t\t\\STATE Perform SVD decomposition on $\\boldsymbol{Z}$, and let $\\hat{\\boldsymbol{B}}$ be the top-$k$ left singular vectors of $\\boldsymbol{Z}$. {\\bfseries return} $\\hat{\\boldsymbol{B}}$ \\label{line:svd}\\;\n\t\t\\end{algorithmic}\n\t\\end{algorithm}\n\t\n\t\n\t\\begin{algorithm}[t!] \n\t\t\\caption{$\\mathtt{EliLowRep}(t, \\mathcal{X}\\!,\\! \\{\\hat{\\mathcal{X}}_{m}\\}_{\\! m \\in [M]}, \\delta'\\!, \\mathtt{ROUND},\\! \\zeta,\\! \\hat{\\boldsymbol{B}})$} \\label{alg:elim_low_rep}\n\t\t\\begin{algorithmic}[1]\n\t\t\n\t\t\t\\FOR{task $m \\in [M]$}\n\t\t\t\\STATE Let $\\boldsymbol{\\lambda}^{G}_{m}$ and $\\rho^{G}_{m}$ be the optimal solution and the optimal value of the G-optimal design optimization:\n\t\t\t$$\n\t\t\t\\operatornamewithlimits{argmin}_{\\boldsymbol{\\lambda} \\in \\triangle_{\\mathcal{X}}} \\max_{\\boldsymbol{x},\\boldsymbol{x}' \\in \\hat{\\mathcal{X}}_{m}} \\nbr{ \\hat{\\boldsymbol{B}}^\\top (\\boldsymbol{x}-\\boldsymbol{x}') }^2_{\\boldsymbol{A}(\\boldsymbol{\\lambda},\\hat{\\boldsymbol{B}})^{-1}}\n\t\t\t\\vspace*{-0.5em}\n\t\t\t$$ \\label{line:bai_G_optimal_design}\\\\\n\t\t\t\\STATE $N_{m} \\leftarrow \\lceil \\max \\{ 32 (1+\\zeta) 2^{2t} \\rho^{G}_{m} \\log (\\frac{4n^2 M}{\\delta'}),$ $\\frac{180 k}{\\zeta^2} \\} \\rceil$\\;\n\t\t\t\\STATE $\\boldsymbol{z}_{m,1},\\dots,\\boldsymbol{z}_{m,N_{m}} \\leftarrow$\\\\$ \\mathtt{ROUND}(\\{(\\boldsymbol{x}_i, \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\hat{\\boldsymbol{B}})\\}_{i=1}^{n}, \\boldsymbol{\\lambda}^{G}_{m}, \\zeta, N_{m})$ \\label{line:bai_stage3_round}\\;\n\t\t\t\\STATE Sample the arms $\\boldsymbol{z}_{m,1},\\dots,\\boldsymbol{z}_{m,N_{m}} \\in \\mathcal{X}$, and observe random rewards $r_{m,1}, \\dots, r_{m,N_{m}}$\\; \\label{line:bai_stage3_sample}\n\t\t\t\\STATE Let $\\tilde{\\boldsymbol{z}}_{m,j} := \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{z}_{m,j}$ for any $j \\in 
[N_{m}]$\n\t\t\t\STATE $\hat{\boldsymbol{w}}_{m} \leftarrow ( \sum_{j=1}^{N_{m}} \tilde{\boldsymbol{z}}_{m,j} \tilde{\boldsymbol{z}}_{m,j}^\top )^{-1} \sum_{j=1}^{N_{m}} \tilde{\boldsymbol{z}}_{m,j} r_{m,j}$ \label{line:bai_est_theta}\;\n\t\t\t\STATE $\hat{\boldsymbol{\theta}}_{m} \leftarrow \hat{\boldsymbol{B}} \hat{\boldsymbol{w}}_{m}$ \label{line:bai_est_w}\;\n\t\t\t\STATE $\hat{\mathcal{X}}'_{m} \leftarrow \hat{\mathcal{X}}_{m} \setminus \{ \boldsymbol{x} \in \hat{\mathcal{X}}_{m} \ | \ \exists \boldsymbol{x}' \in \hat{\mathcal{X}}_{m}: (\boldsymbol{x}'-\boldsymbol{x})^\top \hat{\boldsymbol{\theta}}_{m} > 2^{-t} \}$ \label{line:bai_elimination}\;\n\t\t\t\ENDFOR\n\t\t\t\STATE {\bfseries return} $\{\hat{\mathcal{X}}'_{m}\}_{m \in [M]}$\n\t\t\end{algorithmic}\n\t\end{algorithm}\n\t\n\tIn each phase $t$, \n\t$\mathtt{DouExpDes}$ first calls subroutine $\mathtt{FeatRecover}$ to recover the feature extractor $\boldsymbol{B}$. In $\mathtt{FeatRecover}$ (Algorithm~\ref{alg:feat_recover}), we repeatedly sample $\bar{\boldsymbol{x}}_1,\dots,\bar{\boldsymbol{x}}_p$ in all tasks, and construct an estimator $\boldsymbol{Z}$ for $\frac{1}{M} \sum_{m=1}^{M} \boldsymbol{\theta}_m \boldsymbol{\theta}_m^\top$, which contains the information of the underlying reward parameters (Line~\ref{line:bai_Z_t}). Then, we perform SVD on $\boldsymbol{Z}$ and obtain the estimated feature extractor $\hat{\boldsymbol{B}}$ (Line~\ref{line:svd}). \n\t\n\tThen, $\mathtt{DouExpDes}$ calls subroutine $\mathtt{EliLowRep}$ to eliminate suboptimal arms using low-dimensional representations.\n\tIn $\mathtt{EliLowRep}$ (Algorithm~\ref{alg:elim_low_rep}), we conduct the G-optimal design with the reduced-dimensional representations $\hat{\boldsymbol{B}}^\top \boldsymbol{x}$, and obtain a sample allocation $\boldsymbol{\lambda}^{G}_{m}$ for each task (Line~\ref{line:bai_G_optimal_design}). 
We further use $\mathtt{ROUND}$ to transform $\boldsymbol{\lambda}^{G}_{m}$ into a sample sequence $\boldsymbol{z}_{m,1},\dots,\boldsymbol{z}_{m,N_{m}}$, such that\n\t\begin{align*}\n\t\t& \max_{\boldsymbol{x}, \boldsymbol{x}' \in \hat{\mathcal{X}}_{m}} \nbr{\boldsymbol{x}-\boldsymbol{x}'}^2_{\sbr{\sum_{j=1}^{N_{m}} \hat{\boldsymbol{B}}^\top \boldsymbol{z}_{m,j} \boldsymbol{z}_{m,j}^\top \hat{\boldsymbol{B}}}^{-1}} \n\t\t\\\n\t\t\leq & (1+\zeta) \max_{\boldsymbol{x}, \boldsymbol{x}' \in \hat{\mathcal{X}}_{m}} \nbr{\boldsymbol{x}-\boldsymbol{x}'}^2_{\sbr{N_{m} \sum_{i=1}^{n} \lambda^{G}_{m}(\boldsymbol{x}_i) \hat{\boldsymbol{B}}^\top \boldsymbol{x}_i \boldsymbol{x}_i^\top \hat{\boldsymbol{B}}}^{-1}} .\n\t\end{align*}\n\tAfter sampling this sequence, we build estimators $\hat{\boldsymbol{w}}_{m}$ and $\hat{\boldsymbol{\theta}}_{m}$ for the underlying prediction parameter $\boldsymbol{w}_m$ and reward parameter $\boldsymbol{\theta}_m$, respectively (Lines~\ref{line:bai_est_theta}-\ref{line:bai_est_w}). Then, we discard the arms that show large gaps to the estimated optimal arm for each task (Line~\ref{line:bai_elimination}). 
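As a concrete illustration of the moment-based recovery step in $\mathtt{FeatRecover}$, the following is a minimal numerical sketch (Python with numpy; the instance sizes, the Gaussian stand-in for the rounded E-optimal batch, and the unit-variance noise are our own illustrative assumptions, not the paper's tuned choices):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, M, p, T = 6, 2, 40, 30, 50  # illustrative sizes

# Hypothetical instance: B has orthonormal columns, theta_m = B w_m.
B, _ = np.linalg.qr(rng.standard_normal((d, k)))
Theta = B @ rng.standard_normal((k, M))     # columns are theta_m

X_bar = rng.standard_normal((p, d))         # stand-in for the rounded E-optimal batch
Sigma_inv = np.linalg.inv(X_bar.T @ X_bar)  # (sum_i x_i x_i^T)^{-1}

# Z estimator: average the least-squares outer products over tasks and
# rounds, then subtract Sigma_inv, the noise-induced bias of each outer product.
Z = np.zeros((d, d))
for m in range(M):
    for _ in range(T):
        rewards = X_bar @ Theta[:, m] + rng.standard_normal(p)  # unit-variance noise
        theta_tilde = Sigma_inv @ X_bar.T @ rewards             # per-round least squares
        Z += np.outer(theta_tilde, theta_tilde)
Z = Z / (M * T) - Sigma_inv

# SVD step: the top-k left singular vectors estimate the column span of B.
U, _, _ = np.linalg.svd(Z)
B_hat = U[:, :k]
err = np.linalg.norm((np.eye(d) - B_hat @ B_hat.T) @ B, 2)  # subspace error
```

The subtraction of $(\sum_{i=1}^{p} \bar{\boldsymbol{x}}_i \bar{\boldsymbol{x}}_i^\top)^{-1}$ mirrors Line~\ref{line:bai_Z_t}: without it, the noise covariance of the least-squares estimates would bias the spectrum of $\boldsymbol{Z}$.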
\n\t\n\t\subsection{Theoretical Performance of $\mathtt{DouExpDes}$}\n\t\n\tIn this subsection, we provide sample complexity guarantees for $\mathtt{DouExpDes}$.\n\t%\n\tTo formally present our sample complexity,\n\twe first revisit existing results for conventional single-task best arm identification in linear bandits (BAI-LB).\n\t\n\tFor a single-task BAI-LB instance with arm set $\mathcal{X} \subseteq \mathbb{R}^d$ and underlying reward parameter $\boldsymbol{\theta} \in \mathbb{R}^{d}$, the instance-dependent hardness is defined as~\cite{fiez2019sequential}\n\t\begin{align*}\n\t\t\rho^{S}(\mathcal{X}, \boldsymbol{\theta}) := & \min_{\boldsymbol{\lambda} \in \triangle_{\mathcal{X}}} \max_{\boldsymbol{x} \in \mathcal{X} \setminus \{\boldsymbol{x}^{*}\}} \frac{\| \boldsymbol{x}^{*} - \boldsymbol{x} \|^2_{\sbr{\sum_{i=1}^{n} \lambda(\boldsymbol{x}_i) \boldsymbol{x}_i \boldsymbol{x}_i^\top }^{-1}} }{ ((\boldsymbol{x}^{*} - \boldsymbol{x})^\top \boldsymbol{\theta})^2 } ,\n\t\end{align*}\n\tand the best known sample complexity result is $\tilde{O}(\rho^{S}(\mathcal{X}, \boldsymbol{\theta}) \log(\frac{1}{\delta}))=\tilde{O}(\frac{d}{(\Delta^{S}_{\min})^2} \log(\frac{1}{\delta}))$~\cite{fiez2019sequential}.\n\tHere $\boldsymbol{x}^{*}:=\operatornamewithlimits{argmax}_{\boldsymbol{x} \in \mathcal{X}} \boldsymbol{x}^\top \boldsymbol{\theta}$ denotes the best arm, and $\Delta^{S}_{\min}:=\min_{\boldsymbol{x} \in \mathcal{X} \setminus \{\boldsymbol{x}^{*}\}}(\boldsymbol{x}^{*} - \boldsymbol{x})^\top \boldsymbol{\theta}$ refers to the minimum reward gap.\n\t\n\tIt can be seen that a naive algorithm for RepBAI-LB is to run an existing single-task 
BAI-LB algorithm~\cite{fiez2019sequential,katz2020empirical} to solve $M$ tasks independently. Then, the sample complexity of such a naive algorithm is\n\t\begin{align}\n\t\t\!\!\!\! \tilde{O} \! \sbr{ \sum_{m=1}^{M} \rho^{S}(\mathcal{X}, \boldsymbol{\theta}_m) \log\sbr{ \frac{1}{\delta} } \!} \!\!=\! \tilde{O}\sbr{\! \frac{Md}{\Delta_{\min}^2} \log\sbr{ \frac{1}{\delta} } \!} \!, \label{eq:bai_naive_sample_complexity}\n\t\end{align}\n\twhere $\Delta_{\min}:=\min_{m \in [M], \boldsymbol{x} \in \mathcal{X} \setminus \{\boldsymbol{x}^{*}_m\}}(\boldsymbol{x}^{*}_m - \boldsymbol{x})^\top \boldsymbol{\theta}_m$ denotes the minimum reward gap among all tasks. In the following, we take Eq.~\eqref{eq:bai_naive_sample_complexity} as the baseline to demonstrate the power of representation learning.\n\t\n\tNow we state the sample complexity for $\mathtt{DouExpDes}$.\n\t\begin{theorem} \label{thm:bai_ub}\n\t\tWith probability at least $1-\delta$, algorithm $\mathtt{DouExpDes}$ returns the best arms $\boldsymbol{x}^{*}_{m}$ for all tasks $m \in [M]$, and the number of samples used is bounded by\n\t\t\begin{align*}\n\t\t\t\tilde{O} \bigg( & \sum_{m=1}^{M} \! \min_{\boldsymbol{\lambda} \in \triangle_{\mathcal{X}}} \! \max_{\boldsymbol{x} \in \mathcal{X} \setminus \{\boldsymbol{x}^{*}_{m}\}} \!\!\!\!\! \frac{\| \boldsymbol{B}^\top (\boldsymbol{x}^{*}_{m} - \boldsymbol{x}) \|^2_{\boldsymbol{A}(\boldsymbol{\lambda},\boldsymbol{B})^{-1}} }{ ((\boldsymbol{x}^{*}_{m} - \boldsymbol{x})^\top \boldsymbol{\theta}_m)^2 } \! 
\\log\\Big(\\frac{1}{\\delta}\\Big)\n\t\t\t\\\\& \n\t\t\t+ (\\rho^E)^2 d k^4 L_{x}^2 L_{w}^2 D \\log^4 \\Big(\\frac{1}{\\delta}\\Big) \\bigg)\n\t\t\t\\\\\n\t\t\t= & \\ \\tilde{O} \\bigg( \\frac{M k}{\\Delta_{\\min}^2} \\log\\Big(\\frac{1}{\\delta}\\Big) + (\\rho^E)^2 d k^4 L_{x}^2 L_{w}^2 D \\log^4 \\Big(\\frac{1}{\\delta}\\Big) \\bigg) ,\n\t\t\\end{align*}\n\t\twhere\n\t\t$D:=\\max\\{ \\frac{1}{\\Delta_{\\min}^2} ,\\ \\frac{L_{x}^4}{\\omega^2} \\}$.\n\t\\end{theorem}\n\t\n\t\\textbf{Remark 1.} In the above theorem, the first term, $\\sum_{m=1}^{M} \\min_{\\boldsymbol{\\lambda} \\in \\triangle_{\\mathcal{X}}} \\max_{\\boldsymbol{x} \\in \\mathcal{X} \\setminus \\{\\boldsymbol{x}^{*}_{m}\\}} \\frac{\\| \\boldsymbol{B}^\\top (\\boldsymbol{x}^{*}_{m} - \\boldsymbol{x}) \\|^2_{\\boldsymbol{A}(\\boldsymbol{\\lambda},\\boldsymbol{B})^{-1}} }{ ((\\boldsymbol{x}^{*}_{m} - \\boldsymbol{x})^\\top \\boldsymbol{\\theta}_m)^2 }=O(\\frac{M k}{\\Delta_{\\min}^2})$, is the instance-dependent hardness of $M$ $k$-dimensional linear bandit problems with arm set $\\tilde{\\mathcal{X}}:=\\{\\boldsymbol{B}^\\top \\boldsymbol{x}: \\boldsymbol{x} \\in \\mathcal{X}\\} \\subseteq \\mathbb{R}^k$ and underlying reward parameters $\\boldsymbol{w}_1,\\dots,\\boldsymbol{w}_{M} \\in \\mathbb{R}^k$. This term only depends on the reduced dimension $k$, instead of $d$. \n\tIn other words, it is an essential price that is needed for solving $M$ low-dimensional tasks, even if one knows the feature extractor $\\boldsymbol{B}$. \n\tThe second term $(\\rho^E)^2 d k^4 L_{x}^2 L_{w}^2 D$, which depends on the raw dimension $d$, is a cost paid for learning the feature extractor. Note that since this term does not contain $M$, the cost for learning the underlying features is paid only once, rather than for all tasks. \\looseness=-1\n\t\n\tWhen $M \\gg d \\gg k$, the first term dominates the bound. 
This indicates that algorithm $\mathtt{DouExpDes}$ effectively learns the low-dimensional representation, and exploits the intrinsic problem structure to reduce the sample complexity from $\tilde{O}(\frac{M d}{\Delta_{\min}^2} \log (\frac{1}{\delta}))$ (i.e., learning each task independently)\n\tto only $\tilde{O}(\frac{M k}{\Delta_{\min}^2} \log (\frac{1}{\delta}))$. Our result corroborates the benefits of representation learning for multi-task pure exploration.\n\t\n\t\textbf{Analytical Novelty.}\n\tWe highlight several novelties in the analysis of Theorem~\ref{thm:bai_ub} as follows. (i) We develop a delicate concentration inequality for $\|\boldsymbol{Z} - \frac{1}{M} \sum_{m=1}^{M} \boldsymbol{\theta}_m \boldsymbol{\theta}_m^\top\|$, by bounding $\|(\sum_{i=1}^{p} \bar{\boldsymbol{x}}_i \bar{\boldsymbol{x}}_i^{\top})^{-1}\|$ via the E-optimality of the sample batch $\bar{\boldsymbol{x}}_1,\dots,\bar{\boldsymbol{x}}_p$ and applying the matrix Bernstein inequality with truncated noise. Then, we employ the Davis-Kahan $\sin \theta$ theorem~\cite{bhatia2013matrix} to bound the estimation error of $\hat{\boldsymbol{B}}$ incurred from the SVD decomposition on $\boldsymbol{Z}$.\n\t(ii) Furthermore, we decompose the prediction error $\boldsymbol{x}^\top (\hat{\boldsymbol{\theta}}_{m}-\boldsymbol{\theta}_m)$ into three parts, i.e., the sampling variance and bias for learning $\hat{\boldsymbol{w}}_{m}$ with the estimated representation $\hat{\boldsymbol{B}}^\top \boldsymbol{x}$, and the estimation error of $\hat{\boldsymbol{B}}$. 
(iii) To interpret our sample complexity by the essential instance-dependent hardness with true representation $\\boldsymbol{B}^\\top \\boldsymbol{x}$, we establish a connection between the actual effective dimension $\\| \\boldsymbol{B}^\\top \\boldsymbol{x} \\|^2_{\\sbr{ \\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) \\boldsymbol{B}^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\boldsymbol{B} }^{-1} }$ and our estimated effective dimension $\\| \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{x} \\|^2_{\\sbr{ \\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\hat{\\boldsymbol{B}} }^{-1} }$.\n\t\n\t\n\t\\vspace*{-0.5em}\n\t\\section{Representation Learning for Best Policy Identification in Contextual Linear Bandits}\n\t\n\t\n\tIn this section, we turn to contextual linear bandits.\n\tDifferent from prior contextual linear bandit works, e.g.,~\\cite{huang2015efficient,li2022instance}, here we do not assume any knowledge of context distribution. \n\tAs a result, our $\\textup{RepBPI-CLB}$ problem faces several unique challenges: (i) how to plan an efficient sample allocation for recovering the feature extractor in advance under an \\emph{unknown} context distribution, and (ii) how to construct an estimator for the feature extractor with a partially observed context space.\n\t\n\n\t\n\n\tWe propose algorithm $\\mathtt{C \\hyphen DouExpDes}$, which first (i) efficiently estimates the context distribution and conducts experimental designs under the estimated context distribution, and then (ii) builds a delicate estimator for the feature extractor using instantaneous contexts. 
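The two-sample trick inside $\mathtt{C \hyphen FeatRecover}$ (Algorithm~\ref{alg:con_feat_recover}) can be previewed with a small synthetic sketch (Python with numpy; feature draws, sizes, and the noise model are our own illustrative assumptions): because the two least-squares estimates of each round are formed from independent noise, their cross outer product is an unbiased estimate of $\boldsymbol{\theta}_m \boldsymbol{\theta}_m^\top$, so no explicit bias-correction term is required.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, M, p, T = 6, 2, 40, 30, 50  # illustrative sizes

# Hypothetical instance: theta_m = B w_m with orthonormal B.
B, _ = np.linalg.qr(rng.standard_normal((d, k)))
Theta = B @ rng.standard_normal((k, M))

def ls_estimate(theta):
    # One least-squares estimate from p fresh feature vectors
    # (Gaussian features stand in for phi(s, a) under random contexts).
    Phi = rng.standard_normal((p, d))
    rewards = Phi @ theta + rng.standard_normal(p)
    return np.linalg.solve(Phi.T @ Phi, Phi.T @ rewards)

# Cross-moment estimator: the two estimates per round use independent
# noise, so E[Z] = (1/M) sum_m theta_m theta_m^T without bias correction.
Z = np.zeros((d, d))
for m in range(M):
    for _ in range(T):
        Z += np.outer(ls_estimate(Theta[:, m]), ls_estimate(Theta[:, m]))
Z /= M * T

# Top-k left singular vectors of Z estimate the column span of B.
U, _, _ = np.linalg.svd(Z)
B_hat = U[:, :k]
err = np.linalg.norm((np.eye(d) - B_hat @ B_hat.T) @ B, 2)  # subspace error
```

This contrasts with $\mathtt{FeatRecover}$ in the non-contextual setting, which samples each batch once and instead subtracts the noise covariance term explicitly.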
\n\tMoreover, we also establish a sample complexity guarantee for $\\mathtt{C \\hyphen DouExpDes}$, which mainly depends on the low dimension of the common representation among tasks.\n\t\n\t\\vspace*{-0.5em}\n\t\\subsection{Algorithm $\\mathtt{C \\hyphen DouExpDes}$}\n\t\n\t\\begin{algorithm}[t]\n\t\t\\caption{$\\mathtt{C \\hyphen DouExpDes}$ (Contextual Double Experimental Design)} \\label{alg:repbpiclb}\n\t\t\\begin{algorithmic}[1]\n\t\t\t\\STATE {\\bfseries Input:} $\\delta$, $\\varepsilon$, $\\boldsymbol{\\phi}(\\cdot,\\cdot)$, regularization parameter $\\gamma \\geq 1$, rounding procedure $\\mathtt{ROUND}$, rounding approximation parameter $\\zeta:=\\frac{1}{10}$, and the size of sample batch $p:= \\lceil \\frac{c_2 (1+\\zeta)^2 L_{\\phi}^4}{\\nu^2} \\textup{polylog}(\\zeta,M,d,k,L_{\\phi},L_{w},\\gamma,\\frac{1}{\\nu},\\frac{1}{\\delta},\\frac{1}{\\varepsilon}) \\rceil$, where $c_2$ is an absolute constant. \\label{line:bpi_value_p}\n\t\t\n\t\t\n\t\t\t\\STATE $T_0 \\leftarrow \\lceil \\frac{32^2 (1+\\zeta)^2 L_{\\phi}^4}{\\nu^2} \\log^2 (\\frac{20d |\\mathcal{A}|}{\\delta}) \\rceil$. 
$\\hat{\\mathcal{D}} \\leftarrow \\emptyset$\n\t\t\n\t\t\t\\FOR{$\\tau \\in [T_0]$} \n\t\t\t\\STATE \\hspace*{-0.4em} Observe context $s_{\\tau}$, and randomly sample an action \\label{line:bpi_estimate_context_dis}\\;\n\t\t\t\\STATE $\\hat{\\mathcal{D}} \\leftarrow \\hat{\\mathcal{D}} \\cup \\{s_{\\tau}\\}$\\;\n\t\t\t\\ENDFOR\n\t\t\t\\STATE Let $\\boldsymbol{\\lambda}^{E}_{\\hat{\\mathcal{D}}}$ and $\\rho^{E}_{\\hat{\\mathcal{D}}}$ be the optimal solution and the optimal value of the E-optimal design optimization: \n\t\t\t$$\n\t\t\t\\min_{\\boldsymbol{\\lambda} \\in \\triangle_{\\mathcal{A}}} \\Big\\| \\big( \\sum_{a \\in \\mathcal{A}} \\lambda(a) \\mathbb{E}_{s \\sim \\hat{\\mathcal{D}}}\\mbr{\\boldsymbol{\\phi}(s,a) \\boldsymbol{\\phi}(s,a)^\\top} \\big)^{-1} \\Big\\|\n\t\t\t\\vspace*{-1em}\n\t\t\t$$ \\label{line:bpi_E_optimal_design}\\\\\n\t\t\t\\STATE $\\{\\bar{a}_i\\}_{i \\in [p]} \\!\\! \\leftarrow \\!\\! \\mathtt{ROUND}(\\{(a, \\mathbb{E}_{\\! s \\sim \\hat{\\mathcal{D}} \\!\\!}\\mbr{\\boldsymbol{\\phi}(s,a) \\boldsymbol{\\phi}(s,a)^\\top})\\}_{a \\in \\mathcal{A}},$\\\\$ \\boldsymbol{\\lambda}^{E}_{\\hat{\\mathcal{D}}}, \\zeta, p)$ \\label{line:bpi_E_design_round}\\; \n\t\t\n\t\t\n\t\t\t\\STATE $T \\leftarrow \\lceil \\frac{c_3 (1+\\zeta)^2 k^4 L_{\\phi}^4 L_{w}^4 }{ M \\nu^2 \\varepsilon^2 } \\textup{polylog}(\\zeta,d,k,L_{\\phi},L_{w},\\gamma,\\frac{1}{\\nu},$\\\\$\\frac{1}{\\delta},\\frac{1}{\\varepsilon}) \\rceil$, where $c_3$ is an absolute constant \\label{line:bpi_value_T}\n\t\t\t\\STATE $\\hat{\\boldsymbol{B}} \\leftarrow \\mathtt{C \\hyphen FeatRecover}(T, \\{\\bar{a}_i\\}_{i \\in [p]})$\n\t\t\t\\STATE $N \\leftarrow \\lceil \\frac{ (k^2 + \\gamma k L_{w}^2) }{\\varepsilon^2} \\log^4 ({\\frac{ \\gamma k L_{w} }{\\varepsilon \\delta}} ) \\rceil$\\;\n\t\t\n\t\t\t\\STATE $\\{\\hat{\\boldsymbol{\\theta}}_{m,N}\\}_{m \\in [M]} \\leftarrow \\mathtt{EstLowRep}(N, \\gamma, \\hat{\\boldsymbol{B}})$\n\t\t\t\\STATE {\\bfseries return} 
$\\hat{\\pi}_m(\\cdot):=\\operatornamewithlimits{argmax}_{a \\in \\mathcal{A}} \\boldsymbol{\\phi}(\\cdot,a)^\\top \\hat{\\boldsymbol{\\theta}}_{m,N}$ for all tasks $m \\in [M]$ \\label{line:bpi_return}\n\t\t\\end{algorithmic}\n\t\\end{algorithm}\n\t\n\t\n\t\\begin{algorithm}[t]\n\t\t\\caption{$\\mathtt{C \\hyphen FeatRecover}(T, \\{\\bar{a}_i\\}_{i \\in [p]})$} \\label{alg:con_feat_recover}\n\t\t\\begin{algorithmic}[1]\n\t\t\t\\FOR{task $m \\in [M]$} \\label{line:bpi_stage2_sample_start}\n\t\t\t\\FOR{round $j \\in [T]$}\n\t\t\n\t\t\t\\FOR{arm $i \\in [p]$}\n\t\t\t\\STATE Observe context $s_{m,j,i}^{(1)}$, sample action $\\bar{a}_i$ in task $m$, and observe reward $\\alpha^{(1)}_{m,j,i}$\\; \\label{line:bpi_stage2_sample1}\n\t\t\t\\STATE Observe context $s_{m,j,i}^{(2)}$, sample action $\\bar{a}_i$ in task $m$, and observe reward $\\alpha^{(2)}_{m,j,i}$\\; \\label{line:bpi_stage2_sample2}\n\t\t\t\\ENDFOR\n\t\t\t\\STATE Let $\\boldsymbol{\\phi}^{(\\ell)}_{m,j,i}\\!:=\\!\\boldsymbol{\\phi}(s_{m,j,i}^{(\\ell)},\\bar{a}_i)$, $\\forall i \\!\\in\\! [p]$, $\\forall \\ell \\!\\in\\! \\{1,2\\}$\\;\n\t\t\t\\STATE $\\tilde{\\boldsymbol{\\theta}}^{(\\ell)}_{m,j} \\! \\leftarrow \\!\\! ( \\sum_{i=1}^{p} \\! \\boldsymbol{\\phi}^{(\\ell)}_{m,j,i} {\\boldsymbol{\\phi}^{(\\ell)}_{m,j,i}}^{\\!\\!\\!\\!\\top} \\!)^{-1} \\! \\sum_{i=1}^{p} \\! \\boldsymbol{\\phi}^{(\\ell)}_{m,j,i} \\alpha^{(\\ell)}_{m,j,i}$, $\\forall \\ell \\in \\{1,2\\}$\\;\n\t\t\t\\ENDFOR\n\t\t\n\t\t\t\\ENDFOR \\label{line:bpi_stage2_sample_end}\n\t\t\t\\STATE $\\boldsymbol{Z} \\leftarrow \\frac{1}{M T} \\sum_{m=1}^{M} \\sum_{j=1}^{T} \\tilde{\\boldsymbol{\\theta}}^{(1)}_{m,j} (\\tilde{\\boldsymbol{\\theta}}^{(2)}_{m,j})^\\top$ \\label{line:bpi_Z}\\;\n\t\t\t\\STATE Perform SVD decomposition on $\\boldsymbol{Z}$, and let $\\hat{\\boldsymbol{B}}$ be the top-$k$ left singular vectors. 
{\\bfseries return} $\\hat{\\boldsymbol{B}}$\\; \\label{line:bpi_svd}\n\t\t\\end{algorithmic}\n\t\\end{algorithm}\n\t\n\t\n\t\\begin{algorithm}[t]\n\t\t\\caption{$\\mathtt{EstLowRep}(N, \\gamma, \\hat{\\boldsymbol{B}})$} \\label{alg:est_low_rep}\n\t\t\\begin{algorithmic}[1]\n\t\t\t\\STATE $\\boldsymbol{\\Sigma}_{m,0} \\leftarrow \\gamma I$ for any $m \\in [M]$\\;\n\t\t\n\t\t\t\\FOR{task $m \\in [M]$}\n\t\t\t\\FOR{timestep $t \\in [N]$}\n\t\t\t\\STATE Observe context $s_{m,t}$\\; \\label{line:bpi_stage3_context}\n\t\t\t\\STATE $a_{m,t} \\leftarrow \\operatornamewithlimits{argmax}_{a \\in \\mathcal{A}} \\| \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s_{m,t},a) \\|_{\\boldsymbol{\\Sigma}_{m,t-1}^{-1}}$\\;\n\t\t\t\\STATE Sample action $a_{m,t}$, and observe reward $r_{m,t}$\\; \\label{line:bpi_stage3_sample}\n\t\t\t\\STATE $\\boldsymbol{\\Sigma}_{m,t} \\leftarrow \\boldsymbol{\\Sigma}_{m,t-1} +$\\\\$ \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s_{m,t},a_{m,t}) \\boldsymbol{\\phi}(s_{m,t},a_{m,t})^\\top \\hat{\\boldsymbol{B}}$\\;\n\t\t\t\\STATE $\\hat{\\boldsymbol{w}}_{m,t} \\leftarrow \\boldsymbol{\\Sigma}_{m,t}^{-1} \\sum_{\\tau=1}^{t} \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau}) r_{m,\\tau}$ \\label{line:bpi_est_w}\\;\n\t\t\t\\STATE $\\hat{\\boldsymbol{\\theta}}_{m,t} \\leftarrow \\hat{\\boldsymbol{B}} \\hat{\\boldsymbol{w}}_{m,t}$ \\label{line:bpi_est_theta}\\;\n\t\t\t\\ENDFOR\n\t\t\t\\ENDFOR\n\t\t\t\\STATE {\\bfseries return} $\\{\\hat{\\boldsymbol{\\theta}}_{m,N}\\}_{m \\in [M]}$\\;\n\t\t\\end{algorithmic}\n\t\\end{algorithm}\n\t\n\t\n\tAlgorithm~\\ref{alg:repbpiclb} presents the pseudo-code of $\\mathtt{C \\hyphen DouExpDes}$.\n\n\tAt the beginning, $\\mathtt{C \\hyphen DouExpDes}$ uses $T_0$ samples to estimate the context distribution $\\mathcal{D}$ (Line~\\ref{line:bpi_estimate_context_dis}). 
Then, it performs the E-optimal design under the estimated context distribution $\hat{\mathcal{D}}$, and obtains an efficient sample allocation $\boldsymbol{\lambda}^{E}_{\hat{\mathcal{D}}}$ for the purpose of recovering the feature extractor $\boldsymbol{B}$ (Line~\ref{line:bpi_E_optimal_design}). Further, $\mathtt{C \hyphen DouExpDes}$ calls the rounding procedure $\mathtt{ROUND}$ to transform $\boldsymbol{\lambda}^{E}_{\hat{\mathcal{D}}}$ into a sample batch $\bar{a}_1,\dots,\bar{a}_p$, such that\n\t\begin{align*}\n\t\t& \Big\| \big(\sum_{j=1}^p \mathbb{E}_{s \sim \hat{\mathcal{D}}}\mbr{\boldsymbol{\phi}(s,\bar{a}_j) \boldsymbol{\phi}(s,\bar{a}_j)^\top} \big)^{-1} \Big\| \n\t\t\\ \n\t\t\leq & (1+\zeta) \Big\| \big(p \sum_{a \in \mathcal{A}} \lambda^{E}_{\hat{\mathcal{D}}}(a) \mathbb{E}_{s \sim \hat{\mathcal{D}}}\mbr{\boldsymbol{\phi}(s,a) \boldsymbol{\phi}(s,a)^\top} \big)^{-1} \Big\| .\n\t\end{align*}\n\tThe specific values of $p$ and $T$ in Lines~\ref{line:bpi_value_p},~\ref{line:bpi_value_T} are provided in Eq.~\eqref{eq:value_p} of Appendix~\ref{apx:bpi_sample_batch_planning} and Eq.~\eqref{eq:value_T} of Appendix~\ref{apx:bpi_feat_recover}, respectively.\looseness=-1\n\t\n\tNext, $\mathtt{C \hyphen DouExpDes}$ runs subroutine $\mathtt{C \hyphen FeatRecover}$ to estimate the feature extractor $\boldsymbol{B}$ using the sample batch $\bar{a}_1,\dots,\bar{a}_p$.\n\tIn $\mathtt{C \hyphen FeatRecover}$ (Algorithm~\ref{alg:con_feat_recover}), we repeatedly sample $\bar{a}_1,\dots,\bar{a}_p$ in all tasks with random contexts. 
\n\tIn Lines~\ref{line:bpi_stage2_sample1}-\ref{line:bpi_stage2_sample2}, we sample this batch twice, and the superscripts $(1)$ and $(2)$ denote the first and second samples, respectively.\n\tAfter sampling, we carefully establish an estimator $\boldsymbol{Z}$ for the reward parameter related matrix $\frac{1}{M} \sum_{m=1}^{M} \boldsymbol{\theta}_m \boldsymbol{\theta}_m^\top$, using instantaneous context-action features $\boldsymbol{\phi}(s_{m,j,i}^{(\ell)},\bar{a}_{i})$.\n\tWe then perform SVD decomposition on $\boldsymbol{Z}$ to obtain the estimated feature extractor $\hat{\boldsymbol{B}}$ (Lines~\ref{line:bpi_Z}-\ref{line:bpi_svd}). \n\t\n\tThen, $\mathtt{C \hyphen DouExpDes}$ calls subroutine $\mathtt{EstLowRep}$, which adapts the reward-free exploration algorithm of \cite{zanette2021design} with low-rank representations to estimate $\boldsymbol{\theta}_m$.\n\tIn $\mathtt{EstLowRep}$ (Algorithm~\ref{alg:est_low_rep}), we employ the estimated representation $\hat{\boldsymbol{B}}^\top \boldsymbol{\phi}(s,a)$ to sample the actions with the maximum uncertainty under the observed contexts.\n\tAfter that, we construct estimators $\hat{\boldsymbol{w}}_{m,t}$ and $\hat{\boldsymbol{\theta}}_{m,t}$ for the prediction parameter $\boldsymbol{w}_m$ and reward parameter $\boldsymbol{\theta}_m$ (Lines~\ref{line:bpi_est_w}-\ref{line:bpi_est_theta}). \n\tAt last, $\mathtt{C \hyphen DouExpDes}$ returns the greedy policy with respect to the estimated reward parameter $\hat{\boldsymbol{\theta}}_{m,N}$ for each task. \n\t\n\t\n\t\n\t\subsection{Theoretical Performance of $\mathtt{C \hyphen DouExpDes}$}\n\t\n\tNext, we establish sample complexity guarantees for algorithm $\mathtt{C \hyphen DouExpDes}$. 
In order to illustrate the advantages of representation learning, we first review existing results for traditional single-task best policy identification in contextual linear bandits (BPI-CLB).\n\tFor a single BPI-CLB instance with context-action features $\boldsymbol{\phi}(s,a) \in \mathbb{R}^d$ and reward parameter $\boldsymbol{\theta} \in \mathbb{R}^{d}$, the best known sample complexity is $\tilde{O}(\frac{d^2}{\varepsilon^2}\log(\frac{1}{\delta}))$~\cite{zanette2021design,li2022instance}.\n\t\n\tClearly, if one naively solves the $\textup{RepBPI-CLB}$ problem by running single-task BPI-CLB algorithms to tackle $M$ tasks independently, one will have a sample complexity of\n\t\begin{align*}\n\t\t\vspace*{-0.5em}\n\t\t\tilde{O}\bigg( \frac{Md^2}{\varepsilon^2} \log \Big(\frac{1}{\delta}\Big) \bigg) ,\n\t\t\vspace*{-0.5em}\n\t\end{align*}\n\twhich heavily depends on the raw dimension $d$ of context-action features.\n\tThe goal of representation learning is to leverage the common representation among tasks to alleviate the dependency on the dimension and save samples.\n\t\n\tNow we present the sample complexity for $\mathtt{C \hyphen DouExpDes}$. 
\looseness=-1\n\t\n\t\begin{theorem} \label{thm:bpi_ub}\n\t\tWith probability at least $1-\delta$, $\mathtt{C \hyphen DouExpDes}$ returns an $\varepsilon$-optimal policy $\hat{\pi}_m$ such that\n\t\t$\n\t\t\mathbb{E}_{s \sim \mathcal{D}} [ \max_{a \in \mathcal{A}} (\boldsymbol{\phi}(s,a) - \boldsymbol{\phi}(s,\hat{\pi}_m(s)))^\top \boldsymbol{\theta}_m ] \leq \varepsilon\n\t\t$ \n\t\tfor each task $m \in [M]$, and the number of samples used is \n\t\t\begin{align*}\n\t\t\t\tilde{O} \sbr{ \frac{ M \sbr{k^2 + \gamma k L_{w}^2 } }{\varepsilon^2} + \frac{ k^4 L_{\phi}^8 L_{w}^4 }{ \nu^4 \varepsilon^2 } } .\n\t\t\end{align*}\n\t\end{theorem}\n\t\vspace*{-1em}\n\t\n\t\textbf{Remark 2.} The first term $\frac{ M (k^2 + \gamma k L_{w}^2) }{\varepsilon^2}$ is a cost of identifying optimal policies for $M$ tasks with the underlying $k$-dimensional representation $\boldsymbol{B}^\top \boldsymbol{\phi}(s,a)$. The second term $\frac{ k^4 L_{\phi}^8 L_{w}^4 }{ \nu^4 \varepsilon^2 }$ is a price paid for learning the global feature extractor $\boldsymbol{B}$ and does not depend on $M$, which indicates that we only need to pay this price once and then enjoy the benefits of dimension reduction for all $M$ tasks.\n\tTheorem~\ref{thm:bpi_ub} shows that when $M \gg \frac{1}{\nu} \gg k$, $\mathtt{C \hyphen DouExpDes}$ performs almost as well as an oracle that knows the inherent low-dimensional features $\boldsymbol{B}^\top \boldsymbol{\phi}(s,a)$. This sample complexity significantly outperforms the baseline result $\tilde{O}(\frac{M d^2}{\varepsilon^2})$ (i.e., solving $M$ tasks independently), and demonstrates the power of representation learning.\n\t\n\t\textbf{Innovations in Analysis.}\n\tThe proof of Theorem~\ref{thm:bpi_ub} has several innovations. 
(i) We carefully bound the deviation between the estimated context distribution $\hat{\mathcal{D}}$ and the true context distribution $\mathcal{D}$, and utilize the E-optimality of the sample batch $\bar{a}_1,\dots,\bar{a}_p$ to bound $\|(\sum_{i=1}^{p} \boldsymbol{\phi}^{(\ell)}_{m,j,i} {\boldsymbol{\phi}^{(\ell)}_{m,j,i}}^\top)^{-1}\|$. (ii) A concentration inequality for $\|\boldsymbol{Z}-\frac{1}{M} \sum_{m=1}^{M} \boldsymbol{\theta}_m \boldsymbol{\theta}_m^\top\|$ is established, using the bounded $\|(\sum_{i=1}^{p} \boldsymbol{\phi}^{(\ell)}_{m,j,i} {\boldsymbol{\phi}^{(\ell)}_{m,j,i}}^\top)^{-1}\|$ and the matrix Bernstein inequality with truncated noise. Furthermore, an estimation error guarantee for $\hat{\boldsymbol{B}}$ is derived by applying the SVD perturbation analysis. (iii) We provide a decomposition of the prediction error $\boldsymbol{\phi}(s,a)^\top (\hat{\boldsymbol{\theta}}_{m,t} - \boldsymbol{\theta}_m)$ into three components, including the variance and bias incurred in the sampling for learning $\hat{\boldsymbol{w}}_{m,t}$, and the estimation error of $\hat{\boldsymbol{B}}$. Then, we bound them via self-normalized concentration inequalities with reduced dimension $k$.\n\t\n\t\section{Experiments}\n\t\n\tIn this section, we present experiments to evaluate the empirical performance of our algorithms.\n\t\n\tIn our experiments, we set $\delta=0.005$, $d=5$, $k=2$ and $M \in [50,230]$, where $k$ divides $M$. In $\textup{RepBAI-LB}$, $\mathcal{X}$ is the canonical basis of $\mathbb{R}^{d}$. 
\n\tIn $\\textup{RepBPI-CLB}$, we set $\\varepsilon=0.1$, $|\\mathcal{S}|=5$ and $|\\mathcal{A}|=5$.\n\t$\\mathcal{D}$ is the uniform distribution on $\\mathcal{S}$. For any $s \\in \\mathcal{S}$, $\\{\\boldsymbol{\\phi}(s,a)\\}_{a \\in \\mathcal{A}}$ is the canonical basis of $\\mathbb{R}^d$. \n\tIn both problems, $\\boldsymbol{B}=[I_k;\\boldsymbol{0}]$, where $I_k$ denotes the $k \\times k$ identity matrix. \n\t$\\boldsymbol{w}_1,\\dots,\\boldsymbol{w}_M$ are divided into $k$ groups, with $\\frac{M}{k}$ same members in each group. \n\tThe members in the $i$-th group ($i \\in [k]$), i.e., $\\boldsymbol{w}_{(M\/k)\\times(i-1)+1},\\dots,\\boldsymbol{w}_{(M\/k)\\times i}$, have $1$ in the $i$-th coordinate and $0$ in all other coordinates. \n\n\n\tFor any $m \\in [M]$, $\\boldsymbol{\\theta}_m=\\boldsymbol{B} \\boldsymbol{w}_m$. We vary $M$ and perform $50$ independent runs to report the average sample complexity across runs.\n\t\n\t\n\t\\begin{figure}[t]\n\t\t\\centering \n\t\t\\subfigure[$\\textup{RepBAI-LB}$] { \n\t\t\t\\label{fig:bai} \n\t\t\t\\includegraphics[width=0.45\\columnwidth]{fig\/BAI.pdf} \n\t\t} \n\t\t\\subfigure[$\\textup{RepBPI-CLB}$] { \n\t\t\t\\label{fig:bpi} \n\t\t\t\\includegraphics[width=0.45\\columnwidth]{fig\/BPI.pdf}\n\t\t}\n\t\t\\vspace*{-1em}\n\t\t\\caption{Experimental results for $\\textup{RepBAI-LB}$ and $\\textup{RepBPI-CLB}$. The two figures compare the sample complexities of our algorithms with the naive algorithms which treat each task independently.\n\t\t}\n\t\t\\label{fig:experiments}\n\t\\end{figure} \n\t\n\tFor $\\textup{RepBAI-LB}$, we compare algorithm $\\mathtt{DouExpDes}$ with the baseline $\\mathtt{IndRAGE}$ which runs the state-of-the-art single-task BAI-LB algorithm $\\mathtt{RAGE}$~\\cite{fiez2019sequential} to solve $M$ tasks independently.\n\tFigure~\\ref{fig:bai} shows the empirical results for $\\textup{RepBAI-LB}$. 
From Figure~\\ref{fig:bai}, we can see that $\\mathtt{DouExpDes}$ has a better sample complexity than $\\mathtt{IndRAGE}$, and as the number of tasks $M$ increases, the sample complexity of $\\mathtt{DouExpDes}$ increases at a lower rate than that of $\\mathtt{IndRAGE}$. This demonstrates that $\\mathtt{DouExpDes}$ effectively utilize the shared representation among tasks to reduce the number of samples needed for multi-task learning.\n\t\n\tFor $\\textup{RepBPI-CLB}$, our algorithm $\\mathtt{C \\hyphen DouExpDes}$ is compared with the baseline $\\mathtt{IndRFLinUCB}$, which tackles $M$ tasks independently by calling the state-of-the-art single-task BPI-CLB algorithm $\\mathtt{Reward \\hyphen free\\ LinUCB}$~\\cite{zanette2021design}. As presented in Figure~\\ref{fig:bpi}, $\\mathtt{C \\hyphen DouExpDes}$ achieves a significantly lower sample complexity than $\\mathtt{IndRFLinUCB}$. In addition, the slope of the sample complexity curve of $\\mathtt{C \\hyphen DouExpDes}$ with respect to $M$ is much smaller than that of $\\mathtt{IndRFLinUCB}$, which validates that $\\mathtt{C \\hyphen DouExpDes}$ enjoys a lighter dependency on dimension in multi-task learning. These empirical results match our theoretical bounds, and corroborate the power of representation learning. \n\t\n\t\\section{Conclusion and Future Work}\n\t\n\tIn this paper, we investigate representation learning for pure exploration in multi-task (contextual) linear bandits. We propose two efficient algorithms which conduct double experimental designs to optimally allocate samples for learning the low-rank representation. The sample complexities of our algorithms mainly depend on the low dimension of the underlying joint representation among tasks, instead of the raw high dimension. Our theoretical and experimental results demonstrate the benefit of representation learning for pure exploration in multi-task bandits.\n\n\tThere are many interesting directions for further exploration. 
One direction is to establish lower bounds to validate the optimality of our algorithms. Another direction is to extend this work to more complex (nonlinear) representation settings.\n\t\n\t\n\t\n\t\n\t\n\\section{Related Work} \\label{apx:related_work}\n\nIn this section, we present a full literature review for two lines of related works, i.e., representation learning and pure exploration in (contextual) linear bandit.\n\n\\textbf{Representation Learning.}\nThe study of representation learning has been initiated and developed in the supervised learning setting, e.g., \\cite{baxter2000model,ben2003exploiting,ando2005framework,maurer2006bounds,cavallanti2010linear,maurer2016benefit,du2020few,tripuraneni2021provable}. Recently, representation learning for sequential decision making has attracted extensive attention, e.g., ~\\cite{yang2020impact,yang2022nearly,hu2021near,cella2022meta,cella2022multi,qin2022non}.\n\\citet{tripuraneni2021provable} propose a method-of-moments estimator for recovering the feature extractor, and establish error guarantees for transferring the learned representation from past tasks to a new task.\n\\citet{yang2020impact,yang2022nearly} study representation learning for linear bandit with the regret minimization objective, and assume that the action set at each timestep is an ellipsoid or sphere. \\citet{hu2021near} further relax this assumption and allow arbitrary action sets, but their algorithms equipped with a multi-task joint least-square estimator are computationally inefficient. \\citet{cella2022meta,cella2022multi} also investigate the problem in \\cite{yang2020impact} and propose algorithms which do not need to know the dimension of the underlying representation. 
\n\\citet{qin2022non} study representation learning for linear bandit in non-stationary environments, and develop algorithms that learn and transfer non-stationary representations adaptively.\nDifferent from the above works, which consider regret minimization, we study representation learning for (contextual) linear bandit with the pure exploration objective, which imposes unique challenges in how to optimally allocate samples to learn the feature extractor, and motivates us to design algorithms building upon double experimental designs.\n\n\\textbf{Pure Exploration in (Contextual) Linear Bandit.}\nMost linear bandit studies consider regret minimization, e.g.,~\\cite{dani2008stochastic,rusmevichientong2010linearly,chu2011contextual,abbasi2011improved}. Recently, there has been a surge of interest in pure exploration for (contextual) linear bandit, e.g., \\cite{soare2014best,tao2018best,xu2018fully,fiez2019sequential,katz2020empirical,degenne2020gamification,jedra2020optimal,du2021combinatorial,zanette2021design,li2022instance}. \nFor linear bandit, \\citet{soare2014best} first apply the G-optimal design to identify the best arm, and provide a sample complexity result that heavily depends on the minimum reward gap.\n\\citet{tao2018best} design a novel randomized estimator for the underlying reward parameter, and achieve a tighter sample complexity which depends on the reward gaps of the best $d$ arms. \\citet{du2021combinatorial} further extend the algorithm in \\cite{tao2018best} to develop a polynomial-time algorithm for combinatorially large arm sets. \\citet{xu2018fully} propose a fully-adaptive algorithm which changes the arm selection strategy at each timestep. \\citet{fiez2019sequential} establish the first near-optimal sample complexity upper and lower bounds for best arm identification in linear bandit. 
\\citet{katz2020empirical} further extend the algorithm in \\cite{fiez2019sequential} and use empirical processes to avoid an explicit union bound over the number of arms. \\citet{degenne2020gamification,jedra2020optimal} develop asymptotically optimal algorithms using the track-and-stop approaches. For contextual linear bandit, \\citet{zanette2021design} design a single non-adaptive policy to collect a dataset, from which a near-optimal policy can be computed. \\citet{li2022instance} build the first instance-dependent upper and lower bounds for best policy identification in contextual linear bandit, with the prior knowledge of the context distribution. By contrast, our work studies multi-task best arm\/policy identification in (contextual) linear bandit with a shared representation among tasks, and does not assume any prior knowledge of the context distribution.\n\n\n\\section{Rounding Procedure} \\label{apx:rounding_procedure}\n\nIn this section, we introduce the rounding procedure $\\mathtt{ROUND}$ in detail.\n\nLet $\\mathcal{X}^{+}:=\\mathcal{X} \\cup \\mathcal{A}$ denote the union space of arm set $\\mathcal{X}$ and action space $\\mathcal{A}$. There are $n$ arms or actions $p_1,\\dots,p_n \\in \\mathcal{X}^{+}$ and $n$ positive semi-definite matrices $\\boldsymbol{Q}_1,\\dots,\\boldsymbol{Q}_n \\in \\mathbb{S}^{d}_{+}$, where $\\boldsymbol{Q}_i$ represents the feature of arm or action $p_i$ for any $i \\in [n]$. 
Denote $\\mathcal{P}:=\\{p_1,\\dots,p_n\\}$ and $\\mathcal{Q}:=\\{\\boldsymbol{Q}_1,\\dots,\\boldsymbol{Q}_n\\}$.\n\nThe rounding procedure $\\mathtt{ROUND}(\\{(p_i,\\boldsymbol{Q}_i)\\}_{i=1}^{n}, \\boldsymbol{\\lambda}, \\zeta, N)$~\\cite{allen2017near,fiez2019sequential} takes as inputs $n$ arm-matrix or action-matrix pairs $(p_1,\\boldsymbol{Q}_1),\\dots,(p_n,\\boldsymbol{Q}_n) \\in \\mathcal{X}^{+} \\times \\mathbb{S}^{d}_{+}$, a distribution $\\boldsymbol{\\lambda} \\in \\triangle_{\\mathcal{P}}$ (or equivalently, $\\boldsymbol{\\lambda} \\in \\triangle_{\\mathcal{Q}}$), an approximation parameter $\\zeta>0$, and a number of samples $N$ which satisfies $N \\geq \\frac{180d}{\\zeta^2}$. \nRoughly speaking, it finds an $N$-length discrete arm or action sequence whose associated feature matrices maintain properties (e.g., G-optimality and E-optimality) similar to those of the continuous sample allocation $\\boldsymbol{\\lambda}$. \n\nFormally, $\\mathtt{ROUND}(\\{(p_i,\\boldsymbol{Q}_i)\\}_{i=1}^{n}, \\boldsymbol{\\lambda}, \\zeta, N)$ returns a discrete sample sequence $s_1, \\dots, s_N \\in \\mathcal{P}^{N}$ associated with feature matrices $\\boldsymbol{S}_1, \\dots, \\boldsymbol{S}_N \\in \\mathcal{Q}^{N}$, which satisfy the following properties: \n\n(i) If $\\boldsymbol{\\lambda}$ is an E-optimal design, i.e., $\\boldsymbol{\\lambda}$ is the optimal solution of the optimization\n$$\n\\min_{\\boldsymbol{\\lambda} \\in \\triangle_{\\mathcal{Q}}} \\nbr{\\sbr{\\sum_{i=1}^n \\lambda(\\boldsymbol{Q}_i) \\boldsymbol{Q}_i}^{-1}} ,\n$$\nthen $\\boldsymbol{S}_1, \\dots, \\boldsymbol{S}_N$ satisfy that\n\\begin{align*}\n\t\\nbr{\\sbr{\\sum_{j=1}^N \\boldsymbol{S}_j }^{-1}} \\leq (1+\\zeta) \\nbr{\\sbr{N \\sum_{i=1}^n \\lambda(\\boldsymbol{Q}_i) \\boldsymbol{Q}_i }^{-1}} .\n\\end{align*}\n\n(ii) If $\\boldsymbol{\\lambda}$ is a G-optimal design, i.e., for a given prediction set $\\mathcal{Y} \\subseteq \\mathbb{R}^d$, $\\boldsymbol{\\lambda}$ is the optimal solution of the 
optimization\n$$\n\\min_{\\boldsymbol{\\lambda} \\in \\triangle_{\\mathcal{Q}}} \\max_{\\boldsymbol{y} \\in \\mathcal{Y}} \\nbr{\\boldsymbol{y}}^2_{\\sbr{\\sum_{i=1}^n \\lambda(\\boldsymbol{Q}_i) \\boldsymbol{Q}_i}^{-1}} ,\n$$\nthen $\\boldsymbol{S}_1, \\dots, \\boldsymbol{S}_N$ satisfy that\n\\begin{align*}\n\t\\max_{\\boldsymbol{y} \\in \\mathcal{Y}} \\nbr{\\boldsymbol{y}}^2_{\\sbr{\\sum_{j=1}^{N} \\boldsymbol{S}_j}^{-1}} \\leq (1+\\zeta) \\max_{\\boldsymbol{y} \\in \\mathcal{Y}} \\nbr{\\boldsymbol{y}}^2_{\\sbr{N \\sum_{i=1}^{n} \\lambda(\\boldsymbol{Q}_i) \\boldsymbol{Q}_i}^{-1}} .\n\\end{align*}\n\n\nWe implement $\\mathtt{ROUND}$ by setting $\\boldsymbol{\\pi}^*=N\\boldsymbol{\\lambda}$, $k=r=N$ and $\\boldsymbol{x}_i \\boldsymbol{x}_i^\\top = (\\sum_{i=1}^{n} \\pi^*(\\boldsymbol{Q}_i) \\boldsymbol{Q}_i)^{-\\frac{1}{2}} \\boldsymbol{Q}_i (\\sum_{i=1}^{n} \\pi^*(\\boldsymbol{Q}_i) \\boldsymbol{Q}_i)^{-\\frac{1}{2}}$ for any $i \\in [n]$ in Algorithm 1 of \\cite{allen2017near}. \nNote that Algorithm 1 in \\cite{allen2017near} only needs to access the feature matrix $\\boldsymbol{x}_i \\boldsymbol{x}_i^\\top$ rather than the separate feature vector $\\boldsymbol{x}_i$, which allows us to apply it to our problem. We refer interested readers to \\cite{allen2017near} and Appendix B in \\cite{fiez2019sequential} for more implementation details of this rounding procedure.\n\n\n\\section{Proofs for Algorithm $\\mathtt{DouExpDes}$}\n\nIn this section, we provide the proofs for Algorithm~$\\mathtt{DouExpDes}$. \n\nThroughout our proofs, we use $L_{\\theta}$ to denote the upper bound of $\\|\\boldsymbol{\\theta}_m\\|$ for any $m \\in [M]$. 
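To complement the description of the rounding procedure in Appendix~\ref{apx:rounding_procedure}, here is a minimal numerical sketch of computing an E-optimal design and discretizing it. The Frank--Wolfe solver and the proportional rounding below are simplified stand-ins of our own choosing; they are not the actual $\mathtt{ROUND}$ procedure of \cite{allen2017near} and do not carry its $(1+\zeta)$ guarantee:

```python
import numpy as np

def e_optimal_design(X, iters=500):
    """Frank-Wolfe sketch of the E-optimal design: maximize the minimum
    eigenvalue of A(lam) = sum_i lam_i x_i x_i^T over the simplex, which is
    equivalent to minimizing the spectral norm of A(lam)^{-1}."""
    n, _ = X.shape
    lam = np.full(n, 1.0 / n)
    for t in range(iters):
        A = X.T @ (lam[:, None] * X)
        _, evecs = np.linalg.eigh(A)   # eigenvalues in ascending order
        v = evecs[:, 0]                # eigenvector of the minimum eigenvalue
        grads = (X @ v) ** 2           # subgradient of lambda_min w.r.t. lam
        j = int(np.argmax(grads))      # Frank-Wolfe vertex
        gamma = 2.0 / (t + 2)
        lam *= 1.0 - gamma
        lam[j] += gamma
    return lam

def naive_round(lam, N):
    """Proportional rounding: floor N*lam, then hand the remaining pulls to
    the largest fractional parts. A stand-in only; the actual ROUND of
    Allen-Zhu et al. achieves the (1 + zeta) approximation guarantee."""
    counts = np.floor(N * lam).astype(int)
    frac = N * lam - counts
    for j in np.argsort(-frac)[: N - counts.sum()]:
        counts[j] += 1
    return counts

# Canonical-basis arms: the E-optimal allocation is uniform.
X = np.eye(3)
lam = e_optimal_design(X)
counts = naive_round(lam, 30)
```

On canonical-basis arms the information matrix is $\mathrm{diag}(\boldsymbol{\lambda})$, so the E-optimal allocation is uniform and the rounded counts split the $N$ pulls nearly evenly; the condition $N \geq \frac{180d}{\zeta^2}$ in the actual procedure ensures the discretization error stays within the $(1+\zeta)$ factor.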
\nSince $\\boldsymbol{\\theta}_m=\\boldsymbol{B}\\boldsymbol{w}_m$ for any $m \\in [M]$, we have that $\\|\\boldsymbol{\\theta}_m\\| \\leq \\|\\boldsymbol{B}\\| \\|\\boldsymbol{w}_m\\| \\leq \\|\\boldsymbol{w}_m\\| \\leq L_{w}$, and thus $L_{\\theta} \\leq L_{w}$.\n\n\\subsection{Sample Batch Planning}\n\nRecall that\n\\begin{align*}\n\t\\boldsymbol{\\lambda}^{E} := & \\operatornamewithlimits{argmin}_{\\boldsymbol{\\lambda} \\in \\triangle_{\\mathcal{X}}} \\nbr{\\sbr{\\sum_{i=1}^n \\lambda(\\boldsymbol{x}_i) \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top}^{-1}} \n\\end{align*}\nand\n\\begin{align*}\n\t\\rho^{E} := & \\min_{\\boldsymbol{\\lambda} \\in \\triangle_{\\mathcal{X}}} \\nbr{\\sbr{\\sum_{i=1}^n \\lambda(\\boldsymbol{x}_i) \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top}^{-1}} \n\\end{align*}\nare the optimal solution and the optimal value of the E-optimal design optimization, respectively (Line~\\ref{line:bai_E_optimal_design} in Algorithm~\\ref{alg:repbailb}). $\\bar{\\boldsymbol{x}}_1,\\dots,\\bar{\\boldsymbol{x}}_p$ is an arm sequence generated according to sample allocation $\\boldsymbol{\\lambda}^{E}$ via rounding procedure $\\mathtt{ROUND}$ (Line~\\ref{line:bai_E_optimal_round} in Algorithm~\\ref{alg:repbailb}).\n\nLet \n$$\n\\boldsymbol{X}_{\\textup{batch}}:=\n\\begin{bmatrix}\n\t\\bar{\\boldsymbol{x}}_1^\\top\n\t\\\\\n\t\\dots\n\t\\\\\n\t\\bar{\\boldsymbol{x}}_p^\\top\n\\end{bmatrix} ,\n$$\nand \n$$\n\\boldsymbol{X}_{\\textup{batch}}^{+} := (\\boldsymbol{X}_{\\textup{batch}}^\\top \\boldsymbol{X}_{\\textup{batch}})^{-1} \\boldsymbol{X}_{\\textup{batch}}^{\\top} .\n$$\nAccording to the fact that $\\mathcal{X}$ spans $\\mathbb{R}^d$, the definition of E-optimal design and the guarantee of $\\mathtt{ROUND}$, we have that $\\boldsymbol{X}_{\\textup{batch}}^\\top \\boldsymbol{X}_{\\textup{batch}}$ is invertible.\n\nNow, we first give an upper bound of $\\|\\boldsymbol{X}_{\\textup{batch}}^{+}\\|$.\n\n\\begin{lemma} \\label{lemma:X_batch_ub}\n\n\tIt holds 
that\n\t\\begin{align*}\n\t\t\\|\\boldsymbol{X}_{\\textup{batch}}^{+}\\| \\leq \\sqrt{\\frac{(1+\\zeta) \\rho^{E}}{p}} .\n\t\\end{align*}\n\\end{lemma}\n\\begin{proof}[Proof of Lemma~\\ref{lemma:X_batch_ub}]\n\tWe have\n\t\\begin{align*}\n\t\t\\|\\boldsymbol{X}_{\\textup{batch}}^{+}\\| = & \\nbr{\\sbr{\\boldsymbol{X}_{\\textup{batch}}^{\\top} \\boldsymbol{X}_{\\textup{batch}}}^{-1} \\boldsymbol{X}_{\\textup{batch}}^{\\top}}\n\t\t\\\\\n\t\t= & \\sqrt{ \\nbr{\\sbr{\\boldsymbol{X}_{\\textup{batch}}^{\\top} \\boldsymbol{X}_{\\textup{batch}}}^{-1} \\boldsymbol{X}_{\\textup{batch}}^{\\top} \\boldsymbol{X}_{\\textup{batch}} \\sbr{\\boldsymbol{X}_{\\textup{batch}}^{\\top} \\boldsymbol{X}_{\\textup{batch}}}^{-1}} }\n\t\t\\\\\n\t\t= & \\sqrt{ \\nbr{\\sbr{\\boldsymbol{X}_{\\textup{batch}}^{\\top} \\boldsymbol{X}_{\\textup{batch}}}^{-1}} }\n\t\t\\\\\n\t\t= & \\sqrt{ \\nbr{\\sbr{\\sum_{i=1}^{p} \\bar{\\boldsymbol{x}}_i \\bar{\\boldsymbol{x}}_i^{\\top}}^{-1}} }\n\t\t\\\\\n\t\t\\leq & \\sqrt{(1+\\zeta) \\nbr{\\sbr{p \\sum_{i=1}^{n} \\lambda^{E}(\\boldsymbol{x}_i) \\boldsymbol{x}_i \\boldsymbol{x}_i^{\\top}}^{-1}} }\n\t\t\\\\\n\t\t= & \\sqrt{\\frac{(1+\\zeta) \\rho^{E}}{p}} .\n\t\\end{align*}\n\\end{proof}\n\n\n\\subsection{Global Feature Extractor Recovery} \\label{apx:bai_feature_recover}\n\nFor clarity of notation, we add subscript $t$ to the notations in subroutine $\\mathtt{FeatRecover}$ to denote the quantities generated in phase $t$. 
Specifically, we use $\\alpha_{t,m,j,i}$, $\\tilde{\\boldsymbol{\\theta}}_{t,m,j}$, $\\boldsymbol{Z}_t$ and $\\hat{\\boldsymbol{B}}_t$ to denote the random reward, the estimator of the reward parameter, the estimator of $\\frac{1}{M} \\sum_{m=1}^{M} \\boldsymbol{\\theta}_m \\boldsymbol{\\theta}_m^\\top$ and the estimator of the feature extractor in phase $t$, respectively.\n\n\nFor any phase $t>0$, task $m \\in [M]$, round $j \\in [T_t]$ and arm $i \\in [p]$, let $\\eta_{t,m,j,i}$ denote the noise of the sample on arm $\\bar{\\boldsymbol{x}}_i$ in the $j$-th round for task $m$, during the execution of $\\mathtt{FeatRecover}$ in phase $t$ (Line~\\ref{line:bai_stage2_sample} in Algorithm~\\ref{alg:feat_recover}). The noise $\\eta_{t,m,j,i}$ is zero-mean and $1$-sub-Gaussian with variance $1$, and the noises $\\eta_{t,m,j,i}$ are independent across different $t,m,j,i$.\n\nFor any phase $t>0$, task $m \\in [M]$ and round $j \\in [T_t]$, let $\\boldsymbol{\\alpha}_{t,m,j}:=[\\alpha_{t,m,j,1},\\dots,\\alpha_{t,m,j,p}]^\\top$.\nThen, we have that\n$$\n\\tilde{\\boldsymbol{\\theta}}_{t,m,j} = \\boldsymbol{X}_{\\textup{batch}}^{+} \\boldsymbol{\\alpha}_{t,m,j} ,\n$$\nand\n$$\n\\boldsymbol{Z}_t = \\frac{1}{M T_t} \\sum_{m=1}^{M} \\sum_{j=1}^{T_t} \\tilde{\\boldsymbol{\\theta}}_{t,m,j} (\\tilde{\\boldsymbol{\\theta}}_{t,m,j})^\\top - \\boldsymbol{X}_{\\textup{batch}}^{+} (\\boldsymbol{X}_{\\textup{batch}}^{+})^\\top .\n$$\n\n\n\\begin{lemma}[Expectation of $\\boldsymbol{Z}_t$] \\label{lemma:bai_expectation_Z_t}\n\tIt holds that\n\t\\begin{align*}\n\t\t\\mathbb{E} \\mbr{ \\boldsymbol{Z}_t } = \\frac{1}{M} \\sum_{m=1}^{M} \\boldsymbol{\\theta}_m \\boldsymbol{\\theta}_m^\\top .\n\t\\end{align*}\n\\end{lemma}\n\\begin{proof}[Proof of Lemma~\\ref{lemma:bai_expectation_Z_t}]\n\t$\\boldsymbol{Z}_t$ can be written as\n\t\\begin{align}\n\t\t\\boldsymbol{Z}_t = & \\frac{1}{M T_t} \\sum_{m=1}^{M} \\sum_{j=1}^{T_t} \\tilde{\\boldsymbol{\\theta}}_{t,m,j} (\\tilde{\\boldsymbol{\\theta}}_{t,m,j})^\\top - 
\\boldsymbol{X}_{\\textup{batch}}^{+} (\\boldsymbol{X}_{\\textup{batch}}^{+})^\\top \n\t\t\\nonumber\\\\\n\t\t= & \\frac{1}{M T_t} \\sum_{m=1}^{M} \\sum_{j=1}^{T_t} \\boldsymbol{X}_{\\textup{batch}}^{+} \\begin{bmatrix}\n\t\t\t\\alpha_{t,m,j,1}\n\t\t\t\\\\\n\t\t\t\\vdots\n\t\t\t\\\\\n\t\t\t\\alpha_{t,m,j,p} \n\t\t\\end{bmatrix} \n\t\t[\\alpha_{t,m,j,1}, \\dots, \\alpha_{t,m,j,p}]^\\top (\\boldsymbol{X}_{\\textup{batch}}^{+})^\\top\n\t\t- \\boldsymbol{X}_{\\textup{batch}}^{+} (\\boldsymbol{X}_{\\textup{batch}}^{+})^\\top \n\t\t\\nonumber\\\\\n\t\t= & \\frac{1}{M T_t} \\sum_{m=1}^{M} \\sum_{j=1}^{T_t} \\boldsymbol{X}_{\\textup{batch}}^{+} \\begin{bmatrix}\n\t\t\t\\bar{\\boldsymbol{x}}_1^\\top \\boldsymbol{\\theta}_m + \\eta_{t,m,j,1}\n\t\t\t\\\\\n\t\t\t\\vdots\n\t\t\t\\\\\n\t\t\t\\bar{\\boldsymbol{x}}_p^\\top \\boldsymbol{\\theta}_m + \\eta_{t,m,j,p} \n\t\t\\end{bmatrix} \n\t\t[\\bar{\\boldsymbol{x}}_1^\\top \\boldsymbol{\\theta}_m + \\eta_{t,m,j,1}, \\dots,\n\t\t\\bar{\\boldsymbol{x}}_p^\\top \\boldsymbol{\\theta}_m + \\eta_{t,m,j,p} ]^\\top (\\boldsymbol{X}_{\\textup{batch}}^{+})^\\top\n\t\t- \\boldsymbol{X}_{\\textup{batch}}^{+} (\\boldsymbol{X}_{\\textup{batch}}^{+})^\\top \n\t\t\\nonumber\\\\\n\t\t= & \\frac{1}{M T_t} \\sum_{m=1}^{M} \\sum_{j=1}^{T_t} \\boldsymbol{X}_{\\textup{batch}}^{+} \n\t\t\\begin{bmatrix}\n\t\t\t(\\bar{\\boldsymbol{x}}_1^\\top \\boldsymbol{\\theta}_m + \\eta_{t,m,j,1})^2 & \\!\\!\\!\\!\\!\\cdots\\!\\!\\!\\!\\! & (\\bar{\\boldsymbol{x}}_1^\\top \\boldsymbol{\\theta}_m + \\eta_{t,m,j,1}) (\\bar{\\boldsymbol{x}}_p^\\top \\boldsymbol{\\theta}_m + \\eta_{t,m,j,p}) \\\\\n\t\t\t\\cdots\t& \\!\\!\\!\\!\\!\\cdots\\!\\!\\!\\!\\! & \\cdots \\\\\n\t\t\t(\\bar{\\boldsymbol{x}}_p^\\top \\boldsymbol{\\theta}_m + \\eta_{t,m,j,p})(\\bar{\\boldsymbol{x}}_1^\\top \\boldsymbol{\\theta}_m + \\eta_{t,m,j,1}) & \\!\\!\\!\\!\\!\\cdots\\!\\!\\!\\!\\! 
& (\\bar{\\boldsymbol{x}}_p^\\top \\boldsymbol{\\theta}_m + \\eta_{t,m,j,p})^2\n\t\t\\end{bmatrix}\n\t\t(\\boldsymbol{X}_{\\textup{batch}}^{+})^\\top\n\t\t\\nonumber\\\\& - \\boldsymbol{X}_{\\textup{batch}}^{+} (\\boldsymbol{X}_{\\textup{batch}}^{+})^\\top \n\t\t\\nonumber\\\\\n\t\n\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\n\t\n\t\n\t\t= & \\frac{1}{M T_t} \\sum_{m=1}^{M} \\sum_{j=1}^{T_t} \\boldsymbol{X}_{\\textup{batch}}^{+} \n\t\t\\Bigg(\n\t\t\\begin{bmatrix}\n\t\t\t(\\bar{\\boldsymbol{x}}_1^\\top \\boldsymbol{\\theta}_m)^2 & \\cdots & \\bar{\\boldsymbol{x}}_1^\\top \\boldsymbol{\\theta}_m \\bar{\\boldsymbol{x}}_p^\\top \\boldsymbol{\\theta}_m \\\\\n\t\t\t\\cdots\t& \\cdots & \\cdots \\\\\n\t\t\t\\bar{\\boldsymbol{x}}_1^\\top \\boldsymbol{\\theta}_m \\bar{\\boldsymbol{x}}_p^\\top \\boldsymbol{\\theta}_m & \\cdots & (\\bar{\\boldsymbol{x}}_p^\\top \\boldsymbol{\\theta}_m)^2 \n\t\t\\end{bmatrix}\n\t\t\\nonumber\\\\& +\n\t\t\\begin{bmatrix}\n\t\t\t2 \\bar{\\boldsymbol{x}}_1^\\top \\boldsymbol{\\theta}_m \\eta_{t,m,j,1} & \\cdots & \\bar{\\boldsymbol{x}}_1^\\top \\boldsymbol{\\theta}_m \\eta_{t,m,j,p} + \\bar{\\boldsymbol{x}}_p^\\top \\boldsymbol{\\theta}_m \\eta_{t,m,j,1} \\\\\n\t\t\t\\cdots\t& \\cdots & \\cdots \\\\\n\t\t\t\\bar{\\boldsymbol{x}}_1^\\top \\boldsymbol{\\theta}_m \\eta_{t,m,j,p} + \\bar{\\boldsymbol{x}}_p^\\top \\boldsymbol{\\theta}_m \\eta_{t,m,j,1} & \\cdots & 2 \\bar{\\boldsymbol{x}}_p^\\top \\boldsymbol{\\theta}_m \\eta_{t,m,j,p}\n\t\t\\end{bmatrix}\n\t\t\\nonumber\\\\& +\n\t\t\\begin{bmatrix}\n\t\t\t(\\eta_{t,m,j,1})^2 & \\cdots & \\eta_{t,m,j,1} \\eta_{t,m,j,p} \\\\\n\t\t\t\\cdots\t& \\cdots & \\cdots \\\\\n\t\t\t\\eta_{t,m,j,1} \\eta_{t,m,j,p} & \\cdots & (\\eta_{t,m,j,p})^2\n\t\t\\end{bmatrix}\n\t\t\\Bigg)\n\t\t(\\boldsymbol{X}_{\\textup{batch}}^{+})^\\top\n\t\t- \\boldsymbol{X}_{\\textup{batch}}^{+} (\\boldsymbol{X}_{\\textup{batch}}^{+})^\\top . 
\\label{eq:Z_t_decompose}\n\t\\end{align}\n\t\n\n\tThen, taking the expectation on $\\boldsymbol{Z}_t$, we have\n\t\\begin{align*}\n\t\t\\mathbb{E} [\\boldsymbol{Z}_t] = & \\frac{1}{M T_t} \\sum_{m=1}^{M} \\sum_{j=1}^{T_t} \\boldsymbol{X}_{\\textup{batch}}^{+} \n\t\t\\Bigg( \\begin{bmatrix}\n\t\t\t(\\bar{\\boldsymbol{x}}_1^\\top \\boldsymbol{\\theta}_m)^2 & \\cdots & \\bar{\\boldsymbol{x}}_1^\\top \\boldsymbol{\\theta}_m \\bar{\\boldsymbol{x}}_p^\\top \\boldsymbol{\\theta}_m \\\\\n\t\t\t\\cdots\t& \\cdots & \\cdots \\\\\n\t\t\t\\bar{\\boldsymbol{x}}_p^\\top \\boldsymbol{\\theta}_m \\bar{\\boldsymbol{x}}_1^\\top \\boldsymbol{\\theta}_m & \\cdots & (\\bar{\\boldsymbol{x}}_p^\\top \\boldsymbol{\\theta}_m)^2\n\t\t\\end{bmatrix} + \\boldsymbol{I}_d \\Bigg)\n\t\t(\\boldsymbol{X}_{\\textup{batch}}^{+})^\\top\n\t\t- \\boldsymbol{X}_{\\textup{batch}}^{+} (\\boldsymbol{X}_{\\textup{batch}}^{+})^\\top\n\t\t\\\\\n\t\t= & \\frac{1}{M T_t} \\sum_{m=1}^{M} \\sum_{j=1}^{T_t} \\boldsymbol{X}_{\\textup{batch}}^{+} \n\t\t\\begin{bmatrix}\n\t\t\t(\\bar{\\boldsymbol{x}}_1^\\top \\boldsymbol{\\theta}_m)^2 & \\cdots & \\bar{\\boldsymbol{x}}_1^\\top \\boldsymbol{\\theta}_m \\bar{\\boldsymbol{x}}_p^\\top \\boldsymbol{\\theta}_m \\\\\n\t\t\t\\cdots\t& \\cdots & \\cdots \\\\\n\t\t\t\\bar{\\boldsymbol{x}}_p^\\top \\boldsymbol{\\theta}_m \\bar{\\boldsymbol{x}}_1^\\top \\boldsymbol{\\theta}_m & \\cdots & (\\bar{\\boldsymbol{x}}_p^\\top \\boldsymbol{\\theta}_m)^2\n\t\t\\end{bmatrix} \n\t\t(\\boldsymbol{X}_{\\textup{batch}}^{+})^\\top\n\t\t\\\\\n\t\t= & \\frac{1}{M T_t} \\sum_{m=1}^{M} \\sum_{j=1}^{T_t} \\boldsymbol{X}_{\\textup{batch}}^{+} \n\t\t\\begin{bmatrix}\n\t\t\t\\bar{\\boldsymbol{x}}_1^\\top \\boldsymbol{\\theta}_m\n\t\t\t\\\\\n\t\t\t\\vdots\n\t\t\t\\\\\n\t\t\t\\bar{\\boldsymbol{x}}_p^\\top \\boldsymbol{\\theta}_m\n\t\t\\end{bmatrix} [\\bar{\\boldsymbol{x}}_1^\\top \\boldsymbol{\\theta}_m, \\dots, \\bar{\\boldsymbol{x}}_p^\\top 
\\boldsymbol{\\theta}_m]\n\t\t(\\boldsymbol{X}_{\\textup{batch}}^{+})^\\top\n\t\t\\\\\n\t\t= & \\frac{1}{M T_t} \\sum_{m=1}^{M} \\sum_{j=1}^{T_t} \\boldsymbol{X}_{\\textup{batch}}^{+} \n\t\t\\boldsymbol{X}_{\\textup{batch}} \\boldsymbol{\\theta}_m \\boldsymbol{\\theta}_m^\\top \\boldsymbol{X}_{\\textup{batch}}^\\top\n\t\t(\\boldsymbol{X}_{\\textup{batch}}^{+})^\\top\n\t\t\\\\\n\t\t= & \\frac{1}{M T_t} \\sum_{m=1}^{M} \\sum_{j=1}^{T_t} \\boldsymbol{\\theta}_m \\boldsymbol{\\theta}_m^\\top \n\t\t\\\\\n\t\t= & \\frac{1}{M} \\sum_{m=1}^{M} \\boldsymbol{\\theta}_m \\boldsymbol{\\theta}_m^\\top .\n\t\\end{align*}\n\\end{proof}\n\nRecall that for any $t>0$, $\\delta_t:=\\frac{\\delta}{2t^2}$.\n\nFor any phase $t>0$, define events \n\\begin{align*}\n\t\\mathcal{E}_t:= \\lbr{ \\nbr{\\boldsymbol{Z}_t - \\mathbb{E} [\\boldsymbol{Z}_t]} \n\t\t\\leq \\frac{ 96 \\nbr{\\boldsymbol{X}_{\\textup{batch}}^{+}}^2 p L_{x} L_{\\theta} \\log\\sbr{\\frac{16p}{\\delta_t}} }{\\sqrt{MT_t}} \\log \\sbr{\\frac{16pMT_t}{\\delta_t}} } ,\n\\end{align*}\nand\n\\begin{align*}\n\t\\mathcal{E}:=\\cap_{t=1}^{\\infty} \\mathcal{E}_t .\n\\end{align*}\n\n\\begin{lemma}[Concentration of $\\boldsymbol{Z}_t$] \\label{lemma:Z_t_est_error}\n\tIt holds that\n\t\\begin{align*}\n\t\t\\Pr \\mbr{\\mathcal{E}} \\geq 1 - \\frac{\\delta}{2} . 
\n\t\\end{align*}\n\\end{lemma}\n\n\\begin{proof}[Proof of Lemma~\\ref{lemma:Z_t_est_error}]\n\tAccording to Eq.~\\eqref{eq:Z_t_decompose}, we have\n\t\\begin{align*}\n\t\t\\boldsymbol{Z}_t - \\mathbb{E} [\\boldsymbol{Z}_t] = & \\frac{1}{M T_t} \\sum_{m=1}^{M} \\sum_{j=1}^{T_t} \\boldsymbol{X}_{\\textup{batch}}^{+} \n\t\t\\Bigg(\n\t\t\\begin{bmatrix}\n\t\t\t2 \\bar{\\boldsymbol{x}}_1^\\top \\boldsymbol{\\theta}_m \\eta_{t,m,j,1} & \\cdots & \\bar{\\boldsymbol{x}}_1^\\top \\boldsymbol{\\theta}_m \\eta_{t,m,j,p} + \\bar{\\boldsymbol{x}}_p^\\top \\boldsymbol{\\theta}_m \\eta_{t,m,j,1} \\\\\n\t\t\t\\cdots\t& \\cdots & \\cdots \\\\\n\t\t\t\\bar{\\boldsymbol{x}}_1^\\top \\boldsymbol{\\theta}_m \\eta_{t,m,j,p} + \\bar{\\boldsymbol{x}}_p^\\top \\boldsymbol{\\theta}_m \\eta_{t,m,j,1} & \\cdots & 2 \\bar{\\boldsymbol{x}}_p^\\top \\boldsymbol{\\theta}_m \\eta_{t,m,j,p}\n\t\t\\end{bmatrix}\n\t\t\\\\& -\n\t\t\\mathbb{E}\n\t\t\\begin{bmatrix}\n\t\t\t2 \\bar{\\boldsymbol{x}}_1^\\top \\boldsymbol{\\theta}_m \\eta_{t,m,j,1} & \\cdots & \\bar{\\boldsymbol{x}}_1^\\top \\boldsymbol{\\theta}_m \\eta_{t,m,j,p} + \\bar{\\boldsymbol{x}}_p^\\top \\boldsymbol{\\theta}_m \\eta_{t,m,j,1} \\\\\n\t\t\t\\cdots\t& \\cdots & \\cdots \\\\\n\t\t\t\\bar{\\boldsymbol{x}}_1^\\top \\boldsymbol{\\theta}_m \\eta_{t,m,j,p} + \\bar{\\boldsymbol{x}}_p^\\top \\boldsymbol{\\theta}_m \\eta_{t,m,j,1} & \\cdots & 2 \\bar{\\boldsymbol{x}}_p^\\top \\boldsymbol{\\theta}_m \\eta_{t,m,j,p}\n\t\t\\end{bmatrix}\n\t\t\\nonumber\\\\& +\n\t\t\\begin{bmatrix}\n\t\t\t(\\eta_{t,m,j,1})^2 & \\cdots & \\eta_{t,m,j,1} \\eta_{t,m,j,p} \\\\\n\t\t\t\\cdots\t& \\cdots & \\cdots \\\\\n\t\t\t\\eta_{t,m,j,1} \\eta_{t,m,j,p} & \\cdots & (\\eta_{t,m,j,p})^2\n\t\t\\end{bmatrix}\n\t\t-\n\t\t\\mathbb{E}\n\t\t\\begin{bmatrix}\n\t\t\t(\\eta_{t,m,j,1})^2 & \\cdots & \\eta_{t,m,j,1} \\eta_{t,m,j,p} \\\\\n\t\t\t\\cdots\t& \\cdots & \\cdots \\\\\n\t\t\t\\eta_{t,m,j,1} \\eta_{t,m,j,p} & \\cdots & 
(\\eta_{t,m,j,p})^2\n\t\t\\end{bmatrix}\n\t\t\\Bigg)\n\t\t(\\boldsymbol{X}_{\\textup{batch}}^{+})^\\top .\n\t\\end{align*}\n\t\n\tDefine the following matrices:\n\t\\begin{align*}\n\t\t\\boldsymbol{A}_{t,m,j} &:= \\frac{1}{M T_t} \\begin{bmatrix}\n\t\t\t2 \\bar{\\boldsymbol{x}}_1^\\top \\boldsymbol{\\theta}_m \\eta_{t,m,j,1} & \\cdots & \\bar{\\boldsymbol{x}}_1^\\top \\boldsymbol{\\theta}_m \\eta_{t,m,j,p} + \\bar{\\boldsymbol{x}}_p^\\top \\boldsymbol{\\theta}_m \\eta_{t,m,j,1} \\\\\n\t\t\t\\cdots\t& \\cdots & \\cdots \\\\\n\t\t\t\\bar{\\boldsymbol{x}}_1^\\top \\boldsymbol{\\theta}_m \\eta_{t,m,j,p} + \\bar{\\boldsymbol{x}}_p^\\top \\boldsymbol{\\theta}_m \\eta_{t,m,j,1} & \\cdots & 2 \\bar{\\boldsymbol{x}}_p^\\top \\boldsymbol{\\theta}_m \\eta_{t,m,j,p} \n\t\t\\end{bmatrix} ,\n\t\t\\\\\n\t\t\\boldsymbol{A}_t &:= \\sum_{m=1}^{M} \\sum_{j=1}^{T_t} \\boldsymbol{A}_{t,m,j} ,\n\t\t\\\\ \n\t\t\\boldsymbol{C}_{t,m,j} &:= \\frac{1}{M T_t} \\begin{bmatrix}\n\t\t\t(\\eta_{t,m,j,1})^2 & \\cdots & \\eta_{t,m,j,1} \\eta_{t,m,j,p} \\\\\n\t\t\t\\cdots\t& \\cdots & \\cdots \\\\\n\t\t\t\\eta_{t,m,j,1} \\eta_{t,m,j,p} & \\cdots & (\\eta_{t,m,j,p})^2 \n\t\t\\end{bmatrix} ,\n\t\t\\\\\n\t\t\\boldsymbol{C}_t &:= \\sum_{m=1}^{M} \\sum_{j=1}^{T_t} \\boldsymbol{C}_{t,m,j} .\n\t\\end{align*}\n\t\n\tThen, we can write $\\boldsymbol{Z}_t - \\mathbb{E} [\\boldsymbol{Z}_t]$ as\n\t\\begin{align*}\n\t\t\\boldsymbol{Z}_t - \\mathbb{E} [\\boldsymbol{Z}_t] = \\boldsymbol{X}_{\\textup{batch}}^{+} \\sbr{ \\boldsymbol{A}_t - \\mathbb{E}[\\boldsymbol{A}_t] + \\boldsymbol{C}_t - \\mathbb{E}[\\boldsymbol{C}_t] } (\\boldsymbol{X}_{\\textup{batch}}^{+})^\\top ,\n\t\\end{align*}\n\tand thus,\n\t\\begin{align}\n\t\t\\nbr{\\boldsymbol{Z}_t - \\mathbb{E} [\\boldsymbol{Z}_t]} \\leq \\nbr{\\boldsymbol{X}_{\\textup{batch}}^{+}}^2 \\sbr{ \\nbr{\\boldsymbol{A}_t - \\mathbb{E}[\\boldsymbol{A}_t]} + \\nbr{\\boldsymbol{C}_t - \\mathbb{E}[\\boldsymbol{C}_t]} } . 
\\label{eq:Z_t_expressed_by_A}\n\t\\end{align}\n\t\n\tNext, we analyze $\\|\\boldsymbol{A}_t - \\mathbb{E}[\\boldsymbol{A}_t]\\|$ and $\\|\\boldsymbol{C}_t - \\mathbb{E}[\\boldsymbol{C}_t]\\|$. In order to use the truncated matrix Bernstein inequality (Lemma~\\ref{lemma:matrix_bernstein_tau}), we define the truncated noise and truncated matrices as follows.\n\t\n\tLet $R>0$ be a truncation level of noises, which will be chosen later. For any $t>0$, $m \\in [M]$, $j \\in [T_t]$ and $i \\in [p]$, let $\\tilde{\\eta}_{t,m,j,i}=\\eta_{t,m,j,i} \\mathbbm{1}\\{|\\eta_{t,m,j,i}| \\leq R\\}$ denote the truncated noise. Then, we define the following truncated matrices:\n\t\\begin{align}\n\t\t\\tilde{\\boldsymbol{A}}_{t,m,j} &:= \\frac{1}{M T_t} \\begin{bmatrix}\n\t\t\t2 \\bar{\\boldsymbol{x}}_1^\\top \\boldsymbol{\\theta}_m \\tilde{\\eta}_{t,m,j,1} & \\cdots & \\bar{\\boldsymbol{x}}_1^\\top \\boldsymbol{\\theta}_m \\tilde{\\eta}_{t,m,j,p} + \\bar{\\boldsymbol{x}}_p^\\top \\boldsymbol{\\theta}_m \\tilde{\\eta}_{t,m,j,1} \\\\\n\t\t\t\\cdots\t& \\cdots & \\cdots \\\\\n\t\t\t\\bar{\\boldsymbol{x}}_1^\\top \\boldsymbol{\\theta}_m \\tilde{\\eta}_{t,m,j,p} + \\bar{\\boldsymbol{x}}_p^\\top \\boldsymbol{\\theta}_m \\tilde{\\eta}_{t,m,j,1} & \\cdots & 2 \\bar{\\boldsymbol{x}}_p^\\top \\boldsymbol{\\theta}_m \\tilde{\\eta}_{t,m,j,p}\n\t\t\\end{bmatrix} \n\t\t\\nonumber\\\\\n\t\t\\tilde{\\boldsymbol{A}}_t &:= \\sum_{m=1}^{M} \\sum_{j=1}^{T_t} \\tilde{\\boldsymbol{A}}_{t,m,j} ,\n\t\t\\nonumber\\\\\n\t\t\\tilde{\\boldsymbol{C}}_{t,m,j} &:= \\frac{1}{M T_t} \\begin{bmatrix}\n\t\t\t(\\tilde{\\eta}_{t,m,j,1})^2 & \\cdots & \\tilde{\\eta}_{t,m,j,1} \\tilde{\\eta}_{t,m,j,p} \\\\\n\t\t\t\\cdots\t& \\cdots & \\cdots \\\\\n\t\t\t\\tilde{\\eta}_{t,m,j,1} \\tilde{\\eta}_{t,m,j,p} & \\cdots & (\\tilde{\\eta}_{t,m,j,p})^2 \n\t\t\\end{bmatrix} \\label{eq:tilde_C_t_m_j}\n\t\t\\\\\n\t\t\\tilde{\\boldsymbol{C}}_t &:= \\sum_{m=1}^{M} \\sum_{j=1}^{T_t} \\tilde{\\boldsymbol{C}}_{t,m,j} 
. \\nonumber\n\t\\end{align}\n\t\n\tFirst, we bound $\\|\\boldsymbol{A}_t-\\mathbb{E}[\\boldsymbol{A}_t]\\|$. \n\tSince for any $t>0$, $m \\in [M]$, $j \\in [T_t]$ and $i \\in [p]$, $|\\tilde{\\eta}_{t,m,j,i}| \\leq R$ and $|\\bar{\\boldsymbol{x}}_i^\\top \\boldsymbol{\\theta}_m| \\leq L_{x} L_{\\theta}$, we have $\\|\\tilde{\\boldsymbol{A}}_{t,m,j}\\| \\leq \\frac{1}{M T_t} \\cdot 2p L_{x} L_{\\theta} R$.\n\t\n\tRecall that for any $t>0$, $m \\in [M]$, $j \\in [T_t]$ and $i \\in [p]$, $\\eta_{t,m,j,i}$ is 1-sub-Gaussian. Using a union bound over $i \\in [p]$, we have that for any $t>0$, $m \\in [M]$, $j \\in [T_t]$, with probability at least $1-2p\\exp(-\\frac{R^2}{2})$, $|\\eta_{t,m,j,i}| \\leq R$ for all $i \\in [p]$. Thus, with probability at least $1-2p\\exp(-\\frac{R^2}{2})$, $\\|\\boldsymbol{A}_{t,m,j}\\| \\leq \\frac{1}{M T_t} \\cdot 2p L_{x} L_{\\theta} R$. \n\t\n\tThen, we have\n\t\\begin{align*}\n\t\t\\nbr{ \\mathbb{E}[\\boldsymbol{A}_{t,m,j}]-\\mathbb{E}[\\tilde{\\boldsymbol{A}}_{t,m,j}] } \\leq & \\nbr{ \\mathbb{E} \\mbr{\\boldsymbol{A}_{t,m,j} \\cdot \\indicator{ \\nbr{\\boldsymbol{A}_{t,m,j}} \\geq \\frac{2p L_{x} L_{\\theta} R}{M T_t} }} }\n\t\t\\\\\n\t\t\\leq & \\mathbb{E} \\mbr{ \\nbr{\\boldsymbol{A}_{t,m,j}} \\cdot \\indicator{ \\nbr{\\boldsymbol{A}_{t,m,j}} \\geq \\frac{2p L_{x} L_{\\theta} R}{M T_t} } } \n\t\t\\\\\n\t\t= & \\mathbb{E}\\mbr{ \\frac{2p L_{x} L_{\\theta} R}{M T_t} \\cdot \\indicator{ \\nbr{\\boldsymbol{A}_{t,m,j}} \\geq \\frac{2p L_{x} L_{\\theta} R}{M T_t} }} \\\\& + \\mathbb{E}\\mbr{\\sbr{ \\nbr{\\boldsymbol{A}_{t,m,j}} - \\frac{2p L_{x} L_{\\theta} R}{M T_t}} \\cdot \\indicator{ \\nbr{\\boldsymbol{A}_{t,m,j}} \\geq \\frac{2p L_{x} L_{\\theta} R}{M T_t} } } \n\t\t\\\\\n\t\t= & \\frac{2p L_{x} L_{\\theta} R}{M T_t} \\cdot \\Pr\\mbr{ \\nbr{\\boldsymbol{A}_{t,m,j}} \\geq \\frac{2p L_{x} L_{\\theta} R}{M T_t} } + \\int_0^{\\infty} \\Pr \\mbr{ 
\\nbr{\\boldsymbol{A}_{t,m,j}} - \\frac{2p L_{x} L_{\\theta} R}{M T_t} > x} dx \n\t\t\\\\\n\t\t\\leq & \\frac{2p L_{x} L_{\\theta} R}{M T_t} \\cdot 2p \\cdot \\exp\\sbr{-\\frac{R^2}{2}} + \\frac{2p L_{x} L_{\\theta}}{M T_t} \\int_R^{\\infty} \\Pr \\mbr{ \\nbr{\\boldsymbol{A}_{t,m,j}} > \\frac{2p L_{x} L_{\\theta} y}{M T_t} } dy \n\t\t\\\\\n\t\t\\leq & \\frac{2p L_{x} L_{\\theta} R}{M T_t} \\cdot 2p \\cdot \\exp\\sbr{-\\frac{R^2}{2}} + \\frac{2p L_{x} L_{\\theta}}{M T_t} \\int_R^{\\infty} 2p \\exp\\sbr{-\\frac{y^2}{2}} dy\n\t\t\\\\\n\t\t\\leq & \\frac{2p L_{x} L_{\\theta} R}{M T_t} \\cdot 2p \\cdot \\exp\\sbr{-\\frac{R^2}{2}} + \\frac{2p L_{x} L_{\\theta}}{M T_t} \\cdot 2p \\cdot \\frac{1}{R} \\cdot \\exp\\sbr{-\\frac{R^2}{2}} \n\t\t\\\\\n\t\t= & \\frac{2p L_{x} L_{\\theta}}{M T_t} \\cdot 2p \\cdot \\sbr{R+\\frac{1}{R}} \\exp\\sbr{-\\frac{R^2}{2}} .\n\t\\end{align*}\n\t\n\tLet $\\delta' \\in (0,1)$ be a confidence parameter which will be chosen later.\n\tUsing the truncated matrix Bernstein inequality (Lemma~\\ref{lemma:matrix_bernstein_tau}) with $n=MT_t$, $R=\\sqrt{2\\log \\sbr{\\frac{2pMT_t}{\\delta'}}}$, $n \\Pr[\\|\\boldsymbol{A}_{t,m,j}\\| \\geq \\frac{1}{M T_t} \\cdot 2p L_{x} L_{\\theta} R] \\leq \\delta'$, $U=\\frac{2p L_{x} L_{\\theta} \\sqrt{2\\log \\sbr{\\frac{2pMT_t}{\\delta'}}}}{MT_t}$, $\\sigma^2=M T_t U^2$, $\\tau=\\frac{4\\cdot 2p L_{x} L_{\\theta} \\sqrt{2\\log \\sbr{\\frac{2pMT_t}{\\delta'}}} \\log\\sbr{\\frac{2p}{\\delta'}}}{\\sqrt{M T_t}} + \\frac{4\\cdot 2p L_{x} L_{\\theta} \\sqrt{2\\log \\sbr{\\frac{2pMT_t}{\\delta'}}} \\log\\sbr{\\frac{2p}{\\delta'}}}{M T_t}$ and $\\Delta=\\frac{2p L_{x} L_{\\theta} \\cdot 2 \\sqrt{2\\log \\sbr{\\frac{2pMT_t}{\\delta'}}}}{M T_t} \\cdot \\frac{\\delta'}{M T_t}$, we have that with probability at least $1-2\\delta'$, \n\t\\begin{align}\n\t\t\\nbr{\\boldsymbol{A}_t - \\mathbb{E}[\\boldsymbol{A}_t]} \\leq & \\frac{4\\cdot 2p L_{x} L_{\\theta} \\sqrt{2\\log \\sbr{\\frac{2pMT_t}{\\delta'}}} 
\\log\\sbr{\\frac{2p}{\\delta'}}}{\\sqrt{M T_t}} + \\frac{4\\cdot 2p L_{x} L_{\\theta} \\sqrt{2\\log \\sbr{\\frac{2pMT_t}{\\delta'}}} \\log\\sbr{\\frac{2p}{\\delta'}}}{M T_t} \n\t\t\\nonumber\\\\ \n\t\t\\leq & \\frac{8\\cdot 2p L_{x} L_{\\theta} \\sqrt{2\\log \\sbr{\\frac{2pMT_t}{\\delta'}}} \\log\\sbr{\\frac{2p}{\\delta'}}}{\\sqrt{M T_t}} .\n\t\t\\label{eq:A_t_concentration}\n\t\\end{align}\n\t\n\tNow we investigate $\\|\\boldsymbol{C}_t-\\mathbb{E}[\\boldsymbol{C}_t]\\|$. Recall that in Eq.~\\eqref{eq:tilde_C_t_m_j}, for any $t>0$, $m \\in [M]$, $j \\in [T_t]$ and $i \\in [p]$, $|\\tilde{\\eta}_{t,m,j,i}| \\leq R$. Then, we have $\\|\\tilde{\\boldsymbol{C}}_{t,m,j}\\| \\leq \\frac{1}{M T_t} \\cdot pR^2$.\n\n\n\t\n\t\n\t\n\t\n\t\n\t\n\n\n\n\n\n\t\n\t\n\n\n\t\n\t\n\t\n\tRecall that for any $t>0$, $m \\in [M]$ and $j \\in [T_t]$, with probability at least $1-2p\\exp(-\\frac{R^2}{2})$, $|\\eta_{t,m,j,i}| \\leq R$ for all $i \\in [p]$. Thus, with probability at least $1-2p\\exp(-\\frac{R^2}{2})$, $\\|\\boldsymbol{C}_{t,m,j}\\| \\leq \\frac{1}{M T_t} \\cdot pR^2$. 
Then, we have\n\t\\begin{align*}\n\t\t\\nbr{ \\mathbb{E}[\\boldsymbol{C}_{t,m,j}]-\\mathbb{E}[\\tilde{\\boldsymbol{C}}_{t,m,j}] } \\leq & \\nbr{ \\mathbb{E} \\mbr{\\boldsymbol{C}_{t,m,j} \\cdot \\indicator{ \\nbr{\\boldsymbol{C}_{t,m,j}} \\geq \\frac{pR^2}{M T_t} }} }\n\t\t\\\\\n\t\t\\leq & \\mathbb{E} \\mbr{ \\nbr{\\boldsymbol{C}_{t,m,j}} \\cdot \\indicator{ \\nbr{\\boldsymbol{C}_{t,m,j}} \\geq \\frac{pR^2}{M T_t} } } \n\t\t\\\\\n\t\t= & \\mathbb{E}\\mbr{ \\frac{pR^2}{M T_t} \\cdot \\indicator{ \\nbr{\\boldsymbol{C}_{t,m,j}} \\geq \\frac{pR^2}{M T_t} }} + \\mathbb{E}\\mbr{\\sbr{ \\nbr{\\boldsymbol{C}_{t,m,j}} - \\frac{pR^2}{M T_t}} \\cdot \\indicator{ \\nbr{\\boldsymbol{C}_{t,m,j}} \\geq \\frac{pR^2}{M T_t} } } \n\t\t\\\\\n\t\t= & \\frac{pR^2}{M T_t} \\cdot \\Pr\\mbr{ \\nbr{\\boldsymbol{C}_{t,m,j}} \\geq \\frac{pR^2}{M T_t} } + \\int_0^{\\infty} \\Pr \\mbr{ \\nbr{\\boldsymbol{C}_{t,m,j}} - \\frac{pR^2}{M T_t} > x} dx \n\t\t\\\\\n\t\t\\leq & \\frac{pR^2}{M T_t} \\cdot 2p \\cdot \\exp\\sbr{-\\frac{R^2}{2}} + \\frac{2p}{M T_t} \\int_R^{\\infty} y \\cdot \\Pr \\mbr{ \\nbr{\\boldsymbol{C}_{t,m,j}} > \\frac{p y^2}{M T_t} } dy \n\t\t\\\\\n\t\t\\leq & \\frac{pR^2}{M T_t} \\cdot 2p \\cdot \\exp\\sbr{-\\frac{R^2}{2}} + \\frac{2p}{M T_t} \\int_R^{\\infty} y \\cdot 2p \\exp\\sbr{-\\frac{y^2}{2}} dy\n\t\t\\\\\n\t\t\\leq & \\frac{pR^2}{M T_t} \\cdot 2p \\cdot \\exp\\sbr{-\\frac{R^2}{2}} + \\frac{2p}{M T_t} \\cdot 2p \\cdot \\exp\\sbr{-\\frac{R^2}{2}} \n\t\t\\\\\n\t\t= & \\frac{p}{M T_t} \\cdot 2p \\cdot \\sbr{R^2+2} \\exp\\sbr{-\\frac{R^2}{2}} .\n\t\\end{align*}\n\t\n\tUsing the truncated matrix Bernstein inequality (Lemma~\\ref{lemma:matrix_bernstein_tau}) with $n=MT_t$, $R=\\sqrt{2\\log \\sbr{\\frac{2pMT_t}{\\delta'}}}$, $n \\Pr[\\|\\boldsymbol{C}_{t,m,j}\\| \\geq \\frac{1}{M T_t} \\cdot pR^2] \\leq \\delta'$, $U=\\frac{p \\cdot 2\\log \\sbr{\\frac{2pMT_t}{\\delta'}} }{MT_t}$, $\\sigma^2=\\frac{32p}{M T_t}$, $\\tau=\\frac{4\\cdot p \\cdot 2\\log 
\\sbr{\\frac{2pMT_t}{\\delta'}} \\log\\sbr{\\frac{2p}{\\delta'}}}{\\sqrt{M T_t}} + \\frac{4\\cdot p \\cdot 2\\log \\sbr{\\frac{2pMT_t}{\\delta'}} \\log\\sbr{\\frac{2p}{\\delta'}}}{M T_t}$ and $\\Delta=\\frac{p \\cdot 2 \\cdot 2\\log \\sbr{\\frac{2pMT_t}{\\delta'}} }{M T_t} \\cdot \\frac{\\delta'}{M T_t}$, we have that with probability at least $1-2\\delta'$, \n\t\\begin{align}\n\t\t\\nbr{\\boldsymbol{C}_t - \\mathbb{E}\\mbr{\\boldsymbol{C}_t}} \\leq & \\frac{4\\cdot 2p \\log \\sbr{\\frac{2pMT_t}{\\delta'}} \\log\\sbr{\\frac{2p}{\\delta'}}}{\\sqrt{M T_t}} + \\frac{4 \\cdot 2p \\log \\sbr{\\frac{2pMT_t}{\\delta'}} \\log \\sbr{\\frac{2p}{\\delta'}}}{M T_t} \n\t\t\\nonumber\\\\\n\t\t\\leq & \\frac{8\\cdot 2p \\log \\sbr{\\frac{2pMT_t}{\\delta'}} \\log\\sbr{\\frac{2p}{\\delta'}}}{\\sqrt{M T_t}}\n\t\t\\label{eq:C_t_concentration}\n\t\\end{align}\n\t\n\tPlugging Eqs.~\\eqref{eq:A_t_concentration} and \\eqref{eq:C_t_concentration} into Eq.~\\eqref{eq:Z_t_expressed_by_A}, we have that with probability at least $1-4\\delta'$,\n\t\\begin{align*}\n\t\t\\nbr{\\boldsymbol{Z}_t - \\mathbb{E} [\\boldsymbol{Z}_t]} \\leq & \\nbr{\\boldsymbol{X}_{\\textup{batch}}^{+}}^2 \\sbr{\\nbr{\\boldsymbol{A}_t - \\mathbb{E} \\mbr{\\boldsymbol{A}_t}} + \\nbr{\\boldsymbol{C}_t - \\mathbb{E} \\mbr{\\boldsymbol{C}_t}} }\n\t\t\\\\\n\t\t\\leq & \\nbr{\\boldsymbol{X}_{\\textup{batch}}^{+}}^2 \\sbr{ \\frac{8\\cdot 2p L_{x} L_{\\theta} \\sqrt{2\\log \\sbr{\\frac{2pMT_t}{\\delta'}}} \\log\\sbr{\\frac{2p}{\\delta'}}}{\\sqrt{M T_t}} + \\frac{8\\cdot 2p \\log \\sbr{\\frac{2pMT_t}{\\delta'}} \\log\\sbr{\\frac{2p}{\\delta'}}}{\\sqrt{M T_t}} }\n\t\t\\\\\n\t\t\\leq & \\frac{ 96 \\nbr{\\boldsymbol{X}_{\\textup{batch}}^{+}}^2 p L_{x} L_{\\theta} \\log\\sbr{\\frac{2p}{\\delta'}} }{\\sqrt{MT_t}} \\log \\sbr{\\frac{2pMT_t}{\\delta'}} .\n\t\\end{align*}\n\tLet $\\delta'=\\frac{\\delta_t}{8}$. 
Then, we obtain that with probability at least $1-\\frac{\\delta_t}{2}$,\n\t\\begin{align*}\n\t\t\\nbr{\\boldsymbol{Z}_t - \\mathbb{E} [\\boldsymbol{Z}_t]} \\leq & \\frac{ 96 \\nbr{\\boldsymbol{X}_{\\textup{batch}}^{+}}^2 p L_{x} L_{\\theta} \\log\\sbr{\\frac{16p}{\\delta_t}} }{\\sqrt{MT_t}} \\log \\sbr{\\frac{16pMT_t}{\\delta_t}} , \n\t\\end{align*}\n\twhich implies that\n\t$\\Pr \\mbr{\\mathcal{E}_t} \\geq 1-\\frac{\\delta_t}{2}$.\n\t\n\tTaking a union bound over all phases $t\\geq 1$ and recalling $\\delta_t:=\\frac{\\delta}{2t^2}$, we obtain\n\t\\begin{align*}\n\t\t\\Pr \\mbr{\\mathcal{E}} \n\t\t\\geq & 1- \\sum_{t=1}^{\\infty} \\Pr \\mbr{\\bar{\\mathcal{E}_t}}\n\t\t\\\\\n\t\t\\geq & 1- \\sum_{t=1}^{\\infty} \\frac{\\delta_t}{2}\n\t\t\\\\\n\t\t= & 1- \\sum_{t=1}^{\\infty} \\frac{\\delta}{4t^2}\n\t\t\\\\\n\t\t\\geq & 1-\\frac{\\delta}{2} .\n\t\\end{align*}\n\\end{proof}\n\nFor any matrix $\\boldsymbol{A} \\in \\mathbb{R}^{m \\times n}$ with $m \\geq n$, let $\\sigma_{\\max}(\\boldsymbol{A})$ and $\\sigma_{\\min}(\\boldsymbol{A})$ denote the maximum and minimum singular values of $\\boldsymbol{A}$, respectively. For any $i \\in [m]$, let $\\sigma_i(\\boldsymbol{A})$ denote the $i$-th singular value of $\\boldsymbol{A}$.\n\nFor any matrix $\\boldsymbol{A} \\in \\mathbb{R}^{m \\times n}$ with $m \\geq n$, let $\\boldsymbol{A}_{\\bot}$ denote the orthogonal complement matrix of $\\boldsymbol{A}$, where the columns of $\\boldsymbol{A}_{\\bot}$ are the orthogonal complement of those of $\\boldsymbol{A}$. 
Then, it holds that $\\boldsymbol{A} \\boldsymbol{A}^\\top + \\boldsymbol{A}_{\\bot} \\boldsymbol{A}_{\\bot}^\\top = \\boldsymbol{I}_m$, where $\\boldsymbol{I}_m$ is the $m \\times m$ identity matrix.\n\nAccording to Assumption~\\ref{assumption:diverse_task}, there exists an absolute constant $c_0$ which satisfies that $\\sigma_{\\min}(\\frac{1}{M} \\sum_{m=1}^{M} \\boldsymbol{w}_m \\boldsymbol{w}_m^\\top) = \\sigma_{\\min}(\\frac{1}{M} \\sum_{m=1}^{M} \\boldsymbol{\\theta}_m \\boldsymbol{\\theta}_m^\\top) \\geq \\frac{c_0}{k}$.\n\n\n\\begin{lemma}[Concentration of $\\hat{\\boldsymbol{B}}_t$] \\label{lemma:concentration_B_hat_t}\n\tSuppose that event $\\mathcal{E}$ holds. Then, for any phase $t>0$,\n\t\\begin{align*}\n\t\t\\nbr{\\hat{\\boldsymbol{B}}_{t,\\bot}^\\top \\boldsymbol{B}} \\leq \\frac{ 192 \\nbr{\\boldsymbol{X}_{\\textup{batch}}^{+}}^2 k p L_{x} L_{\\theta} \\log\\sbr{\\frac{16p}{\\delta_t}} }{\\sqrt{MT_t}} \\log \\sbr{\\frac{16pMT_t}{\\delta_t}} .\n\t\\end{align*}\n\tFurthermore, for any phase $t>0$, if \n\t\\begin{align}\n\t\n\t\t%\n\t\tT_t = \\Bigg \\lceil & \\frac{68 \\cdot 192^2 \\cdot 8^2 \\sbr{1+\\zeta}^3 (\\rho^E)^2 k^4 L_{x}^4 L_{\\theta}^2 L_{w}^2}{c_0^2 M} \\cdot \\max\\lbr{2^{2t},\\ \\frac{L_x^4}{\\omega^2}} \\cdot \\log^2 \\sbr{\\frac{16p}{\\delta_t}} \\nonumber\\\\& \\log^2 \\sbr{ \\frac{192 \\cdot 16 \\cdot 8 \\sbr{1+\\zeta}^{\\frac{3}{2}} \\rho^E k^2 p L_{x}^2 L_{\\theta} L_{w}}{c_0} \\cdot \\max\\lbr{2^t,\\ \\frac{L_x^2}{\\omega}} \\cdot \\frac{1}{\\delta_t} \\cdot \\log\\sbr{\\frac{16p}{\\delta_t}} } \\Bigg \\rceil , \\label{eq:value_T_t}\n\t\\end{align}\n\tthen\n\t\\begin{align*}\n\t\t\\nbr{\\hat{\\boldsymbol{B}}_{t,\\bot}^\\top \\boldsymbol{B}} \\leq \\min\\lbr{\\frac{1}{8 k L_{x} L_{w} \\cdot 2^t \\sqrt{1+\\zeta}},\\ \\frac{\\omega}{6 L_x^2}} .\n\t\\end{align*}\n\\end{lemma}\n\\begin{proof}[Proof of Lemma~\\ref{lemma:concentration_B_hat_t}]\n\tFrom Assumption~\\ref{assumption:diverse_task}, 
$\\sigma_{k}(\\mathbb{E}[\\boldsymbol{Z}_t]) - \\sigma_{k+1}(\\mathbb{E}[\\boldsymbol{Z}_t])=\\sigma_{\\min}( \\frac{1}{M} \\sum_{m=1}^{M} \\boldsymbol{\\theta}_m \\boldsymbol{\\theta}_m^\\top ) \\geq \\frac{c_0}{k}$. Using the Davis-Kahan $\\sin \\theta$ theorem~\\cite{bhatia2013matrix} and letting $T_t$ be large enough to satisfy $\\nbr{\\boldsymbol{Z}_t - \\mathbb{E}[\\boldsymbol{Z}_t]} \\leq \\frac{c_0}{2k}$, we have\n\t\\begin{align*}\n\t\t\\nbr{\\hat{\\boldsymbol{B}}_{t,\\bot}^\\top \\boldsymbol{B}} \\leq & \\frac{ \\nbr{\\boldsymbol{Z}_t - \\mathbb{E}[\\boldsymbol{Z}_t]} }{ \\sigma_{k}(\\mathbb{E}[\\boldsymbol{Z}_t]) - \\sigma_{k+1}(\\mathbb{E}[\\boldsymbol{Z}_t]) - \\nbr{\\boldsymbol{Z}_t - \\mathbb{E}[\\boldsymbol{Z}_t]} }\n\t\t\\\\\n\t\t\\leq & \\frac{2k}{c_0} \\nbr{\\boldsymbol{Z}_t - \\mathbb{E}[\\boldsymbol{Z}_t]} \n\t\t\\\\\n\t\t\\overset{\\textup{(a)}}{\\leq} & \\frac{ 192 \\nbr{\\boldsymbol{X}_{\\textup{batch}}^{+}}^2 k p L_{x} L_{\\theta} \\log\\sbr{\\frac{16p}{\\delta_t}} }{c_0 \\sqrt{MT_t}} \\log \\sbr{\\frac{16pMT_t}{\\delta_t}} ,\n\t\\end{align*}\n\twhere inequality (a) uses the definition of event $\\mathcal{E}$.\n\t\n\tUsing Lemma~\\ref{lemma:technical_tool_bai_stage2} with $A= \\frac{192 \\nbr{\\boldsymbol{X}_{\\textup{batch}}^{+}}^2 k p L_{x} L_{\\theta}}{c_0} \\log\\sbr{\\frac{16p}{\\delta_t}} $, $B=\\frac{16p}{\\delta_t}$ and $\\kappa=\\min\\{\\frac{1}{8 k L_{x} L_{w} \\cdot 2^t \\sqrt{1+\\zeta}}, \\frac{\\omega}{6 L_x^2}\\}$, we have that if \n\t\\begin{align*}\n\t\tM T_t \\geq & 68 \\sbr{ \\frac{192 \\nbr{\\boldsymbol{X}_{\\textup{batch}}^{+}}^2 k p L_{x} L_{\\theta}}{c_0} \\log\\sbr{\\frac{16p}{\\delta_t}}}^2 \\cdot \\max\\lbr{ \\sbr{8 k L_{x} L_{w} \\cdot 2^t \\sqrt{1+\\zeta} }^2 ,\\ \\frac{6^2 L_x^4}{\\omega^2} } \\cdot \\\\& \\log^2 \\sbr{ \\frac{192 \\nbr{\\boldsymbol{X}_{\\textup{batch}}^{+}}^2 k p L_{x} L_{\\theta}}{c_0} \\log\\sbr{\\frac{16p}{\\delta_t}} \\cdot \\frac{16p}{\\delta_t} \\cdot \\max\\lbr{ 8 k L_{x} L_{w} \\cdot 2^t 
\\sqrt{1+\\zeta} ,\\ \\frac{6 L_x^2}{\\omega} } } ,\n\t\n\t\n\t\n\t\\end{align*}\n\tthen $\\nbr{\\hat{\\boldsymbol{B}}_{t,\\bot}^\\top \\boldsymbol{B}} \\leq \\min\\lbr{\\frac{1}{8 k L_{x} L_{w} \\cdot 2^t \\sqrt{1+\\zeta}},\\ \\frac{\\omega}{6 L_x^2}}$.\n\t\n\n\tAccording to Lemma~\\ref{lemma:X_batch_ub}, we have $\\|\\boldsymbol{X}_{\\textup{batch}}^{+}\\| \\leq \\sqrt{\\frac{(1+\\zeta) \\rho^{E}}{p}}$.\n\t\n\tThen, further enlarging $M T_t$, we have that if\n\t\\begin{align*}\n\t\tM T_t \\geq & \\frac{68 \\cdot 192^2 \\cdot 8^2 \\sbr{1+\\zeta}^3 (\\rho^E)^2 k^4 L_{x}^4 L_{\\theta}^2 L_{w}^2}{c_0^2} \\cdot \\max\\lbr{2^{2t},\\ \\frac{L_x^4}{\\omega^2}} \\cdot \\log^2 \\sbr{\\frac{16p}{\\delta_t}} \\\\& \\log^2 \\sbr{ \\frac{192 \\cdot 16 \\cdot 8 \\sbr{1+\\zeta}^{\\frac{3}{2}} \\rho^E k^2 p L_{x}^2 L_{\\theta} L_{w}}{c_0} \\cdot \\max\\lbr{2^t,\\ \\frac{L_x^2}{\\omega}} \\cdot \\frac{1}{\\delta_t} \\cdot \\log\\sbr{\\frac{16p}{\\delta_t}} } ,\n\t\\end{align*}\n\tthen\n\t\\begin{align*}\n\t\t\\nbr{\\hat{\\boldsymbol{B}}_{t,\\bot}^\\top \\boldsymbol{B}} \\leq \\min\\lbr{\\frac{1}{8 k L_{x} L_{w} \\cdot 2^t \\sqrt{1+\\zeta}},\\ \\frac{\\omega}{6 L_x^2}} .\n\t\\end{align*}\n\\end{proof}\n\n\n\n\\subsection{Elimination with Low-dimensional Representations}\n\nFor clarity of notation, we also add subscript $t$ to the notations in subroutine $\\mathtt{EliLowRep}$ to denote the quantities generated in phase $t$. 
Specifically, we use the notations $\\hat{\\boldsymbol{B}}_t$, $\\hat{\\mathcal{X}}_{t,m}$, $\\boldsymbol{\\lambda}^{G}_{t,m}$, $\\rho^{G}_{t,m}$, $N_{t,m}$, $\\{\\boldsymbol{z}_{t,m,i}\\}_{i \\in [N_{t,m}]}$, $\\{r_{t,m,i}\\}_{i \\in [N_{t,m}]}$, $\\hat{\\boldsymbol{w}}_{t,m}$ and $\\hat{\\boldsymbol{\\theta}}_{t,m}$ to denote the corresponding quantities used in $\\mathtt{EliLowRep}$ in phase $t$.\n\n\nBefore analyzing the sample complexity of $\\mathtt{EliLowRep}$, we first prove that there exists a sample allocation $\\boldsymbol{\\lambda} \\in \\triangle_{\\mathcal{X}}$ such that $\\sum_{i=1}^{n} \\lambda(\\boldsymbol{x}_i) \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\hat{\\boldsymbol{B}}_t$ is invertible, i.e., the G-optimal design optimization with $\\hat{\\boldsymbol{B}}_t$ is non-vacuous (Line~\\ref{line:bai_G_optimal_design} in Algorithm~\\ref{alg:elim_low_rep}).\n\n\nFor any task $m \\in [M]$, let\n\\begin{align*}\n\t\\boldsymbol{\\lambda}^*_m := & \\operatornamewithlimits{argmin}_{\\boldsymbol{\\lambda} \\in \\triangle_{\\mathcal{X}}} \\max_{\\boldsymbol{x} \\in \\mathcal{X} \\setminus \\{\\boldsymbol{x}^{*}_{m}\\}} \\frac{\\| \\boldsymbol{B}^\\top \\boldsymbol{x}^{*}_{m} - \\boldsymbol{B}^\\top \\boldsymbol{x} \\|^2_{\\sbr{ \\sum_{i=1}^{n} \\lambda(\\boldsymbol{x}_i) \\boldsymbol{B}^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\boldsymbol{B} }^{-1} } }{\\sbr{ ({\\boldsymbol{x}^{*}_{m}} - \\boldsymbol{x})^\\top \\boldsymbol{\\theta}_m }^2} .\n\\end{align*}\n$\\boldsymbol{\\lambda}^*_m$ is the optimal solution of the G-optimal design optimization with true feature extractor $\\boldsymbol{B}$.\n\n\n\\begin{lemma} \\label{lemma:invertible_under_hat_B}\n\tFor any phase $t>0$ and task $m \\in [M]$, if $\\|\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot}\\| \\leq \\frac{\\omega}{6 L_x^2}$, we have\n\t\\begin{align*}\n\t\t\\sigma_{\\min}\\sbr{\\sum_{i=1}^{n} \\lambda^{*}_m(\\boldsymbol{x}_i) 
\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\hat{\\boldsymbol{B}}_t} > 0 .\n\t\\end{align*}\n\\end{lemma}\n\\begin{proof}[Proof of Lemma~\\ref{lemma:invertible_under_hat_B}]\n\tFor any task $m \\in [M]$, let $\\boldsymbol{A}_m:=\\sum_{i=1}^{n} \\lambda^{*}_m(\\boldsymbol{x}_i) \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top$.\n\tThen, for any phase $t>0$ and task $m \\in [M]$, we have\n\t\\begin{align*}\n\t\t\\sum_{i=1}^{n} \\lambda^{*}_m(\\boldsymbol{x}_i) \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\hat{\\boldsymbol{B}}_t = & \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{A}_m \\hat{\\boldsymbol{B}}_t\n\t\t\\\\\n\t\t= & \\hat{\\boldsymbol{B}}_t^\\top \\sbr{ \\boldsymbol{B} \\boldsymbol{B}^\\top + \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top } \\boldsymbol{A}_m \\sbr{ \\boldsymbol{B} \\boldsymbol{B}^\\top + \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top } \\hat{\\boldsymbol{B}}_t\n\t\t\\\\\n\t\t= & \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\boldsymbol{A}_m \\boldsymbol{B} \\boldsymbol{B}^\\top \\hat{\\boldsymbol{B}}_t + \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\boldsymbol{A}_m \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\hat{\\boldsymbol{B}}_t \n\t\t\\\\& + \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\boldsymbol{A}_m \\boldsymbol{B} \\boldsymbol{B}^\\top \\hat{\\boldsymbol{B}}_t + \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\boldsymbol{A}_m \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\hat{\\boldsymbol{B}}_t .\n\t\\end{align*}\n\tHence, we have\n\t\\begin{align*}\n\t\t\\sigma_{\\min}\\sbr{\\sum_{i=1}^{n} \\lambda^{*}_m(\\boldsymbol{x}_i) \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\hat{\\boldsymbol{B}}_t} \\geq & \\sigma_{\\min}\\sbr{ \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top 
\\boldsymbol{A}_m \\boldsymbol{B} \\boldsymbol{B}^\\top \\hat{\\boldsymbol{B}}_t } - \\sigma_{\\max}\\sbr{ \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\boldsymbol{A}_m \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\hat{\\boldsymbol{B}}_t }\n\t\t\\\\& - \\sigma_{\\max}\\sbr{ \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\boldsymbol{A}_m \\boldsymbol{B} \\boldsymbol{B}^\\top \\hat{\\boldsymbol{B}}_t } - \\sigma_{\\max}\\sbr{ \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\boldsymbol{A}_m \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\hat{\\boldsymbol{B}}_t }\n\t\t\\\\\n\t\t\\geq & \\sigma_{\\min}\\sbr{ \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} } \\sigma_{\\min}\\sbr{ \\boldsymbol{B}^\\top \\boldsymbol{A}_m \\boldsymbol{B} } \\sigma_{\\min}\\sbr{ \\boldsymbol{B}^\\top \\hat{\\boldsymbol{B}}_t } - \\nbr{ \\boldsymbol{B}_{\\bot}^\\top \\hat{\\boldsymbol{B}}_t } \\nbr{\\boldsymbol{A}_m}\n\t\t\\\\& - \\nbr{ \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} } \\nbr{\\boldsymbol{A}_m} - \\nbr{ \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} } \\nbr{\\boldsymbol{A}_m}\n\t\t\\\\\n\t\t\\geq & \\sigma^2_{\\min}\\sbr{ \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} } \\sigma_{\\min}\\sbr{ \\boldsymbol{B}^\\top \\boldsymbol{A}_m \\boldsymbol{B} } - 3\\nbr{ \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} } L_x^2\n\t\t\\\\\n\t\t\\overset{\\textup{(a)}}{\\geq} & \\sbr{ 1 - \\nbr{ \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} }^2 } \\omega - 3\\nbr{ \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} } L_x^2 ,\n\t\\end{align*}\n\twhere inequality (a) uses the fact that $\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\hat{\\boldsymbol{B}}_t + \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\hat{\\boldsymbol{B}}_t = \\hat{\\boldsymbol{B}}_t^\\top (\\boldsymbol{B} \\boldsymbol{B}^\\top 
+ \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top) \\hat{\\boldsymbol{B}}_t =\\hat{\\boldsymbol{B}}_t^\\top \\hat{\\boldsymbol{B}}_t=\\boldsymbol{I}_k$, and thus, $\\sigma^2_{\\min}( \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} )=1 - \\| \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} \\|^2$.\n\t\n\t\n\t\n\tLet $\\|\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot}\\| \\leq \\frac{\\omega}{6 L_x^2}$. Then, we have\n\t\\begin{align*}\n\t\t\\sigma_{\\min}\\sbr{\\sum_{i=1}^{n} \\lambda^{*}_m(\\boldsymbol{x}_i) \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\hat{\\boldsymbol{B}}_t} \\geq & \\sbr{ 1 - \\frac{\\omega^2}{36 L_x^4} } \\omega - \\frac{\\omega}{2} \n\t\t\\\\\n\t\t= & \\frac{\\omega}{2} - \\frac{\\omega^3}{36 L_x^4} \n\t\t\\\\\n\t\t> & 0 ,\n\t\\end{align*}\n\twhere the last inequality is due to $\\omega \\leq L_x^2 < \\sqrt{18} L_x^2$.\n\\end{proof}\n\n\nNext, we bound the optimal value $\\rho^{G}_{t,m}$ of the G-optimal design optimization with the estimated feature extractor $\\hat{\\boldsymbol{B}}_t$.\n\nFor any $\\mathcal{Z} \\subseteq \\mathcal{X}$, let $\\mathcal{Y}(\\mathcal{Z}):=\\{ \\boldsymbol{x}-\\boldsymbol{x}':\\ \\forall \\boldsymbol{x},\\boldsymbol{x}' \\in \\mathcal{Z},\\ \\boldsymbol{x} \\neq \\boldsymbol{x}' \\}$.\nRecall that in Line~\\ref{line:bai_G_optimal_design} of Algorithm~\\ref{alg:elim_low_rep}, for any phase $t>0$ and task $m \\in [M]$, \n$$\n\\rho^{G}_{t,m}:=\\min_{\\boldsymbol{\\lambda} \\in \\triangle_{\\mathcal{X}}} \\max_{\\boldsymbol{y} \\in \\mathcal{Y}(\\hat{\\mathcal{X}}_{t,m})} \\| \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{y} \\|^2_{\\sbr{\\sum_{i=1}^n \\lambda(\\boldsymbol{x}_i) \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\hat{\\boldsymbol{B}}_t}^{-1}} .\n$$\n\n\n\\begin{lemma} \\label{lemma:rho_t_leq_4k}\n\tFor any phase $t>0$ and task $m \\in [M]$, \n\t\\begin{align*}\n\t\t\\rho^{G}_{t,m} \\leq 4k 
.\n\t\\end{align*}\n\\end{lemma}\n\n\\begin{proof}[Proof of Lemma~\\ref{lemma:rho_t_leq_4k}]\n\tFor any phase $t>0$ and task $m \\in [M]$, we have that $\\hat{\\mathcal{X}}_{t,m} \\subseteq \\mathcal{X}$ and $ \\mathcal{Y}(\\hat{\\mathcal{X}}_{t,m}) \\subseteq \\mathcal{Y}(\\mathcal{X})$.\n\t\n\tFor any fixed $\\boldsymbol{\\lambda} \\in \\triangle_{\\mathcal{X}}$, \n\t\\begin{align*}\n\t\t\\max_{\\boldsymbol{y} \\in \\mathcal{Y}(\\hat{\\mathcal{X}}_{t,m})} \\| \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{y} \\|^2_{\\sbr{\\sum_{i=1}^n \\lambda(\\boldsymbol{x}_i) \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\hat{\\boldsymbol{B}}_t}^{-1}} \\leq & \\max_{\\boldsymbol{y} \\in \\mathcal{Y}(\\mathcal{X})} \\| \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{y} \\|^2_{\\sbr{\\sum_{i=1}^n \\lambda(\\boldsymbol{x}_i) \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\hat{\\boldsymbol{B}}_t}^{-1}}\n\t\t\\\\\n\t\t= & \\| \\hat{\\boldsymbol{B}}_t^\\top (\\boldsymbol{x}'_1-\\boldsymbol{x}'_2) \\|^2_{\\sbr{\\sum_{i=1}^n \\lambda(\\boldsymbol{x}_i) \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\hat{\\boldsymbol{B}}_t}^{-1}}\n\t\t\\\\\n\t\t\\leq & \\sbr{\\| \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}'_1 \\|_{\\sbr{\\sum_{i=1}^n \\lambda(\\boldsymbol{x}_i) \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\hat{\\boldsymbol{B}}_t}^{-1}}+\\| \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}'_2 \\|_{\\sbr{\\sum_{i=1}^n \\lambda(\\boldsymbol{x}_i) \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\hat{\\boldsymbol{B}}_t}^{-1}}}^2\n\t\t\\\\\n\t\t\\leq & 2\\| \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}'_1 \\|^2_{\\sbr{\\sum_{i=1}^n \\lambda(\\boldsymbol{x}_i) \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\hat{\\boldsymbol{B}}_t}^{-1}} + 2\\| \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}'_2 \\|^2_{\\sbr{\\sum_{i=1}^n 
\\lambda(\\boldsymbol{x}_i) \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\hat{\\boldsymbol{B}}_t}^{-1}}\n\t\t\\\\\n\t\t\\leq & 4 \\max_{\\boldsymbol{x} \\in \\mathcal{X}} \\| \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x} \\|^2_{\\sbr{\\sum_{i=1}^n \\lambda(\\boldsymbol{x}_i) \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\hat{\\boldsymbol{B}}_t}^{-1}} , \n\t\\end{align*}\n\twhere $\\boldsymbol{x}'_1$ and $\\boldsymbol{x}'_2$ are the arms which satisfy that $\\boldsymbol{y}=\\boldsymbol{x}'_1-\\boldsymbol{x}'_2$ achieves the maximum value $\\max_{\\boldsymbol{y} \\in \\mathcal{Y}(\\mathcal{X})} \\| \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{y} \\|^2_{\\sbr{\\sum_{i=1}^n \\lambda(\\boldsymbol{x}_i) \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\hat{\\boldsymbol{B}}_t}^{-1}}$.\n\t\n\tSince $\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x} \\in \\mathbb{R}^k$, according to the Equivalence Theorem in \\cite{kiefer1960equivalence}, we have \n\t\\begin{align*}\n\t\t\\min_{\\boldsymbol{\\lambda} \\in \\triangle_{\\mathcal{X}}} \\max_{\\boldsymbol{x} \\in \\mathcal{X}} \\| \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x} \\|^2_{\\sbr{\\sum_{i=1}^n \\lambda(\\boldsymbol{x}_i) \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\hat{\\boldsymbol{B}}_t}^{-1}} = k .\n\t\\end{align*}\n\t\n\tTherefore, we have\n\t\\begin{align*}\n\t\t4k = & 4 \\min_{\\boldsymbol{\\lambda} \\in \\triangle_{\\mathcal{X}}} \\max_{\\boldsymbol{x} \\in \\mathcal{X}} \\| \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x} \\|^2_{\\sbr{\\sum_{i=1}^n \\lambda(\\boldsymbol{x}_i) \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\hat{\\boldsymbol{B}}_t}^{-1}} \\\\\n\t\t= & 4 \\max_{\\boldsymbol{x} \\in \\mathcal{X}} \\| \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x} \\|^2_{\\sbr{\\sum_{i=1}^n \\lambda'(\\boldsymbol{x}_i) \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}_i 
\\boldsymbol{x}_i^\\top \\hat{\\boldsymbol{B}}_t}^{-1}}\n\t\t\\\\\n\t\t\\geq & \\max_{\\boldsymbol{y} \\in \\mathcal{Y}(\\hat{\\mathcal{X}}_{t,m})} \\| \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{y} \\|^2_{\\sbr{\\sum_{i=1}^n \\lambda'(\\boldsymbol{x}_i) \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\hat{\\boldsymbol{B}}_t}^{-1}}\n\t\t\\\\\n\t\t\\geq & \\min_{\\boldsymbol{\\lambda} \\in \\triangle_{\\mathcal{X}}} \\max_{\\boldsymbol{y} \\in \\mathcal{Y}(\\hat{\\mathcal{X}}_{t,m})} \\| \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{y} \\|^2_{\\sbr{\\sum_{i=1}^n \\lambda(\\boldsymbol{x}_i) \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\hat{\\boldsymbol{B}}_t}^{-1}}\n\t\t\\\\\n\t\t= & \\rho^{G}_{t,m} ,\n\t\\end{align*}\n\twhere $\\boldsymbol{\\lambda}':=\\operatornamewithlimits{argmin}_{\\boldsymbol{\\lambda} \\in \\triangle_{\\mathcal{X}}} \\max_{\\boldsymbol{x} \\in \\mathcal{X}} \\| \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x} \\|^2_{\\sbr{\\sum_{i=1}^n \\lambda(\\boldsymbol{x}_i) \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\hat{\\boldsymbol{B}}_t}^{-1}}$.\n\\end{proof}\n\nNow we analyze the estimation error of the estimated reward parameter $\\hat{\\boldsymbol{\\theta}}_{t,m}=\\hat{\\boldsymbol{B}}_t \\hat{\\boldsymbol{w}}_{t,m}$ in $\\mathtt{EliLowRep}$.\n\nFor any phase $t>0$, task $m \\in [M]$ and arm $j \\in [N_{t,m}]$, let $\\xi_{t,m,j}$ denote the noise of the sample on arm $\\boldsymbol{z}_{t,m,j}$ for task $m$, during the execution of $\\mathtt{EliLowRep}$ in phase $t$ (Line~\\ref{line:bai_stage3_sample} in Algorithm~\\ref{alg:elim_low_rep}).\n\nFor any phase $t>0$, define events\n\\begin{align}\n\t\\mathcal{F}_t := \\Bigg \\{ &\n\t\\boldsymbol{y}^\\top \\hat{\\boldsymbol{B}}_t \\sbr{\\sum_{j=1}^{N_{t,m}} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{z}_{t,m,j} {\\boldsymbol{z}_{t,m,j}}^\\top \\hat{\\boldsymbol{B}}_t}^{-1} \\sum_{j=1}^{N_{t,m}} \\hat{\\boldsymbol{B}}_t^\\top 
\\boldsymbol{z}_{t,m,j} \\cdot \\xi_{t,m,j} \n\t\\nonumber\\\\\n\t& \\leq \\nbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{y}}_{\\sbr{\\sum_{j=1}^{N_{t,m}} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{z}_{t,m,j} {\\boldsymbol{z}_{t,m,j}}^\\top \\hat{\\boldsymbol{B}}_t}^{-1}} \\sqrt{2\\log \\sbr{\\frac{4n^2 M}{\\delta_t}}}, \\ \\forall m \\in [M] ,\\ \\forall \\boldsymbol{y} \\in \\mathcal{Y}(\\hat{\\mathcal{X}}_{t,m}) \\Bigg\\}, \\label{eq:definition_cF}\n\\end{align}\nand \n\\begin{align*}\n\t\\mathcal{F} := \\cap_{t=1}^{\\infty} \\mathcal{F}_t .\n\\end{align*}\n\n\\begin{lemma}[Concentration of the Variance Term]\\label{lemma:concentration_variance_bai}\n\tIt holds that\n\n\t\n\t\n\n\t\\begin{align*}\n\t\t\\Pr \\mbr{\\mathcal{F}} \\geq 1-\\frac{\\delta}{2} . \n\t\\end{align*}\n\\end{lemma}\n\n\\begin{proof}[Proof of Lemma~\\ref{lemma:concentration_variance_bai}]\n\tLet $\\boldsymbol{\\Sigma}_{t,m}:=\\sum_{j=1}^{N_{t,m}} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{z}_{t,m,j} {\\boldsymbol{z}_{t,m,j}}^\\top \\hat{\\boldsymbol{B}}_t$.\n\tThen, we can write\n\t\\begin{align*}\n\t\t\\boldsymbol{y}^\\top \\hat{\\boldsymbol{B}}_t \\sbr{\\sum_{j=1}^{N_{t,m}} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{z}_{t,m,j} {\\boldsymbol{z}_{t,m,j}}^\\top \\hat{\\boldsymbol{B}}_t}^{-1} \\sum_{j=1}^{N_{t,m}} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{z}_{t,m,j} \\cdot \\xi_{t,m,j} =\\sum_{j=1}^{N_{t,m}} \\boldsymbol{y}^\\top \\hat{\\boldsymbol{B}}_t \\boldsymbol{\\Sigma}_{t,m}^{-1} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{z}_{t,m,j} \\cdot \\xi_{t,m,j} .\n\t\\end{align*}\n\t\n\tFor any phase $t>0$, task $m \\in [M]$ and arm $j \\in [N_{t,m}]$, $\\hat{\\boldsymbol{B}}_t$, $\\boldsymbol{\\Sigma}_{t,m}$ and $\\{\\boldsymbol{z}_{t,m,j}\\}_{j=1}^{N_{t,m}}$ are fixed before the sampling in $\\mathtt{EliLowRep}$, and the noise $\\xi_{t,m,j}$ is 1-sub-Gaussian (Line~\\ref{line:bai_stage3_sample} in Algorithm~\\ref{alg:elim_low_rep}).\n\tThus, we have that for any $t>0$, $m \\in [M]$ and 
$j \\in [N_{t,m}]$, $\\boldsymbol{y}^\\top \\hat{\\boldsymbol{B}}_t \\boldsymbol{\\Sigma}_{t,m}^{-1} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{z}_{t,m,j} \\cdot \\xi_{t,m,j} $ is $(\\boldsymbol{y}^\\top \\hat{\\boldsymbol{B}}_t \\boldsymbol{\\Sigma}_{t,m}^{-1} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{z}_{t,m,j})$-sub-Gaussian.\n\t\n\tUsing Hoeffding's inequality and taking a union bound over all $m \\in [M]$ and $\\boldsymbol{y} \\in \\mathcal{Y}(\\hat{\\mathcal{X}}_{t,m})$, we have that with probability at least $1-\\frac{\\delta_t}{2}$,\n\t\\begin{align*}\n\t\n\t\n\t\t& \\sum_{j=1}^{N_{t,m}} \\boldsymbol{y}^\\top \\hat{\\boldsymbol{B}}_t \\boldsymbol{\\Sigma}_{t,m}^{-1} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{z}_{t,m,j} \\cdot \\xi_{t,m,j}\n\t\t\\\\\n\t\t\\leq & \\sqrt{2 \\sum_{j=1}^{N_{t,m}} \\sbr{\\boldsymbol{y}^\\top \\hat{\\boldsymbol{B}}_t \\boldsymbol{\\Sigma}_{t,m}^{-1} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{z}_{t,m,j}}^2 \\cdot \\log \\sbr{\\frac{4n^2 M}{\\delta_t}}} \n\t\t\\\\\n\t\t= & \\sqrt{2 \\sum_{j=1}^{N_{t,m}} \\boldsymbol{y}^\\top \\hat{\\boldsymbol{B}}_t \\boldsymbol{\\Sigma}_{t,m}^{-1} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{z}_{t,m,j} \\cdot {\\boldsymbol{z}_{t,m,j}}^\\top \\hat{\\boldsymbol{B}}_t \\boldsymbol{\\Sigma}_{t,m}^{-1} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{y} \\cdot \\log \\sbr{\\frac{4n^2 M}{\\delta_t}}} \n\t\t\\\\\n\t\t= & \\sqrt{2 \\boldsymbol{y}^\\top \\hat{\\boldsymbol{B}}_t \\boldsymbol{\\Sigma}_{t,m}^{-1} \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\sum_{j=1}^{N_{t,m}} \\boldsymbol{z}_{t,m,j} \\cdot {\\boldsymbol{z}_{t,m,j}}^\\top \\hat{\\boldsymbol{B}}_t} \\boldsymbol{\\Sigma}_{t,m}^{-1} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{y} \\cdot \\log \\sbr{\\frac{4n^2 M}{\\delta_t}}} \n\t\t\\\\\n\t\t= & \\sqrt{2 \\boldsymbol{y}^\\top \\hat{\\boldsymbol{B}}_t \\boldsymbol{\\Sigma}_{t,m}^{-1} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{y} \\cdot \\log \\sbr{\\frac{4n^2 M}{\\delta_t}}} \n\t\t\\\\\n\t\t= & 
\\nbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{y}}_{\\boldsymbol{\\Sigma}_{t,m}^{-1}} \\sqrt{2 \\log \\sbr{\\frac{4n^2 M}{\\delta_t}}} ,\n\t\\end{align*}\n\twhich implies that\n\t\\begin{align*}\n\t\t\\Pr \\mbr{ \\mathcal{F}_t } \\geq 1 - \\frac{\\delta_t}{2} .\n\t\\end{align*}\n\t\n\tTaking a union bound over all phases $t \\geq 1$ and recalling $\\delta_t:=\\frac{\\delta}{2t^2}$, we obtain\n\t\\begin{align*}\n\t\t\\Pr \\mbr{\\mathcal{F}} \n\t\t\\geq & 1- \\sum_{t=1}^{\\infty} \\Pr \\mbr{\\bar{\\mathcal{F}_t}}\n\t\t\\\\\n\t\t\\geq & 1- \\sum_{t=1}^{\\infty} \\frac{\\delta_t}{2}\n\t\t\\\\\n\t\t= & 1- \\sum_{t=1}^{\\infty} \\frac{\\delta}{4t^2}\n\t\t\\\\\n\t\t\\geq & 1-\\frac{\\delta}{2} .\n\t\\end{align*}\n\\end{proof}\n\n\n\\begin{lemma}[Concentration of $\\hat{\\boldsymbol{\\theta}}_{t,m}$] \\label{lemma:bai_concentration_theta_t_m}\n\tSuppose that event $\\mathcal{E} \\cap \\mathcal{F}$ holds. Then, for any phase $t>0$, task $m \\in [M]$ and $\\boldsymbol{y} \\in \\mathcal{Y}(\\hat{\\mathcal{X}}_{t,m})$,\n\t\\begin{align*}\n\t\t\\abr{\\boldsymbol{y}^\\top \\sbr{\\hat{\\boldsymbol{\\theta}}_{t,m}-\\boldsymbol{\\theta}_m}} \\leq \\frac{1}{2^t} .\n\t\\end{align*}\n\\end{lemma}\n\n\\begin{proof}[Proof of Lemma~\\ref{lemma:bai_concentration_theta_t_m}]\n\t\n\tFor any phase $t>0$, task $m \\in [M]$ and $\\boldsymbol{y} \\in \\mathcal{Y}(\\hat{\\mathcal{X}}_{t,m})$,\n\t\\begin{align}\n\t\t\\boldsymbol{y}^\\top \\sbr{\\hat{\\boldsymbol{\\theta}}_{t,m}- \\boldsymbol{\\theta}_m} = & \\boldsymbol{y}^\\top \\hat{\\boldsymbol{B}}_t \\hat{\\boldsymbol{w}}_{t,m}-\\boldsymbol{y}^\\top \\sbr{ \\hat{\\boldsymbol{B}}_t \\hat{\\boldsymbol{B}}_t^\\top + \\hat{\\boldsymbol{B}}_{t,\\bot} \\hat{\\boldsymbol{B}}_{t,\\bot}^\\top } \\boldsymbol{\\theta}_m\n\t\t\\nonumber\\\\\n\t\t= & \\boldsymbol{y}^\\top \\hat{\\boldsymbol{B}}_t \\sbr{\\hat{\\boldsymbol{w}}_{t,m} - \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{\\theta}_m } -\\boldsymbol{y}^\\top \\hat{\\boldsymbol{B}}_{t,\\bot} 
\\hat{\\boldsymbol{B}}_{t,\\bot}^\\top \\boldsymbol{\\theta}_m . \\label{eq:y_theta_est_err_decompose}\n\t\\end{align}\n\t\n\t\n\tHere, $\\hat{\\boldsymbol{w}}_{t,m}$ can be written as\n\t\\begin{align}\n\t\t\\hat{\\boldsymbol{w}}_{t,m} = & \\sbr{\\sum_{j=1}^{N_{t,m}} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{z}_{t,m,j} {\\boldsymbol{z}_{t,m,j}}^\\top \\hat{\\boldsymbol{B}}_t}^{-1} \\sum_{j=1}^{N_{t,m}} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{z}_{t,m,j} \\cdot r_{t,m,j}\n\t\t\\nonumber\\\\\n\t\t= & \\sbr{\\sum_{j=1}^{N_{t,m}} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{z}_{t,m,j} {\\boldsymbol{z}_{t,m,j}}^\\top \\hat{\\boldsymbol{B}}_t}^{-1} \\sum_{j=1}^{N_{t,m}} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{z}_{t,m,j} \\cdot \\sbr{{\\boldsymbol{z}_{t,m,j}}^\\top \\boldsymbol{\\theta}_m + \\xi_{t,m,j}}\n\t\t\\nonumber\\\\\n\t\t= & \\sbr{\\sum_{j=1}^{N_{t,m}} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{z}_{t,m,j} {\\boldsymbol{z}_{t,m,j}}^\\top \\hat{\\boldsymbol{B}}_t}^{-1} \\sum_{j=1}^{N_{t,m}} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{z}_{t,m,j} \\cdot \\sbr{{\\boldsymbol{z}_{t,m,j}}^\\top \\sbr{ \\hat{\\boldsymbol{B}}_t \\hat{\\boldsymbol{B}}_t^\\top + \\hat{\\boldsymbol{B}}_{t,\\bot} \\hat{\\boldsymbol{B}}_{t,\\bot}^\\top } \\boldsymbol{\\theta}_m + \\xi_{t,m,j}}\n\t\t\\nonumber\\\\\n\t\t= & \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{\\theta}_m + \\sbr{\\sum_{j=1}^{N_{t,m}} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{z}_{t,m,j} {\\boldsymbol{z}_{t,m,j}}^\\top \\hat{\\boldsymbol{B}}_t}^{-1} \\sum_{j=1}^{N_{t,m}} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{z}_{t,m,j} \\cdot {\\boldsymbol{z}_{t,m,j}}^\\top \\hat{\\boldsymbol{B}}_{t,\\bot} \\hat{\\boldsymbol{B}}_{t,\\bot}^\\top \\boldsymbol{\\theta}_m \\nonumber\\\\& + \\sbr{\\sum_{j=1}^{N_{t,m}} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{z}_{t,m,j} {\\boldsymbol{z}_{t,m,j}}^\\top \\hat{\\boldsymbol{B}}_t}^{-1} \\sum_{j=1}^{N_{t,m}} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{z}_{t,m,j} \\cdot \\xi_{t,m,j} 
. \\label{eq:hat_w_decompose}\n\t\\end{align}\n\t\n\tPlugging Eq.~\\eqref{eq:hat_w_decompose} into Eq.~\\eqref{eq:y_theta_est_err_decompose}, we can decompose the estimation error of $\\hat{\\boldsymbol{\\theta}}_{t,m}$ in $\\mathtt{EliLowRep}$ into three parts as\n\t\\begin{align*}\n\t\t\\boldsymbol{y}^\\top \\sbr{\\hat{\\boldsymbol{\\theta}}_{t,m}- \\boldsymbol{\\theta}_m} = & \n\t\t\\underbrace{\\boldsymbol{y}^\\top \\hat{\\boldsymbol{B}}_t \\sbr{\\sum_{j=1}^{N_{t,m}} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{z}_{t,m,j} {\\boldsymbol{z}_{t,m,j}}^\\top \\hat{\\boldsymbol{B}}_t}^{-1} \\sum_{j=1}^{N_{t,m}} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{z}_{t,m,j} \\cdot {\\boldsymbol{z}_{t,m,j}}^\\top \\hat{\\boldsymbol{B}}_{t,\\bot} \\hat{\\boldsymbol{B}}_{t,\\bot}^\\top \\boldsymbol{B} \\boldsymbol{w}_m}_{\\textup{Bias}} \\\\& + \\underbrace{\\boldsymbol{y}^\\top \\hat{\\boldsymbol{B}}_t \\sbr{\\sum_{j=1}^{N_{t,m}} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{z}_{t,m,j} {\\boldsymbol{z}_{t,m,j}}^\\top \\hat{\\boldsymbol{B}}_t}^{-1} \\sum_{j=1}^{N_{t,m}} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{z}_{t,m,j} \\cdot \\xi_{t,m,j}}_{\\textup{Variance}} - \\underbrace{\\boldsymbol{y}^\\top \\hat{\\boldsymbol{B}}_{t,\\bot} \\hat{\\boldsymbol{B}}_{t,\\bot}^\\top \\boldsymbol{B} \\boldsymbol{w}_m}_{\\textup{Estimation error of $\\hat{\\boldsymbol{B}}_t$}} .\n\t\\end{align*}\n\t\n\t\n\tTaking the absolute value on both sides, and using the Cauchy--Schwarz inequality and the definition of event $\\mathcal{F}$ (Eq.~\\eqref{eq:definition_cF}), we have\n\t\\begin{align*}\n\t\t& \\abr{\\boldsymbol{y}^\\top \\hat{\\boldsymbol{\\theta}}_{t,m}-\\boldsymbol{y}^\\top \\boldsymbol{\\theta}_m} \n\t\t\\\\\n\t\t\\leq & \\nbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{y}}_{\\sbr{\\sum_{j=1}^{N_{t,m}} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{z}_{t,m,j} {\\boldsymbol{z}_{t,m,j}}^\\top \\hat{\\boldsymbol{B}}_t}^{-1}} \\cdot \\nbr{\\sum_{j=1}^{N_{t,m}} \\hat{\\boldsymbol{B}}_t^\\top 
\\boldsymbol{z}_{t,m,j} \\cdot {\\boldsymbol{z}_{t,m,j}}^\\top \\hat{\\boldsymbol{B}}_{t,\\bot} \\hat{\\boldsymbol{B}}_{t,\\bot}^\\top \\boldsymbol{B} \\boldsymbol{w}_m}_{\\sbr{\\sum_{j=1}^{N_{t,m}} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{z}_{t,m,j} {\\boldsymbol{z}_{t,m,j}}^\\top \\hat{\\boldsymbol{B}}_t}^{-1}} \\\\& + \\nbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{y}}_{\\sbr{\\sum_{j=1}^{N_{t,m}} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{z}_{t,m,j} {\\boldsymbol{z}_{t,m,j}}^\\top \\hat{\\boldsymbol{B}}_t}^{-1}} \\sqrt{2\\log \\sbr{\\frac{4n^2 M}{\\delta_t}}} + \\abr{\\boldsymbol{y}^\\top \\hat{\\boldsymbol{B}}_{t,\\bot} \\hat{\\boldsymbol{B}}_{t,\\bot}^\\top \\boldsymbol{B} \\boldsymbol{w}_m}\n\t\t\\\\\n\t\t\\overset{\\textup{(a)}}{\\leq} & \\frac{ \\sqrt{1+\\zeta} \\nbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{y}}_{\\sbr{\\sum_{i=1}^{n} \\lambda^{G}_{t,m}(\\boldsymbol{x}_i) \\cdot \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\hat{\\boldsymbol{B}}_t}^{-1}}}{\\sqrt{N_{t,m}}} \\cdot L_{x} L_{w} \\nbr{\\hat{\\boldsymbol{B}}_{t,\\bot}^\\top \\boldsymbol{B}} \\cdot \\sum_{j=1}^{N_{t,m}} \\nbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{z}_{t,m,j}}_{\\sbr{\\sum_{j=1}^{N_{t,m}} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{z}_{t,m,j} {\\boldsymbol{z}_{t,m,j}}^\\top \\hat{\\boldsymbol{B}}_t}^{-1}} \\\\& + \\frac{ \\sqrt{1+\\zeta} \\nbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{y}}_{\\sbr{\\sum_{i=1}^{n} \\lambda^{G}_{t,m}(\\boldsymbol{x}_i) \\cdot \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\hat{\\boldsymbol{B}}_t}^{-1}}}{\\sqrt{N_{t,m}}} \\cdot \\sqrt{2\\log \\sbr{\\frac{4n^2 M}{\\delta_t}}} + 2 L_{x} L_{w} \\nbr{\\hat{\\boldsymbol{B}}_{t,\\bot}^\\top \\boldsymbol{B}} \n\t\t\\\\\n\t\t\\overset{\\textup{(b)}}{\\leq} & \\frac{ \\sqrt{1+\\zeta} \\nbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{y}}_{\\sbr{\\sum_{i=1}^{n} \\lambda^{G}_{t,m}(\\boldsymbol{x}_i) \\cdot \\hat{\\boldsymbol{B}}_t^\\top 
\\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\hat{\\boldsymbol{B}}_t}^{-1}}}{\\sqrt{N_{t,m}}} \\cdot L_{x} L_{w} \\nbr{\\hat{\\boldsymbol{B}}_{t,\\bot}^\\top \\boldsymbol{B}} \\cdot \\sqrt{k N_{t,m}} \\\\& + \\frac{ \\sqrt{1+\\zeta} \\nbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{y}}_{\\sbr{\\sum_{i=1}^{n} \\lambda^{G}_{t,m}(\\boldsymbol{x}_i) \\cdot \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\hat{\\boldsymbol{B}}_t}^{-1}}}{\\sqrt{N_{t,m}}} \\cdot \\sqrt{2\\log \\sbr{\\frac{4n^2 M}{\\delta_t}}} + 2 L_{x} L_{w} \\nbr{\\hat{\\boldsymbol{B}}_{t,\\bot}^\\top \\boldsymbol{B}} \n\t\t\\\\\n\t\t\\leq & \\sqrt{1+\\zeta} \\nbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{y}}_{\\sbr{\\sum_{i=1}^{n} \\lambda^{G}_{t,m}(\\boldsymbol{x}_i) \\cdot \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\hat{\\boldsymbol{B}}_t}^{-1}} \\cdot L_{x} L_{w} \\nbr{\\hat{\\boldsymbol{B}}_{t,\\bot}^\\top \\boldsymbol{B}} \\cdot \\sqrt{k} \\\\& + \\frac{ \\sqrt{1+\\zeta} \\nbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{y}}_{\\sbr{\\sum_{i=1}^{n} \\lambda^{G}_{t,m}(\\boldsymbol{x}_i) \\cdot \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\hat{\\boldsymbol{B}}_t}^{-1}}}{\\sqrt{N_{t,m}}} \\cdot \\sqrt{2\\log \\sbr{\\frac{4n^2 M}{\\delta_t}}} + 2 L_{x} L_{w} \\nbr{\\hat{\\boldsymbol{B}}_{t,\\bot}^\\top \\boldsymbol{B}} \n\t\t\\\\\n\t\t\\leq & \\sqrt{(1+\\zeta) \\cdot k \\cdot \\rho^{G}_{t,m}} \\cdot L_{x} L_{w} \\nbr{\\hat{\\boldsymbol{B}}_{t,\\bot}^\\top \\boldsymbol{B}} + \\frac{ \\sqrt{(1+\\zeta) \\cdot \\rho^{G}_{t,m} \\cdot 2\\log \\sbr{\\frac{4n^2 M}{\\delta_t}} } }{\\sqrt{N_{t,m}}} + 2 L_{x} L_{w} \\nbr{\\hat{\\boldsymbol{B}}_{t,\\bot}^\\top \\boldsymbol{B}}\n\t\t\\\\\n\t\t\\overset{\\textup{(c)}}{\\leq} & \\sqrt{(1+\\zeta) \\cdot 4 k^2} \\cdot L_{x} L_{w} \\nbr{\\hat{\\boldsymbol{B}}_{t,\\bot}^\\top \\boldsymbol{B}} + \\frac{ \\sqrt{(1+\\zeta) \\cdot \\rho^{G}_{t,m} \\cdot 2\\log \\sbr{\\frac{4n^2 M}{\\delta_t}} 
} }{\\sqrt{N_{t,m}}} + 2 L_{x} L_{w} \\nbr{\\hat{\\boldsymbol{B}}_{t,\\bot}^\\top \\boldsymbol{B}}\n\t\t\\\\\n\t\t\\overset{\\textup{(d)}}{\\leq} & \\sqrt{(1+\\zeta) \\cdot 4 k^2} \\cdot L_{x} L_{w} \\cdot \\frac{1}{8 k L_{x} L_{w} \\cdot 2^t \\sqrt{1+\\zeta}} + \\frac{1}{4 \\cdot 2^t} + 2 L_{x} L_{w} \\cdot \\frac{1}{8 k L_{x} L_{w} \\cdot 2^t \\sqrt{1+\\zeta}}\n\t\t\\\\\n\t\t\\leq & \\frac{1}{4 \\cdot 2^t} + \\frac{1}{4 \\cdot 2^t} + \\frac{1}{4 \\cdot 2^t}\n\t\t\\\\\n\t\t\\leq & \\frac{1}{2^t} .\n\t\\end{align*}\n\tHere inequality (a) is due to the guarantee of rounding procedure $\\mathtt{ROUND}$ and the triangle inequality. Inequality (b) uses Lemma~\\ref{lemma:sqrt_n_k}, and inequality (c) follows from Lemma~\\ref{lemma:rho_t_leq_4k}. Inequality (d) comes from Lemma~\\ref{lemma:concentration_B_hat_t} and $N_{t,m}:=\\max \\{ \\lceil 32 \\cdot 2^{2t} (1+\\zeta) \\rho^{G}_{t,m} \\log (\\frac{4n^2 M}{\\delta_t}) \\rceil,\\ \\frac{180k}{\\zeta^2} \\}$.\n\\end{proof}\n\n\n\nFor any task $m \\in [M]$ and arm $\\boldsymbol{x} \\in \\mathcal{X}$, let $\\Delta_m(\\boldsymbol{x}):=({\\boldsymbol{x}^{*}_{m}} - \\boldsymbol{x})^\\top \\boldsymbol{\\theta}_m$ denote the reward gap between the optimal arm $\\boldsymbol{x}^{*}_{m}$ and arm $\\boldsymbol{x}$ in task $m$.\nFor any phase $t>0$ and task $m \\in [M]$, let $\\mathcal{Z}_{t,m}:=\\{ \\boldsymbol{x} \\in \\mathcal{X} : \\Delta_{m}(\\boldsymbol{x}) \\leq 4 \\cdot 2^{-t} \\}$.\n\n\\begin{lemma} \\label{lemma:cX_subseteq_cS}\n\tSuppose that event $\\mathcal{E} \\cap \\mathcal{F}$ holds. 
For any phase $t>0$ and task $m \\in [M]$,\n\t\\begin{align*}\n\t\t\\boldsymbol{x}^{*}_{m} \\in \\hat{\\mathcal{X}}_{t,m} ,\n\t\\end{align*}\n\tand for any phase $t\\geq2$ and task $m \\in [M]$,\n\t\\begin{align*}\n\t\t\\hat{\\mathcal{X}}_{t,m} \\subseteq \\mathcal{Z}_{t,m} .\n\t\\end{align*}\n\\end{lemma}\n\n\\begin{proof}[Proof of Lemma~\\ref{lemma:cX_subseteq_cS}]\n\tThis proof follows a similar analytical procedure as that of Lemma~2 in \\cite{fiez2019sequential}.\n\t\n\tFirst, we prove $\\boldsymbol{x}^{*}_{m} \\in \\hat{\\mathcal{X}}_{t,m}$ for any phase $t>0$ and task $m \\in [M]$ by contradiction.\n\t\n\tSuppose that for some $t>0$ and some $m \\in [M]$, $\\boldsymbol{x}^{*}_{m}$ is eliminated from $\\hat{\\mathcal{X}}_{t,m}$ in phase $t$.\n\tThen, we have that there exists some $\\boldsymbol{x}' \\in \\hat{\\mathcal{X}}_{t,m}$ such that\n\t\\begin{align*}\n\t\t(\\boldsymbol{x}'-\\boldsymbol{x}^{*}_{m})^\\top \\hat{\\boldsymbol{\\theta}}_{t,m} > 2^{-t} .\n\t\\end{align*}\n\tThen, we have\n\t\\begin{align*}\n\t\t(\\boldsymbol{x}'-\\boldsymbol{x}^{*}_{m})^\\top \\boldsymbol{\\theta}_m = & (\\boldsymbol{x}'-\\boldsymbol{x}^{*}_{m})^\\top \\hat{\\boldsymbol{\\theta}}_{t,m} - (\\boldsymbol{x}'-\\boldsymbol{x}^{*}_{m})^\\top \\sbr{\\hat{\\boldsymbol{\\theta}}_{t,m} - \\boldsymbol{\\theta}_m} \n\t\t\\\\\n\t\t\\geq & (\\boldsymbol{x}'-\\boldsymbol{x}^{*}_{m})^\\top \\hat{\\boldsymbol{\\theta}}_{t,m} - 2^{-t}\n\t\t\\\\\n\t\t> & 2^{-t} - 2^{-t}\n\t\t\\\\\n\t\t= & 0 ,\n\t\\end{align*}\n\twhich contradicts the definition of $\\boldsymbol{x}^{*}_{m}$. Thus, we obtain that $\\boldsymbol{x}^{*}_{m} \\in \\hat{\\mathcal{X}}_{t,m}$ for any phase $t>0$ and task $m \\in [M]$.\n\t\n\tNext, we prove $\\hat{\\mathcal{X}}_{t,m} \\subseteq \\mathcal{Z}_{t,m}$ for any phase $t\\geq2$ and task $m \\in [M]$, i.e., each $\\boldsymbol{x} \\in \\hat{\\mathcal{X}}_{t,m}$ satisfies that $\\Delta_m(\\boldsymbol{x}) \\leq 4 \\cdot 2^{-t}$. 
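As an aside, the elimination rule analyzed in this lemma (discard $\boldsymbol{x}$ when some $\boldsymbol{x}'$ satisfies $(\boldsymbol{x}'-\boldsymbol{x})^\top \hat{\boldsymbol{\theta}}_{t,m} > 2^{-t}$) can be illustrated numerically. The following is a minimal sketch, not the paper's $\mathtt{EliLowRep}$ implementation; the toy arms, parameter $\boldsymbol{\theta}$, and noise level are illustrative assumptions.

```python
import numpy as np

def eliminate(arms, theta_hat, t):
    """Keep arm x unless some x' satisfies (x' - x)^T theta_hat > 2^{-t}."""
    return [x for x in arms
            if all((xp - x) @ theta_hat <= 2.0 ** (-t) for xp in arms)]

rng = np.random.default_rng(0)
theta = np.array([1.0, 0.0])                     # toy task parameter
arms = [np.array([1.0, 0.0]),                    # optimal arm, gap 0
        np.array([0.9, 0.3]),                    # gap 0.1
        np.array([0.0, 1.0])]                    # gap 1.0
t = 2
# Simulated estimate whose error per coordinate is below the phase accuracy 2^{-t}:
theta_hat = theta + rng.uniform(-2.0 ** (-t) / 2, 2.0 ** (-t) / 2, size=2)

kept = eliminate(arms, theta_hat, t)
# The optimal arm always survives, and every survivor has gap at most 4 * 2^{-t},
# mirroring the two claims of the lemma.
assert any(np.allclose(x, arms[0]) for x in kept)
assert all((arms[0] - x) @ theta <= 4 * 2.0 ** (-t) + 1e-12 for x in kept)
```

With estimation error below the phase accuracy $2^{-t}$, the optimal arm passes every pairwise test, while any arm with gap above $4 \cdot 2^{-t}$ fails the test against the optimal arm, which is the mechanism the proof below exploits.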
\n\t\n\tSuppose that there exists some phase $t \\geq 2$, some task $m$ and some $\\boldsymbol{x} \\in \\hat{\\mathcal{X}}_{t,m}$ such that $\\Delta_m(\\boldsymbol{x}) > 4 \\cdot 2^{-t}$. Then, in phase $t-1 \\geq 1$, we have\n\t\\begin{align*}\n\t\t(\\boldsymbol{x}^{*}_{m}-\\boldsymbol{x})^\\top \\hat{\\boldsymbol{\\theta}}_{t-1,m} = & (\\boldsymbol{x}^{*}_{m}-\\boldsymbol{x})^\\top \\boldsymbol{\\theta}_m - (\\boldsymbol{x}^{*}_{m}-\\boldsymbol{x})^\\top \\sbr{\\boldsymbol{\\theta}_m-\\hat{\\boldsymbol{\\theta}}_{t-1,m}} \n\t\t\\\\\n\t\t\\geq & (\\boldsymbol{x}^{*}_{m}-\\boldsymbol{x})^\\top \\boldsymbol{\\theta}_m - 2^{-(t-1)}\n\t\t\\\\\n\t\t> & 4 \\cdot 2^{-t} - 2^{-(t-1)}\n\t\t\\\\\n\t\t= & 2^{-(t-1)} , \n\t\\end{align*}\n\twhich implies that $\\boldsymbol{x}$ should have been eliminated from $\\hat{\\mathcal{X}}_{t,m}$ in phase $t-1$, contradicting our supposition. Thus, we complete the proof.\n\\end{proof}\n\n\n\n\n\n\\subsection{Proof of Theorem~\\ref{thm:bai_ub}}\n\nBefore proving Theorem~\\ref{thm:bai_ub}, we first introduce a useful lemma.\n\nFor any task $m \\in [M]$, let\n\\begin{align*}\n\t\\boldsymbol{\\lambda}^*_m := & \\operatornamewithlimits{argmin}_{\\boldsymbol{\\lambda} \\in \\triangle_{\\mathcal{X}}} \\max_{\\boldsymbol{x} \\in \\mathcal{X} \\setminus \\{\\boldsymbol{x}^{*}_{m}\\}} \\frac{\\| \\boldsymbol{B}^\\top \\boldsymbol{x}^{*}_{m} - \\boldsymbol{B}^\\top \\boldsymbol{x} \\|^2_{\\sbr{ \\sum_{i=1}^{n} \\lambda(\\boldsymbol{x}_i) \\boldsymbol{B}^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\boldsymbol{B} }^{-1} } }{ \\sbr{({\\boldsymbol{x}^{*}_{m}} - \\boldsymbol{x})^\\top \\boldsymbol{\\theta}_m}^2 } ,\n\\end{align*}\nand\n\\begin{align*}\n\t\\rho^*_m := & \\min_{\\boldsymbol{\\lambda} \\in \\triangle_{\\mathcal{X}}} \\max_{\\boldsymbol{x} \\in \\mathcal{X} \\setminus \\{\\boldsymbol{x}^{*}_{m}\\}} \\frac{\\| \\boldsymbol{B}^\\top \\boldsymbol{x}^{*}_{m} - \\boldsymbol{B}^\\top \\boldsymbol{x} \\|^2_{\\sbr{ \\sum_{i=1}^{n} \\lambda(\\boldsymbol{x}_i) 
\\boldsymbol{B}^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\boldsymbol{B} }^{-1} } }{ \\sbr{({\\boldsymbol{x}^{*}_{m}} - \\boldsymbol{x})^\\top \\boldsymbol{\\theta}_m}^2 } .\n\\end{align*}\n$\\boldsymbol{\\lambda}^*_m$ and $\\rho^*_m$ are the optimal solution and the optimal value of the G-optimal design optimization with true feature extractor $\\boldsymbol{B}$, respectively.\n\n\n\\begin{lemma}\\label{lemma:connection_ub_lb}\n\tSuppose that event $\\mathcal{E} \\cap \\mathcal{F}$ holds. For any task $m \\in [M]$ and $\\boldsymbol{y} \\in \\mathbb{R}^d$,\n\t\\begin{align*}\n\t\t\\| \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{y} \\|^2_{\\sbr{ \\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\hat{\\boldsymbol{B}}_t }^{-1} } \\leq \\| \\boldsymbol{B}^\\top \\boldsymbol{y} \\|^2_{\\sbr{ \\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) \\boldsymbol{B}^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\boldsymbol{B} }^{-1} } + \\frac{ 11 L_{x}^4 }{ k \\omega^2 \\cdot 2^t } .\n\t\\end{align*}\n\\end{lemma}\n\n\\begin{proof}[Proof of Lemma~\\ref{lemma:connection_ub_lb}]\n\t\n\tWe first handle the term $( \\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\hat{\\boldsymbol{B}}_t )^{-1}$. 
\n\t\n\tFor any task $m \\in [M]$, we have\n\t\\begin{align*}\n\t\t& \\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\hat{\\boldsymbol{B}}_t \n\t\t\\\\\n\t\t= & \\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\boldsymbol{x}_i + \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\boldsymbol{x}_i} \\cdot \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\boldsymbol{x}_i + \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\boldsymbol{x}_i}^\\top\n\t\t\\\\\n\t\t= & \\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) \\Bigg( \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\boldsymbol{x}_i} \\cdot \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\boldsymbol{x}_i}^\\top + \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\boldsymbol{x}_i} \\cdot \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\boldsymbol{x}_i}^\\top \n\t\t\\\\\n\t\t& + \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\boldsymbol{x}_i} \\cdot \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\boldsymbol{x}_i}^\\top + \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\boldsymbol{x}_i} \\cdot \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\boldsymbol{x}_i}^\\top \\Bigg)\n\t\t\\\\\n\t\t= & \\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\boldsymbol{x}_i} \\cdot \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\boldsymbol{x}_i}^\\top\n\t\t+ \\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) \\Bigg( 
\\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\boldsymbol{x}_i} \\cdot \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\boldsymbol{x}_i}^\\top \n\t\t\\\\& + \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\boldsymbol{x}_i} \\cdot \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\boldsymbol{x}_i}^\\top + \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\boldsymbol{x}_i} \\cdot \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\boldsymbol{x}_i}^\\top \\Bigg) .\n\t\\end{align*}\n\t\n\tLet $\\boldsymbol{P}_t:=\\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) (\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\boldsymbol{x}_i) \\cdot (\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\boldsymbol{x}_i)^\\top$.\n\tLet $\\boldsymbol{Q}_t:=\\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) ( (\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\boldsymbol{x}_i) \\cdot (\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\boldsymbol{x}_i)^\\top \n\t+ (\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\boldsymbol{x}_i) \\cdot (\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\boldsymbol{x}_i)^\\top + (\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\boldsymbol{x}_i) \\cdot (\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\boldsymbol{x}_i)^\\top )$.\n\tThen, we have $\\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\hat{\\boldsymbol{B}}_t = \\boldsymbol{P}_t+\\boldsymbol{Q}_t$.\n\t\n\tFrom Assumption~\\ref{assumption:lambda^*_B_x_x_B_invertible}, we have that for 
any task $m \\in [M]$, $\\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) \\boldsymbol{B}^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\boldsymbol{B}$ is invertible.\n\tSince $\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}$ is also invertible, we have that $\\boldsymbol{P}_t$ is invertible. \n\tAccording to Lemmas~\\ref{lemma:concentration_B_hat_t} and \\ref{lemma:invertible_under_hat_B}, we have that $\\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\hat{\\boldsymbol{B}}_t$ is also invertible.\n\tThus, we can write $(\\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\hat{\\boldsymbol{B}}_t)^{-1}$ as follows.\n\t\\begin{align*}\n\t\t\\sbr{ \\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\hat{\\boldsymbol{B}}_t }^{-1} = & \\boldsymbol{P}_t^{-1}-\\sbr{\\boldsymbol{P}_t + \\boldsymbol{Q}_t}^{-1} \\boldsymbol{Q}_t \\boldsymbol{P}_t^{-1} \n\t\\end{align*}\n\t\n\t\n\tHence, for any task $m \\in [M]$ and $\\boldsymbol{y} \\in \\mathbb{R}^d$, we have\n\t\\begin{align}\n\t\t\\|\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{y}\\|^2_{\\sbr{ \\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\hat{\\boldsymbol{B}}_t }^{-1}} = &\n\t\t\\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{y}}^\\top \\sbr{ \\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\hat{\\boldsymbol{B}}_t }^{-1} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{y} \n\t\t\\nonumber\\\\\n\t\t= & \\underbrace{\\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{y}}^\\top \\boldsymbol{P}_t^{-1} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{y}}_{\\textup{Term 1}} - \\underbrace{\\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{y}}^\\top \\sbr{\\boldsymbol{P}_t + 
\\boldsymbol{Q}_t}^{-1} \\boldsymbol{Q}_t \\boldsymbol{P}_t^{-1} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{y}}_{\\textup{Term 2}} . \\label{eq:norm_B_hat_y_decompose}\n\t\\end{align}\n\t\n\t\n\tFrom Lemma~\\ref{lemma:concentration_B_hat_t}, we have\n\t\\begin{align*}\n\t\t\\nbr{\\hat{\\boldsymbol{B}}_{t,\\bot}^\\top \\boldsymbol{B}} \\leq \\min\\lbr{\\frac{1}{8 k \\cdot 2^t \\sqrt{1+\\zeta}},\\ \\frac{\\omega}{6 L_x^2}} \\leq \\min\\lbr{\\frac{1}{8 k \\cdot 2^t},\\ \\frac{\\omega}{6 L_x^2}} .\n\t\\end{align*}\n\t\n\t\n\tSince $\\boldsymbol{B}^\\top \\hat{\\boldsymbol{B}}_{t} \\hat{\\boldsymbol{B}}_{t}^\\top \\boldsymbol{B} + \\boldsymbol{B}^\\top \\hat{\\boldsymbol{B}}_{t,\\bot} \\hat{\\boldsymbol{B}}_{t,\\bot}^\\top \\boldsymbol{B} = \\boldsymbol{B}^\\top (\\hat{\\boldsymbol{B}}_{t} \\hat{\\boldsymbol{B}}_{t}^\\top + \\hat{\\boldsymbol{B}}_{t,\\bot} \\hat{\\boldsymbol{B}}_{t,\\bot}^\\top) \\boldsymbol{B} = \\boldsymbol{B}^\\top \\boldsymbol{B} =\\boldsymbol{I}_k$, we have $\\sigma^2_{\\min} ( \\hat{\\boldsymbol{B}}_{t}^\\top \\boldsymbol{B} )=1 - \\| \\hat{\\boldsymbol{B}}_{t,\\bot}^\\top \\boldsymbol{B} \\|^2$.\n\t\n\tThus, we have\n\t\\begin{align*}\n\t\t\\sigma_{\\min}(\\hat{\\boldsymbol{B}}_{t}^\\top \\boldsymbol{B}) = \\sqrt{1 - \\nbr{\\hat{\\boldsymbol{B}}_{t,\\bot}^\\top \\boldsymbol{B}}^2} \\geq \\sqrt{1 - \\min\\lbr{\\frac{1}{64 k^2 \\cdot 2^{2t}},\\ \\frac{\\omega^2}{36 L_x^4}} } > 0 ,\n\t\\end{align*}\n\twhich implies that $\\hat{\\boldsymbol{B}}_{t}^\\top \\boldsymbol{B}$ is invertible.\n\t\n\n\t\n\tNow, we first analyze Term 1 in Eq.~\\eqref{eq:norm_B_hat_y_decompose}.\n\t\n\t\\begin{align*}\n\t\t\\textup{Term 1} = & \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{y}}^\\top \\boldsymbol{P}_t^{-1} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{y}\n\t\t\\\\\n\t\t= & \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\boldsymbol{y} + \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top 
\\boldsymbol{y} }^\\top \\boldsymbol{P}_t^{-1} \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\boldsymbol{y} + \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\boldsymbol{y} }\n\t\t\\\\\n\t\t\\\\\n\t\t= & \\underbrace{\\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\boldsymbol{y}}^\\top \\boldsymbol{P}_t^{-1} \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\boldsymbol{y}}}_{\\textup{Term 1-1}} + \\underbrace{\\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\boldsymbol{y}}^\\top \\boldsymbol{P}_t^{-1} \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\boldsymbol{y} } }_{\\textup{Term 1-2}}\n\t\t\\\\\n\t\t& + \\underbrace{\\sbr{ \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\boldsymbol{y} }^\\top \\boldsymbol{P}_t^{-1} \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\boldsymbol{y}}}_{\\textup{Term 1-3}} + \\underbrace{\\sbr{ \\hat{\\boldsymbol{B}}_t^\\top B_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\boldsymbol{y} }^\\top \\boldsymbol{P}_t^{-1} \\sbr{ \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\boldsymbol{y} }}_{\\textup{Term 1-4}} .\n\t\\end{align*}\n\t\n\t\n\tIn the following, we bound Terms 1-1, 1-2, 1-3 and 1-4, respectively.\n\t\n\tFirst, we have\n\t\\begin{align*}\n\t\t\\textup{Term 1-1} = & \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\boldsymbol{y}}^\\top \\sbr{\\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\boldsymbol{x}_i} \\cdot \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\boldsymbol{x}_i}^\\top}^{-1} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\boldsymbol{y}\n\t\t\\\\\n\t\t= & 
\\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\boldsymbol{y}}^\\top \\sbr{ \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\sbr{\\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) \\boldsymbol{B}^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\boldsymbol{B}} \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}}^\\top }^{-1} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\boldsymbol{y}\n\t\t\\\\\n\t\t= & \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\boldsymbol{y}}^\\top \\sbr{\\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}}^{-1} }^\\top \\sbr{ \\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) \\boldsymbol{B}^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\boldsymbol{B} }^{-1} \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}}^{-1} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\boldsymbol{y}\n\t\t\\\\\n\t\t= & \\sbr{\\boldsymbol{B}^\\top \\boldsymbol{y}}^\\top \\sbr{ \\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) \\boldsymbol{B}^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\boldsymbol{B} }^{-1} \\boldsymbol{B}^\\top \\boldsymbol{y} \n\t\t\\\\\n\t\t= & \\nbr{\\boldsymbol{B}^\\top \\boldsymbol{y}}^2_{\\sbr{ \\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) \\boldsymbol{B}^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\boldsymbol{B} }^{-1}} .\n\t\\end{align*}\n\t\n\tWe note that since $\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\hat{\\boldsymbol{B}}_t + \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\hat{\\boldsymbol{B}}_t = \\hat{\\boldsymbol{B}}_t^\\top (B \\boldsymbol{B}^\\top + \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top) \\hat{\\boldsymbol{B}}_t = \\hat{\\boldsymbol{B}}_t^\\top \\hat{\\boldsymbol{B}}_t =\\boldsymbol{I}_k$, $\\sigma^2_{\\min}\\sbr{ \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} }=1 - \\nbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot}}^2$.\n\tIn 
addition, $\\nbr{\\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}}^{-1}}=\\frac{1}{ \\sigma_{\\min}(\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}) }=\\frac{1}{ \\sqrt{1 - \\nbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot}}^2} }$.\n\t\n\tThen, second, we have\n\t\\begin{align*}\n\t\t\\textup{Term 1-2} = & \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\boldsymbol{y}}^\\top \\sbr{\\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\boldsymbol{x}_i} \\cdot \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\boldsymbol{x}_i}^\\top}^{-1} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\boldsymbol{y}\n\t\t\\\\\n\t\t= & \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\boldsymbol{y}}^\\top \\sbr{ \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\sbr{\\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) \\boldsymbol{B}^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\boldsymbol{B}} \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}}^\\top }^{-1} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\boldsymbol{y}\n\t\t\\\\\n\t\t= & \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\boldsymbol{y}}^\\top \\sbr{\\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}}^{-1} }^\\top \\sbr{ \\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) \\boldsymbol{B}^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\boldsymbol{B} }^{-1} \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}}^{-1} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\boldsymbol{y}\n\t\t\\\\\n\t\t= & \\sbr{\\boldsymbol{B}^\\top \\boldsymbol{y}}^\\top \\sbr{ \\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) \\boldsymbol{B}^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\boldsymbol{B} }^{-1} \\sbr{\\hat{\\boldsymbol{B}}_t^\\top 
\\boldsymbol{B}}^{-1} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\boldsymbol{y} \n\t\t\\\\\n\t\t\\leq & 2 L_{x} \\cdot \\frac{1}{\\omega} \\cdot \\frac{1}{ \\sqrt{1-\\nbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot}}^2 } } \\cdot \\nbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot}} \\cdot 2 L_{x}\n\t\t\\\\\n\t\t\\leq & 4 L_{x}^2 \\cdot \\frac{1}{\\omega} \\cdot \\frac{1}{\\sqrt{ 1-\\sbr{\\frac{1}{8k \\cdot 2^t}}^2 }} \\cdot \\frac{1}{8k \\cdot 2^t}\n\t\t\\\\\n\t\t\\leq & 4 L_{x}^2 \\cdot \\frac{1}{\\omega} \\cdot \\frac{1}{\\sqrt{ 1-\\frac{3}{4} }} \\cdot \\frac{1}{8k \\cdot 2^t}\n\t\t\\\\\n\t\t= & \\frac{ L_{x}^2 }{ k \\omega \\cdot 2^t} .\n\t\\end{align*}\n\t\n\tThird, we have\n\t\\begin{align*}\n\t\t\\textup{Term 1-3} = & \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\boldsymbol{y}}^\\top \\sbr{\\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\boldsymbol{x}_i} \\cdot \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\boldsymbol{x}_i}^\\top}^{-1} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\boldsymbol{y}\n\t\t\\\\\n\t\t= & \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\boldsymbol{y}}^\\top \\sbr{ \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\sbr{\\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) \\boldsymbol{B}^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\boldsymbol{B}} \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}}^\\top }^{-1} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\boldsymbol{y}\n\t\t\\\\\n\t\t= & \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\boldsymbol{y}}^\\top \\sbr{\\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}}^{-1} }^\\top \\sbr{ \\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) 
\\boldsymbol{B}^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\boldsymbol{B} }^{-1} \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}}^{-1} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\boldsymbol{y}\n\t\t\\\\\n\t\t= & \\sbr{ \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}}^{-1} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\boldsymbol{y}}^\\top \\sbr{ \\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) \\boldsymbol{B}^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\boldsymbol{B} }^{-1} \\boldsymbol{B}^\\top \\boldsymbol{y} ,\n\t\t\\\\\n\t\t\\leq & \\frac{1}{ \\sqrt{1-\\nbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot}}^2 } } \\cdot \\nbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot}} \\cdot 2 L_{x} \\cdot \\frac{1}{\\omega} \\cdot 2 L_{x} \n\t\t\\\\\n\t\t\\leq & 4 L_{x}^2 \\cdot \\frac{1}{\\omega} \\cdot \\frac{1}{\\sqrt{ 1-\\sbr{\\frac{1}{8k \\cdot 2^t}}^2 }} \\cdot \\frac{1}{8k \\cdot 2^t}\n\t\t\\\\\n\t\t\\leq & 4 L_{x}^2 \\cdot \\frac{1}{\\omega} \\cdot \\frac{1}{\\sqrt{ 1-\\frac{3}{4} }} \\cdot \\frac{1}{8k \\cdot 2^t}\n\t\t\\\\\n\t\t= & \\frac{ L_{x}^2 }{k \\omega \\cdot 2^t} .\n\t\\end{align*}\n\t\n\tFinally, we have\n\t\\begin{align*}\n\t\t\\textup{Term 1-4} = & \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\boldsymbol{y}}^\\top \\sbr{\\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\boldsymbol{x}_i} \\cdot \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\boldsymbol{B}^\\top \\boldsymbol{x}_i}^\\top}^{-1} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\boldsymbol{y}\n\t\t\\\\\n\t\t= & \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\boldsymbol{y}}^\\top \\sbr{ \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} \\sbr{\\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) 
\\boldsymbol{B}^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\boldsymbol{B}} \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}}^\\top }^{-1} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\boldsymbol{y}\n\t\t\\\\\n\t\t= & \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\boldsymbol{y}}^\\top \\sbr{\\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}}^{-1} }^\\top \\sbr{ \\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) \\boldsymbol{B}^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\boldsymbol{B} }^{-1} \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}}^{-1} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\boldsymbol{y}\n\t\t\\\\\n\t\t= & \\sbr{ \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}}^{-1} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\boldsymbol{y}}^\\top \\sbr{ \\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) \\boldsymbol{B}^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\boldsymbol{B} }^{-1} \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}}^{-1} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} \\boldsymbol{B}_{\\bot}^\\top \\boldsymbol{y} ,\n\t\t\\\\\n\t\t\\leq & \\sbr{\\frac{1}{ \\sqrt{1-\\nbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot}}^2 } } \\cdot \\nbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot}} \\cdot 2 L_{x} }^2 \\cdot \\frac{1}{\\omega}\n\t\t\\\\\n\t\t\\leq & \\sbr{ 2 L_{x} \\cdot \\frac{1}{\\sqrt{ 1-\\sbr{\\frac{1}{8k \\cdot 2^t}}^2 }} \\cdot \\frac{1}{8k \\cdot 2^t} }^2 \\cdot \\frac{1}{\\omega}\n\t\t\\\\\n\t\t\\leq & \\sbr{ 2 L_{x} \\cdot \\frac{1}{\\sqrt{ 1-\\frac{3}{4} }} \\cdot \\frac{1}{8k \\cdot 2^t} }^2 \\cdot \\frac{1}{\\omega}\n\t\t\\\\\n\t\t= & \\frac{L_{x}^2}{4k^2 \\omega \\cdot 2^{2t}} .\n\t\\end{align*}\n\t\n\tThus, we have\n\t\\begin{align}\n\t\t\\textup{Term 1} = & \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{y}}^\\top 
\\boldsymbol{P}_t^{-1} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{y}\n\t\t\\nonumber\\\\\n\t\t\\leq & \\nbr{\\boldsymbol{B}^\\top \\boldsymbol{y}}^2_{\\sbr{ \\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) \\boldsymbol{B}^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\boldsymbol{B} }^{-1}} + \\frac{ 2L_{x}^2 }{k \\omega \\cdot 2^t} + \\frac{L_{x}^2}{4k^2 \\omega \\cdot 2^{2t}} \n\t\t\\nonumber\\\\\n\t\t\\leq & \\nbr{\\boldsymbol{B}^\\top \\boldsymbol{y}}^2_{\\sbr{ \\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) \\boldsymbol{B}^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\boldsymbol{B} }^{-1}} + \\frac{ 3L_{x}^2 }{k \\omega \\cdot 2^t} . \\label{eq:term1_ub}\n\t\\end{align}\n\t\n\t\n\tNext, we investigate Term 2. In order to bound Term 2, we first bound the minimum singular value of $\\boldsymbol{P}_t$ and the maximum singular value of $\\boldsymbol{Q}_t$.\n\t\n\t\n\tSince $\\boldsymbol{P}_t=\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} (\\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) \\boldsymbol{B}^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\boldsymbol{B}) (\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B})^\\top$, we have\n\t\\begin{align*}\n\t\t\\sigma_{\\min}(\\boldsymbol{P}_t) \\geq & \\sigma^2_{\\min}(\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}) \\cdot \\omega\n\t\t\\\\\n\t\t= & \\sbr{1-\\|\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot}\\|^2 } \\omega \n\t\t\\\\\n\t\t\\geq & \\sbr{ 1-\\frac{1}{8^2 k^2 \\cdot 2^{2t}} } \\omega \n\t\t\\\\\n\t\t\\geq & \\frac{3}{4} \\omega .\n\t\\end{align*}\n\t\n\n\tSince $\\boldsymbol{Q}_t=\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B} (\\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) \\boldsymbol{B}^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\boldsymbol{B}_{\\bot} ) (\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot})^\\top \n\t+ \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} (\\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) \\boldsymbol{B}^\\top_{\\bot} \\boldsymbol{x}_i 
\\boldsymbol{x}_i^\\top \\boldsymbol{B} ) (\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B})^\\top + \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} (\\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) \\boldsymbol{B}^\\top_{\\bot} \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\boldsymbol{B}_{\\bot} ) (\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot})^\\top $, we have \n\t\\begin{align*}\n\t\t\\sigma_{\\max}(\\boldsymbol{Q}_t) \\leq & 3 L_{x}^2 \\nbr{ \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{B}_{\\bot} } \n\t\t\\\\\n\t\t\\leq & \\min \\lbr{ \\frac{3 L_{x}^2}{8k \\cdot 2^t} , \\ \\frac{ \\omega }{2} } .\n\t\\end{align*}\n\t\n\tThen, we can bound Term 2 as \n\t\\begin{align}\n\t\t\\textup{Term 2} = & \\sbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{y}}^\\top \\sbr{\\boldsymbol{P}_t + \\boldsymbol{Q}_t}^{-1} \\boldsymbol{Q}_t \\boldsymbol{P}_t^{-1} \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{y}\n\t\t\\nonumber\\\\\n\t\t\\leq & \\nbr{\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{y}}^2 \\cdot \\nbr{\\sbr{\\boldsymbol{P}_t + \\boldsymbol{Q}_t}^{-1}} \\cdot \\nbr{\\boldsymbol{Q}_t} \\cdot \\nbr{\\boldsymbol{P}_t^{-1}}\n\t\t\\nonumber\\\\\n\t\t\\leq & \\frac{ 4 L_{x}^2 \\cdot \\sigma_{\\max}\\sbr{\\boldsymbol{Q}_t} }{ \\sigma_{\\min}\\sbr{\\boldsymbol{P}_t + \\boldsymbol{Q}_t} \\cdot \\sigma_{\\min}\\sbr{\\boldsymbol{P}_t} }\n\t\t\\nonumber\\\\\n\t\t\\leq & \\frac{ 4 L_{x}^2 \\cdot \\sigma_{\\max}\\sbr{\\boldsymbol{Q}_t} }{ \\sbr{\\sigma_{\\min}\\sbr{\\boldsymbol{P}_t} - \\sigma_{\\max}\\sbr{\\boldsymbol{Q}_t}} \\cdot \\sigma_{\\min}\\sbr{\\boldsymbol{P}_t} }\n\t\t\\nonumber\\\\\n\t\t\\leq & \\frac{ 4 L_{x}^2 \\cdot \\frac{3 L_{x}^2}{8k \\cdot 2^t} }{ \\sbr{ \\frac{3}{4} \\omega - \\frac{ 1 }{2} \\omega } \\cdot \\frac{3}{4} \\omega }\n\t\t\\nonumber\\\\\n\t\t= & \\frac{ 8 L_{x}^4 }{ k \\omega^2 \\cdot 2^t } . 
\\label{eq:term2_ub}\n\t\\end{align}\n\t\n\t\n\tPlugging Eqs.~\\eqref{eq:term1_ub} and \\eqref{eq:term2_ub} into Eq.~\\eqref{eq:norm_B_hat_y_decompose}, we have\n\t\\begin{align*}\n\t\t\\|\\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{y}\\|^2_{\\sbr{ \\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\hat{\\boldsymbol{B}}_t }^{-1}} \\leq & \\nbr{\\boldsymbol{B}^\\top \\boldsymbol{y}}^2_{\\sbr{ \\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) \\boldsymbol{B}^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\boldsymbol{B} }^{-1}} + \\frac{ 3L_{x}^2 }{k \\omega \\cdot 2^t} + \\frac{ 8 L_{x}^4 }{ k \\omega^2 \\cdot 2^t }\n\t\t\\\\\n\t\t\\leq & \\nbr{\\boldsymbol{B}^\\top \\boldsymbol{y}}^2_{\\sbr{ \\sum_{i=1}^{n} \\lambda^*_m(\\boldsymbol{x}_i) \\boldsymbol{B}^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\boldsymbol{B} }^{-1}} + \\frac{ 11 L_{x}^4 }{ k \\omega^2 \\cdot 2^t } .\n\t\\end{align*}\n\t\n\\end{proof}\n\nBelow we prove the sample complexity for algorithm $\\mathtt{DouExpDes}$ (Theorem~\\ref{thm:bai_ub}).\n\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:bai_ub}]\n\tAccording to Lemmas~\\ref{lemma:Z_t_est_error} and \\ref{lemma:concentration_variance_bai}, we have $\\Pr [\\mathcal{E} \\cap \\mathcal{F}] \\geq 1-\\delta$. Below, supposing that event $\\mathcal{E} \\cap \\mathcal{F}$ holds, we prove the correctness and sample complexity.\n\t\n\tWe first prove the correctness.\n\t\n\tFor any task $m \\in [M]$, let $t^{*}_m$ denote the first phase in which $|\\hat{\\mathcal{X}}_{t,m}|=1$. Let $t^*=\\max_{m \\in [M]} t^{*}_m$ denote the total number of phases used.\n\tFor any task $m \\in [M]$, let $\\Delta_{m,\\min}:=\\min_{\\boldsymbol{x} \\in \\mathcal{X} \\setminus \\{\\boldsymbol{x}^{*}_{m}\\}} (\\boldsymbol{x}^{*}_{m} - \\boldsymbol{x})^\\top \\boldsymbol{\\theta}_m$ denote the minimum reward gap for task $m$. 
Let $\\Delta_{\\min}:=\\min_{m \\in [M]} \\Delta_{m,\\min}$ denote the minimum reward gap among all tasks.\n\t\n\tFrom Lemma~\\ref{lemma:cX_subseteq_cS}, we can obtain the following facts: (i) For any task $m \\in [M]$, the optimal arm $\\boldsymbol{x}^{*}_{m}$ will never be eliminated. (ii) $t^{*}_m \\leq \\lceil \\log(\\frac{4}{\\Delta_{m,\\min}}) \\rceil +1$, and thus, $t^*\\leq \\lceil \\log(\\frac{4}{\\Delta_{\\min}}) \\rceil +1$. Therefore, after at most $\\lceil \\log(\\frac{4}{\\Delta_{\\min}}) \\rceil +1$ phases, algorithm $\\mathtt{DouExpDes}$ will return the optimal arms $\\boldsymbol{x}^{*}_{m}$ for all tasks $m \\in [M]$.\n\t\n\tNow we prove the sample complexity. \n\tIn the following, we first prove that the sample complexity of algorithm $\\mathtt{DouExpDes}$ is bounded by $\\tilde{O} ( \\frac{M k}{\\Delta_{\\min}^2} \\log(\\delta^{-1}) + (\\rho^E)^2 d k^4 L_{x}^2 L_{w}^2 D \\log^4(\\delta^{-1}) )$.\n\t\n\tRecall that $p=\\frac{180d}{\\zeta^2}$ and $\\zeta=\\frac{1}{10}$. 
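As a remark (a small supplementary calculation, made explicit here because it is used implicitly when the per-phase sample counts are summed below), the phase summations rely on the elementary geometric-series bound: since at most $\\lceil \\log(\\frac{4}{\\Delta_{\\min}}) \\rceil +1$ phases are run,\n\t\\begin{align*}\n\t\t\\sum_{t=1}^{\\lceil \\log(\\frac{4}{\\Delta_{\\min}}) \\rceil +1} 2^{2t} \\leq \\frac{4}{3} \\cdot 2^{2 \\sbr{ \\lceil \\log(\\frac{4}{\\Delta_{\\min}}) \\rceil +1 }} \\leq \\frac{4}{3} \\sbr{ \\frac{16}{\\Delta_{\\min}} }^2 = O \\sbr{ \\Delta_{\\min}^{-2} } ,\n\t\\end{align*}\n\twhich is how sums of the form $\\sum_t 2^{2t}$ collapse to $\\Delta_{\\min}^{-2}$ in the bounds below.\n\t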
Then, summing the number of samples used in subroutines $\\mathtt{FeatRecover}$ and $\\mathtt{EliLowRep}$ in all phases (Line~\\ref{line:bai_stage2_sample} in Algorithm~\\ref{alg:feat_recover}, Line~\\ref{line:bai_stage3_sample} in Algorithm~\\ref{alg:elim_low_rep}), we have that the total number of samples is\n\t\\begin{align}\n\t\t& \\sum_{t=1}^{t^*} p M T_t + \\sum_{m=1}^{M} \\sum_{t=1}^{t^{*}_m} N_{t,m}\n\t\t\\nonumber\\\\\n\t\t= & \\sum_{t=1}^{t^*} p \\cdot O \\Bigg( \\sbr{1+\\zeta}^3 (\\rho^E)^2 k^4 L_{x}^2 L_{\\theta}^2 \\max\\lbr{2^{2t},\\ \\frac{L_{x}^4}{\\omega^2}} \\log^2 \\sbr{\\frac{p}{\\delta_t}} \\cdot \\nonumber\\\\& \\hspace*{5em} \\log^2 \\sbr{ \\sbr{1+\\zeta} \\rho^E k p L_{x} L_{\\theta} \\max\\lbr{2^t,\\ \\frac{L_{x}}{\\omega}} \\frac{1}{\\delta_t} \\log\\sbr{\\frac{p}{\\delta_t}} } \\Bigg)\n\t\t\\nonumber\\\\\n\t\t& + \\sum_{m=1}^{M} \\sum_{t=1}^{t^{*}_m} O\\sbr{ 2^{2t} (1+\\zeta) \\rho^{G}_{t,m} \\log \\sbr{\\frac{n^2 M}{\\delta_t}} + \\frac{k}{\\zeta^2} }\n\t\t\\nonumber\\\\\n\t\t= & \\sum_{t=1}^{O(\\log(\\Delta_{\\min}^{-1}))} O \\Bigg( (\\rho^E)^2 k^4 d L_{x}^2 L_{\\theta}^2 \\max\\lbr{2^{2t},\\ \\frac{L_{x}^4}{\\omega^2}} \\log^2 \\sbr{\\frac{d \\log(\\Delta_{\\min}^{-1})}{\\delta}} \\cdot \\nonumber\\\\& \\hspace*{5em} \\log^2 \\sbr{ \\rho^E k d L_{x} L_{\\theta} \\max\\lbr{\\Delta_{\\min}^{-1},\\ \\frac{L_{x}}{\\omega}} \\frac{\\log(\\Delta_{\\min}^{-1})}{\\delta} \\log\\sbr{\\frac{d \\log(\\Delta_{\\min}^{-1})}{\\delta}} } \\Bigg)\n\t\t\\nonumber\\\\\n\t\t& + \\sum_{m=1}^{M} \\sum_{t=1}^{O(\\log(\\Delta_{m,\\min}^{-1}))} O\\sbr{ 2^{2t} \\rho^{G}_{t,m} \\log \\sbr{\\frac{n^2 M \\log(\\Delta_{m,\\min}^{-1}) }{\\delta}} + k}\n\t\t\\label{eq:bai_sample_complexity} \\\\\n\t\t= & O \\Bigg( (\\rho^E)^2 k^4 d L_{x}^2 L_{\\theta}^2 \\max\\lbr{ \\Delta_{\\min}^{-2} ,\\ \\frac{ L_{x}^4 \\log(\\Delta_{\\min}^{-1}) }{\\omega^2} } \\log^2 \\sbr{\\frac{d \\log(\\Delta_{\\min}^{-1})}{\\delta}} \\cdot \\nonumber\\\\& \\hspace*{5em} \\log^2 \\sbr{ 
\\rho^E k d L_{x} L_{\\theta} \\max\\lbr{\\Delta_{\\min}^{-1},\\ \\frac{L_{x}}{\\omega}} \\frac{\\log(\\Delta_{\\min}^{-1})}{\\delta} \\log\\sbr{\\frac{d \\log(\\Delta_{\\min}^{-1})}{\\delta}} } \\Bigg)\n\t\t\\nonumber\\\\\n\t\t& + O\\sbr{ M k \\Delta_{\\min}^{-2} \\log \\sbr{\\frac{n^2 M \\log(\\Delta_{\\min}^{-1})}{\\delta}} } . \\nonumber\n\t\\end{align}\n\\end{proof}\n\n\nNext, we prove that the sample complexity of algorithm $\\mathtt{DouExpDes}$ is bounded by \n$\n\\tilde{O} ( \\sum_{m=1}^{M} \\min_{\\boldsymbol{\\lambda} \\in \\triangle_{\\mathcal{X}}} \\max_{\\boldsymbol{x} \\in \\mathcal{X} \\setminus \\{\\boldsymbol{x}^{*}_{m}\\}} \\frac{\\| \\boldsymbol{B}^\\top (\\boldsymbol{x}^{*}_{m} - \\boldsymbol{x}) \\|^2_{\\boldsymbol{A}(\\boldsymbol{\\lambda})^{-1}} }{ ((\\boldsymbol{x}^{*}_{m} - \\boldsymbol{x})^\\top \\boldsymbol{\\theta}_m)^2 } \\log (\\delta^{-1}) + (\\rho^E)^2 d k^4 L_{x}^2 L_{w}^2 D \\log^4 (\\delta^{-1}) ) .\n$\n\n\nFrom Eq.~\\eqref{eq:bai_sample_complexity}, we have that with probability $1-\\delta$, the number of samples used by algorithm $\\mathtt{DouExpDes}$ is bounded by\n\\begin{align}\n\t\\tilde{O} \\Bigg( \\sum_{t=1}^{\\log(\\Delta_{\\min}^{-1})} (\\rho^E)^2 k^4 d L_{x}^2 L_{\\theta}^2 \\max\\lbr{2^{2t},\\ \\frac{L_{x}^4}{\\omega^2}} + \\sum_{m=1}^{M} \\sum_{t=1}^{\\log(\\Delta_{m,\\min}^{-1})} 2^{2t} \\rho^{G}_{t,m} + Mk \\Bigg) .\t\\label{eq:bai_sample_complexity_tilde}\n\\end{align}\n\n\n\n\n\nFor any $\\mathcal{Z} \\subseteq \\mathcal{X}$, $\\mathcal{Y}(\\mathcal{Z}):=\\{ \\boldsymbol{x}-\\boldsymbol{x}':\\ \\forall \\boldsymbol{x},\\boldsymbol{x}' \\in \\mathcal{Z},\\ \\boldsymbol{x} \\neq \\boldsymbol{x}' \\}$ and $\\mathcal{Y}^*_m(\\mathcal{Z}):=\\{ \\boldsymbol{x}^{*}_{m}-\\boldsymbol{x}:\\ \\forall \\boldsymbol{x} \\in \\mathcal{Z},\\ \\boldsymbol{x} \\neq \\boldsymbol{x}^{*}_{m} \\}$.\nThen, we have that for any task $m \\in [M]$ and phase $t\\geq 2$,\n\n\\begin{align}\n\t\\sbr{2^t}^2 \\rho^{G}_{t,m} \n\t= & \\sbr{2^t}^2 
\\min_{\\boldsymbol{\\lambda} \\in \\triangle_{\\mathcal{X}}} \\max_{\\boldsymbol{y} \\in \\mathcal{Y}(\\hat{\\mathcal{X}}_{t,m})} \\| \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{y} \\|^2_{\\sbr{\\sum_{i=1}^n \\lambda(\\boldsymbol{x}_i) \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\hat{\\boldsymbol{B}}_t}^{-1}}\n\t\\nonumber\\\\\n\t\\leq & \\sbr{2^t}^2 \\max_{\\boldsymbol{y} \\in \\mathcal{Y}(\\hat{\\mathcal{X}}_{t,m})} \\| \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{y} \\|^2_{\\sbr{\\sum_{i=1}^n \\lambda^*_m(\\boldsymbol{x}_i) \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\hat{\\boldsymbol{B}}_t}^{-1}}\n\t\\nonumber\\\\\n\t\\overset{\\textup{(a)}}{\\leq} & \\sbr{2^t}^2 \\max_{\\boldsymbol{y} \\in \\mathcal{Y}(\\mathcal{Z}_{t,m})} \\| \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{y} \\|^2_{\\sbr{\\sum_{i=1}^n \\lambda^*_m(\\boldsymbol{x}_i) \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\hat{\\boldsymbol{B}}_t}^{-1}}\n\t\\nonumber\\\\\n\t\\overset{\\textup{(b)}}{\\leq} & 4 \\sbr{2^t}^2 \\max_{\\boldsymbol{y} \\in \\mathcal{Y}^*_m(\\mathcal{Z}_{t,m})} \\| \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{y} \\|^2_{\\sbr{\\sum_{i=1}^n \\lambda^*_m(\\boldsymbol{x}_i) \\hat{\\boldsymbol{B}}_t^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\hat{\\boldsymbol{B}}_t}^{-1}}\n\t\\nonumber\\\\\n\t\\overset{\\textup{(c)}}{\\leq} & 4 \\sbr{2^t}^2 \\sbr{\\max_{\\boldsymbol{y} \\in \\mathcal{Y}^*_m(\\mathcal{Z}_{t,m})} \\| \\boldsymbol{B}^\\top \\boldsymbol{y} \\|^2_{\\sbr{\\sum_{i=1}^n \\lambda^*_m(\\boldsymbol{x}_i) \\boldsymbol{B}^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\boldsymbol{B}}^{-1}} + \\frac{ 11 L_{x}^4 }{ \\omega^2 k \\cdot 2^t } }\n\t\\nonumber\\\\\n\t= & 4 \\sbr{ \\frac{16 \\max_{\\boldsymbol{y} \\in \\mathcal{Y}^*_m(\\mathcal{Z}_{t,m})} \\| \\boldsymbol{B}^\\top \\boldsymbol{y} \\|^2_{\\sbr{\\sum_{i=1}^n \\lambda^*_m(\\boldsymbol{x}_i) \\boldsymbol{B}^\\top \\boldsymbol{x}_i 
\\boldsymbol{x}_i^\\top \\boldsymbol{B}}^{-1}}}{\\sbr{4 \\cdot 2^{-t}}^2} + \\frac{ 11 L_{x}^4 \\cdot 2^t }{ \\omega^2 k } }\n\t\\nonumber\\\\\n\t\\overset{\\textup{(d)}}{\\leq} & 4 \\sbr{ 16 \\max_{\\boldsymbol{y} \\in \\mathcal{Y}^*_m(\\mathcal{Z}_{t,m})} \\frac{\\| \\boldsymbol{B}^\\top \\boldsymbol{y} \\|^2_{\\sbr{\\sum_{i=1}^n \\lambda^*_m(\\boldsymbol{x}_i) \\boldsymbol{B}^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\boldsymbol{B}}^{-1}}}{ \\sbr{\\boldsymbol{y}^\\top \\boldsymbol{\\theta}_m}^2} + \\frac{ 11 L_{x}^4 \\cdot 2^t }{ \\omega^2 k } }\n\t\\nonumber\\\\\n\t\\leq & 4 \\sbr{ 16 \\max_{\\boldsymbol{y} \\in \\mathcal{Y}^*_m(\\mathcal{X})} \\frac{\\| \\boldsymbol{B}^\\top \\boldsymbol{y} \\|^2_{\\sbr{\\sum_{i=1}^n \\lambda^*_m(\\boldsymbol{x}_i) \\boldsymbol{B}^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\boldsymbol{B}}^{-1}}}{ \\sbr{\\boldsymbol{y}^\\top \\boldsymbol{\\theta}_m}^2} + \\frac{ 11 L_{x}^4 \\cdot 2^t }{ \\omega^2 k } }\n\t\\nonumber\\\\\n\t\\overset{\\textup{(e)}}{=} & 4 \\sbr{ 16 \\min_{\\boldsymbol{\\lambda} \\in \\triangle_{\\mathcal{X}}} \\max_{\\boldsymbol{y} \\in \\mathcal{Y}^*_m(\\mathcal{X})} \\frac{\\| \\boldsymbol{B}^\\top \\boldsymbol{y} \\|^2_{\\sbr{\\sum_{i=1}^n \\lambda(\\boldsymbol{x}_i) \\boldsymbol{B}^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\boldsymbol{B}}^{-1}}}{ \\sbr{\\boldsymbol{y}^\\top \\boldsymbol{\\theta}_m}^2} + \\frac{ 11 L_{x}^4 \\cdot 2^t }{ \\omega^2 k } } . \\label{eq:connection_ub_lb}\n\\end{align}\nHere inequality (a) is due to $\\hat{\\mathcal{X}}_{t,m} \\subseteq \\mathcal{Z}_{t,m}$ (from Lemma~\\ref{lemma:cX_subseteq_cS}). Inequality (b) uses the fact that for any $\\boldsymbol{y}=\\boldsymbol{x}_i-\\boldsymbol{x}_j \\in \\mathcal{Y}(\\mathcal{Z}_{t,m})$, we can write $\\boldsymbol{y}=(\\boldsymbol{x}^{*}_{m}-\\boldsymbol{x}_j)-(\\boldsymbol{x}^{*}_{m}-\\boldsymbol{x}_i)$, and the triangle inequality. 
Inequality (c) follows from Lemma~\\ref{lemma:connection_ub_lb}, and inequality (d) is due to the fact that for any $\\boldsymbol{y} \\in \\mathcal{Y}^*_m(\\mathcal{Z}_{t,m})$, $\\boldsymbol{y}^\\top \\boldsymbol{\\theta}_m \\leq 4 \\cdot 2^{-t}$ (from the definition of $\\mathcal{Z}_{t,m}$). Equality (e) comes from the definition of $\\boldsymbol{\\lambda}^*_m$.\n\nLet $L:=\\log^2 (\\frac{d \\log(\\Delta_{\\min}^{-1})}{\\delta}) \\cdot \\log^2 (\\rho^E k d L_{x} L_{\\theta} \\max \\{\\Delta_{\\min}^{-1},\\ \\frac{L_{x}}{\\omega}\\} \\frac{\\log(\\Delta_{\\min}^{-1})}{\\delta} \\log (\\frac{d \\log(\\Delta_{\\min}^{-1})}{\\delta}) ) $.\nPlugging Eq.~\\eqref{eq:connection_ub_lb} into Eq.~\\eqref{eq:bai_sample_complexity_tilde}, we have that with probability $1-\\delta$, the number of samples used by algorithm $\\mathtt{DouExpDes}$ is bounded by\n\\begin{align*}\n\t& O \\sbr{ \\sum_{m=1}^{M} \\!\\!\\! \\sum_{t=1}^{\\log(\\Delta_{m,\\min}^{-1})} \\!\\!\\!\\!\\!\\! 2^{2t} \\rho^{G}_{t,m} \\log \\sbr{\\frac{n^2 M \\log(\\Delta_{m,\\min}^{-1}) }{\\delta}} + Mk \\log(\\Delta_{\\min}^{-1}) + \\sum_{t=1}^{\\log(\\Delta_{\\min}^{-1})} (\\rho^E)^2 k^4 d L_{x}^2 L_{\\theta}^2 \\max\\lbr{2^{2t},\\ \\frac{L_{x}^4}{\\omega^2}} L } \n\t\\\\\n\t= & O \\Bigg( \\sum_{m=1}^{M} \\sum_{t=2}^{\\log(\\Delta_{m,\\min}^{-1})} \\Bigg( \\min_{\\boldsymbol{\\lambda} \\in \\triangle_{\\mathcal{X}}} \\max_{\\boldsymbol{y} \\in \\mathcal{Y}^*_m(\\mathcal{X})} \\frac{\\| \\boldsymbol{B}^\\top \\boldsymbol{y} \\|^2_{\\sbr{\\sum_{i=1}^n \\lambda(\\boldsymbol{x}_i) \\boldsymbol{B}^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\boldsymbol{B}}^{-1}}}{ \\sbr{\\boldsymbol{y}^\\top \\boldsymbol{\\theta}_m}^2} + \\frac{ L_{x}^4 \\cdot 2^t }{ \\omega^2 k } \\Bigg) \\cdot \\log \\sbr{\\frac{n^2 M \\log(\\Delta_{m,\\min}^{-1}) }{\\delta}} \\\\& + \\sum_{m=1}^{M} \\rho^*_{1,m} \\log \\sbr{\\frac{n^2 M \\log(\\Delta_{m,\\min}^{-1}) }{\\delta}} + Mk \\log(\\Delta_{\\min}^{-1}) + 
\\sum_{t=1}^{\\log(\\Delta_{\\min}^{-1})} (\\rho^E)^2 k^4 d L_{x}^2 L_{\\theta}^2 \\max\\lbr{2^{2t},\\ \\frac{L_{x}^4}{\\omega^2}} \\cdot L \\Bigg) \n\t\\\\\n\t\\overset{\\textup{(a)}}{=} & O \\Bigg( \\sum_{m=1}^{M} \\min_{\\boldsymbol{\\lambda} \\in \\triangle_{\\mathcal{X}}} \\max_{\\boldsymbol{y} \\in \\mathcal{Y}^*_m(\\mathcal{X})} \\frac{\\| \\boldsymbol{B}^\\top \\boldsymbol{y} \\|^2_{\\sbr{\\sum_{i=1}^n \\lambda(\\boldsymbol{x}_i) \\boldsymbol{B}^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\boldsymbol{B}}^{-1}}}{ \\sbr{\\boldsymbol{y}^\\top \\boldsymbol{\\theta}_m}^2} \\cdot \\log \\sbr{\\frac{n^2 M \\log(\\Delta_{m,\\min}^{-1}) }{\\delta}} \\cdot \\log(\\Delta_{m,\\min}^{-1}) \\\\& + \\frac{ M L_{x}^4 }{ \\omega^2 k \\cdot \\Delta_{\\min} } \\cdot \\log \\sbr{\\frac{n^2 M \\log(\\Delta_{\\min}^{-1}) }{\\delta}} + Mk \\cdot \\log \\sbr{\\frac{n^2 M \\log(\\Delta_{\\min}^{-1}) }{\\delta}} \\cdot \\log(\\Delta_{\\min}^{-1}) \\\\& \\qquad + (\\rho^E)^2 k^4 d L_{x}^2 L_{\\theta}^2 \\max\\lbr{ \\Delta_{\\min}^{-2} ,\\ \\frac{L_{x}^4 \\cdot \\log(\\Delta_{\\min}^{-1})}{\\omega^2}} \\cdot L \\Bigg) ,\n\\end{align*}\nwhere equality (a) uses Lemma~\\ref{lemma:rho_t_leq_4k}.\n\nWhen $L_x=\\omega=\\Theta(1)$, we have that with probability $1-\\delta$, the sample complexity of algorithm $\\mathtt{DouExpDes}$ is bounded by\n\\begin{align*}\n\t\\tilde{O} \\Bigg( \\sum_{m=1}^{M} \\min_{\\boldsymbol{\\lambda} \\in \\triangle_{\\mathcal{X}}} \\max_{\\boldsymbol{y} \\in \\mathcal{Y}^*_m(\\mathcal{X})} \\frac{\\| \\boldsymbol{B}^\\top \\boldsymbol{y} \\|^2_{\\sbr{\\sum_{i=1}^n \\lambda(\\boldsymbol{x}_i) \\boldsymbol{B}^\\top \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top \\boldsymbol{B}}^{-1}}}{ \\sbr{\\boldsymbol{y}^\\top \\boldsymbol{\\theta}_m}^2} \\log\\sbr{\\frac{1}{\\delta}} + (\\rho^E)^2 k^4 d L_{x}^2 L_{\\theta}^2 \\max\\lbr{ \\Delta_{\\min}^{-2} ,\\ \\frac{L_{x}^4}{\\omega^2}} \\log^4\\sbr{\\frac{1}{\\delta}} \\Bigg) .\n\\end{align*}\n\n\n\\section{Proofs for 
Algorithm~$\\mathtt{C \\hyphen DouExpDes}$}\n\nIn this section, we present the proofs for Algorithm~$\\mathtt{C \\hyphen DouExpDes}$.\n\n\\subsection{Context Distribution Estimation and Sample Batch Planning} \\label{apx:bpi_sample_batch_planning}\n\nDefine $\\lambda_{\\mathcal{D}}^{E}$ and $\\rho_{\\mathcal{D}}^{E}$ as the optimal solution and the optimal value of the following E-optimal design optimization:\n\\begin{align}\n\t\\min_{\\boldsymbol{\\lambda} \\in \\triangle_{\\mathcal{A}}} \\nbr{ \\sbr{ \\sum_{a \\in \\mathcal{A}} \\lambda(a) \\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{\\boldsymbol{\\phi}(s,a) \\boldsymbol{\\phi}(s,a)^\\top} }^{-1} } . \\label{eq:E_optimal_design}\n\\end{align}\n\n\\begin{lemma}\\label{lemma:bound_rho_E_cD}\n\tIt holds that\n\t\\begin{align*}\n\t\t\\rho_{\\mathcal{D}}^{E} \\leq \\frac{1}{\\nu} .\n\t\\end{align*}\n\\end{lemma}\n\\begin{proof}[Proof of Lemma~\\ref{lemma:bound_rho_E_cD}]\n\tThe optimization in Eq.~\\eqref{eq:E_optimal_design} is equivalent to maximizing the minimum singular value of the matrix $\\sum_{a \\in \\mathcal{A}} \\lambda(a) \\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{\\boldsymbol{\\phi}(s,a) \\boldsymbol{\\phi}(s,a)^\\top}$.\n\t\n\tThus, $\\lambda_{\\mathcal{D}}^{E}$ is the optimal solution of the following optimization:\n\t\\begin{align*}\n\t\t\\max_{\\boldsymbol{\\lambda} \\in \\triangle_{\\mathcal{A}}} \\sigma_{\\min} \\sbr{ \\sum_{a \\in \\mathcal{A}} \\lambda(a) \\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{\\boldsymbol{\\phi}(s,a) \\boldsymbol{\\phi}(s,a)^\\top} } .\n\t\\end{align*}\n\t\n\tUsing Assumption~\\ref{assumption:bpi_rho_E_D_is_finite}, we have \n\t\\begin{align*}\n\t\t\\sigma_{\\min} \\sbr{ \\sum_{a \\in \\mathcal{A}} \\lambda_{\\mathcal{D}}^{E}(a) \\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{\\boldsymbol{\\phi}(s,a) \\boldsymbol{\\phi}(s,a)^\\top} } \\geq \\nu .\n\t\\end{align*}\n\t\n\tThen, we have\n\t\\begin{align*}\n\t\t\\rho_{\\mathcal{D}}^{E} = & \\nbr{ \\sbr{ \\sum_{a \\in \\mathcal{A}} 
\\lambda_{\\mathcal{D}}^{E}(a) \\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{\\boldsymbol{\\phi}(s,a) \\boldsymbol{\\phi}(s,a)^\\top} }^{-1} }\n\t\t\\\\\n\t\t= & \\frac{1}{\\sigma_{\\min} \\sbr{ \\sum_{a \\in \\mathcal{A}} \\lambda_{\\mathcal{D}}^{E}(a) \\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{\\boldsymbol{\\phi}(s,a) \\boldsymbol{\\phi}(s,a)^\\top} }}\n\t\t\\\\\n\t\t\\leq & \\frac{1}{\\nu} .\n\t\\end{align*}\n\\end{proof}\n\nDefine event\n\\begin{align*}\n\t\\mathcal{K}:= \\lbr{ \\nbr{ \\mathbb{E}_{s \\sim \\hat{\\mathcal{D}}} \\mbr{ \\boldsymbol{\\phi}(s,a) \\boldsymbol{\\phi}(s,a)^\\top } - \\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{ \\boldsymbol{\\phi}(s,a) \\boldsymbol{\\phi}(s,a)^\\top } } \\leq \\frac{ 8 L_{\\phi}^2 \\log \\sbr{\\frac{20d |\\mathcal{A}|}{\\delta}} }{ \\sqrt{T_0} } , \\ \\forall a \\in \\mathcal{A} } .\n\\end{align*}\n\n\\begin{lemma} \\label{lemma:number_of_samples_T0}\n\tIt holds that\n\t\\begin{align*}\n\t\t\\Pr \\mbr{\\mathcal{K}} \\geq 1-\\frac{\\delta}{5} .\n\t\\end{align*}\n\tFurthermore, if event $\\mathcal{K}$ holds and \n\t\\begin{align*}\n\t\tT_0 = \\left \\lceil \\frac{32^2 (1+\\zeta)^2 L_{\\phi}^4}{\\nu^2} \\log^2 \\sbr{\\frac{20d |\\mathcal{A}|}{\\delta}} \\right \\rceil , \n\t\\end{align*}\n\twe have that for any $a \\in \\mathcal{A}$,\n\t\\begin{align*}\n\t\t\\nbr{ \\mathbb{E}_{s \\sim \\hat{\\mathcal{D}}} \\mbr{ \\boldsymbol{\\phi}(s,a) \\boldsymbol{\\phi}(s,a)^\\top } - \\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{ \\boldsymbol{\\phi}(s,a) \\boldsymbol{\\phi}(s,a)^\\top } } \\leq & \\frac{\\nu}{ 4(1+\\zeta) } .\n\t\\end{align*}\n\\end{lemma}\n\n\\begin{proof}[Proof of Lemma~\\ref{lemma:number_of_samples_T0}]\n\tFor any $(s,a) \\in \\mathcal{S} \\times \\mathcal{A}$, $\\| \\boldsymbol{\\phi}(s,a) \\boldsymbol{\\phi}(s,a)^\\top \\| \\leq L_{\\phi}^2$. 
Then, using the matrix Bernstein inequality (Lemma~\\ref{lemma:matrix_bernstein_tau}) and a union bound over $a \\in \\mathcal{A}$, we have that with probability $1-\\frac{\\delta}{5}$, for any $a \\in \\mathcal{A}$,\n\t\\begin{align*}\n\t\t\\nbr{ \\mathbb{E}_{s \\sim \\hat{\\mathcal{D}}} \\mbr{ \\boldsymbol{\\phi}(s,a) \\boldsymbol{\\phi}(s,a)^\\top } - \\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{ \\boldsymbol{\\phi}(s,a) \\boldsymbol{\\phi}(s,a)^\\top } } \\leq & 4 L_{\\phi}^2 \\sqrt{ \\frac{ \\log \\sbr{\\frac{10 \\cdot 2d |\\mathcal{A}|}{\\delta}} }{ T_0 } } + \\frac{ 4 L_{\\phi}^2 \\log \\sbr{\\frac{10 \\cdot 2d |\\mathcal{A}|}{\\delta}} }{ T_0 } \n\t\t\\\\\n\t\t\\leq & \\frac{ 8 L_{\\phi}^2 \\log \\sbr{\\frac{20d |\\mathcal{A}|}{\\delta}} }{ \\sqrt{T_0} } .\n\t\\end{align*} \n\tIf $T_0 \\geq 32^2 (1+\\zeta)^2 \\nu^{-2} L_{\\phi}^4 \\log^2 \\sbr{\\frac{20d |\\mathcal{A}|}{\\delta}}$, we have\n\t\\begin{align*}\n\t\t\\nbr{ \\mathbb{E}_{s \\sim \\hat{\\mathcal{D}}} \\mbr{ \\boldsymbol{\\phi}(s,a) \\boldsymbol{\\phi}(s,a)^\\top } - \\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{ \\boldsymbol{\\phi}(s,a) \\boldsymbol{\\phi}(s,a)^\\top } } \\leq \\frac{\\nu}{ 4(1+\\zeta) } ,\n\t\\end{align*} \n\twhich completes the proof.\n\\end{proof}\n\nDefine event\n\\begin{align*}\n\t\\mathcal{L}:= \\Bigg\\{ & \\nbr{ \\sum_{i=1}^{p} \\boldsymbol{\\phi}(s_{m,j,i}^{(\\ell)},\\bar{a}_i) \\boldsymbol{\\phi}(s_{m,j,i}^{(\\ell)},\\bar{a}_i)^\\top - \\sum_{i=1}^{p} \\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{\\boldsymbol{\\phi}(s,\\bar{a}_i) \\boldsymbol{\\phi}(s,\\bar{a}_i)^\\top} } \\leq 8 L_{\\phi}^2 \\sqrt{ p } \\log \\sbr{\\frac{40dMT}{\\delta}} , \\\\& \\forall m \\in [M], \\ \\forall j \\in [T], \\ \\forall \\ell \\in \\{1,2\\} \\Bigg\\} .\n\\end{align*}\n\n\\begin{lemma} \\label{lemma:number_of_samples_p}\n\tIt holds that\n\t\\begin{align*}\n\t\t\\Pr \\mbr{\\mathcal{L}} \\geq 1 - \\frac{\\delta}{5} .\n\t\\end{align*}\n\tFurthermore, if event $\\mathcal{L}$ holds and 
\n\t\\begin{align}\n\t\tp = \\left \\lceil \\frac{32^2 (1+\\zeta)^2 L_{\\phi}^4}{\\nu^2} \\log^2 \\sbr{\\frac{40dMT}{\\delta}} \\right \\rceil , \\label{eq:value_p}\n\t\\end{align}\n\twe have that for any $m \\in [M]$, $j \\in [T]$ and $\\ell \\in \\{1,2\\}$,\n\t\\begin{align*}\n\t\t\\nbr{ \\sum_{i=1}^{p} \\boldsymbol{\\phi}(s_{m,j,i}^{(\\ell)},\\bar{a}_i) \\boldsymbol{\\phi}(s_{m,j,i}^{(\\ell)},\\bar{a}_i)^\\top - \\sum_{i=1}^{p} \\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{\\boldsymbol{\\phi}(s,\\bar{a}_i) \\boldsymbol{\\phi}(s,\\bar{a}_i)^\\top} } \\leq & \\frac{p \\nu}{ 4(1+\\zeta) } .\n\t\\end{align*}\n\tHere, the value of $T$ is specified in Eq.~\\eqref{eq:value_T}.\n\\end{lemma}\n\n\\begin{proof}[Proof of Lemma~\\ref{lemma:number_of_samples_p}]\n\tFor any $(s,a) \\in \\mathcal{S} \\times \\mathcal{A}$, $\\| \\boldsymbol{\\phi}(s,a) \\boldsymbol{\\phi}(s,a)^\\top \\| \\leq L_{\\phi}^2$. Then, using the matrix Bernstein inequality (Lemma~\\ref{lemma:matrix_bernstein_tau}) and a union bound over $m \\in [M]$, $j \\in [T]$ and $\\ell \\in \\{1,2\\}$, we have that with probability $1-\\frac{\\delta}{5}$, for any $m \\in [M]$, $j \\in [T]$ and $\\ell \\in \\{1,2\\}$,\n\t\\begin{align*}\n\t\t\\nbr{ \\sum_{i=1}^{p} \\! \\boldsymbol{\\phi}(s_{m,j,i}^{(\\ell)},\\bar{a}_i) \\boldsymbol{\\phi}(s_{m,j,i}^{(\\ell)},\\bar{a}_i)^\\top \\!\\!-\\!\\! \\sum_{i=1}^{p} \\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{\\boldsymbol{\\phi}(s,\\bar{a}_i) \\boldsymbol{\\phi}(s,\\bar{a}_i)^\\top} } \\leq & 4 L_{\\phi}^2 \\sqrt{ p \\log \\sbr{\\frac{10 \\cdot 4dMT}{\\delta}} } \\!+\\! 
4 L_{\\phi}^2 \\log \\sbr{\\frac{10 \\cdot 4dMT}{\\delta}}\n\t\t\\\\\n\t\t\\leq & 8 L_{\\phi}^2 \\sqrt{ p } \\log \\sbr{\\frac{40dMT}{\\delta}} .\n\t\\end{align*}\n\tIn addition, if $p \\geq 32^2 (1+\\zeta)^2 \\nu^{-2} L_{\\phi}^4 \\log^2 \\sbr{\\frac{40dMT}{\\delta}}$, we have that \n\t\\begin{align*}\n\t\t8 L_{\\phi}^2 \\sqrt{ p } \\log \\sbr{\\frac{40dMT}{\\delta}} \\leq \\frac{p \\nu}{ 4(1+\\zeta) }\n\t\\end{align*}\n\tand thus,\n\t\\begin{align*}\n\t\t\\nbr{ \\sum_{i=1}^{p} \\boldsymbol{\\phi}(s_{m,j,i}^{(\\ell)},\\bar{a}_i) \\boldsymbol{\\phi}(s_{m,j,i}^{(\\ell)},\\bar{a}_i)^\\top - \\sum_{i=1}^{p} \\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{\\boldsymbol{\\phi}(s,\\bar{a}_i) \\boldsymbol{\\phi}(s,\\bar{a}_i)^\\top} } \\leq \\frac{p \\nu}{ 4(1+\\zeta) } ,\n\t\\end{align*} \n\twhich completes the proof.\n\\end{proof}\n\nFor any task $m \\in [M]$, round $j \\in [T]$ and $\\ell \\in \\{1,2\\}$, let\n$$\n\\boldsymbol{\\Phi}_{m,j}^{(\\ell)} = \n\\begin{bmatrix}\n\t\\boldsymbol{\\phi}(s_{m,j,1}^{(\\ell)},\\bar{a}_1)^\\top\n\t\\\\\n\t\\vdots\n\t\\\\\n\t\\boldsymbol{\\phi}(s_{m,j,p}^{(\\ell)},\\bar{a}_p)^\\top\n\\end{bmatrix} ,\n$$ \nand\n$$\n(\\boldsymbol{\\Phi}_{m,j}^{(\\ell)})^{+} = ((\\boldsymbol{\\Phi}_{m,j}^{(\\ell)})^{\\top} \\boldsymbol{\\Phi}_{m,j}^{(\\ell)})^{-1} (\\boldsymbol{\\Phi}_{m,j}^{(\\ell)})^{\\top} .\n$$\n\n\\begin{lemma} \\label{lemma:norm_Phi_plus}\n\tSuppose that event $\\mathcal{K} \\cap \\mathcal{L}$ holds. Then, for any $m \\in [M]$, $j \\in [T]$ and $\\ell \\in \\{1,2\\}$,\n\t\\begin{align*}\n\t\t\\nbr{ (\\boldsymbol{\\Phi}_{m,j}^{(\\ell)})^{+} } \n\t\t\\leq 2 \\sqrt{\\frac{(1+\\zeta)}{p \\nu}} .\n\t\\end{align*}\n\\end{lemma}\n\n\\begin{proof}[Proof of Lemma~\\ref{lemma:norm_Phi_plus}]\n\n\tWe first assume that $(\\boldsymbol{\\Phi}_{m,j}^{(\\ell)})^{\\top} \\boldsymbol{\\Phi}_{m,j}^{(\\ell)}$ is invertible. 
In our later analysis, we will prove that as long as $T_0$ and $p$ are large enough, $(\\boldsymbol{\\Phi}_{m,j}^{(\\ell)})^{\\top} \\boldsymbol{\\Phi}_{m,j}^{(\\ell)}$ is invertible.\n\t\n\tFor any $m \\in [M]$, $j \\in [T]$ and $\\ell \\in \\{1,2\\}$, we have\n\t\\begin{align}\n\t\t\\nbr{ (\\boldsymbol{\\Phi}_{m,j}^{(\\ell)})^{+} } = & \\nbr{ ((\\boldsymbol{\\Phi}_{m,j}^{(\\ell)})^{\\top} \\boldsymbol{\\Phi}_{m,j}^{(\\ell)})^{-1} (\\boldsymbol{\\Phi}_{m,j}^{(\\ell)})^{\\top} } \n\t\t\\nonumber\\\\\n\t\t= & \\sqrt{ \\nbr{ ((\\boldsymbol{\\Phi}_{m,j}^{(\\ell)})^{\\top} \\boldsymbol{\\Phi}_{m,j}^{(\\ell)})^{-1} (\\boldsymbol{\\Phi}_{m,j}^{(\\ell)})^{\\top} \\boldsymbol{\\Phi}_{m,j}^{(\\ell)} ((\\boldsymbol{\\Phi}_{m,j}^{(\\ell)})^{\\top} \\boldsymbol{\\Phi}_{m,j}^{(\\ell)})^{-1} } }\n\t\t\\nonumber\\\\\n\t\t= & \\sqrt{ \\nbr{ ((\\boldsymbol{\\Phi}_{m,j}^{(\\ell)})^{\\top} \\boldsymbol{\\Phi}_{m,j}^{(\\ell)})^{-1} } }\n\t\t\\nonumber\\\\\n\t\t= & \\frac{1}{ \\sqrt{ \\sigma_{\\min} \\sbr{ (\\boldsymbol{\\Phi}_{m,j}^{(\\ell)})^{\\top} \\boldsymbol{\\Phi}_{m,j}^{(\\ell)} } } } . 
\\label{eq:norm_Phi_plus}\n\t\\end{align}\n\t\n\tIn addition, we have\n\t\\begin{align}\n\t\t& \\sigma_{\\min} \\sbr{ (\\boldsymbol{\\Phi}_{m,j}^{(\\ell)})^{\\top} \\boldsymbol{\\Phi}_{m,j}^{(\\ell)} } \n\t\t\\nonumber\\\\ \n\t\t= & \\sigma_{\\min} \\sbr{ \\sum_{i=1}^{p} \\boldsymbol{\\phi}(s_{m,j,i}^{(\\ell)},\\bar{a}_i) \\boldsymbol{\\phi}(s_{m,j,i}^{(\\ell)},\\bar{a}_i)^\\top }\n\t\t\\nonumber\\\\\n\t\t= & \\sigma_{\\min} \\sbr{ \\sum_{i=1}^{p} \\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{\\boldsymbol{\\phi}(s,\\bar{a}_i) \\boldsymbol{\\phi}(s,\\bar{a}_i)^\\top} + \\sum_{i=1}^{p} \\boldsymbol{\\phi}(s_{m,j,i}^{(\\ell)},\\bar{a}_i) \\boldsymbol{\\phi}(s_{m,j,i}^{(\\ell)},\\bar{a}_i)^\\top - \\sum_{i=1}^{p} \\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{\\boldsymbol{\\phi}(s,\\bar{a}_i) \\boldsymbol{\\phi}(s,\\bar{a}_i)^\\top} }\n\t\t\\nonumber\\\\\n\t\t\\geq & \\sigma_{\\min} \\sbr{ \\sum_{i=1}^{p} \\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{\\boldsymbol{\\phi}(s,\\bar{a}_i) \\boldsymbol{\\phi}(s,\\bar{a}_i)^\\top} } - \\nbr{ \\sum_{i=1}^{p} \\boldsymbol{\\phi}(s_{m,j,i}^{(\\ell)},\\bar{a}_i) \\boldsymbol{\\phi}(s_{m,j,i}^{(\\ell)},\\bar{a}_i)^\\top - \\sum_{i=1}^{p} \\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{\\boldsymbol{\\phi}(s,\\bar{a}_i) \\boldsymbol{\\phi}(s,\\bar{a}_i)^\\top} } \n\t\t\\nonumber\\\\\n\t\t= & \\sigma_{\\min} \\sbr{ \\sum_{i=1}^{p} \\mathbb{E}_{s \\sim \\hat{\\mathcal{D}}} \\mbr{\\boldsymbol{\\phi}(s,\\bar{a}_i) \\boldsymbol{\\phi}(s,\\bar{a}_i)^\\top} + \\sum_{i=1}^{p} \\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{\\boldsymbol{\\phi}(s,\\bar{a}_i) \\boldsymbol{\\phi}(s,\\bar{a}_i)^\\top} - \\sum_{i=1}^{p} \\mathbb{E}_{s \\sim \\hat{\\mathcal{D}}} \\mbr{\\boldsymbol{\\phi}(s,\\bar{a}_i) \\boldsymbol{\\phi}(s,\\bar{a}_i)^\\top} } \\nonumber\\\\& - \\nbr{ \\sum_{i=1}^{p} \\boldsymbol{\\phi}(s_{m,j,i}^{(\\ell)},\\bar{a}_i) \\boldsymbol{\\phi}(s_{m,j,i}^{(\\ell)},\\bar{a}_i)^\\top - \\sum_{i=1}^{p} \\mathbb{E}_{s \\sim \\mathcal{D}} 
\\mbr{\\boldsymbol{\\phi}(s,\\bar{a}_i) \\boldsymbol{\\phi}(s,\\bar{a}_i)^\\top} } \n\t\t\\nonumber\\\\\n\t\t\\geq & \\sigma_{\\min} \\sbr{ \\sum_{i=1}^{p} \\mathbb{E}_{s \\sim \\hat{\\mathcal{D}}} \\mbr{\\boldsymbol{\\phi}(s,\\bar{a}_i) \\boldsymbol{\\phi}(s,\\bar{a}_i)^\\top} } - \\nbr{ \\sum_{i=1}^{p} \\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{\\boldsymbol{\\phi}(s,\\bar{a}_i) \\boldsymbol{\\phi}(s,\\bar{a}_i)^\\top} - \\sum_{i=1}^{p} \\mathbb{E}_{s \\sim \\hat{\\mathcal{D}}} \\mbr{\\boldsymbol{\\phi}(s,\\bar{a}_i) \\boldsymbol{\\phi}(s,\\bar{a}_i)^\\top} } \\nonumber\\\\& - \\nbr{ \\sum_{i=1}^{p} \\boldsymbol{\\phi}(s_{m,j,i}^{(\\ell)},\\bar{a}_i) \\boldsymbol{\\phi}(s_{m,j,i}^{(\\ell)},\\bar{a}_i)^\\top - \\sum_{i=1}^{p} \\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{\\boldsymbol{\\phi}(s,\\bar{a}_i) \\boldsymbol{\\phi}(s,\\bar{a}_i)^\\top} }\n\t\t\\nonumber\\\\\n\t\t\\geq & \\sigma_{\\min} \\sbr{ \\sum_{i=1}^{p} \\mathbb{E}_{s \\sim \\hat{\\mathcal{D}}} \\mbr{\\boldsymbol{\\phi}(s,\\bar{a}_i) \\boldsymbol{\\phi}(s,\\bar{a}_i)^\\top} } - \\sum_{i=1}^{p} \\nbr{ \\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{\\boldsymbol{\\phi}(s,\\bar{a}_i) \\boldsymbol{\\phi}(s,\\bar{a}_i)^\\top} - \\mathbb{E}_{s \\sim \\hat{\\mathcal{D}}} \\mbr{\\boldsymbol{\\phi}(s,\\bar{a}_i) \\boldsymbol{\\phi}(s,\\bar{a}_i)^\\top} } \\nonumber\\\\& - \\nbr{ \\sum_{i=1}^{p} \\boldsymbol{\\phi}(s_{m,j,i}^{(\\ell)},\\bar{a}_i) \\boldsymbol{\\phi}(s_{m,j,i}^{(\\ell)},\\bar{a}_i)^\\top - \\sum_{i=1}^{p} \\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{\\boldsymbol{\\phi}(s,\\bar{a}_i) \\boldsymbol{\\phi}(s,\\bar{a}_i)^\\top} } \n\t\t\\nonumber\\\\\n\t\t\\geq & \\sigma_{\\min} \\sbr{ \\sum_{i=1}^{p} \\mathbb{E}_{s \\sim \\hat{\\mathcal{D}}} \\mbr{\\boldsymbol{\\phi}(s,\\bar{a}_i) \\boldsymbol{\\phi}(s,\\bar{a}_i)^\\top} } - \\frac{p \\nu}{ 4(1+\\zeta) } - \\frac{p \\nu}{ 4(1+\\zeta) }\n\t\t, \\label{eq:sigma_min_Phi_top_times_Phi}\n\t\\end{align}\n\twhere the last inequality uses 
Lemmas~\\ref{lemma:number_of_samples_T0} and \\ref{lemma:number_of_samples_p}.\n\t\n\tIn the following, we analyze $\\sigma_{\\min} (\\sum_{i=1}^{p} \\mathbb{E}_{s \\sim \\hat{\\mathcal{D}}} \\mbr{\\boldsymbol{\\phi}(s,\\bar{a}_i) \\boldsymbol{\\phi}(s,\\bar{a}_i)^\\top})$.\n\tAccording to the guarantee of the rounding procedure $\\mathtt{ROUND}$, we have\n\t\\begin{align*}\n\t\t\\nbr{ \\sbr{ \\sum_{i=1}^{p} \\mathbb{E}_{s \\sim \\hat{\\mathcal{D}}} \\mbr{\\boldsymbol{\\phi}(s,\\bar{a}_i) \\boldsymbol{\\phi}(s,\\bar{a}_i)^\\top} }^{-1} } \\leq & (1+\\zeta) \\nbr{ \\sbr{ p \\sum_{a \\in \\mathcal{A}} \\lambda^{E}_{\\hat{\\mathcal{D}}}(a) \\mathbb{E}_{s \\sim \\hat{\\mathcal{D}}} \\mbr{\\boldsymbol{\\phi}(s,a) \\boldsymbol{\\phi}(s,a)^\\top} }^{-1} }\n\t\t\\\\\n\t\t\\leq & (1+\\zeta) \\nbr{ \\sbr{ p \\sum_{a \\in \\mathcal{A}} \\lambda^{E}(a) \\mathbb{E}_{s \\sim \\hat{\\mathcal{D}}} \\mbr{\\boldsymbol{\\phi}(s,a) \\boldsymbol{\\phi}(s,a)^\\top} }^{-1} } ,\n\t\\end{align*}\n\twhich implies that\n\t\\begin{align}\n\t\t& \\sigma_{\\min}\\sbr{ \\sum_{i=1}^{p} \\mathbb{E}_{s \\sim \\hat{\\mathcal{D}}} \\mbr{\\boldsymbol{\\phi}(s,\\bar{a}_i) \\boldsymbol{\\phi}(s,\\bar{a}_i)^\\top} } \n\t\t\\nonumber\\\\\n\t\t\\geq & \\frac{p}{1+\\zeta} \\sigma_{\\min} \\sbr{ \\sum_{a \\in \\mathcal{A}} \\lambda^{E}(a) \\mathbb{E}_{s \\sim \\hat{\\mathcal{D}}} \\mbr{\\boldsymbol{\\phi}(s,a) \\boldsymbol{\\phi}(s,a)^\\top} } \n\t\t\\nonumber\\\\\n\t\t\\geq & \\frac{p}{1+\\zeta} \\sigma_{\\min} \\Bigg( \\sum_{a \\in \\mathcal{A}} \\lambda^{E}(a) \\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{\\boldsymbol{\\phi}(s,a) \\boldsymbol{\\phi}(s,a)^\\top} \\nonumber\\\\& + \\sum_{a \\in \\mathcal{A}} \\lambda^{E}(a) \\mathbb{E}_{s \\sim \\hat{\\mathcal{D}}} \\mbr{\\boldsymbol{\\phi}(s,a) \\boldsymbol{\\phi}(s,a)^\\top} - \\sum_{a \\in \\mathcal{A}} \\lambda^{E}(a) \\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{\\boldsymbol{\\phi}(s,a) \\boldsymbol{\\phi}(s,a)^\\top} \\Bigg)\n\t\t\\nonumber\\\\\n\t\t\\geq & 
\\frac{p}{1+\\zeta} \\Bigg( \\sigma_{\\min} \\sbr{ \\sum_{a \\in \\mathcal{A}} \\lambda^{E}(a) \\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{\\boldsymbol{\\phi}(s,a) \\boldsymbol{\\phi}(s,a)^\\top} } \\nonumber\\\\& - \\nbr{ \\sum_{a \\in \\mathcal{A}} \\lambda^{E}(a) \\mathbb{E}_{s \\sim \\hat{\\mathcal{D}}} \\mbr{\\boldsymbol{\\phi}(s,a) \\boldsymbol{\\phi}(s,a)^\\top} - \\sum_{a \\in \\mathcal{A}} \\lambda^{E}(a) \\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{\\boldsymbol{\\phi}(s,a) \\boldsymbol{\\phi}(s,a)^\\top} } \\Bigg)\n\t\t\\nonumber\\\\\n\t\t\\geq & \\frac{p}{1+\\zeta} \\Bigg( \\frac{1}{\\rho_{\\mathcal{D}}^{E}} - \\sum_{a \\in \\mathcal{A}} \\lambda^{E}(a) \\nbr{ \\mathbb{E}_{s \\sim \\hat{\\mathcal{D}}} \\mbr{\\boldsymbol{\\phi}(s,a) \\boldsymbol{\\phi}(s,a)^\\top} - \\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{\\boldsymbol{\\phi}(s,a) \\boldsymbol{\\phi}(s,a)^\\top} } \\Bigg)\n\t\t\\nonumber\\\\\n\t\t\\overset{\\textup{(a)}}{\\geq} & \\frac{p}{1+\\zeta} \\sbr{ \\nu - \\frac{\\nu}{ 4(1+\\zeta) } }\n\t\t\\nonumber\\\\\n\t\t\\geq & \\frac{3p \\nu}{ 4(1+\\zeta) } , \\label{eq:sigma_min_batch_under_hat_cD}\n\t\\end{align}\n\twhere inequality (a) uses Lemmas~\\ref{lemma:bound_rho_E_cD} and \\ref{lemma:number_of_samples_T0}. \n\n\t\n\tPlugging Eq.~\\eqref{eq:sigma_min_batch_under_hat_cD} into Eq.~\\eqref{eq:sigma_min_Phi_top_times_Phi}, we have\n\t\\begin{align}\n\t\t\\sigma_{\\min} \\sbr{ (\\boldsymbol{\\Phi}_{m,j}^{(\\ell)})^{\\top} \\boldsymbol{\\Phi}_{m,j}^{(\\ell)} } \\geq & \\frac{3p \\nu}{ 4(1+\\zeta) } - \\frac{p \\nu}{ 4(1+\\zeta) } - \\frac{p \\nu}{ 4(1+\\zeta) }\n\t\t\\nonumber\\\\\n\t\t= & \\frac{p \\nu}{ 4(1+\\zeta) } . 
\\label{eq:sigma_min_Phi_top_Phi_value}\n\t\\end{align}\n\tEquations~\\eqref{eq:sigma_min_Phi_top_times_Phi} and \\eqref{eq:sigma_min_Phi_top_Phi_value} show that if $T_0$ and $p$ are large enough to satisfy that $\\nbr{ \\mathbb{E}_{s \\sim \\hat{\\mathcal{D}}} \\mbr{ \\boldsymbol{\\phi}(s,a) \\boldsymbol{\\phi}(s,a)^\\top } - \\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{ \\boldsymbol{\\phi}(s,a) \\boldsymbol{\\phi}(s,a)^\\top } } \\leq \\frac{\\nu}{ 4(1+\\zeta) }$ for any $a \\in \\mathcal{A}$ and $\\nbr{ \\sum_{i=1}^{p} \\boldsymbol{\\phi}(s_{m,j,i}^{(\\ell)},\\bar{a}_i) \\boldsymbol{\\phi}(s_{m,j,i}^{(\\ell)},\\bar{a}_i)^\\top - \\sum_{i=1}^{p} \\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{\\boldsymbol{\\phi}(s,\\bar{a}_i) \\boldsymbol{\\phi}(s,\\bar{a}_i)^\\top} } \\leq \\frac{p \\nu}{ 4(1+\\zeta) }$ for any $m \\in [M]$, $j \\in [T]$ and $\\ell \\in \\{1,2\\}$, respectively, then we have that $ (\\boldsymbol{\\Phi}_{m,j}^{(\\ell)})^{\\top} \\boldsymbol{\\Phi}_{m,j}^{(\\ell)} $ is invertible.\n\t\n\tContinuing with Eq.~\\eqref{eq:norm_Phi_plus}, we have\n\t\\begin{align*}\n\t\t\\nbr{ (\\boldsymbol{\\Phi}_{m,j}^{(\\ell)})^{+} } \n\t\t\\leq 2 \\sqrt{\\frac{(1+\\zeta)}{p \\nu}} .\n\t\\end{align*}\n\t\n\\end{proof}\n\n\n\\subsection{Global Feature Extractor Recovery with Stochastic Contexts} \\label{apx:bpi_feat_recover}\n\nIn subroutine $\\mathtt{C \\hyphen FeatRecover}$, for any $m \\in [M]$, $j \\in [T]$, $i \\in [p]$ and $\\ell \\in \\{1,2\\}$, let $s^{(\\ell)}_{m,j,i}$ and $\\eta^{(\\ell)}_{m,j,i}$ denote the random context and noise of the $\\ell$-th sample on action $\\bar{a}_i$ in the $j$-th round for task $m$, respectively. 
Here, the superscript $\\ell \\in \\{1,2\\}$ refers to the first sample (Line~\\ref{line:bpi_stage2_sample1} in Algorithm~\\ref{alg:con_feat_recover}) or the second sample (Line~\\ref{line:bpi_stage2_sample2} in Algorithm~\\ref{alg:con_feat_recover}) on an action $\\bar{a}_i$.\n\nIn $\\mathtt{C \\hyphen FeatRecover}$, for any $m \\in [M]$, $j \\in [T]$, $i \\in [p]$ and $\\ell \\in \\{1,2\\}$, let $\\boldsymbol{\\alpha}^{(\\ell)}_{m,j} \\leftarrow [\\alpha^{(\\ell)}_{m,j,1}, \\dots, \\alpha^{(\\ell)}_{m,j,p}]^\\top$, and then, $\\tilde{\\boldsymbol{\\theta}}^{(\\ell)}_{m,j} = (\\boldsymbol{\\Phi}_{m,j}^{(\\ell)})^{+} \\boldsymbol{\\alpha}^{(\\ell)}_{m,j}$. Recall that $\\boldsymbol{Z} = \\frac{1}{M T} \\sum_{m=1}^{M} \\sum_{j=1}^{T} \\tilde{\\boldsymbol{\\theta}}^{(1)}_{m,j} (\\tilde{\\boldsymbol{\\theta}}^{(2)}_{m,j})^\\top$.\n\n\\begin{lemma}[Expectation of $\\boldsymbol{Z}$] \\label{lemma:bpi_expectation_Z}\n\tIt holds that\n\t\\begin{align*}\n\t\t\\mathbb{E} \\mbr{ \\boldsymbol{Z} } = \\frac{1}{M} \\sum_{m=1}^{M} \\boldsymbol{\\theta}_m \\boldsymbol{\\theta}_m^\\top .\n\t\\end{align*}\n\\end{lemma}\n\n\\begin{proof}[Proof of Lemma~\\ref{lemma:bpi_expectation_Z}]\n\t$\\boldsymbol{Z}$ can be written as\n\t\\begin{align}\n\t\t\\boldsymbol{Z} = & \\frac{1}{M T} \\sum_{m=1}^{M} \\sum_{j=1}^{T} \\tilde{\\boldsymbol{\\theta}}^{(1)}_{m,j} (\\tilde{\\boldsymbol{\\theta}}^{(2)}_{m,j})^\\top \n\t\t\\nonumber\\\\\n\t\t= & \\frac{1}{M T} \\sum_{m=1}^{M} \\sum_{j=1}^{T} (\\boldsymbol{\\Phi}_{m,j}^{(1)})^{+} \\begin{bmatrix}\n\t\t\t\\alpha^{(1)}_{m,j,1}\n\t\t\t\\\\\n\t\t\t\\vdots\n\t\t\t\\\\\n\t\t\t\\alpha^{(1)}_{m,j,p} \n\t\t\\end{bmatrix}\n\t\t\\mbr{\\alpha^{(2)}_{m,j,1}, \\dots, \\alpha^{(2)}_{m,j,p}} ((\\boldsymbol{\\Phi}_{m,j}^{(2)})^{+})^{\\top}\n\t\t\\nonumber\\\\\n\t\n\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\n\t\n\t\t= & \\frac{1}{M T} \\sum_{m=1}^{M} \\sum_{j=1}^{T} (\\boldsymbol{\\Phi}_{m,j}^{(1)})^{+} \\Bigg( 
\\!\\!\n\t\t\\begin{bmatrix}\n\t\t\t&\\!\\!\\!\\!\\!\\! \\sbr{\\boldsymbol{\\phi}(s^{(1)}_{m,j,1},\\bar{a}_1)^\\top \\boldsymbol{\\theta}_m} \\sbr{\\boldsymbol{\\phi}(s^{(2)}_{m,j,1},\\bar{a}_1)^\\top \\boldsymbol{\\theta}_m} &\\!\\!\\!\\!\\!\\!\\dots\\!\\!\\!\\!\\!\\! &\\sbr{\\boldsymbol{\\phi}(s^{(1)}_{m,j,1},\\bar{a}_1)^\\top \\boldsymbol{\\theta}_m} \\sbr{\\boldsymbol{\\phi}(s^{(2)}_{m,j,p},\\bar{a}_p)^\\top \\boldsymbol{\\theta}_m}\n\t\t\t\\\\\n\t\t\t&\\!\\!\\!\\!\\!\\! \\dots &\\!\\!\\!\\!\\!\\!\\dots\\!\\!\\!\\!\\!\\! &\\dots\n\t\t\t\\\\\n\t\t\t&\\!\\!\\!\\!\\!\\! \\sbr{\\boldsymbol{\\phi}(s^{(1)}_{m,j,p},\\bar{a}_p)^\\top \\boldsymbol{\\theta}_m} \\sbr{\\boldsymbol{\\phi}(s^{(2)}_{m,j,1},\\bar{a}_1)^\\top \\boldsymbol{\\theta}_m} &\\!\\!\\!\\!\\!\\!\\dots\\!\\!\\!\\!\\!\\! &\\sbr{\\boldsymbol{\\phi}(s^{(1)}_{m,j,p},\\bar{a}_p)^\\top \\boldsymbol{\\theta}_m} \\sbr{\\boldsymbol{\\phi}(s^{(2)}_{m,j,p},\\bar{a}_p)^\\top \\boldsymbol{\\theta}_m}\n\t\t\\end{bmatrix} \n\t\t\\nonumber\\\\& \\hspace*{-2em} +\n\t\t\\begin{bmatrix}\n\t\t\t&\\!\\!\\!\\!\\!\\! \\boldsymbol{\\phi}(s^{(1)}_{m,j,1},\\bar{a}_1)^\\top \\boldsymbol{\\theta}_m \\cdot \\eta^{(2)}_{m,j,1} + \\eta^{(1)}_{m,j,1} \\cdot \\boldsymbol{\\phi}(s^{(2)}_{m,j,1},\\bar{a}_1)^\\top \\boldsymbol{\\theta}_m &\\!\\!\\!\\!\\!\\dots\\!\\!\\!\\!\\! & \\boldsymbol{\\phi}(s^{(1)}_{m,j,1},\\bar{a}_1)^\\top \\boldsymbol{\\theta}_m \\cdot \\eta^{(2)}_{m,j,p} + \\eta^{(1)}_{m,j,1} \\cdot \\boldsymbol{\\phi}(s^{(2)}_{m,j,p},\\bar{a}_p)^\\top \\boldsymbol{\\theta}_m\n\t\t\t\\\\\n\t\t\t&\\!\\!\\!\\!\\!\\! \\dots &\\!\\!\\!\\!\\!\\dots\\!\\!\\!\\!\\! &\\dots\n\t\t\t\\\\\n\t\t\t&\\!\\!\\!\\!\\!\\! \\boldsymbol{\\phi}(s^{(1)}_{m,j,p},\\bar{a}_p)^\\top \\boldsymbol{\\theta}_m \\cdot \\eta^{(2)}_{m,j,1} + \\eta^{(1)}_{m,j,p} \\cdot \\boldsymbol{\\phi}(s^{(2)}_{m,j,1},\\bar{a}_1)^\\top \\boldsymbol{\\theta}_m & \\!\\!\\!\\!\\!\\dots\\!\\!\\!\\!\\! 
& \\boldsymbol{\\phi}(s^{(1)}_{m,j,p},\\bar{a}_p)^\\top \\boldsymbol{\\theta}_m \\cdot \\eta^{(2)}_{m,j,p} + \\eta^{(1)}_{m,j,p} \\cdot \\boldsymbol{\\phi}(s^{(2)}_{m,j,p},\\bar{a}_p)^\\top \\boldsymbol{\\theta}_m\n\t\t\\end{bmatrix} \n\t\t\\nonumber\\\\& \\hspace*{-2em} +\n\t\t\\begin{bmatrix}\n\t\t\t&\\!\\!\\!\\!\\!\\! \\eta^{(1)}_{m,j,1} \\cdot \\eta^{(2)}_{m,j,1} &\\dots & \\eta^{(1)}_{m,j,1} \\cdot \\eta^{(2)}_{m,j,p}\n\t\t\t\\\\\n\t\t\t&\\!\\!\\!\\!\\!\\! \\dots &\\dots &\\dots\n\t\t\t\\\\\n\t\t\t&\\!\\!\\!\\!\\!\\! \\eta^{(1)}_{m,j,p} \\cdot \\eta^{(2)}_{m,j,1} &\\dots &\\eta^{(1)}_{m,j,p} \\cdot \\eta^{(2)}_{m,j,p}\n\t\t\\end{bmatrix} \n\t\t\\Bigg)\n\t\t((\\boldsymbol{\\Phi}_{m,j}^{(2)})^{+})^{\\top} \\label{eq:decompose_Z} .\n\t\\end{align}\n\t\n\tFor any task $m \\in [M]$, $j \\in [T]$ and $i \\in [p]$, the first sample on action $\\bar{a}_i$ (i.e., $s^{(1)}_{m,j,i}$ and $\\eta^{(1)}_{m,j,i}$) is independent of the second sample on this action (i.e., $s^{(2)}_{m,j,i}$ and $\\eta^{(2)}_{m,j,i}$). \n\tHence, taking the expectation of $\\boldsymbol{Z}$, we obtain\n\t\n\t\\begin{align*}\n\t\t\\mathbb{E}[\\boldsymbol{Z}] = & \\frac{1}{M T} \\sum_{m=1}^{M} \\sum_{j=1}^{T} \\mathbb{E} \\Bigg[ (\\boldsymbol{\\Phi}_{m,j}^{(1)})^{+} \\cdot\n\t\t\\\\\n\t\t&\n\t\t\\begin{bmatrix}\n\t\t\t&\\sbr{\\boldsymbol{\\phi}(s^{(1)}_{m,j,1},\\bar{a}_1)^\\top \\boldsymbol{\\theta}_m} \\sbr{\\boldsymbol{\\phi}(s^{(2)}_{m,j,1},\\bar{a}_1)^\\top \\boldsymbol{\\theta}_m} &\\!\\!\\!\\!\\!\\dots\\!\\!\\!\\!\\! &\\sbr{\\boldsymbol{\\phi}(s^{(1)}_{m,j,1},\\bar{a}_1)^\\top \\boldsymbol{\\theta}_m} \\sbr{\\boldsymbol{\\phi}(s^{(2)}_{m,j,p},\\bar{a}_p)^\\top \\boldsymbol{\\theta}_m}\n\t\t\t\\\\\n\t\t\t&\\dots &\\!\\!\\!\\!\\!\\dots\\!\\!\\!\\!\\! &\\dots\n\t\t\t\\\\\n\t\t\t&\\sbr{\\boldsymbol{\\phi}(s^{(1)}_{m,j,p},\\bar{a}_p)^\\top \\boldsymbol{\\theta}_m} \\sbr{\\boldsymbol{\\phi}(s^{(2)}_{m,j,1},\\bar{a}_1)^\\top \\boldsymbol{\\theta}_m} &\\!\\!\\!\\!\\!\\dots\\!\\!\\!\\!\\! 
&\\sbr{\\boldsymbol{\\phi}(s^{(1)}_{m,j,p},\\bar{a}_p)^\\top \\boldsymbol{\\theta}_m} \\sbr{\\boldsymbol{\\phi}(s^{(2)}_{m,j,p},\\bar{a}_p)^\\top \\boldsymbol{\\theta}_m}\n\t\t\\end{bmatrix}\n\t\t((\\boldsymbol{\\Phi}_{m,j}^{(2)})^{+})^{\\top} \\Bigg]\n\t\t\\\\\n\t\t= & \\frac{1}{M T} \\sum_{m=1}^{M} \\sum_{j=1}^{T} \\mathbb{E} \\Bigg[ ((\\boldsymbol{\\Phi}_{m,j}^{(1)})^{\\top} \\boldsymbol{\\Phi}_{m,j}^{(1)})^{-1} (\\boldsymbol{\\Phi}_{m,j}^{(1)})^{\\top} \n\t\t\\begin{bmatrix}\n\t\t\t\\boldsymbol{\\phi}(s^{(1)}_{m,j,1},\\bar{a}_1)^\\top \\boldsymbol{\\theta}_m\n\t\t\t\\\\\n\t\t\t\\vdots\n\t\t\t\\\\\n\t\t\t\\boldsymbol{\\phi}(s^{(1)}_{m,j,p},\\bar{a}_p)^\\top \\boldsymbol{\\theta}_m\n\t\t\\end{bmatrix}\n\t\t\\mbr{ \\boldsymbol{\\phi}(s^{(2)}_{m,j,1},\\bar{a}_1)^\\top \\boldsymbol{\\theta}_m, \\dots, \\boldsymbol{\\phi}(s^{(2)}_{m,j,p},\\bar{a}_p)^\\top \\boldsymbol{\\theta}_m }\n\t\t\\cdot\n\t\t\\\\\n\t\t& \\boldsymbol{\\Phi}_{m,j}^{(2)} ((\\boldsymbol{\\Phi}_{m,j}^{(2)})^{\\top} \\boldsymbol{\\Phi}_{m,j}^{(2)})^{-1} \\Bigg]\n\t\t\\\\\n\t\t= & \\frac{1}{M T} \\sum_{m=1}^{M} \\sum_{j=1}^{T} \\mathbb{E} \\mbr{ ((\\boldsymbol{\\Phi}_{m,j}^{(1)})^{\\top} \\boldsymbol{\\Phi}_{m,j}^{(1)})^{-1} (\\boldsymbol{\\Phi}_{m,j}^{(1)})^{\\top} \\cdot \\boldsymbol{\\Phi}_{m,j}^{(1)} \\boldsymbol{\\theta}_m\n\t\t\t(\\boldsymbol{\\theta}_m)^{\\top} (\\boldsymbol{\\Phi}_{m,j}^{(2)})^{\\top} \\cdot\n\t\t\t\\boldsymbol{\\Phi}_{m,j}^{(2)} ((\\boldsymbol{\\Phi}_{m,j}^{(2)})^{\\top} \\boldsymbol{\\Phi}_{m,j}^{(2)})^{-1} }\n\t\t\\\\\n\t\t= & \\frac{1}{M T} \\sum_{m=1}^{M} \\sum_{j=1}^{T} \\boldsymbol{\\theta}_m (\\boldsymbol{\\theta}_m)^\\top\n\t\t\\\\\n\t\t= & \\frac{1}{M} \\sum_{m=1}^{M} \\boldsymbol{\\theta}_m \\boldsymbol{\\theta}_m^\\top .\n\t\\end{align*}\n\t\n\\end{proof}\n\nDefine event \n\\begin{align*}\n\t\\mathcal{G}:= \\lbr{ \\nbr{\\boldsymbol{Z} - \\mathbb{E} [\\boldsymbol{Z}]} \\leq \\frac{ 256 (1+\\zeta) L_{\\phi} L_{\\theta} \\log\\sbr{\\frac{50d}{\\delta}} }{\\nu 
\\sqrt{MT}} \\log \\sbr{\\frac{100pMT}{\\delta}} } .\n\\end{align*}\n\n\\begin{lemma}[Concentration of $Z$] \\label{lemma:Z_est_error}\n\tSuppose that $\\mathcal{K} \\cap \\mathcal{L}$ holds. Then, it holds that\n\t\\begin{align*}\n\t\t\\Pr \\mbr{\\mathcal{G}} \\geq 1-\\frac{\\delta}{5} .\n\t\\end{align*}\n\\end{lemma}\n\n\\begin{proof}[Proof of Lemma~\\ref{lemma:Z_est_error}]\n\tDefine the following matrices:\n\t\\begin{align*}\n\t\t\\boldsymbol{D}_{m,j} &:= \\frac{1}{M T} (\\boldsymbol{\\Phi}_{m,j}^{(1)})^{+} \\cdot\n\t\t\\\\\n\t\t&\n\t\t\\begin{bmatrix}\n\t\t\t&\\sbr{\\boldsymbol{\\phi}(s^{(1)}_{m,j,1},\\bar{a}_1)^\\top \\boldsymbol{\\theta}_m} \\sbr{\\boldsymbol{\\phi}(s^{(2)}_{m,j,1},\\bar{a}_1)^\\top \\boldsymbol{\\theta}_m} &\\dots &\\sbr{\\boldsymbol{\\phi}(s^{(1)}_{m,j,1},\\bar{a}_1)^\\top \\boldsymbol{\\theta}_m} \\sbr{\\boldsymbol{\\phi}(s^{(2)}_{m,j,p},\\bar{a}_p)^\\top \\boldsymbol{\\theta}_m}\n\t\t\t\\\\\n\t\t\t&\\dots &\\dots &\\dots\n\t\t\t\\\\\n\t\t\t&\\sbr{\\boldsymbol{\\phi}(s^{(1)}_{m,j,p},\\bar{a}_p)^\\top \\boldsymbol{\\theta}_m} \\sbr{\\boldsymbol{\\phi}(s^{(2)}_{m,j,1},\\bar{a}_1)^\\top \\boldsymbol{\\theta}_m} &\\dots &\\sbr{\\boldsymbol{\\phi}(s^{(1)}_{m,j,p},\\bar{a}_p)^\\top \\boldsymbol{\\theta}_m} \\sbr{\\boldsymbol{\\phi}(s^{(2)}_{m,j,p},\\bar{a}_p)^\\top \\boldsymbol{\\theta}_m}\n\t\t\\end{bmatrix} ((\\boldsymbol{\\Phi}_{m,j}^{(2)})^{+})^\\top\n\t\t\\\\\n\t\t\\boldsymbol{D} &:= \\sum_{m=1}^{M} \\sum_{j=1}^{T} \\boldsymbol{D}_{m,j}\n\t\t\\\\\n\t\t\\boldsymbol{E}_{m,j} &:= \\frac{1}{M T} (\\boldsymbol{\\Phi}_{m,j}^{(1)})^{+} \\cdot\n\t\t\\\\\n\t\t&\n\t\t\\hspace*{-1.8em}\n\t\t\\begin{bmatrix}\n\t\t\t&\\!\\!\\!\\!\\! \\boldsymbol{\\phi}(s^{(1)}_{m,j,1},\\bar{a}_1)^\\top \\boldsymbol{\\theta}_m \\cdot \\eta^{(2)}_{m,j,1} + \\eta^{(1)}_{m,j,1} \\cdot \\boldsymbol{\\phi}(s^{(2)}_{m,j,1},\\bar{a}_1)^\\top \\boldsymbol{\\theta}_m &\\!\\!\\!\\!\\!\\dots\\!\\!\\!\\!\\! 
& \\boldsymbol{\\phi}(s^{(1)}_{m,j,1},\\bar{a}_1)^\\top \\boldsymbol{\\theta}_m \\cdot \\eta^{(2)}_{m,j,p} + \\eta^{(1)}_{m,j,1} \\cdot \\boldsymbol{\\phi}(s^{(2)}_{m,j,p},\\bar{a}_p)^\\top \\boldsymbol{\\theta}_m\n\t\t\t\\\\\n\t\t\t&\\!\\!\\!\\!\\!\\dots &\\!\\!\\!\\!\\!\\dots\\!\\!\\!\\!\\! &\\dots\n\t\t\t\\\\\n\t\t\t&\\!\\!\\!\\!\\! \\boldsymbol{\\phi}(s^{(1)}_{m,j,p},\\bar{a}_p)^\\top \\boldsymbol{\\theta}_m \\cdot \\eta^{(2)}_{m,j,1} + \\eta^{(1)}_{m,j,p} \\cdot \\boldsymbol{\\phi}(s^{(2)}_{m,j,1},\\bar{a}_1)^\\top \\boldsymbol{\\theta}_m & \\!\\!\\!\\!\\!\\dots\\!\\!\\!\\!\\! & \\boldsymbol{\\phi}(s^{(1)}_{m,j,p},\\bar{a}_p)^\\top \\boldsymbol{\\theta}_m \\cdot \\eta^{(2)}_{m,j,p} + \\eta^{(1)}_{m,j,p} \\cdot \\boldsymbol{\\phi}(s^{(2)}_{m,j,p},\\bar{a}_p)^\\top \\boldsymbol{\\theta}_m\n\t\t\\end{bmatrix} \\! \\cdot \n\t\t\\\\\n\t\t& ((\\boldsymbol{\\Phi}_{m,j}^{(2)})^{+})^\\top\n\t\t\\\\\n\t\t\\boldsymbol{E} &:= \\sum_{m=1}^{M} \\sum_{j=1}^{T} \\boldsymbol{E}_{m,j}\n\t\t\\\\\n\t\t\\boldsymbol{F}_{m,j} &:= \\frac{1}{M T} (\\boldsymbol{\\Phi}_{m,j}^{(1)})^{+}\n\t\t\\begin{bmatrix}\n\t\t\t&\\eta^{(1)}_{m,j,1} \\cdot \\eta^{(2)}_{m,j,1} &\\dots & \\eta^{(1)}_{m,j,1} \\cdot \\eta^{(2)}_{m,j,p}\n\t\t\t\\\\\n\t\t\t&\\dots &\\dots &\\dots\n\t\t\t\\\\\n\t\t\t&\\eta^{(1)}_{m,j,p} \\cdot \\eta^{(2)}_{m,j,1} &\\dots &\\eta^{(1)}_{m,j,p} \\cdot \\eta^{(2)}_{m,j,p}\n\t\t\\end{bmatrix} ((\\boldsymbol{\\Phi}_{m,j}^{(2)})^{+})^\\top\n\t\t\\\\\n\t\t\\boldsymbol{F} &:= \\sum_{m=1}^{M} \\sum_{j=1}^{T} \\boldsymbol{F}_{m,j}\n\t\\end{align*}\n\t\n\t\n\tFrom Eq.~\\eqref{eq:decompose_Z}, we can bound $\\|\\boldsymbol{Z} - \\mathbb{E} [\\boldsymbol{Z}]\\|$ as\n\t\\begin{align}\n\t\t\\nbr{\\boldsymbol{Z} - \\mathbb{E} [\\boldsymbol{Z}]} \\leq & \\nbr{\\boldsymbol{D} - \\mathbb{E}[\\boldsymbol{D}]} + \\nbr{\\boldsymbol{E} - \\mathbb{E}[\\boldsymbol{E}]} + \\nbr{\\boldsymbol{F} - \\mathbb{E}[\\boldsymbol{F}]} . 
\\label{eq:decompose_Z_est_err}\n\t\\end{align}\n\t\n\tSimilar to the proof of Lemma~\\ref{lemma:Z_t_est_error}, in order to use the truncated matrix Bernstein inequality (Lemma~\\ref{lemma:matrix_bernstein_tau}), we define the truncated noise and some truncated matrices as follows.\n\t\n\tLet $R>0$ be a truncation parameter of noises which will be chosen later. For any $m \\in [M]$, $j \\in [T]$, $i \\in [p]$ and $\\ell \\in \\{1,2\\}$, let $\\tilde{\\eta}^{(\\ell)}_{m,j,i}=\\eta^{(\\ell)}_{m,j,i} \\mathbbm{1}\\{|\\eta^{(\\ell)}_{m,j,i}| \\leq R\\}$ denote the truncated noise. Furthermore, we define the following matrices with truncated noises:\n\t\\begin{align*}\n\t\t\\tilde{\\boldsymbol{E}}_{m,j} &:= \\frac{1}{M T} (\\boldsymbol{\\Phi}_{m,j}^{(1)})^{+} \\cdot\n\t\t\\\\\n\t\t&\n\t\t\\hspace*{-1.8em} \n\t\t\\begin{bmatrix}\n\t\t\t&\\!\\!\\!\\!\\! \\boldsymbol{\\phi}(s^{(1)}_{m,j,1},\\bar{a}_1)^\\top \\boldsymbol{\\theta}_m \\cdot \\tilde{\\eta}^{(2)}_{m,j,1} + \\tilde{\\eta}^{(1)}_{m,j,1} \\cdot \\boldsymbol{\\phi}(s^{(2)}_{m,j,1},\\bar{a}_1)^\\top \\boldsymbol{\\theta}_m &\\!\\!\\!\\!\\!\\dots\\!\\!\\!\\!\\! & \\boldsymbol{\\phi}(s^{(1)}_{m,j,1},\\bar{a}_1)^\\top \\boldsymbol{\\theta}_m \\cdot \\tilde{\\eta}^{(2)}_{m,j,p} + \\tilde{\\eta}^{(1)}_{m,j,1} \\cdot \\boldsymbol{\\phi}(s^{(2)}_{m,j,p},\\bar{a}_p)^\\top \\boldsymbol{\\theta}_m\n\t\t\t\\\\\n\t\t\t&\\!\\!\\!\\!\\!\\dots &\\!\\!\\!\\!\\!\\dots\\!\\!\\!\\!\\! &\\dots\n\t\t\t\\\\\n\t\t\t&\\!\\!\\!\\!\\! \\boldsymbol{\\phi}(s^{(1)}_{m,j,p},\\bar{a}_p)^\\top \\boldsymbol{\\theta}_m \\cdot \\tilde{\\eta}^{(2)}_{m,j,1} + \\tilde{\\eta}^{(1)}_{m,j,p} \\cdot \\boldsymbol{\\phi}(s^{(2)}_{m,j,1},\\bar{a}_1)^\\top \\boldsymbol{\\theta}_m & \\!\\!\\!\\!\\!\\dots\\!\\!\\!\\!\\! 
& \\boldsymbol{\\phi}(s^{(1)}_{m,j,p},\\bar{a}_p)^\\top \\boldsymbol{\\theta}_m \\cdot \\tilde{\\eta}^{(2)}_{m,j,p} + \\tilde{\\eta}^{(1)}_{m,j,p} \\cdot \\boldsymbol{\\phi}(s^{(2)}_{m,j,p},\\bar{a}_p)^\\top \\boldsymbol{\\theta}_m\n\t\t\\end{bmatrix} \\! \\cdot\n\t\t\\\\\n\t\t& ((\\boldsymbol{\\Phi}_{m,j}^{(2)})^{+})^\\top\n\t\t\\\\\n\t\t\\tilde{\\boldsymbol{E}} &:= \\sum_{m=1}^{M} \\sum_{j=1}^{T} \\tilde{\\boldsymbol{E}}_{m,j}\n\t\t\\\\\n\t\t\\tilde{\\boldsymbol{F}}_{m,j} &:= \\frac{1}{M T} (\\boldsymbol{\\Phi}_{m,j}^{(1)})^{+}\n\t\t\\begin{bmatrix}\n\t\t\t&\\tilde{\\eta}^{(1)}_{m,j,1} \\cdot \\tilde{\\eta}^{(2)}_{m,j,1} &\\dots & \\tilde{\\eta}^{(1)}_{m,j,1} \\cdot \\tilde{\\eta}^{(2)}_{m,j,p}\n\t\t\t\\\\\n\t\t\t&\\dots &\\dots &\\dots\n\t\t\t\\\\\n\t\t\t&\\tilde{\\eta}^{(1)}_{m,j,p} \\cdot \\tilde{\\eta}^{(2)}_{m,j,1} &\\dots &\\tilde{\\eta}^{(1)}_{m,j,p} \\cdot \\tilde{\\eta}^{(2)}_{m,j,p}\n\t\t\\end{bmatrix} ((\\boldsymbol{\\Phi}_{m,j}^{(2)})^{+})^\\top\n\t\t\\\\\n\t\t\\tilde{\\boldsymbol{F}} &:= \\sum_{m=1}^{M} \\sum_{j=1}^{T} \\tilde{\\boldsymbol{F}}_{m,j}\n\t\\end{align*}\n\t\n\t\n\tRecall that from Lemma~\\ref{lemma:norm_Phi_plus}, we have that for any $m \\in [M]$, $j \\in [T]$ and $\\ell \\in \\{1,2\\}$, $\\|(\\boldsymbol{\\Phi}_{m,j}^{(\\ell)})^{+}\\| \\leq 2 \\sqrt{\\frac{(1+\\zeta)}{p \\nu}}$. Let $B_{\\Phi}:=2 \\sqrt{\\frac{(1+\\zeta)}{p \\nu}}$.\n\t\n\tWe first analyze $\\|\\boldsymbol{D}-\\mathbb{E}[\\boldsymbol{D}]\\|$. 
Since $|\\boldsymbol{\\phi}(s^{(\\ell)}_{m,j,i},\\bar{a}_i)^\\top \\boldsymbol{\\theta}_m| \\leq L_{\\phi}L_{\\theta}$ for any $m \\in [M]$, $j \\in [T]$, $i \\in [p]$ and $\\ell \\in \\{1,2\\}$, we have that $\\|\\boldsymbol{D}_{m,j}\\| \\leq \\frac{1}{MT} \\cdot p L_{\\phi} L_{\\theta} B_{\\Phi}^2$ for any $m \\in [M]$ and $j \\in [T]$, and $\\| \\sum_{m=1}^{M} \\sum_{j=1}^{T} \\mathbb{E}[\\boldsymbol{D}^2_{m,j}]\\| \\leq MT \\cdot \\frac{1}{M^2 T^2} \\cdot p^2 L_{\\phi}^2 L_{\\theta}^2 B_{\\Phi}^4 = \\frac{1}{M T} \\cdot p^2 L_{\\phi}^2 L_{\\theta}^2 B_{\\Phi}^4$. \n\t\n\tLet $\\delta' \\in (0,1)$ be a confidence parameter, to be chosen later.\n\tUsing the matrix Bernstein inequality (Lemma~\\ref{lemma:matrix_bernstein_tau}), we have that with probability at least $1-\\delta'$,\n\t\n\t\\begin{align}\n\t\t\\nbr{ \\boldsymbol{D} - \\mathbb{E}[\\boldsymbol{D}] } \\leq & 4 \\sqrt{ \\frac{p^2 L_{\\phi}^2 L_{\\theta}^2 B_{\\Phi}^4 \\log \\sbr{\\frac{2d}{\\delta'}}}{M T} } + \\frac{4 p L_{\\phi} L_{\\theta} B_{\\Phi}^2 \\log \\sbr{\\frac{2d}{\\delta'}}}{MT} \n\t\t\\nonumber\\\\\n\t\t\\leq & \\frac{8 \\cdot 4 p L_{\\phi} L_{\\theta} B_{\\Phi}^2 \\log \\sbr{\\frac{2d}{\\delta'}}}{\\sqrt{MT}} \\label{eq:D_est_err}.\n\t\\end{align}\n\t\n\tNext, we bound $\\|\\boldsymbol{E}-\\mathbb{E}[\\boldsymbol{E}]\\|$. 
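\n\tThe truncation arguments for $\\boldsymbol{E}$ and $\\boldsymbol{F}$ below repeatedly use the following standard tail bounds for $1$-sub-Gaussian noise, recorded here for reference: for any $R > 0$,\n\t\\begin{align*}\n\t\t\\Pr \\mbr{ |\\eta^{(\\ell)}_{m,j,i}| > R } \\leq 2 \\exp\\sbr{-\\frac{R^2}{2}} , \\qquad \\int_R^{\\infty} \\exp\\sbr{-\\frac{y^2}{2}} dy \\leq \\frac{1}{R} \\exp\\sbr{-\\frac{R^2}{2}} , \\qquad \\int_R^{\\infty} y \\exp\\sbr{-\\frac{y^2}{2}} dy = \\exp\\sbr{-\\frac{R^2}{2}} .\n\t\\end{align*}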
\n\tSince $|\\boldsymbol{\\phi}(s^{(\\ell)}_{m,j,i},\\bar{a}_i)^\\top \\boldsymbol{\\theta}_m| \\leq L_{\\phi} L_{\\theta}$ and $|\\tilde{\\eta}^{(\\ell)}_{m,j,i}| \\leq R$ for any $m \\in [M]$, $j \\in [T]$, $i \\in [p]$ and $\\ell \\in \\{1,2\\}$, we have that $\\|\\tilde{\\boldsymbol{E}}_{m,j}\\| \\leq \\frac{1}{M T} \\cdot 2pR L_{\\phi} L_{\\theta} B_{\\Phi}^2$ for any $m \\in [M]$ and $j \\in [T]$, and $\\nbr{\\sum_{m=1}^{M} \\sum_{j=1}^{T} \\mathbb{E} [\\tilde{\\boldsymbol{E}}_{m,j}^2]} \\leq \\frac{1}{M T} \\cdot 4p^2R^2 L_{\\phi}^2 L_{\\theta}^2 B_{\\Phi}^4$.\n\t\n\tSince $\\eta^{(\\ell)}_{m,j,i}$ is 1-sub-Gaussian for any $m \\in [M]$, $j \\in [T]$, $i \\in [p]$ and $\\ell \\in \\{1,2\\}$, using a union bound over $i \\in [p]$ and $\\ell \\in \\{1,2\\}$, we have that for any $m \\in [M]$ and $j \\in [T]$, with probability at least $1-4p\\exp(-\\frac{R^2}{2})$, $|\\eta^{(\\ell)}_{m,j,i}| \\leq R$ for all $i \\in [p]$ and $\\ell \\in \\{1,2\\}$, and thus, $\\|\\boldsymbol{E}_{m,j}\\| \\leq \\frac{1}{M T} \\cdot 2pR L_{\\phi} L_{\\theta} B_{\\Phi}^2$. Then, we have\n\t\\begin{align*}\n\t\t& \\nbr{ \\mathbb{E}[\\boldsymbol{E}_{m,j}]-\\mathbb{E}[\\tilde{\\boldsymbol{E}}_{m,j}] } \n\t\t\\nonumber\\\\\n\t\t\\leq & \\nbr{ \\mathbb{E} \\mbr{\\boldsymbol{E}_{m,j} \\cdot \\indicator{ \\nbr{\\boldsymbol{E}_{m,j}} \\geq \\frac{2pR L_{\\phi} L_{\\theta} B_{\\Phi}^2}{M T} }} }\n\t\t\\\\\n\t\t\\leq & \\mathbb{E} \\mbr{ \\nbr{\\boldsymbol{E}_{m,j}} \\cdot \\indicator{ \\nbr{\\boldsymbol{E}_{m,j}} \\geq \\frac{2pR L_{\\phi} L_{\\theta} B_{\\Phi}^2}{M T} } } \n\t\t\\\\\n\t\t= & \\mathbb{E}\\mbr{ \\frac{2pR L_{\\phi} L_{\\theta} B_{\\Phi}^2}{M T} \\!\\cdot\\! \\indicator{ \\nbr{\\boldsymbol{E}_{m,j}} \\!\\geq\\! \\frac{2pR L_{\\phi} L_{\\theta} B_{\\Phi}^2}{M T} }} \\!+\\! \\mathbb{E} \\mbr{\\sbr{ \\nbr{\\boldsymbol{E}_{m,j}} \\!-\\! \\frac{2pR L_{\\phi} L_{\\theta} B_{\\Phi}^2}{M T}} \\!\\cdot\\! 
\\indicator{ \\nbr{\\boldsymbol{E}_{m,j}} \\!\\geq\\! \\frac{2pR L_{\\phi} L_{\\theta} B_{\\Phi}^2}{M T} } } \n\t\t\\\\\n\t\t= & \\frac{2pR L_{\\phi} L_{\\theta} B_{\\Phi}^2}{M T} \\cdot \\Pr\\mbr{ \\nbr{\\boldsymbol{E}_{m,j}} \\geq \\frac{2pR L_{\\phi} L_{\\theta} B_{\\Phi}^2}{M T} } + \\int_0^{\\infty} \\Pr \\mbr{ \\nbr{\\boldsymbol{E}_{m,j}} - \\frac{2pR L_{\\phi} L_{\\theta} B_{\\Phi}^2}{M T} > x} dx \n\t\t\\\\\n\t\t\\leq & \\frac{2pR L_{\\phi} L_{\\theta} B_{\\Phi}^2}{M T} \\cdot 4p \\cdot \\exp\\sbr{-\\frac{R^2}{2}} + \\frac{2p L_{\\phi} L_{\\theta} B_{\\Phi}^2}{M T} \\int_R^{\\infty} \\Pr \\mbr{ \\nbr{\\boldsymbol{E}_{m,j}} > \\frac{2p L_{\\phi} L_{\\theta} B_{\\Phi}^2 y}{M T} } dy \n\t\t\\\\\n\t\t\\leq & \\frac{2pR L_{\\phi} L_{\\theta} B_{\\Phi}^2}{M T} \\cdot 4p \\cdot \\exp\\sbr{-\\frac{R^2}{2}} + \\frac{2p L_{\\phi} L_{\\theta} B_{\\Phi}^2}{M T} \\int_R^{\\infty} 4p \\exp\\sbr{-\\frac{y^2}{2}} dy\n\t\t\\\\\n\t\t\\leq & \\frac{2pR L_{\\phi} L_{\\theta} B_{\\Phi}^2}{M T} \\cdot 4p \\cdot \\exp\\sbr{-\\frac{R^2}{2}} + \\frac{2p L_{\\phi} L_{\\theta} B_{\\Phi}^2}{M T} \\cdot 4p \\cdot \\frac{1}{R} \\cdot \\exp\\sbr{-\\frac{R^2}{2}} \n\t\t\\\\\n\t\t= & \\frac{2p L_{\\phi} L_{\\theta} B_{\\Phi}^2}{M T} \\cdot 4p \\cdot \\sbr{R+\\frac{1}{R}} \\exp\\sbr{-\\frac{R^2}{2}} .\n\t\\end{align*}\n\t\n\t\n\tUsing the truncated matrix Bernstein inequality (Lemma~\\ref{lemma:matrix_bernstein_tau}) with $n=MT$, $R=\\sqrt{2\\log \\sbr{\\frac{4pMT}{\\delta'}}}$, $U=\\frac{2p L_{\\phi} L_{\\theta} B_{\\Phi}^2 \\sqrt{2\\log \\sbr{\\frac{4pMT}{\\delta'}}}}{MT}$, $\\sigma^2=\\frac{(2p L_{\\phi} L_{\\theta} B_{\\Phi}^2 \\sqrt{2\\log \\sbr{\\frac{4pMT}{\\delta'}}})^2}{MT}$, $\\tau=4\\sqrt{\\frac{ (2p L_{\\phi} L_{\\theta} B_{\\Phi}^2 \\sqrt{2\\log \\sbr{\\frac{4pMT}{\\delta'}}})^2 \\cdot \\log\\sbr{\\frac{2d}{\\delta'}}}{MT}}+\\frac{4 \\cdot 2p L_{\\phi} L_{\\theta} B_{\\Phi}^2 \\sqrt{2\\log \\sbr{\\frac{4pMT}{\\delta'}}} \\cdot \\log\\sbr{\\frac{2d}{\\delta'}}}{M T}$ and 
$\\Delta=\\frac{2p L_{\\phi} L_{\\theta} B_{\\Phi}^2 \\cdot 2 \\sqrt{2\\log \\sbr{\\frac{4pMT}{\\delta'}}}}{M T} \\cdot \\frac{\\delta'}{M T}$, we have that with probability at least $1-2\\delta'$, \n\t\\begin{align}\n\t\t\\nbr{\\boldsymbol{E} - \\mathbb{E}[\\boldsymbol{E}]} \\leq \\frac{8 \\cdot 2p L_{\\phi} L_{\\theta} B_{\\Phi}^2 \\sqrt{2\\log \\sbr{\\frac{4pMT}{\\delta'}}} \\cdot \\log\\sbr{\\frac{2d}{\\delta'}}}{\\sqrt{M T}} . \\label{eq:E_est_err}\n\t\\end{align}\n\t\n\tNow we investigate $\\nbr{\\boldsymbol{F} - \\mathbb{E}[\\boldsymbol{F}]}$. Since $|\\tilde{\\eta}^{(\\ell)}_{m,j,i}| \\leq R$ for any $m \\in [M]$, $j \\in [T]$, $i \\in [p]$ and $\\ell \\in \\{1,2\\}$, we have that $\\|\\tilde{\\boldsymbol{F}}_{m,j}\\| \\leq \\frac{1}{M T} \\cdot pR^2 B_{\\Phi}^2$ and $\\nbr{\\sum_{m=1}^{M} \\sum_{j=1}^{T} \\mathbb{E} \\mbr{\\tilde{\\boldsymbol{F}}_{m,j}^2} } \\leq \\frac{1}{M T} \\cdot p^2 R^4 B_{\\Phi}^4$.\n\n\n\n\n\n\n\t%\n\n\n\n\n\n\n\n\n\t\n\tRecall that for any $m \\in [M]$ and $j \\in [T]$, with probability at least $1-4p\\exp(-\\frac{R^2}{2})$, $|\\eta^{(\\ell)}_{m,j,i}| \\leq R$ for all $i \\in [p]$ and $\\ell \\in \\{1,2\\}$, and thus, $\\|\\boldsymbol{F}_{m,j}\\| \\leq \\frac{1}{M T} \\cdot p B_{\\Phi}^2 R^2 $. 
Then, we have\n\t\\begin{align*}\n\t\t\\nbr{ \\mathbb{E}[\\boldsymbol{F}_{m,j}]-\\mathbb{E}[\\tilde{\\boldsymbol{F}}_{m,j}] } \\leq & \\nbr{ \\mathbb{E} \\mbr{\\boldsymbol{F}_{m,j} \\cdot \\indicator{ \\nbr{\\boldsymbol{F}_{m,j}} \\geq \\frac{p B_{\\Phi}^2 R^2}{M T} }} }\n\t\t\\\\\n\t\t\\leq & \\mathbb{E} \\mbr{ \\nbr{\\boldsymbol{F}_{m,j}} \\cdot \\indicator{ \\nbr{\\boldsymbol{F}_{m,j}} \\geq \\frac{p B_{\\Phi}^2 R^2}{M T} } } \n\t\t\\\\\n\t\t= & \\mathbb{E}\\mbr{ \\frac{p B_{\\Phi}^2 R^2}{M T} \\cdot \\indicator{ \\nbr{\\boldsymbol{F}_{m,j}} \\geq \\frac{p B_{\\Phi}^2 R^2}{M T} }} + \\mathbb{E} \\mbr{\\sbr{ \\nbr{\\boldsymbol{F}_{m,j}} - \\frac{p B_{\\Phi}^2 R^2}{M T}} \\cdot \\indicator{ \\nbr{\\boldsymbol{F}_{m,j}} \\geq \\frac{p B_{\\Phi}^2 R^2}{M T} } } \n\t\t\\\\\n\t\t= & \\frac{p B_{\\Phi}^2 R^2}{M T} \\cdot \\Pr\\mbr{ \\nbr{\\boldsymbol{F}_{m,j}} \\geq \\frac{p B_{\\Phi}^2 R^2}{M T} } + \\int_0^{\\infty} \\Pr \\mbr{ \\nbr{\\boldsymbol{F}_{m,j}} - \\frac{p B_{\\Phi}^2 R^2}{M T} > x} dx \n\t\t\\\\\n\t\t\\leq & \\frac{p B_{\\Phi}^2 R^2}{M T} \\cdot 4p \\cdot \\exp\\sbr{-\\frac{R^2}{2}} + \\frac{2p B_{\\Phi}^2}{M T} \\int_R^{\\infty} y \\cdot \\Pr \\mbr{ \\nbr{\\boldsymbol{F}_{m,j}} > \\frac{p B_{\\Phi}^2 y^2}{M T} } dy \n\t\t\\\\\n\t\t\\leq & \\frac{p B_{\\Phi}^2 R^2}{M T} \\cdot 4p \\cdot \\exp\\sbr{-\\frac{R^2}{2}} + \\frac{2p B_{\\Phi}^2}{M T} \\int_R^{\\infty} y \\cdot 4p \\exp\\sbr{-\\frac{y^2}{2}} dy\n\t\t\\\\\n\t\t\\leq & \\frac{p B_{\\Phi}^2 R^2}{M T} \\cdot 4p \\cdot \\exp\\sbr{-\\frac{R^2}{2}} + \\frac{2p B_{\\Phi}^2}{M T} \\cdot 4p \\cdot \\exp\\sbr{-\\frac{R^2}{2}} \n\t\t\\\\\n\t\t= & \\frac{p B_{\\Phi}^2}{M T} \\cdot 4p \\cdot \\sbr{R^2+2} \\exp\\sbr{-\\frac{R^2}{2}} .\n\t\\end{align*}\n\t\n\t\n\tUsing the truncated matrix Bernstein inequality (Lemma~\\ref{lemma:matrix_bernstein_tau}) with $n=MT$, $R=\\sqrt{2\\log \\sbr{\\frac{4pMT}{\\delta'}}}$, $U=\\frac{p B_{\\Phi}^2 \\cdot 2\\log \\sbr{\\frac{4pMT}{\\delta'}}}{MT}$, 
$\\sigma^2=\\frac{(p B_{\\Phi}^2 \\cdot 2\\log \\sbr{\\frac{4pMT}{\\delta'}})^2}{MT}$, $\\tau=4\\sqrt{\\frac{ (p B_{\\Phi}^2 \\cdot 2\\log \\sbr{\\frac{4pMT}{\\delta'}})^2 \\cdot \\log\\sbr{\\frac{2d}{\\delta'}}}{MT}}+\\frac{4\\cdot p B_{\\Phi}^2 \\cdot 2\\log \\sbr{\\frac{4pMT}{\\delta'}} \\cdot \\log\\sbr{\\frac{2d}{\\delta'}}}{M T}$ and $\\Delta=\\frac{p B_{\\Phi}^2 \\cdot 2 \\cdot 2\\log \\sbr{\\frac{4pMT}{\\delta'}} }{M T} \\cdot \\frac{\\delta'}{M T}$, we have that with probability at least $1-2\\delta'$, \n\t\\begin{align}\n\t\t\\nbr{\\boldsymbol{F} - \\mathbb{E}\\mbr{\\boldsymbol{F}}} \\leq \\frac{8 \\cdot p B_{\\Phi}^2 \\cdot 2\\log \\sbr{\\frac{4pMT}{\\delta'}} \\cdot \\log\\sbr{\\frac{2d}{\\delta'}}}{\\sqrt{MT}} \\label{eq:F_est_err} .\n\t\\end{align}\n\t\n\tPlugging Eqs.~\\eqref{eq:D_est_err}-\\eqref{eq:F_est_err} into Eq.~\\eqref{eq:decompose_Z_est_err}, we have that with probability at least $1-5\\delta'$,\n\t\\begin{align*}\n\t\t\\nbr{\\boldsymbol{Z} - \\mathbb{E} [\\boldsymbol{Z}]} \\leq & \\nbr{\\boldsymbol{D} - \\mathbb{E} \\mbr{\\boldsymbol{D}}} + \\nbr{\\boldsymbol{E} - \\mathbb{E} \\mbr{\\boldsymbol{E}}} + \\nbr{\\boldsymbol{F} - \\mathbb{E} \\mbr{\\boldsymbol{F}}}\n\t\t\\\\\n\t\t\\leq & \\frac{64 p L_{\\phi} L_{\\theta} B_{\\Phi}^2 \\log \\sbr{\\frac{4pMT}{\\delta'}} \\log\\sbr{\\frac{2d}{\\delta'}}}{\\sqrt{MT}} .\n\t\\end{align*}\n\t\n\tLet $\\delta'=\\frac{\\delta}{25}$. Recall that $B_{\\Phi}:=2 \\sqrt{\\frac{(1+\\zeta)}{p \\nu}}$. 
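With this choice of $\\delta'$ and the value of $B_{\\Phi}$, the constants in the bound above combine as follows (spelled out for completeness):\n\t\\begin{align*}\n\t\t64 p L_{\\phi} L_{\\theta} B_{\\Phi}^2 = 64 p L_{\\phi} L_{\\theta} \\cdot \\frac{4(1+\\zeta)}{p \\nu} = \\frac{ 256 (1+\\zeta) L_{\\phi} L_{\\theta} }{\\nu} , \\qquad \\log \\sbr{\\frac{2d}{\\delta'}} = \\log\\sbr{\\frac{50d}{\\delta}} , \\qquad \\log \\sbr{\\frac{4pMT}{\\delta'}} = \\log \\sbr{\\frac{100pMT}{\\delta}} .\n\t\\end{align*}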
Then, we obtain that with probability at least $1-\\frac{\\delta}{5}$,\n\t\\begin{align*}\n\t\t\\nbr{\\boldsymbol{Z} - \\mathbb{E} [\\boldsymbol{Z}]} \\leq \\frac{ 256 (1+\\zeta) L_{\\phi} L_{\\theta} \\log\\sbr{\\frac{50d}{\\delta}} }{\\nu \\sqrt{MT}} \\log \\sbr{\\frac{100pMT}{\\delta}} ,\n\t\\end{align*}\n\twhich implies that\n\t$\\Pr[\\mathcal{G}]\\geq 1-\\frac{\\delta}{5}$.\n\\end{proof}\n\nAccording to Assumption~\\ref{assumption:diverse_task}, there exists an absolute constant $c_0$ such that $\\sigma_{\\min}(\\frac{1}{M} \\sum_{m=1}^{M} \\boldsymbol{w}_m \\boldsymbol{w}_m^\\top) = \\sigma_{\\min}(\\frac{1}{M} \\sum_{m=1}^{M} \\boldsymbol{\\theta}_m \\boldsymbol{\\theta}_m^\\top) \\geq \\frac{c_0}{k}$.\n\n\\begin{lemma}[Concentration of $\\hat{\\boldsymbol{B}}$] \\label{lemma:concentration_hat_B_clb}\n\tSuppose that event $\\mathcal{G}$ holds. Then, \n\t\\begin{align*}\n\t\t\\nbr{\\hat{\\boldsymbol{B}}_{\\bot}^\\top \\boldsymbol{B}} \\leq \\frac{ 2048 (1+\\zeta) k L_{\\phi} L_{\\theta} \\log\\sbr{\\frac{50d}{\\delta}} }{c_0 \\nu \\sqrt{MT}} \\log \\sbr{\\frac{ 135 (1+\\zeta) d L_{\\phi} M T}{\\nu \\delta} } .\n\t\\end{align*}\n\tFurthermore, if\n\t\\begin{align}\n\t\tT = \\left \\lceil \\frac{68 \\cdot 2048^2 \\cdot 96^2 (1+\\zeta)^2 k^4 L_{\\phi}^4 L_{\\theta}^2 L_{w}^2 }{ c_0^2 \\nu^2 \\varepsilon^2 M } \\log^6 \\sbr{ \\frac{ 2048 \\cdot 135 \\cdot 96 \\cdot 50 \\cdot 5 (1+\\zeta)^2 k^2 d^2 L_{\\phi}^3 L_{\\theta} L_{w} N }{c_0 \\nu^2 \\delta^3 \\varepsilon} } \\right \\rceil , \\label{eq:value_T}\n\t\\end{align}\n\twe have \n\t\\begin{align*}\n\t\t\\nbr{\\hat{\\boldsymbol{B}}_{\\bot}^\\top \\boldsymbol{B}} \\leq \\frac{\\varepsilon}{ 96 k \\log \\sbr{\\frac{5N}{\\delta}} L_{\\phi} L_{w} } .\n\t\\end{align*}\n\\end{lemma}\n\n\\begin{proof}[Proof of Lemma~\\ref{lemma:concentration_hat_B_clb}]\n\tFirst, we have that $\\sigma_{k}(\\mathbb{E}[\\boldsymbol{Z}]) - \\sigma_{k+1}(\\mathbb{E}[\\boldsymbol{Z}])=\\sigma_{\\min}( \\frac{1}{M} 
\\sum_{m=1}^{M} \\boldsymbol{\\theta}_m \\boldsymbol{\\theta}_m^\\top )\\geq \\frac{c_0}{k}$. Let $p := \\lceil 32^2 (1+\\zeta)^2 \\nu^{-2} L_{\\phi}^4 \\log^2 \\sbr{\\frac{40dMT}{\\delta}} \\rceil$.\n\tThen, using the Davis-Kahan $\\sin \\theta$ theorem~\\cite{bhatia2013matrix} and letting $T$ be large enough to ensure that $\\nbr{\\boldsymbol{Z} - \\mathbb{E}[\\boldsymbol{Z}]} \\leq \\frac{c_0}{2k}$, we have\n\t\\begin{align*}\n\t\t\\nbr{\\hat{\\boldsymbol{B}}_{\\bot}^\\top \\boldsymbol{B}} \\leq & \\frac{ \\nbr{\\boldsymbol{Z} - \\mathbb{E}[\\boldsymbol{Z}]} }{ \\sigma_{k}(\\mathbb{E}[\\boldsymbol{Z}]) - \\sigma_{k+1}(\\mathbb{E}[\\boldsymbol{Z}]) - \\nbr{\\boldsymbol{Z} - \\mathbb{E}[\\boldsymbol{Z}]} }\n\t\t\\\\\n\t\t\\leq & \\frac{2k}{c_0} \\nbr{\\boldsymbol{Z} - \\mathbb{E}[\\boldsymbol{Z}]} \n\t\t\\\\\n\t\t\\leq & \\frac{ 512 (1+\\zeta) k L_{\\phi} L_{\\theta} \\log\\sbr{\\frac{50d}{\\delta}} }{c_0 \\nu \\sqrt{MT}} \\log \\sbr{\\frac{100pMT}{\\delta}} \n\t\t\\\\\n\t\t\\leq & \\frac{ 512 (1+\\zeta) k L_{\\phi} L_{\\theta} \\log\\sbr{\\frac{50d}{\\delta}} }{c_0 \\nu \\sqrt{MT}} \\log \\sbr{\\frac{100MT}{\\delta} \\cdot \\frac{2 \\cdot 32^2 (1+\\zeta)^2 L_{\\phi}^4}{\\nu^2} \\log^2 \\sbr{\\frac{40dMT}{\\delta}}} \n\t\t\\\\\n\t\t\\leq & \\frac{ 512 (1+\\zeta) k L_{\\phi} L_{\\theta} \\log\\sbr{\\frac{50d}{\\delta}} }{c_0 \\nu \\sqrt{MT}} \\log \\sbr{\\frac{ 2 \\cdot 100 \\cdot 32^2 \\cdot 40^2 (1+\\zeta)^2 d^2 L_{\\phi}^4 M^3 T^3}{\\nu^2 \\delta^3} } \n\t\t\\\\\n\t\t\\leq & \\frac{ 2048 (1+\\zeta) k L_{\\phi} L_{\\theta} \\log\\sbr{\\frac{50d}{\\delta}} }{c_0 \\nu \\sqrt{MT}} \\log \\sbr{\\frac{ 135 (1+\\zeta) d L_{\\phi} M T}{\\nu \\delta} } .\n\t\\end{align*}\n\t\n\tUsing Lemma~\\ref{lemma:technical_tool_bai_stage2} with $A=2048 (1+\\zeta) k c_0^{-1} \\nu^{-1} L_{\\phi} L_{\\theta} \\log\\sbr{\\frac{50d}{\\delta}}$, $B=\\frac{ 135 (1+\\zeta) d L_{\\phi} }{\\nu \\delta}$ and $\\kappa=\\frac{\\varepsilon}{ 96 k \\log \\sbr{\\frac{5N}{\\delta}} L_{\\phi} 
L_{w}}$, we have that if \n\t\\begin{align*}\n\t\tM T \\geq & \\frac{68 \\cdot 2048^2 \\cdot 96^2 (1+\\zeta)^2 k^4 L_{\\phi}^4 L_{\\theta}^2 L_{w}^2 }{ c_0^2 \\nu^2 \\varepsilon^2 } \\cdot \\\\& \\log^2 \\sbr{\\frac{50d}{\\delta}} \\log^2 \\sbr{\\frac{5N}{\\delta}} \\log^2 \\sbr{ \\frac{ 2048 \\cdot 135 \\cdot 96 (1+\\zeta)^2 k^2 d L_{\\phi}^3 L_{\\theta} L_{w} }{c_0 \\nu^2 \\delta \\varepsilon} \\log\\sbr{\\frac{50d}{\\delta}} \\log \\sbr{\\frac{5N}{\\delta}} } ,\n\t\\end{align*}\n\tthen $\\nbr{\\hat{\\boldsymbol{B}}_{\\bot}^\\top \\boldsymbol{B}} \\leq \\frac{\\varepsilon}{ 96 k \\log \\sbr{\\frac{5N}{\\delta}} L_{\\phi} L_{w} }$.\n\t\n\tFurther enlarging $MT$, if \n\t\\begin{align*}\n\t\tM T \\geq \\frac{68 \\cdot 2048^2 \\cdot 96^2 (1+\\zeta)^2 k^4 L_{\\phi}^4 L_{\\theta}^2 L_{w}^2 }{ c_0^2 \\nu^2 \\varepsilon^2 } \\log^6 \\sbr{ \\frac{ 2048 \\cdot 135 \\cdot 96 \\cdot 50 \\cdot 5 (1+\\zeta)^2 k^2 d^2 L_{\\phi}^3 L_{\\theta} L_{w} N }{c_0 \\nu^2 \\delta^3 \\varepsilon} } ,\n\t\\end{align*}\n\tthen\n\t\\begin{align*}\n\t\t\\nbr{\\hat{\\boldsymbol{B}}_{\\bot}^\\top \\boldsymbol{B}} \\leq \\frac{\\varepsilon}{ 96 k \\log \\sbr{\\frac{5N}{\\delta}} L_{\\phi} L_{w} } .\n\t\\end{align*}\n\\end{proof}\n\n\n\n\\subsection{Estimation with Low-dimensional Representations}\n\n\\begin{lemma} \\label{lemma:computation_logdet}\n\tIn subroutine $\\mathtt{EstLowRep}$ (Algorithm~\\ref{alg:est_low_rep}), for any $m \\in [M]$ and $t > 0$, we have \n\t\\begin{align*}\n\t\t\\log \\sbr{ \\frac{\\det\\sbr{ \\gamma I + \\sum_{\\tau=1}^{t} \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau}) \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau})^\\top \\hat{\\boldsymbol{B}} } }{ \\det \\sbr{ \\gamma I} } } \\leq k \\log \\sbr{ 1 + \\frac{ t }{\\gamma k} } .\n\t\\end{align*}\n\\end{lemma}\n\\begin{proof}[Proof of Lemma~\\ref{lemma:computation_logdet}]\n\tThis proof uses an idea similar to that of Lemma 11 in \\cite{abbasi2011improved}.\n\t\n\tIt holds 
that\n\t\\begin{align*}\n\t\t& \\log \\sbr{ \\frac{\\det\\sbr{ \\gamma I + \\sum_{\\tau=1}^{t} \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau}) \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau})^\\top \\hat{\\boldsymbol{B}} } }{ \\det \\sbr{ \\gamma I} } } \n\t\t\\\\\n\t\t\\leq & \\log \\sbr{ \\frac{\\sbr{ \\frac{ \\textup{Trace} \\sbr{\\gamma I + \\sum_{\\tau=1}^{t} \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau}) \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau})^\\top \\hat{\\boldsymbol{B}}} }{k}}^{k} }{ \\gamma^{k} } } \n\t\t\\\\\n\t\t= & k \\log \\sbr{ \\frac{ \\textup{Trace} \\sbr{\\gamma I} + \\sum_{\\tau=1}^{t} \\textup{Trace} \\sbr{ \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau}) \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau})^\\top \\hat{\\boldsymbol{B}}} }{\\gamma k} } \n\t\t\\\\\n\t\t= & k \\log \\sbr{ \\frac{ \\gamma k + \\sum_{\\tau=1}^{t} \\nbr{\\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau})}^2 }{\\gamma k} } \n\t\t\\\\\n\t\t\\leq & k \\log \\sbr{ 1 + \\frac{ t }{\\gamma k} } .\n\t\\end{align*}\n\\end{proof}\n\n\\begin{lemma} \\label{lemma:decreasing_uncertainty}\n\tIn subroutine $\\mathtt{EstLowRep}$ (Algorithm~\\ref{alg:est_low_rep}), for any $m \\in [M]$ and $t \\geq 0$, we have \n\t\\begin{align*}\n\t\t\\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{\\max_{a \\in \\mathcal{A}} \\nbr{\\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s,a)}_{\\boldsymbol{\\Sigma}_{m,t}^{-1}} } \\geq \\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{\\max_{a \\in \\mathcal{A}} \\nbr{\\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s,a)}_{\\boldsymbol{\\Sigma}_{m,t+1}^{-1}} } .\n\t\\end{align*}\n\\end{lemma}\n\\begin{proof}[Proof of Lemma~\\ref{lemma:decreasing_uncertainty}]\n\tThis proof is similar to that of Lemma 6 in \\cite{zanette2021design}.\n\t\n\tFor any $m \\in [M]$ and $t \\geq 0$, since $\\boldsymbol{\\Sigma}_{m,t+1} \\succeq \\boldsymbol{\\Sigma}_{m,t}$, we have 
$\\boldsymbol{\\Sigma}_{m,t}^{-1} \\succeq \\boldsymbol{\\Sigma}_{m,t+1}^{-1}$. Hence, for any $m \\in [M]$, $t \\geq 0$, $s \\in \\mathcal{S}$ and $a \\in \\mathcal{A}$, we have\n\t\\begin{align*}\n\t\t\\boldsymbol{\\phi}(s,a)^\\top \\hat{\\boldsymbol{B}} \\boldsymbol{\\Sigma}_{m,t}^{-1} \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s,a) \\geq \\boldsymbol{\\phi}(s,a)^\\top \\hat{\\boldsymbol{B}} \\boldsymbol{\\Sigma}_{m,t+1}^{-1} \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s,a) ,\n\t\\end{align*}\n\twhich implies that\n\t\\begin{align*}\n\t\t\\nbr{\\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s,a)}_{\\boldsymbol{\\Sigma}_{m,t}^{-1}} \\geq \\nbr{\\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s,a)}_{\\boldsymbol{\\Sigma}_{m,t+1}^{-1}} .\n\t\\end{align*}\n\t\n\tTherefore, for any $m \\in [M]$ and $t \\geq 0$, we have\n\t\\begin{align*}\n\t\t\\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{\\max_{a \\in \\mathcal{A}} \\nbr{\\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s,a)}_{\\boldsymbol{\\Sigma}_{m,t}^{-1}} } \\geq \\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{\\max_{a \\in \\mathcal{A}} \\nbr{\\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s,a)}_{\\boldsymbol{\\Sigma}_{m,t+1}^{-1}} } .\n\t\\end{align*}\n\\end{proof}\n\n\nIn subroutine $\\mathtt{EstLowRep}$, for any $m \\in [M]$ and $t>0$, let $\\xi_{m,t}$ denote the noise of the sample at timestep $t$ for task $m$ (Line~\\ref{line:bpi_stage3_sample} in Algorithm~\\ref{alg:est_low_rep}).\n\n\nDefine event \n\\begin{align*}\n\t\\mathcal{H} := \\Bigg\\{ &\n\t\\nbr{\\sum_{\\tau=1}^{t} \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau}) \\xi_{m,\\tau}}_{\\sbr{\\gamma I + \\sum_{\\tau=1}^{t} \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau}) \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau})^\\top \\hat{\\boldsymbol{B}} }^{-1}}^2 \\leq \\\\& k \\log \\sbr{ 1 + \\frac{ t }{\\gamma k} } + 2 \\log\\sbr{\\frac{5}{\\delta}}, \\ \\forall m \\in [M],\\ \\forall t>0 \\Bigg\\} . 
\n\\end{align*}\n\n\\begin{lemma}[Martingale Concentration of the Variance Term] \\label{lemma:martingale_concentration_variance}\n\tIt holds that\n\t\\begin{align*}\n\t\t\\Pr \\mbr{\\mathcal{H}} \\geq 1-\\frac{\\delta}{5} .\n\t\\end{align*}\n\\end{lemma}\n\\begin{proof}[Proof of Lemma~\\ref{lemma:martingale_concentration_variance}]\n\tLet $\\delta'$ be a confidence parameter which will be chosen later.\n\tSince $\\hat{\\boldsymbol{B}}$ is fixed before sampling $(s_{m,\\tau},a_{m,\\tau})$ for all $m \\in [M]$ and $\\tau>0$, using Lemma~\\ref{lemma:self-normalized_vector_concentration}, we have that with probability at least $1-\\delta'$, for any task $m \\in [M]$ and $t>0$,\n\t\\begin{align*}\n\t\t& \\nbr{\\sum_{\\tau=1}^{t} \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau}) \\xi_{m,\\tau}}_{\\sbr{\\gamma I + \\sum_{\\tau=1}^{t} \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau}) \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau})^\\top \\hat{\\boldsymbol{B}} }^{-1}}^2 \n\t\t\\\\\n\t\t\\leq & 2 \\log \\sbr{ \\frac{\\det\\sbr{ \\gamma I + \\sum_{\\tau=1}^{t} \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau}) \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau})^\\top \\hat{\\boldsymbol{B}} }^{\\frac{1}{2}} }{ \\det \\sbr{ \\gamma I}^{\\frac{1}{2}} \\cdot \\delta'} } \n\t\t\\\\\n\t\t\\leq & \\log \\sbr{ \\frac{\\det\\sbr{ \\gamma I + \\sum_{\\tau=1}^{t} \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau}) \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau})^\\top \\hat{\\boldsymbol{B}} } }{ \\det \\sbr{ \\gamma I} } } + 2 \\log\\sbr{\\frac{1}{\\delta'}}\n\t\t\\\\\n\t\t\\overset{\\textup{(a)}}{\\leq} & k \\log \\sbr{ 1 + \\frac{ t }{\\gamma k} } + 2 \\log\\sbr{\\frac{1}{\\delta'}} ,\n\t\\end{align*}\n\twhere inequality (a) uses Lemma~\\ref{lemma:computation_logdet}.\n\t\n\tLetting $\\delta'=\\frac{\\delta}{5}$, we obtain this lemma.\n\\end{proof}\n\nDefine event \n\\begin{align*}\n\t\\mathcal{J} := \\Bigg\\{ & 
\\sum_{t=1}^{N} \\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{\\max_{a \\in \\mathcal{A}} \\nbr{\\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s,a)}_{\\boldsymbol{\\Sigma}_{t-1}^{-1}} } \\leq \\\\& \\frac{1}{4} \\sbr{ 2\\sqrt{\\log \\sbr{\\frac{5}{\\delta}}} + \\sqrt{ 4 \\log \\sbr{\\frac{5}{\\delta}} + 4 \\sbr{ \\sum_{t=1}^{N} \\max_{a \\in \\mathcal{A}} \\nbr{\\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s_t,a) }_{\\boldsymbol{\\Sigma}_{t-1}^{-1}} + 2 \\log \\sbr{\\frac{5}{\\delta}} } } }^2 \\Bigg\\} .\n\\end{align*}\n\n\\begin{lemma} \\label{lemma:reverse_bernstein_uncertainty}\n\tIt holds that\n\t\\begin{align*}\n\t\t\\Pr \\mbr{\\mathcal{J}} \\geq 1-\\frac{\\delta}{5} .\n\t\\end{align*}\n\\end{lemma}\n\n\\begin{proof}[Proof of Lemma~\\ref{lemma:reverse_bernstein_uncertainty}]\n\tThis lemma follows directly from Lemma~\\ref{lemma:reverse_bernstein}.\n\\end{proof}\n\n\\begin{lemma} \\label{lemma:number_of_samples_N}\n\tSuppose that event $\\mathcal{K} \\cap \\mathcal{L} \\cap \\mathcal{G} \\cap \\mathcal{H} \\cap \\mathcal{J}$ holds. 
For any task $m \\in [M]$, we have\n\t\\begin{align*}\n\t\t\\mathbb{E}_{s \\sim \\mathcal{D}} & \\mbr{\\max_{a \\in \\mathcal{A}} \\abr{\\boldsymbol{\\phi}(s,a)^\\top \\sbr{ \\hat{\\boldsymbol{\\theta}}_{m,N} - \\boldsymbol{\\theta}_m}}} \\leq \\sbr{2\\sqrt{ \\frac{2 k \\log \\sbr{ 1 + \\frac{ N }{\\gamma k} }}{N} } + \\frac{8 \\log \\sbr{\\frac{5}{\\delta}} }{N}} \\cdot \\\\& \\sbr{ \\nbr{\\hat{\\boldsymbol{B}}_{\\perp}^\\top \\boldsymbol{B}} L_{\\phi} L_{w} \\sqrt{Nk} + \\sqrt{k \\log \\sbr{ 1 + \\frac{ N }{\\gamma k} } + 2 \\log\\sbr{\\frac{5}{\\delta}}} + \\sqrt{\\gamma} L_{\\theta} } + \\nbr{\\hat{\\boldsymbol{B}}_{\\bot}^\\top \\boldsymbol{B}} L_{\\phi} L_{w} .\n\t\\end{align*}\n\tFurthermore, if \n\t\\begin{align*}\n\t\tN = \\left \\lceil \\frac{4^2 \\cdot 26^4 \\cdot 24^2 \\cdot 2 \\sbr{k^2 + k \\gamma L_{\\theta}^2 } \\log^4 \\big({\\frac{240 ( k + \\sqrt{k \\gamma} L_{\\theta}) }{\\varepsilon \\delta}} \\big) }{\\varepsilon^2} \\right \\rceil , \n\t\\end{align*}\n\tthen\n\t\\begin{align*}\n\t\t\\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{\\max_{a \\in \\mathcal{A}} \\abr{\\boldsymbol{\\phi}(s,a)^\\top \\sbr{ \\hat{\\boldsymbol{\\theta}}_{m,N} - \\boldsymbol{\\theta}_m}}} \\leq \\frac{\\varepsilon}{2} .\n\t\\end{align*}\n\\end{lemma}\n\n\\begin{proof}[Proof of Lemma~\\ref{lemma:number_of_samples_N}]\n\tFor any task $m \\in [M]$ and $t \\in [N]$,\n\t\\begin{align*}\n\t\t\\hat{\\boldsymbol{w}}_{m,t} = & \\boldsymbol{\\Sigma}_{m,t}^{-1} \\sum_{\\tau=1}^{t} \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau}) r_{m,\\tau}\n\t\t\\\\\n\t\t= & \\sbr{\\gamma I + \\sum_{\\tau=1}^{t} \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau}) \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau})^\\top \\hat{\\boldsymbol{B}} }^{-1} \\sum_{\\tau=1}^{t} \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau}) \\sbr{ \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau})^\\top \\boldsymbol{\\theta}_m + \\xi_{m,\\tau} }\n\t\t\\\\\n\t\t= & \\sbr{\\gamma I + \\sum_{\\tau=1}^{t} 
\\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau}) \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau})^\\top \\hat{\\boldsymbol{B}} }^{-1} \\sum_{\\tau=1}^{t} \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau}) \\cdot \\\\& \\sbr{ \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau})^\\top \\hat{\\boldsymbol{B}} \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\theta}_m + \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau})^\\top \\hat{\\boldsymbol{B}}_{\\perp} \\hat{\\boldsymbol{B}}_{\\perp}^\\top \\boldsymbol{\\theta}_m + \\xi_{m,\\tau} }\n\t\t\\\\& + \\gamma \\sbr{\\gamma I + \\sum_{\\tau=1}^{t} \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau}) \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau})^\\top \\hat{\\boldsymbol{B}} }^{-1} \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\theta}_m \n\t\t\\\\& - \\gamma \\sbr{\\gamma I + \\sum_{\\tau=1}^{t} \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau}) \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau})^\\top \\hat{\\boldsymbol{B}} }^{-1} \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\theta}_m\n\t\t\\\\\n\t\t= & \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\theta}_m + \\sbr{\\gamma I + \\sum_{\\tau=1}^{t} \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau}) \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau})^\\top \\hat{\\boldsymbol{B}} }^{-1} \\cdot\n\t\t\\\\&\n\t\t\\sum_{\\tau=1}^{t} \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau}) \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau})^\\top \\hat{\\boldsymbol{B}}_{\\perp} \\hat{\\boldsymbol{B}}_{\\perp}^\\top \\boldsymbol{B} \\boldsymbol{w}_m\n\t\t\\\\&\n\t\t+ \\sbr{\\gamma I + \\sum_{\\tau=1}^{t} \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau}) \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau})^\\top \\hat{\\boldsymbol{B}} }^{-1} \\sum_{\\tau=1}^{t} \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau}) \\xi_{m,\\tau}\n\t\t\\\\& \n\t\t- \\gamma \\sbr{\\gamma I + 
\\sum_{\\tau=1}^{t} \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau}) \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau})^\\top \\hat{\\boldsymbol{B}} }^{-1} \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\theta}_m .\n\t\\end{align*}\n\t\n\t\n\tHence, for any task $m \\in [M]$, $t \\in [N]$ and $(s,a) \\in \\mathcal{S} \\times \\mathcal{A}$, \n\t\\begin{align*}\n\t\t\\boldsymbol{\\phi}(s,a)^\\top \\sbr{ \\hat{\\boldsymbol{\\theta}}_{m,t} - \\boldsymbol{\\theta}_m} = & \\boldsymbol{\\phi}(s,a)^\\top \\hat{\\boldsymbol{B}} \\hat{\\boldsymbol{w}}_{m,t} - \\boldsymbol{\\phi}(s,a)^\\top \\sbr{\\hat{\\boldsymbol{B}}\\hat{\\boldsymbol{B}}^\\top+\\hat{\\boldsymbol{B}}_{\\bot}\\hat{\\boldsymbol{B}}_{\\bot}^\\top} \\boldsymbol{\\theta}_m\n\t\t\\\\\n\t\t= & \\boldsymbol{\\phi}(s,a)^\\top \\hat{\\boldsymbol{B}} \\sbr{ \\hat{\\boldsymbol{w}}_{m,t} - \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\theta}_m } - \\boldsymbol{\\phi}(s,a)^\\top \\hat{\\boldsymbol{B}}_{\\bot}\\hat{\\boldsymbol{B}}_{\\bot}^\\top \\boldsymbol{\\theta}_m\n\t\t\\\\\n\t\t= & \\boldsymbol{\\phi}(s,a)^\\top \\hat{\\boldsymbol{B}} \n\t\t\\sbr{\\gamma I + \\sum_{\\tau=1}^{t} \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau}) \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau})^\\top \\hat{\\boldsymbol{B}} }^{-1} \\cdot \\\\& \\sum_{\\tau=1}^{t} \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau}) \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau})^\\top \\hat{\\boldsymbol{B}}_{\\perp} \\hat{\\boldsymbol{B}}_{\\perp}^\\top \\boldsymbol{B} \\boldsymbol{w}_m\n\t\t\\\\&\n\t\t+ \\boldsymbol{\\phi}(s,a)^\\top \\hat{\\boldsymbol{B}} \\sbr{\\gamma I + \\sum_{\\tau=1}^{t} \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau}) \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau})^\\top \\hat{\\boldsymbol{B}} }^{-1} \\sum_{\\tau=1}^{t} \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau}) \\xi_{m,\\tau}\n\t\t\\\\& \n\t\t-\\! 
\\gamma \\boldsymbol{\\phi}(s,a)^{\\!\\top} \\! \\hat{\\boldsymbol{B}} \\sbr{ \\! \\gamma I \\!+\\! \\sum_{\\tau=1}^{t} \\hat{\\boldsymbol{B}}^{\\!\\top} \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau}) \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau})^{\\!\\top} \\hat{\\boldsymbol{B}} \\! }^{\\!\\!-1} \\!\\!\\!\\!\\!\\! \\hat{\\boldsymbol{B}}^\\top \\! \\boldsymbol{\\theta}_m \\!-\\! \\boldsymbol{\\phi}(s,a)^{\\!\\top} \\! \\hat{\\boldsymbol{B}}_{\\bot}\\hat{\\boldsymbol{B}}_{\\bot}^{\\!\\top} \\boldsymbol{B} \\boldsymbol{w}_m .\n\t\\end{align*}\n\t\n\tFor any $m \\in [M]$, let $\\boldsymbol{\\Sigma}_{m,0}:=\\gamma I$. For any $m \\in [M]$ and $t \\geq 1$, let $\\boldsymbol{\\Sigma}_{m,t}:=\\gamma I + \\sum_{\\tau=1}^{t} \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau}) \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau})^\\top \\hat{\\boldsymbol{B}}$.\n\t\n\tTaking the absolute value on both sides and using the Cauchy--Schwarz inequality, we obtain that for any $m \\in [M]$, $t \\in [N]$ and $(s,a) \\in \\mathcal{S} \\times \\mathcal{A}$, \n\t\\begin{align*}\n\t\t& \\abr{\\boldsymbol{\\phi}(s,a)^\\top \\sbr{ \\hat{\\boldsymbol{\\theta}}_{m,t} - \\boldsymbol{\\theta}_m}} \n\t\t\\\\\n\t\t\\leq & \\nbr{\\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s,a)}_{\\boldsymbol{\\Sigma}_{m,t}^{-1}} \\nbr{\\sum_{\\tau=1}^{t} \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau}) \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau})^\\top \\hat{\\boldsymbol{B}}_{\\perp} \\hat{\\boldsymbol{B}}_{\\perp}^\\top \\boldsymbol{B} \\boldsymbol{w}_m}_{\\boldsymbol{\\Sigma}_{m,t}^{-1}}\n\t\t\\\\&\n\t\t+ \\nbr{\\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s,a)}_{\\boldsymbol{\\Sigma}_{m,t}^{-1}} \\nbr{\\sum_{\\tau=1}^{t} \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau}) \\xi_{m,\\tau}}_{\\boldsymbol{\\Sigma}_{m,t}^{-1}}\n\t\t\\\\& \n\t\t+ \\gamma \\nbr{\\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s,a)}_{\\boldsymbol{\\Sigma}_{m,t}^{-1}} 
\\nbr{\\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\theta}_m}_{\\boldsymbol{\\Sigma}_{m,t}^{-1}} \n\t\t\\\\& \n\t\t+ \\abr{\\boldsymbol{\\phi}(s,a)^\\top \\hat{\\boldsymbol{B}}_{\\bot}\\hat{\\boldsymbol{B}}_{\\bot}^\\top \\boldsymbol{B} \\boldsymbol{w}_m}\n\t\t\\\\\n\t\t\\overset{\\textup{(a)}}{\\leq} & \\nbr{\\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s,a)}_{\\boldsymbol{\\Sigma}_{m,t}^{-1}} \\sum_{\\tau=1}^{t} \\abr{ \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau})^\\top \\hat{\\boldsymbol{B}}_{\\perp} \\hat{\\boldsymbol{B}}_{\\perp}^\\top \\boldsymbol{B} \\boldsymbol{w}_m } \\cdot \\nbr{\\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau})}_{\\boldsymbol{\\Sigma}_{m,t}^{-1}}\n\t\t\\\\&\n\t\t+ \\nbr{\\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s,a)}_{\\boldsymbol{\\Sigma}_{m,t}^{-1}} \\sqrt{k \\log \\sbr{ 1 + \\frac{ t }{\\gamma k} } + 2 \\log\\sbr{\\frac{5}{\\delta}}}\n\t\t\\\\& \n\t\t+ \\gamma \\nbr{\\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s,a)}_{\\boldsymbol{\\Sigma}_{m,t}^{-1}} \\cdot \\frac{1}{\\sqrt{\\gamma}} \\cdot \\nbr{\\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\theta}_m} + \\nbr{\\hat{\\boldsymbol{B}}_{\\bot}^\\top \\boldsymbol{B}} L_{\\phi} L_{w}\n\t\t\\\\\n\t\t\\leq & \\nbr{\\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s,a)}_{\\boldsymbol{\\Sigma}_{m,t}^{-1}} \\cdot \\nbr{\\hat{\\boldsymbol{B}}_{\\perp}^\\top \\boldsymbol{B}} L_{\\phi} L_{w} \\cdot \\sum_{\\tau=1}^{t} \\nbr{\\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau})}_{\\boldsymbol{\\Sigma}_{m,t}^{-1}}\n\t\t\\\\&\n\t\t+ \\nbr{\\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s,a)}_{\\boldsymbol{\\Sigma}_{m,t}^{-1}} \\sqrt{k \\log \\sbr{ 1 + \\frac{ t }{\\gamma k} } + 2 \\log\\sbr{\\frac{5}{\\delta}}}\n\t\t\\\\& \n\t\t+ \\sqrt{\\gamma} L_{\\theta} \\nbr{\\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s,a)}_{\\boldsymbol{\\Sigma}_{m,t}^{-1}} + \\nbr{\\hat{\\boldsymbol{B}}_{\\bot}^\\top \\boldsymbol{B}} L_{\\phi} 
L_{w}\n\t\t\\\\\n\t\t\\overset{\\textup{(b)}}{\\leq} & \\nbr{\\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s,a)}_{\\boldsymbol{\\Sigma}_{m,t}^{-1}} \\cdot \\nbr{\\hat{\\boldsymbol{B}}_{\\perp}^\\top \\boldsymbol{B}} L_{\\phi} L_{w} \\cdot \\sqrt{tk}\n\t\t\\\\&\n\t\t+ \\nbr{\\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s,a)}_{\\boldsymbol{\\Sigma}_{m,t}^{-1}} \\sqrt{k \\log \\sbr{ 1 + \\frac{ t }{\\gamma k} } + 2 \\log\\sbr{\\frac{5}{\\delta}}}\n\t\t\\\\& \n\t\t+ \\sqrt{\\gamma} L_{\\theta} \\nbr{\\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s,a)}_{\\boldsymbol{\\Sigma}_{m,t}^{-1}} + \\nbr{\\hat{\\boldsymbol{B}}_{\\bot}^\\top \\boldsymbol{B}} L_{\\phi} L_{w}\n\t\t\\\\\n\t\t= & \\nbr{\\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s,a)}_{\\boldsymbol{\\Sigma}_{m,t}^{-1}} \\sbr{ \\nbr{\\hat{\\boldsymbol{B}}_{\\perp}^\\top \\boldsymbol{B}} L_{\\phi} L_{w} \\cdot \\sqrt{tk} + \\sqrt{k \\log \\sbr{ 1 + \\frac{ t }{\\gamma k} } + 2 \\log\\sbr{\\frac{5}{\\delta}}} + \\sqrt{\\gamma} L_{\\theta} } \\\\& + \\nbr{\\hat{\\boldsymbol{B}}_{\\bot}^\\top \\boldsymbol{B}} L_{\\phi} L_{w} ,\n\t\\end{align*}\n\twhere inequality (a) uses the triangle inequality and the definition of event $\\mathcal{H}$, and inequality (b) is due to Lemma~\\ref{lemma:sqrt_n_k_gamma}.\n\t\n\t\n\tTaking the maximum over $a \\in \\mathcal{A}$ and taking the expectation on $s \\sim \\mathcal{D}$, we have that for any task $m \\in [M]$,\n\t\\begin{align}\n\t\t\\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{\\max_{a \\in \\mathcal{A}} \\abr{\\boldsymbol{\\phi}(s,a)^\\top \\sbr{ \\hat{\\boldsymbol{\\theta}}_{m,N} - \\boldsymbol{\\theta}_m}}} \\leq & \\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{\\max_{a \\in \\mathcal{A}} \\nbr{\\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s,a)}_{\\boldsymbol{\\Sigma}_N^{-1}} } \\cdot\n\t\t\\nonumber\\\\&\n\t\t\\sbr{ \\nbr{\\hat{\\boldsymbol{B}}_{\\perp}^\\top \\boldsymbol{B}} L_{\\phi} L_{w} \\cdot \\sqrt{Nk} + \\sqrt{k \\log \\sbr{ 1 + \\frac{ N }{\\gamma k} } + 2 
\\log\\sbr{\\frac{5}{\\delta}}} + \\sqrt{\\gamma} L_{\\theta} } \n\t\t\\nonumber\\\\& \n\t\t+ \\nbr{\\hat{\\boldsymbol{B}}_{\\bot}^\\top \\boldsymbol{B}} L_{\\phi} L_{w} . \\label{eq:ex_max_phi_theta}\n\t\\end{align}\n\t\n\t\n\tAccording to Lemma~\\ref{lemma:decreasing_uncertainty}, $\\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{\\max_{a \\in \\mathcal{A}} \\nbr{\\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s,a)}_{\\boldsymbol{\\Sigma}_t^{-1}} }$ is non-increasing with respect to $t$. Hence, we have\n\t\\begin{align}\n\t\t\\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{\\max_{a \\in \\mathcal{A}} \\nbr{\\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s,a)}_{\\boldsymbol{\\Sigma}_{N}^{-1}} } \\leq & \\frac{1}{N} \\sum_{t=1}^{N} \\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{\\max_{a \\in \\mathcal{A}} \\nbr{\\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s,a)}_{\\boldsymbol{\\Sigma}_{t}^{-1}} }\n\t\t\\nonumber\\\\\n\t\t\\leq & \\frac{1}{N} \\sum_{t=1}^{N} \\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{\\max_{a \\in \\mathcal{A}} \\nbr{\\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s,a)}_{\\boldsymbol{\\Sigma}_{t-1}^{-1}} } \n\t\t\\nonumber\\\\\n\t\t\\overset{\\textup{(a)}}{\\leq} & \\frac{1}{4N} \\Bigg( 2\\sqrt{\\log \\sbr{\\frac{5}{\\delta}}} \\nonumber\\\\& + \\sqrt{ 4 \\log \\sbr{\\frac{5}{\\delta}} + 4 \\sbr{ \\sum_{t=1}^{N} \\max_{a \\in \\mathcal{A}} \\nbr{\\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s_t,a) }_{\\boldsymbol{\\Sigma}_{t-1}^{-1}} + 2 \\log \\sbr{\\frac{5}{\\delta}} } } \\Bigg)^2 \n\t\t\\nonumber\\\\\n\t\t= & \\frac{1}{4N} \\sbr{ 2\\sqrt{\\log \\sbr{\\frac{5}{\\delta}}} + \\sqrt{ 4 \\log \\sbr{\\frac{5}{\\delta}} + 4 \\sbr{ \\sum_{t=1}^{N} \\nbr{\\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s_t,a_t)}_{\\boldsymbol{\\Sigma}_{t-1}^{-1}} + 2 \\log \\sbr{\\frac{5}{\\delta}} } } }^{\\!\\!\\!2} \\!\\! 
, \\label{eq:ex_max_B_phi}\n\t\\end{align}\n\twhere inequality (a) is due to the definition of event $\\mathcal{J}$.\n\t\n\tIn addition, we have \n\t\\begin{align}\n\t\t\\sum_{t=1}^{N} \\nbr{\\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s_t,a_t)}_{\\boldsymbol{\\Sigma}_{t-1}^{-1}} \\leq & \\sqrt{N \\cdot \\sum_{t=1}^{N} \\nbr{\\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s_t,a_t)}^2_{\\boldsymbol{\\Sigma}_{t-1}^{-1}}}\n\t\t\\nonumber\\\\\n\t\t\\overset{\\textup{(a)}}{\\leq} & \\sqrt{2N \\log \\sbr{\\frac{\\det \\sbr{\\gamma I + \\sum_{\\tau=1}^{N} \\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau}) \\boldsymbol{\\phi}(s_{m,\\tau},a_{m,\\tau})^\\top \\hat{\\boldsymbol{B}}}}{\\det \\sbr{\\gamma I}}}}\n\t\t\\nonumber\\\\\n\t\t\\overset{\\textup{(b)}}{\\leq} & \\sqrt{2 N k \\log \\sbr{ 1 + \\frac{ N }{\\gamma k} }} , \\label{eq:B_phi_leq_k_log}\n\t\\end{align}\n\twhere inequality (a) uses Lemma~\\ref{lemma:elliptical_potential}, and inequality (b) is due to Lemma~\\ref{lemma:computation_logdet}.\n\t\n\t\n\tCombining Eqs.~\\eqref{eq:ex_max_B_phi} and \\eqref{eq:B_phi_leq_k_log}, we have\n\t\\begin{align}\n\t\t\\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{\\max_{a \\in \\mathcal{A}} \\nbr{\\hat{\\boldsymbol{B}}^\\top \\boldsymbol{\\phi}(s,a)}_{\\boldsymbol{\\Sigma}_{N}^{-1}} } \\leq &\n\t\t\\frac{1}{4N} \\sbr{ 2\\sqrt{\\log \\sbr{\\frac{5}{\\delta}}} + \\sqrt{ 4 \\log \\sbr{\\frac{5}{\\delta}} + 4 \\sbr{ \\sqrt{2 N k \\log \\sbr{ 1 + \\frac{ N }{\\gamma k} }} + 2 \\log \\sbr{\\frac{5}{\\delta}} } } }^2\n\t\t\\nonumber\\\\\n\t\t\\overset{\\textup{(a)}}{\\leq} & \\frac{1}{2N} \\sbr{ 4 \\log \\sbr{\\frac{5}{\\delta}} + 4 \\log \\sbr{\\frac{5}{\\delta}} + 4 \\sbr{ \\sqrt{2 N k \\log \\sbr{ 1 + \\frac{ N }{\\gamma k} }} + 2 \\log \\sbr{\\frac{5}{\\delta}} } }\n\t\t\\nonumber\\\\\n\t\t= & \\frac{1}{N} \\sbr{ 2\\sqrt{2 N k \\log \\sbr{ 1 + \\frac{ N }{\\gamma k} }} + 8 \\log \\sbr{\\frac{5}{\\delta}} }\n\t\t\\nonumber\\\\\n\t\t= & 2\\sqrt{ \\frac{2 k 
\\log \\sbr{ 1 + \\frac{ N }{\\gamma k} }}{N} } + \\frac{8 \\log \\sbr{\\frac{5}{\\delta}} }{N} , \\label{eq:sum_ex_max_B_phi_final_bound}\n\t\\end{align}\n\twhere inequality (a) uses the Cauchy--Schwarz inequality.\n\t\n\tFurthermore, plugging Eq.~\\eqref{eq:sum_ex_max_B_phi_final_bound} into Eq.~\\eqref{eq:ex_max_phi_theta} and using $\\gamma \\geq 1$, we have that for $N \\geq 1$ and $\\sqrt{k}\\log(2N) \\geq 1$,\n\t\n\t\n\t\\begin{align}\n\t\t& \\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{\\max_{a \\in \\mathcal{A}} \\abr{\\boldsymbol{\\phi}(s,a)^\\top \\sbr{ \\hat{\\boldsymbol{\\theta}}_{m,N} - \\boldsymbol{\\theta}_m}}} \n\t\t\\nonumber\\\\\n\t\t\\leq & \\sbr{2\\sqrt{ \\frac{2 k \\log \\sbr{ 1 + \\frac{ N }{\\gamma k} }}{N} } + \\frac{8 \\log \\sbr{\\frac{5}{\\delta}} }{N}} \\cdot \\nonumber\\\\& \\sbr{ \\nbr{\\hat{\\boldsymbol{B}}_{\\perp}^\\top \\boldsymbol{B}} L_{\\phi} L_{w} \\sqrt{N k} + \\sqrt{k \\log \\sbr{ 1 + \\frac{ N }{\\gamma k} } + 2 \\log\\sbr{\\frac{5}{\\delta}}} + \\sqrt{\\gamma} L_{\\theta} } + \\nbr{\\hat{\\boldsymbol{B}}_{\\bot}^\\top \\boldsymbol{B}} L_{\\phi} L_{w}\n\t\t\\nonumber\\\\\n\t\t\\leq & \\frac{ 12 \\sqrt{k} \\log \\sbr{\\frac{5N}{\\delta}} }{\\sqrt{N}} \\sbr{ \\nbr{\\hat{\\boldsymbol{B}}_{\\perp}^\\top \\boldsymbol{B}} L_{\\phi} L_{w} \\sqrt{Nk} + 2\\sqrt{k} \\log\\sbr{\\frac{5N}{\\delta}} + \\sqrt{\\gamma} L_{\\theta} } + \\nbr{\\hat{\\boldsymbol{B}}_{\\bot}^\\top \\boldsymbol{B}} L_{\\phi} L_{w}\n\t\t\\nonumber\\\\\n\t\t\\leq & \\frac{ \\sbr{24 k + 12 \\sqrt{k \\gamma} L_{\\theta} } \\log^2 \\sbr{\\frac{5N}{\\delta}} }{\\sqrt{N}} + 24 k \\log \\sbr{\\frac{5N}{\\delta}} \\nbr{\\hat{\\boldsymbol{B}}_{\\perp}^\\top \\boldsymbol{B}} L_{\\phi} L_{w} . 
\\label{eq:ex_max_phi_theta_B_hat_times_B}\n\t\\end{align}\n\t\n\t\n\tUsing Lemma~\\ref{lemma:technical_tool_log_N_square} with $A=24 k + 12 \\sqrt{k \\gamma} L_{\\theta}$, $B=\\frac{5}{\\delta}$ and $\\kappa=\\frac{\\varepsilon}{4}$, we have that if\n\t\\begin{align*}\n\t\tN \\geq \\frac{26^4 \\sbr{24 k + 12 \\sqrt{k \\gamma} L_{\\theta}}^2 \\log^4 \\big({\\frac{2 \\cdot 5 (24 k + 12 \\sqrt{k \\gamma} L_{\\theta}) }{\\varepsilon \\delta}} \\big) }{\\sbr{\\frac{\\varepsilon}{4}}^2} ,\n\t\\end{align*} \n\tthen $\\frac{ \\sbr{24 k + 12 \\sqrt{k \\gamma} L_{\\theta} } \\log^2 \\sbr{\\frac{5N}{\\delta}} }{\\sqrt{N}} \\leq \\frac{\\varepsilon}{4}$.\n\t\n\tFurther enlarging $N$, if \n\t\\begin{align}\n\t\tN \\geq \\frac{4^2 \\cdot 26^4 \\cdot 24^2 \\cdot 2 \\sbr{k^2 + k \\gamma L_{\\theta}^2 } \\log^4 \\big({\\frac{240 ( k + \\sqrt{k \\gamma} L_{\\theta}) }{\\varepsilon \\delta}} \\big) }{\\varepsilon^2} , \\label{eq:number_of_samples_N}\n\t\\end{align}\n\tthen\n\t\\begin{align*}\n\t\t\\frac{ \\sbr{24 k + 12 \\sqrt{k \\gamma} L_{\\theta} } \\log^2 \\sbr{\\frac{5N}{\\delta}} }{\\sqrt{N}} \\leq \\frac{\\varepsilon}{4} .\n\t\\end{align*}\n\t\n\tAccording to Lemma~\\ref{lemma:concentration_hat_B_clb}, we have $\\nbr{\\hat{\\boldsymbol{B}}_{\\bot}^\\top \\boldsymbol{B}} \\leq \\frac{\\varepsilon}{ 96 k \\log \\sbr{\\frac{5N}{\\delta}} L_{\\phi} L_{w} }$.\n\t\n\tThus, setting $N$ as the value in Eq.~\\eqref{eq:number_of_samples_N}, and continuing with Eq.~\\eqref{eq:ex_max_phi_theta_B_hat_times_B}, we have\n\t\\begin{align*}\n\t\t\\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{\\max_{a \\in \\mathcal{A}} \\abr{\\boldsymbol{\\phi}(s,a)^\\top \\sbr{ \\hat{\\boldsymbol{\\theta}}_{m,N} - \\boldsymbol{\\theta}_m}}} \\leq \\frac{\\varepsilon}{4} + \\frac{\\varepsilon}{4}\n\t\t= \\frac{\\varepsilon}{2} .\n\t\\end{align*}\n\t\n\\end{proof}\n\n\n\n\n\n\n\n\\subsection{Proof of Theorem~\\ref{thm:bpi_ub}}\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:bpi_ub}]\n\tCombining 
Lemmas~\\ref{lemma:number_of_samples_T0}, \\ref{lemma:number_of_samples_p}, \\ref{lemma:Z_est_error}, \\ref{lemma:martingale_concentration_variance} and \\ref{lemma:reverse_bernstein_uncertainty}, we have that $\\Pr[\\mathcal{K} \\cap \\mathcal{L} \\cap \\mathcal{G} \\cap \\mathcal{H} \\cap \\mathcal{J}] \\geq 1-\\delta$.\n\tSuppose that event $\\mathcal{K} \\cap \\mathcal{L} \\cap \\mathcal{G} \\cap \\mathcal{H} \\cap \\mathcal{J}$ holds.\n\t\n\tFirst, we use an analytical procedure similar to that of \\cite{zanette2021design} to prove correctness.\n\t\n\tUsing Lemma~\\ref{lemma:number_of_samples_N}, we have that for any task $m \\in [M]$,\n\t\\begin{align*}\n\t\t\\mathbb{E}_{s \\sim \\mathcal{D}} \\mbr{\\max_{a \\in \\mathcal{A}} \\abr{\\boldsymbol{\\phi}(s,a)^\\top \\sbr{ \\hat{\\boldsymbol{\\theta}}_{m,N} - \\boldsymbol{\\theta}_m}}} \\leq \\frac{\\varepsilon}{2} .\n\t\\end{align*}\n\t\n\tFor any $m \\in [M]$ and $s \\in \\mathcal{S}$, let $\\beta_m(s):=\\max_{a\\in\\mathcal{A}}|\\boldsymbol{\\phi}(s,a)^\\top (\\hat{\\boldsymbol{\\theta}}_{m,N}-\\boldsymbol{\\theta}_m)|$ and $\\pi^*_m(s):=\\operatornamewithlimits{argmax}_{a \\in \\mathcal{A}} \\boldsymbol{\\phi}(s,a)^\\top \\boldsymbol{\\theta}_m$.\n\t\n\tFor any $m \\in [M]$ and $s \\in \\mathcal{S}$, we have\n\t\\begin{align*}\n\t\t\\boldsymbol{\\phi}(s,\\hat{\\pi}_m(s))^\\top \\boldsymbol{\\theta}_m \\geq & \\boldsymbol{\\phi}(s,\\hat{\\pi}_m(s))^\\top \\hat{\\boldsymbol{\\theta}}_{m,N} - \\beta_m(s)\n\t\t\\\\\n\t\t\\overset{\\textup{(a)}}{\\geq} & \\boldsymbol{\\phi}(s,\\pi^*_m(s))^\\top \\hat{\\boldsymbol{\\theta}}_{m,N} - \\beta_m(s)\n\t\t\\\\\n\t\t\\geq & \\boldsymbol{\\phi}(s,\\pi^*_m(s))^\\top \\boldsymbol{\\theta}_m - 2\\beta_m(s) ,\n\t\\end{align*}\n\twhere inequality (a) is due to the fact that $\\hat{\\pi}_m(s)$ is greedy with respect to $\\hat{\\boldsymbol{\\theta}}_{m,N}$.\n\t\n\tRearranging the above inequality and taking the expectation over $s$ on both sides, we 
have\n\t\\begin{align*}\n\t\t\\mathbb{E}_{s\\sim\\mathcal{D}}\\mbr{\\max_{a \\in \\mathcal{A}} \\sbr{\\boldsymbol{\\phi}(s,a) - \\boldsymbol{\\phi}(s,\\hat{\\pi}_m(s))}^\\top \\boldsymbol{\\theta}_m }\n\t\t\\leq 2 \\mathbb{E}_{s\\sim\\mathcal{D}}[\\beta_m(s)] \\leq \\varepsilon .\n\t\\end{align*}\n\t\n\t\n\tNow we prove the sample complexity.\n\tSumming the number of samples used in the main algorithm of $\\mathtt{C \\hyphen DouExpDes}$ and subroutines $\\mathtt{C \\hyphen FeatRecover}$ and $\\mathtt{EstLowRep}$ (Line~\\ref{line:bpi_estimate_context_dis} in Algorithm~\\ref{alg:repbpiclb}, Lines~\\ref{line:bpi_stage2_sample1}-\\ref{line:bpi_stage2_sample2} in Algorithm~\\ref{alg:con_feat_recover} and Line~\\ref{line:bpi_stage3_sample} in Algorithm~\\ref{alg:est_low_rep}), we have that the total number of samples is bounded by\n\t\\begin{align*}\n\t\t& T_0+2MTp+MN \n\t\t\\\\\n\t\t= & O \\Bigg( \\frac{L_{\\phi}^4}{\\nu^2} \\log^2 \\sbr{\\frac{d |\\mathcal{A}|}{\\delta}} + \\frac{ k^4 L_{\\phi}^4 L_{\\theta}^2 L_{w}^2 }{ \\nu^2 \\varepsilon^2 } \\log^6 \\sbr{ \\frac{ k d L_{\\phi} L_{\\theta} L_{w} N }{\\nu \\delta \\varepsilon} } \\cdot \\frac{L_{\\phi}^4}{\\nu^2} \\log^2 \\sbr{\\frac{dMT}{\\delta}} \\\\&+ M \\cdot \\frac{\\sbr{k^2 + k \\gamma L_{\\theta}^2 } \\log^4 \\big({\\frac{ k + \\sqrt{k \\gamma} L_{\\theta} }{\\varepsilon \\delta}} \\big) }{\\varepsilon^2} \\Bigg)\n\t\t\\\\\n\t\t= & O \\Bigg( \\frac{ k^4 L_{\\phi}^4 L_{\\theta}^2 L_{w}^2 }{ \\nu^2 \\varepsilon^2 } \\log^6 \\sbr{ \\frac{ |\\mathcal{A}| k d L_{\\phi} L_{\\theta} L_{w} N }{\\nu \\delta \\varepsilon} } \\cdot \\frac{L_{\\phi}^4}{\\nu^2} \\log^2 \\sbr{\\frac{dMT}{\\delta}} \\\\&+ M \\cdot \\frac{\\sbr{k^2 + k \\gamma L_{\\theta}^2 } \\log^4 \\big({\\frac{ k + \\sqrt{k \\gamma} L_{\\theta} }{\\varepsilon \\delta}} \\big) }{\\varepsilon^2} \\Bigg)\n\t\t\\\\\n\t\t= & \\tilde{O} \\Bigg( \\frac{ k^4 L_{\\phi}^8 L_{\\theta}^2 L_{w}^2 }{ \\nu^4 \\varepsilon^2 } + \\frac{ M \\sbr{k^2 + k \\gamma 
L_{\\theta}^2 } }{\\varepsilon^2} \\Bigg) .\n\t\\end{align*}\n\\end{proof}\n\n\n\\section{Technical Tools}\n\nIn this section, we provide some useful technical tools.\n\n\\begin{lemma}[Matrix Bernstein Inequality - Average, Lemma 31 in \\cite{tripuraneni2021provable}] \\label{lemma:matrix_bernstein_tripuraneni2021}\n\tConsider a truncation level $U>0$. If $\\{\\boldsymbol{Z}_1,\\dots,\\boldsymbol{Z}_n\\}$ is a sequence of $d_1 \\times d_2$ independent random matrices and \n\t$\\boldsymbol{Z}'_i = \\boldsymbol{Z}_i \\cdot \\indicator{\\|\\boldsymbol{Z}_i\\| \\leq U}$ for any $i \\in [n]$, then\n\t\\begin{align*}\n\t\t\\Pr \\mbr{ \\nbr{ \\frac{1}{n} \\sum_{i=1}^{n} \\sbr{ \\boldsymbol{Z}_i - \\mathbb{E}[\\boldsymbol{Z}_i] } } \\geq t } \\leq \n\t\t\\Pr \\mbr{ \\nbr{ \\frac{1}{n} \\sum_{i=1}^{n} \\sbr{ \\boldsymbol{Z}'_i - \\mathbb{E}[\\boldsymbol{Z}'_i] } } \\geq t - \\Delta } + n \\Pr \\mbr{ \\|\\boldsymbol{Z}_i\\| \\geq U } ,\n\t\\end{align*}\n\twhere $\\Delta \\geq \\|\\mathbb{E}[\\boldsymbol{Z}_i]-\\mathbb{E}[\\boldsymbol{Z}'_i]\\|$ for any $i \\in [n]$. 
\n\t\n\tIn addition, for $t \\geq \\Delta$, we have\n\t\\begin{align*}\n\t\t\\Pr \\mbr{ \\nbr{ \\frac{1}{n} \\sum_{i=1}^{n} \\sbr{ \\boldsymbol{Z}'_i - \\mathbb{E}[\\boldsymbol{Z}'_i] } } \\geq t - \\Delta } \\leq (d_1 + d_2) \\exp \\sbr{- \\frac{n^2 (t-\\Delta)^2}{2\\sigma^2 + \\frac{2Un(t-\\Delta)}{3}}} ,\n\t\\end{align*}\n\twhere\n\t\\begin{align*}\n\t\t\\sigma^2 = & \\max \\lbr{ \\nbr{\\sum_{i=1}^{n} \\mathbb{E}[(\\boldsymbol{Z}'_i-\\mathbb{E}[\\boldsymbol{Z}'_i])^\\top (\\boldsymbol{Z}'_i-\\mathbb{E}[\\boldsymbol{Z}'_i])]},\\ \\nbr{\\sum_{i=1}^{n} \\mathbb{E}[(\\boldsymbol{Z}'_i-\\mathbb{E}[\\boldsymbol{Z}'_i]) (\\boldsymbol{Z}'_i-\\mathbb{E}[\\boldsymbol{Z}'_i])^\\top]} } \n\t\t\\\\\n\t\t\\leq & \\max \\lbr{ \\nbr{\\sum_{i=1}^{n} \\mathbb{E}[{\\boldsymbol{Z}'_i}^\\top \\boldsymbol{Z}'_i]},\\ \\nbr{\\sum_{i=1}^{n} \\mathbb{E}[\\boldsymbol{Z}'_i {\\boldsymbol{Z}'_i}^\\top]} } .\n\t\\end{align*} \n\\end{lemma}\n\nLemma 31 in \\cite{tripuraneni2021provable} gives a truncated matrix Bernstein inequality for symmetric random matrices. Here we extend it to general random matrices.\n\nLemma~\\ref{lemma:matrix_bernstein_tripuraneni2021} can be obtained by combining the truncation argument in the proof of Lemma 31 in \\cite{tripuraneni2021provable} and Theorem 6.1.1 in \\cite{tropp2015introduction} (the classic matrix Bernstein inequality for general random matrices).\n\n\\begin{lemma}[Matrix Bernstein Inequality - Summation] \\label{lemma:matrix_bernstein_tau}\n\tConsider a truncation level $U>0$. 
If $\\{\\boldsymbol{Z}_1,\\dots,\\boldsymbol{Z}_n\\}$ is a sequence of $d_1 \\times d_2$ independent random matrices, and $\\boldsymbol{Z}'_i = \\boldsymbol{Z}_i \\cdot \\indicator{\\|\\boldsymbol{Z}_i\\| \\leq U}$ and $\\Delta \\geq \\|\\mathbb{E}[\\boldsymbol{Z}_i]-\\mathbb{E}[\\boldsymbol{Z}'_i]\\|$ for any $i \\in [n]$, then for $\\tau \\geq 2 n\\Delta$,\n\t\\begin{align*}\n\t\t\\Pr \\mbr{ \\nbr{ \\sum_{i=1}^{n} \\sbr{ \\boldsymbol{Z}_i - \\mathbb{E}[\\boldsymbol{Z}_i] } } \\geq \\tau } \\leq \n\t\t(d_1 + d_2) \\exp \\sbr{- \\frac{1}{4} \\cdot \\frac{ \\tau^2}{2\\sigma^2 + \\frac{U \\tau}{3}}} + n \\Pr \\mbr{ \\|\\boldsymbol{Z}_i\\| \\geq U } ,\n\t\\end{align*}\n\twhere \n\t\\begin{align*}\n\t\t\\sigma^2 = & \\max \\lbr{ \\nbr{\\sum_{i=1}^{n} \\mathbb{E}[(\\boldsymbol{Z}'_i-\\mathbb{E}[\\boldsymbol{Z}'_i])^\\top (\\boldsymbol{Z}'_i-\\mathbb{E}[\\boldsymbol{Z}'_i])]},\\ \\nbr{\\sum_{i=1}^{n} \\mathbb{E}[(\\boldsymbol{Z}'_i-\\mathbb{E}[\\boldsymbol{Z}'_i]) (\\boldsymbol{Z}'_i-\\mathbb{E}[\\boldsymbol{Z}'_i])^\\top]} } \n\t\t\\\\\n\t\t\\leq & \\max \\lbr{ \\nbr{\\sum_{i=1}^{n} \\mathbb{E}[{\\boldsymbol{Z}'_i}^\\top \\boldsymbol{Z}'_i]},\\ \\nbr{\\sum_{i=1}^{n} \\mathbb{E}[\\boldsymbol{Z}'_i {\\boldsymbol{Z}'_i}^\\top]} } .\n\t\\end{align*}\n\t\n\tFurthermore, we have\n\t\\begin{align*}\n\t\t\\Pr \\mbr{ \\nbr{ \\sum_{i=1}^{n} \\sbr{ \\boldsymbol{Z}_i - \\mathbb{E}[\\boldsymbol{Z}_i] } } \\geq 4 \\sqrt{ \\sigma^2 \\log \\sbr{\\frac{d_1 + d_2}{\\delta}} } + 4 U \\log \\sbr{\\frac{d_1 + d_2}{\\delta}} } \\leq \n\t\t\\delta + n \\Pr \\mbr{ \\|\\boldsymbol{Z}_i\\| \\geq U } .\n\t\\end{align*}\n\\end{lemma}\n\n\\begin{proof}[Proof of Lemma~\\ref{lemma:matrix_bernstein_tau}]\n\tUsing Lemma~\\ref{lemma:matrix_bernstein_tripuraneni2021} and defining $\\tau:=nt$, we have that for $\\tau>n\\Delta$,\n\t\\begin{align*}\n\t\t\\Pr \\mbr{ \\nbr{ \\sum_{i=1}^{n} \\sbr{ \\boldsymbol{Z}_i - \\mathbb{E}[\\boldsymbol{Z}_i] } } \\geq \\tau } \\leq & \n\t\t(d_1 + d_2) \\exp 
\\sbr{- \\frac{ (\\tau-n \\Delta)^2}{2\\sigma^2 + \\frac{2U(\\tau-n \\Delta)}{3}}} + n \\Pr \\mbr{ \\|\\boldsymbol{Z}_i\\| \\geq U } .\n\t\\end{align*}\n\tIf $\\tau>2n\\Delta$, then $\\tau-n\\Delta>\\frac{1}{2} \\tau$ and we have\n\t\\begin{align}\n\t\t\\Pr \\mbr{ \\nbr{ \\sum_{i=1}^{n} \\sbr{ \\boldsymbol{Z}_i - \\mathbb{E}[\\boldsymbol{Z}_i] } } \\geq \\tau } \n\t\t\\leq & (d_1 + d_2) \\exp \\sbr{- \\frac{ \\sbr{\\frac{1}{2} \\tau}^2}{2\\sigma^2 + \\frac{2U \\sbr{\\frac{1}{2} \\tau}}{3} } } + n \\Pr \\mbr{ \\|\\boldsymbol{Z}_i\\| \\geq U } \n\t\t\\nonumber\\\\\n\t\t\\leq & (d_1 + d_2) \\exp \\sbr{- \\frac{1}{4} \\cdot \\frac{ \\tau^2 }{2\\sigma^2 + \\frac{U \\tau}{3} }} + n \\Pr \\mbr{ \\|\\boldsymbol{Z}_i\\| \\geq U } . \\label{eq:matrix_bernstein_error}\n\t\\end{align} \n\tPlugging $\\tau=4 \\sqrt{ \\sigma^2 \\log \\sbr{\\frac{d_1 + d_2}{\\delta}} } + 4 U \\log \\sbr{\\frac{d_1 + d_2}{\\delta}}$ into Eq.~\\eqref{eq:matrix_bernstein_error}, we have\n\t\\begin{align*}\n\t\t& \\Pr \\mbr{ \\nbr{ \\sum_{i=1}^{n} \\sbr{ \\boldsymbol{Z}_i - \\mathbb{E}[\\boldsymbol{Z}_i] } } \\geq 4 \\sqrt{ \\sigma^2 \\log \\sbr{\\frac{d_1 + d_2}{\\delta}} } + 4 U \\log \\sbr{\\frac{d_1 + d_2}{\\delta}} } \n\t\t\\\\\n\t\t\\leq & (d_1 + d_2) \\exp \\sbr{- \\frac{1}{4} \\cdot \\frac{ 16 \\sigma^2 \\log \\sbr{\\frac{d_1 + d_2}{\\delta}} + 16 U^2 \\log^2 \\sbr{\\frac{d_1 + d_2}{\\delta}} + 32 U \\log \\sbr{\\frac{d_1 + d_2}{\\delta}} \\sqrt{ \\sigma^2 \\log \\sbr{\\frac{d_1 + d_2}{\\delta}} } }{2\\sigma^2 + \\frac{1}{3} \\sbr{4U \\sqrt{ \\sigma^2 \\log \\sbr{\\frac{d_1 + d_2}{\\delta}} } + 4 U^2 \\log \\sbr{\\frac{d_1 + d_2}{\\delta}} }} } \\\\& + n \\Pr \\mbr{ \\|\\boldsymbol{Z}_i\\| \\geq U}\n\t\t\\\\\n\t\t\\leq & (d_1 + d_2) \\exp \\sbr{- \\frac{1}{4} \\cdot 4 \\log \\sbr{\\frac{d_1 + d_2}{\\delta}} } + n \\Pr \\mbr{ \\|\\boldsymbol{Z}_i\\| \\geq U }\n\t\t\\\\\n\t\t= & \\delta + n \\Pr \\mbr{ \\|\\boldsymbol{Z}_i\\| \\geq U } 
.\n\t\\end{align*}\n\\end{proof}\n\n\n\\begin{lemma}\\label{lemma:technical_tool_bai_stage2}\n\tFor any $A,B>1$, $\\kappa \\in (0,1)$ and $T>0$ such that $\\log\\sbr{\\frac{AB}{\\kappa}}>1$ and $\\log(BT)>2$, if \n\t$$\n\tT \\geq \\frac{68 A^2 \\log^2 \\sbr{\\frac{AB}{\\kappa}} }{\\kappa^2} ,\n\t$$\n\tthen\n\t$$\n\t\\frac{A}{\\sqrt{T}} \\log(B T) \\leq \\kappa .\n\t$$\n\\end{lemma}\n\n\\begin{proof}[Proof of Lemma~\\ref{lemma:technical_tool_bai_stage2}]\n\tIf $T = \\frac{68 A^2 \\log^2 \\sbr{\\frac{AB}{\\kappa}} }{\\kappa^2}$, we have\n\t\\begin{align*}\n\t\t\\frac{A}{\\sqrt{T}} \\log(B T) = & \\frac{A \\kappa}{ \\sqrt{68} A \\log \\sbr{\\frac{AB}{\\kappa}} } \\log \\sbr{ \\frac{68 A^2 B \\log^2 \\sbr{\\frac{AB}{\\kappa}} }{\\kappa^2}}\n\t\t\\\\\n\t\t= & \\frac{\\kappa}{ \\sqrt{68} \\log \\sbr{\\frac{AB}{\\kappa}} } \\sbr{ \\log \\sbr{68} + \\log \\sbr{ \\frac{A^2 B}{\\kappa^2}} + \\log \\sbr{ \\log^2 \\sbr{\\frac{AB}{\\kappa}} } }\n\t\t\\\\\n\t\t\\leq & \\frac{\\kappa}{ \\sqrt{68} \\log \\sbr{\\frac{AB}{\\kappa}} } \\sbr{ \\log \\sbr{68} + 2 \\log \\sbr{ \\frac{A B}{\\kappa}} + 2 \\log \\sbr{ \\frac{AB}{\\kappa} } }\n\t\t\\\\\n\t\t\\leq & \\frac{\\kappa}{ \\sqrt{68} \\log \\sbr{\\frac{AB}{\\kappa}} } \\sbr{ \\log \\sbr{68} \\log \\sbr{ \\frac{A B}{\\kappa}} + 4 \\log \\sbr{ \\frac{A B}{\\kappa}} }\n\t\t\\\\\n\t\t\\leq & \\kappa .\n\t\\end{align*}\n\t\n\tLet $f(T)=\\frac{A}{\\sqrt{T}} \\log(B T)$. 
Then, the derivative of $f(T)$ is\n\t\\begin{align*}\n\t\tf'(T)= \\frac{2A-A\\log(BT)}{2T\\sqrt{T}} .\n\t\\end{align*}\n\tIf $\\log(BT)>2$, then $f'(T)<0$, and thus $f(T)$ is decreasing with respect to $T$.\n\t\n\tTherefore, if $T \\geq \\frac{68 A^2 \\log^2 \\sbr{\\frac{AB}{\\kappa}} }{\\kappa^2}$, we have\n\t\\begin{align*}\n\t\t\\frac{A}{\\sqrt{T}} \\log(B T) \\leq \\kappa.\n\t\\end{align*}\n\\end{proof}\n\n\n\\begin{lemma}\\label{lemma:technical_tool_log_N_square}\n\tFor any $A,B>1$ and $\\kappa \\in (0,1)$ such that $\\log (\\frac{AB}{\\kappa})>1$ and $\\log(BN)>4$, if \n\t$$\n\tN \\geq \\frac{26^4 A^2 \\log^4 (\\frac{AB}{\\kappa}) }{\\kappa^2} ,\n\t$$\n\tthen\n\t$$\n\t\\frac{A \\log^2 \\sbr{BN}}{\\sqrt{N}} \\leq \\kappa .\n\t$$\n\\end{lemma}\n\\begin{proof}[Proof of Lemma~\\ref{lemma:technical_tool_log_N_square}]\n\tIf $N = \\frac{26^4 A^2 \\log^4 (\\frac{AB}{\\kappa}) }{\\kappa^2}$, we have\n\t$\n\t\\kappa \\sqrt{N} = 26^2 A \\log^2 (\\frac{AB}{\\kappa}) \n\t$,\n\tand\n\t\\begin{align*}\n\t\tA \\log^2 \\sbr{BN} = & A \\log^2 \\sbr{ \\frac{26^4 A^2 B \\log^4 (\\frac{AB}{\\kappa}) }{\\kappa^2} }\n\t\t\\\\\n\t\t\\leq & A \\log^2 \\sbr{ \\frac{26^4 A^2 B }{\\kappa^2} \\cdot \\frac{A^4 B^4}{\\kappa^4} }\n\t\t\\\\\n\t\t\\leq & 36 A \\log^2 \\sbr{ \\frac{26 A B }{\\kappa} }\n\t\t\\\\\n\t\t= & 36 A \\sbr{\\log \\sbr{26} + \\log \\sbr{ \\frac{A B }{\\kappa} }}^2\n\t\t\\\\\n\t\t\\leq & 36 A \\sbr{\\log \\sbr{26} \\log \\sbr{ \\frac{A B }{\\kappa} } + \\log \\sbr{ \\frac{A B }{\\kappa} }}^2\n\t\t\\\\\n\t\t= & 36 \\sbr{\\log \\sbr{26}+1}^2 A \\log^2 \\sbr{ \\frac{A B }{\\kappa} }\n\t\t\\\\\n\t\t\\leq & 26^2 A \\log^2 \\sbr{ \\frac{A B }{\\kappa} }\n\t\t\\\\\n\t\t= & \\kappa \\sqrt{N} ,\n\t\\end{align*}\n\tand thus $\\frac{A \\log^2 \\sbr{BN}}{\\sqrt{N}} \\leq \\kappa$.\n\t\n\tLet $f(N)=\\frac{A \\log^2 \\sbr{BN}}{\\sqrt{N}}$. 
Then, the derivative function of $f(N)$ is\n\t\\begin{align*}\n\t\tf'(N)= \\frac{4A\\log(BN)-A\\log^2(BN)}{2N\\sqrt{N}} = \\frac{A\\log(BN) \\cdot (4-\\log(BN))}{2N\\sqrt{N}} .\n\t\\end{align*}\n\tIf $\\log(BN)>4$, then $f'(N)<0$, and thus $f(N)$ is decreasing with respect to $N$.\n\t\n\tTherefore, if $N \\geq \\frac{26^4 A^2 \\log^4 (\\frac{AB}{\\kappa}) }{\\kappa^2}$, we have\n\t$\\frac{A \\log^2 \\sbr{BN}}{\\sqrt{N}} \\leq \\kappa$.\n\\end{proof}\n\n\n\\begin{lemma}\\label{lemma:sqrt_n_k}\n\tFor any $\\boldsymbol{x}_1,\\dots,\\boldsymbol{x}_n \\in \\mathbb{R}^k$, we have\n\t\\begin{align*}\n\t\t\\sum_{j=1}^{n} \\|\\boldsymbol{x}_j\\|_{\\sbr{\\sum_{i=1}^{n} \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top}^{-1}} \\leq \\sqrt{nk} .\n\t\\end{align*}\n\\end{lemma}\n\n\\begin{proof}[Proof of Lemma~\\ref{lemma:sqrt_n_k}]\n\tIt holds that\n\t\\begin{align*}\n\t\t\\sum_{j=1}^{n} \\|\\boldsymbol{x}_j\\|_{\\sbr{\\sum_{i=1}^{n} \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top}^{-1}} = & \\sum_{j=1}^{n} \\sqrt{\\boldsymbol{x}_j^\\top \\sbr{\\sum_{i=1}^{n} \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top}^{-1} \\boldsymbol{x}_j}\n\t\t\\\\\n\t\t\\leq & \\sqrt{ n \\cdot \\sum_{j=1}^{n} \\boldsymbol{x}_j^\\top \\sbr{\\sum_{i=1}^{n} \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top}^{-1} \\boldsymbol{x}_j}\n\t\t\\\\\n\t\t\\leq & \\sqrt{ n \\cdot \\sum_{j=1}^{n} \\textup{Trace} \\sbr{ \\boldsymbol{x}_j^\\top \\sbr{\\sum_{i=1}^{n} \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top}^{-1} \\boldsymbol{x}_j}}\n\t\t\\\\\n\t\t= & \\sqrt{ n \\cdot \\sum_{j=1}^{n} \\textup{Trace} \\sbr{ \\boldsymbol{x}_j \\boldsymbol{x}_j^\\top \\sbr{\\sum_{i=1}^{n} \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top}^{-1} }}\n\t\t\\\\\n\t\t= & \\sqrt{ n \\cdot \\textup{Trace} \\sbr{ \\sum_{j=1}^{n} \\boldsymbol{x}_j \\boldsymbol{x}_j^\\top \\sbr{\\sum_{i=1}^{n} \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top}^{-1} }}\n\t\t\\\\\n\t\t= & \\sqrt{ n \\cdot \\textup{Trace} \\sbr{ \\boldsymbol{I}_k } }\n\t\t\\\\\n\t\t= & \\sqrt{ n k 
}\n\t\\end{align*}\n\\end{proof}\n\n\\begin{lemma}\\label{lemma:sqrt_n_k_gamma}\n\tFor any $\\boldsymbol{x}_1,\\dots,\\boldsymbol{x}_n \\in \\mathbb{R}^k$ and $\\gamma>0$, we have\n\t\\begin{align*}\n\t\t\\sum_{j=1}^{n} \\|\\boldsymbol{x}_j\\|_{\\sbr{\\gamma I + \\sum_{i=1}^{n} \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top}^{-1}} \\leq \\sqrt{nk} .\n\t\\end{align*}\n\\end{lemma}\n\n\\begin{proof}[Proof of Lemma~\\ref{lemma:sqrt_n_k_gamma}]\n\tIt holds that\n\t\\begin{align*}\n\t\t\\sum_{j=1}^{n} \\|\\boldsymbol{x}_j\\|_{\\sbr{\\gamma I + \\sum_{i=1}^{n} \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top}^{-1}} = & \\sum_{j=1}^{n} \\sqrt{\\boldsymbol{x}_j^\\top \\sbr{\\gamma I + \\sum_{i=1}^{n} \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top}^{-1} \\boldsymbol{x}_j}\n\t\t\\\\\n\t\t\\leq & \\sqrt{ n \\cdot \\sum_{j=1}^{n} \\boldsymbol{x}_j^\\top \\sbr{\\gamma I + \\sum_{i=1}^{n} \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top}^{-1} \\boldsymbol{x}_j}\n\t\t\\\\\n\t\t= & \\sqrt{ n \\cdot \\sum_{j=1}^{n} \\textup{Trace} \\sbr{ \\boldsymbol{x}_j^\\top \\sbr{\\gamma I + \\sum_{i=1}^{n} \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top}^{-1} \\boldsymbol{x}_j}}\n\t\t\\\\\n\t\t= & \\sqrt{ n \\cdot \\sum_{j=1}^{n} \\textup{Trace} \\sbr{ \\boldsymbol{x}_j \\boldsymbol{x}_j^\\top \\sbr{\\gamma I + \\sum_{i=1}^{n} \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top}^{-1} }}\n\t\t\\\\\n\t\t= & \\sqrt{ n \\cdot \\textup{Trace} \\sbr{ \\sum_{j=1}^{n} \\boldsymbol{x}_j \\boldsymbol{x}_j^\\top \\sbr{\\gamma I + \\sum_{i=1}^{n} \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top}^{-1} }}\n\t\t\\\\\n\t\t\\overset{\\textup{(a)}}{\\leq} & \\sqrt{ n \\!\\cdot\\! \\sbr{\\textup{Trace} \\sbr{ \\sum_{j=1}^{n} \\boldsymbol{x}_j \\boldsymbol{x}_j^\\top \\sbr{\\gamma I + \\sum_{i=1}^{n} \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top}^{\\!\\!\\!-1} } \\!+\\! 
\\textup{Trace} \\sbr{ \\gamma \\sbr{\\gamma I + \\sum_{i=1}^{n} \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top}^{\\!\\!\\!-1} } } }\n\t\t\\\\\n\t\t= & \\sqrt{ n \\cdot \\textup{Trace} \\sbr{ \\sum_{j=1}^{n} \\boldsymbol{x}_j \\boldsymbol{x}_j^\\top \\sbr{\\gamma I + \\sum_{i=1}^{n} \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top}^{-1} + \\gamma \\sbr{\\gamma I + \\sum_{i=1}^{n} \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top}^{-1} }}\n\t\t\\\\\n\t\t= & \\sqrt{ n \\cdot \\textup{Trace} \\sbr{ \\sbr{\\gamma I + \\sum_{j=1}^{n} \\boldsymbol{x}_j \\boldsymbol{x}_j^\\top} \\sbr{\\gamma I + \\sum_{i=1}^{n} \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top}^{-1} } }\n\t\t\\\\\n\t\t= & \\sqrt{ n \\cdot \\textup{Trace} \\sbr{ \\boldsymbol{I}_k } }\n\t\t\\\\\n\t\t= & \\sqrt{ n k } ,\n\t\\end{align*}\n\twhere inequality (a) holds because $\\sbr{\\gamma I + \\sum_{i=1}^{n} \\boldsymbol{x}_i \\boldsymbol{x}_i^\\top}$ is positive definite, so the added trace term is nonnegative.\n\\end{proof}\n\n\n\n\\begin{lemma}[Self-normalized Concentration for Martingales, Theorem 1 in \\cite{abbasi2011improved}]\\label{lemma:self-normalized_vector_concentration}\n\tLet $\\{\\mathcal{F}_t\\}_{t=0}^{\\infty}$ be a filtration such that for any $t \\geq 1$, the selected action $\\boldsymbol{X}_t \\in \\mathbb{R}^k$ is $\\mathcal{F}_{t-1}$-measurable, the noise $\\eta_t \\in \\mathbb{R}$ is $\\mathcal{F}_{t}$-measurable, and conditioned on $\\mathcal{F}_{t-1}$, $\\eta_t$ is zero-mean and $R$-sub-Gaussian. Let $\\boldsymbol{V}_0 \\in \\mathbb{R}^{k \\times k}$ be a positive definite matrix and let $\\boldsymbol{V}_t=\\sum_{i=1}^{t} \\boldsymbol{X}_i \\boldsymbol{X}_i^\\top$ for any $t \\geq 1$. 
Then, for any $\\delta>0$, with probability at least $1-\\delta$, for all $t \\geq 1$,\n\t\\begin{align*}\n\t\t\\nbr{\\sum_{i=1}^{t} \\boldsymbol{X}_i \\cdot \\eta_i}^2_{\\sbr{\\boldsymbol{V}_0+\\boldsymbol{V}_t}^{-1}} \\leq 2R^2 \\log \\sbr{ \\frac{\\det(\\boldsymbol{V}_0+\\boldsymbol{V}_t)^{\\frac{1}{2}} }{ \\det(\\boldsymbol{V}_0)^{\\frac{1}{2}} \\cdot \\delta} } .\n\t\\end{align*}\n\\end{lemma}\n\n\\begin{lemma}[Reverse Bernstein Inequality for Martingales, Theorem 3 in \\cite{zanette2021design}] \\label{lemma:reverse_bernstein}\n\tLet $(\\boldsymbol{\\Sigma},\\mathcal{F},\\Pr[\\cdot])$ be a probability space and consider the stochastic process $\\{\\boldsymbol{X}_t\\}$ adapted to the filtration $\\{\\mathcal{F}_t\\}$. Let $\\mathbb{E}_t [\\boldsymbol{X}_t]:=\\mathbb{E}[\\boldsymbol{X}_t|\\mathcal{F}_{t-1}]$ be the conditional expectation of $\\boldsymbol{X}_t$ given $\\mathcal{F}_{t-1}$. If $0 \\leq \\boldsymbol{X}_t \\leq 1$, then it holds that\n\t\\begin{align*}\n\t\t\\Pr \\mbr{ \\sum_{t=1}^{T} \\mathbb{E}_t [\\boldsymbol{X}_t] \\geq \\frac{1}{4} \\sbr{ 2\\sqrt{\\log \\sbr{\\frac{1}{\\delta}}} + \\sqrt{ 4 \\log \\sbr{\\frac{1}{\\delta}} + 4 \\sbr{ \\sum_{t=1}^{T} \\boldsymbol{X}_t + 2 \\log \\sbr{\\frac{1}{\\delta}} } } }^2 } \\leq \\delta .\n\t\\end{align*}\n\\end{lemma}\n\n\\begin{lemma}[Elliptical Potential Lemma, Lemma 11 in \\cite{abbasi2011improved}] \\label{lemma:elliptical_potential}\n\tLet $\\{\\boldsymbol{X}_t\\}_{t=1}^{\\infty}$ be a sequence in $\\mathbb{R}^k$. Let $\\boldsymbol{V}_0$ be a $k \\times k$ positive definite matrix and let $\\boldsymbol{V}_t = \\boldsymbol{V}_0 + \\sum_{i=1}^{t} \\boldsymbol{X}_i \\boldsymbol{X}_i^\\top$ such that for any $t \\geq 1$, $\\|\\boldsymbol{X}_t\\|^2_{\\boldsymbol{V}_{t-1}^{-1}} \\leq 1$. 
Then, we have that\n\t\\begin{align*}\n\t\t\\sum_{t=1}^{n} \\nbr{ \\boldsymbol{X}_t }^2_{\\boldsymbol{V}_{t-1}^{-1}} \\leq 2 \\log \\frac{\\det(\\boldsymbol{V}_n)}{\\det(\\boldsymbol{V}_0)} .\n\t\\end{align*}\n\\end{lemma}\n\n\n\n\n\n\\begin{lemma}[Moments of Sub-Gaussian Random Variables, Proposition 3.2 in \\cite{subgaussian_note}] \\label{lemma:subgaussian_moment}\n\tFor a $\\sigma^2$-sub-Gaussian random variable $\\boldsymbol{X}$ which satisfies\n\t\\begin{align*}\n\t\t\\mathbb{E} \\mbr{ \\exp \\sbr{\\mu \\boldsymbol{X} } } \\leq \\exp \\sbr{ \\frac{\\sigma^2 \\mu^2}{2} }, \\ \\forall \\mu \\in \\mathbb{R} ,\n\t\\end{align*}\n\twe have that for any integer $n \\geq 1$,\n\t\\begin{align*}\n\t\t\\mathbb{E} [|\\boldsymbol{X}|^{n}] \\leq \\sbr{2\\sigma^2}^{\\frac{n}{2}} n \\cdot \\Gamma\\sbr{\\frac{n}{2}} ,\n\t\\end{align*}\n\twhere $\\Gamma(\\cdot)$ denotes the Gamma function, which satisfies $\\Gamma(n)=(n-1)!$ for any integer $n\\geq1$.\n\\end{lemma}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n\n\\vskip4mm Let $R$ be a ring with an identity. Idempotents, units and nilpotent elements play important roles in ring theory. The motivation of this paper is to investigate the structures of various rings involving such special elements. An element $a$ in a ring is called very idempotent if $a$ or $-a$ is an idempotent. An element $a\\in R$ is called (weakly) clean if there exists a (very) idempotent $e\\in R$ and a unit $u\\in R$ such that $a=e+u$~\\cite{A}. An element $a\\in R$ is (weakly) nil-clean provided that there exists a (very) idempotent $e\\in R$ and a nilpotent $w\\in R$ such that $a=e+w$~\\cite{B,D}. These inspire us to introduce two concepts. We call an element $a\\in R$ (weakly) precious if there exists a (very) idempotent $e\\in R$, a unit $u\\in R$ and a nilpotent $w\\in R$ such that $a=e+u+w$. A ring $R$ is called a weakly clean (weakly precious, nil-clean, precious) ring if every element in $R$ is weakly clean (weakly precious, nil-clean, precious). 
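For instance, in the ring ${\\Bbb Z}_4$ one can check the precious decomposition\n$$2=1+3+2 ,$$\nin which $1$ is an idempotent, $3$ is a unit and $2$ is nilpotent (as $2^2=0$ in ${\\Bbb Z}_4$), so $2$ is a precious element of ${\\Bbb Z}_4$. 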
Many fundamental properties of commutative weakly clean rings were obtained in ~\\cite{Ah} and ~\\cite{A}, and weakly nil-clean rings were comprehensively studied by Breaz et al. in ~\\cite{B}.\n\nIn this paper, we shall explore the structures of these rings. In Section 2, we prove that the direct product $R=\\prod R_{i}$ of rings $R_i$ is weakly precious if and only if each $R_{i}$ is weakly precious and at most one is not precious. Furthermore, we show that the precious property is invariant under any Morita context. In Section 3, we focus on weakly clean rings and nil-clean rings. Let $R$ be a commutative ring with at most three maximal ideals. If $2\\in U(R)$ and $J(R)$ is nil, we prove that $R$ is weakly clean. This provides a new type of weakly clean rings. A ring $R$ is abelian if every idempotent is central. We show that if $R$ is abelian then $M_n(R)$ is nil-clean if and only if $R\/J(R)$ is Boolean and $M_n(J(R))$ is nil. This extends the main results of Breaz et al.~\\cite{BGDT} and of Ko\\c{s}an et al.~\\cite{KLZ}. In the last section, we investigate when a ring consists entirely of very idempotents, units, and nilpotent elements. We prove that a ring consists entirely of very idempotents, units and nilpotent elements if and only if $R$ is isomorphic to one of the following: a Boolean ring; ${\\Bbb Z}_3\\oplus {\\Bbb Z}_3$; ${\\Bbb Z}_3\\oplus B$ where $B$ is a Boolean ring; a local ring with a nil Jacobson radical; $M_2\\big({\\Bbb Z}_2\\big)$ or $M_2\\big({\\Bbb Z}_3\\big)$; or the ring of a Morita context with zero pairings where the underlying rings are ${\\Bbb Z}_2$ or ${\\Bbb Z}_3$. The structure of such rings is thereby completely determined.\n\nThroughout, all rings are associative with an identity. $M_n(R)$ and $T_n(R)$ denote the rings of all $n\\times n$ full matrices and of all $n\\times n$ triangular matrices over $R$, respectively. $J(R)$ and $P(R)$ stand for the Jacobson radical and the prime radical of $R$. 
$Id(R)=\\{ e\\in R~|~e^2=e\\in R\\}, -Id(R)=\\{ e\\in R~|~e^2=-e\\in R\\}$, $U(R)$ is the set of all units in $R$, and $N(R)$ is the set of all nilpotent elements in $R$.\n\n\\section{Weakly Precious Rings}\n\n\\vskip4mm We start this section by showing that weak cleanness and weak preciousness, as well as nil-cleanness and preciousness, are not the same for elements in a ring.\n\n\\vskip4mm \\hspace{-1.8em} {\\bf Example 2.1.}\\ \\ {\\it $(1)$ Every weakly clean element in a ring is weakly precious, but the converse is not true.}\\vspace{-.5mm}\n\\begin{enumerate}\n\\item [(2)] {\\it Every nil-clean element in a ring is precious, but the converse is not true.}\n\\end{enumerate} \\vspace{-.5mm} {\\it Proof.}\\ \\ $(1)$ Obviously, every weakly clean element in a ring is weakly precious. But the converse is not true. Consider the matrix $A=\\left(\n\\begin{array}{cc}\n3&9\\\\\n-7&-2\n\\end{array}\n\\right)\\in M_2({\\Bbb Z})$. By ~\\cite[Theorem 4]{An}, $A$ is not clean. Thus, $(A,-A)\\in M_2({\\Bbb Z})\\times M_2({\\Bbb Z})$ is not weakly clean. But $(A,-A)$ is precious, as it has the precious decomposition\n$$\\big(\\left(\n\\begin{array}{cc}\n1&0\\\\\n7&0\n\\end{array}\n\\right),\\left(\n\\begin{array}{cc}\n1&0\\\\\n6&0\n\\end{array}\n\\right)\\big)+\\big(\\left(\n\\begin{array}{cc}\n-1&0\\\\\n-13&1\n\\end{array}\n\\right),\\left(\n\\begin{array}{cc}\n-1&0\\\\\n0&-1\n\\end{array}\n\\right)\\big)+\\big(\\left(\n\\begin{array}{cc}\n3&9\\\\\n-1&-3\n\\end{array}\n\\right),\\left(\n\\begin{array}{cc}\n-3&-9\\\\\n1&3\n\\end{array}\n\\right)\\big).$$ Thus, $(A,-A)$ is weakly precious.\n\n$(2)$ Let $a\\in R$ be nil-clean. Then there exists an idempotent $e\\in R$ and a nilpotent $w\\in R$ such that $a=e+w$, and so $a=(1-e)+(2e-1)+w$. As $(2e-1)^2=1$, we see that $a\\in R$ is precious. Thus, every nil-clean element in a ring is precious. The converse is not true. 
For instance, $-1\\in {\\Bbb Z}_3$ is not nil-clean, but it is precious.\\hfill$\\Box$\n\n\\vskip4mm Example 2.1 shows that $\\{~\\mbox{weakly clean elements}~\\}\\subsetneq \\{~\\mbox{weakly precious elements}~\\}$ and $\\{~\\mbox{nil-clean elements}~\\}\\subsetneq \\{~\\mbox{precious elements}~\\}.$ Although weakly precious rings are abundant, there do exist rings which are not weakly precious. Since $Id\\big({\\Bbb Z}\\big)=\\{ 0,1\\}, U\\big({\\Bbb Z}\\big)=\\{ 1,-1\\}$ and ${\\Bbb Z}$ has no nonzero nilpotent element, we easily check that $5\\in {\\Bbb Z}$ is not weakly precious. Therefore, the ring ${\\Bbb Z}$ of all integers is not weakly precious. The purpose of this section is to investigate when a ring is weakly precious or precious. Clearly, every homomorphic image of a weakly precious ring is weakly precious. Further, we derive\n\n\\vskip4mm \\hspace{-1.8em} {\\bf Lemma 2.2.}\\ \\ {\\it Let $I$ be a nil ideal of a ring $R$. Then $R$ is weakly precious if and only if $R\/I$ is weakly precious.} \\vskip2mm\\hspace{-1.8em} {\\it Proof.}\\ \\ Let $R$ be a weakly precious ring. Then $R\/I$ is weakly precious. Now assume that $R\/I$ is weakly precious. Let $a\\in R$. Then $\\overline{a}= a+I=\\overline{e} +\\overline{u}+\\overline{w}$ or $\\overline{a}= a+I=\\overline{-e} +\\overline{u}+\\overline{w}$ for an idempotent $\\overline{e}\\in R\/I$, a unit $\\overline{u}\\in R\/I$ and a nilpotent $\\overline{w}\\in R\/I$. As $I$ is a nil ideal of $R$, we easily check that $u$ is a unit in $R$ and $w$ is a nilpotent element. Since every idempotent lifts modulo $I$, we can find an idempotent $f\\in R$ such that $\\overline{f}=\\overline{e}$. Hence, $a= f+u+(w+c)$ or $a= -f+u+(w+c)$ for some $c\\in I$. Write $w^m=0~(m\\geq 1)$. 
Then $(w+c)^m\\in I$; since $I$ is a nil ideal, $w+c\\in R$ is a nilpotent element. Therefore $a\\in R$ is weakly precious, as required.\\hfill$\\Box$\n\n\\vskip4mm \\hspace{-1.8em} {\\bf Theorem 2.3.}\\ \\ {\\it Let $R$ be a ring. Then $R$ is weakly precious if and only if $R[[x]]\/(x^n) (n\\in {\\Bbb N})$ is weakly precious.} \\vskip2mm\\hspace{-1.8em} {\\it Proof.}\\ \\ Clearly, $R[[x]]\/(x^n)=\\{ a_0+a_1x+\\cdots +a_{n-1}x^{n-1}~|~a_0,\\cdots ,a_{n-1}\\in R\\}$. Let $\\alpha : R[[x]]\/(x^n)\\longrightarrow R$ be the map given by $\\alpha(f)=f(0)$. It is easy to check that $\\alpha$ is a ring epimorphism and $\\ker\\alpha$ is a nil ideal of $R[[x]]\/(x^n)$, and therefore the result follows from Lemma 2.2.\\hfill$\\Box$\n\n\\vskip4mm An element $a\\in R$ is strongly nilpotent if for every sequence $a_0,a_1,\\cdots ,a_i,\\cdots$ such that $a_0 =a$ and $a_{i+1}\\in a_iRa_i$, there exists an $n$ with $a_n=0$. The prime radical (i.e., the intersection of all prime ideals) of a ring is exactly the set of all its strongly nilpotent elements.\n\n\\vskip4mm \\hspace{-1.8em} {\\bf Lemma 2.4.}\\ \\ {\\it Let $R$ be a ring. Then the following are equivalent:}\\vspace{-.5mm}\n\\begin{enumerate}\n\\item [(1)] {\\it $R$ is weakly precious.}\n\\vspace{-.5mm} \\item [(2)] {\\it $R\/P(R)$ is weakly precious.}\\end{enumerate} \\vspace{-.5mm} {\\it Proof.}\\ \\ This is obvious by Lemma 2.2, as $P(R)$ is nil.\\hfill$\\Box$\n\n\\vskip4mm Recall that a ring $R$ is 2-primal provided that every nilpotent element of $R$ is strongly nilpotent~\\cite{MMZ}.\n\n\\vskip4mm \\hspace{-1.8em} {\\bf Theorem 2.5.}\\ \\ {\\it Let $R$ be 2-primal. Then the following are equivalent:}\\vspace{-.5mm}\n\\begin{enumerate}\n\\item [(1)] {\\it $R$ is weakly precious;}\n\\vspace{-.5mm} \\item [(2)] {\\it $R$ is weakly clean;}\n\\vspace{-.5mm} \\item [(3)] {\\it $R\/P(R)$ is weakly clean.}\\end{enumerate} \\vspace{-.5mm} {\\it Proof.}\\ \\ $(1)\\Rightarrow (2)$ Let $a\\in R$. 
As $R$ is weakly precious, $a=e+u+w$ or $a=-e+u+w$ for an idempotent $e\\in R$, a unit $u\\in R$ and a nilpotent $w\\in R$. This shows that $a= e+u\\big(1+u^{-1}w\\big)$ or $a=-e+u\\big(1+u^{-1}w\\big)$. As $R$ is a 2-primal ring, we get $w\\in P(R)$. Since $P(R)$ is a nil ideal of $R$, $1+u^{-1}w\\in U(R)$, and therefore $R$ is weakly clean.\n\n$(2)\\Rightarrow (3)$ is clear.\n\n$(3)\\Rightarrow (1)$ Since $R\/P(R)$ is weakly clean, it is weakly precious. Therefore we complete the proof by Lemma 2.4.\\hfill$\\Box$\n\n\\vskip4mm A ring $R$ is called nil-semicommutative if $ab=0$ in $R$ implies that $aRb=0$ for every $a, b \\in N(R)$ (see \\cite{MMZ}). For instance, every semicommutative ring (i.e., $ab=0$ in $R$ implies that $aRb=0$) is nil-semicommutative.\n\n\\vskip4mm \\hspace{-1.8em} {\\bf Corollary 2.6.}\\ \\ {\\it Let $R$ be nil-semicommutative. Then the following are equivalent:}\\vspace{-.5mm}\n\\begin{enumerate}\n\\item [(1)] {\\it $R$ is weakly precious;}\n\\vspace{-.5mm} \\item [(2)] {\\it $R$ is weakly clean;}\n\\vspace{-.5mm} \\item [(3)] {\\it $R\/P(R)$ is weakly clean.}\\end{enumerate} \\vspace{-.5mm} {\\it Proof.}\\ \\ Let $R$ be a nil-semicommutative ring. Then, by ~\\cite[Lemma 2.7]{MMZ}, $R$ is 2-primal, so the result follows from Theorem 2.5.\\hfill$\\Box$\n\n\\vskip4mm A ring $R$ is called a right (left) quasi-duo ring if every maximal right (left) ideal of $R$ is an ideal. For instance, local rings, duo rings and weakly right (left) duo rings are all right (left) quasi-duo rings. We now derive\n\n\\vskip4mm \\hspace{-1.8em} {\\bf Proposition 2.7.}\\ \\ {\\it Let $R$ be a right (left) quasi-duo ring. Then the following are equivalent:}\\vspace{-.5mm}\n\\begin{enumerate}\n\\item [(1)] {\\it $R$ is weakly precious;}\n\\vspace{-.5mm} \\item [(2)] {\\it $R$ is weakly clean;}\n\\vspace{-.5mm} \\item [(3)] {\\it $R\/P(R)$ is weakly clean.}\\end{enumerate} \\vspace{-.5mm} {\\it Proof.}\\ \\ $(1)\\Rightarrow (2)$ Let $a\\in R$. 
Then there exists a very idempotent $e\\in R$, a unit $u\\in R$ and a nilpotent element $w\\in R$ such that $a=e+u+w$. As $R$ is right (left) quasi-duo, it follows from ~\\cite[Lemma 2.3]{Yu} that $w\\in J(R)$. Thus, $a=e+u\\big(1+u^{-1}w\\big)$; hence, $a\\in R$ is weakly clean.\n\n$(2)\\Rightarrow (3)$ This is obvious.\n\n$(3)\\Rightarrow (1)$ Clearly, $R\/P(R)$ is weakly precious, and therefore the result follows by Lemma 2.4.\\hfill$\\Box$\n\n\\vskip4mm \\hspace{-1.8em} {\\bf Lemma 2.8.}\\ \\ {\\it Let $R$ be weakly precious and $S$ be precious. Then $R\\oplus S$ is weakly precious.} \\vskip2mm\\hspace{-1.8em} {\\it Proof.}\\ \\ Set $A=R\\oplus S$. Let $(a,b)\\in A$. Then there exists an idempotent $e\\in R$, a unit $u\\in R$ and a nilpotent $v\\in R$ such that $a=e+u+v$ or $a=-e+u+v$.\n\nCase I. $a=e+u+v$. Then we have an idempotent $f\\in S$, a unit $s\\in S$ and a nilpotent $w\\in S$ such that $b=f+s+w$. Thus, $(a,b)=(e,f)+(u,s)+(v,w)$, where $(e,f)\\in A$ is an idempotent, $(u,s)\\in A$ is a unit and $(v,w)\\in A$ is nilpotent.\n\nCase II. $a=-e+u+v$. Then we have an idempotent $f\\in S$, a unit $s\\in S$ and a nilpotent $w\\in S$ such that $-b=f+s+w$. Thus, $(a,b)=-(e,f)+(u,-s)+(v,-w)$, where $(e,f)\\in A$ is an idempotent, $(u,-s)\\in A$ is a unit and $(v,-w)\\in A$ is nilpotent.\n\nTherefore we conclude that $(a,b)$ is the sum of a very idempotent, a unit and a nilpotent element in $A$, and hence the result follows.\\hfill$\\Box$\n\n\\vskip4mm \\hspace{-1.8em} {\\bf Theorem 2.9.}\\ \\ {\\it Let $\\{R_{i}\\}$ be a family of rings. Then the direct product $R=\\prod R_{i}$ of rings $R_i$ is weakly precious if and only if each $R_{i}$ is weakly precious and at most one is not precious.}\n\\vskip2mm\\hspace{-1.8em} {\\it Proof.}\\ \\ $\\Longrightarrow $ Obviously, each $R_i$ is weakly precious. Suppose $R_{i_1}$ and $R_{i_2} (i_1\\neq i_2)$ are not precious. 
Then there exist some\n$x_{i_j}\in R_{i_j} (j=1,2)$ such that $x_{i_1}\in R_{i_1}$ and\n$-x_{i_2}\in R_{i_2}$ are not precious. Choose $x=(x_i)$ where\n$x_i=0$ whenever $i\neq i_j (j=1,2)$. Then, for any idempotent $e\in R$,\n$x\pm e$ is not the sum of a unit and a nilpotent; hence, $x$ is not\nweakly precious. This gives a contradiction. Therefore each\n$R_{i}$ is weakly precious and at most one is not precious.\n\n$\Longleftarrow$ Suppose that $R_{i_0}$ is weakly precious and all\nthe others $R_i$ are precious. Then $\prod\limits_{i\neq\ni_{0}}R_{i}$ is precious. In light of Lemma 2.8, we conclude\nthat $R$ is weakly precious.\hfill$\Box$\n\n\vskip4mm \hspace{-1.8em} {\bf Corollary 2.10.}\ \ {\it Let\n$L=\prod\limits_{i\in I}R_{i}$ be the direct product of rings\n$R_i\cong R$ and $|I|\geq 2$. Then $L$ is weakly precious if and\nonly if $R$ is precious if and only if $L$ is precious.}\n\n\vskip4mm \hspace{-1.8em} {\bf Lemma 2.11.}\ \ {\it Let $e=e^2 \in\nR$ be such that $eRe$ is weakly precious and $(1-e)R(1-e)$ is\nprecious. Then $R$ is weakly precious.} \vskip2mm\hspace{-1.8em}\n{\it Proof.} As $e\in R$ is an idempotent, we have $R\cong\n\left(\n\begin{array}{cc}\neRe&eR(1-e)\\\n(1-e)Re&(1-e)R(1-e)\n\end{array}\n\right)$. Let $A=\left(\n\begin{array}{cc}\na&b\\\nc&d\n\end{array}\n\right)$ be an element of $R$. As $eRe$ is weakly precious and\n$(1-e)R(1-e)$ is precious, $a=f+u+w$ or $a=-f+u+w$ for some\nidempotent $f\in eRe$, $u\in U(eRe)$ and a nilpotent $w\in eRe$.\n\nCase I. Let $a=f+u+w$. Then $d-cu^{-1}b \in (1-e)R(1-e)$, and so\n$d-cu^{-1}b=g+v+z$ for some idempotent $g$, unit $v$ and nilpotent\n$z\in (1-e)R(1-e)$.
Now $$A= \\left(\n\\begin{array}{cc}\nf+u+w&b\\\\\nc&g+v+z+cu^{-1}b\n\\end{array}\n\\right) =\\left(\n\\begin{array}{cc}\nf&0\\\\\n0&g\n\\end{array}\n\\right)+ \\left(\n\\begin{array}{cc}\nu&b\\\\\nc&v+cu^{-1}b\n\\end{array}\n\\right)+ \\left(\n\\begin{array}{cc}\nw&0\\\\\n0&z\n\\end{array}\n\\right).$$ It is clear that $\\left(\n\\begin{array}{cc}\nf&0\\\\\n0&g\n\\end{array}\n\\right)$ is an idempotent element of $R$ and $\\left(\n\\begin{array}{cc}\nw&0\\\\\n0&z\n\\end{array}\n\\right) $ is a nilpotent element of $R$, so we need only to show\nthat $\\left(\n\\begin{array}{cc}\nu&b\\\\\nc&v+cu^{-1}b\n\\end{array}\n\\right)$ is a unit of $R$. One easily checks that\n$$\\left(\n\\begin{array}{cc}\ne&0\\\\\n-cu^{-1}&1-e\n\\end{array}\n\\right)\\left(\n\\begin{array}{cc}\nu&b\\\\\nc&v+cu^{-1}b\n\\end{array}\n\\right)\\left(\n\\begin{array}{cc}\ne&-u^{-1}\\\\\n0&1-e\n\\end{array}\n\\right)\\\\\n=\\left(\n\\begin{array}{cc}\nu&0\\\\\n0&v\n\\end{array}\n\\right).$$ Hence, $\\left(\n\\begin{array}{cc}\nu&b\\\\\nc&v+cu^{-1}b\n\\end{array}\n\\right)$ is invertible. Thus, $A$ is precious.\n\nCase II. Let $a=-f+u+w$. Then $-a=f-u-w$. By the similar way of\nCase I, we see that $-A$ is precious, as required.\\hfill$\\Box$\n\n\\vskip4mm A Morita context $(R,S,M,N,\\psi,\\varphi)$ consists of\ntwo rings $R$ and $S$, two bimodules $_RN_S$ and $_SM_R$, and a\npair of bimodule homomorphisms $\\psi : N\\bigotimes\\limits_{S}M\\to\nR$ and $\\varphi: M\\bigotimes\\limits_{R} N\\to S$ which satisfy the\nfollowing associativity: $\\psi \\big(n\\bigotimes m\\big)n'=n\\varphi\n\\big(m\\bigotimes n'\\big)$ and $\\varphi \\big(m\\bigotimes\nn\\big)m'=m\\psi \\big(n\\bigotimes m'\\big)$ for any $m,m'\\in M,\nn,n'\\in N$. 
The ring $T=\\{ \\left(\n\\begin{array}{cc}\nr&m\\\\\nn&s \\end{array} \\right)~|~r\\in R,s\\in S,m\\in M,n\\in N\\}$ is called\nthe ring of the Morita context $(R,S,M,N,\\psi,\\varphi)$.\n\n\\vskip4mm \\hspace{-1.8em} {\\bf Theorem 2.12.}\\ \\ {\\it Let $T$ be\nthe ring of the Morita context $(R,S,M,N,\\varphi,\\phi)$. If $R$ is\nweakly precious and $S$ is precious, then $T$ is weakly precious.}\n\\vskip2mm\\hspace{-1.8em} {\\it Proof.}\\ \\ Let $R$ be weakly\nprecious and $S$ be precious, and let $e=diag(1_{R},0)$. Since\n$eTe\\cong R$ and $(1_{T}-e)T(1_{T}-e)\\cong S$, it follows by Lemma\n2.11 that $T$ is a weakly precious ring, as asserted.\\hfill$\\Box$\n\n\\vskip4mm Many properties of weakly precious rings can be extended\nto precious rings. For instance, $R$ is precious if and only if\n$R[[x]]\/(x^n) (n\\in {\\Bbb N})$ is precious. The direct product\n$\\prod R_{i}$ of rings $R_i$ is precious if and only if each\n$R_{i}$ is precious. But the subdirect product of (weakly)\nprecious rings is not necessarily (weakly) precious. For instance,\n${\\Bbb Z}$ is a subdirect product of rings $\\lbrace{\\Bbb Z_n},\nn\\geq 2\\rbrace$, where each ${\\Bbb Z_n} (n\\geq 2)$ is precious,\nbut ${\\Bbb Z}$ is not.\n\n\\vskip4mm \\hspace{-1.8em} {\\bf Lemma 2.13.}\\ \\ {\\it Let $e=e^2 \\in\nR$ be such that $eRe$ and $(1-e)R(1-e)$ are precious. Then $R$ is\nprecious.} \\vskip2mm\\hspace{-1.8em} {\\it Proof.}\\ \\ Let $e\\in R$\nbe an idempotent element, we have $R\\cong \\left(\n\\begin{array}{cc}\neRe&eR(1-e)\\\\\n(1-e)Re&(1-e)R(1-e)\n\\end{array}\n\\right)$. Now suppose that $A=\\left(\n\\begin{array}{cc}\na&b\\\\\nc&d\n\\end{array}\n\\right)$ be an element of $R$. Since $eRe$ and $(1-e)R(1-e)$ are\nprecious rings, $a=f+u+w$ for some idempotent $f\\in eRe$, $u\\in\nU(eRe)$ and a nilpotent $w\\in\\ eRe$. Let $u^{-1}$ be the inverse\nof $u$. Then $d-cu^{-1}b\\in (1-e)R(1-e)$, so $d-cu^{-1}b=g+v+z$\nfor some idempotent $g$, unit $v$ and nilpotent $z\\in\n(1-e)R(1-e)$. 
Now $A= \\left(\n\\begin{array}{cc}\nf+u+w&b\\\\\nc&g+v+z+cu^{-1}b\n\\end{array}\n\\right) =\\left(\n\\begin{array}{cc}\nf&0\\\\\n0&g\n\\end{array}\n\\right)+\\left(\n\\begin{array}{cc}\nu&0\\\\\n0&v+cu^{-1}b\n\\end{array}\n\\right)+ \\left(\n\\begin{array}{cc}\nw&0\\\\\n0&z\n\\end{array}\n\\right)$. As in the proof of Lemma 2.11, we easily checks that\n$\\left(\n\\begin{array}{cc}\nu&b\\\\\nc&v+cu^{-1}b\n\\end{array}\n\\right)$ is a unit of $R$, and the the result follows.\\hfill$\\Box$\n\n\\vskip4mm \\hspace{-1.8em} {\\bf Theorem 2.14.}\\ \\ {\\it Let $T$ be\nthe ring of the Morita context $(R,S,M,N,\\varphi,\\phi)$. If $R$\nand $S$ are precious, then $T$ is precious.}\n\\vskip2mm\\hspace{-1.8em} {\\it Proof.}\\ \\ Let $R,S$ be precious\nrings and let $e=diag(1_{R},0)$. Then $eTe\\cong R$ and\n$(1_{T}-e)T(1_{T}-e)\\cong S$. By virtue of Lemma 2.13, $T$ is a\nprecious ring. \\hfill$\\Box$\n\n\\vskip4mm \\hspace{-1.8em} {\\bf Corollary 2.15.}\\ \\ {\\it Let $R$ be\nprecious. Then $M_n(R)$ is precious.} \\vskip2mm\\hspace{-1.8em}\n{\\it Proof.}\\ \\ If $n=2$. Then the result follows by Theorem 2.14.\nSuppose that the result holds for $n\\leq k ( k\\geq 2)$. Then $R$\nand $M_k(R)$ are both precious. In light of Theorem 2.14, the ring\n$\\left(\n\\begin{array}{cc}\nR&M\\\\\nN&M_k(R)\n\\end{array}\n\\right)$ is precious, where $M=\\{\\left(\n\\begin{array}{ccc}\nb_{1}&\\cdots &b_k\n\\end{array}\n\\right)~|~b_1,\\cdots ,b_k\\in R\\}$ and $N=\\{\\left(\n\\begin{array}{c}\nc_{1}\\\\\n\\vdots\\\\\nc_k\n\\end{array}\n\\right)~|~c_1,\\cdots ,c_k\\in R\\}$. This completes the proof by\ninduction.\\hfill$\\Box$\n\n\\vskip4mm \\hspace{-1.8em} {\\bf Theorem 2.16.}\\ \\ {\\it Let $R$ be a\nring. 
Then the following are equivalent:}\vspace{-.5mm}\n\begin{enumerate}\n\item [(1)] {\it $R$ is precious.}\n\vspace{-.5mm}\n\item [(2)] {\it $T_n(R)$ is precious for all $n\in {\Bbb N}$.}\n\vspace{-.5mm}\n\item [(3)] {\it\n$T_n(R)$ is precious for some $n\in {\Bbb N}$.} \vspace{-.5mm}\n\item [(4)] {\it\n$T_n(R)$ is weakly precious for some $n\geq 2$.}\n\end{enumerate} \vspace{-.5mm} {\it Proof.}\ \ $(1)\Rightarrow\n(2)$ The result holds for $n=2$ by Theorem 2.14. Assume that the\nresult holds for $n\leq k$ $(k\geq 2)$. Let $n=k+1$. Then\n$T_n(R)\cong \left(\n\begin{array}{cc} R&M\\\n0&T_{k}(R) \end{array} \right)$, where $M =\{ ( c_{1}, \cdots ,\nc_{k})\mid c_1,\cdots ,c_{k}\in R\}$. In light of Theorem 2.14,\n$T_{k+1}(R)$ is precious. This proves $(2)$ by induction.\n\n$(2)\Rightarrow (3)$ is trivial.\n\n$(3)\Rightarrow (1)$ This is obvious, as the diagonal entries of\nan idempotent (a unit, a nilpotent) triangular matrix over $R$ are\nidempotents (units, nilpotents) in $R$.\n\n$(2)\Rightarrow (4)$ is trivial.\n\n$(4)\Rightarrow (1)$ Let $a\in R$. Choose $A=diag(a,-a,0,\cdots ,0)\in T_n(R)$.
By hypothesis, there exist an idempotent $E\in T_n(R)$, a unit\n$U\in T_n(R)$ and a nilpotent $W\in T_n(R)$ such that $A=E+U+W$ or\n$A=-E+U+W$. Let $e_i,u_i$ and $w_i$ denote the $i$-th diagonal\nentries of $E,U$ and $W$, respectively. As the diagonal entries of\nan idempotent (a unit, a nilpotent) triangular matrix are\nidempotents (units, nilpotents) in $R$, comparing the first two\ndiagonal entries yields $a=e_1+u_1+w_1$ or $-a=-e_2+u_2+w_2$; that\nis, $a=e_1+u_1+w_1$ or $a=e_2-u_2-w_2$.\nClearly, $e_1,e_2$ are idempotents, $u_1, u_2$ are units and\n$w_1,w_2$ are nilpotent. This proves $(1)$.\hfill$\Box$\n\n\vskip4mm A ring $R$ is weakly periodic provided that for any\n$a\in R$ there exists some $p=p^m~(m\geq 2)$ such that $a-p\in R$\nis nilpotent. For instance, every periodic ring is weakly\nperiodic.\n\n\vskip4mm \hspace{-1.8em} {\bf Corollary 2.17.}\ \ {\it Let $R$ be\na weakly periodic ring. Then $M_n(R)$ and $T_n(R)$ are precious\nfor all $n\in {\Bbb N}$.} \vskip2mm\hspace{-1.8em} {\it Proof.}\ \\nFor any $a\in R$, there exist a $p=p^{k+1} (k\in {\Bbb N})$ and a\nnilpotent $w\in R$ such that $a=p+w$. If $k=1$, then $p\in R$ is\nan idempotent; writing $p=(1-p)+(2p-1)$ and noting that\n$(2p-1)^2=1$, we get $a=(1-p)+(2p-1)+w$, and so $a\in R$ is\nprecious. Suppose that $k\geq\n2$. Set $e=1-p^k, u=p-1+p^k$. Then $e=e^2\in\nR$, $u^{-1}=p^{k-1}-1+p^k$, and $p=e+u$. Thus, $a=e+u+w$, and\nthen $R$ is precious. Therefore we complete the proof, by\nCorollary 2.15 and Theorem 2.16.\hfill$\Box$\n\n\vskip4mm Let $R$ be a ring and $M$ an $R$-$R$-bimodule.
The\ntrivial extension of $R$ by $M$,\n$$R\propto M=\{ \left(\n\begin{array}{cc}\nr&m\\\n&r\n\end{array}\n\right)~|~r\in R,m\in M\}$$ is (weakly) precious if and only if\n$R$ is (weakly) precious.\n\n\section{Certain Subclasses}\n\n\vskip4mm Weakly clean rings and nil-clean rings form the main types\nof subclasses of weakly precious rings and precious rings,\nrespectively. The purpose of this section is to offer new types of\nsuch rings. In ~\cite{A}, Anderson and Camillo proved that if a\nring $R$ has at most two maximal ideals and $2\in U(R)$ then $R$\nis weakly clean. We extend this result as follows.\n\n\vskip4mm \hspace{-1.8em} {\bf Theorem 3.1.}\ \ {\it Let $R$ be a\ncommutative ring with at most three maximal ideals. If $2\in U(R)$\nand $J(R)$ is nil, then $R$ is weakly clean.}\n\vskip2mm\hspace{-1.8em} {\it Proof.}\ \ Case I. $R$ has only one\nmaximal ideal. Then $R$ is local, and so it is clean.\n\nCase II. $R$ has only two maximal ideals. Then $R$ is weakly\nclean, by ~\cite[Proposition 16]{A}.\n\nCase III. $R$ has three maximal ideals $M_1,M_2$ and $M_3$. Let\n$a\in R$. If $a\not\in M_1,M_2,M_3$, then $aR=R$; hence, $a\in\nU(R)$. So we may assume that $a\in M_1$. If $1-a\not\in\nM_1,M_2,M_3$, then $1-a\in U(R)$, and so $a\in R$ is clean. If\n$1-a\in M_1$, then $1=a+(1-a)\in M_1$, a contradiction. Thus,\n$1-a\in M_2$ or $1-a\in M_3$. If $1+a\not\in M_1,M_2,M_3$, then\n$1+a\in U(R)$; hence, $a\in R$ is weakly clean. If $1+a\in M_1$,\nthen $1=(1+a)-a\in M_1$, a contradiction. Thus, $1+a\in M_2$ or\n$1+a\in M_3$. There are only two possible cases: $1-a\in M_2,\n1+a\in M_3$ or $1-a\in M_3, 1+a\in M_2$, as $2\in U(R)$. Thus, we\nmay assume that $1-a\in M_2, 1+a\in M_3$. Hence, $a(1-a)(1+a)\in\nM_1M_2M_3\subseteq J(R)$. Thus, $\overline{a}=\overline{a}^3\in\nR\/J(R)$. Set $\overline{e}=\overline{1-a^2}$ and\n$\overline{u}=\overline{a^2+a-1}$.
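The three identities needed for this choice ($\overline{e}^2=\overline{e}$, $\overline{u}^2=\overline{1}$ and $\overline{e}+\overline{u}=\overline{a}$, granted $\overline{a}=\overline{a}^3$) follow by direct computation. A small Python sketch (illustrative only, not part of the proof) confirms them in ${\Bbb Z}_n$ for every $a$ with $a=a^3$:

```python
# For each a in Z/n with a = a^3, verify that e = 1 - a^2 is an
# idempotent, u = a^2 + a - 1 is a square root of 1, and a = e + u.
def check(n):
    for a in range(n):
        if (a - a**3) % n:           # only elements satisfying a = a^3
            continue
        e = (1 - a * a) % n
        u = (a * a + a - 1) % n
        assert (e * e - e) % n == 0  # e^2 = e
        assert (u * u - 1) % n == 0  # u^2 = 1
        assert (e + u - a) % n == 0  # a = e + u
    return True

assert all(check(n) for n in range(2, 50))
```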
Then $\\overline{e}\\in R\/J(R)$\nis an idempotent and $\\overline{u}^2=\\overline{1}$. Further, we\nhave $\\overline{a}=\\overline{e}+\\overline{u}$. By hypothesis,\n$J(R)$ is nil, and then every unit and every idempotent lift\nmodulo $J(R)$. So we may assume that $e\\in R$ is an idempotent and\n$u\\in U(R)$. Set $w:=a-e-u$. Then $a=e+u+w$ where $w\\in J(R)$.\nClearly, $u+w=u(1+u^{-1}w)\\in U(R)$, and so $a\\in R$ is clean.\nTherefore $R$ is weakly clean.\\hfill$\\Box$\n\n\\vskip4mm \\hspace{-1.8em} {\\bf Example 3.2.}\\ \\ {\\it Let\n$R=k[[x,y,z]]$ where $k$ is a field with $char(k)\\neq 2$. Let\n$S=R-(x)\\cup (y)\\cup (z)$. Then the ring $R_S\/J^2(R_S)$ is weakly\nclean.} \\vskip2mm\\hspace{-1.8em} {\\it Proof.}\\ \\ Since\n$k[[x,y,z]]\/(x)\\cong k[[y,z]]$ is an integral domain, we see that\n$(x)$ is a prime ideal of $k[[x,y,z]]$. Likewise, $(y)$ and $(z)$\nare prime ideals of $R$. Let $S=R-(x)\\bigcup (y)\\bigcup (z)$. Then\n$S$ is a multiplicative closed subset of $R$. Let $P$ be a maximal\nideal of $R_S$. Then we have an ideal $Q$ of $R$ such that $P=Q_S$\nsuch that $Q\\bigcap S=\\emptyset$. Thus, $Q\\subseteq (x)\\bigcup\n(y)\\bigcup (z)$. Assume that $Q\\nsubseteq (x), Q\\nsubseteq (y)$\nand $Q\\nsubseteq (z)$. Then we have some $b,c,d\\in Q$, but\n$b\\not\\in (x)$, $c\\not\\in (y)$ and $d\\not\\in (z)$. Choose\n$a=b+c+d$. Then $a\\in Q$. If $a\\in (x)$, then $c+d\\in (x)$. If\n$c\\not\\in (x)$, then $c\\in (z)$. This implies that $d\\in (x)$.\nHence, $c\\in (z)\\bigcap (x)=0$. This gives a contradiction. If\n$c\\in (x)$, then $d\\in (x)$; hence that $b=a-(c+d)\\in (x)$, a\ncontradiction. Hence, $a\\not\\in (x)$. Likewise, $a\\not\\in (y)$ and\n$a\\not\\in (z)$. Thus, $a\\not\\in (x)\\bigcup (y)\\bigcup (z)$, a\ncontradiction. We infer that $Q\\subseteq (x)$, or $Q\\subseteq (y)$\nor $Q\\subseteq (z)$. Hence, $Q_S\\subseteq (x)_S$, or $Q_S\\subseteq\n(y)_S$, or $Q_S\\subseteq (z)_S$. By the maximality of $P$, we get\n$P=(x)_S$, or $(y)_S$, or $(z)_S$. 
Thus, $R_S$ has exactly three\nmaximal ideals $(x)_S$, $(y)_S$ and $(z)_S$. In particular, $R_S$ has at\nmost three maximal ideals. Since $char(k)\neq 2$, we see that\n$2\in U(R_S)$.\n\nSet $A=R_S\/J^2(R_S)$. Then $A$ has at most three maximal ideals\nand $2\in U(A)$. If $\overline{a}\in J(A)$, then\n$\overline{1-ar}\in U(A)$ for any $r\in R_S$. Hence, $1-ar\in\nU(R_S)$. This implies that $a\in J(R_S)$, and so\n$\overline{a^2}=\overline{0}$. That is, $\overline{a}$ is\nnilpotent. So, $J(A)$ is nil. Therefore we complete the proof, in\nterms of Theorem 3.1.\hfill$\Box$\n\n\vskip4mm \hspace{-1.8em} {\bf Example 3.3.}\ \ {\it Let $R=\{\n\frac{m}{n}~|~(m,n)=1, m,n\in {\Bbb Z}~\mbox{and}~3,5,7~\nmid\nn\}$. Then the ring $R\/J^2(R)$ is weakly clean.}\n\vskip2mm\hspace{-1.8em} {\it Proof.}\ \ Let $M$ be an ideal of\n$R$ such that $3R\subsetneq M\subseteq R$. Choose $\frac{m}{n}\in\nM$ with $\frac{m}{n}\not\in 3R$. Then $3\nmid m$, and so\n$(3,m)=1$. So $3k+lm=1$ for some $k,l\in {\Bbb Z}$. This shows\nthat $\frac{1}{1}=3\cdot \frac{k}{1}+l\frac{m}{n}\cdot\n\frac{n}{1}\in M$, i.e., $M=R$. Thus, $3R$ is a maximal ideal of\n$R$. Likewise, $5R$ and $7R$ are maximal ideals of $R$. For any\n$\frac{m}{n}\in 3R\bigcap 5R\bigcap 7R$ and any $\frac{a}{b}\in R$,\nwe have $\frac{1}{1}-\frac{m}{n}\frac{a}{b}=\frac{nb-ma}{nb}$. Write\n$\frac{m}{n}=\frac{3s}{t}$. Then $3sn=mt$, and so $3~|~mt$. Since\n$3\nmid t$, we get $3~|~m$. Obviously, $3\nmid nb$; hence, $3\nmid\n(nb-ma)$. Similarly, $5,7\nmid (nb-ma)$. It follows that\n$\frac{nb}{nb-ma}\in U(R)$. We infer that $\frac{m}{n}\in J(R)$.\nTherefore $3R\bigcap 5R\bigcap 7R\subseteq J(R)$. Let $M$ be a\nmaximal ideal of $R$ and $M\neq 3R, 5R, 7R$. Then $3R+M=R, 5R+M=R$\nand $7R+M=R$. Thus, $R=(3R+M)(5R+M)(7R+M)\subseteq 3R\bigcap\n5R\bigcap 7R+M=J(R)+M\subseteq M$; hence, $R=M$, a contradiction. We\ninfer that $R$ is a commutative ring with exactly three maximal\nideals.
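The hypotheses of Theorem 3.1 are also easy to test by brute force in a finite model: ${\Bbb Z}_{105}$ ($105=3\cdot 5\cdot 7$) likewise has exactly three maximal ideals, $2$ invertible and zero (hence nil) Jacobson radical, so Theorem 3.1 predicts that it is weakly clean. A Python sketch (illustrative only, not part of the proof):

```python
from math import gcd

def is_weakly_clean(n):
    """Brute force: every a in Z/n is e + u or -e + u
    for some idempotent e and unit u."""
    idempotents = [e for e in range(n) if (e * e - e) % n == 0]
    units = {u for u in range(n) if gcd(u, n) == 1}
    return all(any((a - e) % n in units or (a + e) % n in units
                   for e in idempotents)
               for a in range(n))

# 105 = 3 * 5 * 7: three maximal ideals (3), (5), (7), and 2 is a unit
assert is_weakly_clean(105)
```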
Obviously $2\\in R$ is invertible. Therefore $A:=R\/J^2(R)$\nis a commutative ring with exactly three maximal ideals. Obviously\n$2\\in A$ is invertible. As in the proof of Example 3.2, $A$ has\nthe nil Jacobson radical. We conclude that $A$ is weakly clean, by\nTheorem 3.1.\\hfill$\\Box$\n\n\\vskip4mm In ~\\cite[Question 3]{D}, Diesl asked: Let $R$ be a nil\nclean ring, and let $n$ be a positive integer. Is $M_n(R)$ nil\nclean? In ~\\cite[Theorem 3]{BGDT}, Breaz et al. proved that their\nmain theorem: for a field $K$, $M_n(K)$ is nil-clean if and only\nif $K\\cong {\\Bbb Z}_2$. They also asked if this result could be\nextended to division rings. As a main result in ~\\cite{KLZ},\nKo\\c{s}an et al. gave a positive answer to this problem. They\nshowed that the preceding equivalence holds for any division ring.\nWe shall extend ~\\cite[Theorem 3]{BGDT} and ~\\cite[Theorem 3]{KLZ}\nto an arbitrary abelian ring.\n\n\\vskip4mm Recall that a ring $R$ is an exchange ring if for every\n$a\\in R$ there exists an idempotent $e\\in aR$ such that $1-e\\in\n(1-a)R$. Clearly, every nil-clean ring is an exchange ring.\n\n\\vskip4mm \\hspace{-1.8em} {\\bf Lemma 3.4.}\\ \\ {\\it Let $R$ be an\nabelian exchange ring, and let $x\\in R$. Then $RxR=R$ if and only\nif $x\\in U(R)$.} \\vskip2mm\\hspace{-1.8em} {\\it Proof.}\\ \\ If $x\\in\nU(R)$, then $RxR=R$. Conversely, assume that $RxR=R$. As in the\nproof of ~\\cite[Proposition 17.1.9]{CH}, there exists an\nidempotent $e\\in R$ such that $e\\in xR$ such that $ReR=R$. This\nimplies that $e=1$. Write $xy=1$. Then $yx=y(xy)x=(yx)^2$. Hence,\n$yx=y(yx)x$. Therefore $1=x(yx)y=xy(yx)xy=yx$, and so $x\\in U(R)$.\nThis completes the proof.\\hfill$\\Box$\n\n\\vskip4mm Set $J^*(R)=\\bigcap \\{ P~|~P~\\mbox{is a maximal ideal\nof}~R\\}.$ We will see that $J(R)\\subseteq J^*(R)$. In general,\nthey are not the same. 
For instance, $J(R)=0$ and $J^*(R)=\{ x\in\nR~|~dim_F(xV)<\infty\}$, where $R=End_F(V)$ and $V$ is an\ninfinite-dimensional vector space over a field $F$.\n\n\vskip4mm \hspace{-1.8em} {\bf Lemma 3.5.}\ \ {\it Let $R$ be an\nabelian exchange ring. Then $J^*(R)=J(R)$.}\n\vskip2mm\hspace{-1.8em} {\it Proof.}\ \ Let $M$ be a maximal\nideal of $R$. If $J(R)\nsubseteq M$, then $J(R)+M=R$. Write\n$x+y=1$ with $x\in J(R),y\in M$. Then $y=1-x\in U(R)$, a\ncontradiction. Hence, $J(R)\subseteq M$. This implies that $J(R)\subseteq\nJ^*(R)$. Let $x\in J^*(R)$, and let $r\in R$. If $R(1-xr)R\neq R$,\nthen we can find a maximal ideal $M$ of $R$ such that\n$R(1-xr)R\subseteq M$, and so $1-xr\in M$. It follows that\n$1=xr+(1-xr)\in M$, which is impossible. Therefore $R(1-xr)R=R$. In\nlight of Lemma 3.4, $1-xr\in U(R)$, and then $x\in J(R)$. This\ncompletes the proof.\hfill$\Box$\n\n\vskip4mm \hspace{-1.8em} {\bf Lemma 3.6.}\ \ {\it Let $R$ be a\nring with no non-trivial idempotents, and let $n\in {\Bbb N}$.\nThen the following are equivalent:}\vspace{-.5mm}\n\begin{enumerate}\n\item [(1)]{\it $M_n(R)$ is nil-clean.}\n\item [(2)]{\it $R\/J(R)\cong {\Bbb Z}_2$ and $M_n(J(R))$ is nil.}\n\end{enumerate} \vspace{-.5mm} {\it Proof.}\ \ $(1)\Rightarrow (2)$ In view of ~\cite[Proposition 3.16]{D}, $J(M_n(R))$ is\nnil, and then so is $M_n(J(R))$.\n\nLet $a\in R$. By hypothesis, $M_n(R)$ is nil-clean. If $n=1$, then\n$R$ is nil-clean. Hence, $a\in N(R)$ or $a-1\in N(R)$. This shows\nthat $a\in U(R)$ or $1-a\in U(R)$, and so $R$ is local. That is,\n$R\/J(R)$ is a division ring. As $R\/J(R)$ is nil-clean, it follows\nfrom ~\cite[Theorem 3]{BGDT} that $R\/J(R)\cong {\Bbb Z}_2$. We now\nassume that $n\geq 2$. Then there exist an idempotent $E\in\nM_n(R)$ and a nilpotent $W\in M_n(R)$ such that $I_n+\left(\n\begin{array}{cccc}\na&&\\\n&0&\\\n&&\ddots&\\\n&&&0\n\end{array}\n\right)=E+W$. Set $U=-I_n+W$. Then $U\in GL_n(R)$.
Hence,\n$$U^{-1}\\left(\n\\begin{array}{cccc}\na&&\\\\\n&0&\\\\\n&&\\ddots&\\\\\n&&&0\n\\end{array}\n\\right)=U^{-1}E+I_n=\\big(U^{-1}EU\\big)U^{-1}+I_n.$$ Set\n$F=U^{-1}EU$. Then $F=F^2\\in M_n(R)$, and that\n$$(I_n-F)U^{-1}\\left(\n\\begin{array}{cccc}\na&&\\\\\n&0&\\\\\n&&\\ddots&\\\\\n&&&0\n\\end{array}\n\\right)=I_n-F.$$ Write $I_n-F=\\left(\n\\begin{array}{cccc}\ne&0&\\\\\n*&0&\\\\\n\\vdots&&\\ddots&\\\\\n*&0&&0\n\\end{array}\n\\right).$ By hypothesis, $e=0$ or $1$. If $e=0$, then $I_n-F=0$,\nand so $E=I_n$. This shows that $\\left(\n\\begin{array}{cccc}\na&&\\\\\n&0&\\\\\n&&\\ddots&\\\\\n&&&0\n\\end{array}\n\\right)=W$ is nilpotent; hence that $a\\in R$ is nilpotent. Thus,\n$1-a\\in U(R)$.\n\nIf $e=1$, then $F=\\left(\n\\begin{array}{cccc}\n0&0&\\\\\n*&1&\\\\\n\\vdots&&\\ddots&\\\\\n*&0&&1\n\\end{array}\n\\right).$ Write $U^{-1}=\\left(\n\\begin{array}{cc}\n\\alpha&\\beta\\\\\n\\gamma&\\delta\n\\end{array}\n\\right),$ where $\\alpha\\in R, \\beta\\in M_{1\\times (n-1)}(R),$\n$\\gamma\\in M_{(n-1)\\times 1}(R)$ and $\\delta\\in M_{(n-1)\\times\n(n-1)}(R)$. Then $$\\left(\n\\begin{array}{cc}\n\\alpha&\\beta\\\\\n\\gamma&\\delta\n\\end{array}\n\\right)\\left(\n\\begin{array}{cccc}\na&&\\\\\n&0&\\\\\n&&\\ddots&\\\\\n&&&0\n\\end{array}\n\\right)=\\left(\n\\begin{array}{cc}\n0&0\\\\\nx&I_{n-1}\n\\end{array}\n\\right)\\left(\n\\begin{array}{cc}\n\\alpha&\\beta\\\\\n\\gamma&\\delta\n\\end{array}\n\\right)+I_n,$$ where $x\\in M_{(n-1)\\times 1}(R)$. 
Thus, we get\n$$\\begin{array}{c}\n\\alpha a=1, \\gamma a=x\\alpha+\\gamma, 0=x\\beta+\\delta+I_{n-1}.\n\\end{array}$$ One easily checks that\n$$\\left(\n\\begin{array}{cc}\n1&\\beta\\\\\n0&I_{n-1}\n\\end{array}\n\\right)\\left(\n\\begin{array}{cc}\n1&0\\\\\nx&I_{n-1}\n\\end{array}\n\\right)U^{-1}\\left(\n\\begin{array}{cc}\n1&0\\\\\n\\gamma a&I_{n-1}\n\\end{array}\n\\right)=\\left(\n\\begin{array}{cc}\n\\alpha+\\beta\\gamma a&0\\\\\n0&-I_{n-1}\n\\end{array}\n\\right).$$ This implies that $u:=\\alpha+\\beta\\gamma a\\in U(R)$.\nHence, $\\alpha=u-\\beta\\gamma a$. It follows from $\\alpha a=1$ that\n$(u-\\beta\\gamma a)a=1$. Since $R$ has only trivial idempotents, we\nget $a(u-\\beta\\gamma a)=1$, and so $a\\in U(R)$. This shows that\n$a\\in U(R)$ or $1-a\\in U(R)$. Therefore $R$ is local, and then\n$R\/J(R)$ is a division ring. Since $M_n(R)$ is nil-clean, we see\nthat so is $M_n(R\/J(R))$. In light of ~\\cite[Theorem 3]{BGDT},\n$R\/J(R)\\cong {\\Bbb Z}_2$, as desired.\n\n$(2)\\Rightarrow (1)$ By virtue of ~\\cite[Theorem 3]{BGDT},\n$M_n(R\/J(R))$ is\n nil-clean. Since $M_n(R)\/J(M_n(R))\\cong M_n(R\/J(R))$ and\n$J\\big(M_n(R)\\big)=M_n(J(R))$ is nil, it follows from ~\\cite[Lemma\n4]{BGDT} that $M_n(R)$ is nil-clean, as asserted. \\hfill$\\Box$\n\n\\vskip4mm \\hspace{-1.8em} {\\bf Example 3.7.}\\ \\ Let $K$ be a\nfield, and let $R=K[x,y]\/(x,y)^2$. Then $M_n(R)$ is nil-clean if\nand only if $K\\cong {\\Bbb Z}_2$. Clearly, $J(R)=(x,y)\/(x,y)^2$,\nand so $R\/J(R)\\cong K$. Thus, $R$ is a local ring with a nilpotent\nJacobson radical. Hence, $R$ has no non-trivial idempotents. Thus,\nwe are done by Lemma 3.6.\n\n\\vskip4mm We are now ready to prove:\n\n\\vskip4mm \\hspace{-1.8em} {\\bf Theorem 3.8.}\\ \\ {\\it Let $R$ be\nabelian, and let $n\\in {\\Bbb N}$. 
Then the following are\nequivalent:}\vspace{-.5mm}\n\begin{enumerate}\n\item [(1)]{\it $M_n(R)$ is nil-clean.}\n\item [(2)]{\it $R\/J(R)$ is Boolean and $M_n(J(R))$ is nil.}\n\end{enumerate} \vspace{-.5mm} {\it Proof.}\ \ $(1)\Rightarrow (2)$ Clearly, $M_n(J(R))$ is nil. Let $M$ be a maximal ideal of\n$R$. Since $M_n(R)$ is nil-clean,\nso is $M_n(R\/M)$. Hence, $R\/M$ is an exchange ring with all\nidempotents central. In view of ~\cite[Lemma 17.2.5]{CH}, $R\/M$ is\nlocal, and so $R\/M$ has only trivial idempotents. It follows from\nLemma 3.6 that $\big(R\/M\big)\/J(R\/M)\cong {\Bbb Z}_2$. Write $J(R\/M)=K\/M$.\nThen $K$ is a maximal ideal of $R$ with $M\subseteq K$. This\nimplies that $M=K$; hence, $R\/M\cong {\Bbb Z}_2$. Construct a map\n$\varphi_M: R\/J^*(R)\to R\/M, r+J^*(R)\mapsto r+M$. Here, $J^*(R)$\nis the intersection of all maximal two-sided ideals of $R$. Then\n$\bigcap\limits_{M}Ker\varphi_M=\bigcap\limits_{M}\{\nr+J^*(R)~|~r\in M\}=0$. Therefore $R\/J^*(R)$ is isomorphic to a\nsubdirect product of copies of ${\Bbb Z}_2$. Hence, $R\/J^*(R)$ is\nBoolean. In light of Lemma 3.5, $R\/J(R)$ is Boolean, as desired.\n\n$(2)\Rightarrow (1)$ Since $R\/J(R)$ is Boolean, it follows by\n~\cite[Corollary 6]{BGDT} that $M_n(R\/J(R))$ is nil-clean. That\nis, $M_n(R)\/J(M_n(R))$ is nil-clean. But $J(M_n(R))=M_n(J(R))$ is\nnil. Therefore we complete the proof, in terms of Lemma\n3.6.\hfill$\Box$\n\n\vskip4mm We note that ``$(2)\Rightarrow (1)$'' in Theorem 3.8\nalways holds, but the ``abelian'' condition is necessary in\n``$(1)\Rightarrow (2)$''. Let $R=M_n({\Bbb Z}_2) (n\geq 2)$. Then\n$R$ is nil-clean. But $R\/J(R)$ is not Boolean. Here, $R$ is not\nabelian.\n\n\vskip4mm \hspace{-1.8em} {\bf Corollary 3.9.}\ \ {\it Let $R$ be\ncommutative, and let $n\in {\Bbb N}$.
Then the following are\nequivalent:}\vspace{-.5mm}\n\begin{enumerate}\n\item [(1)]{\it $M_n(R)$ is nil-clean.}\n\item [(2)]{\it $R\/J(R)$ is Boolean and $J(R)$ is nil.}\n\item [(3)]{\it For any $a\in R$, $a-a^2\in R$ is nilpotent.}\n\end{enumerate} \vspace{-.5mm} {\it Proof.}\ \ $(1)\Rightarrow (3)$ Let $a\in R$. In view of Theorem 3.8,\n$a-a^2\in J(R)$. Since $R$ is commutative, we see that $J(R)$ is\nnil if and only if $J(M_n(R))$ is nil. Therefore $a-a^2\in R$ is\nnilpotent.\n\n$(3)\Rightarrow (2)$ Clearly, $R\/J(R)$ is Boolean. For any $a\in\nJ(R)$, we have $(a-a^2)^m=0$ for some $m\geq 1$. Hence,\n$a^m(1-a)^m=0$, and so $a^m=0$. This implies that $J(R)$ is nil.\n\n$(2)\Rightarrow (1)$ As $R$ is commutative, we see that\n$M_n(J(R))$ is nil. This completes the proof, by Theorem 3.8.\n\hfill$\Box$\n\n\vskip4mm Furthermore, we observe that the converse of\n~\cite[Corollary 7]{BGDT} is true as the following shows.\n\n\vskip4mm \hspace{-1.8em} {\bf Corollary 3.10.}\ \ {\it A\ncommutative ring $R$ is nil-clean if and only if $M_n(R)$ is\nnil-clean.} \vskip2mm\hspace{-1.8em} {\it Proof.}\ \ One direction\nis obvious by ~\cite[Corollary 7]{BGDT}. Suppose that $M_n(R)$ is\nnil-clean. In view of Corollary 3.9, $R\/J(R)$ is Boolean, hence\nnil-clean, and $J(R)$ is nil. Therefore $R$ is nil-clean, by\n~\cite[Lemma 4]{BGDT}.\hfill$\Box$\n\n\vskip4mm \hspace{-1.8em} {\bf Corollary 3.11.}\ \ Let $m,n\in\n{\Bbb N}$. Then $M_n\big({\Bbb Z}_m\big)$ is nil-clean if and only\nif $m=2^r$ for some $r\in {\Bbb N}$. Write $m=p_1^{r_1}\cdots\np_s^{r_s} (p_1,\cdots ,p_s~\mbox{are distinct primes}, r_1,\cdots\n,r_s\in {\Bbb N}$). Then ${\Bbb Z}_m\cong {\Bbb Z}_{p_1^{r_1}}\oplus \cdots\n\oplus {\Bbb Z}_{p_s^{r_s}}$. In light of Corollary 3.10, $M_n\big({\Bbb\nZ}_m\big)$ is nil-clean if and only if ${\Bbb Z}_m$ is nil-clean,\nif and only if $s=1$ and ${\Bbb Z}_{p_1^{r_1}}$ is nil-clean.
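This criterion is easy to confirm numerically for small moduli; the following Python sketch (illustrative only, not part of the proof) checks that ${\Bbb Z}_m$ is nil-clean exactly when $m$ is a power of $2$:

```python
def is_nil_clean(m):
    """Check whether every element of Z/m is an idempotent plus a nilpotent."""
    idempotents = [e for e in range(m) if (e * e - e) % m == 0]
    # w is nilpotent in Z/m iff w^m = 0 in Z/m (the index is at most m)
    nilpotents = [w for w in range(m) if pow(w, m, m) == 0]
    sums = {(e + w) % m for e in idempotents for w in nilpotents}
    return len(sums) == m

# Z/m is nil-clean exactly when m is a power of 2
assert [m for m in range(2, 20) if is_nil_clean(m)] == [2, 4, 8, 16]
```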
Therefore\nwe are done by Lemma 3.6.\n\n\section{A Special Case}\n\n\vskip4mm A natural problem is to ask when a ring consists entirely\nof very idempotents, units, and nilpotent elements. We extend\nthe study in ~\cite{I} of rings consisting entirely of certain\nspecial elements, and explore rings of this type. Surprisingly,\nboth Boolean rings and the ring\n${\Bbb Z}_3$ of integers modulo $3$ are involved in our case.\nTheir structures will be\nthereby completely determined. The following is a generalization\nof ~\cite[Corollary 2.29]{Ah}, which is for a commutative ring.\n\n\vskip4mm \hspace{-1.8em} {\bf Lemma 4.1.}\ \ {\it Let $R$ be a\nring. Then $R=U(R)\bigcup Id(R)\bigcup -Id(R)$ if and only if $R$\nis isomorphic to one of the following:}\vspace{-.5mm}\n\begin{enumerate}\n\item [(1)] {\it a Boolean ring;} \vspace{-.5mm}\n\item [(2)] {\it a division ring;}\n\vspace{-.5mm}\n\item [(3)] {\it ${\Bbb Z}_3\oplus {\Bbb Z}_3$; or} \vspace{-.5mm}\n\item [(4)] {\it\n${\Bbb Z}_3\oplus B$ where $B$ is a Boolean ring.}\n\end{enumerate} \vspace{-.5mm} {\it Proof.}\ \ $\Longrightarrow $\nIt is easy to check that $R$ is reduced; hence, it is abelian.\n\nCase I. $R$ is indecomposable. Then $R$ is a division ring.\n\nCase II. $R$ is decomposable. Then $R=A\oplus B$ where $A,B\neq\n0$. If $0\neq x\in A$, then $(x,0)\in R$ is a very idempotent.\nHence, $x\in A$ is a very idempotent. Thus, $A=Id(A)\bigcup\n-Id(A)$. Likewise, $B=Id(B)\bigcup -Id(B)$.
In view of\n~\cite[Theorem 1.12]{Ah}, $A$ and $B$ are isomorphic to one of the\nfollowing:\n\begin{enumerate}\n\item [(1)] {\it ${\Bbb Z}_3$;}\n\vspace{-.5mm} \item [(2)] {\it a Boolean ring;} \vspace{-.5mm}\n\item [(3)] {\it ${\Bbb Z}_3\oplus B$ where $B$ is a Boolean ring.}\n\end{enumerate}\n\nThus, $R$ is isomorphic to one of the following:\n\begin{enumerate}\n\item [(a)] {\it ${\Bbb Z}_3\oplus {\Bbb Z}_3$;}\n\vspace{-.5mm} \item [(b)] {\it ${\Bbb Z}_3\oplus B$ where $B$ is\na Boolean ring;} \vspace{-.5mm}\n\item [(c)] {\it ${\Bbb Z}_3\oplus {\Bbb Z}_3\oplus B$ where $B$ is\na Boolean ring;} \vspace{-.5mm}\n\item [(d)] {\it a Boolean ring.}\end{enumerate}\n\nIn Case (c), $(1,-1,0)\not\in U(R)\bigcup Id(R)\bigcup -Id(R)$, a\ncontradiction.\n\nTherefore we conclude that $R$ is one of Cases (a), (b) and (d),\nas desired.\n\n$\Longleftarrow $ Case (1). $R=Id(R)$. Case (2). $R=U(R)\bigcup\nId(R)$. Case (3). $U(R)=\{ (1,1),(1,-1),(-1,1),(-1,-1)\}$,\n$Id(R)=\{ (0,0),(0,1),(1,0),(1,1)\}$ and $-Id(R)=\{ (0,0),(0,-1),\n(-1,0),(-1,-1)\}$. Thus, $R=U(R)\bigcup Id(R)\bigcup -Id(R)$. Case\n(4). $Id(R)=\{ (0,x),(1,x) ~|~x\in B\}$ and $-Id(R)=\{\n(0,x),(-1,x)~|~x\in B\}$. Therefore $R=Id(R)\bigcup -Id(R)$, as\ndesired.\hfill$\Box$\n\n\vskip4mm \hspace{-1.8em} {\bf Lemma 4.2.}\ \ {\it Let $R$ be a\ndecomposable ring. Then $R$ consists entirely of very idempotents,\nunits, and nilpotent elements if and only if $R$ is isomorphic to\none of the following:}\vspace{-.5mm}\n\begin{enumerate}\n\item [(1)] {\it a Boolean ring;}\n\vspace{-.5mm}\n\item [(2)] {\it ${\Bbb Z}_3\oplus {\Bbb Z}_3$;}\n\vspace{-.5mm}\n\item [(3)] {\it\n${\Bbb Z}_3\oplus B$ where $B$ is a Boolean ring.}\n\end{enumerate} \vspace{-.5mm} {\it Proof.}\ \ $\Longrightarrow$ Write $R=A\oplus B$\nwith $A,B\neq 0$. Then $A$ and $B$ are rings consisting\nentirely of very idempotents, units, and nilpotent elements.
If\n$0\neq x\in N(A)$, then $(x,1)\not\in Id(R)\bigcup -Id(R)\bigcup\nU(R)\bigcup N(R)$. This shows that $A=U(A)\bigcup Id(A)\bigcup\n-Id(A)$. Likewise, $B=U(B)\bigcup Id(B)\bigcup -Id(B)$. In light\nof Lemma 4.1, $R$ is one of the following:\n\begin{enumerate}\n\item [(a)] {\it a Boolean ring;} \vspace{-.5mm}\n\item [(b)] {\it $B\oplus D$ where $B$ is a Boolean ring and $D$ is a division ring;}\n\vspace{-.5mm}\n\item [(c)] {\it ${\Bbb Z}_3\oplus {\Bbb Z}_3\oplus B$ where $B$ is\na Boolean ring;} \vspace{-.5mm}\n\item [(d)] {\it ${\Bbb Z}_3\oplus B$ where $B$ is\na Boolean ring;} \vspace{-.5mm}\n\item [(e)] {\it $D\oplus D'$ where $D$ and $D'$ are division rings;}\n\vspace{-.5mm}\n\item [(f)] {\it ${\Bbb Z}_3\oplus B\oplus D$ where $B$ is a Boolean ring and $D$ is a division ring;}\n\vspace{-.5mm}\n\item [(g)] {\it ${\Bbb Z}_3\oplus {\Bbb Z}_3\oplus D$ where $D$ is a division ring;}\vspace{-.5mm}\n\item [(h)] {\it ${\Bbb Z}_3\oplus {\Bbb Z}_3\oplus {\Bbb Z}_3\oplus {\Bbb Z}_3$;}\n\vspace{-.5mm}\n\item [(i)] {\it ${\Bbb Z}_3\oplus {\Bbb Z}_3\oplus {\Bbb Z}_3\oplus B$ where $B$ is a Boolean ring.}\end{enumerate}\n\nIn Case (b), if $x\in D$ with $x\neq 0,\pm 1$, then $(0,x)\not\in\nU(R)\bigcup Id(R)\bigcup -Id(R)\bigcup N(R)$. This forces $D\cong {\Bbb Z}_2$\nor ${\Bbb Z}_3$. Hence, in Case (b), $R$ is as in $(1)$ or $(3)$. Case $(c)$\ndoes not occur, as $(1,-1,0)\not\in U(R)\bigcup Id(R)\bigcup -Id(R)\bigcup N(R)$.\nCase $(e)$ forces $D,D'\cong {\Bbb Z}_2$ or ${\Bbb\nZ}_3$; hence, $R$ is as in $(1)-(3)$. Case $(f)$ does not occur except\nwhen $D\cong {\Bbb Z}_2$; in that case, $R$ is as in $(3)$. Likewise, Cases (g)-(i) do\nnot occur, as $(1,-1,0), (1,-1,0,0)\not\in U(R)\bigcup Id(R)\bigcup\n-Id(R)\bigcup N(R)$. Therefore $R$ is as in $(1)-(3)$, as desired.\n\n$\Longleftarrow$ This is clear.\hfill$\Box$\n\n\vskip4mm \hspace{-1.8em} {\bf Theorem 4.3.}\ \ {\it Let $R$ be an\nabelian ring.
Then $R$ consists entirely of very idempotents,\nunits, and nilpotent elements if and only if $R$ is isomorphic to\none of the following:}\\vspace{-.5mm}\n\\begin{enumerate}\n\\item [(1)] {\\it ${\\Bbb Z}_3$;}\n\\vspace{-.5mm}\n\\item [(2)] {\\it a Boolean ring;}\n\\vspace{-.5mm}\n\\item [(3)] {\\it ${\\Bbb Z}_3\\oplus {\\Bbb Z}_3$;}\n\\vspace{-.5mm}\n\\item [(4)] {\\it\n${\\Bbb Z}_3\\oplus B$ where $B$ is a Boolean ring; or}\n\\item [(5)] {\\it local ring with nil Jacobson radical.}\n\\end{enumerate} \\vspace{-.5mm} {\\it Proof.}\\ \\ $\\Longrightarrow$\nCase I. $R$ is indecomposable. Then $R=U(R)\\bigcup N(R)$. This\nshows that $R$ is local. Let $x\\in J(R)$, then $x\\in N(R)$, and so\n$J(R)$ is nil.\n\nCase II. $R$ is decomposable. In view of Lemma 4.2, $R$ is\nisomorphic to one of the following: \\vspace{-.5mm}\n\\begin{enumerate}\n\\item [(1)] {\\it a Boolean ring;}\n\\vspace{-.5mm}\n\\item [(2)] {\\it ${\\Bbb Z}_3\\times {\\Bbb Z}_3$;}\n\\vspace{-.5mm}\n\\item [(3)] {\\it\n${\\Bbb Z}_3\\oplus B$ where $B$ is a Boolean ring.}\n\\end{enumerate} This shows that $R$ is isomorphic to one of\n$(1)-(5)$, as desired.\n\n$\\Longleftarrow$ This is obvious.\\hfill$\\Box$\n\n\\vskip4mm \\hspace{-1.8em} {\\bf Lemma 4.4.}\\ \\ {\\it Let $R$ be any\nring that consists entirely of very idempotents, units and\nnilpotent elements. Then $eRe$ is a division ring for any\nnoncentral idempotent $e\\in R$.}\\vskip2mm\\hspace{-1.8em} {\\it\nProof.}\\ \\ Let $e\\in R$ be a noncentral idempotent, and let\n$f=1-e$. Then $R\\cong \\left(\n\\begin{array}{cc}\neRe&eRf\\\\\nfRe&fRf\n\\end{array}\n\\right)$. The subring $\\left(\n\\begin{array}{cc}\neRe&0\\\\\n0&fRf\n\\end{array}\n\\right)$ consists entirely of very idempotents, units and\nnilpotent elements. That is, $eRe\\times fRf$ consists entirely of\nvery idempotents, units and nilpotent elements. Set $A=eRe$ and\n$B=fRf$. Similarly to Lemma 4.2, $A=U(A)\\bigcup Id(A)\\bigcup\n-Id(A)$. 
In view of Lemma 4.1, $A$ is isomorphic to one of the\nfollowing:\n\begin{enumerate}\n\item [(1)] {\it ${\Bbb Z}_3$;}\n\vspace{-.5mm} \item [(2)] {\it a Boolean ring;} \vspace{-.5mm}\n\item [(3)] {\it a division ring;}\n\vspace{-.5mm}\n\item [(4)] {\it ${\Bbb Z}_3\oplus B$ where $B$ is a Boolean ring.}\n\end{enumerate}\nThat is, $A$ is a division ring or a ring in which every element\nis very idempotent. Suppose that $eRe$ is not a division ring.\nThen $eRe$ must contain a nontrivial idempotent, say $a\in eRe$.\nLet $b=e-a$. Let $x\in eRf$ and $y\in fRe$. Choose\n$$X_1=\left(\n\begin{array}{cc}\na&x\\\n0&0\n\end{array}\n\right),X_2=\left(\n\begin{array}{cc}\nb&x\\\n0&0\n\end{array}\n\right),Y_1=\left(\n\begin{array}{cc}\na&0\\\ny&0\n\end{array}\n\right),Y_2=\left(\n\begin{array}{cc}\nb&0\\\ny&0\n\end{array}\n\right).$$ Then $X_1,X_2,Y_1,Y_2$ are not invertible. As $a,b\in\neRe$ are nontrivial idempotents, we see that none of\n$X_1,X_2,Y_1,Y_2$ is a nilpotent matrix. This shows that $X_1$ and $X_2$ are\nboth very idempotents. It follows that $X_1=\pm X_1^2$ and\n$X_2=\pm X_2^2$. As $x\in eRf, y\in fRe$, we have $ex=x$ and\n$fy=y$.\n\nCase I. $X_1=X_1^2, X_2=X_2^2$. Then $ax=x, bx=x$, and so\n$x=ex=2x$; hence, $x=0$.\n\nCase II. $X_1=X_1^2, X_2=-X_2^2$. Then $ax=x, bx=-x$, and so\n$x=ex=0$.\n\nCase III. $X_1=-X_1^2, X_2=X_2^2$. Then $ax=-x, bx=x$, and so\n$x=ex=0$.\n\nCase IV. $X_1=-X_1^2, X_2=-X_2^2$. Then $a=-a^2, ax=-x, b=-b^2$\nand $bx=-x$. Hence, $(e-a)x=-x$, and so $x=ex=-2x$, hence, $3x=0$.\nAs $a\in R$ is an idempotent, we see that $a=a^2$, hence, $a=-a$,\nand so $2a=0$. It follows that $x=-ax=(2a)x-a(3x)=0$.\n\nThus, $x=0$ in any case. We infer that $eRf=0$. Likewise, $fRe=0$.\nHence, $e\in R$ is central, a contradiction. This completes the\nproof.\hfill$\Box$\n\n\vskip4mm \hspace{-1.8em} {\bf Lemma 4.5.}\ \ {\it Let $R$ be any\nring that consists entirely of very idempotents, units and\nnilpotent elements.
Then $eRe$ is isomorphic to ${\Bbb Z}\/2{\Bbb\nZ}$ or ${\Bbb Z}\/3{\Bbb Z}$ for any noncentral idempotent $e\in\nR$.}\vskip2mm\hspace{-1.8em} {\it Proof.}\ \ Let $e\in R$ be a\nnoncentral idempotent. In view of Lemma 4.4, $eRe$ is a division\nring. Set $f=1-e$. For any $u\in eRe$ with $u\neq 0, e,\n-e$, the matrix\n$$X= \left(\n\begin{array}{cc}\nu&0\\\n0&0\n\end{array}\n\right)\in \left(\n\begin{array}{cc}\neRe&eRf\\\nfRe&fRf\n\end{array}\n\right)$$ is neither a unit, nor a very idempotent, nor a nilpotent\nelement. This gives a contradiction. Therefore $u=0,e$ or $-e$, as\ndesired.\hfill$\Box$\n\n\vskip4mm Recall that a ring $R$ is semiprime if it has no nonzero\nnilpotent ideals. Furthermore, we derive\n\n\vskip4mm \hspace{-1.8em} {\bf Theorem 4.6.}\ \ {\it Let $R$ be\nany nonabelian ring that consists entirely of units, very\nidempotents, and nilpotent elements. If $R$ is semiprime, then it\nis isomorphic to $M_2({\Bbb Z}_2)$ or $M_2({\Bbb\nZ}_3)$.}\vskip2mm\hspace{-1.8em} {\it Proof.}\ \ Suppose that $R$\nis semiprime. In view of Lemma 4.4, $eRe$ is a division ring for\nany noncentral idempotent $e\in R$. It follows by ~\cite[Lemma\n21]{Du} that $R$ is isomorphic to $M_2(D)$ for a division ring\n$D$. Choose $E_{11}=\left(\n\begin{array}{cc}\n1&0\\\n0&0\n\end{array}\n\right)\in M_2(D)$. Then $E_{11}$ is a noncentral idempotent.\nAccording to Lemma 4.5, $D\cong E_{11}M_2(D)E_{11}\cong {\Bbb Z}\/2{\Bbb Z}$ or ${\Bbb\nZ}\/3{\Bbb Z}$; hence $R\cong M_2({\Bbb Z}_2)$ or $M_2({\Bbb Z}_3)$, as asserted.\hfill$\Box$\n\n\vskip4mm Recall that a ring $R$ is an NJ-ring provided that for\nany $a\in R$, either $a\in R$ is regular or $1-a\in R$ is a\nunit~\cite{N}. Clearly, all rings that consist\nentirely of units, very idempotents, and nilpotent elements are\nNJ-rings.\n\n\vskip4mm \hspace{-1.8em} {\bf Theorem 4.7.}\ \ {\it Let $R$ be\nany nonabelian ring that consists entirely of very idempotents,\nunits and nilpotent elements.
If $R$ is not semiprime, then it is\nisomorphic to the ring of a Morita context with zero pairings\nwhere the underlying rings are ${\Bbb Z}_2$ or ${\Bbb\nZ}_3$.}\vskip2mm\hspace{-1.8em} {\it Proof.}\ \ Suppose that $R$\nis not semiprime. Clearly, $R$ is an NJ-ring. In view of\n~\cite[Theorem 2]{N}, $R$ must be a regular ring, a local ring or\nisomorphic to the ring of a Morita context with zero pairings\nwhere the underlying rings are both division rings. If $R$ is\nregular, it is semiprime, a contradiction. If $R$ is local, it is\nabelian, a contradiction. Therefore, $R$ is isomorphic to the ring\nof a Morita context $T=(A,B,M,N,\varphi,\psi)$ with zero pairings\n$\varphi,\psi$ where the underlying rings are division rings $A$\nand $B$. Choose $E=\left(\n\begin{array}{cc}\n1_A&0\\\n0&0\n\end{array}\n\right)\in T$. Then $E\in T$ is a noncentral idempotent. In light\nof Lemma 4.5, $A\cong ETE\cong {\Bbb Z}_2$ or ${\Bbb Z}_3$.\nLikewise, $B\cong {\Bbb Z}_2$ or ${\Bbb Z}_3$. This completes the\nproof.\hfill$\Box$\n\n\vskip4mm With this information we completely determine the\nstructure of rings that consist entirely of very idempotents,\nunits and nilpotent elements.\n\n\vskip4mm \hspace{-1.8em} {\bf Theorem 4.8.}\ \ {\it Let $R$ be a\nring.
Then $R$ consists entirely of very idempotents, units and\nnilpotent elements if and only if $R$ is isomorphic to one of the\nfollowing:}\vspace{-.5mm}\n\begin{enumerate}\n\item [(1)] {\it a Boolean ring;}\n\vspace{-.5mm}\n\item [(2)] {\it ${\Bbb Z}_3\times {\Bbb Z}_3$;}\n\vspace{-.5mm}\n\item [(3)] {\it\n${\Bbb Z}_3\oplus B$ where $B$ is a Boolean ring;}\n\item [(4)] {\it local ring with a nil Jacobson radical;}\n\vspace{-.5mm}\n\item [(5)] {\it $M_2\big({\Bbb Z}_2\big)$ or $M_2\big({\Bbb Z}_3\big)$; or} \vspace{-.5mm}\n\item [(6)] {\it the ring of a Morita context\nwith zero pairings where the underlying rings are ${\Bbb Z}_2$ or\n${\Bbb Z}_3$.}\n\end{enumerate} \vspace{-.5mm} {\it Proof.}\ \ $\Longrightarrow$\nThis follows from Theorems 4.3, 4.6 and 4.7.\n\n$\Longleftarrow$ Cases (1)-(4) are easy. Cases (5) and (6) are verified\nby checking all possible (generalized) matrices over ${\Bbb Z}_2$\nand ${\Bbb Z}_3$.\hfill$\Box$\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn the Euclidean case, we can roughly distinguish three types of Improved Sobolev inequalities according to the method used in their proof and the range of parameters defining the functional spaces. Let us recall these inequalities (for a precise definition of the functional spaces used below, please refer to section \\ref{espa}).\\\\\n\nHistorically the first method, due to P. G\\'erard, F. Oru and Y. Meyer \\cite{GMO}, is based on a Littlewood-Paley decomposition and interpolation results applied to dyadic blocks.
For a function $f$ such that $f\in \dot{W}^{s_1,p}(\mathbb{R}^n)$ and $f\in \dot{B}^{-\beta,\infty}_{\infty}(\mathbb{R}^n)$, the inequality obtained reads as follows:\n\begin{equation}\label{ISI1}\n\|f\|_{\dot{W}^{s,q}}\leq C \|f\|_{\dot{W}^{s_1,p}}^\theta\|f\|_{\dot{B}^{-\beta,\infty}_{\infty}}^{1-\theta}\n\end{equation}\nwhere $12$, this Besov-Sobolev space embedding is reversed. It is then important to observe that our weak inequality does not have this restriction since we allow $q$ to be in the interval $]1, +\infty[$. \\\n\nNote also that in the context of stratified Lie groups, the weak inequality (\ref{smile1}) is the sharpest result available.\\\n\nOur second result provides the main tool for proving theorem \ref{smile00}:\n\begin{Theoreme}[Modified pseudo-inequality of Poincar\'e]\label{CHAME} \nLet $\mathbb{G}$ be a stratified Lie group, $\omega \in A_{1}$ and $\nabla f\in L^{1}(\mathbb{G}, \omega)$. We have the following estimate for $0\leq s<1$ and for $t>0$: \n\begin{equation}\label{CHAME1} \n\|\mathcal{J}^{s\/2}f-H_{t}\mathcal{J}^{s\/2}f \|_ { L^{1}(\omega)}\leq C \; t^{\frac{1-s}{2 } } \|\nabla f\|_ { L^{1}(\omega) }. \n\end{equation}\nHere $\mathcal{J}$ is a sub-Laplacian on $\mathbb{G}$, invariant with respect to the family of dilations, $H_{t}$ stands for the associated heat semi-group and the constant $C=C(s)$ depends on the group $\mathbb{G}$.\n\end{Theoreme} \nThis estimate will be a consequence of several spectral properties of the sub-Laplacian $\mathcal{J}$, and we will especially use operators of type $m(\mathcal{J})$ where $m$ is a well suited Borel function (see section \ref{spectre} for the details). \\\n\nFinally, we will prove in a very straightforward way the next theorem in a weighted functional setting: \n\begin{Theoreme}\label{PGLR}\nLet $\mathbb{G}$ be a stratified Lie group and $\omega \in A_{p}$ with $10$.
Then, the well suited group law with respect to this dilation is given by \n\begin{equation*}\n x\cdot y=(x_{1}, x_{2}, x_{3})\cdot(y_{1}, y_{2}, y_{3})=(x_{1}+y_{1}, x_{2}+y_{2}, x_{3}+y_{3}+\frac{1}{2}(x_{1}y_{2}-y_{1}x_{2 })). \n\end{equation*} \nThis triplet $(\mathbb{R}^{3}, \cdot,\delta )$ corresponds to the Heisenberg group $\mathbb{H}^{1}$, which is the first non-trivial example of a homogeneous group.\\ \n\nThe \emph{homogeneous dimension} with respect to dilation (\ref{dilat}) is given by the sum of the exponents of dilation: \n\begin{equation*} N=\sum_{1\leq i\leq n}a_{i }. \n\end{equation*} \nWe observe that it is always at least as large as the topological dimension $n$ since the integers $a_{i}$ satisfy $a_{i}\geq 1$ for all $i=1,...,n$. \nFor instance, in the Heisenberg group $\mathbb{H}^{1}$ we have $N=4$ and $n=3$ while in the Euclidean case these two concepts coincide. \\ \n\nWe will say that a function on $\mathbb{G}\setminus \{0\}$ is homogeneous of degree $\lambda \in \mathbb{R}$ if $f(\delta_{\alpha}[x])=\alpha^{\lambda}f(x)$ \nfor all $\alpha>0$. \nIn the same way, we will say that a differential operator $D$ is homogeneous of degree $\lambda$ if \n$$D(f(\delta_{\alpha}[x]))=\alpha^{\lambda}(Df)(\delta_{\alpha}[x])$$ for all $f$ in the operator's domain. \nIn particular, if $f$ is homogeneous of degree $\lambda$ and if $D$ is a differential operator of degree $\mu$, then $Df$ is homogeneous of degree $\lambda-\mu$. \\ \n\nFrom the point of view of measure theory, homogeneous groups behave in a traditional way since Lebesgue measure $dx$ is bi-invariant and coincides with the Haar measure. For any subset $E$ of $\mathbb{G}$ we will note its measure as $|E|$.
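The Heisenberg group law recalled above can be verified by direct computation. The following numerical sketch (purely illustrative and outside the paper's formal development; the helper names `mul` and `dil` are ours) checks associativity, the identity $(0,0,0)$, the inverse $x^{-1}=-x$ and the fact that the dilations $\delta_{\alpha}[x]=(\alpha x_{1},\alpha x_{2},\alpha^{2}x_{3})$ are group automorphisms:

```python
# Sanity check of the group law of H^1 = (R^3, ., delta); illustration only.

def mul(x, y):
    """Group law: the third coordinate picks up (x1*y2 - y1*x2)/2."""
    return (x[0] + y[0], x[1] + y[1],
            x[2] + y[2] + 0.5 * (x[0] * y[1] - y[0] * x[1]))

def dil(a, x):
    """Dilation delta_a[x] = (a x1, a x2, a^2 x3); exponents (1, 1, 2), so N = 4."""
    return (a * x[0], a * x[1], a ** 2 * x[2])

x, y, z = (1.0, 2.0, 3.0), (-0.5, 4.0, 1.0), (2.0, -1.0, 0.5)
e = (0.0, 0.0, 0.0)

assert mul(mul(x, y), z) == mul(x, mul(y, z))                # associativity
assert mul(x, e) == x and mul(tuple(-c for c in x), x) == e  # identity and inverse -x
assert mul(dil(3.0, x), dil(3.0, y)) == dil(3.0, mul(x, y))  # dilations are automorphisms
assert mul(x, y) != mul(y, x)                                # non-commutativity
```

The last assertion also exhibits the non-commutativity of $\mathbb{H}^{1}$, the feature that distinguishes it from the Euclidean case.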
The convolution of two functions $f$ and $g$ on $\\mathbb{G}$ is defined by\n\\begin{equation*} \nf\\ast g(x)=\\int_{\\mathbb{G}}f(y)g(y^{-1}\\cdot x)dy=\\int_{\\mathbb{G}}f(x\\cdot y^{-1})g(y)dy, \\quad x\\in \\mathbb{G}.\n\\end{equation*}\nWe also have the useful Young's inequalities:\n\\begin{Lemme}\\label{LemYoungG} \nIf $1\\leq p, q, r\\leq +\\infty$ such that $1+\\frac{1}{r}=\\frac{1}{p}+\\frac{1}{q}$. \nIf $f\\in L^{p}(\\mathbb{G})$ and $g\\in L^{q}(\\mathbb{G})$, then $f\\ast g \\in L^{r}(\\mathbb{G})$ and \n\\begin{equation}\\label{Young} \n\\|f\\ast g\\|_{L^r}\\leq \\|f\\|_{L^p}\\|g\\|_{L^q}. \n\\end{equation} \n\\end{Lemme} \nA proof is given in \\cite{Folland2}.\\\\\n\nFor a homogeneous group $\\mathbb{G}=(\\mathbb{R}^{n}, \\cdot, \\delta)$ we consider now its Lie algebra $\\mathfrak{g}$ whose elements can be conceived in two different ways: as \\textit{left}-invariant vector fields or as \\textit{right}-invariant vector fields. The left-invariant vectors fields $(X_j)_{1\\leq j\\leq n}$ are determined by the formula\n\\begin{equation}\\label{loiG} \n(X_{j}f)(x)=\\left.\\frac{\\partial f(x\\cdot y)}{\\partial y_{j}}\\right|_{y=0}=\\frac{\\partial f}{\\partial x_{j}}+\\sum_{jk$. Here $[E_{1}, E_{j}]$ indicates the subspace of $\\mathfrak{g}$ generated by the elements $[U, V]=UV-VU$ with $U\\in E_{1}$ and $V\\in E_{j}$. The integer $k$ is called the \\emph{degree} of stratification of $\\mathfrak{g}$. For example, on Heisenberg group $\\mathbb{H}^1$, we have $k=2$ while in the Euclidean case $k=1$. \\\\\n\nWe will suppose henceforth that $\\mathbb{G}$ is \\textbf{stratified}. 
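Young's inequality (\ref{Young}) can be illustrated numerically in the simplest unimodular setting, namely the group $\mathbb{Z}$ with counting measure. The sketch below (an illustration of the statement only, not of its proof; the helper names are ours) checks the estimate on randomly generated finitely supported sequences:

```python
# Discrete check of Young's inequality ||f*g||_r <= ||f||_p ||g||_q on Z,
# with 1 + 1/r = 1/p + 1/q; illustration only.
import random

def conv(f, g):
    """Convolution of finitely supported sequences given as dicts {index: value}."""
    h = {}
    for i, fi in f.items():
        for j, gj in g.items():
            h[i + j] = h.get(i + j, 0.0) + fi * gj
    return h

def norm(f, p):
    """l^p norm of a finitely supported sequence."""
    return sum(abs(v) ** p for v in f.values()) ** (1.0 / p)

random.seed(0)
for _ in range(200):
    f = {i: random.uniform(-1, 1) for i in range(5)}
    g = {i: random.uniform(-1, 1) for i in range(7)}
    p, q = random.uniform(1.0, 3.0), random.uniform(1.0, 3.0)
    if 1.0 / p + 1.0 / q <= 1.0:
        continue  # Young requires 1 + 1/r = 1/p + 1/q with r >= 1
    r = 1.0 / (1.0 / p + 1.0 / q - 1.0)
    assert norm(conv(f, g), r) <= norm(f, p) * norm(g, q) + 1e-9
```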
Within this framework, if we fix the vector fields $X_{1},...,X_{m}$ such that $a_{1}=a_{2}=\ldots=a_{m}=1$ $(m<n)$, then, denoting by $d$ the Carnot-Carath\'eodory distance associated with these fields, for $r>0$ we form the balls by writing $B(x,r)=\{y\in \mathbb{G}: d(x,y)<r\}$.\n\nWe now recall the main properties of the heat semi-group associated with the sub-Laplacian $\mathcal{J}$.\n\begin{Theoreme}\nThere exists a semi-group of operators $(H_{t})_{t>0}$ defined on $L^{1}+L^{\infty}(\mathbb{G})$ with the semi-group property $H_{t+s}=H_{t}H_{s}$ for all $t, s>0$ and $H_{0}=Id$, such that: \\ \n\begin{enumerate} \n\item[1)] the sub-Laplacian $\mathcal{J}$ is the infinitesimal generator of the semi-group $H_{t}=e^{-t\mathcal{J}}$; \\ \n\item[2)] $H_{t}$ is a contraction operator on $L^{p}(\mathbb{G})$ for $1\leq p\leq +\infty$ and for $t>0$; \\ \n\item[3)] the semi-group $H_t$ admits a convolution kernel $H_{t}f=f\ast h_{t}$ where $h_{t}(x)=h(x, t) \in \mathcal{C}^{\infty}(\mathbb{G}\times]0, +\infty[)$ is the heat kernel which satisfies the following points: \n\begin{enumerate} \n\item $(\partial_{t}+\mathcal{J})h_{t}=0$ on $\mathbb{G}\times]0, +\infty[$, \\ \n\item $h(x, t)=h(x^{-1}, t)$, $h(x, t)\geq 0$ and $\int_{\mathbb{G}}h(x, t)dx=1$, \\ \n\item $h_{t}$ has the semi-group property: $h_{t}\ast h_{s}=h_{t+s}$ for $t, s>0$, \\ \n\item $h(\delta_{\alpha}[x], \alpha^{2}t)=\alpha^{-N}h(x, t)$, \\ \n\item For every $t>0$, $x\mapsto h(x, t)$ belongs to the Schwartz class on $\mathbb{G}$. \\ \n\end{enumerate} \n\item[4)] $ \|H_{t}f-f \|_ {L^p}\to 0$ if $t\to 0$ for $f \in L^{p}(\mathbb{G})$ and $1\leq p < +\infty$; \\ \n\item[5)] If $f\in L^{p}(\mathbb{G})$, $1\leq p\leq +\infty$, then the function $u(x, t)=H_{t}f(x)\in \mathcal{C}^{\infty}(\mathbb{G}\times \mathbb{R}^{+})$ is a solution of the heat equation: \n$$ \qquad\left \lbrace\begin{array}{l}(\frac{\partial}{\partial t}+\mathcal{J})u(x, t)=0\quad \mbox{for } \quad x\in \mathbb{G } \quad \mbox{and}\quad t>0\, ;\\[8mm] \nu(x, 0)=f(x)\qquad\quad \mbox{for } \quad x\in \mathbb{G }.\\ \n\end{array}\right.
$$ \n\\end{enumerate} \n\\end{Theoreme} \n\nFor a detailed proof of these and other important facts concerning the heat semi-group see \\cite{Folland2} and \\cite{Saka}.\\\\\n\nTo close this section we recall the definition of the sub-Laplacian fractional powers $\\mathcal{J}^s$ with $s>0$.\\\\ We write:\n$$\\mathcal{J}^s f(x)=\\underset{\\varepsilon \\to 0}{\\lim}\\frac{1}{\\Gamma(k-s)}\\int_{\\varepsilon}^{+\\infty}t^{k-s-1}\\mathcal{J}^k H_tf(x)dt$$\nfor all $f\\in \\mathcal{C}^{\\infty}(\\mathbb{G})$ with $k$ the smallest integer greater than $s$.\n\n\\section{Maximal functions and Muckenhoupt weights}\\label{Herbata}\nThere are several ways of defining maximal functions in stratified Lie groups and our principal reference is \\cite{Folland2}. In this article we will mainly work with the following function \n\\begin{Definition} Let $f\\in \\mathcal{S}'(\\mathbb{G})$ and $\\varphi\\in\\mathcal{S}(\\mathbb{G})$. \nThe maximal function $\\mathcal{M}_{\\varphi}$ is given by the expression \n\\begin{equation*} \n\\mathcal{M}_{\\varphi}f(x)=\\underset{00$, then\n\\begin{equation}\\label{Wilma} \n\\mathcal{M}_{\\varphi}f(x)\\leq C \\mathcal{M}_{B}f(x). \n\\end{equation} \n\\end{Lemme} \nWe will use this property in the sequel and we request the reader to consult the proof in \\cite{Folland2}. \\\\ \n\nThe reader can consult \\cite{Folland2}, \\cite{Grafakos} and \\cite{Garcia} for a more detailed study of these important functions. \nFor our part, we will be interested in the relationship existing between these functions and weights. A \\emph{weight} $\\omega$ is, in a very general way, a locally integrable function on $\\mathbb{G}$ with values in $]0,+\\infty[$. \nFor a given weight $\\omega$ and a measurable set $E\\subset\\mathbb{G}$ we use the following notation: \n\\begin{equation*} \n\\omega(E)=\\int_{E}\\omega(x)dx. 
\n\\end{equation*} \nWe will define thus, for $1\\leq p<+\\infty$, weighted Lebesgue spaces by the norm \n\\begin{equation}\\label{pico1}\n \\|f\\|_ {L^{p}(\\omega)}=\\left(\\int_{\\mathbb{G}}|f(x)|^{p}\\omega(x)dx\\right)^{1\/p }\n\\end{equation}\nHistorically, the characterization of Muckenhoupt weights comes from the following problem: for a fixed $p\\in ]1, +\\infty[$ we want to know for which functions \n$\\omega$ one has the strong estimate\n\\begin{equation}\\label{MaximalProper}\n\\int_{\\mathbb{G}}\\mathcal{M}_{B}f(x)^{p}\\omega(x)dx\\leq C \\int_{\\mathbb{G}}|f(x)|^{p}\\omega(x)dx \\qquad (f\\in L^{p}(\\mathbb{G},\\omega)). \n\\end{equation} \nIt follows the condition below and the next definition (see \\cite{Grafakos}): \n\\begin{equation}\\label{brown} \n\\underset{B}{\\sup}\\left(\\frac{1}{|B|}\\int_{B}\\omega(x)dx\\right)\\left(\\frac{1}{|B|}\\int_{B}\\omega(x)^{-\\frac{1}{p-1}}dx\\right)^{p-1}<+\\infty. \n\\end{equation} \n\n\\begin{Definition} \nLet $\\mathbb{G}$ a stratified Lie group and let $1 \\sigma\\})d\\sigma. \n\\end{equation*} \n\\item[$\\bullet$] \\textbf{weak-$L^{p}$ spaces} or Lorentz spaces $L^{p,\\infty}(\\mathbb{G}, \\omega)$. We define them by\n\\begin{equation*}\n\\|f\\|_ { L^{p, \\infty}(\\omega)}=\\underset{\\sigma>0}{\\sup}\\{\\sigma \\; \\omega(\\{x\\in \\mathbb{G}:|f(x)|> \\sigma\\})^{1\/p}\\}.\n\\end{equation*} \n\\item[$\\bullet$]\\textbf{Sobolev spaces} $\\dot{W}^{s,p}(\\mathbb{G}, \\omega)$. For $\\omega\\in A_{p}$ we write: \n\\begin{equation*} \n\\|f\\|_ {\\dot{W}^{s, p}(\\omega)} =\\|\\mathcal{J}^{s\/2}f\\|_{L^{p}(\\omega)} \\qquad (10$ and $m$ an integer such that $m>s\/2$. 
\\\\ \n \nFinally, for Besov spaces of indices $(-\\beta, \\infty, \\infty)$ which appear in all the Improved Sobolev inequalities we have:\n\\begin{equation}\\label{BesovChaleurGhomo}\n\\|f\\|_{\\dot{B}^{-\\beta,\\infty}_{\\infty}}=\\underset{t>0}{\\sup}\\;\\;t^{\\beta\/2 } \\|H_{t}f\\|_ {L^\\infty} \n\\end{equation} \n\\end{enumerate}\n\\section{Spectral resolution of the sub-Laplacian}\\label{spectre}\n\nThe use in this article of spectral resolution for the sub-Laplacian consists roughly in expressing this operator by the formula $\\mathcal{J}=\\int_{0}^{+\\infty}\\lambda \\;dE_{\\lambda}$ and, by means of this characterization, build a family of new operators $m(\\mathcal{J})$ associated to a Borel function $m$. This kind of operators have some nice properties as shown in the next propositions.\n\\begin{Proposition} \nIf $m$ is a bounded Borel function on $]0,+\\infty[$ then the operator $m(\\mathcal{J})$ fixed by\n\\begin{equation}\\label{little} \nm(\\mathcal{J})=\\int_{0}^{+\\infty}m(\\lambda) \\;dE_{\\lambda }, \n\\end{equation} \nis bounded on $L^{2}(\\mathbb{G})$ and admits a convolution kernel $M$ i.e.: $m(\\mathcal{J})(f)=f\\ast M \\qquad (\\forall f\\in L^{2}(\\mathbb{G}))$. \n\\end{Proposition} \nSee \\cite{Folland2} and the references given therein for a proof. For our purposes, it will be particularly interesting to combine this result with the structure of dilation: \n\\begin{Lemme}\\label{madera} \nLet $m$ be a bounded function on $]0,+\\infty[$ and let $M$ be the kernel of the operator $m(\\mathcal{J})$. \nThen, for all $t>0$ we can build a bounded operator on $L^{2}(\\mathbb{G})$ by writing $m_{t}(\\mathcal{J})=m(t\\mathcal{J})$ with an associated kernel given by \n\\begin{equation*} \nM_{t}(x)=t^{-N\/2}M(t^{-1\/2}x). \n\\end{equation*} \n\\end{Lemme} \n\nFollowing \\cite{Hulanicki} and \\cite{Furioli2} we can improve the conclusion of the above proposition. 
Let $k\\in\\mathbb{N}$ and $m$ be a function of class $\\mathcal{C}^{k}(\\mathbb{R}^{+})$, we write \n\\begin{equation*} \n\\|m \\|_ {(k)}=\\underset{\\underset{\\lambda>0}{1\\leq r\\leq k}}{\\sup } (1+\\lambda)^{k }|m^{(r)}(\\lambda)|.\n\\end{equation*}\nThis formula gives us a necessary condition to obtain certain properties of the operators defined by (\\ref{little}):\n\\begin{Proposition}\\label{toge}\nLet $\\alpha\\in \\mathbb{N}$, $I=(i_{1},...,i_{n})$ be a multi-index and $p\\in [1, +\\infty]$. \nThere is a constant $C>0$ and an integer $k$ such that, for any function $m\\in \\mathcal{C}^{k}(\\mathbb{R}^{+})$ with $ \\|m\\|_{(k)}<+\\infty$, the kernel $M_{t}$ associated to the operator $m(t\\mathcal{J})$, $t>0$, satisfies \n \\begin{equation*} \n\\|(1+|\\cdot|)^{\\alpha}X^{I}M_{t}(\\cdot)\\|_{L^p}\\leq C(1+\\sqrt{t})^{\\alpha}t^{-(\\frac{N}{2p'}+\\frac{d(I)}{2})}\\|m\\|_{(k)}. \n\\end{equation*} \nwhere $\\frac{1}{p}+\\frac{1}{p'}=1$.\n\\end{Proposition} \n\\begin{Corollaire}\\label{black} Let $t>0$.\n\\begin{enumerate}\n\\item[1)] Let $m$ be the restriction on $\\mathbb{R}^{+}$ of a function defined on $\\mathcal{S}(\\mathbb{R})$. Then, the kernel $M$ \nof the operator $m(\\mathcal{J})$ is in $\\mathcal{S}(\\mathbb{G})$. \n\\item[2)] If $m$ is as above; and if it is vanishing at all orders near of the origin, then the kernel $M$ belongs to the \nspace $\\mathcal{S}_{0}(\\mathbb{G})$ formed by the functions of the Schwartz class which every moment is null. \n\\end{enumerate} \n\\end{Corollaire}\nFor more details and proofs see \\cite{Folland2}, \\cite{Furioli2} and \\cite{Hulanicki}.\n\\section{Improved Sobolev Inequalities on stratified groups: the proofs}\\label{demos} \n\nAs said in the introduction, inequalities given in theorem \\ref{smile00} depends on the theorem \\ref{CHAME}. 
We will thus begin proving this result in the following lines and we will continue our study by treating separately weak inequalities (\\ref{smile1}) and strong inequalities (\\ref{smile0}).\n\n\\subsection{The modified pseudo-inequality of Poincar\\'e}\\label{ISPGH} \n\nUnder the hypothesis of theorem \\ref{CHAME}, we have to prove the inequality \n\\begin{equation*} \n\\|\\mathcal{J}^{s\/2}f-H_{t}\\mathcal{J}^{s\/2}f\\|_{L^{1}(\\omega)}\\leq C \\; t^{\\frac{1-s}{2}}\\|\\nabla f\\|_{ L^{1}(\\omega)}. \n\\end{equation*} \nTo begin the proof, we observe that the following identity occurs: \n\\begin{equation*} \n(\\mathcal{J}^{s\/2}f-H_{t}\\mathcal{J}^{s\/2}f)(x)=\\left(\\int_{0}^{+\\infty}m(t\\lambda)dE_{\\lambda}\\right)t^{1-s\/2}\\mathcal{J} f(x), \n\\end{equation*} \nwhere we noted $m(\\lambda)=\\lambda^{s\/2-1}(1-e^{-\\lambda }$) for $\\lambda>0$, note that $m$ is a bounded function which tends to $0$ at infinity since $s\/2-1<0$. We break up this function by writing: \n$$m(\\lambda)=m_{0}(\\lambda)+m_{1}(\\lambda)=m(\\lambda)\\theta_{0}(\\lambda)+m(\\lambda)\\theta_{1}(\\lambda)$$ \nwhere we chose the auxiliary functions $\\theta_{0}(\\lambda), \\theta_{1}(\\lambda)\\in \\mathcal{C}^{\\infty}(\\mathbb{R}^{+})$ defined by: \n\\begin{eqnarray*} \n\\bullet\\quad \\theta_{0}(\\lambda) = 1 \\quad\\mbox{on } \\quad]0, 1\/2 ] \\quad\\mbox{and}\\quad 0 \\quad \\mbox{on}\\quad]1, +\\infty[, \\\\[5mm ] \n\\bullet\\quad \\theta_{1}(\\lambda) = 0 \\quad\\mbox{on} \\quad]0, 1\/2 ] \\quad\\mbox{and}\\quad 1 \\quad \\mbox{on }\\quad]1, +\\infty[, \n\\end{eqnarray*} \nso that $\\theta_{0}(\\lambda)+\\theta_{1}(\\lambda)\\equiv 1$. Then, we obtain the formula: \n\\begin{equation*} \n(\\mathcal{J}^{s\/2}f-H_{t}\\mathcal{J}^{s\/2}f)(x)=\\left(\\int_{0}^{+\\infty}m_{0}(t\\lambda)dE_{\\lambda}\\right)t^{1-s\/2}\\mathcal{J}f(x)+\n\\left(\\int_{0}^{+\\infty}m_{1}(t\\lambda)dE_{\\lambda}\\right)t^{1-s\/2}\\mathcal{J}f(x). 
\n\\end{equation*} \nIf we note $M^{(i)}_{t}$ the kernel of the operator fixed by $\\int_{0}^{+\\infty}m_{i}(t\\lambda)dE_{\\lambda}$ for $i=0,1$, we have: \n\\begin{equation*} \n(\\mathcal{J}^{s\/2}f-H_{t}\\mathcal{J}^{s\/2}f)(x)=t^{1-s\/2}\\mathcal{J}f\\ast M^{(0)}_{t}(x)+t^{1-s\/2}\\mathcal{J}f\\ast M^{(1)}_{t}(x). \n\\end{equation*} \nWe now multiply the above equality by a weight $\\omega\\in A_1$ to obtain the inequality\n\\begin{equation}\\label{poids1} \n\\int_{\\mathbb{G}}\\left|\\mathcal{J}^{s\/2}f-H_{t}\\mathcal{J}^{s\/2}\\right|\\omega(x)dx \\leq \\int_{\\mathbb{G}}\\left|t^{1-s\/2}\\mathcal{J}f\\ast M^{(0)}_{t}(x)\\right|\\omega(x)dx +\\int_{\\mathbb{G}}\\left|t^{1-s\/2}\\mathcal{J}f\\ast M^{(1)}_{t}(x)\\right|\\omega(x)dx.\n\\end{equation} \nWe will now estimate the right side of the above inequality by the two following propositions: \n\\begin{Proposition}\\label{Sir} \nFor the first integral in the right-hand side of (\\ref{poids1}) we have the inequality: \n\\begin{equation*} \n\\int_{\\mathbb{G}}\\left|t^{1-s\/2}\\mathcal{J}f\\ast M^{(0)}_{t}(x)\\right|\\omega(x)dx\\leq Ct^{\\frac{1-s}{2}}\\|\\nabla f\\|_{L^{1}(\\omega)} \n\\end{equation*}\n\\end{Proposition} \n\\emph{\\textbf{Proof}}. The function $m_{0}$ is the restriction on $\\mathbb{R}^{+}$ of a function belonging to the Schwartz class. This function satisfies the assumptions of corollary \\ref{black} which we apply after having noticed the identity\n\\begin{equation*} \nI=\\int_{\\mathbb{G}}\\left|t^{1-s\/2}\\mathcal{J}f\\ast M^{(0)}_{t}(x)\\right|\\omega(x)dx=\n\\int_{\\mathbb{G}}\\left|t^{1-s\/2}\\nabla f\\ast\\tilde{\\nabla}M^{(0)}_{t}(x)\\right|\\omega(x)dx \n\\end{equation*} \nwhere we noted $\\tilde{\\nabla}$ the gradient formed by vectors fields $(Y_{j})_{1\\leq j\\leq m}$. We have then\n\\begin{equation*} \nI\\leq \\int_{\\mathbb{G}}\\int_{\\mathbb{G}}t^{1-s\/2}|\\nabla f(y)||\\tilde{\\nabla} M^{(0)}_{t}(y^{-1}\\cdot x)|\\omega(x)dxdy. 
\n\\end{equation*} \nBy corollary \\ref{black}, one has $M^{(0)}_{t}\\in \\mathcal{S}(\\mathbb{G})$ and, since $M_{t}^{(0)}(x)=t^{-N\/2}M^{(0)}(t^{-1\/2}x)$, we can write \n$$K_{t}(x)=t^{1\/2 }|\\tilde{\\nabla}M^{(0)}_{t}(x^{-1 })|\\in L^{1}(\\mathbb{G}).$$ \nOne obtains\n\\begin{equation*} \nI\\leq \\int_{\\mathbb{G}}\\int_{\\mathbb{G}}t^{\\frac{1-s}{2}}|\\nabla f(y)|\\, K_{t}(x\\cdot y^{-1})\\omega(x)dxdy=\\int_{\\mathbb{G}}t^{\\frac{1-s}{2}}|\\nabla f(y)|\\; \n\\omega\\ast K_{t}(y)dy. \n\\end{equation*} \nBy definition of maximal functions and by the estimate (\\ref{Wilma}), we have the inequality: \n$$\\underset{t>0}{\\sup } \\;\\omega\\ast K_{t}(y)\\leq C\\, \\left(\\mathcal{M}_{B}\\;\\omega\\right)(y), $$ \nhence, \n\\begin{equation*}\n I\\leq Ct^{\\frac{1-s}{2}}\\int_{\\mathbb{G}}|\\nabla f(y)|\\mathcal{M}_{B}\\, \\omega(y)dy. \n\\end{equation*} \nIt remains to notice that, by assumption, $\\omega \\in A_{1}$ if and only if $(\\mathcal{M}_{B}\\, \\omega)(\\cdot)\\leq C \\;\\omega(\\cdot).$ \nWe obtain then the desired estimation: \n\\begin{equation*} \n\\int_{\\mathbb{G}}\\left|t^{1-s\/2}\\mathcal{J}f\\ast M^{(0)}_{t}(x)\\right|\\omega(x)dx\\leq Ct^{\\frac{1-s}{2}}\\int_{\\mathbb{G}}|\\nabla f(y)|\\omega(y)dy. \n\\end{equation*} \n\\begin{flushright}{$\\blacksquare$}\\end{flushright}\n\\begin{Proposition}\\label{PropoChame2}\nFor the last integral of (\\ref{poids1}) we have the inequality\n\\begin{equation*} \n\\int_{\\mathbb{G}}\\left|t^{1-s\/2}\\mathcal{J}f\\ast M^{(1)}_{t}(x)\\right|\\omega(x)dx\\leq Ct^{\\frac{1-s}{2}}\\|\\nabla f\\|_{L^{1}(\\omega)} \n\\end{equation*}\n\\end{Proposition} \n\\emph{\\textbf{Proof}}. Here, it is necessary to make an additional step. 
We cut out the function $m_{1}$ in the following way: \n\\begin{equation*}\nm_{1}(\\lambda)=\\left(\\frac{1-e^{-\\lambda}}{\\lambda}\\right)\\theta_{1}(\\lambda)=m_{a}(\\lambda)-m_{b}(\\lambda) \n\\end{equation*} \nwhere $m_{a}(\\lambda)=\\frac{1}{\\lambda}\\theta_{1}(\\lambda)$ and $m_{b}(\\lambda)=\\frac{e^{-\\lambda}}{\\lambda}\\theta_{1}(\\lambda)$. We will note $M^{(a)}_{t}$ and $M^{(b)}_{t}$ the associated kernels of these two operators. We obtain thus the estimate \n\\begin{equation}\\label{Rolling1} \n\\int_{\\mathbb{G}}\\left|t^{1-s\/2}\\mathcal{J}f\\ast M^{(1)}_{t}(x)\\right|\\omega(x)dx\\leq \\int_{\\mathbb{G}}\\left|t^{1-s\/2}\\mathcal{J}f\\ast M^{(a)}_{t}(x)\\right|\\omega(x)dx+ \n\\int_{\\mathbb{G}}\\left|t^{1-s\/2}\\mathcal{J}f\\ast M^{(b)}_{t}(x)\\right|\\omega(x)dx \n\\end{equation} \nObserve that $m_{b}\\in \\mathcal{S}(\\mathbb{R}^{+})$ and then $M^{(b)}_{t}\\in \\mathcal{S}(\\mathbb{G})$. We have the next lemma for the last integral in (\\ref{Rolling1}).\n\\begin{Lemme}\\label{BishopLem0}\n\\begin{equation*} \n\\int_{\\mathbb{G}}\\left|t^{1-s\/2}\\mathcal{J}f\\ast M^{(b)}_{t}(x)\\right|\\omega(x)dx\\leq Ct^{\\frac{1-s}{2}}\\|\\nabla f\\|_{L^{1}(\\omega)}.\n\\end{equation*} \n\\end{Lemme} \n\\emph{\\textbf{Proof}}. The proof is straightforward and follows the same steps as those of the preceding proposition \\ref{Sir}. \n\\begin{flushright}{$\\blacksquare$}\\end{flushright} \nWe treat the other part of (\\ref{Rolling1}) with the following lemma: \n\\begin{Lemme} \\label{BishopLem}\n\\begin{equation}\\label{Bishop} \n\\int_{\\mathbb{G}}\\left|t^{1-s\/2}\\mathcal{J}f\\ast M^{(a)}_{t}(x)\\right|\\omega(x)dx\\leq Ct^{\\frac{1-s}{2}}\\|\\nabla f\\|_{L^{1}(\\omega)} \n\\end{equation} \\end{Lemme} \n\\emph{\\textbf{Proof}}. 
We consider the auxiliary function \n$$\psi(\lambda)=\theta_{0}(\lambda\/2)-\theta_{0}(\lambda)=\theta_{1}(\lambda)-\theta_{1}(\lambda\/2)$$ \nin order to obtain the identity $$\sum_{j=0}^{+\infty}\psi(2^{-j}\lambda)=\theta_{1}(\lambda).$$ \nWe have then \n$$m_{a}(t\lambda)=\frac{1}{t\lambda}\sum_{j=0}^{+\infty}\psi(2^{-j}t\lambda)=\sum_{j=0}^{+\infty}2^{-j}\tilde{\psi}(2^{-j}t\lambda)$$ \nwhere $\tilde{\psi}(\lambda)=\frac{\psi(\lambda)}{\lambda}$ is a function in $\mathcal{C}^{\infty}_{0}(\mathbb{R}^{+})$. By corollary \ref{black}, the kernel $\tilde{K}$ associated with the function $\tilde{\psi}$ belongs to $\mathcal{S}_{0}(\mathbb{G})$. Then, from the point of view of operators, one has: \n\begin{equation}\label{Stone} \nM^{(a)}_{t}(x)=\sum^{+\infty}_{j=0}2^{-j}\tilde{K}_{j, t}(x) \n\end{equation} \nwhere $\tilde{K}_{j,t}(x)=2^{jN\/2}t^{-N\/2}\tilde{K}(2^{j\/2}t^{-1\/2}x)$. With formula (\ref{Stone}) we return to the left side of (\ref{Bishop}): \n$$\int_{\mathbb{G}}\left|t^{1-s\/2}\mathcal{J}f\ast M^{(a)}_{t}(x)\right|\omega(x)dx\leq\sum^{+\infty}_{j=0}2^{-j}\int_{\mathbb{G}}\left|t^{1-s\/2}\mathcal{J}f\ast \n\tilde{K}_{j,t}(x)\right|\omega(x)dx.$$\nUsing the definition of the sub-Laplacian and the properties of the vector fields, we have\n\begin{equation}\label{Chicago}\int_{\mathbb{G}}\left|t^{1-s\/2}\mathcal{J}f\ast M^{(a)}_{t}(x)\right|\omega(x)dx\leq \sum^{+\infty}_{j=0}2^{-j}t^{1-s\/2}\int_{\mathbb{G}}\n\int_{\mathbb{G } }|\nabla f(y)| |\tilde{\nabla}\tilde{K}_{j, t}(y^{-1}\cdot x)|\omega(x)dxdy.\n\end{equation} \nWe note this time $K_{j, t}(x)=2^{-j\/2}t^{1\/2}|\tilde{\nabla}\tilde{K}_{j, t}(x^{-1})|$ to obtain the following formula for the right side of (\ref{Chicago}):\n$$\sum^{+\infty}_{j=0}2^{-j\/2}t^{\frac{1-s}{2}}\int_{\mathbb{G}}\int_{\mathbb{G}}|\nabla f(y)|K_{j,t}(x\cdot y^{-1})\omega(x)dxdy =
\n\sum^{+\infty}_{j=0}2^{-j\/2}t^{\frac{1-s}{2}}\int_{\mathbb{G}}|\nabla f(y)|\;\omega\ast K_{j, t}(y)dy.$$ \nIt remains to apply the same arguments as in proposition \ref{Sir}, namely the assumption $\omega \in A_{1}$ and, for $K_{j,t}$, the estimates $\underset{j, t>0}{\sup } \;\omega \ast K_{j, t}(y)\leq C\, (\mathcal{M}_{B} \; \omega)(y)\leq C\, \omega(y).$ \nWe then finally get the inequality \n\begin{equation*} \n\int_{\mathbb{G}}\left|t^{1-s\/2}\mathcal{J}f\ast M^{(a)}_{t}(x)\right|\omega(x)dx\leq C\, t^{\frac{1-s}{2}}\sum_{j=0}^{+\infty}2^{-j\/2}\int_{\mathbb{G}}|\nabla f(y)|\omega(y)dy = C\, t^{\frac{1-s}{2}} \|\nabla f\|_{L^{1}(\omega)}, \n\end{equation*} \nwhich ends the proof of lemma \ref{BishopLem}.\n\begin{flushright}{$\blacksquare$}\end{flushright} \nWith these last two lemmas we conclude the proof of proposition \ref{PropoChame2}. Returning now to formula (\ref{poids1}), propositions \ref{Sir} and \ref{PropoChame2} complete the proof of theorem \ref{CHAME}.\n\begin{flushright}{$\blacksquare$}\end{flushright} \n\subsection{Weak inequalities}\n\nTo begin the proof, notice that the operator $\mathcal{J}^{s\/2}$ realizes an isomorphism between the spaces $\dot{B}^{-\beta, \infty}_{\infty}(\mathbb{G})$ and $\dot{B}^{-\beta-s, \infty}_{\infty}(\mathbb{G})$ (see \cite{Saka}). 
Thus inequality (\\ref{smile1}) rewrites as: \n\\begin{equation}\\label{ForIsoLap}\n\\|\\mathcal{J}^{s\/2}f\\|_{L^{q, \\infty}(\\omega)}\\leq C \\|\\nabla f\\|_{L^{1}(\\omega)}^{\\theta}\\|\\mathcal{J}^{s\/2}f\\|_{\\dot{B}^{-\\beta-s, \\infty}_{\\infty}}^{1-\\theta} \n\\end{equation} \nBy homogeneity, we can suppose that the norm $\\|\\mathcal{J}^{s\/2}f \\|_{\\dot{B}^{-\\beta-s,\\infty}_{\\infty}}$ is bounded by $1$; then we have to show \n\\begin{equation}\\label{CHAMEFAIBLE2} \n\\|\\mathcal{J}^{s\/2}f \\|_{L^{q, \\infty}(\\omega)}\\leq C \\|\\nabla f \\|_{L^{1}(\\omega)}^{\\theta}. \n\\end{equation} \nWe have thus to evaluate the expression $\\omega\\left(\\{x\\in \\mathbb{G}:|\\mathcal{J}^{s\/2}f(x)|> 2\\alpha\\}\\right)$ for all $\\alpha>0$. If we use the thermic definition of the Besov space (\\ref{BesovChaleurGhomo}), we have \n$$\\|\\mathcal{J}^{s\/2}f\\|_{\\dot{B}^{-\\beta-s,\\infty}_{\\infty}}\\leq 1 \\iff \\underset{t>0}{\\sup}\\left\\{t^{\\frac{\\beta+s}{2}}\\|H_{t}\\mathcal{J}^{s\/2}f\\|_{L^\\infty}\\right\\} \\leq 1.$$ \nBut, if one fixes $t_{\\alpha}=\\alpha^{-\\left(\\frac{2}{\\beta+s}\\right)}$, we obtain $\\|H_{t_{\\alpha}}\\mathcal{J}^{s\/2}f \\|_{L^\\infty}\\leq \\alpha $. \nNote also that with the definition of parameter $\\beta$ one has $t_{\\alpha}=\\alpha^{-\\frac{2(q-1)}{(1-s)}}$. Therefore, since we have the following set inclusion\n$$\\left\\{x\\in \\mathbb{G}: |\\mathcal{J}^{s\/2}f(x)|> 2\\alpha\\right\\}\\subset \\left\\{x\\in \\mathbb{G}: |\\mathcal{J}^{s\/2}f(x)-H_{t_{\\alpha}}\\mathcal{J}^{s\/2}f(x)|> \\alpha\\right\\}, $$ \nthe Tchebytchev inequality implies \n\\begin{equation*} \n\\alpha^{q}\\omega\\left(\\{x\\in \\mathbb{G}: |\\mathcal{J}^{s\/2}f(x)|> 2\\alpha\\}\\right)\\leq \\alpha^{q-1}\\int_{\\mathbb{G } }|\\mathcal{J}^{s\/2}f(x)-H_{t_{\\alpha}}\\mathcal{J}^{s\/2}f(x)|\\omega(x)dx. 
\n\end{equation*} \nAt this point, we use the modified Poincar\'e pseudo-inequality, given by theorem \ref{CHAME}, to estimate the right side of the preceding inequality: \n\begin{equation}\label{T2g} \n\alpha^{q}\omega\left(\{x\in \mathbb{G}:|\mathcal{J}^{s\/2}f(x)|> 2\alpha\}\right)\leq C \alpha^{q-1}\;t_{\alpha}^{\frac{1-s}{2}}\int_{\mathbb{G}}|\nabla f(x)|\omega(x)dx. \n\end{equation} \nBut, by the choice of $t_{\alpha}$, one has $\alpha^{q-1}\alpha^{-\frac{2(q-1)}{(1-s)}\frac{(1-s)}{2}}=1 $. Then (\ref{T2g}) implies the inequality \n$$\qquad\alpha^{q}\omega\left(\{x\in \mathbb{G}:|\mathcal{J}^{s\/2}f(x)|> 2\alpha\}\right)\leq C \|\nabla f \|_{L^{1}(\omega)}\;; $$ \nand, finally, using definition (\ref{DefDebilSoboLe}) of the weak Sobolev spaces we obtain \n$$\qquad \qquad \|\mathcal{J}^{s\/2}f \|^{q}_{L^{q, \infty}(\omega)}\leq C \|\nabla f \|_{L^{1}(\omega)},$$ \nwhich is the desired result.\begin{flushright}{$\blacksquare$}\end{flushright} \n\subsection{Strong inequalities}\nWhen $s=0$ in the weak inequalities it is possible to obtain stronger estimates. To achieve this, we will need an intermediate step: \n\begin{Proposition}\label{escalera2} \nLet $10$. We have then, by the thermic definition of Besov spaces, the estimate $\|H_{t}f \|_{L^\infty}\leq \alpha$. We now use the characterization of the Lebesgue norm by the distribution function: \n\begin{equation}\label{formulaG} \n\frac{1}{5^{q}} \|f\|_{L^{q}(\omega)}^{q}=\int_{0}^{+\infty}\omega\left(\{x\in \mathbb{G}: |f(x)|> 5\alpha\}\right)d(\alpha^{q}). 
\n\end{equation} \nIt now remains to estimate $\omega(\{x\in \mathbb{G}: |f(x)|> 5\alpha\})$ and for this we introduce the following thresholding function: \n\begin{equation*}\label{teta}\n\Theta_{\alpha}(t)=\left\lbrace\begin{array}{l} \Theta_{\alpha}(-t)=-\Theta_{\alpha}(t)\\[4mm] \n0 \qquad \qquad\qquad\mbox{ if }\qquad 0\leq t \leq \alpha\\[4mm]\nt-\alpha \qquad\qquad\mbox{ if } \qquad\alpha\leq t \leq M\alpha \\[4mm]\n(M-1)\alpha \qquad\mbox{ if } \qquad t > M\alpha \n\end{array}\right.\\[3mm] \n\end{equation*}\nHere, $M$ is a parameter which depends on $q$ and which we will suppose for the moment to be larger than $10$. \\\n\nThis cut-off function enables us to define a new function $f_{\alpha}=\Theta_{\alpha}(f)$. The next lemma collects some significant properties of this function $f_{\alpha}$: \n\begin{Lemme}\label{Taboo} \n\begin{enumerate} \n\item[]\n\item[1)] The set $\{x\in \mathbb{G}: |f(x)|> 5\alpha\}$ is included in the set $\{x\in \mathbb{G}: |f_{\alpha }(x)|> 4\alpha\}$. \\ \n\item[2)] On the set $\{x\in \mathbb{G}: |f(x)|\leq M\alpha\}$ one has the estimate $|f-f_{\alpha }|\leq \alpha$. \\ \n\item[3)] If $f\in \mathcal{C}^{1}(\mathbb{G})$, one has the equality $\nabla f_{\alpha}=(\nabla f)\mathds{1}_{\{\alpha\leq|f|\leq M\alpha\}}$ \nalmost everywhere. \n\end{enumerate} \n\end{Lemme} \nFor a proof see \cite{Ledoux}. \\ \n\nLet us now return to (\ref{formulaG}). By the first point of the lemma above we have\n\begin{equation}\label{cray} \n\int_{0}^{+\infty}\omega\left(\left\{x\in \mathbb{G}: |f(x)|> 5\alpha\right\}\right)d(\alpha^{q})\leq \int_{0}^{+\infty} \omega\left(\{x\in \mathbb{G}: |f_{\alpha }(x)|> 4\alpha\}\right)d(\alpha^{q})=I. 
\n\end{equation} \nWe set $A_{\alpha}=\{x\in \mathbb{G}: |f_{\alpha }(x)|> 4\alpha\}$, $B_{\alpha}=\{x\in \mathbb{G}: |f_{\alpha}(x)-H_{t_{\alpha}}(f_{\alpha})(x)|> \alpha\}$ and $C_{\alpha}=\{x\in \mathbb{G}: |H_{t_{\alpha}}(f_{\alpha}-f)(x)|> 2\alpha\}$. Now, by linearity of $H_{t}$ we can write $f_{\alpha}=f_{\alpha}-H_{t_{\alpha}}(f_{\alpha})+H_{t_{\alpha}}(f_{\alpha}-f)+H_{t_{\alpha}}(f)$. Then, taking into account the fact that $\|H_{t}f\|_{L^\infty}\leq \alpha$, we obtain $A_{\alpha}\subset B_{\alpha}\cup C_{\alpha}$. Returning to (\ref{cray}), this set inclusion gives us the following inequality: \n\begin{equation}\label{dosG} \nI\leq \int_{0}^{+\infty}\omega\left(B_{\alpha}\right)d(\alpha^{q})+\int_{0}^{+\infty}\omega\left(C_{\alpha}\right)d(\alpha^{q}). \n\end{equation} \nWe will study and estimate these two integrals, which we will call $I_{1}$ and $I_{2}$ respectively, in the following two lemmas: \n\begin{Lemme} For the first integral of (\ref{dosG}) we have the estimate: \n\begin{equation}\label{ARG} \nI_{1}=\int_{0}^{+\infty}\omega\left(B_{\alpha}\right)d(\alpha^{q})\leq C\, q\log(M) \|\nabla f\|_{L^{1}(\omega)} \n\end{equation} \n\end{Lemme} \n\textbf{\emph{Proof}.} Tchebytchev's inequality implies \n\begin{equation*} \n\omega\left(B_{\alpha}\right)\leq \alpha^{-1}\int_{\mathbb{G}}|f_{\alpha}(x)-H_{t_{\alpha}}(f_{\alpha })(x)|\omega(x)dx. 
\n\end{equation*} \nUsing the modified Poincar\'e pseudo-inequality (\ref{CHAME1}) with $s=0$ in the above integral we obtain \n$$\omega\left(B_{\alpha}\right)\leq C \, \alpha^{-1} \, t_{\alpha}^{1\/2}\int_{\mathbb{G}}|\nabla f_{\alpha }(x)|\omega(x)dx.$$ \nNote that the choice of $t_{\alpha}$ fixed above gives $t_{\alpha}^{1\/2}=\alpha^{1-q}$; hence, by the third point of lemma \ref{Taboo}, we have\n$$\omega\left(B_{\alpha}\right)\leq C \, \alpha^{-q}\int_{\{\alpha\leq|f|\leq M\alpha \}}|\nabla f(x)|\omega(x)dx.$$ \nWe now integrate the preceding expression with respect to $d(\alpha^{q})$: \n$$I_{1}\leq C\int_{0}^{+\infty}\alpha^{-q}\left(\int_{\{\alpha\leq|f|\leq M\alpha \}}|\nabla f(x)|\omega(x)dx\right)d(\alpha^{q}) = C\;q\int_{\mathbb{G}}|\nabla f(x)|\left(\int_{\frac{|f|}{M}}^{|f|}\frac{d\alpha}{\alpha}\right)\omega(x)dx.$$ \nIt then follows that $I_{1}\leq C\, q\, \log(M) \|\nabla f \|_{L^{1}(\omega)}$, which is the estimate needed for the first integral.\n\begin{flushright}{$\blacksquare$}\end{flushright} \n\begin{Lemme}\label{papas}\nFor the second integral of (\ref{dosG}) one has the following result: \n\begin{equation*}\nI_{2}=\int_{0}^{+\infty}\omega(C_{\alpha})d(\alpha^{q})\leq \frac{q}{q-1}\;\frac{1}{M^{q-1}} \|f\|_{L^{q}(\omega)}^{q} \n\end{equation*} \n\end{Lemme} \n\textbf{\emph{Proof}.} For the proof of this lemma, we write: $$|f-f_{\alpha }|=|f-f_{\alpha }|\mathds{1}_{\{|f|\leq M \alpha\}}+|f-f_{\alpha }|\mathds{1}_{\{|f|> M\alpha\}}.$$ \nAs the distance between $f$ and $f_{\alpha}$ is at most $\alpha$ on the set $ \{x\in \mathbb{G}: |f(x)|\leq M \alpha\}$, one has the inequality \n$$|f-f_{\alpha }|\leq\alpha+|f|\mathds{1}_{\{|f|> M \alpha\}}.$$ \nBy applying the heat semi-group to both sides of this inequality we obtain $H_{t_{\alpha}}(|f-f_{\alpha }|)\leq \alpha + H_{t_{\alpha}}(|f|\mathds{1}_{\{|f|> M \alpha\}})$ and we then have the following 
set inclusion $C_{\alpha}\subset \left\{x\in \mathbb{G}: H_{t_{\alpha}}(|f|\mathds{1}_{\{|f|> M \alpha\}})>\alpha\right\}$.\nThus, considering the measure of these sets and integrating with respect to $d(\alpha^{q})$, we obtain \n\begin{equation*} \nI_{2}=\int_{0}^{+\infty}\omega\left(C_{\alpha}\right)d(\alpha^{q})\leq\int_{0}^{+\infty}\omega\bigg(\{H_{t_{\alpha}}(|f|\mathds{1}_{\{|f|> M\alpha\}})>\alpha\}\bigg)d(\alpha^{q}). \n\end{equation*} \nApplying the Tchebytchev inequality, we now obtain the estimate \n$$I_{2}\leq\int_{0}^{+\infty}\alpha^{-1}\bigg(\int_{\mathbb{G}}H_{t_{\alpha}}\left(|f|\mathds{1}_{\{|f|> M\alpha\}}\right)\omega(x)dx\bigg)d(\alpha^{q}),$$ \nthen by Fubini's theorem we have\n$$I_{2}\leq q\int_{\mathbb{G}}|f(x)|\bigg(\int_{0}^{+\infty}\mathds{1}_{\{|f|> M\alpha\}}\alpha^{q-2}d\alpha\bigg)\omega(x)dx=\frac{q}{q-1}\int_{\mathbb{G}}|f(x)|\frac{|f(x)|^{q-1}}{M^{q-1}}\omega(x)dx= \frac{q}{q-1}\frac{1}{M^{q-1}}\|f\|_{L^{q}(\omega)}^{q}.$$ \nThis concludes the proof of the lemma.\n\begin{flushright}{$\blacksquare$}\end{flushright} \nWe finish the proof of proposition \ref{escalera2} by combining these two lemmas, \textit{i.e.}:\n$$\frac{1}{5^{q}} \|f\|_{L^q(\omega)}^{q}\leq Cq\, \log(M) \|\nabla f\|_{L^1(\omega)}+\frac{q}{q-1}\frac{1}{M^{q-1}} \|f\|_{L^q(\omega)}^{q}.$$ \nSince we supposed all the norms bounded and $M\gg 1$, we finally have\n$$\left(\frac{1}{5^{q}}-\frac{q}{q-1}\frac{1}{M^{q-1}}\right) \|f\|_{L^q(\omega)}^{q}\leq C q\, \log(M) \|\nabla f\|_{L^1(\omega)}.$$ \nChoosing $M=M(q)$ large enough so that the constant on the left-hand side is positive gives the desired estimate. \n\begin{flushright}{$\blacksquare$}\end{flushright} \nThe proof of theorem \ref{smile00} is not yet completely finished. 
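As a purely numerical aside (an illustration, not part of the proof), the constant appearing in the final display of the proof of proposition \ref{escalera2} can be checked: for each $q>1$ a moderate, $q$-dependent choice of $M$ makes $\frac{1}{5^{q}}-\frac{q}{q-1}\frac{1}{M^{q-1}}$ strictly positive. The function names below are ours:

```python
# Sanity check (illustration only): for q > 1, choosing M large enough
# makes the constant 1/5^q - (q/(q-1)) * M^(1-q) strictly positive,
# so the L^q norm on the left-hand side can be absorbed.

def constant(q, M):
    return 5.0 ** (-q) - (q / (q - 1.0)) * M ** (1.0 - q)

def smallest_admissible_M(q, start=10):
    # smallest integer M >= start making the constant positive
    M = start
    while constant(q, M) <= 0:
        M += 1
    return M

for q in (1.5, 2.0, 3.0):
    M = smallest_admissible_M(q)
    print(q, M, constant(q, M))
```

For $q=2$, for instance, the smallest admissible integer is $M=51$, comfortably above the provisional bound $M\geq 10$ assumed in the proof.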
The last step is provided by the following: \n\begin{Proposition}\label{escalera3} \nIn proposition \ref{escalera2} it is possible to consider only the two assumptions \n$\nabla f\in L^{1}(\mathbb{G},\omega)$ and $f\in \dot{B}^{-\beta, \infty}_{\infty}(\mathbb{G})$. \n\end{Proposition}\n\textbf{\emph{Proof}.} For the proof of this proposition we will build a homogeneous Littlewood-Paley-type approximation of $f$ by writing: \n\begin{equation*} \nf_{j}=\left(\int_{0}^{+\infty}\left(\varphi(2^{-2j}\lambda)-\varphi(2^{2j}\lambda)\right)dE_{\lambda}\right)(f) \qquad (j\in \mathbb{N})\n\end{equation*} \nwhere $\varphi$ is a $\mathcal{C}^{\infty}(\mathbb{R}^+)$ function such that $\varphi=1$ on $]0,1\/4[$ and $\varphi=0$ on $[1, +\infty[$.\n\begin{Lemme} \nIf $q>1$, $\nabla f\in L^{1}(\mathbb{G}, \omega)$ and $f\in \dot{B}^{-\beta, \infty}_{\infty}(\mathbb{G})$, then $\nabla f_j\in L^{1}(\mathbb{G}, \omega)$, $f_j\in \dot{B}^{-\beta, \infty}_{\infty}(\mathbb{G})$ and $f_{j}\in L^{q}(\mathbb{G},\omega)$. \n\end{Lemme} \n\textbf{\emph{Proof}.} The fact that $\nabla f_j\in L^{1}(\mathbb{G}, \omega)$ and $f_j\in \dot{B}^{-\beta, \infty}_{\infty}(\mathbb{G})$ is an easy consequence of the definition of $f_j$. For $f_{j}\in L^{q}(\mathbb{G},\omega)$ the starting point is given by the relation: \n\begin{equation*}\nf_{j}=\left(\int_{0}^{+\infty}m(2^{-2j}\lambda)\, dE_{\lambda}\right)2^{-2j}\mathcal{J}(f), \n\end{equation*} \nwhere we set $$m(2^{-2j}\lambda)=\frac{\varphi(2^{-2j}\lambda)-\varphi(2^{2j}\lambda)}{2^{-2j}\lambda}.$$ \nObserve that the function $m$ vanishes near the origin and satisfies the assumptions of corollary \ref{black}. 
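For illustration only, the support properties of the spectral window defining $f_{j}$ can be checked numerically with an explicit cut-off; the smoothstep $\varphi$ below is our own choice, and any $\mathcal{C}^{\infty}$ function equal to $1$ on $]0,1\/4[$ and $0$ on $[1,+\infty[$ would do:

```python
# Illustration with a concrete cut-off phi (our choice of smoothstep; any
# smooth function with phi = 1 on (0, 1/4) and phi = 0 on [1, +inf) works).

def phi(x):
    if x <= 0.25:
        return 1.0
    if x >= 1.0:
        return 0.0
    t = (x - 0.25) / 0.75
    return 1.0 - t * t * (3.0 - 2.0 * t)   # smooth interpolation

def window(lam, j):
    # spectral multiplier phi(2^{-2j} lam) - phi(2^{2j} lam) defining f_j
    return phi(2.0 ** (-2 * j) * lam) - phi(2.0 ** (2 * j) * lam)

j = 3
# the window equals 1 on the dyadic band [2^{-2j}, 2^{2j-2}] ...
assert all(abs(window(lam, j) - 1.0) < 1e-12
           for lam in (2.0 ** (-2 * j), 1.0, 2.0 ** (2 * j - 2)))
# ... and vanishes both near the origin and far out, so the quotient
# m(2^{-2j} lam) = window / (2^{-2j} lam) indeed vanishes near 0
assert window(2.0 ** (-2 * j - 2), j) == 0.0
assert window(2.0 ** (2 * j), j) == 0.0
```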
We then obtain the following identity, where $M_{j}\in \mathcal{S}(\mathbb{G})$ is the kernel of the operator $m(2^{-2j}\mathcal{J})$: \n$$f_{j}=2^{-2j}\mathcal{J}f\ast M_{j}=2^{-2j}\nabla f\ast \tilde{\nabla}M_{j},$$ \nwhere $\tilde{\nabla}$ denotes the gradient with respect to the right-invariant vector fields, and we used property (\ref{izder}). Let us now compute the $L^{q}(\mathbb{G},\omega)$ norm in the preceding identity:\n$$\|f_{j}\|_{L^{q}(\omega)}=\|2^{-2j}\nabla f\ast\tilde{\nabla}M_{j}\|_{L^{q}(\omega)}\leq 2^{-2j}\|\nabla f\|_{L^{1}(\omega)}\|\tilde{\nabla}M_{j}\|_{L^{q}(\omega)}.$$ \nFinally, we obtain: \n$$\|f_{j}\|_{L^{q}(\omega)}\leq C\, 2^{j(N(1-\frac{1}{q})-1)} \|\nabla f \|_{L^{1}(\omega)}<+\infty.$$ \n\begin{flushright}{$\blacksquare$}\end{flushright} \n\nThanks to this estimate, we can apply proposition \ref{escalera2} to $f_{j}$, whose $L^{q}(\mathbb{G}, \omega)$ norm is finite, and we obtain: \n$$\|f_{j}\|_{L^{q}(\omega)}\leq C \|\nabla f_j\|_{L^{1}(\omega)}^{\theta} \|f_j\|_{\dot{B}^{-\beta, \infty}_{\infty}}^{1-\theta}.$$ \nNow, since $f\in \dot{B}^{-\beta, \infty}_{\infty}(\mathbb{G})$, we have $f_{j}\rightharpoonup f$ in the sense of distributions. It follows that \n$$\|f\|_{L^{q}(\omega)}\leq \underset{j \to +\infty}{\liminf} \|f_{j}\|_{L^{q}(\omega)}\leq C \|\nabla f\|_{L^{1}(\omega)}^{\theta}\n \|f\|_{\dot{B}^{-\beta, \infty}_{\infty}}^{1-\theta}.$$ \nWe have thus reduced the proof to the two initial assumptions, namely $\nabla f\in L^{1}(\mathbb{G}, \omega)$ and $f\in\dot{B}^{-\beta, \infty}_{\infty}(\mathbb{G})$. The strong inequalities (\ref{smile0}) are now completely proved for stratified groups. \n\begin{flushright}{$\blacksquare$}\end{flushright} \n\n\subsection{Maximal function and Improved Sobolev inequalities}\nWe now study theorem \ref{PGLR}. 
Just as for the weak inequalities (\ref{ForIsoLap}), we can rewrite (\ref{PGLR1}) in the following way \n\begin{equation*}\n\| \mathcal{J}^{\frac{s-s_1}{2}}f\|_{L^q(\omega)}\leq C \|f\|_{L^p(\omega)}^\theta\|f\|_{\dot{B}^{-\beta-s_1,\infty}_{\infty}}^{1-\theta}\n\end{equation*}\nwhere $10$ and $T$ will be fixed in the sequel.\\\n\nTo study each of these integrals we will use the estimates\n\begin{enumerate}\n\item[$\bullet$] $|H_tf(x)|\leq C \mathcal{M}_B f(x)$ \qquad\qquad\qquad\;(by lemma \ref{Led11})\\\n\n\item[$\bullet$] $|H_tf(x)|\leq C t^{\frac{-\beta-s_1}{2}}\|f\|_{\dot{B}^{-\beta-s_1, \infty}_{\infty}}$ \qquad (by the thermic definition of Besov spaces (\ref{BesovChaleurGhomo}))\\\n\end{enumerate}\nThen, applying these inequalities in (\ref{Kashmor}) we obtain\n$$|\mathcal{J}^{\frac{-\alpha}{2}}f(x)|\leq \frac{c_1}{\Gamma(\frac{\alpha}{2})}T^{\frac{\alpha}{2}} \mathcal{M}_B f(x)+ \frac{c_2}{\Gamma(\frac{\alpha}{2})}T^{\frac{\alpha-\beta-s_1}{2}}\|f\|_{\dot{B}^{-\beta-s_1, \infty}_{\infty}}.$$\nWe now fix $$T=\left(\frac{\|f\|_{\dot{B}^{-\beta-s_1, \infty}_{\infty}}}{\mathcal{M}_B f(x)}\right)^{ \frac{2}{\beta+s_1}}$$\nand we get \n$$|\mathcal{J}^{\frac{-\alpha}{2}}f(x)|\leq \frac{c_1}{\Gamma(\frac{\alpha}{2})}\mathcal{M}_B f(x)^{1-\frac{\alpha}{\beta+s_1}}\|f\|^{\frac{\alpha}{\beta+s_1}}_{\dot{B}^{-\beta-s_1, \infty}_{\infty}}+ \frac{c_2}{\Gamma(\frac{\alpha}{2})}\mathcal{M}_B f(x)^{1-\frac{\alpha}{\beta+s_1}}\|f\|^{\frac{\alpha}{\beta+s_1}}_{\dot{B}^{-\beta-s_1, \infty}_{\infty}}.$$\nSince $\frac{\alpha}{\beta+s_1}=1-\theta$, we have\n$$|\mathcal{J}^{\frac{-\alpha}{2}}f(x)|\leq \frac{c}{\Gamma(\frac{\alpha}{2})}\mathcal{M}_B f(x)^{\theta}\|f\|^{1-\theta}_{\dot{B}^{-\beta-s_1, \infty}_{\infty}}.$$\nMultiplying this inequality by an $A_p$ weight $\omega$, using the fact that $A_p\subset A_q$ if $p 1 $ and $h < \zeta$ (\ref{Eq:spectrum}).\nFor these states the EE can be 
evaluated\nanalogously to the ground state of $ H_I $\n(see also \cite{KaZi} for a related analytical study).\n\n\begin{figure}\n\centering\n\includegraphics[scale=0.45]{fig3.eps}\n\caption{(Color online) $S_L$ vs $L$ at $h=1.0$ for different $\zeta$. \nThe scaling behavior changes from $S_L\sim\frac{1}{6}\ln L$ \non the critical line separating the ordered and disordered phases to \n$S_L\sim\frac{1}{3}\ln L$ inside the current-carrying phase.}\n\label{Fig:scaling}\n\end{figure}\n\nLet us consider in detail the entanglement properties of the different phases\nshown in Fig.\ref{Fig:phase}.\nFirst, we compare the scaling of the\nentanglement in the non-current-carrying critical regions and in\nthe region where an energy current is present.\nFig.\ref{Fig:scaling} shows the results of the simulations for the scaling\nbehaviour of the ground-state EE with $\zeta<1$ and $h=1$.\nFor all values of $\zeta \in [0,1]$\none observes the same scaling behaviour. In the\nnon-current-carrying region,\ncritical states are present only on\nthe $h=1$ line of the phase diagram.\nOn this line, separating the ferromagnetic and polarized\nparamagnetic phases,\nthe ground state of the system is the same as in $ H_I $.\nThis implies that the EE scaling is logarithmic with\na prefactor of $c\/3$, where $c=1\/2$.\nNote that the entire current-carrying\nphase ($\zeta>1$ and $h<\zeta $) is also gapless, and in this sense critical. 
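This gaplessness can be illustrated numerically. The dispersion used below, $\varepsilon(q)=\sqrt{1+h^{2}-2h\cos q}-\zeta\sin q$, is a commonly quoted form for the transverse-field Ising chain with a DM term; it is our assumption here, standing in for Eq.\ref{Eq:spectrum}, which is not reproduced in this excerpt:

```python
# Illustration of the gapless current-carrying region under the ASSUMED
# dispersion eps(q) = sqrt(1 + h^2 - 2 h cos q) - zeta * sin q (a commonly
# quoted form; Eq. (Eq:spectrum) of the paper is not shown in this excerpt).
import math

def eps(q, h, zeta):
    return math.sqrt(1.0 + h * h - 2.0 * h * math.cos(q)) - zeta * math.sin(q)

def negative_modes(h, zeta, n=4001):
    qs = [-math.pi + 2.0 * math.pi * k / (n - 1) for k in range(n)]
    return [q for q in qs if eps(q, h, zeta) < -1e-12]

# For zeta <= 1 there are no negative modes at h = 1 (no current) ...
assert not negative_modes(1.0, 0.5)
# ... while for zeta > 1 and h < zeta a band (q_-, q_+) of modes has
# negative energy and is filled, giving two Fermi points and hence the
# doubled 1/3 logarithmic prefactor of the EE discussed in the text.
filled = negative_modes(1.0, 2.0)
q_minus, q_plus = min(filled), max(filled)
print(q_minus, q_plus)
```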
At any point\nin this phase we observe logarithmic scaling of the EE with the subsystem size.\nThis is consistent with the discussion in the previous section\non the algebraic decay of the two-point correlation function.\nInterestingly, the prefactor of the logarithmic scaling of the EE in the\ncurrent phase is twice as large as the prefactor in the non-current phase.\nThis doubling reflects the increased number of zeros\nin the single-particle spectrum, consistent with\nthe results of \cite{KaZi}.\nIn fact, when $\zeta>1$ the ground state of Eq.\ref{Eq:ham} is\nthe filled Fermi sea of modes in between $q_-$ and $q_+$ (see Fig.\ref{Fig:spectrum}).\nSince the ground state in the current phase is effectively\nan excited eigenstate of $ H_I $, one could expect a scaling of the EE\nthat is extensive in the system size, as shown in \cite{AlFaCa} for excited states.\nThe reason this is not the case lies in the\nnature of the single-particle spectrum, which can have at most two zeros (see Fig.\ref{Fig:spectrum})\nand thus does not satisfy the requirements found in \cite{AlFaCa} for an extensive scaling of the EE.\n\nThe DM interaction in the Hamiltonian also affects the sub-leading term\nin the scaling of the EE,\n$ S_L=\frac{1}{3}\ln{L} +S_0(h,\zeta) $.\nDeriving the analytical form of the sub-leading term $S_0(h,\zeta)$ is\ncomplicated. Nonetheless,\none can investigate this term numerically.\nIn Fig.\ref{Fig:sublead} we see that $S_0$ is\nconstant on the critical line $h=1$ with $\zeta\leq 1$.\nAs soon as $\zeta >1$ and $h<\zeta$, $S_0$ increases, but becomes almost\nconstant for large $\zeta$. Moreover, $S_0$ is maximal at $h=0$\nand minimal at the critical point $h=\zeta$. 
From the behaviour of\n$S_0$ we can conclude that a given block has the highest\nentanglement when all the negative modes ($q\in [-\pi,0)$) in the Fermi sea are filled.\nConsequently, the EE increases with higher values of the energy current.\n\nWe now consider the differences between the critical lines\nseparating the different phases in\nthe phase diagram (Fig.\ref{Fig:phase}).\nThe only second-order quantum phase transition is found\nalong the $h=1$ line (with $\zeta \leq 1$), which corresponds to the\nIsing quantum phase transition\n(see Fig. \ref{Fig:IIqpt}). On the other hand, the\nboundaries of the current-carrying phase with\nboth the paramagnetic and the ferromagnetic phases\nare characterized by a level crossing.\nThis translates into a sudden jump in the EE (see Fig.\ref{Fig:IIqpt} and Fig.\ref{Fig:Iqpt}).\nThe value of the EE is always higher in the current-carrying phase because of the presence of long-range correlations\nthat decay algebraically.\nThe plots in Fig.\ref{Fig:IIqpt} and Fig.\ref{Fig:Iqpt} show that\nthe DM term can be used as an entanglement switch.\nThe amount of entanglement can be driven by the\nDM coupling term or the magnetic field,\nwhich are controllable parameters in\noptical lattices \cite{optical}.\n\n\begin{figure}\n\centering\n\includegraphics[scale=0.40]{fig4.eps}\n\caption{(Color online) Non-universal nature of $S_0$. (upper panel) $S_0$ vs. $\zeta$ at different $h$;\n(lower panel) $S_0$ vs. $h$ at\ndifferent $\zeta$.}\n\label{Fig:sublead}\n\end{figure}\n\begin{figure}\n\centering\n\includegraphics[scale=0.40]{fig5.eps}\n\caption{(Color online) $S_L$ vs. $h$ at $L=60$ for different $\zeta$.\n(upper panel) Static entanglement along the transition between the ordered ferromagnetic and the disordered paramagnetic phase. The peak signals\nthe presence of long-range correlations at the critical point, which is a signature of a second-order quantum phase transition. 
(lower panel) Static entanglement along the transition between the disordered paramagnetic\nand the current-carrying phase. The sudden change in entanglement, together with the absence of a peak at the\ncritical point, is a signature of a first-order quantum phase transition.}\n\label{Fig:IIqpt}\n\end{figure}\n\begin{figure}\n\centering\n\includegraphics[scale=0.40]{fig6.eps}\n\caption{(Color online) $S_L$ vs. $\zeta$ at $L=60$ for different $h$.\nThe plot for $h=0.5$ corresponds to the static entanglement\nalong the transition between the ordered ferromagnetic and the current-carrying phase. This is a first-order\nquantum phase transition which occurs at $\zeta=1$ (analogously for $h=1$). \nThe plots for $h=2$ and $h=3$ correspond to the static entanglement along the transition between the disordered and\nthe current-carrying phase. This is a first-order QPT which occurs at $\zeta=h$.}\n\label{Fig:Iqpt}\n\end{figure}\n\n\section{Entanglement dynamics following a quench}\nIn this section, we focus on the quench dynamics\nof the EE. Quenching provides a way to excite a system, initially\nprepared in the ground state, and to subsequently study\nthe non-equilibrium dynamics of the model\n(in the following, we denote\nby a subscript $0$ the values of the parameters\ndescribing the initial Hamiltonian).\nAs stated previously, the model in Eq.\ref{Eq:ham}\nis of interest because it combines two different\nmechanisms typically used to drive a system out of equilibrium:\na quantum quench, and the coupling to a field that generates a current\nin the system. 
Furthermore the inclusion of the DM term allows us to\nstudy a model Hamiltonian where the energy current can be controlled\nand used in the quench protocol.\n\nIn our setup, the quench can either involve\nthe magnetic field $h$, the DM coupling $\\zeta$ or a combination of the two.\nSince the DM term commutes with the Hamiltonian, a quench\nin $ \\zeta $ leaves the system in one of its eigenstates,\nproviding a trivial evolution of the EE.\nOn the other hand, quenches in the magnetic field give more interesting behaviours.\nIf the quench is done with the initial state prepared in a region\nwith no current\nthe results are similar to those found in \\cite{CaCa}, where quenches for the\n$ H_I $ were considered.\nThis is due to the fact that, in the absence of\na current, the ground-state wave function initially is identical\nto that of $ H_I $, and the time evolution\nis not affected by the presence of the DM term\n(see the derivation of Eq.\\ref{Eq:timeCorrelation} in the Appendix for a proof of this).\nMore interestingly, if the quench involves an initial state inside\nthe current phase, new non-trivial behaviours can be expected, since the\nground state now is radically different.\nIt is important to notice that the DM coupling\nenters only in the specification of the initial state,\nwhereas the evolution can be effectively described by\nthe Hamiltonian without the DM term. The calculations\nshowing that this is in fact the case can\nbe found in the Appendix.\n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.40]{fig7.eps}\n\\caption{(Color online) Quenches from the current carrying phase. \n$S_L(t)$ vs. the time steps, with $L=60$, $h_0=4.0$, $\\zeta_0=\\zeta=5.0$ for different $h$.\nNote that the extent of the initial linear regime depends on the particular evolving Hamiltonian.}\n\\label{Fig:QuInCurr1}\n\\end{figure}\nWe first compare the evolution of the EE for different\nquenches inside the current-carrying phase. 
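As a point of reference for these comparisons, the ballistic (quasiparticle) picture of \cite{CaCa} can be summarized by the toy formula $S_L(t)\simeq s\,\min(2vt,L)$: counter-propagating quasiparticle pairs entangle the block linearly in time until they have crossed it, after which the entropy saturates. In the minimal sketch below the velocity $v$ and entropy density $s$ are placeholders, not fitted values:

```python
# Toy Calabrese-Cardy quasiparticle picture: a quench emits pairs of
# excitations with velocities +/- v; a block of size L is entangled with
# the rest through pairs with exactly one member inside, which gives
# S_L(t) ~ s * min(2*v*t, L).  The values of v and s are placeholders.

def toy_entropy(t, L, v=1.0, s=0.5):
    return s * min(2.0 * v * t, L)

L = 60.0
# linear growth at early times (slope 2*v*s) ...
assert [toy_entropy(t, L) for t in (1.0, 2.0, 3.0)] == [1.0, 2.0, 3.0]
# ... and saturation at t* = L / (2 v), extensive in the block size
assert toy_entropy(30.0, L) == toy_entropy(100.0, L) == 30.0
# a smaller effective velocity delays saturation to t* = 60
assert toy_entropy(40.0, L, v=0.5) == 20.0
```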
Fixing the\ncoupling constant of the DM term, and quenching only the\nexternal magnetic field we obtain the results shown in\nFig.\\ref{Fig:QuInCurr1}.\nOne always has an initial ballistic\nevolution of the EE, which grows linearly in time (measured in units where the speed of the elementary excitation is unity)\nand saturates at some point. Quite interestingly,\nthe saturation time (hence also the rate at\nwhich entanglement is initially building up) depends on the\nparticular evolving Hamiltonian.\nThis way we can control the time needed to\ngenerate the maximal asymptotic amount of entanglement.\nThis property is relevant also from a computational point of view.\nIn fact, DMRG-like schemes, used for the simulation of the time evolution\nof quantum systems, can take advantage of the lower rate at\nwhich entanglement is generated. Knowing the regions in the\nphase diagram where such rates are lower can provide more\nefficient time simulations.\nAs far as we know this is a new feature that is\nnot present in other quench protocols considered so far\nin the literature.\nThe other aspect that is important to notice in Fig.\\ref{Fig:QuInCurr1}\nis the special role played by the line $h=1$ in the phase diagram,\nwhich turns out to provide the maximum asymptotic EE\nfor different quench parameters.\nThis can be understood by mapping the quench for\n$ H_I+H_{DM} $\nto a quench protocol for $ H_I $ only.\nAs stated in the previous section,\nthe entanglement evolution with respect to $H(h,\\zeta)$\nis identical to the evolution with respect to $H(h,0)$.\nFurthermore, the ground state of $H(h_0,\\zeta_0)$, the initial Hamiltonian in the quench protocol\nis also an excited eigenstate of $H(h_0,0)$, because of the commutativity\nof the DM term with the total Hamiltonian.\nFrom this dual perspective\nthe effect of the current is that of effectively\nquenching an excited eigenstate without the current term.\nFor the Ising model, a quench from $h_0\\neq1$ yields the maximum 
value\nof $S_L(\infty)$ when quenched to $h=1$, because the energy gap closes\nat $h=1$, and hence a large number of zero-energy excitations can be produced.\nThe precise asymptotic value\nof the EE, however, depends on the particular\nexcited state at the beginning of the quench.\n\n\begin{figure}\n\centering\n\includegraphics[scale=0.40]{fig8.eps}\n\caption{(Color online) Quenches from the current-carrying phase with different values of the current driving field\n$\zeta$. $S_L(t)$ vs. the time steps with $L=60$, $h_0=3.0$ to $h=1$ for different $\zeta_0=\zeta$.}\n\label{Fig:QuInCurr2}\n\end{figure}\n\nFig.\ref{Fig:QuInCurr2} shows results of simulations for quenches\nwith increasing values of the DM field in the current-carrying phase.\nThe asymptotic value of the EE decreases with\nincreasing $\zeta$. This is consistent\nwith the phenomenological picture provided in \cite{CaCa}, and\nwith the fact that, starting from an excited state, the number of unoccupied\nmodes that can be populated after the quench is smaller than\nwhen starting from the ground state.\nFurthermore, Fig.\ref{Fig:QuInCurr2} shows that\nthe time at which the EE saturates does not depend on\n$ \zeta $, and consequently does not depend on\nthe particular eigenstate of the initial Hamiltonian\n(as long as it is not an eigenstate of the evolving Hamiltonian).\nThe line with $ \zeta=100 $ in Fig.\ref{Fig:QuInCurr2} shows that very deep into\nthe current phase quenching does not create\nentanglement. In fact, when $ \zeta \gg h_0 $\nquenching the magnetic field is just a small perturbation\nto the Hamiltonian, and the system approximately remains in\nits ground state.\n\n\begin{figure}\n\centering\n\includegraphics[scale=0.40]{fig9.eps}\n\caption{(Color online) $S_L(t)$ vs. the time steps \ninside the current-carrying phase for different block sizes $L$. 
\nQuenching is done from $h_0=2.0$ to $h=1.0$ with $\\zeta_0=\\zeta=3.0$.}\n\\label{Fig:L}\n\\end{figure}\n\nFinally, we verify that the presence of\nan energy current does not affect the extensive nature of the\nasymptotic value of EE (Fig.\\ref{Fig:L}), and\nits proportionality with the quench size (Fig.\\ref{Fig:quenchsize}).\n\n\\begin{figure}\n\\includegraphics[scale=0.45]{fig10.eps}\n\\caption{(Color online) $S_{60}(t)$ vs. the time steps \nfor quenches to $h=1$ from various $h_0$ inside the current-carrying phase at $\\zeta=\\zeta_0=4$.}\n\\label{Fig:quenchsize}\n\\end{figure}\n\n\n\n\\section{Conclusions}\nWe have studied the static and dynamic properties\nof the entanglement entropy in the Ising spin chain with a transverse field and\na Dzyaloshinskii-Moriya interaction. The model is characterized\nby the presence of an energy current for certain regions of the phase diagram.\n\nConcerning the static properties we have analyzed\nthe transitions between phases with no energy current and the\nphase where an energy current is present.\nThe transition is captured by a discontinuity of EE as a function of the parameters,\nand by a distinguishable scaling behaviour in the current-carrying and non-current-carrying regions.\nIn particular, the leading logarithmic term of the\nEE scaling with respect to the system size\nhas a prefactor in the current-carrying region which\nis twice as large compared to the second order Ising critical line.\n\nConcerning the behaviour of the entanglement evolution\nfollowing a quench, the model in Eq.\\ref{Eq:ham}\nallows us to study new quench protocols.\nThe usual schemes consider quenches from an initial ground state.\nThis scenario, for the model in Eq. 
\ref{Eq:ham},\neffectively corresponds to a quench from an initially excited state\nof the Ising spin chain in a transverse field (without DM interaction).\nThe main result of this analysis shows that\nthe ballistic picture presented in \cite{CaCa} is still valid,\nalthough with one significant difference:\nthe entanglement saturation time in the current-carrying\nphase depends on the\ndetails of the evolving Hamiltonian.\nThis is an indication of the role played by the evolving \nHamiltonian in the propagation of\nexcitations. This result is relevant for\ntuning the dynamics of the system into regions with a\ndifferent rate of entanglement propagation.\nIt also provides\na characterization of the regions in the phase diagram that can be\nsimulated more efficiently with DMRG-like techniques.\n\nFrom a general point of view, the model in Eq. \ref{Eq:ham} also\nsuggests a simple way to study the quench dynamics of initial excited\nstates in integrable systems. 
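The mechanism exploited above, namely that the DM coupling enters only through the initial state because it commutes with the Hamiltonian, can be illustrated on a toy pair of commuting matrices (a generic construction, not the spin-chain Hamiltonian itself): adding a term that commutes with $H_{0}$ leaves the eigenvectors untouched but can promote an excited eigenstate of $H_{0}$ to the ground state of the full Hamiltonian.

```python
# Toy illustration (not the spin-chain Hamiltonian): H0 and C commute, so
# H = H0 + zeta*C shares its eigenvectors with H0, but for suitable zeta
# the ground state of H is an *excited* eigenstate of H0.
import numpy as np

rng = np.random.default_rng(0)
# build commuting H0 and C from a common orthonormal eigenbasis Q
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))
e0 = np.array([0.0, 1.0, 2.0, 3.0])      # spectrum of H0
c  = np.array([0.0, 0.5, -2.0, 1.0])     # spectrum of the added term
H0 = Q @ np.diag(e0) @ Q.T
C  = Q @ np.diag(c)  @ Q.T
assert np.allclose(H0 @ C, C @ H0)       # the added term commutes with H0

zeta = 2.0
full = e0 + zeta * c                     # eigenvalues of H = H0 + zeta*C
k = int(np.argmin(full))                 # index of the ground state of H
# the ground state of H is the k-th *excited* eigenvector of H0:
assert k == 2 and np.argmin(e0) == 0
```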
The addition of a commuting term\nin the Hamiltonian causes a reshuffling of the spectrum that,\nwithout changing the integrability of the model, allows us to\nobtain non-trivial results about the excitations in the original model.\nThe same trick can be applied to other systems of interest.\n\nWe thank Letian Ding, Zoltan R\\'acz, and Paolo Zanardi for useful comments.\nThis work has been supported by NSF grants PHY-803304, and DMR-0804914.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzaoma b/data_all_eng_slimpj/shuffled/split2/finalzzaoma new file mode 100644 index 0000000000000000000000000000000000000000..1e74df4db4fee7cfd8e7e9091896c1a5b90dcbfb --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzaoma @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nA long time ago Nielsen and Olesen \\cite{Nielsen:1973cs}\ndiscussed the vortex solution\nof the (3+1) dimensional Abelian Higgs model as a possible model\nfor strings. In the same publication the authors discuss the\neffective action for the transverse collective oscillations of the \nvortex which they find to be equivalent to the\nNambu-Goto action of string theory (see e.g. \\cite{Green:1987sp}).\nTheir derivation is based on a general consideration of Lorentz\ntransformation properties of such oscillations.\nWe here derive the nonrelativistic limit of \nthis effective action from an analysis of\nthe fluctuations in the underlying quantum field theory.\n\nFor quantum kinks\n(see, e.g., \\cite{Rajaraman:1982is,Coleman85})\n the collective motion is related to the\n translation mode. It is generated by infinitesimal\ndisplacements of the classical kink solution\n and is a zero mode of the \nfluctuation operator. 
Its quantization requires a special\napproach, by which the zero mode is found to carry\n the kinetic energy of the collective\nmotion of the kink.\n\nIn the Abelian Higgs model in two dimensions one finds an instanton\nsolution which describes topological transitions\n\\cite{Kripfganz:1989vm}. The\nfluctuations around the classical solution display two\nzero modes which again are related to translation invariance.\nThe one-loop prefactor for the semiclassical transition rate\nis related to the functional determinant of the fluctuation operator.\nThe zero modes would cause this determinant to be infinite and have\nto be eliminated. If handled properly, this elimination \nproduces the correct dimension for the transition rate\n\\cite{Baacke:1994bk,Baacke:2008zx}.\n\nThe instanton of the Abelian Higgs model reappears in the\n$(3+1)$ dimensional version of the model as the\nvortex solution which we will consider here. The vortex solution is identical\nto the instanton solution in the transversal $x$ and $y$ coordinates,\n and it is independent of $z$ and of time. The translation of the\nclassical solution now becomes local, dependent on $z$.\nInstead of a collective motion of a quantum kink we have\ncollective oscillations of the vortex.\nThe pole at $p_\\perp=0$ in the Euclidean Green's function of the \ntwo-dimensional model becomes a cut in the Green's function of the\nmodel in four dimensions. In the computation of one-loop\ncorrections to the string tension \\cite{Baacke:2008sq}\nthese modes can be included in the same\nway as all other fluctuations, in contrast to the\ntwo-dimensional case. Indeed, for the renormalization of\nthese corrections it is necessary to include the\nzero mode contribution. \nStill, the translation modes play a special r\\^ole, \nand in the present work we will discuss these particular aspects.\n\nThe text is organized as follows:\nIn Sec. 
\\ref{sec:basics} we present the model, the classical vortex solution\nand the classical string tension.\nIn Sec. \\ref{sec:flucsandtension} \nwe define the fluctuation operator and relate it to the one-loop \ncorrection to the string tension. Based on the derivation of the\ntranslation mode wave functions in Appendix \\ref{app:translationmode}\nand using a virial theorem proven in Appendix \\ref{app:virialtheorem}\nwe discuss in Sec. \\ref{sec:collectivestringoscillations}\nthe collective string oscillations and their effective action.\nWe discuss in Sec. \\ref{sec:fluctuationenergy} the r\\^ole of the \nzero modes in the computation of the renormalized string tension. \nConclusions are presented in Sec. \\ref{sec:conclusions}.\n\n\\section{Basic relations}\n\\setcounter{equation}{0}\n\\par\n\\label{sec:basics}\nThe Abelian Higgs model in (3+1) dimensions is\ndefined by the Lagrange density \n\\begin{equation}\n {\\cal L}=-\\frac{1}{4}F_{\\mu\\nu}F^{\\mu\\nu}\n+\\frac{1}{2}(D_\\mu\\phi)^*D^\\mu\\phi-\\frac{\\lambda}\n{4}\\left(|\\phi|^2-v^2\\right)^2\\; . \n\\end{equation}\nHere $\\phi$ is a complex scalar field and\n\\begin{eqnarray}\nF_{\\mu\\nu}&=&\\partial_\\mu A_\\nu-\\partial_\\nu A_\\mu\\; , \\\\\nD_\\mu&=&\\partial_\\mu-igA_\\mu \n\\; .\\end{eqnarray} \nThe particle spectrum consists of Higgs bosons of mass\n$m_H^2=2\\lambda v^2$ and vector bosons of mass $m_W^2=g^2v^2$.\nThe model allows for vortex-type solutions, representing\nstrings with a magnetic flux, the Nielsen-Olesen vortices \n\\cite{Abrikosov:1957,Nielsen:1973cs,deVega:1976mi}.\nThe cylindrically symmetric\nansatz for this solution is given, in the singular gauge, by \n\\footnote{ We use Euclidean notation for the transverse components, so\n$A^\\perp_1\\equiv A^1=-A_1$ etc. 
} \n\\begin{eqnarray}\nA^{{\\rm cl}\\perp}_i (x,y,z)&=&\\frac{\\varepsilon_{ij}x^\\perp_j} \n{gr^2}\\left[A(r)+1\\right] \\;\\;\\; i=1,2 \\\\\n\\phi^{\\rm cl}(x,y,z)&=&vf(r) \\; .\n\\end{eqnarray}\nwhere $r=\\sqrt{x^2+y^2}$ and $\\varphi$ is the polar angle. \nFurthermore $A_3^{\\rm cl}=A_0^{\\rm cl}=0$. \nWith this ansatz the energy per unit length, or string\ntension $\\sigma$, takes the form\n\\begin{eqnarray}\n\\sigma_{\\rm cl}&=&\\pi v^2\n \\int^{\\infty}_{0}\\!\\!\\! dr \\left\\{\\frac{1}{rm_W^2}\\left[\n\\frac{dA(r)}{dr}\\right]^2\\!\\!+r\\left[\\frac{df(r)}{dr}\\right]^2\\!\\!\n+\\frac{f^2(r)}{r}\\left[A(r)+1\\right]^2\\!\\! \\right. \\nonumber \\\\ \n&+&\\left. \\frac{rm^2_H}{4}\\left[\nf^2(r)-1\\right]^2\\!\\right\\}\\; .\n\\label{eq:classtens} \\end{eqnarray}\nThe classical equations of motion are given by\n\\begin{eqnarray}\n\\left\\{\\frac{\\partial^2}{\\partial r^2}+\n\\frac{1}{r}\\frac{\\partial}{\\partial r}-\\frac{\\left[A(r)+1\\right]^2}{r^2}-\n\\frac{m^2_H}{2}\\left[f^2(r)-1\\right]\n\\right\\}f(r)&=&0 \\; , \\\\\n\\left\\{\\frac{\\partial^2}\n{\\partial r^2}-\\frac{1}{r}\n\\frac{\\partial}{\\partial r}-m^2_W f^2(r)\\right\\}\n\\left[A(r)+1\\right]&=&0\n\\; ,\\end{eqnarray}\nwhich are to be solved numerically with the boundary conditions\n\\begin{equation} \\label{rb}\n\\begin{array}{rcccccr}\nA(r)&\\stackrel{\\scriptscriptstyle{r\\to 0}}\n{\\longrightarrow}& {\\rm const.}\\cdot r^2 \n&,&A(r)&\\stackrel{\\scriptscriptstyle{r\\to\\infty}}\n{\\longrightarrow}&-1 \\; , \\\\ \nf(r)&\\stackrel{\\scriptscriptstyle{r\\to 0}}\n{\\longrightarrow}&{\\rm const.}\\cdot r \n&,&f(r)&\\stackrel{\\scriptscriptstyle{r\\to\\infty}}\n{\\longrightarrow}&1\\; , \n\\end{array}\n\\; .\\end{equation}\n\n\n\\section{Fluctuation operator and one-loop string tension}\n\\label{sec:flucsandtension}\nExpanding the gauge and Higgs fields as\n\\begin{eqnarray}\n\\phi &=& \\phi^{\\rm cl} + \\varphi_1 + i \\varphi_2\n\\\\\nA_\\mu&= &A_\\mu^{\\rm cl} + a_\\mu\n\\end{eqnarray}\nthe dynamics of the 
fluctuations is described by the second-order\ngauge-fixed Lagrangian \\cite{Baacke:1994bk}\n\\begin{eqnarray} \\label{lag2}\n{\\cal L}^{II}&=&\n-a_\\mu\\frac{1}{2}\\left(-\\Box-g^2\\phi^2\\right)\na^\\mu\\nonumber \\\\\n&&+\\varphi_1\\frac{1}{2}\\left[-\\Box+g^2A_\\mu A^\\mu-\n\\lambda\\left(3\\phi^2-\nv^2\\right)\\right]\\varphi_1 \\nonumber \\\\ \n&&+\\,\\varphi_2\\frac{1}{2}\\left[-\\Box+g^2A_\\mu A^\\mu-g^2\\phi^2-\n\\lambda\\left(\\phi^2-v^2\\right)\\right]\\varphi_2 \\nonumber \\\\\n&&+\\,\\varphi_2(gA^\\mu\\partial_\\mu)\\varphi_1\n+\\varphi_1(-gA^\\mu\\partial_\\mu)\\varphi_2 \\\\\n&&+\\,a^\\mu(2g^2A_\\mu\\phi)\\varphi_1\n+\\,a^\\mu(2g\\partial_\\mu\\phi)\\varphi_2 \\nonumber \\\\\n&&+\\eta_1\\frac{1}{2}\\left(-\\Box- g^2\\phi^2\\right)\\eta_1\n+\\eta_2\\frac{1}{2}\\left(-\\Box-g^2\\phi^2\\right)\\eta_2\n\\nonumber \n\\; .\\end{eqnarray}\nHere $\\varphi_1$ and $\\varphi_2$ denote the real and imaginary part of\nthe Higgs field fluctuations, $\\eta_i$ are the Faddeev-Popov ghosts, and\nwe have chosen the 't Hooft-Feynman background gauge.\nIn compact notation this may be written as\n \\begin{equation}\n{\\cal L}^{II} = \\frac{1}{2} \\psi^*_i {\\cal M}_{ij} \\psi_j \\; .\n\\end{equation}\nThe fields $\\psi_i$ denote the ensemble of gauge, Higgs and\nFaddeev-Popov fields and the\nfluctuation operator ${\\cal M}_{ij}$ is defined by this and the\nprevious equation.\nIn terms of the fluctuation operators\n${\\cal M}$ on the vortex and ${\\cal M}^0$ for the\nvacuum background fields, the effective action\nis defined as\n\\begin{equation}\nS_{\\rm eff} = \\frac{i}{2} \\ln \\left\\{ \\frac{\\det {\\cal M}+i\\epsilon}\n{\\det {\\cal M}^0+i\\epsilon} \\right\\} \\; .\n\\end{equation}\nAs the background field is time-independent and also independent of\n$z$ the fluctuation operators take the form\n\\begin{equation}\\label{flucsep}\n{\\cal M}_{ij} = (\\partial_0^2-\\partial_3^2)\\delta_{ij} +{\\cal M}_{\\perp ij}\n\\; , \\end{equation}\nwhere ${\\cal M}_\\perp$ is a 
positive-definite operator describing the transversal\nfluctuations.\nIt is identical for the longitudinal and timelike gauge fields and for\nthe Faddeev-Popov ghosts, so these contributions to the effective\naction cancel. The remaining degrees of freedom form a\ncoupled system of four fields $\\psi_i$: the real and imaginary part of \nthe Higgs field fluctuations $\\varphi_1,\\varphi_2$ and the transverse \ngauge field fluctuations $a_1,a_2$.\n \nAs is well known, the logarithm of the determinant can be written as the\ntrace of the logarithm. One can do the trace over $k_0$, the\nmomentum associated with the time variable, by integrating over\n$T\\int dk_0\/2\\pi$, where $T$ is the lapse of time.\nOne then obtains\n\\begin{equation}\nS_{\\rm eff} = -T \\frac{1}{2} \\sum \\left[E_\\alpha-E_\\alpha^{(0)}\\right]\\; ,\n\\end{equation}\nwhere $E_\\alpha$ are the square roots of the eigenvalues of the \npositive-definite operator\n\\begin{equation}\n-\\partial_3^2+{\\cal M}_\\perp\\; ,\n \\end{equation}\nand likewise $E_\\alpha^{(0)}$ are those of the analogous operator\nin the vacuum\n\\begin{equation}\n-\\partial_3^2+{\\cal M}_\\perp^0=-\\partial_3^2-\\vec \\nabla_\\perp^2 + {\\bf m}^2\n\\; . \\end{equation}\nHere ${\\bf m}^2={\\rm diag}(m_1^2,\\dots,m_n^2)$ is the diagonal mass squared \noperator for the various fluctuations.\n\n So the effective action is equal to the sum of differences between \nthe zero point energies of the quantum\nfluctuations around the vortex and those of the\nquantum fluctuations in the vacuum, multiplied by $-T$.\nFurther, we can do the trace over the variable $k_3$\nby integrating over $L\\int dk_3\/2\\pi$. 
We then obtain\n\\begin{equation}\nS_{\\rm eff}= -TL\\sum_\\alpha\\int\\frac{dk_3}{2\\pi}\n\\frac{1}{2}\n\\left[\\sqrt{k_3^2+\\mu_\\alpha^2}-\\sqrt{k_3^2+\\mu_\\alpha^{(0)~ 2}}\\right]\n\\; , \\end{equation}\nwhere $\\mu_\\alpha^2$ are the eigenvalues of the operator ${\\cal M}_\\perp$ and\n$\\mu_\\alpha^{(0)~ 2}$ those of $-\\vec \\nabla_\\perp^2+{\\bf m}^2$. \nIn the same way the classical action becomes\n\\begin{equation}\nS_{\\rm cl}=-TL\\sigma_{\\rm cl}\n\\end{equation}\nwhere $\\sigma_{\\rm cl}$ is the classical string tension.\nThe fluctuation part of the string tension is \ngiven by\n\\begin{equation} \\label{fluctuationstringtension}\n\\sigma_{\\rm fl}=\\sum_\\alpha\\int\\frac{dk_3}{2\\pi}\n\\frac{1}{2}\\left[\\sqrt{k_3^2+\\mu_\\alpha^2}-\\sqrt{k_3^2+\\mu_\\alpha^{(0)~ 2}}\\right]\n\\; . \\end{equation}\nOf course, all expressions are formal; the integrals do not exist before\na suitable regularization. \n\nA way of computing $\\sigma_{\\rm fl}$ has been formulated in Ref.\n\\cite{Baacke:2008sq}: one computes the matrix-valued \nEuclidean Green's function of the\nfluctuation operator defined by\n\\begin{equation}\n(p^2{\\bf 1} +{\\cal M}_\\perp) {\\cal G}({\\bf x}_\\perp,{\\bf x}'_\\perp,p) =\n{\\bf 1}\\delta^2({\\bf x}_\\perp-{\\bf x}'_\\perp)\n\\end{equation}\nand similarly for ${\\cal M}^0$.\nThen with\n\\begin{equation} \\label{fpdef}\nF(p)=\\int d^2 x_\\perp {\\rm Tr} \\left[{\\cal G}({\\bf x}_\\perp,{\\bf x}_\\perp,p)-\n{\\cal G}_0({\\bf x}_\\perp,{\\bf x}_\\perp,p)\\right]\n\\end{equation}\nthe fluctuation correction to the string tension is given by\n\\begin{equation}\\label{sigmanum}\n\\sigma_{\\rm fl}=-\\int_0^\\infty \\frac{p^3~dp}{4\\pi} F(p)\n\\; .\\end{equation}\nThe function $F(p)$ has been computed in Ref. \\cite{Baacke:2008sq}.\nFor small $p$ it behaves as $2\/p^2$, as expected for \ntwo translation modes which yield poles at $p=0$.\nThis is displayed in Fig. \\ref{fig:zeromode} for\n$\\xi= m_H\/m_W = 1$. 
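The structure of this computation can be illustrated with a toy model. The rational function below is not the computed Green's-function trace of Ref. \cite{Baacke:2008sq}; it merely shares the stated asymptotics (a $2/p^2$ pole at small $p$ from the two zero modes, and $a/p^2+b/p^4$ tails at large $p$), which is enough to show why the unsubtracted $p$-integral diverges quadratically with the cutoff while a suitably subtracted one converges:

```python
import numpy as np

C = 0.25                       # toy parameter fixing the large-p tail
A_TAIL = 2.0 * C               # coefficient of the 1/p^2 tail
B_TAIL = 2.0 * (1.0 - C)       # coefficient of the 1/p^4 tail

def F(p):
    # Toy stand-in for the Green's-function trace: F ~ 2/p^2 for p -> 0
    # (two zero modes) and F ~ A_TAIL/p^2 + B_TAIL/p^4 for p -> infinity.
    return 2.0 * (1.0 + C * p**2) / (p**2 * (1.0 + p**2))

def F_subtracted(p):
    # Subtract smooth functions with the same large-p expansion:
    # A/(1+p^2) + (A+B)/(1+p^2)^2 = A/p^2 + B/p^4 + O(p^-6).
    return F(p) - A_TAIL / (1.0 + p**2) - (A_TAIL + B_TAIL) / (1.0 + p**2)**2

def sigma_fl(cutoff, f, n=400001):
    # Trapezoidal version of sigma_fl = -int p^3 f(p) dp / (4 pi)
    # with a finite upper cutoff.
    p = np.linspace(1e-6, cutoff, n)
    y = p**3 * f(p) / (4.0 * np.pi)
    return -float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(p)))
```

Doubling the cutoff roughly quadruples the unsubtracted value but leaves the subtracted one essentially unchanged, mirroring the quadratic divergence and its removal discussed below. Note also that the zero-mode pole itself is harmless at small $p$, since $p^3\cdot 2/p^2$ vanishes there.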
For large momenta $F(p)$ again behaves\nas $p^{-2}$, but with a different coefficient.\nThe contribution of the translation modes, \nas well as the entire fluctuation correction,\nis quadratically divergent. The renormalization has been\ndiscussed previously \\cite{Baacke:2008sq}.\nWe come back to the fluctuation correction below,\nbut first we discuss explicitly the translation mode\nwhich describes collective oscillations of the vortex. \n\n\n\n\\section{Collective string oscillations}\n\\label{sec:collectivestringoscillations}\n\\setcounter{equation}{0}\n\nFor quantum kinks $\\phi(x)$ the translation mode is proportional to the\nderivative of the classical solution, $\\psi_0= N d \\phi_{\\rm cl}(x)\/dx$.\nIt is an eigenstate of the fluctuation operator with eigenvalue\nzero, and it leads to a pole in the Green's function of the fluctuation\noperator at energy $\\omega=0$. For the quantum kink the translation mode is \nrelated to the collective motion of the entire kink, and\nits contribution to the quantum corrections is the kinetic energy.\nHere we are considering local transverse displacements of the vortex.\nEach slice between $z$ and $z+\\Delta z$ moves separately and the\nresulting motions of the vortex can be described in terms of\nwaves propagating along the string. In the Green's function\nof the complete fluctuation operator ${\\cal M}$ the pole appears as a cut,\nstarting at $\\omega=0$.\n\nOf course we again expect the zero modes to be related to the derivatives\nof the classical solution $\\nabla_i \\phi_{\\rm cl}$, but the fluctuation \noperator is matrix-valued and therefore we have to determine all\nfour components of the eigenvector. This is discussed in Appendix\n\\ref{app:translationmode}, in a cylindrical basis of modes for which\nthe fluctuation operator was derived in Refs. \n\\cite{Baacke:1994bk,Baacke:2008sq}. 
\nThe wave functions of the zero modes arising from local translation invariance\nare derived in Appendix \\ref{app:translationmode}.\nCombining the modes with azimuthal quantum numbers $m=\\pm 1$\nproportional to $\\exp(\\pm i\\varphi)$ one finds that an infinitesimal\nshift in the $x$ direction generates a four-component wave function\n\\begin{eqnarray}\n\\varphi^x_1(r,\\varphi)&=&v f'(r) \\cos \\varphi \\; ,\n\\\\\n\\varphi^x_2(r,\\varphi)&=&-v \\frac{A(r)+1}{r}f(r)\\sin \\varphi\\; ,\n\\\\\n{\\bf a}^x(r,\\varphi)&=& \\left(\\begin{array}{c}0\\\\-1\\end{array}\\right)\n\\frac{A'(r)}{gr}\\; ,\n\\end{eqnarray}\nand a shift in the $y$ direction leads to\n\\begin{eqnarray}\n\\varphi^y_1(r,\\varphi)&=&v f'(r) \\sin \\varphi\\; ,\n\\\\\n\\varphi^y_2(r,\\varphi)&=&v \\frac{A(r)+1}{r}f(r) \\cos \\varphi\\; ,\n\\\\\n{\\bf a}^y(r,\\varphi)&=& \\left(\\begin{array}{c}1\\\\0\\end{array}\\right)\n\\frac{A'(r)}{gr}\\; .\n\\end{eqnarray}\nThe norm of these wave functions is given by\n\\begin{eqnarray} \\nonumber\n||\\psi_t^x||^2&=&\\int r~dr~\\int d\\varphi\n\\left\\{(a^x_1)^2+(a^x_2)^2+(\\varphi^x_1)^2+(\\varphi^x_2)^2\\right\\}\n\\\\\\nonumber&=&2 \\pi v^2\\int r~dr~\\left\\{\\frac{1}{m_W^2}\\frac{A'{}^2(r)}{r^2}+\n\\frac{1}{2}\\frac{(A(r)+1)^2}{r^2}f^2(r)+\\frac{1}{2}f'{}^2(r)\\right\\}\n\\; ,\\end{eqnarray}\nand analogously for the $y$ mode. Using the virial theorem proven in\nAppendix \\ref{app:virialtheorem} it takes the value\n\\begin{equation}\n||\\psi_t^x||^2 = \\sigma_{\\rm cl}\n\\; ,\n\\end{equation}\nwhere $\\sigma_{\\rm cl}$ is the \nclassical string tension. 
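The angular structure of these wave functions is just the chain rule at work: an $x$-shift of a radial profile produces $f'(r)\cos\varphi$. A quick symbolic check with sympy, using $\tanh(r)$ as a stand-in radial profile (the true $f(r)$ is only known numerically, but any smooth profile behaves the same way):

```python
import sympy as sp

x, y, v = sp.symbols('x y v', real=True)
r = sp.sqrt(x**2 + y**2)

# Stand-in profile with f(0) = 0, f(oo) = 1 (illustrative only).
phi_cl = v * sp.tanh(r)

# d(phi_cl)/dx should equal v f'(r) cos(varphi), with cos(varphi) = x/r
dphi_dx = sp.diff(phi_cl, x)
expected_x = v * (1 - sp.tanh(r)**2) * x / r
assert sp.simplify(dphi_dx - expected_x) == 0

# d(phi_cl)/dy gives the sin(varphi) partner, sin(varphi) = y/r
dphi_dy = sp.diff(phi_cl, y)
expected_y = v * (1 - sp.tanh(r)**2) * y / r
assert sp.simplify(dphi_dy - expected_y) == 0
```

The remaining components of the four-component wave function are fixed by gauge covariance, as derived in Appendix form in the text.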
\n\nIn the mode expansion of the quantum fields these modes appear\nin the form\n\\begin{eqnarray} \\label{fieldexpansion1}\n\\phi_i(r,\\varphi,z,t)=\\phi^{\\rm cl}(r,\\varphi)\\delta_{i1}+ \n{\\bf X}(z,t)\\varphi^x_i(r,\\varphi)\n+ {\\bf Y}(z,t)\\varphi^y_i(r,\\varphi) + \\dots &&\n\\\\\\label{fieldexpansion2}\nA_i(r,\\varphi,z,t)=A_i^{\\rm cl}(r, \\varphi)+\n{\\bf X}(z,t)a^x_i(r,\\varphi)\n+{\\bf Y}(z,t)a^y_i(r,\\varphi) + \\dots&&\n\\; ,\\end{eqnarray}\nwhere the dots indicate the contributions of all other\neigenfunctions of the fluctuation operator. While these have,\nin general, complex wave functions we have written the contributions\nof the translation modes, which are real, in a suggestive form\nusing operators ${\\bf X}(z,t)$ and ${\\bf Y}(z,t)$. \nThe canonical momenta of the field operators are given by\n\\begin{eqnarray}\n\\Pi^\\Phi_i(r,\\varphi,z,t)=\n\\dot\\Phi_i(r,\\varphi,z,t = \\dot{\\bf X}(z,t)\\phi^x_i(r,\\varphi) + \n\\dot{\\bf Y}(z,t)\\phi^y_i(r,\\varphi) + \\dots && \n\\\\\n\\Pi^A_i(r,\\varphi,z,t)=\\dot A_i(r,\\varphi,z,t)=\\dot{\\bf X}(z,t)a^x_i(r,\\varphi)\n+\\dot{\\bf Y}(z,t)a^y_i(r,\\varphi) + \\dots &&\n\\; ,\\end{eqnarray}\n The relation of the operator ${\\bf X}$ to the\nusual creation and annihilation operators is given by\n\\begin{eqnarray}\\label{xcrelation}\n{\\bf X}(z,t)&=&\\frac{1}{\\sqrt{\\sigma_{\\rm cl}}}\n\\int \\frac{dk}{2\\pi 2|k|}\\left(c_x(k)e^{i(kz-|k|t)}+\nc_x^\\dagger(k)e^{-i(kx-|k|t)}\\right) \\; ,\n\\\\\\label{pcrelation}\n{\\bf P}_x(z,t)&=&\\sqrt{\\sigma_{\\rm cl}} \\int \\frac{dk}{2\\pi 2i}\\left(c_x(k)e^{i(kz-|k|t)}-\nc_x^\\dagger(k)e^{-i(kx-|k|t)}\\right)\n\\; ,\\end{eqnarray}\nand analogously for ${\\bf Y}$. Here we have used the fact that for the zero \nmodes we have $\\omega=|k|$. 
\nThe operators $c_\\alpha(k)$ satisfy the commutation relations\n\\begin{equation}\n\\left[c_\\alpha(k),c_\\beta^\\dagger(k')\\right]=\n2\\pi 2|k|\\delta_{\\alpha\\beta}\\delta(k-k')\n\\; ,\\end{equation}\nand the commutation relation between ${\\bf X}$ and ${\\bf P}_x$ is\n\\begin{equation}\n\\left[{\\bf X}(z,t),{\\bf P}_x(z',t)\\right]= i \\delta(z-z')\n\\; .\\end{equation}\n The normalization factors $\\sqrt{\\sigma_{\\rm cl}}$\nin Eqs. \\eqn{xcrelation} and \\eqn{pcrelation} are determined by \nthe requirement that in the field \nexpansion the operators $c_x(k), c_x^\\dagger(k)$ have to appear multiplied\nby wave functions normalized to unity. \nThis is necessary for obtaining the \ncanonical equal-time commutation relations for the fields,\n\\begin{eqnarray}\n\\left[\\Phi_i(x,y,z,t),\\dot \\Phi_j(x',y',z',t)\\right]=i\\delta_{ij}\n\\delta^3({\\bf x}-{\\bf x}')\n\\\\\n\\left[A_i(x,y,z,t),\\dot A_j(x',y',z',t)\\right]=i\\delta_{ij}\n\\delta^3({\\bf x}-{\\bf x}')\n\\end{eqnarray}\nvia the completeness relation of the wave functions. Of course, this \ncompleteness relation requires the inclusion of all\neigenfunctions of the fluctuation operator, which \nabove are indicated by dots.\n\nThe second-order \nHamilton operator corresponding to the Lagrangian\n\\eqn{lag2} can be written in the form\n\\begin{equation}\n{\\bf H}^{II}=\\int d^2 x_\\perp \\int dz\n\\frac{1}{2}\\left\\{\\sum_i \\dot \\psi_i^2+\\sum_i\\left[\\frac {d\\psi_i}{dz}\\right]^2\n+\\sum_{ij}\\psi_i {\\cal M}_{\\perp ij}\\psi_j\\right\\}\n\\; .\\end{equation}\nHere we are interested only in the contribution of the zero modes\nof ${\\cal M}_\\perp$.\nIf we insert the fluctuation fields of the\nfield expansion Eqs. \\eqn{fieldexpansion1} and \\eqn{fieldexpansion2}\nthe operator ${\\cal M}_{\\perp ij}$ does not contribute. 
Using further the\nnorm of the translation modes in order to do the integration\nover $d^2 x_\\perp$ we obtain\n\\begin{equation}\n{\\bf H}^{II}_{\\rm transl.}=\n\\sigma_{\\rm cl}\\int dz \\frac{1}{2}\\left\\{\n\\dot {\\bf X}^2+ {\\bf X}'{}^2+\\dot {\\bf Y}^2+ {\\bf Y}'{}^2\\right\\}\n\\; . \\end{equation}\nIncluding the classical string tension we find\n\\begin{equation} \\label{Heffnonrel}\n{\\bf H}=\\sigma_{\\rm cl} \\int dz \\left[1+\\frac{1}{2}\\left(\n\\dot {\\bf X}^2+ {\\bf X}'{}^2+\\dot {\\bf Y}^2+ {\\bf Y}'{}^2\\right)\\right]+\\dots\n\\; ,\\end{equation}\nwhere the dots indicate the contributions of higher modes.\nThis looks analogous to the result for the quantization of kinks\n\\begin{equation}\nH= M+\\frac{1}{2M}{\\bf P}^2+\\dots= M(1+\\frac{1}{2}\\dot{\\bf X}^2)+\\dots\n\\; .\\end{equation}\nThere we know that the complete result, which only appears if higher loops\nare included, must be Lorentz covariant:\n\\begin{equation}\nH= \\frac{M}{\\sqrt{1-\\dot{\\bf X}^2}}+\\dots\n\\; ,\\end{equation}\ncorresponding to an action\n\\begin{equation}\nS=-M\\int dt \\sqrt{1-\\dot {\\bf X}^2}\n\\; .\n\\end{equation}\nFor the case of the Nielsen-Olesen vortex, the action\n\\begin{equation} \\label{Seffrel}\nS=-\\sigma_{\\rm cl}\\int dt\\int dz\\sqrt{1-\\dot {\\bf X}^2-\\dot {\\bf Y}^2+\n{\\bf X}'{}^2+{\\bf Y}'{}^2}\n\\end{equation}\nimplies the string Hamiltonian\n\\begin{equation} \\label{Heffrel}\nH=\\sigma_{\\rm cl} \\int dz \\frac{1+{\\bf X}'{}^2+{\\bf Y}'{}^2}\n{\\sqrt{1-\\dot {\\bf X}^2-\\dot {\\bf Y}^2+\n{\\bf X}'{}^2+{\\bf Y}'{}^2}}\n\\end{equation}\nwhich in the nonrelativistic limit leads to Eq. \\eqn{Heffnonrel}.\nIn this limit these results are in agreement with Ref. 
\\cite{Nielsen:1973cs}.\n\n\n\\section{Energy of collective fluctuations and renormalization}\n\\label{sec:fluctuationenergy}\n\\setcounter{equation}{0}\nThe Hamiltonian for collective oscillations of the vortex\nprimarily describes excitations of the string, here:\ntransversal waves that propagate along the $z$ axis.\nThe zero point energies associated with these degrees of freedom\ncan be absorbed, in the string picture,\n into the redefinition of the string tension.\nIn quantum field theory they are absorbed, as all other divergences,\nby counter terms local in the fields; the string tension does not appear\nin the basic Lagrangian and there is no related counter term either.\nThe difference between the two approaches appears in a similar\nway in the case of the Casimir effect \\cite{Baacke:1985fh,Graham:2002fi}.\n We would like to discuss this in some detail. \n\nThe trace of the Euclidean Green's function \n$F(p)$ for the gauge-Higgs sector is displayed in Fig.\n\\ref{fig:zeromode}. We have mentioned\nalready that at low momenta it behaves as $2\/p^2$, which is the\nreflection of the two zero modes. At high momenta it behaves\nas $a\/p^2+b\/p^4+ O\\left(p^{-6}\\right)$ where the coefficients\n$a$ and $b$ are determined by the lowest orders \nof perturbation theory. A contribution to the Green's function \nwhich at high momenta is proportional to $1\/p^2$ is converted, \nvia Eq. \\eqn{sigmanum}, into a quadratic divergence for the \none-loop string tension. \nIf we consider the complete Green's function then this quadratic\ndivergence, as well as the subleading logarithmic one,\ncan be handled \\cite{Baacke:2008sq} by subtracting \nthe leading orders of perturbation theory\nanalytically and by regularizing and renormalizing them in the\nusual way. The subtracted function, whose integral is finite,\nbehaves as $p^{-6}$; it is plotted in Fig. 
\\ref{fig:zeromode}.\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.5]{zeromode.eps}\n\\end{center}\n\\vspace{4mm}\n\\caption{\\label{fig:zeromode}\nThe integrand function $F(p)$ defined in Eq.\n\\eqn{fpdef}: circles: the unsubtracted function;\ndashed line: asymptotic behaviour $ a\/p^2$;\nsolid line: zero mode contribution $2\/p^2$;\ndiamonds: the subtracted function; dotted line: asymptotic\nbehaviour $\\propto 1\/p^6$ of the subtracted function. }\n\\end{figure}\nThe zero mode poles are part of the asymptotic\nGreen's function; if they were removed, the asymptotic behaviour\nof the subtracted Green's function would be $-2\/p^2$ and its\nintegral would again be divergent. \nWe have a prescription neither for this divergence\nnor for that of the separated zero mode pole. \nSo within usual renormalized perturbation\ntheory there is no obvious way to quantify the contribution of the\ncollective string oscillations to the total one-loop fluctuation\nenergy: the renormalization of the contribution of collective\nfluctuations to the string tension is embedded into\nthe renormalization of the entire one-loop contribution within the\nframework of renormalized quantum field theory. It is not necessary to\ninvoke a mathematical definition of divergent sums such as\nzeta function regularization.\n\nThere is further a conceptual difference between the zero point energy\nof the collective fluctuations in a string picture\n and their contribution to the one-loop corrections\nin quantum field theory:\nIn a pure string picture the presence of fluctuations\ntrivially requires the presence of the string. Once included,\ntheir zero point energies are added to the string tension. \nSo, if their contribution were finite it would be positive.\nIn quantum field theory the fluctuations of the field are present\neven in the absence of the vortex. The vortex generates an\nattractive potential. 
The presence of the\nzero modes implies that levels of a\ncontinuum which starts at energies larger than $m_W$ or $m_H$\nare pulled down such that at least in\none channel the continuum starts at energy zero. So we expect\na negative contribution to the string tension. Indeed, the\nunsubtracted function $F(p)$ is positive, and if the\nintegral of Eq. \\eqn{sigmanum} were finite, $\\sigma_{\\rm fl}$\nwould be negative. This simple\nfeature gets obscured in the process of regularization and \nrenormalization.\n\n\n\n\n\\section{Conclusions}\n\\setcounter{equation}{0}\n\\label{sec:conclusions}\nWe have considered here a particular aspect of the one-loop\ncorrections to the string tension of the Nielsen-Olesen vortex,\nthe r\\^ole of the translation modes. We have identified their wave functions\nand derived their contribution to the string tension. This contribution\ndescribes the energy of transversal waves propagating along the\ndirection of the string. These can be considered\nas collective fluctuations of the classical vortex, in the same way\nas the translation modes of quantum kinks describe the collective\nmotion of the kink. This relation is made precise, in both cases,\nby virial theorems. The effective action for the\nfluctuations is found to be the nonrelativistic limit\nof the Nambu-Goto action.\n\nFor the handling of the divergent one-loop corrections\nwe have discussed conceptual differences between standard\nstring theory and the vortex of the Abelian Higgs model.\nIn the latter case the divergences associated \nwith the zero point energies of collective\nfluctuations are treated along with those of other\nfluctuations within the standard framework\nof renormalized quantum field theory.\n\nThe approach described here only pertains to a\nstraight-line vortex of infinite length. 
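The statement that the nonrelativistic limit of the relativistic string Hamiltonian, Eq. \eqn{Heffrel}, reproduces Eq. \eqn{Heffnonrel} can be checked symbolically. A minimal sympy sketch (our notation: xd, yd, xp, yp stand for $\dot{\bf X}$, $\dot{\bf Y}$, ${\bf X}'$, ${\bf Y}'$, all scaled by a bookkeeping parameter $\epsilon$):

```python
import sympy as sp

sigma, eps = sp.symbols('sigma epsilon', positive=True)
xd, yd, xp, yp = sp.symbols('xd yd xp yp', real=True)

# Relativistic Hamiltonian density of the string, with all velocities
# and gradients scaled by the small parameter eps
num = 1 + (eps * xp)**2 + (eps * yp)**2
den = sp.sqrt(1 - (eps * xd)**2 - (eps * yd)**2 + (eps * xp)**2 + (eps * yp)**2)
h_density = sigma * num / den

# Expand to second order in eps: the nonrelativistic limit
expansion = sp.series(h_density, eps, 0, 3).removeO()
expected = sigma * (1 + sp.Rational(1, 2) * eps**2 * (xd**2 + yd**2 + xp**2 + yp**2))
assert sp.simplify(sp.expand(expansion - expected)) == 0
```

The quadratic term is exactly the Hamiltonian of the transverse waves derived from the translation modes, with the classical string tension as the overall scale.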
It can be expected\nto hold for more general string configurations as long as the\ncurvature radii and a possible finite length are large\ncompared with the transverse extension of the string. The \nconceptual differences in renormalization between \nthe idealized string model\nand the vortex of a quantum field theory are of course\nof an entirely general nature. Indeed, they are more general than\nthe special model considered here. E.g., if we elevate a kink\nof a $(1+1)$ dimensional quantum field theory to a domain wall in $(3+1)$ \ndimensions, its translation mode reappears in the form of collective\nsurface oscillations, and the renormalization of this degree of freedom\nis again embedded into the renormalization of the energy\nof all quantum fluctuations.\n\n\n\\section*{Acknowledgements}\nThe author has pleasure in thanking Nina Kevlishvili for\nuseful discussions and comments.\n\n\n\n\n\\newpage\n\\noindent\n{\\LARGE \\bf Appendix}\n\\setcounter{equation}{0}\n\n\\begin{appendix}\n\n\n\\section{The translation mode}\n\\setcounter{equation}{0}\n\\label{app:translationmode}\nThe fluctuation operator for the coupled system\nof transverse gauge and Higgs fields\nwas derived in Ref. \\cite{Baacke:2008sq}\nin a basis of partial waves with ``magnetic'' quantum numbers\n$m$, proportional to $\\exp(im\\varphi)$. We refer to\nthis reference for details. Essentially, the amplitude\n$F_4^m$ corresponds to the real part of the Higgs field\n$\\varphi_1$, $F_3^m$ to the imaginary part of the\nHiggs field $\\varphi_2$, and the amplitudes $F_1^m$ and $F_2^m$ to\ncombinations of the transverse gauge fields $a_1,a_2$. 
The\nbasis was chosen in such a way that the fluctuation operator\nbecomes symmetric, and the amplitudes are real relative to each other.\nThe amplitudes $F_i$ for $m= 1$ satisfy the following \ncoupled system of linear\ndifferential equations:\n\\begin{eqnarray}\n\\left\\{-\\frac{1}{r}\\frac{d}{dr}r\\frac{d}{dr}+m_W^2 f^2\\right\\}\nF^1_1+\\sqrt{2}m_W f' F^1_3+\\sqrt{2}m_W\\frac{A+1}{r}f F^1_4=0&&\n\\\\\n\\left\\{-\\frac{1}{r}\\frac{d}{dr}r\\frac{d}{dr}+\\frac{4}{r^2}+m_W^2 f^2\\right\\}\nF^1_2+\\sqrt{2}m_W f' F^1_3-\\sqrt{2}m_W\\frac{A+1}{r}f F^1_4=0&& \n\\\\\\nonumber\n\\left\\{-\\frac{1}{r}\\frac{d}{dr}r\\frac{d}{dr}+\\frac{1}{r^2}+\n\\frac{(A+1)^2}{r^2}+m_W^2 f^2+\\frac{m_H^2}{2}\\left(f^2-1\\right)\\right\\}\nF^1_3&&\n\\\\+\\sqrt{2}m_W f'(F^1_1+F^1_2)-2\\frac{A+1}{r} F^1_4=0&& \n\\\\\n\\nonumber\n\\left\\{-\\frac{1}{r}\\frac{d}{dr}r\\frac{d}{dr}+\\frac{1}{r^2}+\n\\frac{(A+1)^2}{r^2}+\\frac{m_H^2}{2}\\left(3f^2-1\\right)\\right\\}\nF^1_4&&\n\\\\+\\sqrt{2}m_W\\frac{A+1}{r}(F^1_1-F^1_2)-2\\frac{A+1}{r} F^1_3=0&& \n\\; .\\end{eqnarray}\nThe last equation corresponds to the real part of the fluctuations\nof the field $\\phi$, and the translation mode is obtained\nas $\\nabla \\phi_{\\rm cl}=\\hat x v f'$. We therefore start with the ansatz\n\\begin{equation}\nF^1_4= c f'\n\\; ,\\end{equation}\nwhere the coefficient $c$ is a prefactor that will be fixed\nlater. 
Applying the derivative $d\/dr$ to the classical equation of motion\nfor $f(r)$ we find\n\\begin{eqnarray}\\nonumber\n\\frac{d}{dr}\\left\\{-\\frac{1}{r}\\frac{d}{dr}r\\frac{d}{dr}\n+\\frac{\\left[A(r)+1\\right]^2}{r^2}+\\frac{m^2_H}{2}\\left[f^2(r)-1\\right]\n\\right\\}f(r)&=&\n\\\\\\nonumber\n\\left\\{-\\frac{1}{r}\\frac{d}{dr}r\\frac{d}{dr}\n+\\frac{\\left[A(r)+1\\right]^2}{r^2}+\\frac{m^2_H}{2}\\left[3f^2(r)-1\\right]\n\\right\\}f'&&\n\\\\\n-2\\frac{(A+1)^2}{r^3}f+2\\frac{A+1}{r^2}fA'&=&0\n\\; .\\end{eqnarray}\nThis is the equation of motion for $F_4^1$ if we choose\n\\begin{eqnarray}\nF_3^1=c\\frac{A+1}{r}f\n\\\\\nF_1^1-F_2^1=c \\frac{\\sqrt{2}}{m_W}\\frac{A'}{r}\n\\; .\n\\end{eqnarray}\nThe assignment has to be checked for consistency with the\nremaining equations of motion. Multiplying the equation of motion of \n$f(r)$ with $(A+1)\/r$ and commuting this factor with \nthe derivatives we find\n\\begin{eqnarray}\\nonumber\n\\frac{A+1}{r}\\left\\{-\\frac{1}{r}\\frac{d}{dr}r\\frac{d}{dr}\n+\\frac{\\left[A(r)+1\\right]^2}{r^2}+\\frac{m^2_H}{2}\\left[f^2(r)-1\\right]\n\\right\\}f(r)&=&\n\\\\\\nonumber\n\\left\\{-\\frac{1}{r}\\frac{d}{dr}r\\frac{d}{dr}\n+\\frac{1}{r^2}+\\frac{\\left[A(r)+1\\right]^2}{r^2}+\\frac{m^2_H}{2}\\left[f^2(r)-1\\right] + m_W^2 f^2\n\\right\\}\\frac{A+1}{r}f(r)&&\n\\\\\n-2\\frac{(A+1)^2}{r^2}f'+2f'\\frac{A'}{r}=0&&\n\\; .\\end{eqnarray}\nIn the intermediate steps we have used the equation of motion\nfor $A(r)$ in order to replace a second derivative $A''$.\nThe result is consistent with the previous assignment if we\nchoose\n\\begin{equation}\nF_1^1+F_2^1=c \\frac{\\sqrt{2}}{m_W}\\frac{A'}{r}\n\\; .\\end{equation}\nSo we find\n\\begin{eqnarray}\nF_2^1&=&0\n\\\\\nF_1^1&=&c \\frac {\\sqrt{2}}{m_W}\\frac{A'}{r}\n\\; .\\end{eqnarray} \nThis has to be verified by deriving the equation of motion for\n$A'\/r$. 
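This kind of check is mechanical and can be delegated to a computer algebra system. A sympy sketch with generic profiles $A(r)$, $f(r)$ (the symbol m stands for $m_W$), verifying that applying $(1/r)\,d/dr$ to the classical gauge-profile equation yields the operator $-(1/r)\,d/dr\,r\,d/dr+m_W^2f^2$ acting on $A'/r$ plus the source term $2m_W^2\,(A+1)ff'/r$:

```python
import sympy as sp

r, m = sp.symbols('r m', positive=True)
A = sp.Function('A')(r)
f = sp.Function('f')(r)

# Classical gauge-profile equation, written as an expression that
# vanishes on-shell:  {-r d/dr (1/r) d/dr + m^2 f^2} (A + 1)
eom = -r * sp.diff(sp.diff(A + 1, r) / r, r) + m**2 * f**2 * (A + 1)

# Apply (1/r) d/dr to the classical equation ...
lhs = sp.diff(eom, r) / r

# ... and compare with the fluctuation operator acting on A'/r
# plus the induced source term:
g = sp.diff(A, r) / r
rhs = (-sp.diff(r * sp.diff(g, r), r) / r + m**2 * f**2 * g
       + 2 * m**2 * (A + 1) * f * sp.diff(f, r) / r)

assert sp.simplify(sp.expand(lhs - rhs)) == 0
```

The same strategy (differentiate or multiply the classical equation, then compare with the fluctuation equations) reproduces the other consistency checks of this appendix.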
We obtain it by applying $1\/r~d\/dr$ to the \nclassical equation of motion for $A(r)$:\n\\begin{eqnarray}\\nonumber\n\\frac{1}{r}\\frac{d}{dr}\\left\\{\n-r \\frac{d}{dr}\\frac{1}{r}\\frac{d}{dr}\n+m^2_W f^2\\right\\}\n\\left[A+1\\right]&=&\n\\\\\n\\left\\{-\\frac{1}{r}\\frac{d}{dr}r \\frac{d}{dr}+m_W^2 f^2\\right\\}\n\\frac{A'}{r}+2m_W^2\\frac{A+1}{r} ff'=0&&\n\\; .\n\\end{eqnarray}\nThis is consistent with the previous assignments and the equation\nof motion for $F_1^1$.\n\nSo we have derived that the wave function of the translation\nmode with $m=1$ is given by\n\\begin{equation}\n\\left(\\begin{array}{c}F_1^1\\\\F_2^1\\\\F_3^1\\\\F_4^1\\end{array}\n\\right)=c\\left(\\begin{array}{c}\n\\sqrt{2}\/m_W~A'\/r\\\\\n0\\\\\n(A+1)f\/r\\\\\nf'\\end{array}\\right)\n\\end{equation}\nWe have started this derivation by considering the gradient\napplied to the scalar field $\\phi=v f(r)$, but this has fixed\nonly the component $F_4$. The component $F_3$ is easily seen\nas arising from replacing the derivatives $\\nabla_i$ by the\ncovariant derivatives $\\nabla_i-ig A_i$. The wave function for the \nvector potential is not given by an infinitesimal shift of the\nclassical potential, but it correctly describes the shift in\nthe classical magnetic field.\n\nThere is of course a second translation mode, with $m=-1$. Its wave function\nis given by\n\\begin{equation}\n\\left(\\begin{array}{c}F_1^{-1}\\\\F_2^{-1}\\\\F_3^{-1}\\\\F_4^{-1}\\end{array}\n\\right)=c'\\left(\\begin{array}{c}\n0\\\\\n\\sqrt{2}\/m_W~A'\/r\\\\\n-(A+1)f\/r\\\\\nf'\\end{array}\\right)\n\\end{equation}\nFrom these wave functions in the azimuthal basis and in terms\nof the amplitudes $F_i^{\\pm 1}$ we go back to the physical basis\n$\\varphi_1,\\varphi_2,a_1,a_2$.\nThese are related to the functions $F_i^1$\nvia \\footnote{The prefactors appearing in the definitions\nof $F_1$ and $F_2$, Eq. (4.2) of Ref. \\cite{Baacke:2008sq}\nshould be $1\/\\sqrt 2$, not $1\/2$. This misprint also appears\nin Refs. 
\\cite{Baacke:1994bk} and \\cite{Baacke:2008zx}}\n\\begin{equation}\n\\left(\\begin{array}{c}a_1\\\\a_2\\\\\\varphi_1\\\\\\varphi_2\\end{array}\n\\right)=\\frac{1}{\\sqrt{2\\pi}}\\left(\\begin{array}{c}\n(F_1^1+F_1^{1*})\/\\sqrt{2}\\\\\ni(F_1^1-F_1^{1*})\/\\sqrt{2}\\\\\n-i[F_4^1\\exp(i\\varphi)-F_4^{1*}\\exp(-i\\varphi)]\\\\\nF_3^1\\exp(i\\varphi)+F_3^{1*}\\exp(-i\\varphi)\n\\end{array}\\right)\n\\end{equation}\nHere we have used reality constraints for the fields in order\nto eliminate the functions $F_i^{-1}$.\nThe wave functions corresponding to infinitesimal translations\ninto the $x$ and $y$ directions are obtained by\nchoosing the free coefficient $c$ in such a way that the real part of the\nHiggs field fluctuation $\\varphi_1$ is given by $d \\phi_{\\rm cl}\/dx$\nand $d \\phi_{\\rm cl}\/dy$, respectively. Explicitly, \n$c_x=iv\\sqrt{\\pi\/2}$ and $c_y=v \\sqrt{\\pi\/2}$.\nThe complete wave functions are \ngiven in section \\ref{sec:collectivestringoscillations}.\n\n\n\n\n\\section{The virial theorem}\n\\label{app:virialtheorem}\n\\setcounter{equation}{0}\nAs we have discussed in section \\ref{sec:collectivestringoscillations}\nthe normalization of the translation mode derived from the\ndeformation of the classical solution\nis given by\n\\begin{equation}\n|\\psi_t|^2=\\pi v^2\\int r~dr~\\left\\{\\frac{2}{m_W^2}\\frac{A'{}^2(r)}{r^2}+\n\\frac{(A(r)+1)^2}{r^2}f^2(r)+f'{}^2(r)\\right\\}\n\\end{equation}\nThe classical string tension is given by Eq. 
\\eqn{eq:classtens},\ni.e., \n\\begin{eqnarray}\\nonumber\n\\sigma_{\\rm cl}&=&\\pi v^2\\int r~dr~\\left\\{\\frac{1}{m_W^2}\\frac{A'{}^2(r)}{r^2}+\n\\frac{(A(r)+1)^2}{r^2}f^2(r)+f'{}^2(r)\\right.\n\\\\\n&&\\left.\\hspace{25mm} +\\frac{m_H^2}{4}\\left[f^2(r)-1\\right]^2\\right\\}\n\\end{eqnarray}\nIn analogy with the virial theorem for quantum kinks, where the normalization\nof the translation mode is equal to the classical mass,\nwe expect a virial theorem $|\\psi_t|^2=\\sigma_{\\rm cl}$ which\nreduces to the identity\n\\begin{equation}\n\\int r~dr~\\frac{1}{m_W^2}\\frac{A'{}^2(r)}{r^2}\n=\\int r~dr~\\frac{m_H^2}{4}\\left(f^2(r)-1\\right)^2\n\\; .\\end{equation}\nThis identity can readily be verified numerically. In order\nto derive the relation analytically we use the following weighted integrals \nover the classical equations of motion\n\\begin{eqnarray}\n&&I_1=\\int r~dr~f\\left\\{\n-\\frac{1}{r}\\frac{d}{dr}r\\frac{d}{dr}\n+\\frac{\\left[A+1\\right]^2}{r^2}+\n\\frac{m^2_H}{2}\\left(f^2-1\\right)\n\\right\\}f=0\n\\\\\n&&I_2=\\int r~dr~f r\\frac{d}{dr}\\left\\{\n-\\frac{1}{r}\\frac{d}{dr}r\\frac{d}{dr}\n+\\frac{\\left[A+1\\right]^2}{r^2}+\n\\frac{m^2_H}{2}\\left(f^2-1\\right)\n\\right\\}f=0\n\\\\\n&&I_3=\\int dr~(A+1)\\frac{d}{dr}\n\\left\\{-r\\frac{d}{dr}\\frac{1}{r}\\frac{d}{dr}+m^2_W f^2\\right\\}\n\\left(A+1\\right)=0 \\hspace{25mm}\n\\end{eqnarray}\nIntegrating by parts these take the form\n\\begin{eqnarray} \nI_1 &=&\\int r~dr~\\left\\{\\left(\\frac{df}{dr}\\right)^2\n+\\frac{(A+1)^2}{r^2}f^2+\\frac{m_H^2}{2}f^2(f^2-1)\\right\\}=0 \n\\\\\\nonumber\nI_2 &=&\\int r~dr~\\left\\{-2\\left(\\frac{df}{dr}\\right)^2\n-2\\frac{(A+1)^2}{r^2}f^2-m_H^2f^2(f^2-1)\\right.\n\\\\\n&&\\left.\\hspace{25mm}+\\frac{m_H^2}{4}(f^2-1)^2-\\frac{(A+1)^2}{r}ff'\\right\\}=0 \n\\\\\nI_3 &=& \\int r~dr~\\left\\{-\\frac{A'{}^2}{r^2}\n+m_W^2\\frac{(A+1)^2}{r}ff'\\right\\}=0\n\\end{eqnarray}\nCombining the first and second integrals we find\n\\begin{equation}\nI_2+2I_1=\\int 
r~dr~\\left\\{\\frac{m_H^2}{4}(f^2-1)^2-\n\\frac{(A+1)^2}{r}ff'\\right\\}=0\n\\end{equation}\nAdding the third integral we obtain the relation\n\\begin{equation}\nI_2+2I_1+\\frac{1}{m_W^2}I_3=\\int r~dr~\\left\\{\n-\\frac{1}{m_W^2}\\frac{A'{}^2}{r^2}+\n\\frac{m_H^2}{4}\\left(f^2-1\\right)^2\\right\\}=0\n\\end{equation}\nwhich is the expected result.\n\\end{appendix}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{INTRODUCTION}\n\nMagnetic reconnection in the lower solar atmosphere manifests itself through dynamical events such as jets \\citep[e.g.,][]{shibata:2007, Tian:2014:IRIS}, explosive events such as Ellerman bombs \\citep[][]{Ellerman:1917, PFChen:2001, CNelson:2013, HYang:2013, Peter:2014:IRIS}, and possibly Type II spicules \\citep{depontieu:2007}. Reconnection events contribute to the heating of the chromosphere \\citep[e.g.,][]{jess:2014}, and the contribution of jets and Type II spicules to the mass and energy budgets of the corona and solar wind is under active investigation \\citep{depontieu:2011, cranmer:2010, madjarska:2011}. \\cite{arge:1998} proposed that chromospheric reconnection could cause the elemental fractionation that is responsible for the first ionization potential (FIP) effect \\citep[see also][]{sturrock:1999A, feldman:2002}. Partial ionization effects are important not just in the chromosphere but also in molecular clouds, protoplanetary disks, the neutral phases of the interstellar medium, some exoplanetary atmospheres \\citep{koskinen:2010}, Earth's ionosphere \\citep{leake:2014}, the edge of tokamaks \\citep{mladams:thesis}, and dedicated reconnection experiments \\citep[e.g.,][]{lawrence:2013}.\n\nTheories and simulations of reconnection have generally assumed that the plasma is fully ionized. This approximation is valid for the fully ionized solar corona but invalid for weakly ionized plasmas such as the solar chromosphere which has ionization fractions ranging from {$\\lesssim$}$0.01$ to {$\\sim$}$0.5$. 
Partial ionization effects modify the dynamics of reconnection in several ways \\citep{zweibel:1989, zweibel:2011}. Before the onset of reconnection, current sheets thin significantly due to ambipolar diffusion \\citep{brandenburg:1994, brandenburg:1995}. When there is strong coupling between ions and neutrals (e.g., on length scales longer than the neutral-ion mean free path, $\\lambda_{ni}$), the effective ion mass is increased by the ratio of the total mass density $\\rho$ to the ion mass density $\\rho_i$. This decreases the bulk Alfv\\'en speed and consequently the predicted reconnection rate \\citep{zweibel:1989}. The plasma resistivity has contributions from both electron-ion and electron-neutral collisions \\cite[e.g.,][]{piddington:1954, cowling:1956, ni:2007}. \n\nThe Hall effect is expected to be important on scales comparable to or less than the ion inertial length $d_i$. However, the effective ion inertial length is predicted to be enhanced in plasmas with strong ion-neutral coupling: $d_i' = d_i\\sqrt{\\rho\/\\rho_i}$ \\citep[e.g.,][]{pandey:2008a}. Consequently, it has been proposed that Hall reconnection may occur at longer length scales or larger densities than would be predicted from the ion density alone \\citep{malyshkin:2011, vekstein:2013}. \\citet{malyshkin:2011} predict that enhancement of the Hall effect will lead to fast reconnection in molecular clouds and protoplanetary disks, but that this transition is less likely to be important in the solar chromosphere. In general, two regimes may be considered. \nWhen $\\lambda_{ni} \\ll d_i$, the Hall effect will be enhanced because of strong coupling and fast reconnection is predicted to occur. When $d_i \\lesssim \\lambda_{ni} \\lesssim d_i'$, some enhancement of the Hall effect due to ion-neutral coupling is also expected. 
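As a rough numerical illustration of the enhancement $d_i' = d_i\sqrt{\rho/\rho_i}$, the following sketch evaluates the proton inertial length and its effective value for hypothetical chromospheric densities (order-of-magnitude values chosen for illustration only, not taken from any particular model):

```python
# Rough illustration of the ion-neutral enhancement of the ion inertial
# length, d_i' = d_i * sqrt(rho / rho_i), for hypothetical chromospheric
# number densities (illustrative values only).
import math

e    = 1.602e-19     # elementary charge [C]
m_p  = 1.673e-27     # proton mass [kg]
eps0 = 8.854e-12     # vacuum permittivity [F/m]
c    = 2.998e8       # speed of light [m/s]

n_i = 3e16           # hypothetical ion number density [m^-3]
n_n = 7.6e18         # hypothetical neutral number density [m^-3]

# Ion inertial length d_i = c / omega_pi for protons
omega_pi = math.sqrt(n_i * e**2 / (eps0 * m_p))
d_i = c / omega_pi

# Effective inertial length under strong ion-neutral coupling (hydrogen,
# so rho/rho_i reduces to a number-density ratio)
rho_ratio = (n_i + n_n) / n_i
d_i_eff = d_i * math.sqrt(rho_ratio)

print(f"d_i  = {d_i:.2e} m")
print(f"d_i' = {d_i_eff:.2e} m  (enhancement factor {math.sqrt(rho_ratio):.1f})")
```

For these values $d_i$ is of order a meter while $d_i'$ is larger by over an order of magnitude, which is why the regime $d_i \lesssim \lambda_{ni} \lesssim d_i'$ can span a substantial range of scales.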
However, as shown previously by \\citet[hereafter, \\citetalias{leake:partial1}]{leake:partial1} and \\citet[hereafter, \\citetalias{leake:partial2}]{leake:partial2}, the ion density (and therefore also the local values for $d_i$, $d_i'$, and $\\lambda_{ni}$) may vary by more than an order of magnitude between the reconnection current sheet and the ambient plasma, which makes it difficult to \\emph{a priori} evaluate the importance of the Hall effect in determining the rate of reconnection and the structure of the reconnection region in this parameter regime.\n\nThe behavior of weakly ionized plasmas can be modeled using either single-fluid or multi-fluid formulations. A single-fluid approach incorporates the ambipolar diffusion term into the generalized Ohm's law to account for relative drift between ions and neutrals \\citep[e.g.,][]{brandenburg:1994, brandenburg:1995}. This approach is less computationally expensive but implicitly assumes that the ions and neutrals are in ionization equilibrium. Multi-fluid models evolve each of the ion, neutral, and sometimes electron components separately to allow relative drifts between each of these populations and the fluid to be out of ionization equilibrium \\citep[e.g.,][]{meier:2012A}.\n\nIn recent years, considerable progress has been made in modeling magnetic reconnection in the lower solar atmosphere. \\citet{sakai:2006}, \\citet{smith:2008}, \\citet{sakai:2008}, and \\citet{sakai:2009} performed two-fluid (ion-neutral) simulations of coalescing current loops in the solar chromosphere. \\citet{smith:2008} found that the rate of reconnection in simulations of the upper chromosphere was about $\\sim${$20$} times greater than in the lower chromosphere which has a considerably lower ionization fraction. \\cite{sakai:2008} and \\citet{sakai:2009} investigated the role of reconnection in penumbra filaments. 
These models assume fixed ionization and recombination rates rather than assuming a dependence on temperature and density. However, it is essential to incorporate rates that depend on physical conditions in order to accurately capture the important role of recombination during chromospheric reconnection.\n\n\\citetalias{leake:partial1} and \\citetalias{leake:partial2} used the plasma-neutral module of the HiFi framework \\citep{lukin:thesis, meier:thesis} to model chromospheric reconnection. The simulations showed a strong enhancement of ion density inside the current sheet as a result of ions being preferentially dragged along by the reconnecting magnetic field. Recombination and ion outflow were of comparable importance in the ion continuity equation for removing ions from the current sheet. The ions and neutral inflows were decoupled, but the ion and neutral outflow jets were strongly coupled. Late in time, the simulations by \\citetalias{leake:partial1} and \\citetalias{leake:partial2} showed the development of the secondary tearing instability known as the plasmoid instability \\citep[][]{loureiro:2007}. The increase in the ion and electron densities in the plasmoids led to an increase in the recombination rate that allowed further contraction of the magnetic islands on timescales comparable to the advection time of islands out of the current sheet. \\citet{ni:2015} present one-fluid simulations of the plasmoid instability in the solar chromosphere using the NIRVANA code with and without guide fields. They find that the onset of the plasmoid instability leads to fast reconnection, and that slow shocks develop near the X-points. Cases without guide fields show rapid thinning of the current sheet due to ambipolar diffusion and radiative cooling, but these effects are not significant during guide field simulations.\n\nPrior models of magnetic reconnection in weakly ionized plasmas have generally assumed symmetric inflow. 
In general, however, there will be some asymmetry in the upstream magnetic field strengths, temperatures, and densities. The standard model for anemone jets predicts that chromospheric reconnection occurs when newly emerged flux interacts with pre-existing, overlying flux \\citep[e.g.,][]{shibata:2007}. It is reasonable to expect that these different plasma domains will have different plasma parameters, so this configuration naturally leads to asymmetric reconnection. Because the chromosphere is a dynamic magnetized environment \\citep{leenaarts:2007}, asymmetry in the reconnection process is likely to be the norm.\n\nThe physics of asymmetric inflow reconnection has been investigated in detail for fully ionized plasmas. One of the principal applications of this work has been Earth's dayside magnetopause. Asymmetric inflow reconnection has also been investigated in the context of Earth's magnetotail and elsewhere in the magnetosphere \\citep{oieroset:2004, muzamil:2014}, the solar atmosphere \\citep{NNakamura:2012, murphy:double, YingnaSu:2013:prominence, YangSu:2013}, laboratory experiments \\citep{yamada:2007, yoo:2014, murphy:mrx}, and plasma turbulence \\citep{servidio:2009, servidio:2010}. \\citet{cassak:asym} performed a scaling analysis for asymmetric inflow reconnection. They found that the outflow is governed by a hybrid upstream Alfv\\'en speed that is a function of the magnetic field strength and density in both upstream regions \\citep[see also][]{birn:2010} and that the flow stagnation point in the simulation frame and magnetic field null were not colocated. The structure and dynamics of asymmetric reconnection in fully ionized collisionless and two-fluid plasmas have been studied previously in several works \\citep[e.g.,][]{cassak:hall, cassak:dissipation, malakit:2010, malakit:2013, pritchett:2008:asym, pritchett:2009, mozer:2008, mozer:2009, swisdak:2003, aunai:2013b, aunai:2013a, murphy:mrx}. 
Simulations of the plasmoid instability during reconnection with asymmetric upstream magnetic fields by \\citet{murphy:plasmoidasym} showed that the resultant magnetic islands developed primarily into the weak field upstream region. Because the reconnection jets impacted the islands obliquely rather than directly, the islands developed net vorticity. In addition to asymmetric inflow reconnection, several groups have investigated asymmetric outflow reconnection \\citep[e.g.,][]{oka:2008, murphy:asym, murphy:retreat} and reconnection with three-dimensional asymmetry \\citep[e.g.,][]{AlHachami:2010, wyper:2013}. \n\nObservational investigations of partially ionized reconnection in the chromosphere are challenging because the dissipation length scales are significantly shorter than can be resolved with current instrumentation. Diagnosing the chromospheric magnetic field is an important but very difficult problem that requires inversion of spectropolarimetric data \\citep[e.g.,][]{kleint:2012}, and this will be even more challenging for short-lived dynamical events. Comparisons between simulations and observations should therefore concentrate on the large-scale consequences of chromospheric reconnection which depend on small-scale processes. A complementary approach to solar observations is to study partially ionized reconnection in the laboratory. \\citet{lawrence:2013} have performed such studies at the Magnetic Reconnection Experiment \\citep[MRX;][]{yamada:1997a}.\n\nOur motivation is to investigate the role of asymmetry on the small-scale physics of reconnection in partially ionized chromospheric plasmas. While there are similarities to symmetric cases, there are also physical effects such as neutral flows through the current sheet that do not occur in cases with symmetric inflow. 
We use the plasma-neutral module of the HiFi modeling framework \\citep{lukin:thesis} to perform simulations of magnetic reconnection with asymmetric upstream magnetic field strengths, temperatures, and\/or densities in the weakly ionized solar chromosphere. In Section \\ref{numerical}, we describe the numerical method and problem setup (see also \\citetalias{leake:partial1} and \\citetalias{leake:partial2}). In Section \\ref{global}, we describe the global dynamics of reconnection, the structure of the current sheet, the role of the Hall effect, the dynamics of the plasmoid instability, the motion of the X-point, and ion\/neutral flows at the X-point. We discuss our results in Section \\ref{discussion}.\n\n\\section{NUMERICAL MODEL AND PROBLEM SETUP\\label{numerical}}\n\nThe HiFi framework\\footnote{See \\begin{tt}http:\/\/faculty.washington.edu\/vlukin\/HiFi\\_Framework.html\\end{tt}}\n\\citep{lukin:thesis, glasser:2004} uses a spectral element spatial representation and implicit time advance to solve systems of partial differential equations. The modular approach makes new physical models straightforward to implement \\citep[e.g.,][]{gray:2010B, lukin:2011, ohia:2012, le:2013, stanier:2013, browning:2014, elee:2014}. In this paper, we use the module for partially ionized, reacting plasmas \\citep[][\\citetalias{leake:partial1}, \\citetalias{leake:partial2}]{meier:thesis}. These plasmas consist of neutral and ionized hydrogen and electrons. A multi-fluid approach allows the plasma and neutral components to be modeled separately. We summarize the equations used for our 2.5D simulations in this section, but direct the reader to \\citetalias{leake:partial1} and \\citetalias{leake:partial2} for further detail. In general, we use the notation and conventions from \\citetalias{leake:partial1} and \\citetalias{leake:partial2}\\@. The subscripts `n', `i', and `e' refer to neutrals, ions, and electrons, respectively. 
\n\n\\subsection{Normalizations\\label{normalizations}}\n\nFollowing \\citetalias{leake:partial1} and \\citetalias{leake:partial2}, we normalize the fluid equations to values characteristic of the lower chromosphere. We choose a characteristic length scale of $L_\\star\\equiv 1\\times 10^4$ m, a characteristic number density of $n_\\star \\equiv 3\\times 10^{16}$ m$^{-3}$, and a characteristic magnetic field strength of $B_\\star \\equiv 1\\times 10^{-3}$ T\\@. From these quantities, we derive additional normalizing values to be $V_\\star\\equiv B_\\star \/ \\sqrt{\\mu_0 m_p n_\\star} = 1.26\\times 10^5$ m\\,s$^{-1}$ for velocity, $t_\\star \\equiv L_\\star \/ V_\\star = 0.0794$ s for time, $T_\\star \\equiv B_\\star^2 \/ k_B \\mu_0 n_\\star = 1.92 \\times 10^6$ K for temperature, $P_\\star \\equiv B_\\star^2\/\\mu_0 = 0.796$ Pa for pressure, $J_\\star \\equiv B_\\star\/\\mu_0 L_\\star = 7.96\\times 10^{-2}$ A\\,$\\cdot$\\,m$^{-2}$ for current density, and $\\eta_\\star \\equiv \\mu_0 L_\\star V_\\star = 1.58 \\times 10^3$ $\\Omega$\\,{$\\cdot$}\\,m for resistivity. Unless otherwise indicated (e.g., Sections \\ref{irce} and \\ref{initial}), the equations presented for the simulation will be in dimensionless units according to these normalizations. \n\n\\subsection{Ionization and Recombination\\label{irce}}\n\nThe HiFi module for partially ionized plasmas includes both ionization and recombination of hydrogen to allow departures from ionization equilibrium \\citepalias{leake:partial1, leake:partial2}. The ionization and recombination rates are given by\n\\begin{eqnarray}\n \\Gamma^{ion}_n & \\equiv & -n_n \\nu^{ion}, \\label{ionrate} \\\\\n \\Gamma^{rec}_i & \\equiv & -n_i \\nu^{rec}, \\label{recrate}\n\\end{eqnarray}\nwith $\\Gamma^{ion}_i = -\\Gamma^{ion}_n$ and $\\Gamma^{rec}_i =\n-\\Gamma^{rec}_n$. 
The ionization frequency of hydrogen is approximated to be\n\\begin{equation}\n \\nu^{ion} = \\frac{n_e A}{X + \\phi_{ion}\/T_e^*}\n \\left(\\frac{\\phi_{ion}}{T_e^*}\\right)^K \n \\exp\\left(-\\frac{\\phi_{ion}}{T_e^*}\\right)\n \\mbox{~m}^3\\mbox{\\,s}^{-1}, \\label{ionfreq}\n\\end{equation}\nwith $A = 2.91\\times 10^{-14}$, $K=0.39$, $X=0.232$, and $\\phi_{ion} = 13.6$ eV \\citep{voronov:1997}. Here, $T_e^*$ is the electron temperature in eV\\@. \\citet{smirnov:2003} approximates the recombination frequency of hydrogen to be\n\\begin{equation}\n \\nu^{rec} = 2.6 \\times 10^{-19} \\frac{n_e}{\\sqrt{T_e^*}} \n \\mbox{~m}^3\\mbox{\\,s}^{-1}\n \\label{recfreq}.\n\\end{equation}\nAt a characteristic temperature of $9000$ K, this corresponds to an equilibrium ionization fraction of $4.1\\times 10^{-4}$. These expressions do not include the consequences of non-local thermodynamic equilibrium radiative transfer. For example, if Lyman $\\alpha$ is optically thick, then there will be more neutral hydrogen in the $n=2$ state than predicted from these expressions which would allow for enhanced ionization. At lower temperatures, ionization from low-FIP elements may contribute more to the electron density than hydrogen.\n\n\\subsection{Multi-fluid Equations\\label{fluidequations}}\n\nIn this section, we summarize the equations solved by the plasma-neutral module of HiFi. The equations are described more thoroughly by \\citetalias{leake:partial1} and \\citetalias{leake:partial2} \\citep[see also][]{meier:thesis, meier:2012A}. Our simulations include the modifications and extensions to the model described by \\citetalias{leake:partial2}. 
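As a numerical cross-check of Section \ref{irce}, the equilibrium ionization fraction quoted there follows from balancing the ionization and recombination rates, Eqs.\ \ref{ionrate}--\ref{recfreq}; a minimal Python sketch (coefficients exactly as given above, function names ours):

```python
# Equilibrium ionization fraction from balancing the ionization rate
# (Voronov 1997 fit) against radiative recombination (Smirnov 2003).
# At T = 9000 K this should reproduce the value f ~ 4.1e-4 quoted above.
import math

def ionization_coeff(T_eV):
    """Voronov fit: nu_ion / n_e in m^3/s."""
    A, K, X, phi = 2.91e-14, 0.39, 0.232, 13.6
    u = phi / T_eV
    return A / (X + u) * u**K * math.exp(-u)

def recombination_coeff(T_eV):
    """Smirnov approximation: nu_rec / n_e in m^3/s."""
    return 2.6e-19 / math.sqrt(T_eV)

T_eV = 9000.0 * 8.617e-5          # 9000 K expressed in eV
# Equilibrium: n_n nu_ion = n_i nu_rec, so n_i/n_n is the coefficient
# ratio (the common factor of n_e cancels).
ratio = ionization_coeff(T_eV) / recombination_coeff(T_eV)
f_eq = ratio / (1.0 + ratio)      # f = n_i / (n_i + n_n)
print(f"equilibrium ionization fraction at 9000 K: {f_eq:.2e}")
```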
\n\nThe ion and neutral continuity equations are\n\\begin{eqnarray}\n \\frac{\\partial n_i}{\\partial t} + \\nabla\\cdot\n \\left(n_i\\ensuremath{\\mathbf{V}_i}\\right) & = & \\Gamma^{rec}_i + \\Gamma^{ion}_i,\n \\\\ \n \\frac{\\partial n_n}{\\partial t} + \\nabla\\cdot\n \\left(n_n\\ensuremath{\\mathbf{V}_n}\\right) & = & \\Gamma^{rec}_n + \\Gamma^{ion}_n.\n\\end{eqnarray}\nThese equations include ionization and radiative recombination as described in Section \\ref{irce}, \\citetalias{leake:partial1}, and \\citetalias{leake:partial2}. We assume quasineutrality such that $n_i=n_e$.\n\nThe ion and neutral momentum equations are given by\n\\begin{eqnarray}\n \\frac{\\partial }{\\partial t}\n \\left(m_in_i\\ensuremath{\\mathbf{V}_i}\\right) + \n \\nabla \\cdot \\left( m_in_i\\ensuremath{\\mathbf{V}_i}\\vi + \\ensuremath{\\mathsf{P}_i} + \\ensuremath{\\mathsf{P}_e} \\right) = \n \\hspace{6cm}\\nonumber\\\\ \\hspace{2cm}\n \\ensuremath{\\mathbf{J}} \\times \\ensuremath{\\mathbf{B}} + \\ensuremath{\\mathbf{R}^{in}_i} + \\Gamma^{ion}_im_i\\ensuremath{\\mathbf{V}_n} - \\Gamma^{rec}_nm_i\\ensuremath{\\mathbf{V}_i} \n + \\Gamma^{cx}m_i\\left(\\ensuremath{\\mathbf{V}_n}-\\ensuremath{\\mathbf{V}_i}\\right) +\\ensuremath{\\mathbf{R}^{cx}_{in}} - \\ensuremath{\\mathbf{R}^{cx}_{ni}}, \\label{momentum_plasma}\n \\\\\n \\frac{\\partial }{\\partial t}\n \\left(m_in_n\\ensuremath{\\mathbf{V}_n}\\right) + \n \\nabla \\cdot \\left( m_in_n\\ensuremath{\\mathbf{V}_n}\\vn + \\ensuremath{\\mathsf{P}_n} \\right) = \n \\hspace{6cm}\\nonumber\\\\ \\hspace{2cm}\n -\\ensuremath{\\mathbf{R}^{in}_i} + \\Gamma^{rec}_nm_i\\ensuremath{\\mathbf{V}_i} - \\Gamma^{ion}_im_i\\ensuremath{\\mathbf{V}_n}\n + \\Gamma^{cx} m_i\\left(\\ensuremath{\\mathbf{V}_i}-\\ensuremath{\\mathbf{V}_n}\\right)\n -\\ensuremath{\\mathbf{R}^{cx}_{in}} + \\ensuremath{\\mathbf{R}^{cx}_{ni}} \n . 
\\label{momentum_neutrals}\n\\end{eqnarray}\nThe Lorentz force acts directly on the plasma but not the neutrals, while the neutral pressure gradient acts directly on the neutrals but not the plasma. Coupling between the plasma and neutrals is achieved through many of the terms on the right hand side of Eqs.\\ \\ref{momentum_plasma} and \\ref{momentum_neutrals}. The term $\\ensuremath{\\mathbf{R}^{in}_i}$ represents momentum transfer from neutrals to ions due to identity preserving collisions and is given by\n\\begin{equation}\n \\ensuremath{\\mathbf{R}^{in}_i} = m_{in} n_i \\nu_{in} \\left(\\ensuremath{\\mathbf{V}_n}-\\ensuremath{\\mathbf{V}_i}\\right), \\label{rinidef}\n\\end{equation}\nwhere $m_{in} = m_i m_n \/(m_i+m_n)$ and the collision frequency $\\nu_{in}$ is given by Eq.\\ 7 of \\citetalias{leake:partial2}. The pressure tensor for species $\\alpha$ is\n\\begin{equation}\n \\mathsf{P}_\\alpha=P_\\alpha\\mathsf{I} + \\pi_\\alpha\n \\label{ptensdef},\n\\end{equation}\nwhere $P_\\alpha$ is the scalar pressure and $\\mathsf{I}$ is the identity tensor. The viscous stress tensor is then\n\\begin{equation}\n \\pi_\\alpha = - \\xi_\\alpha\\left[\\nabla\\mathbf{V}_\\alpha + \\left(\\nabla\\mathbf{V}_\\alpha\\right)^\\top\\right],\n\\end{equation}\nwith $\\xi_\\alpha$ as the isotropic dynamic viscosity coefficient. The terms that depend on the ionization and recombination rates represent momentum transfer by particles that change identity from neutrals to ions and vice-versa. We follow \\citetalias{leake:partial2} and include charge exchange. Here, $\\Gamma^{cx}$ is the charge exchange reaction rate and $\\mathbf{R}^{cx}_{\\alpha\\beta}$ is the momentum transfer due to charge exchange reactions from species $\\beta$ to species $\\alpha$. Charge exchange leads to increased momentum and energy transfer between species by about a factor of two, and therefore more effective coupling. 
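The action of the frictional term $\ensuremath{\mathbf{R}^{in}_i}$ of Eq.\ \ref{rinidef} can be illustrated with a toy relaxation problem: two uniform fluids coupled only by momentum-conserving drag relax toward their common center-of-momentum velocity. The sketch below uses illustrative dimensionless parameters, not values from the simulations:

```python
# Toy illustration of ion-neutral frictional coupling: two uniform fluids
# exchanging momentum through R = m_in * n_i * nu_in * (V_n - V_i), the
# drag force density on the ions (the neutrals feel -R).  Parameters are
# illustrative dimensionless choices, not simulation values.
m_i = m_n = 1.0                  # equal masses (hydrogen), arbitrary units
n_i, n_n = 1.0, 100.0            # weakly ionized: n_n >> n_i
nu_in = 1.0                      # ion-neutral collision frequency
m_in = m_i * m_n / (m_i + m_n)   # reduced mass

V_i, V_n = 1.0, 0.0              # initial relative drift
dt, nsteps = 1e-3, 20000

for _ in range(nsteps):
    R = m_in * n_i * nu_in * (V_n - V_i)   # force density on ions
    V_i += dt * R / (m_i * n_i)
    V_n -= dt * R / (m_n * n_n)

# Total momentum is conserved exactly, and both fluids relax to the
# center-of-momentum velocity n_i * V_i(0) / (n_i + n_n).
print(f"V_i = {V_i:.4f}, V_n = {V_n:.4f}")
```

Because $n_n \gg n_i$ here, the ions are dragged almost entirely to the neutral velocity, which is the sense in which a weakly ionized plasma acquires the effective mass density $\rho$ rather than $\rho_i$.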
We use the formulation for charge exchange given by Eqs.\\ 4--6 of \\citetalias{leake:partial2} \\citep[see also][]{meier:thesis, meier:2012A, leake:partial1, Barnett:1990}.\n\nWe adjust the collisional cross sections to be $\\Sigma_{in}=\\Sigma_{ni}=5\\times 10^{-19}$ m$^2$ (used in Eq.\\ 7 of \\citetalias{leake:partial2} to calculate $\\nu_{in}$) and $\\Sigma_{nn}=5\\times 10^{-19}$ m$^2$ (used in Eq.\\ 9 of \\citetalias{leake:partial2} to calculate $\\nu_{nn}$). These values differ from \\citetalias{leake:partial1} and \\citetalias{leake:partial2}, but the value for $\\Sigma_{in}$ is consistent with \\citet{khomenko:2012} and \\citet{ni:2015}. These cross sections are a function of energy, but we use constant values appropriate for the chromosphere as a simplifying assumption \\citep[see also][]{draine:1983}. The neutral and ion viscosity coefficients $\\xi_n$ and $\\xi_i$ are set by Eq.\\ 8 of \\citetalias{leake:partial2} using the revised value of $\\Sigma_{nn}$. While $\\xi_n$ is a function of local physical conditions, $\\xi_i$ and $\\xi_e$ are constant throughout the domain and correspond to the parallel component of the Braginskii viscosity for each species computed using the mean of the asymptotic upstream densities and temperatures. We use $\\xi_i=2.8\\times 10^{-6}$ and $\\xi_e=4.6\\times 10^{-14}$ with a normalization of $\\xi_\\star=m_p n_\\star L_\\star V_\\star$. \n\nThe neutral-ion mean free path\n\\begin{equation}\n \\lambda_{ni} = \\frac{V_{T,n}}{\\nu_{ni}^\\dagger} \\label{lambdanidef}\n\\end{equation}\ngoverns the lengths scales above which the neutrals and ions are coupled to each other. Here, the neutral thermal velocity is $V_{T,n}=\\sqrt{2 k_B T_n\/m_n}$ where $k_B$ is the Boltzmann constant, the neutral temperature is $T_n$, and the neutral mass is $m_n$. 
The frequency $\\nu_{ni}^\\dagger = \\nu_{ni} + \\nu_{ni}^{CX}$ includes contributions from the neutral-ion collision frequency $\\nu_{ni}$ (see Eq.\\ 16 of \\citetalias{leake:partial1}) and the neutral-ion charge exchange frequency $\\nu_{ni}^{CX}$ (see also Eqs.\\ 4--6 of \\citetalias{leake:partial2}). The ions and neutrals will be coupled on length scales much longer than $\\lambda_{ni}$, and decoupled on scales much shorter than $\\lambda_{ni}$.\n\nThe energy equations for the plasma and neutral components are given by Eqs.\\ 19 and 20 of \\citetalias{leake:partial1}. Both equations include frictional heating due to identity preserving collisions, thermal transfer due to changes in identity, and the effects of charge exchange. The energy equation for the plasma component combines the electron and ion energy equations and includes Ohmic heating, optically thin radiative losses, and anisotropic thermal conduction \\citepalias[Eq.\\ 21 of][]{leake:partial1}. The neutral energy equation includes isotropic thermal conduction that depends on plasma parameters and is set by Eq.\\ 10 of \\citetalias{leake:partial2}. Neutral thermal conduction dominates thermal diffusion, in part due to rapid thermal transfer between neutrals and ions. For characteristic values of $T=9000$ K and $n_n=7.6\\times 10^{18}$ m$^{-3}$, the neutral thermal conductivity is $\\kappa_n=3.05\\times 10^{22}$ m$^{-1}$\\,s$^{-1}$. We assume that the ion and electron temperatures are equal, but the neutral temperature is evolved separately. 
\n\nThe generalized Ohm's law for this paper is given by\n\\begin{equation}\n \\ensuremath{\\mathbf{E}} + \\ensuremath{\\mathbf{V}_i}\\times\\ensuremath{\\mathbf{B}} \n =\n \\eta\\ensuremath{\\mathbf{J}}\n + \\frac{\\ensuremath{\\mathbf{J}}\\times\\ensuremath{\\mathbf{B}}}{e n_e} \n - \\frac{\\nabla P_e}{e n_e}\n - \\frac{m_e \\nu_{en}}{e} \\left(\\ensuremath{\\mathbf{V}_i}-\\ensuremath{\\mathbf{V}_n}\\right) \\label{genohms}\n\\end{equation}\nThe resistivity includes both electron-ion and electron-neutral collisions and is given by\n\\begin{equation}\n \\eta = \\frac{m_e n_e \\left(\\nu_{ei} + \\nu_{en}\\right)}{\\left( e n_e \\right)^2},\n\\end{equation}\nwhere the electron-ion collision frequency $\\nu_{ei}$ and the electron-neutral collision frequency $\\nu_{en}$ are functions of number density and temperature and are given by Eq.\\ 13 of \\citetalias{leake:partial2} with $\\Sigma_{en}=\\Sigma_{ne}=1\\times 10^{-19}$ m$^{2}$ as used previously in \\citetalias{leake:partial1}, \\citetalias{leake:partial2}, \\citet{khomenko:2012}, and \\citet{ni:2015}. The resistivity therefore depends on plasma parameters and is not a constant as in \\citetalias{leake:partial1}. We include the Hall term and evolve scalar plasma and neutral pressures. \n\n\\subsection{Initial Conditions\\label{initial}}\n\nHere we describe the procedure to establish an approximate initial equilibrium with asymmetric upstream magnetic field strengths, densities, temperatures, and ionization fractions. Before proceeding, we define several parameters to quantify asymmetries in different fields. 
The magnetic, temperature, number density (of ions and neutrals), and ionization fraction asymmetries are defined as\n\\begin{eqnarray}\n \\ensuremath{\\mathcal{B}} & \\equiv & \\frac{B_{2}}{B_{1}}, \\label{basym}\n \\\\ \n \\ensuremath{\\mathcal{T}} & \\equiv & \\frac{T_{2}}{T_{1}}, \\label{tasym}\n \\\\\n \\ensuremath{\\mathcal{N}} & \\equiv &\n \\frac{n_{i,2}+n_{n,2}}{n_{i,1}+n_{n,1}}, \\label{nasym}\n \\\\\n \\ensuremath{\\mathcal{F}} & \\equiv & \\frac{f_2}{f_1}, \\label{fasym}\n\\end{eqnarray}\nwhere the subscripts `1' and `2' correspond to the asymptotic upstream magnitudes of each field for $y>0$ and $y<0$, respectively, and the ionization fraction is defined as\n\\begin{equation}\n f \\equiv \\frac{n_i}{n_i + n_n}. \\label{ionfrac}\n\\end{equation}\nAll of these quantities are functions of time, so the subscript `0' in expressions below indicates correspondence to the initial conditions.\n\nThe equilibrium magnetic field is specified as a modified Harris sheet,\n\\begin{equation}\n B_{x0}\\left(y\\right) = B_{1,0} \\left[ \\frac{\\tanh \\left(\n \\frac{y}{\\ensuremath{\\lambda_\\psi}} - b\\right) + b}{1+b}\n \\right], \\label{initial_b}\n\\end{equation}\nwhere $\\ensuremath{\\lambda_\\psi}$ is the initial thickness of the current sheet \\citep[see also][]{birn:2008, birn:2010, murphy:double, murphy:plasmoidasym}. The initial magnetic asymmetry is given by $\\ensuremath{\\mathcal{B}}_0 = (1-b)\/(1+b)$ with the convention that $0 \\leq b < 1$ so that $B_{2,0}\\leq B_{1,0}$. We describe the in-plane magnetic field using the magnetic flux, $A_z$, such that \n\\begin{equation}\n \\mathbf{B} = \\nabla\\times \\left(A_z\\ensuremath{\\hat{\\mathbf{z}}}\\right) + B_z\\ensuremath{\\hat{\\mathbf{z}}}. 
\label{fluxdef}\n\end{equation}\nThe flux corresponding to Eq.\ \ref{initial_b} is \n\begin{equation}\n A_{z0}(y) = \frac{B_{1,0}}{1+b}\n \left[\n \ensuremath{\lambda_\psi}\n \ln\cosh\left(\frac{y}{\ensuremath{\lambda_\psi}}-b\right)\n + b y\n \right].\n\end{equation}\nThe out-of-plane current density is then\n\begin{equation}\n J_{z0}(y) = - \frac{B_{1,0}}{\mu_0\ensuremath{\lambda_\psi}}\n \left[\n \frac{\n \mathrm{sech}^2{\left(\frac{y}{\ensuremath{\lambda_\psi}} - b\n \right)}}{1+b} \right]. \label{initial_j}\n\end{equation}\n\nThe initial temperature is set using the relations\n\begin{eqnarray}\n T_0(y) & = & T_{1,0} \n \left[ 1 + \n \left( \ensuremath{\mathcal{T}}_0 - 1 \right)\n \left( 1 - \zeta^2 \right)\n \right], \label{tempinit}\n \\\n \zeta &=& \frac{1}{2} \n \left[ 1 + \tanh\n \left(\n \frac{y}{\ensuremath{\lambda_\psi}} - b\n \right) \n \right] . \label{zetadef}\n\end{eqnarray}\nThe initial ionization fraction is found numerically by equating the ionization rate $\Gamma^{ion}_i$ with the recombination rate $\Gamma^{rec}_i$ using Eqs.\ \ref{ionrate}--\ref{recfreq}. The initial conditions are therefore in ionization equilibrium. If the initial temperature is non-uniform, then the initial ionization fraction will also be non-uniform: if $\ensuremath{\mathcal{T}}_0 \ne 1$, then $\ensuremath{\mathcal{F}}_0 \ne 1$. This is in contrast to \citetalias{leake:partial1} and \citetalias{leake:partial2}, which assume that the temperature and ionization fraction are both initially uniform.\n\nWe define $\ensuremath{P_\mathrm{tot}}$ to be the sum of the plasma and neutral pressures and the plasma pressure $P_p$ to be the sum of the electron and ion pressures,\n\begin{eqnarray}\n \ensuremath{P_\mathrm{tot}} & \equiv & P_n + P_p,\n \\\n P_p & \equiv & P_i + P_e. 
\n\\end{eqnarray}\nFor equal temperatures, the neutral and plasma pressures are related to the ionization fraction by\n\\begin{eqnarray}\n P_n &=& \\ensuremath{P_\\mathrm{tot}} \\left(\\frac{1 - f}{1 + f}\\right),\n \\\\\n P_p &=& \\ensuremath{P_\\mathrm{tot}} \\left(\\frac{ 2 f }{1 + f}\\right),\n\\end{eqnarray}\nwhere we recall that the total number of particles depends on the ionization fraction and that electrons are included in the plasma pressure. We calculate ${\\ensuremath{P_\\mathrm{tot}}}_{,0}$ to balance the Lorentz force associated with the magnetic field profile given by Eq.\\ \\ref{initial_b},\n\\begin{equation}\n P_{\\mathrm{tot},0}(y) = \\left( 1 + \\beta_{1,0} \\right)\n \\frac{B_{1,0}^2}{2\\mu_0} - \\frac{B_{x0}(y)^2}{2\\mu_0}, \\label{ptot}\n\\end{equation}\nwhere $\\beta_{1,0}$ is the ratio of ${\\ensuremath{P_\\mathrm{tot}}}_{,0}$ to the magnetic pressure for $y \\gg 0$. The number densities are then given by\n\\begin{eqnarray}\n n_n &=& \\frac{P_n}{k_B T},\n \\\\ \n n_i &=& \\frac{P_p}{2 k_B T}.\n\\end{eqnarray} \nWe assume that the ions and electrons have equal temperatures and number densities.\n\nThe above magnetic and pressure profiles satisfy the relation\n\\begin{equation}\n 0 = - \\nabla \\left( P_p + P_n \\right) + \\ensuremath{\\mathbf{J}} \\times \\ensuremath{\\mathbf{B}}.\n\\end{equation}\nHowever, there must also be a relative velocity between the plasma and neutrals to allow coupling via the momentum equations of the different species through collisions and charge exchange. In the absence of charge exchange, the steady state momentum equations for the plasma and neutral components are\n\\begin{eqnarray}\n 0 &=& -\\nabla P_p + \\ensuremath{\\mathbf{J}}\\times\\ensuremath{\\mathbf{B}} + \\ensuremath{\\mathbf{R}^{in}_i},\n \\\\\n 0 &=& -\\nabla P_n - \\ensuremath{\\mathbf{R}^{in}_i},\n\\end{eqnarray}\nrespectively, where the frictional force $\\ensuremath{\\mathbf{R}^{in}_i}$ is defined in Eq.\\ \\ref{rinidef}. 
In this approximation, the initial ion and neutral velocities along the inflow direction can be chosen so that the initial configuration contains no net force on either the plasma or the neutral component,\n\begin{eqnarray}\nV_{iy0}-V_{ny0} = \frac{\partial P_n\/\partial y}{m_{in}n_i\nu_{in}}, \label{viy0def}\n\end{eqnarray}\nwhere the neutral pressure gradient is calculated analytically using the expression\n\begin{equation}\n \frac{\partial P_n}{\partial y} = -\n \left( \frac{1-f_0}{1+f_0} \right)\n \left( \frac{B_{1,0}}{\mu_0 \lambda_\psi \left(1+b\right)} \right)\n B_{x0}(y)\,\n \mathrm{sech}^2\left( \frac{y}{\lambda_\psi} - b\right) .\n\end{equation}\nBecause our simulations include charge exchange, momentum transfer between the ions and neutrals is roughly twice as effective as in the case with frictional coupling alone. In lieu of an exact solution to the steady state momentum equation, we reduce the initial ion-neutral drift velocity given in Eq.\ \ref{viy0def} by a factor of two so that the initial conditions for both ions and neutrals are in approximate but not exact force balance. We set $V_{iy0}$ and $V_{ny0}$ so that they have the same magnitude but opposite sign. Example initial conditions are shown in Fig.\ \ref{ICplot}\@.\n\n\begin{figure}[t!]\n \begin{center}\n \includegraphics{f1.pdf}\n \end{center}\n \caption{The initial conditions for case E with $\ensuremath{\mathcal{B}}_0=0.5$, $\ensuremath{\mathcal{T}}_0=0.95$, $\ensuremath{\mathcal{N}}_0=1.47$, and $\ensuremath{\mathcal{F}}_0=0.35$.\n \label{ICplot}}\n\end{figure}\n\nThe initial state does not represent an exact equilibrium, which leads to an outward moving pulse along the inflow direction that propagates from the initial current sheet. This pulse is mostly damped by placing a layer of significantly enhanced viscosity near the conducting walls at $y=\pm 1$. 
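The total-pressure balance and the neutral\/plasma pressure partition described above can be checked numerically. The following sketch is illustrative only: the variable names, grid, and parameter values are ours (chosen to resemble Cases C--E in dimensionless units), not taken from the actual simulation code.

```python
import numpy as np

# Illustrative dimensionless parameters (ours, not the paper's exact inputs)
mu0 = 1.0
B1 = 1.0             # strong-side asymptotic field
b = 1.0 / 3.0        # gives B2/B1 = (1 - b)/(1 + b) = 0.5
lam = 0.1            # initial current sheet thickness, lambda_psi
beta1 = 2.0          # upstream beta on the strong-field side
f = 4.1e-4           # uniform ionization fraction (Case A-like value)

y = np.linspace(-1.0, 1.0, 20001)

# Modified Harris sheet (Eq. initial_b) and the balancing total pressure (Eq. ptot)
Bx = B1 * (np.tanh(y / lam - b) + b) / (1.0 + b)
Ptot = (1.0 + beta1) * B1**2 / (2.0 * mu0) - Bx**2 / (2.0 * mu0)

# Partition into neutral and plasma pressures for equal temperatures
Pn = Ptot * (1.0 - f) / (1.0 + f)
Pp = Ptot * 2.0 * f / (1.0 + f)

# Total pressure balance: Ptot + Bx^2/(2 mu0) should be uniform in y,
# so its numerical gradient should vanish to round-off level
residual = np.gradient(Ptot + Bx**2 / (2.0 * mu0), y)

# Asymptotic field asymmetry recovered from the profile
B_asym = abs(Bx[0]) / Bx[-1]   # -> approximately (1 - b)/(1 + b) = 0.5
```

This verifies that the modified Harris sheet and Eq.\ \ref{ptot} are in exact force balance and that the pressure partition sums back to the total pressure.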
\n\n\\subsection{Boundary Conditions\\label{BCs}}\n\nWe simulate a half-domain that extends from $0\\leq x\\leq L_x$ and from $-L_y \\leq y \\leq L_y$, where $x$ is the outflow direction and $y$ is the inflow direction. We assume that the fields $A_z$, $n_i$, $n_n$, $p_i$, $p_n$, $V_{iy}$, $V_{ny}$, $V_{iz}$, $V_{nz}$, and $J_z$ are symmetric about $x=0$ [e.g., $A_z(x,y)=A_z(-x,y)$], and that the fields $V_x$ and $B_z$ are antisymmetric about $x=0$ [e.g., $V_x(x,y)=-V_x(-x,y)$]. We assume that all fields are periodic along the $x$ direction with a period of $2L_x$. We apply perfectly conducting, zero-flux boundary conditions along the boundaries at $y=\\pm L_y$. \n\nThe choice of initial and boundary conditions enforces that the reconnection outflow is symmetric about $x=0$. The development of the plasmoid instability under these conditions often leads to the formation of a magnetic island centered about $x=0$ that cannot be advected out of the system. In general, there will be some degree of asymmetry along the outflow direction that can modify the internal structure of the reconnection layer and the dynamics of reconnection \\citep{oka:2008, murphy:asym, murphy:retreat}. This can be achieved by simulating the whole domain of interest and allowing the driving term, the outflow boundaries, or the initial perturbations to be asymmetric \\citep[e.g.,][]{murphy:plasmoidasym}. 
\n\n\\subsection{Initializing Reconnection\\label{initialization}}\n\nTo initialize reconnection, we apply a source function to the evolution equation for magnetic flux of the form\n\\begin{equation}\n \\frac{\\dif A_z}{\\dif t} = \n \\epsilon \\lambda_\\psi \n \\Lambda\\left(x,y\\right)\n \\Omega\\left(x,y\\right)\n \\Theta\\left(t\\right)\n\\end{equation}\nwhere\n\\begin{eqnarray}\n\\Lambda\\left(x,y\\right) &=&\n \\exp\\left[\n - \\left( \\frac{x}{h_x} \\right)^2\n - \\left( \\frac{y}{h_y} \\right)^2\n \\right]\n \\left\\{ 1 - \\frac{1}{2} \\exp\n \\left[\n - 3 \\left( \\frac{y}{h_y} \\right)^2\n \\right] \n \\right\\},\n \\\\\n \\Omega\\left(x,y\\right) &=&\n \\left[ 1 - \\left( \\frac{x}{L_x} \\right)^{16} \\right]\n \\left[ 1 - \\left( \\frac{y}{L_y} \\right)^{16} \\right],\n \\\\ \n \\Theta\\left(t\\right) &=&\n \\frac{1}{2}\n \\left[\n 1 - \\cos\\left( \\frac{2\\pi t}{t_{D}}\\right)\n \\right].\n\\end{eqnarray}\nThe spatial length scales for this source function are given by $h_x = 4\\lambda_\\psi$ and $h_y = \\lambda_\\psi$. The function $\\Lambda(x,y)$ localizes the source function in a form akin to a tearing mode eigenfunction to create an X-point at the field reversal along $x=0$. The function $\\Omega(x,y)$ ensures that the pulse goes to zero along the outer boundaries. The pulse is applied between $0\\leq t \\leq t_{D}$ with a waveform given by $\\Theta\\left(t\\right)$. We use $\\epsilon=0.05$ and $t_{D}=5$. The application of this electric field ceases before the reconnection layer has had a chance to develop. 
In contrast, \\citetalias{leake:partial1} applied a small, localized perturbation to the magnetic flux while \\citetalias{leake:partial2} applied a small amplitude localized rotational flow perturbation to initialize reconnection.\n\n\\section{LOCAL AND GLOBAL DYNAMICS OF RECONNECTION\\label{global}}\n\n\\begin{deluxetable}{ccccccccccccc}\n\\tablecolumns{7}\n\\tablewidth{0in}\n\\tablecaption{Simulation initial parameters\\label{table_simpars}}\n\\tablehead{\n\\colhead{Case} &\n\\colhead{$B_{1,0}$} &\n\\colhead{$B_{2,0}$} & \n\\colhead{$T_{1,0}$} &\n\\colhead{$T_{2,0}$} &\n\\colhead{$n_{1,0}$} &\n\\colhead{$n_{2,0}$} &\n\\colhead{$f_{1,0}$} & \n\\colhead{$f_{2,0}$} &\n\\colhead{$\\beta_{1,0}$} &\n\\colhead{$\\beta_{2,0}$} & \n\\colhead{$\\lambda_{ni,1,0}$} &\n\\colhead{$\\lambda_{ni,2,0}$}\n}\n\\startdata\nA & $7.9$ & $7.9$ & $9000$ & $9000$ & $7.6$ & $7.6$ & $0.00041$ & $0.00041$ & $3.8$ & $3.8$ & $220$ & $220$ \\\\\nB & $7.9$ & $7.9$ & $8750$ & $9250$ & $7.8$ & $7.4$ & $0.00024$ & $0.00068$ & $3.8$ & $3.8$ & $366$ & $136$ \\\\\nC & $10$ & $5$ & $9000$ & $9000$ & $8.8$ & $6.4$ & $0.00041$ & $0.00041$ & $11.0$ & $2.0$ & $261$ & $190$ \\\\\nD & $10$ & $5$ & $8750$ & $9250$ & $6.6$ & $8.6$ & $0.00024$ & $0.00068$ & $2.0$ & $11.0$ & $432$ &$117$ \\\\\nE & $10$ & $5$ & $9250$ & $8750$ & $6.2$ & $9.1$ & $0.00068$ & $0.00024$ & $2.0$ & $11.0$& $163$ & $314$ \\\\\n\\enddata\n\\vspace{-0.8cm}\n\\tablecomments{The units are G for magnetic field, K for temperature, $10^{18}$ m$^{-3}$ for number density (including both neutrals and ions), and m for the neutral-ion mean free paths. The neutral-ion mean free path $\\lambda_{ni}$ is calculated using Eq.\\ \\ref{lambdanidef} and includes charge exchange reactions. 
When calculating the charge exchange cross section using Eqs.\ 5--6 of \citetalias{leake:partial2}, we use the fact that the relative velocity between ions and neutrals is much less than the neutral and ion thermal speeds for the initial conditions.\n}\n\end{deluxetable}\n\n\afterpage{\clearpage}\n\nWe present five simulations to investigate the impact of asymmetry during magnetic reconnection in partially ionized chromospheric plasmas. The simulation parameters are shown in Table \ref{table_simpars}. Case A is the symmetric test case. Case B has asymmetric upstream temperatures, and consequently asymmetric upstream densities and ionization fractions, but symmetric upstream magnetic field strengths. Case C has symmetric upstream temperatures but a factor of two difference in the upstream magnetic field strengths. Cases D and E have asymmetric temperatures and magnetic field strengths. We keep three parameters constant between runs: the sum of the initial magnetic pressure, neutral pressure, and plasma pressure, which equals $1.495$ ($1.19$ Pa); the mean of the initial asymptotic upstream magnetic energy densities, which is given by $\frac{1}{2}\left( B_{1,0}^2\/{2} + B_{2,0}^2\/{2}\right) = 0.3125$ ($0.249$ J\,m$^{-3}$); and the mean of the initial asymptotic upstream temperatures, which is $(T_{1,0}+T_{2,0})\/2 = 4.69\times 10^{-3}$ ($9000$ K). \n\nThe domain size for all simulations is $(L_x,L_y)=(2,1)$. The resolution in Case A is $m_x=256$ elements along the outflow direction and $m_y=128$ elements along the inflow direction. The resolution in Cases B--E is $m_x=m_y=256$. We use sixth order basis functions for all simulations, resulting in an effective total resolution of $(M_x,M_y)=6(m_x,m_y)$. Grid packing is used to concentrate mesh in the reconnection region. In Case A, the current sheet does not move from $y=0$ so the mesh packing along the inflow direction is concentrated to a thin region near $y=0$. 
In Cases B--D, the current sheet drifts slowly in the $-\\ensuremath{\\hat{\\mathbf{y}}}$ direction so the highest resolution is needed over a much longer distance to resolve the dynamics \\citep[see also][]{murphy:double, murphy:plasmoidasym}. High resolution along the outflow direction helps capture the front end of the reconnection jet as well as the dynamics of the plasmoid instability. The resolution along the outflow direction from $0\\leq x \\lesssim 1.2$ and $1.95\\lesssim x \\leq 2$ is approximately twice as high as from $1.2\\lesssim x \\lesssim 1.95$. \n\n\\subsection{Structure of Reconnection Region\\label{structure}}\n\nAll five simulations show broadly similar evolution. The electric field application from $t=0$ to $t=5$ described in Section \\ref{initialization} allows the inflow\/outflow pattern associated with two-dimensional reconnection to develop in the portion of the current sheet near the origin. The outflow jet lengthens as it plows into a region of enhanced ion density associated with current sheet thinning outside of the reconnection region. By $t\\sim 25$, laminar reconnection is well-established. The current sheet has thinned from its initial thickness of ${0.1}$ to a thickness of $\\delta\\sim 1.0\\times 10^{-3}$ ({$\\sim$}$10$ m). This is comparable to the value of the neutral-ion mean free path evaluated inside the current sheet: $\\lambda_{ni} \\gtrsim 6.0\\times 10^{-4}$ to $1.0\\times 10^{-3}$ ({$\\gtrsim$}$6$ to $10$ m). The ionization fraction inside each current sheet is of order $0.01$ at this time. The structure of the reconnection region for all simulations at $t=25$ is shown in Figures \\ref{jzvxplot}, \\ref{flowplot}, \\ref{bzniplot}, and \\ref{inflowslice}. At $t=25$, the current sheets have thinned to {$\\sim$}$\\lambda_{ni}$, laminar reconnection is well-established, and plasmoid formation has not yet begun. The plasmoid instability onsets during all of these simulations. 
The simulations end when structures develop on scales comparable to the resolution scale as a result of the plasmoid instability.\n\nFigure \\ref{jzvxplot} shows the out-of-plane current density and the ion outflow. In Case B, the current sheet is slightly arched so that the X-point is closer to the low temperature upstream region ($y>0$) than the ends of the current sheet. In the cases with magnetic asymmetry, the current sheet is arched so that the X-point is closer to the strong field upstream region than the ends of the outflow jets (see also Fig. \\ref{inflowslice}b). Case E, which has the strong field side coincident with the high temperature side, shows the fastest development of these three cases and the strongest current density.\n\n\\begin{figure}[tp]\n \\begin{center}\n \\vspace{-7mm}\n \\includegraphics[height=7.5in]{f2_midres.pdf}\n \\end{center}\n \\vspace{-7mm}\n \\caption{The out-of-plane current density $J_z$ (left) and the ion outflow (right) at $t=25$ for Cases A--E\\@. The right half of each plot shows color contours of the outflow component of ion velocity $V_{ix}$, ion velocity vectors, and contours of magnetic flux $A_z$. This image is scaled significantly along the $y$ direction which exaggerates the inflow $y$-component of velocity.\n \\label{jzvxplot}}\n\\end{figure}\n\n\\begin{figure}[tp]\n \\begin{center}\n \\vspace{-7mm}\n \\includegraphics[height=7.5in]{f3_midres.pdf}\n \\end{center}\n \\vspace{-7mm}\n \\caption{The inflow components of the ion velocity $V_{iy}$ (left) and the neutral velocity $V_{ny}$ (right) in the reconnection region at $t=25$ for Cases A--E\\@. The solid black contours represent the magnetic flux $A_z$. The dashed green contour indicates the locations where the inflow component of velocity equals zero. 
\n \\label{flowplot}}\n\\end{figure}\n\n\\begin{figure}[tp]\n \\begin{center}\n \\vspace{-7mm}\n \\includegraphics[height=7.5in]{f4_midres.pdf}\n \\end{center}\n \\vspace{-7mm}\n \\caption{The logarithm of the ion number density $\\log_{10}\\,{n_i}$ (left) and the out-of-plane magnetic field $B_z$ (right) at $t=25$ for Cases A--E\\@. The solid black contours represent the magnetic flux $A_z$.\\label{bzniplot}}\n\\end{figure}\n\n\\begin{figure}[tp]\n \\begin{center}\n \\includegraphics[width=3.3in]{f5.pdf}\n \\end{center}\n \\caption{\n The structure of the current sheet along $x=0$ at $t=25$ very near the current sheet relative to the X-point. Shown are (a) the reconnecting component of the magnetic field $B_x$, (b) the out-of-plane current density $J_z$, (c) the logarithm of ion density $\\log_{10}\\,n_i$, (d) the inflow component of ion velocity $V_{iy}$, (e) the inflow component of neutral velocity $V_{ny}$, and (f) the neutral pressure.\n \\label{inflowslice}} \n\\end{figure}\n\nAs in \\citetalias{leake:partial1} and \\citetalias{leake:partial2}, the ion and neutral outflows are tightly coupled. The right half of each panel in Figure \\ref{jzvxplot} only show pseudocolor maps of $V_{ix}$ because $V_{nx}$ is very similar in the outflow jet. The strong coupling of the ion and neutral outflow jets occurs because the mean free path for neutral-ion collisions is comparable to the current sheet thickness which is significantly shorter than the length of the current sheet. \n\nFigure \\ref{flowplot} shows the inflow components of the ion velocity on the left side of each panel and the neutral inflow speed on the right side (see also Figs.\\ \\ref{inflowslice}d and \\ref{inflowslice}e). As in \\citetalias{leake:partial1} and \\citetalias{leake:partial2}, the inflow velocities are decoupled. In Cases B--E, the decoupling is asymmetric in part because $\\lambda_{ni}$ is different in each upstream region. 
The neutrals and the ions at the X-point are moving in opposite directions. For simulations with magnetic asymmetry (Cases C, D, and E), there is neutral flow through the current sheet from the weak magnetic field side toward the strong magnetic field side. This corresponds to a neutral pressure gradient that is pushing neutrals from the weak field side into the strong field side (see Fig.\\ \\ref{inflowslice}f). \n\nThe ion density is strongly peaked within the current sheet for all cases, as shown on the left side of each panel in Fig.\\ \\ref{bzniplot} \\citepalias[see also][]{leake:partial1, leake:partial2}. The decoupling of ions and neutrals on scales below the neutral-ion mean free path allows the ions to be swept into the current sheet by the magnetic field. Neutrals are dragged along by collisions with ions, though less effectively on these short length scales. The current sheet is therefore out of ionization equilibrium: the ionization fraction is much higher than the equilibrium value. As a result, recombination becomes of comparable importance to the outflow in the ion continuity equation.\n\nNext we consider the consequences of changing the temperature asymmetry while maintaining the same magnetic asymmetry. As we go from D to C to E, the magnetic asymmetry remains constant ($\\ensuremath{\\mathcal{B}}=0.5$) but the temperature asymmetry goes from $\\ensuremath{\\mathcal{T}}=1.06$ to $1$ to $0.95$. Case D has higher temperature in the weak field upstream region; Case C has initially uniform temperature; and Case E has higher temperature in the strong field upstream region. \n\n\\begin{figure}[bt]\n \\begin{center}\n \\includegraphics{f6.pdf}\n \\end{center}\n \\caption{The logarithm of the neutral-ion mean free path $\\lambda_{ni}=V_{T,n}\/\\nu_{ni}^\\dagger$ along $x=0$ at $t=25$. 
The minimum values of $\\lambda_{ni}$ in this plot are $6.5\\times 10^{-4}$, $6.3\\times 10^{-4}$, $7.9\\times 10^{-4}$, $7.2\\times 10^{-4}$, and $8.3\\times 10^{-4}$ for cases A through E, respectively.\n \\label{lambda_ni}}\n\\end{figure}\n\nSwitching the temperature asymmetry while maintaining the same magnetic asymmetry has a significant impact on how quickly reconnection develops, the reconnection rate at a given time, and the onset time and mode structure of the plasmoid instability. Reconnection develops more quickly and occurs at a faster rate in Case E where the strong field upstream region has a higher temperature than the weak field upstream region. As we proceed from Case D to C to E, the relative velocity between the ions and neutrals increases in magnitude on the weak field side but does not change substantially on the strong field side (see, e.g., Fig.\\ \\ref{inflowslice}d while noting that the $V_{ny}$ profiles in Fig.\\ \\ref{inflowslice}e are nearly identical between these three runs). The neutral-ion mean free path $\\lambda_{ni}$ along $x=0$ at $t=25$ is shown in Fig.\\ \\ref{lambda_ni}. In this figure, the minimum value of $\\lambda_{ni}$ increases by 10\\% from Case D to C and another 5\\% from Case C to E\\@. In contrast, the values of $\\lambda_{ni}$ increase by {$\\sim$}$30$\\% to {$\\sim$}$60$\\% from D to C to E in both near-upstream regions. The reconnection rate at $t=25$ measured as $\\dif A_z\/\\dif t$ evaluated at the X-point along $x=0$ is $3.4\\times 10^{-4}$, $3.8\\times 10^{-4}$, and $4.5\\times 10^{-4}$ in Cases D, C, and E, respectively, which suggests that differences in ion-neutral coupling inside and around the current sheet are largely responsible for the observed differences between simulations. In particular, the reconnection rate in these simulations is greater when there is weaker coupling between neutrals and ions. 
The $V_{ny}$ profile in Fig.\ \ref{inflowslice}e remains nearly unchanged between simulations with the same magnetic asymmetry, which suggests that the neutral flow across the current sheet depends strongly on magnetic asymmetry but only weakly on temperature asymmetry.\n\n\subsection{Role of the Hall Effect\label{halleffect}}\n\nThe Hall effect is not expected to be significant in the solar chromosphere during magnetic reconnection unless structures develop on scales comparable to or below the ion inertial length $d_i$ \citep{malyshkin:2011}. In the case of strong coupling between ions and neutrals, it has been proposed that the ion inertial length may be enhanced because each ion will be dragging along a much larger mass \citep{pandey:2008a, malyshkin:2011}; consequently, the effective ion inertial length becomes $d_i' = d_i \sqrt{\rho\/\rho_i}$. We therefore include the Hall effect in these simulations. \n\nThe characteristic thickness of the current sheet is {$\sim$}$10^{-3}$ ($10$ m; see Fig.\ \ref{inflowslice}). The ion inertial length in the upstream regions is initially of order $d_i\sim 4\times 10^{-4}$ ($4$ m; using upstream parameters from Case A), while $d_i'\sim 0.02$ ($200$ m). In the current sheet, the pileup of ions leads to a local ion inertial length of $d_i\approx 8\times 10^{-5}$ ($0.8$ m; based on an ion density of {$\sim$}$3n_0$), which corresponds to $d_i' \sim 8\times 10^{-4}$ ($8$ m). These length scales can also be compared to $\lambda_{ni}$ which is of order {$\sim$}$0.01$ to $0.03$ in the regions just upstream of the current sheet and {$\sim$}$10^{-3}$ in the current sheet.\n\nThe right side of Figure \ref{bzniplot} shows the quadrupole structure of the out-of-plane magnetic field $B_z$ that forms as a result of the Hall effect during laminar reconnection. 
The quadrupole field strength remains about an order of magnitude smaller than the asymptotic values of $B_x$, but $|B_z|\/\sqrt{B_x^2+B_y^2}$ locally reaches {$\sim$}$0.5$ near the location where $|B_z|$ is largest in the weak field upstream region in Cases C--E\@. This occurs in part because the in-plane field is weaker in the near upstream regions than in the far upstream regions. Despite the locally high value of the ratio of $|B_z|$ to the in-plane field, these cases show neither the significant enhancement of the reconnection rate nor the development of the low aspect ratio (X-point) geometry expected during Hall-mediated reconnection in fully ionized plasmas \citep[e.g.,][]{biskamp:1997}.\n\nThe structure of the quadrupole field is modified by both temperature and magnetic asymmetries. When the temperature or magnetic asymmetries are changed, this also impacts the neutral density, ion density, and ionization fraction asymmetries and therefore $\lambda_{ni}$. In Case B, the quadrupole lobe on the low ion density side ($y<0$) has a greater spatial extent and higher magnitude than that on the high ion density side. In Cases C--E with magnetic asymmetry, the quadrupole lobe on the strong magnetic field side is localized near the current sheet while the quadrupole lobe on the weak magnetic field side has a greater spatial extent. This asymmetry of the quadrupole field is qualitatively similar to that in previous fully kinetic simulations \citep[see, e.g., Fig.\ 8c of][]{pritchett:2008:asym}. As we proceed from Case D to C to E at $t=25$, the quadrupole field on the strong field side increases in strength while the quadrupole field on the weak field side decreases in magnitude but increases in spatial extent. \n\nEven though the ion inertial scale is shorter than the current sheet thickness throughout these simulations, this might not be the case during the long-term evolution of the plasmoid instability. 
While simulations of fully ionized plasmas have shown that the plasmoid instability allows reconnection to occur at a rate that is roughly independent of Lundquist number, \\citet{Daughton:2009} and \\citet{shepherd:2010} have proposed that the most important role of the plasmoid instability might actually be to allow structure to develop on scales smaller than the ion inertial length which would then allow fast collisionless reconnection to take place. This proposed mechanism has not yet been tested in weakly ionized plasmas, which would require capturing highly nonlinear behavior at later times and will be explored in future work. \n\n\\subsection{Plasmoid Formation\\label{plasmoidformation}}\n\n\\begin{figure}[tp]\n \\vspace{-7mm}\n \\begin{center}\n \\includegraphics[height=7.5in]{f7_midres.pdf}\n \\end{center}\n \\vspace{-7mm}\n \\caption{The ion density $n_i$ (left) and current density $J_z$ (right) for Cases A--E near the end of each simulation after plasmoids form.\n \\label{plasmoidplot}}\n\\end{figure}\n\nEach simulation shows the formation of plasmoids after the current sheets have thinned sufficiently. Figure \\ref{plasmoidplot} shows the ion density, out-of-plane current density, and magnetic flux during the early nonlinear evolution of the plasmoid instability for each case. In contrast to Figures \\ref{jzvxplot}--\\ref{lambda_ni}, the times for each panel were chosen to be near the end of each simulation but before the plasma substructures associated with the cascading nonlinear evolution of the plasmoids reached the grid scale. The appearance of additional in-plane null points occurs at $t=27.9$ for Case A, $t=29.8$ for Case B, $t=29.6$ for Case C, $t=31.1$ for Case D, and $t=25.9$ for Case E\\@. In all cases, the secondary islands develop in the central portion of the current sheet: within $x\\lesssim 0.16$ compared to a current sheet length of {$\\sim$}$0.8$. 
The out-of-plane current density has a local extremum near each X-point.\n\nWe first consider the effect of introducing a temperature asymmetry by comparing Cases A and B\@. The plasmoid instability takes longer to develop in Case B, which has temperature asymmetry, but the mode structure does not change substantially. Both cases show a central O-point with two X-points on each side, and an additional X-point\/O-point pair further out. The symmetry about $x=0$ allows a pitchfork bifurcation to change the central X-point into an O-point. The resulting island then grows due to reconnection and cannot be advected out of the reconnection region by the outflow. Prior simulations of the plasmoid instability in resistive MHD have avoided this situation by simulating the entire domain and including a slight asymmetry along the outflow direction so that any central islands that form do not remain in the current sheet indefinitely.\n\nThe structure of the current sheet and the resultant plasmoid instability are more directly modified by magnetic asymmetry, including when it is coupled with temperature asymmetry. While Case D also undergoes a pitchfork bifurcation to change the central X-point into an O-point, Cases C and E maintain a central X-point and develop multiple alternating X-points and magnetic islands. As in prior simulations using the resistive MHD approximation \citep{murphy:plasmoidasym}, the resulting islands preferentially grow into the weak field upstream region. \n\nWe now compare our simulations with the complementary simulations of the plasmoid instability during partially ionized chromospheric reconnection by \citet{ni:2015}. Ni et al.\ capture the long term nonlinear evolution of the plasmoid instability using a single fluid approach that incorporates ambipolar diffusion and assumes ionization equilibrium. The parameter regimes for the two sets of simulations differ. 
For example, Ni et al.\\ assume an initial upstream $\\beta$ of $0.1$ which allows heating up to {$\\sim$}$8\\times 10^4$ K, while our simulations have high $\\beta$ and thus do not result in significant heating because less magnetic free energy is available. Ni et al.\\ present current sheets with lengths of order {$\\sim$}$1$ Mm which is a significant fraction of the size of the chromosphere, in contrast to lengths of {$\\sim$}$10$ km in our simulations. The thicknesses of the Ni et al.\\ current sheets are generally much larger than $\\lambda_{ni}$ which suggests that the neutrals and ions are strongly coupled in their regime. While recombination plays an important role in the simulations presented by \\citetalias{leake:partial1}, \\citetalias{leake:partial2}, and this work, the high temperatures found by Ni et al.\\ make it less likely that net recombination will play an important role in the plasma continuity equation. Physical parameters such as magnetic field strength, temperature, density, and ionization fraction vary significantly within the chromosphere even at a single height, so it is likely that there are regions where each parameter regime is valid. These differences highlight the need for detailed parameter studies on how partial ionization impacts the reconnection process including the plasmoid instability at various locations in the chromosphere.\n\n\\subsection{Reconnection Rate\\label{reconrate}}\n\nThe reconnection rate for all five simulations is shown in Fig.\\ \\ref{ratefig}. This rate was measured as the maximum of the time derivatives of $A_z$ at all X-points within the reconnection region. A common alternative method for measuring the reconnection rate is to measure the inflow velocity divided by the outflow velocity; however, during asymmetric reconnection there is ambiguity in how to define the inflow velocity. The reconnection rate before $t=5$ is not shown because an electric field was applied to initialize reconnection at early times. 
When interpreting this figure, it is important to recall that the initial total pressure and average upstream magnetic energy density are constant between runs. With this convention, magnetic and temperature asymmetry reduce the reconnection rate for all cases studied.\n\n\\begin{figure}[tb]\n \\begin{center}\n \\includegraphics[width=8.5cm]{f8.pdf}\n \\end{center}\n \\caption{\n The reconnection rate as a function of time as measured by the maximum change in magnetic flux among all X-points in the current sheet, $\\dif A_z\/\\dif t$. This rate is presented in dimensionless units according to Section \\ref{normalizations}; consequently, this rate has not been renormalized to values immediately upstream of the reconnection layer.\n \\label{ratefig}}\n\\end{figure}\n\nFrom early times until around when plasmoids form, the reconnection rate is highest at the X-point along $x=0$. The reconnection rate increases as the current sheet thins. Around the time that plasmoids form, the reconnection rate at this X-point decreases. For some cases, this X-point bifurcates into an O-point. The peak reconnection rate then occurs at one of the nearby X-points that is not located along $x=0$. Cases A, B, and D show a considerable increase in the reconnection rate after the formation of plasmoids. It is likely that Cases C and E would also show a similar increase in the reconnection rate; however, the simulations ended before this occurred. 
At even later times, structure on small enough scales could develop so that Hall reconnection could become important \\citep[see also][]{Daughton:2009, shepherd:2010}.\n\n\\subsection{X-Point Motion and Flows Across the X-Point\\label{nullflows}}\n\n\\begin{figure*}[t!]\n \\begin{center}\n \\includegraphics[width=6.5in]{f9.pdf}\n \\end{center}\n \\caption{A comparison between the X-point velocity $\\frac{\\dif y_n}{\\dif t}$, the ion flow at the X-point along the inflow direction $V_{iy}(y_n)$, and the neutral flow at the X-point $V_{ny}(y_n)$ for Cases B--E\\@. We consider only the X-point along $x=0$ and times after the early electric field application and before the onset of the plasmoid instability. Case A is not shown because all of these velocities equal zero due to symmetry.\n \\label{nullflowfig}} \n\\end{figure*}\n\nWe consider three velocities that are relevant to the small-scale physics near the X-point located along the symmetry axis at $\\mathbf{x}_n = (0,y_n)$. The first velocity is the rate of motion of the X-point along the inflow direction: $\\frac{\\dif y_n}{\\dif t}$. This velocity is not a fluid velocity, but rather the velocity of a topologically stable magnetic feature \\citep{murphy:retreat}. The second is the ion flow at the X-point along the inflow direction: $V_{iy}(y_n)$. The third is the neutral flow at the X-point along the inflow direction: $V_{ny}(y_n)$. Differences between $\\frac{\\dif y_n}{\\dif t}$ and $V_{iy}(y_n)$ must be due to non-ideal behavior such as resistivity or the Hall effect. Differences between $V_{iy}(y_n)$ and $V_{ny}(y_n)$ correspond to momentum transfer between ions and neutrals (e.g., Eq.\\ \\ref{rinidef}).\n\nFigure \\ref{nullflowfig} shows $\\frac{\\dif y_n}{\\dif t}$, $V_{iy}(y_n)$, and $V_{ny}(y_n)$ as a function of time for all of the cases with asymmetric inflow. Case A is not shown because these velocities equal zero due to symmetry. 
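The comparison among these three velocities can be made concrete with a small post-processing helper. This Python sketch is hypothetical (the variable names and the finite-difference choice are ours): it differences the X-point track and subtracts the sampled fluid velocities, so the residuals correspond to non-ideal slippage and to ion-neutral decoupling, respectively.

```python
import numpy as np

def null_point_diagnostics(t, y_n, v_iy_at_null, v_ny_at_null):
    """Given the X-point track y_n(t) and the ion/neutral inflow velocities
    sampled at the X-point, return (dy_n/dt, non-ideal slippage,
    ion-neutral decoupling)."""
    dyn_dt = np.gradient(y_n, t)               # X-point velocity
    nonideal = dyn_dt - v_iy_at_null           # resistive/Hall contribution
    decoupling = v_iy_at_null - v_ny_at_null   # ion-neutral drift at the null
    return dyn_dt, nonideal, decoupling
```

For a symmetric case such as Case A, all three inputs vanish by symmetry, so every diagnostic returns zero.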
The bump in velocity around $t=16$ is a consequence of the initial conditions being slightly out of equilibrium; the resulting pulse is mostly but not entirely damped by a viscous layer at the boundary and the reflected pulse returns to the current sheet region at this time. The magnitude of this pulse is weak compared to the ion inflow speeds, but is the same order of magnitude as the velocities associated with each null point. For the rest of this section, we consider $t\\gtrsim 20$ for which reconnection is well established. \n\nIn general, there is a small but nonzero difference between $\\frac{\\dif y_n}{\\dif t}$ and $V_{iy}(y_n)$. This similarity indicates that the magnetic field is predominantly carried by the ion flow. The residual difference in flow results from resistive diffusion of the magnetic field and the Hall effect. The difference between either of these velocities and the neutral flow at the X-point $V_{ny}(y_n)$ is greater, which is consistent with decoupling between the ion and neutral inflows. The $V_{ny}$ profile along the inflow direction remains roughly constant between Cases A and B which both have symmetric magnetic fields but different temperature profiles, and also between Cases C--E which have the same asymmetric magnetic field configuration but differing initial temperature profiles. Case B has symmetric upstream magnetic fields but the $y<0$ upstream region is warmer and less dense than the $y>0$ upstream region. That $\\frac{\\dif y_n}{\\dif t} \\approx V_{iy}(y_n) < 0$ indicates that the X-point is being carried primarily by the ions in the direction of the warmer upstream region. \n\nThe simulations with magnetic asymmetry (Cases C, D, and E) show neutral flow from the weak field upstream region into the strong field upstream region. This flow results from a neutral pressure gradient that is pushing the neutrals from the weak field side into the strong field side where the neutral pressure is lower. 
This flow is able to occur because of imperfect coupling between species on scales below or comparable to the neutral-ion mean free path. Neutrals swept along with the outflow will have preferentially originated from the weak field upstream region. This flow may have important observational consequences because the abundances of neutrals swept along with the outflow are likely to better match the weak field upstream region. In contrast, there is net ion flow inside the current sheet from the strong field upstream region into the weak field upstream region. While the neutrals are pushed by a neutral pressure gradient, the dominant force on the ions is a magnetic pressure gradient. \n\n\\section{DISCUSSION\\label{discussion}}\n\nIn this paper we perform simulations of asymmetric magnetic reconnection in partially ionized chromospheric plasmas using the plasma-neutral module of the HiFi framework. We include cases with symmetric or asymmetric upstream temperatures and magnetic field strengths. The plasma and neutrals are modeled as separate fluids with momentum and energy transfer between species due to collisions and charge exchange. These simulations self-consistently include ionization and recombination without assuming ionization equilibrium. \n\nSeveral of the properties of asymmetric partially ionized reconnection are qualitatively similar to the symmetric cases \\citepalias[Case A;][]{leake:partial1, leake:partial2}. The current sheet rapidly thins so that its thickness becomes comparable to the neutral-ion mean free path evaluated inside the current sheet. The ion and neutral outflows are strongly coupled, but the ion and neutral inflows are decoupled. The reconnecting magnetic field drags ions into the current sheet which leads to a significant enhancement of ion density so that the current sheet is out of ionization equilibrium. Recombination becomes of comparable importance to the outflow in the equation for ion conservation of mass. 
In this paper, we show that these properties are modified by the temperature and magnetic field asymmetries.\n\nDuring our simulations of asymmetric reconnection, the ion and neutral inflows remain decoupled, but the decoupling is asymmetric. When there is magnetic asymmetry, there is net neutral flow through the current sheet from the weak magnetic field upstream region into the strong field upstream region. This neutral flow results from a large scale neutral pressure gradient and imperfect coupling between ions and neutrals along the inflow direction. Similarly, the greater Lorentz force acting on the ions from the strong field upstream region leads to the ions pulling the X-point into the weak field upstream region. These effects are not present during symmetric simulations. An observational consequence of these neutral flows through the current sheet is that most of the neutrals swept along with the outflow are more likely to have originated from the weak magnetic field upstream region. This may be especially important if there are different elemental abundances in each upstream region (e.g., different FIP enhancements). \n\nPrior simulations of asymmetric reconnection in fully ionized plasmas have shown non-ideal flows through null points \\citep[e.g.,][]{oka:2008, murphy:retreat, murphy:double}. These flows result from non-ideal effects such as resistivity and the Hall effect. In resistive MHD, for example, the motion of null points results from a combination of resistive diffusion of the magnetic field and advection by the bulk plasma flow \\citep{murphy:retreat}. During asymmetric reconnection in partially ionized plasmas, an additional contributor to non-ideal plasma flows across null points is the electric field contribution from electron-neutral drag (see the fourth term on the right hand side of Eq.\\ \\ref{genohms}). 
Neutral flow through the current sheet is a new effect that is not present in fully ionized situations.\n\nWe include the Hall effect in our simulations to test whether or not a transition to Hall reconnection can develop in this regime. We find that the out-of-plane quadrupole magnetic field associated with antiparallel Hall reconnection does develop. The out-of-plane magnetic field strength is a significant fraction of the in-plane field strengths just upstream of the current sheet. However, the characteristic low aspect ratio geometry and high reconnection rate expected during Hall reconnection do not develop over the course of these simulations, including during the early nonlinear evolution of the plasmoid instability. The transition to Hall reconnection in a partially ionized plasma has been investigated analytically by \citet{malyshkin:2011}, and will continue to be investigated by ongoing simulation efforts to determine the conditions under which such a transition is likely to occur.\n\nAfter an initial phase, each of our simulations shows the development of the plasmoid instability. These simulations capture the early nonlinear evolution until structures develop on scales comparable to the resolution scale. In cases with magnetic asymmetry, the plasmoids develop preferentially into the weak field upstream region, which is similar to fully ionized simulations. Recombination due to the enhanced ion density remains an important loss term in the plasma continuity equation. We anticipate that secondary merging of plasmoids will be further modified by these asymmetries. Our simulations complement the work of \citet{ni:2015}, who use a single fluid framework to investigate the long term nonlinear evolution of the plasmoid instability in a different parameter regime.\n\nWe investigate the coupling between magnetic and temperature asymmetries by comparing cases with the same magnetic asymmetry but different temperature asymmetries. 
We find that reconnection develops more quickly, occurs at a faster rate, and becomes unstable to plasmoid formation as we increase the temperature in the strong field upstream region while decreasing the temperature in the weak field upstream region. This change corresponds to $\\lambda_{ni}$ increasing in both upstream regions. The neutral flow through the current sheet does not noticeably change and therefore depends more strongly on the magnetic asymmetry than the temperature asymmetry.\n\nThere are many important remaining directions for modeling work to understand how reconnection occurs in the solar chromosphere and other weakly ionized plasmas. The simulations of \\citetalias{leake:partial1}, \\citetalias{leake:partial2}, and this paper have all simulated two-dimensional, antiparallel reconnection. The simulations presented in these works have modeled the early nonlinear evolution of the plasmoid instability in a weakly ionized plasma, and to model the later evolution will likely require a combination of increased resolution (including along the outflow direction to capture plasmoid merging) and increased diffusion. An alternative strategy would be to evolve the logarithm of density instead of the density itself \\citep[e.g.,][]{elee:2014} which precludes the possibility of the number density becoming negative. \\citet{ni:2015} adopt adaptive mesh refinement to ensure adequate resolution. The presence of a guide field can suppress the thinning of current sheets due to ambipolar diffusion \\citep{brandenburg:1994, brandenburg:1995}, and thus should be considered in future efforts \\citep[see also][]{ni:2015}. Physical conditions in the chromosphere vary significantly even at a single height, so a detailed examination of parameter space will be necessary to characterize the different regimes of partially ionized reconnection in the chromosphere. 
The parameter studies may also include regimes relevant to partially ionized plasmas in the laboratory and in astrophysics. Future studies should investigate the impact of a realistic geometry and the interplay between small-scale physics and global dynamics (e.g., how small and large scales feed back on each other). Finally, much remains to be understood about the role of three-dimensional effects during weakly ionized reconnection and the behavior of current sheet thinning in cases with and without null points. \n\nIn addition to the modeling efforts, much work remains to compare the simulation results to observations of chromospheric reconnection and to validate the results against laboratory experiments on partially ionized reconnection. A challenge in comparing simulations against observations is the disparity in length scales. The current sheets in these simulations have characteristic thicknesses of $\\lesssim 10^2$~m and lengths of $\\lesssim 10$~km, while the spatial resolution of \\emph{IRIS} observations is of order $200$~km, so direct diagnostics of the reconnection layer itself remain extremely challenging. However, the simulations could be used to predict velocities, spectra (including the charge state distributions of minor ions), densities, and morphologies that could then be compared against observation. The experiments that have recently been performed at MRX provide an opportunity to directly validate HiFi's plasma-neutral module against laboratory measurements where key quantities can be observed using \\emph{in situ} probes. Using a code that has been validated against experimental data will improve confidence that it is able to accurately capture the relevant physical processes in the lower solar atmosphere and in astrophysical plasmas. 
\n\n\acknowledgments\n\nThe authors thank H.\ Ji, J.\ Leake, J.\ Lin, L.\ Ni, N.\ Nishizuka, J.\ Raymond, K.\ Reeves, C.\ Shen, H.\ Tian, and E.\ Zweibel for useful discussions and an anonymous referee for useful comments that helped to improve this paper. N.A.M.\ acknowledges support from NASA grants NNX12AB25G, NNX15AF43G, \& NNX11AB61G; NSF SHINE grants AGS-1156076 and AGS-1358342; contract 8100002705 from LMSAL; and NASA contract NNM07AB07C to the Smithsonian Astrophysical Observatory (SAO). V.S.L.\ acknowledges support from the NASA LWS \& Solar and Heliospheric Physics programs, as well as the National Science Foundation. Resources supporting this work were provided by the NASA High-End Computing Program through the NASA Advanced Supercomputing Division at Ames Research Center. We thank J.\ Sattelberger formerly from SAO and J.\ Chang from NASA for technical support. This work has benefited from the use of NASA's Astrophysics Data System.\n\n\n\bibliographystyle{apj}\n\n\section{Introduction}\n\IEEEPARstart{M}{ulti-carrier} technology has shown great potential in improving the transmission efficiency and reliability in fading channels \cite{Cimini1985}. Multi-carrier based multi-access schemes, such as orthogonal frequency division multi-access (OFDMA) and single-carrier frequency division multi-access (SC-FDMA)\footnote{SC-FDMA can be seen as a coded OFDMA scheme, which is discussed in Section \ref{subsec:coding_schemes}.}, have been widely adopted in the IEEE 802.16 and 3GPP LTE-A standards \cite{IEEE802.16-2009,3GPPLTE}. 
Both academia and industry believe that multi-carrier multi-access technology will play an increasingly important role in future wireless communication systems \cite{Yang2009}.\n\nA key advantage of the multi-carrier multi-access channel is the great flexibility and adaptability it provides in sharing the channel resources among multiple users, because the channel resources are composed of many small \emph{resource blocks} (RBs) in the time-frequency domain \cite{Bahai04}. In this context, an RB corresponds to a subcarrier in the duration of a timeslot. Previous works on channel resources allocation can be roughly divided into two categories: RB based schemes and chunk based schemes. The RB based scheme implies that the base station (BS) directly allocates RBs, which are normally not adjacent in the time-frequency domain, to every user. The interleaved subcarrier allocation in IEEE 802.16 and 3GPP LTE-A is a typical RB based allocation scheme, where uniformly-spaced subcarriers are allocated to each user \cite{IEEE802.16-2009,3GPPLTE}. In \cite{Wong1999}, Wong, Cheng, Letaief, and Murch proposed an OFDMA system and studied the adaptive subcarrier, bit, and power allocation problem. In \cite{Hoo2004}, Hoo, Halder, Tellado, and Cioffi considered allocating subcarriers and power in multi-carrier broadcast channels to maximize the weighted sum rate. In \cite{Song2005}, Song and Li proposed a cross-layer optimization framework for subcarrier and power allocation in OFDM wireless networks. In contrast to RB based schemes, the chunk based scheme means that multiple RBs are bundled together as a chunk. The BS then allocates these chunks to every user. In IEEE 802.16 and 3GPP LTE-A, a typical chunk based allocation scheme is referred to as the localized allocation, where adjacent subcarriers in the frequency domain are allocated to each user \cite{IEEE802.16-2009,3GPPLTE}. 
In \cite{Shen2005}, Shen, Li, and Liu studied the performance of a chunk based scheme, where the chunks are allocated to users according to their average SNR within each chunk. Zhu and Wang developed this idea and proposed a joint chunk, power, and bit allocation scheme for OFDMA systems in \cite{Zhu2012}. The basic principle behind these schemes is to maximize the system capacity or some utility function by dynamically allocating RBs\/chunks and power under the constraints of total power, fairness, rate, and latency \cite{Stuber05}. The mathematical tools used are mainly based on convex optimization and integer programming \cite{Liu2005,Hanzo06}.\n\nAlthough many iterative or heuristic algorithms are derived from the aforementioned works to improve the transmission efficiency or enhance the reliability, it is not trivial to conduct a thorough analysis of the fundamental tradeoff between them. Since an analytic framework with a comprehensive performance metric is lacking, the relationship among the efficiency-reliability tradeoffs of different users is also still unknown. Moreover, these proposed allocation schemes usually require perfect channel state information (CSI) at the BS. Based on these considerations, an analytic coded $f$-matching framework is proposed for the channel resources allocation problem in multi-carrier multi-access channels. The outage exponent region (OER), which is the region bounded by the outage exponents of all the users at given target rates and SNR, is first defined as a fundamental performance metric. In this context, the outage exponent, proposed in our previous work \cite{Bai2011c}, characterizes the relationship among the outage probability, the number of channels, and SNR at the same time. As a matter of fact, the region bounded by the diversity-multiplexing tradeoffs (DMTs) of every user, referred to as the diversity-multiplexing region (DMR), is actually the asymptotic OER when SNR tends to infinity. 
Similar to the capacity region, the proposed OER and DMR comprehensively illustrate the relationship among the achieved outage performances of all the users for a given resources allocation scheme in slow fading multi-access channels. To achieve the optimal OER and DMR, the random bipartite graph (RBG) \cite{Bollobas01} approach, which only depends on $\unit[1]{bit}$ CSI per coherence bandwidth per user, is then applied to formulate the channel resources allocation problem in multi-carrier multi-access channels. The frequency-domain coding based maximum $f$-matching method, which can be applied to both RB and chunk based allocation schemes, is then proposed to allocate the channel resources to all of the users according to their different target rates.\n\nIn contrast to previous works, the outage probability, OER, and DMR of the coded $f$-matching framework can be analyzed in closed-form formulas. By applying the saddle-point approximation method, which has a good balance between approximation accuracy and complexity \cite{Butler2007}, a tight upper bound is first derived for the outage probability of the optimal coding scheme with $\unit[1]{bit}$ CSI feedback per coherence bandwidth. The combinatorial structure of the RBG based maximum $f$-matching is then studied using results from matching theory \cite{Lovasz1986}. Based on these results, the approximation formulas for the outage probability of each user are obtained for RB and chunk based coded $f$-matching schemes in the high and low SNR regimes, respectively. The best achievable OER and DMR for RB and chunk based coded $f$-matching schemes are then obtained accordingly. Although there are serious conflicts in channel resources allocation, the proposed coded $f$-matching framework is still capable of achieving the optimal OER and DMR with multi-user diversity. 
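To make the allocation step concrete, the sketch below implements a maximum $f$-matching by vertex expansion: each user is split into as many copies as its demand $f(u_m)$, and a standard serial augmenting-path matching is run on the expanded bipartite graph. This is a minimal stand-in for the parallel Hopcroft-Karp variant developed in this paper; the data layout, with `adj[m]` listing the subchannels whose 1-bit CSI is favorable for user $u_m$, is our own illustrative assumption.

```python
from collections import defaultdict

def f_matching(adj, f, n_subchannels):
    """Maximum f-matching by vertex expansion: user m is split into f[m]
    copies, then augmenting paths are searched from every copy.
    adj[m] lists the subchannels available to user m."""
    copies = [(m, k) for m, fm in enumerate(f) for k in range(fm)]
    match_sub = [-1] * n_subchannels          # subchannel -> copy index

    def augment(c, seen):
        m, _ = copies[c]
        for s in adj[m]:
            if s not in seen:
                seen.add(s)
                if match_sub[s] == -1 or augment(match_sub[s], seen):
                    match_sub[s] = c
                    return True
        return False

    for c in range(len(copies)):
        augment(c, set())

    alloc = defaultdict(list)                 # user -> allocated subchannels
    for s, c in enumerate(match_sub):
        if c != -1:
            alloc[copies[c][0]].append(s)
    return dict(alloc)
```

With `adj = [[0, 1, 2], [1, 3]]` and `f = [2, 2]`, the augmenting paths resolve the conflict on subchannel 1 so that both users receive two subchannels each.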
As a result, all of the users share the total multiplexing gain according to their target rates, while achieving the full frequency diversity simultaneously.\n\nLow-complexity channel resources allocation algorithms and coding schemes are also very important for practical multi-carrier multi-access systems. By applying the principle of parallel computations \cite{Karpinski1998}, we propose the parallel vertices expansion \& random rotation based Hopcroft-Karp (PVER\textsuperscript{2}HK) algorithm. It can be shown that the time complexity is only $\mathcal{O}\left(\log^{2\eta}N\right)$, where $N$ is the number of resource blocks and $\eta$ is a constant. The proposed PVER\textsuperscript{2}HK algorithm is thus even faster than the FFT, whose complexity is $\mathcal{O}\left(N\log N\right)$. To achieve the optimal DMR, two practical DMT optimal coding approaches are discussed: rotated lattice codes \cite{Oggier2004} and permutation codes \cite{Tavildar2006}. In the simulation examples, the proposed closed-form formulas for outage probabilities are nearly identical to the simulation curves in both the high and low SNR regimes. The results also show that for zero multiplexing gain (i.e., with a fixed target rate), the RB based coded $f$-matching scheme achieves a $\unit[2]{dB}$ SNR gain compared to the interleaved or TDMA based allocation in IEEE 802.16 and 3GPP LTE-A. The SNR gain becomes even greater as the multiplexing gain increases. The chunk based coded $f$-matching scheme achieves a much greater diversity gain than the localized allocation scheme. Therefore, the proposed framework can not only be applied to analyze the performance of existing multi-carrier multi-access systems but also provide powerful tools for designing practical channel resources allocation schemes for future multi-carrier multi-access systems.
The RBG based maximum $f$-matching framework and the requirements on coding schemes are presented in Section \\ref{sec:coded_f_matching}. Section \\ref{sec:oer_dmr_analysis} presents the closed-form formulas for outage probabilities, OER, and DMR of the proposed framework. The practical issues, such as the optimal parallel channel resources allocation algorithm and asymptotic optimal coding schemes, are studied in Section \\ref{sec:practical_issues}. Section \\ref{sec:simulation_results} presents the simulation results to verify our theoretical results. Finally, Section \\ref{sec:conclusions} concludes this paper.\n\n\n\\section{System Model and Problem Formulation}\\label{sec:system_model}\n\n\\subsection{Multi-Carrier Multi-Access Channel Model}\\label{subsec:channel_model}\n\n\\begin{figure*}[t]\n\\centering\n\\begin{tikzpicture}[>=stealth]\n\t\\draw[thick] (0,2.8) rectangle (1.6,3.6);\n\t\\node at (0.8,3.2) {BS};\n\t\\draw[thick] (1.6,3.2) -- (2,3.2) -- (2,3.6);\n\t\\draw[thick] (2,3.6) -- (2.3,3.9) -- (1.7,3.9) -- (2,3.6);\n\t\n\t\\draw[thick] (9,0) rectangle (7.4,0.8);\n\t\\node at (8.2,0.4) {User $u_{N}$};\n\t\\draw[thick] (7.4,0.4) -- (7,0.4) -- (7,0.8);\n\t\\draw[thick] (7,0.8) -- (7.3,1.1) -- (6.7,1.1) -- (7,0.8);\n\t\n\t\\fill\n\t(8.2,1.5) circle (1.5pt)\n\t(8.2,1.8) circle (1.5pt)\n\t(8.2,2.1) circle (1.5pt);\n\t\n\t\\draw[thick] (9,2.8) rectangle (7.4,3.6);\n\t\\node at (8.2,3.2) {User $u_{2}$};\n\t\\draw[thick] (7.4,3.2) -- (7,3.2) -- (7,3.6);\n\t\\draw[thick] (7,3.6) -- (7.3,3.9) -- (6.7,3.9) -- (7,3.6);\n\t\n\t\\draw[thick] (9,5.1) rectangle (7.4,5.9);\n\t\\node at (8.2,5.5) {User $u_{1}$};\n\t\\draw[thick] (7.4,5.5) -- (7,5.5) -- (7,5.9);\n\t\\draw[thick] (7,5.9) -- (7.3,6.2) -- (6.7,6.2) -- (7,5.9);\n\t\n\t\\draw[<->,very thick] (2.4,3.8) -- (6.6,1.1);\n\t\\draw[<->,very thick] (2.4,3.9) -- (6.6,3.9);\n\t\\draw[<->,very thick] (2.4,4) -- (6.6,6.2);\n\t\n\n\t\\filldraw[fill=black!70] (-7,3.9) rectangle (-5.75,4.15);\n\t\\filldraw[fill=black!70] 
(-7,5.15) rectangle (-5.75,5.4);\n\t\\filldraw[fill=black!70] (-5.75,4.9) rectangle (-4.5,5.15);\n\t\\filldraw[fill=black!70] (-5.75,5.65) rectangle (-4.5,5.9);\n\t\\filldraw[fill=black!70] (-4.5,4.15) rectangle (-3.25,4.65);\n\t\\filldraw[fill=black!70] (-4.5,5.4) rectangle (-3.25,5.65);\n\t\\filldraw[fill=black!70] (-3.25,5.4) rectangle (-2,5.65);\n\n\t\\filldraw[fill=black!40] (-7,4.15) rectangle (-5.75,5.15);\n\t\\filldraw[fill=black!40] (-5.75,5.4) rectangle (-4.5,5.65);\n\t\\filldraw[fill=black!40] (-4.5,3.9) rectangle (-3.25,4.15);\n\t\\filldraw[fill=black!40] (-3.25,5.65) rectangle (-2,5.9);\n\n\t\\filldraw[fill=black!10] (-7,5.4) rectangle (-5.75,5.65);\n\t\\filldraw[fill=black!10] (-5.75,5.15) rectangle (-4.5,5.4);\n\t\\filldraw[fill=black!10] (-5.75,3.9) rectangle (-4.5,4.15);\n\t\\filldraw[fill=black!10] (-5.75,4.4) rectangle (-4.5,4.65);\n\t\\filldraw[fill=black!10] (-4.5,4.9) rectangle (-3.25,5.15);\n\t\\filldraw[fill=black!10] (-3.25,4.4) rectangle (-2,4.9);\n\t\\filldraw[fill=black!10] (-3.25,5.15) rectangle (-2,5.4);\n\t\n\t\\foreach \\x in {-7,-5.75,-4.5,-3.25}\n\t\t\\foreach \\y in {3.9,4.15,4.4,4.65,4.9,5.15,5.4,5.65}\n\t\t{\n\t\t\t\\draw (\\x,\\y) +(0,0) rectangle ++(1.25,.25);\n\t\t}\n\t\n\t\\draw[->,very thick] (-7,3.9) -- (-1.5,3.9) node[right] {Time};\n\t\\draw[->,very thick] (-7,3.9) -- (-7,6.3) node[above] {Frequency};\n\t\\node at (-4.5,3.6) {RB based Allocation Scheme};\n\t\\node at (-4,6.3) {$1$ Subchannel $=1$ RB};\n\n\t\\node at (-7.3,5.65) {$\\mathcal{S}_{4}^{\\mathrm{c}}\\,\\big\\{$};\n\t\\node at (-7.3,5.15) {$\\mathcal{S}_{3}^{\\mathrm{c}}\\,\\big\\{$};\n\t\\node at (-7.3,4.65) {$\\mathcal{S}_{2}^{\\mathrm{c}}\\,\\big\\{$};\n\t\\node at (-7.3,4.15) {$\\mathcal{S}_{1}^{\\mathrm{c}}\\,\\big\\{$};\n\n\t\t\t\t\n\t\\filldraw[fill=black!70] (-7,0.3) rectangle (-2,0.8);\n\t\\filldraw[fill=black!40] (-7,0.8) rectangle (-2,1.3);\n\t\\filldraw[fill=black!10] (-7,1.3) rectangle (-2,1.8);\n\t\t\t\n\t\\foreach \\x in 
{-7,-5.75,-4.5,-3.25}\n\t\t\\foreach \\y in {0.3,0.55,0.8,1.05,1.3,1.55,1.8,2.05}\n\t\t{\n\t\t\t\\draw (\\x,\\y) +(0,0) rectangle ++(1.25,.25);\n\t\t}\n\n\t\\draw[->,very thick] (-7,0.3) -- (-1.5,0.3) node[right] {Time};\n\t\\draw[->,very thick] (-7,0.3) -- (-7,2.7) node[above] {Frequency};\n\t\\node at (-4.5,0) {Chunk based Allocation Scheme};\n\t\\node at (-4,2.7) {$1$ Subchannel $=8$ RBs};\n\t\n\t\\node at (-7.3,2.05) {$\\mathcal{S}_{4}^{\\mathrm{c}}\\,\\big\\{$};\n\t\\node at (-7.3,1.55) {$\\mathcal{S}_{3}^{\\mathrm{c}}\\,\\big\\{$};\n\t\\node at (-7.3,1.05) {$\\mathcal{S}_{2}^{\\mathrm{c}}\\,\\big\\{$};\n\t\\node at (-7.3,0.55) {$\\mathcal{S}_{1}^{\\mathrm{c}}\\,\\big\\{$};\n\t\n\t\n\t\\filldraw[fill=black!70] (0,0.8) rectangle (1.25,1.05);\n\t\\node at (1.9,0.925) {User $u_{1}$};\n\t\\filldraw[fill=black!40] (0,0.3) rectangle (1.25,0.55);\n\t\\node at (1.9,0.425) {User $u_{2}$};\n\t\\filldraw[fill=black!10] (2.75,0.8) rectangle (4,1.05);\n\t\\node at (4.7,0.925) {User $u_{N}$};\n\t\\draw (2.75,0.3) rectangle (4,0.55);\n\t\\node at (4.92,0.425) {Unallocated};\n\\end{tikzpicture}\n\\caption{A multi-carrier multi-access system with one BS and $M\\geq2$ users over the frequency-selective Rayleigh slow fading channel. BS allocates the subchannels in the set $\\mathcal{S}_{m}$ to the user $u_{m}$ according to $\\unit[1]{bit}$ CSI feedback.}\\label{fig:system_desc}\n\\end{figure*}\n\nConsider a multi-carrier multi-access channel, as shown in Fig. \\ref{fig:system_desc}, where the basic resource unit allocated to each user is referred to as the \\emph{subchannel} in order to construct the unified analytic framework. 
Clearly, the subchannel is the aforementioned RB or chunk in the RB or chunk based allocation schemes, respectively.\footnote{The term ``RB'' or ``chunk'' will be used if the term ``subchannel'' indicates one of them specifically.} In the considered multi-carrier multi-access channel, $M$ users communicate with a BS through $N$ subchannels with $N\geq M$. The user set is defined as $\mathcal{U}=\left\{u_{1},\ldots,u_{M}\right\}=\left\{u_{m}\right\}_{m=1}^{M}$, where $u_{m}$ denotes the $m$th user.\footnote{The script symbol $\mathcal{X}$ denotes a set, whose cardinality is denoted by $\left|\mathcal{X}\right|$.} The subchannel set in a given frame is defined as $\mathcal{S}=\left\{s_{n}\right\}_{n=1}^{N}$, where $s_{n}$ denotes the $n$th subchannel. The channel is assumed to be under-spread, i.e., the coherence time $T^{\mathrm{c}}$ is much larger than the multi-path delay spread $T^{\mathrm{d}}$ and also the length of each frame \cite{Proakis2007}. The signals of each user are assumed to undergo independent $L$-path frequency-selective fading, i.e., the channel contains $L$ coherence bandwidths. Let $\mathcal{S}_{l}^{\mathrm{c}},\,l=1,\ldots,L$ denote the set of subchannels in the $l$th coherence bandwidth, then $\left|\mathcal{S}_{l}^{\mathrm{c}}\right|=N_{\mathrm{c}}=\frac{N}{L}$, where the non-integer case is omitted since $N$ can be exactly divided by $L$ in the chunk based scheme or $N\gg L$ in the RB based scheme \cite{IEEE802.16-2009}. For convenience, the coherence bandwidth is used to normalize the total bandwidth, and the length of each frame is normalized as $1$.\n\nAccording to the results in \cite{Wong1999}, it is quite reasonable to assume that the channel gains of the subchannels in one coherence bandwidth are the same, whereas they are independent of one another for different coherence bandwidths. Let $g_{ml}$ denote the channel gain of the $l$th coherence bandwidth for the user $u_{m}$. 
Then, $g_{ml}$, for every $m=1,\ldots,M$ and $l=1,\ldots,L$, are independent and identically distributed as $\mathcal{CN}\left(0,1\right)$.\footnote{$\mathcal{CN}\left(\bm{\mu},\bm{C}\right)$ denotes a circularly symmetric Gaussian distribution with mean vector $\bm{\mu}$ and covariance matrix $\bm{C}$.} The channel gain on each subchannel, denoted by $h_{mn}$, can then be defined as $h_{mn}=g_{ml},\,\forall s_{n}\in\mathcal{S}_{l}^{\mathrm{c}}$. The mutual information of a subchannel $s_{n}\in\mathcal{S}_{l}^{\mathrm{c}}$ for the user $u_{m}$ is then given by\n\begin{equation}\label{eq:subchannel_capacity}\n\tI_{mn}=\frac{1}{N_{\mathrm{c}}}\ln\left(1+\left|h_{mn}\right|^{2}\gamma\right)=\frac{1}{N_{\mathrm{c}}}\ln\left(1+\left|g_{ml}\right|^{2}\gamma\right),\n\end{equation}\nwhere $\gamma$ is the average SNR at the receiver. Throughout the paper, the natural logarithm function is used, and the unit of information is ``$\mathrm{nat}$''.\n\nBecause the subchannels must be allocated at the beginning of each frame in real-time, each user is only allowed to feed back $\unit[1]{bit}$ CSI for each coherence bandwidth so as to reduce the complexity in both the signaling and subchannel allocation processes. Let $\bm{Q}$ denote the quantized CSI matrix at the BS, where the element at the $m$th row and the $l$th column $\left[\bm{Q}\right]_{ml}$ is a $\unit[1]{bit}$ quantized value of $g_{ml}$. 
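The block-fading structure and the 1-bit quantization just described can be sketched numerically as follows. This Python fragment is illustrative only; in particular, the quantizer threshold is an arbitrary assumption (its optimal choice is part of the design problem, not specified here).

```python
import numpy as np

def block_fading_gains(M, L, N_c, rng):
    """Per-coherence-bandwidth gains g_ml ~ CN(0,1), replicated across the
    N_c subchannels of each coherence bandwidth (h_mn = g_ml)."""
    g = (rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L))) / np.sqrt(2)
    h = np.repeat(g, N_c, axis=1)          # shape (M, N_c * L)
    return g, h

def mutual_information(h, N_c, snr):
    """I_mn = (1/N_c) ln(1 + |h_mn|^2 gamma), in nats."""
    return np.log(1.0 + np.abs(h) ** 2 * snr) / N_c

def one_bit_csi(g, threshold):
    """1-bit feedback per coherence bandwidth: Q_ml = 1 iff |g_ml|^2 >= threshold."""
    return (np.abs(g) ** 2 >= threshold).astype(int)

rng = np.random.default_rng(0)
g, h = block_fading_gains(M=2, L=4, N_c=3, rng=rng)
I = mutual_information(h, N_c=3, snr=10.0)
Q = one_bit_csi(g, threshold=0.5)
```

Summing $I_{mn}$ over the $N_{\mathrm{c}}$ subchannels of one coherence bandwidth recovers $\ln(1+\left|g_{ml}\right|^{2}\gamma)$, consistent with Eq. (\ref{eq:subchannel_capacity}).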
A subchannel allocation scheme, denoted by $\\mathscr{S}$, can be seen as a mapping from $\\bm{Q}$ to a family of subchannel allocation sets, that is\n\\begin{equation}\\label{eq:subchannel_allocation}\n\t\\left(\\mathcal{S}_{m}\\right)_{m=1}^{M}=\\mathscr{S}\\left(\\bm{Q}\\right),\n\\end{equation}\nwhere $\\left(\\mathcal{S}_{m}\\right)_{m=1}^{M}=\\left(\\mathcal{S}_{1},\\ldots,\\mathcal{S}_{M}\\right)$ is a vector of sets, and $\\mathcal{S}_{m}$ is the set of subchannels allocated to the user $u_{m}$, which satisfies $\\mathcal{S}_{m}\\cap\\mathcal{S}_{m'}=\\emptyset,\\,\\forall m'\\neq m$, and $\\mathcal{S}_{\\mathrm{alloc}}=\\bigcup_{u_{m}\\in\\mathcal{U}}\\mathcal{S}_{m}\\subseteq\\mathcal{S}$. Over the time duration of one frame, the received signal of each symbol $\\bm{y}\\in\\mathbb{C}^{N}$ in the frequency domain is given by\n\\begin{equation}\\label{eq:channel_model}\n\t\\bm{y}=\\bm{H}\\bm{x}+\\bm{w},\n\\end{equation}\nwhere $\\bm{w}\\in\\mathbb{C}^{N}$ is the additive white Gaussian noise with the distribution of $\\mathcal{CN}\\left(\\bm{0},\\bm{I}\\right)$. The channel gain $\\bm{H}$ is a diagonal matrix and $\\left[\\bm{H}\\right]_{nn}=h_{mn}$, if and only if $s_{n}\\in\\mathcal{S}_{m}$. The transmission symbol of all the users is denoted by the vector $\\bm{x}=\\left(x_{n}\\right)_{n=1}^{N}$, where $\\bm{x}_{m}=\\left(x_{n}\\right)_{s_{n}\\in\\mathcal{S}_{m}}$ is the symbol vector transmitted by the user $u_{m}$. Let $\\mathscr{C}$ denote the coding scheme in the frequency domain, then $\\bm{x}_{m}=\\mathscr{C}\\left(\\bm{b}_{m}\\right)$, where $\\bm{b}_{m}$ is the uncoded-symbol vector for user $u_{m}$.\n\nThe objective of this paper is to propose a unified analytic framework for designing the $\\mathscr{S}$ and choosing $\\mathscr{C}$ for subchannel allocation in multi-carrier multi-access systems. In the following, we will first assume $\\mathscr{C}$ has been properly chosen and focus on designing the optimal $\\mathscr{S}$. 
Then, the coding scheme, which can meet the requirement of the optimal $\mathscr{S}$, will be discussed.


\subsection{Outage Exponent Region \& Diversity-Multiplexing Region}
As defined in \cite{Ozarow1994}, if the instantaneous mutual information is smaller than a target transmission rate, the channel is in outage. Results on compound channels showed that reliable communication can be achieved as long as the channel is not in outage \cite{Csiczar1981}. Therefore, the outage probability, which establishes the relationship between reliability and efficiency, is a key performance metric in slow fading channels \cite{Zheng2003}.

According to the QoS requirement of the considered multi-carrier multi-access channel, each user has a specific target rate, which is denoted by $R_{m}$. A given vector of target rates $\bm{R}=\left(R_{m}\right)_{m=1}^{M}$ will be referred to as an \emph{operating point} of the system. If $\bm{Q}$ is already known at the BS, $\left(\mathcal{S}_{m}\right)_{m=1}^{M}$ can be generated by a given subchannel allocation scheme $\mathscr{S}$. The coding scheme $\mathscr{C}$ is assumed to be properly chosen such that the sum mutual information of the subchannels allocated to each user can be achieved.
For a given operating point $\bm{R}$, therefore, the outage probability of a user $u_{m}$ is defined by
\begin{equation}\label{eq:outage_definition}
	p_{m}^{\mathrm{out}}\left(R_{m}\right)=\Pr\left\{\left.\sum_{s_{n}\in\mathcal{S}_{m}}I_{mn}<R_{m}\right|\left(\mathcal{S}_{m}\right)_{m=1}^{M}=\mathscr{S}\left(\bm{Q}\right)\right\}.
\end{equation}

\begin{figure}[!t]
\centering
\begin{tikzpicture}[>=stealth]
	\fill[domain=0:4,gray!50] plot (\x,{2*sqrt(1-pow(\x,2)/pow(4,2))}) -- (0,0) -- (0,2);
	\draw[domain=0:4,thick] plot (\x,{2*sqrt(1-pow(\x,2)/pow(4,2))});
	\draw[domain=0:4.5,thick,dashed] plot (\x,{2.5*sqrt(1-pow(\x,2)/pow(4.5,2))});
	\node at (1.8,0.9) {$\mathcal{R}_\mathrm{oer}\left(R_{1},R_{2};\gamma\right)$};
	\node at (3.5,2.7) {Boundary of $\mathcal{R}_\mathrm{dmr}\left(r_{1},r_{2}\right)$};
	\node at (1.3,2.1) {$\gamma\rightarrow\infty$};

	\draw[->] (3.5,2.5) -- (3,1.8634);
	\draw[->] (1.8,1.7861) -- (2.1,2.2111);

	\draw[->,very thick] (0,0) -- (5,0) node[right] {$E_{1}\left(R_{1}\right)$};
	\draw[->,very thick] (0,0) -- (0,3) node[above] {$E_{2}\left(R_{2}\right)$};
\end{tikzpicture}
\caption{The OER $\mathcal{R}_{\mathrm{oer}}\left(R_{1},R_{2};\gamma\right)$ and DMR $\mathcal{R}_{\mathrm{dmr}}\left(r_{1},r_{2}\right)$ for an operating point $\left(R_{1},R_{2}\right)$.}\label{fig:region_example}
\end{figure}

\medskip
\begin{defn}\label{def:OER}
For a given operating point $\bm{R}$, the OER, denoted by $\mathcal{R}_{\mathrm{oer}}\left(\bm{R};\gamma\right)$, consists of all vectors of outage exponents $\left(E_{m}\left(R_{m},\gamma\right)\right)_{m=1}^{M}$, which can be achieved by at least one subchannel allocation scheme $\mathscr{S}$. In this context, the outage exponent is defined as
\begin{equation}\label{eq:outage_exponent}
	E_{m}\left(R_{m},\gamma\right)=-\frac{\partial\ln p_{m}^{\mathrm{out}}\left(R_{m}\right)}{\partial\ln\gamma}.
\end{equation}
\end{defn}
\medskip

Fig.
\\ref{fig:region_example} shows an example of the OER for a subchannel allocation scheme operated at point $\\left(R_{1},R_{2}\\right)$, which is a two-dimensional region depending on the operating point. It should be noted that there is only one capacity region for a given system, but there is one OER for every operating point inside the capacity region. Clearly, a larger OER implies a better performance for a given operating point.\n\nAccording to the results in \\cite{Bai2011c}, the diversity-multiplexing tradeoff (DMT) for a user $u_{m}$ is closely related to the outage exponent as follows:\n\\begin{equation}\\label{eq:diversity_gain}\n\t\\lim_{\\mathsf{\\gamma\\rightarrow\\infty}}E_{m}\\left(R_{m},\\gamma\\right)=d_{m}\\left(r_{m}\\right),\n\\end{equation}\nwhere\n\\begin{equation}\n\tr_{m}=\\lim_{\\mathsf{\\gamma\\rightarrow\\infty}}\\frac{R_{m}}{\\ln\\left(1+\\gamma\\right)}\n\\end{equation}\nis known as the multiplexing gain. As shown in Fig. \\ref{fig:region_example}, the \\emph{diversity-multiplexing region} (DMR), defined in the following, is the asymptotic case of the OER when $\\gamma$ tends to infinity.\n\n\\medskip\n\\begin{defn}\\label{def:DMR}\nFor a given operating point $\\bm{r}=\\left(r_{m}\\right)_{m=1}^{M}$, the DMR, denoted by $\\mathcal{R}_{\\mathrm{dmr}}\\left(\\bm{r}\\right)$, consists of all vectors of diversity gains $\\left(d_{m}\\left(r_{m}\\right)\\right)_{m=1}^{M}$, which can be achieved by at least one subchannel allocation scheme $\\mathscr{S}$.\n\\end{defn}\n\\medskip\n\nThe optimal DMT for parallel fading channels is given by $d^{*}\\left(r\\right)=L\\left(1-r\/L\\right)$ \\cite{Bai2011c}. 
Therefore, for the DMR of the considered multi-carrier multi-access channel, the element of the optimal DMT vector should be given by
\begin{equation}\label{eq:opt_dmt}
	d_{m}^{*}\left(r_{m}\right)=L\left(1-\frac{r_{m}}{r_{m}^{*}}\right),\quad m=1,\ldots,M,
\end{equation}
where
\begin{equation}\label{eq:opt_dmt_cond}
\left\{\begin{aligned}
	& \frac{r_{1}^{*}}{R_{1}}=\cdots=\frac{r_{M}^{*}}{R_{M}},\\
	& \sum_{u_{m}\in\mathcal{U}}r_{m}^{*}=L.
\end{aligned}\right.
\end{equation}
In fact, Eqs. \eqref{eq:opt_dmt} and \eqref{eq:opt_dmt_cond} indicate that each user shares the total multiplexing gain according to the operating point, while all of them achieve the full frequency diversity. Thus, a subchannel allocation scheme is referred to as \emph{asymptotically optimal}, denoted by $\mathscr{S}^{*}$, if and only if Eqs. \eqref{eq:opt_dmt} and \eqref{eq:opt_dmt_cond} are achieved.


\section{RBG based Coded $f$-Matching Framework}\label{sec:coded_f_matching}

In this section, the maximum $f$-matching approach will first be proposed for designing the asymptotically optimal subchannel allocation schemes $\mathscr{S}^{*}$. The coding scheme $\mathscr{C}$ which fulfills the requirement of the proposed framework, denoted by $\mathscr{C}^{*}$, will also be discussed.


\subsection{Random Bipartite Graph Formulation}\label{subsec:RBG_Formulation}

Based on the aforementioned $\bm{Q}$ matrix, the considered multi-carrier multi-access channel can be formulated as a \emph{random bipartite graph} (RBG) model. Eq. \eqref{eq:opt_dmt_cond} requires every subchannel to contribute the same amount of efficiency and reliability to each user. This requirement implies that all of the elements in $\bm{Q}$ should be independent and follow the same distribution.
The subchannel set $\mathcal{S}$ will then be shared by all of the users according to the operating point of the considered multi-carrier multi-access channel. Define
\begin{equation}\label{eq:subchannel_rate}
	R_{\mathrm{s}}=\frac{1}{N}\sum_{u_{m}\in\mathcal{U}}R_{m}.
\end{equation}
If $I_{mn}<R_{\mathrm{s}}$, the subchannel $s_{n}$ is said to be in outage for the user $u_{m}$; an edge connects $u_{m}$ and $s_{n}$ in the RBG if and only if $s_{n}$ is not in outage for $u_{m}$. The existence of a perfect $f$-matching in the resulting graph is characterized by the following lemma.

\medskip
\begin{lem}\label{lem:f_matching}
A bipartite graph $\mathcal{G}\left(\mathcal{U}\cup\mathcal{S},\mathcal{E}\right)$ has a perfect $f$-matching if and only if there is no vertex set $\mathcal{X}\subseteq\mathcal{U}$ such that
\begin{equation}\label{eq:f_matching_cond}
	\sum_{u_{m}\in\mathcal{X}}f\left(u_{m}\right)>\sum_{s_{n}\in\mathcal{N}\left(\mathcal{X}\right)}f\left(s_{n}\right),
\end{equation}
where $\mathcal{N}\left(\mathcal{X}\right)\subseteq\mathcal{S}$ is the adjacent vertex set of $\mathcal{X}$.
\end{lem}
\begin{IEEEproof}
See Appendix \ref{app:f_matching}.
\end{IEEEproof}
\medskip

From this lemma, the necessary and sufficient conditions for the existence of a specific maximum $f$-matching can be summarized as follows, where $K_{m}^{\mathrm{th}}$ is an integer such that
\begin{equation}
	K_{m}^{\mathrm{th}}=N+1-\left\lceil\frac{M}{M-1}K_{m}^{\mathrm{sum}}\right\rceil.
\end{equation}

\medskip
\begin{lem}\label{lem:matching_edges}
Let $\mathcal{G}\left(\mathcal{U}\cup\mathcal{S},\mathcal{E}\right)$ be a bipartite graph with $\left|\mathcal{U}\right|=M$, $\left|\mathcal{S}\right|=N$ and $2\leq M\leq N$. Define a non-negative integer function $f\left(v\right)$ as Eq. \eqref{eq:f-function}; then, if and only if the edge set $\mathcal{E}$ satisfies
\begin{enumerate}
 \item $|\mathcal{E}|\geq\left(M-1\right)N+K_{m}$ for the case of $K_{m}=1,\ldots,K_{m}^{\mathrm{th}}$,
 \item or $|\mathcal{E}|\geq M\left(K^{\mathrm{sum}}-1\right)+1$ for the case of $K_{m}=K_{m}^{\mathrm{th}}+1,\ldots,\widetilde{K}_{m}$;
\end{enumerate}
there must be a maximum $f$-matching in $\mathcal{G}\left(\mathcal{U}\cup\mathcal{S},\mathcal{E}\right)$ which can saturate the vertices $(u_{\left(i\right)})_{i=1}^{m}$, where $u_{\left(i\right)}$ has the same order as $K_{\left(i\right)}$, and $K^{\mathrm{sum}}$ is defined in Eq.
\eqref{eq:K_sum}.
\end{lem}
\begin{IEEEproof}
See Appendix \ref{app:matching_edges}.
\end{IEEEproof}
\medskip


\subsection{OER \& DMR for RB based Coded $f$-Matching Scheme}

As shown in Fig. \ref{fig:system_desc}, the subchannels, i.e., the RBs themselves, are directly allocated to each user in the RB based coded $f$-matching allocation scheme. According to the channel model, the number of outage RBs is an integral multiple of the number of RBs in one coherence bandwidth, i.e., an integral multiple of $N_{\mathrm{c}}$. Moreover, each user will be allocated $\widetilde{K}_{m}$ RBs in this scheme so as to achieve the optimal performance. Since the exact performance analysis is very difficult for the coded $f$-matching approach, we will focus on the first order approximations in the high and low SNR regimes. Based on Theorem \ref{thm:con_outage_exponent} and Lemma \ref{lem:matching_edges}, the main results are summarized in the following theorem.

\medskip
\begin{thm}\label{thm:outage_RB}
For the considered multi-carrier multi-access channel with $M$ users ($M\geq2$), $L$ coherence bandwidths, and $N$ RBs, if the RB based coded $f$-matching allocation scheme is applied, the achieved outage probability for the user $u_{m}$, in the high SNR regime, is given by
\begin{equation}\label{eq:outage_RB_high}
\begin{aligned}
	p_{m}^{\mathrm{out}}\left(R_{m}\right)= & \sum_{\kappa=L-\left\lceil\frac{\widetilde{K}_{m}}{N_{\mathrm{c}}}\right\rceil+1}^{L}\frac{L!p_{m}^{\mathrm{cop}}\left(R_{m}\left|\mathcal{D}_{\widetilde{K}_{m},L-\kappa}\right.\right)p_{\mathrm{s}}^{\kappa}}{\kappa!\left(L-\kappa\right)!}\\
	& +\mathsf{O}\left(p_{\mathrm{s}}^{L}\right).
\end{aligned}
\end{equation}
\end{thm}
\begin{IEEEproof}
See Appendix \ref{app:outage_RB}.
\end{IEEEproof}
\medskip

According to Definition \ref{def:DMR} and Corollary \ref{cor:CDMT}, the achieved DMR of the RB based coded $f$-matching allocation scheme is
obtained from Theorem \ref{thm:outage_RB}.

\medskip
\begin{thm}\label{thm:DMR_RB}
For an operating point $\bm{r}=\left(r_{m}\right)_{m=1}^{M}$, the best achievable bound of the DMR $\mathcal{R}_{\mathrm{dmr}}\left(\bm{r}\right)$ is given by the vector $\left(d_{m}^{*}\left(r_{m}\right)\right)_{m=1}^{M}$ which satisfies
\begin{equation}\label{eq:DMR_RB}
	d_{m}^{*}\left(r_{m}\right)=L\left(1-\frac{N_{\mathrm{c}}r_{m}}{\widetilde{K}_{m}}\right),
\end{equation}
for $0\leq r_{m}\leq\frac{\widetilde{K}_{m}}{N_{\mathrm{c}}}$, $m=1,\ldots,M$.
\end{thm}
\medskip

\begin{figure}[!t]
\centering
\begin{tikzpicture}[>=stealth]
	\filldraw[fill=black!70] (-7,0.3) rectangle (-2,0.55);
	\filldraw[fill=black!10] (-7,0.55) rectangle (-2,0.8);
	\filldraw[fill=black!70] (-7,0.8) rectangle (-2,1.05);
	\filldraw[fill=black!10] (-7,1.05) rectangle (-2,1.3);
	\filldraw[fill=black!70] (-7,1.3) rectangle (-2,1.55);
	\filldraw[fill=black!10] (-7,1.55) rectangle (-2,1.8);
	\filldraw[fill=black!70] (-7,1.8) rectangle (-2,2.05);
	\filldraw[fill=black!10] (-7,2.05) rectangle (-2,2.3);

	\foreach \x in {-7}
		\foreach \y in {0.3,0.55,0.8,1.05,1.3,1.55,1.8,2.05}
		{
			\draw (\x,\y) +(0,0) rectangle ++(5,.25);
		}

	\draw[->,very thick] (-7,0.3) -- (-1.5,0.3) node[right] {Time};
	\draw[->,very thick] (-7,0.3) -- (-7,2.7) node[above] {Frequency};
	\node at (-4.5,0) {Interleaved Allocation Scheme};
	\node at (-4,2.7) {$1$ Subchannel $=1$ RB};

	\node at (-7.3,2.05) {$\mathcal{S}_{4}^{\mathrm{c}}\,\big\{$};
	\node at (-7.3,1.55) {$\mathcal{S}_{3}^{\mathrm{c}}\,\big\{$};
	\node at (-7.3,1.05) {$\mathcal{S}_{2}^{\mathrm{c}}\,\big\{$};
	\node at (-7.3,0.55) {$\mathcal{S}_{1}^{\mathrm{c}}\,\big\{$};

	\filldraw[fill=black!70] (-7,-0.7) rectangle (-5.75,-0.45);
	\node at (-5.1,-0.625) {User $u_{1}$};
	\filldraw[fill=black!10] (-4.25,-0.7) rectangle (-3,-0.45);
	\node at (-2.3,-0.625) {User $u_{N}$};
\end{tikzpicture}
\caption{The interleaved allocation scheme in IEEE 802.16 and LTE-A, where the
RBs in different coherence bandwidths are allocated to different users in an interleaved way.}\\label{fig:interleaved}\n\\end{figure}\n\n\\subsection{OER \\& DMR for Chunk based Coded $f$-Matching Scheme}\n\nThe chunk based coded $f$-matching allocation scheme is considered in this subsection, where we assume $L>M$. The case of $L\\leq M$ has been shown to achieve the optimal OER and DMR without requiring the frequency-domain coding scheme in \\cite{Bai2011b}. As shown in Fig. \\ref{fig:system_desc}, all of the RBs in each coherence bandwidth will be bundled as one chunk, i.e., $N=L$, because: 1) dividing each coherence bandwidth into more chunks can be seen as a special case of RB based coded $f$-matching allocation scheme; 2) bundling all of the RBs in multiple coherence bandwidths as one chunk must be worse than allocating more chunks in the proposed bundling method. Due to the difficulty in calculating the exact formula of the outage probability, we still focus on the first order approximations in the high and low SNR regimes. 
Based on Theorem \\ref{thm:con_outage_exponent} and Lemma \\ref{lem:matching_edges}, the main results are summarized in the following theorem.\n\n\\medskip\n\\begin{thm}\\label{thm:outage_chunk}\nFor the considered multi-carrier multi-access channel with $M$ users and $L$ coherence bandwidths ($M\\leq L$), if the chunk based coded $f$-matching allocation scheme is applied, the achieved outage probability for the user $u_{m}$, in the high SNR regime, is given by\n\\begin{equation}\\label{eq:outage_chunk_1}\n\\begin{aligned}\n\tp_{m}^{\\mathrm{out}}\\left(R_{m}\\right)= & \\sum_{\\kappa=L-K_{m}+1}^{L}\\frac{L!p_{m}^{\\mathrm{cop}}\\left(R_{m}\\left|\\mathcal{D}_{K_{m},L-\\kappa}\\right.\\right)p_{\\mathrm{s}}^{\\kappa}}{\\kappa!\\left(L-\\kappa\\right)!}\\\\\n\t& +\\mathsf{O}\\left(p_{\\mathrm{s}}^{L}\\right),\n\\end{aligned}\n\\end{equation}\nfor the case of $K_{m}=1,\\ldots,K_{m}^{\\mathrm{th}}$; or\n\\begin{equation}\\label{eq:outage_chunk_2}\n\\begin{aligned}\n p_{m}^{\\mathrm{out}}\\left(R_{m}\\right)= & \\frac{L!p_{m}^{\\mathrm{cop}}\\left(R_{m}\\left|\\mathcal{D}_{K_{m},K_{m}-1}\\right.\\right)p_{\\mathrm{s}}^{M\\left(L-K^{\\mathrm{sum}}+1\\right)}}{M\\left(L-K^{\\mathrm{sum}}+1\\right)!\\left(K^{\\mathrm{sum}}-1\\right)!}\\\\\n & +\\mathsf{O}\\left(p_{\\mathrm{s}}^{M\\left(L-K^{\\mathrm{sum}}+1\\right)+K_{m}-1}\\right),\n\\end{aligned}\n\\end{equation}\nfor the case of $K_{m}=K_{m}^{\\mathrm{th}}+1,\\ldots,\\widetilde{K}_{m}$. For the case of $K_{m}=K_{m}^{\\mathrm{th}}$ and $\\left(M-1\\right)|MK_{m}^{\\mathrm{sum}}$, the outage probability of the user $u_{m}$ is the summation of Eq. \\eqref{eq:outage_chunk_1} and Eq. 
\eqref{eq:outage_chunk_2}.
\end{thm}
\begin{IEEEproof}
See Appendix \ref{app:outage_chunk}.
\end{IEEEproof}
\medskip

Similar to Theorem \ref{thm:DMR_RB}, it is not difficult to obtain the following corollary from Theorem \ref{thm:outage_chunk} and Corollary \ref{cor:CDMT}.

\medskip
\begin{cor}\label{cor:DMT_chunk}
For the operating point $\left(r_{m}\right)_{m=1}^{M}$, the achieved DMT for the user $u_{m}$ is given by
\begin{equation}\label{eq:DMT_Lower}
	d_{m}\left(r_{m}\right)=L\left(1-\frac{r_{m}}{K_{m}}\right),
\end{equation}
for the case of $K_{m}=1,\ldots,K_{m}^{\mathrm{th}}$; and
\begin{equation}\label{eq:DMT_Upper}
	d_{m}\left(r_{m}\right)=\left[M\left(L-K^{\mathrm{sum}}+1\right)+K_{m}-1\right]\left(1-\frac{r_{m}}{K_{m}}\right),
\end{equation}
for the case of $K_{m}=K_{m}^{\mathrm{th}}+1,\ldots,\widetilde{K}_{m}$.
\end{cor}
\medskip

Both Theorem \ref{thm:outage_chunk} and Corollary \ref{cor:DMT_chunk} show that the chunk based coded $f$-matching allocation will not be optimal in OER and DMR. For a given user $u_{m}$, the best achievable DMT is a piecewise linear function. In Eq. \eqref{eq:DMT_Lower}, the slope $-\frac{L}{K_{m}}$ increases as $K_{m}$ increases from $1$ to $K_{m}^{\mathrm{th}}$, therefore
\begin{equation}\label{eq:OptDMT_Lower}
	d_{m}^{*}\left(r_{m}\right)=L\left(1-\frac{r_{m}}{K_{m}^{\mathrm{th}}}\right)
\end{equation}
outperforms the cases of $K_{m}=1,\ldots,K_{m}^{\mathrm{th}}-1$. In Eq. \eqref{eq:DMT_Upper}, similarly, the slope
\begin{equation}
\begin{aligned}
	& -\frac{M\left(L-K^{\mathrm{sum}}+1\right)+K_{m}-1}{K_{m}}\\
	= & -\frac{M\left(L-K_{m}^{\mathrm{sum}}+1\right)-1}{K_{m}}+M-1
\end{aligned}
\end{equation}
increases as $K_{m}$ increases from $K_{m}^{\mathrm{th}}+1$ to $\widetilde{K}_{m}$.
Thus,
\begin{equation}\label{eq:OptDMT_Upper}
	d_{m}^{*}\left(r_{m}\right)=\left[M\left(L-K^{\mathrm{sum}}+1\right)+\widetilde{K}_{m}-1\right]\left(1-\frac{r_{m}}{\widetilde{K}_{m}}\right)
\end{equation}
outperforms the cases of $K_{m}=K_{m}^{\mathrm{th}}+1,\ldots,\widetilde{K}_{m}$. Since $L>M\left(L-K^{\mathrm{sum}}+1\right)+\widetilde{K}_{m}-1$, a threshold of $r_{m}$ can be obtained by solving the following equation
\begin{equation}
\begin{aligned}
	& L\left(1-\frac{r_{m}}{K_{m}^{\mathrm{th}}}\right)\\
	= & \left[M\left(L-K^{\mathrm{sum}}+1\right)+\widetilde{K}_{m}-1\right]\left(1-\frac{r_{m}}{\widetilde{K}_{m}}\right).
\end{aligned}
\end{equation}
Clearly, we have
\begin{equation}
	r_{m}^{\mathrm{th}}=\frac{K_{m}^{\mathrm{th}}\widetilde{K}_{m}\left[L-\widetilde{K}_{m}+1-M\left(L-K^{\mathrm{sum}}+1\right)\right]}{\widetilde{K}_{m}\left(L-K_{m}^{\mathrm{th}}\right)+K_{m}^{\mathrm{th}}\left[1-M\left(L-K^{\mathrm{sum}}+1\right)\right]}.
\end{equation}
Therefore, the optimal chunk based coded $f$-matching allocation scheme within the specific bundling method can be described as follows. If the operating point for the user $u_{m}$ satisfies $r_{m}\leq r_{m}^{\mathrm{th}}$, the proposed coded $f$-matching method will allocate $K_{m}^{\mathrm{th}}$ chunks to this user; whereas if $r_{m}> r_{m}^{\mathrm{th}}$, the proposed method will allocate $\widetilde{K}_{m}$ chunks to this user. The best achievable DMT of the user $u_{m}$ is the piecewise linear function joined by Eq. \eqref{eq:OptDMT_Lower} and Eq. \eqref{eq:OptDMT_Upper}.
This result is summarized as follows.

\medskip
\begin{thm}\label{thm:DMR_chunk}
For an operating point $\bm{r}=\left(r_{m}\right)_{m=1}^{M}$, the best achievable bound of the DMR $\mathcal{R}_{\mathrm{dmr}}\left(\bm{r}\right)$ is given by the vector $\left(d_{m}^{*}\left(r_{m}\right)\right)_{m=1}^{M}$ which satisfies
\begin{equation}\label{eq:DMR_chunk}
	d_{m}^{*}\left(r_{m}\right)=L\left(1-\frac{r_{m}}{K_{m}^{\mathrm{th}}}\right),
\end{equation}
for $0\leq r_{m}\leq r_{m}^{\mathrm{th}}$; and
\begin{equation}
	d_{m}^{*}\left(r_{m}\right)=\left[M\left(L-K^{\mathrm{sum}}+1\right)+\widetilde{K}_{m}-1\right]\left(1-\frac{r_{m}}{\widetilde{K}_{m}}\right),
\end{equation}
for $r_{m}^{\mathrm{th}}<r_{m}\leq\widetilde{K}_{m}$.
\end{thm}
\medskip

\begin{figure}[!t]
\centering
\begin{tikzpicture}[>=stealth]
	\filldraw[fill=black!70] (-7,0.3) rectangle (-5.75,2.3);
	\filldraw[fill=black!40] (-5.75,0.3) rectangle (-4.5,2.3);
	\filldraw[fill=black!10] (-4.5,0.3) rectangle (-3.25,2.3);

	\foreach \x in {-7,-5.75,-4.5,-3.25}
		\foreach \y in {0.3,0.55,0.8,1.05,1.3,1.55,1.8,2.05}
		{
			\draw (\x,\y) +(0,0) rectangle ++(1.25,.25);
		}

	\draw[->,very thick] (-7,0.3) -- (-1.5,0.3) node[right] {Time};
	\draw[->,very thick] (-7,0.3) -- (-7,2.7) node[above] {Frequency};
	\node at (-4.5,0) {TDMA based Allocation Scheme};
	\node at (-4,2.7) {$1$ Subchannel $=8$ RBs};

	\node at (-7.3,2.05) {$\mathcal{S}_{4}^{\mathrm{c}}\,\big\{$};
	\node at (-7.3,1.55) {$\mathcal{S}_{3}^{\mathrm{c}}\,\big\{$};
	\node at (-7.3,1.05) {$\mathcal{S}_{2}^{\mathrm{c}}\,\big\{$};
	\node at (-7.3,0.55) {$\mathcal{S}_{1}^{\mathrm{c}}\,\big\{$};

	\filldraw[fill=black!70] (-7,-0.7) rectangle (-5.75,-0.45);
	\node at (-5.1,-0.625) {User $u_{1}$};
	\filldraw[fill=black!40] (-7,-1.25) rectangle (-5.75,-1);
	\node at (-5.1,-1.125) {User $u_{2}$};
	\filldraw[fill=black!10] (-4.25,-0.7) rectangle (-3,-0.45);
	\node at (-2.3,-0.625) {User $u_{N}$};
	\draw (-4.25,-1.25) rectangle (-3,-1);
	\node at (-2.08,-1.125) {Unallocated};
\end{tikzpicture}
\caption{TDMA is also a special case of the chunk based coded $f$-matching allocation scheme, where all of the RBs in one timeslot are bundled as one
chunk.}\\label{fig:TDMA}\n\\end{figure}\n\n\\begin{rmk}\nBesides the bundling method shown in Fig. \\ref{fig:system_desc}, another method, often referred to as TDMA, is shown in Fig. \\ref{fig:TDMA}, where the RBs in one timeslot are bundled as one chunk. The OER and DMR performance can also be analyzed by the coded $f$-matching approach. For the user $u_{m}$, BS will allocate $\\widetilde{K}_{m}$ chunks to it without observing the outage state of these chunks. Since the frame is in one coherence time, the outage probabilities are the same for each user and is exponentially tight upper bounded by Eq. \\eqref{eq:outage_exponent_interleaved} with $\\widetilde{K}_{m}$ being replaced by $L$ in the equation. The DMT curve of the user $u_{m}$ for TDMA scheme is plotted in Fig. \\ref{fig:DMTCurve}. It can be seen that the optimal DMT can also be achieved by this scheme. However, this scheme has a high requirement on frequency shift estimation, and no multi-user diversity.\n\\end{rmk}\n\n\n\\section{Practical Issues}\\label{sec:practical_issues}\n\nThe complexity of the proposed framework is a key issue for practical multi-carrier multi-access systems. In this section, the parallel maximum $f$-matching algorithm for subchannel allocation with log-polynomial complexity is discussed. The asymptotic optimal coding scheme, which is easy to implement, is also presented.\n\n\n\\subsection{Parallel Subchannel Allocation Algorithm}\n\nThe Hopcroft-Karp algorithm is well known for solving the maximum matching problem in bipartite graphs \\cite{Hopcroft1973}. According to the matching theory \\cite{Lovasz1986}, a matching $\\mathcal{M}$ is maximum if and only if there is no augmenting path with respect to this matching. Consider a bipartite graph $\\mathcal{G}\\left(\\mathcal{U}\\cup\\mathcal{S},\\mathcal{E}\\right)$, an augmenting path is an alternating path that ends in an unsaturated vertex of $\\mathcal{S}$. 
Here, an alternating path with respect to $\mathcal{M}$ is defined as a path in $\mathcal{G}$ which starts in $\mathcal{U}$ at an unsaturated vertex and then contains, alternately, edges from $\mathcal{E}\setminus\mathcal{M}$ and from $\mathcal{M}$. Therefore, the basic principle of the Hopcroft-Karp algorithm is to search for the shortest augmenting paths from all of the unsaturated vertices in $\mathcal{S}$ simultaneously \cite{Hopcroft1973}. If there are augmenting paths, a new matching with more saturated vertices can be obtained by computing the symmetric difference between the previous matching and the augmenting paths. It can be shown that the Hopcroft-Karp algorithm is capable of finding a maximum matching for any bipartite graph with a time complexity of $\mathcal{O}\left(N^{2.5}\right)$, which makes it the fastest known deterministic sequential matching algorithm.

The Hopcroft-Karp algorithm, however, can only compute the maximum matching. It still needs to be generalized to generate the maximum $f$-matching. For a sample of the RBG formulation of the considered multi-carrier multi-access channel, say $\mathcal{G}\left(\mathcal{U}\cup\mathcal{S},\mathcal{E}\right)$, a new bipartite graph $\mathcal{G}(\mathcal{\widetilde{U}}\cup\mathcal{S},\widetilde{\mathcal{E}})$ will then be constructed as follows: For the user $u_{m}\in\mathcal{U}$, let $\mathcal{X}_{u_{m}}$ be a set of $K_{m}$ elements such that $\mathcal{X}_{u_{m}}$ and $\mathcal{X}_{u_{m'}}$ are disjoint if $m\neq m'$, i.e., expand the vertex $u_{m}$ to a set of $K_{m}$ vertices. Let
\begin{equation}\label{eq:vertex_extension}
	\widetilde{\mathcal{U}}=\bigcup_{u_{m}\in\mathcal{U}}\mathcal{X}_{u_{m}},
\end{equation}
then connect each element in $\mathcal{X}_{u_{m}}$ to the vertex $s_{n}\in\mathcal{S}$ whenever $u_{m}$ and $s_{n}$ are adjacent in $\mathcal{G}\left(\mathcal{U}\cup\mathcal{S},\mathcal{E}\right)$.
The induced new edge set is denoted by $\widetilde{\mathcal{E}}$.

\medskip
\begin{lem}\label{lem:perfect_f_matching}
The bipartite graph $\mathcal{G}(\mathcal{\widetilde{U}}\cup\mathcal{S},\widetilde{\mathcal{E}})$ has a perfect matching if and only if $\mathcal{G}\left(\mathcal{U}\cup\mathcal{S},\mathcal{E}\right)$ has a perfect $f$-matching.
\end{lem}
\begin{IEEEproof}
See Appendix \ref{app:perfect_f_matching}.
\end{IEEEproof}
\medskip

From Lemma \ref{lem:perfect_f_matching}, we only need to extend the user vertices as in Eq. \eqref{eq:vertex_extension}. The maximum $f$-matching can then be generated by using the Hopcroft-Karp algorithm. However, the maximum $f$-matching method also requires Eq. \eqref{eq:opt_cond_fmatching} to hold for achieving the optimality of the framework. Moreover, the Hopcroft-Karp algorithm will always saturate the user vertices in sequence, which is not fair for the users in subchannel allocation. Therefore, the vertices in the extended set $\widetilde{\mathcal{U}}$ should be grouped and randomly rotated with the following rules: The vertex set $\widetilde{\mathcal{U}}$ will be divided into $K_{(1)}$ subsets, where the $i$th subset is denoted by $\widetilde{\mathcal{U}}_{i},\,i=1,\ldots,K_{(1)}$. All of the vertices in the set $\mathcal{X}_{u_{m}}$ will be distributed uniformly into these subsets. Thus, the subset $\widetilde{\mathcal{U}}_{i}$ has $\left\lceil\frac{K_{m}}{K_{(1)}}\right\rceil$ vertices from the set $\mathcal{X}_{u_{m}}$ for any $u_{m}\in\mathcal{U}$. Clearly, $\widetilde{\mathcal{U}}_{i}\cap\widetilde{\mathcal{U}}_{i'}=\emptyset,\,\forall i'\neq i$, and $\bigcup_{i=1}^{K_{(1)}}\widetilde{\mathcal{U}}_{i}=\widetilde{\mathcal{U}}$. In the ideal case, all of the subsets $\widetilde{\mathcal{U}}_{i}$ have the identical number of elements from the same set $\mathcal{X}_{u_{m}}$.
To guarantee the fairness among all of the users, the family of subsets $\widetilde{\mathcal{U}}_{i},\,i=1,\ldots,K_{(1)}$ will be randomly rotated, and all of the vertices in each subset $\widetilde{\mathcal{U}}_{i}$ will also be randomly rotated.

With the development of multi-core processors, parallel algorithms, which can execute on multiple processors simultaneously, have attracted much attention from both industry and academia. In order to fulfill the stringent computation time requirement of the next generation communication systems, a parallel implementation of the Hopcroft-Karp algorithm will also be applied in the considered multi-carrier multi-access channel. Hence, the proposed algorithm will be referred to as the \emph{parallel vertices extension \& random rotation based Hopcroft-Karp} (PVER\textsuperscript{2}HK) algorithm.

\begin{algorithm}[t]
\begin{algorithmic}[1]
	\STATE Input $\mathcal{G}\left(\mathcal{U}\cup\mathcal{S},\mathcal{E}\right)$ and let $l^{*}\leftarrow2\log^{\eta}N+1$.
	\STATE Expand the bipartite graph $\mathcal{G}\left(\mathcal{U}\cup\mathcal{S},\mathcal{E}\right)$ to $\widetilde{\mathcal{G}}(\widetilde{\mathcal{U}}\cup\mathcal{S},\widetilde{\mathcal{E}})$.
	\STATE Generate $(\widetilde{\mathcal{U}}_{i})_{i=1}^{K_{(1)}}$ from $\widetilde{\mathcal{U}}$.
	\STATE Randomly rotate the vertex sets $(\widetilde{\mathcal{U}}_{i})_{i=1}^{K_{(1)}}$ and the elements in them.
	\STATE Let $\widetilde{\mathcal{M}}\leftarrow\emptyset$ and $l\leftarrow0$.
	\REPEAT
		\STATE $\mathcal{G}_{\mathrm{d}}\leftarrow\mathrm{D}(\widetilde{\mathcal{G}},\widetilde{\mathcal{M}})$: Construct a digraph $\mathcal{G}_{\mathrm{d}}$ from $\widetilde{\mathcal{G}}$ with $\widetilde{\mathcal{M}}$, and define $\widetilde{\mathcal{U}}'$ and $\mathcal{S}'$ as the sets of free vertices in $\widetilde{\mathcal{U}}$ and $\mathcal{S}$, respectively.
		\STATE
$\\left(\\mathcal{L},l\\right)\\leftarrow\\mathrm{PBFS}(\\widetilde{\\mathcal{U}}',\\mathcal{S}',\\mathcal{G}_{\\mathrm{d}},l^{*})$: Execute a parallel breadth-first searching to find the $l$-layer graph $\\mathcal{L}$ with $l\\eta$.}\n\t\\STATE Map $\\widetilde{\\mathcal{M}}$ to $\\mathcal{M}_{f}^{\\mathrm{m}}$ by applying the reversion of the vertex expansion and random rotation process.\n\t\\STATE Construct $\\left(\\mathcal{S}_{m}\\right)_{m=1}^{M}$ from $\\mathcal{M}_{f}^{\\mathrm{m}}$, and compute $\\mathcal{S}_{\\mathrm{alloc}}^{\\mathrm{c}}=\\mathcal{S}\\setminus\\bigcup_{u_{m}\\in\\mathcal{U}}\\mathcal{S}_{m}$.\n\t\\STATE $\\left(\\mathcal{S}_{m}\\right)_{m=1}^{M}\\leftarrow\\mathrm{RA}\\left(\\mathcal{S}_{\\mathrm{alloc}}^{\\mathrm{c}}\\right)$: Randomly allocate $\\widetilde{K}_{m}-k_{m}$ outage subchannels in $\\mathcal{S}_{\\mathrm{alloc}}^{\\mathrm{c}}$ to the user $u_{m}$.\n\t\\STATE Output subchannel allocation results $\\left(\\mathcal{S}_{m}\\right)_{m=1}^{M}$, then stop.\n\\end{algorithmic}\n\\caption{PVER\\textsuperscript{2}HK}\\label{alg:PVER2HK}\n\\end{algorithm}\n\nIn \\cite{Karpinski1998}, a parallel implementation of Hopcroft-Karp algorithm is studied, which can find the maximum matching with a poly-logarithmic complexity within a given approximation error. A matching $\\mathcal{M}$ is said to approximate a maximum matching $\\mathcal{M}^{\\mathrm{m}}$ with an approximation factor $\\epsilon$ if and only if\n\\begin{equation}\n\t\\left|\\mathcal{M}\\right|\\geq\\left(1-\\epsilon\\right)\\left|\\mathcal{M}^{\\mathrm{m}}\\right|.\n\\end{equation}\nThe detailed description of PVER\\textsuperscript{2}HK is listed in Algorithm \\ref{alg:PVER2HK}. 
For the operation $\mathrm{D}(\widetilde{\mathcal{G}},\mathcal{M})$, we orient all the matching edges in $\mathcal{M}$ from their ends in $\widetilde{\mathcal{U}}$ to the ends in $\mathcal{S}$, while the other edges have the opposite direction, i.e., from $\mathcal{S}$ to $\widetilde{\mathcal{U}}$. The operation $\mathrm{PBFS}(\widetilde{\mathcal{U}},\mathcal{S},\mathcal{G}_{\mathrm{d}},l^{*})$ will start at all the vertices in $\widetilde{\mathcal{U}}$ and search along the directed edges in $\mathcal{G}_{\mathrm{d}}$, and end when some of them first hit the vertices in $\mathcal{S}$. Then, the layer graph $\mathcal{L}$ can be generated with $l<l^{*}$.

Define $Z_{mn}$ as the mutual information $I_{mn}$ conditioned on the outage states of the allocated subchannels, and let
\begin{equation}
	Y_{K_{m}}=K_{m}R_{\mathrm{s}}-\sum_{n=1}^{K_{m}}Z_{mn},
\end{equation}
so that the conditional outage probability can be written as
\begin{equation}
\begin{aligned}
	p_{m}^{\mathrm{cop}}\left(R_{m}\right) & =\Pr\left\{\sum_{n=1}^{K_{m}}Z_{mn}<K_{m}R_{\mathrm{s}}\right\}\\
	& =\Pr\left\{Y_{K_{m}}>0\right\}.
\end{aligned}
\end{equation}

According to the considered condition, the elements of $\left\{h_{mn}\right\}_{n=1}^{K_{m}}$ are independent and identically distributed as $\mathcal{CN}\left(0,1\right)$. Let $1\leq n_{1}\leq k_{m}$ and $k_{m}+1\leq n_{2}\leq K_{m}$; the cumulant-generating function of $Y_{K_{m}}$ is then given by
\begin{equation}
\begin{aligned}
	& \Phi_{K_{m}}\left(\lambda\right)=\ln\mathsf{E}\left\{e^{\lambda Y_{K_{m}}}\right\}=K_{m}R_{\mathrm{s}}\lambda\\
	& +\ln\left(\mathsf{E}\left\{e^{-\lambda Z_{mn_{2}}}\right\}\right)^{\beta_{m}K_{m}}+\ln\left(\mathsf{E}\left\{e^{-\lambda Z_{mn_{1}}}\right\}\right)^{\left(1-\beta_{m}\right)K_{m}}\\
	& =K_{m}\left[R_{\mathrm{s}}\lambda+\beta_{m}\ln\int_0^{R_{\mathrm{s}}}e^{-\lambda z}dF_{Z_{mn_{2}}}\left(z\right)\right.\\
	& \left.+\left(1-\beta_{m}\right)\ln\int_{R_{\mathrm{s}}}^{\infty}e^{-\lambda z}dF_{Z_{mn_{1}}}\left(z\right)\right]\\
	& =K_{m}\left[\beta_{m}\ln\int_0^{R_{\mathrm{s}}}\frac{N_{\mathrm{c}}e^{-\lambda z}}{p_\mathrm{s}\gamma}\exp\left(-\frac{e^{N_{\mathrm{c}}z}-1}{\gamma}+N_{\mathrm{c}}z\right)dz\right.\\
	&
\\left.+\\left(1-\\beta_{m}\\right)\\ln\\int_{R_{\\mathrm{s}}}^{\\infty}\\frac{N_{\\mathrm{c}}e^{-\\lambda z}}{q_{\\mathrm{s}}\\gamma}\\exp\\left(-\\frac{e^{N_{\\mathrm{c}}z}-1}{\\gamma}+N_{\\mathrm{c}}z\\right)dz\\right]\\\\\n &+K_{m}R_{\\mathrm{s}}\\lambda\\\\\n & =K_{m}\\left[R_{\\mathrm{s}}\\lambda+\\beta_{m}\\ln\\frac{e^{\\frac{1}{\\gamma}}}{p_\\mathrm{s}\\gamma^{\\frac{\\lambda}{N_{\\mathrm{c}}}}}+\\beta_{m}\\ln\\int_{\\frac{1}{\\gamma}}^{\\frac{e^{R_\\mathrm{c}}}{\\gamma}}t^{-\\frac{\\lambda}{N_{\\mathrm{c}}}}e^{-t}dt\\right.\\\\\n &\\left.+\\left(1-\\beta_{m}\\right)\\ln\\frac{e^{\\frac{1}{\\gamma}}}{q_\\mathrm{s}\\gamma^{\\frac{\\lambda}{N_{\\mathrm{c}}}}}+\\left(1-\\beta_{m}\\right)\\ln\\int_{\\frac{e^{R_{\\mathrm{c}}}}{\\gamma }}^{\\infty }t^{-\\frac{\\lambda}{N_{\\mathrm{c}}}}e^{-t}dt\\right]\n\\end{aligned}\n\\end{equation}\n\\begin{equation*}\n\\begin{aligned}\n\t& =K_{m}\\left[\\left(R_{\\mathrm{c}}-\\ln\\gamma\\right)\\frac{\\lambda}{N_{\\mathrm{c}}}+\\frac{1}{\\gamma}-\\ln p_{\\mathrm{s}}^{\\beta_{m}}q_{\\mathrm{s}}^{1-\\beta_{m}}\\right.\\\\\n\t& +\\beta_{m}\\ln\\left(\\Gamma\\left(1-\\frac{\\lambda}{N_{\\mathrm{c}}},\\frac{1}{\\gamma}\\right)-\\Gamma\\left(1-\\frac{\\lambda}{N_{\\mathrm{c}}},\\frac{e^{R_{\\mathrm{c}}}}{\\gamma}\\right)\\right)\\\\\n\t& \\left.+\\left(1-\\beta_{m}\\right)\\ln\\Gamma\\left(1-\\frac{\\lambda}{N_{\\mathrm{c}}},\\frac{e^{R_{\\mathrm{c}}}}{\\gamma}\\right)\\right],\n\\end{aligned}\n\\end{equation*}\nwhere $\\beta_{m}=1-\\frac{k_{m}}{K_{m}}$. 
Therefore, we have\n\\begin{equation}\n\\begin{aligned}\n\t\\Lambda\\left(\\lambda\\right)=&\\lim_{K_{m}\\rightarrow\\infty}\\frac{1}{K_{m}}\\Phi_{K_{m}}\\left(\\lambda\\right)\\\\\n\t= & \\left(R_{\\mathrm{c}}-\\ln\\gamma\\right)\\lambda+\\frac{1}{\\gamma}-\\ln p_{\\mathrm{s}}^{\\beta_{m}}q_{\\mathrm{s}}^{1-\\beta_{m}}\\\\\n\t& +\\beta_{m}\\ln\\left(\\Gamma\\left(1-\\lambda,\\frac{1}{\\gamma}\\right)-\\Gamma\\left(1-\\lambda,\\frac{e^{R_{\\mathrm{c}}}}{\\gamma}\\right)\\right)\\\\\n\t& +\\left(1-\\beta_{m}\\right)\\ln\\Gamma\\left(1-\\lambda,\\frac{e^{R_{\\mathrm{c}}}}{\\gamma}\\right).\n\\end{aligned}\n\\end{equation}\nConsidering the relationship between the cumulant-generating function and the characteristic function, the characteristic function of $Y_{K_{m}}$ can then be given by\n\\begin{equation}\n\t\\Phi_{K_{m}}\\left(i\\lambda\\right)=K_{m}\\Lambda\\left(i\\lambda\\right).\n\\end{equation}\n\nDefine $G\\left(y\\right)=\\Pr\\left\\{Y_{K_{m}}>y\\right\\}$, then according to L\\'evy's theorem \\cite{Shiryaev1996}, we have\n\\begin{equation}\n\t\\begin{aligned}\n\t\tG\\left(y\\right) & =\\frac{1}{2\\pi i}\\int_{-\\infty}^{+\\infty}\\frac{1}{\\xi}\\exp\\left(K_{m}\\Lambda\\left(i\\xi\\right)-i\\xi y\\right)d\\xi\\\\\n\t\t& =\\frac{1}{2\\pi i}\\int_{\\lambda-i\\infty}^{\\lambda+i\\infty}\\frac{1}{z}\\exp\\left(K_{m}\\Lambda\\left(z\\right)-zy\\right)dz,\n\t\\end{aligned}\n\\end{equation}\nwhere $z=\\lambda+i\\xi$, and $\\lambda$ is chosen from the convergence region of this integral. 
According to the saddle-point approximation method \\cite{Butler2007}, if we let $z^{*}$ with $\\Re\\left(z^{*}\\right)=\\lambda^{*}$ be a solution of the saddle-point equation\n\\begin{equation}\n\t\\Lambda'\\left(z\\right)=\\frac{y}{K_{m}},\n\\end{equation}\nthen\n\\begin{equation}\n\t\\begin{aligned}\n\t\t& \\frac{1}{2\\pi i}\\int_{\\lambda^{*}-i\\infty}^{\\lambda^{*}+i\\infty}\\frac{1}{z}\\exp\\left(K_{m}\\Lambda\\left(z\\right)-zy\\right)dz\\lesssim\\\\\n\t\t& \\frac{1}{\\sqrt{2\\pi K_{m}\\Lambda''\\left(\\lambda^{*}\\right)}\\lambda^{*}}\\exp\\left(K_{m}\\Lambda\\left(z^{*}\\right)-z^{*}y\\right).\n\t\\end{aligned}\n\\end{equation}\nSince $G\\left(y\\right)$ is a real function, we can always choose a real $z^{*}$ only if the cumulant-generating function exists. Moreover, $\\frac{\\lambda y}{K_{m}}-\\Lambda\\left(\\lambda\\right)$ is a conjugate function when $\\lambda\\in\\mathbb{R}$, then $\\lambda^{*}$ must be unique. Therefore, the exponentially tight upper bound of the conditional outage probability $p_{m}^{\\mathrm{cop}}\\left(R_{m}\\left|\\mathcal{D}_{K_{m}k_{m}}\\right.\\right)$ is the tail distribution of $G\\left(y\\right)$ with $y>\\mathsf{E}\\left\\{Y_{K_{m}}\\right\\}$, which is given by\n\\begin{equation}\\label{eq:saddle-point}\n\\begin{aligned}\n\t& p_{m}^{\\mathrm{cop}}\\left(R_{m}\\left|\\mathcal{D}_{K_{m}k_{m}}\\right.\\right)=G\\left(0\\right)\\\\\n\t\\lesssim & \\frac{1}{\\sqrt{2\\pi K_{m}\\sigma^{2}}\\lambda^{*}}\\exp\\left(K_{m}\\Lambda\\left(\\lambda^{*}\\right)\\right)=p_{m}^{\\mathrm{upper}}\\left(R_{m}\\left|\\mathcal{D}_{K_{m}k_{m}}\\right.\\right),\n\\end{aligned}\n\\end{equation}\nwhere $\\lambda^{*}$ is the solution of $\\Lambda'\\left(\\lambda\\right)=0$, and $\\sigma^{2}=\\Lambda''\\left(\\lambda^{*}\\right)$. 
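As a sanity check on this type of saddle-point bound, the same leading-order recipe $\exp(K\Lambda(\lambda^*)-\lambda^* y)/(\lambda^*\sqrt{2\pi K\Lambda''(\lambda^*)})$ can be evaluated in a toy case where the exact tail is known in closed form: a sum of i.i.d. unit-rate exponentials, whose tail is the Erlang survival function. This is an illustrative stand-in for $Y_{K_m}$, not the paper's channel model; the function names are ours.

```python
import math

def saddle_point_tail(K, y):
    """Leading-order saddle-point estimate of P(S_K > y) for S_K a sum of
    K i.i.d. Exp(1) variables.

    Per-summand CGF: Lambda(l) = -ln(1 - l), Lambda'(l) = 1/(1 - l),
    Lambda''(l) = 1/(1 - l)^2, so the saddle point solving
    Lambda'(l) = y/K is explicit: l* = 1 - K/y (right tail, y > K)."""
    lam = 1.0 - K / y
    sigma2 = 1.0 / (1.0 - lam) ** 2            # Lambda''(lam)
    expo = K * (-math.log(1.0 - lam)) - lam * y
    return math.exp(expo) / (lam * math.sqrt(2.0 * math.pi * K * sigma2))

def erlang_tail(K, y):
    """Exact tail: P(S_K > y) = e^{-y} * sum_{i=0}^{K-1} y^i / i!."""
    return math.exp(-y) * sum(y ** i / math.factorial(i) for i in range(K))
```

For $K=10$, $y=20$ the estimate sits roughly 20% above the exact tail, illustrating both the upper-bound direction and the exponential tightness claimed for Eq. \eqref{eq:saddle-point}.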
Clearly, we have\n\\begin{equation}\\label{eq:CGF_derivative}\n\\begin{aligned}\n\t\\Lambda'\\left(\\lambda\\right)= & \\left(R_{\\mathrm{c}}-\\ln\\gamma\\right)+\\left(1-\\beta_{m}\\right)\\frac{\\Gamma'\\left(1-\\lambda,\\frac{e^{R_{\\mathrm{c}}}}{\\gamma}\\right)}{\\Gamma\\left(1-\\lambda,\\frac{e^{R_{\\mathrm{c}}}}{\\gamma}\\right)}\\\\\n\t& +\\beta_{m}\\frac{\\Gamma'\\left(1-\\lambda,\\frac{1}{\\gamma}\\right)-\\Gamma'\\left(1-\\lambda,\\frac{e^{R_{\\mathrm{c}}}}{\\gamma}\\right)}{\\Gamma\\left(1-\\lambda,\\frac{1}{\\gamma}\\right)-\\Gamma\\left(1-\\lambda,\\frac{e^{R_{\\mathrm{c}}}}{\\gamma}\\right)}=0.\n\\end{aligned}\n\\end{equation}\nAccording to the properties of Gamma function and Meijer's $G$-function, we have\n\\begin{equation}\\label{eq:gamma_derivative}\n\\begin{aligned}\n\t\\Gamma'\\left(1-\\lambda,z\\right)= & -\\Gamma\\left(1-\\lambda,z\\right)\\ln z\\\\\n\t& -G_{2,3}^{3,0}\\left(z\\left|\n\t\\begin{array}{ccc}\n\t\t1, & 1, & -\\\\\n\t\t0, & 0, & 1-\\lambda\n\t\\end{array}\\right.\n\t\\right).\n\\end{aligned}\n\\end{equation}\nTherefore, Eq. \\eqref{eq:exponent_equation} can be obtained by plugging Eq. \\eqref{eq:gamma_derivative} into Eq. \\eqref{eq:CGF_derivative}. 
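The order-derivative $\Gamma'(1-\lambda,z)=\partial_a\Gamma(a,z)\big|_{a=1-\lambda}$ used above can be checked numerically without the Meijer-$G$ machinery, by differentiating under the integral sign. The helper names below are ours; a composite Simpson rule on a truncated domain suffices since the integrand decays like $e^{-t}$.

```python
import math

def _simpson(f, a, b, n=20000):
    """Composite Simpson rule on [a, b] (n is forced even)."""
    if n % 2:
        n += 1
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

def upper_inc_gamma(a, z, upper=60.0):
    """Gamma(a, z) = int_z^inf t^{a-1} e^{-t} dt, truncated at `upper`
    (the tail beyond 60 is ~e^{-60}, negligible at double precision)."""
    return _simpson(lambda t: t ** (a - 1.0) * math.exp(-t), z, upper)

def d_order_inc_gamma(a, z, upper=60.0):
    """d/da Gamma(a, z): differentiating under the integral sign simply
    inserts a factor ln(t) into the integrand."""
    return _simpson(lambda t: math.log(t) * t ** (a - 1.0) * math.exp(-t),
                    z, upper)
```

A central finite difference in the order $a$ agrees with the under-the-integral derivative to high accuracy, which is the identity the Meijer-$G$ representation in Eq. \eqref{eq:gamma_derivative} encodes in closed form.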
For $\\sigma^{2}$, we have\n\\begin{equation}\n\\begin{aligned}\\label{eq:CGF_parameter}\n\t& \\sigma^{2}=\\Lambda''\\left(\\lambda^{*}\\right)\\\\\n\t=&\\left(1-\\beta_{m}\\right)\\left[\\frac{\\Gamma''\\left(1-\\lambda^{*},\\frac{e^{R_{\\mathrm{c}}}}{\\gamma}\\right)}{\\Gamma\\left(1-\\lambda^{*},\\frac{e^{R_{\\mathrm{c}}}}{\\gamma}\\right)}-\\left(\\frac{\\Gamma'\\left(1-\\lambda^{*},\\frac{e^{R_{\\mathrm{c}}}}{\\gamma}\\right)}{\\Gamma\\left(1-\\lambda^{*},\\frac{e^{R_{\\mathrm{c}}}}{\\gamma}\\right)}\\right)^{2}\\right]\\\\\n\t& +\\beta_{m}\\frac{\\Gamma''\\left(1-\\lambda^{*},\\frac{1}{\\gamma}\\right)-\\Gamma''\\left(1-\\lambda^{*},\\frac{e^{R_{\\mathrm{c}}}}{\\gamma}\\right)}{\\Gamma\\left(1-\\lambda^{*},\\frac{1}{\\gamma}\\right)-\\Gamma\\left(1-\\lambda^{*},\\frac{e^{R_{\\mathrm{c}}}}{\\gamma}\\right)}\\\\\n\t& -\\beta_{m}\\left(\\frac{\\Gamma'\\left(1-\\lambda^{*},\\frac{1}{\\gamma}\\right)-\\Gamma'\\left(1-\\lambda^{*},\\frac{e^{R_{\\mathrm{c}}}}{\\gamma}\\right)}{\\Gamma\\left(1-\\lambda^{*},\\frac{1}{\\gamma}\\right)-\\Gamma\\left(1-\\lambda^{*},\\frac{e^{R_{\\mathrm{c}}}}{\\gamma}\\right)}\\right)^{2}.\n\\end{aligned}\n\\end{equation}\nAccording to the properties of Meijer's $G$-function, we have\n\\begin{equation}\\label{eq:Meijer_derivative}\n\\begin{aligned}\n\t& \\frac{\\partial}{\\partial x}G_{2,3}^{3,0}\\left(z\\left|\n\t\\begin{array}{ccc}\n\t\t1, & 1, & - \\\\\n\t\t0, & 0, & 1-\\lambda^{*}\\\\\n\t\\end{array}\\right.\\right)\\\\\n\t= & G_{2,3}^{3,0}\\left(z\\left|\n\t\\begin{array}{ccc}\n\t\t1, & 1, & - \\\\\n\t\t0, & 0, & 1-\\lambda^{*}\\\\\n\t\\end{array}\\right.\\right)\\ln z\\\\\n\t&+2G_{3,4}^{4,0}\\left(z\\left|\n\t\\begin{array}{cccc}\n\t\t1, & 1, & 1, & -\\\\\n\t\t0, & 0, & 0, & 1-\\lambda^{*}\\\\\n\t\\end{array}\\right.\\right)\n\\end{aligned}\n\\end{equation}\nTherefore, Eq. \\eqref{eq:exponent_parameter} can be obtained by plugging Eq. \\eqref{eq:Meijer_derivative} into Eq. 
\\eqref{eq:CGF_parameter}.\n\nFor the tail distribution with $y<\\mathsf{E}\\left\\{Y_{K_{m}}\\right\\}$, a similar result can be obtained by applying the same process. Therefore, Theorem \\ref{thm:con_outage_exponent} has been established.\n\n\n\\section{Proof of Lemma \\ref{lem:f_matching}}\\label{app:f_matching}\n\n($\\Rightarrow$). Suppose $\\mathcal{G}\\left(\\mathcal{U}\\cup\\mathcal{S},\\mathcal{E}\\right)$ has a maximum $f$-matching $\\mathcal{M}_{f}^{\\mathrm{m}}$, which cannot saturate the vertex $u_{m}$. Then, there must be conflicts between $u_{m}$ and other vertices in $\\mathcal{U}$. Therefore, we have a subset $\\mathcal{X}\\subseteq\\mathcal{U}$ with $u_{m}\\in\\mathcal{X}$ satisfying\n\\begin{equation}\n\t\\sum_{u_{i}\\in\\mathcal{X}}f\\left(u_{i}\\right)=\\sum_{u_{i}\\in\\mathcal{X}}K_{i}>\\sum_{s_{n}\\in\\mathcal{N}\\left(\\mathcal{X}\\right)}f\\left(s_{n}\\right).\n\\end{equation}\n\n($\\Leftarrow$). Orient all edges of $\\mathcal{G}\\left(\\mathcal{U}\\cup\\mathcal{S},\\mathcal{E}\\right)$ from $\\mathcal{U}$ to $\\mathcal{S}$. Add a source $\\theta$ joined to all vertices in $\\mathcal{U}$ and a sink $\\zeta$ to which every vertex in $\\mathcal{S}$ is joined. The induced directed graph is denoted by $\\mathcal{G}_{\\mathrm{d}}$. Assign capacities to all of the edges as follows:\n\\begin{equation}\n\t\\left\\{\\begin{aligned}\n\t\t& c\\left(e\\right)=f\\left(u_{i}\\right), & \\textrm{if }e\\textrm{ is from }\\theta\\textrm{ to }u_{i}\\in\\mathcal{U};\\\\\n\t\t& c\\left(e\\right)=1, & \\textrm{on all the other edges}.\n\t\\end{aligned}\\right.\n\\end{equation}\nChoose a subset $\\mathcal{X}$ of $\\mathcal{U}$ with $u_{m}\\in\\mathcal{X}$ such that Eq. \\eqref{eq:f_matching_cond} holds. Let $\\mathcal{C}=\\mathcal{X}\\cup\\left\\{\\theta\\right\\}$, which is clearly a cut set separating $\\theta$ from $\\zeta$. 
The capacity of $\\mathcal{C}$ is then given by\n\\begin{equation}\n\t\\sum_{u_{i}\\in\\mathcal{U}-\\mathcal{X}}f\\left(u_{i}\\right)+\\sum_{s_{n}\\in\\mathcal{N}\\left(\\mathcal{X}\\right)}f\\left(s_{n}\\right)<\\sum_{u_{i}\\in\\mathcal{U}}f\\left(u_{i}\\right).\n\\end{equation}\nAccording to max-flow min-cut theorem in Lemma \\ref{lem:max_flow_min_cut}, the maximum flow $\\mathcal{F}^{*}$ from $\\theta$ to $\\zeta$ of $\\mathcal{G}_{\\mathrm{d}}$ must be smaller than $\\sum_{u_{i}\\in\\mathcal{U}}f\\left(u_{i}\\right)$. Moreover, since $u_{m}\\in\\mathcal{X}\\subseteq\\mathcal{C}$, the maximum flow $\\mathcal{F}^{*}$ cannot achieve the capacities of all the edges from $\\theta$ to vertices in $\\mathcal{X}$. Thus, greater flow rate can be allocated to the edge from $\\theta$ to $u_{i}$ in $\\mathcal{U}$ than the edge from $\\theta$ to $u_{m}$ when they are conflicts. Therefore, the generated maximum $f$-matching cannot saturate $u_{m}$.\n\n\n\\section{Proof of Lemma \\ref{lem:matching_edges}}\\label{app:matching_edges}\n\nLet $\\mathcal{G}(\\mathcal{U}\\cup\\mathcal{S},\\mathcal{E})$ satisfy the conditions in this lemma, and we assume there is an edge set $\\mathcal{E}_{0}$ that contains at least one maximum $f$-matching which cannot saturate $u_{m}$. According to Lemma \\ref{lem:f_matching}, there must be a subset $\\mathcal{X}\\subseteq\\mathcal{U}$ with the maximum number of elements, which satisfies\n\\begin{equation}\n \\left|\\mathcal{N}\\left(\\mathcal{X}\\right)\\right|\\left(M-1\\right)N+K_{i}$ with $K_{\\left(i\\right)}L$, since $M\\geq2$ in the considered multi-carrier multi-access channel. 
Therefore, this case contributes only higher order terms to the outage probability.\n\nAccording to the first part of Lemma \\ref{lem:matching_edges}, there must be $\\kappa$ outage coherence bandwidths for the user $u_{m}$, with $\\kappa=L-\\left\\lceil\\frac{\\widetilde{K}_{m}}{N_{\\mathrm{c}}}\\right\\rceil+1,\\ldots,L$, if this user cannot be saturated by the generated maximum $f$-matching. Recalling the definition of maximum $f$-matching, all of the RBs in the $L-\\kappa$ coherence bandwidths must be allocated to the user $u_{m}$. According to Theorem \\ref{thm:con_outage_exponent}, therefore, the outage probability of the user $u_{m}$ for a given $\\kappa$ is given by\n\\begin{equation}\n\\begin{aligned}\n\t& \\binom{L}{\\kappa}p_{m}^{\\mathrm{cop}}\\left(R_{m}\\left|\\mathcal{D}_{\\widetilde{K}_{m},L-\\kappa}\\right.\\right)p_{\\mathrm{s}}^{\\kappa}q_{\\mathrm{s}}^{ML-\\kappa}\\\\\n\t= & \\frac{L!p_{m}^{\\mathrm{cop}}\\left(R_{m}\\left|\\mathcal{D}_{\\widetilde{K}_{m},L-\\kappa}\\right.\\right)p_{\\mathrm{s}}^{\\kappa}}{\\kappa!\\left(L-\\kappa\\right)!}+\\mathsf{O}\\left(p_{\\mathrm{s}}^{L}\\right).\n\\end{aligned}\n\\end{equation}\nHence, according to the law of total probability, the first order approximation of the outage probability of the user $u_{m}$ is given by Eq. \\eqref{eq:outage_RB_high}.\n\\end{IEEEproof}\n\n\n\\section{Proof of Theorem \\ref{thm:outage_RB_LowSNR}}\\label{app:outage_RB_LowSNR}\nConsider the low SNR regime such that the sample of $\\mathscr{G}\\left(\\mathcal{K}_{MN};\\mathsf{P}\\right)$ only has a few edges. 
For a user $u_{m}$ in the set $\\mathcal{U}$, there are two cases that make $u_{m}$ unsaturated: 1) There are no $K_{m}$ non-outage RBs in $\\mathcal{S}$ for $u_{m}$; and 2) There are other users competing for the same RB with $u_{m}$ and it is not saturated by the maximum $f$-matching.\n\nIn the first case, the occurrence probability of this event is given by\n\\begin{equation}\n\\begin{aligned}\n\t& \\sum_{\\kappa=L-K_{m}+1}^{L}\\binom{L}{\\kappa}p_{m}^{\\mathrm{cop}}\\left(R_{m}\\left|\\mathcal{D}_{K_{m},L-\\kappa}\\right.\\right)p_{\\mathrm{s}}^{\\kappa}q_{\\mathrm{s}}^{L-\\kappa}\\\\\n\t= & p_{\\mathrm{s}}^{L}+Lp_{m}^{\\mathrm{cop}}\\left(R_{m}\\left|\\mathcal{D}_{K_{m},1}\\right.\\right)p_{\\mathrm{s}}^{L-1}q_{\\mathrm{s}}+\\mathsf{O}\\left(q_{\\mathrm{s}}\\right),\n\\end{aligned}\n\\end{equation}\nfor $\\gamma\\rightarrow0$. In the second case, there will be at least two edges in the bipartite graph $\\mathcal{G}\\left(\\mathcal{U}\\cup\\mathcal{S},\\mathcal{E}\\right)$. One is $u_{m}s_{n}$, and the other one is $u_{m'}s_{n}$. Assuming that there are only two edges in the sample of this random bipartite graph, i.e., $\\mathcal{E}=\\left\\{u_{m}s_{n},u_{m'}s_{n}\\right\\}$. In the maximum $f$-matching, $u_{m}$ or $u_{m'}$ is chosen with equal probability. The outage probability of $u_{m}$ must then have the term\n\\begin{equation}\n\t\\frac{1}{2}\\binom{M}{2}\\binom{L}{1}p_{\\mathrm{s}}^{ML-2}q_{\\mathrm{s}}^{2}=\\mathsf{O}\\left(q_{\\mathrm{s}}\\right),\\quad\\gamma\\rightarrow0.\n\\end{equation}\n\nThus, if there are more than two edges in this random bipartite graph, the outage probability of $u_{m}$ must have a factor $q_{\\mathrm{s}}^{x}$ with $x\\geq3$. Hence, Eq. \\eqref{eq:outage_RB_LowSNR} has then been established.\n\n\n\\section{Proof of Theorem \\ref{thm:outage_chunk}}\\label{app:outage_chunk}\nWe first consider the case of $K_{m}=1,\\ldots,K_{m}^{\\mathrm{th}}$. 
According to Lemma \\ref{lem:matching_edges}, if the user $u_{m}$ is not saturated by the generated maximum $f$-matching, there are at least $L-K_{m}+1$ chunks in outage state. Let $\\kappa$ be the number of outage chunks. Since the first order approximation is considered in this paper, $\\kappa$ must be in the range from $L-K_{m}+1$ to $L$. According to the proof of Lemma \\ref{lem:matching_edges}, the $L-K_{m}+1$ outage chunks are only outage for the user $u_{m}$. In this specific condition, the outage probability of the user $u_{m}$ is given by\n\\begin{equation}\n\\begin{aligned}\n\t& \\binom{L}{L-K_{m}+1}p_{m}^{\\mathrm{cop}}\\left(R_{m}\\left|\\mathcal{D}_{K_{m},K_{m}-1}\\right.\\right)p_{\\mathrm{s}}^{L-K_{m}+1}\\cdot\\\\\n\t& q_{\\mathrm{s}}^{ML-L+K_{m}-1}=\\frac{L!p_{m}^{\\mathrm{cop}}\\left(R_{m}\\left|\\mathcal{D}_{K_{m},K_{m}-1}\\right.\\right)p_{\\mathrm{s}}^{L-K_{m}+1}}{\\left(L-K_{m}+1\\right)!\\left(K_{m}-1\\right)!}\\\\\n\t& +\\mathsf{O}\\left(p_{\\mathrm{s}}^{L}\\right),\n\\end{aligned}\n\\end{equation}\nas $\\gamma\\rightarrow\\infty$, where the lowest order of $p_{\\mathrm{s}}$ is $K_{m}-1+L-K_{m}+1=L$. For $\\kappa=L-K_{m}+2,\\ldots,L$, the $\\kappa$ outage chunks must also be in outage only for the user $u_{m}$. 
If this is not true, for example, $\\Delta_{1}$ outage chunks for $u_{m_{1}}$ and $\\Delta_{2}$ outage chunks for $u_{m_{2}}$ with $\\kappa=\\Delta_{1}+\\Delta_{2}$, the approximation with the lowest order of the outage probability for the user $u_{m_{i}},\\,i=1,2$ is given by\n\\begin{equation}\n\\begin{aligned}\n\t& \\binom{L}{\\Delta_{1}}\\binom{L-\\Delta_{1}}{\\Delta_{2}}p_{m}^{\\mathrm{cop}}\\left(R_{m}\\left|\\mathcal{D}_{K_{m},L-\\Delta_{i}}\\right.\\right)p_{\\mathrm{s}}^{\\kappa}q_{\\mathrm{s}}^{ML-\\kappa}\\\\\n\t& = \\frac{L!p_{m}^{\\mathrm{cop}}\\left(R_{m}\\left|\\mathcal{D}_{K_{m},L-\\Delta_{i}}\\right.\\right)p_{\\mathrm{s}}^{\\kappa}}{\\Delta_{1}!\\Delta_{2}!\\left(L-\\Delta_{1}-\\Delta_{2}\\right)!}+\\mathsf{O}\\left(p_{\\mathrm{s}}^{L-\\Delta_{i}+\\kappa}\\right).\n\\end{aligned}\n\\end{equation}\nTherefore, the lowest order of $p_{\\mathrm{s}}$ is given by $L-\\Delta_{i}+\\kappa>L$. In other words, this situation results in a higher order term. Hence, according to the law of total probability, the first order approximation of the user $u_{m}$ is given by Eq. \\eqref{eq:outage_chunk_1}.\n\nConsider now the case of $K_{m}=K_{m}^{\\mathrm{th}}+1,\\ldots,\\widetilde{K}_{m}$. According to Lemma \\ref{lem:matching_edges}, if the user $u_{m}$ is not saturated by the generated maximum $f$-matching, there are at least $ML-M\\left(K^{\\mathrm{sum}}-1\\right)=M\\left(L-K^{\\mathrm{sum}}+1\\right)$ chunks in outage state. According to the proof of Lemma \\ref{lem:matching_edges}, the only outage user $u_{m}$ has $K_{m}-1$ non-outage chunks. 
Therefore, the first order approximation of the outage probability for the user $u_{m}$ is given by\n\\begin{equation}\n\\begin{aligned}\n\t& \\frac{\\binom{L}{K^{\\mathrm{sum}}-1}}{M}p_{m}^{\\mathrm{cop}}\\left(R_{m}\\left|\\mathcal{D}_{K_{m},K_{m}-1}\\right.\\right)p_{\\mathrm{s}}^{M\\left(L-K^{\\mathrm{sum}}+1\\right)}q_{\\mathrm{s}}^{M\\left(K^{\\mathrm{sum}}-1\\right)}\\\\\n \t& =\\frac{L!p_{m}^{\\mathrm{cop}}\\left(R_{m}\\left|\\mathcal{D}_{K_{m},K_{m}-1}\\right.\\right)p_{\\mathrm{s}}^{M\\left(L-K^{\\mathrm{sum}}+1\\right)}}{M\\left(L-K^{\\mathrm{sum}}+1\\right)!\\left(K^{\\mathrm{sum}}-1\\right)!}\\\\\n \t& +\\mathsf{O}\\left(p_{\\mathrm{s}}^{M\\left(L-K^{\\mathrm{sum}}+1\\right)+K_{m}-1}\\right),\n\\end{aligned}\n\\end{equation}\nwhen $\\gamma\\rightarrow\\infty$.\n\nIf it happens that $K_{m}=K_{m}^{\\mathrm{th}}$ and $\\left(M-1\\right)|MK_{m}^{\\mathrm{sum}}$, the outage probability of the user $u_{m}$ is clearly the summation of Eq. \\eqref{eq:outage_chunk_1} and Eq. \\eqref{eq:outage_chunk_2}.\n\n\n\\section{Proof of Lemma \\ref{lem:perfect_f_matching}}\\label{app:perfect_f_matching}\nSuppose $\\mathcal{G}(\\mathcal{\\widetilde{U}}\\cup\\mathcal{S},\\widetilde{\\mathcal{E}})$ has a perfect matching $\\widetilde{\\mathcal{M}}$. For each edge $u_{m}s_{n}\\in\\mathcal{E}$, let $l\\left(u_{m}s_{n}\\right)$ denote the number of edges of $\\widetilde{\\mathcal{M}}$ connecting $\\mathcal{X}_{u_{m}}$ and $s_{n}\\in\\mathcal{S}$. Then the number of edges of $\\widetilde{\\mathcal{M}}$ incident with $\\mathcal{X}_{u_{m}}$ is exactly $\\sum_{s_{n}\\in\\mathcal{S}}l\\left(u_{m}s_{n}\\right)$, which is also equal to $\\left|\\mathcal{X}_{u_{m}}\\right|=K_{m}$. 
Since $\\widetilde{\\mathcal{M}}$ is a perfect matching, then\n\\begin{equation}\n\t|\\widetilde{\\mathcal{M}}|=\\sum_{u_{m}\\in\\mathcal{U}}\\sum_{s_{n}\\in\\mathcal{S}}l\\left(u_{m}s_{n}\\right)=MN\n\\end{equation}\nis the maximum number of edges in any matching of $\\mathcal{G}(\\mathcal{\\widetilde{U}}\\cup\\mathcal{S},\\widetilde{\\mathcal{E}})$. This fact shows that if $\\mathcal{X}_{u_{m}}$ is seen as $u_{m}$, this is a perfect $f$-matching $\\mathcal{M}_{f}^{\\mathrm{p}}$. If this is not true, there will be more edges in $\\mathcal{M}_{f}^{\\mathrm{p}}$ than in $\\widetilde{\\mathcal{M}}$. In fact, when constructing $\\mathcal{G}(\\mathcal{\\widetilde{U}}\\cup\\mathcal{S},\\widetilde{\\mathcal{E}})$, we can dispatch one edge incident with $u_{m}$ in $\\mathcal{M}_{f}^{\\mathrm{p}}$ to one vertex in $\\mathcal{X}_{u_{m}}$. As a result, a new matching which contains more edges than $\\widetilde{\\mathcal{M}}$ can be obtained. This contradicts the fact that $\\widetilde{\\mathcal{M}}$ is a perfect matching of $\\mathcal{G}(\\mathcal{\\widetilde{U}}\\cup\\mathcal{S},\\widetilde{\\mathcal{E}})$.\n\n\n\\ifCLASSOPTIONcaptionsoff\n \\newpage\n\\fi\n\n\n\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n The goal of the\npaper is to prove local boundedness and other regularity of weak\nsolutions to the next two parabolic equations.\n\n\\[\n\\Delta u(x, t) - b(x, t) \\nabla u(x, t) - \\partial_t u(x, t)=0,\n\\quad (x, t) \\in \\Omega \\subset {\\bf R}^n \\times {\\bf R},\n\\leqno(1.1)\n\\]\n\\[\n\\begin{cases}\n\\Delta u(x, t) - b(x, t) \\nabla u(x, t) + \\nabla P(x, t) -\n\\partial_t\nu(x, t)=0, \\quad (x, t) \\in \\Omega \\subset {\\bf R}^3 \\times {\\bf R},\\\\\ndiv u = 0, \\ div b=0, \\ b(\\cdot, t) \\in L^2_{loc}.\n\\end{cases}\n\\leqno(1.2)\n\\]Here $\\Delta$ is the standard Laplacian and $b=b(x, t)$ is a given\n$L^2_{loc}$ singular vector field to be specified later. 
$\\Omega$\nis a domain.\n\nThere has been a mature theory of existence and regularity for\nequation (1.1) (see [LSU], [Lieb] e.g.). For instance when\n$b=b(x)$ and $|b| \\in L^p_{loc}({\\bf R}^n)$, $p>n$, weak solutions\nto (1.1) are locally bounded and H\\\"older continuous. This\ncondition is sharp in general. Here is an example (see [HL] p108).\nThe function $u=\\ln \\ln |x|^{-1} - \\ln \\ln R^{-1}$ is an unbounded\nweak solution of\n\\[\n\\Delta u + b \\nabla u =0\n\\]\nin the ball $B(0, R)$ in ${\\bf R}^2$, $R<1$. Here $b = \\nabla u =\n- \\frac{\\nabla |x|}{|x| \\ln |x|^{-1}}$ and hence $b \\in L^2_{loc}$\nwith $n=2$.\n\nThe first goal of the paper is to show that the simple condition\n$div b \\le 0$ will ensure that weak solutions of (1.1) are locally\nbounded when the data $b$ is almost twice as singular as before.\nThis will be made precise in Theorem 1.1 and Remark 1.1 below.\nThus one has achieved a leap in boundedness condition rather than\na marginal improvement.\n\n\nClearly a strong impetus still exists for the study of parabolic\nequations with very singular coefficients. In the study of\nnonlinear equations with gradient structure such as the\nNavier-Stokes equations and harmonic maps, highly singular\nfunctions occur naturally. So, it is very important investigate a\npossible gain of regularity in the presence of singular drift term\n$b$. This line of research has been followed in the papers [St],\n[KS], [Os], [CrZ], [ChZ], [CS] and [Se]. Under the condition $|b|\n\\in L^n({\\bf R}^n)$, Stampacchia [St] proved that bounded\nsolutions of $\\Delta u + b \\nabla u=0$ are H\\\"older continuous. In\nthe paper [CrZ], Cranston and Zhao proved that solutions to this\nequation are continuous when $b$ is in a suitable Kato class i.e.\n$\\lim_{r \\to 0} \\sup_x \\int_{|x-y| \\le r}\n\\frac{|b(y)|}{|x-y|^{n-1}} dy =0$. 
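In passing, the unboundedness example of [HL] quoted above is easy to verify numerically: with $b=\nabla u$ the equation $\Delta u + b\nabla u=0$ becomes $\Delta u + |\nabla u|^2=0$, and a finite-difference evaluation confirms that $u=\ln\ln|x|^{-1}$ satisfies it away from the origin. This is a small illustrative check (function names are ours), not part of the paper's argument.

```python
import math

def u(x, y):
    """The [HL] example u = ln ln |x|^{-1} on the punctured unit disk."""
    r = math.hypot(x, y)
    return math.log(math.log(1.0 / r))

def residual(x, y, h=1e-4):
    """Finite-difference residual of Delta u + |grad u|^2 at (x, y).

    With b = grad u, the equation Delta u + b . grad u = 0 reads
    Delta u + |grad u|^2 = 0, which u satisfies exactly for 0 < |x| < 1."""
    uxx = (u(x + h, y) - 2.0 * u(x, y) + u(x - h, y)) / h ** 2
    uyy = (u(x, y + h) - 2.0 * u(x, y) + u(x, y - h)) / h ** 2
    ux = (u(x + h, y) - u(x - h, y)) / (2.0 * h)
    uy = (u(x, y + h) - u(x, y - h)) / (2.0 * h)
    return uxx + uyy + ux ** 2 + uy ** 2
```

The residual is at the level of the discretization error even though the individual terms are of size $1/(r^2 s^2)$ with $s=\ln(1/r)$, consistent with the exact cancellation $\Delta u = -|\nabla u|^2 = -1/(r^2 s^2)$ in two dimensions.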
In the paper [KS] Kovalenko and\nSemenov proved the H\\\"older continuity of solutions to (1.1), when\n$| b|^2$ is independent of time and is sufficiently small in the\nform sense, i.e., for a sufficiently small $\\epsilon>0$,\n\\[\n\\int_{{\\bf R}^n} |b|^2(x) \\phi^2(x) dx \\le \\epsilon\n\\int_{\\mathbf{R}^n} |\\nabla \\phi|^2(x) dx, \\quad \\phi\\in\nC^\\infty_0(\\mathbf{R}^n).\n\\]It is a well known fact that form boundedness condition provides\na more general class of singular functions than corresponding\n$L^p$ class, Morrey-Campanato class and Kato class functions.\n This result was generalized in [Se] to\nequations with leading term in divergence form. In [Os], Osada\nproved, among other things, that the fundamental solution of (1.1)\nhas global Gaussian upper and lower bound when $\\bf b$ is the\nderivative of bounded functions (in distribution sense) and $div\nb=0$. Recently in the paper [LZ], H\\\"older continuity of solutions\nto (1.1) was established when $b=b(x)$, $|b|^2$ is form bounded\nand $div {b} =0$. Most recently, in [Z2], we considered (1.1) with\ntime-dependent functions $b=b(x, t)$. It was proven that weak\nsolutions to (1.1) are locally bounded provided that $div b =0$\nand for a fixed $m>1$, $|b|^m$ is form bounded. That is for any\n$\\phi \\in C^\\infty({\\bf R}^n \\times (0, \\infty) )$ with compact\nsupport in the spatial direction,\n\\[\n\\int \\int_{{\\bf R}^n} |b| ^m \\phi^2 dxdt \\le k \\int \\int_{{\\bf\nR}^n} |\\nabla \\phi|^2 dxdt\n\\]where $k$ is independent of $\\phi$. Note the key improvement\nover previous result is that the power on $b$ drops from $2$ to\nany number greater than $1$.\n\nIt is interesting to note that this class of data $b$ contains the\nvelocity function in the $3$ dimensional Navier-Stokes equations.\nAs a result we gave a different proof of the {\\it local}\nboundedness of velocity in $2$ dimensional case. 
Moreover, assuming\na {\\it local} bound on the pressure, we prove boundedness of the\nvelocity in the $3$ dimensional case.\n\nThe first goal of the paper is to treat the end point case of the\nabove condition, i.e. $m=1$. We will prove that weak solutions to\n(1.1) are locally bounded provided that $|b| [\\ln (1+ |b|)]^2$ is\nform bounded and $div b \\le 0$.\n\nMany authors have also studied the regularity property of the\nrelated heat equation $ \\Delta u + V u - u_t =0$. Here $V$ is a\nsingular potential. We refer the reader to the papers by Aizenman\nand Simon [AS], Simon [S] and the references therein. The function\n$V$ is allowed in the Kato class which is a little more singular\nthan the corresponding $L^p$ class. It remains a challenging\nproblem to push this theory to a broader class of functions.\n\n\nIn this paper we use the following definition of weak\n solutions.\n\n\n{\\it {\\bf Definition 1.1} Let $D \\subseteq {\\bf R}^n$ be a domain\nand $T \\in (0, \\infty]$. A function $u$ such that $u, |\\nabla u|\n\\in L^2_{loc}(D \\times [0, T])$ is a weak solution to (1.1) if:\nfor any $\\phi \\in C^{\\infty}_0(D \\times (-T, T))$, there holds\n\\[\n\\int^T_0\\int_D ( u \\partial_t \\phi - \\nabla u \\nabla \\phi)\n dxdt - \\int^T_0\\int_D b \\nabla u \\ \\phi \\ dxdt\n = - \\int_D u_0(x) \\phi(x, 0) dx.\n\\]}\n\n\n\n\n\n\\begin{theorem}\n\\label{th:1.1} Suppose $div b \\le 0$ in the weak sense, $b \\in\nL^2_{loc}$, and that $|b| [\\ln (1 + |b|)]^2$ is form bounded, i.e.\nthere exists $k>0$ such that for any $\\phi \\in C^\\infty({\\bf R}^n\n\\times (0, \\infty) )$ with compact support in the spatial\ndirection,\n\\[\n\\int \\int_{{\\bf R}^n} |b| [\\ln (1 + |b|)]^2 \\phi^2 dxdt \\le k \\int\n\\int_{{\\bf R}^n} |\\nabla \\phi|^2 dxdt. 
\\leqno(1.3)\n\\]\nThen weak solutions to equation (1.1) are locally bounded.\n\\end{theorem}\n\\medskip\n\n{\\it Remark 1.1.} In the special case that $b$ is independent of\ntime and $b \\in L^p({\\bf R}^n)$ with $p>n\/2$, then it is easy to\ncheck that (1.3) is satisfied. Recall that the standard theory\nessentially only allows functions in $L^p$ with $p\n> n$. The strength of the theorem comes from the fact that weak\nsolutions are locally bounded in any domain regardless of its\nvalue on the parabolic boundary. If the domain is ${\\bf R}^n\n\\times (0, \\infty)$ or if the initial Dirichlet boundary condition\nis imposed, then using a Nash type estimate, one can show that\nsolutions are locally bounded when $t>0$ as long as the\nfundamental solution is well defined. In this case one can choose\n$b$ to be as singular as any $L^2$ functions. Actually the\npresence of $b$ is totally irrelevant except for the purpose of\nmaking the integrals in the definition of a weak solution finite.\nHere is a sketch of the proof. Let u solves\n\\[\n\\begin{cases}\n \\Delta u - b \\nabla u - u_t =0, \\qquad \\text{in} \\\n \\text D \\times (0, \\infty),\\\\\nu(x, t) = 0, \\qquad (x, t) \\in \\partial D \\times (0, \\infty)\\\\\n u(x, 0) = u_0(x).\n\\end{cases}\n \\]\nLet $G(x, t; y, t)$ be the fundamental solution with initial\nDirichlet boundary condition. If $div b=0$, differentiating in\ntime shows that $\\int_D G(x, t; y, 0) dy \\le 1$. By Nash\ninequality, one has\n\\[\n\\frac{d}{dt} \\int_D G(x, t; y, t)^2 dy = - 2 \\int_D |\\nabla G(x,\nt; y, t)|^2 dy \\le - c \\frac{ \\big{(} \\int_D G(x, t; y, t)^2 dy\n\\big{)}^{1+(2\/n)}}{\\big{(} \\int_D G(x, t; y, 0) dy \\big{)}^{4\/n}}.\n\\]\nHence\n\\[\nG(x, t; y, 0) \\le c\/t^{n\/2}.\n\\]Therefore\n\\[\nu(x, t) = \\int_D G(x, t; y) u_0(y) dy\n\\] is bounded as soon as $t>0$\nand $u_0$ is in $L^1(D)$.\n\n However this does not imply local\nboundedness of weak solutions unconditionally. 
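For completeness, the passage in the sketch above from the Nash differential inequality to the on-diagonal bound is the standard integration step. Writing $E(t):=\int_D G(x,t;y,0)^2\,dy$ and using $\int_D G(x,t;y,0)\,dy\le 1$:

```latex
\frac{d}{dt}E(t) \le -\,c\,E(t)^{1+\frac{2}{n}}
\quad\Longrightarrow\quad
\frac{d}{dt}\Big(E(t)^{-\frac{2}{n}}\Big) \ge \frac{2c}{n},
\qquad\text{so}\qquad
E(t) \le \Big(\frac{2c}{n}\,t\Big)^{-\frac{n}{2}}.
```

The pointwise bound $G(x,t;y,0)\le c/t^{n/2}$ then follows from the reproducing property $G(x,t;y,0)=\int_D G(x,t;z,t/2)\,G(z,t/2;y,0)\,dz$ and Cauchy-Schwarz, assuming the analogous $L^2$ estimate for the adjoint kernel (which uses $div\, b \le 0$ in the same way).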
It would be\ninteresting to establish existence and further regularity results for\n(1.1) with the singular data in Theorem 1.1.\n\nWe should mention that in the time independent elliptic case,\nthere was already strong indication that standard regularity\ntheory can be improved in the presence of divergence free data. In\nthe important papers [FR1-2], local boundedness of the Green's\nfunction (away from the singularity) of the operator $ -\\Delta -\nb \\nabla $ with Dirichlet boundary conditions was proved, under\nthe conditions: $n=5$, $|\\nabla^2 b| \\in L^{4\/3}$, $b=0$ on the\nboundary and $div b =0$ (cf. Lemma 1.48 and Lemma 1.11 [FR1]).\nThe upshot of the result is that the bound on the Green's\nfunction is independent of the norm $|\\nabla^2 b| \\in L^{4\/3}$.\nThe proof uses essentially the fact that the Green's function\nvanishes on the boundary, so the drift term is integrated out. In\ncontrast we do not have the benefit of a zero boundary.\n\n\n\n\nIn the three dimensional case, we derive a further regularity\nresult:\n\n\\begin{corollary}\n Assume $|b| \\in L^{\\infty}([0, T], L^2({\\bf R}^3))\n \\cap L^q(\n {\\bf R}^3 \\times [0, T])$ with $q>3$ and $div b =0$. Suppose $u$ is a weak solution of (1.1) in ${\\bf R}^3\n\\times [0, T]$ with $\\int_{{\\bf R}^3} u^2(x, 0) dx < \\infty$. Then\n$u$ is locally bounded and, for almost every $t$, $u(\\cdot, t)$ is\nH\\\"older continuous.\n\\end{corollary}\n\n\\proof\nAccording to Corollary 1 in [Z2], $b$ satisfies the conditions of\nTheorem 1.1. For completeness we provide the proof here.\n\n Let us take $m=4\/3$ and\n$p=2\/m=3\/2$. 
Then, by H\\\"older's inequality,\n\\[\n\\aligned\n \\int^T_0 \\int_{{\\bf R}^3}& |b |^{4\/3} \\phi^2 dxdt \\le \\int^T_0\n\\bigg{(} \\int_{{\\bf R}^3} |b |^{mp} dx \\bigg{)}^{1\/p} \\ \\bigg{(}\n\\int_{{\\bf R}^3}\n\\phi^{2p\/(p-1)} dx \\bigg{)}^{(p-1)\/p} dt\\\\\n&=\\int^T_0 \\bigg{(} \\int_{{\\bf R}^3} |b |^2 dx \\bigg{)}^{2\/3} \\\n\\bigg{(} \\int_{{\\bf R}^3}\n\\phi^6 dx \\bigg{)}^{1\/3} dt\\\\\n&\\le \\sup_{t \\in [0, T]} \\bigg{(} \\int_{{\\bf R}^3} |b |^2(x, t) dx\n\\bigg{)}^{2\/3}\n\\ \\int^T_0 \\bigg{(} \\int_{{\\bf R}^3} \\phi^6 dx \\bigg{)}^{1\/3} dt\\\\\n&\\le C \\sup_{t \\in [0, T]} \\bigg{(} \\int_{{\\bf R}^3} |b |^2(x, t)\ndx \\bigg{)}^{2\/3} \\ \\int^T_0 \\int_{{\\bf R}^3} |\\nabla \\phi|^2 dx\ndt.\n\\endaligned\n\\]The last step is by Sobolev imbedding. This shows that condition\n(1.3) holds.\n\nHence Theorem 1.1 implies that $u$ is locally bounded. Notice the\nfact that\n\\[\n\\int_{{\\bf R}^3} u^2(x, t) dx\n\\]is non-increasing in time since $b$ is divergence free.\nBy this and Theorem 1.1, we know that $u \\in L^{\\infty}({\\bf R}^3\n\\times [t_0, T])$ for any $t_0>0$.\n\nDenote by $G_0$ the Gaussian heat kernel of the heat equation.\nThen, for $t>t_0$,\n\\[\nu(x, t) = \\int_{{\\bf R}^3} G_0(x, t; y, t_0) u(y, t_0) dy -\n\\int^t_{t_0} \\int_{{\\bf R}^3} G_0(x, t; y, s) b \\nabla u(y, s)\ndyds.\n\\]Since $b$ is divergence free, we have\n\\[\nu(x, t) = \\int_{{\\bf R}^3} G_0(x, t; y, t_0) u(y, t_0) dy +\n\\int^t_{t_0} \\int_{{\\bf R}^3} \\nabla_y G_0(x, t; y, s) b u(y, s)\ndyds.\n\\]Therefore, in the weak sense,\n\\[\n\\aligned\n \\nabla_x u(x, t) &= \\int_{{\\bf R}^3} \\nabla_x G_0(x, t; y,\nt_0) u(y, t_0) dy + \\int^t_{t_0} \\int_{{\\bf R}^3} \\nabla_x\n\\nabla_y G_0(x, t; y, s) b u(y, s) dyds\\\\\n& \\equiv I_1(x, t) + I_2(x, t).\n\\endaligned\n\\]It is well known that\n\\[\n\\nabla_x \\nabla_y G_0(x, t; y, t_0) =-\\nabla_x \\nabla_x G_0(x, t;\ny, t_0)\n\\]is a parabolic Calderon-Zygmond kernel (see [Lie] e.g.).\nHence by our assumption on $b$ 
and the fact that $u$ is bounded in\n${\\bf R}^3 \\times [t_0, T]$, the second term $I_2$ in the last\nintegral is in $L^q({\\bf R}^3 \\times [t_0, T])$, $q>3$. It follows\nthat\n\\[\n|I_2(\\cdot, t) | \\in L^q({\\bf R}^3), \\qquad q>3.\n\\]for a.e. $t$. Sobolev imbedding theorem then shows that\n $u (\\cdot, t)$ is\nH\\\"older continuous for a.e. $t$. \\qed\n\n\n\\medskip\n\n Next we turn to equation (1.2), which is the first step in\ntackling the full Navier-Stokes equations. When $b=0$, (1.2) is\njust the Stokes equations which has been studied for long time.\nOur focus is on how to allow $b$ as singular as possible while\nretaining the boundedness of weak solutions. As far as equation\n(1.2) is concerned, our result does not improve the standard\ntheory as dramatically as for equation (1.1). We have to restrict\nthe data $b$ in a suitable Kato class for (1.2). Nevertheless,\nTheorem 1.2 still generalizes the key part of the important work\n[AS], [CrZ] on the elliptic equations to the case of linearized\nNavier-Stokes system. Moreover, we even obtain gradient estimates\nfor solutions of (1.2) while only continuity was expected.\n\nAs pointed out in the papers [AS] and [Si], Kato class functions\nare quite natural objects in studying elliptic and parabolic\nequations with singular lower order terms. Roughly speaking, a\nfunction is in a Kato class with respect to an equation if its\nconvolutions with certain kernel functions are small in some\nsense. The kernel function usually is related to the fundamental\nsolution of the principal term of the equation. For instance, for\nthe equation $\\Delta u(x) + V(x) u(x) =0$ in ${\\bf R}^n$, $n \\ge\n3$, the function $V$ is in Kato class if\n\\[\n\\lim_{r \\to 0} \\sup_{x} \\int_{B(x, r)} \\frac{|V(y)|}{|x-y|^{n-2}}\ndy = 0. \\leqno(1.4)\n\\]In [AS], it is proven that weak solutions to $\\Delta u + V u=0$\nare continuous and satisfy a Harnack inequality when $V$ is in the\nabove Kato class. 
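As a quick numerical sanity check of (1.4) (our own illustration, not part of the original argument), take $n=3$ and the model potential $V(y)=|y|^{-\alpha}$ with $\alpha<2$; the worst-case integral, centered at the singularity, reduces to the radial form $\int_0^r s^{1-\alpha}\,ds = r^{2-\alpha}/(2-\alpha)$, which tends to $0$ with $r$:

```python
def kato_radial(alpha, r, steps=100000):
    """Midpoint Riemann sum of int_0^r s^(n-1) * s^(-alpha) * s^(-(n-2)) ds
    = int_0^r s^(1-alpha) ds for n = 3; up to the sphere-area constant this
    is the integral in (1.4) for V(y) = |y|^(-alpha), centered at x = 0."""
    h = r / steps
    return sum(((i + 0.5) * h) ** (1 - alpha) * h for i in range(steps))

# closed form r^(2-alpha) / (2-alpha), valid for alpha < 2
for alpha in (0.5, 1.0, 1.5):
    exact = lambda rr: rr ** (2 - alpha) / (2 - alpha)
    assert abs(kato_radial(alpha, 1.0) - exact(1.0)) < 5e-2
    # the limit in (1.4) is 0: the integral shrinks as r -> 0
    assert exact(0.01) < exact(0.1) < exact(1.0)
```

For $\alpha \ge 2$ the radial integral diverges at the origin and the Kato condition fails, which matches the usual $|y|^{-2}$ borderline for this class.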
Numerous papers have been written on this\nsubject in the last thirty years, mainly in the context of\nelliptic and heat equations.\n\nIn the context of Navier-Stokes equations, the corresponding time\ndependent Kato class was defined recently in [Z3], which mirrors\nthose for the heat equation [Z]. Normally, with data in the Kato\nclass, weak solutions of elliptic equations are just continuous, as\nproven in [AS], [CrZ]. It was proved in [Z3] that weak solutions\nto (1.2) are bounded when $b$ is in the Kato class. Here we prove\nthat the {\it spatial gradient} of solutions to (1.2) is bounded\nprovided that $b$ is in the Kato class locally. Let us mention\nthat one can use the idea of Kato class to recover some (but not\nall) of the decay estimates in the interesting papers [Scho1-2] and to\nprove some pointwise decay estimates (see [Z3]).\n\nIn order to make our statement precise, we introduce some\nnotation. Throughout the paper, we write\n\[\nK_1(x, t; y, s) = \begin{cases} \frac{1}{[ \ |x-y| + \sqrt{t-s} \ ]^{n+1}}, \quad t > s, \ x \neq y,\\\n0, \quad t \le s. \end{cases} \leqno(1.5)\n\]Then, for $t>l$, we introduce the\nquantity\n\[\nB(b, l, t) = \sup_{x} \int^{t}_l \int_{{\bf R}^n} [K_1(x, t; y, s) +\nK_1(x, s; y, l) ] |b(y, s)| dyds. \leqno(1.6')\n\]\n\n\medskip\n\n\noindent{\it {\bf Definition 1.2.} We say that a vector field $b$ is in the Kato\nclass $K_1$ if\n\[\n\lim_{t - l \to 0^+} B(b, l, t) = 0. \leqno(1.6)\n\]}\n\nBy the example in Remark 1.2 in [Z3], we see that the function\nclass $K_1$ contains functions which are very singular. In case\nthe spatial dimension is $3$, a function in this class can have\nan apparent singularity of certain type that is not $L^p_{loc}$\nfor any $p>1$ and of dimension $1$. One can also construct time\ndependent functions in $K_1$ with quite singular behaviors. The\nclass $K_1$ also contains the space $L^{p,q}$ with $n\/p + 2\/q<1$,\nwhich sometimes is referred to as the Prodi-Serrin class. For the\nnonlinear Navier-Stokes equation, if a weak solution is known to\nbe in this class, then it is actually smooth.
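To see why bounded fields sit comfortably in $K_1$ (a numerical aside of ours, not from the paper): for $|b| \le M$ the space integral of the kernel behaves like $\int_{{\bf R}^n} dz/(|z|+a)^{n+1} \propto 1/a$ with $a=\sqrt{t-s}$, so $B(b, l, t) \lesssim M\sqrt{t-l} \to 0$ as $t-l \to 0$, which is exactly the limit required in (1.6). The $1/a$ scaling can be checked by a radial quadrature for $n=3$:

```python
def kernel_space_integral(a, n=3, smax=500.0, steps=200000):
    """Midpoint Riemann sum of int_0^smax s^(n-1) / (s + a)^(n+1) ds,
    the radial form (up to the sphere-area constant) of
    int_{R^n} dz / (|z| + a)^(n+1); the exact value of the full integral
    is 1/(n*a), so halving a should double the result."""
    h = smax / steps
    return sum(
        ((i + 0.5) * h) ** (n - 1) / ((i + 0.5) * h + a) ** (n + 1) * h
        for i in range(steps)
    )

# the 1/a law behind B(b, l, t) ~ sqrt(t-l) for bounded b
I1, I2 = kernel_space_integral(1.0), kernel_space_integral(2.0)
assert abs(I1 / I2 - 2.0) < 0.05
```

Integrating $1/(n\sqrt{t-s})$ in $s$ over $[l, t]$ then gives the claimed $2\sqrt{t-l}/n$ factor.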
As for the\nlinearized equation (1.2), following the argument in [Ser], it is\nclear that weak solutions are bounded if $b$ is in the above\n$L^{p, q}$ class. Now we are able to prove that the spatial\ngradient of weak solutions is bounded {\it automatically},\nwithout resorting to the nonlinear structure. Using H\"older's\ninequality, one can see that the class $K_1$ also contains the\nMorrey type space introduced in [O] by O'Leary, where boundedness\nof weak solutions in Morrey space is proven.\n\nOne can also define a slightly bigger Kato class by requiring the\nlimit in (1.6) to be a small positive number rather than $0$. We\nwill not seek such generality this time. The appearance of two\nkernel functions is due to the asymmetry of the equation in the time\ndirection.\n\n\n\medskip\n\nLet $D$ be a domain in ${\bf R}^3$ and $T>0$. Following standard\npractice, we will use this definition for solutions of (1.2)\nthroughout the paper.\n\n\medskip\n\n\noindent{\it {\bf Definition 1.3.} A divergence free vector\nfield $u \in L^{\infty}(0, T; L^2(D)) \cap L^2(0, T; W^{1, 2}(D))$\nis called a\n (weak) solution of (1.2) if:\n\nfor any vector valued $\phi \in C^{\infty}(D \times [0, T])$ with\n$ div \phi =0$ and $\phi=0$ on $\partial D \times [0, T]$, $u$\nsatisfies\n\[\n\int^{t_2}_{t_1}\int_{{\bf R}^n} \langle \nabla u, \nabla \phi \rangle\n dxdt - \int^{t_2}_{t_1}\int_{{\bf R}^n} \langle u, \ b \nabla \phi + \partial_t \phi \rangle dxdt\n = - \int_{{\bf R}^n} \langle u, \phi \rangle \Big|^{t_2}_{t_1} dx.\n\]}\n\n\n\nNext we state the theorem on equation (1.2), the linearized\nNavier-Stokes equations.\n\n\n\bigskip\n\n\begin{theorem}\n\label{th:1.2}\n\nLet $u$ be a solution of (1.2) in a domain $\Omega \subset {\bf\nR}^3 \times {\bf R}$. Suppose $Q_{4r}(x, t) \subset \Omega$, $div\nb =0$ and that $b |_{Q_{2r}(x, t)}$ is in class $K_1$ and $b \in\nL^2_{loc}$.
Then both $u$ and $|\nabla u|$ are bounded functions\nin $Q_{2r}(x, t)$.\n\nMoreover, for some positive constants $C=C(b)$ and $r_0$,\ndepending on the size of the Kato norm of $b$, there hold, when\n$0 < r < r_0$, mean value bounds on $\sup_{Q_r} |u|$ and $\sup_{Q_r} |\nabla u|$\nin terms of local integrals of $u$.\n\end{theorem}\n\n\medskip\n\n\section{Proof of Theorem 1.1}\n\nThe heart of the proof is the following mean value inequality: let $u$ be\na solution of (1.1) in $Q_{\sigma_0 r}=Q_{\sigma_0 r}(x, t)$, where $x \in {\bf R}^n$,\n$r>0$, $t>0$ and $\sigma_0$ is a suitable number greater than $1$.\n Suppose $b$ satisfies Condition A in\n $Q_{\sigma_0 r}$. Then there exists $C=C(r, b)>0$ such that\n\[\n\sup_{Q_r} u^2 \le C(r, b) \frac{1}{|Q_{\sigma_0 r}|} \int_{Q_{\n\sigma_0 r}} u^2 dyds.\n\]\n\n\medskip\n\n{\bf Step 1.} An energy estimate.\n\medskip\n\nPick a solution $u$ of (1.1) in the parabolic cube $Q_{\sigma r}\n=B(x, \sigma r) \times [t-(\sigma r)^2, t]$, where $x \in {\bf\nR}^n$, $\sigma>1$, $r>0$ and $t>0$. By direct computation, for\nany rational number $p \ge 1$, which can be written as a quotient of\ntwo integers with the denominator being odd, one has\n\[\n\Delta u^p - b \nabla u^p - \partial_t u^p = p(p-1) |\nabla u|^2\nu^{p-2}. \leqno (2.1)\n\]Here the condition on $p$ is to ensure that $u^p$ makes sense\nwhen $u$ changes sign. Actually $u^2$ is a sub-solution to (1.1).\nHence one can also assume that $u$ is a non-negative sub-solution to\n(1.1) by working with $u^2$.\n\n\n\n\nChoose $\psi=\phi(y) \eta(s)$ to be a refined cut-off function\nsatisfying\n\[\nsupp \ \eta \subset [t- (\sigma r)^2, t]; \quad \eta(s)=1, \quad\ns \in [t- r^2, t]; \quad |\eta'| \le 2\/( (\sigma-1) r)^2; \quad 0\n\le \eta \le 1;\n\]\n\[\nsupp \ \phi \subset B(x, \sigma r); \quad \phi(y)=1, \quad y \in\nB(x, r); \quad 0 \le \phi \le 1;\n\]\n\[\n\frac{|\nabla \phi|}{\phi} \le \frac{A}{(\sigma-1) r} |\ln \phi\n|^{3\/2}, \qquad A>0.\n\]\nBy modifying the following function\n\[\n\exp \big{(} -\frac{ \sigma^2}{\sigma^2 - |x-y|^2} \big{)}^k\n\]\nand scaling, it is easy to show that such a function $\phi$\nexists.
Here $k$ is a sufficiently large number.\n\n\n\nDenoting $w=u^p$ and using $w \\psi^2$ as a test function on (2.1),\none obtains\n\\[\n\\int_{Q_{\\sigma r}} (\\Delta w - b \\nabla w -\n\\partial_s w) w \\psi^2 dyds = p(p-1) \\int_{Q_{\\sigma r}}\n|\\nabla u|^2 w^2 u^{-2} \\ge 0.\n\\]Using integration by parts, one deduces\n\\[\n\\int_{Q_{\\sigma r}} \\nabla(w \\psi^2) \\nabla w dyds \\le -\n\\int_{Q_{\\sigma r}} b \\nabla w (w \\psi^2) dyds - \\int_{Q_{\\sigma\nr}} (\\partial_s w) w \\psi^2 dyds. \\leqno (2.2)\n\\]By direct calculation,\n\\[\n\\aligned &\\int_{Q_{\\sigma r}} \\nabla(w \\psi^2) \\nabla w dyds =\n\\int_{Q_{\\sigma r}} \\nabla [(w \\psi ) \\psi] \\nabla w dyds\\\\\n&=\\int_{Q_{\\sigma r}} [ \\ \\nabla (w \\psi ) ( \\ \\nabla (w \\psi ) -\n(\\nabla \\psi) w ) + w \\psi \\nabla \\psi \\nabla w ] dyds\\\\\n&=\\int_{Q_{\\sigma r}} [\\ |\\nabla (w \\psi )|^2 -|\\nabla \\psi |^2\nw^2 \\ ]dyds.\n\\endaligned\n\\]Substituting this to (2.2), we obtain\n\\[\n\\aligned\n\\int_{Q_{\\sigma r}} &|\\nabla (w \\psi )|^2 dyds \\\\\n&\\le - \\int_{Q_{\\sigma r}} b \\nabla w (w \\psi^2) dyds -\n\\int_{Q_{\\sigma r}} (\\partial_s w) w \\psi^2 dyds + \\int_{Q_{\\sigma\nr}} |\\nabla \\psi |^2 w^2 dyds.\n\\endaligned \\leqno (2.3)\n\\]\n\nNext, notice that\n\\[\n\\aligned \\int_{Q_{\\sigma r}}& (\\partial_s w) w \\psi^2 dyds =\n\\frac{1}{2}\n\\int_{Q_{\\sigma r}} (\\partial_s w^2) \\psi^2 dyds\\\\\n&=-\\int_{Q_{\\sigma r}} w^2 \\phi^2 \\eta \\partial_s \\eta dyds +\n\\frac{1}{2} \\int_{B(x, \\sigma r)} w^2(y, t) \\phi^2(y) dy.\n\\endaligned\n\\]Combining this with (2.3), we see that\n\\[\n\\aligned & \\int_{Q_{\\sigma r}} |\\nabla (w \\psi )|^2 dyds +\n\\frac{1}{2} \\int_{B(x, \\sigma r)} w^2(y, t) \\phi^2(y) dy \\\\\n&\\le \\int_{Q_{\\sigma r}} (|\\nabla \\psi |^2 + \\eta \\partial_s \\eta)\n\\ w^2 dyds -\n\\int_{Q_{\\sigma r}} b (\\nabla w) (w \\psi^2) dyds \\\\\n&\\equiv T_1+T_2.\n\\endaligned\n\\leqno(2.4)\n\\]The first term on the righthand side of (2.4) is\nalready in 
good shape. So let us estimate the second term as\nfollows.\n\\[\n\\aligned\nT_2 &= - \\int_{Q_{\\sigma r}} b (\\nabla w) (w \\psi^2) dyds\\\\\n&= - \\frac{1}{2} \\int_{Q_{\\sigma r}} b \\psi^2 \\nabla w^2 dyds =\n\\frac{1}{2} \\int_{Q_{\\sigma r}} div (b \\psi^2) w^2 dyds\\\\\n&=\\frac{1}{2} \\int_{Q_{\\sigma r}} div b (\\psi w) ^2 dyds +\n\\frac{1}{2}\n\\int_{Q_{\\sigma r}} b \\nabla (\\psi^2) w^2 dyds\\\\\n&=\\frac{1}{2} \\int_{Q_{\\sigma r}} div b (\\psi w) ^2 dyds +\n\\int_{Q_{\\sigma r}} b (\\nabla \\psi) \\psi w^2 dyds\\\\\n&\\le \\int_{Q_{\\sigma r}} b (\\nabla \\psi) \\psi w^2 dyds.\n\\endaligned\n\\]Here we just used the assumption that $div b \\le 0$.\n\n\n\n\n\nThe next paragraph contains a key argument of the paper.\n\nLet $D>0$ be a number to be chosen later\n\\[\n\\aligned T_2 &\\le \\int_{Q_{\\sigma r}} |b| \\ |\\nabla \\psi| \\psi\nw^2\ndyds \\\\\n&\\le \\int_{|b| \\ge D} |b| \\ |\\nabla \\psi| \\psi w^2 dyds\n+\\int_{|b| \\le D} |b| \\ |\\nabla \\psi| \\psi w^2 dyds\n\\\\\n&\\le \\int_{|b| \\ge D} |b| \\ |\\nabla \\psi| \\psi w^2 dyds + \\frac{C\nD }{(\\sigma - 1) r} \\int_{Q_{\\sigma r}} w^2 dyds.\n\\endaligned\n\\]Using\nthe property that $\\psi = \\phi \\eta$ and\n\\[\n|\\nabla \\phi| \\le \\frac{A}{(\\sigma -1) r} \\phi | \\ln \\phi\n|^{3\/2},\n\\]\nwe have\n\\[\n\\aligned T_2& \\le \\int \\int_{|b| \\ge 1\/\\phi, |b| \\ge D} |b|\n|\\nabla \\phi| \\phi w^2 dy \\eta^2 ds + \\int \\int_{|b|\n\\le 1\/\\phi} |b| |\\nabla \\phi| \\phi w^2 dy \\eta^2 ds\\\\\n&\\qquad \\qquad \\qquad+\n\\frac{C D }{(\\sigma - 1) r} \\int_{Q_{\\sigma r}} w^2 dyds\\\\\n&\\le \\frac{A}{(\\sigma -1) r} \\int \\int_{|b| \\ge 1\/\\phi, |b| \\ge D}\n|b| \\ |\\ln \\phi|^{3\/2} (\\phi w)^2 dy \\eta^2 ds \\\\\n&\\qquad \\qquad + \\frac{A}{(\\sigma -1) r} \\int \\int_{B(x, \\sigma\nr), |b| \\le 1\/\\phi} |b| \\ |\\ln \\phi|^{3\/2} (\\phi w)^2 dy \\eta^2\nds\n\\\\\n&\\qquad \\qquad \\qquad+\n\\frac{C D }{(\\sigma - 1) r} \\int_{Q_{\\sigma r}} w^2 dyds\\\\\n&\\le 
\\frac{A}{(\\sigma -1) r} \\int_{|b| \\ge D} |b| (\\ln |b|)^{3\/2}\n(\\psi w)^2 dyds + \\frac{A}{(\\sigma -1) r} \\int_{Q_{\\sigma r}}\n\\frac{1}{\\phi} |\\ln \\phi|^{3\/2} (\\phi\nw)^2 \\eta^2 dyds\\\\\n&\\qquad \\qquad \\qquad +\\frac{C D }{(\\sigma - 1) r} \\int_{Q_{\\sigma\nr}} w^2 dyds\\\\\n&\\le \\frac{A}{(\\sigma -1) r (\\ln D)^{1\/2}} \\int_{Q_{\\sigma r}} |b|\n(\\ln |b|)^2 (\\psi w)^2 dyds + \\frac{A}{(\\sigma -1) r}\n\\int_{Q_{\\sigma r}} \\phi |\\ln \\phi|^{3\/2}\nw^2 \\eta^2 dyds\\\\\n&\\qquad \\qquad \\qquad +\\frac{C D }{(\\sigma - 1) r} \\int_{Q_{\\sigma\nr}} w^2 dyds.\\\\\n\\endaligned\n\\]\n\n\nBy our assumptions on $b$,\n\\[\n\\int_{Q_{\\sigma r}} |b| (\\ln |b|)^2 (\\psi w)^2 dyds \\le k\n\\int_{Q_{\\sigma r}} |\\nabla (\\psi w)|^2 dyds,\n\\]and the fact that $\\phi |\\ln \\phi|^{3\/2}$ is a bounded function, we\ndeduce\n\\[\nT_2 \\le \\frac{A}{(\\sigma -1) r (\\ln D)^{1\/2}} \\int_{Q_{\\sigma r}}\n|\\nabla (\\psi w)|^2 dyds + \\frac{C_1 D }{(\\sigma - 1) r}\n\\int_{Q_{\\sigma r}} w^2 dyds.\n\\]Now we choose $D$ so that $\\frac{A}{(\\sigma -1) r (\\ln D)^{1\/2}}\n= \\frac{1}{2}$, i.e. $D= e^{( 2A\/ [(\\sigma-1) r])^2}$, then\n\\[\nT_2 \\le \\frac{1}{2} \\int_{Q_{\\sigma r}} |\\nabla (\\psi w)|^2 dyds\n+ c_0 e^{( c_1\/ [(\\sigma-1) r])^2} \\int_{Q_{\\sigma r}} w^2 dyds.\n\\leqno(2.6)\n\\]Here $c_0$ and $c_1$ are positive constants independent of $r$\nand $\\sigma$.\n\n\n\nCombining (2.4) with (2.6), we reach\n\\[\n\\int_{Q_{\\sigma r}} |\\nabla (w \\psi )|^2 dyds + \\int_{B(x, \\sigma\nr)} w^2(y, t) \\phi^2(y) dy \\le c_0 e^{( c_1\/ [(\\sigma-1) r])^2}\n\\int_{Q_{\\sigma r}} w^2 dyds. \\leqno(2.7)\n\\]\n\n\n\n\\medskip\n\n{\\bf step 2.} $L^2 - L^\\infty$ bounds.\n\\medskip\n\nBy modifying Moser's iteration moderately, we deduce from (2.7)\nthe following $L^2-L^\\infty$ estimate.\n\\[\n\\sup_{Q_r} u^2 \\le C(r, b) \\frac{1}{|Q_{\\sigma_0 r}|}\n\\int_{Q_{\\sigma_0 r}} u^2 dyds. 
\leqno(2.8)\n\]Indeed, by H\"older's inequality,\n\[\n\aligned\n \int_{{\bf R}^n} &{(\phi w)}^{2(1+(2\/n))} =\n \int_{{\bf R}^n} (\phi w)^2 \ (\phi w)^{4\/n}\\\n &\le \big{(}\n\int_{{\bf R}^n} {(\phi w)}^{2n\/(n-2)} \big{)}^{(n-2)\/n}\n \big{(}\int_{{\bf R}^n} {(\phi w)}^2 \big{)}^{2\/n}.\n\endaligned\n\]Using the Sobolev inequality, one obtains\n\[\n\int_{{\bf R}^n} {(\phi w)}^{2(1+(2\/n))} \le\n C \big{(}\int {(\phi w)}^2 \big{)}^{2\/n}\n \big{(}\int_{{\bf R}^n}\n |\nabla (\phi w)|^2 \big{)}.\n\]The last inequality, together with (2.7), implies, for some\n$C_1>0$,\n\[\n\int_{Q_{\sigma' r }(x, t)} u^{2p\theta} \le \big{(} c_0 e^{(\nc_1\/ [(\sigma-\sigma') r])^2} \int_{Q_{\sigma r}(x, t)}\nu^{2p}\big{)}^{\theta},\n\]where $\theta = 1+ (2\/n)$ and $\sigma'<\sigma$.\n\n\nTake a number $\rho>1$ so that $\rho^2< \theta$.\n We set\n$\tau_i=\rho^{-i}$, $\sigma_0=1\/(1-\rho^{-1})$,\n$\sigma_i=\sigma_{i-1}-\tau_i= \sigma_0-\Sigma^{i}_1 \tau_j$,\n$p=\theta^i$, $i=1, 2, ...$. The above then yields, for some $c_2,\nc_3>0$,\n\[\n\int_{Q_{\sigma_{i+1} r}(x, t)} u^{2 \theta^{i+1} } \le c_3\n\big{(} c^{i+1}_2 e^{ c^2_1 r^{-2} \rho^{2 i} } \int_{Q_{\sigma_i\nr }(x, t)} u^{2 \theta^i}\big{)}^{\theta}.\n\]After iteration, the above implies, for some $c_4 >0$,\n\[\n\big{(}\int_{Q_{\sigma_{i+1} r }(x, t)}\n u^{2 \theta^{i+1}} \big{)}^{\theta^{-i-1}}\n\le \exp (c_4 \Sigma^i_1 j \theta^{-j}) \ \exp (c_1 r^{-2}\n\Sigma^i_1 \rho^{2j} \theta^{-j})\n \int_{Q_{\sigma_0 r}(x, t)} u^2.\n\]Observe that $\rho^2\/\theta<1$. Letting $i \rightarrow \infty$ and\nobserving that $\sigma_i \to 1$ as $i \to \infty$, we obtain\n\[\n\sup_{Q_r} u^2 \le C(r, b) \int_{Q_{\sigma_0 r}} u^2.\n\]\n This completes the\nproof of the theorem. \qed\n\n\medskip\n\n\n\n\n\section{Proof of Theorem 1.2}\n\nFirst we will need a short lemma concerning the kernel function $K_1$\ndefined in (1.5).
It was proved, among other things, in [Z3]. We\ngive a proof for completeness.\n\n\begin{lemma}\nThe following inequality holds for all $x, y, z \in {\bf R}^n$ and\n$t> \tau >0$.\n\[\n\aligned K_1*b K_1 &\equiv \int^t_0\int_{{\bf R}^n} \frac{1}{(\n|x-z|+\sqrt{t-\tau} )^{n+1}} \frac{|b(z, \tau)|}{(\n|z-y|+\sqrt{\tau} )^{n+1}} dzd\tau \\\n&\le C B(b, 0, t) K_1(x, t; y, 0). \endaligned \leqno(3.1)\n\]Here, recalling from (1.6'),\n\[\nB(b, 0, t) \equiv \sup_{x \in {\bf R}^n} \int^{t}_0 \int_{{\bf\nR}^n} [K_1(x, t; y, s) + K_1(x, s; y, 0) ] |b(y, s)| dyds. \leqno\n(3.2)\n\]\n\end{lemma}\n\n\proof\n\nSince\n\[\n|x-z| + \sqrt{t-\tau} + |y-z| + \sqrt{\tau} \ge |x-y| + \sqrt{t},\n\]we have, either\n\[\n|x-z| + \sqrt{t-\tau} \ge \frac{1}{2} (|x-y| + \sqrt{t}),\n\leqno(3.3)\n\]or\n\[\n|z-y| + \sqrt{\tau} \ge \frac{1}{2} (|x-y| + \sqrt{t}).\n\leqno(3.4)\n\]\n\nSuppose (3.3) holds; then\n\[\nK_1*b K_1 \le \frac{ 2^n}{(|x-y| + \sqrt{t})^{n+1}}\n\int^t_0\int_{{\bf R}^n} \frac{|b(z, \tau)|}{(|z-y|+\sqrt{\tau}\n)^{n+1}} dzd\tau.\n\]That is\n\[\nK_1*b K_1 \le \frac{ 2^n B(b, 0, t)}{(|x-y| + \sqrt{t})^{n+1}}.\n\leqno (3.5)\n\]\n\nSuppose (3.4) holds but (3.3) fails; then\n\[\n|z-y| + \sqrt{\tau} \ge \frac{1}{2} (|x-y| + \sqrt{t}) \ge |x-z| +\n\sqrt{t-\tau}.\n\]This shows\n\[\n\frac{1}{( |x-z|+\sqrt{t-\tau} )^{n+1} \ ( |z-y|+\sqrt{\tau}\n)^{n+1}} \le \frac{2^n}{( |x-z|+\sqrt{t-\tau} )^{n+1} (\n|x-y|+\sqrt{t} )^{n+1}}.\n\]Substituting this into (3.1), we obtain\n\[\nK_1*b K_1 \le \frac{ 2^n}{(|x-y| + \sqrt{t})^{n+1}}\n\int^t_0\int_{{\bf R}^n} \frac{|b(z, \tau)|}{(|x-z|+\sqrt{t-\tau}\n)^{n+1}} dzd\tau.\n\]That is\n\[\nK_1*b K_1 \le \frac{ 2^n B(b, 0, t)}{(|x-y| + \sqrt{t})^{n+1}}.\n\]\n\nClearly the only remaining case to consider is when both (3.3) and\n(3.4) hold. However this case is already covered by (3.5). Thus\n(3.1) is proven.
\\qed\n\n\\medskip\n\n\n\n\nNext we state and prove a representation formula for solutions of\n(1.2) and their spatial gradient, following and extending the idea\nin [O]. The formula for solutions is contained in [O]. However, we\nwill outline the proof since it is useful in the proof of the\nformula for the gradient, which is a new contribution of this\npaper.\n\n\n\n\\medskip\n\n{\\bf Remark 3.1.} Let us note that at this moment, the\nrepresentation formula (3.6') below for the gradient is understood\nas a comparison of two $L^1_{loc}$ functions in space-time. This\nis legal for two reasons. First we assumed that $\\nabla u$ is a\n$L^2$ function a priori. Second, it is easy to check\n $K_1(\\cdot,\n\\cdot; y, s)$ is $L^1_{loc}$ and $b \\nabla u$ is $L^1_{loc}$ by\nthe assumption that $b$ is $L^2$. Therefore the last function on\nthe righthand side of (3.6') is a $L^1_{loc}$ function.\n\n\\medskip\n\n\\begin{lemma} (mean value inequality)\n\n(a). Let $u$ be a solution of (1.2) in the region $\\Omega$.\nSuppose $Q_{2r}(x, t) \\subset \\Omega$. Then there exists a\nconstant $\\lambda$ such that\n\\[\n\\aligned\n |u(x, t)| &\\le \\lambda \\frac{1}{r^5} \\int_{Q_r(x,\nt)-Q_{r\/2}(x, t)} |u(y, s)| dyds \\\\\n&\\qquad + \\lambda \\int_{Q_r(x, t)} K_1(x, t; y, s) |b(y, s)| \\\n|u(y, s)| dyds. \\endaligned\n \\leqno(3.6)\n\\]\n\n(b). Under the same assumption as (a), there exists a constant\n$\\lambda$ such that\n\\[\n\\aligned |\\nabla u(x, t)| &\\le \\frac{\\lambda}{r^5} \\int_{Q_r(x,\nt)-Q_{r\/2}(x, t)} | \\nabla u(y, s)| dyds\\\\\n&\\qquad + \\frac{\\lambda}{r^6}\n\\int_{Q_r(x, t)-Q_{r\/2}(x, t)} |u(y, s)| dyds\\\\\n&\\qquad + \\lambda \\int_{Q_r(x, t)} K_1(x, t; y, s) |b(y, s)| \\\n|\\nabla u(y, s)| dyds. \\endaligned \\leqno(3.6')\n\\]\n\n\\proof of (a).\n\\end{lemma}\n\nLet $E=E(x, t; y, s)$ be the fundamental solution (matrix) of the\nStokes system in ${\\bf R}^3 \\times (0, \\infty)$ and $E_k$ be the\n$k$-th column of $E$. 
This function has been studied for a long\ntime. All of its basic properties we are using below can be found\nin [So] and [FJR]. Fixing $(x, t)$, we construct a standard\ncut-off function $\\eta$ such that $\\eta(y, s)=1$ in $Q_{r\/2}(x,\nt)$, $\\eta(y, s)=0$ outside of $Q_r(x, t)$, $0 \\le \\eta \\le 1$ and\n$|\\nabla \\eta|^2 + |\\Delta \\eta| + |\\partial_s \\eta| \\le c\/r^2$.\n\nDefine a vector valued function\n\\[\n\\Phi_k=\\Phi^{(x, t)}_k(y, s) = \\frac{1}{4 \\pi} curl \\big{(} \\eta\n(y, s) \\int_{{\\bf R}^3} \\frac{ curl E_k(x, t; z, s)}{|z-y|} dz\n\\big{)}. \\leqno(3.7)\n\\]It is clear that when $t>s$, $\\Phi_k$ is a valid test function\nfor equation (1.2) since $\\Phi_k$ is smooth, compactly supported\nand divergence free. Using $\\Phi_k$ as a test function on (1.2),\nby Definition (1.2), we obtain\n\\[\n\\int^t_0\\int u(y, s) \\big{(} \\Delta \\Phi_k + b \\nabla \\Phi_k +\n\\partial_s \\Phi_k \\big{)} dyds = \\lim_{s \\to t} \\int u(y, s)\n\\Phi_k(y, s) dy.\n\\]Here and later,we will suppress the superscript $(x, t)$ on\n$\\Phi$, unless there is a confusion.\n\nSince $E_k$ is divergence free $curl \\ curl E_k = - \\Delta E_k$.\nThus\n\\[\n\\aligned \\Phi_k(y, s) &= \\eta(y, s) E_k(x, t; y, s) + \\frac{1}{4\n\\pi} \\nabla \\eta(y, s) \\times \\int_{{\\bf R}^3} \\frac{ curl E_k(x,\nt; z, s)}{|z-y|} dz\\\\\n&\\equiv \\eta E_k + \\overrightarrow{Z}.\n\\endaligned\n\\leqno(3.8)\n\\]\nUsing the property of the fundamental matrix $E$ and the fact\nthat $\\overrightarrow{Z}$ is a lower order term, it is easy to see\nthat\n\\[\n\\lim_{s \\to t} \\int u(y, s) \\Phi_k(y, s) dy = \\eta (y, t) u_k(x,\nt) = u_k(x, t)\n\\]where $u_k$ is the k-th component of $u$. 
Hence\n\[\n\aligned u_k(x, t) &= \int_{Q_r(x, t)} u [ \Delta (\eta E_k) +\n\partial_s (\eta\nE_k)]dyds + \int_{Q_r(x, t)} u [\Delta \overrightarrow{Z} +\n\partial_s \overrightarrow{Z}] dyds\\\n&\qquad \qquad + \int_{Q_r(x, t)} u b \nabla \Phi_k dyds.\n\endaligned\n\]Here and later $u b \nabla \Phi_k = u \cdot \sum^3_{i=1} b_i\n\partial_i \Phi_k$.\n\nNote that $\Delta E_k + \partial_s E_k=0$ when $s < t$. Estimating the\nterms above by the standard bounds on $E$ and on the cut-off $\eta$ yields (3.6);\niterating (3.6), halving the size of the cube at each step as in part (b) below,\none obtains, for $r>0$,\n\[\n|u(X)| \le \frac{C}{r^5} \Vert u \Vert_{L^1(Q_{2r}(X))}\n\Sigma^\infty_{j=1} [2^5 c_1 B(b)]^j.\n\]When $2^5 c_1 B(b)<1$ the above series converges to yield the mean\nvalue inequality for $u$. Since $b$ is in the Kato class defined\nin Definition 1.2, we know that $2^5 c_1 B(b)= 2^5 c_1 B(b,\nt-(4r)^2, t)<1$ when $r$ is sufficiently small. This proves the\nbound on $u$.\n\n\medskip\n\n{\it (b) We prove the gradient bound.}\n\medskip\n\n\nSince $u+c$ is also a solution of (1.2) for any constant $c$, we\nwill assume that $\overline{u}_{Q_{2r}}$, the average of $u$ in\n$Q_{2r}(x, t)$, is $0$.\n\nThe idea is to iterate (3.6') in the above manner.
From (3.6'),\nusing the same notation as in part (a), we have\n\[\n |\nabla u(X)| \le m(X, r)\n + \lambda \int_{Q_r(X)} K_1(X; Y) |b(Y)| \ |\nabla\nu(Y)| dY, \leqno(3.20)\n\]where\n\[\nm(X, r) \equiv\n \frac{\lambda}{r^5} \int_{Q_r(X)} | \nabla u(Y)| dY\n + \frac{\lambda}{r^6} \int_{Q_r(X)} |u(Y)| dY.\n\]Here, we remind the reader that both sides of (3.20) are\n$L^1_{loc}$ functions and hence finite almost everywhere (see\nRemark 3.1 just before Lemma 3.2).\n\nApplying (3.20) to $\nabla u(Y)$ and $Q_{r\/2}(Y)$, we obtain\n\[\n|\nabla u(Y)| \le m(Y, r\/2)\n + \lambda \int_{Q_{r\/2}(Y)} K_1(Y; Z) |b(Z)| \ |\nabla\nu(Z)| dZ.\n\]For $Y \in Q_r(X)$, it is clear that there exists a $\mu >0$,\nindependent of $r$, such that\n\[\nm(Y, r\/2) \le \mu m(X, 2 r).\n\]Hence\n\[\n|\nabla u(Y)| \le \mu m(X, 2 r)\n + \lambda \int_{Q_{r\/2}(Y)} K_1(Y; Z) |b(Z)| \ |\nabla\nu(Z)| dZ. \leqno(3.21)\n\]Substituting (3.21) into (3.20) we have\n\[\n\aligned |\nabla u(X)| &\le m(X, r) + \mu \ m(X, 2r) \lambda\n\int_{Q_r(X)} K_1(X; Y) |b(Y)| dY \\\n&\qquad + \lambda \int_{Q_r(X)} K_1(X; Y) |b(Y)| \lambda\n\int_{Q_{r\/2}(Y)} K_1(Y; Z) |b(Z)| \ |\nabla u(Z)| dZ dY.\n\endaligned\n\]Therefore, exchanging the order of integration,\n\[\n\aligned |\nabla u(X)| &\le m(X, r) + \mu \lambda \ m(X, 2r) B(b) \\\n&\qquad + \lambda^2 \int_{Q_{3r\/2}(X)} \bigg{(} \int_{Q_r(X)} K_1(X; Y)\n|b(Y)| K_1(Y; Z) dY \bigg{)} |b(Z)| \ |\nabla u(Z)| dZ.\n\endaligned\n\]Lemma 3.1 then implies\n\[\n|\nabla u(X)| \le m(X, r) + \mu \lambda \ m(X, 2r) B(b) \\\n + c \lambda^2 B(b) \int_{Q_{3r\/2}(X)} K_1(X,Z) |b(Z)| \ |\nabla\nu(Z)| dZ.\n\]Now, using (3.20) on $|\nabla\nu(Z)|$ and the cube $Q_{r\/4}(Z)$ and repeating the above argument,\nhalving the size of the cube in each step, we finally reach\n\[\n|\nabla u(X)| \le C \ m(X, 2r) \Sigma^{\infty}_{k=1}\n\lambda^{k+1} \mu^{k+1} B(b)^k.\n\]As before this implies the desired gradient bound when $B(b)$ is\nsmall.\n\n\nThe last
statement of Theorem 1.2 is a simple consequence of the\ngradient estimate and the Poincar\'e inequality. \qed\n\n\n\n\n\nWe end the section by showing that if the sum of the entries of\nthe drift term $b$ is a constant, then (1.2) on a torus has bounded\nsolutions for many initial values, regardless of the size\nof $b$. To state the result rigorously, we will assume that $b$\nis bounded. However, all constants are independent of the bounds\nof $b$.\n\n\medskip\n\n\begin{proposition} Given bounded vector fields $b=(b_1(x, t), ...,\nb_n(x, t))$, consider the linearized Navier-Stokes equation with\nperiodic boundary conditions, i.e. on a torus:\n\[\n\begin{cases}\n\Delta u(x, t) - b(x, t) \nabla u(x, t) + \nabla P(x, t) -\n\partial_t\nu(x, t)=0, \quad (x, t) \in D \times {\bf R},\\\ndiv u = 0, \ div b=0, \ b(\cdot, t) \in L^\infty \\\nu(x, 0) = u_0(x).\n\end{cases}\n\leqno(3.22)\n\]Here $D = [0, 2 \pi]^n$. The functions $b(\cdot, t)$,\n$u_0(\cdot)$ and $u(\cdot, t)$ have period $2 \pi$.\n\n\nSuppose $\sum^n_{j=1} b_j(x, t) = \lambda$, a constant, and $u_0$\nis any finite linear combination of\n\[\n\int_{D} e^{i k \sum^n_{j=1}(x_j-y_j)} f(y) dy,\n\]where $k$ is a positive integer and $f$ is a bounded,\ndivergence free vector field with period $2 \pi$. Then there\nexists a constant $c$ independent of $b$ and $\lambda$ such that\n\[\n|u(x, t)| \le c e^{-t} C(\Vert u_0 \Vert_{\infty}).\n\]\n\end{proposition}\n\n\proof\n\nUnder the assumption that $b$ is bounded, the existence of\nsolutions to (3.22) follows from the standard theory. Let $E=E(x,\nt; y, s)$ be the fundamental solution of (3.22). The existence of\n$E$ is also standard.\n\n First we\nassume that $u_0$ has one term, i.e.\n\[\nu_0(x) = \int_{D} e^{i k \sum^n_{j=1}(x_j-y_j)} f(y) dy.\n\]Fixing $(x, t)$, consider\n\[\nI(s) \equiv \int_D E(x, t; y, s) u_0(y) dy.
\leqno(3.23)\n\]By the fact that the rows of $E$ satisfy the conjugate\nequation of (3.22), we have\n\[\n\aligned I'(s) & = \int_{D} \frac{d}{ds} E(x, t; y, s) u_0(y) dy\\\n&= \int_{D} \big{[} - \Delta_y E(x, t; y, s) - b(y, s) \nabla_y\nE(x, t; y, s) + \nabla Y(y, s)] u_0(y) dy. \endaligned\n\]Here\n\[\n\nabla Y =\left( \begin{array}{cc} \partial_1 P_1 \ ...\\n\partial_n P_1\\... \ ... \ ... \\ \partial_1 P_n\ ... \\n\partial_n P_n\n\end{array} \right)\n\]with $P_1, ..., P_n$ being scalar functions. Using integration\nby parts and the divergence free property of $b$, we deduce\n\[\nI'(s) = - \int_{D} E(x, t; y, s) \Delta u_0(y) dy + \int_{D}\nE(x, t; y, s) \sum^n_{j=1} b_j \partial_{y_j} u_0(y) dy.\n\]Noticing that $\Delta u_0 = - n k^2 u_0$ and $\partial_{y_j} u_0(y)\n= i k u_0$, we have\n\[\nI'(s) = n k^2 \int_{D} E(x, t; y, s) u_0(y) dy + i k \int_{D}\n(\sum^n_{j=1} b_j) E(x, t; y, s) u_0(y) dy.\n\]By our assumption that $\sum^n_{j=1} b_j = \lambda$, the above\nshows\n\[\nI'(s) = (n k^2 + i k \lambda) I(s).\n\]Hence\n\[\nI(s) = e^{(n k^2 + i k \lambda) s} I(0).\n\]From (3.23) and the fact that $E$ is the fundamental solution to\n(3.22), we have $I(0) = u(x, t)$ and $I(t) = u_0(x)$. This shows\n\[\nu(x, t) = e^{- (n k^2 + i k \lambda) t} u_0(x). \leqno(3.24)\n\]\n\nNow let $S$ be a finite set of positive integers and\n\[\nu_0(x) = \sum_{l \in S} c_l \int_{D} e^{i k_l\n\sum^n_{j=1}(x_j-y_j)} f_l(y) dy.\n\]Here $k_l$ is a positive integer and $f_l$ is a bounded,\ndivergence free vector field with period $2 \pi$.
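The decay claimed in the proposition can be sanity-checked on the multiplier in (3.24) (a numerical aside; the values of $n$, $k$, $\lambda$, $t$ below are arbitrary): its modulus is $e^{-nk^2 t}$, so the constant $\lambda$, however large, only rotates the phase and never affects the decay rate:

```python
import cmath
import math

def mode_factor(n, k, lam, t):
    # multiplier in u(x, t) = exp(-(n k^2 + i k lam) t) u0(x), cf. (3.24)
    return cmath.exp(-(n * k ** 2 + 1j * k * lam) * t)

for lam in (0.0, 1.0, 1e6):
    z = mode_factor(3, 2, lam, 0.5)
    # |exp(-(n k^2 + i k lam) t)| = exp(-n k^2 t), independent of lam
    assert abs(abs(z) - math.exp(-3 * 2 ** 2 * 0.5)) < 1e-12
```

Since $n k^2 \ge 1$ for every mode ($k \ge 1$, $n \ge 1$), each factor is dominated by $e^{-t}$, which is where the stated bound $|u(x, t)| \le c e^{-t} C(\Vert u_0 \Vert_\infty)$ comes from.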
By (3.24) one\nhas\n\[\nu(x, t) = \sum_{l \in S} e^{- (n k^2_l + i k_l \lambda) t} c_l\n\int_{D} e^{i k_l \sum^n_{j=1}(x_j-y_j)} f_l(y) dy.\n\]\qed\n\n\n\n\section{A regularity condition for Navier-Stokes equations not involving\nabsolute values}\n\nIn this section we introduce another sufficient condition on the\nvelocity for boundedness of weak solutions of 3 dimensional\nNavier-Stokes equations. The novelty is that no absolute value of\n$u$ is involved. This is useful since it allows more cancellation\neffects to be taken into account. Throughout the years, various\nconditions on $u$ that imply regularity have been proposed. One of\nthem is the Prodi-Serrin condition which requires that $u \in\nL^{p, q}$ with $\frac{3}{p}+\frac{2}{q} \le 1$ for some $3 < p \le \infty$.\nAnother is a form boundedness condition on $|u|$, Condition (4.2): {\it for every\n$(x_0, t_0)$ there exists a cube $Q_r=B(x_0, r) \times [t_0-r^2, t_0]$ such that,\nfor all smooth vector fields $\phi$ vanishing on the parabolic side of $Q_r$,\n\[\n\int_{Q_r} |u|^2 \phi^2 dyds \le \frac{1-\delta}{2}\n\bigg{(} \int_{Q_r} |\nabla \phi|^2 dyds + \frac{1}{2} \sup_{s \in\n[t_0-r^2, t_0]} \int_{B(x_0, r)} \phi^2(y, s) dy \bigg{)} +\nB(\Vert \phi \Vert_{L^2(Q_r)}), \leqno(4.2)\n\]for a given $\delta>0$.}\n\medskip\n\n\nIn this section we are able to extend the form boundedness\ncondition further. The main improvement is that our new condition\non $u$, (4.3) below, does not involve the absolute value of $u$.\nThis differs significantly from known conditions on $u$ so far\nwhere $|u|$ is always present. By a simple integration by parts\nargument, it is clear that Condition (4.3) is more general than\nCondition (4.2).\n\n\medskip\n\n\begin{theorem}\nLet $u$ be a Leray-Hopf solution to the 3 dimensional\nNavier-Stokes equation in ${\bf R}^3 \times (0, \infty)$.\n\n Suppose\nfor every $(x_0, t_0) \in {\bf R}^3 \times (0, \infty)$, there\nexists a cube $Q_r=B(x_0, r) \times [t_0-r^2, t_0]$ such that $u$\nsatisfies the form bounded condition: for a given $\delta>0$,\n\[\n\int_{Q_r} \phi \nabla u \cdot \phi dyds \le \frac{1-\delta}{2}\n\bigg{(} \int_{Q_r} |\nabla \phi|^2 dyds + \frac{1}{2} \sup_{s \in\n[t_0-r^2, t_0]} \int_{B(x_0, r)} \phi^2(y, s) dy \bigg{)} +\nB(\Vert \phi \Vert_{L^2(Q_r)}). \leqno(4.3)\n\]Here $\phi$ is any smooth vector field vanishing on the parabolic side of\n$Q_r$ and $B=B(t)$ is any given locally bounded function of $t \in\n{\bf R}^1$. Then $u$ is a classical solution when $t>0$.\n\end{theorem}\n\medskip\n\n{\bf Remark 4.1.} Condition (4.3) is actually a condition on the\nstrain tensor $\nabla u + (\nabla u)^T$. Theorem 4.1 immediately\nimplies that weak solutions to the 3 dimensional Navier-Stokes\nequations are locally bounded in any open subset of the region\nwhere the strain tensor is negative definite.\n\medskip\n\n\n{\bf Proof of Theorem 4.1.} Let $t_0$ be the first moment of\nsingularity formation. We will reach a contradiction. It is clear\nthat we only need to prove that $u$ is bounded in\n$Q_{r\/8}=Q_{r\/8}(x_0, t_0)$ for some $r>0$. In fact the number $8$\nis not essential. Any number greater than $1$ would work.\n\n\n Consider the equation for\nvorticity $w = \nabla \times u$.
It is well known that, in the\ninterior of $Q_r$, $w$ is a classical solution to the parabolic\nsystem with singular coefficients\n\\[\n\\Delta w - u \\nabla w + w \\nabla u - w_t = 0. \\leqno(4.4)\n\\]Let $\\psi=\\psi(y, s)$ be the refined cut-off function defined right after\n(2.1) such that $\\psi=1$ in $Q_{r\/2}$, $\\psi=0$ in $Q^c_r$ and\nsuch that $0 \\le \\psi \\le 1$, $|\\nabla \\psi| \\le C\/r$ and\n$|\\psi_t| \\le C\/r^2$. We can use $w \\psi^2$ as a test function on\n(4.4) to obtain\n\\[\n\\aligned\n \\int_{Q_r} | &\\nabla (w \\psi) |^2 dyds + \\frac{1}{2}\n \\int_{B(x_0, r)} |w \\psi|^2(y, t_0) dy\\\\\n&\\le \\frac{C}{r^2} \\int_{Q_r} |w|^2 dyds - \\int_{Q_r} u \\nabla w\n\\cdot w \\psi^2 dyds + \\int_{Q_r} w \\nabla u \\cdot w \\psi^2\ndyds \\\\\n&\\equiv I_1 + I_2 + I_3.\n\\endaligned\n\\leqno(4.5)\n\\]\n\nThe term $I_1$ is already in good shape. Next, using integration\nby parts and the divergence free condition on $u$, we have\n\\[\nI_2 = \\frac{1}{2} \\int_{Q_r} u \\cdot \\nabla \\psi \\psi |w|^2 dyds.\n\\]Since $\\nabla u \\in L^2_{loc}({\\bf R}^3 \\times [0, \\infty))$\nand $\\Vert u( \\cdot, t)\\Vert_{{\\bf R}^3}$ is non-increasing in\ntime, it is easy to prove by Sobolev imbedding and H\\\"older's\ninequality that $u|_{Q_r}$ satisfies the form boundedness\ncondition (1.3). In fact this has been proven in Corollary 2 in\n[Z2]. Hence we can bound $I_2$ in exactly the same way as $T_2$ in\n(2.4) with $b$ being chosen as $u$ here. Following the argument\nbetween (2.4) and (2.6), we obtain, for any given $\\delta>0$,\n\\[\nI_2 \\le \\frac{\\delta}{4} \\int_{Q_{r}} |\\nabla (\\psi w)|^2 dyds +\nc_\\delta e^{( c_1\/ r)^2} \\int_{Q_r} w^2 dyds. \\leqno(4.6)\n\\]Note that in (2.6), $\\delta$ was chosen as $2$. However, a\ncloser look at the proof shows that (4.6) is true.\n\n\n\nNext we estimate $I_3$. 
From Condition (4.3)\n\[\nI_3 \le \frac{1-\delta}{2} \bigg{(} \int_{Q_r} |\nabla (w \psi)\n|^2 dyds + \frac{1}{2} \sup_{s \in [t_0-r^2, t_0]} \int_{B(x_0,\nr)} (w \psi)^2(y, s) dy \bigg{)} + B(\Vert w \psi\n\Vert_{L^2(Q_r)}). \leqno(4.7)\n\]\n\n\nSubstituting (4.6)-(4.7) to (4.5) we obtain,\n\[\n\aligned\n \int_{Q_r}& | \nabla (w \psi) |^2 dyds + \frac{1}{2}\n\int_{B(x_0, r)} |w \psi|^2(y, t_0) dy \\\n&\le \frac{\delta}{4} \int_{Q_{r}} |\nabla (\psi w)|^2 dyds +\nc_\delta e^{( c_1\/ r)^2}\n\int_{Q_r} w^2 dyds \\\n&+ \frac{1-\delta}{2} \bigg{(} \int_{Q_r} |\nabla (w \psi) |^2\ndyds + \frac{1}{2} \sup_{s \in [t_0-r^2, t_0]} \int_{B(x_0, r)} (w\n\psi)^2(y, s) dy \bigg{)}\n + C B(\Vert w\n\Vert_{L^2(Q_r)}).\n\endaligned\n\]Repeating the above process, but restricting the integrals to $Q_r\n\cap \{ (y, s) \ | \ s < t \}$, we obtain the same inequality with the\nleft-hand side evaluated at any time $t \in [t_0-r^2, t_0]$. Taking the\nsupremum over such $t$, the terms on the right-hand side carrying the\ncoefficient $\frac{1-\delta}{2}$ can be absorbed into the left-hand side\nsince $\delta > 0$. A Moser type iteration as in Section 2 then shows that\n$w$ is bounded in $Q_{r\/4}$, and hence $u$ is bounded in $Q_{r\/8}$. This\ncontradicts the choice of $t_0$.\n\nFor $t>0$, we will use the following definition for solutions of (\ref{lns}).\n\n\n\n\begin{definition}\n\lab{defsol}\n A divergence free vector\nfield $u \in L^{\infty}(0, T; L^2(D)) \cap L^2(0, T; W^{1, 2}(D))$\nis called a\n (weak) solution of (\ref{lns}) if:\nfor any vector valued $\phi \in C^{\infty}(D \times [0, T])$ with\n$ div \, \phi =0$ and $\phi=0$ on $\partial D \times [0, T]$, $u$\nsatisfies\n\[\n\int^{t_2}_{t_1}\int_{{\bf R}^n} \big( \nabla u \nabla \phi - u \partial_t \phi \big)\n dxdt - \int^{t_2}_{t_1}\int_{{\bf R}^n} u \, b \nabla \phi \, dxdt\n = - \int_{{\bf R}^n} (u \phi) (x, t) |^{t_2}_{t_1} dx.\n\]\n\end{definition}\n\n\n\nThe corrected version of Theorem 1.7 in \cite{Z:4} is:\n\n\begin{theorem}\n\label{th:1.2}\nLet $u$ be a solution of (\ref{lns}) in a domain $\Omega \subset {\bf\nR}^3 \times {\bf R}$. Suppose $Q_{4r}(x, t) \subset \Omega$, $div\nb =0$ and that $b |_{Q_{2r}(x, t)}$ is in class $K_1$ and $b \in\nL^2_{loc}$.
Then both $u$ and $|\nabla u|$ are bounded functions\nin $Q_{2r}=Q_{2r}(x, t)$, the standard parabolic cube of size $2r$.\n\nMoreover, for some positive constants $C=C(b)$ and $r_0$,\ndepending on the size of the Kato norm of $b$, the corresponding\nquantitative estimates hold when $0 < r \le r_0$.\n\end{theorem}\n\medskip\n\n{\bf Proof.}\n\n{\it Step 1.} Fix $(x, t)$ and let $\eta=\eta(y, s)$ be a smooth cut-off\nfunction supported in $Q_r(x, t)$ with $\eta=1$ in $Q_{r\/2}(x, t)$. Let\n$E_k=E_k(x, t; y, s)$ be the $k$-th column of the fundamental matrix $E$ of\n(\ref{lns}) given in (\ref{Ekform}), and define\n\[\n\Phi_k = \Phi^{(x, t)}_k(y, s) = \nabla_y \times \Big( \eta(y, s)\n\frac{1}{4 \pi} \int_{{\bf R}^3} \frac{ curl E_k(x, t; z, s)}{|z-y|} dz\n\Big).\n\]For $t>s$, $\Phi_k$ is a valid test function\nfor equation (\ref{lns}) since $\Phi_k$ is smooth, compactly supported\nand divergence free. Using $\Phi_k$ as a test function on (\ref{lns}),\nby Definition \ref{defsol}, we obtain\n\be\n\lab{uphi1}\n\int^t_0\int u(y, s) \big{(} \Delta \Phi_k + b \nabla \Phi_k +\n\partial_s \Phi_k \big{)} dyds = \lim_{s \to t} \int u(y, s)\n\Phi_k(y, s) dy.\n\ee Here, we will suppress the superscript $(x, t)$ on\n$\Phi_k$, unless there is confusion. Also $u$ is regarded as a row vector so that $u \Phi_k$ etc. is\na scalar.\n\nSince $E_k$ is divergence free, $curl \ curl E_k = \nabla ( div \, E_k ) - \Delta E_k = - \Delta E_k$.\nThus\n\be\n\lab{(3.8)}\n\aligned \Phi_k(y, s) &= \eta(y, s) E_k(x, t; y, s) + \frac{1}{4\n\pi} \nabla \eta(y, s) \times \int_{{\bf R}^3} \frac{ curl E_k(x,\nt; z, s)}{|z-y|} dz\\\n&\equiv \eta E_k + \overrightarrow{Z}.\n\endaligned\n\ee\n\n{\it Step 2.}\nUsing formula (\ref{Ekform}) of the fundamental matrix $E$ and\nthat of $\overrightarrow{Z}$ in (\ref{(3.8)}), direct computation shows\nthat\n\be\n\lab{uphi2}\n\al\n&\lim_{s \to t} \int u(y, s) \Phi_k(y, s) dy = \eta u_k(x,t)\\\n& + \int \left[\n\partial_{y_k} \Gamma (x, y) (\nabla \eta \cdot u)(y, t)\n+\partial_{y_k} \eta (y, t) (\nabla_y \Gamma(x, y) \cdot u(y, t) )\n- (\nabla \eta \cdot \nabla_y \Gamma(x, y)) u_k(y, t) \right] dy,\n\eal\n\ee\nwhere $u_k$ is the $k$-th component of $u$ and $\Gamma=\frac{1}{4 \pi |x-y|}$\nis the Green's function on ${\bf R}^3$.\n\nAlternatively, from (\ref{(3.8)}),\n\be\n\lab{i+ii}\n\al\n\lim_{s \to t} \int u(y, s) \Phi_k(y, s) dy\n&=\lim_{s \to t} \int \eta u(y, s) E_k(x, t; y, s) dy + \lim_{s \to t} \int u(y, s) \vec{Z}
dy\\\n&\equiv I + II.\n\eal\n\ee First we treat $I$. Consider the Helmholtz decomposition\n\[\n\eta u = \nabla f + X,\n\]where $ div \, X = 0$. Since $\Delta f = div (\eta u) = \nabla \eta \cdot u$ (recall $ div \, u =0$) and\n$\eta(\cdot, s)$ is compactly supported, we can take\n$\nf(y, s) = -\int \Gamma(y, z) ( \nabla \eta \cdot u)(z, s) dz.\n$ Then\n\be\n\lab{x=}\nX=\eta u - \nabla f = \eta u(y, s) + \int \nabla_y \Gamma(y, z) (\nabla \eta \cdot u)(z, s) dz.\n\ee\nUsing the decay estimates\n$\n|\nabla_y E_k(x, t; y, s)| \le \frac{c}{(|x-y| + \sqrt{t-s})^4},\n $ $|f(y, s)| \le \frac{c}{|y|}$ and the fact that $E_k$ is a divergence free vector field in the variable $y$, we can integrate by parts to deduce that\n$\n\int \nabla f E_k(x, t; y, s) dy=0.\n$ Therefore\n\be\n\lab{i=}\n\al\nI&=\lim_{s \to t} \int (\nabla f + X) E_k(x, t; y, s) dy = X_k(x, t)\\\n&= \eta u_k(x, t) + \int \partial_{y_k} \Gamma(x, y) (\nabla \eta \cdot u)(y, t) dy.\n\eal\n\ee\n\nIn order to compute $II$, we see from (\ref{Ekform}) and (\ref{(3.8)}) that\n\[\n\vec{Z}\n= \frac{1}{ 4 \pi} \nabla_y \eta(y, s) \times\n\bigg{(} \nabla_y \big{[}\int_{{\bf R}^3} \frac{ G(x, t; z,\ns)}{|z-y|} dz \big{]} \times e_k \bigg{)}.\n\]This implies\n\be\n\lab{ii=}\n\al\nII=\lim_{s \to t} \int u(y, s) \vec{Z} dy\n=\int u(y, t) \cdot \left[ \nabla_y \eta \times (\nabla_y \Gamma(x, y) \times e_k )\right] dy.\n\eal\n\ee\n\nA combination of (\ref{i=}), (\ref{ii=}) and (\ref{i+ii}) proves (\ref{uphi2}).
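For completeness, the vanishing of the $\nabla f$ term used in (\ref{i=}) can be\nchecked directly. Writing $E_k = (E_{k1}, E_{k2}, E_{k3})$ and integrating by\nparts in $y$,\n\[\n\int \nabla f \, E_k(x, t; y, s) dy = \sum^3_{j=1} \int \partial_{y_j} f \, E_{kj} dy\n= - \int f \, ( div_y \, E_k ) dy = 0,\n\]since $ div_y \, E_k = 0$, and the boundary terms at infinity vanish by the decay\nestimates on $f$ and $\nabla_y E_k$ above.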
By (\\ref{uphi2}) and (\\ref{uphi1}):\n\n\\[\n\\aligned\n&u_k(x, t) = \\int_{Q_r(x, t)} u [ \\Delta (\\eta E_k) +\n\\partial_s (\\eta\nE_k)]dyds + \\int_{Q_r(x, t)} u [\\Delta \\overrightarrow{Z} +\n\\partial_s \\overrightarrow{Z}] dyds\\\\\n& + \\int_{B_r(x)} \\left[(\\nabla \\eta \\cdot \\nabla_y \\Gamma) u_k -\n\\partial_{y_k} \\Gamma (\\nabla \\eta \\cdot u)\n-\\partial_{y_k} \\eta (\\nabla_y \\Gamma \\cdot u )\n \\right](y, t) dy +\\int_{Q_r(x, t)} u b \\nabla \\Phi_k dyds.\n\\endaligned\n\\]Here and later $u b \\nabla \\Phi_k = u \\cdot \\sum^3_{i=1} b_i\n\\partial_i \\Phi_k$.\n\nNote that $\\Delta E_k + \\partial_s E_k=0$ when $s