diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzemnm" "b/data_all_eng_slimpj/shuffled/split2/finalzzemnm" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzemnm" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nEver since the seminal observation by Polyakov \\cite{polyakov} that\nLiouville field theory describes two-dimensional gravity as coupled to\na relativistic string, much attention has been devoted to the problem\nof quantizing it, see\ne.g. \\cite{gernev,curtho,fadtak,seiberg,gervais,takhtajan} and\nreferences therein.\n\nDespite the fact that all the arsenals of canonical quantization\n\\cite{gernev,curtho} and path integral quantization\n\\cite{seiberg,takhtajan} have been used, much remains unclear about\nthe quantum theory, especially in the strongly coupled regime with\ncentral charge $1< c < 25$. Some encouraging results have been\nacquired for this region utilizing the underlying $sl_q(2)$\\ symmetry of\nthe quantum theory \\cite{gervais}, and by solving the conformal Ward\nidentities \\cite{takhtajan}.\n\nLiouville theory being a completely integrable model, the powerful\nmethods of the Quantum Inverse Scattering Method\\ \\cite{fadlhou82} should be applicable to\nquantizing it. In the spirit of this method, Liouville theory was put\non lattice in \\cite{fadtak}, but due to peculiarities of the\nintegrable structure of Liouville theory, only partial quantum results\nwere achieved. The concept of lattice Liouville theory, and the\nparallel concept of lattice Virasoro algebra were further developed\nin \\cite{volkov,fadvol,babelon}.\n\nIn this paper we take a somewhat different approach to putting\nLiouville on the lattice, more suitable for applying the methods of\nthe algebraic Bethe Ansatz\\ (see \\cite{fadBA} for recent reviews). Our\nconventions for classical continuum Liouville theory are explained in\n{\\bf section 2}.\n\nIn order to use the full power of the Quantum Inverse Scattering Method, one needs a $L$-operator\nwhich depends non-trivially of the spectral parameter. The\n$L$-operators used in quantum Liouville theory have so far lacked this\nproperty. In {\\bf Section 3} we shall find a remedy for this\nshortcoming, and thus the methods of the algebraic Bethe Ansatz\\ are in our use.\n\nFollowing the line of thought of \\cite{izekor}, we find in {\\bf Section\n4} a pseudovacuum for the product of $L$-operators on two adjacent\nsites, and derive Bethe Ansatz equations\\ for lattice Liouville theory. The equations\ncan be regarded as Bethe Ansatz equations\\ for a XXZ spin chain with spin\n$(-{\\textstyle \\frac{1}{2}}\\mm)$, with an extra phase factor depending on the length of the\nchain $N$. More exactly, the extra phase is related to the $N$:th\npower of the XXZ anisotropy $q=\\e{\\i\\gamma}$, where $\\gamma$ is the\nLiouville coupling constant.\n\nIn {\\bf Section 5} we then map the spin $(-{\\textstyle \\frac{1}{2}}\\mm)$ Bethe equations to\nthe paradigmatic spin $(+{\\textstyle \\frac{1}{2}}\\mm)$ ones with an extra phase factor. This\nis done in the string approach \\cite{taksuz} to exited states in the\nthermodynamic Bethe Ansatz. The mapping from Liouville to XXZ is successful\nonly for certain root of unity anisotropies $q$; the Liouville\ncoupling has to be of the form $\\gamma=\\pi\\trac{\\nu}{\\nu+1}$, with $\\nu$\nan integer. 
There is a reciprocal one-to-one correspondence between\nBethe states in the lattice Liouville model and the spin ${\\textstyle \\frac{1}{2}}\\mm$ XXZ chain; highest\nstrings are mapped to 1-strings and vice versa.\n\nIn \\cite{blocani,affleck,cardy} it was shown that conformal properties\nof two-dimensional statistical models at criticality can be extracted\n{}from the finite (but large) size corrections to the eigenvalues of the\ntransfer matrix. Based on this, systematic methods for analytical\ncalculation of the finite size corrections\\ have been developed\n\\cite{devewoy,karowski,kluwezi,boizre}. For a spin chain or lattice\nmodel, the finite size analysis yields $\\operN$ corrections to the\neigenvalues of the transfer matrix, where $N$ is the number of lattice\nsites.\n\nThe calculation by Karowski \\cite{karowski} of the conformal weights\nof string excited states in six-vertex and related Potts models will be\nof particular interest to us. In {\\bf Section 6} we review these\nresults for the $\\operN$ corrections to a spin ${\\textstyle \\frac{1}{2}}\\mm$ XXZ chain\\ with extra\nphase factor.\n\nFinally, in {\\bf Section 7} we interpret the results in terms of\nlattice Liouville theory. As the extra phase factor\n$\\exp\\{\\i N\\pi\\trac{\\nu}{\\nu+1}\\}$ is a $\\nu+1$:th root of unity, the\nthermodynamic limit $N\\to\\infty$ of our system of Bethe Ansatz equations\\ should be\ntaken in steps of $\\nu+1$. The phase factor thus becomes a function of\nthe remnant $\\kap = N\/2 \\ \\mbox{mod}\\ (\\nu+1)$.\n\nThe finite size results show that different remnants correspond to\ndifferent excited states, thus generalizing the property of\nspin chains that chains of even and odd length have different spectra.\nThe scaling properties of the antiferromagnetic XXZ vacuum with $\\kap=1$\nyield the central charges of the minimal models \\cite{bpz}\nbelonging to the unitary series of Friedan, Qiu and Shenker \\cite{fqs}.\n\nFor $\\kap=0$ we get an ``unrestricted'' sector with central charge $c=1$.\nExcluding this sector from the theory corresponds exactly to the RSOS\nreduction of a critical SOS model \\cite{anbafo}. The spin ${\\textstyle \\frac{1}{2}}\\mm$\nBethe equations with the extra phase are exactly the Bethe\nequations of an SOS model at criticality. The restriction of these\nmodels is known to produce unitary conformal field theories\\ with $c<1$\n\\cite{huse}.\n\nIt is very plausible that the exclusion of the $\\kap=0$ sector of the\nHilbert space of the theory should be connected to a unitarity\nanalysis in terms of ``good'' representations of the underlying\n$sl_q(2)$\\ symmetry of the theory, cf. \\cite{jutkar}.\n\nWith the result of \\cite{karowski} that XXZ primary states emerge from\na single string excited over the antiferromagnetic vacuum, we then have\ntwo integers to parameterize excited states: the string length $K$ and\nthe remnant $\\kap$. Working in the restricted sector, we recover from\nthe finite size corrections\\ the conformal weights corresponding to the whole Ka\\v c\\ table\nof primary states in unitary conformal field theories, parameterized by these two\nintegers.\n\nWe view the results announced here as encouraging when it comes to the\nanalysis of quantum Liouville theory. 
Generalizing our approach to a\nwider class of Liouville coupling constants, possibly along the lines of\n\\cite{gervais}, might shed more light on the evasive strong coupling regime.\n\nIn addition, doing the mapping of Section 5 in the inverse\ndirection, we see how the Bethe equations of a host of critical\nstatistical models (six-vertex, Potts, III\/IV-critical RSOS) can be\nmapped to the lattice Liouville ones. We regard this as an explicit\nproof of the conformal invariance of these theories at criticality,\nwhich gives a possibility to find a Lagrangian description of the\ncorresponding critical field theories.\n\n\n\\section{Classical Liouville theory}\n\nWe shall be quantizing Liouville theory on a Minkowskian cylinder,\nwith the basic field $\\Phi(x,t)$. The space coordinate is periodic,\n$x\\in[0, 2\\pi]$, and time is non-compact, as usual: $t \\in [-\\infty,\n\\infty]$. We take the classical Liouville action in the form\n\\be\n \\S= \\opertgam \\int_0^{2\\pi} dx\\mm\\left\\{{\\textstyle \\frac{1}{2}}\\mm \\dot\\Phi^2 - {\\textstyle \\frac{1}{2}}\\mm \\Phi'^2\n - 2 \\e{-2 \\i \\Phi} + 2\\i \\Phi''\\right\\} \\.\n \\mabel{claction}\n\\ee\nAs usual, $\\dot\\Phi \\equiv \\frac{\\partial}{\\partial t} \\Phi(x,t)$ and $\\Phi' \\equiv\n\\frac{\\partial}{\\partial x} \\Phi(x,t)$. The so called conformal improvement\nterm $\\i \\Phi''$ is required to make the conformal invariance\nmanifest. It can be considered as the flat space residue of the\ncoupling to the scalar curvature $\\sqrt{g} \\mm R(g)\\mm\\Phi$ of the\nLiouville action on a generic Riemann surface, see\ne.g. {\\cite{seiberg}}.\n\nAs it stands, the action of Equation {\\refe{claction}} seems very\nnon-unitary, but as we shall see in the course of this paper, and as\ncan be inferred from {\\cite{gernev,curtho}}, for specific values of\nthe coupling $\\gamma$ the complexities conspire to yield an\nunitary theory after quantization.\n\nThe equation of motion corresponding to {\\refe{claction}} is\nLiouville's equation (with an imaginary field),\n\\be\n \\ddot\\Phi - \\Phi'' - 4 \\i \\e{-2\\i\\Phi} = 0 \\.\n \\mabel{lioueq}\n\\ee\n In our parameterization the coupling constant does not appear in the\nequation of motion, it only multiplies the action. By redefining the\nfield it is easy to recover the usual way\n\\cite{gernev,curtho,seiberg} of having the coupling in the\nexponential.\n\nTo unravel the conformal invariance of the theory, it is easiest to move\nover to the Hamiltonian picture. Taking the conjugate momentum to be\n$\\Pi = {\\textstyle \\frac{1}{2}}\\mm \\dot\\Phi$, with Poisson Brackets\n\\be\n \\{ \\Phi(x),\\mm \\Pi(y) \\} = - \\gamma \\mm\\delta(x-y) \\,\n \\mabel{pbs}\n\\ee\n we get the conformally improved Hamiltonian\n\\[\n H = \\int dx\\ \\H = {\\textstyle \\frac{1}{\\gamma}}\\mm \\int dx\\mm\\left\\{\n \t\\Pi^2 + {\\textstyle \\frac{1}{4}}\\mm \\Phi'^2 + \\e{-2 \\i \\Phi} - \\i \\Phi'' \\right\\}\n\\]\n and the improved momentum\n\\[\n P = \\int dx\\ \\P = {\\textstyle \\frac{1}{\\gamma}}\\mm \\int dx\\mm\\left\\{\n \t\\Phi' \\Pi - {\\textstyle \\frac{{\\rm i}}{2}}\\mm\\Pi' \\right\\} \\.\n\\]\n According to the prescriptions of radial quantization \\cite{bpz} we\nknow that on a cylinder the role of the lightcone energy-momentum\ntensor is played by the sum of energy and momentum densities\n\\be\n s_+(x) = \\H(x) + \\P(x) \\. 
\\mabel{esplus}\n\\ee\n This sum generates the current algebra\n\\[\n \\{s_+(x), \\mm s_+(y)\\} = 2 \\mm (s_+(x) + s_+(y))\n \\mm\\delta'(x-y) \\mm + \\mm {\\textstyle \\frac{1}{\\gamma}}\\mm \\delta'''(x-y)\n\n\\.\n\\]\n Upon Fourier expanding the light-cone energy-momentum, we get a copy\nof the classical version of the Virasoro algebra,\n\\[\n \\i \\{L_n, L_m \\} = (n-m)L_{m+n}\n - \\frac{\\pi}{2\\gamma}(n^3 - n)\\delta_{n+m,0} \\,\n\\]\n from which we read off the classical central charge\n\\[\n c= - 6 \\pipergam \\.\n\\]\n The difference $s_-(x)\n= \\H(x) -\\P(x)$ generates another, commuting copy of this algebra.\n\n In \\cite{curtho} this system was canonically quantized using a normal\nordering prescription to cope with divergences. The\nquantum corrections shifted the central charge to\n\\[\nc= 1-6(\\pipergam + \\gamperpi -2) \\,\n\\]\n which yields minimal theories for $\\gamperpi$ rational, i.e. when the\ndeformation parameter $q=\\exp{i\\gamma}$ is a root of unity. The subset\n$\\gamperpi = \\frac{\\nu}{\\nu+1}, \\ \\ \\nu=2,3,\\ldots$ corresponds to\nunitary theories. Similar results were obtained in \\cite{gernev}.\n\n\n\\section{An L-matrix for Liouville Theory}\n\nInstead of normal ordering, we shall regularize the ultraviolet\ndivergencies by putting Liouville theory on a lattice, in a way that\npreserves the integrability of the model.\n\nTo get into a position where the methods of the algebraic Bethe\nAnsatz (see e.g. \\cite{fadBA}) can be used, we have to find a spectral\nparameter dependent quantum Lax operator for lattice Liouville theory.\nTo do this, we shall consider Liouville theory as the massless\nlimit of sine-Gordon theory. Indeed, the sine-Gordon equation of\nmotion\\footnote{the peculiar choice of normalization of sine-Gordon\nmass makes the quantum group structure more transparent in the sequel.}\n\\[\n \\mm\\Box\\mm\\Phi + 8 m^2 \\sin(2\\Phi) = 0\n\\]\n goes into the (imaginary) Liouville Equation \\refe{lioueq}, if we\nrescale the field:\n\\be\n \\Phi \\to \\Phi + \\i\\zeta \\,\n \\mabel{scale}\n\\ee\n and take the limit\n\\be\n m\\to 0 \\; \\zeta\\to\\infty\\, \\ \\ \\mbox{ so that}\\ \\ m\\e{\\zeta} = 1 \\.\n \\mabel{limit}\n\\ee\n\n\n\n\\subsection{The Sine-Gordon L-matrix}\n\nFor lattice sine-Gordon there exists a Lax-operator which is based on\na infinite dimensional representation of the underlying quantum group\n$sl_q(2)$\\ \\cite{izekor,fadBA}. In this approach, lattice sine-Gordon is\ntreated as an inhomogeneous XXZ spin chain.\n\nIn an auxiliary matrix space $a$ of $2\\times2$ matrices, we define the\n$L$-operator of the $n$:th site of a XXZ-chain to be the matrix operator\n\\be\n L_{n,a}^{{\\rm xxz}}(\\lambda) = \\mat{ll}{\n \\sinh(\\lambda + \\i\\gamma\\mm S^{(n)}_3) & \\i S^{(n)}_- \\mm \\sin\\gamma\\\\\n \\null&\\null\\\\\n \\i S^{(n)}_+ \\mm \\sin\\gamma& \\sinh(\\lambda - \\i\\gamma\\mm S^{(n)}_3) } \\,\n \\mabel{Lxxz}\n\\ee\n a function of the spectral parameter $\\lambda$. 
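For orientation it may be noted (this explicit form is not needed below) that in the simplest case of spin ${\\textstyle \\frac{1}{2}}\\mm$, where $S^{(n)}_3 = {\\textstyle \\frac{1}{2}}\\mm\\sigma_3$ and $S^{(n)}_\\pm = \\sigma_\\pm$, the operator \\refe{Lxxz} acts on the product of the auxiliary and the quantum space as the $4\\times4$ matrix\n\\[\n L_{n,a}^{{\\rm xxz}}(\\lambda) = \\mat{cccc}{\n \\sinh(\\lambda+\\i{\\textstyle \\frac{\\gamma}{2}}\\mm) & \\null& \\null& \\null \\\\\n \\null & \\sinh(\\lambda-\\i{\\textstyle \\frac{\\gamma}{2}}\\mm)& \\i\\sin\\gamma& \\null \\\\\n \\null & \\i\\sin\\gamma& \\sinh(\\lambda-\\i{\\textstyle \\frac{\\gamma}{2}}\\mm)& \\null \\\\\n \\null & \\null& \\null& \\sinh(\\lambda+\\i{\\textstyle \\frac{\\gamma}{2}}\\mm)} \\.\n\\]\n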
If the quantum\noperators defined on the sites generate the quantum group $sl_q(2)$,\n\\be\n \\com{S^{(n)}_+}{S^{(m)}_-} = \\frac{\\sin(2\\gamma S^{(n)}_3)}{\\sin\\gamma}\n\\ \\delta_{n,m}\n\\; \\com{S^{(n)}_3}{S^{(m)}_\\pm} = \\pm S^{(n)}_\\pm \\ \\delta_{n,m} \\,\n \\mabel{QG}\n\\ee\n the L-operators \\refe{Lxxz} acting on auxiliary spaces $a_1$ and $a_2$\nfulfill the fundamental commutation relations (FCR)\n\\bea\n R_{12}(\\lambda-\\mu) \\stack{1}{L}_n(\\lambda) \\stack{2}{L}_n(\\mu) &=&\n\\stack{2}{L}_n(\\mu) \\stack{1}{L}_n(\\lambda) R_{12}(\\lambda-\\mu)\n \\mabel{FCR} \\\\\n\\stack{1}{L}_n(\\lambda) \\stack{2}{L}_m(\\mu) &=&\n\\stack{2}{L}_m(\\mu) \\stack{1}{L}_n(\\lambda)\\,\\ \\mbox{for}\\ m\\neq n\\.\n \\nonumber\n\\eea\n Here the usual notation for matrices on the tensor product of two\nauxiliary spaces is adopted; $\\stack{1}{L}_n \\equiv L_{n,a_1}\\otimes\n1\\!\\!\\hskip 1pt\\mbox{l}_{a_2}$ etc.\n\nThe FCR encode the integrability of the system, and they are the basis\nof utilizing the algebraic Bethe Ansatz. The XXZ chain belongs to the\nclass where the R-matrix is trigonometric:\n\\be\n R=\\mat{cccc}{\\alpha & \\null& \\null& \\null \\\\\n \\null & \\beta& \\delta& \\null \\\\\n \\null & \\delta& \\beta& \\null \\\\\n\t \\null & \\null& \\null& \\alpha} \\;\n \\begin{array}{l}\\alpha=\\sinh(\\lambda+\\i\\gamma)\\\\\n\t\t\t\t\t \\beta=\\sinh\\lambda\\\\\n\t\t\t\t\t \\delta=\\i\\sin\\gamma \\end{array}\\.\n\\mabel{R}\n\\ee\n The ordered product of L-matrices around the periodic chain is the\nmonodromy matrix\n\\be\n T(\\lambda) = L_{\\sN,a}(\\lambda)\\mm L_ {\\sss{ N-1},a}(\\lambda)\n\\mm\\ldots\\mm L_{\\sss{1},a}(\\lambda) \\equiv \\mat{cc}{\\A(\\lambda)&\\B(\\lambda)\\\\\n\t\t\t\t\t \\C(\\lambda)&\\D(\\lambda)} \\.\n \\mabel{monodromedary}\n\\ee\n The conserved quantities can now be expressed as traces (over the\nauxiliary space $a$) of powers of $T$, which all commute due to the\nfundamental commutation relations. 
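The relations \\refe{FCR} are easy to verify numerically in the simplest case of spin ${\\textstyle \\frac{1}{2}}\\mm$, where \\refe{QG} is satisfied by $S_3={\\textstyle \\frac{1}{2}}\\mm\\sigma_3$, $S_\\pm=\\sigma_\\pm$. The following short \\texttt{Python} sketch (purely illustrative; the numerical values of $\\gamma$, $\\lambda$ and $\\mu$ are arbitrary) builds the $L$-operators \\refe{Lxxz} and the $R$-matrix \\refe{R} and checks \\refe{FCR} to machine precision:\n\\begin{verbatim}\nimport numpy as np\n\ngam, lam, mu = 0.7, 0.43, -0.21      # generic anisotropy and spectral parameters\n\nI2 = np.eye(2, dtype=complex)\nSp = np.array([[0, 1], [0, 0]], dtype=complex)     # S_+ for spin 1\/2\nSm = Sp.T.copy()                                   # S_-\n\ndef E(a, b):                                       # matrix unit in an auxiliary space\n    m = np.zeros((2, 2), dtype=complex); m[a, b] = 1; return m\n\ndef L_blocks(l):                                   # entries of (Lxxz) as quantum operators\n    d_plus  = np.diag([np.sinh(l + 0.5j*gam), np.sinh(l - 0.5j*gam)])\n    d_minus = np.diag([np.sinh(l - 0.5j*gam), np.sinh(l + 0.5j*gam)])\n    return [[d_plus, 1j*np.sin(gam)*Sm], [1j*np.sin(gam)*Sp, d_minus]]\n\ndef L1(l):                              # L on auxiliary space 1 and the quantum space\n    return sum(np.kron(np.kron(E(a, b), I2), L_blocks(l)[a][b])\n               for a in range(2) for b in range(2))\n\ndef L2(l):                              # L on auxiliary space 2 and the quantum space\n    return sum(np.kron(np.kron(I2, E(a, b)), L_blocks(l)[a][b])\n               for a in range(2) for b in range(2))\n\ndef R12(l):                             # the R-matrix (R), trivial on the quantum space\n    al, be, de = np.sinh(l + 1j*gam), np.sinh(l), 1j*np.sin(gam)\n    R = np.array([[al, 0, 0, 0], [0, be, de, 0], [0, de, be, 0], [0, 0, 0, al]])\n    return np.kron(R, I2)\n\nlhs = R12(lam - mu) @ L1(lam) @ L2(mu)\nrhs = L2(mu) @ L1(lam) @ R12(lam - mu)\nprint(np.max(np.abs(lhs - rhs)))        # of the order 1e-16\n\\end{verbatim}\n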
The trace of $T$ over the auxiliary space can be interpreted as\nthe row-to-row transfer matrix of the corresponding two-dimensional\nstatistical model,\n\\be\n \\tau(\\lambda) = \\mbox{Tr}_a\\bigl(T(\\lambda)\\bigr) = \\A \\mm + \\mm \\D \\.\n\\mabel{transfer}\n\\ee\n\n\n\n\n\\vali\n\nTo get a $L$-matrix for sine-Gordon, we use an infinite dimensional\nrepresentation of the quantum group, generated by the canonical\nvariables $\\Phi_n,\\mm \\Pi_n$ with commutation relations\n\\be\n \\com{\\Phi_n}{\\Pi_m} = \\i\\gamma\\mm\\delta_{mn} \\.\n \\mabel{commutator}\n\\ee\n Now we can write the generators\n\\be\n S_3^{(n)} = - {\\textstyle \\frac{1}{\\gamma}}\\mm\\Phi_n \\; S_\\pm^{(n)}\n = \\frac{1}{2 m \\sin\\gamma} \\e{\\pm{\\textstyle \\frac{{\\rm i}}{2}}\\mm\\Pi_n}\n\\Bigl(1 + m^2\\e{2\\i\\Phi_n}\\Bigr) \\e{\\pm{\\textstyle \\frac{{\\rm i}}{2}}\\mm\\Pi_n} \\,\n \\mabel{QGgen}\n\\ee\n which fulfill the commutation relations \\refe{QG}.\n\n\n Using Generators \\refe{QGgen} in Equation \\refe{Lxxz} and multiplying with\nthe matrix $-2 m \\i \\mm \\sigma_1$, we get the $L$-matrix of lattice\nsine-Gordon theory\n\\bea\n L^{SG}_{n,a} &=& -2 m\\i\\mat{rr}{0&1\\\\1&0} L_{n,a}^{{\\rm xxz}} \\cr\n &\\null& \\cr &=& \\mat{ll}{\n h_+(\\Phi_n)\\e{\\i\\Pi_n} & -2m\\i\\sinh(\\lambda + \\i\\Phi_n) \\\\\n -2m\\i\\sinh(\\lambda - \\i\\Phi_n) & h_-(\\Phi_n)\\e{-\\i\\Pi_n}} \\.\n \\mabel{sGL}\n\\eea\n Here we have denoted\n\\[\n h_\\pm(\\Phi) \\equiv 1 + m^2 \\e{\\pm 2\\i\\Phi +\\i\\gamma} \\.\n\\]\n Multiplying $L$ with $\\sigma_1$ is a symmetry of the FCR, so\nL-matrix \\refe{sGL} still satisfies Equation \\refe{FCR}.\n\n\n\\subsection{Massless Limit of the Sine-Gordon L-matrix}\n\nNow we perform the scaling and limiting procedure of Equations\n(\\ref{scale},\\ref{limit}) on the sine-Gordon L-matrix \\refe{sGL}, in\norder to get a $L$-matrix for lattice Liouville theory.\n\nThe functions $h_\\pm$ scale to\n\\[\n h_+ \\to 1 \\; h_- \\to 1 - \\e{-2\\i\\Phi + i\\gamma} \\equiv h \\.\n\\]\n Performing\n(\\ref{scale},\\ref{limit}) directly on \\refe{sGL}, we thus get\n\\be\n \\tilde L_{n,a}^{\\L} = \\mat{ll}{\n \\e{\\i\\Pi_n} & -\\e{-\\lambda - \\i\\Phi_n} \\\\\n \\e{\\lambda - \\i\\Phi_n} & h(\\Phi_n)\\e{-\\i\\Pi_n}} \\. \\mabel{trivL}\n\\ee\n This $L$-matrix with spectral parameter was acquired in\n\\cite{fadtak}. The $\\lambda$-dependence of $ \\tilde L_{n,a}^{\\L}$ is\ntrivial, though, it can be removed by a lattice gauge\ntransformation. In other words, the quantum determinant (the central\nelement of the algebra generated by the elements of $\\tilde L_{n,a}$) is\nindependent of $\\lambda$. If the $\\lambda$ dependence is removed, we are\nleft with the constant $L$-operators inherent in the approaches of\n\\cite{fadtak,gervais,babelon}. As such, this $L$-matrix is not\nviable for the Bethe Ansatz. However, let us comment that work with Lax\noperator {\\refe{trivL}} leads naturally to the lattice deformation of\nVirasoro algebra \\cite{fadtak,volkov,fadvol,babelon}.\n\nTo get a Liouville $L$-matrix with non-trivial spectral parameter\ndependence, we have to manipulate \\refe{sGL} in a more involved\nway.\\footnote{This prescription was communicated to us by A. 
Volkov.}\nFirst, we perform a constant lattice gauge transformation on\n\\refe{sGL}:\n\\[\n L^{SG}_{n,a } \\to g \\mm L^{SG}_{n,a} \\mm g^{-1} \\;\n g = \\mat{ll}{m^{{\\textstyle \\frac{1}{2}}\\mm}&\\null\\\\ \\null & m^{-{\\textstyle \\frac{1}{2}}\\mm} }\n\\]\n Then we make the scaling \\refe{scale}, accompanied by a\nrenormalization of the spectral parameter:\n\\be\n \\lambda \\to \\lambda - \\zeta \\.\n \\mabel{renorm}\n\\ee\n After scaling and renormalizing, the limiting procedure \\refe{limit}\nproduces\n\\be\n g \\mm L^{SG}_{n,a} \\mm\\mm g^{-1} \\to \\mat{ll}{\n \\e{\\i\\Pi_n} & -\\i\\e{-\\lambda - \\i\\Phi_n} \\\\\n -2\\i\\sinh(\\lambda - \\i\\Phi_n) & h(\\Phi_n)\\e{-\\i\\Pi_n}}\n \\equiv L^{\\L}_{n,a} \\.\n \\mabel{Lliou}\n\\ee\n\n\nThis is indeed the sought for quantum $L$-matrix for Liouville theory\nwith a non-trivial spectral parameter, corresponding to an integrable\nlattice regularization of the Liouville system described by Action\n\\refe{claction}.\n\nThe easiest way to see that $L$-matrix \\refe{Lliou} corresponds to an\nintegrable lattice version of quantum Liouville theory, is to take the\nclassical continuum limit of \\refe{Lliou} , and find the corresponding\nclassical continuum dynamics.\n\nTo define the classical limit, one has to recover Planck's constant.\nThis is achieved by reinterpreting $\\gamma \\to \\hbar\\gamma$ in all quantum\nexpressions. In the classical limit commutators turn into Poisson\nBrackets according to the usual Heissenberg correspondence,\n$\\frac{\\i}{\\hbar}[,] \\to \\{,\\}$.\n\nSimilarly, to find the continuum limit the lattice spacing $a$ has to\nbe recovered. The lattice mass in \\refe{sGL} should be $m = a\\mm m'\n$, and the {\\it continuum} mass $m'$ should be taken to zero according\nto Equation \\refe{limit}.\n\nThe continuum variables are defined by\n\\[\n \\Pi_n \\to a \\mm\\Pi(x)\\; \\Phi_n \\to \\Phi(x) \\;\n \\delta_{mn} \\to a\\mm\\delta(x-y) \\,\n\\]\n which maps the discrete brackets corresponding to \\refe{commutator}\nto the continuous ones of Equation \\refe{pbs}.\n\nNow we get in the classical continuum limit the matrix\n\\be\n U(x,\\lambda) = \\lim_{a, \\hbar \\to 0} \\mm \\trac{1}{\\i a} \\mm\n(L^\\L - 1\\!\\!\\hskip 1pt\\mbox{l}) = -\\mat{ll}{\n \t-\\Pi(x) & \\e{-\\lambda - \\i\\Phi(x)} \\\\\n\t 2 \\sinh(\\lambda + \\i\\Phi(x)) & \\Pi(x)} \\.\n \\mabel{U}\n\\ee\n As the classical limit of fundamental commutation relations\\ \\refe{FCR}, $U$ satisfies the so called\nfundamental Poisson brackets\n\\[\n \\pois{\\stack{1}{U}(x,\\lambda)}{\\stack{2}{U}(y,\\mu)} =\n \\i\\gamma\\com{r_{12}(\\lambda-\\mu)}{\\stack{1}{U}(x,\\lambda) +\n \t\t\\stack{2}{U}(x,\\mu)} \\mm \\delta(x-y) \\,\n\\]\n with the trigonometric classical $r$-matrix\n\\[\n r(\\lambda) = \\lim_{\\hbar \\to 0} \\mm\\frac{-1}{\\i\\hbar} \\mm\n\\left(\\frac{R(\\lambda)}{\\sinh\\lambda} - 1\\!\\!\\hskip 1pt\\mbox{l}\\right)\n = \\frac{-1}{\\sinh\\lambda}\\mat{cccc}{\n \\cosh\\lambda & \\null& \\null& \\null \\\\\n \\null & 0& 1& \\null \\\\\n \\null & 1& 0& \\null \\\\\n\t \\null & \\null& \\null& \\cosh\\lambda} \\.\n\\]\n\nTogether with the matrix\n\\[\n V(x,\\lambda) = - \\mat{ll}{\n \t{\\textstyle \\frac{1}{2}}\\mm\\Phi'(x) & \\e{-\\lambda -\\i\\Phi(x)} \\\\\n\t2 \\sinh(\\lambda - \\i\\Phi(x)) & -{\\textstyle \\frac{1}{2}}\\mm\\Phi'(x)} \\,\n\n\\]\n the classical L-matrix $U$ forms a Lax-pair for Liouville theory:\n\\[\n \\dot U + V' + i\\com{U}{V} = 0\n \\iff \\mm\\Box\\mm\\Phi - 4\\i\\e{-2\\i\\Phi} = 0 \\.\n\\]\n This Lax-pair can be acquired in the scaling 
and limiting procedure\n(\\ref{scale},\\ref{renorm},\\ref{limit}) from the Lax-pair of classical\nsine-Gordon in Reference \\cite{fadkor}.\n\nA. Volkov brought to our attention the fact that Lax-matrix\n{\\refe{U}} is gauge equivalent to\n\\[\n \\tilde U(x,\\mu) = \\mat{cc}{0& \\ \\mu - s_+\\\\\n\t\t 1&0} \\,\n\\]\n where $\\mu = \\exp{2\\lambda}$, and $s_+$ is the energy-momentum density of\nEquation \\refe{esplus}. The Lax Equation turns into the Schr\\\"odinger\nequation\n\\be\n -\\psi'' + s_+\\psi = \\mu \\psi \\,\n \\mabel{scroe}\n\\ee\n which is usually used in connection with the KdV equation. The role of\nEquation \\refe{scroe} for Liouville theory is stressed in {\\cite{dzpopota}}.\n\n\n\\section{Bethe Ansatz for lattice Liouville theory}\n\nIn the algebraic Bethe Ansatz, one tries to triangularize the local $L$-operators in\norder to diagonalize the transfer matrix over the\nchain, which yields the conserved quantum quantities. This is done by\nfinding a local pseudovacuum, which is annihilated by one of the\noff-diagonal components in $L_{n,a}$. In addition, to get an\neigenstate of the transfer matrix, the pseudovacuum should be an\neigenstate of the diagonal components.\n\nThe $L$-operator \\refe{Lliou} developed in the previous section does\nnot have a local pseudovacuum. However, as is the case with\nits ancestor $L^{SG}$ \\cite{izekor}, the product of two $L^\\L$:s from\nadjacent sites indeed has a pseudovacuum.\n\nWe denote $\\Phi_{2n} = \\Phi_2\\mm ;\\ \\Phi_{2n-1} = \\Phi_1$, and\nsimilarly for $\\Pi$. The product of two Lax-operators is thus\n\\be\n \\L = L^\\L_{2n,a}\\mm L^\\L_{2n-1,a} \\mm = \\mat{cc}{A&B\\\\ C&D} \\,\n \\mabel{LL}\n\\ee\n with\n\\begin{eqnarray*}\n A &=& \\e{\\i(\\Pi_2+\\Pi_1)} -2 \\e{-\\lambda - \\i\\Phi_2} \\sinh(\\lambda - \\i\\Phi_1) \\\\\n B &=& -\\i\\e{-\\lambda+\\i(\\Pi_2 -\\Phi_1)}\n -\\i h(\\Phi_1)\\e{-\\lambda - \\i(\\Phi_2 +\\Pi_1)} \\\\\n C &=& -2\\i\\sinh(\\lambda -\\i\\Phi_2)\\e{\\i\\Pi_1}\n - 2 \\i h(\\Phi_2)\\e{-\\i\\Pi_2}\\sinh(\\lambda - \\i\\Phi_1) \\\\\n D &=& h(\\Phi_2) h(\\Phi_1) \\e{-\\i(\\Pi_2+\\Pi_1)}\n - 2 \\sinh(\\lambda -\\i\\Phi_2)\\e{-\\lambda-\\i\\Phi_1}\n\\end{eqnarray*}\n\nInspired by \\cite{izekor,fadBA} we make the Ansatz\n\\[\n \\psi = f(\\Phi_1) \\mm\\delta(\\Phi_1-\\Phi_2 -\\gamma)\n\\]\n for the pseudovacuum at site $n$, with $f$ a function to be\ndetermined. Demanding that the off-diagonal operator $C$ annihilates\nthe vacuum, we get the functional relation\n\\be\n f(\\Phi+\\gamma) = - h(\\Phi) \\mm f(\\Phi) \\.\n \\mabel{functrel}\n\\ee\n This equation sets very stringent conditions on the function $f$. It\nis a sufficient condition for the pseudovacuum;\nwhen it is fulfilled, the actions of $A$ and $D$ on $\\psi$ are\ndiagonal, with the eigenvalues:\n\\bea\n A\\mm\\psi &=& (\\!\\e{-2\\lambda +\\i\\gamma} -1)\\mm \\psi \\equiv a(\\lambda)\\mm\\psi \\cr\n D\\mm\\psi &=& (\\!\\e{-2\\lambda -\\i\\gamma} -1)\\mm \\psi \\equiv d(\\lambda)\\mm\\psi \\.\n \\mabel{aetd}\n\\eea\n\nThe treatment of Functional Relation \\refe{functrel} depends crucially\non the value of $q~=~\\e{\\i\\gamma}$. When $|q| < 1$, Equation\n\\refe{functrel} is exactly fulfilled by the quantum dilogarithm, see\nRef. \\cite{fadkas}. 
Following \\cite{fadcur}, we get an explicit solution\nfor the case $|q| =1$ as well, which is of interest to this paper:\n\\[\n f(\\Phi) = \\exp{\\mm \\int_{-\\infty}^\\infty\n\\frac{dx}{4x}\\mm\\mm \\frac{\\e{(\\gamma-\\pipert-\\Phi) x}}\n{\\sinh\\pipert x \\sinh{\\textstyle \\frac{\\gamma}{2}}\\mm x}\n} \\mm\\.\n\\]\n The singularity of the integral at $x=0$ is left under the\nintegration path. For more discussion on solutions of \\refe{functrel}\nand other similar functional equations we refer to\n\\cite{volkov,fadvol,fadkas,fadcur}.\n\n{}From the local pseudovacuums $\\psi_n$ we can now build up a total\npseudovacuum for the quantum chain:\n\\[\n \\Psi = \\otimes_{n=1}^N \\psi_n \\,\n\\]\n where the amount of paired sites is denoted by $N$.\n\nThe two-site L-operators \\refe{LL} become triangular when acting on the\nlocal vacuum. Thus the monodromy \\refe{monodromedary} acting on the\ntotal pseudovacuum $\\Psi$ is triangular as well:\n\\[\n T(\\lambda) \\mm\\Psi = \\mat{cc}{a^N(\\lambda)\\mm\\Psi & *\\\\\n\t\t\t 0 & d^N(\\lambda)\\mm\\Psi } \\.\n\\]\n The star in the upper right corner denotes a complicated state\ncreated by various combination of $A$:s, $D$:s and one $B$ acting on\n$\\Psi$.\n\nCorrespondingly, the transfer matrix \\refe{transfer} has the\npseudovacuum eigenvalue\n\\be\n \\tau(\\lambda)\\mm \\Psi = (a^N(\\lambda)\\mm + \\mm d^N(\\lambda))\\mm\\Psi \\.\n \\mabel{eigentransfer}\n\\ee\n\n\n\\subsection{The Bethe Ansatz Equations}\n\n\nIn the algebraic Bethe Ansatz\\ one makes the assumption that the exited states of the\ntheory can be obtained from the pseudovacuum by acting on it with the\n``pseudoparticle creation operator'' $\\B$. An arbitrary state of the\nform\n\\be\n \\Psi_m = \\prod_{j=1}^m \\B(\\lambda_j) \\mm\\Psi\n\\mabel{psim}\n\\ee\n is characterized by $m$ values of the spectral parameter $\\lambda_j$.\nThis state is an eigenstate of the trace of the monodromy,\ni.e. $\\A+\\D$, if the spectral parameters $\\set{\\lambda_j}$ satsify a\nset of $m$ coupled trancendental equations known as the Bethe Ansatz equations.\nWritten in terms of the components\nof $R$-matrix \\refe{R} and the eigenvalues $a^N, d^N$ of $\\A$\nand $\\D$, these equations read\n\\be\n \\left[\\frac{a(\\lambda_k)}{d(\\lambda_k)}\\right]^N =\n\\prod_{\\stackrel{j=1}{ j\\neq k}}^m\n \\frac{\\alpha(\\lambda_k-\\lambda_j)\\beta(\\lambda_j-\\lambda_k)}{\\alpha(\\lambda_j-\\lambda_k)\n \\beta(\\lambda_k-\\lambda_j)}\n \\ \\ \\forall\\ k\\.\n \\mabel{fBAE}\n\\ee\n\n Solutions to \\refe{fBAE} appear in sets of $m$ spectral parameters,\nso called Bethe Ansatz\\ roots, which parameterize different states of the\nsystem. A more refined analysis shows that generically all Bethe Ansatz\\ roots\nare distinct, and that $m\\leq N\/2$, see e.g. \\cite{fadBA}.\n\nFor lattice Liouville theory, the eigenvalues $a$ and $d$\nare given by Equation \\refe{aetd}, and the ratio on the left hand side of\nEquation \\refe{fBAE} is\n\\[\n \\frac{a(\\lambda)}{d(\\lambda)} = \\e{\\i\\gamma} \\BAEspsmi{\\lambda}{\\i{\\textstyle \\frac{\\gamma}{2}}\\mm} \\.\n\\]\n This leads to the Bethe Ansatz equations\n\\be\n \\e{\\i N\\gamma} \\left[\\BAEspsmi{\\lambda_k}{\\i{\\textstyle \\frac{\\gamma}{2}}\\mm}\\right]^N =\n\\prod_{j,\\mm j\\neq k} \\BAEspspl{\\lambda_k-\\lambda_j}{\\i\\gamma} \\ \\ \\forall\\ k\\.\n \\mabel{BAE}\n\\ee\n On the other hand, the Bethe Ansatz equations\\ of a spin-S XXZ chain are\n(see e.g. 
\\cite{fadBA})\n\\be\n \\left[\\BAEspspl{\\lambda_k}{\\i S\\gamma}\\right]^N =\n \\prod_{j,\\mm j\\neq k} \\BAEspspl{\\lambda_k-\\lambda_j}{\\i\\gamma} \\ \\ \\forall\\ k\\.\n \\mabel{xxzBAE}\n\\ee\n Comparing \\refe{BAE} to \\refe{xxzBAE} we see that in terms of the Bethe\nAnsatz, lattice Liouville theory corresponds to a spin $(-{\\textstyle \\frac{1}{2}}\\mm)$ XXZ\nspin chain, with an extra phase factor $\\e{\\i N\\gamma}$.\n\nPhase factors of the form $\\e{\\i \\theta}$ have appeared earlier in the\nliterature on the Bethe Ansatz. In \\cite{karowski} such a factor was used to analyze Bethe Ansatz equations\\\nfor Potts models, related to the insertion of a boundary\nfield along a seam. Similarly, twisted boundary conditions in Reference\n\\cite{kluwezi} manifest themselves in the form of extra phase factors.\nMost interestingly, in the analysis of the eight-vertex model\n\\cite{baxter,takfad79}, there appears a ``theta-vacuum like'' phase\nfactor in the Bethe equations, related to different SOS-sectors of the\ntheory.\n\nHere, however, an important difference from the situations of\n\\cite{karowski,kluwezi,baxter,takfad79} arises: the length $N$ of\nthe chain appears in the phase factor. When $q$ is a root of unity,\nthis will give us an extra integer parameter to parametrize\n``primary'' excited states, in addition to the string length used in\n\\cite{karowski}. These two integers then give us enough freedom to\nparameterize the whole Ka\\v c\\ table.\n\nThus different chain lengths occurring in the lattice Liouville model\ncorrespond exactly to the different ``theta-vacuum'' sectors in a\ncritical (R)SOS model.\n\n\n\\section{Mapping the Lattice Liouville Model\n\t\tto the Spin ${\\textstyle \\frac{1}{2}}\\mm$ XXZ Chain }\\mabel{maptoxxz}\n\n\\noindent\nThe thermodynamics \\cite{taksuz} and finite size effects\n\\cite{devewoy,karowski,kluwezi,boizre} of the Bethe Ansatz\\ of the fundamental spin\n$+{\\textstyle \\frac{1}{2}}\\mm$ representation of XXZ-chains have been widely discussed in\nthe literature. Accordingly it would be a desirable goal to map the\nspin $(-{\\textstyle \\frac{1}{2}}\\mm)$ Liouville Bethe equations \\refe{BAE} to the spin\n$+{\\textstyle \\frac{1}{2}}\\mm$ XXZ ones \\refe{xxzBAE}. This can be done in the string\npicture \\cite{taksuz} of solutions to the Bethe Ansatz equations.\n\nIn order to do the mapping, we thus make the string hypothesis that in the\nthermodynamic limit $N\\to\\infty$, complex Bethe Ansatz\\ roots $\\set{\\lambda_j}$\ncluster to complexes of the form\n\\be\n \\lambda^{(\\sK)} = \\lambda_\\sK \\mm + \\mm k \\i\\gamma\\, \\ \\ k= -{\\textstyle \\frac{1}{2}}\\mm (K-1), \\mm\\ldots\\mm,\n \\mm{\\textstyle \\frac{1}{2}}\\mm(K-1) \\; \\Im\\lambda_\\sK\\in\\{0,\\pipert\\}\\, \\ \\ K\\in Z_+ \\.\n \\mabel{kstring}\n\\ee\n A collection of $K$ Bethe Ansatz\\ roots with a common center obeying\nEquation \\refe{kstring} is known as a $K$-string. One-strings are the\nusual real Bethe Ansatz\\ roots. Strings with $\\Im\\lambda_\\sK = 0$ are called\npositive parity strings; if $\\Im\\lambda_\\sK = \\pipert$ one speaks of\nnegative parity strings.\n\nIt is important to notice that when $q$ is a root of unity there\nis an upper limit on the length of the string $K$\n\\cite{taksuz}. With\n\\[\n \\left[\\e{\\i\\gamma}\\right]^{\\nu+1} = \\pm 1 \\,\n\\]\n the maximal string length is\n\\[\n K_{{\\rm max}} = \\nu.\n\\]\n This limit on the range of $K$ changes the usual concept of\ncombinatorial completeness of Bethe Ansatz\\ states. In Ref. 
\\cite{jutkar} it\nwas argued for a $sl_q(2)$\\ invariant spin chain at $q$ root of unity\nthat the Bethe Ansatz\\ states exhibit completeness in the space of ``good''\nrepresentations of the quantum group.\n\n From now on we shall concentrate only on the root of unity case. More\nspecifically, we shall assume that the Liouville coupling constant\ntakes only the values\n\\[\n \\gamma = \\frac{\\pi\\nu}{\\nu +1} \\; \\nu\\in Z_+ +1 \\.\n\n\\]\n Using the string picture, we shall be able to map the states of a\nLiouville chain with ``anisotropy'' $\\gamma$ to a spin ${\\textstyle \\frac{1}{2}}\\mm$ spin\nchain with anisotropy\n\\be\n {\\tilde\\gamma} = \\frac{\\pi}{\\nu +1} \\; \\nu\\in Z_+ +1 \\.\n \\mabel{tilgam}\n\\ee\n\nWe begin with positive parity strings. The number of $L$-strings is\n$n_\\sL$, and the total number of Bethe Ansatz\\ roots is\n\\[\n m = \\sum_{L=1}^\\nu L \\mm n_\\sL \\.\n\n\\]\n The rapidities of different $L$-strings are denoted $\\lambda_{\\sL,j}\\, \\\nj=1\\mm\\ldots\\mm n_\\sL$.\n\nMultiplying the Bethe Ansatz equations\\ \\refe{BAE} for all the rapidities comprising a\n$K$-string, we get a set of coupled equations of the {\\it\nreal parts} of the strings only.\n\nThe terms in the right hand side of (\\ref{fBAE}--\\ref{xxzBAE}), which\ndescribe the effect on scattering of pseudoparticles on each other,\nnow become scattering matrices of strings on strings.\n\nThe scattering of $K$-strings on 1-strings is described by the\nfunction\n\\bea\n S_{\\sss{ K 1}}(\\lambda) & \\equiv&\n\\prod_{k=-{{\\scriptscriptstyle \\frac{1}{2}(K-1)}}}^{{\\scriptscriptstyle \\frac{1}{2}(K-1)}} S_{1 1}(\\lambda)\\ =\n\\prod_{k=-{{\\scriptscriptstyle \\frac{1}{2}(K-1)}}}^{{\\scriptscriptstyle \\frac{1}{2}(K-1)}} \\BAEsps{\\lambda}{\\i k\\gamma} \\cr\n& &\\cr\n& &\\cr\n&=& \\BAEsps{\\lambda}{{\\textstyle \\frac{1}{2}}\\mm(K+1)\\i\\gamma}\\\n\\BAEsps{\\lambda}{{\\textstyle \\frac{1}{2}}\\mm(K-1)\\i\\gamma} \\. \\mabel{Sk1}\n\\eea\n In terms of this function, we can write the scattering matrix\nof $K$-strings on $L$-strings as\n\\bea\n S_{\\sK\\sL}(\\lambda) &=& \\prod_{k=-{{\\scriptscriptstyle \\frac{1}{2}(K-1)}}}^{{\\scriptscriptstyle \\frac{1}{2}(K-1)}}\\\n\\prod_{l={{\\scriptscriptstyle \\frac{1}{2}(L-1)}}}^{{\\scriptscriptstyle \\frac{1}{2}(L-1)}}\\\n \\BAEsps{\\lambda}{(k-l+1)\\mm\\i\\gamma} \\cr\n& &\\cr\n&=& \\prod_{k=\\sss{ \\jhalf |K-L|}}^{\\sss{ \\jhalf\n(K+L) - 1}} \\ S_{k1}(\\lambda) \\\n= \\prod_{k=\\sss{ \\jhalf |K-L|}}^{\\sss{ {\\rm\nmin} \\left[\\jhalf (K+L) - 1,\\ \\nu -\\jhalf(K+L)\\right]}} \\\nS_{k1}(\\lambda)\n\\.\n \\mabel{S}\n\\eea\n The first expression in terms of $S_{k1}$ bears close resemblance to\n(an exponential form of) the decomposition of the tensor product of\ntwo irreducible $SU(2)$ representations with spins ${\\textstyle \\frac{1}{2}}\\mm(K-1)$ and\n${\\textstyle \\frac{1}{2}}\\mm(L-1)$. Indeed, the Bethe Ansatz\\ can be viewed as a\ndifferent way of reducing tensor products of spin 1\/2 particles into\nirreducible representations. The factor $S_{\\sK 1}$ attached to a\nspin ${\\textstyle \\frac{1}{2}}\\mm(K-1)$ representation is obtained by fusing $K$\nfactors $S_{11}$ corresponding to spin 1\/2 representations. The\n$S$-matrix then naturally reflects the decomposition of the tensor\nproducts of the representation spaces involved in the scattering.\n\nThe second (``reduced'') expression in terms of $S_{k1}$ is peculiar\nto the root of unity case. 
It may be related to the decomposition of\nthe tensor product of two ``good'' quantum group representations,\nc.f. \\cite{jutkar}. The reduced decomposition is related to the\nfusion rules of conformal blocks as treated in {\\cite{verlinde}},\nsuggesting that conformal blocks might be represented by Bethe Ansatz\\\nstrings. In the sequel we will see, how this is to be interpreted.\n\nThe reduced form shows explicitly the symmetry of\n$S_{\\sK\\sL}$ which will allow us to map the Liouville Bethe equations\non the spin ${\\textstyle \\frac{1}{2}}\\mm$ XXZ Bethe equations:\n\\be\n S_\\sss{ \\nu+1-K,\\mm\\nu+1-L}(\\lambda)\n = S_{\\sK,\\sL}(\\lambda) \\.\n \\mabel{Sklsym}\n\\ee\n\n\n Expressing the Liouville Bethe equations \\refe{BAE} in terms of\nstrings we get\n\\be\n \\e{\\i K N\\gamma} \\ \\left[\\BAEspsmi{\\lambda_{\\sK,i}}{\\i\\kpert\\gamma}\\right]^N\n = \\prod_{\\sL; \\ n_\\sL\\neq 0} \\\n \\prod_{\\stackrel{j=1}{(\\sL,j)\\neq(\\sK,i)}}^{n_\\sL}\n S_{\\sK\\sL}(\\lambda_{\\sK,i} - \\lambda_{\\sL,j})\\ \\ \\ \\ \\forall\\ (K,i)\\.\n \\mabel{sBAE}\n\\ee\n\nThe amount of coupled equations is reduced to $\\sum_{\\sL = 1}^\\nu\nn_\\sL$, and all roots of the equations are now taken to be real.\n\nSimilarly, applying the string picture to Equation \\refe{xxzBAE} for\nspin ${\\textstyle \\frac{1}{2}}\\mm$, we get the spin ${\\textstyle \\frac{1}{2}}\\mm$ XXZ \\ Bethe equations in terms\nof strings,\n\\be\n \\left[\\BAEspsmi{\\lambda_{\\sK,i}}{\\i\\kpert\\gamma}\\right]^N\n = \\prod_{\\sL; \\ n_\\sL\\neq 0} \\\n \\prod_{\\stackrel{j=1}{(\\sL,j)\\neq(\\sK,i)}}^{n_\\sL}\n \\Bigl(S_{\\sK\\sL}(\\lambda_{\\sK,i} - \\lambda_{\\sL,j})\\Bigr)^{-1} \\ \\ \\ \\ \\forall\\ (K,i)\\.\n \\mabel{xxzsBAE}\n\\ee\n Notice that as compared to {\\refe{xxzBAE}}, we have inverted the equation.\n\nInspired by Equation \\refe{Sklsym}, we parameterize the string lengths\n{}from the maximal string backwards,\n\\[\n K = \\nu +1 - \\tilde K \\; L = \\nu +1 - \\tilde L\\; \\tilde K,\\tilde L =\n1, \\mm\\ldots\\mm , \\nu \\.\n \n\\]\n With this parameterization, it turns out that the Liouville Bethe Ansatz equations\\\n\\refe{sBAE} for the strings $\\set{\\lambda_{\\sL,j}}$ maps to the spin\n${\\textstyle \\frac{1}{2}}\\mm$ XXZ equations for a set of strings $\\set{{\\tilde\\lambda}_{\\tilde\n\\sL,j}}$ with the anisotropy ${\\tilde\\gamma}$.\n\nIndeed, after some algebra we have for the left hand side\n\\be\n \\e{\\i K \\gamma}\\ \\BAEspsmi{\\lambda_{\\sK,i}}{\\i\\kpert\\gamma} =\n \\e{\\i \\tilde K {\\tilde\\gamma}}\\\n\\BAEspsmi{{\\tilde\\lambda}_{\\tilde\\sK,i}}{\\i\\trac{\\tilde K}{2}{\\tilde\\gamma}} \\.\n \\mabel{pmap}\n\\ee\n The spectral parameter is changed in the following way:\n\\be\n {\\tilde\\lambda} = \\lambda - \\i (1+(-1)^\\sK) \\piperf \\, \\mabel{tillam}\n\\ee\n i.e. the parity of even-length strings is changed. 
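Identities \\refe{pmap} and \\refe{tillam} are elementary but easy to get wrong, so a numerical spot-check may be useful. Writing the kernel on the left hand side of \\refe{pmap} as $\\e{\\i K\\gamma}\\mm\\sinh(\\lambda-\\i K\\gamma\/2)\/\\sinh(\\lambda+\\i K\\gamma\/2)$ (the form that follows from the explicit ratio $a(\\lambda)\/d(\\lambda)$ given in the previous section), a short illustrative \\texttt{Python} check reads:\n\\begin{verbatim}\nimport numpy as np\n\nnu = 4\ng, gt = np.pi*nu\/(nu + 1), np.pi\/(nu + 1)   # gamma and gamma-tilde, Eq. (tilgam)\n\ndef lhs(lam, K):    # e^{i K gamma} sinh(lam - i K gamma\/2) \/ sinh(lam + i K gamma\/2)\n    return np.exp(1j*K*g)*np.sinh(lam - 0.5j*K*g)\/np.sinh(lam + 0.5j*K*g)\n\ndef rhs(lam, K):    # same kernel with K, gamma, lam replaced as in (pmap), (tillam)\n    Kt   = nu + 1 - K\n    lamt = lam - 0.25j*(1 + (-1)**K)*np.pi\n    return np.exp(1j*Kt*gt)*np.sinh(lamt - 0.5j*Kt*gt)\/np.sinh(lamt + 0.5j*Kt*gt)\n\nfor K in range(1, nu + 1):\n    for lam in (0.37, -1.2, 2.05):\n        assert abs(lhs(lam, K) - rhs(lam, K)) < 1e-12\nprint('(pmap) and (tillam) check out for nu =', nu)\n\\end{verbatim}\n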
In addition, it\nturns out that\n\\[\n S^\\gamma_\\sss{ K 1}(\\lambda)\n= \\Bigl(S^{\\tilde\\gamma}_\\sss{ \\tilde K 1}(\\tilde\\lambda)\\Bigl)^{-1} \\,\n\\]\n where the upper index denotes whether $\\gamma$ or ${\\tilde\\gamma}$ is\nused when defining $S_\\sss{K1}$ according to Equation \\refe{Sk1}.\n\n Using this, as well as Symmetry Property \\refe{Sklsym}, we get for\nthe string-on-string scattering matrix \\refe{S}\n\\be\n S_{\\sK\\sL}^\\gamma (\\lambda_{\\sK,i} - \\lambda_{\\sL,j})\n =\\Bigl(S_{\\tilde\\sK\\tilde\\sL}^{\\tilde\\gamma}\n ({\\tilde\\lambda}_{\\tilde\\sK,i}-{\\tilde\\lambda}_{\\tilde\\sL,j})\\Bigr)^{-1} \\.\n \\mabel{smap}\n\\ee\n\nThese results can easily be generalized to negative parity Liouville\nstrings as well.\n\nNow we are in position to state the equivalence of lattice Liouville\nand spin ${\\textstyle \\frac{1}{2}}\\mm$ XXZ chain\\ Bethe Ans\\\"atze. Using Equations (\\ref{pmap},\n\\ref{smap}) in \\refe{sBAE} and comparing to Equation \\refe{xxzsBAE},\nwe get {\\it complete equivalence} of the string states in lattice\nLiouville theory and a spin ${\\textstyle \\frac{1}{2}}\\mm$ XXZ chain with an additional phase\nfactor $ \\exp\\{\\i N \\tilde K {\\tilde\\gamma}\\}$. This extra factor will allow\nus to incorporate the remnant of the chain length mod $\\nu+1$ as a\nmeaningful extra parameter in the theory of finite size corrections\\ of an usual XXZ\nchain.\n\nThe spin ${\\textstyle \\frac{1}{2}}\\mm$ XXZ Bethe Ansatz equations\\ we shall be analyzing are thus\n\\be\n \\e{\\i N \\tilde K{\\tilde\\gamma}}\\mm \\left[\n\\BAEspsmi{{\\tilde\\lambda}_{\\tilde\\sK,i}}{\\i\\trac{\\tilde K}{2}\\gamma}\n\\right]^N \\times\n\\prod_{\\sss{\\tilde L}; \\ n_\\sss{\\tilde L}\\neq 0} \\\n \\prod_{\\stackrel{j=1}{\\sss{(\\tilde L,j)\\neq(\\tilde K,\n \t\ti)}}}^{n_\\sss{\\tilde L}}\n S_{\\sss{\\tilde K\\tilde L}}(\\lambda_{\\sss{\\tilde K},i}-\\lambda_{\\sss{\\tilde L},j})\n\\ = \\ 1 \\ \\ \\ \\ \\forall\\ (\\tilde K,i) \\.\n \\mabel{phxxzBAE}\n\\ee\n This equation is valid separately for each $N$, but\nthe thermodynamic limit $N\\to\\infty$ is sensible only if $N$\napproaches $\\infty$ in steps of $\\nu+1$.\n\nAccordingly, we parametrize the (even) chain length as\n\\be\n \\frac{N}{2} = n (\\nu +1) + \\kap\\, \\ \\ \\kap=0,\\ldots ,\\nu \\.\n \\mabel{N}\n\\ee\n Taking the thermodynamic limit $N\\to\\infty$ {\\it at fixed} $\\kap$,\nletting $n\\to\\infty$, the limit of Bethe Ansatz equations\\ (\\ref{sBAE}, \\ref{phxxzBAE})\nis well defined. 
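Indeed, with the parameterization \\refe{N} the $N$-dependence of the phase enters only through the remnant $\\kap$, as the following short check illustrates (again purely numerical, with arbitrary illustrative values of $\\nu$ and $\\tilde K$):\n\\begin{verbatim}\nimport numpy as np\nnu, Kt = 5, 2\ngt = np.pi\/(nu + 1)\nfor n in range(4):\n    for kap in range(nu + 1):\n        N = 2*(n*(nu + 1) + kap)                  # Equation (N)\n        assert abs(np.exp(1j*N*Kt*gt) - np.exp(2j*kap*Kt*gt)) < 1e-12\nprint('the phase depends on N only through kap')\n\\end{verbatim}\n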
The extra phase factor in the spin ${\\textstyle \\frac{1}{2}}\\mm$ XXZ chain\\ Bethe\nequations \\refe{phxxzBAE} is thus\n\\be\n \\e{2 \\i\\kap \\tilde K {\\tilde\\gamma}} \\.\n \\mabel{phase}\n\\ee\n\n\\vali\n\nTo summarize, the obtained equivalence of string states in lattice\nLiouville and spin ${\\textstyle \\frac{1}{2}}\\mm$ XXZ is the following:\n\n\\vali\\vali\n\n \\noindent\\begin{tabular}{l|c|c}\n \\null & Lattice Liouville &\n\t\\begin{minipage}{3cm}\\begin{center}\\vspace*{2mm}\nSpin ${\\textstyle \\frac{1}{2}}\\mm$ XXZ\n\t \\vspace*{2mm}\\end{center}\\end{minipage} \\\\ \\cline{1-3}\nanisotropy &\n\t\\begin{minipage}{3cm}\\begin{center}\\vspace*{2mm}\n$\\gamma=\\frac{\\pi\\nu}{\\nu+1}$\n\t \\vspace*{2mm}\\end{center}\\end{minipage} &\n${\\tilde\\gamma} = \\frac{\\pi}{\\nu +1}$\\\\\nstring lengths & $K$ & $\\tilde K =\\nu +1 -K$ \\\\\nspectral\nparameters &\n\t\\begin{minipage}{3cm}\\begin{center}\\vspace*{2mm}\n$\\lambda$\n\t \\vspace*{2mm}\\end{center}\\end{minipage} &\n${\\tilde\\lambda}=\\Re\\lambda+\\i\\left(\\Im\\lambda-(1+(-1)^\\sK)\\piperf\\right)$ \\\\\n \\end{tabular}\n\n \\vali\\vali\n\\noindent\nFor the analysis of the physical vacuum, it is important to note\nthat strings of maximal length are mapped to 1-strings, i.e. ordinary\nBethe Ansatz\\ roots, and vice versa.\n\nWe are intersted in the energy spectrum of the Liouville model. The\nauxiliary Lax-operator $L_{n,a}$ which was used for the Bethe Ansatz,\nintertwines the $n$:th quantum space and the auxiliary space $sl_2$.\nThe quantum spaces for lattice Liouville model are copies of $L^2({\\rm\nR})$. Thus $L_{n,a}(\\lambda)$ does not degenerate to a permutation\noperator at any value of the spectral parameter $\\lambda$, and it is not\ngood for investigating lattice dynamics. To get an integrable\nHamiltonian that generates lattice Liouville dynamics, one would have\nto introduce fundamental Lax-operators $L_{n,f}$ that intertwine two\nquantum spaces \\cite{tatafa}.\n\nFortunately, we are here only interested in energy and momentum\neigenvalues, so we do not have to investigate the fundamental\n$L$-operator. Following {\\cite{tatafa}} one can read off the energy\nand momentum eigenvalues from the eigenvalues of the diagonal elements\n$A$ and $D$ of the {\\it auxiliary} Lax-operators $L_{n,a}$. The\nmomentum eigenvalues are\n\\[\n P^\\L(\\set{\\lambda_j}) = \\trac{1}{2\\i} \\sum_{j=1}^m p(\\lambda_j) \\;\n p(\\lambda) = \\ln \\frac{a^\\gamma(\\lambda)}{d^\\gamma(\\lambda)} \\.\n \\]\n The upper index for $a$ and $d$ again stresses the particular\nvalue of anisotropy used.\n\nThe energy can be acquired by differentiating:\n\\[\n E^\\L(\\set{\\lambda_j}) = \\sum_{j=1}^m \\epsilon\\mm(\\lambda_j)\\;\n \\epsilon\\mm(\\lambda) = \\frac{\\gamma}{\\pi}\\mm\\frac{d}{d\\lambda}\\mm p(\\lambda) \\.\n\\]\n\n{}From {\\refe{phxxzBAE}} we see that for the spin ${\\textstyle \\frac{1}{2}}\\mm$ XXZ chain\\ with extra phase\nfactor, the roles of $a$ and $d$ are interchanged. 
Accordingly,\nusing Correspondence {\\refe{pmap}}, we can write the Liouville energy\nand momentum in terms of the XXZ ones:\n\\bea\n P^{\\L}(\\set{\\lambda_j})\n &=& \\trac{1}{\\i}\\sum_{j=1}^m\n\t\t\\ln\\frac{a^{\\tilde\\gamma}(\\lambda_j)}{d^{\\tilde\\gamma}(\\lambda_j)}\n = - P^{{\\rm xxz}}(\\set{\\lambda_j})\n \\equiv\n \\i\\ln\\Lambda^{{{\\rm xxz}}}(\\set{\\lambda_j})\\longat{\\lambda=\\i\\trac{{\\tilde\\gamma}}{2}}\n \\mabel{moment-mal} \\\\\n E^\\L(\\set{\\lambda_j}) &=& - E^{{\\rm xxz}}(\\set{\\lambda_j})\n \\equiv -\\i\\frac{{\\tilde\\gamma}}{\\pi}\\mm \\frac{d}{d\\lambda}\\mm\n \\ln\\Lambda^{{{\\rm xxz}}}(\\lambda,\\set{\\lambda_j})\n \\longat{\\lambda=\\i\\trac{{\\tilde\\gamma}}{2}} \\ \\ \\.\n \\mabel{energy-mal}\n\\eea\n Here we have expressed the energy and momentum in terms of\neigenvalues $\\Lambda$ of the transfer matrix \\refe{transfer},\n\\be\n\\left(\\A(\\lambda)+\\D(\\lambda)\\right)\\mm\\Psi_m(\\set{\\lambda_j}) \\equiv\n\\Lambda(\\lambda,\\set{\\lambda_j})\\Psi_m(\\set{\\lambda_j}) \\.\n \\mabel{Lam}\n\\ee\n This is possible for the spin ${\\textstyle \\frac{1}{2}}\\mm$ XXZ chain\\, as the auxiliary\nand fundamental Lax-operators for the spin ${\\textstyle \\frac{1}{2}}\\mm$ XXZ chain\\ coincide. The XXZ\nLax-operator \\refe{Lxxz} yields local commuting quantities at the\nvalue $\\lambda=\\i\\frac{\\tilde\\gamma}{2}$, for which it becomes a permutation\nmatrix.\n\n\nThe Bethe equations do not have to be completely solved to get the low\nlying spectrum of the Hamiltonian. We need only the {\\it finite size}\ni.e. $\\operN$ corrections to the eigenvalues of the spin ${\\textstyle \\frac{1}{2}}\\mm$ XXZ\ntransfer matrix corresponding to Bethe Ansatz equations\\ {\\refe{phxxzBAE}}. The reason\nfor this is the following:\n\nIn this paper we started by discretizing Liouville theory in a finite\nvolume $2\\pi = a\\mm N$, with lattice spacing $a$. Thus the\nconformal scaling limit of the spin chain corresponds to the continuum\nlimit of the Liouville field theory. Moreover, the continuum\nHamiltonian is\n\\be\nH_{{\\rm cont}} = \\trac{1}{a}\\mm H_{\\mm{\\rm lattice}} \\,\n\\mabel{Hcont}\n\\ee\n so that only $\\operN$ corrections to the eigenvalues of the lattice\nHamiltonian remain finite in the continuum limit. This is exactly the\nrealm of finite size effects, which in this case are\nsimultaneously finite lattice spacing effects.\n\n\n\n\\section{Finite Size Corrections for the six-vertex model}\n\nAs is well known, the spin ${\\textstyle \\frac{1}{2}}\\mm$ XXZ chain\\ has intimate connections to the\nsix-vertex model of classical statistical mechanics {\\cite{baxter}}.\nFor two dimensional classical statistical models, the $\\operN$\nbehaviour and the conformal properties are closely related.\n\nIn \\cite{blocani,affleck,cardy} it was argued that the central charge\n$c$ of the conformal field theory corresponding to the scaling limit\nof a two-dimensional statistical model is related to the finite size\ncorrections to the free energy, i.e. the logarithm of the maximal\neigenvalue $\\Lambda_o$ of the transfer matrix, in the limit $N\\to\\infty$.\nOn the other hand, the statistical mechanics minimum free energy\nconfiguration corresponds to the ground state energy $E_o$ of the\nscaling conformal theory. 
For a model on an infinitely wide strip of\nlength $N$, the behavior of $E_o$ is \\cite{cardy}\n\\be\n E_o = N f_\\infty \\mm\n - \\mm \\operN \\trac{\\pi}{6}\\mm c \\mm + \\mm \\O(\\trac{1}{N^2}) \\,\n \\mabel{scalec}\n\\ee\n with $f_\\infty$ the free energy per site in the thermodynamic\nlimit.\n\nSimilarly, the critical indices (conformal weights) $\\Delta,\\Bardelta$\nof operators corresponding to excited states of the system can be read\noff the large $N$ behavior of configurations close to the one\nminimizing the free energy. In terms of the higher energy and\nmomentum eigenvalues of the corresponding 1+1 dimensional quantum\ntheory, critical indices are:\n \\bea\nE_m - E_o &=& \\trac{2\\pi}{N}\\mm (\\Delta \\mm + \\mm \\Bardelta)\n \t\t\\ +\\ \\O(\\trac{1}{N^2}) \\cr\n & & \\mabel{scaleweights}\\\\\nP_m - P_o &=& \\trac{2\\pi}{N}\\mm (\\Delta \\mm - \\mm \\Bardelta)\n \t\t\\ +\\ \\O(\\trac{1}{N^2}) \\nonumber\n\\eea\n As opposed to our differential dependence {\\refe{energy-mal}}, the\napproach of Ref. {\\cite{cardy}} relates the energy and momentum\ndirectly to lower eigenvalues $\\Lambda_m$ of the transfer matrix\nat criticality; $E_m\\sim -\\Re \\ln\\Lambda_m$, $\\ P_m\\sim -\\Im \\ln\\Lambda_m$.\n\n\\vali\n\nThe finite size corrections\\ of six-vertex models and the corresponding conformal\nproperties have been extensively studied in the literature\n\\cite{devewoy,karowski,kluwezi,boizre}. For the six-vertex model with an\nextra phase factor of the form \\refe{phase}, and string excitations,\nthe finite size corrections were calculated by Karowski\n\\cite{karowski}. The results of \\cite{karowski} relevant for us are\nthe following.\n\nThe ground state of a six-vertex model is described by a filled Dirac\nsea of one-strings, i.e. $n_1= N\/2$, $n_l = 0, \\mm l>1$. The\nlogarithm of the corresponding maximal eigenvalue of the transfer\nmatrix is \\cite[Eq. 4.3]{karowski}\\footnote{Note that our $\\nu$\ndiffers from the one of \\cite{karowski} by one. The relation between\nour $\\lambda$ and the $\\theta$ of \\cite{karowski} is $\\lambda = -\\i\\theta +\n\\i\\trac{{\\tilde\\gamma}}{2}$. This follows from the differences of the\nrespective Bethe equations \\refe{phxxzBAE} and \\cite[Eq.\n2.4]{karowski}. }\n\\be\n \\ln \\Lambda_o \\approx - \\i N f_\\infty(\\lambda) + \\frac{1}{N} \\frac{\\pi}{6}\n\\left(1 - \\frac{6\\kap^2}{\\nu(\\nu+1)} \\right) \\mm\n \\cosh\\left(\\trac{\\pi}{{\\tilde\\gamma}}\\lambda\\right) \\.\n \\mabel{cscale}\n\\ee\n\nExcited states consist of higher strings above the vacuum, and holes in\nthe distribution of one-strings. Low-energy excitations have both the\nnumber of holes and the number of higher strings $\\sum_{\\sL > 1}\nn_\\sL$ of the order $N^0$.\n\nFor a given distribution of strings $\\{n_l\\}$, there is a certain\nnumber of allowed values for the spectral parameters. This number\ndepends on the behavior of the Bethe Ansatz equations\\ in the limits $\\lambda\\to\\pm\\infty$.\nDue to the dependence of these limits on higher strings, each\n$L$-string automatically gives rise to $2L-2$ holes.\n\nThe ``primary'' excitations correspond to states with one higher\nstring, no extra holes and the holes corresponding to the string\nevenly distributed between the two surfaces of the Dirac sea of\n1-strings, i.e. between $\\lambda \\sim\\infty$ and $\\lambda \\sim-\\infty$.\n\nFor such states, the finite size behavior of the eigenvalues of the\ntransfer matrix reads \\cite[Eq. 
4.5]{karowski}\n\\be\n \\ln\\Lambda_m - \\ln\\Lambda_o \\approx -\\trac{2\\pi}{N}\n\\left\\{(\\Delta + \\Bardelta)\n \\cosh\\left(\\trac{\\pi}{{\\tilde\\gamma}}\\lambda\\right)\n\\mm + \\mm (\\Delta - \\Bardelta)\n \\sinh\\left(\\trac{\\pi}{{\\tilde\\gamma}}\\lambda\\right) \\right\\}\\,\n \\mabel{weightscale}\n\\ee\n where the weights are, if the single higher string is a $\\tilde\nK$-string,\n\\be\n \\Delta = \\Bardelta = \\frac{\\Bigl((\\nu+1)(\\tilde K-1) +\\kap\\Bigr)^2 -\n\\kap^2}{4\\nu(\\nu+1)} \\.\n \\mabel{delta}\n\\ee\n If more strings and \/or holes are present in the excited state, the\nresulting critical indices differ from the ones above by additional\nintegers. Accordingly, these states belong to the conformal towers of\ndescendants of the described ``primary states''.\n\n For future use we define the function\n \\[\n \\delta_{L,\\kap} = \\frac{\\Bigl((\\nu+1)(L-1) +\\kap\\Bigr)^2}\n {4\\nu(\\nu+1)} \\,\n\\]\n in terms of which the critical indices \\refe{delta} read\n \\be\n\\Delta = \\Bardelta = \\delta_{\\tilde K,\\kap} - \\delta_{1,\\kap}\\.\n \\mabel{Deleqdeldel}\n\\ee\n In this form, we explicitly see the subtraction of the part\ncorresponding to the ground state.\n\nAn extra condition on $\\tilde K$ for a primary state was found in\n\\cite{karowski}. There is an upper limit on the length of the strings\nthat contribute to the finite size corrections. For extra phase $\\kap$, only strings\nsatisfying\n\\be\n \\tilde K < \\nu + 1 - \\kap\n\\mabel{Klim}\n\\ee\n contribute. This cutting off of higher strings resembles a result of\nRef. \\cite{jutkar} for an $sl_q(2)$\\ invariant XXZ chain with fixed\nboundary conditions. There it was conjectured that highest strings\ncorrespond to ``bad'' representations of the quantum group (i.e.\nrepresentations with vanishing $q$-dimension), which have to be\nremoved from the spectrum to keep the theory unitary.\n\n\\vali\n\nIn the six-vertex model, one expects conformal invariance at $\\lambda =\n0$, where the $R$-matrix becomes isotropic. At this point, one can use\nEquations (\\ref{scalec}, \\ref{scaleweights}) to recognize the conformal\nproperties of the model.\n\n{}From Equation \\refe{cscale} we read off the central charge:\n\\be\n c = 1 - \\frac{6\\kap^2}{\\nu(\\nu+1)} \\.\n \\mabel{c}\n\\ee\n For the value $\\kap=1$, related to critical Potts models in\n\\cite{karowski}, the central charges of unitary minimal models emerge.\nThe critical indices \\refe{delta} reproduce a row in the Ka\\v c\\ table.\nThe highest string does not contribute to the spectrum, due to\nRestriction \\refe{Klim}.\n\n\n\n\n\n\\section{Conformal Properties of Lattice Liouville Theory}\n\n\nNow we can use the results of Ref. \\cite{karowski} reviewed in the\nprevious section to calculate the scaling properties of lattice\nLiouville energies and momenta, and to recognize the corresponding\nconformal structures.\n\nWe are interested in Bethe Ansatz\\ states that correspond to the ``primary''\nstates of the six-vertex model, i.e. one $K$-string above the physical\nvacuum, with $K = \\nu + 1 - \\tilde K$.\n\n{}From the scaling forms of the transfer matrix\n(\\ref{weightscale}, \\ref{scalec}) we get Liouville momenta and energies\nusing Prescriptions (\\ref{moment-mal}, \\ref{energy-mal}). The\nresulting eigenvalues yield critical indices using\nEquation \\refe{scaleweights}.\n\nFor lattice Liouville theory, there is an important difference from the\ncase treated in \\cite{karowski}. The extra phase $\\kap$ is not a\nconstant. 
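In fact all the remnants allowed by \\refe{N} will occur, and it is useful to keep in mind how strongly the six-vertex data depend on $\\kap$; tabulating \\refe{c} for, say, $\\nu=4$ (a purely numerical illustration, using exact fractions):\n\\begin{verbatim}\nfrom fractions import Fraction as F\nnu = 4\nfor kap in range(nu + 1):                    # the possible remnants of Equation (N)\n    print(kap, 1 - F(6*kap**2, nu*(nu + 1)))         # Equation (c)\n# kap = 0 gives c = 1, kap = 1 gives c = 7\/10, the unitary minimal value;\n# the remaining sectors have negative central charge\n\\end{verbatim}\n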
Instead we have sectors with different values of $\\kap$,\ncorresponding to different lengths of the chain mod $(\\nu+1)$, as\nindicated by Equation \\refe{N}. In the thermodynamic limit, all values of\n$\\kap$ coexist.\n\n{}From Equation \\refe{cscale} it is easy to see that the ground state of the\ntheory lies in the $\\kap = 0$ sector, which gives $c=1$, the usual\ncentral charge for a periodic XXZ spin chain \\cite{devewoy,alibaba}.\n\nAs in \\cite{karowski}, minimal models would emerge if the ground state\nwere taken to be in the $\\kap=1$ sector. Here we will adopt this\napproach, discarding the $\\kap=0$ sector altogether. Later on we will\nreturn to the interpretation of this reduction of the theory.\n\nAccordingly, the central charge is\n \\be\nc = 1 - \\frac{6}{\\nu(\\nu+1)} \\; \\nu = 2,3, \\ldots\n \\mabel{cIII}\n\\ee\n and we recover the unitary minimal conformal field theories\\ of \\cite{bpz,fqs}.\n\nCalculating the excitation energies and the corresponding critical\nindices from Equation \\refe{weightscale}, we have to subtract the ground\nstate energy, not only the minimal energy $\\delta_{1,\\kap}$ within\neach $\\kap$ sector.\n\nWith the ground state lying in the $\\kap=1$ sector, we get for the\ncritical indices, instead of (\\ref{Deleqdeldel}),\n\\be\n \\Delta=\\Bardelta = \\delta_{\\tilde K,\\kap} - \\delta_{1,1} =\n\\frac{\\Bigl((\\tilde K - 1) (\\nu+1) + \\kap\\Bigr)^2\n\t\t- 1}{4\\nu(\\nu+1)}\n\\. \\mabel{weightsIII}\n\\ee\n Defining\n\\bea\n p &=& \\tilde K + \\kap -1\\; p = 1, \\ldots, \\nu -1 \\cr\n q &=& \\kap\\; q = 1, \\ldots, \\nu\n \\mabel{pq}\n\\eea\n we can write the critical indices in the form\n\\be\n \\Delta = \\Bardelta \\ =\\ \\frac{\\Bigl( p\\mm (\\nu+1) - q\\mm \\nu\\Bigr)^2 -\n1}{4\\nu(\\nu+1)} \\.\n \\mabel{weightsIV}\n\\ee\n These scaling weights reproduce the whole Ka\\v c\\ table of unitary\nconformal field theories. The ranges of $p$ and $q$ follow precisely from Restriction\n\\refe{Klim} on the maximal string length, and the possible values of\n$\\kap$ with the $\\kap=0$ sector excluded.\n\n\\vali\n\n{}From \\refe{Hcont} we see that in the continuum, the energy and\nmomentum of primary\nBethe Ansatz\\ states are\n\\[\n E = \\Delta \\mm + \\mm \\Bardelta\\; P = \\Delta \\mm - \\mm \\Bardelta \\.\n\\]\n The primary Bethe states are thus products of holomorphic and\nantiholomorphic vectors (right and left movers) with conformal weights\n\\refe{weightsIII}.\n\n\\vali\n\nThe behavior encountered here is exactly the same as in the\nrestriction of SOS models to RSOS models\n\\cite{anbafo}. The Bethe Ansatz equations\\ \\refe{phxxzBAE} are the Bethe equations of\nthe so-called III\/IV critical limit of the SOS and corresponding eight\nvertex and XYZ models, in the case of root of unity anisotropy. In\nthe III\/IV critical limit, the extreme off-diagonal term (usually\ndenoted $d$) in the eight-vertex $R$-matrix vanishes, and the elliptic\nfunctions degenerate to trigonometric ones. Thus (on applying the\nstring picture) the XYZ Bethe equations of \\cite{baxter,takfad79}\nturn into Equations \\refe{phxxzBAE} at criticality. For an anisotropy\nof the form \\refe{tilgam}, the SOS Bethe states are parameterized by\nall $0\\leq \\kap \\leq \\nu$. The ground state lies in the $\\kap=0$\nsector, and the corresponding central charge is $c=1$.\n\nIn the root of unity case, it becomes possible to ``restrict'' the SOS\nmodel \\cite{anbafo}. 
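Before turning to the restriction in more detail, it is easy to convince oneself by direct enumeration that \\refe{weightsIV}, with the ranges dictated by \\refe{pq} and \\refe{Klim}, indeed fills out the Ka\\v c\\ table; for example, for $\\nu=3$ one recovers precisely the three Ising weights (a purely numerical illustration):\n\\begin{verbatim}\nfrom fractions import Fraction as F\nnu = 3                                    # gamma\/pi = 3\/4, c = 1\/2: the Ising model\nweights = set()\nfor q in range(1, nu + 1):                # q = kap, the sectors left after the restriction\n    for Kt in range(1, nu + 1 - q):       # Restriction (Klim): Kt < nu + 1 - kap\n        p = Kt + q - 1                    # Equation (pq)\n        weights.add(F((p*(nu + 1) - q*nu)**2 - 1, 4*nu*(nu + 1)))   # Equation (weightsIV)\nprint([str(w) for w in sorted(weights)])  # ['0', '1\/16', '1\/2']\n\\end{verbatim}\n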
The sectors with $1\\leq\\kap\\leq\\nu$ decouple from\nthe other sectors, and we can restrict our interest to these sectors\nonly. This decoupled part of the SOS model is known as the RSOS\nmodel. On the level of Bethe equations, the restriction means leaving\nout the $\\kap=0$ sector. The ground state now lies in the $\\kap=1$\nsector, and in the thermodynamic limit the unitary conformal field theories\\ with $c<1$\nemerge \\cite{huse}.\n\n\\vali\n\nThere is one more subtlety in the interpretation of the results for\nthe lattice Liouville model described above. Remembering the\ncorrespondence between lattice Liouville theory and the spin ${\\textstyle \\frac{1}{2}}\\mm$ XXZ chain\\\ndescribed in Section \\ref{maptoxxz}, it is evident that the physical\nvacuum for Liouville theory consists of maximal strings. This can be\nviewed as the maximal string limit of the fact that the vacuum of\nhigher spin chains consists of higher strings.\n\nDue to the complicated structure of the vacuum, not all\ncombinations of remnant and string length are a priori allowed.\nInstead, we get stringent conditions on $N$ from requiring the\ncoexistence of a specific remnant $N \\ \\mbox{mod}\\ (\\nu+1)$ and a single\n$K$-string (corresponding to a spin ${\\textstyle \\frac{1}{2}}\\mm\\ $ XXZ $\\tilde K$-string)\nabove a sea of maximal $\\nu$-strings. In fact, these requirements fix\nthe chain length modulo $\\nu(\\nu+1)$. The parameterization \\refe{N}\nof $N$ has to be extended to\n\\be\n \\frac{N}{2} = (n(\\nu+1) \\mm-\\mm \\kap \\mm + \\mm K)\\nu + K\n\t = (n\\mm\\nu \\mm-\\mm \\kap \\mm + \\mm K)\n\t\t\t(\\nu +1) + \\kap \\.\n \\mabel{nKr}\n\\ee\n{}From here we see that for this $N$ it is indeed possible to define a\nstate with one $K$-string over a sea of $n(\\nu+1) - \\kap + K$ maximal\nstrings. In addition, the remnant is $\\kap$. The thermodynamic limit\nhas to be taken in steps of $\\nu(\\nu+1)$ by taking $n\\to\\infty$ in Equation \n\\refe{nKr}.\n\nAccordingly, the full picture of lattice Liouville primary states is\nthe following. In a lattice Liouville chain consisting of $N$ sites\n($N$ even), there is a primary state characterized by two integers.\nThese integers are related to the remnants of the chain length $\\ \\mbox{mod}\\ \n\\nu(\\nu+1)$ and $\\ \\mbox{mod}\\ (\\nu+1)$,\n\\[\n \\frac{N}2 \\ \\mbox{mod}\\ \\nu(\\nu+1) = (\\nu - p)(\\nu +1) + q \\.\n\\]\n The primary state is the state with a single lower string. The $q=0$\nsector exhibits the behavior of the SOS ground state, and after the\nrestriction, the RSOS unitary series emerge, with central charges\n\\refe{cIII} and conformal weights \\refe{weightsIV}. In the\nthermodynamic limit all remnants mod $\\nu(\\nu+1)$ coexist (except\npossibly for the decoupled $q=0$), and the Liouville primary states\ngive the whole Ka\\v c\\ table.\n\n\n\\section{Conclusions}\n\n\nWe have developed a spectral parameter dependent integrable structure\nfor quantum Liouville theory on a lattice. Using the ensuing\n$L$-matrix, we have written the Bethe Ansatz equations\\ for Liouville theory.\n\nWe have concentrated on certain Liouville coupling constants $\\gamma$,\nfor which $q=\\e{\\i\\gamma}$ is a root of unity. Using the string picture\nto describe excited Bethe Ansatz\\ states, we have mapped the Liouville Bethe\nequations to a set of generalized spin ${\\textstyle \\frac{1}{2}}\\mm$ XXZ Bethe Ansatz equations, more exactly\nthe critical SOS Bethe equations. This mapping takes maximal\nLiouville strings to XXZ one-strings and vice versa. 
The physical\nLiouville vacuum thus consists of a Dirac sea of maximal strings.\n\nUsing results of Karowski \\cite{karowski} for the finite size corrections\\ to the\neigenvalues of the transfer matrix, we have calculated the central\ncharges and conformal dimensions of the spin ${\\textstyle \\frac{1}{2}}\\mm$ XXZ chain, and accordingly also\nof Liouville theory.\n\nWe found that the continuum limits of lattice Liouville theories with\ncoupling constants $\\gamma = \\pi\\trac{\\nu}{\\nu+1}\\mm, \\ \\nu=2,3,\\ldots$\nreproduce the unitary minimal models of Friedan, Qiu and Shenker\n\\cite{fqs}, on restricting the chain length not to be divisible by\n$\\nu+1$. This restriction is the exact analogue of the RSOS\nrestriction of SOS models at root of unity anisotropies.\n\nPrimary excitations of the Liouville chain are characterized by two\nintegers, the length of a shorter string above the vacuum consisting\nof maximal strings, and the remnant $\\kap$ of the chain length mod\n$(\\nu+1)$. With these two parameters, the conformal weights\ncorresponding to the excited states give all states in the\ncorresponding Ka\\v c\\ table.\n\nTo clarify the structure of the different sectors in the theory, a\nunitarity analysis based on the hidden $sl_q(2)$\\ symmetry is needed. This\nshould illuminate both the RSOS restriction $\\kap\\neq 0$, and the\nresult of \\cite{karowski} that highest XXZ (i.e. lowest Liouville)\nstrings do not contribute to the spectrum. Following \\cite{jutkar} we\nbelieve that these properties are deeply related to properties of root\nof unity representations of $sl_q(2)$. Truncation of the Bethe Ansatz\\ Hilbert\nspace corresponds to excluding ``bad'' representations of the quantum\ngroup, which is required in order to have a positive metric on\nthe Hilbert space.\n\nIn this work we found equivalence of the Bethe Ansatz equations\\ of the lattice\nLiouville and the critical eight-vertex (SOS) models.\nAccordingly, the results presented here can be used to provide a\nLagrangian description of the critical conformal field theories of all\ntwo-dimensional statistical models related to the eight-vertex model.\n\nDue to the intimate connection of Liouville theory to two dimensional\ngravity, it would be very interesting to extend the method of\nquantizing Liouville theory presented here to more general couplings.\nIt is evident that negative couplings $\\gamma$ correspond to the real\nsector of the Liouville model with $c>25$.\n\nWhether the strong coupling results of Gervais \\& al. \\cite{gervais}\nin the regime $1 < c < 25$ can also be reached within this framework remains an open question.\n\nWe seek a factorization $\\mathbf{D}\\mathbf{X}^\\top$ of the wave data $\\mathbf{Y}$ by solving\n\\begin{equation}\n\\min_{\\mathbf{D},\\mathbf{X}} \\tfrac{1}{2}\\|\\mathbf{Y}-\\mathbf{D}\\mathbf{X}^\\top\\|_F^2 + \\lambda \\, \\Theta(\\mathbf{D},\\mathbf{X}),\n\\end{equation}\nwhere $\\lambda > 0$ is a hyper-parameter to balance the trade-off between fitting the data and satisfying our model assumptions given by $ \\Theta (\\mathbf{D}, \\mathbf{X})$, which is a regularizer that promotes the desired physical consistency. We call this \\emph{wave-informed matrix factorization}. 
To derive the specific function $\\Theta (\\mathbf{D}, \\mathbf{X})$ which promotes the desired physical consistency, we begin by designing a regularizer to promote wave-consistent signals in the columns of $\\mathbf{D}$.\\footnote{Note that our approach is also generalizable to having wave-consistent regularization in both factors.} \nSpecifically, consider the one-dimensional wave equation evaluated at $d_i(l)x_i(t)$:\n\\begin{eqnarray}\n \\frac{\\partial^2 \\left[ d_i(l)x_i(t) \\right] }{\\partial l^2} &=& \\frac{1}{c^2} \\frac{\\partial^2 \\left[ d_i(l)x_i(t) \\right] }{\\partial t^2} \n \\label{eqn:wave_equation}\n\\end{eqnarray}\nNote that the above equation constrains that each\n$d_i(l)x_i(t)$ component ($i \\in [N]$) is a wave. This can thus be seen as decomposing the given wave data into a superposition of simpler waves. The Fourier transform in time on both sides of \\eqref{eqn:wave_equation} yields:\n$\\frac{\\partial^2 \\left[ d_i(l) \\right] }{\\partial l^2}X_i(\\omega) = \\frac{-\\omega^2}{c^2} d_i(l) X_i(\\omega)$. Thus, at points where $X_i(\\omega) \\neq 0$, we can enforce the equation,\n\\begin{eqnarray}\n \\frac{\\partial^2 \\left[ d_i(l) \\right] }{\\partial l^2} &=& \\frac{-\\omega^2}{c^2} d_i(l) \n \\label{eqn:space_eigen_value}\n\\end{eqnarray}\nto maintain consistency with the wave equation.\nNote that the above equation is analytically solvable for constant $\\omega\/c$: the solution takes the form $d_i(l) = A \\sin(\\omega l\/c) + B \\cos(\\omega l\/c)$, where the constants $A$ and $B$ are determined by initial conditions. Thus, we conclude that the choice of $\\omega\/c$ determines $d_i(l)$ and vice versa.\nWe next discretize equation \\eqref{eqn:space_eigen_value}. Using a standard finite-difference approximation of the second derivative, it becomes:\n\\begin{eqnarray}\n \\mathbf{L} \\mathbf{D}_i &=& -k_i^2 \\mathbf{D}_i \n \\label{eqn:discrete_wave_eqn}\n\\end{eqnarray}\nwhere $k_i = \\omega \/ c$ is the wavenumber and $\\mathbf{L}$ is a discrete second-derivative operator; the specific form of $\\mathbf{L}$ depends on the boundary conditions (e.g., Dirichlet, Dirichlet-Neumann, etc.) and can be readily found in the numerical methods literature (see for example \\cite{strang2007computational, golub2013matrix}).\n\nWe make the dependence of $k_i$ on $i$ explicit because $\\omega\/c$ and $d_i(l)$ determine each other. \nNow, for the factorization to be physically consistent in the spatial dimension, we desire that \\eqref{eqn:discrete_wave_eqn} be satisfied for some (unknown) value of $k_i$, which we promote with the regularizer $\\min_{k_i} \\| \\mathbf{L} \\mathbf{D}_i + k_i^2 \\mathbf{D}_i \\|^2_F$. In addition to requiring that the columns of $\\mathbf{D}$ satisfy the wave equation, we would also like to constrain the number of modes in our factorization (or equivalently find a factorization $\\mathbf{D} \\mathbf{X}^\\top$ with rank equal to the number of modes). To accomplish this, we add squared Frobenius norms to both $\\mathbf{D}$ and $\\mathbf{X}$, which is known to induce low-rank solutions in the product $\\mathbf{D} \\mathbf{X}^\\top$ due to connections with the variational form of the nuclear norm \\cite{haeffele2019structured,haeffele2014structured,srebro2005rank}. 
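\n\nTo make this construction concrete, the following minimal sketch (an illustration only; a Dirichlet boundary and unit grid spacing are assumed, and this is not the implementation used in our experiments) builds the tridiagonal second-difference operator given in the supplementary material and evaluates the wave regularizer $\\min_{k_i} \\| \\mathbf{L} \\mathbf{D}_i + k_i^2 \\mathbf{D}_i \\|^2_F$ using the closed-form minimizer $k_i^2 = \\max(0, -\\mathbf{D}_i^\\top \\mathbf{L} \\mathbf{D}_i \/ \\|\\mathbf{D}_i\\|_2^2)$ that follows from first-order optimality in $k_i^2$:\n\\begin{verbatim}\nimport numpy as np\n\ndef second_difference(n):\n    # tridiagonal (-2, 1, 1) second-derivative operator, Dirichlet ends\n    return -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)\n\ndef wave_penalty(d, L):\n    # min_k ||L d + k^2 d||^2, with k^2 = max(0, -d^T L d \/ ||d||^2)\n    k2 = max(0.0, -(d @ L @ d) \/ (d @ d))\n    r = L @ d + k2 * d\n    return r @ r, np.sqrt(k2)\n\n# a sampled sinusoid (a discrete Dirichlet eigenvector) incurs ~zero penalty\nn = 200\nd = np.sin(3 * np.pi * np.arange(1, n + 1) \/ (n + 1))\npenalty, k = wave_penalty(d, second_difference(n))\n\\end{verbatim}\nA column that is far from any single sampled sinusoid retains a large residual, which is exactly what the $\\gamma$-weighted term in the regularizer penalizes.\n\n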
Taken together, we arrive at a regularization function which decouples along the columns of $(\\mathbf{D},\\mathbf{X})$ given by:\n\\begin{equation}\n\\label{eq:theta}\n\\Theta(\\mathbf{D},\\mathbf{X}) = \\sum_{i=1}^N \\theta(\\mathbf{D}_i,\\mathbf{X}_i), \\text{ where } \n \\theta(\\mathbf{D}_i,\\mathbf{X}_i) = \\tfrac{1}{2} \\! \\! \\left( \\|\\mathbf{X}_i\\|_F^2 \\! + \\! \\|\\mathbf{D}_i\\|_F^2 \\right) \\! + \\! \\gamma \\min_{k_i} \\|\\mathbf{L} \\mathbf{D}_i \\! + \\! k_i^2 \\mathbf{D}_i \\|_F^2\n\\end{equation}\nIn the supplementary material we show that the above regularization on $\\mathbf{D}$ is equivalent to encouraging the columns of $\\mathbf{D}$ to lie in the passband of a bandpass filter (centered at wavenumber $k$), with the hyperparameter $\\gamma$ inversely proportional to the filter bandwidth. With this regularization function, we then have our final model that we wish to solve:\n\\begin{equation}\n\\label{eq:main_obj}\n\\min_{N \\in \\mathbb{N}_+} \\min_{\\substack{\\mathbf{D} \\in \\mathbb{R}^{L \\times N}, \\mathbf{X} \\in \\mathbb{R}^{T \\times N} \\\\ \\mathbf{k} \\in \\mathbb{R}^N}} \\tfrac{1}{2}\\|\\mathbf{Y}-\\mathbf{D}\\mathbf{X}^\\top\\|_F^2 + \\tfrac{\\lambda}{2} \\sum_{i=1}^N \\left(\\|\\mathbf{X}_i\\|_F^2 + \\|\\mathbf{D}_i\\|_F^2 + \\gamma \\|\\mathbf{L} \\mathbf{D}_i + k_i^2 \\mathbf{D}_i \\|_F^2 \\right).\n\\end{equation}\n\n\\section{Model Optimization with Global Optimality Guarantees}\n\\label{section:model}\nWe note that our model in \\eqref{eq:main_obj} is inherently non-convex in $(\\mathbf{D},\\mathbf{X},\\mathbf{k})$ jointly due to the matrix factorization model as well as the fact that we are additionally searching over the number of columns\/entries in $(\\mathbf{D},\\mathbf{X},\\mathbf{k})$, $N$. However, despite the potential challenge of non-convex optimization, here we show that we can efficiently solve \\eqref{eq:main_obj} to global optimality in polynomial time by leveraging prior results regarding optimization problems for structured matrix factorization \\cite{haeffele2019structured,bach2013convex}. In particular, the authors of \\cite{haeffele2019structured} consider a general matrix factorization problem of the form:\n\\begin{equation}\n\\label{eq:gen_obj}\n\\min_{N\\in\\mathbb{N}_+}{\\min_{\\mathbf{U} \\in \\mathbb{R}^{m \\times N}, \\mathbf{V} \\in \\mathbb{R}^{n \\times N}}} \\ell(\\mathbf{U} \\mathbf{V}^\\top) + \\lambda \\sum_{i=1}^N \\bar \\theta(\\mathbf{U}_i,\\mathbf{V}_i)\n\\end{equation}\nwhere $\\ell(\\hat \\mathbf{Y})$ is any function which is convex and once differentiable in $\\hat \\mathbf{Y}$ and $\\bar \\theta(\\mathbf{U}_i,\\mathbf{V}_i)$ is any function which satisfies the following three conditions:\n\\begin{enumerate}\n\\item $\\bar \\theta(\\alpha \\mathbf{U}_i, \\alpha \\mathbf{V}_i) = \\alpha^2 \\bar \\theta(\\mathbf{U}_i, \\mathbf{V}_i), \\ \\forall (\\mathbf{U}_i,\\mathbf{V}_i)$, $\\forall \\alpha \\geq 0$.\n\\item $\\bar \\theta(\\mathbf{U}_i, \\mathbf{V}_i) \\geq 0, \\ \\forall (\\mathbf{U}_i,\\mathbf{V}_i)$.\n\\item For all sequences $(\\mathbf{U}_i^{(n)},\\mathbf{V}_i^{(n)})$ such that $\\|\\mathbf{U}_i^{(n)}(\\mathbf{V}_i^{(n)})^\\top\\| \\rightarrow \\infty$ then $\\bar \\theta(\\mathbf{U}_i^{(n)},\\mathbf{V}_i^{(n)}) \\rightarrow \\infty$.\n\\end{enumerate}\nClearly, our choice of loss function $\\ell$ (squared loss) satisfies the necessary conditions. 
However, in \\eqref{eq:main_obj} we wish to optimize over not just the matrix factors $(\\mathbf{D},\\mathbf{X})$ but also the additional $\\mathbf{k}$ parameters, so it is not immediately apparent that the framework from \\cite{haeffele2019structured} can be applied to our problem. Here we first show that, by our design of the regularization function $\\theta$, our formulation satisfies the needed conditions, allowing us to apply the results from \\cite{haeffele2019structured} to our problem of interest \\eqref{eq:main_obj}.\n\\begin{proposition}\n\\label{prop:equiv_prob}\nThe optimization problem in \\eqref{eq:main_obj} is a special case of the problem considered in \\cite{haeffele2019structured}.\n\\end{proposition}\n\\begin{proof}\nAll proofs can be found in the supplement.\n\\end{proof}\nFrom this, we note that within the framework developed in \\cite{haeffele2019structured} it is shown (Corollary 1) that a given point $(\\tilde \\mathbf{U}, \\tilde \\mathbf{V})$ is a globally optimal solution of \\eqref{eq:gen_obj} iff the following two conditions are satisfied:\n\\begin{equation}\n1) \\ \\ \\langle -\\nabla \\ell( \\tilde \\mathbf{U} \\tilde \\mathbf{V}^\\top), \\tilde \\mathbf{U} \\tilde \\mathbf{V}^\\top \\rangle = \\lambda \\sum_{i=1}^N \\bar \\theta(\\tilde \\mathbf{U}_i, \\tilde \\mathbf{V}_i) \\ \\ \\ \\ \\ \\ 2) \\ \\ \\Omega_{\\bar \\theta}^\\circ (-\\tfrac{1}{\\lambda} \\nabla \\ell(\\tilde \\mathbf{U} \\tilde \\mathbf{V}^\\top)) \\leq 1\n\\end{equation}\nwhere $\\nabla \\ell(\\hat \\mathbf{Y})$ denotes the gradient of $\\ell$ w.r.t. the matrix product $\\hat \\mathbf{Y} = \\mathbf{U} \\mathbf{V}^\\top$ and $\\Omega_{\\bar \\theta}^\\circ(\\cdot)$ is referred to as the `polar problem' which is defined as \n\\begin{equation}\n\\label{eq:polar_def}\n\\Omega_{\\bar \\theta}^\\circ (\\mathbf{Z}) \\equiv \\sup_{\\mathbf{u},\\mathbf{v}} \\mathbf{u}^\\top \\mathbf{Z} \\mathbf{v} \\ \\ \\textnormal{s.t.} \\ \\ \\bar \\theta(\\mathbf{u},\\mathbf{v}) \\leq 1.\n\\end{equation}\nIt is further shown in \\cite{haeffele2019structured} (Proposition 3) that the first condition above will always be satisfied for any first-order stationary point $(\\tilde \\mathbf{U}, \\tilde \\mathbf{V})$, and that if a given point is not globally optimal then the objective function \\eqref{eq:gen_obj} can always be decreased by augmenting the current factorization by a solution to the polar problem as a new column:\n\\begin{equation}\n(\\mathbf{U}, \\mathbf{V}) \\! \\! \\leftarrow \\! \\! \\left( \\left[ \\tilde \\mathbf{U}, \\ \\tau \\mathbf{u}^* \\right], \\left[ \\tilde \\mathbf{V}, \\ \\tau \\mathbf{v}^* \\right] \\right) \\! : \\! \\mathbf{u}^*, \\mathbf{v}^* \\in \\argmax_{\\mathbf{u},\\mathbf{v}} \\mathbf{u}^\\top \\! (-\\tfrac{1}{\\lambda} \\nabla \\ell(\\tilde \\mathbf{U} \\tilde \\mathbf{V}^\\top)) \\mathbf{v} \\ \\ \\textnormal{s.t.} \\ \\ \\bar \\theta(\\mathbf{u},\\mathbf{v}) \\leq 1 \n\\end{equation}\nfor an appropriate choice of step size $\\tau > 0$.\nIf one can efficiently solve the polar problem for a given regularization function $\\bar \\theta$, then this provides a means to efficiently solve problems of the form \\eqref{eq:main_obj} with guarantees of global optimality. Unfortunately, the main challenge in applying this result from a computational standpoint is that solving the polar problem requires one to solve another challenging non-convex problem in \\eqref{eq:polar_def}, which is often NP-Hard for even relatively simple regularization functions \\cite{bach2013convex}. 
Here, however, we provide a key positive result, proving that for our designed regularization function \\eqref{eq:theta}, the polar problem can be solved efficiently, in turn enabling efficient and guaranteed optimization of the non-convex model \\eqref{eq:main_obj}.\n\\begin{theorem}\n\\label{thm:polar}\nFor the objective in \\eqref{eq:main_obj}, the associated polar problem is equivalent to:\n\\begin{equation}\n\\Omega_\\theta^\\circ (\\mathbf{Z}) = \\max_{\\mathbf{d} \\in \\mathbb{R}^{L}, \\mathbf{x} \\in \\mathbb{R}^{T}, k\\in \\mathbb{R}} \\mathbf{d}^\\top \\mathbf{Z} \\mathbf{x} \\ \\ \\textnormal{s.t.} \\ \\ \\|\\mathbf{d}\\|^2_F + \\gamma \\|\\mathbf{L}\\mathbf{d} + k^2 \\mathbf{d}\\|_F^2 \\leq 1, \\ \\|\\mathbf{x}\\|^2_F \\leq 1, \\ 0 \\leq k \\leq 2.\n\\end{equation}\nFurther, let $\\mathbf{L} = \\Gamma \\Lambda \\Gamma^\\top$ be an eigen-decomposition of $\\mathbf{L}$ and define the matrix $\\mathbf{A}( \\bar k) = \\Gamma(\\mathbf{I} + \\gamma(\\bar k \\mathbf{I} + \\Lambda)^2) \\Gamma^\\top$. Then, if we define $\\bar k^*$ as%\n\\begin{equation}\n\\bar k^* = \\argmax_{\\bar k \\in [0,4]} \\| \\mathbf{A}(\\bar k)^{-1\/2} \\mathbf{Z} \\|_2\n\\label{eq:k_max}\n\\end{equation}\noptimal values of $\\mathbf{d},\\mathbf{x}, k$ are given as $\\mathbf{d}^* = \\mathbf{A}(\\bar k^*)^{-1\/2} \\bar \\mathbf{d}$, $\\mathbf{x}^* = \\bar \\mathbf{x}$, and $k^* = (\\bar k^*)^{1\/2}$ where $(\\bar \\mathbf{d}, \\bar \\mathbf{x})$ are the left and right singular vectors, respectively, associated with the largest singular value of $\\mathbf{A}(\\bar k^*)^{-1\/2} \\mathbf{Z}$. Additionally, the above line search over $\\bar k$ is Lipschitz continuous with a Lipschitz constant, $L_{\\bar k}$, which is bounded by:\n\\begin{equation}\nL_{\\bar k} \\leq \\begin{Bmatrix} \\frac{2}{3 \\sqrt{3}} \\sqrt{\\gamma} \\|\\mathbf{Z}\\|_2 & \\gamma \\geq \\frac{1}{32} \\\\ \n4 \\gamma (1 + 16 \\gamma)^{-\\tfrac{3}{2}} \\|\\mathbf{Z}\\|_2 & \\gamma < \\frac{1}{32} \\end{Bmatrix} \\leq \\tfrac{2}{3 \\sqrt{3}} \\sqrt{\\gamma} \\|\\mathbf{Z}\\|_2\n\\end{equation}\n\\end{theorem}\n\nWe also note that the above result implies that we can solve the polar problem by first performing a (one-dimensional) line search over $k$, and due to the fact that the largest singular value of a matrix is a Lipschitz continuous function, this line search can be solved efficiently by a variety of global optimization algorithms. For example, we give the following corollary for the simple algorithm given in \\cite{malherbe2017global}, and similar results are easily obtained for other algorithms.\n\n\\begin{corollary}[Adapted from Cor 13 in \\cite{malherbe2017global}]\n\\label{cor:line_search}\nFor the function $f(\\bar k) = \\|\\mathbf{A}(\\bar k)^{-1\/2} \\mathbf{Z} \\|_2$ as defined in Theorem \\ref{thm:polar}, if we let $\\bar k_1, \\ldots, \\bar k_r$ denote the iterates of the LIPO algorithm in \\cite{malherbe2017global} then we have $\\forall \\delta \\in (0,1)$ with probability at least $1-\\delta$,\n\\begin{equation}\n\\max_{\\bar k \\in [0,4]} f(\\bar k) - \\max_{i=1\\ldots r} f(\\bar k_i) \\leq \\tfrac{8}{3 \\sqrt{3}} \\sqrt{\\gamma} \\|\\mathbf{Z}\\|_2 \\frac{\\ln(1\/\\delta)}{r}\n\\end{equation}\n\\end{corollary}\nAs a result, we have that the error of the line search converges to a global optimum at rate $\\mathcal{O}(1\/r)$, where $r$ is the number of function evaluations of $f(\\bar k)$; then, given the optimal value of $k$, the optimal $(\\mathbf{d}, \\mathbf{x})$ vectors can be computed in closed form via a singular value decomposition. 
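\n\nFor illustration, Theorem \\ref{thm:polar} translates into the short numerical sketch below (a dense grid over $[0,4]$ stands in for the LIPO search of Corollary \\ref{cor:line_search}; this is a toy illustration rather than the implementation used in our experiments):\n\\begin{verbatim}\nimport numpy as np\n\ndef solve_polar(Z, L, gamma, grid_pts=2000):\n    lam, Gam = np.linalg.eigh(L)          # L = Gam diag(lam) Gam^T\n\n    def A_inv_sqrt(kb):\n        # A(kb)^{-1\/2} = Gam (I + gamma (kb I + Lam)^2)^{-1\/2} Gam^T\n        w = 1.0 \/ np.sqrt(1.0 + gamma * (kb + lam) ** 2)\n        return (Gam * w) @ Gam.T\n\n    def f(kb):\n        return np.linalg.norm(A_inv_sqrt(kb) @ Z, 2)\n\n    ks = np.linspace(0.0, 4.0, grid_pts)  # line search over k_bar\n    kb = ks[np.argmax([f(k) for k in ks])]\n\n    U, s, Vt = np.linalg.svd(A_inv_sqrt(kb) @ Z)\n    d = A_inv_sqrt(kb) @ U[:, 0]          # d* = A(k_bar*)^{-1\/2} d_bar\n    x = Vt[0, :]                          # x* = x_bar\n    return s[0], d, x, np.sqrt(kb)        # polar value, d*, x*, k*\n\\end{verbatim}\nThe returned polar value can then be compared against $1$ to decide, per the optimality conditions above, whether a stationary point is already globally optimal or whether the factorization should be augmented with the new column $(\\mathbf{d}^*, \\mathbf{x}^*, k^*)$.\n\n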
Taken together this allows us to employ the Meta-Algorithm defined in Algorithm \\ref{alg:meta} to solve problem \\eqref{eq:main_obj}.\n\\begin{corollary}\n\\label{cor:poly_time}\nAlgorithm \\ref{alg:meta} produces an optimal solution to \\eqref{eq:main_obj} in polynomial time.\n\\end{corollary}\n\n\\begin{algorithm}\n\\caption{\\bf{Meta-algorithm}}\n\\label{alg:meta}\n\\begin{algorithmic}[1]\n\\State Input $\\mathbf{D}_{init}$, $\\mathbf{X}_{init}$, $\\mathbf{k}_{init}$ \n\\State Initialize $ \\left( \\mathbf{D}, \\mathbf{X}, \\mathbf{k} \\right) \\leftarrow \\left( \\mathbf{D}_{init}, \\mathbf{X}_{init}, \\mathbf{k}_{init} \\right)$\n\\While {global convergence criteria is not met}\n\\State Perform gradient descent on the low-rank wave-informed objective function \\eqref{eq:main_obj} with $N$ fixed to reach a first order stationary point $(\\Tilde{\\mathbf{D}}, \\Tilde{\\mathbf{X}}, \\Tilde{\\mathbf{k}})$\\label{step:grad_desc}\n\\State Calculate the value of $\\Omega^\\circ_\\theta(\\tfrac{1}{\\lambda}(\\mathbf{Y}-\\tilde \\mathbf{D} \\tilde \\mathbf{X}^\\top))$ via Theorem \\ref{thm:polar} above and obtain $\\mathbf{d}^*, \\mathbf{x}^*, k^*$\n\\If {value of polar $\\Omega^\\circ_\\theta(\\tfrac{1}{\\lambda}(\\mathbf{Y}-\\tilde \\mathbf{D} \\tilde \\mathbf{X}^\\top)) = 1$} \\State {Algorithm converged to global minimum}\n\\Else \n\\State {Append $(\\mathbf{d}^*, \\mathbf{x}^*, k^*)$ to $(\\Tilde{\\mathbf{D}}, \\Tilde{\\mathbf{X}}, \\Tilde{\\mathbf{k}})$ and update $\\left( \\mathbf{D}, \\mathbf{X}, \\mathbf{k} \\right)$\n\\State $ \\left( \\mathbf{D}, \\mathbf{X}, \\mathbf{k} \\right) \\! \\leftarrow \\! \\left( \\left[ \\tilde{\\mathbf{D}}, \\ \\tau \\mathbf{d}^* \\right], \\left[ \\tilde{\\mathbf{X}}, \\ \\tau \\mathbf{x}^* \\right] , \\left[ \\tilde{\\mathbf{k}}^\\top, \\ k^* \\right]^\\top\n\\right)$, $\\tau\\! > \\! 0$ is step size (see supplement).\\label{step:step_size}}\n\\EndIf\n\\State Continue loop.\n\\EndWhile\n\\end{algorithmic}\n\\label{algoblock:meta-algo}\n\\end{algorithm}\n\n\\vspace{-3mm}\n\n\\section{Results \\& Discussions}\n\\label{sec:results}\nIn this section, we evaluate our proposed wave-informed matrix factorization on two synthetic datasets. For each dataset, algorithm iterations are performed until the value of polar is evaluated to be less than a threshold close to 1 to demonstrate convergence to the global minimum (i.e., a polar value of 1). We note that the distance to the global optimum (in objective value) is directly proportional to the value of the polar value at any point minus 1 (\\cite{haeffele2019structured}, Prop. 4), so choosing a stopping criteria as polar value $\\leq 1 \\! + \\! 
\\epsilon$ also guarantees optimality within $\\mathcal{O}(\\epsilon)$.\n\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.25]{figs\/merge_1_diff_strings_middle.png}\n \\caption{(left) Two cables of different impedances joined end-to-end; (right) cables of two different materials with a slightly alternate configuration, and measurements sampled at \\color{alizarin} \\textbf{X} \\color{black}.}\n \\label{fig:merge_1}\n \\vspace{-5mm}\n\\end{figure}\n\n\n\\myparagraph{Characterizing Multi-Segment Transmission Lines} First, we consider the \\textbf{material characterization} problem described in the introduction.\nSpecifically, we consider waves propagating along an electrical transmission line where the impedance of the transmission line changes along the line (as would occur, for example, if the transmission line was damaged or degraded in a region), as depicted in Figure \\ref{fig:merge_1}(left). \nWe simulate wave propagation in this line based on a standard RLGC (resistance, inductance, conductance, capacitance) transmission line model \\cite{Pozar2012}, using an algebraic graph theory engine for one-dimensional waves \\cite{Harley2019}. For this simulation, we combine two transmission line segments of length $0.4$~m (\\color{orange-left}left\\color{black}) and $0.6$~m (\\color{blue-right}right\\color{black}). \n\nThe wavenumber (which is inversely proportional to velocity) of the right segment is $4$~times higher than the wavenumber of the left segment. A modulated Gaussian signal with a center frequency of $10$~MHz and a $3$~dB bandwidth of $10$~MHz is transmitted from the left end of this setup. This impulse travels $0.4$~m till it reaches the interface (i.e., the connection between the two cables). At the interface, part of the propagating signal reflects and another part transmits into the next cable. The signal again reflects at the end of each cable. \n\nNote the following: 1) there are now two distinct wavenumber regions, 2) the excitation\/forcing function is transient and is therefore a linear combination of nearby frequencies that each have a unique wavenumber, and 3) the end boundaries have loss (i.e., energy exits the system). We show electrical voltage amplitude at a timestamp, see Figure \\ref{fig:Merge_2}(top-left). We observe waves travelling in two regions with different wavenumbers. Note that performing a simple Fourier transform (in space) of the signal will produce unsatisfactory results due to the discrete change in the wavenumber along the line. However, as we show below, our method automatically recovers the two distinct regions as separated modes in our decomposition. Namely, we solve our wave-informed matrix factorization with $\\gamma=50$ and $\\lambda=0.6$ on the above described wave data. \nInspection of the columns of $\\mathbf{D}$, see Figure \\ref{fig:Merge_2}(top-right), clearly shows that single columns of $\\mathbf{D}$ contain energy largely contained to only one region of the transmission line, automatically providing a natural decomposition of the signal which corresponds to changes in the physical transmission line. 
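\n\nFor completeness, the per-segment energy partition plotted in Figure \\ref{fig:Merge_2}(bottom-right), and the entropy summary reported below, can be computed along the following lines (a sketch only; the segment boundary index, the use of base-2 entropy, and all variable names are our illustrative choices):\n\\begin{verbatim}\nimport numpy as np\n\ndef left_energy_fraction(D, X, n_left, top=30):\n    # rank columns by ||D_i X_i^T||_F^2 = ||D_i||^2 ||X_i||^2, keep the top\n    # ones, normalize each column of D, and report its left-segment energy\n    contrib = (np.linalg.norm(D, axis=0) * np.linalg.norm(X, axis=0)) ** 2\n    idx = np.argsort(contrib)[::-1][:top]\n    Dn = D[:, idx] \/ np.linalg.norm(D[:, idx], axis=0)\n    return np.sum(Dn[:n_left] ** 2, axis=0)   # fraction in [0, 1] per column\n\ndef mean_entropy(p, eps=1e-12):\n    # binary entropy of the left\/right power split, averaged over columns\n    p = np.clip(p, eps, 1.0 - eps)\n    return float(np.mean(-(p * np.log2(p) + (1 - p) * np.log2(1 - p))))\n\\end{verbatim}\n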
\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.093]{figs\/Merge_2.png}\n \\caption{(top-left) Electrical amplitude at a timestamp of $0.4115 \\mu$s; (top-right) illustration of two columns of $\\mathbf{D}$ from Wave-informed matrix factorization; (bottom-left) illustration of two columns of $\\mathbf{D}$ from low-rank matrix factorization;(bottom-right) partitioned normalized energy of the first 30 significant columns of Wave-informed matrix factorization and low-rank matrix factorization.}\n \\label{fig:Merge_2}\n \\vspace{-5mm}\n\\end{figure}\n\nAs a first baseline comparison, we also perform a low-rank factorization (by setting $\\gamma=0$ and $\\lambda=0.6$), which is equivalent to only using nuclear norm regularization on the matrix product $(\\mathbf{D} \\mathbf{X}^\\top)$.\nWith just low-rank regularization, one observes that while there is a clear demarcation at $0.4$~m, see Figure \\ref{fig:Merge_2}(bottom-left), there is still significant energy in both transmission line segments for each column of $\\mathbf{D}$. \nTo quantitatively evaluate the quality of the decomposition, we normalize the 30 most significant (determined by the corresponding value of $\\|\\mathbf{D}_i \\mathbf{X}_i^{\\top} \\|_F^2$) columns of $\\mathbf{D}$ and plot the energy on each partition, see Figure \\ref{fig:Merge_2}(bottom-right). In case of the low-rank factorization, see Figure \\ref{fig:Merge_2}(bottom-right),\nwe largely observe higher energy levels on regions of higher wavenumber (energy is proportional to the wavenumber for oscillatory quantities) and lower energy levels on regions of lower wavenumber. \nIn the case of our wave-informed matrix factorization model, we see two clear step functions, one indicating the energy on the first segment and the other indicating energy on the second segment, showing a sharp decomposition of the signal into components corresponding to the two distinct regions. We can further encapsulate this into a single quantity representing the decomposition of the signal into components corresponding to the two distinct regions. For this we compute the mean entropy (mean over the columns of $\\mathbf{D}$) of the percentage of signal power in one of the two regions (note the entropy is invariant to the choice of region). In the ideal case, the entropy will be 0, indicating that the learned components have all of the power being exclusively in one of the two line segments, while an entropy of 1 indicates the worst case where the power in the signal of a component column is split equally between the two segments.\n To quantify the performance of our method we also compare against other similar matrix factorization or modal extraction techniques: wave-informed KSVD \\cite{Tetali2019} ({\\color{B}{\\textbf{B}}}), low-rank matrix factorization ({\\color{C}{\\textbf{C}}}), independent component analysis \\cite{hyvarinen2000independent,van2006pp} ({\\color{D}{\\textbf{D}}}), dynamic mode decomposition \\cite{PhysRevFluids.5.054401,SUSUKI2018327} ({\\color{E}{\\textbf{E}}}), ensemble empirical mode decomposition \\cite{wu2009ensemble, fang2011stress, 6707404} ({\\color{F}{\\textbf{F}}}) and principal component analysis \\cite{medina1993three} ({\\color{G}{\\textbf{G}}}). Dynamic mode decomposition ({\\color{E}{\\textbf{E}}}), independent component analysis ({\\color{D}{\\textbf{D}}}), and principal component analysis ({\\color{G}{\\textbf{G}}}) are subspace identification techniques that resemble matrix factorization in many respects. 
Ensemble empirical mode decomposition is a refined version of empirical mode decomposition, extended to 2D data), which learns a basis from data, similar to our algorithm.\n\n Note that our proposed wave-informed matrix factorization ({\\color{A}{\\textbf{A}}}) achieves the best performance of the seven methods, providing quantitative evidence of our model's ability to isolate distinct materials (and corresponding wavefields). \n\n\\begin{table}[ht]\n\\vspace{-5mm}\n\\caption{\\label{table:entropies} Comparison of entropy based performance for segmentation of a transmission line using algorithms ({\\color{A}{\\textbf{A}}}), ({\\color{B}{\\textbf{B}}}), ({\\color{C}{\\textbf{C}}}), ({\\color{D}{\\textbf{D}}}), ({\\color{E}{\\textbf{E}}}), ({\\color{F}{\\textbf{F}}}) and ({\\color{G}{\\textbf{G}}}), where the mean entropy row represents the mean entropy of partitioned normalized energies for each algorithm.}\n\\centering\n\\begin{tabularx}{\\textwidth}{@{}lXXXXXXX@{}}\n\\toprule\n\\textbf{Algorithm} & {\\color{A}{\\textbf{A}}} & {\\color{B}{\\textbf{B}}} & {\\color{C}{\\textbf{C}}} & {\\color{D}{\\textbf{D}}} & {\\color{E}{\\textbf{E}}} & {\\color{F}{\\textbf{F}}} & {\\color{G}{\\textbf{G}}} \\\\ \\midrule\n\\textbf{Mean Entropy} & 0.176 & 0.211 & 0.531 & 0.464 & 0.431 & 0.636 & 0.531 \\\\ \\bottomrule\n\\end{tabularx}\n\\vspace{-5mm}\n\\end{table}\n\n\n\\myparagraph{Characterizing Multi-segment Transmission Lines with Sparsely Sampled Data}\nIn this subsection, we demonstrate the effectiveness of our model when the data is sparsely sampled. Specifically, we reduce the spatial density at which points were sampled (keeping the number of time points the same) by sampling 10 points uniformly on the cable of length $1$m, and we also change the configuration of the transmission lines by assigning the \\color{orange-left} first $0.4$ m \\color{black} and the \\color{orange-left} last $0.5$ m \\color{black} with one material and the \\color{blue-right} middle $0.1$ m \\color{black} with another material (see Fig.~\\ref{fig:merge_1}(right)). We then solve a matrix completion problem with wave-informed matrix factorization to interpolate the wavefield at the remaining 90 points on the $1$m region. Since the sampling is uniformly done over space, the data matrix contains all zero columns which implies it \\textit{cannot} be completed with standard low-rank matrix completion. However, we show that wave-informed matrix factorization, on the other hand, fills in those regions appropriately due to the wave-constraint being enforced.\n\nSpecifically, we minimize the following objective\\footnote{The mathematical details of the optimization procedure are almost the same as the algorithm mentioned in Algorithm \\ref{alg:meta} except that the masking operator $\\mathcal{A}(\\cdot)$ also needs to be included (see supplement).}:\n\\begin{equation}\n\\label{eq:main_obj_missing}\n\\!\\!\\min_{N \\in \\mathbb{N}_+} \\!\\!\\min_{\\substack{\\mathbf{D} \\in \\mathbb{R}^{L \\times N}, \\\\ \\mathbf{X} \\in \\mathbb{R}^{T \\times N}, \\mathbf{k} \\in \\mathbb{R}^N}} \\!\\!\\!\\!\\tfrac{1}{2}\\| \\mathcal{A} \\left( \\mathbf{Y}-\\mathbf{D}\\mathbf{X}^\\top \\right) \\|_F^2 + \\tfrac{\\lambda}{2} \\sum_{i=1}^N \\left(\\|\\mathbf{X}_i\\|_F^2 + \\|\\mathbf{D}_i\\|_F^2 + \\gamma \\|\\mathbf{L} \\mathbf{D}_i + k_i^2 \\mathbf{D}_i \\|_F^2 \\right) \n\\end{equation}\nwhich is a matrix completion problem with a linear masking operator $\\mathcal{A}(\\cdot)$. 
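\n\nConcretely, with space indexing the rows of $\\mathbf{Y}$ (matching $\\mathbf{D} \\in \\mathbb{R}^{L \\times N}$), the masking operator can be realized as elementwise multiplication with a binary mask that keeps only the sampled locations. The sketch below (illustrative only) shows this masked loss and its gradient with respect to the product $\\mathbf{D}\\mathbf{X}^\\top$; note that $\\mathcal{A}^* = \\mathcal{A}$ for such a binary mask:\n\\begin{verbatim}\nimport numpy as np\n\ndef sampling_mask(n_space, n_time, sampled_rows):\n    # A(.) zeroes out every row except the sparsely sampled spatial locations\n    M = np.zeros((n_space, n_time))\n    M[sampled_rows, :] = 1.0\n    return M\n\ndef masked_loss_and_grad(Y, D, X, M):\n    # 0.5 * ||A(Y - D X^T)||_F^2 and its gradient w.r.t. D X^T\n    R = M * (Y - D @ X.T)\n    return 0.5 * np.sum(R ** 2), -R\n\\end{verbatim}\nOnly the data-fit term changes; the wave regularizer still acts on every entry of each column of $\\mathbf{D}$, which is what allows the unobserved locations to be filled in consistently with \\eqref{eqn:discrete_wave_eqn}.\n\n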
We observe, despite the very sparse sampling, the columns of $\\mathbf{D}$ still maintain sufficient structure to identify different material regions. \nIn Figure~\\ref{fig:Merge_3}(left), we plot all of the recovered columns of $\\mathbf{D}$, where we observe that, except for one column of $\\mathbf{D}$, every other column contains very little energy in the region between $0.45$m and $0.55$m. Thus the algorithm again automatically detects a change in this region in the decomposition. Note that the actual change was introduced between $0.4$m and $0.5$m. An error of $0.05$m in estimating the region of material change is analogous to Gibbs phenomenon in signals and system theory -- this especially occurs due to the fact that we impose second derivative constraints on the columns of $\\mathbf{D}$, which imposes a smoothness constraint and shifts the transition. Figure \\ref{fig:Merge_3}(right) is similar to Figure \\ref{fig:Merge_2}(bottom-left) as it quantifies that only one column of $\\mathbf{D}$ is active in the middle region, and the other columns of $\\mathbf{D}$ are active in the other distinct regions, whereas a low-rank factorization model displays significantly more mixing of signal energies in each region. \n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.09]{figs\/Merge_3.png}\n \\caption{(left) All the columns of $\\mathbf{D}$ recovered from wave-informed matrix factorization in a single graph, note that the measurements were sampled only at $ \\{ 0,0.1, \\cdots, 0.9 \\}$; (right) normalized energies of various regions, for the first $8$ significant columns of $\\mathbf{D}$ for both algorithms.}\n \\label{fig:Merge_3}\n \\vspace{-5mm}\n\\end{figure}\n\n\n\\myparagraph{A Fixed Vibrating String}\nA fixed vibrating string is an example of a wave with fixed boundary conditions. \nAs an example, consider the dynamics of a noisy fixed string, given by: \n\\begin{equation}\ny(\\ell,t) = \\sum_{n=1}^{R} a_n \\sin (n k \\ell) \\sin(n \\omega t) + \\eta(\\ell,t) \\; .\n\\label{eqn:wave_equ}\n\\end{equation}\nThis can be represented in matrix notation as $\\mathbf{Y} = \\mathbf{D} \\mathbf{X}^{\\top} + \\mathbf{N}$, where $\\mathbf{N}$ is an additive noise term. To demonstrate our framework, we will consider two more challenging variants of \\eqref{eqn:wave_equ} where we add a damping factor (in either time or space) to the amplitude of the wave:\n\\begin{eqnarray}\n y_t(\\ell,t) = \\sum_{n=1}^{R} a_n e^{-\\alpha_n t} \\sin (n k \\ell) \\sin(n \\omega t) + \\eta(\\ell,t) \n \\label{eqn:damped_wave_time} \\\\\n y_s(\\ell,t) = \\sum_{n=1}^{R} a_n e^{-\\beta_n \\ell} \\sin (n k \\ell) \\sin(n \\omega t) + \\eta(\\ell,t)\n \\label{eqn:damped_wave_space}\n\\end{eqnarray}\n\nThough this does not exactly satisfy the wave equation, waves in practice often have a damped sinusoidal nature. We first consider the model in \\eqref{eqn:damped_wave_time} (damping in time) where we run our algorithm with $\\mathbf{Y} \\in \\mathbb{R}^{1000 \\times 4000}$, $R=10$, $k = 2 \\pi$, $\\omega = 12 \\pi$, $\\alpha_n = n$, $a_n$ roughly of the order of $10^{1}$, and additive white Gaussian noise $\\eta(\\ell,t)$ of variance $\\approx 10$ and $0$ mean (for the noisy case). We set $\\gamma = 10^9$ and $\\lambda = 200$ (see supplement for the behaviour of the algorithm with respect to changes in $\\gamma$ and $\\lambda$). 
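\n\nFor reference, a dataset of this form can be synthesized directly from \\eqref{eqn:damped_wave_time}; the sketch below assumes unit spatial and temporal extents and uses representative amplitude values (the exact values used in our experiments may differ):\n\\begin{verbatim}\nimport numpy as np\n\nLp, T, R = 1000, 4000, 10           # grid sizes and number of modes\nk, w = 2 * np.pi, 12 * np.pi        # wavenumber and angular frequency\nell = np.linspace(0.0, 1.0, Lp)     # space samples (unit length assumed)\nt = np.linspace(0.0, 1.0, T)        # time samples (unit duration assumed)\na = 10.0 * np.ones(R)               # amplitudes, order of 10^1\nalpha = np.arange(1, R + 1)         # alpha_n = n (damping in time)\n\nY = sum(a[n] * np.outer(np.sin((n + 1) * k * ell),\n                        np.exp(-alpha[n] * t) * np.sin((n + 1) * w * t))\n        for n in range(R))\nY = Y + np.sqrt(10.0) * np.random.randn(Lp, T)   # noise variance ~ 10\n\\end{verbatim}\nThe ground-truth spatial modes here are the undamped sinusoids $\\sin(n k \\ell)$, which is what the recovered columns of $\\mathbf{D}$ in Figure \\ref{fig:Merge_4} should ideally match.\n\n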
From wave-informed matrix factorization, when there is damping in time, we still obtain the columns of $\\mathbf{D}$ (vibrations in space, which is undamped) as clear underlying sinusoids (Figure \\ref{fig:Merge_4}). We emphasize here that the modes of vibration are exactly recovered from data even in the presence of large amounts of noise (Figure \\ref{fig:Merge_4} (left)), whereas low-rank matrix factorization does not obtain pure sinusoids even in the noiseless case (Figure \\ref{fig:Merge_4} (right) -- the dotted blue curves change in amplitude over space). To quantify this performance, in the appendix we provide the error between modes recovered by our algorithm and the ground-truth modes. Recall our algorithm also provides guarantees of polynomial-time global optimality (unlike the work of \\cite{Tetali2019}), without using a library matrix as mentioned in \\cite{lai2020full}. \n %\n Next, we run wave-informed matrix factorization on the model in \\eqref{eqn:damped_wave_space} (damping in space) \n with the same parameters as above and $\\beta_n = n\/2$ (also including the additive noise term) and visually compare our algorithms to other modal analysis methods ({\\color{A}{\\textbf{A}}}, {\\color{B}{\\textbf{B}}}, {\\color{C}{\\textbf{C}}}, {\\color{D}{\\textbf{D}}}, {\\color{E}{\\textbf{E}}}, {\\color{F}{\\textbf{F}}}, {\\color{G}{\\textbf{G}}}) (Figure~\\ref{fig:damped_sines}). Here we show recovering damped sinusoids is possible under heavy noise (see two left columns of $\\mathbf{D}$ demonstrated in Figure~\\ref{fig:damped_sines}), where we have chosen the most significant columns of each method. A close look at the recovered modes indicates much cleaner recovery of damped (in space) sinusoids for our method ({\\color{A}{\\textbf{A}}}) compared to others. For example, ({\\color{B}{\\textbf{B}}}) is the closest in performance to our method, but distortions can be observed in the tails of the components.\n\n\\begin{figure}[ht]\n\\vspace{-4mm}\n \\centering\n \\includegraphics[trim=10 10 10 150, clip, scale=0.08]{figs\/Merge_4.png}\n \\caption{4 columns of $\\mathbf{D}$ from low-rank matrix factorization (blue) and wave-informed matrix factorization (red) with the temporally damped vibrating string model. Showing with noisy data (left) and the noiseless case (right). \\vspace{-5mm}}\n \\label{fig:Merge_4}\n\\end{figure}\n\\begin{figure}[ht]\n \\centering\n \\vspace{-4mm}\n \\includegraphics[trim=10 10 10 15, clip, width=\\linewidth]{figs\/dampled_sines.png}\n \\caption{Two recovered modes (rows) of spatially damped sinusoids ({\\color{A}{\\textbf{A}}}), ({\\color{B}{\\textbf{B}}}), ({\\color{C}{\\textbf{C}}}), ({\\color{D}{\\textbf{D}}}), ({\\color{E}{\\textbf{E}}}), ({\\color{F}{\\textbf{F}}}), ({\\color{G}{\\textbf{G}}}).} \\label{fig:damped_sines}\n \\vspace{-5mm}\n\\end{figure}\n\n\n\\myparagraph{Conclusions}\nWe have developed a framework for a wave-informed matrix factorization algorithm with provable, polynomial-time global optimally guarantees. Output from the algorithm was compared with that of low-rank matrix factorization and state-of-the-art algorithms for modal and component analysis. We demonstrated that the wave-informed approach learns representations that are more physically relevant and practical for the purpose of material characterization and modal analysis. 
Future work will include 1) generalizing this approach to a variety of linear PDEs beyond the wave equation as well as wave propagation along more than one dimension, 2) applications in baseline-free anomaly detection for structural health monitoring \\cite{alguri2018baseline, alguri2021sim}. \n\n\\textbf{Acknowledgments} This work was partially supported by NIH NIA 1R01AG067396, ARO MURI W911NF-17-1-0304, NSF-Simons MoDL 2031985, and the National Science Foundation under award number ECCS-1839704.\n\n\\bibliographystyle{IEEEtran}\n\n\\section{Supplementary Material}\n\nFollowing is an example Laplacian matrix ($\\mathbf{L}$),\n\\begin{eqnarray}\n \\mathbf{L} &=& \\frac{1}{(\\Delta l)^2} \\begin{bmatrix}\n -2 & 1 & 0 & 0 & 0 &\\cdots & 0 \\\\\n 1 & -2 & 1 & 0 & 0 & \\cdots & 0 \\\\\n 0 & 1 & -2 & 1 & 0 & \\cdots & 0 \\\\\n \\vdots & \\vdots & \\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\n 0 & 0 & 0 & 0 & 0 & \\cdots & -2 \\\\\n \\end{bmatrix}.\n \\label{eqn:Lap_mat}\n\\end{eqnarray}\nTo reduce complexity, all theorems and proofs consider $\\Delta l = 1$ (for $\\Delta l$ defined in equation \\eqref{eqn:Lap_mat} without any loss of generality. The only change that needs to be accommodated is on the values in $\\mathbf{k}$ obtained from equation (\\ref{eq:k_max}) (i.e. while solving the polar). Observing equations (\\ref{eqn:discrete_wave_eqn}) and (\\ref{eqn:Lap_mat}), the only modification that is needed is to note that for $\\Delta l \\neq 1$ we follow Algorithm \\ref{algoblock:meta-algo} and replace the final vector $\\mathbf{k}$ with $\\mathbf{k} \\Delta l$.\n\n\\subsection{Algorithm Details}\n\n\\paragraph{Matrix Completion}\nWe note that our algorithm (and guarantees of polynomial time solutions) is easily generalized to any loss function $\\ell(\\mathbf{D} \\mathbf{X}^\\top)$, provided the loss function is once differentiable and convex w.r.t. $\\mathbf{D} \\mathbf{X}^\\top$. For example, the following is the modified algorithm for the matrix completion formulation in \\eqref{eq:main_obj_missing}:\n\n\n\\begin{algorithm}\n\\caption{\\bf{Meta-algorithm}}\n\\begin{algorithmic}[1]\n\\State Input $\\mathbf{D}_{init}$, $\\mathbf{X}_{init}$, $\\mathbf{k}_{init}$ \n\\State Initialize $ \\left( \\mathbf{D}, \\mathbf{X}, \\mathbf{k} \\right) \\leftarrow \\left( \\mathbf{D}_{init}, \\mathbf{X}_{init}, \\mathbf{k}_{init} \\right)$\n\\While {global convergence criteria is not met}\n\\State Perform gradient descent on \\eqref{eq:main_obj} with $N$ fixed to reach a first order stationary point $(\\Tilde{\\mathbf{D}}, \\Tilde{\\mathbf{X}}, \\Tilde{\\mathbf{k}})$\\label{step:grad_desc}\n\\State Calculate the value of $\\Omega^\\circ_\\theta \\left(\\tfrac{1}{\\lambda}\\left(\\mathcal{A}^*\\left(\\mathbf{Y}-\\tilde \\mathbf{D} \\tilde \\mathbf{X}^\\top\\right)\\right)\\right)$ via Theorem \\ref{thm:polar} and obtain $\\mathbf{d}^*, \\mathbf{x}^*, k^*$\n\\If {value of polar $\\Omega^\\circ_\\theta \\left(\\tfrac{1}{\\lambda}\\left(\\mathcal{A}^*\\left(\\mathbf{Y}-\\tilde \\mathbf{D} \\tilde \\mathbf{X}^\\top\\right)\\right)\\right) = 1$} \\State {Algorithm converged to global minimum}\n\\Else\n\\State {Append $(\\mathbf{d}^*, \\mathbf{x}^*, k^*)$ to $(\\Tilde{\\mathbf{D}}, \\Tilde{\\mathbf{X}}, \\Tilde{\\mathbf{k}})$ and update $\\left( \\mathbf{D}, \\mathbf{X}, \\mathbf{k} \\right)$\n\\State $ \\left( \\mathbf{D}, \\mathbf{X}, \\mathbf{k} \\right) \\! \\leftarrow \\! 
\\left( \\left[ \\tilde{\\mathbf{D}}, \\ \\tau_{\\mathcal{A}} \\mathbf{d}^* \\right], \\left[ \\tilde{\\mathbf{X}}, \\ \\tau_{\\mathcal{A}} \\mathbf{x}^* \\right] , \\left[ \\tilde{\\mathbf{k}}^\\top, \\ k^* \\right]^\\top\n\\right)$, $\\tau_{\\mathcal{A}}\\! > \\! 0$ is step size.}\\label{step:step_size_mising}\n\\EndIf\n\\State Continue loop.\n\\EndWhile\n\\end{algorithmic}\n\\label{algoblock:meta-algo_missing}\n\\end{algorithm}\nwhere $\\mathcal{A}^*$ denotes the adjoint of the linear masking operator.\n\n\\paragraph{Gradient update equations} We set the derivative of the right hand side of equation \\ref{eqn:obj_final} with respect to $\\mathbf{D}$, $\\mathbf{X}$ and $\\mathbf{k}$ and utilize block coordinate descent of Gauss-Seidel type \\cite{xu2013block} to reach a first order stationary point mentioned in step \\ref{step:grad_desc} of Algorithm \\ref{algoblock:meta-algo}. The number of columns of $\\mathbf{D}$ and $\\mathbf{X}$ (denoted by $N$ in equation (\\ref{eq:main_obj})) is fixed and does not change during this step. The following are the gradient update equations (for stepsize $\\alpha_i$):\n\n\n\\begin{gather}\n \\mathbf{D}_j \\leftarrow \\mathbf{D}_j - \\alpha_j \\left( \\left(\\mathbf{D} \\mathbf{X}^{\\top} - \\mathbf{Y} \\right) \\mathbf{X}_j + \\lambda \\mathbf{D}_j + 2 \\gamma \\lambda \\left( \\mathbf{L} + k^2_j \\mathbf{I} \\right)^2 \\mathbf{D}_j \\right) \\label{eq:grad_D} \\\\\n \\mathbf{X}_j \\leftarrow \\mathbf{X}_j - \\alpha_j \\left( \\left( \\mathbf{D} \\mathbf{X}^{\\top} - \\mathbf{Y} \\right)^{\\top} \\mathbf{D}_j \\right) \\label{eq:grad_X} \\\\\n k_j \\leftarrow \\sqrt{-\\frac{ \\mathbf{D}_j^\\top \\mathbf{L} \\mathbf{D}_j } {\\|\\mathbf{D}_j\\|_2^2}} \\label{eq:grad_k} \n\\end{gather}\n\n\n\\paragraph{Gradient update equations for the Matrix completion setting}\n\n\\begin{gather}\n \\mathbf{D}_j \\leftarrow \\mathbf{D}_j - \\alpha_j \\left( \\mathcal{A}^* \\left(\\mathbf{D} \\mathbf{X}^{\\top} - \\mathbf{Y} \\right) \\mathbf{X}_j + \\lambda \\mathbf{D}_j + 2 \\gamma \\lambda \\left( \\mathbf{L} + k^2_j \\mathbf{I} \\right)^2 \\mathbf{D}_j \\right) \\label{eq:grad_D} \\\\\n \\mathbf{X}_j \\leftarrow \\mathbf{X}_j - \\alpha_j \\left( \\mathcal{A}^* \\left( \\mathbf{D} \\mathbf{X}^{\\top} - \\mathbf{Y} \\right)^{\\top} \\mathbf{D}_j \\right) \\label{eq:grad_X} \\\\\n k_j \\leftarrow \\sqrt{-\\frac{ \\mathbf{D}_j^\\top \\mathbf{L} \\mathbf{D}_j } {\\|\\mathbf{D}_j\\|_2^2}} \\label{eq:grad_k} \n\\end{gather}\n\n\\paragraph{Step size computation} The step size, $\\tau$ mentioned in Step 10 of Algorithm \\ref{algoblock:meta-algo} is computed through the following quadratic minimization problem.\n\\begin{eqnarray}\n \\min_{\\substack{\\tau \\in \\mathbb{R}}} \\tfrac{1}{2}\\|\\mathbf{Y}- \\mathbf{D}_\\tau \\mathbf{X}_\\tau^\\top\\|_F^2 + \\frac{\\lambda}{2} \\sum_{i=1}^{N+1} \\left(\\|\\left(\\mathbf{X}_{\\tau}\\right)_i\\|_F^2 + \\| \\left( \\mathbf{D}_{\\tau} \\right)_i \\|_F^2 + \\gamma \\|\\mathbf{L} \\left( \\mathbf{D}_{\\tau} \\right)_i + \\left(\\mathbf{k}_{\\tau}^2\\right)_i \\left( \\mathbf{D}_{\\tau} \\right)_i \\|_F^2 \\right)\n \\label{eqn:for_tau}\n\\end{eqnarray}\n\nwhere, $\\left(\\mathbf{D}_\\tau, \\mathbf{X}_\\tau, \\mathbf{k}_{\\tau} \\right) = \\left( \\left[ \\tilde{\\mathbf{D}}, \\ \\tau \\mathbf{d}^* \\right], \\left[ \\tilde{\\mathbf{X}}, \\ \\tau \\mathbf{x}^* \\right] , \\left[ \\tilde{\\mathbf{k}}^\\top, \\ k^* \\right]^\\top\n\\right)$ and $N$ is the number of columns in $\\tilde{\\mathbf{D}}$.\n\n\\paragraph{Proposition} The optimal 
step size $\\tau^*$ that minimizes the expression in (\\ref{eqn:for_tau}) is given by:\n\\begin{eqnarray}\n \\tau^* = \\frac{\\sqrt{(\\mathbf{d}^*)^\\top \\left( \\mathbf{Y} - \\tilde{\\mathbf{D}} \\tilde{\\mathbf{X}}^{\\top} \\right) \\mathbf{x}^* - \\lambda }}{\\|\\mathbf{d}^*\\|_2 \\|\\mathbf{x}^*\\|_2 }\n \\label{eq:for_tau_}\n\\end{eqnarray}\n\n\\begin{proof}\nLet $f(\\tau)$ represent the objective function in \\eqref{eqn:for_tau} with everything except for $\\tau$ held fixed:\n\\begin{eqnarray}\n f(\\tau) = \\tfrac{1}{2}\\|\\mathbf{Y}- \\mathbf{D}_\\tau \\mathbf{X}_\\tau^\\top\\|_F^2 + \\tfrac{\\lambda}{2} \\sum_{i=1}^{N+1} \\left(\\|\\left(\\mathbf{X}_{\\tau}\\right)_i\\|_F^2 + \\| \\left( \\mathbf{D}_{\\tau} \\right)_i \\|_F^2 + \\gamma \\|\\mathbf{L} \\left( \\mathbf{D}_{\\tau} \\right)_i + \\left(\\mathbf{k}_{\\tau}\\right)^2_i \\left( \\mathbf{D}_{\\tau} \\right)_i \\|_F^2 \\right)\n\\end{eqnarray}\n\nObserve that from the solution to the Polar problem we have by construction that the regularization term $\\theta(\\mathbf{d}^*,\\mathbf{x}^*)=1$, so combined with the positive homogeneity of $\\theta$ we have have that minimizing $f(\\tau)$ w.r.t. $\\tau$ is equivalent to solving:\n\\begin{equation}\n\\min_{\\tau \\geq 0} \\tfrac{1}{2}\\|\\mathbf{Y}- \\tilde \\mathbf{D} \\tilde \\mathbf{X}^\\top - \\tau^2 \\mathbf{d}^* (\\mathbf{x}^*)^\\top \\|_F^2 + \\lambda \\tau^2\n\\end{equation}\n\nTaking the gradient of the above w.r.t. $\\tau^2$ and solving for 0 gives:\n\\begin{equation}\n(\\tau^*)^2 = \\frac{\\langle \\mathbf{Y}-\\tilde \\mathbf{D} \\tilde \\mathbf{X}^\\top, \\mathbf{d}^* (\\mathbf{x}^*)^\\top \\rangle - \\lambda}{ \\|\\mathbf{d}^* (\\mathbf{x}^*)^\\top\\|_F^2}\n\\end{equation}\nThe result is completed by noting that the numerator is guaranteed to be strictly positive due to the fact that the Polar solution has value strictly greater that 1.\n\\end{proof}\n\nThe step size, $\\tau_{\\mathcal{A}}$ for the matrix completion algorithm, mentioned in Step 10 of Algorithm \\ref{algoblock:meta-algo_missing} is computed through the following quadratic minimization problem.\n\\begin{eqnarray}\n \\min_{\\substack{\\tau_{\\mathcal{A}} \\in \\mathbb{R}}} \\tfrac{1}{2}\\|\\mathbf{Y}- \\mathbf{D}_{\\tau_{\\mathcal{A}}} \\mathbf{X}_{\\tau_{\\mathcal{A}}}^\\top\\|_F^2 + \\frac{\\lambda}{2} \\sum_{i=1}^{N+1} \\left(\\|\\left(\\mathbf{X}_{\\tau_{\\mathcal{A}}}\\right)_i\\|_F^2 + \\| \\left( \\mathbf{D}_{\\tau_{\\mathcal{A}}} \\right)_i \\|_F^2 + \\gamma \\|\\mathbf{L} \\left( \\mathbf{D}_{\\tau_{\\mathcal{A}}} \\right)_i + \\left(\\mathbf{k}_{\\tau_{\\mathcal{A}}}^2\\right)_i \\left( \\mathbf{D}_{\\tau_{\\mathcal{A}}} \\right)_i \\|_F^2 \\right)\n \\label{eqn:for_tau_missing}\n\\end{eqnarray}\n\nwhere, $\\left(\\mathbf{D}_{\\tau_{\\mathcal{A}}}, \\mathbf{X}_{\\tau_{\\mathcal{A}}}, \\mathbf{k}_{\\tau_{\\mathcal{A}}} \\right) = \\left( \\left[ \\tilde{\\mathbf{D}}, \\ {\\tau_{\\mathcal{A}}} \\mathbf{d}^* \\right], \\left[ \\tilde{\\mathbf{X}}, \\ {\\tau_{\\mathcal{A}}} \\mathbf{x}^* \\right] , \\left[ \\tilde{\\mathbf{k}}^\\top, \\ k^* \\right]^\\top\n\\right)$ and $N$ is the number of columns in $\\tilde{\\mathbf{D}}$.\n\n\\paragraph{Proposition} The optimal step size $\\tau^*_{\\mathcal{A}}$ that minimizes the expression in (\\ref{eqn:for_tau_missing}) is given by:\n\\begin{eqnarray}\n\\tau^*_{\\mathcal{A}^*} = \\frac{ \\sqrt{ \\langle \\mathcal{A}^* \\left( \\mathbf{Y}-\\tilde \\mathbf{D} \\tilde \\mathbf{X}^\\top \\right), \\mathcal{A}^* \\left( \\mathbf{d}^* {\\mathbf{x}^*}^\\top 
\\right)\\rangle - \\lambda }}{ \\| \\mathcal{A}^* \\left( \\mathbf{d}^* {\\mathbf{x}^*}^\\top \\right) \\|_F}\n\\end{eqnarray}\n\\begin{proof}\nFollowing the same steps of the previous proof, we obtain:\n\n\\begin{equation}\n\\min_{\\tau_{\\mathcal{A}} \\geq 0} \\tfrac{1}{2}\\| \\mathcal{A}^* \\left( \\mathbf{Y}- \\tilde \\mathbf{D} \\tilde \\mathbf{X}^\\top \\right) - \\tau_{\\mathcal{A}}^2 \\mathcal{A}^* \\left( \\mathbf{d}^* (\\mathbf{x}^*)^\\top \\right) \\|_F^2 + \\lambda \\tau_{\\mathcal{A}}^2\n\\end{equation}\n\nThe proof is completed by minimizing the quadratic for $\\left(\\tau_{\\mathcal{A}}\\right)^2$.\n\n\\end{proof}\n\n\\subsection{Proofs}\n\n{\n\\renewcommand{\\theproposition}{\\ref{prop:equiv_prob}}\n\\begin{proposition}\nThe optimization problem in \\eqref{eq:main_obj} is a special case of the problem considered in \\cite{haeffele2019structured}.\n\\end{proposition}\n\\addtocounter{proposition}{-1}\n}\n\n\n\\begin{proof}\nThe problem considered in \\cite{haeffele2019structured} is stated in \\eqref{eq:gen_obj}. Comparing it with (\\ref{eq:main_obj}) we have:\n\\begin{gather}\n \\ell \\left( \\mathbf{D}\\mathbf{X}^{\\top} \\right) = \\tfrac{1}{2}\\| \\mathbf{Y} - \\mathbf{D}\\mathbf{X}^{\\top} \\|_F^2\\\\\n \\bar{\\theta} \\left( \\mathbf{D}_i, \\mathbf{X}_i \\right) = \\tfrac{1}{2} \\left( \\| \\mathbf{D}_i \\|^2_2 + \\| \\mathbf{X}_i \\|^2_2 + \\gamma \\min_{k_i} \\| \\mathbf{L} \\mathbf{D}_i - k_i \\mathbf{D}_i \\|^2_2 \\right)\n\\end{gather}\nObserve that $\\ell ( \\hat{\\mathbf{Y}} ) = \\tfrac{1}{2}\\| \\mathbf{Y} - \\hat{\\mathbf{Y}} \\|^2_F$ is clearly convex and differentiable w.r.t $\\hat \\mathbf{Y}$. \n Now to realize that the optimization problem in \\eqref{eq:main_obj} is a special case of the problem considered in \\cite{haeffele2019structured}, it suffices to check that $\\bar \\theta(\\mathbf{d},\\mathbf{x})$ satisfies the three conditions of a rank-1 regularizer from \\cite{haeffele2019structured}.\n\n\\begin{enumerate}\n \\item $\\bar \\theta(\\alpha \\mathbf{d}, \\alpha \\mathbf{x}) = \\alpha^2 \\theta(\\mathbf{d}, \\mathbf{x}), \\ \\forall (\\mathbf{d},\\mathbf{x})$ and $\\forall \\alpha \\geq 0$.\n \n For any $\\alpha > 0$, $\\forall (\\mathbf{d}, \\mathbf{x})$ :\n \\begin{equation}\n \\begin{split}\n \\bar{\\theta}\\left( \\alpha \\mathbf{d}, \\alpha \\mathbf{x} \\right) &= \\| \\alpha \\mathbf{d} \\|^2_2 + \\| \\alpha \\mathbf{x} \\|^2_2 + \\gamma \\min_{k} \\| \\alpha \\mathbf{L} \\mathbf{d} + k^2 \\alpha \\mathbf{d} \\|^2_F \\\\\n &= \\alpha^2 \\| \\mathbf{d} \\|^2_2 + \\alpha^2 \\| \\mathbf{x} \\|^2_2 + \\alpha^2 \\gamma \\min_{k} \\| \\mathbf{L} \\mathbf{d} + k^2 \\mathbf{d} \\|^2_F \\\\\n &= \\alpha^2 \\bar{\\theta}(\\mathbf{d},\\mathbf{x})\n \\end{split}\n \\end{equation}\n \n where we note that scaling $\\mathbf{d}$ by $\\alpha > 0$ does not change the optimal value of $k$ in the third term, allowing $\\alpha^2$ to be moved outside of the norm.\n \n \\item $\\bar \\theta(\\mathbf{d}, \\mathbf{x}) \\geq 0, \\ \\forall (\\mathbf{d},\\mathbf{x})$.\n \n \n Clearly, all terms in $\\bar{\\theta}(\\mathbf{d},\\mathbf{x})$ are non-negative, thus, $\\forall (\\mathbf{d},\\mathbf{x})$, we have $\\bar{\\theta}(\\mathbf{d},\\mathbf{x})\\geq 0$.\n \\item For all sequences $(\\mathbf{d}^{(n)},\\mathbf{x}^{(n)})$ such that $\\|\\mathbf{d}^{(n)}(\\mathbf{x}^{(n)})^\\top\\| \\rightarrow \\infty$ then $\\bar \\theta(\\mathbf{d}^{(n)},\\mathbf{x}^{(n)}) \\rightarrow \\infty$.\n \n Here, note that the following is true for all $(\\mathbf{d},\\mathbf{x})$:\n %\n 
\\begin{equation}\n \\|\\mathbf{d} \\mathbf{x}^\\top \\|_F = \\|\\mathbf{d}\\|_2 \\|\\mathbf{x}\\|_2 \\leq \\tfrac{1}{2}(\\|\\mathbf{d}\\|_2^2 + \\|\\mathbf{x}\\|_2^2) \\leq \\tfrac{1}{2}\\left(\\|\\mathbf{d}\\|_2^2 + \\|\\mathbf{x}\\|_2^2 + \\min_{k} \\| \\mathbf{L} \\mathbf{d} + k^2 \\mathbf{d} \\|^2_2\\right)\n \\end{equation}\n As a result we have $\\forall (\\mathbf{d}, \\mathbf{x})$ that $\\|\\mathbf{d}\\mathbf{x}^\\top\\|_F \\leq \\bar \\theta(\\mathbf{d},\\mathbf{x})$, completing the result.\n \n\\end{enumerate}\n\\end{proof}\n\nBefore proving Theorem \\ref{thm:polar} we first prove an intermediate lemma regarding Lipschitz constants.\n\\begin{lemma}\n\\label{lem:L_bound}\nGiven a set of constants $\\lambda_i, \\ i=1, 2, \\ldots$ such that $\\forall i$, $\\mu_\\lambda \\leq \\lambda_i \\leq 0$, and a constant $\\gamma > 0$, let the function $f$ be defined as:\n\\begin{equation}\nf(x) = \\max_i \\frac{1}{\\sqrt{1+\\gamma(x + \\lambda_i)^2}}.\n\\end{equation}\nThen, over the domain $0 \\leq x \\leq \\mu_x$ $f$ is Lipschitz continuous with Lipschitz constant $L_f$ bounded as follows:\n\\begin{equation}\nL_f \\leq \\left[\\begin{cases} \\frac{2}{3\\sqrt{3}} \\sqrt{\\gamma} & \\gamma\\geq \\frac{1}{2 \\max \\{ \\mu_\\lambda^2, \\mu_x^2 \\}} \\\\\n\\gamma \\max \\{ -\\mu_\\lambda, \\mu_x\\} (1+\\gamma \\max \\{\\mu_\\lambda^2, \\mu_x^2\\} )^{-\\tfrac{3}{2}} & \\mathrm{otherwise}\n\\end{cases} \\right] \\leq \\frac{2}{3 \\sqrt{3}}\\sqrt{\\gamma}. \n\\end{equation}\n\\end{lemma}\n\n\n\\begin{proof}\nFirst, note that for any two Lipschitz continuous functions $\\psi_a$ and $\\psi_b$, with associated Lipschitz constants $L_a$ and $L_b$, respectively, one has that the point-wise maximum of the two functions, $\\psi(x) = \\max \\{ \\psi_a(x), \\psi_b(x) \\}$, is also Lipschitz continuous with Lipschitz constant bounded by $\\max \\{ L_a, L_b \\}$. 
This can be easily seen by the following two inequalities:\n\\begin{align}\n&\\psi_a(x') \\leq \\psi_a(x) + | \\psi_a(x') - \\psi_a(x) | \\leq \\psi(x) + | \\psi_a(x') - \\psi_a(x) | \\leq \\psi(x) + L_a |x' - x| \\\\\n&\\psi_b(x') \\leq \\psi_b(x) + | \\psi_b(x') - \\psi_b(x) | \\leq \\psi(x) + | \\psi_b(x') - \\psi_b(x) | \\leq \\psi(x) + L_b |x' - x|\n\\end{align}\nFrom this we have:\n\\begin{equation}\n\\begin{split}\n&\\psi(x') = \\max \\{ \\psi_a(x'), \\psi_b(x') \\} \\leq \\max \\{ \\psi(x) + L_a |x'-x|, \\psi(x) + L_b |x'-x| \\} = \\psi(x) + \\max \\{L_a, L_b \\} |x'-x| \\\\\n&\\implies \\psi(x') - \\psi(x) \\leq \\max \\{ L_a, L_b \\} |x'-x|\n\\end{split}\n\\end{equation}\nwhich implies the claim from symmetry.\n\nNow, if we define the functions $g$ and $h$ as\n\\begin{equation}\ng(x, \\lambda) = \\frac{1}{\\sqrt{1+\\gamma(x + \\lambda)^2}} \\ \\ \\ \\ \\ \\ \\ h(b) = \\gamma b (1+\\gamma b^2)^{-\\tfrac{3}{2}}\n\\end{equation}\nwe have that the Lipschitz constant of $f$, denoted as $L_f$, is bounded as:\n\\begin{equation}\n\\label{eq:h_max}\n\\begin{split}\nL_f \\leq &\\max_i \\sup_{x \\in [0, u_x]} \\left| \\frac{\\partial}{\\partial x} g(x, \\lambda_i) \\right| \\leq \\sup_{\\lambda \\in [\\mu_\\lambda,0]} \\sup_{x \\in [0, \\mu_x]} \\left| \\frac{\\partial}{\\partial x} g(x, \\lambda) \\right| = \\\\\n&\\sup_{\\lambda \\in [\\mu_\\lambda,0]} \\sup_{x \\in [0, \\mu_x]} \\left| - \\frac{\\gamma(x+\\lambda)}{ (\\sqrt{1+\\gamma (x+\\lambda)^2})^3 } \\right|\n= \\sup_{b \\in [\\mu_\\lambda, \\mu_x]} \\left| h(b) \\right| = \\sup_{b \\in [0, \\max \\{ -\\mu_\\lambda, \\mu_x \\} ]} h(b)\n\\end{split}\n\\end{equation}\nWhere the first inequality is from the result above about the Lipschitz constant of the point-wise maximum of two functions and the simple fact that the Lipschitz constant of a function is bounded by the maximum magnitude of its gradient, and the final equality is due to the symmetry of $|h(b)|$ about the origin. Now, finding the critical points of $h(b)$ for non-negative $b$ we have:\n\\begin{equation}\n\\begin{split}\nh'(b) = \\gamma (1+\\gamma b^2)^{-\\tfrac{3}{2}} - 3 \\gamma^2 b^2 (1+\\gamma b^2)^{- \\tfrac{5}{2}} = 0 \n\\implies 3 \\gamma b^2 = (1+ \\gamma b^2) \\implies b^* = \\frac{1}{\\sqrt{2 \\gamma}}\n\\end{split} \n\\end{equation}\nNote that $h(0)=0$, $h(b)>0$ for all $b>0$, and $h'(b)<0$ for all $b > b^*$. As a result, $b^*$ will be a maximizer of $h(b)$ if it is feasible, otherwise the maximum will occur at the extreme point $b = \\max \\{-\\mu_\\lambda, \\mu_x \\}$. 
From this we have the result:\n\\begin{equation}\n\\begin{split}\nL_f &\\leq \\begin{cases} h(b^*) = \\frac{2}{3 \\sqrt{3}} \\sqrt{\\gamma} & \\gamma\\geq \\frac{1}{2 \\max \\{ \\mu_\\lambda^2, \\mu_x^2\\}} \\\\\nh(\\max \\{-\\mu_\\lambda, \\mu_x \\} ) = \\gamma \\max \\{ -\\mu_\\lambda, \\mu_x \\} (1+\\gamma \\max \\{ \\mu_\\lambda^2, \\mu_x^2 \\})^{-\\tfrac{3}{2}} & \\text{otherwise}\n\\end{cases} \\\\\n& \\leq h(b^*) = \\frac{2}{3\\sqrt{3}}\\sqrt{\\gamma}.\n\\end{split}\n\\end{equation}\n\\end{proof}\n\n{\n\\renewcommand{\\thetheorem}{\\ref{thm:polar}}\n\n\\begin{theorem}\nFor the objective in \\eqref{eq:main_obj}, the associated polar problem is equivalent to:\n\\begin{equation}\n\\Omega_\\theta^\\circ (\\mathbf{Z}) = \\max_{\\mathbf{d} \\in \\mathbb{R}^{L}, \\mathbf{x} \\in \\mathbb{R}^{T}, k\\in \\mathbb{R}} \\mathbf{d}^\\top \\mathbf{Z} \\mathbf{x} \\ \\ \\textnormal{s.t.} \\ \\ \\|\\mathbf{d}\\|^2_F + \\gamma \\|\\mathbf{L}\\mathbf{d} + k^2 \\mathbf{d}\\|_F^2 \\leq 1, \\ \\|\\mathbf{x}\\|^2_F \\leq 1, \\ 0 \\leq k \\leq 2.\n\\end{equation}\nFurther, let $\\mathbf{L} = \\Gamma \\Lambda \\Gamma^\\top$ be an eigen-decomposition of $\\mathbf{L}$ and define the matrix $\\mathbf{A}( \\bar k) = \\Gamma(\\mathbf{I} + \\gamma(\\bar k \\mathbf{I} + \\Lambda)^2) \\Gamma^\\top$. Then, if we define $\\bar k^*$ as%\n\\begin{equation}\n\\bar k^* = \\argmax_{\\bar k \\in [0,4]} \\| \\mathbf{A}(\\bar k)^{-1\/2} \\mathbf{Z} \\|_2\n\\label{eq:k_max}\n\\end{equation}\noptimal values of $\\mathbf{d},\\mathbf{x}, k$ are given as $\\mathbf{d}^* = \\mathbf{A}(\\bar k^*)^{-1\/2} \\bar \\mathbf{d}$, $\\mathbf{x}^* = \\bar \\mathbf{x}$, and $k^* = (\\bar k^*)^{1\/2}$ where $(\\bar \\mathbf{d}, \\bar \\mathbf{x})$ are the left and right singular vectors, respectively, associated with the largest singular value of $\\mathbf{A}(\\bar k^*)^{-1\/2} \\mathbf{Z}$. Additionally, the above line search over $\\bar k$ is Lipschitz continuous with a Lipschitz constant, $L_{\\bar k}$, which is bounded by:\n\\begin{equation}\nL_{\\bar k} \\leq \\left[ \\begin{cases} \\frac{2}{3 \\sqrt{3}} \\sqrt{\\gamma} \\|\\mathbf{Z}\\|_2 & \\gamma \\geq \\frac{1}{32} \\\\ \n4 \\gamma (1 + 16 \\gamma)^{-\\tfrac{3}{2}} \\|\\mathbf{Z}\\|_2 & \\mathrm{otherwise} \\end{cases} \\right] \\leq \\frac{2}{3 \\sqrt{3}} \\sqrt{\\gamma} \\|\\mathbf{Z}\\|_2\n\\end{equation}\n\\end{theorem}\n\\addtocounter{theorem}{-1}\n}\n\n\\begin{proof}\nThe polar problem associated with the objectives of the form \\eqref{eq:gen_obj} as given in \\cite{haeffele2019structured} is:\n\n\\begin{gather}\n \\Omega_\\theta^\\circ (\\mathbf{Z}) = \\sup_{\\mathbf{d},\\mathbf{x}} \\mathbf{d}^{\\top} \\mathbf{Z} \\mathbf{x} \\ \\ \\textnormal{s.t.} \\ \\ \\bar \\theta(\\mathbf{d},\\mathbf{x}) \\leq 1 \n\\end{gather}\n\nFor our particular problem, due to the bilinearity between $\\mathbf{d}$ and $\\mathbf{x}$ in the objective the above is equivalent to:\n\\begin{gather}\n \\Omega_\\theta^\\circ (\\mathbf{Z}) = \\sup_{\\mathbf{d},\\mathbf{x}} \\mathbf{d}^{\\top} \\mathbf{Z} \\mathbf{x} \\ \\ \\textnormal{s.t.} \\ \\ \\| \\mathbf{x} \\|^2_2 \\leq 1, \\|\\mathbf{d}\\|^2_2 + \\gamma \\min_{k} \\| \\mathbf{L} \\mathbf{d} + k^2 \\mathbf{d} \\|^2_2 \\leq 1\n\\end{gather}\nNote that this is equivalent to moving the minimization w.r.t. 
$k$ in the regularization constraint to a maximization over $k$:\n \\begin{gather}\n \\Omega_\\theta^\\circ (\\mathbf{Z}) = \\sup_{\\mathbf{d},\\mathbf{x},k} \\mathbf{d}^{\\top} \\mathbf{Z} \\mathbf{x} \\ \\ \\textnormal{s.t.} \\ \\ \\| \\mathbf{x} \\|^2_2 \\leq 1, \\|\\mathbf{d}\\|^2_2 + \\gamma \\| \\mathbf{L} \\mathbf{d} + k^2 \\mathbf{d} \\|^2_2 \\leq 1\n\\end{gather}\n\nNext, note that maximizing w.r.t. $\\mathbf{d}$ while holding $\\mathbf{x}$ and $k$ fixed is equivalent to solving a problem of the form:\n\\begin{align}\n\\max_\\mathbf{d} \\langle \\mathbf{d}, \\mathbf{Z} \\mathbf{x} \\rangle \\ \\ \\textnormal{subject to} \\ \\ \\mathbf{d}^\\top \\mathbf{A} \\mathbf{d} \\leq 1\n\\end{align}\nfor some positive definite matrix $\\mathbf{A}$. If we make the change of variables $\\bar \\mathbf{d} = \\mathbf{A}^{1\/2} \\mathbf{d}$, this then becomes:\n\\begin{align}\n\\max_{\\bar \\mathbf{d}} \\ \\ &\\langle \\bar \\mathbf{d}, \\mathbf{A}^{-1\/2} \\mathbf{Z} \\mathbf{x} \\rangle \\ \\ \\textnormal{subject to} \\ \\ \\| \\bar \\mathbf{d} \\|_2^2 \\leq 1 \n\\end{align}\nwhose optimal value is $\\| \\mathbf{A}^{-1\/2} \\mathbf{Z} \\mathbf{x} \\|_2$, and where the optimal $\\bar \\mathbf{d}$ and $\\mathbf{d}$ are attained at \n\\begin{align}\n\\bar \\mathbf{d}_{opt} &= \\frac{\\mathbf{A}^{-1\/2} \\mathbf{Z} \\mathbf{x}}{\\|\\mathbf{A}^{-1\/2} \\mathbf{Z} \\mathbf{x}\\|_2} \\\\\n\\label{eq:d_opt}\n\\mathbf{d}_{opt} &= \\mathbf{A}^{-1\/2} \\bar \\mathbf{d}_{opt} = \\frac{\\mathbf{A}^{-1} \\mathbf{Z} \\mathbf{x}}{\\|\\mathbf{A}^{-1\/2} \\mathbf{Z} \\mathbf{x}\\|_2}\n\\end{align}\nFor our particular problem, if we make the change of variables $\\bar k = k^2$ we have that $\\mathbf{A}$ is given by:\n\\begin{equation}\n\\mathbf{A}(\\bar k) = (1+\\gamma \\bar k^2) \\mathbf{I} + \\gamma \\mathbf{L}^2 + 2 \\bar k \\gamma \\mathbf{L}\n\\end{equation}\nwhere we have used that $\\mathbf{L}$ is a symmetric matrix. If we let $\\mathbf{L} = \\Gamma \\Lambda \\Gamma^\\top$ be an eigen-decomposition of $\\mathbf{L}$ then we can also represent $\\mathbf{A}(\\bar k)$ and $\\mathbf{A}(\\bar k)^{-1\/2}$ as:\n\\begin{align}\n\\mathbf{A}(\\bar k) &= \\Gamma \\left( (1+\\gamma \\bar k^2) \\mathbf{I} + \\gamma \\Lambda^2 + 2 \\bar k \\gamma \\Lambda \\right) \\Gamma^\\top \\\\\n &= \\Gamma (\\mathbf{I} + \\gamma ( \\bar k \\mathbf{I} + \\Lambda)^2) \\Gamma^\\top \\\\ \n \\label{eq:A12}\n\\mathbf{A}(\\bar k)^{-1\/2} &= \\Gamma (\\mathbf{I} + \\gamma ( \\bar k \\mathbf{I} + \\Lambda)^2)^{-1\/2} \\Gamma^\\top \n\\end{align}\nNow if we substitute back into the original polar problem, we have:\n\\begin{align}\n\\Omega^\\circ(\\mathbf{Z}) &= \\max_{\\mathbf{x}, \\bar k} \\| \\mathbf{A}(\\bar k)^{-1\/2} \\mathbf{Z} \\mathbf{x} \\|_2 \\ \\ \\textnormal{subject to} \\ \\ \\|\\mathbf{x}\\|_2^2 \\leq 1 \\\\\n\\label{eq:k_line}\n&= \\max_{\\bar k} \\| \\mathbf{A}(\\bar k)^{-1\/2} \\mathbf{Z} \\|_2\n\\end{align}\nwhere $\\|\\cdot\\|_2$ denotes the spectral norm (maximum singular value). 
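\n\nFor concreteness, the line search in \\eqref{eq:k_line} is straightforward to evaluate numerically. The following sketch (a minimal illustration assuming \\texttt{numpy} is available; the grid resolution and the variable names are ours and are not part of the formal statement) forms $\\mathbf{A}(\\bar k)^{-1\/2} \\mathbf{Z}$ from the eigen-decomposition of $\\mathbf{L}$ and evaluates the spectral-norm objective on a grid of $\\bar k$ values:\n\\begin{verbatim}\nimport numpy as np\n\ndef polar_line_search(Z, L, gamma, num_grid=401):\n    # Eigen-decomposition of the symmetric matrix L = Gam diag(lam) Gam^T.\n    lam, Gam = np.linalg.eigh(L)\n    best_val, best_k = -np.inf, None\n    for k_bar in np.linspace(0.0, 4.0, num_grid):\n        # Diagonal of (I + gamma (k_bar I + Lambda)^2)^(-1\/2), cf. eq. (A12).\n        w = 1.0 \/ np.sqrt(1.0 + gamma * (k_bar + lam) ** 2)\n        M = Gam @ (w[:, None] * (Gam.T @ Z))   # A(k_bar)^(-1\/2) Z\n        val = np.linalg.norm(M, 2)             # spectral norm of eq. (k_line)\n        if val > best_val:\n            best_val, best_k = val, k_bar\n    return best_val, best_k\n\\end{verbatim}\n\n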
Similarly, for a given $\\bar k$ the optimal $\\mathbf{x}$ is given as the right singular vector of $\\mathbf{A}(\\bar k)^{-1\/2}\\mathbf{Z}$ associated with the largest singular value.\n\nAs a result, we can solve the polar by performing a line search over $\\bar k$; once an optimal $\\bar k^*$ is found, we get $\\mathbf{x}^*$ as the largest right singular vector of $\\mathbf{A}(\\bar k^*)^{-1\/2} \\mathbf{Z}$ and the optimal $\\mathbf{d}^*$ from \\eqref{eq:d_opt} (where $\\mathbf{d}_{opt}$ will be the largest left singular vector of $\\mathbf{A}(\\bar k^*)^{-1\/2} \\mathbf{Z}$ multiplied by $\\mathbf{A}(\\bar k^*)^{-1\/2}$).\n\nNow, an upper bound for $\\bar k$ can be calculated from the fact that the optimal $\\bar k$ is defined using a minimization problem, i.e.\n\\begin{eqnarray}\n \\bar k^* = \\argmin_{\\bar k} \\| \\mathbf{L} \\mathbf{d} + \\bar k \\mathbf{d} \\|^2\n\\end{eqnarray}\nSetting the derivative with respect to $\\bar k$ to zero gives, for any $\\mathbf{d} \\neq \\mathbf{0}$,\n\\begin{eqnarray}\n \\bar k^* = -\\frac{ \\mathbf{d}^\\top \\mathbf{L} \\mathbf{d}} {\\|\\mathbf{d}\\|_2^2}\n \\label{eq:opt_k}\n\\end{eqnarray}\nwhich is non-negative and bounded above by the magnitude of the smallest (most negative) eigenvalue of $\\mathbf{L}$ (note that $\\mathbf{L}$ is negative (semi)definite). We cite the literature on eigenvalues of discrete second derivatives \\cite{chung2000discrete} to note that all eigenvalues of $\\mathbf{L}$ (irrespective of the boundary conditions) lie in the range $[\\frac{-4}{\\Delta l},0]$. Since we specifically chose $\\Delta l = 1$ (without loss of generality), all eigenvalues of $\\mathbf{L}$ lie in the range $[-4,0]$.\n\nAs a result, we only need to consider $\\bar k$ in the range $[0,4]$:\n\\begin{equation}\n\\bar k^* = \\argmax_{\\bar k \\in [0,4]} \\| \\mathbf{A}(\\bar k)^{-1\/2} \\mathbf{Z} \\|_2\n\\end{equation}\n\nFinally, to show the Lipschitz continuity, we define the function:\n\\begin{equation}\nf_\\mathbf{A}(\\bar k) = \\| \\mathbf{A}(\\bar k)^{-1\/2} \\|_2\n\\end{equation}\nand then note the following:\n\\begin{equation}\n\\begin{split}\n& \\left| \\|\\mathbf{A}(\\bar k)^{-1\/2}\\mathbf{Z} \\|_2 - \\| \\mathbf{A}(\\bar k ')^{-1\/2} \\mathbf{Z} \\|_2 \\right| \\\\\n\\leq & \\left\\| \\left( \\mathbf{A}(\\bar k)^{-1\/2} - \\mathbf{A}(\\bar k')^{-1\/2} \\right) \\mathbf{Z} \\right\\|_2 \\\\\n\\leq & \\left\\| \\mathbf{A}(\\bar k)^{-1\/2} - \\mathbf{A}(\\bar k')^{-1\/2} \\right\\|_2 \\left\\| \\mathbf{Z} \\right\\|_2 \\\\\n\\leq & L_\\mathbf{A} |\\bar k - \\bar k'| \\|\\mathbf{Z}\\|_2\n\\end{split}\n\\end{equation}\nwhere the first inequality is simply the reverse triangle inequality, the second inequality is due to the spectral norm being submultiplicative, and $L_\\mathbf{A}$ denotes the Lipschitz constant of $f_\\mathbf{A}(\\bar k)$. From the form of $\\mathbf{A}(\\bar k)^{-1\/2}$ in \\eqref{eq:A12} note that we have:\n\\begin{equation}\nf_\\mathbf{A}(\\bar k) \\equiv \\| \\mathbf{A}(\\bar k)^{-1\/2}\\|_2 = \\max_i \\frac{1}{\\sqrt{1 + \\gamma(\\bar k + \\Lambda_{i,i})^2}}\n\\end{equation}\nso the result is completed by recalling from our discussion above that $\\Lambda_{i,i} \\in [-4, 0], \\forall i$, and applying Lemma \\ref{lem:L_bound} with $\\mu_\\lambda = -4$ and $\\mu_x = 4$.\n\n\\end{proof}\n\n{\n\\renewcommand{\\thecorollary}{\\ref{cor:poly_time}}\n\\begin{corollary}\nAlgorithm \\ref{alg:meta} produces an optimal solution to \\eqref{eq:main_obj} in polynomial time.\n\\end{corollary}\n\\addtocounter{corollary}{-1}\n}\n\\begin{proof}\nThis result follows largely as a corollary of Theorem \\ref{thm:polar} and of what is known in the literature. 
Namely, the authors of \\cite{xu2013block} show that the block coordinate update steps \\eqref{eq:grad_D}, \\eqref{eq:grad_X} and \\eqref{eq:grad_k} in step \\ref{step:grad_desc} of Algorithm \\ref{algoblock:meta-algo} reach a stationary point in polynomial time because the objective function \\eqref{eq:main_obj} is convex w.r.t. each of $\\mathbf{D}$, $\\mathbf{X}$, and $\\mathbf{k}$ when the others are held fixed. Next,\nby Theorem \\ref{thm:polar} the polar problem can be solved in polynomial time. Finally, it has been noted in the literature on structured matrix factorization \\cite{bach2013convex} that the polar update step is equivalent to a generalized conditional gradient step,\nand if the conditional gradient steps (i.e., the polar problem) can be solved exactly (as we show in Theorem \\ref{thm:polar}) then the algorithm converges in a polynomial number of such steps. As a result, because the block coordinate update steps reach a stationary point in polynomial time, we perform a polar update step (i.e., a conditional gradient step) at polynomial time intervals, so the overall algorithm is also guaranteed to converge in polynomial time.\n\n\\end{proof} \n\n\\subsection{Interpreting the Wave-Informed Regularizer as a Bandpass Filter}\n\nNote that when identifying the optimal $\\bar k$ value in the polar program, we solve for\n\\begin{equation}\n\\label{eq:k_opt}\n\\argmax_{\\bar k \\in [0,4]} \\| \\Gamma (\\mathbf{I} + \\gamma ( \\bar k \\mathbf{I} + \\Lambda)^2)^{-1\/2} \\Gamma^\\top \\mathbf{Z} \\|_2 \\; .\n\\end{equation}\nThis optimization has an intuitive interpretation from digital signal processing. Given that $\\Gamma$ contains the eigenvectors of a Toeplitz matrix, those eigenvectors have spectral qualities similar to the discrete Fourier transform (the eigenvectors of a circulant matrix would be exactly the discrete Fourier transform basis). As a result, $\\Gamma^\\top$ transforms the data $\\mathbf{Z}$ into a spectral-like domain and $\\Gamma$ returns the data back to the original domain. Since the other terms are all diagonal matrices, they represent element-wise multiplication across the data in the spectral domain. This is approximately equivalent to a filtering operation, with filter coefficients given by the diagonal entries of $(\\mathbf{I} + \\gamma ( \\bar k \\mathbf{I} + \\Lambda)^2)^{-1\/2}$.\n\nFurthermore, recall that the magnitude response of a 1st-order Butterworth filter is given by:\n\\begin{equation}\nT(\\omega) = \\frac{1}{\\sqrt{1+\\gamma (k_0 + \\omega)^2}}\n\\end{equation}\nwhere $k_0$ is the center frequency of the passband of the filter and $1\/\\sqrt{\\gamma}$ corresponds to the filter's $-3$dB cut-off frequency. Comparing this to the filter coefficients from \\eqref{eq:k_opt} we note that the filter coefficients are identical to those of the 1st-order Butterworth filter, where $\\Lambda$ corresponds to the angular frequencies.\nAs a result, we can consider this optimization as determining the optimal filter center frequency $(\\bar k)$ with fixed bandwidth $(1\/\\sqrt{\\gamma})$ that retains the maximum amount of signal power from $\\mathbf{Z}$. Likewise, the choice of the $\\gamma$ hyperparameter sets the bandwidth of the filter. 
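\n\nAs a purely illustrative sketch of this interpretation (assuming \\texttt{numpy}; the sampled eigenvalues and the choice of $\\bar k$ below are arbitrary examples), the filter coefficients can be tabulated directly from the diagonal of $(\\mathbf{I} + \\gamma ( \\bar k \\mathbf{I} + \\Lambda)^2)^{-1\/2}$:\n\\begin{verbatim}\nimport numpy as np\n\ndef filter_coefficients(lam, k_bar, gamma):\n    # Diagonal of (I + gamma (k_bar I + Lambda)^2)^(-1\/2): a first-order\n    # Butterworth-like magnitude response, peaking where Lambda = -k_bar.\n    return 1.0 \/ np.sqrt(1.0 + gamma * (k_bar + lam) ** 2)\n\nlam = np.linspace(-4.0, 0.0, 9)    # eigenvalues of L lie in [-4, 0]\nfor gamma in (1.0, 10.0, 100.0):\n    print(gamma, np.round(filter_coefficients(lam, 2.0, gamma), 3))\n\\end{verbatim}\nThe coefficient equals one where $\\Lambda_{i,i} = -\\bar k$ and decays away from that point at a rate set by $\\gamma$, consistent with the passband picture above.\n\n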
As $\\gamma \\rightarrow \\infty$, the filter bandwidth approaches 0 and thereby restricts us to a single-frequency (i.e., Fourier) solution.\nFurthermore, we can provide a recommended lower bound for $\\gamma$ according to $\\gamma > 1\/k_{bw}^2$, where $k_{bw}$ is the bandwidth of the signal within this spectral-like domain.\n\n\n\\section{Additional Results}\n\n\\subsection{Characterizing Multi-Segment Transmission Lines}\n\n For the simulation considered in \\textbf{Characterizing Multi-Segment Transmission Lines} in \\S~\\ref{sec:results}, Figure \\ref{fig:Merge_2} shows two example columns for $\\gamma = 50$ and $\\lambda = 0.6$. We show in Figure \\ref{fig:polar_objective} the reduction of the objective value over iterations, the rate of change of the objective value per iteration, and the value of the polar after each iteration of the overall meta-algorithm. Figures~\\ref{fig:workflow_a}, \\ref{fig:workflow_b}, \\ref{fig:workflow_c} show curves similar to Figure~\\ref{fig:polar_objective} but for different choices of the regularization parameters.\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.75]{figs\/polar_obj_50_lamb_0_6.png}\n \\caption{Objective value, rate of change of the objective value per iteration, and the value of the polar after each iteration of the meta-algorithm for the simulation in subsection \\textbf{Characterizing Multi-Segment Transmission Lines} of \\S \\ref{sec:results}.}\n \\label{fig:polar_objective}\n\\end{figure}\n\nWe observe in Figure \\ref{fig:polar_objective} that the change of the objective value is almost zero after 70 iterations. At this point, the polar value also goes below 1.1 (which, as we note in the main paper, also implies that we are close to the global minimum) and eventually reaches 1, providing a certificate of global optimality.\n\nIn practice, we often stop the algorithm once the polar value drops below 1.1, as this guarantees a solution that is very close to optimal.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.95\\textwidth]{figs\/polar_obj_50_lamb_1_6.png}\n \\caption{Objective value, rate of change of the objective value per iteration, and the value of the polar after each iteration of the meta-algorithm for the simulation in \\textbf{Characterizing Multi-Segment Transmission Lines} of \\S \\ref{sec:results} with regularization parameters $\\gamma = 50$, $\\lambda = 1.6$.}\n \\label{fig:workflow_a}\n\\end{figure}\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.95\\textwidth]{figs\/polar_obj_50_lamb_3_7.png}\n \\caption{Objective value, rate of change of the objective value per iteration, and the value of the polar after each iteration of the meta-algorithm for the simulation in \\textbf{Characterizing Multi-Segment Transmission Lines} of \\S \\ref{sec:results} with regularization parameters $\\gamma = 50$, $\\lambda = 3.7$.}\n \\label{fig:workflow_b}\n\\end{figure}\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.95\\textwidth]{figs\/polar_obj_50_lamb_5.png}\n \\caption{Objective value, rate of change of the objective value per iteration, and the value of the polar after each iteration of the meta-algorithm for the simulation in \\textbf{Characterizing Multi-Segment Transmission Lines} of \\S \\ref{sec:results} with regularization parameters $\\gamma = 50$, $\\lambda = 5$.}\n \\label{fig:workflow_c}\n\\end{figure}\n\nIn addition to these optimization results, we also show additional examples of columns of $\\mathbf{D}$ for different values of $\\gamma$ ($500$ and $5000$) with $\\lambda$ fixed at $0.6$, to show how the columns of $\\mathbf{D}$ vary with $\\gamma$.
\n\nIn Figures~\\ref{dictatoms_multi_a} and~\\ref{dictatoms_multi_b}, we observe that with increasing $\\gamma$, the ability of the algorithm to distinguish the two regions degrades: the energy in a dictionary column is no longer confined to the segment containing the same material but is instead distributed over the entire column. This can also be observed quantitatively by comparing the partitioned normalized energies (the energy of each normalized column of $\\mathbf{D}$ on each of the two segments, partitioned by the violet line) in Figures~\\ref{energies_a} and~\\ref{energies_b} with Figures~\\ref{dictatoms_multi_a} and~\\ref{dictatoms_multi_b}.\n\n\\begin{figure}[ht!]\n \\centering\n \\includegraphics[width=0.65\\textwidth]{figs\/D_diff_strings_lr_gamma_500_lam_0_6.png}\n \\caption{Two columns of $\\mathbf{D}$ obtained from wave-informed matrix factorization when $\\gamma = 500$, $\\lambda = 0.6$.}\n \\label{dictatoms_multi_a}\n\\end{figure}\n\n\\begin{figure}[ht!]\n \\centering\n \\includegraphics[width=0.65\\textwidth]{figs\/D_diff_strings_lr_gamma_5000_lam_0_6.png}\n \\caption{Two columns of $\\mathbf{D}$ obtained from wave-informed matrix factorization when $\\gamma = 5000$, $\\lambda = 0.6$.}\n \\label{dictatoms_multi_b}\n\\end{figure}\n\n\n\\begin{figure}[ht!]\n \\centering\n \\includegraphics[width=0.65\\textwidth]{figs\/energies_gamma_500_lamb_0_6.png}\n \\caption{Partitioned normalized energies of the two columns of $\\mathbf{D}$ obtained from wave-informed matrix factorization when $\\gamma = 500$, $\\lambda = 0.6$.}\n \\label{energies_a}\n\\end{figure}\n\n\\begin{figure}[ht!]\n \\centering\n \\includegraphics[width=0.65\\textwidth]{figs\/energies_gamma_5000_lamb_0_6.png}\n \\caption{Partitioned normalized energies of the two columns of $\\mathbf{D}$ obtained from wave-informed matrix factorization when $\\gamma = 5000$, $\\lambda = 0.6$.}\n \\label{energies_b}\n\\end{figure}\n\n\n\\subsection{A Vibrating String}\nFor the experiment described in subsection \\textbf{A Fixed Vibrating String} of \\S \\ref{sec:results}, we show the squared Frobenius norm of the difference between the actual matrix $\\mathbf{D}$ and the one estimated by the algorithm in Table \\ref{table:wave_1}, Table \\ref{table:wave_2}, and Table \\ref{table:wave_3} (for different scenarios). We can calculate the percentage error as\n\\begin{gather}\n \\% \\textnormal{error} = \\frac{\\| \\hat{\\mathbf{D}} - \\mathbf{D} \\|_F}{\\|\\mathbf{D}\\|_F} \\times 100\n\\end{gather}\nwhere $\\hat{\\mathbf{D}}$ is the matrix estimated from the algorithm after permuting the columns to optimally align with the ground-truth $\\mathbf{D}$, zero-padding $\\mathbf{D}$ or $\\hat{\\mathbf{D}}$ (whichever has fewer columns) so that the two matrices have the same number of columns, and normalizing the non-zero columns of $\\hat{\\mathbf{D}}$ to have unit $\\ell_2$ norm.\n\nSince the matrix is normalized and the actual $\\mathbf{D}$ matrix contains 10 columns (corresponding to the 10 modes in the simulated data), we know that $\\| \\mathbf{D} \\|_F = 10$. We define a squared error $\\| \\hat{\\mathbf{D}} - \\mathbf{D} \\|^2_F$ of the order of $0.01$ ($1 \\%$) to be desirable, and we bold errors of this order in the tables.
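\n\nFor reference, the sketch below shows one way this error metric might be computed (assuming \\texttt{numpy} and \\texttt{scipy} are available; the alignment of columns via a linear assignment over squared distances, and all variable names, are illustrative choices on our part):\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import linear_sum_assignment\n\ndef percent_error(D_true, D_hat):\n    # Zero-pad whichever matrix has fewer columns.\n    n = max(D_true.shape[1], D_hat.shape[1])\n    D_true = np.pad(D_true, ((0, 0), (0, n - D_true.shape[1])))\n    D_hat = np.pad(D_hat, ((0, 0), (0, n - D_hat.shape[1])))\n    # Normalize the non-zero columns of the estimate to unit l2 norm.\n    norms = np.linalg.norm(D_hat, axis=0)\n    D_hat = D_hat \/ np.where(norms > 0, norms, 1.0)\n    # Permute the estimated columns to best align with the ground truth.\n    cost = ((D_true[:, :, None] - D_hat[:, None, :]) ** 2).sum(axis=0)\n    rows, cols = linear_sum_assignment(cost)\n    D_hat = D_hat[:, cols]\n    return 100.0 * np.linalg.norm(D_hat - D_true) \/ np.linalg.norm(D_true)\n\\end{verbatim}\n\n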
\n\nTable~\\ref{table:wave_1} describes the performance as a function of the regularization parameters for data of the form \\eqref{eqn:wave_equ} with amplitude of order 10 and noise variance of $0.1$. We see that a wide range of values of $\\gamma$ and $\\lambda$ satisfies our error criterion.\n\nTable~\\ref{table:wave_2} describes the performance as a function of the regularization parameters for the damped vibrations described by \\eqref{eqn:damped_wave_time} with amplitude of order 10 and noise variance of $0.1$. We see that the error values satisfy our chosen criterion only beyond a particular value of the regularization $\\gamma$ ($10^5$) and only for a limited range of $\\lambda$. The range of $\\lambda$ that satisfies the chosen error condition also shrinks with increasing values of $\\gamma$.\n\nTable~\\ref{table:wave_3} describes the performance as a function of the regularization parameters for data of the form \\eqref{eqn:wave_equ} with amplitude of order 10 and noise variance of 10. We observe that only a narrow range of regularization values satisfies the error criterion, so the effect of noise is clearly visible in this case. Based on our observations, we believe that large regularization values overfit to the noise with high-frequency components, resulting in poor performance, while low regularization values do not enforce enough of the wave constraint, resulting in noisy components.\n\nIn Figures \\ref{fig:accommodations_a} and \\ref{fig:accommodations_b}, we show columns of $\\mathbf{D}$ obtained from low-rank factorization and wave-informed matrix factorization with a signal amplitude of order $10$ and noise variance of $10$. For the low-rank factorization, we choose $\\lambda = 2592$, and for wave-informed matrix factorization we choose $\\gamma = 10^7$ and $\\lambda = 2200$ ($\\gamma$ and $\\lambda$ are chosen corresponding to the minimum value in Table~\\ref{table:wave_3}). The columns of $\\mathbf{D}$ obtained from low-rank matrix factorization in Figure~\\ref{fig:accommodations_a} are noisy and not always decipherable as sinusoids. 
In contrast, columns of $\\mathbf{D}$ obtained from wave-informed matrix factorization in Figure \\ref{fig:accommodations_b} on the other hand are completely noiseless and have a clearly measurable wavenumber.\n\nFor the case of \\eqref{eqn:damped_wave_space}, when $R=6$, we show an approximate recovery of all the modes in Figure~\\ref{fig:damped_sines_all}.\n\n\\setlength{\\arrayrulewidth}{0.65mm}\n\\setlength{\\tabcolsep}{4pt}\n\\begin{table}[H]\n\n\\captionof{table}{Squared Frobenius norm of the difference between actual $\\mathbf{D}$ and the one obtained from Wave-informed matrix factorization with noise of variance 0.1 and data of the form \\eqref{eqn:wave_equ}}\n\\begin{tabular}{@{}|r|r|r|r|r|r|r|r|r|r|r|r|r|@{}}\n\\hline\n\\multicolumn{1}{|c|}{} & \\multicolumn{12}{|c|}{Regularization $\\left(\\gamma\\right)$} \\\\\n\\hline\n\\multicolumn{1}{|l|}{$\\lambda$} & \\multicolumn{1}{l|}{$10^1$} & \\multicolumn{1}{l|}{$10^{2}$} & \\multicolumn{1}{l|}{$10^3$} & \\multicolumn{1}{l|}{$10^4$} & \\multicolumn{1}{l|}{$10^5$} & \\multicolumn{1}{l|}{$10^6$} & \\multicolumn{1}{l|}{$10^7$} & \\multicolumn{1}{l|}{$10^8$} & \\multicolumn{1}{l|}{$10^9$} & \\multicolumn{1}{l|}{$10^{10}$} & \\multicolumn{1}{l|}{$10^{11}$} & \\multicolumn{1}{l|}{$10^{12}$} \\\\ \\hline\n200 & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0038} & \\cellcolor[HTML]{FFD666}\\textbf{0.0035} & \\cellcolor[HTML]{FFDC7F}1.0023 & \\cellcolor[HTML]{FFDC7F}1.0013 & \\cellcolor[HTML]{FFDC7F}1.0013 & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} \\\\ \\hline\n300 & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0038} & \\cellcolor[HTML]{FFD666}\\textbf{0.0035} & \\cellcolor[HTML]{FFDC7F}1.0023 & \\cellcolor[HTML]{FFDC7F}1.0013 & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} \\\\ \\hline\n400 & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0038} & \\cellcolor[HTML]{FFD666}\\textbf{0.0035} & \\cellcolor[HTML]{FFEAB4}3.0598 & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} \\\\ \\hline\n500 & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0038} & \\cellcolor[HTML]{FFDC7F}1.0035 & \\cellcolor[HTML]{FFEAB3}3.0438 & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} \\\\ \\hline\n600 & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & 
\\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0038} & \\cellcolor[HTML]{FFD666}\\textbf{0.0035} & \\cellcolor[HTML]{FFEAB3}3.0342 & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} \\\\ \\hline\n700 & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0038} & \\cellcolor[HTML]{FFD666}\\textbf{0.0035} & \\cellcolor[HTML]{FFE399}2.0015 & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFDC7F}1.0011 \\\\ \\hline\n800 & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0038} & \\cellcolor[HTML]{FFD666}\\textbf{0.0035} & \\cellcolor[HTML]{FFDC7F}1.0014 & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFDC7F}1.0011 \\\\ \\hline\n900 & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0038} & \\cellcolor[HTML]{FFDC7F}1.0035 & \\cellcolor[HTML]{FFDC7F}1.0013 & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFDC7F}1.0011 \\\\ \\hline\n1000 & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0038} & \\cellcolor[HTML]{FFDC7F}1.0035 & \\cellcolor[HTML]{FFDC7F}1.0013 & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFE398}2.0009 \\\\ \\hline\n1100 & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0038} & \\cellcolor[HTML]{FFDC7F}1.0035 & \\cellcolor[HTML]{FFDC7F}1.0014 & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFEAB2}3.0009 \\\\ \\hline\n1400 & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0038} & \\cellcolor[HTML]{FFDC7F}1.0035 & \\cellcolor[HTML]{FFEAB2}2.9963 & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFF8E5}5.0005 \\\\ \\hline\n1800 & 
\\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0038} & \\cellcolor[HTML]{FFDC7F}1.0035 & \\cellcolor[HTML]{FFDC7F}1.0019 & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFFEFE}6.0004 \\\\ \\hline\n2000 & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0038} & \\cellcolor[HTML]{FFDC7F}1.0035 & \\cellcolor[HTML]{FFDC7F}1.0018 & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFDC7F}1.0011 & \\cellcolor[HTML]{FFFEFE}6.0004 \\\\ \\hline\n2100 & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0038} & \\cellcolor[HTML]{FFDC7F}\\textbf{1.0035} & \\cellcolor[HTML]{FFDC7F}1.0017 & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFDC7F}1.0011 & \\cellcolor[HTML]{FFFEFE}6.0004 \\\\ \\hline\n2200 & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0038} & \\cellcolor[HTML]{FFDC7F}\\textbf{1.0035} & \\cellcolor[HTML]{FFDC7F}1.0017 & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFDC7F}1.0011 & \\cellcolor[HTML]{FFFEFE}6.0004 \\\\ \\hline\n\\end{tabular}\n\\label{table:wave_1}\n\\end{table}\n\n\n\\setlength{\\arrayrulewidth}{0.65mm}\n\\setlength{\\tabcolsep}{4pt}\n\\begin{table}[H]\n\\captionof{table}{Squared Frobenius norm of the difference between actual $\\mathbf{D}$ and the one obtained from Wave-informed matrix factorization with noise of variance 0.1 and data of the form \\eqref{eqn:damped_wave_time}}\n\\begin{tabular}{@{}|r|r|r|r|r|r|r|r|r|r|r|r|r|@{}}\n\\hline\n\\multicolumn{1}{|c|}{} & \\multicolumn{12}{|c|}{Regularization $\\left(\\gamma\\right)$} \\\\\n\\hline\n\\multicolumn{1}{|l|}{$\\lambda$} & \\multicolumn{1}{l|}{$10^1$} & \\multicolumn{1}{l|}{$10^{2}$} & \\multicolumn{1}{l|}{$10^3$} & \\multicolumn{1}{l|}{$10^4$} & \\multicolumn{1}{l|}{$10^5$} & \\multicolumn{1}{l|}{$10^6$} & \\multicolumn{1}{l|}{$10^7$} & \\multicolumn{1}{l|}{$10^8$} & \\multicolumn{1}{l|}{$10^9$} & \\multicolumn{1}{l|}{$10^{10}$} & \\multicolumn{1}{l|}{$10^{11}$} & \\multicolumn{1}{l|}{$10^{12}$} \\\\ \\hline\n200 & \\cellcolor[HTML]{FFD76C}0.2846 & \\cellcolor[HTML]{FFD76C}0.2845 & \\cellcolor[HTML]{FFD76C}0.2829 & \\cellcolor[HTML]{FFD76B}0.2655 & \\cellcolor[HTML]{FFD669}0.1606 & \\cellcolor[HTML]{FFD667}\\textbf{0.0521} & \\cellcolor[HTML]{FFD666}\\textbf{0.0165} & \\cellcolor[HTML]{FFD666}\\textbf{0.0018} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFDB7B}1.0011 & \\cellcolor[HTML]{FFE7A7}3.0006 \\\\ \\hline\n300 & \\cellcolor[HTML]{FFD76C}0.2846 & 
\\cellcolor[HTML]{FFD76C}0.2845 & \\cellcolor[HTML]{FFD76C}0.2829 & \\cellcolor[HTML]{FFD76B}0.2655 & \\cellcolor[HTML]{FFD669}0.1606 & \\cellcolor[HTML]{FFD667}\\textbf{0.0521} & \\cellcolor[HTML]{FFD666}\\textbf{0.0165} & \\cellcolor[HTML]{FFD666}\\textbf{0.0018} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFDB7B}1.0011 & \\cellcolor[HTML]{FFEDBD}4.0005 \\\\ \\hline\n400 & \\cellcolor[HTML]{FFD76C}0.2846 & \\cellcolor[HTML]{FFD76C}0.2845 & \\cellcolor[HTML]{FFD76C}0.2829 & \\cellcolor[HTML]{FFD76B}0.2655 & \\cellcolor[HTML]{FFD669}0.1606 & \\cellcolor[HTML]{FFD667}\\textbf{0.0521} & \\cellcolor[HTML]{FFD666}\\textbf{0.0165} & \\cellcolor[HTML]{FFD666}\\textbf{0.0018} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFDB7B}1.0011 & \\cellcolor[HTML]{FFF3D3}5.0005 \\\\ \\hline\n500 & \\cellcolor[HTML]{FFD76C}0.2846 & \\cellcolor[HTML]{FFD76C}0.2845 & \\cellcolor[HTML]{FFD76C}0.2829 & \\cellcolor[HTML]{FFD76B}0.2655 & \\cellcolor[HTML]{FFD669}0.1606 & \\cellcolor[HTML]{FFD667}\\textbf{0.0521} & \\cellcolor[HTML]{FFD666}\\textbf{0.0165} & \\cellcolor[HTML]{FFD666}\\textbf{0.0018} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFDB7B}1.0011 & \\cellcolor[HTML]{FFF9E9}6.0004 \\\\ \\hline\n600 & \\cellcolor[HTML]{FFD76C}0.2846 & \\cellcolor[HTML]{FFD76C}0.2845 & \\cellcolor[HTML]{FFD76C}0.2829 & \\cellcolor[HTML]{FFD76B}0.2655 & \\cellcolor[HTML]{FFD669}0.1606 & \\cellcolor[HTML]{FFD667}\\textbf{0.0521} & \\cellcolor[HTML]{FFD666}\\textbf{0.0165} & \\cellcolor[HTML]{FFD666}\\textbf{0.0018} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFE7A7}3.0006 & \\cellcolor[HTML]{FFF9E9}6.0004 \\\\ \\hline\n700 & \\cellcolor[HTML]{FFD76C}0.2846 & \\cellcolor[HTML]{FFD76C}0.2845 & \\cellcolor[HTML]{FFD76C}0.2829 & \\cellcolor[HTML]{FFD76B}0.2655 & \\cellcolor[HTML]{FFD669}0.1606 & \\cellcolor[HTML]{FFD667}\\textbf{0.0521} & \\cellcolor[HTML]{FFD666}\\textbf{0.0165} & \\cellcolor[HTML]{FFD666}\\textbf{0.0018} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFE7A7}3.0006 & \\cellcolor[HTML]{FFF9E9}6.0004 \\\\ \\hline\n800 & \\cellcolor[HTML]{FFD76C}0.2846 & \\cellcolor[HTML]{FFD76C}0.2845 & \\cellcolor[HTML]{FFD76C}0.2829 & \\cellcolor[HTML]{FFD76B}0.2655 & \\cellcolor[HTML]{FFD669}0.1606 & \\cellcolor[HTML]{FFD667}\\textbf{0.0521} & \\cellcolor[HTML]{FFD666}\\textbf{0.0165} & \\cellcolor[HTML]{FFD666}\\textbf{0.0018} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFDB7B}1.0011 & \\cellcolor[HTML]{FFEDBD}4.0005 & \\cellcolor[HTML]{FFF9E9}6.0004 \\\\ \\hline\n900 & \\cellcolor[HTML]{FFD76C}0.2846 & \\cellcolor[HTML]{FFD76C}0.2845 & \\cellcolor[HTML]{FFD76C}0.2829 & \\cellcolor[HTML]{FFD76B}0.2655 & \\cellcolor[HTML]{FFD669}0.1606 & \\cellcolor[HTML]{FFD667}\\textbf{0.0521} & \\cellcolor[HTML]{FFD666}\\textbf{0.0165} & \\cellcolor[HTML]{FFD666}\\textbf{0.0018} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFDB7B}1.0011 & \\cellcolor[HTML]{FFEDBD}4.0005 & \\cellcolor[HTML]{FFF9E9}6.0004 \\\\ \\hline\n1000 & \\cellcolor[HTML]{FFD76C}0.2846 & \\cellcolor[HTML]{FFD76C}0.2845 & \\cellcolor[HTML]{FFD76C}0.2829 & \\cellcolor[HTML]{FFD76B}0.2655 & \\cellcolor[HTML]{FFD669}0.1606 & \\cellcolor[HTML]{FFD667}\\textbf{0.0521} & \\cellcolor[HTML]{FFD666}\\textbf{0.0165} & 
\\cellcolor[HTML]{FFD666}\\textbf{0.0018} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFDB7B}1.0011 & \\cellcolor[HTML]{FFEDBD}4.0005 & \\cellcolor[HTML]{FFF9E9}6.0004 \\\\ \\hline\n1100 & \\cellcolor[HTML]{FFDD82}1.2836 & \\cellcolor[HTML]{FFDD82}1.2835 & \\cellcolor[HTML]{FFDD81}1.2819 & \\cellcolor[HTML]{FFDD81}1.2644 & \\cellcolor[HTML]{FFDC7F}1.1596 & \\cellcolor[HTML]{FFDC7C}1.0513 & \\cellcolor[HTML]{FFDB7C}1.0162 & \\cellcolor[HTML]{FFDB7B}1.0016 & \\cellcolor[HTML]{FFDB7B}1.0011 & \\cellcolor[HTML]{FFDB7B}1.0011 & \\cellcolor[HTML]{FFEDBD}4.0005 & \\cellcolor[HTML]{FFF9E9}6.0004 \\\\ \\hline\n1400 & \\cellcolor[HTML]{FFE397}2.2794 & \\cellcolor[HTML]{FFE397}2.2793 & \\cellcolor[HTML]{FFE397}2.2777 & \\cellcolor[HTML]{FFE397}2.2602 & \\cellcolor[HTML]{FFE295}2.1554 & \\cellcolor[HTML]{FFE192}2.0478 & \\cellcolor[HTML]{FFE192}2.0152 & \\cellcolor[HTML]{FFE191}2.0013 & \\cellcolor[HTML]{FFE191}2.0008 & \\cellcolor[HTML]{FFEDBD}4.0003 & \\cellcolor[HTML]{FFFEFE}7.0002 & \\cellcolor[HTML]{FFFEFE}7.0002 \\\\ \\hline\n1800 & \\cellcolor[HTML]{FFEDBE}4.0460 & \\cellcolor[HTML]{FFEDBE}4.0460 & \\cellcolor[HTML]{FFEDBE}4.0460 & \\cellcolor[HTML]{FFEDBE}4.0459 & \\cellcolor[HTML]{FFEDBE}4.0452 & \\cellcolor[HTML]{FFEDBE}4.0387 & \\cellcolor[HTML]{FFEDBD}4.0127 & \\cellcolor[HTML]{FFEDBD}4.0008 & \\cellcolor[HTML]{FFEDBD}4.0003 & \\cellcolor[HTML]{FFEDBD}4.0003 & \\cellcolor[HTML]{FFFEFE}7.0002 & \\cellcolor[HTML]{FFFEFE}7.0002 \\\\ \\hline\n2000 & \\cellcolor[HTML]{FFEDBE}4.0460 & \\cellcolor[HTML]{FFEDBE}4.0460 & \\cellcolor[HTML]{FFEDBE}4.0460 & \\cellcolor[HTML]{FFEDBE}4.0459 & \\cellcolor[HTML]{FFEDBE}4.0452 & \\cellcolor[HTML]{FFEDBE}4.0387 & \\cellcolor[HTML]{FFEDBD}4.0127 & \\cellcolor[HTML]{FFEDBD}4.0008 & \\cellcolor[HTML]{FFEDBD}4.0003 & \\cellcolor[HTML]{FFF3D3}5.0002 & \\cellcolor[HTML]{FFFEFE}7.0002 & \\cellcolor[HTML]{FFFEFE}7.0002 \\\\ \\hline\n2100 & \\cellcolor[HTML]{FFEDBE}4.0460 & \\cellcolor[HTML]{FFEDBE}4.0460 & \\cellcolor[HTML]{FFEDBE}4.0460 & \\cellcolor[HTML]{FFEDBE}4.0459 & \\cellcolor[HTML]{FFEDBE}4.0452 & \\cellcolor[HTML]{FFEDBE}4.0387 & \\cellcolor[HTML]{FFEDBD}4.0127 & \\cellcolor[HTML]{FFEDBD}4.0008 & \\cellcolor[HTML]{FFEDBD}4.0003 & \\cellcolor[HTML]{FFF3D3}5.0002 & \\cellcolor[HTML]{FFFEFE}7.0002 & \\cellcolor[HTML]{FFFEFE}7.0002 \\\\ \\hline\n2200 & \\cellcolor[HTML]{FFEDBE}4.0460 & \\cellcolor[HTML]{FFEDBE}4.0460 & \\cellcolor[HTML]{FFEDBE}4.0460 & \\cellcolor[HTML]{FFEDBE}4.0459 & \\cellcolor[HTML]{FFEDBE}4.0452 & \\cellcolor[HTML]{FFEDBE}4.0387 & \\cellcolor[HTML]{FFEDBD}4.0127 & \\cellcolor[HTML]{FFEDBD}4.0008 & \\cellcolor[HTML]{FFEDBD}4.0003 & \\cellcolor[HTML]{FFF3D3}5.0002 & \\cellcolor[HTML]{FFFEFE}7.0002 & \\cellcolor[HTML]{FFFEFE}7.0002 \\\\ \\hline\n\\end{tabular}\n\\label{table:wave_2}\n\\end{table}\n\n\n\\setlength{\\arrayrulewidth}{0.65mm}\n\\setlength{\\tabcolsep}{4pt}\n\n\\begin{table}[H]\n\\captionof{table}{Squared Frobenius norm of the difference between actual $\\mathbf{D}$ and the one obtained from Wave-informed matrix factorization with noise of variance 10 and data of the form \\eqref{eqn:wave_equ}}\n\\begin{tabular}{@{}|r|r|r|r|r|r|r|r|r|r|r|r|r|@{}}\n\\hline\n\\multicolumn{1}{|c|}{} & \\multicolumn{12}{|c|}{Regularization $\\left(\\gamma\\right)$} \\\\\n\\hline\n\\multicolumn{1}{|l|}{$\\lambda$} & \\multicolumn{1}{l|}{$10^1$} & \\multicolumn{1}{l|}{$10^{2}$} & \\multicolumn{1}{l|}{$10^3$} & \\multicolumn{1}{l|}{$10^4$} & \\multicolumn{1}{l|}{$10^5$} & \\multicolumn{1}{l|}{$10^6$} & \\multicolumn{1}{l|}{$10^7$} & 
\\multicolumn{1}{l|}{$10^8$} & \\multicolumn{1}{l|}{$10^9$} & \\multicolumn{1}{l|}{$10^{10}$} & \\multicolumn{1}{l|}{$10^{11}$} & \\multicolumn{1}{l|}{$10^{12}$} \\\\ \\hline\n200 & \\cellcolor[HTML]{FFFEFE}20.220 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.216 & \\cellcolor[HTML]{FFFEFE}20.202 & \\cellcolor[HTML]{FFFEFE}20.141 & \\cellcolor[HTML]{FFFEFD}20.063 & \\cellcolor[HTML]{FFFEFD}20.022 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFE6A2}7.991 \\\\ \\hline\n300 & \\cellcolor[HTML]{FFFEFE}20.220 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.216 & \\cellcolor[HTML]{FFFEFE}20.202 & \\cellcolor[HTML]{FFFEFE}20.141 & \\cellcolor[HTML]{FFFEFD}20.063 & \\cellcolor[HTML]{FFFEFD}20.022 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFE49A}6.991 \\\\ \\hline\n400 & \\cellcolor[HTML]{FFFEFE}20.220 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.216 & \\cellcolor[HTML]{FFFEFE}20.202 & \\cellcolor[HTML]{FFFEFE}20.141 & \\cellcolor[HTML]{FFFEFD}20.063 & \\cellcolor[HTML]{FFFEFD}20.022 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFE49A}6.991 \\\\ \\hline\n500 & \\cellcolor[HTML]{FFFEFE}20.220 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.216 & \\cellcolor[HTML]{FFFEFE}20.202 & \\cellcolor[HTML]{FFFEFE}20.141 & \\cellcolor[HTML]{FFFEFD}20.063 & \\cellcolor[HTML]{FFFEFD}20.022 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFE49A}6.991 \\\\ \\hline\n600 & \\cellcolor[HTML]{FFFEFE}20.220 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.216 & \\cellcolor[HTML]{FFFEFE}20.202 & \\cellcolor[HTML]{FFFEFE}20.141 & \\cellcolor[HTML]{FFFEFD}20.063 & \\cellcolor[HTML]{FFFEFD}20.022 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFEAB1}10.001 & \\cellcolor[HTML]{FFE49A}6.991 \\\\ \\hline\n700 & \\cellcolor[HTML]{FFFEFE}20.220 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.216 & \\cellcolor[HTML]{FFFEFE}20.202 & \\cellcolor[HTML]{FFFEFE}20.141 & \\cellcolor[HTML]{FFFEFD}20.063 & \\cellcolor[HTML]{FFFEFD}20.022 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFE293}6.001 & \\cellcolor[HTML]{FFE49A}6.991 \\\\ \\hline\n800 & \\cellcolor[HTML]{FFFEFE}20.220 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.216 & \\cellcolor[HTML]{FFFEFE}20.202 & \\cellcolor[HTML]{FFFEFE}20.141 & \\cellcolor[HTML]{FFFEFD}20.063 & \\cellcolor[HTML]{FFFEFD}20.022 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFE293}6.001 & \\cellcolor[HTML]{FFE49A}6.991 \\\\ \\hline\n900 & \\cellcolor[HTML]{FFFEFE}20.220 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.216 & \\cellcolor[HTML]{FFFEFE}20.202 & \\cellcolor[HTML]{FFFEFE}20.141 & \\cellcolor[HTML]{FFFEFD}20.063 & \\cellcolor[HTML]{FFFEFD}20.022 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFE293}6.001 & 
\\cellcolor[HTML]{FFE49A}6.991 \\\\ \\hline\n1000 & \\cellcolor[HTML]{FFFEFE}20.220 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.216 & \\cellcolor[HTML]{FFFEFE}20.202 & \\cellcolor[HTML]{FFFEFE}20.141 & \\cellcolor[HTML]{FFFEFD}20.063 & \\cellcolor[HTML]{FFFEFD}20.022 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFE293}6.001 & \\cellcolor[HTML]{FFE49A}6.991 \\\\ \\hline\n1100 & \\cellcolor[HTML]{FFFEFE}20.220 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.216 & \\cellcolor[HTML]{FFFEFE}20.202 & \\cellcolor[HTML]{FFFEFE}20.141 & \\cellcolor[HTML]{FFFEFD}20.063 & \\cellcolor[HTML]{FFFEFD}20.022 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFE293}6.001 & \\cellcolor[HTML]{FFDD83}3.991 \\\\ \\hline\n1400 & \\cellcolor[HTML]{FFFEFE}20.220 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.216 & \\cellcolor[HTML]{FFFEFE}20.202 & \\cellcolor[HTML]{FFFEFE}20.141 & \\cellcolor[HTML]{FFFEFD}20.063 & \\cellcolor[HTML]{FFFEFD}20.022 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFE8A9}9.001 & \\cellcolor[HTML]{FFE293}6.001 & \\cellcolor[HTML]{FFE292}5.978 \\\\ \\hline\n1800 & \\cellcolor[HTML]{FFFEFE}20.220 & \\cellcolor[HTML]{FFFEFE}20.218 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.216 & \\cellcolor[HTML]{FFFEFE}20.202 & \\cellcolor[HTML]{FFFEFE}20.141 & \\cellcolor[HTML]{FFFEFD}20.063 & \\cellcolor[HTML]{FFFEFD}20.022 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFE49A}7.001 & \\cellcolor[HTML]{FFE293}6.001 & \\cellcolor[HTML]{FFE49A}6.978 \\\\ \\hline\n2000 & \\cellcolor[HTML]{FFFEFE}20.220 & \\cellcolor[HTML]{FFFEFE}20.218 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFEAB3}10.216 & \\cellcolor[HTML]{FFDC7D}3.202 & \\cellcolor[HTML]{FFD666}0.141 & \\cellcolor[HTML]{FFD86D}1.063 & \\cellcolor[HTML]{FFDE84}4.021 & \\cellcolor[HTML]{FFDE83}4.001 & \\cellcolor[HTML]{FFDE83}4.001 & \\cellcolor[HTML]{FFDB7C}3.001 & \\cellcolor[HTML]{FFE293}6.000 \\\\ \\hline\n2100 & \\cellcolor[HTML]{FFFEFE}20.220 & \\cellcolor[HTML]{FFFEFE}20.218 & \\cellcolor[HTML]{FFDE85}4.219 & \\cellcolor[HTML]{FFD667}0.216 & \\cellcolor[HTML]{FFD667}0.202 & \\cellcolor[HTML]{FFD666}0.141 & \\cellcolor[HTML]{FFD666}\\textbf{0.063} & \\cellcolor[HTML]{FFDC7C}3.021 & \\cellcolor[HTML]{FFDB7C}3.001 & \\cellcolor[HTML]{FFDB7C}3.001 & \\cellcolor[HTML]{FFD974}2.001 & \\cellcolor[HTML]{FFE293}6.000 \\\\ \\hline\n2200 & \\cellcolor[HTML]{FFFEFE}20.220 & \\cellcolor[HTML]{FFDA76}2.219 & \\cellcolor[HTML]{FFD667}0.219 & \\cellcolor[HTML]{FFD667}0.216 & \\cellcolor[HTML]{FFD667}0.202 & \\cellcolor[HTML]{FFD666}0.141 & \\cellcolor[HTML]{FFD666}\\textbf{0.063} & \\cellcolor[HTML]{FFDC7C}3.021 & \\cellcolor[HTML]{FFDB7C}3.001 & \\cellcolor[HTML]{FFDB7C}3.001 & \\cellcolor[HTML]{FFD974}2.001 & \\cellcolor[HTML]{FFE293}6.000 \\\\ \\hline\n2300 & \\cellcolor[HTML]{FFE49C}7.220 & \\cellcolor[HTML]{FFD667}0.219 & \\cellcolor[HTML]{FFD667}0.219 & \\cellcolor[HTML]{FFD667}0.216 & \\cellcolor[HTML]{FFD667}0.202 & \\cellcolor[HTML]{FFD666}0.141 & \\cellcolor[HTML]{FFD666}\\textbf{0.063} & \\cellcolor[HTML]{FFDC7C}3.021 & \\cellcolor[HTML]{FFDB7C}3.001 & \\cellcolor[HTML]{FFDB7C}3.001 & \\cellcolor[HTML]{FFD974}2.001 & \\cellcolor[HTML]{FFE293}6.000 \\\\ \\hline\n2500 & \\cellcolor[HTML]{FFD667}0.220 & \\cellcolor[HTML]{FFD667}0.219 & \\cellcolor[HTML]{FFD667}0.219 & 
\\cellcolor[HTML]{FFD667}0.216 & \\cellcolor[HTML]{FFD667}0.202 & \\cellcolor[HTML]{FFD666}0.141 & \\cellcolor[HTML]{FFD666}\\textbf{0.063} & \\cellcolor[HTML]{FFDC7C}3.021 & \\cellcolor[HTML]{FFDB7C}3.001 & \\cellcolor[HTML]{FFDB7C}3.001 & \\cellcolor[HTML]{FFD974}2.001 & \\cellcolor[HTML]{FFE293}6.000 \\\\ \\hline\n2700 & \\cellcolor[HTML]{FFD667}0.220 & \\cellcolor[HTML]{FFD667}0.219 & \\cellcolor[HTML]{FFD667}0.219 & \\cellcolor[HTML]{FFD667}0.216 & \\cellcolor[HTML]{FFD667}0.202 & \\cellcolor[HTML]{FFD666}0.141 & \\cellcolor[HTML]{FFD666}\\textbf{0.063} & \\cellcolor[HTML]{FFDC7C}3.021 & \\cellcolor[HTML]{FFDB7C}3.001 & \\cellcolor[HTML]{FFDB7C}3.001 & \\cellcolor[HTML]{FFD974}2.001 & \\cellcolor[HTML]{FFE293}6.000 \\\\ \\hline\n3000 & \\cellcolor[HTML]{FFD667}0.220 & \\cellcolor[HTML]{FFD667}0.219 & \\cellcolor[HTML]{FFD667}0.219 & \\cellcolor[HTML]{FFD667}0.216 & \\cellcolor[HTML]{FFD667}0.202 & \\cellcolor[HTML]{FFD666}0.141 & \\cellcolor[HTML]{FFD666}\\textbf{0.063} & \\cellcolor[HTML]{FFDC7C}3.021 & \\cellcolor[HTML]{FFDB7C}3.001 & \\cellcolor[HTML]{FFDB7C}3.001 & \\cellcolor[HTML]{FFD974}2.001 & \\cellcolor[HTML]{FFE293}6.000 \\\\ \\hline\n3200 & \\cellcolor[HTML]{FFD667}0.220 & \\cellcolor[HTML]{FFD667}0.219 & \\cellcolor[HTML]{FFD667}0.219 & \\cellcolor[HTML]{FFD667}0.216 & \\cellcolor[HTML]{FFD667}0.202 & \\cellcolor[HTML]{FFD666}0.141 & \\cellcolor[HTML]{FFD666}\\textbf{0.063} & \\cellcolor[HTML]{FFDC7C}3.021 & \\cellcolor[HTML]{FFDB7C}3.001 & \\cellcolor[HTML]{FFDB7C}3.001 & \\cellcolor[HTML]{FFD974}2.001 & \\cellcolor[HTML]{FFE293}6.000 \\\\ \\hline\n\\end{tabular}\n\\label{table:wave_3}\n\\end{table}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.5\\textwidth]{figs\/dict_atoms_low_rank.png}\n \\caption{ \\label{fig:accommodations_a}Example columns of $\\mathbf{D}$ obtained from Low-rank matrix factorization $\\lambda = 2952$. }\n \\label{fig:my_label}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.5\\textwidth]{figs\/dict_atoms_gamma_50_lambda_0_6.png}\n \\caption{\\label{fig:accommodations_b}Example columns of $\\mathbf{D}$ obtained from our Wave-informed decomposition with $\\gamma = 10^7$, $\\lambda = 2200$.}\n\\end{figure}\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[trim=15 10 10 15, clip, width=1.1\\linewidth]{figs\/dampled_sines_all.png}\n \\caption{Recovered modes (rows) of spatially damped sinusoids ({\\color{A}{\\textbf{A}}}), ({\\color{B}{\\textbf{B}}}), ({\\color{C}{\\textbf{C}}}), ({\\color{D}{\\textbf{D}}}), ({\\color{E}{\\textbf{E}}}), ({\\color{F}{\\textbf{F}}}), ({\\color{G}{\\textbf{G}}}) when $R=6$.} \\label{fig:damped_sines_all}\n \\vspace{-5mm}\n\\end{figure}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:intro}\nThe different disciplines in physics often evolve in isolation from\neach other, developing their own formalisms and terminologies. This\ndifference in concepts, language, and notations hampers the mutual\nunderstanding and deepens the divisions. It has the\neffect of obscuring the circumstance that the different disciplines\nmay have much more in common than meets the eye, and that much of what\nseems to be different is of cultural and not of intrinsic origin. 
\n\nIn the present paper we highlight the non-trivial circumstance that\nthe mathematical symmetries and structures that govern the Mueller\nmatrices in polarization theory are the same as those of the electromagnetic tensor in Maxwell\ntheory and the Lorentz transformations in relativistic physics, all of\nwhich obey the same Lorentz algebra. Absorption, electric\nfields, and Lorentz boosts are governed by the symmetric part of the\nrespective transformation matrix, while \ndispersion, magnetic fields, and spatial rotations are represented by\nthe antisymmetric part. These two aspects can be\nunified in terms of complex-valued matrices, where\nthe symmetric and antisymmetric aspects represent the real and\nimaginary parts, respectively. \n\nWe can extend this comparison by introducing a Minkowski metric for a\n4D space spanned by the four Stokes parameters $I,Q,U,V$. In this\ndescription Stokes vectors that represent 100\\,\\%\\ polarization\nare null vectors, while partial depolarization causes the Stokes\nvector to lie inside the null cones like the energy-momentum vectors\nof massive particles in ordinary spacetime. This comparison points to an analogy\nbetween depolarization (which can be seen as a symmetry breaking) and\nthe appearance of mass. Another interesting property is that, in\ncontrast to electromagnetism and Lorentz transformations, Stokes\nvectors and Mueller matrices have the rotational symmetry of spin-2\nobjects because they have substructure: they\nare formed from bilinear products of spin-1 objects. Here we try to\nexpose these potentially profound connections and discuss their\nmeanings. \n\nWe start in Sect.~\\ref{sec:emanalog} by showing the\nrelations between the Stokes formalism in polarization physics and the\ncovariant formulation of the Maxwell theory of electromagnetism. After\nclarifying the symmetries it is shown how the introduction of\ncomplex-valued matrices leads to an elegant, unified formulation. In\nSect.~\\ref{sec:lorentz} we show how the Lorentz transformations with\nits boosts and spatial rotations have the same structure. The\nintroduction of a Minkowski metric for polarization space in\nSect.~\\ref{sec:depolmass} reveals a\nnull cone structure for Stokes vectors that represent fully polarized\nlight. It further brings out an analogy between depolarization caused by the\nincoherent superposition of fields and the appearance of what may be\ninterpreted as a mass term. In Sect.~\\ref{sec:spin2} we highlight the\ncircumstance that the Stokes vectors and Mueller matrices in\npolarization physics have the symmetry of spin-2 objects, because they\nhave substructure, being formed from bilinear products of vector\nobjects. Section \\ref{sec:conc} summarizes the conclusions. \n\n\n\\section{The electro-magnetic analogy}\\label{sec:emanalog}\nLet $\\vec{S}_\\nu$ be the 4D Stokes vector for frequency\n$\\nu$. Explicitly, in terms of its transposed form (with superscript\n$T$) and omitting index $\\nu$ for clarity of notation, \n$\\vec{S}^T\\!\\equiv (S_0,\\,S_1,\\,S_2,\\,S_3) \\equiv (I,\\,Q,\\,U,\\,V)$. 
The equation for\nthe transfer of polarized radiation can then be written as \n\\begin{equation}\\label{eq:transeq}\n{{\\rm d}\\vec{S}_\\nu\\over{\\rm d}\\tau_c}=(\\vec{\\eta}+\\vec{I})\\,\\vec{S}_\\nu\n\\,-\\vec{j}_\\nu\/\\kappa_c\\,,\n\\end{equation}\nwhere $\\tau_c$ is the continuum optical depth, $\\vec{I}$ is the\n$4\\times 4$ identity matrix (representing continuum absorption),\n$\\kappa_c$ is the continuum absorption coefficient, $\\vec{j}_\\nu$ is\nthe emission 4-vector, while the Mueller absorption matrix\n$\\vec{\\eta}$ that represents the polarized processes due to the\natomic line transitions is \n\\begin{equation}\\label{eq:etamat} \n\\vec{\\eta}=\\left(\\matrix{\\,\\,\\eta_I &\\phantom{-}\\eta_Q &\\phantom{-}\\eta_U&\n\\phantom{-}\\eta_V\\,\\,\\cr \\,\\,\\eta_Q&\\phantom{-}\\eta_I&\n\\phantom{-}\\rho_V&-\\rho_U\\,\\,\\cr \\,\\,\\eta_U&-\\rho_V&\\phantom{-}\\eta_I&\\phantom{-}\n\\rho_Q\\,\\,\\cr \\,\\eta_V&\\phantom{-}\\rho_U&-\\rho_Q&\n\\phantom{-}\\eta_I\\,\\, }\\right)\\,.\n\\end{equation}\n For a detailed account of Stokes vector polarization theory with its\nnotations and terminology we refer to the monographs of\n\\citet{stenflo-book94} and \\citet{stenflo-lanlan04}. \n\nWhile $\\eta_{I,Q,U,V}$ represents the absorption terms for the four\nStokes parameters $I,Q,U,V$, the differential phase shifts, generally\nreferred to as anomalous dispersion or\nmagnetooptical effects, are represented by the $\\rho_{Q,U,V}$\nterms. They are formed, respectively, from the imaginary and the real\npart of the complex refractive index that is induced when the atomic\nmedium interacts with the radiation field. \n\nWe notice that $\\vec{\\eta}$ can be expressed as the sum of two\nmatrices: a symmetric matrix that only contains the $\\eta$ terms, and\nan anti-symmetric matrix that only contains the $\\rho$\nterms. Antisymmetric matrices represent spatial rotations, as will be\nparticularly clear in Sect.~\\ref{sec:lorentz} when comparing with\nLorentz transformations. \n\nThese symmetries turn out to be identical to the symmetries that\ngovern both the Maxwell theory for electromagnetism and the Lorentz\ntransformation of the metric in relativity. Here we will compare these\nvarious fields of physics to cast light on the intriguing\nconnections. We begin with electromagnetism. \n\nThe analogy between the Stokes formalism and electromagnetism has\npreviously been pointed out by \\citet{stenflo-lanlan81} using a 3+1\n(space + time) formulation rather than the covariant formulation in\nMinkowski spacetime. It is however only with the covariant formulation that\nthe correspondences become strikingly transparent. 
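\n\nPurely as a numerical illustration (the short sketch below assumes \\texttt{numpy}, and the coefficient values are arbitrary placeholders rather than results of any calculation), the matrix of Eq.~(\\ref{eq:etamat}) can be assembled from given $\\eta$ and $\\rho$ coefficients and split into the symmetric (absorption) and antisymmetric (dispersion) parts discussed above:\n\\begin{verbatim}\nimport numpy as np\n\ndef mueller_absorption(eta_I, eta_Q, eta_U, eta_V, rho_Q, rho_U, rho_V):\n    # Mueller absorption matrix of Eq. (etamat).\n    return np.array([\n        [eta_I,  eta_Q,  eta_U,  eta_V],\n        [eta_Q,  eta_I,  rho_V, -rho_U],\n        [eta_U, -rho_V,  eta_I,  rho_Q],\n        [eta_V,  rho_U, -rho_Q,  eta_I]])\n\neta = mueller_absorption(1.0, 0.3, 0.1, 0.2, 0.05, 0.02, 0.04)\nsym  = 0.5 * (eta + eta.T)    # contains only the eta (absorption) terms\nasym = 0.5 * (eta - eta.T)    # contains only the rho (dispersion) terms\nprint(np.allclose(sym + asym, eta))\n\\end{verbatim}\n\n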
\n\nThe Lorentz electromagnetic force law when written in covariant 4D\nform is \n\\begin{equation}\\label{eq:forcelaw} \n{{\\rm d}p^{\\,\\alpha}\\over\\!\\!{\\rm d}\\tau}={e\\over\n m}\\,F^{\\,\\alpha}_{\\phantom{\\alpha}\\beta}\\,\\,p^{\\,\\beta}\\,,\n\\end{equation}\nwhere $p^{\\,\\alpha}$ are the components of the contravariant energy-momentum\n4-vector, $e$ and $m$ are the electric charge and mass of the\nparticle, and $F^{\\,\\alpha}_{\\phantom{\\alpha}\\beta}$ are the\ncomponents of the electromagnetic tensor, which has the representation\nof a $4\\times 4$ matrix: \n\\begin{equation}\\label{eq:fabmat} \nF^{\\,\\alpha}_{\\phantom{\\alpha}\\beta}\\,=\\left(\\matrix{\\,\\,0&\\phantom{-}E_x&\\phantom{-}E_y&\n\\phantom{-}E_z\\,\\,\\cr \\,\\,E_x&\\phantom{-}0&\n\\phantom{-}B_z&-B_y\\,\\,\\cr \\,\\,E_y&-B_z&\\phantom{-}0&\\phantom{-}\nB_x\\,\\,\\cr \\,\\,E_z&\\phantom{-}B_y&-B_x&\n\\phantom{-}0\\,\\, }\\right)\\,.\n\\end{equation}\n A comparison between Eqs.~(\\ref{eq:etamat}) and (\\ref{eq:fabmat})\nimmediately reveals the correspondence between absorption $\\eta$ and the electric\n$E$ field on the one hand and between anomalous dispersion $\\rho$ and the magnetic $B$\nfield on the other hand: \n\\begin{eqnarray}\\label{eq:emconnect} \n&\\eta_{Q,U,V}\\,\\,\\longleftrightarrow \\,\\, E_{x,y,z}\\nonumber\\\\\n&\\,\\,\\rho_{Q,U,V}\\,\\,\\longleftrightarrow \\,\\, B_{x,y,z}\\,.\n\\end{eqnarray}\n\nLet us follow up this structural comparison in terms of a unified\ndescription, where absorption and dispersion are combined into a\ncomplex-valued absorption. $\\eta$ is proportional to the Voigt\nfunction $H(a,v_q)$, with $\\eta_0$ as the proportionality constant, $a$\nthe dimensionless damping parameter, and $v_q$ the dimensionless\nwavelength or frequency parameter. Index $q$, with $q=0,\\pm 1$,\nindicates the differential shift of the wavelength scale for atomic\ntransitions with magnetic quantum number $m_{\\rm lower}-m_{\\rm\n upper}=q$. Similarly $\\rho$ is proportional \nto $2F(a,v_q)$, where $F$ is the line dispersion function. \n\\begin{eqnarray}\\label{eq:etarho} \n&\\,\\,\\,\\eta_{I,Q,U,V}=\\,\\,\\eta_0 \\,H_{I,Q,U,V}\\,,\\nonumber\\\\\n&\\rho_{Q,U,V}=2\\eta_0 \\,F_{Q,U,V}\\,.\n\\end{eqnarray}\nIn the unified description the $H$ and $F$ functions are combined into\nthe complex-valued \n\\begin{equation}\\label{eq:hcal}\n{\\cal H}(a,v_q)\\equiv H(a,v_q)-2i\\,F(a,v_q)\\,, \n\\end{equation}\nwhich now represents the building blocks when forming the\ncorresponding quantities with indices $I,Q,U,V$ to refer to the\nrespective Stokes parameters. ${\\cal H}_{I,Q,U,V}$ can be combined\ninto the 4-vector \n\\begin{equation}\\label{eq:hcalvect} \n\\vec{{\\cal H}}\\,\\equiv\\left(\\matrix{\\,\\,{\\cal H}_I\\,\\,\\cr \\,\\,{\\cal H}_Q\\,\\,\\cr \\,\\,{\\cal H}_U\\,\\,\\cr \\,\\,{\\cal H}_V\\,\\, }\\right)\\,\\equiv\\,\\left(\\matrix{\\,\\,{\\cal H}_0\\,\\,\\cr \\,\\,{\\cal H}_1\\,\\,\\cr \\,\\,{\\cal H}_2\\,\\,\\cr \\,\\,{\\cal H}_3\\,\\, }\\right)\\,.\n\\end{equation}\n From the above follows how ${\\cal H}_k$ is related to and unifies the\ncorresponding absorption and dispersion parameters $\\eta_k$ and $\\rho_k$: \n\\begin{equation}\\label{eq:completa}\n\\eta_k -i\\,\\rho_k =\\,\\eta_0\\,{\\cal H}_k\\,. 
\n\\end{equation}\n\nLet us next define the three symmetric matrices $\\vec{K}^{(k)}$ and three\nantisymmetric matrices $\\vec{J}^{(k)}$ through \n\\begin{eqnarray}\\label{eq:ikj} \n&\\vec{K}^{(k)}_{0j}&\\!\\!\\!\\equiv\\,\\, \\vec{K}^{(k)}_{j\\,0}\\,\\,\\equiv 1\\,,\\nonumber\\\\\n&\\vec{J}^{(k)}_{ij}&\\!\\!\\!\\equiv\\, -\\vec{J}^{(k)}_{j\\,i}\\equiv-\\,\\varepsilon_{ij\\,k}\\,,\n\\end{eqnarray}\nwhere $\\varepsilon_{ij\\,k}$ is the Levi-Civita antisymmetric symbol. \nWe further define the complex-valued matrix \n\\begin{equation}\\label{eq:xmatdef}\n\\vec{T}^{(k)} \\equiv \\vec{K}^{(k)} -i\\,\\vec{J}^{(k)} \\,. \n\\end{equation}\nThen the Mueller matrix from Eq.~(\\ref{eq:etamat}) becomes \n\\begin{equation}\\label{eq:etacompmat} \n\\vec{\\eta} -\\eta_I \\vec{I} =\\,\\eta_0\\,{\\rm Re}\\,(\\,{\\cal\n H}_k\\,\\vec{T}^{(k)} \\,)\\,. \n\\end{equation}\n\nLet us similarly define the complex electromagnetic vector \n\\begin{equation}\\label{eq:complemvec} \n\\vec{\\cal E} \\equiv \\vec{E}-i\\,\\vec{B}\\,, \n\\end{equation}\nwhich in quantum mechanics represents photons with positive\nhelicity. Then the electromagnetic tensor\n$F^{\\,\\alpha}_{\\phantom{\\alpha}\\beta}$ of Eq.~(\\ref{eq:fabmat}) can be\nwritten as \n\\begin{equation}\\label{eq:compemtensor} \n\\vec{F}\\,=\\,{\\rm Re}\\,(\\,{\\cal\n E}_k\\,\\vec{T}^{(k)} \\,)\\,. \n\\end{equation}\nComparison with Eq.~(\\ref{eq:etacompmat}) again brings out the\nstructural correspondence between the Mueller matrix and the\nelectromagnetic tensor, this time in a more concise and compact form. It also\nshows how the electric and magnetic fields are inseparably linked, as\nthe real and imaginary parts of the same complex vector. \n\nIt may be argued that the structural similarity between the Mueller\nand electromagnetic formalisms is not unexpected, since the underlying\nphysics that governs the Mueller matrices is the electromagnetic\ninteractions between matter and radiation. The atomic transitions are\ninduced by the oscillating electromagnetic force of the ambient\nradiation field when it interacts with the atomic electrons, and this\ninteraction is governed (in the classical description) by the force\nlaw of Eq.~(\\ref{eq:forcelaw}) with its electromagnetic tensor. This\nis however not the whole story, since there are also profound\ndifferences: As we will see in Sect.~\\ref{sec:spin2} Stokes vectors and Mueller\nmatrices behave like spin-2 objects, while the electromagnetic tensor\nis a spin-1 object. Another interesting aspect in the comparison\nbetween Stokes vectors and the energy-momentum 4-vector is that\ndepolarization of Stokes vectors acts as if the corresponding 4-vector\nhas acquired ``mass'', as will be shown in Sect.~\\ref{sec:depolmass}. Before we turn\nto these topics we will in the next section show the correspondence\nbetween the Mueller matrix and the Lorentz transformation matrix. \n\n\n\\section{Lorentz transformations and the Mueller absorption\n matrix}\\label{sec:lorentz} \n\nLet $\\vec{X}$ be the spacetime 4-vector: \n\\begin{equation}\\label{eq:xvect} \n\\vec{X}\\,\\equiv\\,\\left(\\matrix{\\,\\,ct\\,\\,\\cr \\,\\,x\\,\\,\\cr \\,\\,y\\,\\,\\cr \\,\\,z\\,\\, }\\right)\\,\\equiv\\,\\left(\\matrix{\\,\\,x_0\\,\\,\\cr \\,\\,x_1\\,\\,\\cr \\,\\,x_2\\,\\,\\cr \\,\\,x_3\\,\\, }\\right)\\,.\n\\end{equation}\n With the Lorentz transformation $\\vec{\\Lambda}$ we transfer to a new\nsystem $\\vec{X}^\\prime$: \n\\begin{equation}\\label{eq:xprimex} \n\\vec{X}^\\prime =\\vec{\\Lambda}\\,\\vec{X}\\,. 
\n\\end{equation}\n$\\vec{\\Lambda}$ represents rotations in Minkowski space, composed of\nthree spatial rotations $\\phi_k$ and three boosts $\\gamma_k$, which\nmay be regarded as imaginary rotations. Let us combine them into complex\nrotation parameters $\\alpha_k$ through \n\\begin{equation}\\label{eq:compalphagamphi} \n\\alpha_k\\,\\equiv \\,\\gamma_k + i\\,\\phi_k\\,. \n\\end{equation}\nThen the Lorentz transformation $\\vec{\\Lambda}$ can be written as \n\\begin{equation}\\label{eq:explorentz} \n\\vec{\\Lambda}\\equiv e^{\\vec{V}}\\,, \n\\end{equation}\nwhere \n\\begin{equation}\\label{eq:vlormat} \n\\vec{V}\\,=\\,{\\rm Re}\\,(\\,\\alpha_k\\,\\vec{T}^{(k)} \\,)\\,=\\gamma_k\\,\n\\vec{K}^{(k)}+\\,\\phi_k \\,\\vec{J}^{(k)}\\,. \n\\end{equation}\nExplicitly, \n\\begin{equation}\\label{eq:lorentzmi}\n\\vec{V}\\,= \\left(\\matrix{\\,\\,0&\\phantom{-}\\gamma_x&\\phantom{-}\\gamma_y&\n\\phantom{-}\\gamma_z\\,\\,\\cr \\,\\,\\gamma_x&\\phantom{-}0&\n-\\phi_z&\\phantom{-}\\phi_y\\,\\,\\cr \\,\\,\\gamma_y&\\phantom{-}\\phi_z&\\phantom{-}0&-\\phi_x\\,\\,\\cr\n\\,\\,\\gamma_z&-\\phi_y&\\phantom{-}\\phi_x& \n\\phantom{-}0\\,\\, }\\right)\\,. \n\\end{equation}\n Note that in quantum field theory a convention for the definition of\nthe $\\vec{K}$ and $\\vec{J}$ matrices that define the Lorentz algebra\nis used, which differs by the\nfactor of the imaginary unit $i$ from the convention of Eq.~(\\ref{eq:ikj}) used here, in\norder to make $\\vec{K}$ anti-hermitian and $\\vec{J}$ hermitian\n\\citep{stenflo-zee2010}. \n\nComparing with the Mueller matrix and the electromagnetic tensor we see the correspondence \n\\begin{eqnarray}\\label{eq:lorconnect} \n&\\eta_k\\,\\,\\longleftrightarrow \\,\\, \\,\\,\\,\\gamma_k\\nonumber\\,\\,\\,\\longleftrightarrow \\,\\, \\,\\,E_k\\\\\n&\\,\\,\\rho_k\\,\\,\\longleftrightarrow -\\, \\phi_k\\,\\,\\,\\longleftrightarrow \\,\\, \\,\\,B_k\\,.\n\\end{eqnarray}\nThe minus sign in front of $\\phi_k$ is only due to the convention\nadopted for defining the sense of rotations and is therefore\nirrelevant for the following discussion of the physical meaning of the\nstructural correspondence. \n\nRelations (\\ref{eq:lorconnect}) show how the Lorentz boosts\n$\\gamma_k$, which change the energy and momentum of the boosted object, relate to\nboth absorption $\\eta_k$ and electric field $E_k$, while the spatial\nrotations $\\phi_k$, which do not affect the energy but change the\nphase of the rotated object, relate to both dispersion or phase shift \neffects $\\rho_k$ and to magnetic fields $B_k$. \n\n\n\\section{Analogy between depolarization and the emergence of\n mass}\\label{sec:depolmass}\nThe structural correspondence between the $4\\times 4$ Mueller\nabsorption matrix, the matrix representation of the covariant\nelectromagnetic tensor, and the Lorentz transformation matrix suggests\nthat there may be a deeper analogy or connection between the 4D Stokes vector\nspace and 4D spacetime. Let us therefore see what happens when we\nintroduce the Minkowski metric to the Stokes vector formalism. The\nusual notation for the Minkowski metric is $\\eta_{\\mu\\nu}$, but to\navoid confusion with the absorption matrix $\\vec{\\eta}$ that we have\nbeen referring to in the present paper, we will use the notation\n$\\vec{g}$ or $g_{\\mu\\nu}$ that is generally reserved for a general\nmetric, but here we implicitly assume that we are only dealing with \ninertial frames, in which $g_{\\mu\\nu}=\\eta_{\\mu\\nu}$. 
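To make the correspondence in Eq.~(\\ref{eq:lorconnect}) explicit, the following Python/NumPy sketch (an illustration only, assuming SciPy is available; the boost, rotation and field values are arbitrary) constructs the generators of Eq.~(\\ref{eq:ikj}), the matrices $\\vec{T}^{(k)}$ of Eq.~(\\ref{eq:xmatdef}), the Lorentz transformation of Eqs.~(\\ref{eq:explorentz}) and (\\ref{eq:vlormat}) via a matrix exponential, and the electromagnetic tensor of Eq.~(\\ref{eq:compemtensor}): 
\\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def lorentz_generators():
    # Eq. (ikj): K^(k) couples the time row/column to axis k
    # (symmetric); J^(k)_{ij} = -eps_{ijk} on the spatial block
    # (antisymmetric).  Index order: 0=t, 1=x, 2=y, 3=z.
    eps = np.zeros((3, 3, 3))
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
    eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0
    K = np.zeros((3, 4, 4))
    J = np.zeros((3, 4, 4))
    for k in range(3):
        K[k, 0, k + 1] = K[k, k + 1, 0] = 1.0
        J[k, 1:, 1:] = -eps[:, :, k]
    return K, J

K, J = lorentz_generators()
T = K - 1j * J                       # Eq. (xmatdef)

# Lorentz transformation Lambda = exp(V), Eqs. (explorentz), (vlormat),
# for arbitrary illustrative boosts gamma and rotations phi.
gamma = np.array([0.3, 0.0, 0.0])
phi = np.array([0.0, 0.0, 0.2])
V = np.einsum('k,kab->ab', gamma, K) + np.einsum('k,kab->ab', phi, J)
Lambda = expm(V)

# Electromagnetic tensor, Eq. (compemtensor): F = Re(E_k T^(k)) with
# the complex vector E - iB of Eq. (complemvec); the Mueller matrix of
# Eq. (etacompmat) follows by using eta_0 H_k in place of E_k.
E = np.array([1.0, 0.5, 0.0])
B = np.array([0.0, 0.0, 2.0])
F = np.real(np.einsum('k,kab->ab', E - 1j * B, T))
\\end{verbatim}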
\n\nAssume that $\\vec{I}_\\nu =\\vec{S}_\\nu$ is the 4D Stokes vector, with\nits transpose being $\\vec{I}_\\nu^T\n=(I_\\nu,\\,Q_\\nu,\\,U_\\nu,\\,V_\\nu)$. The scalar product in Minkowski\nspace is then \n\\begin{equation}\\label{eq:itetai}\n\\vec{I}_\\nu^T\n\\vec{g}\\,\\vec{I}_\\nu\\,=\\,I_\\nu^2\\,-\\,(\\,Q_\\nu^2\\,+\\,U_\\nu^2\\,+\\,V_\\nu^2\\,)\\,, \n\\end{equation}\nwhich also represents the squared length of the Stokes vector in\nMinkowski space. \n\nWe know from polarization physics that the right-hand side of\nEq.~(\\ref{eq:itetai}) is always $\\geq 0$, and equals zero only when the\nlight beam is 100\\,\\%\\ (elliptically) polarized. Such fully polarized,\npure or coherent states are thus represented by null vectors, in\nexactly the same way as the energy-momentum 4-vectors $\\vec{p}$ of\nmassless particles are also null vectors on the surface of null\ncones. The energy-momentum vectors of massive particles live inside\nthe null cones. Similarly the Stokes vectors live inside and not on\nthe surface of null cones only if the light is not fully but partially\npolarized. \n\nThis comparison raises the question of whether there is some deeper\nconnection between depolarization and the appearance of mass. In\npolarization physics all individual (coherent) wave packages are\n100\\,\\%\\ polarized, and any coherent superposition of such wave\npackages is also fully polarized. Partial polarization occurs\nexclusively as a result of the {\\it incoherent} superposition of\ndifferent, uncorrelated wave packages. In such cases it is customary to\nrepresent the intensity $I_\\nu$, which represents the energy or the\nnumber of photons carried by the beam, as consisting of two parts, one\nfraction $p_\\nu$ that is fully polarized, and one fraction with\nintensity $I_{\\nu,\\,u}$ that is unpolarized, with transposed Stokes\nvector $I_{\\nu,\\,u}\\,(1,\\,0,\\,0,\\,0)$: \n\\begin{equation}\\label{eq:unpoldecomp}\nI=p_\\nu\\,I \\,+\\,I_u \\,, \n\\end{equation}\nwhere we have omitted index $\\nu$ for simplicity except for $p_\\nu$ (to\ndistinguish it from the momentum vector $p$ below). This fractional\npolarization $p_\\nu$ is \n\\begin{equation}\\label{eq:polfrac} \np_\\nu={I-I_u\\over I}=\\,{(Q^2 +U^2 +V^2)^{1\/2}\\over I}\\,. \n\\end{equation}\n\nIn comparison, in particle physics, the scalar product for the 4D\nenergy-momentum vector $\\vec{p}$ is \n\\begin{equation}\\label{eq:ptgp} \n\\vec{p}^T\\vec{g}\\,\\vec{p}\\,=\\,m^2\\,c^2\\,, \n\\end{equation}\nfrom which the well-known relativistic energy-momentum relation \n\\begin{equation}\\label{eq:ptetap} \nE^2=p^2\\,c^2\\,+\\,m^2\\,c^4 \n\\end{equation}\nfollows. While the emergence of the mass term corresponds to the\nemergence of the unpolarized component $I_u$,\nEqs.~(\\ref{eq:unpoldecomp}) and (\\ref{eq:ptetap}) look different,\nbecause the decomposition in Eq.~(\\ref{eq:unpoldecomp}) has been done\n for the unsquared intensity $I$, while in Eq.~(\\ref{eq:ptetap}) it\n is in terms of the squared components. Since we have the freedom\n to choose different ways to mathematically decompose a quantity,\n this difference is not of particular physical significance. \n\nIn current quantum field theories (QFT) the emergence of mass requires\nthe spontaneous breaking of the gauge symmetry, for which the Higgs\nmechanism has been invented. It is postulated that all of space is\npermeated by a ubiquitous Higgs field, which when interacting with the\nfield of a massless particle breaks the symmetry. When the particle\ngets moved to the non-symmetric state it acquires mass. 
Because the phases\nof the Higgs field and the field of the initially massless particle\nare uncorrelated, the superposition of the fields is incoherent, which\nmay be seen as one reason for the breaking of the symmetry. \n\nIn polarization physics the emergence of depolarization may also be\ninterpreted as a symmetry breaking, caused by the incoherent\nsuperposition of different wave fields. Incoherence means that the\nphases of the superposed fields are uncorrelated, which has the result that\nthe interference terms, all of which are needed to retain the symmetry,\nvanish. \n\n\n\\section{Stokes vectors as spin-2 objects}\\label{sec:spin2}\nAn object with spin $s$ varies with angle of rotation $\\theta$ as\n$s\\,\\theta$. For $s=\\textstyle{1\\over 2}$ one has to rotate $4\\pi$\nradians to return to the original state, for $s=2$ one only needs to\nrotate $\\pi$ radians,\nand so on. Ordinary vectors, like the electric and magnetic fields\n$\\vec{E}$ and $\\vec{B}$, rotate like spin-1 objects. It may\ntherefore come as a surprise that the Stokes vector rotates with twice\nthe angle, like a spin-2 object, in spite of the identical symmetry\nproperties of the Mueller matrix and the electromagnetic tensor. \n\nThe resolution to this apparent paradox is found by distinguishing\nbetween the kind of spaces in which the rotations are performed. In\nthe Minkowski-type space that is spanned by $I,Q,U,V$ as\ncoordinates, which is the Poincar\\'e\\ space in polarization physics\nfor a fixed and normalized intensity $I$, the transformation properties\nare indeed those of a real vector, a spin-1 object. However, besides\nPoincar\\'e\\ space the Stokes vector also lives in ordinary space, and\na rotation by $\\theta$ of a vector in Poincar\\'e\\ space corresponds to\na rotation in ordinary space by $2\\theta$. While being a spin-1 object\nin Poincar\\'e\\ space, the same object becomes a spin-2 object in\nordinary space. \n\nThe reason why it becomes a spin-2 object is that the Stokes vector\nhas substructure: it is formed from tensor products of Jones\nvectors. Similarly Mueller matrices for coherent (100\\,\\%\\ polarized)\nwave packages are formed from tensor products of Jones matrices. While\nthe Jones vectors and matrices are spin-1 objects in ordinary space,\nthe bilinear products between them become spin-2 objects. \n\nThe fundamental physics that governs the polarization physics does not\nmanifest itself at the level of these spin-2 objects, because the basic\nprocesses are the electromagnetic interactions between the radiation\nfield and the electrons (which may be bound in atoms), and these\ninteractions are described at the spin-1 level (since the\nelectromagnetic waves represent a spin-1 vector field). The Jones\nmatrices, or, in QM terminology, the Kramers-Heisenberg scattering\namplitudes, contain the fundamental physics. They are the basic\nbuilding blocks for the bilinear products, the spin-2 objects. \n\nThis discussion points to the possibility that the physics of other\ntypes of spin-2 objects, like the metric field in general relativity,\nmay be hidden, because the governing physics may take place within a spin-1\nsubstructure level and would remain invisible if the spin-2 field\nwould be (incorrectly) \nperceived as fundamental, without substructure. 
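The null-vector correspondence discussed in Sect.~\\ref{sec:depolmass} is easily verified numerically. The following short sketch (illustrative only; the Stokes values are invented) evaluates the Minkowski scalar product of Eq.~(\\ref{eq:itetai}) and the polarized fraction of Eq.~(\\ref{eq:polfrac}) for a fully and a partially polarized beam: 
\\begin{verbatim}
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric

def stokes_invariant(S):
    # I^2 - (Q^2 + U^2 + V^2), Eq. (itetai): zero for a fully
    # (elliptically) polarized beam, positive for partial polarization.
    return S @ g @ S

def polarized_fraction(S):
    # p = sqrt(Q^2 + U^2 + V^2) / I, Eq. (polfrac).
    I, Q, U, V = S
    return np.sqrt(Q**2 + U**2 + V**2) / I

S_full = np.array([1.0, 0.6, 0.0, 0.8])   # invariant = 0 ("massless")
S_part = np.array([1.0, 0.3, 0.0, 0.4])   # invariant > 0 ("massive")
for S in (S_full, S_part):
    print(stokes_invariant(S), polarized_fraction(S))
\\end{verbatim}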
\n\n\n\\section{Conclusions}\\label{sec:conc}\nComparison between the Stokes formalism, the covariant formulation of\nelectromagnetism, and the Lorentz transformation shows that they all\nshare the same Lie algebra, namely the algebra of the Lorentz\ngroup. This algebra is 6-dimensional (for instance in the case of\nelectromagnetism we have three electric field components + three magnetic\nfield components). While this is the algebra that is known to govern Lorentz\ntransformations and the related covariant formulation of\nelectromagnetism, it is not obvious why this group algebra should also apply\nto the transformation of Stokes 4-vectors, which have been constructed\nwith the aim of being a powerful tool for the treatment of\npartially polarized light. \n\nIn spite of the common underlying group structure, there is also a\nprofound difference. While the electromagnetic field vectors and \ntensors are objects of a vector space with rotational properties of\nspin-1 objects, Stokes vectors and Mueller matrices have the\nrotational symmetries of spin-2 objects, because they are formed from\ntensor products of spin-1 objects. This \nvector-field substructure contains the governing physics, where\neverything is coherent, and where in the quantum description the probability\namplitudes or wave functions live and get linearly superposed to form mixed states with\ncertain phase relations. When we go to the spin-2 level by\nforming bilinear products between the probability amplitudes, which\ngenerates observable probabilities, or when we form bilinear products\nbetween electric field vectors to generate \nquantities that represent energies or photon numbers, we get\nstatistical quantities (probabilities or energy packets) over which we\ncan form ensemble averages. If the phase relations of the mixed states in\nthe substructure are definite, we get interference effects and 100\\,\\%\\\npolarization for the ensemble averages, while if the phase relations\ncontain randomness (incoherent superposition) we get partial polarization. \n\nWhen comparing the Stokes 4-vector with the energy-momentum 4-vector\nof a particle, 100\\,\\%\\ elliptically polarized light corresponds to\nmassless particles, while the Stokes vector for partially polarized\nlight corresponds to the energy-momentum vector of a massive\nparticle. Depolarization thus has an effect as if the Stokes vector has\nacquired ``mass'' by a symmetry breaking that is caused by the destruction of\ncoherences between mixed states. \n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\IEEEPARstart{M}{odern} radar systems are able to generate optimal filters matched to increasingly complex target motion, resulting in increased sensitivity to targets exhibiting these motion at the cost of significant processing load. This problem is most difficult for sensors targeting objects in low Earth orbit (LEO), especially sensors with a significant field of regard. This is due to the observation time required to detect smaller targets, combined with significant orbital velocities and large search volumes, increasing the parameter space to impractical levels.\n\nExtending radar processing integration times in order to increase detection sensitivity requires mitigation against range migration, Doppler migration, and angular migration. The correction of these migrations is further complicated by the motion of the Earth, and hence the sensor located on the Earth. 
The direct implementation of a matched filter in this radar search space may lead to the incorporation of many parameters.\n\nThe nominal trajectory of orbits is well understood and is generally deterministic. The motion of a two-body Keplerian orbit, an idealised case of an object of insignificant mass orbiting around a much larger central body\\footnote{Treated as a single point mass.}, can be expressed entirely by six parameters. Matching the processing to this well-defined orbital motion for the purpose of improved radar detection and space situational awareness is therefore a natural extension.\n\nWhilst the primary aim of this general method is to increase a radar's sensitivity to objects in orbit, detections from a filter matched to a target's orbital trajectory will additionally provide coarse initial orbit determination. Traditionally, performing initial orbit determination requires many radar detections of a pass of an object in space.\n\nAfter briefly covering prior work (\\ref{ssec:prior_work}), Section \\ref{sec:problem_formulation} details the problem formulation, specifically in terms of ambiguity function expressions (\\ref{ssec:ambiguity_surface_generation}) and Keplerian orbital dynamics (\\ref{ssec:orbital_dynamics}). In Section \\ref{sec:odbd}, Orbit Determination Before Detect (ODBD) methods are discussed, including matched processing to orbital parameters, constraining the search volume (\\ref{ssec:odbd_search_constraints}), and constraining the orbit in radar measurement space (\\ref{ssec:odbd_zdc}), particularly for uncued detections. Some specific applications, including single-channel object detection and orbit determination are also discussed (\\ref{ssec:odbd_single_channel}). Section \\ref{sec:results} presents simulated results, with comparison against ephemerides. Section \\ref{sec:conclusion} concludes with a description of future work.\n\\vspace{-1ex}\n\n\\subsection{Prior Work}\n\\label{ssec:prior_work}\nThe motivation for this paper is to further develop techniques for the surveillance of space with the Murchison Widefield Array (MWA) using passive radar. The paper is particularly concerned with developing techniques for uncued detection over a wide field of regard. The MWA is a low frequency (70 - 300 MHz), wide field-of-view, radio telescope located in Western Australia \\cite{2013PASA...30....7T}. The MWA has demonstrated the incoherent detection of the International Space Station (ISS) \\cite{ 2013AJ....146..103T} and other, smaller, objects in orbit \\cite{prabu2020development}. However, for coherent processing, methods compensating for all aspects of motion migration are required in order to detect smaller satellites and space debris \\cite{8835821}. As passive radar systems have no control over the transmitter used for detection, improving processing gain through extended Coherent Processing Intervals (CPIs) is a method used to achieve the required sensitivity \\cite{4653940}. Orbital trajectories are ideal targets for such techniques, as stable and predictable relative motion allows for simpler measurement models. Such techniques have also been used with active radar, for improved sensitivity and processing gain \\cite{markkanen2005real} \\cite{8812975}.\n\nConsisting of 256 tiles spread across many square kilometres, the MWA's sparse layout\\footnote{At FM radio frequencies, even the compact configuration of MWA Phase II is sparse \\cite{pase22018article}.} provides high angular resolution. 
Objects in orbit will therefore transit many beamwidths per second at the point of closest approach. Because of this, high angular resolution (normally a desirable attribute) can result in significant angular migration. Highly eccentric orbits will transit significantly faster. This is particularly challenging for the uncued detection of small objects, where longer integration times are needed to achieve sufficient sensitivity.\n\nIndividual radar detections consisting of a single measurement of range, Doppler, azimuth and elevation, only define a broad region of potential orbital parameters \\cite{2007CeMDA..97..289T}. This region may be constrained by incorporating angular rates \\cite{2014demarsradar_regions1}, and even further by including radial acceleration and jerk \\cite{8812975}. Usually, many radar detections are required to perform initial orbit determination. The mapping between radar measurement space and orbital parameters is an ongoing area of research \\cite{8448187}.\n\n\n\\section{Problem Formulation}\n\\label{sec:problem_formulation}\n\\subsection{Radar Product Formation} \n\\label{ssec:ambiguity_surface_generation}\nA standard timeseries matched filter is a function to detect reflected copies of a reference signal $d(t)$ in the surveillance signal $s(t)$, specifically copies delayed by $\\tau$ and frequency shifted by $f_D$:\n\\vspace{-1ex}\n\\begin{IEEEeqnarray}{rCL}\n \\label{eq:woodward1}\n \\chi(\\tau,f_D) = \\int_T s(t){d}^*(t-\\tau){\\mathrm{e}}^{-j2\\pi f_Dt}\\,dt .\n\\end{IEEEeqnarray}\n\nThis matched filter can be extended to more complicated motions by \\textit{dechirping} (or even applying higher order corrections to) the motion-induced frequency shift. For example, instead of matching to the radial velocity with a Doppler shift of $f_D$, higher order motions could be matched with a time varying frequency (that can be represented as a polynomial phase signal) given by $f_D +\nf_Ct$, where $f_C$ is proportional to the radial acceleration. This can be extended to an arbitrary number of parameters at the cost of adding extra dimensions to the matched filter outputs. \nTo account for any range migration, the delay term $\\tau$ will also need to be a function of time to match the radial motion.\n\nFor a receiver array consisting of $N$ elements, the surveillance signal $s(t)$ can be formed by classical far-field beamforming in a direction of interest such that:\n\\begin{IEEEeqnarray}{rCL}\ns(t) = \\sum_{n=1}^{N}s_{n}(t){\\mathrm{e}}^{-j\\boldsymbol{k}(\\theta,\\phi) \\cdot \\boldsymbol{u}_n } ,\n\\end{IEEEeqnarray}\nwhere $s_{n}(t)$ is the received signal at the $n^{\\text{th}}$ antenna, $\\boldsymbol{u}_n$ is the position of the $n^{\\text{th}}$ antenna, and $\\boldsymbol{k}(\\theta,\\phi)$ is the signal wavevector for azimuth $\\theta$ and elevation $\\phi$. 
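For reference, a minimal discretised form of Eq.~(\\ref{eq:woodward1}) is sketched below in Python/NumPy (illustrative only; the sample rate, delay and Doppler grids are arbitrary, and the cyclic shift is adequate only for this toy example): 
\\begin{verbatim}
import numpy as np

def ambiguity_surface(s, d, fs, delays, dopplers):
    # Discretised Eq. (woodward1): correlate the surveillance samples s
    # against delayed, Doppler-shifted copies of the reference d.
    # fs is the sample rate [Hz], delays are integer sample lags and
    # dopplers are frequency shifts [Hz].
    t = np.arange(len(s)) / fs
    chi = np.zeros((len(delays), len(dopplers)), dtype=complex)
    for i, lag in enumerate(delays):
        d_lag = np.roll(d, lag)          # cyclically delayed reference
        for j, fd in enumerate(dopplers):
            chi[i, j] = np.sum(s * np.conj(d_lag)
                               * np.exp(-2j * np.pi * fd * t))
    return chi

# Toy example: an echo at 20 samples delay and +50 Hz Doppler shift.
fs = 1000.0
t = np.arange(2048) / fs
d = np.exp(2j * np.pi * np.random.uniform(size=t.size))  # noise-like
s = 0.1 * np.roll(d, 20) * np.exp(2j * np.pi * 50.0 * t)
chi = ambiguity_surface(s, d, fs, range(64), np.arange(-100, 101, 5))
peak = np.unravel_index(np.abs(chi).argmax(), chi.shape)
\\end{verbatim}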
\nTime varying adjustments can be made to every measurement parameter to create a filter, $\\chi$, matched to the exact motion of an object with range $\\rho(t)$ and slant range-rate $\\dot{\\rho}(t)$, in time-varying directions given by azimuth $\\theta(t)$ and elevation $\\phi(t)$:\n\n\\begin{multline}\n \\label{eq:time_varying_delay_doppler}\n \\chi\\left(\\theta(t), \\phi(t), \\rho(t), \\dot{\\rho}(t)\\right) = \\int_T \\left[\\sum_{n=1}^{N}{\\mathrm{e}}^{j\\boldsymbol{k}(\\theta(t),\\phi(t)) \\cdot \\boldsymbol{u}_n }s_{n}(t)\\right] \\\\\n d^*\\left(t-2c^{-1}\\rho(t)\\right){\\mathrm{e}}^{-j\\frac{4\\pi}{\\lambda}\\dot{\\rho}(t)t}dt ,\n\\end{multline}\nwhere the delay to the target is now given by the total path distance scaled by $\\frac{1}{c}$, and the Doppler shift is given by $\\frac{2\\dot{\\rho}}{\\lambda}$.\n\n\\subsection{Orbital Dynamics}\n\\label{ssec:orbital_dynamics}\n\\vspace{-.47ex}\nThe most common elements used to parameterise an orbit are the Keplerian, or \\textit{classical}, orbital elements. These elements directly describe the size, shape, and orientation of an orbital ellipse (with one focus being at the centre of the central body), and the position of an object on this ellipse at some epoch, in the Earth-Centered Inertial (ECI) coordinate frame \\cite{Vallado2001fundamentals}. The ECI coordinate frame has its origin at the centre of the Earth, but it does not rotate with the Earth. It is also worth noting that a Keplerian orbit can, in fact, be any conic section. However, in this paper, it is assumed that orbits describe Earth-captured closed orbits.\n\nThe Keplerian orbital parameters are: the semi-major axis, $a$, and eccentricity, $e$, defining the size and shape of the ellipse; the right-ascension of the ascending node, $\\Omega$, and inclination, $i$, which define the orientation of the elliptical plane to the Earth's equatorial plane; the argument of periapsis, $w$, defining the orientation\/rotation of the ellipse in the orbital plane; and finally, the true anomaly, $\\nu$, defining the position of the object on the ellipse (refer to Figure \\ref{fig:orbitplane1}).\n\n\\vspace{-3ex}\n\\begin{figure}[ht!]\n\\hspace{2ex}\n\\tdplotsetmaincoords{70}{110}\n\\begin{tikzpicture}[tdplot_main_coords,scale=4.3]\n \\pgfmathsetmacro{\\r}{0.7}\n \\pgfmathsetmacro{\\O}{45} \n \\pgfmathsetmacro{\\i}{35} \n \\pgfmathsetmacro{\\f}{32}\n\n \\coordinate (O) at (0,0,0);\n\n \\draw [->] (O) -- (1,0,0) node[anchor=north east] {$\\boldsymbol{I}$};\n \\draw [->] (O) -- (0,0.9,0) node[anchor=north west] {$\\boldsymbol{J}$};\n \\draw [->] (O) -- (0,0,0.9) node[anchor=south] {$\\boldsymbol{K}$};\n \n \\tdplotdrawarc[dashed]{(O)}{\\r}{0}{360}{}{}\n\n \\tdplotsetrotatedcoords{\\O}{0}{0}\n\n \\draw [tdplot_rotated_coords] (0,0,0) -- (\\r,0,0) node [below right] {};\n \\tdplotdrawarc[->]{(O)}{.28*\\r}{0}{\\O}{anchor=north}{$\\Omega$}\n\n \\tdplotsetrotatedcoords{-\\O}{\\i}{0}\n \\tdplotdrawarc[tdplot_rotated_coords]{(O)}{\\r}{0}{360}{}{} \n \\begin{scope}[tdplot_rotated_coords]\n \\draw[->] (O) -- (0,0,0.8) node [above] {$\\boldsymbol{h}$};\n \\draw (0,0,0) -- (-\\r,0,0);\n \\tdplotdrawarc[->]{(O)}{.4*\\r}{90}{180}{anchor=west}{$\\omega$}\n \\coordinate (P) at (180+\\f:\\r);\n \\draw (O) -- (P);\n \\tdplotdrawarc[->]{(O)}{.7*\\r}{180}{180+\\f}{anchor=south west}{$\\nu$}\n \\end{scope}\n\n \\tdplotsetrotatedcoords{-\\O+\\f}{\\i}{0}\n \\tdplotsetrotatedcoordsorigin{(P)}\n \\begin{scope}[tdplot_rotated_coords,scale=.2,thick]\n \\fill (P) circle (.6ex) node [above] {Celestial Body};\n 
\\end{scope}\n\n \\tdplotsetthetaplanecoords{-\\f}\n \\tdplotdrawarc[tdplot_rotated_coords,->]{(O)}{0.9*\\r}{0}{\\i}{anchor=south}{$i$}\n\\end{tikzpicture}\n\\vspace{-2ex}\n\\caption{The orbital plane determined by orientation parameters $\\Omega$, $\\omega$, and $i$ relative to the plane of reference in the ECI coordinate frame. These parameters define the direction of the angular momentum vector $\\boldsymbol{h}$. The axes $\\boldsymbol{I}$, $\\boldsymbol{J}$ and $\\boldsymbol{K}$ define the ECI coordinate frame.}\n\\vspace{-1ex}\n\\label{fig:orbitplane1}\n\\end{figure}\n\nIt is also assumed that the only force acting on the object in orbit is due to the gravity of the dominant mass\\footnote{Uniform acceleration does not take into account the ellipsoidal\/oblate nature of the Earth or other forces, such as micro-atmospheric drag, solar weather, and gravity due to other celestial bodies. For the short duration of a single CPI, these factors are generally negligible.}, with the acceleration due to the Earth's gravity $\\boldsymbol{\\ddot{r}}$, given by:\n\\vspace{-0.2ex}\n\\begin{equation}\n\\boldsymbol{\\ddot{r}}= -\\frac{\\mu}{\\lvert\\boldsymbol{r}\\rvert^3}\\boldsymbol{r}\\label{eq:eci_acceleration},\\vspace{-1ex}\n\\end{equation}\nwhere $\\mu$ is the standard gravitational parameter for the Earth.\n\nGiven the orbital parameters, and the acceleration due to the Earth's gravity, the Cartesian position $\\boldsymbol{r}$, and velocity $\\boldsymbol{\\dot{r}}$, for an object in Earth orbit is completely deterministic and is given by:\n\\begin{IEEEeqnarray}{rCL}\n \\boldsymbol{r} &=& \\frac{a(1-e^2)}{1 + e\\cos{\\nu}}(\\cos{\\nu}\\boldsymbol{P} + \\sin{\\nu}\\boldsymbol{Q})\\label{eq:eci_position}~;\\\\\n \\boldsymbol{\\dot{r}} &=& \\sqrt{\\frac{\\mu}{a(1-e^2)}}(-\\sin{\\nu}\\boldsymbol{P} + (e + \\cos{\\nu})\\boldsymbol{Q})\\label{eq:eci_velocity}~,\n\\end{IEEEeqnarray}\nwhere $\\boldsymbol{P}$ and $\\boldsymbol{Q}$ represent axes of a coordinate system co-planar with the orbital plane in the Cartesian ECI coordinate frame (given by axes $\\boldsymbol{I}$, $\\boldsymbol{J}$, and $\\boldsymbol{K}$). The third axis, $\\boldsymbol{W}$, is perpendicular to the orbital plane \\cite{Vallado2001fundamentals}. 
These vectors are described by:\n\n\\begin{IEEEeqnarray}{rCL}\n\\boldsymbol{P} = & &\n \\begin{bmatrix} \n \\cos{\\Omega}\\cos{\\omega} - \\sin{\\Omega}\\cos{i}\\sin{\\omega} \\\\\n \\sin{\\Omega}\\cos{\\omega} + \\cos{\\Omega}\\cos{i}\\sin{\\omega} \\\\\n \\sin{i}\\sin{\\omega}\n \\end{bmatrix}~; \\\\\n \\boldsymbol{Q} = & & \\begin{bmatrix} \n -\\cos{\\Omega}\\sin{\\omega} - \\sin{\\Omega}\\cos{i}\\cos{\\omega} \\\\\n -\\sin{\\Omega}\\sin{\\omega} + \\cos{\\Omega}\\cos{i}\\cos{\\omega} \\\\\n \\sin{i}\\cos{\\omega}\n \\end{bmatrix}~;\\\\\n \\boldsymbol{W} = & & \\begin{bmatrix} \n \\sin{i}\\sin{\\Omega} \\\\\n -\\sin{i}\\cos{\\Omega} \\\\\n \\cos{i}\n \\end{bmatrix}~.\n\\end{IEEEeqnarray}\nNote that a complicating factor with the ECI reference frame is that a nominally stationary position on the surface of the Earth, such as a fixed radar sensor, will have significant motion.\n\n\\section{{Orbit Determination Before Detect}} \\label{sec:odbd}\n\n\\begin{figure}[ht!]\n\\hspace{7ex}\n\\begin{tikzpicture}[dot\/.style={draw,fill,circle,inner sep=1pt}]\n \\def5{5}\n \\def\\a{5}\n \\def\\angle{10} \n \\coordinate[] (O) at (0,0) {};\n \\node[thick, dot,label={\\angle:Celestial Body}] (X) at (20:{4} and {3.2}) {};\n \\coordinate[label={-30:Sensor Location}] (Q) at (0:{4} and {3}) {};\n \\draw [->] (O) -- (X) node [midway, above] {$\\boldsymbol{r}$};\n \\draw [->] (O) -- (Q) node [midway, below] {$\\boldsymbol{q}$};\n \\draw [dashed, ->] (X) -- (1.9,2) node [near end, right] [shift=(10:1mm)] {$\\dot{\\boldsymbol{r}}$};\n \\draw [ ->] (Q) -- (X) node [midway, right] {$\\boldsymbol{\\rho}$};\n \\fill (X) circle [radius=2pt];\n\n\\end{tikzpicture}\n\\vspace{-2ex}\n\\caption{In the ECI coordinate frame the sensor is at position $\\boldsymbol{q}$, the celestial body at position $\\boldsymbol{r}$ with velocity $\\dot{\\boldsymbol{r}}$ (given by \\eqref{eq:eci_position} and \\eqref{eq:eci_velocity}) and the slant range vector from the sensor to the object given by $\\boldsymbol{\\rho}$.}\n\\label{fig:radar_vectors}\n\\end{figure}\n\nFor a two-body Keplerian orbit, the time-varying terms $\\rho(t)$, $\\dot{\\rho}(t)$, $\\phi(t)$, and $\\theta(t)$ \n\\eqref{eq:time_varying_delay_doppler} can be completely described by an orbit's six independent parameters. Although the position of an object in orbit is given by \\eqref{eq:eci_position}, there are no closed form solutions for the time varying position $\\boldsymbol{r}(t)$. Instead, a Taylor series approximation can be used to calculate an expression for the object's position throughout a CPI such that $\\boldsymbol{r}(t) = \\sum_{n=0}^{\\infty}\\frac{\\boldsymbol{r}^{(n)}(0)t^n}{n!}$ (where $\\boldsymbol{r}^{(n)}(x)$ denotes the $n^{th}$ derivative of $\\boldsymbol{r}$ evaluated at the point $x$), with $t$ being the time through the CPI of length $T$, $t \\in [\\frac{-T}{2}, \\frac{T}{2}]$. 
With knowledge of the sensor's location $\\boldsymbol{q}(t)$ (as in Figure \\ref{fig:radar_vectors}) and its velocity $\\boldsymbol{\\dot{q}}(t)$, the slant range vector from the sensor to the object and its rate are $\\boldsymbol{\\rho}(t) = \\boldsymbol{r}(t) - \\boldsymbol{q}(t)$ and $\\boldsymbol{\\dot{\\rho}}(t) = \\boldsymbol{\\dot{r}}(t) - \\boldsymbol{\\dot{q}}(t)$, giving polynomial expressions for the slant-range and slant-range rate equations of motion over the CPI:\n\\vspace{-1ex}\n\\begin{IEEEeqnarray}{rCL}\n \\rho(t) &=& \\lvert\\boldsymbol{\\rho}(t)\\rvert = \\lvert \\sum_{n=0}^{\\infty}\\frac{\\boldsymbol{r}^{(n)}(0)t^n}{n!} - \\boldsymbol{q}(t)\\rvert\\label{eq:taylor_position}~;\\\\\n \\dot{\\rho}(t) &=& \\frac{\\boldsymbol{\\rho}(t)\\cdot\\boldsymbol{\\dot{\\rho}}(t)}{\\rho(t)}\\,,\\quad \\boldsymbol{\\dot{\\rho}}(t) = \\sum_{n=1}^{\\infty}\\frac{\\boldsymbol{r}^{(n)}(0)t^{n-1}}{(n-1)!} - \\boldsymbol{\\dot{q}}(t)\\label{eq:taylor_rangerate}~.\n\\end{IEEEeqnarray}\nThese expressions can be extended (or truncated) to arbitrary accuracy.\n\nThe directional angles are now calculated as topocentric right ascension and declination, that is, right ascension and declination relative to the sensor location, given by $\\alpha$ and $\\delta$, respectively: \n\\vspace{-1.5ex}\n\\begin{IEEEeqnarray}{rCL}\n \\alpha(t) &=& {\\tan}^{-1}\\left(\\frac{\\rho_{\\boldsymbol{J}}(t)}{\\rho_{\\boldsymbol{I}}(t)}\\right)\\label{eq:alpha_t}~;\\\\\n \\delta(t) &=& {\\tan}^{-1}\\left(\\frac{\\rho_{\\boldsymbol{K}}(t)}{\\sqrt{{\\rho_{\\boldsymbol{I}}(t)}^2 + {\\rho_{\\boldsymbol{J}}(t)}^2}}\\right)~, \\label{eq:delta_t}\n\\end{IEEEeqnarray}\nnoting that these expressions depend on the individual elements of $\\boldsymbol{\\rho}$ such that $\\boldsymbol{\\rho}(t) = [\\rho_{\\boldsymbol{I}}(t), \\rho_{\\boldsymbol{J}}(t), \\rho_{\\boldsymbol{K}}(t)]^T$.\n\nUsing the expressions in this section, it is possible to form a matched filter to the orbital elements themselves, essentially creating $\\chi(e,a,i,\\Omega,\\omega,\\nu)$ at a given epoch \\eqref{eq:time_varying_delay_doppler}. This enables arbitrarily long CPIs by tracking an orbit throughout the CPI. Additionally, instead of calculating a Taylor series expression for the orbital position $\\boldsymbol{r}(t)$, and deriving the parameters of interest, it is far more efficient to directly calculate a Taylor series expression for the parameters of interest. 
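A minimal numerical sketch of these expressions is given below (Python/NumPy, illustrative only; SI units and the standard value of $\\mu$ are assumed, and the function names are ours rather than part of any established library): 
\\begin{verbatim}
import numpy as np

MU = 3.986004418e14   # Earth's gravitational parameter [m^3 s^-2]

def kepler_to_eci(a, e, i, Omega, omega, nu):
    # ECI position and velocity from the Keplerian elements,
    # Eqs. (eci_position) and (eci_velocity), via the P and Q vectors.
    P = np.array([
        np.cos(Omega)*np.cos(omega) - np.sin(Omega)*np.cos(i)*np.sin(omega),
        np.sin(Omega)*np.cos(omega) + np.cos(Omega)*np.cos(i)*np.sin(omega),
        np.sin(i)*np.sin(omega)])
    Q = np.array([
        -np.cos(Omega)*np.sin(omega) - np.sin(Omega)*np.cos(i)*np.cos(omega),
        -np.sin(Omega)*np.sin(omega) + np.cos(Omega)*np.cos(i)*np.cos(omega),
        np.sin(i)*np.cos(omega)])
    r = a*(1 - e**2)/(1 + e*np.cos(nu)) * (np.cos(nu)*P + np.sin(nu)*Q)
    v = np.sqrt(MU/(a*(1 - e**2))) * (-np.sin(nu)*P + (e + np.cos(nu))*Q)
    return r, v

def radar_measurements(r, v, q, q_dot):
    # Slant range, range rate and topocentric angles for sensor state
    # (q, q_dot), cf. Eqs. (taylor_position), (taylor_rangerate),
    # (alpha_t) and (delta_t) evaluated at t = 0.
    rho_vec = r - q
    rho_dot_vec = v - q_dot
    rho = np.linalg.norm(rho_vec)
    rho_rate = rho_vec @ rho_dot_vec / rho
    alpha = np.arctan2(rho_vec[1], rho_vec[0])
    delta = np.arctan2(rho_vec[2], np.hypot(rho_vec[0], rho_vec[1]))
    return rho, rho_rate, alpha, delta
\\end{verbatim}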
\nFor a sensor at known Cartesian position $\\boldsymbol{q}$, with known instantaneous velocity, acceleration and jerk, given by $\\boldsymbol{\\dot{q}}$, $\\boldsymbol{\\ddot{q}}$, and $\\boldsymbol{\\dddot{q}}$, respectively, and given the slant range vector $\\boldsymbol{\\rho} = \\boldsymbol{r} - \\boldsymbol{q}$, the slant range and its instantaneous derivatives are given by:\n\n\\begin{align}\n \\rho &= \\lvert\\boldsymbol{\\rho}\\rvert\\label{eq:straight_up_slant_range}~;\\\\\n \\label{eq:orbit_doppler}\n \\dot{\\rho} &= \\frac{\\boldsymbol{\\rho}\\cdot\\dot{\\rhovec}}{\\rho}~; \\\\\n \\label{eq:orbit_chirp}\n \\ddot{\\rho} &= -\\frac{(\\boldsymbol{\\rho}\\cdot\\dot{\\rhovec})^2}{\\rho^3} + \\frac{|\\dot{\\rhovec}|^2 + \\boldsymbol{\\rho}\\cdot\\ddot{\\rhovec}}{\\rho}~; \\\\\n \\label{eq:orbit_jerk}\n \\dddot{\\rho} &= \\begin{multlined}[t]\n 3\\frac{(\\boldsymbol{\\rho}\\cdot\\dot{\\rhovec})^3}{\\rho^5} \\\\\n - 3\\frac{(\\boldsymbol{\\rho}\\cdot\\dot{\\rhovec})(|\\dot{\\rhovec}|^2 + \\boldsymbol{\\rho}\\cdot\\ddot{\\rhovec})}{\\rho^3}\\\\\n + \\frac{3\\dot{\\rhovec}\\cdot\\ddot{\\rhovec} + \\boldsymbol{\\rho}\\cdot\\dddot{\\rhovec}}{\\rho}~,\n \\end{multlined}\n\\end{align}\n\nwhere $\\dddot{\\boldsymbol{r}}$ is from the derivative of \\eqref{eq:eci_acceleration} and is given by:\n\\begin{IEEEeqnarray}{rCL}\n\\dddot{\\boldsymbol{r}} = \\frac{3\\mu\\boldsymbol{r}\\cdot\\boldsymbol{\\dot{r}}}{\\lvert\\boldsymbol{r}\\rvert^5}\\boldsymbol{{r}} -\\frac{\\mu}{\\lvert\\boldsymbol{r}\\rvert^3}\\boldsymbol{\\dot{r}}~.\n\\end{IEEEeqnarray}\n\nNow, \\eqref{eq:orbit_doppler}, \\eqref{eq:orbit_chirp}, and \\eqref{eq:orbit_jerk} can be used to directly specify the target's Doppler, chirp rate, and radial jerk. This leads to more efficient expressions (when compared to \n\\eqref{eq:taylor_position} and \\eqref{eq:taylor_rangerate}) for the slant-range, and also slant-range rate, throughout the CPI of length $T$ such that $t \\in [\\frac{-T}{2}, \\frac{T}{2}]$: \n\\begin{IEEEeqnarray}{rCL}\n \\rho(t) & = & \\rho + \\dot{\\rho}t + \\frac{1}{2}\\ddot{\\rho}t^2 + \\frac{1}{6}\\dddot{\\rho}t^3~; \\\\\n \\dot{\\rho}(t) & = & \\dot{\\rho} + \\ddot{\\rho}t + \\frac{1}{2}\\dddot{\\rho}t^2~.\n\\end{IEEEeqnarray}\n\nA fourth-order Taylor Series approximation to the slant-range, $\\rho(t)$, was chosen due to previous work, which demonstrated that a third order polynomial phase signal may be required in order to coherently match orbits for CPIs of duration up to 10 seconds \\cite{8835821}. \n\nSimilarly, equivalent approximations can be formed for the angular measurement parameters $\\alpha(t)$ \n\\eqref{eq:alpha_t} and $\\delta(t)$ \\eqref{eq:delta_t}.\n\\subsection{Search-Volume Constraints}\n\\label{ssec:odbd_search_constraints}\nThe methods described above enable coherent processing that matches orbital parameters; however, they are not suitable for searching to perform uncued detections. The parameter space is far too large to be practically searched, and the vast majority of orbits will not correspond to passes within a region of interest above the sensor. Although, as stated earlier in Section \\ref{ssec:orbital_dynamics}, alternatives to the Keplerian parameter set are available. In fact, it is possible to parameterise a Keplerian orbit with the Cartesian position and velocity to constitute the six elements \\cite{Vallado2001fundamentals}. 
It is also possible to utilise combinations of both sets of elements in other formulations.\n\nInstead of searching through classical orbital parameters, three parameters can be expressed as a hypothesised ECI position within a search volume of interest. This ensures any hypothesised orbit, determined from these initial parameters, will be within the search volume. Given this potential orbital position, $\\boldsymbol{r}$, only three more additional parameters are needed to fully define an elliptical orbit. Although the three elements forming the orbital velocity could be treated as free variables, the majority of possible velocities would not correspond to valid Earth-captured orbits. Instead, given position $\\boldsymbol{r}$ and semi-major axis $a$, the magnitude of the velocity of the corresponding orbit is given by the Vis-Viva equation \\cite{Vallado2001fundamentals}:\n\\begin{IEEEeqnarray}{rCL}\n{\\lvert\\boldsymbol{\\dot{r}}\\rvert}^2\n= \\mu(\\frac{2}{\\lvert\\boldsymbol{r}\\rvert} - \\frac{1}{a})~. \\label{eq:vis_viva}\n\\end{IEEEeqnarray}\n\nFurthermore, given position $\\boldsymbol{r}$ and eccentricity $e$, the semi major axis length will itself be constrained between the potential limits of the orbit's apogee and perigee ranges:\n\\begin{IEEEeqnarray}{rCL}\n\\frac{\\lvert\\boldsymbol{r}\\rvert}{1+e} \\leq a \\leq \\frac{\\lvert\\boldsymbol{r}\\rvert}{1-e}~. \\label{eq:apogiee_and_perigee_ranges}\n\\end{IEEEeqnarray}\n\nThe semi-major axis is also constrained by realistic limits on an orbit's range, as well as a sensor's maximum detection range, represented by minimum and maximum allowable periapsides, ${rp}_{min}$ and ${rp}_{ max}$:\n\\begin{IEEEeqnarray}{rCL}\n\\frac{{rp}_{min}}{1-e} \\leq a \\leq \\frac{{rp}_{max}}{1-e}~. \\label{eq:perigee_ranges_limits}\n\\end{IEEEeqnarray}\n\nAnother constraint is the constant angular momentum of the orbit, $\\boldsymbol{h}$. This vector is perpendicular to the orbital plane, parallel to $\\boldsymbol{W}$\\hspace{-0.5ex}, with a magnitude depending on the size and shape of the ellipse:\n\\begin{IEEEeqnarray}{rCL}\n \\boldsymbol{h} = \\sqrt{\\mu a(1-e^2)}\\boldsymbol{W} = \\boldsymbol{r}\\times\\boldsymbol{\\dot{r}}~. \\label{eq:angular_momentum}\n\\end{IEEEeqnarray}\n\nThis cross-product may be rewritten to form an expression for the inner product between the position and velocity:\n\\begin{IEEEeqnarray}{rCL}\n \\boldsymbol{r}\\cdot\\boldsymbol{\\dot{r}} & = & \\pm\\sqrt{\\lvert\\boldsymbol{r}\\rvert^2\\lvert\\boldsymbol{\\dot{r}}\\rvert^2 - \\lvert\\boldsymbol{h}\\rvert^2}~.\n\\end{IEEEeqnarray}\n\nCombined with the magnitude of the velocity, from the Vis-Viva equation \\eqref{eq:vis_viva}, as well as the magnitude of the constant angular momentum \\eqref{eq:angular_momentum}, an expression for this inner product can be formed which depends solely on the position $\\boldsymbol{r}$ and the size and shape of the orbital ellipse:\n\\begin{IEEEeqnarray}{rCL}\n \\boldsymbol{r}\\cdot\\boldsymbol{\\dot{r}} & = & \\pm\\sqrt{\\lvert\\boldsymbol{r}\\rvert^2\\mu(\\frac{2}{\\lvert\\boldsymbol{r}\\rvert} - \\frac{1}{a}) - \\mu a (1-e^2)}~. \\label{eq:parallel_planes}\n\\end{IEEEeqnarray}\n\nAdditionally, the specific relative angular momentum vector, $\\boldsymbol{h}$, is perpendicular to both the orbital position $\\boldsymbol{r}$ and orbital velocity $\\boldsymbol{\\dot{r}}$. 
This leads to the expressions $\\boldsymbol{r}\\cdot\\boldsymbol{h}=0$ and $\\boldsymbol{\\dot{r}}\\cdot\\boldsymbol{h}=0$, which result in another constraint on the velocity, dependant on the right ascension of the ascending node, $\\Omega$:\n\\begin{IEEEeqnarray}{rCL}\n \\begin{bmatrix}\n r_{\\boldsymbol{K}}\\sin{\\Omega} \\\\ -r_{\\boldsymbol{K}}\\cos{\\Omega} \\\\ r_{\\boldsymbol{J}}\\cos{\\Omega} - r_{\\boldsymbol{I}}\\sin{\\Omega}\n \\end{bmatrix}\\cdot\\boldsymbol{\\dot{r}} = 0~. \\label{eq:raan_plane}\n\\end{IEEEeqnarray}\n\nThese expressions lead to a simple geometric solution for determining orbits when $\\boldsymbol{r}$ (and other parameters) are known, and $\\boldsymbol{\\dot{r}}$ is unknown. For determining $\\boldsymbol{\\dot{r}}$, \n(\\ref{eq:vis_viva}) defines a sphere of radius $\\sqrt{\\mu(\\frac{2}{\\lvert\\boldsymbol{r}\\rvert} - \\frac{1}{a})}$, representing valid orbits in the velocity vector's element space. Additionally, \n(\\ref{eq:parallel_planes}) defines two parallel planes of valid orbits, which intersect with (\\ref{eq:vis_viva}) to define two circles. Finally, intersecting these two circles with the plane defined by the position and the right ascension of the ascending node, $\\Omega$, \n\\eqref{eq:raan_plane} will result in a maximum of four intersection points, that is, four velocities, each corresponding to a valid orbit. An example diagram is shown in Figure \\ref{fig:makingshapes}. Although this means that a choice of six orbital parameters will result in up to four potential orbital matched filters, this approach will be far more efficient than methods outlined earlier in this section, as the orbit will be within the search volume, and each parameter choice restricts the range of subsequent parameters. \n\n\\vspace{-3ex}\n\n\\begin{figure}[ht!]\n\\hspace{8ex}\\begin{tikzpicture}[\n point\/.style = {draw, circle, fill=black, inner sep=0.7pt},\n scale=0.7\n]\\clip (-4.25,-3.75) rectangle + (8.5,8);\n\\def3cm{3cm}\n\\coordinate (O) at (0,0); \n\n \\draw[->] (0,0,0) -- (1,0,0) node[anchor=north east]{$\\dot{{r}}_{\\boldsymbol{J}}$};\n \\draw[->] (0,0,0) -- (0,1,0) node[anchor=north west]{$\\dot{{r}}_{\\boldsymbol{K}}$};\n \\draw[->] (0,0,0) -- (0,0,1) node[anchor=south east]{$\\dot{{r}}_{\\boldsymbol{I}}$};\n\n\n\\draw[-] (0,0) circle [radius=3cm];\n\n\n\n\\begin{scope}[]\n\\draw[-]\n (-3.25,2.7) -- (4.25,2.7) -- (3.25,1.2) -- (-4.25,1.2) -- cycle;\n \n \\draw[]\n (-3.25,-1.2) -- (4.25,-1.2) -- (3.25,-2.7) -- (-4.25,-2.7) -- cycle;\n \n \\draw[]\n (-2.3,3.25) -- (-2.3,-3.75) -- (-1.2,-3.25) -- (-1.2,3.75) -- cycle;\n \n \\draw[dashed] (0,2) ellipse (2.2 and 0.35);\n \\draw[dashed] (0,-1.97) ellipse (2.2 and 0.3);\n \\draw[dashed] (-1.75,0) ellipse (0.3 and 2.4 );\n \n\\node at (2.5,3.1) {$P_1$};\n\n\\node at (3.6,-0.8) {$P_2$};\n\n\n\\node at (-0.85,3.45) {$P_3$};\n\n\\fill (-1.65,2.23) circle [radius=0.105];\n\n\\fill (-1.93,1.83) circle [radius=0.105];\n\n\\fill (-1.51,-1.73) circle [radius=0.105];\n\n\n\\fill (-1.85,-2.1) circle [radius=0.105];\n\n\n\\end{scope}\n\\end{tikzpicture}\n\\vspace{-1.5ex}\n\\caption{Four valid orbital velocities given by the intersection of the sphere (given by \\eqref{eq:vis_viva}), parallel planes P1 and P2 (given by \\eqref{eq:parallel_planes}), and plane P3 (given by \\eqref{eq:raan_plane} or \\eqref{eq:doppler_plane}).}\n\\label{fig:makingshapes}\n\\end{figure}\n\nTherefore, given an orbital position, $\\boldsymbol{r}$, a choice of eccentricity, $e$, semi-major axis, $a$, and right ascension of the ascending node, 
$\\Omega$, four potential orbital velocities, $\\boldsymbol{\\dot{r}}$, are calculated, which leads to an expression for the complete matched filter:\n\\vspace{-0.5ex}\n\\begin{IEEEeqnarray}{rCL}\n \\chi(\\boldsymbol{r}, \\boldsymbol{\\dot{r}}) \n & = \\hspace{-1ex} \\int\\limits_{-\\frac{T}{2}}^{\\frac{T}{2}} \\hspace{-1ex} [&\\sum_{n=1}^{N}{\\mathrm{e}}^{j\\boldsymbol{k}(\\delta(\\boldsymbol{r}, \\boldsymbol{\\dot{r}},t),\\alpha(\\boldsymbol{r}, \\boldsymbol{\\dot{r}},t)) \\cdot \\boldsymbol{u}_n }s_{n}(t)]\\nonumber\\\\\n & & ~d^*(t-2c^{-1}\\rho(\\boldsymbol{r}, \\boldsymbol{\\dot{r}},t)){\\mathrm{e}}^{-j\\frac{2\\pi}{\\lambda}\\dot{\\rho}(\\boldsymbol{r}, \\boldsymbol{\\dot{r}},t)t}\\,dt~.~~~\\label{eq:full_OD_right_here}\n\\end{IEEEeqnarray}\n\nThe proposed method tests for only realistic orbits in a given search region. Also, given a set of orbit parameters, this matched filter should maximise a radar's sensitivity to that orbit. Additionally, a detection in this matched filter corresponds to a detection in the orbital element space, providing initial orbit determination from a single detection.\n\nThis style of trajectory-match approach, has several advantages beyond just maximising sensitivity to motion models. Coupling measurement parameters together through a trajectory model can improve achievable resolution compared with using separate independent measurement parameters. As an example, a radar's range resolution is determined solely by the signal bandwidth, but its Doppler and Doppler-rate resolution improve with the CPI length.\n\nThrough coupling the measurement parameters with the trajectory model, as a radar can resolve finer Doppler and Doppler rate measurements it can essentially resolve finer trajectory states. This can potentially improve target localisation as increasingly accurate state measurements could localise a target within a single range bin.\n\\vspace{-0.2ex}\n\n\\subsection{Zero Doppler Crossing}\n\\label{ssec:odbd_zdc}\nThe flexibility of the geometric formulation in Section~\\ref{ssec:odbd_search_constraints} allows radar parameters to be used alongside, and in place of, other orbital parameters to constrain the search space. A Doppler shift $f_D$ will define another plane in $\\dot{\\boldsymbol{r}}$ space, given by:\n\\begin{equation}\n \\label{eq:doppler_plane}\n \\frac{\\boldsymbol{\\rho}}{\\rho}\\cdot\\boldsymbol{\\dot{r}} = -\\frac{\\lambda f_D}{2} + \\frac{\\boldsymbol{\\rho}\\cdot\\boldsymbol{\\dot{q}}}{\\rho}~.\n\\end{equation}\nEquation~\\eqref{eq:doppler_plane} can be used to search for a particular Doppler shift instead of one of the orbital parameters. This is useful because it allows a blind search to constrain the search-space solely for objects in orbit at their point of closest approach to the sensor. As an object is passing overhead, its point of closest approach will correspond exactly with it being at zero Doppler, which is when it is most detectable\\footnote{This may not necessarily hold in all instances, depending on particular beampattern and radar cross section factors.}. 
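A sketch of this geometric construction is given below (Python/NumPy, illustrative only; SI units are assumed, the helper name is ours, and degenerate geometries in which the hypothesised position is colinear with the sensor line of sight are not handled): 
\\begin{verbatim}
import numpy as np

MU = 3.986004418e14   # [m^3 s^-2]

def candidate_velocities(r, q, q_dot, a, e, wavelength, f_D):
    # Orbital velocity candidates for a hypothesised ECI position r,
    # semi-major axis a, eccentricity e and Doppler f_D: intersection
    # of the Vis-Viva sphere (vis_viva), the parallel planes
    # (parallel_planes) and the Doppler plane (doppler_plane).
    # Up to four solutions, cf. Fig. (makingshapes).
    rho_vec = r - q
    u = rho_vec / np.linalg.norm(rho_vec)
    rmag = np.linalg.norm(r)
    speed2 = MU * (2.0/rmag - 1.0/a)            # Eq. (vis_viva)
    h2 = MU * a * (1.0 - e**2)                  # |h|^2
    c_dopp = -wavelength*f_D/2.0 + u @ q_dot    # Eq. (doppler_plane)
    rdotv2 = rmag**2 * speed2 - h2              # (r.v)^2
    if rdotv2 < 0.0:
        return []
    out = []
    n1 = r / rmag
    d = np.cross(n1, u)               # line direction of the two planes
    A = np.vstack([n1, u, d])
    for rdotv in (np.sqrt(rdotv2), -np.sqrt(rdotv2)):
        v0 = np.linalg.solve(A, [rdotv/rmag, c_dopp, 0.0])
        t2 = (speed2 - v0 @ v0) / (d @ d)       # intersect with sphere
        if t2 < 0.0:
            continue
        out += [v0 + s*np.sqrt(t2)*d for s in (1.0, -1.0)]
    return out
\\end{verbatim}
In the circular-orbit case discussed below ($e=0$, $a=\\lvert\\boldsymbol{r}\\rvert$) the two parallel planes coincide and the same construction returns at most two distinct velocities.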
If a radar is unable to detect an object at its point of closest approach, at its minimum range, there is little value trying to detect it as it moves further away, towards the horizon.\n\nAnother benefit to applying this constraint is that, as Doppler is proportional to the range-rate, this constraint will also restrict the orbit search-space to a point of minimal (or zero) range migration, which greatly simplifies matched-processing\\footnote{Depending on the CPI length, it may be possible to make ${\\rho}(t) \\approx {\\rho}$.}.\n\nThe vast majority of the objects in an Earth-captured orbit are in a circular, or near-circular, orbit. Searching solely for objects in a circular orbit greatly decreases the potential orbital search space. A circular orbit means the eccentricity of the orbital ellipse is zero, $e=0$, and so \\eqref{eq:apogiee_and_perigee_ranges} becomes $a=\\lvert\\boldsymbol{r}\\rvert$. In a circular orbit, the position and velocity vectors will always be perpendicular, so \\eqref{eq:parallel_planes} simplifies to $\\boldsymbol{r}\\cdot\\boldsymbol{\\dot{r}} = 0$, a single plane instead of two parallel planes. The result is that a three-parameter search, within a region of interest, provides sufficient information to match the closest approach of objects in a circular orbit. For a given position\nin a search-region, there will be at most two possible orbits to match against (determined from the intersection of \\eqref{eq:vis_viva}, \\eqref{eq:parallel_planes}, and \\eqref{eq:doppler_plane}). This type of search approach, attempting uncued detection of the most common types of orbit when they are most detectable, is a far more realisable and practical approach than a completely unbounded search through measurement parameters. Additionally, for an eccentric orbit, the orbital velocity and position are perpendicular at perigee \\cite{Vallado2001fundamentals}. For typical radar detection ranges, an object in a highly eccentric orbit is likely to be within a radar's field of regard solely at, or near, perigee. Because of this, the same simplification of $\\boldsymbol{r}\\cdot\\boldsymbol{\\dot{r}} = 0$ could be used to reduce the number of potential orbits.\n\\vspace{-1.1ex}\n\\subsection{Single Channel Orbit Detection}\n\\label{ssec:odbd_single_channel}\nCoupling together measurement parameters is not necessarily new; however, incorporating such techniques into the detection stage offers some significant advantages. By coupling together the measurement parameters using these ODBD methods, it is possible to apply this matched filtering to single beam radar systems. This could be a post-beamformed surveillance signal from an array or even a classic narrowbeam tracking radar. Because the trajectory model determines all measurement parameters, a particular polynomial phase signal which results in a detection is coupled to a particular location and orbit. This is shown in \n\\eqref{eq:full_OD_right_here}. The beamforming parameters do not determine the location; rather the (hypothesised) location determines the beamforming parameters. 
Removing the array processing, as in \\eqref{eq:single_channel}, does not remove the ability to localise a target using the algorithm.\n\n\\vspace{-3ex}\n\\begin{IEEEeqnarray}{rCL}\n \\label{eq:single_channel}\n \\chi(\\boldsymbol{r},\\boldsymbol{\\dot{r}}) & = \\hspace{-1.5ex} \\int\\limits_{-\\frac{T}{2}}^{\\frac{T}{2}} & \\hspace{-1ex} s(t){d}^*(t-2c^{-1}\\rho(\\boldsymbol{r},\\boldsymbol{\\dot{r}},t)){\\mathrm{e}}^{-j\\frac{2\\pi}{\\lambda}\\dot{\\rho}(\\boldsymbol{r},\\boldsymbol{\\dot{r}},t)t}\\,dt~~~~\n\\end{IEEEeqnarray}\n\nIn the case of a narrow beam radar, the pointing of the beam will be incorporated into the algorithm by determining the search region that is used. Because it handles sensor motion, this type of processing would be ideal for a satellite-based sensor, with the sensor location term $\\boldsymbol{q}(t)$ (or its instantaneous components $\\boldsymbol{q}, \\boldsymbol{\\dot{q}, \\boldsymbol{\\ddot{q}}}$, etc.) themselves determined by a known orbit rather than the motion of the Earth.\n\n\\vspace{-0.2ex}\n\\section{Simulated Results} \\label{sec:results}\nThese methods have been verified by comparing ODBD-derived measurement parameters, described in section \\ref{sec:odbd}, of an object in orbit, against measurement parameters propagated from available ephemerides. These ephemeris tracks consist of the six Keplerian orbital elements, as well as several additional parameters describing drag and orbital decay. These tracks are propagated with the standard SGP-4 propagator used by the USSPACECOM two-line element sets \\cite{USSTRATCOM}.\n\nThe configuration used for these simulations, matching \\cite{8835821}, is a sensor located at the MWA (at a latitude of 27\\degree~south) in a bistatic configuration with a transmitter in Perth, approximately 600 km further south. This transmitter is taken to be transmitting an FM radio signal at a centre frequency of 100 MHz.\n\nFigure \\ref{fig:standard_circular} shows the path of an object in a near circular orbit at closest approach. The simulated measurement parameters match very well in both angular and delay-Doppler space despite being based on a perfectly circular orbit. Likewise, Figure \\ref{fig:standard-eccentric} also matches with the prediction, noting that the simulation used the matching eccentricity and semi-major axis.\n\nFigure \\ref{fig:eccentricity-mismatch} shows the path of an object in a near circular orbit, but slightly more eccentric than Figure \\ref{fig:standard_circular} ($e=0.00126$) at point of closest approach. The simulated circular path matches well in the delay-Doppler space but diverges in the angular space. Additionally, several other simulated close eccentricities are shown, resulting in changes to the direction of travel but little difference in the delay-Doppler space. The delay-Doppler results suggest good tolerance to small eccentricity changes, however the sensor's angular resolution may limit potential processing intervals.\n\n\\begin{figure}[ht!]\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{images\/FIGURE1.png}\n\\end{center}\n\\vspace{-2ex}\n\\caption{The measurement parameters of a close pass of an object in a near-circular orbit ($e=0.0007$), as well as the simulation made assuming zero eccentricity at point of closest approach. The left plot is angular space and the right is the delay-Doppler. 
Twenty seconds of the true pass is shown with ten seconds of the simulated path overlaid.}\n\\label{fig:standard_circular}\n\\end{figure}\n\\vspace{-2ex}\n\\begin{figure}[ht!]\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{images\/FIGURE2.png}\n\\end{center}\n\\vspace{-2ex}\n\\caption{The measurement parameters of a close pass of an object in an eccentric orbit ($e=0.7$), as well as the four simulations made with the correct eccentricity and semi-major axis. The left plot is angular space and the right is the delay-Doppler. Twenty seconds of the true pass is shown with ten seconds of the simulated paths overlaid.}\n\\label{fig:standard-eccentric}\n\\end{figure}\n\\vspace{-2ex}\n\\begin{figure}[ht!]\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{images\/FIGURE3.png}\n\\end{center}\n\\vspace{-2ex}\n\\caption{The measurement parameters of a close pass of an object in a near-circular orbit ($e=0.00126$), as well as several simulations made using different eccentricities. The left plot is angular space and the right is the delay-Doppler. Twenty seconds of the true pass is shown with ten seconds of the simulated paths overlaid.}\n\\label{fig:eccentricity-mismatch}\n\\end{figure}\n\nThe good agreement between the parameters derived from the methods described in this paper and the ephemeris-derived parameters suggests that the earlier results of \\cite{8835821} can be practically achieved without requiring a priori information.\n\n\\section{Conclusion} \\label{sec:conclusion}\nModern radars are able to form matched-filter products with significant numbers of measurement parameters, especially with digital beamforming and extended processing intervals. Conversely, the motion of an object in a Keplerian orbit is defined by only six parameters. Mapping radar measurement parameters from orbital motion parameters constrains the search space for uncued detection; it additionally allows for other constraints to be applied to further reduce the search-space, most notably when searching for objects in a circular orbit at their point of closest approach to the sensor. For a hypothesised orbit of this type, all range, Doppler, and angular motion parameters can be derived entirely from a three-dimensional position.\nDetections from this matched filter will correspond to the hypothesised orbit. This means that initial orbit determination can potentially be achieved from a single radar detection.\n\nIn future work, these algorithms will be experimentally validated with MWA observations. Although these methods have been developed for the MWA, they also apply to conventional active space surveillance radar and even to satellite-based sensors. Additionally, it is planned to investigate the sensitivity of these techniques, characterising their accuracy by calculating the Cram\\'er-Rao lower bound (CRLB) on the variance of the initial orbital estimates.\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
However, most of the systems treated so far are\nconfined to those with a few degrees of freedom, and little attention is\npaid to the dynamics of quantum many-body systems\\cite{Jona,Prosen,Flambaum}\nwhose adiabatic energy levels\nare characterized by\nGaussian orthogonal ensemble (GOE) spectral statistics, i.e., by a hallmark\nof quantum chaos.\nWhile some important\ncontributions~\\cite{Wilkinson88,WilAus92,WilAus95,Bulgac,Cohen00D,Cohen00M,Machida}\nare devoted to the dynamics of a class of many-body systems,\nthose systems are actually described by random-matrix models,\nand not by deterministic quantum Hamiltonians.\nIt is highly desirable to explore the dynamical behaviors of deterministic\nquantum many-body systems\nexhibiting GOE or Gaussian unitary ensemble (GUE) spectral statistics.\n \n On the other hand, frustrated quantum spin systems have been receiving\nwide attention, and we can find their realization in the $s=\\frac12$\nantiferromagnetic chains Cu(ampy)Br$_2$~\\cite{Kikuchi} and\n(N$_2$H$_5$)CuCl$_3$,~\\cite{Hagiwara} and in $s=\\frac12$ triangular\nantiferromagnets.~\\cite{KPL}\nThe high-lying states of these quantum many-body systems deserve\nto be studied\nin the context of \"quantum chaos.\" The advantage of the frustrated quantum\nsystems\nis that one can expect quantum chaotic behaviors\nto appear already in the low-energy region\nnear the ground state.~\\cite{Nakamura85,Yamasaki04}\nFrom the viewpoint of condensed-matter physics, novel features\nobserved\nin the low-energy region are particularly important and welcome.\nRecalling that in most deterministic Hamiltonian systems quantum chaotic\nbehaviors\nappear in high-lying states, the role of\nfrustration is essential in the study of\nquantum dynamics from the ground state of deterministic many-body systems with\nGOE or GUE level statistics.\n\n\nIn this paper, we investigate the dynamics of\n$XXZ$ quantum spin chains which have antiferromagnetic exchange\ninteractions for the\nnearest-neighbor (NN) and\nthe next-nearest-neighbor (NNN) couplings. The NNN couplings cause\nfrustration, i.e., difficulty in achieving the ground state,\nhence the name frustrated quantum spin chains for these systems.\nIn fact, the level statistics of the NNN coupled $XXZ$ spin chains\nwithout an applied magnetic field\nhas been studied intensively in Refs.~\\onlinecite{Kudo03,Kudo04}, and\nit has been shown that GOE behavior,\nwhich is typical of quantum chaos, appears already in the low-energy\nregion near the ground state.~\\cite{note,note1}\nThe ground-state phase diagram is shown in Ref.~\\onlinecite{Nomura} for\nthe NNN coupled $XXZ$ spin chains without a magnetic field.\n\nA natural extension of this research is to investigate the dynamics\nof the frustrated quantum spin chains\nwith an applied, periodically oscillating\nmagnetic field. We calculate the time evolution of the system\nstarting from its ground state and analyze\nthe nature of the energy diffusion. We shall numerically exhibit\nthe time dependence of the energy variance,\nand show how\nthe diffusion coefficients depend on the coupling constants, the anisotropy\nparameter, the magnetic field, and the frequency of the\nfield. 
Furthermore, to examine the effect of weakened frustration, we also\ninvestigate the corresponding energy diffusion in $XXZ$ spin chains with\nsmall NNN couplings.\n \nThe organization of the paper is as follows:\nIn Sec.~\\ref{sec:method}, we briefly describe a numerical approach to\nobtain the time evolution operator. In Sec.~\\ref{sec:variance}\nwe shall show the time dependence of the energy variance starting from\nthe ground state of the many-body system and explain a way to evaluate\ndiffusion coefficients. Section \\ref{sec:diffusion} elucidates\nhow the diffusion coefficients depend on field strength and driving\nfrequency. \nHere \npower laws are shown to exist in the linear response and non-perturbative\nregimes.\nSection \\ref{sec:compare} is devoted to the mechanism of the oscillation of energy\ndiffusion. \nConclusions are given in Sec.~\\ref{sec:conc}.\n\n\\section{\\label{sec:method} Numerical Procedure}\n\nWe give the Hamiltonian for the NN and NNN exchange-coupled\nspin chain on $L$ sites with a time-periodic oscillating magnetic field as\n\\begin{equation}\n\\mathcal{H}(t)=\\mathcal{H}_0 +\\mathcal{H}_1(t),\n\\label{eq:H}\n\\end{equation}\nwhere\n\\begin{eqnarray}\n \\mathcal{H}_0 &=& J_1\\sum_{j=1}^{L}(S^x_j S^x_{j+1} +S^y_j S^y_{j+1}\n +\\Delta S^z_j S^z_{j+1})\n\\nonumber \\\\\n &+& J_2\\sum_{j=1}^{L}(S^x_j S^x_{j+2} +S^y_j S^y_{j+2}\n +\\Delta S^z_j S^z_{j+2}) \\nonumber \\\\\n&-& \\sum_{j=1}^{L}B^z_j(0)S^z_j,\n\\end{eqnarray}\n\\begin{equation}\n \\mathcal{H}_1(t)= \\sum_{j=1}^{L}B^z_j(0)S^z_j -\\sum_{j=1}^{L}B^z_j(t)S^z_j.\n\\end{equation}\nHere, $S_j^{\\alpha}=(1\/2)\\sigma_j^{\\alpha}$ and \n$(\\sigma^x_j, \\sigma^y_j, \\sigma^z_j)$ are the Pauli matrices on the\n$j$th site;\nthe periodic boundary conditions (P.~B.~C.) are imposed.\nThe magnetic field $B^z_j$ on the $j$th site along the $z$ axis is chosen to\nform a traveling wave:\n\\begin{equation}\n B^z_j(t)=B_0\\sin\\left( \\omega t-\\frac{2\\pi j}L\\right).\n\\label{eq:jiba}\n\\end{equation}\nThe period of Eq.~(\\ref{eq:H}) as well as Eq.~(\\ref{eq:jiba}) is\n $T=2\\pi\/\\omega$. Because of the coexisting spatial P.~B.~C., however,\n the effective period of the adiabatic energy spectra is\n given by $T'=T\/L=2\\pi\/(\\omega L)$. In other words, the period of the\nHamiltonian operator is $T$, and the spectral flow of the\n eigenvalues has the effective period $T'$.\nThis periodicity property comes from the\n traveling-wave form of Eq.~(\\ref{eq:jiba}), and is advantageous for\n obtaining a sufficient number of relevant data points within each period $T$.\n\nWhen $J_1>0$ and $J_2>0$, \nthe unperturbed Hamiltonian $\\mathcal{H}_0$\nwithout coupling to the magnetic field is translationally invariant and \ncorresponds to a\nfrustrated antiferromagnetic quantum spin model\nexhibiting GOE level statistics.~\\cite{Kudo03,Kudo04} \nIf $J_2=0$ and $B_0=0$, it describes an\nintegrable and non-frustrated model. Before calculating the energy\ndiffusion, we have to consider the symmetries of the model. We divide\nthe Hamiltonian matrix into sectors which have the same quantum\nnumbers. In the Hamiltonian of Eq.~(\\ref{eq:H}), the total $S^z$ ($S^z_{\\rm tot}$) is\nconserved, as illustrated by the numerical sketch below. 
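The construction of $\\mathcal{H}_0$ and $\\mathcal{H}_1(t)$ and the conservation of $S^z_{\\rm tot}$ can be made concrete with a short numerical sketch. The following Python fragment is only an illustration under stated assumptions: it builds the full $2^L$-dimensional matrices for a small chain, and the helper names (\\texttt{site\\_op}, \\texttt{bond}, \\texttt{h\\_field}, etc.) are hypothetical rather than taken from the code actually used for the results reported here.
\\begin{verbatim}
# Minimal sketch (not the authors' code): NN/NNN XXZ chain with the
# traveling-wave Zeeman term defined above, periodic boundaries.
import numpy as np

L, J1, J2, Delta, B0, omega = 6, 1.0, 1.0, 0.3, 1.0, 0.5

sx = np.array([[0., 1.], [1., 0.]]) / 2
sy = np.array([[0., -1j], [1j, 0.]]) / 2
sz = np.array([[1., 0.], [0., -1.]]) / 2
id2 = np.eye(2)

def site_op(op, j):
    # Embed a single-site operator at site j (0-based) of the L-site chain.
    out = np.array([[1.0 + 0j]])
    for k in range(L):
        out = np.kron(out, op if k == j else id2)
    return out

def bond(op, j, d):
    # op_j op_{j+d}, with periodic boundary conditions.
    return site_op(op, j) @ site_op(op, (j + d) % L)

def bz(j, t):
    # Traveling-wave field; j+1 matches the 1-based site index of the text.
    return B0 * np.sin(omega * t - 2 * np.pi * (j + 1) / L)

def h_exchange(J, d):
    h = np.zeros((2**L, 2**L), dtype=complex)
    for j in range(L):
        h += J * (bond(sx, j, d) + bond(sy, j, d) + Delta * bond(sz, j, d))
    return h

def h_field(t):
    return -sum(bz(j, t) * site_op(sz, j) for j in range(L))

H0 = h_exchange(J1, 1) + h_exchange(J2, 2) + h_field(0.0)
H1 = lambda t: h_field(t) - h_field(0.0)

# Total S^z commutes with H(t) at any time, so S^z_tot is conserved.
Sz_tot = sum(site_op(sz, j) for j in range(L))
Ht = H0 + H1(0.37)
print(np.max(np.abs(Ht @ Sz_tot - Sz_tot @ Ht)))  # ~ 1e-15
\\end{verbatim}
In an actual calculation one would, of course, work within a fixed $S^z_{\\rm tot}$ sector and rescale the Hamiltonian as described below, rather than manipulate the full $2^L$-dimensional matrix.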
The eigenstates with different $S^z_{\\rm tot}$ are\nuncorrelated.\nOn the other hand, the non-uniform magnetic field\nbreaks the translational symmetry, and leads\nto mixing between manifolds of different wave-number values.\n\nBefore proceeding to consider the time evolution of a wave\nfunction, we should note: If we use the original\nHamiltonian\n$\\mathcal{H}(t)=\\mathcal{H}_0 +\\mathcal{H}_1(t)$ as it\nstands, the mean level spacing of the eigenvalues would change\ndepending\non $J_2$, $\\Delta$, and $B_0$.\nTo see a universal feature of the energy diffusion, it is\nessential to\nscale the Hamiltonian so that the full range of adiabatic\nenergy eigenvalues becomes almost free from these\nparameters.\nNoting that this energy range for the original Hamiltonian\nis \nof order of $L$ when $J_1=J_2=\\Delta=1$,\nwe define the scaled Hamiltonian $H(t)=H_0+H_1(t)$\nso that the full energy range equals $L$ at $t=0$; \nthis scaled Hamiltonian will be used throughout the text.\nThe Schr\\\"odinger equation is then given by\n\\begin{equation}\n i\\hbar \\frac{\\partial}{\\partial t}|\\psi (t)\\rangle\n =H(t)|\\psi (t)\\rangle\n =[H_0+H_1(t)] |\\psi (t)\\rangle.\n\\label{eq:Schrodinger}\n\\end{equation}\nThe solution of Eq.~(\\ref{eq:Schrodinger}) consists\nof a sequence of infinitesimal processes:\n\\begin{eqnarray}\n |\\psi (t)\\rangle &=& U(t;t-\\Delta t) U(t-\\Delta t;t-2\\Delta t)\n\\nonumber \\\\\n&\\cdots& U(2\\Delta t;\\Delta t)\n U(\\Delta t;0) |\\psi (0)\\rangle.\n\\end{eqnarray}\nThe initial state $|\\psi(0)\\rangle$ is taken to be the ground state,\nsince our concern lies in the dynamical behaviors starting from the\nmany-body ground state.\nTo calculate the time evolution operator $U(t+\\Delta t;t)$\nfor each short time step $\\Delta t$, we use the\nfourth-order decomposition formula for the exponential\noperator:~\\cite{Suzuki90}\n\\begin{eqnarray}\n U(t+\\Delta t;t)&=& S(-i p_5\\Delta t\/\\hbar,t_5)\nS(-i p_4\\Delta t\/\\hbar,t_4) \\nonumber\\\\\n&\\cdots& S(-i p_2\\Delta t\/\\hbar,t_2)\n S(-i p_1\\Delta t\/\\hbar,t_1),\n\\label{eq:U}\n\\end{eqnarray}\nwhere\n\\begin{equation}\n S(x,t)=\\exp\\left( \\frac{x H_1(t)}2 \\right) \\exp(x H_0)\n\\exp\\left( \\frac{x H_1(t)}2 \\right).\n\\end{equation}\nHere, the $t_j$'s and $p_j$'s are the following:\n\\begin{eqnarray}\n t_j &=& t+(p_1+p_2+\\cdots +p_{j-1}+p_j\/2)\\Delta t, \\nonumber\\\\\n p &=& p_1 =p_2 =p_4=p_5 \\nonumber\\\\\n &=& 0.4144907717943757\\cdots, \\nonumber\\\\\n p_3&=&1-4p.\n\\end{eqnarray}\nThe numerical procedure based on the above decomposition is quite effective\nwhen $H_1(t)$ and $H_0$\ndo not commute and each time step is very small. \nOur computation below is concerned mainly with\nthe system of $L=10$, whose $S^z_{\\rm tot}=1$ manifold involves 210\nlevels. To check the validity of our assertions, some of the results will be\ncompared to those for the system of $L=14$ and \n$S^z_{\\rm tot}=4$, whose manifold involves 364 levels. \n\n\n\\section{\\label{sec:variance} Time Dependence of Energy Variance}\n\n\\begin{figure}\n\\includegraphics[width=8cm]{fig1.eps}\n\\caption{\\label{fig:1} (Color online) \nTime evolution of energy diffusion for (a) $L=10$ and (b) $L=14$. 
The\n parameters are the following: $J_1=J_2=1.0$,\n $\\Delta=0.3$, $B_0=1.0$.}\n\\end{figure}\n\nWe calculate the time evolution of the state and evaluate\nthe energy variance at each integer multiple of\nthe effective period $T'=T\/L=2\\pi \/(\\omega L)$.\nAs mentioned already, we choose the ground state as the initial state,\nfollowing the spirit of condensed-matter physics.\nThis viewpoint is in contrast to that of\nthe random matrix models, where initial states\nare chosen among high-lying\nones.~\\cite{Wilkinson88,WilAus92,WilAus95,Bulgac,Cohen00D,Cohen00M} \nConsequently,\nthe energy variance of our primary concern is\nthe \\textit{variance around the ground state energy} $E_0$\nand is defined by\n\\begin{equation}\n \\delta E(t)^2= \\langle \\psi (t)|[H(t)-E_0]^2 |\\psi (t) \\rangle .\n\\label{eq:variance}\n\\end{equation}\nThe time evolution of $\\delta E(t)^2$ is shown in Fig.~\\ref{fig:1}. The\nparameters except for $\\omega$ are fixed. The larger $\\omega$ is, the faster\nthe energy diffusion grows, which is consistent with our expectations. The\ndetails will be explained in Sec.~\\ref{sec:diffusion}.\nFor a wide range of values of the next-nearest-neighbor (NNN) coupling $J_2$\nand the exchange anisotropy $\\Delta$, the early stage of the quantum dynamics\nshows normal diffusion in energy space, i.e., a linear growth of \n$\\delta E(t)^2$ in time.\nAs we follow this normal diffusion process further, \nthe energy variance will\nfinally saturate because the system size we consider is finite. On the\nother hand, the energy variance can also saturate for another reason,\ni.e., the dynamical localization effect associated with a periodic\nperturbation. \n\nDuring the first period, $\\delta E(t)^2$ shows a linear\ngrowth in time as shown in Fig.~\\ref{fig:1}(a). The range of the linear\ngrowth is not sufficiently wide because the number of levels is not\nlarge enough for\n$L=10$. However, if the number of levels as well as the system\nsize is increased, the linear region may be extended. \nIn fact, the linear growth of $\\delta E(t)^2$ during the first\nperiod can be recognized more clearly for $L=14$ than for $L=10$ \n[see Fig.~\\ref{fig:1}(b)].\nThe diffusion coefficient has to be determined much earlier than the \ntime at which saturation begins. \nWe determine the diffusion coefficient $D$ from the fitting\n\\begin{equation}\n \\delta E(t)^2 = Dt +\\mbox{\\rm const.}\n\\label{eq:defD}\n\\end{equation}\nto some data points around the largest slope\nin the first period, where the normal diffusion is expected\n(see the sketch below). \n\n\\section{\\label{sec:diffusion} Diffusion coefficients: dependence on field\nstrength and frequency}\n\nSince the time evolution of our system starts from the ground state,\nwe consider non-adiabatic regions where inter-level transitions\nfrequently occur. In other words, we set aside the near-adiabatic, or\nso-called Landau-Zener (LZ), region\nwhere the driving frequency $\\omega$ is much smaller than the mean level\nspacing divided by the Planck constant. \nBecause of the large energy gap between the ground and first\nexcited states,\nthe near-adiabatic region cannot result in notable energy diffusion and\nis left outside the scope of the present study.\n\nBeyond the LZ region, however, as long as the changing rate $\\dot{X}$ of a\nperturbation\nparameter is not very large,~\\cite{note2} the diffusion coefficient can be calculated\nusing the Kubo formula. We call such a parameter regime the ``linear\nresponse'' regime. 
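As a concrete illustration of the fitting procedure of Eq.~(\\ref{eq:defD}), the sketch below evaluates $\\delta E(t)^2$ stroboscopically and extracts $D$ by a linear fit. It is only a schematic fragment under stated assumptions: the scaled matrices \\texttt{H0} and \\texttt{H1(t)}, the parameters \\texttt{omega} and \\texttt{L}, the ground-state energy \\texttt{E0} with eigenvector \\texttt{psi0}, and a propagator \\texttt{evolve} (for instance, one implementing the fourth-order decomposition of Eq.~(\\ref{eq:U})) are assumed to be available from elsewhere, and all names are purely illustrative.
\\begin{verbatim}
import numpy as np

def energy_variance(psi, H_t, E0):
    # Variance around the fixed ground-state energy E0:
    # <psi| [H(t) - E0]^2 |psi>.
    dH = H_t - E0 * np.eye(H_t.shape[0])
    return np.real(psi.conj() @ (dH @ dH) @ psi)

def diffusion_coefficient(psi0, E0, n_periods, n_fit):
    # Sample delta E(t)^2 at integer multiples of the effective period T'.
    Tp = 2 * np.pi / (omega * L)
    times, var = [], []
    psi = psi0.copy()
    for n in range(1, n_periods + 1):
        psi = evolve(psi, (n - 1) * Tp, n * Tp)  # assumed propagator
        t = n * Tp
        times.append(t)
        var.append(energy_variance(psi, H0 + H1(t), E0))
    # Linear fit delta E(t)^2 = D t + const; in practice the fit window
    # is chosen around the largest slope within the first period.
    D, const = np.polyfit(times[:n_fit], var[:n_fit], 1)
    return D
\\end{verbatim}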
In the linear response regime, $D\\propto\\dot{X}^2$\n(see, e.g., Refs.~\\onlinecite{WilAus92} and \\onlinecite{WilAus95}).\nWhen $\\dot{X}$ is large, however, perturbation theory fails. We call\nsuch a parameter regime the ``non-perturbative'' regime. In the \nnon-perturbative regime, the diffusion coefficient is smaller than that\npredicted by the Kubo formula.~\\cite{WilAus95,Cohen00D} According to\nRef.~\\onlinecite{WilAus95}, $D \\propto \\dot{X}^{\\gamma}$ with\n$\\gamma \\le 1$ in the non-perturbative regime. We note that\n$\\dot{X}\\propto B_0\\omega$ in this paper since the perturbation is given by\nEq.~(\\ref{eq:jiba}). \nBoth Refs.~\\onlinecite{WilAus95} and \\onlinecite{Cohen00D} are based on\nrandom-matrix models, which are quite different from our\ndeterministic one.\n\n\\begin{figure}\n\\includegraphics[width=8cm]{fig2.eps}\n\\caption{\\label{fig:2} Driving frequency dependence of the diffusion\n coefficients. The chained line and the solid line\n are guides to the eye for $D\\propto \\omega^{\\beta}$ with $\\beta=1$ and\n $2$, respectively. \nThe symbols ($\\diamond$) are the average of the diffusion coefficients\n calculated for several values of $\\Delta$ ($0.3\\le \\Delta \\le 0.8$). The\n parameters are the following: $L=10$, $J_1=1.0$; (a) $J_2=1.0$, \n(b) $J_2=0.2$.}\n\\end{figure}\n\n\\begin{figure*}\n\\includegraphics[width=12cm]{fig3.eps}\n\\caption{\\label{fig:3} Dependence of the diffusion\n coefficients on the product of the field strength $B_0$ and the driving\n frequency $\\omega$ for (a) $L=10$ and (b) $L=14$. \nThe symbols ($\\diamond$) are the average of the diffusion coefficient\n calculated for several values of $\\Delta$ ($0.3\\le \\Delta \\le 0.8$). \nThe parameters are $J_1=J_2=1.0$; for\n the inset, $J_1=1.0$ and $J_2=0.2$. \nThe chained line and the solid line\n are guides to the eye for $D\\propto (B_0\\omega)^{\\beta}$ with $\\beta=1$\n and $2$, respectively. Some error bars are too small to be visible.}\n\\end{figure*}\n\nDiffusion coefficients as a function of $\\omega$\nare shown in Fig.~\\ref{fig:2}.\nThe numerical results are almost consistent with the argument of Ref.~\\onlinecite{WilAus95}.\nIn Fig.~\\ref{fig:2}(a), where $J_2=1.0$\n(i.e., the fully-frustrated case), $D$ increases with $B_0$\nfor a fixed value of $\\omega$. In the small-$\\omega$ regime,\n$D\\propto \\omega^{\\beta}$ with $\\beta=2$, though $\\beta >2$ for small $B_0$. \nThe latter is merely attributed to the fact that the\nperturbation is\ntoo small for sufficient energy diffusion to be observed \nwhen both $\\omega$ and $B_0$ are small. \nIn the large-$\\omega$ regime, $\\beta=1$.\nNamely, we observe that $\\beta=2$ in the linear response\nregime\nand $\\beta=1$ in the non-perturbative regime.\nIn fact, in the large-$\\omega$ regime, \nthe increase of the energy variance per effective\nperiod hardly depends on $\\omega$ up to the time when $\\delta E(t)^2$\n starts to decrease.\nThis explains the observation\nthat $D\\propto \\omega^{\\beta}$ with $\\beta=1$ in both\nFig.~\\ref{fig:2}(a) and Fig.~\\ref{fig:2}(b). \nLet us represent the increase of the energy variance per\neffective period\nas $\\Delta(\\delta E^2)$. From the definition of $D$,\ni.e., Eq.~(\\ref{eq:defD}), $D\\propto \\Delta(\\delta E^2)\/T'$.\nIf\n$\\Delta(\\delta E^2)$ is constant, $D\\propto \\omega$.\n\nOn the other hand, in Fig.~\\ref{fig:2}(b), where $J_2=0.2$\n(i.e., a weakly-frustrated\n case), the region with $\\beta=1$ is broader. 
\nFor small $B_0$, the behavior $\\beta>2$ in the small-$\\omega$ regime is\nthe same as in the\ncase of $J_2=1.0$.\nFor small $B_0$ and around $\\omega\\sim 1$,\n$D$ seems to decrease rather than increase, \nespecially in the case of $J_2=0.2$.\nSome kind of localization may have occurred \nin the very early stage of energy diffusion for large\n$\\omega$ and small $B_0$,\nleading to the suppression of $D$.\n\n\nIt is seen more clearly in\nFig.~\\ref{fig:3} how the behavior of $D$ changes between the linear\nresponse regime and the non-perturbative regime. \nThe diffusion coefficient $D$ obeys the power law\n$D\\propto (B_0 \\omega)^{\\beta}$ with its power $\\beta$ being two in the\nlinear response regime and $\\beta=1$ in the non-perturbative regime.\nFor small $B_0\\omega$, the power law seems to fail because of\nfinite-size effects. \nThis universal feature is confirmed in systems of larger size.\nActually, $D$ obeys the power law \nbetter for $L=14$ [Fig.~\\ref{fig:3}(b)] than for $L=10$\n[Fig.~\\ref{fig:3}(a)]. In addition, the error bars are shorter for $L=14$\nthan for $L=10$.\nHere, we have used the data for\n$\\omega \\le 1$. We cannot expect meaningful results in the large-$\\omega$\nregime since, as mentioned above, the energy diffusion is not normal there.\n\nFigure~\\ref{fig:3} suggests that the strength of frustration should affect the\nrange of the linear response regime.\nThe linear response regime is narrower for $J_2=0.2$ than for $J_2=1.0$,\nwhile the non-perturbative regime is wider for $J_2=0.2$ than for\n$J_2=1.0$. In fact, when $J_2=0$ (i.e., the integrable case), \n$D\\propto (B_0 \\omega)^{\\beta}$ with $\\beta=1$ for almost all the data\nin the same range of $B_0\\omega$ as that of Fig.~\\ref{fig:3}. \n\n\\section{\\label{sec:compare} Oscillation of energy diffusion in \nweakly-frustrated cases}\n\nWe shall now proceed to investigate\nthe oscillations of diffusion which occur in the non-perturbative regime of\na weakly-frustrated case.\nFigure~\\ref{fig:4}(a) shows an example of oscillatory diffusion for\n$J_2=0.2$, which is compared with non-oscillatory\ndiffusion for $J_2=1.0$. \nThe two examples have the same set of parameters except for $J_2$.\nHowever, the cases of $J_2=1.0$ and $J_2=0.2$ are in the linear response\nregime and in the non-perturbative regime, respectively.\nThe variance for both cases\nshows normal diffusion at the very early stage of the time evolution.\nFor $J_2=1.0$, the energy variance seems to saturate after a normal\ndiffusion time. In contrast, the energy variance for $J_2=0.2$ shows\nlarge-amplitude oscillations.\nTo investigate this in more detail, we introduce another definition of the energy \nvariance:\n\\begin{equation}\n \\delta \\tilde{E}(t)^2 =\\langle \\psi (t)|[H(t)-\\langle\\psi (t) |\nH(t)|\\psi(t)\\rangle]^2\n |\\psi (t)\\rangle .\n\\end{equation}\nThis follows the standard definition of the variance and quantifies\nthe degree of energy diffusion around the \\textit{time-dependent\nexpectation value} of the energy.\nThe time evolution of $\\delta \\tilde{E}(t)^2$ \ncorresponding to that of $\\delta E(t)^2$ is shown\nin Fig.~\\ref{fig:4}(b); a minimal sketch of how the two variances are\nevaluated is given below. 
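The two variances can be evaluated side by side from the propagated state; the fragment below is a minimal sketch under the same assumptions as the earlier sketches (dense matrices, hypothetical names), with \\texttt{psi\\_t} denoting the numerically propagated state at time $t$ and \\texttt{H\\_t} the scaled Hamiltonian $H(t)$.
\\begin{verbatim}
import numpy as np

def both_variances(psi_t, H_t, E0):
    dim = H_t.shape[0]
    # Variance around the fixed ground-state energy E0 ...
    dH0 = H_t - E0 * np.eye(dim)
    var_gs = np.real(psi_t.conj() @ (dH0 @ dH0) @ psi_t)
    # ... and the standard variance around the time-dependent
    # expectation value <psi(t)| H(t) |psi(t)>.
    e_mean = np.real(psi_t.conj() @ H_t @ psi_t)
    dHm = H_t - e_mean * np.eye(dim)
    var_mean = np.real(psi_t.conj() @ (dHm @ dHm) @ psi_t)
    return var_gs, var_mean
\\end{verbatim}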
\nIn the fully-frustrated case ($J_2=1.0$), the profile of \n$\\delta\\tilde{E}(t)^2$ is similar to that of $\\delta E(t)^2$.\nThis observation indicates that the occupation probability has spread\nover all the levels after the normal diffusion of energy.\n\n\\begin{figure}\n\\includegraphics[width=8cm]{fig4.eps}\n\\caption{\\label{fig:4} Examples of the time evolution of the energy variances: \n(a) $\\delta E(t)^2$ and (b) $\\delta \\tilde{E}(t)^2$ (see text).\nSolid lines are for $J_2=1.0$; broken lines, $J_2=0.2$.\nThe parameters are\nthe following: $L=10$, $J_1=1.0$, $\\Delta=0.3$, $B_0=1.5$, $\\omega=0.5$.}\n\\end{figure}\n\nIn contrast, in the weakly-frustrated case ($J_2=0.2$) \nin Fig.~\\ref{fig:4}, $\\delta \\tilde{E}(t)^2$\nshows small-amplitude oscillations reflecting\nthe large-amplitude oscillations of $\\delta E(t)^2$.\nFor most of the time, $\\delta \\tilde{E}(t)^2$ for $J_2=0.2$ is smaller than that\nfor $J_2=1.0$. Furthermore, the minima of $\\delta \\tilde{E}(t)^2$ come just\nbefore the minima and maxima of $\\delta E(t)^2$. \nThese observations indicate the following: the occupation probability,\nwhich diffuses slowly while\nclustering around the expectation value of the energy, oscillates together with that\nexpectation value in energy space.\nTo make the picture of such behavior clearer, let us\nconsider the occupation probability described by\n\\begin{equation}\n P_t(E_n)=| \\langle \\phi_n | \\psi(t) \\rangle |^2,\n\\end{equation}\nwhere $|\\phi_n\\rangle$ is the $n$th excited eigenstate of $H_0$:\n\\begin{equation}\n H_0 |\\phi_n\\rangle =E_n |\\phi_n\\rangle .\n\\end{equation}\nWhen $t=0$, $P_t(E_n)$ is given by the Kronecker delta:\n$P_0(E_n)=\\delta_{E_n,E_0}$, where $E_0$ is the energy of the ground\nstate. As $t$ increases, $P_t(E_n)$ forms a wave packet in energy space\nand moves to\nhigher levels. When the wave packet reaches the highest levels, it\nis reflected like a soliton and moves back to lower levels. Such behavior is\nrepeated, although the wave packet of $P_t(E_n)$ broadens slowly.\nWe have actually watched this soliton-like behavior of $P_t(E_n)$ in\nthe form of an animation.\n\n\\begin{figure}\n\\includegraphics[width=8cm]{fig5.eps}\n\\caption{\\label{fig:5} Parts of the energy spectra as functions of the\n adiabatically fixed time $t$ with\n $0\\le t \\le T\/4$. The effective period is $\\omega T'=2\\pi\/10$. The\n parameters are the following: $L=10$, $J_1=1.0$, $\\Delta=0.3$, $B_0=0.8$;\n (a) $J_2=1.0$, (b) $J_2=0.2$.}\n\\end{figure}\n\nThe picture discussed above is also supported by the adiabatic energy\nspectra in Fig.~\\ref{fig:5}. \nFigures~\\ref{fig:5}(a) and \\ref{fig:5}(b) correspond to the fully- and\nweakly-frustrated cases, respectively.\nMany more sharp avoided crossings appear in\nFig.~\\ref{fig:5}(b) than in Fig.~\\ref{fig:5}(a). Some energy levels appear\nto cross, although in fact they only come very close and never cross.\nAt a sharp avoided crossing,\nthe Landau-Zener formula for two adjacent levels is applicable.\nThere, the nonadiabatic transition leads to a one-way transfer of\npopulation from a level to its partner, which fails to result in energy\ndiffusion. 
\nFor small $J_2$, therefore,\nsuccessive sharp avoided crossings can suppress the diffusion of energy.\n\nWe believe that the large-amplitude oscillations of $\\delta E(t)^2$ should\nbe one of the characteristic features of the non-perturbative regime in this\nfinite frustrated spin system.\nIn fact, similar oscillations of the energy variance are seen for large\n$\\omega$ and large $B_0$ even when $J_2=1.0$, though the energy variance\nrapidly converges after one or two periods. How long such oscillations\ncontinue should depend mainly on $J_2$.\n\n\\begin{figure}\n\\includegraphics[width=8cm]{fig6.eps}\n\\caption{\\label{fig:6} (Color online) Level-spacing distributions at $t=\\pi\/4$ \nfor the lowest 300 levels from\n the ground state (about 10\\% of all 3003 levels). The blue histogram is for\n $J_2=1.0$; red bars, $J_2=0.2$; solid curve, GOE spectral statistics.\nThe other\n parameters are the following: $L=14$, $S^z_{\\rm tot}=1$, $J_1=1.0$,\n $\\Delta=0.3$, $B_0=0.8$.\nThe inset is for all levels when $J_2=1.0$. The numerical methods used to\n obtain the level-spacing distributions are described in\n Refs.~\\onlinecite{Kudo03,Kudo04}. }\n\\end{figure}\n\nIt is a notable fact that, common to both $J_2=1.0$ and $J_2=0.2$, the\nlevel-spacing distributions in Fig.~\\ref{fig:6} show GOE behavior. This\nGOE behavior in the adiabatic energy spectra appears for an arbitrary\nfixed time except for special points\nsuch as $t=T=2\\pi\/\\omega$. This fact suggests that dynamics can reveal\nvarious generic features of quantum many-body systems\nwhich can never be explained by level\nstatistics. The level-spacing distributions in Fig.~\\ref{fig:6} convey\nanother crucial fact: they have been\ncalculated for the low-energy levels because our interest is in the low-energy\nregion around the ground state. We have confirmed that the\nlevel-spacing distribution for all energy levels, shown in the inset, is also described by\nGOE spectral statistics. It is typical of this frustrated spin system that GOE level\nstatistics is observed already in the low-energy region.~\\cite{Kudo04} \n\n\\section{\\label{sec:conc}Conclusions}\n\nWe have explored the energy diffusion from the ground state\nin frustrated quantum $XXZ$ spin chains under an applied oscillating\nmagnetic field.\nIn a wide parameter region of the next-nearest-neighbor (NNN) coupling $J_2$\nand the exchange anisotropy $\\Delta$,\nthe diffusion is normal in its early stage.\nThe diffusion coefficient $D$ obeys a power law\nwith respect to\nboth the field strength and the driving frequency, with the power being two in the\nlinear response regime and equal to unity in the\nnon-perturbative regime.\nIn the case of weakened frustration with small $J_2$ \nwe find an oscillation of the energy diffusion,\nwhich is attributed to a\nnon-diffusive and ballistic character of\nthe underlying motion in energy space.\nIn this way, the energy diffusion reveals generic features of the\nfrustrated quantum spin chains, which cannot be captured by the analysis\nof level statistics.\n\n\n\\begin{acknowledgments}\nThe authors would like to thank T. Deguchi.\n The present study was partially supported by the Hayashi Memorial\n Foundation for Female Natural Scientists.\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}