\section{Introduction. Dimension increase and splitting}\n\nFinite dimensional integrable systems have been a rarity for a\nvery long time; up to a few decades ago, they could be easily\nenumerated: harmonic oscillators, the Kepler system, a few\nspinning tops.\n\nApart from harmonic oscillators -- which give linear equations and are the prototype of integrability -- these are integrated by exploiting their symmetries and the\nassociated integrals of motion to reduce the problem to a lower\ndimensional one; thus, we have {\it symmetry reduction}.\n\nThe situation changed radically in the seventies, when entire\nclasses of new integrable systems appeared, integrable by the Lax\npair construction \cite{Lax}. Among these, a\nprominent role -- also due to priority -- is taken by the Calogero\nsystem \cite{Cal}. In this case, one has an arbitrary number $N$\nof points on the line, interacting via a certain pair potential;\nthus the system is described by a natural hamiltonian $H = (1\/2)\n\sum_i (\dot{x}_i )^2 + V (x)$. 
Calogero showed that there is an\ninvertible map from ${\bf R}^N$ to $GL(N)$, mapping the vector $x$ to a\nmatrix $L(x)$, such that the evolution of $x (t)$ results in an\nevolution of $L (t) := L [x(t)]$ governed by Lax equations; thus\none integrates $L(t)$ and obtains $x(t)$.\n\nAs pointed out by Kazhdan, Kostant and Sternberg \cite{KKS}, this\nalso initiated a new way of integrating systems (which they\nstudied geometrically): indeed, rather than trying to reduce the\ndimension of the system, Calogero considered a system of\n{\it higher dimension}\footnote{Another classical\ninstance of simplification by dimension increase is the trick to\nlinearize the matrix Riccati equation. See also \cite{CGM,ShW} in\nthis context.}. In this note we want to show that this approach is also fruitful beyond the realm of integrable hamiltonian systems.\n\nThe key observation made by Kazhdan, Kostant and Sternberg\n\cite{KKS}, see also \cite{Mar,MSSV}, in the context of\nHamiltonian systems is that given a differential equation $\Delta$\nin dimension $n$, in some cases it may be helpful to increase its\ndimension: this is in particular the case when it is possible to\ndescribe $\Delta$ as originating from either the symmetry\nreduction or the projection to an $n$-dimensional manifold of a\nsimpler (e.g. linear) equation $\Delta^*$ in dimension $n+m$.\n\nIt was remarked in \cite{con} that this approach also applies\nto non-hamiltonian systems, and in particular to systems in\nPoincar\'e-Dulac normal form \cite{Arn,CiGa,enf} (a remark to this effect was already contained in \cite{enf}). Essentially, a system\nin normal form with only a finite number of resonances can\nalways be mapped to a linear system, and is equivalent to the\nlatter on a certain invariant manifold. As observed there, such a\nprocedure is geometrically interesting, but equivalent\nto a classical technique already known to Dulac \cite{Dul}, and\nascribed by him to Horn and Lyapounov. 
In more general terms,\ni.e. outside the scope of normal forms theory, this corresponds to\nthe situation of a finite dimensional centralizer for a certain\nalgebra associated to the vector field \cite{cag}.\n\nIn the present note, we show how ``dimension increasing''\nnicely combines with a ``splitting'' (in a sense to be described\nbelow) approach \cite{split} in the case of an infinite\nnumber of resonances\footnote{For a different use of the splitting approach in normal forms type problems, see \cite{YP}.}; this case cannot be tackled by the Horn--Lyapounov--Dulac approach.\n\nRoughly speaking, the procedure presented in this note combines\nthe two approaches mentioned above, i.e. describing a nonlinear\nsystem as the projection of a linear one and symmetry reduction in the\nsense described in \cite{split}.\n\nA more abstract (and general) treatment is given elsewhere \cite{seb}; in the present note we adopt an approach and notation aimed at\napplications, and discuss a number of concrete ones. We\nfocus on systems in normal form (sect.2) and in particular on the\nembedding and splitting of systems with infinitely many resonances (sect.3). Some simple examples are discussed in sect.4, while sect.5 is devoted to (in general non-hamiltonian) nonlinearly perturbed oscillators, and sect.6 to bifurcation problems.\n\n\section{Equations in normal form; resonances}\n\nWe consider an ODE in $R^n$ with a fixed point in the origin,\nexpanded around this in a power series; we write this in the form\n$$ {{\rm d} x \over {\rm d} t } \ = \ f (x) \ = \ A x \, + \, \sum_{k=1}^\infty f_k (x) \eqno(1) $$\nwhere $f_k$ is polynomial with $f_k (a x) = a^{k+1} f_k (x)$, and we have singled out the\nlinear part $f_0 (x) = A x$. We also write $F$ for the nonlinear\npart of $f$, i.e. 
$f^i (x) = A^i_j x^j + F^i (x)$.\n\nAs is well known, the matrix $A$ can be uniquely decomposed into its\nsemisimple and nilpotent parts, which commute with each other and\nhence with $A$:\n$$ A = A_s + A_n \ ; \ [A_s , A_n ] = 0 . $$\nIn the following, we will use the vector fields ($\partial_i\n:= \partial \/ \partial x^i$):\n$$ X_A := (Ax)^i \partial_i \ , \ X_0 := (A_s x)^i \partial_i \ , \ X_F = F^i (x) \partial_i \ , \ X_f = f^i (x) \partial_i \ . $$\n\n\subsection{Resonances}\n\nLet us denote by $\{ \lambda_1 , ... , \lambda_n \}$ the eigenvalues of\n$A$; take a basis $\{ {\bf e}_1 , ... , {\bf e}_n \}$ in $R^n$ consisting\nof generalized eigenvectors of $A$, i.e. eigenvectors of $A_s$: $A_s {\bf e}_j = \lambda_j {\bf e}_j$. We will use $x$ coordinates in this basis, and the multiindex notation\n$$ x^\mu \ := \ x_1^{\mu_1} ... x_n^{\mu_n} \ . $$\n\nWe say that the vector monomial ${\bf v}_{\mu,\alpha} := x^\mu {\bf e}_\alpha$ is {\it\nresonant} with $A$ if\n$$ (\mu \cdot \lambda) \ := \sum_{i=1}^n \mu_i \lambda_i \ = \ \lambda_\alpha \ \\n{\rm with} \ \ \mu_i \ge 0 \ , \ |\mu| := \sum_{i=1}^n \mu_i \ge 1 \ . \eqno(2) $$\n\nThe relation $(\mu \cdot \lambda) = \lambda_\alpha$ is said to be a {\it\nresonance relation} related to the eigenvalue $\lambda_\alpha$, and the\ninteger $|\mu|$ is said to be the {\it order} of the resonance.\nIn our context it is useful to include order one resonances in the definition (albeit the {\em trivial} order one resonances given by $\lambda_\alpha = \lambda_\alpha$ are of little interest).\nNote that here one could as well consider $A_s$ rather than $A$.\n\nThe space of vectors resonant with (the semisimple part of) $A$ is\ndefined as the linear span of the vectors ${\bf v}_{\mu,\alpha}$ defined\nabove. \n\n\n\subsection{Normal forms}\n\nWe say (see e.g. 
\cite{Arn,CiGa,enf}) that (1) is in\nPoincar\'e-Dulac normal form\footnote{The reader should be warned that a different definition is also in use \cite{Elp}.} if its nonlinear part $F(x)$ is resonant with $A$. This implies that\n$$ \[ \, X_0 \, , \, X_F \, \] \ = \ 0 \ . \eqno(3) $$\n\nIt should be mentioned that the presence of a nilpotent part $A_n$\nin $A$ introduces some subtleties. If (3) holds, then both $X_A$ and $X_F$ commute with $X_0$, and therefore\n$$ \[ \, X_0 \, , \, X_f \, \] \ = \ 0 \ , \eqno(4) $$\ni.e. the system has a symmetry described by a semisimple matrix.\n\nAs is well known, starting from any dynamical system (or vector\nfield) of the form (1), we can arrive at a dynamical system (or\nvector field) in Poincar\'e-Dulac normal form by means of a\nsequence (in general, infinite) of near-identity transformations \nobtained by means of the Poincar\'e algorithm; these combine into\na near-identity transformation $H$ defined by a series which is in\ngeneral only formal. \n\n\medskip\noindent\n{\bf Remark 1.} We may reformulate the definition of systems in\nnormal form by saying that $f$ is in normal form if and only if\n$X_F$ is in the centralizer of $X_0$. $\odot$\n\n\medskip\noindent\n{\bf Remark 2.} If the system is required to have some symmetry,\nsay $[X_f , X_g ] = 0$ with (in an obvious notation) $g^i (x) =\nB^i_j x^j + G^i$, then $[X_f , X_B ] = 0$ as well, i.e. $X_F$ is\nin the centralizers of both $X_0$ and $X_B$. More generally, if\n$X_f$ is an element of some Lie algebra ${\cal G}$, then under suitable conditions, see \cite{CiGa}, it can be put in joint normal form, i.e. (again with obvious notations) $[X_f , X_{B_i} ] = 0$. $\odot$\n\n\subsection{Sporadic resonances and invariance relations}\n\nLet us consider again the resonance equation (2). 
It is clear that\nif there are non-negative integers $\sigma_i$ (some of them nonzero) such that \n$$ \sum_{i=1}^n \, \sigma_i \lambda_i \ = \ 0 \ , \eqno(5) $$\nthen we always have infinitely many resonances. The monomial $\phi\n= x^\sigma$ will be called a {\it resonant scalar monomial}. It is an invariant of $X_0$, and any multiindex $\mu$\nwith $\mu_i = k \sigma_i + \delta_{i \alpha}$ provides a resonance relation\n$(\mu \cdot \lambda ) = \lambda_\alpha$ related to the eigenvalue $\lambda_\alpha$; in\nother words, any monomial $x^{k \sigma} x^\alpha = \phi^k x^\alpha$ is resonant,\nand so is any vector ${\bf v}_{k \sigma + e_\alpha , \alpha}$.\n\nTherefore, we say that (5) identifies an {\it invariance relation}.\nThe presence of invariance relations is the only way to have\ninfinitely many resonances in a finite dimensional system (see \cite{enf}).\n\nAny nontrivial resonance (2) such that there is no $\sigma$ with $\sigma_i \le \mu_i$ (for all $i=1,...,n$) providing an invariance relation, is said to be a {\it sporadic resonance}. Sporadic resonances are always in finite number (if any) in a finite dimensional system \cite{enf}.\n\nAny invariance relation (5) such that there is no $\nu$ with\n$\nu_i \le \sigma_i$ (and of course $\nu \not= \sigma$) providing another\ninvariance relation, is said to be an {\it elementary invariance\nrelation}. Every invariance relation is a linear combination (with nonnegative integer coefficients) of elementary ones. 
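For instance (a toy illustration, using only the definitions above): for $\lambda = (1,2)$ the only nontrivial resonance is $2 \lambda_1 = \lambda_2$, a sporadic resonance with resonant vector $x_1^2 {\bf e}_2$, and there is no invariance relation; for $\lambda = (1,-1)$ one has the elementary invariance relation $\lambda_1 + \lambda_2 = 0$, with invariant monomial $\phi = x_1 x_2$, and every invariance relation $k \lambda_1 + k \lambda_2 = 0$ is a multiple of it, so that all the monomials $\phi^k x_\alpha$ are resonant. 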
Elementary invariance relations are always in finite number (if any) in a finite dimensional system \cite{enf}.\n\n\n\n\n\section{Embedding systems with invariance relations in quasi-linear systems}\n\nIn this section we will discuss how the procedure described in\n\cite{con} generalizes, in connection with a ``splitting'' of the\nsystem described by the vector field $Y$ introduced below \cite{split}, in the presence of\ninvariance relations.\n\nWe preliminarily identify all sporadic resonances $(\mu\n\cdot \lambda ) = \lambda_\alpha$ and elementary invariance relations $(\sigma\n\cdot \lambda ) = 0$. We associate resonant monomials $x^\mu$ and\nresonant vectors ${\bf v}_{\mu,\alpha}$ to the former ones, and invariant\nmonomials $x^\sigma$ to the latter ones.\n\nWe then introduce two sets of new coordinates: these will be the\ncoordinates $w^1 , ... , w^r$ in correspondence with sporadic\nresonances (as in \cite{con}), and other new coordinates $\phi^1,\n... , \phi^m$ in correspondence with elementary invariance\nrelations.\n\nWe must also assign evolution equations for the $w$ and $\phi$ coordinates; these will be given in agreement with (1) itself. That is, the equations for the $w$ will be\n$$ {{\rm d} w^j \over {\rm d} t} \ = \ {\partial w^j \over \partial x^i} \, {{\rm d} x^i \over {\rm d} t} \ := \ h^j (x,w,\phi) \ ; \eqno(6) $$\nand as for the $\phi$'s we assign\n$$ {{\rm d} \phi^a \over {\rm d} t} \ = \ {\partial \phi^a \over \partial x^i} \, {{\rm d} x^i \over {\rm d} t} \ := \ z^a (x,w,\phi) \ . \eqno(7) $$\n\nWe will thus consider the enlarged space $W = (x,w,\phi) =\n{\bf R}^{n+r+m}$, and in this the vector field\n$$ Y \ = \ f^i (x,w,\phi) \, {\partial \over \partial x^i} \ + \ h^j (x,w,\phi) \, {\partial \over \partial w^j} \ + \ z^a (x,w,\phi) \, {\partial \over \partial \phi^a} \ . 
\eqno(8) $$\n\nNote that some ambiguity is present here, in that we can write the\ncoefficients of this vector field in different ways as a function\nof the $x,w,\phi$. Indeed, the vector field $Y$ is uniquely\ndefined only on the manifold identified by $\psi^i := w^i -\nx^{\mu^{(i)}} = 0$, $\phi^a - x^{\sigma^{(a)}} = 0$.\n\n\medskip\noindent\n{\bf Lemma 1.} {\it The $(n+m)$-dimensional manifold $M \subset W$ identified by $\psi^i := w^i - x^{\mu^{(i)}} = 0$ is invariant under the flow of $Y$.}\n\n\medskip\noindent\n{\bf Proof.} Obvious by construction. $\triangle$\n\n\medskip\noindent\n{\bf Lemma 2.} {\it The functions $z^a$ defined in (7) can be written in terms of the $\phi$ variables alone, i.e. $\partial z^a \/ \partial x^i = \partial z^a \/ \partial w^j = 0$.}\n\n\medskip\noindent\n{\bf Proof.} Every analytic invariant of $X_0$ can be represented as a convergent series in the $\phi^a$ (see \cite{enf}); the $z^a$, describing the evolution of the invariants $\phi^a$, are also invariant under the $X_0$ action, hence can be written in terms of invariants, hence of the $\phi^a$ themselves. $\triangle$\n\n\medskip\noindent\n{\bf Corollary 1.} {\it The evolution of the $\phi$ variables is described by a (nonlinear) equation in the $\phi$ variables only.}\n\n\medskip\noindent\n{\bf Proof.} This is merely a restatement of Lemma 2 above. Note that the equations for $x$ and $w$ depend on $\phi$, and thus become nonautonomous once a solution $\phi (t)$ is substituted. $\triangle$\n\n\n\medskip\noindent\n{\bf Proposition 3.} {\it The analytic functions $f^i$ and $h^j$\ndefined above can be written as linear in the $x$ and $w$\nvariables, the coefficients being functions of the $\phi$ variables.}\n\n\medskip\noindent\n{\bf Proof.} Recall that each $w^j$ is a monomial $w^j =\nx_1^{p_1} ... x_n^{p_n}$ with $(p , \lambda) = \lambda_s$ for some\n$s=1,...,n$; with reference to this integer $s$, we add a label to\n$w_j$, i.e. write $w_j^{(s)}$. 
\nBy construction and by the results above, each $f^m$ can be written in \nthe form $f^m = a^m (\phi) \cdot x_m + \sum_k c_k^m (\phi) w_k^{(m)}$, with analytic $a^m$ and $c_k^m$. So the assertion for the $f^m$ is obvious.\n\nThe time derivative of $w_j^{(s)}$ under the flow of (1) will be ${\dot w}_j^{(s)} = (\partial w_j^{(s)} \/ \partial x^m ) f^m$. \nTherefore it is sufficient to show that $(\partial w_j^{(s)}\/\partial x^m ) w_k^{(m)}$ is zero or a multiple of $x^s$ or of some $w_\ell^{(s)}$ with a suitable resonant scalar monomial as a factor. But the above operation simply replaces one factor $x^m$ by $w_k^{(m)}$, and the resulting linear combination of the eigenvalues still yields a resonance relation (2) with $\lambda_s$ on the right hand side. See \cite{seb} for a different approach to the proof. $\triangle$\n\n\n\medskip\noindent\n{\bf Corollary 2.} {\it The evolution of the $x$ and $w$ variables is described by nonautonomous linear equations, obtained by inserting the solution $\phi = \phi (t)$ of the equations for $\phi$ in the general equations ${\dot x} = f (x,w,\phi)$, ${\dot w} = h (x,w,\phi)$.}\n\medskip\n\n\medskip\noindent\n{\bf Proof.} Obvious. $\triangle$\n\medskip\n\n\medskip\noindent\n{\bf Remark 3.} We will also say that the vector field $Y$ is quasi-linear, meaning by this that it is linear in the $x$ and $w$ variables. In this way we recover -- as a special case -- the situation discussed in \cite{Lie} as well as the terminology used there. $\odot$\n\n\medskip\noindent\n{\bf Remark 4.} The results obtained here extend and unify those given in \cite{split,con}; see also \cite{enf}. As the $\phi$ identify group orbits for the group $G$ generated by the Lie algebra, we interpret $\dot\phi = z (\phi)$ as an equation in orbit space, and the equation for $(x,w)$ as an equation on the Lie group $G$. Methods for the solution of the latter are discussed in \cite{WN}, see also \cite{CGM}. 
$\\odot$\n\n\\medskip\\noindent\n{\\bf Remark 5.} If no invariance relations are present, hence no\n$\\phi$ variables are introduced, then the system describing the\ntime evolution of the $x,w$ variables is linear; this is the\nsituation studied in \\cite{con}. Note that in this case we have\nexactly the interpretation of normal forms as projection of a\nlinear system to an invariant manifold, without symmetry\nreduction. $\\odot$\n\n\\medskip\\noindent\n{\\bf Remark 6.} If there are no sporadic resonances of order greater than one then Proposition 3 yields a linear system for the $f^i$, with functions of the $\\phi$ variables as coefficients. Therefore, upon solving the reduced equation for the $\\phi$ variables one obtains a non-autonomous linear system. Moreover, if all eigenvalues are distinct then we have a product system of one-dimensional equations. $\\odot$\n\n\n\\medskip\\noindent\n{\\bf Remark 7.} Finally, we note that if $\\phi (t)$ converges to some $\\phi_0$ (this is always the case if the $\\phi$ space is one-dimensional and $|\\phi (t)|$ does not escape to infinity), the asymptotic evolution of the system is governed by a linear autonomous equation for $x$ and $w$ (see \\cite{Thi} for the behavior of \nasymptotically autonomous equations). Similarly, if there is a periodic solution $\\bar\\phi (t)$ with $\\phi (t) \\to \\bar\\phi (t)$, the asymptotic evolution of the system is governed by a linear equation with periodic coefficients for $x$ and $w$. $\\odot$\n\n\\section{Examples}\n\n\n{\\bf Example 1.} (See \\cite{con}). For $A = {\\rm diag} (1,k)$, $k \\in {\\bf N}$, the only resonant vector is ${\\bf v} = x^k {\\bf e}_2$, corresponding to a sporadic resonance, and there is no invariance relation. Systems in normal form correspond to\n$$ X = x \\partial_x + (k y + c x^k) \\partial_y $$\nwith $c$ a real constant. 
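Indeed, by (2) one has $(\mu \cdot \lambda) = \mu_1 + k \mu_2$; since $\mu_1 + k \mu_2 \ge |\mu|$, this can never equal $\lambda_1 = 1$ with $|\mu| \ge 2$, while it equals $\lambda_2 = k$ with $|\mu| \ge 2$ (for $k \ge 2$) only for $\mu = (k,0)$, i.e. for the vector $x^k {\bf e}_2$; moreover, no nontrivial nonnegative integer combination of $1$ and $k$ vanishes, so there is indeed no invariance relation. 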
According to our procedure, we define $w = x^k$, and obtain\n$$ Y \ = \ x \partial_x + (k y + c w) \partial_y + k w \partial_w \ ; $$\nthe invariant manifold $M$ is given by $\psi := w - x^k = 0$. The\nsolution to the system in $W$ for initial data $(x_0,y_0,w_0)$ is\n$ x(t) = x_0 e^t$, $y(t) = y_0 e^{kt} + (c w_0) t e^{kt}$,\n$w(t) = w_0 e^{kt}$; for initial data on $M$, i.e. $w_0 = x_0^k$,\nthe solution remains on $M$ and its projection to ${\bf R}^2 = (x,y)$\nis $ x(t)= x_0 e^t$, $y (t) = [ y_0 + (c x_0^k) t ] e^{kt} $.\n\n\n\medskip\noindent\n{\bf Example 2.} Consider the matrix\n$$ A \ = \ \pmatrix{ 0 & -1 & 0 \cr 1 & 0 & 0 \cr 0 & 0 & 1 \cr} $$\nwith eigenvalues $(-i,i,1)$. There is one elementary invariance\nrelation, $\lambda_1 + \lambda_2 = 0$, and no sporadic resonance. The linear centralizer is spanned by $A$ itself and by the matrices $ D_1 = {\rm diag} (1,1,0)$ and $D_2 = {\rm diag} (0,0,1)$. The ring of\ninvariants is generated by $r^2 := x^2 + y^2$. Systems in normal\nform are written as\n$$ \begin{array}{l}\n{\dot x} = \alpha (r^2) x - \beta (r^2) y \\\n{\dot y} = \beta (r^2) x + \alpha (r^2) y \\\n{\dot z} = \gamma (r^2) z \end{array} $$\nwhere $\alpha , \beta , \gamma $ are arbitrary power series.\n\nFollowing our procedure, we introduce one further variable $\phi = r^2$; for this we have $d \phi \/ d t = 2 (x {\dot x} + y {\dot y} ) = 2 r^2 \alpha (r^2 ) $. Hence the system in $W = {\bf R}^4$ is written as\n$$ \begin{array}{l}\n{\dot x} \ = \ \alpha (\phi) \, x \ - \ \beta (\phi) \, y \\\n{\dot y} \ = \ \beta (\phi) \, x \ + \ \alpha (\phi) \, y \\\n{\dot z} \ = \ \gamma (\phi) \, z \\\n{\dot \phi} = \ 2 \, \phi \, \alpha (\phi) \ . 
\end{array} $$\n\n\medskip\noindent\n{\bf Example 3.} Consider the matrix\n$$ A \ = \ \pmatrix{ 0 & -1 & 0 & 0 \cr 1 & 0 & 0 & 0 \cr 0 & 0 & 1 & 0 \cr 0 & 0 & 0 & k \cr} $$\n($k \in {\bf N}$, $k > 1$) with eigenvalues $(-i,+i,1,k)$. There\nis one sporadic resonance, $k \lambda_3 = \lambda_4$, and one elementary\ninvariance relation, $\lambda_1 + \lambda_2 = 0$. The linear centralizer\nis spanned by the matrices $D_1 = {\rm diag} (1,1,0,0)$, $D_2 = {\rm\ndiag} (0,0,1,0)$, $D_3 = {\rm diag} (0,0,0,1)$ together with\n$$ M_1 = \pmatrix{0&-1&0&0\cr 1&0&0&0\cr 0&0&0&0\cr 0&0&0&0\cr} \ . $$\nSystems in normal form are written as\n$$ \begin{array}{l}\n{\dot x}_1 = \alpha (r^2) x_1 - \beta (r^2) x_2 \\\n{\dot x}_2 = \beta (r^2) x_1 + \alpha (r^2) x_2 \\\n{\dot x}_3 = \gamma (r^2) x_3 \\\n{\dot x}_4 = \eta (r^2) x_4 + \theta (r^2) x_3^k \end{array} $$\nwhere $r^2 := x_1^2 + x_2^2$ and $\alpha , \beta , \gamma , \eta , \theta $ are arbitrary power series.\n\nFollowing our procedure we introduce $\phi = r^2$, for which $d \phi \/ d t = 2 (x_1 {\dot x}_1 + x_2 {\dot x}_2 ) = 2 r^2 \alpha (r^2 ) $, and $w = x_3^k$ for which $d w \/ d t = k x_3^{k-1} {\dot x}_3 = k \gamma (r^2) x_3^k$. Hence the system in $W = {\bf R}^5$ is written as\n$$ \begin{array}{l}\n{\dot x}_1 = \alpha (\phi) x_1 - \beta (\phi) x_2 \\\n{\dot x}_2 = \beta (\phi) x_1 + \alpha (\phi) x_2 \\\n{\dot x}_3 = \gamma (\phi) x_3 \\\n{\dot w} = k \gamma (\phi) w \\\n{\dot \phi} = 2 \phi \alpha (\phi) \ . \end{array} $$\n\n\section{Perturbed oscillators}\n\n{\bf Example 4. 
Perturbation of oscillators in 1:1 resonance.}\nConsider the matrix\n$$ A \ = \ \pmatrix{ 0 & -1 & 0 & 0 \cr 1 & 0 & 0 & 0 \cr 0 & 0 & 0 & -1 \cr 0 & 0 & 1 & 0 \cr} \ , $$\nwhich we also write in block form as\n$$ A \ = \ \pmatrix{ J & 0 \cr 0 & J \cr} \ , \ {\rm where} \ J \ = \ \pmatrix{0&-1\cr 1&0\cr} \ , $$\nwith eigenvalues $(-i,i,-i,i)$. There are no sporadic resonances of order greater than one, and four elementary invariance relations:\n$$ \lambda_1 + \lambda_2 = 0 \ , \ \lambda_3 + \lambda_4 = 0 \ , \\n\lambda_1 + \lambda_4 = 0 \ , \ \lambda_2 + \lambda_3 = 0 \ ; $$\nall other resonances can be described in terms of these. \nWe stress that the equations describing these invariance relations are\nlinearly dependent; however they should be considered, according\nto our definition, as different elementary ones. Corresponding to\nthis, the associated invariant quantities will not be functionally\nindependent (obviously, we cannot have more than three independent\ninvariants for a flow in ${\bf R}^4$).\n\nThe linear centralizer of $A$ is an eight-dimensional algebra, spanned by the following matrices (in two by two block notation):\n$$\begin{array}{l}\nB_1 = \pmatrix{I&0\cr 0&0\cr} \ , \\nB_2 = \pmatrix{0&0\cr 0&I\cr} \ , \\nB_3 = \pmatrix{0&I\cr I&0\cr} \ , \\nB_4 = \pmatrix{0&J\cr -J&0\cr} \ , \\\n~ \\\nS_1 = \pmatrix{J&0\cr 0&0\cr} \ , \\nS_2 = \pmatrix{0&0\cr 0&J\cr} \ , \\nS_3 = \pmatrix{0&I\cr -I&0\cr} \ , \\nS_4 = \pmatrix{0&J\cr J&0\cr} \ .\n\end{array} \eqno(9) $$\nNote that here we have chosen a basis with $B_i = B_i^+$, $S_i = - S_i^+$.\n\nThe linear system $\.\xi = A \xi$ describes two oscillators in 1:1 resonance; the normal form will correspond to a perturbation of these, generically breaking the exchange symmetry between the two oscillators.\nSystems in normal form are compactly written as\n$$\n\pmatrix{ {\dot x}\cr {\dot y}\cr {\dot z}\cr {\dot w}\cr} \ = 
\\\n\pmatrix{\n\alpha & - \beta & \gamma & - \eta \cr\n\beta & \alpha & \eta & \gamma \cr\n\mu & - \nu & \sigma & - \tau \cr\n\nu & \mu & \tau & \sigma \cr}\n\ \pmatrix{ x\cr y\cr z\cr w\cr} $$\nwhere $\alpha , \beta , ... , \tau$ are arbitrary power series in the elementary invariants\n$$ \phi_1 = x^2 + y^2 \ , \ \phi_2 = z^2 + w^2 \ , \ \phi_3 = xz + yw \ , \ \phi_4 = xw - yz \ ; $$\nnote that, up to constant factors, $\phi_a = (\xi , B_a \xi)$, with $(.,.)$ the scalar product.\nWe abbreviate the above evolution equation as\n$$ {\dot \xi} \ = \ K (\phi) \ \xi \ . $$\nThe evolution equations for the $\phi$, as required by our procedure, are then simply (up to the same factors; only the selfadjoint matrices defined in (9) appear)\n$$ {\dot \phi}_a = \( \xi , (B_a K + K^+ B_a ) \xi \) \ . $$\n\n\n\n\medskip\noindent\n{\bf Example 5. Perturbation of oscillators in $1:k$ resonance.}\nConsider the matrix\n$$ A \ = \ \pmatrix{ 0 & -1 & 0 & 0 \cr 1 & 0 & 0 & 0 \cr 0 & 0 & 0 & -k \cr 0 & 0 & k & 0 \cr} $$\n(with $k \in {\bf N}$, $k > 1$) with eigenvalues $(-i,+i,-i\nk,ik)$. This is put in diagonal form by passing to the\nvariables\n$$ \xi_1 = (x_1 - i x_2)\/2 \ , \ \xi_2 = (x_1 + i x_2)\/2 \ , \ \n\xi_3 = (x_3 - i x_4)\/2 \ , \ \xi_4 = (x_3 + i x_4)\/2 \ , $$\nwhich we use in intermediate computations below.\nWe also write $ \xi = \Lambda x$, $x = \Lambda^{-1} \xi$, with\n$$ \Lambda = {1 \over 2} \ \pmatrix{1&-i&0&0\cr 1&i&0&0\cr 0&0&1&-i\cr 0&0&1&i\cr} \ ; \ \Lambda^{-1} = \pmatrix{1&1&0&0\cr i&-i&0&0\cr 0&0&1&1\cr 0&0&i&-i\cr} \ . $$\n\nThere are four sporadic resonances:\n$$ k \lambda_1 = \lambda_3 \ , \ k \lambda_2 = \lambda_4 \ , \\n\lambda_3 + (k-1) \lambda_2 = \lambda_1 \ , \ \lambda_4 + (k-1) \lambda_1 = \lambda_2 \ . 
$$\nMoreover, there are four elementary invariance relations:\n$$ \lambda_1 + \lambda_2 = 0 \ , \ \lambda_3 + \lambda_4 = 0 \ , \\nk \lambda_1 + \lambda_4 = 0 \ , \ k \lambda_2 + \lambda_3 = 0 \ . $$\nAll other resonances can be described in terms of these. \n\nSystems in normal form are written as\n$$ \begin{array}{l}\n\.\xi_1 = \alpha_1 \xi_1 + \vartheta_1 \xi_2^{k-1} \xi_3 \\\n\.\xi_2 = \alpha_2 \xi_2 + \vartheta_2 \xi_1^{k-1} \xi_4 \\\n\.\xi_3 = \alpha_3 \xi_3 + \vartheta_3 \xi_1^k \\\n\.\xi_4 = \alpha_4 \xi_4 + \vartheta_4 \xi_2^k \ ,\n\end{array} \eqno(10) $$\nwhere $\alpha_i , \vartheta_i $ are arbitrary power series in the invariants of the linear flow.\n\nFollowing our procedure, we introduce variables\n$$ w_1 = \xi_1^k \ , \ w_2 = \xi_2^k \ , \\nw_3 = \xi_2^{k-1} \xi_3 \ , \ w_4 = \xi_1^{k-1} \xi_4 \ , $$\nrelated to sporadic resonances. We also introduce variables\nrelated to elementary invariance relations, given by\n$$ \phi_1 = \xi_1 \xi_2 \ , \ \phi_2 = \xi_3 \xi_4 \ , \\n\phi_3 = \xi_1^k \xi_4 \ , \ \phi_4 = \xi_2^k \xi_3 \ . $$\n\nWe must then introduce evolution equations for the $w$ and $\phi$ variables according to our procedure, i.e. 
according to (6) and (7) above.\nAs for the $w$, we get\n$$ \begin{array}{l}\n\.w_1 = k \alpha_1 w_1 + k \vartheta_1 \phi_1^{k-1} \xi_3 \\\n\.w_2 = k \alpha_2 w_2 + k \vartheta_2 \phi_1^{k-1} \xi_4 \\\n\.w_3 = [\alpha_3 + (k-1) \alpha_2 ] w_3 + [ (k-1) \vartheta_2 \phi_1^{k-2} \phi_2 + \vartheta_3 \phi_1^{k-1} ] \xi_1 \\\n\.w_4 = [\alpha_4 + (k-1) \alpha_1 ] w_4 + [ (k-1) \vartheta_1 \phi_1^{k-2} \phi_2 + \vartheta_4 \phi_1^{k-1} ] \xi_2 \ .\n\end{array}$$\n\nLet us now consider the equations for the $\phi$; we easily get\n$$ \begin{array}{l}\n\.\phi_1 = (\alpha_1 + \alpha_2 ) \phi_1 + \vartheta_2 \phi_3 + \vartheta_1 \phi_4 \\\n\.\phi_2 = (\alpha_3 + \alpha_4 ) \phi_2 + \vartheta_3 \phi_3 + \vartheta_4 \phi_4 \\\n\.\phi_3 = (k \alpha_1 + \alpha_4 ) \phi_3 + k \vartheta_1 \phi_1^{k-1} \phi_2 + \vartheta_4 \phi_1^k \\\n\.\phi_4 = (k \alpha_2 + \alpha_3 ) \phi_4 + k \vartheta_2 \phi_1^{k-1} \phi_2 + \vartheta_3 \phi_1^k \ .\n\end{array} \eqno(11) $$\n\nSummarizing, all systems of the form (10) -- i.e. 
in normal form with respect to the linear part $\.\xi = A \xi$ -- are written as the autonomous system (11) for the $\phi$ variables, plus a linear nonautonomous system, which, introducing the notation $\eta = (\xi ; w)$, can be written as\n$$ \.\eta \ = \ M \ \eta $$\nwhere $M = M (\phi)$ is a matrix which we write as $M = D + L$, where\n$$ D \ = \ {\rm diag} (\alpha_1 , \alpha_2 , \alpha_3 , \alpha_4 ; k \alpha_1 , k \alpha_2 , \alpha_3 + (k-1) \alpha_2 , \alpha_4 + (k-1) \alpha_1 ) $$\nand $L$ is an off-diagonal sparse matrix with nonzero terms\n$$ \begin{array}{c}\nL_{17} = \vartheta_1 \ , \ L_{28} = \vartheta_2 \ , \ L_{35} = \vartheta_3 \ , \ L_{46} = \vartheta_4 \ ; \ \nL_{53} = k \vartheta_1 \phi_1^{k-1} \ , \ L_{64} = k \vartheta_2 \phi_1^{k-1} \ , \\\nL_{71} = [ (k-1) \vartheta_2 \phi_1^{k-2} \phi_2 + \vartheta_3 \phi_1^{k-1} ] \ , \ \nL_{82} = [ (k-1) \vartheta_1 \phi_1^{k-2} \phi_2 + \vartheta_4 \phi_1^{k-1} ] \ .\n\end{array}\n$$\n\n\n\medskip\noindent\n{\bf Example 6. Perturbation of two oscillators with no resonance.} Consider the matrix \n$$ A \ = \ \pmatrix{ 0 & -1 & 0 & 0 \cr 1 & 0 & 0 & 0 \cr 0 & 0 & 0 & -\pi \cr 0 & 0 & \pi & 0 \cr} $$\nwith eigenvalues $(-i,+i,-\pi i ,\pi i)$.\n\nNow there are no sporadic resonances of order greater than one, and two elementary\ninvariance relations:\n$$ \lambda_1 + \lambda_2 = 0 \ , \ \lambda_3 + \lambda_4 = 0 \ . 
$$\nNote that now we have only two invariants, $r_1^2 = x_1^2 + x_2^2$ and $r_2^2 = x_3^2 + x_4^2$: this is easily understood as we have irrational flow on the two-torus ${\\bf T}^2 \\subset {\\bf R}^4$.\nThe centralizer of $A$ corresponds to matrices\n$$ M \\ = \\ \\pmatrix{ a & - b & 0 & 0 \\cr b & a & 0 & 0 \\cr 0&0& c & - d \\cr 0&0& d & c \\cr} $$\nThus systems in normal form will be written as\n$$ \\pmatrix{{\\dot x}_1 \\cr {\\dot x}_2 \\cr {\\dot x}_3 \\cr {\\dot x}_4 \\cr} \\ = \\ \\pmatrix{ \\alpha & - \\beta & 0 & 0 \\cr \\beta & \\alpha & 0 & 0 \\cr 0&0& \\gamma & - \\eta \\cr 0&0& \\eta & \\gamma \\cr} \\ \\pmatrix{x_1 \\cr x_2 \\cr x_3 \\cr x_4 \\cr} $$\nwith $\\alpha , \\beta , \\gamma , \\eta$ being power series in $r_1^2 , r_2^2$.\n\nFollowing our procedure we introduce variables $ \\phi_1 = x_1^2 + x_2^2$ and $\\phi_2 = x_3^2 + x_4^2 $, whose evolution is given by\n$$ {\\dot \\phi}_1 \\ = \\ 2 \\, \\alpha \\, \\phi_1 \\ , \\ {\\dot \\phi}_2 \\ = \\ 2 \\, \\gamma \\, \\phi_2 \\ . $$\n\n\\medskip\\noindent\n{\\bf Example 7. Perturbation of oscillators in 1:1:1 resonance.}\nConsider the six-dimensional matrix written in block form as\n$$ A \\ = \\ \\pmatrix{\\omega J & 0 & 0 \\cr 0 & \\omega J & 0 \\cr 0 & 0 & \\omega J \\cr} \\ \\ {\\rm where} \\ \\ J \\ = \\ \\pmatrix{0 & - 1 \\cr 1 & 0 \\cr} \\ . $$\nPassing to coordinates $(\\xi_j,\\eta_j) = (p_j+i q_j, p_j- i q_j)$,\nthis reads\n$$ \\^A \\ = \\ {\\rm diag} (i \\omega , - i \\omega , i \\omega , - i \\omega , i \\omega , - i \\omega ) \\ . 
$$\nThe eigenvalues are $\lambda_k \ = \ (-1)^{k+1} i \omega $, hence there is no sporadic resonance of order greater than one\footnote{Note these are present in the case of $1:k:\ell$ resonance (with integers $k,\ell > 1$).} and nine elementary invariance relations:\n$$ \begin{array}{lll}\n\lambda_1 + \lambda_2 = 0 & \lambda_3 + \lambda_4 = 0 & \lambda_5 + \lambda_6 = 0 \\\n\lambda_1 + \lambda_4 = 0 & \lambda_1 + \lambda_6 = 0 & \lambda_3 + \lambda_6 = 0 \\\n\lambda_2 + \lambda_3 = 0 & \lambda_2 + \lambda_5 = 0 & \lambda_4 + \lambda_5 = 0\n\end{array} $$ The invariants corresponding to these are of course\n$$ \begin{array}{lll}\n\^\phi_1 = \xi_1 \eta_1 , & \^\phi_2 = \xi_2 \eta_2 , & \^\phi_3 = \xi_3 \eta_3 , \\\n\^\phi_4 = \xi_1 \eta_2 , & \^\phi_5 = \xi_1 \eta_3 , & \^\phi_6 = \xi_2 \eta_3 , \\\n\^\phi_7 = \xi_2 \eta_1 , & \^\phi_8 = \xi_3 \eta_2 , & \^\phi_9 =\n\xi_3 \eta_1 .\n\end{array} $$\nGoing back to the original coordinates, expressions for these are\nrecovered from (no sum on repeated indices)\n$$ \xi_j \eta_j \ = \ (p_j^2 + q_j^2) \ \ ; \ \ \xi_j \eta_k \ = \ (p_j p_k + q_j q_k) + i (q_j p_k - p_j q_k) $$\nand it is thus more convenient to pass to a different basis for\ninvariant functions, i.e.\n$$ \begin{array}{l}\n\phi_1 := \^\phi_1 = (p_1^2 + q_1^2) \ \ , \ \ \phi_2 := \^\phi_2\n= (p_2^2 + q_2^2) \ \ , \ \\n\phi_3 := \^\phi_3 = (p_3^2 + q_3^2) \ ; \\\n\phi_4 := (\^\phi_4 + \^\phi_7)\/2 = (p_1 p_2 + q_1 q_2) \ \ , \ \\n\phi_5 := (\^\phi_4 - \^\phi_7)\/(2i) = (q_1 p_2 - p_1 q_2) \ ; \\\n\phi_6 := (\^\phi_5 + \^\phi_9)\/2 = (p_1 p_3 + q_1 q_3) \ \ , \ \\n\phi_7 := (\^\phi_5 - \^\phi_9)\/(2i) = (q_1 p_3 - p_1 q_3) \ ; \\\n\phi_8 := (\^\phi_6 + \^\phi_8)\/2 = (p_2 p_3 + q_2 q_3) \ \ , \ \\n\phi_9 := (\^\phi_6 - \^\phi_8)\/(2i) = (q_2 p_3 - p_2 q_3) \ .\n\end{array} $$\n\nThe centralizer $C (A)$ of $A$ in ${\rm Mat} 
(6,R)$ is an algebra\nspanned by eighteen matrices; these are written in $2 \\times 2$\nblock form as\n$$ \\pmatrix{C_{11} & C_{12} & C_{13} \\cr C_{21} & C_{22} & C_{23} \\cr\nC_{31} & C_{32} & C_{33} \\cr} $$ where the $C_{ij}$ are \ntwo-dimensional matrices written as ($\\alpha_{ij}$ and $\\beta_{ij}$ real\nconstants)\n$$ C_{ij} \\ = \\ \\alpha_{ij} \\, I \\ + \\ \\beta_{ij} \\, J \\ . $$\nIt is easy to extract from these a basis made of nine selfadjoint\nmatrices $B_k = B_k^+$ and nine antiselfadjoint ones $S_k = -\nS_k^+$. With ${\\bf x} = (p_1,q_1,p_2,q_2,p_3,q_3)$, and $(.,.)$ the\nstandard scalar product in ${\\bf R}^6$ these can be chosen so that $\n\\phi_a = ({\\bf x} , B_a {\\bf x}) $.\n\nIn this compact notation the $1:1:1$ resonant three-dimensional\noscillator is described by ${\\dot {\\bf x}} = A {\\bf x}$. We write the\nnormal form of evolution equations for perturbation of this as\n$$ \\begin{array}{ll}\n{\\dot {\\bf x}} \\ &= \\ K (\\phi ) \\ {\\bf x} \\\\\n{\\dot \\phi}_a \\ &= \\( {\\bf x} \\, , \\, (B_a K + K^+ B_a ) {\\bf x} \\)\n\\end{array} $$ with $K$ an arbitrary matrix in $C(A)$.\n\n\\section{Bifurcations}\n\n{\\bf Example 8. Hopf bifurcation.} Consider the matrix\n$$ A \\ = \\ \\pmatrix{0&-\\omega_0\\cr \\omega_0&0\\cr} $$\nwith eigenvalues $\\lambda_1 = - i \\omega_0$, $\\lambda_2 = i \\omega_0$. There is\nno sporadic resonance, and one elementary invariance relation, $\\lambda_1 + \\lambda_2 = 0$, with associated invariant $\\phi = x^2 + y^2$.\nThe most general system in normal form is therefore\n$ {\\dot x} = \\alpha (\\phi ) x - \\beta (\\phi ) y$, ${\\dot y} = \\beta (\\phi ) x + \\alpha (\\phi ) y $. According to our procedure, the evolution equation for the new coordinate $\\phi$ will be $ \\.\\phi = 2 \\phi \\alpha (\\phi)$. 
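The reduction just described is easy to verify numerically. The following sketch (plain Python; the specific coefficient choices $\alpha(\phi) = \mu - c\,\phi$, $\beta(\phi) = \omega_0$ and all parameter values are illustrative assumptions, not taken from the text) integrates the planar normal form with a Runge--Kutta scheme and checks that $\phi = x^2 + y^2$ obeys the reduced scalar equation $\dot\phi = 2 \phi \alpha(\phi)$:

```python
# Illustrative coefficient choices (assumptions for this sketch):
# alpha(phi) = mu - c*phi (so alpha(0) = 0), beta(phi) = omega0.
mu, c, omega0 = 0.5, 1.0, 3.0
alpha = lambda p: mu - c * p
beta = lambda p: omega0

def full_rhs(s):
    # Planar normal form: xdot = alpha x - beta y, ydot = beta x + alpha y.
    x, y = s
    p = x * x + y * y
    return (alpha(p) * x - beta(p) * y, beta(p) * x + alpha(p) * y)

def reduced_rhs(p):
    # Reduced equation for the invariant: phidot = 2 phi alpha(phi).
    return 2.0 * p * alpha(p)

def rk4(f, s, h):
    # One classical Runge-Kutta step; works for tuples and scalars alike.
    add = (lambda u, v, k: tuple(a + k * b for a, b in zip(u, v))
           if isinstance(u, tuple) else u + k * v)
    k1 = f(s)
    k2 = f(add(s, k1, h / 2))
    k3 = f(add(s, k2, h / 2))
    k4 = f(add(s, k3, h))
    s = add(s, k1, h / 6)
    s = add(s, k2, h / 3)
    s = add(s, k3, h / 3)
    return add(s, k4, h / 6)

state, phi, h = (0.1, 0.0), 0.01, 0.001
for _ in range(20000):              # integrate up to t = 20
    state = rk4(full_rhs, state, h)
    phi = rk4(reduced_rhs, phi, h)

phi_full = state[0] ** 2 + state[1] ** 2
assert abs(phi_full - phi) < 1e-6   # the two integrations agree
assert abs(phi - mu / c) < 1e-6     # phi approaches the zero of alpha
```

With these choices $\alpha$ has a single positive zero at $\phi = \mu/c$, so the reduced one-dimensional equation drives $\phi(t)$ to that value regardless of the fast rotation generated by $\beta$.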
\nAs the linear part of the system is given by $A$, we must require $\\alpha (0) = 0 $, $\\beta (0) = \\omega_0$.\n\nIn applications, one is interested in the case where the system also depends on an external (``control'') parameter $\\mu$, which usually does not evolve in time\\footnote{A different framework is provided by dynamic bifurcations \\cite{Ben,Nei}.}, the linear part being given by $A$ at the critical value. In this case the normal form and the $\\phi$ evolution equation read\n$$\\begin{array}{l}\n{\\dot x} \\ = \\ \\alpha (\\phi , \\mu ) \\, x \\ - \\ \\beta (\\phi , \\mu) \\, y \\\\\n{\\dot y} \\ = \\ \\beta (\\phi , \\mu ) \\, x \\ + \\ \\alpha (\\phi , \\mu) \\, y \\\\\n\\.\\phi \\ = \\ 2 \\, \\phi \\, \\alpha (\\phi , \\mu) \\ .\n\\end{array} $$\n\nIn the standard model of Hopf bifurcation, $\\alpha (\\phi , \\mu ) = \\mu - c \\phi $, and we write $\\beta (\\phi , \\mu ) = \\omega_0 + b (\\phi , \\mu)$ with $b(0,0) = 0$. This corresponds to the normal form\n$$ \\cases{\n{\\dot x} = \\mu x - \\omega_0 y - b (x^2+y^2 , \\mu) y - c (x^2 + y^2 ) x & \\cr\n{\\dot y} = \\omega_0 x + \\mu y + b (x^2+y^2 , \\mu) x - c (x^2 + y^2 ) y & \\cr} $$\nwhich in our approach reads\n$$ \\cases{\n{\\dot x} = \\mu x - \\omega_0 y - b (\\phi , \\mu) y - c (\\phi ) x & \\cr\n{\\dot y} = \\omega_0 x + \\mu y + b (\\phi , \\mu) x - c (\\phi ) y & \\cr \n\\.\\phi = 2 \\, \\phi \\, \\alpha (\\phi , \\mu) \\ . & \\cr}\n$$\nNote that the space of invariants is one-dimensional (with the additional constraint $\\phi \\ge 0$); thus, either $\\phi (t)$ is unbounded for $t > 0$, or it approaches one of the zeroes of the function $\\alpha (\\phi , \\mu)$, say $\\phi_0$. In this case, the system reduces asymptotically to a linear one: \n$$ \n\\pmatrix{{\\dot x} \\cr {\\dot y} \\cr} \\ = \\ \\pmatrix{\n0 & - \\omega_0 - b (\\phi_0 , \\mu ) \\cr \n\\omega_0 + b (\\phi_0 , \\mu) & 0 \\cr} \\ \\pmatrix{ x \\cr y \\cr} \\ . 
$$\nThe standard analysis of Hopf bifurcation is readily recovered in this way. \n\\bigskip\n\nNote that we can also look at Hopf bifurcation in a slightly\ndifferent way, i.e. include the $\\mu$ variable from the beginning.\nIn this case the matrix $A$ is given by\n$$ A = \\pmatrix{0&-1&0\\cr 1&0&0\\cr 0&0&0\\cr} $$\nwith eigenvalues $(-i,i,0)$ and invariance relations $\\lambda_1 +\n\\lambda_2 = 0 $ and $\\lambda_3 = 0$; the associated invariants are $\\phi$\nand $\\mu$.\nThe linear centralizer of $A$ is spanned by matrices\n$$ M = \\pmatrix{a&-b&0\\cr b&a&0\\cr 0&0&c\\cr} $$\nand correspondingly the normal form will be\n$$ \\begin{array}{l}\n{\\dot x} = \\alpha (\\phi , \\mu) x - \\beta (\\phi , \\mu) y \\\\\n{\\dot y} = \\beta (\\phi , \\mu) x + \\alpha (\\phi , \\mu) y \\\\\n\\.\\mu = \\gamma (\\phi , \\mu)\n\\end{array} $$\n\nThe equation for $\\phi$ is just the one given above, and we are\nled to the same system; interpreting $\\mu$ as an external control\nparameter forces $\\gamma (\\phi , \\mu ) \\equiv 0$ (if not, there is a feedback of the system on the control parameter).\n\n\n\\medskip\\noindent\n{\\bf Example 9. Hamiltonian Hopf bifurcation.} Consider the matrix\n$$ A \\ = \\ \\pmatrix{\\mu & - \\omega & 0 & 0 \\cr \\omega & \\mu & 0 & 0 \\cr 0&0&- \\mu &-\\omega \\cr 0&0&\\omega & -\\mu \\cr} \\ = \\ \\pmatrix{\\mu I + \\omega J & 0 \\cr 0 & - \\mu I + \\omega J \\cr} $$\nwith eigenvalues $\\lambda_1 = -\\mu - i \\omega$, $\\lambda_2 = -\\mu + i \\omega$, $\\lambda_3 = \\mu -i \\omega$, $\\lambda_4 = \\mu + i \\omega$. We assume $\\omega \\not= 0$ (so it could be rescaled to $\\omega = 1$) and $\\mu \\not=0$; the case $\\mu = 0$ corresponds to example 4. In applications, one considers the case where $\\mu$ is an external control parameter and studies the changes as this is varied; $\\mu = 0$ is a critical value.\n\nThe matrix $A$ is diagonalized, for all $\\mu$, passing to variables $\\xi^i$ as in example 5 above. 
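The eigenvalues listed above can be spot-checked by computing the spectra of the two $2\times2$ blocks directly from their characteristic polynomials. The sketch below is plain Python; the numerical values of $\mu$ and $\omega$ are arbitrary nonzero choices, not values from the text:

```python
import cmath

# Arbitrary nonzero test values for the parameters (an assumption of
# this sketch; any nonzero mu, omega would do).
mu, omega = 0.3, 1.7

def eig2(m):
    # Eigenvalues of a 2x2 matrix from lambda^2 - tr*lambda + det = 0.
    tr = m[0][0] + m[1][1]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    disc = cmath.sqrt(tr * tr - 4 * det)
    return [(tr - disc) / 2, (tr + disc) / 2]

upper = [[mu, -omega], [omega, mu]]      # block  mu*I + omega*J
lower = [[-mu, -omega], [omega, -mu]]    # block -mu*I + omega*J

spectrum = sorted(eig2(upper) + eig2(lower),
                  key=lambda z: (z.real, z.imag))
# lambda_1..lambda_4 as quoted in the text:
expected = sorted([-mu - 1j * omega, -mu + 1j * omega,
                   mu - 1j * omega, mu + 1j * omega],
                  key=lambda z: (z.real, z.imag))
assert all(abs(z - w) < 1e-12 for z, w in zip(spectrum, expected))
```

For $\mu \neq 0$ the four eigenvalues are pairwise distinct, which is why the diagonalization works uniformly in $\mu$.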
The evolution ${\\dot x} = A x$ preserves the symplectic structure $\\kappa = {\\rm d} x^1 \\wedge {\\rm d} x^3 + {\\rm d} x^2 \\wedge {\\rm d} x^4$.\n\nIt is easy to check, for generic $\\mu$, that there is no sporadic resonance and that there are two elementary invariance relations, given by $\\lambda_1 + \\lambda_4 = 0$ and $\\lambda_2 + \\lambda_3 = 0$. The corresponding invariants will be $ \\varphi_1 = \\xi^1 \\xi^4$, $\\varphi_2 = \\xi^2 \\xi^3$; they are complex conjugate, and correspond to real invariants \n$ \\phi_1 = x_1 x_3 + x_2 x_4$ and $\\phi_2 = x_1 x_4 - x_2 x_3$. \nThese are also written for later reference as $\\phi_a = (1\/2) (\\xi , B_a \\xi)$, with $B_a$ obvious four-dimensional symmetric matrices. \n\nThe linear centralizer of $A$ (for $\\mu \\not= 0$) is given by matrices written in block form, with $\\alpha_k$ and $\\beta_k$ real constants, as\n$$ M \\ = \\ \\pmatrix{\n\\alpha_1 I + \\beta_1 J & 0 \\cr\n0 & \\alpha_2 I + \\beta_2 J \\cr} \\ . $$\n\nCorrespondingly, systems in normal form will be given by\n$$ {\\dot x} \\ = \\ M \\, x $$\nwhere now $\\alpha_k , \\beta_k $ will be functions of the\ninvariants $\\phi_1 , \\phi_2$. Note that such systems in\ngeneral do not preserve the symplectic form $\\kappa$, unless $\\alpha_2 = - \\alpha_1 $ and $\\beta_2 = \\beta_1$. \nThe evolution of the $\\phi$ is given by\n$ {\\dot \\phi}_a = (1\/2) \\( x , (M^+ B_a + B_a M ) x \\)$. \n\nIt is convenient to write the system in terms of the two-dimensional vectors $\\eta_1 = (x^1,x^2)$, $\\eta_2 = (x^3,x^4)$ and $\\phi = (\\phi_1,\\phi_2)$. 
Moreover, we single out the linear part in the functions $\\alpha_k$ and $\\beta_k$, writing $\\alpha_k (\\phi) = (-1)^{k+1} \\mu + a_k (\\phi)$, $\\beta_k (\\phi) = \\omega + b_k (\\phi)$ with $a_k (0) = b_k (0) = 0$; the smooth functions $a_k$, $b_k$ are arbitrary apart from this constraint, and can also depend on the parameters $\\mu$ and $\\omega$.\n\n\nThe system is hence described in our approach by the following equations: \n$$ \\begin{array}{ll}\n{\\dot \\eta}_1 =& (\\mu I + \\omega J) \\, \\eta_1 \\ + \\ [a_1 (\\phi) \\cdot I \\, - \\, b_1 (\\phi) \\cdot J ] \\, \\eta_1 \\ , \\\\\n{\\dot \\eta}_2 =& (- \\mu I + \\omega J) \\, \\eta_2 \\ + \\ [a_2 (\\phi) \\cdot I \\, - \\, b_2 (\\phi) \\cdot J ] \\, \\eta_2 \\ , \\\\ \n{\\dot \\phi} =& \\[ \\( a_1 (\\phi) + a_2 (\\phi) \\) \\cdot I \\, + \\, \\( b_2 (\\phi) - b_1 (\\phi) \\) \\cdot J \\] \\ \\phi \\ . \\end{array} $$ \n\nIf $a_k , b_k$ are such that the system is hamiltonian, the $\\phi$ are always strictly invariant, and we are reduced to a linear system on each level set of $\\phi$; if the $a_k,b_k$ are such that the system is not hamiltonian but the (two-dimensional) equation for the $\\phi$ satisfies the condition for the existence of a limit cycle, as is often the case in bifurcation problems, then remark 7 applies.\n\nLet us look more closely at the perturbation of the case $\\mu = 0$; as recalled above, the analysis of this case can be recovered from example 4. More precisely, we can rescale time so that $\\omega = 1$, and allow the arbitrary functions appearing in the analysis of example 4 to also depend on the parameter $\\mu$.\n\nAlternatively, we can proceed as at the end of the previous example, and include the $\\mu$ variable from the beginning. 
With a $2 \\oplus 2 \\oplus 1$ block notation, we have now\n$$ A \\ = \\ \\pmatrix{\\mu I + \\omega J & 0 & 0 \\cr 0 & - \\mu I + \\omega J & 0 \\cr 0 & 0 & 0 \\cr} $$\nwith linear centralizer \n$$ M \\ = \\ \\pmatrix{\n\\alpha_1 I + \\beta_1 J & 0 & 0 \\cr\n0 & \\alpha_2 I + \\beta_2 J & 0 \\cr\n0 & 0 & c \\cr} \\ . $$\nIn the normal form, going back to the original time parametrization, we have \n$$ \\begin{array}{ll}\n{\\dot \\eta}_1 =& (\\mu I + \\omega J) \\, \\eta_1 \\ + \\ [\\alpha_1 (\\phi,\\mu) \\cdot I \\, - \\, \\beta_1 (\\phi,\\mu) \\cdot J ] \\, \\eta_1 \\ , \\\\\n{\\dot \\eta}_2 =& (- \\mu I + \\omega J) \\, \\eta_2 \\ + \\ [\\alpha_2 (\\phi,\\mu) \\cdot I \\, - \\, \\beta_2 (\\phi,\\mu) \\cdot J ] \\, \\eta_2 \\ , \\\\ \n{\\dot \\phi} =& \\[ \\( \\alpha_1 (\\phi,\\mu) + \\alpha_2 (\\phi,\\mu) \\) \\cdot I \\, + \\, \\( \\beta_2 (\\phi,\\mu) - \\beta_1 (\\phi,\\mu) \\) \\cdot J \\] \\ \\phi \\ , \\\\\n{\\dot \\mu} =& \\gamma (\\phi,\\mu) \\ . \\end{array} $$ \nAgain, interpreting $\\mu$ as an external control parameter requires $\\gamma (\\phi , \\mu ) \\equiv 0$.\n\n\n\\section*{Acknowledgements}\n\nThis work was started by discussions during a visit by SW in the\nDepartment of Mathematics of Universit\\`a di Milano, and while GG\nwas a guest at the DFG-Graduierten\\-kol\\-leg ``Hierarchie und\nSymmetrie in Mathema\\-ti\\-schen Mo\\-del\\-len'' at RWTH Aachen. We\nwould like to thank these Institutions for support to our work. 
GG\nalso acknowledges partial support by Fondazione CARIPLO and by\nGNFM-INdAM.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction\\label{sec:intro}}\nSkyrmions are particle-like magnetization swirls in magnetic materials,\\cite{Pf_MnSi_Science_09,Tokura_review_skyrmion_Natnano_13,Nanjing_Theory_artificial_skyrmion_PRL_13,Jiang_blowing_bubble_Science_15,CNRS_[CoFeB\/Pt]n_STXM_Natphys_15,White_GaV4S8_Natmater_15,CNRS_Ta\/Pt\/Co\/MgO\/Ta_PEEM_Natnano_16} which are promising candidates for advanced magnetic memory applications.\\cite{Tokura_review_skyrmion_Natnano_13}\nIn these memory schemes, individual skyrmions are used to encode the binary information `1' and `0', e.g., via their presence\\cite{Hamburg_Ir\/FePd_write_Science_13,VA,MIT_[Ta\/Pt\/Co]n_STXM_Natmater_16} or their internal structures.\\cite{Cu2OSeO3_multidomain_Nanoletters_16,Diaz_helicity_theory_IOP_16} Skyrmions are topological objects, which makes them robust against superparamagnetism.\nConsequently, it should be possible to reduce the bit size beyond the limits of conventional ferromagnetic memory. 
Further, the energy required to manipulate skyrmions is several orders of magnitude less than domain wall-based ferromagnetic memory.\\cite{Pf_MnSi_STT_rotation_Science,Rosch_MnSi_emergent_NatPhys,Tokura_CuOSeO-MnSi_ratchet_Natmater_14}\n\nProminent skyrmion-carrying materials are noncentrosymmetric helimagnets, such as MnSi,\\cite{Pf_MnSi_Science_09} Fe$_{1-x}$Co$_x$Si,\\cite{Pf_FeCoSi_SANS_PRB_10} FeGe,\\cite{Tokura_FeGe_STT_NatComm_12} Cu$_2$OSeO$_3$,\\cite{Tokura_CuOSeO_LTEM_Science_12,Tokura_CuOSeO_REXS_PRL_14} and $\\beta$-type Co$_8$Zn$_8$Mn$_4$.\\cite{CoMnZn} In such materials, the broken crystalline inversion symmetry induces the Dzyaloshinskii-Moriya interaction (DMI), leading to periodic, incommensurate, modulated spiral spin structures.\nAssisted by thermal fluctuations at finite temperature and in an external field, the topologically protected skyrmion lattice phase forms, consisting of chiral skyrmions.\\cite{Pf_MnSi_Science_09}\nThe advantages of chiral skyrmions for memory devices are twofold. 
First, compared to magnetic bubbles with similar topological spin configurations,\\cite{Bubble_review_Scinece_80} skyrmions are more robust and can be manipulated with ease.\nThe size of an individual skyrmion in these materials is usually between 3-100~nm,\\cite{Pf_MnSi_Science_09,2010:Yu:Nature,YuFeGe2011,2011:Kanazawa} i.e., smaller than other types of skyrmions.\\cite{Bubble_review_Scinece_80,Tokura_BaFeScMgO_reversal_PNAS_12,Nanjing_[Co\/Pt]n_MOKE_PRB_14,Jiang_blowing_bubble_Science_15,CNRS_[CoFeB\/Pt]n_STXM_Natphys_15,Berkeley_[Fe\/Ni]n_SPLEEM_APL_15,CNRS_Ta\/Pt\/Co\/MgO\/Ta_PEEM_Natnano_16,MIT_[Ta\/Pt\/Co]n_STXM_Natmater_16}\nSecond, the skyrmion phase is a rigid, hexagonally-ordered periodic lattice, resulting in an equal spacing between neighboring skyrmions across the entire sample.\\cite{Pf_MnSi_long_range,Tokura_review_skyrmion_Natnano_13}\nAs a consequence, in a racetrack-like memory scheme,\\cite{Parkin_racetrack_Science_08} no extra effort is needed to assure that they keep their distance, as the control of the skyrmion-skyrmion distance is otherwise experimentally rather challenging.\\cite{Hamburg_Ir\/FePd_write_Science_13,Jiang_blowing_bubble_Science_15,MIT_[Ta\/Pt\/Co]n_STXM_Natmater_16}\n\n\n\nThe skyrmion lattice phase in noncentrosymmetric helimagnets is usually a long-range-ordered state, with the correlation length reaching hundreds of micrometers.\\cite{Pf_MnSi_long_range}\nThis largely limits the applicability of this type of skyrmion order for device applications. 
Therefore, the formation and manipulation of skyrmion lattice domains, which break the long-range order, is an indispensable step towards skyrmion-based racetrack memories.\\cite{SkyRace_Tomasello}\nWe have recently reported the observation of a multidomain skyrmion lattice state on the surface of Cu$_2$OSeO$_3$ in a magnetic diffraction experiment, created by tilting the magnetic field away from the major crystalline axis.\\cite{Cu2OSeO3_multidomain_Nanoletters_16,2016:Zhang-PRB} \nHowever, several key questions remained unanswered.\nMost importantly, the size, shape, and distribution of the domains remained unknown, which are crucial pieces of information needed for designing skyrmion devices. \n\n\nIn this Letter, we use an imaging technique based on resonant elastic x-ray scattering (REXS) to map out the lateral distribution of skyrmion lattice domains. Moreover, we demonstrate that by tuning the tilt angle of the applied field, the size of the domains can be efficiently manipulated, which is an important step towards future applications.\n\n\n\n\n\n\n\\begin{figure*}[ht!]\n\t\\begin{center}\n\t\t\\includegraphics[width=16cm]{Fig1_Setup-PD.pdf}\n\t\t\\caption{(a) Schematic of the REXS setup, showing the field tilt angle $\\gamma$, defined with respect to the surface normal.\n\t\t\t(b) Photograph of the sample and its orientation. The blue square marks the real-space region ($1 \\times 1$~mm$^2$) mapped using magnetic diffraction imaging, shown in Fig.\\ \\ref{fig:domains}.\n\t\t\t(c) Sketch of the magnetic phase diagram of Cu$_2$OSeO$_3$, as obtained by REXS.\n\t\t\t(d) Magnetization patterns and (e) corresponding REXS diffraction patterns in reciprocal space $(hk1)$ plane (experimental data) for the conical (50~K, 32~mT, $\\gamma=90^\\circ$), skyrmion (57~K, 32~mT, $\\gamma=0^\\circ$), and helical (15~K, 0~mT) phase, respectively. 
The sampling area is $500\\times500~\\mu\\mathrm{m}^2$.}\n\t\t\\vspace*{-0.5cm}\n\t\t\\label{fig:setup}\n\t\\end{center}\n\\end{figure*}\n\nFigure \\ref{fig:setup}(a) shows a schematic of the REXS setup used for the characterization of the magnetically ordered phases of Cu$_2$OSeO$_3$ as a function of applied magnetic field and temperature. A $\\omega$-$2\\theta$ geometry was used for the experiments, where $\\omega$ is the angle of incidence of the x-rays and $2 \\theta$ the scattering angle. The diffracted x-rays are detected with a charge-coupled device (CCD) camera in the ultrahigh vacuum scattering chamber RASOR on beamline I10 at the Diamond Light Source.\\cite{RASOR_review} A magnetic field was applied to the sample whose strength and tilt angle, $\\gamma$, with respect to the surface normal can be varied.\nThe incident, $\\sigma$-polarized x-rays were at the resonance with the $L_3$ edge of Cu (931.25~eV), resulting in a wavelength of 13.3~\\AA.\nFor Cu$_2$OSeO$_3$ with its relatively large lattice constant of 8.925~\\AA, the (001) Bragg peak can be reached at $2 \\theta \\approx 96.5^\\circ$.\n\nIn noncentrosymmetric $P2_13$ helimagnets, a `universal' magnetic phase diagram is observed that shows helical, conical, or ferrimagnetic order below the transition temperature $T_\\mathrm{c}$ as a function of increasing field.\\cite{Rosch_3D_Monte_Carlo_PRB_13}\nBelow $T_\\mathrm{c}$ of $\\sim$57.5~K for Cu$_2$OSeO$_3$, at finite fields, the skyrmion phase can be found. 
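The scattering angle quoted above can be cross-checked with Bragg's law, $\lambda = 2 d \sin\theta$. The sketch below uses plain Python; the conversion constant $hc \approx 12398.4$~eV\,\AA\ is the only input not stated in the text:

```python
import math

E_photon = 931.25                # eV, Cu L3 resonance (from the text)
wavelength = 12398.4 / E_photon  # Angstrom; hc ~ 12398.4 eV*Angstrom
d001 = 8.925                     # Angstrom; (001) spacing = lattice constant

# Bragg condition: lambda = 2 d sin(theta)
two_theta = 2 * math.degrees(math.asin(wavelength / (2 * d001)))
assert abs(wavelength - 13.3) < 0.1  # matches the quoted 13.3 Angstrom
assert abs(two_theta - 96.5) < 1.0   # matches the quoted 2theta ~ 96.5 deg
```

The check also shows why a soft-x-ray structural Bragg reflection is accessible here at all: only because the lattice constant of Cu$_2$OSeO$_3$ is comparable to the resonant wavelength.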
\nFigure \\ref{fig:setup}(c) shows a schematic of the magnetic phase diagram for the Cu$_2$OSeO$_3$ bulk crystal as observed by REXS.\\cite{Cu2OSeO3_multidomain_Nanoletters_16} \nThe magnetization patterns corresponding to the helical, skyrmion, and conical phase are shown in Fig.\\ \\ref{fig:setup}(d), and the experimental REXS results in (e), respectively (from top to bottom).\nThe modulation wavevector for all three phases has a length of 0.0158-0.0162~reciprocal lattice units (r.l.u.), corresponding to a real-space modulation pitch of $\\sim$56~nm, in agreement with the values reported in the literature.\\cite{Pf_CuOSeO_PRL_12, Tokura_CuOSeO_LTEM_Science_12, Tokura_CuOSeO_rotation_PRB_12, White_Cu2OSeO3_E_rotation_IOP_12, Tokura_CuOSeO_REXS_PRL_14, White_Cu2OSeO3_LTEM_PNAS_15}\nIn the helical state, the weak cubic anisotropy locks the propagation wave vector along the $\\langle$100$\\rangle$ direction.\nAt 57~K and in an applied magnetic field of 32~mT, $\\gamma=0^\\circ$, the sharp six-fold-symmetric diffraction pattern emerges, which is a signature of the single-domain, long-range-ordered skyrmion lattice state.\nOne of the skyrmion wave vectors is along [010] due to the cubic anisotropy.\\cite{Cu2OSeO3_multidomain_Nanoletters_16}\n\n\nBy tilting the magnetic field to $\\gamma=17^\\circ$, the skyrmion lattice breaks up into domains, resulting in a necklace-like diffraction pattern, as shown in Fig.\\ \\ref{fig:focus}(a).\nThe formation of domains is the result of the competing magnetic anisotropies.\\cite{Cu2OSeO3_multidomain_Nanoletters_16}\nNote that the diffraction pattern was obtained with an incident x-ray beam focused to an area of 300 $\\times$ 300~$\\mu$m$^2$ on the sample. Once the beam is skimmed down to an area of 20~$\\mu$m in diameter (using a pinhole), the single domain state with its six-fold-symmetric pattern is recovered [cf.\\ Fig.\\ \\ref{fig:focus}(b)]. 
\nThis means that the domains are $>$$100\\pi~\\mu$m$^2$ in area, and that this beam spot can be used to map out the real-space domain pattern.\n\n\\begin{figure}[ht!]\n\t\\begin{center}\n\t\t\\includegraphics[width=8.5cm]{Fig2_BeamSize.pdf}\n\t\t\\caption{Reciprocal space map of the skyrmion lattice plane that is perpendicular to the external field, reached by field cooling down to 57~K in a field of 32~mT with $\\gamma=15^\\circ$. Area sampled by the beam: (a) $300\\times300~\\mu\\text{m}^2$ and (b) $20~\\mu\\text{m}$ in diameter.\n\t\tThe scattering intensity is in arbitrary units.}\n\t\t\\vspace*{-1.0cm}\n\t\t\\label{fig:focus}\n\t\\end{center}\n\\end{figure}\n\n\n\n\\begin{figure*}[ht!]\n\t\\begin{center}\n\t\t\\includegraphics[width=16cm]{Fig3_Domains.pdf}\n\t\t\\caption{Scanning REXS images. (a) Definition of the lattice rotation angle $\\Psi$ in the skyrmion diffraction pattern ($\\Psi \\in [0^\\circ,60^\\circ]$ as indicated by the color bar). The case of $\\Psi=30^\\circ$ is shown.\n\t\t\t(b-h) 1~mm $\\times$ 1~mm raster scans showing in-plane rotational domain maps for the indicated field angle $\\gamma$.}\n\t\t\\vspace*{-0.9cm}\n\t\t\\label{fig:domains}\n\t\\end{center}\n\\end{figure*}\n\n\nThe sample shape and geometry are shown in Fig.\\ \\ref{fig:setup}(b), in which the raster-scanned area ($1\\times1$~mm$^2$) is indicated by the blue square.\nThe skyrmion lattice state is reached by field-cooling from 65~K (paramagnetic phase) down to 57~K in a field of 32~mT with the $\\gamma$ angle as indicated in Fig.\\ \\ref{fig:domains}(b-h), followed by a 15~min waiting period. \nEach area scan image is composed of $50\\times50$ pixels. \nFor most of the pixels, a six-fold-symmetric, single domain diffraction pattern is observed. Its rotational state is characterized by the in-plane rotation angle $\\Psi$, as shown in Fig.\\ \\ref{fig:domains}(a). 
An angle of $\\gamma =0^\\circ$ corresponds to two of the six diffraction spots being aligned along the $q_x$-axis.\nFor some pixels, especially for larger field tilt angles $\\gamma$, a multidomain state is found at the domain boundaries.\nThe evolution of the domain pattern as a function of field tilt angle is shown in Fig.\\ \\ref{fig:domains}(b-h).\nFor $\\gamma=0^\\circ$, a perfect long-range-ordered single skyrmion lattice domain is observed [cf.\\ Fig.\\ \\ref{fig:domains}(b)], locked in one direction determined by the anisotropies.\nWhen the field is slightly tilted ($\\gamma = 5^\\circ$), two domains start to emerge, which do not differ very much in $\\Psi$.\nAs the field is further tilted, the domains become randomly oriented and the average size decreases (most strongly for $\\gamma = 15^\\circ \\rightarrow 19^\\circ$).\nFor $\\gamma = 22^\\circ$, a mosaic pattern is observed at the bottom-left corner of Fig.\\ \\ref{fig:domains}(g). This results from a lack of resolution, as determined by the x-ray beam size.\nAs the domain size decreases to less than the beam size, a multidomain diffraction pattern is observed across each pixel.\nNote that the pinhole diameter can not be reduced much below 20~$\\mu$m in our experimental configuration, as this will result in Fresnel diffraction (near-field condition giving cylindrical wave fronts), i.e., the plane wave approximation can no longer be applied for modeling the scattering process.\n\n\nThe shape of the domains is generally rather irregular, and their distribution random, suggesting that the domain formation is spontaneous and not governed by defect-pinning.\nThe domain pattern obtained for each $\\gamma$ is stable over time, as confirmed by multiple scans of the same area.\nNote, however, that for the same $\\gamma$, if the temperature is increased above $T_\\mathrm{c}$ and then lowered down to the skyrmion phase again, the domain image has no resemblance to the previous pattern.\nThe orientation of 
neighboring domains is similar; the system generally prefers neighboring domains to differ in orientation by less than $10^\\circ$. At higher tilt angles ($\\gamma \\geq 15^\\circ$), the domain boundaries are sharp; for smaller angles, however, the transition between domains becomes almost continuous, as can be seen in Fig.\\ \\ref{fig:domains}(d).\n\n\nThis `polycrystalline' appearance of a skyrmion lattice reflects the delicate balance of the system's competing interactions.\nThe isotropic exchange interaction, the anisotropic exchange (DMI) term, and the Zeeman energy compete\nin establishing the principal magnetic order, while anisotropy, demagnetization, and possibly the ferroelectric effect in the multiferroic material Cu$_2$OSeO$_3$ serve as perturbations which are responsible for the fine adjustment of the system's structure.\nSkyrmions behave like quasi-particles and prefer to keep their close-packed order, i.e., less ferromagnetic-like space in-between them is preferred. However, if defects are introduced into the system, some region would have to become ferromagnetic, unless the skyrmions adapt their shape and size. \nAs a result, the skyrmions in the defect zone can become elliptically distorted, as observed using Lorentz transmission electron microscopy (LTEM) on thinned-down FeGe$_{1-x}$Si$_x$ bulk samples.\\cite{Tokyo_FeGeSi_LTEM_Scienceadv_16} \nMoreover, the close-packed ordering of the skyrmion lattice can have defects at the domain boundaries, e.g., `5-7 defects' where five or seven skyrmions are surrounding a reference skyrmion, instead of six.\\cite{White_Cu2OSeO3_LTEM_PNAS_15} In this case, a line defect can appear, forming the domain boundary. Depending on its configuration, the neighboring domains can change their orientations and take an arbitrary angle, depending on the width of the defect-containing boundary. The thicker this boundary, the larger the relative rotation. 
If we relate this to our domain observations, it points towards the fact that for small $\\gamma$, the domain boundary contains thin defect lines, leading to small relative rotations. With increasing $\\gamma$, the number of defects increases across the domain boundary, giving rise to sharp rotational transitions.\nNote that LTEM imaging, despite being immensely useful for studying the local defects of the skyrmions, is not able to map out the large-scale skyrmion structures like skyrmion lattice domains. Moreover, for systems with such a delicate energy balance as Cu$_2$OSeO$_3$, LTEM sample preparation-induced defects may affect the intrinsic surface domain structure (essentially a property of the bulk crystal). \nTherefore, the type and density of defects, and their influence on the observed domains, can be significantly different between bulk and thinned-down LTEM samples.\n\n\nIn summary, we have presented a study of the domain structure in the skyrmion lattice phase of Cu$_2$OSeO$_3$ using REXS imaging.\nThis method provides important information on the domain distribution, shape, size, and formation.\nBy tuning the field tilt angle, in-plane rotational domains in the hexagonally ordered skyrmion lattice phase can be generated. The lateral dimensions of the domains are, for small tilt angles, $>$20~$\\mu$m. The domain transition is abrupt, except for very small tilt angles, where the change is almost gradual.\nThe study of skyrmion domains may enable potential device applications making use of the skyrmion lattice state. \nIn skyrmion lattice-based memory, the single-domain state must be broken up into domains, which each encode information in a spatially separated manner.\nThis was demonstrated using the field tilt angle as an additional handle.\n\n\n\n\n\n\nThe REXS experiments were carried out on beamline I10 at the Diamond Light Source, UK, under proposals SI-11784 and SI-12958. 
\nWe thank the EPSRC for support under grant EP\/N032128\/1.\nS.L.Z.\\ and T.H.\\ acknowledge financial support by the Semiconductor Research Corporation. A.B.\\ and C.P.\\ acknowledge financial support through DFG TRR80 and ERC AdG (291079, TOPFIT).\n\n\n\n\n \n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"{\\centering \\section{\\large Introduction}\n\\label{secIntr}}\n\n The main part of the paper is a review of our results obtained\nin~\\cite{GribPavlov2012}--\\cite{GribPavlov2013a}.\n At the end we discuss some new features of particles with negative energies\nconcerning their origination and collisions with ordinary particles.\n\n Recently many papers were\npublished~\\cite{BanadosSilkWest09}--\\cite{Zaslavskii12b} in which some\nspecific properties of collisions of particles close to the horizon of\nrotating black holes were discussed.\n In~\\cite{BanadosSilkWest09} the effect of unlimited growth of the energy of\ntwo colliding particles in the centre of mass frame for critical Kerr\nrotating black holes (call it the BSW effect) was discovered.\n In our papers~\\cite{GribPavlov2010}--\\cite{GribPavlov2011b}\nthe same effect was found for noncritical black holes when multiple\ncollisions took place.\n\n All this shows that close to the horizon there is a natural supercollider\nof particles with energies up to the Planck scale.\n However, its location close to the horizon makes it difficult, due to the\nlarge red shift, to obtain observable effects outside of the ergosphere when\nparticles go from the ``black hole supercollider'' outside.\n Here we shall discuss some other effect valid at any point of\nthe ergosphere.\n The energy in the centre of mass frame can take large values for large\nnegative values of the angular momentum projection on the rotation axis of\nthe black hole.\n The problem is how to get these large (in absolute value) negative values.\n That is why first we shall obtain limitations on the values of\nthe projection of angular momentum outside the 
ergosphere for particles falling\ninto the black hole and inside the ergosphere.\n It turns out that inside the ergosphere of the black hole there is no limit\non the negative values of the momentum, and arbitrarily large energy in the\ncentre of mass frame is possible for large angular momenta of the two\ncolliding particles.\n In a sense the effect is similar to the BSW-effect.\n One of the particles, with large negative angular momentum, can be called\n``critical'', the other particle ``ordinary''.\n\n\n Also in this paper we analyse geodesics for particles with negative energies\nfor rotating black holes.\n Kerr's metric~\\cite{Kerr63}, differently from the Schwarzschild metric,\npredicts the existence of a special region outside the horizon of the black\nhole called the ergosphere.\n In the ergosphere elementary particles must rotate together with the black\nhole.\n The new feature of the ergosphere is the existence of geodesics with energy\nnegative relative to infinity.\n The existence of such geodesics leads to the possibility of extraction of\nenergy from rotating black holes due to the\nPenrose process~\\cite{Penrose69},\n so one can call these geodesics the Penrose geodesics.\n However, in spite of more than 40 years having passed since the discovery of\nthe Penrose effect, there is still no information about the full picture of\nthe Penrose geodesics, especially about their origin.\n Here we shall investigate the properties of such geodesics.\n A particle can arrive on such a trajectory as the result of collisions or\ndecays in the ergosphere.\n But the world line of a geodesic in a geodesically complete space-time\nmust originate or terminate either in the singularity or at\ninfinity~\\cite{HawkingEllis}.\n Note that here one means infinity in space-time, so that it can be infinity\nin time for finite values of the spatial distances.\n The problem is to find where Penrose geodesics originate and terminate, and\nthis is the subject of the paper.\n It will be shown that the length of 
the Penrose geodesics is always finite\ninside the ergosphere, so that the only possibility of their origination and\ntermination is outside of the ergosphere.\n But geodesics for particles with negative energy do not exist in the\nexternal space outside the ergosphere, so one comes to the conclusion that\nthey originate and terminate inside the gravitational radius of the black\nhole.\n The fact that they terminate inside the horizon is well\nknown~\\cite{Contopoulos84}.\n So our interest in this paper is to find the place of their origination.\n These geodesics turn out to be ``white hole'' geodesics, originating at the\nsingularity, arriving at the ergosphere, and then returning back to the\nhorizon and the singularity.\n A new feature is that if a particle with negative energy collides with an\nordinary particle, the energy in the centre of mass frame grows without\nlimit on the horizon.\n\n The system of units $G=c=1$ is used in the paper.\n\n\n\n\\vspace{9mm}\n{\\centering \\section{General formulas for the energy of particles\nclose to the black hole}\n\\label{sec2}}\n\n The Kerr metric of the rotating black hole~\\cite{Kerr63} in\nBoyer--Lindquist coordinates~\\cite{BoyerLindquist67} has the form:\n \\begin{equation}\nd s^2 = d t^2 -\n\\frac{2 M r}{\\rho^2} \\, ( d t - a \\sin^2 \\! \\theta\\, d \\varphi )^2\n-\\, \\rho^2 \\left( \\frac{d r^2}{\\Delta} + d \\theta^2 \\right)\n- (r^2 + a^2) \\sin^2 \\! \\theta\\, d \\varphi^2,\n\\label{Kerr}\n\\end{equation}\n where\n \\begin{equation} \\label{Delta}\n\\rho^2 = r^2 + a^2 \\cos^2 \\! \\theta, \\ \\ \\ \\ \\\n\\Delta = r^2 - 2 M r + a^2,\n\\end{equation}\n $M$ is the mass of the black hole, $ aM $ its angular momentum.\n The rotation axis direction corresponds to $\\theta =0$, i.e. 
$a \\ge 0$.\n The event horizon of the Kerr's black hole corresponds to\n \\begin{equation}\nr = r_H \\equiv M + \\sqrt{M^2 - a^2} .\n\\label{Hor}\n\\end{equation}\n For Cauchy horizon one has\n \\begin{equation}\nr = r_C \\equiv M - \\sqrt{M^2 - a^2} .\n\\label{HorC}\n\\end{equation}\n The surface of the static limit is defined by\n \\begin{equation}\nr = r_1(\\theta) \\equiv M + \\sqrt{M^2 - a^2 \\cos^2 \\theta} .\n\\label{Lst}\n\\end{equation}\n In case $ a \\le M $ the region of space-time between the static limit\nand event horizon is called ergosphere.\n\n For geodesics in Kerr's metric~(\\ref{Kerr}) one obtains\n(see \\cite{Chandrasekhar}, Sec.~62 or \\cite{NovikovFrolov}, Sec.~3.4.1)\n \\begin{equation} \\label{geodKerr1}\n\\rho^2 \\frac{d t}{d \\lambda } = -a \\left( a E \\sin^2 \\! \\theta - J \\right)\n+ \\frac{r^2 + a^2}{\\Delta}\\, P,\n\\end{equation}\n \\begin{equation}\n\\rho^2 \\frac{d \\varphi}{d \\lambda } =\n- \\left( a E - \\frac{J}{\\sin^2 \\! \\theta} \\right) + \\frac{a P}{\\Delta} ,\n\\label{geodKerr2}\n\\end{equation}\n \\begin{equation} \\label{geodKerr3}\n\\rho^2 \\frac{d r}{d \\lambda} = \\sigma_r \\sqrt{R}, \\ \\ \\ \\\n\\rho^2 \\frac{d \\theta}{d \\lambda} = \\sigma_\\theta \\sqrt{\\Theta},\n\\end{equation}\n \\begin{equation} \\label{geodP}\nP = \\left( r^2 + a^2 \\right) E - a J,\n\\end{equation}\n \\begin{equation} \\label{geodR}\nR = P^2 - \\Delta [ m^2 r^2 + (J- a E)^2 + Q],\n\\end{equation}\n \\begin{equation} \\label{geodTh}\n\\Theta = Q - \\cos^2 \\! \\theta \\left[ a^2 ( m^2 - E^2) +\n\\frac{J^2}{\\sin^2 \\! 
\\theta} \\right].\n\\end{equation}\n Here $E$ is conserved energy (relative to infinity)\nof the probe particle,\n$J$ is conserved angular momentum projection on the rotation axis\nof the black hole,\n$m$ is the rest mass of the probe particle, for particles with nonzero\nrest mass $\\lambda = \\tau \/m $,\nwhere $\\tau$ is the proper time for massive particle,\n$Q$ is the Carter's constant.\n The constants $\\sigma_{r}, \\sigma_{\\theta}$ in formulas~(\\ref{geodKerr3})\nare equal to $\\pm 1$ and are defined by the direction\nof particle movement in coordinates $r$, $\\theta$.\n For massless particles one must take $m = 0$\nin~(\\ref{geodR}), (\\ref{geodTh}).\n\n The permitted region for particle movement outside the event horizon\nis defined by conditions\n \\begin{equation} \\label{ThB0}\nR \\ge 0, \\ \\ \\ \\ \\ \\Theta \\ge 0, \\ \\ \\ \\ \\\n\\frac{d t}{d \\lambda} \\ge 0 .\n\\end{equation}\n The last inequality forbids movement ``back in time''~\\cite{Wald}.\n Let us find limitations for the particle angular momentum from the\nconditions~(\\ref{ThB0}) at the point $(r, \\theta)$,\ntaking the fixed values of $\\Theta$~\\cite{GribPavlov2013}.\n\n Outside the ergosphere $ r^2 -2 r M +a^2 \\cos^2 \\! \\theta >0 $ one obtains\n \\begin{equation} \\label{EvErg}\nE \\ge \\frac{1}{\\rho^2} \\sqrt{(m^2 \\rho^2 + \\Theta)\n(r^2 -2 r M +a^2 \\cos^2 \\! \\theta)}, \\ \\ \\ \\ \\\n \nJ \\in \\left[ J_{-}, \\ J_{+} \\right],\n\\end{equation}\n \\begin{equation}\nJ_{\\pm} = \\frac{\\sin \\theta}{r^2 -2 r M +a^2 \\cos^2 \\! \\theta}\n\\biggl[ - 2 r M a E \\sin \\theta\n\\pm \\sqrt{ \\Delta \\left( \\rho^4 E^2 \\!-\\! (m^2 \\rho^2 \\!+\\! \\Theta)\n(r^2 \\!-\\! 2 r M \\!+\\! a^2 \\cos^2 \\! \\theta) \\right)} \\biggr] .\n\\label{Jpm}\n\\end{equation}\n\n On the boundary of ergosphere\n \\begin{equation} \\label{rEgErg}\nr = r_1(\\theta) \\ \\ \\ \\Rightarrow \\ \\ \\ E \\ge 0, \\ \\ \\ \\\nJ \\le E \\left[ \\frac{M r_1(\\theta) }{a} + a \\sin^2 \\! 
\\theta \\left(\\!\n1 - \\frac{m^2}{2 E^2} - \\frac{\\Theta}{4 M r_1(\\theta) E^2} \\!\\right)\n\\! \\right]\\!.\n\\end{equation}\n\n Inside ergosphere\n \\begin{equation} \\label{lHmdd}\nr_H < r < r_1(\\theta) \\ \\ \\ \\Rightarrow \\ \\ \\\n(r^2 -2 r M +a^2 \\cos^2 \\! \\theta) <0 ,\n\\end{equation}\n \\begin{equation}\nJ \\le J_{-}(r,\\theta) = \\frac{- \\sin \\theta}{\nr^2 \\!-\\! 2 r M \\!+\\! a^2 \\cos^2 \\! \\theta} \\biggl[ 2 r M a E \\sin \\theta\n- \\sqrt{ \\Delta \\left( \\rho^4 E^2 \\!-\\! (m^2 \\rho^2 + \\Theta)\n(r^2 \\!-\\! 2 r M \\!+\\! a^2 \\cos^2 \\! \\theta) \\right)} \\biggr].\n\\label{JmErg}\n\\end{equation}\n So it is only inside the ergosphere that the energy $E$ of the particle\nrelative to infinity can be negative.\n From~(\\ref{JmErg}) one can see that for negative energy~$E$ of the particle\nin ergosphere its angular momentum projection on the rotation axis of the\nblack hole must be also negative.\n\n For negative values of the energy $E$ the function $ J_{-}(r,\\theta) $\nis decreasing with growing $r$ in ergosphere, so that\n \\begin{equation} \\label{JgEHf}\n\\theta \\ne 0 , \\pi , \\ \\ \\ r \\to r_1(\\theta) \\ \\ \\Rightarrow \\ \\\nJ_{-}(r,\\theta) \\to - \\infty .\n\\end{equation}\n So in order to come to the upper frontier of the ergosphere particle\nwith negative energy one must have infinitely large in absolute value negative\nangular momentum.\n\n In the limit $r \\to r_H$ from~(\\ref{JmErg}) (for $\\theta \\ne 0, \\pi$)\none obtains\n \\begin{equation} \\label{JgEH}\nJ \\le J_H = \\frac{ 2 M r_H E}{a} .\n\\end{equation}\n So $J_H$ is the maximal value of the angular momentum of the particle with\nthe energy~$E$ close to the gravitational radius.\n\n\n\\vspace{9mm}\n{\\centering \\section{The energy of particles collision close to the black hole}\n\\label{secCollision}}\n\n\n One can find the energy in the centre of mass frame of two colliding\nparticles $E_{\\rm c.m.}$ with rest masses~$m_1$, $m_2$ taking the square of\n 
\\begin{equation} \\label{SCM}\n\\left( E_{\\rm c.m.}, 0\\,,0\\,,0\\, \\right) = p^{\\,i}_{(1)} + p^{\\,i}_{(2)},\n\\end{equation}\n where $p^{\\,i}_{(n)}$ are 4-momenta of particles $(n=1,2)$.\n Due to $p^{\\,i}_{(n)} p_{(n)i}= m_n^2$ one has\n \\begin{equation} \\label{SCM2af}\nE_{\\rm c.m.}^{\\,2} = m_1^2 + m_2^2 + 2 p^{\\,i}_{(1)} p_{(2)i} .\n\\end{equation}\n Note that the energy of collisions of particles in the centre of mass\nframe is always positive (while the energy of one particle due to Penrose\neffect~\\cite{Penrose69} can be negative!) and satisfies the condition\n \\begin{equation} \\label{Eb0}\nE_{\\rm c.m.} \\ge m_1 + m_2.\n\\end{equation}\n This follows from the fact that the colliding particles move one\ntowards another with some velocities.\n\n It is important to note that $E_{\\rm c.m.}$ for two colliding particles\nis not a conserved value differently from energies of particles (relative\nto infinity) $E_1$, $E_2$.\n\n For the free falling particles with energies $E_1$, $E_2$ and angular\nmomentum projections $J_1, J_2$ from~(\\ref{geodKerr1})--(\\ref{geodR})\none obtains~\\cite{HaradaKimura11}:\n \\begin{eqnarray}\nE_{\\rm c.m.}^{\\,2} = m_1^2 + m_2^2 + \\hspace{57mm}\n\\nonumber \\\\[7pt]\n+\\, \\frac{2}{\\rho^2} \\biggl[ \\, \\frac{P_1 P_2 -\n\\sigma_{1 r} \\sqrt{R_1} \\, \\sigma_{2 r} \\sqrt{R_2}}{\\Delta}\n- \\frac{ (J_1 - a E_1 \\sin^2 \\! \\theta) (J_2 - a E_2 \\sin^2 \\! \\theta)}\n{\\sin^2 \\! 
\theta}\n - \sigma_{1 \theta} \sqrt{\Theta_1} \,\n\sigma_{2 \theta} \sqrt{\Theta_2} \, \biggr].\n\label{KerrL1L2}\n\end{eqnarray}\n Large values of the collision energy can occur near the event horizon\nif one of the particles has the ``critical'' angular momentum $J_H$:\n \begin{eqnarray}\nE_{\rm c.m.}^{\,2}(r \to r_H) = \frac{ (J_{1H} J_2 - J_{2H} J_1)^2}\n{4 M^2 (J_{1H} - J_1) (J_{2H} - J_2)}\n+ m_1^2 \left[1+ \frac{J_{2H} \!- J_2}{J_{1H} \!- J_1}\right] +\nm_2^2 \left[ 1+ \frac{J_{1H} \!- J_1}{J_{2H} \!- J_2}\right]\n\label{GrPvPi}\n\end{eqnarray}\n(see Eq.~16 in~\cite{GribPavlovPiattella2012}).\n This is the BSW-effect.\n\n The energy in the centre of mass frame can be written\nin terms of the relative velocity~$ v_{\rm rel}$ of the particles at the moment\nof collision~\cite{Zaslavskii11}, \cite{GribPavlovPiattella2012}:\n \begin{equation} \label{Relsk03}\nE_{\rm c.m.}^{\,2} = m_1^2 + m_2^2 +\n\frac{2 m_1 m_2}{\sqrt{1 \!- v_{\rm rel}^2}}\n\end{equation}\n and the unlimited growth of the collision energy in the centre of mass\nframe occurs due to the growth of the relative velocity towards the velocity of\nlight~\cite{Zaslavskii11}.\n\n Let us consider another possibility for obtaining large collision energies.\n As we can see from~(\ref{JmErg}),\non the boundary of and inside the ergosphere there exist geodesics on\nwhich a particle with fixed energy can have an arbitrarily large in absolute\nvalue negative angular momentum projection.\n Let us find the asymptotics of~(\ref{KerrL1L2}) for $J_2 \to -\infty$ and\nsome fixed value of $r$ in the ergosphere, supposing the value of Carter's\nconstant $Q_2$ to be such that~(\ref{ThB0}) is valid and $\Theta_2 \ll J_2^2$.\n Then from~(\ref{KerrL1L2}) one obtains\n \begin{eqnarray}\nE_{\rm c.m.}^{\,2} \approx \frac{- 2 J_2}{\rho^2 \Delta } \,\n\biggl[ \frac{J_1}{\sin^2 \! \theta}\n\left( r^2 \!-\! 2 r M \!+\! a^2 \cos^2 \! 
\\theta \\right)\n+ 2 r M a E_1\n- \\frac{\\sigma_{1r} \\sigma_{2r} \\sqrt{R_1}}{\\sin \\theta}\n\\sqrt{-(r^2 \\!-\\! 2 r M \\!+\\! a^2 \\cos^2 \\! \\theta) } \\biggr] .\n\\label{KerrJB}\n\\end{eqnarray}\n This asymptotic formula is valid for all possible $E_1$, $J_1$\n(see~(\\ref{JmErg})) for $r_H < r < r_{1}(\\theta)$ and for\n$E_1>0$ and $J_1$ satisfying~(\\ref{rEgErg}) for $r=r_{1}(\\theta)$.\n The poles $\\theta = 0, \\pi$ are not considered here because the points\non surface of static limit are on the event horizon.\n\n Note that expression in brackets in~(\\ref{KerrJB}) is positive\nin ergosphere.\n This is evident for $r=r_{1}(\\theta)$ and follows from\nlimitations~(\\ref{JmErg}) for $r_H < r < r_{1}(\\theta)$,\nand inside ergosphere~(\\ref{KerrJB}) can be written as\n \\begin{eqnarray}\nE_{\\rm c.m.}^{\\,2} \\approx J_2 \\frac{r^2 -2 r M +a^2 \\cos^2 \\! \\theta}\n{\\rho^2 \\Delta \\sin^2 \\! \\theta}\n\\left( \\sigma_{1r} \\sqrt{J_{1 +}- J_1} -\n\\sigma_{2r} \\sqrt{J_{1 -}- J_1} \\right)^2.\n\\label{KerrJBner}\n\\end{eqnarray}\n\n So from~(\\ref{KerrJB}) one comes to the conclusion that\n{\\it when particles fall on the rotating black hole collisions with arbitrarily\nhigh energy in the centre of mass frame are possible at any point of\nthe ergosphere if $J_2 \\to -\\infty$ and the energies $E_1, E_2$ are fixed}.\n The energy of collision in the centre of mass frame is growing\nproportionally to $\\sqrt{|J_2|}$.\n\n Note that large negative values of the angular momentum projection are\nforbidden for fixed values of energy of particle out of the ergosphere.\n So particle which is nonrelativistic on space infinity ($E=m$)\ncan arrive to the horizon of the black hole if its angular momentum projection\nis located in the interval\n \\begin{equation}\n-2 m M \\left[ 1 + \\sqrt{1+ \\frac{a}{M}}\\, \\right] \\le J \\le 2 m M\n\\left[ 1 + \\sqrt{1 - \\frac{a}{M}}\\, \\right].\n\\label{KerrEM}\n\\end{equation}\n The left boundary is a minimal value of the 
angular momentum of particles\nwith $E=m$ capable of reaching the ergosphere when falling from infinity.\n That is why collisions with $J_2 \to -\infty$ do not occur for particles\narriving from infinity.\n But if the particle has come to the ergosphere and there, as a result of\ninteractions with other particles, acquires large negative values of the\nangular momentum projection (no need to acquire high energies!),\nthen its subsequent collision with a particle falling onto the black hole\nleads to high energy in the centre of mass frame.\n\n Obtaining superhigh energies in collisions of usual particles (e.g. protons)\nby such a mechanism is, however, physically unrealistic.\n Indeed, from~(\ref{KerrJB}) the value of the angular momentum necessary for\nobtaining the collision energy $E_{\rm c.m.}$ is of the order\n \begin{equation}\nJ_2 \approx - \frac{a E_{\rm c.m.}^{\,2}}{2 E_1}.\n\label{KerrEMR}\n\end{equation}\n So from~(\ref{KerrEM}) the absolute value of the angular momentum $J_2$\nmust be of the order $ E_{\rm c.m.}^{\,2} \/ (m_1 m_2)$ relative to the\nmaximal value of the angular momentum of a particle coming to the ergosphere\nfrom infinity.\n For example, if $E_1=E_2 = m_p$ (the proton mass), then $|J_2|$ must\nincrease by a factor of $10^{18}$ for $ E_{\rm c.m.} = 10^9 m_p$.\n To get this one must have a very large number of collisions, with additional\nnegative angular momentum acquired in each collision.\n\n However, the situation is different for supermassive particles.\n In~\cite{GribPavlov2002(IJMPD)}--\cite{GribPavlov2008c} we discussed\nthe hypothesis that dark matter contains stable superheavy neutral particles\nwith mass of the Grand Unification scale, created by gravitation at the end\nof the inflation era.\n These particles are unstable at interaction energies of the order of the\nGrand Unification scale and decay into particles of visible matter, but are\nstable at low energies.\n But in the ergosphere of rotating black holes such particles, due to\nacquiring large 
relative velocities, can increase their energy from $2m$ to\nvalues of $3m$ and larger, so that the mechanism considered in our paper\ncan lead to their decays, as it was in the early universe.\n The number of intermediate collisions needed for them is not very large\n(of the order of 10).\n\n\n\vspace{9mm}\n{\centering \section{Properties of the motion of particles with negative energy\nin the ergosphere}\n\label{secErgo}}\n\n\n From~(\ref{geodKerr3}), (\ref{geodTh}), (\ref{JmErg}) one can see that\nCarter's constant $Q \ge 0$ for particles with negative energy in the ergosphere.\n One can have $Q=0$ for $E \le0$ only in the case of motion in the\nequatorial plane.\n The value $Q=0$ is necessary for equatorial motion, but for positive\nenergy $E$ with $Q=0$ non-equatorial motion can also take place.\n\n Let us consider other specific features of the motion of negative\nenergy particles.\n Define the effective potential by the formula\n \begin{equation} \label{Leff}\nV_{\rm eff} = - \frac{R}{2 \rho^4}.\n\end{equation}\n Then due to~(\ref{geodKerr3}),\n \begin{equation} \label{LeffUR}\n\frac{1}{2} \left( \frac{d r}{d \lambda} \right)^{\!2} + V_{\rm eff}=0\n\end{equation}\n and so\n \begin{equation} \label{LeffUR2}\n\frac{d^2 r}{d \lambda^2} = - \frac{d V_{\rm eff}}{d r} .\n\end{equation}\n\n Inside the event horizon, up to the Cauchy horizon,\nfrom~(\ref{geodKerr3}), (\ref{geodR}), (\ref{geodTh}), (\ref{JgEH})\none has $V_{\rm eff} <0$ for any falling particles.\n So any particle intersecting the event horizon must reach the\nCauchy horizon.\n After going through the Cauchy horizon the particle can reach the\nsingularity.\n The necessary condition for this is that Carter's constant $Q \le 0$\n(see~(\ref{geodKerr3})--(\ref{geodTh})).\n For particles with negative energy in the ergosphere this is true only for\nmotion in the equatorial plane, i.e. 
$Q=0$.\n In what follows we consider the case of equatorial motion $(\theta = \pi\/2)$.\n\n Let us show that for particles with nonzero mass, as well as for massless\nparticles (photons), with energy negative relative to infinity there are no\norbits inside the ergosphere in the equatorial plane with constant $r$, nor\ngeodesics with $r$ varying entirely within\nthe interval $r_1 \ge r \ge r_H$~\cite{GribPavlov2013a}.\n\n The necessary condition for the existence of orbits with constant $r$ is\n \begin{equation} \label{LeffCucl}\nV_{\rm eff}=0, \ \ \ \ \frac{d V_{\rm eff}}{d r} =0\,.\n\end{equation}\n To prove our statement it is sufficient to show that for\n$dt\/d \lambda >0$ and\n \begin{equation} \label{LeffdVefg0}\nE<0, \ \ \ r > r_H, \ \ \ V_{\rm eff}(r)=0 \ \ \Rightarrow\n\ \ V^{\, \prime}_{\rm eff}(r) > 0 .\n\end{equation}\n\n For $V_{\rm eff}(r)=0$ the derivative of the effective potential\ncan be written as\n \begin{equation} \label{Leffder}\nV_{\rm eff}^{\, \prime}(r) = \frac{1}{2 \rho^4}\n\left( 2 (r -M) \frac{P^2}{\Delta} + 2 m^2 r \Delta - 4 r E P \right).\n\end{equation}\n So in order to prove our statement it is sufficient to prove that\nfor negative energy of the particle in the ergosphere one has $P >0$.\n\n From the condition of motion ``forward in time'' and~(\ref{geodKerr1})\none obtains\n \begin{equation} \label{geodKett}\nP \ge \frac{ a \left( a E \sin^2 \! 
\\theta - J \\right) \\Delta}\n{r^2 + a^2}.\n\\end{equation}\n For particles in ergosphere with $E<0$ and $\\theta=\\pi\/2$, $ r<2$\nfrom~(\\ref{lHmdd}), (\\ref{JmErg}) one has\n \\begin{equation}\na E - J \\ge a E - J_{-} \\ge \\frac{a E }{ r^2 -2 r M} >0.\n\\label{LPerg}\n\\end{equation}\n That is why $P>0$ and our statement is proved.\n\n So there are no circular orbits for Penrose trajectories\nin Kerr's black holes.\n The permitted zone for such particles in ergosphere can have only\nupper boundary.\n\n Note that one gets absence of orbits with constant $r=r_H$ on the horizon\nof the nonextremal black holes $a 0 .\n\\label{VderH}\n\\end{eqnarray}\n For extremal black holes $a=M$ one can see from~(\\ref{VderH})\n$V^{\\, \\prime}_{\\rm eff}(r_H)=0$ for $\\theta= \\pi\/2$.\n However circular orbits for $E\\ne 0$ are also absent in this\ncase as it is shown in~\\cite{HaradaKimura10}.\n\n\n\n\n\\vspace{9mm}\n{\\centering \\section{The time of movement of particles with negative\nenergy in the ergosphere}\n\\label{secErttau}}\n\n\n Let us analyze the problem of the time of movement for particles with\nnegative energy in ergosphere.\n As it was shown in the previous section the geodesic with negative energy\nin ergosphere begins from $r=r_H$, then achieves the upper point of the\ntrajectory $r_b$ and after it falls to horizon.\n So the proper time interval of movement of the particle along all geodesic\nin ergosphere is defined by the integral\n \\begin{equation} \\label{VsIntdlbO}\n\\Delta \\lambda = 2 \\int \\limits_{r_H}^{r_b} \\frac{ d r }{|d r \/d \\lambda |}\n= 2 \\int \\limits_{r_H}^{r_b} \\frac{d r}{\\sqrt{- 2 V_{\\rm eff}(r)}} .\n\\end{equation}\n The factor 2 before the integral is due to taking into account the fact\nthat the proper time of movement along geodesic up from some value of\nthe radial coordinate $r$ is equal to the time of falling down to the same\nvalue of $r$.\n\n In the vicinity of the upper point $r_b$\non the trajectory of the particle 
with negative energy one has\nfrom (\ref{LeffUR}) and $V_{\rm eff}(r_b)=0$ that\n \begin{equation} \label{VsdrdlrH}\n\left| \frac{d r}{d \lambda} \right| = \sqrt{- 2 V_{\rm eff}} \approx\n\sqrt{2 (r_b - r) V^{\, \prime}_{\rm eff}(r_b)}.\n\end{equation}\n As was shown in the preceding section, at the boundary point of the\npermitted zone $V^{\, \prime}_{\rm eff}(r_b)>0$, so the integral\n \begin{equation} \label{VsIntdlb}\n\int \frac{ d r }{|d r \/d \lambda |} \sim\n\int \frac{d r}{\sqrt{2 (r_b - r) V^{\, \prime}_{\rm eff}(r_b)}}\n\end{equation}\n is convergent, and the proper time of rising to the upper point\n(or falling from the upper point) of the trajectory in the vicinity of this\npoint is finite.\n\n Due to the fact that the permitted zones for particles with negative energies\nin the ergosphere can have only an upper boundary, there are no zeros\nof $dr \/ d \lambda$ at other points of the trajectory.\n So the integral~(\ref{VsIntdlbO}) is convergent, and the proper time of\nmotion along a geodesic in the ergosphere for a particle with negative\nenergy is finite.\n\n Note that the coordinate time of motion to the horizon is infinite.\n For equatorial motion the estimates of the divergence of the\nintegral for the coordinate time were given in~\cite{GribPavlov2013a}.\n\n The finiteness of the proper time of motion of particles with negative\nenergy in the ergosphere of the black hole leads to the problem of the\norigination and termination of such trajectories.\n As we said in the Introduction, these lines cannot arrive in the ergosphere\nfrom the region outside of the ergosphere.\n So they must originate and terminate inside the gravitational radius.\n This means that they are ``white hole'' geodesics originating\ninside the horizon.\n\n Note that a similar situation takes place for some geodesics of particles\nwith positive energy in the Schwarzschild metric (see the textbook~\cite{LL_II}).\n The geodesic completeness leads to 
the necessity of taking into account\n``white hole'' geodesics originating in the past singularity of the eternal\nblack hole for radial geodesics with specific energy $E\/m<1$\narising from the region inside the gravitational radius.\n For Penrose geodesics, however, we show that all such geodesics\nin the ergosphere of the Kerr black hole have this behaviour.\n\n\n From~(\ref{geodKerr3})--(\ref{ThB0}), (\ref{JgEH}) one can see that all\nlightlike particles (photons) with negative energy falling in the equatorial\nplane from the ergosphere reach the singularity.\n Massive particles also reach the singularity, for example, if $E=-m$\nor the angular momentum is such that\n \begin{equation} \label{FalSin}\nJ \le J_H \biggl( 1 + \frac{a}{2M} \sqrt{1 + \frac{m^2}{E^2} } \, \biggr).\n\end{equation}\n The proof of all these results is the same for\n$d r \/ d \lambda >0$ and $d r \/ d \lambda < 0$.\n That is why the same conditions are valid for ``white hole'' geodesics\noriginating in the Kerr singularity, arriving in the ergosphere and then going\nback inside the gravitational radius.\n Particles moving in the equatorial plane do not reach the singularity\nif, for example,\n \begin{equation} \label{FalNS}\n\frac{|E|}{m} \ll \frac{r_C}{M} , \ \ \frac{|J|}{m M} \ll \frac{r_C}{M} .\n\end{equation}\n Then, after reaching some minimal value of the radial coordinate,\nthe particle can turn its motion in the direction of larger $r$ and come\nback to the ergosphere along a white hole geodesic.\n\n Let us come back to the question of the collision energy of a\nparticle falling onto the black hole and a particle with negative energy\nmoving in the ergosphere.\n As was shown previously (Eqs.~(\ref{JmErg}), (\ref{JgEHf})),\nparticles with large in absolute value negative angular momentum can reach\nthe region close to the outer boundary of the ergosphere.\n The energy of collision of such a particle with an ordinary particle\nwill be very large due to~(\ref{KerrJB}) at any point of the 
ergosphere.\n\n The existence of particles moving from the gravitational radius in the\ndirection of larger $r$ along white hole geodesics gives us\na new opportunity for collisions with unlimited energy near black holes,\nindependent of the angular momentum.\n As we can see from~(\ref{KerrL1L2}), the difference between the energies\nof collisions with a particle moving to increasing $r$ $ (\sigma_{2r} =1)$\nand to decreasing $r$ $ (\sigma_{2r} =-1)$ is\n \begin{equation} \label{DDD}\n\Delta E_{c.m}^2 = \frac{4 \sqrt{R_1} \sqrt{R_2}} {\rho^2 \Delta} .\n\end{equation}\n This difference is equal to zero at the top point of the trajectory,\nwhen $R_2 =0$.\n But for non-critical particles $(J \ne J_H)$ the difference becomes\ninfinitely large for collisions on the horizon $(r \to r_H)$.\n So from~(\ref{KerrL1L2}) one has for such collisions\n$(\sigma_{1r} \sigma_{2r} =-1)$ for $r\to r_H$: \,\n$P_1 P_2 - \sigma_{1r} \sqrt{R_1} \sigma_{2r} \sqrt{R_2} > 0$, \, $\Delta \to 0$,\n \begin{equation} \label{ss12m}\nE_{c.m} \sim \frac{ \sqrt{ 2 (P_1 P_2 - \sigma_{1r} \sqrt{R_1} \sigma_{2r} \sqrt{R_2})}}\n{\rho \sqrt{\Delta}} \approx\n\frac{2 a}{ \sqrt{r-r_H} } \sqrt{\n\frac{(J_{1H} - J_1 ) (J_{2H} - J_2 ) }\n{(r_H^2 + a^2 \cos^2 \theta )(r_H-r_C)}} \to +\infty .\n\end{equation}\n The analogue of the result~(\ref{ss12m}) is valid not only for Kerr\nblack holes but also for Schwarzschild black holes:\n \begin{equation} \label{SHw}\na=0, \ \ \sigma_1 \sigma_2 =-1, \ \ \\nE_{c.m} \sim 2 \sqrt{ \frac{E_1 E_2}{ 1 - (r_H\/ r)}} \to +\infty, \ \\nr \to r_H=2M.\n\end{equation}\n As is known from the textbook~\cite{HawkingEllis},\nfor the Schwarzschild case geodesic completeness leads to the existence of\n``white hole'' geodesics.\n Note that the energies of particles $E_1, E_2$ cannot be negative\noutside the event horizon of Schwarzschild black holes~\cite{GribPavlov2010NE}.\n One can see from~(\ref{Relsk03}) that the effect of the large 
energy\nis explained by the large Lorentz factor, so that $v_{\rm rel} \sim 1$\non the horizon.\n\n The formulas~(\ref{ss12m}), (\ref{SHw}) can be interpreted as an instability\nof the configuration with ``white hole'' geodesics and therefore an instability of\neternal black holes.\n An evaluation of the role of gravitational wave radiation at large\nvalues of the energy in the centre of mass frame is needed to get the final\nanswer about the physical meaning of the obtained results.\n\n\n\vspace{7mm}\n{\centering \section{Conclusion}\n\label{Conclusion}}\n\n\n1) Geodesics with negative energy in the equatorial plane\noriginate inside the gravitational radius of the rotating black hole\nand are ``white hole'' geodesics!\n This is similar to the case of the Schwarzschild eternal black hole, for which\nit was shown in~\cite{GribPavlov2010NE} that negative energy\ntrajectories arise in the white hole past singularity.\n However, differently from the eternal Schwarzschild black hole, where there\nare two different spacelike singular surfaces --- those of the black hole and the\nwhite hole --- here we have one timelike Kerr singular surface on which some\ngeodesics originate and some terminate.\n\n2)\nOne can get information about the interior of the gravitational radius if\nsome particles move along these geodesics!\n So there is no cosmic censorship for such rotating black holes if one\nunderstands cosmic censorship as the impossibility of getting information from\nthe region inside the gravitational radius.\n The cosmonaut can get direct information about the interior of the\ngravitational radius only inside the ergosphere.\nHowever, if one considers the interaction of negative energy particles radiated by\nthe ``black-white'' hole into the ergosphere with ordinary positive energy particles\nescaping the ergosphere, this information can be obtained by any external observer.\n The electromagnetic interaction of the negative energy photons with usual\nmatter can lead to some new physical process of 
``annihilation'' of this\nmatter inside the ergosphere.\n The energy in the centre of mass frame of the collision of an ordinary\npositive energy particle with a particle with negative energy grows\nwithout limit on the horizon.\n\n3)\nThe mass of the ``black-white'' hole can grow due to the radiation of negative\nenergy particles into the ergosphere, where these particles interact with\npositive energy particles.\n\n Certainly, all the results in this paper correspond to the exact Kerr solution.\n For physically realized rotating black holes formed as a result of stellar\ncollapse, due to the instability of the interior solution inside the gravitational\nradius, especially inside the Cauchy horizon (see~\cite{GribPavlov2008UFN}),\nthe results can be different and special research is needed.\n\n\n\vspace{0mm}\n {\bf Acknowledgments.}\n The research is supported by a grant from the John Templeton Foundation.\n\n\n\vspace{5mm}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Light quark confinement\footnote{This is the third lecture on quark \nconfinement given by V.N.Gribov in 1992 in Orsay. An extensive discussion of the\nconsequences of all this for the structure of the Green function can be found in\n[5,6] - the last two papers concluding his 20-year-long study of the problem\nof quark confinement in QCD.}\n\footnote{The text was prepared for\npublication by Yu. Dokshitzer, B. Metsch and J. Nyiri on the basis of a tape \nrecording and notes taken during the lecture}}\nWe have described the confinement of heavy quarks in an analogy with \nthe theory of the supercharged nucleus [1,2]. Let us now suppose again\nthat $\alpha$ is behaving like\n\[ \picbox{O1.pstex_t} \]\nMaking this assumption, we are neglecting gluon-gluon interactions and the\nexistence of gluons as real particles. Our aim is to see what can arise\nfrom the discussion of light quarks only. We introduce $\lambda$
We introduce $\\lambda$\ncorresponding to $\\alpha_c$ and consider quark masses $m_0 \\ll \\lambda$.\nThe interactions of light quarks (for which $m_0 \\ll \\lambda$) will be\ndiscussed in a rather simplified way. We will take into account all\npossible interactions \n\\[ \\picbox{O2.pstex_t} \\]\nwhere the gluon propagator (considered as an effective photon),\ncorresponding to the dotted line is \n\\begin{equation}\n\\label{1}\nD_{\\mu\\nu} = \\frac{\\alpha}{q^2}\\delta_{\\mu\\nu} .\n\\end{equation}\nFurther, we look for a model which enables us to see, what happens to\nthe fermions if there is an interaction \nbetween them, as indicated above. The\nquestion is, how the bound states or the Green function behave in such\na case.\n\nLet us consider the energy of two quarks, $u$ and $\\overline{d}$, for\nexample. Without interaction there will be positive energy states with\n$E > 2m$\nand negative energy states with $E<-2m$:\n\\[ \\picbox{O3.pstex_t} \\]\nIntroducing the interaction, for small $\\alpha$ we will\nfind that there are some bound states near $2m$ and $-2m$.\n\\[ \\picbox{O4.pstex_t} \\]\nSo far, we consider the usual Dirac vacuum: the negative \nenergy\nstates are\noccupied, and the positive ones are empty. Increasing the coupling, i.e.\nincreasing $\\alpha$, we could expect that the \nmagnitude of the energy is decreasing\nand the levels corresponding to the bound states will come closer and\ncloser to zero.\n\\[ \\picbox{O5.pstex_t} \\]\nWith a further increase of the coupling up to a critical value, one\npossibility for the levels will be just to approach the zero line and\nnever cross. There is, however, also a possibility of crossing.\nWe will see that the first case corresponds to normal spontaneous\nsymmetry breaking. 
But, if the levels cross, and especially in the clearest case, when they pass the lines $2m$ and $-2m$,\nrespectively, everything will change and we arrive at\nvery different phenomena:\n\[ \picbox{O6.pstex_t} \]\nIndeed, now the original vacuum, which corresponds to the case when level $2$ is\nempty and level $1$ is occupied, is absolutely unstable. We have to\nfill the new negative energy state and leave the positive energy\nlevel empty. But by filling this new state, we get an excitation, a meson-type\nstate with a mass $\mu$. For free quarks this would mean that the quark\nwith negative energy decays into a negative energy meson (filling the\nnegative energy levels) and creates a positive energy quark. As a result,\nthe Dirac picture in which all negative energy levels are filled up and\nall positive energy levels are empty is destroyed. But if so, a positive\nenergy quark also decays into a positive energy meson and a quark with\nnegative energy. This means that both decays\n\begin{eqnarray*}\n q^- & \rightarrow & \mu_- + q^+ \\\n q^+ & \rightarrow & \mu_+ + q^-\n\end{eqnarray*}\nare possible, and both $q^-$ and $q^+$ are unstable.\n\nThe question now is how to deal with the bound state\nproblem. Of course, we could\njust start to calculate the bound states, considering the interactions\nwithout corrections to the Green function. However, one has to take\ninto account \nthat the fermion-fermion interaction changes the effective mass of the\nquarks, and this in its turn will change the bound states considerably,\n\[ \picbox{O7.pstex_t} \]\nwhich makes the problem more complicated. We thus have to consider\nbound states and the Green functions\non an equal footing. \n\nUp to now, the only approach which deals with this problem and is\nself-consistent is the Nambu - Jona-Lasinio model [3]. 
It considers the\nfermion Green function\ncorrections \ndue to a four-fermion interaction:\n\[ \picbox{O8.pstex_t} \]\nIn spite of the strong dependence on the cut-off, the model preserves\nall symmetries in the Green function and in two-particle interactions.\nLet us present the result of Nambu and Jona-Lasinio in a way somewhat\ndifferent from that given in [3]. We express it as the dependence\nof the renormalized mass \n$m$ on the bare mass\n$m_0$. \nThey found that if the\neffective coupling (it depends on the definition in their case)\n$\frac{g^2}{2\pi}$ is less\nthan unity, the curve is just the usual one:\n\[ \picbox{O9.pstex_t} \]\nIf, however, \nthe coupling\n$\frac{g^2}{2\pi}$ is larger than unity, the dependence\nlooks like \n\footnote{This result is not always quoted, but it is\npresent in their paper.}:\n\[ \picbox{O10.pstex_t} \]\nAccording to the interpretation of Nambu and Jona-Lasinio, the upper part\nof the curve, which at $m_0=0$ reaches a finite point, corresponds to\nspontaneous symmetry breaking. But there are three solutions at $m_0=0$\nand at sufficiently small $m_0$ values. What Nambu and Jona-Lasinio claim\nis that the lower part of the curve is unstable, and there is a real\nvacuum. I agree with this if $m_0>m_{crit}$.\nWe can ask: what is the source of the instability\nof this curve? The general argumentation is the following. Talking\nabout spontaneous symmetry breaking, $m$ is like a magnetic field in a\nferromagnet; we just choose a definite direction. But $m_0$ is like an\nexternal field, and the system is like a compass. If the external field\nand the induced field point in the same direction, the situation\nis stable. If they point in opposite directions, the compass will turn.\n\nI am, however, not sure that the instability of the almost perturbative\nsolution which contains no condensate at all can be explained in such\na way. 
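The three-solution structure of the $m(m_0)$ curve can be mimicked by a schematic cut-off gap equation of the Nambu - Jona-Lasinio type, $m = m_0 + g\,m\,h(m)$ with $h(m) = 1 - m^2 \ln(1+1/m^2)$ in units of the cut-off. The normalization of $g$ and the exact form of $h$ here are illustrative assumptions (not the $g^2/2\pi$ of [3]); the sketch only counts the solutions:

```python
import numpy as np

def gap_residual(m, m0, g):
    """Residual of the schematic gap equation m = m0 + g*m*h(m), cutoff = 1.

    h(m) = 1 - m^2 * ln(1 + 1/m^2) is a typical cut-off fermion-loop
    shape; the normalization of g is illustrative.
    """
    h = 1.0 - m**2 * np.log(1.0 + 1.0 / m**2)
    return m - m0 - g * m * h

def count_solutions(g, m0, mmax=3.0, n=6000):
    """Count the real solutions by sign changes of the residual on a grid."""
    m = np.linspace(-mmax, mmax, n)   # even n: the grid avoids m = 0 exactly
    f = gap_residual(m, m0, g)
    return int(np.sum(np.diff(np.sign(f)) != 0))

# Weak coupling: a single solution m ~ m0.  Strong coupling and small m0:
# three solutions, as on the three branches of the curve in Fig.2.
print(count_solutions(0.5, 0.01), count_solutions(1.5, 0.01))
```

For large $m_0$ the strong-coupling curve again has a single branch, in line with the remark that the lower solutions disappear above $m_{crit}$.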
The explanation may be right for the part $b$ of the curve in Fig.2 which corresponds to a big spontaneous magnetic moment and the opposite direction of $m_0$. It does not work for the part $a$ close to perturbation theory which has no spontaneous magnetic moment. And, looking more carefully at the curve, we see that the part $b$ corresponds to pseudoscalar states inside the Dirac sea, while on the piece $a$ both the pseudoscalar and scalar states are inside, both levels having passed. Recognizing this, one can conclude that indeed, the mentioned state is unstable, but for a trivial reason: the corresponding level is inside the Dirac sea and it is not filled up. The problem is what happens if we fill up this level. It remains an open question what can be considered as a ground state under these conditions. And it is a problem how to get these results in a more self-consistent way, not depending on the cut-off so strongly.\n\nThe Nambu - Jona-Lasinio model can be reproduced in our picture. For this purpose, just as a theoretical exercise, let us use $\alpha$ not going to unity, i.e. draw\n\[ \picbox{O11.pstex_t} \]\ninstead of the curve in Fig.1. In this case there will be a second scale $\lambda_2$, and outside this scale we will have just a point-like interaction. This is reminiscent of the Nambu - Jona-Lasinio picture, which, apparently, can be reached somehow in our approach. The problem is how to write constructively the corresponding equation, and whether this can be done at all. Of course, this constructive part can be only approximate. But if we recognize that it can be written, then we will be able to develop a theory in which we put the main ingredient of our discussion as an input into our solution and try to find the real construction. The main difference will appear in the analytic properties of the Green function.
The Green function of a fermion for such a case would be quite different in its analytic structure compared to the usual one.\n\nI am afraid I will not have the time to come to this point today, but I would like to explain just the physics.\n\nHow to write the equation? What happens in the real case and how to deal with it? Let us start with the Green function. What do we know, what are we supposed to know about this Green function as a function of $q^2$?\n\[ \picbox{unnamed.pstex_t} \]\nBeyond a certain $\lambda$, in the region where the coupling is small, it is asymptotically free; here the Green function has to satisfy the renorm group equation. But if as a result of the interaction a mass is acquired, this mass would be somewhere at smaller $q^2$; here the equation becomes essentially very complicated and we are not able to extract a reasonable structure.\n\nThe idea to write an equation which is correct in both regions, near the threshold and at large $q^2$, and to match these two solutions, comes from the following consideration. Suppose that $\alpha_{crit}\/\pi$ is small:\n\[ \frac{\alpha_{crit}}{\pi} \approx 1-\sqrt{\frac{2}{3}} \approx 0.2 .\]\nNow, however, we may ask: how could new masses, new solutions etc. appear at all at such a small $\alpha$. Obviously, this $\alpha$ has to be multiplied by something large. What happens, for example, at large $q$? We know that there is always a logarithm of $q^2\/\lambda^2$ and the real parameter becomes\n\[ \alpha_0 \ln \frac{q^2}{\lambda^2} \]\nwhich is, in spite of the smallness of $\alpha$, big enough to change the Green function essentially. But near the threshold there is also a logarithm:\n\[ \ln \frac{q^2-m^2}{\tilde{\lambda}^2}\,,\qquad \n \mbox{ with some scale }\tilde{\lambda}= \lambda \mbox{ or } m \]\nwhich is always present.
In other words, in this region there could also be a quantity which changes seriously in spite of the relative smallness of $\alpha$. Hence, we want to write the equation which is correct near the threshold, taking into account correctly the singularity of a supposed mass, and after that compare this with the renorm group equation; we shall see whether it is possible to write an equation which is correct in both regions, and if yes, we will try to solve it. In order to get the singularity correctly, we take the second derivative of $G^{-1}$ with respect to the momenta.\n\begin{equation}\n\label{2}\n \picbox{O13.pstex_t}\n\end{equation}\nThe contribution of the first term is trivial, the second derivative of $\hat q+m$ gives zero. Taking the second derivative of the first graph, it can be easily seen that\n\begin{equation}\n\label{3}\n\partial^2 \frac{1}{(q-q')^2-i\varepsilon} = -4\pi i\delta^{(4)}(q-q') .\n\end{equation}\nThis gives for the first diagram $\gamma_{\mu}G(q)\gamma_{\mu}\frac{\alpha}{\pi}$ -- just by direct calculation. In other words, we make it local. From this diagram we take the contribution where $k \equiv (q-q')$ -- the momentum of the photon -- essentially equals zero.\n\nLet us now look at the second diagram. We have here the choice of taking the second derivative at one of the photon lines, or of differentiating once at one line and once at the second line. Having in mind that all the integrations would have a structure which needs some logarithmic enhancement, the most important regions of integration in this integral would be those where $k_1 \ll k_2$ and $k_2 \ll k_1$. We take the derivative at $k_1$ and then set it to be zero, but for $k_2$ the integration will give the same as before. If this integration gives us two logarithms, we kill one and recover it after the integration of our differential equation; but we still have the first one.
But if we differentiate once one line and once the other, we will always sit on the region $k_1 \sim k_2$, because they have to be of the same order. And, in this case, there is no logarithm at all; after the integration, we will recover one, but one order will be lost. Clearly, a possible approach is to try not to choose different diagrams, but to use the small $k$ region of integration. Ordering the integration inside the diagram in such a way that one momentum is much smaller than the others, and differentiating this diagram, we will find a relatively simple answer. Indeed, suppose that we have an arbitrary diagram with any number of loops. If we differentiate some line twice (it can be any line) and neglect all first derivatives, we get an amplitude of the following structure:\n\[ \picbox{O14.pstex_t} \]\nThis is just the Compton scattering of a zero momentum photon $k=0$, and for this quantity the most singular contribution is obviously\n\[ \Gamma_{\mu}(0,q)G(q)\Gamma_{\mu}(0,q) \]\nwhich corresponds to the diagram\n\[ \picbox{O15.pstex_t} .\]\n\nBut the vertex $\Gamma$ is at zero momentum and hence $\Gamma_{\mu}(0,q) = \partial_{\mu}G^{-1}(q)$. In this approximation we can write a very simple equation:\n\begin{equation}\n\label{4}\n\partial^2 G^{-1}(q) = \frac{\alpha(q)}{\pi}\partial_{\mu}G^{-1}(q)G(q)\n \partial_{\mu}G^{-1}(q)\n\end{equation}\nwhich differs essentially from any Bethe-Salpeter type equation. Indeed, using a Bethe-Salpeter type equation, we do not change the vertex part and end up with rather bad properties. Equation (\ref{4}) is scale invariant, it is $\gamma_5$-invariant, it has many nice symmetry properties and, what is most important, it has a correct behaviour near the threshold.\n\nThe gauge is fixed, because we used\n\[ D_{\mu\nu} = \frac{\delta_{\mu\nu}}{q^2}.
\\]\nIt is an important question, what we would get in different gauges.\nIn Feynman gauge we are very lucky: we find an expression which does\nnot depend explicitly on the expression for the Green function.\nUsing a different gauge, we would find the infrared behaviour of this\ndiagram to be more complicated, and we would not be able to extract\nuniversally the region of small $k$. We would have integrals over $q$\nwhich are also possible to use, but with the necessity to think about the\nbehaviour near the threshold.\n\nWe, however, have chosen this gauge; we did not destroy the general\nfeatures and used the current conservation which just corresponds to\n$\\Gamma_{\\mu} = \\partial_{\\mu}G^{-1} $. Accepting this, we can now ask,\nwhat is the relation to the renorm group equation.\n\nSuppose that we would like to write the renorm group equation in the\nsame spirit. Let us take again the second derivatives. In this case\nwe would be definitely correct, because we know that it is a logarithmic\napproximation.\n\nIn our logarithmic approximation we will do exactly the same with the\nonly difference that $\\alpha$ would be $\\alpha(q^2)$ .\nIn the renorm group equation $\\alpha$ is a function of $q^2$. But, of\ncourse, $\\alpha$ in general depends on two momenta: $k^2$ and $q^2$.\nAnd in the renorm group equation at large \nmomenta, in the ultraviolet\nregion, $\\alpha$ depends on the variable which is the largest.\nSince we are close to $k=0$, this means that here we will have\n$\\alpha(q^2)$, \nand we will recover the renorm\ngroup equation at large $q^2$. If we solve this equation with a\nslowly varying $\\alpha$, we will be correct in both the threshold region\nand the ultraviolet region.\n\nWe also have to formulate an equation for the bound states under the same\nassumption. Looking for bound states, we consider scalar and pseudoscalar\nvertices. 
This vertex\n\[ \picbox{O16.pstex_t} \]\nhas to be equal to\n\[ \picbox{O17.pstex_t} \]\n\nHere $\varphi(q,p)$ depends on $p$, the total momentum of a pair, and $q$, the quark momenta being given by $q+p\/2$ and $q-p\/2$. With the same procedure as in obtaining the equation for the Green function we find for the vertex (see [4] for some details):\n\begin{equation}\n\label{5}\n\partial^2 \varphi(q,p) = \frac{\alpha}{\pi}\left[ A_{\mu}(q)\partial_{\mu}\n \varphi(q,p) + \partial_{\mu}\varphi\tilde A_{\mu}(q) - A_{\mu}(q)\varphi(q,p) \tilde A_{\mu}(q)\right]\,,\n\end{equation}\nwhere $A_{\mu}=\partial_{\mu}G^{-1}G$, $\tilde A_{\mu}= G\partial_{\mu}G^{-1}$\,. This means that we have two equations in this approximation. We used this approximation just to be constructive and to study what will result if we make this approximation. In principle, solving both equations we will get everything that is necessary: we know $G$ and $A_{\mu}$, we have a linear equation for bound states, we can see what the type of the energy is, etc. The equation for the bound states has very nice features. It is beautiful from the point of view of the Goldstone theorem in the following sense.\n\nSuppose I have some symmetry in my equation, e.g. $\gamma_5$-invariance. Since there is no $\gamma_5$ in equation (5), it is $\gamma_5$-invariant. But of course the boundary condition for $G^{-1}$ at $q \to \infty$ is just $G_0^{-1}=(\hat q - m_0)$, and thus destroys the symmetry. But suppose that $m_0=0$. In this case there would be symmetry here, which means that the Green function will not be unique, since it can be\n\begin{displaymath}\n G^{-1} + \delta G^{-1}, \n\end{displaymath}\nwhere $G^{-1}$ is some solution and $ \delta G^{-1} \propto \gamma_5 G^{-1}$\,. This means that the variation $\delta G^{-1}$ also is important. What would be the equation for the variation?
If we calculate the variation on both sides of equation (4) we obtain\n\begin{displaymath}\n \partial^2 (\delta G^{-1}) = \frac{\alpha}{\pi}\left(\n \partial_\mu(\delta G^{-1})G\partial_\mu G^{-1} +\n \partial_\mu G^{-1} G \partial_\mu (\delta G^{-1}) -\n \partial_\mu G^{-1} G (\delta G^{-1}) G \partial_\mu G^{-1}\n \right) \,,\n\end{displaymath}\nso we find that $\varphi=\delta G^{-1}$ fulfils equation (5) at $p=0$. This means that if some symmetry is broken, i.e. if there are multiple solutions of the equation for the Green function, we will always have some solution of the equation for the vertex at $p=0$, which is the Goldstone.\n\nIt is clear that in the present model we can discuss many questions, use a running coupling $\alpha$ as in Fig. 1 and reproduce the NJL-features without anything essential depending on a cutoff. Before discussing this point further, we will first look for the solution of (4) and discuss the result.\n\nAbove, we introduced $A_\mu = (\partial_\mu G^{-1}) G$, which is a very useful quantity. Since $G^{-1}=a\frac{\hat q}{q}+b$ is essentially a $2 \times 2$ matrix, $A_\mu$ is a $U(2)$ gauge potential:\n\begin{equation}\n\label{6}\n \partial^2G^{-1} = \partial_\mu((\partial_\mu G^{-1})\,G\,G^{-1}) \n = (\partial_\mu A_\mu)G^{-1} + A_\mu (\partial_\mu G^{-1}) =\n \frac{\alpha}{\pi}A_\mu\partial_\mu G^{-1}\n\end{equation}\nwhere in the last step we used Eq. (4). Multiplying from the right by $G$ we thus find\n\begin{equation}\n\label{7}\n\partial_{\mu}A_{\mu} + A_{\mu}A_{\mu} = \frac{\alpha}{\pi}A_{\mu}A_{\mu}.\n\end{equation}\nThis means that\n\[\partial_{\mu}A_{\mu}=-\beta A_{\mu}A_{\mu}\,,\]\nwith $\beta = 1-\frac{\alpha}{\pi}$; so $A_\mu$ is a pure $U(2)$-gauge potential subject to this condition. Of course, this is just a useful trick. What is important is to write down the real equation for the Green function.
The most natural thing is to express $G^{-1}$ in the form\n\[ G^{-1} = \rho\, e^{\frac{\varphi}{2}\hat{n}} ,\]\nwhere $\hat{n}$ is a $2 \times 2$-matrix\n\[ \hat{n} = \frac{\hat{q}}{q} .\]\nIt is just easier to use this form for our purpose: we can find an equation for $\rho$ and an equation for $\varphi$. Both are functions of $q^2$: $\rho(q^2)$, $\varphi(q^2)$. There is, however, no scale in the equation; it contains only a derivative of $q$. If we introduce\n\[ \xi = \ln q ,\]\nwe will find an equation in which $\xi$ can be considered as ``time'', and which is an oscillator equation. In fact there are two oscillators, one for $\rho$, the other for $\varphi$, and they will satisfy non-linear equations. For $\varphi$ we find\n\begin{equation}\n\label{8}\n\ddot{\varphi} + 2\left(1+\beta\frac{\dot{\rho}}{\rho}\right)\dot{\varphi}\n - 3\sinh\varphi = 0 .\n\end{equation}\nThis is just an oscillator with damping; a similar equation can be written for $\rho$. What is important is that there has to be ``energy'' conservation in this equation. Indeed, we said that $\xi$ plays the role of time; it, however, did not enter the equation explicitly. Thus there has to be a conservation law which, as it is easy to show, leads to\n\begin{equation}\n\label{9}\n\left(1+\beta \frac{\dot{\rho}}{\rho}\right)^2 = 1 + \n \beta^2\left(\frac{\dot{\varphi}^2}{4}-3\sinh^2 \frac{\varphi}{2}\right) .\n\end{equation}\nWe can thus eliminate $\rho$ altogether, and find the equation for $\varphi$\n\begin{displaymath}\n\ddot{\varphi} +\n 2\sqrt{1-\beta^2\left(3\sinh^2\!\frac{\varphi}{2}-\frac{\dot\varphi^2}{4}\right)}\dot{\varphi}\n - 3\sinh\varphi = 0\,,\n\end{displaymath}\nwhich is an oscillator with damping. Having this in mind is sufficient to understand the structure of the solution. Indeed, what is this $\varphi$ ?
We have\n\begin{equation}\n\label{10}\nG^{-1} = \rho\cosh\frac{\varphi}{2} + \frac{\hat{q}}{q}\rho\sinh\n \frac{\varphi}{2} .\n\end{equation}\nThe perturbative solution is $\varphi$ close to $i\pi$. In this case the first term is zero, the second is proportional to $\hat{q}\/q$ - this corresponds to the massless solution. Since $m_0$ is small, we have to have solutions like this at $q \rightarrow \infty$.\n\nNow we have to find the solution everywhere. Let us first investigate the equation without damping; we get\n\[ \ddot{\varphi} - 3\sinh\varphi = 0 .\]\nIf we go to the Euclidean space, $\varphi = i\psi$, the potential becomes a periodic potential:\n\[ \ddot{\psi} - 3\sin\psi = 0 .\]\n\[ \picbox{O18.pstex_t} \]\nWe have to look for a possible solution for this structure with damping. What does this mean? For the oscillator with damping any solution at $\xi \rightarrow \infty$ has to be in a minimum, because the energy is decreasing. But if $\xi$ is going to $-\infty$, the energy is growing. What could a normal, reasonable solution be in this case? It is almost clear that the only possibility is to put at $\xi \rightarrow -\infty$ the ``particles'' in this oscillator at the maximum, and start to move them slowly; eventually, they will appear inside the well.\n\nThere is a most important question, namely: what is the critical coupling in this case? What do we know about an oscillator with damping? If the damping is large enough, all the trajectories will go monotonically to the minimum. If the damping is sufficiently small, the solution will start to oscillate. In order to see when this transition happens, we have to look for the equation just near the minimum $\psi=\pi$.
With $\phi \equiv \pi - \psi$ we have for small $\phi$\n\begin{displaymath}\n \ddot \phi + 2 \sqrt{1+3\beta^2}\dot\phi + 3 \phi = 0\,,\n\end{displaymath}\nwith fundamental solutions\n\begin{displaymath}\n \phi_{1,2} = \mbox{e}^{\nu_{1,2}\xi} \mbox{ where } \nu_{1,2} =\n -\sqrt{1+3\beta^2}\pm\sqrt{3\beta^2-2}\,.\n\end{displaymath}\nSo we have monotonic behaviour for $3\beta^2-2>0$. If, on the other hand, $\beta^2<\frac{2}{3}$, i.e.\n\begin{displaymath}\n \frac{\alpha_{crit}}{\pi} = 1 - \sqrt{\frac{2}{3}} <\n \frac{\alpha}{\pi} < 1 + \sqrt{\frac{2}{3}}\,,\n\end{displaymath}\nwe will have oscillations before reaching the minimum. The critical angle $\psi_c$, which separates the regions where the solution is monotonic and where it oscillates, can be shown (see e.g. [4]) to be given by\n\begin{displaymath}\n \sin^2 \frac{\psi_c}{2} = \left(\frac{2}{3}-\beta^2\right)\n \sqrt{\frac{1+3\beta^2}{1-\beta^2}}\n \frac{1}{1 + \sqrt{(1+3\beta^2)(1-\beta^2)}}\,.\n\end{displaymath}\n\[ \picbox{O19.pstex_t} \]\n\nUp to now we have considered a constant coupling $\alpha$. We know that for $q>\lambda$ the Green function is determined by perturbation theory, which has to match the solutions in the region of smaller $q$. If $\beta^2>\frac{2}{3}$ for all $q^2$, the solution which goes as $\psi \approx \frac{q}{m_c}$ for $q\to 0$ matches the solution $\frac{i}{2}(\psi-\pi) \approx \frac{m_0}{q} + \frac{\nu_1^2}{q^3}$ for $q\to \infty$ monotonically. This determines $m_0$ as a function of $m_c$ in a unique way. Let $\lambda$ be the value of $q$ where $\beta^2(\lambda^2)=\frac{2}{3}$. If, however, $\beta^2<\frac{2}{3}$ below $q=\lambda$, the solutions can oscillate and $m_0(m_{c_i})=0$ for some $m_{c_i}$ as indicated in Fig.6.
\n\\[ \\picbox{O21.pstex_t} \\]\n \\[ Fig.6 \\]\n This then is a\nsolution corresponding to broken chiral symmetry.\n\n\\section*{References}\n\\begin{description}\n\\item[1] V. N. Gribov, Orsay lectures on confinement (I), \npreprint LPTHE\n Orsay 92-60 (1993); hep-ph\/9403218\n\\item[2] V. N. Gribov, Orsay lectures on confinement (II), preprint LPTHE\n Orsay 94-20 (1994); hep-ph\/9407269\n\\item[3] Y. Nambu, G. Jona-Lasinio, Phys. Rev. \\underline{122} (1965), 345\n\\item[4] V. N. Gribov, Lund preprint LU 91-7 (1991)\n\\item[5] V. N. Gribov, QCD at large and short distances, \n Bonn preprint TK 97-08 (1997), hep-ph\/9807224.\n\\item[6] V. N. Gribov, The theory of quark confinement, Bonn preprint\n TK 98-09 (1998), hep-ph\/9902279\n\\end{description}\n\\end{document}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe following bound was introduced by Lov{\\'a}sz in his celebrated paper on Shannon capacity. In particular it implies famous Hoffman bound~\\cite{haemers2021hoffman}. We provide the proof in the interest of completeness.\n\n\\begin{theorem}[Lov{\\'a}sz,~\\cite{lovasz1979shannon}]\nLet $G = (V,E)$ be a simple graph. Consider a symmetric real matrix $A$ such that $A_{ij} = 1$ for every pair $\\{i,j\\} \\notin E(G)$.\nThen\n\\[\n\\alpha(G) \\leq \\lambda_{max}(A),\n\\]\nwhere $\\alpha(G)$ is the size of a maximal independent set of $G$ and $\\lambda_{max}(A)$ is the maximal eigenvalue of $A$.\n\\label{lovasz}\n\\end{theorem}\n\n\\begin{proof} Let $I$ be an independent set, denote by $\\chi_I$ its characteristic vector. Then\n\\[\n(A\\chi_I, \\chi_I) = |I|^2.\n\\]\nFrom the other hand, one has\n\\[\n(A\\chi_I, \\chi_I) = \\sum c_i^2\\lambda_i \\leq \\left(\\sum c_i^2 \\right) \\lambda_{max} = |I| \\cdot \\lambda_{max},\n\\]\nwhere $\\chi_I = \\sum c_i v_i$ is the decomposition of $\\chi_I$ via orthonormal eigenbasis $\\{v_i\\}$ of $A$. 
\n(We use that symmetric matrix has real spectrum and the length of $\\chi_I$ is the same in the standard basis and in $\\{v_i\\}$, i.e. $\\sum c_i^2 = |I|$.)\n\\end{proof}\n\nThe minimum of $\\lambda_{max}(A)$ over the appropriate $A$ is called \\textit{Lov{\\'a}sz number} or \\textit{Lov{\\'a}sz theta function} of a graph.\n\nAlso we need the following corollaries. \nSuppose that $A$ and $G$ satisfy the conditions of Theorem~\\ref{lovasz}.\nLet $c$, $\\lambda_{max}$ and $\\spp$ stand for the minimal entry, the maximal eigenvalue and the spectral radius of $A$, respectively.\n\n\\begin{corollary}\nLet $I$ be a set with at most $\\varepsilon |I|^2\/2$ edges inside. \nSuppose that $(1-c)\\varepsilon < 1$. Then\n\\[\n|I| \\leq \\frac{\\lambda_{max}}{1-(1-c)\\varepsilon}.\n\\]\n\\label{supersaturation}\n\\end{corollary}\n\n\n\\begin{corollary}\nLet $I$ and $J$ be subsets of $V(G)$ with at most $\\varepsilon |I| \\cdot |J|$ edges between $I$ and $J$ (edges in $I\\cap J$ are counted twice here).\nSuppose that $(1-c)\\varepsilon < 1$. Then\n\\[\n|I| \\cdot |J| \\leq \\left( \\frac{\\spp}{1-(1-c)\\varepsilon} \\right)^2.\n\\]\n\\label{cross}\n\\end{corollary}\n\nFor the bounds on \\textit{disjoint} $I$ and $J$ one can make the class of appropriate matrices slightly wider, i.e.\nnot demand $A_{ii} = 1$. Then one may combine the proof of Corollary~\\ref{cross} with Proposition 4.1 in the paper of Haemers~\\cite{haemers2001bicliques}.\n\n\n\n\\subsection{A straightforward application to eventown problem}\n\nLet $F$ be a family of subsets of $[n]$ is \\textit{eventown} if the intersection of any two members is even (in particular all sets have even size).\nBerlekamp~\\cite{berlekamp1969subsets} and Graver~\\cite{graver1975boolean} independently proved $F$ has at most $2^{\\lf n\/2\\right\\rfloor}$ members, which is also best possible.\nThe proof is very short up to general linear algebra. 
Note that every maximal eventown family $F$ is a linear subspace of $\mathbb{F}_2^n$; otherwise one can replace $F$ with $\sppan F$. Since $F$ lies in the orthogonal complement $F^\perp$ and $\dim F + \dim F^\perp = n$, $F$ has dimension at most $\lf n\/2 \right\rfloor$.\n\nConsider the following Hadamard matrix\n\[\nA = \n\begin{pmatrix}\n1 & 1 \\\n1 & -1\n\end{pmatrix}.\n\]\nIts spectrum is $\{\pm \sqrt{2}\}$. Then the spectrum of $M=A^{\otimes n}$ is $\{\pm 2^\frac{n}{2} \}$. Let us consider $G=(2^{[n]},E)$ with $(X,Y)\in E$ iff $|X\cap Y|$ is odd. Then we identify elements of $2^{[n]}$ with $\{0,1\}^n$ in the usual way, as well as the indices for rows and columns of the matrix $M$ (where we mean that $A=(a_{ij})_{i,j\in \{0,1\}}$). Let $M=(m_{st})_{s,t\in \{0,1\}^n}$. Then for each $X,Y\in 2^{[n]}$ we have\n\[\n m_{\chi(X),\chi(Y)}=\prod_{r=1}^n a_{(\chi(X))_r, (\chi(Y))_r}=(-1)^{|\{ r\in \{1,\dots,n\}| a_{(\chi(X))_r, (\chi(Y))_r}=-1 \}|}=(-1)^{|X\cap Y|}.\n\]\nThus we see that the graph $G$ and the matrix $M$ satisfy the conditions of Theorem~\ref{lovasz}.\n\nApplying Theorem~\ref{lovasz} one has $|F| \leq 2^{n\/2}$.
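The two facts used above — that $M=A^{\otimes n}$ has entries $(-1)^{|X\cap Y|}$ and that its eigenvalues are $\pm 2^{n\/2}$ (since $M^2 = 2^n\cdot\mathrm{Id}$) — can be verified numerically for small $n$; the following check is our illustration for $n=4$, with subsets encoded as bitmasks:

```python
# Check, for n = 4, that M = A^{tensor n} with A = [[1,1],[1,-1]] has
# entries (-1)^{|X cap Y|} and satisfies M^2 = 2^n * Id, so every
# eigenvalue of the symmetric matrix M is +-2^{n/2}.

n = 4
N = 1 << n  # subsets of [n] encoded as bitmasks

def popcount(x):
    return bin(x).count("1")

# Entry of the n-fold tensor power: product of a_{x_r, y_r} over coordinates.
M = [[(-1) ** popcount(x & y) for y in range(N)] for x in range(N)]

# M^2 = 2^n * Id.
for x in range(N):
    for y in range(N):
        s = sum(M[x][k] * M[k][y] for k in range(N))
        assert s == (2 ** n if x == y else 0)

# An eventown family of maximal size 2^{n/2}: the span of {1,2} and {3,4}
# (bitmasks 0b0011 and 0b1100); all pairwise intersections are even.
F = [0b0000, 0b0011, 0b1100, 0b1111]
assert all(popcount(f1 & f2) % 2 == 0 for f1 in F for f2 in F)
```

Since $A^2 = 2\,\mathrm{Id}$, the identity $M^2 = 2^n\,\mathrm{Id}$ follows from $(A^{\otimes n})^2 = (A^2)^{\otimes n}$, which is what the loop confirms entrywise.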
For even $n$ we already get another proof of the eventown theorem. For odd $n$ we should also recall that $F$ is a linear subspace, so $|F| \leq 2^{\lf n\/2 \right\rfloor}$.\n\nLet $op(F)$ denote the number of distinct pairs $f_1, f_2 \in F$ for which $|f_1 \cap f_2|$ is odd. O'Neill~\cite{oNeill2021towards} showed that for $1 \leq s \leq 2^{\lf n\/2 \right\rfloor} - 2^{\lf n\/4 \right\rfloor}$ there is a family $F$ with $|F| = 2^{\lf n\/2 \right\rfloor} + s$ and $op(F) = s \cdot 2^{\lf n\/2 \right\rfloor - 1}$. He also conjectured that this example is tight and proved the conjecture for $s = 1,2$. The application of Corollary~\ref{supersaturation} gives a twice weaker bound for even $n$ (and a much weaker bound for odd $n$).\n\n\begin{theorem}\n\label{theoremop}\nLet $|F| = 2^{n\/2} + s$ for some integer $s$. Then\n\[\nop(F) \geq s \cdot 2^{\lf \frac{n}{2} \right\rfloor - 2}.\n\]\n\end{theorem}
Then by the Chinese remainder theorem $|F|\leq p^{\frac{n}{2}}q^{\frac{n}{2}}$.\n\nThe observations above should be folklore; nevertheless, we do not know how to prove the first inequality in the following theorem without spectral graph theory for an arbitrary $k$.\n\begin{theorem}\nIf $F$ is a $k$-town family then \n\[\n|F|\leq k^{\frac{n}{2}}.\n\]\nMoreover suppose that $k$ is prime, and $(f_1,f_2) \neq 0 \pmod k$ for at most $\varepsilon|F|^2$ pairs $f_1,f_2 \in F$.\nIf $\varepsilon < \frac{k-1}{k}$ then\n\[\n|F| \leq \frac{k^{\frac{n}{2}}} {1 - \frac{k}{k-1} \varepsilon}.\n\]\n\label{ktown}\n\end{theorem}\n\nObtaining an example of a $k$-town family $F$ with $|F| = k^{\frac{n}{2}}$ in the case of prime $k$ and even $n$ is equivalent to finding a set of $\frac{n}{2}$ pairwise (and self-) orthogonal linearly independent vectors in $(\mathbb{Z}\/k\mathbb{Z})^n$. There are lots of such sets; for example, for a prime $k=4t+1$ one can consider vectors of the form $v_j=e_{2j-1}+\varepsilon^t e_{2j}$ for $1\leq j\leq \frac{n}{2}$ (here $\{e_j\}$ is the standard basis in $(\mathbb{Z}\/k\mathbb{Z})^n$ and $\varepsilon$ is a primitive root in $\mathbb{Z}\/k\mathbb{Z}$).\n\nFor a general prime $k$ one can choose $v_1,v_2,\dots$ inductively and almost arbitrarily such that $v_j\in \langle v_1,\dots,v_{j-1}\rangle^{\perp}\setminus \langle v_1,\dots,v_{j-1}\rangle$ and $(v_j,v_j)=0$. This can be done: indeed, for $j<\frac{n}{2}-2$ we have $4<\dim \langle v_1,\dots,v_{j-1}\rangle^{\perp} - \dim \langle v_1,\dots,v_{j-1}\rangle$, so we can choose $4$ linearly independent vectors there and find $v_j$ in their span (as any quadratic form in $\geq 3$ variables over a finite field has an isotropic vector).
When $j=[\frac{n}{2}]-2$, we can choose a $4$-dimensional subspace $V\subset (\langle v_1,\dots,v_{j-1}\rangle^{\perp}\setminus \langle v_1,\dots,v_{j-1}\rangle)\cup\{0\}$ as well, and it is also well-known that there exist $v_{\frac{n}{2}-1}, v_{\frac{n}{2}}\in V$ such that $(v_{\frac{n}{2}-1},v_{\frac{n}{2}-1})=(v_{\frac{n}{2}-1},v_{\frac{n}{2}})=(v_{\frac{n}{2}},v_{\frac{n}{2}}) = 0$.\n\nIn some cases where $k$ is non-prime we can obtain examples of a different nature. For example, when $k=m^{2}$ for some integer $m$ one can consider the set of vectors of the form $(mx_1,\dots, mx_n)$. This example shows that in the case of $k$ being a perfect square the first inequality of Theorem~\ref{ktown} is also tight for an odd $n$.\n\nSuppose that we are interested in the scalar product $t$ instead of $0$. Then the statement of Theorem~\ref{ktown} can be slightly improved.\n\n\begin{corollary}\nLet $F$ be a family of vectors from $\{0,\dots,k-1\}^n$, and $(f_1,f_2) = t \pmod k$ for every $f_1,f_2 \in F$. Then\n\[\n|F| \leq c(t,k) \cdot k^{\frac{n}{2}}\n\]\nfor some $\frac{1}{\sqrt{2}} < c(t,k) \leq 1$. Moreover, if $\frac{k}{\gcd (k,t)}$ tends to infinity, then $c(t,k)$ tends to $\frac{1}{\sqrt{2}}$ ($\gcd$ stands for the greatest common divisor).\n\nAssume also that $k$ is prime and $(f_1,f_2) \neq t \pmod k$ for at most $\varepsilon|F|^2$ pairs $f_1,f_2 \in F$ for some $t \neq 0$.\nIf $\varepsilon < \frac{k-1}{k}$ then\n\[\n|F| \leq c(k) \frac{k^{\frac{n}{2}}} {1 - \frac{k}{k-1} \varepsilon},\n\]\nwhere $c(k) < 1$ and $c(k) \to \frac{2\sqrt{2}}{\pi}$ with $k \to \infty$.\n\label{slightlybetter}\n\end{corollary}\n\n\section{Proofs}\n\n\begin{proof}[Proof of Corollary~\ref{supersaturation}]\nDenote by $\chi_I$ the characteristic vector of $I$.
Then\n\[\n(A\chi_I, \chi_I) \geq |I|^2 \cdot ( 1-(1-c)\varepsilon ).\n\]\nOn the other hand, one has\n\[\n(A\chi_I, \chi_I) = \sum c_i^2\lambda_i \leq \left(\sum c_i^2 \right) \lambda_{max} = |I| \cdot \lambda_{max},\n\]\nwhere $\chi_I = \sum c_i v_i$ is the decomposition of $\chi_I$ via an orthonormal eigenbasis $\{v_i\}$ of $A$. \n\end{proof}\n\n\begin{proof}[Proof of Corollary~\ref{cross}]\nDenote by $\chi_I$ and $\chi_J$ the characteristic vectors of $I$ and $J$ respectively. Then\n\[\n(A\chi_I, \chi_J) \geq |I| \cdot |J| \cdot ( 1-(1-c)\varepsilon ).\n\]\nOn the other hand, one has\n\[\n(A\chi_I, \chi_J) = \sum c_i d_i \lambda_i \leq \left (\sum |c_i| \cdot |d_i| \right ) \spp \leq \sqrt{\left(\sum c_i^2 \right) \left(\sum d_i^2 \right) } \cdot \spp = \n\sqrt{|I| \cdot |J|} \cdot \spp,\n\]\nwhere $\chi_I = \sum c_i v_i$ and $\chi_J = \sum d_i v_i$ are the decompositions of $\chi_I$ and $\chi_J$ via an orthonormal eigenbasis $\{v_i\}$ of $A$.\n\end{proof}\n\n\begin{proof}[Proof of Theorem~\ref{ktown}]\n\nConsider the following $k\times k$ matrix:\n\[\na_{jl} = \phi^{jl},\n\]\nwhere $\phi$ is a primitive $k$-th root of unity and $0 \leq j,l \leq k-1$, and let $M=A^{\otimes n}$. Then we have $A^4=k^{2}E$, therefore $|\lambda|=(k^{\frac {1}{2}})^n=k^\frac{n}{2}$ for each eigenvalue $\lambda$ of $M$, i.e., $\spp(M)=k^\frac{n}{2}$. \n\nLet us see that $A$ (and, as a consequence, $M$) has a real eigenbasis. Let $\{e_i\}$ be the standard basis.
Then moving to the basis $\{e_1, \frac{1}{\sqrt{2}}(e_j\pm e_{k+2-j})\}$, or to $\{e_1, e_{\frac{k}{2}+1}, \frac{1}{\sqrt{2}}(e_j\pm e_{k+2-j})\}$ for even $k$ (note that it is a unitary transformation), we obtain a block matrix with two blocks of the form $A_1, iA_2$, where $A_1,A_2$ are real symmetric matrices (they have sizes $\frac{k+1}{2}$ and $\frac{k-1}{2}$ respectively for odd $k$, and $\frac{k}{2}+1$ and $\frac{k}{2}-1$ for even $k$). As a matter of fact, for each $2\leq j\leq k$ we have\n\[\nA(e_j+e_{k+2-j})=\sum_{l=1}^k \phi^{(j-1)(l-1)} e_l +\n\sum_{l=1}^k \phi^{(k+1-j)(l-1)} e_l = 2e_1+ \n\sum_{l=2}^k (\phi^{(j-1)(l-1)} +\n \overline{\phi^{(j-1)(l-1)} })e_l = 2e_1+\n \sum_{l=2}^{[\frac{k}{2}]}2\cdot \Rre (\phi^{(j-1)(l-1)})(e_l+e_{k+2-l})\n\]\nand for each $2\leq j\leq k$, $j\neq \frac{k}{2}+1$,\n\[\nA(e_j-e_{k+2-j})=\sum_{l=1}^k \phi^{(j-1)(l-1)} e_l -\n\sum_{l=1}^k \phi^{(k+1-j)(l-1)} e_l = \n\sum_{l=2}^k (\phi^{(j-1)(l-1)} -\n \overline{\phi^{(j-1)(l-1)} })e_l =\n i\cdot \sum_{l=2}^{[\frac{k}{2}]}2\cdot \Iim(\phi^{(j-1)(l-1)})(e_l-e_{k+2-l}).\n\]\n\nThus we see that this change of basis (over $\mathbb{R}$) leads to a real symmetric matrix and to a purely imaginary symmetric matrix, which both have real eigenbases; therefore $A$ has a real eigenbasis as well. Hence $N := \Rre M$ shares a real eigenbasis with $M$. Obviously, all eigenvalues of $N$ lie in $\{\pm k^{n\/2},0\}$. \n\nLet us consider $G=((\mathbb{Z}\/k\mathbb{Z})^n,E)$, where $(X,Y)=((x_1,\dots,x_n), (y_1, \dots, y_n))\in E$ iff $\sum_{ r=1}^n x_ry_r\neq 0$. Then we also identify $(\mathbb{Z}\/k\mathbb{Z})^n$ with the indices for rows and columns of the matrix $M$ (taking all indices in $A$ modulo $k$). Let $M=(m_{st})_{s,t\in \{0,\dots, k-1\}^n}$.
Now for each $X,Y\\in (\\mathbb{Z}\/k\\mathbb{Z})^n$ with $(X,Y)\\notin E$ we have\n\\[\n m_{X,Y}=\\prod_{r=1}^n a_{x_r, y_r}=\\phi^{\\sum_{r=1}^n x_ry_r}=\\phi^{0}=1.\n\\]\nThus the graph $G$ and the matrix $M$ (and also $N$) satisfy the conditions of Theorem~\\ref{lovasz}, so the first statement of the theorem is proved.\n\n\nNow let $k$ be a prime number. An immediate application of Corollary~\\ref{supersaturation} gives\n\\[\n|F| \\leq \\frac{k^{\\frac{n}{2}}} {1 - t \\varepsilon},\n\\]\nwhere $t = 1 - \\cos \\left ( \\lf \\frac{k}{2} \\right\\rfloor \\frac{2 \\pi}{k} \\right )$, which tends to $2$ as $k \\to \\infty$.\nSo we modify the proof of Corollary~\\ref{supersaturation} in the following way.\n\nFor every root $\\phi$ the matrix $N = N(\\phi)$ satisfies\n\\[\n(N(\\phi) \\chi_F, \\chi_F) = \\sum_{i,j \\in F} N_{ij}(\\phi) \n\\]\nand\n\\[\n(N(\\phi) \\chi_F, \\chi_F) \\leq |F| \\cdot k^{\\frac{n}{2}}.\n\\]\nSumming up these inequalities for all $k$-th roots except $1$ one has\n\\begin{equation}\n (k-1) \\cdot |F|^2 \\cdot (1-\\varepsilon) - \\varepsilon \\cdot |F|^2 \\leq (k-1) \\cdot |F| \\cdot k^{\\frac{n}{2}}\n\\label{13231} \n\\end{equation}\nsince for every $i,j$ that correspond to sets with nonzero scalar product\n\\[\n\\sum_{\\phi \\neq 1} N_{ij}(\\phi) = -1,\n\\]\nand for $i,j$ that correspond to sets with zero scalar product\n\\[\n\\sum_{\\phi \\neq 1} N_{ij}(\\phi) = k-1.\n\\]\nRewriting~\\eqref{13231} finishes the proof.\n\\end{proof}\n\n\\begin{proof}[Proof of Corollary~\\ref{slightlybetter}]\nFix a primitive $k$-th root of unity $\\phi$ and consider the same matrices $A$ and $M$ as in the previous proof. 
\nLet us consider $G=((\\mathbb{Z}\/k\\mathbb{Z})^n,E)$, where $(X,Y)=((x_1,\\dots,x_n), (y_1, \\dots, y_n))\\in E$ iff $\\sum_{ r=1}^n x_ry_r\\neq t$.\nThen $G$ and $N := \\Rre (\\phi^t M)$ satisfy the conditions of Theorem~\\ref{lovasz}.\nNote that $N$ and $M$ share a real eigenbasis, so all eigenvalues of $N$ lie in $\\Rre \\{ \\pm \\phi^t k^{n\/2}, \\pm \\phi^t k^{n\/2}i \\}$. \nHence the spectral radius of $N$ lies between $\\frac{1}{\\sqrt{2}} k^{n\/2}$ and $k^{n\/2}$. \n\nLet us check the second part of the first proposition. Note that for any coprime $x,y$ we have $\\frac{xy}{\\gcd (xy,t)}=\\frac{x}{\\gcd (x,t)}\\cdot \\frac{y}{\\gcd (y,t)}$ and $c(t,xy)\\leq c(t,x)c(t,y)$ --- this follows from the remarks before Theorem~\\ref{ktown}. Therefore we need only treat the case $k=p^s$ for prime $p$.\n\nNote that any upper bound for $|F|$ with some $t=t_0$ is also an upper bound for $t=t_0r^2, r\\in (\\mathbb{Z}\/k\\mathbb{Z})^*$. Indeed, if $(f_1,f_2)=t_0r^2$ for each $f_1,f_2\\in F$, then $(\\frac{1}{r}f_1, \\frac{1}{r}f_2)=t_0$ and we can apply a bound for $t=t_0$. We now show that we can choose $r=r(k,t)$ such that $|\\Rre (\\phi^{tr^2})|$ tends to $\\frac{1}{\\sqrt{2}}$ as $\\frac{k}{\\gcd (k,t)}$ tends to infinity --- in this case $|\\Rre (\\phi^{tr^2}i^l)|$ tends to $\\frac{1}{\\sqrt{2}}$ as well. \nTo obtain this, it suffices to choose $r$ such that $|\\frac{tr^2}{k\/8}|$ tends to $1$ (with a suitable choice of representative of $tr^2$ modulo $k$). \n\n\nAny $t\\in \\mathbb{Z}\/p^s \\mathbb{Z}$ can be written as $t=p^mt'$ with $t'\\in\\mathbb{Z}\/p^{s-m}\\mathbb{Z}$, and replacing $(t,p^s)$ with $(t', p^{s-m})$ (this changes neither of the ratios in question) we can assume that $t\\in (\\mathbb{Z}\/p^s \\mathbb{Z})^*$ (and $\\frac{k}{\\gcd (k,t)}=k$).\n\n\nLet $p>2$. Let us prove that we can choose $l$ with $|l|<\\sqrt{p}$ such that $t=lw^2$ for some $w$ (for $p>5$). We can take $l_1,l_2$ with $|l_i|<\\sqrt{p}$ such that $l_1=l_2t$ modulo $p$ by Thue's lemma. 
Then for some $i$, $l_i$ is a non-square modulo $p$ and therefore modulo $k=p^s$. As $t$ is a non-square as well we can write $t=l_iw^2$ and take $l=l_i$.\n\n\nNow take $x \\in \\mathbb{Z}$ such that $x^2\\in \\left (\\frac{k}{8|l|} - \\left( \\frac{k}{|l|} \\right)^{\\frac{1}{2}}, \\frac{k}{8|l|}+ \\left (\\frac{k}{|l|} \\right)^{\\frac{1}{2}} \\right)$; then\n$\\left ||lx^2|-\\frac{k}{8} \\right | \\leq (k|l|)^{\\frac{1}{2}}$, which finishes the proof.\n\\end{proof}\n\nFor odd $k > 2$ Theorem~\\ref{ktown} gives the following elementary bound which is far from the Iosevich--Rudnev estimate~\\cite{iosevich2007erdos}.\nLet $E$ be a set with $|\\Delta(E)| = s$. Without loss of generality $(0,\\dots,0) \\in E$; otherwise one may shift $E$. \nThen $E$ lies on $s$ spheres centered at $(0,\\dots,0)$. \nBy the pigeonhole principle there is a sphere containing a subset $E'\\subset E$ of size at least $(|E|-1)\/s$. \nSince $k$ is odd and $E'$ lies on a sphere, there are at most $s$ different scalar products between vectors of $E'$.\nLet $t$ be the most popular scalar product in $E' \\times E'$. Then one may apply Theorem~\\ref{ktown} with $\\varepsilon = \\frac{s-1}{s}$ and obtain\n\\[\n|E'| \\leq \\frac{k^{n\/2}}{1 - \\frac{k}{k-1}\\frac{s-1}{s}} = \\frac{s(k-1)k^{n\/2}}{k-s}.\n\\]\nHence\n\\[\n|E| \\leq \\frac{s^2(k-1)k^{n\/2}}{k-s} + 1.\n\\]\n\nFor instance, $s = k-1$ means that we bound the size of a set avoiding a single distance $r$ by $k^{\\frac{n+6}{2}}$, while the Iosevich--Rudnev estimate is\n$Ck^{\\frac{n+1}{2}}$ with an absolute constant $C > 0$. \n\n\n\\paragraph{A bound via singular numbers.}\nLet $G = (V,E)$ be a simple graph. Suppose that $A$ is a complex matrix such that $A_{ij} = 1$ for every pair $\\{i,j\\} \\notin E(G)$.\nThen\n\\[\n\\alpha(G) \\leq \\sigma_{max}(A),\n\\]\nwhere $\\alpha(G)$ is the maximum size of an independent set of $G$ and $\\sigma_{max}(A)$ is the maximal singular value of $A$ (i.e. 
the square root of the maximal eigenvalue of the self-adjoint operator $A^*A$, where $A^*$ denotes the adjoint of $A$).\nThe proof immediately follows from the main theorem of~\\cite{danciger2006min}.\n\nFor the matrix $A$ in the proof of Theorem~\\ref{ktown} one has\n\\[\nA^*A = kE,\n\\]\nso every singular number of the matrix $A$ is equal to $\\sqrt{k}$.\nHence every singular number of $M = A^{\\otimes n}$ is equal to $k^{n\/2}$.\nThis implies the first inequality of Theorem~\\ref{ktown}.\n\n\\paragraph{Hypergraph discrepancy and asymptotic precision of Theorem~\\ref{theoremop} when $\\varepsilon$ is close to $1\/2$.}\nA hypergraph is a pair $(V, E)$, where $V$ is a finite set whose elements are called vertices and $E$ is a family of subsets of $V$, called edges. \nA vertex $2$-coloring of a hypergraph $(V, E)$ is a map $\\pi : V \\rightarrow \\{\\pm 1\\}$. \nThe \\textit{discrepancy} of a coloring $\\pi$ is the largest value of $|\\sum_{v\\in e} \\pi (v) |$ over $e \\in E$.\n\n\nConsider an explicit hypergraph $H = (V,E)$, where $V = [N] \\times [N]$ and edges have the form $I \\times J$ for $I,J \\subset [N]$. \nNote that every $\\{\\pm 1\\}$ matrix of size $N$ produces a 2-coloring of $H$.\nLet $N = 2^n$ and $\\chi$ be the coloring from $A^{\\otimes n}$ (recall that $A$ is the $2\\times2$ Hadamard matrix).\nLet $I \\times J$ be an edge of $H$ providing the discrepancy $\\disc$ of $\\chi$; without loss of generality let $\\disc$ be positive. Then\n\\[\n\\disc = (A^{\\otimes n} \\chi_I, \\chi_J) = (1-2\\varepsilon) \\cdot |I| \\cdot |J|,\n\\]\nwhere $\\varepsilon$ satisfies the conditions of Theorem~\\ref{theoremop}. Theorem~\\ref{theoremop} implies \n\\begin{equation}\n \\disc^2 = |I|^2 \\cdot |J|^2 (1-2\\varepsilon)^2 \\leq 2^n |I|\\cdot |J| \\leq 2^{3n} = N^3.\n\\label{precise}\n\\end{equation}\n\nAstashkin proved~\\cite{astashkin2010rademacher} that $H$ has discrepancy at least $cN^{3\/2}$ for \\textit{every} coloring $\\chi$ and some absolute constant $c > 0$, i.e. 
inequality~(\\ref{precise}) is precise up to an absolute constant. This means that Theorem~\\ref{theoremop} is precise up to an absolute constant in the case when $|I|$ and $|J|$ are close to $N$ and $\\frac{1}{2} - \\varepsilon$ is of order $2^{-n\/2}$. \n\n\n\n\n\\subsection{Further questions}\n\n\\paragraph{Generalization to $k$-eventowns.}\nWe say that $F \\subset 2^{[n]}$ is a $k$-eventown if the size of the intersection of any (not necessarily different) $f_1, f_2 \\in F$ is zero modulo $k$.\nThe problem of determining the maximal size of a $k$-eventown was studied by Frankl and Odlyzko~\\cite{frankl1983subsets}.\nBased on Hadamard matrices, they found a nice construction of $k$-eventowns of size at least $(ck)^{\\lf n\/(4k) \\right\\rfloor}$, where $c > 0$ is an absolute constant. \nIn addition, they showed that any $k$-eventown has size at most $2^{O(\\log k\/k)n}$ as $n$ tends to $\\infty$. \n\nIn particular, for $k = 3$ the best known lower and upper bounds are $24^{\\lf n\/12 \\right\\rfloor}$ and $2^{\\lf n\/2 \\right\\rfloor}$ respectively.\n\n\n\n\\paragraph{Generalization to $t$-wise $k$-eventowns.}\nWe say that $F \\subset 2^{[n]}$ is a $t$-wise $k$-eventown if the size of the intersection of any distinct $f_1,\\dots, f_t \\in F$ is zero modulo $k$.\nNote that a 2-wise $k$-eventown is not the same as a $k$-eventown, since in the former we do not require that the sets themselves\nhave size zero modulo $k$. \nSudakov and Vieira~\\cite{sudakov2018two} show that for $t \\geq 3$ a $t$-wise eventown has a unique extremal configuration and obtain a stability result for this problem. 
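Both definitions are easy to test by brute force on small families. The sketch below uses hypothetical helper functions (not code from any of the cited works) to check the intersection conditions directly:

```python
from itertools import combinations

def is_k_eventown(F, k):
    # |f1 ∩ f2| ≡ 0 (mod k) for all pairs, including f1 = f2
    return all(len(f1 & f2) % k == 0 for f1 in F for f2 in F)

def is_t_wise_k_eventown(F, t, k):
    # |f1 ∩ ... ∩ ft| ≡ 0 (mod k) for every choice of t distinct members
    return all(len(set.intersection(*c)) % k == 0 for c in combinations(F, t))

# Two disjoint 3-element blocks and their union form a 3-eventown on {0,...,5}:
F = [set(range(3)), set(range(3, 6)), set(range(6))]
assert is_k_eventown(F, 3)
assert is_t_wise_k_eventown(F, 3, 3)
assert not is_k_eventown([{0, 1}], 3)  # |f ∩ f| = 2 is nonzero mod 3
```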
Gishboliner, Sudakov and Tomon~\\cite{gishboliner2021small} show that for every $k$ there is $t = t(k)$ such that\nthe size of any $t$-wise $k$-eventown is bounded by\n\\[\n|F| \\leq 2^{\\lf n\/k \\right\\rfloor} + const(k,t).\n\\]\n\nA generalization of the methods of this paper, if it exists, will involve some tensor analysis.\n\n\n\n\n\n\n\\paragraph{Acknowledgments.} The authors are grateful to Fedor Petrov, Andrey Kupavskii and Pavel Prozorov for useful discussions. \nThe reviewers' remarks were extremely helpful. The work of Danila Cherkashin was supported by the Russian Science Foundation grant 21-11-00040.\n\n\n\\bibliographystyle{plain}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzbbhb b/data_all_eng_slimpj/shuffled/split2/finalzzbbhb new file mode 100644 index 0000000000000000000000000000000000000000..b4b05955487edc78dfcd6a19d7f32e1e575e8a21 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzbbhb @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{intro}\nMassive stars contribute to the chemical composition of matter as we know it in the universe, and their deaths are accompanied by energetic core-collapse supernovae (SNe) that seed our universe with black holes (BHs) and neutron stars (NSs) -- the most exotic objects of the stellar graveyard. Large time-domain surveys of the sky \\citep[e.g.,][]{York2000,Drake2009,Law2009,Kaiser2010,Shappee2014,Abbott2016,Tonry2018,Bellm2019}, paired with targeted follow-up efforts, have greatly enriched our view of the final stages of massive star evolution. Yet, a lot remains to be understood about the diverse paths that bring massive stars toward\ntheir violent deaths \\citep{Langer2012}. 
\n\nCore-collapse SNe can occur in stars with a hydrogen envelope\n(Type II) or in stars where hydrogen is almost or completely missing \\citep[Type Ib\/c, also referred to as stripped-envelope SNe;][]{Filippenko1997,Matheson2001,Li2011,Modjaz2014,Perley2020b,Frohmaier2021}. Type Ib\/c SNe constitute approximately 25\\% of all massive star explosions \\citep{Smith2011}, and their pre-SN progenitors are thought to be either massive ($M\\gtrsim 20-25$\\,M$_{\\odot}$) and metal-rich single stars that have been stripped through stellar mass loss; or the mass donors in close binary systems (at any metallicity) that have initial masses $\\gtrsim 8$\\,M$_{\\odot}$ \\citep[e.g.,][and references therein]{Langer2012}. \n\nA small fraction of Type Ib\/c SNe show velocities in their optical spectra that are systematically higher than those measured in ordinary SNe Ic at similar epochs. Hence, these explosions are referred to as SNe of Type Ic with broad lines \\citep[hereafter, Ic-BL; e.g.,][]{Filippenko1997,Modjaz2016,GalYam2017}. Compared to Type Ib\/c SNe, broad-lined events are found to prefer environments with lower metallicity (in a single star scenario, mass loss mechanisms also remove angular momentum and are enhanced by higher metallicities), and in galaxies with higher star-formation rate density. Thus, it has been suggested that SN Ic-BL progenitors may be stars younger and more massive than those of normal Type Ic (more massive progenitors can lose their He-rich layers to winds at lower metallicity due to the higher luminosities driving the winds), and\/or tight massive binary systems that can form efficiently in dense stellar clusters \\citep[e.g.,][]{Kelly2014,Japelj2018,Modjaz2020}. \n\nThe spectroscopic and photometric properties used to classify core-collapse SNe are largely determined by the stars' outer envelopes \\citep[envelope mass, radius, and chemical composition;][]{Young2004}. 
On the other hand, quantities such as explosion energies, nickel masses, and ejecta geometries can be inferred via extensive multi-wavelength and multi-band observations. These quantities, in turn, can help constrain the properties of the stellar cores \\citep[such as mass, density structure, spin, and magnetic fields; see e.g. ][and references therein]{Woosley2002,Burrows2007,Jerkstrand2015} that are key to determine the nature of the explosion. For example, based on nickel masses and ejecta masses derived from bolometric light curve analyses, \\citet{Taddia2019} found that $\\gtrsim 21\\%$ of Ic-BL progenitors are compatible with massive ($\\gtrsim 28$\\,M$_{\\odot}$), possibly single stars, whereas $\\gtrsim 64\\%$ could be associated with less massive stars in close binary systems. \n\nUnderstanding the progenitor scenario of SNe Ic-BL is particularly important as these SNe challenge greatly the standard explosion mechanism of massive stars \\citep[e.g.,][and references therein]{Mezzacappa1998,MacFadyen1999,Heger2003,WoosleyHeger2006,Janka2007,Janka2012,Smith2014,Foglizzo2015,Muller2020,Schneider2021}. The energies inferred from optical spectroscopic modeling of Ic-BL events are of order $\\approx 10^{52}\\,{\\rm erg}$, in excess of the $\\approx 10^{51}\\,{\\rm erg}$ inferred in typical SNe Ib\/c, while ejecta masses are comparable or somewhat higher \\citep{Taddia2019}. In the traditional core-collapse scenario, neutrino irradiation from the proto-NS revives the core-bounce shock, making the star explode. However, the neutrino mechanism cannot explain the more energetic SNe Ic-BL. Unveiling the nature of an engine powerful enough to account for the extreme energetics of SNe Ic-BL is key to understanding the physics behind massive stellar deaths, and remains as of today an open question. 
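For scale, the rotational energy of a millisecond proto-NS is indeed of order $10^{52}$\,erg. The back-of-envelope sketch below uses the canonical NS moment of inertia $I \approx 10^{45}$\,g\,cm$^2$, a textbook value assumed here rather than a number taken from this paper:

```python
import math

I_NS = 1e45  # canonical NS moment of inertia [g cm^2] (assumed textbook value)

def rotational_energy(period_ms, inertia=I_NS):
    # E_rot = (1/2) I Omega^2, with Omega = 2*pi/P, in erg
    omega = 2.0 * math.pi / (period_ms * 1.0e-3)  # angular frequency [rad/s]
    return 0.5 * inertia * omega**2

e_rot = rotational_energy(1.0)  # ~2e52 erg for a 1 ms spin period
```

A proto-NS born spinning at $\sim 1$\,ms therefore stores enough rotational energy to account for the $\approx 10^{52}$\,erg budget, provided an efficient mechanism extracts it.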
\n\nA compelling scenario invokes the existence of a jet or a newly-born magnetar as the extra source of energy needed to explain SNe Ic-BL \\citep[e.g.,][]{Burrows2007,Papish2011,Gilkis2014,Mazzali2014,Lazzati2012,Gilkis2016,Soker2017,Barnes2018,Shankar2021}. The rapid rotation of a millisecond proto-NS formed in the collapse of a rotating massive star can amplify the NS magnetic field to $\\gtrsim 10^{15}$\\,G, creating a magnetar. The magnetar spins down quickly via magnetic braking, and in some cases magneto-rotational instabilities can launch a collimated jet that drills through the outer layers of the star producing a gamma-ray burst \\citep[GRB; e.g.,][]{Heger2003,Izzard2004,WoosleyHeger2006,Burrows2007,Bugli2020,Bugli2021}. These jets can transfer sufficient energy to the surrounding stellar material to explode it into a SN.\n\nThe above scenario is particularly interesting in light of the fact that SNe Ic-BL are also the only type of core-collapse events that, observationally, have been unambiguously linked to GRBs \\citep[e.g.,][and references therein]{Woosley2006,Cano2017}. GRBs are characterized by bright afterglows that emit radiation from radio to X-rays, and are unique laboratories for studying relativistic particle acceleration and magnetic field amplification processes \\citep{Piran2004,Meszaros2006}. In between ordinary SNe Ic-BL and cosmological GRBs is a variety of transients that we still have to fully characterize. Among those are low-luminosity GRBs, of which the most famous example is GRB\\,980425, associated with the radio-bright Type Ic-BL SN\\,1998bw \\citep{Galama1998,Kulkarni1998}. \n\nRecently, \\citet{Shankar2021} used the jetted outflow model produced from a consistently formed proto-magnetar in a 3D core-collapse SN to extract a range of central engine parameters (energy $E_{eng}$ and opening angle $\\theta_{eng}$) that were then used as inputs to hydrodynamic models of jet-driven explosions. 
The outputs of these models, in turn, were used to derive predicted SN light curves and spectra from different viewing angles, and were found to be in agreement with SN Ic-BL optical observables \\citep[see also][]{Barnes2018}. It was also shown that additional energy from the engine can escape through the tunnel drilled in the star as an ultra-relativistic jet (GRB) with energy $\\approx 10^{51}$\\,erg. On the other hand, a SN Ic-BL can be triggered even if the jet engine fails to produce a successful GRB jet. The duration of the central engine, $t_{eng}$, together with $E_{eng}$ and $\\theta_{eng}$, is critical to determining the fate of the jet \\citep{Lazzati2012}. \n\nA more general scenario where the high-velocity ejecta found in SNe Ic-BL originate from a cocoon driven by a relativistic jet (regardless of the nature of the central engine) is also receiving attention. In this scenario, cosmological long GRBs are explosions where the relativistic jet breaks out successfully from the stellar envelope, while low-luminosity GRBs and SNe Ic-BL that are not associated with GRBs represent cases where the jet is choked \\citep[see e.g. ][and references therein]{Piran2019,Eisenberg2022,Gottlieb2022,Pais2022}. \n\nOverall, the dividing line between successful GRB jets and failed ones is yet to be fully explored observationally, and observed jet outcomes in SNe Ic-BL have not yet been systematically compared to model outputs. While we know that SNe discovered by means of a GRB are all of Type Ic-BL, the question that remains open is whether all SNe Ic-BL make a GRB (jet), at least from some viewing angle, or if instead the jet-powered SNe Ic-BL are intrinsically different and rarer than ordinary SNe Ic-BL. Indeed, due to the collimation of GRB jets, it is challenging to understand whether all SNe Ic-BL are linked to successful GRBs: a non-detection in $\\gamma$- or X-rays could simply be due to the explosion being directed away from us. 
\n Radio follow-up observations are needed to probe the explosions' fastest-moving ejecta ($\\gtrsim 0.2c$) largely free of geometry and viewing angle constraints. Determining observationally the fraction of Type Ic-BL explosions that output jets which successfully break out of the star (as mildly-relativistic or ultra-relativistic ejecta), and measuring their kinetic energy via radio calorimetry, can provide jet-engine explosion models a direct test of their predictions.\n \n Using one of the largest samples of SNe Ic-BL with deep radio follow-up observations \\citep[which included 15 SNe Ic-BL discovered by the Palomar Transient Factory, PTF\/iPTF;][]{Law2009}, \\citet{Corsi2016} already established that $<41\\%$ of SNe Ic-BL harbor relativistic ejecta similar to that of SN\\,1998bw. Here, we present the results of a systematic radio follow-up campaign of an additional 16 SNe Ic-BL (at $z\\lesssim 0.05$) detected independently of $\\gamma$-rays by the Zwicky Transient Facility \\citep[ZTF;][]{Bellm2019,Graham2019}. This study greatly expands our previous works on the subject \\citep[][]{Corsi2017,Corsi2016,Corsi2014}. Before the advent of PTF and ZTF, the comparison between jet-engine model outcomes and radio observables was severely limited by the rarity of SN Ic-BL discoveries \\citep[e.g.,][]{Berger2003,Soderberg2006} and\/or by selection effects \\citep[e.g.,][]{Woosley2006}---out of the thousands of jets identified, nearly all were discovered at large distances via their high-energy emission (implying aligned jet geometry and ultra-relativistic speeds). In this work, we aim to provide a study free of these biases. \n \n Our paper is organized as follows. 
In Section \\ref{sec:discovery} we describe our multi-wavelength observations; in Section \\ref{section:sample} we describe in more details the SNe Ic-BL included in our sample; in Section \\ref{sec:modeling} we model the optical, X-ray, and radio properties of the SNe presented here and derive constraints on their progenitor and ejecta properties. Finally, in Section \\ref{sec:conclusion} we summarize and conclude. Hereafter we assume cosmological parameters $H_0 = 69.6$\\,km\\,s$^{-1}$\\,Mpc$^{-1}$, $\\Omega_{\\rm M }= 0.286$, $\\Omega_{\\rm vac} = 0.714$ \\citep{Bennett2014}. All times are given in UT unless otherwise stated. \n\n\n\\begin{table*}\n\\begin{center}\n\\caption{The sample of 16 SNe Ic-BL analyzed in this work. For each SN we provide the IAU name, the ZTF name, the position, redshift, and luminosity distance. \\label{tab:sample}}\n\\begin{tabular}{lccc}\n\\hline\n\\hline\nSN (ZTF name) & RA, Dec (J2000) & $z$& $d_L$ \\\\\n & (hh:mm:ss~~dd:mm:ss) & &(Mpc)\\\\\n\\hline\n2018etk (18abklarx) & 15:17:02.53 +03:56:38.7 & 0.044& 196 \\\\\n2018hom (18acbwxcc)& 22:59:22.96 +08:45:04.6 & 0.030 & 132 \\\\\n2018hxo (18acaimrb) & 21:09:05.80 +14:32:27.8 & 0.048 & 214 \\\\\n2018jex (18acpeekw) & 11:54:13.87 +20:44:02.4 & 0.094 & 434 \\\\\n2019hsx (19aawqcgy) & 18:12:56.22 +68:21:45.2 & 0.021 & 92 \\\\\n2019xcc (19adaiomg) & 11:01:12.39 +16:43:29.1 & 0.029 & 128 \\\\\n2020jqm (20aazkjfv) & 13:49:18.57 $-$03:46:10.4 & 0.037 & 164 \\\\\n2020lao (20abbplei) & 17:06:54.61 +30:16:17.3 & 0.031 & 137 \\\\\n2020rph (20abswdbg) & 03:15:17.83 +37:00:50.8 & 0.042 & 187\\\\\n2020tkx (20abzoeiw) & 18:40:09.01 +34:06:59.5 & 0.027 & 119 \\\\\n2021xv (21aadatfg) & 16:07:32.82 +36:46:46.2 & 0.041 & 182 \\\\\n2021aug (21aafnunh) & 01:14:04.81 +19:25:04.7 & 0.041 & 182 \\\\\n2021epp (21aaocrlm) & 08:10:55.27 $-$06:02:49.3 & 0.038 & 168\\\\\n2021htb (21aardvol) & 07:45:31.19 +46:40:01.3 & 0.035& 155 \\\\\n2021hyz (21aartgiv) & \n09:27:36.51 +04:27:11.0 & 0.046& 205\\\\\n2021ywf 
(21acbnfos) & 05:14:10.99 +01:52:52.4 & 0.028 & 123 \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table*}\n\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=0.9\\textwidth]{CorsiFig1_lcs_mag.png}\n \\caption{P48 $r$- (top) and $g$-band (middle) light curves for the SNe Ic-BL in our sample, compared with the $R$- and $B$-band light curves of SN\\,1998bw, respectively. The bottom panel shows the corresponding color evolution. Observed AB magnitudes are corrected for Milky Way extinction. The archetypal SN\\,1998bw is shown in black solid points, and its Gaussian Process interpolation in black dashed lines. See also \\citet{Anand2022} and \\citet{Gokul2022}.}\n \\label{fig:opt-lc-mag}\n\\end{figure*}\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=0.9\\textwidth]{CorsiFig2_lcs_flux.png}\n \\caption{Top and middle panels: same as Figure \\ref{fig:opt-lc-mag} but in flux space and with fluxes normalized to their Gaussian Process maximum. Bottom panel: bolometric light curves. We converted $g-r$ to bolometric magnitudes with the empirical relations by \\cite{Lyman2014,Lyman2016}. See also \\citet{Anand2022} and \\citet{Gokul2022}.\n \\label{fig:opt-lc-flux}}\n\\end{figure*}\n\n\\section{Multi-wavelength observations}\n\\label{sec:discovery}\nWe have collected a sample of 16 SNe Ic-BL observed with the ZTF and with follow-up observations in the radio. The SNe Ic-BL included in our sample are listed in Table \\ref{tab:sample}. We selected these SNe largely based on the opportunistic availability of follow-up observing time on the Karl G. Jansky Very Large Array (VLA). The sample of SNe presented here doubles the sample of SNe Ic-BL with deep VLA observations presented in \\citet{Corsi2016}. \n\nThe SNe considered in this work are generally closer than the PTF\/iPTF sample of SNe Ic-BL presented in \\citet{Taddia2019}. 
In fact, their median redshift ($\\approx 0.037$) is about half the median redshift of the PTF\/iPTF SN Ic-BL sample \\citep[$\\approx 0.076$;][]{Taddia2019}. However, the median redshift of the ZTF SNe in our sample is compatible with the median redshift ($\\approx 0.042$) of the full ZTF SN Ic-BL population presented in \\citet{Gokul2022}. A subset of the SNe Ic-BL presented here is also analyzed in a separate paper and in a different context \\citep[r-process nucleosynthesis;][]{Anand2022}. In this work, we report for the first time the results of our radio follow-up campaign of these events. We note that the ``Asteroid Terrestrial-impact Last Alert System'' \\citep[ATLAS;][]{Tonry2018} has contributed to several of the SN detections considered here (see Section \\ref{section:sample}). Three of the SNe Ic-BL in our sample were also reported in the recently released bright SN catalog by the All-Sky Automated Survey for Supernovae \\citep[ASAS-SN;][]{Neumann2022}.\n\n\nIn what follows, we describe the observations we carried out for this work. In Section \\ref{section:sample} we give more details on each of the SNe Ic-BL in our sample. \n\n\\subsection{ZTF photometry}\nAll photometric observations presented in this work were conducted with the Palomar Schmidt 48-inch (P48) Samuel Oschin telescope as part of the ZTF survey \\citep{Bellm2019,Graham2019}, using the ZTF camera \\citep{dekany2020}. \nIn default observing mode, ZTF uses 30\\,s exposures, and survey observations are carried out in $r$ and $g$ band, down to a typical limiting magnitude of $\\approx 20.5$\\,mag. P48 light curves were derived using the ZTF pipeline \\citep{Masci2019}, and forced photometry \\citep[see][]{Yao2019}. Hereafter, all reported magnitudes are in the AB system. \nThe P48 light curves are shown in Figures~\\ref{fig:opt-lc-mag}-\\ref{fig:opt-lc-flux}. 
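As a cross-check, the luminosity distances quoted in Table \ref{tab:sample} follow from the listed redshifts under the flat cosmology adopted in this paper ($H_0 = 69.6$\,km\,s$^{-1}$\,Mpc$^{-1}$, $\Omega_{\rm M} = 0.286$). The sketch below is an illustrative quadrature, not the authors' actual pipeline:

```python
C_KM_S = 299792.458              # speed of light [km/s]
H0, OM, OL = 69.6, 0.286, 0.714  # cosmology adopted in this paper

def luminosity_distance(z, n=1000):
    # d_L = (1+z) * (c/H0) * Int_0^z dz'/E(z') for a flat LambdaCDM,
    # with E(z) = sqrt(OM*(1+z)^3 + OL); composite trapezoid rule
    E = lambda zp: (OM * (1.0 + zp)**3 + OL) ** 0.5
    h = z / n
    s = 0.5 * (1.0 / E(0.0) + 1.0 / E(z)) + sum(1.0 / E(i * h) for i in range(1, n))
    return (1.0 + z) * (C_KM_S / H0) * s * h  # [Mpc]

# SN 2018etk at z = 0.044 lands at the 196 Mpc listed in Table 1:
assert round(luminosity_distance(0.044)) == 196
```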
All the light curves presented in this work will be made public on the Weizmann Interactive Supernova Data Repository (WISeREP\\footnote{\\url{https:\/\/www.wiserep.org\/}}).\n\n\n\n\\begin{table}\n \\begin{center}\n\\caption{\\label{tab:xrt_summary}\\textit{Swift}\/XRT observations of 9 of the 16 SNe Ic-BL in our sample. We provide the MJD of the \\textit{Swift} observations, the XRT exposure time, and the 0.3-10\\,keV unabsorbed flux measurements (or $3\\sigma$ upper-limits for non detections). \\label{tab:x-ray}}\n\\begin{tabular}{llccc}\n\\hline\n\\hline\nSN & T$_{\\rm XRT}$ & Exp. & $F_{\\rm 0.3-10\\,keV}$ \\\\\n & (MJD) & (ks) & ($10^{-14}$\\,erg\\,cm$^{-2}$\\,s$^{-1}$)\\\\\n\\hline\n2018etk & 58377.85 & 4.8 & $< 4.2$ \\\\\n2018hom & 58426.02 & 4.3 & $< 6.4$ \\\\\n2019hsx & 58684.15 & 15 & $6.2_{-1.8}^{+2.3}$ \\\\\n2020jqm & 59002.09 & 7.4 & $< 3.3 $ \\\\\n2020lao & 59007.40 & 14 & $< 2.9 $ \\\\\n2020rph & 59088.89 & 7.5 & $< 3.6 $ \\\\\n2020tkx & 59125.38 & 8.1 & $< 3.3 $ \\\\\n2021hyz & 59373.09 & 4.7 & $< 3.5 $ \\\\\n2021ywf & 59487.60 & 7.2 & $5.3_{-3.3}^{+4.9} $ \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\n\\subsection{Optical Spectroscopy}\nPreliminary spectral type classifications of several of the SNe in our sample were obtained with the Spectral Energy Distribution Machine (SEDM) mounted on the Palomar 60-inch telescope (P60), and quickly reported to the Transient Name Server (TNS). The SEDM is a very low resolution ($R\\sim 100$) integral field unit spectrograph optimized for transient classification with high observing efficiency \\citep{Blagorodnova2018,Rigault2019}. \n\nAfter initial classification, typically further spectroscopic observations are carried out as part of the ZTF transient follow-up programs to confirm and\/or improve classification, and to characterize the time-evolving spectral properties of interesting events. 
Amongst the series of spectra obtained for each of the SNe presented in this work, we select one good quality photospheric phase spectrum (Figures~\\ref{fig:spectra1}-\\ref{fig:spectra2}; grey) on which we run SNID \\citep{Blondin2007} to obtain the best match to a SN Ic-BL template (black), after clipping the host emission lines and fixing the redshift to that derived either from the SDSS host galaxy or from spectral line fitting (H$\\alpha$; see Section \\ref{section:sample} for further details).\nHence, in addition to the SEDM, in this work we also made use of the following instruments: the Double Spectrograph \\citep[DBSP;][]{Oke1995}, a low-to-medium resolution grating instrument for the Palomar 200-inch telescope Cassegrain focus that uses a dichroic to split light into separate red and blue channels observed simultaneously; the Low Resolution Imaging Spectrometer \\citep[LRIS;][]{Oke1982,Oke1995}, a visible-wavelength imaging and spectroscopy instrument operating at the Cassegrain focus of Keck-I; the Alhambra Faint Object Spectrograph and Camera (ALFOSC), a CCD camera and spectrograph installed at the Nordic Optical Telescope \\citep[NOT;][]{Djupvik2010}. All spectra presented in this work will be made public on WISeREP.\n\n\\subsection{X-ray follow up with \\textit{Swift}}\nFor 9 of the 16 SNe presented in this work we carried out follow-up observations in the X-rays using the X-Ray Telescope \\citep[XRT;][]{Burrows+2005} on the \\textit{Neil Gehrels Swift Observatory} \\citep{Gehrels+2004}. \n\nWe analyzed these observations using the online XRT tools\\footnote{See \\url{https:\/\/www.swift.ac.uk\/user_objects\/}.}, as described in \\citet{Evans+2009}. 
\nWe correct for Galactic absorption, and adopt a power-law spectrum with photon index $\\Gamma = 2$ for count-rate-to-flux conversion for non-detections, and for detections (two out of nine events) where the number of photons collected is too small to enable a meaningful spectral fit (one out of two detections). Table~\\ref{tab:xrt_summary} presents the results from co-adding all observations of each source.\n\n\\subsection{Radio follow up with the VLA}\n\\label{sec:radioobs}\nWe observed the fields of the SNe Ic-BL in our sample with the VLA via several of our programs using various array configurations and receiver settings (Table \\ref{tab:data}). \n\nThe VLA raw data were calibrated in \\texttt{CASA} \\citep{McMullin2007} using the automated VLA calibration pipeline. After manual inspection for further flagging, the calibrated data were imaged using the \\texttt{tclean} task. Peak fluxes were measured from the cleaned images using \\texttt{imstat} and circular regions centered on the optical positions of the SNe, with radius equal to the nominal width (FWHM) of the VLA synthesized beam (see Table \\ref{tab:data}). The RMS noise values were estimated with \\texttt{imstat} from the residual images. Errors on the measured peak flux densities in Table \\ref{tab:data} are calculated by adding a 5\\% error in quadrature to the measured RMS values. This accounts for systematic uncertainties on the absolute flux calibration. \n\nWe checked all of our detections (peak flux density above $3\\sigma$) for the source morphology (extended versus point-like), and for the ratio between integrated and peak fluxes, using the CASA task \\texttt{imfit}. All sources for which these checks provided evidence for extended or marginally resolved emission are marked accordingly in Table \\ref{tab:data}. 
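The uncertainty budget just described can be written out explicitly. In the sketch below (illustrative numbers; we read the 5\% systematic as a fraction of the measured peak flux density) the calibration term is combined in quadrature with the image RMS:

```python
def flux_density_error(peak_flux, rms, cal_frac=0.05):
    # Quadrature sum of the image RMS and a fractional flux-calibration term
    # (assumed to be 5% of the measured peak flux density)
    return (rms**2 + (cal_frac * peak_flux) ** 2) ** 0.5

err = flux_density_error(133.0, 9.0)  # ~11.2 uJy for a 133 uJy peak on a 9 uJy RMS image
```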
For non detections, upper-limits on the radio flux densities are given at the $3\\sigma$ level unless otherwise noted.\n\n\\renewcommand\\arraystretch{1.5}\n\\setlength\\LTcapwidth{2\\linewidth}\n\\begin{longtable*}{ccclccc}\n\\caption{VLA follow-up observations of the 16 SN\\lowercase{e} I\\lowercase{c}-BL in our sample. For all of the observations of the SNe in our sample we report: the mid MJD of the VLA observation; the central observing frequency; the measured flux density (all flux density upper-limits are calculated at $3\\sigma$ unless otherwise noted); the VLA array configuration; the FWHM of the VLA nominal synthesized beam; the VLA project code under which the observations were conducted. See Sections \\ref{sec:radioobs} and \\ref{sec:radioanalysis} for discussion.} \\label{tab:data}\\\\\n\\toprule\n\\toprule\nSN & ${\\rm T_{VLA}}$$^{a}$ & $\\nu$ & $F_{\\nu}$ & Conf. & Nom.Syn.Beam & Project \\\\\n & (MJD) & (GHz) & ($\\mu$Jy) & & (FWHM; arcsec) & \\\\\n\\midrule\n2018etk & 58363.08 & 6.3& $90.1\\pm8.7$$^{b}$ & D & 12& VLA\/18A-176$^{d}$\\\\\n & 58374.09 &14 & $41\\pm11$ & D & 4.6& VLA\/18A-176$^{d}$ \\\\\n & 58375.03 & 6.3& $89.7\\pm8.8$$^{b}$ & D &$12$& VLA\/18A-176$^{d}$ \\\\\n & 59362.27 &6.2 & $78.5\\pm6.3$$^{b}$ & D &12 & VLA\/20B-149$^{d}$\\\\\n\\midrule\n2018hom & 58428.04 & 6.6 & $133\\pm11$ & D & 12& VLA\/18A-176$^{d}$\\\\\n\\midrule\n2018hxo & 58484.73 & 6.4 & $\\lesssim 234$$^{c}$ & C & 3.5 & VLA\/18A-176$^{d}$\\\\\n\\midrule\n2018jex & 58479.38 &6.4 & $\\lesssim 28$ & C & 3.5& VLA\/18A-176$^{d}$\\\\\n\\midrule\n2019hsx & 58671.14 & 6.2 & $\\lesssim 19$ & BnA & 1.0& VLA\/19B-230$^{d}$\\\\\n\\midrule\n2019xcc & 58841.43 &6.3 &$62.7\\pm 8.7$$^{b}$& D & 12 & VLA\/19B-230$^{d}$\\\\\n & 58876.28 & 6.3 &$60.1\\pm8.5$$^{b}$ & D & 12& VLA\/19B-230$^{d}$ \\\\\n &59363.00 & 6.3 & $50.4\\pm8.1$$^{b}$& D & 12& VLA\/20B-149$^{d}$ \\\\\n\\midrule\n2020jqm & 58997.03 & 5.6& $175\\pm13$& C & 3.5&SG0117$^{d}$\\\\\n &59004.03 & 5.6 & $310\\pm19$&C & 3.5 
&SG0117$^{d}$ \\\\\n &59028.48 & 5.5& $223\\pm18$&B & 1.0& VLA\/20A-568$^{d}$\\\\\n &59042.95 &5.7 & $202\\pm15$&B &1.0& VLA\/20A-568$^{d}$ \\\\\n &59066.09 &5.5 &$136\\pm13$ &B & 1.0& VLA\/20A-568$^{d}$ \\\\\n &59088.03 & 5.5 &$168\\pm13$ &B & 1.0& VLA\/20A-568$^{d}$ \\\\\n &59114.74 & 5.5 & $620\\pm33$ &B & 1.0& VLA\/20A-568$^{d}$ \\\\\n &59240.37 & 5.5 & $720\\pm37$ &A &0.33& VLA\/20B-149$^{d}$\\\\\n\\midrule\n2020lao & 59006.21 & 5.2& $\\lesssim 33$ &C & 3.5& SG0117$^{d}$\\\\\n & 59138.83 &5.5& $\\lesssim 21$& B & 1.0& SG0117$^{d}$ \\\\\n\\midrule\n2020rph & 59089.59 &5.5 & $42.7\\pm7.4$& B & 1.0& SG0117$^{d}$\\\\\n & 59201.28 & 5.5 & $43.9\\pm7.0$& A &0.33 & SG0117$^{d}$ \\\\\n\\midrule\n2020tkx & 59117.89 & 10 & $272\\pm 16$& B &0.6 & VLA\/20A-374$^{e}$\\\\\n & 59136.11 & 10 & $564\\pm29$& B &0.6 & VLA\/20A-374$^{e}$ \\\\\n & 59206.92 & 5.5&$86.6\\pm7.3$& A & 0.33 & VLA\/20B-149$^{d}$\\\\\n\\midrule\n2021xv & 59242.42 & 5.5 & $\\lesssim 23$& A & 0.33& VLA\/20B-149$^{d}$\\\\\n & 59303.24 & 5.2& $\\lesssim 29$& D &12 & VLA\/20B-149$^{d}$ \\\\\n &59353.11 & 5.2 & $34.3\\pm8.1$$^{b}$& D & 12 & VLA\/20B-149$^{d}$ \\\\\n\\midrule\n2021aug & 59254.75 & 5.2& $\\lesssim 22$& A & 0.33& VLA\/20B-149$^{d}$\\\\\n & 59303.62 &5.4 & $\\lesssim 45$& D & 12& VLA\/20B-149$^{d}$\\\\\n & 59353.48 & 5.4 & $\\lesssim 30$& D &12 & VLA\/20B-149$^{d}$\\\\\n\\midrule\n2021epp & 59297.06 &5.3 & $(2.62\\pm0.13)\\times10^3$$^{b}$ & D&12 & VLA\/20B-149$^{d}$\\\\\n & 59302.99 & 5.1 &$(2.82\\pm0.18)\\times10^3$$^{b}$ & D & 12& VLA\/20B-149$^{d}$ \\\\\n & 59352.83& 5.3 &$(2.75\\pm0.20)\\times10^3$$^{b}$ & D &12 & VLA\/20B-149$^{d}$ \\\\\n\\midrule\n2021htb &59324.94 & 5.2& $50\\pm10$$^{b}$& D&12 & VLA\/20B-149$^{d}$\\\\\n & 59352.87 & 5.2 &$59.4\\pm9.5$$^{b}$ &D & 12& VLA\/20B-149$^{d}$ \\\\\n\\midrule\n2021hyz & 59326.08 & 5.2& $38\\pm11$&D &12 & VLA\/20B-149$^{d}$\\\\\n & 59352.99& 5.5 & $\\lesssim 30$& D & 12& VLA\/20B-149$^{d}$ \\\\\n\\midrule\n2021ywf & 59487.57 & 5.0 & $83\\pm10$ & B 
&1.0 &SH0105$^{d}$\\\\\n & 59646.12 & 5.4 &$19.8\\pm 6.3$ & A & 0.33& SH0105$^{d}$\\\\\n\\bottomrule\n\\multicolumn{7}{l}{$^{a}$ Mid MJD time of VLA observation (total time including calibration).}\\\\\n\\multicolumn{7}{l}{$^{b}$ Resolved or marginally resolved with emission likely dominated by the host galaxy.}\\\\\n\\multicolumn{7}{l}{$^{c}$ Image is dynamic range limited due to the presence of a bright source in the field.}\\\\\n\\multicolumn{7}{l}{$^{d}$ PI: Corsi.}\\\\\n\\multicolumn{7}{l}{$^{e}$ PI: Ho.}\n\\end{longtable*}\n\n\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=15cm]{CorsiFig3_IcBL_snid_spectra_part1.pdf}\n \\caption{Photospheric phase spectra (grey) plotted along with their SNID best match templates (black) for the first half of the SNe Ic-BL in our sample. Spectra are labeled with their IAU name and spectroscopic phase (since $r$-band maximum; see Table \\ref{tab:opt_data}). We note that the spectra used to estimate the photospheric velocities of SN\\,2019xcc, SN\\,2020lao, and SN\\,2020jqm presented in Table \\ref{tab:opt_data} are different from the ones shown here for classification purposes. This is because for spectral classification we prefer later-time but higher-resolution spectra, while for velocity measurements we prefer earlier-time spectra even if taken with the lower-resolution SEDM. All spectra presented in this work will be made public on WISeREP. }\n \\label{fig:spectra1}\n\\end{figure*}\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=15cm]{CorsiFig4_IcBL_snid_spectra_part2.pdf}\n \\caption{Same as Figure \\ref{fig:spectra1} but for the second half of the SNe Ic-BL in our sample. All spectra presented in this work will be made public on WISeREP. }\n \\label{fig:spectra2}\n\\end{figure*}\n\n\n\\section{Sample description}\n\\label{section:sample} \n\\subsection{SN 2018etk}\nOur first ZTF photometry of SN\\,2018etk (ZTF18abklarx) was\nobtained on 2018 August 1 (MJD 58331.16) with the P48. 
This first ZTF detection was in the $r$ band, with a host-subtracted magnitude of $19.21\pm0.12$\,mag (Figure~\ref{fig:opt-lc-mag}), at $\alpha$=15$^{\rm h}$17$^{\rm m}$02.53$^{\rm s}$,\n$\delta=+03^{\circ}56'38\farcs7$ (J2000). The object was reported to the TNS on 2018 August 8 by ATLAS, which had discovered it on 2018 August 6 \citep{Tonry2018_ZTF18abklarx}.\nThe last ZTF non-detection prior to ZTF discovery was on 2018 July 16, and the last shallow ATLAS non-detection was on 2018 August 2, at $18.75\,$mag.\nThe transient was classified as a Type Ic SN by \citet{Fremling2018_ZTF18abklarx} based on a spectrum obtained on 2018 August 13 with the SEDM.\nWe re-classify this transient as a SN Type Ic-BL most similar to SN\,2006aj based on a P200 DBSP spectrum obtained on 2018 August 21 (see Figure~\ref{fig:spectra1}).\nSN\,2018etk exploded in a star-forming galaxy with a known\nredshift of $z= 0.044$ derived from SDSS data.\n\n\subsection{SN 2018hom}\nOur first ZTF photometry of SN\,2018hom (ZTF18acbwxcc) was\nobtained on 2018 November 1 (MJD 58423.54) with\nthe P48. This first ZTF detection was in the $r$ band, with a host-subtracted magnitude of $16.60\pm0.04$\,mag (Figure \ref{fig:opt-lc-mag}), at $\alpha$=22$^{\rm h}$59$^{\rm m}$22.96$^{\rm s}$,\n$\delta=+08^{\circ}45'04\farcs6$ (J2000). The object was reported to the TNS by ATLAS on 2018 October 26, and discovered by ATLAS on 2018 October 24 at $o\approx 17.3\,$mag \citep{Tonry2018_ZTF18acbwxcc}.\nThe last ZTF non-detection prior to ZTF discovery was on 2018 October 9 at $g>20.35\,$mag, and the last ATLAS non-detection was on 2018 October 22 at $o> 18.25\,$mag.\nThe transient was classified as a SN Type Ic-BL by \citet{Fremling2018_ZTF18acbwxcc} based on a spectrum obtained on 2018 November 2 with the SEDM.\nSN\,2018hom exploded in a galaxy with unknown redshift. 
We measure a redshift of $z = 0.030$ from star-forming emission lines in a Keck-I LRIS spectrum obtained on 2018 November 30. We plot this spectrum in Figure~\ref{fig:spectra1}, along with its SNID template match to the Type Ic-BL SN\,1997ef. We note that this SN was also reported in the recently released ASAS-SN bright SN catalog \citep[][]{Neumann2022}.\n\n\subsection{SN 2018hxo}\nOur first ZTF photometry of SN\,2018hxo (ZTF18acaimrb) was\nobtained on 2018 October 9 (MJD 58400.14) with the P48. This first detection was in the $g$ band, with a host-subtracted magnitude of $18.89\pm0.09$\,mag (Figure~\ref{fig:opt-lc-mag}), at $\alpha$=21$^{\rm h}$09$^{\rm m}$05.80$^{\rm s}$,\n$\delta=+14^{\circ}32'27\farcs8$ (J2000). The object was first reported to the TNS by ATLAS on 2018 November 6, and first detected by ATLAS on 2018 September 25 at $o=18.36\,$mag \citep{Tonry2018_ZTF18acaimrb}.\nThe last ZTF non-detection prior to discovery was on 2018 September 27 at $r>20.12\,$mag, and the last ATLAS non-detection was on 2018 September 24 at $o> 18.52\,$mag.\nThe transient was classified as a SN Type Ic-BL by \citet{Dahiwale2020_ZTF18acaimrb} based on a spectrum obtained on 2018 December 1 with Keck-I LRIS. In Figure \ref{fig:spectra1} we plot this spectrum along with its SNID match to the Type Ic-BL SN\,2002ap.\nSN\,2018hxo exploded in a galaxy with unknown redshift. We measure a redshift of $z=0.048$ from star-forming emission lines in the Keck spectrum. \n\n\subsection{SN 2018jex}\nOur first ZTF photometry of SN\,2018jex (ZTF18acpeekw) was\nobtained on 2018 November 16 (MJD 58438.56) with\nthe P48. This first detection was in the $r$ band, with a host-subtracted magnitude of $20.07\pm0.29$\,mag, at $\alpha$=11$^{\rm h}$54$^{\rm m}$13.87$^{\rm s}$,\n$\delta=+20^{\circ}44'02\farcs4$ (J2000). 
The object was reported to the TNS by AMPEL on 2018 November 28 \citep{Nordin2018_ZTF18acpeekw}.\nThe last ZTF non-detection prior to ZTF discovery was on 2018 November 16 at $r>19.85\,$mag.\nThe transient was classified as a SN Type Ic-BL based on a spectrum obtained on 2018 December 4 with Keck-I LRIS. In Figure~\ref{fig:spectra1} we show this spectrum plotted against the SNID template of the Type Ic-BL SN\,1997ef.\nSN\,2018jex exploded in a galaxy with unknown redshift. We measure a redshift of $z=0.094$ from star-forming emission lines in the Keck spectrum.\n\n\subsection{SN 2019hsx}\nWe refer the reader to \citet{Anand2022} for details about this SN Ic-BL. Its P48 light curves and the spectrum used for classification are shown in Figures \ref{fig:opt-lc-mag} and \ref{fig:spectra1}. We note that this SN was also reported in the recently released ASAS-SN bright SN catalog \citep{Neumann2022}.\n\n\subsection{SN 2019xcc}\nWe refer the reader to \citet{Anand2022} for details about this SN Ic-BL. Its P48 light curves and the spectrum used for classification are shown in Figures \ref{fig:opt-lc-mag} and \ref{fig:spectra1}.\n\n\subsection{SN 2020jqm}\nOur first ZTF photometry of SN\,2020jqm (ZTF20aazkjfv) was\nobtained on 2020 May 11 (MJD 58980.27) with\nthe P48. This first detection was in the $r$ band, with a host-subtracted magnitude of $19.42\pm0.13$\,mag, at $\alpha$=13$^{\rm h}$49$^{\rm m}$18.57$^{\rm s}$,\n$\delta=-03^{\circ}46'10\farcs4$ (J2000). The object was reported to the TNS by ALeRCE on 2020 May 11 \citep{Forster2020_ZTF20aazkjfv}.\nThe last ZTF non-detection prior to ZTF discovery was on 2020 May 08 at $g>17.63\,$mag.\nThe transient was classified as a SN Type Ic-BL based on a spectrum obtained on 2020 May 26 with the SEDM \citep{Dahiwale2020_ZTF20aazkjfv}.\nSN\,2020jqm exploded in a galaxy with unknown redshift. We measure a redshift of $z = 0.037$ from host-galaxy emission lines in a NOT ALFOSC spectrum obtained on 2020 June 6. 
We plot the ALFOSC spectrum along with its SNID match to the Type Ic-BL SN\,1998bw in Figure~\ref{fig:spectra1}.\n\n\subsection{SN 2020lao}\nWe refer the reader to \citet{Anand2022} for details about this SN Ic-BL. Its P48 light curves and the spectrum used for classification are shown in Figures \ref{fig:opt-lc-mag} and \ref{fig:spectra1}. We note that this SN was also reported in the recently released ASAS-SN bright SN catalog \citep{Neumann2022}.\n\n\subsection{SN 2020rph}\nWe refer the reader to \citet{Anand2022} for details about this SN Ic-BL. Its P48 light curves and the spectrum used for classification are shown in Figures \ref{fig:opt-lc-mag} and \ref{fig:spectra2}.\n\n\subsection{SN 2020tkx}\nWe refer the reader to \citet{Anand2022} for details about this SN Ic-BL. Its P48 light curves and the spectrum used for classification are shown in Figures \ref{fig:opt-lc-mag} and \ref{fig:spectra2}.\n\n\subsection{SN 2021xv}\nWe refer the reader to \citet{Anand2022} for details about this SN Ic-BL. Its P48 light curves and the spectrum used for classification are shown in Figures \ref{fig:opt-lc-mag} and \ref{fig:spectra2}.\n\n\subsection{SN 2021aug}\nOur first ZTF photometry of SN\,2021aug (ZTF21aafnunh) was\nobtained on 2021 January 18 (MJD 59232.11) with the P48. This first detection was in the $g$ band, with a host-subtracted magnitude of $18.73\pm0.08$\,mag, at $\alpha$=01$^{\rm h}$14$^{\rm m}$04.81$^{\rm s}$,\n$\delta=+19^{\circ}25'04\farcs7$ (J2000).\nThe last ZTF non-detection prior to ZTF discovery was on 2021 January 16 at $g>20.12\,$mag. The transient was publicly reported to the TNS by ALeRCE on 2021 January 18 \citep{MunozArancibia2021}, and classified as a SN Type Ic-BL\nbased on a spectrum obtained on 2021 February 09 with the SEDM \citep{Dahiwale2021_ZTF21aafnunh}.\nSN\,2021aug exploded in a galaxy with unknown redshift. 
We measure a redshift of $z= 0.041$ from star-forming emission lines in a P200 DBSP spectrum obtained on 2021 February 08. This spectrum is shown in Figure~\\ref{fig:spectra2} along with its template match to the Type Ic-BL SN\\,1997ef.\n\n\\subsection{SN 2021epp}\nOur first ZTF photometry of SN\\,2021epp (ZTF21aaocrlm) was\nobtained on 2021 March 5 (MJD 59278.19) with the P48. This first ZTF detection was in the $r$ band, with a host-subtracted magnitude of $19.61\\pm0.15$\\,mag (Figure~\\ref{fig:opt-lc-mag}), at $\\alpha$=08$^{\\rm h}$10$^{\\rm m}$55.27$^{\\rm s}$, $\\delta=-06^{\\circ}02'49\\farcs3$ (J2000). \nThe transient was publicly reported to the TNS by ALeRCE on 2021 March 5 \\citep{MunozArancibia20210305}, and classified as a SN Type Ic-BL based on a spectrum obtained on 2021 March 13 by ePESSTO+ with the ESO Faint Object Spectrograph and Camera \\citep{Kankare2021}.\nThe last ZTF non-detection prior to discovery was on 2021 March 2 at $r>19.72\\,$mag.\nIn Figure~\\ref{fig:spectra2} we show the classification spectrum \nplotted against the SNID template of the Type Ic-BL SN\\,2002ap.\nSN\\,2021epp exploded in a galaxy with known redshift of $z = 0.038$.\n\n\\subsection{SN 2021htb}\nOur first ZTF photometry of SN\\,2021htb (ZTF21aardvol) was\nobtained on 2021 March 31 (MJD 59304.164) with the P48. This first ZTF detection was in the $r$ band, with a host-subtracted magnitude of $20.13\\pm0.21$\\,mag (Figure~\\ref{fig:opt-lc-mag}), at $\\alpha$=07$^{\\rm h}$45$^{\\rm m}$31.19$^{\\rm s}$,\n$\\delta=46^{\\circ}40'01\\farcs4$ (J2000).\nThe transient was publicly reported to the TNS by SGLF on 2021 April 2 \\citep{Poidevin_2021}.\nThe last ZTF non-detection prior to ZTF discovery was on 2021 March 2, at $r>19.88\\,$mag.\nIn Figure~\\ref{fig:spectra2} we show a P200 DBSP spectrum taken on 2021 April 09 plotted against the SNID template of the Type Ic-BL SN\\,2002ap. 
SN\\, 2021htb exploded in a SDSS galaxy with redshift $z= 0.035$.\n\n\\subsection{SN 2021hyz}\nOur first ZTF photometry of SN\\,2021hyz (ZTF21aartgiv) was\nobtained on 2021 April 03 (MJD 59307.155) with the P48. This first ZTF detection was in the $g$ band, with a host-subtracted magnitude of $20.29\\pm0.17$\\,mag (Figure~\\ref{fig:opt-lc-mag}), at $\\alpha$=09$^{\\rm h}$27$^{\\rm m}$36.51$^{\\rm s}$,\n$\\delta=04^{\\circ}27'11\\farcs$ (J2000).\nThe transient was publicly reported to the TNS by ALeRCE on 2021 April 3 \\citep{Forster_2021}.\nThe last ZTF non-detection prior to ZTF discovery was on 2021 April 1, at $g>19.15\\,$mag. In Figure~\\ref{fig:spectra2} we show a P60 SEDM spectrum taken on 2021 April 30 plotted against the SNID template of the Type Ic-BL SN\\,1997ef.\nSN\\,2021hyz exploded in a galaxy with redshift $z= 0.046$.\n\n\\subsection{SN 2021ywf}\nWe refer the reader to \\citet{Anand2022} for details about this SN Ic-BL. Its P48 light curves and the spectrum used for classification are shown in Figures \\ref{fig:opt-lc-mag} and \\ref{fig:spectra2}.\n\n\n\\begin{footnotesize}\n\\begin{longtable*}{lcccllllll}\n\\caption{Optical properties of the 16 SNe Ic-BL in our sample. We list the SN name; the MJD of maximum light in $r$ band; the absolute magnitude at $r$-band peak; the absolute magnitude at $g$-band peak; the explosion time estimated as days since $r$-band maximum; the estimated nickel mass; the characteristic timescale of the bolometric light curve; the photospheric velocity; the ejecta mass; and the kinetic energy of the explosion. See Sections \\ref{sec:vphot} and \\ref{sec:optical_properties} for discussion. 
\\label{tab:opt_data}}\\\\\n\\toprule\n\\toprule\nSN & T$_{r,\\rm max}$ & M$^{\\rm peak}_{r}$ & M$^{\\rm peak}_{g}$ & ${\\rm T}_{\\rm exp}$-T$_{r,\\rm max}$ & $M_{\\rm Ni}$ & $\\tau_{m}$ & v$_{\\rm ph}$($^{a}$) & $M_{\\rm ej}$ & $E_{\\rm k}$ \\\\\n & (MJD) & (AB mag) & (AB mag) & (d) & (M$_{\\odot}$) & (d) & ($10^4$\\,km\/s) & (M$_{\\odot}$) & ($10^{51}$erg) \\\\\n\\midrule[0.5pt]\n2018etk & 58337.40 & $-18.31\\pm0.03$ & $-18.30\\pm0.02$ & $-9\\pm1$ & $0.13_{-0.02}^{+0.01}$ & $5.0_{-2}^{+2}$ & $2.6\\pm0.2$ (5)\n& $0.7\\pm0.5$ & $3\\pm2$ \\\\\n2018hom & 58426.31 & $-19.30\\pm0.11$ & $-18.91\\pm0.01$ & $-9.3_{-0.4}^{+0.7}$ & $0.4\\pm0.1$ & $6.9\\pm0.2$ & $1.7\\pm0.2$ (27) & $> 0.7$ & $>1$ \\\\\n2018hxo & 58403.76 & $-18.68\\pm0.06$ & $-18.4\\pm0.1$ & $-28.6_{-0.3}^{+0.2}$ & $0.4\\pm0.2$ & $6\\pm2$ & $0.6\\pm0.1$ (48)\n& $>0.1$ & $>0.02$ \\\\\n2018jex & 58457.01 & $-19.06\\pm0.02$ & $-18.61\\pm0.04$ & $-18.49\\pm0.04$ & $0.53_{-0.06}^{+0.07}$ & $13_{-3}^{+2}$ & $1.8\\pm0.3$ (8)\n& $3\\pm1$ & $7\\pm3$ \\\\\n2019hsx & 58647.07 & $-17.08\\pm0.02$ & $-16.14\\pm0.04$ & $-15.6_{-0.5}^{+0.4}$ & $0.07_{-0.01}^{+0.01}$ & $12\\pm1$ & $1.0\\pm0.2$ (-0.2)\n& $1.6\\pm0.4$ & $1.0\\pm0.5$ \\\\\n2019xcc & 58844.59 & $-16.58\\pm0.06$ & $-15.6\\pm0.2$ & $-11\\pm2$ & $0.04\\pm0.01$ & $5.0_{-0.9}^{+1.4}$ & $2.4\\pm0.2$ (6)\n& $0.7\\pm0.3$ & $2\\pm1$ \\\\\n2020jqm & 58996.21 & $-18.26\\pm0.02$ & $-17.39\\pm0.04$ & $-17\\pm1$ & $0.29_{-0.04}^{+0.05}$ & $18\\pm2$ & $1.3\\pm0.3$ (-0.5) \n& $5\\pm1$ & $5\\pm3$ \\\\\n2020lao & 59003.92 & $-18.66\\pm0.02$ & $-18.55\\pm0.02$ & $-11\\pm1$ & $0.23\\pm0.01$ & $7.7\\pm0.2$ & $1.8\\pm0.2$ (9)\n& $1.2\\pm0.2$ & $2.5\\pm0.7$ \\\\\n2020rph & 59092.34 & $-17.48\\pm0.02$ & $-16.94\\pm0.03$ & $-19.88\\pm0.02$ & $0.07\\pm0.01$ & $17.23_{-0.9}^{+1.2}$ & $1.2\\pm0.5$ (-1)\n& $4\\pm2$ & $3\\pm3$ \\\\\n2020tkx & 59116.50 & $-18.49\\pm0.05$ & $-18.19\\pm0.03$ & $-13\\pm4$ & $0.22\\pm0.01$ & $10.9_{-0.8}^{+0.7}$ & $1.32\\pm0.09$ (53)\n& $> 1.5$ & $> 1.5$ 
\\\\\n2021xv & 59235.56 & $-18.92\\pm0.07$ & $-18.99\\pm0.05$ & $-12.8_{-0.3}^{+0.2}$ & $0.30_{-0.02}^{+0.01}$ & $7.7_{-0.5}^{+0.7}$ & $1.3\\pm0.1$ (3)\n& $0.9\\pm0.1$ & $1.0\\pm0.2$ \\\\\n2021aug & 59251.98 & $-19.42\\pm0.01$ & $-19.32\\pm0.06$ & $-24\\pm2$ & $0.7\\pm0.1$ & $17\\pm7$ & $0.8\\pm0.3$ (1) \n& $3\\pm2$ & $1\\pm1$ \\\\\n2021epp & 59291.83 & $-17.49\\pm0.03$ & $-17.12\\pm0.09$ & $-15\\pm1$ & $0.12\\pm0.02$ & $17_{-3}^{+4}$ & $1.4\\pm0.5$ (-4)\n& $5\\pm2$ & $6\\pm5$ \\\\\n2021htb & 59321.56 & $-16.55\\pm0.03$ & $-15.66\\pm0.07$ & $-19.38\\pm0.02$ & $0.04\\pm0.01$ & $13\\pm2$ & $1.0\\pm0.5$ (-6)\n& $1.8\\pm0.9$ & $1\\pm1$ \\\\\n2021hyz & 59319.10 & $-18.83\\pm0.05$ & $-18.81\\pm0.01$ & $-12.9\\pm0.9$ & $0.29_{-0.02}^{+0.01}$ & $7.7_{-0.4}^{+0.5}$ & $2.3\\pm0.3$ (16) & $> 1.3$ & $>4$ \\\\\n2021ywf & 59478.64 & $-17.10\\pm0.05$ & $-16.5\\pm0.1$ & $-10.7\\pm0.5$ & $0.06\\pm0.01$ & $8.9\\pm0.8$ & $1.2\\pm0.1$ (0.5) \n& $1.1\\pm0.2$ & $0.9\\pm0.3$ \\\\\n\\bottomrule\n\\multicolumn{9}{l}{$^{a}$ Rest-frame phase days of the spectrum that was used to measure the velocity.}\\\\\n\\end{longtable*}\n\\end{footnotesize}\n\n\\begin{figure*}\n \\centering\n \\includegraphics[height=8cm]{CorsiFig5_speed.png}\n \\caption{Photospheric velocities of the ZTF SNe in our sample (black) plotted as a function of (rest frame) time since explosion (see Table \\ref{tab:opt_data}). Velocities are measured using Fe II (5169 $\\AA$); velocities quoted refer to 84\\% confidence and are measured relative to the Ic template velocity using the open source software \\texttt{SESNspectraLib} \\citep{Liu2016,Modjaz2016}. We compare our results with photospheric velocities derived from spectroscopic modeling for a number of Ib\/c SNe. 
Red symbols represent GRB-SNe \\citep{Iwamoto1998,Mazzali2003,Mazzali2006b}; magenta is used for XRF\/X-ray transients-SNe \\citep{Mazzali2006a,Pian2006,Modjaz2009}; blue represents SNe Ic-BL \\citep{Mazzali2000,Mazzali2002}; green is used for the ``normal'' Type Ic SN 1994I \\citep{Sauer2006}. Finally, for comparison we also plot the photospheric velocities for the SNe Ic-BL in the \\citet{Corsi2016} sample as measured by \\citet{Taddia2019} (see their Tables 2 and A1; yellow crosses). Errors on the times since explosion account for the uncertainties on T$_{\\rm exp}$ as reported in Table \\ref{tab:opt_data}. \\label{fig:velocities}}\n\\end{figure*}\n\n\n\\section{Multi-wavelength analysis}\n\\label{sec:modeling}\n\\subsection{Photospheric velocities}\n\\label{sec:vphot}\nWe confirm the SN Type Ic-BL classification of each object in our sample by measuring the photospheric velocities (${\\rm v_{ph}}$).\nSNe Ic-BL are characterized by high expansion velocities evident in the broadness of their spectral lines. A good proxy for the photospheric velocity is that derived from the maximum absorption position of the Fe II \\citep[$\\lambda 5169$; e.g.,][]{Modjaz2016}. We caution, however, that estimating this velocity is not easy given the strong line blending. We first pre-processed one high-quality spectrum per object using the IDL routine \\texttt{WOMBAT}, then smoothed the spectrum using the IDL routine \\texttt{SNspecFFTsmooth} \\citep{Liu2016}, and finally ran \\texttt{SESNSpectraLib} \\citep{Liu2016,Modjaz2016} to obtain final velocity estimates. \n\nIn Figure~\\ref{fig:velocities} we show a comparison of the photospheric velocities estimated for the SNe in our sample with those derived from spectroscopic modeling for a number of SNe Ib\/c. 
The velocities measured for our sample are compatible, within the measurement errors, with those observed for the PTF\/iPTF samples.\nMeasured values of the photospheric velocities, together with the corresponding rest-frame phase (in days since maximum $r$-band light) of the spectra used to measure them, are reported in Table~\ref{tab:opt_data}. \n\nWe note that the spectra used to estimate the photospheric velocities of SN\,2019xcc, SN\,2020lao, and SN\,2020jqm are different from those used for the classification of those events as SNe Ic-BL (see Section \ref{section:sample} and Figure \ref{fig:spectra1}). This is because for spectral classification we prefer later-time but higher-resolution spectra, while for velocity measurements we prefer earlier-time spectra even if taken with the lower-resolution SEDM. \n\n\subsection{Bolometric light curve analysis}\n\label{sec:optical_properties}\nIn our analysis we correct all ZTF photometry for Galactic extinction, using the Milky Way (MW) color excess $E(B-V)_{\mathrm{MW}}$ toward the position of the SNe.\nThese values are all obtained from \cite{Schlafly2011}. All reddening corrections are applied using the \cite{Cardelli1989} extinction law with $R_V=3.1$. \nAfter correcting for Milky Way extinction, we interpolate our P48 forced-photometry light curves \nusing a Gaussian Process via the {\tt GEORGE}\footnote{\href{https:\/\/george.readthedocs.io}{https:\/\/george.readthedocs.io}} package with a stationary Matern32 kernel and the analytic functions of \citet{Bazin2009} as the mean function for the flux.\nAs shown in Figure~\ref{fig:opt-lc-mag}, the color evolution of the SNe in our sample is broadly similar from one object to another, which suggests that the amount of additional host extinction is small. Hence, we set the host extinction to zero.\nNext, we derive bolometric light curves by calculating bolometric corrections from the $g$- and $r$-band data following the empirical relations of \cite{Lyman2014,Lyman2016}. 
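The \citet{Bazin2009} analytic function used as the Gaussian-Process mean can be sketched as follows; this is a minimal illustration, with parameter values chosen for display only (not fitted to any SN in the sample):

```python
import numpy as np

def bazin_flux(t, a, t0, t_rise, t_fall, b):
    """Analytic light-curve model of Bazin et al. (2009):
    an exponential decline modulated by a sigmoidal rise,
    plus a constant baseline b."""
    return a * np.exp(-(t - t0) / t_fall) / (1.0 + np.exp(-(t - t0) / t_rise)) + b

# Illustrative parameters: amplitude in arbitrary flux units,
# rise/fall timescales in days.
t = np.linspace(-20.0, 60.0, 81)
flux = bazin_flux(t, a=100.0, t0=0.0, t_rise=5.0, t_fall=15.0, b=1.0)
t_peak = t[np.argmax(flux)]  # the model peaks at t0 + t_rise*ln(t_fall/t_rise - 1)
```

In practice this function serves as the GP mean, with the kernel absorbing the residual structure around it.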
For SN\\,2018hxo, since there is only one $g$-band detection, we assume a constant bolometric correction to estimate its bolometric light curve. These bolometric light curves are shown in the bottom panel of Figure \\ref{fig:opt-lc-flux}. \n\nWe estimate the explosion time ${\\rm T}_{\\rm exp}$ of the SNe in our sample as follows. For SN\\,2021aug, we fit the early ZTF light curve data following the method presented in \\citet{Miller2020}, where we fix the power-law index of the rising early-time temporal evolution to $\\alpha=2$, and derive an estimate of the explosion time from the fit. For most of the other SNe in our sample, the ZTF $r$- and $g$-band light curves lack enough early-time data to determine an estimate of the explosion time following the formulation of \\citet{Miller2020}. For all these SNe we instead set the explosion time to the mid-point between the last non-detection prior to discovery, and the first detection. \nResults on ${\\rm T}_{\\rm exp}$ are reported in Table \\ref{tab:opt_data}.\n\nWe fit the bolometric light curves around peak ($-20$ to 60 rest-frame days relative to peak) to a model using the Arnett formalism \\citep{Arnett1982}, with the nickel mass ($M_{\\rm Ni}$) and characteristic time scale $\\tau_m$ as free parameters \\citep[see e.g. Equation A1 in][]{Valenti2008}. The derived values of $M_{\\rm Ni}$ (Table \\ref{tab:opt_data}) have a median of $\\approx 0.22$\\,M$_{\\odot}$, compatible with the median value found for SNe Ic-BL in the PTF sample by \\citet{Taddia2019}, somewhat lower than for SN\\,1998bw for which the estimated nickel mass values are in the range $(0.4-0.7)$\\,M$_{\\odot}$, but comparable to the $M_{\\rm Ni}\\approx 0.19-0.25$\\,M$_{\\odot}$ estimated for\nSN\\,2009bb \\citep[see e.g.,][]{Lyman2016,Afsariardchi2021}. 
We note that events such as SN\\,2019xcc and SN\\,2021htb have relatively low values of $M_{\\rm Ni}$, which are however compatible with the range of $0.02-0.05$\\,M$_{\\odot }$ expected for the nickel mass of magnetar-powered SNe Ic-BL \\citep{Nishimura2015,Chen2017,Suwa2015}. \n\nNext, from the measured characteristic timescale $\\tau_m$ of the bolometric light curve, and the photospheric velocities estimated via spectral fitting (see previous Section) we derive the ejecta mass ($M_{\\rm ej}$) and the kinetic energy ($E_{\\rm k}$) via the following relations \\citep[see e.g. Equations 1 and 2 in][]{Lyman2016}:\n\\begin{eqnarray}\n \\tau^2_m {\\rm v_{ph, max}}=\\frac{2\\kappa}{13.8 c} M_{\\rm ej}~~~ & \n ~~~{\\rm v}^2_{\\rm ph, max}=\\frac{5}{3}\\frac{2 E_{\\rm k}}{M_{\\rm ej}},\\label{eq:ejecta}\n\\end{eqnarray}\nwhere we assume a constant opacity of $\\kappa = 0.07$\\,g\\,cm$^{-2}$. \n\nWe note that to derive $M_{\\rm ej}$ and $E_{\\rm k}$ as described above we assume the photospheric velocity evolution is negligible within 15 days relative to peak epoch, and use the spectral velocities measured within this time frame to estimate ${\n\\rm v}_{\\rm ph, max}$ in Equation \\ref{eq:ejecta}. However, there are four objects in our sample (SN\\,2018hom, SN\\,2018hxo, SN\\,2020tkx, and SN\\,2021hyz) for which the spectroscopic analysis constrained the photospheric velocity only after day 15 relative to peak epoch. For these events, we only provide lower limits on the ejecta mass and kinetic energy (see Table \\ref{tab:opt_data}). \n \nConsidering only the SNe in our sample for which we are able to measure the photospheric velocity within 15\\,d since peak epoch, we derive median values for the ejecta masses and kinetic energies of $1.7$\\,M$_{\\odot}$ and $2.2\\times10^{51}$\\,erg, respectively. These are both a factor of $\\approx 2$ smaller than the median values derived for the PTF\/iPTF sample of SNe Ic-BL \\citep{Taddia2019}. 
This could be due to either an intrinsic effect, or to uncertainties on the measured photospheric velocities. In fact, we note that the photospheric velocity is expected to decrease very quickly after maximum light (see e.g. Figure \ref{fig:velocities}). Since the photospheric velocity in Equation (\ref{eq:ejecta}) of the Arnett formulation is the one at peak, our estimates of ${\rm v}_{\rm ph,max}$ could easily underestimate that velocity by a factor of $\approx 2$ for many of the SNe in our sample. This would in turn yield an underestimate of $M_{\rm ej}$ by a factor of $\approx 2$ (though the kinetic energy would be reduced by a larger factor). A more in-depth analysis of these trends and uncertainties will be presented in \citet{Gokul2022}. \n \n\subsection{Search for gamma-rays}\nBased on the explosion dates derived for each object in Section \ref{sec:optical_properties} (Table \ref{tab:opt_data}), we searched for potential GRB coincidences in several online archives. No counterparts were identified in both spatial and temporal coincidence with either the Burst Alert Telescope (BAT; \citealt{Barthelmy2005}) on the \textit{Neil Gehrels\nSwift Observatory}\footnote{See \url{https:\/\/swift.gsfc.nasa.gov\/results\/batgrbcat}.} or the Gamma-ray Burst Monitor \citep[GBM;][]{Meegan2009} on \textit{Fermi}\footnote{See \url{https:\/\/heasarc.gsfc.nasa.gov\/W3Browse\/fermi\/fermigbrst.html}.}.\n\nSeveral candidate counterparts were found with \textit{temporal} coincidence in the online catalog from the KONUS instrument on the \textit{Wind} satellite (SN\,2018etk, SN\,2018hom, SN\,2019xcc, SN\,2020jqm, SN\,2020lao, SN\,2020tkx, SN\,2021aug). However, given the relatively imprecise explosion date constraints for several of the events in our sample (see Table \ref{tab:opt_data}), and the coarse localization information from the KONUS instrument, we cannot firmly associate any of these GRBs with the SNe Ic-BL. 
In fact, given the rate of GRB detections by KONUS ($\\sim 0.42$\\,d$^{-1}$) and the time window over which we searched for counterparts ($30$\\,d in total; derived from the explosion date constraints), the observed number of coincidences (13) is consistent with random fluctuations. Finally, none of the possible coincidences were identified in events with explosion date constraints more precise than 1\\,d. \n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=18cm]{CorsiFig6_GokulXray.png}\n \\caption{\\textit{Swift}\/XRT upper-limits and detections (downward pointing triangles and filled circles with error bars, respectively) obtained for 9 of the 16 SNe Ic-BL in our sample. We plot the observed X-ray luminosity as a function of time since explosion. We compare these observations with the X-ray light curves of the low-luminosity GRB\\,980425 \\citep[red stars;][]{Kouveliotou2004}, GRB\\,060218 \\citep[red squares;][]{Campana2006}, GRB\\,100316D \\citep[red crosses;][]{Margutti2013}, and with the relativistic iPTF17cw \\citep[blue cross;][]{Corsi2017}. Dotted red lines connect the observed data points (some of which at early and late times are not shown in the plot) for these three low-luminosity GRBs. We also plot the observed X-ray luminosity predicted by off-axis GRB models \\citep[black, green, and orange lines;][]{vanEerten2011, VanEerten2012}. We assume $\\epsilon_B=\\epsilon_e=0.1$, a constant density ISM in the range $n = 1-10\\,{\\rm cm}^{-3}$, a top-hat jet of opening angle $\\theta_j=0.2$, and various observer's angles $\\theta_{\\rm obs}=(2.5-3)\\theta_j$.}\n \\label{fig:X-raymodel}\n\\end{figure*}\n\n\\subsection{X-ray constraints}\n\\label{sec:X-raymodel}\nSeven of the 9 SNe Ic-BL observed with \\textit{Swift}-XRT did not result in a significant detection. In Table \\ref{tab:x-ray} we report the derived 90\\% confidence flux upper limits in the 0.3--10\\,keV band after correcting for Galactic absorption \\citep{Willingale+2013}. 
\n\nWhile observations of SN\,2021ywf resulted in a $\approx 3.2\sigma$ detection significance (Gaussian equivalent), the limited number of photons (8) precluded a meaningful spectral fit. Thus, a $\Gamma = 2$ power-law spectrum was adopted for the flux conversion for this source as well. We note that because of the relatively poor spatial resolution of the \textit{Swift}-XRT (estimated positional uncertainty of 11.7\arcsec~radius at 90\% confidence), we cannot entirely rule out unrelated emission from the host galaxy of SN\,2021ywf (e.g., AGN, X-ray binaries, diffuse host emission; see Figure \ref{fig:hosts-det} for the host galaxy).\n\nFor SN\,2019hsx we detected enough photons to perform a spectral fit for count-rate-to-flux conversion. The spectrum is found to be relatively soft, with a best-fit power-law index of $\Gamma = 3.9^{+3.0}_{-2.1}$. \nOur \textit{Swift} observations of SN\,2019hsx do not show significant evidence for variability of the source X-ray flux over the timescales of our follow-up. While the lack of temporal variability is not particularly constraining given the low signal-to-noise ratio in individual epochs, we caution that, in this case too, the relatively poor spatial resolution of the \textit{Swift}-XRT (7.4\arcsec~radius position uncertainty at 90\% confidence) implies that unrelated emission from the host galaxy cannot be excluded.\n\nThe constraints derived from \nthe \textit{Swift}-XRT observations can be compared with the\nX-ray light curves of low-luminosity GRBs, or models of GRB afterglows \nobserved slightly off-axis. For the latter, we use\nthe numerical model by \citet{vanEerten2011, VanEerten2012}. We assume equal energy fractions in relativistic electrons and magnetic fields ($\epsilon_B= \epsilon_e=0.1$), and an interstellar medium (ISM) of density $n = 1-10$\,cm$^{-3}$. 
We note that a constant density ISM (rather than a wind profile) has been shown to fit the majority of GRB afterglow light curves, implying that most GRB progenitors might have relatively small wind termination-shock radii \\citep{Schulze2011}. We generate the model light curves for a nominal redshift of $z=0.05$ and then convert the predicted flux densities into X-ray luminosities by integrating over the 0.3-10\\,keV energy range and neglecting the small redshift corrections. We plot the model light curves in Figure \\ref{fig:X-raymodel}, for various energies, different power-law indices $p$ of the electron energy distribution, and various off-axis angles (relative to a jet opening angle, set to $\\theta_j=0.2$). In the same Figure we also plot the X-ray light curves of low-luminosity GRBs for comparison (neglecting redshift corrections). \nAs evident from this Figure, our \\textit{Swift}\/XRT upper limits (downward-pointing triangles) \nexclude X-ray afterglows associated with higher-energy GRBs observed slightly off-axis. However, X-ray emission as faint as the afterglow of\nthe low-luminosity GRB\\,980425 cannot be excluded. As we discuss in the next Section, radio data collected with the VLA enable us to exclude GRB\\,980425\/SN\\,1998bw-like emission for most of the SNe in our sample. \n\nWe note that our two X-ray detections of SN\\,2019hsx and SN\\,2021ywf are consistent with several GRB off-axis light curve models and, in the case of SN\\,2021ywf, also with GRB\\,980425-like emission within the large errors. 
However, for this interpretation of their X-ray emission to be compatible with our radio observations (see Table \\ref{tab:data}), one needs to invoke a flattening of the radio-to-X-ray spectrum, similar to what has been invoked for other stripped-envelope SNe in the context of cosmic-ray dominated shocks \\citep{Ellison2000, Chevalier2006}.\n\n\\subsection{Radio constraints}\n\\label{sec:radioanalysis}\nAs evident from Table \\ref{tab:data}, we have obtained at least one radio detection for 11 of the 16 SNe in our sample. None of these 11 radio sources were found to be coincident with known radio sources in the VLA FIRST \\citep{Becker1995} catalog (using a search radius of 30\\arcsec~around the optical SN positions). This is not surprising since the FIRST survey had a typical RMS sensitivity of $\\approx 0.15$\\,mJy at 1.4\\,GHz, much shallower than the deep VLA follow-up observations carried out within this follow-up program. We also checked the quick look images from the VLA Sky Survey (VLASS) which reach a typical RMS sensitivity of $\\approx 0.12$\\,mJy at 3\\,GHz \\citep{Hernandez2018,Law2018}. We could find images for all but one (SN\\,2021epp) of the fields containing the 16 SNe BL-Ic in our sample. The VLASS images did not provide any radio detection at the locations of the SNe in our sample.\n\nFive of the 11 SNe Ic-BL with radio detections are associated with extended or marginally resolved radio emission. Two other radio-detected events (SN\\,2020rph and SN\\,2021hyz) appear point-like in our images, but show no evidence for significant variability of the detected radio flux densities over the timescales of our observations. Thus, for a total of 7 out of 11 SNe Ic-BL with radio detections, we consider the measured flux densities as upper-limits corresponding to the brightness of their host galaxies, similarly to what was done in e.g. \\citet{Soderberg2006} and \\citet{Corsi2016}. 
The remaining 4 SNe Ic-BL with radio detections are compatible with point sources (SN\\,2018hom, SN\\,2020jqm, SN\\,2020tkx, and SN\\,2021ywf), and all but one (SN\\,2018hom) had more than one observation in the radio via which we were able to establish the presence of substantial variability of the radio flux density. Hereafter we consider these 4 detections as genuine radio SN counterparts, though we stress that with only one observation of SN\\,2018hom we cannot rule out a contribution from host galaxy emission, especially given that the radio follow up of this event was carried out with the VLA in its most compact (D) configuration with poorer angular resolution.\n\nIn summary, our radio follow-up campaign of 16 SNe Ic-BL resulted in 4 radio counterpart detections, and 12 deep upper-limits on associated radio counterparts.\n\n\\begin{figure*}\n \\begin{center}\n \\includegraphics[width=6cm]{CorsiFig7_ZTF18abklarx_host_contour.png}\n \\includegraphics[width=6cm]{CorsiFig7_ZTF19adaiomg_host_contour.png}\n \\includegraphics[width=6.cm,height=5.6cm]{CorsiFig7_ZTF20abswdbg_host_contour.png}\n \\includegraphics[width=6cm]{CorsiFig7_ZTF21aadatfg_host_contour.png}\n \\includegraphics[width=6cm,height=5.6cm]{CorsiFig7_ZTF21aaocrlm_host_contour.png}\n \\includegraphics[width=6cm]{CorsiFig7_ZTF21aardvol_host_contour.png}\n \\includegraphics[width=6cm]{CorsiFig7_ZTF21aartgiv_host_contour.png}\n \\caption{PanSTARRS-1 \\citep{Flewelling2020} reference $r$-band images of the fields of the SNe in our sample for which host galaxy light dominates the radio emission. Contours in magenta are 30\\%, 50\\%, and 90\\% of the radio peak flux reported in Table \\ref{tab:data} for the first radio detection of each field. The blue circles centered on the optical SN positions (not shown in the images) have sizes of 2\\arcsec \\citep[comparable to the ZTF PSF at average seeing;][]{Bellm2019}. 
The red-dotted circles enclose the region in which we search for radio counterparts (radii equal to the nominal FWHM of the VLA synthesized beams; Table \\ref{tab:data}). The sizes of the actual VLA synthesized beams are shown as filled magenta ellipses. The red dots mark the locations of the radio peak fluxes measured in the radio search areas. \\label{fig:hosts}}\n \\end{center}\n\\end{figure*}\n\n\\begin{figure*}\n \\begin{center}\n \\hbox{\n \\includegraphics[height=8.3cm]{CorsiFig8_ZTF18acbwxcc_contour.png}\n \\hspace{0.2cm}\n \\includegraphics[height=8.3cm]{CorsiFig8_ZTF20aazkjfv_contour.png}}\n \\hbox{\n \\includegraphics[height=7.8cm]{CorsiFig8_ZTF20abzoeiw_contour.png}\n \\includegraphics[height=7.8cm]{CorsiFig8_ZTF21acbnfos_contour.png}}\n \\caption{Same as Figure \\ref{fig:hosts} but for the fields containing the SNe in our sample for which we detected a SN radio counterpart. We stress that with only one observation of SN\\,2018hom we cannot rule out a contribution from host galaxy emission, especially given that the radio follow up of this event was carried out with the VLA in its D configuration. \\label{fig:hosts-det}}\n \\end{center}\n\\end{figure*}\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=18cm]{CorsiFig9_RadioLight.png}\n \\caption{Radio ($\\approx 6$\\,GHz) observations of the 16 SNe Ic-BL in our sample (filled circles and downward pointing triangles in shades of pink, purple, and blue). Upper-limits associated with non-detections ($3\\sigma$ or brightness of the host galaxy at the optical location of the SN) are plotted with downward-pointing triangles; detections are plotted with filled circles. We compare these observations with the radio light curves of GRB-SNe (red); of relativistic-to-mildly relativistic SNe Ic-BL discovered independently of a $\\gamma$-ray trigger (cyan); and with PTF11qcj \\citep{Corsi2014}, an example of a radio-loud non-relativistic and CSM-interacting SN Ic-BL (yellow). 
As evident from this Figure, our observations exclude SN\\,1998bw-like radio emission for all but one (SN\\,2021epp) of the events in our sample. This doubles the sample of SNe Ic-BL for which radio emission observationally similar to SN\\,1998bw was previously excluded \\citep{Corsi2016}, bringing the upper limit on the fraction of SNe compatible with SN\\,1998bw down to $< 19\\%$ \\citep[compared to $< 41\\%$ previously reported in][]{Corsi2016}.\n For 10 of the 16 SNe presented here we also exclude relativistic ejecta with radio luminosity densities between $\\approx 5\\times10^{27}$\\,erg\\,s$^{-1}$\\,Hz$^{-1}$ and $\\approx 10^{29}$\\,erg\\,s$^{-1}$\\,Hz$^{-1}$ at $t\\gtrsim 20$\\,d, similar to SNe associated with low-luminosity GRBs such as SN\\,1998bw \\citep{Kulkarni1998}, SN\\,2003lw \\citep{Soderberg2004}, SN\\,2010bh \\citep{Margutti2013}, or to the relativistic SN\\,2009bb \\citep{Soderberg2010} and iPTF17cw \\citep{Corsi2017}. None of our observations exclude radio emission similar to that of SN\\,2006aj. }\n \\label{fig:radiolight}\n\\end{figure*}\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=8.6cm]{CorsiFig10_RadioSpeed.png}\n \\caption{Properties of the radio-emitting ejecta of the SNe in our sample for which we detect a radio counterpart (magenta dots), compared with those of GRB-SNe (red stars) and of relativistic-to-mildly\nrelativistic SNe Ic-BL discovered independently of a $\\gamma$-ray trigger (cyan stars). As evident from this Figure, only SN\\,2018hom is compatible with an ejecta speed $\\gtrsim 0.3c$, though with the caveat that we only have one radio observation for this SN. None of the other ZTF SNe Ic-BL in our sample shows evidence for ejecta faster than $0.3c$. We also note that SN\\,2020jqm lies in the region of the parameter space occupied by radio-loud CSM-interacting SNe similar to PTF11qcj. 
See Section \\ref{sec:radio_properties} for discussion.}\n \\label{fig:radiospeed}\n\\end{figure}\n\n\\subsubsection{Fraction of SN\\,1998bw-like SNe Ic-BL}\nThe local rate of SNe Ic-BL is estimated to be $\\approx 5\\%$ of the core-collapse SN rate \\citep{Li2011,Shivvers2017,Perley2020b} or $\\approx 5\\times10^3$\\,Gpc$^{-3}$\\,yr$^{-1}$ assuming a core-collapse SN rate of $\\approx 10^5$\\,Gpc$^{-3}$\\,yr$^{-1}$ \\citep{Perley2020b}. Observationally, we know that cosmological long GRBs are characterized by ultra-relativistic jets observed on-axis, and have an \\textit{intrinsic} (corrected for beaming angle) local volumetric rate of $79^{+57}_{-33}$\\,Gpc$^{-3}$\\,yr$^{-1}$ \\citep[e.g.,][and references therein]{Ghirlanda2022}. Hence, only ${\\cal O}(1)\\%$ of SNe Ic-BL can make long GRBs. For low-luminosity GRBs, the \\textit{observed} local rate is affected by large errors, $230^{+490}_{-190}$\\,Gpc$^{-3}$\\,yr$^{-1}$ \\citep[see][and references therein]{Bromberg2011}, and their typical beaming angles are largely unconstrained. Hence, the question of what fraction of SNe Ic-BL can make low-luminosity GRBs remains to be answered. \n\nRadio observations of SNe Ic-BL are a powerful way to constrain this fraction independently of relativistic beaming effects that preclude observations of jets in X-rays and $\\gamma$-rays for off-axis observers. However, observational efforts aimed at constraining the fraction of SNe Ic-BL harboring low-luminosity GRBs independently of $\\gamma$-ray observations have long been challenged by the rarity of the SN Ic-BL optical detections (compared to other core-collapse events), coupled with the small number of these rare SNe for which the community has been able to collect deep radio follow-up observations within 1\\,yr since explosion \\citep[see e.g.,][]{Soderberg2006catalog}. 
Progress in this respect has been made since the advent of the PTF, and more generally with synoptic optical surveys that have greatly boosted the rate of stripped-envelope core-collapse SN discoveries \\citep[e.g.,][]{Shappee2014,Sand2018,Tonry2018}. \n\nIn our previous work \\citep{Corsi2016}, we presented one of the most extensive samples of SNe Ic-BL with deep VLA observations, largely composed of events detected by the PTF\/iPTF. Combining our sample with the SN Ic-BL\\,2002ap \\citep{GalYam2002,Mazzali2002} and SN\\,2002bl \\citep{Armstrong2002,Berger2003}, and the CSM-interacting SN Ic-BL\\,2007bg \\citep{Salas2013}, we had overall 16 SNe Ic-BL for which radio emission observationally similar to SN\\,1998bw was excluded, constraining the rate of SNe Ic-BL observationally similar to SN\\,1998bw to $< 6.61\/16\\approx 41\\%$, where we have used the fact that the Poisson 99.865\\% confidence (or $3\\sigma$ Gaussian equivalent for a single-sided distribution) upper-limit on zero SNe compatible with SN\\,1998bw is $\\approx 6.61$.\n\nWith the addition of the 16 ZTF SNe Ic-BL presented in this work, we have now doubled the sample of SNe Ic-BL with deep VLA observations presented in \\citet{Corsi2016}, providing evidence for an additional 15 SNe Ic-BL (all but SN\\,2021epp; see Figure \\ref{fig:radiolight}) that are observationally different from SN\\,1998bw in the radio. Adding to our sample SN\\,2018bvw \\citep{Ho2020_ZTF18aaqjovh}, AT\\,2018gep \\citep{Ho2019_ZTF18abukavn}, and SN\\,2020bvc \\citep{Ho2020_ZTF20aalxlis}, whose radio observations exclude SN\\,1998bw-like emission, we are now at 34 SNe Ic-BL that are observationally different from SN\\,1998bw. Hence, we can tighten our constraint on the fraction of 1998bw-like SNe Ic-BL to $< 6.61\/34\\approx 19\\%$ (99.865\\% confidence). This upper-limit implies that the \\textit{intrinsic} rate of 1998bw-like GRBs is $\\lesssim 950$\\,Gpc$^{-3}$\\,yr$^{-1}$. 
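The arithmetic behind these constraints is compact enough to spell out. The sketch below (plain Python; variable names are ours) reproduces the Poisson upper limit for zero detections, the resulting fraction, and the implied intrinsic rate, assuming the local SN Ic-BL rate of $\\approx 5\\times10^3$\\,Gpc$^{-3}$\\,yr$^{-1}$ quoted earlier in this Section.

```python
import math

# One-sided 99.865% confidence level (3-sigma Gaussian equivalent).
CL = 0.99865

# With zero observed 1998bw-like events, P(0 | mu) = exp(-mu) = 1 - CL
# at the upper limit, so mu = -ln(1 - CL) ~ 6.61.
mu_up = -math.log(1.0 - CL)

# Fraction limit from 34 SNe Ic-BL with no 1998bw-like radio emission.
frac_up = mu_up / 34.0          # ~0.19, i.e. < 19%

# Intrinsic rate limit, using a SN Ic-BL rate of ~5e3 Gpc^-3 yr^-1.
rate_up = frac_up * 5.0e3       # ~970 Gpc^-3 yr^-1

print(mu_up, frac_up, rate_up)
```

Rounding the fraction to 19\\% before multiplying gives the $\\lesssim 950$\\,Gpc$^{-3}$\\,yr$^{-1}$ figure quoted above.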
Combining this constraint with the rate of low-luminosity GRBs derived from their high-energy emission, we conclude that low-luminosity GRBs have inverse beaming factors $2\/\\theta^2\\lesssim 4^{+20}_{-3}$, corresponding to jet half-opening angles $\\theta \\gtrsim 40^{+40}_{-24}$\\,deg. \n\nWe note that for 10 of the SNe in the sample presented here we also exclude relativistic ejecta with radio luminosity densities between $\\approx 5\\times10^{27}$\\,erg\\,s$^{-1}$\\,Hz$^{-1}$ and $\\approx 10^{29}$\\,erg\\,s$^{-1}$\\,Hz$^{-1}$ at $t\\gtrsim 20$\\,d, indicating that SNe Ic-BL similar to those associated with low-luminosity GRBs, such as SN\\,1998bw \\citep{Kulkarni1998}, SN\\,2003lw \\citep{Soderberg2004}, SN\\,2010bh \\citep{Margutti2013}, or to the relativistic SN\\,2009bb \\citep{Soderberg2010} and iPTF17cw \\citep{Corsi2017}, are intrinsically rare. However, none of our observations exclude radio emission similar to that of SN\\,2006aj. This is not surprising since the afterglow of this low-luminosity GRB faded on timescales much shorter than the $20-30$ days since explosion that our VLA monitoring campaign allowed us to target. To enable progress, obtaining prompt ($\\lesssim 5$\\,d since explosion) and accurate spectral classification paired with deep radio follow-up observations of SNe Ic-BL should be a major focus of future studies. At the same time, as discussed in \\citet{Ho2020_ZTF20aalxlis}, high-cadence optical surveys can provide an alternative way to measure the rate of SNe Ic-BL that are similar to SN\\,2006aj independently of $\\gamma$-ray and radio observations, by catching potential optical signatures of shock-cooling emission at early times. 
Based on an analysis of ZTF SNe with early high-cadence light curves, \\citet{Ho2020_ZTF20aalxlis} concluded that SN\\,2006aj-like events appear to be uncommon, but more events will be needed to measure a robust rate.\n\n\\subsubsection{Properties of the radio-emitting ejecta}\n\\label{sec:radio_properties}\nGiven that none of the SNe in our sample shows evidence for relativistic ejecta, hereafter we consider their radio properties within the synchrotron self-absorption (SSA) model for radio SNe \\citep{Chevalier1998}. Within this model, constraining the radio peak frequency and peak flux can provide information on the size of the radio-emitting material. We start from Equations (11) and (13) of \\citet{Chevalier1998}:\n\\begin{eqnarray}\n\\nonumber R_{p}\\approx 8.8\\times10^{15}\\,{\\rm cm}\\left(\\frac{\\eta}{2\\alpha}\\right)^{1\/(2p+13)} \\left(\\frac{F_p}{\\rm Jy}\\right)^{(p+6)\/(2p+13)}\\times\\\\\\left(\\frac{d_L}{\\rm Mpc}\\right)^{(2p+12)\/(2p+13)}\\left(\\frac{\\nu_p}{5\\,\\rm GHz}\\right)^{-1},~~~\n\\label{eq:ejspeed}\n\\end{eqnarray}\nwhere $\\alpha \\approx 1$ is the ratio of relativistic electron energy density to magnetic energy density, $F_{p}$ is the flux density at the time of SSA peak, $\\nu_{p}$ is the SSA frequency, and where $R\/\\eta$ is the thickness of the radiating electron shell. The normalization of the above Equation has a small dependence on $p$ and in the above we assume $p\\approx 3$ for the power-law index of the electron energy distribution. 
Setting $R_p \\approx {\\rm v_s} t_p$ in Equation (\\ref{eq:ejspeed}), and considering that $L_p \\approx 4\\pi d^2_L F_p$ (neglecting redshift effects), we get:\n\\begin{eqnarray}\n \\nonumber \\left(\\frac{L_p}{\\rm erg\\,s^{-1}\\,Hz^{-1}}\\right) \\approx 1.2\\times10^{27} \\left(\\frac{\\beta_s}{3.4}\\right)^{(2p+13)\/(p+6)} \\\\\\times\\left(\\frac{\\eta}{2\\alpha}\\right)^{-1\/(p+6)} \\left(\\frac{\\nu_p}{5\\,\\rm GHz}\\frac{t_p}{\\rm 1\\,d}\\right)^{(2p+13)\/(p+6)}\n\\end{eqnarray}\nwhere we have set $\\beta_s = {\\rm v_s}\/c$. We plot in Figure \\ref{fig:radiospeed} with blue-dotted lines the relationship above for various values of $\\beta_s$ (and for $p=3$, $\\eta=2$, $\\alpha=1$). As evident from this Figure, relativistic events such as SN\\,1998bw (for which the non-relativistic approximation used in the above Equations breaks down) are located at $\\beta_s\\gtrsim 1$. None of the ZTF SNe Ic-BL in our sample for which we obtained a radio counterpart detection shows evidence for ejecta faster than $0.3c$, except possibly for SN\\,2018hom. However, for this event we only have one radio observation and hence contamination from the host galaxy cannot be excluded. We also note that SN\\,2020jqm lies in the region of the parameter space occupied by radio-loud CSM interacting SNe similar to PTF\\,11qcj. 
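The $L_p$--$\\beta_s$ relation above is straightforward to evaluate and invert numerically. The sketch below (our own helper functions, with illustrative fiducial inputs rather than values from our sample) fixes $p=3$, $\\eta=2$, $\\alpha=1$, matching the dotted lines in Figure \\ref{fig:radiospeed}:

```python
def lp_from_beta(beta_s, nu_p_ghz, t_p_day, p=3.0, eta=2.0, alpha=1.0):
    """Peak luminosity density (erg/s/Hz) for shock speed beta_s = v_s/c."""
    a = (2.0 * p + 13.0) / (p + 6.0)  # exponent (2p+13)/(p+6), = 19/9 for p=3
    return (1.2e27 * (beta_s / 3.4) ** a
            * (eta / (2.0 * alpha)) ** (-1.0 / (p + 6.0))
            * (nu_p_ghz / 5.0 * t_p_day) ** a)

def beta_from_lp(lp, nu_p_ghz, t_p_day, p=3.0, eta=2.0, alpha=1.0):
    """Invert lp_from_beta to estimate the shock speed from a radio peak."""
    a = (2.0 * p + 13.0) / (p + 6.0)
    x = lp / (1.2e27 * (eta / (2.0 * alpha)) ** (-1.0 / (p + 6.0)))
    return 3.4 * x ** (1.0 / a) / (nu_p_ghz / 5.0 * t_p_day)

# Round-trip with fiducial numbers: a 6 GHz peak at 30 d.
lp = lp_from_beta(0.2, 6.0, 30.0)
print(beta_from_lp(lp, 6.0, 30.0))  # recovers beta_s ~ 0.2
```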
\n\nThe magnetic field can be expressed as \\citep[see Equations (12) and (14) in][]{Chevalier1998}:\n\\begin{eqnarray}\n\\nonumber B_p \\approx 0.58\\rm\\,G \\left(\\frac{\\eta}{2\\alpha}\\right)^{4\/(2p+13)}\\left(\\frac{F_{p}}{\\rm Jy}\\right)^{-2\/(2p+13)}\\times\\\\\\left(\\frac{d_{L}}{\\rm Mpc}\\right)^{-4\/(2p+13)}\\left(\\frac{\\nu_p}{5\\,\\rm GHz}\\right).\n\\label{eq2}\n\\end{eqnarray} \nConsider a SN shock expanding in a circumstellar medium (CSM) of density:\n\\begin{equation}\n\\rho\\approx 5 \\times10^{11} \\,{\\rm g\\,cm}^{-1}A_{*}R^{-2}\n\\end{equation}\nwhere:\n\\begin{equation}\n A_{*}= \\frac{\\dot{M}\/(10^{-5}M_{\\odot}\/{\\rm yr})}{4\\pi {\\rm v}_w\/(10^3{\\rm km\/s})}.\n\\label{eq:rho}\n\\end{equation}\nAssuming that a fraction $\\epsilon_B$ of the energy density $\\rho{\\rm v}^2_s$ goes into magnetic fields:\n\\begin{equation}\n \\frac{B_p^2}{8\\pi} = \\epsilon_B \\rho {\\rm v_s}^2 = \\epsilon_B \\rho R_p^2 t_p^{-2},\n \\label{eq3}\n\\end{equation}\none can write:\n\\begin{eqnarray}\n\\nonumber \\left(\\frac{L_p}{\\rm erg\\,s^{-1}\\,Hz^{-1}}\\right) \\approx 1.2\\times10^{27} \\left(\\frac{\\eta}{2\\alpha}\\right)^{2}\\left(\\frac{\\nu_p}{5\\,\\rm GHz}\\frac{t_p}{1\\,\\rm d}\\right)^{(2p+13)\/2}\\\\\\times \\left(5\\times10^3\\epsilon_B A_*\\right)^{-(2p+13)\/4},~~~~~~\n\\end{eqnarray}\nwhere we have used Equations (\\ref{eq2}), (\\ref{eq:rho}), and (\\ref{eq3}). We plot in Figure \\ref{fig:radiospeed} with yellow-dashed lines the relationship above for various values of $\\dot{M}$ (and for $p=3$, $\\eta=2$, $\\alpha=1$, $\\epsilon_B =0.33$, $v_w=1000$\\,km\\,s$^{-1}$). As evident from this Figure, relativistic events such as SN\\,1998bw show a preference for smaller mass-loss rates. We note that while the above relationship depends strongly on the assumed values of $\\eta$, $\\epsilon_B$, and $v_w$, this trend for $\\dot{M}$ remains true regardless of the specific values of these (uncertain) parameters. 
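Since $R_p = {\\rm v}_s t_p$, combining the last three relations makes $R_p$ drop out, so the wind parameter $A_*$ (and hence $\\dot{M}$) follows from $B_p$ and $t_p$ alone. A sketch of this step, with a hypothetical helper name and illustrative inputs (not the values in Table \\ref{tab:radioproperties}):

```python
import math

def mdot_from_bp(bp_gauss, t_p_day, eps_b=0.33, v_w_kms=1000.0):
    """Wind mass-loss rate (Msun/yr) implied by the SSA peak magnetic field.

    Uses B_p^2 / (8 pi) = eps_B * rho * v_s^2 with rho = 5e11 * A_* / R^2
    (g/cm^3) and v_s = R_p / t_p, so that R_p cancels:
    A_* = B_p^2 t_p^2 / (8 pi eps_B * 5e11).
    """
    t_p = t_p_day * 86400.0  # days -> seconds
    a_star = bp_gauss**2 * t_p**2 / (8.0 * math.pi * eps_b * 5.0e11)
    # Invert A_* = (Mdot / 1e-5 Msun/yr) / (4 pi v_w / 1e3 km/s).
    return a_star * 4.0 * math.pi * (v_w_kms / 1.0e3) * 1.0e-5

# A sub-Gauss field peaking at ~20 d implies Mdot of order 1e-5 Msun/yr.
print(mdot_from_bp(0.5, 20.0))
```

Note the quadratic dependence on $B_p$: doubling the peak field quadruples the inferred mass-loss rate.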
We also note that the above analysis assumes mass-loss in the form of a steady wind. While this is generally considered to be the case for relativistic SNe Ic-BL, binary interaction or eruptive mass loss in core-collapse SNe can produce denser CSM with more complex profiles \\citep[e.g.][]{Montes1998,Soderberg2006,Salas2013,Corsi2014,Margutti2017,Balasubramanian2021,Maeda2021,Stroh2021}.\n\nFinally, the total energy coupled to the fastest (radio-emitting) ejecta can be expressed as \\citep[e.g.,][]{Soderberg2006}:\n\\begin{equation}\nE_r\\approx \\frac{4\\pi R_p^3}{\\eta}\\frac{B_p^2}{8\\pi\\epsilon_B}=\\frac{R_p^3}{\\eta}\\frac{B_p^2}{2\\epsilon_B}.\n\\label{eq4}\n\\end{equation}\n In Table \\ref{tab:radioproperties} we summarize the properties of the radio ejecta derived for the four SNe for which we detect a radio counterpart. These values can be compared with $\\dot{M}\\approx2.5\\times10^{-7}M_{\\odot}{\\rm yr}^{-1}$ and $E_r\\approx (1-10)\\times10^{49}$\\,erg estimated for SN\\,1998bw by \\citet{Li1999}, with $\\dot{M}\\approx2\\times10^{-6}M_{\\odot}{\\rm yr}^{-1}$ and $E_r\\approx 1.3\\times10^{49}$\\,erg estimated for SN\\,2009bb by \\citet{Soderberg2010}, and with $\\dot{M}\\approx(0.4-1)\\times10^{-5}M_{\\odot}{\\rm yr}^{-1}$ and $E_r\\approx (0.3-4)\\times10^{49}$\\,erg estimated for GRB\\,100316D by \\citet{Margutti2013}. \n\n\\begin{center}\n\\begin{table}\n\\caption{Properties of the radio ejecta of the SNe in our sample for which we detect a radio counterpart. We report the SN name, the estimated SN shock speed normalized to the speed of light ($\\beta_s$), the mass-loss rate of the pre-SN progenitor ($\\dot{M}$), the energy coupled to the fastest (radio-emitting) ejecta ($E_r$), and the ratio between the latter and the kinetic energy of the explosion ($E_k$, estimated from the optical light curve modeling). See Section \\ref{sec:radioanalysis} for discussion. 
\\label{tab:radioproperties}}\n\\begin{tabular}{lcccc}\n\\hline\n\\hline\nSN & $\\beta_s$ & $\\dot{M}$ & $E_r$ & $E_r\/E_k$ \\\\\n & & (M$_{\\odot}$\\,yr$^{-1}$) & (erg) & \\\\\n\\hline\n2018hom & 0.35 & $1.1\\times10^{-6}$ & $3.6\\times10^{47}$ & $<0.04$\\%\\\\\n2020jqm & 0.048 & $2.7\\times10^{-4}$ & $5.7\\times10^{48}$ & 0.1\\% \\\\\n2020tkx & 0.14 & $1.7\\times10^{-5}$ & $1.1\\times10^{48}$ & $<0.07$\\% \\\\\n2021ywf & 0.19 & $2.2\\times10^{-6}$ & $2.3\\times10^{47}$ & 0.02\\%\\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\\end{center}\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=16cm]{CorsiFig11_OffAxisRadioLight.png}\n \\caption{Approximate radio luminosity density for GRBs observed largely off-axis during the sub-relativistic phase (black solid, dotted, dashed, and dash-dotted lines) compared with the radio detections and upper-limits of the SNe Ic-BL in our sample. Most of our observations exclude fireballs with energies $E\\gtrsim 10^{50}$\\,erg expanding in ISM media with densities $\\gtrsim 1$\\,cm$^{-3}$. However, our observations become less constraining for smaller energy and ISM density values. For example, most of our radio data cannot exclude off-axis jets with energies $E\\sim 10^{49}$\\,erg and $n\\sim 0.1$\\,cm$^{-3}$. See Section \\ref{sec:off-axis-radio} for discussion.}\n \\label{fig:off-axis-radio}\n\\end{figure*}\n\n\\subsubsection{Off-axis GRB radio afterglow constraints}\n\\label{sec:off-axis-radio}\nWe finally consider what type of constraints our radio observations put on a scenario where the SNe Ic-BL in our sample could be accompanied by relativistic ejecta from \na largely (close to 90\\,deg) off-axis GRB afterglow that would become visible in the radio band when the relativistic fireball enters the sub-relativistic phase and approaches spherical symmetry.\nBecause our radio observations do not extend past 100-200 days since explosion, we can put only limited constraints on this scenario. 
Hence, hereafter we present some general order-of-magnitude considerations rather than a detailed event-by-event modeling. \n\nFollowing \\citet{Corsi2016}, we can approximately model the late-time radio emission from an off-axis GRB based on the results by \\citet{Livio2000}, \\citet{Waxman2004b}, \\citet{Zhang2009}, and \\citet{VanEerten2012}. For fireballs expanding in an interstellar medium (ISM) of constant\ndensity $n$ (in units of cm$^{-3}$), at timescales $t$ such that:\n\\begin{equation}\n t \\gtrsim (1+z) \\times t_{\\rm SNT}\/2\n\\end{equation}\nwhere the transition time to the spherical Sedov--Neumann--Taylor (SNT) blast wave, $t_{\\rm SNT}$, reads:\n\\begin{equation}\n t_{\\rm SNT}\\approx 92\\,{\\rm d}\\left(E_{51}\/n\\right)^{1\/3},\n\\end{equation}\nthe luminosity density can be approximated analytically via the following formula \\citep[see Equation (23) in][ where we neglect redshift corrections and assume $p=2$]{Zhang2009}:\n\\begin{eqnarray}\n\\nonumber L_{\\nu}({\\rm t}) \\approx 4\\pi d^{2}_{\\rm L} F_{\\nu}({\\rm t}) \\approx 2\\times10^{30}\\left(\\frac{\\epsilon_e}{0.1}\\right)\\left(\\frac{\\epsilon_B}{0.1}\\right)^{3\/4}n^{9\/20}\\\\\\times E^{13\/10}_{51}\\left(\\frac{\\nu}{\\rm 1\\,GHz}\\right)^{-1\/2}\\left(\\frac{t}{92\\,{\\rm d}}\\right)^{-9\/10}\\,{\\rm erg\\,s^{-1}\\,{\\rm Hz}^{-1}}.~~~~~~~\n\\end{eqnarray}\nIn the above Equations, $E_{51}$ is the beaming-corrected ejecta energy in units of $10^{51}$\\,erg. We note that here we assume again a constant density ISM in agreement with the majority of GRB afterglow observations \\citep[e.g.,][]{Schulze2011}. \n\nWe plot the above luminosity in Figure \\ref{fig:off-axis-radio} together with our radio observations and upper-limits, assuming $\\epsilon_e=0.1$, $\\epsilon_B=0.1$, and for representative values of low-luminosity GRB energies and typical values of long GRB ISM densities $n$. 
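The two scalings above can be checked numerically; the short sketch below (helper names ours) reproduces the fiducial normalizations of both expressions:

```python
def t_snt_days(e51, n):
    """Transition time (days) to the spherical SNT blast-wave phase."""
    return 92.0 * (e51 / n) ** (1.0 / 3.0)

def l_nu(t_day, e51, n, nu_ghz=1.0, eps_e=0.1, eps_b=0.1):
    """Approximate late-time luminosity density (erg/s/Hz) for p = 2."""
    return (2.0e30 * (eps_e / 0.1) * (eps_b / 0.1) ** 0.75 * n ** 0.45
            * e51 ** 1.3 * nu_ghz ** (-0.5) * (t_day / 92.0) ** (-0.9))

# E = 1e51 erg in n = 1 cm^-3: transition at 92 d, where L_nu(1 GHz) = 2e30.
print(t_snt_days(1.0, 1.0), l_nu(92.0, 1.0, 1.0))
```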
As evident from this Figure, our observations exclude fireballs with energies $E\\gtrsim 10^{50}$\\,erg expanding in ISM media with densities $\\gtrsim 1$\\,cm$^{-3}$. However, our observations become less constraining for smaller energy and ISM density values. \n\n\\section{Summary and Conclusion}\\label{sec:conclusion}\nWe have presented deep radio follow-up observations of 16 SNe Ic-BL that are part of the ZTF sample. \nOur campaign resulted in 4 radio counterpart detections and 12 deep radio upper-limits. For 9 of these 16 events we have also carried out X-ray observations with \\textit{Swift}\/XRT. Altogether, these results constrain the fraction of SN\\,1998bw-like explosions to $< 19\\%$ (3$\\sigma$ Gaussian equivalent), tightening previous constraints by a factor of $\\approx 2$. Moreover, our results exclude relativistic ejecta with radio luminosity densities between $\\approx 5\\times10^{27}$\\,erg\\,s$^{-1}$\\,Hz$^{-1}$ and $\\approx 10^{29}$\\,erg\\,s$^{-1}$\\,Hz$^{-1}$ at $t\\gtrsim 20$\\,d since explosion for $\\approx 60\\%$ of the events in our sample, indicating that SNe Ic-BL similar to low-luminosity GRB-SNe such as SN\\,1998bw, SN\\,2003lw, SN\\,2010bh, or to the relativistic SN\\,2009bb and iPTF17cw, are intrinsically rare. This result is in line with numerical simulations that suggest that a SN Ic-BL can be triggered even if a jet\nengine fails to produce a successful GRB jet.\n\nWe showed that our radio observations exclude an association of the SNe Ic-BL in our sample with largely off-axis GRB afterglows with energies $E\\gtrsim 10^{50}$\\,erg expanding in ISM media with densities $\\gtrsim 1$\\,cm$^{-3}$. 
On the other hand, our radio observations are less constraining for smaller energy and ISM density values, and cannot exclude off-axis jets with energies $E\\sim 10^{49}$\\,erg.\n\nWe noted that the main conclusion of our work is subject to the caveat that the parameter space of SN\\,2006aj-like explosions (with faint radio emission peaking only a few days after explosion) is left largely unconstrained by current systematic radio follow-up efforts like the one presented here. In other words, we cannot exclude that a larger fraction of SNe Ic-BL harbors GRB\\,060218\/SN\\,2006aj-like emission. In the future, obtaining fast and accurate spectral classification of SNe Ic-BL paired with deep radio follow-up observations executed within $5$\\,d since explosion would overcome this limitation. While high-cadence optical surveys can provide an alternative way to measure the rate of SNe Ic-BL that are similar to SN\\,2006aj via shock-cooling emission at early times, more optical detections are also needed to measure a robust rate. \n\nThe Legacy Survey of Space and Time on the Vera C. Rubin Observatory \\citep[LSST;][]{Rubin2019} promises to provide numerous discoveries of even the rarest type of explosive transients, such as the SNe Ic-BL discussed here. The challenge will be to recognize and classify these explosions promptly \\citep[e.g.,][]{Villar2019,Villar2020}, so that they can be followed up in the radio with current and next generation radio facilities. Indeed, Rubin, paired with the increased sensitivity of the next generation VLA \\citep[ngVLA;][]{ngVLA}, could provide a unique opportunity for building a large statistical sample of SNe Ic-BL with deep radio observations that may be used to guide theoretical modeling in a more systematic fashion, beyond what has been achievable over the last $\\approx 25$ years (i.e., since the discovery of GRB-SN\\,1998bw). 
In addition, the Square Kilometer Array (SKA) will enable discoveries of radio SNe and other transients in an untargeted and optically-unbiased way \\citep{Lien2011}. Hence, one can envision that the Rubin-LSST+ngVLA and SKA samples will, together, provide crucial information on\nmassive star evolution, as well as SNe Ic-BL physics and CSM properties. \n\nWe conclude by noting that understanding the evolution of single and stripped binary stars up to core collapse is of special interest in the new era of time-domain multi-messenger (gravitational-wave and neutrino) astronomy \\citep[see e.g., ][for recent reviews]{Murase2018,Scholber2012,Ernazar2020,Guepin2022}. Gravitational waves from nearby core-collapse SNe, in particular, represent an exciting prospect for expanding multi-messenger studies beyond the current realm of compact binary coalescences. While they may come into reach with the current LIGO \\citep{Aasi2015} and Virgo \\citep{Acernese2015} detectors, it is more likely that next generation gravitational-wave observatories, such as the Einstein Telescope \\citep{Maggiore2020} and the Cosmic Explorer \\citep{Evans2021}, will make it possible to paint the first detailed multi-messenger picture of a core-collapse explosion. The physics behind massive stars' evolution and deaths also impacts the estimated rates and mass distribution of compact object mergers \\citep[e.g.,][]{Schneider2021} which, in turn, are currently primary sources for LIGO and Virgo, and will be detected in much larger numbers by next generation gravitational-wave detectors. Hence, continued and coordinated efforts dedicated to understanding massive stars' deaths and the link between pre-SN progenitors and properties of SN explosions, using multiple messengers, undoubtedly represent an exciting path forward.\n\n\\begin{acknowledgements}\n\\small\nA.C. and A.B. acknowledge support from NASA \\textit{Swift} Guest investigator programs (Cycles 16 and 17 via grants \\#80NSSC20K1482 and \\#80NSSC22K0203). 
S.A. gratefully acknowledges support from the National Science Foundation GROWTH PIRE grant No. 1545949. S.Y. has been supported by the research project grant ``Understanding the Dynamic Universe'' funded by the Knut and Alice Wallenberg Foundation under Dnr KAW 2018.0067, and the G.R.E.A.T research environment, funded by {\\em Vetenskapsr\\aa det}, the Swedish Research Council, project number 2016-06012.\nBased on observations obtained with the Samuel Oschin Telescope 48-inch and the 60-inch Telescope at the Palomar\nObservatory as part of the Zwicky Transient Facility project. ZTF is supported by the National Science Foundation under Grants\nNo. AST-1440341 and No. AST-2034437, and a collaboration including Caltech, IPAC, the Weizmann Institute for Science, the Oskar Klein Center at\nStockholm University, the University of Maryland, Deutsches Elektronen-Synchrotron and Humboldt University, Los Alamos\nNational Laboratories, the TANGO\nConsortium of Taiwan, the University of Wisconsin at Milwaukee, Trinity College Dublin, Lawrence Berkeley\nNational Laboratories, Lawrence Livermore National\nLaboratories, and IN2P3, France. Operations are conducted by COO, IPAC, and UW. The SED Machine is based upon work supported by the National Science Foundation under Grant No. 1106171. 
The ZTF forced-photometry service was funded under the Heising-Simons Foundation grant \\#12540303 (PI: Graham).\nThe National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.\nThe Pan-STARRS1 Surveys (PS1) and the PS1 public science archive have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation Grant No. AST-1238877, the University of Maryland, Eotvos Lorand University (ELTE), the Los Alamos National Laboratory, and the Gordon and Betty Moore Foundation. Based in part on observations made with the Nordic Optical Telescope, owned in collaboration by the University of Turku and Aarhus University, and operated jointly by Aarhus University, the University of Turku and the University of Oslo, representing Denmark, Finland and Norway, the University of Iceland and Stockholm University at the Observatorio del Roque de los Muchachos, La Palma, Spain, of the Instituto de Astrofisica de Canarias.\n\\end{acknowledgements}\n\n\\bibliographystyle{aasjournal}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nMelanoma is the most dangerous type of skin cancer, which causes almost 60,000 deaths annually. 
In order to improve the efficiency, effectiveness, and accuracy of melanoma diagnosis, the International Skin Imaging Collaboration (ISIC) provides over 2,000 dermoscopic images of various skin problems for lesion segmentation, disease classification and other related research. \n\nLesion segmentation aims to extract the lesion boundaries from dermoscopic images to assist experts in diagnosis. In recent years, U-Net, FCN and other deep learning methods have been widely used for medical image segmentation. However, existing algorithms are limited in lesion segmentation because of the varied appearance of lesions, caused by the diversity of patients and collection environments. FCN, U-Net and other one-stage segmentation methods are sensitive to the size of the lesion: lesions that are too large or too small decrease the accuracy of these one-stage methods. A two-stage method can reduce the negative influence of the diverse lesion sizes. Mask R-CNN can be viewed as a two-stage method with outstanding performance on COCO. However, Mask R-CNN still has some drawbacks in lesion segmentation. Unlike the clear boundaries between objects in COCO data, the boundary between lesion and healthy skin is vague in this challenge. The vague boundary reduces the accuracy of the RPN part of Mask R-CNN, which may negatively affect the following segmentation part. Furthermore, the low resolution of the input image in the segmentation branch of Mask R-CNN also reduces the accuracy of segmentation.\n\nIn this report, we propose a method for lesion segmentation. Our method is a two-stage process including detection and segmentation. The detection part detects the location of the lesion and crops the lesion from images. Following the detection, the segmentation part segments the cropped image and predicts the region of the lesion. Furthermore, we also propose an optimised process for cropping images. 
Instead of cropping the image by the bounding box exactly, in training, we crop the image with a random expansion and contraction to increase the robustness of the neural networks. Finally, image augmentation and other ensemble methods are used in our method. Our method is based on deep convolutional networks and trained on the dataset provided by ISIC \\cite{Tschandl2018_HAM10000} \\cite{DBLP:journals\/corr\/abs-1710-05006}. \n\n\\section{Materials and Methods}\n\n\\begin{figure*}\n \\centering\n \\includegraphics[height=6cm,width=0.75\\textwidth]{process.png}\n \\caption{Process}\n \\label{fig:Process}\n\\end{figure*}\n\n\\subsection{Database and Metrics}\nFor lesion segmentation, ISIC 2018 provides 2594 images with corresponding ground truth for training. There are 100 images for validation and 1000 images for testing without ground truth. The number of testing images this year is 400 more than that in 2017. The sizes and aspect ratios of the images vary. The lesions have different appearances and appear on different parts of the body. There are three criteria for the ground truth: a fully-automated algorithm, a semi-automated flood-fill algorithm, and manual polygon tracing. The ground truth labelled under different criteria has different boundary shapes, which is a challenge in this task. We split the whole training set into two sets with ratio 10:1.\n\nThe evaluation metric of this challenge is the Jaccard index, shown in Equation \\ref{eq:jaccard}. In 2018, the organiser adds a penalty when the Jaccard index is below 0.65.\n\\begin{equation}\nJ(A,B) = \n\\begin{cases}\n\\frac{|A \\bigcap B|}{|A \\bigcup B|} & J(A,B) \\geq 0.65 \\\\\n0 & \\text{otherwise}\\\\\n\\end{cases}\n\\label{eq:jaccard}\n\\end{equation}\n\n\n\n\n\\subsection{Methods}\n\nWe design a two-stage process combining detection and segmentation. Figure \\ref{fig:Process} shows our process. Firstly, the detection part detects the location of the lesion with the highest probability. 
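As a concrete illustration, the thresholded Jaccard of Equation \\ref{eq:jaccard} can be computed directly from two binary masks. The sketch below is our own minimal NumPy version (the function name and the empty-union convention are assumptions, not part of the challenge code):

```python
import numpy as np

def thresholded_jaccard(pred, truth, threshold=0.65):
    """ISIC 2018 metric: the Jaccard index, set to zero when it
    falls below `threshold`."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # convention (ours): two empty masks agree perfectly
    j = np.logical_and(pred, truth).sum() / union
    return j if j >= threshold else 0.0
```

Note how the penalty makes the metric discontinuous: a prediction scoring 0.64 receives the same credit as an empty mask.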
According to the bounding box, the lesion is cropped from the original image and the cropped image is normalised to 512$\\times$512, which is the input size of the segmentation part. A fine segmentation boundary of the cropped image is provided by the segmentation part.\n\n\\subsubsection{Detection Part}\nIn the detection process, we use the detection part of Mask R-CNN \\cite{DBLP:journals\/corr\/HeGDG17}. We also use the segmentation branch of Mask R-CNN to supervise the training of the neural networks.\n\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=0.4\\textwidth]{detection.png}\n \\caption{Detection}\n \\label{fig:Detection}\n\\end{figure}\n\n\\subsubsection{Segmentation Part}\nIn the segmentation part, we design an encoder-decoder network architecture inspired by DeepLab \\cite{DBLP:journals\/corr\/ChenPK0Y16}, PSPNet \\cite{DBLP:journals\/corr\/ZhaoSQWJ16}, DenseASPP \\cite{Yang_2018_CVPR} and Context Contrasted Local \\cite{Ding_2018_CVPR}. Our architecture is shown in Figure \\ref{fig:Segmentation}. The features are extracted by an extended ResNet 101 with three cascading blocks. After ResNet, a modified ASPP is used to compose features at various scales. We also use a skip connection to transfer detailed information.\n\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=0.5\\textwidth]{segmentation.png}\n \\caption{Segmentation}\n \\label{fig:Segmentation}\n\\end{figure}\n\nFigure \\ref{fig:ASPP} shows the structure of the modified ASPP block. A 1x1 convolutional layer is used to compose the features extracted by ResNet and reduce the number of feature maps. After that, we use a modified ASPP block to extract information at various scales. The modified ASPP has three parts: dense ASPP, standard convolution layers and pooling layers. Dense ASPP is proposed by \\cite{Yang_2018_CVPR}, which reduces the influence of margins in ASPP. 
Considering the vague boundary and low-contrast appearance of the lesion, we add standard convolution layers to enhance the ability of the neural networks to distinguish the boundary. The aim of the pooling layers is to let the networks consider the surrounding area of the low-contrast lesion. The modified ASPP includes three dilated convolutions with rates 3, 6, 12 respectively, three standard convolution layers with sizes 3, 5, 7 respectively and four pooling layers with sizes 5, 9, 13, 17 respectively.\n\n\\begin{figure}[H]\n \\centering\n \\includegraphics[height=13cm,width=0.4\\textwidth]{ASPP.png}\n \\caption{Modified ASPP}\n \\label{fig:ASPP}\n\\end{figure}\n\n\n\n\\subsection{Pre-processing}\nInstead of only using RGB channels, we combine the SV channels of the Hue-Saturation-Value colour space and the Lab channels of the CIELAB space with the RGB channels. These 8 channels are the input of the segmentation part. Figure \\ref{fig:hsvlab} shows the different channels of the HSV and CIELAB colour spaces.\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=0.4\\textwidth]{hsvlab.png}\n \\caption{Single channel in HSV and CIELAB colour space}\n \\label{fig:hsvlab}\n\\end{figure}\n\n\\subsection{Post-processing}\nEnsembling is used as post-processing to increase the performance of the segmentation. The input image of the segmentation part is rotated by 90 and 180 degrees and flipped to generate three other images. Each image has a result predicted by the segmentation part. The results of the rotated and flipped images are rotated and flipped back to the original orientation. The final mask is the average of the four results of these images.\n\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=0.5\\textwidth]{ensemable.png}\n \\caption{Ensemble}\n \\label{fig:Ensamble}\n\\end{figure}\n\n\\subsection{Training}\nImage augmentation is used in the training of both detection and segmentation. Rotation, colour jitter, flip, crop and shear are applied to each image. 
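The rotate-and-flip ensemble described above can be sketched in a few lines of NumPy; here `predict` stands in for the trained segmentation network, and the function name is our own (a hedged illustration, not the report's code):

```python
import numpy as np

def tta_predict(image, predict):
    """Average the predictions over the 90- and 180-degree rotations
    and the left-right flip, undoing each transform before averaging.
    `predict` maps an HxWxC image to an HxW probability mask."""
    masks = [predict(image)]
    for k in (1, 2):  # rotations by 90 and 180 degrees
        rotated = np.rot90(image, k, axes=(0, 1))
        masks.append(np.rot90(predict(rotated), -k, axes=(0, 1)))
    flipped = image[:, ::-1]
    masks.append(predict(flipped)[:, ::-1])
    return np.mean(masks, axis=0)
```

Because every transform is inverted before averaging, a perfectly transform-equivariant predictor is left unchanged by this ensembling; for a real network the four views differ slightly, and the average smooths the mask.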
Each channel of the images is scaled to [0, 1]. In the segmentation part, the size of the input images is set to 512$\\times$512. Some examples are shown in Figure \\ref{fig:aug1}. \n\n\\begin{figure}[H]\n\\centering\n\\begin{subfigure}{.25\\textwidth}\n \\centering\n \\includegraphics[width=0.4\\linewidth]{fliplr.png}\n \\caption{Flip Left to Right}\n \\label{fig:aug1-1}\n\\end{subfigure}%\n\\begin{subfigure}{.25\\textwidth}\n \\centering\n \\includegraphics[width=0.4\\linewidth]{rotate45.png}\n \\caption{Rotate 45$^\\circ$}\n \\label{fig:aug1-2}\n\\end{subfigure}\n\\caption{Examples for image augmentation}\n\\label{fig:aug1}\n\\end{figure}\n\nWe use the Adam optimiser and set the learning rate to 0.001. The learning rate is multiplied by 0.92 after each epoch. The batch size is 8. We stop the training early when the network starts overfitting. We use the dice loss shown in Equation \\ref{eq:diceloss}, where $p_{i,j}$ is the prediction at pixel $(i,j)$ and $g_{i,j}$ is the ground truth at pixel $(i,j)$. \n\n\\begin{equation}\n\\label{eq:diceloss}\nL = -\\frac{\\sum_{i,j}(p_{i,j} g_{i,j})}{\\sum_{i,j}p_{i,j} + \\sum_{i,j}g_{i,j}-\\sum_{i,j}(p_{i,j} g_{i,j})}\n\\end{equation}\n\nThe input image of the segmentation part is cropped randomly in a range near the bounding box predicted by the detection part, in order to improve the diversity of the input images and provide more information about the background context. 
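The loss of Equation \\ref{eq:diceloss} (a soft-Jaccard form) translates directly into code; a minimal NumPy sketch of ours, with a small epsilon added to avoid division by zero (an assumption not stated in the text):

```python
import numpy as np

def dice_loss(pred, truth, eps=1e-7):
    """Loss of Equation (eq:diceloss): minus the soft intersection
    over the soft union, so a perfect prediction gives L = -1."""
    inter = np.sum(pred * truth)
    return -inter / (np.sum(pred) + np.sum(truth) - inter + eps)
```

The loss is minimised at -1 for a perfect overlap and rises to 0 for disjoint masks, so gradient descent drives the predicted probabilities toward the ground truth.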
In training, the input image of the segmentation part is cropped from 81\\% to 121\\% of the bounding box randomly, as shown in Figure \\ref{fig:crop}.\n\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=0.3\\textwidth]{crop.png}\n \\caption{Crop image}\n \\label{fig:crop}\n\\end{figure}\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=0.9\\textwidth]{results.png}\n \\caption{Predicted masks of different segmentation methods}\n \\label{fig:result}\n\\end{figure*}\n\n\\subsection{Implementation}\nThe detection part of our method is implemented using PyTorch 0.4 on Ubuntu 14.04. The framework is from \\url{https:\/\/github.com\/roytseng-tw\/Detectron.pytorch}. The segmentation part is implemented using PyTorch 0.3.1 on Ubuntu 14.04. The neural networks are trained on two Nvidia 1080 Ti GPUs with 60 GB of RAM.\n\n\\section{Results}\n\\begin{table}[H]\n\\centering\n\\begin{tabular}{llll}\n\\hline\nMethod & Jaccard & Jaccard(>0.65)\\\\\n\\hline\nMask R-CNN & 0.825 & 0.787\\\\\nOne-stage Segmentation & 0.820 & 0.783\\\\\nTwo-stage Segmentation & 0.846 & 0.816\\\\\n\\hline\n\\end{tabular}\n\\label{table:results}\n\\caption{Evaluation metrics of different segmentation methods}\n\\end{table}\n\n\nThe evaluation metrics on 257 images for our two-stage segmentation method, Mask R-CNN and our one-stage segmentation method are shown in Table 1. The thresholded Jaccard of our two-stage method on the official testing set is 0.802. Figure \\ref{fig:result} shows the outputs of the different segmentation methods. 
Compared with the other methods, the results of our two-stage method show better localisation and smoother edges.\n\n\n\n\n\\bibliographystyle{unsrt}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:I}\n\nNeutron stars, which are produced through core-collapse supernovae, are one of the most suitable environments for probing physics under extreme conditions, which are quite difficult to realize on the Earth. The density inside the star significantly exceeds the standard nuclear density, while the gravitational and magnetic fields become much stronger than those observed in our solar system. So, by observing the neutron star itself and\/or the phenomena associated with neutron stars, inversely one would be able to extract the information on the extreme conditions. For example, the discovery of the $2M_\\odot$ neutron stars excludes some of the soft equations of state (EOSs) \\cite{D10, A13, C20}. Namely, if the maximum mass of the neutron star constructed with an EOS does not reach the observed mass, such an EOS is not suitable for describing neutron stars. The light bending due to the strong gravitational field, which is one of the relativistic effects, may also tell us the neutron star properties. That is, owing to the strong gravitational field induced by the neutron star, the light curve from the neutron star would be modulated. Thus, by carefully observing the pulsar light curve, one could mainly constrain the stellar compactness, i.e., the ratio of the stellar mass to the radius (e.g., \\cite{PFC83, LL95, PG03, PO14, SM18, Sotani20a}). In practice, the masses and radii of the neutron stars PSR J0030+0451 \\cite{Riley19, Miller19} and PSR J0740+6620 \\cite{Riley21, Miller21} have been successfully constrained through observations with the Neutron star Interior Composition Explorer (NICER) on the International Space Station. 
The current uncertainty in the mass and radius constraints is still large, but this type of constraint should help us identify the true EOS once the uncertainty is reduced through future observations. \n\n\nIn addition to the direct observations of the stellar mass and radius, the oscillation frequency of the neutron star is another important observable. Since the oscillation frequency strongly depends on the interior properties of the object, as an inverse problem, one can extract the stellar properties by observing the frequency. This technique is known as asteroseismology, which is similar to seismology on the Earth and helioseismology on the Sun. In fact, by identifying the quasi-periodic oscillations observed in the afterglow following the giant flares with the neutron star crustal oscillations, one could constrain the crust properties (e.g., Refs. \\cite{GNHL2011, SNIO2012, SIO2016}). Similarly, one may know the neutron star mass, radius, and EOS by observing the gravitational wave frequencies from neutron stars and by identifying them with specific oscillation modes (e.g., Refs. \\cite{AK1996,AK1998,STM2001,SH2003,SYMT2011,PA2012,DGKK2013,Sotani20b,Sotani20c,Sotani21,SD22}). Furthermore, this technique has recently been adopted for understanding the gravitational wave signals appearing in numerical simulations of core-collapse supernovae (e.g., Refs. \\cite{FMP2003,FKAO2015,ST2016,SKTK2017,MRBV2018,SKTK2019,TCPOF19,SS2019,ST2020,STT2021}).\n\n\nIn general, the gravitational wave frequencies from neutron stars are complex numbers, where the real and imaginary parts respectively correspond to an oscillation frequency and a damping rate, and the corresponding eigenmodes are called quasi-normal modes. So, to determine the quasi-normal modes of neutron stars, one has to somehow solve the eigenvalue problem with respect to the eigenvalue in a two-dimensional parameter space with real and imaginary parts. 
Since solving this problem can be cumbersome, one sometimes adopts an approximation to estimate the frequency of gravitational waves. The simplest approximation is the relativistic Cowling approximation, where the metric perturbations are neglected during the fluid oscillations, i.e., the frequencies of the fluid oscillations are determined with the metric fixed. One can qualitatively discuss the behavior of the frequencies with the Cowling approximation, even though the accuracy of the determined frequencies is within $\\sim 20\\%$ \\cite{ST2020, YK97}. Another approximation adopted so far is the zero-damping approximation, where one takes into account the metric perturbations as well as the fluid perturbations, but the imaginary part of the eigenvalue is assumed to be zero. This is because the damping rate, i.e., the imaginary part of the eigenvalue, for the gravitational waves induced by the fluid oscillations is generally much smaller than the oscillation frequency, i.e., the real part of the eigenvalue. This approximation is considered to estimate the frequency of the gravitational waves well, but how well it works has not been discussed. So, in this study, we first discuss how well the zero-damping approximation works by comparing the frequencies determined with the approximation to those determined through the proper eigenvalue problem. In any case, with either the Cowling approximation or the zero-damping approximation, one can discuss only the frequency of the gravitational waves induced by the fluid oscillations, such as the fundamental, pressure, and gravity modes, but one cannot discuss the gravitational waves associated with the oscillations of the spacetime itself, i.e., the $w$-modes. 
\n\n\nIn order to estimate the $w_1$-mode frequency, in this study, we find the empirical relation for the ratio of the damping rate of the $w_1$-mode gravitational wave, i.e., the imaginary part of the quasi-normal mode, to its frequency, i.e., the corresponding real part, as a function of the stellar compactness almost independently of the adopted EOSs. With this empirical relation, we newly propose a one-dimensional approximation for estimating the $w_1$-mode frequency and show how well this approximation works. Unless otherwise mentioned, we adopt geometric units in the following, $c=G=1$, where $c$ and $G$ denote the speed of light and the gravitational constant, respectively, and the metric signature is $(-,+,+,+)$.\n\n\n\\section{Neutron star models}\n\\label{sec:EOS}\n\nIn this study, we simply consider the static, spherically symmetric stars, where the metric describing the system is given by \n\\begin{equation} \n ds^2 = -e^{2\\Phi}dt^2 + e^{2\\Lambda}dr^2 + r^2\\left(d\\theta^2 + \\sin^2\\theta d\\phi^2\\right). \\label{eq:metric}\n\\end{equation}\nThe metric functions $\\Phi$ and $\\Lambda$ in Eq. (\\ref{eq:metric}) are functions only of $r$, while $e^{2\\Lambda}$ is directly related to the mass function, $m(r)$, as $e^{-2\\Lambda}=1-2m\/r$. The stellar structure is determined by integrating the Tolman-Oppenheimer-Volkoff equation with an appropriate EOS for neutron star matter. In this study, we adopt the same EOSs as in Refs. \\cite{Sotani20c, Sotani21}, which are listed in Table \\ref{tab:EOS} together with the EOS parameters and maximum mass of the neutron star. Here, $K_0$ and $L$ are the incompressibility for symmetric nuclear matter and the density dependence of the nuclear symmetry energy. We note that any EOSs can be characterized by these nuclear saturation parameters, which are constrained via terrestrial nuclear experiments, such as $K_0= 230\\pm 40$ MeV \\cite{KM13} and $L\\simeq 58.9\\pm 16$ MeV \\cite{Li19}. 
Comparing these fiducial values of $K_0$ and $L$ to those for the adopted EOSs, some of the EOSs listed in Table \\ref{tab:EOS} seem to be excluded, but we consider even such EOSs in this study to examine the EOS dependence in a wide parameter space. Additionally, $\\eta$ in Table \\ref{tab:EOS} is the specific combination of $K_0$ and $L$ given by $\\eta\\equiv \\left(K_0 L^2\\right)^{1\/3}$ \\cite{SIOO14}, which is a suitable parameter not only for expressing the properties of low-mass neutron stars but also for discussing the maximum mass of neutron stars \\cite{SSB16,Sotani17,SK17}. The mass-radius relations for the neutron star models constructed with the EOSs listed in Table \\ref{tab:EOS} are shown in Fig. \\ref{fig:MR}, where the filled and open marks respectively correspond to the EOSs constructed within the relativistic framework and with the Skyrme-type effective interaction, while the double-square is the EOS constructed with a variational method. As in Ref. \\cite{Sotani20c}, we consider the stellar models denoted with marks in this figure. 
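Each point on the mass-radius curves of Fig. \\ref{fig:MR} comes from integrating the Tolman-Oppenheimer-Volkoff equation. As a hedged illustration (not the production code of this study), the sketch below integrates the TOV equations in units $G=c=1$ with a simple polytrope $p=K\\epsilon^\\Gamma$ as a stand-in EOS; the parameter values are illustrative assumptions rather than any of the tabulated EOSs.

```python
import numpy as np

def tov_solve(eps_c, K=100.0, gamma=2.0, dr=1e-3):
    """Euler-integrate the TOV equations in geometric units G = c = 1,
        dm/dr = 4*pi*r**2*eps,
        dp/dr = -(eps + p)*(m + 4*pi*r**3*p) / (r*(r - 2*m)),
    for a toy polytrope p = K*eps**gamma (illustrative only).
    Returns the surface radius R and gravitational mass M, defined
    by the point where the pressure drops to zero."""
    p = K * eps_c**gamma
    m, r = 0.0, dr
    while p > 0.0:
        eps = (p / K)**(1.0 / gamma)
        dmdr = 4.0 * np.pi * r**2 * eps
        dpdr = -(eps + p) * (m + 4.0*np.pi*r**3*p) / (r * (r - 2.0*m))
        m += dmdr * dr
        p += dpdr * dr
        r += dr
    return r, m
```

With these toy parameters and a central density of order $10^{-3}$ (in units where $M_\\odot=1$), the resulting compactness $M\/R$ stays well below the black-hole limit; realistic models require the tabulated EOSs and a finer integrator.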
\n\n\n\\begin{table}\n\\caption{EOS parameters adopted in this study, $K_0$, $L$, and $\\eta$, and the maximum mass, $M_{\\rm max}$, of the neutron star constructed with each EOS.} \n\\label{tab:EOS}\n\\begin{center}\n\\begin{tabular}{ccccc}\n\\hline\\hline\nEOS & $K_0$ (MeV) & $L$ (MeV) & $\\eta$ (MeV) & $M_{\\rm max}\/M_\\odot$ \\\\\n\\hline\nDD2 & 243 & 55.0 & 90.2 & 2.41 \\\\ \nMiyatsu & 274 & 77.1 & 118 & 1.95 \\\\\nShen & 281 & 111 & 151 & 2.17 \\\\ \nFPS & 261 & 34.9 & 68.2 & 1.80 \\\\ \nSKa & 263 & 74.6 & 114 & 2.22 \\\\ \nSLy4 & 230 & 45.9 & 78.5 & 2.05 \\\\ \nSLy9 & 230 & 54.9 & 88.4 & 2.16 \\\\ \nTogashi & 245 & 38.7 & 71.6 & 2.21 \\\\ \n\\hline \\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\n\\begin{figure}[tbp]\n\\centering\n\\includegraphics[scale=0.5]{MR1}\n\\vspace{0.5cm}\n\\caption{For the neutron star models constructed with various EOSs, the mass is shown as a function of the radius. The figure is taken from Ref. \\cite{Sotani20c}.}\n\\label{fig:MR}\n\\end{figure}\n\n\n\n\n\\section{Quasi-normal modes}\n\\label{sec:QNMs}\n\nIn order to determine the quasi-normal modes of gravitational waves from compact objects, one has to solve the eigenvalue problem. The perturbation equations are derived from the linearized Einstein equation by adding the metric and fluid perturbations on the background stellar models. By imposing the appropriate boundary conditions at the stellar center, surface, and spatial infinity, the problem to solve becomes the eigenvalue problem with respect to the eigenvalue, $\\omega$. As a practical matter, how to numerically deal with the boundary condition at the spatial infinity may become a problem. In this study, we especially adopt the continued fraction method, the so-called Leaver's method \\cite{Leaver85,Leaver90}. That is, the perturbation variable outside the star is expressed as a power series around the stellar surface, which also satisfies the boundary condition at infinity. 
By substituting this expansion into the Regge-Wheeler equation, one can get a three-term recurrence relation including $\\omega$, which is rewritten in the form of a continued fraction. So, one has to find the eigenvalue, $\\omega$, which satisfies the continued fraction. Here, we symbolically express this condition as $f(\\omega)=0$. The resultant $\\omega$ is generally a complex value, where the real and imaginary parts correspond to the oscillation frequency, Re($\\omega$)\/$2\\pi$, and the damping rate of the corresponding gravitational waves. The concrete perturbation equations, boundary conditions, and functional form of $f(\\omega)$ are shown in Refs. \\cite{STM2001,ST2020}. \n\nIn practice, since $f(\\omega)$ is also a complex value, we try to find the value of $\\omega$ numerically at which the absolute value of $f(\\omega)$ becomes a local minimum. In this study, we especially focus on the fundamental ($f$-), 1st pressure ($p_1$-), and 1st spacetime ($w_1$-) modes for the various neutron star models constructed with the EOSs listed in Table \\ref{tab:EOS}. We note that the $f$- and $p_1$-modes are the quasi-normal modes induced by the fluid oscillations, while the $w_1$-mode is the quasi-normal mode associated with the oscillations of spacetime itself. \n\n\n\\begin{figure}[tbp]\n\\centering\n\\includegraphics[scale=0.5]{ratio-ffDf}\n\\vspace{0.5cm}\n\\caption{In the top panel, the ratio of Im($\\omega$) to Re($\\omega$) for the $f$-mode is shown as a function of the stellar compactness for various neutron star models, where the solid line denotes the fitting formula given by Eq. (\\ref{eq:fitting_ff}). In the bottom panel, the relative deviation of the value of Im($\\omega$)\/Re($\\omega$) estimated with the empirical formula from that calculated via the eigenvalue problem is shown.}\n\\label{fig:ratio-ff}\n\\end{figure}\n\n\\begin{figure}[tbp]\n\\centering\n\\includegraphics[scale=0.5]{ratio-p1}\n\\vspace{0.5cm}\n\\caption{The ratio of Im($\\omega$) to Re($\\omega$) for the $p_1$-mode is shown as a function of the stellar compactness for various neutron star models.}\n\\label{fig:ratio-p1}\n\\end{figure}\n\nWith the resultant $f$-mode frequency, we show the ratio of Im$(\\omega)$ to Re$(\\omega)$ in the top panel of Fig. \\ref{fig:ratio-ff}. From this figure, one can observe that the values of Im$(\\omega)$\/Re$(\\omega)$ are well expressed as a function of the stellar compactness, $M\/R$, almost independently of the EOS. In fact, we can derive the empirical relation for Im$(\\omega)$\/Re$(\\omega) \\gtrsim 10^{-5}$, i.e., except for the quite low-mass neutron stars, given by \n\\begin{equation}\n \\frac{{\\rm Im}(\\omega_f)}{{\\rm Re}(\\omega_f)} = \\left[0.13193 -4.4754\\left(\\frac{M}{R}\\right)\n +290.9\\left(\\frac{M}{R}\\right)^2 -756.14 \\left(\\frac{M}{R}\\right)^3\\right]\\times10^{-4},\\label{eq:fitting_ff}\n\\end{equation}\nwith which the expected values are shown with the thick solid line in the top panel. In the bottom panel of Fig. \\ref{fig:ratio-ff}, we also show the relative deviation calculated with\n\\begin{equation}\n \\Delta = \\frac{{\\rm abs}[{\\cal R}-{\\cal R}_{\\rm fit}]}{{\\cal R}}, \\label{eq:Delta}\n\\end{equation}\nwhere ${\\cal R}$ and ${\\cal R}_{\\rm fit}$ denote the ratio of Im$(\\omega)$ to Re$(\\omega)$ determined through the eigenvalue problem and that estimated with Eq. (\\ref{eq:fitting_ff}) for each stellar model, respectively. From this figure, we find that the values of Im$(\\omega)$\/Re$(\\omega)$ can usually be estimated within $\\sim10\\%$ accuracy by using the empirical relation. In a similar way, we show Im$(\\omega)$\/Re$(\\omega)$ for the $p_1$-modes in Fig. \\ref{fig:ratio-p1}. 
But, for the case of the $p_1$-mode we cannot derive the empirical relation as a function of $M\/R$, unlike the case of the $f$-mode. \n\n\n\n\\begin{figure}[tbp]\n\\centering\n\\includegraphics[scale=0.5]{ReIm-MR1}\n\\vspace{0.5cm}\n\\caption{The ratio of Im$(\\omega)$ to Re$(\\omega)$ of the $w_1$-mode for various EOSs is shown as a function of the corresponding stellar compactness (top panel), where the solid line denotes the fitting line given by Eq. (\\ref{eq:fitting_w1}). The bottom panel denotes the relative deviation of the estimation with the fitting formula from the values of Im$(\\omega)$\/Re$(\\omega)$ calculated via the eigenvalue problem.}\n\\label{fig:ratio}\n\\end{figure}\n\nOn the other hand, we also find that one can express the value of Im($\\omega$)\/Re($\\omega$) for the $w_1$-mode as a function of $M\/R$ almost independently of the adopted EOS. In the top panel of Fig. \\ref{fig:ratio}, we show the ratio of Im$(\\omega)$ to Re$(\\omega)$ for the $w_1$-mode determined through the eigenvalue problem for each stellar model as a function of the corresponding stellar compactness, where the solid line denotes the fitting formula given by\n\\begin{equation}\n \\frac{{\\rm Im}(\\omega_{w_1})}{{\\rm Re}(\\omega_{w_1})} = 1.0659 -4.1598\\left(\\frac{M}{R}\\right)\n +16.4565\\left(\\frac{M}{R}\\right)^2 -39.5369\\left(\\frac{M}{R}\\right)^3. \\label{eq:fitting_w1}\n\\end{equation}\nIn the bottom panel, we show the relative deviation calculated with Eq. (\\ref{eq:Delta}) for the $w_1$-mode, where ${\\cal R}_{\\rm fit}$ is estimated with Eq. (\\ref{eq:fitting_w1}). Considering the fact that $M\/R=0.172$ (0.207) for the canonical neutron star model with $M=1.4M_\\odot$ and $R=12$ km (10 km), one can observe that the fitting formula given by Eq. (\\ref{eq:fitting_w1}) works better for the low-mass neutron star models. 
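The two empirical relations, Eqs. (\\ref{eq:fitting_ff}) and (\\ref{eq:fitting_w1}), are simple cubics in the compactness $x=M\/R$ and are easy to evaluate; a small sketch (the function names are ours):

```python
def ratio_f(x):
    """Im(omega)/Re(omega) of the f-mode, Eq. (eq:fitting_ff), x = M/R."""
    return (0.13193 - 4.4754*x + 290.9*x**2 - 756.14*x**3) * 1e-4

def ratio_w1(x):
    """Im(omega)/Re(omega) of the w1-mode, Eq. (eq:fitting_w1), x = M/R."""
    return 1.0659 - 4.1598*x + 16.4565*x**2 - 39.5369*x**3

def rel_dev(calc, fit):
    """Relative deviation Delta of Eq. (eq:Delta)."""
    return abs(calc - fit) / calc
```

For the canonical compactness $M\/R=0.172$ ($1.4M_\\odot$, 12 km), ratio_f is of order $10^{-4}$ while ratio_w1 is of order unity, which is why the zero-damping approximation is adequate for the fluid modes but not for the $w_1$-mode.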
\n\n\n\n\n\\section{One-dimensional approximation}\n\\label{sec:Approximation0}\n\nIn order to determine the quasi-normal modes, one has to somehow search the solution of $\\omega$ in two-dimensional parameter space with the real and imaginary parts, and this procedure may be trouble. One may sometimes adopt a suitable approximation to get out of this trouble, even if one can estimate only the frequency of quasi-normal modes. In this study, we especially consider the approximation, where the eigenvalue belongs to one-dimensional parameter space, depending only on the real part of the eigenvalue. We refer to this approximation as the one-dimensional approximation in this study. For example, since the imaginary part of the quasi-normal modes induced by the fluid oscillations is much smaller than the real part of them, as shown in Figs. \\ref{fig:ratio-ff} and \\ref{fig:ratio-p1}, one may assume that Im$(\\omega)=0$. This approximation, referred to as zero-damping approximation, is a special case of one-dimensional approximation. In fact, this approximation has been adopted in some previous studies, but it has not been discussed how well this approximation works. So, in the next subsections, we will see the accuracy of the zero-damping approximation for the $f$- and $p_1$-mode frequencies, and then we also propose the one-dimensional approximation for estimating the $w_1$-mode frequency.\n\n\n\\subsection{Zero-damping approximation}\n\\label{sec:Approximation1}\n\nThe zero-damping approximation, neglecting the imaginary part of the eigenvalue, is the simplest one-dimensional approximation, i.e., the eigenvalue, $\\omega$, is assumed to be \n\\begin{equation}\n \\omega_{{\\rm 1D},f} = \\omega_{r}, \\label{eq:omega_1D0}\n\\end{equation}\nwhere $\\omega_r$ is some real number and $\\omega_{{\\rm 1D},f}$ denotes the eigenvalue with the approximation for the gravitational wave induced by the fluid oscillations. 
With the zero-damping approximation, one can estimate the frequency, with which the absolute value of $f(\\omega_{{\\rm 1D},f})$ becomes the local minimum. Once the value of $\\omega_r$ would be determined, the frequency of a gravitational wave is given by $\\omega_r\/2\\pi$. As an example, we show the absolute value of $f(\\omega_{{\\rm 1D},f})$ as a function of the frequency for the neutron star model with $1.46M_\\odot$ constructed with SLy4 EOS in Fig. \\ref{fig:SLy4-13}, where the vertical dashed lines denote the frequencies of the $f$- and $p_1$-modes determined through the proper eigenvalue problem without the approximation and the inserted panel is an enlarged drawing in the vicinity of the $p_1$-mode frequency. \n\n\n\\begin{figure}[tbp]\n\\centering\n\\includegraphics[scale=0.5]{SLy4-13fp1}\n\\vspace{0.5cm}\n\\caption\nFor the neutron star model with $1.46M_\\odot$ constructed with SLy4 EOS, we show the absolute value of $f(\\omega_{{\\rm 1D},f})$ as a function of frequency. The vertical dashed lines denote the Re$(\\omega)\/2\\pi$ for the $f$- and $p_1$-modes determined through the proper eigenvalue problem without the approximation. The panel inserted in the figure is an enlarged drawing in the vicinity of the $p_1$-mode frequency. \n\n\\label{fig:SLy4-13}\n\\end{figure}\n\n\n\\begin{figure}[tbp]\n\\centering\n\\includegraphics[scale=0.5]{Dfp1}\n\\vspace{0.5cm}\n\\caption\nThe relative deviation of the value of $\\omega_r$ determined with the zero-damping approximation from the value of Re$(\\omega)$ determined through the proper eigenvalue problem without the approximation, which is calculated with Eq. (\\ref{eq:DRef}), is shown as a function of $M\/R$ for various neutron star models. The top and bottom panels correspond to the results for the $f$- and $p_1$-modes, respectively. 
\n\n\\label{fig:Dfp1}\n\\end{figure}\n\nIn order to check the accuracy of the $f$- and $p_1$-mode frequencies obtained with the zero-damping approximation, we estimate their relative deviation from the corresponding frequencies determined through the proper eigenvalue problem, calculated by\n\\begin{equation}\n \\Delta {\\rm Re}(\\omega_i) = \\frac{{\\rm abs}[{\\rm Re}(\\omega_i)-\\omega_{r,i}]}{{\\rm Re}(\\omega_i)}, \n \\label{eq:DRef}\n\\end{equation}\nwhere ${\\rm Re}(\\omega_i)$ and $\\omega_{r,i}$ respectively denote the real part of $\\omega$ determined through the proper eigenvalue problem without the approximation and the value of $\\omega_r$ determined with the zero-damping approximation for the $f$- ($i=f$) and $p_1$-modes ($i=p_1$). For various neutron star models, we show the values of $\\Delta {\\rm Re}(\\omega_i)$ for the $f$-mode ($p_1$-mode) in the top (bottom) panel of Fig. \\ref{fig:Dfp1}. From this figure, one can observe that the zero-damping approximation works remarkably well.\n\n\n\n\n\\subsection{One-dimensional approximation for the $w_1$-mode}\n\\label{sec:Approximation2}\n\nUnlike the gravitational waves induced by fluid oscillations, such as the $f$- and $p_1$-modes, the spacetime modes ($w$-modes) generally have an imaginary part comparable to the real part, as shown in Fig. \\ref{fig:ratio}. Hence, one cannot estimate the $w_1$-mode frequency with the zero-damping approximation. However, owing to the empirical relation for Im($\\omega$)\/Re($\\omega$) shown in Fig. 
\\ref{fig:ratio}, we propose a one-dimensional approximation for the $w_1$-mode, i.e., the eigenvalue with the one-dimensional approximation is assumed to be\n\\begin{equation}\n \\omega_{{\\rm 1D},w}=\\omega_r\\left(1+{\\rm i}{\\cal R}_{\\rm fit}\\right), \\label{eq:omega_1D1}\n\\end{equation}\nwhere $\\omega_r$ is some real value, while ${\\cal R}_{\\rm fit}$ denotes the ratio of Im$(\\omega_{w_1})$ to Re$(\\omega_{w_1})$ estimated with Eq. (\\ref{eq:fitting_w1}). One should then find the value of $\\omega_r$ at which the absolute value of $f(\\omega_{{\\rm 1D},w})$ attains a local minimum. As an example, in Fig. \\ref{fig:SLy4-08} we show the absolute value of $f(\\omega_{{\\rm 1D},w})$ as a function of the frequency given by $\\omega_r\/2\\pi$ for the neutron star model with $1.46M_\\odot$ constructed with SLy4 EOS. In this figure, for reference we also show the $w_1$-mode frequency determined through the proper eigenvalue problem with the dashed vertical line. \n\n\\begin{figure}[tbp]\n\\centering\n\\includegraphics[scale=0.5]{SLy4-08w1}\n\\vspace{0.5cm}\n\\caption\nThe absolute value of $f(\\omega_{{\\rm 1D},w})$ is shown as a function of the frequency for the neutron star model with $1.46M_\\odot$ constructed with SLy4 EOS, where the vertical dashed line denotes the $w_1$-mode frequency determined through the proper eigenvalue problem without the approximation. \n\n\\label{fig:SLy4-08}\n\\end{figure}\n\n\\begin{figure}[tbp]\n\\begin{center}\n\\includegraphics[scale=0.5]{Dw1Im}\n\\end{center}\n\\vspace{0.5cm}\n\\caption\nThe relative deviation of the value of $\\omega_r$ determined with the one-dimensional approximation from that of Re$(\\omega)$ determined through the proper eigenvalue problem is shown as a function of stellar compactness for various neutron star models in the top panel. 
In the bottom panel, we show the relative deviation of the imaginary part of the eigenvalue estimated with the one-dimensional approximation from that of Im$(\\omega)$ determined through the proper eigenvalue problem, calculated with Eq. (\\ref{eq:DIm}). \n\n\\label{fig:Dw1}\n\\end{figure}\n\nTo check the accuracy of the one-dimensional approximation, in the top panel of Fig. \\ref{fig:Dw1} we show the relative deviation of the value of $\\omega_r$ determined with the one-dimensional approximation from that of Re$(\\omega)$ determined through the proper eigenvalue problem, calculated with Eq. (\\ref{eq:DRef}) for the $w_1$-mode, as a function of $M\/R$ for various neutron star models. In the bottom panel of Fig. \\ref{fig:Dw1}, we also show the relative deviation, $\\Delta {\\rm Im}(\\omega_{w_1})$, of the imaginary part of the eigenvalue estimated with the one-dimensional approximation, i.e., $\\omega_r{\\cal R}_{\\rm fit}$, from Im$(\\omega)$ determined through the proper eigenvalue problem, which is calculated with\n\\begin{equation}\n \\Delta {\\rm Im}(\\omega_{w_1}) \n = \\frac{{\\rm abs}[{\\rm Im}(\\omega_{w_1})- \\omega_r{\\cal R}_{\\rm fit}]}{{\\rm Im}(\\omega_{w_1})}.\n \\label{eq:DIm}\n\\end{equation}\nFrom this figure, one can observe that the $w_1$-mode frequency can be estimated with the one-dimensional approximation within $\\sim 1\\%$ accuracy independently of the adopted EOSs. On the other hand, the damping rate of the $w_1$-mode can be estimated with $\\sim 30\\%$ accuracy, which seems to depend strongly on the accuracy of the empirical relation for Im$(\\omega_{w_1})$\/Re$(\\omega_{w_1})$ given by Eq. (\\ref{eq:fitting_w1}).\n\n\n\n\n\\section{Conclusion}\n\\label{sec:Conclusion}\n\nQuasi-normal modes are one of the important properties characterizing compact objects. 
In this study, we first show that the ratio of the imaginary part to the real part of the quasi-normal modes, for both the $f$- and $w_1$-modes, can be expressed as a function of the stellar compactness almost independently of the adopted EOS, and we derive the corresponding empirical relations. Then, focusing on the $f$- and $p_1$-modes, the gravitational waves induced by stellar fluid oscillations, we examine the accuracy of the zero-damping approximation, in which the damping rate, i.e., the imaginary part of the quasi-normal mode, is neglected, for estimating the corresponding gravitational wave frequencies. We show that the frequencies of the $f$- and $p_1$-modes can be estimated with considerable accuracy with the zero-damping approximation. In addition, we propose a one-dimensional approximation for estimating the $w_1$-mode frequency, which adopts the empirical relation (found in this study) for the ratio of the imaginary part to the real part of the $w_1$-mode, and show that this approximation works well: the frequency can be estimated within $\\sim 1\\%$ accuracy. 
\n\n\n\n\n\\bmhead{Acknowledgments}\nThis work is supported in part by the Japan Society for the Promotion of Science (JSPS) KAKENHI Grant Numbers \nJP19KK0354, \nJP20H04753, and\nJP21H01088, \nand by Pioneering Program of RIKEN for Evolution of Matter in the Universe (r-EMU).\n\n\n\n\\section*{Declarations}\n\nThe author has no relevant financial or non-financial interests to disclose.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\setcounter{equation}{0} \\ALTsect{\\setcounter{equation}{0} \\ALTsect}\n\\def{\\arabic{section}}.{\\arabic{equation}}{{\\arabic{section}}.{\\arabic{equation}}}\n\\begin{document}\n\\sloppy\n\\begin{center}\n{\\Large \\bf\nBody Fixed Frame, Rigid Gauge Rotations \\\\\nand Large N Random Fields in QCD}\\\\\n\\vskip .5cm\n{\\large{\\bf Shimon Levit}\\\\}\n\\vskip .5cm\n{\\it Department of Physics}\\\\\n{\\it Weizmann Institute of Science}\\\\\n{\\it Rehovot 76100 Israel\\ \\footnote{ Permanent address. Supported in\npart by\nUS -- Israel Binational Science Foundation grant no. 89--00393}}\\\\\n{\\it and}\\\\\n{\\it Max-Planck-Institut f\\\"ur Kernphysik}\\\\\n{\\it D-6900 Heidelberg, Germany\\ \\footnote{ Supported by Humboldt Award.}}\\\\\n\\end{center}\n\\vskip .5cm\n\\centerline{\\Large{\\bf Abstract}}\n\n\n The \"body fixed frame\" with respect to local gauge transformations is\n introduced. Rigid gauge \"rotations\" in QCD and their Schr\\\"odinger equation\n are studied for static and dynamic quarks.\nPossible choices of the rigid gauge field configuration\n corresponding to a nonvanishing\nstatic colormagnetic field in the \"body fixed\" frame are discussed.\nA gauge invariant variational equation is derived in this frame.\nFor large number N of colors\n the rigid gauge field configuration is regarded as random\nwith maximally random probability distribution under constraints on\nmacroscopic--like quantities. 
For the uniform magnetic field\nthe joint probability distribution of the field components\nis determined by maximizing the\nappropriate entropy under the area law constraint for the Wilson loop.\nIn the quark sector the gauge invariance requires\nthe rigid gauge field configuration to appear\nnot only as a background but also as inducing\nan instantaneous quark-quark interaction. Both are random in the large N\nlimit.\n\\vskip .5cm\n\\setcounter{equation}{0} \\ALTsect{Introduction}\n\nStudies of non perturbative aspects of dynamics\nof non abelian gauge fields will continue to remain one of\nthe focuses of theoretical activities. These fields appear\nat all levels of the \"elementary\" interactions and even begin to\nenter at a more phenomenological macroscopic level in condensed matter\nsystems. Quantum Chromodynamics represents a prime example of a\nstrongly coupled theory with non abelian gauge fields. Despite many\nefforts, e.g. instantons \\cite{ins}, large N expansion\n\\cite{lar,fra,wit,mgm,egk,ind},\nlattice gauge theory and strong coupling\nexpansion \\cite{lat,str}, topological considerations \\cite{pol,top},\n QCD sum rules \\cite{svz}, \"spaghetti\" vacuum \\cite{cop}, light cone\napproach \\cite{bro}, explicit color projection \\cite{jon}, and others\n\\cite{adl,yaf,dua}, the quantitative understanding\nof the basic QCD features is still far from satisfactory. A sustained\neffort with different angles of attack is clearly in order with the hope\nthat accumulated qualitative experience will finally lead to the\ndevelopment of quantitative calculational tools. This paper is a\ncontribution to this effort.\n\n Invariance under local gauge transformations is the most important\nfeature of a non abelian gauge theory. In the framework of the Hamiltonian\nformulation of QCD I wish to explore the consequences of this\ninvariance using some\ngeneral methods common to molecular and nuclear physics. 
I wish to\ndefine an appropriate generalization of the body-fixed\n (intrinsic, rotating) frame formalism in the context of\nthe local gauge transformations. After doing so one can attempt to\nseparately investigate the\ndynamics of the gauge \"rotations\" of the frame and the intrinsic frame\ndynamics. These would be the analogs of rigid rotations and\nintrinsic vibrations in molecular and nuclear physics. Most of my\ninterest in this paper will concentrate on the study of the \"rigid\ngauge rotations\".\nThe study of the couplings between the \"rotations\" and\nthe \"vibrations\" of the gauge field is deferred to future work.\nTo avoid misunderstanding I wish to stress that by \"gauge rotations\" or\n\"rotations of the gauge field\"\nin this paper I will always mean Eq.(\\ref{rgd}) below, which includes\nthe proper SU(N) rotation as well as the inhomogeneous \"shift\" term.\n\nPerhaps the most important conceptual advantage of using the body-fixed\nframe associated with a given symmetry\nlies in the fact that one can freely approximate the dynamics in this\nframe without fear of violating the symmetry.\nIn particular the use of this formalism appears to be\nfruitful provided there exists a body-fixed, intrinsic\nframe in which the \"rotational - vibrational\" coupling can be considered\nas small. This in turn generically\nhappens when the \"rotational inertia\" is much\nlarger than the \"inertia\" associated with the intrinsic motion\nso that a variant of the Born-Oppenheimer approximation is valid. A typical\nsituation is when the system's ground state is strongly \"deformed\" away\nfrom a symmetric state.\nBy deformation I mean absence of symmetry with respect to\n\"intrinsic transformations\",\ni.e. transformations in the body-fixed frame and not\nwith respect to the transformations in \"laboratory\". 
Absence of symmetry\nin \"laboratory\" would correspond to the symmetry breakdown\nwhich can not occur for a local gauge symmetry.\nQuantum mechanical examples of deformed bodies are e.g. non spherical\nmolecules, deformed nuclei, etc.\n\nI do not have a priori arguments that the QCD vacuum\nis strongly \"deformed\" in the above sense. Appearance of various QCD\ncondensates, Ref. \\cite{svz}, suggests that this may be true. The\ncondensate wavefunction should then play a role of the strongly deformed\nconfiguration. Another positive indication is the large N master field concept,\nRef. \\cite{wit}, according to which a special gauge field configuration\nshould exist which dominates the vacuum wave function or the\ncorresponding functional integral. It is expected, however, that the\nmaster field is not\nsimply a fixed classical configuration. It should rather be regarded as\na statistical distribution allowing to calculate quantities which are\nanalogous to macroscopic\nthermodynamic quantities in statistical physics, i.e. such that their\nfluctuations are suppressed\nin the large N limit, Refs. \\cite{mat,ran}. Glimpses of the meaning of\nthese vague notions were found in various matrix models,\ncf. Refs. \\cite{mgm,egk,mat,ran},\nand, e.g. in 1+1 dimensional QCD, Ref. \\cite{bor}. If this view point\nis correct then suitably chosen rigidly rotating \"deformed\" gauge field\ncould play a role of the master field provided one understands\nin which sense it should also be statistical.\nThe following formalism will clarify some of these issues and\nprovide a general framework in which they could be further discussed.\n\nWorks in the spirit of our study have already appeared in the\npast,cf. Refs. \\cite{lee,jac,sim,dos} and the analogy with various\ntypes of rotational motion is frequently used in QCD.\nIn this sense the present study is a continuation of these works.\n\nThis paper is organized as following. 
In Section 2\nI introduce the transformation to the body-fixed frame in the context\nof the simplest model of the gauge rotational motion --\nthe rigid gauge rotor. Giving a natural definition\nof the rigid gauge \"rotations\" I proceed to determine\nthe appropriate generalization of the standard space rigid rotor\n results -- the moment of inertia tensor,\nthe \"body-fixed\"\nframe, the generators of the \"body-fixed\" gauge group\nin terms of which the character of \"deformation\" can be classified, etc.\nDespite the severe limitation on the set of allowed gauge field\nconfigurations, the model is gauge invariant.\n I work out in Section 3\nthe quantum mechanics of the model. As with the\nspace rotor the generators of the \"laboratory\" and \"body-fixed\"\ngauge transformations provide a complete set of quantum numbers\nfor the wave functionals of the model. The vacuum has zero energy and\nis the most disordered state. Higher states correspond to the presence\nof very heavy, i.e. static quarks and antiquarks in the system.\nAs an important example I consider the wave function and the\ncorresponding Schr\\\"odinger equation for a static quark-antiquark pair.\nThis and other similar equations in the model are\nsimple matrix equations with the inverse \"moment of inertia\" determining\nthe interaction between the color sources and depending on the assumed\nrigid gauge configuration which plays the role of the \"free parameter\".\n\nIn Section 4 I discuss the meaning of the results obtained so far and\npossible choices of the rigid gauge field which physically represents\na non vanishing colormagnetic field\nin the body-fixed frame. In the ground state this frame\ndoes not \"rotate\" but has random orientations in local\ncolor spaces at every space point. Introduction of static quarks forces\nthe frame to \"rotate\" quantum mechanically\nat the points where the quarks are situated. 
The energy eigenvalues\nof these \"rotations\" are the energies of the quantum states of the\ncolorelectric field generated by the quarks. The propagator of\n this field\nis the moment of inertia of the model and depends explicitly on the\nassumed configuration of the rigid static colormagnetic field.\nFor zero field the propagator is a simple Coulomb while for\na uniform field diagonal in color the\npropagator behaves asymptotically as a decaying Gaussian.\n The so called dual Meissner effect picture of the confinement,\n Ref. \\cite{man}, could be implemented if a configuration\nof the rigid colormagnetic field is found which \"channels\" the\ncolorelectric field and makes its propagator effectively one dimensional.\nIt turns out that the creation of such a magnetic \"wave-guide\"\nis connected with existance of a zero eigenvalue of a certain\noperator in the model.\n\nSince the quark color degrees of freedom are treated\n quantum mechanically\nthe model allows for a possibility that confinement of fundamental\nrepresentations does not automatically mean confinement of higher\nrepresentations. I discuss this\npossibility and derive a variational equation for the rigid field.\nThis equation is fully gauge invariant.\n\nIn Section 5\nexpecting that rigid gauge rotations should be relevant for the\nmaster field concept I study the model in the large N limit.\nAny candidate for the master field must be allowed to undergo free\n\"gauge rotations\" which can not be frozen by this limit\nand should induce an interaction between the quarks.\nGoing to the body fixed frame of these rotations I regard the\nrigid gauge field configuration as random and introduce a natural\nrequirement that it is least biased under constraints that it\nshould reproduce\ngauge invariant quantities which can be regarded macroscopic-like in the\nlarge N limit. This means that it should be maximally random under these\nconstraints. 
In order to make these ideas explicit I discuss in some\ndetail the case of the uniform\ncolormagnetic field. Such a configuration in QCD was already discussed in the\npast, Refs. \\cite{sav,cop}, but it seems that its appearance in the interaction\nis a novel feature\nof the model. The detailed form of this interaction depends on the\ndifferences of the color components of the magnetic field. It is not\nconfining for any finite number of colors. For $N \\rightarrow \\infty$\n I assume that the form of the density of the color components of\n the field is known.\nIn 2+1 dimensions I choose it such that it gives an area\nlaw for space oriented Wilson loops.\nI then treat the entire distribution of these components as\n a joint distribution of their probabilities and regard\n the adopted \"single component\" density as an analog of a\nmacroscopic quantity that must be reproduced by this\njoint distribution. I postulate that it must otherwise be maximally random,\n i.e. must have the maximum entropy (minimum\ninformation content) under suitable constraints. In this way I\nderive the maximally random distribution for this model. I discuss its relation\nto the large N limit of the Schr\\\"odinger equation for the static quark--antiquark\nsystem. I also give possible generalizations of this development\nto 3+1 dimensions.
This dual appearance of the rigid field is\n a consequence of the gauge invariance and in the large N limit\nis apparently the way the\nmaster field should enter the quark sector of the theory. According to the\nideology developed in Section 5 both the field in which the quarks\nmove and their interaction should be considered as random in the large N\nlimit. The random interaction between the quarks opens interesting\npossibilities for discussing the relationship between confinement and\nlocalization.\n\nThe body fixed Hamiltonian with dynamical quarks is gauge invariant.\nIts invariance with respect to global symmetries, however, is not\nguaranteed for an arbitrary choice of the rigid gauge configuration.\nI discuss possible variational approaches to determine this configuration\nand derive an analogue of the Hartree-Fock equations for the model.\n\nIn the rest of the Introduction I will establish my notations, cf.\n Ref.\\cite{bjo}. I\nconsider the QCD Hamiltonian in d=3 space dimensions in the $A^{0} = 0$\ngauge,\n\\begin{equation}\nH=\\frac{1}{2}\\int d^{3}x [(E_{a}^{i}(x))^{2}+(B_{a}^{i}(x))^{2}]\n+ \\int d^{3}x q_{\\gamma}^{+}(x)[\\alpha^{i} \\left(p^{i}-g\nA_{a}^{i}(x) \\frac{ \\lambda_{\\gamma \\delta}^{a}}{2} \\right)\n+\\beta m]q_{\\delta}(x),\n\\label{ham}\\end{equation}\nwith\n\\begin{equation}\nB_{a}^{i}(x)=\\epsilon_{ijk}(\\partial_{j}A_{a}^{k}+gf_{abc}A_{b}^{j}A_{c}^{k}),\n\\end{equation}\n$i,j,k=1,...,3$; $\\gamma,\\delta$=1,...,N; $a=1,...,N^2 - 1$ for SU(N)\ngauge group and $f_{abc}$ -- the structure constants of the SU(N).\nDirac and flavor indices are omitted and the summation\nconvention for all repeated indices is employed here and in the following.\nThe gluon vector potential $A_{a}^{i}(x)$ and minus the electric\nfield $-E_{a}^i (x)$ are canonically conjugate variables,\n\\begin{equation}\n[E_{a}^{i}(x),A_{b}^{j}(y)]=i\\delta_{ab}\\delta_{ij}\\delta (x-y),\n\\end{equation}\nand the quark fields obey the standard anticommutation relations.\n\nThe 
Hamiltonian (\\ref{ham}) is invariant under\nthe time independent gauge transformations. Using the matrix valued\nhermitian fields\n\\begin{equation}\nA_{\\alpha\\beta}^{i}(x)=A_{a}^{i}(x) \\frac{ \\lambda_{\\alpha\\beta}^{a}}{2},\n\\; \\; \\; \\; \\;\nE_{\\alpha\\beta}^{i}(x)=E_{a}^{i}(x) \\frac{\\lambda_{\\alpha\\beta}^{a}}{2},\n\\end{equation}\nwhere $\\lambda^{a}$ are the SU(N) generators with the properties\n\\begin{eqnarray}\n[\\lambda^{a},\\lambda^{b}]&=&2if_{abc}\\lambda^{c};\\; \\; \\;\n\\{\\lambda^a,\\lambda^b\\} = \\frac{4}{N}\n\\delta_{ab} + 2d_{abc} \\lambda^c;\\nonumber \\\\\nTr\\lambda^{a}\\lambda^{b}&=&2\\delta_{ab};\\; \\; \\; \\; \\;\n\\lambda_{\\alpha \\beta}^{a} \\lambda_{\\gamma\\delta}^{a} = 2[\\delta_{\\alpha\\delta}\n\\delta_{\\beta\\gamma}-\\frac{1}{N}\\delta_{\\alpha\\beta}\\delta_{\\gamma\\delta}]\n\\end{eqnarray}\none can write the gauge transformation as\n\\begin{equation}\n A^i \\rightarrow SA^{i}S^{+}+\\frac{i}{g}S\\partial^{i}S^{+};\\;\\; \\;\nE^i\\rightarrow SE^{i}S^{+};\\; \\; \\; q\\rightarrow Sq \\end{equation}\nwhere S(x) are time independent but x - dependent unitary $N\\times N$\nmatrices, elements of the SU(N) group. 
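The SU(N) relations above are easy to verify numerically for $N=3$, where the $\lambda^a$ are the Gell-Mann matrices. The sketch below checks the trace normalization, the completeness relation, and extracts $f_{abc}$ from the commutators, using $Tr([\lambda^a,\lambda^b]\lambda^c)=4if_{abc}$, which follows from the listed identities.

```python
import numpy as np

s3 = np.sqrt(3.0)
# The eight Gell-Mann matrices: the N = 3 case of the lambda^a above.
lam = np.array([
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],
    [[1 / s3, 0, 0], [0, 1 / s3, 0], [0, 0, -2 / s3]],
], dtype=complex)
N = 3

# Tr(lambda^a lambda^b) = 2 delta_ab
tr = np.einsum('aij,bji->ab', lam, lam)
assert np.allclose(tr, 2.0 * np.eye(8))

# completeness: lambda^a_{ab} lambda^a_{cd}
#             = 2 (delta_{ad} delta_{bc} - delta_{ab} delta_{cd} / N)
lhs = np.einsum('xab,xcd->abcd', lam, lam)
eye = np.eye(N)
rhs = 2.0 * (np.einsum('ad,bc->abcd', eye, eye)
             - np.einsum('ab,cd->abcd', eye, eye) / N)
assert np.allclose(lhs, rhs)

# structure constants from [lambda^a, lambda^b] = 2 i f_abc lambda^c,
# i.e. f_abc = Tr([lambda^a, lambda^b] lambda^c) / (4 i)
comm = (np.einsum('aij,bjk->abik', lam, lam)
        - np.einsum('bij,ajk->abik', lam, lam))
f = np.einsum('abik,cki->abc', comm, lam) / 4j
print(f[0, 1, 2].real)   # f_123 = 1
```

The same contraction pattern works for any N once the generators are supplied.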
The generators of this\n transformation\n\\begin{equation}\n G_{a}(y) \\equiv G_a^A (y) + G_a^q (y), \\nonumber \\end{equation}\n\\begin{equation}\nG_{a}^A (y) = \\partial_{i}E_{a}^{i}(y)+gf_{abc}A_{b}^{i}(y)E_{c}^{i}(y),\n\\; \\; G_{a}^q (y) = -gq^{+}(y)\n\\frac{\\lambda^{a}}{2} q(y) \\label{gen} \\end{equation}\nare conserved,\n\\begin{equation}\n\\frac{\\partial G_{a}(x)}{\\partial t}=i[H,G_{a}(x)]=0\\end{equation}\nand it is consistent to impose the Gauss law constraints\n\\begin{equation}\nG_{a}(x)|\\Psi>=0 \\label{gl} \\end{equation}\nfor all physical states.\nAlthough $G_{a}(x)$ do not commute, their commutators\n\\begin{equation}\n[G_{a}(x),G_{b}(y)]=gf_{abc}\\delta (x-y)G_{c}(x)\\end{equation}\nallow one to set them all to zero simultaneously.\n\n\n\\setcounter{equation}{0} \\ALTsect{Rigid Gauge Rotor.}\n\nIn this section I will discuss the rigid gauge \"rotations\". Classically\nI define them as gauge field configurations of the type\n\\begin{equation}\n A^i(x,t) = U(x,t)a^{i}(x)U^{+}(x,t)+\\frac{i}{g}U(x,t)\\partial^{i}U^{+}(x,t)\n\\label{rgd}\\end{equation}\nwhere $a^i(x)$ are t-independent, fixed as far as their x-dependence is\nconcerned, \"rigid\" fields which I do not\nspecify, leaving them arbitrary for the moment. Eq.(\\ref{rgd}) is\nthe simplest example of the transformation to the \"body fixed\" frame\nof the local gauge symmetry in which I have assumed that the dynamics\nof the field in this frame is very stiff so that the field\ncan be approximately replaced by\nits static average. In general $a^i$ is of course dynamical but should\nbe viewed as constrained since $U(x,t)$ already contains a third of the\ndegrees of freedom. For non rigid $a^i$ there is no obvious choice of\nthe body-fixed frame and it can be constrained in a variety of ways,\nsay, $a^3 = 0$ (axial gauge), $\\partial_i a^i = 0$ (Coulomb gauge),\netc. 
In our language\nthese different gauge fixings correspond to different \"rotating\" frames.\nSince they are \"non inertial\" the dynamics will look very different\ndepending on the choice of the frame and different fictitious\nforces, the analogue of Coriolis and centrifugal forces,\nwill be present. I am planning to discuss these issues elsewhere.\n\nWith the ansatz (\\ref{rgd}) the covariant derivatives are\n\\begin{equation}\nD^i \\equiv \\partial ^i - igA^i = U(x,t) d^i(x) U^{+}(x,t),\\end{equation}\nwith fixed, rigid\n\\begin{equation}\nd^i(x) \\equiv \\partial ^i - iga^i(x). \\end{equation}\nInserting (\\ref{rgd}) in the Hamiltonian (\\ref{ham})\none finds that the gauge invariant potential term\n$\\sum_{i,a}(B_{a}^{i}(x))^{2} \\sim \\sum_{i,j}Tr[D^i,D^j]^2 = \\sum_{i,j}\nTr[d^i,d^j]^2$ is\nindependent of the $U$'s, i.e. it is fixed, nondynamical in this model.\nThe dynamics of the gauge field\nis governed by the kinetic energy, i.e. the term with\nthe electric field in (\\ref{ham}). Using\n$\\partial_0(U\\partial_iU^{+}) = (i\/2)U(\\partial_i\\omega)U^{+}$ with\n$\\omega = 2i U^{+}\\partial_0 U$ one finds\n\\begin{eqnarray}\n-E^{i} = \\partial_0 A^{i} = \\frac{1}{2g}U [\\omega,d^i] U^{+},\n\\label{dai}\\end{eqnarray}\nand therefore the kinetic energy in (\\ref{ham}) is\n\\begin{equation}\n\\frac{1}{4}\\int d^3 x Tr(\\partial_0 A^i)^2 =\n - \\frac{1}{16g^2}\\int d^3 x Tr\\left( \\omega[d^i,[d^i,\\omega]]\\right)\n \\label{ke} \\end{equation}\nwhere I have disregarded surface terms, ignoring for the moment\npossible non vanishing fields at infinity, non trivial topologies and\nother global issues (cf. 
below).\n\nThe double commutator in (\\ref{ke}) with the summation over all indices,\n$x,i$ and the color is the straightforward generalization of the\nfamiliar double vector product summed over all particle indices\n in the moment of inertia tensor\nappearing in the kinetic energy of rigid space rotations of a system of\nparticles with fixed relative positions.\n Following this analogy the energy (\\ref{ke}) of the rigid gauge\n rotations can be written\n\\begin{equation}\nE_{rot} = \\frac{1}{4}\\int d^3 x Tr(\\omega I \\omega) \\label{en} \\end{equation}\nwhere the moment of inertia is defined as a differential matrix operator\nsuch that\n\\begin{eqnarray}\nI\\omega &\\equiv& - \\frac{1}{4g^2} \\left[ d^i,[d^i,\\omega]\\right] = \\nonumber \\\\\n&=& - \\frac{1}{4g^2}\\left(\\partial_i^2 - ig[\\partial_i a^i,\\omega] -\n2ig[a^i,\\partial_i\\omega] - g^2 \\left[ a^i,[a^i,\\omega]\\right]\\right) .\n \\label{mom}\n\\end{eqnarray}\n To obtain the corresponding\nHamiltonian one can use the gauge field part $G^A$ of the generators\n(\\ref{gen}). Using Eqs. 
(\\ref{rgd}), (\\ref{dai}) and the\ndefinition (\\ref{mom}) one finds\n\\begin{equation}\nG^A = [\\partial^i - ig A^i,E^i] =\n\\frac{1}{2g}U \\left[d^i,[d^i,\\omega]\\right] U^{+} = -2g U (I\\omega) U^{+}.\n\\label{omg} \\end{equation}\nDefining the gauge generators in the rotating frame\n$\\hat{G} = U^{+} G^A U$, expressing $\\omega = - I^{-1} \\hat{G} \/2g$ from (\\ref{omg}) and\nsubstituting in (\\ref{en}) one finds the Hamiltonian of the rigid gauge\nrotor\n\\begin{equation}\nH_{rot}^A = \\frac{1}{16g^2}\\int d^3 x d^3 y Tr \\hat{G} (x) I^{-1}(x,y)\n\\hat{G} (y) = \\frac{1}{4g^2}\\int d^3 x d^3 y \\hat{G}_a(x) I_{ab}^{-1}(x,y)\n\\hat{G}_b(y),\n\\label{hrot} \\end{equation}\nwhere $I_{ab}^{-1}(x,y) = (1\/4) Tr(\\lambda^a I^{-1}(x,y)\n\\lambda^b)$ is proportional to the inverse of the operator $-d^i_{ac}\nd^i_{cb}$ with $d^i_{ab} = \\partial_i \\delta_{ab} - g f_{abc} a^i_{c}$\nand I assumed that this operator does not have zero eigenvalues.\nIn a more careful way of handling fields at infinity one should avoid\nthe integration by parts in (\\ref{ke}). The inverse \"moment\nof inertia\" operator is then replaced by a less transparent\n\\begin{equation}\nK_{aa'}(x,x') = \\int d^3 y \\left[ d^i_{bc}(y) I^{-1}_{ca}(y,x)\\right]\n\\left[ d^i_{bc'}(y) I^{-1}_{c'a'}(y,x')\\right].\\end{equation}\nMost of the following results remain valid for both forms of this\noperator.\n\nThe meaning of the preceding expressions is quite obvious. They are the\nfield-theoretic generalization of the standard rigid rotor results.\nThe unbroken\nlocal gauge symmetry of QCD means that there are free SU(N) color\ngauge \"rotations\" at
Expression (\\ref{hrot}) shows that the \"rotations\"\nat different points as well as around different color axes are coupled\nvia the non diagonal\nelements of the moment of inertia \"tensor\" $I_{ab}(x,y)$ in the manner\nsimilar to the coupling between rigid rotations around different space axes in\nsystems of particles.\n\nIt does not seem to be useful to diagonalize the operator $I^{-1}$ in\n(\\ref{hrot}).\nThe standard diagonal form of the rigid rotor Hamiltonian , i.e., $H = \\frac{1}{2}\\sum_a\nL_a^2\/I_a$ can be usefully achieved only in the case of\nrotations corresponding to a single SU(2) group to which the familiar\nrigid space rotations belong. Diagonalizing the moment of inertia\nin the case of higher groups will introduce combinations\nof the generators multiplied by matrices of orthogonal rotations.\nThese in general will not have\nthe group commutation relations. Already for a single\nSU(3) the group O(8) of\nrotations in the adjoint space needed in order to diagonalize the\nmoment of inertia is much larger than SU(3).\n\n The actual values of the moment of inertia depend on the rigid\nconfiguration $a^i (x)$ of the gauge field via the\nexpression (\\ref{mom}). This comprises the \"free parameter\" of the rigid\ngauge rotor model. For abelian theory or alternatively\nin the limit $g \\rightarrow 0$ the inverse of $I(x,y)$\nappearing in Eq.(\\ref{hrot})\nis just the Coulomb propagator.\n In the opposite large $g$ or long wavelength limit\n$I(x,y)$ becomes a local tensor given by the last term in (\\ref{mom})\nwhich is obviously the SU(N)\ngeneralization of the moment of inertia expression.\n\n An important feature of\nthe Hamiltonian (\\ref{hrot}) is that despite the severe limitation of the\nallowed gauge field configurations imposed by (\\ref{rgd}) it remaines\ngauge invariant. This is because (\\ref{hrot}) depends on $\\hat{G}$\nrather than $G^A$. 
Under a gauge transformation $U \\rightarrow SU$,\n$G^A$ transforms as $SG^A S^{+}$ so that $\\hat{G}(x)$ and therefore\n$H_{rot}^A$ stay invariant. The gauge invariance of (\\ref{hrot}) is the\nsimplest illustration of the usefulness of the introduction of the\nbody fixed frame. One can freely approximate the dynamics in this frame\nwithout fears of violating the symmetry with respect to which the frame\n has been defined, i.e. the local gauge symmetry in the present case.\n\nConsider another transformation, $U \\rightarrow US$. Referring to\nEq.(\\ref{rgd}) one can interpret this transformation\neither as the change of $U$ i.e. the transformation of the intrinsic\nframe with respect to the rigid \"shape\" $a^i$ or\nas the change of $a^i$ ,\n $a^i \\rightarrow Sa^{i}S^{+}+\\frac{i}{g}S\\partial^{i}S^{+}$, i.e.\nthe transformation of the intrinsic \"shape\" with respect\nto the intrinsic frame.\nSuch transformations obviously form a group of local SU(N) gauge\ntransformations which I will call\n the intrinsic or \"body fixed\"\n gauge group to distinguish it from the \"laboratory\" gauge\ngroup of the ordinary gauge transformations. According to two different\ninterpretations of the intrinsic gauge transformations\ngiven above one has two options. One option is to\n regard $\\hat{G}$'s as the generators of the intrinsic group.\nThey act on the dynamical variables $U$ but they have a disadvantage in\nthat the \"laboratory\"\ngroup is not completely independent of such an intrinsic group, e.g.\nthey both have identical Casimir operators. Another option is to formally\nintroduce operators which gauge transform the intrinsic variables $a^i$.\nThey will have the same form as $G^A$'s but with $a^i$ replacing $A^i$.\nDefined in this way the intrinsic group will be completely independent\nof the \"laboratory\" gauge group but will act on nondynamical variables\nwhich do not appear in the wavefunctions. 
Convenience should dictate\nwhich one to use.\n\nThe above introduction of the intrinsic vs \"laboratory\" gauge groups\nis obviously quite general; e.g. the definition of $\\hat G(x)$ is\nindependent of the rigid rotor restriction set by fixing $a^i$\nto be nondynamical in (\\ref{rgd}).\nUnlike the local gauge symmetry in the \"laboratory\", the symmetry in the\n\"body fixed\" frame can be broken.\nE.g., the gauge invariant Hamiltonian (\\ref{hrot})\n is in general not invariant under the transformations of the intrinsic\ngauge group. This is a simple example of the situation to which I\nreferred earlier as the possible existence of \"deformation\"\ndespite the impossibility of symmetry breakdown in the context of non\nabelian local\ngauge theory. The character of the deformation can be classified\nusing the intrinsic gauge group, e.g. in the classification of\npossible \"deformed shapes\" of the rigid gauge rotor (\\ref{hrot})\nby the transformation properties of the moment of inertia\n$I_{ab}(x,y)$ under this group. Here I obviously adopt the second\ninterpretation of the intrinsic group. The invariance of\n$I_{ab}(x,y)$ under all\nintrinsic transformations would be analogous\nto the spherical rotor limit in the space rotation case.\nThe invariance under a continuous subgroup of the intrinsic group is the\nanalog of the axially symmetric rotor, etc. Discrete intrinsic\nsubgroups should also be considered.\n\nConsider a rigid gauge configuration $a^{i'}$ which is a gauge\ntransform of $a^i$, $a^{i'} = S(a^i +(i\/g)\\partial_i)S^{+}$.\n The Hamiltonian\n(\\ref{hrot}) will have the same form with the same moment of inertia\nbut with $\\hat{G}$ replaced by $S^{+}\\hat{G} S$. The eigenvalues of this\ntransformed Hamiltonian will not change and will therefore depend only\non gauge invariant combinations of the rigid $a^i$, i.e. 
on the Wilson\nloop variables ${\\rm Tr}\\, P \\exp (ig\\oint a^i dx_i)$.\n\\setcounter{equation}{0} \\ALTsect{Static Quarks}\n\nSo far I have discussed the rigid gauge rotor limit\nof only the first term in (\\ref{ham}). The resulting $H_{rot}^A$ is relevant\nfor the discussion of very heavy quarks.\nThey can be considered static as far as their translational motion is\nconcerned. They still have a wavefunction describing the motion of their color\ndegrees of freedom. Because of (\\ref{gl}) this motion is coupled to\nthe \"rotations\" of the gauge field which I will treat\nusing (\\ref{hrot}).\n\nIn the limit $m\\rightarrow \\infty$ the quark kinetic energy\nterm $q^{+}\\vec{\\alpha}\\vec{p}q$ and the quark color current coupling\n$q^{+}\\vec{\\alpha}\\lambda^{a}q$ in Eq.(\\ref{ham}) can be neglected\nand the resulting Hamiltonian decouples into a part containing the gauge field\nand another containing the quarks, $H=H_{A}+H_{q}$, where\n$H_{A}$ is the first term in (\\ref{ham})\nand $H_q = m\\int d^3x q^{+} (x)\\beta q(x)$.\nThe coupling appears only via the Gauss law constraint, Eq.(\\ref{gl}).\nThe wave function cannot be taken\nas a product $\\Psi=\\Psi(q)\\Psi(A)$ but should be a local color singlet.\nIn the representation in which\n\\[ \\beta=\\left( \\begin{array}{cc} 1 & 0 \\\\ 0 & -1\\end{array}\\right) \\]\n\\[ q_{\\alpha}(x)=a_{\\alpha}(x) \\left( \\begin{array}{c} 1 \\\\ 0\\end{array}\n\\right) + b_{\\alpha}^{+}(x) \\left( \\begin{array}{c} 0 \\\\ 1\\end{array}\n\\right) \\]\nwith\n\\begin{eqnarray}\n\\{a_{\\alpha}(x),a_{\\beta}^{+}(y)\\}&=&\\delta_{\\alpha\\beta}\\delta (x-y)\n\\nonumber \\\\\n\\{b_{\\alpha}(x),b_{\\beta}^{+}(y)\\}&=&\\delta_{\\alpha\\beta}\n\\delta (x-y), \\;{\\rm etc.}\\nonumber\n\\end{eqnarray}\nthe eigenfunctions of $H_{q}$ are trivially written down.\nConsider, e.g.,\n\\begin{eqnarray}\n|\\Psi(q)>\\equiv |vac(q)> &=& |0>, \\label{vac} \\\\\n|\\Psi(q)>\\equiv |x_{0},\\alpha> &=& a_{\\alpha}^{+}(x_{0})|0>, \\label{1qu} \\\\\n|\\Psi(q)>\\equiv 
|x_{0},\\alpha ;y_{0},\\beta > &=& a_{\\alpha}^{+}(x_{0})b_{\\beta}\n^{+}(y_{0})|0> \\label{2qu} \\end{eqnarray}\nThese wave functions describe respectively zero quarks, one\nstatic quark at $x_{0}$ with color component $\\alpha$ and a static\nquark--antiquark pair at $x_0$ and $y_0$. It is easy to form local\ncolor singlets with these wave functions.\n For the quark--antiquark pair, e.g., it is\n\\begin{equation}\n |\\Psi>=\\sum_{\\alpha\\beta} \\Psi_{x_{0},\\alpha ;y_{0},\\beta}(A)|x_{0},\n\\alpha ;y_{0},\\beta> \\end{equation}\nwith the wave functional\n$\\Psi_{x_{0},\\alpha ;y_{0},\\beta}(A)$ satisfying\n\\begin{eqnarray}G_{a}^{A}(x)\\Psi_{x_{0},\\alpha ;y_{0},\\beta }(A) =\ng\\delta(x-x_{0})\\frac{\\lambda_{\\alpha\\alpha^{'}}^{a}}{2}\n\\Psi_{x_{0},\\alpha^{'} ;y_{0},\\beta}(A)+g\\delta(x-y_{0})\n\\frac{\\bar{\\lambda}_{\\beta\n\\beta^{'}}^{a}}{2}\\Psi_{x_{0},\\alpha ;y_{0},\\beta^{'}}(A)\\label{cnd}\n\\end{eqnarray}\nwhere $\\bar{\\lambda}_{\\alpha\\beta}^{a}=-\\lambda_{\\alpha\\beta}^{a*}$.\nThe wave functional of the\ngauge field should be a singlet at every point in space except\nat the positions of the quarks where it should transform as the N and $\\bar{N}$\nmultiplets of SU(N). 
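The conjugate-representation matrices $\\bar{\\lambda}^{a}=-\\lambda^{a*}$ entering (\\ref{cnd}) close the same su(3) algebra as the $\\lambda^a$'s themselves, which is what makes the antiquark index transform as the $\\bar{N}$ multiplet. This standard fact can be checked numerically; the sketch below (my own illustration, not part of the model) builds the Gell-Mann matrices, extracts the structure constants, and verifies the algebra of $\\bar{\\lambda}^a$.

```python
import numpy as np

# The eight Gell-Mann matrices lambda^a of SU(3).
l = np.zeros((8, 3, 3), dtype=complex)
l[0][0, 1] = l[0][1, 0] = 1
l[1][0, 1] = -1j; l[1][1, 0] = 1j
l[2][0, 0] = 1; l[2][1, 1] = -1
l[3][0, 2] = l[3][2, 0] = 1
l[4][0, 2] = -1j; l[4][2, 0] = 1j
l[5][1, 2] = l[5][2, 1] = 1
l[6][1, 2] = -1j; l[6][2, 1] = 1j
l[7] = np.diag([1, 1, -2]) / np.sqrt(3)

# Structure constants from [l^a, l^b] = 2i f_abc l^c  (Tr l^c l^d = 2 d_cd).
f = np.zeros((8, 8, 8))
for a in range(8):
    for b in range(8):
        comm = l[a] @ l[b] - l[b] @ l[a]
        for c in range(8):
            f[a, b, c] = np.trace(comm @ l[c]).imag / 4

# Antiquark generators lbar^a = -lambda^{a*} obey the same commutation relations.
lbar = -l.conj()
err = 0.0
for a in range(8):
    for b in range(8):
        lhs = lbar[a] @ lbar[b] - lbar[b] @ lbar[a]
        rhs = 2j * np.einsum('c,cij->ij', f[a, b], lbar)
        err = max(err, np.abs(lhs - rhs).max())
```

The vanishing of `err` (up to rounding) confirms that $-\\lambda^{a*}$ represents the same algebra, with the familiar $f_{123}=1$ recovered along the way.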
This constraint together with the\nSchr\\\"odinger equation\n\\begin{equation}\nH_{A}\\Psi_{x_{0},\\alpha;y_{0},\\beta}(A)=E\\Psi_{x_{0},\\alpha;y_{0},\\beta}(A)\n\\end{equation}\ncompletely defines the problem for the gauge field.\n\n In the rigid gauge rotor limit $H_A$ is given by Eq.(\\ref{hrot}).\nThe wavefunctions of this Hamiltonian are general functionals\n $\\Psi [U(x)]$ of the SU(N) matrices $U_{\\alpha \\beta} (x)$.\nTheir scalar product is determined by functional integration over the\n$U$'s with the corresponding group invariant measure.\nThe vacuum wave functional must obey $G_a^A (x)\\Psi_{vac}[U] = 0$.\nThis means that it is a constant independent of $U_{\\alpha \\beta}(x)$.\nSince also $\\hat{G}_a (x)\\Psi_{vac}[U] = 0$, the vacuum energy is zero according\nto (\\ref{hrot}).\nIn the parametrization of the $U$'s in terms of the appropriate Euler\nangles of the SU(N) rotations at every space point, a constant\n$\\Psi_{vac}[U(x)]$ means that all the \"orientations\" of $U(x)$ at all points\nare equally\nprobable, i.e. there are no correlations between the \"orientations\"\nof the rigid gauge rotor at different points. This is as \"random\" as\nthe distribution of the $U$'s can get. The absence of correlations\nis a property of the vacuum only. For other states the \"orientations\"\nof the gauge fields at different space points are correlated via the\n\"moment of inertia\" operator.\n\nIn order to discuss\nthe wave functions with a non zero number of quarks it is sufficient\nto know some simple\nproperties of the gauge generators $G_a^A(x)$ and $\\hat{G}_a(x)$. 
Since\nthe $\\hat{G}_a(x)$ are gauge scalars, they commute with $G_a^A (x)$,\n\\begin{equation}\n[G_a^A(x),\\hat{G}_b(y)] = 0 \\end{equation}\nwhich means that together the generators of the \"laboratory\" and the\nintrinsic gauge groups provide a complete set of commuting\nquantum numbers for the wave functionals $\\Psi [U_{\\alpha \\beta }(x)]$.\nIndeed, since the Casimir operators for the $G^A$'s and the $\\hat{G}$'s coincide, one has,\ne.g., for SU(2), $(G_a^A(x))^2$, $G_3^A(x)$ and $\\hat{G}_3(x)$,\ni.e. three local commuting operator fields for the three fields of the\nEuler angles needed to specify the $U(x)$.\nIn SU(3) one has eight fields of the \"Euler angles\" and eight local\ncommuting generators made of the $G^A$'s and $\\hat{G}$'s --\n the two group Casimir operators, one Casimir operator\nof an SU(2) subgroup for the $G^A$'s, say $\\sum_{a=1}^{3} (G_a^A)^2(x)$, the\ncorresponding one for the $\\hat{G}$'s and respectively two\npairs of the Cartan generators -- $G_3^A(x)$, $G_8^A(x)$ and $\\hat{G}_3(x)$ and\n $\\hat{G}_8(x)$. This counting continues correctly for any N, i.e.\n$N-1$ for the SU(N) Casimir operators, $2((N-2)+(N-3)+... +1)$\nfor the Casimir operators of the pairs of SU$(N-1)$...SU(2) subgroups and $2(N-1)$\nfor the Cartan generators. Altogether there are $N^2 - 1$ local commuting\noperators as needed. An eigenfunction\nof this complete set of operators is the Wigner function\n$D_{K K^{\\prime}}^L(U(x))$ of U at a certain space point. 
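The bookkeeping behind this counting can be checked for general $N$: the $N-1$ shared SU(N) Casimirs, the Casimirs of the SU(2),...,SU$(N-1)$ subgroup chain taken once for the $G^A$'s and once for the $\\hat{G}$'s, and the $2(N-1)$ Cartan generators together give $N^2-1$ operators. A small arithmetic sketch:

```python
# Count the local commuting operators built from G^A and Ghat for SU(N):
# shared SU(N) Casimirs, the SU(2)..SU(N-1) subgroup-chain Casimirs for
# each of the two groups, and the Cartan generators of each group.
def commuting_count(N):
    casimirs = N - 1                              # SU(N) Casimirs, shared
    chain = 2 * sum(k - 1 for k in range(2, N))   # SU(2)..SU(N-1), both groups
    cartan = 2 * (N - 1)                          # G^A_3,... and Ghat_3,...
    return casimirs + chain + cartan

results = {N: commuting_count(N) for N in range(2, 8)}
```

For SU(2) this reproduces the three operators listed above, for SU(3) the eight, and in general $N^2-1$, matching the number of \"Euler angle\" fields.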
$K$ and $K^{\\prime}$\nare the quantum numbers of the \"laboratory\" and the intrinsic groups and $L$\ndetermines the representation.\n\nUnder an infinitesimal gauge transformation $U\\rightarrow\n(1+i\\epsilon_a(x) \\frac{\\lambda_a}{2})U$ one can easily verify that\n\\begin{eqnarray}\n\\left[ G_{a}^{A} (x),U_{\\alpha \\beta}(y) \\right] &=& g \\delta (x - y)\n\\frac{\\lambda^a_{\\alpha \\gamma}}{2} U_{\\gamma \\beta}(x) \\nonumber \\\\\n\\left[ G_{a}^{A} (x),U_{\\alpha \\beta}^{+}(y) \\right] &=& - g \\delta (x -y)\nU_{\\alpha \\gamma}^{+}(x)\\frac{\\lambda^a_{\\gamma \\beta}}{2} \\label{com} \\\\\n\\left[ \\hat{G}_{a} (x),U_{\\alpha \\beta}(y) \\right] &=& g \\delta (x - y)\nU_{\\alpha \\gamma}(x) \\frac{\\lambda^a_{\\gamma \\beta}}{2} \\nonumber \\\\\n\\left[ \\hat{G}_{a} (x),U_{\\alpha \\beta}^{+}(y) \\right] &=& - g \\delta (x -y)\n\\frac{\\lambda^a_{\\alpha \\gamma}}{2} U_{\\gamma \\beta}^{+}(x)\n \\nonumber \\end{eqnarray}\n All the operators in the rigid gauge rotor model are functions of the\n$G$'s and $U$'s. E.g. consider the electric field operator. According\nto Eq.(\\ref{dai}) it is\n\\begin{equation}\nE^i = -\\frac{1}{4g^2} U [d^i,I^{-1} \\hat{G}] U^{+} \\label{elf} \\end{equation}\nwhere I have expressed $\\omega$ in terms of $\\hat{G}$ using (\\ref{omg}).\n\nUsing the relations (\\ref{com}) it is easy to write the general form of\nthe wave functions for a single quark and for a quark--antiquark pair,\n\\begin{eqnarray}\n\\Psi_{x_0,\\alpha}[U] = U_{\\alpha \\gamma}(x_0)c_{\\gamma} \\nonumber \\\\\n\\Psi_{x_0,\\alpha;y_0,\\beta}[U] = U_{\\alpha \\gamma}(x_0)U_{\\delta\\beta}\n^{+}(y_0)c_{\\gamma \\delta} \\label{wfn}\\end{eqnarray}\nThey satisfy the conditions (\\ref{cnd}) following from the\nGauss law, with constant coefficients\n $c_{\\gamma}$ and $c_{\\gamma \\delta}$\nwhich give the probability amplitudes of the intrinsic quantum\nnumbers $\\gamma$ and $\\delta$. 
They should be normalized,\n$\\sum |c_{\\gamma}|^2 = 1 ; \\sum |c_{\\gamma \\delta}|^2 = 1$, to assure\nthe normalization $\\int d[U(x)] |\\Psi [U]|^2 = 1$.\n\nThese amplitudes\n must be found by solving the corresponding Schr\\\"odinger equations, but before\ndescribing this I wish to remark that the above form of\nthe wave functions is valid also when the limitation of\nthe rigid gauge rotations is relaxed and the most general gauge\nconfigurations are allowed. The parametrization (\\ref{rgd}) is still very\nuseful, but now with fully dynamical fields $a^i$\n the variation of which should be limited only by a \"gauge fixing\"\ncondition to avoid overcounting as described above.\n The dynamics will of course be\nthat of the full QCD but the wave functions\nof the static quark and the quark--antiquark pair will have the same form\n(\\ref{wfn}). The difference will be that the amplitudes $c_{\\gamma}$ and\n$c_{\\gamma \\delta}$ will be functionals of $a^i(x)$ describing the\nspace and\ncolor fluctuations of the \"string\" attached to the quark or between the\nquark and the antiquark. In the rigid gauge rotation case there are only\ncolor fluctuations described by constant amplitudes.\n\nFor quarks in higher representations the wave functions have the same\nform with $U$ replaced by the appropriate Wigner D-function. E.g. 
in\nthe adjoint representation\n\\begin{eqnarray}\n\\Psi_{x_0,a}[U] = Tr(U(x_0)\\lambda^aU^{+}(x_0)\\lambda^b)c_b,\n\\nonumber \\end{eqnarray}\netc.\n\nI will now derive the Schr\\\"odinger equation for the string amplitudes\n$c_{\\gamma}$ and $c_{\\gamma\\delta}$.\n Acting with the Hamiltonian (\\ref{hrot}) on (\\ref{wfn}), using (\\ref{com})\nand the orthogonality of the $U$'s with respect to the integration over the\ngroup, $\\int dU U_{\\alpha \\beta}^{*} U_{\\mu \\nu} = \\delta_{\\alpha \\mu}\n\\delta_{\\beta \\nu}$, I find\n\\begin{eqnarray}\nQ_{\\alpha \\gamma}(x_0)c_{\\gamma} = E c_{\\alpha}, \\nonumber \\\\\nQ_{\\alpha \\gamma}(x_0)c_{\\gamma \\beta} + Q^{*}_{\\beta \\mu}(y_0)c_{\\alpha \\mu}\n- P_{\\alpha \\beta , \\gamma \\mu }(x_0,y_0)c_{\\gamma \\mu} =\nE c_{\\alpha \\beta}, \\label{sch1} \\end{eqnarray}\nwhere I denoted\n\\begin{eqnarray}\nQ_{\\alpha \\gamma}(x_0) = \\frac{1}{4} I_{ab}^{-1}(x_0,x_0)(\\lambda^a\n\\lambda^b)_{\\alpha \\gamma}, \\\\\nP_{\\alpha \\beta , \\gamma \\mu }(x_0,y_0) = \\frac{1}{2} I_{ab}^{-1}(x_0,y_0)\n\\lambda^a_{\\alpha \\gamma} \\lambda^b_{\\mu \\beta}.\\end{eqnarray}\nIn SU(2) $Q_{\\alpha \\gamma}$ takes a particularly simple diagonal form,\n$Q_{\\alpha \\gamma} = \\delta_{\\alpha \\gamma}(1\/4)I_{aa}^{-1}(x_0,x_0)$,\nand is the eigenvalue for a single quark. For quarks in, e.g., the adjoint\nrepresentation the lambda matrices in the expressions above are replaced\nby the corresponding group generators $if_{abc}$.\nThe first two terms in the second line of\n(\\ref{sch1}) are the quark and the antiquark self\nenergies whereas the last term is their interaction. In QCD one expects\nthat terms like $Q$ are afflicted by long and short distance\ndivergences and should be properly regularized, which I will assume\nfor the rest of the paper. I will further assume the translational invariance of\n$Q$, i.e. its independence of $x_0$. One can then rewrite Eq.(\\ref{sch1})\nby transforming it to the basis in\nwhich $Q$ is diagonal. 
Defining its eigenvectors\n$Qb^{(n)} = \\epsilon_n b^{(n)}$ and expanding\n$c_{\\gamma \\beta} = d_{mn} b_{\\gamma}^{(m)} b_{\\beta}^{(n)*}$ one finds\n\\begin{equation}\n-\\sum_{mn}\\langle kl|P|mn\\rangle d_{mn} = (E - \\epsilon^k - \\epsilon^l) d_{kl}\n\\;\\;\\; {\\rm(no\\;\\;sum\\;\\;over\\;\\;k\\;\\;and\\;\\;l)} \\label{peq} \\end{equation}\nwhere $\\langle kl|P|mn\\rangle = P_{\\alpha \\beta , \\gamma \\mu } b_\\gamma^{(m)}\nb_\\mu^{(n)*} b_\\alpha^{(k)} b_\\beta^{(l)*}$. The Schr\\\"odinger equation (\\ref{peq})\nis an $N^2 \\times N^2$ matrix equation and the most interesting question of course\nconcerns the dependence of its eigenvalues on the distance $|x_0 - y_0|$\nfor various possible choices of the rigid gauge field configuration\n$a^i(x)$ on which the matrix $P$ depends. I will address this question\nin the next section.\n\n\\setcounter{equation}{0} \\ALTsect{Choices of The Rigid Field. Mean Field Equations.}\n\nThe rigid configuration, if it exists in QCD, must reflect the\nproperties of the gluon condensate of the vacuum. One of the more\naccepted views of the QCD vacuum is that it is a condensate of non\ntrivial topological configurations -- the Z(N) vortices,\ncf. \\cite{top}. Although\nsuch configurations are easily incorporated in the above formalism,\nI was not able to overcome\nthe technical difficulties in working out a theory of their condensation.\n\nOn a heuristic level each Z(N) vortex carries\n a unit of flux of the colormagnetic\nfield. Condensation of the vortices presumably\nmeans that there is a non zero average of this field in the vacuum.\n Of course, due to the unbroken local gauge symmetry, it\nmust undergo free \"gauge rotations\" at each space point. In the ground\nstate this means that there are\nequal probabilities of all the \"orientations\", yielding zero\naverage value in the laboratory. 
The finite average\nvalue of the condensate field\ncan only be \"seen\" in the \"body fixed\" frame and should appear in this\npicture in a manner similar to\n $a^i$ in the expression (\\ref{rgd}) for our rigid gauge\nrotations. The field strength\n\\begin{equation}\nB^i(x) = U(x)b^i(x)U^{+}(x),\\,\\,\\, b^i = \\frac{i}{g} \\epsilon_{ijk}\n [d^j,d^k], \\end{equation}\nalso averages to zero in the ground state but has a non zero value $b^i$\nin the \"body fixed\" frame.\n\nVia the dynamics of $U(x)$ the ansatz (\\ref{rgd}) leads to a colorelectric\nfield (\\ref{elf}) which propagates away from the points\nwhere $\\hat{G}(x)$ is non zero, i.e. from the locations of the static quarks.\nThe propagator of this field is controlled by the condensate\nfield $a^i$ which enters the expressions for $I^{-1}$ and $d^i$.\nThis propagator is a long range Coulomb potential for zero $a^i$ and\n is a Gaussian for\n$a^i$ corresponding to a uniform colormagnetic $b^i$ (cf. below).\nThe screening of the propagation range of the colorelectric field\nin the presence of the colormagnetic \"condensate\" $b^i$ is\nreminiscent of the dual of the Meissner effect, i.e. of the screening of\na magnetic field by the electric condensate of a superconductor.\nThis dual Meissner effect is of course a\nstandard scenario for confinement in QCD. It is expected that\ntubes of flux of the colorelectric field are formed which\nconnect quarks and make their energy depend linearly on the distance.\n\nIn the present formalism a way to attempt to model the formation\nof a confining string is to look for such a configuration of\nthe rigid field $a^i(x)$ for which the propagator $I^{-1}$ behaves,\nroughly speaking, as\none dimensional for large separations along some given line in space\nat the ends of which quarks can be placed. 
This means that\na sort of magnetic \"wave guide\" should be constructed so that\nthe Green's function of the operator $-d^2_{ab} =\n-d^i_{ac} d^i_{cb} =\n -(\\partial_i \\delta_{ac} - g f_{acc'} a^i_{c'})(\\partial_i - g f_{cbb'}\n a^i_{b'})$ is\nasymptotically $\\propto |x-x'|$ along, say, one of the coordinate\naxes. In order to see the difficulties in finding such a configuration\nconsider for simplicity 2 space dimensions and choose\n$a^1 = c(y)$ and $a^2 = 0$ with an arbitrary $c(y)$. This choice\ncorresponds to the colormagnetic field $b(y) = \\partial_y c(y)$\ndepending only on one coordinate $y$. The operator to invert is then\n\\begin{equation}\n-(\\partial_x -ig c(y) \\cdot F)^2- \\partial_y^2 \\end{equation}\nwhere I denoted the color spin matrices $F^a_{bc} = if_{bac}$\nand $c(y) \\cdot F = c_a (y) F^a$. The propagator is then\n\\begin{equation}\n\\int_{-\\infty}^{\\infty}\n dk e^{ik (x - x')} \\sum_{n} \\frac{\\chi_n(k,y) \\chi_n (k,y')}\n{\\epsilon_n (k)}, \\label{prop} \\end{equation}\nwhere $\\chi_n (k,x)$ and $\\epsilon_n (k)$ are solutions of\n\\begin{equation}\n [-\\partial_y^2 + (k +c(y) \\cdot F)^2 ] \\chi_n (k,y) = \\epsilon_n (k)\n\\chi_n (k,y). \\label{lan} \\end{equation}\nIn order to achieve the desired confining behaviour of the propagator\nthe sum in (\\ref{prop}) must be\n$\\sim 1\/k^2$ for $k \\rightarrow 0$. The simplest\npossibility is to assume that the lowest eigenvalue of (\\ref{lan}),\n$\\epsilon_0 (k)$, should vanish as $k^2$ for small $k$. However\nthe operator in (\\ref{lan}) is a sum of squares and does not have zero\neigenvalues for non trivial regular $c(y)$. It is also not symmetric\nin $k$ for small values of $k$ but this seems to be less of a problem.\nThe same conclusions seem to hold in 3 space dimensions. 
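The connection between a $1\/k^2$ spectral weight and linear growth of the Green's function can be illustrated on a one dimensional lattice (a numerical sketch of my own, not part of the construction): a gapless $1\/k^2$ weight gives $G(0)-G(x)$ growing linearly with separation, while any gap makes it saturate.

```python
import numpy as np

# Compare G(0)-G(x) for spectral weight 1/k^2 (gapless) and 1/(k^2+m^2)
# (gapped): the first grows linearly with x, the second saturates.
L = 4096
k = 2 * np.pi * np.fft.fftfreq(L)
khat2 = (2 * np.sin(k / 2))**2            # lattice version of k^2

def green_diff(gap):
    denom = khat2 + gap
    g = np.zeros(L)
    nz = denom > 1e-12
    g[nz] = 1.0 / denom[nz]               # zero mode dropped when gap = 0
    G = np.fft.ifft(g).real
    return G[0] - G

G0 = green_diff(0.0)                      # ~ x/2 for x << L  ("confining")
Gm = green_diff(0.04)                     # saturates beyond ~ 1/m = 5 sites
```

On the periodic lattice the gapless case gives exactly $x(L-x)\/2L$, i.e. linear growth at $x \\ll L$, whereas the gapped one approaches a constant exponentially.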
It is quite\npossible that a singular configuration $a^i$ exists\nwhich leads to a zero eigenvalue in (\\ref{lan}) at zero $k$,\nbut I was not able to find it.\n\nThe strong coupling limit of lattice QCD suggests\n that quarks in the fundamental representation are confined\nwhereas they are only\nscreened if put in the adjoint representation. This\ncrucial difference comes from the fairly simple quantum mechanics\nof the color degrees of freedom related to the matching of group\nrepresentations at neighboring lattice points.\n In our rigid gauge rotor model a similar\nsimple quantum mechanics of colors is retained. As a result the\neigenenergies of a system of static quarks will be determined by\ndifferent\ncombinations of the color components of $I^{-1}$ depending on the\nrepresentation of the quarks. E.g., as already mentioned\nin Section 3, when the quarks are taken in\nthe adjoint representation the $\\lambda$ matrices in the expressions\nfor $P$ and $Q$ in the Schr\\\"odinger equations (\\ref{sch1})\nare replaced by their adjoint counterparts $F$.\n\nIn order to find the optimal $a^i$ in a systematic way one can\nfollow a variational approach and minimize the\nground state energy of the rigid gauge rotations for fixed positions\nof the static quarks.\n This energy is given by the sum\nof the lowest eigenenergy of $H^A_{rot}$, Eq.(\\ref{hrot}), and the\ncolormagnetic energy given by the second term in (\\ref{ham}) with\nrigidly \"rotating\" $A(x)$, Eq.(\\ref{rgd}), i.e.\n\\begin{equation}\nE[a^i] = E_{rot}[a^i] - \\frac{1}{2g^2}\\int d^3x Tr [d^i,d^j]^2 \\end{equation}\nVariation of this expression gives\n\\begin{equation}\n\\partial_i f^{ij} - ig [a^i,f^{ij}] = \\frac{1}{2}\\frac{\\delta E_{rot}}\n{\\delta a^i} , \\label{mfld} \\end{equation}\nwhere $f^{ij} = (i\/g) [d^i,d^j]$.\n Eq.(\\ref{mfld}) is obviously gauge invariant.\n In the vacuum $E_{rot}$ is zero and the minimization of the second\nterm simply gives the classical equation for $a^i$ in the vacuum.\nFor a 
quark--antiquark system $E_{rot}$ is non trivial and\ndepends on the distance between the quarks. I plan to discuss\nthe solutions of the equation (\\ref{mfld}) and their relation\nto confinement elsewhere.\n\nIn the rest of this Section, as an illustration of a simple\nchoice for the rigid field $a^i$ which allows one\nto obtain some analytic results, I consider it to be diagonal,\n$a^i_{\\alpha \\beta} (x) = \\delta_{\\alpha \\beta} a^i_\\alpha (x)$.\nThe moment of inertia operator with such $a^i$ is\n\\begin{equation}\n-\\frac{1}{4g^2}\\left[d^i ,[d^i ,\\omega ] \\right] _{\\alpha \\beta} =\n-\\frac{1}{4g^2} \\left[ \\partial ^i - ig(a^i_\\alpha - a^i_\\beta )\\right] ^{2}\n\\omega_{\\alpha \\beta} . \\label{dmi} \\end{equation}\nUsing the Green's function satisfying\n\\begin{equation}\n\\left(\\partial ^i - ig(a^i_\\alpha - a^i_\\beta )\\right) ^{2}\n J_{\\alpha \\beta }(x,y) = - \\delta (x-y),\\label{grf} \\end{equation}\nand following the procedure leading to Eq. (\\ref{hrot}), one finds the rigid\ngauge rotor Hamiltonian in this case,\n\\begin{equation}\nH = \\frac{1}{4} \\int d^2 x d^2 y \\hat{G}_{\\alpha \\beta}(x) J_{\\alpha \\beta}(x,y)\n\\hat{G}_{\\beta \\alpha}(y). \\end{equation}\nThe Schr\\\"odinger equation for the static quark--antiquark wave function\n(\\ref{wfn}) has the form (\\ref{peq}) with\n\\begin{equation}\nQ_{\\alpha \\gamma}(x_0) = \\frac{1}{4} \\delta_{\\alpha \\gamma}\\left[\n\\sum_\\beta J_{\\alpha \\beta}(x_0,x_0) - \\frac{1}{N}\n\\left(2 J_{\\alpha \\alpha}(x_0,x_0) -\n\\frac{1}{N}\\sum_\\beta J_{\\beta \\beta}(x_0,x_0)\\right)\\right] \\end{equation}\n\\begin{eqnarray}\nP_{\\alpha \\beta , \\gamma \\mu }(x_0,y_0) &=&- \\frac{1}{2}\n \\delta_{\\gamma \\mu} \\delta_{\\alpha \\beta} J_{\\alpha \\gamma}(x_0,y_0) + \\\\\n&+& \\frac{1}{2N}\\delta_{\\alpha \\gamma} \\delta_{\\mu \\beta}\\left[J_{\\beta \\beta}\n(x_0,y_0) + J_{\\gamma \\gamma}(x_0,y_0)\n - \\frac{1}{N} \\sum_\\nu J_{\\nu \\nu}(x_0,y_0)\\right]. 
\\nonumber \\end{eqnarray}\nThe diagonal components of $J$ are simple Coulomb propagators independent\nof the color so that the expressions for $Q$ and $P$ can be simplified\n further, but I will not go into the details of this. Instead\nI will now consider the choice of $a^i$\nwhich corresponds to a situation much discussed in the\nliterature, that of a uniform colormagnetic field.\nI emphasize that in the present model this field is uniform in the\nintrinsic, \"body fixed\" frame. For simplicity I will\nfirst work in 2+1 dimensions and will try to extend to 3+1 in the\nnext section. I set\n\\begin{equation}\n a^i_\\alpha (x) =\n \\frac{1}{2}b_{\\alpha} \\epsilon_{ij}x^j\n\\label{cb1} \\end{equation}\nwhere the space indices $i,j$ presently run over the values 1 and 2.\nIn two space dimensions one can take $b$ diagonal in color\nsince the transformation diagonalizing it is a part of the $U$'s in\n (\\ref{rgd}).\nAn explicit expression for $J$ is easily obtained\nin this case from the known Green's function\nof a Schr\\\"odinger equation in a constant magnetic field, cf. Ref.\\cite{fey},\n\\begin{equation}\nJ_{\\alpha \\beta}(x,y) = \\frac{1}{4\\pi} e^{i(gb_{\\alpha \\beta}\/2)\n\\epsilon_{ij} x^i y^j} \\int_0^{\\infty} \\frac{ds}{\\sinh s}\ne^{-(|g b_{\\alpha \\beta}|\/4)(x-y)^2 \\coth s} \\end{equation}\nwhere $b_{\\alpha \\beta} = b_\\alpha - b_\\beta$. For $x = y$ this expression\nis independent of the color indices. It must be regulated to prevent the\ndivergence at the lower limit of the $s$ integration, e.g.\n\\begin{equation}\n \\frac{g^2 N}{2\\pi} \\int_{s_0}^{\\infty} \\frac{ds}{\\sinh s},\\end{equation}\nwhere $s_0$ is a regularization\ncutoff. Although $J(x,y)$ does not depend only on\nthe distance $|x-y|$, the Schr\\\"odinger equation\n(\\ref{peq}) with $P$ and $Q$ based on such $J$\nis translationally invariant. 
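The proper-time integral in the expression for $J$ reduces to a Bessel function: substituting $t = \\coth s$ gives $\\int_0^\\infty ds\\, e^{-a\\coth s}\/\\sinh s = K_0(a)$. This identity can be checked numerically (a sketch assuming scipy is available):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import k0

# Check: int_0^inf ds/sinh(s) * exp(-a*coth(s)) = K_0(a)
# (substitute t = coth s, giving int_1^inf e^{-a t}/sqrt(t^2-1) dt = K_0(a)).
def proper_time(a):
    val, _ = quad(lambda s: np.exp(-a / np.tanh(s)) / np.sinh(s), 0, np.inf)
    return val

avals = [0.1, 1.0, 3.0]
errs = [abs(proper_time(a) - k0(a)) for a in avals]
```

With $a = |g b_{\\alpha\\beta}|(x-y)^2\/4$ this is the $K_0$ form of $J$ quoted below.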
Shifting the coordinates by, say, a vector\n$h$ and simultaneously performing a gauge transformation of the wave function,\n$c_{\\alpha \\alpha} \\rightarrow c_{\\alpha \\alpha}\\exp\\left[i(gb_{\\alpha}\/2)\n\\epsilon_{ij}h^i (x_0^j - y_0^j)\\right]$, leaves Eq.(\\ref{peq})\ninvariant. The integral in the expression for $J(x,y)$\ncan be expressed in terms of the Bessel function\n$K_0(|g(b_{\\alpha} - b_{\\beta})|z^2\/4)$ with $z = x - y$, and for\n $|g (b_{\\alpha} - b_{\\beta})| z^2 \\rightarrow \\infty$\nit has the following asymptotic form\n\\begin{equation}\n\\frac{1}{4\\sqrt{\\pi}}(2|g(b_\\alpha - b_\\beta)|)^{-1\/4} \\exp\\left[ -\n |g(b_\\alpha - b_\\beta)| z^2\/4\\right]. \\end{equation}\nFor finite values of $g|b_{\\alpha} - b_{\\beta}|$\nit decreases as a Gaussian at large separations\n$z$. This should lead to a similar decrease of\nthe eigenvalues of (\\ref{peq}) -- an entirely unsatisfactory behavior\nas far as confinement is concerned.\nIn the next Section it will be seen that the situation may be\ndifferent in the large N limit.\n\n\\setcounter{equation}{0} \\ALTsect{ Large N Random Colormagnetic Fields.}\n\nAs mentioned in the Introduction,\nrigid gauge field rotations should be relevant\nfor QCD in the large N limit where it is expected that a\nmaster field configuration dominates the vacuum \\cite{wit}.\nAs in the case of a condensate,\nsuch a configuration cannot be just some fixed\ngauge field potential $A^i_a(x)$.\nIt must be allowed to undergo free gauge \"rotations\"\nexactly as $a^i$ in\nEq.(\\ref{rgd}) since the gauge invariance is not expected to be broken\nin the large N limit. The dynamics of these rotations cannot be \"frozen\"\nand must be described by the gauge rotor Hamiltonian considered in\n Section 2. 
These \"rotations\" induce an interaction\nbetween quarks as was shown in Section 3 for static quarks and will be\ndemonstrated for dynamical quarks in Section 6 below where it will\n also be shown that in addition\n$a^i$ appears as a background field in the Dirac operator.\n\nAnother important consideration is that for large N there is a large\nnumber of degrees of freedom operating at each space point which\nintroduces statistical elements in the theory, cf. Refs.\n\\cite{mat,ran,bor}.\nExperience with this limit for simple systems indicates that\ntwo types of gauge invariant physical operators should exist, analogoes to\nmacroscopic and microscopic observables in thermodynamics. The former\ndepend on finite (relative to N) number of dynamical variables\nand involve sums over all labels of the degrees of freedom,\ni.e. the color indices. A simple example is\n$a^i_{\\alpha\\beta}a^i_{\\beta\\alpha}$, etc. Operators without such\nsummations, e.g., $a^i_{\\alpha\\beta}$ with fixed $\\alpha$ and $\\beta$\nbelong to the second type which must be regarded as\nmicroscopic observables\n like ,e.g., a coordinate of a particle or a single\nspin variable in thermodynamic systems. The fluctuations of the macroscopic\noperators are suppressed and expectations of their products factorize at\n$N = \\infty$. This is not so for microscopic observables.\n\nOn the basis of these considerations one can adopt the following point of view.\nAfter allowing for free gauge rotations according to (\\ref{rgd}), i.e.\nafter transformation to the body-fixed frame, one should\n consider $a^i(x)$ as static\n random matrix functions described by a probability\ndistribution $P[a^i(x)]$. This distribution can be determined following\nthe ideas of the random matrix theory, cf.\n Ref.\\cite{ent}. 
To this end one should introduce the amount of information\n (negative entropy)\n\\begin{equation}\nI\\left\\{P\\left[ a \\right]\\right\\} =\n \\int D\\mu[a^i(x)] P[a^i(x)] ln P[a^i(x)] \\label{inf}\n\\end{equation}\nassociated with $P[a^i(x)]$.\nMinimizing $I\\{P[a]\\}$ subject to suitably chosen constraints on\nmacroscopic-like variables should determine the least biased distribution\n$P[a^i(x)]$. As in statistical mechanics,\nthe large N factorization should then simply appear as a consequence of\nthe central limit theorem.\n\nThere are two crucial questions which need to be answered\nin following this procedure -- what is the appropriate measure in\nthe integral (\\ref{inf}) and what are the variables which should\nbe constrained. I hope to address the general answer to\nthese questions in future work. Presently I will illustrate\nhow the procedure can be put to work for a uniform colormagnetic field,\nEq.(\\ref{cb1}).\n\n In the limit of large N\nonly the first terms in the expressions for $P$ and $Q$ above should\nbe retained and the Schr\\\"odinger equation (\\ref{peq}) for the diagonal components\nof the string amplitude becomes\n\\begin{equation}\n-2g^2\\sum_\\beta J_{\\alpha \\beta} (x_0,y_0) c_{\\beta \\beta} =\n(E - E^0_\\alpha) c_{\\alpha \\alpha}, \\label{meq} \\end{equation}\nwhere $E^0_\\alpha = 2g^2 \\sum_\\mu J_{\\alpha \\mu}(x_0,x_0)$.\nThe non diagonal string amplitudes decouple and satisfy the trivial equation\n$(E^0_\\mu + E^0_\\nu)c_{\\mu \\nu} = 2 E c_{\\mu \\nu}$, the eigenvalues of which are\nsimply the sums of the self-energies. Without a careful treatment of the long and\nshort distance regularization in the large N limit one cannot reliably\ndiscuss these eigenvalues taken separately, and I will concentrate\non Eq.(\\ref{meq}). 
Using translational invariance and\nwriting this equation for $x_0 = 0$ and $y_0 = z$ one obtains\n\\begin{equation}\n\\sum_{\\beta=1}^N \\int_0^{\\infty} \\frac{ds}{4\\pi \\sinh s}\ne^{-(|g (b_{\\alpha} - b_{\\beta})|\/4) z^2 \\coth s} c_{\\beta \\beta} =\n- \\frac{E}{2g^2} c_{\\alpha \\alpha}\\label{teq} \\end{equation}\nas the large N limit of the Schr\\\"odinger equation for a static quark--\nantiquark pair in the rigid gauge\nconfiguration corresponding to a uniform colormagnetic field in 2+1\ndimensions. One must still specify the large N scaling of the various quantities\nwhich enter this equation. Provided each term in the sum on the left hand\nside is of the same order of magnitude, I get the standard\nscaling of the coupling constant, requiring that $g^2 N$ be held fixed.\nIn the exponential of the integrand one can then extract the finite combination\n$\\bar g = g\\sqrt{N}$. The problem is then to determine the\nscaling and in general the entire distribution of\nthe field components $b_{\\alpha}\/\\sqrt{N}$. 
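Numerically, Eq.(\\ref{teq}) is just an $N \\times N$ eigenvalue problem once the components $b_\\alpha$ are drawn. A minimal sketch of my own (using the $K_0$ form of the proper-time integral, with a crude cutoff $s_0$ on the divergent diagonal entry standing in for the proper regularization):

```python
import numpy as np
from scipy.special import k0

# Kernel of Eq. (teq): off-diagonal entries (1/4pi) K_0(|g(b_a-b_b)| z^2/4);
# the divergent b_a = b_b entry is regulated by a proper-time cutoff s0,
# using int_{s0}^inf ds/sinh s = ln coth(s0/2).
def kernel_matrix(b, g, z, s0=1e-3):
    a = np.abs(g * (b[:, None] - b[None, :])) * z**2 / 4
    diag = np.log(1.0 / np.tanh(s0 / 2))
    M = np.where(a > 0, k0(np.where(a > 0, a, 1.0)), diag)
    return M / (4 * np.pi)

rng = np.random.default_rng(1)
N = 16
b = rng.standard_cauchy(N)          # Lorentzian-distributed components
M = kernel_matrix(b, g=1.0, z=2.0)
evals = np.linalg.eigvalsh(M)       # kernel eigenvalues; E = -2 g^2 * eval
```

Scanning such spectra over $z$, with the $b_\\alpha$ drawn from a chosen ensemble, is the kind of computation the dependence of the eigenvalues on the quark separation calls for.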
Regarding the behavior at\nlarge separations $z$, one notes that if\nthe limit $N \\rightarrow \\infty$ is taken first, in such a way that\nthe differences $|b_{\\alpha} - b_{\\beta}|\/\\sqrt{N}$ decrease, then the Gaussian\ndecay can possibly be prevented.\n\nI use this example to demonstrate how the ideas about the\nstatistical nature of the large N limit can be used to determine the\ndistribution of the components $b_\\alpha\/\\sqrt{N}$.\nI consider what happens with the Wilson loop\n$W(C) = \\frac{1}{N}\\langle {\\rm Tr}\\, P \\exp (ig\\oint A^i dx_i)\\rangle$ in the present theory.\nChoosing the loop perpendicular to the time axis, inserting (\\ref{rgd})\nin $W(C)$, using its gauge invariance and the explicit form (\\ref{cb1}) of\n$a^i$, one finds\n\\begin{equation}\n W(C) = \\frac{1}{N}\\sum_\\alpha e^{ig b_\\alpha S} = \\int_{-\\infty}^\\infty\n db \\rho (b) e^{igbS}\n\\end{equation}\nwhere $S$ is the area of the loop and $\\rho (b) =\n(1\/N)\\sum_{\\alpha=1}^N \\delta (b - b_\\alpha)$ is the density of the field\ncomponents. In the large N limit $\\rho(b)$ can be approximated\nby a smooth function provided the range of variation of $b$ does\nnot grow with N. Assuming this, one easily finds simple expressions for\n$\\rho (b)$, e.g. the Lorentzian\n\\begin{equation}\n\\rho (b) = \\frac{b_0 \\sqrt{N}}{\\pi (b^2 + Nb_0^2)} \\label{lor} \\end{equation}\nwhich leads to the area law dependence of the Wilson loop,\n$W(C) = \\exp (-\\bar{g}b_0 S)$. The combination $\\bar{g} b_0$ plays the role of\nthe string constant. The placing of the $N$'s in (\\ref{lor}) was chosen in such\na way as to have this constant finite for $N \\rightarrow \\infty$.\n\nThe choice (\\ref{lor}) is the simplest possible.\nAny meromorphic function $\\rho(b)$ with poles in the upper half plane\nwill give the area law with the string tension controlled by the\nposition of the pole closest to the real axis. One can also take functions\nwith other types of singularities in the upper half plane, etc. 
The simple\nchoice (\\ref{lor}) gives an area law for any $S$,\nmissing entirely the asymptotic freedom behavior\nat small $S$. One can attempt to correct this by choosing more\ninvolved expressions for $\\rho$. A much more serious problem\nis that the space oriented Wilson loop may not be a good measure\nof confining properties in a model where one has\nnever worried about Lorentz invariance.\n\nAdopting any form of the \"single component\" density $\\rho(b)$\nstill leaves the distribution of the values of $b_\\alpha$ needed\nin, e.g., Eq. (\\ref{teq}) largely undetermined. Using the\nstatistical concepts described at the beginning of this\nSection, one should view the $b_\\alpha$'s as random quantities\nand introduce their joint probability distribution\n$P(b_1, b_2,...,b_N)$, which should be such\nthat $\\rho(b)$ is reproduced but is\notherwise maximally random, i.e. contains the least amount of information.\nThe question immediately arises as to whether $\\rho(b)$ is the only\nquantity which should constrain $P(b_1,...,b_N)$ and what is the complete\nset of such constraints. In the absence of general answers\nI take $\\rho(b)$, which controls $W(C)$, as an example\nand determine the distribution $P(b_1,...,b_N)$ by\nminimizing the appropriate negative entropy (information)\nwith this constraint.\n\nThe quantities $b_\\alpha$ are eigenvalues of a\nHermitian, in general complex, matrix. The information\ncontent of a probability distribution $P(b_1,...,b_N)$ of such eigenvalues\nis a well-studied question, cf. Ref.\\cite{ent}. It is\n\\begin{equation} \\int d\\mu[b] P(b_1,...,b_N) \\ln P(b_1,...,b_N)\\label{ent}\\end{equation}\nwhere the measure is $d\\mu[b] = const \\prod_{\\alpha > \\beta} |b_\\alpha\n- b_\\beta|^2\ndb_1 db_2 \\cdots db_N$, reflecting the repulsion of the eigenvalues.
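The minimization of (\\ref{ent}) at fixed $\\rho$, carried out next, balances the eigenvalue repulsion in $d\\mu[b]$ against the background \"charge\". For the Lorentzian (\\ref{lor}) the balance works because $2\\int \\ln|b-b^\\prime| \\rho(b^\\prime) db^\\prime = \\ln (b^2 + Nb_0^2) + const$; after differentiation this is the Hilbert-transform identity $\\mathrm{P.V.}\\int du^\\prime\\, \\rho(u^\\prime)\/(u-u^\\prime) = u\/(u^2+b_0^2)$ in scaled units $u = b\/\\sqrt{N}$, which is easy to check numerically (a consistency sketch added here, not part of the original text):

```python
import numpy as np

def pv_hilbert(u, b0, h=0.004, K=500_000):
    """Principal value integral of rho(u')/(u - u') for the scaled Lorentzian
    rho(u') = b0/(pi (u'^2 + b0^2)), evaluated on a midpoint grid symmetric
    about u, so the 1/(u - u') singularity cancels pairwise."""
    k = np.arange(-K, K) + 0.5
    up = u + k * h                      # grid points u +- h/2, u +- 3h/2, ...
    rho = b0 / (np.pi * (up**2 + b0**2))
    return (rho / (u - up)).sum() * h

# force balance of the fictitious Coulomb gas
for u in (0.5, 1.2):
    print(u, pv_hilbert(u, 0.7), u / (u**2 + 0.7**2))
```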
Minimizing\n(\\ref{ent}) under the condition of a given $\\rho (b) =\n(1\/N)\\sum_\\alpha \\delta (b - b_\\alpha)$ one finds\n\\begin{equation}\nP(b_1,...,b_N) = const \\;\\exp \\left(\\sum_{\\alpha\\neq\\beta}^N \\ln|b_\\alpha -\nb_\\beta| - 2N\\sum_\\alpha^N \\int_{-\\infty}^\\infty \\ln|b_\\alpha - b^\\prime|\n\\rho (b^\\prime ) db^\\prime\\right).\\label{dis} \\end{equation}\nUsing, e.g., Eq. (\\ref{lor}) for $\\rho (b)$ this expression becomes explicitly\n\\begin{equation}\nP(b_1,...,b_N) = const\\; \\exp \\left(\\sum_{\\alpha\\neq\\beta}^N \\ln|b_\\alpha -\nb_\\beta| - N\\sum_\\alpha^N \\ln (b_\\alpha^2 +Nb_0^2) \\right).\\label{pbb} \\end{equation}\nThe constant in front of this expression must ensure the normalization of\n$P$ and can be calculated by the methods described in Ref.\\cite{meh}.\nUsing the standard interpretation of $P(b_1,...,b_N)$ as a partition\nfunction of a fictitious Coulomb gas,\none can say that the \"particles\" $b_\\alpha$\nare \"repelled\" from each other by the first\nterm in its exponential but are kept within the interval $b_0$ by the second\nterm, representing the interaction with the background \"charge\" distributed\naccording to $\\rho(b)$. The average distance $|b_\\alpha - b_\\beta| \\sim b_0\/N$\nbecomes very small in the large N limit. The Schr\\\"odinger equation (\\ref{teq}) is now\na random matrix equation with the probability distribution of its elements\ncontrolled by the $P(b_1,...,b_N)$ above. The actual numerical\nsolution of this equation is now in progress.\n\nIn a similar way one can consider a rigid gauge configuration representing\na uniform colormagnetic field in $3 + 1$ dimensions. This field in the intrinsic\nframe corresponds to the choice\n\\begin{equation}\na^i(x) = \\frac{1}{2} \\epsilon_{ijk} b^j x^k \\label{cb2} \\end{equation}\nwhere now, however, the three color matrices $b^i$ cannot in general be\nassumed diagonal.
For such a non-diagonal colormagnetic field the inversion\nof the moment of inertia operator $-(1\/4g^2)[d^i,[d^i,\\omega]]$\nrequires a solution of a matrix differential equation. This equation\nsimplifies considerably if the $b^i$'s are nonetheless restricted to be diagonal,\n$b^i_{\\alpha \\beta} = \\delta_{\\alpha \\beta} b^i_\\alpha$. Then\nthe equation (\\ref{dmi}) is still valid with the index $i$ now running from\n1 to 3. In the following equation (\\ref{grf}) for the Green's function one\nshould just replace $(b_\\alpha - b_\\beta)\\epsilon_{ij}x^j$ by\n$\\epsilon_{ijk}(b^j_\\alpha -\n b^j_\\beta)x^k$. The expression for this Green's function is known\nand one can repeat all the\nsteps leading to the static quark--antiquark\nSchr\\\"odinger equation which is the analog of Eq.(\\ref{teq}) in 3+1 dimensions.\n\nTurning again to the Wilson loop one finds in this case\n$\\oint_C a^i dx_i = b^j S_j$ with $S_j = (1\/2) \\epsilon_{jki}\n\\oint_C x^k dx_i$ so that $\\left( \\oint_C a^i dx_i \\right)_\\alpha =\nb_\\alpha S \\cos \\theta _\\alpha$ (no sum over $\\alpha$) where\n$S = \\sqrt{\\sum_i (S^i)^2}$, $b_\\alpha = \\sqrt {\\sum_i (b^i_\\alpha)^2}$,\nand $\\theta_\\alpha$ is the angle between the vectors $b^i_\\alpha$ and $S^i$\nat a given $\\alpha$. $S$ is the area of the loop when it is planar and\nis related to the minimal area in general.\nThe Wilson loop is\n\\begin{eqnarray}\nW(C) & = & \\frac{1}{N}<\\sum_\\alpha e^{igSb_\\alpha \\cos \\theta_\\alpha}>\n = \\nonumber \\\\\n & = & \\frac{1}{N}\\sum_\\alpha 2\\pi \\int _{-1}^{1} d(\\cos \\theta_{\\alpha})\ne^{igSb_\\alpha \\cos \\theta_\\alpha} = \\label{wl3} \\\\\n & = & \\frac{4\\pi}{gS} \\int_0^{\\infty} \\frac{db}{b} \\rho(b) \\sin (gbS),\n \\nonumber \\end{eqnarray}\nwhere $\\rho (b) = (1\/N) \\sum_\\alpha \\delta (b - b_\\alpha)$ is the density\nof the positive lengths of the color components of the vector\n$b^i$.
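The two reductions in (\\ref{wl3}) (the angular average, which produces a $\\sin(gbS)\/gbS$ factor per color component, and the final $b$-integral) are easy to spot-check numerically; here this is done in scaled units, with a squared-Lorentzian density of the same shape as Eq.~(\\ref{rh2}) below (an illustrative sketch, not part of the original text):

```python
import numpy as np

def angular_avg(t):
    """2 pi * integral over cos(theta) in [-1,1] of e^{i t cos(theta)},
    the middle line of (wl3); analytically this equals 4 pi sin(t)/t."""
    c = np.linspace(-1.0, 1.0, 200_001)
    return 2 * np.pi * np.exp(1j * t * c).sum().real * (2.0 / 200_001)

def wilson_3d(t, b0, bmax=300.0, n=600_000):
    """(4 pi / t) * integral (db/b) rho(b) sin(t b) for the scaled squared
    Lorentzian rho(b) = 4 b0 b^2 / (pi (b^2 + b0^2)^2);
    analytically this equals 4 pi e^{-b0 t}, i.e. the area law."""
    b = np.linspace(bmax / n, bmax, n)
    rho = 4 * b0 * b**2 / (np.pi * (b**2 + b0**2) ** 2)
    return (4 * np.pi / t) * (rho * np.sin(t * b) / b).sum() * (bmax / n)

print(angular_avg(1.3), 4 * np.pi * np.sin(1.3) / 1.3)
print(wilson_3d(1.0, 0.7), 4 * np.pi * np.exp(-0.7))
```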
In (\\ref{wl3}) I have performed the angle averaging which must be\npresent in the vacuum wavefunction.\nOne can easily choose $\\rho (b)$, e.g. the squared Lorentzian\n\\begin{equation}\n\\rho(b) = \\frac{4 b_0 b^2\\sqrt{N}}{\\pi (b^2 + Nb_0^2)^2} \\label{rh2} \\end{equation}\nwhich gives the area law $W(C) = 4\\pi\\exp (-\\bar{g}b_0S)$. This choice is\nagain not unique and gives the area law for any $S$. It has a power-like\ntail, as opposed to the perturbative Gaussian.\n\nThe statistical arguments for finding the entire distribution\nof $b^i_\\alpha$ can be used in the $3+1$ dimensional\ncase as well, with the difference that in this case the density of the lengths\nof the vectors $b^i_\\alpha$ is fixed by, e.g., Eq.(\\ref{rh2}), and their directions\nare distributed isotropically.\n\n\\setcounter{equation}{0} \\ALTsect{Dynamic Quarks}\n\nDynamic quarks can be easily included in the rigid gauge rotation model.\nFor this I define quark fields in the \"rotating frame\", $q = U\\hat{q}$,\nuse Eq. (\\ref{rgd}) in the second term of the QCD Hamiltonian (\\ref{ham})\nand replace the first term in it by (\\ref{hrot}). Using moreover\nthe Gauss law constraint (\\ref{gl}) I can write the original\nQCD Hamiltonian (\\ref{ham}) in the rigid gauge rotor limit as\nexpressed in terms of the quark fields only,\n\\begin{equation}\nH_{rot} = \\frac{1}{2}\\int d^3 x d^3 y\n\\hat{\\rho}_a (x) I_{ab}^{-1}(x,y,[a^i]) \\hat{\\rho}_b (y)\n+\\int d^3 x \\hat{q}^{+}(x)[\\alpha^{i}(p^{i}-g\na^{i}(x))+\\beta m]\\hat{q}(x), \\label{hrt1} \\end{equation}\nwhere $\\hat{\\rho}_a = \\hat{q}^{+}\\lambda ^a\\hat{q}$\nare the color quark densities in the rotating frame.\nThe Hamiltonian $H_{rot}$ describes quarks with gauge strings attached to them,\ni.e. $q(x)$ are multiplied by $U^{+}(x)$. They move in an external\ncolormagnetic field described by the vector potential $a^i(x)$ and\ninteract\nvia an instantaneous interaction $I^{-1}_{ab}(x,y,[a^i])$, also\ndepending on $a^i$ via Eq.(\\ref{mom}).
The simultaneous appearance of\nthe rigid gauge field configuration both as a background and as \"inducing\"\nthe quark-quark interaction is ultimately a consequence of the\ngauge invariance which requires that non-dynamical rigid gauge fields\nappear only in the form (\\ref{rgd}).\n\n$H_{rot}$ is gauge invariant since the operators $\\hat{q}$ are. Moreover\nthis Hamiltonian should only be used in the color singlet sector of the\ntheory since I have used the Gauss law to derive it.\n\nFor vanishing $a^i$, the Hamiltonian $H_{rot}$ describes free quarks interacting\nCoulombically. For a general nonzero $a^i$,\n$H_{rot}$ should be regarded as the QCD analogue\nof the QED Hamiltonian in which only the instantaneous Coulomb\ninteraction between the charges has been retained.\nIndeed the analogue of the rigid gauge \"rotations\" (\\ref{rgd}) in QED\nis $A^i(x,t) = a^i (x) + \\partial ^i \\chi(x,t)$\nwith abelian $U = \\exp (ig\\chi(x,t))$, fixed rigid $a^i (x)$ and dynamical\n$\\chi (x,t)$. Repeating the steps leading to (\\ref{hrt1}), one derives\nin the QED case the Coulomb interaction between the charge densities.\n\nRegarding the possible role of $a^i(x)$ as the master field in the large N\nlimit,\none has in $H_{rot}$ a way in which this field should enter the quark\nsector of the theory, i.e. serving both as a background field and,\nperhaps somewhat surprisingly, also controlling\nthe quark interaction. Following the developments of Section 5 this\nfield should be regarded as random.
The appearance of a random interaction\nbetween the quarks means that a possible mechanism for confinement\nof dynamical quarks in the large N limit could be\nrelated to the localization of their relative distances.\nThe possible connection between\nconfinement and localization has already been mentioned\nin the past, but usually in the context of a random background field and not\nwith random interactions such as appear in the present model.\n\nThe Hamiltonian (\\ref{hrt1}) takes exact account of the gauge symmetry.\nOne must however also worry about global symmetries.\nFor any N the Hamiltonian $H_{rot}$ may serve as a possible basis for various\nphenomenological\ndevelopments. Both for this purpose and conceptually, one must face the\nissue that allowing for an arbitrary $x$-dependence of various color components\nof $a^i$ in (\\ref{hrt1}) leads to breaking\nof important symmetries such as translational, rotational, Lorentz,\ntime reversal, and various discrete symmetries. Of course the breaking\nof continuous space symmetries is not uncommon in phenomenology,\ne.g. the bag model, the quark potential model, the Skyrme model, etc.\nSymmetries can be restored by\nconsidering all configurations related by the symmetry transformations and integrating\nover them using collective coordinates.
This of course\napplies to both continuous and discrete symmetries.\nIn the absence of guidance from the\nsymmetries, a more dynamical criterion for fixing\n$a^i$ seems to be the condition of lowest energy.\nThis leads naturally to a generalization of the variational approach\nof Section 4 in which the variational energy\nshould be replaced by the ground state energy $E_0[a^i(x)]$\nof (the suitably regularized) $H_{rot}$ found for a given $a^i (x)$.\nShould the solution $a^i$\nbreak a global symmetry, the symmetry \"images\" of this $a^i$\nwill also be solutions and one should \"sum\" over all of them in a standard\nway, thereby restoring the symmetry.\nThis variational approach may be combined with the Hartree-Fock\nmethod, which should allow one to calculate $E_0 [a^i(x)]$ approximately. The\nHartree-Fock approximation for fermions was shown to be consistent with\nthe large N approximation, cf. Ref.\\cite{sal}.\nIn a combined approach one should form for fixed $a^i$\nan expectation value of\n$H_{rot}$ with respect to a trial state of a chosen\ncolor singlet configuration of quarks\n(e.g., vacuum, baryon, etc.) which must be a product state, i.e.
such that\nthe expectation values with respect to this state have a noninteracting,\nfactorized form, i.e.,\n\\begin{eqnarray}\n <\\hat{\\rho}_a (x) \\hat{\\rho}_b (y)> &=& \\lambda^a_{\\alpha \\beta}\n\\lambda^b_{\\gamma \\delta}(\n<\\hat{q}_\\alpha^{+}(x)\\hat{q}_\\beta(x)><\\hat{q}_\\gamma^{+}(y)\\hat{q}_\\delta(y)> -\\nonumber\n\\\\\n&-&<\\hat{q}_\\alpha^{+}(x)\\hat{q}_\\delta(y)><\\hat{q}_\\gamma^{+}(y)\\hat{q}_\\beta(x)>).\n\\label{hf} \\end{eqnarray}\nFor a global color singlet state\n$<\\hat{q}_\\alpha^{+}(x)\\hat{q}_\\beta(y)> = \\delta_{\\alpha \\beta}\\rho (y,x)$, with\n$\\rho(x,y) = \\frac {1}{N}\\sum_{\\gamma}<\\hat{q}_\\gamma^{+}(y)\\hat{q}_\\gamma(x)>$,\nand therefore\n\\begin{equation}\n<\\hat{\\rho}_a (x) \\hat{\\rho}_b (y)> = -2\\delta_{ab}\\rho(x,y)\\rho(y,x).\n\\label{fck} \\end{equation}\n\nFollowing the standard Hartree-Fock routine, cf. Ref.\\cite{hfr},\nthe single quark density matrix can be expanded\nin terms of a complete set of functions, i.e., $\\rho (x,y) =\n\\sum_n f_n \\psi_n (x) \\psi_n^* (y)$. At this stage it is customary\nto add the so-called Slater determinant condition,\nwhich means that the single quark states $\\psi_n$ have sharp occupations\n$f_n = 0$ or $1$. This condition and its\ncompatibility with the large N limit and Lorentz invariance were discussed in\nRef.\\cite{bor}.
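The color algebra behind (\\ref{fck}) involves only two facts: the Hartree (direct) term vanishes because ${\\rm tr}\\, \\lambda^a = 0$, and the exchange term produces the coefficient $-2\\delta_{ab}$ because ${\\rm tr}(\\lambda^a \\lambda^b) = 2\\delta_{ab}$. Both can be verified directly with the Gell-Mann matrices (a small numerical check for three colors, added here for illustration):

```python
import numpy as np

# the eight Gell-Mann matrices of SU(3)
l = np.zeros((8, 3, 3), dtype=complex)
l[0][0, 1] = l[0][1, 0] = 1
l[1][0, 1] = -1j; l[1][1, 0] = 1j
l[2][0, 0] = 1;   l[2][1, 1] = -1
l[3][0, 2] = l[3][2, 0] = 1
l[4][0, 2] = -1j; l[4][2, 0] = 1j
l[5][1, 2] = l[5][2, 1] = 1
l[6][1, 2] = -1j; l[6][2, 1] = 1j
l[7] = np.diag([1, 1, -2]) / np.sqrt(3)

for a in range(8):
    assert abs(np.trace(l[a])) < 1e-12            # traceless: Hartree term drops
    for b in range(8):
        tr = np.trace(l[a] @ l[b]).real
        assert abs(tr - 2.0 * (a == b)) < 1e-12   # tr = 2 delta_ab -> the -2 delta_ab in (fck)
print("color algebra of (fck) verified")
```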
Adopting this condition, using (\\ref{fck}) in forming the\nexpectation\n$E_{HF}[\\rho(x,y),a^i(x)] = \\langle H_{rot} \\rangle$, and varying with respect to\nthe $\\psi_n(x)$'s with constraints on their\nnormalization, one obtains the Hartree-Fock equation\n\\begin{equation}\n[\\alpha^{i}p^{i}+\\beta m]\\psi_n(x)\n- 2\\int d^3 y I_{aa}^{-1}(x,y,[a^i]) \\rho (x,y)\\psi_n(y)\n = \\epsilon_n \\psi_n(x), \\label{feq} \\end{equation}\nwhich appears here as a self-consistent\nDirac equation for the quark wave functions $\\psi_n(x)$.\nNote that in this equation $a^i$ has disappeared from the\nDirac operator and enters only through the interaction $I_{aa}^{-1}(x,y)$.\nSolutions of (\\ref{feq}) determine the optimum $\\rho$ and thus\n$E_{HF}$ for a given $a^i$.\n\nA Hartree-Fock equation similar to\n(\\ref{feq}) has been investigated in 1+1 dimensional QCD \\cite{bor},\nalthough\nthere, for obvious reasons, the field $a^i$ was absent and $I^{-1}(x,y)$ was\nsimply $|x - y|$. Both\nthe 't Hooft meson spectrum and the baryon soliton solutions\nwere found in this approach\nin 1+1 dimensions. For small quark masses the baryon was the realization of the\nskyrmion described in the quark language. If\nsuccessful, the approach based on Eq.(\\ref{feq}) could lead to similar\nresults in 2+1 and 3+1 dimensions.\n\n\n\\setcounter{equation}{0} \\ALTsect*{Acknowledgement}\nUseful discussions with S. Elizur, S. Finkelstein, J. Goldstone, K. Johnson,\nA. Kerman, F. Lenz, M. Milgrom, J. Negele, J. Polonyi, E. Rabinovici,\nS. Solomon and H. Weidenm\\\"uller are acknowledged gratefully. Special\nacknowledgement goes to A. Schwimmer for patient teaching and\nanswering my questions and for critical reading of the manuscript. Parts of\nthis work were done during visits to the Max-Planck-Institut f\\\"ur Kernphysik\nin Heidelberg and I wish to thank H.
Weidenm\\\"uller for warm hospitality\n there.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe hadronic decays of heavy quarkonia below the threshold for heavy \nmeson pair production are understood to proceed predominantly via\nthree intermediate gluons. One of the gluons can be replaced by a \nphoton with a penalty of order the ratio of coupling constants, \n$\\alpha \/ \\alpha_s$. Such exclusive radiative decays of the heavy vector\nmesons $J\/\\psi$\\ and $\\Upsilon$\\ have been the subject of many experimental and\ntheoretical studies. For the experimenter, the final states from\nradiative decays are relatively easy to identify as they have a high\nenergy photon, a low multiplicity of other particles, and low\nbackground. Theoretically, the radiative decays of heavy quarkonia\ninto a single light hadron provide a particularly clean environment to \nstudy the conversion of gluons into hadrons, and thus their study is a\ndirect test of QCD. $\\Uos \\to \\gamma \\eta^{\\prime}$\\ is one such candidate channel. This\ndecay channel has been observed to be produced in the $J\/\\psi$\\\ncharmonium system (the $1^3\\text{S}_1$ state of $c \\bar{c}$) with\n$\\mathcal{B}(J\/\\psi\\to\\gamma \\eta^{\\prime}) = (4.71\\pm0.27)\\times10^{-3}$~\\cite{PDG}. \nNaive scaling predicts that decay rates for radiative $\\Upsilon(1\\text{S})$ decays\nare suppressed by the factor \n$(q_{b}m_{c}\/q_{c}m_{b})^{2}$ $\\approx 1\/40$ \nwith respect to the corresponding $J\/\\psi$\\ radiative decays.\nThis factor arises because the quark-photon coupling is proportional\nto the electric charge, and the quark propagator is roughly\nproportional to $1\/m$ for low momentum quarks. Taking into account the\ntotal widths~\\cite{PDG} of $J\/\\psi$\\ and $\\Upsilon(1\\text{S})$, the branching fraction\nof a particular $\\Upsilon(1\\text{S})$ radiative decay mode is expected to be around\n0.04 of the corresponding $J\/\\psi$\\ branching fraction. 
However, the CLEO \nsearch~\\cite{CleoEtaPrimeStudy} for $\\Uos \\to \\gamma \\eta^{\\prime}$\\ in $61.3\\,\\rm pb^{-1}$ of data\ncollected with the CLEO~II detector found no signal in this mode, and\nresulted in a 90\\% confidence level \nupper limit of $1.6\\times10^{-5}$ for the branching\nfraction $\\Uos \\to \\gamma \\eta^{\\prime}$, an order of magnitude smaller than this\nexpectation.\n\nThe two-body decay \\UpsToGammaFTwo\\ has been\nobserved~\\cite{CleoF2_1270} in the older CLEO~II $\\Upsilon(1\\text{S})$ analysis,\nand this observation has been confirmed~\\cite{LuisAnalysis,Holger}, with much\ngreater statistics, in CLEO~III data. \nThe measurement $\\mathcal{B}(\\Upsilon(1\\text{S})\\to\\gamma f_2(1270)) = (10.2\\pm1.0) \\times10^{-5}$, \nfrom the combination of the two CLEO~III measurements, \nis $0.074\\pm0.010$ times the corresponding $J\/\\psi$\\\ndecay mode, showing a deviation of roughly a factor of two from the\nnaive scaling estimates.\nIn radiative $J\/\\psi$\\ decays the\nratio of $\\eta^{\\prime}$\\ to \\fTwo\\ production is $3.4\\pm0.4$. If the same ratio\nheld in $\\Upsilon(1\\text{S})$, the $\\eta^{\\prime}$\\ channel would be clearly visible.\nThe channel $\\Uos \\to \\gamma \\eta$\\ has\nreceived significant theoretical attention. \nThis channel has been observed in $J\/\\psi$\\ decays~\\cite{PDG}\nwith the branching fraction of $(9.8\\pm1.0)\\times 10^{-4}$, a value\nsmaller by a factor of \nfive\nthan $\\mathcal{B}(J\/\\psi\\to\\gamma \\eta^{\\prime})$. 
\nThe previous CLEO search of $\\Upsilon(1\\text{S})$ decays produced an upper limit of\n$2.1\\times10^{-5}$ at the 90\\% confidence level \nfor this mode~\\cite{CleoEtaStudy}.\n\n\nSeveral authors have tried to explain the lack of signals in radiative \n$\\Upsilon(1\\text{S})$ decays into pseudoscalar mesons, using a variety of models\nwhich produce branching fraction predictions of \n$10^{-6}\\ \\text{to}\\ 10^{-4}$.\nEmploying the Vector \nMeson Dominance Model (VDM), Intemann~\\cite{Intemann} predicts the\nbranching fractions for the heavy vector meson radiative decay into\nlight pseudoscalar mesons. Using the mixing mechanism of $\\eta$,\n$\\eta^{\\prime}$\\ with the as-yet-unobserved pseudoscalar resonance $\\eta_{b}$,\nChao~\\cite{KTChao} first calculated the mixing angle\n$\\lambda_{\\eta\\eta_{b}}$ in order to estimate the radiative branching\nfractions. Baier and Grozin~\\cite{BaierGrozin} showed that for light\nvector mesons (such as $J\/\\psi$) there might be an additional ``anomaly''\ndiagram that contributes significantly to the radiative decays. Noting\nthat VDM has no direct relation to QCD as the fundamental theory of\nstrong interactions, and referring to~\\cite{Intemann},\nMa tries to address the problem by using factorization at tree level\nwith NRQCD matrix elements to describe the heavy vector meson portion\nmultiplied by a set of twist-2 and twist-3 gluonic distribution\namplitudes~\\cite{JPMa}.\n\n\n\\section{Detector and Data Sample}\nThis study is based upon data collected by the CLEO III detector\nat the Cornell Electron Storage Ring (CESR). CLEO III is\na versatile multi-purpose particle detector described fully\nelsewhere~\\cite{cleoiii-detector}. 
Centered on the $e^+e^-$ interaction\nregion of CESR, the inner detector consists of a silicon strip vertex\ndetector and a wire drift chamber measuring the momentum vectors and\nthe ionization energy losses ($dE\/dx$) of charged tracks based on their\ntrajectories in the presence of a 1.5T solenoidal magnetic field. The\nsilicon vertex detector and the drift chamber tracking system together\nachieve a charged particle momentum resolution of 0.35\\% (1\\%) at\n1\\,\\rm GeV\/$c$\\ (5\\,\\rm GeV\/$c$)\nand a fractional $dE\/dx$\\ resolution of 6\\% for hadrons and 5\\% for electrons. \nBeyond the drift chamber is a Ring Imaging Cherenkov Detector, RICH,\nwhich covers 80\\% of the solid angle \nand is used to further identify charged particles by giving for each\nmass hypothesis the fit likelihood to the measured Cherenkov radiation\npattern. After the RICH is a CsI crystal calorimeter that covers 93\\%\nof the solid angle, allowing both photon detection and electron\nsuppression. The calorimeter provides an energy \nresolution of 2.2\\% (1.5\\%) for 1\\ \\rm GeV\\ (5\\ \\rm GeV) photons. Beyond the calorimeter \nis a superconducting solenoidal coil providing the magnetic field,\nfollowed by iron flux return plates with wire chambers interspersed \nat 3, 5, and 7 hadronic interaction lengths (at normal \nincidence) to provide\nmuon identification.\n\nThe data sample has an integrated luminosity of $1.13\\,\\rm fb^{-1}$ taken at the\n$\\Upsilon(1\\text{S})$ energy $\\sqrt{s} = 9.46\\ \\rm GeV$, which corresponds to \n$N_{\\Upsilon(1\\text{S})} = 21.2\\pm0.2$ million $\\Upsilon(1\\text{S})$ decays~\\cite{CLEO-III-NUPS}. \nThe efficiencies for decay chain reconstruction were obtained from\nMonte \nCarlo simulated radiative events generated with the \n($1+\\cos^{2}\\theta$) angular distribution expected for decays \n$\\Upsilon(1\\text{S}) \\to \\gamma+\\text{pseudoscalar}$. 
The Monte Carlo simulation of\nthe detector response was based upon GEANT~\\cite{GEANT}, and\nsimulation events were processed in an identical fashion to data.\n\n\n\n\\newcommand{$\\sigma_{\\pi,i}$}{$\\sigma_{\\pi,i}$}\n\\newcommand{$\\sigma^{2}_{\\pi}$}{$\\sigma^{2}_{\\pi}$}\n\\newcommand{$\\sum_{i}^{3}\\sigma^{2}_{\\pi,i}$}{$\\sum_{i}^{3}\\sigma^{2}_{\\pi,i}$}\n\n\\newcommand{$S_{\\pi,i}$}{$S_{\\pi,i}$}\n\\newcommand{$S^{2}_{\\pi}$}{$S^{2}_{\\pi}$}\n\\newcommand{$\\sum_{i}^{3}S^{2}_{\\pi,i}$}{$\\sum_{i}^{3}S^{2}_{\\pi,i}$}\n\n\\newcommand{$(m_{\\gamma\\gamma} - m_{\\pi^0})\/\\sigma_{\\gamma\\gamma}$}{$(m_{\\gamma\\gamma} - m_{\\pi^0})\/\\sigma_{\\gamma\\gamma}$}\n\\newcommand{$((M_{reconstructed}-M_{\\pi^0})\/resolution)$}{$((M_{reconstructed}-M_{\\pi^0})\/resolution)$}\n\\newcommand{\\SDeDxDefn}{\\text{S}_{dE\/dx} \\equiv\n(dE\/dx(\\text{measured}) - dE\/dx(\\text{expected}))\/\\sigma_{dE\/dx}}\n\\newcommand{\\text{S}_{dE\/dx}}{\\text{S}_{dE\/dx}}\n\\newcommand{$\\text{S}_{dE\/dx}^{2}$}{$\\text{S}_{dE\/dx}^{2}$}\n\\section{Event Selection and Results}\\label{sec:event-selection}\nIn our search for $\\Uos \\to \\gamma \\eta$\\ and $\\Uos \\to \\gamma \\eta^{\\prime}$, we reconstruct $\\eta$\nmesons in the modes $\\eta \\to \\gamma \\gamma$, $\\eta \\to \\pi^{+} \\pi^{-} \\pi^{0}$, and $\\eta \\to \\pi^{0} \\pi^{0} \\pi^{0}$; the\nlatter two will collectively be referred to as $\\eta\\to3\\pi$. We\nreconstruct the\n$\\eta^{\\prime}$\\ meson in the mode $\\eta \\pi^+ \\pi^-$ with\n$\\eta$ decaying in any of the above modes, and in addition, \nthe mode $\\eta^{\\prime} \\to \\gamma \\rho^0$, where $\\rho^0 \\to \\pi^+\\pi^-$. \nFrom the CLEO~II studies~\\cite{CleoEtaPrimeStudy,CleoEtaStudy} we\nexpected five out of the seven modes under investigation to be relatively\nbackground free and so we employ minimal selection\ncriteria to maximize sensitivity and minimize possible systematic\nbiases. 
\nThe other two, $\\eta \\to \\gamma \\gamma$ and $\\eta^{\\prime} \\to \\gamma \\rho^0$, have large branching\nfractions, but also large backgrounds, and so our event selection for these\nmodes aims to decrease the background with a corresponding loss of\nefficiency. \n\nOur general analysis strategy is to reconstruct the complete decay\nchain, ensuring that none of the constituent tracks or showers have\nbeen used more than once, then kinematically constrain the intermediate\n$\\pi^{0}$\\ and $\\eta$ meson candidates to their nominal masses~\\cite{PDG},\nand finally require the event to be consistent with having the 4-momentum of the\ninitial $e^+e^-$ system. The problem of multiply-reconstructed $\\Upsilon(1\\text{S})$ candidates in an\nevent, of varying severity from mode to mode, is dealt with \nby selecting the combination with the lowest $\\chi^{2}_{\\mathrm{Total}}$, the sum of\nthe chi-squared of the 4-momentum constraint ($\\chi^{2}_{\\text{P}4}$) and the chi-squared values of\nall the mass-constraints involved in a particular decay chain. For\nexample, there are four mass-constraints involved in the decay chain\n$\\Uos \\to \\gamma \\eta^{\\prime};\\EtaThreePZ$: three $\\pi^{0}$\\ mass-constraints and one $\\eta$\nmass-constraint. \nThe mode $ \\Uos \\to \\gamma \\eta; \\EtaThreePZ $ is an exception, in which we\npreferred to accept the $\\eta \\to \\pi^{0} \\pi^{0} \\pi^{0}$ candidate having the lowest\n$S^{2}_{\\pi}$\\ $\\equiv$ $\\sum_{i}^{3}S^{2}_{\\pi,i}$, with $S_{\\pi,i}$\\ $\\equiv$\n$(m_{\\gamma\\gamma} - m_{\\pi^0})\/\\sigma_{\\gamma\\gamma}$\\ of the \\textit{i}th $\\pi^{0}$ candidate. \nThe yield is obtained by counting the number of final-state $\\eta$ or\n$\\eta^{\\prime}$\\ candidates within our acceptance mass window, defined as the\ninvariant mass region centered around the mean value and providing\n98\\% signal acceptance as determined from signal Monte Carlo.
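The arbitration rule above (keep the single candidate with the smallest $\\chi^{2}_{\\mathrm{Total}}$) can be sketched schematically; the candidate record below is hypothetical, with illustrative field names rather than anything from the actual analysis code:

```python
def chi2_total(cand):
    # chi2_Total = chi2_P4 + sum of the mass-constraint chi2's of the chain
    return cand["chi2_p4"] + sum(cand["chi2_mass"])

def best_candidate(candidates):
    """Keep the one Upsilon(1S) candidate with the lowest chi2_Total."""
    return min(candidates, key=chi2_total)

# toy example: three candidates for a gamma eta', eta -> 3 pi0 chain
# (three pi0 mass-constraints plus one eta mass-constraint each)
cands = [
    {"chi2_p4": 40.0, "chi2_mass": [1.2, 0.8, 2.0, 1.1]},
    {"chi2_p4": 12.0, "chi2_mass": [0.5, 3.1, 0.9, 0.7]},
    {"chi2_p4": 25.0, "chi2_mass": [0.2, 0.4, 0.6, 0.3]},
]
print(chi2_total(best_candidate(cands)))   # 17.2
```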
Whenever\npossible, an event vertex is calculated using the information from\nthe charged tracks, and the 4-momentum of the photon candidates is then\nrecalculated, assuming that the showers originate from the event\nvertex rather than the origin of the CLEO coordinate system. This produces\nan improvement in the $\\eta$ and $\\eta^{\\prime}$\\\ncandidates' invariant mass resolution of roughly 10\\%, leading to a slight\nincrease in the sensitivity of the measurement.\n\nThe CLEO~III trigger~\\cite{cleoiii-trigger} relies upon two components:\n(1) the tracking-based ``axial'' and ``stereo'' triggers derived from\nthe signals on the 16 axial layers of the drift chamber, and the signals\nregistered on the chamber's 31 stereo layers, and (2) the calorimeter-based\ntrigger derived from the energy deposition in the CsI crystal\ncalorimeter. The events for the ``all neutral'' modes $\\Uos \\to \\gamma \\eta; \\EtaGG$\\ and\n$ \\Uos \\to \\gamma \\eta; \\EtaThreePZ $ are collected by the calorimeter-based trigger\ncondition requiring two high energy back-to-back showers.\nWe demand that triggered events meet the\nfollowing analysis requirements: (a) a high energy calorimeter shower not\nassociated with a charged track, having a lateral profile consistent\nwith being a photon, and having a measured energy greater than\n4.0\\ \\rm GeV\\ must be present; \n(b) there must be the correct\nnumber of pairs of oppositely charged, \ngood quality tracks with usable $dE\/dx$\\ information. \nThe efficiency of these requirements is more than 60\\% in\nmodes involving charged tracks and approximately 54\\% and 45\\% \nfor cases where $\\eta \\to \\gamma \\gamma$ and $\\eta\\to 3\\pi^{0}$, respectively.\n\n\nThe photon candidates we use in forming $\\pi^{0}$\\ and $\\eta \\to \\gamma \\gamma$ candidates\nhave minimum energy depositions of 30\\ \\rm MeV\\ and 50\\ \\rm MeV,\nrespectively. 
All photon candidates are required not to be associated\nwith charged tracks, and at least one of the photon candidates of \neach pair must have\na lateral profile consistent with that expected for a photon. The\nphoton candidates we use in reconstructing the $\\eta$ meson in the\n$\\gamma \\gamma$\\ mode must be detected either in the fiducial barrel or\nthe fiducial endcap\\footnote{The fiducial regions of the barrel and \nendcap are defined by \n$|\\cos(\\theta)|<0.78$ and $0.85<|\\cos(\\theta)|<0.95$,\nrespectively; the region between the barrel fiducial region and the\nendcap fiducial region is not used due to its relatively poor\nresolution. \n} calorimeter region only. \nThese candidates are then kinematically\nconstrained to the nominal meson mass, the exception being $\\Uos \\to \\gamma \\eta; \\EtaGG$,\nwhere no mass constraint was applied to the $\\eta$ candidate, because\nwe examine $m_{\\gamma\\gamma}$ in this mode to determine our yield.\n\nThe $\\eta$ candidates in the mode $\\pi^+\\pi^-\\pi^0$ are built by first\nforcing pairs of oppositely charged quality tracks to originate from a\ncommon vertex. The $\\pi^{0}$\\ candidate having invariant mass within\n$7\\sigma_{\\gamma\\gamma}$ of the nominal $\\pi^{0}$ mass is then added to complete the reconstruction\nof $\\eta \\to \\pi^{+} \\pi^{-} \\pi^{0}$ candidates. The charged tracks are required to be\nconsistent with being pions by adding the pion-hypothesis $\\SDeDxDefn$\nin quadrature for the two tracks and requiring the sum of $\\text{S}_{dE\/dx}^{2}$\\ to be\nless than 16. \n\nIn the case of $\\eta \\to \\pi^{0} \\pi^{0} \\pi^{0}$, the $\\eta$ candidate is simply built by\nadding three different $\\pi^{0}$\\ candidates, where no constituent photon\ncandidate contributes more than once in a candidate $\\eta \\to \\pi^{0} \\pi^{0} \\pi^{0}$\nreconstruction. The $\\pi^{0}$\\ candidates are selected by requiring\n$S_{\\pi}<10.0$.
In order to increase the efficiency in this mode, \nan exception was made to the fiducial region requirement, and\nphotons in the gap between the barrel and endcap fiducial regions\nwere allowed.\n\n\n\\subsection{The Decay $\\Upsilon\\to\\gamma\\eta ,\\eta\\to 3\\pi$}\nThe $\\Upsilon$ candidate in the mode $\\gamma \\eta$ is\nformed by combining a high-energy photon ($E>4\\ \\rm GeV$) with the $\\eta$\ncandidate, requiring that this photon is not a daughter of the $\\eta$\ncandidate. The $\\Upsilon$ candidate is then subjected to the\n4-momentum constraint of the initial $e^+e^-$ system. In the case of\n$\\eta\\to3\\pi$, the ambiguity among multiply-reconstructed $\\Upsilon$ candidates is\nresolved by selecting only one candidate. \nFor $\\eta \\to \\pi^{+} \\pi^{-} \\pi^{0}$, we select the candidate with the lowest $\\chi^{2}_{\\mathrm{Total}}$,\nthe sum of the chi-squared of the 4-momentum constraint\nand the chi-squared of the mass-constraint on the $\\pi^{0}$\\ candidate.\nFor $\\eta \\to \\pi^{0} \\pi^{0} \\pi^{0}$, we select the candidate with the smallest $S^{2}_{\\pi}$.\nThe selected $\\Upsilon$ candidate is further required\nto satisfy the 4-momentum consistency criterion, restricting \n$\\chi^{2}_{\\text{P}4}$\\ $<100$ for $\\eta \\to \\pi^{+} \\pi^{-} \\pi^{0}$ and a less stringent cut of 200\nfor $\\eta \\to \\pi^{0} \\pi^{0} \\pi^{0}$ measurements.\nIn addition, we limit the number of reconstructed calorimeter showers \nfor the mode $ \\Uos \\to \\gamma \\eta; \\EtaThreePZ $ to minimize backgrounds such as\n$e^{+} e^{-} \\to \\gamma \\phi$\\ with \\phiKsKl, without jeopardizing the signal efficiency.\n\nFrom Monte Carlo simulations, the overall reconstruction efficiencies,\n$\\epsilon_i$, for each channel are determined to be $(28.5\\pm4.3)\\%$\nand $(11.8\\pm1.9)\\%$ for the decay chains $\\Upsilon\\to\\gamma\\eta,\\eta \\to \\pi^{+} \\pi^{-} \\pi^{0}$ and\n$\\Upsilon\\to\\gamma\\eta, \\eta \\to \\pi^{0} \\pi^{0} \\pi^{0}$, respectively.
\nThe uncertainties in the efficiency\ninclude the Monte Carlo samples' statistical uncertainty and our\nestimate of possible systematic biases, which are discussed further\nin Section~\\ref{sec:sys-lim}.\n\nWe find no candidate events within our acceptance invariant mass\nwindow for the search $\\Uos \\to \\gamma \\eta$, $\\eta\\to3\\pi$. The invariant mass\ndistributions for candidate $\\eta \\to \\pi^{+} \\pi^{-} \\pi^{0}$ and $\\eta \\to \\pi^{0} \\pi^{0} \\pi^{0}$, after\nimposing all the selection criteria, are shown in Figure~\\ref{fig:eta-3pi}.\n\\begin{figure}\n\\includegraphics*[width=3.25in]{0990307-001.eps}\n\\caption{Candidate $\\eta \\to \\pi^{+} \\pi^{-} \\pi^{0}$ (top) and $\\eta \\to \\pi^{0} \\pi^{0} \\pi^{0}$ (bottom)\ninvariant mass distributions from $\\Upsilon(1\\text{S})$ data.\nThe large number of events near 780\\,\\rm MeV\/$c^2$\\ (top) is due to the \nabundant process $e^{+} e^{-} \\to \\gamma \\omega$. No events are observed in our acceptance\nregion, bounded by the arrows.} \n\\label{fig:eta-3pi}\n\\end{figure}\n\n\\subsection{The Decay $\\Upsilon\\to\\gamma\\eta ,\\eta\\to\\gamma\\gamma$}\nThe 3-photon final state resulting from $\\Uos \\to \\gamma \\eta; \\EtaGG$\\ is dominated\nby the QED process $e^{+} e^{-} \\to \\gamma \\gamma \\gamma$. Our selection criteria of loosely\nreconstructing an $\\eta \\to \\gamma \\gamma$ meson and requiring the \\chisq\\ of the\n4-momentum constraint on the $\\Upsilon(1\\text{S})$ meson formed by adding a\nhard photon to be $<200$ are not sufficient to suppress this\nbackground. The QED background, however, has a distinct feature: the\ntwo photons with energies \n$E_{hi}$ and $E_{lo}$ used in reconstructing the $\\eta$ candidate typically have\na large energy asymmetry, where the asymmetry is defined as\n$(E_{hi}-E_{lo})\/(E_{hi}+E_{lo})$. \nReal $\\eta$ mesons are expected to have an approximately\nuniform distribution of\nasymmetry in the range (0,1). We require the asymmetry to be less than\n0.8.
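The near-uniform asymmetry distribution expected for real $\\eta \\to \\gamma \\gamma$ decays is simple two-body kinematics: for an $\\eta$ of velocity $\\beta$ decaying isotropically, the laboratory asymmetry is $\\beta|\\cos\\theta^*|$, uniform on $(0,\\beta)$ with $\\beta \\approx 0.99$ here. A toy Monte Carlo (illustrative only; the $\\eta$ energy of about 4.73~\\rm GeV\\ corresponds to recoil against the hard photon at the $\\Upsilon(1\\text{S})$):

```python
import numpy as np

rng = np.random.default_rng(1)

m_eta, e_eta = 0.548, 4.73            # GeV; eta recoiling against the hard photon
beta = np.sqrt(1.0 - (m_eta / e_eta) ** 2)

# isotropic two-photon decay in the eta rest frame, boosted to the lab:
# photon energies E_{1,2} = (E_eta / 2)(1 +- beta cos(theta*))
cos_t = rng.uniform(-1.0, 1.0, 100_000)
e1 = 0.5 * e_eta * (1 + beta * cos_t)
e2 = 0.5 * e_eta * (1 - beta * cos_t)
asym = np.abs(e1 - e2) / (e1 + e2)    # equals beta |cos(theta*)|, uniform on (0, beta)

print(beta, asym.max(), (asym < 0.8).mean())   # the 0.8 cut keeps about 80% of signal
```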
To further discriminate between the signal and the background, we\nused a neural-net approach.\n\nThe input to the neural net is a vector of six variables, namely the\nmeasured energy and the polar angle $\\theta$ of each of the three calorimeter\nshowers used in the reconstruction chain. The training sample\nconsists of 20,000 simulated signal and background events in equal\nproportion. The simulated $e^{+} e^{-} \\to \\gamma \\gamma \\gamma$\\ background events have a\nhigh-energy photon ($E>4\\ \\rm GeV$), $\\gamma \\gamma$\\ invariant mass for the two\nlower-energy photons in the range 0.4--0.7\\,\\rm GeV\/$c^2$, and energy asymmetry\nless than 0.8.\n\n\nFor our final selection, we choose a neural-net output requirement with $51\\%$\nsignal efficiency that rejects $86\\%$ of the background.\nThe combined efficiency of our\nselection criteria for this mode is $(23.8\\pm2.4)\\%$, which includes\npossible systematic biases and statistical uncertainties from the\nsimulation. The resulting $\\gamma \\gamma$\\ invariant mass distribution from\n$\\Upsilon(1\\text{S})$ data is fit, as shown in Figure~\\ref{fig:eta-gg},\nto a double Gaussian function, whose mass and widths are fixed to values\nfound from signal Monte Carlo data, along with a second-order polynomial\nbackground function. From this likelihood fit, we obtain $-2.3\\pm8.7$\nevents, consistent with zero. We then perform the same likelihood fit\nmultiple times, fixing the signal area to different values and assigning\neach of the fits a probability proportional to $e^{-{\\chi^2}\/2}$,\nwhere \\chisq\\ is obtained from the likelihood fit.\nThe resulting probability distribution is normalized and numerically\nintegrated up to 90\\% of its area to obtain the yield upper limit at 90\\%\nconfidence level; the limit thus obtained is 14.5 events.
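A minimal numerical sketch of this yield-scan integration follows. The parabolic `chi2_of_yield` is a stand-in for the actual refits with fixed signal area (which is why it gives roughly 13 events rather than the 14.5 obtained from the real fits); everything else follows the text: weight each fixed-yield fit by $e^{-\chi^2/2}$, normalize over the physical region, and integrate up to 90\% of the area.

```python
import numpy as np

# Stand-in for the chi-squared returned by re-fitting with the signal area
# fixed to n: a parabola around the best-fit yield -2.3 +/- 8.7 quoted above.
def chi2_of_yield(n, best=-2.3, sigma=8.7):
    return ((n - best) / sigma) ** 2

yields = np.linspace(0.0, 60.0, 6001)        # physical (non-negative) yields
dx = yields[1] - yields[0]
prob = np.exp(-0.5 * chi2_of_yield(yields))  # probability ~ exp(-chi2 / 2)
prob /= prob.sum() * dx                      # normalize the distribution
cdf = np.cumsum(prob) * dx
upper_limit = yields[np.searchsorted(cdf, 0.90)]  # ~13 for this parabola
```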
\n\\begin{figure}\n\\includegraphics*[width=3.25in]{0990307-002.eps}\n\\caption{Invariant mass distribution of $\\gamma \\gamma$\\ candidates in $\\Upsilon(1\\text{S})$\n data for the mode $\\Uos \\to \\gamma \\eta; \\EtaGG$, overlaid with fits using a) floating area\n (solid red) yielding $-2.3\\pm8.7$ events, and b) area fixed to 14.5\n events (dashed blue), the upper limit corresponding to 90\\% C.L.}\n\\label{fig:eta-gg}\n\\end{figure}\n\n\\subsection{The Decay $\\Upsilon\\to\\gamma\\eta^{\\prime} ,\\eta^{\\prime}\\to\\eta\\pi^+\\pi^-$}\nReconstruction of the decay chains $\\Uos \\to \\gamma \\eta^{\\prime}$, where \n$\\eta^{\\prime} \\to \\eta \\pi^+ \\pi^-$, builds on the search $\\Uos \\to \\gamma \\eta$\\. \nThe reconstructed $\\eta$ candidate is\nconstrained to the nominal $\\eta$ mass. The mass-constrained $\\eta$\ncandidate is further combined with a pair of oppositely charged\nquality tracks by forcing the tracks and the $\\eta$ candidate to\noriginate from a common vertex. In reconstruction of $\\eta^{\\prime}; \\EtaPMZ$,\ncare is exercised to ensure that no track is used more than once in the\ndecay chain. The high energy photon is combined with the $\\eta^{\\prime}$\\\ncandidate to build an $\\Upsilon$ candidate which is further\nconstrained to the 4-momentum of the initial $e^+e^-$ system. \nIn the reconstruction chain $\\eta^{\\prime}; \\EtaGG$, the $\\Upsilon$ candidate with\nthe lowest sum of chi-squared to the 4-momentum constraint ($\\chi^{2}_{\\text{P}4}$)\ncombined with the chi-squared of the mass-constraint to the $\\eta$\ncandidate (\\EtaMassFitChisq) is accepted as\nthe representative $\\Upsilon$ candidate in the reconstructed event.\nIn the modes where $\\eta\\to3\\pi$, \nthe $\\pi^{0}$\\ mass-constraint chi-squared, $\\chi^2_{\\pi^0}$, also contributes to the \n$\\chi^{2}_{\\mathrm{Total}}$ . 
\n\n\nTo ensure that only good quality $\\eta$ candidates participate in the decay\nchain, the \\EtaMassFitChisq\\ values of ``$\\eta\\to \\text{all neutral}$''\ncandidates are required to be less than 200. Owing to the better\nmeasurements of charged track momenta, this criterion is more\nstringent (\\EtaMassFitChisq $<100$) in the case of $\\eta \\to \\pi^{+} \\pi^{-} \\pi^{0}$. The \ntargeted efficiency (around 99\\%) of this requirement is achieved in\nall three cases.\n \nThe charged tracks used in reconstructing $\\eta^{\\prime}$\\ candidates have to be\nconsistent with the pion hypothesis.\nWe again require the sum of squared $\\text{S}_{dE\/dx}$ added in quadrature to\nbe less than 16 for both the two track and four track cases. The\nefficiency of this requirement alone is around 99\\%.\n\nThe selected $\\Upsilon$ candidate is further required to satisfy the\n4-momentum consistency criterion, restricting $\\chi^{2}_{\\text{P}4}$\\ $<100$\nin the $\\eta \\to \\gamma \\gamma$ case and a less stringent value of 200 for\n$\\eta\\to3\\pi$. The overall reconstruction efficiencies of our\nselection criteria as determined from signal Monte Carlo simulations are\n$(35.3\\pm5.2)\\%$, $(24.5\\pm2.2)\\%$ and $(14.4\\pm2.9)\\%$ for $\\eta$\ndecays to $\\gamma \\gamma$, $\\pi^+\\pi^-\\pi^0$ and $3\\pi^0$, respectively.\n\nAfter these selection criteria, we find\nno candidate events in the modes \\ModeEtapGG\\ and $\\Uos \\to \\gamma \\eta^{\\prime};\\EtaThreePZ$, as\nshown in Figure~\\ref{fig:etap-eta-pi-pi}. However, in the mode\n\\ModeEtapPMZ, we find two good candidate events passing our selection\ncriteria as shown in Figure~\\ref{fig:etap-eta-pi-pi}.\nThese two events have been looked at in detail and appear to be\ngood signal events. 
However, they are insufficient to allow\nus to claim a positive signal, as no candidate events are observed \nin the modes \\ModeEtapGG\\ and $\\Uos \\to \\gamma \\eta^{\\prime};\\EtaThreePZ$, each providing higher\nsensitivity than the decay chain \\ModeEtapPMZ.\n\\begin{figure}\n\\includegraphics*[width=3.44in]{0990307-003.eps}\n\\caption{Invariant mass distributions of $\\eta\\pi^+\\pi^-$ candidates from $\\Upsilon(1\\text{S})$\ndata. The $\\eta$ candidate is constrained to the nominal $\\eta$ meson\nmass. No events are observed in the signal box for $\\eta \\to \\gamma \\gamma$ (top) and\n$\\eta \\to \\pi^{0} \\pi^{0} \\pi^{0}$ (bottom); two signal events are observed for\n$\\eta \\to \\pi^{+} \\pi^{-} \\pi^{0}$ (middle).}\n\\label{fig:etap-eta-pi-pi}\n\\end{figure}\n\n\\subsection{The Decay $\\Upsilon\\to\\gamma\\eta^{\\prime} ,\\eta^{\\prime}\\to\\gamma\\rho^0$}\nThe reconstruction scheme for the decay chain \\ModeEtapGR\\ is slightly\ndifferent from those previously described. We first build $\\rho^0$\ncandidates by forcing pairs of oppositely charged tracks to originate\nfrom a common vertex. Next, we add a photon candidate \n(which we refer to as the ``soft shower'' having energy $E_s$ in\ncontrast with the high energy radiative photon) \nnot associated with charged tracks, and having a lateral profile\nconsistent with being a photon, to build $\\eta^{\\prime}$\\ candidates. To obtain\nthe maximum yield, we neither restrict the energy $E_s$ of the\nphoton nor the invariant mass of the $\\rho^0$ candidate at this stage. \nA high energy photon is then added, ensuring that the soft shower and\nhigh energy photon are distinct, to build the $\\Upsilon$ candidate. 
The\n$\\Upsilon$ candidate is then constrained to the 4-momentum of the\ninitial $e^+e^-$ system and the candidate with the lowest\n$\\chi^{2}_{\\text{P}4}$\\ value is selected.\n\nThe candidate $\\eta^{\\prime}$\\ invariant mass resolution is vastly improved due\nto the mass-constraints on the candidate $\\pi^{0}$\\ and $\\eta$ mesons in\n$\\eta^{\\prime} \\to \\eta \\pi^+ \\pi^-$ decays. In reconstruction of\n$\\eta^{\\prime} \\to \\gamma \\rho^0$, a significant improvement in candidate $\\eta^{\\prime}$\\ invariant mass\nresolution ($\\approx 30\\%$) as well as the energy resolution of the\nsoft shower is achieved by performing the 4-momentum constraint on\nthe $\\Upsilon$ candidate.\n\nParticle identification in the channel $\\eta^{\\prime} \\to \\gamma \\rho^0$\\ is achieved by\ndemanding the combined RICH and $dE\/dx$\\ likelihood for the pion\nhypothesis be greater than the combined likelihood for each of the\nelectron, kaon and proton hypotheses. Copiously produced QED processes\nsuch as $e^+e^- \\to \\gamma \\gamma e^+e^-$ are\nsuppressed by imposing an electron veto, requiring that\n$|E\/p-1.0|>0.05$, where $p$ is the measured momentum and $E$ is the\nassociated calorimeter energy of the charged track. QED events of the type \n$e^+e^- \\to \\gamma \\gamma \\mu^+\\mu^-$ are suppressed by requiring that\nneither track registers a hit five hadronic interaction lengths deep\ninto the muon detector system. Continuum background of the type \n$e^+e^- \\to \\gamma \\gamma \\rho^0$ is suppressed by demanding\n$E_s>100$\\ \\rm MeV. Finally, the event is ensured to be complete by\ndemanding $\\chi^{2}_{\\text{P}4}$\\ $< 100$. 
\nThe overall efficiency of the selection criteria for this mode is\n$(40.1\\pm2.1)\\%$, including possible systematic uncertainties and\nthe statistical uncertainty of the Monte Carlo sample.\n\n\\begin{figure}\n\\includegraphics*[width=3.25in]{0990307-004.eps}\n\\caption{Invariant mass distribution of $\\gamma\\rho^0$ candidates in $\\Upsilon(1\\text{S})$\n data for the mode \\ModeEtapGR\\ overlaid with fits using a) floating area\n (solid red) yielding $-3.1\\pm5.3$ events, and b) area fixed to\n 8.6 events (dashed blue), corresponding to the upper limit at 90\\% C.L.}\n\\label{fig:etap-gr}\n\\end{figure}\n\nAlthough highly efficient, our selection criteria are not sufficient to\nsuppress the smooth continuum background from the reaction\n$e^+e^- \\to \\gamma \\gamma \\rho^0$.\nThe candidate $\\eta^{\\prime} \\to \\gamma \\rho^0$\\ invariant mass distribution after our selection\ncriteria, shown in Figure~\\ref{fig:etap-gr}, is fit to\na double Gaussian function over a floating polynomial background\nfunction of order one. The parameters of the double Gaussian function\nare fixed to the values obtained from a fit to signal Monte\nCarlo and the area is left to float. The likelihood fit yields\n$-3.1\\pm5.3$ events, which is consistent with zero. In the absence of a\nclear signal, we determine the upper limit yield\nas we do in the case of $\\Uos \\to \\gamma \\eta; \\EtaGG$, and find an upper limit at 90\\%\nconfidence level of 8.6\nevents. \n\n\n\n\n\\section{Systematic Uncertainties and Combined Upper Limits}\\label{sec:sys-lim}\nSince we do not have a signal in any of the modes, and since the\nkinematic efficiency is near-maximal, statistical uncertainties\ndominate over systematic uncertainties. \nBy comparison of the expected yield of the QED process $e^{+} e^{-} \\to \\gamma \\gamma \\gamma$\\\nwith the calculated cross-section for this process, we estimate the\nuncertainty on the trigger simulation for ``all neutral'' modes to be\n4.5\\%. 
For modes with only two charged tracks, we have studied the \nQED processes $e^+e^- \\to \\gamma \\rho^0$ and $e^+e^- \\to \\gamma \\phi$, \nand assign a 13\\% uncertainty\non the efficiency due to possible trigger mismodeling. For events with\nmany charged tracks, we assign a systematic\nuncertainty of 1\\% as the relevant trigger lines are very well understood,\nredundant, and\nvery efficient.\nWe assign 1\\% uncertainty per track in charged track\nreconstruction based upon CLEO studies~\\cite{CLEOSystematics}\nof low-multiplicity events, and\n2.5\\% systematic uncertainty per photon from mismodeling of\ncalorimeter response which translates to 5\\% uncertainty per meson\n($\\pi^{0}$\\ and $\\eta$) decaying into $\\gamma \\gamma$, \nagain based upon CLEO studies~\\cite{CLEOSystematics}.\nThe systematic uncertainty in $\\text{S}_{dE\/dx}$ for two\ntracks added in quadrature (as in $ \\Uos \\to \\gamma \\eta; \\EtaPMZ $) was evaluated to be 4\\%\nby considering the efficiency difference of this requirement in\nMonte Carlo and data samples of\n$e^{+} e^{-} \\to \\gamma \\omega$. Consequently, we assign 4\\% and 5.7\\% uncertainty to the\nreconstruction efficiencies of modes involving two and four charged\ntracks, respectively, excepting $\\eta^{\\prime} \\to \\gamma \\rho^0$\\ where this requirement was not\nimposed. For the mode $\\eta^{\\prime} \\to \\gamma \\rho^0$, the systematic uncertainty in the\nefficiency of analysis cuts, found to be 3.9\\%, was evaluated by\ncomparing the efficiency difference in Monte Carlo and data by\nstudying the $\\rho^0$ signal due to the QED processes. \nFor the neural-net cut in the mode $\\Uos \\to \\gamma \\eta; \\EtaGG$, we studied the\nefficiency in QED $e^{+} e^{-} \\to \\gamma \\gamma \\gamma$\\ simulated events and the real data\ndominated by the same QED process for a wide range of neural-net\noutput values. 
We find a maximum difference of 7\\% in these two\nnumbers, which we take as a conservative estimate of the\nassociated systematic uncertainty.\nThe systematic uncertainties for various $\\eta$ and $\\eta^{\\prime}$\\ decay modes\nare listed in Table~\\ref{tab:sys-errs}. \nThese uncertainties were added in quadrature, along with the\nstatistical error due to the limited size of Monte Carlo samples, to\nobtain the overall systematic uncertainties in the efficiencies.\n\n\\begin{small}\n\\begingroup\n\\squeezetable\n\\begin{table*}[!]\n\\begin{center}\n\\caption{\\label{tab:sys-errs}Contributions to systematic\nuncertainties in the efficiencies for $\\Uos \\to \\gamma \\eta^{\\prime}$\\ (upper half) and\n$\\Uos \\to \\gamma \\eta$\\ (lower half). The uncertainties are expressed as relative\npercentages and combined in quadrature.}\n\\begin{ruledtabular}\n\\begin{tabular}{lcccc}\nUncertainty source & ~~$\\eta^{\\prime}; \\EtaGG$ & ~~$\\eta^{\\prime}; \\EtaPMZ$ & ~~$\\eta^{\\prime}; \\EtaThreePZ$\n& $\\eta^{\\prime} \\to \\gamma \\rho^0$ \\\\ \n\n\\hline\nTrigger mismodeling & 13 & 1 & 13 & 1 \\\\\nTrack reconstruction & 2 & 4 & 2 & 2 \\\\\nCalorimeter response & 5 & 5 & 15 & 2.5 \\\\\nAnalysis cuts & 4 & 5.7 & 4 & 3.9 \\\\\nMonte Carlo statistics & 1.0 & 1.6 & 2.4 & 1.0 \\\\\n\\hline\nCombined uncertainty & 14.7 & 8.8 & 20.4 & 5.2 \\\\\n\\hline\n& & & & \\\\\nUncertainty source & $\\eta \\to \\gamma \\gamma$ & $\\eta \\to \\pi^{+} \\pi^{-} \\pi^{0}$ & $\\eta \\to \\pi^{0} \\pi^{0} \\pi^{0}$ & \\\\\n\\hline\nTrigger mismodeling & 4.5 & 13 & 4.5 & \\\\\nTrack reconstruction & - & 2 & - & \\\\\nCalorimeter response & 5 & 5 & 15 & \\\\\nAnalysis cuts & 7 & 4 & - & \\\\\nMonte Carlo statistics & 1.3 & 1.2 & 1.7 & \\\\\n\\hline\nCombined uncertainty & 9.8 & 15.2 & 16.0 & \\\\\n\\end{tabular}\n\\end{ruledtabular}\n\\end{center}\n\\end{table*}\n\\endgroup\n\\end{small}\n\n\n\\begin{small}\n\\begingroup\n\\squeezetable\n\\begin{table*}[ht]\n\\begin{center}\n\\caption{Results 
of the search for $\\Uos \\to \\gamma \\eta^{\\prime}$\\ and $\\Uos \\to \\gamma \\eta$. Results include\nstatistical and systematic uncertainties, as described in the\ntext. The combined limit is obtained after including the systematic\nuncertainties.}\n\\label{tab:limits}\n\\begin{ruledtabular}\n\\begin{tabular}{lcccc}\n& ~~$\\eta^{\\prime}; \\EtaGG$ & ~~$\\eta^{\\prime}; \\EtaPMZ$ & ~~$\\eta^{\\prime}; \\EtaThreePZ$ & ~~$\\eta^{\\prime} \\to \\gamma \\rho^0$ \\\\\n\\hline\n\nObserved events & 0 & 2 & 0 & $-3.1\\pm5.3$ \\\\\n\n$\\mathcal{B}_{\\eta^{\\prime},i}\\%$ & \n$17.5\\pm0.6$ & $10.0\\pm0.4$ & $14.4\\pm0.5$ & $29.5\\pm1.0$ \\\\\n\nReconstruction efficiency (\\%) & \n$35.2\\pm5.2$ & $24.5\\pm2.2$ & $14.4\\pm2.9$ & $40.1\\pm2.1$ \\\\ \n\n$\\mathcal{B}(\\Upsilon(1\\text{S})\\to\\gamma\\eta^{\\prime})(90\\%~\\text{C.L.})$\\footnotemark[1] &\n\\ulbr{1.8} & \\ulbr{10.3} & \\ulbr{5.2} & \\ulbr{3.4} \\\\ \n\n\n$\\mathcal{B}(\\Upsilon(1\\text{S})\\to\\gamma\\eta^{\\prime})(90\\%~\\text{C.L.})$\\footnotemark[2] &\n\\ulbr{1.9} & \\ulbr{10.4} & \\ulbr{5.8} & \\ulbr{3.4} \\\\ \n\n\\hline\n\n\\multicolumn{2}{l}{Combined limit on \n$\\mathcal{B}(\\Upsilon(1\\text{S})\\to\\gamma\\eta^{\\prime})$} &\n\\multicolumn{2}{c}{\\ulbr{1.9}} \\\\\n\n\\hline\n\n& & & & \\\\\n\n& $\\eta \\to \\gamma \\gamma$ & $\\eta \\to \\pi^{+} \\pi^{-} \\pi^{0}$ & $\\eta \\to \\pi^{0} \\pi^{0} \\pi^{0}$ & \\\\\n\n\\hline\n\nObserved events & $-2.3\\pm8.7$ & 0 & 0 & \\\\\n\n$\\mathcal{B}_{\\eta,i}\\%$ & $39.4\\pm0.3$ & $22.6\\pm0.4$ & $32.5\\pm0.3$ & \\\\\n\nReconstruction efficiency (\\%) & $23.8\\pm2.4$ & $28.5\\pm2.9$ &\n$11.8\\pm1.9$ & \\\\\n\n$\\mathcal{B}(\\Upsilon(1\\text{S})\\to\\gamma\\eta)(90\\%~\\text{C.L.})$\\footnotemark[1]\n& \\ulbr{7.3} & \\ulbr{1.7} & \\ulbr{2.8} & \\\\\n\n\n$\\mathcal{B}(\\Upsilon(1\\text{S})\\to\\gamma\\eta)(90\\%~\\text{C.L.})$\\footnotemark[2] & \n\\ulbr{7.4} & \\ulbr{1.8} & \\ulbr{2.9} & \\\\\n\\hline\n\nCombined limit on 
$\\mathcal{B}(\\Upsilon(1\\text{S})\\to\\gamma\\eta)$ &\n\\multicolumn{3}{c}{\\ulbr{1.0}} &\\\\\n\n\n\\end{tabular}\n\\end{ruledtabular}\n\\footnotetext[1]{excluding systematic uncertainties}\n\\footnotetext[2]{including systematic uncertainties}\n\\end{center}\n\\end{table*}\n\\endgroup\n\\end{small}\n\n\n\\begin{figure*}[!]\n\\mbox{\\includegraphics*[width=6.3in]{0990307-005.eps}}\n\\caption{Likelihood distributions as a function of branching fraction\nfor the decay mode $\\Uos \\to \\gamma \\eta$\\ (left) and $\\Uos \\to \\gamma \\eta^{\\prime}$\\ (right). All\ndistributions are smeared by respective systematic uncertainties and\nnormalized to the same area. The solid black curve denotes the combined\nlikelihood distribution.} \n\\label{fig:limit-plots}\n\\end{figure*}\nThe systematic uncertainties in efficiencies, uncertainties in the\nproduct branching ratios, and the statistical uncertainty in the number\nof $\\Upsilon(1\\text{S})$ decays, $N_{\\Uos}$, are incorporated~\\cite{Cousins}\nby a ``toy'' Monte Carlo procedure to obtain smeared likelihood\ndistributions for the branching fraction in each mode,\n$\\mathcal{B}(\\Upsilon(1\\text{S})\\to\\gamma\\text{P}) = N_{\\text{P}}\/(\\epsilon_{i}\\cdot\\mathcal{B}_{\\text{P},i}\\cdot N_{\\Uos})$, \nwhere $\\text{P} = \\eta , \\eta^{\\prime}$, and $\\epsilon_i$ and\n$\\mathcal{B}_{\\text{P},i}$ denote the efficiency and branching\nfractions of the $i$th mode. 
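As a quick numerical cross-check of this formula (a sketch; the inputs are the values quoted in the text for the mode $\eta \to \gamma \gamma$):

```python
# Plugging the quoted numbers into B = N_P / (eps_i * B_{P,i} * N_1S)
# for Upsilon(1S) -> gamma eta, eta -> gamma gamma.
n_p = 14.5           # 90% C.L. upper-limit yield from the gamma gamma fit
eff = 0.238          # reconstruction efficiency eps_i
br_mode = 0.394      # B(eta -> gamma gamma)
n_1s = 21.2e6        # number of Upsilon(1S) decays
bf_limit = n_p / (eff * br_mode * n_1s)  # ~7.3e-6, the limit before systematics
```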
\nTo obtain the smeared likelihood distribution $\\mathcal{L}_{\\text{P},i}$, the\nexperiment is performed multiple times, randomly selecting\n$N_{\\text{P}}$ from the likelihood function appropriate for each\nmode\\footnote{For modes with zero or few observed events, the\n appropriate likelihood function is generated from Poisson statistics.\n For the background limited modes $\\eta \\to \\gamma \\gamma$ and $\\eta^{\\prime} \\to \\gamma \\rho^0$, we \n already have the likelihood function which we used in calculating\n the upper limit of the observed number of events at 90\\% CL.} and then\ndividing by the sensitivity factor $\\epsilon_{i}\\cdot\\mathcal{B}_{\\text{P},i}\\cdot N_{\\Uos}$, where each term is picked\nfrom a Gaussian distribution about their mean values with the\nappropriate standard deviation. \n\nThe combined likelihood distribution for\n$\\mathcal{B}(\\Upsilon(1\\text{S})\\to\\gamma\\text{P})$ is derived as\n$\\mathcal{L}_{\\text{P}} = \\prod_{i}{\\mathcal{L}_{\\text{P},i}}$ which\nis summed up to 90\\% of the area in the physically allowed region to obtain\nthe upper limit branching fraction for $\\Upsilon(1\\text{S})\\to\\gamma\\text{P}$.\nFrom the constituent $\\mathcal{L}_{\\text{P},i}$ and the combined\n$\\mathcal{L}_{\\text{P}}$ as shown in \nFigure~\\ref{fig:limit-plots}, \nwe obtain upper limits on\n$\\mathcal{B}(\\Upsilon(1\\text{S})\\to\\gamma\\eta)$ of \\br{7.4}, \\br{1.8}, \\br{2.9}, \nand \\br{1.0} for $\\eta$ decaying into $\\gamma \\gamma$,\n$\\pi^+\\pi^-\\pi^0$, $\\pi^0\\pi^0\\pi^0$, and all three combined,\nrespectively. We obtain upper limits for\n$\\mathcal{B}(\\Upsilon(1\\text{S})\\to\\gamma\\eta^{\\prime})$ \nof \\br{1.9}, \\br{10.4}, \\br{5.8}, and \\br{3.4} for $\\eta$\ndecaying into $\\gamma \\gamma$, $\\pi^+\\pi^-\\pi^0$, $\\pi^0\\pi^0\\pi^0$,\nand $\\eta^{\\prime} \\to \\gamma \\rho^0$, respectively. 
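The smearing-and-combination procedure can be sketched as follows. The efficiencies and branching fractions are the quoted values for the two zero-event $\eta\to3\pi$ modes, but the sampling details (Gaussian smearing of each sensitivity factor, an exponential yield likelihood for zero observed events, an assumed 1\% uncertainty on the number of $\Upsilon(1\text{S})$ decays) are an illustrative reading of the text, not the actual CLEO implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def smeared_bf_samples(eff, d_eff, br, d_br, n=400_000,
                       n_ups=21.2e6, d_n=0.2e6):
    # Zero observed events => Poisson likelihood L(N) = exp(-N), so the yield
    # can be sampled from an exponential distribution; each sample is divided
    # by an independently smeared sensitivity eps * B * N_1S.
    yields = rng.exponential(1.0, n)
    sens = (rng.normal(eff, d_eff, n) * rng.normal(br, d_br, n)
            * rng.normal(n_ups, d_n, n))
    return yields / sens

edges = np.linspace(0.0, 8e-6, 801)
centers = 0.5 * (edges[:-1] + edges[1:])

# Smeared per-mode likelihoods for eta -> pi+ pi- pi0 and eta -> 3 pi0,
# approximated as normalized histograms over the branching-fraction grid.
l1, _ = np.histogram(smeared_bf_samples(0.285, 0.029, 0.226, 0.004),
                     bins=edges, density=True)
l2, _ = np.histogram(smeared_bf_samples(0.118, 0.019, 0.325, 0.003),
                     bins=edges, density=True)

combined = l1 * l2                    # L_P = product of the L_{P,i}
combined /= combined.sum()
ul90 = centers[np.searchsorted(np.cumsum(combined), 0.90)]
```

For two modes with zero observed events the combined likelihood is essentially $e^{-(s_1+s_2)B}$, so `ul90` lands near $2.3/(s_1+s_2) \approx 1.1\times10^{-6}$, in the same ballpark as the quoted combined limit over all three modes.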
The combined upper limit for $\\mathcal{B}$(\\ModeEtap)\\ is\n$1.9\\times10^{-6}$, a value larger than the limit from one of the sub-modes\n(\\ModeEtapGG) alone, due to the two candidate events in \\ModeEtapPMZ.\nThe numbers of observed events, detection efficiencies and upper\nlimits are listed in Table~\\ref{tab:limits}. \n\n\n\\section{Summary and Conclusion}\\label{sec: summary}\nWe report on a new search for the radiative decay of $\\Upsilon(1\\text{S})$ to the \npseudoscalar mesons $\\eta$ and $\\eta^{\\prime}$\\ in $21.2\\times10^{6}$ $\\Upsilon(1\\text{S})$ \ndecays collected with the CLEO~III detector. The $\\eta$ meson was reconstructed\nin the three modes $\\eta \\to \\gamma \\gamma$, $\\eta \\to \\pi^{+} \\pi^{-} \\pi^{0}$, or $\\eta \\to \\pi^{0} \\pi^{0} \\pi^{0}$.\nThe $\\eta^{\\prime}$\\ meson was reconstructed either in the mode $\\eta^{\\prime} \\to \\gamma \\rho^0$\\ or \n$\\eta^{\\prime} \\to \\pi^{+} \\pi^{-} \\eta$ with $\\eta$ decaying through any\nof the above three modes. All these modes except for $\\eta^{\\prime} \\to \\gamma \\rho^0$\\ had earlier\nbeen investigated in CLEO~II data amounting to \n$N_{\\Uos}$ $ = 1.45\\times 10^6$ $\\Upsilon(1\\text{S})$ mesons and resulted in\nprevious upper limits $\\mathcal{B}$(\\ModeEtap)\\ $< 1.6 \\times 10^{-5}$ and \n$\\mathcal{B}$(\\ModeEta)\\ $< 2.1 \\times 10^{-5}$ \nat 90\\% C.L. These limits were already smaller than \nthe naive predictions based upon the scaling of the decay rate for the\ncorresponding $J\/\\psi$\\ radiative decay mode by the factor \n$(q_{b}m_{c}\/q_{c}m_{b})^{2}$,\nand also those of the model of K\\\"orner {\\it et al.}~\\cite{KKKS}, \nwhose perturbative QCD predictions\nfor $\\mathcal{B}(J\/\\psi\\to\\gamma X)$, where $X = \\eta,\n\\eta^{\\prime}, f_2$, as well as for $\\mathcal{B}(\\Upsilon(1\\text{S})\\to\\gamma f_2)$, agree\nwith experimental results.\n\nWith a CLEO~III data sample 14.6 times as large as the CLEO~II data\nsample, we find no convincing signal in any of the modes.
Based purely\nupon the luminosities, we would expect the new upper limits to be scaled \ndown by a factor of between 14.6 (in background-free modes) and\n$\\sqrt{14.6}$ in background dominated modes if the two CLEO detectors \n(CLEO~II and CLEO~III) offered similar particle detection efficiencies. \nIn the search for $\\Uos \\to \\gamma \\eta$\\ we find no hint of \na signal, and manage to reduce the limit by an even larger factor. In\nthe search for $\\Uos \\to \\gamma \\eta^{\\prime}$, however, we find two clean candidate events\nin the channel \\ModeEtapPMZ, which, though we\ncannot claim them as signal, do indicate the possibility that we are\nclose to the sensitivity necessary to obtain a positive result. \nBecause of these two events, our combined limit for $\\Uos \\to \\gamma \\eta^{\\prime}$\\ is not \nreduced by as large a factor as the luminosity ratio, and in fact is \nlooser than that which would be obtained if we analyzed \none sub-mode (\\ModeEtapGG) alone.\nIn this analysis we found upper limits which we\nreport at 90\\% confidence level as\n\\begin{displaymath}\n\\mathcal{B}(\\Upsilon(1\\text{S}) \\to \\gamma \\eta) < 1.0 \\times 10^{-6},\n\\end{displaymath}\n\\begin{displaymath}\n\\mathcal{B}(\\Upsilon(1\\text{S}) \\to \\gamma \\eta^{\\prime}) < 1.9 \\times 10^{-6}.\n\\end{displaymath}\n\nOur results are sensitive enough to test the appropriateness of the\npseudoscalar mixing approach as pursued by Chao~\\cite{KTChao}, where\nmixing angles among various pseudoscalars including $\\eta_{b}$\\ are\ncalculated. Then, using a calculation for the M1 transition\n$\\Upsilon\\to\\gamma\\eta_{b}$, he predicts \n$\\mathcal{B}(\\Upsilon(1\\text{S}) \\to \\gamma \\eta) = 1\\times 10^{-6}$ and\n$\\mathcal{B}(\\Upsilon(1\\text{S}) \\to \\gamma \\eta^{\\prime}) = 6\\times 10^{-5}$. 
Our\nlimit for $\\Uos \\to \\gamma \\eta^{\\prime}$\\ is significantly smaller than Chao's prediction\nand does not support his approach.\n\nThe sensitivity challenge posed by both the extended vector dominance\nmodel and the higher twist approach of Ma are beyond our reach. \nIn extended VDM, Intemann predicts \n$1.3\\times 10^{-7} < \\mathcal{B}(\\Upsilon(1\\text{S})\\to\\gamma\\eta) < 6.3\\times 10^{-7}$ \nand \n$5.3\\times 10^{-7} < \\mathcal{B}(\\Upsilon(1\\text{S})\\to\\gamma\\eta^{\\prime}) <\n2.5\\times 10^{-6}$, where the two limits are determined by having\neither destructive or constructive interference, respectively, between\nthe terms involving $\\Upsilon(1\\text{S})$ and $\\Upsilon(2\\text{S})$. Even if it is determined that \nthe amplitudes are added constructively, our limit remains higher than\nthe VDM prediction for $\\Upsilon(1\\text{S})\\to\\gamma\\eta$. \n\nMa's prediction of \n$\\mathcal{B}(\\Upsilon(1\\text{S})\\to\\gamma\\eta^{\\prime}) \\approx 1.7\\times 10^{-6}$ is\nconsistent with our result. However, his prediction for\n$\\mathcal{B}(\\Upsilon(1\\text{S})\\to\\gamma\\eta) \\approx 3.3\\times 10^{-7}$ is a factor of\n$\\sim 3$ smaller than our limit.\n\n\n\n\n\n\\begin{comment}\n\\begin{figure}\n\\includegraphics*[bb = 35 150 535 610, width=3.25in]{\\etalimitplotps}\n\\caption{Likelihood distribution as function of branching\nratio for the decay mode $\\Uos \\to \\gamma \\eta$. Black curve denotes the combined\ndistribution.}\n\\label{fig:eta log limits}\n\\end{figure}\n\\begin{figure}\n\\includegraphics*[bb = 35 150 535 610, width=3.25in]{\\etaplimitplotps}\n\\caption{Likelihood distribution as function of branching\nratio for the decay mode $\\Uos \\to \\gamma \\eta^{\\prime}$. 
Black curve denotes the combined\ndistribution.}\n\\label{fig:etap log limits}\n\\end{figure}\n\\end{comment}\n\\begin{comment}\n\\begin{figure*}\n\\mbox{\\includegraphics*[bb = 55 150 535 610, width=3.3in]{\\etalimitplotps}}\n\\mbox{\\includegraphics*[bb = 55 150 535 610, width=3.3in]{\\etaplimitplotps}}\n\\caption{Likelihood distributions as function of branching fraction\nfor the decay mode $\\Uos \\to \\gamma \\eta$\\ (left) and $\\Uos \\to \\gamma \\eta^{\\prime}$\\ (right). All\ndistributions are smeared by respective systematic uncertainties and\nnormalized to unit area. Black curve denotes the combined likelihood\ndistribution.}\n\\label{fig:limit-plots}\n\\end{figure*}\n\\end{comment}\n\n\nWe gratefully acknowledge the effort of the CESR staff \nin providing us with excellent luminosity and running conditions. \nD.~Cronin-Hennessy and A.~Ryd thank the A.P.~Sloan Foundation. \nThis work was supported by the National Science Foundation,\nthe U.S. Department of Energy, and \nthe Natural Sciences and Engineering Research Council of Canada.\n\n\\section{Basic computational properties of $\\mathsf{LEM}$}\n\\label{sec: basic computational properties of IMLLshpos}\nThe observations in the previous sections lead us to set the reduction rules on terms, which allow duplication and erasure for values only, as in Figure~\\ref{fig: term reduction rules}.\nThose reductions are more restrictive than the cut-elimination\nsteps that we could perform on $\\mathsf{LEM}$ if we look at\nit as if it were a pure logical system, i.e.~not a type-assignment.
Since the cut-elimination of $\\mathsf{LEM}$\nworks as in $\\mathsf{LL}$, once \\enquote{!} is substituted for \\enquote{$ \\shpos $}, we can observe the effect of moving the $ cut $ in:\n\\begin{equation}\\label{eqn: non linear duplication}\n\\AxiomC{\\vdots}\n\\noLine\n\\UnaryInfC{$y: \\shpos A, z: \\shpos \\mathbf{1} \\vdash yz: \\mathbf{1}$}\n\\RightLabel{$p$}\n\\UnaryInfC{$y: \\shpos A, z: \\shpos \\mathbf{1} \\vdash yz: \\shpos \\mathbf{1}$}\n\\AxiomC{$\\vdots$}\n\\noLine\n\\UnaryInfC{$\\Gamma, x_1: \\shpos \\mathbf{1}, x_2: \\shpos \\mathbf{1} \\vdash M:\\ \\tau$}\n\\AxiomC{\\vdots}\n\\noLine\n\\UnaryInfC{$\\vdash I: \\mathbf{1}$}\n\\RightLabel{$c$}\n\\BinaryInfC{$\\Gamma, x: \\shpos \\mathbf{1} \\vdash \\mathtt{ copy} ^{I}_{\\mathbf{1}}\\, x \\mathtt{ \\ as \\ } y,z \\mathtt{ \\ in \\ }M: \\tau$}\n\\RightLabel{$cut$}\n\\BinaryInfC{$\\Gamma, y: \\shpos A, z: \\shpos \\mathbf{1} \\vdash \\mathtt{ copy} ^{I}_{\\mathbf{1}}\\, yz \\mathtt{ \\ as \\ } y,z \\mathtt{ \\ in \\ }M: \\tau$}\n\\DisplayProof\n\\end{equation}\nupward, in order to eventually eliminate it.\nThis move would require duplicating the \\emph{open} term $yz$, erroneously\nyielding a non-linear term. So, at the proof-theoretical level, moves\nof the cut rule exist that cannot correspond to any reduction on terms.\nTo circumvent this misalignment, we proceed as follows:\n\\begin{itemize}\n\t\\item \n\tWe define the \\emph{lazy cut-elimination steps}. Their introduction rules out any attempt to eliminate instances \n\tof cuts like~\\eqref{eqn: non linear duplication}.\n\tThe apparent drawback is that cuts like~\\eqref{eqn: non linear duplication} become \\emph{deadlocks}, i.e.~instances of $ cut $\n\tthat we cannot eliminate.\n\t\n\t\\item\n\tDeadlocks are not a problem. Once \\emph{lazy types} are defined,\n\twe can show that a \\emph{lazy cut-elimination strategy} exists\n\t\tthat eliminates all the cut rules that may\n\t\toccur in a derivation of a lazy type.
\n\t\tThe cost of the elimination is cubic\n\t\t(Theorem~\\ref{thm: cut elimination for downarrow IMLL2}).\n\\end{itemize}\nLast, we show that the reductions on terms in Figure~\\ref{fig: term reduction rules} and Figure~\\ref{fig: term commuting conversions} enjoy Subject reduction (Theorem~\\ref{thm:Subject Reduction for IMLL2shpos}).\n\n\\subsection{Cut-elimination and its cubic complexity}\n\\label{Cut elimination, complexity, and subject reduction for IMLL2^shpos}\n\\begin{defn}[The cuts of $\\mathsf{LEM} $]\n\\label{defn: definitions for cut} \nLet $ (X,Y) $ identify an instance:\n\\begin{equation}\\label{eqn: cut elimination format}\n\\AxiomC{\\vdots}\n\\RightLabel{$X$}\n\\UnaryInfC{$\\Gamma \\vdash N:\\sigma$}\n\\AxiomC{\\vdots}\n\\RightLabel{$Y$}\n\\UnaryInfC{$\\Delta, x:\\sigma \\vdash M: \\tau$}\n\\RightLabel{$ cut $}\n\\BinaryInfC{$\\Gamma,\\Delta \\vdash M[N\/x]: \\tau$}\n\\DisplayProof\n\\end{equation}\nof the rule $ cut $ that occurs in a given derivation \n$ \\mathcal{D} $ of \n$\\mathsf{LEM}$, where $X$ and $Y$ are two of the rules in Figure~\\ref{fig: the system shpos add IMLL2}. \n\\emph{Axiom cuts} involve $ ax $, and are of the form $(X,ax)$ or $(ax,Y)$, for some $ X$ and $ Y $. \n\\emph{Exponential cuts} are ($p$,$d$), ($p$,$w$), and ($p$,$c$). \n\\emph{Principal cuts} are $(\\multimap$R$,\\multimap$L$)$,\n$(\\forall$R$,\\forall$L$)$ and every exponential cut.\n\\emph{Symmetric cuts} comprise axiom and principal cuts.\nEvery symmetric cut that is not exponential is \\emph{multiplicative}. \n\\emph{Commuting cuts} are all the remaining instances of $ cut $ not mentioned above; $(p, p)$ is one example.\n\\par\nA \\textit{lazy cut} is every instance of the cut~\\eqref{eqn: cut elimination format} which is both exponential and such that \n$N$ is a value.\n\\par\nA \\emph{deadlock} is every instance of the cut~\\eqref{eqn: cut elimination format} which is both exponential and such that $\\Gamma\\neq \\emptyset$.
Otherwise, it is \\emph{safe}.\n\\qed\n\\end{defn}\n\n\n\\begin{figure}[t]\n\\begin{equation*}\n\\scalebox{0.6}{$\n\\begin{aligned}\n\\AxiomC{$\\mathcal{D}$}\n\t\\noLine\n\t\\UnaryInfC{$\\Gamma, x:\\sigma \\vdash M: A$}\n\t\\RightLabel{$\\multimap R$}\n\t\\UnaryInfC{$\\Gamma \\vdash \\lambda x.M: \\sigma\\multimap A$}\n\t\\AxiomC{$\\mathcal{D}'$}\n\t\\noLine\n\t\\UnaryInfC{$\\Delta\\vdash N:\\sigma$}\n\t\\AxiomC{$\\mathcal{D}''$}\n\t\\noLine\n\t\\UnaryInfC{$\\Theta, z:A \\vdash P:\\tau$}\n\t\\RightLabel{$\\multimap L$}\n\t\\BinaryInfC{$\\Delta,\\Theta, y:\\sigma\\multimap A \\vdash P[yN\/z]:\\tau$}\n\t\\RightLabel{$cut$}\n\t\\BinaryInfC{$\\Gamma, \\Delta, \\Theta \\vdash P[(\\lambda x.M)N\/z]:\\tau$}\n\t\\DisplayProof\n&\\rightsquigarrow \\ \n\\AxiomC{$\\mathcal{D}'$}\n\t\\noLine\n\t\\UnaryInfC{$\\Delta\\vdash N:\\sigma$}\n\t\\AxiomC{$\\mathcal{D}$}\n\t\\noLine\n\t\\UnaryInfC{$\\Gamma, x:\\sigma \\vdash M:A$}\n\t\\RightLabel{$cut$}\n\t\\BinaryInfC{$\\Gamma, \\Delta \\vdash M[N\/x]:A$}\n\t\\AxiomC{$\\mathcal{D}''$}\n\t\\noLine\n\t\\UnaryInfC{$\\Theta, z:A \\vdash P:\\tau$}\n\t\\RightLabel{$cut$}\n\t\\BinaryInfC{$\\Gamma, \\Delta, \\Theta \\vdash P[M[N\/x]\/z]:\\tau$}\n\t\\DisplayProof\\\\ \\\\ \n\t\t\\AxiomC{$\\mathcal{D}$}\n\t\\noLine\n\t\\UnaryInfC{$\\Gamma \\vdash M: A\\langle\\gamma\/\\alpha\\rangle$}\n\t\\RightLabel{$\\forall R$}\n\t\\UnaryInfC{$\\Gamma \\vdash M: \\forall\\alpha. A$}\n\t\\AxiomC{$\\mathcal{D}'$}\n\t\\noLine\n\t\\UnaryInfC{$\\Delta, x: A\\langle B\/\\alpha\\rangle \\vdash N:\\tau$}\n\t\\RightLabel{$\\forall L$}\n\t\\UnaryInfC{$\\Delta, x: \\forall\\alpha. 
A \\vdash N:\\tau$}\n\t\\RightLabel{$cut$}\n\t\\BinaryInfC{$\\Gamma,\\Delta \\vdash N[M\/x]:\\tau$}\n\t\\DisplayProof\n& \\rightsquigarrow \\ \n\t\\AxiomC{$\\mathcal{D} \\langle B \/\\alpha\\rangle$}\n\t\\noLine\n\t\\UnaryInfC{$\\Gamma\\vdash M:A\\langle B\/\\alpha\\rangle$}\n\t\\AxiomC{$\\mathcal{D}'$}\n\t\\noLine\n\t\\UnaryInfC{$\\Delta, x:A\\langle B\/\\alpha\\rangle \\vdash N:\\tau$}\n\t\\RightLabel{$cut$}\n\t\\BinaryInfC{$\\Gamma, \\Delta \\vdash N[M\/x]:\\tau$}\n\t\\DisplayProof\\\\ \\\\ \n\t\t\\AxiomC{$\\mathcal{D}$}\n\t\\noLine\n\t\\UnaryInfC{$\\vdash V: \\sigma$}\n\t\\RightLabel{$p$}\n\t\\UnaryInfC{$\\vdash V: \\shpos \\sigma$}\n\t\\AxiomC{$\\mathcal{D}'$}\n\t\\noLine\n\t\\UnaryInfC{$\\Gamma, x: \\sigma \\vdash M:\\tau$}\n\t\\RightLabel{$d$}\n\t\\UnaryInfC{$\\Gamma, y: \\shpos \\sigma \\vdash M[y\/x]:\\tau$}\n\t\\RightLabel{$cut$}\n\t\\BinaryInfC{$\\Gamma \\vdash M[ V\/x]:\\tau$}\n\t\\DisplayProof\n& \\rightsquigarrow \\ \n\t\\AxiomC{$\\mathcal{D}$}\n\t\\noLine\n\t\\UnaryInfC{$\\vdash V:\\sigma$}\n\t\\AxiomC{$\\mathcal{D}'$}\n\t\\noLine\n\t\\UnaryInfC{$\\Gamma, x:\\sigma \\vdash M:\\tau$}\n\t\\RightLabel{$cut$}\n\t\\BinaryInfC{$\\Gamma \\vdash M[V\/x]:\\tau$}\n\t\\DisplayProof\\\\\\\\ \n\\AxiomC{$\\mathcal{D}$}\n\\noLine\n\\UnaryInfC{$\\vdash V:\\sigma$}\n\\RightLabel{$p$}\n\\UnaryInfC{$\\vdash V: \\shpos \\sigma$}\n\\AxiomC{$\\mathcal{D}'$}\n\\noLine\n\\UnaryInfC{$\\Delta \\vdash M:\\tau$}\n\\RightLabel{$w$}\n\\UnaryInfC{$\\Delta, x: \\shpos \\sigma \\vdash \\mathtt{discard}_{\\sigma}\\, x \\mathtt{\\ in\\ }M:\\tau$}\n\\RightLabel{$cut$}\n\\BinaryInfC{$\\Delta \\vdash \\mathtt{discard}_{\\sigma}\\, V \\mathtt{\\ in\\ }M:\\tau$}\n\\DisplayProof\n& \\rightsquigarrow \\ \n\\AxiomC{$\\mathcal{D}'$}\n\\noLine\n\\UnaryInfC{$\\Delta \\vdash M:\\tau$}\n\\DisplayProof\\\\ \\\\\n\\AxiomC{$\\mathcal{D}$}\n\\noLine\n\\UnaryInfC{$\\vdash V: \\sigma$}\n\\RightLabel{$p$}\n\\UnaryInfC{$\\vdash V: \\shpos 
\\sigma$}\n\\AxiomC{$\\mathcal{D}_1$}\n\\noLine\n\\UnaryInfC{$\\Delta, y: \\shpos \\sigma, z:\\shpos \\sigma \\vdash M: \\tau$}\n\\AxiomC{$\\mathcal{D}_2$}\n\\noLine\n\\UnaryInfC{$\\vdash U :\\sigma$}\n\\RightLabel{$c$}\n\\BinaryInfC{$\\Delta, x:\\shpos \\sigma \\vdash \\mathtt{copy}^{U }_{\\sigma}\\, x \\mathtt{\\ as\\ } y, z \\mathtt{\\ in\\ }M:\\tau$}\n\\RightLabel{$cut$}\n\\BinaryInfC{$\\Delta \\vdash \\mathtt{copy}^{U} _{\\sigma}\\, \nV \\mathtt{\\ as\\ } y, z \\mathtt{\\ in \\ }M:\\tau$}\n\\DisplayProof\n&\\rightsquigarrow \\ \n\\AxiomC{$\\mathcal{D}$}\n\\noLine\n\\UnaryInfC{$\\vdash V : \\sigma$}\n\\RightLabel{$p$}\n\\UnaryInfC{$\\vdash V : \\shpos \\sigma$}\n\\AxiomC{$\\mathcal{D}$}\n\\noLine\n\\UnaryInfC{$\\vdash V :\\sigma$}\n\\RightLabel{$p$}\n\\UnaryInfC{$\\vdash V : \\shpos \\sigma$}\n\\AxiomC{$\\mathcal{D}_1$}\n\\noLine\n\\UnaryInfC{$\\Delta, y: \\shpos \\sigma, z: \\shpos \\sigma \\vdash M:\\tau$}\n\\RightLabel{$cut$}\n\\BinaryInfC{$\\Delta, z: \\shpos \\sigma \\vdash M[V\/y]:\\tau$}\n\\RightLabel{$cut$}\n\\BinaryInfC{$\\Delta \\vdash M[V\/y, V\/z]:\\tau$}\n\\DisplayProof\n\\end{aligned}\n$}\n\\end{equation*}\n\\caption{\\emph{Lazy} cut-elimination rules for the principal cuts $(\\multimap$R$,\\multimap$L$)$, $(\\forall$L$,\\forall$R$)$, \n$(p, d)$, $(p, w)$, and $(p, c)$.}\n\\label{fig: cut elimination exponential}\n\\end{figure}\n\n\\noindent \nThe lazy cut-elimination rules that we introduce here below \nare the standard ones, but restricted to avoid the elimination of non lazy instances of the\nexponential cuts $(p, d)$, $(p, w)$ and $(p, c)$. 
\n\n\\begin{defn}[Lazy cut-elimination rules]\n\\label{defn:Lazy cut-elimination rules}\nFigure~\\ref{fig: cut elimination exponential} \nintroduces the \\emph{lazy cut-elimination rules} for the principal \ncuts.\nThe elimination rules for commuting and axiom cuts are standard, so we omit\nthem all from Figure~\\ref{fig: cut elimination exponential};\nthe (possibly) less obvious commuting ones can be recovered \nfrom the reductions on terms in Figure~\\ref{fig: term commuting conversions}.\nWe remark that the elimination of the principal cuts\n$(\\forall$R$,\\forall$L$)$ and $(p, d)$ does not modify the subject \nof their concluding judgment. So, we call them \\textit{insignificant} as every other cut-elimination rule non influencing their concluding subject.\nGiven a derivation $ \\mathcal{D} $, we write $\\mathcal{D}\\rightsquigarrow \\mathcal{D}'$ if $\\mathcal{D}$ rewrites to some $\\mathcal{D}$ by \none of the above rules.\n\\qed\n\\end{defn}\n\\noindent\nLazy cut-elimination is a way of preventing the erasure and the duplication of terms that are not values, and hence to restore a correspondence between cut-elimination and term reduction. However, one can run into derivations containing exponential cuts that will never turn into lazy cuts, like the deadlock in~\\eqref{eqn: non linear duplication}. The solution we adopt is to identify a set of judgments whose derivations can be rewritten into cut-free ones by a sequence of lazy cut-elimination steps.\n\n\\begin{defn}[Lazy types, lazy judgments and lazy derivations]\n\\label{defn: lazyness} \nWe say that $\\sigma$ is a \\textit{lazy type} if it contains no \\emph{negative} occurrences of $\\forall$. \nAlso, we say that $x_1:\\sigma_1, \\ldots, x_n:\\sigma_n \\vdash M:\\tau$ \nis a \\textit{lazy judgment} if $\\tau$ is a lazy type and \n$\\sigma_1$, \\ldots, $\\sigma_n$ contain no \\emph{positive} \noccurrences of $\\forall$. 
\nLast, $\\mathcal{D}\\triangleleft \\Gamma \\vdash \nM: \\tau$ is called a \\textit{lazy derivation} if $\\Gamma \\vdash M: \\tau$ \nis a lazy judgment.\n\\qed\n\\end{defn}\n\\noindent\nLemma~\\ref{fact: quantification propagation} and \n\\ref{lem: lazy derivation properties} here below, as well as \nDefinition~\\ref{defn: bound mio} and\n\\ref{defn:Height of an inference rule}, are the last preliminaries\nto show the relevance of lazy cuts that occur in lazy derivations.\n\n\\begin{lem} {\\ }\n\\label{fact: quantification propagation}\n\\begin{enumerate}[(1)]\n\\item \n\\label{enum: shpos is lazy} \nEvery type of the form $\\shpos \\sigma$ is closed and lazy.\n\\item \n\\label{enum: closed implies positive forall} \nEvery closed type has at least a positive quantification. \n\\item \n\\label{enum: rules c,d,w,forallL and some p are not lazy} \nLet $ \\rho $ be any instance of \n$\\forall \\mathrm{L}$, $d$, $w$, $c$, and $ p $, the latter with a\nnon empty context. The conclusion of $ \\rho $ is not lazy.\n\\item \n\\label{enum: quantification propagation} \nLet $ \\rho $ be any instance of $ ax $, $ \\multimap\\mathrm{R}$, \n$ \\multimap\\mathrm{L}$, $ \\forall\\mathrm{R}$, and $ p $, the latter with an empty context. If the conclusion of $ \\rho $ is lazy, then, every premise of $ \\rho $ is lazy.\n\\item \n\\label{enum: cut-free and lazy implies all judgements lazy} \nIf $\\mathcal{D}$ is a cut-free and lazy derivation of $\\mathsf{LEM}$, \nthen all its judgments are lazy and no occurrences of $\\forall \\mathrm{L}$, $d$, $w$, $c$, and $ p $, the latter with a\nnon empty context, can exist in $ \\mathcal{D} $.\n\\end{enumerate}\n\\end{lem}\n\\begin{proof}\nPoint~\\ref{enum: shpos is lazy} holds by Definition~\\ref{defn:Types for IMLL2shpos}. \nPoint~\\ref{enum: closed implies positive forall} is by a structural induction on types. 
\nConcerning Point~\\ref{enum: rules c,d,w,forallL and some p are not lazy}, the conclusions of $d$, $w$, $c$, and $ p $ contain $ \\shpos\\sigma $. This is a closed type, hence, by Point~\\ref{enum: closed implies positive forall}, such conclusions are not lazy judgments. Moreover, \n$\\forall \\mathrm{L}$ introduces a positive occurrence of\n$ \\forall $ in the context of its conclusion, so that this latter cannot be a lazy judgment. \nPoint~\\ref{enum: quantification propagation} is a case analysis on every listed inference rule.\nAs for Point~\\ref{enum: cut-free and lazy implies all judgements lazy},\nwe can proceed by structural induction on $ \\mathcal{D} $. By definition,\nthe conclusion of $ \\mathcal{D} $ is a lazy judgment. \nPoint~\\ref{enum: rules c,d,w,forallL and some p are not lazy} excludes\nthat one among $\\forall \\mathrm{L}$, $d$, $w$, $c$, and $ p $ (with a\nnon empty context) may be the last rule of $ \\mathcal{D} $. So, \nonly one among $ ax $, $ \\multimap\\mathrm{R}$, \n$ \\multimap\\mathrm{L}$, $ \\forall\\mathrm{R}$, and $ p $ (with an empty context) can be the concluding rule, say $ r $, of $ \\mathcal{D} $.\nPoint~\\ref{enum: quantification propagation} implies that \nall the premises of $ r $ are lazy. 
Hence, we can apply the inductive hypothesis to the derivations of the premises of $ r $ and conclude.\n\\end{proof}\n\n\\begin{defn}[Size of a derivation]\n\t\\label{defn: bound mio}\n\tThe \\emph{size} $\\vert\\mathcal{D}\\vert$ of a derivation $\\mathcal{D}$ in $\\mathsf{LEM}$ is defined by induction:\n\t\\begin{enumerate}\n\t\t\\item \n\t\tIf $\\mathcal{D}$ is $ax$ then $\\vert\\mathcal{D}\\vert=1$.\n\t\t\\item \n\t\tIf $\\mathcal{D}$ is a derivation $\\mathcal{D}'$ that concludes by a\n\t\trule with a single premise, then $\\vert\\mathcal{D}\\vert= \\vert\\mathcal{D}'\\vert+1$.\n\t\t\\item \n\t\tIf $\\mathcal{D}$ composes two derivations $\\mathcal{D}'$ and $\\mathcal{D}'' $ by a rule with two premises, but different from $c$, \n\t\tthen $\\vert\\mathcal{D}\\vert= \\vert\\mathcal{D}'\\vert+ \\vert\\mathcal{D}''\\vert+1$. \n\t\t\\item \n\t\t\\label{enum:on standard dimension of c}\n\t\tIf $\\mathcal{D}$ composes two derivations $\\mathcal{D}'$ and $\\mathcal{D}'' $ by the rule $c$, \n\t\tthen $\\vert\\mathcal{D}\\vert= \\vert\\mathcal{D}'\\vert+ \\vert\\mathcal{D}''\\vert+3$. 
\n\t\t\\qed\n\t\\end{enumerate}\n\\end{defn}\n\n\\begin{rem}\nAdding ``3'' instead of the possibly expected ``1'' in the clause~\\eqref{enum:on standard dimension of c} of Definition~\\ref{defn: bound mio} highlights the non linearity that the instances of $ c $ \nintroduce in the course of the lazy cut-elimination on \\textsf{LEM} of Definition~\\ref{defn:Lazy cut-elimination strategy} below.\n\\qed\n\\end{rem}\n\n\\begin{lem} \n\\label{lem: lazy derivation properties}\nLet $\\mathcal{D}\\triangleleft x_1: \\sigma_1, \\ldots, x_n: \\sigma_n \\vdash M: \\sigma$ be a cut-free and lazy derivation.\n\\begin{enumerate}[(1)]\n\\item \n\\label{enum: cut free and lazy implies linear normal form} \n$M$ is a linear $\\lambda$-term in normal form.\n\\item \n\\label{enum: size term bounded size types LEML} \n$\\vert M \\vert \\leq \\sum_{i=1}^n \\vert \\sigma_i \\vert + \\vert \\sigma \\vert $.\n\\item \n\\label{enum: relation lazy cut free derivation and term} \n$\\vert \\mathcal{D} \\vert= \\vert M \\vert +k$, where $k$ is the number of $\\forall$ and $\\shpos$ occurring in $\\sigma_1, \\ldots, \\sigma_n, \\sigma$.\n\\item \n\\label{enum: lazy cut free derivations are smaller if terms are smaller} \nIf $\\mathcal{D}'\\triangleleft x_1: \\sigma_1, \\ldots, x_n: \\sigma_n \\vdash N: \\sigma$ is a lazy and cut-free derivation, then $\\vert N \\vert \\leq \\vert M \\vert$ implies $\\vert \\mathcal{D}'\\vert \\leq \\vert \\mathcal{D} \\vert $.\n\\item \n\\label{enum: finite number of normal forms} \nThe set of values with lazy type $ \\sigma $ is finite.\n\\end{enumerate}\n\\end{lem}\n\\begin{proof}\nThe assumptions on $ \\mathcal{D}$ allow to apply Lemma~\\ref{fact: quantification propagation}.\\ref{enum: cut-free and lazy implies all judgements lazy}\nwhich implies that every judgment in $\\mathcal{D}$ is lazy and\nfree of $\\forall$L, $d$, $w$, $c$ and $ p $ (with a non empty context). 
\nHence, we can prove Points~\\ref{enum: cut free and lazy implies linear normal form}-\\ref{enum: relation lazy cut free derivation and term} by induction on the structure of $\\mathcal{D}$. Point~\\ref{enum: lazy cut free derivations are smaller if terms are smaller} is a corollary of\nPoint~\\ref{enum: relation lazy cut free derivation and term}. Point~\\ref{enum: finite number of normal forms} is a corollary of \nPoint~\\ref{enum: size term bounded size types LEML}.\n\\end{proof}\n\n\n\n\\begin{defn}[Height of an inference rule]\n\\label{defn:Height of an inference rule}\nLet $\\mathcal{D}\\triangleleft \\Gamma\\vdash M:\\sigma$ be a derivation and \n$ r $ a rule instance in it. The \\emph{height of $r$}, written $ h(r) $, is the number of rule instances from the conclusion \n$ \\Gamma\\vdash M:\\sigma $ of $ \\mathcal{D} $ upward to the conclusion of $ r $. The \\textit{height of $\\mathcal{D}$}, written $h(\\mathcal{D})$, is the largest $h(r)$ among its rule instances.\n\\qed\n\\end{defn}\n\n\n\\noindent\nLemma~\\ref{lem: existence of the safe exponential cut} \nand~\\ref{lem: above the cut} assure that we can eliminate exponential lazy cuts from a lazy derivation.\n\n\\begin{lem}[Existence of a lazy $ cut $] \n\\label{lem: existence of the safe exponential cut} \nLet $\\mathcal{D}$ be a lazy derivation with only exponential cuts in it.\nAt least one of those cuts is \\emph{safe}.\n\\end{lem}\n\\begin{proof}\nLet $\\Gamma \\vdash M: \\tau$ be the conclusion of $ \\mathcal{D} $.\nBy contradiction, let us suppose that every occurrence of (exponential)\n$ cut $ in $ \\mathcal{D} $ is a deadlock.\nAt least one of them, say $ c_m $, has minimal height $ h(c_m) $, i.e.~no other $ cut $ occurs in the sequence of rule instances, say $ r_1,\\ldots r_n $, from the conclusion of $ c_m $ down to the\none of $ \\mathcal{D} $.\nSince $ c_m $ is a deadlock, its leftmost premise has form $ \\Delta \\vdash N:\\shpos\\sigma $, \nwhere $ \\Delta\\neq \\emptyset $. 
\nBy Proposition~\\ref{prop: exponential context}, $\\Delta$ is strictly exponential and the whole $ \\Delta \\vdash N:\\shpos\\sigma $ is a non lazy\njudgment by Lemma~\\ref{fact: quantification propagation}.\\ref{enum: shpos is lazy} and Lemma~\\ref{fact: quantification propagation}.\\ref{enum: closed implies positive forall}. \nThe contraposition of\nLemma~\\ref{fact: quantification propagation}.\\ref{enum: quantification propagation} \nimplies that the non lazy judgment in $ c_m $ can only be transformed to \na non lazy judgment by every $ r_i $, with $ 1\\leq i\\leq n $, letting the \nconclusion of $ \\mathcal{D} $ non lazy, so\ncontradicting the assumption.\nHence, $ c_m $ must be safe.\n\\end{proof}\n\n\\begin{lem}[Eliminating a lazy $ cut $]\n\\label{lem: above the cut}\nLet $\\mathcal{D}$ be a lazy derivation with only exponential cuts in it.\nA lazy derivation $\\mathcal{D}^*$ exists such that \nboth $\\mathcal{D} \\rightsquigarrow \\mathcal{D}^*$, by reducing a lazy cut, and \n$\\vert\\mathcal{D}^*\\vert < \\vert\\mathcal{D}\\vert$.\n\\end{lem}\n\\begin{proof}\nLemma~\\ref{lem: existence of the safe exponential cut}\nimplies that $ \\mathcal{D} $ contains at least an exponential cut which is safe. Let us take $(p, X)$ with maximal height $ h((p, X)) $ among those safe instances of $ cut $. \nSo, if $ (p, X) $ has form:\n\\begin{equation}\n\\label{eqn: above the cut}\n\\AxiomC{$\\mathcal{D}'$}\n\\noLine\n\\UnaryInfC{$\\vdash N: \\sigma$}\n\\RightLabel{$p$}\n\\UnaryInfC{$\\vdash N: \\shpos \\sigma$}\n\\AxiomC{\\vdots}\n\\RightLabel{$X$}\n\\UnaryInfC{$\\Delta, x:\\shpos \\sigma \\vdash M:\\tau$}\n\\RightLabel{$cut$ \\enspace ,}\n\\BinaryInfC{$ \\Delta \\vdash M[N\/x]:\\tau$}\n\\DisplayProof\n\\end{equation}\n\\noindent\nthen $\\mathcal{D}'$ is a lazy derivation because $\\shpos \\sigma$ is a lazy type by Lemma~\\ref{fact: quantification propagation}.\\ref{enum: shpos is lazy}. 
Since $\\mathcal{D}'$ is lazy and can only contain exponential cuts, by Lemma~\\ref{lem: existence of the safe exponential cut} and by maximality of $ h((p, X)) $, it is forcefully cut-free.\nSo, by Lemma~\\ref{lem: lazy derivation properties}.\\ref{enum: cut free and lazy implies linear normal form}, we have that $ N $ is a value, \ni.e.~$(p, X)$ is lazy and we can reduce it to obtain $ \\mathcal{D}^*$. \nIf $ X $ in~\\eqref{eqn: above the cut} is $d$ or $w$, then it is \nsimple to show\nthat $\\vert\\mathcal{D}^*\\vert<\\vert\\mathcal{D}\\vert$. \nLet $ X $ be $ c $. Then, \\eqref{eqn: above the cut} is:\n\\begin{equation}\n\\label{eqn:above the cut (p,c)}\n\\AxiomC{$\\mathcal{D}'$}\n\\noLine\n\\UnaryInfC{$\\vdash V: \\sigma$}\n\\RightLabel{$p$}\n\\UnaryInfC{$\\vdash V: \\shpos \\sigma$}\n\\AxiomC{$\\mathcal{D}''$}\n\\noLine\n\\UnaryInfC{$\\Delta, y: \\shpos \\sigma, z:\\shpos \\sigma \\vdash M': \\tau$}\n\\AxiomC{$\\mathcal{D}'''$}\n\\noLine\n\\UnaryInfC{$\\vdash U: \\sigma$}\n\\RightLabel{$c$}\n\\BinaryInfC{$\\Delta, x:\\shpos \\sigma \\vdash \\mathtt{copy}^{U}_{\\sigma}\\, x \\mathtt{\\ as\\ }y,z \\mathtt{\\ in\\ }M':\\tau$}\n\\RightLabel{$cut$ \\enspace ,}\n\\BinaryInfC{$\\Delta \\vdash \\mathtt{copy}^{U} _{\\sigma}\\, V \\mathtt{\\ as\\ }y,z \\mathtt{\\ in \\ }M':\\tau$}\n\\DisplayProof\n\\end{equation}\nwith $\\mathcal{D}'''$ lazy and cut-free for the same reasons as $ \\mathcal{D}' $ is. 
So, \\eqref{eqn:above the cut (p,c)} can reduce to:\n\\begin{equation}\n\\label{eqn:reducing above the cut (p,c)}\n\\AxiomC{$\\mathcal{D}'$}\n\\noLine\n\\UnaryInfC{$\\vdash V : \\sigma$}\n\\RightLabel{$p$}\n\\UnaryInfC{$\\vdash V : \\shpos \\sigma$}\n\\AxiomC{$\\mathcal{D}'$}\n\\noLine\n\\UnaryInfC{$\\vdash V :\\sigma$}\n\\RightLabel{$p$}\n\\UnaryInfC{$\\vdash V : \\shpos \\sigma$}\n\\AxiomC{$\\mathcal{D}''$}\n\\noLine\n\\UnaryInfC{$\\Delta, y: \\shpos \\sigma, z: \\shpos \\sigma \\vdash M':\\tau$}\n\\RightLabel{$cut$ \\enspace .}\n\\BinaryInfC{$\\Delta, z: \\shpos \\sigma \\vdash M'[V\/y]:\\tau$}\n\\RightLabel{$cut$}\n\\BinaryInfC{$\\Delta \\vdash M'[V\/y, V\/z]:\\tau$}\n\\DisplayProof\n\\end{equation}\n\\noindent\nBy Lemma~\\ref{lem: lazy derivation properties}.\\ref{enum: finite number of normal forms}, we can safely assume that $U$ is a value with largest size among values of type $\\sigma$.\nSo, Lemma~\\ref{lem: lazy derivation properties}.\\ref{enum: size term bounded size types LEML} implies $\\vert V \\vert \\leq \\vert U \\vert$, \nfrom which $\\vert \\mathcal{D}' \\vert \\leq \\vert \\mathcal{D}''' \\vert$,\nby Lemma~\\ref{lem: lazy derivation properties}.\\ref{enum: lazy cut free derivations are smaller if terms are smaller}.\nBy applying Definition~\\ref{defn: bound mio}.\\ref{enum:on standard dimension of c} to \n\\eqref{eqn:above the cut (p,c)} and\n\\eqref{eqn:reducing above the cut (p,c)}, we have $\\vert \\mathcal{D}^* \\vert < \\vert \\mathcal{D} \\vert$.\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\\begin{defn}[Lazy cut-elimination strategy]\n\\label{defn:Lazy cut-elimination strategy}\nLet $ \\mathcal{D} $ be a lazy derivation of $ \\textsf{LEM} $.\nLet a \\emph{round} on $ \\mathcal{D} $ be defined as follows:\n\\begin{enumerate}[\\{1\\}]\n\t\\item \\label{enum: round comm} \n\tLet eliminate all the commuting instances of $ cut $.\n\t\\item \\label{enum: round sym} \n\tIf any instance of $ cut $ remains, it is necessarily symmetric.\n\tLet reduce a 
multiplicative cut, if any.\n\tOtherwise, let reduce a lazy exponential cut, if any.\n\end{enumerate}\nThe \emph{lazy cut-elimination strategy} iterates \nrounds, starting from $ \mathcal{D} $, as long as instances of $ cut $\nexist in the obtained derivation.\n\qed\n\end{defn}\n\n\t\n\n\n\begin{thm}[Lazy cut-elimination has a cubic bound]\n\label{thm: cut elimination for downarrow IMLL2}\nLet $\mathcal{D}$ be a lazy derivation. \nThe lazy cut-elimination can reduce $\mathcal{D}$ to a cut-free $\mathcal{D}^*$ in $\mathcal{O}(\vert \mathcal{D} \vert^3)$ steps.\n\end{thm}\n\begin{proof}\nLet $H(\mathcal{D})$ be the sum of the heights $h(\mathcal{D}')$ \nof all sub-derivations $\mathcal{D}'$ of \n$\mathcal{D}$ whose conclusion is an instance of $ cut $.\nWe proceed by induction on the lexicographic order of the pairs \n$\langle\vert \mathcal{D}\vert , H(\mathcal{D}) \rangle$. \nTo show that the lazy cut-elimination strategy in Definition~\ref{defn:Lazy cut-elimination strategy} terminates, we\nstart by applying a round to $ \mathcal{D} $, using \nstep~\ref{enum: round comm}.\nEvery commuting cut-elimination step just moves an instance of $ cut $ upward, strictly decreasing $H(\mathcal{D})$ and leaving\n$ \vert \mathcal{D}\vert $ unaltered. \nLet us continue by applying\n\tstep~\ref{enum: round sym} of the round.\nAs usual, $ \vert \mathcal{D}\vert $ shrinks when eliminating a multiplicative cut.\nIf only exponential instances of $ cut $ remain, by Lemma~\ref{lem: above the cut} we can rewrite \n$ \mathcal{D} $ to $ \mathcal{D}' $ by reducing a lazy exponential cut in such a way that \n$ \vert \mathcal{D}'\vert < \vert \mathcal{D} \vert $. 
Therefore, the lazy cut-elimination strategy terminates with a cut-free derivation $ \mathcal{D}^* $.\n\par\nWe now exhibit a bound on the number of \ncut-elimination steps from $ \mathcal{D} $ to $ \mathcal{D}^* $.\nGenerally speaking, we can represent a lazy strategy as:\n\begin{align}\n\label{align:cut-elimination bound diagram}\n\mathcal{D}\n= \mathcal{D}_0\n\underbrace{\longrightarrow}_{cc_0}\n\mathcal{D}'_0\n\rightsquigarrow\n\mathcal{D}_1\n\,\cdots \underbrace{\longrightarrow}_{cc_i}\n\mathcal{D}'_{i}\n\rightsquigarrow\n\mathcal{D}_{i+1}\n\underbrace{\longrightarrow}_{cc_{i+1}}\cdots\,\n\mathcal{D}'_{n-1}\n\rightsquigarrow\n\mathcal{D}_n\n\underbrace{\longrightarrow}_{cc_{n}}\n\mathcal{D}'_{n}\n= \mathcal{D}^*\n\enspace ,\n\end{align}\nwhere every $ cc_j $ denotes the number of \ncommuting cuts applied from derivation $ \mathcal{D}_{j} $ to derivation\n$ \mathcal{D}'_{j} $, for every $ 0\leq j\leq n $.\nA bound on every $ cc_j $ is $ \vert\mathcal{D}_{j}\vert^2 $ because every\nrule instance in $ \mathcal{D}_{j} $ can, in principle, be commuted with every other. \nThe first part of the proof implies\n$ \vert\mathcal{D}_{j}\vert = \vert\mathcal{D}'_{j}\vert $,\nfor every $ 0\leq j\leq n $.\nLemma~\ref{lem: above the cut} implies\n$ \vert\mathcal{D}'_{j}\vert > \vert\mathcal{D}_{j+1}\vert $,\nfor every $ 0\leq j\leq n-1 $.\nSo, $ n\leq \vert\mathcal{D}\vert $ and the total number of cut-elimination steps in \n\eqref{align:cut-elimination bound diagram} is \n$ O(\vert\mathcal{D}\vert\cdot\vert\mathcal{D}\vert^2) $.\n\end{proof}\n\n\begin{rem}\nThe cubic bound on the lazy strategy still holds when we apply the \emph{lazy} cut-elimination to \emph{non lazy} derivations. 
Of course, in that case, deadlocks may remain in the final derivation where no instance of $cut$ can be further eliminated.\n\qed\n\end{rem}\n\n\n\n\subsection{Subject reduction theorem}\n\label{sec: subject reduction}\nThe proof of Subject reduction requires some standard preliminaries.\n\begin{lem}[Substitution]\n\label{lem: substitution type variables} \nIf $\Gamma \vdash M: \tau$ then $\Gamma[A\/ \alpha] \vdash M: \tau[A\/ \alpha]$, for every \emph{linear} type $ A $.\n\end{lem}\n\begin{lem}[Generation] \label{lem: generation} {\ } \n\begin{enumerate}[(1)]\n\item \label{lem:generation 1} \nIf $\mathcal{D} \triangleleft \Gamma \vdash \lambda x. M:\tau$, then \n$\tau= \shpos^n \forall \vec{ \alpha}. (\sigma \multimap A)$, where \n$\shpos^n\triangleq \shpos \overset{n}{\ldots}\shpos$ and \n$\vec{\alpha}= \alpha_1,\ldots, \alpha_m$, for some $n\geq0$ and $m\geq 0$.\n\item \label{lem:generation 5} \nIf $\mathcal{D} \triangleleft \Delta, x: \forall \alpha.A \vdash P:\tau$, then an instance $ r $ of $\forall\mathrm{L}$ exists in $\mathcal{D}$ \nwith \nconclusion $\Delta', x: \forall \alpha.A \vdash P':\tau'$,\nfor some $\Delta', P'$ and $\tau'$. \nI.e., $ r $ introduces $x:\forall \alpha.A$.\n\item \label{lem:generation 6}\nIf $\mathcal{D} \triangleleft \Delta, x: \sigma \multimap B \n \vdash P[xN\/y]: \tau$, \nthen an instance $ r $ of $\multimap\mathrm{L}$ \nexists in $\mathcal{D}$ with\nconclusion $\Delta', x: \sigma \multimap B \vdash P'[xN'\/y] : \tau'$,\nfor some $\Delta', P', N'$ and $\tau'$.\nI.e., $ r $ introduces $x:\sigma \multimap B$.\n\item \label{lem:generation 2} \nIf $\mathcal{D} \triangleleft \Gamma \vdash \lambda x. M : \forall \alpha. 
A$, then a derivation $\\mathcal{D}'$ exists\nwhich is $\\mathcal{D}$ with some rule permuted in order to obtain an instance of $\\forall\\mathrm{R}$ as last rule of $\\mathcal{D}'$.\n\\item \\label{lem:generation 3} \nIf $\\mathcal{D}\\triangleleft \\Gamma \\vdash \\lambda x. P: \\sigma \\multimap B$, then a derivation $\\mathcal{D}'$ exists\nwhich is $\\mathcal{D}$ with some rule permuted in order to obtain an instance of $\\multimap\\mathrm{R}$ as last rule of $\\mathcal{D}'$.\n\\item \\label{lem:generation 7}\nIf $\\mathcal{D} \\triangleleft \\Delta, x: \\shpos \\sigma \\vdash P[xN\/y] : \n\\tau$, then an instance $ r $ of $ d $ exists in $\\mathcal{D}$ \nwith \nconclusion $\\Delta', x: \\shpos \\sigma \\vdash P'[xN'\/y]:\\tau'$,\nfor some $\\Delta', P', N'$ and $\\tau'$.\nI.e., $d$ introduces $x:\\shpos \\sigma$.\n\\item \\label{lem:generation 4} \nIf $\\mathcal{D} \\triangleleft \\Gamma \\vdash M: \\shpos \\sigma$, then a derivation $\\mathcal{D}'$ exists\nwhich is $\\mathcal{D}$ with some rule permuted in order to get an instance of $p$ as last rule of $\\mathcal{D}'$.\n\\item \\label{lem:generation 8}\nIf $\\mathcal{D} \\triangleleft \\Delta, x: \\shpos \\sigma \\vdash \n\\mathtt{discard}_{\\sigma}\\, x \\mathtt{\\ in \\ }P: \\tau$, then\nan instance $ r $ of $ w $ exists in $\\mathcal{D}$ with \nconclusion \n$ \\Delta', x: \\shpos\\sigma \n \\vdash \n \\mathtt{discard}_{\\sigma}\\, x \\mathtt{\\ in \\ }P': \\tau' $,\n for some $ \\Delta', P' $ and $ \\tau' $.\nI.e., $r$ introduces $x:\\shpos \\sigma$.\n\\item \\label{lem:generation 9}\nIf $\\mathcal{D} \\triangleleft \\Delta, x: \\shpos \\sigma \\vdash \n\t\\mathtt{copy}_{\\sigma}^{U} \\, x \\mathtt{\\ as \\ } x_1, x_2 \\mathtt{\\ in \\ }P: \\tau$, then\n\tan instance $ r $ of $ c $ exists in $\\mathcal{D}$ with \n\tconclusion \n\t$\\Delta', x: \\shpos \\sigma \\vdash \n\t\\mathtt{copy}_{\\sigma}^{U} \\, x \\mathtt{\\ as \\ } \n\tx_1, x_2 \\mathtt{\\ in \\ }P': \\tau'$,\n\tfor some $ \\Delta', P' $ and $ 
\\tau' $.\n\tI.e., $r$ introduces $x:\\shpos \\sigma$.\n\\end{enumerate}\n\\end{lem}\n\\begin{proof}\nWe can adapt the proof by \nGaboardi\\&Ronchi in~\\cite{gaboardi2009light} to $\\mathsf{LEM}$\nbecause the types in Definition~\\ref{defn:Types for IMLL2shpos}\nare a sub-set of Gaboardi\\&Ronchi's \\emph{essential types}.\nIn particular, Point\\!~\\textit{\\ref{lem:generation 4}} relies on\nProposition~\\ref{prop: exponential context}.\n\\end{proof}\n\n\\begin{thm}[Subject reduction] \n\\label{thm:Subject Reduction for IMLL2shpos} \n If $\\Gamma \n\\vdash M: \\tau$ and $M \\rightarrow M^\\prime$, then $\\Gamma \n\\vdash M^\\prime: \\tau$.\n\\end{thm}\n\\begin{proof}\nWe proceed by structural induction on $ \\mathcal{D} $.\nThe crucial case of $M \\rightarrow M^\\prime$\n(Figure~\\ref{fig: term reduction rules}) is when\n$(\\lambda x. P)Q$ exists in $M$ and $\\mathcal{D}$ contains:\n\\begin{prooftree}\n\\AxiomC{$\\mathcal{D}' \\triangleleft \\Delta \\vdash \\lambda x. P: \\sigma$}\n\\AxiomC{$\\mathcal{D}'' \\triangleleft \\Sigma, y: \\sigma \\vdash N[ yQ\/z ]:\\tau$}\n\\RightLabel{$ cut $ \\enspace .}\n\\BinaryInfC{$\\mathcal{D} \\triangleleft \\Delta, \\Sigma \\vdash N[(\\lambda x.P)Q\/z]:\\tau$}\n\\end{prooftree}\nLemma~\\ref{lem: generation}.\\textit{\\ref{lem:generation 1}} implies that $\\sigma= \\shpos^n \\forall \\vec{ \\alpha}. (\\sigma_1 \\multimap C)$, where $\\shpos^n\\triangleq \\shpos \\overset{n}{\\ldots}\\shpos$ and $\\vec{\\alpha}= \\alpha_1,\\ldots, \\alpha_m$, for some $n\\geq0$ and $m\\geq 0$. 
\nLemma~\\ref{lem: generation}.\\textit{\\ref{lem:generation 5}}, \n\\ref{lem: generation}.\\textit{\\ref{lem:generation 6}}, \nand \\ref{lem: generation}.\\textit{\\ref{lem:generation 7}}, \nimply that $\\mathcal{D}''$ has form: \n\\begin{equation}\n\\label{eqn:conclusion of derivation}\n\\AxiomC{\\vdots}\n\\noLine\n\\UnaryInfC{$ \\Sigma_1'' \\vdash Q'':\\sigma_1'$}\n\\AxiomC{$\\vdots$}\n\\noLine\n\\UnaryInfC{$\\Sigma_2'', z: C' \\vdash N'': \\tau''$}\n\\RightLabel{$\\multimap$L}\n\\BinaryInfC{$\\Sigma''_1, \\Sigma''_2, y''':\\sigma_1' \\multimap C' \\vdash N''[y'''Q''\/z]: \\tau''$}\n\\noLine\n\\UnaryInfC{$\\vdots $}\n\\noLine\n\\UnaryInfC{$\\Sigma', y'': \\forall \\vec{\\alpha}. (\\sigma_1 \\multimap C )\\vdash N'[y''Q'\/z]: \\tau'$}\n\\RightLabel{$d$}\n\\UnaryInfC{$\\Sigma', y': \\shpos \\forall \\vec{\\alpha}.( \\sigma_1 \\multimap C) \\vdash N'[y'Q'\/z]: \\tau'$}\n\\noLine\n\\UnaryInfC{$\\vdots $}\n\\noLine\n\\UnaryInfC{$\\Sigma, y:\\shpos ^n \\forall \\vec{\\alpha}. (\\sigma_1 \\multimap C) \\vdash N[yQ\/z]: \\tau$}\n\\DisplayProof\n\\end{equation}\nwhere $\\sigma_1'= \\sigma_1[A_1\/\\alpha_1, \\ldots, A_m \/\\alpha_m]$ and $C'=C[A_1\/\\alpha_1, \\ldots, A_m \/\\alpha_m]$, for some $\\Sigma', \\Sigma''_1, \\Sigma''_2,y', y'',y''', N', N'', Q',Q'', \\tau',\\tau'', A_1, \\ldots, A_m$.\nLemma~\\ref{lem: generation}.\\textit{\\ref{lem:generation 2}}, \n\\ref{lem: generation}.\\textit{\\ref{lem:generation 3}} and \n\\ref{lem: generation}.\\textit{\\ref{lem:generation 4}} imply that, permuting some of its rules,\n$\\mathcal{D}'$ can be reorganized as:\n\\begin{prooftree}\n\\AxiomC{$\\vdots$}\n\\noLine\n\\UnaryInfC{$\\Delta, x:\\sigma_1 \\vdash P: C$}\n\\RightLabel{$\\multimap$R}\n\\UnaryInfC{$\\Delta \\vdash \\lambda x. P:\\sigma_1 \\multimap C$}\n\\RightLabel{$\\forall$R \\enspace ,}\n\\doubleLine\n\\UnaryInfC{$\\Delta \\vdash \\lambda x.P: \\forall \\vec{\\alpha}. 
(\\sigma_1 \\multimap C)$}\n\\doubleLine\n\\RightLabel{$p$}\n\\UnaryInfC{$\\Delta \\vdash \\lambda x.P: \\shpos^n \\forall \\vec{\\alpha}. (\\sigma_1 \\multimap C)$}\n\\end{prooftree}\nwhere the concluding instances of $ p $ are necessary if $n>0$ and\nare legally introduced because $\\Delta$ is strictly exponential as consequence of Proposition~\\ref{prop: exponential context}\nthat we can apply to the judgment beacause $ \\shpos\\sigma $ is\nstrictly exponential as well.\nMoreover, Lemma~\\ref{lem: substitution type variables} assures that a derivation of $\\Delta, x:\\sigma_1' \\vdash P: C'$ exists\nbecause $\\alpha_1, \\ldots, \\alpha_m$ are not free in $\\Delta$. Therefore:\n\\begin{prooftree}\n\\AxiomC{$\\vdots$}\n\\noLine\n\\UnaryInfC{$\\Sigma''_1 \\vdash Q'': \\sigma'_1$}\n\\AxiomC{$\\vdots$}\n\\noLine\n\\UnaryInfC{$\\Delta, x:\\sigma'_1 \\vdash P: C'$}\n\\RightLabel{$cut$}\n\\BinaryInfC{$\\Delta, \\Sigma''_1 \\vdash P[Q''\/x]: C'$}\n\\AxiomC{$\\vdots$}\n\\noLine\n\\UnaryInfC{$\\Sigma''_2, z:C' \\vdash N'': \\tau''$}\n\\RightLabel{$cut$ \\enspace .}\n\\BinaryInfC{$\\Delta, \\Sigma''_1, \\Sigma''_2 \\vdash N''[P[Q''\/x]\/z]: \\tau''$}\n\\noLine\n\\UnaryInfC{$\\vdots$}\n\\noLine\n\\UnaryInfC{$\\Delta, \\Sigma \\vdash N[P[Q\/x]\/z]: \\tau$}\n\\end{prooftree}\n\\noindent\nwhich concludes with the same rules as in~\\eqref{eqn:conclusion of derivation}. A similar proof exists, which relies on\nLemma~\\ref{lem: generation}.\\textit{\\ref{lem:generation 8}}, \nor Lemma~\\ref{lem: generation}.\\textit{\\ref{lem:generation 9}},\nwhen reducing\n$\\mathtt{discard}_{\\sigma}$ $V\\mathtt{\\ in\\ }$ $M$, or \n$\\mathtt{copy}^{U}_{\\sigma}\\, V \\mathtt{\\ as\\ }y,z \\mathtt{\\ in\\ }M$. All the remaining cases are straightforward. \n\\end{proof}\n\\section{Conclusions}\n\\label{section:Conclusions}\\noindent\nWe introduce $\\mathsf{LEM}$. 
It is a type-assignment for the \nlinear $ \lambda $-calculus extended with new constructs that can \nduplicate or erase values, i.e.~closed and normal linear \n$ \lambda $-terms. $\mathsf{LEM}$ enjoys a mildly weakened \ncut-elimination and Subject reduction. The internalization of the \nmechanism of linear weakening and contraction by means of modal rules \nallows us to exponentially compress derivations of $\mathsf{IMLL}_2$. \nOn one side, this enables a more compact representation of boolean circuits than previous ones based on the multiplicative fragment of Linear Logic. On the other, $\mathsf{LEM}$ can represent a Church-like encoding of the natural numbers, together with their successor and addition.\n\nWe conclude by briefly discussing possible future work.\n\nIn Section~\ref{sec: duplication and erasure in a linear setting}, we \nconjecture that a version of the general separation property holds in \nthe linear $\lambda$-calculus (Conjecture~\ref{conj: linear separation}).\nWere it true, we could show that a\n duplicator exists for every finite set of closed terms in \n$\beta\eta$-normal form, and connect\nlinear duplication with the standard notion of separation.\n\nIn Section~\ref{sec: the system shpos, with, oplus IMLL2 cut elimination and bound}, we design $\mathsf{LEM}$ to express linear weakening and \ncontraction in the same spirit as the exponential rules of \n$\mathsf{LL}$. We are working to push the analogy further,\nby formulating a type-assignment that extends\n$\mathsf{IMLL}_2$ with \enquote{linear additives}. 
A candidate rule is:\n\\begin{prooftree}\n\\AxiomC{$x_1:A \\vdash M_1: A_1 $}\n\\AxiomC{$x_2: A \\vdash M_2: A_2$}\n\\AxiomC{$ \\vdash V:A$}\n\\RightLabel{$\\with$R}\n\\TrinaryInfC{$x: A \\vdash \\mathtt{copy}^V_A \\, x \\mathtt{\\ as \\ }x_1,x_2 \\mathtt{\\ in \\ } \\langle M_1, M_2\\rangle : A_1 \\with A_2$}\n\\end{prooftree}\nwhere $V$ is a value and $A, A_1, A_2$ are closed and without negative occurrences of $\\forall$. The intuition behind this rule is the one \nwe discuss in Section~\\ref{sec: the system shpos, with, oplus IMLL2 cut elimination and bound} for the contraction rule of $\\mathsf{LEM}$. \nWe also claim that the new system would keep the normalization cost linear, unlike standard additives (see~\\cite{mairson2003computational}). \n\nSection~\\ref{sec: encoding boolean circuits} presents an encoding of boolean circuits that does not preserve their depth. Moving to unbounded fan-in proof nets for $\\mathsf{LEM}$, in which the rules $ p, w, c $ and $ d $ would be expressed by nodes and boxes, would improve the correspondence. \nOperations on them would compactly perform duplication and get rid of garbage, possibly improving~\\cite{DBLP:conf\/lics\/Terui04,DBLP:conf\/lfcs\/MogbilR07,DBLP:journals\/corr\/abs-1201-1120}. \nA reasonable question would then be whether the use of alternative and weaker exponential rules in $\\mathsf{LEM}$ could be the right approach to capturing circuit complexity classes like $\\textsf{NC}, \\textsf{AC}$, and $\\textsf{P}\/_{\\operatorname{poly}}$, in analogy with the implicit characterizations of the Polynomial and Elementary-time computational complexity classes by means of Light Logics \\cite{lafont2004soft,danos2003linear}.\n\nSection~\\ref{section:Church numerals} contributes to the problem of defining numeral systems in linear settings. 
\nIn~\\cite{Mackie2018}, Mackie has recently introduced linear variants of numeral systems.\nHe shows that successor, addition, predecessor, and subtraction have representatives in the linear $\\lambda$-calculus. We could not find a way to give a type in $\\mathsf{LEM}$ to some of the terms of Mackie's numeral systems.\nHowever, by merging Mackie's encoding and Scott numerals \\cite{CF:58}, numeral systems seem to exist which $ \\mathsf{LEM} $ can give a type to. The cost would be to extend $\\mathsf{LEM}$ with recursive types, following \nRoversi\\&Vercelli \\cite{Roversi+Vercelli:2010-DICE10}.\n\n\n\\section{Duplication and erasure for the linear $\\lambda$-calculus}\n\\label{sec: duplication and erasure in a linear setting}\nAs a motivational background, we discuss erasure and duplication in the linear $\\lambda$-calculus, both in an untyped and in a type-assignment setting.\n\n\\subsection{The untyped setting}\nThe linear $\\lambda$-calculus forbids any form of \\emph{direct} duplication \nof $ \\lambda $-terms, by means of multiple occurrences of the same variable, \nor of erasure, by omitting occurrences of bound variables in a $ \\lambda $-term. \nNevertheless, erasure and duplication can be simulated. \nConcerning the former, a first approach has been developed by Klop \\cite{klop1980combinatory}, and can be called \\enquote{erasure by garbage collection}. It consists in accumulating unwanted data during the computation instead of erasing it. \nFor example, $K'=\\lambda xy.\\langle x,y \\rangle$ represents the classical \n$K= \\lambda xy.x$, the second component of $\\langle x,y \\rangle$ \nbeing garbage. Another approach is by Mackie, and can be called \n\\enquote{erasure by data consumption} \\cite{Mackie2018}. 
It involves a step-wise erasure \nprocess that proceeds by $\\beta$-reduction, according to the following definition: \n\n\\begin{defn}[Erasability]\n\\label{defn: erasure} \nA linear $\\lambda$-term $M$ is \\emph{erasable} \nif $\\mathcal{C}[M]\\rightarrow_\\beta^* I$, for some context $\\mathcal{C}[]$ \nsuch that $ \\mathcal{C}[M] $ is linear.\n\\qed \n\\end{defn}\n\\begin{exmp}\nThe context $\\mathcal{C}[]=(\\lambda z.[])III$ erases \n$\\lambda xy.zxy$ because, filling \n$ [] $ with $\\lambda xy.zxy$, we obtain a closed linear $\\lambda$-term that reduces to $I$.\n\\qed\n\\end{exmp}\n\\noindent\nIn~\\cite{mairsonlinear}, Mairson proves that all closed linear $\\lambda$-terms can be erased by means of very simple contexts.\n\\begin{lem}[\\cite{mairsonlinear}]\n\\label{lem: linear terms are solvable} \nLet $ M $ be any closed linear $\\lambda$-term. Then there exists $n \\geq 0$ such that $M I\\overset{n }{\\ldots}I\\rightarrow_\\beta^* I$.\n\\end{lem}\n\\noindent\nThe above result is closely related to solvability (see~\\cite{barendregt1984lambda}): \n``\\textit{A $\\lambda$-term $M$ in the standard $\\lambda$-calculus is said \\textit{solvable} if, for some $n$, there exist $\\lambda$-terms \n$N_1, \\ldots, N_n$ such that $M N_1 \\ldots N_n =_\\beta I$.}''\nLemma~\\ref{lem: linear terms are solvable} states that every closed linear $\\lambda$-term is solvable by linear contexts. \n\\par \nIn fact, the notion of erasability can be addressed in a more general setting.\n\\begin{defn}[Erasable sets]\\label{defn: erasable sets} Let $X$ be a set of linear $\\lambda$-terms. We say that $X$ is an \\textit{erasable set} if a linear $\\lambda$-term $\\mathtt{E}_X$ exists such that $\\mathtt{E}_X \\, M \\rightarrow^*_{\\beta \\eta } I$, for all $M \\in X$. We call $\\mathtt{E}_X$ \\textit{eraser} of $X$.\n\\end{defn}\n\\begin{prop} \\label{prop: erasable if and only if closed} A finite set $X$ of linear $\\lambda$-terms is erasable if and only if all terms in $X$ are closed. 
\n\\end{prop}\n\\begin{proof}\nLet $X$ be a finite set of linear $\\lambda$-terms. To prove the left-to-right direction, suppose $X$ is erasable. By definition, there exists a linear $\\lambda$-term $\\mathtt{E}_X$ such that $\\mathtt{E}_X \\, M \\rightarrow_{\\beta\\eta}^* I$, for all $M \\in X$. Since $ I $ is closed, by Fact~\\ref{fact:Stability} each $ M\\in X $ must be closed too. Let us now suppose that all terms in $X$ are closed, and let $M_1, \\ldots, M_n$ be such terms. By Lemma~\\ref{lem: linear terms are solvable}, for every $i \\leq n$ there exists a $k_i\\geq 0$ such that $M_i I\\overset{k_i }{\\ldots}I\\rightarrow_\\beta^* I$. It suffices to set $\\mathtt{E}_{X}\\triangleq \\lambda x. x I\\overset{k }{\\ldots}I $, where $k= \\max_{i=1}^n k_i$. \n\\end{proof}\n\\noindent\nRecall from Definition~\\ref{eqn:datatype unity and product} that $\\langle M, N \\rangle\\triangleq \\lambda z.z MN$. In the same spirit of Definition~\\ref{defn: erasable sets}, we now investigate duplicability in the linear $\\lambda$-calculus. \n\\begin{defn}[Duplicable sets]\\label{defn: duplicable sets} Let $X$ be a set of linear $\\lambda$-terms. We say that $X$ is a \\textit{duplicable set} if a linear $\\lambda$-term $\\mathtt{D}_X$ exists such that $\\mathtt{D}_X \\, M \\rightarrow^*_{\\beta \\eta} \\langle M, M \\rangle$ and $FV(\\mathtt{D}_X)\\cap FV(M) =\\emptyset$, for all $M \\in X$. We call $\\mathtt{D}_X$ \\textit{duplicator} of $X$.\n\\end{defn}\n\\begin{prop}\\label{prop: duplicable implies closed} If a finite set $X$ of linear $\\lambda$-terms is duplicable then all terms in $X$ are closed. \n\\end{prop}\n\\begin{proof}\nLet $X$ be a finite set of linear $\\lambda$-terms, and suppose $X$ is duplicable. By definition, there exists a linear $\\lambda$-term $\\mathtt{D}_X$ such that $\\mathtt{D}_X \\, M \\rightarrow_{\\beta\\eta}^* \\langle M, M \\rangle$, for all $M \\in X$. 
Since both $M$ and $\\mathtt{D}_{X}$ are linear $\\lambda$-terms, and $FV(\\mathtt{D}_X)\\cap FV(M) =\\emptyset$, we have that $\\mathtt{D}_X\\, M$ is linear, for all $M \\in X$. If there were a variable occurring free in a term $M\\in X$, then it would occur twice in $\\langle M, M \\rangle$, contradicting Fact~\\ref{fact:Stability}.\n\\end{proof}\n\\noindent\nWe conjecture that the converse holds as well, as long as we restrict to sets of distinct $\\beta\\eta$-normal forms. Indeed, duplication in a linear setting ultimately relies on the following linear version of the general separation theorem for the standard $\\lambda$-calculus proved by Coppo et al.~\\cite{coppo1978semi}:\n\\begin{conj}[General separation] \\label{conj: linear separation} Let $X=\\lbrace M_1, \\ldots, M_n \\rbrace$ be a set of distinct closed linear $\\lambda$-terms in $\\beta\\eta$-normal form. Then, for all closed linear $\\lambda$-terms $N_1, \\ldots, N_n$, there exists a closed linear $\\lambda$-term $F$ such that $F\\, M_i =_{\\beta \\eta}N_i$, $\\forall i \\leq n$.\n\\end{conj}\n\\noindent\nNow, let $X= \\lbrace M_1, \\ldots, M_n \\rbrace$ be a finite set of distinct closed linear $\\lambda$-terms in $\\beta\\eta$-normal form. If Conjecture~\\ref{conj: linear separation} were true, by fixing $N_i\\triangleq \\langle M_i, M_i \\rangle$ for all $i \\leq n$, there would exist a closed linear $\\lambda$-term $\\mathtt{D}_X$ such that $\\mathtt{D}_X \\, M_i \\rightarrow_{\\beta \\eta}^* \\langle M_i, M_i\\rangle$. 
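The eraser construction in the proof of Proposition~\ref{prop: erasable if and only if closed} can also be checked concretely by writing closed linear $\lambda$-terms as ordinary closures. The sketch below (Python, purely illustrative and not part of the formal development) takes $X=\lbrace I, B, C\rbrace$, where $B=\lambda x.\lambda y.\lambda z.x(yz)$ and $C=\lambda x.\lambda y.\lambda z.xzy$ are the usual combinators, and builds the eraser $\mathtt{E}_X \triangleq \lambda x.\, xIII$:

```python
# Closed linear lambda-terms, written as Python closures.
I = lambda x: x                             # I = \x.x
B = lambda x: lambda y: lambda z: x(y(z))   # B = \x.\y.\z.x(yz)
C = lambda x: lambda y: lambda z: x(z)(y)   # C = \x.\y.\z.xzy

# By the Lemma on solvability of linear terms, each member of
# X = {I, B, C} reduces to I after at most three applications to I,
# so the single eraser E_X = \x.xIII works for the whole finite set,
# taking k = 3 as the maximum of the individual k_i's.
E_X = lambda m: m(I)(I)(I)

# E_X m behaves like the identity I for every m in X.
for m in (I, B, C):
    assert E_X(m)("witness") == "witness"
```

Padding with extra copies of $I$ is harmless because $II\rightarrow_\beta I$, which is why taking the maximum $k$ suffices.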
\nSo, we could connect linear erasure and duplication to standard $\\lambda$-calculus notions:\n\\begin{equation*}\n\\begin{split}\n\\textit{solvability} & \\textrm{ implies } \\textit{linear erasability}\\\\\n\\textit{separation} & \\textrm{ implies } \\textit{linear duplication} \n\\enspace .\n\\end{split}\n\\end{equation*}\nThis topic is left to future work (see Section~\\ref{section:Conclusions}).\n\n\\subsection{The typed setting}\nErasure and duplication are less direct and liberal in \n$ \\textsf{IMLL}_2 $ which assigns types to linear $ \\lambda $-terms. \nSpecifically, it is possible to erase or duplicate all values (Definition~\\ref{defn:Values among linear lambda-terms}) of what we call \\enquote{ground type}, i.e.~Mairson\\&Terui's notion of closed $\\Pi_1$-type~\\cite{mairson2003computational}, whose formal definition will be recalled shortly. A typical example of ground type in $\\mathsf{IMLL}_2$ is the one representing booleans.\nThe standard second-order intuitionistic formulation of booleans (i.e.~$\\forall \\alpha .\\alpha \\multimap \\alpha \\multimap \\alpha$) is \nmeaningless for $\\mathsf{IMLL}_2$ due to the lack of free weakening. \nMairson\\&Terui \\citep{mairson2003computational} define them as:\n\\begin{align}\n&\\mathbf{B}\\triangleq\\forall \\alpha. \\alpha \\multimap \\alpha \\multimap \\alpha \\otimes \\alpha \n\\label{eqn: boolean data type}\n&&\\mathtt{tt}\\triangleq\t\\lambda x. \\lambda y. \\langle x,y \\rangle \n&&\\mathtt{ff}\\triangleq \\lambda x. \\lambda y. \\langle y,x \\rangle \n\\end{align} \nwhere the values \\enquote{truth} \\texttt{tt}\nand the \\enquote{falsity} \\texttt{ff} implement the\n\\enquote{erasure by garbage collection}: the first element of the pair is the \\enquote{real} output, while the second one is garbage.\nStarting from~(\\ref{eqn: boolean data type}), Mairson shows in~\\cite{mairsonlinear} that $\\mathsf{IMLL}$ is expressive enough to encode boolean functions. 
Mairson and Terui reformulate that encoding \nin $\\mathsf{IMLL}_2$ in order to prove \nresults about the complexity of cut-elimination \n\\cite{mairson2003computational}. The advantage of $\\mathsf{IMLL}_2$ is to assign uniform types to the $\\lambda$-terms representing boolean functions. An \\emph{eraser} $\\mathtt{E}_{\\mathbf{B}}$ and a \\emph{duplicator} $\\mathtt{D}_{\\mathbf{B}}$ are the keys to obtain the encoding:\n\\begin{align}\n\\label{eqn: erasure booleans}\n\\mathtt{E}_{\\mathbf{B}}& \\triangleq\n\\lambda z. \\mathtt{let\\ }zII \\mathtt{\\ be \\ } x,y \\mathtt{ \\ in \\ }(\\mathtt{let \\ }y \\mathtt{ \\ be \\ } I \\mathtt{ \\ in \\ }x) :\\mathbf{B}\\multimap \\mathbf{1}\n\\\\\n\\label{eqn: duplication booleans}\n\\mathtt{D}_{\\mathbf{B}}& \\triangleq\n\\lambda z. \\pi_1(z\\langle \\mathtt{tt}, \\mathtt{tt}\\rangle\\langle \\mathtt{ff}, \\mathtt{ff}\\rangle) : \\mathbf{B}\\multimap \\mathbf{B}\\otimes \\mathbf{B}\n\\\\\n\\label{eqn: boolean projection}\n\\pi_1 &\\triangleq\n\\lambda z. \\mathtt{let\\ }z \\mathtt{\\ be \\ }x,y \\mathtt{\\ in\\ }(\\mathtt{let \\ }\\mathtt{E}_{\\mathbf{B}} \\, y \\mathtt{\\ be \\ }I \\mathtt{\\ in \\ }x) :\n(\\mathbf{B}\\otimes \\mathbf{B}) \\multimap\\mathbf{B}\n\\end{align}\nwith $ \\mathbf{B} $ as in \\eqref{eqn: boolean data type} and $\\pi_1$ the linear $\\lambda$-term projecting the first element of a pair.\\\\\n Switching to type-assignment setting we get uniform copying and erasing mechanisms of the whole \\textit{class} of values of a given ground type. Note that the type-theoretical constraints let the erasure of a typed linear $\\lambda$-term make use of something more than mere stacks of identities as in Lemma~\\ref{lem: linear terms are solvable}. 
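The behaviour of $\mathtt{E}_{\mathbf{B}}$ and $\mathtt{D}_{\mathbf{B}}$ can be tested concretely by writing the values above as ordinary closures (a Python sketch, purely illustrative and not part of the formal development). Since the component discarded inside $\mathtt{D}_{\mathbf{B}}$ is a pair of booleans, the sketch uses the eraser that $\mathtt{E}_{\mathbf{B}}$ induces on $\mathbf{B}\otimes\mathbf{B}$ by erasing both components:

```python
I    = lambda x: x                            # identity
pair = lambda m: lambda n: lambda z: z(m)(n)  # <m, n> = \z.zmn
tt   = lambda x: lambda y: pair(x)(y)         # tt = \x.\y.<x, y>
ff   = lambda x: lambda y: pair(y)(x)         # ff = \x.\y.<y, x>

# E_B: consume a boolean down to the identity I.
E_B  = lambda b: b(I)(I)(lambda x: lambda y: y(x))
# Eraser induced on pairs of booleans: erase both components.
E_BB = lambda p: p(lambda a: lambda b: E_B(b)(E_B(a)))
# First projection: keep x, consume y with the induced eraser.
pi1  = lambda p: p(lambda x: lambda y: E_BB(y)(x))
# Duplication by selection and erasure: the input selects one of the
# two built-in pairs <tt, tt> and <ff, ff>; the other one is erased.
D_B  = lambda b: pi1(b(pair(tt)(tt))(pair(ff)(ff)))

to_bool = lambda b: b(True)(False)(lambda x: lambda y: x)
fst     = lambda p: p(lambda x: lambda y: x)
snd     = lambda p: p(lambda x: lambda y: y)

assert E_B(tt)(42) == 42 and E_B(ff)(42) == 42          # E_B b -> I
assert to_bool(fst(D_B(tt))) and to_bool(snd(D_B(tt)))  # -> <tt, tt>
assert not to_bool(fst(D_B(ff))) and not to_bool(snd(D_B(ff)))
```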
Also, note that \\textit{both} the possible results $\\langle \\mathtt{tt}, \\mathtt{tt} \\rangle$ \nand $\\langle \\mathtt{ff}, \\mathtt{ff} \\rangle$ of duplications are built-in \ncomponents of $ \\texttt{D}_{\\textbf{B}} $.\nIn accordance with the given input, $\\mathtt{D}_{\\mathbf{B}}$ selects \nthe right pair representing the result by \\textit{erasing} the \nunwanted one. Such a \\enquote{linear} form of \\emph{duplication by selection and erasure} is a step-by-step elimination of useless data until the desired result shows up. \n\\par\nThe analysis of~(\\ref{eqn: erasure booleans}) and~(\\ref{eqn: duplication booleans}) leads to the following formal notions: \n\\begin{defn} [Duplicable and erasable types in $\\mathsf{IMLL}_2$] \n\\label{defn: duplicable and erasable types} \nLet $A$ be a type in $\\mathsf{IMLL}_2$. It is a \\textit{duplicable type} \nif a linear $\\lambda$-term $\\mathtt{D}_A: A \\multimap A \\otimes A$ exists \nsuch that $\\mathtt{D}_{A} \\,V \\rightarrow_{\\beta \\eta}^* \n\\langle V, V \\rangle$, for every value $V$ of $A$.\n\nMoreover, $A$ is an \\textit{erasable type} if a linear $\\lambda$-term $\\mathtt{E}_A:A \\multimap \\mathbf{1}$ exists such that\n$\\mathtt{E}_{A} \\, V \\rightarrow_{\\beta \\eta}^* I$,\nfor every value $V$ of $A$.\n\nWe call $\\mathtt{D}_A$ \\emph{duplicator} of $A$ and $\\mathtt{E}_{A}$ its \\emph{eraser}. \n\\qed\n\\end{defn}\n\\noindent\nDuplicators and erasers in Definition~\\ref{defn: duplicable and erasable types} apply to values of a given type, i.e.~\\textit{closed} and normal inhabitants. 
This is not a loss of generality because Proposition~\\ref{prop: erasable if and only if closed} and Proposition~\\ref{prop: duplicable implies closed} say that only closed terms can be duplicated or erased linearly.\n\n\\begin{defn}[$\\Pi_1$, $\\Sigma_1$-types \\cite{mairson2003computational} and Ground types]\n\\label{defn:Pi and Sigma types} \nThe following mutually defined grammars generate $\\Pi_1$ and $\\Sigma_1$-types:\n\\begin{align*}\n\\Pi_1&:= \\alpha \\ \\vert \\ \\Sigma_1 \n\\multimap \\Pi_1 \\ \\vert \\ \\forall \\alpha. \\Pi_1 \\\\\n\\Sigma_1&:= \\alpha \\ \\vert \\ \\Pi_1 \n\\multimap \\Sigma_1 \n\\end{align*} \nWe call \\emph{ground types} the \\emph{closed} $\\Pi_1$-types.\n\\qed\n\\end{defn}\n\\noindent\nWe note that the universal quantifier $\\forall$ occurs only positively \nin a $\\Pi_1$-type, hence in ground types.\n\\par\nThe booleans $\\mathbf{B}$ in~\\eqref{eqn: boolean data type}, \nthe unit $\\mathbf{1}$ and the tensor $A\\otimes B$ as in Definition~\\ref{eqn:datatype unity and product} are ground types, if $A$ and $B$ are.\nIn fact, following \\cite{mairson2003computational}, tensors and units can \noccur \\emph{also} to the left-hand side of a linear implication \n\\enquote{$ \\multimap $}, even in negative positions. The reason is that \nwe can ignore them in practice, thanks to the isomorphisms:\n\\begin{align*}\n((A \\otimes B) \\multimap C) \\multimapboth (A \\multimap B \\multimap C) \n&&\n(\\mathbf{1}\\multimap C) \\multimapboth C\n\\enspace .\n\\end{align*} \nGround types represent \\emph{finite} data types, while the values with a ground type represent their data.\n\\begin{fact}\\label{rem: every normal term has Pi1 type} \nEvery closed linear $\\lambda$-term $ M $ has a ground type.\n\\qed\n\\end{fact}\n\\begin{proof}\nEvery closed linear $\\lambda$-term $M$ is typable in $\\mathsf{IMLL}$ (see~\\cite{hindley1989bck}). Types in $\\mathsf{IMLL}$ are quantifier-free instances\nof $\\Pi_1$-types. 
Hence, $M$ has also a $\\Pi_1$-type $A$ in $\\mathsf{IMLL}_2$. \nLet $FV(A)= \\lbrace\\alpha_1, \\ldots, \\alpha_n\\rbrace $. Since $M$ inhabits $A$, it also inhabits \n$\\overline{A} =\\forall \\alpha_1.\\cdots.\\forall \\alpha_n. A$, which is a \nclosed $\\Pi_1$-type, i.e.~a ground type in $\\mathsf{IMLL}_2$.\n\\end{proof}\n\\noindent\nThe class of ground types is a subset of both the classes of duplicable and erasable types. \n\\begin{thm}[\\cite{mairson2003computational}]\n\\label{thm: pi1 types are erasable}\nEvery ground type is erasable.\n\\end{thm}\n\\begin{proof} \nThe proof follows from proving two statements\nby simultaneous induction: (i) For every $\\Pi_1$-type $A$ with free type variables $\\alpha_1, \\ldots, \\alpha_n$ there exists \na linear $ \\lambda $-term $\\mathtt{E}_A$ such \nthat $ \\vdash \\mathtt{E}_A: A[\\mathbf{1}\/\\alpha_1, \\ldots,\\mathbf{1} \/\\alpha_n] $ $\\multimap \\mathbf{1}$, and (ii) for every $\\Sigma_1$-type $A$ with free type variables $\\alpha_1, \\ldots, \\alpha_n$ there exists a linear $ \\lambda $-term $\\mathtt{H}_A$ such that \n$ \\vdash \\mathtt{H}_A:A[\\mathbf{1}\/\\alpha_1, \\ldots,\\mathbf{1}\/ \\alpha_n]$.\n\\end{proof}\n\n\\begin{thm}\n\\label{thm: pi1 are duplicable}\nEvery inhabited ground type is duplicable.\n\\end{thm}\n\\noindent\nMairson and Terui sketch the proof of Theorem~\\ref{thm: pi1 are duplicable} in\n\\cite{mairson2003computational}.\n\\ref{sec: the d-soundness theorem DICE}, which we see as an integral and relevant part of this work, develops it in every detail. \n\n\\section{Introduction}\n\\label{section:Introduction}\nGirard introduces \\textit{Linear Logic} ($\\mathsf{LL}$) in \n\\cite{Girard:TCS87} as a refinement of both classical and intuitionistic \nlogic. 
\\textsf{LL} decomposes the intuitionistic implication \n\\enquote{$\\Rightarrow$} into the more primitive linear implication \n\\enquote{$\\multimap$} and the modality \\enquote{$\\oc$} (of course), \nthe latter giving a logical status to weakening and contraction by\nmeans of the so-called \\textit{exponential rules}. \nAccording to the \\emph{Curry-Howard correspondence},\nthis decomposition makes it possible to identify a strictly linear \ncomponent of the functional computations that interacts with the non-linear one, in which duplication and erasure are allowed.\n\nThis work focuses on $\\mathsf{IMLL}_2$, i.e.~second-order intuitionistic \\textit{multiplicative} Linear Logic, which, we recall, is free of any kind of exponential rules. \nThe Curry-Howard correspondence tightly relates $\\mathsf{IMLL}_2$ and \nthe linear $\\lambda$-calculus, a sub-language of the standard $\\lambda$-calculus without explicit erasure and duplication.\n\nInteresting works exist on the expressiveness of both the untyped and the typed linear $ \\lambda $-calculus.\n\nAlves et al.~\\cite{DBLP:conf\/csl\/AlvesFFM06} recover the full computational power of G\\\"odel System $T$ by adding booleans, natural numbers, and a linear iterator to the linear $\\lambda$-calculus, the non-linear features coming specifically from the iterator and the numerals. \n\nMatsuoka investigates the discriminating power of linear $ \\lambda $-terms with types in $ \\textsf{IMLL}$, i.e.~intuitionistic multiplicative Linear Logic, proving typed variants of B\\\"ohm's Theorem~\\cite{MATSUOKA200737}. We remark that, in this setting, discriminating among linear $ \\lambda $-terms relies on \\emph{a specific form of weakening} already inside $ \\textsf{IMLL} $. \n\nAnother work that exploits the built-in erasure and copying mechanisms of the linear $\\lambda$-calculus is by Mairson~\\cite{mairsonlinear}. \nWith no new constructors, Mairson encodes boolean circuits in the linear \n$ \\lambda $-calculus. 
Moreover, Mairson\\&Terui reformulate Mairson's \nresults inside $\\mathsf{IMLL}_2$ and prove bounds on the complexity of the cut-elimination in sub-systems of \\textsf{LL}~\\cite{mairson2003computational}.\n\n\\paragraph{Contributions}\nStarting from Mairson\\&Terui's \\cite{mairson2003computational}, this work investigates a structural proof-theory, and the related Curry-Howard correspondence, of $\\mathsf{IMLL}_2$ extended with inference rules for contraction and weakening. \n\\begin{enumerate}\n\\item \nWe introduce the \\emph{Linearly Exponential and Multiplicative} system $\\mathsf{LEM}$, giving a logical status to\nthe erasure and the duplication that~\\cite{mairson2003computational} identifies inside the linear $ \\lambda $-calculus. \n$\\mathsf{LEM}$ is a type-assignment for a \\emph{linear} \n$\\lambda$-calculus endowed with constructs for weakening and \ncontraction, and it is obtained by extending $\\mathsf{IMLL}_2$ with \nrules on modal formulas ``$ \\shpos A$''. $\\mathsf{LEM}$ can be seen as a sub-system of $\\mathsf{LL}$ \nwith a restricted form ``$ \\shpos $'' of \\enquote{$\\oc$}.\n\n\\item\nWe consider a mildly weakened cut-elimination, called ``lazy'', that faithfully represents the mechanism of linear erasure and duplication discussed in~\\cite{mairson2003computational}, and we identify a set of derivations in $\\mathsf{LEM}$\nthat rewrite to cut-free \nones under that lazy cut-elimination in a cubic number of steps \n(Section~\\ref{Cut elimination, complexity, and subject reduction for IMLL2^shpos}). 
\nMoreover, we prove Subject reduction for $\\mathsf{LEM}$ \n(Section~\\ref{sec: subject reduction}).\n\n\\item \nWe prove that the cut-elimination of $\\mathsf{IMLL}_2$ can simulate that \nof $\\mathsf{LEM}$ at a cost which can be exponential in the size of \nthe given derivation of $\\mathsf{LEM}$ \n(Section~\\ref{sec: the expressiveness of the system}).\nSo, $\\mathsf{LEM}$ can speed up the cut-elimination of $\\mathsf{IMLL}_2$, meaning that it compresses into smaller derivations what can be algorithmically expressed in $\\mathsf{IMLL}_2$.\n\n\\item Hence, we explore the algorithmic expressiveness of $\\mathsf{LEM}$\n(Section~\\ref{section:The expressiveness of IMLLshpos2 and applications}):\n\\begin{enumerate}\n\t\\item \n\tBoth $\\mathsf{LEM}$ and $\\mathsf{IMLL}_2$ can represent boolean circuits.\n\tHowever, the copying mechanism, directly available in $\\mathsf{LEM}$, makes the encoding of the fan-out of the nodes of the circuit essentially natural, facilitating the modularity and the readability of the encoding itself.\n\tMoreover, the erasure in $\\mathsf{LEM}$ avoids accumulating garbage when evaluating a circuit represented by a derivation of $\\mathsf{LEM}$, unlike in other proposals.\n\t\\item \n\tWe show that numerals, structurally related to Church ones, \n\texist in $\\mathsf{LEM}$. Their type is\n\t$ (\\shpos \\forall\\alpha.(\\alpha\\multimap\\alpha) ) \\multimap\\forall\\alpha.(\\alpha\\multimap\\alpha) $, which forbids\n\titerations longer than the complexity of the lazy cut-elimination. Remarkably, the numerals in $\\mathsf{LEM}$ admit a successor and an addition that work as expected, thanks to Subject reduction. 
\n\n \\item \n\tFinally, we show that Hereditarily Finite Permutations, which form a group inside the linear\n\t$ \\lambda $-calculus, inhabit a simple generalization \n\tof the above type of numerals, possibly connecting \n\t$\\mathsf{LEM}$ with reversible computations.\n\\end{enumerate}\nThe above contributions follow from a fully detailed, and not\nat all obvious, technical reworking of Mairson\\&Terui's \\cite{mairson2003computational} work. \nWe propose it as a solid base for further investigations concerning duplication and erasure in a purely linear setting.\n\\end{enumerate}\nSection~\\ref{sec: background} is about (formal) preliminaries. Section~\\ref{sec: duplication and erasure in a linear setting} introduces the motivating background and \nSection~\\ref{sec: the system shpos, with, oplus IMLL2 cut elimination and bound} formally defines $\\mathsf{LEM}$.\n\n\\paragraph{Acknowledgments}\nWe are indebted to the anonymous reviewers for the patience and constructive attitude with which they read and commented on previous versions of this work.\n\\section*{\\refname}\n\n\n\\section{Preliminaries} \\label{sec: background}\n\\subsection{The linear $\\lambda$-calculus}\n\nWe assume the reader to be familiar with the standard $\\lambda$-calculus and related concepts like:\n(i) the set $ FV(M) $ of the free variables of the $ \\lambda $-term $ M $, \n(ii) the meta-level substitution $M[N\/x]$ that replaces \nthe $ \\lambda $-term $ N $ for every free occurrence of the variable $ x $\nin $ M $, \n(iii) the contexts $ \\mathcal{C}[] $, i.e.~$ \\lambda $-terms with a place-holder (hole) $ [] $ \n that may capture free variables of a $ \\lambda $-term plugged into $ [] $,\n (iv) the $\\alpha$-equivalence ($=_\\alpha$),\n(v) the $\\beta$-reduction $(\\lambda x.M)N\\rightarrow_\\beta M[N\/x]$, \n(vi) the $\\eta$-reduction $\\lambda x.Mx\\rightarrow_\\eta M$ that can apply \nif $ x $ is not free in $ M $. 
Both $\\rightarrow_\\beta$ and $\\rightarrow_\\eta$ are considered contextually closed.\n\n\n\nBy $\\rightarrow_\\beta^*$ we denote the reflexive and transitive closure of the $ \\beta $-reduction, and by $=_{\\beta}$ its reflexive, symmetric and transitive closure.\n\nAlso, \nby $\\rightarrow_\\eta^*$ we denote the reflexive and transitive closure of the $ \\eta $-reduction, and by $=_{\\eta}$ its reflexive, symmetric and transitive closure.\n\nFinally, by $\\rightarrow_{\\beta \\eta}$ we denote $\\rightarrow_\\beta \\cup \\rightarrow_{\\eta}$, and by $\\rightarrow^*_{\\beta \\eta}$ we denote its reflexive and transitive closure.\n\nA $\\lambda$-term is in $\\beta$-\\textit{normal form}, or simply \n($ \\beta $-)\\emph{normal},\nwhenever no $ \\beta $-reduction applies to it. \nA $\\lambda$-term is in $\\eta$-\\textit{normal form}, or simply \\emph{$ \\eta $-normal} if no $\\eta$-reduction applies to it. Finally, a $\\lambda$-term is in $\\beta\\eta$-\\textit{normal form}, or simply $ \\beta \\eta $-\\emph{normal}, whenever no $ \\beta\\eta $-reduction applies to it.\n\nA $\\lambda$-term is \\textit{closed} if $FV(M)=\\emptyset$. \n\nThe \\textit{size} $\\vert M \\vert $ of $M$ is the number of nodes in its syntax tree. \n\nThe \\emph{linear} $\\lambda$-calculus is the $\\lambda$-calculus restricted to \\emph{linear} $\\lambda$-terms: \n\n\\begin{defn}[Linear $\\lambda$-terms] \n\\label{defn:Linear lambda-terms}\nA $\\lambda$-term $M$ is \\textit{linear} if all of its free variables occur once in it and every proper sub-term $ \\lambda x.M' $ of $M$ is such that $ x $ occurs in $ M' $ and $M' $ is linear.\n\\qed\n\\end{defn}\n\\noindent\nFor example, \n$I\\triangleq \\lambda x.x$ and \n$C\\triangleq\\lambda x. \\lambda y. \\lambda z. xzy$ are linear, \nwhile $K \\triangleq \\lambda x. \\lambda y. x$ and \n$S\\triangleq\\lambda x. \\lambda y. \\lambda z. 
xz(yz) $ are not.\n\n\\vspace{\\baselineskip}\n\\noindent\nTo our purposes, we shall adopt the following notion of value:\n\\begin{defn}[Values]\n\\label{defn:Values among linear lambda-terms}\nA \\emph{value} is every linear $ \\lambda $-term which is both \n($ \\beta $-)\\emph{normal} and \\emph{closed}. \n\\qed\n\\end{defn}\n\\noindent\nWe shall generally use $ V $ and $ U $ to range over values.\n\n\\begin{fact}[Stability]\n\\label{fact:Stability}\nLinear $\\lambda$-terms are stable under $\\beta$-reduction, \ni.e.~$M$ linear and $M \\rightarrow_\\beta N $ imply $N$ is linear. \nAnalogously, linear $\\lambda$-terms are stable under $\\eta$-red\\-uct\\-ion, i.e.~$M$ linear and $M \\rightarrow_\\eta N$ imply $N$ is linear. In both cases, $FV(N)= FV(M)$.\n\\qed\n\\end{fact}\n\\noindent\nFinally, we shall write $M \\circ N$ in place of $\\lambda z. M(Nz)$.\n\n\\begin{figure}[ht]\n\t\\begin{mathpar}\n\t\t\\inferrule*[Right= $ax$]\n\t\t{\\\\}\n\t\t{x: A \\vdash x:A} \n\t\t\\and\n\t\t\\inferrule*[Right= $cut$]\n\t\t{\\Gamma \\vdash N: A \\\\ \\Delta, x:A \\vdash M:C}\n\t\t{\\Gamma, \\Delta \\vdash M[N\/x]:C}\n\t\t\\\\\n\t\t\\inferrule*[Right= $\\multimap$R]\n\t\t{\\Gamma, x:A \\vdash M:B}\n\t\t{\\Gamma \\vdash \\lambda x. M : A \\multimap B} \n\t\t\\and\n\t\t\\inferrule*[Right= $\\multimap$L]\n\t\t{\\Gamma \\vdash N: A \\\\ \\Delta, x:B \\vdash M: C}\n\t\t{\\Gamma, \\Delta, y: A \\multimap B \\vdash M[yN\/x]: C}\n\t\t\\\\\n\t\t\\inferrule*[Right=$\\forall$R]\n\t\t{\\Gamma \\vdash M: A\\langle \\gamma\/\\alpha \\rangle \n\t\t \\\\ \\gamma \\not \\in FV(\\operatorname{rng}(\\Gamma))}\n\t\t{\\Gamma \\vdash M: \\forall \\alpha.A} \n\t\t\\and\n\t\t\\inferrule*[Right= $\\forall$L]\n\t\t{\\Gamma, x: A\\langle B\/ \\alpha \\rangle \\vdash M:C}\n\t\t{\\Gamma, x: \\forall \\alpha. 
A \\vdash M:C}\n\t\\end{mathpar}\n\t\\caption{$\\mathsf{IMLL}_2$ as a type-assignment system.}\n\t\\label{figure:IMLL2}\n\\end{figure}\n\n\\subsection{The systems $\\mathsf{IMLL}_2$ and $\\mathsf{IMLL}$} \\label{subsec: IMLL2}\nWe assume familiarity with basic proof-theoretical notions and with Linear Logic (see~\\cite{girard1987linear,troelstra2000basic}.)\n\\emph{Second-order Intuitionistic Multiplicative Linear Logic} ($\\mathsf{IMLL}_2$), seen as a type-assignment for the linear $\n \\lambda $-calculus, is in Figure~\\ref{figure:IMLL2}, where, we remark, the only logical operators are the universal quantifier ``$ \\forall$'' and the\n\tlinear implication ``$ \\multimap $''. $\\mathsf{IMLL}_2$ derives \\emph{judgments} $\\Gamma \\vdash M: A$, i.e.~a \\emph{type} $ A $ for the linear $ \\lambda $-term $ M $ from the \\emph{context} $\\Gamma$.\nA \\textit{type} is a (type) variable $\\alpha$, or an \\emph{implication} $A \\multimap B$, or a \\emph{universal quantification} $\\forall \\alpha. A$, where\n$A$ and $B$ are types. \nThe set of free type variables of $ A $ is $ FV(A) $. If $FV(A)= \\emptyset$, then $ A $ is \\textit{closed}. If $FV(A)= \\lbrace \\alpha_1, \\ldots, \\alpha_n \\rbrace$, then \\textit{a closure} $\\overline{A}$ of $A$ is $\\forall \\alpha_1. \\cdots. \\forall \\alpha_n. A$, not necessarily linked to a specific order of $ \\alpha_1,\\ldots,\\alpha_n $.\nThe standard meta-level substitution of a type $ B $ for every free occurrence of $ \\alpha $ in $ A $ is $A \\langle B\/ \\alpha \\rangle$. \nThe \\textit{size} $\\vert A \\vert$ of the type $A$ is the number of nodes in its syntax tree. \nA \\textit{context} $\\Gamma$ has form $x_1: A_1, \\ldots, x_n:A_n$, with $ n\\geq 0 $, i.e.~it is a finite multiset of \\textit{assumptions} $x: A$, where $ x $ is a\n $ \\lambda $-variable. 
\nThe \\emph{domain} $\\operatorname{dom}(\\Gamma)$ of $ \\Gamma $ is \n$\\{x_1, \\ldots, x_n\\}$ and its range $ \\operatorname{rng}(\\Gamma) $\nis $\\{A_1, \\ldots, A_n\\}$.\nThe size $\\vert \\Gamma \\vert$ of $ \\Gamma $ is $\\sum^n_{i=1}\\vert A_i \\vert$. \nTypically, names for contexts are $\\Gamma, \\Delta$ or $\\Sigma$.\n\nSince $ \\textsf{IMLL}_2 $ gives types to linear $ \\lambda $-terms, \n$\\multimap$L is necessarily subject to the \\emph{linearity constraint} \n$\\operatorname{dom}(\\Gamma) \\cap \\operatorname{dom}(\\Delta)= \\emptyset$. \nWe range over the derivations of $\\mathsf{IMLL}_2$ by $\\mathcal{D}$. \nThe \\textit{size} $\\vert \\mathcal{D} \\vert$ of $\\mathcal{D}$ is the number of the rule instances that $\\mathcal{D}$ contains. \nWe say that $\\Gamma \\vdash M: B$ is \\textit{derivable} if a derivation $\\mathcal{D}$ exists that concludes with the judgment $ \\Gamma \\vdash M:B$, and we also say that $\\mathcal{D}$ is a derivation of $ \\Gamma \\vdash M:B$. In that case we write $\\mathcal{D}\\triangleleft \\Gamma \\vdash M: B$ saying that \n$M$ is an \\textit{inhabitant} of $B$ or that $B$ is \\textit{inhabited} by $M$ \nfrom $\\Gamma$. \nThe cut-elimination steps for $\\mathsf{IMLL}_2$ are standard and both\ncut-elimination and confluence hold for it \\cite{troelstra2000basic}. \n\n\n\\textit{Propositional Intuitionistic Multiplicative Linear Logic} ($\\mathsf{IMLL}$) is $\\mathsf{IMLL}_2$ without $\\forall$R and $\\forall$L. \nFrom Hindley \\cite{hindley1989bck}, we recall that $\\mathsf{IMLL}$, \nthus $\\mathsf{IMLL}_2$, gives a type to every linear $\\lambda$-term. \nThe converse holds as well, due to the above \\textit{linearity constraint} \non $\\multimap$L, so the class of linear $\\lambda$-terms is exactly the one \nof all typable $ \\lambda $-terms in $\\mathsf{IMLL}_2$. 
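As a simple illustration of the rules in Figure~\ref{figure:IMLL2}, here is a (routine) derivation assigning the identity its polymorphic type, where the leaf is an instance of $ax$ and the side condition of $\forall$R holds vacuously because the context is empty:

```latex
\begin{prooftree}
\AxiomC{$x:\gamma \vdash x:\gamma$}
\RightLabel{$\multimap$R}
\UnaryInfC{$\vdash \lambda x.x : \gamma \multimap \gamma$}
\RightLabel{$\forall$R}
\UnaryInfC{$\vdash \lambda x.x : \forall \alpha.(\alpha \multimap \alpha)$}
\end{prooftree}
```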
\nIt follows that second-order quantification does not allow us to type more terms, but it is nevertheless useful to assign uniform types to structurally related \n$ \\lambda $-terms.\n\nWe conclude by recalling standard definitions of types in $\\mathsf{IMLL}_2$:\n\n\\begin{defn}[Basic datatypes]\n\t\\label{eqn:datatype unity and product}\nThe \\emph{unity} type is $ \\textbf{1} \\triangleq \\forall \\alpha. (\\alpha \\multimap \\alpha) $ with constructor $ I \\triangleq \\lambda x.x $, i.e.~the identity,\n\tand destructor $ \\mathtt{let}\\ M \\mathtt{\\ be \\ }I \\mathtt{\\ in \\ }N \\triangleq MN $;\n\nThe \\emph{tensor product} type\n\t$ A \\otimes B \\triangleq \n\t\\forall \\alpha. (A \\multimap B \\multimap \\alpha) \\multimap \\alpha $ \n\twith constructor \n\t$ \\langle M, N \\rangle \\triangleq \\lambda z. z\\,M\\,N $\n\tand destructor $ \\mathtt{let}\\ M \\mathtt{\\ be \\ }x,y \\mathtt{\\ in \\ }N\n\t\\triangleq M(\\lambda x. \\lambda y. N) $.\n\t\nBoth binary tensor product and pair extend to their obvious $n$-ary versions\n$A^n= \\underbrace{A \\otimes \\ldots\\otimes A}_{n}$ and $M^n \\triangleq \\langle \\underbrace{ M, \\ldots, M}_{n}\\rangle$.\n\\qed\n\\end{defn}\n\\begin{rem}\n\tEvery occurrence of unity, ($ n $-ary) tensor and $ n $-tuple in the coming sections will be taken from Definition~\\ref{eqn:datatype unity and product}.\n\\end{rem}\n\\noindent\t\nFinally, Definition~\\ref{eqn:datatype unity and product} talks about \\emph{datatypes} because, by introducing a specific syntax for constructors and destructors, we implicitly adopt a pattern matching mechanism\nto operate on $ \\lambda $-terms typed with those types.\n\\section{The expressiveness of $\\mathsf{LEM}$ and applications}\n\\label{section:The expressiveness of IMLLshpos2 and applications}\nTheorem~\\ref{thm: translations for LAML} says that $\\mathsf{LEM}$ is not \\enquote{algorithmically} more expressive than $ \\mathsf{IMLL}_2 $. 
Nonetheless, terms with type in $\\mathsf{LEM}$, and their evaluation mechanisms, exponentially compress the corresponding linear $ \\lambda $-terms and evaluations in $ \\mathsf{IMLL}_2 $ (Theorem~\\ref{thm: exponential compression}). The goal of this section is to explore the benefits of this compression.\n\n\n\n\\begin{figure}[ht]\n\\centering\n \\subfigure[\\scriptsize{From left, input, internal, fan-out, and output nodes.}\t\\label{figure:node of boolean circuits} ]{\n\\scalebox{.65}{\n \\begin{tikzpicture}[\n node\/.style={circle,minimum size=1pt,fill=white},\n node distance=0.25cm\n ]\n\n \\node[circle, inner sep =1pt, draw](op){\\scriptsize{$\\textsl{op}^n$}};\n \\node[circle] at ($(op)+(0, +1)$)(ab){\\scriptsize{\\ldots $n\\geq 0$\\ldots}} ;\n \\coordinate[right = of ab](abri){} ;\n \\coordinate[left = of ab](able){} ;\n \\draw[->](able)to[out=270, in=135](op);\n \\draw[->](abri)to[out=270, in=45](op);\n \\draw[->](op)--($(op)+(0, -1)$) ;\n \n \n \n \\node[node, draw, left=3cm of op](in){\\scriptsize{$\\textrm{x}$}};\n \\draw[->](in)--($(in)+(0, -1)$) ; \n \n \\node[circle, inner sep= 1pt, draw, right=3cm of op, fill=black](fan){};\n \\node[node] at ($(fan)+(0, -1)$)(befan){\\scriptsize{\\ldots $n\\geq 0$\\ldots}} ;\n \\node[circle] at ($(fan)+(0, +1)$)(abfan){} ;\n \\coordinate[right = of befan](berifan){} ;\n \\coordinate[left = of befan](belefan){} ;\n \\draw[->, thick](fan)to[out=225, in=90](belefan);\n \\draw[->,thick](fan)to[out=315, in=90](berifan);\n \\draw[->](abfan)to[out=270, in=90](fan); \n \n \\node[node, draw, right=4cm of belefan](out){\\scriptsize{$\\mathrm{y}$}};\n \\draw[<-](out)--($(out)+(0, 1)$) ; \n \\end{tikzpicture}\n }} \n \n\n\\subfigure[\\scriptsize{The 2-bits full-adder boolean circuit}\\label{figure:2-bits full adder}]{\n\\scalebox{.65}{\n \\begin{tikzpicture}[\n auto=left,\n empty\/.style = {minimum size=0cm,fill=white},\n el\/.style = {inner sep=2pt, align=left, sloped},\n node\/.style={circle,minimum size=.9cm,fill=white},\n 
sqnode\/.style={rectangle,inner sep=2pt,fill=white},\n node distance=1cm\n ]\n \\draw(0, 1)node(q){};\n \\node[node,draw ] (b1) {$ \\mathrm{x}_1 $};\n \\node[node,draw,below=of b1] (b2) {$ \\mathrm{x}_2 $};\n \\node[node,draw,below=of b2] (cin) {$ \\mathrm{x}_{\\textrm{in}} $};\n \n \\node[circle, fill=black, inner sep=1pt, right=of b1, draw ](fop1) {};\n \\node[circle, fill=black, inner sep=1pt, right=of b2, draw ](fop2){};\n\n\n \\node[node,draw,right=2cm of b1] (xor1) {$\\bigoplus$};\n \\node[circle, fill=black, inner sep=1pt, right=of xor1] (foxor1) {};\n \\node[node,draw,right=2cm of b2] (and1) {$\\bigwedge$};\n \n \\node[node,draw,right= of foxor1] (xor2) {$\\bigoplus$};\n \\draw ($ (xor2)+(0,-3.825) $) node[node,draw] (and2) {$\\bigwedge$};\n \\node[circle, fill=black, inner sep=1pt, draw, left= of and2] (focin) {};\n \n \\node[node, below=of xor2](preor){};\n \\node [node,draw, right= of preor] (or) {$\\bigvee$ };\n \n \n \n \\node[node, right=of xor2, draw] (output1){$ \\mathrm{y}_1 $};\n \\node[node, right=of or, draw] (output2){$ \\mathrm{y}_2 $};\n \\draw (xor2) to[out=0, in=180]node[above]{\\scriptsize{$s$}} (output1);\n \\draw (or) to[out=0, in=180]node[above]{\\scriptsize{$y_{\\text{out}}$}} (output2);\n \\draw ($ (output1)+(1.75,0) $) node[sqnode] (s) {\\scriptsize{$(x'_1\\oplus x'_2)\\oplus y'$}};\n \\draw ($ (output2)+(2.50,0) $) \n node[sqnode] (s) {\\scriptsize{$(x''_1\\wedge x''_2)\\vee((x'_1\\oplus x'_2)\\wedge y'')$}};\n\n \n \\draw (b1) to[out=0,in=180]node[above]{\\scriptsize{$x_1$}} (fop1);\n \\draw (b2) to[out=0,in=180]node[above]{\\scriptsize{$x_2$}} (fop2);\n \\draw (cin) to[out=0,in=180]node [above]{\\scriptsize{$y_{\\text{in}}$}}(focin);\n \n \\draw[thick](fop1) to[out=0,in=180] node[above ] {\\scriptsize{$ x'_1 $}} (xor1);\n \\draw[thick](fop1) to[out=0,in=135] node[below,pos=0.2] {\\scriptsize{$ x''_1\\ \\ $}} (and1);\n \n \\draw[ thick] (fop2) to[out=0,in=225] node[above,pos=0.2] {\\scriptsize{$ x'_2\\ \\ $}} (xor1);\n \\draw[ 
thick] (fop2) to[out=0,in=180] node[below ] {\\scriptsize{$ x''_2 $}} (and1);\n \n \\draw[ thick] (focin) to[out=0,in=225] node[above, pos=0.1] {\\scriptsize{$ y'\\ \\ $}} (xor2);\n \\draw[ thick] (focin) to[out=0,in=180] node[below] {\\scriptsize{$ y''$}} (and2);\n\n \\draw (xor1) to[out= 0,in=180]node[above]{\\scriptsize{$z_1$}} (foxor1);\n \\draw[ thick] (foxor1) to[out= 0,in=180] node[above] {\\scriptsize{$ z_1' $}} (xor2);\n \\draw[ thick] (foxor1) to[out=0,in=135] node[below,pos=0.1] {\\scriptsize{$z''_1 \\ \\ $}} (and2);\n\n \\draw (and1) to[out=0,in=180] node[above,pos=0.1] {\\scriptsize{$ z_2 $}} (or);\n \\draw (and2) to[out=0,in=225] node[below, right] {\\scriptsize{$z_3 $}} (or);\n \\end{tikzpicture}\n}\n}\n\\subfigure[\\scriptsize{The 3-bits majority boolean circuit}\n\\label{figure:maj3}]{\n\\scalebox{.65}{\n\\begin{tikzpicture}[\nauto=left,\nempty\/.style = {minimum size=0cm,fill=white},\nel\/.style = {inner sep=2pt, align=left, sloped},\nnode\/.style={circle,minimum size=.9cm,fill=white},\nsqnode\/.style={rectangle,inner sep=2pt,fill=white},\nnode distance=1cm\n]\n \\draw(0, 1)node(q){};\n\\node[node,draw ] (b1) {$ \\mathrm{x}_1 $};\n\\node[node,draw,below=of b1] (b2) {$ \\mathrm{x}_2 $};\n\n\\node[circle, fill=black, inner sep=1pt, right=of b1, draw] (fob1){}; \n\\node[circle, fill=black, inner sep=1pt, right=of b2, draw ] (fob2){};\n\n\\node[node,draw,right=of fob1] (o1) {$\\bigvee$};\n\\node[node,draw,right=of fob2] (a1) {$\\bigwedge$};\n\\node[node,draw,below=of b2 ] (b3) {$ \\mathrm{x}_3 $};\n\n\n\n\\coordinate[right=of a1] (foa1);\n\n\\node[node,draw,right=of foa1] (o2) {$\\bigvee$};\n\\node[node,draw,below=of o2] (a2) {$\\bigwedge$};\n\n\\node[circle, fill=black, inner sep=1pt, right=of a1, draw ] (foa1){};\n\\node[node, below= of a1](beforefob3){};\n\\node[circle, fill=black, inner sep=1pt, right= of beforefob3, draw] (fob3){};\n\\node[circle, fill=black, inner sep=1pt, right=of o2, draw ] (foo2){};\n\n\\node[node,draw,right=of foo2] (a3) 
{$\\bigwedge$};\n\\node[node,draw,above=of a3] (o3) {$\\bigvee$};\n\\node[circle, fill=black, inner sep=1pt, left= of o3, draw] (foo1){};\n\\draw ($ (o3)+(2,-1) $) node[node,draw] (a) {$\\bigwedge$};\n\n\\node[node, right = of a,draw ] (output1){$\\mathrm{y_1}$};\n\\node[node, right=of a2, draw] (output2){$\\mathrm{y_2}$};\n\\draw (a) to[out=0, in=180]node[above]{\\scriptsize{$m$}} (output1);\n \\draw (a2) to[out=0, in=180]node[above]{\\scriptsize{$g$}} (output2);\n\\draw ($(output1)+(2,0)$)node[sqnode] (grabage) {\\scriptsize{$ \\textsl{maj}_3(x_1',x_2',x_3')$}};\n\\draw ($(output2)+(2,0)$)node[sqnode] (out) {\\scriptsize{$ \\textsl{min}_3\\{x_1'',x_2'',x_3''\\} $}};\n\n\\draw (b1) to[out= 0,in=180]node[above]{\\scriptsize{$x_1$}} (fob1);\n\\draw[thick] (fob1) to[out= 0,in=180] node[above] { \\scriptsize{$x_1'$}} (o1);\n\\draw[thick] (fob1) to[out= 0,in=135] node[below, pos=0.2] { \\scriptsize{$x_1''\\ \\ $}} (a1);\n\\draw (b2) to[out= 0,in=180] node[above]{\\scriptsize{$x_2$}}(fob2);\n\\draw[thick] (fob2) to[out= 0,in=225] node[above, pos=0.2] {\\scriptsize{ $x_2'\\ \\ \\ $}} (o1);\n\\draw[thick] (fob2) to[out= 0,in=180] node[below] { \\scriptsize{$x_2''$}}(a1);\n\n\\draw (o1) to[out= 0,in=180] (foo1);\n\\draw (a1) to[out= 0,in=180]node[above]{\\scriptsize{$y_2$}} (foa1);\n\\draw[thick] (foa1) to[out=0,in=135] node[below, pos=0.2] {\\scriptsize{$ y_2''\\ \\ $}} (a2);\n\\draw [thick](foa1) to[out=0,in=180] node[above] {\\scriptsize{$ y_2'$}} (o2);\n\\draw (b3) to[out= 0,in=180]node[above]{\\scriptsize{$x_3$}} (fob3);\n\\draw[thick] (fob3) to[out= 0,in=225] node[above, pos=0.2] {\\scriptsize{$ x_3'\\ \\ \\ $}} (o2);\n\\draw[thick] (fob3) to[out= 0,in=180] node[below] {\\scriptsize{$ x_3''$}} (a2);\n\n\\draw (o1) to[out= 0,in=180]node[above]{\\scriptsize{$y_1$}} (foo1);\n\\draw[thick] (foo1) to[out= 0,in=180] node[above] {\\scriptsize{$ y_1'$}} (o3);\n\\draw[thick] (foo1) to[out= 0,in=135] node[below, pos=0.2] {\\scriptsize{$ y_1''\\ \\ $}} (a3);\n\\draw (o2) 
to[out= 0,in=180]node[above]{\\scriptsize{$y_3$}} (foo2);\n\\draw[thick] (foo2) to[out= 0,in=225] node[above, pos=0.2] {\\scriptsize{ $y_3'\\ \\ \\ $}} (o3);\n\\draw[thick] (foo2) to[out= 0,in=180] node[below] {\\scriptsize{$ y_3''$}} (a3);\n\n\\draw (o3) to[out= 0,in=135]node[above, right]{\\scriptsize{$\\ z_1$}} (a);\n\\draw (a3) to[out= 0,in=225]node[below, right]{\\scriptsize{$\\ z_2$}} (a);\n\\end{tikzpicture}\n}\n}\n\\caption{Nodes of boolean circuits and some examples. \nWriting, for example,\n$ \\textsl{maj}_3\\{x_1',x_2',x_3'\\} $ in place of \n$ \\textsl{maj}_3\\{x_1,x_2,x_3\\} $ would be equivalent. \nThe current notation just highlights which is the component of the fan-out nodes that an output depends on.}\n\\label{fig: some examples of boolean circuits}\n \\end{figure}\n \n\\begin{figure}\n\\scalebox{0.85}{\n\\bgroup\n\\def\\arraystretch{1.5}%\n\\begin{tabular}{l|ll} \n $\\neg$ & $\\mathtt{not} \\triangleq\\lambda b. \\lambda x. \\lambda y. b yx $&$:\\mathbf{B} \\multimap\\mathbf{B}$\n \\\\ \\hline\n$\\wedge^0$ &$\\mathtt{and}^0 \\triangleq \\lambda x.\\lambda y. \\langle x,y \\rangle$ &$:\\mathbf{B}$\n\\\\\n$\\wedge^1$ &$\\mathtt{and}^1\\triangleq I$ &$:\\mathbf{B}\\multimap \\mathbf{B}$\n\\\\\n$\\wedge^2$& $ \\mathtt{and}^2 \\triangleq \n\\lambda x_1. \\lambda x_2. \\pi_1(x_1 x_2\\, \\mathtt{ff})$ &$:\n\\mathbf{B}\\multimap \\mathbf{B} \\multimap \\mathbf{B}$\n\\\\\n$\\wedge^{n+2} $ & $\\mathtt{and}^{n+2}\\triangleq \\lambda x_1\\ldots x_{n+1}x_{n+2}. \\mathtt{and}^2\\, (\\mathtt{and}^{n+1}\\, x_1\\, \\ldots\\, x_{n+1})\\, x_{n+2}$ &$:\\mathbf{B}\\multimap \\overset{n+2}{\\ldots} \\multimap \\mathbf{B}\\multimap\\mathbf{B}$\n\\\\ \\hline\n$\\vee^0$ &$ \\mathtt{or}^0 \\triangleq \\lambda x.\\lambda y. \\langle y,x \\rangle$ &$:\\mathbf{B}$\\\\\n$\\vee^1$ &$\\mathtt{or}^1 \\triangleq I $ &$:\\mathbf{B}\\multimap \\mathbf{B}$\n\\\\\n$\\vee^2$ & $\\mathtt{or}^2 \\triangleq \\lambda x_1. \\lambda x_2. 
\\pi_1(x_1 \\mathtt{tt}\\, x_2)$ &$:\n\\mathbf{B}\\multimap \\mathbf{B} \\multimap \\mathbf{B}$\n\\\\\n$\\vee^{n+2} $ & $\\mathtt{or}^{n+2}\\triangleq \\lambda x_1\\ldots x_{n+1}x_{n+2}. \\mathtt{or}^2\\, (\\mathtt{or}^{n+1}\\, x_1\\, \\ldots\\, x_{n+1})\\, x_{n+2}$ &$: \\mathbf{B}\\multimap \\overset{n+2}{\\ldots} \\multimap \\mathbf{B}\\multimap\\mathbf{B}$\\\\ \\hline\n\\textsl{fo}$^0$& $\\mathtt{out}^0\\triangleq \\lambda x. \\mathtt{discard}_{\\mathbf{B}}\\, x \\mathtt{ \\ in \\ } I$ &$: \\shpos \\mathbf{B}\\multimap \\mathbf{1}$\\\\\n\\textsl{fo}$^1$& $\\mathtt{out}^1\\triangleq I$ &$: \\shpos \\mathbf{B}\\multimap \\mathbf{B}$ \\\\\n\\textsl{fo}$^2$& $\\mathtt{out}^2\\triangleq \\lambda x. \n\\texttt{copy}_{\\mathbf{B}}^{\\mathtt{tt}}\\, x\\ \\texttt{as}\\ x_1,x_2\\ \\texttt{in}\\ \\langle x_1, x_2\\rangle$ &$:\\shpos \\mathbf{B}\\multimap \\shpos \\mathbf{B}\\otimes \\shpos \\mathbf{B}$ \\\\\n\\textsl{fo}$^{n+2}$& $\\mathtt{out}^{n+2}\\triangleq \\lambda x. \n\\texttt{copy}_{\\mathbf{B}}^{\\mathtt{tt}}\\, x\\ \\texttt{as}\\ x_1,x_2\\ \\texttt{in}\\ \n\\langle \n\\mathtt{out}^{n+1}\\, x_1,\nx_2\n\\rangle $ &$: \\shpos \\mathbf{B}\\multimap \\shpos \\mathbf{B} \\otimes \\overset{n+2}{\\ldots} \\otimes \\shpos \\mathbf{B}$ \\\\\n\\end{tabular}\n\\egroup \n}\n\\caption{Encoding of boolean functions and fan-out.}\n\\label{fig: enc boolean functions and fanout}\n\\end{figure}\n\n\\subsection{Boolean circuits in $\\mathsf{LEM}$}\n\\label{sec: encoding boolean circuits}\nWe encode boolean circuits as terms of $\\mathsf{LEM}$\n(Definition~\\ref{defn: from boolean circuits to terms}) and \nwe prove a simulation result \n(Proposition~\\ref{prop: simulation for boolean circuits}).\n\nThe encoding is inspired by Mairson\\&Terui~\\cite{mairsonlinear}. 
Other encodings of boolean circuits have been given in Terui~\\cite{DBLP:conf\/lics\/Terui04}, \nMogbil\\&Rahli~\\cite{DBLP:conf\/lfcs\/MogbilR07} and \nAubert~\\cite{DBLP:journals\/corr\/abs-1201-1120} by considering the \\textit{unbounded} proof-nets for the multiplicative fragment \\textsf{MLL} of Linear Logic. Unbounded proof-nets are an efficient language able to express $n$-ary tensor products by single nodes and to characterize parallel computational complexity classes such as \\textsf{NC}, \\textsf{AC}, and $\\textsf{P}\/_{\\operatorname{poly}}$. The contribution of this work to these encodings is the use of \\texttt{copy} and \\texttt{discard} to directly express the fan-out nodes, which allows for a more compact and modular representation of circuits. In particular, as compared to~\\cite{DBLP:conf\/lics\/Terui04,DBLP:conf\/lfcs\/MogbilR07,DBLP:journals\/corr\/abs-1201-1120}, our encoding is able to get rid of the garbage that accumulates in the course of the simulation. \n\nWe start by briefly recalling the basics of boolean circuits from \nVollmer~\\cite{Vollmer:1999:ICC:520668}.\n\n\\begin{defn}[Boolean circuits] \nA \\textit{boolean circuit} $ C $ is a finite, directed and acyclic graph with $n$ \\emph{input nodes}, $m$ \\emph{output nodes}, \\emph{internal nodes} and \\textit{fan-out nodes} as in Figure~\\ref{figure:node of boolean circuits}. \nThe incoming (resp.~outgoing) edges of a node are \\textit{premises} \n(resp.~\\textit{conclusions}). The \\textit{fan-in} of an internal node is the number of its premises. \nLabels for the $n$ input nodes of $ C $ are $\\mathrm{x}_1, \\ldots, \\mathrm{x}_n$ and those for the $m$ outputs are $\\mathrm{y}_1, \\ldots, \\mathrm{y}_m$. \nEach internal node with fan-in $n \\geq 0$ has an $n$-ary boolean function $\\textsl{op}^n$ as its label, provided that if $n=0$, then $\\textsl{op}^n$ is a boolean constant in $\\lbrace 0,1 \\rbrace$. \nThe fan-out nodes have no label. 
\nInput and internal nodes are \\textit{logical nodes} and their conclusions are \\textit{logical edges}. If $\\nu$ and $\\nu'$ are logical nodes, then $\\nu'$ is a \\textit{successor} of $\\nu$ if a directed path from $\\nu$ to $\\nu'$ exists which crosses no logical node.\nThe \\emph{size} $ |C| $ of $C$ is the number of nodes. Its \\emph{depth} $ \\delta(C) $ is the length of the longest path from an input node to an output node. \nA \\textit{basis} $\\mathcal{B}$ is a set of boolean functions. A boolean circuit $C$ is \\textit{over a basis} $\\mathcal{B}$ if the label of each of its internal nodes belongs to $\\mathcal{B}$. The standard unbounded fan-in basis is $\\mathcal{B}_1= \\lbrace \\neg, (\\wedge^n)_{n \\in \\mathbb{N}}, (\\vee^n)_{n \\in \\mathbb{N}}\\rbrace$. \n\\qed\n\\end{defn}\n\\noindent\nWhen representing boolean circuits as terms we label edges by $\\lambda$-variables, we omit their orientation, we assume that every fan-out node has a logical edge as its premise, and we draw non-logical edges, i.e.~conclusions of fan-out nodes, as thick lines.\nFigures~\\ref{figure:2-bits full adder} and~\\ref{figure:maj3} are examples.\nThe first one is a 2-bits full-adder. \nIt takes two bits $ x_1, x_2 $ and a carry $ y_{\\textrm{in}} $ as inputs.\nIts outputs are the sum $ s = (x'_1\\oplus x'_2)\\oplus y' $ and the \ncarry $ y_{\\text{out}} = (x''_1\\wedge x''_2)\\vee((x'_1\\oplus x'_2)\\wedge y'')$, where $ \\oplus $ is the exclusive or, which we can\nobtain from the functionally complete functions in $ \\mathcal{B}_1 $.\nFigure~\\ref{figure:maj3} is the 3-bits majority function $ \\textsl{maj}_3(x_1,x_2,x_3)$. 
\nIt serially composes three occurrences of the boolean circuit that switches two inputs $ x_1 $ and $ x_2 $ in order to put the greatest on the topmost output and the smallest on the bottommost one, under the convention that $ 0$ is smaller than $1$.\nSo, the 3-bits majority circuit first sorts its input bits and then checks whether the topmost two, i.e.~the majority, are both set to $1$.\nThe lowermost output is garbage. \n\nTranslating boolean circuits as terms of $\\mathsf{LEM}$ requires encoding the boolean functions in $\\mathcal{B}_1$ and the fan-out nodes. \nFigure~\\ref{fig: enc boolean functions and fanout} reports these encodings, where $\\mathtt{tt}$ and $\\mathtt{ff}$ encode the boolean values in~\\eqref{eqn: boolean data type}, and $\\pi_1$ is the projection in~\\eqref{eqn: boolean projection}. \nAs a typographical convention, $\\mathtt{i}\\in \\lbrace \\mathtt{tt}, \\mathtt{ff} \\rbrace$ will code the boolean constant $i \\in \\lbrace 0,1 \\rbrace$, \nand $\\mathtt{op}^n$ the $n$-ary boolean function $\\textsl{op}^n$, according to Figure~\\ref{fig: enc boolean functions and fanout}. \nWe shorten $\\mathtt{and}^0$, $\\mathtt{or}^0$, $\\mathtt{and}^2$, $\\mathtt{or}^2$, and $\\mathtt{out}^2$ as $\\mathtt{tt}$, $\\mathtt{ff}$, $\\mathtt{and}$, $\\mathtt{or}$, and $\\mathtt{out}$, respectively. The encoding of the binary exclusive or $\\oplus$ is $\\mathtt{xor}$.\n\nWe recall that boolean circuits are a model of parallel computation, while the $\\lambda$-calculus models sequential computation. Mapping the former into the latter requires some technicalities. 
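Before giving the formal translation, the behaviour of the two example circuits can be double-checked against their boolean specifications. The following Python sketch (ours, purely illustrative; the wire names follow the figures) evaluates both circuits exhaustively:

```python
# Illustration (ours): the two example circuits, checked exhaustively
# against their boolean specifications.
from itertools import product

def full_adder(x1, x2, y_in):
    """2-bits full-adder: sum s and output carry y_out."""
    s = (x1 ^ x2) ^ y_in
    y_out = (x1 & x2) | ((x1 ^ x2) & y_in)
    return s, y_out

def maj3(x1, x2, x3):
    """3-bits majority: three 2-bit sorters (or = max, and = min),
    then a check that the two topmost sorted wires are both 1."""
    y1, y2 = x1 | x2, x1 & x2        # sort x1, x2
    y3, g = y2 | x3, y2 & x3         # sort min(x1, x2) against x3
    m = (y1 | y3) & (y1 & y3)        # majority bit
    return m, g                      # g is the garbage output

for x1, x2, c in product((0, 1), repeat=3):
    assert full_adder(x1, x2, c) == ((x1 + x2 + c) % 2, (x1 + x2 + c) // 2)
    assert maj3(x1, x2, c)[0] == (1 if x1 + x2 + c >= 2 else 0)
```

The two asserted equalities are exactly the specifications of binary addition with carry and of the majority function.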
The notion of \\emph{level} allows us to topologically sort the structure of a boolean circuit in order to preserve the node dependencies:\n\\begin{defn}[Level] The \\textit{level} $l$ of a logical node $\\nu$ in a boolean circuit $C$ is:\n\\begin{enumerate}\n\\item $ 0 $ if $\\nu$ has no successors, and\n\\item $\\max \\lbrace l_1, \\ldots, l_k \\rbrace +1$ if $\\nu$ has successors $\\nu_1, \\ldots, \\nu_k$ with levels $l_1, \\ldots, l_k$.\n\\end{enumerate}\nThe \\textit{level} of a logical edge is the level of the logical node it is the conclusion of. \nThe \\textit{level} of a boolean circuit is the greatest level of its logical nodes.\n\\qed\n\\end{defn}\nWe define a level-by-level translation of unbounded fan-in boolean circuits over $\\mathcal{B}_1$ into terms typable in $\\mathsf{LEM}$, taking inspiration from Schubert~\\cite{schubert2001complexity}:\n\\begin{defn}[From boolean circuits to terms]\\label{defn: from boolean circuits to terms}\nLet $C$ be a boolean circuit with $n$ inputs and $m$ outputs. We define the term $\\mathtt{level}^l_C$ by induction on $l \\geq -1$:\n\\begin{enumerate}\n\\item $\\mathtt{level}_{C}^{-1} \\triangleq \\langle x_1, \\ldots, x_n \\rangle$, where $x_1, \\ldots, x_n$ are the variables labelling the logical edges of level $0$.\n\\item $\\mathtt{level}_{C}^{l} \\triangleq (\\lambda x_1\\ldots x_n x_{n+1}\\ldots x_{m}. 
\\mathtt{let}\\, (\\mathtt{out}^{k_{1}}\\, x_{1}) \\ \\mathtt{\\ be \\ }y^1_1, \\ldots, y^1_{k_1} \\mathtt{ \\ in \\ } \\ldots$\\\\\n$\\mathtt{let}\\, (\\mathtt{out}^{k_{n}}\\, x_{n}) \\mathtt{\\ be \\ }y^n_1, \\ldots, y^n_{k_n} \\mathtt{ \\ in \\ } \\mathtt{level}_{C}^{l-1})\\, B_1 \\ldots B_m$, where:\n\\begin{enumerate}\n\\item $x_1,\\ldots ,x_n,x_{n+1},\\ldots, x_{m}$ are the variables labelling the logical edges of level $ l $, \n\\item for all $1 \\leq j \\leq n$, $x_j$ is the premise of a fan-out node with conclusions labelled with $y^j_1, \\ldots, y^j_{k_j}$ (see Figure~\\ref{fig: level fan out e internal nodes}).\n\\begin{figure}\n\\centering\n\\begin{tikzpicture}[node\/.style={circle,minimum size=1pt,fill=white},\n node distance=0.25cm]\n \\node[circle, inner sep= 1pt, draw, fill=black](fan){};\n \\node[node] at ($(fan)+(0, -0.50)$)(befan){\\ldots} ;\n \\node[circle] at ($(fan)+(0, +0.50)$)(abfan){} ;\n \\coordinate[right = of befan](berifan){} ;\n \\coordinate[left = of befan](belefan){} ;\n \\draw[-, thick](fan)to[out=225, in=90]node[left]{\\scriptsize{$y^j_1$}\\ \\ }(belefan);\n \\draw[-,thick](fan)to[out=315, in=90]node[right]{\\ \\scriptsize{$y^j_{k_j}$}}(berifan);\n \\draw[-](abfan)to[out=270, in=90]node[right]{\\scriptsize{$x_j$}}(fan); \n\n \\node[circle, inner sep =1pt, draw, right=3cm of fan](op){\\scalebox{0.5}{$\\textsl{op}^h$}};\n \\node[circle] at ($(op)+(0, +0.50)$)(ab){\\scriptsize{\\ldots}} ;\n \\coordinate[right = of ab](abri){} ;\n \\coordinate[left = of ab](able){} ;\n \\draw[-](able)to[out=270, in=135]node[left]{\\scriptsize{$z_1$}}(op);\n \\draw[-](abri)to[out=270, in=45]node[right]{\\scriptsize{$\\, z_h$}}(op);\n \\draw[-](op)--node[right]{\\scriptsize{$x_i$}} ($(op)+(0, -0.50)$) ;\n\\end{tikzpicture}\n\\caption{From left, a fan-out node and an internal node.}\n\\label{fig: level fan out e internal nodes}\n\\end{figure}\n\\item for all $1 \\leq i \\leq m$, if $x_i$ is the variable labeling the conclusion of an internal node 
$\\textsl{op}^h$ with premises labeled by $z_1, \\ldots, z_h$, respectively (see Figure~\\ref{fig: level fan out e internal nodes}), then $B_i\\triangleq \\mathtt{op}^h\\, z_1\\, \\ldots\\, z_h$. If $x_i$ is the variable labelling the conclusion of an input node then $B_i \\triangleq x_i$.\n\\end{enumerate}\n\\end{enumerate}\nLast, if the input nodes have conclusions labeled by \n$x_1, \\ldots, x_{n}$, respectively, and if $C$ has level $l$,\nthen we define \n$\\lambda(C) \\triangleq \\lambda x. \\mathtt{let}\\ x \\mathtt{ \\ be \\ }x_1, \\ldots, x_{n} \\mathtt{ \\ in \\ } \\mathtt{level}_C^l$.\n\\qed\n\\end{defn}\n\\begin{exmp}[$2$-bits full adder] The level-by-level translation of the boolean circuit $C$ in Figure~\\ref{figure:2-bits full adder} is the following:\n\\label{exmp:2-bits full-adder}\n\\allowdisplaybreaks\n\\begin{align*}\n\\mathtt{level}_{C}^{-1}&\\triangleq \\langle s, y_{\\text{out}} \\rangle\\\\\n\\mathtt{level}_{C}^0&\\triangleq (\\lambda s. \\lambda y_{\\text{out}}. \\mathtt{level}^{-1}_C)(\\mathtt{xor}\\, z'_1\\, y')(\\mathtt{or}\\, z_2\\, z_3)\\\\\n\\mathtt{level}_{C}^1&\\triangleq (\\lambda z_2. \\lambda z_3. \\mathtt{level}_{C}^0)(\\mathtt{and}\\, x''_1\\, x''_2)(\\mathtt{and}\\, z''_1\\, y'') \\\\\n\\mathtt{level}_{C}^2&\\triangleq(\\lambda z_1.\\lambda y_{\\text{in}}. \\mathtt{let}\\,(\\mathtt{out}\\, z_1)\\ \\mathtt{be}\\ z'_1, z''_1 \\ \\mathtt{ \\ in \\ }\\\\\n&\n\\phantom{\\ \\triangleq\\ \\qquad }\n\\mathtt{let}\\,(\\mathtt{out}\\, y_{\\text{in}}) \\mathtt{ \\ be\\ }y', y'' \\mathtt{ \\ in \\ }\\mathtt{level}_C ^1 )(\\mathtt{xor}\\, x'_1\\, x'_2)\\, y_{\\text{in}}\\\\\n\\mathtt{level}_{C}^3&\\triangleq (\\lambda x_1. \\lambda x_2. 
\\mathtt{let}\\,(\\mathtt{out}\\, x_1) \\mathtt{ \\ be\\ }x_1', x_1'' \\mathtt{ \\ in \\ } \\\\\n&\n\\phantom{\\ \\triangleq\\ \\qquad }\n\\mathtt{let}\\,(\\mathtt{out}\\, x_2) \\mathtt{ \\ be\\ }x_2', x_2'' \\mathtt{ \\ in \\ } \\mathtt{level}_C^2)\\, x_1\\, x_2 \n\\end{align*}\nwhere we set $\\lambda(C)\\triangleq \\lambda x. \\mathtt{let }\\, x \\mathtt{\\ be \\ } x_1, x_2, y_{\\text{in}} \\mathtt{ \\ in \\ } \\mathtt{level}_C ^3$ which reduces to:\n\\begin{align*}\n&\\lambda x. \\mathtt{let }\\, x \\mathtt{\\ be \\ } x_1, x_2, y_{\\text{in}} \\mathtt{ \\ in \\ } (\\mathtt{let}\\, (\\mathtt{out}\\, x_1) \\mathtt{\\ be \\ }x'_1, x''_1 \\mathtt{ \\ in \\ }\\\\\n&\\mathtt{let}\\, (\\mathtt{out} \\, x_2) \\mathtt{\\ be \\ }x'_2, x''_2 \\mathtt{ \\ in \\ }( \\mathtt{let}\\, (\\mathtt{out}\\, y_{\\text{in}})\\mathtt{\\ be \\ }y', y'' \\mathtt{ \\ in \\ }\\\\\n&\\mathtt{let}\\,(\\mathtt{out}\\, (\\mathtt{xor}\\, x'_1\\, x'_2)) \\mathtt{ \\ be \\ } z'_1, z''_1 \\mathtt{ \\ in\\ } \\langle \\mathtt{xor}\\, z'_1\\, y', \\mathtt{or}\\, (\\mathtt{and}\\, x''_1\\, x''_2)(\\mathtt{and}\\, z''_1\\, y'') \\rangle \\ ))\n\\enspace .\n\\qed\n\\end{align*}\n\\end{exmp}\n\\begin{exmp}[$3$-bits majority] The level-by-level translation of the boolean circuit $C$ in Figure~\\ref{figure:maj3} is the following:\n\\label{exmp:maj3}\n\\allowdisplaybreaks\n\\begin{align*}\n\\mathtt{level}_C^{-1}&\\triangleq \\langle m, g \\rangle \n\\\\\n\\mathtt{level}_C^{0}&\\triangleq (\\lambda m. \\lambda g. \\mathtt{level}_C^{-1})(\\mathtt{and}\\, z_1\\, z_2)(\\mathtt{and}\\, y''_2\\, x''_3)\\\\\n\\mathtt{level}_C^{1}&\\triangleq (\\lambda z_1. \\lambda z_2. \\mathtt{level}_C^0)(\\mathtt{or}\\, y'_1\\, y'_3)(\\mathtt{and}\\, y''_1\\, y''_3)\\\\\n\\mathtt{level}_C^{2}&\\triangleq (\\lambda y_1. \\lambda y_3. 
\\mathtt{let}\\, (\\mathtt{out}\\, y_1)\\mathtt{\\ be \\ } y'_1, y''_1 \\mathtt{ \\ in \\ }\\\\\n& \n\\phantom{\\ \\triangleq \\quad}\n\\mathtt{let}\\, (\\mathtt{out} \\, y_3)\\mathtt{ \\ be \\ }y'_3, y''_3 \\mathtt{ \\ in \\ }\\mathtt{level}_C^1 )(\\mathtt{or}\\, x'_1\\, x'_2 )(\\mathtt{or}\\, y'_2\\, x'_3)\\\\\n\\mathtt{level}_C^3&\\triangleq (\\lambda y_2.\\lambda x_3. \\mathtt{let}\\, (\\mathtt{out}\\, y_2)\\mathtt{\\ be \\ }y'_2, y''_2 \\mathtt{ \\ in \\ }\\\\\n& \n\\phantom{\\ \\triangleq\\quad}\n\\mathtt{let}(\\mathtt{out} \\, x_3)\\mathtt{\\ be \\ }x'_3, x''_3 \\mathtt{ \\ in \\ }\\mathtt{level}_C^{2} )(\\mathtt{and}\\, x''_1\\, x''_2 )\\, x_3\\\\\n\\mathtt{level}_C^4&\\triangleq (\\lambda x_1. \\lambda x_2. \\mathtt{let}\\, (\\mathtt{out}\\, x_1)\\mathtt{\\ be \\ }x'_1, x''_1 \\mathtt{ \\ in \\ }\\\\\n& \n\\phantom{\\ \\triangleq \\quad}\n\\mathtt{let}\\, (\\mathtt{out}\\, x_2)\\mathtt{\\ be \\ }x'_2, x''_2 \\mathtt{ \\ in \\ }\\mathtt{level}_C^{3} )\\, x_1\\, x_2\n\\end{align*}\nwhere we set $\\lambda(C)\\triangleq \\lambda x. \\mathtt{let \\ }x \\mathtt{\\ be \\ }x_1, x_2, x_3 \\mathtt{ \\ in \\ }\\mathtt{level}_C^4$ which reduces to:\n\\begin{align*}\n&\\lambda x. 
\\mathtt{let \\ }x \\mathtt{\\ be \\ }x_1, x_2, x_3 \\mathtt{ \\ in \\ } \\mathtt{let}\\, (\\mathtt{out}\\, x_1)\\mathtt{\\ be \\ }x'_1,x''_1 \\mathtt{ \\ in \\ }\\\\\n&\\mathtt{let}\\,(\\mathtt{out} \\, x_2 )\\mathtt{\\ be \\ }x'_2, x''_2 \\mathtt{\\ in\\ } ( \\mathtt{let}\\,(\\mathtt{out}\\, x_3)\\mathtt{\\ be \\ }x'_3, x''_3 \\mathtt{ \\ in \\ }\\\\\n&\\mathtt{let}\\,(\\mathtt{out}\\,(\\mathtt{or}\\, x'_1\\, x'_2))\\mathtt{\\ be \\ }y'_1, y''_1 \\mathtt{ \\ in \\ } (\n \\mathtt{let}\\,(\\mathtt{out}\\,(\\mathtt{and}\\, x''_1\\, x''_2)) \\mathtt{\\ be \\ }y'_2, y''_2 \\mathtt{ \\ in \\ }\\\\\n& \\mathtt{let }\\,(\\mathtt{out}\\,(\\mathtt{or}\\, y'_2\\, x'_3))\\mathtt{\\ be \\ }y'_3, y''_3 \\mathtt{ \\ in \\ }\n\\langle \\mathtt{ and }\\,(\\mathtt{or}\\, y'_1\\, y'_3)(\\mathtt{and}\\, \\, y''_1\\, y''_3) , \\mathtt{and}\\,y''_2\\, x''_3 \\rangle \\ ))\n\\enspace . \\qed\n\\end{align*}\n\\end{exmp}\n\n\\noindent\nThe size of the term coding an internal node depends on its fan-in.\nLikewise, the size of the term coding a fan-out node depends on the number of conclusions. The size of the circuit bounds both values. Moreover, by Theorem~\\ref{thm:Subject Reduction for IMLL2shpos}, reducing a typable term yields a typable term. 
These observations imply:\n\\begin{prop}[Simulation of circuit evaluation]\\label{prop: simulation for boolean circuits}\nIf $ C$ is an unbounded fan-in boolean circuit over $\\mathcal{B}_1$ with $n$ inputs and $m$ outputs, then $\\lambda(C)$ is such that:\n\\begin{enumerate}\n\\item its size is $ O(|C|) $, \n\\item it has type $(\\shpos \\mathbf{B}\\otimes \\overset{n}{\\ldots}\\otimes \\shpos \\mathbf{B})\\multimap ( \\mathbf{B}\\otimes \\overset{m}{\\ldots}\\otimes \\mathbf{B})$ in $\\mathsf{LEM}$, and\n\\item for all $(i_1, \\ldots, i_n) \\in \\lbrace 0, 1 \\rbrace^n$, the evaluation of $C$ on input $(i_1, \\ldots, i_n)$ outputs the tuple $(i'_1, \\ldots, i'_{m})\\in \\lbrace 0, 1 \\rbrace^m$ iff $\\lambda(C) \\, \\langle \\mathtt{i}_1, \\dots, \\mathtt{i}_n \\rangle \\rightarrow^*\\langle \\mathtt{i}'_1, \\ldots, \\mathtt{i}'_m\\rangle$.\n\\end{enumerate}\n\\end{prop}\n\\noindent\nIt should not be surprising that the translation cannot preserve the depth of a given circuit, since $ \\mathsf{LEM}$ has only \\emph{binary} logical operators. That is why we use nested instances of \n\\enquote{\\texttt{let}}\n(Definition~\\ref{eqn:datatype unity and product}) to access single elements of\n$A_1\\otimes\\ldots\\otimes A_n $. We could preserve the depth by extending \\textsf{LEM} with unbounded tensor products as done, for example, in~\\cite{DBLP:conf\/lics\/Terui04} for the multiplicative fragment of linear logic $\\mathsf{MLL}$. \n\n\n\\subsection{Numerals in $ \\mathsf{LEM} $}\n\\label{section:Church numerals}\nWe introduce a class $ \\mathcal{N} $ of terms in $\\mathsf{LEM}$, called \\emph{numerals}, that represent natural numbers.\nWe give a successor \\texttt{S} and an addition \\texttt{A} on numerals, both typable in $\\mathsf{LEM}$, and we show, by using Subject reduction, that they \nbehave as expected. 
Moreover, the numerals can operate as\niterators on a class of terms in $\\mathsf{LEM}$ that form a group with respect to application.\n\n\\begin{defn}[Terms and types for $ \\mathcal{N} $]\n\\label{defn:Terms and types for mathcalN}\nLet us recall that $\\mathbf{1}$ is $\\forall \\alpha. \\alpha \\multimap \\alpha$ and $I$ is $\\lambda x.x$ (Section~\\ref{sec: background}).\nThe numerals of $ \\mathcal{N} $ have the form:\n\\begin{align}\n\\nonumber\n\\overline{0} &\\triangleq \n \\lambda fx. \\texttt{discard}_{\\mathbf{1}}\\, f\\, \\texttt{in}\\, x: \\mathbb{N} \n\\\\\n\\nonumber\n\\overline{1} &\\triangleq\n \\lambda fx. fx: \\mathbb{N}\n\\\\\n\\overline{n+2} &\\triangleq\n \\lambda fx. \\texttt{copy}_{\\mathbf{1}}^{I}\\, f\\, \\texttt{as}\\,\n f_1\\ldots f_{n+2}\\,\\texttt{in}\\, f_1(\\ldots(f_{n+2}\\, x)\\ldots): \\mathbb{N}\n\\enspace ,\n\\label{eqn: infinito}\n\\end{align}\n\\noindent\nwhere, for any $M$,\n$ \\texttt{copy}_{\\mathbf{1}}^{I}\\, f_0\\, \\texttt{as}\\, f_1\\ldots f_n\\,\\texttt{in}\\ M $ in~\\eqref{eqn: infinito} stands for:\n\\begin{align*}\n\\texttt{copy}_{\\mathbf{1}}^{I}\\, f_0\\, \\texttt{as}\\, f_1, f'_2 \\,\\texttt{in}\\,\n (\\texttt{copy}_{\\mathbf{1}}^{I}\\, f'_2\\, \\texttt{as}\\, f_2, f'_3 \\,\\texttt{in}\\, \\ldots\n (\\texttt{copy}_{\\mathbf{1}}^{I}\\, f'_{n-1}\\, \\texttt{as}\\, f_{n-1}, f_n \\,\\texttt{in}\\, M)\\ldots)\n \\enspace ,\n\\end{align*}\n\\noindent\nand $ \\mathbb{N} \\triangleq \\mathbf{N}[\\mathbf{1}\/\\alpha] $ with\n$ \\mathbf{N} \\triangleq\n(\\shpos \\alpha) \\multimap \\alpha $.\nIn order to identify terms that represent the same natural number, \nwe take numerals up to the following equivalences:\n\\begin{align*}\n&\\texttt{copy}_{\\boldmath{1}}^{I}\\, f\\, \\texttt{as}\\, f_1, f_2 \\,\\texttt{in}\\, \n(\\texttt{copy}_{\\boldmath{1}}^{I}\\, f_2\\, \\texttt{as}\\, f_3, f_4 \\,\\texttt{in}\\, M)\n\\\\\n&\n\\ \\,\n\\qquad\n\\qquad\n\\qquad\n\\qquad\n\\qquad\n= \\texttt{copy}_{\\boldmath{1}}^{I}\\, f\\, \\texttt{as}\\, 
f_2, f_4 \\,\\texttt{in}\\, (\n \\texttt{copy}_{\\boldmath{1}}^{I}\\, f_2\\, \\texttt{as}\\, f_1, f_3 \\,\\texttt{in}\\, M)\n\\\\\n&f (\\texttt{copy}_{\\boldmath{1}}^{I}\\, f'\\, \\texttt{as}\\, g, h \\ \\texttt{in}\\, M)\n=\n\\texttt{copy}_{\\boldmath{1}}^{I}\\, f'\\, \\texttt{as}\\, g, h \\,\\texttt{in}\\, f\\,M\n\\enspace .\n\\qquad \n\\qquad \n\\qquad \n\\qquad \n\\qed\n\\end{align*}\n\\end{defn}\nThe elements of $ \\mathcal{N} $ are the analogue of the Church numerals. Let us compare $\\mathbb{N}$ to the type $\\mathbf{int}$ of the Church numerals in Linear Logic: \n\\begin{equation*}\n\\begin{split}\n\\mathbf{int}&\\triangleq \\forall \\alpha .(\\oc (\\alpha \\multimap \\alpha)\\multimap (\\alpha \\multimap \\alpha))\\\\\n\\mathbb{N} &\\triangleq (\\shpos \\forall \\alpha .(\\alpha \\multimap \\alpha))\\multimap \\forall \\alpha . (\\alpha \\multimap \\alpha) \\enspace .\n\\end{split}\n\\end{equation*}\nIn the former the universal quantification is in positive position, while in the latter it occurs on both sides of the main implication. This is because we can apply the modality $\\shpos$ only to ground types, which are closed. 
Also, observe that the lack of an external quantifier in $\mathbb{N}$ limits the use of numerals as iterators.\n\nThe analogy with the Church numerals can be pushed further by defining a successor $\texttt{S}$ and an addition $\texttt{A}$:\n\begin{align*}\n\texttt{S} & \triangleq\n\lambda nfx.\n\texttt{copy}_{\mathbf{1}}^{I}\,f\,\texttt{as}\, f_1, f_2 \,\texttt{in}\, f_1(n f_2 x): \mathbb{N}\multimap\mathbb{N}\n\\\n\texttt{A} & \triangleq\n\lambda mnfx.\n\texttt{copy}_{\mathbf{1}}^{I}\,f\,\texttt{as}\, f_1, f_2 \,\texttt{in}\, \nm f_1(n f_2 x): \mathbb{N}\multimap \mathbb{N}\multimap\mathbb{N}\n\enspace .\n\end{align*}\n\noindent\nFocusing on the computational behaviour of the terms\n(Figures~\ref{fig: term reduction rules} \nand~\ref{fig: term commuting conversions}), \nrather than on the underlying derivations, \nSubject Reduction \n(Theorem~\ref{thm:Subject Reduction for IMLL2shpos}) implies:\n\n\begin{prop} \label{prop: succ add} For all $n , m\geq 0$, $ \mathtt{S}\,\overline{n} \rightarrow^* \overline{n+1} $ and $ \mathtt{A}\,\overline{m}\,\overline{n}\rightarrow^* \overline{m+n} $.\n\end{prop}\n\noindent\nFor example, the following reduction is legal:\n\begin{align*}\n\texttt{S} \,\overline{2} & \triangleq\n(\lambda nfx.\n\texttt{copy}_{\mathbf{1}}^{I}\,f\,\texttt{as}\, f_1, f_2 \,\texttt{in}\, f_1(n f_2 x))\n( \lambda gy. \texttt{copy}_{\mathbf{1}}^{I}\,g\, \texttt{as}\,\n g_1, g_2\,\texttt{in}\, g_1(g_2\, y))\n\\\n& \rightarrow\n\lambda fx.\n\texttt{copy}_{\mathbf{1}}^{I}\,f\,\texttt{as}\, f_1, f_2 \,\texttt{in}\, f_1(( \lambda gy. \texttt{copy}_{\mathbf{1}}^{I}\,g\, \texttt{as}\,\n g_1, g_2\,\texttt{in}\, g_1(g_2\, y)) f_2 x)\n\\\n& \rightarrow\n\lambda fx.\n\texttt{copy}_{\mathbf{1}}^{I}\,f\,\texttt{as}\, f_1, f_2 \,\texttt{in}\, f_1(( \lambda y. 
\\texttt{copy}_{\\mathbf{1}}^{I}\\,f_2\\, \\texttt{as}\\,\n g_1, g_2\\,\\texttt{in}\\, g_1(g_2\\, y)) x)\n\\\\\n& \\rightarrow\n\\lambda fx.\n\\texttt{copy}_{\\boldmath{1}}^{I}\\,f\\,\\texttt{as}\\, f_1, f_2 \\,\\texttt{in}\\, f_1( \\texttt{copy}_{\\mathbf{1}}^{I}\\,f_2\\, \\texttt{as}\\,\n g_1, g_2\\,\\texttt{in}\\, g_1(g_2\\, x)) \n \\\\\n& =\n\\lambda fx.\n\\texttt{copy}_{\\boldmath{1}}^{I}\\,f\\,\\texttt{as}\\, f_1, f_2 \\,\\texttt{in}\\, ( \\texttt{copy}_{\\mathbf{1}}^{I}\\,f_2\\, \\texttt{as}\\,\n g_1, g_2\\,\\texttt{in}\\, f_1(g_1(g_2\\, x))) \n \\triangleq \\overline{3}\n\\end{align*}\n\\noindent\nObserve that Proposition~\\ref{prop: succ add} considers typable terms and term reductions by exploiting Theorem~\\ref{thm:Subject Reduction for IMLL2shpos}. A similar result cannot be restated for the related derivations and the lazy-cut elimination (Definition~\\ref{defn:Lazy cut-elimination strategy}). For example, the here above term $\\texttt{S} \\,\\overline{2}$ has type $\\mathbb{N}$, that is not lazy (Definition~\\ref{defn: lazyness}), due to the presence of a universal quantification in negative position. Indeed, the lazy cut-elimination strategy of a derivation of $\\texttt{S} \\,\\overline{2}$ runs into deadlocks before producing a cut-free derivation of $\\overline{3}$. \n\n\nAs far as we could see, the \\enquote{zero-test}, the predecessor and the subtraction on numerals cannot have type in $\\mathsf{LEM}$. \nThe problem is the position of the universal quantifiers of $ \\mathbb{N} $. Consider, for example, the following predecessor:\n\\begin{align*}\n\\texttt{P} & \\triangleq\n\\lambda nsz.n\\, S[s]\\, B[z]\n&&\n(\\textrm{Predecessor})\n\\\\\nS[M] & \\triangleq\n\\lambda p.\\texttt{let}\\ p\\ \\texttt{be}\\ l, r\\ \\texttt{in}\\ \n\\langle M, lr \\rangle\n&&\n(\\textrm{Step function})\n\\\\\nB[N] & \\triangleq \\langle I, N\\rangle\n&&\n(\\textrm{Base function})\n\\end{align*}\nintroduced by Roversi \\cite{Roversi:1999-CSL}. 
Giving a type to \texttt{P} would require substituting $ (\alpha\multimap\alpha)\otimes\alpha$ for $ \alpha $ in $ \mathbb{N} $, as suggested by the application of $ n:\mathbb{N} $ to $ S[s] $. The position of the universal quantifiers in $\mathbb{N}$ forbids it. Were such an instance legal, we could iterate functions, contradicting the cubic bound on the cut-elimination\n(Theorem~\ref{thm: cut elimination for downarrow IMLL2}).\n\nFurther, we can generalize $\mathbb{N}$ to\n$\mathbf{N}[\overline{(A \multimap A)} \/\alpha]$,\nwhere $\overline{(A \multimap A)}$ is the closure of a \nquantifier-free type $A \multimap A$,\nand find that the \emph{Hereditarily finite permutations} (\textsf{HFP})\nof Dezani\n\cite{DBLP:journals\/tcs\/Dezani-Ciancaglini76} \ninhabit $\overline{(A \multimap A)}$. \nAn \textsf{HFP} is a $\lambda$-term of the form $ P \triangleq \lambda z x_1 \ldots x_n. z(P_1 x_{\rho(1)})\ldots (P_n x_{\rho(n)}) $,\nfor some $ n\geq 0 $, where $\rho \in S_n$ (the symmetric group on $\lbrace 1, \ldots, n\rbrace$) and $ P_1, \ldots, P_n$ are \textsf{HFP}. The class $\mathcal{H}_{\text{lin}}$ of linear $\lambda$-terms which are \textsf{HFP} (considered modulo $\beta \eta$-equivalence) forms a group:\n\begin{enumerate}\n\item The binary operation is composition $\lambda fgx.f(g\,x)$;\n\item The identity is $I$;\n\item If $P=\lambda z x_1 \ldots x_n. z(P_1 x_{\rho(1)})\ldots (P_n x_{\rho(n)})$ is in $\mathcal{H}_{\text{lin}}$, the inductively defined inverse is:\n \[P^{-1} \triangleq \lambda z' x_1 \ldots x_n. z'(P^{-1}_{\rho^{-1}(1)} x_{\rho^{-1}(1)})\ldots (P^{-1}_{\rho^{-1}(n)} x_{\rho^{-1}(n)})\] where $\rho^{-1}$ is the inverse of the permutation $\rho$ and, for all $1 \leq i \leq n$, $P^{-1}_{i}$ is the inverse of $P_{i}$.\n\end{enumerate}\nFor example, let $ P =\lambda wabc.w(\lambda xy. ayx)(\lambda xy. bxy)c $, which belongs to \textsf{HFP} since\n$ \lambda xy. ayx =_{\beta } (\lambda zxy. 
zyx) a $,\n$ \lambda xy. bxy =_{\beta } (\lambda zxy. zxy) b $,\n$ c =_{\beta } Ic $, where $ (\lambda zxy. zyx)$, $ (\lambda zxy. zxy) $ and $ I $ are in \textsf{HFP}. Then, $P$ has type\n$\forall \alpha_x \alpha_y\alpha_a\alpha_b\alpha_c\alpha. \nA \multimap A$, \nwhich is a ground type,\nwhere $A$ is \n$(\alpha_x\multimap\alpha_y\multimap\alpha_a )\n \multimap \n (\alpha_x\multimap\alpha_y\multimap\alpha_b ) \n \multimap \alpha_c \n \multimap \alpha$.\nThese observations show an unexpected link between $\mathsf{LEM}$ and reversible computation (see Perumalla \cite{perumalla2013chc} for a thorough introduction) that can be expressed in terms of monoidal structures where permutations play a central role \cite{DBLP:conf\/types\/PaoliniPR15, paolini16ictcs, paolini2018ngc}.\n\n\section{The proof of Theorem~\ref{thm: pi1 are duplicable}} \n\label{sec: the d-soundness theorem DICE}\nIn this section we give a detailed proof of Theorem~\ref{thm: pi1 are duplicable} for $\mathsf{IMLL}_2$, which states that if $A$ is an inhabited ground type, i.e.~an inhabited closed $\Pi_1$-type, then $A$ is also a duplicable type. By Definition~\ref{defn: duplicable and erasable types}, this amounts to showing that a linear $\lambda$-term $\mathtt{D}_{A}: A\multimap A \otimes A$ exists such that $\mathtt{D}_{A}\, V \rightarrow^*_{\beta \eta} \langle V, V \rangle$ holds for every value $V$ of $A$. We shall construct $ \mathtt{D}_{A} $ as the composition of three linear $\lambda$-terms, as diagrammatically displayed in Figure~\ref{fig:DA diagram}. We dedicate a specific subsection to each component. For the sake of presentation, in this section we focus on terms of $\mathsf{IMLL}_2$ rather than on derivations, so that when we say that a term $M$ has type $A$ with context $\Gamma$, we mean that a derivation $\mathcal{D}$ exists such that $\mathcal{D}\triangleleft \Gamma \vdash M: A$. 
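As a rough analogy, the pipeline of Figure~\ref{fig:DA diagram} can be caricatured in ordinary code. The Python sketch below is ours and deliberately naive, but it isolates the key idea: the value itself is never copied, only its boolean code is. Here `sub_` stands in for the normalisation step (the identity in this toy), `enc_` for the coding into booleans, and `dec_` for the double decoding.

```python
# Toy version of D_A = dec . enc . sub: duplicate a value by copying
# its boolean code rather than the value itself.

def sub_(v):                       # stand-in for sub^s_A
    return v

def enc_(v):                       # stand-in for enc^s_A
    bits = []
    for byte in v.encode("utf-8"):
        bits.extend(bool((byte >> i) & 1) for i in range(8))
    return tuple(bits)

def dec_(bits):                    # stand-in for dec^s_A: decode twice
    data = bytes(
        sum(bits[i + j] << j for j in range(8))
        for i in range(0, len(bits), 8)
    )
    v = data.decode("utf-8")
    return (v, v)

def D(v):
    return dec_(enc_(sub_(v)))

assert D("\\x.x") == ("\\x.x", "\\x.x")
```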
Moreover, as assumed in Section~\\ref{sec: background}, terms are considered modulo $\\alpha$-equivalence. \n\\begin{figure}\n \\begin{tikzpicture}[baseline= (a).base]\n \\node[scale=.725] (a) at (0,0){\n \\begin{tikzcd}[column sep=scriptsize, row sep=tiny]\n A \\ar[rr, \"\\mathtt{sub}^s_{A}\"] \n && \n A^-[\\mathbf{B}^s] \\ar[rr, \"\\mathtt{enc}^s _{A}\"] \n && \n \\mathbf{B}^s \n \\ar[rr, \"\\mathtt{dec}^s _{A} \"] \n &&\n A \\otimes A\n \\\\\n V \\ar[rr, mapsto] \n && \n V_{A} \\ar[rr, mapsto] \n && \n \\lceil V_{A} \\rceil \\ar[rr, mapsto]\n &&\n \\langle V_{A}, V_{A} \\rangle\n \\end{tikzcd}\n };\n \\end{tikzpicture}\n \\caption{The diagrammatic representation of $ \\mathtt{D}_{A}$.}\n \\label{fig:DA diagram}\n\\end{figure}\n\\subsection{The linear $ \\lambda $-term $ \\mathtt{sub}^s_{A} $}\nRoughly, the $\\lambda$-term $\\mathtt{sub}^s_{A}$, when applied to a value $V$ of ground type $A$, produces its $\\eta$-long normal form $V_{A}$ whose type is obtained from $A$ as follows: we strip away every occurrence of $\\forall$ and we substitute each type variable with the $s$-ary tensor of boolean datatypes $\\mathbf{B}^s=\\mathbf{B}\\otimes \\overset{s}{\\ldots}\\otimes \\mathbf{B}$, for some $s>0$.\\\\\nBefore introducing the $\\lambda$-term $\\mathtt{sub}^s_{A}$, we need the definition of $\\eta$-long normal form:\n\\begin{defn}[$\\eta$-long normal forms]\\label{defn: eta long nf}\nLet $\\mathcal{D}\\triangleleft \\Gamma \\vdash M: B$ be cut-free.\nWe define the $\\eta$-\\textit{expansion} of $\\mathcal{D}$, denoted $\\mathcal{D}^\\Gamma_B$, as the derivation obtained from $\\mathcal{D}$ by substituting every occurrence of:\n\\begin{prooftree}\n\\AxiomC{}\n\\RightLabel{$ax$}\n\\UnaryInfC{$x: A \\vdash x: A$}\n\\end{prooftree}\n with a derivation of $ x: A \\vdash M':A $, for some $ M' $, whose axioms have form $y: \\alpha \\vdash y: \\alpha$. 
The $\\eta$-expansion is unique and transforms the $\\lambda$-term $M$ to its \\emph{$\\eta$-long normal form}, denoted by $M^{\\Gamma}_B$ and such that $M^\\Gamma_B\\rightarrow^*_\\eta M$.\nIf the context $\\Gamma$ of an $ \\eta $-expanded $ \\mathcal{D} $ is \n$x_1: A_1, \\ldots, x_n:A$ we may write \n$\\mathcal{D}^{A_1, \\ldots, A_n}_B$ and $M^{A_1, \\ldots, A_n}_B$.\nIf $\\Gamma$ is empty, we feel free to write $\\mathcal{D}_B$ and $M_B$.\n\\end{defn}\n\\begin{lem}\\label{lem: canonical definition for eta long normal forms} Let $\\mathcal{D}\\triangleleft \\Gamma \\vdash M: A$ be a cut-free derivation in $\\mathsf{IMLL}_2$, and let $M^\\Gamma_A$ denote the $\\eta$-long normal form obtained by $\\eta$-expanding $\\mathcal{D}$. Then:\n\\begin{enumerate}[1]\n\\item \\label{eqn: 1 canonical definition for eta long normal forms} If $M=x$, $A= \\alpha$, and $\\Gamma= x: \\alpha$ then $x^\\alpha_\\alpha=x$.\n\\item \\label{eqn: 2 canonical definition for eta long normal forms} If $M=x$, $A= \\forall \\alpha. B$, and $\\Gamma= x: \\forall \\alpha. B$ then $x^{\\forall \\alpha. B}_{\\forall \\alpha. B}= x^{B}_B$.\n\\item \\label{eqn: 3 canonical definition for eta long normal forms} If $M=x$, $A= B \\multimap C$, and $\\Gamma= x: B \\multimap C$ then $x^{B \\multimap C}_{B \\multimap C}=\\lambda y.(xy^{B}_B)^C_C$.\n\\item \\label{eqn: 4 canonical definition for eta long normal forms} If $A= \\forall \\alpha. B$ then $M^{\\Gamma}_{\\forall \\alpha. B}= M^\\Gamma_{B\\langle \\gamma\/\\alpha \\rangle}$, for some $\\gamma$. \n\\item \\label{eqn: 5 canonical definition for eta long normal forms} If $M= \\lambda x. N$ and $A= B \\multimap C$ then $(\\lambda x. N)^{\\Gamma}_{B \\multimap C}= \\lambda x. 
N^{\\Gamma, x: B}_C$.\n\\item \\label{eqn: 7 canonical definition for eta long normal forms} If $M=P[yN\/x]$ and $\\Gamma=\\Delta, \\Sigma, y: B \\multimap C$, where $P$ has type $A$ with context $\\Delta, x:C$ and $N$ has type $B$ with context $\\Sigma$, then $(P[yN\/x])^{\\Gamma}_A=P^{\\Delta, x: C}_A[yN^\\Sigma _B\/x]$.\n\\item \\label{eqn: 6 canonical definition for eta long normal forms} If $M=P[yN\/x]$ and $\\Gamma= \\Gamma', y: \\forall \\alpha. B$ then we have $(P[yN\/x])^{\\Gamma', y: \\forall \\alpha.B}_A=(P[yN\/x])^{\\Gamma', y: B\\langle D\/\\alpha \\rangle}_A$, for some type $D$. \n\\end{enumerate}\n\\end{lem}\n\\begin{proof}\nJust follow the definition of $\\eta$-long normal form.\n\\end{proof}\n\n\\begin{defn} \\label{defn: stripping forall} Let $A$ be a type in $\\mathsf{IMLL}_2$. We define $A^-$ by induction on the complexity of the type:\n\\begin{align*}\n\\alpha^- &\\triangleq \\alpha \\\\\n(A \\multimap B)^- &\\triangleq A^- \\multimap B^-\\\\\n(\\forall \\alpha. A)^-&\\triangleq A^-\\langle \\gamma\/\\alpha \\rangle \\enspace, \n\\end{align*}\nwhere $\\gamma$ is taken from the head of an infinite list of fresh type variables. \nThe notation $A[B]$ denotes the type obtained by replacing $B$ for every free type variable of $A$. Moreover, if $\\Gamma= x_1:A_1, \\ldots, x_n:A_n$, then $\\Gamma^-$ stands for $x_1:A_1^-, \\ldots, x_n:A^-_n$, and $\\Gamma[B]$ stands for $x_1:A_1[B], \\ldots, x_n:A_n[B]$.\\qed\n\\end{defn}\n\\begin{defn}[The linear $\\lambda$-term $\\mathtt{sub}^s_{A}$]\\label{defn: sub} Let $s>0$. 
We define the linear $\\lambda$-terms $\\mathtt{sub}^s_{A}: A[\\mathbf{B}^s]\\multimap A^-[\\mathbf{B}^s]$, where $A $ is a $ \\Pi_1$-type, and $\\overline{\\mathtt{sub}}^s_{A}: A^-[\\mathbf{B}^s] \\multimap A[\\mathbf{B}^s]$, where $A $ is a $\\Sigma_1$-type, by simultaneous induction on the size of $A$:\n\\begin{align*}\n\t&\\mathtt{sub}^s_{\\alpha}\\triangleq \\lambda x.x \n\t&&\\overline{\\mathtt{sub}}^s_{\\alpha} \\triangleq \\lambda x.x \\\\\n\t&\\mathtt{sub}^s_{\\forall \\alpha. B} \\triangleq \\mathtt{sub}^s_{B} \n&& \\\\\n\t&\\mathtt{sub}^s_{B \\multimap C} \\triangleq \\lambda x.\\lambda y. \\mathtt{sub}^s_{C}(x\\, (\\overline{\\mathtt{sub}}^s_{B}\\,y)) \n\t&& \\overline{\\mathtt{sub}}^s_{B \\multimap C}\n\t \\triangleq \n\t \\lambda x.\\lambda \n\t y.\\overline{\\mathtt{sub}}^s_C(x\\,(\\mathtt{sub}^s_{B}\\,y)).\t\n \\qed\n\\end{align*}\n\\end{defn}\nThe following will be used to compact the proof of some of the coming lemmas.\n\\begin{defn}\\label{defn: sub substitution}\nLet $s>0$. Let $A$ be a $\\Pi_1$-type. Let $\\Gamma=x_1:A_1, \\ldots, x_n:A_n$ be a context of $\\Sigma_1$-types. 
Let $M$ be an inhabitant of \n$A[\\mathbf{B}^s]$ with context $\\Gamma[\\mathbf{B}^s]$.\nThen $M[\\Gamma]$ denotes the substitution:\n\\begin{align*}\n&M[\\overline{\\mathtt{sub}}^s_{A_1}\\,x'_1\/x_1, \\ldots,\\overline{\\mathtt{sub}}^s_{A_n}\\,x'_n\/x_n ]\n\\end{align*} \nfor some $x'_1, \\ldots, x'_n$.\n\\qed\n\\end{defn}\n\n\\begin{lem}\\label{lem: sub variable to eta long normal form} Let $s>0$ and $z$ be of type $A[\\mathbf{B}^s]$.\n\\begin{enumerate}[(1)]\n\\item \\label{eqn: 1 sub variable to eta long normal form} If $A$ is a $\\Pi_1$-type, then $\\mathtt{sub}^s_{A}\\, z \\rightarrow_\\beta^* z_{A}^A$.\n\\item \\label{eqn: 2 sub variable to eta long normal form} If $A$ is a $\\Sigma_1$-type, then $\\overline{\\mathtt{sub}}^s_A\\, z \\rightarrow_\\beta^* z^A_A$.\n\\end{enumerate}\n\\end{lem}\n\\begin{proof}\nWe prove both points by simultaneous induction on $\\vert A \\vert$:\n\\begin{enumerate}\n\\item Case $A= \\alpha$. Both the statements are straightforward since we have $z^\\alpha _\\alpha=z$ by Lemma~\\ref{lem: canonical definition for eta long normal forms}.\\ref{eqn: 1 canonical definition for eta long normal forms}.\n\\item Case $A= \\forall \\alpha. B$. This case applies to point~\\ref{eqn: 1 sub variable to eta long normal form} only. By induction hypothesis, for every variable $x$ of type $B[\\mathbf{B}^s]$, $\\mathtt{sub}^s_{B}\\,x \\rightarrow_\\beta^* x^B_B$. The $\\lambda$-term $\\mathtt{sub}^s_B$ has type $B[\\mathbf{B}^s]\\multimap B^-[\\mathbf{B}^s]$, which is equal to $(B\\langle \\mathbf{B}^s\/\\alpha\\rangle )[\\mathbf{B}^s]\\multimap (\\forall \\alpha. B)^-[\\mathbf{B}^s]$. Hence, $\\mathtt{sub}^s_B$ has also type $(\\forall \\alpha. B)[\\mathbf{B}^s]\\multimap (\\forall \\alpha. B)^-[\\mathbf{B}^s]$. Moreover, by Definition~\\ref{defn: sub} we have $\\mathtt{sub}^s_{B}= \\mathtt{sub}^s_{\\forall \\alpha. B}$. Therefore, for every variable $z$ of type $(\\forall \\alpha. B)[\\mathbf{B}^s]$ we have $\\mathtt{sub}^s_{\\forall \\alpha. 
B}\\, z= \\mathtt{sub}^s_B\\, z \\rightarrow_\\beta^* z^{B}_B$. But $z^{B}_B=z^{\\forall \\alpha. B}_{\\forall \\alpha. B}$ by Lemma~\\ref{lem: canonical definition for eta long normal forms}.\\ref{eqn: 2 canonical definition for eta long normal forms}.\n\\item Case $A= B \\multimap C$. We prove point~\\ref{eqn: 1 sub variable to eta long normal form} only (point~\\ref{eqn: 2 sub variable to eta long normal form} is similar). Let $z$ be of type $(B\\multimap C)[\\mathbf{B}^s]=B[\\mathbf{B}^s]\\multimap C[\\mathbf{B}^s]$. Then we have\n\\allowdisplaybreaks\n\\begin{align*}\n\\mathtt{sub}^s_{B \\multimap C}\\, z &=(\\lambda x. \\lambda y. \\mathtt{sub}^s_{C}(x(\\overline{\\mathtt{sub}}^s_{B}\\, y)))z &&\\text{Definition}~\\ref{defn: sub} \\\\\n&\\rightarrow_\\beta \\lambda y. \\mathtt{sub}^s_{C}(z(\\overline{\\mathtt{sub}}^s_{B}\\, y))&&\\\\ \n&=\\lambda y. ( \\mathtt{sub}^s_{C}\\, w)[z(\\overline{\\mathtt{sub}}^s_{B}\\, y)\/w] &&\\\\\n& \\rightarrow_\\beta^* \\lambda y. w^C_C [z (\\overline{\\mathtt{sub}}^s_{B}\\, y)\/w]&& \\text{induction hyp., point}~\\ref{eqn: 1 sub variable to eta long normal form}\\\\\n& \\rightarrow_\\beta^* \\lambda y. w^C_C [z y^B_B\/w]&& \\text{induction hyp., point}~\\ref{eqn: 2 sub variable to eta long normal form}\\\\\n&= \\lambda y. (zy^B_B)^C_C &&\\\\\n&= z^{B \\multimap C}_{B\\multimap C}&& \\text{Lemma}~\\ref{lem: canonical definition for eta long normal forms}.\\ref{eqn: 3 canonical definition for eta long normal forms}.\n\\end{align*}\n\\end{enumerate}\n\\end{proof}\n\\begin{lem}\\label{lem: sub preservation of eta long normal form} Let $s>0$. \nIf $z:A[\\mathbf{B}^s]$, where $A$ is a $\\Pi_1$-type, then $\\mathtt{sub}^s_A\\, z^A_A \\rightarrow_\\beta^* z^A_A $.\n\\end{lem}\n\\begin{proof}\n We prove it by induction on $\\vert A \\vert$:\n\\begin{enumerate}\n\\item Case $A= \\alpha$. 
The statement is straightforward since we have $z^\\alpha _\\alpha=z$ by Lemma~\\ref{lem: canonical definition for eta long normal forms}.\\ref{eqn: 1 canonical definition for eta long normal forms}\n\\item Case $A= \\forall \\alpha. B$. By Definition~\\ref{defn: sub}, $\\mathtt{sub}^s_{\\forall \\alpha. B}= \\mathtt{sub}^s_{B}$ and we use the induction hypothesis.\n\\item Case $A= B \\multimap C$. Then we have\n\\allowdisplaybreaks\n\\begin{align*}\n \\mathtt{sub}^s_{B \\multimap C}\\, z^{B \\multimap C}_{B\\multimap C}&= (\\lambda x. \\lambda y. \\mathtt{sub}^s_{C}(x(\\overline{\\mathtt{sub}}^s_{B}\\, y)))z^{B \\multimap C}_{B\\multimap C}&& \\text{Definition}~\\ref{defn: sub}\\\\\n &= (\\lambda x. \\lambda y. \\mathtt{sub}^s_{C}(x(\\overline{\\mathtt{sub}}^s_{B}\\, y)))( \\lambda w. (zw^{B}_B)^C_C)&&\\text{Lemma}~\\ref{lem: canonical definition for eta long normal forms}.\\ref{eqn: 3 canonical definition for eta long normal forms}\\\\\n &\\rightarrow_\\beta \\lambda y. \\mathtt{sub}^s_{C}((\\lambda w. (zw^{B}_B)^C_C)(\\overline{\\mathtt{sub}}^s_{B}\\, y))\\\\\n &\\rightarrow _\\beta \\lambda y. \\mathtt{sub}^s_{C} (z(\\overline{\\mathtt{sub}}^s_{B}\\, y)^{B}_B)^C_C \\\\\n &\\rightarrow_\\beta^* \\lambda y. \\mathtt{sub}^s_{C} (z y^{B}_B)^C_C&&\\text{Lemma}~\\ref{lem: sub variable to eta long normal form}.\\ref{eqn: 2 sub variable to eta long normal form}\\\\\n &= \\lambda y. (\\mathtt{sub}^s_{C}\\, w^{C}_C)[ z y^{B}_B\/w] \\\\ \n &\\rightarrow_\\beta^* \\lambda y. w^C_C[ z y^{B}_B\/w] &&\\text{induction hyp.} \\\\ \n &= \\lambda y. (zy^B_B)^C_C \\\\ \n &=_\\alpha z^{B\\multimap C}_{B \\multimap C}&&\\text{Lemma}~\\ref{lem: canonical definition for eta long normal forms}.\\ref{eqn: 3 canonical definition for eta long normal forms}\n\\end{align*} \n\\end{enumerate}\n\\end{proof}\n\\begin{lem}\\label{lem: sub} Let $s>0$. Let $A$ be a $\\Pi_1$-type, and let \n$\\Gamma=x_1:A_1, \\ldots, x_n:A_n$ be a context of $\\Sigma_1$-types. 
\nIf $\\Gamma[\\mathbf{B}^s]\\vdash M:A[\\mathbf{B}^s]$, with $M$ normal, then:\n\\begin{equation*}\n\\mathtt{sub}^s_{A}\\, M[\\Gamma]\\rightarrow_\\beta^* M^\\Gamma _A \\enspace .\n\\end{equation*}\n\\end{lem}\n\\begin{proof}\nLet $Q_{\\Gamma, A}$ be the number of universal quantifications in $A_1, \\ldots, A_n, A$. We prove the result by induction on $\\vert M \\vert+ Q_{\\Gamma, A}$. If $M=z$ then $\\Gamma= z:A$ and $\\mathtt{sub}^s_A\\, M[\\Gamma]= \\mathtt{sub}^s_{A}(\\overline{\\mathtt{sub}}^s_{A}\\, z)$. By point~\\ref{eqn: 2 sub variable to eta long normal form} of Lemma~\\ref{lem: sub variable to eta long normal form} and by Lemma~\\ref{lem: sub preservation of eta long normal form} we have $\\mathtt{sub}^s_A( \\overline{\\mathtt{sub}}^s_A \\, z) \\rightarrow_\\beta^* \\mathtt{sub}^s_A \\, z^A_A \\rightarrow_\\beta^* z^A_A$. If $M= \\lambda z.N$ then we have two cases depending on the type of $M$:\n\\begin{enumerate}\n\\item Case $A= \\forall \\alpha. B$. The $\\lambda$-term $\\mathtt{sub}^s_B$ has type $B[\\mathbf{B}^s]\\multimap B^-[\\mathbf{B}^s]$, which is equal to $(B\\langle \\mathbf{B}^s\/\\alpha\\rangle )[\\mathbf{B}^s] \\multimap (\\forall \\alpha. B)^-[\\mathbf{B}^s]$, so that $\\mathtt{sub}^s_B$ has also type $(\\forall \\alpha. B)[\\mathbf{B}^s]\\multimap (\\forall \\alpha. B)^-[\\mathbf{B}^s]$. By Definition~\\ref{defn: sub} we have $\\mathtt{sub}^s_{B}= \\mathtt{sub}^s_{\\forall \\alpha. B}$. By using the induction hypothesis, for every $M$ of type $(\\forall \\alpha. B)[\\mathbf{B}^s]$ with context $\\Gamma[\\mathbf{B}^s]$, we have $\\mathtt{sub}^s_{\\forall \\alpha. B}\\, M[\\Gamma]= \\mathtt{sub}^s_B\\, M[\\Gamma] \\rightarrow_\\beta^* M^{\\Gamma}_{B}$. Moreover, by Lemma~\\ref{lem: canonical definition for eta long normal forms}.\\ref{eqn: 4 canonical definition for eta long normal forms}, $M^{\\Gamma}_{B}= M^\\Gamma_{\\forall \\alpha. A}$.\n\\item Case $A= B \\multimap C$. 
Then we have:\n\\allowdisplaybreaks\n\\begin{align*}\n\\mathtt{sub}^s_{B \\multimap C}\\, M[\\Gamma]&=(\\lambda x.\\lambda y. \\mathtt{sub}^s_{C}(x\\, (\\overline{\\mathtt{sub}}^s_{B}\\,y)))(\\lambda z. N)[\\Gamma]&&\\text{Definition}~\\ref{defn: sub}\\\\\n&\\rightarrow_\\beta \\lambda y. \\mathtt{sub}^s_{C}((\\lambda z.N)[\\Gamma] \\, (\\overline{\\mathtt{sub}}^s_{B}\\,y))\\\\\n&\\rightarrow_\\beta \\lambda y. \\mathtt{sub}^s_{C}((N[\\Gamma])[\\overline{\\mathtt{sub}}^s_{B}\\,y\/z])\\\\\n&= \\lambda y. \\mathtt{sub}^s_{C}(N[\\Gamma, y:B] )&&\\text{Definition}~\\ref{defn: sub substitution}\\\\\n&\\rightarrow_\\beta^* \\lambda y.N^{\\Gamma,y: B}_C&&\\text{induction hyp.}\\\\\n&=(\\lambda y.N)^{\\Gamma}_{B \\multimap C}&&\\text{Lemma}~\\ref{lem: canonical definition for eta long normal forms}.\\ref{eqn: 5 canonical definition for eta long normal forms}\\\\\n&=_{\\alpha} M^\\Gamma_{B \\multimap C}.\n\\end{align*}\n\\end{enumerate}\nIf $M=P[zN\/w]$ then the type of $z$ cannot have an outermost universal quantification, because $\\Gamma$ is a context of $\\Sigma_1$-types. So $z$ has type of the form $B \\multimap C$ in $\\Gamma$. Let $\\Gamma'$ and $\\Gamma''$ be contexts such that $\\Gamma= \\Gamma', \\Gamma'', z: B \\multimap C$, $\\operatorname{dom}(\\Gamma')=FV(P)$, and $\\operatorname{dom}(\\Gamma'')=FV(N)$. 
Then we have:\n\\allowdisplaybreaks\n\\begin{align*}\n\\mathtt{sub}^s _{A}\\, M[\\Gamma]&= \\mathtt{sub}^s_{A}(P[zN\/w])[\\Gamma]\\\\\n&= \\mathtt{sub}^s_{A}(P[\\Gamma'] [(zN)[\\Gamma'', z: B \\multimap C]\/w])\\\\\n&=\\mathtt{sub}^s_A (P[\\Gamma'][(\\overline{\\mathtt{sub}}^s_{B \\multimap C}\\, z)(N[\\Gamma''])\/w])\\\\\t \n&\\rightarrow_\\beta^* \\mathtt{sub}^s_A (P[\\Gamma'][ \\overline{\\mathtt{sub}}^s_C(z \\,(\\mathtt{sub}^s_{B}\\,(N[\\Gamma''])))\/w])&&\\text{Definition}~\\ref{defn: sub}\\\\\t \n&\\rightarrow_\\beta^* \\mathtt{sub}^s_A (P[\\Gamma'][ \\overline{\\mathtt{sub}}^s_C(z N^{\\Gamma''}_B)\/w] )&&\\text{induction hyp.}\\\\\t\n&= (\\mathtt{sub}^s_A \\, P[\\Gamma'][\\overline{\\mathtt{sub}}^s_C\\, w\/w] ) [z N^{\\Gamma''}_B\/w]\\\\\t\n&= (\\mathtt{sub}^s_A \\, P[\\Gamma', w: C])[z N^{\\Gamma''}_B\/w]&&\\text{Definition}~\\ref{defn: sub substitution}\\\\\n&\\rightarrow_\\beta^* P^{\\Gamma', w: C}_A [z N^{\\Gamma''}_B\/w]&&\\text{induction hyp.}\\\\\t\n&=(P[zN\/w])^{\\Gamma}_A &&\\text{Lemma}~\\ref{lem: canonical definition for eta long normal forms}.\\ref{eqn: 7 canonical definition for eta long normal forms}\n\\end{align*}\n\\end{proof}\n\n\n\\subsection{The linear $\\lambda$-term $ \\mathtt{enc}^s_{A} $}\nOne missing ingredient in the previous subsection is the value of $ s $, which is fixed to some strictly positive integer. 
To determine $s$ we need the following property:\n\begin{lem} \label{lem: pi1type bound} For every cut-free derivation $\mathcal{D}\triangleleft \Gamma \vdash M: B$ in $\mathsf{IMLL}_2$ which does not contain applications of $\forall$L, the following inequalities hold:\n\begin{equation}\label{eqn: 1 inequation}\n\vert M \vert \leq \vert M^{\Gamma}_B \vert \leq \vert \Gamma^- \vert + \vert B^- \vert \leq 2 \cdot \vert M_B^{\Gamma} \vert \enspace ,\n\end{equation}\nwhere $(\_ )^-$ is as in Definition~\ref{defn: stripping forall}, and $M^\Gamma_B$ is as in Definition~\ref{defn: eta long nf}.\n\end{lem}\n\begin{proof}\nThe inequality $\vert M \vert \leq \vert M^{\Gamma}_B \vert$ holds by definition of $\eta$-long normal form. Now, let $\mathcal{D}^{\Gamma}_B$ be the $\eta$-expansion of $\mathcal{D}$, so that $\mathcal{D}^{\Gamma}_B \triangleleft \Gamma \vdash M^{\Gamma}_B: B$. We prove the remaining two inequalities by induction on $\mathcal{D}^{\Gamma}_B$. If it is an axiom then, by definition of $\eta$-expansion, it must be of the form $x: \alpha \vdash x:\alpha$, where $M^{\Gamma}_B=x$. Hence, $\vert x \vert \leq 2 \cdot \vert \alpha \vert \leq 2 \cdot \vert x \vert$. Both the rules $\multimap$R and $\multimap$L increase by one the overall size of the types in a judgment and of the corresponding term, so the inequalities still hold. Last, the rules for $\forall$ affect neither the size of $\Gamma^-$ and $B^-$ nor that of $M^{\Gamma}_B$.\n\end{proof}\n\noindent\nNotice that Lemma \ref{lem: pi1type bound} does not hold in general whenever $\mathcal{D}$ contains instances of the inference rule $\forall$L, since one can exploit $\forall$L to \enquote{compress} the size of a type. \\\nNow, consider a cut-free derivation $\mathcal{D}\triangleleft \vdash M:A$, where $A$ is a ground type. 
Since negative occurrences of $\forall$ are not allowed in $A$, $\mathcal{D}$ contains no application of $\forall$L and, by Lemma~\ref{lem: pi1type bound}, this implies that $\vert M \vert \leq \vert A^-\vert$. This bounds the number of variables of a generic inhabitant of $A$, so that the variables of $ M $ must belong to a fixed set $\lbrace \mathrm{x}_1, \ldots, \mathrm{x}_{\vert A^- \vert} \rbrace$. The next step is to show that we can encode every normal form as a tuple of booleans, i.e.~as an element of $ \mathbf{B}^s $ for a sufficiently large $ s $. Actually, we are interested in $\eta$-long normal forms only, due to the way the linear $\lambda$-term $\mathtt{sub}^s_{A}$ acts on inhabitants of $A$, as shown in the previous subsection. So, given a ground type $ A$, we can represent the $\eta$-long normal forms of type $A $ with tuples of type $ \mathbf{B}^{\mathcal{O}(\vert A^-\vert \, \cdot \, \log \vert A^-\vert)} $, since each such linear $\lambda$-term has at most $\vert A^- \vert$ symbols, each one encoded using around $\log \vert A^- \vert$ bits. By setting $s=c \cdot (\vert A^-\vert \, \cdot \, \log \vert A^- \vert)$ for some $c>0$ large enough, a coding function $\lceil \_ \rceil: \Lambda_s \longrightarrow \mathbf{B}^s$ exists, where $\Lambda_{s} $ is the set of all normal linear $\lambda$-terms having size bounded by $s$. The role of the $\lambda$-term $\mathtt{enc}^s_{A}$ is to internalize the coding function $\lceil \_ \rceil$ in $\mathsf{IMLL}_2$ as far as the $\eta$-long normal forms of a fixed type $A$ are concerned. \n\nThe coming Lemma~\ref{lem: existence abs app} relies on an \niterated selection mechanism, i.e.~a nested \texttt{if}-\texttt{then}-\texttt{else} construction. 
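Before turning to selection, the counting argument above can be made concrete with a toy computation. The sketch below is only illustrative (the function names `code_length` and `encode` are ours): at most $\vert A^- \vert$ symbols, each packed into about $\log_2 \vert A^- \vert$ bits, give a code of $\mathcal{O}(\vert A^-\vert \cdot \log \vert A^-\vert)$ booleans.

```python
import math

# Pack a sequence of symbol indices into a fixed-width tuple of
# booleans, as in the informal size estimate for s.

def code_length(size_a_minus):
    bits_per_symbol = max(1, math.ceil(math.log2(size_a_minus)))
    return size_a_minus * bits_per_symbol      # s = O(|A^-| log |A^-|)

def encode(symbols, size_a_minus):
    """Fixed-width little-endian packing, padded with False."""
    w = max(1, math.ceil(math.log2(size_a_minus)))
    bits = []
    for sym in symbols:
        bits.extend(bool((sym >> i) & 1) for i in range(w))
    bits.extend([False] * (code_length(size_a_minus) - len(bits)))
    return tuple(bits)

assert code_length(8) == 8 * 3
assert len(encode([3, 1, 2], 16)) == 16 * 4
```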
In order to define selection, we first need to extend the projection in~\eqref{eqn: boolean projection} (Section~\ref{sec: duplication and erasure in a linear setting}). \n\n\n\begin{defn}[Generalized projection]\label{defn: generalized projection}\nLet $A$ be a ground type. For all $k \geq 0$ and $\vec{m}=m_1, \ldots, m_ k \geq 0$, the linear $\lambda$-term $\pi^{\vec{m}}_1$ is defined below:\n\begin{equation*}\n\pi_1^{\vec{m} } \triangleq \begin{cases} \n\lambda z. \mathtt{let\ }z \mathtt{\ be\ }x,y \mathtt{\ in\ \n}(\mathtt{let\ } \mathtt{E}_{A}\,y \mathtt{\ be\ }I \mathtt{\ in\ }x) &\text{if }k=0\\ \n \lambda z. \mathtt{let\ }z \mathtt{\ be\ }x,y \mathtt{\ in\ }(\mathtt{let\ } \mathtt{E}_{A}\,(y\, \mathtt{tt}^{m_1} \ldots \mathtt{tt}^{m_k}) \mathtt{\ be\ }I \mathtt{\ in\ }x) &\text{if }k>0\n \end{cases} \n\end{equation*}\nwith type $B \otimes B \multimap B$, where $B \triangleq \mathbf{B}^{m_1} \multimap \ldots \multimap \mathbf{B}^{m_k} \multimap A$. When $k=0$ we simply write $\pi_1$ in place of $\pi_1^{\vec{m}}$, whose type is $A \otimes A \multimap A$.\n\end{defn}\n\n\begin{defn}[Generalized selection] \label{defn: generalized selection} Let $A$ be a ground type and let $M_{\mathtt{tt}^n}$, $ M_{\langle \mathtt{tt}^{n-1},\mathtt{ff}\rangle}$, \ldots, $M_{\langle \mathtt{tt},\mathtt{ff}^{n-1}\rangle}$, $M_{\mathtt{ff}^n}$ be (not necessarily distinct) normal inhabitants of $\mathbf{B}^{m_1} \multimap \ldots \multimap \mathbf{B}^{m_k} \multimap A$, for some $n \geq 1$, $k \geq 0 $, and $\vec{m}=m_1, \ldots, m_k \geq 0$. 
We define the linear $\\lambda$-term:\n\\begin{equation}\\label{eqn: generalized selection}\n\\mathtt{if \\ }x \\mathtt{\\ then \\ }[M_{\\mathtt{tt}^n}, M_{\\langle \\mathtt{tt}^{n-1}, \\mathtt{ff}\\rangle}, \\ldots, M_{\\langle \\mathtt{tt},\\mathtt{ff}^{n-1}\\rangle},M_{\\mathtt{ff}^n} ]^{\\vec{m} }\n\\end{equation}\nwith type $ \\mathbf{B}^{m_1} \\multimap \\ldots \\multimap \\mathbf{B}^{m_k} \\multimap A$ and context $x:\\mathbf{B}^n$ by induction on $n$:\n\\begin{itemize}\n\\item $n=1$: $ \\mathtt{if \\ }x \\mathtt{\\ then \\ }[M_{\\mathtt{tt}},M_{\\mathtt{ff}} ]^{\\vec{m} } \\triangleq \\pi_1^{\\vec{m}} (x\\, M_{\\mathtt{tt}} \\, M_{\\mathtt{ff}})$.\n\\item $n>1$: $ \\mathtt{if \\ }x \\mathtt{\\ then \\ }[M_{\\mathtt{tt}^n}, M_{\\langle \\mathtt{tt}^{n-1}, \\mathtt{ff}\\rangle}, \\ldots, M_{\\langle \\mathtt{tt},\\mathtt{ff}^{n-1}\\rangle},M_{\\mathtt{ff}^n} ]^{\\vec{m}} \\triangleq $\n\\begin{align*}\n&\\mathtt{ let \\ }x \\mathtt{ \\ be \\ } x_1, x_2 \\mathtt{\\ in\\ } (\\mathtt{if\\ }x_2 \\mathtt{\\ then \\ } \\\\\n& \\big[ (\\lambda y_1. \\mathtt{if \\ }y_1 \\mathtt{\\ then \\ } [P_{\\mathtt{tt}^{n-1}}, P_{\\langle \\mathtt{tt}^{n-2}, \\mathtt{ff}\\rangle}, \\ldots, P_{\\langle \\mathtt{tt},\\mathtt{ff}^{n-2}\\rangle},P_{\\mathtt{ff}^{n-1}}]^{\\vec{m}}),\\\\\n&\\phantom{\\big[}(\\lambda y_2. 
\\mathtt{if \\ }y_2 \\mathtt{\\ then \\ } [Q_{\\mathtt{tt}^{n-1}}, Q_{\\langle \\mathtt{tt}^{n-2}, \\mathtt{ff}\\rangle}, \\ldots, Q_{\\langle \\mathtt{tt},\\mathtt{ff}^{n-2 }\\rangle},Q_{\\mathtt{ff}^{n-1}}]^{\\vec{m} }) \\big]^{n-1, \\vec{m}}) \\, x_1 \n\\end{align*}\nwhere, $\\pi_1^{\\vec{m}}$ is as in Definition~\\ref{defn: generalized projection} and, for every $n$-tuple $\\langle \\mathtt{b}_1, \\ldots, \\mathtt{b}_n \\rangle$ of booleans, \n$ P_{\\langle \\mathtt{b}_1, \\ldots, \\mathtt{b}_n \\rangle}\\triangleq M_{\\langle \\langle \\mathtt{b}_1, \\ldots, \\mathtt{b}_n \\rangle, \\mathtt{tt} \\rangle}$, $\nQ_{\\langle \\mathtt{b}_1, \\ldots, \\mathtt{b}_n \\rangle}\\triangleq M_{\\langle \\langle \\mathtt{b}_1, \\ldots, \\mathtt{b}_n \\rangle,\\mathtt{ff} \\rangle}$.\n\\end{itemize}\nwhen $k=0$ we feel free of ruling out the apex $\\vec{m}$ in~\\eqref{eqn: generalized selection}.\n\\end{defn}\n\\begin{lem} Let $A$ be a ground type and let $M_{\\mathtt{tt}^n}$, $ M_{\\langle \\mathtt{tt}^{n-1},\\mathtt{ff}\\rangle}$, \\ldots, $M_{\\langle \\mathtt{tt},\\mathtt{ff}^{n-1}\\rangle}$, $M_{\\mathtt{ff}^n}$ be (not necessarily distinct) normal inhabitants of $\\mathbf{B}^{m_1} \\multimap \\ldots \\multimap \\mathbf{B}^{m_k} \\multimap A$, for some $n \\geq 1$, $k \\geq 0 $, and $\\vec{m}=m_1, \\ldots, m_k \\geq 0$. 
\nFor every $n$-tuple of booleans $\\langle \\mathtt{b}_1, \\ldots, \\mathtt{b}_n \\rangle$ it holds that:\n\\begin{equation*}\n\\mathtt{if \\ } \\langle \\mathtt{b}_1, \\ldots, \\mathtt{b}_n \\rangle \\mathtt{\\ then \\ }(M_{\\mathtt{tt}^n}, M_{\\langle \\mathtt{tt}^{n-1}, \\mathtt{ff}\\rangle}, \\ldots, M_{\\langle \\mathtt{tt},\\mathtt{ff}^{n-1}\\rangle},M_{\\mathtt{ff}^n} ) \\rightarrow_\\beta^* M_{\\langle \\mathtt{b}_1, \\ldots, \\mathtt{b}_n \\rangle}\\enspace .\n\\end{equation*}\n\\end{lem}\n\\begin{proof}\nStraightforward.\n\\end{proof}\nNotice that, if $n=1$ and $k=0$ in Definition~\\ref{defn: generalized selection}, we get the usual \\texttt{if}-\\texttt{then}-\\texttt{else} construction defined in~\\cite{gaboardi2009light} as:\n\\begin{equation}\\label{eqn: pi1m}\n\\mathtt{if}\\ x \\ \\mathtt{then}\\ M_1\\ \\mathtt{else}\\ M_2\n\\triangleq \\pi_1(x\\,M_1\\,M_2)\n\\end{equation}\nwith type $A$ and context $x: \\mathbf{B}$, where $\\pi_1: A \\otimes A \\multimap A$ is as in Definition~\\ref{defn: generalized projection}. Clearly, if $\\mathtt{b}_1\\triangleq\\mathtt{tt}$ and $\\mathtt{b}_2\\triangleq \\mathtt{ff}$, then $\\mathtt{if}\\ \\mathtt{b}_i\\ \\mathtt{then}\\ M_1\\ \\mathtt{else}\\ M_2 \n\\rightarrow_\\beta^* M_i$ for $i=1,2$.\n\nBefore defining the linear $\\lambda$-term $\\mathtt{enc}_A^s$ we need to encode $\\lambda$-abstractions and applications in $\\mathsf{IMLL}_2$.\n\n\\begin{lem} \\label{lem: existence abs app} Let $s>0$. The following statements hold:\n\\begin{enumerate}[(1)]\n\\item \\label{eqn: abs} A linear $\\lambda$-term $\\mathtt{abs}^s: \\mathbf{B}^s \\multimap \\mathbf{B}^{s}\\multimap \\mathbf{B}^s$ exists such that $\\mathtt{abs}\\lceil x \\rceil \\lceil M \\rceil \\rightarrow^*_\\beta \\lceil \\lambda x. 
M \\rceil$, if $\\vert \\lambda x .M \\vert \\leq s $ and $ x \\in \\lbrace \\mathrm{x}_1, \\ldots, \\mathrm{x}_{s} \\rbrace$.\n\\item\\label{eqn: app} A linear $\\lambda$-term $\\mathtt{app}^s: \\mathbf{B}^s \\multimap \\mathbf{B}^{s}\\multimap \\mathbf{B}^s$ exists such that $\\mathtt{app} \\lceil M \\rceil \\lceil N \\rceil \\rightarrow^* _\\beta \\lceil MN \n\\rceil $, if $\\vert MN \\vert \\leq s$.\n\\end{enumerate}\n\\end{lem}\n\\begin{proof}\nWe sketch the proof of Point~\\ref{eqn: abs} only, since Point~\\ref{eqn: app} is similar. Recall the notation in Definition~\\ref{defn: generalized selection}. We let $b_1, b_2, \\ldots$ range over boolean values, and we write $\\mathtt{b}$ for the encoding of the boolean value $b$ in $\\mathsf{IMLL}_2$. The linear $\\lambda$-term $\\mathtt{abs}$ is of the form:\n\\begin{equation*}\n\\lambda x. \\lambda y. (\\mathtt{if \\ }x \\mathtt{\\ then\\ } [ P_{\\mathtt{tt}^s}, P_{\\langle \\mathtt{tt}^{s-1}, \\mathtt{ff}\\rangle}, \\ldots, P_{\\langle \\mathtt{ff}, \\mathtt{tt}^{s-1}\\rangle},P_{\\mathtt{ff}^s}]^s)\\, y\n\\end{equation*}\nwhere, for every $s$-tuple of booleans $T=\\langle \\mathtt{b}_1, \\ldots, \\mathtt{b}_s \\rangle$, the linear $\\lambda$-term $P_{T}$ with type $\\mathbf{B}^s \\multimap \\mathbf{B}^s$ is as follows:\n\\begin{equation*}\n\\lambda y.\\mathtt{if \\ }y \\mathtt{\\ then\\ } [Q^T_{\\mathtt{tt}^s}, Q^T_{\\langle \\mathtt{tt}^{s-1}, \\mathtt{ff}\\rangle}, \\ldots, Q^T_{\\langle \\mathtt{ff}, \\mathtt{tt}^{s-1}\\rangle},Q^T_{\\mathtt{ff}^s}]\\enspace .\n\\end{equation*}\nFor every $T=\\langle \\mathtt{b}_1, \\ldots, \\mathtt{b}_s \\rangle$ and every $T'=\\langle \\mathtt{b}'_1, \\ldots, \\mathtt{b}'_s \\rangle$ we define:\n\\begin{equation*}\n Q^T_{T'}= \\begin{cases} \\lceil \\lambda x. 
M \\rceil &\\text{if } \\langle \\mathtt{b}_1, \\ldots, \\mathtt{b}_s \\rangle= \\lceil x\\rceil, \\ \\langle \\mathtt{b}'_1, \\ldots, \\mathtt{b}'_s \\rangle= \\lceil M \\rceil, \\\\ & \\text{and }\\vert \\lambda x. M \\vert \\leq s\\\\ \\\\\n\\langle \\mathtt{tt},\\overset{s}{ \\ldots}, \\mathtt{tt} \\rangle &\\text{otherwise}.\n\\end{cases}\n\\end{equation*}\n\\end{proof}\nThe $\\lambda$-term $\\mathtt{enc}^s_{A}$, given a value $V_A$ in $\\eta$-long normal form and of type $A$, combines the $\\lambda$-terms $\\mathtt{abs}^s$ and $\\mathtt{app}^s$ to construct its encoding.\n\\begin{defn}[The linear $\\lambda$-term $\\mathtt{enc}^s_{A}$] \\label{defn: enc} Let $s>0$. We define the linear $\\lambda$-terms $\\mathtt{enc}^s_{A}: A^-[\\mathbf{B}^s]\\multimap \\mathbf{B}^s$, where $A$ is a $\\Pi_1$-type, and $\\overline{\\mathtt{enc}}^s_A: \\mathbf{B}^s \\multimap A^-[\\mathbf{B}^s]$, where $A$ is a $\\Sigma_1$-type, by simultaneous induction on the size of $A$:\n \\begin{align*}\n& \\mathtt{enc}^s_{\\alpha} \\triangleq \\lambda z. z && \\mathtt{enc}^s_{B \\multimap C}\n \\triangleq \\lambda z. \\mathtt{abs}^s \\lceil x \\rceil\\, (\\mathtt{enc}^s_{C}\\, (z\\, (\\overline{\\mathtt{enc}}^s_{B}\\, \\lceil x \\rceil))) \\\\\n & \\overline{\\mathtt{enc}}^s_{\\alpha} \\triangleq \\lambda z. z\n && \\overline{\\mathtt{enc}}^s_{B \\multimap C}\\triangleq \\lambda z. \\lambda x. \\overline{\\mathtt{enc}}^s_{C}\\, (\\mathtt{app}^s z\\, (\\mathtt{enc}^s_{B}\\,x))\n \\end{align*}\n \\noindent\n with $x$ chosen fresh in $\\lbrace \\mathrm{x}_1, \\ldots, \\mathrm{x}_{s} \\rbrace$.\\qed\n \\end{defn}\n The following will be used to compact the proof of some of the coming lemmas.\n \\begin{defn} \nLet $s>0$, and let $A$ be a $\\Pi_1$-type and $\\Gamma=x_1:A_1, \\ldots, x_n:A_n$ be a context of $\\Sigma_1$-types. 
If $M$ is an inhabitant of type $A^-[\\mathbf{B}^s]$ with context $\\Gamma^-[\\mathbf{B}^s]$ then $M[\\Gamma]$ denotes the substitution:\n\\begin{equation*}\n M[\\overline{\\mathtt{enc}}^s_{A_1}\\,x'_1 \/x_1 ,\\ldots, \\overline{\\mathtt{enc}}^s_{A_n}\\, x'_n\/x_n] \n\\end{equation*} \nfor some $x'_1, \\ldots, x'_n$. \\qed\n\\end{defn} \nTo prove that $\\mathtt{enc}^s_{A}$ is able to encode a value $V_A$ of type $A$ we need an intermediate step. We first prove that $\\mathtt{enc}^s_{A}$ substitutes every $\\lambda$-abstraction in $V_{A}$ with an instance of $\\mathtt{abs}^s$, and every application with an instance of $\\mathtt{app}^s$, thus producing a \\enquote{precode}. Then we prove that, when every free variable in it has been substituted with its respective encoding, the precode reduces to $\\lceil V_{A}\\rceil$.\n\\begin{defn} Let $s>0$. If $M$ is a linear $\\lambda$-term in normal form such that $\\vert M \\vert \\leq s$, we define $M^s$ by induction on $\\vert M \\vert$:\n\\begin{enumerate}\n\\item $M=x$ if and only if $M^s=x$,\n\\item $M= \\lambda x. N$ if and only if $M^s= \\mathtt{abs}^s \\, \\lceil x' \\rceil \\, N^s[\\lceil x' \\rceil\/x] $,\n\\item $M=PQ$ if and only if $M^s= \\mathtt{app}^s \\, P^s \\, Q^s$,\n\\end{enumerate} \nwhere $x'$ is fresh, chosen in $\\lbrace \\mathrm{x}_1, \\ldots, \\mathrm{x}_{s} \\rbrace$. \\qed\n\\end{defn}\n\\begin{lem} \\label{lem: substitution lemma for s} Let $s>0$. If $M$ and $N$ are linear $\\lambda$-terms, then $M^s[N^s\/x]=(M[N\/x])^s$.\n\\end{lem}\n\\begin{proof}\nBy induction on $\\vert M \\vert$. If $M=x$ then $x^s[N^s\/x]= x[N^s\/x]= N^s= (x[N\/x])^s$. If $M=PQ$ then either $x$ occurs in $P$ or it occurs in $Q$, and let us consider the case $x \\in FV(P)$, the other case being similar: by using the induction hypothesis we have $(PQ)^s [N^s\/x]= \\mathtt{app}^s \\, P^s[N^s\/x] \\, Q^s= \\mathtt{app}^s (P[N\/x])^s \\, Q^s= (P[N\/x]Q)^s=((PQ)[N\/x])^s$. If $M= \\lambda y. P$ then we have that $(\\lambda y. 
P)^s [N^s \/x]$ $= \\mathtt{abs}^s \\, \\lceil y' \\rceil \\, P^s[N^s \/x][\\lceil y' \\rceil\/y]$ $= \\mathtt{abs}^s \\, \\lceil y' \\rceil \\, (P[N \/x])^s [\\lceil y' \\rceil \/y] $ $= (\\lambda y. P[N\/x])^s$ $= ((\\lambda y. P)[N\/x])^s$.\n\\end{proof}\n\\begin{lem}\\label{lem: precoding lemma} Let $s>0$. If $M$ is a linear $\\lambda$-term in normal form such that $\\vert M \\vert \\leq s$ with free variables $x_1, \\ldots, x_n$ then \n\\[M^s [ \\vec{\\lceil x' \\rceil}]\\rightarrow_\\beta^* \\lceil M[x'_1\/x_1, \\ldots, x'_n\/x_n] \\rceil\\]\nwhere $ \\vec{\\lceil x' \\rceil}=[\\lceil x'_1 \\rceil\/x_1, \\ldots, \\lceil x'_n \\rceil\/x_n]$ and $x'_1, \\ldots, x'_n$ are distinct and fresh in $\\lbrace \\mathrm{x}_1, \\ldots, \\mathrm{x}_{s} \\rbrace$.\n\\end{lem}\n\\begin{proof}\nBy induction on $\\vert M \\vert$. If $M=x$ then $\\exists i \\leq n$ $x_i=x$, so that \n$x^s [\\lceil x'_i \\rceil\/x]= x [\\lceil x'_i \\rceil\/x]=\\lceil x'_i\\rceil= \\lceil x[x'_i\/x] \\rceil$. If $M= \\lambda y. N$ then, using the induction hypothesis, we have: \n\\allowdisplaybreaks\n\\begin{align*}\n(\\lambda y. N)^s [ \\vec{\\lceil x' \\rceil}]&= (\\mathtt{abs}^s \\, \\lceil y' \\rceil \\, N^s [\\lceil y' \\rceil\/y]) [ \\vec{\\lceil x' \\rceil} ]\\\\\n&=\\mathtt{abs}^s \\, \\lceil y' \\rceil \\, N^s[ \\vec{\\lceil x' \\rceil}, \\lceil y' \\rceil \/y]\\\\\n&\\rightarrow_\\beta^* \\mathtt{abs}^s \\, \\lceil y' \\rceil \\, \\lceil N[x_1'\/x_1, \\ldots, x'_n\/x_n, y'\/y] \\rceil\\\\\n&\\rightarrow_\\beta^* \\lceil \\lambda y'. N[x_1'\/x_1, \\ldots, x'_n\/x_n, y'\/y] \\rceil&& \\text{Lemma~\\ref{lem: existence abs app}}\\\\\n&=_{\\alpha}\\lceil (\\lambda y. N)[x_1'\/x_1, \\ldots, x'_n\/x_n] \\rceil.\n\\end{align*}\nIf $M=PQ$ then let $y_1, \\ldots, y_m$ (resp. $z_1, \\ldots, z_k$) be the free variables of $P$ (resp. $Q$), and let $\\vec{x'}=y'_1, \\ldots, y'_m, z'_1, \\ldots, z'_k$. 
Then we have: \n\\allowdisplaybreaks\n\\begin{align*}\n&(PQ)^s [ \\vec{\\lceil x' \\rceil}]\\\\\n&= \\mathtt{app}^s \\, P^s [ \\vec{\\lceil y' \\rceil}] \\, Q^s [ \\vec{\\lceil z' \\rceil}] \\\\\n&\\rightarrow_\\beta^* \\mathtt{app}^s \\, \\lceil P[y_1'\/y_1, \\ldots, y'_m\/y_m] \\rceil \\, \\lceil Q[z_1'\/z_1, \\ldots, z'_k\/z_k] \\rceil \\\\\n&\\rightarrow_\\beta^* \\lceil P[y_1'\/y_1, \\ldots, y'_m\/y_m] Q[z_1'\/z_1, \\ldots, z'_k\/z_k] \\rceil && \\text{Lemma~\\ref{lem: existence abs app}}\\\\\n&= \\lceil (PQ)[x_1'\/x_1, \\ldots, x'_n\/x_n] \\rceil. \n\\end{align*}\n\\end{proof}\nIt is easy to check that if $M$ is an inhabitant of a $\\Pi_1$-type $A$ with context $\\Gamma=x_1:A_1, \\ldots, x_n:A_n$ of $\\Sigma_1$-types, then $M$ also has type $A^-[\\mathbf{B}^s]$ with context $\\Gamma^-[\\mathbf{B}^s]$.\n\\begin{lem}\\label{lem: enc} Let $M$ be an $\\eta$-long normal form of type $A$ with context $\\Gamma=x_1:A_1, \\ldots, x_n:A_n$, where $A$ is a $\\Pi_1$-type and $\\Gamma$ is a context of $\\Sigma_1$-types, and let $\\sum_{i=1}^n\\vert A_i^- \\vert+ \\vert A^- \\vert=k$ and $s=c \\cdot (k\\cdot \\log k)$, for some $c$ large enough. Then:\n\\begin{equation*}\n (\\mathtt{enc}^s_A\\, M[\\Gamma]) [\\vec{\\lceil x' \\rceil}] \\rightarrow_\\beta^*\\lceil M[x'_1\/x_1, \\ldots, x'_n\/x_n] \\rceil \\enspace , \n\\end{equation*}\n where $ \\vec{\\lceil x' \\rceil}=[\\lceil x'_1 \\rceil\/x_1, \\ldots, \\lceil x'_n \\rceil\/x_n]$, with $x'_1, \\ldots, x'_n$ distinct and chosen fresh in $\\lbrace \\mathrm{x}_1, \\ldots, \\mathrm{x}_{s} \\rbrace$.\n\\end{lem}\n\\begin{proof} By Lemma~\\ref{lem: precoding lemma} it suffices to prove by induction on $\\vert M \\vert $ that the reduction $\\mathtt{enc}^s_A\\, M[\\Gamma] \\rightarrow_\\beta^* M^s$ holds. 
If $M=x$ then $A= \\alpha$ and $\\Gamma= x:\\alpha$, because $M$ is in $\\eta$-long normal form, so that we have $\\mathtt{enc}_{\\alpha}^s \\, x[x: \\alpha]= \\mathtt{enc}^s_{\\alpha}(\\overline{\\mathtt{enc}}_\\alpha^s \\, x) \\rightarrow_\\beta^* x= x^s $. If $M= \\lambda y.N$ then $A= B \\multimap C$, so that:\n\\allowdisplaybreaks\n\\begin{align*}\n&\\mathtt{enc}_{B \\multimap C}^s ((\\lambda y.N)[\\Gamma]) \\\\\n&\\rightarrow_\\beta \\mathtt{abs}^s\\, \\lceil x' \\rceil (\\mathtt{enc}^s_C((\\lambda y. N[\\Gamma])(\\overline{\\mathtt{enc}}^s_B \\, \\lceil y' \\rceil))) &&\\text{Definition~\\ref{defn: enc}}\\\\\n&\\rightarrow_\\beta \\mathtt{abs}^s\\, \\lceil y' \\rceil (\\mathtt{enc}^s_C(N[\\Gamma][\\overline{\\mathtt{enc}}^s_B \\, \\lceil y' \\rceil\/y])) \\\\\n&= \\mathtt{abs}^s\\, \\lceil y' \\rceil (\\mathtt{enc}^s_C(N[\\Gamma][\\overline{\\mathtt{enc}}^s_B \\, x\/x])) [ \\lceil y' \\rceil \/y]\\\\\n&= \\mathtt{abs}^s\\, \\lceil y' \\rceil (\\mathtt{enc}^s_C(N[\\Gamma, y:B]))[ \\lceil y'\\rceil\/y]&&\\text{Definition~\\ref{defn: enc}}\\\\\n&\\rightarrow_\\beta^* \\mathtt{abs}^s\\, \\lceil y ' \\rceil (N^s [\\lceil y'\\rceil \/y]) && \\text{induction hyp.}\\\\\n&= (\\lambda y. N)^s .\n\\end{align*}\nLast, suppose $M=P[yN\/x]$, and let $\\Sigma$, $\\Delta$ be contexts such that $\\Gamma=\\Sigma, \\Delta, y: B \\multimap C$, $\\operatorname{dom}(\\Sigma)=FV(P)$, and $\\operatorname{dom}(\\Delta)=FV(N)$. 
Then we have:\n\\allowdisplaybreaks\n\\begin{align*}\n&\\mathtt{enc}^s_{A} (P[yN\/x])[\\Gamma]\\\\\n&= \\mathtt{enc}^s_{A} (P[\\Sigma][(yN)[\\Delta, y:B \\multimap C] \/x]) \\\\\n&= \\mathtt{enc}^s_{A} (P[\\Sigma][(\\overline{\\mathtt{enc}}^s_{B \\multimap C}\\, y ) N[\\Delta] \/x]) \\\\\n&\\rightarrow_\\beta^* \\mathtt{enc}^s_{A} (P[\\Sigma][ \\overline{\\mathtt{enc}}^s_{C}(\\mathtt{app}^s \\, y (\\mathtt{enc}^s_B \\, N[\\Delta])) \/x]) &&\\text{Definition~\\ref{defn: enc}}\\\\\n& \\rightarrow_\\beta^* \\mathtt{enc}^s_{A} (P[\\Sigma][ \\overline{\\mathtt{enc}}^s_{C}(\\mathtt{app}^s \\, y \\, N^s) \/x])&&\\text{induction hyp.}\\\\\n&= \\mathtt{enc}^s_{A} (P[\\Sigma][ \\overline{\\mathtt{enc}}^s_{C}(yN)^s \/x])\\\\\n&=\\mathtt{enc}^s_{A} (P[\\Sigma, x: C]) [ (yN)^s \/x]&&\\text{Definition~\\ref{defn: enc}}\\\\\n&\\rightarrow_\\beta^* P^s [ (yN)^s \/x] &&\\text{induction hyp.}\\\\\n&=(P[yN\/x])^s && \\text{Lemma~\\ref{lem: substitution lemma for s}}. \n\\end{align*}\n\\end{proof}\n\n \n\n\\subsection{The linear $\\lambda$-term $ \\mathtt{dec}^s_{A}$} \\label{subsection: dec and DBn}\nThe linear $ \\lambda $-term $ \\mathtt{dec}^s_{A} $ is the component of $\\mathtt{D}_{A}$ requiring type inhabitation. Roughly, it takes as input a tuple of boolean values encoding the $\\eta$-long normal form $V_{A}$ of a ground type $A$, and it produces the pair $\\langle V_{A}, V_{A}\\rangle$. To ensure that $ \\mathtt{dec}^s_{A} $ is defined on all possible inputs, it is built in such a way that it returns a default inhabitant of $A$ whenever the input tuple of booleans does not encode any $\\lambda$-term. \n\\begin{defn}[The linear $\\lambda$-term $\\mathtt{dec}^s_{A}$]\\label{defn: dec} Let $A$ be a ground type and let $U$ be a value of type $A$. 
If $s=c \\cdot (\\vert A^- \\vert \\cdot \\log \\vert A^- \\vert )$ for some $c$ large enough, then we define the linear $\\lambda$-term $\\mathtt{dec}^s_{A}: \\mathbf{B}^s \\multimap A\\otimes A$ as follows:\n\\begin{equation*}\n\\lambda x. \\mathtt{if \\ }x \\mathtt{\\ then\\ } [P_{\\mathtt{tt}^s}, P_{\\langle \\mathtt{tt}^{s-1}, \\mathtt{ff}\\rangle}, \\ldots, P_{\\langle \\mathtt{ff}, \\mathtt{tt}^{s-1}\\rangle},P_{\\mathtt{ff}^s}]\n\\end{equation*}\nwhere, for every $T=\\langle \\mathtt{b}_1, \\ldots, \\mathtt{b}_s \\rangle$ of type $\\mathbf{B}^s$:\n\\begin{equation*}\n P_{T}= \\begin{cases} \\langle V_{A}, V_{A} \\rangle &\\text{if }\\langle \\mathtt{b}_1, \\ldots, \\mathtt{b}_s \\rangle= \\lceil V_{A} \\rceil \\\\\n\\langle U, U \\rangle &\\text{otherwise}.\n\\end{cases}\n\\end{equation*}\n\\end{defn}\nWe are now able to prove the fundamental result of this section: \n\\begin{thm}[Duplication~\\cite{mairson2003computational}]\nEvery inhabited ground type is duplicable.\n\\end{thm}\n\\begin{proof}\n The duplicator $\\mathtt{D}_{A}$ of an inhabited ground type is defined as follows: we fix $s=c \\cdot (\\vert A^- \\vert \\cdot \\log \\vert A^- \\vert )$, we fix a default value $U$ of $A$ (see Definition~\\ref{defn: dec}), and we set:\n\\begin{equation*}\n\\mathtt{D}_{A}\\triangleq \\mathtt{dec}^s_{A}\\circ \\mathtt{enc}^s_{A}\\circ \\mathtt{sub}^s_{A}\n\\end{equation*}\nwhich has type $A \\multimap A \\otimes A$. By Lemma~\\ref{lem: sub}, Lemma~\\ref{lem: enc}, and Definition~\\ref{defn: dec} the conclusion follows. 
Moreover, for all values $V$ of type $A$, we have:\n\\begin{equation*}\n\\mathtt{D}_A\\, V \\rightarrow_{\\beta}^* \\langle V_A, V_A\\rangle \\rightarrow_{\\eta}^* \\langle V, V\\rangle \\enspace .\n\\end{equation*}\n\\end{proof}\n\\begin{rem}\\label{rem: duplicator} If $A$ is a ground type inhabited by the value $U$, we shall write $\\mathtt{D}_{A}^{U}$ to stress that the default inhabitant of $A$ used in constructing the duplicator $\\mathtt{D}_{A}$ of $A$ is $U$.\n\\end{rem}\n\n\\section{The system $\\mathsf{LEM}$}\\label{sec: the system shpos, with, oplus IMLL2 cut elimination and bound}\nTheorems~\\ref{thm: pi1 types are erasable} and~\\ref{thm: pi1 are duplicable} say that the ground types can be weakened and contracted in $\\mathsf{IMLL}_2$. Here we logically internalize these forms of weakening and contraction in the deductive system $\\mathsf{LEM}$ (Linearly Exponential and Multiplicative). It extends $\\mathsf{IMLL}_2$ \nwith inference rules for the modality \n``$\\shpos$'' that closely recall the exponential rules in Linear Logic.\n\n\\begin{defn}[Types of $\\mathsf{LEM}$]\n\\label{defn:Types for IMLL2shpos}\nLet $\\mathcal{X}$ be a denumerable set of type variables. The following grammar~\\eqref{eqn: grammar mathcalAe} generates the \\textit{exponential types}, while the grammar~\\eqref{eqn: grammar A} generates the \\textit{linear types}:\n\\begin{align}\n\\sigma := &\\ A \\ \\vert \\ \\shpos \\sigma \\label{eqn: grammar mathcalAe}\\\\\nA:= &\\ \\alpha \\ \\vert \\ \\sigma \\multimap A \\ \\vert \\ \\forall \\alpha.A \\label{eqn: grammar A} \n\\end{align}\nwhere $\\alpha \\in \\mathcal{X}$ and, in the last clause of the grammar~\\eqref{eqn: grammar mathcalAe}, i.e.~the one introducing $ \\shpos \\sigma $, \\emph{the type $ \\sigma$ must be closed and without negative occurrences of $\\forall$}. The set of all types generated by the grammar~(\\ref{eqn: grammar mathcalAe}) will be denoted $\\Theta_\\shpos$. 
A type is \\textit{strictly exponential} if it is of the form $\\shpos \\sigma$. A \\textit{strictly exponential context} is a context containing only strictly exponential types and, similarly, a \\textit{linear context} contains only linear types. Finally, $A\\langle B\/\\alpha \\rangle$ is the standard meta-level substitution of $B$ for every occurrence of $ \\alpha$ in $A$. \n\\qed\n\n\\end{defn}\n\\noindent\n\\begin{rem}\nThe modality ``$\\shpos$'' identifies where the ground types (Definition~\\ref{defn:Pi and Sigma types}) occur in the \ngrammars~(\\ref{eqn: grammar mathcalAe}) and~(\\ref{eqn: grammar A})\n because it applies to closed types that are free from negative occurrences of $\\forall$. \nSo, the occurrences of $\\shpos \\sigma$ identify where contraction and weakening rules can apply in the derivations of $\\mathsf{LEM}$.\n\\qed\n\\end{rem}\n\\noindent\nAlso, we observe that syntactically substituting the Linear Logic modality \\enquote{$\\oc$} for \n\\enquote{$\\shpos$} in~(\\ref{eqn: grammar mathcalAe}) and~(\\ref{eqn: grammar A}) yields a subset of Gaboardi\\&Ronchi's \\textit{essential types}~\\cite{gaboardi2009light}, introduced to prove Subject reduction in a variant of Soft Linear Logic. Essential types forbid occurrences of modalities in the right-hand side of an implication, such as in $A \\multimap \\oc B$.\n\n$\\mathsf{LEM}$ will be defined as a type-assignment system for the term calculus $\\Lambda_\\shpos$, which is essentially the standard linear $\\lambda$-calculus with explicit and type-dependent constructs for erasure and duplication, i.e.~$\\mathtt{discard}_\\sigma$ and $\\mathtt{copy}_\\sigma ^V$, the latter also being decorated with a value $V$. These new constructs are able to copy and discard values only, i.e.~closed and normal linear $\\lambda$-terms. \n\n\\begin{defn}[Terms and reduction of $\\mathsf{LEM} $] \n\\label{defn:lambda-terms and values of IMLLshpos2} \nLet $\\mathcal{V}$ be a denumerable set of variables. 
The \\textit{terms} of $\\mathsf{LEM}$ are given by the grammar:\n\\begin{align}\nM, N := \\ & x\\ \n \\vert\\ \\lambda x.M\\ \n \\vert\\ MN\\ \n \\vert\\ \\mathtt{discard}_{\\sigma}\\, M \\mathtt{\\ in\\ } N\\ \n \\vert\\ \\mathtt{copy}^{V}_{\\sigma}\\, M \\mathtt{\\ as\\ }x,y \n \\mathtt{\\ in\\ } N\n& \\label{eqn: term assignment shposIMLL2}\n\\end{align} \nwhere $x, y\\in \\mathcal{V}$, $ V $ is a value (Definition~\\ref{defn:Values among linear lambda-terms}, Section~\\ref{sec: background}), and $ \\sigma \\in \\Theta_\\shpos$. The set of all terms of $\\mathsf{LEM}$ will be denoted $\\Lambda_{\\shpos}$.\nThe set of free variables of a term and the notion of size are standard for variables, abstractions, and applications. \nThe extensions to the new constructs are:\n\\begin{align}\n\\nonumber\nFV(\\mathtt{ discard }_{\\sigma}\\, M\\mathtt{ \\ in \\ }N)= \n& \\ FV(M)\\cup FV(N)\n\\\\\n\\nonumber\nFV(\\mathtt{ copy}^{V}_{\\sigma}\\, M \\mathtt{ \\ as \\ }x, y \\mathtt{ \\ in \\ }N)= & \\ FV(M) \\cup (FV(N) \\setminus \\lbrace x, y \\rbrace)\n\\\\\n\\nonumber\n\\vert \\mathtt{ discard }_{\\sigma}\\, M\\mathtt{ \\ in \\ }N\\vert = & \\ \\vert M\\vert + \\vert N\\vert +1 \n\\\\\n\\label{align: size of copy}\n\\vert \\mathtt{ copy}^{V}_{\\sigma}\\, M \\mathtt{ \\ as \\ }x, y \\mathtt{ \\ in \\ }N\\vert = & \\ \\vert M\\vert + \\vert N \\vert + \\vert V \\vert + 1 \\enspace .\n\\end{align}\nA term $M$ in~\\eqref{eqn: term assignment shposIMLL2} is \\textit{linear} \nif all of its free variables occur exactly once in it and every proper sub-term \nof $M$ of the form $ \\lambda x.N $ (resp.~$ \\mathtt{copy}^{V}_{\\sigma}\\, M' \\mathtt{\\ as\\ }y,z \\mathtt{\\ in\\ } N $) is such that $ x$ (resp.~$y,z$) occurs in $ N $ and $ N $ is linear. \\textit{Henceforth, we use linear terms only.}\n \nThe notions of meta-level substitution and context are as usual.\n\n The \\textit{one-step reduction relation} $\\rightarrow$ is a binary relation on terms. 
It is defined by the reduction rules in Figure~\\ref{fig: term reduction rules} and by the commuting conversions in Figure~\\ref{fig: term commuting conversions}. It applies in any context. Its reflexive and transitive closure is denoted $\\rightarrow^*$. A term is said a (or is in) \\textit{normal form} if no reduction step applies to it. \n\\qed\n\\end{defn}\n\\begin{figure}[t]\n\\centering\n\\begin{equation}\\nonumber\n\\begin{aligned}\n\\scalebox{0.7}{$ (\\lambda x. M)N $}&\\scalebox{0.7}{$ \\ \\rightarrow \\ M[N\/x] $}\\\\\n\\scalebox{0.7}{$ \\mathtt{discard}_{\\sigma}\\, V \\mathtt{\\ in\\ }M $} & \\scalebox{0.7}{$ \\ \\rightarrow \\ M $}\\\\\n\\scalebox{0.7}{$ \\mathtt{copy}^{U}_{\\sigma}\\, V \\mathtt{\\ as\\ } y, z \\mathtt{\\ in\\ }M $} & \\scalebox{0.7}{$ \\ \n\t\\rightarrow \\ M[V\/y, V\/z] $}\n\\end{aligned}\n\\end{equation}\n\\caption{Reduction on terms.}\n\\label{fig: term reduction rules}\n\\end{figure}\n\n\\begin{figure}[t]\n\t\\centering\n\t\\begin{equation}\\nonumber\n\t\\begin{aligned}\n\t\\scalebox{0.7}{$(\\mathtt{discard}_{\\sigma}\\, M \\mathtt{\\ in\\ } N)P $}&\\scalebox{0.7}{$ \\ \\rightarrow\\ \\mathtt{discard}_{\\sigma}\\, M \\mathtt{\\ in\\ }(NP) $}\\\\\n \\scalebox{0.7}{$\\mathtt{discard}_{\\sigma}\\, (\\mathtt{discard}_{\\tau}\\, M \\mathtt{\\ in\\ } N) \\mathtt{\\ in \\ }P $}&\\scalebox{0.7}{$\\ \\rightarrow \\ \\mathtt{discard}_{\\tau}\\, M \\mathtt{\\ in\\ }(\\mathtt{discard}_{\\sigma}\\, N \\mathtt{\\ in\\ }P)$}\\\\\n\t\\scalebox{0.7}{$\\mathtt{copy}^{V}_{\\sigma}\\, (\\mathtt{discard}_{\\tau}\\, M \\mathtt{\\ in\\ } N)\\mathtt{\\ as\\ }y,z \\mathtt{\\ in\\ }P $}& \\scalebox{0.7}{$\\ \\rightarrow \\ \\mathtt{discard}_{\\tau}\\, M \\mathtt{\\ in\\ }(\\mathtt{copy}^{V}_{\\sigma}\\, N \\mathtt{\\ as\\ }y,z \\mathtt{\\ in\\ }P)$}\n\t\\\\\n\t\\scalebox{0.7}{$(\\mathtt{copy}^{V}_{\\sigma}\\, M \\mathtt{\\ as\\ }y,z \\mathtt{\\ in\\ }N)P $}&\\scalebox{0.7}{$\\ \\rightarrow \\ \\mathtt{copy}^{V}_{\\sigma}\\, M \\mathtt{\\ as\\ }y,z 
\\mathtt{\\ in\\ }(NP) $}\\\\\n\t\\scalebox{0.7}{$\\mathtt{discard}_{\\sigma}\\, (\\mathtt{copy}^{V}_{\\tau}\\, M \\mathtt{\\ as\\ }y,z \\mathtt{\\ in\\ }N) \\mathtt{\\ in \\ }P $}&\\scalebox{0.7}{$\\ \\rightarrow \\ \\mathtt{copy}^{V}_{\\tau}\\, M \\mathtt{\\ as\\ }y,z \\mathtt{\\ in\\ }(\\mathtt{discard}_{\\sigma}\\, N \\mathtt{\\ in\\ }P)$}\\\\\n\t\\scalebox{0.7}{$\\mathtt{copy}^{U}_{\\sigma}\\, (\\mathtt{copy}^{V}_{\\tau}\\, M \\mathtt{\\ as\\ }y,z \\mathtt{\\ in\\ }N)\\mathtt{\\ as\\ }y',z' \\mathtt{\\ in\\ }P $}& \\scalebox{0.7}{$\\ \\rightarrow \\ \\mathtt{copy}^{V}_{\\tau}\\, M \\mathtt{\\ as\\ } y,z \\mathtt{\\ in\\ }(\\mathtt{copy}^{U}_{\\sigma}\\, N \\mathtt{\\ as\\ }y',z' \\mathtt{\\ in\\ }P)$}\n\t\\end{aligned}\n\t\\end{equation}\n\t\\caption{Commuting conversions on terms.}\n\t\\label{fig: term commuting conversions}\n\\end{figure}\n\\noindent\n\n\\begin{figure}[htbp]\n\\begin{mathpar}\n\\scalebox{0.8}{$\n\\inferrule*[Right= $ax$]{\\\\}{x: A \\vdash x:A}$} \\and\n\\scalebox{0.8}{$\\inferrule*[Right= $cut$]{\\Gamma \\vdash N: \\sigma \\\\ \\Delta, x: \\sigma \\vdash M:\\tau}{\\Gamma, \\Delta \\vdash M[N\/x]: \\tau}$} \\\\\n\\scalebox{0.8}{$\\inferrule*[Right= $\\multimap$R]{\\Gamma, x:\\sigma \\vdash M:B}{\\Gamma \\vdash \\lambda x. M : \\sigma \\multimap B }$} \\and\n\\scalebox{0.8}{$\\inferrule*[Right= $\\multimap$L]{\\Gamma \\vdash N: \\sigma \\\\ \\Delta, x:B \\vdash M: \\tau}{\\Gamma, \\Delta, y: \\sigma \\multimap B \\vdash M[yN\/x]: \\tau}$}\\\\\n\\scalebox{0.8}{$\\inferrule*[Right=$\\forall$R]{\\Gamma \\vdash M: A \\langle \\gamma \/ \\alpha \n\\rangle\\\\ \\gamma \\not \\in FV(\\operatorname{rng}(\\Gamma))}{\\Gamma \\vdash M: \\forall \\alpha.A}$} \\and\n\\scalebox{0.8}{$\\inferrule*[Right= $\\forall$L]{ \\Gamma, x: A\\langle B\/ \\alpha \\rangle \\vdash M:\\tau}{\\Gamma, x: \\forall \\alpha. 
A \\vdash M:\\tau}$}\\\\\n\\scalebox{0.8}{$\\inferrule*[Right=$p$]{x_1:\\shpos \\sigma_1, \\ldots, x_n:\\shpos \\sigma_n \\vdash M:\\sigma }{x_1: \\shpos \\sigma_1, \\ldots, x_n: \\shpos \\sigma_n \\vdash M: \\shpos \\sigma }$} \\and\n\\scalebox{0.8}{$\\inferrule*[Right=$d$]{\\Gamma, x: \\sigma \\vdash M:\\tau}{\\Gamma, y: \\shpos \\sigma \\vdash M[y\/x]:\\tau}$}\\\\\n\\scalebox{0.8}{$\\inferrule*[Right=$w$]{\\Gamma \\vdash M:\\tau}{\\Gamma, x: \\shpos \\sigma \\vdash \\mathtt{ discard}_{\\sigma}\\, x\\mathtt{ \\ in \\ }M:\\tau}$} \\and \n\\scalebox{0.8}{$\\inferrule*[Right=$c$]{\\Gamma, y: \\shpos \\sigma, z: \\shpos \\sigma \\vdash M:\\tau \\\\ \\vdash V: \\sigma}{\\Gamma, x: \\shpos \\sigma \\vdash \\mathtt{ copy} ^{V}_{\\sigma}\\, x \\mathtt{ \\ as \\ } y,z \\mathtt{ \\ in \\ }M:\\tau}$}\n\\end{mathpar}\n\\caption{The system $\\mathsf{LEM}$.}\n\\label{fig: the system shpos add IMLL2} \n\\end{figure}\n\\noindent\nBoth the type and the term annotations in the constructs $\\mathtt{discard}_\\sigma$ and $\\mathtt{copy}^V_\\sigma$ will become meaningful once we introduce the type-assignment system. The value $V$ will be an inhabitant of $\\sigma$, a necessary condition in order to faithfully express the mechanism of linear duplication.\n\nThe structure of the types in Definition~\\ref{defn:Types for IMLL2shpos} drives the definition of $ \\mathsf{LEM} $.\n\\begin{defn}[The system $\\mathsf{LEM} $]\n\\label{defn:The type assigment IMLLshpos2}\nIt is the type-assignment system for the term calculus $\\Lambda_\\shpos$ (Definition~\\ref{defn:lambda-terms and values of IMLLshpos2}) in Figure~\\ref{fig: the system shpos add IMLL2}. 
\nIt extends $\\mathsf{IMLL}_2$ with the rules \\emph{promotion} $p$, \\emph{dereliction} $d$, \\emph{weakening} $w$, and \\emph{contraction} $c$.\nAs usual, $\\multimap$R, $\\forall$R, and $p$ are right rules\nwhile\n$\\multimap$L, $\\forall$L, $d$, $w$, and $c$ are left ones.\n\\qed\n\\end{defn}\n\\noindent\nFirst, we observe that $ax$ cannot introduce exponential types, as\nin the \\emph{essential types} of the type systems in \\cite{gaboardi2009light}. This is the basis for proving:\n\\begin{prop}[Exponential context from exponential conclusion]\n\\label{prop: exponential context} \nIf $\\mathcal{D} \\triangleleft \\Gamma \\vdash M: \\shpos \\sigma$ is a derivation in \n$\\mathsf{LEM}$, then $\\Gamma$ is a strictly exponential context.\n\\end{prop}\n\\begin{proof}\n\tBy structural induction on the derivation $\\mathcal{D}$.\n\\end{proof}\n\\noindent\nAlso, we observe that $p$, $d$, $w$, $c$ of $\\mathsf{LEM}$ are reminiscent of the namesake Linear Logic exponential rules, but they only apply to types $ \\shpos\\sigma $ that~\\eqref{eqn: grammar mathcalAe} (Definition~\\ref{defn:Types for IMLL2shpos}) generates, i.e.~closed types with no negative occurrences of universal quantifiers. \nThe rule $c$ has one more premise than the contraction rule in Linear Logic. This premise \\enquote{witnesses} that the type $\\sigma$ we want to contract is inhabited by at least one value $V$. This is because $ c $ expresses the mechanism of linear contraction discussed in the previous section for $\\mathsf{IMLL}_2$.\nAs we shall see in Theorem~\\ref{thm: translations for LAML}, the term $\\mathtt{copy}^V_\\sigma$ that $c$ introduces is a (very) compact notation for duplicators of ground types, \nwhose detailed description is in~\\ref{sec: the d-soundness theorem DICE}. 
Roughly, the duplicator of a ground type $A$ is a linear $\\lambda$-term that, taking a value $U$ of type $A$ as argument, implements the following three main operations:\n\\begin{enumerate}\n\t\\item \\label{enumerate: duplication step 1}\n\texpand $ U $ to its $ \\eta $-long normal form \n\t$ U_{A} $ \n\twhose dimension is bounded by the dimension of the type $A$. \n\tThis is why using a modality to identify types for which \n\tthis can be done is so important;\n\t\n\t\\item \\label{enumerate: duplication step 2}\n compile $ U_{A} $ to a linear $ \\lambda $-term\n $ \\lceil U_{A} \\rceil $ which encodes $ U_{A} $ as \n a boolean tuple;\n\t\n\t\\item \\label{enumerate: duplication step 3}\n copy and decode $ \\lceil U_{A} \\rceil $, obtaining the duplication \n $ \\langle U_{A}, U_{A} \\rangle $ of $ U_{A} $ as final result. This duplication \n is performed by means of the term $\\mathtt{dec}^s_{A}$ in~\\ref{subsection: dec and DBn}. It nests a series of \\texttt{if}-\\texttt{then}-\\texttt{else} constructs which acts as a possibly quite big look-up table, storing all the pairs of normal inhabitants of $A$.\n Each of them represents a possible outcome of the duplication.\n Given a boolean tuple $ \\lceil U_{A} \\rceil $ as input, the nested \\texttt{if}-\\texttt{then}-\\texttt{else} selects the corresponding pair \n $ \\langle U_{A}, U_{A} \\rangle $, erasing all the remaining \\enquote{candidates}. The inhabitation condition for $A$ stated in Theorem~\\ref{thm: pi1 are duplicable} ensures that the default pair $ \\langle V, V \\rangle $ exists as a sort of \\enquote{exception}. We ``throw'' it each time the boolean tuple that $\\mathtt{dec}^s_{A}$ receives as input does not encode any term.\n\\end{enumerate}\n\\noindent\nPoint~\\ref{enumerate: duplication step 3} is the one implementing Mairson\\&Terui's \\enquote{\\emph{duplication by selection and erasure}} discussed in Section~\\ref{sec: duplication and erasure in a linear setting}. 
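The selection-and-erasure behaviour of Point~\ref{enumerate: duplication step 3} can be sketched in ordinary Python. Everything in the sketch (the width \texttt{S}, the table of codes, the names \texttt{dec} and \texttt{duplicate}) is an illustrative assumption and not part of the calculus; the only point is that the decoder is a total, finite look-up table with a default pair.

```python
# Illustrative sketch (assumed names, not part of the calculus):
# "duplication by selection and erasure" in the style of dec^s_A.
# A normal form is represented by a fixed-width tuple of booleans;
# the decoder is a finite look-up table mapping the code of every
# normal inhabitant to the pair of that inhabitant with itself.
# Codes that encode no term fall back to the default pair <U, U>,
# mirroring the inhabitation condition of the Duplication theorem.

S = 3  # width s of the boolean tuples (tiny, for the example only)

# Hypothetical codes of the normal inhabitants of some ground type A.
TABLE = {
    (True, True, True): "V1",
    (True, False, True): "V2",
}

DEFAULT = "U"  # a fixed default inhabitant of A


def dec(code):
    """Select the pair <V, V> for the inhabitant encoded by `code`,
    erasing all the other candidate pairs; unknown codes yield <U, U>."""
    v = TABLE.get(tuple(code), DEFAULT)
    return (v, v)


def duplicate(value):
    """enc followed by dec: look up the boolean code of `value`,
    then select the corresponding pair from the table."""
    code = next((c for c, v in TABLE.items() if v == value), (False,) * S)
    return dec(code)
```

For instance, \texttt{duplicate("V2")} yields \texttt{("V2", "V2")}, while a tuple outside the table, e.g.\ \texttt{dec((False, False, False))}, yields the default pair \texttt{("U", "U")}.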
This selection step involves the component of the duplicator that is exponential in the size of $A$. Therefore, as we shall see in Theorem~\\ref{thm: exponential compression}, the construct $\\mathtt{copy}_\\sigma^V$ exponentially compresses the linear duplication mechanism encoded in a duplicator.\n\nWe conclude this section by commenting on how \\enquote{$ \\shpos $} and \\textsf{LL}'s \\enquote{$\\oc$} differ. Intuitively, the latter allows one to duplicate or erase logical structure, or terms, \\emph{at once}, which is the standard way to computationally interpret contraction and weakening of a logical system.\nThe modality \\enquote{$ \\shpos $} identifies duplication and erasure processes with a more \\emph{constructive} nature. The duplication proceeds step-by-step among a whole set of possible choices, ruling out those that cannot contain the copies of the term we are interested in duplicating, until it eventually finds what it is searching for. Then, it exploits erasure. Erasing means eroding step-by-step a derivation or a term, according to the type that drives its construction.\n\n\\section{Translation of $\\mathsf{LEM}$ into $\\mathsf{IMLL}_2$ and exponential compression}\\label{sec: the expressiveness of the system}\nThe system $\\mathsf{LEM}$ provides a logical status to copying \nand erasing operations that exist in $\\mathsf{IMLL}_2$. In what follows, we \nshow that a translation $(\\_)^\\bullet$ from $\\mathsf{LEM}$ into $\\mathsf{IMLL}_2$ exists \nthat \\enquote{unpacks} both the constructs $\\mathtt{discard}_{\\sigma}$ and \n$\\mathtt{copy}_{\\sigma}^{V}$ by turning them into, respectively, an eraser \nand a duplicator of ground types. 
Then, we show that the reduction steps in \nFigure~\\ref{fig: term reduction rules} and the commuting conversions in \nFigure~\\ref{fig: term commuting conversions} can be simulated by the \n$\\beta \\eta$-reduction of the linear $\\lambda$-calculus, as long as we \nrestrict to terms of $\\Lambda_\\shpos$ typable in $\\mathsf{LEM}$. \nLast, we discuss the complexity of the translation, and we prove that \nevery term typable in $\\mathsf{LEM}$ is mapped to a linear \n$\\lambda$-term whose size can be exponential in that of the original \nterm.\n\n We start by defining the translation from $\\mathsf{LEM}$ to $\\mathsf{IMLL}_2$.\n\\begin{defn} [From $ \\mathsf{LEM} $ to $ \\mathsf{IMLL}_2 $]\n\\label{defn: translation IMLL2 shpos with oplus into IMLL2} \nThe map \n$(\\_)^\\bullet: \\mathsf{LEM} \\longrightarrow\\mathsf{IMLL}_2$ \ntranslates a derivation \n$\\mathcal{D} \\triangleleft\\Gamma \\vdash_{\\mathsf{LEM}} M: \\tau $ into a derivation $\\mathcal{D}^\\bullet \\triangleleft \\Gamma^\\bullet \\vdash_{\\mathsf{IMLL}_2} M^\\bullet :\\tau^\\bullet$:\n\\begin{enumerate}\n\\item For all types $\\sigma \\in \\Theta_\\shpos$:\n\\begin{align*}\n\\alpha^\\bullet &\\triangleq \\alpha\\\\\n(\\tau \\multimap A)^\\bullet &\\triangleq \\tau^\\bullet \\multimap A^\\bullet \\\\\n(\\forall \\alpha . A)^\\bullet &\\triangleq \\forall \\alpha. A^\\bullet \\\\\n(\\shpos \\tau)^\\bullet & \\triangleq \\tau^\\bullet \\enspace .\n\\end{align*}\n\\item For all contexts $\\Gamma= x_1: \\sigma_1, \\ldots, x_n: \\sigma_n$, we set $\\Gamma^\\bullet\\triangleq x_1: \\sigma_1 ^\\bullet, \\ldots, x_n: \\sigma_n^\\bullet$;\n\\item\n\\label{enum: translation on terms} \nFor all typable terms $ M\\in \\Lambda_\\shpos$:\n\\begin{align*}\nx^\\bullet &\\triangleq x\n\\\\\n(\\lambda x. P )^\\bullet &\\triangleq \\lambda x.\n
P^\\bullet \n\\\\\n(PQ)^\\bullet &\\triangleq P^\\bullet Q^\\bullet\n\\\\\n(\\mathtt{discard}_{\\sigma}\\, P \\mathtt{\\ in\\ }Q)^\\bullet\n&\\triangleq\n \\mathtt{let\\ } \\mathtt{E}_{\\sigma^\\bullet}\\, P^\\bullet \\mathtt{\\ be\\ }I \\mathtt{\\ in\\ }Q^\\bullet \n\\\\\n(\\mathtt{copy}^{V}_{\\sigma}\\, P \\mathtt{\\ as\\ }x_1,x_2 \\mathtt{\\ in\\ \n}Q)^\\bullet &\\triangleq \\mathtt{let\\ }\\mathtt{D}^{V^\\bullet} _{\\sigma^\\bullet}\\, P^\\bullet \n\\mathtt{\\ be\\ } x_1,x_2 \\mathtt{\\ in \\ }Q^\\bullet \\enspace ,\n\\end{align*}\nwhere $\\mathtt{E}_{\\sigma^\\bullet}$ and $\\mathtt{D}^{V^\\bullet} _{\\sigma^\\bullet}$ (see Remark~\\ref{rem: duplicator} in~\\ref{sec: the d-soundness theorem DICE}) are the eraser and the duplicator of $\\sigma^\\bullet$ which is both\nground, because $\\sigma$ is closed and with no negative occurrences of $\\forall$, \nand inhabited by $V^\\bullet$. \n\n\\item \nThe definition of $ (\\_)^\\bullet $ extends to any derivation \n$\\mathcal{D}\\triangleleft\\Gamma\\vdash M:\\sigma$ of $\\mathsf{LEM}$\nin the obvious way, following\nthe structure of $ M^{\\bullet} $. 
\nFigure \\ref{fig: translation inference rules} collects the most interesting cases.\n\\qed\n\\end{enumerate} \n\\end{defn}\n\n\\begin{rem}\nBoth $\\mathtt{E}_{\\sigma^\\bullet}$ and \n$\\mathtt{D}^{V^\\bullet} _{\\sigma^\\bullet}$ in \npoint~\\ref{enum: translation on terms} of \nDefinition~\\ref{defn: translation IMLL2 shpos with oplus into IMLL2} here above exist by Theorem~\\ref{thm: pi1 types are erasable} and Theorem~\\ref{thm: pi1 are duplicable}.\n\\qed\n\\end{rem}\n\n\\begin{figure}[t]\n\\centering\n\\begin{equation}\n\\begin{aligned}\\nonumber\n\\left(\n\\scalebox{0.6}{\n\\AxiomC{$\\mathcal{D}$}\n\\noLine\n\\UnaryInfC{$x_1: \\shpos \\sigma_1, \\ldots, x_n: \\shpos \\sigma_n \\vdash M: \\sigma$}\n\\RightLabel{$p$}\n\\UnaryInfC{$x_1: \\shpos \\sigma_1, \\ldots, x_n: \\shpos \\sigma_n \\vdash M: \\shpos \\sigma$}\n\\DisplayProof } \n\\right)^\\bullet\n\\triangleq &\n\\left(\n\\scalebox{0.6}{\n\\AxiomC{$\\mathcal{D}$}\n\\noLine\n\\UnaryInfC{$x_1: \\shpos \\sigma_1, \\ldots, x_n: \\shpos \\sigma_n \\vdash M :\\sigma$}\n\\DisplayProof } \n\\right)^\\bullet\n\\\\\n\\\\\n\\left(\n\\scalebox{0.6}{\n\\AxiomC{$\\mathcal{D}$}\n\\noLine\n\\UnaryInfC{$\\Gamma, x: \\sigma \\vdash M:\\tau$}\n\\RightLabel{$d$}\n\\UnaryInfC{$\\Gamma, y: \\shpos \\sigma \\vdash M[y\/x]: \\tau$}\n\\DisplayProof} \n\\right)^\\bullet\n\\triangleq &\n\\scalebox{0.6}{\n\\AxiomC{}\n\\RightLabel{ax}\n\\UnaryInfC{$ y:\\sigma^\\bullet \\vdash y: \\sigma^\\bullet$}\n\\def3pt{0pt}\n\\AxiomC{$\\left( \\begin{aligned}\\begin{gathered} \\mathcal{D} \\\\ \\Gamma, x: \\sigma \\vdash M: \\tau \\end{gathered}\\end{aligned}\\right)^\\bullet$}\n\\noLine\n\\UnaryInfC{\\hspace{2.5cm}}\n\\def3pt{3pt}\n\\RightLabel{$cut$}\n\\BinaryInfC{$\\Gamma^\\bullet, y: \\sigma^\\bullet \\vdash M^\\bullet [y\/x]:\\tau^\\bullet$}\n\\DisplayProof} \n\\\\\n\\\\\n\\left(\n\\scalebox{0.6}{\n\\AxiomC{$\\mathcal{D}$}\n\\noLine\n\\UnaryInfC{$\\Gamma \\vdash M:\\tau$}\n\\RightLabel{$w$}\n\\UnaryInfC{$\\Gamma , x: \\shpos \\sigma 
\\vdash \\mathtt{discard }_{\\sigma}\\, x \\mathtt{\\ in\\ }M:\\tau$}\n\\DisplayProof}\n\\right)^\\bullet\n\\triangleq&\n\\scalebox{0.6}{\n\\AxiomC{$\\vdots$}\n\\noLine\n\\UnaryInfC{$x: \\sigma^\\bullet \\vdash \\mathtt{E}_{\\sigma^\\bullet}\\, x: \\mathbf{1}$}\n\\AxiomC{$ \\left( \\begin{aligned}\\begin{gathered} \\mathcal{D} \\\\ \\Gamma \\vdash M :\\tau \\end{gathered}\\end{aligned} \\right)^\\bullet$}\n\\noLine\n\\UnaryInfC{\\vdots}\n\\noLine\n\\UnaryInfC{$\\Gamma^\\bullet, y: \\mathbf{1} \\vdash \\mathtt{let\\ }y \\mathtt{\\ be\\ }I \\mathtt{\\ in\\ }M^\\bullet :\\tau^\\bullet$}\n\\RightLabel{$cut$}\n\\BinaryInfC{$\\Gamma^\\bullet, x:\\sigma^\\bullet \\vdash \\mathtt{let\\ }\\mathtt{E}_{\\sigma^\\bullet}\\, x \\mathtt{\\ be\\ }I \\mathtt{\\ in\\ }M^\\bullet :\\tau^\\bullet$}\n\\DisplayProof}\n\\\\\n\\\\\n\\left( \n\\scalebox{0.6}{\n\\AxiomC{$\\mathcal{D}_1$}\n\\noLine\n\\UnaryInfC{$\\Gamma, x_1: \\shpos \\sigma, x_2: \\shpos \\sigma \\vdash M: \\tau$}\n\\AxiomC{$\\mathcal{D}_2$}\n\\noLine\n\\UnaryInfC{$\\vdash V: \\sigma$}\n\\RightLabel{$c$}\n\\BinaryInfC{$\\Gamma, x: \\shpos \\sigma \\vdash \\mathtt{copy}^{V}_{\\sigma} \\, x \\mathtt{\\ as \\ } x_1, x_2 \\mathtt{ \\ in\\ } M: \\tau$}\n\\DisplayProof}\n\\right)^\\bullet\n\\triangleq & \n\\scalebox{0.6}{\n\\def\\hskip .1cm{\\hskip 0.4cm}\n\\def1pt{1pt}\n\\AxiomC{$\\mathcal{D}_2^\\bullet$}\n\\noLine\n\\UnaryInfC{$\\vdash V^\\bullet : \\sigma^\\bullet$}\n\\noLine\n\\UnaryInfC{$\\vdots$}\n\\noLine\n\\UnaryInfC{$x:\\sigma^\\bullet \\vdash \\mathtt{D}^{V^\\bullet } _{\\sigma^\\bullet} \\, x: \\sigma^\\bullet \\otimes \\sigma^\\bullet$}\n\\AxiomC{$\\left( \\begin{aligned} \\begin{gathered}\\mathcal{D}_1 \\\\ \\Gamma, x_1: \\shpos \\sigma, x_2: \\shpos \\sigma \\vdash M : \\tau \\end{gathered}\\end{aligned} \\right)^\\bullet$}\n\\noLine\n\\UnaryInfC{\\vdots}\n\\noLine\n\\UnaryInfC{$\\Gamma^\\bullet, y :\\sigma^\\bullet \\otimes \\sigma^\\bullet \\vdash \\mathtt{let\\ }y \\mathtt{\\ be\\ }x_1,x_2 \\mathtt{\\ in\\ 
}M^\\bullet: \\tau^\\bullet$}\n\\RightLabel{$cut$}\n\\BinaryInfC{$\\Gamma^\\bullet, x: \\sigma^\\bullet \\vdash \\mathtt{let\\ }\\mathtt{D}_{\\sigma^\\bullet}^{V^\\bullet} \\, x \\mathtt{\\ be\\ }x_1,x_2 \\mathtt{\\ in\\ }M^\\bullet :\\tau^\\bullet$}\n\\DisplayProof}\n\\end{aligned}\n\\end{equation}\n\\caption{The translation of the rules $p$, $d$, $w$ and $c$.}\n\\label{fig: translation inference rules}\n\\end{figure}\n\nThe simulation theorem requires some preliminaries.\n\n\\begin{lem} \\label{lem: translation on values}For every typable value $V$:\n\\begin{enumerate}[(1)]\n\\item \\label{enum: 1 translation on values} $V^\\bullet =V$.\n\\item \\label{enum: 2 translation on values} $V$ has type $\\sigma$ if and only if $V^\\bullet$ has type $\\sigma^\\bullet$.\n\\end{enumerate}\n\\end{lem}\n\\begin{proof}\nStraightforward consequence of Definition~\\ref{defn: translation IMLL2 shpos with oplus into IMLL2}.\n\\end{proof}\n\n\\begin{lem} \\label{lem: bullet commutes well with substitution in LAML} For all terms $M, N \\in \\Lambda_{\\shpos}$ typable in $\\mathsf{LEM}$, $M^\\bullet [N^\\bullet\/x]= (M[N\/x])^\\bullet$.\n\\end{lem}\n\\begin{proof}\nWe can proceed by a standard structural induction on $ M $.\n\\end{proof}\n\\noindent\nWe now show that every reduction on terms typable in $\\mathsf{LEM}$ can be simulated in the linear $\\lambda$-calculus by means of the $\\beta \\eta$-reduction relation. We recall that\nSubject reduction holds on every typable $M \\in \\Lambda_\\shpos$\n(Theorem~\\ref{thm:Subject Reduction for IMLL2shpos}). \nMoreover, every linear $\\lambda$-term that has a type in $\\mathsf{IMLL}$, has one in $\\mathsf{IMLL}_2$ (see~\\cite{hindley1989bck}). So, we state the simulation theorem \nfor terms, rather than the related derivations.\n\\begin{thm}[Simulation]\n\\label{thm: translations for LAML} Let $\\mathcal{D}\\triangleleft \\Gamma \\vdash M: \\sigma$ be a derivation in $\\mathsf{LEM}$. 
If $M_1 \\rightarrow M_2$ then $M_1^\\bullet\\rightarrow^*_{\\beta \\eta}M_2^\\bullet$:\n\\begin{center}\n\\begin{tikzcd}\nM_1\\arrow[d, dashed] \\arrow[r]& M_2 \\arrow[d,dashed] \\\\\nM_1^\\bullet \\arrow[r, \"*\" pos=1, \"\\beta \\eta \" ' pos=1.05] & M_2^{ \\bullet}\n\\end{tikzcd}\n\\enspace.\n\\end{center}\n\\end{thm}\n\\begin{proof} We can proceed by structural induction on $M_1$.\nOne of the most interesting cases is when\n$M_1$ is $ (\\lambda x. P)Q$ and $M_2= P[Q\/x]$. \nLemma~\\ref{lem: bullet commutes well with substitution in LAML}\nimplies $((\\lambda x. P)Q)^\\bullet= (\\lambda x. P^\\bullet)Q^\\bullet \\rightarrow_\\beta P^\\bullet[Q^\\bullet\/x] =(P[Q\/x])^\\bullet$. \nIf $M_1$ is $\\mathtt{discard}_{\\sigma}\\, V \\mathtt{\\ in\\ }N$ and\n$M_2$ is $N$, then $V$ is a value of type $\\sigma$. By Lemma~\\ref{lem: translation on values}.\\ref{enum: 2 translation on values}, $V^\\bullet$ is a value of type $\\sigma^\\bullet$. Hence:\n\\allowdisplaybreaks\n\\begin{align*}\n(\\mathtt{discard}_{\\sigma}\\, V \\mathtt{\\ in\\ }N)^\\bullet &\n\\triangleq \\mathtt{let\\ } \\mathtt{E}_{\\sigma^\\bullet}\\, V^\\bullet \\mathtt{\\ be\\ }I \\mathtt{\\ in\\ }N^\\bullet\n\\rightarrow^*_\\beta N^\\bullet\n\\end{align*}\n\\noindent\nby Theorem~\\ref{thm: pi1 types are erasable}.\nIf $M_1$ is $\\mathtt{copy}^{U}_{\\sigma}\\, V \\mathtt{\\ as\\ }x_1,x_2 \\mathtt{\\ in\\ }N$ and $M_2$ is $N[V\/x_1, V\/x_2]$, \nthen $U$ and $V$ are both values of type $\\sigma$. By Lemma~\\ref{lem: translation on values}.\\ref{enum: 2 translation on values}, $U^\\bullet$ and $V^\\bullet$ are both values of type $\\sigma^\\bullet$.
\nHence:\n\\allowdisplaybreaks\n\\begin{align*}\n(\\mathtt{copy}^{U}_{\\sigma}\\, V \\mathtt{\\ as\\ }x_1,x_2 \\mathtt{\\ in\\ \n}N)^\\bullet&\\triangleq \\mathtt{let\\ }\\mathtt{D}^{U^\\bullet } _{\\sigma^\\bullet}\\, V^\\bullet\n\\mathtt{\\ be\\ } x_1,x_2 \\mathtt{\\ in \\ }N^\\bullet \\\\\n&\\rightarrow^*_{\\beta \\eta} \\mathtt{let\\ }\\langle V^\\bullet, V^\\bullet \\rangle \\mathtt{\\ be\\ } x_1,x_2 \\mathtt{\\ in \\ }N^\\bullet &&\\text{Theorem}~\\ref{thm: pi1 are duplicable} \\\\\n&\\rightarrow^*_\\beta N^\\bullet[V^\\bullet\/x_1, V^\\bullet\/x_2]\\\\\n&\\triangleq (N^\\bullet[V^\\bullet\/x_1]) [ V^\\bullet\/x_2]\\\\\n&= ((N[V\/x_1]) [ V\/x_2])^\\bullet &&\\text{Lemma}~\\ref{lem: bullet commutes well with substitution in LAML}\\\\\n&\\triangleq (N[V\/x_1, V\/x_2])^\\bullet \\enspace .\n\\end{align*}\n\\end{proof}\n\\noindent\nWe conclude by estimating the impact of the translation on the size of terms produced by \\enquote{unpacking} the constructs $\\mathtt{discard}_{\\sigma}$ and $\\mathtt{copy}^V_{\\sigma}$. \nWe need to bound the dimension of erasers and duplicators with ground type $A$. We rely on the map $(\\_ )^-$ \n(Definition~\\ref{defn: stripping forall} \nin~\\ref{sec: the d-soundness theorem DICE})\nthat, intuitively, strips every occurrence of $\\forall$ away from a\ngiven type (Remark~\\ref{rem: duplicator} in~\\ref{sec: the d-soundness theorem DICE}.)\n\n\\begin{lem}[Size of duplicators and erasers] \n\\label{lem: size of duplicator} For every ground type $A$:\n\\begin{enumerate}[(1)]\n\\item \\label{enum: size of eraser}$\\vert \\mathtt{E}_{A} \\vert \\in \\mathcal{O}(\\vert A^- \\vert)$.\n\\item\\label{enum: size of duplicator} If $V$ is a value of $A$, then $\\vert \\mathtt{D}_{A}^{V} \\vert \\in \\mathcal{O}( 2^{\\vert A^- \\vert^2})$.\n\\end{enumerate}\n\\end{lem}\n\\begin{proof}\nPoint~\\ref{enum: size of eraser} is straightforward by looking at the proof of Theorem~\\ref{thm: pi1 types are erasable}. 
Concerning Point~\\ref{enum: size of duplicator}, from \n\\ref{sec: the d-soundness theorem DICE} we know that $\\mathtt{D}_{A}^{V}$ is\n$ \\mathtt{dec}^s_{A} \\circ \n\\mathtt{enc}_{A}^s \\circ\n\\mathtt{sub}^s_{A} $,\nwhere $s= \\mathcal{O}(\\vert A^- \\vert \\, \\cdot \\, \\log \\vert A^- \\vert )$. The components of $\\mathtt{D}_{A}^{V}$ with a\nsize not linear in $\\vert A^- \\vert $ are $\\mathtt{dec}^s_{A}$ and $\\mathtt{enc}^s_{A}$. The $ \\lambda $-term $\\mathtt{dec}^s_{A}$ (see point~\\ref{enumerate: duplication step 3} in Section~\\ref{sec: the system shpos, with, oplus IMLL2 cut elimination and bound}) nests occurrences of\n\\texttt{if}-\\texttt{then}-\\texttt{else} each containing $2^{s}$ pairs of normal inhabitants of $A$, each of which,\nby Lemma~\\ref{lem: pi1type bound}, has size bounded by \n$\\vert A^- \\vert$. \nSimilarly, $\\mathtt{enc}^s_{A}$ alternates instances of\n$\\lambda$-terms $\\mathtt{abs}^s$ and $\\mathtt{app}^s$ which, \nagain, nest occurrences of \\texttt{if}-\\texttt{then}-\\texttt{else}, \neach one containing $2^{s}$ instances of boolean strings of size $s$. \nThe overall size of $\\mathtt{D}_{A}^{V}$ is $\\mathcal{O}(s \\cdot 2^{s})= \\mathcal{O}(2^{\\vert A^- \\vert^2})$. \n\\end{proof}\n\\noindent\n\n\\begin{thm}[Exponential Compression]\n\\label{thm: exponential compression} \nLet $\\mathcal{D} \\triangleleft \\Gamma \\vdash M: \\sigma$ be a derivation in \n$\\mathsf{LEM}$. Then, \n$\\vert M^\\bullet \\vert = \\mathcal{O}(2^{\\vert M \\vert^k})$, \nfor some $k \\geq 1$.\n\\end{thm}\n\\begin{proof} The proof is by structural induction on $M$. The interesting case is when $M$ is \n$\\mathtt{copy}^{V}_{\\sigma}\\, P \\mathtt{\\ as\\ }x_1,x_2 \\mathtt{\\ in\\ }Q$. \nSince $M$ is typable, $V$ has type $\\sigma$, which is closed and free from negative occurrences of $\\forall$, hence lazy.
\nBy Lemma~\\ref{lem: lazy derivation properties}.\\ref{enum: finite number of normal forms}, it is safe to assume that $V$ is a value with largest size among values of type $\\sigma$.\nBy Lemma~\\ref{lem: translation on values}, $V=V^\\bullet$ is also the largest value of type $\\sigma^\\bullet$ in $\\mathsf{IMLL_2}$. Finally, by Lemma~\\ref{lem: pi1type bound}, this implies that $V$ is a $\\eta$-long normal form of type $\\sigma^\\bullet$. Now, by using Definition~\\ref{eqn:datatype unity and product}, we have $\\vert M^\\bullet \\vert= \\vert \\mathtt{let\\ }\\mathtt{D}^{V} _{\\sigma^\\bullet}\\, P^\\bullet \\mathtt{\\ be\\ } x_1,x_2 \\mathtt{\\ in \\ }Q^\\bullet\\vert = \\vert \\mathtt{D}^{V} _{\\sigma^\\bullet}\\vert + \\vert P^\\bullet \\vert+ \\vert Q^\\bullet \\vert +4$. On the one hand, by induction hypothesis, we obtain $\\vert P^\\bullet \\vert = \\mathcal{O}(2^{\\vert P \\vert^{k'}})$ and $\\vert Q^\\bullet \\vert = \\mathcal{O}(2^{\\vert Q \\vert^{k''}})$, for some $k', k'' \\geq 1$. On the other hand, by applying both Lemma~\\ref{lem: size of duplicator} and Lemma~\\ref{lem: pi1type bound}, we have $\\vert \\mathtt{D}^{V} _{\\sigma^\\bullet}\\vert=\\mathcal{O}(2^{\\vert A^- \\vert^2}) =\\mathcal{O}(2^{2 \\cdot \\vert V \\vert^2})$. 
\nTherefore, there exists $k\\geq 1$ such that $\\vert M^\\bullet \\vert = \\mathcal{O}(2^{(\\vert V \\vert + \\vert P \\vert + \\vert Q \\vert +1)^k})=\\mathcal{O}(2^{\\vert M \\vert^k })$.\n\\end{proof}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Vanishing topos and\nthe semi-continuity of the Swan conductor}\n\\label{svan}\n\\subsection{Calculus on vanishing topos}\\label{ssvan}\n\nLet $f\\colon X\\to S$\nbe a morphism of schemes.\nBy abuse of notation,\nlet $X, S$ also denote\nthe associated \\'etale toposes.\nFor the definition of\nthe {\\em vanishing topos}\n$X\\overset \\gets\\times_SS$\nand the morphisms \n$$\\begin{CD}\nX@>{\\Psi_f}>> X\\overset \\gets\\times_SS\n@>{p_2}>>S\\\\\n@.@V{p_1}VV\\\\\n@.X\\end{CD}$$\nof toposes,\nwe refer to \\cite[1.1, 4.1, 4.3]{Il} and\n\\cite[1.1]{TS}.\nFor a geometric point $x$ of \na scheme $X$,\nwe assume in this article\nthat the residue field\nof $x$ is a separable closure\nof the residue field at the\nimage of $x$ in $X$,\nif we do not say otherwise explicitly.\n\nFor a geometric point $x$ of $X$,\nthe fiber \n$x\\times_X(X\\overset \\gets\\times_SS)$\nof $p_1\\colon\nX\\overset \\gets\\times_SS\\to X$\nat $x$ is the vanishing topos\n$x\\overset \\gets\\times_SS$\nand is canonically\nidentified with the strict localization\n$S_{(s)}$\nat the geometric point $s=f(x)$\nof $S$ defined by the image of $x$\n(cf.\\ \\cite[(1.8.2)]{TS}).\n\nA point on the topos $X\\overset \\gets\\times_SS$\nis defined by a triple denoted $x\\gets t$\nconsisting of a geometric point $x$ of $X$,\na geometric point $t$ of $S$\nand a specialization $s=f(x)\\gets t$\nnamely a geometric point $S_{(s)}\\gets t$\nof the strict localization\nlifting $S\\gets t$.\nThe fiber \n$(X\\overset \\gets\\times_SS)\n\\times_{\nS\\overset \\gets\\times_SS}\n(s\\gets t)$\nof the canonical morphism\n$X\\overset \\gets\\times_SS\\to \nS\\overset \\gets\\times_SS$\nat a point $s\\gets t$\nis canonically identified with\nthe geometric fiber 
$X_{s}$.\nThe fiber products\n$X_{(x)}\\times_{S_{(s)}}S_{(t)}$\nand\n$X_{(x)}\\times_{S_{(s)}}t$\nare called the {\\em Milnor tube}\nand the {\\em Milnor fiber}\nrespectively.\n\nFor a commutative diagram\n$$\\xymatrix{\nX\\ar[rr]^f\\ar[rd]_p&&\nY\\ar[ld]^g\\\\\n&S}$$\nof morphisms of schemes,\nthe morphism\n$\\overset{\\gets} g\\colon X\\overset \\gets\\times_YY\\to\nX\\overset \\gets\\times_SS$\nis defined by functoriality and\nwe have a canonical isomorphism\n$\\Psi_p\\to \\overset{\\gets} g\n\\circ \\Psi_f$.\nOn the fibers of a geometric point $x$\nof $X$, the morphism\n$\\overset{\\gets} g$\ninduces a morphism\n\\begin{equation}\ng_{(x)}\\colon Y_{(y)}\n=x\\overset\\gets\\times_YY\\to S_{(s)}\n=x\\overset\\gets\\times_SS\n\\label{eqpx}\n\\end{equation}\non the strict localizations\nat $y=f(x)$ and $s=p(x)$ \\cite[(1.7.3)]{TS}.\nIn particular for $Y=X$,\nwe have a canonical isomorphism\n$\\Psi_p\\to \\overset{\\gets} g\n\\circ \\Psi_{\\rm id}$.\n\nLet $\\Lambda$ be a finite local\nring with residue\nfield $\\Lambda_0$ of characteristic \n$\\ell$ invertible on $S$.\nLet $D^+(-)$ denote\nthe derived category of \ncomplexes of $\\Lambda$-modules\nbounded below\nand let $D^b(-)$ denote\nthe subcategory consisting\nof complexes with bounded cohomology.\nIn the following,\nwe assume that $S$ and $X$ are quasi-compact\nand quasi-separated.\nWe say that an object of\n$D^b(X\\overset \\gets\\times_SS)$\nis constructible\nif there exist finite partitions\n$X=\\coprod_\\alpha X_\\alpha$\nand\n$S=\\coprod_\\beta S_\\beta$ \nby locally closed\nconstructible subschemes such that\nthe restrictions to $X_\\alpha\n\\overset \\gets\\times_SS_\\beta$\nof cohomology sheaves are locally constant\nand constructible \\cite[1.3]{TS}.\n\nLet $D^b_c(-)$ denote\nthe subcategory of $D^b(-)$ consisting\nof constructible objects\nand\nlet $D_{\\rm ctf}(-)\n\\subset D^b_c(-)$ denote\nits subcategory consisting\nof objects of finite tor-dimension.\nIf $\\Lambda$ is a field,\nwe 
have\n$D_{\\rm ctf}(-)\n=D^b_c(-)$.\n\nWe canonically identify\na function on the underlying set\nof a scheme $X$ with \nthe function on the set\nof isomorphism classes of\ngeometric points $x$ of $X$.\nSimilarly, we call a function on the set\nof isomorphism classes of\npoints $x\\gets t$ of $X\\overset \\gets\\times_SS$\na function on $X\\overset \\gets\\times_SS$.\nWe say that a function on \n$X\\overset \\gets\\times_SS$\nis a {\\em constructible function}\nif there exist finite partitions\n$X=\\coprod_\\alpha X_\\alpha$\nand\n$S=\\coprod_\\beta S_\\beta$ as above\nsuch that the restrictions\nto $X_\\alpha\n\\overset \\gets\\times_SS_\\beta$\nare locally constant.\n\nFor an object ${\\cal K}$ of\n$D_{\\rm ctf}(X\\overset \\gets\\times_SS)$,\nthe rank function \n$\\dim {\\cal K}_{x\\gets t}$\nis defined as a constructible function\non $X\\overset \\gets\\times_SS$.\nIf $\\Lambda=\\Lambda_0$ is a field, we have\n$\\dim {\\cal K}_{x\\gets t}=\n\\sum_q(-1)^q\\dim {\\cal H}^q{\\cal K}_{x\\gets t}$.\nIn general, \nwe have\n$\\dim_\\Lambda {\\cal K}\n=\n\\dim_{\\Lambda_0} \n{\\cal K}\\otimes_\\Lambda^L\\Lambda_0$.\n\n\n\n\\begin{df}\\label{df11}\nLet $Z$ be a quasi-finite scheme \nover $S$ \nand let $\\varphi\\colon Z\\to {\\mathbf Q}$\nbe a function.\nWe define the {\\em derivative} $\\delta(\\varphi)$\nof $\\varphi$ as a function\non $Z\\overset \\gets\\times_SS$\nby \n\\begin{equation}\n\\delta(\\varphi)(x\\gets t)\n=\\varphi(x)\n-\\sum_{z\\in Z_{(x)}\\times\n_{S_{(s)}}t}\\varphi(z)\n\\label{eqdel}\n\\end{equation}\nwhere $s=f(x)$.\nIf the derivative $\\delta(\\varphi)$\nis $0$ (resp.\\ $\\delta(\\varphi)\\geqq 0$),\nwe say that the function $\\varphi$\nis {\\em flat} (resp.\\ {\\em increasing})\nover $S$.\nIf the morphism $f\\colon Z\\to S$ is finite,\nwe define a function $f_*\\varphi$ on $S$\nby \n\\begin{equation}\nf_*\\varphi(s)=\n\\sum_{x\\in Z_s}\\varphi(x).\n\\label{eqf*}\n\\end{equation}\n\\end{df} \n\n\n\\begin{lm}\\label{lmscZ}\nLet $S$ be a noetherian 
scheme,\n$Z$ be a quasi-finite scheme \nover $S$ and \n$\\varphi\\colon Z\\to {\\mathbf Q}$ be a function.\n\n{\\rm 1.}\nAssume that\n$Z$ is \\'etale over $S$.\nThen $\\varphi$ is locally constant\n(resp.\\ upper semi-continuous)\nif and only if\nit is flat over $S$\n(resp.\\ constructible\nand increasing over $S$).\n\n\n{\\rm 2.} \nThe function $\\varphi$ is\nconstructible if and only if its\n{\\em derivative} $\\delta(\\varphi)\n\\colon Z\\overset \\gets\\times_SS\n\\to {\\mathbf Q}$\ndefined in {\\rm (\\ref{eqdel})}\nis constructible.\nConsequently, if $\\varphi$ is flat over $S$,\nthen $\\varphi$ is constructible.\n\n{\\rm 3.} \nAssume that\n$\\varphi$ is {\\em flat}\nover $S$.\nThen, $\\varphi=0$\nif and only if\n$\\varphi(x)=0$ for\nthe generic point $x$ of\nevery irreducible component of $Z$.\n\n{\\rm 4.}\nAssume that \nthe morphism $f\\colon Z\\to S$ is finite.\nIf $\\varphi$ is constructible,\nthe function $f_*\\varphi$\nis also constructible.\nAssume that $\\varphi$ is constructible\nand is increasing over $S$.\nThen the function $f_*\\varphi$ on $S$ is \nupper semi-continuous.\nFurther, the function $\\varphi$ is flat over $S$ if \nand only if\n$f_*\\varphi$ is locally constant.\n\\end{lm}\n\n\\proof{\n1. 
\nSince the question is \\'etale local on $Z$,\nwe may assume that $Z\\to S$ \nis an isomorphism.\nThen the assertion is clear.\n\n\n2.\nAssume $\\delta(\\varphi)$ is constructible.\nBy noetherian induction,\nit suffices to show the following:\nFor every geometric point $t$ of $S$\nand the closure $T\\subset S$ of its image,\nthere exists a dense open subset\n$V\\subset T$ such that\n$\\varphi$ is locally constant on\n$Z\\times_SV$.\nReplacing $S$ by $T$,\nit suffices to consider the case where \n$t$ dominates\nthe generic point of an irreducible scheme\n$S$.\nFor a geometric point $x$ of $Z$\nabove $t$,\nwe have $\\delta(\\varphi)(x\\gets t)=0$.\nBy further replacing $Z$ by\nan \\'etale neighborhood of $x$,\nit suffices to consider the case\nwhere $Z$ is \\'etale over $S$\nand $\\delta(\\varphi)=0$.\nThen, by 1,\n$\\varphi$ is locally constant and hence\nconstructible.\n\nAssume $\\varphi$ is constructible.\nFor a closed subset $T$ of $S$,\nthe subtopos\n$(Z\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} Z\\times_ST)\\overset\\gets\\times_ST$\nis empty.\nHence, \nby noetherian induction,\nit suffices to show the following:\nFor every geometric point $t$ of $S$\nand the closure $T\\subset S$ of its image,\nthere exists a dense open subset\n$V\\subset T$ such that\n$\\delta(\\varphi)$ is locally constant on\n$(Z\\times_SV)\\overset\\gets\\times_SV$.\nSimilarly as above,\nit suffices to consider the case where \n$Z$ is \\'etale over $S$\nand $\\varphi$ is locally constant.\nThen, by 1,\n$\\delta(\\varphi)=0$ and is constructible.\n\n\n3.\nBy (\\ref{eqdel}),\na function flat over $S$ is uniquely determined\nby the values at the generic points\nof irreducible components.\n\n\n4.\nAssume that $f\\colon Z\\to S$ is finite.\nIf $\\varphi$ is constructible,\nthere exists a dense open subscheme\n$U$ of $S$ such that\n$\\varphi$ is locally constant\non $Z\\times_SU$.\nHence \n$f_*\\varphi$ is constructible by\nnoetherian induction.\nFor a specialization $s\\gets t$,\nwe\n
have\n$\\sum_{x\\in Z_s}\\delta(\\varphi)(x\\gets t)\n=f_*\\varphi(s)-\nf_*\\varphi(t)\n=\\delta(f_*\\varphi)(s\\gets t)$.\nHence we may assume\n$Z=S$ and then the assertion is clear.\n\\qed}\n\\medskip\n\nWe give an example\nof flat function.\nLet $S$ be a noetherian\nscheme, $X$ be a\nscheme of finite type over $S$\nand $Z\\subset X$ be a closed\nsubscheme quasi-finite over $S$.\nLet $A$ be a complex of \n${\\cal O}_X$-modules\nsuch that the cohomology sheaves\n${\\cal H}^q(A)$ are coherent\n${\\cal O}_X$-modules supported on $Z$\nfor all $q$ and\nthat $A$ is of finite tor-dimension \nas a complex of ${\\cal O}_S$-modules.\nFor a geometric point $z$ of $Z$ \nand its image $s$ in $S$,\nlet ${\\cal O}_{S,s}$ denote the {\\em strict}\nlocalization\nand $k(s)$ the separably closed\nresidue field of ${\\cal O}_{S,s}$.\nThen, the $k(s)$-vector spaces\n$Tor^{{\\cal O}_{S,s}}_q\n(A_z,k(s))$ are of finite dimension\nand are $0$ except for finitely many $q$.\nWe define a function\n$\\varphi_A\\colon Z\\to {\\mathbf Z}$\nby\n\\begin{equation}\n\\varphi_A(z)\n=\n\\sum_q\n(-1)^q\n\\dim_{k(s)}\nTor^{{\\cal O}_{S,s}}_q\n(A_z,k(s)).\n\\label{eqA}\n\\end{equation}\n\n\\begin{lm}\\label{lmA}\nLet noetherian schemes\n$Z\\subset X\\to S$ \nand a complex $A$ be as above.\n\n{\\rm 1.}\nThe function \n$\\varphi_A\\colon Z\\to {\\mathbf Z}$\ndefined by {\\rm (\\ref{eqA})} is\nconstructible and flat over $S$.\n\n{\\rm 2.}\nSuppose that $S$ and $Z$ are integral\nand that the image of the generic point\n$\\xi$ of $Z$ is the generic point\n$\\eta$ of $S$.\nIf $A={\\cal O}_Z$,\nthe value of the function $\\varphi_A$\nat a geometric point of $Z$ above $\\xi$ is\nthe inseparable degree $[k(\\xi):k(\\eta)]\n_{\\rm insep}$.\n\\end{lm}\n\nThe condition that\n$A$ is of finite tor-dimension \nas a complex of ${\\cal O}_S$-modules\nis satisfied if $S$ is regular.\n\n\\proof{1.\nSince the assertion is\n\\'etale local on $Z$,\nwe may assume that $Z$ is\nfinite over $S$, \nthat $X$ and $S$ are affine 
and that\n$z$ is the unique point in the geometric fiber $Z\n\\times_S{\\rm Spec}\\ k(s)$.\nThen, the complex $Rf_*A$\nis a perfect complex of\n${\\cal O}_S$-modules\nand $\\varphi_A(z)$ equals the rank of\n$Rf_*A$.\nHence, the assertion follows.\n\n2.\nWe may assume that\n$S={\\rm Spec}\\ k(\\eta)$\nand $Z={\\rm Spec}\\ k(\\xi)$\nand the assertion follows.\n\\qed}\n\\medskip\n\nWe generalize the definition\nof derivative\nto functions on vanishing topos.\n\n\\begin{df}\\label{df14}\nLet \n\\begin{equation}\n\\xymatrix{\nZ\\ar[rr]^f\\ar[rd]_p&&\nY\\ar[ld]^g\\\\\n&S}\n\\label{eqfgp}\n\\end{equation} \nbe a commutative diagram of\nmorphisms of schemes\nsuch that $Z$ is quasi-finite over $S$.\nLet $\\psi\\colon \nZ\\overset\\gets\\times_YY\\to {\\mathbf Q}$\nbe a function such that\n$\\psi(x\\gets w)=0$\nunless $w$ is supported\non the image of $f_{(x)}\n\\colon Z_{(x)}\\to Y_{(y)}$\nwhere $y=f(x)$.\nWe define the derivative\n$\\delta(\\psi)$ as \na function \n$\\delta(\\psi)\\colon Z\\overset\\gets\\times_SS\\to {\\mathbf Q}$\nby\n\\begin{equation}\n\\delta(\\psi)(x\\gets t)\n=\\psi(x\\gets y)\n-\\sum_{w\\in Y_{(y)}\\times_{S_{(s)}}t}\n\\psi(x\\gets w)\n\\label{eqdelY}\n\\end{equation}\nwhere $y=f(x)$ and $s=p(x)$.\nWe say that $\\psi$ is {\\em flat} over $S$\nif $\\delta(\\psi)=0$.\n\\end{df}\n\nThe sum on the right-hand side\nof (\\ref{eqdelY})\nis a finite sum by the assumption\nthat $Z$ is quasi-finite over $S$\nand the assumption on the support of $\\psi$.\nIf $Z=Y$, we recover the\ndefinition (\\ref{eqdel})\nby applying (\\ref{eqdelY})\nto the pull-back $p_2^*\\varphi\\colon\nZ\\overset\\gets\\times_ZZ\\to {\\mathbf Q}$\nby $p_2\\colon\nZ\\overset\\gets\\times_ZZ\\to Z$.\n\nThe following elementary lemma will be used\nin the proof of a generalization\nof the continuity of the Swan conductor.\n\n\\begin{lm}\\label{lmdelp}\nLet the assumption on the diagram\n{\\rm (\\ref{eqfgp})} be as in\nDefinition {\\rm \\ref{df14}} and let\n$\\varphi$ be a function on $Z$.\nWe define a\n
function\n$\\psi$ on $Z\\overset\\gets\\times_YY$\nby\n\\begin{equation}\n\\psi(x\\gets w)=\n\\sum_{z\\in Z_{(x)}\\times_{Y_{(y)}}w}\n\\varphi(z)\n\\label{eqpsi}\n\\end{equation}\nwhere $y=f(x)$.\nThen the derivative $\\delta(\\varphi)$\non $Z\\overset\\gets\\times_SS$\ndefined by {\\rm (\\ref{eqdel})}\nequals \n$\\delta(\\psi)$\ndefined by {\\rm (\\ref{eqdelY})}.\n\\end{lm}\n\n\\proof{\nIt follows from\n$\\psi(x\\gets y)=\\varphi(x)$\nand \n$Z_{(x)}\\times_{S_{(s)}}t\n=\n\\coprod_{w\\in Y_{(y)}\\times_{S_{(s)}}t}\n(Z_{(x)}\\times_{Y_{(y)}}w)$.\nNote that the fiber\n$Z_{(x)}\\times_{Y_{(y)}}w$\nis empty except for the finitely many\ngeometric points $w$ of\n$Y_{(y)}\\times_{S_{(s)}}t$\nthat are supported on the image of\n$Z_{(x)}$.\n\\qed}\n\n\\subsection{Nearby cycles and the local acyclicity}\\label{ssla}\n\nFor a morphism $f\\colon X\\to S$,\nthe morphism $\\Psi_f\\colon X\\to \nX\\overset \\gets\\times_SS$\ndefines the nearby cycles functor\n$R\\Psi_f:D^+(X)\\to D^+(X\\overset \\gets\\times_SS)$.\nThe canonical morphism\n$p_1^*\\to R\\Psi_f$\nof functors is defined by adjunction\nand by the isomorphism ${\\rm id}\\to p_1\\circ \\Psi_f.$\nThe cone of the morphism\n$p_1^*\\to R\\Psi_f$\ndefines the vanishing cycles functor\n$R\\Phi_f:D^+(X)\\to D^+(X\\overset \\gets\\times_SS)$.\nIf $S$ is the spectrum of \na henselian discrete valuation ring\nand if $s,\\eta$ denote\nits closed and generic points,\nwe recover the classical construction\nof complexes $\\psi, \\phi$ of nearby cycles\nand vanishing cycles as \nthe restrictions to\n$X_s\\overset \\gets\\times_S\\eta$\nof $R\\Psi_f$ and $R\\Phi_f$\nrespectively.\n\nRecall that $f\\colon X\\to S$\nis said to be locally acyclic relatively\nto a complex ${\\cal F}$ \nof $\\Lambda$-modules\non $X$\n{\\rm \\cite[Definition 2.12]{TF}}\nif the canonical morphism\n\\begin{equation}\n\\begin{CD}\n{\\cal F}_x\n@>>>\nR\\Gamma(X_{(x)}\\times_{S_{(s)}}t,\n{\\cal\n
F}|_{X_{(x)}\\times_{S_{(s)}}t})\n\\end{CD}\n\\label{eqla}\n\\end{equation}\nis an isomorphism\nfor every $x\\gets t$.\nRecall that $f\\colon X\\to S$\nis said to be universally locally acyclic relatively\nto ${\\cal F}$,\nif for every morphism $S'\\to S$,\nits base change is locally acyclic relatively\nto the pull-back of ${\\cal F}$.\n\n\n\\begin{lm}[{\\rm cf.\\ \\cite[Proposition 7.6.2]{Fu}}]\\label{lmctf}\nLet $f\\colon X\\to S$\nbe a morphism of finite type and\nlet ${\\cal F}\\in D_{\\rm ctf}(X)$\nbe a complex of finite tor-dimension.\n\n{\\rm 1.}\nSuppose that ${\\cal F}$ is of tor-amplitude $[a,b]$ and that $f\\colon X\\to S$ is of relative dimension $d$. Then, \nfor points $x\\gets t$\nof $X\\overset{\\gets}\\times_SS$,\nthe complex\n$R\\Gamma(X_{(x)}\\times_{S_{(s)}}t,\n{\\cal F}|_{X_{(x)}\\times_{S_{(s)}}t})$\nof $\\Lambda$-modules\nis of\ntor-amplitude $[a,b+d]$ \nand,\nfor a $\\Lambda$-module $M$,\nthe canonical morphism\n\\begin{equation}\nR\\Gamma(X_{(x)}\\times_{S_{(s)}}t,\n{\\cal F}|_{X_{(x)}\\times_{S_{(s)}}t})\n\\otimes_\\Lambda^LM\n\\to \nR\\Gamma(X_{(x)}\\times_{S_{(s)}}t,\n{\\cal F}|_{X_{(x)}\\times_{S_{(s)}}t}\n\\otimes_\\Lambda^LM)\n\\label{eqFu}\n\\end{equation}\nis an isomorphism.\n\n{\\rm 2.}\nLet $\\Lambda_0$\nbe the residue field of $\\Lambda$.\nThen,\n$f\\colon X\\to S$ is locally acyclic\n(resp.\\ universally locally acyclic)\nrelatively to \n${\\cal F}$ if and only if\nit is so relatively to\n${\\cal F}_0=\n{\\cal F}\\otimes_\\Lambda^L\\Lambda_0$.\n\\end{lm}\n\n\\proof{\n1.\nBy the assumption that\n$f\\colon X\\to S$\nis of finite type and of\nrelative dimension $d$,\nthe functor \n$R\\Gamma(X_{(x)}\\times_{S_{(s)}}t,-)$\nis of cohomological dimension $\\leqq d$\nby \\cite[Corollaire 3.2]{XIV}.\nHence, similarly as \\cite[(4.9.1)]{Rapport},\nthe canonical morphism (\\ref{eqFu})\nis an isomorphism.\nSince the complex\n${\\cal F}|_{X_{(x)}\\times_{S_{(s)}}t}\n\\otimes_\\Lambda^LM$\nis acyclic outside $[a,b]$,\nthe 
complex\n$R\\Gamma(X_{(x)}\\times_{S_{(s)}}t,$\n${\\cal F}|_{X_{(x)}\\times_{S_{(s)}}t}\n\\otimes_\\Lambda^LM)$\nis acyclic outside $[a,b+d]$.\nThus, the complex\n$R\\Gamma(X_{(x)}\\times_{S_{(s)}}t,\n{\\cal F}|_{X_{(x)}\\times_{S_{(s)}}t})$ is \nof tor-amplitude $[a,b+d]$.\n\n2.\nIt suffices to show the \nassertion for local acyclicity.\nIf the canonical morphism \n(\\ref{eqla})\nis an isomorphism for $x\\gets t$,\nthen (\\ref{eqla})\nfor ${\\cal F}_0$ is an isomorphism\nby the isomorphism (\\ref{eqFu}) \nfor $M=\\Lambda_0$.\n\nTo show the converse,\nlet $I^\\bullet$ be a filtration\nby ideals of $\\Lambda$\nsuch that ${\\rm Gr}\\Lambda$\nis a $\\Lambda_0$-vector space.\nThen, $I^\\bullet$ defines a filtration\non ${\\cal F}={\\cal F}\\otimes^L\\Lambda$ and\na canonical isomorphism\n${\\rm Gr}{\\cal F}\\to\n{\\cal F}_0\\otimes_{\\Lambda_0}\n{\\rm Gr}\\Lambda$.\nHence if (\\ref{eqla})\nis an isomorphism for ${\\cal F}_0$ then\n(\\ref{eqla})\nfor ${\\cal F}$ is an isomorphism.\n\\qed}\n\\medskip\n\nWe consider a commutative diagram\n\\begin{equation}\n\\xymatrix{\nX\\ar[rr]^f\\ar[rd]_p&&\nY\\ar[ld]^g\\\\\n&S}\n\\label{eqXYS}\n\\end{equation} \nof schemes.\nThe canonical isomorphism\n$\\overset \\gets g\\circ \\Psi_f\n\\to \\Psi_p$\ninduces an isomorphism of functors\n\\begin{equation}\nR\\overset \\gets g_{*}\\circ R\\Psi_f\n\\to R\\Psi_p\n\\label{eqPsPs}\n\\end{equation}\n\n\nFor an object ${\\cal K}$\nof $D^+(X\\overset \\gets\\times_YY)$\nand a geometric point $x$ of $X$,\nthe restriction of $R\\overset{\\gets} g_*{\\cal K}$\non $x\\overset \\gets\\times_SS=S_{(s)}$\nfor $s=f(x)$\nis canonically identified\nwith $Rg_{(x)*}({\\cal K}|_{Y_{(y)}})$\nfor $y=f(x)$\nin the notation of (\\ref{eqpx})\nby \\cite[(1.9.2)]{TS}.\nFor the stalk at a point $x\\gets t$ of\n$X\\overset \\gets\\times_SS$,\nthis identification gives a canonical isomorphism\n\\begin{equation}\nR\\overset{\\gets} g_*{\\cal K}_{x\\gets t}\n\\to\nR\\Gamma(Y_{(y)}\\times_{S_{(s)}}\nS_{(t)},{\\cal 
K}|_{Y_{(y)}\\times_{S_{(s)}}\nS_{(t)}}).\n\\label{eqMt}\n\\end{equation}\nFor an object ${\\cal F}$ of $D^+(X)$,\n(\\ref{eqMt}) applied to $Y=X$ \ngives a canonical identification\n\\begin{equation}\nR\\Psi_p{\\cal F}_{x\\gets t}\n\\to\nR\\Gamma(X_{(x)}\\times_{S_{(s)}}\nS_{(t)},{\\cal F}|_{X_{(x)}\\times_{S_{(s)}}\nS_{(t)}})\\label{eqMt2}\n\\end{equation}\nwith the cohomology\nof the Milnor tube \\cite[(1.1.15)]{TS}.\n\nA cartesian diagram\n$$\\begin{CD}\nX@<<< X_T\\\\\n@VfVV @VV{f_T}V\\\\\nS@<<< T\n\\end{CD}$$\nof schemes induces a morphism\n$\\overset{\\gets} i\\colon\nX_T\\overset\\gets\\times_TT\\to\nX\\overset\\gets\\times_SS$\nof vanishing topoi\nand a commutative diagram\n\\begin{equation}\n\\begin{CD}\n@>>> \\overset{\\gets*} ip_1^*\n@>>> \\overset{\\gets*} iR\\Psi_f\n@>>> \\overset{\\gets*} iR\\Phi_f@>>>\\\\\n@.@V{\\simeq}VV @VVV @VVV\\\\\n@>>> p_1^*i^*\n@>>> R\\Psi_{f_T}i^*\n@>>> R\\Phi_{f_T}i^*@>>>\\\\\n\\end{CD}\n\\label{eqbc}\n\\end{equation}\nof distinguished triangles of functors.\nFor an object ${\\cal F}$ of $D^+(X)$,\nwe say that the formation of\n$R\\Psi_f{\\cal F}$ commutes with\nthe base change $T\\to S$ if\nthe middle vertical arrow defines\nan isomorphism\n$\\overset{\\gets*} iR\\Psi_f{\\cal F}\n\\to R\\Psi_{f_T}i^*{\\cal F}$.\n\n\nFor a point $x\\gets t$\nof $X\\overset\\gets\\times_SS$,\nif $T\\subset S$ denotes\nthe closure of the image of $t$ in $S$,\nthe left square of (\\ref{eqbc})\ninduces a commutative diagram\n\\begin{equation}\n\\xymatrix{\n(p_1^*{\\cal F})_{x\\gets t}\n={\\cal F}_x\\ar[rr]\\ar[rrd]\n&&\nR\\Psi_f{\\cal F}_{x\\gets t}\n=\nR\\Gamma(X_{(x)}\\times_{S_{(s)}}\nS_{(t)},{\\cal F}|_{X_{(x)}\\times_{S_{(s)}}\nS_{(t)}})\\ar[d]\\\\\n&&\nR\\Psi_{f_T}({\\cal F}|_{X_T})_{x\\gets t}\n=\nR\\Gamma(X_{(x)}\\times_{S_{(s)}}\nt,{\\cal F}|_{X_{(x)}\\times_{S_{(s)}}\nt}).}\n\\label{eqMf}\n\\end{equation}\nThe vertical arrow\nis the canonical morphism from the cohomology \nof the Milnor tube\nto that of the Milnor fiber\nand the slant arrow\nis the canonical morphism (\\ref{eqla}).\nRecall that we assume\nthat the residue field of $t$\nis a separable closure of\nthe residue field at\nthe image in $S_{(s)}$.\n\nWe interpret the local acyclicity\nin terms of the 
vanishing topos.\n\n\\begin{pr}\\label{lmapp}\nLet $f\\colon X\\to S$\nbe a morphism of schemes.\nThen, for an object ${\\cal F}$ of $D^+(X)$, \nthe conditions\nin {\\rm 1.{}\\!} below are equivalent to each other,\nand so are those in {\\rm 2.{}\\!} below.\n\n\n{\\rm 1.}\n{\\rm (1)}\nFor every finite morphism $g\\colon T\\to S$,\nfor every geometric point $x$ of $X$\nand \nfor every specialization\n$t\\gets u$ of\ngeometric points of $T$\nsuch that \n$s=f(x)=g(t)$\nas geometric points of $S$,\nthe canonical morphism\n\\begin{equation}\nR\\Gamma(X_{(x)}\\times_{S_{(s)}}\nT_{(u)},{\\cal F}|_{X_{(x)}\\times_{S_{(s)}}\nT_{(u)}})\n\\to\nR\\Gamma(X_{(x)}\\times_{S_{(s)}}\nu,{\\cal F}|_{X_{(x)}\\times_{S_{(s)}}\nu})\n\\label{eqSTt}\n\\end{equation}\nthat is a vertical arrow in {\\rm (\\ref{eqMf})}\nfor the base change $f_T\\colon X_T\n=X\\times_ST\\to T$\nis an isomorphism.\n\n{\\rm (2)}\nThe formation of $R\\Psi_f{\\cal F}$\ncommutes with finite base change $T\\to S$.\n\n{\\rm 2. ({\\cite[Corollaire 2.6]{app}})}\n{\\rm (1)}\nThe morphism $f\\colon X\\to S$ is \n(resp.\\ universally) locally \nacyclic relatively to ${\\cal F}$.\n\n{\\rm (2)}\nThe canonical morphism\n$p_1^*{\\cal F}\\to\nR\\Psi_f{\\cal F}$\nis an isomorphism\nand the formation of $R\\Psi_f{\\cal F}$\ncommutes with finite (resp.\\ arbitrary) base change $T\\to S$.\n\n{\\rm (3)}\nThe canonical morphism\n$p_1^*{\\cal F}_T\\to\nR\\Psi_{f_T}{\\cal F}_T$\nis an isomorphism\nfor every finite \n(resp.\\ every) morphism $T\\to S$,\nwhere ${\\cal F}_T$ denotes the pull-back\nof ${\\cal F}$ on $X_T=X\\times_ST$.\n\\end{pr}\n\n\\proof{\n1.\nLet $T\\to S$ be a finite morphism\nand $x\\mapsto s$ and $t\\gets u$ be as in\nthe condition (1).\nThen, \nfor the geometric point $x'$ of\n$X_T$ defined by the unique point of\n$x\\times_st$,\nthe Milnor tube\n$X_{T (x')}\\times_{T_{(t)}}\nT_{(u)}$\nis canonically isomorphic to\n$X_{(x)}\\times_{S_{(s)}}\nT_{(u)}$ since $T\\to S$ is finite.\nThus, the morphism\n(\\ref{eqSTt}) is identified with\nthe 
morphism\n$R\\Gamma(X_{(x)}\\times_{S_{(s)}}\nT_{(u)},{\\cal F}|_{X_{(x)}\\times_{S_{(s)}}\nT_{(u)}})\n\\to\nR\\Gamma(X_{(x)}\\times_{S_{(s)}}\nu,{\\cal F}|_{X_{(x)}\\times_{S_{(s)}}\nu})$\nthat is the vertical arrow in {\\rm (\\ref{eqMf})}\nfor $f_T\\colon X_T\\to T$\nat $x'\\gets u$.\n\nLet $T'\\subset T$\nbe the closure of the image of $u$.\nWe consider the\nbase change morphisms\n\\begin{equation}\n\\begin{CD}\nR\\Psi_f{\\cal F}_{x\\gets u}\n=&\nR\\Gamma(X_{(x)}\\times_{S_{(s)}}\nS_{(u)},{\\cal F}|_{X_{(x)}\\times_{S_{(s)}}\nS_{(u)}})\\\\\n&@VVV\\\\\nR\\Psi_{f_T}({\\cal F}|_{X_T})_{x\\gets u}\n=&\nR\\Gamma(X_{(x)}\\times_{S_{(s)}}\nT_{(u)},{\\cal F}|_{X_{(x)}\\times_{S_{(s)}}\nT_{(u)}})\\\\\n&@VVV\\\\\nR\\Psi_{f_{T'}}({\\cal F}|_{X_{T'}})_{x\\gets u}\n=&\nR\\Gamma(X_{(x)}\\times_{S_{(s)}}\nu,{\\cal F}|_{X_{(x)}\\times_{S_{(s)}}\nu}).\n\\end{CD}\n\\label{eqSTft}\n\\end{equation}\nThe condition (1) implies\nthat the lower arrow and\nthe composition are isomorphisms.\nHence, the upper arrow\nis an isomorphism\nand we have\n(1)$\\Rightarrow$(2).\n\nConversely, \nthe condition (2) implies\nthat the upper arrow and\nthe composition are isomorphisms.\nHence, the lower arrow\nthat is the same as\n(\\ref{eqSTt}) is an isomorphism.\nThus, we have\n(2)$\\Rightarrow$(1).\n\n\n2. 
\nFirst, we show the\nequivalence of (1) and (2)\nin the cases without resp.\nThe condition (1) is equivalent to\nthat the slant arrow in (\\ref{eqMf})\nis an isomorphism\nfor every point $x\\gets t$\nof $X\\overset \\gets\\times_SS$.\nHence the condition (2) implies the condition (1)\nby (2)$\\Rightarrow$(1) in 1.\\\nand the commutativity of the diagram (\\ref{eqMf}).\n\nConversely, by \\cite[Corollaire 2.6]{app},\nif the condition (1) is satisfied,\nthe formation of\n$Rf_{(x)*}({\\cal F}|_{X_{(x)}})$\ncommutes with finite base change\nfor every geometric point $x$ of $X$\nwhere $f_{(x)}\\colon X_{(x)}\\to S_{(s)}$\nis the morphism on the strict localizations\ninduced by $f$.\nThus, for every finite morphism\n$T\\to S$ and every point $x\\gets u$\nof $X_T\\overset\\gets\\times_TT$,\nthe upper arrow in (\\ref{eqSTft})\nis an isomorphism.\nHence\nthe formation of $R\\Psi_f{\\cal F}$\ncommutes with finite base change $T\\to S$.\nFurther the vertical arrow in (\\ref{eqMf})\nis an isomorphism. 
Thus\nthe canonical morphism\n$p_1^*{\\cal F}\\to\nR\\Psi_f{\\cal F}$\nis an isomorphism\nfurther by \nthe commutativity of the diagram (\\ref{eqMf}).\n\nIf $p_1^*{\\cal F}\\to\nR\\Psi_f{\\cal F}$\nis an isomorphism,\nthe formation of\n$R\\Psi_f{\\cal F}$ commutes\nwith base change $T\\to S$\nif and only if\n$p_1^*{\\cal F}_{X_T}\\to\nR\\Psi_{f_T}{\\cal F}_{X_T}$\nis an isomorphism\nby the left square of (\\ref{eqbc}).\nHence we have\nan equivalence (2)$\\Leftrightarrow$(3).\nThe equivalence (1)$\\Leftrightarrow$(3)\nin the cases with resp.\\ follows\nimmediately from that without resp.\n\\qed}\n\n\n\n\\begin{pr}\\label{prM}\nLet $f\\colon X\\to S$\nbe a morphism of finite type of noetherian schemes\nand let $Z\\subset X$ be a closed\nsubscheme quasi-finite over $S$.\nLet ${\\cal F}$\nbe an object of\n$D^b_c(X)$\nsuch that the restriction of $f\\colon X\\to S$\nto the complement $X\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} Z$ is \n(resp.\\ universally) locally acyclic\nrelatively to the restriction of ${\\cal F}$.\n\n{\\rm 1. (cf.\\ {\\cite[Proposition 6.1]{Or}})}\n$R\\Psi_f{\\cal F}$ and $R\\Phi_f{\\cal F}$\nare constructible\nand their formations commute\nwith finite (resp.\\ arbitrary)\nbase change.\nThe constructible object \n$R\\Phi_f{\\cal F}$ is supported\non $Z\\overset \\gets\\times_SS$.\n\n{\\rm 2. }\nLet $x$ be a geometric point of $X$\nand $s=f(x)$ be the geometric point\nof $S$ defined by the image of $x$ by $f$.\nLet $t$ and $u$ be geometric points\nof $S_{(s)}$\nand $t\\gets u$ be a specialization. 
\nThen, there exists a distinguished triangle\n\\begin{equation}\n\\begin{CD}\n@>>>\nR\\Psi_f{\\cal F}_{x\\gets t}\n@>>>\nR\\Psi_f{\\cal F}_{x\\gets u}\n@>>>\n{\\displaystyle\n\\bigoplus_{z\\in (Z\\times_XX_{(x)})\n\\times_{S_{(s)}}t}}\nR\\Phi_f{\\cal F}_{z\\gets u}\n@>>>\n\\end{CD}\n\\label{eqDel}\n\\end{equation}\nwhere \n$R\\Psi_f{\\cal F}_{x\\gets t}\n\\to\nR\\Psi_f{\\cal F}_{x\\gets u}$\nis the cospecialization.\n\\end{pr}\n\nThe commutativity of the formation of $R\\Psi_f{\\cal F}$ with any base change implies\nits constructibility by \\cite[8.1, 10.5]{Or}\nas noted after \\cite[Theorem 1.3.1]{TS}.\n\n\\proof{\n1.\nThe constructibility \nis proved by\ntaking a compactification\nin \\cite[Proposition 6.1]{Or}.\nThe commutativity with base change\nis proved similarly by\ntaking a compactification\nand applying the proper base change\ntheorem.\n\nThe assertion on the support of $R\\Phi_f{\\cal F}$\nfollows from Proposition \\ref{lmapp}.2 (1)$\\Rightarrow$(2).\n\n2.\nLet $t$ and $u$\nbe geometric points of\n$S_{(s)}$ and $t\\gets u$ be a specialization.\nBy replacing $S$ by the strict localization\n$S_{(s)}$\nand shrinking $X$, \nwe may assume that $S=S_{(s)}$, that $X$ is affine\nand that $Z=Z\\times_XX_{(x)}$ is finite over $S$.\n\nWe consider the diagram\n$$\\begin{CD}\ns@. 
t\\\\\n@V{i_{s}}VV @VV{i_t}V\\\\\nS@>>\nR\\Gamma_{c}(X_s,\ni_s^*Rj_*\\Phi_{t\\gets u}{\\cal F})\n\\\\\n&@VVV@VVV\\\\\n\\bigoplus_{z\\in Z_t}\n\\Delta_z=\n&R\\Gamma(Z_t,\ni_t^*\\Phi_{t\\gets u} {\\cal F})\n@>>>\nR\\Gamma_c(X_t,\ni_t^*\\Phi_{t\\gets u} {\\cal F})\n\\end{CD}$$\nof isomorphisms\nand the assertion follows.\n\\qed}\n\n\\begin{cor}\\label{corsl}\nWe keep the assumptions in Proposition {\\rm \\ref{prM}}\nand \nlet $x$ be a geometric point of $X$\nand $s=f(x)$ be the geometric point\nof $S$ defined by the image of $x$ by $f$\nas in Proposition {\\rm \\ref{prM}.2}.\nThen, the restriction of\nthe constructible sheaf\n$R^q\\Psi_f{\\cal F}$\non $x\\overset\\gets\\times_SS\n=S_{(s)}$\nis locally constant \noutside the image of\nthe finite scheme $Z\\times_XX_{(x)}$\nfor every $q$.\n\\end{cor}\n\n\\proof{\nLet $t$ and $u$\nbe geometric points of $S_{(s)}$\nnot in the image of $Z\\times_XX_{(x)}$\nand $t\\gets u$ be a specialization.\nSince $R\\Psi_f{\\cal F}$\nis constructible,\nit suffices to show that\nthe cospecialization morphism\n$R\\Psi_f{\\cal F}_{x\\gets t}\n\\to\nR\\Psi_f{\\cal F}_{x\\gets u}$ is an isomorphism.\nThen by the assumption on\nthe local acyclicity,\nthe complex $\\Phi_{t\\gets u}{\\cal F}$ \nin the proof of Proposition \\ref{prM}.2 is acyclic.\nHence the assertion follows from (\\ref{eqDel})\nand the isomorphism\n$R\\Phi_f{\\cal F}_{z\\gets u}\\to \n(\\Phi_{t\\gets u}{\\cal F})_z$.\n\\qed}\n\n\\begin{cor}\\label{corsc}\nWe keep the assumptions in Proposition {\\rm \\ref{prM}}.\nWe further assume that \n${\\cal F}$ is of finite tor-dimension.\n\n{\\rm 1.}\nThe complexes\n$R\\Psi_f{\\cal F}$\nand \n$R\\Phi_f{\\cal F}$\nare of finite tor-dimension.\nConsequently,\nthe functions\n$\\dim R\\Psi_f{\\cal F}$\nand \n$\\dim R\\Phi_f{\\cal F}$\nare defined and constructible.\n\n{\\rm 2.}\nDefine a constructible\nfunction $\\delta_{\\cal F}$ \non $X\\overset\\gets\\times_SS$\nsupported on $Z\\overset \\gets\\times_SS$ \nby $\\delta_{\\cal F}(x\\gets 
t)=\n\\dim R\\Phi_f{\\cal F}_{x\\gets t}$.\nLet $\\Lambda_0$\ndenote the residue field\nof $\\Lambda$ and assume that\n$R\\Phi_f{\\cal F}\n\\otimes^L_{\\Lambda}\\Lambda_0$ \nis acyclic except at degree $0$.\nThen, we have $\\delta_{\\cal F}\\geqq 0$\nand the equality $\\delta_{\\cal F}=0$\nis equivalent to the condition that\nthe morphism $f$ is (resp.\\ universally) locally acyclic\nrelatively to ${\\cal F}$.\n\\end{cor}\n\n\\proof{\n1. By Proposition \\ref{prM}.1,\nProposition \\ref{lmapp}.1\nand Lemma \\ref{lmctf}.1,\nthe complex\n$R\\Psi_f{\\cal F}$\nis of finite tor-dimension\nand hence\n$R\\Phi_f{\\cal F}$\nis also of finite tor-dimension.\nSince they are constructible \nby Proposition \\ref{prM}.1,\nthe functions\n$\\dim R\\Psi_f{\\cal F}$\nand \n$\\dim R\\Phi_f{\\cal F}$\nare defined and constructible.\n\n\n2.\nThe positivity $\\delta_{\\cal F}\\geqq 0$\nfollows from \nthe assumption that\n$R\\Phi_f{\\cal F}$ is acyclic except at degree $0$.\nFurther the equality $\\delta_{\\cal F}=0$\nis equivalent to \n$R\\Phi_f{\\cal F}=0$.\nSince the formation of \n$R\\Psi_f{\\cal F}$\ncommutes with finite (resp.\\ arbitrary) \nbase change by Proposition \\ref{prM}.1,\nit is further\nequivalent to the condition that\nthe morphism $f$ is (resp.\\ universally) locally acyclic\nrelatively to ${\\cal F}$\nby Proposition \\ref{lmapp}.2.\n\\qed}\n\n\\begin{lm}\\label{lmP}\nAssume that\n$\\Lambda=\\Lambda_0$\nis a field.\nThen, the assumption \nthat $R\\Phi_f{\\cal F}$ \nis acyclic except at degree $0$\nin Corollary {\\rm \\ref{corsc}.2}\nis satisfied if the following conditions\nhold:\nThe scheme $S$ is noetherian,\nthe restriction of $f\\colon X\\to S$\nto $X\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} Z$ is \nuniversally locally acyclic\nrelatively to the restriction of ${\\cal F}$\nand the following condition {\\rm (P)}\nis satisfied.\n\n{\\rm (P)}\nFor every morphism\n$T\\to S$ from the spectrum $T$ of\na discrete valuation ring,\nthe pull-back of ${\\cal 
F}[1]$\nto $X_T$ is perverse.\n\\end{lm}\n\n\\proof{\nLet $x\\gets t$ be a point of\n$X\\overset \\gets\\times_SS$ \nand let $T\\to S$ be a morphism\nfrom the spectrum $T$ of\na discrete valuation ring\nsuch that the image of $T\\to S$\nis the same as that of $\\{f(x),t\\}$.\nSince the formation of $R\\Phi_f{\\cal F}$ commutes\nwith arbitrary base change\nby Proposition \\ref{prM}.1,\nthe base change morphism\n$R\\Phi_f{\\cal F}_{x\\gets t}\n\\to R\\Phi_{f_T}({\\cal F}|_{X_T})_{x\\gets t}$\nis an isomorphism.\nThe complex\n$R\\Phi_{f_T}({\\cal F}|_{X_T})$ \nis a perverse sheaf \nby the assumption (P)\nand by the theorem of Gabber\n\\cite[Corollaire 4.6]{au}.\nSince \n$R\\Phi_{f_T}({\\cal F}|_{X_T})$ \nvanishes outside the closed fiber $Z_s$,\nthis implies that\nthe complex $R\\Phi_f{\\cal F}$\nis acyclic except at degree $0$.\n\\qed}\n\\medskip\n\nThe condition {\\rm (P)}\nis satisfied\nif $f\\colon X\\to S$ is smooth of relative dimension $d$\nand\n${\\cal F}=j_!{\\cal G}[d]$\nfor the open immersion $j\\colon U\\to X$\nof the complement $U=X\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} D$\nof a Cartier divisor $D$\nand a locally constant constructible \nsheaf ${\\cal G}$\nof free $\\Lambda$-modules on $U$.\n\n\\subsection{Semi-continuity of the Swan conductor}\\label{sssc}\n\nIn this subsection,\nwe assume that $\\Lambda=\\Lambda_0$ is a field\nfor simplicity.\nIf $\\Lambda$ is not a field,\nthe same results hold without modifications\nfor constructible complexes of\n$\\Lambda$-modules\nof finite tor-dimension,\nby taking \n$\\otimes^L_{\\Lambda}\\Lambda_0$.\n\n\nWe reformulate the main result of\nDeligne-Laumon in \\cite{DL}\nin Proposition \\ref{prDL} below.\nLet $f\\colon X\\to S$\nbe a flat morphism of relative dimension $1$\nand let $Z\\subset X$ be a closed subscheme.\nAssume that \n$X\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} Z$ is smooth over $S$\nand that $Z$ is quasi-finite over $S$.\nLet ${\\cal F}$ be a constructible\ncomplex of $\\Lambda$-modules on $X$ 
such that\nthe restrictions of the cohomology sheaves\non $X\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} Z$ are locally constant.\n\nLet $s\\to S$ be a geometric point\nsuch that the residue\nfield is an {\\em algebraic closure}\nof the residue field of the image of $s$\nin $S$.\nFor a geometric point \n$x$ of $Z$ above $s$,\nthe normalization of\nthe strict localization\n$X_{s,(x)}$ is the finite disjoint\nunion $\\amalg_i{\\rm Spec}\\ {\\cal O}_{K_i}$ \nwhere ${\\cal O}_{K_i}$ are strictly local discrete valuation\nrings with algebraically\nclosed residue field $k(s)$.\nLet $\\bar\\eta_i$ denote\nthe geometric point\ndefined by a separable closure $\\bar K_i$ of\nthe fraction field $K_i$.\nFor a $\\Lambda$-representation $V$\nof the absolute Galois group $G_{K_i}\n={\\rm Gal}(\\bar K_i\/K_i)$,\nthe Swan conductor\n${\\rm Sw}_{K_i}V\\in {\\mathbf N}$\nis defined \\cite{DL}\nand the total dimension is\ndefined as the sum\n$\\dim{\\rm tot}_{K_i}V\n=\\dim V\n+{\\rm Sw}_{K_i}V$.\n\nThe stalk\n${\\cal H}^q({\\cal F})_{{\\bar \\eta}_i}$\nfor each integer $q$\ndefines a $\\Lambda$-representation\nof the absolute Galois group $G_{K_i}$\nand hence the total dimension\n$\\dim{\\rm tot}_{K_i}{\\cal F}_{{\\bar \\eta}_i}$\nis defined as the alternating sum\n$\\sum_q(-1)^q\n\\dim{\\rm tot}_{K_i}{\\cal H}^q({\\cal F})_{{\\bar \\eta}_i}$.\nWe define the Artin conductor by\n\\begin{equation}\na_x({\\cal F}|_{X_s})\n=\n\\sum_i\n\\dim{\\rm tot}_{K_i}{\\cal F}_{{\\bar \\eta}_i}\n-\n\\dim{\\cal F}_x.\n\\label{eqax}\n\\end{equation}\nWe define \na function $\\varphi_{\\cal F}$\non $X$ supported on $Z$ by\n\\begin{equation}\n\\varphi_{\\cal F}(x)=\na_x({\\cal F}|_{X_s})\n\\label{eqphis}\n\\end{equation}\nfor $s=f(x)$.\nThe derivative $\\delta(\\varphi_{\\cal F})$\non $X\\overset\\gets\\times_SS$\nis defined by (\\ref{eqdel}).\n\n\n\\begin{pr}[{\\cite[Th\\'eor\\`eme 2.1.1]{DL}}]\\label{prDL}\nLet $S$ be a noetherian scheme\nand \n$f\\colon X\\to S$\nbe a flat morphism of relative dimension 
$1$.\nLet $Z\\subset X$\nbe a closed subscheme quasi-finite\nover $S$\nsuch that $U=X\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} Z$ is smooth over $S$.\nLet ${\\cal F}$ be a constructible\ncomplex of $\\Lambda$-modules \non $X$ such that\nthe restrictions of cohomology sheaves\non $X\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} Z$ are locally constant.\n\n{\\rm 1.}\nThe objects $R\\Psi_f{\\cal F}$\nand $R\\Phi_f{\\cal F}$\nare constructible\nand their formations commute\nwith any base change.\nThe function\n$\\varphi_{\\cal F}$ {\\rm (\\ref{eqphis})}\nsatisfies\n\\begin{equation}\n\\dim R\\Phi_f{\\cal F}_{x\\gets t}=\n\\delta(\\varphi_{\\cal F})(x\\gets t)\n\\label{eqDL}\n\\end{equation}\nand is constructible.\n\n{\\rm 2.}\nAssume ${\\cal F}=j_!{\\cal G}[1]$\nfor the open immersion \n$j\\colon U=X\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} Z\\to X$\nand a locally constant constructible \nsheaf ${\\cal G}$\non $U$\nand that $Z$ is flat over $S$.\nThen, we have $\\delta(\\varphi_{\\cal F})\\geqq 0$.\nThe function $\\varphi_{\\cal F}$\nis {\\em flat} over $S$\nif and only if\n$f\\colon X\\to S$ is universally locally acyclic\nrelatively to ${\\cal F}=j_!{\\cal G}[1]$.\n\\end{pr}\n\n\\proof{We sketch and\/or recall an outline of\nthe proof with some modifications.\n\n{\\rm 1.}\nThe constructibility of $R\\Psi_f{\\cal F}$\nand $R\\Phi_f{\\cal F}$\nand the commutativity with base\nchange follow\nfrom Proposition \\ref{prM}.1\nand the local acyclicity of smooth morphisms.\n\nBy devissage, the proof of \n(\\ref{eqDL}) is reduced to \nthe case where ${\\cal F}=j_!{\\cal G}[1]$\nfor the open immersion \n$j\\colon U=X\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} Z\\to X$\nand a locally constant sheaf ${\\cal G}$ on $U$.\nBy the commutativity with base\nchange,\nthe equality (\\ref{eqDL})\nis reduced to the case where $S$\nis the spectrum of a complete\ndiscrete valuation ring\nwith algebraically closed residue field.\nFurther by base change\nand the normalization, we may assume \nthat 
$X$ is normal\nand that its generic fiber is smooth.\nBy devissage, we may assume\nthat $Z$ is flat over $S$.\n\nIn this case, (\\ref{eqDL})\nwas first proved in \\cite{DL}, under an\nextra assumption that $X$ is smooth,\nby constructing a\ngood compactification using a deformation argument.\nLater it was reproved together with a generalization\n in \\cite[Remark (4.6)]{Kato}\nusing the semi-stable reduction theorem\nof curves\nwithout using the deformation argument.\n\nSince $R\\Phi_f{\\cal F}$ is constructible,\nthe equality (\\ref{eqDL}) implies that\nthe function $\\delta(\\varphi_{\\cal F})$\non $Z\\overset\\gets\\times_SS$\nis constructible.\nHence $\\varphi_{\\cal F}$\non $Z$ is constructible\nby Lemma \\ref{lmscZ}.2.\n\n2.\nThe complex $R\\Phi_f{\\cal F}$\nis acyclic except at degree $0$\nby Lemma \\ref{lmP}.\nHence the assertions follow\nfrom the equality (\\ref{eqDL}) \nand Corollary \\ref{corsc}.2.\n\\qed}\n\n\\begin{cor}\\label{corDL}\nAssume further that $Z$ is finite \nand flat over $S$\nand that ${\\cal F}=j_!{\\cal G}$\nfor a locally constant sheaf ${\\cal G}$\non $U$.\nThen, \nthe function $f_*\\varphi_{\\cal F}$ \n{\\rm (\\ref{eqf*})} on $S$\nis lower semi-continuous.\nThe function $f_*\\varphi_{\\cal F}$\nis locally constant if and only if\n$f\\colon X\\to S$ is universally locally acyclic\nrelatively to ${\\cal F}=j_!{\\cal G}$.\n\\end{cor}\n\n\\proof{\nIt follows from Proposition \\ref{prDL},\nLemma \\ref{lmscZ}.4 and Corollary \\ref{corsc}.2.\nThe lower semi-continuity\nreplaces the upper semi-continuity\nbecause of the shift $[1]$\nin Proposition \\ref{prDL}.2.\n\\qed}\n\\medskip\n\nWe give a slight generalization\nof Proposition \\ref{prDL}\nusing vanishing topos.\nLet \n\\begin{equation}\\xymatrix{\nZ\\ar[r]^{\\subset}&\nX\\ar[rr]^f\\ar[rd]_p&&\nY\\ar[ld]^g\\\\\n&&S}\n\\label{eqfpg}\n\\end{equation}\nbe a commutative diagram of morphisms\nof finite type of noetherian schemes\nsuch that\n$g\\colon Y\\to S$\nis flat of relative dimension 
$1$\nand that $Z\\subset X$ \nis a closed subscheme\nquasi-finite over $S$.\nFor a geometric point\n$x\\to X$,\nwe set $y=f(x)$ and $s=p(x)$\nand define \n$T_{(x)}\\subset Y_{(y)}$ to be the image\nof the finite scheme\n$Z\\times_XX_{(x)}$ over $S_{(s)}$\nby $f_{(x)}\\colon X_{(x)}\\to Y_{(y)}$.\nAssume that, for every geometric point\n$x\\to X$,\nthe complement\n$Y_{(y)}\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} T_{(x)}$\nis essentially smooth over $S_{(s)}$.\n\nLet ${\\cal K}$\nbe an object of\n$D^b_c(X\\overset \\gets\\times_YY)$\nsuch that, \nfor every geometric point\n$x\\to X$, the restrictions of\ncohomology sheaves on\n$Y_{(y)}\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} T_{(x)}\n\\subset Y_{(y)}=x\\overset \\gets\\times_YY$ \nare locally constant.\nThen, similarly as (\\ref{eqphis}),\nwe define \na function $\\psi_{\\cal K}$\non $X\\overset\\gets\\times_YY$ by\n\\begin{equation}\n\\psi_{\\cal K}(x\\gets w)=\na_w({\\cal K}|_{Y_{(y)}\\times_{S_{(s)}}t})\n\\label{eqphi2}\n\\end{equation}\nwhere $y=f(x),s=p(x)$ and $t=g(w)$\nwith {\\em algebraically closed} residue field $k(t)$.\nWe also define a function $\\delta(\\psi_{\\cal K})$\non $X\\overset\\gets\\times_SS$ by\n(\\ref{eqdelY}).\n\n\\begin{pr}\\label{prc}\nLet the notation be as above.\nLet ${\\cal K}$\nbe an object of\n$D_c^b(X\\overset \\gets\\times_YY)$\nand $x\\gets t$ be a point of\n$X\\overset \\gets\\times_SS$.\nSet $y=f(x)$ and $s=p(x)$\nand assume that the restriction of\nthe cohomology sheaf ${\\cal H}^q{\\cal K}$\non $Y_{(y)}\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} T_{(x)}$ \nis locally constant for every $q$.\nThen, we have\n\\begin{equation}\n\\dim R\\overset\\gets g_*{\\cal K}_{x\\gets t}-\n\\dim R\\overset\\gets g_*{\\cal K}_{x\\gets s}=\n\\delta(\\psi_{\\cal K})(x\\gets t).\n\\label{eqDL2}\n\\end{equation}\n\\end{pr}\n\n\\proof{\nBy the canonical isomorphisms\n$R\\overset\\gets g_*{\\cal K}_{x\\gets t}\n\\to\nR\\Gamma(Y_{(y)}\\times_{S_{(s)}}S_{(t)},\n{\\cal 
K}|_{Y_{(y)}\\times_{S_{(s)}}S_{(t)}})$\nand\n$R\\overset\\gets g_*{\\cal K}_{x\\gets s}\n\\to\nR\\Gamma(Y_{(y)},\n{\\cal K}|_{Y_{(y)}})={\\cal K}_y$\n(\\ref{eqMt}),\nwe obtain a distinguished triangle\n$\\to\nR\\overset\\gets g_*{\\cal K}_{x\\gets s}\n\\to\nR\\overset\\gets g_*{\\cal K}_{x\\gets t}\n\\to\nR\\Phi_{g_{(y)}}({\\cal K}|_{Y_{(y)}})_{y\\gets t}\n\\to $.\nHence the assertion follows\nfrom Proposition \\ref{prDL}.1.\n\\qed}\n\\medskip\n\n\nIn fact, (\\ref{eqDL})\nis a special case of (\\ref{eqDL3}) below\nwhere $X=Y$.\n\n\\begin{cor}\\label{corc}\nWe keep the notation in Proposition {\\rm \\ref{prc}}.\nLet ${\\cal F}$ be an object of\n$D^b_c(X)$ such that ${\\cal K}=\nR\\Psi_f{\\cal F}$ is an object of\n$D^b_c(X\\overset\\gets\\times_YY)$.\nAssume that ${\\cal K}$\nand a point $x\\gets t$ of\n$X\\overset \\gets\\times_SS$\nsatisfy the condition in Proposition \n{\\rm \\ref{prc}}.\nThen, we have\n\\begin{equation}\n\\dim R\\Phi_p{\\cal F}_{x\\gets t}\n=\n\\delta(\\psi_{\\cal K})(x\\gets t).\n\\label{eqDL3}\n\\end{equation}\n\\end{cor}\n\n\\proof{\nBy the isomorphisms\n$R\\Psi_p{\\cal F}\n\\to R\\overset\\gets g_*{\\cal K}$\nand $R\\overset\\gets g_*{\\cal K}_{x\\gets s}\n\\to {\\cal K}_y\\to {\\cal F}_x$,\nwe obtain a distinguished triangle\n$\\to R\\overset\\gets g_*{\\cal K}_{x\\gets s}\n\\to R\\overset\\gets g_*{\\cal K}_{x\\gets t}\n\\to R\\Phi_p{\\cal F}_{x\\gets t}\\to$.\nHence the assertion follows from\n(\\ref{eqDL2}).\n\\qed}\n\\medskip\n\nWe consider the diagram (\\ref{eqfpg})\nsatisfying the condition there\nand assume further that $g\\colon Y\\to S$ \nis smooth.\nLet ${\\cal F}$\nbe an object of\n$D^b_c(X)$ and \nassume that $p\\colon X\\to S$\nis locally acyclic\nrelatively to ${\\cal F}$\nand that the restriction of $f\\colon X\\to Y$\nto the complement $X\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} Z$ is\nlocally acyclic\nrelatively to the restriction of ${\\cal F}$.\n\nWe define a function \n$\\varphi_{{\\cal F},f}$\non $Z$ as follows.\nFor a geometric 
point\n$x$ of $Z$,\nset $y=f(x)$ and $s=p(x)$.\nWe regard $s$ as a geometric point\nof $S$ such that the residue\nfield is an {\\em algebraic closure}\nof the residue field of the image of $s$\nin $S$.\nThe base change \n$f_s\\colon X_s\\to Y_s$\nof $f\\colon X\\to Y$\nis a morphism to a smooth curve\nover the algebraically closed field $k(s)$.\nThe strict localization\n$Y_{s,(y)}$ is ${\\rm Spec}\\ {\\cal O}_{K_y}$\nfor a strictly local\ndiscrete valuation ring ${\\cal O}_{K_y}$\nwith an algebraically closed\nresidue field $k(y)=k(s)$\nsince $Y_s$ is a smooth curve over $s$.\nThe cohomology groups of the \nstalk of the vanishing cycles complex\n$\\phi_x({\\cal F}|_{X_s},f_s)$\ndefine $\\Lambda$-representations\nof the absolute Galois group\n$G_{K_y}$ and hence the total dimension\n$\\dim{\\rm tot}_y\n\\phi_x({\\cal F}|_{X_s},f_s)$\nis defined as the alternating sum.\nSimilarly as (\\ref{eqphis}),\nwe define a function \n$\\varphi_{{\\cal F},f}$\non $Z$ by\n\\begin{equation}\n\\varphi_{{\\cal F},f}(x)\n=\n\\dim{\\rm tot}_y\n\\phi_x({\\cal F}|_{X_s},f_s).\n\\label{eqphKf}\n\\end{equation}\n\n\n\\begin{pr}\\label{prMsc}\nLet \n\\begin{equation*}\\xymatrix{\nZ\\ar[r]^{\\subset}&\nX\\ar[rr]^f\\ar[rd]_p&&\nY\\ar[ld]^g\\\\\n&&S}\n\\leqno{\\rm (\\ref{eqfpg})}\n\\end{equation*}\nbe a commutative diagram of morphisms\nof finite type of noetherian schemes\nsuch that\n$g\\colon Y\\to S$\nis {\\em smooth} of relative dimension $1$\nand that $Z\\subset X$ \nis a closed subscheme\nquasi-finite over $S$.\n\nLet ${\\cal F}$\nbe an object of\n$D^b_c(X)$ and \nassume that $p\\colon X\\to S$ is\nlocally acyclic\nrelatively to ${\\cal F}$\nand that the restriction of $f\\colon X\\to Y$\nto the complement $X\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} Z$ is\nlocally acyclic\nrelatively to the restriction of ${\\cal F}$.\nThen, the function \n$\\varphi_{{\\cal F},f}$ \n{\\rm (\\ref{eqphKf})} on $Z$\nis constructible and {\\em flat} over $S$.\nIf $Z$ is \\'etale over $S$,\nit is locally 
constant.\n\\end{pr}\n\n\n\\proof{\nLet $x$ be a geometric point of\n$Z$ and let $y=f(x)$ and\n$s=p(x)$ be its images.\nWe regard $s$ as a geometric point\nof $S$ such that the residue\nfield is an {\\em algebraic closure}\nof the residue field of the image of $s$\nin $S$.\nLet $Y_{s,{(y)}}={\\rm Spec}\\ {\\cal O}_{K_y}$\nbe the strict localization\nof the geometric fiber and let\n$x\\gets u$ be the point\nof $X\\overset\\gets \\times_YY$\ndefined by a separable closure\nof $K_y$.\nThe complex $R\\Phi_f{\\cal F}$\nis constructible\nand its construction commutes\nwith base change\nby Proposition \\ref{prM}.1.\nHence, we have a canonical\nisomorphism\n$\\phi_x({\\cal F}|_{X_s},f_s)\n\\to\nR\\Phi_f{\\cal F}_{x\\gets u}$\nand\n\\begin{equation}\n\\varphi_{{\\cal F},f}(x)\n=\n\\dim{\\rm tot}_y\n\\phi_x({\\cal F}|_{X_s},f_s)\n=\n\\dim{\\rm tot}_yR\\Phi_f{\\cal F}_{x\\gets u}.\n\\label{eqphw}\n\\end{equation}\n\nWe apply Proposition \\ref{prc} to\n${\\cal K}=R\\Psi_f{\\cal F}$.\nThe assumption in Proposition \\ref{prc} that\n${\\cal H}^q{\\cal K}\n=R^q\\Psi_f{\\cal F}$\non\n$Y_{(y)}\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} T_{(x)}\\subset\nY_{(y)}=x\\overset \\gets\\times_YY$\nis locally constant for every $q$\nis satisfied\nfor every geometric point $x$ of $X$\nby Corollary \\ref{corsl}.\nHence the function \n$\\psi_{\\cal K}$ (\\ref{eqphi2})\nfor ${\\cal K}=R\\Psi_f{\\cal F}$\nis defined as a function\non $X\\overset\\gets\\times_YY$.\n\nIn order to apply Lemma \\ref{lmdelp}, \nwe show\n$$\\psi_{\\cal K}(x\\gets w)\n=\\sum_{z\\in Z_{(x)}\\times_{Y_{(y)}}w}\n\\varphi_{{\\cal F},f}(z)$$\nfor a point $x\\gets w$\nof $Z\\overset\\gets\\times_YY$\nsuch that $w$ is supported on \nthe image $T_{(x)}\\subset Y_{(y)}$\nof $Z_{(x)}$.\nBy the assumption that $Y\\to S$ is\nsmooth, the fiber\n$Y_{(w)}\\times_{S_{(t)}}t$\nfor $t=p(w)$\nis the spectrum of a discrete valuation ring.\nLet $v$ be its geometric generic point\nregarded as a geometric point of\n$Y_{(w)}$.\n\nBy 
(\\ref{eqphi2}) and (\\ref{eqax}), we have\n$$\\psi_{\\cal K}(x\\gets w)\n=\n\\dim{\\rm tot}_w(R\\Psi_f{\\cal F}_{x\\gets v})\n-\n\\dim (R\\Psi_f{\\cal F}_{x\\gets w}).$$\nWe apply Proposition \\ref{prM}.2\nto $f\\colon X\\to Y$\nand specializations $y\\gets w\\gets v$\nto compute the right hand side.\nThen, the distinguished triangle (\\ref{eqDel})\nimplies that the right hand side equals\n$\\sum_{z\\in Z_{(x)}\\times_{Y_{(y)}}w}\n\\dim{\\rm tot}_w(R\\Phi_f{\\cal F}_{z\\gets v})$.\nBy (\\ref{eqphw})\napplied for $z$,\nit is further equal to $\n\\sum_{z\\in Z_{(x)}\\times_{Y_{(y)}}w}\n\\varphi_{{\\cal F},f}(z)\n$ as required.\n\nTherefore, by applying Lemma \\ref{lmdelp}, \nwe obtain\n$\\delta(\\psi_{\\cal K})=\n\\delta(\\varphi_{{\\cal F},f})$\nas functions on $Z\\overset\\gets\\times_SS$.\nSince $R\\Phi_p{\\cal F}=0$,\nthe function $\\psi_{\\cal K}$ is flat over $S$\nby (\\ref{eqDL3}).\nHence, \nthe function $\\varphi_{{\\cal F},f}$ is also flat\nover $S$.\nSince it is flat over $S$, \nthe function $\\varphi_{{\\cal F},f}$ is \nconstructible by Lemma \\ref{lmscZ}.2.\n\n\nIf $Z$ is \\'etale over $S$, the function\n$\\varphi_{{\\cal F},f}$ is locally constant by\nLemma \\ref{lmscZ}.1.\n\\qed}\n\n\\section{Closed conical subsets\non the cotangent bundle}\\label{sCc}\n\nIn this preliminary section,\nwe recall and study some notions\nintroduced in \\cite{Be}\nrelated to closed conical subsets\nof the cotangent bundles.\n\n\n\\subsection{$C$-transversality}\\label{ssCt}\n\n\n\nWe say that a closed subset \n$C\\subset E$ of a vector bundle\n$E$ over $X$ is {\\em conical} if it is stable\nunder the action of \nthe multiplicative group ${\\mathbf G}_m$.\nEquivalently,\nit is defined by a graded ideal ${\\cal I}$\nof the graded algebra $S^\\bullet_{{\\cal O}_X} \n{\\cal E}^\\vee$,\nif the vector bundle $E={\\mathbf V}({\\cal E})$ \nis associated to a locally free \n${\\cal O}_X$-module ${\\cal E}$.\nFor a closed conical subset\n$C\\subset E$,\nwe call its intersection\n$B$ 
with the $0$-section\nregarded as a closed\nsubset of $X$ the {\\em base} of $C$.\n\nLet \n${\\mathbf P}(C)={\\rm Proj}_X\n(S^\\bullet {\\cal E}^\\vee\/{\\cal I})\n\\subset {\\mathbf P}(E)={\\rm Proj}_X\nS^\\bullet {\\cal E}^\\vee$\ndenote the projectivization.\nThe projectivization\n${\\mathbf P}(C)$ is empty if and\nonly if $C$ is a subset of the\n$0$-section.\nThe projectivization \n${\\mathbf P}(C)$ itself does not\ndetermine $C$ \nbut the pair with the base\n$B$ determines $C$ uniquely.\n\nWe study the intersection of \na closed conical subset \nwith the inverse image of the 0-section\nby a morphism of vector bundles.\nFor a morphism $X\\to Y$ \nof finite type of locally noetherian schemes\nand a closed subset $Z\\subset X$,\nwe say that the restriction of $X\\to Y$ on $Z$\nis {\\em finite} (resp.\\ {\\em proper}) \nif for every closed subscheme\nstructure or equivalently\nfor the reduced closed subscheme\nstructure on $Z$,\nthe induced morphism\n$Z\\to Y$ is finite (resp.\\ proper).\n\n\n\\begin{lm}[{\\rm \\cite[Lemma 1.2 (ii)]{Be}}]\\label{lmnc}\nLet $E\\to F$ be \na morphism of vector bundles\nover a locally noetherian scheme $X$.\nFor a closed conical subset $C\n\\subset E$,\nthe following\nconditions are equivalent:\n\n\n{\\rm (1)}\nThe intersection of $C$\nwith the inverse image $K$ of\nthe $0$-section of $F$ by\n$E\\to F$ is a subset of\nthe $0$-section of $E$.\n\n{\\rm (2)}\nThe restriction of\n$E\\to F$ on $C$ is finite.\n\\end{lm}\n\n\\proof{\n(1)$\\Rightarrow$(2):\nSince the question is local on \n$X$, we may assume $X$ is\naffine.\nThen the assertion\nfollows from the elementary Lemma below.\n\n(2)$\\Rightarrow$(1):\nThe intersection $C\\cap K\\subset E$ is \na closed conical subset\nfinite over $X$.\nHence, it is a subset of the\n0-section.\n\\qed}\n\n\\begin{lm}\\label{lmRI}\nLet $R=\\bigoplus_{n\\geqq 0}\nR_n$ be a graded ring\nand $I=\\bigoplus_{n> 0}\nR_n\\subset R$ be the graded ideal.\nLet $M=\\bigoplus_{n\\geqq 0}\nM_n$ be a graded 
$R$-module.\nIf the $R\/I$-module\n$M\/IM$ is finitely generated,\nthen the $R$-module\n$M$ is finitely generated.\n\\end{lm}\n\n\\proof{\nLet $x_1,\\ldots,x_m\\in M$\nbe a lifting of a system of generators \nof $M\/IM$ consisting\nof homogeneous elements\nand let $N\\subset M$ be \nthe sub $R$-module\ngenerated by $x_1,\\ldots,x_m$.\nBy induction on $n$,\nthe morphism\n$N\/I^nN\\to M\/I^nM$ is \nsurjective for every $n\\geqq 0$.\nThus, we have $M_n\\subset N$\nfor every $n\\geqq 0$\nand the assertion follows.\n\\qed}\n\\medskip\n\n\n\nIn the rest of this article,\n$k$ denotes a field of characteristic \n$p\\geqq 0$\nand $X$ denotes a smooth scheme over $k$,\nunless otherwise stated.\nThe cotangent bundle\n$T^*X$ is the covariant\nvector bundle over $X$\nassociated to the locally free ${\\cal O}_X$-module\n$\\Omega^1_{X\/k}$.\nThe $0$-section of $T^*X$ is \nidentified with the conormal bundle\n$T^*_XX$ of $X\\subset X$.\n\nFor a closed conical subset\n$C\\subset T^*X$, \nwe define the condition for\na morphism coming into $X$\nto be $C$-transversal.\n\n\\begin{df}[{\\rm \\cite[1.2]{Be}}]\\label{dfCh}\nLet $X$ be a smooth scheme \nover a field $k$\nand let $C\\subset T^*X$ be a closed conical subset of \nthe cotangent bundle.\nLet $h\\colon W\\to X$ be a \nmorphism of smooth schemes over $k$.\nDefine \n\\begin{equation}\nh^*C=W\\times_XC\\subset W\\times_XT^*X\n\\label{eqhC}\n\\end{equation}\nto be the pull-back of $C$ and\nlet $K\\subset W\\times_XT^*X$ \nbe the inverse image of the $0$-section \nby the canonical morphism\n$dh\\colon W\\times_XT^*X\\to T^*W$.\n\n{\\rm 1.}\nFor a point $w\\in W$,\nwe say that $h\\colon W\\to X$\nis {\\em $C$-transversal} at $w$\nif the fiber $(h^*C\\cap K)\\times_Ww$\nof the intersection \nis a subset of the $0$-section\n$W\\times_XT^*_XX\\subset W\\times_XT^*X$.\n\nWe say that $h\\colon W\\to X$\nis {\\em $C$-transversal}\nif the intersection $h^*C\\cap K$\nis a subset of the $0$-section\n$W\\times_XT^*_XX\\subset 
W\\times_XT^*X$.\n\n{\\rm 2.}\nIf $h\\colon W\\to X$\nis $C$-transversal,\nwe define a closed conical subset\n\\begin{equation}\nh^{\\circ}C\\subset T^*W\n\\label{eqhoC}\n\\end{equation}\nto be the image of\n$h^*C$ by $W\\times_XT^*X\\to T^*W$.\n\\end{df}\n\nBy Lemma {\\rm \\ref{lmnc}},\nif $h\\colon W\\to X$ is $C$-transversal,\nthen $h^{\\circ}C$ is a closed conical subset\nof $T^*W$.\n\n\\begin{lm}\\label{lmCh}\nLet $X$ be a smooth scheme \nover a field $k$\nand let $C\\subset T^*X$ be a closed conical subset of \nthe cotangent bundle.\nLet $h\\colon W\\to X$ be a morphism \nof smooth schemes over $k$.\n\n\n{\\rm 1.}\nIf $h$ is smooth,\nthen $h$ is $C$-transversal\nand the canonical morphism\n$h^{*}C\\to h^{\\circ}C$ is an isomorphism.\n\n{\\rm 2.}\nIf $C\\subset T^{*}_{X}X$ is \na subset of the $0$-section,\nthen $h$ is $C$-transversal.\n\n{\\rm 3.}\n{\\rm (cf.\\ \\cite[Lemma 2.2 (i)]{Be})}\nFor a morphism $g\\colon V\\to W$\nof smooth schemes over $k$,\nthe following conditions are equivalent:\n\n{\\rm (1)}\n$h$ is $C$-transversal\non a neighborhood of\n$g(V)\\subset W$ and \n$g\\colon V\\to W$ is\n$h^{\\circ}C$-transversal.\n\n{\\rm (2)}\nThe composition $h\\circ g\\colon V\\to X$\nis $C$-transversal.\n\n{\\rm 4.} {\\rm (\\cite[Lemma 1.2 (i)]{Be})}\nThe subset of $W$ consisting of\npoints $w\\in W$ where\n$h\\colon W\\to X$ is $C$-transversal\nis an open subset of $W$.\n\n{\\rm 5.}\nLet $D=\\bigcup_{i=1}^mD_i$\nbe a divisor with simple normal crossings\nof $X$ relatively to $X\\to {\\rm Spec}\\ k$\nand let \n\\begin{equation}\nC_D=\\bigcup_{I\\subset\\{1,\\ldots,m\\}}\nT^*_{D_I}X\\subset T^*X\n\\label{eqCD}\n\\end{equation}\nbe the union of the conormal bundles\nof the intersections\n$D_I=\\bigcap_{i\\in I}D_i$\nof irreducible components\nfor all subsets $I\\subset\\{1,\\ldots,m\\}$\nof indices, including $D_\\varnothing=X$.\nThen, $h\\colon W\\to X$ is $C$-transversal\nif and only if\n$h^*D=D\\times_XW$\nis a divisor with simple normal crossings\nrelatively to 
$W\\to {\\rm Spec}\\ k$\nand \n$h^*D_i\\subset W$ are smooth divisors\nfor $i=1,\\ldots,m$.\n\\end{lm}\n\nAssertion 3 implies that\n$C$-transversal morphisms\nhave properties similar to those of\n\\'etale morphisms.\nFor ${\\cal F}$-transversal morphisms \nintroduced in Definition \\ref{dfprpg},\nproperties corresponding to 1--3\nwill be proved \nin Lemma \\ref{lmprpg}.\n\n\\proof{\n1. \nSince\nthe canonical morphism\n$dh\\colon W\\times_XT^*X\\to T^*W$\nis a closed immersion,\nthe inverse image\n$K\\subset W\\times_XT^*X$\nof the $0$-section equals the $0$-section;\nhence the intersection $h^*C\\cap K$\nis a subset of the $0$-section and\nthe morphism $h^*C\\to h^{\\circ}C$ is\nan isomorphism.\n\n\n2. \nSince $h^*C\\subset\nW\\times_XT^*X$ is a subset of \nthe $0$-section,\nthe assertion follows.\n\n3.\nSince \n$V\\times_XT^*X\\to T^*V$\nis the composition\n$V\\times_XT^*X\\to V\\times_WT^*W\\to T^*V$,\nthe condition (2) is equivalent to\nthe condition that both \nthe intersection of\n$(hg)^*C=g^*(h^*C)\n\\subset V\\times_XT^*X$ with the\ninverse image of the $0$-section by\n$V\\times_XT^*X\\to V\\times_WT^*W$\nand the intersection of the image of\n$(hg)^*C$ in\n$V\\times_WT^*W$\nwith the\ninverse image of the $0$-section by\n$V\\times_WT^*W\\to T^*V$\nare subsets of the $0$-sections.\nBy 4 below,\nthe first condition means that\n$h$ is $C$-transversal\non a neighborhood of\n$g(V)$ and then the second means that\n$g\\colon V\\to W$ is\n$h^{\\circ}C$-transversal.\n\n4.\nThe complement of the subset is\nthe image of the closed subset\n${\\mathbf P}(h^*C\\cap K)\\subset\n{\\mathbf P}(W\\times_XT^*X)$\nof a projective space bundle over $W$.\n\n5. 
The $C$-transversality is equivalent to\nthe injectivity of\n$W\\times_XT^*_{D_I}X\\to\nT^*W$ for all $I\\subset \\{1,\\ldots, m\\}$.\nHence the assertion follows.\n\\qed}\n\\medskip\n\n\nFor a closed conical subset\n$C\\subset T^*X$, \nwe define the condition for\na morphism going out of $X$\nto be $C$-transversal.\n\n\\begin{df}[{\\rm \\cite[1.2]{Be}}]\\label{dfCf}\nLet $X$ be a smooth scheme \nover a field $k$\nand let $C\\subset T^*X$ be a closed conical subset \nof the cotangent bundle.\nLet $f\\colon X\\to Y$ \nbe a morphism of\nsmooth schemes over $k$.\n\n\n{\\rm 1.}\nFor a point $x\\in X$,\nwe say that $f\\colon X\\to Y$\nis {\\em $C$-transversal} at $x$\nif the fiber $df^{-1}(C)\\times_Xx$\nof the inverse image\nof $C$ by the canonical morphism\n$df\\colon X\\times_YT^*Y\\to T^*X$\nis a subset of the $0$-section\n$X\\times_YT^*_YY\\subset X\\times_YT^*Y$.\n\nWe say that $f\\colon X\\to Y$\nis {\\em $C$-transversal}\nif the inverse image $df^{-1}(C)$\nof $C$ by the canonical morphism\n$df\\colon X\\times_YT^*Y\\to T^*X$\nis a subset of the $0$-section\n$X\\times_YT^*_YY\\subset X\\times_YT^*Y$.\n\n{\\rm 2.}\nWe say that a pair\nof morphisms $h\\colon W\\to X$ and\n$f\\colon W\\to Y$ of smooth schemes over $k$\nis {\\em $C$-transversal}\nif $h\\colon W\\to X$ \nis $C$-transversal\nand if\n$f\\colon W\\to Y$ \nis $h^{\\circ}C$-transversal.\n\\end{df}\n\n\n\n\\begin{lm}\\label{lmCf}\nLet $X$ be a smooth scheme \nover a field $k$\nand let $C\\subset T^*X$ be a closed conical subset \nof the cotangent bundle.\nLet $f\\colon X\\to Y$ \nbe a morphism of\nsmooth schemes over $k$.\n\n{\\rm 1.}\nFor $Y={\\rm Spec}\\ k$,\nthe canonical\nmorphism $f\\colon X\\to {\\rm Spec}\\ k$ \nis $C$-transversal.\n\n{\\rm 2.}\nAssume that $C$ is the $0$-section\n$T^*_XX\\subset T^*X$.\nThen, $f$ is $C$-transversal\nif and only if $f$ is smooth.\n\n{\\rm 3.}\nAssume that $f$ is \\'etale.\nThen, $f$ is $C$-transversal\nif and only if \n$C$ is a subset of the 
$0$-section\n$T^*_XX\\subset T^*X$.\n\n{\\rm 4.}\nAssume that $f\\colon X\\to Y$ \nis $C$-transversal and let\n$g\\colon Y\\to Z$ be a smooth morphism.\nThen, the composition\n$g\\circ f\\colon X\\to Z$ \nis $C$-transversal.\n\n\n{\\rm 5.} {\\rm (\\cite[Lemma 1.2 (i)]{Be})}\nThe subset of $X$ consisting of\npoints $x\\in X$ where\n$f\\colon X\\to Y$ is $C$-transversal\nis an open subset of $X$.\n\n{\\rm 6.}\nAssume that\n$f\\colon X\\to Y$ \nis $C$-transversal.\nThen, the morphism \n$f\\colon X\\to Y$ is smooth\non a neighborhood of the\nbase $B$ of $C$.\n\n{\\rm 7.}\nAssume that $Y$ is a curve\nand let $x\\in X$ be a point.\nThen, $f\\colon X\\to Y$ is \n{\\em not} $C$-transversal at\n$x$ if and only if\nthe image of the fiber\n$(X\\times_YT^*Y)\\times_Xx$\nby the canonical morphism\n$df\\colon X\\times_YT^*Y\\to T^*X$\nis a subset of the fiber $C\\times_Xx$.\n\n\n{\\rm 8.}\nLet $D=\\bigcup_{i=1}^mD_i$\nbe a divisor with simple normal crossings\nof $X$ relatively to $X\\to {\\rm Spec}\\ k$\nand let \n$C_D=\\bigcup_{I\\subset\\{1,\\ldots,m\\}}\nT^*_{D_I}X\\subset T^*X$\n{\\rm (\\ref{eqCD})}\nbe the union of the conormal bundles\nof the intersections\n$D_I=\\bigcap_{i\\in I}D_i$\nof irreducible components\nfor all subsets $I\\subset\\{1,\\ldots,m\\}$\nof indices.\nThen, $f\\colon X\\to Y$ is $C$-transversal\nif and only if\n$D\\subset X$\nhas simple normal crossings\nrelatively to $f\\colon X\\to Y$.\n\n{\\rm 9.}\nLet $h\\colon W\\to X$ and $f\\colon W\\to Y$\nbe morphisms of smooth schemes\nover $k$. 
Then, the following conditions\nare equivalent:\n\n{\\rm (1)}\nThe pair $(h,f)$ is $C$-transversal.\n\n{\\rm (2)}\nThe morphism $(h,f)\\colon W\\to X\\times Y$ \nis $C\\times T^*Y$-transversal.\n\\end{lm}\n\n\\medskip\nIn the next subsection,\nwe will see that\nthe property 1 is related to\nthe generic local acyclicity\n\\cite[Corollaire 2.16]{TF}.\nThe property 2 is related to\nthe local acyclicity of smooth morphisms\n(see also Lemma \\ref{lmlcst}.1).\nThe property 3 is related to\nthe characterization of\nlocally constant sheaves\n\\cite[Proposition 2.11]{cst}\n(see also Lemma \\ref{lmlcst}.3).\nThe property 4 is related to\n\\cite[Corollaire 2.7]{app}.\n\n\\proof{\n1.\nSince $T^*Y=0$ for $Y={\\rm Spec}\\ k$,\nthe assertion follows.\n\n2.\nAssume that\n$C$ is the $0$-section\n$T^*_XX\\subset T^*X$.\nThen, $f\\colon X\\to Y$\nis $C$-transversal\nif and only if the canonical morphism\n$X\\times_YT^*Y\\to T^*X$\nis an injection.\nHence, this is equivalent to the condition that\n$f$ is smooth.\n\n\n3.\nAssume that $f$ is \\'etale.\nThen, since\n$X\\times_YT^*Y\\to T^*X$\nis an isomorphism,\nthe morphism\n$f$ is $C$-transversal if and only if\n$C$ is a subset of the $0$-section\n$T^*_XX\\subset T^*X$.\n\n\n4.\nAssume that \n$g\\colon Y\\to Z$ is smooth.\nThen, since $X\\times_ZT^*Z\\to \nX\\times_YT^*Y$ is an injection,\nthe $C$-transversality for $f$ \nimplies that for $g\\circ f$.\n\n5.\nThe complement of the subset is\nthe image of the closed subset\n${\\mathbf P}(df^{-1}C)\\subset\n{\\mathbf P}(X\\times_YT^*Y)$\nof a projective space bundle over $X$.\n\n6.\nIf $f\\colon X\\to Y$ \nis $C$-transversal,\nthen $df\\colon X\\times_YT^*Y\\to T^*X$\nis an injection on a neighborhood of $B$\nand hence\n$f\\colon X\\to Y$ is smooth\non a neighborhood of $B$.\n\n7.\nSince the fiber\n$(X\\times_YT^*Y)\\times_Xx$\nis a line and the inverse image\nof the fiber $C\\times_Xx$\nis a conical subset of it,\nthe inverse image of $C\\times_Xx$\nis not a subset of the $0$-section\nif and only if it is 
equal to\nthe line\n$(X\\times_YT^*Y)\\times_Xx$\nitself.\n\n8.\nThe $C$-transversality \nis equivalent to the injectivity of\n$D_I\\times_YT^*Y\\to T^*D_I$\nfor all subsets $I\\subset \\{1,\\ldots,m\\}$.\nHence, it is \nequivalent to the smoothness of\n$D_I\\to Y$\nfor $I\\subset \\{1,\\ldots,m\\}$.\nThus the assertion follows.\n\n9.\nThe conditions (1) and (2)\nare rephrased as conditions\non elements\n$(\\alpha,\\beta)\n\\in {\\rm Ker}(W\\times_XT^*X\\oplus\nW\\times_YT^*Y\n\\to T^*W)_w$\nin the fiber at $w\\in W$ \nof the kernel as follows:\n\n(1) If $\\alpha$ is contained in the inverse\nimage of $C$ and $\\beta=0$,\nthen $\\alpha=0$.\nFurther, if\n$\\alpha$ is contained in the inverse\nimage of $C$ then $\\beta=0$.\n\n(2) \nIf $\\alpha$ is contained in the inverse\nimage of $C$ then $(\\alpha,\\beta)=(0,0)$.\n\n\\noindent\nThus, the conditions are equivalent.\n\\qed}\n\n\n\\begin{df}[{\\rm \\cite[1.2]{Be}}]\\label{dff!C}\nLet $f\\colon X\\to Y$ be a\nmorphism of smooth schemes over $k$\nand $C\\subset T^*X$ be a \nclosed conical subset.\nAssume that $f$ is {\\em proper}\non the base $B$ of $C$.\nWe define a closed conical subset \n\\begin{equation}\nf_{\\circ}C \\subset T^*Y\n\\label{eqf!C}\n\\end{equation}\nto be the closure of the image\nby the first arrow\n$$\\begin{CD}T^*Y\n@<<< \nX\\times_YT^*Y@>>> T^*X\n\\end{CD}$$\nof the inverse image of $C$\nby the second arrow.\n\\end{df}\n\n\\begin{lm}\\label{lmncf}\nLet $f\\colon X\\to Y$\nbe a morphism\nof smooth schemes over $k$.\nLet $C\\subset T^*X$ be a closed conical \nsubset.\nAssume that $f$ is {\\em proper}\non the base $B$ of $C$.\nThen, for a morphism\n$g\\colon Y\\to Z$ of smooth schemes over $k$,\nthe following conditions\nare equivalent:\n\n{\\rm (1)}\nThe morphism $g$ is $f_\\circ C$-transversal.\n\n{\\rm (2)}\nThe composition $gf\\colon X\\to Z$ \nis $C$-transversal.\n\\end{lm}\n\n\\proof{\nWe consider the commutative diagram\n$$\\begin{CD}\n@.\nX\\times_ZT^*Z@>>>\nX\\times_YT^*Y@>>> 
T^*X\\\\\n@.@VVV @VVV\\\\\nT^*Z@<<<\nY\\times_ZT^*Z@>>> T^*Y.\n\\end{CD}$$\nThe condition (1) \n(resp.\\ (2)) means that\nthe subset in \n$T^*Z$ obtained by taking\ninverse images and images\nin the diagram\nstarting from\n$C\\subset T^*X$\nvia $T^*Y$ \n(resp.\\ via $X\\times_ZT^*Z$) is a subset of\nthe $0$-section.\nThey are equivalent\nsince the square is cartesian.\n\\qed}\n\n\n\n\n\\begin{lm}\\label{lmChf}\nLet $C\\subset T^*X$ be a closed conical subset\nand let \n\\begin{equation}\n\\begin{CD}\n0@>>> T^*_WX@>>>\nW\\times_XT^*X@>>> T^*W@>>>0\\\\\n@.@AAA @AAA @AAA@.\\\\\n0@>>> W\\times_ZT^*_ZY@>>>\nW\\times_YT^*Y@>>> W\\times_ZT^*Z@>>>0\n\\end{CD}\n\\label{eqTWZ}\n\\end{equation}\nbe the commutative diagram\nof exact sequences\nof vector bundles on $W$.\nThe condition (1) is equivalent to\nthe condition that the inverse image in \n$W\\times_YT^*Y$ of \n$h^*C\\subset W\\times_XT^*X$\nby the middle vertical arrow is\na subset of the $0$-section,\nby Lemma \\ref{lmCh}.4.\nThe condition (2) is equivalent to\nthe condition that the inverse image in \n$T^*_WX$ of \n$h^*C\\subset W\\times_XT^*X$\nby the upper left horizontal arrow \nand the inverse image in \n$W\\times_ZT^*Z$ of \n$h^{\\circ}C\\subset T^*W$\nby the right vertical arrow \nare subsets of the $0$-sections.\nSince the left vertical arrow \nis an isomorphism,\nthe conditions (1) and (2) \nare equivalent.\n\n\n2.\nFirst, we assume $Z\\to Y$ is smooth.\nThen $h$ is smooth and $C$-transversal.\nFurther \nthe arrows in the right square\nof (\\ref{eqTWZ})\nare injections\nand the square is cartesian.\nThus the inverse image of\n$h^{\\circ}C\\subset \nW\\times_XT^*X\n\\subset T^*W$\nin $W\\times_ZT^*Z$\nis the same as the inverse image\nin $W\\times_YT^*Y$\nand hence $g$ is $h^{\\circ}C$-transversal.\n\nIf $Z\\to Y$ is an immersion,\nthe assertion follows from 1.\nIn general,\nit suffices to decompose \n$Z\\to Y$ as the composition\n$Z\\to Z\\times Y\\to Y$\nof the graph and the projection\nby Lemma \\ref{lmCh}.3.\n\n3. 
\nBy replacing $X$ by a neighborhood\nof $B$, we may assume\nthat $f$ is flat.\nThe condition (1) (resp.\\ (2))\nmeans that the inverse image of\n$C\\subset T^*X$\nin $T^*_WX$ \n(resp.\\ its image in $T^*_ZY$)\nis a subset of the $0$-section.\nSince the left vertical arrow\nin (\\ref{eqTWZ}) is an isomorphism,\nthese conditions are equivalent.\n\nThe subset\n$i^{\\circ}f_{\\circ}C$\n(resp.\\ $g_{\\circ}h^{\\circ}C$)\nof $T^*Z$\nis the image of\nthe subset of\n$W\\times_ZT^*Z$\nobtained from\n$h^*C\\subset\nW\\times_XT^*X$\nby taking inverse images\nand images\nin the right square of (\\ref{eqTWZ})\nvia $W\\times_YT^*Y$\n(resp.\\ via $T^*W$).\nSince the square is cartesian,\nthey are equal.\n\\qed}\n\n\n\n\n\\subsection{The universal family of\nhyperplane sections}\\label{ssRn}\n\nAssume that $X$,\nsmooth over a field $k$,\nis quasi-projective\nand let ${\\cal L}$ be an ample invertible\n${\\cal O}_X$-module.\nLet $E$ be a $k$-vector space of finite\ndimension and\n$E\\to \\Gamma(X,{\\cal L})$ \nbe a $k$-linear mapping inducing\na surjection\n$E\\otimes_k{\\cal O}_X\\to {\\cal L}$\nand an immersion \n\\begin{equation}\ni\\colon X\\to {\\mathbf P}\n={\\mathbf P}(E^\\vee)=\n{\\rm Proj}\\ S^\\bullet E.\n\\label{eqiXP}\n\\end{equation}\nWe use the contra-Grothendieck notation\nfor a projective space\n${\\mathbf P}(E)(k)=(E\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} \\{0\\})\/k^\\times$.\n\nThe dual ${\\mathbf P}^\\vee=\n{\\mathbf P}(E)$\nof ${\\mathbf P}$ parametrizes\nhyperplanes $H\\subset {\\mathbf P}$.\nThe universal hyperplane\n$Q=\\{(x,H)\\mid x\\in H\\}\n\\subset \n{\\mathbf P}\\times\n{\\mathbf P}^\\vee$\nis defined by\nthe identity ${\\rm id}\\in\n{\\rm End}(E)$\nregarded as a section \n$F\\in \\Gamma(\n{\\mathbf P}\\times\n{\\mathbf P}^\\vee,\n{\\cal O}(1,1))\n=E\\otimes E^\\vee$.\nBy the canonical injection\n$\\Omega^1_{{\\mathbf P}\/k}(1)\n\\to E\\otimes {\\cal O}_{\\mathbf P}$,\nthe universal hyperplane\n$Q$ is identified \nwith the covariant projective\nspace 
bundle\n${\\mathbf P}(T^*{\\mathbf P})$\nassociated to the cotangent bundle\n$T^*{\\mathbf P}$.\nThe image of the conormal bundle\n$T^*_Q(\n{\\mathbf P}\\times {\\mathbf P}^\\vee)\n\\to\nQ\\times_\n{{\\mathbf P}\\times {\\mathbf P}^\\vee}\nT^*({\\mathbf P}\\times {\\mathbf P}^\\vee)\n\\to\nQ\\times_\n{\\mathbf P}\nT^*{\\mathbf P}$\nby the projection\nis identified\nwith the universal sub line bundle\nof the pull-back \n$Q\\times_\n{\\mathbf P}T^*{\\mathbf P}$\non \n$Q={\\mathbf P}(T^*{\\mathbf P})$.\n\n\nThe fibered product\n$X\\times_{\\mathbf P}Q\n={\\mathbf P}(X\\times_{\\mathbf P}T^*{\\mathbf P})$\nis the intersection\nof $X\\times {\\mathbf P}^\\vee$\nwith $Q$\nin ${\\mathbf P}\\times {\\mathbf P}^\\vee$\nand is the universal family\nof hyperplane sections.\nWe consider\nthe diagram\n\\begin{equation}\n\\begin{CD}\nX@<{p}<< X\\times_{\\mathbf P}Q@>{p^\\vee}>> {\\mathbf P}^\\vee={\\mathbf P}(E).\n\\end{CD}\n\\label{eqhsfb}\n\\end{equation}\n\nLet $C\\subset T^*X$ be a\nclosed conical subset.\nDefine a closed conical subset\n$\\widetilde C\\subset\nX\\times_{\\mathbf P}T^*{\\mathbf P}$\nto be the pull-back of $C$ by\nthe surjection\n$X\\times_{\\mathbf P}T^*{\\mathbf P}\n\\to T^*X$\nand let\n\\begin{equation}\n{\\mathbf P}(\\widetilde C)\\subset\n{\\mathbf P}(X\\times_{\\mathbf P}T^*{\\mathbf P})\n=\nX\\times_{\\mathbf P}Q\n\\label{eqPS}\n\\end{equation}\nbe the projectivization.\nThe subset\n${\\mathbf P}({\\widetilde C})\n\\subset\nX\\times_{\\mathbf P}Q\n\\subset X\\times {\\mathbf P}^\\vee$\nconsists of the points $(x,H)$\nsuch that the fiber\n$T^*_{X\\times_{\\mathbf P}Q}\n(X\\times{\\mathbf P}^\\vee)\n\\times_{X\\times_{\\mathbf P}Q}(x,H)\n\\subset\n(X\\times_{\\mathbf P}T^*{\\mathbf P})\\times_Xx$ \nis a subset of $\\widetilde C$\nsince the conormal\nbundle $T^*_{X\\times_{\\mathbf P}Q}\n(X\\times{\\mathbf P}^\\vee)\n\\subset \nX\\times_{\\mathbf P} T^*{\\mathbf P}$\nis the universal sub line bundle\non the projective bundle\n$X\\times_{\\mathbf 
P}Q=\n{\\mathbf P}(X\\times_{\\mathbf P} T^*{\\mathbf P})$.\nIf $C=T^*_XX$ is the $0$-section,\nthe lifting $\\widetilde C$\nis the conormal bundle\n$T^*_X{\\mathbf P}$.\nFurther if $i\\colon X\\to {\\mathbf P}$\nis a closed immersion,\nthe image $p^\\vee(\n{\\mathbf P}(\\widetilde C))\n\\subset {\\mathbf P}^\\vee$\nis the dual variety of $X$.\n\nIf $V\\subset {\\mathbf P}$ is\nan open subscheme such that\n$i\\colon X\\to {\\mathbf P}$\ninduces a closed immersion\n$i^{\\circ}\\colon X\\to V$,\nthen\n$\\widetilde C\\subset\nX\\times_{\\mathbf P}T^*{\\mathbf P}\n=X\\times_VT^*V\n\\subset T^*V$\nis identified with $i^{\\circ}_{\\circ}C$\n(\\ref{eqf!C}).\n\n\n\\begin{lm}\\label{lmPC}\nLet $C\\subset T^*X$ be a\nclosed conical subset.\nThe complement \n$X\\times_{\\mathbf P}Q\n\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} {\\mathbf P}(\\widetilde C)$\nis the largest open subset\nwhere the pair\n$X\\gets X\\times_{\\mathbf P}Q\n\\to {\\mathbf P}^\\vee$\nis $C$-transversal.\n\\end{lm}\n\n\n\\proof{\nBy Lemma \\ref{lmCf}.9,\nthe largest open subset\n$U\\subset X\\times _{\\mathbf P}Q$\nwhere the pair\n$(p,p^\\vee)$\nis $C$-transversal equals\nthe largest open subset\nwhere $(p,p^\\vee)\\colon \nX\\times_{\\mathbf P}Q\\to X\\times\n{\\mathbf P}^\\vee$\nis $C\\times T^*{\\mathbf P}^\\vee$-transversal.\nThe kernel \n$L={\\rm Ker}\\bigl(\n(X\\times_{\\mathbf P}Q)\\times_XT^*X\n\\oplus\n(X\\times_{\\mathbf P}Q)\\times_\n{{\\mathbf P}^\\vee}T^*{\\mathbf P}^\\vee\n\\to \nT^*(X\\times_{\\mathbf P}Q)\\bigr)$\nis canonically identified with\nthe restriction of\nthe universal sub line bundle of\n$T^*{\\mathbf P}$ on\n$Q={\\mathbf P}(T^*{\\mathbf P})$.\nHence, $U$ is the complement\nof ${\\mathbf P}(\\widetilde C)$.\n\\qed}\n\\medskip\n\nWe consider a similar \nbut slightly different situation.\nLet $X$ be a smooth\nscheme over a field $k$\nand $h\\colon X\\to {\\mathbf P}\n={\\mathbf P}^n$\nbe an immersion over $k$\nto a projective space.\nLet $Q={\\mathbf P}(T^*{\\mathbf P})\n\\subset 
{\\mathbf P}\n\\times {\\mathbf P}^\\vee$\nbe the universal family\nof hyperplanes\nand consider the\ncommutative diagram \n\\begin{equation}\n\\xymatrix{\nX\\times_{\\mathbf P}Q\n\\ar[r]\\ar[d]_p\n\\ar[rrrd]^{\\!\\! p^{\\vee}}\n& \nQ\\ar[rrd]^{{\\bms p}^{\\vee}}\n\\ar[d]_{\\!\\!\\!\\!\\!\\! {\\bms p}}&&\n\\\\\nX\\ar[r]^h&\n{\\mathbf P}\n&&{\\mathbf P}^\\vee}\n\\label{eqXPh}\n\\end{equation}\nwith cartesian square.\nWe have $X\\times_{\\mathbf P}Q\n={\\mathbf P}(X\\times_{\\mathbf P}T^*{\\mathbf P})$.\n\n\n\\begin{lm}\\label{lmh!}\nLet ${\\mathbf P}={\\mathbf P}^n$\nbe a projective space \nand let ${\\mathbf P}^\\vee$\nbe the dual projective space.\nLet $C^\\vee\n\\subset T^*{\\mathbf P}^\\vee$\nbe a closed conical subset \nsuch that every irreducible\ncomponent is of dimension $n$.\nDefine a closed conical subset $C\n\\subset T^*{\\mathbf P}$ by\n$C={\\bm p}_{\\circ}\n{\\bm p}^{\\vee \\circ}C^\\vee$.\nThen, every irreducible\ncomponent of $C$ is of dimension $n$.\n\\end{lm}\n\n\\proof{\nIt suffices to consider\nthe case where $C^\\vee$ is irreducible.\nWe have ${\\mathbf P}(C)=\n{\\mathbf P}(C^\\vee)$\nin $Q={\\mathbf P}(T^*{\\mathbf P})\n={\\mathbf P}(T^*{\\mathbf P}^\\vee)$.\nUnless the base of $C^\\vee$\nis finite,\n$C$ contains the $0$-section\n$T^*_{\\mathbf P}{\\mathbf P}$.\nIf the base of $C^\\vee$\nconsists of a closed point\ncorresponding to a hyperplane\n$H\\subset {\\mathbf P}$,\nwe have $C=T^*_H{\\mathbf P}$.\nThus the assertion follows.\n\\qed}\n\n\\begin{pr}\\label{prhCt}\nLet the notation be as in\n{\\rm (\\ref{eqXPh})} and\nlet $C^\\vee\n\\subset T^*{\\mathbf P}^\\vee$\nbe a closed conical subset.\nDefine a closed conical subset \n$C\\subset T^*{\\mathbf P}$\nby $C={\\bm p}_{\\circ}{\\bm p}^{\\vee \\circ}C^\\vee$\nand \nits projectivization\n${\\mathbf P}(C)\n\\subset\n{\\mathbf P}(T^*{\\mathbf P})$.\nThen, the complement\n$X\\times_{\\mathbf P}Q\n\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} (X\\times_{\\mathbf P}Q)\n\\cap {\\mathbf P}(C)$\nis the largest open subset 
$U\n\\subset X\\times_{\\mathbf P}Q$\nwhere the pair\n$(p^\\vee, p)$\nis $C^\\vee$-transversal.\n\\end{pr}\n\n\\proof{\nThe proof is similar to that of\nLemma \\ref{lmPC}.\nBy Lemma \\ref{lmCf}.9,\nthe largest open subset\n$U\\subset X\\times _{\\mathbf P}Q$\nwhere the pair\n$(p^\\vee, p)$\nis $C^\\vee$-transversal equals\nthe largest open subset\nwhere $(p,p^\\vee)\\colon \nX\\times_{\\mathbf P}Q\\to X\\times\n{\\mathbf P}^\\vee$\nis $T^*X\\times C^\\vee$-transversal.\nThe kernel \n$L={\\rm Ker}\\bigl(\n(X\\times_{\\mathbf P}Q)\\times_XT^*X\n\\oplus\n(X\\times_{\\mathbf P}Q)\\times_\n{{\\mathbf P}^\\vee}T^*{\\mathbf P}^\\vee\n\\to \nT^*(X\\times_{\\mathbf P}Q)\\bigr)$\nis canonically identified with\nthe restriction of\nthe universal sub line bundle of\n$T^*{\\mathbf P}^\\vee$ on\n$Q={\\mathbf P}(T^*{\\mathbf P}^\\vee)$.\nHence, $U$ is the complement\nof the intersection\n$(X\\times_{\\mathbf P}Q)\n\\cap {\\mathbf P}(C^\\vee)$.\nSince $L$ is also canonically identified with\nthe restriction of\nthe universal sub line bundle\nof $T^*{\\mathbf P}$ on\n$Q={\\mathbf P}(T^*{\\mathbf P})$,\nwe have\n${\\mathbf P}(C^\\vee)\n={\\mathbf P}(C)$\nand the assertion follows.\n\\qed}\n\\medskip\n\nLet $\\Delta_X\\subset X\\times_{\\mathbf P}Q$\nbe the sub projective space\nbundle\n\\begin{equation}\n\\Delta_X={\\mathbf P}(T^*_X{\\mathbf P})\n\\subset\n{\\mathbf P}(X\\times_{\\mathbf P}T^*{\\mathbf P})\n=X\\times_{\\mathbf P}Q.\n\\label{eqRX}\n\\end{equation}\n\n\n\\begin{cor}\\label{corhCt}\nLet the notation be as in\nProposition {\\rm \\ref{prhCt}}.\n\n{\\rm 1.}\nThe following conditions are\nequivalent:\n\n{\\rm (1)}\nThe immersion $h\\colon X\\to\n{\\mathbf P}$ is $C$-transversal.\n\n\n{\\rm (2)}\nThe pair $(p^\\vee, p)$\nis $C^\\vee$-transversal\non a neighborhood\nof $\\Delta_X\\subset X\\times_{\\mathbf P}Q$.\n\n{\\rm 2.}\nAssume that\nthe immersion $h\\colon X\\to\n{\\mathbf P}$ is $C$-transversal.\nThen, $p^\\vee\\colon X\\times_{\\mathbf P}Q\n\\to {\\mathbf P}^\\vee$ 
is\n$C^\\vee$-transversal and we have\n$h^{\\circ}C=p_{\\circ}p^{\\vee \\circ}C^\\vee$.\n\\end{cor}\n\n\\proof{\n1.\nBy Proposition \\ref{prhCt},\nthe condition (2) is equivalent to\n${\\mathbf P}(C)\\cap\n\\Delta_X=\\varnothing$.\nThis is equivalent to\nthe condition that $T^*_X{\\mathbf P}\n\\cap C$\nis a subset of the $0$-section\nand hence to (1).\n\n2.\nBy Lemma \\ref{lmChf}.3\napplied to $C={\\bm p}_{\\circ}\n{\\bm p}^{\\vee \\circ}C^\\vee$,\nthe assumption that\n$h\\colon X\\to {\\mathbf P}$\nis $C$-transversal\nimplies that the immersion\n$i\\colon X\\times_{\\mathbf P}Q\n\\to Q$ is ${\\bm p}^{\\vee \\circ}C^\\vee$-transversal\nand $h^{\\circ}C=\nh^{\\circ}{\\bm p}_{\\circ}\n{\\bm p}^{\\vee \\circ}C^\\vee=\np_{\\circ}i^{\\circ}{\\bm p}^{\\vee \\circ}C^\\vee$.\nHence by Lemma \\ref{lmCh}.3,\n$p^\\vee\\colon X\\times_{\\mathbf P}Q\n\\to {\\mathbf P}^\\vee$ is\n$C^\\vee$-transversal and\nwe have\n$i^{\\circ}{\\bm p}^{\\vee \\circ}C^\\vee=\np^{\\vee \\circ}C^\\vee$.\nThus the assertion is proved.\n\\qed}\n\n\n\n\n\\begin{pr}\\label{prfp}\nLet the notation be as in\nProposition {\\rm \\ref{prhCt}} and\nlet $f\\colon X\\to {\\mathbf A}^1$\nbe a smooth morphism over $k$.\nDefine sub vector bundles\nof $X\\times_{\\mathbf P}T^*{\\mathbf P}$\nby the cartesian diagram\n$$\\begin{CD}\nT^*_X{\\mathbf P}\n@>{\\subset}>> V@>{\\subset}>> \nX\\times_{\\mathbf P}T^*{\\mathbf P}\\\\\n@VVV@VVV@VVV\\\\\nT^*_XX\n@>{\\subset}>> \nX\\times_{{\\mathbf A}^1}T^*{\\mathbf A}^1\n@>{\\subset}>> T^*X\n\\end{CD}$$\nand closed subsets\n$\\Delta_X={\\mathbf P}(T^*_X{\\mathbf P})\n\\subset \\Delta_f={\\mathbf P}(V)\n\\subset \nX\\times_{\\mathbf P}Q\n={\\mathbf P}(X\\times_{\\mathbf P}T^*{\\mathbf P})$\nto be the associated projective subspace bundles.\nThen, the complement \n$X\\times_{\\mathbf P}Q\n\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} (\\Delta_f\\cap {\\mathbf P}(C))$\nis the largest open subset\n$U\\subset X\\times_{\\mathbf P}Q$\nwhere the pair\n$(p^\\vee,f\\circ p)$\nis 
$C^\\vee$-transversal.\n\\end{pr}\n\n\\proof{\nBy Lemma \\ref{lmCf}.9,\nthe largest open subset\n$U\\subset X\\times_{\\mathbf P}Q$\nwhere the pair\n$(p^\\vee,f\\circ p)$\nis $C^\\vee$-transversal\nequals\nthe largest open subset\nwhere $(f\\circ p,p^\\vee)\\colon\nX\\times_{\\mathbf P}Q\n\\to {\\mathbf A}^1\\times{\\mathbf P}^\\vee$\nis $T^*{\\mathbf A}^1\n\\times C^\\vee$-transversal.\nHence, as in the proof\nof Proposition \\ref{prhCt},\n$U\\subset X\\times_{\\mathbf P}Q$ \nis the complement of\n$\\Delta_f\\cap \n((X\\times_{\\mathbf P}Q)\n\\cap {\\mathbf P}(C^\\vee))=\n\\Delta_f\\cap \n{\\mathbf P}(C)$.\n\\qed}\n\n\n\\begin{cor}\\label{corfp}\nLet the notation be as in\nPropositions {\\rm \\ref{prhCt}}\nand {\\rm \\ref{prfp}} and assume that\n$h\\colon X\\to {\\mathbf P}$\nis $C$-transversal.\nThen,\nthe composition $f\\circ p\\colon \nX\\times_{\\mathbf P}Q\\to X\\to {\\mathbf A}^1$\nis $p^{\\vee \\circ}C^\\vee$-transversal\non the complement\nof the intersection $\\Delta_f\\cap {\\mathbf P}(C)$.\nFurther, the intersection\n$\\Delta_f\\cap {\\mathbf P}(C)\n\\subset \nX\\times_{\\mathbf P}Q$\nis finite over the complement\nof the largest open subset\nof $X$ where\n$f\\colon X\\to {\\mathbf A}^1$\nis $h^{\\circ}C$-transversal.\n\\end{cor}\n\n\\proof{\nThe first assertion is clear from\nProposition \\ref{prfp}.\n\nWe show the second assertion.\nBy Corollary \\ref{corhCt}.2,\n$p^\\vee\\colon X\\times_{\\mathbf P}Q\n\\to {\\mathbf P}^\\vee$ is\n$C^\\vee$-transversal and\nwe have $h^{\\circ}C=\np_{\\circ}p^{\\vee \\circ}C^\\vee$.\nHence by Lemma \\ref{lmncf},\nif $f\\colon X\\to {\\mathbf A}^1$\nis $h^{\\circ}C$-transversal,\nthen $fp\\colon X\\times_{\\mathbf P}Q\n\\to {\\mathbf A}^1$ is\n$p^{\\vee \\circ} C^\\vee$-transversal.\nThus, by Proposition \\ref{prfp},\nthe intersection\n$\\Delta_f\\cap {\\mathbf P}(C)$\nis a subset of the inverse\nimage of\nthe complement\nof the largest open subset\nof $X$ where\n$f\\colon X\\to {\\mathbf A}^1$\nis 
$h^{\\circ}C$-transversal.\nTherefore, it suffices to show that\n$\\Delta_f\\cap {\\mathbf P}(C)$\nis finite over $X$.\n\n\nBy the assumption that\n$h\\colon X\\to {\\mathbf P}$\nis $C$-transversal,\nthe morphism\n$p\\colon X\\times_{\\mathbf P}Q\n\\to X$ is $p^{\\vee \\circ}C^\\vee$-transversal\non a neighborhood of $\\Delta_X$\nby Corollary \\ref{corhCt}.1.\nSince $f\\colon X\\to {\\mathbf A}^1$\nis smooth, the composition\n$fp\\colon X\\times_{\\mathbf P}Q\\to {\\mathbf A}^1$\nis $p^{\\vee \\circ}C^\\vee$-transversal\non a neighborhood of $\\Delta_X$\nby Lemma \\ref{lmCf}.4.\nHence, by Proposition \\ref{prfp},\nthe intersection\n$\\Delta_f\\cap {\\mathbf P}(C)$\ndoes not meet $\\Delta_X$.\nSince $\\Delta_X$ is a hyperplane bundle\nof a projective space bundle\n$\\Delta_f$ over $X$,\nthe intersection\n$\\Delta_f\\cap {\\mathbf P}(C)$\nis finite over $X$.\n\\qed}\n\n\n\n\n\\subsection{Image of the projectivization}\n\nWe further recall from \\cite{Be}\na definition and properties.\n\n\\begin{df}[{\\rm cf.\\ \\cite[4.2]{Be}}]\\label{dfwi}\nLet $f\\colon X\\to S$ be a morphism\nof separated schemes of finite type over a field $k$\nand $Y,Z\\subset X$ be closed subsets.\nWe say that $Y$ and $Z$ {\\em well intersect}\nwith respect to $f$ if we have\n\\begin{equation}\n\\dim (Y\\times_SZ\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} Y\\times_XZ)\n\\leqq \\dim Y+\\dim Z-\\dim S.\n\\label{eqdfwi}\n\\end{equation}\n\\end{df}\n\nWe have slightly modified the\noriginal definition by replacing\nthe equality by an inequality.\n\n\\begin{lm}\\label{lmJ}\nLet $f\\colon X\\to S$ be a morphism of\nseparated schemes of finite type\nover a field $k$\nand $Y,Z\\subset X$ be closed subsets.\n\n{\\rm 1.}\nLet $g\\colon P\\to S$ be\na morphism of\nseparated schemes of finite type\nover a field $k$\nand $X\\to P$ be an immersion\nover $k$.\nThen, $Y$ and $Z$ well intersect\nwith respect to $f$\nif and only if they\nwell intersect\nwith respect to $g$.\n\n{\\rm 2.}\nLet $S'\\to S$ be a faithfully flat morphism 
of\nrelative constant dimension of\nseparated schemes of finite type\nover a field $k$\nand let $f'$ and $Y',Z'\\subset X'$\ndenote the base changes of $f$ and $Y,Z\\subset X$\nby $S'\\to S$.\nThen, $Y$ and $Z$ well intersect\nwith respect to $f$ if and only if\n$Y'$ and $Z'$ well intersect\nwith respect to $f'$.\n\n{\\rm 3.}\nLet $X\\to U\\to T$ be morphisms of\nseparated schemes of finite type\nover a field $k$\nand $V,W\\subset U$ be closed subsets.\nAssume that $X\\to S\\times T$\nis an immersion and that\n$Y,Z\\subset X$ are inverse images of\n$V,W\\subset U$.\nAssume further that\nthe morphisms\n$X\\to S$ and $X\\to U$\nand the base change\n\\begin{equation}\n(X\\times_SX)\n\\times_{(T\\times T)}\n(T\\times T)^{\\circ}\\to\n(U\\times U)\n\\times_{(T\\times T)}\n(T\\times T)^{\\circ}\n=U\\times U\n\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} U\\times_T U\n\\label{eqXXUU}\n\\end{equation}\nof the morphism\n$X\\times_SX\\to U\\times U$\nby the open immersion\n$(T\\times T)^{\\circ} =\nT\\times T\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} T\n\\to T\\times T$ of the complement\nof the diagonal are faithfully flat of relative\nconstant dimensions.\nThen, $Y$ and $Z$ well intersect\nwith respect to $f$.\n\\end{lm}\n\n\\proof{1.\nSince $X\\to P$ is an immersion,\nwe have $Y\\times_XZ=\nY\\times_PZ$\nand the assertion follows.\n\n2.\nBy the assumption, every term in (\\ref{eqdfwi})\nwith $'$ equals the corresponding term\nwithout $'$ increased by the relative dimension\nand the assertion follows.\n\n3.\nBy the assumption that\n$X\\to U$ is faithfully \nflat of relative constant dimension,\nwe have\n\\begin{equation}\n\\dim Y=\\dim V+(\\dim X-\\dim U),\\quad\n\\dim Z=\\dim W+(\\dim X-\\dim U).\n\\label{eqYV}\n\\end{equation}\nBy the assumption that $X\\to S\\times T$\nis an immersion,\nthe complement of the diagonal\n$X\\times_SX\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} X$ \nequals the base change \n$(X\\times_SX)\n\\times_{(T\\times T)}\n(T\\times T)^{\\circ}$\nand 
\n$Y\\times_SZ\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} Y\\times_XZ$\nis the inverse image of\n$V\\times W\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~}\nV\\times_T W$\nby the morphism\n(\\ref{eqXXUU}).\nSince the morphism\n(\\ref{eqXXUU}) is faithfully flat\nof relative constant dimension,\nwe have\n\\begin{align}\n&\\dim (Y\\times_SZ\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} Y\\times_XZ)\n\\label{eqVW}\n\\\\\n&\\leqq\n\\dim V+\\dim W\n+(\\dim (X\\times_SX\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} X)\n-\\dim (U\\times U\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} U\\times_TU)).\n\\nonumber\n\\end{align}\nBy the assumption that\n$X\\to S$ is faithfully flat\nof relative constant dimension, \nwe have \n$\\dim X\\times_SX=2\\dim X-\\dim S$\nand the right hand side of\n(\\ref{eqVW}) equals\n$\\dim V+\\dim W\n+2(\\dim X-\\dim U)-\\dim S$\nand the assertion follows\nby (\\ref{eqYV}).\n\\qed}\n\n\n\\begin{lm}[{\\rm cf. \\cite[Lemma 4.2]{Be}}]\\label{lmwi}\nLet $Y,Z\\subset X$ be irreducible\nclosed subsets.\nAssume that \n$Y$ and $Z$ well intersect\nwith respect to $f$\nand that\n$\\dim Y,\\dim Z<\\dim S$.\n\n{\\rm 1.}\nIf $Y=Z$,\nthen $Y\\to \\overline{f(Y)}$ \nis generically radicial.\n\n{\\rm 2.}\nIf $Y\\not \\subset Z$,\nthen we have\n$f(Y)\\not \\subset\n\\overline{f(Y)\\cap f(Z)}$\nand \n$f(Y)\\not \\subset f(Z)$.\n\\end{lm}\n\n\\proof{\n1.\nWe have\n$\\dim (Y\\times_SY\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} Y)\n\\leqq 2\\dim Y-\\dim S< \\dim Y.$\n\n2.\nBy 1, we have\n$\\dim Y=\\dim \\overline{f(Y)}$.\nWe have $\\dim \\overline{f(Y)\\cap f(Z)}\n\\leqq\n\\dim Y\\times_SZ\n\\leqq\n\\max(\n\\dim (Y\\times_SZ\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} Y\\times_XZ),\n\\dim Y\\times_XZ)$.\nSince $Y$ and $Z$ are assumed to\nwell intersect,\nwe have\n$\\dim (Y\\times_SZ\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} Y\\times_XZ)\n\\leqq \\dim Y+\\dim Z-\\dim S\n< \\dim Y$.\nIf $Y\\not\\subset Z$,\nwe have\n$\\dim Y\\times_XZ\n< \\dim Y$.\nThus, we have\n$\\dim \\overline{f(Y)\\cap 
f(Z)}\n<\\dim Y=\\dim \\overline{f(Y)}$.\n\nIf we had $f(Y)\\subset f(Z)$,\nwe would have\n$f(Y)\\subset\n\\overline{f(Y)\\cap f(Z)}$\nand a contradiction.\n\\qed}\n\\medskip\n\nFor a $k$-vector space $E$\nof finite dimension\nand a $k$-linear mapping\n$E\\to \\Gamma(X,{\\cal L})$\ndefining an immersion\n$X\\to {\\mathbf P}={\\mathbf P}(E^\\vee)$,\nwe consider the following condition:\n\\begin{itemize}\n\\item[{\\rm (E)}]\nFor every pair of distinct closed\npoints $u\\neq v$ of the base change\n$X_{\\bar k}$ to an algebraic closure\n$\\bar k$ of $k$,\nthe composition\n\\begin{equation}\nE\\to \\Gamma(X,{\\cal L})\n\\otimes_k\\bar k\n\\to \n{\\cal L}_u\/{\\mathfrak m}_u^2{\\cal L}_u\n\\oplus\n{\\cal L}_v\/{\\mathfrak m}_v^2{\\cal L}_v\n\\label{eqE}\n\\end{equation}\nis a surjection.\n\\end{itemize}\nFor an integer $d\\geqq 1$,\nthe linear mapping\n$S^dE\n\\to\\Gamma(X,{\\cal L}^{\\otimes d})$ \nof the symmetric power\ndefines an immersion\n$X\\to {\\mathbf P}(S^dE^{\\vee})$.\n\n\\begin{lm}\\label{lmloc}\nLet $X$ be a quasi-projective\nscheme over a field\n$k$ and\n${\\cal L}$ be an ample\ninvertible ${\\cal O}_X$-module.\nAssume that $E\\to \\Gamma(X,{\\cal L})$ \ndefines an immersion $X\\to \n{\\mathbf P}={\\mathbf P}(E^{\\vee})$.\nFor $d\\geqq 3$, \nthe symmetric power\n$S^dE\\to \\Gamma(X,{\\cal L}^{\\otimes d})$\nsatisfies the condition {\\rm (E)} above.\n\\end{lm}\n\n\\proof{\nWe may assume $k=\\bar k,\\\nX={\\mathbf P}^n,\\\n{\\cal L}={\\mathcal O}(1),\\\nE=\\Gamma(X,{\\cal L})$\nand $u=(0,\\cdots,0,1),\nv=(1,0,\\cdots,0)$.\nThen, the assertion is clear.\n\\qed}\n\\medskip\n\nFor an immersion \n$i\\colon X\\to {\\mathbf P}$\nto a projective space\nand a closed conical\nsubset $C\\subset T^*X$,\nwe consider the following condition:\n\\begin{itemize}\n\\item[(C)]\nFor every irreducible component\n$C_a\\subset T^*X$ of $C=\\bigcup_aC_a$,\nthe inverse image\n$\\widetilde C_a\n\\subset X\\times_{\\mathbf P}T^*{\\mathbf P}$\nis not a subset of 
the\n$0$-section.\n\\end{itemize}\nIf the condition (C) is satisfied,\nthere is a one-to-one correspondence\nbetween the irreducible components\nof $C$ and those of ${\\mathbf P}(\\widetilde C)$ sending $C_a$ to ${\\mathbf P}(\\widetilde C_a)$.\n\n\\begin{pr}[{\\rm \\cite[Lemma 4.3]{Be}}]\\label{prwi}\nLet $X$ be a quasi-projective\nsmooth scheme of dimension $n$\nover a field $k$ and\n${\\cal L}$ be an ample\ninvertible ${\\cal O}_X$-module.\nLet $E$ be a $k$-vector space\nof finite dimension and \n$E\\to \\Gamma(X,{\\cal L})$\nbe a $k$-linear mapping\ndefining an immersion\n$X\\to {\\mathbf P}={\\mathbf P}(E^\\vee)$\nand \nsatisfying the condition {\\rm (E)}\nbefore Lemma {\\rm \\ref{lmloc}}.\nLet $C\\subset T^*X$\nbe a closed conical subset\nsatisfying the condition {\\rm (C)} above.\nThen, for irreducible\ncomponents $C_a,C_b$ of $C$,\nthe projectivizations\n${\\mathbf P}(\\widetilde C_a),\n{\\mathbf P}(\\widetilde C_b)\n\\subset {\\mathbf P}(X\\times_{\\mathbf P}\nT^*{\\mathbf P})=X\\times_{\\mathbf P}Q$\nwell intersect with respect to\n$p^\\vee\\colon \nX\\times_{\\mathbf P}Q\\to {\\mathbf P}^\\vee$.\n\\end{pr}\n\\medskip\n\nThe proof is essentially the same as\nthat of \\cite[Lemma 4.3]{Be}.\nFor the sake of completeness,\nwe rephrase the proof.\n\n\n\\proof{\nLet ${\\cal I}\\subset {\\cal O}_{X\\times X}$\ndenote the ideal sheaf defining\nthe diagonal immersion $X\\to X\\times X$\nand let $P\\subset X\\times X$ denote\nthe closed subscheme defined by \n${\\cal I}^2$.\nThe projections $p_1,\\ p_2\\colon P\\to X$\nare finite flat of degree $n+1$.\nDefine a vector bundle $J$\nof rank $n+1$ on $X$ to be that associated\nto the invertible ${\\cal O}_P$-module \n$p_2^*{\\cal L}$ \nregarded as a locally free \n${\\cal O}_X$-module by $p_1$.\nThe exact sequence\n$0\\to \\Omega^1_X\\to {\\cal O}_P\n\\to {\\cal O}_X\\to 0$\ntensored with $p_2^*{\\cal L}$ \ndefines an exact sequence\n\\begin{equation}\n\\begin{CD}\n0@>>>\nT^*X\\otimes L\n@>>>\nJ\n@>>> 
L@>>>0\n\\end{CD}\n\\label{eqLJ}\n\\end{equation}\nof vector bundles on $X$.\n\nDefine a morphism $X\\times E\\to J$\nof vector bundles on $X$\nto be that induced by the pull-back\nby $p_2$ of the canonical morphism\n${\\cal O}_X\\otimes E\\to {\\cal L}$.\nFor a closed geometric point $x$,\nthe fiber of\n$X\\times E\\to J$\nis identified with the morphism\n$E\\to {\\cal L}_x\/{\\mathfrak m}_x^2{\\cal L}_x$\ninduced by the canonical morphism\n${\\cal O}_X\\otimes E\\to {\\cal L}$.\n\nWe regard\n${\\mathbf P}(\\widetilde C_a),\n{\\mathbf P}(\\widetilde C_b)\n\\subset X\\times_{\\mathbf P}Q$\nas subsets of\n$X\\times {\\mathbf P}^\\vee$\nas in Lemma \\ref{lmJ}.1.\nWe consider the cartesian diagram\n\\begin{equation}\n\\begin{CD}\n&X\\times {\\mathbf P}^\\vee\n@<<<\nX\\times E^{\\circ}\n@>>>\nJ\n@>>> X\n\\\\\n&@VVV@VVV\\\\\n{\\mathbf P}(E)=&\n{\\mathbf P}^\\vee\n@<<<\nE^{\\circ}&=E\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} \\{0\\}.\n\\end{CD}\n\\label{eqEJ}\n\\end{equation}\nThe lower horizontal arrow is\nthe canonical surjection\nand is faithfully flat of\nrelative dimension 1.\nFor an irreducible component $C_a$\nof $C\\subset T^*X$,\ndefine a closed conical\nsubset $C_a\\otimes L\n\\subset T^*X\\otimes L\n\\subset J$\nas a twist of $C_a$ by using\nthe exact sequence (\\ref{eqLJ}).\nThe inverse image\n$\\widetilde C_a^{\\circ}\\subset\nX\\times E^{\\circ}$\nof $C_a\\otimes L\n\\subset J$ by the second upper arrow\nin (\\ref{eqEJ})\nequals the inverse image of\n${\\mathbf P}(\\widetilde C_a)\n\\subset X\\times_{\\mathbf P}Q\n\\subset X\\times {\\mathbf P}^\\vee$\nby the first upper arrow\nby the condition (C).\nBy Lemma \\ref{lmJ}.2\napplied to ${\\mathbf P}(\\widetilde C_a),\n{\\mathbf P}(\\widetilde C_b)\n\\subset\nX\\times_{\\mathbf P}Q\\subset\nX\\times {\\mathbf P}^\\vee$\nand the base change\n$E^{\\circ} \\to{\\mathbf P}^\\vee$,\nit suffices to show\nthat\n$\\widetilde C_a^{\\circ}$\nand\n$\\widetilde C_b^{\\circ}\n\\subset\nX\\times E^{\\circ}$\nwell-intersect 
with respect to\n$X\\times E^{\\circ} \\to E^{\\circ}$.\n\nThe morphism\n$X\\times E\\to J$\ninduces a surjection\nof vector bundles\n$(X\\times X)^{\\circ} \\times E\\to \n(J\\times J)\\times_{(X\\times X)}(X\\times X)^{\\circ}$\non $(X\\times X)^{\\circ}$\nby the condition (E).\nThus, by Lemma \\ref{lmJ}.3 applied\nto $E^{\\circ}\\gets X\\times E^{\\circ}\\to J\\to X$\ntaken as $S\\gets X\\to U\\to T$,\nthe assertion follows.\n\\qed}\n\n\n\\begin{cor}\\label{corrad}\nLet $X$ be a quasi-projective\nsmooth scheme of dimension $n$\nover a field $k$ and\n${\\cal L}$ be an ample\ninvertible ${\\cal O}_X$-module.\nLet $E$ be a $k$-vector space\nof finite dimension and \n$E\\to \\Gamma(X,{\\cal L})$\nbe a $k$-linear mapping\ndefining an immersion\n$X\\to {\\mathbf P}={\\mathbf P}(E^\\vee)$\nsatisfying the condition {\\rm (E)} \nbefore Lemma {\\rm \\ref{lmloc}}.\nLet $C\\subset T^*X$\nbe a closed conical subset\nsatisfying the condition {\\rm (C)} before\nProposition {\\rm \\ref{prwi}}.\nAssume that every irreducible component\n$C_a$ of\n$C=\\bigcup_aC_a$ is of dimension $n$. 
\n\n{\\rm 1.}\nFor every irreducible component\n$C_a$ of $C$,\nthe restriction\n${\\mathbf P}(\\widetilde C_a)\\to {\\mathbf P}^\\vee$\nof $p^\\vee\\colon X\\times_{\\mathbf P}Q\n\\to {\\mathbf P}^\\vee$\nis generically radicial\nand the closure \n$D_a=\\overline{\np^\\vee({\\mathbf P}(\\widetilde C_a))}\n\\subset {\\mathbf P}^\\vee$ is a divisor.\n\n{\\rm 2.}\nFor distinct irreducible\ncomponents $C_a\\neq C_b$ of $C$,\nwe have $D_a\\neq D_b$.\n\\end{cor}\n\n\\proof{\n1.\nSince the closed subset\n${\\mathbf P}(\\widetilde C)\\subset X\\times_{\\mathbf P}Q$\nis of codimension $n$\nand $p^\\vee\\colon X\\times_{\\mathbf P}Q\n\\to {\\mathbf P}^\\vee$\nis of relative dimension $n-1$,\nwe have\n$\\dim {\\mathbf P}(\\widetilde C)\n=\\dim {\\mathbf P}^\\vee-1$.\nHence the assertion follows from\nProposition \\ref{prwi}\nand Lemma \\ref{lmwi}.1.\n\n2.\nSimilarly to the proof of 1,\nthe assertion follows from\nProposition \\ref{prwi}\nand Lemma \\ref{lmwi}.2.\n\\qed}\n\n\n\n\\section{Singular support}\\label{sss}\n\nWe recall definitions and results from\n\\cite{Be} in Section \\ref{ssss}.\nWe give a description of\nthe singular support of the $0$-extension\nof a locally constant sheaf\non the complement of\na divisor with simple normal crossings\nunder a certain assumption\nin Section \\ref{ssram}.\n\n\n\n\\subsection{Singular support}\\label{ssss}\n\n\n\nWe recall definitions and results from\n\\cite{Be}.\nWe assume that $X$\nis a smooth scheme over a field $k$.\n\nLet $\\Lambda$ be a finite local ring\nsuch that the characteristic $\\ell$\nof the residue field is\ninvertible in $k$\nand let ${\\cal F}$ be a constructible complex of\n$\\Lambda$-modules on \nthe \\'etale site of $X$.\nRecall that a complex \n${\\cal F}$ of $\\Lambda$-modules\nis {\\em constructible} if every cohomology sheaf\n${\\cal H}^q{\\cal F}$ is constructible\nand if ${\\cal H}^q{\\cal F}=0$ except\nfor finitely many $q$.\nWe say that a constructible complex\n${\\cal F}$ of $\\Lambda$-modules\nis {\\em locally 
constant} if every cohomology sheaf\nis locally constant.\n\nWe say that a constructible complex\n${\\cal F}$ of $\\Lambda$-modules\nis of {\\em finite tor-dimension} \nif there exists an integer $a$\nsuch that ${\\cal H}^q({\\cal F}\n\\otimes^L_{\\Lambda}M)=0$\nfor every $q<a$ and\nevery $\\Lambda$-module $M$.\n\n\\begin{cor}\nLet ${\\cal F}$ be a constructible complex\nof $\\Lambda$-modules on $X$ and\n$C\\subset T^*X$ be a closed conical subset\nwhose base $B$ contains the support of ${\\cal F}$.\nLet $Z\\subset X\\times_{\\mathbf P}Q$\nbe a closed subset of codimension $>\\dim X$\nsuch that \n$p^\\vee \\colon\nX\\times_{\\mathbf P}Q\n={\\mathbf P}(X\\times_{\\mathbf P}T^*{\\mathbf P})\n\\to {\\mathbf P}^\\vee$\nis universally locally acyclic on the complement\nof ${\\mathbf P}(\\widetilde C)\\cup Z\\subset \n{\\mathbf P}(X\\times_{\\mathbf P}T^*{\\mathbf P})$.\nThen ${\\cal F}$ is micro-supported on $C$.\n\\end{cor}\n\n\\proof{\nLet $C_0=SS{\\cal F}$\ndenote the singular support.\nBy Corollary \\ref{corss}.3,\nthe base $B_0$ of $C_0$\nequals the support of ${\\cal F}$.\nBy Theorem \\ref{thmRn},\nwe have ${\\mathbf P}(\\widetilde C)\\cup Z\n\\supset {\\mathbf P}(\\widetilde C_0)$.\nSince $Z\\subset \nX\\times_{\\mathbf P}Q$ is of codimension $>\n\\dim X$ by the assumption and\nevery irreducible component\nof ${\\mathbf P}(\\widetilde C_0)\\subset \nX\\times_{\\mathbf P}Q$ is of codimension $\\dim X$\nby Theorem \\ref{thmss}.2,\nwe have ${\\mathbf P}(\\widetilde C)\n\\supset {\\mathbf P}(\\widetilde C_0)$.\nThus, we have\n$C\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} B\\supset C_0\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} B_0$.\nBy the assumption $B\\supset B_0$,\nwe have $C\\supset C_0=SS{\\cal F}$\nand ${\\cal F}$ is micro-supported on $C$\nby Lemma \\ref{lmmc}.1.\n\\qed}\n\\medskip\n\n\n\n\\begin{thm}[{\\rm \\cite[Theorem 1.7]{Be}}]\\label{thmssD}\nLet $X$ be a {\\em projective}\nsmooth scheme of dimension $n$\nover a field $k$ and\n${\\cal L}$ be an ample\ninvertible ${\\cal O}_X$-module.\nLet $E$ be a $k$-vector space\nof finite dimension and \n$E\\to \\Gamma(X,{\\cal L})$\nbe a $k$-linear mapping\ndefining a {\\em closed} immersion\n$X\\to {\\mathbf P}={\\mathbf P}(E^\\vee)$\nand \nsatisfying the condition {\\rm (E)} \nbefore Lemma {\\rm 
\\ref{lmloc}}.\nLet ${\\cal F}$ be a constructible complex\nof $\\Lambda$-modules on $X$.\nThen, $D=p^\\vee({\\mathbf P}(SS{\\cal F}))\n\\subset {\\mathbf P}^\\vee$\nis a divisor and the complement\n${\\mathbf P}^\\vee\n\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} D$ is the largest open\nsubset where\n$Rp^\\vee_*p^*{\\cal F}$\nis locally constant.\n\\end{thm}\n\n\\proof{\nIt suffices to apply\n\\cite[Theorem 1.7]{Be}\nto $i_*{\\cal F}$.\\qed}\n\n\n\n\\begin{cor}\\label{corssD}\nLet ${\\cal F}$ be a constructible\ncomplex of finite tor-dimension.\nThen, we have\n\\begin{equation}\nSS{\\cal F}\n=\nSS D_X{\\cal F}.\n\\label{eqssD}\n\\end{equation}\n\\end{cor}\n\nOne could deduce \nCorollary \\ref{corssD}\nand its consequence Lemma \\ref{lmRf}.4\ndirectly from the compatibility of vanishing cycles\nwith duality, which seems to be missing\nin the literature.\n\n\\proof{\nWe may assume that $\\Lambda=\\Lambda_0$\nis a finite field by Corollary \\ref{corss}.4.\nSince the assertion is local on $X$,\nwe may take a closed immersion\n$i\\colon X\\to U$\nand an open immersion\n$j\\colon U\\to {\\mathbf P}$\nto a projective space.\nBy Lemma \\ref{lmlcst}.5 and\nCorollary \\ref{corss}.2,\nwe may assume $X$ is projective.\n\nLet $C=SS{\\cal F}$\nand $C'=SS D_X{\\cal F}$\nbe the singular supports.\nWe take a projective embedding\n$i\\colon X\\to {\\mathbf P}$\ndefined by $E$ satisfying\nthe conditions (E) \nbefore Lemma {\\rm \\ref{lmloc}}\nand (C) before\nProposition {\\rm \\ref{prwi}}.\nLet $D=p^\\vee({\\mathbf P}(\\widetilde C)),\nD'=p^\\vee({\\mathbf P}(\\widetilde C'))\n\\subset {\\mathbf P}^\\vee$\nbe the images.\nThen, by Theorem \\ref{thmssD},\nthe complement\n${\\mathbf P}^\\vee\n\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} D$\nis the largest open subset\nwhere the (shifted and twisted) Radon transform\n$Rp^\\vee_*p^*{\\cal F}$\nis locally constant.\nSimilarly,\nthe complement\n${\\mathbf P}^\\vee\n\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} D'$\nis the largest open subset\nwhere \n$Rp^\\vee_*p^*D_X{\\cal F}$\nis locally constant.\nSince the Radon transform 
commutes\nwith duality up to shift and twist,\nwe have\n$D=D'$. Hence we have $C=C'$\nby Corollary \\ref{corrad}.2.\n\\qed}\n\\medskip\n\n\n\nIn the proof of Proposition \\ref{prperv},\nwe will use the following fact\nproved in the course of \nthe proof of \\cite[Theorem 1.5]{Be}.\nFor a line $L\\subset {\\mathbf P}^\\vee$,\nthe axis $A_L\\subset {\\mathbf P}$\nis a linear subspace of codimension $2$\ndefined as the intersection\nof the hyperplanes parametrized by $L$.\nOn the complement\n$X_L^{\\circ}=X\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} (X\\cap A_L)$,\na canonical morphism\n\\begin{equation}\n\\begin{CD}\np_L^{\\circ}\\colon\nX_L^{\\circ} @>>> L\n\\end{CD}\n\\label{eqpLo}\n\\end{equation}\nis defined by sending a point\n$x\\in X_L^{\\circ}$\nto the unique hyperplane $H\\in L$\ncontaining $x$.\n\n\n\n\n\\begin{lm}[{\\rm \\cite[4.9 (ii), (iii)]{Be}}]\\label{lmRn}\nLet $i\\colon X\\to {\\mathbf P}$\nbe an immersion and\nlet ${\\cal F}$ be\na perverse sheaf of\n$\\Lambda$-modules on $X$.\nAfter replacing \nthe immersion $i\\colon X\\to {\\mathbf P}$\nby the composition with\nthe $d$-th Veronese embedding\nfor $d\\geqq 3$,\nfor an irreducible component \n$C_a$ of $C=SS{\\cal F}$,\nlet $D_a=\\overline{p^\\vee({\\mathbf P}\n(\\widetilde C_a))}\\subset {\\mathbf P}^\\vee$\ndenote the closure of the image.\nThen there exist\ndense open subsets\n$D_a^{\\circ} \\subset D_a$\nsatisfying the following conditions:\n\nFor $C_a\\neq C_b$,\nwe have $D_a^{\\circ}\\cap D_b=\\varnothing$.\nThe inverse image\n${\\mathbf P}(\\widetilde C_a)^{\\circ}\n=\n{\\mathbf P}(\\widetilde C_a)\n\\times_{D_a}\nD_a^{\\circ}$\nis finite and radicial over\n$D_a^{\\circ}$.\nFor every $(x,H)\n\\in \n{\\mathbf P}(\\widetilde C_a)^{\\circ}$\nand for every line $L\\subset\n{\\mathbf P}^\\vee$\nsuch that $x\\in X_L^{\\circ}$,\nthat \n$L$ meets $D_a^{\\circ}$\nproperly at $H=p_L^{\\circ}(x)$,\nand that\nthe tangent line \n$TL\\times_L \\{ H\\}$ of $L$\nat $H=p_L^{\\circ}(x)\n\\in {\\mathbf P}^\\vee$\nis not 
perpendicular to\nthe fiber \n$T^*_Q({\\mathbf P}\\times{\\mathbf P}^\\vee)\n\\times_Q(x,H)\n\\subset\nT^*{\\mathbf P}^\\vee\n\\times_{{\\mathbf P}^\\vee}\\{H\\}$,\nwe have \n$\\varphi_x^{-1}({\\cal F},p_L^{\\circ})\\neq 0$\nand\n$\\varphi_x^q({\\cal F},p_L^{\\circ} )= 0$\nfor $q\\neq -1$.\n\\end{lm}\n\n\n\n\n\\subsection{Singular support and ramification}\\label{ssram}\n\nWe assume $k$ is perfect.\nIn this subsection, we describe the\nsingular support $SS{\\cal F}$\nof ${\\cal F}=j_!{\\cal G}$\nfor a locally constant sheaf\n${\\cal G}$ on the complement\n$U=X\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} D$ of a divisor $D$\nwith simple normal crossings\nof a smooth scheme $X$\nover $k$\nand the open immersion $j\\colon U\\to X$.\nFirst, we study the tamely ramified case.\n\n\\begin{pr}\\label{prtame}\nLet ${\\cal G}\\neq 0$ be\na locally constant constructible sheaf\nof $\\Lambda$-modules\non the complement\n$U=X\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} D$ of a divisor $D=\\bigcup_{i=1}^mD_i$\nwith simple normal crossings\nof a smooth scheme $X$\nover a perfect field $k$.\nAssume that ${\\cal G}$\nis {\\em tamely ramified} along $D$.\nFor the open immersion $j\\colon U\\to X$\nand ${\\cal F}=j_!{\\cal G}$,\nwe have \n\\begin{equation}\nSS{\\cal F}\n=\n\\bigcup_{I\\subset\\{1,\\ldots,m\\}}\nT^*_{D_I}X\n\\label{eqFtame}\n\\end{equation}\nwhere $T^*_{D_I}X$ denotes\nthe conormal bundle of\nthe intersection $D_I=\\bigcap_{i\\in I}D_i$.\n\\end{pr}\n\n\\proof{\nBy Lemma \\ref{lmlcst}.3,\nwe have\n$SS{\\cal F}\n\\subset\n\\bigcup_{I\\subset\\{1,\\ldots,m\\}}\nT^*_{D_I}X$.\nBy Corollary \\ref{corss}.2,\nthe assertion is reduced to\nthe case where $X={\\mathbf A}^n$\nand $U={\\mathbf G}_m^n$.\nBy induction on $n=\\dim X$\nand further by the\ncompatibility with smooth pull-back,\nit suffices to show that the fiber\n$T^*_0X$ at the origin $0\\in X=\n{\\mathbf A}^n={\\rm Spec}\\ k[T_1,\\ldots,T_n]$\nis a subset of $SS^w{\\cal F}$.\nIf $n=0$,\nsince ${\\cal G}\\neq 0$,\nwe have \n$SS{\\cal F}=T^*X$ by Lemma \\ref{lmmc}.3.\n\nAssume $n>0$.\nLet $D_i=(T_i=0)\\subset X$ 
and\n$C=\\bigcup_{I\\subsetneqq\\{1,\\ldots,n\\}}\nT^*_{D_I}X$\nbe the union excluding the fiber\n$T^*_0X$.\nSince the morphism\n$f\\colon X\\to Y=\n{\\rm Spec}\\ k[T]$ defined by\n$T\\mapsto T_1+\\cdots+T_n$\nis $C$-transversal,\nit suffices to show the following.\n\\qed}\n\n\n\\begin{lm}\n\\label{lmtame}\nLet $S={\\rm Spec}\\ {\\cal O}_{K}$ be the spectrum\nof a henselian discrete valuation \nring with algebraically closed residue field $k$,\n$X$ be a smooth\nscheme of finite type \nof relative dimension $n-1$ over $S$\nand $D$ be a divisor of $X$\nwith simple normal crossings.\nLet $x$ be a closed point\nof the closed fiber of $X$\ncontained in $D$.\nLet $t_1,\\ldots,t_n\\in {\\mathfrak m}_x$\nbe elements of the maximal ideal such that\n$\\bar t_1,\\ldots,\\bar t_n\\in \n{\\mathfrak m}_x\/{\\mathfrak m}_x^2$\nform a basis.\nAssume that $D$\nis defined by $t_1\\cdots t_n$\nand that the class of \na uniformizer $\\pi$ of $S$\nin ${\\mathfrak m}_x\/\n{\\mathfrak m}_x^2$\nis not contained in any subspace\ngenerated by $n-1$ elements\nof the basis $\\bar t_1,\\ldots,\\bar t_n$.\n\nLet $\\Lambda$ be a finite local ring\nwith residue characteristic $\\ell$ invertible on $S$\nand let ${\\cal G}$ be a locally constant\nconstructible sheaf of $\\Lambda$-modules\non the complement $U=X\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} D$\ntamely ramified along $D$.\nLet $j\\colon U\\to X$ denote\nthe open immersion.\n\n{\\rm 1.}\nOn a neighborhood of $x$,\nthe complex $\\phi(j_!{\\cal G})$\nis acyclic except at $x$\nand at degree $n-1$\nand the action of the inertia group\n$I_K={\\rm Gal}(\\bar K\/K)$\non $\\phi^{n-1}_x(j_!{\\cal G})$\nis tamely ramified.\n\n{\\rm 2.}\nIf ${\\cal G}$ is a sheaf of\nfree $\\Lambda$-modules,\nthen the $\\Lambda$-module\n$\\phi^{n-1}_x(j_!{\\cal G})$\nis free of rank ${\\rm rank}\\ {\\cal G}$.\n\\end{lm}\n\n\\proof{\n1.\nWe may write $\\pi=\\sum_{i=1}^n\nu_{i}t_{i}$ in ${\\mathfrak m}_x$\nwhere the $u_{i}$ are invertible.\nBy replacing $t_{i}$ by 
\n$u_{i}t_{i}$,\nwe may assume that \n$X$ is \\'etale over\n${\\rm Spec} \\ {\\cal O}_{K}\n[t_{1},\\ldots,t_{n}]\/\n(\\pi-(t_{1}+\\cdots+t_n))$.\nBy Abhyankar's lemma,\nwe may assume that ${\\cal G}$\nis trivialized by the abelian covering\n$s_{i}^{m}=t_{i}$ for an integer\n$m$ invertible on $S$.\nSince the assertion is \\'etale local on $X$,\nwe may assume\n$X={\\rm Spec} \\ {\\cal O}_{K}\n[t_{1},\\ldots,t_n]\/\n(\\pi-(t_{1}+\\cdots+t_n))$.\nHence by the variant of\n\\cite[1.3.3 (i)]{app} for $f_!$,\nthe complex $\\phi(j_!{\\cal G})$\nis acyclic outside $x$.\nSince the complex\n$\\phi(j_!{\\cal G})[n-1]$ is a perverse \nsheaf by \\cite[Corollaire 4.6]{au},\nthe complex\n$\\phi(j_!{\\cal G})$ is acyclic except at \ndegree $n-1$.\n\n\nLet $p\\colon X'\\to X$ be the blow-up at $x$\nand $j'\\colon U\\to X'$ be the open immersion.\nLet $D'$ be the proper transform of $D$\nand $E$ be the exceptional divisor.\nThen, the union of $D'$ with the closed\nfiber $X'_s$ has simple normal crossings.\nHence, the action of the inertia group \n$I_K$ on\n$\\phi(j'_!{\\cal G})$ is tamely ramified\nby \\cite[Proposition 6]{epsilon}\nand $\\phi_x(j_!{\\cal G})=R\\Gamma(E,\\phi(j'_!{\\cal G}))$\nis also tamely ramified.\n\n2.\nBy 1,\nwe may assume $\\Lambda$ is a field.\nWe consider the stratification of\n$E$ defined by the intersections\nof $E$ with the intersections of\nthe irreducible components of $D'$.\nThen, on each stratum,\nthe restrictions of the cohomology sheaves\n$\\phi^q(j'_!{\\cal G})$ are locally constant\nand are tamely ramified along the boundary\nby \\cite[Proposition 6]{epsilon}.\nFurther, the alternating sum of\nthe ranks is $0$ except for\n$E^{\\circ} =E\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} (E\\cap D')$\nand equals ${\\rm rank}\\ {\\cal G}$ on\n$E^{\\circ}$.\nHence, we have\n$$\\dim \\phi_x(j_!{\\cal G})\n=\\chi(E,\\phi(j'_!{\\cal G}))\n=\\chi_c(E^{\\circ},\\phi(j'_!{\\cal G}))\n={\\rm rank}\\ {\\cal G}\\cdot\n\\chi_c(E^{\\circ}).$$\nSince 
$E^{\\circ}=E_{n-1}^{\\circ}\n\\subset {\\mathbf G}_m^{n-1}\n={\\rm Spec}\\ k[t_1^{\\pm 1},\\ldots,\nt_{n-1}^{\\pm 1}]$\nis the complement\nof the intersection\nwith the hyperplane\n$\\sum_it_i+1=0$\nand the intersection \nis isomorphic to $E_{n-2}^{\\circ}$,\nwe have\n$\\chi_c(E^{\\circ}_{n-1})=\n\\chi_c({\\mathbf G}_m^{n-1})-\n\\chi_c(E^{\\circ}_{n-2})=\n(-1)^{n-1}$\nby induction on $n$.\nThus the assertion follows.\n\\qed}\n\\medskip\n\nTo give a description of the singular\nsupport in certain wildly ramified cases\nusing ramification theory,\nwe briefly recall ramification theory\n\\cite{AS}, \\cite{nonlog}.\nLet $K$ be a henselian discrete valuation field\nwith residue field of characteristic $p>0$\nand $G_K={\\rm Gal}(K_{\\rm sep}\/K)$\nbe the absolute Galois group.\nThen, the (non-logarithmic)\nfiltration $(G_K^r)_{r\\geqq 1}$\nby ramification groups\nis defined in \\cite[Definition 3.4]{AS}.\nIt is a decreasing filtration by closed\nnormal subgroups\nindexed by rational numbers $\\geqq 1$.\n\nFor a real number $r\\geqq 1$,\nwe define subgroups\n$G_K^{r+}\\subset G_K^{r-}$\nby $G_K^{r+}=\\overline{\\bigcup_{s>r}\nG_K^s}$\nand $G_K^{r-}=\\bigcap_{s<r}G_K^s$.\nWe have\n$G_K^{r-}=G_K^{r}$\nfor rational numbers $r>1$\nand \n$G_K^{r-}=G_K^{r+}$\nfor irrational numbers $r>1$.\n\nLet $\\Lambda$ be a finite field\nof characteristic $\\neq p$\nand let $V$ be a continuous representation\nof $G_K$ on a $\\Lambda$-vector space\nof finite dimension.\nThen, since $P=G_K^{1+}$ is a pro-$p$ group\nand since \n$G_K^{r-}=G_K^{r}$\nfor rational $r$ and\n$G_K^{r-}=G_K^{r+}$\nfor irrational $r$,\nthere exists a unique decomposition\n\\begin{equation}\nV=\\bigoplus_{r\\geqq 1}V^{(r)}\n\\label{eqslp}\n\\end{equation}\ncalled the slope decomposition\ncharacterized by the condition that\nthe $G_K^{r+}$ fixed part\n$V^{G_K^{r+}}$ is equal to the sum\n$\\bigoplus_{s\\leqq r}V^{(s)}$.\n\n\n\nWe study a geometric case\nwhere $X$ is a smooth scheme\nover a perfect field $k$ of characteristic\n$p>0$.\nLet $D$ be a reduced and irreducible divisor\nand 
$U=X\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} D$ be the complement.\nLet ${\\cal G}$ be a \nlocally constant constructible sheaf of\n$\\Lambda$-vector spaces on $U$.\nLet $\\xi$ be the generic point of\nan irreducible component of $D$.\nThen, the local ring\n${\\cal O}_{X,\\xi}$ \nis a discrete valuation\nring and the fraction field $K$\nof its henselization is called\nthe local field at $\\xi$.\nThe stalk of ${\\cal G}$\nat the geometric point of $U$\ndefined by a separable closure\n$K_{\\rm sep}$ defines\na $\\Lambda$-vector space $V$\nwith an action\nof the absolute Galois group $G_K$.\n\nFor a rational number $r>1$,\nthe graded quotient\n${\\rm Gr}^rG_K\n=G_K^r\/G_K^{r+}$ is \na profinite abelian group annihilated by $p$\n\\cite[Corollary 2.28]{nonlog}\nand its dual group is related\nto differential forms as follows.\nWe define ideals\n${\\mathfrak m}_{K_{\\rm sep}}^{(r)}$\nand ${\\mathfrak m}_{K_{\\rm sep}}^{(r+)}$\nof the valuation ring ${\\cal O}_{K_{\\rm sep}}$ by\n${\\mathfrak m}_{K_{\\rm sep}}^{(r)}\n=\\{x\\in K_{\\rm sep}\\mid {\\rm ord}_Kx\\geqq r\\}$\nand ${\\mathfrak m}_{K_{\\rm sep}}^{(r+)}\n=\\{x\\in K_{\\rm sep}\\mid {\\rm ord}_Kx>r\\}$\nwhere ${\\rm ord}_K$ denotes the valuation \nnormalized by ${\\rm ord}_K(\\pi)=1$\nfor a uniformizer $\\pi$ of $K$.\nThe residue field $\\bar F$ of ${\\cal O}_{K_{\\rm sep}}$\nis an algebraic closure of\nthe residue field $F$ of $K$ and \nthe quotient\n${\\mathfrak m}_{K_{\\rm sep}}^{(r)}\n\/{\\mathfrak m}_{K_{\\rm sep}}^{(r+)}$\nis an $\\bar F$-vector space of dimension 1.\nA canonical injection\n\\begin{equation}\nch\\colon\nHom_{{\\mathbf F}_p}(\n{\\rm Gr}^rG_K,\n{\\mathbf F}_p)\n\\to \nHom_{\\bar F}\n({\\mathfrak m}_{K_{\\rm sep}}^{(r)}\/\n{\\mathfrak m}_{K_{\\rm sep}}^{(r+)},\n\\Omega^1_{X\/k,\\xi}\n\\otimes{\\bar F})\n\\label{eqcf}\n\\end{equation}\nis also defined\n\\cite[Corollary 2.28]{nonlog}.\n\nWe say that the ramification of\n${\\cal G}$ along $D$ is\nisoclinic of slope $r\\geqq 1$\nif $V=V^{(r)}$ in the slope 
decomposition\n(\\ref{eqslp}).\nThe ramification of\n${\\cal G}$ along $D$ is\nisoclinic of slope $1$\nif and only if\nthe corresponding Galois representation $V$\nis tamely ramified.\nAssume that the ramification of\n${\\cal G}$ along $D$ is\nisoclinic of slope $r>1$.\nAssume also that \n$\\Lambda$ contains a primitive\n$p$-th root of 1 and\nidentify ${\\mathbf F}_p$ with\na subgroup of \n$\\Lambda^\\times$.\nThen, $V=V^{(r)}$ is further decomposed\nby characters\n\\begin{equation}\nV=\\bigoplus_{\\chi\\colon\n{\\rm Gr}^rG_K\\to \n{\\mathbf F}_p}\n\\chi^{\\oplus m(\\chi)}.\n\\label{eqchd}\n\\end{equation}\nFor a character $\\chi$ appearing\nin the decomposition (\\ref{eqchd}),\nthe twisted differential form\n$ch(\\chi)$ defined on a\nfinite covering of a dense open\nscheme of $D$ is\ncalled a characteristic form of ${\\cal G}$.\n\n\nAssume that \n$U=X\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} D$ is\nthe complement of a divisor\nwith simple normal crossings $D$\nand let $D_1,\\ldots,D_m$\nbe the irreducible components of $D$.\nWe say the ramification of\n${\\cal G}$ along $D$ is\nisoclinic of slope $R=\\sum_ir_iD_i$\nif the ramification of\n${\\cal G}$ along $D_i$ is\nisoclinic of slope $r_i$\nfor every irreducible component\n$D_i$ of $D$.\n\nIn \\cite[Definition 3.1]{nonlog},\nwe define the condition for\nthe ramification of ${\\cal G}$ along $D$\nto be non-degenerate.\nThe condition implies that\nthe characteristic forms\nextend to differential forms\nwithout zeros on the boundary.\nWe say that \nthe ramification of ${\\cal G}$ is non-degenerate along $D$ \nif ${\\cal G}$ admits \\'etale locally\na direct sum decomposition\n${\\cal G}=\\bigoplus_j{\\cal G}_j$\nsuch that each ${\\cal G}_j$\nis isoclinic of slope $R_j\\geqq D$\nfor a ${\\mathbf Q}$-linear combination $R_j$\nof irreducible components of $D$\nand that the ramification of ${\\cal G}_j$\nis non-degenerate along $D$ at multiplicity $R_j$. 
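\nFor example (under the standard\nconventions for Artin--Schreier sheaves,\ncf.\\ \\cite{AS}, \\cite{nonlog},\nwhich we do not recall here),\nlet $X={\\mathbf A}^2={\\rm Spec}\\ k[x,y]$,\nlet $D=(x=0)$ and\nlet ${\\cal G}$ be the rank $1$\nlocally constant sheaf on\n$U=X\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} D$\ndefined by the Artin--Schreier equation\n$t^p-t=x^{-n}$\nand a non-trivial character\n$\\psi\\colon {\\mathbf F}_p\\to \\Lambda^\\times$,\nfor an integer $n>0$ prime to $p$.\nThen the ramification of ${\\cal G}$\nalong $D$ is isoclinic of slope\n$R=(n+1)D>D$,\nthe unique character $\\chi$\nappearing in the decomposition\n(\\ref{eqchd}) has characteristic form\n\\begin{equation*}\nch(\\chi)=d(x^{-n})=-nx^{-n-1}dx\n\\end{equation*}\nand, since this form has no zero\nalong $D$ after the twist,\nthe ramification of ${\\cal G}$\nis non-degenerate along $D$.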
Note that \nthere exists \na closed subset of codimension at least 2\nsuch that on its complement,\nthe ramification of ${\\cal G}$ along $D$\nis non-degenerate.\n\nWe introduce a slightly stronger\ncondition that implies local acyclicity.\nWe say that \nthe ramification of ${\\cal G}$ is \n{\\em strongly} non-degenerate along $D$\nif it satisfies the condition above\nwith $R_j\\geqq D$\nreplaced by $R_j=D$\nor $R_j>D$.\nThe inequality $R_j>D$\nmeans that the coefficient in $R_j$\nof every irreducible component of $D$\nis $>1$.\nNote that \nif the ramification of ${\\cal G}$ along $D$\nis non-degenerate,\non the complement of the singular locus\nof $D$, \nthe ramification of ${\\cal G}$ along $D$\nis strongly non-degenerate.\n\nLet $j\\colon U=X\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} D\\to X$\ndenote the open immersion.\nWe define a closed conical subset\n$S(j_!{\\cal G})\\subset T^*X$\nfollowing the definition \nof the characteristic cycle given\n\\cite[Definition 3.5]{nonlog}\nin the strongly non-degenerate case.\nWe will show that $S(j_!{\\cal G})$\nis in fact equal to the singular support\nin Proposition \\ref{prnd} below.\n\nAssume first that \nthe ramification of ${\\cal G}$\nalong $D$ is isoclinic of\nslope $R=D$.\nThen, the locally constant sheaf\n${\\cal G}$ on $U$ is tamely ramified along $D$.\nIn this case,\nlet the singular support $SSj_!{\\cal G}$\n(\\ref{eqFtame}) denoted by\n\\begin{equation}\nS(j_!{\\cal G})\n=\n\\bigcup_IT^*_{D_I}X\n\\label{eqtameS}\n\\end{equation}\nwhere $T^*_{D_I}X$\ndenotes the conormal bundle\nof the intersection $D_I=\\bigcap_ID_i$\nfor a set of indices\n$I\\subset \\{1,\\ldots,m\\}$.\n\nAssume the ramification of ${\\cal G}$\nalong $D$ is isoclinic of\nslope $R=\\sum_ir_iD_i>\nD=\\sum_iD_i$. 
\nFor each irreducible component,\nwe have a decomposition by \ncharacters \n$V=\\bigoplus_{\\chi\\colon\n{\\rm Gr}^{r_i}G_{K_i}\\to \n{\\mathbf F}_p}\n\\chi^{\\oplus m(\\chi)}$.\nFurther, the characteristic form\nof each character $\\chi$\nappearing in the decomposition\ndefines a sub line bundle $L_\\chi$\nof the pull-back $D_\\chi\\times_XT^*X$\nof the cotangent bundle \nto a finite covering $\\pi_\\chi\n\\colon D_\\chi\\to D_i$\nby the non-degenerate assumption.\nThen, define \na closed conical subset\n$C=S(j_!{\\cal G})\\subset T^*X$ in\nthe case $R>D$ by \n\\begin{equation}\nS(j_!{\\cal G})\n=\nT^*_XX\\cup\n\\bigcup_i\n\\bigcup_{\\chi}\n\\pi_\\chi(L_\\chi).\n\\label{eqwildS}\n\\end{equation}\nIn the general strongly non-degenerate case,\nwe define a closed conical subset\n$C=S(j_!{\\cal G})\\subset T^*X$\nby the additivity\nand \\'etale descent.\n\n\\begin{pr}[{\\rm \\cite[Proposition 3.15]{nonlog}}]\\label{prnd}\nAssume that the ramification of\na locally constant constructible sheaf\n${\\cal G}$ of $\\Lambda$-modules\non the complement $U=X\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} D$ \nof a divisor with simple normal crossings\nis strongly non-degenerate\nalong $D$.\nThen, \n$S(j_!{\\cal G})$ defined by \n{\\rm (\\ref{eqtameS})},\n{\\rm (\\ref{eqwildS})},\nthe additivity and by \\'etale descent\nsatisfies\n\\begin{equation}\nSS j_!{\\cal G}\n=\nS(j_!{\\cal G}).\n\\label{eqSSj}\n\\end{equation}\n\\end{pr}\n\n\n\n\\proof{\nSince the assertion is \\'etale local,\nwe may assume that ${\\cal G}$\nis isoclinic of slope $R=D$ or $R>D$\nand that the ramification of ${\\cal G}$\nis non-degenerate along $D$ at multiplicity $R$.\nIf $R=D$, it is proved in Proposition \\ref{prtame}.\nTo prove the case $R>D$,\nwe use the following Lemma.\n\n\\begin{lm}\\label{lmDf2}\nLet $E$ be a $k$-vector space\nof finite dimension and \n$E\\to \\Gamma(X,{\\cal L})$\nbe a $k$-linear mapping\ndefining an immersion\n$X\\to {\\mathbf P}={\\mathbf P}(E^\\vee)$.\nLet $T\\subset X$ be\nan 
integral closed subscheme\nand define a subspace $E'={\\rm Ker}(E\\to \n\\Gamma(T,{\\cal L}\\otimes_{{\\cal O}_X}{\\cal O}_T))$\nand ${\\mathbf P}^{\\prime\\vee}\n={\\mathbf P}(E')\\subset\n{\\mathbf P}^{\\vee}\n={\\mathbf P}(E)$.\n\n{\\rm 1.}\n$T\\times {\\mathbf P}^{\\prime\\vee}\n\\subset\nX\\times {\\mathbf P}^{\\vee}$\nis contained in\n$T\\times_{\\mathbf P}Q\n\\subset X\\times_{\\mathbf P}Q$\nand the complement\n$(T\\times_{\\mathbf P}Q)\n\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~}\n(T\\times {\\mathbf P}^{\\prime\\vee})$\nis the largest open subscheme where\n$T\\times_{\\mathbf P}Q\n\\to {\\mathbf P}^\\vee$\nis flat.\n\n{\\rm 2.}\nThe codimension of \n$E'\\subset E$\nis strictly larger than $\\dim T$.\n\\end{lm}\n\n\\proof{1.\nFor a hyperplane $H\\subset {\\mathbf P}$,\nwe have $T\\subset H$ \nif $H\\in {\\mathbf P}^{\\prime\\vee}$\nand\n$T\\cap H$ is a divisor of $T$\notherwise.\nHence we have\n$T\\times {\\mathbf P}^{\\prime\\vee}\n\\subset\nT\\times_{\\mathbf P}Q$\nand on the complement of\n$T\\times {\\mathbf P}^{\\prime\\vee}$,\nthe divisor \n$T\\times_{\\mathbf P}Q$\nof $T\\times{\\mathbf P}^\\vee$\nis flat over ${\\mathbf P}^\\vee$.\n\n\n\n2.\nThe immersion $X\\to {\\mathbf P}$\ninduces an immersion\n$T\\to {\\mathbf P}(E\/E')$.\nHence we have\n$\\dim T<\\dim E\/E'$.\n\\qed}\n\\medskip\n\nWe assume $R>D$.\nFirst, we show the inclusion\n$SSj_!{\\cal G}\n\\subset S(j_!{\\cal G})$.\nSince the assertion is local on $X$,\nwe may assume that $X$ is quasi-projective.\nLet $i\\colon X\\to {\\mathbf P}$\nbe an immersion \nsatisfying the condition {\\rm (E)}\nbefore Lemma {\\rm \\ref{lmloc}}\nand the condition {\\rm (C)} before\nProposition {\\rm \\ref{prwi}}.\nFor each irreducible component $D_i$ of $D$,\nlet ${\\mathbf P}^\\vee_i={\\mathbf P}(E_i)\n\\subset\n{\\mathbf P}^\\vee={\\mathbf P}(E)$\nbe the subspace defined by\n$E_i={\\rm Ker}(E\\to \\Gamma(D_i,{\\cal L}))$.\nLet $Z\\subset X\\times_{\\mathbf P}Q$\nbe the union \n$Z=\\bigcup_{i=1,\\ldots,m}\nD_i\\times 
{\\mathbf P}^\\vee_i$.\nThen, by Lemma \\ref{lmDf2}.1,\nthe divisor \n$D\\times_{\\mathbf P}Q\n\\subset \nX\\times_{\\mathbf P}Q$ with\nsimple normal crossings\nis flat over ${\\mathbf P}^\\vee$\noutside $Z$.\nHence, by \\cite[Proposition 3.15]{nonlog},\n$p^\\vee\\colon \nX\\times_{\\mathbf P}Q\n\\to {\\mathbf P}^\\vee$\nis universally locally acyclic relative to\nthe pull-back of $j_!{\\cal G}$\noutside the union\n$Z\\cup {\\mathbf P}(\\widetilde C)$\nfor $C=S(j_!{\\cal G})$.\nFurther by Lemma \\ref{lmDf2}.2,\nthe closed subset\n$Z\\subset X\\times_{\\mathbf P}Q$\nis of codimension $>\\dim X$.\nHence we have\n$SSj_!{\\cal G}\n\\subset S(j_!{\\cal G})$\nby\nCorollary \\ref{corRn}.\n\nWe show the equality\n$SSj_!{\\cal G}\n= S(j_!{\\cal G})$\nkeeping the assumption $R>D$.\nBy Theorem \\ref{thmss}.2,\nwe may assume $D$ is irreducible\nsince the assertion is local.\nFurther we may assume \nthere exists a unique $L_\\chi$\nby the additivity\nsince the assertion is \\'etale local.\nThen, the assertion follows from\nthe contrapositive of\nLemma \\ref{lmlcst}.1 (1)$\\Rightarrow$(2).\n\\qed}\n\n\n\n\n\\section{Characteristic cycle and \nthe Milnor formula}\\label{sCC}\n\nIn this section,\n$X$ denotes a smooth scheme\nover a perfect field $k$.\nAssume that every irreducible component\nof $X$ is of dimension $n$,\nunless otherwise stated.\n\n\n\\subsection{Morphisms defined by pencils}\n\\label{sspl}\n\n\nLet $E$ be a $k$-vector space of\nfinite dimension and\n${\\mathbf P}={\\mathbf P}(E^\\vee)$\nand its dual\n${\\mathbf P}^\\vee={\\mathbf P}(E)$\nbe as in Section \\ref{ssRn}.\nLet $X$ be a smooth scheme\nover $k$\nand let $i\\colon X\\to {\\mathbf P}$\nbe an immersion.\nLet $L\\subset {\\mathbf P}^\\vee$\nbe a line.\nThe morphism $p_L\\colon X_L\\to L$\ndefined by the pencil\nis defined by the cartesian diagram\n\\begin{equation}\n\\begin{CD}\nX_L@>>>\nX\\times_{\\mathbf P}Q\\\\\n@V{p_L}VV @VV{p^\\vee}V\\\\\nL@>>> {\\mathbf 
P}^\\vee.\n\\end{CD}\n\\label{eqpL}\n\\end{equation}\nThe axis $A_L\\subset {\\mathbf P}$\nis the intersection of hyperplanes\nparametrized by $L$.\nIf the axis $A_L$ meets $X$ properly,\nthe scheme $X_L$ is the blow-up of $X$\nat the intersection\n$X\\cap A_L$.\nThe morphism\n$p_L^{\\circ}\\colon\nX_L^{\\circ} \\to L$\n(\\ref{eqpLo})\nis the restriction of\n$p_L\\colon X_L\\to L$\nto the complement \n$X_L^{\\circ} =X\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} (X\\cap A_L)$.\n\n\\begin{lm}\\label{lmAL}\nLet $C\\subset T^*X$ be a\nclosed conical subset\nand \n${\\mathbf P}(\\widetilde C)\\subset\nX\\times_{\\mathbf P}Q=\n{\\mathbf P}(X\\times_{\\mathbf P}T^*{\\mathbf P})$\nbe the projectivization.\nAssume that every irreducible component \nof $X$ and every irreducible component\n$C_a$ of $C$ are of dimension $n$.\nLet $L\\subset {\\mathbf P}^\\vee$\nbe a line and\n$p_L^{\\circ}\\colon X_L^{\\circ}\\to L$\nbe the morphism defined\nby the pencil.\n\n\nThe complement\n$X_L^{\\circ}\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~}( X_L^{\\circ} \\cap\n{\\mathbf P}(\\widetilde C))\n\\subset X_L^{\\circ}$\nis the largest open subset where\n$p_L^{\\circ}\\colon X_L^{\\circ}\\to L$\nis $C$-transversal.\n\\end{lm}\n\n\\proof{\nSince\n$X\\times_{\\mathbf P}Q\\to X$\nis smooth,\nthe immersion $X_L^{\\circ}\\to\nX\\times_{\\mathbf P}Q$ is $C$-transversal\nby Lemma \\ref{lmCh}.3.\nHence, for $x\\in X_L^{\\circ}$, the morphism\n$p_L^{\\circ}\\colon X_L^{\\circ}\\to L$\nis $C$-transversal at $x$ if and only if\n$(x,p_L(x))\\in X\\times_{\\mathbf P}Q$\nis not contained in ${\\mathbf P}(\\widetilde C)$\nby Lemma \\ref{lmPC}.1 and by\nLemma \\ref{lmChf}.1\napplied to the cartesian diagram (\\ref{eqpL}).\nHence the assertion follows.\n\\qed}\n\\medskip\n\nWe construct the universal family of\n(\\ref{eqpL}).\nLet ${\\mathbf G}=\n{\\rm Gr}(1,{\\mathbf P}^\\vee)$ be\nthe Grassmannian variety\nparametrizing lines in\n${\\mathbf P}^\\vee$.\nThe universal line\n${\\mathbf D}\\subset\n{\\mathbf 
P}^\\vee\\times\n{\\mathbf G}$ \nis canonically identified with the flag variety\nparametrizing\npairs $(H,L)$ of points $H$ of\n${\\mathbf P}^\\vee$ \nand lines $L$ passing through $H$.\nIt is the same as the flag variety\n${\\rm Fl}(1,2,E)$ \nparametrizing pairs\nof a line and a plane containing the line in $E$.\n\nThe projective space\n${\\mathbf P}^\\vee$ and the\nGrassmannian variety\n${\\mathbf G}$\nare also equal to\nthe Grassmannian varieties\n${\\rm Gr}(1,E)$ and\n${\\rm Gr}(2,E)$ \nparametrizing lines and planes\nin $E$ respectively.\nThe projections\n${\\mathbf P}^\\vee\n\\gets {\\mathbf D}\n\\to {\\mathbf G}$\nsending a pair $(H,L)$ to the hyperplane $H$\nand to the line $L$ respectively\nare the canonical morphisms\n${\\rm Gr}(1,E)\\gets {\\rm Fl}(1,2,E)\\to {\\rm Gr}(2,E)$.\nBy the projection ${\\mathbf D}\\to\n{\\mathbf P}^\\vee$,\nit is also canonically identified\nwith the projective space bundle\nassociated to the tangent bundle\n${\\mathbf D}={\\mathbf P}(T{\\mathbf P}^\\vee)$\nby sending a line passing through\na point to the tangent line\nof the line at the point.\nLet ${\\mathbf A}\\subset {\\mathbf P}\n\\times {\\mathbf G}$\nbe the universal family\nof the intersections\nof hyperplanes parametrized\nby lines.\nThe scheme ${\\mathbf A}$\nis also canonically identified\nwith the Grassmann bundle\n${\\rm Gr}(2,T^*{\\mathbf P})$\nover ${\\mathbf P}$.\n\nLet $X$ be a smooth scheme\nover $k$\nand let $i\\colon X\\to {\\mathbf P}$\nbe an immersion.\nWe construct a commutative diagram\n\\begin{equation}\n\\begin{CD}\nX\\times_{\\mathbf P}Q\n@<<<\n(X\\times {\\mathbf G})'\n@>>> \nX\\times {\\mathbf G}\n\\\\\n@V{p^\\vee}VV @VVV @VVV\\\\\n {\\mathbf P}^\\vee\n @<<<\n{\\mathbf D}\n@>>>\n{\\mathbf G}.\n\\end{CD}\n\\label{eqGLP}\n\\end{equation}\nwhere the left square is cartesian\nas follows.\nThe left vertical arrow and the lower line\nare the canonical morphisms\n${\\rm Gr}(1,X\\times_{\\mathbf P}T^*{\\mathbf P})\n\\to\n{\\rm Gr}(1,E)\n\\gets\n{\\rm 
Fl}(1,2,E)\n\\to\n{\\rm Gr}(2,E)$.\nThe fiber product\n${\\mathbf D}\\times_{{\\mathbf P}^{\\vee}} \n(X\\times_{\\mathbf P}Q)$\nis canonically identified with \nthe blow-up \n$(X\\times {\\mathbf G})'\n\\to X\\times {\\mathbf G}$\nat the intersection ${\\mathbf A}\n\\cap (X\\times {\\mathbf G})\n=\nX\\times_{\\mathbf P} {\\mathbf A}\n=\n{\\rm Gr}(2,X\\times_{\\mathbf P}T^*{\\mathbf P})$\nby the elementary lemma below.\n\n\nThe canonical morphism \n$(X\\times {\\mathbf G})'\n\\to X\\times {\\mathbf G}$\nis an isomorphism on \nthe complement\n\\begin{equation}\n(X\\times {\\mathbf G})^{\\circ}\n=\n(X\\times {\\mathbf G})\n\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~}\n(X\\times_{\\mathbf P} {\\mathbf A}).\n\\label{eqXGo}\n\\end{equation}\nThe scheme\n$(X\\times {\\mathbf G})^{\\circ}$\nregarded as an open subscheme of\n$(X\\times {\\mathbf G})'$\nis the complement of\n${\\rm Fl}(1,2,X\\times_{\\mathbf P}T^*{\\mathbf P})$\nregarded as a closed subscheme.\nFor a line\n$L\\subset {\\mathbf P}^\\vee$,\nthe diagram (\\ref{eqpL}) is the fiber of \nthe middle vertical arrow \n$(X\\times {\\mathbf G})'\n\\to {\\mathbf D}$ of (\\ref{eqGLP}) at the\npoint of ${\\mathbf G}$ corresponding to $L$.\nThe complement\n$X_L^{\\circ}$ \nis identified with \n$X_L\\cap (X\\times {\\mathbf G})^{\\circ}$.\n\n\n\\begin{lm}\\label{lmFEL}\nLet $0\\to F\\to E\\to L\\to 0$\nbe an exact sequence of\nvector bundles on a scheme $X$\nsuch that $L$ is a line bundle\nand let $1\\leqq r\\leqq {\\rm rank}\\ F-1$ \nbe an integer.\nThen, the Grassmannian bundles\nparametrizing subbundles\nof rank $r$ and of $r-1$\nand the flag bundles\nparametrizing pairs of subbundles\nof rank $r$ and $r-1$\nwith inclusion form a cartesian diagram\n\\begin{equation}\n\\begin{CD}\n{\\rm Fl}(r-1,r,F)\n@>>>\n {\\rm Gr}(r,F)\\\\\n@VVV@VVV\\\\\nP={\\rm Gr}(r-1,F)\\times_\n{{\\rm Gr}(r-1,E)}\n{\\rm Fl}(r-1,r,E)\n@>>>\n {\\rm Gr}(r,E).\n\\end{CD}\n\\label{eqFlG}\n\\end{equation} \nThe right vertical arrow\nis a regular 
immersion\nof codimension $r$\nand the lower horizontal arrow\nis the blow-up at the\nclosed subscheme\n${\\rm Gr}(r,F)\\subset\n{\\rm Gr}(r,E)$.\n\\end{lm}\n\n\\proof{\nLet $p\\colon P={\\rm Gr}(r-1,F)\\times_\n{{\\rm Gr}(r-1,E)}\n{\\rm Fl}(r-1,r,E)\\to X$\nand\n$$\\xymatrix{\nD={\\rm Fl}(r-1,r,E)\\ar[rd]_f\n\\ar[rr]^d\n&&\nG={\\rm Gr}(r,E)\\ar[ld]^g\n\\\\&X\n}$$\ndenote the canonical morphisms. \nLet $0\\to {\\cal F}\\to\n{\\cal E}\\to{\\cal L}\\to 0$\nbe the corresponding exact sequence\nof locally free ${\\cal O}_X$-modules.\nLet ${\\cal V}\\subset g^*{\\cal E}$\nand ${\\cal W}\\subset d^*{\\cal V}\n\\subset f^*{\\cal E}$\ndenote the universal sub \n${\\cal O}_G$-module of rank $r$\nand the universal sub \n${\\cal O}_D$-module of rank $r-1$\nrespectively.\n\n\nOn $P\\times_{\n{\\rm Gr}(r,E)}\n{\\rm Gr}(r,F)$,\nthe pull-back of\n${\\cal W}\\subset d^*{\\cal V}$\ndefines a flag on the pull-back of\n${\\cal F}$.\nHence, the diagram (\\ref{eqFlG})\nis cartesian.\nOn the complement\n${\\rm Gr}(r,E)\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~}\n{\\rm Gr}(r,F)$,\nthe restriction\nof ${\\cal V}$ and its\nintersection with\nthe pull-back of\n${\\cal F}$ \ndefine a flag on the restriction of\n$g^*{\\cal E}$.\nHence the restriction\n$P\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} {\\rm Fl}(r-1,r,F)\n\\to {\\rm Gr}(r,E)\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~}\n{\\rm Gr}(r,F)$\nis an isomorphism.\n\n\nThe ideal sheaf\n${\\cal I}\\subset {\\cal O}_G$ \ndefining the closed subscheme ${\\rm Gr}(r,F)\n\\subset{\\rm Gr}(r,E)$\nis characterized by the condition that\n${\\cal I}\\cdot g^*{\\cal L}$\nequals the image of the\ncomposition ${\\cal V}\\to g^*{\\cal E}\n\\to g^*{\\cal L}$.\nHence, the immersion\n${\\rm Gr}(r,F)\n\\to{\\rm Gr}(r,E)$ is a regular\nimmersion of codimension $r={\\rm rank}\\ {\\cal V}$.\n\nSince the image of the\ncomposition ${\\rm pr}_2^*d^*{\\cal V}\n\\to p^*{\\cal E}\n\\to p^*{\\cal L}$ is isomorphic to\nthe invertible ${\\cal O}_P$-module\n${\\rm 
pr}_2^*(d^*{\\cal V}\/{\\cal W})$,\nthe ideal \n${\\cal I}\\cdot {\\cal O}_P$ \ndefining the closed subscheme\n${\\rm Fl}(r-1,r,F)\\subset P$\nis locally generated by one element.\nSince ${\\rm Fl}(r-1,r,F)$ and $P$\nare smooth over $X$,\n${\\rm Fl}(r-1,r,F)$ is a divisor of $P$\nflat over $X$ and \nthe ideal \n${\\cal I}\\cdot {\\cal O}_P$ is\ninvertible.\nThus the morphism $P\\to {\\rm Gr}(r,E)$\nis canonically lifted to the blow-up\n$P\\to {\\rm Gr}(r,E)'$\nby the universality of the blow-up.\n\nSince the pull-back $\\pi^*{\\cal I}$\non the blow-up $\\pi\\colon {\\rm Gr}(r,E)'\n\\to {\\rm Gr}(r,E)$ is an invertible\nideal, the image $\\pi^*{\\cal I}\\cdot\n\\pi^*g^*{\\cal L}$ of\nthe composition\n$\\pi^*{\\cal V}\\to\n\\pi^*g^*{\\cal E}\\to\n\\pi^*g^*{\\cal L}$ is an invertible module.\nHence the kernel\n$\\pi^*{\\cal V}\\cap\n\\pi^*g^*{\\cal F}$ is locally \na direct summand of\n$\\pi^*g^*{\\cal F}$ of rank $r-1$\nand \n$\\pi^*{\\cal V}\\cap\n\\pi^*g^*{\\cal F}\n\\subset \\pi^*{\\cal V}$ \ndefines a flag on\n$\\pi^*g^*{\\cal E}$.\nThus \nthe inverse morphism\n${\\rm Gr}(r,E)'\\to P$\nis defined\nand the assertion follows.\n\\qed}\n\n\\subsection{Isolated characteristic points and\nintersection numbers}\n\\label{ssic}\n\n\nThe characteristic cycle will\nbe defined as a cycle\ncharacterized by the Milnor formula\nat isolated characteristic points.\nWe will introduce the notion of\nisolated characteristic point\nand study the intersection number\nappearing in the Milnor formula.\n\n\n\\begin{df}\\label{dfisoc}\nLet $X$ be a smooth scheme over a\nfield $k$.\nLet $C\\subset T^*X$ be a \nclosed conical subset\nof the cotangent bundle $T^*X$.\nLet $h\\colon W\\to X$\nbe an \\'etale morphism\nand $f\\colon W\\to Y$ be a morphism\nover $k$ to a smooth curve over $k$.\n\n{\\rm 1.}\nWe say that a closed point $u$ of $W$\nis at most an {\\em isolated \n$C$-characteristic point} of \n$f\\colon W\\to Y$\nif there exists an open neighborhood \n$V\\subset W$ of $u$ such that the 
pair \n$X\\gets V\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} \\{u\\}\\to Y$ is $C$-transversal.\nWe say that a closed point $u\\in W$\nis an {\\em isolated $C$-characteristic point} of $f$\nif it is at most an isolated $C$-characteristic\npoint but \n$X\\gets W\\to Y$ is not $C$-transversal\nat $u$.\n\n\n{\\rm 2.}\nAssume that every irreducible component \nof $X$ and every irreducible component\n$C_a$ of $C$ are of dimension $n$.\nLet $u$ be at most an isolated $C$-characteristic point of \n$f\\colon W\\to Y$\nand $A=\\sum_am_a[C_a]$ be\na linear combination of irreducible components of $C$.\nThen, we define the intersection number\n\\begin{equation}\n(A,df)_{T^*W,u}\n\\label{eqAdf}\n\\end{equation}\nas the intersection number\n$\\sum_am_a(h^*C_a,f^*\\omega)_{T^*W,u}$\nsupported on the fiber of $u$\nfor the section $f^*\\omega$ of $T^*W$\ndefined by the pull-back of\na basis $\\omega$ of $T^*Y$\non a neighborhood of $f(u)\\in Y$.\n\\end{df}\n\nThe cotangent bundle $T^*W$\nis of dimension $2n$ and\nits closed subsets\n$h^*C_a,f^*\\omega$\nare of dimension $n$.\nTheir intersections\n$h^*C_a \\cap f^*\\omega$\nconsist of at most the single\npoint $f^*\\omega(u)\n\\in T^*_uW$\non the fiber of $u$\nand the intersection numbers\n$(h^*C_a,f^*\\omega)_{T^*W,u}$\nare defined\nif $u$ is at most an isolated \n$C$-characteristic point.\nFurther, since $C$ is conical,\nthe intersection numbers\n$(h^*C_a,f^*\\omega)_{T^*W,u}$\nare independent of the choice of\nbasis $\\omega$\nand the intersection number\n$(A,df)_{T^*W,u}$\nis well-defined.\n\n\nWe compute the intersection number\n(\\ref{eqAdf}) for\nmorphisms defined by pencils.\n\n\\begin{lm}\\label{lmApL}\nLet $C\\subset T^*X$ be a\nclosed conical subset\nand \n${\\mathbf P}(\\widetilde C)\\subset\nX\\times_{\\mathbf P}Q=\n{\\mathbf P}(X\\times_{\\mathbf P}T^*{\\mathbf P})$\nbe the projectivization.\nAssume that every irreducible component \nof $X$ and every irreducible component\n$C_a$ of $C$ are of 
dimension $n$.\nLet $L\\subset {\\mathbf P}^\\vee$\nbe a line and\n$p_L^{\\circ}\\colon X_L^{\\circ}\\to L$\nbe the morphism defined\nby the pencil.\n\n\n{\\rm 1.}\nLet $u\\in X_L^{\\circ}$ be a closed point.\nThen, $u$ is (resp.\\ at most) an isolated\ncharacteristic point of \n$p_L^{\\circ}\\colon X_L^{\\circ}\\to L$\nif and only if (resp.\\ either)\n$u$ is an isolated point of\n(resp.\\ or not contained in)\nthe intersection\n$X_L^{\\circ} \\cap\n{\\mathbf P}(\\widetilde C)\\subset\nX\\times_{\\mathbf P}Q$.\n\nFurther, if $u$ is at most an isolated\ncharacteristic point of \n$p_L^{\\circ}\\colon X_L^{\\circ}\\to L$,\nwe have an equality\n\\begin{equation}\n(C,dp_L^{\\circ})_{T^*X,u}\n=\n({\\mathbf P}(\\widetilde C),X_L^{\\circ})_{\nX\\times_{\\mathbf P}Q,u}\n\\label{eqAL}\n\\end{equation}\nof the intersection numbers.\n\n{\\rm 2.}\nAssume that $k$ is algebraically closed\nand \nthat $C$ is irreducible.\nSuppose that $p^\\vee\\colon \nX\\times_{\\mathbf P}Q\n\\to {\\mathbf P}^\\vee$ is generically \nfinite on ${\\mathbf P}(\\widetilde C)$\nand let $\\xi\\in {\\mathbf P}(\\widetilde C)$\nand $\\eta\\in \\Delta\n=\\overline{p^\\vee({\\mathbf P}(\\widetilde C))}\n\\subset {\\mathbf P}^\\vee$\ndenote the generic points.\nThen, there exists a smooth dense\nopen subscheme $\\Delta^{\\circ}\n\\subset \\Delta$ satisfying\nthe following condition:\n\nFor a line $L\\subset {\\mathbf P}^\\vee$\nmeeting $\\Delta$ transversally\nat $H\\in \\Delta^{\\circ}$ and \nfor an isolated $C$-characteristic point $u$\nof $p_L^{\\circ}\\colon \nX_L^{\\circ}\\to L$ such that $(u,H)\\in \n{\\mathbf P}(\\widetilde C)^{\\circ}\n={\\mathbf P}(\\widetilde C)\\times\n_\\Delta\\Delta^{\\circ}$, \nboth sides of {\\rm (\\ref{eqAL})}\nare equal to the inseparable\ndegree $[\\xi:\\eta]_{\\rm insep}$.\n\\end{lm}\n\nSince ${\\mathbf P}(\\widetilde C)\n\\subset\nX\\times_{\\mathbf P}Q$ \nis of codimension $n$,\nthe intersection product in the right hand\nside of (\\ref{eqAL}) is 
defined.\n\n\n\\proof{\n1.\nThe first assertion follows\nimmediately from\nLemma \\ref{lmAL}.\nWe show the equality (\\ref{eqAL}).\nLet $\\tilde p_L^{\\circ}\\colon \n{\\mathbf P}_L^{\\circ}=\n{\\mathbf P}\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} {\\mathbf A}_L\n\\to L$ denote the morphism\n$p_L^{\\circ}$ defined with\n$X$ replaced by ${\\mathbf P}$.\nLet $\\omega$ be a basis of\n$T^*L$ on a neighborhood of the image $p_L(u)$\nand let $\\tilde p_L^{\\circ*}\\omega$\ndenote the section\n$X_L^{\\circ} \\to \nX_L^{\\circ} \\times_{\\mathbf P}T^*{\\mathbf P}$\ndefined on a neighborhood of $u$ by\nthe pull-back of $\\omega$\nby $\\tilde p_L^{\\circ}$.\nThen, we have\n\\begin{equation}\n(C,dp_L^{\\circ})_{T^*X,u}\n=\n(C,p_L^{\\circ *}\\omega)_{T^*X,u}\n=\n(\\widetilde C,\\tilde\np_L^{\\circ *}\\omega)_\n{X\\times_{\\mathbf P}T^*{\\mathbf P},u}\n=\n({\\mathbf P}(\\widetilde C),\n\\overline{\\widetilde p_L^{\\circ *}\\omega})_{\nX\\times_{\\mathbf P}Q,u}.\n\\label{eqCpL}\n\\end{equation}\n\nThe graph ${\\mathbf P}_L^{\\circ} \\to \n{\\mathbf P}\\times L$\nof $\\tilde p_L^{\\circ}$\nis a regular immersion of codimension 1\nand defines an exact sequence\n\\begin{equation}\n\\begin{CD}\n0\\to\n{\\mathbf P}_L^{\\circ}\\times_Q\nT^*_Q\n({\\mathbf P}\\times {\\mathbf P}^\\vee)\n@>>> \n({\\mathbf P}_L^{\\circ}\\times_{\\mathbf P}T^*{\\mathbf P})\n\\times_{{\\mathbf P}_L^{\\circ}}\n({\\mathbf P}_L^{\\circ}\\times_LT^*L)\n@>>>T^*{\\mathbf P}_L^{\\circ} \\to0\\end{CD}\n\\label{eqTPL}\n\\end{equation}\nof vector bundles on ${\\mathbf P}_L^{\\circ}$.\nHence the composition of the left arrow\nwith the projection to the second factor\nis an isomorphism\n${\\mathbf P}_L^{\\circ}\\times_Q\nT^*_Q\n({\\mathbf P}\\times {\\mathbf P}^\\vee)\n\\to\n{\\mathbf P}_L^{\\circ}\\times_LT^*L$.\nSince \n$T^*_Q\n({\\mathbf P}\\times {\\mathbf P}^\\vee)\n\\to \nQ\\times_{\\mathbf P}T^*{\\mathbf P}$\nis the universal sub line bundle\non\n$Q={\\mathbf P}(T^*{\\mathbf P})$,\nthe morphism\n${\\mathbf P}_L^{\\circ}\\times_LT^*L\n\\to\nT^*{\\mathbf 
P}_L^{\\circ}$\nalso defines the restriction\nof the universal sub line bundle\non ${\\mathbf P}_L^{\\circ}\\subset Q$.\nHence\nthe image of the section\n$\\overline{\\widetilde p_L^{\\circ *}\\omega}\n\\colon X_L^{\\circ} \\to \nX \\times_{\\mathbf P}Q=\n{\\mathbf P}(X\\times_{\\mathbf P}T^*{\\mathbf P})$ equals \n$X_L^{\\circ} \\subset\nX\\times_{\\mathbf P}Q$\nand (\\ref{eqCpL}) implies (\\ref{eqAL}).\n\n2.\nLet $\\Delta^{\\circ}\\subset \\Delta$\nbe a smooth dense open subscheme \nsuch that the base change\n${\\mathbf P}(\\widetilde C)^{\\circ}\n=\n{\\mathbf P}(\\widetilde C)\n\\times_\\Delta\\Delta^{\\circ}\n\\to \\Delta^{\\circ}$\nis the composition of\na finite flat radicial morphism\nof degree $[\\xi:\\eta]_{\\rm insep}$\nand a finite \\'etale morphism.\nThen, for a line $L$ and $(u,H)\\in \n{\\mathbf P}(\\widetilde C)^{\\circ}$ as in the assumption,\nthe intersection number\n$({\\mathbf P}(\\widetilde C),X_L^{\\circ})_{\nX\\times_{\\mathbf P}Q,u}$\nequals the degree of\nthe localization at $u$\nof the fiber of the finite flat morphism\n${\\mathbf P}(\\widetilde C)^{\\circ}\\to\n\\Delta^{\\circ}$\nat $H\\in \\Delta^{\\circ}$\nand is equal to\n$[\\xi:\\eta]_{\\rm insep}.$\n\\qed}\n\\medskip\n\nWe give a condition \nfor a function on isolated characteristic\npoints to be given \nas an intersection number.\n\n\\begin{df}\\label{dfflZ}\nLet $k$ be an algebraically closed field.\nLet $f\\colon Z\\to S$ be a {\\em quasi-finite} morphism\nof schemes of finite type \nover $k$.\nWe say that a function $\\varphi\n\\colon Z(k)\\to {\\mathbf Q}$\non the set of closed points\nis {\\em flat over} $S$\nif for every closed point $x\\in Z$,\nthere exists a commutative diagram\n$$\\xymatrix\n{U\\ar[r]\\ar[dr]_g&\nV\\times_SZ\\ar[r]^h\n\\ar[d]&\nZ\\ar[d]^f\\\\\n& V\\ar[r]&S\n}$$\nsatisfying the following conditions\n{\\rm (1)--(4):}\n\n{\\rm (1)} $V\\to S$ is an \\'etale neighborhood\nof $s=f(x)$.\n\n{\\rm (2)} $U\\subset V\\times_SZ$ is an open neighborhood\nof $x$.\n\n{\\rm 
(3)} $g\\colon\nU\\to V$ is finite\nand $g^{-1}(s)=\\{x\\}$.\n\n{\\rm (4)} The function $g_*\\varphi$ on $V(k)$\ndefined by $g_*\\varphi(t)=\n\\sum_{z\\in g^{-1}(t)}\\varphi(hz)$\nis {\\em constant}.\n\\end{df}\n\n\n\\begin{lm}\\label{lmfla}\nLet $f\\colon Z\\to S$ be a {\\em quasi-finite} morphism\nof schemes of finite type over\nan algebraically closed field $k$.\n\n{\\rm 1.}\nLet $\\varphi\n\\colon Z\\to {\\mathbf Q}$ be\na function.\nIf $\\varphi$ is flat over $S$\nin the sense of Definition {\\rm \\ref{df11}},\nits restriction to $Z(k)$ is flat over $S$\nin the sense of Definition {\\rm \\ref{dfflZ}}.\n\n{\\rm 2.}\nLet $\\varphi\n\\colon Z(k)\\to {\\mathbf Q}$\nbe a function flat over $S$\nin the sense of Definition {\\rm \\ref{dfflZ}}.\nThen, it is constructible on $Z(k)$.\nIf $\\varphi=0$ on $U(k)\\subset\nZ(k)$ for a dense open subset\n$U\\subset Z$,\nwe have\n$\\varphi=0$ on $Z(k)$.\n\\end{lm}\n\n\\proof{1.\nLet the notation be as in \nDefinition \\ref{dfflZ}.\nThen, by Lemma \\ref{lmscZ}.4,\n$g_*\\varphi$ is locally constant\nand the assertion follows.\n\n2.\nIf $Z\\to S$ is finite, surjective and\nradicial, then $\\varphi$ is locally\nconstant.\nThe constructibility follows from this\nby d\\'evissage.\n\nSince the second assertion is\n\\'etale local on $Z$,\nwe may assume $Z$ is finite over $S$.\nThen $\\varphi$ on $Z(k)$\nis determined by its restriction\nto a dense open subset by\nthe condition (4) in Definition \\ref{dfflZ}.\n\\qed}\n\n\\begin{df}\\label{dffla}\nLet $X$ be a smooth scheme\nover $k$ and\n$C\\subset T^*X$\nbe a closed conical subset.\n\n{\\rm 1.}\nWe say that $\\varphi$ is\na {\\em function on \nisolated $C$-characteristic points}\nif a number $\\varphi(f,u)\\in {\\mathbf Q}$\nis defined\nfor every \nmorphism $f\\colon U\\to Y$\nover $k$\ndefined on an open subscheme $U\\subset X$\nto a smooth curve $Y$\nwith at most \nan isolated $C$-characteristic point $u\\in U$\nand if we have $\\varphi(f,u)=0$\nif $u\\in U$ is not\nan isolated 
$C$-characteristic point.\n\n\n{\\rm 2.}\nLet $\\varphi$ be\na function on \nisolated $C$-characteristic points.\nWe say that $\\varphi$ is {\\em flat}\nif for every commutative diagram\n\\begin{equation}\n\\xymatrix{\nZ\\ar[r]^{\\subset} &\nU\\ar[rr]^f\\ar[rd]\n\\ar[d]_{{\\rm pr}_1}&\n&Y\\ar[ld]^g\\\\\n&X&S&}\n\\label{eqfla}\n\\end{equation}\nof schemes over $k$\nsatisfying the conditions \n{\\rm (1)--(5)} below,\nthe function \n$\\varphi_f$ on $Z(k)$\ndefined by $\\varphi_f(u)=\\varphi(f_s,u)$ \nfor the base change\n$f_s\\colon U_s\n\\to Y_s$ of $f$ at $s={\\rm pr}_2(u)\\in S$\nis {\\rm flat} in the sense of \nDefinition {\\rm \\ref{dfflZ}:}\n\n{\\rm (1)} $S$ is smooth over $k$.\n\n{\\rm (2)} $g\\colon Y\\to S$\nis a smooth curve.\n\n{\\rm (3)} $U$ is an open subscheme\nof $X\\times S$.\n\n{\\rm (4)} $Z\\subset U$\nis a closed subset {\\em quasi-finite}\nover $S$.\n\n{\\rm (5)} The pair\n$({\\rm pr}_1,f)$\nis $C$-transversal\non the complement $U\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} Z$.\n\\end{df}\n\nNote that \nunder the conditions\n(1)--(5) on the diagram (\\ref{eqfla}),\nthe morphisms\n$f_s\\colon U_s\\to Y_s$\nare $C$-transversal except\npossibly at the points of $Z_s$,\nwhich are at most isolated\ncharacteristic points,\nby Lemma \\ref{lmChf}.1.\n\n\n\\begin{pr}\\label{prfla}\nLet $X$ be a smooth scheme\nover\nan algebraically closed field\n$k$ of characteristic $p\\geqq 0$\nand\n$C\\subset T^*X$\nbe a closed conical subset.\nAssume that every irreducible\ncomponent of $X$\nand of $C=\\bigcup_aC_a$ is of dimension $n$.\nLet $\\varphi$ be\na ${\\mathbf Z}[\\frac 1p]$-valued\n(resp.\\ ${\\mathbf Z}$-valued)\nfunction on isolated $C$-characteristic points\nif $p>0$\n(resp.\\ if $p=0$).\n\n{\\rm 1.}\nThe following conditions\nare equivalent:\n\n{\\rm (1)}\n$\\varphi$ is flat\nin the sense of Definition {\\rm \\ref{dffla}.2}.\nIf $g\\colon Y\\to Z$ is an \\'etale morphism\nof smooth curves over $k$,\nwe have\n$\\varphi(f,u)=\\varphi(gf,u)$.\n\n{\\rm (2)}\nThere exists a\n${\\mathbf 
Z}[\\frac 1p]$-linear\n(resp.\\ ${\\mathbf Z}$-linear)\ncombination\n$A=\\sum_am_aC_a$ satisfying\n\\begin{equation}\n\\varphi(f,u)=(A,df)_{T^*X,u}\n\\label{eqafu}\n\\end{equation}\nfor every $f\\colon U\\to Y$\nwith at most\nan isolated $C$-characteristic point $u\\in U$.\n\nFurther, $A$ in {\\rm (2)} is unique.\nMoreover, $A$ is independent of $C$\nin the sense that if $C'\\supset C$\nis a closed conical subset\nsuch that\nevery irreducible\ncomponent of $C'=\\bigcup_bC'_b$ is of dimension $n$,\nthen the linear combination\n$A'=\\sum_bm'_bC'_b$ satisfying\n{\\rm (\\ref{eqafu})}\nfor every $f\\colon U\\to Y$\nwith at most\nan isolated $C'$-characteristic point $u\\in U$\nequals $A$.\n\n{\\rm 2.}\nLet $h\\colon W\\to X$ be an \\'etale\nmorphism of smooth schemes\nover $k$ and\nlet $\\psi$ be\na ${\\mathbf Z}[\\frac 1p]$-valued\n(resp.\\ ${\\mathbf Z}$-valued)\nfunction on isolated $h^* C$-characteristic points\nif $p>0$\n(resp.\\ if $p=0$).\nWe assume that \n$\\varphi$ and $\\psi$ satisfy\nthe equivalent conditions in {\\rm 1.}\\\nand let $A$ and $A'$\nbe the linear combinations\nof irreducible components\nof $C$ and of $h^* C$\nsatisfying {\\rm (\\ref{eqafu})}\nfor $\\varphi$ and $\\psi$\nrespectively.\nIf $\\varphi(f,hv)=\\psi(fh,v)$\nfor isolated characteristic points\n$v$ of $fh\\colon W\\times_XU\\to U\\to Y$,\nwe have $A'=h^*A$.\n\\end{pr}\n\n\n\n\\proof{\n1.\n(2)$\\Rightarrow$(1):\nLet the notation be as in\n(\\ref{eqfla}).\nWe show that\nthe function\n$\\varphi_f$ on $Z(k)$\ndefined by\n$\\varphi_f(u)=(A,df_s)_{T^*X,u}$\nis flat over $S$ in the sense of Definition\n\\ref{dfflZ}.\nWe may assume $A=C_a=C$.\nSince the assertion is local,\nwe may assume that\n$\\Omega^1_{Y\/S}$\nis free of rank $1$\nand the section\n$df\\colon U\\to T^*X$\ndefined by a basis\nis globally defined on $U$.\nSince $T^*X$ is regular,\nthe ${\\cal O}_{T^*X}$-module\n${\\cal O}_C$ is of finite tor-dimension.\nSince $U\\to S$ is flat,\nthe complex of \n${\\cal O}_U$-modules \n${\\cal 
O}_C\n\\otimes^L_{{\\cal O}_{T^*X}}\n{\\cal O}_U$ defined\nas the pull-back by $df$\nis of finite tor-dimension\nas a complex of \n${\\cal O}_S$-modules.\nHence the function\n$\\varphi_f$ on $Z(k)$\nis flat over $S$ by Lemma \\ref{lmA}.1\nand Lemma \\ref{lmfla}.1.\n\n\nIf $g\\colon Y\\to Z$ is an \\'etale morphism\nof smooth curves over $k$,\nwe have\n$(A,df)_{T^*X,u}\n=(A,dgf)_{T^*X,u}$.\n\n(1)$\\Rightarrow$(2):\nSince the question is local,\nwe may assume $X$ is affine.\nWe take a closed immersion\n$X\\to {\\mathbf A}^n\n={\\rm Spec}\\ k[T_1,\\ldots,T_n]\n\\subset\n{\\mathbf P}^n$.\nLet $E=\\Gamma({\\mathbf P}^n,\n{\\cal O}(1))$ and let\n${\\cal L}$ be the pull-back to\n$X$ of ${\\cal O}(1)$.\nAfter replacing $E\\to \n\\Gamma(X,{\\cal L})$\nby $S^dE\\to \n\\Gamma(X,{\\cal L}^{\\otimes d})$\nfor $d\\geqq 3$,\nwe may assume that\nthe conditions (E) and (C)\nbefore and after Lemma \\ref{lmloc}\nare satisfied.\nWe may identify\n$E$ with the $k$-linear subspace\nof $k[T_1,\\ldots,T_n]$\nconsisting of polynomials of degree $\\leqq 1$.\nSimilarly,\n$S^dE$ is identified\nwith the $k$-linear subspace\nof $k[T_1,\\ldots,T_n]$\nconsisting of polynomials of degree $\\leqq d$.\n\n\nWe consider the universal family \nas in Section \\ref{sspl}.\nDefine an open subset\n\\begin{equation}\n(X\\times {\\mathbf G})^\\triangledown\n\\subset\n(X\\times {\\mathbf G})^{\\circ}\n\\label{eqXGt}\n\\end{equation}\nof $(X\\times {\\mathbf G})^{\\circ}\n\\subset (X\\times {\\mathbf G})'$\n(\\ref{eqXGo}) \nto be the largest open subset such that\n${\\mathbf Z}(\\widetilde C)$\ndefined by the left cartesian square in\n\\begin{equation}\n\\begin{CD}\n{\\mathbf Z}(\\widetilde C)\n@>>>\n(X\\times {\\mathbf G})^\\triangledown\n@>f>> {\\mathbf D}\n@>>> {\\mathbf G}\\\\\n@VVV@VVV@VVV\\\\\n{\\mathbf P}(\\widetilde C)\n@>>> X\\times_{\\mathbf P}Q\n@>>> {\\mathbf P}^{\\vee}\n\\end{CD}\n\\label{eqZPC}\n\\end{equation}\nis quasi-finite over\n${\\mathbf G}$.\nBy Lemma \\ref{lmPC}.1,\nthe 
complement\n$(X\\times {\\mathbf G})^\\triangledown\n\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~}\n{\\mathbf Z}(\\widetilde C)$\nis the largest open subset\nwhere the pair of\n${\\rm pr}_1\\colon\n(X\\times {\\mathbf G})^\\triangledown\n\\to X$\nand \n$f\\colon\n(X\\times {\\mathbf G})^\\triangledown\n\\to {\\mathbf D}$\nis $C$-transversal.\nFurther by Lemma \\ref{lmAL},\na pair $(u,L)\\in (X\\times {\\mathbf G})^{\\circ}$\nof a line $L\\subset {\\mathbf P}^\\vee$\nand $u\\in X_L^{\\circ}$ is a\npoint of ${\\mathbf Z}(\\widetilde C)$\n(resp.\\ of $(X\\times {\\mathbf G})^\\triangledown$)\nif and only if\n$u\\in X_L^{\\circ}$\nis (resp.\\ at most)\nan isolated $C$-characteristic point\nof $p_L^{\\circ} \\colon\nX_L^{\\circ} \\to L$.\n\n\nWe consider the diagram\n\\begin{equation}\n\\xymatrix{{\\mathbf Z}(\\widetilde C)\n\\ar[r]^{\\!\\!\\!\\!\\!\\!\\!\\!\\subset}&\n(X\\times {\\mathbf G})^\\triangledown\n\\ar[rr]^f\\ar[rd]_{p^\\triangledown}\n\\ar[d]&&\n{\\mathbf D}\\ar[ld]^g\\\\\n&X&{\\mathbf G}}\n\\label{eqMfg}\n\\end{equation}\nas (\\ref{eqfla}).\nFor a point $L$ of ${\\mathbf G}$,\nthe fiber of $f\\colon \n(X\\times {\\mathbf G})^\\triangledown\n\\to {\\mathbf D}$\nis a restriction of $p_L^{\\circ}\\colon \nX_L^{\\circ} \\to L$.\nBy the condition (1),\nthe function $\\varphi_f$ on \n${\\mathbf Z}(\\widetilde C)$\nis flat.\nHence, there exists a dense\nopen subscheme ${\\mathbf Z}(\\widetilde C)^{\\circ}\n\\subset {\\mathbf Z}(\\widetilde C)$\nwhere the function\n$\\varphi_f$\nis locally constant by Lemma \\ref{lmfla}.2.\n\nFor each irreducible component\n$C_a$ of $C=\\bigcup_aC_a$,\nthe function $\\varphi_f$\nis constant on a dense open subscheme\n${\\mathbf Z}(\\widetilde C_a)^{\\circ}\n=\n{\\mathbf Z}(\\widetilde C_a)\n\\cap {\\mathbf Z}(\\widetilde C)^{\\circ}\n\\subset\n{\\mathbf Z}(\\widetilde C_a)$.\nDefine a number $\\varphi_a\n\\in {\\mathbf Z}[\\frac1p]$ to be\nthe value of $\\varphi_f$\non 
${\\mathbf Z}(\\widetilde C_a)^{\\circ}$.\nThe restriction \n${\\mathbf P}(\\widetilde C_a)\n\\to D_a=\n\\overline {p^\\vee({\\mathbf P}(\\widetilde C_a))}$\nof\n$p^\\vee\\colon X\\times_{\\mathbf P}Q\n\\to {\\mathbf P}^\\vee$\nis generically finite\nby Corollary \\ref{corrad}.1\nsince $E$ is assumed to satisfy the conditions \n(E) and (C) before \nand after Lemma {\\rm \\ref{lmloc}}.\nLet $\\xi_a\\in \n{\\mathbf P}(\\widetilde C_a)$\nand $\\eta_a\\in D_a$\nbe the generic points.\nWe define\n\\begin{equation}\nA=\n\\sum_a\n\\dfrac{\\varphi_a}\n{[\\xi_a:\\eta_a]_{\\rm insep}}[C_a].\n\\label{eqChE}\n\\end{equation}\nSince the inseparable degree\n$[\\xi_a:\\eta_a]_{\\rm insep}$ is a power of $p$\nif $p>0$ (resp. is $1$ if $p=0$),\nthe coefficients\nin $A$ are in ${\\mathbf Z}[\\frac1p]$\n(resp. in ${\\mathbf Z}$).\n\n\nWe show that $A$\nsatisfies (\\ref{eqafu})\nfor morphisms defined by pencils.\nLet $L\\subset {\\mathbf P}^\\vee$ be a line\nand\n$u\\in X_L^{\\circ}$ be at most an isolated\ncharacteristic point of\n$p_L^{\\circ}\\colon \nX_L^{\\circ}\\to L$.\nIf $u\\in X_L^{\\circ}$ is not an isolated\ncharacteristic point of\n$p_L^{\\circ}$,\nthen the both sides of (\\ref{eqafu}) are $0$.\nAssume $u\\in X_L^{\\circ}$ is an isolated\ncharacteristic point of\n$p_L^{\\circ}$.\nThen we have \n$(u,L)\\in {\\mathbf Z}(\\widetilde C)$\nby Lemma \\ref{lmApL}.1.\n\nShrinking ${\\mathbf Z}(\\widetilde C)^{\\circ}$\nif necessary,\nwe may assume that\n${\\mathbf Z}(\\widetilde C)^{\\circ}\n=\n\\coprod_a\n{\\mathbf Z}(\\widetilde C_a)^{\\circ}$.\nIf $(u,L)\\in\n{\\mathbf Z}(\\widetilde C_a)^{\\circ}$,\nthe left hand side of (\\ref{eqafu})\nis $\\varphi_a$\nby the definition of $\\varphi_a$.\nBy Lemma \\ref{lmApL}.2,\nthe left hand side of (\\ref{eqafu})\nis $\\dfrac{\\varphi_a}{[\\xi_a:\\eta_a]_{\\rm insep}}\n\\cdot [\\xi_a:\\eta_a]_{\\rm insep}\n=\\varphi_a$.\nHence the equality (\\ref{eqafu})\nholds on the dense open subset\n${\\mathbf Z}(\\widetilde 
C)^{\\circ}\n\\subset\n{\\mathbf Z}(\\widetilde C)$.\nThis also proves the uniqueness of $A$\nand implies the independence of $C$.\n\nThe left hand side\nof (\\ref{eqafu}) is \nthe function $\\varphi_f$\non ${\\mathbf Z}(\\widetilde C)$\nand is constructible and\nflat over ${\\mathbf G}$\nby assumption.\nThe right hand side\nof (\\ref{eqafu}) is also a function on \n${\\mathbf Z}(\\widetilde C)$\nflat over ${\\mathbf G}$\nas is proved in (2)$\\Rightarrow$(1).\nHence the equality (\\ref{eqafu})\nholds for every $(u,L)\n\\in {\\mathbf Z}(\\widetilde C)$\nby Lemma \\ref{lmfla}.2.\n\nWe show that the linear combination\n$A$ defined for $X\\to {\\mathbf A}^n\n\\subset {\\mathbf P}^n\n={\\mathbf P}(E^\\vee)$\nequals that defined for\n$X\\to {\\mathbf P}(S^dE^\\vee)$\nfor $d\\geqq 2$.\nSince $E\\subset\nk[T_1,\\ldots,T_n]$ consisting of\npolynomials of degree $\\leqq 1$\nis canonically identified\nwith a subspace of $S^dE\\subset\nk[T_1,\\ldots,T_n]$ consisting of\npolynomials of degree $\\leqq d$,\nthe uniqueness of $A$\nimplies the independence of $d$.\n\nWe show that the equality (\\ref{eqafu})\nholds for every morphism\n$f\\colon U\\to Y$\nwith at most\nan isolated $C$-characteristic point $u\\in U$.\nBy taking an \\'etale morphism\nto ${\\mathbf A}^1$\ndefined on a neighborhood\nof $f(u)\\subset Y$,\nwe may assume $Y={\\mathbf A}^1$\nand $f$ is defined by\na function on $U\\subset X$.\nWe may assume that\n$f$ is defined by a ratio of\npolynomials in $k[T_1,\\ldots,T_n]$ \nof degree $d\\geqq 1$.\nIn other words,\n$f$ equals a morphism\ndefined by a pencil\n$L$ and the assertion is proved.\n\n2.\nWe may assume $X$ is affine\nand take an immersion $X\\to\n{\\mathbf P}^n$ as in\nthe proof of 1.\nLet $C'_b\\subset h^*C$ be an irreducible\ncomponent \nand let $C_a\\subset C$ be the closure of\nits image.\nWe take a closed point\n$(u,L)\\in {\\mathbf Z}(\\widetilde C_a)^\\circ$\nas in the notation in the proof of 1.\\\nsuch that $u=h(v)$ for \na point $v\\in W$.\nThen, 
since $\\psi(p_Lh,v)\n=\\varphi(p_L,u)$,\nthe coefficient of\n$C'_b$ in $A'$ equals\nthat of\n$C_a$ in $A$.\n\\qed}\n\n\n\n\\subsection{Characteristic cycle}\\label{ssCC}\n\n\nWe state and prove the existence\nof characteristic cycle\nsatisfying the Milnor formula.\n\n\n\n\\begin{thm}[{\\rm cf.{} \\cite[Principe p.\\ 7]{bp}}]\\label{thmM}\nLet $X$ be a smooth scheme over \na perfect field $k$\nof characteristic $p>0$ (resp.\\ $p=0$)\nand ${\\cal F}$ be a constructible\ncomplex of $\\Lambda$-modules \nof finite tor-dimension on $X$.\nLet $C=\\bigcup_aC_a$ be a closed conical subset\nof the cotangent bundle $T^*X$\nsuch that ${\\cal F}$\nis micro-supported on $C$.\nAssume that every irreducible component of\n$X$ and every irreducible component $C_a$\nof $C$ are of dimension $n$.\nThen, there exists a unique \n${\\mathbf Z}[\\frac1p]$-linear \n(resp.\\ ${\\mathbf Z}$-linear) combination\n$CC_C{\\cal F}=\n\\sum_a m_a[C_a]$ satisfying the following\ncondition:\n\nFor every \\'etale morphism\n$j\\colon W\\to X$,\nevery morphism $f\\colon W\\to Y$\nto a smooth curve and \nevery at most {\\em isolated $C$-characteristic\npoint} $u\\in W$ of $f$,\nwe have\n\\begin{equation}\n-\\dim{\\rm tot}\\ \\phi_u\n(j^*{\\cal F},f)\n=\n(CC_C {\\cal F},df)_{T^*W,u}.\n\\label{eqMil}\n\\end{equation}\nFurther,\nthe linear combination\n$CC_C{\\cal F}$\nis independent of $C$\non which ${\\cal F}$ is micro-supported.\n\\end{thm\n\nWe will give a proof\nby Beilinson\nof the fact that\nthe characteristic cycle \nhas ${\\mathbf Z}$-coefficients\nin Section \\ref{sZ}.\nThis is a generalization of \nthe Hasse-Arf theorem\n\\cite{CL} in the case\n$\\dim X\\leqq 1$. 
\n\n\\proof{\nWe may assume $k$ is algebraically\nclosed by replacing $k$ by\nan algebraic closure.\nLet $\\Lambda_0$ denote\nthe residue field of the\nfinite local ring $\\Lambda$\nand set ${\\cal F}_0=\n{\\cal F}\\otimes_\\Lambda^L\\Lambda_0$.\nThen, we have\n$\\dim{\\rm tot} \\phi_u\n(j^*{\\cal F},f)\n=\n\\dim{\\rm tot} \\phi_u\n(j^*{\\cal F}_0,f)$\nand ${\\cal F}_0$ is\nmicro-supported on $C$\nif and only if\nand ${\\cal F}$ is\nmicro-supported on $C$\nby Lemma \\ref{lmmc}.7.\nThus, we may assume\n$\\Lambda$ is a field.\n\nWe regard the left hand side\nof (\\ref{eqMil})\nas a function $\\varphi$ on\nisolated $C$-characteristic points\nin the sense of\nDefinition \\ref{dffla}.1.\nIn fact,\nif the pair\nof $j\\colon W\\to X$ and $f\\colon\nW\\to Y$ is $C$-transversal,\nthen $f\\colon W\\to Y$ \nis universally locally acyclic\nrelatively to $j^*{\\cal F}$\nand the left hand side of\n(\\ref{eqMil}) is 0.\nIf $g\\colon Y\\to Z$\nis an \\'etale morphism,\nthe function $\\varphi$ satisfies \nthe condition \n$\\varphi(f,u)=\\varphi(gf,u)$ in (1)\nin Proposition \\ref{prfla}.1.\nIf $h\\colon W\\to X$\nis an \\'etale morphism,\nit also satisfies the condition \n$\\varphi(f,hv)=\\psi(fh,v)$\nin Proposition \\ref{prfla}.2.\nThus by Proposition \\ref{prfla},\nit suffices to show that\n$\\varphi$ is flat in the sense\nof Definition \\ref{dffla}.2.\n\nLet the notation be as in (\\ref{eqfla})\nand we apply \nProposition \\ref{prMsc}.\nThe morphism $f\\colon\nU\\to Y$\nis locally acyclic relatively to\nthe pull-back of ${\\cal F}$\non the complement of\n$Z$ by Lemma \\ref{lmPC}.2.\nThe projection\n${\\rm pr}_2\\colon\nU\\to S$\nis locally acyclic relatively to\nthe pull-back of ${\\cal F}$\nby the generic universal local acyclicity\n\\cite[Th\\'eor\\`eme 2.13]{TF}.\nSince $Z$ is quasi-finite over $S$,\nthe function\n$\\varphi_f$ is constructible and flat\nover $S$ by \nProposition \\ref{prMsc}\nand Lemma \\ref{lmfla}.1.\n\\qed}\n\n\\begin{df}\\label{dfCC}\nWe define the {\\em 
characteristic cycle}\n$CC {\\cal F}$ \nto be $CC_C{\\cal F}$\nindependent of $C$\non which ${\\cal F}$ is micro-supported.\n\\end{df}\n\nThe Milnor formula \\cite{Milnor}\nand (\\ref{eqMil})\nimply that for the constant sheaf $\\Lambda$,\nwe have\n$$CC \\Lambda\n=(-1)^n\\cdot [T^*_XX].$$\nThus, the formula (\\ref{eqMil})\nis a generalization of\nthe Milnor formula\nproved by Deligne in \\cite{Milnor}\nand shall be also called a Milnor formula.\nWe will give more examples \nand properties of characteristic cycles \nin the rest of this\nsubsection and in Section \\ref{spb}.\nWe keep assuming that $k$ is\nperfect and\n$X$ is smooth of dimension $n$ over $k$.\nLet ${\\cal F}$ be a constructible\ncomplex of $\\Lambda$-modules of finite tor-dimension on $X$.\n\n\\begin{lm}\\label{lmele}\n{\\rm 1.}\nIf ${\\cal F}$\nis locally constant,\nwe have \n\\begin{equation}\nCC {\\cal F}=(-1)^n\n{\\rm rank}\\ {\\cal F}\\cdot [T^*_XX].\n\\end{equation}\n\n{\\rm 2.}\nFor an \\'etale morphism\n$j\\colon U\\to X$,\nwe have\n\\begin{equation}\nCC j^*{\\cal F}\n=j^*CC {\\cal F}.\n\\end{equation}\n\n{\\rm 3.}\nAssume that $\\dim X=1$\nand let $U\\subset X$\nbe a dense open subscheme\nwhere ${\\cal F}$ is locally constant.\nFor $x\\in X\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} U$,\nlet $\\bar \\eta_x$\ndenote a geometric generic\npoint of the strict localization\nat a geometric point $\\bar x$\nabove $x$ and let \n\\begin{equation}\na_x({\\cal F})=\n{\\rm rank}\\ {\\cal F}_{\\bar \\eta_x}\n-\n{\\rm rank}\\ {\\cal F}_{\\bar x}\n+\n{\\rm Sw}_x {\\cal F}_{\\bar \\eta_x}\n\\label{eqaxF}\n\\end{equation}\nbe the Artin conductor.\nThen, we have\n\\begin{equation}\nCC {\\cal F}=\n-\\Bigl(\n{\\rm rank}\\ {\\cal F}\\cdot [T^*_XX]\n+\n\\sum_{x\\in X\\!\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} \\!U}a_x({\\cal F})\n\\cdot [T^*_xX]\n\\Bigr)\n\\label{eqdim1}\n\\end{equation}\n\\end{lm}\n\n\\proof{\n1.\nIt follows from the Milnor formula \\cite{Milnor}.\nIt will also follow immediately\nfrom the 
compatibility of\nthe characteristic cycles\nwith smooth pull-back\nProposition \\ref{prsm*}.\n\n2.\nSince the characterization (\\ref{eqMil})\nis an \\'etale local condition,\nthe assertion follows.\n\n3.\nBy Lemma \\ref{lmlcst}.2,\nit suffices to determine the coefficients.\nFor the $0$-section $T^*_XX$,\nit follows from 1 and 2.\nFor the fibers,\nsince \n$\\dim{\\rm tot}_x\\phi_x\n({\\cal F},{\\rm id})\n=\na_x({\\cal F})$,\nit follows from the Milnor formula (\\ref{eqMil})\nfor the identity $X\\to X$.\n\\qed}\n\\medskip\n\nFor surfaces,\nthe characteristic cycle\nis studied in \\cite{surface}.\n\n\\begin{df}\\label{dfi*A}\nLet $i\\colon X\\to Y$\nbe a closed immersion \nof smooth schemes over $k$\nand let \n\\begin{equation}\n\\begin{CD}\nT^{*}X@<<< X\\times_{Y}T^{*}Y\n@>>> T^{*}Y\n\\end{CD}\n\\label{eqi*A}\n\\end{equation}\nbe the canonical morphisms.\nLet $C\\subset T^*X$\nbe a closed conical subset.\nAssume that every irreducible component of $X$\nand every irreducible component \n$C_a$ of $C=\\bigcup_aC_a$\nare of dimension $n$\nand \nthat every irreducible component of $Y$\nis of dimension $m$.\nThen, for a linear combination\n$A=\\sum_am_a[C_a]$,\nwe define $i_*A$ to be \n$(-1)^{n-m}$-{\\em times} the push-forward\nby the second arrow\n$X\\times_{Y}T^{*}Y\\to T^{*}Y$\nin {\\rm (\\ref{eqi*A})}\nof the pull-back of $A$ by the first arrow\n$X\\times_{Y}T^{*}Y\\to T^{*}X$\nin the sense of intersection theory.\n\\end{df}\n\n\\begin{lm}\\label{lmRf}\nLet ${\\cal F}$ be a constructible complex\nof $\\Lambda$-modules \nof finite tor-dimension on $X$.\n\n{\\rm 1.}\nFor a distinguished triangle\n$\\to {\\cal F}'\\to {\\cal F}\\to {\\cal F}''\\to$\nin $D_{\\rm ctf}(X,\\Lambda)$,\nwe have\n\\begin{equation}\nCC {\\cal F}=\nCC {\\cal F}'+\nCC {\\cal F}''.\n\\label{eqadd}\n\\end{equation}\n\n{\\rm 2.}\nFor a closed immersion \n$i\\colon X\\to Y$\nof smooth schemes over $k$,\nwe have\n\\begin{equation}\nCC i_*{\\cal F}=\ni_*CC {\\cal 
F}.\n\\label{eqi*F}\n\\end{equation}\n\n\n\n{\\rm 3.}\nFor a morphism $f\\colon X\\to Y$\nof separated \nsmooth schemes of finite type over $k$, \nwe have\n\\begin{equation}\nCC Rf_*{\\cal F}=\nCC Rf_!{\\cal F}.\n\\label{eqRf}\n\\end{equation}\n\n{\\rm 4.}\nWe have\n\\begin{equation}\nCC D_X{\\cal F}=\nCC {\\cal F}.\n\\label{eqChD}\n\\end{equation}\n\\end{lm}\n\n\n\\proof{\n1.\nBy the characterization\nof characteristic cycle\nby the Milnor formula (\\ref{eqMil}),\nit follows from the additivity\nof the total dimension.\n\n2.\nLet $C\\subset T^*X$ be the singular\nsupport of ${\\cal F}$.\nThen,\n$i_*{\\cal F}$ is micro-supported\non $i_{\\circ}C\\subset T^*Y$\nby Lemma \\ref{lmmc}.6.\nLet $Y\\to {\\mathbf P}$\nbe an immersion satisfying the condition\n(E) before Lemma \\ref{lmloc}\nand the condition (C) before\nProposition {\\rm \\ref{prwi}}\nfor $i_{\\circ}C$.\nThen, by the description of\nthe characteristic cycle\n$CC {\\cal F}\n=CC_C^E {\\cal F}$\nin (\\ref{eqChE}),\nit follows from the canonical isomorphism\n$\\phi({\\cal F},p_L^{\\circ} \\circ i)\n\\to \\phi(i_*{\\cal F},p_L^{\\circ} )$\nfor the morphism\n$p_L^{\\circ} \\colon Y_L^{\\circ}\\to L$\ndefined by a pencil $L$.\n\n3.\nBy 1,\nit follows from \\cite{La}.\n\n4. 
\nWe have $SS{\cal F}=SSD_X{\cal F}$\nby Corollary \ref{corssD}.\nBy 2 and Lemma \ref{lmele}.2,\nwe may assume $X$ is projective\nas in the proof of Corollary \ref{corssD}.\nLet $C=SS{\cal F}=SSD_X{\cal F}$\nbe the singular support.\nLet $X\to {\mathbf P}$\nbe a closed immersion\nsatisfying the condition\n(E) and (C) before \nand after Lemma \ref{lmloc}.\nThen, for a point\n$(u,L)$\nin the dense open subset\n${\mathbf P}(\widetilde C)^{\circ}\n\subset\n{\mathbf P}(\widetilde C)\n\subset X\times_{\mathbf P}Q$\nand $v=p_L(u)\in L$,\nwe have\n$\dim{\rm tot}\phi_u({\cal F},p_L^{\circ})\n=\na_v(Rp^\vee_*p^*{\cal F})|_L$\nand similarly for $D_X{\cal F}$.\nSince\n$a_v(Rp^\vee_*p^*{\cal F})|_L\n=\na_vD_L(Rp^\vee_*p^*{\cal F})|_L\n=\na_v(Rp^\vee_*p^*D_X{\cal F})|_L$,\nit follows from the description of\nthe characteristic cycle\n$CC {\cal F}\n=CC_C^E {\cal F}$\nin (\ref{eqChE}).\n\qed}\n\medskip\n\nFor the residue field $\Lambda_0$ of\n$\Lambda$,\na constructible complex\n${\cal F}$ of $\Lambda$-modules\nof finite tor-dimension on $X$\nis a perverse sheaf if and only if\n${\cal F}\otimes^L_\Lambda\Lambda_0$\nis a perverse sheaf.\n\n\begin{pr}\label{prperv}\nAssume ${\cal F}$ is a perverse sheaf on $X$.\n\n{\rm 1.} {\rm (\cite[Question p.\ 7]{bp})}\nWe have\n\begin{equation}\nCC {\cal F}\geqq0.\n\label{eqperv}\n\end{equation}\n\n{\rm 2.}\nThe support of\n$CC {\cal F}$\nequals $SS{\cal F}$.\n\end{pr}\n\n\proof{\nBy the description of\nthe characteristic cycle\n$CC {\cal F}\n=CC_C^E {\cal F}$\nin (\ref{eqChE}),\nit follows from Lemma \ref{lmRn}.\n\qed}\n\n\n\n\begin{cor}\label{corRj}\nLet $j\colon U=X\raisebox{2.33pt}{~\rule{6.4pt}{1.3pt}~} D\to X$\nbe the open immersion of\nthe complement of a Cartier divisor.\nThen, for a perverse sheaf\n${\cal F}$ of $\Lambda$-modules on $U$,\nwe have\n\begin{equation}\nSS Rj_*{\cal F}=\nSS j_!{\cal 
F}.\n\\label{eqSSRj}\n\\end{equation}\n\\end{cor}\n\n\\proof{\nSince the open immersion\n$j\\colon U=X\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} D\\to X$ is\nan affine morphism,\n$Rj_*{\\cal F}$ and \n$j_!{\\cal F}$ are perverse sheaves on $X$\nby \\cite[Corollaire 4.1.10]{BBD}.\nHence, it follows from\nLemma \\ref{lmRf}.3 and Proposition \\ref{prperv}.2.\n\\qed}\n\\medskip\n\nWe show the compatibility with\nsmooth pull-back.\n\n\n\\begin{df}\\label{dfCsm}\nLet $h\\colon W\\to X$ be \na smooth morphism of\nsmooth schemes over\na perfect field $k$ and let\n$C\\subset T^*X$ be a closed conical\nsubset.\nAssume that every irreducible\ncomponent of $X$ and\nevery irreducible\ncomponent of $C$\nare of dimension $n$\nand that\nevery irreducible\ncomponent of $W$ is of dimension $m$.\nLet \n\\begin{equation}\n\\begin{CD}\nT^*W@<<< W\\times_XT^*X\n@>>> T^*X\n\\end{CD}\n\\label{eqhsm}\n\\end{equation}\nbe the canonical morphisms.\nThen, for a linear combination\n$A=\\sum_am_a[C_a]$\nof irreducible components of\n$C=\\bigcup_aC_a$,\nwe define $h^!A$ to be \n$(-1)^{n-m}$-{\\em times} the push-forward\nby the first arrow\n$W\\times_XT^*X\\to T^*W$\nin {\\rm (\\ref{eqh!A})}\nof the pull-back of $A$ by the second arrow\n$W\\times_XT^*X\\to T^{*}X$\nin the sense of intersection theory.\n\\end{df}\n\n\n\n\n\\begin{pr}\\label{prsm*}\nLet $h\\colon W\\to X$ be a smooth morphism of \nsmooth schemes over a perfect field $k$ and let\n$C\\subset T^*X$ be a closed conical\nsubset.\nAssume that every irreducible\ncomponent of $X$ and\nevery irreducible\ncomponent of $C$\nare of dimension $n$\nand that\nevery irreducible\ncomponent of $W$ is of dimension $m$.\n\nLet ${\\cal F}$ be\na constructible complex of\n$\\Lambda$-modules on $X$\nof finite tor-dimension\nmicro-supported on\n$C\\subset T^*X$.\nThen, we have \n\\begin{equation}\nCC h^*{\\cal F}=\nh^!CC {\\cal F}.\n\\label{eqsm*}\n\\end{equation}\n\\end{pr}\n\n\\proof{\nSince the assertion is \\'etale local\non $W$, we may 
assume\n$W=X\\times {\\mathbf A}^n$\nand $h$ is the first projection.\nThen, this is the case\nwhere $Y={\\mathbf A}^n$\nand ${\\cal G}=\\Lambda$\nof \\cite[Theorem 3.6.2]{notes},\nwhich is proved \nusing the Thom-Sebastiani formula\n\\cite{TS}\nwithout\nusing the results in the rest of\nthis article.\n\\qed}\n\n\\subsection{Integrality}\\label{sZ}\n\nIn this section, we give a proof\nby Beilinson\nof the integrality of characteristic cycles.\n\n\\begin{thm}[Deligne]\\label{thmZ}\nThe coefficients\nof $CC{\\cal F}$ are integers.\n\\end{thm}\n\nThis is a generalization of \nthe Hasse-Arf theorem\n\\cite{CL} in the case\n$\\dim X\\leqq 1$. \nWe will deduce Theorem from\nthe Milnor formula\nand the following Proposition.\n\n\\begin{pr}[{\\rm \\cite[Proposition 4.12]{Be}}]\\label{prT}\nLet $X$ be a smooth scheme\nof dimension $n$\nover an algebraically closed\nfield $k$\nand $C\\subset T^*X$\nbe a closed irreducible conical subset\nof dimension $n$.\nLet $u\\in X$ be a closed\npoint and\nlet $(u,\\omega)\\in C$\nbe a closed smooth point of\n$C\\subset T^*X$ regarded as\na reduced closed subscheme.\n\nThen,\nthere exists a function\n$f$ defined on a neighborhood of $u$\nsuch that the section $df$\nof $T^*X$ meets $C$ transversely\nat $(u,\\omega)$\nif one of the following conditions {\\rm (1)}\nand {\\rm (2)} is satisfied:\n\n{\\rm (1)}\nThe characteristic $p$ of $k$ is\ndifferent from $2$.\n\n{\\rm (2)}\nLet $T$ be the tangent space\n$T_{(u,\\omega)}C$ of $C$ at \nthe smooth point $(u,\\omega)$.\nThen, \nthere exists a function\n$g$ defined on a neighborhood of $u$\nsuch that the section $dg$\nof $T^*X$ meets $C$ at \n$(u,\\omega)$\nand that\nthe dimension of\nthe intersection\n$(dg)_*(T_uX)\\cap T\\subset\nT_{(u,\\omega)}(T^*X)$\nis even.\n\\end{pr}\n\n\\proof\nTake a local coordinate\n$x_1,\\ldots,x_n$ of $X$ at $u$\nand \nwrite $\\omega=\\sum_ia_idx_i$\nas a $k$-linear combination\nof the basis $dx_1,\\ldots,dx_n$ of\nthe cotangent space $T^*_uX$.\nThen, the 
function $g=\\sum_ia_ix_i$\nsatisfies $dg(u)=\\omega$.\nWe will modify \n$g$ as $f=g+\\sum_{i,j}b_{ij}x_ix_j$\nto find a function $f$ satisfying\nthe required condition.\n\nLet $V=T_uX$\nand\n$W=T_{(u,\\omega)}(T^*X)$\ndenote the tangent spaces\nat $u\\in X$ and at\n$(u,\\omega)\\in T^*X$ respectively.\nThe tangent space $W$\nis decomposed as the direct sum of\nthe image $(dg)_*(V)$ of\nthe morphism $(dg)_*\\colon V\\to W$\ndefined by the section $dg\\colon X\\to T^*X$\nwith the tangent space\n$T_\\omega(T^*_uX)$ of the fiber\n$T^*_uX\\subset T^*X$.\nThe latter $T_\\omega(T^*_uX)$\nis naturally identified\nwith the cotangent space\n$T^*_uX\\subset T^*X$ \nthat is the dual\n$V^\\vee$ of $V=T_uX$.\nFurther identifying $V$ with\nits image $(dg)_*(V)$, we identify\n$W=V\\oplus V^\\vee$.\n\nThe required transversality condition means that\nthe intersection \n$(df)_*(V)\\cap T\\subset W$ is $0$.\nSince $df=dg+\\sum_{i,j}(b_{ij}+b_{ji})x_idx_j$,\nwe have $(df)_*=(dg)_*+(B+B^\\vee)$,\nwhere $B\\colon V\\to V^\\vee$\ndenotes the bilinear form on $V$\ndefined by the matrix $(b_{ij})$\nwith respect to the basis $(dx_i)$.\nConsequently, under the identification\n$W=V\\oplus V^\\vee$\nabove, the image of\n$(df)_*\\colon V\\to W$\nis identified with the graph $\\Gamma$ of\n$A=B+B^\\vee\\colon V\\to V^\\vee$.\nThus, the assertion is\na consequence of the\nfollowing lemma on linear algebra.\n\\qed}\n\n\\begin{lm}\\label{lmWV}\nLet $V$ be a $k$-vector space\nof finite dimension\nand $V^\\vee$ be the dual.\n\n{\\rm 1.}\nLet $T\\subset W=V\\oplus V^\\vee$\nbe a linear subspace of\n$\\dim T=\\dim V$\nand set\n$V_1=V\\cap T$.\nThen there exists\na direct sum decomposition\n$V=V_1\\oplus V_2$\nsatisfying the following property:\n\nFor a non-degenerate bilinear form\n$A_1\\colon V_1\\to V_1^\\vee$,\nwe extend it as $A=A_1\\oplus 0 \n\\colon V=V_1\\oplus V_2\\to \nV^\\vee=V_1^\\vee\\oplus V_2^\\vee$\nand let $\\Gamma\\subset\nW=V\\oplus V^\\vee$ denote the graph.\nThen we 
have\n$\\Gamma\\cap T=0$.\n\n{\\rm 2.}\nEither if $\\dim V$ is even or\nif the characteristic $p$ of $k$ is\ndifferent from $2$,\nthere exists a bilinear form\n$B\\colon V\\to V^\\vee$ such that\n$A=B+B^\\vee$ is non-degenerate.\n\\end{lm}\n\n\\proof{1.\nLet $\\bar T$ denote the\nimage of the morphism $T\\to \nW\\to W\/V=V^\\vee$\nand $\\bar T^\\perp\\subset V$\nbe the orthogonal subspace.\nThen by the assumption $\\dim T=\\dim V$,\nthe subspaces \n$V_1=V\\cap T$\nand $\\bar T^\\perp\\subset V$\nhave the same dimension.\nHence, there exists a\ndirect sum decomposition\n$V=V_1\\oplus V_2$ satisfying\n$V_2\\cap \\bar T^\\perp=0$.\nSince $V_2=(V_1^\\vee)^\\perp$,\nwe have $V_1^\\vee+\\bar T=V^\\vee$.\n\nBy the assumption that\n$A$ is non-degenerate,\nwe have $\\Gamma+T\\supset V_1+V_1^\\vee+V_2$.\nHence, further\nby $V_1^\\vee+\\bar T=V^\\vee$,\nwe obtain\n$\\Gamma+T=W$.\nThus $\\dim \\Gamma=\\dim V=\\dim T$\nimplies \n$\\Gamma\\cap T=0$.\n\n\n2.\nIf $p\\neq 2$, it suffices\nto take a non-degenerate symmetric \nbilinear form $B$.\nIf $p=2$ and if $\\dim V$ is even,\nit suffices to take\na non-degenerate alternating \nbilinear form $A$\nand to write $A=B+B^\\vee$\nby taking a symplectic basis.\n\\qed}\n\n\n\\proof[Proof of Theorem {\\rm \\ref{thmZ}}\n{\\rm (Beilinson)}]{\nWe may assume $k$ is algebraically closed.\nWrite $C=\\bigcup_aC_a$\nas the union of irreducible components.\nWe show that each\ncoefficient $m_a$ in\n$CC{\\cal F}=\\sum_am_a[C_a]$ is\nan integer.\nLet $(u,\\omega)$ be\na smooth point of $C_a$\nnot contained in any other \nirreducible component\n$C_b, (b\\neq a)$\nof $C$.\n\nIf one of the conditions\n(1) and (2) in Proposition \\ref{prT}\nis satisfied, \nfor $f$ as in Proposition \\ref{prT},\nthe coefficient\n$m_a$ equals\n$(CC{\\cal F},df)_{T^*X,u}$\nsince \n$(C_a,df)_{T^*X,u}=1$ and\n$(C_b,df)_{T^*X,u}=0$ for $b\\neq a$.\nBy the Hasse-Arf theorem\n\\cite{CL},\nthe total dimension\n$\\dim{\\rm tot} \\phi_u({\\cal F},f)$\nis an integer and\nthe Milnor 
formula\n$(CC{\cal F},df)_{T^*X,u}=\n-\dim{\rm tot} \phi_u({\cal F},f)$\n(\ref{eqMil})\nimplies that the coefficient\n$m_a=(CC{\cal F},df)_{T^*X,u}$\nis an integer in this case.\n\nIn the exceptional case $p=2$,\nwe take $X\times {\mathbf A}^1$\nand the pull-back ${\rm pr}_1^*{\cal F}$.\nIf the original $g$ does\nnot satisfy the condition (2)\nin Proposition \ref{prT}\nat a smooth point $(u,\omega)$\nof $C_a$,\nthen the composition\n${\rm pr}_1^*g$ satisfies\nthe condition\nat the smooth point $((u,0),{\rm pr}_1^*\omega)$\nof ${\rm pr}_1^{\circ}C_a\n\subset T^*(X\times {\mathbf A}^1)$.\nThus, the assertion follows from\nthe compatibility Proposition \ref{prsm*}\nof characteristic cycle with smooth pull-back.\n\qed}\n\n\n\section{Characteristic class}\label{scc}\n\n\subsection{Cycle classes\non projective space bundles}\label{ssCh}\n\nIn this preliminary subsection,\nwe recall some basic facts\non the Chow groups of\n${\mathbf P}^n$-bundles.\nIn this section,\nwe assume that $X$ is a\nscheme of finite type over a field $k$.\nWe can replace this assumption\nby some condition\nwhich assures the necessary properties\nof Chow groups.\n\n\nTo describe the Chow group\nof a projective space bundle, \nwe introduce some notation.\nLet $A=\bigoplus_iA_i$\nbe a graded module\nand let $c_q, q=0,\ldots, n+1$\nbe endomorphisms of\n$A$ of degree $-q$\nsending $A_i$ to $A_{i-q}$.\nWe assume that $c_0$ is the identity.\nWe formally set $f=\sum_{q=0}^{n+1}\nc_qh^{n+1-q}$ and \ndefine a graded module\n\begin{equation}\nA[h]\/(f)=A^{\oplus n+1}=\n\bigoplus_{q=0}^n\nAh^q\n\label{eqAhf}\n\end{equation}\nwhere $A_ih^q$\nhas degree $i+n-q$.\nWe define an endomorphism\n$h$ of $A[h]\/(f)$ of degree $-1$\nby sending $ah^q$ to $ah^{q+1}$\nfor $a\in A$ and $q<n$\nand $ah^n$ to\n$-\sum_{r=1}^{n+1}c_r(a)h^{n+1-r}$.\n\nFor a vector bundle $E$ of\n${\rm rank}\ E=n+1$ on $X$,\nwe set $c_h(E)=\sum_{q=0}^{n+1}\nc_q(E)h^{n+1-q}$\nand identify\n${\rm CH}_\bullet({\mathbf P}(E))\n=\n{\rm CH}_\bullet(X)[h]\/(c_h(E))$,\nwhere $h$ acts as\n$c_1({\cal O}(1))\cap$.\n\n\begin{lm}\label{lmincl}\nLet $F\subset E$ be a sub vector bundle\nof ${\rm rank}\ F=m+1\leqq\n{\rm rank}\ E=n+1$ on $X$\nand let $E'=E\/F$ denote the quotient.\nLet $i\colon\n{\mathbf P}(F)\n\to\n{\mathbf P}(E)$ denote the closed immersion.\n\n{\rm 1.}\nThe pull-back\n$i^*\colon\n{\rm CH}_\bullet({\mathbf P}(E))\n\to\n{\rm CH}_\bullet({\mathbf P}(F))$\nis identified with the canonical surjection\n${\rm CH}_\bullet(X)[h]\/(c_h(E))\n\to\n{\rm CH}_\bullet(X)[h]\/(c_h(F))$.\n\n{\rm 2.}\nThe diagram\n\begin{equation}\n\begin{CD}\n{\rm CH}_\bullet({\mathbf P}(F))\n@>{i_*}>>\n{\rm CH}_\bullet({\mathbf P}(E))\\\n@AAA@AAA\\\n{\rm CH}_\bullet(X)[h]\/(c_h(F))\n@>{c_h(E')\cap }>>\n{\rm CH}_\bullet(X)[h]\/(c_h(E))\n\end{CD}\n\label{eqincl*}\n\end{equation}\nis
commutative.\n\\end{lm}\n\n\\proof{\n1.\nSince the pull-back of ${\\cal O}(1)$\nby ${\\mathbf P}(F)\\to {\\mathbf P}(E)$\nis also\n${\\cal O}(1)$ on ${\\mathbf P}(F)$,\nthe assertion follows from the definition.\n\n2.\nBy 1, it suffices to\nshow that the endomorphism $i_*i^*$\nof\n${\\rm CH}_\\bullet({\\mathbf P}(E))$\nis identified with the multiplication by $c_h(E')$\non ${\\rm CH}_\\bullet(X)[h]\/(c_h(E))$.\nBy the self-intersection formula,\nthe endomorphism $i_*i^*$\nequals the action of\nthe top Chern class\n$c_{n-m}\n(T_{{\\mathbf P}(F)}{\\mathbf P}(E))$\nof the normal bundle.\n\nBy the exact sequence\n$0\\to \\Omega_{{\\mathbf P}(E)\/X}\n\\to E^\\vee\\otimes {\\cal O}(-1)\\to {\\cal O}_{{\\mathbf P}(E)}\\to 0$\nand the corresponding one for\n${\\mathbf P}(F)$,\nwe obtain an isomorphism\n$N_{{\\mathbf P}(F)\/{\\mathbf P}(E)}\n\\to \nE^{\\prime \\vee}\\otimes {\\cal O}(-1)$\nfor the conormal sheaf.\nHence, the top Chern class\n$c_{n-m} (T_{{\\mathbf P}(F)}{\\mathbf P}(E))$\nequals \n$c_{n-m}(E'\\otimes {\\cal O}(1))$\nand is identified with $c_h(E')$.\n\\qed}\n\n\n\\begin{lm}\\label{lmEF1}\nAssume\n$E=F\\oplus {\\mathbf A}^1_X$.\nLet $i\\colon \n{\\mathbf P}(F)\n\\to\n{\\mathbf P}(E)$ denote the injection\nand let $p\\colon \n{\\mathbf P}(E)\\to X$ be the projection.\n\n\n{\\rm 1.}\nThe morphism\n\\begin{equation}\ni^*\\oplus p_*\\colon\n{\\rm CH}_\\bullet({\\mathbf P}(E))\n\\to\n{\\rm CH}_\\bullet({\\mathbf P}(F))\n\\oplus\n{\\rm CH}_\\bullet(X)\n\\label{eqQZ}\n\\end{equation}\nis an isomorphism.\nIf $X$ is irreducible\nof dimension $d\\leqq n$\nand $i_x\\colon x\\to X$ is\nthe immersion of a smooth\n$k$-rational point, \nthe second projection\n$p_*\\colon\n{\\rm CH}_d({\\mathbf P}(E))\n\\to\n{\\rm CH}_d(X)=\n{\\mathbf Z}$\nequals the pull-back $i_x^*\n\\colon\n{\\rm CH}_d({\\mathbf P}(E))\n\\to\n{\\rm CH}_0({\\mathbf P}^n)=\n{\\mathbf Z}$.\n\n\n{\\rm 2.}\nThe morphism\n\\begin{equation}\n i_*+ p^*\\colon\n{\\rm CH}_\\bullet({\\mathbf P}(F))\n\\oplus\n{\\rm 
CH}_\\bullet(X)\n\\to\n{\\rm CH}_\\bullet({\\mathbf P}(E))\n\\label{eqQZ2}\n\\end{equation}\nis an isomorphism.\nThe composition \n${\\rm CH}_\\bullet ({\\mathbf P}(E))\n\\to {\\rm CH}_{\\bullet}(X)$\nof the inverse of\nthe isomorphism {\\rm (\\ref{eqQZ2})}\nwith the second projection\nequals the pull-back by\nthe $0$-section\n$s\\colon X\\to F\\subset {\\mathbf P}(E)$.\n\\end{lm}\n\n\\proof{1.\nWe identify\n${\\rm CH}_\\bullet({\\mathbf P}(E))\n=\n{\\rm CH}_\\bullet(X)[h]\/(c_h(E))$\nand \n${\\rm CH}_\\bullet({\\mathbf P}(F))\n=\n{\\rm CH}_\\bullet(X)[h]\/(c_h(F))$.\nThen, $c_h(E)=c_h(F)h$\nand the morphism\n$i^*\\colon\n{\\rm CH}_\\bullet({\\mathbf P}(E))\n\\to\n{\\rm CH}_\\bullet({\\mathbf P}(F))$\nis identified with the surjection\n${\\rm CH}_\\bullet(X)[h]\/(c_h(E))\n\\to\n{\\rm CH}_\\bullet(X)[h]\/(c_h(F))$\nsending $h^n$ to $0$\nby Lemma \\ref{lmincl}.1.\nSince $p_*\\colon\n{\\rm CH}_\\bullet({\\mathbf P}(E))\n=\n{\\rm CH}_\\bullet(X)[h]\/(c_h(E))\n\\to\n{\\rm CH}_\\bullet(X)$\nis the morphism taking\nthe coefficient of\n$h^n$,\nthe morphism (\\ref{eqQZ}) is\nan isomorphism.\n\nThe second assertion follows from\nthe commutative diagram\n$$\\begin{CD}\n{\\rm CH}_d({\\mathbf P}(E))\n@>{i_x^*}>>\n{\\rm CH}_0({\\mathbf P}^n)\\\\\n@V{p_*}VV@VV{\\deg}V\\\\\n{\\rm CH}_d(X)\n@>{i_x^*}>>\n{\\rm CH}_0(x).\n\\end{CD}$$\n\n\n2.\nThe morphism\n$i_*\\colon\n{\\rm CH}_\\bullet({\\mathbf P}(F))\n\\to\n{\\rm CH}_\\bullet({\\mathbf P}(E))$\nis identified with the multiplication\n$h\\cap \\colon\n{\\rm CH}_\\bullet(X)[h]\/(c_h(F))\n\\to\n{\\rm CH}_\\bullet(X)[h]\/(c_h(E))$\nby Lemma \\ref{lmincl}.2.\nHence\nthe morphism (\\ref{eqQZ2}) is\nan isomorphism.\n\nSince $s^*i_*=0$ and\n$s^*p^*$ is the identity of\n${\\rm CH}_\\bullet(X)$,\nthe second assertion follows\nfrom the isomorphism (\\ref{eqQZ2}).\n\\qed}\n\n\\begin{lm}\\label{lmsurj}\nLet $\\theta\\colon E\\to F$ be a surjection\nof vector bundles on $X$\nof ${\\rm rank}\\ E=n+1\\geqq\n{\\rm rank}\\ F=m+1$ and\nlet $K$ 
denote the kernel.\nLet $\pi\colon \n{\mathbf P}(E)'\n\to\n{\mathbf P}(E)$\nbe the blow-up at\n${\mathbf P}(K)\n\subset\n{\mathbf P}(E)$\nand let $\theta'\colon \n{\mathbf P}(E)'\n\to\n{\mathbf P}(F)$\ndenote the morphism induced by $\theta$.\n\n{\rm 1.}\nLet \n$c_h(K)^{-1}\cap \n\colon\n{\rm CH}_\bullet(X)[h]\/(c_h(E))\n\to\n{\rm CH}_\bullet(X)[h]\/(c_h(F))$\ndenote the composition \nof the first projection\nwith the inverse of the isomorphism\n$$(c_h(K)\cap)+ {\rm can}\colon\n{\rm CH}_\bullet(X)[h]\/(c_h(F))\n\oplus\n{\rm CH}_\bullet(X)[h]\/(c_h(K))\n\to\n{\rm CH}_\bullet(X)[h]\/(c_h(E)).$$\nThen, the diagram \n\begin{equation}\n\begin{CD}\n{\rm CH}_\bullet({\mathbf P}(E))\n@>{\theta'_*\pi^*}>>\n{\rm CH}_\bullet({\mathbf P}(F))\\\n@AAA@AAA\\\n{\rm CH}_\bullet(X)[h]\/(c_h(E))\n@>{c_h(K)^{-1}\cap }>>\n{\rm CH}_\bullet(X)[h]\/(c_h(F))\n\end{CD}\n\label{eqsurj*}\n\end{equation}\nis commutative.\n\n\n{\rm 2.}\nThe diagram \n\begin{equation}\n\begin{CD}\n{\rm CH}_\bullet({\mathbf P}(E))\n@<{\pi_*\theta^{\prime *}}<<\n{\rm CH}_\bullet({\mathbf P}(F))\\\n@AAA@AAA\\\n{\rm CH}_\bullet(X)[h]\/(c_h(E))\n@<{\rm can}<<\n{\rm CH}_\bullet(X)[h]\/(c_h(F))\n\end{CD}\n\label{eqsurj}\n\end{equation}\nis commutative.\n\end{lm}\n\n\proof{\nLet $A={\rm CH}_\bullet(X)$, $c_h(F)=f$ and $c_h(E)=f\cdot g$\nand identify \n${\rm CH}_\bullet({\mathbf P}(E))\n=A[h]\/(f\cdot g)$ and\n${\rm CH}_\bullet({\mathbf P}(F))\n=A[h']\/(f)$.\nLet $L\subset {\mathbf P}(F)\times_XF$\nbe the universal sub line bundle\nand let\n$V\subset {\mathbf P}(F)\times_XE$\nbe its pull-back by the base change of $\theta$.\nThen, \n${\mathbf P}(E)'\n\to\n{\mathbf P}(F)$\nis canonically identified with\nthe ${\mathbf P}^{n-m}$-bundle\n${\mathbf P}(V)$\nby Lemma \ref{lmFEL}.\nThus, we identify\n${\rm CH}_\bullet({\mathbf P}(E)')\n=A[h']\/(f)[h]\/((h-h')\cdot g)$.\nLet $e$ denote the polynomial\nin $h$ and $h'$ 
defined\nby $f(h)-f(h')=(h-h')e$.\n\nWe have a cartesian diagram\n$$\\begin{CD}\n{\\mathbf P}(K)\n\\times \n{\\mathbf P}(F)\n@>>>\n{\\mathbf P}(E)'\\\\\n@VVV@VV{\\pi}V\\\\\n{\\mathbf P}(K)\n@>>>\n{\\mathbf P}(E).\n\\end{CD}$$\nWe show that this induces a cocartesian\ndiagram\n\\begin{equation}\n\\begin{CD}\nA[h]\/(g(h))[h']\/(f(h'))\n@>{(h-h')\\times}>>\nA[h']\/(f(h'))[h]\/((h-h')\\cdot g(h))\\\\\n@A{e\\times}AA@AAA\\\\\nA[h]\/(g(h))\n@>{f(h)\\times}>>\nA[h]\/(f(h)\\cdot g(h)).\n\\end{CD}\n\\label{eqPE'}\n\\end{equation}\non the Chow groups. \nThe diagram is cocartesian by\n\\cite[Proposition 6.7 (e)]{Ful}.\nBy Lemma \\ref{lmincl}.2,\nthe lower horizontal arrow is\nthe multiplication by $f(h)$.\nThe descriptions of\nthe left vertical arrow\nand the upper horizontal arrow\nfollow similarly from\nthe excess intersection formula\nand the self-intersection formula\nrespectively.\nSince $f(h')=0$ \nand hence $f(h)=(h-h')e$ in\n$A[h']\/(f(h'))[h]\/((h-h')\\cdot g(h))$,\nthe right vertical arrow\nis well-defined as\nthe canonical morphism.\n\n1.\nThe morphism $\\theta'_*\\pi^*$\nis the composition of\nthe right vertical arrow in (\\ref{eqPE'})\nwith the morphism\n$A[h']\/(f(h'))[h]\/((h-h')\\cdot g(h))\\to\nA[h']\/(f(h'))$\ntaking the coefficient of $h^{n-m}$.\nFor $i< n-m$,\nwe have $\\theta'_*\\pi^* h^i=0$.\nFor $i\\geqq 0$,\nwe have $h^ig(h)=\nh^{\\prime i}g(h)$ in\n$A[h']\/(f(h'))[h]\/((h-h')\\cdot g(h))$\nand hence $\\theta'_*\\pi^* \nh^ig(h)=h^{\\prime i}$.\nThus, \nthe commutativity of\n(\\ref{eqsurj*}) is proved.\n\n\n2.\nBy the cocartesian diagram\n(\\ref{eqPE'}),\nwe identify \n$A[h']\/(f(h'))[h]\/((h-h')\\cdot g(h))$\nwith the amalgamated sum.\nThen, the morphism\n$\\pi_*\\colon\nA[h']\/(f(h'))[h]\/((h-h')\\cdot g(h))\n\\to\nA[h]\/(f(h)\\cdot g(h))$\nis induced by the\nidentity of $A[h]\/(f(h)\\cdot g(h))$\nand \nthe morphism\n$A[h]\/(g(h))[h']\/(f(h'))\n\\to \nA[h]\/(g(h))$\ntaking the coefficient of $h^{\\prime m}$.\nThe morphism $\\pi_*\\theta^{\\prime 
*}$\nis the composition of\nthe canonical morphism\n$A[h']\/(f(h'))\to\nA[h']\/(f(h'))[h]\/((h-h')\cdot g(h))$\nwith the above morphism.\nFor $0\leqq i\leqq m$,\nwe have $h^{\prime i}\n=h^i+(h'-h)e_i$\nfor a polynomial $e_i$\nof degree $\leqq m-1$ in $h'$\nand hence\n$\pi_*\theta^{\prime *}h^{\prime i}=h^i$.\nThus, \nthe commutativity of\n(\ref{eqsurj}) is proved.\n\qed}\n\medskip\n\nOne cannot expect the commutativity\nof the diagram\n\begin{equation}\n\begin{CD}\nK(X,\Lambda)\n@>{cc_X}>>\n{\rm CH}_\bullet(X)\\\n@V{f_*}VV@VV{f_!}V\\\nK(Y,\Lambda)\n@>{cc_Y}>>\n{\rm CH}_\bullet(Y)\n\end{CD}\n\label{eqCHf}\n\end{equation}\nfor a proper morphism\n$f\colon X\to Y$ over $k$\nin full generality (cf.\ \cite[Note 87$_1$]{RS})\nexcept for the dimension $0$-part,\nas the following counterexample shows.\n\n\begin{ex}\label{exctr}\nWe identify ${\rm CH}_\bullet({\mathbf P}^n)$\nwith $\bigoplus_{i=0}^n{\mathbf Z}$.\n\n{\rm 1.}\nThe characteristic class $cc_{{\mathbf P}^n}\n\Lambda=\n(-1)^n[T^*_{{\mathbf P}^n}\n{\mathbf P}^n]=\n(-1)^n\nc(\Omega^1_{{\mathbf P}^n})=\n((-1)^i\binom {n+1}{i+1})_{\dim i}$\nis non-trivial in every degree.\nThe endomorphism\n$f_*$ of ${\rm CH}_\bullet({\mathbf P}^n)$\ninduced by the Frobenius morphism\n$f\colon {\mathbf P}^n\to {\mathbf P}^n$\nis the multiplication by\n$p^i$ on the dimension $i$-part.\nSince $f_*\Lambda=\Lambda$,\nthe diagram \n{\rm (\ref{eqCHf})}\nis {\em not} commutative for\nthe Frobenius morphism\n$f\colon {\mathbf P}^n\to {\mathbf P}^n$\nexcept possibly for the dimension $0$-part.\n\n{\rm 2.}\nLet $j\colon \n{\mathbf A}^1\n={\rm Spec}\ k[x]\n\to {\mathbf P}^1$\nbe the open immersion\nand let $i\colon \n{\rm Spec}\ k\n\to {\mathbf P}^1$\nbe the closed immersion\nof the complement.\nLet ${\cal L}$ on ${\mathbf A}^1$ be\nthe locally constant constructible\nsheaf of rank $1$ defined by\nthe Artin-Schreier equation\n$t^p-t=x$ and define\n${\cal F}=j_!{\cal L}\n\oplus \Lambda$\nand \n${\cal G}=j_!\Lambda^{\oplus 2}$.\n\nThen, we have\n$CC{\cal F}\n=\nCC{\cal G}\n=-2(T^*_{{\mathbf P}^1}{\mathbf P}^1\n+T^*_\infty{\mathbf P}^1)$\nand hence\n$cc_{{\mathbf P}^1}\n{\cal F}\n=\ncc_{{\mathbf P}^1}\n{\cal G}\n=(-2,2)$.\nSince\n$cc_{{\rm Spec} 
k}\ni^*{\\cal F}=1\\neq\ncc_{{\\rm Spec} k}\ni^*{\\cal G}=0$,\nthere exists {\\em no}\nright vertical arrow that makes the diagram\n\\begin{equation}\n\\begin{CD}\nK(X,\\Lambda)\n@>{cc_X}>>\n{\\rm CH}_\\bullet(X)\\\\\n@V{h^*}VV@VV{?}V\\\\\nK(W,\\Lambda)\n@>{cc_W}>>\n{\\rm CH}_\\bullet(W)\n\\end{CD}\n\\label{eqCHh}\n\\end{equation}\ncommutative\nfor the immersion $\\infty\\to {\\mathbf P}^1$.\n\\end{ex}\n\nWe consider the functoriality\nwith respect to push-forward.\nBy Lemma \\ref{lmccd}.1,\nthe commutative diagram\n(\\ref{eqCHf}) for \nproper smooth morphism $f\\colon\nX\\to {\\rm Spec}\\ k$\nmeans that the index formula\n$(CC{\\cal F},T^*_XX)=\\chi(X_{\\bar k},{\\cal F})$\nholds for\nconstructible complexes ${\\cal F}$ on $X$.\n\n\\begin{lm}\\label{lmclim}\nThe diagram {\\rm (\\ref{eqCHf})}\nis commutative if $f\\colon X\\to Y$\nis a closed immersion\nof embeddable schemes\nof finite type over $k$.\n\\end{lm}\n\n\\proof{\nLet $i\\colon Y\\to M$\nbe a closed immersion to\na smooth scheme over $k$.\nThen, since both $cc_X{\\cal F}$\nand $cc_Yf_*{\\cal F}$\nare defined by\n$CC(if)_*{\\cal F}$,\nthe assertion follows.\n\\qed}\n\\medskip\n\nLet $f\\colon X\\to Y$\nbe a proper morphism\nof embeddable schemes\nof finite type over $k$.\nAs in the proof of Corollary\n\\ref{corcnsm},\nwe obtain a commutative diagram\n\\begin{equation}\n\\begin{CD}\nX@>i>> M\\\\\n@VfVV @VV{g}V\\\\\nY@>j>> N.\n\\end{CD}\n\\label{eqXYMN}\n\\end{equation} where\n$M$ and $N$ are smooth,\nthe right vertical arrow is smooth\nand the horizontal arrows are\nclosed immersions.\nWe consider the morphisms\n\\begin{equation}\n\\begin{CD}\nY\\times_NT^*N\n@<{\\tilde f}<<\nX\\times_NT^*N\n@>h>>\nX\\times_NT^*M\n\\end{CD}\n\\label{eqfh}\n\\end{equation}\nwhere the left arrow is induced\nby $f$ and the right arrow is\nthe canonical injection.\n\nLet $C\n\\subset X\\times_MT^*M$\nbe a closed conical subset.\nAssume that every irreducible \ncomponent of $M$ and of $C=\\bigcup_aC_a$\nis of dimension 
$m$.\nAssume also that every irreducible \ncomponent of $N$ \nis of dimension $\leqq n$\nand that every irreducible \ncomponent of\nthe closed conical subset\n$f_\circ C=\tilde f(h^{-1}C)\n\subset Y\times_NT^*N$\nis of dimension $\leqq n$.\nThen for a linear combination\n$A=\sum_am_aC_a$ of\nirreducible components\nof $C$,\nthe push-forward $f_!A=\n\sum_a\nm_a\tilde f_*h^!C_a$\nis defined as a linear combination of\nirreducible components of dimension $n$\nof $f_\circ C$.\n\n\n\begin{lm}\label{lmcnpr}\nLet $f\colon X\to Y$\nbe a proper morphism\nof embeddable schemes\nof finite type over $k$\nand let {\rm (\ref{eqXYMN})}\nbe a commutative diagram\nof schemes over $k$.\nAssume that\n$M$ and $N$ are smooth\nof dimension $m$ and $n$\nrespectively,\nthat the right vertical arrow is smooth\nand that the horizontal arrows are\nclosed immersions.\n\nLet ${\cal F}$\nbe a constructible complex of\n$\Lambda$-modules on $X$ \nand $C=SS{\cal F}\n\subset X\times_MT^*M$ \nbe the singular support of\n${\cal F}$.\nAssume that \n$f_\circ C=\tilde f(h^{-1}C)\n\subset Y\times_NT^*N$\nis of dimension $\leqq n$\nand that we have\n$CCR(gi)_*{\cal F}=f_!CCi_*{\cal F}$.\nThen, we have\n\begin{equation}\ncc_YRf_*{\cal F}\n=f_*cc_X{\cal F}.\n\label{eqprcc}\n\end{equation}\n\end{lm}\n\n\proof{\nWe consider the morphisms\n\begin{equation}\n\begin{CD}\n{\mathbf P}(Y\times_NT^*N\n\oplus {\mathbf A}^1_Y)\n@<\bar f<<\n{\mathbf P}(X\times_NT^*N\n\oplus {\mathbf A}^1_X)\n@>\bar h>>\n{\mathbf P}(X\times_MT^*M\n\oplus {\mathbf A}^1_X)\n\end{CD}\n\label{eqfhb}\n\end{equation}\nextending (\ref{eqfh})\nand let $\bar f_!\colon\n{\rm CH}_m({\mathbf P}(X\times_MT^*M\n\oplus {\mathbf A}^1_X))\n\to\n{\rm CH}_n({\mathbf P}(Y\times_NT^*N\n\oplus {\mathbf A}^1_Y))$\n denote the composition \n$\bar f_*\bar h^!$.\nBy Lemma \ref{lmincl}.2,\nthe diagram\begin{equation}\n\begin{CD}\n{\rm CH}_m({\mathbf P}(X\times_MT^*M\n\oplus {\mathbf A}^1_X))\n@>>>\n{\rm CH}_\bullet(X)\n\\\n@V{\bar f_!}VV@VV{f_*}V\\\n{\rm CH}_n({\mathbf P}(Y\times_NT^*N\n\oplus {\mathbf A}^1_Y))\n@>>>\n{\rm CH}_\bullet(Y)\n\end{CD}\n\label{eqCCpr}\n\end{equation}\nis commutative.\n\nLet $A=CC{\cal F}$.\nBy the assumption\n$\dim f_\circ C\leqq n$,\na cycle $\overline{f_!A}$ is \ndefined as that\nsupported on the closure of\n$f_\circ C$ in\n${\mathbf P}(Y\times_NT^*N\n\oplus {\mathbf A}^1_Y)$\nby taking the closure of\nthe cycle $f_!A$.\nFurther, we have\n$\overline{f_!A}=\n\bar f_!\bar A$.\nThus, by the assumption\n$CCj_*Rf_*{\cal F}=f_!CCi_*{\cal F}$\nand (\ref{eqCCpr}),\nwe obtain\n$\overline{CCj_*Rf_*{\cal F}}\n=\bar f_!\overline{CCi_*{\cal F}}\n=f_*\overline{CCi_*{\cal F}}$.\nHence, the assertion follows.\n\qed}\n\medskip\n\n\n\n\section{Pull-back of characteristic cycle\nand the index formula}\label{spb}\n\nWe prove that the construction\nof characteristic cycles\nis compatible with the pull-back\nby properly transversal morphisms\nin Section \ref{sspb}.\nWe will derive from this\nan index formula for the Euler number\nat the end of Section \ref{ssREP}.\n\nIn this section,\n$k$ denotes a field\nof characteristic $\geqq0$.\nWe assume that\nirreducible components of\na smooth scheme over $k$\nhave the same dimension unless otherwise\nstated.\nWe also assume that\nirreducible components of\na closed conical subset \nof the cotangent bundle of\na smooth scheme over $k$\nhave the same dimension\nas the base scheme.\n\n\subsection{Pull-back of characteristic cycle}\label{sspb}\n\nIn this subsection,\nwe assume that\n$X$ and $W$ are\nsmooth schemes over a field $k$.\nWe assume that\nevery irreducible component\nof $X$ (resp.\ of $W$) is of dimension \n$n$ (resp.\ $m$) and that\nevery irreducible component of\na closed conical subset\n$C\subset T^*X$ is of dimension $n$.\nWe assume that\na constructible complex\n${\cal F}$ of $\Lambda$-modules\non $X$ is of finite 
tor-dimension.\n\n\n\begin{df}\label{dfCt}\nLet $X$ and $W$ be \nsmooth schemes over\na field $k$ and let\n$C\subset T^*X$ be a closed conical\nsubset.\nAssume that every irreducible\ncomponent of $X$ and\nevery irreducible\ncomponent of $C$\nare of dimension $n$\nand that\nevery irreducible\ncomponent of $W$ is of dimension $m$.\n\n{\rm 1.}\nWe say that a $C$-transversal\nmorphism $h\colon W\to X$ over $k$\nis {\em properly $C$-transversal}\nif every irreducible component of\n$h^*C=W\times_XC$\nis of dimension $m$.\n\n{\rm 2.}\nLet $h\colon W\to X$ be \na properly $C$-transversal morphism\nand let \n\begin{equation}\n\begin{CD}\nT^*W@<<< W\times_XT^*X\n@>>> T^*X\n\end{CD}\n\label{eqh!A}\n\end{equation}\nbe the canonical morphisms.\nThen, for a linear combination\n$A=\sum_am_a[C_a]$\nof irreducible components of\n$C=\bigcup_aC_a$,\nwe define $h^!A$ to be \n$(-1)^{n-m}$-{\em times} the push-forward\nby the first arrow\n$W\times_XT^*X\to T^*W$\nin {\rm (\ref{eqh!A})}\nof the pull-back of $A$ by the second arrow\n$W\times_XT^*X\to T^{*}X$\nin the sense of intersection theory.\n\end{df}\n\n\begin{lm}\n\label{lmdimP}\nLet $h\colon W\to X$ be a morphism\nof smooth schemes over $k$\nand $C\subset T^*X$ be a closed\nconical subset.\nAssume that every irreducible component\nof $X$ is of dimension $n$ and\nthat every irreducible component\nof $W$ is of dimension $m=n-c$.\nLet $\dim_{h(W)}C$ denote\nthe minimum of $\dim (C\cap T^*U)$\nwhere $U$ runs through\nopen neighborhoods of the image $h(W)$.\n\n{\rm 1.}\nFor $h^*C=W\times_XC$,\nwe have\n$\dim h^*C\geqq \dim_{h(W)}C-c$.\n\n{\rm 2.}\nLet $g\colon V\to W$ be a morphism\nof smooth schemes over $k$.\nAssume that every irreducible component\nof $C$ is of dimension $n$ and\nthat every irreducible component\nof $V$ is of dimension $l=m-c'$.\nThen, \nthe following conditions are equivalent:\n\n{\rm (1)}\n$h$ is properly $C$-transversal\non a neighborhood of\n$g(V)\subset W$ and 
\n$g\colon V\to W$ is properly \n$h^{\circ}C$-transversal.\n\n{\rm (2)}\nThe composition $h\circ g\colon V\to X$\nis properly $C$-transversal.\n\end{lm}\n\n\proof{\n1.\nIf $h$ is smooth, we have\n$\dim h^*C= \dim_{h(W)} C-c$.\nHence, it suffices to consider the\ncase where $h$ is a regular immersion\nof codimension $c$.\nThen it follows from\n\cite[Chap.\ 0 Proposition (16.3.1)]{EGA4}.\n\n2.\nBy Lemma \ref{lmCh}.3,\nit suffices to verify the conditions\non the dimension.\nThe implication (1)$\Rightarrow$(2)\nis clear. We show\n(2)$\Rightarrow$(1).\nBy 1, we have\n$\dim g^*(h^*C)\geqq \dim_{g(V)} h^*C-c'\n\geqq n-c-c'$.\nHence (2) implies\nthe equalities\n$\dim_{g(V)} h^*C= n-c$ and\n$\dim g^*(h^*C)=\dim_{g(V)} h^*C-c'$.\nThus we have\n(2)$\Rightarrow$(1).\n\qed}\n\medskip\n\nIf $h\colon W\to X$ is \nproperly $C$-transversal,\nthen every irreducible component of\n$h^{\circ}C$\nis of dimension $\dim W$\nby Lemma \ref{lmnc}.\nA smooth morphism\n$h\colon W\to X$ is properly $C$-transversal\nby Lemma \ref{lmCh}.1.\n\n\begin{ex}\n{\rm (\cite[Example 2.18]{nonlog})}\nAssume that $k$ is a perfect\nfield of characteristic $p>2$.\nLet $X={\mathbf A}^2\n={\rm Spec}\ k[x,y]\n\supset\nU={\mathbf G}_m\times\n{\mathbf A}^1\n={\rm Spec}\ k[x^{\pm1},y]$.\nLet ${\cal G}$ be a locally constant constructible sheaf\nof rank $1$ on $U$\ndefined by the Artin-Schreier equation\n$t^p-t=y\/x^p$.\nThen, the singular support\n$C=SSj_!{\cal G}$\nfor the open immersion $j\colon\nU\to X$ equals the union\nof the $0$-section\n$T^*_XX$ with the line bundle\n$\langle dy\rangle_D$\non the $y$-axis $D=X\raisebox{2.33pt}{~\rule{6.4pt}{1.3pt}~} U$\nspanned by the section $dy$.\nHence, the immersion\n$D\to X$ is $C$-transversal\nbut is {\em not} properly\n$C$-transversal.\n\end{ex}\n\n\n\n\n\nIn the rest of this section,\nwe assume that $k$ is perfect.\n\n\n\begin{pr}[{Beilinson}]\label{prh!}\nLet ${\mathbf P}={\mathbf P}^n$\nbe a projective space \nand let 
${\\mathbf P}^\\vee$\nbe the dual projective space.\nLet ${\\cal G}$ be\na constructible complex of\n$\\Lambda$-modules\non ${\\mathbf P}^\\vee$\nand let ${\\cal F}$ denote\nthe naive inverse Radon transform\n$R{\\bm p}_*{\\bm p}^{\\vee *}{\\cal G}$.\nLet $C^\\vee\n\\subset T^*{\\mathbf P}^\\vee$\nbe a closed conical subset \nsuch that every irreducible\ncomponent is of dimension $n$\nand let\n$C={\\bm p}_{\\circ}\n{\\bm p}^{\\vee \\circ}C^\\vee\n\\subset T^*{\\mathbf P}$.\nAssume that ${\\cal G}$ is\nmicro-supported on $C^\\vee\\subset \nT^*{\\mathbf P}^\\vee$.\n\n{\\rm 1.}\nWe have\n\\begin{equation}\n{\\mathbf P}(CC {\\cal F})=\n{\\mathbf P}({\\bm p}_!CC {\\bm p}^{\\vee*}{\\cal G})\n=\n{\\mathbf P}({\\bm p}_!{\\bm p}^{\\vee!}CC {\\cal G}).\n\\label{eqp!}\n\\end{equation}\n\n{\\rm 2.}\nLet $X$ be a smooth subscheme\nof ${\\mathbf P}$\nand assume that the immersion\n$h\\colon X\\to {\\mathbf P}$\nis {\\em properly $C$-transversal}.\nThen, we have\n\\begin{equation}\nCC h^*{\\cal F}=\nh^!CC{\\cal F}.\n\\label{eqh!}\n\\end{equation}\n\\end{pr}\n\n\\proof{\nFirst, we prove\n\\begin{equation}\n{\\mathbf P}(CC Rp_*p^{\\vee *}{\\cal G})=\n{\\mathbf P}(p_!CC p^{\\vee*}{\\cal G})\n\\label{eqph!}\n\\end{equation}\nfor properly $C$-transversal\nimmersion\n$h\\colon X\\to {\\mathbf P}$.\nWe may assume that $k$ is algebraically closed.\nBoth $CC Rp_*p^{\\vee *}{\\cal G}$\nand $p_!CC p^{\\vee*}{\\cal G}$\nare supported on \n$p_{\\circ}p^{\\vee \\circ}C^\\vee\n=h^{\\circ}C$ by Corollary \\ref{corhCt}.2.\nBy Lemma \\ref{lmh!}\nand the assumption that\n$h$ is properly $C$-transversal,\nevery irreducible component\nof $h^\\circ C$ is of dimension $X$.\nHence it suffices to show the equality\n\\begin{equation}\n(CC Rp_*p^{\\vee *}{\\cal G},df)_u\n=(p_!CC p^{\\vee*}{\\cal G},df)_u\n\\label{eqMilR}\n\\end{equation}\nfor smooth\nmorphisms $f\\colon U\\to {\\mathbf A}^1$\ndefined on open subschemes $U\\subset X$\nwith at most an isolated $C$-characteristic point $u$.\n\nBy the Milnor 
formula,\nthe left hand side of (\ref{eqMilR})\nequals\n$-\dim{\rm tot}\\\n\phi_u(Rp_*p^{\vee *}{\cal G},f)$.\nBy Corollary \ref{corfp},\nthere exist at most finitely many\nisolated $p^{\vee \circ}C$-characteristic points of\n$f p\colon U\times_{\mathbf P}Q\n\to {\mathbf A}^1$.\nHence, \nthe right hand side of (\ref{eqMilR})\nequals\n$\sum_v\n(CC p^{\vee*}{\cal G},d(f p))_v$\nwhere $v$ runs through\nisolated $p^{\vee \circ}C$-characteristic points of\n$fp$.\nFurther by the Milnor formula,\nthis equals\n$-\sum_v\dim{\rm tot}\\\n\phi_v(p^{\vee *}{\cal G},fp)$.\nThus, the equality (\ref{eqMilR})\nfollows from the isomorphism\n$$\phi_u(Rp_*p^{\vee *}{\cal G},f)\n\to\nR\Gamma(Q\times_Xu,\n\phi(p^{\vee *}{\cal G},fp))\n\to\n\bigoplus_v\n\phi_v(p^{\vee *}{\cal G},fp).$$\n\n1.\nFor the first equality,\nit suffices to take\n$X={\mathbf P}$ in (\ref{eqph!}).\nThe second follows from\nProposition \ref{prsm*}.\n\n2.\nBy the proper base change theorem,\nwe have an isomorphism\n$h^*{\cal F}\n\to Rp_*p^{\vee *}{\cal G}$.\nHence by (\ref{eqp!})\nand (\ref{eqph!}),\nwe have\n${\mathbf P}(CCh^*{\cal F})\n={\mathbf P}(h^!CC{\cal F})$.\nBy the assumption\nthat the immersion\n$h$ is properly $C$-transversal,\nthe coefficients\nof the $0$-section\n$T^*_XX$ in $CCh^*{\cal F}$\nand in $h^!CC{\cal F}$\nare both equal to $(-1)^{\dim X}\n{\rm rank}\ {\cal F}$.\nThus the assertion follows.\n\qed}\n\medskip\n\n\nFor a linear combination\n$A=\sum_am_aC_a$\nof irreducible closed conical subsets\nof dimension $n$,\nwe define the Legendre transform\n$LA=(-1)^{n-1}{\bm p}^\vee_!{\bm p}^!A$.\nThis is also a linear combination\nof irreducible closed conical subsets\nof dimension $n$\nby Lemma \ref{lmh!}.\nSince the definition of ${\bm p}^!A$\ninvolves the sign $(-1)^{n-1}$,\nthat of the Legendre transform\ndoes not involve a sign and\nwe have\n${\mathbf P}(L(A))=\n\sum_{a;C_a\not\subset \nT^*_{\mathbf P}{\mathbf P}}\nm_a{\mathbf 
P}(C_a^\\vee)$.\n\n\n\n\\begin{cor}[Beilinson]\\label{corCCR}\nLet ${\\cal F}$\nbe a constructible complex\nof $\\Lambda$-modules on ${\\mathbf P}$.\nThen, for the Radon transform\n$R{\\cal F}$, we have\n\\begin{equation}\n{\\mathbf P}(CCR{\\cal F})=\n{\\mathbf P}(LCC{\\cal F}).\n\\label{eqCCR}\n\\end{equation}\n\\end{cor}\n\n\nWe will remove ${\\mathbf P}$\nin (\\ref{eqCCR})\nin Corollary \\ref{corLR}.\n\n\n\\proof{\nBy Proposition \\ref{prh!}.1,\nwe have \n$${\\mathbf P}(CCR{\\mathcal F})\n=\n{\\mathbf P}((-1)^{n-1}{\\bm p}^\\vee_!\nCC{\\bm p}^*{\\mathcal F})\n=\n{\\mathbf P}((-1)^{n-1}{\\bm p}^\\vee_!\n{\\bm p}^!\nCC{\\mathcal F})\n=\n{\\mathbf P}\n(LCC{\\mathcal F}).$$\n\\qed}\n\n\n\\begin{thm}\\label{thmi*}\nLet $X$ and $W$ be \nsmooth schemes over a perfect field $k$ and let\n$C\\subset T^*X$ be a closed conical\nsubset.\nAssume that every irreducible\ncomponent of $X$ and\nevery irreducible\ncomponent of $C$\nare of dimension $n$\nand that\nevery irreducible\ncomponent of $W$ is of dimension $m$.\n\nLet ${\\cal F}$ be\na constructible complex of\n$\\Lambda$-modules on $X$\nof finite tor-dimension\nmicro-supported on\n$C\\subset T^*X$\nand\nlet $h\\colon W\\to X$ be a \nproperly $C$-transversal morphism.\nThen, we have \n\\begin{equation}\nCC h^*{\\cal F}=\nh^!CC {\\cal F}.\n\\label{eqii}\n\\end{equation}\n\\end{thm}\n\n\\proof[Proof {\\rm (Beilinson)}]{\nLet $h\\colon W\\to X$\nbe a properly $C$-transversal morphism.\nSince $h$ is decomposed\nas the composition\nof the graph $W\\to W\\times X$\nand the projection $W\\times X\\to X$\nand since it is proved for\nsmooth morphisms in Proposition \\ref{prsm*},\nit is sufficient to show\nthe case where $h$ is an immersion.\n\n\nFirst, we show the case\nwhere $X$ is the projective space \n${\\mathbf P}={\\mathbf P}^n$.\nThe case where ${\\cal F}=R{\\bm p}_*\n{\\bm p}^{\\vee*}{\\cal G}$\nis the naive inverse Radon transform\nhas been proved in Proposition\n\\ref{prh!}.2.\nLet ${\\cal F}$ be a constructible\ncomplex 
on ${\\mathbf P}^n$.\nSince ${\\cal F}$ is isomorphic\nto $R^\\vee R{\\cal F}$ upto\nconstant sheaf\nand the assertion is clear for\nthe constant sheaf,\nit follows in the case\n$h$ is an immersion \nto ${\\mathbf P}$.\n\nWe show the general case.\nSince the assertion is local,\nwe may assume that $X$ is\naffine and take an immersion\n$i\\colon X\\to {\\mathbf P}$.\nFurther, we may assume that\nthere is a smooth subscheme\n$V\\subset {\\mathbf P}$\nsuch that $X\\cap V=W$\nand that the intersection\nis transversal.\nThen, the immersion $\\tilde h\\colon\nV\\to {\\mathbf P}$\nis $i_{\\circ}C$-transversal\non a neighborhood of $W$.\nHence, it follows from the case\nwhere $h$ is an immersion \nto ${\\mathbf P}$.\n\\qed}\n\\smallskip\n\n\nWe study the compatibility\nof characteristic classes with pull-back.\nLet $F\\to E$ be\nan injection of vector bundles\nover a scheme $W$ of finite type over $k$\nand let $p\\colon E\\to E\/F$\nbe the canonical surjection\nLet $C\\subset E$\nbe a closed conical subset\nsuch that\nthe intersection\n$C\\cap F$\nis a subset of the $0$-section.\nThen, for a linear combination\n$A=\\sum_am_aC_a$ of irreducible\ncomponents of $C=\\bigcup_aC_a$,\nthe intersection theory defines\na cycle $p^!p_*A$ supported\non $p^{-1}(p(C))=C+F\n\\subset E$.\n\n\\begin{df}\\label{dfprim}\nLet $X$ be an embeddable scheme of\nfinite type over $k$\nand let $i\\colon X\\to M$\nbe a closed immersion to\na smooth scheme over $k$.\nLet $C\\subset X\\times_MT^*M$\nbe a closed conical subset.\nLet $h\\colon W\\to X$\nbe a regular immersion\nof codimension $c$.\n\n{\\rm 1.}\nWe say that $h$ is \n{\\em properly $C$-transversal}\nif the following conditions are\nsatisfied:\nThe canonical morphism\n$T^*_WX\\to W\\times_MT^*M$\nis an injection of vector bundles on $W$.\nFor every irreducible\ncomponents $C_a$ of $C=\\bigcup_aC_a$,\nthe pull-back $W\\times_XC_a\\subset C_a$\nis of codimension $c$.\nThe intersection \n$h^*C\\cap T^*_WX$\nfor $h^*C=W\\times_XC\n\\subset 
W\\times_MT^*M$\nis a subset of the $0$-section.\n\n\n{\\rm 2.}\nAssume that $h\\colon W\\to X$\nis properly $C$-transversal.\nWe regard the pull-back\n$h^*C=\nW\\times_XC$ as a subset\nof $W\\times_MT^*M$\nand the conormal bundle\n$T^*_WX$ as a sub vector bundle\nof $W\\times_MT^*M$.\nThen, we define a closed conical\nsubset \n$h^!C\\subset W\\times_MT^*M$\nto be the sum\n$h^*C+T^*_WX$.\n\nFor a linear combination\n$A=\\sum_am_aC_a$ of\nirreducible components of\n$C=\\bigcup_aC_a$,\nlet $h^*C_a$ denote the\npull-back of $C_a$ by\nthe regular immersion\n$W\\times_MT^*M\\to \nX\\times_MT^*M$ in the sense of\nintersection theory\nand let $p\\colon W\\times_MT^*M\n\\to (W\\times_MT^*M)\/T^*_WX$\nbe the canonical surjection.\nThen,\nwe define\n$h^!A=(-1)^c\\sum_am_a\np^!p_*h^*C_a$.\n\\end{df}\n\n\n\n\\begin{pr}\\label{prprim}\nLet $X$ be an embeddable scheme of\nfinite type over $k$\nand let $i\\colon X\\to M$\nbe a closed immersion to\na smooth scheme over $k$.\nLet ${\\cal F}$\nbe a constructible complex of\n$\\Lambda$-modules on $X$ \nand $C=SS{\\cal F}\n\\subset X\\times_MT^*M$ \nbe the singular support of\n${\\cal F}$.\n\nLet $h\\colon W\\to X$\nbe a {\\em properly\n$C$-transversal} closed regular immersion\nof codimension $c$.\nThen, we have\n\\begin{equation}\nCC(i\\circ h)_*h^*{\\cal F}\n=h^!CCi_*{\\cal F},\n\\label{eqprim}\n\\end{equation}\n\\begin{equation}\ncc_Wh^*{\\cal F}\n=\n(-1)^c\nc(T^*_WX)^{-1}\\cap h^!cc_X{\\cal F}.\n\\label{eqprim2}\n\\end{equation}\n\\end{pr}\n\n\\proof{\nWe show (\\ref{eqprim}).\nSince the assertion is local on\n$W$, we may assume that\nthere exists a cartesian diagram\n$$\\begin{CD}\nW@>h>> X\\\\\n@VVV@VViV\\\\\nN@>j>>M\n\\end{CD}$$\nwhere the lower\nhorizontal arrow\n$j\\colon N\\to M$ \nis a regular immersion\nof codimension $c$\nof smooth schemes over $k$.\nFurther, we may assume that\n the immersion $j\\colon N\\to M$ \nis properly $C$-transversal.\nThen, by Theorem \\ref{thmi*},\nwe have $CCj^*i_*{\\cal F}\n=j^!CCi_*{\\cal 
F}$.\nSince\n$(i\circ h)_*h^*{\cal F}\n=\nj_*j^*i_*{\cal F}$, \nwe obtain\n$CC(i\circ h)_*h^*{\cal F}\n=\nCCj_*j^*i_*{\cal F}\n=\nj_*CCj^*i_*{\cal F}\n=\nj_*j^!CCi_*{\cal F}\n=\nh^!CCi_*{\cal F}$\nby Lemma \ref{lmRf}.2.\n\n\nThe equality (\ref{eqprim2})\nfollows from (\ref{eqprim}) and\nLemma \ref{lmtheta}\nsince the definition of $h^!CCi_*{\cal F}$\ninvolves the sign $(-1)^c$.\n\qed}\n\n\begin{cor}\label{corcnsm}\nLet $f\colon X\to Y$\nbe a smooth morphism of \nrelative dimension $d$ of\nembeddable schemes \nof finite type over $k$\nand let $T^*X_{\/Y}$ \nand $c(T^*X_{\/Y})$ denote\nthe relative cotangent bundle\nand its total Chern class.\nThen,\nthe diagram\n\begin{equation}\n\begin{CD}\nK(Y,\Lambda)\n@>{cc_Y}>>\n{\rm CH}_\bullet(Y)\\\n@V{f^*}VV@VV{(-1)^dc(T^*X_{\/Y})\cap f^!}V\\\nK(X,\Lambda)\n@>{cc_X}>>\n{\rm CH}_\bullet(X)\n\end{CD}\n\label{eqCHhf}\n\end{equation}\nis commutative.\nIn particular, for $Y={\rm Spec}\ k$\nand for a geometrically constant\nsheaf ${\cal F}$ on $X$ of dimension $n$,\nwe have\n\begin{equation}\ncc_X{\cal F}\n=\n(-1)^n\n{\rm rank}\ {\cal F}\cdot\nc(T^*X).\n\label{eqY=k}\n\end{equation}\n\end{cor}\n\n\proof{\nLet $i\colon X\to M$\nand $j\colon Y\to N$\nbe closed immersions\nto smooth schemes over $k$.\nAfter replacing $i\colon X\to M$\nby $(i,jf)\colon X\to M\times N$,\nwe obtain a commutative diagram\n\begin{equation}\n\xymatrix{\nX\ar[rdd]_f\ar[rd]^{\!\!\! 
h}\\ar[rrd]^i&&\\\\\n&Y\n\\ar[r]_{ j'}\\ar[d]^{g'}&M\\ar[d]^g\\\\\n&Y\\ar[r]^j&N\n}\n\\label{eqXYMNg}\n\\end{equation}\nwhere $g\\colon M\\to N$ is smooth\nand the square is cartesian.\nSince $g$ is smooth\nand $j'_*g^{\\prime *}{\\cal F}=\ng^*j_*{\\cal F}$,\nwe have\n$CCj'_*g^{\\prime *}{\\cal F}=\ng^!CCj_*{\\cal F}$\nby Theorem \\ref{thmi*}.\nLet $C=SSj_*{\\cal F}\n\\subset Y\\times_NT^*N$\nbe the singular support\nand let $C'=g^oC\n\\subset Y'\\times_MT^*M$.\nThen, since $g\\colon M\\to N$ is smooth,\nthe closed immersion\n$h\\colon X\\to Y'$\nis properly $C'$-transversal.\nHence by Proposition \\ref{prprim},\nwe have\n$CCi_*h^*g^{\\prime *}{\\cal F}=\nh^!CCj'_*g^{\\prime *}{\\cal F}=\nh^!g^!CCj_*{\\cal F}$.\nNamely, we have\n$CCi_*f^*{\\cal F}=\nf^!CCj_*{\\cal F}$.\nSince the definition of\nthe right hand side\ninvolves the sign $(-1)^d$,\nwe obtain\n\\begin{equation}\ncc_Xf^*{\\cal F}=\nc(T^*X_{\/Y})\\cap\n(-1)^df^!cc_Y{\\cal F}\n\\label{eqccTXY}\n\\end{equation}\nby Lemma \\ref{lmincl}.2.\nThus the assertion follows.\n\\qed}\n\n\n\n\\subsection{Radon transform and \nthe index formula}\\label{ssREP}\n\n\n\nAssume $X={\\mathbf P}^n$.\nWe identify \n$Q={\\mathbf P}(T^*{\\mathbf P})$\nand let\n${\\bm p}\\colon Q\\to {\\mathbf P}$ and\n${\\bm p}^\\vee\\colon Q\\to {\\mathbf P}^\\vee$\nbe the projections.\nThe Radon transforms\n$R=R{\\bm p}^\\vee_!\n{\\bm p}^*[n-1]$ and \n$R^\\vee=R{\\bm p}_!\n{\\bm p}^{\\vee*}[n-1](n-1)$\ndefine morphisms\n\\begin{equation}\nR\\colon\nK({\\mathbf P},\\Lambda)\n\\to\nK({\\mathbf P}^\\vee,\\Lambda),\n\\quad\nR^\\vee\\colon\nK({\\mathbf P}^\\vee,\\Lambda)\n\\to\nK({\\mathbf P},\\Lambda).\n\\label{dfKR}\n\\end{equation}\nDefine also morphisms $\\chi\\colon \nK({\\mathbf P},\\Lambda)\n\\to {\\mathbf Z}$\nand $\\chi\\colon \nK({\\mathbf P},\\Lambda)\n\\to {\\mathbf Z}$\nby $\\chi{\\cal F}=\n\\chi({\\mathbf P}_{\\bar k},{\\cal F})$\nand\nby $\\chi{\\cal G}=\n\\chi({\\mathbf P}^\\vee_{\\bar k},{\\cal G})$.\n\n\\begin{lm}\\label{lmPn}\nLet 
$n\\geqq 1$ be an integer\nand ${\\mathbf P}={\\mathbf P}^n$.\n\n{\\rm 1.}\nFor $a=0,\\ldots,n$,\nlet ${\\mathbf P}^a\n\\subset {\\mathbf P}\n={\\mathbf P}^n$\ndenote a linear subspace\nof dimension $a$.\nThen, the images\n$cc_{{\\mathbf P}^n}(\n\\Lambda_{{\\mathbf P}^a}[a])$\nfor $a=0,\\ldots,n$\ndo not depend on\nthe choice of linear subspaces\nand \nform a basis of\na free ${\\mathbf Z}$-module\n${\\rm CH}_\\bullet ({\\mathbf P})$.\n\n\n\n{\\rm 2.}\nThe diagram\n\\begin{equation}\n\\begin{CD}\nK({\\mathbf P},\\Lambda)\n@>{\\chi}>>\n{\\mathbf Z}\n\\\\\n@VRVV@VV{(-1)^{n-1}n\\times}V\\\\\nK({\\mathbf P}^\\vee,\\Lambda)\n@>{\\chi}>>\n{\\mathbf Z}\n\\end{CD}\n\\label{eqRchi}\n\\end{equation}\nand that with $R$ replaced\nby $R^\\vee$ and with\n${\\mathbf P}$ and\n${\\mathbf P}^\\vee$ switched\nare commutative.\n\n\n{\\rm 3.}\nAssume $k$ is algebraically closed.\nThe composition $R^\\vee R\\colon\nK({\\mathbf P},\\Lambda)\\to\nK({\\mathbf P},\\Lambda)$\nmaps ${\\cal F}$ to\n${\\cal F}+(n-1)\\chi{\\cal F}\\Lambda$.\nIf $n\\neq 1$\nand if $f\\colon\nK({\\mathbf P},\\Lambda)\n\\to {\\mathbf Z}$\nis a morphism satisfying\n$fR^\\vee R=n^2f$,\nthen we have\n$f=f([\\Lambda_{{\\mathbf P}^0}])\\cdot\n\\chi$.\n\\end{lm}\n\n\\proof{1.\nIt follows from\n$CC\n\\Lambda_{{\\mathbf P}^a}[a]=\nT^*_{{\\mathbf P}^a}{\\mathbf P}^n$.\n\n2.\nFor constructible complexes ${\\cal F}$\non ${\\mathbf P}$,\nwe have\n$\\chi R{\\cal F}\n=\n(-1)^{n-1}\\chi(Q_{\\bar k},{\\bm p}^*{\\cal F})\n=\n(-1)^{n-1}\\chi \nR{\\bm p}_*{\\bm p}^*{\\cal F}\n=\n(-1)^{n-1}\\chi\nR{\\bm p}_*{\\bm p}^*{\\Lambda}\\otimes{\\cal F}$\nby the projection formula.\nHence the assertion follows from\n$R^q{\\bm p}_*{\\bm p}^*{\\Lambda}\n=\\Lambda(-q\/2)$\nfor $0\\leqq q\\leqq 2(n-1)$ even\nand $=0$ for otherwise.\n\n3.\nSince $R^\\vee R{\\cal F}$\nis isomorphic to ${\\cal F}$\nup to geometrically constant sheaves,\nwe have\n$R^\\vee R{\\cal F}-{\\cal F}\n={\\rm rank}^\\circ\n(R^\\vee R{\\cal F}-{\\cal F})\\cdot\n\\Lambda$.\nBy 
taking $\\chi$ and applying 2,\nwe obtain\n$(n^2-1)$\n$\\chi{\\cal F}\n=\n{\\rm rank}^\\circ\n(R^\\vee R{\\cal F}-{\\cal F})\\cdot\n\\chi\\Lambda$.\nSince\n$\\chi\\Lambda=n+1$,\nwe obtain\n${\\rm rank}^\\circ\n(R^\\vee R{\\cal F}-{\\cal F})$\n$=\n(n-1)\\chi{\\cal F}$\nand the first assertion follows.\n\nIf $f$ satisfies the condition,\nsimilarly we obtain\n$(n^2-1)\nf({\\cal F})\n=\n(n-1)\\chi{\\cal F}\\cdot\nf(\\Lambda)$.\nThus, $f$ is a constant multiple of\n$\\chi$.\nSince $\\chi\\Lambda_{{\\mathbf P}^0}=1$,\nthe constant is given by\n$f(\\Lambda_{{\\mathbf P}^0})$.\n\\qed}\n\\smallskip\n\nWe define the Legendre transform\n\\begin{equation}\nL\\colon\n{\\rm CH}_\\bullet ({\\mathbf P})\n\\to\n{\\rm CH}_\\bullet ({\\mathbf P}^\\vee)\n\\label{eqbarL}\n\\end{equation}\nby $L={\\bm p}^\\vee_*{\\bm p}^*$\nand $L^\\vee={\\bm p}_*{\\bm p}^{\\vee*}$\nfor the projections\n${\\bm p}\\colon Q\\to {\\mathbf P}$ and\n${\\bm p}^\\vee\\colon Q\\to {\\mathbf P}^\\vee$.\n\n\\begin{pr}[Beilinson]\\label{prRR}\nLet $n\\geqq 1$ be an integer\nand ${\\mathbf P}={\\mathbf P}^n$.\n\n{\\rm 1.}\nThe diagram\n\\begin{equation}\n\\begin{CD}\nK({\\mathbf P},\\Lambda)\n@>{cc_{\\mathbf P}}>>\n{\\rm CH}_\\bullet ({\\mathbf P})\n\\\\\n@VRVV@VVLV\\\\\nK({\\mathbf P}^\\vee,\\Lambda)\n@>{cc_{{\\mathbf P}^\\vee}}>>\n{\\rm CH}_\\bullet ({\\mathbf P}^\\vee)\n\\end{CD}\n\\label{eqR}\n\\end{equation}\nand that with $R$ and $L$ replaced\nby $R^\\vee$ and $L^\\vee$ and with\n${\\mathbf P}$ and\n${\\mathbf P}^\\vee$ switched\nare commutative.\n\n{\\rm 2.}\nThe diagram\n\\begin{equation}\n\\xymatrix{\nK({\\mathbf P},\\Lambda)\\ar[rrd]_{\\chi}\n\\ar[rr]^{\\!\\!\\!\\!\\!\\!\\!\ncc_{\\mathbf P}}&&\n{\\rm CH}_\\bullet ({\\mathbf P})\n\\ar[d]^{\\deg}\\\\\n&&{\\mathbf Z}}\n\\label{eqchP}\n\\end{equation}\nis commutative.\n\\end{pr}\n\n\\proof{\nWe may assume that $k$ is algebraically closed.\nWe prove the assertions by \ninduction on $n$.\nIf $n=1$, the projections\n${\\bm p}\\colon Q\\to {\\mathbf 
P}$\nand\n${\\bm p}^\\vee\\colon Q\\to {\\mathbf P}^\\vee$\nare isomorphisms\nand the assertion 1 is obvious.\nSince $\\deg cc_{\\mathbf P}{\\cal F}\n=\n(CC{\\cal F},T^*_{\\mathbf P}{\\mathbf P})_\n{T^*{\\mathbf P}}$,\nthe assertion 2 for $n=1$ is nothing but\nthe Grothendieck-Ogg-Shafarevich\nformula for ${\\mathbf P}={\\mathbf P}^1$.\n\nWe show that the assertion 2 for $n-1\\geqq 1$\nimplies the assertion 1 for $n$.\nWe show the commutativity of\nthe diagram (\\ref{eqR})\nby using the direct sum decomposition\n\\begin{equation}\n\\begin{CD}\n{\\rm CH}_\\bullet({\\mathbf P}^\\vee)\n=\n{\\rm CH}_n({\\mathbf P}(T^*{\\mathbf P}^\\vee\n\\oplus {\\mathbf A}^1_{{\\mathbf P}^\\vee}))\n@>>>\n{\\rm CH}_{n-1}({\\mathbf P}(T^*{\\mathbf P}^\\vee))\n\\oplus\n{\\rm CH}_n({\\mathbf P}^\\vee)\\\\\n@.=\n{\\rm CH}_{n-1}(Q)\n\\oplus {\\mathbf Z}\n\\end{CD}\n\\label{eqCHQZ}\n\\end{equation}\n(\\ref{eqQZ}).\nThe compositions with the\nfirst projection $\n{\\rm CH}_\\bullet({\\mathbf P}^\\vee)\n\\to {\\rm CH}_{n-1}(Q)$\nare equal by Corollary \\ref{corCCR}.\nWe show that the compositions with the\nsecond projection ${\\rm pr}_2\\colon\n{\\rm CH}_\\bullet({\\mathbf P}^\\vee)\n\\to {\\mathbf Z}$\ninduced by the projection\n${\\mathbf P}(T^*{\\mathbf P}^\\vee\n\\oplus {\\mathbf A}^1_{{\\mathbf P}^\\vee})\n\\to \n{\\mathbf P}^\\vee$\nare the same.\n\nLet ${\\cal F}$ be a constructible\ncomplex of $\\Lambda$-modules\non ${\\mathbf P}$\nand $C=SS{\\cal F}$ \nbe the singular support of ${\\cal F}$.\nLet $H\\subset {\\mathbf P}$\nbe a hyperplane\nsuch that the immersion\n$h\\colon H\\to {\\mathbf P}$\nis properly $C$-transversal\nand let $i\\colon {\\rm Spec}\\ k\n\\to {\\mathbf P}^\\vee$ \nbe the immersion of the $k$-rational\npoint of ${\\mathbf P}^\\vee$ \ncorresponding to $H$.\n\nBy the hypothesis of induction,\nwe have\n$\\deg cc_Hh^*{\\cal F}=\n\\chi h^*{\\cal F}$.\nBy Proposition \\ref{prprim},\nwe have\n$cc_Hh^*{\\cal F}=\n-h^!cc_{\\mathbf P}{\\cal F}$.\nHence by the commutative 
diagram\n\\begin{equation}\n\\begin{CD}\n{\\rm CH}_\\bullet ({\\mathbf P})\n@>{h^!}>>\n{\\rm CH}_\\bullet (H)\n\\\\\n@VLVV@VV{\\deg}V\\\\\n{\\rm CH}_\\bullet ({\\mathbf P}^\\vee)\n@>{i^!}>>{\\mathbf Z}\n\\end{CD}\n\\label{eqH}\n\\end{equation}\nand by the last assertion in\nLemma \\ref{lmCHP}.2,\nwe obtain\n${\\rm pr}_2L\ncc_{\\mathbf P}{\\cal F}\n=i^!L\ncc_{\\mathbf P}{\\cal F}\n=-\\deg cc_Hh^*{\\cal F}\n=-\\chi h^*{\\cal F}$.\nWe also have\n${\\rm pr}_2\ncc_{{\\mathbf P}^\\vee} R{\\cal F}\n=\n(-1)^n\n{\\rm rank}^\\circ R{\\cal F}\n=\n-\\chi h^*{\\cal F}$\nby Lemma \\ref{lmccd}\nin the notation ${\\rm rank}^\\circ$\ndefined there.\nHence the assertion 1 follows.\n\nWe show that the assertion 1 for $n\\geqq 2$\nimplies the assertion 2 for $n$.\nBy the commutative diagrams\n(\\ref{eqR}),\nthe endomorphism $R^\\vee R$\nof $K({\\mathbf P},\\Lambda)$\npreserves the kernel\n$K({\\mathbf P},\\Lambda)^0$ of \n$cc_{\\mathbf P}\n\\colon K({\\mathbf P},\\Lambda)\n\\to \n{\\rm CH}_\\bullet({\\mathbf P})$.\nWe show that $R^\\vee R$ acts\non $K({\\mathbf P},\\Lambda)^0$ \nas the identity.\nSince $R^\\vee R{\\cal F}$\nis isomorphic to ${\\cal F}$\nup to constant sheaf,\nwe have\n$R^\\vee R{\\cal F}-{\\cal F}\n=\n{\\rm rank}^\\circ(R^\\vee R{\\cal F}-{\\cal F})\n\\cdot \\Lambda$.\nSince \n${\\rm rank}^\\circ=(-1)^n\n{\\rm pr}_2cc_{\\mathbf P}$\nby Lemma \\ref{lmccd},\nwe have\n${\\rm rank}^\\circ {\\cal F}=0$\nfor ${\\cal F}\\in K({\\mathbf P},\\Lambda)^0$.\nFurther by the commutative diagrams\n(\\ref{eqR}),\nwe also have\n${\\rm rank}^\\circ R^\\vee R{\\cal F}=0$\nfor ${\\cal F}\\in K({\\mathbf P},\\Lambda)^0$.\nHence\n$R^\\vee R$ acts\nas the identity on\n$K({\\mathbf P},\\Lambda)^0$.\n\nOn the other hand,\nby Lemma \\ref{lmPn}.3,\nwe have\n$\\chi R^\\vee R=n^2\\chi $.\nSince $n^2>1$,\n$\\chi$ annihilates\n$K({\\mathbf P},\\Lambda)^0=\n{\\rm Ker}\\ cc_{\\mathbf P}$\nand \n$\\chi$ induces a unique morphism\n${\\rm CH}_\\bullet({\\mathbf P})\n\\to {\\mathbf Z}$ by Lemma 
\ref{lmPn}.1.\n\nFor the classes of\n${\cal F}=\Lambda_{{\mathbf P}^a}[a]$,\nwe have\n$\chi {\cal F}=(CC{\cal F},T^*_{\mathbf P}\n{\mathbf P})_{T^*{\mathbf P}}\n=\deg cc_{\mathbf P}{\cal F}$.\nThus, the diagram (\ref{eqchP}) is commutative\nby Lemma \ref{lmPn}.1.\n\qed}\n\n\n\begin{cor}[Beilinson]\label{corLR}\nLet ${\cal F}$\nbe a constructible complex\nof $\Lambda$-modules on ${\mathbf P}$.\nThen, for the Radon transform\n$R{\cal F}$, we have\n\begin{equation}\nCCR{\cal F}=\nLCC{\cal F}.\n\label{eqCCL}\n\end{equation}\n\end{cor}\n\n\proof{\nExcept for the coefficient\nof the $0$-section, it is proved in\nCorollary \ref{corCCR}.\nSince the coefficient\nof the $0$-section\nis given by \n${\rm pr}_2\colon\n{\rm CH}_\bullet({\mathbf P}^\vee)\n\to {\mathbf Z}$,\nit follows from\nProposition \ref{prRR}.1.\n\qed}\n\n\nCorollary \ref{corLR} means that\n\cite[Conjecture 2.2]{notes} holds\nfor $p^\vee\colon Q\to {\mathbf P}^\vee$\nand $p^*{\cal F}$\nfor ${\cal F}$ on ${\mathbf P}$ \nby Lemma \ref{lmh!}.\n\n\nWe state and prove\nthe index formula\nfor the Euler-Poincar\'e characteristic.\n\n\n\begin{thm}\label{thmEP}\nLet $X$ be a projective smooth\nvariety over an algebraically closed\nfield and let ${\cal F}$ be a constructible\ncomplex of $\Lambda$-modules on $X$.\nThen, we have\n\begin{equation}\n\chi(X,{\cal F})\n=\n(CC {\cal F},\nT^{*}_{X}X)_{T^{*}X}.\n\label{eqEP}\n\end{equation}\n\end{thm}\n\n\n\proof[Proof {\rm (Beilinson)}]{\nSince $X$ is assumed projective,\nit follows from Lemma \ref{lmclim}\nand Proposition \ref{prRR}.2.\n\qed}\n\n\n\n\n\subsection{Characteristic cycle\nand ramification}\label{ssramc}\n\nWe briefly recall the definition \nof the characteristic cycle\n\cite[Definition 3.5]{nonlog}\nin the strongly non-degenerate case.\nWe use the notation in Section \ref{ssram}.\n\nLet $X$ be a smooth scheme\nof dimension $n$ over a perfect \nfield $k$ and\n$U=X\raisebox{2.33pt}{~\rule{6.4pt}{1.3pt}~} D$ be\nthe complement of a divisor $D$\nwith normal 
crossings.\nLet $j\colon U\to X$ be\nthe open immersion and\n${\cal G}$ be a locally constant\nconstructible sheaf of free $\Lambda$-modules\non $U$.\nAssume that the ramification of\n${\cal G}$ along $D$ is strongly non-degenerate.\n\nAssume first that $R=D$.\nThen, the locally constant sheaf\n${\cal G}$ on $U$ is tamely ramified along $D$\nand we define a linear combination by\n\begin{equation}\nC(j_!{\cal G})\n=(-1)^n{\rm rank}\ {\cal G}\cdot\n\sum_I[T^*_{D_I}X]\n\label{eqtame}\n\end{equation}\nwhere $T^*_{D_I}X$\ndenotes the conormal bundle\nof the intersection $D_I=\bigcap_ID_i$\nfor sets of indices\n$I\subset \{1,\ldots,m\}$.\n\nAssume $R=\sum_ir_iD_i>\nD=\sum_iD_i$. \nFor each irreducible component $D_i$ of $D$,\nwe have a decomposition by \ncharacters \n$V=\bigoplus_{\chi\colon\n{\rm Gr}^{r_i}G_{K_i}\to \n{\mathbf F}_p}\n\chi^{\oplus m(\chi)}$.\nFurther, the characteristic form\nof each character $\chi$\nappearing in the decomposition\ndefines a sub line bundle $L_\chi$\nof the pull-back $D_\chi\times_XT^*X$\nof the cotangent bundle \nto a finite covering $\pi_\chi\n\colon D_\chi\to D_i$\nby the non-degenerate assumption.\nThen we define\n\begin{equation}\nC(j_!{\cal G})\n=(-1)^n\Bigl(\n{\rm rank}\ {\cal G}\cdot\n[T^*_XX]\n+\n\sum_i\n\sum_{\chi}\n\dfrac{r_i\cdot m(\chi)}{[D_\chi\colon D_i]}\n\pi_{\chi*}[L_\chi]\Bigr).\n\label{eqwild}\n\end{equation}\nIn the general strongly non-degenerate case,\nwe define $C(j_!{\cal G})$ by the additivity\nand \'etale descent.\n\n\begin{thm}\label{thmram}\nLet $X$ be a smooth scheme\nof dimension $n$ over a perfect \nfield $k$ and\nlet $j\colon U\to X$ be the\nopen immersion of\nthe complement\n$U=X\raisebox{2.33pt}{~\rule{6.4pt}{1.3pt}~} D$ of\na divisor $D$ with simple normal crossings.\nLet ${\cal G}$\nbe a locally constant constructible\nsheaf of free $\Lambda$-modules\non $U$ such that\nthe ramification along $D$\nis {\em strongly non-degenerate}.\nThen we 
have\n\begin{equation}\nCC j_!{\cal G}\n=\nC(j_!{\cal G}).\n\label{eqCCj}\n\end{equation}\nIn other words, \n$C(j_!{\cal G})$ defined by\n{\rm (\ref{eqtame})},\n{\rm (\ref{eqwild})}, the additivity\nand by \'etale descent\nusing\nramification theory\nsatisfies the Milnor formula\n{\rm (\ref{eqMil})}.\n\end{thm}\n\nTheorem \ref{thmram} is\nproved for dimension $\leqq 1$\nin Lemma \ref{lmele}.1 and 2.\nRecall that Theorem \ref{thmram} is\nproved in dimension $2$\nin \cite[Proposition 3.20]{surface}\nusing a global argument, as in \cite{bp}.\nThe tamely ramified case\nof Theorem \ref{thmram} has been proved\nin \cite{Yang} by a different method.\nTheorem \ref{thmram} gives\nan affirmative answer to\n\cite[Conjecture 3.16]{nonlog}.\n\n\proof{\nIt suffices to show the\nequality of the coefficients\nfor each irreducible component\nof $C=SS(j_!{\cal G})$ by Proposition \ref{prnd}.\nBy the additivity of characteristic cycles\nand the compatibility with \'etale pull-back,\nit suffices to show the\ntamely ramified case\nand the totally wildly ramified\ncase separately.\n\nFirst, we prove the tamely ramified case.\nIt suffices to determine the\ncoefficients of $[T^*_{D_I}X]$\nby induction on the number $d$ of\nelements of $I$.\nBy Proposition \ref{prsm*},\nwe may assume $X={\mathbf A}^n$\nand $D$ is the complement of\n${\mathbf G}_m^n$.\nIf $n=0$ and $X$ consists of a\nsingle point, we have\n$CC {\cal G}=\n{\rm rank}\ {\cal G}\cdot [T^*_XX]$.\nIf $n\geqq 1$,\nit follows from Lemma \ref{lmtame}.\n\nWe prove the totally wildly ramified case.\nIf $\dim X=0$, it is proved above.\nIt follows from (\ref{eqdim1}) if\n$\dim X\leqq 1$.\nIt suffices to compare\nthe coefficients in\n$CC j_!{\cal G}$\nand $C(j_!{\cal G})$\nassuming $\dim X\geqq 2$.\nSince the assertion is\n\'etale local, we may \nassume that $C(j_!{\cal G})$\nhas a unique irreducible\ncomponent different from the $0$-section.\nBy Theorem \ref{thmi*}\nand 
\cite[Proposition 3.8]{nonlog},\nfor every properly $C$-transversal immersion $i\colon W\to X$\nof a smooth curve,\nwe have \n$i^!CC j_!{\cal G}=\nCC i^* j_!{\cal G}$\nand $i^!C(j_!{\cal G})\n=C(i^*j_!{\cal G})$\nrespectively.\nHence it is reduced to the case\n$\dim X=1$.\n\qed}\n\n\n\section{${\cal F}$-transversality and singular support}\label{sFt}\n\nWe introduce and study a notion of\n${\cal F}$-transversality\nfor a morphism $h\colon W\to X$\nwith respect to a constructible\ncomplex ${\cal F}$ on $X$\nin Section \ref{ssFt}\nusing a canonical morphism (\ref{eqprpg})\ndefined and studied in Section \ref{ssFcan}.\nWe study the relation between\nthe ${\cal F}$-transversality\nand the local acyclicity in Section \ref{ssFla}\nand deduce a relation\nwith the singular support in Section \ref{ssFs}.\n\n\subsection{A canonical morphism}\label{ssFcan}\n\n\nLet $k$ be a field\nand $\Lambda$ be a finite\nlocal ring such that\nthe residue characteristic\n$\ell$ is invertible in $k$.\nFor a separated morphism $h\colon W\to X$\nof schemes of finite type\nover $k$,\nthe functor\n$Rh^!\colon D^b_c(X,\Lambda)\n\to D^b_c(W,\Lambda)$\nis defined as the adjoint\nof\n$Rh_!\colon D^b_c(W,\Lambda)\n\to D^b_c(X,\Lambda)$.\nOne should be able to define\nthe functors $Rh_!,Rh^!$\nwithout assuming separatedness\nusing cohomological descent\nbut we will not go further in this\ndirection.\n\n\n\nLet ${\cal F}$ and ${\cal G}$ \nbe constructible complexes\nof $\Lambda$-modules\non $X$ and on $W$\nrespectively\nand assume that\n${\cal G}$ is of finite tor-dimension.\nThen, we have an isomorphism\n\begin{equation}\n{\cal F}\otimes^L_\Lambda\nRh_!{\cal G} \n\to Rh_!(h^*{\cal F}\otimes^L_\Lambda{\cal G} )\n\label{eqprj}\n\end{equation}\nof the projection formula\n\cite[(4.9.1)]{Rapport}.\n\if{This is defined as the adjoint\n$h^*{\cal F}\otimes^L_\Lambda\nh^*Rh_!{\cal G} \n\to h^*{\cal F}\otimes^L_\Lambda{\cal G}$\nof the morphism 
induced\nby the adjunction\n$h^*Rh_!{\cal G} \n\to {\cal G}$\nif $h$ is proper.\nIt is defined as the inverse of\nthe isomorphism\n$h^*{\cal F}\otimes^L_\Lambda\nh^*Rh_!{\cal G} \n\gets h^*{\cal F}\otimes^L_\Lambda{\cal G}$\nif $h$ is an open immersion.}\fi\n\n\begin{df}\label{dfAB}\nLet $h\colon W\to X$\nbe a morphism of schemes\nof finite type.\nLet ${\cal F}$ and ${\cal E}$ be\nconstructible complexes\nof $\Lambda$-modules on $X$\nand let ${\cal G}$ be\na constructible complex\nof $\Lambda$-modules \nof finite tor-dimension on $W$.\nWe say that morphisms\n\begin{equation}\n{\cal F}\n\otimes^L\nRh_!{\cal G}\n\to {\cal E},\quad\nh^*{\cal F}\n\otimes^L\n{\cal G}\n\to Rh^!{\cal E}\n\label{eqAB}\n\end{equation}\ncorrespond to each other\nif the first one\nis the composition \nof the isomorphism {\rm (\ref{eqprj})}\nwith the adjoint\n$Rh_!(h^*{\cal F}\n\otimes^L\n{\cal G})\n\to\n{\cal E}$\nof the second one.\n\end{df}\n\nSince {\rm (\ref{eqprj})}\nis an isomorphism,\nthe correspondence\n{\rm (\ref{eqAB})}\nis a one-to-one correspondence.\nIf $h\colon W\to X$\nis a smooth separated morphism\nof relative dimension $d$,\nthe morphism\n${\cal F}\n\otimes^L\nRh_!\Lambda(d)[2d]\n\to {\cal F}$\ninduced by\nthe trace mapping\n$Rh_!\Lambda(d)[2d]\to \Lambda$\ncorresponds to the canonical isomorphism\n\begin{equation}\nh^*{\cal F}\n\otimes^L\n\Lambda(d)[2d]\n\to Rh^!{\cal F}\n\label{eqPD}\n\end{equation}\nof Poincar\'e duality\n\cite[Th\'eor\`eme 3.2.5]{DP}.\n\n\nRecall that a morphism\n${\cal F}\otimes^L_\Lambda\nRh_!{\cal G} \n\to {\cal E}$\ncorresponds to a morphism\n${\cal F}\to\nR{\cal H}om(\nRh_!{\cal G},{\cal E})$\nbijectively.\nSimilarly\na morphism\n$h^*{\cal F}\otimes^L_\Lambda\n{\cal G} \n\to Rh^!{\cal E}$\ncorresponds to a morphism\n$h^*{\cal F}\to\nR{\cal H}om({\cal G},Rh^!{\cal E})$\nbijectively.\n\n\begin{lm}\label{lmAB}\nLet $h\colon W\to X$\nbe a morphism of 
finite\ntype of schemes.\nLet ${\\cal F}$ and ${\\cal E}$ be\nconstructible complexes\nof $\\Lambda$-modules on $X$\nand let ${\\cal G}$ be\na constructible complex\nof $\\Lambda$-modules \nof finite tor-dimension on $W$.\nLet\n\\begin{equation}\n{\\cal F}\n\\otimes^L\nRh_!{\\cal G}\n\\to {\\cal E},\\quad\nh^*{\\cal F}\n\\otimes^L\n{\\cal G}\n\\to Rh^!{\\cal E}\n\\label{eqAB0}\n\\end{equation}\nbe morphisms corresponding\nto each other.\nThen,\nthe morphism\n\\begin{equation}\nh^*{\\cal F}\n\\to R{\\cal H}om({\\cal G},Rh^!{\\cal E})\n\\label{eqAB1}\n\\end{equation}\ninduced by the second one\nequals the adjoint of\nthe composition\n\\begin{equation}\n{\\cal F}\n\\to R{\\cal H}om(Rh_!{\\cal G},{\\cal E})\n\\to Rh_*\nR{\\cal H}om({\\cal G},Rh^!{\\cal E})\n\\label{eqAB2}\n\\end{equation}\nof that induced by the first\none and the inverse of the\ncanonical isomorphism of adjunction.\n\\end{lm}\n\n\n\n\\proof{\nWe consider the diagram\n\\begin{equation}\n\\xymatrix{\n{\\cal F}\n\\ar[d]\\ar[rd]&\n\\\\\nRh_*\nR{\\cal H}om\n({\\cal G},\nh^*{\\cal F}\\otimes^L{\\cal G})\n\\ar[d]\\ar[r]\n&\nR{\\cal H}om\n(Rh_!{\\cal G},\nRh_!(h^*{\\cal F}\\otimes^L{\\cal G}))\n\\ar[d]\\\\\nRh_*\nR{\\cal H}om\n({\\cal G},\nRh^!{\\cal E})\n\\ar[rd]\\ar[r]\n&\nR{\\cal H}om\n(Rh_!{\\cal G},\nRh_!Rh^!{\\cal E})\n\\ar[d]\\\\\n&\nR{\\cal H}om\n(Rh_!{\\cal G},\n{\\cal E}).}\n\\label{eqdgAB}\n\\end{equation}\nThe top vertical arrow is\nthe adjoint of $h^*{\\cal F}\n\\to \nR{\\cal H}om\n({\\cal G},\nh^*{\\cal F}\\otimes^L{\\cal G})$\ninduced by the identity of\n$h^*{\\cal F}\\otimes^L{\\cal G}$\nand the horizontal arrows\nare defined by the functoriality\nof $Rh_!$.\nThe upper slant arrow is induced\nby the isomorphism of projection formula \n(\\ref{eqprj}) and\nthe upper triangle is commutative.\nThe middle vertical arrows are\ninduced by the second morphism\nof (\\ref{eqAB0})\nand the middle square is commutative.\nThe lower slant arrow\nis the isomorphism of adjunction\nand the lower vertical 
arrow\nis induced by the adjunction map.\nThus, the diagram (\\ref{eqdgAB})\nis commutative.\n\nThe composition of \nthe left column in\nthe diagram (\\ref{eqdgAB})\nis the adjoint of \n(\\ref{eqAB1}).\nSince the composition \nof the upper slant arrow\nand the right column \nis induced by\nthe first morphism of (\\ref{eqAB0}),\nthe assertion follows.\n\\qed}\n\\medskip\n\nLet ${\\cal F}$ and ${\\cal G}$ \nbe constructible complexes\nof $\\Lambda$-modules\non $X$\nand assume that\n${\\cal G}$ is of finite tor-dimension.\nWe define a canonical morphism\n\\begin{equation}\nc_{h,{\\cal F},{\\cal G}}\\colon\nh^*{\\cal F}\\otimes^L_\\Lambda\nRh^!{\\cal G} \n\\to Rh^!({\\cal F}\\otimes^L_\\Lambda{\\cal G} )\n\\label{eqFG}\n\\end{equation}\nto be that corresponding\nto the morphism\n${\\cal F}\n\\otimes^L_\\Lambda\nRh_!Rh^!{\\cal G} \\to\n{\\cal F}\n\\otimes^L_\\Lambda\n{\\cal G}$ induced by the adjunction\n$Rh_!Rh^!{\\cal G} \\to\n{\\cal G}$.\n\n\n\\begin{lm}\\label{lmhRh}\nLet $h\\colon W\\to X$\nbe a separated\nmorphism of schemes.\nLet ${\\cal F}$ and ${\\cal G}$ \nbe constructible complexes\nof $\\Lambda$-modules\non $X$\nand assume that\n${\\cal G}$ is of finite tor-dimension.\n\n{\\rm 1.}\nLet $g\\colon V\\to W$\nbe a separated\nmorphism of schemes\nand $f\\colon V\\to X$\nbe the composition.\nThen the morphisms {\\rm (\\ref{eqFG})}\nfor $h,g$ and $f$ form\na commutative diagram\n\\begin{equation}\n\\xymatrix{\nf^*{\\cal F}\\otimes^L Rf^!{\\cal G}\n\\ar[d]\\ar[rrrr]^{c_{f,{\\cal F},{\\cal G}}}&&&&\nRf^!({\\cal F}\\otimes^L{\\cal G})\\ar[d]\\\\\ng^*h^*{\\cal F}\\otimes^L Rg^!Rh^!{\\cal G}\n\\ar[rr]^{c_{g,h^*{\\cal F},Rh^!{\\cal G}}}&&\nRg^!(h^*{\\cal F}\\otimes^L Rh^!{\\cal G})\n\\ar[rr]^{Rg^!(c_{h,{\\cal F},{\\cal G}})}&&\nRg^!Rh^!({\\cal F}\\otimes^L{\\cal G}).}\n\\label{eqhgf}\n\\end{equation}\nThe vertical arrows are the\ncanonical isomorphisms.\n\n{\\rm 2.}\nLet ${\\cal E}$ \nbe another constructible complex\nof $\\Lambda$-modules\non $X$\nand assume that 
${\\cal F}$\nis of finite tor-dimension.\nThen the morphisms {\\rm (\\ref{eqFG})}\nform a commutative diagram\n\\begin{equation}\n\\xymatrix{\nh^*{\\cal E}\\otimes^L h^*\n{\\cal F}\\otimes^L Rh^!{\\cal G}\n\\ar[d]\n\\ar[rr]\n^{{\\rm id}\\otimes c_{h,{\\cal F},{\\cal G}}}\n&&\nh^*{\\cal E}\\otimes^L Rh^!\n({\\cal F}\\otimes^L {\\cal G})\n\\ar[d]^{c_{h,{\\cal E},{\\cal F}\\otimes{\\cal G}}}\n\\\\\nh^*({\\cal E}\\otimes^L\n{\\cal F})\\otimes^L Rh^!{\\cal G}\n\\ar[rr]^{c_{h,{\\cal E}\\otimes{\\cal F},{\\cal G}}}&&\nRh^!({\\cal E}\\otimes^L\n{\\cal F}\\otimes^L{\\cal G}).}\n\\label{eqEFG}\n\\end{equation}\n\n\n{\\rm 3.}\nAssume that $h$ is an immersion.\nThen the morphism {\\rm (\\ref{eqFG})}\nis induced by the restriction of\n\\begin{equation}\n{\\cal F}\\otimes^L Rh_*Rh^!{\\cal G}\n=\n{\\cal F}\\otimes^L \nR{\\cal H}om (h_!\\Lambda,\n{\\cal G})\n\\to \nRh_*Rh^!(\n{\\cal F}\\otimes^L {\\cal G})\n= \nR{\\cal H}om (h_!\\Lambda,\n{\\cal F}\\otimes^L{\\cal G}).\n\\label{eqhim}\n\\end{equation}\n\\end{lm}\n\n\\proof{\n1.\nIt follows from the commutative \ndiagram \n\\begin{equation}\n\\xymatrix{\nRf_!(f^*{\\cal F}\\otimes^L Rf^!{\\cal G})\n\\ar[d]\\ar[rr]&&\n{\\cal F}\\otimes^L\nRf_!Rf^!{\\cal G}\\ar[d]\\\\\nRh_!Rg_!(g^*h^*{\\cal F}\\otimes^L Rg^!Rh^!{\\cal G})\n\\ar[r]&\nRh_!(h^*{\\cal F}\\otimes^L Rg_!Rg^!Rh^!{\\cal G})\n\\ar[r]&\n{\\cal F}\\otimes^L\nRh_!Rg_!Rg^!Rh^!{\\cal G}.}\n\\label{eqhgpf}\n\\end{equation}\nfor the isomorphisms\nof projection formula.\n\n2.\nWe consider the diagram\n$$\\begin{CD}\nRh_!(h^*{\\cal F}\n\\otimes^LRh^!{\\cal G})\n@>>>\nRh_!Rh^!({\\cal F}\n\\otimes^L{\\cal G})\\\\\n@AAA@VVV\\\\\n{\\cal F}\n\\otimes^LRh_!Rh^!{\\cal G}\n@>>>\n{\\cal F}\n\\otimes^L{\\cal G}.\n\\end{CD}$$\nThe right vertical arrow\nis the adjunction morphism\nand the bottom horizontal arrow\nis induced by the adjunction morphism.\nThe left vertical arrow is\nthe isomorphism (\\ref{eqprj})\nof the projection formula \nand the top horizontal arrow is\ninduced by 
(\ref{eqFG}).\nSince (\ref{eqFG}) is the adjoint\nof the diagonal composition and is defined\nby the commutativity of\nthe lower left triangle,\nthe diagram is commutative.\nTensoring with ${\cal E}$\nand applying the projection formula,\nwe obtain the adjoint of\n(\ref{eqEFG}).\n\n\n\n\n\n3.\nWe may assume that $h$ is a closed immersion.\nThen, the composition of the\nmorphism\n(\ref{eqhim}) with\nthe adjunction\n$Rh_*Rh^!({\cal F}\otimes^L{\cal G})\n\to {\cal F}\otimes^L{\cal G}$\nis the same as\n${\cal F}\otimes^LRh_*Rh^!{\cal G}\n\to {\cal F}\otimes^L{\cal G}$\ndefining (\ref{eqFG}) and the assertion follows.\n\qed}\n\medskip\n\nAssume ${\cal G}=\Lambda$\nand we consider\nthe canonical morphism\n\begin{equation}\nc_{h,{\cal F}}\n=c_{h,{\cal F},\Lambda}\n\colon \nh^*{\cal F}\otimes^L_\Lambda Rh^!\Lambda\n\to Rh^!{\cal F}.\n\label{eqprpg}\n\end{equation}\nFor another morphism\n$g\colon V\to W$ and\nthe composition $f$,\nthe canonical morphisms\n(\ref{eqFG})\nform a commutative diagram\n\begin{equation}\n\xymatrix{\nf^*{\cal F}\n\otimes^L \nRf^!\Lambda\n\ar[r]^{c_{f,{\cal F}}}\ar[d]&\nRf^!{\cal F}\n\ar[r]&\nRg^!Rh^!{\cal F}\n\\\ng^*h^*{\cal F}\n\otimes^L \nRg^!Rh^!\Lambda\n\ar[rr]\n^{c_{g,h^*{\cal F},Rh^!\Lambda}}\n&&\nRg^!(h^*{\cal F}\n\otimes^L \nRh^!\Lambda)\ar[u]\n_{Rg^!(c_{h,{\cal F}})}\n\\\ng^*h^*{\cal F}\n\otimes^L \nRg^!\Lambda\n\otimes^L \ng^*Rh^!\Lambda\n\ar[rr]^{c_{g,h^*{\cal F}}\n\otimes{\rm id}}\ar[u]\n^{{\rm id}\otimes\nc_{g,Rh^!\Lambda}}&&\nRg^!h^*{\cal F}\n\otimes^L \ng^*Rh^!\Lambda\ar[u]\n_{c_{g,h^*{\cal F},Rh^!\Lambda}}\n}\n\label{eqhRh}\n\end{equation}\nby Lemma \ref{lmhRh}.1 and 2.\n\n\nWe give another description\nof the morphism\n$c_{h,{\cal F}}\colon\nh^*{\cal F}\otimes^LRh^!\Lambda\n\to Rh^!{\cal F}$\nassuming further that ${\cal F}$\nis of finite tor-dimension\nand that $X$ is separated.\nRecall that the dual $D_X{\cal F}$\nis defined as $R{\cal 
H}om_\\Lambda\n({\\cal F},{\\cal K}_X)$ where\n${\\cal K}_X=Ra^!\\Lambda$ \nfor the structure morphism \n$a\\colon X\\to {\\rm Spec}\\ k$.\nFor a separated morphism $h\\colon W\\to X$\nover $k$,\nwe have a canonical isomorphism\n$Rh^!{\\cal K}_X=\nRh^!Ra^!\\Lambda\n\\to R(ah)^!\\Lambda={\\cal K}_W$.\nThe isomorphism of adjunction\n$Rh_*D_W\\to D_XRh_!$\ninduces an isomorphism\n$D_Wh^*D_X\\to Rh^!$.\n\n\\begin{lm}\\label{lmAB2}\nAssume that ${\\cal F}$\nis of finite tor-dimension and\nwe consider the morphisms\n\\begin{equation}\nh^*D_X{\\cal F}\n\\otimes^L\nh^*{\\cal F}\n\\otimes^L\nRh^!\\Lambda\n\\to\nh^*{\\cal K}_X\n\\otimes^L\nRh^!\\Lambda\n\\overset{c_{h,{\\cal K}_X}}\\longrightarrow\nRh^!{\\cal K}_X\\to {\\cal K}_W\n\\label{eqFGDD}\n\\end{equation}\nwhere the first arrow\nis induced by the canonical morphism\n$D_X{\\cal F}\n\\otimes^L{\\cal F}\\to {\\cal K}_X$.\nThen, the composition\n\\begin{equation}\nh^*{\\cal F}\n\\otimes^L\nRh^!\\Lambda\n\\to\nD_Wh^*D_X{\\cal F}\n\\to\nRh^!{\\cal F}\n\\label{eqFGD}\n\\end{equation}\nof the morphism induced\nby {\\rm (\\ref{eqFGDD})}\nwith the canonical isomorphism\nequals\nthe morphism \n$c_{h,{\\cal F}}\\colon\nh^*{\\cal F}\n\\otimes^L\nRh^!\\Lambda\n\\to Rh^!{\\cal F}$.\n\\end{lm}\n\n\\proof{\nLet $h^*D_X{\\cal F}\n\\to D_W(h^*{\\cal F}\n\\otimes^LRh^!\\Lambda)$\nbe the morphism induced by\n(\\ref{eqFGDD})\nand let\n\\begin{equation}\nD_X{\\cal F}\n\\to \nRh_*D_W(h^*{\\cal F}\n\\otimes^LRh^!\\Lambda)\n\\overset \\simeq \\to D_XRh_!(h^*{\\cal F}\n\\otimes^LRh^!\\Lambda)\n\\label{eq717}\n\\end{equation}\nbe its adjoint.\nThen, the morphism (\\ref{eqFGD})\nis the adjoint of\nthe dual $Rh_!(h^*{\\cal F}\n\\otimes^LRh^!\\Lambda)\n\\to {\\cal F}$ of (\\ref{eq717})\nsince the isomorphism\n$D_Wh^*D_X\\to Rh^!$\nis induced by\nthe isomorphism $Rh_*D_W\\to D_XRh_!$.\nThe morphism\n$D_X{\\cal F}\n\\otimes^LRh_!(h^*{\\cal F}\n\\otimes^LRh^!\\Lambda)\n\\to {\\cal K}_X$\ncorresponding to (\\ref{eq717})\ncorresponds\nto 
(\\ref{eqFGDD})\nby Lemma \\ref{lmAB}.\nThus, it is induced by\nthe isomorphism\nof the projection formula,\nthe canonical morphism\n$D_X{\\cal F}\n\\otimes^L{\\cal F}\\to {\\cal K}_X$\nand \nthe adjunction morphism\n$Rh_!h^!\\Lambda\\to \\Lambda$.\nHence the morphism\n(\\ref{eqFGD}) equals\n$c_{h,{\\cal F}}$.\n\\qed}\n\n\n\n\n\n\\subsection{${\\cal F}$-transversality}\\label{ssFt}\n\n\n\\begin{df}\\label{dfprpg}\nLet $h\\colon W\\to X$ \nbe a morphism of\nschemes of finite type over $k$\nand let ${\\cal F}$ be a constructible complex\nof $\\Lambda$-modules on $X$.\n\nIn the case $h$ is {\\em separated},\nwe say that $h$\nis {\\em ${\\cal F}$-transversal}\nif the canonical morphism \n\\begin{equation*}\nc_{h,{\\cal F}}\\colon\nh^*{\\cal F}\\otimes^L_\\Lambda Rh^!\\Lambda\n\\to Rh^!{\\cal F}\n\\leqno{\\rm(\\ref{eqprpg})}\n\\end{equation*}\nis an isomorphism.\nFor general $h$,\nwe say that $h$\nis {\\em ${\\cal F}$-transversal}\nif there exists an open covering\n$W=\\bigcup_iW_i$ consisting\nof open subschemes\nseparated over $X$\nsuch that\nthe restrictions $h|_{W_i}$\nare ${\\cal F}$-transversal.\n\\end{df}\n\nSince the condition is local, a morphism\n$h\\colon W\\to X$ is ${\\cal F}$-transversal\nif and only if\nthe restriction of $h$ to every open subset\nof $W$ separated over $X$\nis ${\\cal F}$-transversal.\n\n\\begin{lm}\\label{lmprpg}\nLet ${\\cal F}$ be a constructible complex\nof $\\Lambda$-modules on \na scheme $X$ \nof finite type over $k$\nand let $h\\colon W\\to X$\nbe a morphism of schemes\nof finite type over $k$.\n\n{\\rm 1.}\nIf $h$ is smooth,\nthen $h\\colon W\\to X$\nis ${\\cal F}$-transversal.\n\n{\\rm 2.}\nIf ${\\cal F}$ is locally constant,\nthen $h\\colon W\\to X$ is\n${\\cal F}$-transversal.\n\n{\\rm 3.}\nAssume that\n$h\\colon W\\to X$\nis ${\\cal F}$-transversal\nand that \n$Rh^!\\Lambda$ is isomorphic\nto $\\Lambda(c)[2c]$ for a\nlocally constant function $c$ on $W$.\nThen for a morphism\n$g\\colon V\\to W$\nof schemes\nof finite type over 
$k$,\nthe following conditions are equivalent:\n\n{\rm (1)}\n$g\colon V\to W$ \nis $h^*{\cal F}$-transversal.\n\n{\rm (2)}\nThe composition\n$f\colon V\to X$ \nis ${\cal F}$-transversal.\n\n\n{\rm 4.}\nLet $\Lambda_0$\nbe the residue field of $\Lambda$.\nAssume that ${\cal F}$\nis of finite tor-dimension\nand set ${\cal F}_0=\n{\cal F}\otimes^L_\Lambda\Lambda_0$.\nThen,\n$h\colon W\to X$ is \n${\cal F}$-transversal\nif and only if\nit is ${\cal F}_0$-transversal.\n\n{\rm 5.}\nAssume that ${\cal F}$ is a perverse sheaf.\nAssume further that $h\colon\nW\to X$ is locally of\ncomplete intersection\nof relative virtual dimension\n$c$\nand that \n$Rh^!\Lambda$ is isomorphic\nto $\Lambda(c)[2c]$.\nIf $h\colon W\to X$ is\n${\cal F}$-transversal,\nthen $h^*{\cal F}[c]$\nand $Rh^!{\cal F}[-c]$\nare perverse sheaves.\n\n{\rm 6.}\nAssume that ${\cal F}$\nis of finite tor-dimension.\nThen the following conditions are\nequivalent:\n\n{\rm (1)}\n$h$ is ${\cal F}$-transversal.\n\n{\rm (2)}\nThe morphism\n$h^*{\cal F}\otimes^L\nRh^!\Lambda\n\to D_Wh^*D_X{\cal F}$\n{\rm (\ref{eqFGD})}\ninduced by the morphism\n$h^*D_X{\cal F}\otimes^L\nh^*{\cal F}\otimes^L\nRh^!\Lambda\n\to {\cal K}_W$\n{\rm (\ref{eqFGDD})}\nis an isomorphism.\n\end{lm}\n\nWe show a converse of 2.\\\nin Corollary \ref{corfh}.\n\n\proof{\nBy the remark after Definition \ref{dfprpg},\nwe may assume that morphisms\nare separated.\nWe will not repeat\nthis remark in the sequel.\n\n1. 
\nWe may assume that\n$h$ is smooth of relative dimension $d$.\nWe consider the canonical\nisomorphism\n$\\Lambda(d)[2d]\\to Rh^!\\Lambda$\ndefined as the adjoint of\nthe trace mapping\n$Rh_!\\Lambda(d)[2d]\\to \\Lambda$.\nThen, by the description\nof the isomorphism\n(\\ref{eqPD}) loc.\\ cit.,\nthe diagram\n$$\\begin{CD}\nh^*{\\cal F}\\otimes^L\n\\Lambda(d)[2d]@>{\\rm (\\ref{eqPD})}>>\nRh^!{\\cal F}\\\\\n@VVV @|\\\\\nh^*{\\cal F}\\otimes^L\nRh^!\\Lambda\n@>{c_{h,{\\cal F}}}>>\nRh^!{\\cal F}\n\\end{CD}$$\nis commutative\nand\nthe morphism $c_{h,{\\cal F}}$\nis an isomorphism.\n\n2.\nSince the assertion is \\'etale local on $X$,\nit is reduced to the case where ${\\cal F}$\nis constant by devissage.\n\n3.\nWe consider the \ncommutative diagram\n$$\\xymatrix{\nf^*{\\cal F}\n\\otimes^L \nRf^!\\Lambda\n\\ar[r]^{c_{f,{\\cal F}}}\\ar[d]&\nRf^!{\\cal F}\n\\ar[r]&\nRg^!Rh^!{\\cal F}\n\\\\\ng^*h^*{\\cal F}\n\\otimes^L \nRg^!Rh^!\\Lambda\n&&\nRg^!(h^*{\\cal F}\n\\otimes^L \nRh^!\\Lambda)\\ar[u]\n_{Rg^!(c_{h,{\\cal F}})}\n\\\\\ng^*h^*{\\cal F}\n\\otimes^L \nRg^!\\Lambda\n\\otimes^L \ng^*Rh^!\\Lambda\n\\ar[rr]^{c_{g,h^*{\\cal F}}\n\\otimes{\\rm id}}\\ar[u]\n^{{\\rm id}\\otimes\nc_{g,Rh^!\\Lambda}}&&\nRg^!h^*{\\cal F}\n\\otimes^L \ng^*Rh^!\\Lambda\\ar[u]\n_{c_{g,h^*{\\cal F},Rh^!\\Lambda}}\n.}\n\\leqno{\\rm (\\ref{eqhRh})}$$\nSince \n$Rh^!\\Lambda$ is assumed to be isomorphic\nto $\\Lambda(c)[2c]$ for a\nlocally constant function $c$ on $W$,\nthe lower vertical arrows are isomorphisms.\nBy the assumption that\n$h\\colon W\\to X$\nis ${\\cal F}$-transversal,\nthe upper right vertical arrow\n$Rg^!(c_{h,{\\cal F}})$ is\nan isomorphism.\nThe condition (1) means that\nthe top horizontal arrow\n$c_{f,{\\cal F}}$ is\nan isomorphism\nand the condition (2) is equivalent\nto that the bottom horizontal arrow\n$c_{g,h^*{\\cal F}}\n\\otimes{\\rm id}$ is\nan isomorphism by \nthe assumption\nthat \n$Rh^!\\Lambda$ is isomorphic\nto $\\Lambda(c)[2c]$.\nHence the equivalence 
follows.\n\n4.\nAs in Lemma \ref{lmctf}.1,\nthe canonical morphism\n$Rh^!{\cal F}\otimes_\Lambda^L\n\Lambda_0\n\to\nRh^!{\cal F}_0$\nis an isomorphism.\nHence, as in the proof\nof Lemma \ref{lmctf}.2,\nthe morphism $c_{h,{\cal F}}$\nis an isomorphism\nif and only if \n$c_{h,{\cal F}_0}$\nis an isomorphism.\n\n5. \nIf $h$ is smooth, $h^*(c)[c]=Rh^![-c]$\nis $t$-exact by \cite[4.2.4]{BBD}\nand the assertion follows in this case.\nHence, we may assume\nthat $h$ is a regular immersion\nof codimension $-c$.\nThen, by \cite[Corollary 4.1.10]{BBD}\nand induction on $-c$,\nwe have \n$h^*{\cal F}(c)[c]\in{^p}\!D_c^{[0,-c]}(W,\Lambda)$\nand\n$Rh^!{\cal F}[-c]\in{^p}\!D_c^{[c,0]}(W,\Lambda)$.\nBy the assumption,\nwe have an isomorphism\n$h^*{\cal F}(c)[c]\to Rh^!{\cal F}[-c]$\nand the assertion follows.\n\n6.\nIt follows from Lemma \ref{lmAB2}.\n\qed}\n\medskip\n\n\n\n\begin{pr}\label{prRhom}\nLet ${\cal F}$ be a constructible complex\nof $\Lambda$-modules of\nfinite tor-dimension on \na scheme $X$ of finite type over $k$.\n\n{\rm 1.}\nLet $h\colon W\to X$\nbe a morphism of schemes \nof finite type over $k$\nand assume that\n$Rh^!\Lambda$ is isomorphic\nto $\Lambda(c)[2c]$ for a\nlocally constant function $c$ on $W$.\nThen, the following conditions are equivalent:\n\n{\rm (1)}\nThe morphism $h\colon W\to X$ \nis ${\cal F}$-transversal.\n\n\n{\rm (2)}\nThe morphism $h\colon W\to X$ \nis $D_X{\cal F}$-transversal.\n\n\noindent\nFurther if $h\colon W\to X$ \nis ${\cal K}_X$-transversal,\nthey are \nequivalent to the following condition:\n\n{\rm (3)}\nThe canonical morphism\n$h^*R{\cal H}om_X({\cal F},{\cal K}_X)\n\to \nR{\cal H}om_W(h^*{\cal F},h^*{\cal K}_X)$\nis an isomorphism.\n\n{\rm 2.}\nLet ${\cal G}$ be a constructible complex\nof $\Lambda$-modules on \na scheme $X$ of finite type over $k$.\nThen, the following conditions are equivalent:\n\n{\rm (1)}\nThe diagonal morphism\n$\delta\colon X\to 
X\\times X$\nis \n$R{\\cal H}om_{X\\times X}({\\rm pr}_2^*{\\cal F},\n{\\rm pr}_1^!{\\cal G})$-transversal.\n\n{\\rm (2)}\nThe canonical morphism\n${\\cal G}\\otimes^L\nR{\\cal H}om_X({\\cal F},\\Lambda)\n\\to\nR{\\cal H}om_X({\\cal F},{\\cal G})$\nis an isomorphism.\n\\end{pr}\n\nThe assumptions in Proposition \\ref{prRhom}.1\nare satisfied if $X$ and $W$ are\nsmooth over $k$.\nThe canonical morphism\nin Proposition \\ref{prRhom}.1(3)\nis an analogue of \n$h^*Sol\\ \\!{\\cal M}\\to Sol h^*{\\cal M}$\nfor a ${\\cal D}$-module ${\\cal M}$.\n\n\\proof{\n1.\nThe equivalence of (1) and (2)\nfollows from\nLemma \\ref{lmprpg}.6,\nthe assumption\n$Rh^!\\Lambda\n\\simeq \\Lambda(c)[2c]$ and\nthe isomorphism\n${\\rm id}\\to D_WD_W$ of biduality\n\\cite[Th\\'eor\\`eme 4.3]{TF}.\n\n\n\nThe assumption that $h\\colon W\\to X$ \nis ${\\cal K}_X$-transversal\nmeans that the canonical morphism\n$c_{h,{\\cal K}_X}\\colon\nh^*{\\cal K}_X\\otimes^L Rh^!\\Lambda\n\\to Rh^!{\\cal K}_X={\\cal K}_W$\nis an isomorphism.\nHence by the definition of\n(\\ref{eqFGDD}),\nthe conditions (1) and (3)\nare equivalent.\n\n2.\nWe consider the commutative diagram\n$$\n\\begin{CD}\n\\delta^*R{\\cal H}om_X({\\rm pr}_2^*{\\cal F},\n{\\rm pr}_1^!{\\cal G})\n\\otimes^L R\\delta^!\\Lambda\n@>{c_{\\delta,R{\\cal H}om_X({\\rm pr}_2^*{\\cal F},\n{\\rm pr}_1^!{\\cal G})}} >>\nR\\delta^!\nR{\\cal H}om_X({\\rm pr}_2^*{\\cal F},\n{\\rm pr}_1^!{\\cal G})\n\\\\\n@AAA@VVV\\\\\n{\\cal G}\\otimes^L\nR{\\cal H}om_X({\\cal F},\\Lambda)\n@>>>\nR{\\cal H}om_X({\\cal F},{\\cal G})\n\\end{CD}$$\ndefined as follows.\nThe bottom horizontal arrow\nis the canonical morphism\nin the condition (2).\nThe canonical isomorphism\n${\\cal G}\\boxtimes^L\nD_X{\\cal F}\\to\nR{\\cal H}om_{X\\times X}({\\rm pr}_2^*{\\cal F},\n{\\rm pr}_1^!{\\cal G})$\n\\cite[(3.1.1)]{FL}\ninduces an isomorphism\n${\\cal G}\\otimes^L\nR{\\cal H}om_X({\\cal F},{\\cal K}_X)\n=\\delta^*(\n{\\cal G}\\boxtimes^L\nD_X{\\cal F})\n\\to \n\\delta^*\nR{\\cal 
H}om_{X\\times X}({\\rm pr}_2^*{\\cal F},\n{\\rm pr}_1^!{\\cal G})$.\nThis together with the canonical isomorphism\n$R\\delta^!\\Lambda \\otimes {\\cal K}_X\n\\to \\Lambda$\ndefines the left vertical arrow.\nThe right vertical arrow is defined by\n\\cite[3.1.12.2]{DP}.\nSince the condition (1) is equivalent\nto that the top horizontal arrow\n$c_{\\delta,R{\\cal H}om_X({\\rm pr}_2^*{\\cal F},\n{\\rm pr}_1^!{\\cal G})}$\nis an isomorphism,\nthe assertion follows.\n\\qed}\n\n\\begin{pr}\\label{prpgj}\nLet $$\n\\begin{CD}\nW@>h>> X\\\\\n@A{j'}AA@AAjA\\\\\nV@>{h'}>>U\n\\end{CD}$$\nbe a cartesian diagram\nof schemes \nof finite type over $k$\nsuch that the vertical arrows\nare open immersions.\nLet ${\\cal G}$ be a constructible complex\nof $\\Lambda$-modules on $U$.\nWe consider the conditions:\n\n\n{\\rm (1)}\nThe morphism $h\\colon W\\to X$ \nis $j_!{\\cal G}$-transversal.\n\n\n{\\rm (2)}\nThe morphism $h'\\colon V\\to U$\nis ${\\cal G}$-transversal.\n\n\n\n{\\rm 1.}\nThe condition {\\rm (1)}\nimplies {\\rm (2)}.\nConversely,\nif $Rh^!\\Lambda$ is isomorphic to\n$\\Lambda(c)[2c]$ for a\nlocally constant function $c$\nand if the canonical morphisms\n\\begin{equation}\nj_!{\\cal G}\\to Rj_*{\\cal G},\\quad\nj'_!h^{\\prime *}{\\cal G}\\to \nRj'_*h^{\\prime *}{\\cal G}\n\\label{eqj!j*}\n\\end{equation}\nare isomorphisms,\nthe condition {\\rm (2)}\nimplies {\\rm (1)}.\n\n{\\rm 2.}\nAssume that ${\\cal G}$ is of finite\ntor-dimension on $U$.\nThen, the condition\n{\\rm (1)} is equivalent\nto the combination of\n{\\rm (2)}\nand the following condition:\n\n{\\rm (3)}\nThe base change morphism\n\\begin{equation}\n\\begin{CD}\nh^*Rj_*R{\\cal H}om({\\cal G},{\\cal K}_U)\n@>>>\nRj'_*h^{\\prime *}\nR{\\cal H}om({\\cal G},{\\cal K}_U)\n\\end{CD}\n\\label{eqbcg}\n\\end{equation}\nis an isomorphism.\n\\end{pr}\n\n\\proof{1.\nThe implication {\\rm (1)}$\\Rightarrow\n${\\rm (2)} is clear.\n\nWe consider the commutative diagram\n$$\\xymatrix{\nj'_!h^{\\prime*}{\\cal 
G}\\otimes^L\nRh^!\\Lambda\n\\ar[r]\\ar[d]\n&\nh^*j_!{\\cal G}\\otimes^L\nRh^!\\Lambda\n\\ar[rr]^{c_{h,j_!{\\cal G}}}&&\nRh^!j_!{\\cal G}\\ar[d]\\\\\nRj'_*h^{\\prime*}{\\cal G}\\otimes^L\nRh^!\\Lambda\\ar[dr]&\n&&\nRh^!Rj_*{\\cal G}\\ar[d]\\\\\n&\nRj'_*(h^{\\prime*}{\\cal G}\\otimes^L\nRh^{\\prime !}\\Lambda)\n\\ar[rr]^{Rj'_*(c_{h',{\\cal G}})}&&\nRj'_*Rh^{\\prime!}{\\cal G}\n}$$\ndefined as follows.\nThe upper vertical arrows\nare induced by the\ncanonical morphisms (\\ref{eqj!j*}).\nThe top left horizontal arrow\nis induced by the isomorphism\n$j'_!h^{\\prime*}\\to \nh^*j_!$ and is an isomorphism.\nThe slant arrow is defined \nas the adjoint of the isomorphism\n$j^{\\prime*}\n(Rj'_*h^{\\prime*}{\\cal G}\\otimes^L\nRh^!\\Lambda)\\to \nRj'_*(h^{\\prime*}{\\cal G}\\otimes^L\nRh^{\\prime !}\\Lambda)$\nand is an isomorphism\nif the assumption on\n$Rh^!\\Lambda$ is satisfied.\nThe lower right vertical arrow\nis the adjoint of the isomorphism\n$j^*Rh_!\\to Rh'_!j^{\\prime *}$\nand is an isomorphism.\nThus under the assumptions,\nthe implication {\\rm (2)}$\\Rightarrow\n${\\rm (1)} holds.\n\n\n\n2.\nSince the condition (1)\nimplies (2),\nit suffices to show that\n(\\ref{eqbcg}) is an isomorphism\nif and only if\nthe condition (3) for ${\\cal F}=j_!{\\cal G}$\nin Proposition \\ref{prRhom}.1 is satisfied,\nassuming (2).\nWe consider the\ncommutative diagram\n$$\\begin{CD}\nh^*R{\\cal H}om_X(j_!{\\cal G},{\\cal K}_X)\n@>{(3)}>>\nR{\\cal H}om_W(h^*j_!{\\cal G},\nh^*{\\cal K}_X)\n\\\\\n@VVV@VVV\\\\\nh^*Rj_*R{\\cal H}om_U({\\cal G},{\\cal K}_U)\n@.\nR{\\cal H}om_W(j'_!h^{\\prime*}{\\cal G},\nh^*{\\cal K}_X)\n\\\\\n@V{(\\ref{eqbcg})}VV@VVV\\\\\nRj_*h^{\\prime*}R{\\cal H}om_U({\\cal G},{\\cal K}_U)\n@>>>\nRj'_*R{\\cal H}om_V(h^{\\prime*}\n{\\cal G},h^{\\prime*}{\\cal K}_U)\n\\end{CD}$$\ndefined as follows.\nThe upper left and the\nlower right vertical arrows are\nthe adjunction morphisms\nand are isomorphisms.\nThe upper right vertical arrow\nis induced by the 
isomorphism\n$h^*j_!{\\cal G}\\to\nj'_!h^{\\prime*}{\\cal G}$\nand is an isomorphism.\n\nThe top horizontal arrow (3) is\nthe canonical morphism in\nthe condition (3) in Proposition \n\\ref{prRhom}.1 for ${\\cal F}=j_!{\\cal G}$.\nThe lower one is induced by\nthat for ${\\cal G}$\nand is an isomorphism\nif (2) is satisfied,\nby Proposition \\ref{prRhom}.1\n(1)$\\Rightarrow$(3).\nThus the assertion follows.\n\\qed}\n\n\\subsection{${\\cal F}$-transversality\nand local acyclicity}\\label{ssFla}\n\nIn Proposition \\ref{prla},\n$X$ and $Y$ denote arbitrary schemes\nand $\\Lambda$ denotes a finite local ring\nsuch that the characteristic of the\nresidue field is invertible on $Y$.\n\n\\begin{pr}[{\\rm \\cite[Proposition 2.10]{app}}]\n\\label{prla}\nLet \n\\begin{equation}\n\\begin{CD}\nW@>h>>X\\\\\n@VgVV @VVfV\\\\\nZ@>i>>Y\n\\end{CD}\n\\label{eqZiY}\n\\end{equation}\nbe a cartesian diagram of schemes.\nLet ${\\cal F}$\nbe a complex of $\\Lambda$-modules\non $X$\nand assume that\n$f$ is strongly locally acyclic relatively to ${\\cal F}$.\nThen \n$h\\colon W\\to X$\nis ${\\cal F}$-transversal.\n\\end{pr}\n\n\\proof{\nWe may assume that $i\\colon Z\\to Y$\nis a closed immersion.\nLet $V=Y\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} Z$\nand consider the cartesian diagram\n\\begin{equation}\n\\begin{CD}\nW@>h>>X@<{j'}<< U\\\\\n@VgVV @VfVV @VV{f_V}V\\\\\nZ@>i>>Y@<<< V\n\\end{CD}\n\\end{equation}\nwhere $f\\colon X\\to Y$ is smooth.\nSince $h\\colon W\\to X$ is\n$C$-transversal,\nby Lemma \\ref{lmCf}.1,\nwe may assume that $f\\colon X\\to Y$\nis $C$-transversal.\n\nSince ${\\cal F}$ is assumed\nto be micro-supported on $C$,\nthe morphism $f\\colon X\\to Y$ is locally\nacyclic relatively to ${\\cal F}$.\nHence the assertion follows from\nCorollary \\ref{corlat}.\n\n2.\nLet $h\\colon W\\to X$ 
and\n$f\\colon W\\to Y$\nbe a $C$-transversal pair of\nsmooth morphisms.\nIt suffices to show that\n$f\\colon W\\to Y$ is locally acyclic\nrelatively to $h^*{\\cal F}$.\nBy replacing ${\\cal F}$\nand $C$ by $h^*{\\cal F}$\nand $h^\\circ C$,\nwe may assume $W=X$.\nSince the base $B$ of $C$\ncontains the support of ${\\cal F}$,\nafter replacing $X$ by a neighborhood\nof the support of ${\\cal F}$,\nwe may assume that $f\\colon X\\to Y$\nis smooth by Lemma \\ref{lmCf}.6.\n\n\nWe show that the condition\nin Corollary \\ref{corF} is satisfied.\nLet (\\ref{eqF}) be a cartesian diagram\nsatisfying the condition there.\nSince $f\\colon X\\to Y$\nis smooth,\n$Y'$ and $W$ are smooth over \na finite extension of $k$\nand since $k$ is assumed perfect,\nthe schemes $X'$ and $W$ are smooth over $k$.\nBy Lemma \\ref{lmChf}.2,\nthe proper morphisms $p'\\colon X'\\to X$\nand $p'h\\colon W\\to X$ are\n$C$-transversal.\nThus, by the condition (2),\nthe morphisms $p'\\colon X'\\to X$\nand $p'h\\colon W\\to X$ are\n${\\cal F}$-transversal.\nHence by Lemma \\ref{lmprpg}.3,\n$h\\colon W\\to X'$ is \n$p^{\\prime *}{\\cal F}$-transversal.\nThus by Corollary \\ref{corF},\nthe morphism $f\\colon X\\to Y$ is\nlocally acyclic relatively to ${\\cal F}$.\n\\qed}\n\n\n\\begin{cor}\\label{corfh}\nAssume that\n$k$ is perfect\nand let ${\\cal F}$\nbe a constructible complex \nof $\\Lambda$-modules \nof finite tor-dimension on $X$.\nThen, \nthe following conditions are equivalent:\n\n{\\rm (1)}\n${\\cal F}$ is locally constant.\n\n{\\rm (2)}\nEvery morphism $h\\colon W\\to X$\nof finite type \nof smooth schemes\nis ${\\cal F}$-transversal.\n\\end{cor}\n\n\\proof{\nThe implication (1)$\\Rightarrow$(2)\nis Lemma \\ref{lmprpg}.2.\nBy Lemma \\ref{lmlcst},\nthe condition (1) is equivalent to\nthat ${\\cal F}$ is micro-supported\non the $0$-section $T^*_XX$.\nHence, it follows from Proposition \\ref{prfh}.\n\\qed}\n\n\n\\begin{cor}\\label{corpSS}\nLet ${\\cal F}$ be a constructible\ncomplex of 
$\\Lambda$-modules\nof finite tor-dimension on $X$\nand let $C=SS{\\cal F}$ denote the\nsingular support.\nThen, for a properly $C$-transversal\nmorphism $h\\colon W\\to X$\nof smooth schemes of\nfinite type over a perfect $k$,\nwe have \n\\begin{equation}\nSS h^*{\\cal F}\n=SS Rh^!{\\cal F}=h^{\\circ}SS{\\cal F}.\n\\label{eqpSS}\n\\end{equation}\n\\end{cor}\n\n\\proof{\nWe may assume that $\\Lambda$ is a field\nby Corollary \\ref{corss}.2.\nBy Proposition \\ref{prfh}.1,\n$h$ is ${\\cal F}$-transversal. \nFirst, we assume that ${\\cal F}$ is \na perverse sheaf.\nThen $h^*{\\cal F}[\\dim W-\\dim X]$\nis also a perverse sheaf\nby Lemma \\ref{lmprpg}.5\nand $SS h^*{\\cal F}$\nis the support of\n$CC h^*{\\cal F}$ by Proposition \\ref{prperv}.2.\nSince\n$CC h^*{\\cal F}=h^!CC{\\cal F}$ by Theorem \\ref{thmi*}\nand $CC{\\cal F}\\geqq 0$ by\nProposition \\ref{prperv}.1, the assertion\nfollows in this case.\nThe general case \nis reduced to the case where\n${\\cal F}$ is a perverse sheaf by \nthe equality (\\ref{eqssu}) for \n${\\cal F}$ and $h^*{\\cal F}$.\n\\qed}\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThroughout, we work over the complex numbers ${\\mathbb{C}}$. \n\n\\subsection{The P=W conjecture}\nThe purpose of this paper is to present a proof of the $P=W$ conjecture by de Cataldo--Hausel--Migliorini \\cite{dCHM1} for arbitrary rank $n$ and genus $g \\geq 2$.\n\nLet $C$ be a nonsingular irreducible projective curve of genus $g \\geq 2$. For two coprime integers $n \\in {\\mathbb{Z}}_{\\geq 1}$ and $d \\in {\\mathbb{Z}}$, there are two moduli spaces $M_{\\mathrm{Dol}}$ and $M_B$, called the Dolbeault and the Betti moduli spaces, attached to $C,n,$ and $d$. 
\n\n\nThe Dolbeault moduli space parametrizes stable Higgs bundles $({\\mathcal E}, \\theta)$ on $C$, where ${\\mathcal E}$ is a vector bundle on $C$ of rank $n$ and degree $d$, $\\theta: {\\mathcal E} \\to {\\mathcal E} \\otimes \\Omega_C^1$ is a Higgs field, and stability is defined with respect to the slope $\\mu({\\mathcal E},\\theta) = \\mathrm{deg}({\\mathcal E})\/\\mathrm{rk}({\\mathcal E})$. The moduli space $M_{\\mathrm{Dol}}$ admits the structure of a completely integrable system\n\\[\nh: M_{\\mathrm{Dol}} \\to A := \\bigoplus_{i=1}^n H^0(C, {\\Omega_C^{1}}^{\\otimes i}), \\quad ({\\mathcal E}, \\theta) \\mapsto \\mathrm{char.polynomial}(\\theta),\n\\]\nwhich is referred to as the Hitchin system \\cite{Hit, Hit1}. The Hitchin map $h$ is surjective and proper; it is also Lagrangian with respect to the canonical holomorphic symplectic form on $M_{\\mathrm{Dol}}$ induced by the hyper-K\\\"ahler metric. The \\emph{perverse filtration} is an increasing filtration \n\\[\nP_0H^*(M_{\\mathrm{Dol}}, {\\mathbb{Q}})\\subset P_1H^*(M_{\\mathrm{Dol}}, {\\mathbb{Q}}) \\subset \\cdots \\subset H^*(M_{\\mathrm{Dol}}, {\\mathbb{Q}})\n \\]\non the (singular) cohomology of $M_{\\mathrm{Dol}}$ governed by the topology of the Hitchin system $h$; see Section \\ref{sec1.1} for a brief review.\n\nThe Betti moduli space $M_B$ is the (twisted) character variety associated with $\\mathrm{GL}_n({\\mathbb{C}})$ and degree $d$. It parametrizes isomorphism classes of irreducible local systems\n\\[\n\\rho: \\pi_1(C\\backslash \\{p\\}) \\rightarrow \\mathrm{GL}_n({\\mathbb{C}})\n\\]\nwhere $\\rho$ sends a loop around a chosen point $p$ to \n$e^{\\frac{2\\pi \\sqrt{-1} d}{n}}\\mathrm{Id}_n$. 
Concretely, we have\n\\begin{equation*}\nM_B := \\Big{\\{}a_k, b_k \\in \\mathrm{GL}_n({\\mathbb{C}}),~k=1,2,\\dots,g: ~~\\prod_{j=1}^g [a_j, b_j] = e^{\\frac{2\\pi \\sqrt{-1} d}{n}}\\mathrm{Id}_n \\Big{\\}}\\git\\mathrm{GL}_n({\\mathbb{C}}).\n\\end{equation*}\nIt is an affine variety whose mixed Hodge structure admits a nontrivial weight filtration \n\\[\nW_0 H^*(M_B, {\\mathbb{Q}}) \\subset W_1H^*(M_B, {\\mathbb{Q}}) \\subset \\cdots \\subset H^*(M_B, {\\mathbb{Q}}).\n\\]\n\nNon-abelian Hodge theory \\cite{Simp, Si1994II} gives a canonical diffeomorphism between the two very different algebraic varieties $M_{\\mathrm{Dol}}$ and $M_B$, which identifies their cohomology:\n\\begin{equation}\\label{NAH}\n H^*(M_{\\mathrm{Dol}}, {\\mathbb{Q}}) = H^*(M_{B}, {\\mathbb{Q}}).\n\\end{equation}\nThe $P=W$ conjecture by de Cataldo--Hausel--Migliorini \\cite{dCHM1} \nrefines the identification (\\ref{NAH}); it predicts that the perverse filtration associated with the Hitchin system is matched with the (double-indexed) weight filtration associated with the Betti moduli space. This establishes a surprising connection between the topology of Hitchin systems and the Hodge theory of character varieties.\n\n\\begin{conj}[The P=W conjecture for $\\mathrm{GL}_n$, \\cite{dCHM1}]\\label{conj} For any $k,m\\in {\\mathbb{Z}}_{\\geq 0}$, we have\n\\[\nP_kH^m(M_{\\mathrm{Dol}}, {\\mathbb{Q}}) = W_{2k}H^m(M_{B}, {\\mathbb{Q}}) = W_{2k+1}H^m(M_B, {\\mathbb{Q}}).\n\\]\n\\end{conj}\n\nConjecture \\ref{conj} has previously been proven for $n=2$ and arbitrary genus $g\\geq 2$ by de Cataldo--Hausel--Migliorini \\cite{dCHM1}, and recently for arbitrary rank $n$ and genus 2 by de Cataldo--Maulik--Shen \\cite{dCMS}. The compatibility between the $P=W$ conjecture and Galois conjugation on the Betti side was proven in \\cite{dCMSZ}, which implies that $P=W$ does not depend on the degree $d$ as long as it is coprime to $n$. 
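\n\nAs a standard toy example, not needed in the sequel, consider the abelian case $n=1$: here $M_{\\mathrm{Dol}} = \\mathrm{Pic}^d(C) \\times H^0(C, \\Omega_C^1)$ with $h$ the projection to the second factor, and $M_B = ({\\mathbb{C}}^*)^{2g}$. Every class in $H^m(M_B, {\\mathbb{Q}})$ is of type $(m,m)$, hence of weight $2m$, while the decomposition\n\\[\nRh_* {\\mathbb{Q}} \\simeq \\bigoplus_{k\\geq 0} H^k(\\mathrm{Pic}^d(C), {\\mathbb{Q}}) \\otimes {\\mathbb{Q}}_{H^0(C, \\Omega_C^1)}[-k]\n\\]\nplaces $H^m(M_{\\mathrm{Dol}}, {\\mathbb{Q}})$ in perverse degree exactly $m$; both filtrations on $H^m$ thus jump precisely at $k=m$, matching Conjecture \\ref{conj}.\n\n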
We refer to \\cite[Section 1]{dCHM1} and the paragraphs following \\cite[Theorem 0.2]{dCMS} for discussions concerning connections between Conjecture \\ref{conj} and other directions. In particular, by \\cite{CDP} and \\cite[Section 9.3]{MT}, Conjecture \\ref{conj} implies the correspondence between Gopakumar--Vafa invariants and Pandharipande--Thomas invariants \\cite[Conjecture 3.13]{MT} for the local Calabi--Yau 3-fold $T^*C \\times {\\mathbb{C}}$ in any curve class $n[C]$. \n\n\nThe main result of this paper is a full proof of Conjecture \\ref{conj}.\n\n\\begin{thm}\\label{thm}\n Conjecture \\ref{conj} holds.\n\\end{thm}\n\n\nConjecture \\ref{conj} has several variants which, to the best of our knowledge, are still open. These include the version formulated for possibly singular moduli spaces, intersection cohomology, and general reductive groups \\cite{dCHM1, dCM, FM}, and the version formulated for moduli stacks \\cite{Davison}. There also exist parabolic versions of Conjecture \\ref{conj} and we expect that our argument applies to these settings using parabolic variants of the ingredients here \\cite{Mellit, OY}.\n\n\nFor the case of type $A_{n-1}$ ($\\mathrm{GL}_n, \\mathrm{PGL}_n, \\mathrm{SL}_n$) and a degree $d$ coprime to $n$, Conjecture \\ref{conj} (\\emph{i.e.} the $P=W$ conjecture for $\\mathrm{GL_n}$) is equivalent to the $P=W$ conjecture for $\\mathrm{PGL}_n$; see the paragraph after \\cite[Theorem 0.2]{dCMS}. The case of $\\mathrm{SL}_n$ is more subtle --- it is closely related to the endoscopic decomposition of the $\\mathrm{SL}_n$ Hitchin moduli spaces. A systematic discussion on this aspect can be found in \\cite[Section 5]{MS}. In the special case when $n=p$ a prime number, \\cite[Theorem 0.2]{SLn} implies that the $P=W$ conjecture for $\\mathrm{GL}_p$, $\\mathrm{SL}_p$, and $\\mathrm{PGL}_p$ are all equivalent. 
In particular, we have the following immediate consequence of Theorem \\ref{thm}, which generalizes the result of \\cite{dCHM1} for $\\mathrm{SL}_2$.\n\n\\begin{thm}\nThe $P=W$ conjecture holds for a curve $C$ of any genus at least two and $\\mathrm{SL}_p$ with $p$ a prime number.\n\\end{thm}\n\n\n\\subsection{Idea of the proof}\nOur proof of Conjecture \\ref{conj} has four major steps.\n\n\\begin{enumerate}\n \\item[Step 1.] \\emph{Strong perversity of Chern classes}: Using work of Markman \\cite{Markman}, Shende \\cite{Shende}, and Mellit \\cite{Mellit}, we may reduce the $P=W$ conjecture to a statement about the interaction between Chern classes of the universal family and the perverse filtration. We then lift this statement sheaf-theoretically, formulating it in terms of strong perversity of Chern classes.\n \\item[Step 2.] \\emph{Vanishing cycle techniques}: We apply the formalism of vanishing cycles to reduce the sheaf-theoretic formulation of Step 1 for the Hitchin system to the case of twisted Hitchin systems associated with meromorphic Higgs bundles. Our motivation comes from the key observation of Ng\\^o \\cite{Ngo} and Chaudouard--Laumon \\cite{CL} that the decomposition theorem for such twisted Hitchin systems is more manageable. \n \n Steps 1 and 2 are carried out in Sections 1 and 2. \n\n \\item[Step 3.] \\emph{Global Springer theory}: The global Springer theory of Yun \\cite{Yun1, Yun2, Yun3} produces rich symmetries for certain Hitchin moduli spaces; this proves the sheaf-theoretic formulation of Step 1 over the \\emph{elliptic locus}. For our purposes, we need a version of global Springer theory for the stable locus over the total Hitchin base. This is described in Section 3.\n\n \\item[Step 4.] 
\\emph{A support theorem}: Lastly, in order to extend part of Yun's symmetries induced by Chern classes from the elliptic locus to the total Hitchin base, we prove a support theorem parallel to \\cite{CL} for certain parabolic Hitchin moduli spaces. This is completed in Section 4.\n\\end{enumerate}\n\nThe idea of combining vanishing cycle functors and support theorems was also applied in \\cite{MS} to give a proof of the topological mirror symmetry conjecture of Hausel--Thaddeus \\cite{HT, GWZ}.\n\n\\subsection{Acknowledgements}\nWe would like to thank Mark Andrea de Cataldo, Bhargav Bhatt, and Max Lieblich for various discussions. We are especially grateful to Zhiwei Yun for explaining his thesis to us, both in his office and at the playground. J.S. was supported by the NSF grant DMS-2134315. \n\n\n\n\\section{Perverse filtrations and vanishing cycles}\n\nIn this section, we introduce the notion of \\emph{strong perversity} and show its compatibility with vanishing cycle functors (Proposition \\ref{prop1.4}). This plays a crucial role in Section 2 in lifting the $P=W$ conjecture sheaf-theoretically and reducing it to the twisted case.\n\n\\subsection{Perverse filtrations}\\label{sec1.1}\nLet $f: X \\to Y$ be a proper morphism between irreducible nonsingular quasi-projective varieties (or Deligne--Mumford stacks) with $\\mathrm{dim}X = a$ and $\\mathrm{dim}Y = b$. Let $r$ be the defect of semismallness of $f$:\n\\[\nr: = \\mathrm{dim} X \\times_YX - \\mathrm{dim}X.\n\\]\nIn particular, we have $r = a-b$ when $f$ has equi-dimensional fibers. 
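For orientation, we record the standard dimension count for the Hitchin map $h: M_{\\mathrm{Dol}} \\to A$ from the introduction, which is proper with equi-dimensional Lagrangian fibers:\n\\[\nr = \\mathrm{dim} M_{\\mathrm{Dol}} - \\mathrm{dim} A = \\mathrm{dim} A = n^2(g-1)+1.\n\\]\n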
The perverse filtration \n\\[\nP_0H^m(X, {\\mathbb{Q}}) \\subset P_1H^m(X, {\\mathbb{Q}}) \\subset \\dots \\subset H^m(X, {\\mathbb{Q}})\n\\]\nis an increasing filtration on the cohomology of $X$ governed by the topology of the morphism $f$; it is defined to be\n\\[\nP_iH^m(X, {\\mathbb{Q}}) := \\mathrm{Im}\\left\\{ H^{m-a+r}(Y, {^\\mathbf{p}\\tau_{\\leq i}} (Rf_* {\\mathbb{Q}}_X[a-r])) \\to H^m(X, {\\mathbb{Q}})\\right\\}\n\\]\nwhere $^\\mathbf{p}\\tau_{\\leq * }$ is the perverse truncation functor \\cite{BBD}.\n\nA cohomology class $\\gamma \\in H^l(X, {\\mathbb{Q}})$ can be viewed as a morphism $\\gamma: {\\mathbb{Q}}_X \\to {\\mathbb{Q}}_X[l]$, which naturally induces \n\\begin{equation}\\label{induced}\n\\gamma: Rf_* {\\mathbb{Q}}_X \\to Rf_* {\\mathbb{Q}}_X [l]\n\\end{equation}\nafter pushing forward along $f$. For an integer $c\\geq 0$, we say that $\\gamma \\in H^l(X, {\\mathbb{Q}})$ has \\emph{strong perversity} $c$ with respect to $f$ if its induced morphism (\\ref{induced}) satisfies\n\\begin{equation}\\label{assumption}\n\\gamma \\left({^\\mathbf{p}\\tau_{\\leq i} Rf_* {\\mathbb{Q}}_X}\\right) \\subset {^\\mathbf{p}\\tau_{\\leq i+(c-l)}} \\left(Rf_* {\\mathbb{Q}}_X [l] \\right), \\quad \\forall i;\n\\end{equation}\nmore precisely, the condition (\\ref{assumption}) says that the composition\n\\[\n{^\\mathbf{p}\\tau_{\\leq i} Rf_* {\\mathbb{Q}}_X} \\hookrightarrow Rf_* {\\mathbb{Q}}_X \\xrightarrow{\\gamma} Rf_* {\\mathbb{Q}}_X [l]\n\\]\nfactors through ${^\\mathbf{p}\\tau_{\\leq i+(c-l)}} \\left(Rf_* {\\mathbb{Q}}_X [l] \\right) \\hookrightarrow Rf_* {\\mathbb{Q}}_X [l]$ for any $i$. 
Notice that $\\gamma$ automatically has strong perversity $l$, so this condition is interesting only when $c < l$.\n\n\\begin{lem}\\label{lem1.1}\nIf $\\gamma\\in H^l(X, {\\mathbb{Q}})$ has strong perversity $c$ with respect to $f$, then cup-product with $\\gamma$ satisfies\n\\[\n\\gamma\\cup - : H^m(X, {\\mathbb{Q}}) \\to H^{m+l}(X, {\\mathbb{Q}}), \\quad P_iH^m(X, {\\mathbb{Q}}) \\mapsto P_{i+c}H^{m+l}(X, {\\mathbb{Q}}).\n\\]\n\\end{lem}\n\n\\begin{proof}\nThis follows from taking global cohomology for (\\ref{assumption}) and noticing that\n\\begin{align*}\n &H^{m-a+r}\\left(Y, {^\\mathbf{p}\\tau_{\\leq i+(c-l)}} \\left(Rf_* {\\mathbb{Q}}_X [l+a-r] \\right)\\right) \\\\ &\\quad = H^{(m-a+r)+l}\\left(Y, {^\\mathbf{p}\\tau_{\\leq i+c}} \\left(Rf_* {\\mathbb{Q}}_X [a-r] \\right)\\right). \\qedhere\n \\end{align*}\n\\end{proof}\n\nIn general, the perverse filtration $P_\\bullet H^*(X, {\\mathbb{Q}})$ may not be multiplicative, \\emph{i.e.}, for $\\gamma_j \\in P_{c_j}H^*(X, {\\mathbb{Q}})$ ($j=1,2,\\dots,s$), it may not be true that\n\\[\n\\gamma_1 \\cup \\gamma_2 \\cup \\cdots \\cup \\gamma_s \\in P_{c_1+\\cdots+c_s}H^*(X, {\\mathbb{Q}});\n\\]\nsee \\cite[Exercise 5.6.8]{Park}. In fact, it was proven in \\cite[Theorem 0.6]{dCMS} that Conjecture \\ref{conj} is equivalent to the multiplicativity of the perverse filtration associated with the Hitchin system. 
The following easy observation illustrates the advantage of considering strong perversity in view of the multiplicativity issue.\n\n\\begin{lem}\\label{lem1.2}\nIf the class $\\gamma_j \\in H^{l_j}(X, {\\mathbb{Q}})$ ($j=1,2,\\dots, s$) has strong perversity $c_j$ with respect to $f$, then the cup product \n\\[\n\\gamma_1 \\cup \\gamma_2 \\cup \\cdots \\cup \\gamma_s \\in H^{l_1+\\cdots+l_s}(X, {\\mathbb{Q}})\n\\]\nhas strong perversity $\\sum_jc_j$.\n\\end{lem}\n\n\n\\subsection{Vanishing cycles}\\label{Sec1.2}\n\nThroughout section \\ref{Sec1.2}, we let $g: X \\to {\\mathbb{A}}^1$ be a morphism such that $X$ is nonsingular and irreducible with $X_0 = g^{-1}(0)$ the closed fiber over $0 \\in {\\mathbb{A}}^1$. We consider the vanishing cycle functor \n\\[\n\\varphi_g: D^b_c(X) \\rightarrow D^b_c(X_0)\n\\]\nwhich preserves the perverse $t$-structures,\n\\[\n\\varphi_g: \\mathrm{Perv}(X) \\rightarrow \\mathrm{Perv}(X_0).\n\\]\nHere $\\mathrm{Perv}(-)$ stands for the abelian category of perverse sheaves. We denote by \n\\[\n\\varphi_g : = \\varphi_g(\\mathrm{IC}_X) = \\varphi_g({\\mathbb{Q}}_X[\\mathrm{dim}X]) \\in \\mathrm{Perv}(X_0) \\]\nthe perverse sheaf of vanishing cycles. We use $X' \\subset X$ to denote the support of the vanishing cycle complex $\\varphi_g$ so that $\\varphi_g \\in \\mathrm{Perv}(X')$.\n\nRecall that for any bounded constructible object ${\\mathcal K} \\in D^b_c(X)$, a class $\\gamma \\in H^l(X, {\\mathbb{Q}})$ induces a morphism\n\\[\n\\gamma: {\\mathcal K} \\to {\\mathcal K} [l] \n\\]\nvia taking the tensor product with $\\gamma: {\\mathbb{Q}}_X \\to {\\mathbb{Q}}_X[l]$. The following lemma shows the compatibility between the vanishing cycle functor and restriction of cohomology classes.\n\n\\begin{lem}\\label{lem1.3}\nWith the same notation as above, let $i: X'\\hookrightarrow X$ be the closed embedding. 
The morphism \\[\ni^*\\gamma: \\varphi_g \\rightarrow \\varphi_g[l] \\in D^b_c(X')\n\\]\ninduced by the class $i^*\\gamma$ (applied to the object ${\\mathcal K} = \\varphi_g$) coincides with \n\\[\n\\varphi_g(\\gamma): \\varphi_g \\to \\varphi_g[l] \\in D_c^b(X')\n\\]\nobtained by applying the functor $\\varphi_g$ to $\\gamma: {\\mathbb{Q}}_X \\to {\\mathbb{Q}}_X[l]$.\n\\end{lem}\n\n\\begin{proof}\nLet $\\iota: X_0 \\hookrightarrow X$ be the closed embedding of the closed fiber over $0$. By \\cite[Definition 8.6.2]{KS}, the vanishing cycle functor can be written as\n\\begin{equation}\\label{vanishing}\n\\varphi_g(-) = \\iota^* R{\\mathcal H}{om}({\\mathcal C}, -) \\in D^b_c(X_0)\n\\end{equation}\nwith ${\\mathcal C}$ a fixed complex of sheaves. The morphism obtained\nby applying $R{\\mathcal H}{om}({\\mathcal C}, -)$ to $\\gamma: {\\mathbb{Q}}_X \\to {\\mathbb{Q}}_X[l]$ \ncoincides with the morphism induced by applying $\\gamma$ to $R{\\mathcal H}{om}({\\mathcal C}, {\\mathbb{Q}}_X)$.\nSimilarly, the functor $\\iota^*$ sends the morphism induced by $\\gamma$\nto the morphism induced by $\\iota^*\\gamma$.\n\nTherefore, if we denote by $\\iota': X' \\hookrightarrow X_0$ the closed embedding and view $\\varphi_g$ as a perverse sheaf on $X'$, we obtain the equality of morphisms\n\\begin{equation}\\label{321}\n\\varphi_g(\\gamma) = \\iota^*\\gamma: \\iota'_*\\varphi_g \\to \\iota'_*\\varphi_g[l] \\in D^b_c(X_0). \n\\end{equation}\n\n\nFinally, after applying $\\iota'^*: D^b_c(X_0) \\rightarrow D^b_c(X')$ to (\\ref{321}) and noticing $\\iota'^*\\iota'_*= \\mathrm{id}$, we obtain that the class $i^*\\gamma = \\iota'^*\\iota^*\\gamma$ induces\n\\[\n\\varphi_g(\\gamma): \\varphi_g \\to \\varphi_g[l] \\in D_c^b(X').\n\\]\nThis completes the proof.\n\\end{proof}\n\n\n\\begin{prop}\\label{prop1.4}\nLet $g: X \\to {\\mathbb{A}}^1$ and $X'$ be as above. 
Assume that $X'$ is nonsingular and\n\\begin{equation}\\label{4.1_0}\n\\varphi_g \\simeq \\mathrm{IC}_{X'} = {\\mathbb{Q}}_{X'}[\\mathrm{dim}X'] \\in \\mathrm{Perv}(X').\n\\end{equation}\nAssume further that we have the commutative diagram\n\\begin{equation*}\n\\begin{tikzcd}\nX' \\arrow[r, \"i\"] \\arrow[d, \"f'\"]\n& X \\arrow[d, \"f\"] \\\\\nY' \\arrow[r]\n& Y\n\\end{tikzcd}\n\\end{equation*}\nsuch that $f$ is proper and $g = \\nu \\circ f$ with $\\nu: Y \\to {\\mathbb{A}}^1$. Then if a class $\\gamma \\in H^l(X, {\\mathbb{Q}})$ has strong perversity $c$ with respect to $f$, its restriction $i^*\\gamma \\in H^l(X', {\\mathbb{Q}})$ has strong perversity $c$ with respect to $f'$. \n\\end{prop}\n\n\n\\begin{proof}\nBy definition, the morphism \n\\begin{equation}\\label{1.4_1}\n\\gamma: Rf_* {\\mathbb{Q}}_X \\to Rf_* {\\mathbb{Q}}_X[l] \\in D^b_c(Y)\n\\end{equation}\ninduced by $\\gamma$ satisfies\n\\begin{equation}\\label{1.4_2}\n\\gamma \\left({^\\mathbf{p}\\tau_{\\leq i} Rf_* {\\mathbb{Q}}_X}\\right) \\subset {^\\mathbf{p}\\tau_{\\leq i+(c-l)}} (Rf_* {\\mathbb{Q}}_X[l]), \\quad \\forall i.\n\\end{equation}\nNow we apply the vanishing cycle functor $\\varphi_\\nu$ to (\\ref{1.4_1}). On one hand, we have the base change $Rf'_* \\circ \\varphi_g \\simeq \\varphi_\\nu\\circ Rf_*$ and the fact that the vanishing cycle functor preserves the perverse $t$-structures. 
Therefore (\\ref{1.4_2}) implies that the morphism\n\\[\n\\varphi_g(\\gamma): Rf'_* \\varphi_g \\to Rf'_* \\varphi_g[l] \\in D^b_c(Y')\n\\]\nsatisfies\n\\[\n\\varphi_g(\\gamma) \\left({^\\mathbf{p}\\tau_{\\leq i}} Rf'_* \\varphi_g\\right) \\subset {^\\mathbf{p}\\tau_{\\leq i+(c-l)}} (Rf'_* \\varphi_g[l]), \\quad \\forall i.\n\\]\nOn the other hand, using the isomorphism (\\ref{4.1_0}) and Lemma \\ref{lem1.3}, the above equation means precisely that the morphism $i^*\\gamma : {\\mathbb{Q}}_{X'} \\to {\\mathbb{Q}}_{X'}[l]$ satisfies\n\\begin{equation*}\n(i^*\\gamma) \\left({^\\mathbf{p}\\tau_{\\leq i} Rf'_* {\\mathbb{Q}}_{X'}}\\right) \\subset {^\\mathbf{p}\\tau_{\\leq i+(c-l)}} (Rf'_* {\\mathbb{Q}}_{X'}[l]), \\quad \\forall i;\n\\end{equation*}\nthat is, the class $i^*\\gamma$ has strong perversity $c$ with respect to $f'$.\n\\end{proof}\n\n\n\n\n\n\n\\section{Strong perversity for Chern classes}\n\nIn this section, we fix the rank $n$ and the degree $d$ with $(n,d)=1$. In Theorem \\ref{conj2.7}, we rephrase and then enhance Conjecture \\ref{conj} to a statement involving ${\\mathcal L}$-twisted Hitchin systems and strong perversity of Chern classes. It will be proven in Sections 3 and 4.\n\n\\subsection{Tautological classes}\n\nAs discussed in \\cite[Section 0.3]{dCMS}, the $P=W$ conjecture for $\\mathrm{GL}_n$ can be reduced to a statement involving tautological classes on $M_{\\mathrm{Dol}}$ and the perverse filtration associated with $h: M_{\\mathrm{Dol}} \\to A$, without reference to the Betti moduli space $M_B$. In this subsection, we recall this reduction step.\n\n\nFor convenience, we work with the $\\mathrm{PGL}_n$ Dolbeault moduli space to avoid normalization of a universal family as in \\cite{dCMS}; we refer to \\cite{Shende} for a detailed discussion concerning the formulation of the $P=W$ conjecture in terms of tautological classes for the $\\mathrm{PGL}_n$ Dolbeault moduli space.\n\nFix ${\\mathcal N} \\in \\mathrm{Pic}^d(C)$. 
Let $\\widecheck{M}_{\\mathrm{Dol}}$ be \nthe moduli stack of stable Higgs bundles $({\\mathcal E}, \\theta)$ with $\\mathrm{det}({\\mathcal E}) \\simeq {\\mathcal N}$ and $\\mathrm{trace}(\\theta) = 0$, rigidified with respect to the generic $\\mu_n$-stabilizer; this is the same as taking its coarse moduli space. We refer to this (nonsingular) variety as the $\\mathrm{SL}_n$ Dolbeault moduli space of degree $d$. The finite group $\\Gamma: = \\mathrm{Pic}^0(C)[n]$ acts naturally on $\\widecheck{M}_{\\mathrm{Dol}}$ via tensor product. The $\\mathrm{PGL}_n$ Dolbeault moduli space of degree $d$ is recovered by taking the quotient stack\n\\begin{equation}\\label{233}\n\\widehat{M}_{\\mathrm{Dol}} : = \\widecheck{M}_{\\mathrm{Dol}}\/\\Gamma\n\\end{equation}\nwhich is a nonsingular Deligne--Mumford stack. The $\\mathrm{PGL}_n$ Hitchin system \n\\[\n\\widehat{h}: \\widehat{M}_{\\mathrm{Dol}} \\to \\widehat{A}: = \\bigoplus_{i=2}^n H^0(C, {\\Omega_C^1}^{\\otimes i})\n\\]\nis induced by the Hitchin map associated with $\\widecheck{M}_{\\mathrm{Dol}}$ as the $\\Gamma$-action is fiberwise with respect to $h$. Analogous to the $\\mathrm{GL}_n$ case, we have the perverse filtration $P_*H^*(\\widehat{M}_{\\mathrm{Dol}}, {\\mathbb{Q}})$ associated with $\\widehat{h}$. The universal $\\mathrm{PGL}_n$-bundle ${\\mathcal U}$ on $C \\times \\widehat{M}_{\\mathrm{Dol}}$ induces Chern characters\n\\[\n\\mathrm{ch}_k({\\mathcal U}) \\in H^{2k}(C \\times \\widehat{M}_{\\mathrm{Dol}}, {\\mathbb{Q}}),\\quad k \\geq 2. 
\n\\]\nThe tautological classes $c_k(\\gamma)$ are defined to be\n\\[\nc_k(\\gamma): = \\int_\\gamma \\mathrm{ch}_k({\\mathcal U}) = q_{M*}(q_C^*\\gamma \\cup \\mathrm{ch}_k({\\mathcal U})) \\in H^*(\\widehat{M}_{\\mathrm{Dol}}, {\\mathbb{Q}}), \\quad \\gamma \\in H^*(C,{\\mathbb{Q}}),\n\\]\nwhere $q_{(-)}$ are the projections from $C \\times \\widehat{M}_{\\mathrm{Dol}}$.\n\nNow we consider the $\\mathrm{PGL}_n$ Betti moduli space of degree $d$ with the isomorphism on cohomology provided by non-abelian Hodge theory (\\emph{cf.} \\cite[Theorem 1.2.4]{dCHM1}):\n\\begin{equation}\\label{NAH2}\nH^*(\\widehat{M}_{\\mathrm{Dol}}, {\\mathbb{Q}}) = H^*(\\widehat{M}_{B}, {\\mathbb{Q}}).\n\\end{equation}\nDefine the Hodge sub-vector space\n\\[\n^k\\mathrm{Hdg}^m(\\widehat{M}_B): = W_{2k}H^m(\\widehat{M}_B, {\\mathbb{Q}}) \\cap F^kH^m(\\widehat{M}_B, {\\mathbb{C}}) \\subset H^m(\\widehat{M}_{B}, {\\mathbb{Q}}).\n\\]\n\nThe following theorem, collecting results of Markman and Shende, provides a complete description of $H^*(\\widehat{M}_B, {\\mathbb{Q}})$ in terms of the Chern classes $\\mathrm{ch}_k({\\mathcal U})$ and the weight filtration.\n\n\\begin{thm}[\\cite{Markman, Shende}]\\label{thm2.1}\nWe use the same notation as above.\n\\begin{enumerate}\n \\item[(i)] The tautological classes $c_k(\\gamma)\\in H^*(\\widehat{M}_{\\mathrm{Dol}}, {\\mathbb{Q}})$ generate $H^*(\\widehat{M}_\\mathrm{Dol}, {\\mathbb{Q}})$ as a ${\\mathbb{Q}}$-algebra.\n \\item[(ii)] The class $c_k(\\gamma)$, passing through the non-abelian Hodge correspondence (\\ref{NAH2}), lies in $^k\\mathrm{Hdg}^*(\\widehat{M}_B)$. In particular, we have a canonical decomposition\n \\[\nH^*(\\widehat{M}_B, {\\mathbb{Q}}) = \\bigoplus_{m,k} {^k\\mathrm{Hdg}^m(\\widehat{M}_B)}. 
\\]\n\\end{enumerate}\n\\end{thm}\n\n\\begin{proof}\nThe first part was proven in \\cite{Markman}, and the second part was proven in \\cite{Shende}.\n\\end{proof}\n\n\nTheorem \\ref{thm2.1} (ii) yields immediately that \n\\[\nW_{2k}H^m(\\widehat{M}_B, {\\mathbb{Q}}) = W_{2k+1}H^m(\\widehat{M}_B, {\\mathbb{Q}}).\n\\]\nMoreover, by Theorem \\ref{thm2.1}, the $P=W$ conjecture implies that each class $\\prod_{i=1}^s c_{k_i}(\\gamma_i)$ lies in the perverse piece $P_{\\Sigma_i{k_i}}H^*(\\widehat{M}_{\\mathrm{Dol}}, {\\mathbb{Q}})$; the latter is in fact equivalent to the $P=W$ conjecture. Indeed, suppose we have \n\\begin{equation}\\label{taut}\n\\prod_{i=1}^s c_{k_i}(\\gamma_i) \\in P_{\\Sigma_i{k_i}}H^*(\\widehat{M}_{\\mathrm{Dol}}, {\\mathbb{Q}})\n\\end{equation}\nfor any product of tautological classes.\nThen we know that $W_{2k}H^*(\\widehat{M}_B, {\\mathbb{Q}}) \\subset P_kH^*(\\widehat{M}_{\\mathrm{Dol}}, {\\mathbb{Q}})$. The curious hard Lefschetz theorem proven by Mellit \\cite{Mellit} forces the two filtrations $P_\\bullet$ and $W_{2\\bullet}$ to coincide as long as one contains the other. This was mentioned in the last paragraph of \\cite[Section 1]{Mellit}; we include its proof here for the reader's convenience.\n\n\\begin{lem}\nWe denote by $V$ the ${\\mathbb{Q}}$-vector space (\\ref{NAH2}) with the perverse and the weight filtrations $P_\\bullet$ and $W_{2\\bullet}$. If $W_{2k} \\subset P_k$ for all $k$, then $W_{2k} = P_k$ for all $k$.\n\\end{lem}\n\n\\begin{proof}\nSet $r = \\mathrm{dim}\\widehat{M}_{\\mathrm{Dol}}$; both filtrations terminate at the $r$-th pieces, \\emph{i.e.}, $W_{2r} = P_{r} = V$. We first show that \n\\[\nW_0 = P_0, \\quad W_{2(r-1)} = P_{r-1}. \n\\]\nIn fact, since $W_\\bullet \\subset P_\\bullet$, we have\n\\[\n\\mathrm{dim} W_0 \\leq \\mathrm{dim} P_0 = \\mathrm{dim}V\/P_{r-1} \\leq \\mathrm{dim} V\/W_{2(r-1)};\n\\]\nhere the middle equality holds by the relative hard Lefschetz theorem, while the curious hard Lefschetz theorem gives $\\mathrm{dim} W_0 = \\mathrm{dim} V\/W_{2(r-1)}$, forcing each inequality to be an equality. So our claim follows. 
\n\nThen we proceed by applying the same argument to $W_2, P_1$ and $W_{2(r-2)}, P_{r-2}$.\nThe lemma follows by a simple induction.\n\\end{proof}\n\n\nIn conclusion, we have reduced the $P=W$ conjecture to the following:\n\n\n\\begin{conj}[Equivalent version of P=W]\\label{conj2.2}\nCondition (\\ref{taut}) holds for all products of tautological classes.\n\\end{conj}\n\n\\subsection{Strong perversity for Chern classes for ${\\mathcal L}$-twisted Hitchin systems}\n\nFor our purposes, it is important to consider Dolbeault moduli spaces of Higgs bundles twisted by an effective line bundle ${\\mathcal L}$ (\\emph{i.e.}, $H^0(C, {\\mathcal L}) \\neq 0$). These moduli spaces have already appeared in \\cite{CL, HT2, HT, Yun1, Yun2}; we review the construction here briefly. \n\nSet $\\Omega_{\\mathcal L}$ to be the line bundle $\\Omega^1_C \\otimes {\\mathcal L}$ on the curve $C$. We denote by $M^{\\mathcal L}_{\\mathrm{Dol}}$ the moduli space of stable twisted Higgs bundles\n\\[\n({\\mathcal E}, \\theta), \\quad \\theta: {\\mathcal E} \\to {\\mathcal E} \\otimes\\Omega_{\\mathcal L}, \\quad \\mathrm{rk}({\\mathcal E}) = n, ~~\\mathrm{deg}({\\mathcal E})=d,\n\\]\nwith respect to the slope stability condition. The corresponding Hitchin map\n\\[\nh^{\\mathcal L}: M^{\\mathcal L}_{\\mathrm{Dol}} \\to A^{\\mathcal L}: = \\bigoplus_{i=1}^n H^0\\left(C, {\\Omega_{\\mathcal L}}^{\\otimes i} \\right) ,\\quad ({\\mathcal E}, \\theta) \\mapsto \\mathrm{char.polynomial}(\\theta),\n\\]\nis still proper as in the untwisted case, but it fails to be a Lagrangian fibration when $\\mathrm{deg}({\\mathcal L})>0$. The ${\\mathcal L}$-twisted $\\mathrm{SL}_n$ and $\\mathrm{PGL}_n$ Dolbeault moduli spaces $\\widecheck{M}^{\\mathcal L}_{\\mathrm{Dol}}$ and $\\widehat{M}^{\\mathcal L}_{\\mathrm{Dol}}$ can be constructed similarly. 
The moduli space\n\\[\n\\widecheck{M}^{\\mathcal L}_{\\mathrm{Dol}}:= \\{({\\mathcal E}, \\theta) \\in M^{\\mathcal L}_{\\mathrm{Dol}}|~~ \\mathrm{det}({\\mathcal E}) \\simeq {\\mathcal N} \\in \\mathrm{Pic}^d(C), \\mathrm{trace}(\\theta) = 0\\}\n\\]\nadmits a Hitchin map\n\\[\n\\widecheck{h}^{\\mathcal L}: \\widecheck{M}^{\\mathcal L}_{\\mathrm{Dol}} \\to \\widehat{A}^{\\mathcal L}:= \\bigoplus_{i=2}^nH^0\\left(C, {\\Omega_{\\mathcal L}}^{\\otimes i} \\right)\\]\nand a fiberwise $\\Gamma = \\mathrm{Pic}^0(C)[n]$-action by tensor product. Taking the $\\Gamma$-quotient recovers the ${\\mathcal L}$-twisted $\\mathrm{PGL}_n$ Hitchin map\n\\[\n\\widehat{h}^{\\mathcal L}: \\widehat{M}^{\\mathcal L}_{\\mathrm{Dol}} =\\widecheck{M}^{\\mathcal L}_{\\mathrm{Dol}}\/\\Gamma \\to \\widehat{A}^{\\mathcal L}. \n\\]\n\n\nAn observation in \\cite[Section 4]{MS} is that, for a fixed closed point $p\\in C$, the ${\\mathcal L}$-twisted and ${\\mathcal L}(p)$-twisted $\\mathrm{SL}_n$ Dolbeault moduli spaces can be related via critical loci and vanishing cycles, which we recall in the following.\n\nBy viewing an ${\\mathcal L}$-twisted Higgs bundle naturally as an ${\\mathcal L}(p)$-twisted Higgs bundle, we have the natural embedding $i: \\widecheck{M}^{\\mathcal L}_{\\mathrm{Dol}} \\hookrightarrow \\widecheck{M}^{{\\mathcal L}(p)}_{\\mathrm{Dol}}$ which induces the commutative diagram\n\\begin{equation}\\label{2.3_1}\n\\begin{tikzcd}\n\\widecheck{M}^{\\mathcal L}_{\\mathrm{Dol}}\\arrow[r, hook,\"i\"] \\arrow[d, \"\\widecheck{h}^{\\mathcal L}\"]\n&\\widecheck{M}^{{\\mathcal L}(p)}_{\\mathrm{Dol}} \\arrow[d, \"\\widecheck{h}^{{\\mathcal L}(p)}\"] \\\\\n\\widehat{A}^{\\mathcal L} \\arrow[r,hook]\n& \\widehat{A}^{{\\mathcal L}(p)}.\n\\end{tikzcd}\n\\end{equation}\n\nWe recall the following theorem from \\cite{MS}.\n\n\\begin{thm}[\\cite{MS} Theorem 4.5]\\label{thm2.5}\nThere exists a regular function $g: \\widecheck{M}^{{\\mathcal L}(p)}_{\\mathrm{Dol}} \\to {\\mathbb{A}}^1$ factorized as 
$g = \\nu \\circ \\widecheck{h}^{{\\mathcal L}(p)}$ with $\\nu: \\widehat{A}^{{\\mathcal L}(p)} \\to {\\mathbb{A}}^1$ such that\n\\[\n\\varphi_g \\simeq \\mathrm{IC}_{\\widecheck{M}^{{\\mathcal L}}_{\\mathrm{Dol}}} = {\\mathbb{Q}}_{\\widecheck{M}^{{\\mathcal L}}_{\\mathrm{Dol}}}[\\mathrm{dim} \\widecheck{M}^{{\\mathcal L}}_{\\mathrm{Dol}}].\n\\]\n\\end{thm}\n\nSince the embedding $i: \\widecheck{M}^{\\mathcal L}_{\\mathrm{Dol}} \\hookrightarrow \\widecheck{M}^{{\\mathcal L}(p)}_{\\mathrm{Dol}}$ is $\\Gamma$-equivariant, taking $\\Gamma$-quotients in the diagram (\\ref{2.3_1}) yields the following commutative diagram:\n\\begin{equation}\\label{diagram3}\n\\begin{tikzcd}\n\\widehat{M}^{\\mathcal L}_{\\mathrm{Dol}}\\arrow[r, hook, \"\\widehat{i}\"] \\arrow[d, \"\\widehat{h}^{\\mathcal L}\"]\n&\\widehat{M}^{{\\mathcal L}(p)}_{\\mathrm{Dol}} \\arrow[d, \"\\widehat{h}^{{\\mathcal L}(p)}\"] \\\\\n\\widehat{A}^{\\mathcal L} \\arrow[r, hook]\n& \\widehat{A}^{{\\mathcal L}(p)}.\n\\end{tikzcd}\n\\end{equation}\n\n\n\\begin{prop}\\label{prop2.6}\nThe vanishing cycle complex $\\varphi_{\\widehat{g}}$ associated with the regular function\n\\[\n\\widehat{g}: = \\nu \\circ \\widehat{h}^{{\\mathcal L}(p)}: \\widehat{M}^{{\\mathcal L}(p)}_{\\mathrm{Dol}} \\to {\\mathbb{A}}^1\n\\]\nsatisfies\n\\[\n\\varphi_{\\widehat{g}} \\simeq \\mathrm{IC}_{\\widehat{M}^{{\\mathcal L}}_{\\mathrm{Dol}}} = {\\mathbb{Q}}_{\\widehat{M}^{{\\mathcal L}}_{\\mathrm{Dol}}}\\left[\\mathrm{dim} \\widehat{M}^{{\\mathcal L}}_{\\mathrm{Dol}}\\right].\\]\n\\end{prop}\n\n\\begin{proof}\nWe reduce Proposition \\ref{prop2.6} to Theorem \\ref{thm2.5}. Consider the $\\Gamma$-quotient map $r: \\widecheck{M}^{{\\mathcal L}(p)}_{\\mathrm{Dol}} \\to \\widehat{M}^{{\\mathcal L}(p)}_{\\mathrm{Dol}}$. 
The direct image $r_* {\mathbb{Q}}_{\widecheck{M}^{{\mathcal L}(p)}_{\mathrm{Dol}}}$ admits a natural $\Gamma$-equivariant structure, whose invariant part recovers \n\begin{equation}\label{2.6_1}\n\left(r_* {\mathbb{Q}}_{\widecheck{M}^{{\mathcal L}(p)}_{\mathrm{Dol}}}\right)^{\Gamma} \simeq {\mathbb{Q}}_{\widehat{M}^{{\mathcal L}(p)}_{\mathrm{Dol}}}.\n\end{equation}\nSince $g = \widehat{g} \circ r$, we have\n\begin{align*}\n\varphi_{\widehat{g}} & = \varphi_{\widehat{g}}\left({\mathbb{Q}}_{\widehat{M}^{{\mathcal L}(p)}_{\mathrm{Dol}}}\left[\mathrm{dim} \widehat{M}^{{\mathcal L}(p)}_{\mathrm{Dol}}\right]\right) \\ & \simeq \varphi_{\widehat{g}}\left(r_* {\mathbb{Q}}_{\widecheck{M}^{{\mathcal L}(p)}_{\mathrm{Dol}}}\left[\mathrm{dim} \widecheck{M}^{{\mathcal L}(p)}_{\mathrm{Dol}}\right]\right)^{\Gamma}\\\n& \simeq (r_* \varphi_g)^\Gamma \\ & \simeq {\mathbb{Q}}_{\widehat{M}^{{\mathcal L}}_{\mathrm{Dol}}}\left[\mathrm{dim} \widehat{M}^{{\mathcal L}}_{\mathrm{Dol}}\right].\n\end{align*}\nHere the first equation follows by definition, the second uses (\ref{2.6_1}), the third follows from proper base change (the quotient map $r$ is finite), and the last is given by Theorem \ref{thm2.5} together with the analog of (\ref{2.6_1}) for $\widecheck{M}^{{\mathcal L}}_{\mathrm{Dol}}$.\n\end{proof}\n\n\nNow we formulate a sheaf-theoretic enhancement of Conjecture \ref{conj2.2}.\n\n\begin{thm}\label{conj2.7}\nThere exists an effective line bundle ${\mathcal L}$ such that the class\n\[\n\mathrm{ch}_k({\mathcal U}^{\mathcal L}) \in H^{2k}(C\times \widehat{M}^{\mathcal L}_{\mathrm{Dol}}, {\mathbb{Q}})\n\]\nhas strong perversity $k$ with respect to \n\[\n\mathbf{h}^{\mathcal L}: = \mathrm{id} \times \widehat{h}^{\mathcal L}: C\times \widehat{M}^{\mathcal L}_{\mathrm{Dol}} \to C\times \widehat{A}^{\mathcal L}. 
\n\\]\nHere ${\\mathcal U}^{\\mathcal L}$ be the universal $\\mathrm{PGL}_n$-bundle over $C\\times \\widehat{M}_{\\mathrm{Dol}}$.\n\\end{thm}\n\nNotice that the perversity bound here is stronger than the trivial one of $2k$. We prove in the rest of this section that Theorem \\ref{conj2.7} indeed implies Conjecture \\ref{conj2.2}; therefore it implies Conjecture \\ref{conj}. We prove Theorem \\ref{conj2.7} in Sections 3 and 4.\n\n\\begin{rmk}\nNote that although for our purposes, we only need \\emph{one} effective line bundle ${\\mathcal L}$ satisfying Theorem \\ref{conj2.7}, our proof actually shows that it is true for \\emph{any} effective ${\\mathcal L}$.\n\\end{rmk}\n\n\\subsection{Theorem \\ref{conj2.7} implies Conjecture \\ref{conj2.2}} We start with the following claim.\n\n\n\\medskip\n\\noindent {\\bf Claim.} For a point $p\\in C$, if the class\n\\[\n\\mathrm{ch}_k({\\mathcal U}^{{\\mathcal L}(p)}) \\in H^{2k}(C\\times \\widehat{M}^{{\\mathcal L}(p)}_{\\mathrm{Dol}}, {\\mathbb{Q}}) \n\\]\nhas strong perversity $k$ with respect to $\\mathbf{h}^{{\\mathcal L}(p)}: C\\times \\widehat{M}^{{\\mathcal L}(p)}_{\\mathrm{Dol}} \\to C\\times \\widehat{A}^{{\\mathcal L}(p)}$, then the class\n\\[\n\\mathrm{ch}_k({\\mathcal U}^{{\\mathcal L}}) \\in H^{2k}(C\\times \\widehat{M}^{{\\mathcal L}}_{\\mathrm{Dol}}, {\\mathbb{Q}})\n\\]\nhas strong perversity $k$ with respect to $\\mathbf{h}^{{\\mathcal L}}: C\\times \\widehat{M}^{\\mathcal L}_{\\mathrm{Dol}} \\to C\\times \\widehat{A}^{\\mathcal L}$.\n\n\\begin{proof}[Proof of Claim]\n\nWe consider the commutative diagram obtained from (\\ref{diagram3}):\n\n\\begin{equation}\\label{diagram4}\n\\begin{tikzcd}\nC \\times \\widehat{M}^{\\mathcal L}_{\\mathrm{Dol}}\\arrow[r,hook, \"\\mathbf{i}\"] \\arrow[d, \"\\mathbf{h}^{\\mathcal L}\"]\n&C\\times \\widehat{M}^{{\\mathcal L}(p)}_{\\mathrm{Dol}} \\arrow[d, \"\\mathbf{h}^{{\\mathcal L}(p)}\"] \\\\\nC\\times \\widehat{A}^{\\mathcal L} \\arrow[r,hook]\n& C\\times 
\\widehat{A}^{{\\mathcal L}(p)}.\n\\end{tikzcd}\n\\end{equation}\nHere $\\mathbf{i} = \\mathrm{id} \\times \\widehat{i}$. By definition, we have $\\mathbf{i}^* {\\mathcal U}^{{\\mathcal L}(p)} = {\\mathcal U}^{{\\mathcal L}}$. Therefore \n\\[\n\\mathbf{i}^* \\mathrm{ch}_k({\\mathcal U}^{{\\mathcal L}(p)}) = \\mathrm{ch}_k({\\mathcal U}^{\\mathcal L}).\n\\]\nWe know from Proposition \\ref{prop2.6} that the vanishing cycle complex associated with the regular function\n\\[\n\\mathbf{g}: = \\widehat{g} \\circ \\mathrm{pr}_M: C\\times \\widehat{M}^{{\\mathcal L}(p)}_{\\mathrm{Dol}} \\to\\widehat{M}^{{\\mathcal L}(p)}_{\\mathrm{Dol}}\\to {\\mathbb{A}}^1\n\\]\nsatisfies\n\\[\n\\varphi_{\\mathbf{g}} \\simeq \\mathrm{IC}_{C\\times \\widehat{M}^{{\\mathcal L}}_{\\mathrm{Dol}}} = {{\\mathbb{Q}}}_{C\\times \\widehat{M}^{{\\mathcal L}}_{\\mathrm{Dol}}}\\left[\\mathrm{dim}~ (C\\times \\widehat{M}^{{\\mathcal L}}_{\\mathrm{Dol}})\\right].\n\\]\nOur claim then follows from Proposition \\ref{prop1.4} (applied to the diagram (\\ref{diagram4}), the class $\\mathrm{ch}_k({\\mathcal U}^{{\\mathcal L}(p)})$, and the map $\\mathbf{g}$).\n\\end{proof}\n\n\n\n\n\nAs a consequence of the claim, the statement of Theorem \\ref{conj2.7} holds for ${\\mathcal L} \\simeq {\\mathcal O}_C$, \\emph{i.e.}, the class \n\\[\n\\mathrm{ch}_k({\\mathcal U}) \\in H^{2k}(C\\times \\widehat{M}_{\\mathrm{Dol}}, {\\mathbb{Q}})\n\\] \nhas strong perversity $k$ with respect to \n\\[\n\\mathbf{h}:= \\mathrm{id} \\times \\widehat{h}: C\\times \\widehat{M}_{\\mathrm{Dol}} \\to C\\times \\widehat{A}.\n\\]\n\n\n\nFinally, to deduce the cohomological statement, we note that since $\\mathbf{h}$ is trivial at the $C$-factor, its induced perverse filtration can be described as:\n\\begin{equation}\\label{2.2_1}\nP_kH^*(C \\times \\widehat{M}_{\\mathrm{Dol}}, {\\mathbb{Q}}) = H^*(C, {\\mathbb{Q}}) \\otimes P_kH^*(\\widehat{M}_{\\mathrm{Dol}}, {\\mathbb{Q}}).\n\\end{equation}\nChoose a homogeneous basis of $H^*(C, 
{\mathbb{Q}})$:\n\[\n\Sigma=\{\sigma_0, \sigma_1, \dots, \sigma_{2g+1}\}\n\]\nwith Poincar\'e-dual basis $\{\sigma^{\vee}_{0}, \dots, \sigma^\vee_{2g+1}\}$.\n\nWe may express $\mathrm{ch}_k({\mathcal U})$ as\n\begin{equation}\label{2.4_1}\n\mathrm{ch}_k({\mathcal U}) = \sum_{\sigma\in \Sigma} \sigma^\vee \otimes c_k(\sigma) \in H^*(C, {\mathbb{Q}}) \otimes H^*(\widehat{M}_{\mathrm{Dol}}, {\mathbb{Q}}).\n\end{equation}\nSince $\mathrm{ch}_k({\mathcal U})$ has strong perversity $k$, Lemma \ref{lem1.1} implies that\n\[\n\mathrm{ch}_k({\mathcal U}) \cup - : P_sH^*(C\times \widehat{M}_{\mathrm{Dol}}, {\mathbb{Q}}) \to P_{s+k}H^{*}(C\times \widehat{M}_{\mathrm{Dol}}, {\mathbb{Q}}).\n\]\nApplying this operator to $\sigma \otimes P_sH^*(\widehat{M}_{\mathrm{Dol}}, {\mathbb{Q}}) \subset H^*(C\times \widehat{M}_{\mathrm{Dol}}, {\mathbb{Q}})$ with $\sigma \in \Sigma$, we obtain from (\ref{2.2_1}) and (\ref{2.4_1}) that\n\[\nc_k(\sigma) \cup -: P_sH^*(\widehat{M}_{\mathrm{Dol}}, {\mathbb{Q}}) \to P_{s+k}H^*(\widehat{M}_{\mathrm{Dol}}, {\mathbb{Q}}). \n\]\nIn particular, for any class $\gamma \in H^*(C, {\mathbb{Q}})$ we have\n\[\nc_k(\gamma): P_sH^*(\widehat{M}_{\mathrm{Dol}}, {\mathbb{Q}}) \to P_{s+k}H^*(\widehat{M}_{\mathrm{Dol}}, {\mathbb{Q}})\]\nwhich further yields\n\[\n\prod_ic_{k_i}(\gamma_i) = \left( \prod_ic_{k_i}(\gamma_i) \right) \cup 1 \in P_{\sum_ik_i}H^*(\widehat{M}_{\mathrm{Dol}}, {\mathbb{Q}}).\n \]\n Here we used $1\in P_0H^0(\widehat{M}_{\mathrm{Dol}}, {\mathbb{Q}})$ in the last equation.\n\nThis completes the proof that Theorem \ref{conj2.7} implies Conjecture \ref{conj2.2}. \qed\n\n\n\n\n\section{Global Springer theory}\n\nIn this section, we review Yun's global Springer theory \cite{Yun1, Yun2, Yun3} and use it to deduce Theorem \ref{conj2.7}, assuming a support theorem (Theorem \ref{thm3.2}). 
Global Springer theory was previously used to study perverse filtrations for affine Springer fibers in terms of Chern classes \cite{OY, OY2}. In our setting, we require a partial extension of Yun's results from the elliptic locus to the entire Hitchin base.\n\n\subsection{Notations}\nWe fix ${\mathcal L}$ to be an effective line bundle of sufficiently large degree\n($\mathrm{deg}\left({\mathcal L}\right)> 2g$ is enough for us).\n Since from now on we are only concerned with the ${\mathcal L}$-twisted moduli spaces, we will use $M, \widecheck{M}$, and $\widehat{M}$ to denote the ${\mathcal L}$-twisted $\mathrm{GL}_n$, $\mathrm{SL}_n$, and $\mathrm{PGL}_n$ Dolbeault moduli spaces $M^{\mathcal L}_{\mathrm{Dol}}, \widecheck{M}^{\mathcal L}_{\mathrm{Dol}}$, and $\widehat{M}^{\mathcal L}_{\mathrm{Dol}}$ respectively. For the same reason, we will uniformly use the term \emph{Higgs bundles} to refer to ${\mathcal L}$-twisted Higgs bundles.\n\nFrom now on we let $G = \mathrm{PGL}_n$, $\mathfrak{g}$ its Lie algebra, $B \subset G$ the Borel subgroup of upper triangular matrices with Lie algebra $\mathfrak{b}$, $T\subset B$ the maximal torus given by diagonal matrices, and $W \simeq \mathfrak{S}_n$ the Weyl group. We denote by ${\mathbb{X}}^*(T)$ the character group of $T$; it is isomorphic to ${\mathbb{Z}}^{n-1}$ as an abelian group.\n\n\n\n\n\subsection{Parabolic moduli stacks}\n\nLet $\widehat{{\mathfrak{M}}}$ be the moduli stack of $G$-Higgs bundles on $C$; we do not impose any stability condition on $\widehat{{\mathfrak{M}}}$ so that it is only a (singular) algebraic stack. 
The stable locus of $\widehat{{\mathfrak{M}}}$ is a nonsingular Deligne--Mumford substack\n\[\n\widehat{M} \hookrightarrow \widehat{{\mathfrak{M}}}.\n\]\n\nYun's global Springer theory \cite{Yun1} constructs an algebraic stack $\widehat{{\mathfrak{M}}}^\mathrm{par}$ over $C \times \widehat{{\mathfrak{M}}}$:\n\begin{equation}\label{PI}\n\pi: \widehat{{\mathfrak{M}}}^\mathrm{par} \to C \times \widehat{{\mathfrak{M}}}\n\end{equation}\nwhich is a global analog of the Grothendieck simultaneous resolution. There are two equivalent constructions of $\widehat{{\mathfrak{M}}}^{\mathrm{par}}$ given in \cite[Section 2.1]{Yun1}. \n\n\nThe first is to construct $\widehat{{\mathfrak{M}}}^{\mathrm{par}}$ as the moduli stack of parabolic Higgs bundles, which are quadruples $(x, {\mathcal E}, \theta, {\mathcal E}_x^B)$, with $({\mathcal E}, \theta)$ a $G$-Higgs bundle, $x\in C$ a closed point, and ${\mathcal E}_x^B$ a $B$-reduction of ${\mathcal E}$ at $x$, satisfying the constraint that $\theta$ is compatible with ${\mathcal E}_x^B$; see \cite[Definition 2.1.1]{Yun1}. Then the morphism $\pi$ is given by forgetting the $B$-reduction:\n\[\n\pi (x, {\mathcal E}, \theta, {\mathcal E}_x^B) = (x, ({\mathcal E},\theta)) \in C \times \widehat{{\mathfrak{M}}}.\n\]\n\nThe second construction is via the Grothendieck simultaneous resolution $\pi_G: [\mathfrak{b}\/B] \to [\mathfrak{g}\/G]$. More precisely, let $\rho_{\mathcal L}$ be the ${\mathbb{G}}_m$-torsor over $C$ associated with the line bundle $\Omega_{\mathcal L}$. Denote by $[\mathfrak{g}\/G]_{\mathcal L}$ (resp. $[\mathfrak{b}\/B]_{\mathcal L}$) the family of $[\mathfrak{g}\/G]$ (resp. $[\mathfrak{b}\/B]$) over the curve $C$ twisted by the torsor $\rho_{\mathcal L}$. 
We have a tautological evaluation map\n\\begin{equation}\\label{evaluation}\n \\begin{tikzcd}[column sep=small]\n C\\times \\widehat{{\\mathfrak{M}}} \\arrow[dr, \"\"] \\arrow[rr, \"\\mathrm{ev}\"] & & {[\\mathfrak{g}\/G]_{\\mathcal L}} \\arrow[dl, \"\"] \\\\\n & C & \n\\end{tikzcd}\n\\end{equation}\nwhich is a natural $C$-morphism; after base change to a closed point $x\\in C$, the map $\\mathrm{ev}$ sends a $G$-Higgs bundle to the evaluation of its Higgs field at $x$. The morphism (\\ref{PI}) is then induced by the base change of the Grothendieck simultaneous resolution along the evaluation map (see \\cite[Lemma 2.1.2]{Yun1}):\n\\begin{equation*}\n\\begin{tikzcd}\n\\widehat{{\\mathfrak{M}}}^{\\mathrm{par}} \\arrow[r, \"\\mathrm{ev}^p\"] \\arrow[d, \"\\pi\"]\n&{[\\mathfrak{b}\/B]_{\\mathcal L}} \\arrow[d, \"\\pi_G\"] \\\\\nC\\times \\widehat{{\\mathfrak{M}}} \\arrow[r, \"\\mathrm{ev}\"]\n& {[\\mathfrak{g}\/G]_{\\mathcal L}}.\n\\end{tikzcd}\n\\end{equation*}\n\nThe parabolic Hitchin system\n\\[\n\\widehat{h}^p: \\widehat{{\\mathfrak{M}}}^{\\mathrm{par}} \\to C\\times \\widehat{A}\n\\]\nis the composition of $\\pi:\\widehat{{\\mathfrak{M}}}^{\\mathrm{par}} \\to C\\times \\widehat{{\\mathfrak{M}}}$ and the morphism $\\mathbf{h}: C\\times \\widehat{{\\mathfrak{M}}} \\to C\\times \\widehat{A}$\ninduced by the standard (stacky) Hitchin map $\\widehat{h}: \\widehat{{\\mathfrak{M}}} \\to \\widehat{A}$.\\footnote{Recall that we always use the ${\\mathcal L}$-twisted version.}\n\nIn general the moduli stack $\\widehat{{\\mathfrak{M}}}^{\\mathrm{par}}$ is singular, and the parabolic Hitchin map is not proper. In the next section we will impose a stability condition and then restrict to the stable locus. 
\n\n\\subsection{Stable loci}\n\nWe define the \\emph{stable locus} of the moduli of parabolic $G$-Higgs bundles to be\n\\[\n\\widehat{M}^{\\mathrm{par}}: = (C \\times \\widehat{M}) \\times_{C \\times \\widehat{{\\mathfrak{M}}}} \\widehat{{\\mathfrak{M}}}^{{\\mathrm{par}}};\n\\]\nequivalently, it fits into the Cartesian diagrams\n\\begin{equation}\\label{diag0}\n\\begin{tikzcd}\n\\widehat{M}^{\\mathrm{par}} \\arrow[r,hook] \\arrow[d, \"\\pi\"]& \\widehat{{\\mathfrak{M}}}^{\\mathrm{par}} \\arrow[r, \"\\mathrm{ev}^p\"] \\arrow[d, \"\\pi\"]\n&{[\\mathfrak{b}\/B]_{\\mathcal L}} \\arrow[d, \"\\pi_G\"] \\\\ C \\times \\widehat{M} \\arrow[r,hook]\n& C\\times \\widehat{{\\mathfrak{M}}} \\arrow[r, \"\\mathrm{ev}\"]\n& {[\\mathfrak{g}\/G]_{\\mathcal L}}.\n\\end{tikzcd}\n\\end{equation}\n\nWe use the same notation as for the stacky case to denote the parabolic Hitchin map restricted to the stable locus:\n\\begin{equation}\\label{parah}\n\\widehat{h}^p: \\widehat{M}^{\\mathrm{par}} \\to C\\times \\widehat{A}.\n\\end{equation}\n\n\\begin{prop}\\label{prop3.1}\n\\begin{enumerate}\n \\item[(i)] The moduli stack $\\widehat{M}^{\\mathrm{par}}$ is nonsingular and Deligne--Mumford; and\n \\item[(ii)] the parabolic Hitchin map (\\ref{parah}) is proper.\n\\end{enumerate}\n\\end{prop}\n\n\\begin{proof}\nThe Deligne--Mumford part of (i) follows from the proof of \\cite[Proposition 2.5.1 (3)]{Yun1}. Indeed, the left vertical arrow in the diagram (\\ref{diag0}) is the pullback of $\\pi_G$, therefore it is schematic and of finite type. This implies that $\\widehat{M}^{\\mathrm{par}}$ is Deligne--Mumford since $C \\times \\widehat{M}$ is Deligne--Mumford.\n\nTo prove the smoothness part of (i), we use the evaluation map (\\ref{evaluation}). Recall that \\cite[Proposition 4.1]{MS} (which was proven via deformation theory) shows that the $C$-morphism $\\mathrm{ev}$ is smooth after restricting over the stable locus $C \\times \\widehat{M}$. 
Hence, by the Cartesian diagrams (\ref{diag0}), the restricted evaluation map\n\[\n\mathrm{ev}^p: \widehat{M}^{\mathrm{par}} \to {[\mathfrak{b}\/B]_{\mathcal L}}\n\]\nis also smooth. Since the target is a nonsingular algebraic stack, so is the source.\n\n(ii) follows directly from (\ref{diag0}): the map $\pi$ is proper as a base change of the proper morphism $\pi_G$, and $\widehat{h}: \widehat{M} \to \widehat{A}$ is proper.\n\end{proof}\n\nAs a consequence of Proposition \ref{prop3.1}, the direct image complex\n\[\nR{\widehat{h}^p}_* ~{\mathbb{Q}}_{\widehat{M}^{\mathrm{par}}} \in D^b_c(C\times \widehat{A})\n\]\nsatisfies the decomposition theorem \cite{BBD}.\n\n\begin{thm}[Support theorem for parabolic Hitchin map]\label{thm3.2}\nThe decomposition theorem for the parabolic Hitchin map has full support, \emph{i.e.}, any non-trivial simple perverse summand of \[\n^\mathfrak{p}{\mathcal H}^i\left(R{\widehat{h}^p}_* ~{\mathbb{Q}}_{\widehat{M}^{\mathrm{par}}} \right), \quad \forall i\in {\mathbb{Z}}\]\nhas support $C\times \widehat{A}$.\n\end{thm}\n\nWe will postpone the proof of Theorem \ref{thm3.2} to Section 4.\n\n\n\subsection{Proof of Theorem \ref{conj2.7}}\n\nWe first prove Theorem \ref{conj2.7} assuming Theorem \ref{thm3.2}.\n\nThere are three ingredients of global Springer theory\n\[\n\pi: \widehat{M}^\mathrm{par} \to C \times \widehat{M}\n\]\nwhich are important in the proof of Theorem \ref{conj2.7}. We summarize them as follows.\n\begin{enumerate}\n \item[(A)] \emph{(Splitting of the universal $G$-bundle)} As explained in the second bullet point of \cite[Construction 6.1.4]{Yun1}, the Chern roots of $\pi^*{\mathcal U}$ are given by $c_1(L(\xi))$ where $L(\xi)$ is the tautological line bundle on $\widehat{M}^{\mathrm{par}}$ associated with $\xi \in {\mathbb{X}}^*(T)$. 
\n \n More precisely, $\pi^*{\mathcal U}$ is the universal $G$-bundle on $\widehat{M}^{\mathrm{par}}$ whose fiber over $(x,{\mathcal E},\theta, {\mathcal E}^B_x)$ is ${\mathcal E}_x$; the $B$-reduction ${\mathcal E}_x^B$ yields a $T$-torsor over $\widehat{M}^{\mathrm{par}}$ which induces for each $\xi \in {\mathbb{X}}^*(T)$ a line bundle $L(\xi)$.\n\n \n \item[(B)] \emph{(Strong perversity for $c_1(L(\xi))$)} By \cite[Lemma 3.2.3]{Yun2}, there exists a Zariski dense open subset of $C\times \widehat{A}$ over which the operator\n \[\nc_1(L(\xi)): R\widehat{h}^p_*{\mathbb{Q}}_{\widehat{M}^{\mathrm{par}}} \to R\widehat{h}^p_*{\mathbb{Q}}_{\widehat{M}^{\mathrm{par}}}[2] \]\nhas strong perversity $1$ for any $\xi \in {\mathbb{X}}^*(T)$. \nSince $c_1(L(\xi))$ automatically has strong perversity $2$, showing it has strong perversity $1$ is equivalent to showing\nthe induced morphism of perverse cohomology sheaves\n\[\n{}^\mathfrak{p}{\mathcal H}^i\left( R\widehat{h}^p_*{\mathbb{Q}}_{\widehat{M}^{\mathrm{par}}}\right)\rightarrow\n{}^\mathfrak{p}{\mathcal H}^i\left(R\widehat{h}^p_*{\mathbb{Q}}_{\widehat{M}^{\mathrm{par}}}[2]\right)\n\]\nvanishes for each $i$. Once we have Theorem \ref{thm3.2}, these sheaves have full support over the entire base, so this vanishing (and thus strong perversity $1$) extends over the total base $C \times \widehat{A}$ as well. 
\nIn fact, \\cite[Lemma 3.2.3]{Yun2} was proven in exactly this way using a support theorem \\cite[Section 4.6.2]{Yun2} for the elliptic locus.\n\n \n \\item[(C)] \\emph{(Springer's Weyl group action)} By the Cartesian diagrams (\\ref{diag0}), we may pullback Springer's sheaf-theoretic Weyl group action from the Grothendieck simultaneous resolution; in particular, the object $R\\pi_* {\\mathbb{Q}}_{\\widehat{M}^{\\mathrm{par}}}$ admits a canonical $W$-action whose invariant part recovers the trivial local system\n \\[\n\\left(R\\pi_* {\\mathbb{Q}}_{\\widehat{M}^{\\mathrm{par}}} \\right)^W = {\\mathbb{Q}}_{C\\times \\widehat{M}}. \\]\nBy taking global cohomology we have\n\\[\nH^*(\\widehat{M}^{\\mathrm{par}}, {\\mathbb{Q}}) = H^*(C\\times \\widehat{M}, {\\mathbb{Q}}) \\oplus \\left(\\mathrm{variant~~part~~of~~} W \\right).\n\\]\n\\end{enumerate}\n\nNow we prove Theorem \\ref{conj2.7}. We first prove its parabolic version.\n\n\\medskip\n\\noindent {\\bf Claim.} The class\n\\[\n\\mathrm{ch}_k\\left(\\pi^* {\\mathcal U}\\right) \\in H^{2k}(\\widehat{M}^{\\mathrm{par}}, {\\mathbb{Q}})\n\\]\nhas strong perversity $k$ with respect to the parabolic Hitchin system (\\ref{parah}).\n\n\\begin{proof}[Proof of Claim]\nBy (A), the Chern character $\\mathrm{ch}_k(\\pi^*{\\mathcal U})$ can be expressed in terms of $c_1(L(\\xi))$. 
Hence the claim follows from Lemma \\ref{lem1.2} and the strong perversity of $c_1(L(\\xi))$ given by (B).\n\\end{proof}\n\n\nNext, we reduce Theorem \\ref{conj2.7} to the claim above via the following lemma.\n\n\\begin{lem}\nFor a class $\\gamma \\in H^l(C \\times \\widehat{M}, {\\mathbb{Q}})$, if $\\pi^*\\gamma \\in H^l(\\widehat{M}^{\\mathrm{par}}, {\\mathbb{Q}})$ has strong perversity $k$ with respect to $\\widehat{h}^p: \\widehat{M}^{\\mathrm{par}} \\to C\\times \\widehat{A}$, then $\\gamma$ has strong perversity $k$ with respect to ${\\mathbf{h}} : C\\times \\widehat{M} \\to C\\times \\widehat{A}$.\n\\end{lem}\n\n\\begin{proof}\n This is a consequence of (C). We write the class $\\gamma$ as a map \n \\begin{equation}\\label{3.3_1}\n\\gamma: {\\mathbb{Q}}_{C\\times \\widehat{M}} \\to {\\mathbb{Q}}_{C\\times \\widehat{M}}[l], \n\\end{equation}\nwhose pullback \n\\begin{equation}\\label{3.3_2}\n\\pi^*\\gamma: {\\mathbb{Q}}_{\\widehat{M}^{\\mathrm{par}}} \\to {\\mathbb{Q}}_{\\widehat{M}^{\\mathrm{par}}}[l]\n\\end{equation}\nrecovers the class $\\pi^*\\gamma \\in H^l(\\widehat{M}^{\\mathrm{par}}, {\\mathbb{Q}})$. 
By the projection formula, the derived pushforward $R\pi_*(\pi^*\gamma)$ is the action of $\gamma$ on $R\pi_* {\mathbb{Q}}_{\widehat{M}^{\mathrm{par}}}$; hence we may recover the morphism (\ref{3.3_1}) from (\ref{3.3_2}) by pushing forward to $C \times \widehat{M}$ and taking the $W$-invariant part.\n\nNow by the assumption we know that the action of $\pi^*\gamma$ on the object $R\widehat{h}^p_*{\mathbb{Q}}_{\widehat{M}^{\mathrm{par}}}$ satisfies\n\begin{equation}\label{3.3_4}\n\pi^*\gamma: {^\mathfrak{p}\tau_{\leq i}}R\widehat{h}^p_*{\mathbb{Q}}_{\widehat{M}^{\mathrm{par}}} \to {^\mathfrak{p}\tau_{\leq i+(k-l)}}\left(R\widehat{h}^p_*{\mathbb{Q}}_{\widehat{M}^{\mathrm{par}}}[l]\right).\n\end{equation}\nSince we have\n\[\nR\widehat{h}^p_*{\mathbb{Q}}_{\widehat{M}^{\mathrm{par}}} = R\mathbf{h}_*\left(R\pi_* {\mathbb{Q}}_{\widehat{M}^{\mathrm{par}}}\right),\n\] \nthe ingredient (C) produces a natural $W$-action on this object whose invariant part recovers $R\mathbf{h}_*{\mathbb{Q}}_{C\times \widehat{M}}$; the operator\n\begin{equation}\label{3.3_3}\n\gamma: R\mathbf{h}_*{\mathbb{Q}}_{C\times \widehat{M}} \to R\mathbf{h}_*{\mathbb{Q}}_{C\times \widehat{M}}[l] \n\end{equation}\nis then recovered from the $W$-invariant part of the action of $\pi^*\gamma$ on $R\widehat{h}^p_*{\mathbb{Q}}_{\widehat{M}^{\mathrm{par}}}$. In particular, the desired property concerning (\ref{3.3_3}) follows from the $W$-invariant part of (\ref{3.3_4}).\n\end{proof}\n\n\n\n\nThus we have completed the proof of Theorem \ref{conj2.7}. \qed\n\n\n\n\n\n\n\section{Parabolic support theorem}\n\nIn this final section, we prove Theorem \ref{thm3.2}. This is the parabolic version of the Chaudouard--Laumon support theorem \cite{CL}. 
For the proof, we ultimately reduce it to a relative dimension bound which we establish in Section \ref{sec4.4}.\n\n\subsection{Review of support theorems}\label{Sec4.1}\n\nWe start with a review of support theorems for Hitchin systems.\n\nFor a proper morphism $f: X\to Y$ with $X,Y$ nonsingular Deligne--Mumford stacks, the decomposition theorem \cite{BBD} yields\n\[\nRf_* {\mathbb{Q}}_X \simeq \bigoplus_i {^\mathfrak{p}{\mathcal H}^i(Rf_* {\mathbb{Q}}_X)[-i]}\n\]\nwith ${^\mathfrak{p}{\mathcal H}^i}(Rf_* {\mathbb{Q}}_X)$ semisimple perverse sheaves on $Y$; we say that a closed subset $Z \subset Y$ is a support of $f$ if it is a support of a simple summand of some ${^\mathfrak{p}{\mathcal H}^i}(Rf_* {\mathbb{Q}}_X)$. A particularly interesting case is when $f$ has full support, that is, $Y$ is the only support of $f$; in this case the cohomology of any closed fiber of $f$ is governed by the nonsingular fibers.\n\nThe study of supports for Hitchin systems was initiated by B.C. Ng\^o, and is crucial in his proof of the fundamental lemma of the Langlands program \cite{Ngo}. He determined all the supports for the Hitchin system (including the ${\mathcal L}$-twisted cases) after restricting to the elliptic locus of the Hitchin base, that is, the subset parameterizing integral spectral curves.\n\nAfter Ng\^o's work, Chaudouard--Laumon \cite{CL} observed that, if we consider the moduli space of ${\mathcal L}$-twisted $\mathrm{GL}_n$ stable Higgs bundles with $\mathrm{deg}({\mathcal L}) >0$, then Ng\^o's support theorem can be extended to the total Hitchin base; in particular they showed that the ${\mathcal L}$-twisted $\mathrm{GL}_n$ Hitchin system has full support. Chaudouard--Laumon's idea was extended to the $\mathrm{SL}_n$ case \cite{dC_SL}, the endoscopic moduli spaces \cite{MS}, and singular cases involving strictly semistable Higgs bundles \cite{MS2, MS3}. 
See also \\cite{dCHM2, MM} concerning the supports over the open subset of reduced spectral curves for the untwisted (\\emph{i.e.} ${\\mathcal L} = 0$) Hitchin system.\n\nThe idea of Chaudouard--Laumon \\cite{CL} is to show that each support of the ${\\mathcal L}$-twisted Hitchin system has generic point lying in the elliptic locus; this is achieved by combining two constraints: (I) the support inequality for a weak abelian fibration, and (II) $\\delta$-regularity for spectral curves. We describe them in more detail.\n\\begin{enumerate}\n \\item[(I)] \\emph{(Support inequaltiy)} The Hitchin system admits the structure of a weak abelian fibration, that is, there is a commutative group scheme $P$ over the Hitchin base acting on the moduli space which satisfies certain properties. Then an argument generalizing the Goresky--MacPherson inequality leads to a codimension estimate for the supports. More concretely, it says that any support $Z$ has codimension bounded above by the $\\delta$-function of the group $P$. This part was already carried out in Ng\\^o \\cite[Section 7]{Ngo}; see \\cite[Section 1]{MS2} for a summary.\n \\item[(II)] \\emph{($\\delta$-regularity)} In the case of type $A$ and for the stable locus, the group scheme is obtained from the multi-degree 0 relative Picard (or, for $\\mathrm{SL}_n$, Prym) variety associated with the family of spectral curves. Then we have the Severi inequality, referred to as \\emph{$\\delta$-regularity}, for the spectral curves. We refer to \\cite[Theorem 5.4.4]{dC_SL} for the precise statement.\n\\end{enumerate}\n\nUsing (I) and (II), one can deduce that no support is allowed to have generic point lying outside the open subset of integral curves; this was explained in \\cite[Section 11]{CL} for $\\mathrm{GL}_n$, in \\cite[Section 6.2]{dC_SL} for $\\mathrm{SL}_n$, and in \\cite[Section 4.5]{MS2} for a general complete linear system in a del Pezzo surface. 
The argument combines the two inequalities above to deduce a numerical contradiction from any support appearing outside the elliptic locus.\n\n\n\subsection{Parabolic Hitchin systems}\nNow we focus on the ${\mathcal L}$-twisted parabolic Hitchin system (\ref{parah}). Let $\widehat{A}^{\mathrm{ell}} \subset \widehat{A}$ be the open subset parameterizing integral spectral curves (the elliptic locus). Following Ng\^o's method, Yun \cite{Yun2, Yun3} proved a parabolic support theorem over $C \times \widehat{A}^{\mathrm{ell}}$ and determined all the supports of (\ref{parah}) in $C \times \widehat{A}^\mathrm{ell}$. More precisely, by \cite[Section 2]{Yun3} any strict closed subset of $C \times \widehat{A}^{\mathrm{ell}}$ that is a support of $\widehat{h}^{p}|_{C \times \widehat{A}^{\mathrm{ell}}}$ is a component of the \emph{endoscopic loci} governed by the endoscopic theory of $G$. As we only consider the special case $G = \mathrm{PGL}_n$, there are no nontrivial endoscopic loci and the restricted Hitchin map $\widehat{h}^{p}|_{C \times \widehat{A}^{\mathrm{ell}}}$ has full support.\n\nTo prove Theorem \ref{thm3.2}, it suffices to show that there is no support of (\ref{parah}) whose generic point lies outside $C \times \widehat{A}^{\mathrm{ell}}$.\n\nSince stability for a $\mathrm{PGL}_n$ Higgs bundle is described by its corresponding vector bundle (see (\ref{233})), it is more convenient to work with the $\mathrm{SL}_n$ moduli spaces. We consider the following Cartesian diagram\n\begin{equation}\label{diag321}\n\begin{tikzcd}\n\widecheck{M}^{\mathrm{par}} \arrow[r, \"(-)\/\Gamma\"] \arrow[d, \"\pi'\"]\n& \widehat{M}^{\mathrm{par}} \arrow[d, \"\pi\"] \\\nC\times \widecheck{M} \arrow[r, \"(-)\/\Gamma\"]\n& C \times \widehat{M}.\n\end{tikzcd}\n\end{equation}\nHere the horizontal maps are given by the natural quotient maps by $\Gamma = \mathrm{Pic}^0(C)[n]$. 
To describe the map $\pi'$ on the left side of the diagram (see \cite[Example 2.2.5]{Yun1}), we recall that $\widecheck{M}$ parameterizes traceless Higgs bundles with fixed determinant \n\[\n({\mathcal E}, \theta) , \quad \theta: {\mathcal E} \to {\mathcal E}\otimes \Omega_{\mathcal L}, \quad \mathrm{rk}({\mathcal E}) = n, ~~\mathrm{det}({\mathcal E}) \simeq {\mathcal N},~~ \mathrm{trace}(\theta) = 0\n\]\nwith respect to slope stability. Similarly $\widecheck{M}^{\mathrm{par}}$ parameterizes \n\[\n(x, {\mathcal E} = {\mathcal E}_0 \supset {\mathcal E}_1 \supset \cdots \supset {\mathcal E}_n = {\mathcal E}_0(-x), \theta)\n\]\nwhere $x\in C$, $({\mathcal E}, \theta) \in \widecheck{M}$, each ${\mathcal E}_i$ in the flag is of rank $n$ with ${\mathcal E}_i\/{\mathcal E}_{i+1}$ a length-1 skyscraper sheaf supported at $x$, the Higgs field preserves the flag $\theta({\mathcal E}_i) \subset {\mathcal E}_i\otimes \Omega_{\mathcal L}$, and the map $\pi'$ in the diagram is the forgetful map. We note that the stability condition for a parabolic Higgs bundle is determined by the stability of the underlying Higgs bundle. \n\nFollowing \cite[Example 2.2.5]{Yun1} we may also describe $\widecheck{M}^{\mathrm{par}}$ via spectral curves. 
If we present a point in $\widecheck{M}$ as a 1-dimensional sheaf ${\mathcal F}_\alpha$ supported on a spectral curve $C_\alpha \subset \mathrm{Tot}(\Omega_{\mathcal L})$ with $p_\alpha: C_\alpha \to C$ the projection, then for any $x\in C$, a closed point in the fiber $\pi'^{-1}(x, {\mathcal F}_\alpha)$ is represented by\n\[\n{\mathcal F}_\alpha = {\mathcal F}_0 \supset {\mathcal F}_1 \supset \cdots \supset {\mathcal F}_n = {\mathcal F}_0\otimes p_\alpha^* {\mathcal O}_C(-x), \quad \mathrm{length}({\mathcal F}_i\/{\mathcal F}_{i+1}) = 1;\n\]\nsee \cite[(2.4)]{Yun1}.\n\nNow we consider the $\mathrm{SL}_n$ parabolic Hitchin system\n\begin{equation}\label{SLh}\n\widecheck{h}^p: \widecheck{M}^{\mathrm{par}} \to C\times \widecheck{M} \to C\times \widehat{A}.\n\end{equation}\nFrom the diagram (\ref{diag321}), it recovers the $\mathrm{PGL}_n$ Hitchin system $\widehat{h}^p: \widehat{M}^{\mathrm{par}} \to C \times \widehat{A}$ by taking the quotient of the source by $\Gamma$. In particular, we have\n\[\nR{\widehat{h}^p}_* ~{\mathbb{Q}}_{\widehat{M}^{\mathrm{par}}} = \left(R{\widecheck{h}^p}_* ~{\mathbb{Q}}_{\widecheck{M}^{\mathrm{par}}}\right)^\Gamma \in D_c^b(C\times \widehat{A}).\n\]\nHence in order to prove Theorem \ref{thm3.2}, it suffices to show that there is no support of (\ref{SLh}) with generic point lying outside $C \times \widehat{A}^{\mathrm{ell}}$. In the next two sections, we adapt the strategy of Chaudouard--Laumon \cite{CL} and de Cataldo \cite{dC_SL} (for the $\mathrm{SL}_n$-version) to this parabolic setting, and verify the parabolic analogs of (I) and (II) of Section \ref{Sec4.1}. \n\n\n\n\n\subsection{Weak abelian fibrations and $\delta$-regularity}\n\nA general setup for support theorems was given in \cite[Theorem 1.8]{MS2}. 
We now check that all the assumptions there are satisfied by (\\ref{SLh}).\n\n\nWe first construct the $(C\\times \\widehat{A})$-group scheme $P$ which acts on $\\widecheck{M}^{\\mathrm{par}}$. Recall that in \\cite{dC_SL} de Cataldo showed that $\\widecheck{M}$ admits a weak abelian fibration structure $(\\widecheck{P}, \\widecheck{M}, \\widehat{A})$.\\footnote{We refer to \\cite[Section 1]{MS2} for an introduction to weak abelian fibrations.} Here the $\\widehat{A}$-group scheme $\\widecheck{P}$ which acts on $\\widecheck{M}$ is given by the identity component of the relative Prym variety associated with the spectral curves \\cite[(43)]{dC_SL}. For a spectral curve $p_\\alpha: C_\\alpha \\to C$ with a Higgs bundle given by ${\\mathcal F}_\\alpha$ supported on $C_\\alpha$, the Prym action is induced by tensor product\n\\[\n{\\mathcal Q} \\cdot {\\mathcal F}_{\\alpha} = {\\mathcal Q} \\otimes {\\mathcal F}_\\alpha, \\quad {\\mathcal Q} \\in \\mathrm{Prym}^0(C_\\alpha\/C) = \\widecheck{P}_{[C_\\alpha]},~ \\quad [C_\\alpha] \\in \\widehat{A}. \n\\]\nWe denote by $P$ the $(C\\times \\widehat{A})$-group scheme obtained by the pullback of $\\widecheck{P}$. 
The $P$-action on $C\\times \\widecheck{M}$ can be lifted to $\\widecheck{M}^{\\mathrm{par}}$:\n\\[\n{\\mathcal Q} \\cdot (x, {\\mathcal F}_\\alpha = {\\mathcal F}_0 \\supset {\\mathcal F}_1 \\supset \\cdots \\supset {\\mathcal F}_n) = (x, {\\mathcal Q}\\otimes{\\mathcal F}_\\alpha = {\\mathcal Q} \\otimes {\\mathcal F}_0 \\supset {\\mathcal Q}\\otimes{\\mathcal F}_1 \\supset \\cdots \\supset {\\mathcal Q} \\otimes {\\mathcal F}_n),\n\\] \nsince the stability condition does not depend on the flag.\n\n\n\\begin{prop}\nThe triple $(P, \\widecheck{M}^{\\mathrm{par}}, C\\times \\widehat{A})$ forms a weak abelian fibration, \\emph{i.e.}, it satisfies (i), (ii), and (iii) of \\cite[Section 1.1]{MS2}.\n\\end{prop}\n\n\\begin{proof}\n (i) is clear from (\\ref{diag0}), and (iii) only depends on the group scheme $P$, so it follows from the corresponding property for $\\widecheck{P}$ proved in \\cite[Theorem 4.7.2]{dC_SL}. To prove (ii): since the $P$-action on $\\widecheck{M}^{\\mathrm{par}}$ is a lifting of the $P$-action on $C\\times \\widecheck{M}$, and the latter has affine stabilizers by \\cite{dC_SL}, the $P$-stabilizer of any point $z \\in \\widecheck{M}^{\\mathrm{par}}$ is contained in the (affine) $P$-stabilizer of $\\pi(z) \\in C \\times \\widecheck{M}$. This proves (ii).\n\\end{proof}\n\n\n\n\nFrom this proposition, in order to show the support inequality ((I) of Section \\ref{Sec4.1}) for $(P, \\widecheck{M}^{\\mathrm{par}}, C\\times \\widehat{A})$, by \\cite[Theorem 1.8]{MS2}, it suffices to establish the relative dimension bound\n\\begin{equation}\\label{relative}\n \\tau_{> 2\\mathbf{d}}\\left( R\\widecheck{h}^p_* {\\mathbb{Q}}_{\\widecheck{M}^{\\mathrm{par}}}\\right) = 0, \\quad \\mathbf{d} := \\mathrm{dim} \\widecheck{M}^{\\mathrm{par}} - \\mathrm{dim} (C\\times \\widehat{A})\n\\end{equation}\nwith $\\tau_{>*}$ the standard truncation functor. 
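Concretely, the vanishing (\\ref{relative}) amounts to a fiber dimension bound: since $\\widecheck{h}^p$ is proper, the stalk of the $k$-th derived pushforward at a closed point $z$ computes $H^k$ of the fiber over $z$, which vanishes once $k$ exceeds twice the fiber dimension. It therefore suffices to prove (this is a standard reformulation, spelled out here for orientation):

```latex
\\[
\\dim \\left(\\widecheck{h}^p\\right)^{-1}(z) \\,\\leq\\, \\mathbf{d}
\\qquad \\text{for every closed point } z \\in C\\times \\widehat{A}.
\\]
```

This is the form in which the bound is verified in Section \\ref{sec4.4}.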
\n\nOnce we have (\\ref{relative}),\nwe can combine the support inequality with the $\\delta$-regularity of \n$\\widecheck{P}$ \\cite[Corollary 5.4.4]{dC_SL} to prove \nTheorem \\ref{thm3.2} as follows. If $Z \\subset C \\times \\widehat{A}$ is an irreducible support of $\\widecheck{h}^p$, the projection $W = \\mathrm{pr}_{\\widehat{A}}(Z) \\subset \\widehat{A}$ also satisfies the support inequality\n\\[\n\\mathrm{codim}_{\\widehat{A}} W \\leq \\mathrm{codim}_{C\\times \\widehat{A}}Z \\leq \\delta_Z = \\delta_W.\n\\]\nHere we used in the second inequality the support inequality for $\\widecheck{h}^p$, and in the last equality that the $\\delta$-function on $C\\times \\widehat{A}$ is pulled back from $\\widehat{A}$. The argument of \\cite[Section 6.2]{dC_SL} then shows that the generic point of $W$ lies in $\\widehat{A}^{\\mathrm{ell}}$. Hence the generic point of $Z$ lies in $C\\times \\widehat{A}^{\\mathrm{ell}}$, as desired.\n\n\n\n\nIn conclusion, this reduces Theorem \\ref{thm3.2} to (\\ref{relative}).\n\n\n\n\\subsection{Proof of the relative dimension bound (\\ref{relative})}\\label{sec4.4}\n\nWe need to show that each closed fiber of the morphism\n$$\\widecheck{h}^p: \\widecheck{M}^{\\mathrm{par}} \\rightarrow C\\times \\widehat{A}$$\nhas dimension at most\n\\[\n\\mathbf{d} = \\mathrm{dim}\\widecheck{M}^{\\mathrm{par}} - \\mathrm{dim} (C\\times \\widehat{A}) = \\frac{n(n-1)}{2}\\deg({\\mathcal L}) + (n^2-1)(g-1).\\]\nHere the last equality follows directly from the dimension formula of \\cite[Proposition 2.4.6]{dC_SL}:\n\\[\n\\mathbf{d} = \\mathrm{dim}\\widecheck{M} - \\mathrm{dim}\\widehat{A} = \\frac{n(n-1)}{2} \\deg({\\mathcal L}) + (n^2-1)(g-1).\n\\]\n\n\n\nFirst, we note that the morphism $\\widecheck{h}^p$ is surjective since the usual Hitchin map $\\widecheck{h}: \\widecheck{M} \\to \\widehat{A}$ is surjective. 
Furthermore, as $\\widecheck{h}^p$ is equivariant with respect to the scaling ${\\mathbb{G}}_m$-action on the Higgs fields, it suffices to bound from above the dimension of the fiber over $(x, 0) \\in C\\times \\widehat{A}$ for each point $x \\in C$ by $\\mathbf{d}$, since upper semicontinuity then forces all other fibers to have dimension at most $\\mathbf{d}$ as well. We fix the point $x$ from now on.\n\nUsing the assumption $\\deg({\\mathcal L})>2g$, we may express ${\\mathcal L}$ as ${\\mathcal O}_C(D)$ with $D = x_0 + x_1 + \\dots + x_t$ an effective reduced divisor containing $x = x_0$. In particular, a Higgs bundle $({\\mathcal E}, \\theta: {\\mathcal E} \\to {\\mathcal E}\\otimes \\Omega_{\\mathcal L})$ can be viewed as a meromorphic (un-twisted) Higgs bundle with at most simple poles along $D$:\n\\[\n({\\mathcal E}, \\theta), \\quad \\theta: {\\mathcal E} \\to {\\mathcal E} \\otimes \\Omega_C^1(D).\n\\]\nWe will control the fiber dimension using an analogous bound for the nilpotent cone for \\emph{strongly parabolic} Higgs bundles for the pair $(C,D)$.\n\nRecall that, given $D$ as above, a strongly parabolic Higgs bundle consists of an $\\mathrm{SL}_n$ Higgs bundle of degree $d$:\n\\[\n({\\mathcal E}, \\theta), \\quad \\theta: {\\mathcal E}\\to {\\mathcal E} \\otimes \\Omega^1_C(D),~~~\\mathrm{rk}({\\mathcal E}) =n,~~\\mathrm{det}({\\mathcal E})\\simeq {\\mathcal N}, ~~\\mathrm{trace}(\\theta) = 0,\n\\]\nalong with a flag at each point $x_i \\in D$:\n\\[\n{\\mathcal E}|_{x_i}=F_{x_i,0} \\supset F_{x_i,1} \\supset F_{x_i,2} \\supset \\cdots \\supset F_{x_i,n} = 0, \\quad \\mathrm{dim} F_{x_i,j}\/F_{x_i, j+1} = 1,\n\\]\nsuch that the Higgs field satisfies $\\theta(F_{x_i, j}) \\subset F_{x_i, j+1}$.\\footnote{One may compare this with the (non-strong) parabolic condition $\\theta(F_{x,j}) \\subset F_{x,j}$ we used earlier in this paper for the global Springer theory.}\n\n\n\n\nLet $\\widecheck{M}^{\\mathrm{spar}}(D)$ denote the moduli space of strongly parabolic Higgs bundles 
associated with the pair $(C,D)$, such that the underlying twisted Higgs bundle is stable. Let\n$$A(D) = \\bigoplus_{i=2}^{n} H^0(C, \\Omega_{\\mathcal L}^{\\otimes i}(-D)) \\subset \\widehat{A}$$\ndenote the linear subspace consisting of sections which vanish along the points of $D$. There is a Hitchin map for the strongly parabolic Higgs moduli space\n$$\\widecheck{h}^{\\mathrm{spar}}:\\widecheck{M}^{\\mathrm{spar}}(D) \\rightarrow A(D)$$\nwhich is proper, surjective, and Lagrangian with respect to a natural holomorphic symplectic form \\cite{Faltings}. In particular, the dimension of the zero fiber is given by\n\\begin{align*}\n\\dim \\widecheck{M}^{\\mathrm{spar}}(D)_0 = \\dim A(D) &= \\sum_{j=2}^{n} \\left((j-1)\\deg(\\Omega_{\\mathcal L}) +g-1\\right)\\\\\n&= \\frac{n(n-1)}{2}\\deg ({\\mathcal L}) + (n^2-1)(g-1).\n\\end{align*}\nSee also \\cite[Theorem 6.9]{SWW} for a direct proof of the above dimension formula.\\footnote{The formula above is $g$ less than the dimension formula obtained in \\cite[Theorem 6.9]{SWW}, since we consider the $\\mathrm{SL}_n$ case where we fix the determinant on $C$.}\n\nWe now consider the following diagram relating the two types of parabolic Higgs moduli spaces $\\widecheck{M}^{\\mathrm{spar}}(D)$ and \n$\\widecheck{M}^{\\mathrm{par}}$:\n\n\\begin{equation}\\label{diagram}\n\\begin{tikzcd}\n\\widecheck{M}^{\\mathrm{spar}}(D)_0 \\arrow{d}{q_0}\\arrow[r,hook]\n&\\widecheck{M}^{\\mathrm{spar}}(D)\\arrow{d}{q}\n& \n\\\\\n\\widecheck{M}^{\\mathrm{par}}_{(x,0)}\\arrow{d}\\arrow[r, hook]&\n{\\widecheck{M}^{\\mathrm{par}}}|_{x \\times A(D)} \\arrow{d}\\arrow[r,hook]&\n\\widecheck{M}^{\\mathrm{par}}\\arrow[d,\"\\widecheck{h}^p\"]\\\\\n\\{(x,0)\\} \\arrow[r, hook] & x \\times A(D) \\arrow[r, hook] & C\\times \\widehat{A}\n\\end{tikzcd}\n\\end{equation}\nwhere all squares are Cartesian. \n\nWe first claim that the natural map $q$ (sending a strongly parabolic Higgs bundle to a parabolic Higgs bundle) is surjective. 
Indeed, given a point $z \\in \\widecheck{M}^{\\mathrm{par}}$ lying over $C\\times {A}(D)$,\nchoosing a point in its preimage $q^{-1}(z)$ only consists of fixing a flag at each point of $x_i$ with $i>0$, preserved by the Higgs field --- since the characteristic polynomial already has zero roots at all points of $D$, the Higgs field is automatically strongly parabolic.\n\n\nBy base change, this implies that $q_0$ is surjective as well, so we get the desired dimension upper bound\n\\[\n\\dim {\\widecheck{M}^{\\mathrm{par}}}_{(x,0)} \\leq \\dim \\widecheck{M}^{\\mathrm{spar}}(D)_0 = \\frac{n(n-1)}{2}\\deg ({\\mathcal L}) + (n^2-1)(g-1),\n\\]\nwhich completes the proof. \\qed\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\\begin{figure}[!ht]\n \\centering\n \\begin{subfigure}{1.0\\columnwidth}\n \\includegraphics[draft=false,trim={0 200px 0 200px},clip,width=\\columnwidth]{figures\/smoke_stable0027.jpeg}\n \\end{subfigure} \\\\\n \\begin{subfigure}{1.0\\columnwidth}\n \\includegraphics[draft=false,trim={0 200px 0 200px},clip,width=\\columnwidth]{figures\/smoke_stable0126.jpeg}\n \\end{subfigure} \\\\\n \\begin{subfigure}{1.0\\columnwidth}\n \\includegraphics[draft=false,trim={0 200px 0 200px},clip,width=\\columnwidth]{figures\/smoke_stable0260.jpeg}\n \\end{subfigure}\n \n\\caption{{\\textbf{Colorful smoke jets}}. Multicolored jets of smoke are simulated with BSLQB. Intricate mixing is induced as the flows collide at the spherical boundary.}\\label{fig:rainbowsmoke}\n\\end{figure}\n\nWhether it be billowing smoke, energetic explosions, or breaking waves, simulation of incompressible flow is an indispensable tool for modern visual effects. Ever since the pioneering works of Foster and Metaxas \\shortcite{foster:1996:liquids}, Stam \\shortcite{stam:1999:stable} and Fedkiw et al. 
\\shortcite{fedkiw:2001:visual,foster:2001:practical}, the Chorin \\shortcite{chorin:1967:numerical} splitting of advective and pressure projection terms has been the standard in computer graphics applications \\cite{bridson:2008:fluid-simulation}. Most techniques use regular grids of Marker-And-Cell (MAC) \\cite{harlow:1965:viscous-flow} type with pressure and velocity components staggered at cell centers and faces respectively. Furthermore, advection is most often discretized using semi-Lagrangian techniques originally developed in the atmospheric sciences \\cite{stam:1999:stable,robert:1981:stable}. Although well-established, these techniques are not without their drawbacks. For example, the staggering utilized in the MAC grids is cumbersome since variables effectively live on four different grids. This can complicate many algorithms related to incompressible flow. E.g. Particle-In-Cell (PIC) \\cite{harlow:1964:pic} techniques like FLIP \\cite{brackbill:1986:flip-pic,zhu:2005:sand-fluid}, Affine\/Polynomial Particle-In-Cell (APIC\/PolyPIC) \\cite{jiang:2015:apic,fu:2017:poly} and the Material Point Method (MPM) \\cite{sulsky:1994:history-materials,stomakhin:2014:augmented-mpm} must transfer information separately to and from each individual grid. Similarly, semi-Lagrangian techniques must separately solve for upwind locations at points on each of the velocity component grids. Moreover, while semi-Lagrangian techniques are renowned for the large time steps they admit (notably larger than the Courant-Friedrichs-Lewy (CFL) condition), their inherent stability is plagued by dissipation that must be removed for most visual effects phenomena. Another limitation of the MAC grid arises with free-surface water simulation. In this case, the staggering prevents many velocity components near the fluid free surface from receiving a correction during projection (see e.g. \\cite{bridson:2008:fluid-simulation}). 
Each of these velocity components must then be separately extrapolated from the interior to receive a pressure correction. \\\\\n\\\\\nMAC grids are useful because the staggering prevents pressure null modes while allowing for accurate second order central differencing in discrete grad\/div operators. However, there are alternatives in the computational physics literature. Many mixed Finite Element Method (FEM) techniques use collocated velocities \\cite{hughes:2000:book} without suffering from pressure mode instabilities. For example, Taylor-Hood elements \\cite{taylor:1973:TH} use collocated multi-quadratic velocity interpolation and multilinear pressure interpolation to enforce incompressibility. Recently, B-spline interpolation \\cite{deboor:1978:splines} has been used with Taylor-Hood \\cite{bressan:2010:isogeometric}. We build on this work and develop an approach based on collocated multi-quadratic B-spline interpolation for velocities. This choice is motivated by the simplicity of collocated grids compared to staggering, but also by the ease of attaining continuous derivatives with B-spline interpolation. For example, this interpolation is often chosen in MPM applications since $C^1$ interpolation is essential for stability \\cite{steffen:2008:analysis}. In the context of fluids, we show that this allows for extremely stable and accurate advection.\n\\\\\n\\\\\nWe develop a new approach for Chorin splitting \\shortcite{chorin:1967:numerical} based on the collocated multiquadratic B-spline velocity, multilinear pressure Taylor-Hood element \\cite{bressan:2010:isogeometric}. However, unlike the fully collocated technique of Bressan \\shortcite{bressan:2010:isogeometric}, we stagger pressures on the nodes of the grid and velocities at cell centers as in \\cite{ando:2013:surfacing}, since this reduces coupling in the pressure projection system and naturally accommodates particle-based definition of the flow domain for free-surface simulation of water. 
Notably, our formulation does not require velocity extrapolation after pressure projection for free-surface flow calculations as is typically needed with MAC grids. We use regular grids, but as in \\cite{batty:2007:solid-fluid,batty:2008:buckling,larionov:2017:stokes}, we allow for irregular domains in a variational way using cut cells. However, rather than a weighted finite difference approach, we use an FEM approach as in XFEM \\cite{belytschko:2009:review,koschier:2017:xfem} and virtual node (VNA) \\cite{schroeder:2014:vna} techniques. In VNA and XFEM approaches, integrals arising in the variational formulation are carried out over the intersection of the grid with the domain geometry. \\\\\n\\\\\n\\begin{figure}[!ht]\n \\centering\n \\begin{subfigure}{\\columnwidth}\n \\includegraphics[draft=false,trim={0 100px 0 50px},clip,width=\\columnwidth]{figures\/dambreak_1.jpeg}\n \\end{subfigure} \\\\\n \\begin{subfigure}{\\columnwidth}\n \\includegraphics[draft=false,trim={0 100px 0 50px},clip,width=\\columnwidth]{figures\/dambreak_14.jpeg}\n \\end{subfigure} \\\\\n \\begin{subfigure}{\\columnwidth}\n \\includegraphics[draft=false,trim={0 100px 0 50px},clip,width=\\columnwidth]{figures\/dambreak_0090.jpeg}\n \\end{subfigure}\n\\caption{{\\textbf{Dam break}}. A block of water falls in a rectangular domain with obstacles. Dynamic splashing behavior is followed by settling of the water in the tank. White water rendering effects are added based on \\cite{ihmsen:2012:unified}.}\\label{fig:dambreak_final}\n\\end{figure}\nWe leverage the $C^1$ continuity guaranteed by our quadratic B-spline velocity interpolation to develop BSLQB, a novel Backward Semi-Lagrangian (BSL) \\cite{robert:1981:stable} technique that achieves second order accuracy in space and time. BSL techniques utilize the implicit form of semi-Lagrangian advection. 
We show that our novel BSL method for quadratic B-splines dramatically reduces numerical dissipation with only a small modification to the widely-adopted explicit semi-Lagrangian formulations typically used in graphics applications. Semi-Lagrangian techniques for velocity advection utilize the implicit relation associated with solution of Burgers' equation\n\\begin{align}\\label{eq:impBurg}\n\\mb{u}(\\mb{x},t)=\\mb{u}(\\mb{x}-(t-s)\\mb{u}(\\mb{x},t),s)\\iff \\ \\frac{D\\mb{u}}{Dt}=\\frac{\\partial \\mb{u}}{\\partial t}+\\frac{\\partial \\mb{u}}{\\partial \\mb{x}}\\mb{u}=\\mathbf{0}\n\\end{align}\nfor $s\\leq t$ \\cite{evans:2010:pde}. Traditionally, graphics applications have preferred the explicit variant of semi-Lagrangian advection whereby grid velocities are updated through the expression\n\\begin{align}\\label{eq:SL}\n\\mb{u}_\\mb{i}^{n+1}=\\mb{u}(\\mb{x}_\\mb{i}-\\Delta t \\mb{u}_\\mb{i}^n,t^n)\n\\end{align}\nwhere $\\mb{x}_\\mb{i}$ is the location of grid node $\\mb{i}$, $\\mb{u}_\\mb{i}^n,\\mb{u}_\\mb{i}^{n+1}$ are velocities at the node at times $t^n$ and $t^{n+1}$ respectively and interpolation over the velocity grid is used to estimate $\\mb{u}(\\mb{x}_\\mb{i}-\\Delta t \\mb{u}_\\mb{i}^n,t^n)$ at non-grid-node locations \\cite{sawyer:1963:semi,stam:1999:stable}. In contrast, BSL techniques leverage Equation~\\eqref{eq:impBurg} directly\n\\begin{align}\\label{eq:BSL}\n\\mb{u}_\\mb{i}^{n+1}=\\mb{u}(\\mb{x}_\\mb{i}-\\Delta t \\mb{u}_\\mb{i}^{n+1},t^n)\n\\end{align}\nwhich requires the solution of an implicit equation for $\\mb{u}_\\mb{i}^{n+1}$ \\cite{robert:1981:stable}. Since our grid interpolation is naturally $C^1$, we show that this can be done very efficiently using a few steps of Newton's method. While this is more expensive than the explicit semi-Lagrangian formulations, we note that each node can still be updated in parallel since the implicit equations for $\\mb{u}_\\mb{i}^{n+1}$ are decoupled in $\\mb{i}$. 
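To make the per-node Newton solve concrete, the following is a minimal 1D sketch (our illustration, not the paper's implementation): it treats the cell-centered samples directly as quadratic B-spline coefficients on a periodic grid, and the function names (`quad_bspline_weights`, `interp`, `bsl_newton`) are hypothetical.

```python
import math

def quad_bspline_weights(fx):
    # Quadratic B-spline weights (and derivatives w.r.t. fx) over the
    # three nearest cell centers j-1, j, j+1 of a continuous index fx.
    j = int(math.floor(fx + 0.5))          # nearest cell center
    d = fx - j                             # offset, in [-0.5, 0.5]
    w = [0.5 * (0.5 - d) ** 2, 0.75 - d * d, 0.5 * (0.5 + d) ** 2]
    dw = [-(0.5 - d), -2.0 * d, 0.5 + d]
    return j, w, dw

def interp(u, x, h):
    # Evaluate the C^1 velocity field and its spatial derivative at x,
    # treating cell-centered samples u[j] (centers at (j + 0.5) * h)
    # directly as spline coefficients; indices wrap periodically.
    n = len(u)
    j, w, dw = quad_bspline_weights(x / h - 0.5)
    val = sum(w[k] * u[(j - 1 + k) % n] for k in range(3))
    der = sum(dw[k] * u[(j - 1 + k) % n] for k in range(3)) / h
    return val, der

def bsl_newton(u, x_i, h, dt, iters=4):
    # Per-node BSL update: solve v = U(x_i - dt * v) by Newton's method
    # on g(v) = v - U(x_i - dt * v), with g'(v) = 1 + dt * U'(x_i - dt * v).
    v, _ = interp(u, x_i, h)               # initial guess: velocity at node
    for _ in range(iters):
        val, der = interp(u, x_i - dt * v, h)
        v -= (v - val) / (1.0 + dt * der)
    return v
```

For an affine field $u(x)=a+bx$ the implicit equation is linear and the iteration lands on the exact solution $(a+bx)/(1+b\\Delta t)$ after one step; in general a handful of iterations suffice because the spline and its derivative are continuous.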
We show that solution of the implicit Equation~\\eqref{eq:BSL}, rather than the traditionally used explicit Equation~\\eqref{eq:SL} improves the order of convergence from first to second (in space and time). Notably, this does not require use of multiple time steps for backward\/forward estimations of error, as is commonly done \\cite{kim:2006:advections,kim:2005:bfecc,selle:2008:unconditionally,xiu:2001:semi,schroeder:2014:vna}. Furthermore, our method allows for larger-than-CFL time steps and is as stable or more so than explicit semi-Lagrangian formulations.\\\\\n\\\\\n\\begin{figure}[t]\n \\centering\n \\begin{subfigure}{.49\\columnwidth}\n \\includegraphics[draft=false,trim={0 60px 0 60px},clip,width=\\columnwidth]{figures\/Inner_circle_8.jpeg}\n \\end{subfigure}\n \\begin{subfigure}{.49\\columnwidth}\n \\includegraphics[draft=false,trim={0 60px 0 60px},clip,width=\\columnwidth]{figures\/Inner_circle52.jpeg}\n \\end{subfigure}\\\\\n \\begin{subfigure}{.49\\columnwidth}\n \\includegraphics[draft=false,trim={0 60px 0 60px},clip,width=\\columnwidth]{figures\/Inner_circle_401.jpeg}\n \\end{subfigure}\n \\begin{subfigure}{.49\\columnwidth}\n \\includegraphics[draft=false,trim={0 60px 0 60px},clip,width=\\columnwidth]{figures\/Inner_circle_600.jpeg}\n \\end{subfigure}\n\\caption{{\\textbf{SL vs. BSLQB}}. We compare semi-Lagrangian (left) and BSLQB (right) in a vorticity-intensive example. BSLQB breaks symmetry and exhibits a more turbulent flow pattern. Note we only use particles for flow visualization and not for PolyPIC advection in this example.}\\label{fig:inner_circle}\n\\end{figure}\nLastly, we develop a hybrid particle\/BSLQB advection technique that utilizes PolyPIC \\cite{fu:2017:poly} in portions of the domain covered by particles and BSLQB in portions without particles. Our formulation naturally leverages the strengths of both approaches. Dense concentrations of particles can be added to regions of the domain where more detail is desired. 
Also, if particle coverage becomes too sparse because of turbulent flows, BSLQB can be used in the gaps. We demonstrate the efficacy of this technique with smoke simulation and narrow banding of particles near the fluid surface with water simulations as in \\cite{chentanez:2015:coupling,ferstl:2016:narrow,sato:2018:nb}. In this case, level set advection naturally enabled with our BSLQB formulation is preferred in deeper water regions. We summarize our contributions as:\n\\begin{itemize}\n\\item A novel cut-cell collocated velocity B-spline mixed FEM method for Chorin \\shortcite{chorin:1967:numerical} splitting discretization of the incompressible Euler equations.\n\\item BSLQB: a novel BSL technique designed for collocated multiquadratic B-spline velocity interpolation that achieves second order accuracy in space and time.\n\\item A hybrid BSLQB\/PolyPIC method for narrow band free-surface flow simulations and concentrated-detail smoke simulations.\n\\end{itemize}\n\n\\begin{figure*}[!ht]\n \\centering\n \\begin{subfigure}{.33\\textwidth}\n \\includegraphics[draft=false,trim={0 0 0 0},clip,width=\\columnwidth]{figures\/bunnydrown_0003.jpeg}\n \\end{subfigure} \n \\begin{subfigure}{.33\\textwidth}\n \\includegraphics[draft=false,trim={0 0 0 0},clip,width=\\columnwidth]{figures\/bunnydrown_0024.jpeg}\n \\end{subfigure} \n \\begin{subfigure}{.33\\textwidth}\n \\includegraphics[draft=false,trim={0 0 0 0},clip,width=\\columnwidth]{figures\/bunnydrown_0114.jpeg}\n \\end{subfigure}\n\\caption{{\\textbf{Dam break with bunny}}: Opposing blocks of water collapse in a tank and flow around the irregular domain boundary placed in the middle of the tank. Particles are colored from slow (blue) to fast (white) speed. 
}\\label{fig:bunny_dambreak_final}\n\\end{figure*}\n\n\\section{Previous Work}\n\n\\subsection{Advection}\nStam \\shortcite{stam:1999:stable} first demonstrated the efficacy of semi-Lagrangian techniques for graphics applications and they have since become the standard, largely due to the large time steps they engender and their simple interpolatory nature. Many modifications to the original approach of Stam \\shortcite{stam:1999:stable} have been developed, often inspired by approaches in the engineering literature. Fedkiw et al. \\shortcite{fedkiw:2001:visual} use vorticity confinement \\cite{steinhoff:1994:modification} to counterbalance vorticity lost to dissipation and cubic grid interpolation. Kim et al. \\shortcite{kim:2006:advections,kim:2005:bfecc} and Selle et al. \\cite{selle:2008:unconditionally} combine forward and backward semi-Lagrangian steps to estimate and remove dissipative errors. Constrained Interpolation Profile \\cite{kim:2008:semi,yabe:2001:multiphase-analysis,song:2009:derivative-particles} techniques additionally advect function derivatives to reduce dissipation. Molemaker et al. \\shortcite{molemaker:2008:low} use the QUICK technique of Leonard \\shortcite{leonard:1979:stable}, which is essentially upwinding with quadratic interpolation and Adams-Bashforth temporal discretization, although this does not have the favorable stability properties of semi-Lagrangian methods. Backward Difference Formula techniques are useful because they use an implicit multistep formulation for higher-order semi-Lagrangian advection yet still only require one projection per time step \\cite{xiu:2001:semi,schroeder:2014:vna}.\\\\\n\\\\\nThe main idea in semi-Lagrangian techniques is to interpolate data from a characteristic point. This idea goes back to the Courant-Isaacson-Rees \\shortcite{courant:1952:solution} method. 
However, as noted in \\cite{fedkiw:2001:visual} semi-Lagrangian advection is very popular in atmospheric science simulation and the variants used in graphics that account for characteristics traveling beyond the local cell in one time step go back to Sawyer \\shortcite{sawyer:1963:semi}. The first BSL approach utilizing Equation~\\eqref{eq:BSL} was done by Robert \\shortcite{robert:1981:stable} in which they use fixed point iteration to solve the nonlinear equation. They fit a bicubic function to their data over $4\\times4$ grid patches, then use that function in the fixed point iteration. If the upwind point leaves the grid, they clamp it to the boundary of the $4\\times4$ patch. This clamping will degrade accuracy for larger time steps. In this case, more general interpolation is typically used (see \\cite{staniforth:1991:semi,falcone:1998:convergence} for useful reviews). Pudykiewicz and Staniforth \\shortcite{pudykiewicz:1984:some} investigate the effects of BSL versus explicit semi-Lagrangian. Specifically, they compare Bates and McDonald \\shortcite{bates:1982:multiply} (explicit) versus Robert \\shortcite{robert:1981:stable} (BSL). They show that keeping all things equal, the choice of Equation~\\eqref{eq:SL} (explicit) instead of Equation~\\eqref{eq:BSL} (BSL) leads to more dissipation and mass loss. This is consistent with our observations with BSLQB.\\\\\n\\\\\nInterestingly, multiquadratic B-splines have not been adopted by the semi-Lagrangian community, despite their natural regularity. Hermite splines, multicubic splines and even Lagrange polynomials are commonly used \\cite{staniforth:1991:semi}. 
Preference for Hermite splines and Lagrange polynomials is likely due to their local nature (they do not require solution of a global system for coefficients) and preference for multicubic splines (over multi-quadratic) is possibly due to the requirement of odd degree for natural splines (odd degree splines behave like low pass filters and tend to be smoother than even degree splines \\cite{cheng:2001:quadratic,cheney:2012:numerical}). Cubic splines are considered to be more accurate than Hermite splines and Lagrange interpolation \\cite{staniforth:1991:semi,makar:1996:basis}. Interestingly, Riish{\\o}jgaard et al. \\shortcite{riishojgaard:1998:use} found that cubic spline interpolation gave rise to a noisier solution than cubic Lagrange interpolation with a technique analogous to that of Makar and Karpik \\shortcite{makar:1996:basis}. However, they also note that addition of a selective scale diffusion term helps reduce noise associated with cubic splines. Wang and Layton \\shortcite{wang:2010:new} use linear B-splines with BSL but only consider one space dimension, which makes Equation~\\eqref{eq:BSL} linear and easily solvable.\\\\\n\\\\\nDissipation with explicit semi-Lagrangian advection is so severe that many graphics researchers have resorted to alternative methods to avoid it. Mullen et al. \\shortcite{mullen:2009:energy} develop energy preserving integration to prevent the need for correcting dissipative behavior. Some authors \\cite{qu:2019:mcm,tessendorf:2011:MCM,sato:2017:long,sato:2018:spatially} resolve the flow map characteristics for periods longer than a single time step (as opposed to one step with semi-Lagrangian) to reduce dissipation. Hybrid Lagrange\/Eulerian techniques like PIC (and related approaches) \\cite{bridson:2008:fluid-simulation,jiang:2015:apic,fu:2017:poly,zhu:2005:sand-fluid} explicitly track motion of particles in the fluid, which is nearly dissipation-free, but can suffer from distortion in particle sampling quality. 
Vorticity formulations are also typically less dissipative, but can have issues with boundary conditions enforcement \\cite{selle:2005:vortex,angelidis:2005:simulation,chern:2016:schrodinger,elcott:2007:stable,park:2005:vortex,weissmann:2010:filament}. Zehnder et al., Zhang et al. and Mullen et al. \\shortcite{mullen:2009:energy,zehnder:2018:advection,narain:2019:ref,zhang:2015:restoring} have noted that the Chorin projection itself causes dissipation. Zhang et al. \\shortcite{zhang:2015:restoring} reduced artificial dissipation caused by the projection step by estimating lost vorticity and adding it back into the fluid. Zehnder et al. \\shortcite{zehnder:2018:advection,narain:2019:ref} propose a simple, but very effective modification to the splitting scheme that is similar to midpoint rule integration to reduce the projection error.\n\n\\subsection{Pressure projection}\n\nGraphics techniques utilizing pressure projection typically use voxelized MAC grids with boundary conditions enforced at cell centers and faces, however many methods improve this by taking into account sub-cell geometric detail. Enright et al. \\shortcite{enright:2003:using} showed that enforcing the pressure free surface boundary condition at MAC grid edge crossings (rather than at cell centers) dramatically improved the look of water surface waves and ripples. Batty, Bridson and colleagues developed variational weighted finite difference approaches to enforce velocity boundary conditions with MAC grids on edge crossings and improved pressure boundary conditions at the free surface in the case of viscous stress \\cite{batty:2007:solid-fluid,batty:2008:buckling,larionov:2017:stokes}. XFEM \\cite{belytschko:2009:review,koschier:2017:xfem} and virtual node (VNA) \\cite{schroeder:2014:vna} techniques also use cut cell geometry with variational techniques. Schroeder et al. \\shortcite{schroeder:2014:vna} use cut cells with MAC grids, but their technique is limited to moderate Reynolds numbers. 
\\\\\n\\\\\n\\begin{figure}[!t]\n \\centering\n \\begin{subfigure}{0.24\\columnwidth}\n \\includegraphics[draft=false,trim={300px 0 300px 0},clip,width=\\columnwidth]{figures\/globe0000.jpeg}\n \\end{subfigure}\n \\begin{subfigure}{0.24\\columnwidth}\n \\includegraphics[draft=false,trim={300px 0 300px 0},clip,width=\\columnwidth]{figures\/globe0007.jpeg}\n \\end{subfigure}\n \\begin{subfigure}{0.24\\columnwidth}\n \\includegraphics[draft=false,trim={300px 0 300px 0},clip,width=\\columnwidth]{figures\/globe0013.jpeg}\n \\end{subfigure}\n \\begin{subfigure}{0.24\\columnwidth}\n \\includegraphics[draft=false,trim={300px 0 300px 0},clip,width=\\columnwidth]{figures\/globe0062.jpeg}\n \\end{subfigure}\n\\caption{{\\textbf{Water in a globe}}. A block of water splashes and naturally slides along cut cell boundaries in an irregular domain interior to one large sphere and exterior to one small sphere.}\\label{fig:snowglobe}\n\\end{figure}\nThere is a vast literature on enforcing incompressibility in the FEM community \\cite{hughes:2000:book}. Our approach is most similar to the B-spline Taylor-Hood element of Bressan \\cite{bressan:2010:isogeometric}. Adoption of B-spline interpolation in FEM is part of the isogeometric movement \\cite{hughes:2005:isogeometric,ruberg:2012:subdivision}. Originally motivated by the desire to streamline the transition from computer-aided design (CAD) to FEM simulation, isogeometric analysis explores the use of CAD-based interpolation (e.g. B-splines and nonuniform rational B-splines (NURBS)) with FEM methodologies. Hughes et al. \\shortcite{hughes:2005:isogeometric} show that in addition to simplifying the transition from CAD to simulation, the higher regularity and spectral-like properties exhibited by these splines make them more accurate than traditionally used interpolation. We enforce Dirichlet boundary conditions weakly as in XFEM and VNA approaches \\cite{belytschko:2009:review,koschier:2017:xfem,schroeder:2014:vna}. Bazilevs et al. 
\\shortcite{bazilevs:2007:weak} show that weak Dirichlet enforcement with isogeometric analysis can be more accurate than strong enforcement.\\\\\n\\\\\nGraphics applications are typically concerned with turbulent, high-Reynolds-number flows. Interestingly, B-splines have proven effective for these flows by researchers in the Large Eddy Simulation (LES) community \\cite{kim:1998:mixed,kravchenko:1999:bspline}. Kravchenko et al. \\shortcite{kravchenko:1999:bspline} use a variational weighted residuals approach with B-splines for turbulent LES and show that the increased regularity significantly reduces computational costs. Botella et al. \\shortcite{botella:2002:collocation} use a similar approach, but apply a collocation technique where the strong form of the div-grad formulation of incompressibility is enforced pointwise. They show that their B-spline approach attains optimal order of accuracy with accurate resolution of quadratic flow invariants. Botella et al. \\shortcite{botella:2002:collocation} also introduce a notion of sparse approximation to the inverse mass matrix to avoid dense systems of equations in the pressure solve.\n\n\\begin{figure}[t]\n \\centering\n \\begin{subfigure}{0.5\\columnwidth}\n \\includegraphics[draft=false,trim={300px 0 300px 0},clip,width=\\columnwidth]{figures\/bunny_smoke_thin40_0001.jpeg}\n \\end{subfigure}\n \\begin{subfigure}{0.5\\columnwidth}\n \\includegraphics[draft=false,trim={300px 0 300px 0},clip,width=\\columnwidth]{figures\/bunny_smoke_thin40_0026.jpeg}\n \\end{subfigure} \\\\[-1ex]\n \\begin{subfigure}{0.5\\columnwidth}\n \\includegraphics[draft=false,trim={300px 0 300px 0},clip,width=\\columnwidth]{figures\/bunny_smoke_thin40_0065.jpeg}\n \\end{subfigure}\n \\begin{subfigure}{0.5\\columnwidth}\n \\includegraphics[draft=false,trim={300px 0 300px 0},clip,width=\\columnwidth]{figures\/bunny_smoke_thin40_0090.jpeg}\n \\end{subfigure}\n\\caption{{\\textbf{Smoke in an irregular domain}}. 
Multicolored spheres of smoke with non-zero initial velocity conditions flow and collide inside the Stanford bunny. Zero normal velocity is enforced with our cut cell formulation.}\\label{fig:bunnysmoke}\n\\end{figure}\n\n\\section{Governing Equations and Operator Splitting}\nWe solve the incompressible Euler equations that describe the evolution of a fluid in terms of its mass density $\\rho$, velocity $\\mb{u}$, pressure $p$ and gravitational constant $\\gg$ as\n\\begin{align}\n\\rho\\frac{D\\mb{u}}{Dt} &=\\rho\\left(\\frac{\\partial \\mb{u}}{\\partial t} + \\frac{\\partial \\mb{u}}{\\partial \\mb{x}}\\mb{u}\\right)=-\\nabla p + \\rho\\gg, \\ \\mb{x}\\in\\Omega \\label{eq:mom_cont}\\\\\n\\nabla\\cdot\\mb{u} &= 0, \\ \\mb{x}\\in\\Omega \\label{eq:div_cont} \\\\\n\\mb{u}\\cdot\\mb{n}&=a, \\ \\mb{x}\\in\\partial \\Omega_D \\label{eq:bcv_cont}\\\\\np&=0, \\ \\mb{x}\\in\\partial \\Omega_N \\label{eq:bcp_cont}\n\\end{align}\nwhere Equation~\\eqref{eq:mom_cont} is balance of linear momentum, Equation~\\eqref{eq:div_cont} is the incompressibility constraint, Equation~\\eqref{eq:bcv_cont} is the boundary condition for the normal component of the velocity and Equation~\\eqref{eq:bcp_cont} is the free surface boundary condition. We use $\\Omega$ to denote the region occupied by the fluid, $\\partial \\Omega_D$ to denote the portion of the boundary of the fluid domain on which velocity is prescribed to be $a$ (which may vary over the boundary) and $\\partial \\Omega_N$ is the surface of the water where the pressure is zero (see Figure~\\ref{fig:dAndg}).\\\\\n\\\\\n\\begin{figure}[h]\n\\centering\n\\begin{tabular}{cc}\n\\subf{\\includegraphics[draft=false,width=.45\\columnwidth]{figures\/domain}}\n&\n\\subf{\\includegraphics[draft=false,width=.45\\columnwidth]{figures\/cutcell.eps}}\n\\\\\n\\end{tabular}\n\\caption{{\\textbf{Flow domain and grid}}. 
{\\textbf{Left}}: we use $\\Omega$ to denote the fluid domain, with $\\partial\\Omega_D$ used to indicate the portion of the fluid domain subject to velocity boundary conditions and $\\partial\\Omega_N$ to indicate the free-surface portion of the boundary with pressure condition $p=0$. {\\textbf{Right}}: We use multiquadratic interpolation for velocity ($\\bar{\\mb{u}}_\\mb{i}$ at cell centers, blue) and multilinear for pressure ($p_\\mb{c}$ at nodes, red). The fluid domain is defined with sub-grid-cell accuracy.}\\label{fig:dAndg}\n\\end{figure}\nIn a Chorin \\shortcite{chorin:1967:numerical} operator splitting of the advective and pressure terms, velocity is first updated to an intermediate field $\\mb{w}$ under the convective part $\\rho\\frac{D\\mb{u}}{Dt}=\\mathbf{0}$, followed by an update from the pressure and gravitational body forcing under $\\rho\\frac{\\partial \\mb{u}}{\\partial t}=-\\nabla p + \\rho\\gg$ where the pressure is determined to enforce $\\nabla\\cdot\\mb{u} = 0$. Dividing by the mass density, the convective step is seen to be an update under Burgers' equation~\\eqref{eq:impBurg}. Burgers' equation governs temporally constant Lagrangian velocity (zero Lagrangian acceleration). The characteristic curves for flows of this type are therefore straight lines, on which the velocity is constant (see Figure~\\ref{fig:burgers}). This gives rise to the implicit relation $\\mb{u}(\\mb{x},t)=\\mb{u}(\\mb{x}-(t-s)\\mb{u}(\\mb{x},t),s)$ for $s\\leq t$. Intuitively, if we want to know the velocity $\\mb{u}(\\mb{x},t)$ at point $\\mb{x}$ at time $t$, we look back along the characteristic passing through $\\mb{x}$ at time $t$ to any previous time $s$; however, the characteristic is the straight line defined by the velocity $\\mb{u}(\\mb{x},t)$ that we want to know. 
Hence we take an implicit approach to the solution of this equation, which when combined with the operator splitting amounts to\n\\begin{align}\n\\frac{\\mb{w}-\\tilde{\\mb{u}}^n}{\\Delta t} &=\\mathbf{0} \\label{eq:split_a} \\\\\n\\rho\\frac{\\mb{u}^{n+1}-\\mb{w}}{\\Delta t} &=-\\nabla p^{n+1} + \\rho\\gg \\label{eq:split_p}\\\\\n\\nabla\\cdot\\mb{u}^{n+1} &= 0 \\label{eq:split_div}\n\\end{align}\nwhere we use the notation $\\mb{u}^{n+\\alpha}(\\mb{x})=\\mb{u}(\\mb{x},t^{n+\\alpha})$, $\\alpha=0,1$ to denote the time $t^{n+\\alpha}$ velocities. Furthermore, the intermediate velocity $\\mb{w}$ is related to $\\tilde{\\mb{u}}^n$ through $\\tilde{\\mb{u}}^n(\\mb{x})=\\mb{u}(\\mb{x}-\\Delta t \\mb{w}(\\mb{x}),t^n)$.\n\n\\begin{figure*}[ht!]\n\\includegraphics[draft=false,width=\\textwidth]{figures\/burgers}\n\\caption{{\\textbf{BSL versus SL}}. We illustrate the difference between explicit semi-Lagrangian and BSL in 1D. {\\textbf{Left}}: The exact solution of Burgers' equation has straight line characteristics shown in blue, green and red on which velocity (plotted above the plane in gray) is constant. {\\textbf{Center}}: BSL (green) uses Newton's method to solve for the exact characteristic going through $x_i$ at time $t^{n+1}$ to determine $u_i^{n+1}$. {\\textbf{Right}}: explicit semi-Lagrangian (red) uses a stale, time $t^n$ approximation of the characteristic which overshoots, resulting in an underestimate of the velocity and energy loss.}\\label{fig:burgers}\n\\end{figure*}\n\n\\section{Spatial Discretization}\nWe discretize in space by representing velocity with multiquadratic B-splines and pressure with multilinear B-splines. We use a regular grid with spacing $\\Delta x$ and define pressure degrees of freedom at grid vertices and velocity degrees of freedom at grid cell centers as in \\cite{ando:2013:surfacing} (see Figure~\\ref{fig:dAndg}). 
This efficiently aligns the support of the multiquadratic and multilinear interpolating functions which naturally allows for a grid-cell-wise definition of the flow domain (see Figure~\\ref{fig:fsdnb}). We use $N_\\mb{i}(\\mb{x})$ to represent the multiquadratic B-spline basis function associated with velocity degree of freedom $\\bar{\\mb{u}}_\\mb{i}$ at grid cell center $\\mb{x}_\\mb{i}$ and $\\chi_\\mb{c}(\\mb{x})$ for the multilinear basis function associated with pressure $p_\\mb{c}$ at grid node $\\mb{x}_\\mb{c}$. These are defined as\n\\begin{align}\nN_\\mb{i}(\\mb{x})&=\\prod_{\\alpha}\\hat{N}(\\frac{x_\\alpha-x_{\\alpha \\mb{i}}}{\\Delta x}), \\ \\chi_\\mb{c}(\\mb{x})=\\prod_{\\alpha}\\hat{\\chi}(\\frac{x_\\alpha-x_{\\alpha \\mb{c}}}{\\Delta x})\\\\\n\\hat{N}(\\eta)&=\\left\\{\\begin{array}{lcc}\n\\frac{\\left(\\eta +\\frac{3}{2}\\right)^2}{2},&\\eta\\in(-\\frac{3}{2},-\\frac{1}{2})\\\\\n-\\eta^2+\\frac{3}{4},&\\eta\\in[-\\frac{1}{2},\\frac{1}{2}]\\\\\n\\frac{\\left(\\eta -\\frac{3}{2}\\right)^2}{2},&\\eta\\in(\\frac{1}{2},\\frac{3}{2})\\\\\n0,&\\textrm{otherwise}\n\\end{array}\\right. \\ \n\\hat{\\chi}(\\nu)=\\left\\{\\begin{array}{lcc}\n1+\\nu,&\\nu\\in(-1,0)\\\\\n1-\\nu,&\\nu\\in[0,1)\\\\\n0,&\\textrm{otherwise}\n\\end{array}\\right.\n\\end{align}\nwhere we use Greek indices $\\alpha$ to indicate components of the vectors $\\mb{x}$, $\\mb{x}_\\mb{i}$ and $\\mb{x}_\\mb{c}$. With this convention we interpolate to define velocity and pressure fields \n\\begin{align}\\label{eq:interp}\n\\mb{u}(\\mb{x})=\\sum_\\mb{i} \\bar{\\mb{u}}_\\mb{i} N_\\mb{i}(\\mb{x}), \\ p(\\mb{x})=\\sum_\\mb{c} p_\\mb{c} \\chi_\\mb{c}(\\mb{x}).\n\\end{align}\nWe use the notation $\\bar{\\mb{u}}_\\mb{i}$ to distinguish it from the velocity at the grid node $\\mb{u}(\\mb{x}_\\mb{i})=\\sum_\\mb{j} \\bar{\\mb{u}}_\\mb{j} N_\\mb{j}(\\mb{x}_\\mb{i})$ since the multiquadratic B-splines are not interpolatory and these will in general be different. 
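As a quick numerical check (our illustration, not part of the paper's implementation), the 1D components $\hat{N}$ and $\hat{\chi}$ defined above can be evaluated directly; both are non-negative and form a partition of unity on the grid:

```python
import numpy as np

def N_hat(eta):
    """1D quadratic B-spline component (C^1, support (-3/2, 3/2))."""
    eta = abs(eta)
    if eta < 0.5:
        return 0.75 - eta * eta
    if eta < 1.5:
        return 0.5 * (eta - 1.5) ** 2
    return 0.0

def chi_hat(nu):
    """1D linear (hat) B-spline component (C^0, support (-1, 1))."""
    nu = abs(nu)
    return 1.0 - nu if nu < 1.0 else 0.0

# Partition of unity: the overlapping splines sum to one everywhere.
for x in np.linspace(-0.5, 0.5, 11):
    assert abs(sum(N_hat(x - j) for j in (-1, 0, 1)) - 1.0) < 1e-12
    assert abs(chi_hat(x) + chi_hat(x - 1) + chi_hat(x + 1) - 1.0) < 1e-12
```

The partition-of-unity property is what makes the rows of the interpolation matrix $N_\mb{j}(\mb{x}_\mb{i})$ sum to one in the advection solve below.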
Note that multilinear interpolation is interpolatory and $p_\\mb{c}=\\sum_\\mb{d} p_\\mb{d} \\chi_\\mb{d}(\\mb{x}_\\mb{c})$.\n\\subsection{BSLQB Advection}\\label{sec:bslqb}\nWith this interpolation choice, we first solve for intermediate grid node velocity values $\\mb{w}(\\mb{x}_\\mb{i})$ from Equation~\\eqref{eq:split_a} as\n\\begin{align}\\label{eq:ad_disc}\n\\mb{w}(\\mb{x}_\\mb{i})=\\sum_\\mb{j} \\bar{\\mb{u}}^n_\\mb{j} N_\\mb{j}\\left(\\mb{x}_\\mb{i}-\\Delta t \\mb{w}(\\mb{x}_\\mb{i})\\right).\n\\end{align}\nWe can solve this equation using Newton's method since the multiquadratic B-splines are $C^1$. We use $\\mb{w}^k_\\mb{i}$ to denote the $k^\\textrm{th}$ Newton approximation to $\\mb{w}(\\mb{x}_\\mb{i})$. Explicit semi-Lagrangian is used as an initial guess with $\\mb{w}^0_\\mb{i}=\\sum_\\mb{j} \\bar{\\mb{u}}^n_\\mb{j} N_\\mb{j}\\left(\\mb{x}_\\mb{i}-\\Delta t \\sum_\\ll \\bar{\\mb{u}}^n_\\ll N_\\ll(\\mb{x}_\\mb{i})\\right)$ and then we update iteratively via $\\mb{w}^k_\\mb{i}\\mathrel{+}=\\boldsymbol\\delta\\mb{u}^k$ with Newton increment $\\boldsymbol\\delta\\mb{u}^k$ satisfying\n\\begin{align*}\n\\boldsymbol\\delta\\mb{u}^k&=\\left(\\mb{I}+\\Delta t \\frac{\\partial \\mb{u}^n}{\\partial \\mb{x}}\\left(\\mb{x}_\\mb{i}-\\Delta t \\mb{w}^k_\\mb{i}\\right)\\right)^{-1}\\left(\\sum_\\mb{j} \\bar{\\mb{u}}^n_\\mb{j} N_\\mb{j}\\left(\\mb{x}_\\mb{i}-\\Delta t \\mb{w}^k_\\mb{i}\\right)-\\mb{w}^k_\\mb{i}\\right)\n\\end{align*}\nwhere $\\frac{\\partial \\mb{u}^n}{\\partial \\mb{x}}\\left(\\mb{x}_\\mb{i}-\\Delta t \\mb{w}^k_\\mb{i}\\right)=\\sum_\\mb{j} \\bar{\\mb{u}}^n_\\mb{j}\\frac{\\partial N_\\mb{j}}{\\partial \\mb{x}}\\left(\\mb{x}_\\mb{i}-\\Delta t \\mb{w}^k_\\mb{i}\\right)$. 
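The Newton iteration above is easy to state in 1D. The following sketch (our illustration; the callables `u` and `du` stand in for the interpolated field $\mb{u}^n$ and its spatial derivative) solves the implicit relation $w = u^n(x - \Delta t\, w)$ with an explicit semi-Lagrangian initial guess:

```python
def bsl_newton(u, du, x, dt, tol=1e-12, max_iter=10):
    """Solve w = u(x - dt*w) for w by Newton's method (1D sketch).

    u, du: callables for the time-t^n velocity field and its derivative."""
    w = u(x - dt * u(x))            # explicit semi-Lagrangian initial guess
    for _ in range(max_iter):
        xs = x - dt * w             # current upwind (characteristic) point
        r = u(xs) - w               # residual of the implicit relation
        w += r / (1.0 + dt * du(xs))  # Newton increment
        if abs(r) < tol:
            break
    return w

# For a linear field u(x) = a*x, the implicit relation has the closed
# form w = a*x / (1 + a*dt), which Newton recovers in one step.
a, x, dt = 0.8, 1.0, 0.1
w = bsl_newton(lambda s: a * s, lambda s: a, x, dt)
assert abs(w - a * x / (1.0 + a * dt)) < 1e-10
```

The denominator $1 + \Delta t\, u'$ is the 1D analogue of the Jacobian $\mb{I}+\Delta t \frac{\partial \mb{u}^n}{\partial \mb{x}}$ inverted in the text.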
It is generally observed \\cite{kuo:1990:semi,pudykiewicz:1984:some} that with BSL approaches of this type, this iteration will converge as long as $\\mb{I}+\\Delta t \\sum_\\mb{j} \\bar{\\mb{u}}^n_\\mb{j} \\frac{\\partial N_\\mb{j}}{\\partial \\mb{x}}\\left(\\mb{x}_\\mb{i}-\\Delta t \\mb{w}^k_\\mb{i}\\right)$ is non-singular. We note that this condition holds as long as no shocks form under Burgers' equation \\cite{evans:2010:pde} (forward from time $t^n$). This is a safe assumption since we are modeling incompressible flow with which shock formation does not occur, but it may be a problem for compressible flows. In practice, this iteration converges in 3 or 4 iterations, even with CFL numbers larger than 4 (see Section~\\ref{sec:ex_hybrid}). When it does fail (which occurs less than one percent of the time in the examples we run), it is usually for points near the boundary with characteristics that leave the domain (since we cannot estimate $\\frac{\\partial \\mb{u}^n}{\\partial \\mb{x}}$ using grid interpolation if the upwind estimate leaves the grid). In this case we use explicit semi-Lagrangian and interpolate from the boundary conditions if the characteristic point is off the domain.\\\\\n\\\\\nOnce we have obtained the grid node values of the intermediate velocity $\\mb{w}(\\mb{x}_\\mb{i})$, we must determine interpolation coefficients $\\bar{\\mb{w}}_\\mb{j}$ such that $\\mb{w}(\\mb{x}_\\mb{i})=\\sum_\\mb{j} \\bar{\\mb{w}}_\\mb{j} N_\\mb{j}(\\mb{x}_\\mb{i})$. On the boundary of the grid, we set $\\bar{\\mb{w}}_\\mb{j} = \\mb{w}(\\mb{x}_\\mb{j})$ since we can only interpolate to $\\mb{x}_\\mb{i}$ if all of its neighbors have data. This yields a square, symmetric positive definite system of equations for the remaining $\\bar{\\mb{w}}_\\mb{j}$. The system is very well conditioned with sparse, symmetric matrix $N_\\mb{j}(\\mb{x}_\\mb{i})$ consisting of non-negative entries and rows that sum to one. 
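In 1D this collocation system is concrete: for quadratic B-splines centered at cell centers, $N_\mb{j}(\mb{x}_\mb{i})$ is tridiagonal with stencil $(1/8,\ 3/4,\ 1/8)$. A minimal numpy sketch (our illustration; boundary coefficients pinned to nodal values as described above):

```python
import numpy as np

# Interior rows of the 1D collocation matrix N_j(x_i): non-negative
# entries (1/8, 3/4, 1/8) that sum to one, diagonally dominant.
n = 16
A = np.diag(np.full(n, 0.75)) \
    + np.diag(np.full(n - 1, 0.125), 1) \
    + np.diag(np.full(n - 1, 0.125), -1)
# Boundary rows: coefficients are set directly, w_bar_j = w(x_j).
A[0, :] = 0.0; A[0, 0] = 1.0
A[-1, :] = 0.0; A[-1, -1] = 1.0

w_nodes = np.sin(np.linspace(0.0, 1.0, n))  # intermediate grid-node values
w_bar = np.linalg.solve(A, w_nodes)         # interpolation coefficients

# The recovered spline reproduces the prescribed nodal values.
assert np.allclose(A @ w_bar, w_nodes)
```

A dense solve is used here only for brevity; the diagonal dominance of the interior rows is what makes the preconditioned CG solve in the text converge so quickly.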
The sparsity and symmetry of the system arise from the compact support and geometric symmetry, respectively, of the B-spline basis functions $N_\\mb{j}$. The system can be solved to a residual of machine precision in one iteration of PCG (or tens of iterations of unpreconditioned CG). In practice, we have noticed that for some flows, determining the coefficients $\\bar{\\mb{w}}_\\mb{j}$ can lead to increasingly oscillatory velocity fields. This is perhaps due to the unfavorable filtering properties of even-order B-splines \\cite{cheng:2001:quadratic,cheney:2012:numerical}. However, we found that a simple stabilization strategy can be obtained as\n\\begin{align}\\label{eq:BSLQBsys}\n\\sum_\\mb{j} \\left(\\lambda N_\\mb{j}(\\mb{x}_\\mb{i}) + (1-\\lambda)\\delta_{\\mb{i}\\mb{j}}\\right)\\bar{\\mb{w}}_\\mb{j}=\\mb{w}(\\mb{x}_\\mb{i})\n\\end{align}\nwhere $\\lambda\\in[0,1]$ and $\\delta_{\\mb{i}\\mb{j}}$ is the Kronecker delta. A value of $\\lambda=0$ is very stable, but extremely dissipative. Stable yet energetic behavior is achieved by decreasing $\\lambda$ slightly below $1$, with the decrease shrinking under grid refinement. In practice we found that $\\lambda\\in (.95,1 ]$ with $\\lambda =1-c\\Delta x$ for constant $c$ provided a good balance without compromising second order accuracy of the method (see Section~\\ref{sec:ex_hybrid}). We note that Riish{\\o}jgaard et al. \\shortcite{riishojgaard:1998:use} also added diffusion to cubic spline interpolation based semi-Lagrangian to reduce noise.\n\\subsection{Hybrid BSLQB-PolyPIC Advection}\\label{sec:poly}\nIn some portions of the domain, we store particles with positions $\\mb{x}_p^n$ and PolyPIC \\cite{fu:2017:poly} velocity coefficients $\\mb{c}^n_p$. In the vicinity of the particles, we use PolyPIC \\cite{fu:2017:poly} to update the intermediate velocity field $\\bar{\\mb{w}}_\\mb{j}$. 
First we update particle positions as $\\mb{x}^{n+1}_p=\\mb{x}^{n}_p + \\Delta t \\mb{v}_p^{n}$ (where the velocity $\\mb{v}_p^{n}$ is determined from $\\mb{c}^n_p$ following \\cite{fu:2017:poly}). Then the components $\\bar{w}_{\\mb{j}\\alpha}$ of the coefficients $\\bar{\\mb{w}}_\\mb{j}$ are determined as\n\\begin{align}\\label{eq:polyp2g}\n\\bar{w}_{\\mb{j}\\alpha}=\\frac{\\sum_p m_pN_\\mb{j}(\\mb{x}_p^{n+1})\\left(\\sum_{r=1}^{N_r} s_r(\\mb{x}_\\mb{j}-\\mb{x}^{n+1}_p)c^n_{pr\\alpha}\\right)}{\\sum_p m_pN_\\mb{j}(\\mb{x}_p^{n+1})}\n\\end{align}\nwhere $N_r$ is the number of polynomial modes $s_r(\\mb{x})$, as in Fu et al. \\shortcite{fu:2017:poly}. To create our hybrid approach, we update $\\bar{w}_{\\mb{j}\\alpha}$ from Equation~\\eqref{eq:polyp2g} whenever the denominator is greater than a threshold $\\sum_p m_pN_\\mb{j}(\\mb{x}_p^{n+1})>\\tau^m$, otherwise we use the BSLQB update from Equation~\\eqref{eq:BSLQBsys}. We use this threshold because the grid node update in Equation~\\eqref{eq:polyp2g} loses accuracy when the denominator is near zero, in which case the BSLQB approximation is likely more accurate. Note that the polynomial mode coefficients for the next time step $\\mb{c}^{n+1}_p$ are determined from the grid velocities at the end of the time step (using particle positions $\\mb{x}_p^{n+1}$ and after pressure projection).\n\n\\section{Pressure Projection}\nWe solve Equations~\\eqref{eq:split_p}-\\eqref{eq:split_div} and boundary condition Equations~\\eqref{eq:bcv_cont}-\\eqref{eq:bcp_cont} in a variational way. To do this, we require that the products of Equations~\\eqref{eq:split_p}, \\eqref{eq:split_div} and Equation~\\eqref{eq:bcv_cont} with arbitrary test functions $\\mb{r}$, $q$ and $\\mu$, respectively, integrated over the domain are always equal to zero. The free surface boundary condition in Equation~\\eqref{eq:bcp_cont} is naturally satisfied by our treatment of Equation~\\eqref{eq:split_p}. 
We summarize this as \n\\begin{align}\n\\int_\\Omega \\mb{r}\\cdot\\rho\\left(\\frac{\\mb{u}^{n+1}-\\mb{w}}{\\Delta t}\\right)d\\mb{x}&=\\int_\\Omega p^{n+1}\\nabla\\cdot\\mb{r} + \\rho\\mb{r}\\cdot \\gg d\\mb{x} \\label{eq:var_mom}\\\\\n&-\\int_{\\partial \\Omega}p^{n+1} \\mb{r}\\cdot \\mb{n} ds(\\mb{x})\\nonumber\\\\\n\\int_{\\Omega} q \\nabla \\cdot \\mb{u}^{n+1} d\\mb{x}&=0 \\label{eq:var_div}\\\\\n\\int_{\\partial \\Omega_D} \\mu \\left(\\mb{u}^{n+1}\\cdot\\mb{n}-a\\right) ds(\\mb{x})&=0.\\label{eq:var_bcv}\n\\end{align}\nHere we integrate by parts in the integral associated with Equation~\\eqref{eq:split_p}. Furthermore, we modify the expression $\\int_{\\partial \\Omega}p^{n+1} \\mb{r}\\cdot \\mb{n} ds(\\mb{x})$ in Equation~\\eqref{eq:var_mom} in accordance with the boundary conditions. We know that the pressure is zero on $\\partial \\Omega_N$, however we do not know its value on $\\partial \\Omega_D$. We introduce the pressure on this portion of the domain as a Lagrange multiplier $\\lambda^{n+1}$ associated with satisfaction of the velocity boundary condition in Equation~\\eqref{eq:var_bcv}. Physically, this is the external pressure we would need to apply on $\\partial \\Omega_D$ to ensure that $\\mb{u}^{n+1}\\cdot\\mb{n}=a$. With this convention, we have $\\int_{\\partial \\Omega}p^{n+1} \\mb{r}\\cdot \\mb{n} ds(\\mb{x})=\\int_{\\partial \\Omega_D}\\lambda^{n+1} \\mb{r}\\cdot \\mb{n} ds(\\mb{x})$. We note that unlike Equation~\\eqref{eq:var_bcv} (and its strong form counterpart $\\eqref{eq:bcv_cont}$) that requires introduction of a Lagrange multiplier, Equation~\\eqref{eq:bcp_cont} is naturally enforced through the weak form simply by setting $p^{n+1}=0$ in the integral over $\\partial \\Omega_N$ in Equation~\\eqref{eq:var_mom}.\\\\\n\\\\\nTo discretize in space, we introduce interpolation for the test functions $\\mb{r}$, $q$ and $\\mu$. 
We use the same spaces as in Equation~\\eqref{eq:interp} for velocity and pressure for $\\mb{r}=\\sum_\\mb{i} \\bar{\\mb{r}}_\\mb{i} N_\\mb{i}$ and $q=\\sum_\\mb{d} q_\\mb{d} \\chi_\\mb{d}$. For the test functions $\\mu$, we choose the same space as $q,p$, but with functions restricted to $\\partial \\Omega_D$, $\\mu=\\sum_\\mb{b} \\mu_\\mb{b} \\chi_\\mb{b}$ for $\\mb{b}$ with grid cell $\\Omega_\\mb{b} \\cap \\partial \\Omega_D \\neq \\emptyset$ (see Figure~\\ref{fig:fsdnb}). We choose the same space for $\\lambda^{n+1}=\\sum_\\mb{b}\\lambda^{n+1}_\\mb{b} \\chi_\\mb{b}$ to close the system. With these choices for the test functions, the variational problem is projected to a finite dimensional problem defined by the interpolation degrees of freedom. This is expressed as a linear system for velocities $\\bar{\\mb{u}}_\\mb{j}^{n+1}$, internal pressures $p^{n+1}_\\mb{c}$, and external pressures $\\lambda_\\mb{b}^{n+1}$ that is equivalent to \n\\begin{align}\n\\left(\\begin{array}{ccc}\n\\mb{M}&-\\mb{D}^T&\\mb{B}^T\\\\\n-\\mb{D}&&\\\\\n\\mb{B}&&\n\\end{array}\\right)\n\\left(\n\\begin{array}{c}\n\\mb{U}^{n+1}\\\\\n\\mb{P}^{n+1}\\\\\n\\boldsymbol\\Lambda^{n+1}\n\\end{array}\n\\right)=\n\\left(\n\\begin{array}{c}\n\\mb{M}\\mb{W} + \\hat{\\gg}\\\\\n\\mathbf{0}\\\\\n\\AA\n\\end{array}\n\\right).\n\\end{align}\nHere $\\mb{U}^{n+1}$, $\\mb{P}^{n+1}$ and $\\boldsymbol\\Lambda^{n+1}$ are the vectors of all unknown $\\bar{\\mb{u}}_\\mb{j}^{n+1}$, $p_\\mb{c}^{n+1}$ and $\\lambda_\\mb{b}^{n+1}$ respectively. Furthermore $\\mb{M}$ is the mass matrix, $\\mb{B}$ defines the velocity boundary conditions and $\\mb{D}$ defines the discrete divergence condition. Lastly, $\\mb{W} $ is the vector of all $\\bar{\\mb{w}}_\\mb{i}$ that define the intermediate velocity, $\\hat{\\gg}$ is from gravity and $\\AA$ is the variational boundary condition. 
Using the convention that Greek indices $\\alpha,\\beta$ range from $1$ to $3$, these matrices and vectors have entries \n\\begin{align}\nM_{\\alpha\\mb{i}\\beta\\mb{j}}=\\delta_{\\alpha\\beta}\\int_\\Omega \\frac{\\rho}{\\Delta t} N_\\mb{i} N_\\mb{j} d\\mb{x}, \\ D_{\\mb{d} \\beta \\mb{j}}&=\\int_\\Omega \\chi_\\mb{d}\\frac{\\partial N_\\mb{j}}{\\partial x_\\beta}d\\mb{x}, \\ \\hat{g}_{\\alpha \\mb{i}}=\\int_\\Omega \\rho g_\\alpha N_\\mb{i} d\\mb{x} \\label{eq:vol_int} \\\\\nB_{\\mb{b} \\beta\\mb{j}}=\\int_{\\partial \\Omega_D} \\chi_\\mb{b} N_\\mb{j} n_\\beta ds(\\mb{x}), \\ A_\\mb{b} &= \\int_{\\partial \\Omega_D} a\\chi_\\mb{b} ds(\\mb{x}).\\label{eq:b_int}\n\\end{align}\nIf we define $\\mb{G}=[-\\mb{D}^T,\\mb{B}^T]$, we can convert this system into a symmetric positive definite one for $\\mb{P}^{n+1}$ and $\\boldsymbol\\Lambda^{n+1}$ followed by a velocity correction for $\\mb{U}^{n+1}$\n\\begin{align}\n\\left(\n\\begin{array}{c}\n\\label{eq:spd_system}\n\\mb{P}^{n+1}\\\\\n\\boldsymbol\\Lambda^{n+1}\n\\end{array}\n\\right)&=\\left(\\mb{G}^{T}\\mb{M}^{-1}\\mb{G}\\right)^{-1}\n\\left(\\mb{G}^T\\left(\\mb{W}+\\mb{M}^{-1}\\hat{\\gg}\\right)-\\left(\\begin{array}{c}\\mathbf{0}\\\\\\AA\\end{array}\\right)\\right)\\\\\n\\mb{U}^{n+1}&=-\\mb{M}^{-1}\\mb{G}\\left(\n\\begin{array}{c}\n\\mb{P}^{n+1}\\\\\n\\boldsymbol\\Lambda^{n+1}\n\\end{array}\n\\right)+\\mb{W}+\\mb{M}^{-1}\\hat{\\gg}.\n\\end{align}\nUnfortunately, this system will be dense in the current formulation since the full mass matrix $M_{\\alpha\\mb{i}\\beta\\mb{j}}$ is non-diagonal with dense inverse \\cite{botella:2002:collocation}. 
However, a simple lumped mass approximation\n\\begin{align}\\label{eq:mass_lump}\nM^l_{\\alpha\\mb{i}\\beta\\mb{j}}=\\left\\{\n\\begin{array}{lcc}\n\\delta_{\\alpha\\beta}\\int_\\Omega \\frac{\\rho}{\\Delta t} N_\\mb{i} d\\mb{x},&\\mb{i}=\\mb{j}\\\\\n0,&\\textrm{otherwise}\n\\end{array}\n\\right.\n\\end{align}\ngives rise to a sparse matrix in Equation~\\eqref{eq:spd_system}.\n\n\\begin{figure}[!ht]\n\\centering\n\\begin{tabular}{cc}\n\\subf{\\includegraphics[draft=false,width=.45\\columnwidth]{figures\/narrow_band}}\n&\n\\subf{\\includegraphics[draft=false,width=.4\\columnwidth]{figures\/narrow_band2}}\n\\\\\n\\end{tabular}\n\\caption{{\\textbf{Discrete free surface fluid domain}}. {\\textbf{Left}}: We define the fluid domain to consist of cells that either have (1) a particle (dark blue) in them or (2) a node with non-positive level set value (light blue). {\\textbf{Right}}: Boundary Lagrange multiplier external pressures $\\lambda_\\mb{b}$ (orange circles) are like the interior pressures $p_\\mb{c}$ except only defined on fluid domain cells that intersect $\\partial \\Omega_D$.}\\label{fig:fsdnb}\n\\end{figure}\n\n\\begin{figure}[!ht]\n \\centering\n \\begin{subfigure}{.49\\columnwidth}\n \\includegraphics[draft=false,clip,width=1.0\\columnwidth]{figures\/narrow_band_2d0000_cropped.jpeg}\n \n \\end{subfigure} \n \\begin{subfigure}{.49\\columnwidth}\n \\includegraphics[draft=false,clip,width=1.0\\columnwidth]{figures\/narrow_band_2d0048_cropped.jpeg}\n \n \\end{subfigure}\\\\\n \\begin{subfigure}{0.33\\columnwidth}\n \\includegraphics[draft=false,trim={920px 430px 830px 470px},clip,width=1.0\\columnwidth]{figures\/narrowband3d_0.jpeg}\n \\end{subfigure}\n \\begin{subfigure}{0.33\\columnwidth}\n \\includegraphics[draft=false,trim={920px 430px 830px 470px},clip,width=1.0\\columnwidth]{figures\/narrowband3d_12.jpeg}\n \\end{subfigure}\n \\begin{subfigure}{0.33\\columnwidth}\n \\includegraphics[draft=false,trim={920px 430px 830px 
470px},clip,width=1.0\\columnwidth]{figures\/narrowband3d_26.jpeg}\n \\end{subfigure}\n\\caption{{\\textbf{Narrow band free surface}}. A circle\/sphere falls in a tank of water under gravity. Using only a narrow band of particles saves computational cost and enables increased resolution of the free surface. {\\textbf{Top}}: In 2D we illustrate the hybrid particle (dark blue)\/level set (light blue) representation. {\\textbf{Bottom}}: Particles are colored based on velocity magnitude.}\\label{fig:narrow_band_2d}\n\\end{figure}\n\n\n\\subsection{Cut cells}\n\\label{sec:cutcells}\nAs in XFEM and VNA approaches \\cite{belytschko:2009:review,koschier:2017:xfem,schroeder:2014:vna}, we resolve sub-grid-cell geometry by simply performing the integrations in Equations~\\eqref{eq:vol_int}-\\eqref{eq:b_int} over the geometry of the fluid domain. We use a level set to define solid boundaries (green in Figure~\\ref{fig:fsdnb}) on which velocity boundary conditions are defined. We triangulate the zero isocontour using marching cubes \\cite{chernyaev:1995:marching} (see Figure~\\ref{fig:cutcell}). The integrals in Equations~\\eqref{eq:vol_int}-\\eqref{eq:b_int} all involve polynomials over volumetric polyhedra (Equations~\\eqref{eq:vol_int}, blue in Figure~\\ref{fig:cutcell}) or surface polygons (Equations~\\eqref{eq:b_int}, green in Figure~\\ref{fig:cutcell}) and we use Gauss quadrature of sufficiently high order to compute the integrals exactly (see \\cite{gagniere:2020:tech_doc}). For free surface flows, we use particles (and additionally a level set function in the case of narrow banding, see Section~\\ref{sec:nb}) to denote grid cells with fluid in them. Cells near the solid boundary are clipped by the marching cubes geometry. The fluid domain $\\Omega$ is defined as the union of all clipped and full fluid cells (see Figure~\\ref{fig:fsdnb}). 
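Because the cut cell geometry is triangulated, each integral reduces to quadrature over triangles (or the polyhedra they bound). As a minimal single-triangle sketch (our illustration; the paper integrates over the full marching-cubes geometry), the three-point edge-midpoint rule integrates polynomials up to degree two exactly:

```python
import numpy as np

def tri_quad_deg2(v0, v1, v2, f):
    """Integrate f over a triangle with the 3-point edge-midpoint
    quadrature rule, exact for polynomials up to degree 2."""
    v0, v1, v2 = (np.asarray(v, dtype=float) for v in (v0, v1, v2))
    e1, e2 = v1 - v0, v2 - v0
    area = 0.5 * abs(e1[0] * e2[1] - e1[1] * e2[0])  # 2D cross product
    mids = [(v0 + v1) / 2.0, (v1 + v2) / 2.0, (v2 + v0) / 2.0]
    return area / 3.0 * sum(f(p) for p in mids)

# Bilinear integrand x*y over the unit right triangle: analytic value 1/24.
val = tri_quad_deg2((0.0, 0.0), (1.0, 0.0), (0.0, 1.0),
                    lambda p: p[0] * p[1])
assert abs(val - 1.0 / 24.0) < 1e-14
```

Higher-degree integrands (e.g. products of quadratic B-splines) simply require a correspondingly higher-order rule; the principle of exact polynomial quadrature over the clipped geometry is the same.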
\\\\ \n\\\\\nNotably, taking a cut cell approach with our variational formulation allows us to prove that our method can resolve a standing pool of water exactly without producing numerical currents. We know that with gravitational force $\\rho\\gg$ (e.g. with $\\gg$ pointing in the $y$ direction with magnitude $g$), steady state is maintained if the pressure increases with depth as $p=\\rho g\\left(y_0-y\\right)$ where $y_0$ is the height of the water surface at rest, since $-\\nabla p + \\rho\\gg = \\mathbf{0}$. Since we use multilinear interpolating functions for $p$, the exact solution is representable in our discrete space and with a short proof we show (see \\cite{gagniere:2020:tech_doc}) that this means our method will choose it to maintain a standing pool of water, independent of fluid domain boundary geometry.\n\n\\begin{figure}[!ht]\n\\includegraphics[draft=false,width=\\columnwidth]{figures\/cut_cell_combo}\n\\caption{{\\textbf{Cut cells}}. We show the 14 essential cases used in determining the cut cell fluid domain geometry. Blue faces indicate the intersection of the grid cell with the fluid domain. Green faces indicate the velocity boundary condition faces on $\\partial \\Omega_D$.}\\label{fig:cutcell}\n\\end{figure}\n\n\\section{Narrow band free surface}\\label{sec:nb}\nFor free surface flows, we develop a narrow band approach as in \\shortcite{chentanez:2015:coupling,ferstl:2016:narrow,sato:2018:nb}. We represent the fluid domain with a level set and seed particles in a band of width $W$ from the zero isocontour (see Figure~\\ref{fig:fsdnb}). Particles are advected and used to augment BSLQB advection as detailed in Section~\\ref{sec:poly}. We also advect the level set by interpolating its value at the previous step from the upwind location $\\mb{x}_\\mb{i}-\\Delta t \\mb{w}(\\mb{x}_\\mb{i})$ determined in Equation~\\eqref{eq:ad_disc}. 
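The semi-Lagrangian level set update just described can be sketched in 1D (our illustration; linear interpolation at the upwind point, with index clamping at the grid boundary):

```python
import numpy as np

def advect_level_set(phi, w, dx, dt):
    """Semi-Lagrangian level set update: interpolate phi^n at the
    upwind point x_i - dt*w_i with linear interpolation (1D sketch)."""
    n = len(phi)
    x_up = np.arange(n) * dx - dt * w                  # upwind locations
    j = np.clip(np.floor(x_up / dx).astype(int), 0, n - 2)
    t = x_up / dx - j                                  # interpolation weight
    return (1 - t) * phi[j] + t * phi[j + 1]

# A linear level set translated by a constant velocity stays linear,
# so linear interpolation reproduces the translated field exactly.
dx, dt, c = 0.1, 0.05, 0.4
x = np.arange(20) * dx
phi = x - 0.7                                          # signed distance to 0.7
phi_new = advect_level_set(phi, np.full(20, c), dx, dt)
assert np.allclose(phi_new, x - dt * c - 0.7)
```

In the method itself the upwind point is $\mb{x}_\mb{i}-\Delta t\, \mb{w}(\mb{x}_\mb{i})$ from the BSLQB solve, so the level set rides the same characteristics as the velocity field.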
We then use the updated particle locations to compute a narrow band level set from the particles based on the method of Boyd and Bridson \\cite{boyd:2012:multiflip}. We update the level set to be the union of that defined by the narrow band and that from advection. This is done by taking the minimum of the two level set values and then redistancing with the method of Zhao \\shortcite{zhao:2005:fast}.\n\\section{Examples}\n\\begin{figure}[h]\n\\includegraphics[draft=false,width=\\columnwidth]{figures\/vonKarman_cropped}\n\\caption{{\\textbf{Von Karman vortex shedding}}. We demonstrate the accuracy of our Hybrid BSLQB\/PolyPIC with vortex shedding past a notch in 2D. Note the smooth transition between regions with particles (PolyPIC) and those without (BSLQB).}\\label{fig:vonK}\n\\end{figure}\n\n\n\\subsection{Hybrid BSLQB\/PolyPIC}\\label{sec:ex_hybrid}\nWe demonstrate our hybrid BSLQB\/PolyPIC advection with water simulation. We prevent excessive run times by utilizing a narrow band of particles near the free surface and a level set (with BSLQB advection) in deeper levels. Figure~\\ref{fig:narrow_band_2d} Top shows a disc of water splashing in a rectangular tank with dimension $1\\times2$ and grid cell size $\\Delta x = 1\/255$. The time step is restricted to be in the range $\\Delta t \\in \\left[0.005, 0.01\\right]$. 20 particles are initialized in every cell that is initially in a narrow band of $7 \\Delta x$ below the zero isocontour of the level set. Figure~\\ref{fig:narrow_band_2d} Bottom shows an analogous 3D example where a sphere of water splashes in a tank. A cell size of $\\Delta x = \\frac{1}{63}$ is used in a domain with dimensions $1\\times 2 \\times 1$. We take a fixed time step of $\\Delta t = 0.01$ and demonstrate that narrow banding does not prevent larger-than-CFL time steps. 1,008,187 particles are used to resolve the free surface in a narrow band of width $5\\Delta x$. 
As in 2D, the particles capture highly-dynamic behavior of the free surface while the level set is sufficient to represent the bulk fluid in the bottom half of the domain.\\\\\n\\\\\nWe also demonstrate our hybrid advection with a vortex shedding example (see Figure~\\ref{fig:vonK}). The flow domain $\\Omega$ is a $3\\times 1$ rectangle with circle of radius $0.05$. We seed a band of particles of width $.2$ above the midline $y=.5$ for PolyPIC advection. Advection in the rest of the domain is done with BSLQB. The vorticity plot illustrates a seamless transition between the two advection schemes. The simulation was run with a grid resolution of $\\Delta x=\\frac{1}{255}$, CFL number of 4 (i.e. $\\Delta t = \\frac{4\\Delta x}{v_\\textrm{max}}$), and inlet speed of $1.5$.\n\\begin{figure}[!b]\n \\centering\n \\begin{subfigure}{.49\\columnwidth}\n \\includegraphics[draft=false,clip,width=\\columnwidth]{figures\/cutcell_compare1.jpeg}\n \\end{subfigure}\n \\begin{subfigure}{.49\\columnwidth}\n \\includegraphics[draft=false,clip,width=\\columnwidth]{figures\/cutcell_compare2.jpeg}\n \\end{subfigure}\n \n\\caption{{\\textbf{Cut cell vs.\\ voxelized domain}}. Using a cut cell domain (right) instead of a voxelized domain (left) yields marked improvements in simulation quality.}\\label{fig:voxelized_vs_cut_cell}\n\\end{figure}\n\n\\subsection{BSLQB comparison with explicit semi-Lagrangian}\nWe demonstrate improved resolution of flow detail with BSLQB compared to explicit semi-Lagrangian in a 2D example of smoke flowing past a circle (see Figure~\\ref{fig:interp_correction}) and with a 2D spinning circle example (see Figure~\\ref{fig:inner_circle}). Note that particles are only used for flow visualization and not for PolyPIC advection in these examples. BSLQB exhibits more energetic, turbulent flows than semi-Lagrangian advection. Notably, the BSLQB result breaks symmetry sooner. 
In Figure~\\ref{fig:interp_correction} we also examine the effect of extremal values of the $\\lambda$ parameter described in Equation~\\eqref{eq:BSLQBsys}. A zero value of $\\lambda$ is quite dissipative compared to a full value of $\\lambda = 1$ for both semi-Lagrangian and BSLQB. As mentioned in Section~\\ref{sec:bslqb}, we generally found that keeping $\\lambda$ close to 1 provided the least dissipative behavior, while setting the value slightly less than 1 helped restore stability when necessary (one can also dynamically adjust this value over the course of a simulation). In Figure~\\ref{fig:inner_circle}, we initially set the angular velocity to 4 radians per second in a circle of radius $.2$ (with $\\Omega=[0,1]\\times[0,1]$). The simulation is run with $\\Delta x=\\frac{1}{511}$ and a $\\Delta t = .02$ (CFL number of 3).\\\\\n\\\\\nWe examine the convergence behavior of BSLQB for the 2D Burgers' equation $\\frac{D\\mb{u}}{Dt}=\\mathbf{0}$ with initial data $\\mb{u}(\\mb{x})=\\mb{x}\\cdot\\left(\\AA\\mb{x}\\right)$ for $\\AA=\\mb{R}\\boldsymbol\\Lambda\\mb{R}^T$ for diagonal $\\boldsymbol\\Lambda$ with entries $1$ and $.25$ and rotation (of $.1$ radians) $\\mb{R}$ (see Figure~\\ref{fig:conv}). We examine the convergence behavior under refinement in space and time with $\\Delta t=\\Delta x$. We compute the best fit line to the plot of the logarithm of the $L^\\infty$ norm of the error versus the logarithm of $\\Delta x$ for a number of grid resolutions. We observe slopes of approximately 2 for BSLQB with interpolation parameter $\\lambda=1$ and $\\lambda=1-c\\Delta x$ (with $c = 2.95$), indicating second order accuracy in space and time under refinement. 
We observe slopes of approximately 1 for explicit semi-Lagrangian, indicating first order.\n\\begin{figure}[ht]\n \\centering\n \\begin{subfigure}{1.0\\columnwidth}\n \\includegraphics[draft=false,trim={0 60px 0 60px},clip,width=\\columnwidth]{figures\/interp_correction_0300.pdf}\n \\end{subfigure}\n \\\\\n \\begin{subfigure}{1.0\\columnwidth}\n \\includegraphics[draft=false,trim={0 40px 0 300px},clip,width=\\columnwidth]{figures\/interp_correction_0060.jpeg}\n \\end{subfigure}\n\\caption{{\\textbf{Interpolation correction}}. BSLQB exhibits more fine-scale flow detail and vorticity than semi-Lagrangian for extremal values of interpolation parameter $\\lambda$ (Equation~\\eqref{eq:BSLQBsys}). From left to right: semi-Lagrangian with $\\lambda = 0$, BSLQB with $\\lambda = 0$, semi-Lagrangian with $\\lambda = 1$, BSLQB with $\\lambda = 1$.}\\label{fig:interp_correction}\n\\end{figure}\n\n\\begin{figure}[ht]\n\\includegraphics[draft=false,width=\\columnwidth]{figures\/Convergence}\n\\caption{{\\textbf{Convergence}}. We compare explicit semi-Lagrangian (SL, red), with BSLQB (blue) and interpolation coefficient $\\lambda=1$ (Equation~\\eqref{eq:BSLQBsys}) and BSLQB with interpolation coefficient $\\lambda=1-c\\Delta x$ (orange). We plot $\\log(\\Delta x)$ versus $\\log(e)$ (where $e$ is the infinity norm of the error) for a variety of grid resolutions $\\Delta x$ and compute the best fit lines. 
The slope of the line provides empirical evidence for the convergence rate of the method.}\\label{fig:conv}\n\\end{figure}\n\\begin{figure}[!ht]\n \\centering\n \\begin{subfigure}{\\columnwidth}\n \\includegraphics[draft=false,trim={0 200px 0 200px},clip,width=\\columnwidth]{figures\/smoke_fast0104.jpeg}\n \\end{subfigure}\\\\\n \\begin{subfigure}{\\columnwidth}\n \\includegraphics[draft=false,trim={0 200px 0 200px},clip,width=\\columnwidth]{figures\/smoke_fast0161.jpeg}\n \\end{subfigure}\\\\\n \\begin{subfigure}{\\columnwidth}\n \\includegraphics[draft=false,trim={0 200px 0 200px},clip,width=\\columnwidth]{figures\/smoke_fast0350.jpeg}\n \\end{subfigure}\n\\caption{{\\textbf{Smoke jet}}. A plume of smoke is simulated with BSLQB. Zero normal velocity boundary conditions are enforced on the irregular boundary of the sphere, inducing intricate flow patterns as the smoke approaches it.}\\label{fig:smokejet}\n\\end{figure}\n\n\\subsection{Cut cell examples}\nWe demonstrate the ability of our cut cell method to produce detailed flows in complicated irregular domains for smoke and free surface water examples. Figure~\\ref{fig:rainbowsmoke} demonstrates the subtle and visually interesting behavior that arises as two plumes of multicolored smoke flow toward the center of a cubic domain and collide with a spherical boundary. We use $\\Delta x = 1\/63$ and $\\Delta t = .02$. We demonstrate a more complex domain in Figure~\\ref{fig:bunnysmoke}. Puffs of colored smoke with converging initial velocities are placed in a bunny-shaped clear domain. We use grid size $\\Delta x = 1\/127$ and a fixed time step of $\\Delta t = 0.01$ (CFL number $>1$). In Figure~\\ref{fig:snowglobe}, we demonstrate water splashing that accurately conforms to the walls of an irregular domain defined as the interior of a large sphere and the exterior of a small inner sphere. The spatial resolution of the domain is $\\Delta x = 1\/127$, and 30 particles per cell are seeded in the initial fluid shape. 
A minimum time step of $\\Delta t=0.001$ is enforced, which is often larger than the time step dictated by the CFL condition. We also consider dam break simulations in rectangular domains with column obstacles (Figure~\\ref{fig:dambreak_final}) and a bunny obstacle (Figure~\\ref{fig:bunny_dambreak_final}). Both examples use a grid cell size of $\\Delta x = 1\/127$, 8 particles per cell, and a fixed time step of $\\Delta t = 0.003$. Lastly, we demonstrate the benefits of our cut cell formulation over a simpler, voxelized approach in Figure~\\ref{fig:voxelized_vs_cut_cell}. Notice the water naturally sliding in the cut cell domain compared with the jagged flow in the voxelized domain.\n\n\n\n\\subsection{Performance considerations}\n\n\\begin{table}[]\n\\begin{tabular}{@{}llll@{}}\n\\toprule\nExample & Seconds & \\# Particles & $\\Delta x^{-1}$ \\\\ \\midrule\nSmoke Jet (Fig.~\\ref{fig:smokejet}) & 1,212 & 12,502,349 & 127 \\\\\nMultiple Jets (Fig.~\\ref{fig:rainbowsmoke}) & 53 & 25,004,699 & 63 \\\\\nBunny Smoke (Fig.~\\ref{fig:bunnysmoke}) & 160 & 24,000,000 & 127 \\\\\nSmoke Spheres$\\text{*}$ (Fig.~\\ref{fig:bigsmoke}) & 428 & 64,000,000 & 255 \\\\\nNarrow Band (Fig.~\\ref{fig:narrow_band_2d}) & 396 & 1,008,187 & 63 \\\\\nWater Globe (Fig.~\\ref{fig:snowglobe}) & 242 & 524,415 & 127 \\\\\nDam Break (Fig.~\\ref{fig:dambreak_final}) & 870 & 3,251,409 & 127 \\\\\nBunny Dam Break (Fig.~\\ref{fig:bunny_dambreak_final}) & 1,171 & 4,797,535 & 127 \\\\ \\bottomrule\n\\end{tabular}\n\\caption{Average time per frame (in seconds) for each of the 3D examples shown in the paper. 
Examples were run on workstations with 16-core CPUs running at 2.20 GHz, except for the smoke spheres example, which was run on a cluster equipped with CPUs running at 3.07 GHz and Nvidia Tesla V100 GPUs, which were used for the linear solves.}\n\\label{tbl:perf}\n\\end{table}\n\nThe implementation of our method takes advantage of hybrid parallelism (MPI, OpenMP, and CUDA\/OpenCL) on heterogeneous compute architectures in order to achieve practical runtime performance (see Table~\\ref{tbl:perf} for 3D example performance numbers). The spatial domain is uniformly divided into subdomains assigned to distinct MPI ranks, which distributes much of the computational load at the expense of synchronization overhead from exchanging ghost information across ranks. On each rank, steps of our time integration loop such as BSLQB advection are multithreaded using OpenMP or CUDA when appropriate. The dominant costs per time step are the solution of the pressure projection system and, in the case of free surface simulation, assembly of the pressure system and its preconditioner. We permute Equation~\\eqref{eq:spd_system} so that each rank's degrees of freedom are contiguous in the solution vector and then solve the system with AMGCL \\cite{demidov:2019:amgcl} using the multi-GPU VexCL backend (or the OpenMP CPU backend on more limited machines). Using a strong algebraic multigrid preconditioner with large-degree Chebyshev smoothing allows our system to be solved to the desired tolerance in tens of iterations, even at fine spatial resolution. An important step in minimizing the cost of system assembly is to scalably parallelize sparse matrix-matrix multiplication, for which we use the algorithm of Saad \\shortcite{saad:2003:sparse}. In the future, we are interested in implementing load balancing strategies such as the simple speculative load balancing approach of \\cite{shah:2018:balancing}, particularly for free surface flows. 
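The rank-contiguous permutation of degrees of freedom described above can be sketched as follows (illustrative ownership data; not the paper's implementation):

```python
import numpy as np

# Sketch (not the paper's code) of permuting a symmetric system so each MPI
# rank's degrees of freedom (DOFs) are contiguous in the solution vector.
# `owner[i]` is a hypothetical rank owning DOF i; a stable argsort groups each
# rank's DOFs together while preserving their relative order.
def rank_contiguous_permutation(owner):
    return np.argsort(owner, kind="stable")

owner = np.array([1, 0, 2, 0, 1, 2])    # ownership of 6 DOFs (illustrative)
p = rank_contiguous_permutation(owner)  # [1, 3, 0, 4, 2, 5]
A = np.arange(36, dtype=float).reshape(6, 6)
A = A + A.T                             # symmetric stand-in for the SPD system
A_perm = A[np.ix_(p, p)]                # P A P^T: the permuted system
b = np.arange(6, dtype=float)
b_perm = b[p]                           # right-hand side permuted consistently
```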
We note that our implementation enables high-resolution simulations such as that in Figure~\\ref{fig:bigsmoke} at relatively modest computational cost (see Table~\\ref{tbl:perf}).\n\n\\section{Discussion and Limitations}\nOur approach has several key limitations that could be improved. First, our adoption of collocated multiquadratic velocity and multilinear pressure is a significant departure from most fluid solvers utilized in graphics applications. We note that BSLQB and BSLQB\/PolyPIC could be used with a MAC grid; however, each velocity face component would have to be solved for individually. Another drawback of our multiquadratic velocity and multilinear pressure formulation is that it gives rise to a very wide pressure system stencil consisting of 49 non-zero entries per row in 2D and 343 in 3D. Collocated approaches that make use of multilinear velocities and constant pressure give rise to 9 (2D) and 27 (3D) entries per row \\cite{zhang:2017:impm}; however, they do not allow for $C^1$ continuity and require spurious pressure mode damping. Our wide stencils likely negatively affect the efficacy of preconditioning techniques as well; however, we were very pleased with the efficiency of the AMGCL \\cite{demidov:2019:amgcl} library. Also, while the use of mass lumping in Equation~\\eqref{eq:mass_lump} is necessary to ensure a sparse pressure projection system, Botella et al. \\shortcite{botella:2002:collocation} note that this has been shown to degrade accuracy. In fact, Botella et al. \\shortcite{botella:2002:collocation} introduce a sparse approximate inverse to the full mass matrix to avoid dense systems of equations in the pressure solve without degrading accuracy. Split cubic interpolation, which approximates similar systems with tridiagonal ones, could also possibly be used for this \\cite{huang:1994:semi}. Adoption of one of these approaches with our formulation would be an interesting area of future work. 
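The mass-lumping idea discussed above can be illustrated with generic row-sum lumping (a textbook sketch; the paper's lumped operator may differ):

```python
import numpy as np

# Generic sketch of row-sum mass lumping: replace a consistent mass matrix M
# with a diagonal matrix of its row sums, keeping the projection system
# sparse. This illustrates the idea only, not the paper's exact scheme.
def lump(M):
    return np.diag(M.sum(axis=1))

# Consistent mass matrix for two 1D linear elements of unit size (three
# nodes); a textbook example, not taken from the paper.
M = np.array([[2.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 2.0]]) / 6.0
M_lumped = lump(M)  # diagonal entries are the row sums; total mass preserved
```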
Also, we note that the more sophisticated transition criteria for narrow banding techniques in Sato et al. \\shortcite{sato:2018:nb} could naturally be used with our method. Finally, we note that the work of Zehnder et al. \\shortcite{zehnder:2018:advection,narain:2019:ref} could be easily applied to our technique to further reduce dissipation since it is based on the Chorin \\shortcite{chorin:1967:numerical} splitting techniques (Equations~\\eqref{eq:split_a}-\\eqref{eq:split_div}) that we start from.\n\n\\begin{figure}[!hb]\n \\centering\n \\begin{subfigure}{.49\\columnwidth}\n \\includegraphics[draft=false,trim={0 0 0 0},clip,width=\\columnwidth]{figures\/bigsmoke_0001.jpeg}\n \\end{subfigure} %\n \\begin{subfigure}{.49\\columnwidth}\n \\includegraphics[draft=false,trim={0 0 0 0},clip,width=\\columnwidth]{figures\/bigsmoke_0064.jpeg}\n \\end{subfigure} \\\\ %\n \\begin{subfigure}{.49\\columnwidth}\n \\includegraphics[draft=false,trim={0 0 0 0},clip,width=\\columnwidth]{figures\/bigsmoke_0128.jpeg}\n \\end{subfigure} %\n \\begin{subfigure}{.49\\columnwidth}\n \\includegraphics[draft=false,trim={0 0 0 0},clip,width=\\columnwidth]{figures\/bigsmoke_0240.jpeg}\n \\end{subfigure}\n\n\\caption{{\\textbf{High-resolution smoke}}: Two spheres of smoke collide in a high-resolution 3D simulation ($\\Delta x = 1\/255$). BSLQB accurately resolves vortical flow detail.}\\label{fig:bigsmoke}\n\\end{figure}\n\n\\bibliographystyle{ACM-Reference-Format}\n\n\\section{Introduction}\n\nIn their paper ``AndrODet: An adaptive Android obfuscation detector'' \\cite{Mirzaei2019}, Mirzaei et al. present the modular machine learning system AndrODet, which is capable of detecting three types of obfuscation in Android apps using statically extracted features: string encryption, identifier renaming, and control-flow obfuscation. AndrODet uses the MOA \\cite{bifet2010moa} framework to perform on-line learning. 
The authors also compare their on-line approach with batch learning. In this comment paper, we have only considered the string encryption (SE) detection capabilities of AndrODet. Therefore, we restrict the discussion to SE detection for the remainder of the paper.\n\nThe authors propose several features for classifying apps as string encrypted, such as the average string entropy, average string length, etc., and evaluate on-line and batch models based on these features. For on-line learning, a combination of the AMD \\cite{Wei2017} and PraGuard \\cite{Maiorca2015} datasets are used for training and evaluation, whereas for batch-learning the model was trained on the AMD dataset and evaluated on the PraGuard dataset. The AMD set consists of 24,553 malware samples labeled with the obfuscation methods used by each sample (if any). PraGuard was constructed by collecting samples from two malware databases and running them through the DexGuard \\cite{DexGuard} obfuscation tool.\n\nThe evaluation by Mirzaei et al.\\ shows that, using the features proposed in the paper, both batch learning and on-line learning approaches can achieve over 80\\% accuracy in SE detection. However, upon closer examination, we discovered a significant methodological problem in the way their evaluation is performed. While the AMD dataset has almost 25,000 malware samples, the samples in the dataset all belong to one of 71 malware families. Therefore, many samples share unique characteristics due to the fact that they belong to the same family. Moreover, for most malware families, all samples are in the same class, i.e., either all samples use SE or none of the samples use SE. (Only 3,883 samples belong to a family that has both SE and non-SE members.) This introduces a risk of \\emph{memorization}, i.e., that the classifier learns a set of unique signatures of \\emph{malware families}, instead of a generalizable model for detecting SE. 
Also, during evaluation of their approach, Mirzaei et al.\\ make no effort to organize the dataset in a way that prevents malware of the same family from appearing in both the training and testing data. Therefore, memorization would also risk artificially inflating the measured accuracy of the model. Our own evaluation strongly suggests that the high accuracy that they report is indeed due to memorization. We trained and evaluated models on the AMD dataset with the same configuration and feature set as Mirzaei et al., using both batch learning and on-line learning. In the first experiment, 100 training and testing subsets were created by randomly sampling apps from the AMD dataset. In the second experiment, we made sure that the datasets were split so that samples from the same family never appeared in both the training and testing subsets. In the first experiment, we achieved an average accuracy of 92\\% and 89\\% for the batch and on-line cases respectively, which is similar to the accuracy reported by Mirzaei et al.\\ However, in the second experiment, wherein we eliminated the possibility of learning a model based on the malware family, the respective accuracies dropped to 50\\% and 51\\%.\n\nThe dramatic drop in accuracy was surprising to us at first, as the accuracy when evaluating the batch learning model on the PraGuard dataset was very high according to the paper, which in turn seems to imply that the model can generalize well across different datasets. Unfortunately, however, the good performance on the PraGuard dataset appears to be coincidental. Since the DexGuard tool that was used to generate the PraGuard dataset employs a SE method that is fundamentally different from what is used by most other obfuscators, this dataset is not well-suited to evaluate how well an SE detector generalizes. 
Furthermore, the PraGuard dataset is particularly ill-suited to evaluate the AndrODet system, since the DexGuard tool effectively makes it impossible to statically extract the features used by AndrODet from encrypted strings. We elaborate further on this issue in Section \\ref{sec:PraGuard}.\n\n\\section{Background}\n\nIn this section, we first provide some technical background on Android apps, which is required to understand the following discussion. We then briefly describe the AndrODet system.\n\nAndroid apps are distributed in the form of Android application packages (APKs), which can in turn contain several DEX files. Each DEX file contains static data and bytecode of one or more classes. Of particular interest to our discussion here is the \\emph{string section} of a DEX file, where all identifiers (method names, etc.) and constant strings used by the contained classes are stored. Each unique constant string has one entry in the string section. Obfuscation tools that implement SE replace the plaintext constant strings stored in the string section of an app with encrypted or scrambled versions and insert special decryption logic into the program code to decrypt strings just prior to their use. (Only non-identifier strings are relevant in the context of SE, as identifiers must be statically resolvable. Therefore, identifiers are obfuscated through renaming.)\n\nThe SE detection component of AndrODet retrieves all (non-identifier) strings from the string section of all DEX files in an APK and extracts several features from the string material. The features are then fed to a machine learning model that determines whether or not the app uses SE. 
The following features are computed by AndrODet from the strings in the string section:\n\n\\begin{itemize}\n\t\\item Average entropy\n\t\\item Average wordsize\n\t\\item Average length\n\t\\item Average number of '\\texttt{=}' characters\n\t\\item Average number of '\\texttt{-}' characters\n\t\\item Average number of '\\texttt{\/}' characters\n\t\\item Average number of '\\texttt{+}' characters\n\t\\item Average number of repeated characters\n\\end{itemize}\n\nOn a side note, we believe that the authors' use of the term ``average wordsize'' is misleading since, based on the published source code of AndrODet, it turns out that the ``wordsize'' does not refer to the average size of words in strings. Instead, it is the size of the underlying Python string object used to represent strings in their implementation. It is unclear to us why the authors chose to use this as a feature, since the size of a string object is not solely determined by the number of bytes required to represent a string, but also depends on, for example, the underlying Python implementation and the memory allocator used by the OS. Also, the size of a string object is not guaranteed to be the same for two strings with the same content, or to be consistent between runs.\n\nAndrODet uses the MOA platform for on-line learning, in which samples are fed to the on-line learning system in a streaming fashion. The \\emph{prequential} evaluation mode of MOA was used for evaluation: When a new sample arrives from the stream, it is classified by the current model and the result is recorded. The model is then immediately updated with the new sample, and the process repeats for the next sample. The final accuracy is computed as the average percentage of correct classifications. The authors also train a model using batch learning for comparison. 
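The averages itemized above can be reproduced with a short sketch (our re-implementation, not AndrODet's code; the wordsize and repeated-character features are omitted):

```python
import math
from collections import Counter

# Per-app averages over all constant strings: Shannon entropy, length, and
# counts of the '=', '-', '/', '+' characters. Illustrative only.
def entropy(s):
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in Counter(s).values()) if n else 0.0

def string_features(strings):
    n = len(strings)
    def avg(f):
        return sum(f(s) for s in strings) / n
    feats = {"avg_entropy": avg(entropy), "avg_length": avg(len)}
    for ch in "=-/+":
        feats[f"avg_{ch}"] = avg(lambda s: s.count(ch))
    return feats

feats = string_features(["aGVsbG8=", "d29ybGQ=", "plain text"])  # toy strings
```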
For batch learning, they use the ATM framework \\cite{Swearingen2017} for automatic tuning of model hyperparameters.\n\n\\section{Empirical Verification of the Problem}\n\nWe have evaluated both the batch and on-line learning approaches described by Mirzaei et al., using the published AndrODet source code. We used all of the samples from the AMD dataset, except for 135 samples in which the strings in the string section could not be decoded. We created the datasets for training and testing in the following way: First, 100 training and testing sets were constructed by splitting the AMD dataset into two sets of equal size using completely random sampling. Next, we created another 100 pairs of training and testing sets, by repeating the above procedure, with the added constraint that samples from the same malware family should never appear in both training and testing data. These two collections of training and testing sets will be referred to as the \\emph{random} and \\emph{non-overlapping} sets, respectively, for the remainder of this section. 
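A minimal Python sketch of this family-disjoint splitting (helper names and the 50/50 target mirror the text, not the published code):

```python
import random

# Whole malware families are assigned at random to the training side until it
# holds about half the samples; the remaining families form the test side, so
# no family appears on both sides.
def family_disjoint_split(samples_by_family, seed=0):
    rng = random.Random(seed)
    families = list(samples_by_family)
    rng.shuffle(families)
    total = sum(len(v) for v in samples_by_family.values())
    train, test = [], []
    for fam in families:
        target = train if len(train) < total / 2 else test
        target.extend(samples_by_family[fam])
    return train, test

data = {"famA": ["a1", "a2"], "famB": ["b1"], "famC": ["c1", "c2", "c3"]}
train, test = family_disjoint_split(data)
```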
The exact procedure for creating the non-overlapping sets is outlined in Algorithm \\ref{alg:nonoverlapping}.\n\n\\begin{algorithm}\n\t$F$: set of all families\\\\\n\t$S$: set of all samples\\\\\n\t$T_r$: set of samples in the training set\\\\\n\t$T_e$: set of samples in the testing set\\\\\n\ts($f$): a function returning all the samples belonging to family $f$\\\\\n\trand($s$): a function returning a random element from set $s$\n\t\\BlankLine\n\t\\SetKwInOut{Input}{input}\\SetKwInOut{Output}{output}\n\t\\Input{$S,F$}\n\t\n\t$T_r = \\emptyset$\\\\\n\t\\While{$|T_r| \\leq \\frac{|S|}{2}$}{\n\t\t$f$ = rand($F$)\\\\\n\t\t$F = F - f$\\\\\n\t\t$T_r = T_r \\cup s(f)$\n\t}\n\t$T_e = S - T_r$\n\t\n\t\\Output{$T_r,T_e$}\n\t\n\t\\caption{Procedure for constructing training and testing sets without overlapping malware families.}\n\t\\label{alg:nonoverlapping}\n\\end{algorithm}\n\nWe repeated the experiments for each of the 100 train\/test sets of the respective splitting strategies. Like Mirzaei et al., we used an SVM classifier for batch learning and Leveraging Bagging for on-line learning. For the batch learning case we ran the ATM framework to find the best hyperparameters for each train\/test set, using a budget of 200 configuration trials. For the on-line case, we first trained a model on the training set, and then used the model to classify the samples in the testing data (without updating the model). The results for batch and on-line learning are shown in Figure \\ref{fig:boxplot-full}. For the on-line case, we get a mean accuracy of 89\\% with the random sets, which is similar to the accuracy reported by Mirzaei et al.\\ For the non-overlapping case, however, the accuracy drops to only 51\\% on average (with a much greater variance across the 100 models). 
For the batch learning case we get similar results, with an average accuracy of 92\\% for the random case, and 50\\% for the non-overlapping case.\n\n\\begin{figure}\n\t\\centering\n\\begin{tikzpicture}\n\\begin{axis}[\naxis on top,\ntick pos=both,\nxmin=0.5, xmax=4.5,\nxtick style={color=black},\nxtick={1,2,3,4},\nxticklabel style = {rotate=90.0},\nxticklabels={MOA-random,MOA-nonoverlapping,ATM-random,ATM-nonoverlapping},\nylabel={Accuracy},\nymin=0, ymax=1,\nytick style={color=black}\n]\n\\addplot [blue]\ntable \n\t0.775 0.88768531411254\n\t1.225 0.88768531411254\n\t1.225 0.893173069047424\n\t0.775 0.893173069047424\n\t0.775 0.88768531411254\n};\n\\addplot [blue, dashed]\ntable \n\t1 0.88768531411254\n\t1 0.881726595134737\n};\n\\addplot [blue, dashed]\ntable \n\t1 0.893173069047424\n\t1 0.897534605618806\n};\n\\addplot [black]\ntable \n\t0.8875 0.881726595134737\n\t1.1125 0.881726595134737\n};\n\\addplot [black]\ntable \n\t0.8875 0.897534605618806\n\t1.1125 0.897534605618806\n};\n\\addplot [blue, mark=*, mark size=1, mark options={solid,draw=black}, only marks]\ntable \n\t1 0.879351298222623\n\t1 0.878532230321894\n\t1 0.878614137111967\n};\n\\addplot [blue]\ntable \n\t1.775 0.448953868188499\n\t2.225 0.448953868188499\n\t2.225 0.565457452789083\n\t1.775 0.565457452789083\n\t1.775 0.448953868188499\n};\n\\addplot [blue, dashed]\ntable \n\t2 0.448953868188499\n\t2 0.331892251413369\n};\n\\addplot [blue, dashed]\ntable \n\t2 0.565457452789083\n\t2 0.718708718626156\n};\n\\addplot [black]\ntable \n\t1.8875 0.331892251413369\n\t2.1125 0.331892251413369\n};\n\\addplot [black]\ntable \n\t1.8875 0.718708718626156\n\t2.1125 0.718708718626156\n};\n\\addplot [blue, mark=*, mark size=1, mark options={solid,draw=black}, only marks]\ntable \n\t2 0.193103448275862\n\t2 0.211149571871015\n\t2 0.19775890746739\n\t2 0.245456808563605\n\t2 0.172118618159584\n\t2 0.80905030757563\n\t2 0.813487972508591\n\t2 0.830046082949309\n\t2 0.807444061962134\n};\n\\addplot [blue]\ntable 
\n\t2.775 0.915369809157179\n\t3.225 0.915369809157179\n\t3.225 0.919690392333524\n\t2.775 0.919690392333524\n\t2.775 0.915369809157179\n};\n\\addplot [blue, dashed]\ntable \n\t3 0.915369809157179\n\t3 0.910803505610615\n};\n\\addplot [blue, dashed]\ntable \n\t3 0.919690392333524\n\t3 0.923007617331477\n};\n\\addplot [black]\ntable \n\t2.8875 0.910803505610615\n\t3.1125 0.910803505610615\n};\n\\addplot [black]\ntable \n\t2.8875 0.923007617331477\n\t3.1125 0.923007617331477\n};\n\\addplot [blue, mark=*, mark size=1, mark options={solid,draw=black}, only marks]\ntable \n\t3 0.896387910557785\n\t3 0.926939143254976\n};\n\\addplot [blue]\ntable \n\t3.775 0.441470797594011\n\t4.225 0.441470797594011\n\t4.225 0.556582811685542\n\t3.775 0.556582811685542\n\t3.775 0.441470797594011\n};\n\\addplot [blue, dashed]\ntable \n\t4 0.441470797594011\n\t4 0.309025650004377\n};\n\\addplot [blue, dashed]\ntable \n\t4 0.556582811685542\n\t4 0.726636475916015\n};\n\\addplot [black]\ntable \n\t3.8875 0.309025650004377\n\t4.1125 0.309025650004377\n};\n\\addplot [black]\ntable \n\t3.8875 0.726636475916015\n\t4.1125 0.726636475916015\n};\n\\addplot [blue, mark=*, mark size=1, mark options={solid,draw=black}, only marks]\ntable \n\t4 0.202896551724138\n\t4 0.0830752413918747\n\t4 0.177315805564048\n};\n\\addplot [red]\ntable \n\t0.775 0.89102301580801\n\t1.225 0.89102301580801\n};\n\\addplot [red]\ntable \n\t1.775 0.510450114148141\n\t2.225 0.510450114148141\n};\n\\addplot [red]\ntable \n\t2.775 0.917642722581702\n\t3.225 0.917642722581702\n};\n\\addplot [red]\ntable \n\t3.775 0.493219442035339\n\t4.225 0.493219442035339\n};\n\\end{axis}\n\\end{tikzpicture}\n\t\\caption{Box plot of classifier accuracy for the random and non-overlapping train\/test configurations, using the batch (ATM) and on-line (MOA) machine learning frameworks.}\n\t\\label{fig:boxplot-full}\n\\end{figure}\n\nOne potential concern for us was that the distribution of samples from different families in the AMD dataset is 
highly skewed. For example, about one third of all samples belong to one family. This could, potentially, result in some training or testing sets containing samples from only a handful of families when we use our non-overlapping approach. To rule out the possibility that this was the cause of the reduced accuracy in the non-overlapping cases, we conducted another set of experiments where a classifier was trained on samples from all families except one, and the model was evaluated on the left-out family. We repeated this process for all families and computed the average accuracy, weighted by the number of samples in each family. The experiment was performed both for batch (ATM) and on-line learning (MOA), using the same parameters and setup as described above. Similarly to our first set of experiments, the average accuracy was 51\\% for batch learning and 59\\% for on-line learning. Moreover, the F-scores for the respective cases were 0.14 and 0.23. Figure \\ref{fig:acc-families} shows the individual classification accuracy for each family. As can be seen from the figure, the accuracy varies widely between different families.\n\n\\begin{figure*}\n\t\\includegraphics[width=\\linewidth]{acc-families}\n\t\\caption{Individual classification accuracy of each family, when a classifier was trained on all other families in the AMD dataset.}\n\t\\label{fig:acc-families}\n\\end{figure*}\n\n\\section{Shortcomings of the PraGuard Dataset for Evaluating AndrODet}\n\\label{sec:PraGuard}\n\nThe PraGuard dataset was created by obfuscating all samples from the combined MalGenome and Contagio malware datasets using the obfuscation tool DexGuard. Mirzaei et al.\\ used 1495 string-encrypted samples from PraGuard, along with the same number of non-obfuscated samples, to evaluate their batch-learned model. The reason that we believe PraGuard to be a poor choice for evaluating AndrODet is the particular way in which DexGuard implements SE. 
Most obfuscators that support SE work by substituting a plaintext string in the string section with an encrypted version. Therefore, the encrypted string can still be readily extracted from the string section and analyzed. However, DexGuard implements a more sophisticated form of SE that stores encrypted strings as byte arrays, rather than as actual strings in the string section. Since such byte arrays cannot be easily told apart from regular arrays used by the obfuscated app, DexGuard effectively prevents extraction and analysis of encrypted strings. Although DexGuard is closed source, independent analysis of DexGuard-obfuscated apps has confirmed this method of applying SE \\cite{Moses2018}. Because DexGuard prevents AndrODet from extracting strings from the string section to compute features used for classification, we believe PraGuard is a poor choice for evaluating the generalizability of AndrODet. In fact, detecting DexGuard-obfuscated apps in which all (or almost all) constant strings have been removed from the string section is a trivial task that can be achieved with a simple heuristic, and does not require an advanced machine learning approach. We have analyzed the PraGuard dataset and confirmed that for 90\\% of the SE samples \\emph{all} non-identifier strings were removed from the string section\\footnote{Additionally, for about 3\\% of the samples, the AndrODet code for extracting strings failed to process the APK.}. The vast majority of the remaining samples had only a small number of non-identifier strings. We speculate that the relatively good performance on the PraGuard dataset reported by Mirzaei et al.\\ is due to some DexGuard-encrypted samples appearing in the AMD dataset, allowing the classifier to learn to identify this particular type of SE. (For example, it would be trivial for a machine learning system to learn that an average string length of 0 indicates obfuscation.) 
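The trivial heuristic mentioned in the parenthetical above amounts to a few lines (the tolerance value is an assumption):

```python
# DexGuard-style string encryption (almost) empties the string section, so a
# near-absent set of non-identifier strings already signals this particular
# kind of SE without any learned model. Tolerating up to 2 leftover strings
# is an illustrative choice, not a tuned value.
def dexguard_style_se(non_identifier_strings, max_remaining=2):
    return len(non_identifier_strings) <= max_remaining
```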
However, as our experiments in the preceding section show, AndrODet fails to accurately detect SE in the general case.\n\n\\section{Discussion and Conclusion}\n\nOur experimental evaluation of the SE detection capabilities of AndrODet strongly indicates that the system fails to learn a generalizable model for detecting SE, and instead bases its decision on the malware family to which a sample belongs. Since we obtained similar results with both the on-line and batch learning approaches proposed by Mirzaei et al., we believe the main problem to be the features used by AndrODet, rather than limitations of a specific machine learning approach. A potential explanation for the weak discriminative power of the features used to detect SE in AndrODet is that they are all based on compound statistics over all strings in an app. Since the percentage of strings that are actually subjected to encryption can vary dramatically between SE-obfuscated apps (i.e., it cannot be assumed that all strings in an SE-obfuscated app are encrypted), features such as the \\emph{average} string entropy or length are not very informative. A more viable approach might be to detect SE on a per-string basis, and determine, for example, a threshold on the number of obfuscated strings for classifying an app as using SE.\n\nFinally, it should be noted that we have only considered SE detection in this comment paper. Since we have not evaluated the performance of AndrODet's detection of identifier renaming and control flow obfuscation, we can neither confirm nor rule out the possibility of similar problems when detecting these kinds of obfuscations. 
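The per-string alternative suggested above could be prototyped as follows (thresholds are illustrative assumptions, not tuned values):

```python
import math
from collections import Counter

# Flag individual strings whose Shannon entropy is high, then call the app
# string-encrypted if the fraction of flagged strings is large enough.
def entropy(s):
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in Counter(s).values()) if n else 0.0

def looks_encrypted(s, min_len=12, min_entropy=3.5):
    return len(s) >= min_len and entropy(s) >= min_entropy

def app_uses_se(strings, min_flagged_fraction=0.3):
    flagged = sum(looks_encrypted(s) for s in strings)
    return flagged / max(len(strings), 1) >= min_flagged_fraction
```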
Since Mirzaei et al.\\ use compound statistics for the entire app also when detecting these other two types of obfuscation, one potential concern is that AndrODet may fail to generalize to cases where only a subset of the identifiers, or only a fraction of the code of an app, has been obfuscated.\n\n\\smallskip\n\\noindent\n\\emph{In the interest of open science, we have made the code for running our experiments available at}\\\\ \n\\begin{minipage}{\\linewidth}\n{\\small \\url{https:\/\/github.com\/alirezamohammadinodooshan\/androdet-se-eval}}\n\\end{minipage}\n\n\\section*{Acknowledgments}\nThe calculations were performed using computer resources provided by the Swedish National Infrastructure for Computing (SNIC) at the National Supercomputer Centre (NSC).\n\n\\section{Introduction} \nDetection of cosmic rays arriving at the Earth with energies above \n$10^{20} $ eV questions the presence of the GZK cutoff~\\cite{gzk}. This cutoff determines the energy where the cosmic ray spectrum\nis expected to abruptly steepen. Cosmic rays with ultra high energies (above \n$\\sim 5 \\times 10^{19} $ eV) lose energy through photoproduction of pions\nwhen traversing the Cosmic Microwave Background Radiation (CMB). As\nthe CMB attenuates ultra high energy cosmic rays (UHECR) on a 50 Mpc\nscale (or characteristic distance) at $10^{20}$ eV,\none can determine its maximum production distance. 
An event\nof $10^{20} {\\rm eV}$ has to be produced within $\\sim 100$ Mpc, unless it is\na non-standard particle~\\cite{cfk,afk}.\nThe absence of any powerful source located within this \nrange~\\cite{sommers} --- that could accelerate a cosmic ray to such an \nenergy --- turns the existence \nof these events into a mystery, the so-called GZK puzzle. \n\nThe results of two important cosmic ray experiments, AGASA~\\cite{agasa} and \nHiRes~\\cite{hires}, are not consistent. Not only is the energy \nspectrum measured by HiRes systematically below the one measured by AGASA, \nbut also the \nHiRes spectrum steepens around $10^{20} $ eV while AGASA's spectrum flattens \naround this energy region. The steepening in the HiRes spectrum may be in \nagreement with a GZK cutoff, while AGASA's is thought not to be. \n\nThere are many possible ways to understand this discrepancy\n~\\cite{demarco,stanev}. \nThe Pierre \nAuger Observatory~\\cite{auger} will soon have a statistically significant \ndata sample and\nwill certainly shed light on these events.\n\nIn this article we focus on the role\nof the shape of the error distribution in the energy determination. \nWe show that the intrinsic \nfeatures of an air shower result in a lognormal error distribution on the\nenergy determination. \nThe minimum standard deviation of this distribution ($\\sigma$) is set by \nphysical properties of the shower. \nIf additional errors due to detection --\nwhich increase the $\\sigma$ -- are not kept to a minimum, the end of the\nenergy spectrum will be smeared in a way that the GZK feature might not be seen.\n\nUnderstanding the energy error is crucial in order to determine whether \nor not the GZK cutoff is present. \nA lognormal error distribution on the reconstructed primary cosmic ray\nenergy is to be expected due to fluctuations\nboth in the shower starting point as well as from the cascade development \n\\cite{gaisser}. 
According to simulations by the AUGER collaboration \n\cite{desrep}, the depth of first interaction affects the rate of development\nof the particle cascade of the shower, which results in a fluctuation\nof about 15\% in the number of muons and about 5\% in the electromagnetic\ncomponent. Auger also predicts that the number of muons in a proton induced \nshower increases with primary energy as E$^{0.85}$ \cite{desrep}. \nThe 15\% fluctuation will then contribute as a fixed fractional error, and\nthe fluctuation in the number of muons on the ground will be \n$N = (1 \pm 0.15) N_0 (E\,\/\,E_0)^{0.85}$. Therefore one has to add a 15\% \ncontribution to the error in estimating the primary energy in addition \nto the $\sqrt{N}$ error factor.\nSince this shower-starting fluctuation is a fixed percentage of the \nenergy, it results in a lognormal error distribution.\n\nThere are mainly two ways of determining the energy: ground detectors reconstruct\nthe energy from the particle density at a certain distance from the\nshower core, while fluorescence detectors \ndetermine the energy through the shower longitudinal profile~\cite{hires}.\nThe longitudinal profile gives the number of particles in the shower\nper unit depth and is well known to have large fluctuations. \nAs mentioned above, the fluctuations arise both from the shower starting point \nand from the cascade development. \nThe same is expected for the energy determination in ground detectors, \nsince the particle density depends on the number of particles.\n\nThe inherent fluctuations and the resulting lognormal error distribution will\ncrucially affect the analysis of data collected in ground arrays, since their data\nsample is collected at one particular depth. 
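The muon-based error budget described above can be sketched numerically: the 15\% development fluctuation enters as a fixed fractional error, combined in quadrature with the Poisson $\\sqrt{N}$ sampling term, with $N$ scaling as $E^{0.85}$. The normalization $N_0$ and reference energy $E_0$ below are hypothetical values chosen only for illustration:

```python
import math

def fractional_energy_error(E, E0=1e18, N0=1e7, f_dev=0.15, beta=0.85):
    # Muon number for a proton shower, N ~ N0 * (E/E0)**beta (beta = 0.85),
    # with the fixed 15% development fluctuation added in quadrature to the
    # Poisson sqrt(N) sampling term. N0 and E0 are illustrative assumptions.
    n_mu = N0 * (E / E0) ** beta
    return math.sqrt(f_dev ** 2 + 1.0 / n_mu)

# At high energy N is large, so the fixed 15% term dominates.
print(round(fractional_energy_error(1e20), 3))  # -> 0.15
```

At lower energies the $\\sqrt{N}$ term grows, so the total fractional error increases above the 15\% floor.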
They also affect fluorescence\ndata, but since the energy reconstruction uses the full longitudinal profile of\nthe shower, there is more information available to estimate the original \nenergy.\n\nFigure~\ref{fig:grdp} shows the distribution\nof particles at ground level for $2 \times 10^4$ simulated \nshowers~(using \cite{aires}) \nfrom $10^{20}$ eV protons. A lognormal fit with $\sigma = 0.08$ is\nsuperimposed, and it is clear that the distribution has a lognormal shape.\nThe same distribution for showers from $10^{18}$~eV\nprotons is shown in Figure~\ref{fig:grdp18}. The poor fit is due to an excess\nof simulated events relative to the lognormal at the high end.\nThe standard deviation of the fit is 0.14. Effects due to errors with\nasymmetric and non-Gaussian tails are shown in \cite{vaz}.\n\n\begin{figure} \n\centering\leavevmode \epsfxsize=220pt \epsfbox{pgrd_e20.eps}\n\caption{Distribution of total number of particles at ground level.\nRatio of number of particles over average is shown.\n$2\times 10^4$ showers were simulated with the Aires~\cite{aires} package.\nPrimary particles are $10^{20}$ eV protons and $\langle {\rm N_{part}}\n\rangle = 2.7 \times 10^{10}$. Superimposed is a lognormal curve with $\sigma = 0.08$.}\n\label{fig:grdp}\n\end{figure}\n\begin{figure} \n\centering\leavevmode \epsfxsize=220pt \epsfbox{pgrd_e18.eps}\n\caption{Same as Figure~\ref{fig:grdp} but for $10^{18}$ eV protons\nas the primaries. Here $\langle {\rm N_{part}} \rangle = 14.7 \times 10^7$.\nSuperimposed is a lognormal curve with $\sigma = 0.14$. The poor fit is due to \nan excess of simulated events relative to the lognormal at the high end.}\n\label{fig:grdp18}\n\end{figure}\n\nThe simulated showers used the Sibyll interaction model and assumed that the ground \nwas at sea level (defined in Aires \cite{aires} as 0\,m or 1036\,g\/cm$^2$). 
A more\nthorough analysis is under way to understand why the error distribution for lower\nenergies (as in Figure~\ref{fig:grdp18}) deviates from the lognormal shape.\nHowever, it is clear that most of the excess events come from the tail of the\nshower maximum depth (XMAX) distribution. In Figure~\ref{fig:xmax18} we\nshow the XMAX distribution for the same events used in Figure~\ref{fig:grdp18}.\nIf we cut events with XMAX $>$ 890 g\/cm$^2$ from this data set, \nthe ground particle distribution will lose part of the excess events. This \ndistribution is shown in Figure~\ref{fig:cut18}. \nThese excess events, if included, would only exaggerate the effect we discuss here.\n\n\n\begin{figure} \n\centering\leavevmode \epsfxsize=220pt \epsfbox{xmax_e18.eps}\n\caption{Maximum shower depth (XMAX) distribution for $2 \times 10^{4}$ showers \nwith $10^{18}$ eV protons as the primaries. The arrow on the XMAX axis indicates\nwhere an analysis cut will be applied.}\n\label{fig:xmax18}\n\end{figure}\n\begin{figure} \n\centering\leavevmode \epsfxsize=220pt \epsfbox{pgrde18cut.eps}\n\caption{Same as Figure~\ref{fig:grdp18} but events with XMAX $>$ 890 g\/cm$^2$\nwere removed. Superimposed is a lognormal curve with $\sigma = 0.13$. \nThe fit improves in relation to Figure~\ref{fig:grdp18}.}\n\label{fig:cut18}\n\end{figure}\n\n\nThe results shown in \nFigures~\ref{fig:grdp} and~\ref{fig:grdp18} are also dependent on the location of\nthe ground level. We have also simulated events with ground above\nsea level at 950\,g\/cm$^2$. The lognormal continues to fit the\n$10^{20}$ eV distribution well, and its $\sigma$ improves to 0.05. The $10^{18}$ eV \ndistribution still has an excess, but the chi-square improves to 4 and the $\sigma$\nto 0.10.\n\nBelow we will describe how we determine the UHECR spectrum\nassuming an injection power spectrum from cosmologically distributed sources. 
\nWe account for energy loss due to propagation through the CMB. \nWe then describe how the energy error is\nevaluated and how it affects the energy reconstruction and the determination\nof the GZK cutoff.\n\n\section{Analytical determination of UHECR propagation and energy \nspectrum}\nOur analytical approach assumes a cosmological cosmic ray flux. We \nassume extragalactic sources isotropically distributed at different \nredshifts~\cite{blan}. These\nsources produce a power-law energy spectrum (injection spectrum) which is\nassumed to be:\n\begin{equation}\nF(E) = k E^{-\alpha} \exp\left(-\frac{E}{E_{max}}\right)\n\label{eq:flux}\n\end{equation}\nwhere $E$ is the cosmic ray energy, $k$ is a normalization factor, $\alpha$ \nis the spectral index and $E_{max}$ is the maximum energy at the source.\n\nThe energy degradation of protons through the CMB includes losses \ndue to pair production~\cite{blu,geddes}, expansion of the universe~\cite{bere} \nand photopion production~\cite{bere}. These losses at the present\nepoch are shown in Figure~\ref{fig:enloss}.\n\begin{figure} \n\centering\leavevmode \epsfxsize=250pt \epsfbox{enloss_t.eps}\n\caption{Energy losses (as labeled) of a proton traversing the CMB at \nthe present epoch.}\n\label{fig:enloss}\n\end{figure}\nWe include current values\nfor the matter and dark energy density ($\Omega_M$ and $\Omega_\Lambda$)\nwhen considering the energy loss due to expansion of the universe \n($\beta_z$):\n\n\begin{equation}\n\beta_z(E,z) = H_0 \sqrt{\Omega_M (1 + z)^3 + \Omega_\Lambda}\n\end{equation}\nwhere $\beta$ is defined as $\beta = 1\/E \times dE\/dt$ and \n$\Omega_M = 0.3$, $\Omega_\Lambda = 0.7$ and $H_0 = 75 $\n~km~s$^{-1}$~Mpc$^{-1}$.\n\nThe energy losses due to pair or photopion production ($\beta(E,z)$)\nat an epoch with redshift $z$ are also corrected. 
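For concreteness, the adiabatic loss rate $\\beta_z$ above is easy to evaluate numerically; the sketch below uses the cosmological parameters quoted in the text, with the unit conversion of $H_0$ being our own:

```python
import math

# H0 = 75 km/s/Mpc converted to 1/s (1 Mpc ~ 3.086e22 m).
H0 = 75.0e3 / 3.086e22
OMEGA_M, OMEGA_L = 0.3, 0.7

def beta_z(z):
    # Expansion (adiabatic) loss rate beta_z = H0*sqrt(Om*(1+z)^3 + OL), in 1/s.
    return H0 * math.sqrt(OMEGA_M * (1.0 + z) ** 3 + OMEGA_L)

# At z = 0 the e-folding energy-loss time 1/beta_z is just the Hubble time, ~4e17 s.
print(1.0 / beta_z(0.0))
```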
Since the number density\nof the cosmic background photons varies as $n = n_0 \\, (1+z)^3$ the energy loss\nat z differs from the energy loss today ($\\beta_0(E)$) in the following way:\n\n\\begin{equation}\n\\beta(E,z) = (1 + z)^3 \\beta_0((1+z)E)\n\\end{equation}\n\nOnce all energy loss mechanisms are known, the energy with which a\nproton has to be generated in order to account for the energy observed\ntoday can be determined. The generated energy depends on the distance\nor epoch (redshift) from today. This can be well determined by a\nmodification factor $\\eta(E,z)$~\\cite{bere} which relates the generated\nenergy spectrum to the modified (and measured) one.\n\nThe cosmological flux assumes the observer in the center of a sphere\nof large radius and an isotropic density of sources \\cite{blan}.\nThe flux at the Earth is given by:\n\\begin{eqnarray*}\nj(E) & = & \\frac{c}{4 \\pi H_0} \\int_0^z F(E_g) \n\\left(\\frac{E_g}{E}\\right)^{-\\alpha} (1 + z)^m \n\\frac{dE_g}{dE} \\\\\n& & \\times \\frac{dz}\n{(1 + z)\\left[\\Omega_M (1 + z)^3 + \\Omega_\\Lambda\\right]^{1\/2}}\n\\end{eqnarray*}\nwhere $E_g$ is the generated cosmic ray energy (at a source located with\nredshift $z$); $F(E)$ is given by Equation~\\ref{eq:flux}; $E$ is the cosmic \nray energy determined at the Earth; $\\alpha$ is the same spectral index as\nin Equation~\\ref{eq:flux}; $m$ accounts for the luminosity evolution of the\nsources and $c$ is the speed of light. We assume $m = 0$ and therefore do not\ntake luminosity evolution into account. \nThe modification factor $\\eta$ is given by:\n\\begin{equation}\n\\eta = \\int_0^z \\left(\\frac{E_g}{E}\\right)^{-\\alpha} \\frac{dE_g}{dE} \\frac{dz}\n{(1 + z)\\left[\\Omega_M (1 + z)^3 + \\Omega_\\Lambda\\right]^{1\/2}}\n\\end{equation}\n\nFor comparison, we determine the modification factor for arbitrary\nredshifts and assuming no cosmological constant. 
Our results match those\nof~\cite{bere,demarco} and are shown in Figure~\ref{fig:mod}.\n\n\begin{figure} \n\centering\leavevmode \epsfxsize=250pt \epsfbox{mf.eps}\n\caption{Modification factor~\cite{bere} from our analytical calculation\nversus cosmic ray energy. Curves are for different redshifts (top: \n$z = 0.002$; middle: $z = 0.02$; bottom $z = 0.2$)\nand assuming no cosmological constant in order to compare results to \n\cite{bere,demarco}.}\n\label{fig:mod}\n\end{figure}\n\nFigure~\ref{fig:specg} shows (black solid curve) the expected cosmic ray\nflux at Earth versus energy ($E$), multiplied by $E^3$, \nfor a cosmological injection spectrum with $\alpha = 2.6$. The expected GZK\nfeature is present.\n\n\begin{figure} \n\centering\leavevmode \epsfxsize=250pt \epsfbox{fluxg.8.eps}\n\caption{Cosmic ray energy spectrum ($\times E^3$) from a cosmological \nflux (solid black line) with spectral index $\alpha = 2.6$. The other curves \nare the energy spectrum convoluted with a lognormal\nerror with standard deviations $\sigma$ as shown.}\n\label{fig:specg}\n\end{figure}\n\n\section{Error effects on the energy reconstruction}\nWe will now assume that the cosmic ray energy spectrum from a cosmological\nisotropic distribution of sources \nis the true spectrum. To understand how an error in the reconstructed energy\naffects the spectrum, we convolute the cosmological flux assuming a lognormal\nerror on the energy.\n\nThe lognormal distribution is given by\n\begin{equation}\n\frac{dP(E',E)}{d\log E} = k 
\exp\left[-\frac{1}{2\sigma^2} \log^2\frac{E'}{E}\right]\n\end{equation}\nwhere $k = 1\, \/\, \sqrt{2 \pi}\sigma$ is a normalization to unit area and \n$\sigma$ is the standard deviation of $\log_{10}E$. 
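As a quick numerical check of this definition (our own illustration), the weight integrates to one in $\\log_{10}E$, and applying it to a toy power-law spectrum with a sharp cutoff shows how events leak above the cutoff; the spectrum and cutoff energy below are illustrative only:

```python
import math

def lognormal_weight(E_rec, E_true, sigma):
    # dP/dlog10(E_rec) for reconstructing E_rec from true energy E_true;
    # sigma is the standard deviation of log10(E); normalized to unit area.
    x = math.log10(E_rec / E_true)
    return math.exp(-x * x / (2.0 * sigma ** 2)) / (math.sqrt(2.0 * math.pi) * sigma)

def smear(flux, E_rec, sigma, span=5.0, n=4001):
    # Reconstructed flux at E_rec: quadrature over log10(E_true).
    dlog = 2.0 * span / (n - 1)
    total = 0.0
    for i in range(n):
        E_true = 10.0 ** (math.log10(E_rec) - span + i * dlog)
        total += flux(E_true) * lognormal_weight(E_rec, E_true, sigma) * dlog
    return total

norm = smear(lambda E: 1.0, 1.0e20, 0.25)       # unit normalization, ~1.0
toy = lambda E: (E / 1.0e19) ** -2.6 if E < 5.0e19 else 0.0
leak = smear(toy, 1.0e20, 0.25)                 # > 0 although toy(1e20) == 0
print(round(norm, 3), leak > 0.0)
```

The positive `leak` above the cutoff is precisely the smearing of the GZK feature discussed in the text.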
\nWhen a lognormal error in the energy reconstruction is assumed, the flux will be \nconvoluted in the following way:\n\begin{equation}\ndF'(E) = F(E') \frac{dP(E',E)}{dE} dE' \n\end{equation}\nwhere $F$ is given by Equation~\ref{eq:flux}.\n\nThe expected flux ($\times E^3$) for energies reconstructed with a lognormal\nerror distribution is shown in Figure~\ref{fig:specg}. The curves are for \na spectral index\n$\alpha = 2.6$ and $\sigma = 0.08$, 0.14, 0.25 and 0.5 as labeled.\n\nIt is very clear that not only does the flux increase by a constant factor, but\nthe GZK feature is also smeared. As shown in Figures~\ref{fig:grdp}\nand \ref{fig:grdp18}, the standard deviation of the lognormal \ndistribution that would be obtained in an ideal case, where thousands of events \nare detected, depends on the energy of the primary particle. It is\n0.08 for a $10^{20}$ eV proton and 0.14 for a $10^{18}$ eV proton.\nFigure~\ref{fig:specg} shows that if the standard deviation is above\n0.14 the GZK cutoff will show up at higher energies than in the true \nspectrum.\n\n\section{Results and Conclusions}\nFigure~\ref{fig:specg} shows how the energy spectrum from a cosmological\nflux is smeared due to a lognormal error in the energy reconstruction.\nIntrinsic shower fluctuations lead to a lognormal distribution of\nobserved energy deposition and number of particles in the shower.\nA standard deviation of $\log_{10}E$ equal to 0.25 is enough to modify not only the \nshape but also the normalization of the spectrum measured at the Earth.\nAs a consequence the GZK feature will be smeared and might not be detected \nat all. Such a $\sigma$ (standard deviation of $\log_{10}E$) can\neasily result from a detector that only samples a small portion of\nthe total number of particles. This will be more crucial\nfor ground detectors since their particle sample is detected all at one \nheight. 
The standard deviation of the intrinsic \nenergy error distribution for ground detectors is expected to be larger \nthan for fluorescence detectors.\n\nThe air fluorescence\ndetectors will have a lower intrinsic lognormal standard deviation, as they\nobserve the full development of the shower within their field of view.\nThey miss only what goes into the ground or falls outside the field\nof view.\n\nAs the Pierre Auger Observatory will not only increase the data sample\nto a significant level, but also combine both ground and fluorescence\ntechniques, it will have the constraints needed to understand and better control\nthe errors in the energy reconstruction. In this way it is\npossible to keep the standard deviation of the intrinsic lognormal\nenergy error at its minimum value.\n\nThe lognormal curves shown in \nFigures~\ref{fig:grdp} and \ref{fig:grdp18} have $\sigma = 0.08$ and 0.14,\nrespectively. However, one can expect a larger value from an observed distribution,\nsince the detectors sample only a portion of the total number of particles.\nOn the other hand,\nthe standard deviation of the distribution depends on the ground level altitude,\nand\ntherefore an analysis equivalent to ours has to be done for a specific\ndepth.\n\nFigure~\ref{fig:2spec} shows that the effect of the lognormal energy error\nalso depends on the spectral index of the injection spectrum. However,\nthe error in the energy reconstruction will smear the flux in a\nsignificant way independently of the spectral index.\n\n\begin{figure} \n\centering\leavevmode \epsfxsize=300pt \epsfbox{flux2.eps}\n\vspace*{-2.5cm}\n\caption{Same as Figure~\ref{fig:specg} but with spectral index $\alpha = 3.0$\n(top) and $\alpha = 2.3$ (bottom). 
Curve with crosses (x) has \n$\sigma = 0.5$; circles (o), $\sigma = 0.25$ and squares, $\sigma = 0.1$.}\n\label{fig:2spec}\n\end{figure}\n\nWe have shown that a lognormal error in the energy reconstruction of\nthe UHECR spectrum will affect not only the shape but also the normalization\nof the measured energy spectrum. A standard deviation equal to or greater \nthan 0.25 will smear the GZK feature. As a consequence this feature will \nnot be seen. \nThis result is independent of the spectral index of the injection spectrum.\n\nIn conclusion, the establishment of the presence or not of the GZK cutoff in \nthe \nUHECR spectrum depends not only on a larger data sample but also on the \ndetermination of the shape of the energy error distribution. The standard \ndeviation of this distribution has to be kept at its intrinsic value. If\nit is equal to or greater than 0.25, the GZK feature will be smeared and will not be\ndetected.\n\n{\em Acknowledgements --} \nWe thank Don Groom for useful comments.\nThis work was partially supported by NSF Grant Physics\/Polar Programs\nNo. 0071886 and in part\nby the Director, Office of Energy Research, Office of High Energy and\nNuclear Physics, Division of High Energy Physics of the U.S. Department\nof Energy under Contract Num. DE-AC03-76SF00098 through the Lawrence\nBerkeley National Laboratory.\n\n\section{Introduction}\nLet $\sigma\in(0,1)$ with $\sigma\neq\frac{1}{2}$. We consider the Cauchy problem for the fractional nonlinear Schr\"odinger equation\n\begin{equation}\tag{$\textup{NLS}_\sigma$}\ni\partial_tu+(-\Delta)^\sigma u+\mu|u|^{p-1}u=0,\ u(0)=u_0\in H^s,\n\end{equation}\nwhere $\mu= \pm 1$ depending on the focusing or defocusing case. The operator $(-\Delta)^{\sigma}$ is the so-called fractional laplacian, a Fourier multiplier with symbol $|\xi|^{2\sigma}$. The fractional laplacian is the infinitesimal generator of some Levy processes \cite{B}. 
A rather extensive study of the potential theoretic aspects of this operator can be found in \cite{landkof}. \n\n\nThe previous equation is a fundamental equation of fractional quantum mechanics, a generalization of the standard quantum mechanics extending the Feynman path integral to Levy processes \cite{laskin}. \n\nThe purpose of the present paper is to develop a general well-posedness and ill-posedness theory in Sobolev spaces. The one-dimensional case has been treated in \cite{CHKL} for cubic nonlinearities, i.e. $p=3$, and $\sigma\in(\frac{1}{2},1)$. Here, we consider a higher-dimensional version and other types of nonlinear terms. We also include all $\sigma\in (0,1)$ except $\sigma=\frac{1}{2}$; furthermore, contrary to \cite{CHKL} where the use of Bourgain spaces was crucial (since the main goal of their paper was to derive well-posedness theory on the flat torus), we rely only on standard Strichartz estimates and functional inequalities in $\mathbb{R}^d.$ In the case of Hartree-type nonlinearities, the local well-posedness and blow-up have been investigated in \cite{ozawa}. \n\nIn the present paper, we will not consider global aspects with large data. For that, we refer the reader to \cite{sire2} for a study of the energy-critical equation in the radial case, following the seminal work of Kenig and Merle \cite{km1,km2}. 
As a consequence, we do not consider blow-up phenomena, an aspect we will treat in a forthcoming work.\n\nWe introduce two important exponents for our purposes: \n$$\ns_c=\\frac{d}{2}-\\frac{2\\sigma}{p-1}\n$$\nand \n$$s_g=\\frac{1-\\sigma}{2}.$$\n\nHere, $s_c$ is the scaling-critical regularity exponent in the following sense: for $\\lambda>0$, the transformation\n\\[u(t,x)\\mapsto \\frac{1}{\\lambda^{2\\sigma\/(p-1)}} u\\Big(\\frac{t}{\\lambda^{2\\sigma}},\\frac{x}{\\lambda}\\Big), \\quad u_0(x)\\mapsto \\frac{1}{\\lambda^{2\\sigma\/(p-1)}} u_0\\Big(\\frac{x}{\\lambda}\\Big)\\]\nkeeps the equation invariant and one can expect local-wellposedness for $s \\geq s_c$, since the scaling leaves the $\\dot H^{s_c}$ norm invariant. On the other hand, $s_g$ is the critical regularity in the ``pseudo\"-Galilean invariance (see the proof of ill-posedness below). \nUnder the flow of the equation ($\\textup{NLS}_\\sigma$), the following\nquantities are conserved:\n\\begin{align*}\nM[u]=&\\int_{\\mathbb R^d} |u(t,x)|^2dx&&\\textup{(mass)},\\\\\nE[u]=&\\int_{\\mathbb R^d}\\frac{1}{2}|\\,|\\nabla|^{\\sigma}u(t,x)|^2+\\frac{\\mu\n}{p+1}|u(t,x)|^{p+1}dx.&&\\textup{(energy)}.\n\\end{align*}\n\nAn important feature of the equation under study is a loss of derivatives for the Strichartz estimates as proved in \\cite{COX}. Unless additional assumptions are met such as radiality as in \\cite{zihua}, one has a loss of $d(1-\\sigma)$ derivatives in the dispersion (see \\eqref{dispersive estimate with loss}). This happens to be an issue in several arguments. \n\n\n\\subsection*{Main results} The goal of this paper is to show that $(\\textup{NLS}_\\sigma)$ is locally well-posed in $H^s$ for $s\\geq \\max(s_c,s_g, 0)$, and it is ill-posed in $H^s$ for $s\\in (s_c, 0)$. We start with well-posedness results. 
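Before stating them, we record for convenience the standard computation behind $s_c$ (a routine verification, not part of the original argument): with $u_{0,\lambda}(x)=\lambda^{-2\sigma\/(p-1)}u_0(x\/\lambda)$,

```latex
\[
\|u_{0,\lambda}\|_{\dot{H}^s}
=\lambda^{\frac{d}{2}-s-\frac{2\sigma}{p-1}}\|u_0\|_{\dot{H}^s},
\]
```

so the $\dot{H}^s$ norm is left invariant by the scaling precisely when $s=\frac{d}{2}-\frac{2\sigma}{p-1}=s_c$.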
\n\\begin{theorem}[Local well-posedness in subcritical cases]\\label{subcritical LWP}\nLet\n\\begin{align*}\n\\left\\{\\begin{aligned}\n&s\\geq s_g&&\\text{ when }d=1\\textup{ and }2\\leq p<5,\\\\\n&s>s_c&&\\text{ when }d=1\\textup{ and }p\\geq 5,\\\\\n&s>s_c&&\\text{ when }d\\geq 2\\textup{ and }p\\geq3.\n\\end{aligned}\n\\right.\n\\end{align*}\nThen, $(\\textup{NLS}_\\sigma)$ is locally well-posed in $H^s$. \n\\end{theorem}\n\n\n\n\\begin{theorem}[Local well-posedness in critical cases]\\label{critical LWP}\nSuppose that\n\\begin{align*}\n\\left\\{\\begin{aligned}\n&p>5&&\\text{ when }d=1,\\\\\n&p>3&&\\text{ when }d\\geq 2.\n\\end{aligned}\n\\right.\n\\end{align*}\nThen, $(\\textup{NLS}_\\sigma)$ is locally well-posed in $H^{s_c}$. \n\\end{theorem}\n\nThe proof of Theorem \\ref{critical LWP} is based on a new method, improving on estimates in \\cite{CKSTT}. This improvement, based on controlling the nonlinearity in a suitable space, is necessary due to the loss of derivatives in the Strichartz estimates. \n\nAs a by-product, we also prove small data scattering. \n\\begin{theorem}[Small data scattering]\\label{scattering}\nSuppose that\n\\begin{align*}\n\\left\\{\\begin{aligned}\n&p>5&&\\text{ when }d=1,\\\\\n&p>3&&\\text{ when }d\\geq 2.\n\\end{aligned}\n\\right.\n\\end{align*}\nThen, there exists $\\delta>0$ such that if $\\|u_0\\|_{H^{s_c}}<\\delta$, then $u(t)$ scatters in $H^{s_c}$. Precisely, there exist $u_\\pm \\in H^{s_c}$ such that\n$$\\lim_{t\\to\\pm\\infty}\\|u(t)-e^{it(-\\Delta)^\\sigma}u_\\pm\\|_{H^{s_c}}=0.$$\n\\end{theorem}\n\n\\begin{remark}\nContrary to the case $\\sigma\\neq\\frac{1}{2}$, when $\\sigma=\\frac{1}{2}$, the fractional NLS does not have small data scattering. See \\cite{krieger}. \n\\end{remark}\n\nFinally, our last theorem is the ill-posedness result. Note that our result is not optimal, since one should expect ill-posedness in $H^s$ up to $s_g=\\frac{1-\\sigma}{2}$, which is nonnegative. 
We hope to come back to this issue in a forthcoming work. \n\n\begin{theorem}[Ill-posedness]\label{ill-posedness}\nLet $d=1,2$ or $3$ and $\sigma\in(\frac{d}{4},1)$. If $p$ is not an odd integer, we further assume that $p\geq k+1$, where $k$ is an integer larger than $\frac{d}{2}$. Then, $(\textup{NLS}_\sigma)$ is ill-posed in $H^s$ for $s\in (s_c,0)$.\n\end{theorem}\n\nAn interesting feature of the previous ill-posedness result is the fact that, contrary to the standard NLS equation ($\sigma=1$), there is no exact Galilean invariance. However, one can introduce a new ``pseudo-Galilean invariance\" which is enough for our purposes. More precisely, for $v\in\mathbb{R}^d$, we define the transformation\n$$\mathcal{G}_vu(t,x)=e^{-iv\cdot x}e^{it|v|^{2\sigma}}u(t,x-2t\sigma|v|^{2(\sigma-1)}v).$$\nNote that when $\sigma=1$, $\mathcal{G}_v$ is simply a Galilean transformation, and that NLS is invariant under this transformation, that is, if $u(t)$ solves NLS, so does $\mathcal{G}_vu(t)$. However, when $\sigma\neq1$, $(\textup{NLS}_\sigma)$ is not exactly symmetric with respect to pseudo-Galilean transformations. This opens the way to the construction of solitons for $(\textup{NLS}_\sigma)$ which happen to be different from the ones constructed in the standard case $\sigma=1$. 
Indeed, if we search for exact solutions of the type\n\begin{equation}\nu(t,x)=e^{it(|v|^{2\sigma}-\omega^{2\sigma})} e^{-iv\cdot x}Q_\omega(x-2t\sigma|v|^{2(\sigma-1)}v),\n\end{equation}\nthen the profile $Q_\omega$ solves the pseudo-differential equation\n\begin{equation}\label{Q equation}\n\mathcal{P}_vQ_\omega+\omega^{2\sigma}Q_\omega-|Q_\omega|^{p-1}Q_\omega=0,\n\end{equation}\nwhere\n\begin{equation}\n\mathcal{P}_v=e^{iv\cdot x}(-\Delta)^\sigma e^{-iv\cdot x}-|v|^{2\sigma}-2i\sigma|v|^{2\sigma-2}v\cdot\nabla,\n\end{equation}\ni.e., $\mathcal{P}_v$ is a Fourier multiplier $\widehat{\mathcal{P}_v f}(\xi)=p_v(\xi)\hat{f}(\xi)$, with symbol\n\begin{equation}\np_v(\xi)=|\xi-v|^{2\sigma}-|v|^{2\sigma}+2\sigma|v|^{2\sigma-2}v\cdot\xi.\n\end{equation}\nWe plan to come back to this issue in future works. \n\n\n\n\n\n\section{Strichartz Estimates}\n\nIn this section, we review Strichartz estimates for the linear fractional Schr\"odinger operators. \nWe say that $(q,r)$ is \textit{admissible} if\n$$\frac{2}{q}+\frac{d}{r}=\frac{d}{2},\quad 2\leq q,r\leq\infty,\quad (q,r,d)\neq(2,\infty,2).$$\nWe define the Strichartz norm by\n$$\|u\|_{S_{q,r}^s(I)}:=\||\nabla|^{-d(1-\sigma)(\frac{1}{2}-\frac{1}{r})}u\|_{L_{t\in I}^qW_x^{s,r}},$$\nwhere $I=[0,T)$. Let $\psi: \mathbb{R}^d\to [0,1]$ be a compactly supported smooth function such that $\sum_{N\in 2^{\mathbb{Z}}}\psi_N=1$, where $\psi_N(\xi)=\psi(\frac{\xi}{N})$. For dyadic $N\in 2^{\mathbb{Z}}$, let $P_N$ be a Littlewood-Paley projection, that is, $\widehat{P_N f}(\xi)=\psi(\frac{\xi}{N})\hat{f}(\xi)$. 
Then, we define a slightly stronger Strichartz norm by \n$$\\|u\\|_{\\tilde{S}_{q,r}^s(I)}:=\\Big(\\sum_{N\\in2^\\mathbb{Z}}\\|P_N(|\\nabla|^{-d(1-\\sigma)(\\frac{1}{2}-\\frac{1}{r})}u)\\|_{L_{t\\in I}^qW_x^{s,r}}^2\\Big)^{1\/2}.$$\n\n\\begin{proposition}[Strichartz estimates \\cite{COX}]\\label{Strichartz}\nFor an admissible pair $(q,r)$, we have\n\\begin{align*}\n\\|e^{it(-\\Delta)^\\sigma}u_0\\|_{S_{q,r}^s(I)}, \\|e^{it(-\\Delta)^\\sigma}u_0\\|_{\\tilde{S}_{q,r}^s(I)}&\\lesssim\\|u_0\\|_{H^s},\\\\\n\\Big\\|\\int_0^t e^{i(t-s)(-\\Delta)^\\sigma}F(s)ds\\Big\\|_{S_{q,r}^s(I)}&\\lesssim \\|F\\|_{L_{t\\in I}^1H_x^{s}},\\\\\n\\Big\\|\\int_0^t e^{i(t-s)(-\\Delta)^\\sigma}F(s)ds\\Big\\|_{\\tilde{S}_{q,r}^s(I)}&\\lesssim \\|F\\|_{L_{t\\in I}^1H_x^{s}}.\n\\end{align*}\n\\end{proposition}\n\n\\begin{proof}[Sketch of Proof]\nBy the standard stationary phase estimate, one can show that\n$$\\|e^{it(-\\Delta)^\\sigma}P_1\\|_{L^1\\to L^\\infty}\\lesssim|t|^{-\\frac{d}{2}},$$\nand by scaling,\n\\begin{equation}\\label{dispersive estimate with loss}\n\\|e^{it(-\\Delta)^\\sigma}P_N\\|_{L^1\\to L^\\infty}\\lesssim N^{d(1-\\sigma)}|t|^{-\\frac{d}{2}}.\n\\end{equation}\nThen, it follows from the argument of Keel-Tao \\cite{KT} that for any $I\\subset\\mathbb{R}$,\n\\begin{align*}\n\\|e^{it(-\\Delta)^\\sigma}P_N(|\\nabla|^{-d(1-\\sigma)(\\frac{1}{2}-\\frac{1}{r})}u_0)\\|_{L_{t\\in I}^q W_x^{s,r}}&\\lesssim\\|P_Nu_0\\|_{H^s},\\\\\n\\Big\\|\\int_0^t e^{i(t-s)(-\\Delta)^\\sigma}P_N(|\\nabla|^{-d(1-\\sigma)(\\frac{1}{2}-\\frac{1}{r})}F)(s)ds\\Big\\|_{L_{t\\in I}^qW_x^{s,r}}&\\lesssim \\|P_NF\\|_{L_{t\\in I}^1H_x^{s}}.\n\\end{align*}\nSquaring the above inequalities and summing them over all dyadic numbers in $2^{\\mathbb{Z}}$, we prove Strichartz estimates.\n\\end{proof}\n\n\nThe loss of derivatives is due to the Knapp phenomenon (see \\cite{zihua}). 
However, in the radial case, one can overcome this loss, as proved in \cite{zihua}, at the price of restricting the admissible powers of the fractional laplacian. Indeed, it is proved in \cite{zihua} that one has optimal Strichartz estimates if $\sigma \in (d\/(2d-1),1)$. In particular, the number $d\/(2d-1)$ is larger than $1\/2$, so there is a gap between the Strichartz estimates for the wave operator $\sigma=1\/2$ and those occurring for higher powers. This suggests that a new phenomenon might occur in this range of powers. \n\n\n\n\n\section{Local Well-posedness}\n\nWe establish local well-posedness of the fractional NLS by the standard contraction mapping argument based on Strichartz estimates. Due to the loss of regularity in Strichartz estimates, our proof relies on $L_x^\infty$ bounds (see Lemmas 3.2 and 3.3).\n\n\subsection{Subcritical cases}\n\nFirst, we consider the case where $d=1$ and $2\leq p<5$. In this case, the equation is scaling-subcritical in $H^s$ for $s>s_g$, since $s_g>s_c$. We remark that in the proof, we control the $L_{t\in I}^4L_x^\infty$ norm simply by Strichartz estimates (see \eqref{1d Strichartz 1} and \eqref{1d Strichartz 2}).\n\n\begin{proof}[Proof of Theorem \ref{subcritical LWP} when $d=1$ and $2\leq p<5$]\nWe define\n$$\Phi_{u_0}(u):=e^{it(-\Delta)^\sigma}u_0+ i\mu\int_0^t e^{i(t-s)(-\Delta)^\sigma}(|u|^{p-1}u)(s)ds.$$\nLet\n$$\|u\|_{X^s}:=\|u\|_{L_{t\in I}^\infty H_x^s\cap L_{t\in I}^4 L_x^\infty},$$\nwhere $I=[0,T)$. 
Then, applying the 1d Strichartz estimates\n\\begin{align}\n\\|e^{it(-\\Delta)^\\sigma}u_0\\|_{L_{t\\in I}^4 L_x^\\infty}&\\lesssim\\|u_0\\|_{\\dot{H}^{s_g}}\\label{1d Strichartz 1},\\\\\n\\|e^{it(-\\Delta)^\\sigma}u_0\\|_{L_{t\\in I}^\\infty H_x^s}&\\lesssim\\|u_0\\|_{H^s},\\nonumber\\\\\n\\Big\\|\\int_0^t e^{i(t-s)(-\\Delta)^\\sigma}F(s)ds\\Big\\|_{L_{t\\in I}^4L_x^\\infty}&\\lesssim \\|F\\|_{L_{t\\in I}^1\\dot{H}_x^{s_g}}\\label{1d Strichartz 2},\\\\\n\\Big\\|\\int_0^t e^{i(t-s)(-\\Delta)^\\sigma}F(s)ds\\Big\\|_{L_{t\\in I}^\\infty H_x^s}&\\lesssim \\|F\\|_{L_{t\\in I}^1H_x^s}\\nonumber,\n\\end{align}\nwe get\n$$\\|\\Phi_{u_0}(u)\\|_{X^s}\\lesssim \\|u_0\\|_{H^s}+\\||u|^{p-1}u\\|_{L_{t\\in I}^1H_x^s}.$$\nBy the fractional chain rule\n\\begin{equation}\\label{chain rule}\n\\||\\nabla|^s F(u)\\|_{L^q}\\lesssim \\|F'(u)\\|_{L^{p_1}}\\||\\nabla|^s u\\|_{L^{p_2}},\n\\end{equation}\nwhere $s>0$ and $\\frac{1}{q}=\\frac{1}{p_1}+\\frac{1}{p_2}$, and H\\\"older inequality, we obtain\n$$\\||u|^{p-1}u\\|_{L_{t\\in I}^1H_x^s}\\lesssim \\Big\\| \\||u|^{p-1}\\|_{L_x^\\infty}\\|u\\|_{H_x^s}\\Big\\|_{L_{t\\in I}^1} \\leq T^{\\frac{5-p}{4}}\\|u\\|_{L_{t\\in I}^4L_x^\\infty}^{p-1}\\|u\\|_{L_{t\\in I}^\\infty H_x^s}.$$\nFor the fractional chain rule \\eqref{chain rule}, we refer \\cite{CW}, for example. We remark that one can choose $p_1=\\infty$ in \\eqref{chain rule}. Indeed, this can be proved by a little modification of the last step in the proof of Proposition 3.1 in \\cite{CW}. 
Thus, we have\n$$\\|\\Phi_{u_0}(u)\\|_{X^s}\\lesssim \\|u_0\\|_{H^s}+T^{\\frac{5-p}{4}} \\|u\\|_{X^s}^p.$$\nSimilarly, by Strichartz estimates,\n$$\\|\\Phi_{u_0}(u)-\\Phi_{u_0}(v)\\|_{X^s}\\lesssim \\||u|^{p-1}u-|v|^{p-1}v\\|_{L_{t\\in I}^1H_x^s}.$$\nThen, applying the fractional Leibniz rule and the fractional chain rule in \\cite{CW}, we get \n\\begin{align*}\n\\||u|^{p-1}u-|v|^{p-1}v\\|_{H_x^s}&=\\Big\\|\\int_0^1 p|v+t(u-v)|^{p-1}(u-v) dt\\Big\\|_{H_x^s}\\\\\n&\\leq p\\int_0^1\\||v+t(u-v)|^{p-1}(u-v) \\|_{H_x^s}dt\\\\\n&\\lesssim\\int_0^1\\|v+t(u-v)\\|_{L_x^\\infty}^{p-1}\\|u-v\\|_{H_x^s}\\\\\n&\\ \\ \\ +\\||v+t(u-v)|^{p-1}\\|_{H_x^s}\\|u-v\\|_{L_x^\\infty}dt\\\\\n&\\lesssim\\int_0^1\\|v+t(u-v)\\|_{L_x^\\infty}^{p-1}\\|u-v\\|_{H_x^s}\\\\\n&\\ \\ \\ +\\|v+t(u-v)\\|_{L_x^\\infty}^{p-2}\\|v+t(u-v)\\|_{H_x^s}\\|u-v\\|_{L_x^\\infty}dt\\\\\n&\\leq (\\|u\\|_{L_x^\\infty}^{p-1}+\\|v\\|_{L_x^\\infty}^{p-1})\\|u-v\\|_{H_x^s}\\\\\n&\\ \\ \\ +(\\|u\\|_{L_x^\\infty}^{p-2}+\\|v\\|_{L_x^\\infty}^{p-2})(\\|u\\|_{H_x^s}+\\|v\\|_{H_x^s})\\|u-v\\|_{L_x^\\infty}.\n\\end{align*}\nThus, it follows that\n\\begin{align*}\n&\\|\\Phi_{u_0}(u)-\\Phi_{u_0}(v)\\|_{X^s}\\\\\n&\\lesssim T^{\\frac{5-p}{4}}\\Big\\{(\\|u\\|_{L_{t\\in I}^4L_x^\\infty}^{p-1}+\\|v\\|_{L_{t\\in I}^4L_x^\\infty}^{p-1})\\|u-v\\|_{L_{t\\in I}^\\infty H_x^s}\\\\\n&\\quad\\quad\\quad+(\\|u\\|_{L_{t\\in I}^4L_x^\\infty}^{p-2}+\\|v\\|_{L_{t\\in I}^4L_x^\\infty}^{p-2})(\\|u\\|_{L_{t\\in I}^\\infty H_x^s}+\\|v\\|_{L_{t\\in I}^\\infty H_x^s})\\|u-v\\|_{L_{t\\in I}^4L_x^\\infty}\\Big\\}\\\\\n&\\lesssim T^{\\frac{5-p}{4}} (\\|u\\|_{X^s}^{p-1}+\\|v\\|_{X^s}^{p-1})\\|u-v\\|_{X^s}.\n\\end{align*}\nChoosing sufficiently small $T>0$, we conclude that $\\Phi_{u_0}$ is a contraction on a ball\n$$B:=\\{u: \\|u\\|_{X^s}\\leq 2\\|u_0\\|_{H^s}\\}$$\nequipped with the norm $\\|\\cdot\\|_{X^s}$.\n\\end{proof}\n\nNext, we will prove Theorem \\ref{subcritical LWP} when $d=1$ and $p\\geq 5$, or $d\\geq 2$ and $p\\geq3$. 
In this case, we do not have a good control on the $L_x^\\infty$ norm from Strichartz estimates. Instead, we make use of Sobolev embedding.\n\n\\begin{lemma}[$L_{t\\in I}^{p-1}L_x^\\infty$ bound]\nSuppose that $d=1$ and $p\\geq5$, or $d\\geq 2$ and $p\\geq3$. Let $s>s_c$. Then, we have\n\\begin{equation}\\label{subcritical Lx-infty bound}\n\\|u\\|_{L_{t\\in I}^{p-1}L_x^\\infty}\\lesssim T^{0+}\\|u\\|_{S_{q_0,r_0}^{s}(I)},\n\\end{equation}\nwhere $(q_0,r_0) =((p-1)^+,\\Big(\\tfrac{2d(p-1)}{d(p-1)-4}\\Big)^-)$ is an admissible pair. Here, we denote by $c^+$ a number larger than $c$ but arbitrarily close to $c$, and similarly for $c^-$.\n\\end{lemma}\n\n\\begin{proof}\nWe observe that\n$$\\frac{1}{r_0}-\\frac{s-d(1-\\sigma)(\\frac{1}{2}-\\frac{1}{r_0})}{d}<0.$$\nThus, by Sobolev inequality,\n$$\\|u\\|_{L_{t\\in I}^{p-1}L_x^\\infty}\\leq T^{0+}\\|u\\|_{L_{t\\in I}^{q_0}L_x^\\infty}\\lesssim\\||\\nabla|^{-d(1-\\sigma)(\\frac{1}{2}-\\frac{1}{r_0})}u\\|_{L_{t\\in I}^{q_0}W_x^{s,r_0}}=\\|u\\|_{S_{q_0,r_0}^s(I)}.$$\n\\end{proof}\n\nWe also employ a standard persistence of regularity argument.\n\\begin{lemma}[Persistence of regularity]\\label{persistence of regularity}\nLet $10$, $\\Phi_{u_0}$ is contractive on a ball\n$$B:=\\{u: \\|u\\|_{X^s}\\leq 2\\|u_0\\|_{H^s}\\}$$\nequipped with the norm $\\|\\cdot\\|_{X^0}$, which is complete by Lemma \\ref{persistence of regularity}.\n\\end{proof}\n\n\\begin{remark}\nThe standard persistence of regularity argument allows us to avoid derivatives in \\eqref{difference in LWP proof}. Indeed, for $u\\in B$, $\\|\\langle\\nabla\\rangle^su\\|_{L_{t\\in I}^{p-1}L_x^\\infty}$ is not necessarily bounded.\n\\end{remark}\n\n\\subsection{Scaling-critical cases}\nIn the scaling-critical case, we use the following lemma, which plays the same role as \\eqref{subcritical Lx-infty bound}. 
We note that the norms in the lemma are defined via the Littlewood-Paley projection in order to overcome the failure of the Sobolev embedding $W^{s,p}\\hookrightarrow L^q$, $\\frac{1}{q}=\\frac{1}{p}-\\frac{s}{d}$, when $q=\\infty$. Lemma 3.3 generalizes \\cite[Lemma 3.1]{CKSTT}.\n\n\\begin{lemma}[Scaling-critical $L_{t\\in I}^{p-1}L_x^\\infty$ bound]\n\\begin{equation}\\label{critical Lx-infty bound}\n\\|u\\|_{L_{t\\in I}^{p-1}L_x^\\infty}^{p-1}\\lesssim \\left\\{\\begin{aligned}\n&\\|u\\|_{\\tilde{S}_{4,\\infty}^{s_c}(I)}^4\\|u\\|_{\\tilde{S}_{\\infty,2}^{s_c}(I)}^{p-5}&&\\text{ when }d=1\\textup{ and }p>5,\\\\\n&\\|u\\|_{\\tilde{S}_{2+,\\infty-}^{s_c}(I)}^{2}\\|u\\|_{\\tilde{S}_{\\infty,2}^{s_c}(I)}^{p-3}&&\\text{ when }d=2\\textup{ and }p>3,\\\\\n&\\|u\\|_{\\tilde{S}_{2,\\frac{2d}{d-2}}^{s_c}(I)}^{2}\\|u\\|_{\\tilde{S}_{\\infty,2}^{s_c}(I)}^{p-3}&&\\text{ when }d\\geq 3\\textup{ and }p>3.\n\\end{aligned}\n\\right.\n\\end{equation}\n\\end{lemma}\n\n\\begin{proof}\nWe will prove the lemma only when $d\\geq3$. By interpolation $\\|f\\|_{L^{p_\\theta}}\\leq \\|f\\|_{L^{p_0}}^\\theta \\|f\\|_{L^{p_1}}^{1-\\theta}$, $\\frac{1}{p_\\theta}=\\frac{\\theta}{p_0}+\\frac{1-\\theta}{p_1}$, $0<\\theta<1$, it suffices to show the lemma for rational $(p-1)=\\frac{m}{n}>2$ with $\\gcd(m,n)=1$. 
First, we estimate\n$$A(t)=\\Big[\\sum_N\\|P_{N}u(t)\\|_{L_x^\\infty}\\Big]^m\\sim\\sum_{N_1\\geq\\cdots\\geq N_m}\\prod_{i=1}^m\\|P_{N_i}u(t)\\|_{L_x^\\infty}.$$\nObserve from Bernstein's inequality that \n\\begin{align}\n\\|P_Nu(t)\\|_{L_x^\\infty}&\\lesssim N^{-\\frac{\\sigma(p-3)}{p-1}}d_N\\label{Bernstein1},\\\\\n\\|P_Nu(t)\\|_{L_x^\\infty}&\\lesssim N^{\\frac{2\\sigma}{p-1}}d_N'\\label{Bernstein2},\n\\end{align}\nwhere\n$$d_N=\\|P_Nu(t)\\|_{\\dot{W}_x^{s_c-(1-\\sigma),\\frac{2d}{d-2}}},\\ d_N'=\\|P_Nu(t)\\|_{\\dot{H}_x^{s_c}}.$$\nAs a consequence, we have\n\\begin{equation}\\label{Bernstein3}\n\\|P_Nu(t)\\|_{L_x^\\infty}\\lesssim\\Big(N^{-\\frac{\\sigma(p-3)}{p-1}}d_N\\Big)^\\theta\\Big(N^{\\frac{2\\sigma}{p-1}}d_N'\\Big)^{1-\\theta}=N^{\\frac{\\sigma(p-3)}{(p-1)(p-2)}}(d_N)^\\theta (d_N')^{1-\\theta},\n\\end{equation}\nwhere $\\theta=\\frac{1}{p-2}$. Hence, applying \\eqref{Bernstein1} for $i=1,\\cdots,n$ and \\eqref{Bernstein3} for $i=n+1,\\cdots,m$, we bound $A(t)$ by\n$$\\lesssim\\sum_{N_1\\geq \\cdots\\geq N_m} \\Big(\\prod_{i=1}^nN_i^{-\\frac{\\sigma(p-3)}{p-1}}d_{N_i}\\Big) \\Big(\\prod_{i=n+1}^m N_i^{\\frac{\\sigma(p-3)}{(p-1)(p-2)}}(d_{N_i})^\\theta (d_{N_i}')^{1-\\theta}\\Big).$$\nFor an arbitrarily small $\\epsilon>0$, we let\n$$\\tilde{d}_N=\\sum_{N'\\in 2^{\\mathbb{Z}}} \\min\\Big(\\frac{N}{N'},\\frac{N'}{N}\\Big)^\\epsilon d_{N'},\\quad \\tilde{d}_N'=\\sum_{N'\\in 2^{\\mathbb{Z}}} \\min\\Big(\\frac{N}{N'},\\frac{N'}{N}\\Big)^\\epsilon d_{N'}'.$$\nThen, since $d_N\\leq \\tilde{d}_N$ and $\\tilde{d}_{N_i}\\leq (\\frac{N_1}{N_{i}})^\\epsilon \\tilde{d}_{N_1}$ and similarly for primes, $A(t)$ is bounded by\n$$\\lesssim\\sum_{N_1\\geq \\cdots\\geq N_m} \\Big(\\prod_{i=1}^n N_i^{-\\frac{\\sigma(p-3)}{p-1}}\\Big(\\frac{N_1}{N_{i}}\\Big)^\\epsilon \\tilde{d}_{N_1}\\Big) \\Big(\\prod_{i=n+1}^m N_i^{\\frac{\\sigma(p-3)}{(p-1)(p-2)}}\\Big(\\frac{N_1}{N_{i}}\\Big)^\\epsilon (\\tilde{d}_{N_1})^\\theta (\\tilde{d}_{N_1}')^{1-\\theta}\\Big).\n$$\nSumming in 
$N_m, N_{m-1}, ..., N_{n+1}$ and using $m-n=(p-2)n$,\n\\begin{align*}\nA(t)&\\lesssim\\sum_{N_1\\geq \\cdots\\geq N_n} \\Big(\\prod_{i=1}^n N_i^{-\\frac{\\sigma(p-3)}{p-1}}\\Big(\\frac{N_1}{N_{i}}\\Big)^\\epsilon \\tilde{d}_{N_1}\\Big)\\\\\n&\\quad\\quad\\quad\\quad\\times N_n^{\\frac{\\sigma(p-3)(m-n)}{(p-1)(p-2)}}\\Big(\\frac{N_1}{N_{n}}\\Big)^{(m-n)\\epsilon} (\\tilde{d}_{N_1})^{(m-n)\\theta} (\\tilde{d}_{N_1}')^{(m-n)(1-\\theta)}\\\\\n&=\\sum_{N_1\\geq \\cdots\\geq N_n} \\Big(\\prod_{i=1}^n N_i^{-\\frac{\\sigma(p-3)}{p-1}}\\Big(\\frac{N_1}{N_{i}}\\Big)^\\epsilon \\tilde{d}_{N_1}\\Big)\\\\\n&\\quad\\quad\\quad\\quad\\times N_n^{\\frac{\\sigma(p-3)n}{p-1}}\\Big(\\frac{N_1}{N_{n}}\\Big)^{(p-2)n\\epsilon} (\\tilde{d}_{N_1})^{(m-n)\\theta} (\\tilde{d}_{N_1}')^{(m-n)(1-\\theta)},\n\\end{align*}\nand then summing in $N_n, N_{n-1}, ..., N_1$, we obtain that\n$$A(t)\\lesssim\\sum_{N_1} (\\tilde{d}_{N_1})^{n+(m-n)\\theta} (\\tilde{d}_{N_1}')^{(m-n)(1-\\theta)}=\\sum_{N_1} (\\tilde{d}_{N_1})^{2n} (\\tilde{d}_{N_1}')^{m-2n},$$\nwhich is, by H\\\"older's and Young's inequalities, bounded by\n\\begin{align*}\n&\\lesssim\\|(\\tilde{d}_{N})^{2n} \\|_{\\ell_N^2}\\|(\\tilde{d}_{N}')^{m-2n}\\|_{\\ell_N^{2}}=\\|\\tilde{d}_{N}\\|_{\\ell_N^{4n}}^{2n}\\|\\tilde{d}_{N}'\\|_{\\ell_N^{2(m-2n)}}^{m-2n}\\\\\n&\\leq \\|\\tilde{d}_{N}\\|_{\\ell_N^2}^{2n}\\|\\tilde{d}_{N}'\\|_{\\ell_N^2}^{m-2n}\\lesssim \\|d_{N}\\|_{\\ell_N^2}^{2n}\\|d_{N}'\\|_{\\ell_N^2}^{m-2n}=\\|d_{N}\\|_{\\ell_N^2}^{2n}\\|d_{N}'\\|_{\\ell_N^2}^{(p-3)n}.\n\\end{align*}\nFinally, by the estimate for $A(t)$, we prove that \n\\begin{align*}\n\\|u\\|_{L_{t\\in I}^{p-1}L_x^\\infty}^{p-1}&\\leq \\int_I \\Big[\\sum_N\\|P_{N}u(t)\\|_{L_x^\\infty}\\Big]^{p-1}dt=\\int_I A(t)^{\\frac{1}{n}}dt\\\\\n&\\lesssim\\int_I \\|d_N\\|_{\\ell_N^2}^2\\|d_N'\\|_{\\ell_N^2}^{p-3} dt\\leq \\|d_N\\|_{L_{t\\in I}^2\\ell_N^2}^2\\|d_N'\\|_{L_{t\\in I}^\\infty\\ell_N^2}^{p-3}\\\\\n&\\leq 
\\|u\\|_{\\tilde{S}_{2,\\frac{2d}{d-2}}^{s_c}(I)}^{2}\\|u\\|_{\\tilde{S}_{\\infty,2}^{s_c}(I)}^{p-3}.\n\\end{align*}\n\\end{proof}\n\n\n\n\\begin{proof}[Proof of Theorem \\ref{critical LWP}]\nFor simplicity, we assume that $d\\geq3$. Indeed, with minor modifications, the same argument proves the theorem when $d=1,2$. We define $\\Phi_{u_0}(u)$ as in the proof of Theorem \\ref{subcritical LWP}. Then, by Strichartz estimates, the fractional chain rule and \\eqref{critical Lx-infty bound}, we have\n\\begin{align*}\n\\|\\Phi_{u_0}(u)\\|_{\\tilde{S}_{2,\\frac{2d}{d-2}}^{s_c}(I)}&\\leq \\|e^{it(-\\Delta)^\\sigma}u_0\\|_{\\tilde{S}_{2,\\frac{2d}{d-2}}^{s_c}(I)}+c_0\\||u|^{p-1}u\\|_{L_{t\\in I}^1H^{s_c}}\\\\\n&\\leq \\|e^{it(-\\Delta)^\\sigma}u_0\\|_{\\tilde{S}_{2,\\frac{2d}{d-2}}^{s_c}(I)}+c_1\\|u\\|_{L_{t\\in I}^{p-1}L_x^\\infty}^{p-1}\\|u\\|_{L_{t\\in I}^\\infty H^{s_c}}\\\\\n&\\leq \\|e^{it(-\\Delta)^\\sigma}u_0\\|_{\\tilde{S}_{2,\\frac{2d}{d-2}}^{s_c}(I)}+c\\|u\\|_{\\tilde{S}_{2,\\frac{2d}{d-2}}^{s_c}(I)}^2\\|u\\|_{\\tilde{S}_{\\infty,2}^{s_c}(I)}^{p-2}.\n\\end{align*}\nSimilarly, one can show that\n\\begin{align*}\n\\|\\Phi_{u_0}(u)\\|_{\\tilde{S}_{\\infty,2}^{s_c}(I)}&\\leq c\\|u_0\\|_{H^{s_c}}+c\\|u\\|_{\\tilde{S}_{2,\\frac{2d}{d-2}}^{s_c}(I)}^2\\|u\\|_{\\tilde{S}_{\\infty,2}^{s_c}(I)}^{p-2}\n\\end{align*}\nand\n\\begin{align*}\n&\\|\\Phi_{u_0}(u)-\\Phi_{u_0}(v)\\|_{\\tilde{S}_{2,\\frac{2d}{d-2}}^0(I)}+\\|\\Phi_{u_0}(u)-\\Phi_{u_0}(v)\\|_{\\tilde{S}_{\\infty, 2}^0(I)}\\\\\n&\\leq c_0(\\|u\\|_{L_{t\\in I}^{p-1}L_x^\\infty}^{p-1}+\\|v\\|_{L_{t\\in I}^{p-1}L_x^\\infty}^{p-1})\\|u-v\\|_{L_{t\\in I}^\\infty L_x^2}\\\\\n&\\leq c(\\|u\\|_{\\tilde{S}_{2,\\frac{2d}{d-2}}^{s_c}(I)}^2\\|u\\|_{\\tilde{S}_{\\infty,2}^{s_c}(I)}^{p-3}+\\|v\\|_{\\tilde{S}_{2,\\frac{2d}{d-2}}^{s_c}(I)}^2\\|v\\|_{\\tilde{S}_{\\infty,2}^{s_c}(I)}^{p-3})\\|u-v\\|_{L_{t\\in I}^\\infty L_x^2}.\n\\end{align*}\nNow we let $\\delta=\\delta(c,\\|u_0\\|_{H^{s_c}})>0$ be a sufficiently small number to be chosen later, 
and then we pick $T=T(u_0,\\delta)>0$ such that\n$$\\|e^{it(-\\Delta)^\\sigma}u_0\\|_{\\tilde{S}_{2,\\frac{2d}{d-2}}^{s_c}(I)}\\leq\\delta.$$\nDefine \n$$B=\\Big\\{u: \\|u\\|_{\\tilde{S}_{2,\\frac{2d}{d-2}}^{s_c}(I)}\\leq 2\\delta\\textup{ and }\\|u\\|_{\\tilde{S}_{\\infty,2}^{s_c}(I)}\\leq 2c\\|u_0\\|_{H^{s_c}}\\Big\\}$$\nequipped with the norm\n$$\\|u\\|_X:=\\|u\\|_{\\tilde{S}_{2,\\frac{2d}{d-2}}^{0}(I)}+\\|u\\|_{\\tilde{S}_{\\infty,2}^{0}(I)}.$$\nThen, for $u\\in B$, we have\n\\begin{align*}\n\\|\\Phi_{u_0}(u)\\|_{\\tilde{S}_{2,\\frac{2d}{d-2}}^{s_c}(I)}&\\leq \\delta+c (2\\delta)^2(2c\\|u_0\\|_{{H}^{s_c}})^{p-2}\\leq 2\\delta,\\\\\n\\|\\Phi_{u_0}(u)\\|_{\\tilde{S}_{\\infty,2}^{s_c}(I)}&\\leq c\\|u_0\\|_{{H}^{s_c}}+c (2\\delta)^2(2c\\|u_0\\|_{{H}^{s_c}})^{p-2}\\leq 2c\\|u_0\\|_{{H}^{s_c}}.\n\\end{align*}\nChoosing sufficiently small $\\delta>0$, we prove that $\\Phi_{u_0}$ maps $B$ to itself. Similarly, one can show \n$$\\|\\Phi_{u_0}(u)-\\Phi_{u_0}(v)\\|_X\\leq\\frac{1}{2}\\|u-v\\|_X.$$\nTherefore, it follows that $\\Phi_{u_0}$ is a contraction mapping on $B$.\n\\end{proof}\n\n\\begin{remark}\n$(i)$ In the proofs, the $L_x^\\infty$ norm bounds are crucial for the following reason. In Proposition \\ref{Strichartz}, there is a loss of regularity except for the trivial estimates\n$$\\|e^{it(-\\Delta)^\\sigma}u_0\\|_{L_{t\\in I}^\\infty L_x^2}=\\|u_0\\|_{L^2}$$\nand\n$$\\Big\\|\\int_0^t e^{i(t-s)(-\\Delta)^\\sigma}F(s)ds\\Big\\|_{L_{t\\in I}^\\infty L_x^2}\\leq \\|F\\|_{L_{t\\in I}^1L_x^2}.$$\nHence, when we estimate the $L_{t\\in I}^\\infty H_x^s$ norm of the integral term in $\\Phi_{u_0}(u)$, we are forced to use the trivial one\n\\begin{equation}\n\\Big\\|\\int_0^t e^{i(t-s)(-\\Delta)^\\sigma}|u|^{p-1}u(s)ds\\Big\\|_{L_{t\\in I}^\\infty H_x^s}\\leq \\||u|^{p-1}u\\|_{L_{t\\in I}^1H_x^s}.\n\\end{equation}\nOtherwise, a higher-regularity norm appears on the right-hand side, and the contraction mapping argument cannot be closed. 
Moreover, if $u_0\\in H^s$, there is no good bound for $\\|e^{it(-\\Delta)^\\sigma}u_0\\|_{L_{t\\in I}^qW_x^{s,r}}$ except for the trivial one, $(q,r)=(\\infty,2)$. Thus, we are forced to bound the right hand side of (3.10) by\n$$\\|u\\|_{L_{t\\in I}^{p-1}L_x^\\infty}^{p-1}\\|u\\|_{L_{t\\in I}^\\infty H_x^s}.$$\nTherefore, we need good control of $\\|u\\|_{L_{t\\in I}^{p-1}L_x^\\infty}$.\\\\\n$(ii)$ When $p<3$, the $L_{t\\in I}^{p-1}L_x^\\infty$ norm is scaling-supercritical. Thus, based on our method, the assumptions on $p$ in Theorems \\ref{subcritical LWP} and \\ref{critical LWP} are optimal, except for $p=3$ in the critical case.\n\\end{remark}\n\n\n\n\n\n\n\n\n\\section{Small Data Scattering}\n\n\n\n\\begin{proof}[Proof of Theorem \\ref{scattering}]\nFor simplicity, we consider the case $d\\geq 3$ only. It follows from the estimates in the proof of Theorem \\ref{critical LWP} that if $\\|u_0\\|_{H^{s_c}}$ is small enough, then\n$$\\|u(t)\\|_{L_{t\\in\\mathbb{R}}^{p-1}L_x^\\infty}+\\|u(t)\\|_{L_{t\\in\\mathbb{R}}^\\infty H_x^{s_c}}\\lesssim\\|u(t)\\|_{\\tilde{S}_{2,\\frac{2d}{d-2}}^{s_c}(\\mathbb{R})}+\\|u(t)\\|_{\\tilde{S}_{\\infty,2}^{s_c}(\\mathbb{R})}\\lesssim\\|u_0\\|_{H^{s_c}}<\\infty.$$\nBy Strichartz estimates, the fractional chain rule and \\eqref{critical Lx-infty bound}, we prove that\n\\begin{align*}\n&\\|e^{-iT_1(-\\Delta)^\\sigma}u(T_1)-e^{-iT_2(-\\Delta)^\\sigma}u(T_2)\\|_{H^{s_c}}\\\\\n&=\\Big\\|\\int_{T_1}^{T_2} e^{-is(-\\Delta)^\\sigma}(|u|^{p-1}u)(s)ds\\Big\\|_{H^{s_c}}\\\\\n&\\lesssim \\|u(t)\\|_{L_{t\\in[T_1,T_2)}^{p-1}L_x^\\infty}^{p-1}\\|u(t)\\|_{L_{t\\in[T_1,T_2)}^\\infty H_x^{s_c}}\\to 0\n\\end{align*}\nas $T_1,T_2\\to\\pm\\infty$. Thus, the limits\n$$u_\\pm=\\lim_{t\\to\\pm\\infty} e^{-it(-\\Delta)^\\sigma}u(t)$$\nexist in $H^{s_c}$. 
Repeating the above estimates, we show that\n$$\\|u(t)-e^{it(-\\Delta)^\\sigma}u_\\pm\\|_{H^{s_c}}=\\|e^{-it(-\\Delta)^\\sigma}u(t)-u_\\pm\\|_{H^{s_c}}\\to 0$$ \nas $t\\to\\pm\\infty$.\n\\end{proof}\n\n\\section{Ill-posedness}\n\nWe will prove Theorem \\ref{ill-posedness} following the strategy in \\cite{CCT}. Throughout this section, we assume that $d=1,2$ or $3$ and $\\frac{d}{4}<\\sigma<1$. If $p$ is not an odd integer, we further assume that $p\\geq k+1$, where $k$ is the smallest integer greater than $\\frac{d}{2}$. \n\nFirst, we construct an almost non-dispersive solution by small dispersion analysis.\n\n\\begin{lemma}[Small dispersion analysis]\\label{small dispersion analysis}\nGiven a Schwartz function $\\phi_0$, let $\\phi^{(\\nu)}(t,x)$ be the solution to the fractional NLS\n\\begin{equation}\\label{small dispersion}\ni\\partial_t u+\\nu^{2\\sigma}(-\\Delta)^\\sigma u+\\mu|u|^{p-1}u=0,\\ u(0)=\\phi_0,\n\\end{equation}\nand $\\phi^{(0)}(t,x)$ be the solution to the ODE with no dispersion\n$$i\\partial_t u+\\mu|u|^{p-1}u=0,\\ u(0)=\\phi_0,$$\nthat is,\n\\begin{equation}\\label{no dispersion}\n\\phi^{(0)}(t,x)=\\phi_0(x)e^{it\\mu |\\phi_0(x)|^{p-1}}.\n\\end{equation}\nThen there exist $C, c>0$ such that if $0<\\nu\\leq c$ is sufficiently small, then\n\\begin{equation}\\label{small dispersion estimate}\n\\|\\phi^{(\\nu)}(t)-\\phi^{(0)}(t)\\|_{H^k}\\leq C\\nu^{2\\sigma}\n\\end{equation}\nfor all $|t|\\leq c |\\log \\nu|^c$.\n\\end{lemma}\n\n\\begin{proof}\nThe proof closely follows the proof of Lemma 2.1 in \\cite{CCT}.\n\\end{proof}\n\nObviously, $\\phi^{(\\nu)}(t,\\nu x)$ is a solution to $(\\textup{NLS}_\\sigma)$. Moreover, $\\phi^{(\\nu)}(t,\\nu x)$ is bounded and almost flat in the following sense.\n\n\\begin{corollary}\\label{small dispersion corollary}\nLet $\\phi^{(\\nu)}$, $\\nu$ and $c$ be as in Lemma \\ref{small dispersion analysis}. Let $s\\geq 0$. 
Then,\n\\begin{equation}\\label{L^infty control}\n\\|\\phi^{(\\nu)}(t,\\nu x)\\|_{L_x^\\infty}\\sim 1\n\\end{equation}\nand\n\\begin{equation}\\label{H^s control}\n\\|\\phi^{(\\nu)}(t,\\nu x)\\|_{\\dot{H}_x^s}\\sim \\nu^{s-\\frac{d}{2}}(c |\\log \\nu|^c)^s\n\\end{equation}\nfor all $|t|\\leq c |\\log \\nu|^c$.\n\\end{corollary}\n\n\\begin{proof}\nSince $k>\\frac{d}{2}$, by the Sobolev inequality, we have\n\\begin{align*}\n\\|\\phi^{(\\nu)}(t,\\nu x)-\\phi^{(0)}(t,\\nu x)\\|_{L_x^\\infty}&=\\|\\phi^{(\\nu)}(t, x)-\\phi^{(0)}(t,x)\\|_{L_x^\\infty}\\\\\n&\\lesssim\\|\\phi^{(\\nu)}(t)-\\phi^{(0)}(t)\\|_{H^k}\\lesssim\\nu^{2\\sigma}.\n\\end{align*}\nThen, \\eqref{L^infty control} follows from the explicit formula \\eqref{no dispersion} for $\\phi^{(0)}(t,x)$. \nIt follows from \\eqref{small dispersion estimate} and \\eqref{no dispersion} that\n$$\\|\\phi^{(\\nu)}(t, \\nu x)\\|_{\\dot{H}_x^s}\\leq \\nu^{s-\\frac{d}{2}}(\\|\\phi^{(0)}(t)\\|_{\\dot{H}^s}+\\|\\phi^{(\\nu)}(t)-\\phi^{(0)}(t)\\|_{\\dot{H}^s})\\sim \\nu^{s-\\frac{d}{2}}(c |\\log \\nu|^c)^s.$$\n\\end{proof}\n\nFor $v\\in\\mathbb{R}^d$, we define the pseudo-Galilean transformation by\n$$\\mathcal{G}_vu(t,x)=e^{-iv\\cdot x}e^{it|v|^{2\\sigma}}u(t,x-2t\\sigma|v|^{2(\\sigma-1)}v).$$\nNote that when $\\sigma=1$, $\\mathcal{G}_v$ is simply a Galilean transformation, and that NLS is invariant under this transformation, that is, if $u(t)$ solves NLS, so does $\\mathcal{G}_vu(t)$. However, when $\\sigma\\neq1$, $(\\textup{NLS}_\\sigma)$ is not exactly symmetric with respect to pseudo-Galilean transformations. 
Indeed, if $u(t)$ solves $(\\textup{NLS}_\\sigma)$, then $\\tilde{u}(t)=\\mathcal{G}_vu(t)$ obeys $(\\textup{NLS}_\\sigma)$ with an error term\n\\begin{equation}\\label{fNLS with error}\ni\\partial_t\\tilde{u}+(-\\Delta)^\\sigma\\tilde{u}+\\mu|\\tilde{u}|^{p-1}\\tilde{u}=e^{it|v|^{2\\sigma}}e^{-iv\\cdot x}(\\mathcal{E}u)(t,x-2\\sigma t|v|^{2(\\sigma-1)}v),\n\\end{equation}\nwhere\n$$\\widehat{\\mathcal{E}u}(\\xi)=E(\\xi)\\hat{u}(\\xi)$$\nwith \n$$E(\\xi)=|\\xi-v|^{2\\sigma}-|\\xi|^{2\\sigma}-|v|^{2\\sigma}+2\\sigma|v|^{2(\\sigma-1)}v\\cdot\\xi.$$\nHowever, we note that\n\\begin{equation}\\label{E bound}\n|E(\\xi)|\\lesssim |\\xi|^{2\\sigma}.\n\\end{equation}\nIndeed, if $|\\xi|\\leq\\frac{|v|}{100}$, then\n$$|E(\\xi)|=\\Big||v|^{2\\sigma}\\Big(|\\tfrac{v}{|v|}-\\tfrac{\\xi}{|v|}|^{2\\sigma}-1+2\\sigma \\tfrac{v}{|v|}\\cdot\\tfrac{\\xi}{|v|}\\Big)-|\\xi|^{2\\sigma}\\Big|\\lesssim|v|^{2\\sigma}\\tfrac{|\\xi|^2}{|v|^2}+|\\xi|^{2\\sigma}\\lesssim|\\xi|^{2\\sigma}.$$\nOtherwise, \n$$|E(\\xi)|\\lesssim|\\xi|^{2\\sigma}+|v|^{2\\sigma}+|\\xi|^{2\\sigma}+|v|^{2\\sigma}+2\\sigma|v|^{2\\sigma-1}|\\xi|\\lesssim|\\xi|^{2\\sigma}.$$\nTherefore, one would expect an \\textit{almost} symmetry for an almost flat solution $u(t)$, such as $\\phi^{(\\nu)}(t,\\nu x)$ in Lemma \\ref{small dispersion analysis}. Precisely, we have the following lemma.\n\n\\begin{lemma}[Pseudo-Galilean transformation]\\label{Pseudo-Galilean transformation}\nLet $\\phi^{(\\nu)}$, $\\nu$ and $c$ be as in Lemma \\ref{small dispersion analysis}. 
For $v\\in\\mathbb{R}^d$, we define\n$$\\tilde{u}(t,x)=(\\mathcal{G}_v\\phi^{(\\nu)}(\\cdot, \\nu\\cdot))(t,x)=e^{-iv\\cdot x}e^{it|v|^{2\\sigma}}\\phi^{(\\nu)}\\big(t,\\nu(x-2t\\sigma|v|^{2(\\sigma-1)}v)\\big),$$\nand let $u(t,x)$ be the solution to $(\\textup{NLS}_\\sigma)$ with the same initial data\n\\begin{equation}\\label{initial data}\ne^{-iv\\cdot x}\\phi^{(\\nu)}(0,\\nu x)=e^{-iv\\cdot x}\\phi_0(\\nu x).\n\\end{equation}\nThen, there exists $\\delta>0$ such that\n\\begin{equation}\\label{estim}\n\\|e^{iv\\cdot x}(u(t)-\\tilde{u}(t))\\|_{H_x^k}\\lesssim \\nu^{\\delta}\n\\end{equation}\nfor all $|t|\\leq c |\\log \\nu|^c$.\n\\end{lemma}\n\n\\begin{remark}\nWhen $p=3$, the authors of \\cite{CHKL} could use the counterexample in \\cite{CCTamer}, which is constructed via the pseudo-conformal symmetry and Galilean transformations. Since that solution is also very small in high Sobolev norms, this smallness allowed \\cite{CHKL} to show that the error in the pseudo-Galilean transformation is small as well. When $p>3$, however, the counterexample in \\cite{CCTamer} is not available. Christ, Colliander and Tao \\cite{CCT} later constructed a different counterexample which works for more general $p$, but it is very large, rather than small, in high Sobolev norms, so for our purposes it is difficult to control the error from the pseudo-Galilean transformation. Our new counterexample, in contrast, still has a small high Sobolev norm after translation to its frequency center; this accounts for the factor $e^{iv\\cdot x}$ in \\eqref{estim}. Using this smallness, we can prove that the pseudo-Galilean transformation is an almost symmetry. We also remark that the condition $\\sigma>\\frac{d}{4}$ guarantees smallness of the error (see \\eqref{II(s) estimate}).\n\\end{remark}\n\n\\begin{proof}[Proof of Lemma \\ref{Pseudo-Galilean transformation}]\nLet $R(t)=(u-\\tilde{u})(t)$. 
Then, $R(t)$ satisfies\n$$i\\partial_tR+(-\\Delta)^\\sigma R=\\mu\\big(|\\tilde{u}|^{p-1}\\tilde{u}-|u|^{p-1}u\\big)-e^{it|v|^{2\\sigma}}e^{-iv\\cdot x}\\big(\\mathcal{E}\\phi^{(\\nu)}(t,\\nu\\cdot)\\big)\\big(x-2\\sigma t|v|^{2(\\sigma-1)}v\\big),$$\nor equivalently\n\\begin{align*}\nR(t)&=i\\int_0^t e^{i(t-s)(-\\Delta)^{\\sigma}}\\Big\\{\\mu\\big(|u|^{p-1}u-|\\tilde{u}|^{p-1}\\tilde{u}\\big)(s)\\\\\n&\\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad+e^{is|v|^{2\\sigma}}e^{-iv\\cdot x}\\big(\\mathcal{E}\\phi^{(\\nu)}(s,\\nu\\cdot)\\big)\\big(x-2\\sigma s|v|^{2(\\sigma-1)}v\\big)\\Big\\}ds.\n\\end{align*}\nHence, by a trivial estimate, we get\n\\begin{align*}\n\\|e^{iv\\cdot x}R(t)\\|_{H^k}&\\leq\\int_0^t \\big\\|e^{iv\\cdot x}\\big(|u|^{p-1}u-|\\tilde{u}|^{p-1}\\tilde{u}\\big)(s)\\big\\|_{H^k}+\\|\\mathcal{E}\\phi^{(\\nu)}(s,\\nu\\cdot)\\|_{H^k}ds\\\\\n&=\\int_0^t I(s)+II(s)ds.\n\\end{align*}\nFirst, by \\eqref{E bound} and \\eqref{H^s control}, we show that\n\\begin{equation}\\label{II(s) estimate}\n\\int_0^tII(s)ds\\lesssim \\int_0^t\\sum_{j=0}^k\\|\\phi^{(\\nu)}(s,\\nu\\cdot)\\|_{\\dot{H}^{j+2\\sigma}}ds\\sim (c |\\log \\nu|^c)^{1+2\\sigma-\\frac{d}{2}}\\nu^{2\\sigma-\\frac{d}{2}}.\n\\end{equation}\nFor $I(s)$, expanding $u=\\tilde{u}+R$ and then applying H\\\"older's and Sobolev's inequalities, we bound $I(s)$ by\n\\begin{equation}\\label{I(s) bound}\n\\lesssim\\sum_{j=1}^p\\|e^{iv\\cdot x}R\\|_{H^k}^j.\n\\end{equation}\nFor example, when $p=3$,\n\\begin{align*}\nI(s)&\\leq2\\| |e^{iv\\cdot x}\\tilde{u}|^2e^{iv\\cdot x}R\\|_{H^k}+\\|(e^{iv\\cdot x}\\tilde{u})^2\\overline{e^{iv\\cdot x}R}\\|_{H^k}+2\\|e^{iv\\cdot x}\\tilde{u}|e^{iv\\cdot x}R|^2\\|_{H^k}\\\\\n&\\quad+\\|\\overline{e^{iv\\cdot x}\\tilde{u}}(e^{iv\\cdot x}R)^2\\|_{H^k}+\\||e^{iv\\cdot x}R|^2e^{iv\\cdot x}R\\|_{H^k}\\\\\n&=:I_1(s)+I_2(s)+I_3(s)+I_4(s)+I_5(s).\n\\end{align*}\nConsider\n$$I_1(s)=\\sum_{|\\alpha|\\leq k}\\|\\nabla_{x_1}^{\\alpha_1}\\cdots\\nabla_{x_d}^{\\alpha_d}(|e^{iv\\cdot x}\\tilde{u}|^2e^{iv\\cdot x}R)(s)\\|_{L^2}=:\\sum_{|\\alpha|\\leq k}I_{1,\\alpha}(s),$$\nwhere $\\alpha=(\\alpha_1,\\alpha_2,\\cdots,\\alpha_d)$ is a multi-index with $|\\alpha|=\\sum_{i=1}^d\\alpha_i$. Observe that whenever a derivative hits\n$$e^{iv\\cdot x}\\tilde{u}(s)=e^{is|v|^{2\\sigma}}\\phi^{(\\nu)}\\big(s,\\nu(x-2s\\sigma|v|^{2(\\sigma-1)}v)\\big),$$\nwe get a small factor $\\nu$. Hence, after distributing derivatives by the Leibniz rule, the worst term we have in $I_{1,\\alpha}(s)$ is\n$$\\| |e^{iv\\cdot x}\\tilde{u}(s)|^2\\nabla^\\alpha e^{iv\\cdot x}R(s)\\|_{L^2},$$\nwhich is, by \\eqref{L^infty control}, bounded by \n$$\\| e^{iv\\cdot x}\\tilde{u}(s)\\|_{L^\\infty}^2\\|\\nabla^\\alpha e^{iv\\cdot x}R(s)\\|_{L^2}\\sim \\|\\nabla^\\alpha e^{iv\\cdot x}R(s)\\|_{L^2}.$$\nThe other terms are estimated likewise.\n\nCollecting all the estimates, we obtain\n$$\\|e^{iv\\cdot x}R(t)\\|_{H^k}\\lesssim (c |\\log \\nu|^c)^{1+2\\sigma-\\frac{d}{2}}\\nu^{2\\sigma-\\frac{d}{2}}+\\int_0^t \\sum_{j=1}^p\\|e^{iv\\cdot x}R(s)\\|_{H^k}^jds$$\nfor $|t|\\leq c|\\log \\nu|^c$. Then, by the standard nonlinear iteration argument, we prove the lemma.\n\\end{proof}\n\nSince we have solutions almost symmetric with respect to the pseudo-Galilean transformations, we can make use of the following decoherence lemma to construct counterexamples for local well-posedness.\n\n\\begin{lemma}[Decoherence]\\label{Decoherence}\nLet $s<0$. Fix a nonzero Schwartz function $w$. For $a,a'\\in[\\frac{1}{2},1]$, $0<\\nu\\leq\\lambda\\ll 1$ and $v\\in\\mathbb{R}^d$ with $|v|\\geq 1$, we define \n$$\\tilde{u}^{(a,\\nu,\\lambda,v)}(t,x):=\\mathcal{G}_v\\Big(\\lambda^{-\\frac{2\\sigma}{p-1}}\\phi^{(a,\\nu)}(\\lambda^{-2\\sigma}\\cdot,\\lambda^{-1}\\nu\\cdot)\\Big)(t,x),$$\nwhere $\\phi^{(a,\\nu)}$ is the solution to \\eqref{small dispersion} with initial data $aw$. 
Then, we have\n\\begin{align*}\n\\|\\tilde{u}^{(a,\\nu,\\lambda, v)}(0)\\|_{H^s}, \\|\\tilde{u}^{(a',\\nu,\\lambda, v)}(0)\\|_{H^s}&\\leq C|v|^s\\lambda^{-\\frac{2\\sigma}{p-1}}(\\tfrac{\\lambda}{\\nu})^{d\/2},\\\\\n\\|\\tilde{u}^{(a,\\nu,\\lambda, v)}(0)-\\tilde{u}^{(a',\\nu,\\lambda, v)}(0)\\|_{H^s}&\\leq C|v|^s\\lambda^{-\\frac{2\\sigma}{p-1}}(\\tfrac{\\lambda}{\\nu})^{d\/2}|a-a'|\n\\end{align*}\nand \n\\begin{align*}\n&\\|\\tilde{u}^{(a,\\nu,\\lambda, v)}(t)-\\tilde{u}^{(a',\\nu,\\lambda, v)}(t)\\|_{H^s}\\\\\n&\\geq c|v|^s\\lambda^{-\\frac{2\\sigma}{p-1}}(\\tfrac{\\lambda}{\\nu})^{d\/2}\\Big\\{\\|\\phi^{(a,\\nu)}(\\tfrac{t}{\\lambda^{2\\sigma}})-\\phi^{(a',\\nu)}(\\tfrac{t}{\\lambda^{2\\sigma}})\\|_{L^2}-C|\\log \\nu|^C(\\tfrac{\\lambda}{\\nu})^{-k}|v|^{-s-k}\\Big\\}\n\\end{align*}\nfor all $|t|\\leq c|\\log \\nu|^c\\lambda^{2\\sigma}$.\n\\end{lemma}\n\n\\begin{proof}\nThe proof closely follows the proof of Lemma 3.1 in \\cite{CCT}.\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem \\ref{ill-posedness}]\nThe proof is very similar to that of Theorem 1 in \\cite{CCT} except that in the last step, we need to use Lemma \\ref{Pseudo-Galilean transformation} due to the lack of exact symmetry. We give the proof for the reader's convenience.\n\nLet $\\epsilon>0$ be a given but arbitrarily small number. Let $\\lambda=\\nu^{\\alpha}$, where $\\alpha>0$ is a small number to be chosen later. Then, we pick $v\\in \\mathbb{R}^d$ such that\n$$\\lambda^{-\\frac{2\\sigma}{p-1}}|v|^s(\\lambda\/\\nu)^{d\/2}=\\epsilon\\Leftrightarrow |v|=\\nu^{\\frac{1}{s}(\\frac{d(1-\\alpha)}{2}+\\frac{2\\alpha\\sigma}{p-1})}\\epsilon^{1\/s}.$$\nNote that since $s<0$, $\\frac{1}{s}(\\frac{d(1-\\alpha)}{2}+\\frac{2\\alpha\\sigma}{p-1})=\\frac{1}{s}(\\frac{d}{2}-\\alpha s_c)<0$ for sufficiently small $\\alpha$, and thus $|v|\\geq 1$. 
Hence, it follows from Lemma \\ref{Decoherence} that\n \\begin{align}\n\\|\\tilde{u}^{(a,\\nu,\\lambda, v)}(0)\\|_{H^s}, \\|\\tilde{u}^{(a',\\nu,\\lambda, v)}(0)\\|_{H^s}&\\leq C\\epsilon,\\\\\n\\|\\tilde{u}^{(a,\\nu,\\lambda, v)}(0)-\\tilde{u}^{(a',\\nu,\\lambda, v)}(0)\\|_{H^s}&\\leq C\\epsilon|a-a'|,\n\\end{align}\nand \n\\begin{align*}\n&\\|\\tilde{u}^{(a,\\nu,\\lambda, v)}(t)-\\tilde{u}^{(a',\\nu,\\lambda, v)}(t)\\|_{H^s}\\\\\n&\\geq c\\epsilon\\Big\\{\\|\\phi^{(a,\\nu)}(\\tfrac{t}{\\lambda^{2\\sigma}})-\\phi^{(a',\\nu)}(\\tfrac{t}{\\lambda^{2\\sigma}})\\|_{L^2}-C|\\log \\nu|^C(\\tfrac{\\lambda}{\\nu})^{-k}|v|^{-s-k}\\Big\\}\n\\end{align*}\nfor all $|t|\\leq c|\\log \\nu|^c\\lambda^{2\\sigma}$. Now we observe from the explicit formula \\eqref{no dispersion} for $\\phi^{(a,0)}$ and the approximation \\eqref{small dispersion estimate} that there exists $T>0$ such that $\\|\\phi^{(a,\\nu)}(T)-\\phi^{(a',\\nu)}(T)\\|_{L^2}\\geq c$. Moreover, if $\\alpha>0$ is sufficiently small, $C|\\log \\nu|^C(\\tfrac{\\lambda}{\\nu})^{-k}|v|^{-s-k}\\to 0$ as $\\nu \\to 0$. Therefore, for $\\nu$ small enough, we have\n\\begin{equation}\n\\|\\tilde{u}^{(a,\\nu,\\lambda, v)}(\\lambda^{2\\sigma}T)-\\tilde{u}^{(a',\\nu,\\lambda, v)}(\\lambda^{2\\sigma}T)\\|_{H^s}\\geq c\\epsilon.\n\\end{equation}\nNext, using Lemma \\ref{Pseudo-Galilean transformation}, we replace $\\tilde{u}^{(a,\\nu,\\lambda, v)}$ and $\\tilde{u}^{(a',\\nu,\\lambda, v)}$ in $(6.11)$, $(6.12)$ and $(6.13)$ by $u^{(a,\\nu,\\lambda, v)}$ and $u^{(a',\\nu,\\lambda, v)}$, at the cost of an $O(\\nu^{\\delta})$ error. Then, making $|a-a'|$ arbitrarily small and then sending $\\nu\\to 0$ (so that $\\lambda^{2\\sigma}T\\to 0$), we complete the proof.\n\\end{proof}\n\n\\section*{Acknowledgements}\nY.H. would like to thank IH\\'ES for their hospitality and support while he visited in the summer of 2014. Y.S. would like to thank the Department of Mathematics at the University of Texas at Austin, where part of the work was initiated, for its hospitality. Y.S. acknowledges the support of ANR grants ``HAB'' and ``NONLOCAL''. \n\n\\bibliographystyle{alpha} \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
In the economic sciences, for example, such order relations are typically used to define a decision-maker's preferences over states \\cite{mascollel1995}. Accordingly, a decision-maker or a learning system can be thought to maximize a utility function, analogous to a physical system that aims to minimize an energy function.\nMoreover, in the presence of uncertainty in stochastic choice, such decision-makers can be thought to operate under entropy constraints reflecting the decision-maker's precision \\cite{ortega2013thermodynamics,parrondo2015thermodynamics}, resulting in soft-maximizing the corresponding utility function instead of perfectly maximizing it. This is formally equivalent to following a Boltzmann distribution with energy given by the utility. Therefore, in this picture, the physical concept of work corresponds to utility changes caused by the environment, whereas the physical concept of heat corresponds to utility gains due to internal adaptation \\cite{still2012thermodynamics}. Like a thermodynamic system is driven by work, such learning systems are driven by changes in the utility landscape (e.g. changes in an error signal). By exposing learning systems to varying environmental conditions, it has been hypothesized that adaptive behavior can be studied in terms of fluctuation theorems \\cite{grau2018non,england2015dissipative}, which are not necessarily tied to physical processes but are broadly applicable to stochastic processes satisfying certain constraints \\cite{hack2022jarzyskis}.\n\nAlthough fluctuation theorems have been empirically observed in numerous experiments in the physical sciences \\cite{douarche2005experimental,collin2005verification,saira2012test,liphardt2002equilibrium,an2015experimental,smith2018verification}, there have been no reported experimental results relating fluctuation theorems to adaptive behavior in humans or other living beings. 
Here, we test Jarzynski's equality and Crooks' fluctuation theorem experimentally in a human sensorimotor adaptation task.\nIn this context, the fluctuation theorem establishes a linear relationship between the externally imposed utility changes driving the learning process (which are directly related to non-predicted information and energy dissipation \\cite{still2012thermodynamics}) and the log-probability ratio between forward and backward adaptation trajectories, when exposing participants to the sequence of environments either in the forward or reverse order. Accordingly, such learners can be quantitatively characterized by a hysteresis effect that can also be observed in simple physical systems.\n\n\\section{Results}\n\\label{sect:results}\n\nIn a visuomotor adaptation task, human participants controlled a cursor on a screen towards a single stationary target by moving a mechanical manipulandum that was obscured from their vision under an overlaid screen---see Figure~\\ref{methods exp}\\textbf{A}. Crucially, in each trial $n$, the position of the cursor could be rotated with angle $\\theta_n$ relative to the actual hand position so that participants had to adapt when moving the cursor from the start position to the target. To measure participants' adaptive state, we recorded their movement position at the time of crossing a certain distance from the start position, so that their response could be characterized by an angle $x_n$. \nThe deviation between participants' response and the required movement incurs a sensorimotor loss \\cite{kording2004loss}, that can be quantified as an exponential quadratic error \n\\begin{equation}\n\\label{utility}\n E_n(x)= 1 - e^{- (x-(\\theta_n+b))^2},\n\\end{equation}\nwhere $\\theta_n$ is the true rotation angle set by the trial $n$ and $b$ is a participant-specific parameter for the bias, respectively---see Figure \\ref{methods exp}\\textbf{D}. 
This loss is taken to be the energy (or negative utility) of a participant's stochastic response $X_n = x_n$. Therefore the pointing behavior after a suitably long adaption time can be described by a Boltzmann equilibrium distribution $p_n^{eq}$ of the form\n\\begin{equation}\n\\label{Boltzmann}\n p_{n}^{eq}(x_n) = \\exp\\big(-\\beta( E_n(x_n) - F_n )\\big),\n\\end{equation}\nfor all $x_n\\in A_n$, where the sensorimotor error $E_n(x_n)$ plays the role of an energy, the free energy term $F_n = \\frac{1}{\\beta} \\log \\int_{A_n} \\exp\\left( -\\beta E_n(x_n) \\right) dx_n$ is caused by the normalization, and $A_n$ is the support of the equilibrium distribution $p_{n}^{eq}$, which will vary for each participant, as we explain in Section \\ref{exp design}. Moreover, the softness-parameter $\\beta$, also known as \\textit{inverse temperature}, controls the trade-off between entropy maximization and energy minimization, essentially interpolating between a purely stochastic choice ($\\beta = 0$) and a purely rational choice ($\\beta \\to \\infty$) minimizing the energy perfectly. \n\n\\begin{figure}[!tb]\n\\centering\n \\includegraphics[scale=.32]{lastI.jpg}\n \\caption{\\textbf{A} Schematic representation of an experimental trial with deviation angle $\\theta$. The dotted line represents the participant's hand movement and the continuous line represents the rotated movement observed on the screen. \\textbf{B} Experimental protocol. The continuous line represents the deviation angles $\\theta$ imposed during the first experimental cycle, where trials 1 to 25 constitute the forward process and trials 34 to 58 constitute the backward process. The dotted line represents the second cycle. 
\textbf{C} Illustration of the equilibrium distributions \eqref{Boltzmann} resulting from the exponential quadratic error \eqref{utility} with $b=\theta_n=0$ and $\beta=1,1.5,2$, respectively.\n The shaded area represents the target, which tolerates, at most, an error of $2 \degree$. \textbf{D} As participants have to equilibrate between the forward and backward protocols, we compare their performance in the $0 \degree$ plateaus between protocols with the equilibrium distribution recorded before the start of the protocol, shown here exemplarily for participant 7 (red: normalized error histogram for the in-between plateaus, green: equilibrium histogram). The same comparison for each participant can be found in Figure \ref{plateaus all}.}\n \label{methods exp}\n\end{figure}\n\nThe task consisted of a sequence of target-reaching trials, where the rotation angle $\theta_n$ changed from one trial $n$ to the next trial $n+1$ according to a given up-down protocol---see Figure~\ref{methods exp}\textbf{B}---so that participants' responses could be represented by a trajectory $\vect{x}=(x_0,x_1,\ldots,x_N)$.\nWhen the environment changes over many time steps, we can distinguish error changes $\Delta E_{ext}(\vect{x}) \coloneqq \sum_{n=0}^{N-1} (E_{n+1}(x_n)-E_{n}(x_n))$ that are induced externally by changes in the environment from error changes $\Delta E_{int}(\vect{x}) \coloneqq \sum_{n=1}^{N} (E_{n}(x_n)-E_{n}(x_{n-1}))$ due to internal adaptation when changing from $x_{n-1}$ to $x_{n}$. Crucially, it is exactly the externally induced changes in error, $\Delta E_{ext}(\vect{x})$, analogous to the physical concept of work, that drive the adaptation process: if $\Delta E_{ext}(\vect{x})$ is large, the system is more surprised and has to adapt more. In the following, we thus refer to $\Delta E_{ext}(\vect{x})$ as the \emph{driving error} or \emph{driving signal}. 
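The decomposition of error changes into external and internal contributions can be sketched numerically. The following minimal Python example (an illustration under the assumptions $b=0$ by default, not the analysis code used in the study) implements the exponential quadratic error \eqref{utility} and the two sums defining $\Delta E_{ext}$ and $\Delta E_{int}$:

```python
import numpy as np

def error(x, theta, b=0.0):
    # Exponential quadratic sensorimotor error E_n(x) = 1 - exp(-(x - (theta + b))^2)
    return 1.0 - np.exp(-(np.asarray(x, float) - (theta + b)) ** 2)

def error_decomposition(xs, thetas, b=0.0):
    """Split the total error change along a trajectory into the externally
    driven part (environment changes at fixed response) and the internal
    part (response changes at fixed environment)."""
    xs, thetas = np.asarray(xs, float), np.asarray(thetas, float)
    # Delta E_ext = sum_{n=0}^{N-1} E_{n+1}(x_n) - E_n(x_n)
    dE_ext = np.sum(error(xs[:-1], thetas[1:], b) - error(xs[:-1], thetas[:-1], b))
    # Delta E_int = sum_{n=1}^{N} E_n(x_n) - E_n(x_{n-1})
    dE_int = np.sum(error(xs[1:], thetas[1:], b) - error(xs[:-1], thetas[1:], b))
    return dE_ext, dE_int
```

By construction the two parts telescope, $\Delta E_{ext}(\vect{x})+\Delta E_{int}(\vect{x})=E_N(x_N)-E_0(x_0)$, so only the externally driven part plays the role of work.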
When applying Crooks' fluctuation theorem for general adaptive systems \cite{hack2022jarzyskis} to the above setting, we obtain the linear relation\n\begin{equation}\n\label{prediction}\n \Delta E_{ext}(\vect{x}) - \Delta F = \frac{1}{\beta}\log \left( \frac{\rho^F(\Delta E_{ext}(\vect{x}))}{\rho^B(-\Delta E_{ext}(\vect{x}))} \right),\n\end{equation}\nwhere $\Delta F$ denotes the free energy difference $F_N-F_0$, and $\rho^F$ and $\rho^B$ are the learner's probability densities over possible driving errors after sequentially exposing the learning system to the $N+1$ environments in forward and reverse order, respectively. In equation \eqref{prediction}, these densities are evaluated at the actual driving errors $\Delta E_{ext}(\vect{x})$ and $-\Delta E_{ext}(\vect{x})$, respectively, for a particular adaptive trajectory $\vect{x}$.\n \n\n\nA direct consequence of \eqref{prediction} is Jarzynski's equality \cite{crooks1998nonequilibrium}, which states that\n\begin{equation}\n\label{prediction II}\n \big\langle e^{-\beta \Delta E_{ext}(\vect{X})}\big\rangle = e^{-\beta \Delta F},\n\end{equation}\nwhere $\langle ~ \cdot ~ \rangle \coloneqq \mathbb E[~\cdot~]$ denotes the expectation operator and $\vect{X} = (X_n)_{n=0}^N$ is a Markov chain with transition densities $\Pi_n$ that have $p^{eq}_n$ as stationary distributions. In our experiment, $\vect{X}$ represents participants' responses recorded over multiple repetitions of the forward-backward protocol. \nIn the following, we will test the relationships \eqref{prediction} and \eqref{prediction II} experimentally with $\Delta F = 0$, as our human learners start and end in the same environmental state.\n\n\nIn our experiment, the task is divided into 20 cycles of 66 trials each, following the protocol \eqref{protocol} illustrated in Figure \ref{methods exp}\textbf{B}. 
We refer to trials 1 to 25 of each cycle as a realization of the \emph{forward process} and trials 34 to 58 as a realization of the \emph{backward process}. Notice that the backward process consists of the same angles as the forward process, that is, the same utility functions, but in reversed order. Thus, we record for each participant 20 values for $\Delta E_{ext}(\vect{x})$ in both the forward and backward processes\nthat we use to estimate participants' probability densities of the forward and backward processes, $\rho^F$ and $\rho^B$, respectively, using kernel density estimation. As the amount of data available to test the linear relation in \eqref{prediction} is limited, we use simulation results in the following for comparison with participants' behavior.\n\n\n\begin{figure}[!tb]\n\centering\n \includegraphics[scale=.2]{fig-2-last-IV.jpg}\n \caption{Simulation of Crooks' fluctuation theorem. \textbf{A} Simulation with 1000 cycles. In black, the theoretical prediction; in red, the linear regression for the simulated data; and, in green, the simulated points. Since the simulated data set agrees well with Crooks' fluctuation theorem \eqref{prediction}, Jarzynski's equality \eqref{prediction II} is also fulfilled. \textbf{B} Simulation with 20 cycles and bootstrapping. 
The black line is the theoretical prediction \eqref{prediction}, while the red line and shaded area are, respectively, the mean and the 99\% confidence interval of \eqref{prediction} after 1000 bootstraps of the driving error values obtained in a single run (which consists of 20 cycles).}\n \label{2 simus}\n\end{figure}\n\nWhen simulating an artificial decision-maker based on a stochastic optimization scheme with Markovian dynamics, for example a Metropolis-Hastings algorithm with target distribution $p_n^{eq}\propto \exp(-\beta E_n)$, it is clear that we can recover the linear relationship \eqref{prediction}, provided that sufficient samples are collected \cite{hack2022jarzyskis}---see, for example, a simulation with 1000 cycles in Figure \ref{2 simus}\textbf{A}, which shows good agreement between the theoretical prediction (in black) and the linear regression of the observed data (in red). As a result, \eqref{prediction II} also holds in this scenario. The more critical question is what happens when only a few samples are available. To this end, we use the stochastic optimization algorithm to simulate the protocol of our experiment, that is, 20 cycles, and indicate confidence intervals using 1000 bootstraps. It can be seen in Figure \ref{2 simus}\textbf{B} that the theoretical prediction is consistent with the $99\%$ confidence interval in the region where $|\Delta E_{\textrm{ext}}| \leq 4$ (which is the region where our experimental data lie).\nUsing the same bootstrapped data, we obtain several estimates of $\langle e^{- \Delta E_{ext}(\vect{X})}\rangle$ (the mean of $e^{- \Delta E_{ext}(\vect{X})}$ for the observed values of $\Delta E_{ext}(\vect{X})$ at each bootstrap), which we use to calculate a confidence interval for it. 
This results in the $99\%$ confidence interval for $\langle e^{-\Delta E_{ext}(\vect{X})}\rangle$ being $(0.48,\text{ }1.64)$, which is consistent with the theoretical prediction \eqref{prediction II} for $\Delta F = 0$. Accordingly, we expect a similar behavior for our experimental data. Note that we take, for simplicity, $b=0$, $\beta=1$ and, for all $n$, $A_n=[-90,90]$ in these simulations.\n\n\n\n\begin{figure}[!tb]\n\centering\n \n \includegraphics[scale=0.21]{lastIII-II.jpg}\n \caption{Hysteresis effect. The filled triangles are the mean of the observed angles for every deviation in both the forward process, in green, and the backward process, in red. The black line is the forward protocol. Participants that achieve at least $50\%$ adaptation are shaded by a green background color. Hysteresis can be observed between trials 1 and 5, 9 and 17, and 21 and 25. 
Notice that, as expected, the forward means are below the backward means in the first region, above them in the second, and below them again in the third.}\n \label{hyste plot}\n\end{figure}\n\nParticipants' average adaptive responses can be seen in Figure \ref{hyste plot} compared to the experimentally imposed true parameter values (the trial-by-trial responses can be seen in Figure \ref{forward all}). The green and red lines distinguish the forward and backward trajectories, respectively, so that hysteresis becomes apparent from the contrast between the two curves, as is common in simple physical systems \cite{jarzynski2011equalities} and as reported previously in similar experiments on sensorimotor adaptation \cite{turnham2012facilitation}. Participants that achieve at least $50\%$ adaptation are shaded by a green background color and are our participants of interest. The three participants that fail to achieve this minimum adaptation level are marked by a red shade. Instead of excluding these participants entirely from the analysis, we keep them in to show the contrast to the well-adapted participants and to highlight that the results reported for the well-adapted participants do not hold trivially for any participant producing inconsistent behavior.\n\nFigure \ref{together} shows participants' data compared to the theoretical prediction from \eqref{prediction} and the 99\% confidence interval after 1000 bootstraps, as in the case of the simulations in Figure \ref{2 simus}\textbf{B}.\nThere, we see that our data follow the trend of the theoretical prediction and lie within or close to the confidence interval bounds of the prediction over ample regions for several participants.\nThis is not a trivial result, as can easily be seen when randomizing the temporal order of the trajectory points or when replacing the utility function with one that does not fit the setup. 
Figure \\ref{togetherB}\\textbf{B} and \\ref{togetherB}\\textbf{C} show this, for example, for an inverted Mexican hat (\\eqref{mex hat} with $\\sigma=4$) that assigns low utility to the target region, and for resamples of the trajectory points in a random order, respectively. Both results are clearly incompatible with the theoretical prediction. \n\n\nWhen conducting an additional robustness analysis in Figure~\\ref{graph distances}, we found that, under the proposed utility function, participants' behavior is compatible with Crooks' fluctuation theorem for a broad neighbourhood of parameter settings, but breaks down when choosing implausible parameters. Regarding Jarzynski's equality \\eqref{prediction II}, the confidence intervals for the majority of participants are consistent with the theoretical prediction when using the bootstrapped values to calculate $\\langle e^{-\\beta \\Delta E_{ext}(\\vect{X})}\\rangle$ (cf. Table \\ref{jarz participants}). In contrast, when following the same procedure for both the inverted Mexican hat and the randomized procedure, we obtain consistency for a considerably smaller number of participants. In particular, for the inverted Mexican hat, we obtain consistency for only two participants. Moreover, these participants are $S_8$ and $S_9$, which belong to the group that did not reach at least $50\\%$ adaptation (indicated by the red background area in the figures). For the randomized procedure, the expected number of participants that show consistency is also close to two, although the specific subjects which are consistent vary with the realization of the randomized procedure. More specifically, after 1000 runs of the randomized procedure, the mean number of consistent subjects we observed was 2.33. 
\n\n\n\\begin{comment}\n\\begin{table}[!htb]\n\\centering\n \\begin{tabular}{||c| c| c ||} \n \\hline\n participant & Mean & Standard deviation \\\\ [0.5ex] \n \\hline\\hline\n 1 & 1.02 & 0.35 \\\\ \n 2 & 0.72 & 0.30 \\\\\n 3 & 0.43 & 0.09 \\\\\n 4 & 1.36 & 0.51 \\\\\n 5 & 0.57 & 0.27 \\\\\n 6 & 0.48 & 0.15 \\\\\n 7 & 0.31 & 0.10 \\\\\n 8 & 4.80 & 2.60 \\\\\n 9 & 1.55 & 0.51 \\\\\n 10 & 2.28 & 1.30 \\\\\n \\hline\n \\end{tabular}\n \\caption{Jarzynski's equality experimental results. Notice the results should be one according to the theoretical prediction. To get the results, we bootstrap the observed value of $\\Delta E_{ext}(\\vect{x})$ for the forward process 1000 times and calculate $\\int p(W) e^{-W}dW$, where $p(W)$ is obtained using kernel density estimation on each bootstrap. Aside from participants 3 and 7, all participants are almost consistent with 3 sigmas.}\n \\end{table}\n\\end{comment}\n \n \\begin{table}[!tb]\n\\centering\n \\begin{tabular}{||c| c|| c| c ||} \n \\hline\n participant & Confidence interval & participant & Confidence interval \\\\ [0.5ex] \n \\hline\\hline\n 1 & \\cellcolor[RGB]{175,234,180}(0.03,\\text{ }48.59) & 6 & \\cellcolor[RGB]{175,234,180} (0.04,\\text{ }3.75) \\\\ \n 2 & \\cellcolor[RGB]{175,234,180} (0.03,\\text{ }137.58) & 7 &\\cellcolor[RGB]{175,234,180} (0.01,\\text{ }0.50)\\\\\n 3 & \\cellcolor[RGB]{175,234,180} (0.01,\\text{ }3.63) & 8 &\\cellcolor[RGB]{255, 182, 193}(1.98,\\text{ }518130.21)\\\\\n 4 & \\cellcolor[RGB]{175,234,180}(0.49,\\text{ }63.48) & 9 & \\cellcolor[RGB]{255, 182, 193}(0.76,\\text{ }77.24)\\\\\n 5 &\\cellcolor[RGB]{175,234,180} (0.46,\\text{ }1.37) & 10 &\\cellcolor[RGB]{255, 182, 193}(0.26,\\text{ }48758.33)\\\\\n \\hline\n \\end{tabular}\n \\caption{Experimental results for Jarzynski's equality. 
We include the confidence intervals for the left-hand side of \eqref{prediction II}, which we obtain after bootstrapping the observed values of $\Delta E_{ext}(\vect{x})$ for the forward process 1000 times and estimating $\langle e^{-\beta \Delta E_{ext}(\vect{X})}\rangle$ by its mean for each set of bootstrapped data.\n\n Note that, in our setup, the theoretical prediction fulfills $\Delta F=0$ on the right-hand side of \eqref{prediction II}, so that consistency requires values around $1.0$ for this estimate. Participants that achieve at least $50\%$ adaptation (cf. Figure \ref{hyste plot}) are shaded by a green background color.}\n \label{jarz participants}\n \end{table}\n\n\n\n\n\n\section{Discussion}\n\n\n\begin{figure}[!tb]\n\centering\n \includegraphics[scale=0.21]{lastIV.jpg}\n \caption{Experimental results for Crooks' fluctuation theorem when the sensorimotor loss behaves as an exponential quadratic error \eqref{utility}. The black line is the theoretical prediction of Crooks' fluctuation theorem \eqref{prediction}, while the curves stand for the mean path after 1000 bootstraps of the observed driving error values. Participants that achieve at least $50\%$ adaptation (cf. Figure \ref{hyste plot}) are shaded by a green background color. The shaded areas inside the graphs are the 99\% confidence intervals which result from bootstrapping. Note that we fit the parameters for each participant according to Section \ref{exp design}.}\n \label{together}\n\end{figure}\n\n\n\begin{figure}[!tb]\n\centering\n \includegraphics[scale=0.21]{lastV-II.jpg}\n \caption{Control results for Crooks' fluctuation theorem in two scenarios: \textbf{A} the sensorimotor loss behaves like a Mexican hat function and \textbf{B} the sensorimotor loss behaves as an exponential quadratic error but we sample the observed angles randomly with repetition. 
The black line is the theoretical prediction of Crooks' fluctuation theorem \eqref{prediction}, while the curves stand for the mean path after 1000 bootstraps of the observed driving error values. The shaded areas inside the graphs are the 99\% confidence intervals which result from bootstrapping. Note that, for simplicity, we assume $\beta=1$ for all participants when using the Mexican hat to demonstrate that the result in \textbf{A} does not trivially hold for any cost function. For \textbf{B}, we fit the parameters for each participant according to Section \ref{exp design}.}\n \label{togetherB}\n\end{figure}\n\nIn our experiment, we have investigated the hypothesis that human sensorimotor adaptation may be subject to the thermodynamic fluctuation theorems first reported by Crooks \cite{crooks1999entropy} and Jarzynski \cite{jarzynski2000hamiltonian}. In particular, we tested whether changes in sensorimotor error induced externally by an experimental protocol are linearly related to the log-ratio of the probabilities of behavioral trajectories under a given forward and time-reversed backward protocol of a sequence of visuomotor rotations. We found that participants' data, in all cases where participants showed an appropriate adaptive response, were consistent with this prediction\nor close to its confidence interval bounds, as expected from our simulations with finite sample size.\nMoreover, we found that the exponentiated error averaged over the path probabilities was statistically compatible with unity for these participants, in line with Jarzynski's theorem. 
\n\nTogether, these results not only extend the experimental evidence of \linebreak Boltzmann-like relationships between the probabilities of behavior and the corresponding order-inducing functions---such as energy, utility, or sensorimotor error---from the equilibrium to the non-equilibrium domain, but also from simple physical systems to more complex learning systems when studying adaptation in changing environments, thus deepening the parallel between thermodynamics in physics and decision-making systems \cite{ortega2013thermodynamics}.\n\nWhen testing for the validity of thermodynamic relations, one of the most critical issues is the choice of the energy function, that is, in our case, the error cost function. In physical systems, the energy function is usually hypothesized on the basis of simple models involving point masses, springs, rigid bodies, etc., and generally requires knowledge of the degrees of freedom of the system under consideration. Here, we have used an exponential quadratic error as a utility function, as it has been suggested previously that human pointing behavior can be best captured by loss functions that approximately follow a negative parabola for small errors and then level off for large errors \cite{kording2004loss}. In the absence of very large errors, many studies in the literature on sensorimotor learning have only used the quadratic loss term \cite{wolpert1995internal,todorov2002optimal}. Thus, our assumptions are in line with the literature. 
Crucially, the reported results fail when assuming nonsensical cost functions, like the Mexican hat.\n\n\nExperimental tests of both Jarzynski's equality \eqref{prediction II} and Crooks' fluctuation theorem \eqref{prediction} have been previously reported in classical physics \cite{douarche2005experimental,collin2005verification,toyabe2010experimental,saira2012test,liphardt2002equilibrium} and also, in the case of Jarzynski's equality, in quantum physics \cite{an2015experimental,smith2018verification}. Importantly, these results have been successfully tested in several contexts: unfolding and refolding processes involving RNA \cite{collin2005verification,liphardt2002equilibrium}, electronic transitions between electrodes manipulating a charge parameter \cite{saira2012test}, rotation of a macroscopic object inside a fluid surrounded by magnets where the current of a wire attached to the macroscopic object is manipulated \cite{douarche2005experimental}, and a trapped ion \cite{an2015experimental,smith2018verification}. Despite differences in physical realization, protocols, and energy functions (and thus work functions), all the above experiments follow the same basic design behind the approach presented here. This supports the claim that \nfluctuation theorems do not necessarily rely on involved physical assumptions but are simple mathematical properties of certain stochastic processes \cite{hack2022jarzyskis}, although they were originally derived in the context of non-equilibrium thermodynamics \cite{jarzynski1997equilibrium,crooks1998nonequilibrium}. \n\nMathematically, Crooks' theorem \eqref{prediction} holds for any Markov process (i), whose initial distribution is in equilibrium (ii), and whose transition probabilities satisfy detailed balance with respect to the corresponding equilibrium distributions (iii) \cite{hack2022jarzyskis}. 
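Condition (iii) can be checked directly for a transition kernel of the Metropolis-Hastings type. The snippet below is an illustrative numerical check under the assumed exponential quadratic error with $\beta=1$ and a symmetric uniform proposal (parameter values are our own); it verifies that such a kernel satisfies detailed balance with respect to the Boltzmann distribution \eqref{Boltzmann}:

```python
import numpy as np

def boltzmann(x, theta, beta=1.0):
    # Unnormalized equilibrium density p(x) ∝ exp(-beta * E_theta(x))
    return np.exp(-beta * (1.0 - np.exp(-(x - theta) ** 2)))

def mh_transition(x, y, theta, beta=1.0, step=5.0):
    # Off-diagonal transition density of a Metropolis-Hastings kernel with a
    # symmetric uniform proposal on [x - step, x + step]: q(y|x) * acceptance
    if abs(y - x) > step or x == y:
        return 0.0
    q = 1.0 / (2.0 * step)
    accept = min(1.0, boltzmann(y, theta, beta) / boltzmann(x, theta, beta))
    return q * accept

# Detailed balance: p(x) T(x, y) == p(y) T(y, x) for all pairs x != y
theta = 10.0
for x, y in [(8.0, 11.0), (10.0, 12.5), (6.0, 9.0)]:
    lhs = boltzmann(x, theta) * mh_transition(x, y, theta)
    rhs = boltzmann(y, theta) * mh_transition(y, x, theta)
    assert abs(lhs - rhs) < 1e-12
```

Because detailed balance holds for each environment separately, the same kernel run under a changing $\theta_n$ meets the local reversibility required by \eqref{prediction}, even though the full adaptation trajectory is not time-reversible.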
\nOur experimental test of Equation~\eqref{prediction} can thus be seen as a test for the hypothesis that human sensorimotor adaptation processes satisfy conditions (i), (ii), and (iii). Condition (i) requires adaptation to be Markovian, which is in line with most error-driven models of sensorimotor adaptation \cite{shadmehr2012} that assume some internal state update of the form $x_{t+1}=f(x_t, e)$ with adaptive state $x$ and error $e$. While such models have proven fruitful for simple adaptation tasks like ours, they also have clear limitations, for example when it comes to meta-learning processes that have been reported in more complex learning scenarios \cite{braun2010,lieder2019}. Condition (ii) is supported by our data in the second and last rows of Figure \ref{plateaus all},\nwhere it can be seen that participants' behavior at the beginning of each cycle is roughly consistent with the equilibrium behavior recorded prior to the start of the experiment. Condition (iii) requires that the adaptive process converges to the equilibrium distribution \eqref{Boltzmann} dictated by the environment and that the behavior remains statistically unchanged when staying in that environment. Moreover, it requires that the equilibrium behavior at each energy level is time-reversible, that is, once adaptation has ceased, the trial-by-trial behavior would have the same statistics when played forward or backward in a video recording. Note, however, that this does not imply time-reversibility over the entire adaptation trajectory; it is only required locally for each transition step. \nThe usual noise-driven models of adaptation fulfill this requirement, like the Metropolis-Hastings scheme that has been proposed to simulate human sensorimotor adaptation \cite{SANBORN2016883,grau2018non}.\n\nWhile Jarzynski's equality \eqref{prediction II} directly follows from Crooks' theorem, weaker assumptions are sufficient to derive it \cite{hack2022jarzyskis,jarzynski1997equilibrium}. 
In particular, condition (iii) regarding detailed balance is not necessary, as it is only required that the behavioral distribution does not change anymore once the equilibrium distribution is reached. Thus, Equation~\eqref{prediction II} can be used as a test for the weaker hypothesis that human sensorimotor adaptation satisfies conditions (i), (ii), and stationarity after convergence. While Jarzynski's equality only requires samples from the forward process, Crooks' theorem also tests the relation between the forward and the backward processes. In particular, Crooks' theorem\ndecouples the information processing with respect to any particular environment from the biases introduced by the adaptation history, that is, it assumes that the transition probabilities for any given environment are independent of the history. Hence, the observed difference in behavior after having adapted to the same environment, the hysteresis, is solely explained in terms of the information processing history before encountering the environment. Such hysteresis effects are not only common in simple physical systems like magnets or elastic bands, but have also been reported for sensorimotor tasks \cite{kelso1994,schack2011,turnham2012facilitation}. 
The hysteresis effects we report in Figure~\ref{hyste plot} are in line with a system obeying Crooks' theorem and can be replicated using Markov chain Monte Carlo simulations of adaptation \cite{grau2018non}.\n\nOur study is part of a number of recent studies that have tried to harness equilibrium and non-equilibrium thermodynamics to gain new theoretical insights into simple learning systems \cite{goldt2017stochastic,perunov2016statistical,england2015dissipative,still2012thermodynamics,ortega2013thermodynamics}.\nFor example, the information that can be acquired by learning in simple feedforward neural networks has been shown to be bounded by thermodynamic costs given by the entropy change in the weights and the heat dissipated into the environment \cite{seifert2012stochastic}. More generally, when interpreting a system's response to a stochastic driving signal in terms of computation, the amount of non-predictive information contained in the state about past environmental fluctuations is directly related to the amount of thermodynamic dissipation \cite{still2012thermodynamics}. This suggests that thermodynamic fundamentals, like the second law, can be carried over to learning systems.\nConsider, for example, a Bayesian learner where the utility is given by the log-likelihood model and where the data are presented either in one chunk for a single update, or consecutively in small batches with many small updates. In the latter case, the cumulative surprise is much smaller and lower bounded by the log-likelihood of the data, which corresponds to the free energy difference before and after learning \cite{grau2018non}. 
Finally, it has even been suggested that the dissipation of absorbed work as it is studied in a generalized Crooks' theorem may underlie a general thermodynamic mechanism for self-organization and adaptation in living matter \cite{england2015dissipative}, raising the question of whether such a general principle of adaptive dissipation could also govern biological learning processes \cite{perunov2016statistical}.\n\n\n\n\begin{comment}\n\clearpage\n\begin{figure}[ht!]\n\centering\n \includegraphics[scale=0.3]{initial.s1.eps} \hfill\n \includegraphics[scale=0.3]{initial.s2.eps} \n \\[\smallskipamount]\n \includegraphics[scale=0.3]{initial.s3.eps} \hfill\n \includegraphics[scale=0.3]{initial.s4.eps}\n \\[\smallskipamount]\n \includegraphics[scale=0.3]{initial.s5.eps} \hfill\n \includegraphics[scale=0.3]{initial.s6.eps}\n \\[\smallskipamount]\n \includegraphics[scale=0.3]{initial.s7.eps} \hfill\n \includegraphics[scale=0.3]{initial.s8.eps}\n \\[\smallskipamount]\n \includegraphics[scale=0.3]{initial.s9.eps} \hfill\n \includegraphics[scale=0.3]{initial.s10.eps}\n \caption{In blue, histogram initial 100 trials (no deviation). 
In red, equilibrium distribution where the width and center are chosen to represent the histogram.}\n \\label{plateaus all}\n\\end{figure}\n\n\\clearpage \n\\begin{figure}[ht!]\n\\centering\n \\includegraphics[scale=0.3]{plat.s1.eps} \\hfill\n \\includegraphics[scale=0.3]{plat.s2.eps} \n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{plat.s3.eps} \\hfill\n \\includegraphics[scale=0.3]{plat.s4.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{plat.s5.eps} \\hfill\n \\includegraphics[scale=0.3]{plat.s6.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{plat.s7.eps} \\hfill\n \\includegraphics[scale=0.3]{plat.s8.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{plat.s9.eps} \\hfill\n \\includegraphics[scale=0.3]{plat.s10.eps}\n \\caption{Histogram 0 deviation plateaus.}\n \\label{plateaus all}\n\\end{figure}\n\n\\clearpage\n\\begin{figure}[ht!]\n\\centering\n \\includegraphics[scale=0.3]{forw.s1.eps} \\hfill\n \\includegraphics[scale=0.3]{forw.s2.eps} \n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{forw.s3.eps} \\hfill\n \\includegraphics[scale=0.3]{forw.s4.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{forw.s5.eps} \\hfill\n \\includegraphics[scale=0.3]{forw.s6.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{forw.s7.eps} \\hfill\n \\includegraphics[scale=0.3]{forw.s8.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{forw.s9.eps} \\hfill\n \\includegraphics[scale=0.3]{fow.s10.eps}\n \\caption{Observed angles in the forward trajectories.}\n \\label{forward all}\n\\end{figure}\n\\end{comment}\n\n\\begin{comment}\n\\clearpage \n\\begin{figure}[!tb]\n\\centering\n \\includegraphics[scale=0.3]{hyste-s1.eps} \\hfill\n \\includegraphics[scale=0.3]{hyste-s2.eps} \n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{hyste-s3.eps} \\hfill\n \\includegraphics[scale=0.3]{hyste-s4.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{hyste-s5.eps} \\hfill\n 
\\includegraphics[scale=0.3]{hyste-s6.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{hyste-s7.eps} \\hfill\n \\includegraphics[scale=0.3]{hyste-s8.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{hyste-s9.eps} \\hfill\n \\includegraphics[scale=0.3]{hyste-s10.eps}\n \\caption{Hysteresis plot. In green, the mean of the observed angles for the forward process. In red, the mean of the observed angles for the backward process. In black, the forward protocol. Hysteresis can be observed between trials 1 and 5, 9 and 17 and 21 and 25. Notice the forward means are below the backward in the first region, above in the second and below again in the third, as expected.}\n \\label{hyste plot}\n\\end{figure}\n\\end{comment}\n\n\n\\begin{comment}\n\\clearpage \n\\begin{figure}[!tb]\n\\centering\n \\includegraphics[scale=0.3]{jarz-s1.eps} \\hfill\n \\includegraphics[scale=0.3]{jarz-s2.eps} \n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{jarz-s3.eps} \\hfill\n \\includegraphics[scale=0.2]{jarz-s4.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{jarz-s5.eps} \\hfill\n \\includegraphics[scale=0.2]{jarz-s6.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{jarz-s7.eps} \\hfill\n \\includegraphics[scale=0.3]{jarz-s8.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{jarz-s9.eps} \\hfill\n \\includegraphics[scale=0.3]{jarz-s10.eps}\n \\caption{Histogram Jarzynski's equation data. The (mean,standard deviation) pair for each participant are: (0.8,0.28), (0.57,0.24), (0.33,0.07), (1.05,0.42), (0.45,0.2), (0.38,0.12), (0.25,0.08), (3.83,1.97), (1.22,0.42) and (1.81,1.03). 
7 of the 10 participants show results consistent with the Jarzynski equality with 3 sigmas or less.}\n \\label{jarz data}\n\\end{figure}\n\\end{comment}\n\n\\begin{comment}\n\\clearpage\n\\begin{figure}[ht!]\n\\centering\n \\includegraphics[scale=0.3]{back.s1.eps} \\hfill\n \\includegraphics[scale=0.3]{back.s2.eps} \n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{back.s3.eps} \\hfill\n \\includegraphics[scale=0.3]{back.s4.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{back.s5.eps} \\hfill\n \\includegraphics[scale=0.3]{back.s6.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{back.s7.eps} \\hfill\n \\includegraphics[scale=0.3]{back.s8.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{back.s9.eps} \\hfill\n \\includegraphics[scale=0.3]{back.s10.eps}\n \\caption{Observed angles in the backward trajectories.}\n \\label{forward all}\n\\end{figure}\n\n\\clearpage\n\\begin{figure}[ht!]\n\\centering\n \\includegraphics[scale=0.3]{boot.s1.eps} \\hfill\n \\includegraphics[scale=0.3]{boot.s2.eps} \n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{boot.s3.eps} \\hfill\n \\includegraphics[scale=0.3]{boot.s4.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{boot.s5.eps} \\hfill\n \\includegraphics[scale=0.3]{boot.s6.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{boot.s7.eps} \\hfill\n \\includegraphics[scale=0.3]{boot.s8.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{boot.s9.eps} \\hfill\n \\includegraphics[scale=0.3]{boot.s10.eps}\n \\caption{Experimental linear relation. 
In black, the theoretical line, and, in blue, the boundaries of the 99.7 \\% confidence interval after 1000 bootstraps of the measured driving error values.}\n \\label{result}\n\\end{figure}\n\n\\clearpage\n\\begin{figure}[ht!]\n\\centering\n \\includegraphics[scale=0.3]{mex.s1.eps} \\hfill\n \\includegraphics[scale=0.3]{mex.s2.eps} \n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{mex.s3.eps} \\hfill\n \\includegraphics[scale=0.3]{mex.s4.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{mex.s5.eps} \\hfill\n \\includegraphics[scale=0.3]{mex.s6.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{mex.s7.eps} \\hfill\n \\includegraphics[scale=0.3]{mex.s8.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{mex.s9.eps} \\hfill\n \\includegraphics[scale=0.3]{mex.s10.eps}\n \\caption{Experimental linear relation using a mexican hat as energy function. In black, the theoretical line, and, in blue, the boundaries of the 99.7 \\% confidence interval after 1000 bootstraps of the measured driving error values.}\n \\label{mex}\n\\end{figure}\n\n\\clearpage\n\\begin{figure}[ht!]\n\\centering\n \\includegraphics[scale=0.3]{rando.s1.eps} \\hfill\n \\includegraphics[scale=0.3]{rando.s2.eps} \n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{rando.s3.eps} \\hfill\n \\includegraphics[scale=0.3]{rando.s4.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{rando.s5.eps} \\hfill\n \\includegraphics[scale=0.3]{rando.s6.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{rando.s7.eps} \\hfill\n \\includegraphics[scale=0.3]{rando.s8.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.3]{rando.s9.eps} \\hfill\n \\includegraphics[scale=0.3]{rando.s10.eps}\n \\caption{Experimental linear relation sampling with repetition the observed angles. 
In black, the theoretical line, and, in blue, the boundaries of the 99.7 \\% confidence interval after 1000 bootstraps of the measured driving error values.}\n \\label{random}\n\\end{figure}\n\\end{comment}\n\n\\begin{comment}\n\\begin{figure}[!tb]\n\\centering\n \\includegraphics[scale=0.6]{allImages.png}\n \\caption{Experimental linear relation for three cases: original utility function (rows 1 and 2), inverted Mexican hat utility function (rows 3 and 4) and original utility function after sampling with repetition the observed angles (rows 5 and 6). In black, the theoretical line, and, in blue, the mean path after 1000 bootstraps of the observed driving error values. The colored areas represent the 99\\% confidence interval after 1000 bootstraps of the measured driving error values in each case.}\n \\label{together}\n\\end{figure}\n\\end{comment}\n\n\\begin{comment}\n\\begin{figure}[!tb]\n\\centering\n \\includegraphics[scale=0.6]{fig_5.png}\n \\caption{Close-up of Figure \\ref{together} in the region where the simulations of the experiment show a better fit of the theoretical prediction: driving error values between 1 and -1.}\n \\label{together zoom}\n\\end{figure}\n\\end{comment}\n\n\\begin{comment}\n\\clearpage\n\\begin{figure}[ht!]\n\\centering\n \\includegraphics[scale=0.4]{mex.s1_zoom.eps} \\hfill\n \\includegraphics[scale=0.4]{mex.s2_zoom.eps} \n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.4]{mex.s3_zoom.eps} \\hfill\n \\includegraphics[scale=0.4]{mex.s4_zoom.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.4]{mex.s5_zoom.eps} \\hfill\n \\includegraphics[scale=0.4]{mex.s6_zoom.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.4]{mex.s7_zoom.eps} \\hfill\n \\includegraphics[scale=0.4]{mex.s8_zoom.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.4]{mex.s9_zoom.eps} \\hfill\n \\includegraphics[scale=0.4]{mex.s10_zoom.eps}\n \\caption{Zoom mexican hat}\n \\label{mexa 
zoom}\n\\end{figure}\n\n\\begin{figure}[ht!]\n\\centering\n \\includegraphics[scale=0.4]{rando.s1_zoom.eps} \\hfill\n \\includegraphics[scale=0.4]{rando.s2_zoom.eps} \n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.4]{rando.s3_zoom.eps} \\hfill\n \\includegraphics[scale=0.4]{rando.s4_zoom.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.4]{rando.s5_zoom.eps} \\hfill\n \\includegraphics[scale=0.4]{rando.s6_zoom.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.4]{rando.s7_zoom.eps} \\hfill\n \\includegraphics[scale=0.4]{rando.s8_zoom.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.4]{rando.s9_zoom.eps} \\hfill\n \\includegraphics[scale=0.4]{rando.s10_zoom.eps}\n \\caption{Zoom resampling}\n \\label{resamp zoom}\n\\end{figure}\n\n\\begin{figure}[ht!]\n\\centering\n \\includegraphics[scale=0.4]{boot.s1_zoom.eps} \\hfill\n \\includegraphics[scale=0.4]{boot.s2_zoom.eps} \n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.4]{boot.s3_zoom.eps} \\hfill\n \\includegraphics[scale=0.4]{boot.s4_zoom.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.4]{boot.s5_xoom.eps} \\hfill\n \\includegraphics[scale=0.4]{boot.s6_zoom.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.4]{boot.s7_zoom.eps} \\hfill\n \\includegraphics[scale=0.4]{boot.s8_zoom.eps}\n \\\\[\\smallskipamount]\n \\includegraphics[scale=0.4]{boot.s9_zoom.eps} \\hfill\n \\includegraphics[scale=0.4]{boot.s10_zoom.eps}\n \\caption{Zoom original}\n \\label{orig zoom}\n\\end{figure}\n\\end{comment}\n\n\n\n\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nFederated learning~\\citep{kairouz2021advances} enables collaborative learning from distributed data located at multiple clients without the need to share the data among the different clients or with a central server. 
Much progress has been made in recent work on various aspects of this problem setting, such as improved optimization at each client~\\citep{li2020federatedheterogenous}, improved aggregation of client models at the server~\\citep{chen2020fedbe}, handling the heterogeneity in clients' data distributions~\\citep{zhu2021data}, and also efforts towards personalization of the client models~\\citep{mansour2020three}.\n\nMost existing formulations of federated learning view it as an optimization problem where the global loss function is optimized over multiple rounds, with each round consisting of point estimation of the client model by minimizing a loss function defined over the client's local data, followed by an aggregation of the client models on a central server. Point estimation, however, is prone to overfitting, especially if the amount of training data on clients is very small. Moreover, crucially, such an approach ignores the uncertainty in the client models. Indeed, taking into account the model uncertainty has been shown to be useful not only for improved accuracy and robustness of predictions when the amount of training data is limited, but also in other tasks, such as out-of-distribution (OOD) detection~\\citep{salehi2021unified} and active learning~\\citep{ahn2022federated}. In this work, we present a Bayesian approach for federated learning which takes into account the model uncertainty, and we also demonstrate its effectiveness for other tasks in federated settings where accurate estimates of model uncertainty are crucial, such as OOD detection and active learning.\n\nDespite its importance, federated learning in a Bayesian setting is inherently a challenging problem. 
Unlike standard federated learning, in the Bayesian setting, each client needs to estimate the posterior distribution over its weights (and also the posterior predictive distribution, which is needed at the prediction stage), which is an intractable problem.\nTypical ways to address this intractability of Bayesian inference for deep learning models include (1) approximate Bayesian inference, where the posterior distribution of model parameters is usually estimated via approximate inference methods, such as MCMC~\\citep{zhang2019cyclical,izmailov2021bayesian}, variational inference~\\citep{zhang2018advances}, or other faster approximations such as modeling the posterior via a Gaussian distribution constructed using the SGD iterates~\\citep{maddox2019simple}, or (2) ensemble methods, such as deep ensembles~\\citep{lakshminarayanan2017simple}, where the model is trained using different initializations to yield an ensemble whose diversity represents the model uncertainty.\n\nThe other key challenge for Bayesian federated learning is efficiently communicating the client model parameters, which are represented by a probability distribution, to the server, and their aggregation at the server. Note that, unlike standard federated learning, in the Bayesian setting, each client would maintain either a probability distribution over its model weights or an ensemble over the model weights. Both of these approaches make it difficult to efficiently communicate the client models and aggregate them at the server. Some recent attempts towards Bayesian federated learning have relied on simplifications such as assuming that the posterior distribution of each client's weights is a Gaussian~\\citep{al2020federated,linsner2021approaches}, which makes model communication and aggregation at the server somewhat easier. However, this severely restricts the expressiveness of the client models. In our work, we do not make any assumption on the form of the posterior distribution of the client weights. 
Another appealing aspect of our Bayesian federated learning approach is that, at test time, it does not require the Monte-Carlo averaging~\\citep{bishop2006pattern,korattikara2015bayesian} that is usually required by Bayesian methods (especially for non-conjugate models, such as deep learning models), which makes them slow (essentially, using $S$ Monte-Carlo samples from the posterior makes prediction $S$ times slower). In contrast, our approach leverages ideas from distillation of the posterior predictive distribution~\\citep{korattikara2015bayesian}, using which we are able to represent the entire posterior predictive distribution using a single deep neural network, resulting in fast predictions at test time.\n\nOur contributions are summarized below:\n\\begin{itemize}\n \\item We present a novel and efficient approach to Bayesian federated learning in which each client performs a distillation of its posterior predictive distribution into a single deep neural network. This allows solving the Bayesian federated learning problem using ideas developed for standard federated learning methods, while still capturing and leveraging model uncertainty. \n \\item Our approach does not make any strict assumptions on the form of the clients' posterior distributions (e.g., Gaussian~\\citep{al2020federated}) or predictive distributions. Moreover, despite being Bayesian, our approach is still fast at test time since it does not require Monte-Carlo averaging (which is akin to averaging over an ensemble) but uses the idea of distribution distillation to represent the PPD via a single deep neural network.\n \\item We present various ways to aggregate the clients' predictive distributions at the server, both with and without requiring publicly available (unlabeled) data at the server. 
\n \\item In addition to tasks such as classification and out-of-distribution (OOD) detection, we also show a use case of our approach for the problem of active learning in federated setting~\\citep{ahn2022federated} where our approach outperforms existing methods.\n\\end{itemize}\n\n\n\\begin{comment}\nThe popularity of machine and deep learning approaches is a result of availability of better computation and storage mediums in the ecosystem. In many practical scenarios, the pool of data is readily available in the form of a repository or some other storage medium. However, there are also ample cases, where the sharing of data is of concern if it is of private in nature e.g. medical health records. This, coupled with the increasing trend of computational power in mobile and personal devices has brought attention to the concept of leveraging resources on local devices for computation of learning models. It has led an increase in interest in the field of \\textit{federated learning}.\n\nFederated learning explores the possibility of training the learning models using local resources present on remote devices. It uses local computation resources, thereby increasing computation efficiency due to a distributed training fashion. Also, it uses data in a secure way by retaining it with the clients. This eliminates the need of a central data repository and also saves the storage cost and the expensive communication cost between any central server and the distributed systems. Federated learning process usually iterates through many rounds of communication alternating between the server and client level computations. Usually, in each round, the clients initializes their model with the global server model and trains it using local data, which is then sent to the server. 
The server then aggregates these models to form the global server model, which can be used for the next round.\n\nBesides the formulation stated above in simpler words, the problem becomes more challenging due to a few more aspects.\n\n\\begin{enumerate}\n \\item \\textbf{Imbalanced data} Different devices may be able to generate or store different volumes of data. This can be either due to storage characteristics of device or the variability in usage of service\/application generating the data. This means that based on the available data, local learning models may have a small or large base of data to learn from and so their performance can vary with huge gaps.\n \\item \\textbf{Non-IID data} The data generated by various devices may have different distribution depending on the usage. For ex: Depending on the genre of conversation, the data for next word prediction task may differ.\n \\item \\textbf{Inactive participation} Many devices do not participate actively because of their availability and data generation activity. This may lead to different number of devices and different active devices present in each round.\n\\end{enumerate}\n\\end{comment}\n\n\\section{Bayesian Federated Learning via Predictive Distribution Distillation}\n\\begin{figure}[t]\n \\centering\n \\includegraphics[scale=0.35]{figures\/FedDiagram.png}\n \\caption{\\small{The above figure summarizes our framework. Each client infers the (approximate) posterior distribution by generating the posterior samples (teacher models) which are distilled to give the PPD (student model parameterized by a deep neural network). 
Each client communicates its MAP teacher sample and the PPD to the server which aggregates them to yield a global teacher sample and the global PPD, both of which are sent back to the clients, which use these quantities for the next round of learning.}}\n \\label{fig:fedppd_diagram}\n\\end{figure}\n\nUnlike standard federated learning where the client model is represented by a single neural network whose weights are estimated by minimizing a loss function using client's data, we consider the Bayesian setting of federated learning where each client learns a posterior distribution over its weights. The goal is to efficiently communicate the clients' local posteriors to the server and aggregate these local posteriors to learn a global posterior.\n\nHowever, since we usually care about predictive tasks, the actual quantity of interest in the Bayesian setting is not the posterior distribution per se, but the posterior predictive distribution (PPD). Given a set of $S$ samples $\\theta^{(1)},\\ldots,\\theta^{(S)}$ from the posterior, estimated using some training data $\\mathcal{D}$, the (approximate) PPD of a model is defined as $p(y|x,\\mathcal{D}) = \\frac{1}{S}\\sum_{i=1}^S p(y|x,\\theta^{(i)})$. Note that the PPD can be thought of as an ensemble of $S$ models.\n\nSince the PPD is the actual quantity of interest, in our Bayesian federated learning setting, we aim to directly estimate the PPD at each client. However, even estimating and representing the PPD has challenges. In particular, since the PPD is essentially an ensemble of models, storing and communicating such an ensemble from each client to the server can be challenging. To address this issue, we leverage the idea of distribution\/ensemble distillation~\\citep{korattikara2015bayesian}, where the PPD of a deep learning model can be efficiently distilled and stored as a single deep neural network. 
We leverage this distillation idea on each client to represent the client's PPD using a single neural network, which can then be communicated and aggregated at the server in much the same way as in standard federated learning.\n\n\nOur approach can be summarized as follows (and is illustrated in Fig.~\\ref{fig:fedppd_diagram}):\n\n\\begin{enumerate}\n \\item For each client, we perform approximate Bayesian inference for the posterior distribution of the client model weights using Markov Chain Monte Carlo (MCMC) sampling. This gives us a set of samples from the client's posterior, and these samples will be used as teacher models, which we will distill into a student model. We use stochastic gradient Langevin dynamics (SGLD) sampling~\\citep{welling2011bayesian} since it gives us an online method to efficiently distill these posterior samples into a student model (step 2 below).\n \\item For each client, we distill the MCMC samples (teacher models) directly into the posterior predictive distribution (PPD), which is the student model. Notably, in this distillation-based approach~\\citep{korattikara2015bayesian}, the PPD for each client is represented succinctly by a \\emph{single} deep neural network, instead of via an ensemble of deep neural networks. This makes the prediction stage much faster as compared to typical Bayesian approaches. \n \\item For each client, the teacher model with the largest posterior probability (i.e., the MAP sample) from its posterior distribution and the student model representing the client's PPD (both of which are deep neural networks) are sent to the server. \n \\item The server aggregates the teacher and student models it receives from all the clients. For the aggregation, we consider several approaches which we describe in Sec.~\\ref{sec:aggr}. 
\n \\item The aggregated teacher and student models are sent back to each client, and the process continues for the next round.\n \\item We continue steps 1--5 until convergence.\n\\end{enumerate}\n\n\\subsection{Posterior Inference and Distillation of Client's PPD}\n\\label{sec:fedbdk-1}\nWe assume there are $K$ clients with labeled data $\\mathcal{D}_1,\\ldots,\\mathcal{D}_K$, respectively. On each client, we take the Monte Carlo approximation of its posterior predictive distribution (PPD) and distill it into a single deep neural network using an online Bayesian inference algorithm, as done by the Bayesian Dark Knowledge (BDK) approach in~\\citep{korattikara2015bayesian}. Each iteration of this distillation procedure first generates a sample from the client's posterior distribution using the stochastic gradient Langevin dynamics (SGLD) algorithm~\\citep{welling2011bayesian} and incrementally ``injects'' it into a deep neural network $\\mathcal{S}$ (referred to as the ``student'') with parameters $w$, representing a succinct form of the client's (approximate) PPD. This is illustrated by each of the client blocks shown in Fig.~\\ref{fig:fedppd_diagram}. For client $k$, assuming the set of samples generated by SGLD to be $\\theta_k^{(1)},\\ldots,\\theta_k^{(S)}$, this distillation procedure can be seen as learning the parameters $w_k$ of the client $k$'s student model $\\mathcal{S}_k$ by minimizing the following loss function~\\citep{korattikara2015bayesian}\n\\begin{equation}\n \\hat{L}(w_k) = - \\frac{1}{S} \\sum_{i=1}^S \\sum_{x^\\prime \\in \\mathcal{D}^\\prime_k} \\mathbb{E}_{p(y | x^\\prime, \\theta_k^{(i)})} \\log \\mathcal{S}_k(y | x^\\prime, w_k)\n\\end{equation}\nNote that, in the above equation, to compute the loss, we use an unlabeled distillation dataset $\\mathcal{D}_k^\\prime$ at client $k$. This unlabeled dataset can be generated from the original labeled dataset $\\mathcal{D}_k$ by adding perturbations to the inputs, as suggested in~\\citep{korattikara2015bayesian}. 
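A minimal, self-contained sketch of this per-client loop (SGLD posterior samples distilled online into a student that matches the teacher's predictive probabilities on perturbed inputs) is given below. For illustration only, both the teacher and the student are logistic-regression models rather than deep networks, and all sizes, step sizes, and the prior variance are arbitrary choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Toy labeled client data D_k, generated from a logistic model.
X = rng.normal(size=(200, 2))
true_theta = np.array([2.0, -1.0])
y = (rng.random(200) < sigmoid(X @ true_theta)).astype(float)

def grad_log_post(theta, Xb, yb, n_total, prior_var=10.0):
    """Mini-batch estimate of the gradient of the log posterior."""
    p = sigmoid(Xb @ theta)
    return (n_total / len(Xb)) * Xb.T @ (yb - p) - theta / prior_var

theta = np.zeros(2)   # teacher parameters (SGLD chain)
w = np.zeros(2)       # student parameters (distilled PPD)
samples, eps, student_lr = [], 1e-3, 0.1

for t in range(3000):
    idx = rng.integers(0, len(X), size=32)
    # SGLD step: gradient ascent on the log posterior plus Gaussian noise.
    theta = theta + 0.5 * eps * grad_log_post(theta, X[idx], y[idx], len(X)) \
            + np.sqrt(eps) * rng.normal(size=2)
    if t < 1000:
        continue  # burn-in
    samples.append(theta.copy())
    # Distillation batch D'_k: perturbed copies of training inputs.
    xp = X[rng.integers(0, len(X), size=32)] + 0.1 * rng.normal(size=(32, 2))
    p_teacher = sigmoid(xp @ theta)   # p(y=1 | x', current SGLD sample)
    p_student = sigmoid(xp @ w)
    # SGD step on the cross-entropy between teacher and student predictions.
    w = w + student_lr * xp.T @ (p_teacher - p_student) / len(xp)

# The single student should now approximate the Monte-Carlo PPD.
Xte = rng.normal(size=(50, 2))
ppd_mc = np.mean([sigmoid(Xte @ s) for s in samples], axis=0)
ppd_student = sigmoid(Xte @ w)
print(np.mean(np.abs(ppd_mc - ppd_student)))
```

In the full method the student is a deep network trained with the loss above; the point of the sketch is only the online interleaving of SGLD sampling with the distillation updates, so that the ensemble of posterior samples never needs to be stored explicitly.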
\n\nWe sketch the full algorithm for optimizing $w_k$ in the Supplementary Material. We use this algorithm at each client to learn the student model $\\mathcal{S}_k$, which represents a compact approximation of the client $k$'s PPD in the form of a single deep neural network (as shown in the client block in Fig.~\\ref{fig:fedppd_diagram}), which can now be communicated to the server just like client models are communicated in standard federated learning algorithms. Note that, as shown in Fig.~\\ref{fig:fedppd_diagram}, in our federated setting, in addition to the weights $w_k$ of its PPD approximation (the student model), each client $k$ also sends the posterior sample $\\theta_k^{MAP}$ (the sample with the largest posterior probability) to the server.\n\\subsection{Aggregation of Client Models}\n\\label{sec:aggr}\nAs described in Sec.~\\ref{sec:fedbdk-1}, the server receives two models from client $k$: the (approximate) MAP sample $\\theta_k^{MAP}$ (the teacher) as well as the (approximate) PPD $w_k$ (the student). We denote the teacher models (approximate MAP samples) from the $K$ clients as $\\{\\theta_1^{MAP},\\ldots,\\theta_K^{MAP}\\}$ and the respective student models (approximate PPDs) as $\\{w_1,\\ldots,w_K\\}$. These models need to be aggregated and then sent back to each client for the next round. We denote the server-aggregated quantities for the teacher and student models as $\\theta_g$ and $w_g$ (we use $g$ to refer to ``global'').\n\nIn this work, we consider and experiment with three aggregation schemes on the server.\n\n\\textbf{Simple Aggregation of Client Models:} Our first aggregation scheme (shown in Algorithm~\\ref{algo-agg}) computes dataset-size-weighted averages of all the teacher models and all the student models received at the server. 
Denoting the number of training examples at client $k$ as $n_k$ and $N = \\sum_{k=1}^K n_k$, we compute $\\theta_g = \\frac{1}{N}\\sum_{k=1}^K n_k\\theta_k^{MAP}$ and $w_g = \\frac{1}{N}\\sum_{k=1}^K n_k w_k$, similar to how the FedAvg algorithm~\\citep{mcmahan2017communication} aggregates client models on the server.\n\n\\textbf{Client-entropy-based Aggregation of Client Models:} Our second aggregation scheme (shown in Algorithm~\\ref{algo-ent}) uses an estimate of each client model's uncertainty to perform an importance-weighted averaging of the student models from all the clients. For each client $k$, we apply its student model $w_k$ on an unlabeled dataset available at the server and compute the average entropy of predictions on this entire dataset. Denoting the average predictive entropy of client $k$ as $e_k$, we calculate an importance weight for client $k$ as $I_k = n_k\/e_k$. Essentially, a client with larger predictive entropy will receive a smaller importance weight. Using these importance weights, the student models (PPD weights) are aggregated as $w_g = \\frac{1}{\\sum_{k=1}^K I_k}\\sum_{k=1}^K I_k w_k$. For the teacher models, however, we still use the simple dataset-size-weighted average $\\theta_g = \\frac{1}{N}\\sum_{k=1}^K n_k\\theta_k^{MAP}$.\n\n\\textbf{Distillation-based Aggregation of Client Models:} Our third aggregation scheme goes beyond computing (weighted) averages of models received only from the clients. The motivation behind this approach is that the client models (both teachers as well as students) received at the server may not be diverse enough to capture the diversity and heterogeneity of the clients~\\citep{chen2020fedbe}. To address this issue, this approach (shown in Algorithm~\\ref{algo-distill}) first fits two probability distributions, one over the $K$ teacher models and the other over the $K$ student models received from the clients. 
It then uses these distributions to generate $M$ \\emph{additional} client-\\emph{like} teacher models and student models. Using the actual teacher models (resp. student models) and the additionally \\emph{generated} teacher models (resp. student models), we perform knowledge distillation on the server to compute the global teacher model $\\theta_g$ and the global student model $w_g$. This server-side distillation procedure requires an \\emph{unlabeled} dataset $\\mathcal{U}$ on the server. Applying the actual and generated teacher models (resp. student models) on the unlabeled dataset $\\mathcal{U}$ gives us pseudo-labeled data $\\mathcal{T}$ where each pseudo-label is defined as the averaged prediction (softmax probability vector) obtained by applying the actual and generated teacher models (resp. student models) to an unlabeled input. For the distillation step, we finally run the Stochastic Weighted Averaging (SWA) algorithm~\\citep{izmailov2018averaging} using the pseudo-labeled data $\\mathcal{T}$ and the simple aggregation of the client models as initialization. Both $\\theta_g$ and $w_g$ can be obtained by following this procedure in an identical manner. Recently, this idea was also used in Federated Bayesian Ensemble (FedBE)~\\citep{chen2020fedbe}. However, FedBE is \\emph{not} Bayesian in the sense that the clients still perform point estimation and it is only the distillation step where a distribution is fit over the client models to generate a more diverse ensemble of models which are distilled using the SWA algorithm to get the global model. \n\n\\begin{comment}\nOur method proposes to use and experiments with three forms of aggregation based on availability of unlabelled data at the server. The aggregation process for version 1 is based on Federated Averaging and is highlighted in Algorithm \\ref{alg:alg2}.\n\nThe aggregation process for version 2 is based on Federated Bayesian Ensemble and is highlighted in Algorithm \\ref{alg:alg3}. 
It creates a probability distribution over the given samples by treating the individual samples as points from the distribution. Then, using a few more points from the distribution, it approximates an ensemble from the given model.\n\nThe third approach utilizes the perspective that the student model can be a good estimator for entropy since it outputs Posterior Predictive Distribution. Hence, we can use entropy estimates of some sample data along with the original data size (used in Federated Averaging) for aggregation at server. The procedure for the same is highlighted in \\ref{alg:alg4}.\n\n\\end{comment}\n\n\\begin{comment}\nWe aim to learn a global model in federated setting with focus on the underlying uncertainty and propose to learn Predictive Posterior Distribution (PPD) on clients. We sketch our approach in Algorithm~\\ref{algo} and illustrate it in Figure~\\ref{fig:fedppd_diagram}.\n\nIn federated learning, each client holds a local dataset which cannot be shared with other clients\/server. This gets further challenging if data is distributed in non-iid fashion across clients, leading to a possibility of clients models differing substantially. Thus, instead of a point-estimate on clients local dataset, we propose to consider clients' predictions averaged over its posterior distribution. Unfortunately, computing posterior distribution is intractable and computationally inefficient given the large space of model parameters. So, we follow~\\citep{korattikara2015bayesian} to approximate the true posterior by drawing samples from it and distilling them into a single model representing PPD. Basically, each client updates its local model (a.k.a teacher model) for $k$ steps using SGLD optimizer and distills the knowledge of the updated model into a second model (a.k.a student model); $k$ being the hyperparameter. This local training continues for fixed number of update steps or until the local model has converged. 
Note that any sampling method can be used to draw samples from the posterior and is not limited to SGLD used in~\\citep{korattikara2015bayesian}.\n\n\nThe method proposed in the paper aims to perform Bayesian inference in a federated learning setting. This is done by constructing a model at local clients and global server that outputs Posterior Predictive Distribution instead of predictions derived from a single MAP\/MLE estimate.\n\nHowever, the posterior and predictive posterior distribution for models like neural networks are intractable. So, through many works, there has been an attempt to form approximation over the posterior to derive Posterior Predictive Distribution. An interesting approach to this is to model the Posterior Predictive Distribution using a distillation based training. The method, Bayesian Dark Knowledge, does not aim to approximate the posterior but to output Posterior Predictive Distribution for a given input. The objective function of this approach is to train a student model that approximates the Posterior Predictive Distribution of the teacher network. It can be expressed as below:\n\\begin{gather*}\n \\hat{L}(w | x') = - \\frac{1}{| \\Theta |} \\sum_{\\theta^S \\in \\Theta} \\mathbb{E}_p(y | x', \\theta^S) \\log S(y | x, w')\n\\end{gather*}\nThis loss function is on a single data point x', but to integrate it over the given domain, a dataset $D'$ is taken. The teacher model continuously generates samples from posterior $p(\\Theta | D)$ by training on the original dataset $D$ which are distilled into the student model $S$ using $D'$ making it behave as probabilistic weighted aggregate of output from samples frandom posterior. The updates for teacher and student model are highlighted in the Bayesian Dark Knowledge (BDK) algorithm demonstrated in \\{cite paper\\}. 
The Bayesian Dark Knowledge work used SGLD to generate the posterior samples.\n\nIn the proposed approach, we maintain two global models that learn the posterior and the posterior predictive distribution using updates from the local clients. At the beginning of each round, the local clients initialize their teacher and student networks from the weights of the central server's teacher and student models. They update their individual teacher and student models following the Bayesian Dark Knowledge methodology, using the data generated\/stored locally at each client. At the end of the round, they communicate the teacher and student models to the central server, which aggregates the weights of these models; the aggregated weights are then used by the clients in the next round. This framework is outlined in Algorithm \\ref{alg:alg1}.\n\nWe propose and experiment with three forms of aggregation, depending on the availability of unlabeled data at the server. The aggregation process for version 1 is based on Federated Averaging and is highlighted in Algorithm \\ref{alg:alg2}.\n\nThe aggregation process for version 2 is based on Federated Bayesian Ensemble and is highlighted in Algorithm \\ref{alg:alg3}. It creates a probability distribution over the given samples by treating the individual samples as points from the distribution. Then, using a few more points from the distribution, it approximates an ensemble from the given model.\n\nThe third approach utilizes the perspective that the student model can be a good estimator of entropy, since it outputs the Posterior Predictive Distribution. Hence, we can use entropy estimates of some sample data, along with the original data sizes (as used in Federated Averaging), for aggregation at the server. 
The procedure for the same is highlighted in \\ref{alg:alg4}.\n\n\\end{comment}\n\n\\begin{comment}\nWe have performed experiments using the above proposed approaches and a few baselines on classification task for a few datasets. A few additional experiments have also been performed to measure the performance of the above approaches on federated learning related tasks. The description of experiments, experimental settings and results for all the experiments are shown in the Experimental Results section.\n\\end{comment}\n\n\\begin{minipage}[t]{0.5\\textwidth}\n\\begin{algorithm}[H]\n\\caption{FedPPD}\n\\label{algo}\n\\begin{algorithmic}[1]\n\\Require {Number of communication rounds $T$, \\newline\nTotal clients $K$, \\newline Unlabeled dataset $\\mathcal{U} = \\{x_i\\}_{i=1}^P$ \\newline Server teacher model weights $\\theta_g$, \\newline Server student model weights $w_g$, \\newline Client teacher model weights $\\{\\theta_i\\}_{i=1}^{K}$, \\newline Client student model weights $\\{w_i\\}_{i=1}^K$ \\newline Number of training samples at client $\\{n_i\\}_{i=1}^K$ \\newline}\n\n\\For{ each round $t = 0, \\ldots, T-1$}\n \\State Server broadcasts $\\theta_g^{(t)}$ and $w_g^{(t)}$ \\newline \n \\For{ each client $i \\in \\{1, \\dots, K\\}$}\n \\State $\\theta_i = \\theta_g^{(t)}$\n \\State $w_i = w_g^{(t)}$\n \\State Update $\\theta_i$ and $w_i$ locally as per \\citep{korattikara2015bayesian}\n \\EndFor\n \\State Communicate $\\{\\theta_i^{MAP}\\}_{i=1}^K$ and $\\{w_i\\}_{i=1}^K$ to server \\newline\n \\State $\\theta_g^{(t+1)}$, $w_g^{(t+1)}$ = \\newline Server\\_Update($\\{\\theta_i^{MAP}\\}_{i=1}^K, \\{w_i\\}_{i=1}^K, \\{n_i\\}_{i=1}^K$) \\newline\n\\EndFor \\newline\n\\State \\Return $w_g$\n\\end{algorithmic}\n\\end{algorithm}\n\n\\begin{algorithm}[H]\n\\caption{Server\\_Update(Average)}\n\\label{algo-agg}\n\\begin{algorithmic}[1]\n\\Require $\\{\\theta_i^{MAP}\\}_{i=1}^{K}$, $\\{w_i\\}_{i=1}^K$, $\\{n_i\\}_{i=1}^K$ \\newline\n\\State $N = \\sum_{i=1}^K 
n_i$\n\\State \\Return $\\frac{1}{N}\\sum_{i=1}^K n_i \\theta_i^{MAP}, \\frac{1}{N}\\sum_{i=1}^K n_i w_i$ \\newline\n\\end{algorithmic}\n\\end{algorithm}\n\n\\end{minipage}\n\\hfill\n\\begin{minipage}[t]{0.45\\textwidth}\n\n\n\\begin{algorithm}[H]\n\\caption{Server\\_Update(Entropy)}\n\\label{algo-ent}\n\\begin{algorithmic}[1]\n\\Require $\\mathcal{U}, \\{\\theta_i^{MAP}\\}_{i=1}^{K}, \\{w_i\\}_{i=1}^K, \\{n_i\\}_{i=1}^K$ \\newline\n\\State $\\theta_g = \\frac{1}{\\sum_{i=1}^K n_i}\\sum_{i=1}^K n_i \\theta_i^{MAP}$ \\newline\n\\State $I = [ ]$ \\Comment{Clients' importance weights}\n\\For {client $i = 1, \\dots, K$}\n \\State $I[i] = n_i\/Entropy(w_i, \\mathcal{U})$\n\\EndFor \\newline\n\\State $w_g = \\frac{1}{\\sum_{i=1}^K I[i]}\\sum_{i=1}^K I[i] w_i$ \\newline\n\\State \\Return $\\theta_g, w_g$\n\\end{algorithmic}\n\\end{algorithm}\n\n\\begin{algorithm}[H]\n\\caption{Server\\_Update(Distill)}\n\\label{algo-distill}\n\\begin{algorithmic}[1]\n\\Require $\\mathcal{U}, \\{\\theta_i^{MAP}\\}_{i=1}^{K}, \\{w_i\\}_{i=1}^K, \\{n_i\\}_{i=1}^K$ \\newline\n\\State $\\overline{\\theta}, \\overline{w} = $ Server\\_Update(Average)\n\\newline\n\n\\State Construct global teacher model distribution $p(\\theta | D)$ from $\\{\\theta_i^{MAP}\\}_{i=1}^K$\n\\State Sample $M$ additional teachers and form teacher ensemble\n\\newline $E_T=\\{\\theta_m \\sim p(\\theta | \\mathcal{D})\\}_{m=1}^{M} \\cup \\{\\overline{\\theta}\\} \\cup \\{\\theta_i\\}_{i=1}^{K}$\n\\State Annotate $\\mathcal{U}$ using $E_T$ to generate pseudo-labeled dataset $\\mathcal{T}$ \n\\State Distill $E_T$ knowledge to $\\overline{\\theta}$ using SWA \n\\newline\n$\\theta_g = SWA(\\overline{\\theta}, E_T, \\mathcal{T})$ \n\\newline\n\n\\State Similarly follow steps 2-5 with $\\{w_i\\}_{i=1}^K$ to get $w_g$\n\\newline\n\\State \\Return $\\theta_g, w_g$\n\\end{algorithmic}\n\\end{algorithm}\n\n\\end{minipage}\n\n\n\nThe overall sketch of our Bayesian federated learning procedure, which we call FedPPD (Federated 
Learning via Posterior Predictive Distributions), is shown in Algorithm~\\ref{algo}. The three aggregation schemes for server-side updates are shown in Algorithms~\\ref{algo-agg}, \\ref{algo-ent}, and \\ref{algo-distill}. Note that, among the three aggregation schemes, Algorithm~\\ref{algo-agg} does not require any unlabeled data at the server, whereas Algorithms~\\ref{algo-ent} and \\ref{algo-distill} assume that the server has access to unlabeled data. Also, owing to its high computation capacity, the server can compute $\\theta_g$ and $w_g$ in parallel under all the aggregation schemes, incurring no additional delay in the communication rounds.\n\n\\section{Related Work}\n\nFederated learning has received considerable research interest recently. The area is vast and we refer the reader to excellent surveys~\\citep{li2020federated,kairouz2021advances} on the topic for a more detailed overview. In this section, we discuss the works that are most relevant to ours.\n\nWhile standard federated learning approaches assume that each client performs point estimation of its model weights by optimizing a loss function over its own data, recent work has considered posing federated learning as a posterior inference problem where a global posterior distribution is inferred by aggregating local posteriors computed at each client. FedPA~\\citep{al2020federated} is one such recent approach, which performs approximate inference for the posterior distribution of each client's weights. However, it assumes a restrictive (Gaussian) form for the posterior. Moreover, the method needs to estimate the covariance matrix of the Gaussian posterior, which is difficult in general, so approximations are needed. Furthermore, although FedPA estimates the (approximate) posterior on each client, due to efficiency\/communication concerns it only computes a point estimate (the mean) of the global posterior at the server. 
Thus, even though the approach is motivated by a Bayesian setting, in the end it does not provide a posterior distribution or a PPD for the global model.\n\nRecently, \\citep{linsner2021approaches} presented methods for uncertainty quantification in federated learning using a variety of posterior approximation methods for deep neural networks, such as Monte Carlo dropout~\\citep{gal2016dropout}, stochastic weight averaging Gaussian (SWAG)~\\citep{maddox2019simple}, and deep ensembles~\\citep{lakshminarayanan2017simple}. These approaches, however, also suffer from a poor-quality approximation of the posterior at each client. \\citep{lee2020bayesian} also propose a Bayesian approach for federated learning. However, their approach also makes restrictive assumptions, such as the distribution of the gradients at each of the clients being jointly Gaussian.\n\nInstead of a simple aggregation of client models at the server, FedBE~\\citep{chen2020fedbe} uses the client models to construct a distribution at the server and then distills this distribution into a single model. This model is then sent by the server to each client for the next round. Though maintaining a distribution over client models and distilling it into a single model is more robust than a simple aggregation like federated averaging, FedBE only performs point estimation, ignoring any uncertainty in the client models. Another probabilistic approach to federated learning \\cite{thorgeirsson2020probabilistic} fits a Gaussian distribution using the client models, and sends the mean of this Gaussian to each client for the next round of client model training. This approach, too, does not estimate a posterior at each client, and thus ignores the uncertainty in client models. 
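To make the distribution-fitting idea discussed above concrete, here is a minimal numpy sketch (not the authors' code; all names and the diagonal-Gaussian choice are illustrative assumptions) that fits a Gaussian to flattened client weight vectors and samples extra ensemble members, in the spirit of FedBE-style aggregation:

```python
import numpy as np

def fit_and_sample(client_weights, n_samples, rng):
    """Fit a diagonal Gaussian to flattened client weight vectors and
    draw extra ensemble members from it (illustrative FedBE-style sketch)."""
    W = np.asarray(client_weights, dtype=float)   # shape (K, d): one row per client model
    mu = W.mean(axis=0)                           # mean model (plain averaging with equal weights)
    sigma = W.std(axis=0) + 1e-8                  # per-coordinate std, jittered for stability
    samples = rng.normal(mu, sigma, size=(n_samples, W.shape[1]))
    return mu, samples

rng = np.random.default_rng(0)
client_weights = np.array([[1.0, 2.0], [3.0, 4.0], [2.0, 3.0]])
mu, extra_models = fit_and_sample(client_weights, n_samples=5, rng=rng)
```

The sampled models, together with the client models themselves, could then be distilled into a single global model, as FedBE does.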
\n\nIn the context of Bayesian learning, recent work has also explored federated versions of Markov Chain Monte Carlo sampling algorithms, such as stochastic gradient Langevin dynamics sampling~\\citep{lee2020bayesian,el2021federated}. While interesting in their own right in terms of performing MCMC sampling in federated settings, these methods are not designed for real-world applications of federated learning, where fast prediction and compact model sizes are essential.\n\nAmong other probabilistic approaches, recent work has explored the use of latent variables in federated learning. In \\cite{louizos2021expectation}, a hierarchical prior is used on the client models' weights, where the prior's mean is set to the server's global model, and additional latent variables can be used to impose other structure, such as sparsity of the client model weights. However, these approaches do not model the uncertainty in the client models.\n\nSome of the recent work on federated learning using knowledge distillation is also relevant. Note that our work leverages the idea of teacher-student distillation, both at the clients (when learning a representation of the PPD using a single deep neural network) and in our third aggregation strategy, where server-side distillation is used for learning the global model. The idea of distillation has been used in other federated learning works as well, for example when the client models have different sizes\/architectures and (weighted) averaging does not make sense~\\citep{zhu2021data}.\n\n\\section{Experiments}\nIn this section, we compare our Bayesian federated learning approach with various relevant baselines on several benchmark datasets. We report results on the following tasks: (1) classification in the federated setting, (2) active learning in the federated setting, and (3) OOD detection on each client. 
In this section, we refer to our approach with simple averaging on the server side as FedPPD, the variant with entropy-weighted averaging on the server side as FedPPD+Entropy, and the variant with distillation-based aggregation on the server side as FedPPD+Distill. \n\n\\subsection{Experimental Setup}\n\\subsubsection{Baselines} We compare our methods with the following baselines:\n\n(1) \\textbf{FedAvg}~\\citep{mcmahan2017communication} is the standard federated learning algorithm, in which the local models of the participating clients are aggregated at the server into a global model, which is then sent back to all the clients for initialization in the next round.\n\n(2) \\textbf{FedBE}~\\citep{chen2020fedbe} is another state-of-the-art baseline, which provides a more robust aggregation scheme: instead of only averaging the client models at the server, a probability distribution is fit using the client models, several additional models are sampled from this distribution, and then the client models as well as the sampled models are distilled into a single model to yield the global model at the server, which is sent to all the clients for initialization in the next round. Note, however, that the clients in FedBE only perform point estimation of their weights, unlike our approach, which estimates the posterior distribution and the PPD of each client. \n\n(3) \\textbf{Federated SWAG}~\\citep{linsner2021approaches} is a Bayesian federated learning algorithm, essentially a federated extension of SWAG~\\cite{maddox2019simple}, an efficient Bayesian inference algorithm for deep neural networks. However, Federated SWAG relies on a simplification: it executes standard federated averaging in all but the last round, and in the last round the SWAG algorithm is invoked at each client to yield a posterior. 
Also note that Federated SWAG requires Monte Carlo sampling at test time (thus relying on slow, ensemble-based prediction), unlike our method, which requires only a single neural network to make predictions.\n\nWe also considered a comparison with \\textbf{FedPA}~\\citep{al2020federated}, which estimates a posterior (assumed to be Gaussian) over the client weights. However, in our experiments (using the author-provided code and suggested experimental settings) on the benchmark dataset, FedPA performed comparably to or worse than FedAvg. We therefore omit those results from the main text and report them in the Supplementary Material. \n \n\\begin{comment}\n\n\\end{comment}\n\n\\subsubsection{Datasets}\nWe evaluate and compare our approach with the baseline methods on four datasets: MNIST~\\citep{lecun-mnisthandwrittendigit-2010}, FEMNIST~\\citep{cohen2017emnist}, and CIFAR-10\/100~\\citep{krizhevsky2009learning}. MNIST comprises images of handwritten digits categorized into 10 classes. It has a total of 60,000 images for training and 10,000 images for testing. FEMNIST consists of images of handwritten characters (digits and lowercase and uppercase letters, resulting in a total of 62 classes) written by multiple users. It has a total of 80,523 images written by 3,550 users. CIFAR-10 consists of $32\\times32$ RGB images categorized into 10 different classes. It has a total of 50,000 images for training and 10,000 images for testing. CIFAR-100 is similar to CIFAR-10 but has 100 distinct classes.\n\n\\subsubsection{Model Architecture and Configurations}\n\\label{sec:config}\nIn all our experiments, the student model has a larger capacity than the teacher model, as it models the PPD by distilling multiple models drawn from the posterior distribution. We use a customized CNN architecture for both the teacher and student models on the MNIST, FEMNIST, and CIFAR-10 datasets, with the student model being deeper and\/or wider than its corresponding teacher model. 
For CIFAR-100, ResNet-18 and ResNet-34 are used as the teacher and student models, respectively.\n\nIn all our experiments, we consider $K=10$ clients with data heterogeneity. Each client holds a small non-i.i.d. subset of the training data: approximately 2000 samples for FEMNIST, CIFAR-10, and CIFAR-100, and around 500 samples for MNIST. We use the Leaf~\\citep{caldas2018leaf} benchmark to distribute the FEMNIST data across clients based on the writer. We also exclude digits and consider only letters, to increase the class imbalance. However, we let clients have data from multiple writers, ensuring that no two clients are assigned the same writer. This results in similar class distributions across clients, but with differently styled handwritten images. This setting differs from the data distribution on the other datasets, where each client strictly maintains a small subset of all the classes. In the case of MNIST and CIFAR-10, we ensure that there are at most 2 major classes per client, and up to 20 distinct major classes in the case of CIFAR-100. For a fair comparison, we run our method and all the baselines for 200 rounds on all the datasets (except MNIST, where we run them for 100 rounds) and train the local client models for 10 epochs in each round. Also, we assume full client participation, i.e., all the clients participate in each round. However, we tune the learning rate, momentum, and weight decay for each method independently. For FedBE and FedPPD, we run an additional 20 and 50 epochs at the server for distillation on the CIFAR\/MNIST and FEMNIST datasets, respectively. 
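As an illustration of the kind of label-skewed split described above (a few major classes per client), the following numpy sketch partitions a labeled dataset; the sampling scheme, the 80\% major-class fraction, and all names are illustrative assumptions, not the exact procedure used in the experiments.

```python
import numpy as np

def partition_noniid(labels, n_clients, majors_per_client,
                     samples_per_client, rng, major_frac=0.8):
    """Give each client mostly samples from a few 'major' classes.
    Returns one index array per client (indices may overlap across
    clients in this simplified sketch)."""
    labels = np.asarray(labels)
    classes = np.unique(labels)
    parts = []
    for _ in range(n_clients):
        majors = rng.choice(classes, size=majors_per_client, replace=False)
        n_major = int(major_frac * samples_per_client)
        major_pool = np.flatnonzero(np.isin(labels, majors))
        other_pool = np.flatnonzero(~np.isin(labels, majors))
        idx = np.concatenate([
            rng.choice(major_pool, size=n_major, replace=False),
            rng.choice(other_pool, size=samples_per_client - n_major, replace=False),
        ])
        parts.append(idx)
    return parts

rng = np.random.default_rng(0)
labels = np.repeat(np.arange(10), 100)   # 10 balanced classes, 100 samples each
parts = partition_noniid(labels, n_clients=10, majors_per_client=2,
                         samples_per_client=50, rng=rng)
```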
\n\n\n\n\\begin{wrapfigure}{R}{0.4\\textwidth}\n\\vspace{-5em}\n \\centering\n \\includegraphics[scale=0.4]{figures\/CIFAR10_Acc_Plot.pdf}\n \\caption{Convergence of all the methods on the CIFAR-10 dataset}\n \\label{fig:convergence_plot}\n \\vspace{-1em}\n \n \n \n \n \n \n \n \n\\end{wrapfigure}\n\n\\subsection{Tasks}\n\\textbf{Classification} \nWe evaluate FedPPD (all three variants) and the baselines on several classification datasets and report the accuracy on the respective test sets. The results are shown in Table~\\ref{tab:classification_acc}. We also show the convergence of all the methods on CIFAR-10 in Figure~\\ref{fig:convergence_plot} (similar plots for the other datasets are in the Supplementary Material). All three variants of FedPPD outperform the other baselines on all the datasets. Compared to the best-performing baseline, our approach yields an improvement of $4.44\\%$ and $7.08\\%$ in accuracy on CIFAR-10 and CIFAR-100, respectively. On the MNIST and FEMNIST datasets too, we observe noticeable improvements. The improvements across the board indicate that FedPPD and its variants are able to leverage model uncertainty to yield improved predictions, especially when the amount of training data per client is small, as is the case in our experimental settings (see Sec.~\\ref{sec:config}). We also observe that when there is significant heterogeneity in the data distribution across the different clients (on CIFAR-10 and CIFAR-100), the performance gains offered by FedPPD and its variants over the baselines are much higher. 
On the other datasets (MNIST and FEMNIST), the data distributions are roughly similar across the different clients, and even though the accuracies are higher, the performance gains are not as significant, but reasonable nevertheless.\n\n\n\n\n\\begin{table}[!htbp]\n\\vspace{-2em}\n \\setlength\\tabcolsep{1pt}\n \n \\begin{minipage}{0.5\\linewidth}\n \\begin{tabular}{ccccc}\n \\toprule\n Model & MNIST & FEMNIST & CIFAR-10 & CIFAR-100 \\\\\n \\midrule\n FedAvg & 97.74 & 87.40 & 57.20 & 47.02 \\\\\n FedAvg+SWAG & 97.75 & 87.45 & 57.34 & 47.07 \\\\\n FedBE & 97.82 & 88.12 & 60.18 & 47.52 \\\\\n FedPPD & 97.85 & \\textbf{88.81} & 61.86 & 53.00 \\\\\n FedPPD+Entropy & 97.93 & 88.65 & 62.19 & 52.72 \\\\ \n FedPPD+Distill & \\textbf{98.08} & 88.80 & \\textbf{64.62} & \\textbf{54.60} \\\\\n \n \\bottomrule\n \\end{tabular}\n \\caption{Federated classification test accuracies on benchmark datasets}\n \\label{tab:classification_acc}\n \\end{minipage}\n \\hfill\n \\begin{minipage}{0.38\\textwidth}\n \\begin{figure}[H]\n \\small\n \\includegraphics[height=4cm,width=5cm]{figures\/AL_Acc_Plot.pdf}\n \\caption{Federated Active Learning on the CIFAR-10 dataset. Note: FedAvg+SWAG performed almost identically to FedAvg on this task as well, so we omit it from the plot.}\n \\label{fig:al_results}\n \\end{figure}\n \\end{minipage}\n \\vspace{-0.5cm}\n\\end{table}\n\n\\textbf{Federated Active Learning} We further show the usefulness of our Bayesian approach by applying it to the problem of federated active learning. In active learning, the goal of the learner is to iteratively request the labels of the most informative input instances, add these labeled instances to the training pool, retrain the model, and repeat the process until the labeling budget is exhausted. 
Following~\\citep{ahn2022federated}, we extend our method and the baselines to active learning in the federated setting, using the entropy of the predictive distribution of an input $x$, $I(x) = -\\sum_{i=1}^{k} p(y=y_i|x) \\log p(y=y_i|x)$, as the acquisition function. In the federated active learning setting (a detailed sketch of the algorithm is provided in the Supplementary Material), each client privately maintains a small amount of labeled data and a large pool of unlabeled examples. In each round of active learning, the clients participate in federated learning with their current labeled pools of data until the global model has converged. Each client then uses the global model to identify a fixed number (the budget) of the most informative inputs among its pool of unlabeled inputs, namely those with the highest predictive entropies $I(x)$; these are annotated (locally, maintaining data privacy) and added to the pool of labeled examples. The next round of active learning then begins, in which the clients again participate in federated learning and use the global model to expand their labeled pools. This process continues until either the unlabeled dataset has been exhausted or the desired accuracy has been achieved. For a fair comparison, we run federated active learning on the CIFAR-10 dataset with the same parameters for all the approaches. We start active learning with 400 labeled and 3200 unlabeled samples at each client and use a budget of 400 samples in every round of active learning. For federated learning, we use the same hyperparameters as in the classification experiments. We stop federated active learning once all the clients have exhausted their unlabeled datasets, and show the results in Figure~\\ref{fig:al_results}. 
FedPPD and its variants attain the best accuracies among all the methods compared, which shows that our Bayesian approach provides more robust estimates of the model and predictive uncertainty than the other baselines, and thus outperforms them on the problem of federated active learning.\n\n\\textbf{Out-of-distribution (OOD) detection} We also evaluate FedPPD and its variants, and the other baselines, in terms of their ability to distinguish between out-of-distribution (OOD) data and data used during the training phase (in-distribution data). For this, given any sample $x$ to be classified among $k$ distinct classes and model weights $\\theta$ (or the PPD for our approach), we compute the Shannon entropy of the model's predictive distribution for the input $x$ and report the AUROC (Area Under the ROC Curve) metric. We use KMNIST as OOD data for models trained on FEMNIST, and SVHN for the CIFAR-10\/CIFAR-100 models. Note that, to avoid class imbalance, we sample an equal amount of data from both distributions (out and in) and repeat this 5 times. We report the results in Table~\\ref{tab:auroc_score}. FedPPD and its variants consistently result in better AUROC scores on all the datasets, validating their robustness and accurate estimates of model uncertainty. In addition to OOD detection, we also apply all the methods to the task of distinguishing correct from incorrect predictions based on the predictive entropies. For this task too, FedPPD and its variants outperform the other baselines. 
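The entropy-based OOD score described above can be sketched as follows. This is a toy illustration with hand-made class probabilities, not the paper's evaluation code; the AUROC is computed here via the rank-sum (Mann-Whitney) statistic.

```python
import numpy as np

def predictive_entropy(probs):
    """Shannon entropy of each row of an (n, k) array of class probabilities."""
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

def auroc(pos_scores, neg_scores):
    """AUROC as the probability that a positive (OOD) score exceeds a
    negative (in-distribution) one; ties are ignored for brevity."""
    s = np.concatenate([pos_scores, neg_scores])
    ranks = s.argsort().argsort() + 1.0          # 1-based ranks of each score
    n_pos, n_neg = len(pos_scores), len(neg_scores)
    return (ranks[:n_pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

in_probs = np.array([[0.90, 0.05, 0.05],   # confident in-distribution predictions
                     [0.80, 0.10, 0.10]])
ood_probs = np.array([[0.40, 0.30, 0.30],  # diffuse OOD predictions
                      [0.34, 0.33, 0.33]])
score = auroc(predictive_entropy(ood_probs), predictive_entropy(in_probs))  # 1.0 here
```

A well-calibrated model assigns higher predictive entropy to OOD inputs, so the AUROC approaches 1 when the two score populations separate cleanly, as in this toy example.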
\n\n\\begin{table}[!htbp]\n\\setlength\\tabcolsep{4pt}\n \\centering\n \\scriptsize\n \\makebox[\\textwidth][c]{\n \\begin{tabular}{|c|c|c|c|c|c|c|}\n \\toprule & \\multicolumn{3}{c|}{Out of Domain Detection} & \\multicolumn{3}{c|}{Correct\/Incorrect Prediction}\\\\\n\\midrule\n Model & FEMNIST & CIFAR-10 & CIFAR-100 & FEMNIST & CIFAR-10 & CIFAR-100 \\\\\n \\hline\n FedAvg & $0.957 \\pm 0.003$ & $0.728 \\pm 0.013$ & $0.703 \\pm 0.011$ & $0.846 \\pm 0.011$ & $0.742 \\pm 0.011$ & $0.792 \\pm 0.003$ \\\\\n FedAvg+SWAG & $0.956 \\pm 0.003$ & $0.728 \\pm 0.013$ & $0.704 \\pm 0.011$ & $0.845 \\pm 0.009$ & $0.743 \\pm 0.010$ & $0.800 \\pm 0.004$\\\\\n FedBE & $0.959 \\pm 0.002$ & $0.728 \\pm 0.006$ & $0.669 \\pm 0.009$ & $\\mathbf{0.863 \\pm 0.005}$ & $0.753 \\pm 0.007$ & $0.789 \\pm 0.005$\\\\\n FedPPD & $\\mathbf{0.983 \\pm 0.003}$ & $0.701 \\pm 0.007$ & $0.698 \\pm 0.009$ & $0.862 \\pm 0.008$ & $0.755 \\pm 0.007$ & $0.814 \\pm 0.003$\\\\\n FedPPD+Entropy & $0.982 \\pm 0.002$ & $\\mathbf{0.768 \\pm 0.009}$ & $0.721 \\pm 0.014$ & $0.856 \\pm 0.006$ & $0.749 \\pm 0.007$ & $0.817 \\pm 0.004$\\\\ \n FedPPD+Distill & $0.975 \\pm 0.002$ & $0.765 \\pm 0.006$ & $\\mathbf{0.784 \\pm 0.008}$ & $0.853 \\pm 0.013$ & $\\mathbf{0.769 \\pm 0.006}$ & $\\mathbf{0.823 \\pm 0.002}$\\\\\n \n \\bottomrule\n \\end{tabular}\n }\n \\caption{AUROC scores for OOD detection and correct\/incorrect predictions}\n \\label{tab:auroc_score}\n\\end{table}\n\\vspace{-1cm}\n\n\\begin{comment}\n\\begin{table}[!htbp]\n \\tiny\n \\begin{tabular}{cccc}\n \\toprule\n Model & FEMNIST & CIFAR-10 & CIFAR-100 \\\\\n \\midrule\n FedAvg & $0.957 \\pm 0.003$ & $0.728 \\pm 0.013$ & $0.703 \\pm 0.011$ \\\\\n FedAvg+SWAG & $0.956 \\pm 0.003$ & $0.728 \\pm 0.013$ & $0.704 \\pm 0.011$\\\\\n FedBE & $0.966 \\pm 0.003$ & $0.728 \\pm 0.006$ & $0.669 \\pm 0.009$ \\\\\n FedPPD & $\\mathbf{0.983 \\pm 0.003}$ & $0.701 \\pm 0.007$ & $0.698 \\pm 0.009$ \\\\\n FedPPD+Entropy & $0.982 \\pm 0.002$ & $\\mathbf{0.768 \\pm 0.009}$ & 
$0.721 \\pm 0.014$ \\\\ \n FedPPD+Distill & $0.949 \\pm 0.003$ & $0.765 \\pm 0.006$ & $\\mathbf{0.784 \\pm 0.008}$ \\\\\n \n \\bottomrule\n \\end{tabular}\n \\caption{AUROC score for OOD data detection}\n \\label{tab:auroc_correct}\n\\end{table}\n\n\\begin{table}[t]\n \\centering\n \\begin{tabular}{ >{\\centering\\arraybackslash}p{4cm} >{\\centering\\arraybackslash}p{4cm} }\n \\toprule\n Model & CIFAR-10\\\\\n \\midrule\n FedAvg & 51.54\\\\\n FedBE & 54.29\\\\\n FedPPD & 58.87\\\\\n FedPPD+Distill & 57.44\\\\\n \n \\bottomrule\n \\end{tabular}\n \\caption[caption]{\\centering Classification accuracy on test \\hspace{\\linewidth} dataset\n for federated active learning experiment}\n \\label{tab:active_learning}\n\\end{table}\n\\end{comment}\n\n\\vspace{0.5em}\n\\section{Conclusion and Discussion}\n\\vspace{-0.5em}\nCapturing and leveraging model uncertainty in federated learning has several benefits, as we demonstrate in this work. To achieve this, we developed a Bayesian approach to federated learning by leveraging the idea of distilling the posterior predictive distribution into a single deep neural network. The Bayesian approach not only yields more accurate and robust predictions in federated learning in situations with limited training data at each client and heterogeneity across clients, but is also helpful for tasks such as OOD detection and active learning in the federated setting. Our work provides a general framework for Bayesian federated learning. In this work, we consider a specific scheme to distill the PPD at each client. However, other methods that can distill the posterior distribution into a single neural network~\\citep{wang2018adversarial,vadera2020generalized} are also worth leveraging for Bayesian federated learning. Another interesting direction for future work is to extend our approach to settings where different clients may have different model architectures. 
Finally, our approach first generates MCMC samples (using SGLD) and then uses these samples to obtain the PPD in the form of a single deep neural network. Recent work has shown that it is possible to distill an ensemble into a single model without explicitly generating samples from the distribution~\\citep{ratzlaff2019hypergan}. Using these ideas for Bayesian federated learning would also be interesting future work. \n\n\n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n+{"text":"\\section{Introduction}\n\nAt the frontier of computational statistics there is growing interest\nin parallel implementation of Monte Carlo algorithms using multi-processor\nand distributed architectures. However, the resampling step of sequential\nMonte Carlo (SMC) methods \\citep{gordon1993novel} (see \\citep{kunsch2013particle}\nfor a recent overview), which involves a degree of interaction between\nsimulated ``particles'', hinders their parallelization. So, whilst\nmulti-processor implementation offers some speed-up for SMC, the potential\nbenefits of distributed computing are not fully realized \\citep{lee2010utility}. 
\n\nPerforming resampling only occasionally, a technique originally suggested\nfor the somewhat different reason of variance reduction \\citep{liu1995blind},\nalleviates this problem to some extent, but the collective nature\nof the resampling operation remains the computational bottleneck.\nOn the other hand, crude attempts to entirely do away with the resampling\nstep may result in unstable or even non-convergent algorithms. With\nthese issues in mind we seek a better understanding of the relationship\nbetween the interaction structure of SMC algorithms and theoretical\nproperties of the approximations they deliver. Our overall aim is\nto address the following question:\n\n\\smallskip{}\n\n\n\\emph{To what extent can the degree of interaction between particles\nbe reduced, whilst ensuring provable stability of the algorithm?}\n\n\\smallskip{}\nOur strategy is to introduce and study an unusually general type of\nSMC algorithm featuring a parameterized resampling mechanism. This\nprovides a flexible framework in which we are ultimately able to attach\nmeaning to \\emph{degree of interaction} in terms of graph-theoretic\nquantities. To address the matter of \\emph{provable stability}, we\nseek conditions under which the algorithm yields time-uniformly convergent\napproximations of prediction filters, and approximations of marginal\nlikelihoods whose relative variance can be controlled at a linear-in-time\ncost. \n\nThe general algorithm we study is defined in terms of a family of\nMarkov transition matrices, $\\alpha$, and we refer to the algorithm\nitself as $\\alpha$SMC. We shall see that through particular choices\nof $\\alpha$ one obtains, as instances of $\\alpha$SMC, well known\nalgorithms including sequential importance sampling (SIS), the bootstrap\nparticle filter (BPF) and the adaptive resampling particle filter\n(ARPF) in which resampling is triggered by monitoring some functional\ncriterion, such as the Effective Sample Size (ESS) \\citep{liu1995blind}. 
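For reference, for unnormalized importance weights $w_i$ the ESS mentioned above is $(\sum_i w_i)^2 / \sum_i w_i^2$, which ranges from $1$ (a degenerate system) to $N$ (uniform weights). A minimal numpy sketch of the criterion monitored by adaptive resampling follows; the threshold value of $0.5$ is a common but illustrative choice, not one prescribed here.

```python
import numpy as np

def ess(weights):
    """Effective Sample Size: (sum w)^2 / sum w^2, between 1 and N."""
    w = np.asarray(weights, dtype=float)
    return w.sum() ** 2 / (w ** 2).sum()

def needs_resampling(weights, threshold=0.5):
    """Adaptive-resampling trigger: resample when ESS / N drops below
    a chosen threshold (0.5 here is illustrative)."""
    w = np.asarray(weights, dtype=float)
    return ess(w) < threshold * w.size

uniform = np.ones(100)                    # ESS = 100: no resampling needed
skewed = np.r_[1.0, np.full(99, 1e-6)]    # ESS close to 1: resample
```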
\n\nAlthough the ESS does not necessarily appear in the definition of\nthe general $\\alpha$SMC algorithm, we find that it does appear quite\nnaturally from the inverse quadratic variation of certain martingale\nsequences in its analysis. This allows us to make precise a sense\nin which algorithmic control of the ESS can guarantee stability of\nthe algorithm. Our results apply immediately to the ARPF, but our\nstudy has wider-reaching methodological consequences: in our framework\nit becomes clear that the standard adaptive resampling strategy is\njust one of many possible ways of algorithmically controlling the\nESS, and we can immediately suggest new, alternative algorithms which\nare provably stable, but designed to avoid the type of complete interaction\nwhich is inherent to the ARPF and which hinders its parallelization.\nThe structure of this paper and our main contributions are as follows. \n\nSection~\\ref{sec:aSMC} introduces the general algorithm, $\\alpha$SMC.\nWe explain how it accommodates several standard algorithms as particular\ncases and comment on some other existing SMC methods.\n\nSection~\\ref{sec:Martingale-approximations-and} presents Theorem\n\\ref{thm:convergence}, a general convergence result for $\\alpha$SMC.\nWe give conditions which ensure unbiased approximation of marginal\nlikelihoods and we elucidate connections between certain invariance\nproperties of the matrices $\\alpha$ and the negligibility of increments\nin a martingale error decomposition, thus formulating simple sufficient\nconditions for weak and strong laws of large numbers. We also discuss\nsome related existing results.\n\nSection~\\ref{sec:stability} presents our second main result, Theorem\n\\ref{thm:L_R_mix}. 
We show, subject to regularity conditions on the\nhidden Markov model (HMM) under consideration, that enforcement of\na strictly positive lower bound on a certain coefficient associated\nwith ESS of $\\alpha$SMC is sufficient to guarantee non-asymptotic,\ntime-uniform bounds on: 1) the exponentially normalized relative second\nmoment of error in approximation of marginal likelihoods, and 2) the\n$L_{p}$ norm of error in approximation of prediction filters. The\nformer implies a linear-in-time variance bound and the latter implies\ntime-uniform convergence. These results apply immediately to the ARPF.\n\nSection~\\ref{sec:Discussion} houses discussion and application of\nour results. We point out the pitfalls of some naive approaches to\nparallelization of SMC and discuss what can go wrong if the conditions\nof Theorem~\\ref{thm:convergence} are not met. Three new algorithms,\nwhich adapt the degree of interaction in order to control the ESS\nand which are therefore provably stable, are then introduced. We discuss\ncomputational complexity and through numerical experiments examine\nthe degree of interaction involved in these algorithms and the quality\nof the approximations they deliver compared to the ARPF.\n\n\n\\section{$\\alpha$SMC\\label{sec:aSMC}}\n\nA hidden Markov model (HMM) with measurable state space $\\left(\\mathsf{X},\\mathcal{X}\\right)$\nand observation space $\\left(\\mathsf{Y},\\mathcal{Y}\\right)$ is a\nprocess $\\left\\{ \\left(X_{n},Y_{n}\\right);n\\geq0\\right\\} $ where\n$\\left\\{ X_{n};n\\geq0\\right\\} $ is a Markov chain on $\\mathsf{X}$,\nand each observation $Y_{n}$, valued in $\\mathsf{Y}$, is conditionally\nindependent of the rest of the process given $X_{n}$. 
Let $\\mu_{0}$\nand $f$ be respectively a probability distribution and a Markov kernel\non $\\left(\\mathsf{X},\\mathcal{X}\\right)$, and let $g$ be a Markov\nkernel acting from $\\left(\\mathsf{X},\\mathcal{X}\\right)$ to $\\left(\\mathsf{Y},\\mathcal{Y}\\right)$,\nwith $g(x,\\cdot)$ admitting a density, denoted similarly by $g(x,y)$,\nwith respect to some dominating $\\sigma$-finite measure. The HMM\nspecified by $\\mu_{0}$, $f$ and $g$, is\n\\begin{eqnarray}\n & & X_{0}\\sim\\mu_{0}(\\cdot),\\quad\\left.X_{n}\\right|\\{X_{n-1}=x_{n-1}\\}\\sim f(x_{n-1},\\cdot),\\quad n\\geq1,\\label{eq:HMM}\\\\\n & & \\;\\quad\\quad\\quad\\quad\\hspace{1.1em}\\quad\\left.Y_{n}\\right|\\left\\{ X_{n}=x_{n}\\right\\} \\sim g(x_{n},\\cdot),\\quad\\quad\\quad\\quad n\\geq0.\\nonumber \n\\end{eqnarray}\n\n\nWe shall assume throughout that we are presented with a fixed observation\nsequence $\\left\\{ y_{n};n\\geq0\\right\\} $ and write \n\\[\ng_{n}(x):=g(x,y_{n}),\\quad n\\geq0.\n\\]\nThe following assumption imposes some mild regularity which ensures\nthat various objects appearing below are well defined. 
It shall be\nassumed to hold throughout without further comment.\n\\begin{assumption*}\n$\\mathbf{\\mathbf{(A1)}}$ For each $n\\geq0$, $\\sup_{x}g_{n}(x)<+\\infty$\nand $g_{n}(x)>0$ for all $x\\in\\mathsf{X}$.\n\\end{assumption*}\nWe take as a recursive definition of the \\emph{prediction filters},\nthe sequence of distributions $\\left\\{ \\pi_{n};n\\geq0\\right\\} $ given\nby \n\\begin{eqnarray}\n & & \\pi_{0}:=\\mu_{0},\\nonumber \\\\\n & & \\pi_{n}\\left(A\\right):=\\frac{\\int_{\\mathsf{X}}\\pi_{n-1}\\left(dx\\right)g_{n-1}(x)f(x,A)}{\\int_{\\mathsf{X}}\\pi_{n-1}\\left(dx\\right)g_{n-1}(x)},\\quad A\\in\\mathcal{X},\\quad n\\geq1,\\label{eq:filtering_recursion}\n\\end{eqnarray}\nand let $\\left\\{ Z_{n};n\\geq0\\right\\} $ be defined by\n\\begin{equation}\nZ_{0}:=1,\\quad\\quad Z_{n}:=Z_{n-1}\\int_{\\mathsf{X}}\\pi_{n-1}\\left(dx\\right)g_{n-1}\\left(x\\right),\\quad n\\geq1.\\label{eq:Z_recusion}\n\\end{equation}\nDue to the conditional independence structure of the HMM, $\\pi_{n}$\nis the conditional distribution of $X_{n}$ given $Y_{0:n-1}=y_{0:n-1}$;\nand $Z_{n}$ is the marginal likelihood of the first $n$ observations,\nevaluated at the point $y_{0:n-1}$. Our main computational objectives\nare to approximate $\\left\\{ \\pi_{n};n\\geq0\\right\\} $ and $\\left\\{ Z_{n};n\\geq0\\right\\} $. \n\n\n\\subsection{The general algorithm}\n\nWith population size $N\\geq1$, we write $[N]:=\\{1,\\ldots,N\\}$. \\emph{To\nsimplify presentation, whenever a summation sign appears without the\nsummation set made explicit, the summation set is taken to be $[N]$,\nfor example we write $\\Sigma_{i}$ to mean $\\Sigma_{i=1}^{N}$. }\n\nThe $\\alpha$SMC algorithm involves simulating a sequence $\\left\\{ \\zeta_{n};n\\geq0\\right\\} $\nwith each $\\zeta_{n}=\\left\\{ \\zeta_{n}^{1},\\ldots,\\zeta_{n}^{N}\\right\\} $\nvalued in $\\mathsf{X}^{N}$. 
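As a concrete check of the recursions (\ref{eq:filtering_recursion})--(\ref{eq:Z_recusion}), the following sketch (the function name and finite-state setting are illustrative assumptions of ours, not part of the formal development) computes $\pi_{n}$ and $Z_{n}$ exactly when $\mathsf{X}$ is finite, with $f$ a transition matrix and each $g_{n}$ a vector:

```python
import numpy as np

def prediction_filters(mu0, f, g_seq):
    """Exact recursion (eq:filtering_recursion)-(eq:Z_recusion) on a finite
    state space: mu0 is pi_0, f is a d x d transition matrix, g_seq[n] is
    the vector (g_n(x))_x. Returns [pi_0, ..., pi_n] and [Z_0, ..., Z_n]."""
    pis, Zs = [np.asarray(mu0, dtype=float)], [1.0]
    for g in g_seq:
        norm = float(pis[-1] @ g)             # int pi_{n-1}(dx) g_{n-1}(x)
        pis.append((pis[-1] * g) @ f / norm)  # Bayes update, then mutation
        Zs.append(Zs[-1] * norm)              # Z_n = Z_{n-1} pi_{n-1}(g_{n-1})
    return pis, Zs
```

Passing $g_{0},\ldots,g_{n-1}$ yields $\pi_{0},\ldots,\pi_{n}$ and $Z_{0},\ldots,Z_{n}$; each $\pi_{n}$ remains a probability vector, and $Z_{n}$ agrees with brute-force summation over state paths.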
Denoting $\\mathbb{X}:=\\left(\\mathsf{X}^{N}\\right)^{\\mathbb{N}}$,\n$\\mathcal{F}^{\\mathbb{X}}:=\\left(\\mathcal{X}^{\\otimes N}\\right)^{\\otimes\\mathbb{N}}$,\nwe shall view $\\left\\{ \\zeta_{n};n\\geq0\\right\\} $ as the canonical\ncoordinate process on the measurable space $\\left(\\mathbb{X},\\mathcal{F}^{\\mathbb{X}}\\right)$,\nand write $\\mathcal{F}_{n}$ for the $\\sigma$-algebra generated by\n$\\left\\{ \\zeta_{0},\\ldots,\\zeta_{n}\\right\\} $. By convention, we\nlet $\\mathcal{F}_{-1}:=\\{\\mathbb{X},\\emptyset\\}$ be the trivial $\\sigma$-algebra.\nThe sampling steps of the $\\alpha$SMC algorithm, described below,\namount to specifying a probability measure, say $\\mathbb{P}$, on\n$\\left(\\mathbb{X},\\mathcal{F}^{\\mathbb{X}}\\right)$. Expectation w.r.t.~$\\mathbb{P}$\nshall be denoted by $\\mathbb{E}$. \n\nLet $\\mathbb{A}_{N}$ be a non-empty set of Markov transition matrices,\neach of size $N\\times N$. For $n\\geq0$ let $\\alpha_{n}:\\mathbb{X}\\rightarrow\\mathbb{A}_{N}$\nbe a matrix-valued map, and write $\\alpha_{n}^{ij}$ for the $i$th\nrow, $j$th column entry so that for each $i$ we have $\\sum_{j}\\alpha_{n}^{ij}=1$\n(with dependence on the $\\mathbb{X}$-valued argument suppressed).\nThe following assumption places a restriction on the relationship\nbetween $\\alpha$ and the particle system $\\left\\{ \\zeta_{n};n\\geq0\\right\\} $.\n\\begin{assumption*}\n\\textbf{\\emph{(A2)}} For each $n\\geq0$, the entries of $\\alpha_{n}$\nare all measurable with respect to $\\mathcal{F}_{n}$\n\\end{assumption*}\nIntuitively, the members of $\\mathbb{A}_{N}$ will specify different\npossible interaction structures for the particle algorithm and under\n\\textbf{(A2)}, each $\\alpha_{n}$ is a random matrix chosen from $\\mathbb{A}_{N}$\naccording to some deterministic function of $\\left\\{ \\zeta_{0},\\ldots,\\zeta_{n}\\right\\} $.\nExamples are given below. 
We shall write $\\mathbf{1}_{1\/N}$ for the\n$N\\times N$ matrix which has $1\/N$ as every entry and write $Id$\nfor the identity matrix of size apparent from the context in which\nthis notation appears. We shall occasionally use $Id$ also to denote\nidentity operators in certain function space settings. Let $\\mathcal{M}$,\n$\\mathcal{P}$ and $\\mathcal{L}$ be respectively the collections\nof measures, probability measures and real-valued, bounded, $\\mathcal{X}$-measurable\nfunctions on $\\mathsf{X}$. We write\n\\[\n\\left\\Vert \\varphi\\right\\Vert :=\\sup_{x}\\left|\\varphi(x)\\right|,\\quad\\quad\\text{osc}(\\varphi):=\\sup_{x,y}\\left|\\varphi(x)-\\varphi(y)\\right|,\n\\]\nand\n\\begin{equation}\n\\mu(\\varphi):=\\int_{\\mathsf{X}}\\varphi(x)\\mu(dx),\\quad\\text{for any}\\quad\\varphi\\in\\mathcal{L},\\;\\mu\\in\\mathcal{M}.\\label{eq:mu(phi)_notation}\n\\end{equation}\n\n\\begin{rem*}\nNote that $\\mathbb{X}$, $\\mathcal{F}^{\\mathbb{X}}$, $\\mathcal{F}_{n}$,\n$\\mathbb{P}$, $\\alpha$ and various other objects depend on $N$,\nbut this dependence is suppressed from the notation. Unless specified\notherwise, any conditions which we impose on such objects should be\nunderstood as holding for all $N\\geq1$.\n\\end{rem*}\nLet $\\left\\{ W_{n}^{i};i\\in[N],n\\geq0\\right\\} $ be defined by the\nfollowing recursion:\n\n\\begin{equation}\nW_{0}^{i}:=1,\\quad\\quad W_{n}^{i}:=\\sum_{j}\\alpha_{n-1}^{ij}W_{n-1}^{j}g_{n-1}(\\zeta_{n-1}^{j}),\\quad i\\in[N],n\\geq1.\\label{eq:W_n_defn}\n\\end{equation}\n\n\nThe following algorithm implicitly specifies the law $\\mathbb{P}$\nof the $\\alpha$SMC particle system. 
For each $n\\geq1$, the ``Sample''\nstep should be understood as meaning that the variables $\\zeta_{n}=\\left\\{ \\zeta_{n}^{i}\\right\\} _{i\\in[N]}$\nare conditionally independent given $\\left\\{ \\zeta_{0},\\ldots,\\zeta_{n-1}\\right\\} $.\nThe line of Algorithm~\\ref{alg:aSMC} marked $(\\star)$ is intentionally\ngeneric, it amounts to a practical, if imprecise restatement of \\textbf{(A2).\n}In the sequel we shall examine instances of $\\alpha$SMC which arise\nwhen we consider specific $\\mathbb{A}_{N}$ and impose more structure\nat line $(\\star)$.\n\n\\begin{algorithm}[H]\n\\begin{raggedright}\n\\qquad{}For $n=0$,\n\\par\\end{raggedright}\n\n\\begin{raggedright}\n\\qquad{}\\qquad{}For $i=1,\\ldots,N$,\n\\par\\end{raggedright}\n\n\\begin{raggedright}\n\\qquad{}\\qquad{}\\qquad{}Set\\quad{} $W_{0}^{i}=1$\n\\par\\end{raggedright}\n\n\\begin{raggedright}\n\\qquad{}\\qquad{}\\qquad{}Sample\\quad{} $\\left\\{ \\zeta_{0}^{i}\\right\\} _{i\\in[N]}\\iid\\mu_{0}$ \n\\par\\end{raggedright}\n\n\\begin{raggedright}\n\\qquad{}For $n\\geq1$,\n\\par\\end{raggedright}\n\n\\begin{raggedright}\n$(\\star)$\\qquad{}\\enskip{}\\hspace{0.25em}Select $\\alpha_{n-1}$\nfrom $\\mathbb{A}_{N}$ according to some functional of $\\left\\{ \\zeta_{0},\\ldots,\\zeta_{n-1}\\right\\} $\n\\par\\end{raggedright}\n\n\\begin{raggedright}\n\\qquad{}\\qquad{}For $i=1,\\ldots,N$,\n\\par\\end{raggedright}\n\n\\begin{raggedright}\n\\qquad{}\\qquad{}\\qquad{}Set\\quad{} $W_{n}^{i}=\\sum_{j}\\alpha_{n-1}^{ij}W_{n-1}^{j}g_{n-1}(\\zeta_{n-1}^{j})$\n\\par\\end{raggedright}\n\n\\begin{raggedright}\n\\qquad{}\\qquad{}\\qquad{}Sample\\quad{} $\\zeta_{n}^{i}|\\mathcal{F}_{n-1}\\;\\sim\\;\\dfrac{\\sum_{j}\\alpha_{n-1}^{ij}W_{n-1}^{j}g_{n-1}(\\zeta_{n-1}^{j})f(\\zeta_{n-1}^{j},\\cdot)}{W_{n}^{i}}$\n\\par\\end{raggedright}\n\n\\protect\\caption{$\\alpha$SMC\\label{alg:aSMC}}\n\\end{algorithm}\n\n\nWe shall study the objects 
\n\n\\begin{equation}\n\\pi_{n}^{N}:=\\frac{\\sum_{i}W_{n}^{i}\\;\\delta_{\\zeta_{n}^{i}}}{\\sum_{i}W_{n}^{i}},\\quad\\quad\\quad Z_{n}^{N}:=\\frac{1}{N}\\sum_{i}W_{n}^{i},\\quad n\\geq0,\\label{eq:pi^N_andZ^N}\n\\end{equation}\nwhich as the notation suggests, are to be regarded as approximations\nof $\\pi_{n}$ and $Z_{n}$, respectively. We shall also be centrally\nconcerned with the following coefficient, which is closely related\nto the ESS,\n\n\\begin{equation}\n\\mathcal{E}_{n}^{N}:=\\frac{\\left(N^{-1}\\sum_{i}W_{n}^{i}\\right)^{2}}{N^{-1}\\sum_{i}\\left(W_{n}^{i}\\right)^{2}}=\\frac{\\left(N^{-1}\\sum_{i}\\sum_{j}\\alpha_{n-1}^{ij}W_{n-1}^{j}g_{n-1}(\\zeta_{n-1}^{j})\\right)^{2}}{N^{-1}\\sum_{i}\\left(\\sum_{j}\\alpha_{n-1}^{ij}W_{n-1}^{j}g_{n-1}(\\zeta_{n-1}^{j})\\right)^{2}},\\quad n\\geq1,\\label{eq:ESS_defn_front}\n\\end{equation}\nand by convention $\\mathcal{E}_{0}^{N}:=1$. The second equality in\n(\\ref{eq:ESS_defn_front}) is immediate from the definition of $W_{n}^{i}$,\nsee (\\ref{eq:W_n_defn}). Note that $\\mathcal{E}_{n}^{N}$ is always\nvalued in $[0,1]$, and if we write\n\\begin{equation}\nN_{n}^{\\text{eff}}:=N\\mathcal{E}_{n}^{N},\\label{eq:N_eff}\n\\end{equation}\nwe obtain the ESS of \\citet{liu1995blind}, although of course in\na generalized form, since $\\mathcal{E}_{n}^{N}$ is defined in terms\nof the generic ingredients of $\\alpha$SMC. A few comments on generality\nare in order. Firstly, for ease of presentation, we have chosen to\nwork with a particularly simple version of $\\alpha$SMC, in which\nnew samples are proposed using the HMM Markov kernel $f$. The algorithm\nis easily generalized to accommodate other proposal kernels. 
Secondly,
whilst we focus on the application of SMC methods to HMMs, our results
and methodological ideas are immediately transferable to other contexts,
for example via the framework of \citep{smc:meth:DDJ06}.


\subsection{Instances of $\alpha$SMC\label{sub:Instances-of-SMC}}

We now show how $\alpha$SMC admits SIS, the BPF and the ARPF as
special cases, through particular choices of $\mathbb{A}_{N}$. Our
presentation is intended to illustrate the structural generality of
$\alpha$SMC, thus setting the scene for the developments which follow.
The following lemma facilitates exposition by ``unwinding'' the
quantities $\left\{ W_{n}^{i}\right\} _{i\in[N]}$ defined recursively
in (\ref{eq:W_n_defn}). It is used throughout the remainder of the
paper.
\begin{lem}
\label{lem:W_n_representation}For $n\geq1$, $0\leq p<n$ and $i\in[N]$,
\[
W_{n}^{i}=\sum_{j_{p},\ldots,j_{n-1}}\left(\prod_{q=p}^{n-1}\alpha_{q}^{j_{q+1}j_{q}}g_{q}\left(\zeta_{q}^{j_{q}}\right)\right)W_{p}^{j_{p}},\quad\text{where }j_{n}:=i.
\]
\end{lem}
Taking $\mathbb{A}_{N}=\{Id\}$ in Algorithm~\ref{alg:aSMC} yields
SIS, and taking $\mathbb{A}_{N}=\{\mathbf{1}_{1/N}\}$ yields the BPF;
in the latter case the lemma gives
\begin{equation}
W_{n}^{i}=\prod_{p=0}^{n-1}\left(N^{-1}\sum_{j}g_{p}(\zeta_{p}^{j})\right),\label{eq:bootstrap_W_n^i}
\end{equation}
and therefore
\begin{equation}
Z_{n}^{N}=\prod_{p=0}^{n-1}\left(N^{-1}\sum_{i}g_{p}(\zeta_{p}^{i})\right).\label{eq:bootstrap_Z_n^N}
\end{equation}
The ARPF corresponds to $\mathbb{A}_{N}=\{Id,\mathbf{1}_{1/N}\}$,
with $\alpha_{n-1}=\mathbf{1}_{1/N}$, i.e.~multinomial resampling,
selected whenever setting $\alpha_{n-1}=Id$ would result in $\mathcal{E}_{n}^{N}<\tau$,
for a threshold $\tau\in(0,1]$ fixed by the user. Since $\alpha_{n-1}=\mathbf{1}_{1/N}$
makes all of the $W_{n}^{i}$ equal and hence yields $\mathcal{E}_{n}^{N}=1$,
the ARPF enforces $\inf_{n\geq0}\mathcal{E}_{n}^{N}\geq\tau>0$,
or equivalently
\[
\inf_{n\geq0}N_{n}^{\text{eff}}\geq N\tau>0,
\]
by construction. This seemingly trivial observation turns out to
be crucial when we address time-uniform convergence of the ARPF in
Section~\ref{sec:stability}, and the condition $\inf_{n\geq0}\mathcal{E}_{n}^{N}>0$
will appear repeatedly in discussions which lead to the formulation
of new, provably stable algorithms in Section~\ref{sec:Discussion}.


\subsubsection*{Comments on other algorithms}

In the engineering literature, a variety of algorithmic procedures
involving distributed computing have been suggested \citep{bolic2005resampling}.
``Local'' particle approximations of Rao--Blackwellized filters
have been devised in \citep{chen2011decentralized} and \citep{johansen2012exact}.
\citet{Verge_island_particle} have recently suggested an ``island''
particle algorithm, designed for parallel implementation, in which
there are two levels of resampling and the total population size $N=N_{1}N_{2}$
is defined in terms of the number of particles per island, $N_{1}$,
and the number of islands, $N_{2}$.
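As an informal illustration of the sampling mechanism (the helper names are ours, and one concrete way of drawing from the mixture over $j$ is assumed: first pick an ancestor index, then move it through $f$), one step of Algorithm~\ref{alg:aSMC} together with the coefficient $\mathcal{E}_{n}^{N}$ of (\ref{eq:ESS_defn_front}) can be sketched as follows:

```python
import numpy as np

def alpha_smc_step(rng, zeta, W, alpha, g, f_sample):
    """One sampling step of alpha-SMC (a sketch, not the authors' code):
    zeta, W hold the current particles and weights, alpha is an N x N
    Markov transition matrix, g evaluates g_{n-1} pointwise, and
    f_sample(rng, x) draws from f(x, .)."""
    gw = W * g(zeta)                  # W_{n-1}^j g_{n-1}(zeta_{n-1}^j)
    W_new = alpha @ gw                # the weight recursion (eq:W_n_defn)
    zeta_new = np.empty_like(zeta)
    for i in range(len(zeta)):
        p = alpha[i] * gw / W_new[i]              # mixture weights for particle i
        j = rng.choice(len(zeta), p=p / p.sum())  # ancestor index...
        zeta_new[i] = f_sample(rng, zeta[j])      # ...then a move through f
    return zeta_new, W_new

def ess_coefficient(W):
    """E_n^N of (eq:ESS_defn_front); N * ess_coefficient(W) is the ESS."""
    W = np.asarray(W, dtype=float)
    return (W.mean() ** 2) / np.mean(W ** 2)
```

Taking `alpha = np.eye(N)` reproduces the SIS weight update, while `alpha = np.full((N, N), 1.0 / N)` makes all of the $W_{n}^{i}$ equal, as for the BPF.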
Interaction at both levels occurs by resampling; at the island level,
this means entire blocks of particles are replicated and/or discarded.
They investigate the trade-off between $N_{1}$ and $N_{2}$ and provide
asymptotic results which validate their algorithms. In the present
work, we provide some asymptotic results in Section~\ref{sec:Martingale-approximations-and},
but it is really the non-asymptotic results in Section~\ref{sec:stability}
which lead us to suggest specific novel instances of $\alpha$SMC
in Section~\ref{sec:Discussion}. Moreover, in general $\alpha$SMC
is distinct from all these algorithms and, other than in some uninteresting
special cases, none of them coincide with the adaptive procedures
we suggest in Section~\ref{sub:Algorithms-with-adaptive}.


\section{Convergence\label{sec:Martingale-approximations-and}}

In this section our main objective is to investigate, for the general
$\alpha$SMC algorithm (Algorithm~\ref{alg:aSMC}), conditions for
the convergence
\begin{equation}
Z_{n}^{N}-Z_{n}\rightarrow0\quad\text{and}\quad\pi_{n}^{N}(\varphi)-\pi_{n}(\varphi)\rightarrow0,\label{eq:as_convergence_intro}
\end{equation}
at least in probability, as $N\rightarrow\infty$.

In the case of SIS, i.e.~$\mathbb{A}_{N}=\{Id\}$, it is easy to
establish (\ref{eq:as_convergence_intro}), since the processes $\left\{ \zeta_{n}^{i};n\geq0\right\} _{i\in[N]}$
are independent Markov chains of identical law. On the other hand,
for the BPF, i.e.~$\mathbb{A}_{N}=\{\mathbf{1}_{1/N}\}$,
the convergence $\pi_{n}^{N}(\varphi)-\pi_{n}(\varphi)\rightarrow0$
can be proved under very mild conditions, by decomposing $\pi_{n}^{N}(\varphi)-\pi_{n}(\varphi)$
in terms of ``local'' sampling errors; see, amongst others, \citep{smc:theory:Dm04,smc:the:DM08}
for this type of approach.
For instance, for $A\in\mathcal{X}$ we
may write 
\begin{eqnarray}
\pi_{1}^{N}(A)-\pi_{1}(A) & = & \frac{1}{N}\sum_{i}\delta_{\zeta_{1}^{i}}(A)-\frac{\sum_{i}g_{0}(\zeta_{0}^{i})f(\zeta_{0}^{i},A)}{\sum_{i}g_{0}(\zeta_{0}^{i})}\label{eq:intro_boot_decomp1}\\
 & + & \frac{\sum_{i}g_{0}(\zeta_{0}^{i})f(\zeta_{0}^{i},A)}{\sum_{i}g_{0}(\zeta_{0}^{i})}-\pi_{1}(A).\label{eq:intro_boot_decomp2}
\end{eqnarray}
Heuristically, the term on the r.h.s.~of (\ref{eq:intro_boot_decomp1})
converges to zero because, given $\mathcal{F}_{0}$, the samples $\left\{ \zeta_{1}^{i}\right\} _{i\in[N]}$
are conditionally i.i.d.~according to $\frac{\sum_{i}g_{0}(\zeta_{0}^{i})f(\zeta_{0}^{i},\cdot)}{\sum_{i}g_{0}(\zeta_{0}^{i})}$,
and the term in (\ref{eq:intro_boot_decomp2}) converges to zero because
the samples $\left\{ \zeta_{0}^{i}\right\} _{i\in[N]}$ are i.i.d.~according
to $\mu_{0}$. A similar argument ensures that $\pi_{n}^{N}(\varphi)-\pi_{n}(\varphi)\rightarrow0$
for any $n\geq0$, and therefore, by the continuous mapping theorem,
$Z_{n}^{N}-Z_{n}\rightarrow0$, since 
\[
Z_{n}=\prod_{p=0}^{n-1}\pi_{p}(g_{p}),\quad\text{and}\quad Z_{n}^{N}=\prod_{p=0}^{n-1}\pi_{p}^{N}(g_{p}).
\]
In the case of $\alpha$SMC, $\left\{ \zeta_{n}^{i}\right\} _{i\in[N]}$
are conditionally independent given $\mathcal{F}_{n-1}$, but we do
not necessarily have either the unconditional independence structure
of SIS, or the conditionally i.i.d.~structure of the BPF, to work
with. 

\citet{smc:the:DM08} have established a CLT for the ARPF using an
inductive approach w.r.t.~deterministic time periods. \citet{arnaud2009smc}
have obtained a CLT for the ARPF based on an alternative multiplicative
functional representation of the algorithm. Convergence of the ARPF
was studied in \citep{del2012adaptive} by coupling the adaptive algorithm
to a reference particle system, for which resampling occurs at deterministic
times.
One of the benefits of their approach is that existing asymptotic\nresults for non-adaptive algorithms, such as central limit theorems\n(CLT), can then be transferred to the adaptive algorithm with little\nfurther work. Their analysis involves a technical assumption \\citep[Section 5.2]{del2012adaptive}\nto deal with the situation where the threshold parameters coincide\nwith the adaptive criteria. Our analysis of $\\alpha$SMC does not\nrest on any such technical assumption, and in some ways is more direct,\nbut we do not obtain concentration estimates or a CLT. Some more detailed\nremarks on this matter are given after the statement of Theorem~\\ref{thm:convergence}. \n\n\\citet{crisan2012particle} studied convergence and obtained a CLT\nfor an adaptive resampling particle filter in continuous time under\nconditions which they verify for the case of ESS-triggered resampling,\nwithout needing the type of technical assumption of \\citep{del2012adaptive}.\nTheir study focuses, in part, on the random times at which resampling\noccurs and dealing with the subtleties of the convergence in continuous\ntime. Our asymptotic $N\\rightarrow\\infty$ analysis is in some ways\nless refined, but in comparison to this and the other existing works,\nwe analyze a more general algorithm, and it is this generality which\nallows us to suggest new adaptive algorithms in Section~\\ref{sec:Discussion},\ninformed by the time-uniform non-asymptotic error bounds in our Theorem~\\ref{thm:L_R_mix}. \n\nTo proceed, we need some further notation involving $\\alpha$. 
Let us define the matrices $\alpha_{n,n}:=Id$ for $n\geq0$ and, recursively,
\begin{equation}
\alpha_{p,n}^{ij}:=\sum_{k}\alpha_{p+1,n}^{ik}\alpha_{p}^{kj},\quad\quad(i,j)\in[N]^{2},\;0\leq p<n,\label{eq:alpha_p_n_defn}
\end{equation}
so that $\alpha_{p,n}=\alpha_{n-1}\alpha_{n-2}\cdots\alpha_{p}$ and
each $\alpha_{p,n}$ is again a Markov transition matrix. We then set
\begin{equation}
\beta_{p,n}^{i}:=\frac{1}{N}\sum_{j}\alpha_{p,n}^{ji},\quad i\in[N],\;0\leq p\leq n,\label{eq:beta_n_n_defn}
\end{equation}
so that in particular $\beta_{n,n}^{i}=1/N$ for all $i\in[N]$. In
addition to $\mathbf{(A2)}$ we shall consider the following measurability
condition.
\begin{assumption*}
\textbf{(B)} For each $n\geq0$ and $0\leq p\leq n$, the entries of
$\beta_{p,n}$ are all measurable with respect to $\mathcal{F}_{p-1}$.
\end{assumption*}
Define $\gamma_{0}:=\mu_{0}$ and, for $n\geq1$, the kernels and measures
\[
Q_{n}(x,A):=g_{n-1}(x)f(x,A),\quad\quad\gamma_{n}:=\gamma_{n-1}Q_{n},
\]
together with $Q_{n,n}:=Id$ and $Q_{p,n}:=Q_{p+1}Q_{p+2}\cdots Q_{n}$
for $0\leq p<n$, so that $\gamma_{n}=\gamma_{p}Q_{p,n}$; under $\mathbf{(A1)}$
we have $\gamma_{n}(1)>0$.
Due to the conditional independence structure of the HMM, it can easily
be checked that 
\[
\pi_{n}=\frac{\gamma_{n}}{\gamma_{n}(1)},\quad\quad Z_{n}=\gamma_{n}(1),\quad n\geq0,
\]
and we define the normalized kernels
\[
\overline{Q}_{p,n}:=\frac{Q_{p,n}}{\pi_{p}Q_{p,n}(1)}.
\]


For $0\leq p\leq n$, introduce the random measures
\begin{equation}
\Gamma_{p,n}^{N}:=\sum_{i}\beta_{p,n}^{i}W_{p}^{i}\delta_{\zeta_{p}^{i}},\quad\quad\overline{\Gamma}_{p,n}^{N}:=\frac{\Gamma_{p,n}^{N}}{\gamma_{p}(1)},\label{eq:Gamma_defn}
\end{equation}
where $W_{p}^{i}$ is as in (\ref{eq:W_n_defn}). For simplicity of
notation, we shall write $\Gamma_{n}^{N}:=\Gamma_{n,n}^{N}$ and $\overline{\Gamma}_{n}^{N}:=\overline{\Gamma}_{n,n}^{N}$.
If we define 
\begin{equation}
\overline{W}_{n}^{i}:=\frac{W_{n}^{i}}{\gamma_{n}(1)},\quad n\geq0,\label{eq:W_bar_defn}
\end{equation}
then we have from (\ref{eq:Gamma_defn}), 
\[
\overline{\Gamma}_{p,n}^{N}=\sum_{i}\beta_{p,n}^{i}\overline{W}_{p}^{i}\delta_{\zeta_{p}^{i}}.
\]


Finally, we observe from (\ref{eq:beta_n_n_defn}) that
\[
\Gamma_{n}^{N}=\sum_{i}\beta_{n,n}^{i}W_{n}^{i}\delta_{\zeta_{n}^{i}}=N^{-1}\sum_{i}W_{n}^{i}\delta_{\zeta_{n}^{i}}.
\]



\subsection{Error decomposition}

Throughout this section let $\varphi\in\mathcal{L}$, $n\geq0$ and
$N\geq1$ be arbitrarily chosen, but then fixed.
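The composite matrices defined by the recursion $\alpha_{p,n}^{ij}=\sum_{k}\alpha_{p+1,n}^{ik}\alpha_{p}^{kj}$, and the coefficients $\beta_{p,n}^{i}:=N^{-1}\sum_{j}\alpha_{p,n}^{ji}$ (consistent with $\beta_{n,n}^{i}=1/N$ as used below), are easy to sanity-check numerically; the following sketch (the helper names are ours) does so:

```python
import numpy as np

def alpha_products(alphas, p, n):
    """alpha_{p,n} via the recursion alpha_{p,n} = alpha_{p+1,n} @ alpha_p,
    i.e. the backward product alpha_{n-1} ... alpha_p, with alpha_{n,n} = Id;
    alphas[q] holds the Markov matrix alpha_q."""
    A = np.eye(alphas[0].shape[0])
    for q in range(n - 1, p - 1, -1):
        A = A @ alphas[q]
    return A

def beta_coeffs(alphas, p, n):
    """beta_{p,n}^i = N^{-1} sum_j alpha_{p,n}^{ji}: the i-th column mean
    of alpha_{p,n}; in particular beta_{n,n}^i = 1/N."""
    return alpha_products(alphas, p, n).mean(axis=0)
```

Since each $\alpha_{q}$ has unit row sums, every $\alpha_{p,n}$ is again a Markov matrix and the $\beta_{p,n}^{i}$ sum to one over $i$.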
Define, for $1\\leq p\\leq n$\nand $i\\in[N]$,\n\\[\n\\Delta_{p,n}^{i}:=\\overline{Q}_{p,n}(\\varphi)(\\zeta_{p}^{i})-\\frac{\\sum_{j}\\alpha_{p-1}^{ij}W_{p-1}^{j}\\overline{Q}_{p-1,n}(\\varphi)(\\zeta_{p-1}^{j})}{\\sum_{j}\\alpha_{p-1}^{ij}W_{p-1}^{j}\\overline{Q}_{p}(1)(\\zeta_{p-1}^{j})},\n\\]\nand $\\Delta_{0,n}^{i}:=\\overline{Q}_{0,n}(\\varphi)(\\zeta_{0}^{i})-\\mu_{0}\\overline{Q}_{0,n}(\\varphi)$,\nso that $\\mathbb{E}\\left[\\left.\\Delta_{p,n}^{i}\\right|\\mathcal{F}_{p-1}\\right]=0$\nfor any $i\\in[N]$ and $0\\leq p\\leq n$. Then for $0\\leq p\\leq n$\nand $i\\in[N]$ set $k:=pN+i$, and\n\n\\[\n\\xi_{k}^{N}:=\\sqrt{N}\\beta_{p,n}^{i}\\overline{W}_{p}^{i}\\Delta_{p,n}^{i},\n\\]\nso as to define a sequence $\\left\\{ \\xi_{k}^{N};k=1,\\ldots,(n+1)N\\right\\} $.\nFor $k=1,\\ldots,(n+1)N$, let $\\mathcal{F}^{(k)}$ be the $\\sigma$-algebra\ngenerated by $\\left\\{ \\zeta_{p}^{i};\\; pN+i\\leq k,\\; i\\in[N],0\\leq p\\leq n\\right\\} $.\nSet $\\mathcal{F}^{(-1)}:=\\{\\mathbb{X},\\emptyset\\}$.\n\nThe following proposition is the main result underlying Theorem~\\ref{thm:convergence}.\nThe proof is given in the appendix.\n\\begin{prop}\n\\label{prop:martingale} Assume $\\mathbf{(A2)}$ and $\\mathbf{(B)}$.\nWe have the decomposition\n\n\\begin{equation}\n\\sqrt{N}\\left[\\overline{\\Gamma}_{n}^{N}(\\varphi)-\\pi_{n}(\\varphi)\\right]=\\sum_{k=1}^{(n+1)N}\\xi_{k}^{N},\\label{eq:Gamma_telescope}\n\\end{equation}\nwhere for $k=1,\\ldots,(n+1)N$, the increment $\\xi_{k}^{N}$ is measurable\nw.r.t.~$\\mathcal{F}^{(k)}$ and satisfies \n\\begin{equation}\n\\mathbb{E}\\left[\\left.\\xi_{k}^{N}\\right|\\mathcal{F}^{(k-1)}\\right]=\\mathbb{E}\\left[\\left.\\xi_{k}^{N}\\right|\\mathcal{F}_{p-1}\\right]=0,\\quad\\text{with}\\quad p:=\\left\\lfloor (k-1)\/N\\right\\rfloor .\\label{eq:xi_cond_exp}\n\\end{equation}\nFor each $r\\geq1$ there exists a universal constant $B(r)$ such\nthat\n\\begin{eqnarray}\n & & 
\\mathbb{E}\\left[\\left|\\overline{\\Gamma}_{n}^{N}(\\varphi)-\\pi_{n}(\\varphi)\\right|^{r}\\right]^{1\/r}\\nonumber \\\\\n & & \\leq B(r)^{1\/r}\\sum_{p=0}^{n}\\mathrm{osc}\\left(\\overline{Q}_{p,n}(\\varphi)\\right)\\mathbb{E}\\left[\\left|\\sum_{i}\\left(\\beta_{p,n}^{i}\\overline{W}_{p}^{i}\\right)^{2}\\right|^{r\/2}\\right]^{1\/r}.\\label{eq:martingale_burkholder_bound}\n\\end{eqnarray}\n \n\\end{prop}\nThe proof of Theorem~\\ref{thm:convergence}, which is mostly technical,\nis given in the appendix. Here we briefly discuss our assumptions\nand sketch some of the main arguments. Part 1) of Theorem~\\ref{thm:convergence}\nfollows immediately from (\\ref{eq:Gamma_telescope}) and (\\ref{eq:xi_cond_exp})\napplied with $\\varphi=1$. In turn, the martingale structure of\\textbf{\n}(\\ref{eq:Gamma_telescope}) and (\\ref{eq:xi_cond_exp}) is underpinned\nby the measurability conditions \\textbf{(A2)} and $\\mathbf{(B)}$.\nThe proofs of parts 2) and 3) of Theorem~\\ref{thm:convergence},\ninvolve applying Proposition~\\ref{prop:martingale} in conjunction\nwith the identities\n\\begin{eqnarray}\nZ_{n}^{N}-Z_{n} & = & \\Gamma_{n}^{N}(1)-\\gamma_{n}(1),\\nonumber \\\\\n\\pi_{n}^{N}(\\varphi)-\\pi_{n}(\\varphi) & = & \\frac{\\Gamma_{n}^{N}(\\varphi)}{\\Gamma_{n}^{N}(1)}-\\frac{\\gamma_{n}(\\varphi)}{\\gamma_{n}(1)}.\\label{eq:convergence_sketch_id}\n\\end{eqnarray}\nIn order to prove that these errors convergence to zero in probability\nwe show that the quadratic variation term in (\\ref{eq:martingale_burkholder_bound})\nconverges to zero. 
In general, we cannot hope for the latter convergence\nwithout some sort of negligibility hypothesis on the product terms\n$\\left\\{ \\mathrm{osc}\\left(\\overline{Q}_{p,n}(\\varphi)\\right)\\beta_{p,n}^{i}\\overline{W}_{p}^{i};i\\in[N]\\right\\} $.\nAssumption $\\mathbf{\\mathbf{(A1)}}$ allows us to crudely upper-bound\n$\\mathrm{osc}\\left(\\overline{Q}_{p,n}(\\varphi)\\right)$ and $\\overline{W}_{p}^{i}$;\nthe measurability condition $\\mathbf{(B)}$ allows us to dispose of\nthe expectation in (\\ref{eq:martingale_burkholder_bound}); then via\nMarkov's inequality and the classical equivalence: \n\\[\n\\lim_{N\\rightarrow\\infty}\\max_{i\\in[N]}\\beta_{p,n}^{i}=0\\quad\\Leftrightarrow\\quad\\lim_{N\\rightarrow\\infty}\\sum_{i}\\left(\\beta_{p,n}^{i}\\right)^{2}=0,\n\\]\nwhich holds since $\\left(\\max_{i\\in[N]}\\beta_{p,n}^{i}\\right)^{2}\\leq\\sum_{i}\\left(\\beta_{p,n}^{i}\\right)^{2}\\leq\\max_{i\\in[N]}\\beta_{p,n}^{i}$,\nthe negligibility part of $\\mathbf{(B^{+})}$ guarantees that $\\left|\\Gamma_{n}^{N}(\\varphi)-\\gamma_{n}(\\varphi)\\right|$\nconverges to zero in probability. The stronger condition $\\mathbf{(B^{++})}$\nbuys us the $\\sqrt{N}$ scaling displayed in part 3). In Section~\\ref{sub:Ensuring-convergence}\nwe discuss what can go wrong when $\\mathbf{(B^{+})}$ does not hold.\n\n\n\\section{Stability\\label{sec:stability}}\n\nIn this section we study the stability of approximation errors under\nthe following regularity condition.\n\\begin{assumption*}\n$\\mathbf{(C)}$ There exists $\\left(\\delta,\\epsilon\\right)\\in[1,\\infty)^{2}$\nsuch that\n\\[\n\\sup_{n\\geq0}\\sup_{x,y}\\frac{g_{n}(x)}{g_{n}(y)}\\leq\\delta,\\quad\\quad f(x,\\cdot)\\leq\\epsilon f(y,\\cdot),\\quad(x,y)\\in\\mathsf{X}^{2}.\n\\]\n\\end{assumption*}\n\\begin{rem}\n\\label{rem:assumption_C}Assumption $\\mathbf{(C)}$ is a standard\nhypothesis in studies of non-asymptotic stability properties of SMC\nalgorithms. 
Similar conditions have been adopted in \citep[Chapter 7]{smc:theory:Dm04}
and \citep{smc:the:LGO04}, amongst others. $\mathbf{(C)}$ guarantees
that $Q_{p,n}$, and related objects, obey a variety of regularity
conditions. In particular, we immediately obtain

\begin{equation}
\sup_{p,n}\sup_{x}\overline{Q}_{p,n}(1)(x)\leq\sup_{p,n}\sup_{x,y}\frac{Q_{p,n}(1)(x)}{Q_{p,n}(1)(y)}\leq\delta\epsilon<+\infty.\label{eq:Q_p,n_bounded}
\end{equation}
Furthermore, if we introduce the following operators on probability
measures:

\begin{equation}
\Phi_{n}:\mu\in\mathcal{P}\mapsto\frac{\mu Q_{n}}{\mu(g_{n-1})}\in\mathcal{P},\quad\quad n\geq1,\label{eq:Phi_defn}
\end{equation}

\begin{equation}
\Phi_{p,n}:=\Phi_{n}\circ\cdots\circ\Phi_{p+1},\quad0\leq p<n,\label{eq:Phi_p_n_defn}
\end{equation}
then $\pi_{n}=\Phi_{p,n}(\pi_{p})$ for $0\leq p<n$, and under $\mathbf{(C)}$
there exist $C<+\infty$ and $\rho\in(0,1)$, depending only on $(\delta,\epsilon)$,
such that $\sup_{\mu,\nu\in\mathcal{P}}\left\Vert \Phi_{p,n}(\mu)-\Phi_{p,n}(\nu)\right\Vert \leq C\rho^{n-p}$.
\end{rem}
\begin{proof}[Proof of Theorem~\ref{thm:L_R_mix}]
For the first bound on the right of (\ref{eq:them_Stability_Statement}),
consider
\begin{equation}
v_{n}:=\mathbb{E}\left[\left(\frac{Z_{n}^{N}}{Z_{n}}-1\right)^{2}\right],\quad n\geq0,\label{eq:v_n_defn}
\end{equation}
for which, when $\inf_{n\geq0}\mathcal{E}_{n}^{N}\geq\tau$ for some
$\tau>0$, it can be shown that there exists a finite constant $C$,
independent of $N$ and $n$, such that
\begin{equation}
v_{n}\leq\frac{C}{N\tau}\sum_{p=0}^{n-1}\left(v_{p}+1\right),\quad n\geq1.\label{eq:v_n_recursion}
\end{equation}
We claim that this implies
\begin{equation}
v_{n}\leq\left(1+\frac{C}{N\tau}\right)^{n}-1,\quad n\geq0.\label{eq:v_n_induction_hyp}
\end{equation}
The argument
is inductive. To initialize, note that since by definition $Z_{0}^{N}=Z_{0}=1$,
we have $v_{0}=0$. Now assume (\ref{eq:v_n_induction_hyp}) holds
at all ranks strictly less than some fixed $n\geq1$.
Using (\\ref{eq:v_n_recursion}),\nwe then have at rank $n$,\n\\begin{eqnarray*}\nv_{n} & \\leq & \\frac{C}{N\\tau}\\sum_{p=0}^{n-1}\\left(v_{p}+1\\right)\\\\\n & \\leq & \\frac{C}{N\\tau}\\sum_{p=0}^{n-1}\\left(1+\\frac{C}{N\\tau}\\right)^{p}\\\\\n & = & \\frac{C}{N\\tau}\\frac{\\left(1+\\frac{C}{N\\tau}\\right)^{n}-1}{\\left(1+\\frac{C}{N\\tau}\\right)-1}\\\\\n & = & \\left(1+\\frac{C}{N\\tau}\\right)^{n}-1.\n\\end{eqnarray*}\nThis completes the proof of (\\ref{eq:v_n_induction_hyp}), from which\nthe second equality on the right of (\\ref{eq:them_Stability_Statement})\nfollows immediately upon noting that by Theorem~\\ref{thm:convergence},\n$\\mathbb{E}[Z_{n}^{N}]=Z_{n}$.\n\nFor the second bound on the right of (\\ref{eq:them_Stability_Statement}),\nfirst note that as per Remark~\\ref{rem:assumption_C}, under $\\mathbf{(C)}$\\textbf{\n}we have \n\\begin{eqnarray*}\n\\left\\Vert P_{p,n}(\\bar{\\varphi})\\right\\Vert & = & \\sup_{x}\\left|P_{p,n}(\\varphi)(x)-\\pi_{n}(\\varphi)\\right|\\\\\n & = & \\sup_{x}\\left|\\Phi_{p,n}(\\delta_{x})(\\varphi)-\\Phi_{p,n}(\\pi_{p})(\\varphi)\\right|\\\\\n & \\leq & \\sup_{\\mu,\\nu\\in\\mathcal{P}}\\left\\Vert \\Phi_{p,n}(\\mu)-\\Phi_{p,n}(\\nu)\\right\\Vert \\left\\Vert \\varphi\\right\\Vert \\leq\\left\\Vert \\varphi\\right\\Vert C\\rho^{n-p},\n\\end{eqnarray*}\nand \n\\[\n\\sup_{n\\geq0}\\sup_{p\\leq n}\\;\\delta_{p,n}<+\\infty,\n\\]\n Using these upper bounds, the fact that under $\\mathbf{(B^{++})}$\nwe have $\\beta_{p,n}^{i}=1\/N$, and Proposition~\\ref{prop:L_p_bound_mixing},\nwe find that there exists a finite constant $\\widetilde{B}(r)$ such\nthat for any $N\\geq1$, $n\\geq0$, $\\varphi\\in\\mathcal{L}$, \n\n\\emph{\n\\[\n\\mathbb{E}\\left[\\left|\\pi_{n}^{N}(\\varphi)-\\pi_{n}(\\varphi)\\right|^{r}\\right]^{1\/r}\\leq\\left\\Vert \\varphi\\right\\Vert 
\\frac{\\tilde{B}(r)}{\\sqrt{N}}\\sum_{p=0}^{n}\\rho^{n-p}\\mathbb{E}\\left[\\left|\\mathcal{E}_{p}^{N}\\right|^{-r\/2}\\right]^{1\/r},\n\\]\n}where\n\\[\n\\mathcal{E}_{n}^{N}=\\frac{\\left(N^{-1}\\sum_{i}W_{n}^{i}\\right)^{2}}{N^{-1}\\sum_{i}\\left(W_{n}^{i}\\right)^{2}}.\n\\]\n\n\\end{proof}\n\n\\section{Discussion\\label{sec:Discussion}}\n\n\n\\subsection{Why not just run independent particle filters and average?\\label{sub:Why-not-just}}\n\nOne obvious approach to parallelization of SMC is to run a number\nof independent copies of a standard algorithm, such as the BPF, and\nthen in some sense simply average their outputs. Let us explain possible\nshortcomings of this approach. \n\nSuppose we want to run $s\\geq1$ independent copies of Algorithm~\\ref{alg:boot_pf},\neach with $q\\geq1$ particles. For purposes of exposition, it is helpful\nto express this collection of independent algorithms as a particular\ninstance of $\\alpha$SMC: for the remainder of Section~\\ref{sub:Why-not-just},\nwe set $N=sq$ and consider Algorithm~\\ref{alg:aSMC} with $\\mathbb{A}_{N}$\nchosen to consist only of the block diagonal matrix:\n\\begin{equation}\n\\left[\\begin{array}{cccc}\n\\mathbf{q^{-1}} & \\mathbf{0} & \\cdots & \\mathbf{0}\\\\\n\\mathbf{0} & \\mathbf{q^{-1}} & \\cdots & \\mathbf{0}\\\\\n\\vdots & \\vdots & \\ddots & \\vdots\\\\\n\\mathbf{0} & \\mathbf{0} & \\cdots & \\mathbf{q^{-1}}\n\\end{array}\\right]\\label{eq:alpha_block}\n\\end{equation}\nwhere $\\mathbf{q}^{-1}$ is a $q\\times q$ submatrix with every entry\nequal to $q^{-1}$ and $\\mathbf{0}$ is a submatrix of zeros, of the\nsame size. 
In this situation, a simple application of Lemma~\ref{lem:W_n_representation}
shows that for any $n\geq1$ and $\ell\in[s]$, if we define $B(\ell):=\{(\ell-1)q+1,(\ell-1)q+2,\ldots,\ell q\}$,
then
\begin{equation}
\text{for all}\quad i_{n}\in B(\ell),\quad\quad W_{n}^{i_{n}}=\prod_{p=0}^{n-1}\left(q^{-1}\sum_{i_{p}\in B(\ell)}g_{p}\left(\zeta_{p}^{i_{p}}\right)\right)=:\mathbb{W}_{n}^{\ell},\label{eq:W_n^i_blocks}
\end{equation}
c.f.~(\ref{eq:bootstrap_W_n^i})--(\ref{eq:bootstrap_Z_n^N}), and
furthermore upon inspection of Algorithm~\ref{alg:aSMC}, we find
\begin{equation}
\text{for all }\ell\in[s]\text{ and }i\in B(\ell),\quad\quad\mathbb{P}\left(\left.\zeta_{n}^{i}\in A\right|\mathcal{F}_{n-1}\right)=\frac{\sum_{j\in B(\ell)}g_{n-1}\left(\zeta_{n-1}^{j}\right)f\left(\zeta_{n-1}^{j},A\right)}{\sum_{j\in B(\ell)}g_{n-1}\left(\zeta_{n-1}^{j}\right)},\label{eq:blocks_law}
\end{equation}
for any $A\in\mathcal{X}$. It follows that the blocks of particles
\[
\hat{\zeta}_{n}^{\ell}:=\left\{ \zeta_{n}^{i}\right\} _{i\in B(\ell)},\quad\ell\in[s],
\]
are independent, and for each $\ell\in[s]$, the sequence $\left\{ \hat{\zeta}_{n}^{\ell};n\geq0\right\} $
evolves under the same law as a BPF with $q$ particles. Furthermore,
we notice 
\[
\pi_{n}^{N}=\pi_{n}^{sq}=\frac{\sum_{i}W_{n}^{i}\;\delta_{\zeta_{n}^{i}}}{\sum_{i}W_{n}^{i}}=\frac{\sum_{\ell\in[s]}\sum_{i\in B(\ell)}W_{n}^{i}\;\delta_{\zeta_{n}^{i}}}{\sum_{\ell\in[s]}\sum_{i\in B(\ell)}W_{n}^{i}}=\frac{\sum_{\ell\in[s]}\mathbb{W}_{n}^{\ell}\left(q^{-1}\sum_{i\in B(\ell)}\delta_{\zeta_{n}^{i}}\right)}{\sum_{\ell\in[s]}\mathbb{W}_{n}^{\ell}},
\]
where $q^{-1}\sum_{i\in B(\ell)}\delta_{\zeta_{n}^{i}}$ may be regarded
as the approximation of $\pi_{n}$ obtained from the $\ell$th block
of particles.
Since we have assumed that $\\mathbb{A}_{N}$ consists\nonly of the matrix (\\ref{eq:alpha_block}), $\\mathbf{(A2)}$ and $\\mathbf{(B^{++})}$\nhold, and by Theorem~\\ref{thm:convergence} we are assured of the\na.s.~convergence $\\pi_{n}^{sq}(\\varphi)\\rightarrow\\pi_{n}(\\varphi)$\nwhen $q$ is fixed and $s\\rightarrow\\infty$. In words, we have convergence\nas the total number of bootstrap algorithms tends to infinity, even\nthough the number of particles within each algorithm is fixed. On\nthe other hand, simple averaging of the output from the $s$ independent\nalgorithms would entail reporting:\n\\begin{equation}\n\\frac{1}{sq}\\sum_{i\\in[sq]}\\delta_{\\zeta_{n}^{i}}\\label{eq:naive}\n\\end{equation}\nas an approximation of $\\pi_{n}$; the problem is that (\\ref{eq:naive})\nis biased, in the sense that in general it is not true that, with\n$q$ fixed, $(sq)^{-1}\\sum_{i\\in[sq]}\\varphi(\\zeta_{n}^{i})\\rightarrow\\pi_{n}(\\varphi)$\nas $s\\rightarrow\\infty$ (although obviously we do have convergence\nif $q\\rightarrow\\infty$). In summary, simple averages across independent\nparticle filters do not, in general, converge as the number of algorithms\ngrows. 
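The bias of (\ref{eq:naive}) can be seen exactly in a one-step toy example (ours: a three-point state space stands in for one step of self-normalized importance sampling, so that all expectations can be enumerated). As $s\rightarrow\infty$, the unweighted average converges to the mean of a single block's self-normalized estimate, which differs from the target, while the $\mathbb{W}$-weighted combination recovers the target exactly:

```python
import itertools

# Toy setup (ours): X uniform on {0, 1, 2}, potential g, test function phi.
xs = [0, 1, 2]
g = {0: 1.0, 1: 2.0, 2: 4.0}
phi = {0: 0.0, 1: 1.0, 2: 2.0}

# Target: pi(phi) = E[phi(X) g(X)] / E[g(X)] under the uniform law of X.
target = sum(phi[x] * g[x] for x in xs) / sum(g[x] for x in xs)

q = 2  # particles per independent block
naive = weighted = norm = 0.0
for sample in itertools.product(xs, repeat=q):
    p = (1.0 / len(xs)) ** q                 # probability of this q-sample
    W = sum(g[x] for x in sample) / q        # one-step block weight
    est = sum(phi[x] * g[x] for x in sample) / sum(g[x] for x in sample)
    naive += p * est        # s -> infinity limit of the unweighted average
    weighted += p * W * est  # numerator of the weighted limit
    norm += p * W            # denominator of the weighted limit
weighted /= norm
```

Here `weighted` equals the target exactly because the block weight cancels the random normalization, whereas `naive` carries an $O(1/q)$ bias that no amount of averaging over blocks removes.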
\n\nWe can also discuss the quality of an approximation of $Z_{n}$ obtained\nby simple averaging across the $s$ independent algorithms; let us\nconsider the quantities\n\\[\n\\mathbb{Z}_{n}^{(q,\\ell)}:=\\frac{1}{\\ell}\\sum_{j\\in[\\ell]}\\mathbb{W}_{n}^{j},\\quad\\ell\\in[s].\n\\]\nComparing (\\ref{eq:W_n^i_blocks}) with (\\ref{eq:bootstrap_Z_n^N}),\nand noting (\\ref{eq:blocks_law}) and the independence properties\ndescribed above, we have\n\\begin{equation}\n\\mathbb{E}\\left[\\mathbb{Z}_{n}^{(q,s)}\\right]=Z_{n},\\quad\\quad\\mathbb{E}\\left[\\left(\\frac{\\mathbb{Z}_{n}^{(q,s)}}{Z_{n}}-1\\right)^{2}\\right]=\\frac{1}{s}\\mathbb{E}\\left[\\left(\\frac{\\mathbb{Z}_{n}^{(q,1)}}{Z_{n}}-1\\right)^{2}\\right],\\label{eq:Z_naive_average}\n\\end{equation}\nwhere the first equality holds due to the first part of Theorem~\\ref{thm:convergence}:\nin this context the well known lack-of-bias property of the BPF. Under\ncertain ergodicity and regularity conditions, \\citep[Proposition 4]{WhiteleyTPF}\nestablishes that $\\mathbb{E}\\left[\\left(\\mathbb{Z}_{n}^{(q,1)}\/Z_{n}\\right)^{2}\\right]$\ngrows exponentially fast along observation sample paths when $q$\nis fixed and $n\\rightarrow\\infty$. When that occurs, it is clear\nfrom (\\ref{eq:Z_naive_average}) that $s$ must be scaled up exponentially fast\nwith $n$ in order to control the relative variance of $\\mathbb{Z}_{n}^{(q,s)}$.\nOn the other hand, by Theorem~\\ref{thm:L_R_mix} and Remark~\\ref{rem:linear_variance},\nit is apparent that if we design an instance of $\\alpha$SMC so as\nto enforce $\\inf_{n\\geq0}\\mathcal{E}_{n}^{N}>0$, then we can control\n$\\mathbb{E}\\left[\\left(Z_{n}^{N}\/Z_{n}\\right)^{2}\\right]$ at a more\nmodest computational cost. 
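To see the scaling implied by (\ref{eq:Z_naive_average}), consider a crude toy model (ours, not the paper's HMM setting) in which the $q$ potentials at every time step are i.i.d.\ with relative variance $\mathrm{relvar}$; each factor of $\mathbb{Z}_{n}^{(q,1)}\/Z_{n}$ then contributes an independent factor $1+\mathrm{relvar}\/q$ to the relative second moment, making the exponential growth in $n$ explicit:

```python
def second_moment_ratio(relvar_g, q, n):
    """E[(Z_n^{(q,1)}/Z_n)^2] in a toy model where, at every time step,
    the q potentials are i.i.d. with relative variance relvar_g: each of
    the n independent factors contributes 1 + relvar_g/q."""
    return (1.0 + relvar_g / q) ** n

def blocks_needed(relvar_g, q, n, target_relvar):
    """Number s of independent blocks needed for the averaged estimator
    Z_n^{(q,s)} to reach a given relative variance: averaging s i.i.d.
    copies divides the relative variance by s."""
    return (second_moment_ratio(relvar_g, q, n) - 1.0) / target_relvar
```

With `relvar_g = 1` and `q = 10`, already `n = 200` time steps force `s` beyond $10^{8}$, while enforcing a lower bound on $\mathcal{E}_{n}^{N}$ yields linear-in-$n$ behaviour as in Remark~\ref{rem:linear_variance}.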
When $\\mathbb{A}_{N}$ consists only of\nthe matrix (\\ref{eq:alpha_block}) we do not have a guarantee that\n$\\inf_{n\\geq0}\\mathcal{E}_{n}^{N}>0$, but in Section~\\ref{sub:Algorithms-with-adaptive}\nwe shall suggest some novel algorithms which do guarantee this lower\nbound and therefore enjoy the time-uniform convergence and linear-in-time\nvariance properties of Theorem~\\ref{thm:L_R_mix}. Before addressing\nthese stability issues we discuss the conditions under which the $\\alpha$SMC\nalgorithm converges.\n\n\n\\subsection{Ensuring convergence\\label{sub:Ensuring-convergence}}\n\nThroughout Section~\\ref{sub:Ensuring-convergence}, we consider the\ngeneric Algorithm~\\ref{alg:aSMC}. We describe what can go wrong\nif the conditions of Theorem~\\ref{thm:convergence} do not hold.\nAs an example of a situation in which $\\mathbf{(B^{+})}$ does not\nhold, suppose that $\\mathbb{A}_{N}$ consists only of the transition\nmatrix of a simple random walk on the star graph with $N$ vertices,\ncall it $\\mathcal{S}_{N}$. That is, for $N>2$, $\\mathcal{S}_{N}$\nis an undirected tree with one internal vertex and $N-1$ leaves,\nand for $N\\leq2$, all vertices are leaves. Examples of $\\mathcal{S}_{N}$\nare illustrated in Figure~\\ref{fig:Star-graphs-of}. It is elementary\nthat a simple random walk on $\\mathcal{S}_{N}$ has unique invariant\ndistribution given by \n\\[\n\\frac{d_{N}^{i}}{\\sum_{j}d_{N}^{j}},\\quad i\\in[N],\\quad\\quad\\text{ where}\\quad d_{N}^{i}:=\\text{ degree of vertex }i\\text{ in }\\mathcal{S}_{N}.\n\\]\nAssuming that for every $N>2$ the internal vertex of $\\mathcal{S}_{N}$\nis labelled vertex $1$, this invariant distribution assigns probability\n$1\/2$ to vertex $1$, uniformly in $N>2$, so $\\mathbf{(B^{+})}$ does not hold, and thus part\n2) of Theorem~\\ref{thm:convergence} does not hold.\n\n\\begin{figure}\n\\hfill{}\\includegraphics[width=1\\textwidth]{stars2}\\hfill{}\n\n\\protect\\caption{\\label{fig:Star-graphs-of}Star graphs. 
}\n\n\n\\end{figure}\n\n\nAs a more explicit example of convergence failure, suppose that $\\mathbb{A}_{N}$\nconsists only of the matrix which has $1$ for every entry in its\nfirst column, and zeros for all other entries. This is the transition\nmatrix of a random walk on a directed graph of which all edges lead\nto vertex $1$. It follows that for all $0\\leq p<n$, every particle\nat time $p+1$ is sampled conditionally on $\\zeta_{p}^{1}$ alone, so\nthe whole system is driven by a single particle lineage and $\\pi_{n}^{N}(\\varphi)$\ndoes not converge to $\\pi_{n}(\\varphi)$ as $N\\rightarrow\\infty$.\n\n\n\\subsection{Algorithms with adaptive interaction\\label{sub:Algorithms-with-adaptive}}\n\nWe now seek instances of $\\alpha$SMC which satisfy the following two\ncriteria:\n\\begin{enumerate}\n\\item\\label{enu:criterion1}a bound of the form $\\inf_{n\\geq0}\\mathcal{E}_{n}^{N}\\geq\\tau>0$\nshould be enforced, so as to ensure stability\n\n\\item\\label{enu:criterion2}the computational complexity of associated\nsampling, weight and ESS calculations should not be prohibitively\nhigh\n\n\\end{enumerate} The motivation for (\\ref{enu:criterion1}) is the\ntheoretical assurance given by Theorem~\\ref{thm:L_R_mix}. The motivation\nfor (\\ref{enu:criterion2}) is simply that we do not want an algorithm\nwhich is much more expensive than any of the standard SMC methods,\nAlgorithms~\\ref{alg:SIS}--\\ref{alg:boot_pf} and the ARPF. It is\neasily checked that the complexity of SIS is $O(N)$ per unit time\nstep, which is the same as the complexity of the BPF \\citep{carpenter1999improved}\nand the ARPF.\n\nThroughout the remainder of Section~\\ref{sub:Algorithms-with-adaptive}\nwe shall assume that $\\mathbb{A}_{N}$ consists only of transition\nmatrices of simple random walks on regular undirected graphs. We impose\na little structure in addition to this as per the following definition,\nwhich identifies an object related to the standard notion of a block-diagonal\nmatrix.\n\\begin{defn*}\nA \\textbf{B-matrix} is a Markov transition matrix which specifies\na simple random walk on a regular undirected graph which has a self-loop\nat every vertex and whose connected components are all complete subgraphs.\n\\end{defn*}\nNote that due to the graph regularity appearing in this definition,\nif $\\mathbb{A}_{N}$ consists only of B-matrices, then $\\mathbf{(B^{++})}$\nis immediately satisfied. 
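For concreteness, a B-matrix can be constructed directly from the partition of $[N]$ induced by the connected components (a sketch, ours; block sizes must be equal because the underlying graph is regular):

```python
import numpy as np

def b_matrix(blocks, N):
    """Transition matrix of a simple random walk on an undirected graph
    whose connected components are complete subgraphs with a self-loop at
    every vertex: a B-matrix.  Regularity forces equal block sizes."""
    d = len(blocks[0])
    assert all(len(b) == d for b in blocks), "underlying graph must be regular"
    A = np.zeros((N, N))
    for block in blocks:
        for i in block:
            for j in block:
                A[i, j] = 1.0 / d  # move to any vertex of the same component
    return A

Id4 = b_matrix([[i] for i in range(4)], 4)   # degree 1: the identity, Id
full = b_matrix([[0, 1, 2, 3]], 4)           # degree N: complete interaction
pairs = b_matrix([[0, 1], [2, 3]], 4)        # intermediate degree 2
```

Graph regularity makes the columns, as well as the rows, of such a matrix sum to one, consistent with the remark above that $\mathbf{(B^{++})}$ is immediate.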
This regularity is also convenient for purposes\nof interpretation: it seems natural to use graph degree to give a\nprecise meaning to ``degree of interaction''. Indeed $Id$ and $\\mathbf{1}_{1\/N}$\nare both B-matrices, respectively specifying simple random walks on\n$1$-regular and $N$-regular graphs, and recall for the ARPF, $\\mathbb{A}_{N}=\\left\\{ Id,\\mathbf{1}_{1\/N}\\right\\} $;\nthe main idea behind the new algorithms below is to consider an instance\nof $\\alpha$SMC in which $\\mathbb{A}_{N}$ is defined to consist of\nB-matrices of various degrees $d\\in[N]$, and define adaptive algorithms\nwhich select the value of $\\alpha_{n-1}$ by searching through $\\mathbb{A}_{N}$\nto find the graph with the smallest $d$ which achieves $\\mathcal{E}_{n}^{N}\\geq\\tau>0$\nand hence satisfies criterion 1. In this way, we ensure provable stability\nwhilst trying to avoid the complete interaction which occurs when\n$\\alpha_{n-1}=\\mathbf{1}_{1\/N}$. \n\nAnother appealing property of B-matrices is formalized in the following\nlemma; see criterion (\\ref{enu:criterion2}) above. The proof is given\nin the appendix.\n\\begin{lem}\n\\label{lem:complexity}Suppose that $A=\\left(A^{ij}\\right)$ is a\nB-matrix of size $N$. Then given the quantities $\\left\\{ W_{n-1}^{i}\\right\\} _{i\\in[N]}$\nand $\\left\\{ g_{n-1}(\\zeta_{n-1}^{i})\\right\\} _{i\\in[N]}$, the computational\ncomplexity of calculating $\\left\\{ W_{n}^{i}\\right\\} _{i\\in[N]}$\nand simulating $\\left\\{ \\zeta_{n}^{i}\\right\\} _{i\\in[N]}$ as per\nAlgorithm~\\ref{alg:aSMC}, using $\\alpha_{n-1}=A$, is $O(N)$.\n\\end{lem}\nWhen calculating the overall complexity of Algorithm~\\ref{alg:aSMC}\nwe must also consider the complexity of line $(\\star)$, which in\ngeneral depends on $\\mathbb{A}_{N}$ and the particular functional\nused to choose $\\alpha_{n}$. We resume this complexity discussion\nafter describing the specifics of some adaptive algorithms. 
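The mechanism behind Lemma~\ref{lem:complexity} can be sketched as follows (our illustration, not the proof in the appendix): for a B-matrix, the sum $\sum_{j}A^{ij}W_{n-1}^{j}g_{n-1}(\zeta_{n-1}^{j})$ defining $W_{n}^{i}$ is constant on each block, so a single pass over the particles suffices.

```python
import numpy as np

def update_weights(blocks, W_prev, g_prev):
    """For a B-matrix alpha with the given blocks, the alpha-SMC update
    W_n^i = sum_j alpha^{ij} W_{n-1}^j g_{n-1}(zeta_{n-1}^j) reduces to a
    block average, computed here in a single O(N) pass."""
    W = np.empty_like(W_prev)
    for block in blocks:
        avg = sum(W_prev[j] * g_prev[j] for j in block) / len(block)
        for j in block:
            W[j] = avg
    return W

W_prev = np.array([1.0, 2.0, 3.0, 4.0])
g_prev = np.array([0.5, 1.0, 1.5, 2.0])
W_new = update_weights([[0, 1], [2, 3]], W_prev, g_prev)
```

The result agrees with the dense matrix-vector product against the corresponding B-matrix, but without the $O(N^{2})$ cost.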
\n\n\n\\subsubsection*{Adaptive interaction}\n\nThroughout this section we set $m\\in\\mathbb{N}$ and $N=2^{m}$.\nConsider Algorithm~\\ref{alg:aSMC} with $\\mathbb{A}_{N}$ chosen\nto be the set of B-matrices of size $N$. We suggest three adaptation\nrules at line $(\\star)$ of Algorithm~\\ref{alg:aSMC}: Simple, Random,\nand Greedy, all implemented via Algorithm~\\ref{alg:generic adaptation}\n(note that dependence of some quantities on $n$ is suppressed from\nthe notation there), but differing in the way they select the index\nlist $\\mathcal{I}_{k}$ which appears in the ``while'' loop of that\nprocedure. The methods for selecting $\\mathcal{I}_{k}$ are summarised\nin Table~\\ref{tab:Choosing_I_k}: the Simple rule needs little explanation,\nthe Random rule implements an independent random shuffling of indices\nand the Greedy rule is intended, heuristically, to pair large weights,\n$\\mathbb{W}_{k}^{i}$, with small weights in order to terminate the\n``while'' loop with as small a value of $k$ as possible. Note that,\nformally, in order for our results for $\\alpha$SMC to apply when\nthe Random rule is used, the underlying probability space must be\nappropriately extended, but the details are trivial so we omit them. 
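Under our reading of Table~\ref{tab:Choosing_I_k}, the Greedy ordering can be realized as follows (an illustrative sketch; the function name is ours):

```python
def greedy_I(bbW):
    """One way to order indices for the Greedy rule: consecutive pairs
    (I[0], I[1]), (I[2], I[3]), ... match the largest remaining weight
    with the smallest remaining weight, the second largest with the
    second smallest, and so on."""
    M = len(bbW)
    desc = sorted(range(M), key=lambda i: bbW[i], reverse=True)
    I = [None] * M
    I[0::2] = desc[:M // 2]        # odd positions: largest half, descending
    I[1::2] = desc[::-1][:M // 2]  # even positions: smallest half, ascending
    return I

I = greedy_I([10.0, 1.0, 7.0, 2.0])
```

With weights $(10,1,7,2)$ this pairs $10$ with $1$ and $7$ with $2$, which tends to balance the merged weights and hence raise the ESS with fewer merges.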
\n\n\\begin{algorithm}\n\\begin{raggedright}\n\\qquad{}at iteration $n$ and line $(\\star)$ of Algorithm~\\ref{alg:aSMC},\n\\par\\end{raggedright}\n\n\\begin{raggedright}\n\\qquad{}\\qquad{}for $i=1,\\ldots,N$,\n\\par\\end{raggedright}\n\n\\begin{raggedright}\n\\qquad{}\\qquad{}\\qquad{}set $B(0,i)=\\{i\\}$, $\\mathbb{W}_{0}^{i}=W_{n-1}^{i}g_{n-1}(\\zeta_{n-1}^{i})$,\n\\par\\end{raggedright}\n\n\\begin{raggedright}\n\\qquad{}\\qquad{}set $k=0$\n\\par\\end{raggedright}\n\n\\begin{raggedright}\n\\qquad{}\\qquad{}set $\\overline{\\mathbb{W}}_{0}=N^{-1}\\sum_{i}\\mathbb{W}_{0}^{i}$\n, $\\mathcal{E}=\\frac{\\left(\\overline{\\mathbb{W}}_{0}\\right)^{2}}{N^{-1}\\sum_{i}\\left(\\mathbb{W}_{0}^{i}\\right)^{2}}$, \n\\par\\end{raggedright}\n\n\\begin{raggedright}\n\\qquad{}\\qquad{}while $\\mathcal{E}<\\tau$\n\\par\\end{raggedright}\n\n\\begin{raggedright}\n\\qquad{}\\qquad{}\\qquad{}set $\\mathcal{I}_{k}$ according to the\nSimple, Random or Greedy scheme of Table~\\ref{tab:Choosing_I_k}.\n\\par\\end{raggedright}\n\n\\begin{raggedright}\n\\qquad{}\\qquad{}\\qquad{}for $i=1,\\ldots,N\/2^{k+1}$ \n\\par\\end{raggedright}\n\n\\begin{raggedright}\n\\qquad{}\\qquad{}\\qquad{}\\qquad{}set $B(k+1,i)=B(k,\\mathcal{I}_{k}(2i-1))\\cup B(k,\\mathcal{I}_{k}(2i))$\n\\par\\end{raggedright}\n\n\\begin{raggedright}\n\\qquad{}\\qquad{}\\qquad{}\\qquad{}set $\\mathbb{W}_{k+1}^{i}=\\mathbb{W}_{k}^{\\mathcal{I}_{k}(2i-1)}\/2+\\mathbb{W}_{k}^{\\mathcal{I}_{k}(2i)}\/2$\n\\par\\end{raggedright}\n\n\\begin{raggedright}\n\\qquad{}\\qquad{}\\qquad{}set $k=k+1$\n\\par\\end{raggedright}\n\n\\begin{raggedright}\n\\qquad{}\\qquad{}\\qquad{}set $\\mathcal{E}=\\frac{\\left(\\overline{\\mathbb{W}}_{0}\\right)^{2}}{N^{-1}2^{k}\\sum_{i\\in[N\/2^{k}]}\\left(\\mathbb{W}_{k}^{i}\\right)^{2}}$\n\\par\\end{raggedright}\n\n\\begin{raggedright}\n\\qquad{}\\qquad{}set $K_{n-1}=k$\n\\par\\end{raggedright}\n\n\\begin{raggedright}\n\\qquad{}\\qquad{}set $\\alpha_{n-1}^{ij}=\\begin{cases}\n1\/2^{K_{n-1}}, & \\text{if 
}i\\sim j\\text{ according to \\ensuremath{\\left\\{ B(K_{n-1},i)\\right\\} _{i\\in[N\/2^{K_{n-1}}]}}}\\\\\n0, & \\text{otherwise}.\n\\end{cases}$\n\\par\\end{raggedright}\n\n\\protect\\caption{\\label{alg:generic adaptation}Adaptive selection of $\\alpha_{n-1}$}\n\\end{algorithm}\n\n\nFollowing the termination of the ``while'' loop, Algorithm~\\ref{alg:generic adaptation}\noutputs an integer $K_{n-1}$ and a partition $\\left\\{ B(K_{n-1},i)\\right\\} _{i\\in[N\/2^{K_{n-1}}]}$\nof $[N]$ into $N\/2^{K_{n-1}}$ subsets, each of cardinality $2^{K_{n-1}}$;\nthis partition specifies $\\alpha_{n-1}$ as a B-matrix and $2^{K_{n-1}}$\nis the degree of the corresponding graph (we keep track of $K_{n-1}$\nfor purposes of monitoring algorithm performance in Section~\\ref{sub:Numerical-illustrations}).\nProposition~\\ref{prop:Upon-termination-of} is a formal statement\nof its operation and completes our complexity considerations. The\nproof is given in the appendix. It can be checked by an inductive\nargument similar to the proof of Lemma~\\ref{lem:ARPF_A2}, also in\nthe appendix, that when $\\alpha_{n}$ is chosen according to Algorithm~\\ref{alg:generic adaptation}\ncombined with any of the adaptation rules in Table~\\ref{tab:Choosing_I_k},\n\\textbf{(A2)} is satisfied. 
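Algorithm~\ref{alg:generic adaptation} with the Simple rule admits a compact transcription (a sketch, ours, assuming $N$ a power of two as above; it also exhibits the block-average identity (\ref{eq:W_k_explicit}) of Proposition~\ref{prop:Upon-termination-of}):

```python
import numpy as np

def adapt_simple(Wg, tau):
    """Pair consecutive blocks until the ESS criterion E >= tau holds.
    Wg[i] = W_{n-1}^i * g_{n-1}(zeta_{n-1}^i); tau in (0, 1]; len(Wg) a
    power of two.  Returns the number of merges K, the partition into
    blocks, and the merged block weights bbW_K^i."""
    N = len(Wg)
    blocks = [[i] for i in range(N)]
    bbW = np.asarray(Wg, dtype=float)   # bbW_k^i, initially Wg itself
    Wbar = bbW.mean()                   # bar W_0, fixed throughout
    k = 0
    ess = Wbar ** 2 / np.mean(bbW ** 2)
    while ess < tau:
        # Simple rule: merge blocks (2i-1, 2i) in their current order.
        blocks = [blocks[2 * i] + blocks[2 * i + 1]
                  for i in range(len(blocks) // 2)]
        bbW = 0.5 * (bbW[0::2] + bbW[1::2])
        k += 1
        ess = Wbar ** 2 / (2 ** k * np.sum(bbW ** 2) / N)
    return k, blocks, bbW

K, blocks, bbW = adapt_simple([1.0, 0, 0, 0, 0, 0, 0, 0], tau=0.5)
```

Fully degenerate weights force two merges here, while equal weights terminate immediately with $K=0$, i.e.\ the identity matrix and no interaction at all.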
\n\\begin{prop}\n\\label{prop:Upon-termination-of}The weights $\\left\\{ \\mathbb{W}_{k}^{i}\\right\\} _{i\\in[N\/2^{k}]}$\ncalculated in Algorithm~\\ref{alg:generic adaptation} obey the expression\n\\begin{equation}\n\\mathbb{W}_{k}^{i}=2^{-k}\\sum_{j\\in B(k,i)}W_{n-1}^{j}g_{n-1}(\\zeta_{n-1}^{j}).\\label{eq:W_k_explicit}\n\\end{equation}\nMoreover, $\\alpha_{n-1}$ delivered by Algorithm~\\ref{alg:generic adaptation}\nis a B-matrix and when this procedure is used at line $(\\star)$ of\nAlgorithm~\\ref{alg:aSMC}, the weights calculated in Algorithm~\\ref{alg:aSMC}\nare given, for any $i\\in[N\/2^{K_{n-1}}]$, by\n\\begin{equation}\nW_{n}^{j}=\\mathbb{W}_{K_{n-1}}^{i},\\quad\\quad\\text{for all \\quad}j\\in B(K_{n-1},i)\\label{eq:W_equals_bb_W}\n\\end{equation}\nand $\\mathcal{E}_{n}^{N}\\geq\\tau$ always. The overall worst-case\ncomplexity of Algorithm~\\ref{alg:aSMC} is, for the three adaptation\nrules in Table~\\ref{tab:Choosing_I_k}, Simple: $O(N)$, Random:\n$O(N)$, and Greedy: $O(N\\log_{2}N)$. 
\n\\end{prop}\n\\begin{table}[h]\n\\begin{tabular}[c]{>{\\raggedright}p{1.2cm}l}\n\\toprule \n\\addlinespace \n\\textbf{\\footnotesize{Simple}} & {\\footnotesize{set $\\mathcal{I}_{k}=(1,\\ldots,N\/2^{k})$}}\\tabularnewline\\addlinespace\n\\midrule \n\\addlinespace \n\\textbf{\\footnotesize{Random}} & \n{\\footnotesize{if $k=0$, set $\\mathcal{I}_{k}$ to a random permutation of $[N\/2^{k}]$, otherwise $\\mathcal{I}_{k}=(1,\\ldots,N\/2^{k})$}}\\tabularnewline\\addlinespace\n\\midrule \n\\addlinespace \n\\multirow{2}{1.2cm}{\\textbf{\\footnotesize{Greedy}}} &\n{\\footnotesize{set $\\mathcal{I}_{k}$ such that}}\\tabularnewline &\n{\\hspace{\\bigskipamount}\\footnotesize{$\\mathbb{W}_{k}^{\\mathcal{I}_{k}(1)}\\geq\\mathbb{W}_{k}^{\\mathcal{I}_{k}(3)}\\geq\\cdots\\geq\\mathbb{W}_{k}^{\\mathcal{I}_{k}(N\/2^{k}-1)}\\geq\\mathbb{W}_{k}^{\\mathcal{I}_{k}(N\/2^{k})}\\geq\\cdots\\geq\\mathbb{W}_{k}^{\\mathcal{I}_{k}(4)}\\geq\\mathbb{W}_{k}^{\\mathcal{I}_{k}(2)}$}}\\tabularnewline\\addlinespace \\bottomrule\\addlinespace\n\\end{tabular}\\protect\\caption{\\label{tab:Choosing_I_k}Adaptation rules for choosing $\\mathcal{I}_{k}$}\n\\end{table}\n\n\n\n\\subsection{Numerical illustrations\\label{sub:Numerical-illustrations}}\n\nWe consider a stochastic volatility HMM: \n\\begin{eqnarray*}\n & & X_{0}\\sim\\mathcal{N}(0,1),\\quad X_{n}=aX_{n-1}+\\sigma V_{n},\\\\\n & & Y_{n}=\\varepsilon W_{n}\\exp(X_{n}\/2).\n\\end{eqnarray*}\n where $\\left\\{ V_{n}\\right\\} _{n\\in\\mathbb{N}}$ and $\\left\\{ W_{n}\\right\\} _{n\\in\\mathbb{N}}$\nare sequences of mutually i.i.d.~$\\mathcal{N}(0,1)$ random variables,\n$\\left|a\\right|<1$, and $\\sigma,\\varepsilon>0$. To study the behaviour\nof the different adaptation rules in terms of effective sample size,\na sequence of $3\\cdot10^{4}$ observations was generated from the\nmodel with $a=0.9$, $\\sigma=0.25$, and $\\varepsilon=0.1$. 
This\nmodel obviously does not satisfy $\\mathbf{(C)}$, but $\\mathbf{(A1)}$\nis satisfied as long as the observation record does not include the\nvalue zero.\n\nThe ARPF and $\\alpha$SMC with the Simple, Random and Greedy adaptation\nprocedures specified in Section~\\ref{sub:Algorithms-with-adaptive}\nwere run on this data with $N=2^{10}$ and threshold $\\tau=0.6$.\nTo give some impression of ESS and interaction behaviour, Figure~\\ref{fig:ESS-and-interaction}\nshows snapshots of $N_{n}^{\\text{eff}}$ and $K_{n}$ versus $n$,\nfor $575\\leq n\\leq825$. The sample path of $N_{n}^{\\text{eff}}$\nfor ARPF displays a familiar saw-tooth pattern, jumping back up to\n$N=2^{10}$ when resampling, i.e.~when $K_{n}=10$. The Simple adaptation\nscheme keeps $N_{n}^{\\text{eff}}$ just above the threshold $\\tau N=0.6\\times2^{10}$,\nwhereas the Greedy strategy is often able to keep $N_{n}^{\\text{eff}}$\nwell above this threshold, with smaller values of $K_{n}$, i.e.~with\na lower degree of interaction. The results for the Random adaptation\nrule, not shown in this plot, were qualitatively similar to those\nof the Greedy algorithm but slightly closer to the Simple adaptation. \n\nIn order to examine the stationarity of the particle processes as\nwell as the statistical behavior of the degree of interaction over\ntime, Figure~\\ref{fig:histograms_and_E_vs_k} shows two histograms\nof $K_{n}$ for each of the adaptation rules. One histogram is based\non the sample of $K_{n}$ over an initial segment of the observation\nrecord, and the other over the remainder.\n\nFor every diagram\n$$\n\\begin{CD}\n@. \\sigma_3 \\\\\n@. @VV a V \\\\\n\\sigma_2 @> \\phi >> \\sigma_1\n\\end{CD}\n$$\nof a contraction $\\phi$ and a graph inclusion $a$, there is a\n\\emph{pullback diagram},\n$$\n\\begin{CD}\n\\sigma_4 @> a^*\\phi >> \\sigma_3 \\\\\n@V \\phi^* a VV @VV a V \\\\\n\\sigma_2 @> \\phi >> \\sigma_1\n\\end{CD}\n$$\nof a contraction $a^*\\phi$ and a graph inclusion $\\phi^*a$ such that\nthe maps on vertices, $a\\circ a^*\\phi, \\phi \\circ\n\\phi^*a:\\text{Vertex}(\\sigma_4) \\rightarrow \\text{Vertex}(\\sigma_1)$\nare equal. 
The diagram is unique up to\na unique isomorphism (both as\na contraction and a graph inclusion) of $\\sigma_4$. The category of\nmodular graphs is denoted $\\mathfrak{G}$.\n\n\\medskip\\noindent\nTo each prestable curve there is an associated modular graph, and to\neach modular graph $\\sigma$ there is an Artin stack $\\mathfrak{M}(\\sigma)$\nparametrizing prestable curves along with a contraction of the\nassociated modular graph to $\\sigma$. This defines a lax 2-functor\nfrom $\\mathfrak{G}$ to the 2-category of Artin stacks, covariant for\ncontractions, contravariant for graph inclusions, and such that for\nevery pullback diagram there is a 2-equivalence $\\mathfrak{M}(a)\\circ\n\\mathfrak{M}(\\phi) \\Rightarrow \\mathfrak{M}(a^*\\phi) \\circ \\mathfrak{M}(\\phi^*a)$.\n\n\\begin{defn} \\label{defn-decg}\n\\marpar{defn-decg}\nA \\emph{category of decorated modular graphs} is a category $\\mathfrak{H}$\nwith 2 sets of morphisms -- $\\mathfrak{H}$-contractions and $\\mathfrak{H}$-graph\ninclusions -- together with a functor $p:\\mathfrak{H} \\rightarrow \\mathfrak{G}$\ncompatible with both contractions and graph inclusions satisfying the\nfollowing axioms,\n\\begin{enumerate}\n\\item[(i)]\nfor every $\\mathfrak{H}$-contraction $\\phi:\\tau_2 \\rightarrow \\tau_1$ and\n$\\mathfrak{H}$-graph inclusion $a:\\tau_3 \\rightarrow \\tau_1$, there exists\nan object $\\tau_4$, an $\\mathfrak{H}$-contraction $a^*\\phi:\\tau_4\n\\rightarrow \\tau_3$ and an $\\mathfrak{H}$-graph inclusion $\\phi^*a:\\tau_4\n\\rightarrow \\tau_2$ mapping under $p$ to a pullback diagram, moreover\nthis is unique up to unique isomorphism of $\\tau_4$, and\n\\item[(ii)]\nfor every object $\\tau$ in $\\mathfrak{H}$ and every contraction $\\phi:p(\\tau)\n\\rightarrow \\sigma$ in $\\mathfrak{G}$, there is an $\\mathfrak{H}$-contraction $\\psi$\nsuch that $p(\\psi)=\\phi$, and $\\psi$ is unique up to unique isomorphism.\n\\end{enumerate}\n\\end{defn}\n\n\\begin{constr} 
\\label{constr-semigp}\n\\marpar{constr-semigp}\nLet $A$ be an Abelian semigroup. Define $\\mathfrak{G}_A$ to be the category\nwhose objects are pairs $(\\sigma,\\alpha)$ of a modular graph $\\sigma$\ntogether with a function $\\alpha:\\text{Vertex}(\\sigma) \\rightarrow A$,\nwhere $\\mathfrak{G}_A$-contractions, $\\phi:(\\sigma_1,\\alpha_1) \\rightarrow\n(\\sigma_2,\\alpha_2)$, are contractions $\\phi:\\sigma_1 \\rightarrow\n\\sigma_2$ such that $\\alpha_2(v) = \\sum_{w\\in \\phi^{-1}(v)}\n\\alpha_1(w)$ for every $v\\in \\text{Vertex}(\\sigma_2)$, and where\n$\\mathfrak{G}_A$-graph inclusions, $a:(\\sigma_1,\\alpha_1) \\rightarrow\n(\\sigma_2,\\alpha_2)$, are graph inclusions $a:\\sigma_1 \\rightarrow\n\\sigma_2$ such that $\\alpha_2(a(v)) = \\alpha_1(v)$ for every vertex\n$v\\in \\text{Vertex}(\\sigma_1)$. Define $p:\\mathfrak{G}_A \\rightarrow\n\\mathfrak{G}$ to be the obvious forgetful functor. This is a category of\ndecorated modular graphs; the only one used in the rest of this paper.\n\\end{constr}\n\n\\medskip\\noindent\nThe aim of this section is to construct for every object $\\tau$ of\n$\\mathfrak{H}$ an Artin stack $\\mathfrak{M}_{\\mathfrak{H}}(\\tau)$ parametrizing\nprestable curves along with a lifting of the associated modular graph\nto an object of $\\mathfrak{H}$ contracting to $\\tau$. The association $\\tau\n\\mapsto \\mathfrak{M}_{\\mathfrak{H}}(\\tau)$ should define a lax 2-functor from\n$\\mathfrak{H}$ to the 2-category of Artin stacks, covariant for\n$\\mathfrak{H}$-contractions, contravariant for $\\mathfrak{H}$-graph inclusions,\nand such that for every pullback diagram there is an associated\n2-equivalence.\n\n\\begin{defn} \\label{defn-sat}\n\\marpar{defn-sat}\nLet $\\mathfrak{H}$ be a category of decorated modular graphs, considered as\na usual category whose morphisms are $\\mathfrak{H}$-contractions. 
A\nsubcategory $\\mathfrak{H}'$ is \\emph{saturated} if $\\mathfrak{H}'$ contains every\n$\\mathfrak{H}$-contraction whose domain is in $\\mathfrak{H}'$. A subcategory\n$\\mathfrak{H}'$ is \\emph{$p$-embedding} if the functor of categories with\ncontractions as morphisms, $p:\\mathfrak{H}' \\rightarrow \\mathfrak{G}$, is an\nequivalence to a (necessarily full) subcategory of $\\mathfrak{G}$.\n\n\\noindent\nLet $\\tau$ be an object of $\\mathfrak{H}$ and denote by $\\mathfrak{H}_\\tau$ the\ncategory whose objects are contractions $\\phi:\\tau_\\phi \\rightarrow\n\\tau$ and whose morphisms are commutative diagrams of contractions. A\nsubcategory $\\mathfrak{H}'$ of $\\mathfrak{H}_\\tau$ is \\emph{saturated} if\n$\\mathfrak{H}'$ contains every morphism in $\\mathfrak{H}_\\tau$ whose domain is in\n$\\mathfrak{H}'$. A subcategory $\\mathfrak{H}'$ of $\\mathfrak{H}_\\tau$ is\n\\emph{$p$-embedding} if the functor $p:\\mathfrak{H}' \\rightarrow\n\\mathfrak{G}_{p(\\tau)}$ is an equivalence to a (necessarily full)\nsubcategory of $\\mathfrak{G}_{p(\\tau)}$. Denote by\n$\\text{Sat}(\\mathfrak{H}_\\tau)$ the set of saturated, $p$-embedding\nsubcategories of $\\mathfrak{H}_\\tau$ directed by reverse inclusion of\nsubcategories.\n\\end{defn}\n\n\\medskip\\noindent\nLet $\\tau$ be an object of $\\mathfrak{H}$ and let $\\mathfrak{H}'$ be a saturated,\n$p$-embedding subcategory of $\\mathfrak{H}_\\tau$. Define\n$U_{\\mathfrak{H}'}(\\tau)$ to be the open substack of $\\mathfrak{M}(p(\\tau))$ whose\ncomplement is the union of the images of all 1-morphisms\n$\\mathfrak{M}(\\phi):\\mathfrak{M}(\\sigma) \\rightarrow \\mathfrak{M}(p(\\tau))$ such that\n$\\phi$ is not in the image of $p:\\mathfrak{H}' \\rightarrow\n\\mathfrak{G}_{p(\\tau)}$. 
It is straightforward that $U_{\\mathfrak{H}'}(\\tau)$ is\nopen: the intersection with every quasi-compact open substack of\n$\\mathfrak{M}(p(\\tau))$ is open, and $U_{\\mathfrak{H}'}(\\tau)$ is the union of\nthese open sets.\n\n\\medskip\\noindent\nLet $\\mathfrak{H}' \\subset \\mathfrak{H}''$ be saturated, $p$-embedding\nsubcategories of $\\mathfrak{H}_\\tau$. Then $U_{\\mathfrak{H}'}(\\tau) \\subset\nU_{\\mathfrak{H}''}(\\tau)$ as subsets of $\\mathfrak{M}(p(\\tau))$. Therefore\n$\\mathfrak{H}' \\mapsto U_{\\mathfrak{H}'}(\\tau)$ is a directed system of open\nimmersions of Artin stacks indexed by $\\text{Sat}(\\mathfrak{H}_\\tau)$.\nBecause this is a directed system of open immersions, the direct limit\nis an Artin stack.\n\n\\begin{notat} \\label{notat-MHtau}\n\\marpar{notat-MHtau}\nDenote by $\\mathfrak{M}_{\\mathfrak{H}}(\\tau)$ the direct limit of the directed\nsystem $\\mathfrak{H}' \\mapsto U_{\\mathfrak{H}'}(\\tau)$. Denote by\n$\\mathfrak{M}_p(\\tau): \\mathfrak{M}_{\\mathfrak{H}}(\\tau) \\rightarrow \\mathfrak{M}(p(\\tau))$\nthe natural 1-morphism. If $\\mathfrak{H}=\\mathfrak{G}_A$, also denote\n$\\mathfrak{M}_{\\mathfrak{H}}(\\tau)$ by $\\mathfrak{M}_A(\\tau)$.\n\\end{notat}\n\n\\medskip\\noindent\nThe ``points'' of $\\mathfrak{M}_{\\mathfrak{H}}(\\tau)$ have a simple description.\n\n\\begin{defn} \\label{defn-strict}\n\\marpar{defn-strict}\nFor every modular graph $\\sigma$, define\n$\\mathfrak{M}^{\\text{strict}}(\\sigma)$ to be the open substack of\n$\\mathfrak{M}(\\sigma)$ that is the complement of the images of all\n$\\mathfrak{M}(\\phi)$ where $\\phi:\\sigma' \\rightarrow \\sigma$ is a\nnon-invertible contraction.\n\\end{defn}\n\n\\begin{lem} \\label{lem-pts}\n\\marpar{lem-pts}\nLet $\\tau$ be an object of $\\mathfrak{H}$ and let $\\phi:\\sigma \\rightarrow\np(\\tau)$ be a contraction. 
The 2-fibered product\n$\\mathfrak{M}^{\\text{strict}}(\\sigma) \\times_{\\mathfrak{M}(p(\\tau))}\n\\mathfrak{M}_{\\mathfrak{H}}(\\tau)$ is equivalent to a disjoint union of copies of\n$\\mathfrak{M}^{\\text{strict}}(\\sigma)$ indexed by equivalence classes of\ncontractions $\\psi$ in $\\mathfrak{H}_\\tau$ such that $p(\\psi)=\\phi$.\n\\end{lem}\n\n\\begin{proof}\nLet $(\\eta,\\zeta,\\theta)$ be an object of the 2-fibered product, i.e.,\na triple of an object of $\\mathfrak{M}^{\\text{strict}}(\\sigma)$, an object\nof $\\mathfrak{M}_{\\mathfrak{H}}(\\tau)$ and an equivalence\n$\\theta:\\mathfrak{M}(\\phi)(\\eta) \\rightarrow \\mathfrak{M}_{p}(\\tau)(\\zeta)$.\nThere is a saturated, $p$-embedding subcategory $\\mathfrak{H}'$ such that\n$\\mathfrak{M}_p(\\tau)(\\zeta)$ is in $U_{\\mathfrak{H}'}(\\tau)$. Because this is in\nthe image of $\\mathfrak{M}(\\phi)$, there is a contraction $\\psi:\\tau_\\psi\n\\rightarrow \\tau$ in $\\mathfrak{H}'$ such that $\\phi=p(\\psi)$. Because\n$\\mathfrak{H}'$ is $p$-embedding, $\\psi$ is unique up to unique isomorphism.\nBy the nature of the direct limit, $\\psi$ is independent of the choice\nof $\\mathfrak{H}'$.\n\n\\medskip\\noindent\nConversely, given an object $\\eta$ of $\\mathfrak{M}^{\\text{strict}}(\\sigma)$\nand a contraction $\\psi:\\tau_\\psi \\rightarrow \\tau$ in $\\mathfrak{H}_\\tau$\nsuch that $p(\\psi) = \\phi$, define $\\mathfrak{H}'$ to be the subcategory of\n$\\mathfrak{H}_\\tau$ consisting of all contractions through which $\\psi$\nfactors. By Definition~\\ref{defn-decg}(ii), this is a saturated,\n$p$-embedding subcategory. And $\\mathfrak{M}(\\phi)(\\eta)$ is in\n$U_{\\mathfrak{H}'}(\\tau)$. The image in the direct limit is an object\n$\\zeta$, and there is a canonical isomorphism $\\theta:\n\\mathfrak{M}(\\phi)(\\eta) \\rightarrow \\mathfrak{M}_p(\\tau)(\\zeta)$. 
Thus\n$(\\eta,\\zeta,\\theta)$ is an object of the 2-fibered product.\n\n\\medskip\\noindent\nIt is left to the reader to verify these operations give an\nequivalence of stacks.\n\\end{proof}\n\n\\medskip\\noindent\n\\textbf{Note:} \nThe functorialities are only sketched. Given an $\\mathfrak{H}$-contraction\n$\\phi:\\tau_1 \\rightarrow \\tau_2$, Definition~\\ref{defn-decg}(ii) gives\na map of directed sets $\\text{Sat}(\\phi):\\text{Sat}(\\mathfrak{H}_{\\tau_1})\n\\rightarrow \\text{Sat}(\\mathfrak{H}_{\\tau_2})$, and composition with\n$\\mathfrak{M}(p(\\phi))$ gives a compatible family of 1-morphisms of directed\nsystems. This defines the 1-morphism $\\mathfrak{M}_{\\mathfrak{H}}(\\phi)$. Given\nan $\\mathfrak{H}$-graph inclusion $a:\\tau_1 \\rightarrow \\tau_2$, existence\nof pullback diagrams, Definition~\\ref{defn-decg}(i), gives a map of\ndirected sets $\\text{Sat}(a):\\text{Sat}(\\mathfrak{H}_{\\tau_2}) \\rightarrow\n\\text{Sat}(\\mathfrak{H}_{\\tau_1})$, and composition with $\\mathfrak{M}(p(a))$\ngives a compatible family of 1-morphisms of directed systems. This\ndefines the 1-morphism $\\mathfrak{M}_{\\mathfrak{H}}(a)$. The rest of the\ncompatibilities are straightforward.\n\n\\section{The universal relative Picard for curves of compact type}\n\\label{sec-cpct}\n\\marpar{sec-cpct}\n\n\\noindent\nThe results in this section are well-known, and easily follow from\n~\\cite{Raynaud} and ~\\cite{Ner}. It is useful in the rest of the\npaper to gather the results here.\n\n\\begin{notat} \\label{notat-H}\n\\marpar{notat-H}\nDenote by $\\mathfrak{H} \\subset \\mathfrak{G}_\\mathbb{Z}$ the full subcategory of objects\n$(\\sigma,\\alpha)$ such that $\\sigma$ is a forest of trees, i.e., the\ngraph has no cycles. 
For each triple of integers $g,n \\geq 0$ and\n$e$, denote by $\\tau_{g,n}(e)$ the object of $\\mathfrak{H}$ consisting of a\ntree $\\sigma_{g,n}$ with a single vertex of genus $g$ and $n$ flags,\nsuch that $\\alpha(v)=e$.\n\\end{notat}\n\n\\medskip\\noindent\nDenote by $\\pi:\\mathcal{C} \\rightarrow \\mathfrak{M}_{\\mathfrak{H}}(\\tau_{g,0}(0))$ the\npullback from $\\mathfrak{M}(\\sigma_{g,0})$ of the universal curve. For each\n4-tuple $A=((g',g''),(e',e''))$ of integers $g',g''\\geq 0$,\n$g'+g''=g$, and integers $e',e''$, $e'+e''=0$, denote by $\\tau_A$ the\ntree with vertices $v',v''$ such that $g(v')=g', \\alpha(v')=e'$ and\n$g(v'')=g'', \\alpha(v'')=e''$. Denote by $\\phi:\\tau_A \\rightarrow\n\\tau_{g,0}(0)$ the canonical contraction. The 2-fibered product\n$\\mathfrak{M}_{\\mathfrak{H}}(\\tau_A) \\times_{\\mathfrak{M}_{\\mathfrak{H}}(\\tau_{g,0}(0))}\n\\mathcal{C}$ has 2 irreducible components $\\mathcal{C}', \\mathcal{C}''$ corresponding\nto the vertices $v', v''$. There is a unique effective Cartier\ndivisor $\\mathcal{D} \\subset \\mathcal{C}$ such that for every\n$A=((g',g''),(e',e''))$,\n$$\n\\mathfrak{M}^{\\text{strict}}_{\\mathfrak{H}}(\\tau_A)\n\\times_{\\mathfrak{M}_{\\mathfrak{H}}(\\tau_{g,0}(0))} \\mathcal{D}\n$$ \nis empty if $e'=e''=0$ and is $e'\\mathcal{C}''$ if\n$e'>0$. \n\n\\medskip\\noindent\nLet $U\\subset \\mathfrak{M}(\\sigma_{g,0})$ denote the open substack that is\nthe image of $\\mathfrak{M}_p(\\tau_{g,0}(0))$, i.e., $U$ is the Artin stack\nof $n$-pointed, genus $g$ curves of \\emph{compact type}. The\n1-morphism $\\pi:\\mathcal{C}_U \\rightarrow U$ is cohomologically flat, so by\n~\\cite[Thm. 7.3]{Artin} the relative Picard functor of the universal\ncurve over $U$ is a 1-morphism $\\text{pr}:\\text{Pic}_{\\mathcal{C}_U\/U}\n\\rightarrow U$ relatively representable by \\emph{non-separated}\nalgebraic spaces. 
The closure $E_{\\mathcal{C}_U\/U}$ of the identity\nsection gives a closed substack of $\\text{Pic}_{\\mathcal{C}_U\/U}$ which is\nrelatively representable over $U$ by non-separated group algebraic\nspaces, ~\\cite[Prop. 5.2]{Raynaud}. The quotient $Q_{\\mathcal{C}_U\/U}$ of\n$\\text{Pic}_{\\mathcal{C}_U\/U}$ by $E_{\\mathcal{C}_U\/U}$ is a stack that is\nrelatively representable over $U$ by a countable disjoint union of\nsmooth, proper group algebraic spaces, ~\\cite[Thm. 4.1.1]{Raynaud}\n(properness requires a bit more, see ~\\cite[Ex. 8, p. 246]{Ner}). The\nnext lemma describes $E_{\\mathcal{C}_U\/U}$.\n\n\\medskip\\noindent\nThe invertible sheaf $\\mathcal O_{\\mathcal{C}}(\\mathcal{D})$ defines a\n1-morphism $f:\\mathfrak{M}_{\\mathfrak{H}}(\\tau_{g,0}(0)) \\rightarrow\n\\text{Pic}_{\\mathcal{C}_U\/U}$, and there is a natural 2-equivalence of\n$\\text{pr}\\circ f$ with $\\mathfrak{M}_{\\mathfrak{H}}(\\tau_{g,0}(0))$.\n\n\\begin{lem} \\label{lem-subgp} \n\\marpar{lem-subgp}\nThe 1-morphism $f$ defines an equivalence to $E_{\\mathcal{C}_U\/U}$, the\nclosure of the \nidentity section of $\\text{Pic}_{\\mathcal{C}_U\/U}$. \nDenoting by\n$Q^0_{\\mathcal{C}_U\/U}$ the identity component of the quotient and by\n$\\text{Pic}^0_{\\mathcal{C}_U\/U}$ the preimage, there are 1-morphisms\n$$\n\\text{Pic}^0_{\\mathcal{C}_U\/U} \\rightleftarrows\n\\mathfrak{M}_{\\mathfrak{H}}(\\tau_{g,0}(0)) \\times\nQ^0_{\\mathcal{C}_U\/U}\n$$ \ngiving an equivalence of stacks\nover $U$, and splitting the extension of group algebraic spaces over\n$U$.\n\\end{lem} \n\n\\begin{proof}\nIt is easy to see $f$ is an equivalence to its image which is a\nsubgroup of $E_{\\mathcal{C}_U\/U}$. To prove the image of $f$ is all of\n$E_{\\mathcal{C}_U\/U}$, by the valuative criterion of closedness it suffices\nto check equality of pullbacks for every map of a DVR to\n$\\mathfrak{M}_{g,0}$ sending the generic point to\n$\\mathfrak{M}_{g,0}^{\\text{strict}}$. By ~\\cite[Prop. 
6.1.3]{Raynaud}, the\nsections of $E_{\\mathcal{C}_U\/U}$ over a DVR are just the quotient of\nthe free Abelian group on the irreducible components of the closed\nfiber by the subgroup generated by the entire fiber. By\nLemma~\\ref{lem-pts} the same is true for the pullback of\n$\\mathfrak{M}_{\\mathfrak{H}}(\\tau_{g,0}(0))$, and it is clear that the map between them\nis an isomorphism.\n\n\\medskip\\noindent\nThe splitting of $\\text{Pic}^0_{\\mathcal{C}_U\/U} \\rightarrow\nQ^0_{\\mathcal{C}_U\/U}$ is given by the subfunctor of\n$\\text{Pic}^0_{\\mathcal{C}_U\/U}$ of invertible sheaves whose degree on\nevery irreducible component of every fiber is $0$, denoted by $P^0$ in\n\\cite{Raynaud}.\n\\end{proof}\n\n\\medskip\\noindent\nIn the special case that $g=0$, more is true. First of all,\n$\\mathfrak{M}_{\\mathbb{Z}}(\\tau) = \\mathfrak{M}_{\\mathfrak{H}}(\\tau)$ for every $\\tau$ of genus\n$0$. Secondly, $U=\\mathfrak{M}_{0,n}$. \n\n\\begin{cor}[Raynaud, Prop. 9.3.1, ~\\cite{Raynaud}] \\label{cor-subgp}\n\\marpar{cor-subgp}\nFor $g=0$, $\\text{Pic}^0_{\\mathcal{C}\/\\mathfrak{M}_{0,0}}$ is equivalent\nto $\\mathfrak{M}_{\\mathbb{Z}}(\\tau_{0,0}(0))$.\n\\end{cor}\n\n\\medskip\\noindent\nMoreover, the union $\\cup_{e\\in \\mathbb{Z}} \\mathfrak{M}_{\\mathbb{Z}}(\\tau_{0,0}(e))$ is a\ngroup algebraic space over $\\mathfrak{M}_{0,0}$ containing\n$\\mathfrak{M}_{\\mathbb{Z}}(\\tau_{0,0}(0))$ as a subgroup algebraic space over\n$\\mathfrak{M}_{0,0}$. Essentially, given a contraction\n$\\phi:\\sigma\\rightarrow \\sigma_{0,0}$ and given liftings\n$\\psi_i:(\\sigma,\\alpha_i) \\rightarrow \\tau_{0,0}(e_i)$ for $i=1,2$,\naddition is determined by $\\psi_1+\\psi_2=\\psi:(\\sigma,\n\\alpha_1+\\alpha_2) \\rightarrow \\tau_{0,0}(e_1+e_2)$. The \\emph{total\ndegree map} gives an isomorphism of $\\text{Pic}\/\\text{Pic}^0$ with\n$\\mathbb{Z}$. 
The following result is easy.\n\n\\begin{lem} \\label{lem-g0}\n\\marpar{lem-g0}\nFor $g=0$, for every $e$ there is an equivalence of stacks over\n$\\mathfrak{M}_{0,0}$, $\\mathfrak{M}_{\\mathbb{Z}}(\\tau_{0,0}(e)) \\rightleftarrows\n\\text{Pic}^e_{\\mathcal{C}\/\\mathfrak{M}_{0,0}}$, such that the equivalence\n$\\cup_{e\\in \\mathbb{Z}} \\mathfrak{M}_{\\mathbb{Z}}(\\tau_{0,0}(e)) \\rightleftarrows\n\\text{Pic}_{\\mathcal{C}\/\\mathfrak{M}_{0,0}}$ is an equivalence of group algebraic\nspaces over $\\mathfrak{M}_{0,0}$ and is compatible with the equivalence in\nCorollary~\\ref{cor-subgp}.\n\\end{lem}\n\n\\subsection{Notation for boundary divisor classes}\n\\label{subsec-bound}\n\\marpar{subsec-bound}\n\n\\noindent\nLet $r\\geq 0$ be an integer, and let $e_1,\\dots,e_r$ be integers.\nDenote by $\\mathfrak{M}_{\\mathbb{Z}}(\\tau_{0,0}(e_1,\\dots,e_r))$ the 2-fibered\nproduct,\n$$\n\\mathfrak{M}_{\\mathbb{Z}}(\\tau_{0,0}(e_1)) \\times_{\\mathfrak{M}_{0,0}}\n\\mathfrak{M}_{\\mathbb{Z}}(\\tau_{0,0}(e_2)) \\times_{\\mathfrak{M}_{0,0}} \\dots\n\\times_{\\mathfrak{M}_{0,0}} \\mathfrak{M}_{\\mathbb{Z}}(\\tau_{0,0}(e_r)).\n$$\nFor each $i=1,\\dots,r$, let $(e'_i,e''_i)$ be a pair of integers such\nthat $e'_i + e''_i = e_i$. Let $\\sigma$ be the modular graph with two\nvertices $v',v''$ with $g(v')=g(v'')=0$, one edge connecting $v',\nv''$, and no tails. Let $\\phi:\\sigma \\rightarrow \\sigma_{0,0}$ be the\ncanonical contraction. Denote by\n$\\zeta:\\mathfrak{M}^{\\text{strict}}(\\sigma) \\rightarrow\n\\mathfrak{M}_{\\mathbb{Z}}(\\tau_{0,0}(e_1,\\dots,e_r))$ the 1-morphism whose\nprojection to the $i^{\\text{th}}$ factor is determined via\nLemma~\\ref{lem-pts} by the lifting $\\psi_i:(\\sigma,\\alpha_i)\n\\rightarrow \\tau_{0,0}(e_i)$ of $\\phi$ such that $\\alpha_i(v') = e'_i,\n\\alpha_i(v'')=e_i''$. 
Define $\\Delta_{(e'_1,e''_1,\\dots,e'_r,e''_r)}$\nto be the effective Cartier divisor on\n$\\mathfrak{M}_{\\mathbb{Z}}(\\tau_{0,0}(e_1,\\dots,e_r))$ that is the closure of the\nimage of $\\zeta$.\n\n\\medskip\\noindent\nLet $\\pi:C\\rightarrow M$ be a flat 1-morphism, relatively representable\nby proper algebraic spaces whose geometric fibers are connected,\nat-worst-nodal curves of arithmetic genus $0$. Let $D_1,\\dots,D_r$ be\nCartier divisor classes on $C$ of relative degrees $e_1,\\dots,e_r$.\nLet $f(e'_1,e''_1,\\dots,e'_r,e''_r)$ be a function on $\\mathbb{Z}^{2r}$ with\nvalues in $\\mathbb{Z}$, resp. $\\mathbb{Q}$, etc. Denote by $\\xi: M \\rightarrow\n\\mathfrak{M}_{\\mathbb{Z}}(\\tau_{0,0}(e_1,\\dots,e_r))$ the 1-morphism whose\nprojection to the $i^\\text{th}$ factor,\n$\\text{Pic}^{e_i}_{\\mathcal{C}\/\\mathfrak{M}_{0,0}}$, is determined by\n$\\mathcal O_C(D_i)$.\n\n\\begin{notat} \\label{notat-boundary}\n\\marpar{notat-boundary}\nDenote by,\n$$\n\\sum_{(\\beta',\\beta'')} f(\\langle D_1,\\beta' \\rangle, \\langle\nD_1, \\beta'' \\rangle, \\dots, \\langle D_r, \\beta' \\rangle, \\langle D_r,\n\\beta'' \\rangle) \\Delta_{\\beta',\\beta''}\n$$\nthe Cartier divisor class, resp. $\\mathbb{Q}$-Cartier divisor class, etc., \nthat is the pullback by $\\xi$ of the Cartier divisor class, etc.,\n$$\n\\sum_{(e'_1,e''_1,\\dots,e'_r,e''_r)} f(e'_1,e''_1,\\dots,e'_r,e''_r)\n\\Delta_{(e'_1,e''_1,\\dots,e'_r,e''_r)},\n$$\nthe summation over all sequences $(e'_1,e''_1,\\dots,e'_r,e''_r)$ with\n$e'_i + e''_i = e_i$. 
\nIf $f(e'_1,e''_1,\\dots,e'_r,e''_r) = f(e''_1,e'_1,\\dots,e''_r,e'_r)$,\ndenote by,\n$$\n{\\sum_{(\\beta',\\beta'')}}^\\prime f(\\langle D_1,\\beta' \\rangle, \\langle\nD_1, \\beta'' \\rangle, \\dots, \\langle D_r, \\beta' \\rangle, \\langle D_r,\n\\beta'' \\rangle) \\Delta_{\\beta',\\beta''}\n$$\nthe pullback by $\\xi$ of,\n$$\n{\\sum_{(e'_1,e''_1,\\dots,e'_r,e''_r)}}^\\prime\nf(e'_1,e''_1,\\dots,e'_r,e''_r)\n\\Delta_{(e'_1,e''_1,\\dots,e'_r,e''_r)},\n$$\nwhere the summation is over equivalence classes of sequences\n$(e'_1,e''_1,\\dots,e'_r,e''_r)$ such that $e'_i+e''_i=e_i$ \nunder the equivalence relation\n$(e'_1,e''_1,\\dots,e'_r,e''_r) \\sim (e''_1,e'_1,\\dots,e''_r,e'_r)$. \n\\end{notat}\n\n\\begin{ex} \\label{ex-boundary}\n\\marpar{ex-boundary}\nLet $n\\geq 0$ be an integer and let $(A,B)$ be a partition of\n$\\{1,\\dots,n\\}$. For the universal family over $\\mathfrak{M}_{0,n}$, denote\nby $s_1,\\dots,s_n$ the universal sections. Then,\n$$\n\\sum_{\\beta',\\beta''} \\prod_{i\\in A} \\langle s_i,\\beta' \\rangle\n\\cdot \\prod_{j\\in B} \\langle s_j, \\beta'' \\rangle\n\\Delta_{\\beta',\\beta''} \n$$\nis the Cartier divisor class of the boundary divisor $\\Delta_{(A,B)}$. \n\\end{ex}\n\n\n\\section{The functor $Q_\\pi$} \\label{sec-Qpi}\n\\marpar{sec-Qpi}\n\n\\noindent\nLet $M$ be an Artin stack, and let $\\pi:C \\rightarrow M$ be a flat\n1-morphism, relatively representable by proper algebraic spaces whose\ngeometric fibers are connected, at-worst-nodal curves of arithmetic\ngenus $0$. There exists an invertible dualizing sheaf $\\omega_\\pi$,\nand the relative trace map, $\\text{Tr}_\\pi: R\\pi_* \\omega_\\pi[1]\n\\rightarrow \\mathcal O_M$ is a quasi-isomorphism. In particular,\n$\\text{Ext}^1_{\\mathcal O_C}(\\mathcal O_C,\\omega_\\pi)$ is canonically isomorphic to\n$H^0(M,\\mathcal O_M)$. 
Therefore $1\\in H^0(M,\\mathcal O_M)$ determines an extension\nclass, i.e., a short exact sequence,\n$$\n\\begin{CD}\n0 @>>> \\omega_\\pi @>>> E_\\pi @>>> \\mathcal O_C @>>> 0.\n\\end{CD}\n$$\nThe morphism $\\pi$ is perfect, so for every complex $F^\\bullet$ perfect\nof bounded amplitude on $C$, $R\\pi_* F^\\bullet$ is a perfect complex of\nbounded amplitude on $M$. By ~\\cite{detdiv}, the\ndeterminant of a perfect complex of bounded amplitude is defined.\n\n\\begin{defn} \\label{defn-E}\n\\marpar{defn-E}\nFor every complex $F^\\bullet$ perfect of bounded amplitude on $C$, \ndefine $Q_\\pi(F^\\bullet) = \\text{det}(R\\pi_* E_\\pi\\otimes F^\\bullet)$.\n\\end{defn}\n\n\\medskip\\noindent\nThere is another interpretation of $Q_\\pi(F^\\bullet)$.\n\n\\begin{lem} \\label{lem-interp}\n\\marpar{lem-interp}\nFor every complex $F^\\bullet$ perfect of bounded amplitude on $C$, \n$$\nQ_\\pi(F^\\bullet) \\cong\n\\text{det}(R\\pi_*(F^\\bullet)) \\otimes \n\\text{det}(R\\pi_*((F^\\bullet)^\\vee))^\\vee.\n$$\n\\end{lem}\n\n\\begin{proof}\nBy the short exact sequence for $E_\\pi$, $Q_\\pi(F^\\bullet) \\cong\n\\text{det}(R\\pi_*(F^\\bullet)) \\otimes \\text{det}(R\\pi_*(\\omega_\\pi\n\\otimes F^\\bullet))$. 
The lemma follows by duality.\n\\end{proof}\n\n\\medskip\\noindent\nIt is straightforward to compute $Q_\\pi(F^\\bullet)$ whenever there exist\ncycle class groups for $C$ and $M$ such that Chern classes are defined\nfor all perfect complexes of bounded amplitude and such that\nGrothendieck-Riemann-Roch holds for $\\pi$.\n\n\\begin{lem} \\label{lem-GRR}\n\\marpar{lem-GRR}\nIf there exist cycle class groups for $C$ and $M$ such that Chern\nclasses exist for all perfect complexes of bounded amplitude and such\nthat Grothendieck-Riemann-Roch holds for $\\pi$, then modulo $2$-power\ntorsion, the first Chern class of $Q_\\pi(F^\\bullet)$ is\n$\\pi_*(C_1(F^\\bullet)^2 - 2C_2(F^\\bullet))$.\n\\end{lem}\n\n\\begin{proof}\nDenote the Todd class of $\\pi$ by $\\tau = 1 + \\tau_1 + \\tau_2 +\n\\dots$. Of course $\\tau_1 = -\\frac{1}{2}C_1(\\omega_\\pi)$. By GRR,\n$\\text{ch}(R\\pi_* \\mathcal O_C) = \\pi_*(\\tau)$. The canonical map $\\mathcal O_M\n\\rightarrow R\\pi_*\\mathcal O_C$ is a quasi-isomorphism. Therefore\n$\\pi_*(\\tau_2)=0$, modulo $2$-power torsion. By additivity of the\nChern character, $\\text{ch}(E_\\pi) = 2 + C_1(\\omega_\\pi) +\n\\frac{1}{2}C_1(\\omega_\\pi)^2 + \\dots$. Therefore,\n$$\n\\text{ch}(E_\\pi)\\cdot \\tau = 2 + 2\\tau_2 + \\dots\n$$\nSo for any complex $F^\\bullet$ perfect of bounded amplitude,\n$$\n\\begin{array}{c}\n\\text{ch}(E_\\pi\\otimes F^\\bullet)\\cdot \\tau = \\text{ch}(F^\\bullet) \\cdot\n\\text{ch}(E_\\pi) \\cdot \\tau = \\\\\n(\\text{rk}(F^\\bullet) + C_1(F^\\bullet) +\n\\frac{1}{2}(C_1(F^\\bullet)^2 - 2C_2(F^\\bullet)) +\\dots)(2 + 2\\tau_2+\n\\dots). \n\\end{array}\n$$\nApplying $\\pi_*$ gives,\n$$\n2\\pi_*(C_1(F^\\bullet)) + \\pi_*(C_1(F^\\bullet)^2 - 2C_2(F^\\bullet)) + \\dots\n$$\nTherefore the first Chern class of $\\text{det}(R\\pi_*(E_\\pi\\otimes\nF^\\bullet))$ is $\\pi_*(C_1(F^\\bullet)^2 - 2C_2(F^\\bullet))$, modulo\n$2$-power torsion.\n\\end{proof}\n\n\\begin{rmk} \\label{rmk-Q}\nThe point is this. 
In every reasonable case, $Q_\\pi$ is just\n$\\pi_*(C_1^2-2C_2)$. Moreover $Q_\\pi$ is compatible with base-change\nby arbitrary 1-morphisms. This allows one to reduce certain computations\nto the Artin stack of all genus $0$ curves. As far as we are aware,\nno one has written a definition of cycle class groups for all locally\nfinitely presented Artin stacks that has Chern classes for all perfect\ncomplexes of bounded amplitude, has pushforward maps and\nGrothendieck-Riemann-Roch for perfect 1-morphisms representable by\nproper algebraic spaces, and has pullback maps by arbitrary\n1-morphisms for cycles coming from Chern classes. Doubtless such a\ntheory exists; whatever it is, $Q_\\pi = \\pi_*(C_1^2-2C_2)$.\n\\end{rmk}\n\n\\medskip\\noindent\nLet the following diagram be 2-Cartesian,\n$$\n\\begin{CD}\nC' @> \\zeta_C >> C \\\\\n@V \\pi' VV @VV \\pi V \\\\\nM' @> \\zeta_M >> M\n\\end{CD}\n$$\ntogether with a 2-equivalence $\\theta:\\pi\\circ \\zeta_C \\Rightarrow\n\\zeta_M \\circ \\pi'$. \n\n\\begin{lem} \\label{lem-pullback}\n\\marpar{lem-pullback}\nFor every complex $F^\\bullet$ perfect of bounded amplitude on $C$, \n$\\zeta_M^* Q_\\pi(F^\\bullet)$ is isomorphic to $Q_{\\pi'}(\\zeta_C^*\nF^\\bullet)$.\n\\end{lem}\n\n\\begin{proof}\nOf course $\\zeta_C^* E_\\pi = E_{\\pi'}$. And $\\zeta_M^* R\\pi_*$ is\ncanonically equivalent to $R(\\pi')_* \\zeta_C^*$ for perfect\ncomplexes of bounded amplitude. Therefore $\\zeta_M^*\nQ_\\pi(F^\\bullet)$ equals $\\text{det}(\\zeta_M^* R\\pi_*(E_\\pi\\otimes\nF^\\bullet))$ equals $\\text{det}(R(\\pi')_* \\zeta_C^*(E_\\pi \\otimes\nF^\\bullet))$ equals $\\text{det}(R(\\pi')_* E_{\\pi'}\\otimes \\zeta_C^*\nF^\\bullet)$ equals $Q_{\\pi'}(\\zeta_C^* F^\\bullet)$.\n\\end{proof}\n\n\\begin{lem} \\label{lem-inv}\n\\marpar{lem-inv}\nLet $L$ be an invertible sheaf on $C$ of relative degree $e$ over\n$M$. For every invertible sheaf $L'$ on $M$, $Q_\\pi(L\\otimes \\pi^*L')\n\\cong Q_\\pi(L)\\otimes (L')^{2e}$. 
In particular, if $e=0$,\n$Q_\\pi(L\\otimes \\pi^* L') \\cong Q_\\pi(L)$.\n\\end{lem}\n\n\\begin{proof}\nTo compute the rank of $R\\pi_*(E_\\pi\\otimes F^\\bullet)$ over any\nconnected component of $M$, it suffices to base-change to the spectrum\nof a field mapping to that component. Then, by\nGrothendieck-Riemann-Roch, the rank is $2\\text{deg}(C_1(F^\\bullet))$.\nIn particular, $R\\pi_*(E_\\pi \\otimes L)$ has rank $2e$. \n\n\\medskip\\noindent\nBy the projection formula, $R\\pi_*(E_\\pi\\otimes L\\otimes \\pi^* L')\n\\cong R\\pi_*(E_\\pi \\otimes L)\\otimes L'$. Of course\n$\\text{det}(R\\pi_*(E_\\pi \\otimes L)\\otimes L') = Q_\\pi(L)\\otimes\n(L')^\\text{rank}$. This follows from the uniqueness of $\\text{det}$:\nfor any invertible sheaf $L'$ the association $F^\\bullet \\mapsto\n\\text{det}(F^\\bullet \\otimes L')\\otimes\n(L')^{-\\text{rank}(F^\\bullet)}$ also satisfies the axioms for a\ndeterminant function and is hence canonically isomorphic to\n$\\text{det}(F^\\bullet)$. Therefore $Q_\\pi(L\\otimes \\pi^*L') =\nQ_\\pi(L)\\otimes (L')^{2e}$.\n\\end{proof}\n\n\\section{Local computations} \\label{sec-local}\n\\marpar{sec-local}\n\n\\noindent\nThis section contains 2 computations: $Q_\\pi(\\omega_\\pi)$ and\n$Q_\\pi(L)$ for every invertible sheaf on $C$ of relative degree $0$.\nBecause of Lemma~\\ref{lem-pullback} the first computation reduces to\nthe universal case over $\\mathfrak{M}_{0,0}$. Because of\nLemma~\\ref{lem-pullback} and Lemma~\\ref{lem-inv}, the second\ncomputation reduces to $\\mathcal O_{\\mathcal{C}}(\\mathcal{D})$ over\n$\\mathfrak{M}_{\\mathbb{Z}}(\\tau_{0,0}(0))$. 
In each case the computation is\nperformed locally.\n\n\\subsection{Computation of $Q_\\pi(\\omega_\\pi)$} \\label{subsec-Qomega}\n\\marpar{subsec-Qomega}\nAssociated to $\\pi_C:C\\rightarrow M$, there is a 1-morphism $\\zeta_M:M\n\\rightarrow \\mathfrak{M}_{0,0}$, a 1-morphism $\\zeta_C: C \\rightarrow\n\\mathcal{C}$, and a 2-equivalence $\\theta:\\pi_{\\mathcal{C}} \\circ \\zeta_C\n\\Rightarrow \\zeta_M\\circ \\pi_C$ such that the following diagram is\n2-Cartesian,\n$$\n\\begin{CD}\nC @> \\zeta_C >> \\mathcal{C} \\\\\n@V \\pi_C VV @VV \\pi_{\\mathcal{C}} V \\\\\nM @> \\zeta_M >> \\mathfrak{M}_{0,0}\n\\end{CD}\n$$\nOf course $\\omega_{\\pi_C}$ is isomorphic to $\\zeta_C^*\n\\omega_{\\pi_\\mathcal{C}}$. By Lemma~\\ref{lem-pullback},\n$Q_{\\pi_C}(\\omega_{\\pi_C}) \\cong \\zeta_M^*\nQ_{\\pi_{\\mathcal{C}}}(\\omega_{\\pi_{\\mathcal{C}}})$. So the computation of\n$Q_{\\pi_C}(\\omega_{\\pi_C})$ is reduced to the universal family.\n\n\\medskip\\noindent\nLet the open substack $U_1\\subset \\mathfrak{M}_{0,0}$ be the complement of\nthe union of the images of $\\mathfrak{M}(\\phi):\\mathfrak{M}(\\sigma) \\rightarrow\n\\mathfrak{M}_{0,0}$ as $\\phi:\\sigma \\rightarrow \\sigma_{0,0}$ ranges over\nall contractions such that $\\#\\text{Vertex}(\\sigma) \\geq 3$. Let $U_2\n\\subset U_1$ be the open substack\n$\\mathfrak{M}^{\\text{strict}}(\\sigma_{0,0})$.\n\n\\begin{prop} \\label{prop-Qomega}\n\\marpar{prop-Qomega}\n\\begin{enumerate}\n\\item[(i)]\nOver the open substack $U_1$, $\\omega_\\pi^\\vee$ is $\\pi$-relatively\nample. \n\\item[(ii)]\nOver $U_1$,\n$R^1\\pi_*\\omega_\\pi^\\vee|_{U_1} = (0)$ and $\\pi_*\n\\omega_\\pi^\\vee|_{U_1}$ is locally free of rank 3. \n\\item[(iii)]\nOver $U_2$, there is\na canonical isomorphism $i:\\text{det}(\\pi_* \\omega_\\pi^\\vee|_{U_2})\n\\rightarrow \\mathcal O_{U_2}$. 
\n\\item[(iv)]\nThe image of $\\text{det}(\\pi_*\\omega_\\pi^\\vee|_{U_1}) \\rightarrow\n\\text{det}(\\pi_*\\omega_\\pi^\\vee|_{U_2}) \\xrightarrow{i} \\mathcal O_{U_2}$ is\n$\\mathcal O_{U_1}(-\\Delta) \\subset \\mathcal O_{U_2}$.\n\\item[(v)]\nOver $U_1$, $Q_\\pi(\\omega_\\pi)|_{U_1} \\cong \\mathcal O_{U_1}(-\\Delta)$.\nTherefore on all of $\\mathfrak{M}_{0,0}$, $Q_\\pi(\\omega_\\pi) \\cong\n\\mathcal O_{\\mathfrak{M}_{0,0}}(-\\Delta)$. \n\\end{enumerate}\n\\end{prop}\n\n\\begin{proof}\nOver $\\mathbb{Z}$, let $V = \\mathbb{Z}\\{\\mathbf{e}_0,\\mathbf{e}_1\\}$ be a free module of\nrank $2$. Choose dual coordinates $y_0,y_1$ for $V^\\vee$. Let\n${\\mathbb P}^1_\\mathbb{Z} = {\\mathbb P}(V)$ be the projective space with homogeneous\ncoordinates $y_0, y_1$. Let $\\mathbb{A}^1_\\mathbb{Z}$ be the affine space with\ncoordinate $x$. Denote by $Z\\subset \\mathbb{A}^1_\\mathbb{Z} \\times {\\mathbb P}^1_\\mathbb{Z}$ the\nclosed subscheme $\\mathbb{V}(x,y_1)$, i.e., the image of the section\n$(0,(1,0))$. Let $\\nu:C\\rightarrow \\mathbb{A}^1_\\mathbb{Z} \\times {\\mathbb P}^1_\\mathbb{Z}$ be\nthe blowing up of $Z$. Denote by $E\\subset C$ the exceptional\ndivisor.\n\n\\medskip\\noindent\nDefine $\\pi:C\\rightarrow \\mathbb{A}^1_\\mathbb{Z}$ to be $\\text{pr}_{\\AA^1}\\circ\n\\nu$. This is a flat, proper morphism whose geometric fibers are\nconnected, at-worst-nodal curves of arithmetic genus $0$. Moreover,\nno geometric fiber has more than 1 node. Thus there is a 1-morphism\n$\\zeta:\\mathbb{A}^1_\\mathbb{Z} \\rightarrow U_1$ such that the pullback of $\\mathcal{C}$\nis equivalent to $C$. It is straightforward that $\\zeta$ is smooth\nand is surjective on geometric points. Thus (i) and (ii) can be\nchecked after base-change by $\\zeta$. Also (iv) will reduce to a\ncomputation after base-change by $\\zeta$.\n\n\\medskip\\noindent\n\\textbf{(i) and (ii):} Denote by ${\\mathbb P}^2_\\mathbb{Z}$ the projective space with\ncoordinates $u_0,u_1,u_2$. 
There is a rational map\n$f:\\mathbb{A}^1_\\mathbb{Z} \\times {\\mathbb P}^1_\\mathbb{Z} \\dashrightarrow \\mathbb{A}^1_\\mathbb{Z} \\times\n{\\mathbb P}^2_\\mathbb{Z}$ by\n$$\n\\begin{array}{ccc}\nf^*x & = & x, \\\\\nf^*u_0 & = & xy_0^2, \\\\\nf^*u_1 & = & y_0y_1, \\\\\nf^*u_2 & = & y_1^2\n\\end{array}\n$$\nBy local computation, this extends to a morphism $f:C\\rightarrow\n\\mathbb{A}^1_\\mathbb{Z} \\times {\\mathbb P}^2_\\mathbb{Z}$ that is a closed immersion and whose\nimage is $\\mathbb{V}(u_0u_2-xu_1^2)$. By the adjunction formula,\n$\\omega_\\pi$ is the pullback of $\\mathcal O_{{\\mathbb P}^2}(-1)$. In particular,\n$\\omega_\\pi^\\vee$ is very ample. Moreover, because\n$H^1({\\mathbb P}^2_\\mathbb{Z},\\mathcal O_{{\\mathbb P}^2}(-1)) = (0)$, also $H^1(C,\\omega_\\pi^\\vee) =\n(0)$. By cohomology and base-change results,\n$R^1\\pi_*(\\omega_\\pi^\\vee) = (0)$ and $\\pi_*(\\omega_\\pi^\\vee)$ is\nlocally free of rank 3. \n\n\\medskip\\noindent\n\\textbf{(iii):} The curve ${\\mathbb P}^1_\\mathbb{Z} = {\\mathbb P}(V)$ determines a morphism\n$\\eta: \\text{Spec }(\\mathbb{Z}) \\rightarrow U_2$. This is smooth and surjective on\ngeometric points. Moreover it gives a realization of $U_2$ as the\nclassifying stack of the group scheme $\\text{Aut}({\\mathbb P}(V)) =\n\\textbf{PGL}(V)$. Taking the exterior power of the Euler exact\nsequence, $\\omega_{{\\mathbb P}(V)\/\\mathbb{Z}} = \\bigwedge^2(V^\\vee)\\otimes\n\\mathcal O_{{\\mathbb P}(V)}(-2)$. Therefore $H^0({\\mathbb P}(V),\\omega_{{\\mathbb P}(V)\/\\mathbb{Z}}^\\vee)$\nequals $\\bigwedge^2(V)\\otimes \\text{Sym}^2(V^\\vee)$ as a\nrepresentation of $\\textbf{GL}(V)$. The determinant of this\nrepresentation is the trivial character of $\\textbf{GL}(V)$.\nTherefore it is the trivial character of $\\textbf{PGL}(V)$. This\ngives an isomorphism of $\\text{det}(\\pi_*\\omega_\\pi^\\vee|_{U_2})$ with\n$\\mathcal O_{U_2}$.\n\n\\medskip\\noindent\n\\textbf{(iv):} This can be checked after pulling back by $\\zeta$. 
The\npullback of $U_2$ is $\\mathbb{G}_{m,\\mathbb{Z}} \\subset \\mathbb{A}^1_\\mathbb{Z}$. The\npullback of $i$ comes from the determinant of\n$H^0(\\mathbb{G}_{m,\\mathbb{Z}}\\times {\\mathbb P}^1_\\mathbb{Z},\\omega_\\pi^\\vee) =\n\\bigwedge^2(V) \\otimes \\text{Sym}^2(V^\\vee) \\otimes\n\\mathcal O_{\\mathbb{G}_m}$. By the adjunction formula, $\\omega_{C\/\\mathbb{A}^1} =\n\\nu^*\\omega_{\\mathbb{A}^1\\times {\\mathbb P}^1\/\\mathbb{A}^1}(E)$. Hence\n$\\nu_*\\omega_{C\/\\mathbb{A}^1}^\\vee = I_Z \\cdot \\omega^\\vee_{\\mathbb{A}^1\\times\n{\\mathbb P}^1\/\\mathbb{A}^1}$. Therefore the canonical map,\n$$\nH^0(C,\\omega_{C\/\\mathbb{A}^1}^\\vee) \\rightarrow H^0(\\mathbb{A}^1_\\mathbb{Z} \\times\n{\\mathbb P}^1_\\mathbb{Z}, \\omega^\\vee_{\\mathbb{A}^1\\times {\\mathbb P}^1\/\\mathbb{A}^1}),\n$$ \nis given by,\n$$\n\\begin{array}{l}\n\\mathcal O_{\\mathbb{A}^1}\\{\\mathbf{f}_0,\\mathbf{f}_1,\\mathbf{f}_2\\} \\rightarrow\n\\bigwedge^2(V)\\otimes \\text{Sym}^2(V^\\vee) \\otimes \\mathcal O_{\\mathbb{A}^1}, \\\\\n\\mathbf{f}_0 \\mapsto x\\cdot (\\mathbf{e}_0\\wedge\\mathbf{e}_1)\\otimes y_0^2, \\\\\n\\mathbf{f}_1 \\mapsto \\ \\ \\ \\ (\\mathbf{e}_0\\wedge \\mathbf{e}_1) \\otimes y_0y_1, \\\\\n\\mathbf{f}_2 \\mapsto \\ \\ \\ \\ (\\mathbf{e}_0\\wedge \\mathbf{e}_1) \\otimes y_1^2\n\\end{array}\n$$\nIt follows that $\\text{det}(\\pi_*\\omega_\\pi^\\vee) \\rightarrow\n\\mathcal O_{\\mathbb{G}_m}$ has image $\\langle x \\rangle \\mathcal O_{\\mathbb{A}^1}$, i.e.,\n$\\zeta^* \\mathcal O_{U_1}(-\\Delta)$.\n\n\\medskip\\noindent\n\\textbf{(v):} By the short exact sequence for $E_\\pi$,\n$Q_\\pi(\\omega_\\pi) = \\text{det}(R\\pi_*\\omega_\\pi)\\otimes\n\\text{det}(R\\pi_* \\omega_\\pi^2)$. Because the trace map is a\nquasi-isomorphism, $\\text{det}(R\\pi_* \\omega_\\pi) = \\mathcal O_{U_1}$. 
By\n(ii) and duality, $$ \\text{det}(R\\pi_* \\omega_\\pi^2) \\cong\n\\text{det}(R^1\\pi_*\\omega_\\pi^2)^\\vee \\cong \\text{det}(\\pi_*\n\\omega_\\pi^\\vee).\n$$ \nBy (iv), this is $\\mathcal O_{U_1}(-\\Delta)$. Therefore $Q_\\pi(\\omega_\\pi)\n\\cong \\mathcal O_{U_1}(-\\Delta)$ on $U_1$. Because $\\mathfrak{M}_{0,0}$ is\nregular, and because the complement of $U_1$ has codimension $2$, this\nisomorphism of invertible sheaves extends to all of $\\mathfrak{M}_{0,0}$.\n\\end{proof}\n\n\\medskip\\noindent\nThe sheaf of relative differentials $\\Omega_\\pi$ is a pure coherent\nsheaf on $\\mathcal{C}$ of rank $1$, flat over $\\mathfrak{M}_{0,0}$ and is\nquasi-isomorphic to a perfect complex of amplitude $[-1,0]$. \n\n\\begin{lem} \\label{lem-Omega}\n\\marpar{lem-Omega}\nThe perfect complex $R\\pi_*\\Omega_\\pi$ has rank $-1$ and determinant\n$\\cong \\mathcal O_{\\mathfrak{M}_{0,0}}(-\\Delta)$. The perfect complex $R\\pi_*\nR\\textit{Hom}_{\\mathcal O_{\\mathcal{C}}}(\\Omega_\\pi,\\mathcal O_{\\mathcal{C}})$ has rank $3$\nand determinant $\\cong \\mathcal O_{\\mathfrak{M}_{0,0}}(-2\\Delta)$.\n\\end{lem}\n\n\\begin{proof}\nThere is a canonical injective sheaf homomorphism $\\Omega_\\pi\n\\rightarrow \\omega_\\pi$ and the support of the cokernel, $Z\\subset\n\\mathcal{C}$, is a closed substack that is smooth and such that\n$\\pi:Z\\rightarrow \\mathfrak{M}_{0,0}$ is unramified and is the normalization\nof $\\Delta$. Over $U_1$, the lemma immediately follows from this and\nthe arguments in the proof of Proposition~\\ref{prop-Qomega}. As in\nthat case, it suffices to establish the lemma over $U_1$.\n\\end{proof}\n\n\\subsection{Computation of $Q_\\pi(L)$ for invertible sheaves of degree\n $0$} \\label{subsec-QL}\n\\marpar{subsec-QL}\n\n\\noindent\nLet $M$ be an Artin stack, let $\\pi:C\\rightarrow M$ be a flat\n1-morphism, relatively representable by proper algebraic spaces whose\ngeometric fibers are connected, at-worst-nodal curves of arithmetic\ngenus $0$. 
Let $L$ be an invertible sheaf on $C$ of relative degree\n$0$ over $M$. This determines a morphism to the relative Picard of\nthe universal curve over $\\mathfrak{M}_{0,0}$, i.e., $\\zeta_M:M \\rightarrow\n\\mathfrak{M}_{\\mathbb{Z}}(\\tau_{0,0}(0))$ such that the pullback of $\\mathcal{C}$ is\nequivalent to $C$, and such that the pullback of\n$\\mathcal O_{\\mathcal{C}}(\\mathcal{D})$ differs from $L$ by $\\pi^*L'$ for an invertible\nsheaf $L'$ on $M$. By Lemma~\\ref{lem-pullback} and\nLemma~\\ref{lem-inv}, $Q_\\pi(L) \\cong \\zeta_M^*\nQ_\\pi(\\mathcal O_{\\mathcal{C}}(\\mathcal{D}))$.\n\n\\medskip\\noindent\nLet $\\pi:\\mathcal{C} \\rightarrow \\mathfrak{M}_{\\mathbb{Z}}(\\tau_{0,0}(0))$ be the\nuniversal curve.\n\n\\begin{prop} \\label{prop-lcomp}\n\\marpar{prop-lcomp}\nOver $\\mathfrak{M}_{\\mathbb{Z}}(\\tau_{0,0}(0))$, $\\pi_*E_\\pi(\\mathcal{D}) = (0)$ and \n$R^1\\pi_*\nE_\\pi(\\mathcal{D})$ is a sheaf supported on $\\Delta$. The stalk of\n$R^1\\pi_* E_\\pi(\\mathcal{D})$ at the generic point of $\\Delta_a$ is a\ntorsion module of length $a^2$. The filtration by order of vanishing\nat the generic point has associated graded pieces of length $2a-1,\n2a-3, \\dots, 3,1$. \n\\end{prop}\n\n\\begin{proof}\nOver the open complement of $\\Delta$, the divisor $\\mathcal{D}$ is $0$. So\nthe first part of the proposition reduces to the statement that\n$R\\pi_* E_\\pi$ \nis quasi-isomorphic to $0$. By definition of $E_\\pi$, there is an\nexact triangle,\n$$\n\\begin{CD}\nR\\pi_* E_\\pi @>>> R\\pi_* \\mathcal O_{\\mathcal{C}} @> \\delta >> R\\pi_* \\omega_\\pi\n[1] @>>> \nR\\pi_* E_\\pi[1].\n\\end{CD}\n$$\nOf course the canonical isomorphism $R\\pi_* \\mathcal O_{\\mathcal{C}} \\cong\n\\mathcal O_{\\mathfrak{M}}$ and the extension\n$E_\\pi$ were defined so that the composition of $\\delta$ with the trace\nmap, which is a quasi-isomorphism in this case, would be the\nidentity. Therefore $\\delta$ is a quasi-isomorphism, so $R\\pi_*\nE_\\pi$ is quasi-isomorphic to $0$. 
\n\n\\medskip\\noindent\nThe second part can be proved, and to an extent only makes sense,\nafter smooth base-change to a scheme. Let ${\\mathbb P}^1_s$ be a copy of\n${\\mathbb P}^1$ with homogeneous coordinates $S_0,S_1$. Let ${\\mathbb P}^1_x$ be a\ncopy of ${\\mathbb P}^1$ with homogeneous coordinates $X_0,X_1$. Let ${\\mathbb P}^1_y$\nbe a copy of ${\\mathbb P}^1$ with homogeneous coordinates $Y_0,Y_1$. Denote\nby $C \\subset {\\mathbb P}^1_s \\times {\\mathbb P}^1_x \\times {\\mathbb P}^1_y$ the divisor with\ndefining equation $F=S_0X_0Y_0 - S_1X_1Y_1$. The projection\n$\\text{pr}_s:C \\rightarrow {\\mathbb P}^1_s$ is a proper, flat morphism whose\ngeometric fibers are connected, at-worst-nodal curves of arithmetic\ngenus $0$. Denote by $L$ the invertible sheaf on $C$ that is the\nrestriction of $\\text{pr}_x^*\\mathcal O_{{\\mathbb P}^1_x}(a)\\otimes\n\\text{pr}_y^*\\mathcal O_{{\\mathbb P}^1_y}(-a)$. This is an invertible sheaf of\nrelative degree $0$. Therefore there is an induced $1$-morphism\n$\\zeta:{\\mathbb P}^1_s \\rightarrow \\mathfrak{M}_{\\mathbb{Z}}(\\tau_{0,0}(0))$. \n\n\\medskip\\noindent\nIt is straightforward that $\\zeta$ is smooth, and the image intersects\n$\\Delta_b$ iff $b=a$. Moreover, $\\zeta^* \\Delta_a$ is the reduced\nCartier divisor $\\mathbb{V}(S_0S_1) \\subset {\\mathbb P}^1_s$. There is an\nobvious involution $i:{\\mathbb P}^1_s\\rightarrow {\\mathbb P}^1_s$ by\n$i(S_0,S_1)=(S_1,S_0)$, and $\\zeta\\circ i$ is $2$-equivalent to\n$\\zeta$. Therefore the length of $R^1\\text{pr}_{s,*}(\nE_{\\text{pr}_s}\\otimes L)$ is $2$ times the length of the stalk of\n$R^1\\pi_* E_{\\pi}(\\mathcal{D})$ at the generic point of $\\Delta_a$; more\nprecisely, the length of the stalk at each of $(1,0),(0,1)\\in {\\mathbb P}^1_s$\nis the length of the stalk at $\\Delta_a$. 
Similarly for the lengths\nof the associated graded pieces of the filtration.\n\n\\medskip\\noindent\nBecause $E_{\\text{pr}_s}$ is the extension class of the Trace mapping,\n$R^1\\text{pr}_{s,*}E_{\\text{pr}_s} \\otimes L$ is the cokernel of the\n$\\mathcal O_{{\\mathbb P}^1_s}$-homomorphisms,\n$$\n\\gamma:\\text{pr}_{s,*}(L) \\rightarrow\n\\text{Hom}_{\\mathcal O_{{\\mathbb P}^1_s}}(\\text{pr}_{s,*}(L^\\vee), \\mathcal O_{{\\mathbb P}^1_s}),\n$$\ninduced via adjointness from the multiplication map,\n$$\n\\text{pr}_{s,*}(L) \\otimes \\text{pr}_{s,*}(L^\\vee) \\rightarrow\n\\text{pr}_{s,*}(\\mathcal O_C) = \\mathcal O_{{\\mathbb P}^1_s}.\n$$\n\n\\medskip\\noindent\nOn ${\\mathbb P}^1_s\\times {\\mathbb P}^1_x \\times {\\mathbb P}^1_y$ there is a locally free\nresolution of the push-forward of $L$, resp. $L^\\vee$,\n$$\n\\begin{array}{c}\n0 \\rightarrow \\mathcal O_{{\\mathbb P}^1_s}(-1)\\boxtimes \\mathcal O_{{\\mathbb P}^1_x}(a-1)\\boxtimes\n\\mathcal O_{{\\mathbb P}^1_y}(-a-1) \\xrightarrow{F} \\mathcal O_{{\\mathbb P}^1_s}(0)\\boxtimes\n\\mathcal O_{{\\mathbb P}^1_x}(a) \\boxtimes \\mathcal O_{{\\mathbb P}^1_y}(-a) \\rightarrow L \\rightarrow\n0, \\\\\n0 \\rightarrow \\mathcal O_{{\\mathbb P}^1_s}(-1)\\boxtimes \\mathcal O_{{\\mathbb P}^1_x}(-a-1)\\boxtimes\n\\mathcal O_{{\\mathbb P}^1_y}(a-1) \\xrightarrow{F} \\mathcal O_{{\\mathbb P}^1_s}(0)\\boxtimes\n\\mathcal O_{{\\mathbb P}^1_x}(-a) \\boxtimes \\mathcal O_{{\\mathbb P}^1_y}(a) \\rightarrow L^\\vee\n\\rightarrow 0\n\\end{array}\n$$\nHence $R\\text{pr}_{s,*}L$ is the complex,\n$$\n\\mathcal O_{{\\mathbb P}^1_s}(-1)\\otimes_k H^0({\\mathbb P}^1_x,\\mathcal O_{{\\mathbb P}^1_x}(a-1)) \\otimes_k\nH^1({\\mathbb P}^1_y,\\mathcal O_{{\\mathbb P}^1_y}(-a-1)) \\xrightarrow{F} \\mathcal O_{{\\mathbb P}^1_s}\n\\otimes_k H^0({\\mathbb P}^1_x,\\mathcal O_{{\\mathbb P}^1_x}(a))\\otimes_k\nH^1({\\mathbb P}^1_y,\\mathcal O_{{\\mathbb P}^1_y}(-a)).\n$$\nSimilarly for $R\\text{pr}_{s,*}L^\\vee$. 
It is possible to write out\nthis map explicitly in terms of bases for $H^0$ and $H^1$, but for the\nmain statement just observe that the complex has rank $1$ and degree $-a^2$.\nSimilarly for $R\\text{pr}_{s,*}L^\\vee$. Therefore\n$R^1\\text{pr}_{s,*}(E_{\\text{pr}_s}\\otimes L)$\nis a torsion sheaf of length $2a^2$. Because it is equivariant for\n$i$, the localization at each of $(0,1)$ and $(1,0)$ has length $a^2$.\n\n\\medskip\\noindent\nThe lengths of the associated graded pieces of the filtration by order\nof vanishing at $\\mathbb{V}(S_0S_1)$ can be computed from the\ncomplexes for $R\\text{pr}_{s,*}L$ and $R\\text{pr}_{s,*}L^\\vee$. This\nis left to the reader.\n\\end{proof}\n\n\\begin{cor} \\label{cor-lcomp}\n\\marpar{cor-lcomp}\nIn the universal case, $Q_\\pi(\\mathcal{D}) = -\\sum_{a\\geq 0} a^2 \\Delta_a$.\nTherefore in the general case of $\\pi:C\\rightarrow M$ and an\ninvertible sheaf $L$ of relative degree $0$,\n$$\nQ_\\pi(L) = {\\sum_{\\beta',\\beta''}}^\\prime \\langle C_1(L),\\beta'\n\\rangle \\langle \nC_1(L), \\beta'' \\rangle \\Delta_{\\beta',\\beta''}.\n$$\n\\end{cor}\n\n\\section{Some divisor class relations} \\label{sec-div}\n\\marpar{sec-div}\n\n\\noindent\nIn this section, Proposition~\\ref{prop-Qomega} and\nProposition~\\ref{prop-lcomp} are used to deduce several other divisor\nclass relations. 
As usual, let $M$ be an Artin stack and let\n$\\pi:C\\rightarrow M$ be a flat 1-morphism, relatively representable by\nproper algebraic spaces whose geometric fibers are connected,\nat-worst-nodal curves of arithmetic genus $0$.\n\n\\begin{hyp} \\label{hyp-GRR}\n\\marpar{hyp-GRR}\nThere are cycle class groups for $C$ and $M$ admitting Chern classes\nfor locally free sheaves, and such that Grothendieck-Riemann-Roch\nholds for $\\pi$.\n\\end{hyp}\n\n\\begin{lem} \\label{lem-rel1}\n\\marpar{lem-rel1}\nFor every Cartier divisor class $D$ on $C$ of relative degree $\\langle\nD, \\beta \\rangle$ over $M$, modulo $2$-power torsion,\n$$\n\\pi_*(D\\cdot D) + \\langle D,\\beta \\rangle \\pi_*(D\\cdot\nC_1(\\omega_\\pi)) = {\\sum_{\\beta',\\beta''}}^\\prime \\langle D,\\beta'\n\\rangle \\langle D,\\beta'' \\rangle \\Delta_{\\beta',\\beta''}.\n$$\n\\end{lem}\n\n\\begin{proof}\nDefine $D' = 2D + \\langle D,\\beta \\rangle C_1(\\omega_\\pi)$. This is a\nCartier divisor class of relative degree $0$. By\nCorollary~\\ref{cor-lcomp}, \n$$\nQ_\\pi(D') = {\\sum_{\\beta',\\beta''}}^\\prime (\\langle 2D,\\beta' \\rangle\n-\\langle D,\\beta \\rangle)(\\langle 2D,\\beta'' \\rangle - \\langle D,\\beta\n\\rangle) \\Delta_{\\beta',\\beta''}.\n$$\nBy Lemma~\\ref{lem-GRR} this is,\n$$\n\\begin{array}{c}\n4\\pi_*(D\\cdot D) +4\\langle D,\\beta \\rangle \\pi_*(D\\cdot\nC_1(\\omega_\\pi)) + (\\langle D,\\beta \\rangle)^2 Q_\\pi(\\omega_\\pi) =\n\\\\\n{\\sum_{\\beta',\\beta''}}^\\prime (4\\langle D,\\beta' \\rangle \\langle\nD,\\beta'' \\rangle - (\\langle D,\\beta \\rangle)^2)\n\\Delta_{\\beta',\\beta''}.\n\\end{array}\n$$\nBy Proposition~\\ref{prop-Qomega}, $Q_\\pi(\\omega_\\pi) =\n-{\\sum_{\\beta',\\beta''}}' \\Delta_{\\beta',\\beta''}$. Substituting this\ninto the equation, simplifying, and dividing by 4 gives the relation.\n\\end{proof}\n\n\\begin{lem} \\label{lem-rel2}\n\\marpar{lem-rel2}\nFor every pair of Cartier divisor classes on $C$, $D_1,D_2$, of\nrelative degrees $\\langle D_1,\\beta \\rangle$, resp. 
$\\langle D_2,\\beta\n\\rangle$, modulo $2$-power torsion,\n$$\n\\begin{array}{c}\n2\\pi_*(D_1\\cdot D_2) + \\langle D_1,\\beta \\rangle \\pi_*(D_2\\cdot\nC_1(\\omega_\\pi)) + \\langle D_2,\\beta \\rangle \\pi_*(D_1\\cdot\nC_1(\\omega_\\pi)) = \n\\\\\n\\\\\n{\\sum_{\\beta',\\beta''}}^\\prime (\\langle D_1,\\beta' \\rangle \\langle\nD_2,\\beta'' \n\\rangle + \\langle D_2,\\beta' \\rangle \\langle D_1,\\beta'' \\rangle)\n\\Delta_{\\beta',\\beta''}.\n\\end{array}\n$$\n\\end{lem}\n\n\\begin{proof}\nThis follows from Lemma~\\ref{lem-rel1} and the polarization identity\nfor quadratic forms.\n\\end{proof}\n\n\\begin{lem} \\label{lem-rel3}\n\\marpar{lem-rel3}\nFor every section of $\\pi$, $s:M\\rightarrow C$, whose image is\ncontained in the smooth locus of $\\pi$,\n$$\ns(M)\\cdot s(M) + s(M)\\cdot C_1(\\omega_\\pi) = 0.\n$$\n\\end{lem}\n\n\\begin{proof}\nThis follows by adjunction since the relative dualizing sheaf of\n$s(M)\\rightarrow M$ is trivial.\n\\end{proof}\n\n\\begin{lem} \\label{lem-rel4}\n\\marpar{lem-rel4}\nFor every section of $\\pi$, $s:M\\rightarrow C$, whose image is\ncontained in the smooth locus of $\\pi$ and for every Cartier divisor\nclass $D$ on $C$ of relative degree $\\langle D,\\beta \\rangle$ over\n$M$, modulo $2$-power torsion,\n$$\n\\begin{array}{c}\n2\\langle D,\\beta \\rangle s^*D -\\pi_*(D\\cdot D) - \\langle D,\\beta\n\\rangle^2 \\pi_*(s(M)\\cdot s(M)) = \\\\\n\\\\\n{\\sum_{\\beta',\\beta''}}^\\prime\n(\\langle D,\\beta' \\rangle^2 \\langle s(M),\\beta'' \\rangle + \\langle\nD,\\beta'' \\rangle^2 \\langle s(M),\\beta' \\rangle )\n\\Delta_{\\beta',\\beta''}.\n\\end{array}\n$$\n\\end{lem}\n\n\\begin{proof}\nBy Lemma~\\ref{lem-rel2},\n$$\n\\begin{array}{c}\n2s^* D + \\pi_*(D\\cdot C_1(\\omega_\\pi)) + \\langle D,\\beta \\rangle\n\\pi_*(s(M)\\cdot C_1(\\omega_\\pi)) = \\\\\n\\\\\n{\\sum}^\\prime (\\langle D,\\beta' \\rangle \\langle s(M),\\beta'' \\rangle +\n\\langle D,\\beta'' \\rangle \\langle s(M),\\beta' 
\\rangle\n)\\Delta_{\\beta',\\beta''}.\n\\end{array}\n$$\nMultiplying both sides by $\\langle D,\\beta \\rangle$,\n$$\n\\begin{array}{c}\n2\\langle D,\\beta \\rangle s^* D + \\langle D,\\beta \\rangle \\pi_*(D \\cdot\nC_1(\\omega_\\pi)) + \\langle D, \\beta \\rangle^2 \\pi_*(s(M)\\cdot\nC_1(\\omega_\\pi)) = \\\\\n\\\\\n{\\sum}^\\prime(\\langle D,\\beta \\rangle \\langle D,\\beta' \\rangle \\langle\ns(M),\\beta'' \\rangle + \\langle D,\\beta \\rangle \\langle D,\\beta''\n\\rangle \\langle s(M),\\beta' \\rangle ) \\Delta_{\\beta',\\beta''}.\n\\end{array}\n$$\nFirst of all, by Lemma~\\ref{lem-rel3}, $\\langle D,\\beta \\rangle^2\n\\pi_*(s(M) \\cdot C_1(\\omega_\\pi)) = - \\langle D,\\beta \\rangle^2\n\\pi_*(s(M)\\cdot s(M))$. Next, by Lemma~\\ref{lem-rel1},\n$$\n\\langle D,\\beta \\rangle \\pi_*(D\\cdot C_1(\\omega_\\pi)) = -\\pi_*(D\\cdot\nD) + {\\sum}^\\prime \\langle D,\\beta' \\rangle \\langle D,\\beta'' \\rangle\n\\Delta_{\\beta',\\beta''}.\n$$\nFinally,\n$$\n\\begin{array}{c}\n\\langle D,\\beta \\rangle \\langle D,\\beta' \\rangle \\langle s(M),\\beta''\n\\rangle + \\langle D,\\beta \\rangle \\langle D,\\beta'' \\rangle \\langle\ns(M),\\beta' \\rangle = \\\\ \\\\\n(\\langle D,\\beta' \\rangle + \\langle D,\\beta'' \\rangle) \\langle\nD,\\beta' \\rangle \\langle s(M),\\beta'' \\rangle + (\\langle D,\\beta'\n\\rangle + \\langle D,\\beta'' \\rangle) \\langle D,\\beta'' \\rangle \\langle\ns(M), \\beta' \\rangle = \\\\ \\\\\n\\langle D,\\beta' \\rangle^2 \\langle s(M),\\beta'' \\rangle + \\langle\nD,\\beta'' \\rangle^2 \\langle s(M),\\beta' \\rangle + \\langle D,\\beta'\n\\rangle \\langle D,\\beta'' \\rangle (\\langle s(M),\\beta' \\rangle +\n\\langle s(M),\\beta'' \\rangle) = \\\\ \\\\\n\\langle D,\\beta' \\rangle^2 \\langle s(M),\\beta'' \\rangle + \\langle\nD,\\beta'' \\rangle^2 \\langle s(M),\\beta' \\rangle + \\langle D,\\beta'\n\\rangle \\langle D,\\beta'' \\rangle.\n\\end{array}\n$$\nPlugging in these three identities and simplifying gives the 
relation.\n\\end{proof}\n\n\\medskip\\noindent\nLet $\\mathcal{C}$ be the universal curve over $\\mathfrak{M}_{0,0}$. Let\n$\\mathcal{C}_\\text{smooth}$ denote the smooth locus of $\\pi$. The\n2-fibered product $\\text{pr}_1:\\mathcal{C}_\\text{smooth}\n\\times_{\\mathfrak{M}_{0,0}} \\mathcal{C} \n\\rightarrow \\mathcal{C}_\\text{smooth}$ together with the diagonal\n$\\Delta:\\mathcal{C}_\\text{smooth} \\rightarrow \\mathcal{C}_\\text{smooth}\n\\times_{\\mathfrak{M}_{0,0}} \\mathcal{C}$ determine a 1-morphism\n$\\mathcal{C}_\\text{smooth} \\rightarrow \\mathfrak{M}_{0,1}$. This extends to a\n1-morphism $\\mathcal{C} \\rightarrow \\mathfrak{M}_{0,1}$. The pullback of the\nuniversal curve is a 1-morphism $\\pi':\\mathcal{C}' \\rightarrow \\mathcal{C}$ that\nfactors through $\\text{pr}_1:\\mathcal{C}\\times_{\\mathfrak{M}_{0,0}} \\mathcal{C}\n\\rightarrow \\mathcal{C}$. Denote the pullback of the universal section by\n$s:\\mathcal{C} \\rightarrow \\mathcal{C}'$. Now $\\mathcal{C}$ is regular, and the\ncomplement of $\\mathcal{C}_\\text{smooth}$ has codimension $2$. In\nparticular, $s^*\\mathcal O_{\\mathcal{C}'}(s(\\mathcal{C}))$ can be computed on\n$\\mathcal{C}_\\text{smooth}$. But the restriction to $\\mathcal{C}_\\text{smooth}$\nis clearly $\\omega^\\vee_\\pi$. Therefore $s^*\\mathcal O_{\\mathcal{C}'}(s(\\mathcal{C}))\n\\cong \n\\omega_\\pi^\\vee$ on all of $\\mathcal{C}$.\n\n\\medskip\\noindent\nPulling this back by $\\zeta_C:C\\rightarrow \\mathcal{C}$ gives a 1-morphism\n$\\pi':C'\\rightarrow C$ that factors through $\\text{pr}_1:C\\times_M\nC\\rightarrow C$. Let $D$ be a Cartier divisor class on $C$ and\nconsider the pullback to $C'$ of $\\text{pr}_2^* D$ on $C\\times_M C$.\nThis is a Cartier divisor class $D'$ on $C'$. Of course $s^* D' =\nD$. Moreover, by the projection formula the pushforward to $C\\times_M\nC$ of $D'\\cdot D'$ is $\\text{pr}_2^*(D\\cdot D)$. 
Therefore\n$(\\pi')_*(D'\\cdot D')$ is $(\\text{pr}_1)_*\\text{pr}_2^*(D\\cdot D)$,\ni.e., $\\pi^*\\pi_*(D\\cdot D)$. Finally, denote by,\n$$\n\\sum_{\\beta',\\beta''} \\langle D,\\beta'' \\rangle^2\n\\widetilde{\\Delta}_{\\beta',\\beta''},\n$$\nthe divisor class on $C$,\n$$\n{\\sum_{\\beta',\\beta''}}^\\prime (\\langle D,\\beta'' \\rangle^2 \\langle\ns,\\beta' \\rangle + \\langle D,\\beta' \\rangle^2 \\langle s,\\beta''\n\\rangle)\\Delta_{\\beta',\\beta''}.\n$$\nThe point is this: if $\\pi$ is smooth over every generic point of $M$,\nthen the divisor class $\\widetilde{\\Delta}_{\\beta',\\beta''}$ is the\nirreducible component of $\\pi^{-1}(\\Delta_{\\beta',\\beta''})$\ncorresponding to the vertex $v'$, i.e., the irreducible component with\n``curve class'' $\\beta'$.\nPutting this all together and applying Lemma~\\ref{lem-rel4} gives the\nfollowing.\n\n\\begin{lem} \\label{lem-rel5}\n\\marpar{lem-rel5}\nFor every Cartier divisor class $D$ on $C$ of relative degree $\\langle\nD,\\beta \\rangle$ over $M$,\n$$\n\\begin{array}{c}\n2\\langle D,\\beta \\rangle D - \\pi^*\\pi_*(D\\cdot D) + \\langle D,\\beta\n\\rangle^2 C_1(\\omega_\\pi) = \\\\ \\\\\n\\sum_{\\beta',\\beta''} \\langle D,\\beta'' \\rangle^2\n\\widetilde{\\Delta}_{\\beta',\\beta''}.\n\\end{array}\n$$\nIn particular, the relative Picard group of $\\pi$ is generated by\n$C_1(\\omega_\\pi)$ and the boundary divisor classes\n$\\widetilde{\\Delta}_{\\beta',\\beta''}$. \n\\end{lem}\n\n\\begin{rmk} \\label{rmk-rel5}\n\\marpar{rmk-rel5}\nIf $\\langle D,\\beta \\rangle \\neq 0$ then, at least up to torsion,\nLemma~\\ref{lem-rel1} follows from Lemma~\\ref{lem-rel5} by intersecting\nboth sides of the relation by $D$ and then applying $\\pi_*$. This was\npointed out by Pandharipande, who also proved Lemma~\\ref{lem-rel4} up\nto numerical equivalence in ~\\cite[Lem. 2.2.2]{QDiv} (by a very\ndifferent method). 
\n\\end{rmk}\n\n\\begin{lem} \\label{lem-rel6}\n\\marpar{lem-rel6}\nLet $s,s':M\\rightarrow C$ be sections with image in the smooth locus\nof $\\pi$ such that $s(M)$ and $s'(M)$ are disjoint. Then,\n$$\n\\pi_*(s(M)\\cdot s(M)) + \\pi_*(s'(M)\\cdot s'(M)) = -\n\\sum_{\\beta',\\beta''} \\langle s(M),\\beta' \\rangle \\langle\ns'(M),\\beta'' \\rangle \\Delta_{\\beta',\\beta''}.\n$$\n\\end{lem}\n\n\\begin{proof}\nApply Lemma~\\ref{lem-rel2} and use $s(M)\\cdot s'(M)=0$ and\nLemma~\\ref{lem-rel3}. \n\\end{proof}\n\n\\begin{lem} \\label{lem-rel7}\nLet $r\\geq 2$ and $s_1,\\dots,s_r:M\\rightarrow C$ be sections with image in the\nsmooth locus of $\\pi$ and which are pairwise disjoint. Then,\n$$\n-\\sum_{i=1}^r \\pi_*(s_i(M)\\cdot s_i(M)) = (r-2)\\pi_*(s_1(M)\\cdot\ns_1(M)) + \\sum_{\\beta',\\beta''} \\langle s_1(M),\\beta' \\rangle \\langle\ns_2(M) + \\dots + s_r(M),\\beta'' \\rangle \\Delta_{\\beta',\\beta''}.\n$$\n\\end{lem}\n\n\\begin{proof}\nThis follows from Lemma~\\ref{lem-rel6} by induction.\n\\end{proof}\n\n\\begin{lem} \\label{lem-rel8}\n\\marpar{lem-rel8}\nLet $r\\geq 2$ and let $s_1,\\dots,s_r:M \\rightarrow C$ be sections with\nimage in the smooth locus of $\\pi$ and which are pairwise disjoint.\nThen,\n$$\n-\\sum_{i=1}^r \\pi_*(s_i(M)\\cdot s_i(M)) = r(r-2) \\pi_*(s_1(M)\\cdot\ns_1(M)) + \\sum_{\\beta',\\beta''} \\langle s_1(M),\\beta' \\rangle \\langle\ns_2(M) + \\dots + s_r(M),\\beta'' \\rangle^2 \\Delta_{\\beta',\\beta''}.\n$$\nCombined with Lemma~\\ref{lem-rel7} this gives,\n$$\n\\begin{array}{c}\n(r-1)(r-2)\\pi_*(s_1(M)\\cdot s_1(M)) = \\\\ \\\\\n-\\sum_{\\beta',\\beta''} \\langle\ns_1(M),\\beta' \\rangle \\langle s_2(M) + \\dots + s_r(M),\\beta'' \\rangle\n(\\langle s_2(M)+\\dots +s_r(M), \\beta'' \\rangle - 1)\n\\Delta_{\\beta',\\beta''},\n\\end{array}\n$$\nwhich in turn gives,\n$$\n\\begin{array}{c}\n-(r-1)\\sum_{i=1}^r\\pi_*(s_i(M)\\cdot s_i(M)) = \\\\ \\\\\n\\sum_{\\beta',\\beta''} \\langle s_1(M),\\beta' \\rangle \\langle s_2(M) +\n\\dots + s_r(M), 
\\beta'' \\rangle (r- \\langle s_2(M) +\\dots + s_r(M),\n\\beta'' \\rangle) \\Delta_{\\beta',\\beta''}.\n\\end{array}\n$$\nIn the notation of Example~\\ref{ex-boundary}, this is,\n$$\n-(r-1)(r-2)\\pi_*(s_1(M)\\cdot s_1(M)) = \\sum_{(A,B), \\ 1\\in A}\n\\#B(\\#B-1) \\Delta_{(A,B)},\n$$\nand\n$$\n-(r-1)\\sum_{i=1}^r \\pi_*(s_i(M)\\cdot s_i(M)) = \\sum_{(A,B), \\ 1\\in A}\n\\#B(r-\\#B) \\Delta_{(A,B)}.\n$$\n\\end{lem}\n\n\\begin{proof}\nDenote $D=\\sum_{i=2}^r s_i(M)$. Apply Lemma~\\ref{lem-rel4} to get,\n$$\n\\begin{array}{c}\n2(r-1)\\cdot 0 - \\sum_{i=2}^r \\pi_*(s_i(M)\\cdot s_i(M)) -\n(r-1)^2\\pi_*(s_1(M)\\cdot s_1(M)) = \\\\ \\\\\n\\sum_{\\beta',\\beta''} \\langle s_1(M),\\beta' \\rangle \\langle\ns_2(M)+\\dots + s_r(M),\\beta'' \\rangle^2 \\Delta_{\\beta',\\beta''}.\n\\end{array}\n$$\nSimplifying,\n$$\n-\\sum_{i=1}^r \\pi_*(s_i(M)\\cdot s_i(M)) = r(r-2) \\pi_*(s_1(M)\\cdot\ns_1(M)) + \\sum \\langle s_1(M),\\beta' \\rangle \\langle s_2(M)+ \\dots\n+s_r(M),\\beta'' \\rangle^2 \\Delta_{\\beta',\\beta''}.\n$$\nSubtracting from the relation in Lemma~\\ref{lem-rel7} gives the\nrelation for $(r-1)(r-2)\\pi_*(s_1(M)\\cdot s_1(M))$. \nMultiplying the first relation by $(r-1)$, plugging in the second\nrelation and simplifying gives the third relation.\n\\end{proof}\n\n\\begin{lem} \\label{lem-rel9}\n\\marpar{lem-rel9}\nLet $r\\geq 2$ and let $s_1,\\dots,s_r:M\\rightarrow C$ be everywhere\ndisjoint sections with image in the smooth locus. For every $1\\leq i\n< j \\leq r$, using the notation from Example~\\ref{ex-boundary},\n$$\n\\sum_{(A,B), \\ i\\in A} \\#B(r-\\#B) \\Delta_{(A,B)} = \\sum_{(A',B'), j \\in\n A'} \\#B'(r-\\#B') \\Delta_{(A',B')}.\n$$\n\\end{lem}\n\n\\begin{proof}\nThis follows from Lemma~\\ref{lem-rel8} by permuting the roles of $1$\nwith $i$ and $j$.\n\\end{proof}\n\n\\begin{lem} \\label{lem-rel10}\n\\marpar{lem-rel10}\nLet $r \\geq 2$ and let $s_1,\\dots,s_r:M \\rightarrow C$ be everywhere\ndisjoint sections with image in the smooth locus of $\\pi$. 
For every\nCartier \ndivisor class $D$ on $C$ of relative degree $\\langle D,\\beta \\rangle$,\n$$\n2(r-1)(r-2)\\langle D,\\beta \\rangle s_1^*D = (r-1)(r-2)\\pi_*(D\\cdot D)\n+ \\sum_{\\beta',\\beta''} \\langle s_1(M),\\beta' \\rangle a(D,\\beta'')\n\\Delta_{\\beta',\\beta''},\n$$\nwhere, \n$$\na(D,\\beta'') = (r-1)(r-2)\\langle D,\\beta'' \\rangle^2 -\n\\langle D,\\beta \\rangle^2 \\langle s_2(M)+\\dots +s_r(M), \\beta'' \\rangle (\n\\langle s_2(M)+\\dots + s_r(M),\\beta'' \\rangle -1).\n$$\nIn particular, if $r\\geq 3$, then modulo torsion $s_i^*D$ is in the\nspan of $\\pi_*(D\\cdot D)$ and boundary divisors for every\n$i=1,\\dots,r$.\n\\end{lem}\n\n\\begin{proof}\nThis follows from Lemma~\\ref{lem-rel4} and Lemma~\\ref{lem-rel8}.\n\\end{proof}\n\n\\begin{lem} \\label{lem-rel11}\n\\marpar{lem-rel11}\nLet $r\\geq 2$ and let $s_1,\\dots,s_r:M\\rightarrow C$ be everywhere\ndisjoint sections with image in the smooth locus of $\\pi$. Consider\nthe sheaf $\\mathcal{E} = \\Omega_\\pi(s_1(M)+\\dots+s_r(M))$. The perfect\ncomplex $R\\pi_*R\\textit{Hom}_{\\mathcal O_C}(\\mathcal{E},\\mathcal O_C)$ has rank $3-r$ and\nthe first Chern class of the determinant is $-2\\Delta\n-\\sum_{i=1}^r\\pi_*(s_i(M)\\cdot s_i(M))$. 
In particular, if $r\\geq 2$, up\nto torsion,\n$$\n\\begin{array}{c}\nC_1(\\text{det}R\\pi_*\nR\\textit{Hom}_{\\mathcal O_C}(\\Omega_\\pi(s_1(M)+\\dots+s_r(M)),\\mathcal O_C)) = \\\\ \\\\\n-2\\Delta + \\frac{1}{r-1}\\sum_{(A,B),\\ 1\\in A} \\#B(r-\\#B)\n\\Delta_{(A,B)}.\n\\end{array}\n$$\n\\end{lem}\n\n\\begin{proof}\nThere is a short exact sequence,\n$$\n\\begin{CD}\n0 @>>> \\Omega_\\pi @>>> \\Omega_\\pi(s_1(M)+\\dots+s_r(M))\n@>>> \\oplus_{i=1}^r (s_i)_*\\mathcal O_M @>>> 0.\n\\end{CD}\n$$\nCombining this with Lemma~\\ref{lem-Omega}, Lemma~\\ref{lem-rel8}, and\nchasing through exact sequences gives the lemma.\n\\end{proof}\n\n\\section{The virtual canonical bundle} \\label{sec-vircan}\n\\marpar{sec-vircan}\n\n\\noindent\nLet $k$ be a field, let $X$ be a connected, smooth algebraic space\nover $k$ of dimension $n$, let\n$M$ be an Artin stack over $k$, let $\\pi:C\\rightarrow M$ be a flat\n1-morphism, \nrelatively representable by proper algebraic spaces whose geometric\nfibers are connected, at-worst-nodal curves of arithmetic genus $0$,\nlet $s_1,\\dots,s_r:M\\rightarrow C$ be pairwise disjoint sections with\nimage contained in the smooth locus of $\\pi$ (possibly $r=0$, i.e.,\nthere are no sections), and let $f:C\\rightarrow\nX$ be a 1-morphism of $k$-stacks. In this setting, Behrend and\nFantechi introduced a perfect complex $E^\\bullet$ on $M$ of amplitude\n$[-1,1]$ and a morphism to the cotangent complex,\n$\\phi:E^\\bullet \\rightarrow L_M^\\bullet$, ~\\cite{BM}. \nIf $\\text{char}(k)=0$ and $M$ is the Deligne-Mumford stack of stable\nmaps to $X$, Behrend and Fantechi prove $E^\\bullet$ has amplitude\n$[-1,0]$,\n$h^0(\\phi)$ is an isomorphism\nand $h^{-1}(\\phi)$ is surjective. In\nmany interesting cases, $\\phi$ is a quasi-isomorphism. Then\n$\\text{det}(E^\\bullet)$ is an invertible \ndualizing sheaf for $M$. \nBecause of this, \n$\\text{det}(E^\\bullet)$ is called the \\emph{virtual canonical\n bundle}. 
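The Grothendieck-Riemann-Roch computation carried out in Lemma~\ref{lem-TX} below is a degree-by-degree expansion of $\text{ch}(f^*T_X)\cdot \tau_\pi$. That expansion can be sanity-checked symbolically; the following sketch (not part of the argument) uses ad hoc symbols `n`, `c1`, `c2`, `w`, `t2` for the rank, $f^*C_1(\Omega_X)$, $f^*C_2(\Omega_X)$, $C_1(\omega_\pi)$ and the degree-$2$ Todd term, with `x` merely tracking cohomological degree.

```python
import sympy as sp

# ad hoc symbols: n = rank of T_X, c1, c2 = f^*C_1, f^*C_2 of Omega_X,
# w = C_1(omega_pi), t2 = degree-2 Todd term; x only tracks degree
n, c1, c2, w, t2, x = sp.symbols('n c1 c2 w t2 x')

ch = n - x*c1 + x**2*(c1**2 - 2*c2)/2   # ch(f^*T_X), truncated in degree 2
todd = 1 - x*w/2 + x**2*t2              # Todd class of pi, truncated in degree 2

prod = sp.expand(ch*todd)
deg1 = prod.coeff(x, 1)                 # degree-1 part of ch . Todd
deg2 = prod.coeff(x, 2)                 # degree-2 part of ch . Todd

# degree 1 agrees with -[f^*C_1(Omega_X) + (n/2) C_1(omega_pi)]
assert sp.simplify(deg1 + c1 + n*w/2) == 0
# degree 2 agrees with (1/2)[c1^2 - 2 c2 + c1.w] + n t2
assert sp.simplify(deg2 - (c1**2/2 - c2 + c1*w/2 + n*t2)) == 0
```

Applying $\pi_*$, together with $\pi_*\tau_2 = 0$, then returns the rank and the first Chern class asserted in the lemma.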
In this section the relations from Section~\\ref{sec-div}\nare used to give a formula for the divisor class of the virtual\ncanonical bundle. Hypothesis ~\\ref{hyp-GRR} holds for $\\pi$.\n\n\\medskip\\noindent\nDenote by $L_{(\\pi,f)}$ the cotangent complex of the morphism\n$(\\pi,f):C\\rightarrow M\\times X$. This is a perfect complex of\namplitude $[-1,0]$. There is a distinguished triangle,\n$$\n\\begin{CD}\nL_\\pi @>>> L_{(\\pi,f)} @>>> f^* \\Omega_X[1] @>>> L_\\pi[1].\n\\end{CD}\n$$\nThere is a slight variation $L_{(\\pi,f,s)}$ taking into account the\nsections which fits into a distinguished triangle,\n$$\n\\begin{CD}\nL_\\pi(s_1(M)+\\dots+s_r(M)) @>>> L_{(\\pi,f,s)} @>>> f^*\\Omega_X[1] @>>>\nL_\\pi(s_1(M)+\\dots+s_r(M))[1]. \n\\end{CD}\n$$\nThe complex $E^\\bullet$ is defined\nto be $(R\\pi_*(L_{(\\pi,f,s)}^\\vee)[1])^\\vee$, where $(F^\\bullet)^\\vee$\nis $R\\textit{Hom}(F^\\bullet,\\mathcal O)$. In particular,\n$\\text{det}(E^\\bullet)$ is the determinant of\n$R\\pi_*(L_{(\\pi,f,s)}^\\vee)$. From the distinguished triangle,\n$\\text{det}(E^\\bullet)$ is\n$$\n\\text{det}(R\\pi_*\nR\\textit{Hom}_{\\mathcal O_C}(\\Omega_\\pi(s_1(M)+\\dots+s_r(M)),\\mathcal O_C)) \n\\otimes \\text{det}(R\\pi_* f^*T_X)^\\vee.\n$$\nBy Lemma~\\ref{lem-rel11}, the first term is known. \nThe second term\nfollows easily from Grothendieck-Riemann-Roch.\n\n\\begin{lem} \\label{lem-TX}\n\\marpar{lem-TX}\nAssume that the relative degree of $f^*C_1(\\Omega_X)$ is nonzero.\nThen $R\\pi_*f^*T_X[-1]$ has rank $\\langle -f^*C_1(\\Omega_X),\\beta\n\\rangle + n$, and up to torsion the first Chern class of the\ndeterminant is,\n$$\n\\begin{array}{c}\n\\frac{1}{2\\langle -f^*C_1(\\Omega_X),\\beta \\rangle} \\left[ 2\\langle\n-f^*C_1(\\Omega_X), \\beta \\rangle \\pi_* f^* C_2(\\Omega_X) \\right. 
\\\\\n\\\\\n - (\\langle\n-f^* C_1(\\Omega_X),\\beta \\rangle + 1) \\pi_* f^* C_1(\\Omega_X)^2 +\n\\\\ \\\\\n\\left.\n{\\sum}' \\langle -f^*C_1(\\Omega_X),\\beta' \\rangle \\langle\n-f^*C_1(\\Omega_X),\\beta'' \\rangle \\Delta_{\\beta',\\beta''}\\right].\n\\end{array}\n$$\n\\end{lem}\n\n\\begin{proof}\nThe Todd class $\\tau_\\pi$ of $\\pi$ is $1 - \\frac{1}{2}C_1(\\omega_\\pi)\n+ \\tau_2 + \\dots$, \nwhere $\\pi_*\\tau_2 = 0$. The Chern character of $f^*T_X$ is,\n$$\nn - f^*C_1(\\Omega_X) + \\frac{1}{2}(f^*C_1(\\Omega_X)^2 -\n2f^*C_2(\\Omega_X)) + \\dots\n$$\nTherefore $\\text{ch}(f^*T_X)\\cdot \\tau_\\pi$ equals,\n$$\nn -\n\\left[f^*C_1(\\Omega_X)+\\frac{n}{2}C_1(\\omega_\\pi) \\right] + \\frac{1}{2}\\left[\nf^*C_1(\\Omega_X)^2 - 2f^*C_2(\\Omega_X) + f^*C_1(\\Omega_X)\\cdot\nC_1(\\omega_\\pi) \\right] + n\\tau_2 + \\dots\n$$\nApplying $\\pi_*$ and using that $\\pi_*\\tau_2 = 0$, the rank is\n$n+\\langle -f^*C_1(\\Omega_X),\\beta \\rangle$, and the determinant has\nfirst Chern class,\n$$\n\\frac{1}{2}\\pi_*\\left[ f^*C_1(\\Omega_X)^2 - 2f^*C_2(\\Omega_X) \\right] +\n\\frac{1}{2} \\pi_*(f^*C_1(\\Omega_X)\\cdot C_1(\\omega_\\pi)).\n$$\nApplying Lemma~\\ref{lem-rel1} and simplifying gives the relation.\n\\end{proof} \n\n\\begin{prop} \\label{prop-vc}\n\\marpar{prop-vc}\nThe rank of $E^\\bullet$ is $\\langle -f^*C_1(\\Omega_X),\\beta \\rangle +\nn + r - 3.$ The following divisor class relations hold modulo torsion.\nIf $\\langle -f^*C_1(\\Omega_X),\\beta \\rangle\\neq 0$ and $r=0$,\nthe first Chern class of the virtual canonical bundle\nis,\n\\begin{equation}\n\\begin{array}{c}\n\\frac{1}{2\\langle -f^*C_1(\\Omega_X),\\beta \\rangle} \\left[ 2\\langle\n-f^*C_1(\\Omega_X), \\beta \\rangle \\pi_* f^* C_2(\\Omega_X) \\right. 
\\\\\n\\\\\n- (\\langle\n-f^* C_1(\\Omega_X),\\beta \\rangle + 1) \\pi_* f^* C_1(\\Omega_X)^2 +\n\\\\ \\\\\n\\left.\n{\\sum}' (\\langle -f^*C_1(\\Omega_X),\\beta' \\rangle \\langle\n-f^*C_1(\\Omega_X),\\beta'' \\rangle -4\\langle -f^*C_1(\\Omega_X),\\beta\n\\rangle) \\Delta_{\\beta',\\beta''}\\right].\n\\end{array}\n\\end{equation}\nIf $\\langle -f^*C_1(\\Omega_X),\\beta \\rangle \\neq 0$ and $r=1$, \nthe first Chern class of the virtual canonical bundle is,\n\\begin{equation}\n\\begin{array}{c}\n\\frac{1}{2\\langle -f^*C_1(\\Omega_X),\\beta \\rangle} \\left[ 2\\langle\n-f^*C_1(\\Omega_X), \\beta \\rangle \\pi_* f^* C_2(\\Omega_X) \\right. \\\\\n\\\\\n- (\\langle\n-f^* C_1(\\Omega_X),\\beta \\rangle + 1) \\pi_* f^* C_1(\\Omega_X)^2 +\n\\\\ \\\\\n\\left.\n{\\sum}' (\\langle -f^*C_1(\\Omega_X),\\beta' \\rangle \\langle\n-f^*C_1(\\Omega_X),\\beta'' \\rangle -4\\langle -f^*C_1(\\Omega_X),\\beta\n\\rangle) \\Delta_{\\beta',\\beta''}\\right] \\\\\n\\\\\n-\\pi_*(s_1(M)\\cdot s_1(M)).\n\\end{array}\n\\end{equation}\nIf $\\langle -f^*C_1(\\Omega_X),\\beta \\rangle \\neq 0$ and $r\\geq 2$, \nthe first Chern class of the virtual canonical bundle is,\n\\begin{equation}\n\\begin{array}{c}\n\\frac{1}{2\\langle -f^*C_1(\\Omega_X),\\beta \\rangle} \\left[ 2\\langle\n-f^*C_1(\\Omega_X), \\beta \\rangle \\pi_* f^* C_2(\\Omega_X) \\right. 
\\\\\n\\\\\n- (\\langle\n-f^* C_1(\\Omega_X),\\beta \\rangle + 1) \\pi_* f^* C_1(\\Omega_X)^2 +\n\\\\ \\\\\n\\left.\n{\\sum}' (\\langle -f^*C_1(\\Omega_X),\\beta' \\rangle \\langle\n-f^*C_1(\\Omega_X),\\beta'' \\rangle -4\\langle -f^*C_1(\\Omega_X),\\beta\n\\rangle) \\Delta_{\\beta',\\beta''}\\right] \\\\\n\\\\\n+ \\frac{1}{r-1} \\sum_{(A,B), 1\\in\n A} \\#B(r-\\#B)\\Delta_{(A,B)}.\n\\end{array}\n\\end{equation}\n\\end{prop}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Background}\n\nThe Parkes-MIT-NRAO (PMN) 4.85~GHz survey (Griffith~\\&~Wright~1993) covered the entire southern sky to a limiting flux density of 30~mJy (varying slightly with declination); it increased the number of known southern radio sources by a factor of approximately 6. Wright et al. (1996) made simultaneous 4.80\/8.64~GHz Australia Telescope Compact Array (ATCA) measurements of a complete sample of 8068 PMN sources defined by the flux density limits $S_{4.85~\\rm{GHz}} \\geq 70$~mJy $(-73^\\circ < \\delta < -38^\\circ.5)$, $S_{4.85~\\rm{GHz}} \\geq 50$~mJy $(-87^\\circ < \\delta < -73^\\circ)$ and the requirement that galactic latitude $|b| \\geq 2^\\circ$. The Wright et al. ATCA observations resulted in: positions accurate to (standard error) $0.6''$; spectral data and structural information of resolution approximately $1''$. This dataset is the largest deep, complete, unbiased, dual-frequency, interferometric catalogue of southern radio sources. It covers $19~\\%$ of the sky, yet this area contains only 1 of the 39 previously known cases of gravitational lensing classed as \\emph{probable\\ lenses} by the \\emph{CASTLES}\\footnote{$\\rm{http:\/\/cfa-www.harvard.edu\/castles\/castles.edu}$} collaboration (Figure 1). The resolution ($\\sim 1''$) samples the peak of the image separation histogram for the 14 JVAS\/CLASS lenses published prior to these proceedings (Browne~et~al.,~1998, Koopmans~et~al.,~1999, Myers~et~al.,~1999). 
The 2178 flat-spectrum sources in these data constitute our finding survey.\n\n\nCandidate gravitational lenses were selected using an automated routine, based on objective criteria, designed to reduce sample contamination by radio galaxies and objects within, or behind, the Galactic plane. Radio galaxies are often observed to exhibit steepening radio spectral indices with increasing separation from the core (e.g. Wiita \\& Gopal-Krishna, 1990); objects showing such spectral steepening were not included in the sample. Simulated ATCA observations of known northern hemisphere lenses (transposed to declinations within the surveyed region) were used to test and refine candidate selection.\n\nThe 110 flat-spectrum objects satisfying our selection criteria of component fluxes $S_{\\nu}> 5 \\sigma~(\\sim~30$~mJy) (at both 4.80 and 8.64~GHz) and $|b| \\geq 5^\\circ$ were chosen for further study at radio, infrared and optical wavelengths. New simultaneous 4.80\/8.64 GHz, dual linear polarisation ATCA observations of each lens candidate have been made. These new data (of resolution $\\sim 1''$ at 8.64~GHz) exhibit a factor of 8 improvement in S:N, and vastly improved coverage of the interferometric u-v plane, over the data in hand. Total intensity and polarisation maps of all objects have been made and examined. Re-running the automated lens candidate finder rejected half of the observed objects. We have obtained deep near-infrared $K_N$-Band images of 35 sources remaining in the lens candidate sample, deep optical B-Band images (and colours) for 19 of these objects, and 1 spectrum. \n\n\n\\begin{figure}\n\\caption{\\small{Aitoff projection showing the positions of known gravitational lenses, their discovery method and the sky coverage of the current major surveys for flat radio spectrum gravitational lenses. The hatched region is covered by this survey (the region shaded black represents the un-surveyed area of radius $3^\\circ$ centered on the southern celestial pole). 
The region bounded by solid black lines is the area ($0^\\circ > \\delta > -40^\\circ$, $|b| > 10^\\circ$) surveyed by the MIT group (Winn et al., these proceedings); the regions of the northern sky for which $|b| > 10^\\circ$ (dashed line) are covered by CLASS (Browne et al., these proceedings).}}\n\\plotfiddle{oprouton1.eps}{5.15cm}{90}{40}{40}{125}{-50}\n\\end{figure}\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{intro}\n\nBlandford and Payne \\cite{blandpayne} showed angular momentum transfer via large magnetic stresses, \nin the absence of alpha-viscosity, in the framework of self-similar\nKeplerian disk flows. Here, we show in a simpler 1.5-dimensional, vertically averaged\ndisk model that the Maxwell stresses due to strong magnetic fields are adequate \nfor angular momentum transfer even in advective accretion flows, without any self-similar assumption, \ndescribing the hard spectral states of black hole sources. \n\nThe idea of exploring magnetic stress in order to explain astrophysical systems is not\nreally new. It was implemented, for example, in the solar wind, which was understood to \nhave decreased the Sun's angular momentum through the effect\nof magnetic stresses (see, e.g., Ref.~\\refcite{waber}), and in the proto-stellar gas clouds \nwhich might have been contracted by magnetic effects \\cite{Mouschovias}. \nOzernoy and Usov \\cite{usov} and Blandford \\cite{bland} showed that energy can be \nextracted continuously by electromagnetic torques and\ntwisted field lines in accretion disks. By linear stability analysis of accretion disks,\nCao and Spruit \\cite{cao} showed that angular momentum can be removed by the \nmagnetic torque exerted by a centrifugally driven wind. 
\nHowever, by solving the local vertical structure\nof a geometrically thin accretion disk threaded by a poloidal magnetic field, \nOgilvie and Livio \\cite{ogil} showed the shortcoming of launching an outflow and suggested\nfor an existence of additional source of energy for its successful launching.\n\nHere we demonstrate, semi-analytically, the effects of strong magnetic field, stronger than\nthat needed for magnetorotational instability (MRI) \\cite{sujit}, with \nplasma$-\\beta > 1$ yet, on to the vertically averaged advective accretion flows in \nvertical equilibrium in order to transport matter. \nTherefore, we consider the flow variables to depend on the radial coordinate only. \nAlthough, in reality, a non-zero vertical magnetic field should induce a vertical motion,\nin the platform of the present assumption, any vertical motion will be featured as an outward motion. \nIndeed, our aim here is to furnish removal of angular momentum from the flow via\nmagnetic stresses, independent of its vertical or outward transport.\n\n\n\n\n\\section{Basic model equations}\\label{model}\n\nWe describe optically thin, magnetized, viscous, axisymmetric, advective, vertically averaged, steady-state accretion\nflow, in the pseudo-Newtonian framework with the Mukhopadhyay \\cite{m02} potential. 
Hence, the equation \nof continuity, vertically averaged hydromagnetic equations for energy-momentum balance in different directions are \ngiven by (assuming that the dimensionless variables do not vary significantly in the vertical direction such that \n$\\partial\/ \\partial z \\sim s_i\/h$ and, as a consequence, the vertical component of velocity is zero),\n\\begin{eqnarray}\n\\nonumber\n&&\\dot{M}=4\\pi x \\rho h \\vartheta,~~~ \n\\vartheta\\frac{d\\vartheta}{dx}+\\frac{1}{\\rho}\\frac{dP}{dx}-\\frac{\\lambda^{2}}{x^3}+F=\\frac{1}{4\\pi \\rho}\\left(B_x\\frac{dB_x}{dx}+s_1\\frac{B_xB_z}{h}-\\frac{B_{\\phi}^2}{x}\\right),\\\\\n\\nonumber\n&&\\vartheta\\frac{d\\lambda}{dx}=\\frac{1}{x\\rho}\\frac{d}{dx}\\left(x^2W_{x\\phi}\\right)+\\frac{x}{4\\pi \\rho}\\left(B_x\\frac{dB_{\\phi}}{dx}+s_2\\frac{B_zB_{\\phi}}{h}+\\frac{B_xB_{\\phi}}{x}\\right),\\\\\n&&\\frac{P}{\\rho h}=\\frac{Fh}{x}-\\frac{1}{4\\pi \\rho}\\left(B_x\\frac{dB_z}{dx}+s_3\\frac{B_z^2}{h}\\right),\\\\\n\\nonumber\n&&\\vartheta T\\frac{ds}{dx}=\\frac{\\vartheta}{\\Gamma_3-1}\\left(\\frac{dP}{dx}-\\frac{\\Gamma_1P}{\\rho}\\frac{d\\rho}{dx}\\right)\n=Q^+-Q^-=Q^+_{vis}+Q^+_{mag}-Q^-_{vis}-Q^-_{mag},\n\\end{eqnarray}\nwhere $W_{x\\phi}=\\alpha(P+\\rho \\vartheta^2)$ and $\\alpha$ the Shakura-Sunyaev viscosity parameter \\cite{ss}.\nNote that $s_1$, $s_2$ and $s_3$ are the degrees of vertical scaling for the radial, azimuthal and vertical components of the \nmagnetic field respectively. 
\nHere $\\dot{M}$ is the \nconserved mass accretion rate, $\\rho$ the mass density of the flow, $\\vartheta$ the radial velocity, $P$ the total \npressure including the magnetic contribution, $F$ the force corresponding to the pseudo-Newtonian potential for \nrotating black holes \\cite{m02}, $\\lambda$ the angular momentum per unit mass, $W_{x\\phi}$ the viscous shearing stress written \nfollowing the Shakura-Sunyaev prescription \\cite{ss} with appropriate modification \\cite{mg03},\n$h \\sim z$ the half-thickness, and $x$ the radial coordinate of the disk, when both of \nthem are expressed in units of $GM\/c^2$, where $G$ is Newton's gravitational constant, $M$ the mass of the black hole, \n$c$ the speed of light, $s$ the entropy per unit volume, \n$T$ the (ion) temperature of the flow, and $Q^+$ and $Q^-$ are the net rates of energy released and radiated out per unit \nvolume in\/from the flow respectively (when $Q^+_{vis}$, $Q^+_{mag}$, $Q^-_{vis}$, $Q^-_{mag}$ are the respective \ncontributions from viscous and magnetic parts). All the variables are made dimensionless in the same spirit as the \ndimensionless $x$ and $z$. We further assume, for the present purpose, the heat radiated \nout to be proportional to the released rate, with the proportionality constants $(1 - f_{vis})$ and $(1 - f_m)$, respectively, \nfor the viscous and magnetic parts of the radiation. $\\Gamma_1$, $\\Gamma_3$, which are functions of the polytropic constant $\\gamma$, \nindicate the polytropic indices depending \non the gas and radiation content in the flow (see, e.g., Ref.~\\refcite{rm1}, for exact expressions), \nand $B_x$, $B_{\\phi}$ and $B_z$ are the components of the magnetic field. 
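Since all lengths above are measured in units of $GM\/c^2$, it is convenient to keep the conversion to physical units explicit. A minimal sketch (standard CGS constants; the helper name is ours, not from the text):

```python
G = 6.674e-8        # gravitational constant [cm^3 g^-1 s^-2]
C_LIGHT = 2.998e10  # speed of light [cm s^-1]
M_SUN = 1.989e33    # solar mass [g]

def grav_radius_cm(mass_in_msun):
    """The length unit GM/c^2, in cm, for a black hole of the given mass."""
    return G * mass_in_msun * M_SUN / C_LIGHT**2

# GM/c^2 is ~1.5e6 cm for a 10 M_sun black hole and ~1.5e12 cm for a
# 10^7 M_sun supermassive black hole, so a dimensionless radius x
# corresponds to x times these lengths.
```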
The model for $Q^+_{vis}$ is taken from the \nprevious work \\cite{rm1}, and the relation for $Q^+_{mag}$ is taken from that by Bisnovatyi-Kogan and \nRuzmaikin \\cite{bis}.\n\nThe hydromagnetic flow equations must be supplemented by the (for the present purpose, steady-state) equations of induction\nand no magnetic monopole in the limit of very large Reynolds number, given by\n\\begin{eqnarray}\n\\nabla \\times (\\vec{v} \\times \\vec{B})=0,~~~\n\\frac{d}{dx}(xB_x)+s_3\\frac{B_z}{h}=0,\n\\end{eqnarray}\nwhere $\\vec{v}$ and $\\vec{B}$ are respectively the velocity and magnetic field vectors; the term containing the magnetic \ndiffusivity $\\nu_m$ drops out in this limit. \n\n\\section{Solutions and Results}\\label{sol}\n\nWe take into account two situations. (1) Flows with a relatively higher $\\dot{M}$ and, hence, lower $\\gamma$, \nmodelled around stellar mass black holes: such flows may or may not form Keplerian accretion disks. (2) Flows with \na lower $\\dot{M}$ and, hence, higher $\\gamma$, modelled around supermassive black holes: such flows are necessarily \nhot gas dominated advective (or advection dominated) accretion flows.\n\nWe find that flows with plasma-$\\beta > 1$, but $\\alpha=0$, exhibit adequate matter transport, as efficient as \n$\\alpha$-viscosity with $\\alpha= 0.08$, but without magnetic stresses, would provide. This is interesting as the origin of $\\alpha$ (and the corresponding \ninstability and turbulence) is\nitself not well understood. The maximum required large scale magnetic field is $\\sim 10^5$G in a disk\naround $10M_{\\odot}$ black holes and $\\sim10$G in a disk around $10^7M_{\\odot}$ supermassive black holes,\nwhere $M_\\odot$ is the solar mass. 
\nThe presence of such a field, in particular for a stellar mass black hole disk when the binary companion supplying \nmass is a Sun-like star with\nan average magnetic field of 1G, may be understood if the field is approximately frozen with the disk fluids (or the\nsupplied fluids from the companion star remain approximately frozen with the magnetic field) or if the disk fluids exhibit a very\nlarge Reynolds number. Indeed, all the present computations are done in the limit of large Reynolds number, as really\nis the case in accretion flows, such that the term associated with the magnetic diffusivity in the induction equation is\nneglected. The size of a disk around supermassive black holes is proportionately larger compared to that around\na stellar mass black hole. Hence, from equipartition arguments, the magnetic field here is indeed expected to be weaker \nthan that around stellar mass black holes. Figure 1 shows a typical set of accretion solutions and confirms that\nflows around a stellar mass black hole with $\\alpha$-viscosity but without large scale magnetic fields (viscous flow)\nshow similar behavior to those with large scale magnetic fields \nwithout $\\alpha$-viscosity (magnetic flow). The flows around a supermassive black hole show very similar features, except\nwith reduced field strength. 
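The quoted field strengths tie directly to the plasma-$\beta$ profiles of the solutions: with the usual definition $\beta = 8\pi P_{gas}\/B^2$, the condition $\beta > 1$ caps the field at the local equipartition value. A one-line sketch (illustrative only; no pressure value is taken from the text):

```python
import math

def field_from_beta(p_gas, beta):
    """Magnetic field strength [G] for gas pressure p_gas [erg cm^-3]
    and plasma beta = 8*pi*p_gas / B^2 (the usual definition)."""
    return math.sqrt(8.0 * math.pi * p_gas / beta)

# beta > 1 means B < sqrt(8*pi*p_gas), the local equipartition field
```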
For other details, see Ref.~\\refcite{kou}.\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[angle=0,width=2.5in]{f1}\n\\end{center}\n\\caption{\n(a) Mach number, (b) angular momentum per unit mass in $GM\/c$, (c) inverse of plasma-$\\beta$,\n(d) azimuthal component of magnetic field in G,\nwhere solid and dotted lines are for magnetic flows around\nSchwarzschild ($a=0, \\lambda_c=3.2$) and Kerr ($a=0.998, \\lambda_c=1.8$) black holes respectively,\nand dashed and long-dashed lines are for viscous flows\naround Schwarzschild ($a=0, \\alpha=0.017, \\lambda_c=3.15$) and Kerr ($a=0.998, \\alpha=0.012, \\lambda_c=1.8$)\nblack holes respectively, where $a$ is the dimensionless spin parameter of the black hole and $\\lambda_c$ the angular momentum per unit mass at the critical radius.\nOther parameters are $M=10M_\\odot$, $\\dot{M}=0.1$ Eddington rate,\npolytropic constant $\\gamma=1.335, f_{vis}=f_m=0.5, s_2=-0.5$.\n}\n\\label{stmasm}\n\\end{figure}\n\n\nLet us now explore in more detail how various components of magnetic stress lead to\nangular momentum transfer in the flows. \nFigure \\ref{stmasstr}a shows that the stress component $B_x B_z$ around a Schwarzschild\nblack hole increases almost throughout\nas matter advances towards the black hole. This implies that the flow is prone\nto outflow through the field lines, which effectively aids the\ninfall of matter towards the black hole. However, in the near vicinity of the black hole, $B_x B_z$ decreases,\nas indeed outflow is not possible therein. \nIn this flow zone, the angular momentum becomes very small, which practically does\nnot affect the infall.
The magnitude of $B_\\phi B_z$ decreases down to the\ninner region of the accretion flow, implying that part of the matter spirals out\nand, hence, removes angular momentum, leading to infall of matter.\nFinally, the magnitude of $B_x B_\\phi$\nincreases at large and small distances from the black hole (except around\nthe transition radius), which helps remove angular momentum and furthers infall. \nThis is what the Shakura-Sunyaev viscous stress would do with increasing matter pressure.\nHowever, in the intermediate zone,\nthe angular momentum transfer through $B_x B_\\phi$ reverses and a part of\nthe matter outflows. At the Keplerian to sub-Keplerian transition zone,\ndue to the increase of flow thickness, matter is effectively kicked vertically,\nshowing a decrease of $B_x B_\\phi$. The majority of the features remain similar\nfor the flow around a rotating black hole, as shown in Fig. \\ref{stmasstr}b.\nHowever, a rotating black hole reveals a stronger\/more efficient outflow\/jet in general. Hence, except at the inner zone,\n$B_x B_\\phi$ decreases throughout, which helps kick the matter outwards by \ntransferring the angular momentum inwards. \n\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[angle=0,width=2.5in]{f2}\n\\end{center}\n\\vskip-4cm\n\\caption{\nComponents of magnetic stress: $B_x B_\\phi$ (solid line),\n$B_x B_z$ (dotted line), $B_\\phi B_z$ (dashed line), for (a) the Schwarzschild\nmagnetic flow of Fig. \\ref{stmasm}, (b) the Kerr magnetic flow of Fig. \\ref{stmasm}.\n}\n\\label{stmasstr}\n\\end{figure}\n\nDifferent components of the magnetic stress tensor have different roles: $B_xB_{\\phi}$ controls the infall in the disk plane,\nwhereas $B_{\\phi}B_z$ causes the flow to spiral outwards and, hence, outflow. Moreover, $B_xB_z$ helps to kick the \nmatter out vertically. The larger the field strength, the larger the power of the magnetic stresses.
Interestingly, \nthe magnitude of the magnetic\nfield decreases as the steady-state matter advances towards the black hole. This is primarily because $B_{\\phi}B_z$ \n(and also $B_xB_{\\phi}$ for a rotating black hole) decreases inwards almost entirely in order to induce inflow\nvia angular momentum transfer through outflow. \nThis further reveals\na decreasing $|B_{\\phi}|$ as the output of self-consistent solutions of the coupled set of equations.\n\n\n\\section{Discussion and Conclusions}\\label{last}\n\nIs there any observational support for the existence of such a magnetic field, as required for the magnetic accretion\nflows discussed here? Interestingly, the polarization measurements in the hard state of Cyg X-1 imply that it should\nhave a field of at least 10 mG at the source of emission \\cite{lau}. In order to explain such high polarization, a\njet model was suggested by Zdziarski et al. \\cite{zd}, which requires a magnetic field $\\sim (5 - 10) \\times 10^5$G \nat the base of the jet and hence in the underlying accretion disk. \n\nIn the present computations, we have assumed the flow to be vertically averaged without allowing any vertical\ncomponent of the flow velocity. The most self-consistent approach\nto understanding the vertical transport of matter through magnetic effects, which in turn leads to the radial\ninfall of the rest of the matter, is to let the flow move in the vertical direction away from the disk plane as well.\nSuch an attempt, in the absence of magnetic and viscous effects, was made earlier by one of the present authors \\cite{deb} in the model\nframework of coupled disk-outflow systems. In such a framework, the authors further showed that the outflow\npower of the correlated disk-outflow systems increases with the increasing spin of black holes.
Our future goal is\nto combine that model with the model of the present work, so that the coupled disk-outflow systems can be investigated\nmore self-consistently and rigorously, in which the magnetic field plays an indispensable role in generating vertical\nflux in the three-dimensional flows.\n\n\n\n\n\\section*{Acknowledgments}\nK.C. thanks the Academies' Summer Research Fellowship Programme of India for offering him a Fellowship to\npursue his internship at the Indian Institute of Science, Bangalore, \nwhere most of the calculations of this project were done.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nRecently, high resolution X-ray images using {\\it Chandra} have\nrevealed 49 point sources in the Antennae \\citep{zez02a}. We will\nassume a distance to the Antennae of 19.3 Mpc (for $H_{0}$=75 km\ns$^{-1}$ Mpc$^{-1}$), which implies 10 sources have X-ray luminosities\ngreater than $10^{39}$ ergs s$^{-1}$. Since new observations of\nred giant stars in the Antennae indicate a distance of 13.8 Mpc\n\\citep{sav04}, we point out that this ultraluminous X-ray source population\ncould decrease by roughly half. Typically, masses of black holes\nproduced from standard stellar evolution are less than $\\sim20$ \n$M_{\\odot}$ \\citep[e.g.,][]{fry01}. The Eddington luminosity limit\nimplies that X-ray luminosities $>10^{39}$ ergs s$^{-1}$ correspond to\nhigher-mass objects not formed from a typical star. Several authors\n\\citep[e.g.,][]{fab89,zez99,rob00,mak00} suggest these massive ($10\n\\sbond 1000$ $M_{\\sun}$) compact sources outside galactic nuclei are\nintermediate mass black holes (IMBHs), a new class of BHs.
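The Eddington argument above, and the effect of the distance revision, can be checked with back-of-the-envelope numbers; a minimal sketch in which the coefficient $1.26\times10^{38}$ erg s$^{-1}$ per solar mass is the standard value for hydrogen accretion (an assumption of ours, not a number quoted in the text):

```python
# Back-of-the-envelope check of the Eddington argument (illustrative only).
L_EDD_PER_MSUN = 1.26e38  # erg/s per solar mass, standard hydrogen value

def min_mass_msun(l_x):
    """Minimum accretor mass (M_sun) if l_x (erg/s) is at most Eddington."""
    return l_x / L_EDD_PER_MSUN

# A 1e39 erg/s source needs ~8 M_sun at the Eddington limit, so sources well
# above 1e39 erg/s point to objects more massive than typical stellar remnants.
m_min = min_mass_msun(1.0e39)

# Fluxes are what we measure, so the inferred L scales as distance squared:
# revising 19.3 Mpc down to 13.8 Mpc roughly halves all luminosities.
rescale = (13.8 / 19.3) ** 2
```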
While\nIMBHs could potentially explain the observed high luminosities, other\ntheories exist as well, including beamed radiation from a stellar mass\nBH \\citep{kin01}; super-Eddington accretion onto lower-mass objects\n\\citep[e.g.,][]{moo03,beg02}; or supernovae exploding in dense\nenvironments \\citep{ple95,fab96}.\n\nCompact objects tend to be associated with massive star formation,\nwhich is strongly suspected to be concentrated in young stellar\nclusters \\citep{lad03}. Massive stars usually end their lives in\nsupernovae, producing a compact remnant. This remnant can be kicked\nout of the cluster due to dynamical interactions, stay behind after\nthe cluster evaporates, or remain embedded in its central regions.\nThis last case is of particular interest to us as the compact object\nis still {\\it in situ}, allowing us to investigate its origins via the\nambient cluster population. The potential for finding such\nassociations is large in the Antennae due to large numbers of both\nX-ray point sources and super star clusters; a further incentive for\nstudying these galaxies.\n\nIn \\citet[henceforth Paper I]{bra05} we presented $J$ and $K_s$\nphotometry of $\\sim220$ clusters in the Antennae. Analysis of ($J -\nK_s$) colors indicated that many clusters in the overlap region suffer\nfrom 9--10 mag of extinction in the $V$-band. This result contrasts\nwith previous work by \\citet{whi02} who associated optical sources\nwith radio counterparts in the Antennae \\citep{nef00} and argued that\nextinction is not large in this system. Here, we continue our\nanalysis of these Antennae IR images by making a frame-tie between the\nIR and {\\it Chandra} X-ray images from \\citet{fab01}. Utilizing the\nsimilar dust-penetrating properties of these wavelengths, we\ndemonstrate the power of this approach to finding counterparts to\nX-ray sources. 
By comparing the photometric properties of clusters\nwith and without X-ray counterparts, we seek to understand the cluster\nenvironments of these X-ray sources. In \\S2 we discuss the IR\nobservations of the Antennae. \\S3 explains our matching technique and\nthe photometric properties of the IR counterparts. We conclude with a\nsummary of our results in \\S4.\n\n\\section{Observations and Data Analysis}\n\n\\subsection{Infrared Imaging}\n\nWe obtained near-infrared images of NGC 4038\/9 on 2002 March 22 using\nthe Wide-field InfraRed Camera (WIRC) on the Palomar 5-m telescope.\nAt the time of these observations, WIRC had been commissioned with an\nunder-sized HAWAII-1 array (prior to installation of the full-sized\nHAWAII-2 array in September 2002), providing a $\\sim 4.7 \\times\n4.7$-arcminute field of view with $\\sim0\\farcs25$ pixels (``WIRC-1K''\n-- see \\citet{wil03} for details). Conditions were non-photometric\ndue to patches of cloud passing through. Typical seeing-limited\nimages had stellar full-width at half-maximum of $1\\farcs0$ in $K_s$\nand $1\\farcs3$ in $J$. We obtained images in both the $J$- ($1.25\n\\mu$m) and $K_s$-band ($2.15 \\mu$m) independently. The details of the\nprocessing used to obtain the final images are given in Paper I.\n\n\\subsection{Astrometric Frame Ties}\n\nThe relative astrometry between the X-ray sources in NGC 4038\/9 and\nimages at other wavelengths is crucial for successful identification\nof multi-wavelength counterparts. Previous attempts at this have\nsuffered from the crowded nature of the field and confusion between\npotential counterparts \\citep{zez02b}. However, the infrared waveband\noffers much better prospects for resolving this issue, due to the similar\ndust-penetrating properties of photons in the {\\it Chandra} and $K_s$\nbands. (See also \\citet{bra05} for a comparison of IR extinction to\nthe previous optical\/radio extinction work of \\citet{whi02}.)
We thus\nproceeded using the infrared images to establish an astrometric\nframe-tie, i.e. matching {\\it Chandra} coordinates to IR pixel\npositions.\n\nAs demonstrated by \\citet{bau00}, we must take care when searching for\nX-ray source counterparts in crowded regions such as the Antennae.\nTherefore, our astrometric frame-tie used a unique approach based on\nsolving a two-dimensional linear mapping function relating right\nascension and declination coordinates in one image with x and y pixel\npositions in a second image. The solution is of the form:\n\n\\begin{eqnarray}\nr_1 = ax_1+by_1+c, \\\\\nd_1 = dx_1+ey_1+f\n\\end{eqnarray}\n\nHere $r_1$ and $d_1$ are the right ascension and declination,\nrespectively, for a single source in one frame corresponding to the\n$x_1$ and $y_1$ pixel positions in another frame. This function\naccounts for the offset, rotation, and scale between the frames. Since we\nare interested in solving for the coefficients $a$--$f$, elementary\nlinear algebra indicates we need six equations or three separate\nmatches. Therefore, at least three matches are needed to solve for the\nmapping, and more than three to characterize the rms positional\nuncertainty of the frame-tie.\n\nWe first used the above method to derive an approximate astrometric\nsolution for the WIRC $K_s$ image utilizing the presence of six\nrelatively bright, compact IR sources which are also present in images\nfrom the 2-Micron All-Sky Survey (2MASS). We calculated pixel\ncentroids of these objects in both the 2MASS and WIRC images, and used\nthe 2MASS astrometric header information to convert the 2MASS pixel\ncentroids into RA and Dec.
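The linear mapping above is conveniently solved by least squares when more than three matches are available; a minimal sketch with synthetic match lists (assuming NumPy; the coefficient values are arbitrary):

```python
import numpy as np

# Least-squares solution of the frame-tie r = a*x + b*y + c, d = d*x + e*y + f
# from matched (x, y) <-> (r, d) pairs; all data here are synthetic.
def fit_frame_tie(xy, rd):
    n = len(xy)
    A = np.column_stack([xy[:, 0], xy[:, 1], np.ones(n)])
    coef_r, *_ = np.linalg.lstsq(A, rd[:, 0], rcond=None)  # (a, b, c)
    coef_d, *_ = np.linalg.lstsq(A, rd[:, 1], rcond=None)  # (d, e, f)
    resid = A @ np.column_stack([coef_r, coef_d]) - rd
    rms = np.sqrt(np.mean(resid**2))  # rms residual of the frame-tie
    return coef_r, coef_d, rms

rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 1024.0, size=(7, 2))      # 7 matched pixel positions
true = np.array([[1e-4, 2e-6, 180.0],           # a, b, c (arbitrary)
                 [3e-6, -1e-4, -18.0]])         # d, e, f (arbitrary)
rd = xy @ true[:, :2].T + true[:, 2]            # exact sky coordinates

coef_r, coef_d, rms = fit_frame_tie(xy, rd)     # recovers a--f; rms ~ 0
```

With exactly three matches the fit is exact and the rms is trivially zero; with seven matches, as used in the text, the rms residual becomes a meaningful measure of the frame-tie accuracy.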
These sources are listed in Table 1.\nApplying these six matches to our fitting function we found a small\nrms positional uncertainty of $0\\farcs2$, which demonstrates an\naccurate frame-tie between the 2MASS and WIRC images.\n\nUsing the 2MASS astrometric solution as a baseline, we identified\nseven clear matches between {\\it Chandra} and WIRC sources, which had\nbright compact IR counterparts with no potentially confusing sources\nnearby (listed in Table 2). We then applied the procedure described\nabove, using the {\\it Chandra} coordinates listed in Table 1 of\n\\citet{zez02a} (see that reference for details on the {\\it Chandra}\nastrometry) and the WIRC pixel centroids, and derived the astrometric\nsolution for the IR images in the X-ray coordinate frame. For the 7\nmatches, we find an rms residual positional uncertainty of $\\sim\n0\\farcs5$ which we adopt as our $1 \\sigma$ position uncertainty. We\nnote that the positional uncertainty is an entirely {\\it empirical}\nquantity. It shows the achieved uncertainty in mapping a target from\none image reference frame to the reference frame in another band, and\nautomatically incorporates all contributing sources of uncertainty in\nit. These include, but are not limited to, systematic uncertainties\n(i.e. field distortion, PSF variation, etc. in both {\\it Chandra} and\nWIRC) and random uncertainties (i.e. centroid shifts induced by photon\nnoise, flatfield noise, etc. in both {\\it Chandra} and WIRC). Thus,\ngiven the empirical nature of this uncertainty, we expect it to\nprovide a robust measure of the actual mapping error -- an expectation\nwhich seems to be borne out by the counterpart identification in the\nfollowing section.\n\nTo further test the accuracy of our astrometric solution we explored the range\nin rms positional uncertainties for several different frame-ties.\nSpecifically we picked ten IR\/X-ray matches separated by\n$<$1$\\arcsec$, which are listed in Table 3 (see \\S3.1). 
Of these ten\nwe chose 24 different combinations of seven matches resulting in 24\nunique frame-ties. Computing the rms positional uncertainty for each,\nwe found a mean of 0$\\farcs$4 with a 1$\\sigma$ uncertainty of\n0$\\farcs$1. Since the rms positional uncertainty for the\nframe-tie used in our analysis falls within 1$\\sigma$ of this mean\nrms, we conclude that we made an accurate astrometric match between the\nIR and X-ray frames.\n\n\n\\subsection{Infrared Photometry}\n\nWe performed aperture photometry on 222 clusters in the $J$-band and\n221 clusters in the $K_s$-band (see also Paper I). We found that the\nfull width at half maximum (FWHM) was 3.5 pixels ($0\\farcs9$) in the\n$K_s$ image and 4.6 pixels ($1\\farcs2$) in the $J$ band. We used a\nphotometric aperture of 5-pixel radius in $K_s$ band, and 6-pixel\nradius in $J$ band, corresponding to $\\sim 3\\sigma$ of the Gaussian\nPSF.\n\nBackground subtraction is both very important and very difficult in an\nenvironment such as the Antennae due to the brightness and complex\nstructure of the underlying galaxies and the plethora of nearby\nclusters. In order to address the uncertainties in background\nsubtraction, we measured the background in two separate annuli around\neach source: one from 9 to 12 pixels and another from 12 to 15 pixels.\nDue to the high concentration of clusters, crowding became an issue.\nTo circumvent this problem, we employed sky background arcs\ninstead of annuli for some sources. These were defined by a position\nangle and opening angle with respect to the source center. All radii\nwere kept constant to ensure consistency. In addition, nearby bright\nsources could shift the computed central peak position by as much as a\npixel or two. If the centroid position determined for a given source\ndiffered significantly ($>1$ pixel) from the apparent brightness peak\ndue to such contamination, we forced the center of all photometric\napertures to be at the apparent brightness peak.
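The two-annulus background strategy just described can be sketched on a synthetic image; the mean and median sky estimates from each annulus give four background-subtracted fluxes per source (the image, source brightness, and noise level below are invented for illustration):

```python
import numpy as np

# Sketch of aperture photometry with two background annuli (synthetic data).
def annulus_mask(shape, cx, cy, r_in, r_out):
    y, x = np.indices(shape)
    r = np.hypot(x - cx, y - cy)
    return (r >= r_in) & (r < r_out)

def aperture_flux(img, cx, cy, r_ap=5, annuli=((9, 12), (12, 15))):
    ap = annulus_mask(img.shape, cx, cy, 0, r_ap)
    raw = img[ap].sum()
    fluxes = []
    for r_in, r_out in annuli:              # two annuli ...
        sky = img[annulus_mask(img.shape, cx, cy, r_in, r_out)]
        for est in (sky.mean(), np.median(sky)):   # ... mean and median each
            fluxes.append(raw - est * ap.sum())
    # mean of the four values is the flux; their scatter probes the sky error
    return np.mean(fluxes), np.std(fluxes)

rng = np.random.default_rng(1)
img = np.full((64, 64), 10.0) + rng.normal(0.0, 0.1, (64, 64))  # flat sky
img[32, 32] += 1000.0                        # synthetic point source, 1000 DN
flux, sigma_sky = aperture_flux(img, 32, 32)
```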
For both annular\nregions, we calculated the mean and median backgrounds per pixel.\n\nWe multiplied these by the area of the central aperture and subtracted the\nresults from the flux measurement of the central aperture to\nyield four flux values for the source in terms of DN. Averaging these\nfour values provided us with a flux value for each cluster. We\ncomputed errors by considering both variations in sky background,\n$\\sigma_{sky}$, and Poisson noise, $\\sigma_{adu}$. We computed\n$\\sigma_{sky}$ by taking the standard deviation of the four measured\nflux values. We then calculated the expected Poisson noise by scaling\nDN to $e^-$ using the known gain of WIRC \\citep[2 $e^{-}$\nDN$^{-1}$,][]{wil03} and taking the square root of this value. We\nadded both terms in quadrature to find the total estimated error in\nphotometry.\n\nWe then calibrated our photometry using the bright 2MASS star in Table 1.\n\n\\section{Results and Discussion}\n\n\\subsection{Identification of IR Counterparts to {\\it Chandra} Sources}\n\nWe used the astrometric frame-tie described above to identify IR\ncounterpart candidates to {\\it Chandra} X-ray sources in the WIRC\n$K_s$ image. We restricted our analysis to sources brighter than\n$K_s\\sim19.4$ mag. This is our $K_s$ sensitivity limit which we\ndefine in our photometric analysis below (see \\S3.2.1). Using the\n$0\\farcs5$ rms positional uncertainty of our frame-tie, we defined\ncircles with 2$\\sigma$ and 3$\\sigma$ radii around each {\\it Chandra}\nX-ray source where we searched for IR counterparts (Figure 1). If an\nIR source lay within a $1\\farcs0$ radius ($2\\sigma$) of an X-ray\nsource, we labeled these counterparts as ``strong''. Those IR sources\nbetween $1\\farcs0$ and $1\\farcs5$ ($2-3 \\sigma$) from an X-ray source\nwe labeled as ``possible'' counterparts. We found a total of 13\nstrong and 6 possible counterparts to X-ray sources in the Antennae.\nThese sources are listed in Table 3 and shown in Figure 3.
Of the 19\nX-ray sources with counterparts, two are the nuclei \\citep{zez02a},\none is a background quasar \\citep{cla05}, and two share the same IR\ncounterpart. Therefore, in our analysis of cluster properties, we\nonly consider the 15 IR counterparts that are clusters. (While\nX-42 has two IR counterparts, we chose the closer, fainter cluster for\nour analysis.)\n\nWe then attempted to estimate the level of ``contamination'' of these\nsamples due to chance superposition of unrelated X-ray sources and IR\nclusters. This estimation can be significantly complicated by the\ncomplex structure and non-uniform distribution of both X-ray sources\nand IR clusters in the Antennae, so we developed a simple, practical\napproach. Given the $<0\\farcs5$ rms residuals in our relative\nastrometry for sources in Table 2, we assume that any IR clusters\nlying in a background annulus with radial size of\n$2\\farcs0$--$3\\farcs0$ ($4-6\\sigma$) centered on all X-ray source\npositions are chance alignments, with no real physical connection (see\nFigure 1). Dividing the total number of IR sources within the\nbackground annuli of the 49 X-ray source positions by the total area\nof these annuli, we find a background IR source surface density of\n0.02 arcsecond$^{-2}$ near {\\it Chandra} X-ray sources. Multiplying\nthis surface density by the total area of all ``strong'' regions\n($1\\farcs0$ radius circles) and ``possible'' regions ($1\\farcs0$ --\n$1\\farcs5$ annuli) around the 49 X-ray source positions, we estimated\nthe level of source contamination contributing to our ``strong'' and\n``possible'' IR counterpart candidates. 
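The chance-superposition estimate above is a density-times-area calculation; a minimal sketch using the rounded surface density quoted in the text (note that with the rounded value of 0.02 arcsec$^{-2}$ the expectations come out near three for both regions, matching the quoted counts of two and three only to within that rounding):

```python
import math

# Expected chance superpositions = background surface density * search area.
def expected_contaminants(density, n_sources, r_in, r_out):
    """density in sources/arcsec^2; radii in arcsec; area summed over sources."""
    area = n_sources * math.pi * (r_out**2 - r_in**2)
    return density * area

density = 0.02   # IR sources per square arcsecond (rounded value from the text)
n_xray = 49      # number of Chandra source positions searched

n_strong = expected_contaminants(density, n_xray, 0.0, 1.0)    # r < 1"
n_possible = expected_contaminants(density, n_xray, 1.0, 1.5)  # 1" - 1.5"
```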
We expect two with a\n$1\\sigma$ uncertainty of +0.2\/-0.1\\footnote{Found using confidence\nlevels for small number statistics listed in Tables 1 and 2 of\n\\citet{geh86}.} of the 13 ``strong'' counterparts to be due to chance\nsuperpositions, and three with a $1\\sigma$ uncertainty of\n+0.5\/-0.3\\footnotemark[\\value{footnote}] of the six ``possible''\ncounterparts to be chance superpositions.\n\nThis result has several important implications. First of all, it is\nclear that we have a significant excess of IR counterparts within\n$1\\farcs0$ of the X-ray sources -- 13, where we expect only two in the\nnull hypothesis of no physical counterparts. Even including the\n``possible'' counterparts out to $1\\farcs5$, we have a total of 19\ncounterparts, where we expect only five from chance superposition.\nSecondly, this implies that for any given ``strong'' IR counterpart,\nwe have a probability of $\\sim$ 85\\% ($11\/13$ with a $1\\sigma$\nuncertainty of 0.3\\footnotemark[\\value{footnote}]) that the\nassociation with an X-ray source is real. Even for the ``possible''\ncounterparts, the probability of true association is $\\sim$50\\%. These\nlevels of certainty are a tremendous improvement over the\nX-ray\/optical associations provided by \\citet{zez02b}, and are strong\nmotivators for follow-up multi-wavelength studies of the IR\ncounterparts. Finally, we can also conclude from the strong concentration\nof IR counterparts within $\\sim 1 \\arcsec$ of X-ray sources that the\nframe-tie uncertainty estimates described above are reasonable.\n\nFigure 2 is a $4\\farcm3\\times4\\farcm3$ $K_s$ image of the Antennae\nwith X-ray source positions overlaid. We designate those X-ray\nsources with counterparts using red circles. Notice that those\nsources with counterparts lie in the spiral arms and bridge region of\nthe Antennae.
Since these regions are abundant in star formation,\nthis seems to indicate that many of the X-ray sources in the Antennae are\ntied to star formation in these galaxies.\n\n\\subsection{Photometric Properties of the IR Counterparts}\n\n\\subsubsection{Color Magnitude Diagrams}\n\nUsing the 219 clusters that had both $J$ and $K_s$ photometry, we made\n$(J-K_s)$ versus $K_s$ color magnitude diagrams (Figure 4). We\nestimated our sensitivity limit by first finding all clusters with\nsignal-to-noise $\\sim5\\sigma$. The mean $J$ and $K_s$ magnitudes for\nthese clusters were computed separately and defined as cutoff values\nfor statistical analyses. This yielded 19.0 mag in $J$ and 19.4 mag\nin $K_s$. We note that the clusters with X-ray counterparts are generally bright in the\nIR compared to the general population of clusters. While the IR\ncounterpart for one X-ray source (X-32) falls below our $J$-band\nsensitivity limit, its $K_s$ magnitude is still above our $K_s$\ncutoff. Therefore, we retained this source in our analysis.\n\nWe then broke down the X-ray sources into three luminosity classes\n(Figure 4). We took the absorption-corrected X-ray luminosities,\n$L_X$, as listed in Table 1 of \\citet{zez02a} for all sources of\ninterest. These luminosities assumed a distance to the Antennae of 29\nMpc. We used 19.3 Mpc (for $H_{0}$=75 km s$^{-1}$ Mpc$^{-1}$) instead\nand so divided these values by 2.25 as suggested in \\citet{zez02a}.\nWe defined the three X-ray luminosity classes as follows: Low Luminosity\nX-ray sources (LLX's) had $L_{X}$ $<$ 3$\\times$$10^{38}$ ergs\ns$^{-1}$, High Luminosity X-ray sources (HLX's) were between $L_{X}$\nof 3$\\times10^{38}$ ergs s$^{-1}$ and 1$\\times10^{39}$ ergs\ns$^{-1}$, while $L_{X}$ $>$ 1$\\times10^{39}$ ergs s$^{-1}$ were\nUltra-Luminous X-ray Sources (ULX's). In Figure 4 we designate each\nIR counterpart according to the luminosity class of its corresponding\nX-ray source.
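The luminosity classes and the distance rescaling defined above can be collected in a short helper; the factor $(29/19.3)^2 \approx 2.25$ reproduces the divisor quoted in the text (sketch only):

```python
# Luminosity classes and distance rescaling as defined in the text (sketch).
def rescale_lx(lx_29mpc, d_new=19.3, d_old=29.0):
    """Rescale a published luminosity to a new distance (flux is fixed)."""
    return lx_29mpc * (d_new / d_old) ** 2

def luminosity_class(lx):
    """LLX / HLX / ULX boundaries at 3e38 and 1e39 erg/s."""
    if lx < 3e38:
        return "LLX"
    if lx < 1e39:
        return "HLX"
    return "ULX"

factor = (29.0 / 19.3) ** 2   # ~2.25, the divisor applied to the published values
```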
There does not appear to be a noticeable trend in the\nIR cluster counterparts between these different groupings.\n\n\\subsubsection{Absolute K Magnitudes}\n\nTo further study the properties of these IR counterparts, we\ncalculated $M_{K_s}$ for all IR clusters. We calculated reddening\nusing the observed colors, $(J-K_s)_{obs}$ (henceforth the ``color\nmethod''). Assuming all clusters are dominated by O and B stars,\ntheir intrinsic $(J-K_s)$ colors are $\\sim$0.2 mag. Approximating\nthis value as 0 mag allowed us to estimate $A_{K_s}$ as $\\simeq$\n$(J-K_s)_{obs}$\/1.33 using the extinction law defined in\n\\citet[hereafter CCM]{car89}. Since these derived reddenings are\nbiased towards young clusters, they will lead to an overestimate of\n$M_{K_s}$ for older clusters.\n\nFor IR counterparts to X-ray sources, we also computed X-ray-estimated\n$A_{K_s}$ using the column densities, $N_{H}$, listed in Table 5 of\n\\citet{zez02a}. Here, $N_{H}$ is derived by fitting both a Power Law\n(PL) and Raymond-Smith \\citep[RS;][]{ray77} model to the X-ray spectra.\nUsing the CCM law, $A_{K_s}$ is defined as 0.12$A_{V}$. Taking\n$A_{V}$ = $5\\times10^{-22}$ mag cm$^2$ $N_{H}$, we could then derive\n$A_{K_s}$.\n\nWe then compared $A_{K_s}$ calculated using the ``color method'' to\n$A_{K_s}$ found using the above two $N_{H}$ models. We found the\n``color method'' matched most closely with $N_{H}(PL)$ for all except one\n(the cluster associated with {\\it Chandra} source 32 as designated in\n\\citet{zez02b}).\n\nIn Figure 5, we plot histograms of the distribution of $K_s$-band\nluminosity, $M_{K_s}$. Figure 5 displays all clusters as well as over\nplotting only those with X-ray counterparts. Notice that the clusters\nwith associated X-ray sources look more luminous. To study whether\nthis apparent trend in luminosity is real, we compared these two\ndistributions using a two-sided Kolmogorov-Smirnov (KS) test.
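The two reddening estimates described above reduce to simple conversions under the CCM law; a minimal sketch (the example color and column density are hypothetical):

```python
# CCM-law reddening conversions used above (sketch; inputs are hypothetical).
def aks_from_color(j_minus_ks_obs):
    """Color method: A_Ks ~ (J - Ks)_obs / 1.33, intrinsic (J-Ks) taken as 0."""
    return j_minus_ks_obs / 1.33

def aks_from_nh(n_h):
    """X-ray method: A_V = 5e-22 mag cm^2 * N_H, then A_Ks = 0.12 * A_V."""
    a_v = 5e-22 * n_h          # N_H in cm^-2
    return 0.12 * a_v

a_color = aks_from_color(0.8)  # a cluster with (J - Ks) = 0.8 mag (hypothetical)
a_xray = aks_from_nh(2e22)     # N_H = 2e22 cm^-2 -> A_V = 10 mag (hypothetical)
```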
In our\nanalysis, we only included clusters with $M_{K_s}$ $<$ $-13.2$ mag.\nRestricting our study to sources with ``good'' photometry, we first\ndefined a limit in $K_s$, 18.2 mag, using the limiting $J$ magnitude,\n19.0 mag as stated above, and, since the limit in $K_s$ is a function\nof cluster color, the median $(J-K_s)$ of 0.8 mag. Subtracting the\ndistance modulus to the Antennae, 31.4 mag, from this $K_s$ limit, we\ncomputed our cutoff in $M_{K_s}$. Since all clusters with X-ray\nsources fall below this cutoff, our subsample is sufficient to perform\na statistical comparison.\n\nThe KS test yielded a D-statistic of 0.37 with a probability of\n$3.2\\times10^{-2}$ that the two distributions of clusters with and\nwithout associated X-ray sources are drawn from the same parent distribution. Considering the\nseparate cluster populations as two probability distributions, each\ncan be expressed as a cumulative distribution. The D-statistic is\nthen the absolute value of the maximum difference between the two\ncumulative distributions. This test indicates that those clusters with\nX-ray counterparts are more luminous than most clusters in the\nAntennae.\n\n\\subsubsection{Cluster Mass Estimates}\n\n\\citet{whi99} found 70\\% of the bright clusters observed with the {\\it\nHubble Space Telescope} have ages $<$20 Myr. Therefore, in this study\nwe will assume all clusters are typically the same age, $\\sim$20 Myr.\nThis allows us to make the simplifying assumption that cluster mass is\nproportional to luminosity and ask: Does the cluster mass affect the\npropensity for a given progenitor star to produce an X-ray binary? We\nestimated cluster mass using $K_s$ luminosity ($M_{K_s}$). Since\ncluster mass increases linearly with flux (for an assumed constant age\nof all clusters), we converted $M_{K_s}$ to flux. Using the data as\nbinned in the $M_{K_s}$ histogram (Figure 5), we calculated an average\nflux per bin.
By computing the fraction of the number of clusters per\naverage flux, we are in essence asking what is the probability of\nfinding a cluster with a specific mass. Since those clusters with\nX-ray sources are more luminous, we expect a higher probability of\nfinding an X-ray source in a more massive cluster. As seen in Figure\n6, this trend does indeed appear to hold. Applying a KS-test between the\ndistributions for all clusters and those associated with X-ray sources\nfor clusters below the $M_{K_s}$ completeness limit defined in the\nprevious section, we find a D-statistic of 0.66 and a probability of\n$7.2\\times10^{-3}$ that they are drawn from the same distribution. Hence, the two\ndistributions are distinct, indicating at a statistically significant level\nthat more massive clusters are more likely to contain X-ray sources.\n\nWhile we assume all clusters are $\\sim$20 Myr above, we note that the\nactual range in ages is $\\sim$1--100 Myr \\citep{whi99}.\nBruzual-Charlot spectral photometric models \\citep{bru03} indicate\nthat clusters in this age range could vary by a factor of roughly 100\nin mass for a given $K_s$ luminosity. Thus, we emphasize that the\nanalysis above should be taken as suggestive rather than conclusive\nevidence, and note that in a future paper (Clark, et al. 2006, in\npreparation) we explore this line of investigation and the impacts of\nage variations on the result in depth.\n\n\\subsubsection{Non-detections of IR Counterparts to X-ray Sources}\n\nTo assess whether our counterpart detections were dependent on\nreddening or on intrinsic brightness, we found limiting values for\n$M_{K_s}$ for those X-ray sources without detected IR counterparts.\nWe achieved this by setting all clusters' $K_s$ magnitudes equal to our\ncompleteness limit defined for the CMDs (19.4 mag; see \\S3.2.1) and\nthen finding $M_{K_s(lim)}$ for each using $A_{K_s}$ calculated for\nthat cluster.
Since $M_{K_s(lim)}$ is theoretical and only depends on\nreddening, we could now find this limit for all X-ray sources using an\n$A_{K_s}$ estimated from the observed $N_{H}$ values. Thus we\nconsidered all IR counterparts (detections) and those X-ray sources\nwithout a counterpart (nondetections). If nondetections are due to\nreddening, there should be no difference in $M_{K_s(lim)}$\nbetween detections and nondetections. In contrast, if nondetections\nare intrinsically fainter, we expect a higher $M_{K_s(lim)}$ for these\nsources. In the case of detections, we considered reddening from both\nthe ``color method'' and the $N_{H}(PL)$ separately. For nondetections,\nwe could only use $N_{H}(PL)$ reddening. Figure 7 shows\n$M_{K_s(lim)}$ appears higher for all nondetections. To test if this\nobservation is significant, we applied a KS-test to investigate\nwhether detections and nondetections are separate distributions. We\nfind a D-statistic of 0.82 and a probability of $8.8\\times10^{-6}$ that\nthese two distributions are the same using the ``color method'' for\ndetections. Considering the $N_{H}(PL)$ reddening method for\ndetections instead, the D-statistic drops to 0.48 and the probability\nincreases to $3.9\\times10^{-2}$. Since both tests indicate these\ndistributions are distinct, the observed high $M_{K_s(lim)}$ for\nnondetections seems to be real. This leads to the conclusion that\nthese sources were undetected because they are intrinsically IR-faint,\nand that reddening does not play the dominant role in nondetections.\n\nWe summarize these statistics in Table 4. Here we calculated the mean\n$K_s$, $(J-K_s)$, and $M_{K_s}$ for three different categories: 1) all\nclusters, 2) clusters only connected with X-ray sources, and 3) these\nclusters broken down by luminosity class. We also include\nuncertainties in each quantity.
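The two-sided KS comparisons used throughout this section reduce to the maximum distance between two empirical cumulative distributions; a minimal sketch on synthetic magnitude samples (assuming NumPy; the sample sizes and offsets are invented):

```python
import numpy as np

# Two-sided KS D statistic: maximum |difference| between the empirical CDFs
# of two samples, evaluated at every data point. Synthetic data below.
def ks_d_statistic(a, b):
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / a.size
    cdf_b = np.searchsorted(b, grid, side="right") / b.size
    return np.max(np.abs(cdf_a - cdf_b))

rng = np.random.default_rng(2)
all_clusters = rng.normal(-14.0, 1.0, 200)  # M_Ks, full population (synthetic)
xray_hosts = rng.normal(-16.0, 1.0, 20)     # brighter subsample (synthetic)

d = ks_d_statistic(all_clusters, xray_hosts)  # large D -> distinct populations
```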
Notice that the IR counterparts\nappear brighter in $K_s$ and intrinsically more luminous than most\nclusters in the Antennae, although there is no significant trend in\ncolor. We also summarize the above KS-test results in Table 5.\n\n\\section{Conclusions}\n\nWe have demonstrated a successful method for finding counterparts to\nX-ray sources in the Antennae using IR wavelengths. We mapped {\\it\nChandra} X-ray coordinates to WIRC pixel positions with a positional\nuncertainty of $\\sim 0\\farcs5$. Using this precise frame-tie we\nfound 13 ``strong'' matches ($< 1\\farcs0$ separation) and 6\n``possible'' matches ($1\\farcs0 - 1\\farcs5$ separation) between X-ray\nsources and IR counterparts. After performing a spatial and\nphotometric analysis of these counterparts, we reached the following\nconclusions:\n\n1. We expect only 2 of the 13 ``strong'' IR counterparts to be chance\nsuperpositions. Including all 19 IR counterparts, we estimated 5 are\nunrelated associations. Clearly, a large majority of the X-ray\/IR\nassociations are real.\n\n2. The IR counterparts tend to reside in the spiral arms and bridge\nregion between these interacting galaxies. Since these regions\ncontain the heaviest amounts of star formation, it seems evident that\nmany of the X-ray sources are closely tied to star formation in this\npair of galaxies.\n\n3. A $K_s$ vs. $(J - K_s)$ CMD reveals that those clusters associated with\nX-ray sources are brighter in $K_s$ but there does not seem to be a\ntrend in color. Separating clusters by the X-ray luminosity classes\nof their X-ray counterpart does not reveal any significant trends.\n\n4. Using reddenings derived from $(J - K_s)$ colors as well as from\nX-ray-derived $N_H$, we computed $K_s$-band luminosities for all clusters.\nA comparison reveals those clusters associated with X-ray sources are\nmore luminous than most clusters in the Antennae.
A KS-test indicates\na significant difference between X-ray counterpart clusters and the\ngeneral population of clusters.\n\n5. By relating flux to cluster luminosity, simplistically assuming a\nconstant age for all clusters, we estimated cluster masses. Computing\nthe fraction of the number of clusters per average flux, we estimated the\nprobability of finding a cluster with a specific mass. We find that more\nmassive clusters are more likely to contain X-ray sources, even after\nwe normalize by mass.\n\n6. We computed a theoretical, limiting $M_{K_s}$ for all counterparts\nto X-ray sources in the Antennae using X-ray-derived reddenings.\nComparing detections to non-detections, we found that those clusters with\nX-ray sources are intrinsically more luminous in the IR.\n\nIn a future paper exploring the effects of cluster mass on XRB\nformation rate (Clark et al. 2006a, in preparation), we will\ninvestigate the effects of age on cluster luminosity and hence on our\ncluster mass estimates. Another paper will extend our study of the\nAntennae to optical wavelengths (Clark et al. 2006b, in preparation).\nThrough an in-depth, multi-wavelength investigation we hope to achieve\na more complete picture of counterparts to several X-ray sources in\nthese colliding galaxies.\n\n\acknowledgments\n\nThe authors thank the staff of Palomar Observatory for their excellent\nassistance in commissioning WIRC and obtaining these data. WIRC was\nmade possible by support from the NSF (NSF-AST0328522), the Norris\nFoundation, and Cornell University. S.S.E. and D.M.C. are supported\nin part by an NSF CAREER award (NSF-9983830). We also thank\nJ.R. 
Houck for his support of the WIRC instrument project.\n\n\\vfill \\eject\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nWe consider realizations of linear dynamical systems that are variously characterized as positive real, passive, or port-Hamiltonian.\nWe restrict ourselves to \\emph{linear time-invariant systems} represented as\n\\begin{equation} \\label{statespace}\n \\begin{array}{rcl} \\dot x & = & Ax + B u \\quad \\mbox{with}\\ x(0)=0,\\\\\ny&=& Cx+Du,\n\\end{array}\n\\end{equation}\nwhere $u:\\mathbb R\\to\\mathbb{C}^m$, $x:\\mathbb R\\to \\mathbb{C}^n$, and $y:\\mathbb R\\to\\mathbb{C}^m$ are vector-valued functions denoting, respectively, the \\emph{input}, \\emph{state},\nand \\emph{output} of the system. The coefficient matrices $A\\in \\mathbb{C}^{n \\times n}$, $B\\in \\mathbb{C}^{n \\times m}$, $C\\in \\mathbb{C}^{m \\times n}$, and $D\\in \\mathbb{C}^{m \\times m}$ are constant.\nReal and complex $n$-vectors ($n\\times m$ matrices) are denoted by $\\mathbb R^n$, $\\mathbb C^{n}$\n($\\mathbb R^{n \\times m}$, $\\mathbb{C}^{n \\times m}$), respectively.\nWe refer to (\\ref{statespace}) concisely in terms of a four-tuple of matrices describing the realization ${\\mathcal M}:=\\left\\{A,B,C,D\\right\\}$.\n\nOur principal focus is on the structure of \\emph{passive} systems and relationships with positive-real transfer functions and port-Hamiltonian system representations.\nA nice introduction to passive systems can be found in the seminal papers of Willems (\\cite{Wil71}, \\cite{Wil72a}, \\cite{Wil72b}), where a general notion of system passivity is introduced and linked to related system-theoretic notions such as positive realness and stability. 
Willems refers to the earlier works of Kalman \\cite{Kal63}, Popov \\cite{Pop73}, and Yakubovich \\cite{Yak62}, where versions of what are now called Kalman-Yakubovich-Popov (KYP) conditions were derived.\nRenewed interest in these ideas came from the study of port-Hamiltonian (pH) systems, which may be viewed as particular parameterizations of passive systems that arise from certain energy-based modeling frameworks (see e.g. \\cite{Sch04}, \\cite{Sch13}, \\cite{SchJ14}). The KYP conditions lead to characterizations of system passivity through the solution set of an associated \\emph{linear matrix inequality (LMI)}. The convexity of this solution set has led to extensive use of\nconvex optimization techniques in systems and control theory (see e.g. \\cite{BoyB90}, \\cite{NesN94}).\n\nThe solution set of the KYP-LMI leads to a natural parametrization of families of pH realizations for a given passive system.\nWith this observation, it is not surprising that some pH realizations of a given system reflect well the underlying robustness of passivity to system perturbations and that some pH realizations will do this better than others. Our main result shows that the analytic center of certain barrier functions associated with the KYP-LMI leads to favorable pH realizations in this sense; we derive computable bounds for the passivity\nradii for these realizations.\n\nThe paper is organized as follows. In Section~\\ref{sec:prelim}, we recall the KYP conditions and link the solution set of the KYP-LMI to different system realizations that reflect passivity. In Section~\\ref{sec:analytic} we review some basic concepts in convex analysis\n and introduce the concept of an \\emph{analytic center} associated with a barrier function for the KYP-LMI.\nIn Section~\\ref{sec:passrad} we define the passivity radius for a model realization ${\\mathcal M}$, measuring its robustness to system perturbations that may lead to a loss of passivity. 
We show that the analytic center of the KYP-LMI yields a model representation with good robustness properties. In Section~\ref{sec:criteria} we consider other measures that could also serve as criteria for robustness of model passivity.\nIn Section~\ref{sec:num} we illustrate our analytic results with a few numerical examples. We conclude with Section~\ref{sec:conclusion},\noffering also some points for further research.\n\n\section{Positive-realness, passivity, and\n\hfill \break \mbox{port-Hamiltonian systems} }\label{sec:prelim}\n\n\nWe restrict ourselves to linear time-invariant systems as in \eqref{statespace} which are \emph{minimal}, that is, the pair $(A,B)$ is \emph{controllable} (for all $s\in \mathbb C$, $\rank \mbox{\small $[\,s I-A \quad B\,]$} =n$ ), and the pair $(A,C)$ is \emph{observable} ($(A^\mathsf{H},C^\mathsf{H})$ is controllable). Here, the Hermitian (or conjugate) transpose (transpose) of a vector or matrix $V$ is denoted by\n$V^{\mathsf{H}}$ ($V^{\mathsf{T}}$). We also assume that $\rank B=\rank C=m$.\nWe require input\/output port dimensions to be equal ($m$) and for convenience we assume that the system is initially in a quiescent state, $x(0)=0$.
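The minimality assumption can be checked numerically. A minimal sketch using the Kalman rank test, which is equivalent to the Hautus-type rank condition quoted above (all matrices below are hypothetical examples):

```python
import numpy as np

def is_controllable(A, B, tol=None):
    """Kalman rank test: (A, B) is controllable iff the controllability
    matrix [B, AB, ..., A^{n-1}B] has full row rank n (equivalent to the
    Hautus condition rank [sI - A, B] = n for all s)."""
    n = A.shape[0]
    blocks, P = [], B
    for _ in range(n):
        blocks.append(P)
        P = A @ P
    return np.linalg.matrix_rank(np.hstack(blocks), tol=tol) == n

def is_minimal(A, B, C):
    # minimal = controllable and observable;
    # (A, C) is observable iff (A^H, C^H) is controllable
    return is_controllable(A, B) and is_controllable(A.conj().T, C.conj().T)

# hypothetical 2x2 example: a double integrator driven through the second state
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
print(is_minimal(A, B, C), is_controllable(A, np.array([[1.0], [0.0]])))
```

Driving the first state instead leaves the second state unreachable, which the rank test detects.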
By applying the Laplace transform to \\eqref{statespace} and eliminating the state, we obtain the \\emph{transfer function}\n\\begin{equation} \\label{ABCD}\n\\mathcal T(s):=D+C(sI_n-A)^{-1}B,\n\\end{equation}\nmapping the Laplace transform of $u$ to the Laplace transform of $y$.\n$I_n$ is the identity matrix in $\\mathbb{C}^{n \\times n}$\n(subsequently, the subscript may be omitted if the dimension is clear from the context).\nOn the imaginary axis $\\imath \\mathbb R$, $\\mathcal T(\\imath\\omega)$\ndescribes the \\emph{frequency response} of the system.\n\n\nWe denote the set of Hermitian matrices in $\\mathbb{C}^{n \\times n}$ by $\\Hn$.\nPositive definiteness (semidefiniteness) of $A\\in \\Hn$ is denoted by $A>0$ ($A\\geq 0$).\nThe set of all positive definite (positive semidefinite) matrices in $\\Hn$ is denoted $\\Hnpd$ ($\\Hnpsd$). The real and imaginary parts of a complex matrix, $Z$, are written as ${\\mathfrak{Re}} (Z)$ and ${\\mathfrak{Im}} (Z)$, respectively.\n\nWe proceed to review briefly some representations of linear systems associated with the notion of passivity.\n\\subsection{Positive-real systems}\\label{sec:posreal}\nConsider a system ${\\mathcal M} $ as in (\\ref{statespace}) and its transfer function $\\mathcal T$ as in \\eqref{ABCD}.\n\\begin{definition}\\label{def:posreal}\nA transfer function $\\mathcal T(s)$ is {\\em positive real}\nif the matrix-valued rational function\n\\begin{equation}\\label{defphi}\n\\Phi(s):= \\mathcal T^{\\mathsf{H}}(-s) + \\mathcal T(s)\n\\end{equation}\nis positive semidefinite for $s$ on the imaginary axis:\n$$\n\\Phi(\\imath\\omega)\\in \\Hmpsd \\quad \\mbox{ for all }\\omega\\in \\mathbb{R}.\n$$\n$\\mathcal T(s)$ is \\emph{strictly positive real} if $\\Phi(\\imath \\omega)\\in \\Hmpd $ for all $\\ \\omega\\in \\mathbb{R}$.\n\\end{definition}\nFor any $X \\in \\Hn$, define the matrix function\n\\begin{eqnarray*} \\label{prls}\nW(X) &:=& \\left[\n\\begin{array}{cc}\n-X\\,A - A^{\\mathsf{H}}X & C^{\\mathsf{H}} 
- X\\,B \\\\\nC- B^{\\mathsf{H}}X & D+D^{\\mathsf{H}}\n\\end{array}\n\\right]\\\\\n& =& W(0)- \\left[\n\\begin{array}{cc}\nX\\,A + A^{\\mathsf{H}}X & X\\,B \\\\\nB^{\\mathsf{H}}X & 0\n\\end{array}\n\\right].\n\\end{eqnarray*}\n From \\eqref{ABCD} and (\\ref{defphi}), simple manipulations produce\n\\begin{eqnarray*}\n\\Phi\\,(s) &= &\n\\left[ \\begin{array}{cc} B^{\\mathsf{H}}(-s\\,I_n - A^{\\mathsf{H}})^{-1} & I_m \\end{array} \\right]\n\\, W(0) \\left[ \\begin{array}{c} (s\\,I_n -A)^{-1}B \\\\ I_m \\end{array} \\right] \\nonumber \\\\\n & =&\n\\left[ \\begin{array}{cc} B^{\\mathsf{H}}(-s\\,I_n - A^{\\mathsf{H}})^{-1} & I_m \\end{array} \\right]\n\\, W(X) \\left[ \\begin{array}{c} (s\\,I_n -A)^{-1}B \\\\ I_m \\end{array} \\right]. \\label{popovs}\n\\end{eqnarray*}\nDefine the \\emph{matrix pencils}:\n\\[\n\\mathcal L_{0}(s) =\ns \\left[ \\begin{array}{cc|c} 0 & I_n & 0\\\\[1mm]\n\t-I_n & 0 & 0\\\\[1mm]\n\t\\hline \\rule{0mm}{1.1em}\n 0 & 0 & 0 \\end{array} \\right]\n\t-\\left[ \\begin{array}{cc|c} 0 & A & B \\\\[1mm]\n\tA^{\\mathsf{H}} & 0 & C^{\\mathsf{H}} \\\\[1mm]\n \\hline \\rule{0mm}{1.1em}\n B^{\\mathsf{H}} & C & D+D^{\\mathsf{H}} \\end{array} \\right],\n\\]\nand, for any $X\\in \\Hn$,\n\\[\n\\mathcal L_{X}(s) =\n\\left[ \\begin{array}{cc|c} I_n & 0 & 0 \\\\\n-X & I_n & 0 \\\\ \\hline 0 & 0 & I_m \\end{array} \\right] \\mathcal L_{0}(s)\n\\left[ \\begin{array}{cc|c} I_n & -X & 0 \\\\\n0 & I_n & 0 \\\\ \\hline 0 & 0 & I_m \\end{array} \\right],\n\\]\nObserve for any $X\\in \\Hn$, $\\mathcal L_{X}(s)$ and $\\mathcal L_{0}(s)$ are equivalent pencils, since they are related\nvia a congruence transformation. 
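The last identity holds for every $X\in \Hn$, since on the imaginary axis the $X$-dependent terms cancel. This is easy to check numerically; the sketch below uses randomly generated (hypothetical) system matrices and an arbitrary Hermitian $X$:

```python
import numpy as np

H = lambda M: M.conj().T
rng = np.random.default_rng(1)
n, m = 4, 2
# hypothetical complex system matrices and an arbitrary Hermitian X
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
C = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
D = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
X0 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
X = 0.5 * (X0 + H(X0))

def W(X):
    return np.block([[-X @ A - H(A) @ X, H(C) - X @ B],
                     [C - H(B) @ X, D + H(D)]])

s = 1j * 0.7                      # a point on the imaginary axis
v1 = np.linalg.solve(s * np.eye(n) - A, B)   # (sI - A)^{-1} B
v = np.vstack([v1, np.eye(m)])
T = D + C @ v1                    # transfer function T(s)
Phi = H(T) + T                    # Phi(iw) = T(iw)^H + T(iw)
print(np.allclose(Phi, H(v) @ W(X) @ v), np.allclose(Phi, H(v) @ W(0 * X) @ v))
```

Both comparisons agree, illustrating that $\Phi(\imath\omega)$ does not depend on which Hermitian $X$ is inserted into $W(\cdot)$.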
Note also that $\\Phi(s)$ is\nthe \\emph{Schur complement} associated with the $(3,3)$ block of $\\mathcal L_{0}(s)$\n(and hence, also of $\\mathcal L_{X}(s)$ for any $X\\in \\Hn$).\n\nIf $\\mathcal T(s)$ is positive real,\nthen it is known \\cite{Wil71} that there exists $X\\in \\Hn$ such that the KYP-LMI holds, namely\n\\begin{equation} \\label{KYP-LMI}\nW(X) \\geq 0,\n\\end{equation}\nand so a factorization must exist:\n\\begin{equation} \\label{lw}\nW(X) =\n\\left[ \\begin{array}{c} L^{\\mathsf{H}} \\\\ M^{\\mathsf{H}} \\end{array} \\right]\\,\n\\left[ \\begin{array}{cc} L & M \\end{array} \\right]\n\\end{equation}\nfor $L\\in \\mathbb{C}^{r \\times n}$ and $M \\in \\mathbb{C}^{r \\times m} $,\n where $r=\\rank W(X)$.\n Introducing $\\mathcal G(s)= L(s\\,I_n - A)^{-1}B + M,$~one may then define the \\emph{spectral factorization} of $\\Phi(s)$ as\n\\begin{equation} \\label{spectral}\n\\Phi(s)= \\mathcal G^{\\mathsf{H}}(- s) \\mathcal G(s).\n\\end{equation}\nDefine the solution set and subsets to the KYP-LMI (\\ref{KYP-LMI}):\n\\begin{subequations}\\label{LMIsolnsets}\n\\begin{align}\n&{\\mathbb X}:=\\left\\{ X\\in \\Hn \\left|\\ W(X) \\geq 0 \\right.\\right\\}, \\label{XsolnWpsd} \\\\[1mm]\n&\\XWpd :=\\left\\{ X\\in \\Hn \\left| W(X) \\geq 0,\\ X >0 \\right.\\right\\} = \\Hnpd \\cap {\\mathbb X}, \\label{XpdsolnWpsd} \\\\[1mm]\n&\\XWpdpd :=\\left\\{ X\\in \\Hn \\left| W(X) > 0,\\ X >0 \\right.\\right\\}, \\label{XpdsolWpd}\n\\end{align}\n\\end{subequations}\nFor each $X\\in{\\mathbb X}$, there is a factorization of $W(X)$ of the form (\\ref{lw}) leading to a spectral factorization (\\ref{spectral}) of $\\Phi(s)$. 
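For an $X$ with $W(X)>0$, the factorization (\ref{lw}) can be realized with a Cholesky factor, and the resulting $\mathcal G(s)$ then reproduces $\Phi(\imath\omega)$ as in (\ref{spectral}). A hedged sketch; the test system is built in pH-type form purely so that a suitable $X=Q$ is known in advance (an assumption for the demo, with $r=n+m$ here since $W(Q)$ is positive definite):

```python
import numpy as np

H = lambda M: M.conj().T
rng = np.random.default_rng(2)
n, m = 3, 2
# hypothetical strictly passive model: A = (J - R)Q, B = G0, C = G0^H Q, D = I,
# for which W(Q) = diag(2QRQ, 2I) > 0 by construction
Js = rng.standard_normal((n, n)); J = Js - Js.T
Rs = rng.standard_normal((n, n)); R = Rs @ Rs.T + np.eye(n)
Qs = rng.standard_normal((n, n)); Q = Qs @ Qs.T + np.eye(n)
G0 = rng.standard_normal((n, m))
A, B, C, D = (J - R) @ Q, G0, G0.T @ Q, np.eye(m)

WQ = np.block([[-Q @ A - H(A) @ Q, H(C) - Q @ B],
               [C - H(B) @ Q, D + H(D)]])
F = H(np.linalg.cholesky(WQ))      # W(Q) = F^H F, with F = [L  M]
L, M = F[:, :n], F[:, n:]

s = 1j * 0.3
T = D + C @ np.linalg.solve(s * np.eye(n) - A, B)
Phi = H(T) + T
Gs = L @ np.linalg.solve(s * np.eye(n) - A, B) + M   # spectral factor G(s)
print(np.allclose(Phi, H(Gs) @ Gs))
```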
We are mainly interested in $\XWpd$ and $\XWpdpd$, which are,\nrespectively, the set of positive-definite solutions to the KYP-LMI (\ref{KYP-LMI}) and the subset of those solutions for which the KYP-LMI (\ref{KYP-LMI}) holds \emph{strictly}.\n\nAn important subset of ${\mathbb X}$ consists of those solutions to (\ref{KYP-LMI}) for which the\nrank $r$ of $W(X)$ is minimal ({i.e.}, for which $r=\rank\Phi(s)$). Let $S:= D+D^{\mathsf{H}}=\lim_{s\rightarrow\infty}\Phi(s)$.\nIf $S$ is nonsingular, then\nthe minimum rank solutions in $\XWpd$\nare those for which $\rank W(X) = \rank S = m$, which in turn is the case\nif and only if the Schur complement of $S$ in $W(X)$ is zero. This Schur\ncomplement is associated with the \emph{algebraic Riccati equation (ARE)}:\n\begin{multline}\n\mathsf{Ricc}(X) := -XA-A^{\mathsf{H}}X \\ -(C^{\mathsf{H}}-XB)S^{-1}(C-B^{\mathsf{H}}X)=0.\label{riccati}\n\end{multline}\nSolutions to (\ref{riccati}) directly produce a spectral factorization of $\Phi(s)$.\nIndeed, each solution $X$ of~\eqref{riccati} corresponds to a\n\emph{Lagrangian invariant subspace} spanned by the columns of $U:=\mat{cc} I_n & -X^{\mathsf{T}} \rix^{\mathsf{T}} $\nthat remains invariant under the action of the Hamiltonian matrix\n\begin{equation}\label{HamMatrix}\nH:=\mat{cc} A-B S^{-1} C & - B S^{-1} B^{\mathsf{H}} \\\nC^{\mathsf{H}} S^{-1} C & -(A-B S^{-1} C)^{\mathsf{H}} \rix.\n\end{equation}\n$U$ satisfies $HU=U A_F$ for a \emph{closed loop matrix} $A_F=A-BF$ with $F := S^{-1}(C-B^{\mathsf{H}}X)$ (see e.g., \cite{FreMX02}).\nEach solution $X$ of~\eqref{riccati} can also be associated with an \emph{extended Lagrangian invariant subspace}\nfor the pencil $\mathcal{L}_{0}(s)$ (see \cite{BenLMV15}), spanned by the columns of\n$ \widehat{U}:=\mat{ccc} -X^{\mathsf{T}}\n& I_n & -F^{\mathsf{T}} \rix^{\mathsf{T}}$.\n In particular, $\widehat{U}$ satisfies\n\[\n\left[ \begin{array}{ccc} 0 & A & B \\\nA^{\mathsf{H}} & 
0 & C^{\\mathsf{H}} \\\\ B^{\\mathsf{H}} & C & S \\end{array} \\right] \\widehat{U}\n =\\left[ \\begin{array}{ccc} 0 & I_n & 0\\\\\n\t-I_n & 0 & 0\\\\ 0 & 0 & 0 \\end{array} \\right] \\widehat{U} A_F,\n\\]\nsee also {e.g.} \\cite{IonOW98,Wil71}. The condition that $S$ is invertible is equivalent\nto the condition that the pencil $\\mathcal{L}_{0}(s)$ has \\emph{differentiation index one},\n{i.e.}, all eigenvalues at $\\infty$ are semi-simple, \\cite{KunM06}.\nIf $\\mathcal{L}_{0}(s)$ has no purely imaginary eigenvalues, then there are ${2n}\\choose{n}$ solutions $X\\in \\Hn$ of (\\ref{riccati}), each associated with an appropriate choice of a Lagrangian invariant subspace for $\\mathcal{L}_{0}(s)$. Every choice leads to different spectra for the closed loop matrix, $A_F$ (see \\cite{FreMX02,Wil71} for a parametrization of all possible Lagrangian subspaces).\nAmong the possible solutions of (\\ref{riccati}) there are two extremal solutions, $X_-$ and $X_+$. $X_-$ leads to a closed loop matrix, $A_F$, with spectra in the (open) left half-plane; $X_+$ leads to $A_F$ with spectra in the (open) right half-plane. 
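The extremal solution $X_-$ can be computed from the stable invariant subspace of the Hamiltonian matrix $H$ in (\ref{HamMatrix}). A hedged numerical sketch; the strictly passive test system is generated by a pH-type construction, which is an assumption for the demo:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 3, 2
# hypothetical strictly passive real system: A = (J - R)Q, B = G0, C = G0^T Q
Js = rng.standard_normal((n, n)); J = Js - Js.T
Rs = rng.standard_normal((n, n)); R = Rs @ Rs.T + np.eye(n)
Qs = rng.standard_normal((n, n)); Q = Qs @ Qs.T + np.eye(n)
G0 = rng.standard_normal((n, m))
A, B, C, D = (J - R) @ Q, G0, G0.T @ Q, np.eye(m)
S = D + D.T
Si = np.linalg.inv(S)

# Hamiltonian matrix; its stable invariant subspace, written as [U1; U2],
# yields the minimal solution X_- = -(U2 U1^{-1})^T of the ARE.
Ham = np.block([[A - B @ Si @ C, -B @ Si @ B.T],
                [C.T @ Si @ C, -(A - B @ Si @ C).T]])
w, V = np.linalg.eig(Ham)
Vs = V[:, w.real < 0]              # n eigenvectors with Re(lambda) < 0
U1, U2 = Vs[:n, :], Vs[n:, :]
Xm = np.real(-(U2 @ np.linalg.inv(U1)).T)
Xm = (Xm + Xm.T) / 2               # symmetrize against rounding

# residual of the Riccati equation at X_-
ricc = -Xm @ A - A.T @ Xm - (C.T - Xm @ B) @ Si @ (C - B.T @ Xm)
print(np.linalg.norm(ricc), np.linalg.eigvalsh(Xm).min())
```

Selecting the anti-stable eigenvectors instead produces $X_+$, whose closed-loop spectrum lies in the open right half-plane.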
All solutions $X$ of (\\ref{riccati}) are bracketed by $X_-$ and $X_+$:\n\\begin{equation} \\label{Xbounded}\n0\\leq X_- \\leq X \\leq X_+\n\\end{equation}\nand so, in this special case the set ${\\mathbb X}$ is bounded, but it may be empty or the solution may be unique if $X_-=X_+$, see Section~\\ref{sec:analytic}.\n\n\\subsection{Passive systems}\\label{sec:DisSys}\n\\begin{definition}\\label{def:passive}\nA system ${\\mathcal M} :=\\left\\{A,B,C,D\\right\\}$ is \\emph{passive} if there exists a state-dependent\n\\emph{storage function}, $\\mathcal H(x)\\geq 0$, such that for any $\\mu,t_0\\in \\mathbb R$ with $\\mu>t_0$,\n the \\emph{dissipation inequality} holds:\n\\begin{equation} \\label{supply} \\mathcal H(x(\\mu))-\\mathcal H(x(t_0)) \\le \\int_{t_0}^{\\mu} {\\mathfrak{Re}} (y(t)^{\\mathsf{H}}u(t)) \\, dt\n\\end{equation}\nIf for all $\\mu>t_0$, the inequality in \\eqref{supply}\nis strict then the system is \\emph{strictly passive}.\n\\end{definition}\nIn the terminology of (\\cite{Wil72a}),\n${\\mathfrak{Re}} (y(t)^{\\mathsf{H}}u(t))$ is the \\emph{supply rate} of the system.\nA general theory of dissipative systems (of which passive systems are a special case) was developed in the seminal papers \\cite{Wil71,Wil72a,Wil72b}, where links to earlier work by Kalman, Popov, and Yakubovich and the KYP-LMI (\\ref{KYP-LMI}) are given. Note that the original definition of passivity given by Willems was for real systems; we reformulate it here for complex systems.\n\\begin{theorem}[\\cite{Wil71}]\\label{LMIWil}\nSuppose the system ${\\mathcal M}$ of (\\ref{statespace}) is minimal. Then the KYP-LMI (\\ref{KYP-LMI}): $W(X)\\geq 0$,\nhas a solution $X\\in \\Hnpd$ if and only if ${\\mathcal M}$ is a passive system. 
If this is the case, then\n\begin{itemize}\n\item $\mathcal H(x):=\frac{1}{2}x^{\mathsf{H}}Xx$ defines a storage function associated with the supply rate ${\mathfrak{Re}}(y^\mathsf{H}u)$ satisfying the dissipation inequality \eqref{supply};\\\n\item there exist minimal and maximal solutions $X_- \leq X_+$ in $\Hnpd$ of \eqref{KYP-LMI},\nsuch that for all solutions, $X$, of \eqref{KYP-LMI}:\n %\n\[\n 0 < X_- \leq X \leq X_+.\n\]\n\end{itemize}\n\end{theorem}\nRecall that a matrix $A\in \mathbb{C}^{n \times n}$ is \emph{asymptotically stable} if all its eigenvalues are in the open left half plane and \emph{(Lyapunov) stable} if all its eigenvalues are in the closed left half plane, with any eigenvalues occurring on the imaginary axis being semisimple.\nTheorem~\ref{LMIWil} asserts that if $X>0$ is a solution of $W(X)\geq 0$, then the system ${\mathcal M}$ of \eqref{statespace} is stable, and if it satisfies $W(X)> 0$, then it is asymptotically stable, since $\mathcal H(x)$ is a Lyapunov function for ${\mathcal M}$, which is strict if $W(X)> 0$ (see {e.g.} \cite{LanT85}). Note, however, that for (asymptotic) stability of $A$ it is sufficient if the $(1,1)$ block of $W(X)$ is (positive definite) positive semidefinite.\n\begin{corollary}\label{cor:pr}\nConsider a minimal system ${\mathcal M} $ as in (\ref{statespace}). ${\mathcal M}$ is passive if and only if it is positive real\nand stable. It is strictly passive if and only if it is strictly positive real and asymptotically stable. In the latter case, $X_+ -X_- >0$ \cite{Wil71}.\n\end{corollary}\nNote that minimality is not necessary for passivity. For example, the system\n$\dot{x}=-x$, $y = u$ is both stable and passive but not minimal.
In this case, the KYP-LMI (\\ref{KYP-LMI}) is satisfied with any (scalar) $X>0$, the Hamiltonian may be defined as\n$\\mathcal H(x)=\\frac{X}{2} x(t)^2$, and the dissipation inequality evidently holds since for $t_1\\geq t_0$,\n{\\small\n\\vspace{-5mm}\\begin{align*}\n\\mathcal{H}(x(t_1))-\\mathcal{H}(x(t_0)) & = \\frac{X}{2} (x(t_0) e^{-(t_1-t_0)})^2 -\\frac{X}{2} (x(t_0))^2 \\\\\n = \\frac{X}{2} (x(t_0))^2& (e^{-2(t_1-t_0)}-1) \\leq 0 \\leq \\int_{t_0}^{t_1} y(t) \\, u(t) \\, dt\n\\end{align*} }\n\\subsection{Port-Hamiltonian systems}\\label{sec:PH}\n\\begin{definition}\\label{def:ph}\nA linear time-invariant \\emph{port-Hamiltonian (pH) system} is one for which the following realization is possible:\n\\begin{equation} \\label{pH}\n \\begin{array}{rcl} \\dot x & = & (J-R)Q x + (G-K) u,\\\\\ny&=& (G+K)^{\\mathsf{H}}Q x+Du,\n\\end{array}\n\\end{equation}\nwhere $Q=Q^{\\mathsf{H}} >0$, $J=-J^{\\mathsf{H}}$, and\n\\[\n\\left[ \\begin{array}{lc} R & K \\\\ K^{\\mathsf{H}} & \\mathsf{sym}(D) \\end{array} \\right] \\geq 0 \\quad \\mbox{with}\n\\quad \\mathsf{sym}(D)=\\frac12(D+D^{\\mathsf{H}})\n\\]\n\\end{definition}\nPort-Hamiltonian systems were introduced in \\cite{Sch04} as a tool for energy-based modeling. An energy storage function\n$\\mathcal{H}(x)=\\frac12x^{\\mathsf{H}}Qx$ plays a central role and under the conditions given, the dissipation inequality (\\ref{supply}) holds and so pH systems are always passive. 
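The dissipation inequality (\ref{supply}) can also be checked by direct simulation. A minimal sketch using forward-Euler time stepping on a randomly generated pH system (the system data and the discretization are assumptions for the demo; with $R>0$ and $D=I$ the supply retains a comfortable margin over the stored energy):

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 3, 2
# hypothetical pH system: x' = (J - R)Q x + G0 u,  y = G0^T Q x + D u
Js = rng.standard_normal((n, n)); J = Js - Js.T
Rs = rng.standard_normal((n, n)); R = Rs @ Rs.T + np.eye(n)
Qs = rng.standard_normal((n, n)); Q = Qs @ Qs.T + np.eye(n)
G0 = rng.standard_normal((n, m))
D = np.eye(m)

dt, steps = 1e-3, 5000
x = np.zeros(n)                     # x(0) = 0, so H(x(0)) = 0
supply = 0.0
for k in range(steps):
    u = np.array([np.sin(0.01 * k), np.cos(0.02 * k)])
    y = G0.T @ Q @ x + D @ u
    supply += (y @ u) * dt          # integral of Re(y^H u)
    x = x + dt * ((J - R) @ Q @ x + G0 @ u)
storage = 0.5 * x @ Q @ x           # H(x(T)) with H(x) = 0.5 x^H Q x
print(storage <= supply)
```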
Conversely, any passive system may be represented as a pH system (\ref{pH}), see e.g., \cite{BeaMX15_ppt}.\nWe briefly describe the construction of such a representation: Suppose the model ${\mathcal M}$ of \eqref{statespace} is minimal and passive and let $X=Q\in \XWpd$ be a solution of the KYP-LMI \eqref{KYP-LMI}.\nFor this $Q$, define $J:=\frac12 (AQ^{-1}- Q^{-1}A^{\mathsf{H}})$, $R:=-\frac 12 (AQ^{-1}+ Q^{-1}A^{\mathsf{H}})$,\n$K:= \frac12\left(Q^{-1}C^{\mathsf{H}}-B\right)$, and $G:= \frac12\left(Q^{-1}C^{\mathsf{H}}+B\right)$.\nDirect substitution shows that (\ref{statespace}) may be written in the form of (\ref{pH}), with $J=-J^{\mathsf{H}}$, and\n\[\n\left[ \begin{array}{lc} R & K \\ K^{\mathsf{H}} & \mathsf{sym}(D) \end{array} \right] =\n\frac12 \left[ \begin{array}{lc} Q^{-1} & 0 \\ 0 & I \end{array} \right] \ W(Q)\\n\left[ \begin{array}{lc} Q^{-1} & 0 \\ 0 & I \end{array} \right] \geq 0.\n\] \n\nAnother possible representation of a passive system as a standard pH system can be obtained by using a symmetric factorization of a solution $X$ of\n\eqref{KYP-LMI}: $X=T^{\mathsf{H}}T$ with $T\in \mathbb{C}^{n \times n}$ (e.g., the Hermitian square root of $X$ or the Cholesky factorization of $X$ are two possibilities).\nOne defines a state-space transformation, $x_T=Tx$, leading to an equivalent realization in $T$-coordinates:\n\[\n\{A_T,B_T,C_T,D_T\} := \{TAT^{-1}, TB, CT^{-1}, D \}.\n\]\nThe associated KYP-LMI \eqref{KYP-LMI} with respect to the new coordinate system can be written as\n\begin{eqnarray*}\nW_T(\hat{X}) &:=& \left[\n\begin{array}{cc}\n-\hat{X}\,A_T - A_T^{\mathsf{H}} \hat{X} & C_T^{\mathsf{H}} - \hat{X}\,B_T \\\nC_T- B_T^{\mathsf{H}} \hat{X} & D+D^{\mathsf{H}}\n\end{array}\n\right] \geq 0,\n\end{eqnarray*}\nbut since $X=T^{\mathsf{H}}T$ is a solution to the KYP-LMI \eqref{KYP-LMI} in the original state-space coordinates, we have $\hat{X}=I$ 
and\n\begin{eqnarray} \nonumber\nW_T(I)&=&\left[ \begin{array}{cc} T^{-\mathsf{H}} & 0\\ 0 & I_m\n\end{array}\n\right] W(X)\n\left[ \begin{array}{cc} T^{-1} & 0\\ 0 & I_m\n\end{array}\n\right] \geq 0. \label{PH}\n\end{eqnarray}\nWe can then use the Hermitian and skew-Hermitian parts of $A_T$ to obtain a pH representation in $T$-coordinates:\n$J_T:=\frac12 (A_T-A_T^{\mathsf{H}})$, $R_T:=-\frac 12 (A_T+ A_T^{\mathsf{H}})$,\n$K_T:= \frac12\left(C_T^{\mathsf{H}}-B_T\right)$, $G_T:= \frac12\left(C_T^{\mathsf{H}}+B_T\right)$, and $Q_T=I$, so that\n\begin{equation} \label{pHalt}\n \begin{array}{rcl} \dot{x}_T & = & (J_T-R_T)x_T + (G_T-K_T) u,\\\ny&=& (G_T+K_T)^{\mathsf{H}} x_T+Du,\n\end{array}\n\end{equation}\nis a valid pH representation in $T$ state-space coordinates.\n\n\nWe have briefly presented three closely related concepts for a minimal linear time-invariant system of the form \eqref{statespace}: positive realness, passivity, and pH structure. All three properties can be characterized algebraically via the solutions of linear matrix inequalities, invariant subspaces of special even pencils, or solutions of Riccati inequalities. However, there typically is a lot of freedom in the representation of such systems. This freedom, which results from particular choices of solutions to the KYP-LMI as well as subsequent state space transformations, may be used to make the representation more robust\nto perturbations.
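The first construction above is easy to verify numerically: given a passive model and a KYP solution $Q$, the coefficients $J$, $R$, $K$, $G$ must reproduce the original realization. A sketch, where the model and the solution $Q$ are generated by a hypothetical pH-type construction (an assumption for the demo):

```python
import numpy as np

H = lambda M: M.conj().T
rng = np.random.default_rng(5)
n, m = 3, 2
# hypothetical passive model for which X = Q is known to solve the KYP-LMI
Js = rng.standard_normal((n, n)); J0 = Js - Js.T
Rs = rng.standard_normal((n, n)); R0 = Rs @ Rs.T + np.eye(n)
Qs = rng.standard_normal((n, n)); Q = Qs @ Qs.T + np.eye(n)
G0 = rng.standard_normal((n, m))
A, B, C, D = (J0 - R0) @ Q, G0, G0.T @ Q, np.eye(m)

# the construction described in the text
Qi = np.linalg.inv(Q)
J = 0.5 * (A @ Qi - Qi @ H(A))
R = -0.5 * (A @ Qi + Qi @ H(A))
K = 0.5 * (Qi @ H(C) - B)
G = 0.5 * (Qi @ H(C) + B)

# the pH coefficients reproduce the original realization (direct substitution)
ok = (np.allclose((J - R) @ Q, A) and np.allclose(G - K, B)
      and np.allclose(H(G + K) @ Q, C) and np.allclose(J, -H(J)))
print(ok)
```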
In many ways the pH representation seems to be the most robust representation \cite{MehMS16,MehMS17} and it also has many other advantages: it encodes the geometric and algebraic properties directly in the properties of the coefficients \cite{SchJ14}; it allows easy ways for structure preserving model reduction \cite{GugPBS12,PolV10}; it easily extends to descriptor systems \cite{BeaMXZ17_ppt,Sch13}; and it greatly simplifies optimization methods for computing stability and passivity radii \cite{GilMS18,GilS16,GilS17,OveV05}.\n\n\nThe remainder of this paper will deal with the question of how to make use of this freedom in the state space transformation to determine a `good' or even `optimal' representation as a pH system. To do this we study in the next section the set of solutions of the KYP-LMI \eqref{KYP-LMI} and in particular its \emph{analytic center}.\n\n\section{The analytic center of the solution set $\mathbb{X}^>$} \label{sec:analytic}\nSolutions of linear matrix inequalities such as the KYP-LMI \eqref{KYP-LMI}\nare usually obtained via optimization algorithms, see {e.g.} \cite{BoyEFB94}. A common approach involves defining a \emph{barrier function} $b:\mathbb{C}^{n \times n} \to \mathbb R$ that is defined and finite throughout the interior of the constraint set, becoming infinite as the boundary is approached, and\nthen using this function in the optimization scheme. The minimizer of the barrier function\nis itself of independent interest and is called the \emph{analytic center} of the constraint set \cite{GenNV99}.\n\nWe have seen in Section~\ref{sec:DisSys} that for a system ${\mathcal M} $ that is minimal and strictly passive there exists a (possibly large) class of state space transformations that transform the system to pH form.
This class is characterized by the set $\XWpd$ of positive definite solutions of (the strict version of) the linear matrix inequality \eqref{KYP-LMI}.\nIf the set $\XWpd$ is non-empty and bounded, then the \emph{barrier function}\n\begin{equation*}\nb(X) := - \log \det W(X),\n\end{equation*}\nis bounded from below, but becomes infinitely large when $W(X)$ becomes singular.\nThe analytic center of the domain $\XWpd$ is\nthe minimizer of this barrier function. To characterize the analytic center, we analyze the \emph{interior} of the set $\XWpd$\ngiven by\n\begin{eqnarray*}\n\mathrm{Int} \, \XWpd &:=& \left\{ X\in \XWpd \; |\n\mbox{ there exists } \delta>0 \mbox{ such that }\right .\\ && \left .X+\Delta_X \in \XWpd\n \mbox{ for all }\; \Delta_X \in \Hn \mbox{ with } \|\Delta_X\|\le \delta \right\}.\n\end{eqnarray*}\nHere $\|\Delta_X\|$ is the spectral norm of $\Delta_X$ given by the maximal singular value of $\Delta_X$.\nWe compare $\mathrm{Int} \, \XWpd$ with the open set\n\[\n\XWpdpd = \left\{ X\in \XWpd \; | \; W(X)> 0 \right\}.\n\]\nSince $b(X)$ is finite for all points in $\XWpdpd$, there is an open neighborhood where it stays bounded, and this implies that $\XWpdpd \subseteq \mathrm{Int} \, \XWpd$.\nThe converse inclusion is not necessarily true. For example, consider a $2\times 2$ transfer function having the form $\mathcal T(s)=\mathrm{diag} (t(s), 0 )$, where $t(s)$ is a scalar-valued, strictly passive transfer function. The LMI is rank deficient for all $X\in \Hn$ (hence $\XWpdpd=\emptyset$) but there is a relative interior, since $t(s)$ is strictly passive. The characterization of when both sets are equal is given by the following theorem.\n\begin{theorem}\label{thm:interior}\nSuppose the system ${\mathcal M}$ of (\ref{statespace}) is passive and $\rank(B)=m$. 
Then $\XWpdpd \equiv \mathrm{Int}\,\XWpd$.\n\end{theorem}\n{\bf Proof:} If $\XWpd=\emptyset$ then $\XWpdpd=\emptyset$ as well.\nOtherwise, pick an $X\in \mathrm{Int}\,\XWpd$ and suppose\nthat $W(X)$ is positive semidefinite and singular. Then there exists a nontrivial $0\neq [z_1^\mathsf{T},z_2^\mathsf{T}]^\mathsf{T}\in\mathsf{Ker}\, W(X)$ and\nan $\varepsilon>0$ sufficiently small so that\nif $\Delta X \in \Hn$ with $\|\Delta X\|_F\leq \varepsilon$ then $X+\Delta X\in \XWpd$.\nObserve that for all such $\Delta X $, we have $W(X+\Delta X)=W(X)+\Gamma(\Delta X)\geq 0$, where\n $ \Gamma(\Delta X)=-\left[\begin{array}{cc} \Delta X A+ A^{\mathsf{H}} \Delta X & \Delta X B \\\n B^{\mathsf{H}}\Delta X & 0 \end{array} \right] $,\n and so\n %\n\begin{equation}\n0\leq \left[\begin{array}{c} z_1\\ z_2 \end{array} \right]^{\mathsf{H}}\nW(X+\Delta X)\ \left[\begin{array}{c} z_1\\ z_2 \end{array} \right]\n =\n\left[\begin{array}{c} z_1\\ z_2 \end{array} \right]^{\mathsf{H}}\n\Gamma(\Delta X)\ \left[\begin{array}{c} z_1\\z_2 \end{array} \right]. \label{nonstrict}\n \end{equation}\n %\nIf there were a choice for $\Delta X \in \Hn$ with $\|\Delta X\|_F\leq \varepsilon$ producing strict inequality in \eqref{nonstrict},\nthen we would arrive at a contradiction, since the choice $-\Delta X$ satisfies the same requirements yet violates the inequality. 
Thus, we must have equality in \eqref{nonstrict}\nfor all $\Delta X \in \Hn$,\nwhich in turn implies\n\[\nW(X+\Delta X)\left[\begin{array}{c} z_1\\ z_2 \end{array} \right]= \Gamma(\Delta X)\left[\begin{array}{c} z_1\\ z_2 \end{array} \right]=0.\n\]\nThis\nmeans that\n $B^{\mathsf{H}}\Delta X\, z_1=0$ for all $\Delta X \in \Hn$.\n If $z_1=0$, then we find that $\Delta X B\, z_2=0$ for all $\Delta X \in \Hn$, which in turn means $B z_2=0$; in light of the initial hypothesis on $B$, this gives $z_2=0$, which is a contradiction, and thus\nwe must conclude that $W(X)$ is nonsingular after all, hence positive definite.\n To eliminate the last remaining case, suppose that $z_1\neq 0$. Choosing first $\Delta X=I$, we find that $z_1\perp \mathsf{Ran}(B)$.\n Pick $0\neq b\in \mathsf{Ran}(B)$ and define $\Delta X =I-2ww^{\mathsf{H}}$ with\n $w=\frac{1}{\sqrt{2}}(\frac{z_1}{\|z_1\|}-\frac{b}{\|b\|})$. Then\n $B^{\mathsf{H}}\Delta X\, z_1 = \frac{\|z_1\|}{\|b\|}B^{\mathsf{H}}b =0$,\n which implies that $z_1=0$, and so $z=0$,\n $W(X)>0$, and again the assertion holds.\n\hfill $\Box$\n\nCharacterizing when $\XWpdpd \equiv \mathrm{Int}\,\XWpd\neq \emptyset$ holds is complicated, and there is some confusion in the literature, because several factors may influence the solvability of the KYP-LMI.\nIt is clear that $S=D+D^{\mathsf{H}}$ must be positive definite for a solution to be in $\XWpdpd$, but clearly this is not sufficient, as is demonstrated by the simple system $\dot x=u$, $y=x+du$ (with $d>0$), which is minimal and has $X=1\in \XWpd$ as the only solution of the KYP-LMI, so $\mathrm{Int} \, \XWpd=\emptyset$.\nIn this case the associated pencil $\mathcal L_0$ has one eigenvalue $\infty$ and the purely imaginary eigenvalues $\pm i d$.
It is passive, but not strictly passive, and stable (but not asymptotically stable), which is in contradiction to many statements in the literature, see e.~g.~\cite{Gri04}, where unfortunately no distinction between passivity and strict passivity is made. The system is, furthermore, port-Hamiltonian with $J=0$, $R=0$, $B=C=1$, $Q=1$, $P=0$ and $D=1$, and satisfies the dissipation inequality. An analogous example is obtained with $A= \left [ \begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array} \right ]$, $B^{\mathsf{H}}=C=\left [ \begin{array}{cc} 1 & 0 \end{array} \right ]$, and $D=1$.\nThen, $X=I_2$ is the unique positive definite solution, $\mathrm{Int} \, \XWpd=\emptyset$, and there are double eigenvalues of $\mathcal L_0$ at $\pm i$. The system is not asymptotically stable and not strictly passive, but stable, passive and pH. If in this example we choose $D=0$, then still $X=I$ is the unique positive definite solution of the KYP-LMI, but now $\mathcal L_0$ has only the purely imaginary eigenvalue $0$ and two Kronecker blocks of size $2$ for the eigenvalue $\infty$. In this case the Riccati equation \eqref{riccati} cannot be formed and there does not exist a two-dimensional extended Lagrangian invariant subspace associated with the stable eigenvalues.\n\n\begin{remark}{\rm\nNote that the solutions $X_+$ and $X_-$ of the Riccati equation \eqref{riccati} yield singular $W(X_+)$ and $W(X_-)$, and are thus on the boundary of $\XWpd$,\neven though they are positive definite, as was pointed out in Theorem \ref{LMIWil}.\n}\end{remark}\n\n\nIn the sequel, we assume that $\XWpdpd\neq \emptyset$, so that the analytic center of $\XWpd$ is well-defined, see also \cite{NesN94}, and we can compute it as a candidate for a `good' solution to the LMI (\ref{KYP-LMI}) yielding a robust representation.
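Evaluating the barrier function $b(X)=-\log\det W(X)$ at a candidate $X$ is a cheap way to test membership in the strict interior. A minimal sketch, again on a hypothetical strictly passive system built in pH form so that an interior point $X=Q$ is known (an assumption for the demo):

```python
import numpy as np

H = lambda M: M.conj().T
rng = np.random.default_rng(6)
n, m = 3, 2
# hypothetical strictly passive model with W(Q) > 0 by construction
Js = rng.standard_normal((n, n)); J = Js - Js.T
Rs = rng.standard_normal((n, n)); R = Rs @ Rs.T + np.eye(n)
Qs = rng.standard_normal((n, n)); Q = Qs @ Qs.T + np.eye(n)
G0 = rng.standard_normal((n, m))
A, B, C, D = (J - R) @ Q, G0, G0.T @ Q, np.eye(m)

def W(X):
    return np.block([[-X @ A - H(A) @ X, H(C) - X @ B],
                     [C - H(B) @ X, D + H(D)]])

def barrier(X):
    # slogdet avoids overflow in det; non-positive-definite W(X) is
    # treated as +infinity (outside the strict interior)
    sign, logdet = np.linalg.slogdet(W(X))
    return -logdet if sign > 0 else np.inf

print(barrier(Q))   # finite, since Q lies in the strict interior
```

Minimizing `barrier` over Hermitian `X` (e.g., with a Newton-type scheme) would approximate the analytic center; the sketch only evaluates it.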
This requires the solution of an optimization problem.\nKeep in mind that we have the following set inclusions\n\\[\n\\mathbb{X}_{W} \\subset \\mathrm{Int}\\XWpd=\\XWpdpd \\subset \\XWpd \\subset \\Hnpd \\subset \\Hn.\n\\]\nFor $X,Y \\in \\Hn $ we define the \\emph{Frobenius inner product}\n\\[\n \\langle X,Y\\rangle := \\mathsf{trace}\\!\\left({\\mathfrak{Re}}(Y)^T{\\mathfrak{Re}}(X)+{\\mathfrak{Im}}(Y)^T{\\mathfrak{Im}}(X)\\right),\n\\]\nwhich has the properties $\\langle X,Y \\rangle= \\langle Y,X \\rangle$, $\\|X\\|_F= \\langle X,X \\rangle^{\\frac{1}{2}}$, and $\\langle X,YZ \\rangle= \\langle Y^{\\mathsf{H}}X,Z \\rangle = \\langle XZ^{\\mathsf{H}},Y \\rangle$.\n\nThe \\emph{gradient} of the barrier function $b(X)$ with respect to $W$ is given by\n\\[\n\\partial b(X) \/ \\partial W = -W(X)^{-1}.\n\\]\nUsing the chain rule and the Frobenius inner product, it follows from \\cite{NesN94}\nthat $X\\in \\Hn$ is an extremal point of $b(X)$ if and only if\n\\[\n\\langle \\partial b(X) \/ \\partial W, \\Delta W(X)[\\Delta_X] \\rangle = 0 \\quad \\mbox{for all} \\; \\Delta_X \\in \\Hn,\n\\]\nwhere $\\Delta W(X)[\\Delta_X]$ is the incremental step in the direction $\\Delta_X$ given by\n\\[\n\\Delta W(X)[\\Delta_X] = -\\left[ \\begin{array}{cc}\nA^{\\mathsf{H}}\\Delta_X+\\Delta_X A & \\Delta_X B \\\\ B^{\\mathsf{H}}\\Delta_X & 0 \\end{array} \\right].\n\\]\nFor an extremal point it is then necessary that\n\\begin{equation}\\label{orth}\n\\langle W(X)^{-1} , \\left[ \\begin{array}{cc}\nA^{\\mathsf{H}}\\Delta_X + \\Delta_X A & \\Delta_X B \\\\ B^{\\mathsf{H}}\\Delta_X & 0 \\end{array} \\right] \\rangle \\ =0\n\\end{equation}\nfor all $\\Delta_X\\in \\Hn$.\nDefining $F := S^{-1}(C-B^{\\mathsf{H}}X)$,\n$P :=-A^{\\mathsf{H}}X-XA-F^{\\mathsf{H}}SF$, and $A_F := A-BF$, it has been shown in \\cite{GenNV99} that \\eqref{orth} holds if and only if $P$ is invertible and\n\\begin{equation} \\label{skew}\nA_F^{\\mathsf{H}}P +PA_F =0.\n\\end{equation}\nNote that $P$ is nothing but the evaluation of the Riccati operator
\\eqref{riccati} at $X$, and that $A_F$ is the corresponding closed loop matrix. For the solutions of the Riccati equation we have\n$P=\\mathsf{Ricc}(X)=0$ (so $P$ is not invertible) and the corresponding closed loop matrix\nhas all its eigenvalues equal to an adequate subset of the eigenvalues of the Hamiltonian matrix $H$ in (\\ref{HamMatrix}). For an interior point of $\\XWpd$\nwe have $P=\\mathsf{Ricc}(X)> 0$, and hence $P$ has an invertible\nsquare root $P^{\\frac{1}{2}}\\in \\Hnpd$. Multiplying\n(\\ref{skew}) on both sides with $P^{-\\frac{1}{2}}$\nwe obtain that\n\\[\nP^{-\\frac{1}{2}}A_F^{\\mathsf{H}}P^{\\frac{1}{2}} + P^{\\frac{1}{2}}A_FP^{-\\frac{1}{2}}=0.\n\\]\nThus, $\\hat A_F:=P^{\\frac{1}{2}}A_F P^{-\\frac{1}{2}}$ is skew-Hermitian, and therefore $\\hat A_F$ as well as $A_F$ have all their eigenvalues on the imaginary axis. Hence the closed loop matrix $A_F$ of the analytic center has a spectrum that is also `central' in a certain sense.\n\nIt is important to note that\n\\[\n \\det (W(X)) = \\det (\\mathsf{Ricc}(X)) \\det S,\n\\]\nwhich implies that we are also finding a stationary point of\n$\\det(\\mathsf{Ricc}(X))$, since $S$\nis constant and invertible.
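The determinant identity above is the Schur complement relation of $S$ in $W(X)$, and it is easy to confirm numerically. A random-data sanity check (all matrices below are hypothetical illustrations, not from the text):

```python
import numpy as np

# Random-data check (hypothetical matrices) of the Schur complement identity
#   det W(X) = det(Ricc(X)) * det(S),
# with F = S^{-1}(C - B^H X) and Ricc(X) = -A^H X - X A - F^H S F.
rng = np.random.default_rng(0)
n, m = 4, 2

A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
C = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
S0 = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
S = S0 @ S0.conj().T + m * np.eye(m)      # S = D + D^H, taken positive definite
X0 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
X = X0 + X0.conj().T                      # an arbitrary Hermitian X

W = np.block([[-A.conj().T @ X - X @ A, C.conj().T - X @ B],
              [C - B.conj().T @ X, S]])
F = np.linalg.solve(S, C - B.conj().T @ X)
P = -A.conj().T @ X - X @ A - F.conj().T @ S @ F   # P = Ricc(X)

assert np.isclose(np.linalg.det(W), np.linalg.det(P) * np.linalg.det(S))
```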
Since $P\\in \\Hnpd$, we can rewrite the equations defining the analytic center of\n$\\XWpd$ as the solutions $X\\in \\Hn$, $P\\in \\Hnpd$, $F\\in \\mathbb C^{m,n}$ of the system of matrix equations\n\\begin{eqnarray} \\nonumber\nS F &=& C-B^{\\mathsf{H}}X,\\\\\n P &=& -A^{\\mathsf{H}}X-XA-F^{\\mathsf{H}}SF, \\label{FXP}\\\\ \\nonumber\n 0&=& P(A-BF)+(A^{\\mathsf{H}}-F^{\\mathsf{H}}B^{\\mathsf{H}})P\\\\\n &=&PA_F+A_F^{\\mathsf{H}}P.\\nonumber\n\\end{eqnarray}\nSystem (\\ref{FXP}) can be used to determine a solution via an iterative method that uses a starting value $X_0$ to compute $P_0,F_0$ and then consecutively solutions $X_i$, $i=1,2,\\ldots$,\nfollowed by computing a new $P_i$ and $F_i$.\n\n\\begin{remark}\\label{rem:evp}{\\rm\nFor given matrices $P,F$ the solution $X$ of \\eqref{riccati} can be obtained via\nthe invariant subspace relation\n\\[\n\\left[ \\begin{array}{ccc} 0 & I_n & 0\\\\\n\t-I_n & 0 & 0\\\\ 0 & 0 & 0 \\end{array} \\right]\\mat{c} -X \\\\ \\phantom{-}I_n \\\\ -F \\rix Z=\n\\left[ \\begin{array}{ccc} 0 & A & B \\\\\n\tA^{\\mathsf{H}} & -P & C^{\\mathsf{H}} \\\\ B^{\\mathsf{H}} & C & S \\end{array} \\right]\\mat{c} -X \\\\ \\phantom{-}I_n \\\\ -F \\rix.\n\\]\nComputing this subspace for $P=\\mathsf{Ricc}(X)=0$ allows one to compute the extremal solutions $X_+$ and $X_-$ of \\eqref{riccati}, which then can be used to compute a starting point for an optimization scheme, see \\cite{BanMVN17_ppt} for details.\n}\n\\end{remark}\n\n\n\n\\section{The passivity radius}\\label{sec:passrad}\nOur goal of achieving `good' or even `optimal' pH representations of a passive system can be realized in different ways. A natural measure for optimality is a large \\emph{passivity radius}\n$\\rho_{{\\mathcal M}}$, which is the smallest perturbation (in an appropriate norm) to the coefficients of a model ${\\mathcal M}$ that makes the system non-passive.
Computational methods to determine $\\rho_{{\\mathcal M}}$ were introduced in \\cite{OveV05}, while the converse question of finding the nearest passive system to a given non-passive system has recently been discussed in \\cite{GilMS18,GilS17}.\n\nOnce we have determined a solution $X\\in \\XWpd$ to the LMI~(\\ref{KYP-LMI}), we can determine the representation \\eqref{pH}\nas in Section~\\ref{sec:PH} and the system is automatically passive (but not necessarily strictly passive). For each such representation we can determine the passivity radius and then choose the most robust solution $X\\in \\XWpd$ under perturbations by maximizing the passivity radius or by minimizing the condition number of $X^{\\frac 12}$, which is the transformation matrix to pH form, see Section~\\ref{sec:cond}.\n\n\\subsection{The $X$-passivity radius}\n\nAlternatively, for $X\\in \\mathrm{Int}\\XWpd$ we can determine the smallest perturbation ${\\Delta_{\\mathcal M}}$ of the system matrices $A,B,C,D$ of the model ${\\mathcal M}$ that leads to a loss of positive definiteness of $W(X)$, because then we are on the boundary of the set of passive systems.
This is a very suitable approach for perturbation analysis, since for fixed $X$ the matrix\n\\begin{equation*} \\label{hx}\nW(X) = \\left[\n\\begin{array}{cc}\n0 & C^{\\mathsf{H}} \\\\\nC & D+D^{\\mathsf{H}}\n\\end{array}\n\\right]\n- \\left[\\begin{array}{cc}\nA^{\\mathsf{H}}X + X\\,A & X\\,B \\\\\nB^{\\mathsf{H}}X & 0\n\\end{array}\n\\right]\n\\end{equation*}\nis linear in the matrices $A,B,C,D$. When we perturb the coefficients, we preserve strict passivity as long as\n\\begin{eqnarray*}\n&& W_{\\Delta_{\\mathcal M}} (X) := \\left[\n\\begin{array}{cc}\n 0 & (C+\\Delta_C)^{\\mathsf{H}} \\\\\n(C+\\Delta_C) & (D+\\Delta_D)+(D+\\Delta_D)^{\\mathsf{H}}\n\\end{array}\n\\right]\\\\\n&& \\qquad - \\left[\\begin{array}{cc}\n(A+\\Delta_A)^{\\mathsf{H}}X + X\\,(A+\\Delta_A) & X\\,(B+\\Delta_B) \\\\\n(B+\\Delta_B)^{\\mathsf{H}}X & 0\n\\end{array}\n\\right]>0.\n\\end{eqnarray*}\nHence, given $X\\in \\mathrm{Int}\\XWpd$, we can look for the smallest perturbation $\\Delta_{\\mathcal M}$ to the model ${\\mathcal M}$ that makes $\\det (W_{\\Delta_{\\mathcal M}}(X))=0$.
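The affine dependence of $W_{\Delta_{\mathcal M}}(X)$ on the coefficients means that deviations caused by independent perturbations simply add up. A small random-data sketch (hypothetical matrices, not from the text) illustrating this:

```python
import numpy as np

# W(X) is affine in (A, B, C, D): the deviation W_Delta(X) - W(X) caused by
# two perturbations applied together is the sum of the individual deviations.
rng = np.random.default_rng(1)
n, m = 3, 2

def W_of(A, B, C, D, X):
    return np.block([[-A.conj().T @ X - X @ A, C.conj().T - X @ B],
                     [C - B.conj().T @ X, D + D.conj().T]])

A, B = rng.standard_normal((n, n)), rng.standard_normal((n, m))
C, D = rng.standard_normal((m, n)), rng.standard_normal((m, m))
X = np.eye(n)  # any Hermitian X would do

def rand_delta():
    return (rng.standard_normal((n, n)), rng.standard_normal((n, m)),
            rng.standard_normal((m, n)), rng.standard_normal((m, m)))

d1, d2 = rand_delta(), rand_delta()
W0 = W_of(A, B, C, D, X)
W1 = W_of(A + d1[0], B + d1[1], C + d1[2], D + d1[3], X)
W2 = W_of(A + d2[0], B + d2[1], C + d2[2], D + d2[3], X)
W12 = W_of(A + d1[0] + d2[0], B + d1[1] + d2[1],
           C + d1[2] + d2[2], D + d1[3] + d2[3], X)

assert np.allclose(W12 - W0, (W1 - W0) + (W2 - W0))
```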
To measure the size of the perturbation $\\Delta_{\\mathcal M}$ of a state space model ${\\mathcal M}$, we use the Frobenius norm\n\\[\n \\|\\Delta_{\\mathcal M} \\| := \\left \\|\\left[\\begin{array}{cc}\n\\Delta_A & \\Delta_B \\\\\n\\Delta_C & \\Delta_D\n\\end{array}\\right] \\right \\|_F.\n\\]\nFor $X\\in \\mathrm{Int}\\XWpd$ we define the \\emph{$X$-passivity radius} as\n\\[\n\t\\rho_{\\mathcal M}(X):= \\inf_{\\Delta_{\\mathcal M}\\in \\mathbb C^{n+m,n+m}}\\left\\{ \\| \\Delta_{\\mathcal M} \\| \\; | \\; \\det W_{\\Delta_{\\mathcal M}}(X) = 0\\right\\}.\n\\]\nNote that in order to compute $\\rho_{\\mathcal M}(X)$ for the model ${\\mathcal M}$, we must first find a point $X\\in \\mathrm{Int}\\XWpd$, since $W(X)$ must be positive definite to start with, and also $X$ should be positive definite to obtain a state-space transformation to pH form.\n\nWe have the following relation between the $X$-passivity radius and the usual passivity radius.\n\\begin{lemma}\\label{passbound}\nConsider a given model ${\\mathcal M}$. Then the passivity radius is given by\n\\begin{eqnarray*}\n\\nonumber\n\t\\rho_{{\\mathcal M}}&=& \\sup_{X\\in \\mathrm{Int}\\XWpd}\\inf_{\\Delta_{\\mathcal M}\\in \\mathbb C^{n+m,n+m}}\\{\\| \\Delta_{\\mathcal M} \\| | \\det W_{\\Delta_{\\mathcal M}}(X)=0\\}\\\\ &=& \\sup_{X\\in \\mathrm{Int}\\XWpd} \\rho_{{\\mathcal M}}(X).\\label{passive}\n\t\\end{eqnarray*}\n\\end{lemma}\n{\\bf Proof:}\n\tFor any given $X \\in \\mathrm{Int} \\XWpd$, all systems ${\\mathcal M} +\\Delta_{\\mathcal M}$ with $\\| \\Delta_{\\mathcal M} \\| < \\rho_{\\mathcal M}(X)$ are strictly passive.\n\tTherefore $\\rho_{\\mathcal M} \\ge \\sup_{X\\in \\mathrm{Int}\\XWpd} \\rho_{\\mathcal M}(X)$.
Equality follows, since there exists a perturbation $\\Delta_{\\mathcal M}$ of norm $\\rho_{\\mathcal M}$ for which no point $X\\in \\mathrm{Int}\\XWpd$ with $W_{\\Delta_{\\mathcal M}}(X)> 0$ exists; hence this perturbed system is no longer strictly passive.\n\\hfill $\\Box$\n\nWe derive an explicit formula for the $X$-passivity radius based on a one-parameter optimization problem. For this, we rewrite the condition $W_{\\Delta_{\\mathcal M}} (X)>0$ as\n\\begin{eqnarray} \\nonumber\n&& \\left[\\begin{array}{cc} -X & 0 \\\\ 0 & I_m \\end{array}\\right]\n\\left[\\begin{array}{cc} A + \\Delta_A & B+\\Delta_B \\\\ C+\\Delta_C & D+\\Delta_D \\end{array}\\right] \\\\\n&& \\label{wdelta} \\qquad +\n\\left[\\begin{array}{cc} A^{\\mathsf{H}}+\\Delta_A^{\\mathsf{H}} & C^{\\mathsf{H}}+\\Delta_C^{\\mathsf{H}} \\\\ B^{\\mathsf{H}}+\\Delta_B^{\\mathsf{H}} & D^{\\mathsf{H}}+\\Delta_D^{\\mathsf{H}} \\end{array}\\right]\n\\left[\\begin{array}{cc} -X & 0 \\\\ 0 & I_m \\end{array}\\right] >0.\n\\end{eqnarray}\nSetting\n\\begin{equation} \\label{defWhatX}\n\\hat W:=W(X), \\; \\hat X := \\left[\\begin{array}{cc} X & 0 \\\\ 0 & I_m \\end{array}\\right], \\; \\Delta_T := \\left[\\begin{array}{cc} -\\Delta_A & -\\Delta_B \\\\ \\Delta_C & \\Delta_D \\end{array}\\right],\n\\end{equation}\ninequality (\\ref{wdelta}) can be written as the LMI\n\\begin{equation} \\label{WDelta}\nW_{\\Delta_{\\mathcal M}} = \\hat W+\\hat X \\Delta_T + \\Delta_T^{\\mathsf{H}} \\hat X >0,\n\\end{equation}\nwhich holds as long as the perturbed system is still strictly passive. In order to violate this condition, we need to find the smallest $\\Delta_T$ such that $\\det W_{\\Delta_{\\mathcal M}} =0$.
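The block rewriting above can be confirmed numerically; the following sketch (random hypothetical matrices) checks that the perturbed matrix equals the sum of $\hat W$, $\hat X \Delta_T$, and $\Delta_T^{\mathsf{H}} \hat X$:

```python
import numpy as np

# Random-data check (hypothetical matrices) of the rewriting
#   W_Delta(X) = What + Xhat @ Delta_T + Delta_T^H @ Xhat,
# with Xhat = diag(X, I_m) and Delta_T = [[-dA, -dB], [dC, dD]].
rng = np.random.default_rng(2)
n, m = 3, 2

def W_of(A, B, C, D, X):
    return np.block([[-A.conj().T @ X - X @ A, C.conj().T - X @ B],
                     [C - B.conj().T @ X, D + D.conj().T]])

A, B = rng.standard_normal((n, n)), rng.standard_normal((n, m))
C, D = rng.standard_normal((m, n)), rng.standard_normal((m, m))
X0 = rng.standard_normal((n, n))
X = X0 + X0.T  # Hermitian (real symmetric here)

dA, dB = rng.standard_normal((n, n)), rng.standard_normal((n, m))
dC, dD = rng.standard_normal((m, n)), rng.standard_normal((m, m))

Xhat = np.block([[X, np.zeros((n, m))], [np.zeros((m, n)), np.eye(m)]])
DT = np.block([[-dA, -dB], [dC, dD]])

W_pert = W_of(A + dA, B + dB, C + dC, D + dD, X)
W_hat = W_of(A, B, C, D, X)
assert np.allclose(W_pert, W_hat + Xhat @ DT + DT.conj().T @ Xhat)
```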
Factoring out $\\hat W^{\\frac{1}{2}}$ on both sides of (\\ref{WDelta}) yields the characterization\n\\begin{eqnarray}\n&&\\det\\left(I_{n+m} +\n\\hat W^{-\\frac{1}{2}}\\hat X \\Delta_T \\hat W^{-\\frac{1}{2}}+\n\\hat W^{-\\frac{1}{2}}\\Delta_T^{\\mathsf{H}} \\hat X \\hat W^{-\\frac{1}{2}}\\right) \\nonumber \\\\\n&&\\quad =\\det\\left(I_{n+m} +\n\\left[ \\begin{array}{cc} \\hat W^{-\\frac{1}{2}}\\hat X & \\hat W^{-\\frac{1}{2}}\\end{array}\\right]\n\\left[ \\begin{array}{cc} 0 & \\Delta_T \\\\ \\Delta_T^{\\mathsf{H}} & 0 \\end{array}\\right]\n\\left[ \\begin{array}{c} \\hat X\\hat W^{-\\frac{1}{2}} \\\\ \\hat W^{-\\frac{1}{2}}\\end{array}\\right] \\right)\n\\nonumber \\\\ \\nonumber\n&&\\quad =\\det\\left(I_{2(n+m)} +\n\\left[ \\begin{array}{cc} 0 & \\Delta_T \\\\ \\Delta_T^{\\mathsf{H}} & 0 \\end{array}\\right]\n\\left[ \\begin{array}{c} \\hat X \\hat W^{-\\frac{1}{2}} \\\\ \\hat W^{-\\frac{1}{2}}\\end{array}\\right]\n\\left[ \\begin{array}{cc} \\hat W^{-\\frac{1}{2}}\\hat X & \\hat W^{-\\frac{1}{2}}\\end{array}\\right] \\right)\\\\\n&& \\quad =0.\\label{rhoX}\n\\end{eqnarray}\nThe minimal perturbation $\\Delta_T$ for which this is the case was described in \\cite{OveV05}\nusing the following theorem, which we have slightly modified in order to take into account the positive semi-definiteness of the considered matrix.\n\\begin{theorem} \\label{thm:OveV} Consider the matrices $\\hat X, \\hat W$ in (\\ref{defWhatX}) and the pointwise positive semidefinite matrix function\n\\begin{equation}\\label{defmgamma}\nM(\\gamma):= \\left[ \\begin{array}{c} \\gamma \\hat X \\hat W^{-\\frac{1}{2}} \\\\ \\hat W^{-\\frac{1}{2}} \/ \\gamma\\end{array}\\right]\n\\left[ \\begin{array}{cc} \\gamma \\hat W^{-\\frac{1}{2}} \\hat X & \\hat W^{-\\frac{1}{2}} \/ \\gamma \\end{array}\\right]\n\\end{equation}\nin the real parameter $\\gamma$.
Then the largest eigenvalue $\\lambda_{\\max}(M(\\gamma))$ is a \\emph{unimodal function} of $\\gamma$ ({i.e.} it is first monotonically decreasing and then monotonically increasing with growing $\\gamma$). At the minimizing value $\\underline \\gamma$, the matrix $M(\\underline{\\gamma})$ has an eigenvector $z$ associated with its largest eigenvalue $\\underline\\lambda_{\\max}:=\\lambda_{\\max}(M(\\underline\\gamma))$, {i.e.}\n\\[\n M(\\underline{\\gamma}) z = \\underline\\lambda_{\\max} z, \\quad z:=\\left[ \\begin{array}{c} u \\\\ v \\end{array}\\right],\n \\]\nwhere\n$ \\|u\\|_2^2=\\|v\\|_2^2=\\frac{1}{2}$.\nThe minimum norm perturbation $\\Delta_T$ is of rank $1$ and is given by $\\Delta_T=2uv^{\\mathsf{H}}\/\\underline{\\lambda}_{\\max}$. It has norm $1\/\\underline{\\lambda}_{\\max}$\nboth in the spectral norm and in the Frobenius norm.\n\\end{theorem}\n{\\bf Proof:}\nThe proof for a slightly different formulation was presented in \\cite{OveV05}. Here we therefore just present the basic idea in our formulation. Let $\\Gamma:=\\mathrm{diag}(\\gamma I_{m+n}, \\frac{1}{\\gamma} I_{m+n})$; then $M(\\gamma)=\\Gamma M(1) \\Gamma$, while\n\\[\n\\Gamma^{-1}\\left[ \\begin{array}{cc} 0 & \\Delta_T \\\\ \\Delta_T^{\\mathsf{H}} & 0 \\end{array}\\right]\\Gamma^{-1}=\\left[ \\begin{array}{cc} 0 & \\Delta_T \\\\ \\Delta_T^{\\mathsf{H}} & 0 \\end{array}\\right].\n\\]\nSetting\n\\[\nK(\\gamma):=\\left( I_{2(n+m)} + \\left[ \\begin{array}{cc} 0 & \\Delta_T \\\\ \\Delta_T^{\\mathsf{H}} & 0 \\end{array}\\right] M(\\gamma) \\right),\n\\]\nwe see that $\\det(K(\\gamma))$ is independent of $\\gamma$. But the vector $z$ is in the kernel of $K(\\underline\\gamma)$, which implies that also $K(1)$ is singular.
The value of the norm of the constructed\n$\\Delta_T$ follows from the fact that the subvectors $u$ and $v$ must have equal norm at the minimum.\n\\hfill $\\Box$\n\nA bound for $\\underline{\\lambda}_{\\max}$ in Theorem~\\ref{thm:OveV} is obtained by the following result.\n\\begin{corollary}\\label{cor:lev} Consider the matrices $\\hat X, \\hat W$ in (\\ref{defWhatX}) and the pointwise matrix function $M(\\gamma)$ as in (\\ref{defmgamma}). The largest eigenvalue of $M(\\gamma)$ is also the largest eigenvalue of\n\\[\n\\gamma^2 \\hat W^{-\\frac{1}{2}} \\hat X^2 \\hat W^{-\\frac{1}{2}} + \\hat W^{-1}\/\\gamma^2.\n\\]\nA simple upper bound for $\\underline{\\lambda}_{\\max}$ is given by $\\underline{\\lambda}_{\\max}\\le \\frac{2}{\\alpha\\beta}$, where $\\alpha^2:=\\lambda_{\\min}(\\hat W)$ and $\\beta^2:=\\lambda_{\\min}(\\hat X^{-1}\\hat W\\hat X^{-1})$. The corresponding lower bound for $\\| \\Delta_T \\|_F$ then becomes\n\\[\n \\rho_{\\mathcal M}(X) = \\min_{\\gamma} \\| \\Delta_T \\|_F \\ge \\alpha\\beta\/2.\n\\]\n\\end{corollary}\n{\\bf Proof:} Clearly $\\|\\hat W^{-1}\\|_2\\le \\frac{1}{\\alpha^2}$ and $\\|\\hat W^{-\\frac{1}{2}} \\hat X^2 \\hat W^{-\\frac{1}{2}} \\|_2\\le \\frac{1}{\\beta^2}$. So if we choose $\\gamma^2=\\frac{\\beta}{\\alpha}$,\nthen\n\\begin{eqnarray*}\n&& \\min_{\\gamma} \\|\\gamma^2 \\hat W^{-\\frac{1}{2}} \\hat X^2 \\hat W^{-\\frac{1}{2}} + \\hat W^{-1}\/\\gamma^2\\|\\\\\n && \\qquad\\le \\| (\\beta\/\\alpha)\\hat W^{-\\frac{1}{2}} \\hat X^2 \\hat W^{-\\frac{1}{2}} + (\\alpha\/\\beta)\\hat W^{-1}\\|\\\\\t &&\\qquad \\le \\frac{2}{\\alpha\\beta} . \\qquad \\Box\n\\end{eqnarray*}\n\nWe can construct a perturbation $\\Delta_T=\\epsilon (\\alpha\\beta)vu^{\\mathsf{H}}$ of norm $|\\epsilon|(\\alpha\\beta)$ which makes the matrix $W_{\\Delta_{\\mathcal M}}$ singular and therefore gives an upper bound for $\\rho_{\\mathcal M}(X)$.
To compute this perturbation, let $u$, $v$ and $w$ be vectors of norm $1$, satisfying\n$\\hat W^{-\\frac{1}{2}}u=u\/\\alpha$, $\\hat W^{-\\frac{1}{2}} \\hat X v=w\/\\beta$, $\\Delta_T=\\epsilon(\\alpha\\beta)vu^{\\mathsf{H}}$, and $\\epsilon u^{\\mathsf{H}}w=-|\\epsilon u^{\\mathsf{H}}w|$,\n{i.e.}, $u$, $v$ and $w$ are the singular vectors associated with the largest singular values\n$1\/\\alpha$ of $\\hat W^{-\\frac{1}{2}}$ and $1\/\\beta$ of $\\hat W^{-\\frac{1}{2}}\\hat X$. Inserting these values in (\\ref{rhoX}), it follows that\n\\begin{eqnarray*}\n&& \\det\\left(I_{n+m} +\n\t\\left[ \\begin{array}{cc} \\hat W^{-\\frac{1}{2}}\\hat X & \\hat W^{-\\frac{1}{2}}\\end{array}\\right]\n\t\\left[ \\begin{array}{cc} 0 & \\Delta_T \\\\ \\Delta_T^{\\mathsf{H}} & 0 \\end{array}\\right]\n\t\\left[ \\begin{array}{c} \\hat X\\hat W^{-\\frac{1}{2}} \\\\ \\hat W^{-\\frac{1}{2}}\\end{array}\\right] \\right) \\\\\n&& \\qquad =\\det\\left(I_{n+m} +\n\t\\left[ \\begin{array}{cc} w & u \\end{array}\\right]\n\t\\left[ \\begin{array}{cc} 0 & \\epsilon \\\\ \\overline \\epsilon & 0 \\end{array}\\right]\n\t\\left[ \\begin{array}{c} w^{\\mathsf{H}} \\\\ u^{\\mathsf{H}} \\end{array}\\right] \\right) \\\\\n&& \\qquad =\n\t\\det\\left(I_{2} + \\left[ \\begin{array}{cc} 0 & \\epsilon \\\\ \\overline \\epsilon & 0 \\end{array}\\right]\n\t\\left[ \\begin{array}{c} w^{\\mathsf{H}} \\\\ u^{\\mathsf{H}} \\end{array}\\right] \\left[ \\begin{array}{cc} w & u \\end{array}\\right]\\right).\n\\end{eqnarray*}\nIf we now choose the argument of the complex number $\\epsilon$ such that $\\epsilon u^{\\mathsf{H}}w$ is real and negative and the modulus of $\\epsilon$ such that $1=|\\epsilon u^{\\mathsf{H}}w|+ |\\epsilon|$, then\n\\[\n\\det \\left(I_{2} + \\left[ \\begin{array}{cc} \\epsilon u^{\\mathsf{H}}w & \\epsilon\\\\ \\overline \\epsilon & \\overline \\epsilon w^{\\mathsf{H}}u \\end{array}\\right]\\right) = (1-|\\epsilon u^{\\mathsf{H}}w|)^2-|\\epsilon|^2=0.\n\\]\nSince $0\\le
|u^{\\mathsf{H}}w| \\le 1$, we have $\\frac{1}{2} \\le |\\epsilon| \\le 1$, and thus the $X$-passivity radius is located in the interval $\\alpha\\beta\/2 \\le \\rho_{\\mathcal M}(X) \\le |\\epsilon| \\alpha\\beta$.\nIf $u$ and $w$ are linearly dependent, then this interval shrinks to a point and the estimate is exact.\nWe summarize these observations in the following theorem.\n\\begin{theorem}\\label{thm:Xpassivity}\nLet ${\\mathcal M}=\\{A,B,C,D\\}$ be a given model and let $X\\in \\mathrm{Int}\\XWpd$. Then the $X$-passivity radius $\\rho_{\\mathcal M}(X)$ at this $X$ is bounded by\n\\[\n \\alpha\\beta\/2 \\le \\rho_{\\mathcal M}(X) \\le \\alpha\\beta\/(1+|u^{\\mathsf{H}}w|),\n \\]\n %\nwhere\n$\\alpha^2:=\\lambda_{\\min}(\\hat W)$, $\\beta^2:=\\lambda_{\\min}(\\hat X^{-1}\\hat W\\hat X^{-1})$, and the unit-norm vectors $u$, $v$, $w$ satisfy $\\hat W^{-\\frac{1}{2}}u=u\/\\alpha$, $\\hat W^{-\\frac{1}{2}} \\hat X v=w\/\\beta$.\nMoreover, if $u$ and $w$ are linearly dependent, then $\\rho_{\\mathcal M}(X)=\\alpha\\beta\/2$.\n\\end{theorem}\nIf we use these bounds for the passivity radius in a pH system, we have the following corollary.\n\\begin{corollary}\\label{cor:xeqI} If for a given system $\\mathcal M$ we have that $X=I_n\\in \\mathrm{Int}\\XWpd$, then\nthe corresponding representation of the system is port-Hamiltonian, {i.e.}, it has the representation ${\\mathcal M}:= \\{J-R,G-K,G^{\\mathsf{H}}+K^{\\mathsf{H}},S+N\\}$, and the $X$-passivity radius\nis given by\n \\[\n \\rho_{\\mathcal M}(I)=\\frac{1}{2}\\lambda_{\\min}W(I)=\\lambda_{\\min}\\left[\\begin{array}{cc} R & K \\\\ K^{\\mathsf{H}} & S\t \\end{array}\\right].\n \\]\n %\nMoreover, if $X=I_n$ is the analytic center of $\\mathrm{Int}\\XWpd$, then\n$\\rho_{\\mathcal M}(I)$ equals the passivity radius $\\rho_{\\mathcal M}$ of ${\\mathcal M}$.\n\\end{corollary}\n{\\bf Proof:}\n This follows directly from Theorem~\\ref{thm:Xpassivity}, since then $\\alpha=\\beta$ and we can choose $u=w$.\n\\hfill
$\\Box$\n\\begin{remark}\\label{rem:alphabeta}{\\rm Considering a pH representation,\nthe conditions $\\hat W\\geq \\alpha^2 I_{n+m}$ and $\\hat X^{-1} \\hat W \\hat X^{-1}\\geq \\beta^2 I_{n+m}$ yield the necessary (but not sufficient) condition for passivity that\n\\[\n\\left[ \\begin{array}{cc} \\hat W & \\alpha \\beta I_{n+m} \\\\\n \t\\alpha \\beta I_{n+m} & \\hat X^{-1} \\hat W \\hat X^{-1}\t\n \t\\end{array} \\right] \\geq 0.\n\\]\nUsing the square root $\\hat T:= \\hat X^{\\frac12}$ of $\\hat X>0$ and a congruence transformation, one finds that this is equivalent to\n\\[\n \\left[ \\begin{array}{cc} \\hat T^{-1} \\hat W \\hat T^{-1} & \\alpha \\beta I_{n+m} \\\\\n \t\\alpha \\beta I_{n+m} & \\hat T^{-1} \\hat W \\hat T^{-1}\t\n \t\\end{array} \\right] \\geq 0,\n\\]\nwhich implies that $\\hat T^{-1} \\hat W \\hat T^{-1} \\geq \\alpha\\beta I_{n+m}$.\n\nDefining $\\xi:=\\lambda_{\\min}(\\hat T^{-1} \\hat W \\hat T^{-1})$, we then also obtain the inequality $\\xi \\ge \\alpha\\beta$. Because of \\eqref{PH}, this is also equal to\n\\[\n \\xi =\\lambda_{\\min} \\left[\\begin{array}{cc}\nR & K \\\\ K^{\\mathsf{H}} & S\n\\end{array}\\right],\n\\]\nwhich suggests that pH representations\nare likely to guarantee a good passivity margin.
In order to compute the optimal product $\\xi=\\alpha\\beta$, we could maximize $\\xi$ under the constraint\n\\[\n\\hat W-\\xi\\hat X = \\left[ \\begin{array}{cc} - XA- A^{\\mathsf{H}} X - \\xi X & C^{\\mathsf{H}}- XB \\\\ C-B^{\\mathsf{H}} X & S-\\xi I_m \\end{array}\\right]> 0.\n\\]\n}\n\\end{remark}\n\n\\subsection{A port-Hamiltonian barrier function}\n\nFrom our previous discussions it appears that if we want to make sure that a state-space representation has a large passivity radius, we should not maximize the determinant of $W(X)$, but instead maximize\n\\begin{equation} \\label{tilde}\n\\det \\left ( \\left[ \\begin{array}{cc} X^{-\\frac12} & 0 \\\\ 0 & I_m \\end{array}\\right] W(X)\\left[ \\begin{array}{cc} X^{-\\frac12} & 0 \\\\ 0 & I_m \\end{array}\\right] \\right )\n\\end{equation}\nunder the constraint $X>0$, so that its square root $T=X^{\\frac12}$ exists.\nEquivalently, if $X>0$, we can maximize the determinant of\n\\begin{eqnarray*} \\tilde W(X)&:=& W(X)\\left[ \\begin{array}{cc} X^{-1} & 0 \\\\ 0 & I_m \\end{array}\\right]\\\\\n&=& \\left[ \\begin{array}{cc} -XAX^{-1}-A^{\\mathsf{H}} & C^{\\mathsf{H}}-XB \\\\ CX^{-1}-B^{\\mathsf{H}} & S \\end{array}\\right],\n\\end{eqnarray*}\nwhich has the same eigenvalues and the same determinant, but is expressed in terms of the variable $X$.\n\nWith this modified barrier function $\\tilde b(X):=- \\log \\det \\tilde W(X)$ we obtain the following formulas for the gradient of the barrier with respect to $\\tilde W$ and the incremental step of $\\tilde W(X)$ in the direction $\\Delta_X$:\n\\begin{eqnarray*}\n\\partial \\tilde b(X)\/\\partial \\tilde W &=& -\\tilde W(X)^{-{\\mathsf{H}}} = -W(X)^{-1}\\left[ \\begin{array}{cc} X & 0 \\\\ 0 & I_m \\end{array}\\right], \\\\\n\\Delta \\tilde W(X)[\\Delta_X] &=& \\left[ \\begin{array}{cc}\nXAX^{-1}\\Delta_X - \\Delta_X A & -\\Delta_X B \\\\ -CX^{-1}\\Delta_X & 0 \\end{array} \\right]\\left[ \\begin{array}{cc} X^{-1} & 0 \\\\ 0 & I_m
\\end{array}\\right].\n\\end{eqnarray*}\nUsing again the chain rule, the necessary condition for an extremal point is then that for all $\\Delta_X \\in \\Hn$, $\\langle\\partial \\tilde b(X) \/ \\partial \\tilde W, \\Delta \\tilde W(X)[\\Delta_X]\\rangle = 0$, or equivalently\n\\begin{equation} \\label{orthtilde}\n\\langle W(X)^{-1} , \\left[ \\begin{array}{cc} XAX^{-1}\\Delta_X -\\Delta_X A & -\\Delta_X B \\\\ -CX^{-1}\\Delta_X & 0\n\\end{array} \\right] \\rangle \\ =0 \\; .\n\\end{equation}\nDefining $P$ and $F$ as before, and using that\n\\[\n W(X)^{-1} = \\left[ \\begin{array}{cc} I_n & 0 \\\\ -F & I_m \\end{array}\\right]\n \\left[ \\begin{array}{cc} P^{-1} & 0 \\\\ 0 & S^{-1} \\end{array}\\right]\n \\left[ \\begin{array}{cc} I_n & -F^{\\mathsf{H}} \\\\ 0 & I_m \\end{array}\\right],\n\\]\nit then follows that \\eqref{orthtilde} holds if and only if $P$ is invertible and for all $\\Delta_X \\in \\Hn$\nwe have\n\\[\n\\langle \\Delta_X , P^{-1} (XAX^{-1}+F^{\\mathsf{H}}CX^{-1}) - (A-BF)P^{-1} \\rangle \\ =0,\n\\]\nor equivalently\n\\[\n\\langle P^{-1} , (XAX^{-1}+F^{\\mathsf{H}}CX^{-1})\\Delta_X - \\Delta_X (A-BF) \\rangle \\ =0,\n\\]\nwhich can be expressed as\n\\begin{eqnarray*}\n&& P[(X^{-1}A^{\\mathsf{H}}X+X^{-1}C^{\\mathsf{H}}F)-(A-BF)]\\\\\n&& \\qquad + [(XAX^{-1}+F^{\\mathsf{H}}CX^{-1})-(A-BF)^{\\mathsf{H}}]P=0.\n\\end{eqnarray*}\nNote that if one performs the coordinate transformation\n\\[\n\\{A_T,B_T,C_T,D_T\\} := \\{TAT^{-1}, TB, CT^{-1}, D \\}\n\\]\nwhere $T^2=X$, then $P_T=T^{-1}PT^{-1}$ and $F_T=FT^{-1}$, which yields\nthe equivalent condition\n\\begin{eqnarray*}\n&& P_T[(A_T^{\\mathsf{H}}-A_T)+(C^{\\mathsf{H}}_T+B_T)F_T]\\\\\n && \\qquad +[(A_T^{\\mathsf{H}}-A_T)+(B_T+C_T^{\\mathsf{H}})F_T]^{\\mathsf{H}}P_T=0.\n\\end{eqnarray*}\nMoreover, we have that\n\\[\n F_T = S^{-1}(C_T-B_T^{\\mathsf{H}}), \\quad P_T= -A_T-A_T^{\\mathsf{H}}-F_T^{\\mathsf{H}}SF_T.\n\\]\nThus, if we use a pH representation ${\\mathcal M}=\\{A_T,B_T,C_T,D_T\\}=\\{J-R,G-K,(G+K)^{\\mathsf{H}},D\\}$, then at the analytic center of the modified
barrier function, we have\n\\begin{eqnarray*}\nSF_T&=&K^{\\mathsf{H}}, \\ P_T= R -F_T^{\\mathsf{H}}SF_T, \\\\\n0&=& P_T(J-GF_T)+(J-GF_T)^{\\mathsf{H}}P_T,\n\\end{eqnarray*}\nand of course $X_T=I$, which implies that the $X$-passivity radius is given by\n\\[\n \\lambda_{\\min} \\left[ \\begin{array}{cc} R & K \\\\ K^{\\mathsf{H}} & S \\end{array}\\right].\n\\]\nOn the other hand, since\nwe have optimized the determinant of $\\tilde W(X)$, which has the same determinant as \\eqref{tilde}, it follows that\n\\[\n\\det \\tilde W(X)= \\det \\left[ \\begin{array}{cc} R & K \\\\ K^{\\mathsf{H}} & S \\end{array}\\right],\n\\]\nand we can expect to have obtained a nearly optimal passivity margin as well.\n\n\n\n\\section{Other Radii} \\label{sec:criteria}\n\t\nEven though in this paper we are focusing on passivity, we point out that pH representations also have other properties that are important to consider. In this section, we consider two such properties.\n\n\\subsection{The Stability Radius}\n\n\nIf a positive definite solution of the LMI (\\ref{KYP-LMI}) exists, then it follows from the positive definiteness of the $(1,1)$ block that the system is asymptotically stable. Hence we can employ the same technique that we have used for the characterization of the passivity radius to bound the \\emph{stability radius}, {i.e.}, the smallest perturbation $\\Delta_A$ that makes the system lose its asymptotic stability.
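The Lyapunov argument behind the stability statement can be sketched numerically: if $X>0$ and the $(1,1)$ block $-XA-A^{\mathsf{H}}X$ is positive definite, then $A$ is asymptotically stable. The following hypothetical construction (random data, not from the text) builds such a pair directly:

```python
import numpy as np

# If X > 0 and V = -X A - A^H X > 0, then A is asymptotically stable.
# Construct such a pair (A, X): for given X > 0 and V > 0, the choice
# A = -(1/2) X^{-1} V satisfies -X A - A^H X = V exactly.
rng = np.random.default_rng(3)
n = 4

X0 = rng.standard_normal((n, n))
X = X0 @ X0.T + np.eye(n)        # X > 0
V0 = rng.standard_normal((n, n))
V = V0 @ V0.T + np.eye(n)        # prescribed V > 0

A = -0.5 * np.linalg.solve(X, V)

assert np.allclose(-X @ A - A.conj().T @ X, V)
# A is similar to -(1/2) X^{-1/2} V X^{-1/2}, whose eigenvalues are all
# negative, so the spectrum of A lies in the open left half plane:
assert np.all(np.linalg.eigvals(A).real < 0)
```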
Introducing the matrices\n\\begin{eqnarray}\\nonumber\nV(X)&:=& -XA-A^{\\mathsf{H}}X , \\\\ V_{\\Delta_A}(X)&:=& V(X) - X\\Delta_A-\\Delta_A^{\\mathsf{H}}X,\n\\label{VDelta}\n\\end{eqnarray}\nwe define the \\emph{$X$-stability radius} as the norm of the smallest $\\Delta_A$ for which $V_{\\Delta_A}(X)$ loses its positive definiteness, {i.e.} for\n$X\\in {\\mathcal X}^{>0}$ with $V(X)> 0$, the $X$-stability radius is defined as\n\t%\n\t\\[\n\t\\rho_A(X):= \\inf_{\\Delta_A\\in \\mathbb{C}^{n \\times n}}\\left\\{ \\| \\Delta_A \\| \\; | \\; \\det V_{\\Delta_A}(X) = 0\\right\\}.\n\t\\]\n\t%\nNote that \\eqref{VDelta} is defined similarly to \\eqref{WDelta}, except for a sign change, and therefore, as for the passivity radius, we obtain the bound\n\\[\n\\alpha\\beta\/2 \\le \\rho_A(X) \\le \\alpha\\beta\/(1+|u^{\\mathsf{H}}w|),\n\\]\nwhere $\\alpha^2 = \\lambda_{\\min}(V)$, $\\beta^2 = \\lambda_{\\min}(X^{-1}VX^{-1})$, and where $u$, $v$ and $w$ are normalized vectors satisfying\n\\[\nV^{-\\frac{1}{2}}u=u\/\\alpha, \\quad V^{-\\frac{1}{2}} X v=w\/\\beta.\n\\]\n\\begin{corollary}\\label{cor:xeqIstab} If for a given system $\\mathcal M$ we have that $X=I_n\\in \\mathrm{Int}\\XWpd$, then at this point the corresponding representation of the system is port-Hamiltonian, {i.e.}, it has the representation ${\\mathcal M}:= \\{J-R,G-K,G^{\\mathsf{H}}+K^{\\mathsf{H}},D\\}$, and the $X$-stability radius of this model is given by\n\\[\n\t\\rho_A(I)=\\frac{1}{2}\\lambda_{\\min}V(I)=\\lambda_{\\min} (R).\n\\]\n\\end{corollary}\n{\\bf Proof:}\n\tThis follows directly from Theorem~\\ref{thm:Xpassivity}, since then $\\alpha=\\beta$ and we can choose $u=w$.\n\\hfill $\\Box$\n\n\\begin{remark}\\label{rem:stabrad}{\\rm\n\t\tIt follows from the conditions $V\\geq \\alpha^2 I_{n}$ and $X^{-1} V X^{-1}\\geq \\beta^2 I_{n}$ that a necessary (but not sufficient) condition for stability is given by $T^{-1}VT^{-1}\\geq \\alpha\\beta I_n$, where $T=X^{\\frac12}$.\n\t}\n\\end{remark}\nAnother robustness
measure for the transformation $T$ is to require that the \\emph{field of values} $\\{ x^{\\mathsf{H}}A_Tx \\; | \\; x\\in \\mathbb C^n, \\, x^{\\mathsf{H}}x=1\\}$ of the transformed matrix $A_T$ is as far left as possible in the left half plane. In other words, we want to minimize the real part of the rightmost \\emph{Rayleigh quotient} of $A_T$, given by\n\\[\n\\min_{T\\,:\\, T^2\\in \\XWpd} \\{ \\max_{x\\neq 0, x\\in \\mathbb{C}^n} {\\mathfrak{Re}}(\\frac{x^{\\mathsf{H}}A_Tx}{x^{\\mathsf{H}}x}) \\}.\n\\]\nWriting $A_T=J_T-R_T$ with $J_T=-J_T^{\\mathsf{H}}$ and $R_T=R_T^{\\mathsf{H}}$, we clearly only need to consider $x^{\\mathsf{H}}R_Tx$, since ${\\mathfrak{Re}} (x^{\\mathsf{H}}J_Tx)=0$. In other words, we want to determine\n\\[\n\\min_{T\\,:\\,T^2\\in \\XWpd} \\{ \\max_{x\\neq 0, x\\in \\mathbb{C}^n} (-\\frac{x^{\\mathsf{H}}R_Tx}{x^{\\mathsf{H}}x}) \\},\n\\]\nwhich amounts to maximizing the smallest eigenvalue of the $(1,1)$ block of the LMI~(\\ref{KYP-LMI}).\nMaximizing the determinant of $W_T(X)$ is therefore an alternative, since this will tend to maximize all of its eigenvalues, including those of its principal submatrices.\n\nWe are not advocating the use of either of these two approaches to compute the stability radius of our system, since we know that it is given explicitly by the formula\n\\[ \\rho_A = \\min_{\\omega\\in \\mathbb{R} } \\sigma_{\\min}(A-\\imath \\omega I_n).\n\\]\nWe just want to stress here that using a pH realization based on the analytic center will also yield a robust stability margin.\n\n\n\\subsection{Well conditioned state-space transformations}\\label{sec:cond}\n\nSince for any solution $X\\in \\XWpd$, $T=X^{\\frac 12}$ yields a state space transformation to pH form, we can also try to optimize the condition number of $T$ or directly the condition number of $X=T^2\\in \\XWpd$ within the set described by $ X_- \\leq X \\leq X_+ $.\n\nLet us first consider the special case that $X_+$ and $X_-$ commute.
In this case there exists a unitary transformation $U$ that simultaneously diagonalizes both $X_-$ and $X_+$, {i.e.}, $\nU^{\\mathsf{H}}X_-U=\\mathrm{diag}\\{\\lambda^{(-)}_1, \\ldots, \\lambda^{(-)}_n\\}$ and $U^{\\mathsf{H}}X_+U=\\mathrm{diag}\\{\\lambda^{(+)}_1, \\ldots, \\lambda^{(+)}_n\\}$.\nSince $ X_- \\leq X \\leq X_+ $, it follows that each eigenvalue $\\lambda_i$ of $X$, $i=1,\\ldots,n$, must lie in the closed interval\n$[\\lambda^{(-)}_i , \\lambda^{(+)}_i]$, and these intervals are nonempty. If there exists a point $\\lambda$\nin the intersection of all these intervals, then $X=\\lambda I_n$ is an optimal choice, and it has condition number $\\kappa(X)=1$. If not, then there are at least two non-intersecting intervals, which implies that\n$\\lambda^{(+)}_{\\min} < \\lambda^{(-)}_{\\max}$, and hence the closed interval $[\\lambda^{(+)}_{\\min} , \\lambda^{(-)}_{\\max}]$ is non-empty. Moreover, it must also intersect each of the intervals $[\\lambda^{(-)}_i , \\lambda^{(+)}_i]$ in at least one point.\nThus, if we choose for any $i=1,\\ldots,n$\n\\[\n \\lambda_i \\in [\\lambda^{(-)}_i , \\lambda^{(+)}_i] \\cap [\\lambda^{(+)}_{\\min} , \\lambda^{(-)}_{\\max}],\n\\]\nthen the resulting matrix will have optimal condition number $\\kappa(X)=\n\\frac{\\lambda^{(-)}_{\\max}}{\\lambda^{(+)}_{\\min}}$, and hence $\\kappa(T)=\\sqrt{\\frac{\\lambda^{(-)}_{\\max}}{\\lambda^{(+)}_{\\min}}}$.\nThe proof that this is optimal follows from the Loewner ordering of positive semidefinite matrices.
The largest eigenvalue of $X$ must be larger than or equal to $\\lambda_{\\max}^{(-)}$ and the smallest eigenvalue of $X$ must be smaller than or equal to $\\lambda_{\\min}^{(+)}$.\n\nIf $X_+$ and $X_-$ do not commute, then there still exists a (non-unitary) congruence transformation $L$ which simultaneously diagonalizes both $D^{(-)}:=L^{\\mathsf{H}}X_-L$ and $D^{(+)}:=L^{\\mathsf{H}}X_+L$, but the resulting diagonal elements $d^{(-)}_i$\nand $d^{(+)}_i$ are not the eigenvalues anymore. Nevertheless, the same construction holds for any matrix $X$, but we cannot prove optimality anymore. On the other hand, we can guarantee the bound\n\\[\n\\kappa(T)\\le \\max \\left (\\kappa(L), \\kappa(L) \\sqrt{\\frac{d^{(-)}_{\\max}}{d^{(+)}_{\\min}}}\\right ).\n\\]\n\n\\section{Numerical examples}\\label{sec:num}\nIn this section we present a few numerical examples for realizations that are constructed on the basis of the analytic center.\n\nWe first look at a real scalar transfer function of first degree ($m=n=1$) because in this case both analytic centers that we proposed earlier are easy to compute analytically. The transfer functions of interest are given by\n\\begin{eqnarray*} T(s)&=&d + \\frac{cb}{s-a}, \\\\\n \\Phi(s)&=&\\left[ \\begin{array}{cc} b\/(-s-a) & 1 \\end{array} \\right]\n\\left[\n\\begin{array}{cc}\n0 & c \\\\\nc & 2d\n\\end{array}\n\\right]\\left[ \\begin{array}{c} b\/(s-a) \\\\ 1 \\end{array} \\right].\n\\end{eqnarray*}\nIf we assume that the system is strictly passive, then\n\\[\nW(x) = \\left[\\begin{array}{cc} -2ax & c-bx \\\\ c-bx & 2d \\end{array}\\right]\n\\]\nmust be positive definite for some value of $x$. This implies that $d>0$ and that\n$\\det W(x)=-4adx-(c-bx)^2=-b^2x^2-2(2ad-cb)x-c^2 $ is positive for some value of $x$.\nIt follows that the discriminant $(2ad-cb)^2-b^2c^2 = 4a^2d^2-4abcd = 4ad(ad-bc)$ must be positive.
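The discriminant condition just derived is easy to cross-check numerically. The following Python sketch (our own illustration, with example values $a=-1$, $b=c=d=1$ chosen so that the condition holds) verifies that the maximum of $\det W(x)$ over $x$ equals the discriminant divided by $b^2$:

```python
import numpy as np

# Example coefficients (our choice) with a < 0, d > 0, and ad - bc < 0.
a, b, c, d = -1.0, 1.0, 1.0, 1.0

def detW(x):
    # det W(x) = -4adx - (c - bx)^2 = -b^2 x^2 - 2(2ad - cb)x - c^2
    return -4 * a * d * x - (c - b * x) ** 2

disc = 4 * a * d * (a * d - b * c)     # discriminant from the derivation above
x_star = (c * b - 2 * a * d) / b ** 2  # stationary point of the quadratic

assert d > 0 and disc > 0              # W(x) can be made positive definite
assert abs(detW(x_star) - disc / b ** 2) < 1e-12
xs = np.linspace(x_star - 5, x_star + 5, 1001)
assert detW(xs).max() <= detW(x_star) + 1e-12   # x_star is the maximizer
```

Since the quadratic is concave in $x$, a positive discriminant is equivalent to $\det W(x)>0$ on some interval, which together with $d>0$ makes $W(x)$ positive definite there.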
Since the system is also stable, we finally have the following necessary and sufficient conditions for strict passivity of a real first-degree scalar function:\n\\[\n a<0, \\quad d>0, \\quad \\det \\left[\\begin{array}{cc}\na & b \\\\ c & d \\end{array}\\right] < 0.\n\\]\nIt is interesting to point out that the transfer function $\\Phi(\\jmath\\omega)$ is a non-negative and unimodal function of $\\omega$ with extrema at $0$ and $\\infty$.\nWe can thus check strict passivity by verifying the positivity of $\\Phi(\\jmath\\omega)$ at these two values, and the stability condition $a<0$~:\n\\[ \\Phi(\\infty) = d >0 , \\ \\Phi(0)= \\frac{2(ad-cb)}{a} >0, \\ a<0.\n\\]\nSince the determinant is quadratic in $x$, it is easy to determine the analytic center $x_*$ of the linear matrix inequality $W(x)>0$ and the corresponding feedback and Riccati operator:\n\\begin{eqnarray*} x_*&=&\\frac{c}{b}-2d\\frac{a}{b^2}, \\quad\nf=\\frac{a}{b}, \\quad p=2d\\frac{a^2}{b^2}-2c\\frac{a}{b},\\\\\nW(x_*) &=& \\left[\\begin{array}{cc}\n4d\\frac{a^2}{b^2}-2c\\frac{a}{b} & 2d\\frac{a}{b} \\\\ 2d\\frac{a}{b} & 2d \\end{array}\\right]\n= \\left[\\begin{array}{cc} 1 & \\frac{a}{b} \\\\ 0 & 1 \\end{array}\\right]\n\\left[\\begin{array}{cc}p & 0 \\\\ 0 & 2d \\end{array}\\right]\n\\left[\\begin{array}{cc} 1 & 0 \\\\ \\frac{a}{b} & 1\\end{array}\\right],\n\\end{eqnarray*}\nwhich implies $\\det W(x_*)=2d\\cdot p$ and the strict passivity condition\n\\[ a < 0, \\quad d > 0 \\quad \\mathrm{and} \\quad p= \\frac{2a}{b^2}(ad-bc) >0.\n\\]\nStrict passivity is lost when one of the following happens:\n\\[ d+\\delta_d=0, \\quad a+\\delta_a = 0, \\quad \\det \\left[ \\begin{array}{cc} a+\\delta_a & b+\\delta_b \\\\ c+\\delta_c & d+\\delta_d \\end{array} \\right]=0.\n\\]\nTherefore, it follows that\n\\[\n\\rho = \\min(d, |a|, \\sigma_2 \\left[ \\begin{array}{cc} a & b \\\\ c & d \\end{array} \\right]) = \\sigma_2 \\left[ \\begin{array}{cc} a & b \\\\ c & d \\end{array} \\right].\n\\]\nBut at the analytic center
$x_*=(cb-2da)\/b^2$, we have\n\\[\n \\det\n2d\\left[ \\begin{array}{cc} 2\\frac{a^2}{b^2}-\\frac{ac}{bd} & \\frac{a}{b} \\\\ \\frac{a}{b} & 1 \\end{array} \\right]=\\frac{4da}{b^2}(ad-bc)\n\\]\nwhich shows that the positivity at the analytic center yields the correct condition for strict passivity of the model.\n\nIf we use the port-Hamiltonian barrier function, we have to consider only $x>0$\nsince $\\det \\tilde W(x)=\\det W(x) \/x$. One easily checks that the derivative of\n$\\det \\tilde W(x)$ has a positive zero at $x_*=|\\frac{c}{b}|$, which eventually yields a balanced realization ${\\mathcal M}_T=\\{a,\\sqrt{|bc|}, bc\/\\sqrt{|bc|},d\\}$, and an improved passivity radius\n\\[\n \\sigma_2 \\left[ \\begin{array}{cc} a & bc\/\\sqrt{|bc|} \\\\ \\sqrt{|bc|} & d \\end{array} \\right].\n\\]\n\n\nAs a second test case we look at a random numerical model $\\{A,B,C,D\\}$ in pH form of state dimension $n=6$ and input\/output dimension $m=3$, generated via\n\\[\n \\left[ \\begin{array}{cc} R & K \\\\ K^{\\mathsf{H}} & S \\end{array} \\right] := M M^{\\mathsf{H}} ,\n\\]\nwhere $M$ is a random $(n+m)\\times(n+m)$ matrix generated in MATLAB. From this we then defined the model $A:=-R\/2$, $B:=-C^{\\mathsf{H}}:=-K\/2$, and $D:=S\/2$.\nThis construction guarantees that $X_0=I_n$ satisfies the LMI positivity constraint for the model ${\\mathcal M}:=\\{A,B,C,D\\}$. We then used the Newton iteration developed in \\cite{BanMVN17_ppt} to compute the analytic center $X_c$ of the LMI\n\\[ W(X):= \\left[\n\\begin{array}{cc}\n-X\\,A - A^{\\mathsf{H}}X & C^{\\mathsf{H}} - X\\,B \\\\\nC- B^{\\mathsf{H}}X & S\n\\end{array}\n\\right] >0,\n\\]\nusing the barrier function $b(X):=-\\ln \\det W(X)$.
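A real-valued sketch of this construction (in Python rather than MATLAB, with a fixed random seed; all names are ours) confirms that $X_0=I_n$ satisfies the LMI, since $W(I_n)$ reproduces $MM^{\mathsf{H}}$ by construction:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 3
M = rng.standard_normal((n + m, n + m))
P = M @ M.T                              # [[R, K], [K^T, S]], pos. definite
R, K, S = P[:n, :n], P[:n, n:], P[n:, n:]
A, B, D = -R / 2, -K / 2, S / 2
C = K.T / 2                              # so that B = -C^T

X = np.eye(n)                            # candidate solution X_0 = I_n
W = np.block([[-X @ A - A.T @ X, C.T - X @ B],
              [C - B.T @ X,      S]])
assert np.allclose(W, P)                 # W(I_n) = M M^T by construction
assert np.linalg.eigvalsh(W).min() > 0   # hence W(I_n) > 0
```

Indeed, $-A-A^{\mathsf{H}}=R$, $C^{\mathsf{H}}-B=K$, and the $(2,2)$ block is $S$, so $W(I_n)=MM^{\mathsf{H}}$, which is positive definite for a generic (nonsingular) $M$.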
We then determined the quantities $\\alpha^2 := \\lambda_{\\min}(\\hat W)$, $\\beta^2 := \\lambda_{\\min}(\\hat X^{-1}\\hat W \\hat X^{-1})$, and $\\xi := \\lambda_{\\min}(\\hat X^{-\\frac12}\\hat W \\hat X^{-\\frac12})$,\nwhere $\\hat W:=W(X_c)$ and $\\hat X := \\mathrm{diag} \\{X_c,I_m\\}$. The constructed matrix\n\\[\n\\hat X^{-\\frac12}\\hat W \\hat X^{-\\frac12}=\\left[ \\begin{array}{cc} R_c & K_c \\\\ K_c^{\\mathsf{H}} & S_c \\end{array} \\right]\n\\]\nalso contains the parameters of the port-Hamiltonian realization at the analytic center $X_c$. The results are given in the table below:\n\\[\n\\begin{array}{c|c|c|c|c|c}\n\t \\alpha^2 & \\beta^2 & \\xi & \\alpha\\beta & \\lambda_{\\min}(R_c) & \\rho_{stab} \\\\\n\t \\hline\n\t 0.002366 & 0.001065 & 0.002381 & 0.001587 & 0.1254 & 0.1035\n\\end{array}\n\\]\nThey indicate that $\\lambda_{\\min}(R_c)$ at the analytic center is a good approximation of the true stability radius, and that $\\lambda_{\\min}(\\hat X^{-\\frac12}\\hat W \\hat X^{-\\frac12})$ at the analytic center is a good approximation of the $X_c$-passivity radius estimate $\\alpha\\beta$.\n\n\n\\section{Conclusion} \\label{sec:conclusion}\n\nIn this paper we have introduced the concept of the analytic center for a barrier function derived from the KYP LMI $W(X)>0$ for passive systems.\nWe have shown that the analytic center yields very promising results for choosing the coordinate transformation to pH form for a given passive system.\n\nSeveral important issues have not been addressed yet. Can we also apply these ideas to the limiting situations where $W(X)$ is singular and\/or $X$ is singular, or when the given system is not minimal? More importantly, one should also analyze whether these ideas can be adapted to models\nrepresented in descriptor form.\n\nAnother interesting issue is that of finding the nearest passive system to a given system that is not passive.
This has an important application in identification,\nwhere one may lose the property of passivity due to computational and round-off errors incurred during the identification.\n\\section*{Acknowledgment}\nThe authors greatly appreciate the help of Daniel Bankmann from TU Berlin with the computation of the numerical example.\n\n\\bibliographystyle{plain}\n\n\\section{Introduction}\nIn many real-world problems, a central objective is to optimally choose the right action to maximize the profit of interest.\nFor example, in marketing, an advertising campaign is designed to encourage people to purchase a product~\\citep{renault_chapter_2015}.\nA marketer can choose whether to deliver an advertisement to each individual or not,\nand the outcome is the number of purchases of the product.\nAnother example is personalized medicine, where a treatment is chosen for each\npatient to maximize the medical effect and minimize the risk of adverse events\nor harmful side effects~\\citep{katsanis_case_2008, abrahams_case_2009}.\nIn this case, giving or not giving a medical treatment to each individual are the possible actions to choose,\nand the outcome is the rate of recovery or survival from the disease.\nHereafter, we use the word \\emph{treatment} for taking an action, following the personalized medicine example.\n\n\\emph{A\/B testing}~\\citep{kohavi_controlled_2009} is a standard method for such tasks, where two groups of people, A and B, are randomly chosen.\nThe outcomes are measured separately from the two groups after treating all the members of Group A but none of Group B.\nBy comparing the outcomes between the
two groups by a statistical test,\none can examine whether the treatment positively or negatively affected the outcome.\nHowever, A\/B testing only compares the two extreme options: treating everyone or no one.\nThese two options can both be far from optimal when the treatment has a positive effect on some individuals but a negative effect on others.\n\nTo overcome the drawback of A\/B testing, \\emph{uplift modeling} has been investigated recently~\\citep{radcliffe2011real, uplift-icml2012-workshop, rzepakowski2012decision}.\n\\emph{Uplift modeling} is the problem of estimating the \\emph{individual uplift}, the incremental profit brought by the treatment conditioned on features of each individual.\nUplift modeling enables us to design a refined decision rule for optimally determining whether to treat each individual or not, depending on his\/her features.\nSuch a treatment rule allows us to target only those who respond positively to the treatment and to avoid treating negative responders.\n\nIn the standard uplift modeling setup, there are two types of labels~\\citep{radcliffe2011real, uplift-icml2012-workshop, rzepakowski2012decision}:\nOne is whether the treatment has been given to the individual and the other is its outcome.\nExisting uplift modeling methods require each individual to be \\emph{jointly}\ngiven these two labels for analyzing the association between outcomes and the\ntreatment~\\citep{radcliffe2011real, uplift-icml2012-workshop, rzepakowski2012decision}.\nHowever, joint labels are expensive or hard (or even impossible) to obtain in many real-world problems.\nFor example, when distributing an advertisement by email,\nwe can easily record to whom the advertisement has been sent.\nHowever, for technical or privacy reasons,\nit is difficult to keep track of those people until we observe the outcomes on whether they buy the product or not.\nAlternatively, we can easily obtain information about purchasers of the product at the moment when the purchases are
actually made.\nHowever, we cannot know whether those who buy the product have been exposed to the advertisement or not.\nThus, every individual always has one missing label.\nWe term such samples \\emph{separately labeled samples}.\n\nIn this paper, we consider a more practical uplift modeling setup\nwhere no jointly labeled samples are available, but only separately labeled samples are given.\nTheoretically, we first show that the individual uplift is identifiable when we have two sets of\nseparately labeled samples collected under \\emph{different} treatment policies.\nWe then propose a novel method that directly estimates the individual uplift only from separately labeled samples.\nFinally, we demonstrate the effectiveness of the proposed method through experiments.\n\n\\section{Problem Setting}\nThis paper focuses on estimation of the \\emph{individual uplift} $u(\\bm x)$, often called the \\emph{individual treatment effect (ITE)} in the causal inference literature~\\citep{rubin_causal_2005}, defined as\n$u(\\bm x) := \\mathbf{E}[Y_1\\given \\bm x] - \\mathbf{E}[Y_{-1}\\given \\bm x]$,\nwhere\n$\\mathbf{E}[\\;\\cdot\\given\\cdot\\;]$ denotes the conditional expectation,\nand $\\bm x$ is an $\\mathcal{X}$-valued random variable\n($\\mathcal{X}\\subseteq \\Re^d$) representing features of an individual,\nand\n $Y_1, Y_{-1}$ are $\\mathcal{Y}$-valued \\emph{potential outcome variables}~\\citep{rubin_causal_2005}\n($\\mathcal{Y}\\subseteq\\Re$) representing outcomes that would be observed\nif the individual was treated and not treated, respectively.\nNote that only one of $Y_1$ and $Y_{-1}$ can be observed for each individual.\nWe denote the $\\{1, -1\\}$-valued random variable of the treatment assignment\nby $t$, where $t = 1$ means that the individual has been treated and $t = -1$ not treated.\nWe refer to the population for which we want to evaluate $u(\\bm x)$ as the \\emph{test population},\nand denote the density of the test population by $p(Y_1,
Y_{-1}, \\bm x, t)$.\n\nWe assume that $t$ is \\emph{unconfounded}\nwith either of $Y_1$ and $Y_{-1}$ conditioned on $\\bm x$,\ni.e.\\@ $p(Y_1\\given \\bm x, t) = p(Y_1\\given \\bm x)$\nand $p(Y_{-1}\\given \\bm x, t) = p(Y_{-1}\\given \\bm x)$.\nUnconfoundedness is an assumption commonly made in observational studies~\\citep{ite_gen_bound, causal_uplift_review}.\nFor notational convenience, we denote by $y := Y_t$ the outcome of the treatment assignment $t$.\nFurthermore, we refer to any conditional density of $t$ given $\\bm x$ as a \\emph{treatment policy}.\n\nIn addition to the test population,\nwe suppose that there are two \\emph{training populations} $k = 1, 2$,\nwhose joint probability densities $p_k(Y_1, Y_{-1}, \\bm x, t)$\nsatisfy\n\\begin{align}\n p_k(Y_{t_0}=y_0\\given\\bm x = \\bm x_0)\n &= p(Y_{t_0}=y_0\\given\\bm x=\\bm x_0)\\quad \\text{(for $k=1, 2$)},\\label{eq:1st_cond}\\\\\n p_1(t=t_0\\mid \\bm x=\\bm x_0)\n &\\neq p_2(t=t_0 \\mid \\bm x=\\bm x_0), \\label{eq:conditions_on_p}\n\\end{align}\nfor all possible realizations $\\bm x_0\\in\\mathcal{X}$, $t_0\\in\\{-1, 1\\}$, and $y_0\\in\\mathcal{Y}$.\nIntuitively, Eq.~\\eqref{eq:1st_cond} means that\npotential outcomes depend on $\\bm x$ in the same way as those in the test population,\nand Eq.~\\eqref{eq:conditions_on_p} states that the two policies assign the treatment with different probabilities for every $\\bm x = \\bm x_0$.\n\nWe suppose that the following four training data sets, which we call \\emph{separately labeled samples}, are given:\n\\begin{align*}\n\\{(\\bm x^{(k)}_i, y^{(k)}_i)\\}_{i=1}^{n_k}\\stackrel{\\mathrm{i.i.d.}}{\\sim} p_k(\\bm x, y),\\quad\n \\{(\\widetilde{\\bm x}^{(k)}_i, t^{(k)}_i)\\}_{i=1}^{\\widetilde{n}_k}\\stackrel{\\mathrm{i.i.d.}}{\\sim} p_k(\\bm x, t)\\quad \\text{(for $k=1, 2$)},\n\\end{align*}\nwhere $n_k$ and $\\widetilde{n}_k$, $k=1, 2$, are positive integers.\nUnder Assumptions~\\eqref{eq:1st_cond}, \\eqref{eq:conditions_on_p}, and the unconfoundedness, we
have\n$p_k(Y_t \\given\\bm x, t=t_0) = p(Y_{t_0}\\given\\bm x, t=t_0) = p(Y_{t_0}\\given\\bm x)$\nfor $t_0\\in\\{-1,1\\}$ and $k\\in\\{1, 2\\}$.\nNote that we can safely denote $p(y \\given \\bm x, t) := p_k(y\\given \\bm x, t)$.\nMoreover, we have $\\mathbf{E}[Y_{t_0}\\given\\bm x] = \\mathbf{E}[y\\given\\bm x, t=t_0]$ for $t_0=1, -1$,\nand thus our goal boils down to the estimation of\n\\begin{align}\n u(\\bm x) = \\mathbf{E}[y\\given\\bm x,t=1] - \\mathbf{E}[y\\given\\bm x,t=-1]\\label{eq:i_ul}\n\\end{align}\nfrom the separately labeled samples,\nwhere the conditional expectation is taken over $p(y\\given\\bm x, t)$.\n\nEstimation of the individual uplift is important for the following reasons.\n\n\\textbf{It enables the estimation of the average uplift.}\nThe \\emph{average uplift} $U(\\pi)$ of the treatment policy $\\pi(t\\given\\bm x)$ is the\naverage outcome of $\\pi$ minus that of the policy $\\pi_{-}$,\nwhich constantly assigns the treatment as $t=-1$,\ni.e., $\\pi_-(t=\\tau\\given\\bm x) := 1[\\tau=-1]$,\nwhere $1[\\cdot]$ denotes the indicator function:\n\\begin{align}\n U(\\pi)\n &:= \\iint \\sum_{t=-1,1}y p(y\\given\\bm x,t)\\pi(t\\given\\bm x) p(\\bm x) \\mathrm{d}y\\dbm x\n -\\iint \\sum_{t=-1,1} y p(y\\given\\bm x,t)\\pi_-(t\\given\\bm x) p(\\bm x) \\mathrm{d}y\\dbm x\\nonumber\\\\\n &= \\int u(\\bm x) \\pi(t = 1\\given\\bm x) p(\\bm x) \\dbm x.\\label{eq:AO_AU}\n\\end{align}\nThis quantity can be estimated from samples of $\\bm x$ once we obtain an estimate of $u(\\bm x)$.\n\n\\textbf{It provides the optimal treatment policy.}\nThe treatment policy given by $\\pi(t=1\\given\\bm x) = 1[0 \\le u(\\bm x)]$ is the\noptimal policy that maximizes the average uplift $U(\\pi)$ and equivalently the average outcome $\\iint \\sum_{t=-1,1}y\np(y\\given\\bm x,t)\\pi(t\\given\\bm x) p(\\bm x) \\mathrm{d}y\\dbm x$ (see Eq.~\\eqref{eq:AO_AU})~\\citep{rzepakowski2012decision}.\n\n\\textbf{It is the optimal ranking scoring function.}\nFrom a practical
viewpoint, it may be useful to prioritize\nindividuals to be treated according to some ranking score,\nespecially when the treatment is costly and only a limited number of\nindividuals can be treated due to some budget constraint.\nIn fact, $u(\\bm x)$ serves as the optimal ranking score for this purpose~\\citep{tuffery_data_2011}.\nMore specifically, we define a family of treatment policies $\\{\\pi_{f,\n \\alpha}\\}_{\\alpha\\in\\Re}$\n\\emph{associated with scoring function $f$} by $\\pi_{f,\\alpha}(t=1\\given\\bm x) =\n1[\\alpha \\le f(\\bm x)]$. Then, under some technical condition, $f = u$\nmaximizes the \\emph{area under the uplift curve (AUUC)} defined as\n\\begin{align*}\n \\AUUC(f)\n &:= \\int_0^1 U(\\pi_{f, \\alpha}) \\mathrm{d}C_\\alpha\\\\\n &= \\int_0^1 \\int u(\\bm x) 1[\\alpha \\le f(\\bm x)] p(\\bm x) \\dbm x \\mathrm{d}C_\\alpha\\\\\n &= \\mathbf{E}[1[f(\\bm x) \\le f(\\bm x')] u(\\bm x')],\n\\end{align*}\nwhere $C_\\alpha := \\Pr[f(\\bm x) < \\alpha]$, $\\bm x,\\bm\nx'\\stackrel{\\mathrm{i.i.d.}}{\\sim} p(\\bm x)$, and $\\mathbf{E}$ denotes the expectation\nwith respect to these variables.\nAUUC is a standard performance measure for uplift modeling\nmethods~\\citep{radcliffe_using_2007, radcliffe2011real, uplift-icml2012-workshop, rzepakowski2012decision}.\nFor more details, see Appendix~\\ref{sec:AUUC_ranking} in the supplementary material.\n\n\\textbf{Remark on the problem setting:}\nUplift modeling is often referred to as individual treatment effect estimation or heterogeneous treatment effect estimation\nand has been extensively studied especially in the causal inference literature~\\citep{rubin_causal_2005, pearl2009causality, causal_uplift_review, hill_bayesian_2011, imai_estimating_2013, wager_estimation_2015, johansson_learning_2016, kunzel_meta-learners_2017}.\nIn particular, recent research has investigated the problem under the setting of \\emph{observational studies}, i.e., inference using data obtained from \\emph{uncontrolled
experiments} because of its practical importance~\\citep{ite_gen_bound}.\nHere, experiments are said to be uncontrolled when some of the treatment variables are not controlled to have designed values.\n\nGiven that treatment policies are unknown, our problem setting is also an observational one,\nbut it poses an additional challenge that stems from missing labels.\nWhat makes our problem feasible is that we have two kinds of data sets\nfollowing different treatment policies.\n\nIt is also important to note that our setting generalizes the standard setting for observational studies since the former is reduced to the latter when one of the treatment policies always assigns individuals to the treatment group, and the other to the control group.\n\nOur problem is also closely related to individual treatment effect estimation via instrumental variables~\\citep{imbens_instrumental_2014, athey_generalized_2016, hartford_deep_2017, lewis_adversarial_2018}.\\footnote{%\nAmong the related papers mentioned above, the most relevant one is~\\citet{lewis_adversarial_2018},\nwhich is concurrent work with ours.\n}\n\n\\section{Naive Estimators}\\label{sec:naive}\nA naive approach is first estimating the conditional densities\n$p_k(y\\given\\bm x)$ and $p_k(t\\given\\bm x)$ from training samples by some\nconditional density estimator \\citep{bishop2006PRML,\n sugiyama2010conditional}, and then solving the following linear system for\n$p(y\\given\\bm x, t=1)$ and $p(y\\given\\bm x, t=-1)$:\n\\begin{align}\n \\underbrace{p_k(y\\given \\bm x)}_{\\text{Estimated from $\\{(\\bm x^{(k)}_i, y^{(k)}_i)\\}_{i=1}^{n}$}} &= \\sum_{t=-1,1} p(y\\given\\bm x, t) \\underbrace{p_k(t\\given\\bm x)}_{\\text{Estimated from $\\{(\\widetilde{\\bm x}^{(k)}_i, t^{(k)}_i)\\}_{i=1}^{\\widetilde{n}}$}}\n \\quad \\text{(for $k = 1, 2$)}.\\label{eq:p_y_given_x}\n\\end{align}\nAfter that, the conditional expectations of $y$ over $p(y\\given\\bm x,\nt=1)$ and $p(y\\given\\bm x, t=-1)$ are calculated by numerical
integration, and finally\ntheir difference is calculated to obtain an estimate of $u(\\bm x)$.\n\nHowever, this may not yield a good estimate due to the difficulty of\nconditional density estimation and the instability of numerical integration.\nThis issue may be alleviated by working on the following linear system implied by Eq.~\\eqref{eq:p_y_given_x} instead:\n$\\mathbf{E}_k[y\\given\\bm x] = \\sum_{t=-1,1}\\mathbf{E}[y\\given\\bm x, t]p_k(t\\given\\bm x)$, $k=1, 2$,\nwhere $\\mathbf{E}_k[y\\given\\bm x]$ and $p_k(t\\given\\bm x)$ can be estimated from our samples.\nSolving this new system for $\\mathbf{E}[y\\given\\bm x, t=1]$ and $\\mathbf{E}[y\\given\\bm x, t=-1]$ and taking their difference gives an estimate of $u(\\bm x)$.\nA method called \\emph{two-stage least-squares} for instrumental variable regression takes such an approach~\\citep{imbens_instrumental_2014}.\n\nThe second approach, which estimates $\\mathbf{E}_k[y\\given\\bm x]$ and $p_k(t\\given\\bm x)$, avoids both conditional density estimation and numerical\nintegration, but it still involves the post-processing of solving the linear system and\ntaking the difference, which is a potential cause of performance deterioration.\n\n\n\\section{Proposed Method}\nIn this section, we develop a method that can overcome the aforementioned problems\nby directly estimating the individual uplift.\n\n\n\\subsection{Direct Least-Square Estimation of the Individual Uplift}\n\\label{sec:direct_estimation}\nFirst, we will show an important lemma that directly relates the marginal distributions of separately labeled samples to the individual uplift $u(\\bm x)$.\n\n\\begin{lemma}\\label{lem:u_is_diff_ratio}\nFor every $\\bm x$ such that $p_1(t\\given\\bm x) \\neq p_2(t\\given\\bm x)$, $u(\\bm x)$ can be\nexpressed as\n\\begin{align}\n u(\\bm x)\n = 2\\times\\frac{\\mathbf{E}_{y\\sim p_1(y\\given \\bm x)}[y] - \\mathbf{E}_{y\\sim p_2(y\\given \\bm x)}[y]}%\n{\\mathbf{E}_{t\\sim p_1(t\\given\\bm
x)}[t] - \\mathbf{E}_{t\\sim p_2(t\\given\\bm \nx)}[t]}\\label{eq:diff_ratio}.\n\\end{align}\n\\end{lemma}\nFor a proof, refer to Appendix~\\ref{sec:proof_of_lemma1} in the supplementary material.\n\nUsing Eq.~\\eqref{eq:diff_ratio}, we can re-interpret the naive methods described\nin Section~\\ref{sec:naive} as estimating the conditional expectations on the right-hand side by separately performing regression on\n$\\{(\\bm x^{(1)}_i, y^{(1)}_i)\\}_{i=1}^{n_1}$, $\\{(\\bm x^{(2)}_i, y^{(2)}_i)\\}_{i=1}^{n_2}$,\n$\\{(\\widetilde{\\bm x}^{(1)}_i, t^{(1)}_i)\\}_{i=1}^{\\widetilde{n}_1}$, and $\\{(\\widetilde{\\bm x}^{(2)}_i, t^{(2)}_i)\\}_{i=1}^{\\widetilde{n}_2}$.\nThis approach may result in unreliable performance when the denominator\nis close to zero, i.e., $p_1(t\\given\\bm x) \\simeq p_2(t\\given\\bm x)$.\n\nLemma~\\ref{lem:u_is_diff_ratio} can be simplified by introducing auxiliary variables $z$ and $w$, which are $\\mathcal{Z}$-valued and $\\{-1, 1\\}$-valued random variables\nwhose conditional probability density and mass are defined by\n\\begin{align*}\n \\textstyle p(z=z_0\\given\\bm x) &= \\textstyle\\frac{1}{2} p_1(y=z_0\\mid\\bm x)\n + \\frac{1}{2} p_2(y=-z_0\\given\\bm x),\\\\\n \\textstyle p(w=w_0\\given\\bm x) &= \\textstyle\\frac{1}{2} p_1(t=w_0\\mid\\bm x) + \\frac{1}{2} p_2(t=-w_0\\given\\bm x),\n\\end{align*}\nfor any $z_0\\in\\mathcal{Z}$ and any $w_0\\in\\{-1, 1\\}$,\nwhere $\\mathcal{Z} := \\{s_0 y_0\\given y_0\\in\\mathcal{Y}, s_0\\in\\{1, -1\\}\\}$.\n\n\\begin{lemma}\\label{lem:u_is_ratio}\nFor every $\\bm x$ such that $p_1(t\\given\\bm x) \\neq p_2(t\\given\\bm x)$, $u(\\bm x)$ can be\nexpressed as\n\\begin{align*}\nu(\\bm x) = 2\\times\\frac{\\mathbf{E}[z \\given \\bm x]}{\\mathbf{E}[w \\mid \\bm x]},\n\\end{align*}\nwhere $\\mathbf{E}[z\\given\\bm x]$ and $\\mathbf{E}[w\\given\\bm x]$ are the conditional expectations\nof $z$ given $\\bm x$ over $p(z\\given\\bm x)$ and $w$ given $\\bm x$ over\n$p(w\\given\\bm x)$, respectively.\n\\end{lemma}\nA proof can be found in Appendix~\\ref{sec:proof_of_lemma2} in
the supplementary material.\n\nLet $w^{(k)}_{i} := (-1)^{k-1} t^{(k)}_i$ and $z^{(k)}_{i} := (-1)^{k-1} y^{(k)}_i$.\nAssuming that $p_1(\\bm x) = p_2(\\bm x) =: p(\\bm x)$, $n_1 = n_2$, and $\\widetilde{n}_1 =\n\\widetilde{n}_2$ for simplicity,\n$\\{(\\widetilde{\\bm x}_i, w_i)\\}_{i=1}^{\\widetilde{n}} := \\{(\\widetilde{\\bm x}^{(k)}_i,\nw^{(k)}_{i})\\}_{k=1,2;\\ i=1,\\dots,\\widetilde{n}_k}$ and $\\{(\\bm x_i, z_i)\\}_{i=1}^n := \\{(\\bm x^{(k)}_i,\nz^{(k)}_{i})\\}_{k=1,2;\\ i=1,\\dots,n_k}$ can\nbe seen as samples drawn from $p(\\bm x, z) := p(z\\given\\bm x)p(\\bm x)$\nand $p(\\bm x, w) := p(w\\given\\bm x)p(\\bm x)$, respectively, where $n=n_1+n_2$ and $\\widetilde{n}=\\widetilde{n}_1+\\widetilde{n}_2$.\nThe more general cases where $p_1(\\bm x) \\neq p_2(\\bm x)$, $n_1 \\neq\nn_2$, or $\\widetilde{n}_1 \\neq \\widetilde{n}_2$ are discussed in Appendix~\\ref{sec:different_p_or_n} in the supplementary material.\n\n\\begin{theorem}\\label{thm:u_by_ls}\nAssume that $\\mu_w, \\mu_z \\in L^2(p)$ and $\\mu_w(\\bm x) \\neq 0$ for every $\\bm x$ such that $p(\\bm x) > 0$,\nwhere $L^2(p) := \\{f: \\mathcal{X}\\to\\Re \\given \\mathbf{E}_{\\bm x\\sim p(\\bm x)}[f(\\bm x)^2] < \\infty\\}$.\nThe individual uplift $u(\\bm x)$ equals the\nsolution to the following least-squares problem:\n\\begin{align}\n u(\\bm x) = \\argmin_{f\\in L^2(p)} \\mathbf{E}[(\\mu_w(\\bm x)f(\\bm x) - 2\\mu_z(\\bm x))^2]\\label{eq:ls},\n\\end{align}\nwhere $\\mathbf{E}$ denotes the expectation over $p(\\bm x)$,\n$\\mu_w(\\bm x) := \\mathbf{E}[w\\given\\bm x]$, and $\\mu_z(\\bm x) := \\mathbf{E}[z\\given\\bm x]$.\n\\end{theorem}\nTheorem~\\ref{thm:u_by_ls} follows from Lemma~\\ref{lem:u_is_ratio}.\nNote that $p_1(t\\given\\bm x) \\neq p_2(t\\given\\bm x)$ in Eq.~\\eqref{eq:conditions_on_p}\nimplies $\\mu_w(\\bm x)\\neq 0$.\n\nIn what follows, we develop a method that directly estimates $u(\\bm x)$\nby solving Eq.~\\eqref{eq:ls}.\nA challenge here is that it is not straightforward to evaluate the objective\nfunctional since it
involves unknown functions, $\\mu_w$ and $\\mu_z$.\n\n\n\\subsection{Disentanglement of $z$ and $w$}\nOur idea is to transform the objective functional in Eq.~\\eqref{eq:ls} into\nanother form in which $\\mu_w(\\bm x)$ and $\\mu_z(\\bm x)$ appear separately and linearly inside the expectation operator so that we can approximate them using our separately labeled samples.\n\nFor any function $g \\in L^2(p)$ and any $\\bm x\\in\\mathcal{X}$, expanding the left-hand side of the inequality $\\mathbf{E}[(\\mu_w(\\bm x)f(\\bm x)\n- 2\\mu_z(\\bm x) - g(\\bm x))^2] \\ge 0$, we have\n\\begin{align}\n \\mathbf{E}[(\\mu_w(\\bm x)f(\\bm x) - 2\\mu_z(\\bm x))^2]\n \\ge 2\\mathbf{E}[(\\mu_w(\\bm x) f(\\bm x) - 2\\mu_z(\\bm x)) g(\\bm x)] - \\mathbf{E}[g(\\bm x)^2]\n =: J(f, g).\\label{eq:lowerbound}\n\\end{align}\nThe equality is attained when $g(\\bm x) = \\mu_w(\\bm x) f(\\bm x) - 2\\mu_z(\\bm x)$\nfor any fixed $f$.\nThis means that the objective functional of Eq.~\\eqref{eq:ls} can be calculated\nby maximizing $J(f, g)$ with respect to $g$.\nHence,\n\\begin{align}\n u(\\bm x)\n &= \\argmin_{f\\in L^2(p)} \\max_{g\\in L^2(p)} J(f, g).\\label{eq:original_prob}\n\\end{align}\n\nFurthermore, $\\mu_w$ and $\\mu_z$ are separately and linearly included in $J(f, g)$,\nwhich makes it possible to write it in terms of $z$ and $w$ as\n\\begin{align}\n J(f, g) = 2\\mathbf{E}[w f(\\bm x) g(\\bm x)] - 4\\mathbf{E}[z g(\\bm x)] - \\mathbf{E}[g(\\bm x)^2].\\label{eq:Lfg}\n\\end{align}\nUnlike the original objective functional in Eq.~\\eqref{eq:ls},\n$J(f, g)$ can be easily estimated using sample averages by\n\\begin{align}\n \\widehat{J}(f, g)\n = \\frac{2}{\\widetilde{n}} \\sum_{i=1}^{\\widetilde{n}} w_i f(\\widetilde{\\bm x}_i) g(\\widetilde{\\bm x}_i)\n - \\frac{4}{n} \\sum_{i=1}^{n} z_i g(\\bm x_i)\n - \\frac{1}{2n} \\sum_{i=1}^{n} g(\\bm x_i)^2\n - \\frac{1}{2\\widetilde{n}} \\sum_{i=1}^{\\widetilde{n}} g(\\widetilde{\\bm x}_i)^2.\\label{eq:approxLfg}\n\\end{align}\n\n\nIn practice, we
solve the following regularized empirical optimization problem:\n\\begin{align}\n \\min_{f\\in F}\\max_{g\\in G} \\widehat{J}(f, g) + \\Omega(f, g),\\label{eq:approx_prob}\n\\end{align}\nwhere $F$ and $G$ are models for $f$ and $g$, respectively, and\n$\\Omega(f, g)$ is some regularizer.\n\nAn advantage of the proposed framework is that it is model-independent,\nand any models can be trained by optimizing the above objective.\n\nThe function $g$ can be interpreted as a \\emph{critic} of $f$ as follows.\nMinimizing Eq.~\\eqref{eq:Lfg} with respect to $f$ is equivalent to\nminimizing $\\mathbf{E}[g(\\bm x) \\{\\mu_w(\\bm x) f(\\bm x) - 2\\mu_z(\\bm x)\\}]$.\n$g(\\bm x)$ serves as a good critic of $f(\\bm x)$\nwhen it makes the cost $g(\\bm x) \\{\\mu_w(\\bm x) f(\\bm x) - 2\\mu_z(\\bm x)\\}$ larger for $\\bm x$ at which $f$ makes a larger error $\\vert\\mu_w(\\bm x) f(\\bm x) - 2\\mu_z(\\bm x)\\vert$.\nIn particular, $g$ maximizes the objective above when $g(\\bm x) = \\mu_w(\\bm x) f(\\bm x) - 2\\mu_z(\\bm x)$ for any $f$, and the maximum coincides with the least-squares objective in Eq.~\\eqref{eq:ls}.\n\nSuppose that $F$ and $G$ are linear-in-parameter models:\n$F = \\{f_{\\bm\\alpha}: \\bm x \\mapsto \\bm\\alpha^\\top\\bm\\phi(\\bm x)\\given \\bm\\alpha\\in\\Re^{b_{\\mathrm{f}}}\\}$\nand $G = \\{g_{\\bm\\beta}: \\bm x \\mapsto \\bm\\beta^\\top\\bm\\psi(\\bm x)\\given \\bm\\beta\\in\\Re^{b_{\\mathrm{g}}}\\}$,\nwhere $\\bm\\phi$ and $\\bm\\psi$ are $b_{\\mathrm{f}}$-dimensional\nand $b_{\\mathrm{g}}$-dimensional vectors of basis functions in $L^2(p)$.\nThen, $\\widehat{J}(f_{\\bm\\alpha}, g_{\\bm\\beta}) = 2\\bm\\alpha^\\top\\bm A\\bm\\beta - 4\\bm b^\\top\\bm\\beta -\\bm\\beta^\\top\\bm C\\bm\\beta$,\nwhere\n\\begin{align*}\n \\bm A &:= \\frac{1}{\\widetilde{n}} \\sum_{i=1}^{\\widetilde{n}} w_i \\bm\\phi(\\widetilde{\\bm x}_i)\\bm\\psi(\\widetilde{\\bm x}_i)^\\top,\n \\quad\\bm b := \\frac{1}{n} \\sum_{i=1}^{n} z_i \\bm\\psi(\\bm x_i),\\\\\n \\bm C &:= \\frac{1}{2n} \\sum_{i=1}^{n}
\\bm\\psi(\\bm x_i)\\bm\\psi(\\bm x_i)^\\top\n + \\frac{1}{2\\widetilde{n}} \\sum_{i=1}^{\\widetilde{n}}\\bm\\psi(\\widetilde{\\bm x}_i)\\bm\\psi(\\widetilde{\\bm x}_i)^\\top.\n\\end{align*}\nUsing $\\ell_2$-regularizers, $\\Omega(f, g) = \\lambda_{\\mathrm{f}} \\bm\\alpha^\\top\\bm\\alpha -\n\\lambda_{\\mathrm{g}}\\bm\\beta^\\top\\bm\\beta$ with some positive constants $\\lambda_{\\mathrm{f}}$ and $\\lambda_{\\mathrm{g}}$,\nthe solution to the inner maximization problem can be obtained in the following\nanalytical form:\n\\begin{align*}\n \\widehat{\\bm\\beta}_{\\bm\\alpha}\n := \\argmax_{\\bm\\beta} \\widehat{J}(f_{\\bm\\alpha}, g_{\\bm\\beta})\n = \\widetilde{\\bm C}^{-1}(\\bm A^\\top\\bm\\alpha - 2\\bm b),\n\\end{align*}\nwhere $\\widetilde{\\bm C} = \\bm C + \\lambda_{\\mathrm{g}}\\bm I_{b_{\\mathrm{g}}}$ and\n$\\bm I_{b_{\\mathrm{g}}}$ is the $b_{\\mathrm{g}}$-by-$b_{\\mathrm{g}}$ identity matrix. Then, we can obtain the solution\nto Eq.~\\eqref{eq:approx_prob} analytically as\n\\begin{align*}\n \\widehat{\\bm\\alpha}\n &:= \\argmin_{\\bm\\alpha} \\widehat{J}(f_{\\bm\\alpha}, g_{\\widehat{\\bm\\beta}_{\\bm\\alpha}})\n = 2(\\bm A\\widetilde{\\bm C}^{-1}\\bm A^\\top + \\lambda_{\\mathrm{f}}\\bm I_{b_{\\mathrm{f}}})^{-1}\\bm A \\widetilde{\\bm C}^{-1}\\bm b.\n\\end{align*}\nFinally, from Eq.~\\eqref{eq:ls},\nour estimate of $u(\\bm x)$ is given as $\\widehat{\\bm\\alpha}^\\top\\bm\\phi(\\bm x)$.\n\n\n\\textbf{Remark on model selection:}\nModel selection for $F$ and $G$ is not straightforward since the test performance measure cannot be directly evaluated with (held-out) training data of our problem.\nInstead, we may evaluate the value of $J(\\widehat{f}, \\widehat{g})$, where $(\\widehat{f}, \\widehat{g})\\in F\\times G$ is the optimal solution pair to $\\min_{f\\in F}\\max_{g\\in G} \\widehat{J}(f, g)$.\nHowever, it is still nontrivial to tell if the objective value is small because the solution is good in terms of the outer minimization, or because it is poor
in terms of the inner maximization.\nWe leave this issue for future work.\n\n\n\\section{Theoretical Analysis}\nA theoretically appealing property of the proposed method is that its objective\nconsists of simple sample averages.\nThis enables us to establish a generalization error bound in terms of the Rademacher complexity~\\citep{koltchinskii_rademacher_2001, mohri2012foundations}.\n\nDenote\n$\\varepsilon_G(f) := \\sup_{g\\in L^2(p)} J(f, g) - \\sup_{g\\in G} J(f, g)$.\nAlso, let $\\mathfrak{R}_{q}^N(H)$ denote the \\emph{Rademacher complexity} of a set of\nfunctions $H$ over $N$ random variables following probability density $q$ (refer\nto Appendix~\\ref{sec:gen_bound} for the definition).\nProofs of the following theorems and corollary can be found in Appendix~\\ref{sec:gen_bound}, Appendix~\\ref{sec:proof_of_cor1}, and Appendix~\\ref{sec:proof_of_MSE_bound} in the supplementary material.\n\n\\begin{theorem}\\label{thm:err_bound}\n Assume that $n_1 = n_2$, $\\widetilde{n}_1 = \\widetilde{n}_2$,\n $p_1(\\bm x) = p_2(\\bm x)$,\n $W := \\inf_{\\bm x\\in\\mathcal{X}}\\abs{\\mu_w(\\bm x)} > 0$,\n $M_{\\mathcal{Z}} := \\sup_{z\\in\\mathcal{Z}}\\abs{z} < \\infty$,\n $M_{F} := \\sup_{f\\in F, \\bm x\\in\\mathcal{X}}\\abs{f(\\bm x)} < \\infty$,\n and $M_{G} := \\sup_{g\\in G, \\bm x\\in\\mathcal{X}}\\abs{g(\\bm x)} < \\infty$.\n Then, the following holds with probability at least $1-\\delta$\n for every $f\\in F$:\n \\begin{align*}\n \\mathbf{E}_{\\bm x\\sim p(\\bm x)}[(f(\\bm x) - u(\\bm x))^2]\n \\le \\frac{1}{W^2}\\left[\n \\sup_{g\\in G}\\widehat J(f, g)\n + \\mathcal{R}_{F, G}^{n, \\widetilde{n}}\n + \\left(\\frac{M_z}{\\sqrt{2n}}\n + \\frac{M_w}{\\sqrt{2\\widetilde{n}}}\\right)\\sqrt{\\log\\frac{2}{\\delta}}\n + \\varepsilon_G(f)\n \\right],\n \\end{align*}\n where $M_z := 4M_{\\mathcal{Y}}M_G + M_G^2\/2$,\n $M_w := 2 M_F M_G + M_G^2\/2$,\n and\n $\\mathcal{R}_{F, G}^{n, \\widetilde{n}} := 2(M_F + 4M_{\\mathcal{Z}})\\mathfrak{R}_{p(\\bm 
x, z)}^{n}(G) + 2(2M_F +\n M_G)\\mathfrak{R}_{p(\\bm x, w)}^{\\widetilde{n}}(F) + 2(M_F + M_G)\\mathfrak{R}_{p(\\bm x, w)}^{\\widetilde{n}}(G)$.\n\\end{theorem}\n\nIn particular, the following bound holds for the linear-in-parameter models.\n\\begin{corollary}\\label{cor:ls_bound}\n Let $F = \\{x\\mapsto \\bm\\alpha^\\top\\bm\\phi(\\bm x) \\given \\norm{\\bm\\alpha}_2 \\le \\Lambda_F\\}$,\n $G = \\{x\\mapsto \\bm\\beta^\\top\\bm\\psi(\\bm x) \\given \\norm{\\bm\\beta}_2 \\le \\Lambda_G\\}$.\n Assume that $r_F := \\sup_{\\bm x\\in\\mathcal{X}}\\norm{\\bm\\phi(\\bm x)}_2 < \\infty$\n and $r_G := \\sup_{\\bm x\\in\\mathcal{X}}\\norm{\\bm\\psi(\\bm x)}_2 < \\infty$,\n where $\\norm{\\cdot}_2$ is the $\\ell_2$-norm.\n Under the assumptions of Theorem~\\ref{thm:err_bound},\n it holds with probability at least $1-\\delta$ that for every $f\\in F$,\n \\begin{align*}\n \\mathbf{E}_{\\bm x\\sim p(\\bm x)}[(f(\\bm x) - u(\\bm x))^2]\n \\le \\frac{1}{W^2}\\left[\n \\sup_{g\\in G}\\widehat J(f, g)\n + \\frac{C_z\\sqrt{\\log\\frac{2}{\\delta}}+D_z}{\\sqrt{2n}}\n + \\frac{C_w\\sqrt{\\log\\frac{2}{\\delta}}+D_w}{\\sqrt{2\\widetilde{n}}}\n + \\varepsilon_G(f)\n \\right],\n \\end{align*}\n where\n $C_z := r_G^2 \\Lambda_G^2 + 4 r_G \\Lambda_G M_\\mathcal{Y}$,\n $C_w := 2 r_F^2 \\Lambda_F^2 + 2 r_F r_G \\Lambda_F\\Lambda_G + r_G^2 \\Lambda_G^2$,\n $D_z := r_G^2 \\Lambda_G^2\/2 + 4r_G \\Lambda_G M_\\mathcal{Y}$,\n and $D_w := r_G^2\\Lambda_G^2\/2 + 4r_F r_G\\Lambda_F \\Lambda_G$.\n\\end{corollary}\nTheorem~\\ref{thm:err_bound} and Corollary~\\ref{cor:ls_bound} imply that minimizing\n$\\sup_{g\\in G} \\widehat{J}(f, g)$, as the proposed method does,\namounts to minimizing an upper bound of the mean squared error.\nIn fact, for the linear-in-parameter models, it can be shown that\nthe mean squared error of the proposed estimator is upper bounded\nby $O(\\nicefrac{1}{\\sqrt{n}} + \\nicefrac{1}{\\sqrt{\\widetilde{n}}})$\nplus some model mis-specification error with high probability as 
follows.\n\n\\begin{theorem}[Informal]\\label{thm:MSE_bound}\n Let $\\widehat{f} \\in F$ be any approximate solution to $\\inf_{f\\in F}\\sup_{g\\in G}\\widehat{J}(f, g)$ with sufficient precision.\n Under the assumptions of Corollary~\\ref{cor:ls_bound},\n it holds with probability at least $1-\\delta$ that\n\\begin{align}\n \\mathbf{E}_{\\bm x\\sim p(\\bm x)}[(\\widehat{f}(\\bm x)-u(\\bm x))^2]\n \\le O\\left(\\left(\\frac{1}{\\sqrt{n}}+\\frac{1}{\\sqrt{\\widetilde{n}}}\\right)\\log\\frac{1}{\\delta}\\right) + \\frac{2\\varepsilon_G^F + \\varepsilon_F}{W^2},\n\\end{align}\nwhere $\\varepsilon_G^F := \\sup_{f\\in F}\\varepsilon_G(f)$ and $\\varepsilon_F := \\inf_{f\\in F} J(f)$ with $J(f) := \\sup_{g\\in L^2(p)} J(f, g)$.\n\\end{theorem}\nA more formal version of Theorem~\\ref{thm:MSE_bound} can be found in Appendix~\\ref{sec:proof_of_MSE_bound}.\n\n\n\\section{More General Loss Functions}\nOur framework can be extended to more general loss functions:\n\\begin{align}\n \\inf_{f\\in L^2(p)} \\mathbf{E}[\\ell(\\mu_w(\\bm x)f(\\bm x), 2\\mu_z(\\bm x))],\\label{eq:gen_obj}\n\\end{align}\nwhere $\\ell: \\Re\\times\\Re\\to\\Re$ is a loss function that is lower semi-continuous and convex\nwith respect to both the first and the second arguments;\nhere, a function $\\varphi:\\Re\\to\\Re$ is \\emph{lower semi-continuous} if $\\liminf_{y\\to y_0} \\varphi(y) = \\varphi(y_0)$ for every $y_0\\in\\Re$~\\citep{rockafellar_convex_1970}.\\footnote{%\n$\\liminf_{y\\to y_0} \\varphi(y) := \\lim_{\\delta\\searrow 0}\\inf_{\\abs{y-y_0} \\le \\delta} \\varphi(y)$.\n}\nAs with the squared loss, a major difficulty in solving this optimization\nproblem is that the term inside the expectation depends nonlinearly on both\n$\\mu_w(\\bm x)$ and $\\mu_z(\\bm x)$ at the same time.\nBelow, we will show a 
way to transform the objective functional into a form that\ncan be easily approximated using separately labeled samples.\n\nFrom the assumptions on $\\ell$, we have $\\ell(y, y') = \\sup_{z\\in\\Re}[yz -\n\\ell^*(z, y')]$, where $\\ell^*(\\cdot, y')$ is the convex conjugate of the\nfunction $y\\mapsto \\ell(y, y')$ defined for any $y'\\in\\Re$ as $z \\mapsto\n\\ell^*(z, y') = \\sup_{y\\in\\Re}[yz-\\ell(y, y')]$ (see \\citet{rockafellar_convex_1970}). Hence,\n\\begin{align*}\n \\mathbf{E}[\\ell(\\mu_w(\\bm x) f(\\bm x), 2\\mu_z(\\bm x))]\n = \\sup_{g\\in L^2(p)} \\mathbf{E}[\\mu_w(\\bm x)f(\\bm x)g(\\bm x) - \\ell^*(g(\\bm x), 2\\mu_z(\\bm x))].\n\\end{align*}\nSimilarly, we obtain $\\mathbf{E}[\\ell^*(g(\\bm x), 2\\mu_z(\\bm x))] = \\sup_{h\\in L^2(p)}\n\\{2\\mathbf{E}[\\mu_z(\\bm x)h(\\bm x)] - \\mathbf{E}[\\ell^*_*(g(\\bm x), h(\\bm x))]\\}$,\nwhere $\\ell^*_*(y, \\cdot)$ is the convex conjugate of the function $y'\\mapsto\n\\ell^*(y, y')$ defined for any $y, z'\\in\\Re$ by $\\ell^*_*(y, z') := \\sup_{y'\\in\\Re}[y'z'-\\ell^*(y, y')]$.\nThus, Eq.~\\eqref{eq:gen_obj} can be rewritten as\n\\begin{align*}\n \\inf_{f\\in L^2(p)}\\sup_{g\\in L^2(p)} \\inf_{h\\in L^2(p)} K(f, g, h),\n\\end{align*}\nwhere $K(f, g, h) := \\mathbf{E}[\\mu_w(\\bm x)f(\\bm x)g(\\bm x)] - 2\\mathbf{E}[\\mu_z(\\bm x)h(\\bm x)] + \\mathbf{E}[\\ell^*_*(g(\\bm x),\nh(\\bm x))]$. 
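As a sanity check of this conjugate machinery, consider the squared loss $\\ell(y, y') = (y - y')^2$ (a worked special case we include for illustration): direct calculation gives\n\\begin{align*}\n \\ell^*(z, y') = \\sup_{y\\in\\Re}[yz - (y - y')^2] = y'z + z^2\/4,\n \\qquad\n \\ell^*_*(y, z') = \\sup_{y'\\in\\Re}[y'(z' - y)] - y^2\/4 =\n \\begin{cases}\n -y^2\/4 & (z' = y),\\\\\n +\\infty & (z' \\neq y).\n \\end{cases}\n\\end{align*}\nThe inner infimum over $h$ is finite only for $h = g$, so the problem reduces to $\\inf_f \\sup_g K(f, g, g)$ with $K(f, g, g) = \\mathbf{E}[\\mu_w(\\bm x)f(\\bm x)g(\\bm x) - 2\\mu_z(\\bm x)g(\\bm x) - g(\\bm x)^2\/4]$, whose supremum over $g\\in L^2(p)$, attained pointwise at $g = 2(\\mu_w f - 2\\mu_z)$, equals $\\mathbf{E}[(\\mu_w(\\bm x)f(\\bm x) - 2\\mu_z(\\bm x))^2] = \\mathbf{E}[\\ell(\\mu_w(\\bm x)f(\\bm x), 2\\mu_z(\\bm x))]$, as expected.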
Since $\\mu_w$ and $\\mu_z$ appear separately and linearly, $K(f, g, h)$ can be approximated by sample averages using separately\nlabeled samples.\n\n\n\\section{Experiments}\\label{sec:experiment}\nIn this section, we test the proposed method and compare it with baselines.\n\n\\subsection{Data Sets}\n\\label{sec:experiment_setting}\nWe use the following data sets for experiments.\n\n\\textbf{Synthetic data:}\n Features $\\bm x$ are drawn from the two-dimensional Gaussian distribution\n with mean zero and covariance $10\\bm I_2$.\n We set $p(y\\given\\bm x, t)$ as the following logistic models:\n $p(y \\given \\bm x, t) = 1\/(1 - \\exp(-y\\bm a_t^\\top \\bm x))$,\n where $\\bm a_{-1} = (10, 10)^\\top$ and $\\bm a_{1} = (10, -10)^\\top$.\n We also use the logistic models for $p_k(t\\given\\bm x)$: $p_1(t\\given\\bm x) = 1\/(1-\\exp(-tx_2))$ and $p_2(t\\given\\bm x) = 1\/(1-\\exp(-t\\{x_2 + b\\})$,\n where $b$ is varied over 25 equally spaced points in $[0, 10]$.\n We investigate how the performance changes when the difference between $p_1(t\\given\\bm x)$ and $p_2(t\\given\\bm x)$ varies.\n\n\\textbf{Email data:}\nThis data set consists of data collected in an email advertisement campaign for promoting customers to visit a website of a store~\\citep{hillstrom, email_winner}.\nOutcomes are whether customers visited the website or not.\nWe use $4 \\times 5000$ and $2000$ randomly sub-sampled data points for training and evaluation, respectively.\n\n\\textbf{Jobs data:}\nThis data set consists of randomized experimental data obtained from a job training program called the National Supported Work Demonstration~\\citep{lalonde1986evaluating}, available at \\url{http:\/\/users.nber.org\/~rdehejia\/data\/nswdata2.html}.\nThere are 9 features, and outcomes are income levels after the training program.\nThe sample sizes are $297$ for the treatment group and $425$ for the control group.\nWe use $4\\times 50$ randomly sub-sampled data points for training\nand $100$ for 
evaluation.\n\n\\textbf{Criteo data:}\nThis data set consists of banner advertisement log data collected by Criteo~\\citep{couterfactual_test_bed} available at \\url{http:\/\/www.cs.cornell.edu\/~adith\/Criteo\/}.\nThe task is to select a product to be displayed in a given banner so that the click rate will be maximized.\nWe use only records for banners with a single advertisement slot.\nEach display banner has 10 features, and each product has 35 features.\nWe take the 12th feature of a product as a treatment variable\nmerely because it is a well-balanced binary variable.\nThe outcome is whether the displayed advertisement was clicked.\nWe treat the data set as the population although it is biased from the actual population since non-clicked impressions were randomly sub-sampled down to $10\\%$ to reduce the data set size.\nWe make two subsets with different treatment policies by appropriately sub-sampling according to the predefined treatment policies (see Appendix~\\ref{sec:subsample} in the supplementary material).\nWe set $p_k(t\\given\\bm x)$ as $p_1(t\\given\\bm x) = 1\/(1+\\exp(-t\\bm 1^\\top \\bm x))$ and $p_2(t\\given\\bm x) = 1\/(1+\\exp(t\\bm 1^\\top\\bm x))$, where $\\bm 1 := (1, \\dots, 1)^\\top$.\n\n\\subsection{Experimental Settings}\nWe conduct experiments under the following settings.\n\n\\textbf{Methods compared:}\nWe compare the proposed method with baselines that separately estimate the four conditional expectations in Eq.~\\eqref{eq:diff_ratio}.\nIn the case of binary outcomes, we use a logistic-regression-based method (denoted by FourLogistic) and a neural-network-based method trained with the soft-max cross-entropy loss (denoted by FourNNC).\nIn the case of real-valued outcomes, we use a ridge-regression-based method (denoted by FourRidge) and a neural-network-based method trained with the squared loss (denoted by FourNNR).\nThe neural networks are fully connected networks with two hidden layers of 10 units each.\nFor the proposed method, we use the 
linear-in-parameter models with Gaussian basis functions centered at randomly sub-sampled training data points (see Appendix~\\ref{sec:gau_basis} for more details).\n\n\\textbf{Performance evaluation:}\nWe evaluate trained uplift models by \\emph{uplift curves}~\\citep{radcliffe_differential_1999} and the area under the uplift curve (AUUC),\nboth estimated on test samples with joint labels.\nThe uplift curve of an estimated individual uplift is the trajectory of the average uplift\nwhen individuals are gradually moved from the control group to the treated group in descending order of the ranking given by the estimated individual uplift.\nThese quantities can be estimated when the data come from randomized experiments.\nUnlike the other data sets, the Criteo data are not from a randomized experiment,\nbut accurately logged propensity scores are available.\nIn this case, uplift curves and the AUUCs can be estimated using inverse propensity scoring~\\citep{austin2011introduction, unbiased_offline}.\nWe conduct $50$ trials of each experiment with different random seeds.\n\n\n\\subsection{Results}\nThe results on the synthetic data are summarized in Figure~\\ref{fig:toy_auuc}.\nFrom the plots, we can see that all methods perform relatively well in terms of AUUCs when the policies are distant from each other (i.e., $b$ is larger).\nHowever, the performance of the baseline methods immediately declines as the treatment policies get closer to each other (i.e., $b$ is smaller).\\footnote{%\nThe unstable performance of FourLogistic can be explained as follows. FourLogistic uses linear models, whose expressive power is limited.\nThe resulting estimator has small variance with potentially large bias. 
Since different $b$ induces different $u(\\bm x)$, the bias depends on $b$.\nFor this reason, the method works well for some $b$ but poorly for others.\n}\nIn contrast, the proposed method maintains its performance until $b$ decreases to around $2$.\nNote that the two policies would be identical when $b = 0$, which makes it impossible to identify the individual uplift from their samples by any method since the system in Eq.~\\eqref{eq:p_y_given_x} degenerates.\nFigure~\\ref{fig:toy_result} compares the methods in terms of squared errors.\nFor FourNNC, test points with small policy difference $\\abs{p_1(t=1\\given\\bm x) - p_2(t=1\\given\\bm x)}$ (colored darker) tend to have very large estimation errors.\nOn the other hand, the proposed method has relatively small errors even for such points.\nFigure~\\ref{fig:result_real} shows results on real data sets.\nThe proposed method and the baseline method with logistic regressors\nboth performed better than the baseline method with neural nets\non the Email data set (Figure~\\ref{fig:Email_ul_curve}).\nOn the Jobs data set, the proposed method again performed better than the baseline methods with neural networks.\nFor the Criteo data set, the proposed method outperformed the baseline methods (Figure~\\ref{fig:Criteo_ul_curve}).\nOverall, we confirmed the superiority of the proposed method on both synthetic and real data sets.\n\n\\begin{figure}[t]\n\\centering\n\\begin{minipage}{0.31\\textwidth}\n\\centering\n\\includegraphics[width=\\textwidth]{result_synthetic.pdf}\n\\end{minipage}\n\\hspace{2mm}\n\\begin{minipage}{0.64\\textwidth}\n\\centering\n\\caption{Results on the synthetic data.\n The plot shows the average AUUCs obtained by the\n proposed method and the baseline methods\n for different $b$.\n $p_1(t\\given\\bm x)$ and $p_2(t\\given\\bm x)$\n are closer to each other when $b$ is smaller.}\n\\label{fig:toy_auuc}\n\\end{minipage}\n\\end{figure}\n\n\n\\begin{figure*}[t]\n \\centering\n 
\\begin{subfigure}[c]{0.31\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{sqerr_4Logistic_b1.png}\n \\subcaption{Baseline (FourLogistic).}\n \\label{fig:error_4logistic_toydata}\n \\end{subfigure}\n \\begin{subfigure}[c]{0.31\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{sqerr_4NN_b1.png}\n \\subcaption{Baseline (FourNNC).}\n \\label{fig:error_4NN_toydata}\n \\end{subfigure}\n \\begin{subfigure}[c]{0.31\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{sqerr_MinMaxGau_b1.png}\n \\subcaption{Proposed (MinMaxGau).}\n \\label{fig:error_proposed_toydata}\n \\end{subfigure}\n \\caption{The plots show the squared errors of the estimated individual uplifts\n on the synthetic data with $b=1$.\n Each point is darker-colored when $\\abs{p_1(t=1\\given\\bm x)-p_2(t=1\\given\\bm x)}$ is smaller, and lighter-colored otherwise.}\n \\label{fig:toy_result}\n \\begin{subfigure}[c]{0.31\\textwidth}\n \\includegraphics[width=\\textwidth]{result_Email.pdf}\n \\subcaption{The Email data.}\n \\label{fig:Email_ul_curve}\n \\end{subfigure}\n \\begin{subfigure}[c]{0.31\\textwidth}\n \\includegraphics[width=\\textwidth]{result_Jobs.pdf}\n \\subcaption{The Jobs data.}\n \\label{fig:Jobs_ul_curve}\n \\end{subfigure}\n \\begin{subfigure}[c]{0.31\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{result_Criteo.pdf}\n \\subcaption{The Criteo data.}\n \\label{fig:Criteo_ul_curve}\n \\end{subfigure}\n \\caption{Average uplifts as well as their standard errors on real-world data sets.}\n \\label{fig:result_real}\n\\end{figure*}\n\n\\section{Conclusion}\nWe proposed a theoretically guaranteed and practically useful method for uplift modeling or individual treatment effect estimation in the presence of systematically missing labels.\nThe proposed method showed promising results in our experiments on synthetic and real data sets.\nThe proposed framework is model-independent: any model can be used to approximate the individual uplift 
including ones tailored for specific problems and complex models such as neural networks.\nOn the other hand, model selection may be a challenging problem due to the min-max structure.\nAddressing this issue would be an important research direction for further expanding the applicability and improving the performance of the proposed method.\n\n\n\\clearpage\n\\subsubsection*{Acknowledgments}\nWe are grateful to Marthinus Christoffel du Plessis and Takeshi Teshima for their inspiring suggestions and meaningful discussions.\nWe would like to thank the anonymous reviewers for their helpful comments.\nIY was supported by JSPS KAKENHI 16J07970.\nJA and FY would like to thank Adway for its support.\nMS was supported by the International Research Center for Neurointelligence (WPI-IRCN) at The University of Tokyo Institutes for Advanced Study.\n\n\\FloatBarrier\n\n\\small\n\n\\bibliographystyle{plainnat}\n\n\\subsection{Average Uplift in Terms of the Individual Uplift}\n\\begin{align}\nU(\\pi) &= \\iint \\sum_{t=-1,1} y p(y \\given t, \\bm x) \\pi(t \\given\\bm x) p(\\bm x) \\mathrm{d}y \\dbm x\n- \\iint \\sum_{t=-1,1} y p(y \\given t, \\bm x) 1[t = -1] p(\\bm x) \\mathrm{d}y\\dbm x\\nonumber\\\\\n&= \\iint y [p(y \\given t = 1, \\bm x) \\pi(t = 1\\given\\bm x)\n- p(y \\given t = -1, \\bm x)\\pi(t = 1\\given\\bm x)] p(\\bm x) \\mathrm{d}y\\dbm x\\nonumber\\\\\n&= \\iint y [p(y \\given t = 1, \\bm x) - p(y \\given t = -1, \\bm x)]\n\\pi(t = 1\\given\\bm x) p(\\bm x) \\mathrm{d}y\\dbm x\\nonumber\\\\\n&= \\int u(\\bm x) \\pi(t = 1\\given\\bm x) p(\\bm x) \\dbm x.\n\\end{align}\n\n\\subsection{Area Under the Uplift Curve and Ranking}\n\\label{sec:AUUC_ranking}\nDefine the following symbols:\n\\begin{itemize}\n \\item $C_\\alpha := \\Pr[f(\\bm x) < \\alpha ]$,\n \\item $U(\\alpha; f) := \\int u(\\bm x)1[\\alpha \\le f(\\bm x)]p(\\bm x)\\dbm x$,\n \\item $\\Rank(f) := \\mathbf{E}[1[f(\\bm x) \\ge f(\\bm x')][u(\\bm x) - u(\\bm x')]]$,\n \\item $\\AUUC(f) := \\int_0^1 
U(\\alpha; f) \\mathrm{d}C_\\alpha.\n\\end{itemize}\nThen, we have\n\\begin{align*}\n \\AUUC(f)\n &= \\int_{-\\infty}^\\infty U(\\alpha; f) \\frac{\\mathrm{d}C_\\alpha}{\\mathrm{d}\\alpha}\\mathrm{d}\\alpha\\\\\n &= \\int_{-\\infty}^\\infty U(\\alpha; f) p_{f(\\bm x)}(\\alpha)\\mathrm{d}\\alpha\\\\\n &= \\int_{\\Re^d} U(f(\\bm x); f) p(\\bm x)\\dbm x\\\\\n &= \\iint 1[f(\\bm x) \\le f(\\bm x')]u(\\bm x')p(\\bm x')\\dbm x' p(\\bm x)\\dbm x\\\\\n &= \\mathbf{E}[1[f(\\bm x) \\le f(\\bm x')]u(\\bm x')]\\\\\n &(= \\mathbf{E}[1[f(\\bm x) \\le f(\\bm x')][y^+ - y^-]]),\n\\end{align*}\nwhere $y^+ \\sim p(y\\given\\bm x', t=1)$ and $y^-\\sim p(y\\given\\bm x', t=-1)$.\n\nAssuming $\\Pr[f(\\bm x') = f(\\bm x)] = 0$, we have\n\\begin{align*}\n \\Rank(f)\n &:=\\mathbf{E}[1[f(\\bm x) \\ge f(\\bm x')][u(\\bm x) - u(\\bm x')]]\\\\\n &= \\mathbf{E}[1[f(\\bm x) \\ge f(\\bm x')]u(\\bm x)]\\\\\n &\\phantom{=}- \\mathbf{E}[1[f(\\bm x) \\ge f(\\bm x')]u(\\bm x')]\\\\\n &= \\AUUC(f)\n - \\mathbf{E}[(1 - 1[f(\\bm x) \\le f(\\bm x')])u(\\bm x')]\\\\\n &= \\AUUC(f) - \\mathbf{E}[u(\\bm x')] + \\AUUC(f)\\\\\n &= 2\\AUUC(f) - \\mathbf{E}[u(\\bm x)].\n\\end{align*}\nThus, $\\Rank(f) = 2(\\AUUC(f) - \\AUUC(r))$, where $r:\\Re^d\\to\\Re$ is a random\nscoring function that assigns each input $\\bm x$ an independent continuously distributed score, for which $\\AUUC(r) = \\mathbf{E}[u(\\bm x)]\/2$.\n$\\Rank(f)$ is maximized when $f(\\bm x) \\le f(\\bm x') \\iff u(\\bm x) \\le u(\\bm\nx')$ for almost every pair of $\\bm x\\in\\Re^d$ and $\\bm x'\\in\\Re^d$.\nIn particular, $f = u$ is a maximizer of the objective.\n\n\\subsection{Proof of Lemma~\\ref{lem:u_is_diff_ratio}}\n\\label{sec:proof_of_lemma1}\n\\newtheorem*{lem:u_is_diff_ratio}{Lemma~\\ref{lem:u_is_diff_ratio}}\n\\begin{lem:u_is_diff_ratio}\nFor every $\\bm x$ such that $p_1(t\\given\\bm x) \\neq p_2(t\\given\\bm x)$, $u(\\bm x)$ can be\nexpressed as\n\\begin{align}\n u(\\bm x)\n = 2\\times\\frac{\\mathbf{E}_{y\\sim p_1(y\\given \\bm x)}[y] - \\mathbf{E}_{y\\sim p_2(y\\given \\bm x)}[y]}%\n{\\mathbf{E}_{t\\sim p_1(t\\given\\bm x)}[t] - \\mathbf{E}_{t\\sim 
p_2(t\\given\\bm x)}[t]}.\n\\end{align}\n\\end{lem:u_is_diff_ratio}\n\n\\begin{proof}\n\\begin{align*}\n \\mathbf{E}_{y\\sim p_1(y\\given \\bm x)}[y] - \\mathbf{E}_{y\\sim p_2(y\\given \\bm x)}[y]\n &= \\int\\sum_{\\tau=-1,1}y p(y\\given\\bm x,t=\\tau)p_1(t=\\tau\\given\\bm x)\\mathrm{d}y\\\\\n &\\phantom{=}-\\int\\sum_{\\tau=-1,1}y p(y\\given\\bm x,t=\\tau)p_2(t=\\tau\\given\\bm x)\\mathrm{d}y\\\\\n &= \\int\\sum_{\\tau=-1,1}y p(y\\given\\bm x,t=\\tau)(p_1(t=\\tau\\given\\bm x)-p_2(t=\\tau\\given\\bm x))\\mathrm{d}y\\\\\n &= \\sum_{\\tau=-1,1}\\mathbf{E}_{y\\sim p(y\\given\\bm x,t=\\tau)}[y](p_1(t=\\tau\\given\\bm x)-p_2(t=\\tau\\given\\bm x))\\\\\n &= \\mathbf{E}_{y\\sim p(y\\given\\bm x,t=1)}[y](p_1(t=1\\given\\bm x)-p_2(t=1\\given\\bm x))\\\\\n &\\phantom{=}+\\mathbf{E}_{y\\sim p(y\\given\\bm x,t=-1)}[y](1-p_1(t=1\\given\\bm x)-1+p_2(t=1\\given\\bm x))\\\\\n &= u(\\bm x) (p_1(t=1\\mid\\bm x) - p_2(t=1\\mid\\bm x)).\n\\end{align*}\nWhen $p_1(t=1\\mid\\bm x) \\neq p_2(t=1\\mid\\bm x)$, it holds that\n\\begin{align*}\n u(\\bm x)\n &= \\frac{\\mathbf{E}_{y\\sim p_1(y\\given \\bm x)}[y] - \\mathbf{E}_{y\\sim p_2(y\\given \\bm x)}[y]}%\n {p_1(t=1\\mid\\bm x) - p_2(t=1\\mid\\bm x)}\\\\\n &= 2\\times\\frac{\\mathbf{E}_{y\\sim p_1(y\\given \\bm x)}[y] - \\mathbf{E}_{y\\sim p_2(y\\given \\bm x)}[y]}%\n{\\mathbf{E}_{t\\sim p_1(t\\mid\\bm x)}[t] - \\mathbf{E}_{t\\sim p_2(t\\mid\\bm x)}[t]}.\n\\end{align*}\n\\end{proof}\n\n\\subsection{Proof of Lemma~\\ref{lem:u_is_ratio}}\n\\label{sec:proof_of_lemma2}\n\\newtheorem*{lem:u_is_ratio}{Lemma~\\ref{lem:u_is_ratio}}\n\\begin{lem:u_is_ratio}\nFor every $\\bm x$ such that $p_1(t\\given\\bm x) \\neq p_2(t\\given\\bm x)$, $u(\\bm x)$ can be\nexpressed as\n\\begin{align*}\nu(\\bm x) = 2\\times\\frac{\\mathbf{E}[z \\given \\bm x]}{\\mathbf{E}[w \\mid \\bm x]},\n\\end{align*}\nwhere $\\mathbf{E}[z\\given\\bm x]$ and $\\mathbf{E}[w\\given\\bm x]$ are the conditional expectations\nof $z$ given $\\bm x$ over $p(z\\given\\bm x)$ and $w$ given $\\bm x$ 
over\n$p(w\\given\\bm x)$, respectively.\n\\end{lem:u_is_ratio}\n\n\n\\begin{proof}\nWe have\n\\begin{align*}\n \\mathbf{E}[z\\given\\bm x] &= \\int\\zeta\\left[\\frac{1}{2}p_1(y=\\zeta\\given\\bm x)+\\frac{1}{2}p_2(y=-\\zeta\\given\\bm x)\\right]\\mathrm{d}\\zeta\\\\\n &= \\frac{1}{2}\\int\\zeta p_1(y=\\zeta\\given\\bm x)\\mathrm{d}\\zeta+\\frac{1}{2}\\int\\zeta p_2(y=-\\zeta\\given\\bm x)\\mathrm{d}\\zeta\\\\\n &= \\frac{1}{2}\\int y p_1(y\\given\\bm x)\\mathrm{d}y - \\frac{1}{2}\\int y p_2(y\\given\\bm x)\\mathrm{d}y\\\\\n &= \\frac{1}{2}\\mathbf{E}_{y\\sim p_1(y\\given\\bm x)}[y] - \\frac{1}{2}\\mathbf{E}_{y\\sim p_2(y\\given\\bm x)}[y].\n\\end{align*}\nSimilarly, we obtain\n\\begin{align*}\n \\mathbf{E}[w\\given\\bm x] &= \\frac{1}{2}\\mathbf{E}_{t\\sim p_1(t\\mid\\bm x)}[t] - \\frac{1}{2}\\mathbf{E}_{t\\sim p_2(t\\mid\\bm x)}[t].\n\\end{align*}\nThus, \n\\begin{align*}\n 2\\times\\frac{\\mathbf{E}[z\\given\\bm x]}{\\mathbf{E}[w\\given\\bm x]}\n &= 2\\times\\frac{\\mathbf{E}_{y\\sim p_1(y\\given \\bm x)}[y] - \\mathbf{E}_{y\\sim p_2(y\\given \\bm x)}[y]}%\n {\\mathbf{E}_{t\\sim p_1(t\\mid\\bm x)}[t] - \\mathbf{E}_{t\\sim p_2(t\\mid\\bm x)}[t]}\n = u(\\bm x).\n\\end{align*}\n\\end{proof}\n\n\\subsection{Proof of Theorem~\\ref{thm:err_bound}}\n\\label{sec:gen_bound}\nWe restate Theorem~\\ref{thm:err_bound} below.\n\\newtheorem*{thm:err_bound}{Theorem~\\ref{thm:err_bound}}\n\\begin{thm:err_bound}\n Assume that $n_1 = n_2$, $\\widetilde{n}_1 = \\widetilde{n}_2$,\n $p_1(\\bm x) = p_2(\\bm x)$,\n $W := \\inf_{\\bm x\\in\\mathcal{X}}\\abs{\\mu_w(\\bm x)} > 0$,\n $M_{\\mathcal{Z}} := \\sup_{z\\in\\mathcal{Z}}\\abs{z} < \\infty$,\n $M_{F} := \\sup_{f\\in F, \\bm x\\in\\mathcal{X}}\\abs{f(\\bm x)} < \\infty$,\n and $M_{G} := \\sup_{g\\in G, \\bm x\\in\\mathcal{X}}\\abs{g(\\bm x)} < \\infty$.\n Then, it holds with probability at least $1-\\delta$ that\n for every $f\\in F$,\n \\begin{align*}\n \\mathbf{E}_{\\bm x\\sim p(\\bm x)}[(f(\\bm x) - u(\\bm x))^2]\n \\le 
\\frac{1}{W^2}\\left[\n \\sup_{g\\in G}\\widehat J(f, g)\n + \\mathcal{R}_{F, G}^{n, \\widetilde{n}}\n + \\left(\\frac{M_z}{\\sqrt{2n}}\n + \\frac{M_w}{\\sqrt{2\\widetilde{n}}}\\right)\\sqrt{\\log\\frac{2}{\\delta}}\n + \\varepsilon_G(f)\n \\right],\n \\end{align*}\n where $M_z := 4M_{\\mathcal{Y}}M_G + M_G^2\/2$,\n $M_w := 2 M_F M_G + M_G^2\/2$,\n $\\mathcal{R}_{F, G}^{n, \\widetilde{n}} := 2(M_F + 4M_{\\mathcal{Z}})\\mathfrak{R}_{p(\\bm x, z)}^{n}(G) + 2(2M_F +\n M_G)\\mathfrak{R}_{p(\\bm x, w)}^{\\widetilde{n}}(F) + 2(M_F + M_G)\\mathfrak{R}_{p(\\bm x, w)}^{\\widetilde{n}}(G)$.\n\\end{thm:err_bound}\n\nDefine $J(f, g)$ and $\\widehat{J}(f, g)$ as in Section~3.2 and denote\n\\begin{align*}\n \\varepsilon_G(f)\n &:= \\sup_{g\\in L^2(p)} J(f, g) - \\sup_{g\\in G} J(f, g).\n\\end{align*}\n\n\\begin{definition}[Rademacher Complexity]\n We define the \\emph{Rademacher complexity} of a set of functions $H$\n over $N$ random variables following probability distribution $q$ by\n \\begin{align*}\n \\mathfrak{R}_q^N(H) = \\mathbf{E}_{V_1, \\dots, V_N, \\sigma_1, \\dots, \\sigma_N}\\left[\\sup_{h\\in H}\\frac{1}{N}\\sum_{i=1}^N\\sigma_ih(V_i)\\right],\n \\end{align*}\n where $V_1, \\dots, V_N$ are i.i.d. random variables following $q$, and $\\sigma_1, \\dots, \\sigma_N$ are independent, $\\{-1,\n 1\\}$-valued uniform random variables.\n\\end{definition}\n\n\n\\begin{lemma}\\label{lem:obj_bound}\n Under the assumptions of Theorem~\\ref{thm:err_bound},\n with probability at least $1-\\delta$, it holds that for every $f\\in F$ and $g\\in G$,\n \\begin{align*}\n J(f, g) \\le \\widehat{J}(f, g) + \\mathfrak{R}_{F, G}\n + \\left(\\frac{M_z}{\\sqrt{n}} + \\frac{M_w}{\\sqrt{\\widetilde{n}}}\\right)\\sqrt{\\log\\frac{2}{\\delta}}.\n \\end{align*}\n\\end{lemma}\n\nTo prove Lemma~\\ref{lem:obj_bound}, we use the following lemma, which is a slightly modified\nversion of Theorem~3.1 in~\\citet{mohri2012foundations}.\n\\begin{lemma}\\label{lem:rad_bound}\n Let $H$ be a set of real-valued functions on a measurable space 
$\\mathcal{D}$.\n Assume that $M := \\sup_{h\\in H, v\\in\\mathcal{D}} \\abs{h(v)} < \\infty$.\n Then, for any $\\mathcal{D}$-valued i.i.d. random variables\n $V, V_1, \\dots, V_N$ following density $q$, it holds with probability at least $1-\\delta$ that for every $h\\in H$,\n \\begin{align}\n \\mathbf{E}[h(V)] \\le \\frac{1}{N}\\sum_{i=1}^N h(V_i) + 2 \\mathfrak{R}_{q}^N(H) + \\sqrt{\\frac{M^2}{N}\\log\\frac{1}{\\delta}}.\n \\end{align}\n\\end{lemma}\n\n\\begin{proof}[Proof of Lemma~\\ref{lem:rad_bound}]\n We follow the proof of Theorem~3.1 in~\\citet{mohri2012foundations}\n except that we set the constant $B_\\phi$ in Eq.~\\eqref{eq:diff_bounded} to $M\/m$\n when we apply McDiarmid's inequality (Section~\\ref{sec:McD}).\n\\end{proof}\n\nNow, we prove Lemma~\\ref{lem:obj_bound}.\n\\begin{proof}[Proof of Lemma~\\ref{lem:obj_bound}]\n For any $f\\in F$, $g\\in G$,\n $\\bm x', \\widetilde{\\bm x}' \\in\\mathcal{X}$,\n $z'\\in\\mathcal{Z} := \\{y, -y\\given y\\in\\mathcal{Y}\\}$, and $w'\\in\\{-1, 1\\}$, we define $h_z$ and $h_w$ as follows:\n \\begin{align*}\n h_z(\\bm x', z'; g) &:= -4z'g(\\bm x') - \\frac{1}{2}g(\\bm x')^2,\\\\\n h_w(\\widetilde{\\bm x}', w'; f, g) &:= w'f(\\widetilde{\\bm x}')g(\\widetilde{\\bm x}') - \\frac{1}{2} g(\\widetilde{\\bm x}')^2.\n \\end{align*}\n\n Denoting $H_z := \\{(\\bm x', z') \\mapsto h_z(\\bm x', z'; g) \\given g\\in G\\}$, we have\n \\begin{align*}\n \\sup_{h\\in H_z, \\bm x'\\in\\mathcal{X}, z'\\in\\mathcal{Z}} \\abs{h(\\bm x', z')}\n &\\le 4M_{\\mathcal{Z}}M_G + \\frac{1}{2}M_G^2 =: M_z < \\infty,\n \\end{align*}\n and thus, we can apply Lemma~\\ref{lem:rad_bound} to confirm that\n with probability at least $1-\\delta\/2$,\n \\begin{align*}\n \\mathbf{E}_{(\\bm x, z)\\sim p(\\bm x, z)}[h_z(\\bm x, z; g)]\n \\le \\frac{1}{n} \\sum_{(\\bm x_i, z_i)\\in S_z} h_z(\\bm x_i, z_i; g)\n + 2 \\mathfrak{R}_p^{n}(H_z)\n + \\sqrt{\\frac{M_z^2}{n}\\log\\frac{2}{\\delta}},\n \\end{align*}\n where $\\{(\\bm x_i, z_i)\\}_{i=1}^n =: S_z$ are the samples defined in 
Section~\\ref{sec:direct_estimation}.\n Similarly, it holds that with probability at least $1-\\delta\/2$,\n \\begin{align*}\n \\mathbf{E}_{(\\widetilde{\\bm x}, w)\\sim p(\\bm x, w)}[h_w(\\widetilde{\\bm x}, w; f, g)]\n \\le \\frac{1}{\\widetilde{n}} \\sum_{(\\widetilde{\\bm x}_i, w_i)\\in S_w} h_w(\\widetilde{\\bm x}_i, w_i; f, g)\n + 2 \\mathfrak{R}_p^{\\widetilde{n}}(H_w)\n + \\sqrt{\\frac{M_w^2}{\\widetilde{n}}\\log\\frac{2}{\\delta}},\n \\end{align*}\n where $H_w := \\{(\\widetilde{\\bm x}', w') \\mapsto h_w(\\widetilde{\\bm x}', w';\n f, g) \\given f\\in F, g\\in G\\}$, $M_w := M_FM_G + M_G^2\/2$,\n and $\\{(\\widetilde{\\bm x}_i, w_i)\\}_{i=1}^{\\widetilde{n}} =: S_w$ are the samples defined in Section~\\ref{sec:direct_estimation}.\n By the union bound, we have the following with probability at least\n $1-\\delta$:\n \\begin{align}\n &\\mathbf{E}_{(\\bm x, z)\\sim p(\\bm x, z)}[h_z(\\bm x, z; g)]\n + \\mathbf{E}_{(\\widetilde{\\bm x}, w)\\sim p(\\bm x, w)}[h_w(\\widetilde{\\bm x}, w; f, g)]\\\\\n &\\le \\frac{1}{n} \\sum_{(\\bm x_i, z_i)\\in S_z} h_z(\\bm x_i, z_i; g)\n + \\frac{1}{\\widetilde{n}} \\sum_{(\\widetilde{\\bm x}_i, w_i)\\in S_w} h_w(\\widetilde{\\bm x}_i, w_i; f, g)\\\\\n &\\phantom{\\le}+ 2(\\mathfrak{R}_p^{n}(H_z) + \\mathfrak{R}_p^{\\widetilde{n}}(H_w))\n + \\left(\\frac{M_z}{\\sqrt{n}} + \\frac{M_w}{\\sqrt{\\widetilde{n}}}\\right)\\sqrt{\\log\\frac{2}{\\delta}}.\\label{eq:bound_h}\n \\end{align}\n Using some properties of the Rademacher complexity including Talagrand's lemma,\n we can show that\n \\begin{align}\n \\mathfrak{R}_p^{n}(H_z) &\\le (M_F + 4M_{\\mathcal{Z}})\\mathfrak{R}_p^{n}(G),\\label{eq:bound_R_z}\\\\\n \\mathfrak{R}_p^{\\widetilde{n}}(H_w) &\\le (2M_F + M_G)\\mathfrak{R}_p^{\\widetilde{n}}(F) + (M_F + M_G)\\mathfrak{R}_p^{\\widetilde{n}}(G).\\label{eq:bound_R_w}\n \\end{align}\n Clearly,\n \\begin{align*}\n \\widehat{J}(f, g) &= \\frac{1}{n}\\sum_{(\\bm x_i, z_i)\\in S_z} h_z(\\bm x_i, z_i;\n g)\n + \\frac{1}{\\widetilde{n}} \\sum_{(\\widetilde{\\bm x}_i, w_i)\\in S_w} 
h_w(\\widetilde{\\bm x}_i, w_i; f, g),\\\\\n J(f, g) &= \\mathbf{E}_{(\\bm x, z)\\sim p(\\bm x, z)}[h_z(\\bm x, z; g)] +\n \\mathbf{E}_{(\\widetilde{\\bm x}, w)\\sim p(\\bm x, w)}[h_w(\\widetilde{\\bm x}, w; f,\n g)].\n \\end{align*}\n From Eq.~\\eqref{eq:bound_h}, Eq.~\\eqref{eq:bound_R_z}, and Eq.~\\eqref{eq:bound_R_w}, we obtain\n \\begin{align}\n J(f, g) \\le \\widehat{J}(f, g) + \\mathfrak{R}_{F, G}\n + \\left(\\frac{M_z}{\\sqrt{n}} + \\frac{M_w}{\\sqrt{\\widetilde{n}}}\\right)\\sqrt{\\log\\frac{2}{\\delta}},\\label{eq:JvsJhat}\n \\end{align}\n where\n \\begin{align*}\n \\mathfrak{R}_{F, G} := 2(M_F + 4M_{\\mathcal{Z}})\\mathfrak{R}_p^{n}(G)\n + 2(2M_F + M_G)\\mathfrak{R}_p^{\\widetilde{n}}(F) + 2(M_F + M_G)\\mathfrak{R}_p^{\\widetilde{n}}(G).\n \\end{align*}\n\n\\end{proof}\n\nFinally, we prove Theorem~\\ref{thm:err_bound}.\n\\begin{proof}[Proof of Theorem~\\ref{thm:err_bound}]\n From Lemma~\\ref{lem:obj_bound}, with probability at least $1-\\delta$, it holds that for all $f\\in F$,\n \\begin{align}\n \\sup_{g\\in G} J(f, g)\n \\le \\sup_{g\\in G} \\widehat{J}(f, g) + \\mathfrak{R}_{F, G}\n + \\left(\\frac{M_z}{\\sqrt{n}} + \\frac{M_w}{\\sqrt{\\widetilde{n}}}\\right)\\sqrt{\\log\\frac{2}{\\delta}}.\\label{eq:b1}\n \\end{align}\n Moreover, recalling $W := \\inf_{\\bm x\\in\\mathcal{X}}\\abs{\\mu_w(\\bm x)}$\n\tand $\\sup_{g\\in L^2(p)} J(f, g) = \\mathbf{E}[(\\mu_w(\\bm x)f(\\bm x) -\\mu_z(\\bm x))^2]$, we have\n \\begin{align}\n \\mathbf{E}\\left[(f(\\bm x) - u(\\bm x))^2\\right]\n &= \\mathbf{E}\\left[\\left(f(\\bm x) - \\frac{\\mu_z(\\bm x)}{\\mu_w(\\bm x)}\\right)^2\\right]\\\\\n &\\le \\frac{1}{W^2} \\mathbf{E}[(\\mu_w(\\bm x)f(\\bm x) - \\mu_z(\\bm x))^2]\\\\\n &= \\frac{1}{W^2} \\left[\\varepsilon_G(f) + \\sup_{g\\in G} J(f, g)\\right].\\label{eq:b2}\n \\end{align}\n Combining Eq.~\\eqref{eq:b1} and Eq.~\\eqref{eq:b2} yields the inequality of the theorem.\n\\end{proof}\n\n\n\\subsection{Proof of 
Corollary~\\ref{cor:ls_bound}}\n\\label{sec:proof_of_cor1}\n\\newtheorem*{cor:ls_bound}{Corollary~\\ref{cor:ls_bound}}\n\\begin{cor:ls_bound}\n Let $F = \\{\\bm x\\mapsto \\bm\\alpha^\\top\\bm\\phi(\\bm x) \\given \\norm{\\bm\\alpha}_2 \\le \\Lambda_F\\}$,\n $G = \\{\\bm x\\mapsto \\bm\\beta^\\top\\bm\\psi(\\bm x) \\given \\norm{\\bm\\beta}_2 \\le \\Lambda_G\\}$,\n and assume that $r_F := \\sup_{\\bm x\\in\\mathcal{X}}\\norm{\\bm\\phi(\\bm x)}_2 < \\infty$\n and $r_G := \\sup_{\\bm x\\in\\mathcal{X}}\\norm{\\bm\\psi(\\bm x)}_2 < \\infty$,\n where $\\norm{\\cdot}_2$ is the Euclidean norm.\n Under the assumptions of Theorem~\\ref{thm:err_bound},\n it holds with probability at least $1-\\delta$ that for every $f\\in F$,\n \\begin{align*}\n \\mathbf{E}_{\\bm x\\sim p(\\bm x)}[(f(\\bm x) - u(\\bm x))^2]\n \\le \\frac{1}{W^2}\\left[\n \\sup_{g\\in G}\\widehat J(f, g)\n + \\frac{C_z\\sqrt{\\log\\frac{2}{\\delta}}+D_z}{\\sqrt{2n}}\n + \\frac{C_w\\sqrt{\\log\\frac{2}{\\delta}}+D_w}{\\sqrt{2\\widetilde{n}}}\n + \\varepsilon_G(f)\n \\right],\n \\end{align*}\n where\n $C_z := r_G^2 \\Lambda_G^2 + 4 r_G \\Lambda_G M_\\mathcal{Y}$,\n $C_w := 2 r_F^2 \\Lambda_F^2 + 2 r_F r_G \\Lambda_F\\Lambda_G + r_G^2 \\Lambda_G^2$,\n $D_z := r_G^2 \\Lambda_G^2\/2 + 4r_G \\Lambda_G M_\\mathcal{Y}$,\n and $D_w := r_G^2\\Lambda_G^2\/2 + 4r_F r_G\\Lambda_F \\Lambda_G$.\n\\end{cor:ls_bound}\n\n\\begin{proof}\nUnder the assumptions, it is known that the Rademacher complexity of the linear-in-parameter model $F$ can be upper bounded as follows~\\citep{mohri2012foundations}:\n\\begin{align*}\n \\mathfrak{R}_p^N(F) \\le \\frac{r_F \\Lambda_F}{\\sqrt{N}}.\n\\end{align*}\nWe can bound $\\mathfrak{R}_p^N(G)$ similarly.\nApplying these bounds to Theorem~\\ref{thm:err_bound}, we obtain the statement of Corollary~\\ref{cor:ls_bound}.
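As a numerical aside (not part of the original proof), the bound $\\mathfrak{R}_p^N(F) \\le r_F\\Lambda_F\/\\sqrt{N}$ for a linear-in-parameter class can be sanity-checked by Monte Carlo over the Rademacher signs, since for fixed signs the supremum over the norm ball has a closed form by Cauchy--Schwarz. The features, sample size, and norm bound below are synthetic choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 400, 10
Phi = rng.normal(size=(N, d))          # rows play the role of feature vectors phi(x_i)
Lam = 2.0                              # norm bound Lambda_F on the weight vector
r = np.linalg.norm(Phi, axis=1).max()  # r_F = sup_i ||phi(x_i)||_2 over the sample

# For fixed Rademacher signs sigma, sup_{||a||_2 <= Lam} (1/N) sigma^T (Phi a)
# is attained at a parallel to Phi^T sigma, giving Lam * ||Phi^T sigma||_2 / N.
sigma = rng.choice([-1.0, 1.0], size=(2000, N))
emp_rademacher = Lam * np.linalg.norm(sigma @ Phi, axis=1).mean() / N

bound = r * Lam / np.sqrt(N)           # the r_F * Lambda_F / sqrt(N) bound
```

On random data the empirical value typically sits well below the bound, reflecting the slack in the Cauchy--Schwarz step.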
\n\\end{proof}\n\n\n\\subsection{Proof of Theorem~\\ref{thm:MSE_bound}}\n\\label{sec:proof_of_MSE_bound}\n\\newtheorem*{thm:MSE_bound}{Theorem~\\ref{thm:MSE_bound}}\nWe prove the following formal version of Theorem~\\ref{thm:MSE_bound}.\n\\begin{thm:MSE_bound}\nUnder the assumptions of Corollary~\\ref{cor:ls_bound}, it holds with probability at least $1-\\delta$ that $\\mathbf{E}[(\\widehat{f}(\\bm x)-u(\\bm x))^2]\\le(4e_{n, \\delta} + 2\\varepsilon_G^F + \\varepsilon_F)\/W^2$,\nwhere $\\varepsilon_G^F := \\sup_{f\\in F}\\varepsilon_G(f)$,\n$\\varepsilon_F := \\inf_{f\\in F} J(f)$,\n$\\widehat{f} \\in F$ is any approximate solution to $\\inf_{f\\in F}\\sup_{g\\in G}\\widehat{J}(f, g)$ satisfying $\\sup_{g\\in G} \\widehat{J}(\\widehat{f}, g) \\le \\inf_{f\\in F}\\sup_{g\\in G} \\widehat{J}(f, g) + e_{n, \\delta}$, and\n \\begin{align*}\n e_{n, \\delta} := \\frac{C_z\\sqrt{\\log\\frac{2}{\\delta}}+D_z}{\\sqrt{2n}}\n + \\frac{C_w\\sqrt{\\log\\frac{2}{\\delta}}+D_w}{\\sqrt{2\\widetilde{n}}}.\n \\end{align*}\n\n\\end{thm:MSE_bound}\n\n\\begin{proof}\n Let $J(f) := \\sup_{g\\in L^2(p)} J(f, g) = \\mathbf{E}[(\\mu_w(\\bm x)f(\\bm x) - \\mu_z(\\bm x))^2]$, $J_G(f) := \\sup_{g\\in G} J(f, g)$, and $\\widehat{J}_G(f) := \\sup_{g\\in G} \\widehat{J}(f, g)$.\n Let $\\widetilde{f} \\in F$ be any approximate solution to $\\inf_{f\\in F}J(f)$ satisfying $J(\\widetilde{f}) \\le \\varepsilon_F + e_{n, \\delta}$.\n\n As a special case of Eq.~\\eqref{eq:b1}, we can prove that with probability at least $1 - \\delta$, it holds for every $f\\in F$ that $J_G(f) \\le \\widehat{J}_G(f) + e_{n, \\delta}$.\n From Corollary~\\ref{cor:ls_bound}, it holds that with probability at least $1 - \\delta$,\n\\begin{align*}\nJ(\\widehat{f})\n&\\le\n\\left[J(\\widehat{f}) - J_G(\\widehat{f})\\right]\n+\\left[J_G(\\widehat{f}) - \\widehat{J}_G(\\widehat{f})\\right]\n+\\left[\\widehat{J}_G(\\widehat{f}) - \\widehat{J}_G(\\widetilde{f})\\right]\\\\\n&\\phantom{\\le}\n+\\left[\\widehat{J}_G(\\widetilde{f}) - 
J_G(\\widetilde{f})\\right]\n+ \\left[J_G(\\widetilde{f}) - J(\\widetilde{f})\\right]\n+ J(\\widetilde{f})\\\\\n&\\le \\varepsilon_G^F + e_{n, \\delta} + e_{n, \\delta}\\\\\n&\\phantom{\\le} + e_{n, \\delta} + \\varepsilon_G^F + \\left[\\varepsilon_F + e_{n, \\delta} \\right]\\\\\n&= 4e_{n, \\delta} + 2\\varepsilon_G^F + \\varepsilon_F.\n\\end{align*}\nSince $\\mathbf{E}[(\\widehat{f}(\\bm x)-u(\\bm x))^2] \\le \\frac{1}{W^2}J(\\widehat{f})$,\nwe obtain the bound in Theorem~\\ref{thm:MSE_bound}.\n\\end{proof}\n\n\n\\subsection{Binary Outcomes}\nWhen the outcome $y$ takes on binary values, e.g., success or failure, we can assume without loss of generality that $y\\in\\{-1, 1\\}$.\nThen, by the definition of the individual uplift, $u(\\bm x)\\in[-2, 2]$ for any\n$\\bm x\\in\\Re^d$.\nTo incorporate this fact, we may add the following range constraint on $f$: $-2 \\le f(\\bm x) \\le 2$ for every $\\bm x \\in \\{\\bm x_i\\}_{i=1}^n \\cup \\{\\widetilde{\\bm x}_i\\}_{i=1}^{\\widetilde{n}}$.\n\n\n\\subsection[Handling Different Feature Distributions]{Cases Where $p_1(\\bm x) \\neq p_2(\\bm x)$ or $(n_1, \\widetilde{n}_1)\\neq(n_2, \\widetilde{n}_2)$}\n\\label{sec:different_p_or_n}\n\nSo far, we have assumed that $p_1(\\bm x) = p_2(\\bm x)$, $n_1 = n_2$, and $\\widetilde{n}_1\n= \\widetilde{n}_2$.\nThe proposed method can be adapted to the more general case where these assumptions may not hold.\n\nLet $r_k(\\bm x) = \\frac{n}{2n_k}\\cdot\\frac{p(\\bm x)}{p_k(\\bm x)}$ and\n$\\widetilde{r}_k(\\bm x) = \\frac{\\widetilde{n}}{2\\widetilde{n}_k}\\cdot\\frac{p(\\bm x)}{p_k(\\bm x)}$, $k=1,2$, for every $\\bm x$ with $p_k(\\bm x) > 0$.\nLet $k_i := 1$ if the sample $\\bm x_i$ originally comes from $p_1(\\bm x)$,\nand $k_i := 2$ if it comes from $p_2(\\bm x)$.
Similarly, define\n$\\widetilde{k}_i\\in\\{1, 2\\}$ according to whether $\\widetilde{\\bm x}_i$ comes\nfrom $p_1(\\bm x)$ or $p_2(\\bm x)$.\nThen, unbiased estimators of the three terms in the proposed objective Eq.~\\eqref{eq:Lfg}\nare given as the following weighted sample averages using $r_k$ and $\\widetilde{r}_k$:\n\\begin{align*}\n \\mathbf{E}_{\\bm x\\sim p(\\bm x)}[w f(\\bm x)g(\\bm x)]\n &\\approx \n \\frac{1}{\\widetilde{n}}\\sum_{i=1}^{\\widetilde{n}}[w_i f(\\widetilde{\\bm x}_i)g(\\widetilde{\\bm x}_i)\\widetilde{r}_{\\widetilde{k}_i}(\\widetilde{\\bm x}_i)],\\\\\n \\mathbf{E}_{\\bm x\\sim p(\\bm x)}[z g(\\bm x)]\n &\\approx \n \\frac{1}{n}\\sum_{i=1}^{n}[z_i g(\\bm x_i)r_{k_i}(\\bm x_i)],\\\\\n \\mathbf{E}_{\\bm x\\sim p(\\bm x)}[g(\\bm x)^2]\n &\\approx \n \\frac{1}{2n}\\sum_{i=1}^{n}[g(\\bm x_i)^2r_{k_i}(\\bm x_i)]\n + \\frac{1}{2\\widetilde{n}}\\sum_{i=1}^{\\widetilde{n}}[g(\\widetilde{\\bm x}_i)^2\\widetilde{r}_{\\widetilde{k}_i}(\\widetilde{\\bm x}_i)].\n\\end{align*}\n\nThe density ratios $p(\\bm x)\/p_k(\\bm x)$ can be accurately estimated using i.i.d.\\@\nsamples from $p_k(\\bm x)$ and $p(\\bm x)$~\\citep{nguyen2010estimating, sugiyama2012density, yamada2013relative, liu2017ratio}.\n\n\n\\subsection{Unbiasedness of the Weighted Sample Average}\nBelow, we show that the weighted sample averages are unbiased estimators.\nWe prove the claim only for $\\mathbf{E}[wf(\\bm x)g(\\bm x)]$ since the other cases can be proven similarly.\nThe expectation of the weighted sample average transforms as follows:\n\\begin{align*}\n &\\frac{1}{\\widetilde{n}}\\sum_{i=1}^{\\widetilde{n}} \\mathbf{E}_{\\widetilde{\\bm x}^{(k)}_i\\sim p_k(\\bm x), t^{(k)}_i\\sim p_k(t\\given\\widetilde{\\bm x}^{(k)}_i)}\\left[w_if(\\widetilde{\\bm x}_i)g(\\widetilde{\\bm x}_i)\\widetilde{r}_{\\widetilde{k}_i}(\\widetilde{\\bm x}_i)\\right]\\\\\n &=\\frac{1}{\\widetilde{n}}\\sum_{k=1,2}\\sum_{i=1}^{\\widetilde{n}_k} \\mathbf{E}_{\\bm x\\sim p_k(\\bm x), t\\sim p_k(t\\given\\bm
x)}\\left[(-1)^{k-1}t f(\\bm x)g(\\bm x)\\frac{\\widetilde{n}}{2\\widetilde{n}_k}\\cdot\\frac{p(\\bm x)}{p_k(\\bm x)}\\right]\\\\\n &=\\frac{1}{2}\\sum_{k=1,2} \\mathbf{E}_{\\bm x\\sim p(\\bm x), t\\sim p_k(t\\given\\bm x)}\\left[(-1)^{k-1}t f(\\bm x)g(\\bm x)\\right]\\\\\n &=\\iint\\sum_{k=1,2}\\frac{1}{2}(-1)^{k-1}t\\,p_k(t\\given\\bm x)f(\\bm x)g(\\bm x)p(\\bm x)\\mathrm{d}t \\dbm x\\\\\n &=\\iint w p(w\\given\\bm x) f(\\bm x)g(\\bm x)p(\\bm x)\\mathrm{d}w \\dbm x\\\\\n &=\\mathbf{E}_{\\bm x\\sim p(\\bm x), w\\sim p(w\\given\\bm x)}[w f(\\bm x)g(\\bm x)].\n\\end{align*}\n\n\n\\subsection{Gaussian Basis Functions Used in Experiments}\n\\label{sec:gau_basis}\nThe $l$-th element of $\\bm\\phi(\\bm x) = (\\phi_1(\\bm x), \\dots, \\phi_{b_f}(\\bm x))^\\top$ is defined by\n\\begin{align*}\n \\phi_l(\\bm x) := \\exp\\left(\\frac{-\\norm{\\bm x - \\bm x^{(l)}}^2}{\\sigma^2}\\right),\n\\end{align*}\nwhere $\\bm x^{(l)}$, $l = 1, \\dots, b_f$, are randomly chosen training data points.\nWe used $b_f = 100$ and $\\sigma = 25$ for all experiments.
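A minimal NumPy sketch of this basis (the data here are synthetic; the $b_f = 100$ random centers and $\\sigma = 25$ follow the setting above):

```python
import numpy as np

def gaussian_basis(X, centers, sigma=25.0):
    # phi_l(x) = exp(-||x - x^(l)||^2 / sigma^2), one column per center x^(l)
    sq_dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / sigma ** 2)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                      # synthetic training inputs
idx = rng.choice(len(X), size=100, replace=False)  # b_f = 100 random training points
Phi = gaussian_basis(X, X[idx])                    # design matrix of shape (200, 100)
```

Each row of `Phi` is the feature vector $\\bm\\phi(\\bm x_i)$; a point evaluated at its own center gives $\\exp(0) = 1$.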
$\\bm\\psi$ is defined similarly.\n\n\\subsection{Justification of the Sub-Sampling Procedure}\n\\label{sec:subsample}\nSuppose that we want a sample subset $S_k$ following the treatment policy $p_k(t\\mid\\bm x)$.\nFor each sample $(\\bm x_i, t_i, y_i) \\sim p(\\bm x, t, y)$ in the original dataset,\nwe randomly add it into $S_k$ with probability proportional to $p_k(t_i\\mid\\bm x_i) \/ p(t_i\\mid\\bm x_i)$.\nThen,\n\\begin{align*}\n p(\\bm x_i, t_i, y_i \\mid (\\bm x_i, t_i, y_i) \\in S_k)\n &= \\frac{p((\\bm x_i, t_i, y_i) \\in S_k \\mid \\bm x_i, t_i, y_i)p(\\bm x_i, t_i, y_i)}%\n {\\int \\sum_{y_i, t_i} p((\\bm x_i, t_i, y_i) \\in S_k \\mid \\bm x_i, t_i, y_i)p(\\bm x_i, t_i, y_i)\\mathrm{d}\\bm x_i}\\\\\n &= \\frac{p_k(t_i\\mid\\bm x_i)p(y_i\\mid \\bm x_i, t_i) p(\\bm x_i)}\n {\\int \\sum_{y_i, t_i} p_k(t_i\\mid\\bm x_i)p(y_i\\mid \\bm x_i, t_i) p(\\bm x_i)\\mathrm{d}\\bm x_i}\\\\\n &= p_k(t_i\\mid\\bm x_i)p(y_i\\mid \\bm x_i, t_i) p(\\bm x_i).\n\\end{align*}\nThis means that the subsamples $S_k$ preserve the original $p(y\\mid\\bm x, t)$ and $p(\\bm x)$ but follow the desired treatment policy $p_k(t\\mid\\bm x)$.\n\n\n\n\\subsection{McDiarmid's Inequality}\n\\label{sec:McD}\nAlthough McDiarmid's inequality is a well-known theorem,\nwe present the statement to make the paper self-contained.\n\\begin{theorem}[McDiarmid's inequality]\n \\label{theorem:McD}\n Let $\\varphi: \\mathcal{D}^N \\to \\Re$ be a measurable function.\n Assume that there exists a real number $B_{\\varphi} > 0$ such that\n \\begin{align}\n \\abs{\\varphi(v_1, \\dots, v_N) - \\varphi(v'_1,\n \\dots, v'_N)} \\le B_{\\varphi}, \\label{eq:diff_bounded}\n \\end{align}\n for any $v_1, \\dots, v_N, v'_1, \\dots, v'_N\\in\\mathcal{D}$ where $v_i = v'_i$\n for all but one $i\\in\\{1, \\dots, N\\}$.\n Then, for any $\\mathcal{D}$-valued independent random variables $V_1,\n \\dots, V_N$ and any $\\delta > 0$\n the following holds with probability at least $1 - \\delta$:\n \\begin{align*}\n 
\\varphi(V_1, \\dots, V_N) \\le \\mathbf{E}[\\varphi(V_1, \\dots, V_N)] + \\sqrt{\\frac{B_{\\varphi}^2N}{2} \\log \\frac{1}{\\delta}}.\n \\end{align*}\n\\end{theorem}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{#1}}\n\n\\newcommand{\\Delta}{\\Delta}\n\\newcommand{\\frac{1}{2}}{\\frac{1}{2}}\n\\newcommand{\\frac{1}{3}}{\\frac{1}{3}}\n\\newcommand{\\frac{1}{4}}{\\frac{1}{4}}\n\\newcommand{\\p}[2]{{\\pi_{#1}^{(#2)}}}\n\\newcommand{{\\rm Tr}\\,}{{\\rm Tr}\\,}\n\\newcommand{\\Lambda}{\\Lambda}\n\\newcommand{\\lambda}{\\lambda}\n\\newcommand{\\begin{equation}}{\\begin{equation}}\n\\newcommand{\\end{equation}}{\\end{equation}}\n\\newcommand{\\gamma_{\\rm str}}{\\gamma_{\\rm str}}\n\n\\hyphenation{pre-print}\n\n\\begin{document}\n\\vspace*{-3.3cm}\n\\vbox{\n\\begin{flushright}\nNBI-HE-94-28 \\\\\nMay 1994\n\\end{flushright}\n\\title{From Trees to Galaxies:\\\\\nThe Potts Model on a Random Surface}\n\\author{Mark Wexler \\\\ \\\\\n \\small {\\it Niels Bohr Institute, Blegdamsvej 17}\\\\\n \\small {\\it 2100 Copenhagen \\O, Denmark}\\\\ \n \\small {\\it wexler@nbivax.nbi.dk}}\n\\date{}\n\\maketitle\n\\begin{abstract}\nThe matrix model of random surfaces with $c=\\infty$ has\nrecently been solved and found to be identical to a random\nsurface coupled to a $q$-states Potts model with $q=\\infty$.\nThe mean field-like solution exhibits a novel type of tree\nstructure. The natural question is, down to which---if\nany---finite values of $c$ and $q$ does this behavior persist?\nIn this work we develop, for the Potts model, an expansion\nin the fluctuations about the $q=\\infty$ mean field solution.\nIn the lowest---cubic---non-trivial order in this expansion\nthe corrections to mean field theory can be given a nice\ninterpretation in terms of structures (trees and ``galaxies'')\nof spin clusters. When $q$ drops below a finite $q_c$, the\ngalaxies overwhelm the trees at all temperatures, thus\nsuppressing mean field behavior. 
Thereafter the phase\ndiagram resembles that of the Ising model, $q=2$.\n\\end{abstract}}\n\n\\section{Introduction}\n\nThe infinite-state Potts model on a random spherical\nsurface can be solved exactly \\cite{Me2,Me3} by resumming all\norders of its low temperature expansion \\cite{Me1}. The\nproblem and its solution are of interest for the following\nreasons.\n\\begin{itemize}\n\\item\nIt can be shown to be equivalent to conformally invariant\nmodels with $c = \\infty$ (which can be realized, for example,\nby coupling multiple Ising models to the surface). Thus the\nsolution opens a window onto the mysterious $c>1$ regime,\nhitherto inaccessible via either Liouville theory or matrix\nmodels.\n\\item\nThis equivalence to conformal matter holds in spite of the\nwell-known fact that on a fixed two-dimensional lattice\na Potts model with more than four states suffers a\nfirst order---rather than a continuous---phase transition.\nCoupling an $\\infty$-state Potts model to\na random surface changes the order of the transition\nfrom first to third; in the case of $q<4$\nthe order of the transition correspondingly changes\nfrom second to third.\n\\item\nThe reason why the transition for $q=\\infty$ is of third\norder is that it is no longer a spin-ordering transition;\nrather it is a tree-growing transition, where the nodes\nof the trees are spin clusters (equivalently, ``baby\nuniverses''). The transition in the random surface\nPotts model from spin-ordering (low $q$) to tree-growing\n(high $q$) behavior is a novel one, and deserves attention\nin its own right.\n\\end{itemize}\n\nSome recent work has given support to the mean field\ntheory at $q=c=\\infty$.
Ambj\\o rn, Durhuus and\nJ\\'onsson \\cite{ADJ,Durhuus} (see also \\cite{ABC})\nhave shown that if a random\nsurface model has susceptibility exponent $\\gamma_{\\rm str}$, its\n``polymerized'' version has $\\bar{\\gamma}_{\\rm str}\n=\\gamma_{\\rm str}\/(\\gamma_{\\rm str}-1)$,\nin agreement with the mean field result $\\gamma_{\\rm str} = \\frac{1}{3}$.\nTheir polymerized surfaces of baby universes\nare suggestive of the trees of clusters found in the\nmean field model.\\footnote{Although baby universes\n(geometrical structures) are not {\\it a priori} identical\nto clusters (matter structures), the two seem to fuse\nat large $q$ or $c$. Note that the trees in \\cite{ADJ}\nare a result of the $N\\to\\infty$ limit, while the\ntrees in \\cite{Me2,Me3} are a result of $q,c\\to\\infty$.}\nHarris and Wheater \\cite{HarrisWheater}\nhave studied multiple Ising models,\nand in the limit $c\\to\\infty$ have identified trees as\ndominant, and found\n$\\gamma_{\\rm str}=\\frac{1}{2}$.\\footnote{Their trees are made of\n{\\it vertices}, not {\\it clusters}, and thus form\na subset of trees-of-clusters. 
There is no contradiction:\nto find clusters at the vertices of the trees, one\nhas to take $c\\to\\infty$ and $T\\to0$ simultaneously,\nkeeping $e^{-2\/T}c$ fixed.}\nA recent numerical study \\cite{MonteCarlo} of multiple\n$q=2, 3,$ and $4$ Potts models shows that spikiness of\nthe surfaces increases with increasing central charge,\nand that $\\gamma_{\\rm str}$ may well approach $\\frac{1}{3}$.\nFinally, a current numerical study of the Potts model\nfor various values of $q$ and for surfaces with tadpoles\nand self-energies \\cite{Teaser} is confirming the scaling of the\ncritical temperature $T_c^{-1} = \\frac{1}{2} \\log q + {\\cal O}(1)$\nwith large $q$, and, more significantly, details of\ncluster geometry predicted by the mean field theory.\n\nFor convenience, let us introduce the concept of a\n{\\it skeleton graph}: given a triangulated surface with some spin\nconfiguration, its skeleton is a graph that represents\nthe spin clusters as simple vertices, and whose edges are\nthe links between different spin clusters.\\footnote{So\nby definition, no vertex of a skeleton may be connected\nto itself.}\nEach loop of the skeleton graph introduces a factor $1\/q$\n(or $1\/c$ for conformal models).\nThus at $q=\\infty$ (or at $c=\\infty$),\nthe only skeletons that are present are trees.\nWhen we move down to large but finite $q$, the corrections\nintroduce loops; and while the leading terms are universal,\nthe corrections in $1\/q$ are not---their details depend\non the details of the matter model.\nThis is a familiar picture in $1\/D$ expansions. For instance\nin the scalar $\\lambda\\phi^4$ theory, the upper critical\ndimension is 4: below this, fluctuations or ``loops''\nbecome essential, and mean field theory breaks down.\n\nIt would be very interesting to know if something similar\nhappens for random surfaces. One's curiosity is heightened\nby the fact that Liouville theory seems to be silent on the\nmatter.
To investigate the large-but-finite $q$ region,\nwe develop a systematic expansion of the model (in its\nequivalent one-matrix form) about the $q=\\infty$ solution.\nA priori, we would expect the first non-trivial order\nin this expansion to be quadratic, as in many similar cases.\nIn this order we sum all the skeletons that have one or\nno loops, in other words the $q^0$ and $q^{-1}$ terms in\nthe partition function. But it is not hard to show that\nthis $q^{-1}$ correction is insignificant, in that it does not\nmodify the critical behavior of the model.\n\nThe next, cubic, order of our approximation is much richer.\nHere we include all skeletons whose clusters have three\nor fewer external legs that lie on loops. We therefore\nhave terms with arbitrarily many loops, meaning arbitrarily\nhigh powers of $q^{-1}$. A new structure makes its appearance:\nthe clusters are organized into $\\phi^3$ graphs of spherical\ntopology. These ``surfaces'' of clusters will be called\n{\\it galaxies}. The trees are still present: any number\nof them may grow out of any cluster. On top of that,\na typical skeleton graph includes several galaxies,\nalso linked into a tree.\n\nThis complicated system shows complicated critical behavior.\nFor $q=\\infty$, there is a magnetized low temperature phase where the\nclusters are large, and there are few of them: essentially\npure gravity. At some point that phase becomes unstable,\nand the tree of clusters blows up. Above that, the clusters\nare small, but the tree is large.\n\nAt large but finite $q$, the above picture of critical behavior\nis unchanged, except for the appearance of a new phase\nat high temperature,\nthat of large galaxies. At low temperature we still have\nthe pure gravity phase of large clusters; this still goes\ninto an intermediate phase of large trees.
But now, at an even higher\ntemperature (which is infinite when $q$ is infinite, and\nwhich decreases with $q$), the area of the galaxies---the\nnumber of clusters they contain---diverges. In this phase,\nthe loops become essential: without loops there could be no\ngalaxies. The phase of large galaxies is also akin to\npure gravity.\n\nThe tree phase becomes narrower and narrower as $q$ decreases,\nuntil at some $q_c$, it disappears altogether: one goes\nfrom the magnetized phase directly into the phase\nof large galaxies. Although mean field theory is modified\nat any finite $q$, it is completely destroyed below $q_c$.\nFor $q \\le q_c$ the phase diagram is altogether reasonable,\nin that it resembles the known phase diagrams of the $q<4$\nmodels: there are two pure gravity phases, at low and at\nhigh temperature, separated by a multicritical point.\n\nThis paper is organized as follows. Section 2 presents the\nmulti-matrix version of the Potts model, its low temperature\nexpansion, its effective one-matrix versions, and its\n$q=\\infty$ solution.\nIn section 3 we introduce an expansion about the $q=\\infty$\nsolutions and its truncated versions, and discuss its\ngeometrical implications.\nIn section 5 we\nanalyze the critical behavior of the simplest nontrivial\ntruncation.\n\n\\section{Definition of the model}\n\nThe $q$-state Potts model on a spherical random surface\nwill be defined using a matrix model (see \\cite{GinspargMoore}\nfor a review):\n\\begin{equation}\n\\label{basicpotts}\nF_q(g,\\zeta) = \\frac{1}{q} \\log \\frac{1}{f_0}\n\\int {\\cal D} \\phi_1 \\cdots {\\cal D} \\phi_q \\:\n\\exp -N \\, {\\rm Tr}\\, \\left( \\frac{1}{2} \\sum_{ij}\n\\phi_i G_{ij} \\phi_j + g \\sum_i \\phi_i^3 \\right)\n\\end{equation}\nwhere the $\\phi_i$ are $N \\times N$ hermitian matrices\nand all matrix integrals are to be understood as being\ndivided by $N^2$ and taken in the limit $N \\to \\infty$.\nThe matrix $G$ defines\nthe matter that is to be coupled to the 
fluctuating\nsurface; in the case of the Potts model it is\n\\begin{equation}\nG = \\left(\n\\begin{array}{cccc}\n1 & \\zeta & \\cdots & \\zeta \\\\\n\\zeta & 1 & \\ddots & \\vdots \\\\\n\\vdots & \\ddots & \\ddots & \\zeta \\\\\n\\zeta & \\cdots & \\zeta & 1\n\\end{array}\n\\right)^{-1}\n= \\: \\left(\n\\begin{array}{cccc}\na_q & b_q & \\cdots & b_q \\\\\nb_q & a_q & \\ddots & \\vdots \\\\\n\\vdots & \\ddots & \\ddots & b_q \\\\\nb_q & \\cdots & b_q & a_q\n\\end{array} \\right)\n\\end{equation}\n\\begin{equation}\n\\label{aqbq}\na_q = \\frac{1 + (q-2) \\zeta}{(1-\\zeta)(1 + (q-1)\\zeta)},\\qquad\nb_q = \\frac{-\\zeta} {(1-\\zeta)(1 + (q-1)\\zeta)}\n\\end{equation}\nThe coupling constant $\\zeta$ is related to the matter\ntemperature by $\\zeta = e^{-2\/T}$.\nFinally, the normalization $f_0$ is defined by requiring\n$F_q(0,\\zeta) = 1$.\n\nThe partition function $F_q$ can be expanded in powers of\n$\\zeta$. This is the so-called low temperature or cluster\nexpansion, as in order $\\zeta^n$ we have a sum over all connected\nconfigurations of clusters with $n$ inter-cluster links;\nas well as over the {\\it internal} geometry of each cluster.\nDefining $\\Delta = q - 1$,\nthe first four orders of the expansion are\n\\begin{eqnarray}\n\\label{lte}\n\\lefteqn{F_q(g,c) = \\pi_0 +\n\\frac{1}{2} \\Delta \\pi_1^2 \\zeta + \n\\left[\\frac{1}{2} \\Delta^2 \\pi_1^2 (\\pi_2 + \\pi_{11}) +\n \\frac{1}{4} \\Delta \\pi_2^2 \\right] \\zeta^2 } \\\\\n& & \\hspace{1.2cm} \\epsfxsize=.1cm\\epsfbox{z0.eps}\n \\hspace{.8cm} \\epsfxsize=.7cm\\epsfbox{z11.eps}\n \\hspace{1.7cm} \\epsfxsize=1cm\\epsfbox{z112.eps}\n \\hspace{1.7cm} \\epsfxsize=.7cm\\epsfbox{z22.eps} \\nonumber \\\\\n& & \\mbox{\\vspace*{-1cm}} \\nonumber \\\\\n& & \\mbox{} + \\left[\\frac{1}{2} \\Delta^3 \\pi_1^2 (\\pi_2 + \\pi_{11})^2 +\n\\frac{1}{6} \\Delta^3 \\pi_1^3 (\\pi_3 + \\pi_{12} + \\pi_{111})\n+ \\frac{1}{6} \\Delta(\\Delta-1) \\pi_2^3 \\right. 
\\nonumber \\\\\n& & \\vspace*{-1cm}\n \\hspace{1.5cm} \\epsfxsize=1.2cm\\epsfbox[0 -160 500 -124]{z1122.eps}\n \\hspace{2.8cm} \\epsfxsize=.6cm\\epsfbox[0 -50 252 253]{z1113.eps}\n \\hspace{2.9cm} \\epsfxsize=.7cm\\epsfbox[0 -60 288 192]{z222.eps} \\nonumber \\\\\n& & \\qquad\\left. \\mbox{} + \\frac{1}{2} \\Delta^2 \\pi_1 \\pi_2 \\left(\\pi_3\n+ \\frac{\\pi_{12}}{3}\\right) +\n\\frac{1}{24} \\Delta \\pi_3^2\\right] \\zeta^3 + \\cdots \\nonumber \\\\\n& & \\hspace{2.4cm} \\epsfxsize=1cm\\epsfbox{z123.eps}\n \\hspace{2.1cm} \\epsfxsize=.7cm\\epsfbox{z33.eps}\n\\end{eqnarray}\nThe picture underneath each term is the corresponding\nskeleton graph---see the Introduction.\nThe various coefficients $\\pi_p$\n($p$ is a partition of an integer $n$, where $n$\nis the number of the cluster's external legs)\nare functions of the cosmological constant $g$ that sum\nover the internal geometries of the clusters.\n\nTo define and to calculate the $\\pi$'s, one introduces\nan external matrix integral\n\\begin{equation}\n\\label{kgndef}\n\\Pi(g,\\Lambda) = -\\frac{1}{2N} {\\rm Tr}\\,\\Lambda^2 -\n\\log \\frac{1}{p_0} \\int {\\cal D}\\phi \\:\n\\exp -N \\, {\\rm Tr}\\, \\left( \\Lambda\\phi + \\frac{1}{2} \\phi^2 +\ng \\phi^3 \\right)\n\\end{equation}\nThis integral has been calculated by Kazakov and by\nGross and Newman.\nIf we expand it in powers of $\\Lambda$, we get\n\\begin{eqnarray}\n\\label{kgnexp}\n\\Pi(g,\\Lambda) & = & \\pi_0 + \\frac{\\pi_1}{N} {\\rm Tr}\\, \\Lambda +\n\\frac{1}{2} \\left[\\frac{\\pi_2}{N} {\\rm Tr}\\, \\Lambda^2 +\n\\frac{\\pi_{11}}{N^2} ({\\rm Tr}\\, \\Lambda)^2 \\right] \\nonumber \\\\\n& & \\qquad + \\frac{1}{6} \\left[ \\frac{\\pi_3}{N} {\\rm Tr}\\, \\Lambda^3 +\n\\frac{\\pi_{12}}{N^2} {\\rm Tr}\\, \\Lambda {\\rm Tr}\\, \\Lambda^2 +\n\\frac{\\pi_{111}}{N^3} ({\\rm Tr}\\,\\Lambda)^3 \\right] + \\cdots\n\\end{eqnarray}\nThe coefficients $\\pi_p(g)$ are the ones we encounter\nin the low temperature expansion (\\ref{lte}).\nIn $n$-th order, we find $\\pi_p$ for all
partitions\n$p$ of $n$. The coefficient $\\pi_{n_1 n_2 \\cdots}$\nis a connected Green's function of the one-matrix\nmodel $\\Pi(g,0)$ with $n_1 + n_2 + \\cdots\\:$ external\nlegs, on each of which we place the matrix $\\Lambda$;\nthe graphs can be twisted in various ways, which\ngives rise to the different combinations of traces.\nFor example, $\\pi_2$ sums graphs such as\n$$\n\\ba\\epsfxsize=3cm\\epsfbox{twist0.eps}\\ea \\quad \\leadsto \\quad {\\rm Tr}\\, \\Lambda^2\n$$\nwhile $\\pi_{11}$ sums graphs such as\n$$\n\\ba\\epsfxsize=3cm\\epsfbox{twist1.eps}\\ea \\quad \\leadsto \\quad ({\\rm Tr}\\,\\Lambda)^2\n$$\nThe untwisted coefficients $\\pi_n$ are just\nthe $n$-point connected BIPZ Green's functions \\cite{BIPZ},\nwhile the twisted coefficients have not been discussed\nbefore in the literature. They can all be calculated\nexactly, and the first few are given in\nAppendix A.\n\nThe complicated linear combinations\nof the coefficients $\\pi_p$ appearing\nin the low temperature expansion (\\ref{lte}) are due\nto the large $N$ limit of our model. It will be seen\nbelow that there is a way to organize these linear\ncombinations that will clarify their nature and simplify\ntheir calculation.\n\nHere we briefly summarize the solution of the Potts\nmodel in the $q\\to\\infty$ limit.\nAs can easily be seen, the coefficient of $\\zeta^n$\nin the partition function is an $n$-th degree polynomial\nin $\\Delta$, so the proper scaling for the temperature is\n$\\zeta \\sim 1\/\\Delta$. Defining $\\zeta = a\/\\Delta$ and keeping $a$\nfixed\\footnote{This is the most interesting way of taking\nthe $q\\to\\infty$ limit. If we keep $\\zeta$ fixed instead,\nfor example, we will find only one of the several phases\nof the model.}\nwhile letting $q \\to \\infty$, it was found that\nonly skeleton graphs that are trees contribute to\n$F_\\infty(g,a)$.
These can be resummed to give\n\\begin{eqnarray}\n\\label{qinfint}\nF_\\infty(g,a) & = & \\lim_{\\theta\\to\\infty} \\frac{1}{\\theta}\n\\log \\frac{1}{f_0} \\int_{-\\infty}^{\\infty} dx \\:\n\\exp -\\theta \\left[ \\frac{x^2}{2a} - \\Pi(g, x) \\right]\\\\\n\\label{qinfsol}\n& = & \\Pi(g, y) - \\frac{y^2}{2a},\\qquad\n\\Pi(g,y) \\equiv \\Pi(g,y1)\n\\end{eqnarray}\nwhere the function $y(g,a)$ is a saddle point of\n$F_\\infty$:\n\\begin{equation}\n\\label{qinfsaddle}\ny = a \\frac{\\partial}{\\partial y} \\Pi(g,y) = a \\Pi_1(g,y)\n\\end{equation}\nThe general features of the $q=\\infty$ model's critical\nbehavior have already been described in the Introduction;\nsee \\cite{Me3} for details.\\footnote{This limit can\nbe called a mean field solution because it can be\narrived at by (an {\\it a priori} inexact) procedure\nof substituting $\\phi_i \\phi_j \\to \\Phi \\phi_i$ in\nthe multi-matrix model (\\ref{basicpotts}) and solving\nself-consistently for $\\Phi$; additionally, one\nhas to take the (not {\\it a priori} obvious) ansatz\nthat $\\Phi$ is a multiple of unity, as in Wilson's\nmean field solution of QCD.}\n\nKazakov \\cite{Kazakov} has pointed out\nthat the random surface Potts model\ncan also be formulated as a one-matrix model with the action\n\\begin{equation}\n\\label{natural}\nS_n = \\Delta \\left[\\frac{1}{2\\hat{a}} {\\rm Tr}\\,\\phi^2 -\nN \\left(1 + \\frac{1}{\\Delta}\\right)\n\\Pi\\left((1 - a\/\\Delta)^{3\/2} g, \\phi\\right) \\right],\n\\quad \\hat{a} = \\frac{a}{1 - a\/\\Delta}\n\\end{equation}\nThe simple derivation is given in Appendix B.\nThis effective action will be called the ``natural'' one.\nFor the purposes of the present calculation, it is useful to\nintroduce a slightly different, ``artificial'' effective action\n\\begin{equation}\n\\label{artificial}\nS_a = \\Delta \\left[-\\rho\\, {\\rm Tr}\\,\\phi + \\frac{1}{2a} {\\rm Tr}\\,\\phi^2 -\nN \\Pi(g,\\phi) \\right]\n\\end{equation}\nThe difference between these two effective models, their 
relative\nmerits and drawbacks, are fully explained in\nAppendix B.\n\nIn going from the multi-matrix model (\\ref{basicpotts})\nto the effective one-matrix models (\\ref{natural}) and\n(\\ref{artificial}), we trade the complexity of having\nseveral matrices for the complexity of the new one-matrix\npotential---see eq.\\ (\\ref{kgnexp}). But what is really\nhappening is that the vertices of the effective models\nare playing a completely different role than the vertices\nof the original model. In the original multi-matrix model,\nthe vertices represented points on the discretized\nworldsheet. In the effective models, the vertices---which\nare complicated functions of the original cosmological\nconstant $g$---actually represent entire spin clusters.\nThis is possible because a twisted cluster is equivalent,\nfrom the large-$N$ fatgraph point of view, to a nonlinear\nvertex\n$$\n\\ba\\epsfxsize=3cm\\epsfbox{twist1.eps}\\ea \\quad \\Longleftrightarrow \\quad \\ba\\epsfxsize=1.5cm\\epsfbox{nonlin.eps}\\ea\n$$\nsuch as $({\\rm Tr}\\,\\Lambda)^2$. This is why all possible nonlinear\nterms appear in the expansion (\\ref{kgnexp}) of the\npotential $\\Pi(g,\\Lambda)$; or, seen from the other direction,\nwhy clusters of every possible twist appear in the low\ntemperature expansion (\\ref{lte}).\\footnote{The\nprocedure that led from multi-matrix model to the effective\none-matrix model can be applied to any---not only Potts---matter.\nOnly the Potts model, however, is symmetric with respect\nto {\\it any} relabeling of matter states; that is why its\neffective model is a {\\it one}-matrix model. 
Effective\ncluster models for other types of matter are multi-matrix\nmodels.}\n\nOne could rederive the $q,\\Delta \\to \\infty$ limit\nin a quick and dirty way by noticing that\nwhen the model (\\ref{natural}) is reduced to an eigenvalue\nproblem, the charge of the eigenvalues will scale as\n$\\Delta^{-1}$, and therefore the width of the eigenvalue\nspectrum will scale as $\\Delta^{-1\/2}$. At $\\Delta=\\infty$, the\neigenvalue density will be a delta function---precisely\nthe content of equation (\\ref{qinfint}).\n\n\\section{Fluctuations about the mean field}\n\nSo at $q=\\infty$ the eigenvalue density of the effective\nmodels (\\ref{natural}) and (\\ref{artificial}) collapses\nto a delta function.\nWhen $q$ is large but finite, the eigenvalues\nspread into a narrowly peaked distribution. The main idea\nof this work is to investigate the shape of that distribution\nby expanding the effective models about the scalar solution\n\\begin{equation}\n\\label{phipsi}\n\\phi = y 1 + \\psi\n\\end{equation}\nin powers of $\\psi$. 
One type of critical point will result\nfrom singularities of the equations for the eigenvalue\ndistribution of $\\psi$.\nOnce an approximate distribution is found in this way, we will\nplug it back into the full effective model to determine critical\nbehavior;\nanother type of critical point will result from singularities\nof the potential $\\Pi$ for a given $\\psi$.\nFor simplicity, the following calculations will be shown only for\nthe artificial model (\\ref{artificial}); the calculations for the\nnatural model (\\ref{natural}) differ in trivial ways, and their\nresult will be given at the end.\n\nThe action (\\ref{artificial}) now reads\n\\begin{equation}\n\\label{intermact}\nS_a = \\Delta \\left[ N\\left(-\\rho y + \\frac{y^2}{2a}\\right) +\n\\left(-\\rho + \\frac{y}{a}\\right) {\\rm Tr}\\, \\psi\n+ \\frac{1}{2a} {\\rm Tr}\\,\\psi^2\n-N \\Pi(g,y+\\psi) \\right]\n\\end{equation}\nThe reader is reminded \\cite{Me3}\nthat the $q=\\infty$ partition function\n$F_\\infty$ is a sum over free trees, and its saddle point $y(g,a)$\nis a sum over {\\it planted} trees: trees with a marked vertex\nof order one. 
Explicitly,\n\\begin{equation}\n\\label{yexpl}\ny(g,a) = \\pi_1 a + \\pi_1 (\\pi_2 + \\pi_{11}) a^2 +\n\\left[\\pi_1 (\\pi_2 + \\pi_{11})^2 +\n\\frac{1}{2} \\pi_1^2 (\\pi_3 + \\pi_{12} + \\pi_{111})\\right] a^3 + \\cdots\n\\end{equation}\nThis low temperature expansion of $y$ can be represented\ngraphically as\n$$\n\\epsfxsize=8cm\\epsfbox{y.eps}\n$$\nWe are led to consider the expansion of the external matrix integral\nabout a scalar:\n\\begin{eqnarray}\n\\label{expwithy}\n\\lefteqn{\n\\Pi(g,y + \\psi) = \\Pi_0(g,y)\n+ \\frac{\\Pi_1(g,y)}{N} {\\rm Tr}\\, \\psi}\\\\\n& & \\quad \\mbox{} + \\frac{1}{2} \\left[\n\\frac{\\Pi_2(g,y)}{N} {\\rm Tr}\\, \\psi^2\n+ \\frac{\\Pi_{11}(g,y)}{N^2} ({\\rm Tr}\\, \\psi)^2 \\right] \\nonumber \\\\\n& & \\quad \\mbox{} + \\frac{1}{6} \\left[\n\\frac{\\Pi_3(g,y)}{N} {\\rm Tr}\\, \\psi^3\n+ \\frac{\\Pi_{12}(g,y)}{N^2} {\\rm Tr}\\, \\psi {\\rm Tr}\\, \\psi^2\n+ \\frac{\\Pi_{111}(g,y)}{N^3} ({\\rm Tr}\\,\\psi)^3 \\right]\n+ \\cdots \\nonumber\n\\end{eqnarray}\nThe vertex functions $\\Pi_p$ can be calculated explicitly, and are\ngiven in Appendix A.\nTo understand what they are counting,\nwe can expand several of them in powers of $y$:\n\\begin{eqnarray}\n\\Pi_0(g,y) & = & \\pi_0 + \\pi_1 y + \\frac{1}{2} (\\pi_2 + \\pi_{11}) y^2\n+ \\frac{1}{6} (\\pi_3 + \\pi_{12} + \\pi_{111}) y^3 + \\cdots\\nonumber \\\\\n\\Pi_2(g,y) & = & \\pi_2 + \\left(\\pi_3 + \\frac{\\pi_{12}}{3}\\right) y\n+ \\frac{1}{2} \\left(\\pi_4 + \\frac{\\pi_{13}}{2} + \\frac{\\pi_{22}}{3} +\n\\frac{\\pi_{112}}{6} \\right) y^2 + \\cdots\\nonumber \\\\\n\\label{cubexpy}\n\\Pi_3(g,y) & = & \\pi_3 + \\left(\\pi_4 + \\frac{\\pi_{13}}{4}\n\\right) y + \\cdots,\\: {\\rm etc.}\n\\end{eqnarray}\nThe linear combinations of $\\pi_p$'s in $\\Pi_0, \\Pi_2, \\Pi_3,\n\\ldots$ are precisely those that occur in the low temperature\nexpansion (\\ref{lte}). 
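The coefficients of this expansion can be checked mechanically. Assuming the $q=\\infty$ saddle point condition takes the form $y = a\\,\\partial_y \\Pi_0(g,y)$ (the form suggested by the criticality condition (\\ref{qinfsaddlecrit}) below), fixed-point iteration with the expansion of $\\Pi_0$ from (\\ref{cubexpy}) reproduces the planted-tree series order by order. A minimal sketch, with purely illustrative numerical values for the $\\pi_p$'s:

```python
# Truncated power series in a, stored as coefficient lists [a^0, a^1, a^2, a^3].
ORDER = 4

def mul(u, v):
    """Product of two truncated series."""
    w = [0.0] * ORDER
    for i in range(ORDER):
        for j in range(ORDER - i):
            w[i + j] += u[i] * v[j]
    return w

def times_a(u):
    """Multiply a truncated series by a."""
    return [0.0] + u[:ORDER - 1]

# Illustrative values for the one-cluster weights pi_p (hypothetical numbers).
p1, p2, p11, p3, p12, p111 = 0.7, 0.3, 0.2, 0.11, 0.05, 0.02

def dPi0(y):
    """dPi_0/dy = pi_1 + (pi_2 + pi_11) y + (1/2)(pi_3 + pi_12 + pi_111) y^2 + ...,
    read off from the expansion of Pi_0 above."""
    y2 = mul(y, y)
    out = [(p2 + p11) * y[k] + 0.5 * (p3 + p12 + p111) * y2[k] for k in range(ORDER)]
    out[0] += p1
    return out

# Fixed-point iteration of y = a dPi_0/dy; each pass stabilizes one more order in a.
y = [0.0] * ORDER
for _ in range(ORDER):
    y = times_a(dPi0(y))

# Planted-tree coefficients claimed in the text, order by order in a.
expected = [0.0,
            p1,
            p1 * (p2 + p11),
            p1 * (p2 + p11) ** 2 + 0.5 * p1 ** 2 * (p3 + p12 + p111)]
```

The last entry is the third-order (three-vertex planted tree) coefficient of the expansion above.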
The $n$-th coefficient of $\\Pi_0$\ncounts clusters that have $n$ external legs, each of which is\nconnected to a tree, {\\it i.e.}, none of its legs reconnect.\nBy contrast,\nthe $n$-th coefficient of $\\Pi_2$ counts clusters that have\ntwo legs that lie on a loop (and will eventually reconnect),\nand $n$ other legs that are connected to trees; and so on.\nThus a cluster with three external legs, for example, may\nappear in the low temperature expansion as $\\pi_3$, $\\pi_3\n+ \\pi_{12}\/3$, or $\\pi_3 + \\pi_{12} + \\pi_{111}$, depending\non whether it has 3, 2, or 0 legs that lie on loops.\n\nThe expansion of action (\\ref{intermact}) in powers of\n$\\psi$ is therefore\\footnote{Two terms cancel due to\n(\\ref{qinfsol}).}\n\\begin{eqnarray}\n\\label{artfull}\n\\frac{S_a}{\\Delta} & = & -N \\left(\\Pi_0 - \\frac{y^2}{2a} + \\rho y\\right)\n-\\rho\\,{\\rm Tr}\\,\\psi + \\frac{1}{2} (a^{-1} - \\Pi_2) {\\rm Tr}\\, \\psi^2 -\n\\frac{\\Pi_{11}}{2N} ({\\rm Tr}\\,\\psi)^2 \\nonumber \\\\\n& & \\qquad\\mbox{} - \\frac{\\Pi_3}{6} {\\rm Tr}\\, \\psi^3\n- \\frac{\\Pi_{12}}{6 N} {\\rm Tr}\\, \\psi \\, {\\rm Tr}\\, \\psi^2\n- \\frac{\\Pi_{111}}{6 N^2} ({\\rm Tr}\\,\\psi)^3 + \\cdots\n\\end{eqnarray}\nAccording to the above discussion, the vertex $\\Pi_3\\,{\\rm Tr}\\,\\psi^3$\nis really an infinite sum of vertices\n(\\ref{cubexpy}), and can be graphically\nrepresented as\n\\begin{equation}\n\\label{vertfigone}\n\\Pi_3(g,y)\\,{\\rm Tr}\\,\\psi^3 \\quad \\leadsto \\quad\n\\begin{array}{c}\\epsfxsize=7cm\\epsfbox{vert0.eps}\\end{array}\n\\end{equation}\nwhile a non-linear vertex such as $\\Pi_{12}\\,{\\rm Tr}\\,\\psi\\,{\\rm Tr}\\,\\psi^2$\nmay be represented as\n\\begin{equation}\n\\label{vertfigtwo}\n\\Pi_{12}(g,y)\\,{\\rm Tr}\\,\\psi\\,{\\rm Tr}\\,\\psi^2 \\quad \\leadsto \\quad\n\\begin{array}{c}\\epsfxsize=7cm\\epsfbox{vert1.eps}\\end{array}\n\\end{equation}\nThe model (\\ref{artfull}) is an expansion about the tree-like,\nmean field solution $\\phi = y$. 
The geometrical meaning of\nthis statement is reflected in the above pictures: trees are\nautomatically included in the vertices of the new matrix model.\nThe graphs of this matrix model can be pictured as a dense\nnetwork (in fact, a spherical surface) of clusters, from the\nvertices of which sprout trees.\n\nTo proceed further with the matrix\nmodel (\\ref{artfull}) we are obliged to truncate\nthe potential $\\Pi(g,y+\\psi)$.\nThe simplest possibility is to truncate\nafter the quadratic term.\nWe take $\\rho=0$ in this case,\nand immediately evaluate the partition\nfunction\\footnote{The last term in (\\ref{trunctwo})\nis there to cancel cluster self-con\\-nec\\-tions; it should have\nbeen put in by hand as a constant in the action.}\n\\begin{equation}\n\\label{trunctwo}\nF = \\Pi_0(y) - \\frac{y^2}{2a}\n- \\frac{1}{2\\Delta} \\log[1 - a \\Pi_2(y)] - \\frac{a \\Pi_2(y)}{2\\Delta}\n\\end{equation}\nThe new $1\/\\Delta$ terms in the partition function just count the\none-loop contributions to the low temperature expansion\n(\\ref{lte}).\nIt is clear that they cannot affect the critical\nbehavior of the model: a large tree with one loop is\nindistinguishable from just a large tree. 
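The bookkeeping behind the last term of (\\ref{trunctwo}) can be made explicit: $-\\frac{1}{2}\\log(1-x) - \\frac{x}{2} = \\sum_{n\\ge2} x^n\/(2n)$, so the subtraction removes precisely the $n=1$ (self-connection) term, and the surviving terms count one-loop rings of $n\\ge2$ clusters. A quick numerical check of this identity:

```python
import math

x = 0.3  # stands in for a*Pi_2; any |x| < 1 will do
closed_form = -0.5 * math.log(1.0 - x) - 0.5 * x
# rings of n >= 2 clusters, each ring carrying weight x^n/(2n)
loop_sum = sum(x ** n / (2 * n) for n in range(2, 80))
```

With the series truncated at $n=80$ the two expressions agree to machine precision.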
This can\nbe demonstrated by noticing that the only way in which\nthe $1\/\\Delta$ terms can affect the critical behavior of the model\nis through a divergence of the logarithm;\nbut, since $\\Pi_2 < \\partial_y^2\n\\Pi_0 = \\Pi_2 + \\Pi_{11}$ (see Appendix A),\nand $1 - a \\partial_y^2\n\\Pi_0 \\ge 0$ in the physical region of the $q=\\infty$ model\n(see \\cite{Me2}), this is impossible.\n\nWe are therefore obliged to keep higher order terms in the\npotential $\\Pi(g,y+\\psi)$.\nThe meaning of a truncation after $\\psi^n$ terms,\nfor $n \\ge 3$, is as follows.\nConsider a cluster with $\\lambda$ links leading to other clusters.\nOf those links, $\\lambda_t$ are to trees and $\\lambda_\\ell = \\lambda - \\lambda_t$\nare to loops; meaning that if one of the $\\lambda_t$ tree links is\ncut, the skeleton will be disconnected. For instance, the cluster\nindicated by the arrow\n$$\n\\epsfxsize=2cm\\epsfbox{exvert.eps}\n$$\nhas $\\lambda_t = 2$ and $\\lambda_\\ell = 3$. So: if we truncate the\naction (\\ref{artfull}) after the $\\psi^n$ terms, we discard\nany skeleton that has a cluster with $\\lambda_\\ell > n$. Note\nthat $\\lambda_t$ does not matter: as discussed above, each vertex\nof (\\ref{artfull}) automatically includes arbitrary numbers\nof trees.\n\nWe now consider the model (\\ref{artfull}) in the {\\it cubic\napproximation}: we discard all terms of order $\\psi^4$ or\nhigher. 
In this approximation the\nclusters---which are themselves spherical $\\varphi^3$\ngraphs---are arranged into spherical $\\varphi^3$ graphs\nthat also have trees of clusters\ngrowing from their vertices (given by\npowers of $y$: see (\\ref{vertfigone}), (\\ref{vertfigtwo})).\nThis structure will be\ncalled a ``galaxy.'' A typical graph in the cubic\nmodel has several galaxies, with the nonlinear terms\n$({\\rm Tr}\\,\\phi)^2$, ${\\rm Tr}\\,\\phi\\,{\\rm Tr}\\,\\phi^2$, and $({\\rm Tr}\\,\\phi)^3$\nconnecting the galaxies into a tree structure (not to\nbe confused with the trees that grow out of the vertices\ninside the galaxies). The cubic approximation differs\nradically from the quadratic one in that it does not simply\nsum over graphs with one more loop and with one higher\norder in $\\Delta^{-1}$, but rather it sums over a large\nclass of graphs with an arbitrary number of loops.\nIn fact, it sums over all graphs, provided that no\ncluster has more than three legs lying on loops;\nfor example, the following are simple skeleton graphs\nthat are {\\it not} included in the cubic approximation:\n$$\n\\epsfxsize=5cm\\epsfbox{notcubic.eps}\n$$\n\nThe solution of the cubic model will proceed along the\nsame lines as the simpler model solved in \\cite{Indians}.\nFirst off, we set\n\\begin{equation}\n\\rho = -\\frac{\\Pi_3}{2a} - \\frac{\\Pi_{12}}{6a}\n\\end{equation}\nto cancel the cluster self-connections. Then we follow\nthe standard procedure, reducing the problem to eigenvalues\nand introducing a continuous approximation to the spectrum\n$\\lambda(x)$. Using a\nprocedure that is exact in the $N\\to\\infty$ limit\nwe introduce the two moments\n\\begin{equation}\nc = \\left\\langle\\frac{1}{N}{\\rm Tr}\\,\\psi\\right\\rangle = \\int dx\\,\\lambda(x),\n\\qquad\nd = \\left\\langle\\frac{1}{N}{\\rm Tr}\\,\\psi^2\\right\\rangle=\\int dx\\,\\lambda^2(x)\n\\end{equation}\nas parameters, for which we will solve self-consistently\nat the end. 
The saddle point equation for the action reads\n\\begin{eqnarray}\n\\label{saddlepoint}\n\\frac{\\delta S}{\\delta\\lambda} & = & q + m \\lambda(x) + 3 p \\lambda^2(x)\n- \\frac{1}{2} \\int' \\frac{dy}{\\lambda(x)-\\lambda(y)} = 0 \\\\\n-\\frac{p}{\\Delta} & = & \\frac{\\Pi_3}{6} \\\\\n-\\frac{m}{\\Delta} & = & -\\frac{1}{a} + \\Pi_2 + \\frac{c \\Pi_{12}}{3} \\\\\n-\\frac{q}{\\Delta} & = & \\rho + c \\Pi_{11} +\n\\frac{d\\Pi_{12}}{6} + \\frac{c^2 \\Pi_{111}}{2}\n\\end{eqnarray}\nkeeping in mind that the $\\Pi_p$'s are (complicated) functions\nof $g$ and $y(g,a)$.\nThe eigenvalue density for this problem is\n\\begin{equation}\n\\label{evaldens}\nu(\\lambda) = \\frac{1}{\\pi} (m + \\sigma + 3 p \\lambda)\n\\sqrt{\\frac{4}{m + 2\\sigma} - \\left(\\lambda - \\frac{\\sigma}{3p}\\right)^2}\n\\end{equation}\nwhere the variable $\\sigma$ satisfies the equation\n\\begin{equation}\n\\label{sigma-eq}\n18 p^2 + \\sigma(m+\\sigma)(m+2\\sigma) + 3 p q(m+2\\sigma) = 0\n\\end{equation}\nUsing the eigenvalue density (\\ref{evaldens}), the self-consistency\nconditions on $c$ and $d$ become\n\\begin{eqnarray}\n\\label{c-eq}\nc & = & \\frac{3p}{(m+2\\sigma)^2} + \\frac{\\sigma}{3p} \\\\\n\\label{d-eq}\nd & = & \\frac{m+4\\sigma}{(m+2\\sigma)^2} +\n\\left(\\frac{\\sigma}{3p}\\right)^2\n\\end{eqnarray}\n\nThe reader is reminded that the eigenvalue density (\\ref{evaldens})\nis for the $\\psi$ matrix. Since $\\sigma$ scales as $\\Delta^0$, for\nlarge $\\Delta$ the eigenvalue density for $\\phi$ in the cubic\napproximation is centered on the $q=\\infty$ value $y$ with\na displacement $\\sigma\/3p \\sim {\\cal O}(\\Delta^{-1})$. 
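For given couplings, the system (\\ref{sigma-eq})--(\\ref{d-eq}) is straightforward to handle numerically: (\\ref{sigma-eq}) is a cubic in $\\sigma$, of which one takes the real root closest to zero, and $c$ and $d$ then follow. A sketch with purely illustrative values of $p$, $m$, $q$ (in the full model these are of course determined by $g$, $a$ and, through the $\\Pi_p$'s, by $c$ and $d$ themselves):

```python
def sigma_eq(s, p, m, q):
    """Left-hand side of the cubic condition for sigma."""
    return 18 * p * p + s * (m + s) * (m + 2 * s) + 3 * p * q * (m + 2 * s)

def sigma_root(p, m, q, lo=-5.0, hi=5.0, n=1000):
    """Real root of the sigma equation closest to zero: grid scan + bisection."""
    xs = [lo + (hi - lo) * i / n for i in range(n + 1)]
    roots = []
    for a, b in zip(xs, xs[1:]):
        fa, fb = sigma_eq(a, p, m, q), sigma_eq(b, p, m, q)
        if fa * fb <= 0.0:
            for _ in range(100):
                mid = 0.5 * (a + b)
                if fa * sigma_eq(mid, p, m, q) <= 0.0:
                    b = mid
                else:
                    a, fa = mid, sigma_eq(mid, p, m, q)
            roots.append(0.5 * (a + b))
    return min(roots, key=abs)

p, m, q = 0.05, 1.0, 0.1   # illustrative values only
s = sigma_root(p, m, q)
c = 3 * p / (m + 2 * s) ** 2 + s / (3 * p)                # first moment
d = (m + 4 * s) / (m + 2 * s) ** 2 + (s / (3 * p)) ** 2   # second moment
```

For these values the cubic has three real roots and the branch closest to zero is small and negative; $d > c^2$, as it must be for moments of a genuine density.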
The width\nof the distribution is $\\sqrt{4\/(m+2\\sigma)} \\sim {\\cal O}\n(\\Delta^{-1\/2})$, as previously advertised.\n\n\\section{Critical behavior}\n\nThe cubic approximation to the Potts model is expected\nto have three critical phases: the phase of {\\it large\nclusters}, the phase of {\\it large trees} of clusters,\nand the phase of {\\it large galaxies} of clusters.\n\nFirst, the regime of {\\it large clusters}.\nThis is the magnetized, low temperature phase.\nAs we have an approximate form for the eigenvalue density\n(\\ref{evaldens}), we can plug it back into the full action\n(\\ref{artificial}) and look for the singularities of the\npotential $\\Pi(g,\\phi)$. The reader is reminded\n\\cite{Me3} that for $q=\\infty$, the eigenvalue density\nis a delta function at $y(g,a)$; the potential in that\ncase can easily be reduced to the ``standard'' form\n$\\phi^2\/2 + G \\phi^3$, with $G = (1 - 12 g y)^{-3\/4} g$.\nAt the critical point\n\\begin{equation}\n\\label{qinfcrit}\nG=G_c=1\/\\sqrt{108\\sqrt{3}}\n\\end{equation}\nthe area of the clusters (but not their number) diverges.\nThe same thing happens in the case of the spread out density\n(\\ref{evaldens}), though the computation is somewhat more\ndifficult: we are looking for the critical point of the\nexternal matrix model (\\ref{kgndef}).\nThe result can be found in \\cite{GrossNewman}.\nThe only dependence on the external matrix is through\nthe moments\n\\begin{equation}\n\\label{defmoments}\n\\sigma_k = \\int d\\lambda \\, u(\\lambda) (x - \\lambda)^{-k\/2}\n\\end{equation}\nwhere the variable $x$ satisfies the implicit equation\n\\begin{equation}\n\\label{eqgn}\nx = \\frac{1}{12g} - \\sqrt{3g} \\sigma_1(x)\n\\end{equation}\nFor the eigenvalue density, these moments can be\ncalculated explicitly (as functions of $x$):\n\\begin{eqnarray}\n\\lefteqn{\\sigma_k = \\left(x - y - \\frac{\\sigma}{3p}\\right)^{-k\/2}\n\\left[\\mbox{}_2 
F_1\\!\\left(\\frac{1}{2}+\\frac{k}{4},\\frac{k}{4};2;\n\\frac{4}{(1+2\\sigma) (x - \\sigma\/3p)^2}\n\\right)\\right.} \\\\\n& & \\left.\\qquad\\mbox{} + \\frac{3kp}{2(1+2\\sigma)^2\n(x - \\sigma\/3p)}\n\\,\\mbox{}_2 F_1\\!\\left(\\frac{1}{2}+\\frac{k}{4},1+\\frac{k}{4};3;\n\\frac{4}{(1+2\\sigma) (x - \\sigma\/3p)^2}\n\\right)\\right]\n\\nonumber\n\\end{eqnarray}\nThe criticality condition is that the\n$x$ equation (\\ref{eqgn}) ceases to have solutions.\nAlgebraically this can be expressed as\n\\begin{equation}\n\\label{stupidcrit}\n3 g \\sigma_3^2 = 4\n\\end{equation}\nIt can easily be checked that when $q,\\Delta\\to\\infty$,\nand consequently $\\sigma\/3p,4\/(m+2\\sigma) \\to 0$,\nthese new criticality conditions (\\ref{eqgn}) and\n(\\ref{stupidcrit}) reduce to (\\ref{qinfcrit}).\nIt will be seen in the phase diagrams that as\n$q$ approaches infinity, the curve of large clusters\nwill approach the corresponding $q=\\infty$ curve,\ngiven by solving eqs.\\ (\\ref{qinfcrit}) and \n(\\ref{qinfsaddle}) simultaneously.\n\nIn the large cluster phase matter is manifestly\nmagnetized, and is therefore decoupled from the\ngeometry; so this is a phase of pure gravity,\nwith $\\gamma_{\\rm str} = -\\frac{1}{2}$. 
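Evaluating the moments $\\sigma_k$ in practice only requires the Gauss hypergeometric series at small argument (the argument vanishes in the $q,\\Delta\\to\\infty$ limit), where it converges rapidly. A self-contained sketch of such an evaluator, checked against a case with a known closed form:

```python
import math

def hyp2f1(a, b, c, z, terms=200):
    """Gauss hypergeometric series 2F1(a,b;c;z), valid for |z| < 1."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        # ratio of successive terms: (a+n)(b+n) z / ((c+n)(n+1))
        term *= (a + n) * (b + n) / ((c + n) * (n + 1.0)) * z
    return total

# e.g. the first hypergeometric factor in sigma_k at k = 2: 2F1(1, 1/2; 2; z)
val = hyp2f1(1.0, 0.5, 2.0, 0.1)
```

The series can be validated against the elementary identity $\\mbox{}_2F_1(1,1;2;z) = -\\log(1-z)\/z$.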
Another way of saying the\nsame thing: in this phase the number of clusters\nis small, and few clusters is the same as one\ncluster, as far as critical behavior is concerned.\n\nSecond, the phase of {\\it large trees} of clusters.\nThere are, in principle, two ways of getting there.\nAt $q=\\infty$, the trees become large when the saddle\npoint equation for $y(g,a)$ (\\ref{qinfsaddle})\nceases to have a solution \\cite{Me3}; in other words, when\n\\begin{equation}\n\\label{qinfsaddlecrit}\na \\frac{\\partial^2}{\\partial y^2} \\Pi(g,y)\n= a \\left[\\Pi_2(g,y) + \\Pi_{11}(g,y)\\right] = 1\n\\end{equation}\nThe other way to get large trees is to make the\ntree of {\\it galaxies} (of clusters) blow up.\nThis is not exactly the same as a large tree of\nclusters at $q=\\infty$, but very nearly the same,\nas in general the galaxies at the vertices of this\ntree will be small; and as far as critical behavior\nis concerned, a small galaxy is no different from\na galaxy of size one, in other words, a single\ncluster. This intuition is supported by the fact\nthat both large tree phases have $\\gamma_{\\rm str} = +\\frac{1}{2}$;\nand as $q\\to\\infty$, the curves of the two phases\napproach each other.\n\nTo calculate the location of the second large tree\nphase, we can look for the point in the cubic model\n(\\ref{saddlepoint}) where the string susceptibility\n$\\partial^2 F\/\\partial p^2$ diverges \\cite{Indians}.\nEquivalently, and computationally simpler, we can\nlook for the divergence of the\n``pseudo-susceptibility''\\footnote{$d'$ or $\\sigma'$\nwould do just as well.}\n\\begin{equation}\n\\label{defpseudo}\nc' \\equiv \\frac{\\partial c}{\\partial p} \\to \\infty\n\\end{equation}\nThis is most simply done by differentiating the\neqs.\\ (\\ref{sigma-eq}--\\ref{d-eq}) with respect to\n$p$, and solving the resulting linear system for $c'$,\n$d'$, and $\\sigma'$.\nAs the result is unwieldy, and in any case can easily\nbe derived, it will not be given here. 
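The divergence (\\ref{defpseudo}) can also be probed numerically without writing out the linear system, by solving for $\\sigma(p)$ and hence $c(p)$ at neighbouring values of $p$ and differencing. In the sketch below $m$ and $q$ are frozen for illustration only; in the cubic model they depend on $c$ and $d$ through the $\\Pi_p$'s, which is what makes the honest computation a coupled linear system:

```python
def sigma_eq(s, p, m, q):
    return 18 * p * p + s * (m + s) * (m + 2 * s) + 3 * p * q * (m + 2 * s)

def sigma_root(p, m, q, lo=-5.0, hi=5.0, n=2000):
    """Real root of the sigma equation closest to zero."""
    xs = [lo + (hi - lo) * i / n for i in range(n + 1)]
    roots = []
    for a, b in zip(xs, xs[1:]):
        fa, fb = sigma_eq(a, p, m, q), sigma_eq(b, p, m, q)
        if fa * fb <= 0.0:
            for _ in range(100):
                mid = 0.5 * (a + b)
                if fa * sigma_eq(mid, p, m, q) <= 0.0:
                    b = mid
                else:
                    a, fa = mid, sigma_eq(mid, p, m, q)
            roots.append(0.5 * (a + b))
    return min(roots, key=abs)

def c_of_p(p, m=1.0, q=0.1):
    s = sigma_root(p, m, q)
    return 3 * p / (m + 2 * s) ** 2 + s / (3 * p)

# central-difference estimate of c' = dc/dp at an illustrative point;
# the large tree phase is signalled by this quantity blowing up
p0, eps = 0.05, 1e-5
cp = (c_of_p(p0 + eps) - c_of_p(p0 - eps)) / (2 * eps)
```

Away from criticality the estimate is stable under halving the step, which is the practical check that one is not yet at the large tree transition.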
The same procedure,\nif applied to the simple quartic model in \\cite{Indians},\nreproduces the result for the large tree phase given in that\nwork.\n\nIn the large tree phase one has an ensemble of clusters\nof different size. As one moves toward the magnetized\nphase, clusters of higher and higher area begin to be\nincluded. At the multicritical point, some\nmoments of the cluster area diverge, a sign \nof a continuous spin ordering phase\ntransition.\\footnote{For the Ising model\non a fixed two-dimensional lattice, for example, the\nsecond moment of the cluster size distribution is the\nlowest one to diverge \\cite{Fisher}.} The susceptibility\nexponent in the bulk of the tree phase is\n$\\gamma_{\\rm str} = \\frac{1}{2}$, the generic exponent for tree growth.\nIn the tree phase, and only in the tree phase, the\nclusters are probably identical to the geometric structures\nknown as ``baby universes'' \\cite{MonteCarlo}.\nTherefore the transition to large trees from high\ntemperature can be interpreted as a magnetization\nof baby universes (but not of the whole surface);\nand the transition to trees from low temperature can\nbe interpreted as a break-up of the magnetized surface\ninto magnetized (but in different directions) baby\nuniverses \\cite{Teaser}.\n\nFinally, the cubic model should also have a phase\nof {\\it large galaxies}.\nThis occurs at the normal, BIPZ, critical point \nof the model (\\ref{saddlepoint}). It is simply given\nby\n\\begin{equation}\n\\label{largegalaxies}\n\\Gamma = \\frac{p}{(m - 12 p q)^{3\/4}} = \\Gamma_c =\n\\frac{1}{\\sqrt{108 \\sqrt{3}}}\n\\end{equation}\nThis phase is radically different from anything that\nexists at $q=\\infty$; it is in this phase that loops\nbecome important. 
Interestingly, it bears some resemblance\nto what {\\it must} be the case for small $q$: the spin\nclusters at high temperature do not form trees,\nbut complicated planar networks with many loops.\n\nThe large planar networks of clusters in the galaxy\nphase resemble the configurations of a low $q$ model\nat high temperature. As we go to higher temperature,\nmoreover, the clusters get smaller, as matter fluctuations\nare once again decoupled from the geometry. The large\ngalaxy phase is therefore pure gravity, once again.\nThe susceptibility exponent is $\\gamma_{\\rm str} = -\\frac{1}{2}$, the\nstandard pure gravity value for nonlinear models in\nthis phase.\n\nWe are now ready to study the complicated\ntranscendental equations of the cubic approximation.\nThe procedure is as follows. First, fix the three\nadjustable parameters: $\\Delta = q-1$, $g$, and $a$.\nThen solve the saddlepoint equation (\\ref{qinfsaddle})\nfor $y(g,a)$; this fixes all coefficients in the\ncubic action (\\ref{artfull}). Next, solve the\ncubic equation (\\ref{c-eq}) for $c$ as a function\nof $\\sigma$, plug into (\\ref{sigma-eq}) and solve\nnumerically for $\\sigma$, taking the root closest\nto zero. Finally, check the criticality conditions:\n(\\ref{eqgn}) and (\\ref{stupidcrit}) for the phase\nof large clusters; (\\ref{defpseudo}) for large\ntrees; or (\\ref{largegalaxies}) for large galaxies.\n\nThe following is a schematic, not-to-scale sketch\nof the resulting critical curves.\n$$\n\\epsfxsize=12cm\\epsfbox{approx.eps}\n$$\nIf drawn to scale, the curves are rather close to\neach other and hard to distinguish by eye.\nAs $q\\to\\infty$, the curves reassuringly approach\nthe exact $q=\\infty$ critical curve, where the\ntransition from magnetized to tree phases occurs\nat $g = 1\/\\sqrt{288}, a = 1$. As we move\naway from $q=\\infty$, the new critical regime of\nlarge galaxies appears. 
As $q$ gets lower, the\nphases of large galaxies and large clusters\napproach each other, eating away at the mean field,\nlarge tree phase.\n\nThe susceptibility exponent $\\gamma_{\\rm str}$ is $-\\frac{1}{2}$ in the\nmagnetized and galaxy phases, as discussed above,\nand $+\\frac{1}{2}$ in the tree phase. At the two multicritical\npoints $\\gamma_{\\rm str} = \\frac{1}{3}$. Further, we find $\\alpha=-1$\nat both multicritical points, as shown in Appendix C.\nThe equality of the exponents at the two multicritical\npoints is not a coincidence: the model at one of the\npoints can be mapped onto the model at the other---see\nthe appendix for details.\n\nAt $q=q_c$ ($q_c \\approx 133$ for the natural model,\nand $q_c \\approx 108$ for the artificial model), the\nphase of large trees ceases to exist. Thereafter,\nthe magnetized phase of large clusters goes\ndirectly into a phase of large galaxies.\nThe mean field behavior is entirely eliminated.\nLet $g_{c1}(q)$ be the multicritical point between\nphases of large galaxies and large trees, and\n$g_{c2}(q)$ the multicritical point between \nthe phases of large trees and large clusters.\n$\\Delta g_c = g_{c2} - g_{c1}$ is a measure of\n``how much of mean field theory is left'' at a\ngiven value of $q$. Here it is plotted for both\nthe natural and the artificial models.\n\\epsfxsize=12cm\\epsfbox{qplot2.eps}\n\nFor $q \\le q_c$, the two pure gravity phases\n(magnetized and large galaxies) are joined directly\nonto each other, with a multicritical point in\nthe middle. This phase diagram bears a striking\nresemblance to the known phase diagram for, say,\n$q=2$ \\cite{KazakovIsing}.\n\n\\section{Discussion}\n\nThe results obtained for $q_c$ cannot be considered\nas anything more than order-of-magnitude estimates.\nThe real result of this calculation is the discovery of\nthe mechanism for the break-down\nof mean field theory: the divergence\nof galaxies. 
The resulting critical behavior interpolates\nsmoothly between the somewhat odd phase diagram at $q=c=\\infty$\nand the well-known phase diagram for $q \\le 4, c \\le 1$.\nAnother result of this calculation is the likelihood\nthat $q_c$ is {\\it finite}, i.e.\\ that mean field behavior\nholds for some finite Potts\nmodels (with modifications, as has been shown).\n\nThe two natural questions at this stage are: is the\npresent result merely an artifact of the truncation of the\naction (\\ref{artfull}), or are the higher-order terms\nessential? And does a similar picture hold for conformally\nsymmetric models, which, after all, reduce to the same\nmean field theory at $c=\\infty$?\n\nThe first question should not be too hard to answer.\nIt seems that we would not find any qualitatively new\nbehavior by including the higher-order terms, which\nwould simply add other types of vertices to the galaxies\nbut would not modify the essential structure.\nMean field theory should always be destroyed by the\ndivergence of the galaxies.\nIt is true that quartic or higher models will add\nnew multicritical points; but as we only have the three\nparameters $q, g,$ and $a$ to tune, we should not be\nable to reach them.\nTo repeat the present calculation in the quartic approximation\nshould be somewhat tedious, but not too difficult.\n\nThe second question is more difficult to answer, and the\nauthor does not have any suggestions on how to deal with\nit.\nThe immediate difficulty is that, although effective\ncluster models such as (\\ref{natural}) and (\\ref{artificial})\ncan be introduced for any matter model on the surface,\nthey are multi-matrix rather than one-matrix models.\nStill, they closely resemble the $c=\\infty$ model, so\nperhaps they are the right place to start.\n\n\\vspace{1cm}\nThe author thanks the Niels Bohr Institute for\nits hospitality, and Jan Ambj\\o rn and Gudmar Thorleifsson\nfor stimulating discussions. 
Marc Potters provided many\nof the ideas in the early stage of this work.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{section:intro}\nOur current working hypothesis is that the dominant force acting on large scales in the universe is gravity, and that gravity is accurately described by General Relativity (GR). In practice, we often focus on cosmological systems that are smaller than the Hubble scale; this permits us to make a set of simplifications referred to as the quasi-static (QS) limit. \n\n The QS regime corresponds to a window of lengthscales that are considerably smaller than the cosmological horizon, such that ${\\cal H}\/k\\ll1$, but sufficiently large such that linear perturbation theory is still valid.\nIn concrete figures, a reasonable estimate would be distances less than $600h^{-1}$ Mpc but greater than $10-20h^{-1}$ Mpc. The usual argument is that within this window of lengthscales the time derivatives of metric potentials are significantly smaller than their spatial derivatives. In GR this statement is a natural consequence of the subhorizon condition (${\\cal H}\/k\\ll1$), since the linear gravitational potentials evolve on the Hubble timescale. In practical terms, implementing conditions such as \\mbox{$|\\ddot\\Phi|\\ll |\\nabla^2\\Phi|$} makes the linearised field equations easy to work with.\n\nWhen we go over to a modified theory of gravity, the situation is less clear-cut. On one hand we can reason that any gravity theory consistent with current observations must behave in a manner very similar to $\\Lambda$CDM, so we expect our quasistatic limit to be preserved. On the other hand, when we modify GR we naturally introduce new dynamical degrees of freedom (hereafter d.o.f.) which might have evolutionary timescales different from ${\\cal H}$. In this paper we will assume that the new d.o.f. 
are sufficiently subdominant that a QS limit still exists for most of the history of the universe.\n\nThe largest distances that can be probed by current and next-generation galaxy surveys fall predominantly within the QS regime. To use these experiments to test the laws of gravity, we need to understand the typical behaviour of non-GR theories in the QS limit. There is a long-standing intuition that modified gravity theories generically lead to a novel dependence of observables on the length scale at which they are measured, e.g. the density-weighted growth rate, $f\\sigma_8(z)$, becomes a function of wavenumber $k$. Work has already begun to search for such signatures \\cite{Johnson2014}.\n\nThe goal of this paper is to make concrete these intuitions. What are the implications of a (non-)detection of scale-dependence for the host of gravity theories in the current literature \\cite{Clifton2012}? What are the most theoretically-motivated parameters for observers to measure? We present three main results:\n\n\\vspace{2mm}\n\\noindent 1) Considering a frequently-used parameterisation of the linearised gravitational field equations, we show that only a few key physical principles are needed to derive the fixed scale-dependence of many gravity theories in the QS regime. Our results apply to any theory with second-order equations of motion and one new spin-0 degree of freedom, which does \\textit{not} have to be a scalar field. This generalises the results of \\cite{defelice2011, Unobservables2013, Gleyzes2013, Bloomfield2013} beyond single-scalar field theories (see \\cite{Silvestri2013} for related ideas). The derivation is compact and does not require knowledge of a gravitational action -- hence its generality.\n\\vspace{2mm}\n\n\\noindent 2) We show that the detectability of this non-GR scale-dependence hinges crucially on the existence of new physical quantities (characteristic masses, lengths, etc.) that are generically introduced when modifying GR. 
If all such parameters are tuned to be comparable to the Hubble scale, it is highly unlikely that any novel scale-dependence of observables will be detectable in the QS regime.\n\\vspace{2mm}\n\n\\noindent 3) Turning these ideas around, we isolate the leading-order scale-dependent terms and estimate the constraints placed upon them by next-generation cosmological experiments. The headline results are displayed in Fig.~\\ref{fig:ellipses}.\n\nThe structure of this paper is as follows: in \\S\\ref{section:set-up} we discuss how characteristic physical quantities enter most popular theories of gravity. In \\S\\ref{section:derivation} we derive the result described in 1) above. In \\S\\ref{section:mass_scales} we discuss the implications of a (non-)detection of scale dependence, i.e. point 2) above. We also isolate the leading-order contribution to scale-dependence; in \\S\\ref{section:forecasts} we carry out example forecasts for future constraints on this leading-order term (point 3) above). We conclude in \\S\\ref{section:conclusions}. Some technical details are relegated to the appendices.\n\n\n\\section{New Scales \\& Set-up}\n\\label{section:set-up}\nNew physical scales are a near-universal feature of alternative gravity theories. This is no surprise: the success of GR in describing the Solar System generally forces us to introduce a `transition scale' into gravitational physics, positing that gravity reduces to GR on one side of this transition scale, but receives modifications on the other side. \n\nLet us elaborate with some examples. Probably the most familiar example of a new scale arises in scalar-tensor theories, where a mass scale emerges from second derivatives of the potential, $V(\\phi)_{,\\phi\\phi}$. In $f(R)$ gravity the new scale is more often thought of as a Compton wavelength for the scalaron, but is similarly derived from derivatives of an effective potential (a function of $f_{,RR}$) \\cite{Pogosian2008}. 
New scales can arise in a different way when there is non-trivial coupling between the matter energy-momentum tensor and the scalar degree of freedom, e.g. in theories which display chameleon screening, the transition scale is marked by a potential well depth, $|\\Phi|\\sim 10^{-6}$ \\cite{Davis2012,Jain2013}.\n\nVector-tensor theories are often endowed with an energy (mass) scale at which violations of Lorentz invariance become manifest. This new scale can be an explicit parameter in the Lagrangian of the theory (as, for example, in Horava-Lifschitz gravity \\cite{Horava2009, Sotiriou_HL_review}), or it can arise implicitly via spontaneous Lorentz violation at the level of the field equations (e.g. in the effective field theory of a vector coupled to gravity \\cite{Gripaios2007}).\n\nIt is well known that current bimetric theories not only have an explicit mass scale (the mass of the graviton), but also a system-dependent length scale, the Vainshtein radius, which signals the onset of screening \\cite{Babichev2013}. \nSimilarly, higher-dimensional theories such as DGP \\cite{Deffayet2002} and other braneworld models can have both an explicit scale such as a warp factor or crossover scale, as well as Vainshtein radii.\n\nFinally, there has been recent interest in non-local theories \\cite{Maggiore22014, Dirian2014, Maggiore2014, Foffa2014} containing Lagrangian terms such as $R\\,\\square^{-2}R$. The solutions for $\\square^{-1}R$ involve integrals over Green's functions for the $\\square^{-1}$ operator, $G(x,x^\\prime)$. This naturally suggests a characteristic scale between the spacetime points $x$ and $x^\\prime$ over which non-local interactions occur.\n\nIn essence, new transition scales can be dependent on a variety of physical quantities such as energy, ambient density, acceleration, potential, etc. Throughout this paper we will be agnostic about the origin of any new transition scale. 
In many gravity theories the new scale(s) are tuned to be of order the Hubble scale today, in the hope that they might replace the cosmological constant as the driver for accelerated expansion. In other theories, however, an effective cosmological constant is included in the theory (sometimes explicitly, sometimes via a `back door') \\cite{Faulkner2007,HuSawicki}. Then, since cosmic acceleration is already taken care of, new mass or length scales inherent in the theory may take a much wider range of values. \n\n\nLet us define a wavenumber $k_{\\mathrm{gal}}$ that represents the largest perturbation mode that can be reasonably well-measured by next-generation cosmological surveys. Let us also introduce a mass scale $M$ that represents the transition scale accompanying some generic modifications to gravity; in what follows we will also frequently interpret $M$ as an inverse lengthscale. (Note that we can extract a mass or length scale from any of the physical quantities discussed above by using appropriate factors of $c,\\,\\hbar$ etc., and taking appropriate powers).\n\nWe can then envisage three scenarios:\n\\begin{enumerate}[a)]\n\\item $M\\sim {\\cal H}\\Rightarrow M\\ll k_{\\mathrm{gal}}$ in the QS regime. \n\\item ${\\cal H} \\ll M\\lesssim k_{\\mathrm{gal}}$. \n\\item $k_{\\mathrm{gal}}\\ll M$. \n\\end{enumerate} \nIn this paper we will treat situations a) and b). We will work with one new dynamical degree of freedom, denoting its perturbations by $\\chi$. $\\chi$ does \\textit{not} have to be a scalar field, but could instead be a spin-0 perturbation of a new vector or tensor field, a new d.o.f. excited in the metric (i.e. a metric d.o.f. which is non-dynamical in GR), a Stuckelberg field, or several other possibilities \\cite{PPF2013}. \n\nFor the purposes of this paper we are only interested in the relative orders of magnitude of terms, not in precise factors of order unity. 
We can therefore write time derivatives of $\\chi$ as $\\dot\\chi={\\Gamma_\\chi}\\chi$, where ${\\Gamma_\\chi}$ is the evolutionary timescale of the new d.o.f. perturbation. There are two possibilities for this timescale: it could either be approximately the Hubble timescale (like for the metric potentials), or it could be a new, shorter timescale determined by $M$ (note that with appropriate factors of $c$ a mass is dimensionally equivalent to an inverse timescale). For now we will maintain generality, but later on we will see that setting ${\\Gamma_\\chi}\\sim {\\cal H}$ or ${\\Gamma_\\chi}\\sim M$ can have different consequences.\n\nWe will not consider situation c), because in this scenario the existence of a quasistatic limit becomes questionable. If $M$ is very large and ${\\Gamma_\\chi}\\sim M$, then $\\chi$ is very rapidly evolving, violating quasistaticity. Furthermore, the novel effects that occur at wavelengths close to the lengthscale $M^{-1}$ are likely to occur inside the nonlinear regime, which is beyond the scope of this paper. \n\nConsideration of the Friedmann equation suggests that the background (zeroth-order) values of any new fields present are constrained to evolve on Hubble timescales. Using the example of a scalar field to illustrate, we mean that $\\dot{\\phi}^2\\sim {\\cal H}^2\\phi^2$. Now one might well argue that if a field evolves like ${\\cal H}$ on the background, its perturbations must evolve like ${\\cal H}$ too, that is, only the case ${\\Gamma_\\chi}\\sim{\\cal H}$ is of interest. In \\S\\ref{section:mass_scales} we will argue that if this is true, it seems unlikely that the scale-dependent properties of modified gravity will be measurable any time soon. \n\n\\section{Derivation}\n\\label{section:derivation}\nTo obtain result 1) of \\S\\ref{section:intro}, we first need to take a step back to the gravitational field equations. 
A linear combination of two components of the tensor field equations gives the gravitational Poisson equation, whilst the transverse spatial component gives the `slip' relation (shown here at late times; our conventions for the metric potentials are given in Appendix~\\ref{app:derivs}):\n\\begin{align}\n-k^2\\Phi&=4\\pi G\\mu(a,k) a^2{\\bar \\rho}_m \\Delta_m \\label{eqNP}\\\\\n\\Phi&=\\gamma(a,k) \\Psi \\label{gamma}\n\\end{align}\nwhere $\\bar{\\rho}_m$ is the mean matter density, $\\Delta_m$ is the gauge-invariant density contrast, and a sum over all matter species is implied.\n\nIn the expressions above we have introduced two functions, $\\mu(a,k)$ and $\\gamma(a,k)$, that have been used extensively as a parameterisation of modified field equations in the QS regime \\cite{BZ2008, Bean2010, Hojjati2013, Simpson2013, Motta2013}. This parameterisation is convenient for theoretical work because it parameterises the `raw' gravitational field equations obtained directly from the action. However, a slightly different parameterisation (`$\\left\\{\\tilde{\\mu},\\,\\Sigma\\right\\}$' -- see eq.(\\ref{convert_params})) is preferred for data analyses, because it leads to minimal parameter degeneracy when combining redshift-space distortions and weak lensing surveys. For this reason we will present our theoretical results in terms of $\\left\\{\\mu,\\,\\gamma\\right\\}$, then rotate to the basis $\\left\\{\\tilde\\mu,\\,\\Sigma\\right\\}$ for the forecasts in \\S\\ref{section:forecasts}.\n\nTo manipulate a particular gravity theory into the form of eqs.(\\ref{eqNP}) and (\\ref{gamma}), one begins with the full (i.e. unparameterised) Poisson equation and slip relation of the theory, and the linearly perturbed equation of motion for the new d.o.f. 
The steps are as follows:\n\\begin{enumerate}\n\\item Apply the quasistatic approximation to the metric terms in the three equations listed above, that is, drop terms containing $\\ddot\\Phi,\\,\\dot\\Phi,\\,\\ddot\\Psi$ and $\\dot\\Psi$. Of the remaining terms, discard those with prefactors that evolve on Hubble timescales, i.e. drop ${\\cal H}^2\\Phi$, but keep $\\nabla^2\\Phi$. Time derivatives of $\\chi$ should be replaced by $\\dot\\chi\\approx{\\Gamma_\\chi}\\chi$ and $\\ddot\\chi\\approx{\\Gamma_\\chi}^2\\chi$ but not discarded.\n\\item Take two linear combinations of the unparameterised slip equation and the equation of motion for the d.o.f.: one combination that eliminates $\\chi$, and one that eliminates $\\Psi$. The form of $\\gamma(a,k)$ can be read off from the first linear combination. \n\\item Substitute the ratios $\\Psi\/\\Phi$ and $\\chi\/\\Phi$ obtained in the previous step into the right-hand side of the Poisson equation, so that it is written purely in terms of $\\Phi$ (plus the usual GR term in $\\Delta$). Rearrange this equation into the form of eq.(\\ref{eqNP}) and read off $\\mu(a,k)$.\n\\end{enumerate}\n\nWe wish to avoid laboriously carrying out these steps for many individual gravity theories. So instead we will apply this procedure to a set of `template' field equations that reflect the structure of real theories. A similar derivation was first presented in \\cite{Silvestri2013}; the addition we make is the explicit consideration of a new mass scale, $M$, as discussed in \\S\\ref{section:set-up}.\n\nWe will write down these templates in the conformal Newtonian gauge. To maintain transparent correspondence with the usual linearised Einstein equations we will not use an explicitly gauge-invariant combination of variables to represent the new d.o.f. 
However, we will use the fact that the equations must ultimately have a gauge-invariant formulation to guide the construction of our templates.\n\n For example, the usual gauge-invariant Bardeen potentials $\\hat\\Phi$ and $\\hat\\Psi$ contain first- and second-order time derivatives respectively. This means that $\\hat\\Psi$ can only appear in the Poisson equation as part of the combination $\\dot{\\hat\\Phi}+{\\cal H}\\hat\\Psi$ (in which the second time-derivatives cancel out -- see Appendix~\\ref{app:derivs} or \\cite{PPF2013}) to avoid converting the Poisson equation from a constraint into a dynamical equation.\n\nAn example will help to clarify this point and allow us to introduce some notation. For the case where the dimensionless new d.o.f. $\\chi$ is a scalar field or a fluid energy density, the Poisson equation has the form:\n\\begin{align}\n-2k^2\\Phi&=8\\pi G a^2\\bar{\\rho}_m\\Delta_m+\\Phi\\left(h_1k^2+h_2\\left[{\\cal H}^2,M^2,{\\cal H} M\\right]\\right)\\nonumber\\\\\n&+h_3\\left[{\\cal H},M\\right]\\dot\\Phi+m_2\\left[{\\cal H}^2,M^2,{\\cal H} M\\right]\\Psi\\label{Poissonscalar}\\\\\n&+\\chi\\left(g_1k^2+g_2\\left[{\\cal H}^2,M^2,{\\cal H} M\\right]\\right)+g_3\\left[{\\cal H},M\\right]\\dot\\chi\\nonumber\n\\end{align}\nwhere $M$ is the potential new mass scale. Throughout this paper we will use notation like $\\left[{\\cal H}^2, M^2, {\\cal H} M \\right]$ to indicate a function of time which has dimensions of mass-squared. Terms appearing in this function can have three possible orders of magnitude: ${\\cal H}^2$, $M^2$ or ${\\cal H} M$. The numerical coefficients accompanying these order-of-magnitude terms are unimportant for our purposes. We will use the dimensionless order-unity coefficients $h_i$, $g_i$ etc. simply as a convenient way to refer to individual terms. 
In complete analogy, $\\left[{\\cal H},M\\right]$ denotes a time-dependent function with the dimension of mass, which can have two possible orders of magnitude: $\\sim{\\cal H}$ or $\\sim M$.\n\nNote that $\\Psi$ appears up to one derivative order lower than $\\Phi$ in eq.(\\ref{Poissonscalar}). This is due to the aforementioned requirement that it must be possible to `repackage' these terms into the combination \\mbox{$\\alpha\\dot\\Phi+\\beta(\\dot\\Phi+{\\cal H}\\Psi)$}, where $\\alpha$ and $\\beta$ are numerical coefficients.\n \nCarrying out step 1 of our procedure, eq.(\\ref{Poissonscalar}) becomes:\n \\begin{align}\n-2k^2\\Phi&=8\\pi G a^2\\bar{\\rho}_m\\Delta_m+\\Phi\\left[h_1k^2+h_2M^2\\right]+\\Psi\\left[m_2M^2\\right]\\nonumber\\\\\n&+\\chi\\left[g_1k^2+g_2M^2+g_3M{\\Gamma_\\chi}\\right]\\label{QSPoissonscalar}\n\\end{align}\n\nFor brevity we will not write here the non-quasistatic templates for the slip relation and equation of motion for this scalar field\/fluid example; they are given in eqs.(\\ref{eomscalar}) and (\\ref{slipscalar}). 
We move straight to their QS limits, which are:\n\\begin{align}\n\\chi\\Big[d_1 {\\Gamma_\\chi}^2&+d_2 M{\\Gamma_\\chi}+d_3 M^2+d_4 k^2\\Big]\\label{QSeomscalar}\\\\\n&+\\Phi\\left[b_2 M^2+b_3k^2\\right] +\\Psi\\left[c_2 M^2+c_3k^2\\right]=0\\nonumber\\\\\n&\\Phi-\\Psi=e_0\\Phi+j_0\\Psi+f_0\\chi& \\label{QSslipscalar}\n\\end{align}\n\n\\noindent Carrying out steps 2 and 3 described above, we obtain the following forms for $\\mu$ and $\\gamma$:\n\\begin{align}\n\\gamma&=\\frac{p_1+p_2\\frac{M^2}{k^2}+p_3\\frac{{\\Gamma_\\chi} M}{k^2}+p_4\\frac{{\\Gamma_\\chi}^2}{k^2}}{q_1+q_2\\frac{M^2}{k^2}+q_3\\frac{{\\Gamma_\\chi} M}{k^2}+q_4\\frac{{\\Gamma_\\chi}^2}{k^2}}\\label{gamscalar}\\\\\n\\mu&=\\left[p_1+p_2\\frac{M^2}{k^2}+p_3\\frac{{\\Gamma_\\chi} M}{k^2}+p_4\\frac{{\\Gamma_\\chi}^2}{k^2}\\right]\\times\\nonumber\\\\\n\\Big[t_1&+t_2\\frac{M^2}{k^2}+t_3\\frac{{\\Gamma_\\chi} M}{k^2}+t_4\\frac{{\\Gamma_\\chi}^2}{k^2}+t_5\\frac{M^4}{k^4}+t_6\\frac{{\\Gamma_\\chi} M^3}{k^4}\\nonumber\\\\\n&+t_7\\frac{{\\Gamma_\\chi}^2M^2}{k^4}\\Big]^{-1}\\label{muscalar}\n\\end{align}\nwhere the $p_i,\\,q_i$ and $t_i$ are simple algebraic combinations of the order-unity coefficients in eqs.(\\ref{QSPoissonscalar})-(\\ref{QSslipscalar}). Their precise forms are given in Table~\\ref{tab:scalar} in Appendix~\\ref{app:derivs}.\n\nWe immediately recognise that eqs.(\\ref{gamscalar}) and (\\ref{muscalar}) subsume some results already known for scalar field theories \\cite{defelice2011, Unobservables2013, Silvestri2013, Gleyzes2013, Bloomfield2013}, but note that we have not needed to use the complex form of the Horndeski Lagrangian \\cite{Horndeski, Deffayet2011,Gleyzes2014} to obtain them here. 
For example, we see that $\\mu$ and $\\gamma$ share the same numerator; an equivalent result was proved in \\cite{Silvestri2013, Gleyzes2013} (note that other authors use a slightly different set of parameterisation variables to ours, equivalent to the set $\\left\\{\\tilde{\\mu},\\,\\gamma\\right\\}$, where $\\tilde\\mu$ is defined in eq.(\\ref{convert_params})).\n \nOur expression for $\\mu$ contains three terms -- $t_5$, $t_6$ and $t_7$ -- which have not been included in works focused on Horndeski theories. In all Horndeski-type theories we have seen investigated, this $k^{-4}$ dependence of $\\mu$ and $\\gamma$ is not present, because the terms represented by $h_2$, $m_2$, $g_2$ and $g_3$ in eq.(\\ref{QSPoissonscalar}) do not feature in the QS limit of their field equations \\footnote{Without the presence of a new mass scale, these terms are of magnitude $\\sim {\\cal H}^2$ and hence are discarded when the QS limit is taken.}. However, we will leave these terms in our general expressions because they could exist in some as-yet-undiscovered gravity theory. It is in the spirit of this paper to remain as agnostic as possible. \n\nWe stress that these forms for $\\mu$ and $\\gamma$ have been derived using only a few basic principles, such as gauge-invariance and restriction to second-order equations of motion. We have used neither a model-specific action nor a general EFT-inspired one. In fact, if we repeat steps 1-3 for the case where the new d.o.f. is the spatial spin-0 perturbation of a timelike vector field (i.e. $|\\chi|\\sim |k|V$), such as occurs in Einstein-Aether theories \\cite{Jacobson2008,Zlosnik2008}, we reach exactly the same form as eqs.(\\ref{gamscalar}) and (\\ref{muscalar}). The derivation is given in Appendix~\\ref{app:derivs}, and only differs from the one shown here in small details. 
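As a concrete illustration of steps 2-3, the following sketch carries out the elimination numerically for the QS templates, eqs.(\\ref{QSPoissonscalar})-(\\ref{QSslipscalar}). We stress that the coefficient values fed in below are arbitrary toy numbers, not derived from any particular theory; the sketch only mirrors the algebraic structure of the templates.

```python
# Numerical sketch of steps 2-3: eliminate chi and Psi from the QS template
# equations and read off gamma and mu. All coefficient values are
# illustrative toy numbers, not taken from any specific gravity theory.

def gamma_mu(k, M, Gamma, c):
    """Return (gamma, mu) from the QS templates for one coefficient set c."""
    D = c["d1"]*Gamma**2 + c["d2"]*M*Gamma + c["d3"]*M**2 + c["d4"]*k**2
    B = c["b2"]*M**2 + c["b3"]*k**2          # Phi bracket in the e.o.m.
    C = c["c2"]*M**2 + c["c3"]*k**2          # Psi bracket in the e.o.m.

    # The e.o.m. gives chi = -(B*Phi + C*Psi)/D; substituting this into
    # the slip relation and solving for Phi/Psi yields gamma:
    gamma = ((1 + c["j0"])*D - c["f0"]*C) / ((1 - c["e0"])*D + c["f0"]*B)

    # Set Phi = 1, so Psi = 1/gamma and chi follows from the e.o.m.;
    # substituting into the QS Poisson equation and comparing with
    # -k^2 Phi = 4 pi G mu a^2 rho Delta gives mu:
    Psi = 1.0 / gamma
    chi = -(B + C*Psi) / D
    extra = ((c["h1"]*k**2 + c["h2"]*M**2)
             + c["m2"]*M**2*Psi
             + (c["g1"]*k**2 + c["g2"]*M**2 + c["g3"]*M*Gamma)*chi)
    mu = 1.0 / (1.0 + extra/(2*k**2))
    return gamma, mu

# GR limit: all modification coefficients switched off (d4 kept non-zero
# so that the e.o.m. bracket D remains invertible).
gr = dict(h1=0, h2=0, m2=0, g1=0, g2=0, g3=0,
          d1=0, d2=0, d3=0, d4=1, b2=0, b3=0, c2=0, c3=0,
          e0=0, j0=0, f0=0)
g0, m0 = gamma_mu(k=0.1, M=0.01, Gamma=0.01, c=gr)
print(g0, m0)   # both equal 1 in the GR limit
```

In the GR limit the routine returns $\\gamma=\\mu=1$, as it must; switching on, for instance, the slip coupling $f_0$ together with $b_3$ and $c_3$ reproduces the rational $k$-dependence of eqs.(\\ref{gamscalar}) and (\\ref{muscalar}).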
\n\nNote also that the authors of \\cite{Solomon2014,Koennig} have recently derived expressions equivalent to $\\mu$ and $\\gamma$ in a particular bigravity model, and found them to have a form contained by eqs.(\\ref{gamscalar}) and (\\ref{muscalar}) -- see eqs.(85) and (86) of \\cite{Koennig}. This is not unexpected, since bigravity -- despite its complex field equations (not fully encapsulated by eqs.~\\ref{QSPoissonscalar}-\\ref{QSslipscalar}) -- ultimately has the same physical features as the cases treated explicitly here, i.e. it introduces only one new spin-0 degree of freedom and respects gauge-invariance and locality. If the current stability issues surrounding generic bigravity models \\cite{Comelli2014} can be resolved, then eqs.(\\ref{gamscalar}) and (\\ref{muscalar}) can be used as a universal parameterisation for virtually \\textit{all} theories with second-order equations of motion and a single d.o.f. of any type. \n\n\n \\section{Regimes of interest} \n \\label{section:mass_scales}\nAs discussed in \\S\\ref{section:set-up}, there are essentially two choices for the scale $M$ that we have introduced. Either we can set $M$ to be of order the Hubble scale, or we can posit a new scale, $M\\gg{\\cal H}$, which marks the transition from the GR limit to some larger theory of gravity. This choice governs whether it makes sense to invest effort searching for scale-dependent signatures of modified gravity \\cite{Johnson2014}.\n \\vspace{5mm}\n \n\\noindent \\textit{Case a).} Many gravity theories explicitly tune their new scale(s) to be of order \\mbox{$H_0\\sim10^{-33}$} eV in order to produce a viable expansion history. For example, in recent bigravity models the graviton mass is taken to be of order the present Hubble scale. In this case there is no choice other than setting $M={\\Gamma_\\chi}={\\cal H}$. 
If we carry out the full QS approximation, \\textit{all} scale-dependent terms in eqs.(\\ref{gamscalar}) and (\\ref{muscalar}) must be discarded and we are left with only the simple time-dependent expressions. Scale-dependence in $\\mu$ and $\\gamma$ only arises if we consider the first-order corrections in $({\\cal H}\/k)^2$ to the QS approximation, which leads to \\footnote{In case a) the precise forms of the $p_i$, $q_i$ and $t_i$ differ in very small details from those given in Table~\\ref{tab:scalar}, e.g. factors of two. Given the approximate nature of this work it is not necessary to display another complete set of tables with the modified coefficients.}:\n \\begin{align}\n \\gamma(a)&=\\frac{p_1+\\left(p_2+p_3+p_4\\right)\\frac{{\\cal H}^2}{k^2}}{q_1+\\left(q_2+q_3+q_4\\right)\\frac{{\\cal H}^2}{k^2}} \\\\\n \\mu(a)&=\\frac{p_1+\\left(p_2+p_3+p_4\\right)\\frac{{\\cal H}^2}{k^2}}{t_1+\\left(t_2+t_3+t_4\\right)\\frac{{\\cal H}^2}{k^2}}\n \\end{align}\n Expressions of precisely this form have been worked out explicitly for numerous theories \\cite{defelice2011, Solomon2014, Koennig}.\n \nIn this scenario any scale-dependence of observables will be very weak. Until we are able to survey a substantial fraction of our Hubble volume, it is arguably better to focus our efforts on tightly constraining the time dependence of $\\{\\mu,\\gamma\\}$ or $\\{\\tilde\\mu,\\Sigma\\}$ by combining information from all scale bins. \n \\vspace{5mm}\n \n \n \\noindent \\textit{Case b).} In this scenario one posits a new physical transition scale $M^{-1}$ below the cosmological horizon distance, such that $M\\gg{\\cal H}$. For example, in $f(R)$ gravity the new mass scale is approximately given by \\mbox{$M^2\\propto 1\/f_{,RR}$}, and $f_{,RR} R\\ll1\\,\\Rightarrow\\,1\/f_{,RR} \\gg {\\cal H}^2$ is needed to ensure stability in the matter-dominated era \\cite{Hojjati2012}. 
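The weakness of this residual scale-dependence in case a) can be quantified with a rough numerical sketch. The $O(1)$ coefficient combinations below are placeholders (simply set to $\\pm1$), and ${\\cal H}$ is fixed to its present-day value; both are assumptions made purely for illustration.

```python
# Rough size of the first-order (H/k)^2 corrections to gamma in case a).
# The O(1) coefficient combinations are placeholders (+1 and -1); the
# Hubble rate is taken at its present-day value, in c = 1 units.

H0 = 1.0 / 2997.9          # H_0 in h/Mpc (Hubble distance c/H_0 = 2997.9 Mpc/h)

def gamma_case_a(k, H=H0, p1=1.0, p234=1.0, q1=1.0, q234=-1.0):
    """gamma(a,k) with first-order (H/k)^2 corrections, toy coefficients."""
    return (p1 + p234*(H/k)**2) / (q1 + q234*(H/k)**2)

# Deviation from scale-independence across the survey's k-range (h/Mpc):
deviations = {k: abs(gamma_case_a(k) - 1.0) for k in (0.005, 0.05, 0.15)}
for k, dev in deviations.items():
    print(f"k = {k:5.3f} h/Mpc : |gamma - 1| ~ {dev:.1e}")
```

Even at the largest survey scales ($k\\sim0.005\\,h\\,$Mpc$^{-1}$) the deviation from scale-independence is below the per cent level, and it falls like $k^{-2}$ towards smaller scales, supporting the statement that case a) scale-dependence will be very weak.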
\n \nRecall that in \\S\\ref{section:set-up} we introduced a wavenumber $k_{\\mathrm{gal}}$ that typified the smallest scales (largest wavenumbers) that could be well-constrained by near-future galaxy surveys. Given the lack of deviations from $\\Lambda$CDM+GR to date, one might naively assume the maximum value for $M$ is of order $k_{\\mathrm{gal}}$. In principle one should then attempt to constrain the full form of eqs.(\\ref{gamscalar}) and (\\ref{muscalar}). However, it has been shown that in practice it will be difficult to constrain all the individual $p_i,\\,q_i$ and $t_i$ \\cite{Hojjati2014}.\n\nThis motivates us to consider a slightly less accurate but simpler approach. We perform a Taylor expansion of eqs.(\\ref{gamscalar}) and (\\ref{muscalar}) in the vicinity of the naive assumption $M\\lesssim k_{\\mathrm{gal}}$, and keep only the leading order terms. We show in Appendix~\\ref{appendix:As_and_Ms} that this gives the following expressions:\n\\begin{align}\n{\\mu}(a,k)&\\simeq1+A_{\\mu}(a)\\left[1+\\left(\\frac{M_{\\mu}(a)}{k}\\right)^2\\right] \\label{iii}\\\\ \n\\gamma(a,k)&\\simeq1+A_\\gamma(a)\\left[1+\\left(\\frac{M_\\gamma(a)}{k}\\right)^2\\right]\\label{kkk}\n\\end{align}\nThe precise content of $M_\\mu(a)$ and $M_{\\gamma}(a)$ depends on whether ${\\Gamma_\\chi}\\sim M$ or ${\\Gamma_\\chi} \\sim {\\cal H}$, but this kind of detail is not important here. The thrust of our argument is that equations (\\ref{iii}) and (\\ref{kkk}) provide a simple, general and theoretically well-motivated description of scale-dependence that should be easily applicable to observations. 
They include case a) as a limit, if $M_\\mu(a)$ and $M_{\\gamma}(a)$ are set to be of order ${\\cal H}$.\n \n \\begin{figure}[t!]\n\\begin{center}\n\\hspace{-2mm}\n\\includegraphics[scale=0.27]{scales_line_plot.pdf}\n\\caption{Schematic diagram illustrating the arguments of \\S\\ref{section:mass_scales} (case b).}\n\\label{figure:scales_line_plot}\n\\end{center}\n\\end{figure}\n\\vspace{5mm}\n \n From the discussion of this paper, we now understand that a non-zero detection of $M_\\gamma$, $M_\\mu$, $A_\\mu$ or $A_\\gamma$ would signify one of two possible things:\n \\begin{enumerate}\n \\item A breakdown of the QS approximation ${\\cal H}\/k \\ll 1$. The scale-dependence would then be due to first-order corrections in ${\\cal H}^2\/k^2$.\n \\item The existence of a new scale in gravitational physics \\footnote{At the risk of over-emphasis, remember from our discussion above that the converse is not true. A lack of observed scale-dependence does not rule out modified gravity.}.\n \\end{enumerate}\n Either of these scenarios would have profound implications for our understanding of gravity on large scales.\n\n \n \n \n \n \n\\section{Detecting scale dependence}\n\\label{section:forecasts}\nWe now speculate on the constraints that can be placed on the kind of parameterisation introduced above with future cosmological surveys. As we mentioned in \\S\\ref{section:derivation}, the function set $\\left\\{\\mu,\\,\\gamma\\right\\}$ is the most convenient for theoretical work. However, a change of basis will enable us to minimise parameter degeneracies when using redshift-space distortion (hereafter RSD) and weak lensing data \\cite{Simpson2013,Linder2013}. 
We introduce the new function set $\\left\\{\\tilde\\mu,\\,\\Sigma\\right\\}$, related to the old set by:\n\\begin{align}\n\\tilde\\mu(a,k)&=\\frac{\\mu(a,k)}{\\gamma(a,k)} & \\Sigma(a,k)&=\\frac{\\mu(a,k)}{2}\\left(1+\\frac{1}{\\gamma(a,k)}\\right)\n\\label{convert_params}\n\\end{align}\nEffectively, $\\tilde\\mu$ parameterises the geodesic equation for non-relativistic particles that governs the linear collapse of cold dark matter; $\\Sigma$ parameterises the geodesic equation for photons that governs weak gravitational lensing. However, it is important to note that gravitational lensing is also sensitive to $\\tilde\\mu$, because the lensing convergence and shear spectra involve integrals over the matter power spectrum, which is affected by modified structure growth \\cite{Simpson2013,Leonardinprep}.\n\nWe will write the new function set in a form analogous to eqs.(\\ref{iii}) and (\\ref{kkk}), that is:\n\\begin{align}\n{\\tilde{\\mu}}(a,k)&\\simeq1+A_{\\tilde{\\mu}}(a)\\left[1+\\left(\\frac{M_{\\tilde{\\mu}}(a)}{k}\\right)^2\\right] \\label{mutilde} \\\\ \n\\Sigma(a,k)&\\simeq1+A_\\Sigma(a)\\left[1+\\left(\\frac{M_\\Sigma(a)}{k}\\right)^2\\right] \\label{Sigma}\n\\end{align}\nWe stress that we are more interested in the general form of the scale-dependence rather than the precise (and lengthy) expressions relating $A_{\\tilde\\mu},\\,A_\\Sigma, M^2_{\\tilde\\mu}$, and $M^2_\\Sigma$ to the coefficients of the field equations (though for completeness the relationships between the $\\left\\{\\mu,\\,\\gamma\\right\\}$ and $\\left\\{\\tilde{\\mu},\\,\\Sigma\\right\\}$ parameterisations are given in Appendix \\ref{appendix:conv}).\n\nAn unavoidable feature of model-independent tests of gravity is that ansatzes must be chosen for the time-dependent functions. There must be enough parameters in the ansatz to capture important signatures in the data without weakening the constraints too severely. 
As a simplicity-motivated test case, we will choose our ansatz to be (partially following \\cite{Simpson2013}):\n\\begin{align}\nA_{\\tilde\\mu}(a)&=\\tilde{\\mu}_0\\frac{\\Omega^{GR}_\\Lambda(a)}{\\Omega^{GR}_{\\Lambda 0}} \\label{Amutilde} \\\\\nA_\\Sigma(a)&=\\Sigma_0\\frac{\\Omega^{GR}_\\Lambda(a)}{\\Omega^{GR}_{\\Lambda 0}} \\label{ASigma}\\\\\nM_{\\tilde\\mu}&=m_{\\tilde\\mu}\\,(20 H_0) \\label{yrt}\\\\\nM_{\\Sigma}&=m_{\\Sigma}\\,(20 H_0)\\label{yrt2}\n\\end{align}\nwhere $m_{\\tilde\\mu}$ and $m_{\\Sigma}$ are constants. Remembering that we will want to be able to interpret the $M_i^{-1}$ as lengthscales, it is convenient to introduce a subhorizon distance unit of $(20H_0)^{-1}$ and express $M_i^{-1}$ in units of this distance. In the simple forecasts here we will focus on perturbative observables, fixing the background expansion history to match that of the $\\Lambda$CDM+GR model and using Planck best-fit cosmological parameters \\cite{Planck_params}. For an analysis that accounts for a modified expansion history see \\cite{Leonardinprep}.\n\nIn principle we should really allow $M_{\\tilde\\mu}$ and $M_\\Sigma$ to be functions of time. Treating them as constants simply corresponds to imposing the same overall time-dependent amplitudes $A_i(a)$ on both the scale-free and scale-dependent modifications to the QS field equations.\n\nThe set of four parameters that we will forecast for is:\n\\begin{align}\n\\tilde{\\mu}_0,\\;\\;\\Sigma_0,\\;\\;\\tilde{\\mu}_0m^2_{\\tilde\\mu},\\;\\;\\Sigma_0m^2_\\Sigma\n\\end{align}\nNote that the scale-dependent parts of the parameterisation are sensitive to a degenerate combination of the time-dependent amplitude and the possible new effective mass\/length scale; we cannot constrain $M_{\\tilde\\mu}$ and $M_\\Sigma$ individually. 
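To make the above concrete, the following sketch evaluates the change of basis of eq.(\\ref{convert_params}) together with the ansatz of eqs.(\\ref{Amutilde})-(\\ref{yrt2}). It assumes a flat $\\Lambda$CDM background with $\\Omega_m\\simeq0.31$ (Planck-like); the parameter values in the example calls are arbitrary.

```python
# Sketch of the change of basis (convert_params) and the test-case ansatz
# (Amutilde), (yrt). Assumes a flat LCDM background with Omega_m ~ 0.31
# (Planck-like); parameter values in the demo calls are arbitrary examples.

OMEGA_M0 = 0.31
H0 = 1.0 / 2997.9                     # H_0 in h/Mpc, c = 1 units

def omega_lambda(a):
    """Lambda density fraction Omega_Lambda(a) in flat LCDM."""
    return (1 - OMEGA_M0) / ((1 - OMEGA_M0) + OMEGA_M0 * a**-3)

def to_obs_basis(mu, gamma):
    """{mu, gamma} -> {mu_tilde, Sigma} of eq. (convert_params)."""
    return mu / gamma, 0.5 * mu * (1 + 1.0/gamma)

def mu_tilde_ansatz(a, k, mu0, m_mu):
    """Eq. (mutilde) with amplitude (Amutilde) and M = m_mu * 20 H_0."""
    A = mu0 * omega_lambda(a) / omega_lambda(1.0)
    M = m_mu * 20 * H0
    return 1.0 + A * (1.0 + (M/k)**2)

mt, Sigma = to_obs_basis(1.0, 1.0)
print(mt, Sigma)        # GR limit: mu_tilde = Sigma = 1
```

Note that on scales well above the transition, $k\\gg M_{\\tilde\\mu}$, the ansatz reduces to the scale-free modification $\\tilde\\mu\\to1+A_{\\tilde\\mu}(a)$, while setting $\\tilde\\mu_0=0$ recovers GR exactly.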
\n\\begin{center}\n\\begin{figure*}[t]\n\\hspace{-1.2cm}\n\\subfigure{\\label{fig:mu0Sig0}\\includegraphics[scale=0.475]{ellipse_muSig.png}}\n\\hspace{-0.5cm}\n\\subfigure{\\label{fig:MmuMSig}\\includegraphics[scale=0.476]{ellipse_MmuMSig.png}}%\n\\vspace{-0.8cm}%\n\\caption{Forecast constraints on the scale-independent (left panel) and scale-dependent (right panel) parts of the $\\{\\tilde{\\mu},\\Sigma\\}$ parameterisation, using a DETF stage 4-like experiment. Redshift-space distortions (green contours) constrain only the two parameters in the $\\tilde\\mu(z,k)$ function, $\\tilde{\\mu}_0$ and $m_{\\tilde\\mu}$. Gravitational weak lensing (red contours) predominantly constrains the parameters in $\\Sigma(z,k)$, but also has some dependence on $\\tilde{\\mu}_0$. Blue contours show the combined constraints. The parameters $m_{\\tilde{\\mu}}$ and $m_{\\Sigma}$ are $M_{\\tilde{\\mu}}$ and $M_\\Sigma$ expressed in units of $20 H_0$, i.e. $m_i^{-1}$ gives $M_i^{-1}$ in units of the distance $\\left(20 H_0\\right)^{-1}$; see eqs.(\\ref{yrt}) and (\\ref{yrt2}).}\n\\label{fig:ellipses}\n\\end{figure*}\n\\end{center}\n\nWe consider a Dark Energy Task Force stage 4 (DETF4) experiment that combines a galaxy clustering survey and a dedicated tomographic weak lensing survey. Weak lensing utilises scale-dependent information naturally, as the standard quantities to calculate are angular power spectra. RSD measurements, however, generally do not. Usually we talk about the density-weighted growth rate, $f\\sigma_8(z)$, implicitly assuming data from all scales ($k$-bins) has been combined.\n\nWe modify this situation by dividing each redshift bin of our hypothetical survey into five bins in $k$-space, with edges \\mbox{$\\left[0.005,0.02,0.05,0.08,0.12,0.15\\right]$h\\,Mpc$^{-1}$}; the choice of $k$-binning is analogous to \\cite{Johnson2014}, and the upper limit is chosen to cut off before nonlinearities start to dominate. 
It seems likely that as our survey sizes increase, large-scale measurements will improve, whilst small-scale measurements will remain limited by our understanding of baryonic physics and the effects of nonlinearities. For this reason we will take our $k$-bins to have the following fractional errors at all redshifts, from large-scale to small-scale:~$\\left[0.01,0.03,0.03,0.09,0.09\\right]$. \n\nFor the tomographic gravitational lensing, we consider five source bins. These are constructed by taking the total distribution of source galaxies as:\n\\begin{equation}\nn(z) \\propto z^{\\alpha} e^{-\\left(\\frac{z}{z_0}\\right)^\\beta}\n\\label{nofz}\n\\end{equation}\nwith $\\alpha=2$, $\\beta=1.5$, and $z_0=z_m\/1.412$ where $z_m$ is the median redshift of the survey \\cite{Thomas2009, Amendola2008}. $n(z)$ is then divided into five bins between $z=0.5$ and $z=2$, each with equal numbers of galaxies. The lensing errors are encoded in the covariance matrices:\n\\begin{align}\n{\\bf C}_{ij}(\\ell)=\\sqrt{\\frac{2}{(2\\ell+1)f_{sky}}}\\left( P_{ij}^{\\kappa,\\,GR}(\\ell)+\\delta_{ij}\\,\\frac{\\langle\\gamma_{\\mathrm{int}}^2\\rangle}{\\bar{n}_i}\\right)\n\\end{align}\n where $P_{ij}^{\\kappa,\\,GR}(\\ell)$ is the cross-correlated convergence power spectrum sourced by galaxies in bins $i$ and $j$, and $f_{sky}$ is the fraction of the sky covered by the survey. $\\langle\\gamma_{\\mathrm{int}}^2\\rangle^{\\frac{1}{2}}$ is the r.m.s. intrinsic shear and $\\bar{n}_i$ is the number of galaxies per steradian in source bin $i$. For further details see \\cite{Hu1999, Leonardinprep}.\n\nOur model-agnostic approach to the field equations (eqs.\\ref{QSPoissonscalar}-\\ref{QSslipscalar}) means that factors of order unity are of no relevance here. For this reason we do not attempt a detailed, experiment-specific forecast (for which the $k$-bin errors and maximum $k$ value would evolve with redshift). 
More precise forecasts can be found in, for example, \\cite{Amendola_Fogli2013, Taddei2014}. Other model-independent tests of $\\Lambda$CDM using the growth rate have recently appeared in \\cite{Nesseris_Sapone}.\n\nFig.~\\ref{fig:ellipses} shows marginalised 2D constraints on the scale-independent and scale-dependent parts of the parameterisation (eqs.~\\ref{mutilde} and \\ref{Sigma}). RSDs (green contours) constrain only $\\tilde{\\mu}_0$ and $\\tilde{\\mu}_0M^2_{\\tilde\\mu}$, whilst weak lensing (red contours) is sensitive to all four parameters. The authors of \\cite{Simpson2013} have applied a scale-independent parameterisation to CFHTLenS+WiggleZ data, finding that lensing is only weakly sensitive to ${\\tilde\\mu}_0$. We agree with these scale\\textit{-independent} results, but find that the scale\\textit{-dependent} parts of the `lensing function' $\\Sigma(a,k)$ and the `RSD function' $\\tilde{\\mu}(a,k)$ are more strongly correlated \\cite{Leonardinprep}; see the right panel of Fig.~\\ref{fig:ellipses}.\n\n A rough estimate of the precision with which we will be able to measure these new effective mass\/length scales in cosmology gives us\n$\\sigma \\left(m^2_{\\tilde\\mu}\\right)\\sim{\\sigma\\left(\\tilde{\\mu}_0 m^2_{\\tilde\\mu}\\right)}\/{\\sigma\\left(\\tilde{\\mu}_0 \\right)}\\sim 6.7$ and \n$\\sigma \\left(m^2_{\\Sigma}\\right)\\sim{\\sigma\\left(\\Sigma_0 m^2_{\\Sigma}\\right)}\/{\\sigma\\left(\\Sigma_0 \\right)}\\sim 13.2$. \nInterpreting these limits as distance scales, we find lower bounds of order $ M^{-1}_{\\tilde\\mu}\\geq 364\\,\\mathrm{Mpc}$ and $M^{-1}_{\\Sigma}\\geq 260\\, \\mathrm{Mpc}$. \nWe see that RSDs are more sensitive than weak lensing to new fundamental scales. 
That is, we should be able to pin down a new characteristic distance scale all the way up to $364$~Mpc with growth rate measurements.\n\nGiven that the bounds we have found on the $M_i$ are comparable to $k_{\\mathrm{gal}}$, the Taylor expansion of eqs.(\\ref{gamscalar}) and (\\ref{muscalar}) (see Appendix~\\ref{appendix:As_and_Ms}) may not be accurate enough. Yet, the well-behaved form of eqs.(\\ref{gamscalar}) and (\\ref{muscalar}) (ratios of quadratic polynomials) suggests that subsequent corrections in higher powers of $M^2\/k^2$ might change the bounds placed on $M_{\\tilde\\mu}$ and $M_\\Sigma$ by, at most, a factor of a few. \n\n\n\n\n\\section{Conclusions}\n\\label{section:conclusions}\n\nThe spirit of this work has been to take a step back from detailed model-specific investigations in alternative theories of gravity. Ultimately, the expressions collected in Appendix~\\ref{app:derivs} link the parameterisation we presented in eqs.(\\ref{mutilde}) and (\\ref{Sigma}) to the field equations of a specific gravity theory; but, as we hope has been clear, this is not the strategy we are advocating. Instead, the goal of this paper has been to highlight the fact that even the plethora of exotic gravity theories on the market today share basic physical features which endow them with the same structure in the quasistatic regime. \n\nWe have found that the scale-dependence of gravity theories is closely linked to an effective mass scale or length scale which the linearised field equations inherit from their parent quadratic action. Our derivation has allowed for the evolutionary timescale of the new degree of freedom to be affected by this new scale in the system, rather than relying too heavily on GR-based intuitions that might suggest $\\dot\\chi$ is negligible in the QS regime.\n\nIn many theories a new mass scale is tuned to be $\\sim{\\cal H}$ in order to produce accelerated expansion. 
Indeed, the motivation behind much of the current bestiary of modified gravity theories is to render the cosmological constant obsolete. {\\it Generally} this makes the scale-dependence undetectable in the quasistatic regime. We conjecture that most of the theories giving rise to detectable scale-dependence are those which introduce a new scale much larger than the Hubble scale ($M\\gg{\\cal H}$), and meanwhile rely on a cosmological constant to achieve a viable expansion history.\n\nWe caution, though, that theories with screening mechanisms complicate the issue somewhat. One can envisage a smaller, plausibly detectable length scale emerging if it is a compound of fundamental scales and couplings to the energy-momentum tensor.\n\nNote that at no point in this paper have we needed a concrete action from which to start our calculations: knowledge of the basic physical properties of a theory (e.g. second-order equations of motion and a single dynamical spin-0 perturbation) is sufficient. We have trivially recovered the results of \\cite{defelice2011,Gleyzes2013, Bloomfield2013, Unobservables2013}, and have found them to be more general than previously realised (see \\cite{Silvestri2013} for a similar analysis along these lines).\n\nWe advocate that measurement of scale-dependent observables is an important and feasible target for next-generation cosmology experiments. They have the potential to unveil a scale at which new physics beyond $\\Lambda$CDM+GR kicks in. More conservatively, scale-dependent measurements would also act as an essential test of the quasistatic approximation that has rapidly grown in popularity over the past few years.\n\n\n\\textit{Acknowledgments.} \nIt is a pleasure to thank L. Amendola, J. Gleyzes, M. Kunz, M. Lagos, L. Miller, J. Noller, F. Piazza, A. Silvestri, F. Vernizzi and H. Winther. TB is supported by All Souls College, Oxford. PGF acknowledges support from Leverhulme, STFC, BIPAC and the Oxford Martin School. 
DL is supported by the Rhodes Trust. MM acknowledges support from the Swiss National Science Foundation and the Balzan foundation via the University of Oxford.\n\n\begin{widetext}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\section{Introduction}\n\nData streams are environments such as network event logs, video streams, call records, and transaction records where vast amounts of data are generated at high speed \cite{aggarwal2007datastreams}. Classification models in data streams have to work under strict time and memory constraints, and be able to adapt to changes in the distribution of data over time \cite{bifet2009new}. Such changes in the relation between data points and their corresponding classes over time are called concept drifts \cite{gama2010knowledge}. Robustness against concept drifts is a topic that is extensively studied in the literature \cite{bonab2018goowe, brzezinski2014reacting}. As data continuously arrive, learned information from past instances becomes irrelevant under concept drifts. Classification models need to adapt to the new concepts by not only learning the new concepts but also forgetting the now-obsolete ones \cite{brzezinski2014reacting}.\n\nEnsembles are common choices for models in stream environments, as they often improve predictive performance while providing robustness against concept-drifting and time-evolving data \cite{bifet2009new}. Ensembles employ different approaches for dealing with concept drifts, such as replacing the weakest \/ lowest-weight classifier with a new one that is trained with the more recent data \cite{bonab2018goowe}, or updating ensemble weights regularly \cite{wang2003awe, bonab2018goowe}. 
Despite the popularity and prevalence of ensemble learning in data streams, ensemble pruning is still a largely unexplored area of research in the stream mining community.\n\n\begin{figure*}\n    \centering\n    \includegraphics[width=\linewidth]{method.pdf}\n    \vspace{-0.4cm}\n    \caption{Pipeline of Proposed Method. The prediction history of each component in the ensemble is recorded. At each row of records, colored cells indicate relevance scores for corresponding classes. Then, these records are used in calculating class-wise losses for each component, and class-wise rankings of those components. The acquired rankings are combined using a rank fusion algorithm (Modified Borda Count). The resulting top $\varphi$ components ($\varphi < K$) are selected, and the rest is pruned.}\n    \label{fig:method-figure}\n\end{figure*}\n\n\nIn this study, we tackle the task of ensemble pruning in data streams for multi-class classification. Our contributions are as follows. We propose, \n\begin{itemize}\n    \item an explicit pruning technique that can be integrated into any type of streaming ensemble, and invoked whenever pruning is requested. 
Our method results in pruned ensembles that perform better in terms of accuracy and are more efficient in terms of memory consumption (as demonstrated in Section \ref{sec:results}).\n    \item to the best of our knowledge, the first large-scale \footnote{The previous studies \cite{krawczyk2015one, cruz2019fire} experimented on datasets with numbers of instances on the scale of hundreds or thousands} on-the-fly ensemble pruning method for streaming data.\n    \item a class imbalance-aware pruning method, where a pruned ensemble does not lose its ability to classify rare or less-frequent classes, as a result of our class-wise component prediction analysis and ranking (as described in Section \ref{sec:class-wise-analysis-ranking}).\n\end{itemize}\n\n\n\section{Problem Definition and Notation}\n\nWe consider the problem of ensemble pruning within the context of supervised classification in data streams. A data stream $\mathcal{D}$ is a (possibly infinite) sequence of time-ordered data that consists of pairs $(X^{(t)}, y^{(t)})$ where $t$ denotes the arrival time. In multi-class classification, the target $y^{(t)}$ has $L$-many classes where $L>2$. An ensemble model $\xi$ is a group of $K$ component classifiers, i.e. $\xi = \{ C_1, C_2, \dots, C_K \}$, where the prediction of the model for an instance is a combination of the individual hypotheses ($h_k (X)$) of its component classifiers, where $k$ denotes the classifier index. \n\nLet $\hat{\rho}_{k,l}$ be the record of the predictions of the $k$th component for the $l$th class on a sliding window containing the latest $N$ instances. Similarly, let $\rho_{l}$ be the record of the ground truth information for the $l$th class on the sliding window.\n\nThe aim of ensemble pruning is selecting a subset of classifiers $\xi' \subset \xi$ such that both the ensemble's efficiency and its predictive performance are improved. Here, let us denote the size of the pruned ensemble as $|\xi'| = \varphi$ where $\varphi < K$. 
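The sliding-window records defined above can be made concrete with a minimal sketch (our own illustration; the class and method names are not from the paper's implementation):

```python
from collections import deque

import numpy as np

class PredictionRecords:
    """Sliding-window records of component predictions (rho_hat) and
    ground truth (rho), following the notation of this section."""

    def __init__(self, n_components, n_classes, window_size):
        self.K, self.L = n_components, n_classes
        # rho_hat[k]: last N relevance-score vectors of component k
        self.rho_hat = [deque(maxlen=window_size) for _ in range(n_components)]
        # rho: last N one-hot ground-truth vectors
        self.rho = deque(maxlen=window_size)

    def append(self, component_scores, y):
        """Record one instance: per-component score vectors and true class y."""
        for k in range(self.K):
            self.rho_hat[k].append(np.asarray(component_scores[k], dtype=float))
        truth = np.zeros(self.L)
        truth[y] = 1.0
        self.rho.append(truth)

    def class_records(self, k, l):
        """Return (rho_hat_{k,l}, rho_l) over the current window."""
        preds = np.array([p[l] for p in self.rho_hat[k]])
        truth = np.array([t[l] for t in self.rho])
        return preds, truth
```

Because the buffers are bounded FIFO queues of length $N$, memory usage stays fixed no matter how long the stream runs.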
\n\n\\section{Related Work}\n\\textbf{\\textit{Online Ensembles}}. The literature on the online ensembles is abundant \\cite{krawczyk2017ensemble}. Here, we only mention the ensembles that are used in our experiments. One of the most commonly used and well-known ensembles in data streams is AWE (Accuracy Weighted Ensemble) \\cite{wang2003awe}. It is a chunk-based ensemble where each classifier are weighted according to their expected accuracy in the latest chunk. Similarly, GOOWE \\cite{bonab2018goowe} is another chunk-based stacking ensemble that assigns optimal weights to its components using a geometric interpretation of target space and solving an optimization problem as the chunks are processed. Both of these ensembles are dynamic and they replace one of their components at the end of each chunk.\n\n\n\\noindent\n\\textbf{\\textit{Pruning in Data Streams}}. A related topic to on-the-fly ensemble pruning is Dynamic Ensemble Selection (DES) \\cite{ko2008dynamic}. In DES, a subset of components from a pool is selected for each test instance. DES assigns estimated levels of competence for each classifier in the pool, so that a selection method selects the classifiers to be assigned to incoming instances. Note that, unlike ensemble pruning, selection of a subset of classifiers for every instance does not reduce the size of the pool of classifiers. Firefly \\cite{krawczyk2015one} introduces a swarm intelligence approach to pruning with \\textit{one-class} classifiers where input classifiers are members of a firefly population. Interactions between fireflies describe the effectiveness of the classifiers, and best representatives of diverse groups in the population are selected as the pruning step. Lastly, FIRE-DES++ \\cite{cruz2019fire} incorporates a pre-selection stage where only the classifiers that can correctly classify at least a pair of instances with different classes can proceed to the next stage of classification. 
Again, the pre-selection stage in FIRE-DES++ does not change the size of the pool of classifiers.\n\n\vspace{-0.1cm}\n\section{Proposed Method}\n\begin{algorithm}\n    \caption{\texttt{CCRP}: Class-wise Component Ranking-based Pruner}\n    \label{alg:ccrp}\n    \begin{algorithmic}[1]\n        \Require $\mathcal{D}$: data stream, $\xi$: ensemble, $\varphi$: pruned ensemble size\n        \Ensure $\xi'$: pruned ensemble\n        \State Initialize $\rho$ as an empty FIFO buffer of size $N$. \n        \State Initialize $\hat{\rho}$ as an empty FIFO buffer of size $K \times N$.\n        \For{$(X,y) \in \mathcal{D}$}\n        \n        \State $\hat{\rho}_k.$ \texttt{append}$(h_k (X)) \; $ for $\forall k$  \Comment{[ Preliminary Phase ]}\label{code:preliminary}\n        \State $\rho.$ \texttt{append}$(y)$ \n        \n        \If{prune} \Comment{[ Phase I ]} \label{code:phase-1} \n        \For{$l \in L$} \Comment{Start CCRP}\n        \For{$k \in K$}\n        \State $ scores.$\texttt{append}$($\texttt{MSE}$(\rho_{l}, \hat{\rho}_{k,l}))$\label{code:mse-line} \Comment{Class-wise MSEs}\n        \EndFor\n        \State $ ranks[l] =$ \texttt{argsort}$(scores)$ \Comment{Class-wise order of components}\n        \EndFor\n        \State $ccrp\_ranks =$ \texttt{ModifiedBorda}$(ranks)$ \label{code:ModifiedBorda} \Comment{[ Phase II ]}\label{code:phase-2} \n        \State $\xi' \leftarrow$ top ranked $\varphi$ components based on $ccrp\_ranks$ \n        \State $\xi \leftarrow \xi'$ \Comment{Pruned ensemble in effect}\n        \EndIf\n        \EndFor\n    \end{algorithmic}\n\end{algorithm}\n\nWe introduce CCRP, a \textbf{C}lass-wise \textbf{C}omponent \textbf{R}ankings based \textbf{P}runer for multi-class online ensembles. CCRP can be integrated to work with both dynamic and static multi-class ensembles to improve predictive performance and reduce memory consumption. CCRP can be performed at any point in time on the stream. 
The proposed method consists of three phases: \textit{\textbf{Preliminary Phase}}: Recording Component Predictions on the Sliding Window, \textit{\textbf{Phase I}}: On-the-Fly Performance Analysis and Class-wise Ranking of Components, and \textit{\textbf{Phase II}}: Fusion of Rankings and Component Selection (see Figure \ref{fig:method-figure}).\n\n\n\subsubsection*{Preliminary Phase: Recording Component Predictions on the Sliding Window.} This phase only applies to the ensembles that are not already recording the component predictions on the sliding window (e.g. OzaBagging \cite{oza2005onlinebb}). In this phase, for the latest $N$ data instances, the predictions of classifiers and the ground truth are recorded as $\hat{\rho}$ and $\rho$, respectively (Alg.\ref{alg:ccrp} line \ref{code:preliminary}). Since only the latest $N$ instances are kept in the records, the required memory for this process is fixed, despite being proportional to the number of classes.\n\n\subsubsection*{Phase I: On-the-Fly Performance Analysis and Class-wise Ranking of Components.} This is the initial phase of CCRP in which per-class performances of classifiers are measured. In this phase, for each class $l$, the predictions of classifiers and the ground truth are extracted as $\hat{\rho}_{k,l}$ and $\rho_{l}$, respectively (Alg.\ref{alg:ccrp} line \ref{code:phase-1}). Then, scores of classifiers are calculated using Mean Square Error (Eqn. 
\ref{eq:loss-mse}) for each class (Alg.\ref{alg:ccrp} line \ref{code:mse-line}).\n\label{sec:class-wise-analysis-ranking}\n\n\vspace{-0.3cm}\n\begin{equation}\n\label{eq:loss-mse} \n    \mathcal{L}_{k,l}(X) = \sum^{N}_{i=1} \Big( \hat{\rho}_{k,l}^{(i)} - \rho_{l}^{(i)} \Big)^2, \qquad 1 \leq k \leq K, \; 1 \leq l \leq L\n\end{equation}\n\n\subsubsection*{Phase II: Fusion of Rankings and Component Selection.} In the final phase, CCRP generates the overall ranking of the classifiers, using a modified version of the Borda Count \cite{nuray2006automatic} rank fusion method (Alg.\ref{alg:ccrp} line \ref{code:ModifiedBorda}). In Modified Borda Count (MBC, hereafter), CCRP assigns $K \times L$ points (instead of $K$ points as in the regular Borda Count \cite{nuray2006automatic}) to the highest ranking component in each class-wise ranking. This ensures that the winning component for each class appears among the top $L$ places in the overall ranking. Afterwards, the second classifier gets $K - 1$ points and each subsequent classifier gets 1 point less than its predecessor. Then, the top $\varphi$ classifiers from the overall ranking are selected as the members of the pruned ensemble. It is recommended \cite{bonab2019less} that the pruned ensemble size should be at least equal to the number of classes. Taking this into account, CCRP guarantees that the best performing classifier for each class is included in the pruned ensemble when $\varphi \geq L$.\n\n\section{Experimental Setup and Results} \label{sec:results}\n\n\subsection{Setup}\nThe proposed method and modifications to existing ensembles are integrated into the scikit-multiflow \cite{montiel2018scikit} library. The experiments are evaluated prequentially \cite{gama2009issues}. We report prequential and overall accuracy as performance metrics. 
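The rank-fusion step of Phase II above can be illustrated with a short sketch (our own minimal rendering of the point scheme described in the text; function names and the toy rankings are illustrative, not from the paper's implementation):

```python
import numpy as np

def modified_borda(ranks, K, L):
    """Fuse L class-wise rankings into one overall ranking.

    ranks[l] lists component indices for class l, best first. The top
    component of each class-wise ranking receives K*L points (instead of
    K as in regular Borda Count), so per-class winners rise into the top
    L places; the component in place i >= 2 receives K - i + 1 points.
    """
    points = np.zeros(K)
    for l in range(L):
        for place, k in enumerate(ranks[l]):
            points[k] += K * L if place == 0 else K - place
    return list(np.argsort(-points))  # component indices, best first

def ccrp_select(ranks, K, L, phi):
    """Keep the top-phi components of the fused ranking (phi < K)."""
    return modified_borda(ranks, K, L)[:phi]

# Toy example: K = 3 components, L = 2 classes. Component 0 wins class 0,
# component 2 wins class 1; with phi = L = 2 both winners are kept.
kept = ccrp_select([[0, 1, 2], [2, 1, 0]], K=3, L=2, phi=2)
```

With $\varphi \geq L$, the $K \times L$ bonus guarantees that every per-class winner (at least $K \times L$ points) outranks any component that wins no class (at most $L \times (K-1)$ points), matching the guarantee stated above.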
In addition, for memory efficiency analysis in the experiments, we define \textit{Mean Memory Consumption Ratio} ($\mu$) which indicates the percentage of average memory the pruned ensemble uses with respect to the original ensemble (Eqn. \ref{eq:mmcr}).\n\n\vspace{-0.3cm}\n\begin{equation}\n\label{eq:mmcr}\n    \mu = \frac{\sum_{\forall \text{chunk}} \; Size(\xi')}{\sum_{\forall \text{chunk}} \; Size(\xi)}\n\end{equation}\n\nThroughout the experiments, we use two typical dynamic ensembles (AWE \cite{wang2003awe} and GOOWE \cite{bonab2018goowe}) with Hoeffding Trees \cite{domingos2000mining} as their base classifiers. CCRP is performed whenever the ensembles are full, i.e. when the ensemble size reaches $K$. After pruning, the ensembles continue to grow until they reach the maximum size $K$, where pruning takes place again. This process continues indefinitely.\n\n\n\begin{table}[!h]\n\caption{Datasets}\n\vspace{-0.3cm}\n\begin{tabular}{l|lrrr}\n &  \textbf{Name} & \multicolumn{1}{l}{\textbf{\# Features}} & \multicolumn{1}{l}{\textbf{\# Classes}} & \multicolumn{1}{l}{\textbf{\# Instances}} \\ \hline\n\multirow{3}{*}{\rotatebox{90}{\textbf{Real}}} & COVTYPE \cite{bifet2013efficient} & 54 & 7 & 581.012 \\\n & Poker Hand \cite{bifet2013efficient} & 10 & 10 & 829.201 \\\n & Rialto \cite{losing2016knn} & 27 & 10 & 82.250 \\ \hline\n\multirow{3}{*}{\rotatebox{90}{\textbf{Synth}}} & Mov. Squares \cite{losing2016knn} & 2 & 4 & 200.000 \\\n & Mov. RBF \cite{losing2016knn} & 10 & 5 & 200.000 \\\n & Trn. Chssbrd \cite{losing2016knn} & 2 & 8 & 200.000 \n\end{tabular}\n\label{tab:datasets}\n\end{table}\n\nThe models are run on three real-world and three synthetic well-known datasets with concept drifts \cite{bifet2013efficient, losing2016knn}. 
A summary of the dataset information is provided in Table \ref{tab:datasets}.\n\n\begin{figure}\n    \vspace{-0.35cm}\n    \centering\n    \includegraphics[width=\linewidth]{all-in-one-vertical.pdf}\n    \n    \caption{The impact of CCRP ($\varphi = L$) on the prequential accuracy of two ensemble models on three datasets over the first few thousand instances. The first occurrence of pruning is denoted with a vertical red line.}\n    \label{fig:ccrp-with-different-ensembles}\n\end{figure}\n\n\subsection{Effect of CCRP on Predictive Performance and Memory Efficiency}\n\textit{Does CCRP yield more effective and efficient ensemble models?} We address this question by taking both accuracy and memory consumption ($\mu$) into account. We perform experiments with AWE (where $K = 20$) and GOOWE ($K = 30$) to investigate the effect of CCRP on the ensembles. \n\nTo examine the change in the behavior of ensembles, \textit{ceteris paribus}, we consider the case when CCRP is performed for the first time on the stream. In Figure \ref{fig:ccrp-with-different-ensembles}, it can be observed that the overall accuracy of the pruned ensemble is always higher than that of the original ensemble \textit{after} pruning with CCRP. In addition to the improvement in accuracy, it should be noted that the ensembles after pruning are relatively more robust to the changes in the data stream. 
This can be observed from the steepness of the declining accuracies of the original ensembles compared to the pruned ensembles.\n\n\n\\begin{table}[]\n\\caption{Effect of CCRP on Accuracy and Mean Memory Consumption Ratio ($\\mu$) for Different Ensembles}\n\\vspace{-0.3cm}\n\\begin{tabular}{l|cc|c|cc|c}\n\\multicolumn{1}{c|}{} & \\multicolumn{3}{c|}{\\textbf{AWE} \\cite{wang2003awe}} & \\multicolumn{3}{c}{\\textbf{GOOWE} \\cite{bonab2018goowe}} \\\\ \\cline{2-7} \n & \\multicolumn{1}{c}{Default} & \\multicolumn{1}{c|}{CCRP} & $\\mu$ & \\multicolumn{1}{c}{Default} & \\multicolumn{1}{c|}{CCRP} & $\\mu$ \\\\ \\hline\nCOVTYPE & {\\ul 0.661} & 0.659 & \\textbf{75\\%} & {\\ul 0.852} & 0.829 & \\textbf{10\\%} \\\\\nPokerHand & 0.525 & {\\ul 0.585} & \\textbf{83\\%} & {\\ul 0.712} & 0.651 & \\textbf{14\\%} \\\\\nRialto & 0.377 & {\\ul 0.379} & \\textbf{80\\%} & {\\ul 0.428} & 0.420 & \\textbf{55\\%} \\\\\nMov. Squares & 0.478 & {\\ul 0.532} & \\textbf{71\\% } & 0.354 & {\\ul 0.359} & \\textbf{52\\%} \\\\\nMov. RBF & {\\ul 0.499} & {\\ul 0.499} & \\textbf{72\\%} & {\\ul 0.433} & 0.396 & \\textbf{19\\%} \\\\\nTrn. Chssbrd & {\\ul 0.146} & 0.141 & \\textbf{71\\%} & 0.685 & {\\ul 0.791} & \\textbf{70\\%}\n\\vspace{-0.4cm}\n\\end{tabular}\n\\label{table:acc-memory}\n\\end{table}\n\nFigure \\ref{fig:ccrp-with-different-ensembles} indicates that the initial calls of CCRP \\textit{locally} improves the ensemble performance. Yet, \\textit{is that the case throughout the stream?} To inspect the effect of CCRP on both accuracy and memory consumption throughout the stream, we experiment with AWE ($K=20$) and GOOWE ($K=30$) on all of the datasets, and report overall accuracies, as well as $\\mu$ (Table \\ref{table:acc-memory}). \n\nIn Table \\ref{table:acc-memory}, notice that CCRP yields $10\\% \\leq \\mu \\leq 83\\%$, which means the pruned ensembles consume at most $90\\%$ and at least $17\\%$ less memory over the course of their run. 
In 5 out of 12 cases, CCRP improves the performance of ensembles while reducing the memory consumption by $20$ to $25\%$. In a small number of cases (3 out of 12: GOOWE on COVTYPE, PokerHand and MovingRBF), ensembles without pruning outperform the ones with pruning. These are also the cases where CCRP yields the most drastic reductions in memory consumption, around $80$ to $90\%$. This behaviour may occur due to removing old and strong components of the ensemble in the presence of drifts. For the remaining 4 cases, the overall accuracy of the pruned models is on par with that of the ones that are not pruned. A decrease in memory consumption around $30\%$ can be observed in these cases, as well. These memory savings imply similar savings in CPU time, since the memory consumption of an ensemble is proportional to the complexity of its components.\n\n\subsection{Superiority of CCRP over Different Baseline Selection Techniques}\n\textit{Is using CCRP's pruning scheme indeed better than selecting the highest weighted $\varphi$ components from a weighted ensemble? What difference does using MBC make with respect to the regular Borda Count?} To investigate these questions, we design the following experiment: We choose a weighted ensemble (GOOWE \cite{bonab2018goowe}) which normally employs a replacement policy where the lowest-weight component is replaced at the end of each chunk. We then modify GOOWE's component replacement policy so that it removes a component using either CCRP or regular Borda Count with $\varphi = K - 1$, and places the newly trained component in the freed-up spot. Overall accuracy values for several datasets are reported in Table \ref{table:differentPrune}. 
The winning method(s) for each dataset are underlined.\n \n\begin{table}[!h] \n\caption{Accuracy of GOOWE (with $K = 10$) using Different Pruning Schemes (with $\varphi=9$) for the Selected Datasets}\n\vspace{-0.3cm}\n\begin{tabular}{l|ccc}\n & \textbf{CCRP} & \textbf{Weight-based} & \textbf{Regular Borda} \\ \hline\nCOVTYPE & {\ul 0.839} & 0.796 & 0.703 \\\nPoker & {\ul 0.726} & 0.667 & 0.701 \\\nMov. RBF & {\ul 0.543} & 0.537 & {\ul 0.543} \\\nTrn. Chessboard & {\ul 0.452} & 0.450 & 0.449 \n\end{tabular}\n\vspace{-0.3cm}\n\label{table:differentPrune}\n\end{table}\n\nThis experiment shows that the internal component weights of an ensemble do not necessarily indicate the components' contribution to the overall performance of the ensemble. In addition, CCRP's superiority over using Regular Borda \cite{nuray2006automatic} shows MBC's capability of handling class imbalance. By assigning $K \times L$ points to the best performing classifier for a less frequent class, MBC ensures that that classifier is not ranked poorly and is selected for the resulting subset of classifiers.
The proposed approach can be extended by incorporating the diversity of components into the rank fusion process. Additionally, \"how to select the optimal prune size ($\varphi$)\" is still an open question. The effect of ensemble parameters and dataset properties on $\varphi$ can be examined. Lastly, we observed that the timing of the pruning is crucial for obtaining high predictive performance. Therefore, integrating a concept drift detector into the proposed method and pruning the ensemble whenever a drift is detected may further improve the performance of models.\n\vspace{-0.3cm}\n\bibliographystyle{ACM-Reference-Format}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\section{Introduction} The impressive progress of nonlinear science in the \nlast several decades revealed a large number of structures and mechanisms \nthat were novel and fascinating to almost all of us. This has, in a sense, \nsuggested that several aspects of our existing statistical theory, whose \nfirst kinetic equation was proposed more than a century ago, need certain \nkinds of `upgrade' or `renewal'. \n\nIn some of our relatively recent papers \cite{chen1,chen2}, we were tempted \nto analyze the standard kinetic theory from different perspectives. 
It was \nargued that (i) in various artificial and realistic situations, distribution \nfunctions of gases may have very unconventional structures: they can be, for \ninstance, discontinuous at each and every spatial point and thus the usual \ndifferential apparatus becomes inapplicable (after all, there are indeed many \nstructures and mechanisms in nature to which the use of the usual \ndifferential apparatus is quite limited); and that (ii) while the left side \nof the Boltzmann equation is symmetric in terms of partial derivatives with \nrespect to the position and velocity, the right side of the equation is \nconstructed entirely in the velocity space (the position coordinates merely \nserve as inactive parameters), which is incomprehensible especially in the \nmathematical sense. \n\nThe major objective of this work is, however, rather practical: to define a \nkinetic gas and, at the same time, to find an algorithm which can be used \nto calculate the behavior of the gas rather completely. It is hoped that if \nsuch a study gets established somehow, further studies, either concerning the \nfoundation of the existing statistical theory or concerning the development \nof more general treatments, will be inspired and promoted. \n\nThe structure of this paper is the following. In Sect.~2, we propose our \nworking model, a gas leaking out of a container, resembling the cavity model \nfor black-body radiation. In Sect.~3, the zeroth-order distribution \nfunction of the leaking gas is formulated and, in passing, it is shown that \ndistribution functions of real gases may have radically discontinuous \nstructures untreatable by the standard theory. In Sect.~4, the collisional \ncorrection is investigated with the help of a methodology that is slightly \ndifferent in form but very different in concept from the standard treatment. 
\nIn Sect.~5, we further comment on why the distribution function averaged over \nfinite velocity solid-angle ranges needs to be introduced. Sect.~6 summarizes \nthe paper. \n\n\section{A leaking gas} \n\nAt the kinetic level, one of the customary conceptions about calculating a \npractical gas is to solve the Boltzmann differential-integral equation with \nthe help of a difference scheme. Known difficulties associated with this \nconception may be summarized as follows. \begin{enumerate} \item There are \nseven variables: time, three spatial coordinates and three velocity \ncomponents. If we divide each variable into $N$ intervals, `the degrees of \nfreedom' of the system are $N^7$. To reveal the true properties of a \nnonequilibrium phenomenon, such as those related to turbulence, $N$ has to \nbe rather large and $N^7$ has to be terribly huge, so that no present-day \ncomputational means really helps. \item The complex nature of the collisional \noperator in Boltzmann's equation makes the situation worse. \item Boundary \nconditions and initial conditions impose extra problems. As to a differential \nequation, well specified boundary and initial conditions usually mean that a \nuniquely and clearly defined solution can be constructed, say, from a \ndifference scheme. (The uniqueness may not be truly desired in view of the \nfact that bifurcations take place in nature.) As to an integral equation, \nboundary and initial conditions may be specified rather loosely and this leaves \nus certain room to `manipulate'. Boltzmann's equation is a \ndifferential-integral equation and it is not entirely clear in which \ndirection we should go. \end{enumerate} \nDue to these difficulties (and \npossibly many more), almost no realistic problems have been fully treated, \nlet alone any conclusive comparison between the kinetic theory and realistic \nexperiments. 
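The first difficulty can be made concrete with a back-of-the-envelope estimate (the grid resolution and 8-byte precision are our own illustrative assumptions, not values from the text):

```python
# Storage needed to tabulate f(t, x, y, z, v_x, v_y, v_z) on a uniform grid
# with N intervals per variable, at double precision (8 bytes per cell).
N = 100                       # a modest resolution per variable
cells = N ** 7                # the `degrees of freedom' of the system
bytes_per_cell = 8
terabytes = cells * bytes_per_cell / 1e12
print(f"{cells:.1e} cells -> about {terabytes:.0f} TB")  # 1.0e+14 cells -> about 800 TB
```

Even a single snapshot of the six-dimensional phase space at this resolution ($N^6$ cells, 8 TB) is beyond ordinary memory, which is the point of the enumeration above.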
\n\nInterestingly enough, the aforementioned difficulties, at least some of them, \nare not intrinsic to the dynamics that we wish to study. In some sense, if \nthe standard approach had not dominated our minds too strongly, we might have \nalready had workable alternative approaches, at least for some special cases. \nIn what follows, we shall try to substantiate this viewpoint. \n\nWe consider a gas consisting of hard spheres and confined to a closed \ncontainer. The interactions between the walls and particles, and between the \nparticles themselves, quickly and firmly bring the gas into equilibrium. \nNamely, the probability of finding particles in a phase volume element $d{\bf \nr}d{\bf v}=dx dy dz dv_x dv_y dv_z$ takes its Maxwell form \n\begin{equation} \label{fm0} f_M\equiv n_0\left(\frac m{2\pi\kappa \nT}\right)^{\frac 32} \exp\left( - \frac{mv^2}{2\kappa T}\right), \n\end{equation}\nwhere $m$ is the mass of a particle, $\kappa$ the Boltzmann constant, $T$ the \ntemperature of the gas and $n_0$ the particle density in the container. 
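A quick numerical sanity check confirms that this Maxwell form integrates to the particle density $n_0$ over velocity space (the parameter values below are arbitrary illustrative choices in dimensionless units, not values from the text):

```python
import numpy as np

# Arbitrary dimensionless parameters: mass m, kappa*T, and density n_0.
m, kT, n0 = 2.0, 1.5, 1.0

def f_M(v):
    """Maxwell distribution of eq. (fm0) as a function of speed v."""
    return n0 * (m / (2.0 * np.pi * kT)) ** 1.5 * np.exp(-m * v**2 / (2.0 * kT))

# Integrate with the spherical-shell element 4*pi*v^2 dv (trapezoid rule);
# the upper limit is many thermal speeds, so the truncated tail is negligible.
v = np.linspace(0.0, 30.0, 300001)
g = 4.0 * np.pi * v**2 * f_M(v)
density = float(np.dot((g[1:] + g[:-1]) / 2.0, np.diff(v)))
assert abs(density / n0 - 1.0) < 1e-5
```

The same normalization is what makes the zeroth-order density of the leaking gas, computed later in the text, come out proportional to $n_0$.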
\n\\vskip15pt \n\n\\hspace{40pt} \\setlength{\\unitlength}{0.015in} \\begin{picture}(200,100) \n\\put(30,100){\\line(1,0){60}}\n\\put(30,10){\\line(1,0){60}}\n\\put(40,20){\\line(1,0){40}}\n\\put(40,90){\\line(1,0){40}}\n\n\n\\put(90,53){\\line(0,-1){43}}\n\\put(90,57){\\line(0,1){43}}\n\\put(90,57){\\line(-2,3){10}}\n\\put(90,53){\\line(-2,-3){10}}\n\n\\multiput(30,10)(2,0){6}{\\line(0,1){90}}\n\\multiput(40,10)(2,0){21}{\\line(0,1){10}}\n\\multiput(40,100)(2,0){21}{\\line(0,-1){10}}\n\n\\put(80,10){\\line(0,1){28}}\n\\put(82,10){\\line(0,1){31}}\n\\put(84,10){\\line(0,1){34}}\n\\put(86,10){\\line(0,1){37}}\n\\put(88,10){\\line(0,1){40}}\n\n\\put(80,100){\\line(0,-1){28}}\n\\put(82,100){\\line(0,-1){31}}\n\\put(84,100){\\line(0,-1){34}}\n\\put(86,100){\\line(0,-1){37}}\n\\put(88,100){\\line(0,-1){40}}\n\n\\multiput(42,23)(0,4){17}{\\circle*{1}}\n\\multiput(46,23)(0,4){17}{\\circle*{1}}\n\\multiput(50,23)(0,4){17}{\\circle*{1}}\n\\multiput(54,23)(0,4){17}{\\circle*{1}}\n\\multiput(58,23)(0,4){17}{\\circle*{1}}\n\\multiput(62,23)(0,4){17}{\\circle*{1}}\n\\multiput(66,23)(0,4){17}{\\circle*{1}}\n\\multiput(70,23)(0,4){17}{\\circle*{1}}\n\\multiput(74,23)(0,4){17}{\\circle*{1}}\n\\multiput(78,23)(0,4){17}{\\circle*{1}}\n\\multiput(82,47)(0,4){5}{\\circle*{1}}\n\\multiput(86,51)(0,4){3}{\\circle*{1}}\n\n\\multiput(95,55)(1.8,0){20}{\\circle*{0.7}}\n\\multiput(94,57)(1.6,0.9){20}{\\circle*{0.7}}\n\\multiput(92.5,58.5)(0.9,1.6){20}{\\circle*{0.7}}\n\\multiput(94,53)(1.6,-0.9){20}{\\circle*{0.7}}\n\\multiput(92.5,51.5)(0.9,-1.6){20}{\\circle*{0.7}}\\end{picture} \n\n\\vspace{-8pt} \\begin{center} \\begin{minipage}{10cm} { \\vskip-0.3cm Figure~1: \nSchematic of a leaking gas.} \\end{minipage} \\end{center} \n\nWe then suppose that there is a small hole on the wall of the container, as \nshown in Fig.~1. The aim of our formalism is to determine the distribution \nfunction of the leaking gas. 
For convenience of discussion, it is further \nassumed that (i) the hole is indeed small so that the leaking is relatively \nslow and the relevant distribution function can be considered to be \ntime-independent; (ii) since the regions far from the hole are kept in the \nvacuum state (by a pump for instance), no incoming particles need to be taken \ninto account; and (iii) the leaking gas outside the container is so dilute \nthat the zeroth-order solution can be determined by the collisionless \ntrajectories of particles while the first-order correction can be formulated \nby assuming the particles of the leaking gas to collide with each other once \nand only once. \n\n\\section{The zeroth-order distribution function} \n\nIf we ignore the particle-to-particle interaction and ignore the \ngravitational force acting on each particle, the collisionless motion of \nevery particle of the leaking gas is conceptually simple---moving along a \nstraight line. However, it is still necessary and interesting to express the \ncorresponding distribution function in a proper mathematical form. \n\nWe start our discussion with the conventional theory. The theory states that \nthe distribution function satisfies, with collisions ignored completely, \n\\begin{equation} \\label{bltz} \\frac{\\partial f}{\\partial t}+{\\bf v}\\cdot \n\\frac {\\partial f} {\\partial {\\bf r}}+ {{\\bf F}\\over m} \\cdot \\frac {\\partial \nf} {\\partial {\\bf v}}=0,\n\\end{equation}\nwhere $\\bf F$ represents the external force. Equation (\\ref{bltz}) is \nsometimes termed the collisionless Boltzmann equation\\cite{reif}. A natural \nnotion related to such terminology is that any procedure, a difference scheme \nfor instance, if applicable in solving the regular Boltzmann equation, must \nbe applicable in solving this reduced equation. Strangely, complicated and \nsubtle issues arise from this `natural' notion; and we shall discuss these \nissues at the end of this section. 
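In the force-free case ${\bf F}=0$, the collisionless equation above is solved by free streaming, $f({\bf r},{\bf v},t)=f_0({\bf r}-{\bf v}t,{\bf v})$. The following one-dimensional numerical check (our own illustration; the initial profile is an arbitrary smooth choice) verifies that the residual of the equation vanishes:

```python
import math

# With F = 0, f(x, v, t) = f0(x - v*t, v) solves df/dt + v df/dx = 0 in 1-D.
# We check the residual by central finite differences at a sample point.
def f0(x, v):
    return math.exp(-x**2) * math.exp(-v**2)   # arbitrary smooth initial data

def f(x, v, t):
    return f0(x - v * t, v)                    # free-streaming solution

x, v, t, h = 0.3, 0.7, 1.2, 1e-5
df_dt = (f(x, v, t + h) - f(x, v, t - h)) / (2 * h)
df_dx = (f(x + h, v, t) - f(x - h, v, t)) / (2 * h)
residual = df_dt + v * df_dx
assert abs(residual) < 1e-8                    # zero up to discretization error
```

This is exactly the path invariance discussed next: along the trajectory $x(t) = x_0 + vt$, the value of $f$ never changes.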
\n\nTo formulate the zeroth-order distribution function, the following `slightly \ndifferent' approach is helpful. Rewriting Equation (\\ref{bltz}) along a \nparticle path in the phase space, we arrive at the path \ninvariance\\cite{reif,harris} \n\\begin{equation} \\label{fm}\\left. \\frac{df}{dt}\\right|_{\\rm path}=0. \n\\end{equation}\nAs far as our leaking gas is concerned, this means $f|_{\\rm path}=f_M$, where \n$f_M$ has been defined by expression (\\ref{fm0}). \\vskip 20pt \n\n\\hspace{90pt} \\setlength{\\unitlength}{0.014in} \\begin{picture}(200,100) \n\n\\put(109,97){\\line(1,-2){5}}\n\\put(107,79){\\makebox(20,8)[l]{$\\Delta \\Omega$}}\n\\put(92,89){\\makebox(20,8)[l]{$\\bf r$}}\n\n\\put(42,100){\\line(0,-1){25.5}}\n\\put(39,100){\\line(0,-1){21}}\n\\put(36,100){\\line(0,-1){16.5}}\n\\put(33,100){\\line(0,-1){12}}\n\\put(30,100){\\line(0,-1){7.5}}\n\\put(42,10){\\line(0,1){25.5}}\n\\put(39,10){\\line(0,1){21}}\n\\put(36,10){\\line(0,1){16.5}}\n\\put(33,10){\\line(0,1){12}}\n\\put(30,10){\\line(0,1){7.5}}\n\n\\put(45,100){\\line(0,-1){30}}\n\\put(45,10){\\line(0,1){30}}\n\\multiput(10,10)(0,5.0){19}{\\circle*{1.5}}\n\\multiput(15,10)(0,5.0){19}{\\circle*{1.5}}\n\\multiput(20,10)(0,5.0){19}{\\circle*{1.5}}\n\\multiput(25,15)(0,5.0){17}{\\circle*{1.5}}\n\\multiput(30,25)(0,5.0){13}{\\circle*{1.5}}\n\\multiput(35,30)(0,5.0){11}{\\circle*{1.5}}\n\\multiput(40,40)(0,5.0){7}{\\circle*{1.5}}\n\n\\multiput(45,43)(0,6){5}{\\line(0,1){3}}\n\\put(45,50){\\line(2,-3){7}}\n\\put(50,32){\\makebox(20,8)[l]{$\\Delta S_0$}}\n\n\\put(45,70){\\line(-2,3){20}}\n\\put(45,40){\\line(-2,-3){20}}\n\\put(45,55){\\vector(3,2){54}}\n\\multiput(45,55)(3,0){15}{\\circle*{1}}\n\\put(56,55.5){\\makebox(20,8)[l]{$\\theta$}}\n\n\\multiput(47,69)(2,0.85){35}{\\circle*{1}}\n\\multiput(47,42)(1.5,1.4){45}{\\circle*{1}}\n\\end{picture} \n\n\\vspace{-5pt}\\begin{center} \\begin{minipage}{10cm} { \\vskip-0.3cm Figure~2: \nThe velocity distribution at the position $\\bf r$.} \\end{minipage} 
\n\\end{center} \n\nThen, it seems that the zeroth-order distribution function on the right side \nof the hole is uniformly identical since any spatial point there is reachable \nalong a path radiating from the inside of the container (the external force \nis zero and the paths in the phase space and in the spatial space are the \nsame). This notion is, however, rather misleading. By moving together with a \nparticle of the leaking gas, an observer easily realizes that the true \nparticle density around him becomes lower and lower. With this realization in \nmind, it can easily be found that the distribution function of the leaking \ngas is strongly limited in the velocity space: the farther from the hole the \nstronger the limitation is. Namely, as shown in Fig.~2, the true implication \nof (\\ref{fm}) is \n\\begin{equation} \\label{cone} f({\\bf r},{\\bf v}) =\\left\\{ \\begin{array}{ll} \nf_M &{\\rm the{\\hskip 4pt} direction{\\hskip 4pt} of {\\hskip 4pt} {\\bf \nv}{\\hskip 4pt} within{\\hskip 4pt} \\Delta\\Omega} \\\\ 0 & {\\rm otherwise} \n\\end{array} \\right. , \n\\end{equation}\nwhere $\\Delta\\Omega$ is a solid-angle range whose size is defined by the \nsolid-angle range \n\\begin{equation} \\label{cone1} \\frac{\\Delta S_0 \\cos \\theta}{r^2} \\quad{\\rm \nwith}\\quad r=|{\\bf r}| \n\\end{equation}\nand whose direction is defined by the position vector ${\\bf r}$ (with the \norigin at the hole). Heuristically and intuitively, it can be said that the \nvelocity distribution at every point there is a sting-like function. In \nalmost all the regions on the right side of the hole, where $r$ is relatively \nlarge or $\\cos\\theta$ is relatively small, the stings tend to be infinitely \nacute. \n\nThough (\\ref{cone}) and (\\ref{cone1}) correctly describe the distribution \nfunction, there are some kinds of inconvenience in applying them in \nanalytical calculations. 
For practical and theoretical reasons, we wish to \nexpress the distribution function $f({\bf r},{\bf v})$ directly and \nexplicitly in terms of ${\bf r}$ and ${\bf v}$. \n\nTo get such an expression, we adopt the picture that the sting-like velocity \ndistributions of the leaking gas are indeed infinitely acute, either by \nassuming the hole to be truly small or by assuming the region of interest to \nbe truly distant. With this understanding, we can use a $\delta$-function to \nreflect the limitation of the distribution function in the velocity space and \nreplace (\ref{cone}) and (\ref{cone1}) by \n\begin{equation} \label{delta} f({\bf r},{\bf v})\equiv \ng(r,v)\delta\left(\Omega_{\bf v}- \Omega_{\bf r} \right) \n\end{equation}\nwith $$ g(r,v) = n_0 \left(\frac m{2\pi\kappa T}\right)^{3\/2} e^{- \n\frac{mv^2}{2\kappa T}} \frac{\Delta S_0 \cos\theta}{ r^2}, $$ where \n$\Omega_{\bf v}$ is the solid angle in the direction of $\bf v$ and \n$\Omega_{\bf r}$ is the solid angle in the direction of $\bf r$. Expression \n(\ref{delta}) enables us to do analytical calculations easily. For instance, \nwith its help we may compute the zeroth-order particle density outside the \ncontainer by \n\begin{equation} n({\bf r})=\int f({\bf r},{\bf v}) d{\bf v} =\int f v^2dv \nd\Omega= n_0\frac{\Delta S_0 \cos\theta}{4\pi r^2}, \n\end{equation}\nwhich is quantitatively consistent with our intuitive notion about the \nparticle density outside the container. \n\nBefore leaving this subject, we return to several common notions \nconcerning the collisionless and regular Boltzmann equations. It is commonly \nbelieved that the collisionless Boltzmann equation (\ref{bltz}) is completely \nequivalent to the path-invariance expressed by (\ref{fm}). In our view, this \nequivalence is just a formal one. 
For one thing, expression (\ref{fm}) makes \nsense strictly along a particle path, which is governed by the collisionless \nequations of motion of a single particle; whereas Boltzmann's equations are supposed \nto be solved by a difference scheme with the help of boundary and initial \nconditions, in which trajectories of individual particles do not play any role. \nFor another, unconventional topology and dynamics are related to (\ref{fm}). \nDistribution functions described by it, such as (\ref{cone}) and \n(\ref{delta}), can be shaped like stings; more than that, these stings \n`spread' out along paths of particles, become sharper and sharper while \nspreading, and continue to spread out after being infinitely sharp. \n(Distributions produced by boundaries can be even `sharper', see Sect.~5.) \nBoltzmann's equations, which contain differential terms, are not \ncompatible with these `radical' features. \n\n\section{The collisional correction} In this section, we try to deal with \ncollisions. To keep the discussion from becoming too detailed, we shall only \nformulate the first-order correction. Namely, it is assumed that the \nparticles described by (\ref{delta}) will collide once and only once \n(although further extension can be made with no difficulty in principle). 
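Before treating collisions, the zeroth-order density formula derived above can be verified numerically (a Python sketch in dimensionless units with $m=\kappa T=1$; the values of $n_0$, $\Delta S_0$, $\theta$ and $r$ are illustrative assumptions). Integrating the radial part of (\ref{delta}) over speeds reproduces $n({\bf r})=n_0\Delta S_0\cos\theta\/(4\pi r^2)$:

```python
import math

# assumed illustrative parameters (dimensionless units, m = kT = 1)
n0, dS0, theta, r = 1.0, 1e-3, 0.4, 5.0

def g(v):
    # radial part g(r, v) of the delta-form distribution (\ref{delta})
    return (n0 * (1.0 / (2.0 * math.pi)) ** 1.5 * math.exp(-0.5 * v * v)
            * dS0 * math.cos(theta) / r ** 2)

# n(r) = \int f d^3v = \int_0^inf g(v) v^2 dv (the delta collapses the angles);
# midpoint-rule quadrature over a truncated speed range
N, vmax = 20000, 12.0
dv = vmax / N
n = sum(g((i + 0.5) * dv) * ((i + 0.5) * dv) ** 2 for i in range(N)) * dv

closed_form = n0 * dS0 * math.cos(theta) / (4.0 * math.pi * r ** 2)
print(n, closed_form)  # the two values agree
```

The quadrature matches the closed form, confirming the $1\/(4\pi r^2)$ geometric dilution.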
\n\n\\hspace{-25pt} \\setlength{\\unitlength}{0.020in} \\begin{picture}(200,100) \n \n\\put(90,53){\\line(0,-1){23}}\n\\put(90,57){\\line(0,1){23}}\n\\put(90,57){\\line(-2,3){10}}\n\\put(90,53){\\line(-2,-3){10}}\n\n\\put(80,30){\\line(0,1){8}}\n\\put(82,30){\\line(0,1){11}}\n\\put(84,30){\\line(0,1){14}}\n\\put(86,30){\\line(0,1){17}}\n\\put(88,30){\\line(0,1){20}}\n\n\\put(80,80){\\line(0,-1){8}}\n\\put(82,80){\\line(0,-1){11}}\n\\put(84,80){\\line(0,-1){14}}\n\\put(86,80){\\line(0,-1){17}}\n\\put(88,80){\\line(0,-1){20}}\n\n\\multiput(82,47)(0,4){5}{\\circle*{1.5}}\n\\multiput(86,51)(0,4){3}{\\circle*{1.5}}\n\n\\multiput(95,55)(1.8,0){14}{\\circle*{0.7}}\n\\multiput(94,57)(1.6,0.9){14}{\\circle*{0.7}}\n\\multiput(92.5,58.5)(0.9,1.6){14}{\\circle*{0.7}}\n\\multiput(94,53)(1.6,-0.9){14}{\\circle*{0.7}}\n\\multiput(92.5,51.5)(0.9,-1.6){14}{\\circle*{0.7}}\n\n\\multiput(150.2,39.4)(16.6,8.2){2}{\\line(1,-2){8}}\n\\multiput(150.2,39.4)(9.2,4.6){2}{\\line(2,1){7}}\n\\put(158,44.25){\\line(-1,5){10}}\n\\put(158,44.25){\\line(-4,5){33}}\n\\put(157,49){\\line(-1,0){3}}\n\\put(156,54){\\line(-1,0){5.5}}\n\\put(155,59){\\line(-1,0){8.5}}\n\\put(154,64){\\line(-1,0){11}}\n\\put(153,69){\\line(-1,0){14.3}}\n\\put(152,74){\\line(-1,0){17.5}}\n\\put(151,79){\\line(-1,0){20.5}}\n\\put(150,84){\\line(-1,0){24}}\n\\put(149,89){\\line(-1,0){12}}\n\n\\put(126,24){\\makebox(20,8)[l]{Detector}}\n\\put(153,79){\\makebox(20,8)[l]{Effective}}\n\\put(153,72){\\makebox(20,8)[l]{cone $-\\Delta\\Omega$}}\n\\end{picture} \n\n\\vspace{-29pt}\\begin{center} \\begin{minipage}{10cm} { \\vskip-0.3cm Figure~3: \nA particle detector is placed in the leaking gas.} \\end{minipage} \n\\end{center} \n\nWhat can be measured in an experiment should be of our first concern. For \nthis reason, we consider a particle detector placed somewhere in the leaking \ngas and try to determine how many particles will be recorded by the detector, \nas illustrated in Fig.~3. 
There are several essential things worth mentioning \nabout the detector. \begin{enumerate} \item The opening allowing particles to \nenter the detector is considered to be sufficiently small. Or, in \nmathematical language, we regard the area of the opening, denoted by $\Delta \nS$, as an infinitesimal quantity throughout this section. \item Without \nspecifying the concrete structure and mechanism of the detector, it is assumed \nthat every particle that enters the detector and is in the velocity range \n\begin{equation} \label{vvolume} \Delta {\bf v}= \Delta v v^2 \Delta \Omega, \n\end{equation}\nwhere $\Delta v$ and $\Delta \Omega$ are predetermined before the \nmeasurement, will be recorded, and every other particle will not. \item While \n$\Delta S$ and $\Delta v$ are allowed to shrink to zero, we define $\Delta \n\Omega$ as a finite solid-angle range. As will be seen, this special treatment \nof $\Delta\Omega$ is adopted almost entirely out of necessity. \end{enumerate} \n\nIf the detector works as described above, do we know the distribution \nfunction at the entry of the detector? The answer is, for the most part, \npositive. If $\Delta N$ is the number of particles that the detector counts \nduring $\Delta t$, the particle density in the phase volume element $(\Delta S v \Delta \nt)(\Delta v v^2 \Delta \Omega)$ is \n\begin{equation} \label{deltaN} f(t,{\bf r}, {\bf v},\Delta \Omega )\approx \n\frac {\Delta N}{(\Delta S v \Delta t)(\Delta v v^2 \Delta \Omega)}.\n\end{equation}\nThe form of (\ref{deltaN}) illustrates one of the most distinctive features \nof this approach: it calculates the distribution function directly \nrather than formulating a dynamic equation for how many particles \nenter and leave a phase volume element. Apart from other advantages, this \nbrings considerable computational convenience. 
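The estimate (\ref{deltaN}) can be sketched in a few lines of Python (all detector numbers below are hypothetical, chosen only to exercise the formula): given a count $\Delta N$, dividing by the phase volume element recovers the distribution function.

```python
# hypothetical detector parameters (not from the text): opening area dS,
# counting time dt, speed window (v, dv), solid-angle window dOmega
dS, dt, v, dv, dOmega = 1e-4, 2.0, 3.0, 0.05, 0.1

f_true = 7.5e2                      # assumed phase-space density at the entry
phase_volume = (dS * v * dt) * (dv * v * v * dOmega)
dN = f_true * phase_volume          # expected number of recorded particles

# inverting (\ref{deltaN}) recovers the distribution function
f_est = dN / ((dS * v * dt) * (dv * v * v * dOmega))
print(f_est)  # ~750.0
```

The round trip is exact here because the sketch ignores counting noise; in a real measurement $\Delta N$ fluctuates and (\ref{deltaN}) holds only on average.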
\n\nNoticing that only the collisions taking place within the shaded spatial cone \n$-\\Delta\\Omega$, which is equal and opposite to the velocity cone $\\Delta \n\\Omega$, can possibly contribute to $\\Delta N$, as shown in Fig.~3, we call \nthe region inside $-\\Delta\\Omega$ the effective cone. Since this effective \ncone is a finite one ($\\Delta \\Omega$ is finite), our primary task is to \ndivide it into many subregions, denoted by $d{\\bf r}'$ (the origin of the \ncoordinates is still at the center of the hole as in Sect.~3), and to \ncalculate how collisions in each of the subregions give contribution to \n$\\Delta N$. \n\nConsider that two beams of identical particles (but still distinguishable in \nterms of classical mechanics) \n\\begin{equation} f(t',{\\bf v}',{\\bf r}')d{\\bf v}' \\quad {\\rm and} \\quad \nf(t',{\\bf v}'_1,{\\bf r}')d{\\bf v}'_1 \n\\end{equation}\ncollide within $d{\\bf r}'$ and at time $t'$, producing particles with \nvelocities ${\\bf v}$ and ${\\bf v}_1$ respectively. It is noted that there is \na time delay \n\\begin{equation} t-t'=\\frac{|{\\bf r}-{\\bf r}'|} {v}\\quad {\\rm with}\\quad \nv=|{\\bf v}| \n\\end{equation}\nbetween the time of collision and the time of detection. Since the leaking is \nassumed to be relatively slow (or the gas inside the container is supplied by \nan external gas source), we think of our problem as a time-independent one \nand pay no more attention to the time variable hereafter. To better observe \nthe collisions, we define \n\\begin{equation} \\label{cu} \\left\\{\\begin{array}{l} {\\bf v}'+{\\bf \nv}'_1=2{\\bf c}'\\cr {\\bf v}'-{\\bf v}'_1=2{\\bf u}' \\end{array}\\right. \\quad \n{\\rm and}\\quad \\left\\{\\begin{array}{l} {\\bf v}+{\\bf v}_1=2{\\bf c}\\cr {\\bf \nv}-{\\bf v}_1=2{\\bf u} \\end{array}\\right. . 
\n\\end{equation}\nThat is to say, by virtue of the conservation laws of energy and momentum, \n${\\bf c}={\\bf c}'$ represents the velocity of the center-of-mass and $u=|{\\bf \nu}|=|{\\bf u}'|$ stands for the particle speed relative to the center-of-mass. \nExamining the beam-to-beam collision in the center-of-mass frame, we find \nthat the differential number of the colliding particles is \n\\begin{equation} \\label{twobeams} [d{\\bf r}'f({\\bf r}',{\\bf v}')d{\\bf v}'] \n[f({\\bf r}',{\\bf v}'_1)d{\\bf v}'_1] [2u\\sigma_c({\\bf u}',{\\bf u}) d \\Omega_c \n\\Delta t ],\n\\end{equation}\nwhere $\\Omega_c$ is the solid angle between ${\\bf u}'$ and ${\\bf u}$ and \n$\\sigma_c$ is the cross section associated with particles emerging within the \nsolid-angle range $d\\Omega_c$. By integrating (\\ref{twobeams}) and getting \nhelp from (\\ref{delta}), the right side of (\\ref{deltaN}) is equal to \n\\begin{equation} \\label{entry} \\int_{-\\Delta \\Omega}d{\\bf r}'\\int_{\\Delta \nv\\Delta \\Omega_0} d\\Omega_c \\int_0^\\infty dv'\\int_0^\\infty dv'_1 \n\\frac{2u\\sigma_c g(r',v') g(r',v'_1)} {(|{\\bf r}-{\\bf r}'|^2 \\Delta \\Omega_0 \nv)\\dot (v^2\\Delta v \\Delta \\Omega )}, \n\\end{equation}\nwhere $\\Delta \\Omega_0$ is the solid-angle range formed by the point $d{\\bf \nr}'$ (as the apex) and the detector opening $\\Delta S$ (as the base), and the \nsubindex $\\Delta v\\Delta \\Omega_0$ there states that only particles that can \nbe recorded by the detector will be taken into account. Since the size of \n$\\Delta \\Omega_0$ is `much smaller' than that of $\\Delta\\Omega$, every \nparticle starting its journey from the effective cone and entering the \ndetector will be treated as one within $\\Delta \\Omega$. 
With help of the \nvariable transformation from $(v',v'_1)$ to $(c',u')$ and finally to $(c,u)$, \nwe rewrite expression (\ref{entry}) as \n\begin{equation} \label{entry1} \int_{-\Delta \Omega}d{\bf r}'\int_{\Delta \nv\Delta \Omega_0} d\Omega_c \int_0^\infty dc \int_{-c}^{c} du \n\frac{2u\sigma_c\cdot \|J\| \cdot g(r',c+u) g(r',c-u)} {(|{\bf r}-{\bf r}'|^2 \n\Delta \Omega_0 v)\cdot (v^2\Delta v \Delta \Omega )}, \n\end{equation}\nin which the Jacobian of the variable transformation is \n\begin{equation} \| J\|=\frac{\partial (v',v'_1)}{\partial (c,u)}=2.\n\end{equation}\nIn view of the symmetry of the cross section there, we have \n\begin{equation} \int_{-c}^c du\cdots =2\int_0^c du \cdots. \n\end{equation}\n\vspace{-0.5cm}\n\n\hspace{30pt} \setlength{\unitlength}{0.013in} \begin{picture}(200,135) \n\n\put(135,93){\makebox(35,8)[l]{${\bf c}$}}\n\put(190,50){\makebox(35,8)[l]{$\bf u$}}\n\put(220,90){\vector(-1,-1){60}}\n\put(70,90){\line(1,0){150}}\n\put(217.5,90.5){\vector(1,0){1}}\n\multiput(70,90)(3.0,-2.3){32}{\circle*{1.2}}\n\multiput(70,90)(3.0,-1.7){35}{\circle*{1.2}}\n\multiput(163,18.7)(1.6,2.4){6}{\circle*{1.2}}\n\multiput(154,42.4)(-1.6,-2.4){5}{\circle*{1.2}}\n\put(70,60){\makebox(35,8)[l]{$\Delta\Omega_0$}}\n\put(140,14){\makebox(35,8)[l]{$\Delta 
v$}}\n\n\\put(143.22,53.87){\\circle*{1}}\n\\put(144.80,50.69){\\circle*{1}}\n\\put(146.52,47.57){\\circle*{1}}\n\\put(148.36,44.53){\\circle*{1}}\n\\put(150.32,41.57){\\circle*{1}}\n\\put(152.41,38.70){\\circle*{1}}\n\\put(154.62,35.91){\\circle*{1}}\n\\put(156.94,33.22){\\circle*{1}}\n\\put(159.38,30.63){\\circle*{1}}\n\\put(161.91,28.15){\\circle*{1}}\n\\put(164.56,25.77){\\circle*{1}}\n\\put(167.29,23.50){\\circle*{1}}\n\\put(170.13,21.35){\\circle*{1}}\n\\put(173.04,19.32){\\circle*{1}}\n\\put(176.04,17.42){\\circle*{1}}\n\\put(179.12,15.64){\\circle*{1}}\n\\put(182.27,14.00){\\circle*{1}}\n\\put(185.49,12.48){\\circle*{1}}\n\\end{picture} \n\n\\begin{center} \\begin{minipage}{10cm} { \\vskip-0.3cm Figure~4: The relation \nbetween the velocity element $v^2\\Delta v \\Delta\\Omega_0 $ and the velocity \nelement $u^2 du d\\Omega_c $.} \\end{minipage} \\end{center} \\vspace{5pt} \n\n\\noindent And, by investigating the situation in the velocity space shown in \nFig.~4, the following relation can be found out: \n\\begin{equation} \\int_{\\Delta v \\Delta \\Omega_0}u^2 du d\\Omega_c \\cdots \n\\approx v^2\\Delta v \\Delta\\Omega_0\\cdots .\n\\end{equation}\nTherefore, the first-order distribution function at the entry of the detector \nis \n\\begin{equation} \\label{final} f({\\bf r}, {\\bf v}, \\Delta \\Omega)=\\frac{1}{v \n\\Delta \\Omega} \\int_{-\\Delta \\Omega}d{\\bf r}' \\int_0^\\infty dc \n\\frac{8\\sigma_c({\\bf u}',{\\bf u})g(r',c+u) g(r',c-u)}{u|{\\bf r}-{\\bf r}'|^2}, \n\\end{equation}\nin which, although $u =|{\\bf c}-{\\bf v}|$, $u1$'' extensions\nnor matter coupling permitted \\cite{CrJuSc78}.\nThe simple $D=5$ SUGRA, besides the metric $\\hat g_{AB}$, contains a\nspin-$\\frac32$ field $\\hat \\Psi^a_A$ ($a=1,2$ is an internal index)\nand $U(1)$ gauge field (one-form $\\hat B_A$) which replaces the\nthree-form gauge field in the $D=11$ SUGRA.\nThe ``primeval'' likeness comes directly from the fact that\nthe Lagrangians of both SUGRAs are exactly of 
the same form,\nexcept for the numbering of the gauge field indices.\nIn addition, their dimensional reduction to $D=4$ can be carried out in\na similar way \\cite{CrJu79}. Furthermore,\nthe $D=5$ simple SUGRA can be realized as a Calabi-Yau compactification\nof the $D=11$ SUGRA together with the truncation of the scalar multiplets,\nwhich is always necessary since there arises at least one scalar\nmultiplet for any Calabi-Yau compactification\n\\cite{CaCeAuFe95,FeKhMi96}.\nFurther resemblances between the two SUGRAs are related to the duality\ngroups upon dimensional reduction and the world sheet structure of the\nsolitonic string of the $D=5$ SUGRA \\cite{MiOh98}.\n\nThus the four-dimensional reduced effective action of the $N=2, D=5$\nSUGRA contains an additional Maxwell-like $U(1)$ field and a scalar\nfield regarded as external fields in five dimensions which are contributed\nby $\\hat B$, besides the ones coming from the metric\n$\\hat g_{AB}$ as in the traditional scheme for the Kaluza-Klein theory\n\\cite{AuFrMaRe82,AuMaReFr81,BaFaKe90a,BaFaKe90b}.\nCosmological solutions to this model have been previously considered\nby Balbinot, Fabris and Kerner \\cite{BaFaKe90a,BaFaKe90b}.\nFor the case of spatial homogeneity and isotropy the general\nsolution is non-singular in the scale factor, but unstable due to the\ncollapse to zero of the size of the fifth dimension\n\\cite{BaFaKe90a,BaFaKe90b}.\nBiaxial (with two equal scale factors) anisotropic solutions with a\ncylindrical homogeneous five-dimensional metric lead to singular\nsolutions with positive gravitational coupling \\cite{BaFaKe90b}.\nRecently, an explicit example of a manifestly $U$-duality covariant\nM-theory cosmology in five dimensions resulting from compactification\non a Calabi-Yau three-fold has been obtained in \\cite{LuOvWa98}.\nExact static solutions in $N=2,D=5$ SUGRA have been found by Pimentel\n\\cite{Pi95}, in a metric with cylindrical symmetry, with a particular\ncase corresponding to the 
exterior of a cosmic string.\n\nThe purpose of the present paper is to construct the general solution\nto the gravitational field equations of the $N=2, D=5$ SUGRA as\nformulated in \cite{BaFaKe90a,BaFaKe90b} for an anisotropic triaxial\n(all directions have unequal scale factors) Bianchi type I space-time.\nIn this case the general solution of the field equations can be\nexpressed in an exact parametric form.\nFor all cosmological solutions, the singularity at the starting\/ending\ntime of the evolution cannot be avoided except in the isotropic limit\nconsidered in \cite{BaFaKe90a,BaFaKe90b}.\nNevertheless, in the models analyzed in this paper, the anisotropic\nUniverse has non-inflationary evolution for all times and for\nall values of the parameters.\n\nThe present paper is organized as follows.\nThe field equations of our model are written down in Section II.\nIn Section III the general solution of the field equations is obtained.\nWe discuss our results and conclusions in Section IV.\n\n\section{Field Equations, Geometry and Consequences}\nThe bosonic sector of $N=2, D=5$ SUGRA contains the five-dimensional\nmetric $\hat g_{AB}$ and the $U(1)$ gauge field $\hat B_A$, described by\na Lagrangian which possesses a non-vanishing Chern-Simons term \cite{Cr81}\n\begin{eqnarray} \label{L5}\n\hat {\cal L} &=& \sqrt{-\hat g} \left\{ \hat R\n - \frac14 \hat F_{AB} \hat F^{AB} \right\} \nonumber \\\n &-& \frac1{12\sqrt{3}} \epsilon^{ABCDE}\hat F_{AB}\hat F_{CD}\hat B_E,\n\end{eqnarray}\nwhere $\hat F_{AB} = 2\partial_{[A} \hat B_{B]}$.\nIn this paper we use the following conventions and notations.\nVariables with hats are five-dimensional objects; all other variables\nare four-dimensional. Upper case indices $A,B,...$ are\nused for the five-dimensional space-time, while Greek indices\n$\mu,\nu,...$ and lower case indices $i,j,...$ are for the\nfour-dimensional space-time and three-dimensional space,\nrespectively. 
The signature is $(-,+,+,+,+)$.\n\nAssuming that the five dimensional space-time has locally the structure\nof $M^4 \\times S^1$ with a four-dimensional space-time $M^4$\nwhose spatial sections are homogeneous and asymptotic flat, then\nthe five-dimensional metric can be decomposed along the standard\nKaluza-Klein pattern\n\\begin{equation}\nd \\hat s^2 = \\phi^2 (dx_4 + A_\\mu dx^\\mu)^2 + g_{\\mu\\nu} dx^\\mu dx^\\nu,\n\\end{equation}\nwhere the scale factor $\\phi$ and Kaluza-Klein vector $A_\\mu$ are\nfunctions depending on $x^\\mu$ only.\n\nLooking for a ``ground state'' configuration we set, following\n\\cite{BaFaKe90a,BaFaKe90b}, the Kaluza-Klein vector, $A_\\mu$, equal\nto zero and take the one-form potential $\\hat B_A$ to be $\\hat B_\\mu=0$\nand $\\hat B_4= \\sqrt{3} \\psi(x^\\mu)$.\nUnder this ans\\\"atz, the five-dimensional gravitational field equations\nfor (\\ref{L5}) reduce to a set of four-dimensional equations\n\\begin{eqnarray}\nR_{\\mu\\nu} &-& \\phi^{-1} D_\\mu D_\\nu \\phi \\nonumber\\\\\n &-& \\frac12 \\phi^{-2} \\left[ 3 \\partial_\\mu \\psi \\partial_\\nu \\psi\n - g_{\\mu\\nu} (\\partial\\psi)^2 \\right] = 0, \\label{FEA1} \\\\\nD^2 \\phi &+& \\phi^{-1} (\\partial\\psi)^2 = 0, \\label{FEA2} \\\\\nD^2 \\psi &-& \\phi^{-1} \\partial_\\mu \\phi \\partial^\\mu \\psi = 0,\n \\label{FEA3}\n\\end{eqnarray}\nwhere $D$ denotes the four-dimensional covariant derivative with\nrespect to the metric $g_{\\mu\\nu}$.\nEquivalently, the field equations can be re-derived,\nin the string frame, from the four-dimensional Lagrangian\n\\cite{BaFaKe90a,BaFaKe90b}\n\\begin{equation} \\label{L4}\n{\\cal L} = \\sqrt{-g} \\phi \\left\\{ R\n - \\frac32 \\phi^{-2} (\\partial\\psi)^2 \\right\\},\n\\end{equation}\nvia variation with respect to the fields $g_{\\mu\\nu}, \\phi$ and $\\psi$.\nIn the Lagrangian (\\ref{L4}), the scale factor $\\phi$ is an analogue\nof the Brans-Dicke field whereas the origin of $\\psi$ is purely\nsupersymmetric.\n\nThe line element of an 
anisotropic homogeneous flat Bianchi type I\nspace-time is given by\n\\begin{equation}\nds^2 = - dt^2 + a_1^2(t) dx^2 + a_2^2(t) dy^2 + a_3^2(t) dz^2.\n\\end{equation}\nDefining the ``volume scale factor'', $V := \\prod_i a_i$,\n``directional Hubble factors'', $H_i := \\dot a_i\/a_i$,\nand ``average Hubble factor'', $H := \\frac13 \\sum_i H_i$, one can\npromptly find the relation $3H = \\dot V\/V$,\nwhere dot means the derivative with respect to time $t$.\nIn terms of those variables, the field equations\n(\\ref{FEA1}) and the equations of motion for $\\phi$ and $\\psi$\n(\\ref{FEA2},\\ref{FEA3}) coupling with the anisotropic Bianchi type I\ngeometry take the concise forms\n\\begin{eqnarray}\n3 \\dot H + \\sum_i H_i^2 + \\phi^{-1} \\ddot \\phi\n + \\phi^{-2} \\dot \\psi^2 &=& 0, \\label{FEB1} \\\\\nV^{-1} \\frac{d}{dt} (VH_i) + H_i \\phi^{-1} \\dot\\phi\n - \\frac12 \\phi^{-2} \\dot \\psi^2 &=& 0, \\; i=1,2,3,\n \\label{FEB2} \\\\\nV^{-1} \\frac{d}{dt} (V\\dot\\phi) + \\phi^{-1} \\dot \\psi^2\n &=& 0, \\label{FEB3} \\\\\nV^{-1} \\frac{d}{dt} (V\\dot\\psi) - \\phi^{-1} \\dot\\phi \\dot\\psi\\\n &=& 0. \\label{FEB4}\n\\end{eqnarray}\n\nThe physical quantities of interest in cosmology are the {\\em expansion\nscalar} $\\theta$, the {\\em mean anisotropy parameter} $A$,\nthe {\\em shear scalar} $\\sigma^2$ and the {\\em deceleration parameter}\n$q$ defined as \\cite{Gr85}\n\\begin{eqnarray}\n\\theta &:=& 3H, \\qquad\nA := \\frac13 \\sum_i \\left( \\frac{H-H_i}{H} \\right)^2, \\nonumber \\\\\n\\sigma^2 &:=&\n \\frac12 \\left( \\sum_i H_i^2 - 3 H^2 \\right), \\quad\nq := \\frac{d}{dt} H^{-1} - 1. \\label{Def}\n\\end{eqnarray}\n\nThe sign of the deceleration parameter indicates whether the cosmological\nmodel inflates. 
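The diagnostics (\ref{Def}) can be evaluated numerically (a Python sketch with hypothetical power-law scale factors $a_i=t^{p_i}$, which are not a solution of the present model); the identity $\sigma^2 = \frac32 A H^2$ quoted later also emerges here.

```python
# diagnostics (\ref{Def}) for illustrative power-law scale factors
# a_i(t) = t**p_i (hypothetical exponents, chosen only for the sketch)
p = [0.5, 0.3, 0.2]
t = 2.0

H_i = [p_i / t for p_i in p]                    # directional Hubble factors
H = sum(H_i) / 3.0                              # average Hubble factor
theta = 3.0 * H                                 # expansion scalar
A = sum(((H - h) / H) ** 2 for h in H_i) / 3.0  # mean anisotropy parameter
sigma2 = 0.5 * (sum(h * h for h in H_i) - 3.0 * H * H)  # shear scalar
q = 3.0 / sum(p) - 1.0                # q = d(1/H)/dt - 1 for H = (sum p)/(3t)

print(round(A, 4), round(q, 4))  # 0.14 2.0
```

For these exponents $q>0$, i.e. decelerating expansion, and the shear obeys $\sigma^2=\frac32 A H^2$ to machine precision.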
A positive sign corresponds to standard decelerating\nmodels whereas a negative sign indicates inflationary behavior.\n\n\\section{General Solution of the Field Equations}\nEquation (\\ref{FEB4}) can immediately be integrated to give\n\\begin{equation}\nV \\dot \\psi = \\omega \\phi, \\label{VPP}\n\\end{equation}\nwith $\\omega$ --- a constant of integration.\nFrom equations (\\ref{FEB3}) and (\\ref{FEB4}) one can find that the\nexpressions of the fields $\\phi(t)$ and $\\psi(t)$ have the following form\n\\begin{eqnarray}\n\\phi(t) &=& \\phi_0 \\cos \\left( \\omega \\int \\frac{dt}{V}\n + \\omega_0 \\right), \\label{phi} \\\\\n\\psi(t) &=& \\psi_0 + \\phi_0 \\sin \\left( \\omega \\int \\frac{dt}{V}\n + \\omega_0 \\right), \\label{psi}\n\\end{eqnarray}\nwhere $\\phi_0, \\psi_0$ and $\\omega_0$ are constants of integration.\n\nBy summing equations (\\ref{FEB2}) one gets\n\\begin{equation}\nV^{-1} \\frac{d}{dt} (VH) + H \\phi^{-1} \\dot\\phi\n - \\frac12 \\phi^{-2} \\dot \\psi^2 = 0, \\label{FEB2a}\n\\end{equation}\nwhich can be transformed, by using equations (\\ref{VPP}) and (\\ref{phi}),\ninto the following differential-integral equation\ndescribing the dynamics and evolution\nof a triaxial Bianchi type I space-time in $N=2,D=5$ SUGRA:\n\\begin{equation} \\label{EqV}\n\\ddot V = \\omega \\frac{\\dot V}{V} \\tan \\left( \\omega \\int \\frac{dt}{V}\n + \\omega_0 \\right) + \\frac32 \\frac{\\omega^2}{V}.\n\\end{equation}\n\nFurthermore, by subtracting equation (\\ref{FEB2a}) from equations\n(\\ref{FEB2}), one can solve for the $H_i$ as\n\\begin{equation}\nH_i = H + \\frac{K_i}{\\phi V}, \\qquad i=1,2,3, \\label{Hi}\n\\end{equation}\nwhere $K_i$ are constants of integration satisfying the following\nconsistency condition\n\\begin{equation}\n\\sum_i K_i = 0. 
\\label{Ki}\n\\end{equation}\nTherefore the physical quantities of interest (\\ref{Def}) reduce to\n\\begin{equation}\nA = \\frac{K^2}{3\\phi^2 V^2 H^2}, \\qquad\n\\sigma^2 = \\frac32 A H^2,\n\\end{equation}\nwhere $K^2 = \\sum_i K_i^2$.\n\nBy introducing a new variable $\\eta$ related to the physical time $t$\nby means of the transformation $d\\eta := dt\/V$ and by denoting\n$u:=\\dot V = dV\/(V d\\eta)$, equation (\\ref{EqV}) reduces to a first\norder linear differential equation for the unknown function $u$\n\\begin{equation}\n\\frac{du}{d\\eta} = \\omega \\tan( \\omega\\eta + \\omega_0) u\n + \\frac32 \\omega^2,\n\\end{equation}\nwhose general solution is given by\n\\begin{equation}\nu = C \\cos^{-1}(\\omega\\eta+\\omega_0)\n + \\frac32 \\omega \\tan(\\omega\\eta+\\omega_0), \\label{dotV}\n\\end{equation}\nwhere $C$ is an arbitrary constant of integration.\n\nDefining a new parameter $\\zeta(\\eta):=\\omega\\eta+\\omega_0$,\nwe can represent the general solution of\nthe field equations for a Bianchi type I space-time in\nthe $N=2,D=5$ SUGRA in the following exact parametric form:\n\\begin{eqnarray}\nt &=& t_0 + \\frac{V_0}{\\omega} \\int\n \\frac{(1+\\sin\\zeta)^\\beta}{(1-\\sin\\zeta)^\\gamma} d\\zeta,\n \\label{T} \\\\\nV &=& V_0 \\frac{(1+\\sin\\zeta)^\\beta}{(1-\\sin\\zeta)^\\gamma},\n \\label{V} \\\\\nH &=& \\frac{\\omega}{2V_0} (\\alpha + \\sin\\zeta)\n \\frac{(1-\\sin\\zeta)^{\\gamma-\\frac12}}{(1+\\sin\\zeta)^{\\beta+\\frac12}}, \\\\\na_{i} &=& a_{i0}\n \\frac{(1+\\sin\\zeta)^{\\frac{\\beta}3+\\frac{K_i}{2\\omega\\phi_0}}}\n {(1-\\sin\\zeta)^{\\frac{\\gamma}3+\\frac{K_i}{2\\omega\\phi_0}}},\n \\quad i=1,2,3, \\label{ai}\n\\end{eqnarray}\nwhere we have denoted $\\alpha = \\frac{2C}{3\\omega}$,\n$\\beta=\\frac34(\\alpha-1)$, $\\gamma=\\frac34(\\alpha+1)$ and the $a_{i0}$\nare arbitrary constants of integration while $V_0=\\Pi_i a_{i0}$.\nThe observationally important physical quantities are given by\n\\begin{eqnarray}\nA &=& \\frac{4K^2}{3\\phi_0^2 
\omega^2}\n \left( \alpha + \sin\zeta \right)^{-2}, \label{AA} \\\n\sigma^2 &=& \frac{K^2}{2\phi_0^2 V_0^2}\n \frac{(1-\sin\zeta)^{2\gamma-1}}{(1+\sin\zeta)^{2\beta+1}}, \\\nq &=& 2 \left\{ 1 - \frac{ 1 + \alpha \sin\zeta}\n {(\alpha + \sin\zeta)^2} \right\}. \label{qq}\n\end{eqnarray}\nFinally, the field equation (\ref{FEB1}) gives a\nconsistency condition relating the constants $K^2, \omega, \alpha$ and\n$\phi_0$:\n\begin{equation}\nK^2 = \frac32 \phi_0^2 \omega^2 \left( \alpha^2 - 1 \right), \label{K2}\n\end{equation}\nleading to\n\begin{equation}\n\alpha \ge 1 \quad \hbox{or} \quad \alpha \le -1.\n\end{equation}\n\nIt is worth noting that the two classes of solutions corresponding to\npositive or negative values of $\alpha$ and $\omega$ are not independent.\nIndeed, they can be related via a ``duality'' transformation by\nchanging the signs of $\omega, \alpha$ and $\zeta$ so that\n$t(\omega,\alpha,\zeta)=t(-\omega,-\alpha,-\zeta)$,\n$a_i(\omega,\alpha,\zeta)=a_i(-\omega,-\alpha,-\zeta), i=1,2,3$,\nand $V(\alpha,\zeta)=V(-\alpha,-\zeta)$, etc.\nThis duality relation can be obtained by simple inspection of equations\n(\ref{T})-(\ref{ai}); therefore, all physical quantities are\ninvariant with respect to this transformation.\nMoreover, the physical properties of the cosmological\nmodels presented here are strongly dependent on the signs of the\nparameters $\alpha$ and $\omega$.\nNevertheless, due to the duality transformation, hereafter we will consider,\nwithout loss of generality, only the cases with positive $\alpha$.\n\nFor some particular values of $\alpha$, the general solution\ncan be expressed in an exact non-parametric form; for instance,\nan exact class of solutions can be obtained for $\alpha = \pm \frac53$.\nBy introducing a new time variable $\tau:=\frac{3\sqrt{2}\omega}{V_0}t$,\nand choosing $t_0=\mp\frac{V_0}{3\sqrt{2}\omega}$,\nthe exact solution in 
$N=2,D=5$ SUGRA\nfor the Bianchi type I space-time is given by\n\\begin{eqnarray}\n\\tau &=& \\pm \\left[ \\cos^{-3} \\left( \\frac{\\zeta}2 \\pm\\frac{\\pi}4 \\right)\n -1 \\right], \\\\\nV &=& \\frac{V_0}{2\\sqrt{2}} (1\\pm\\tau) \\left[ (1\\pm\\tau)^\\frac23 - 1\n \\right]^\\frac12, \\\\\nH &=& \\frac{\\sqrt{2}\\omega}{V_0} \\frac{\\frac43 (1\\pm\\tau)^\\frac23 -1}\n {(1\\pm\\tau)\\left[ (1\\pm\\tau)^\\frac23 - 1\\right]}, \\\\\na_i &=& \\frac{a_{i0}}{\\sqrt{2}} (1\\pm\\tau)^\\frac13 \\left[\n (1\\pm\\tau)^\\frac23 - 1 \\right]^{\\frac16\\pm\\frac{K_i}{2\\omega\\phi_0}}, \\\\\nA &=& \\frac89 (1\\pm\\tau)^\\frac43 \\left[ \\frac43 (1\\pm\\tau)^\\frac23 - 1\n \\right]^{-2}, \\\\\n\\sigma^2 &=& \\frac{8\\omega^2}{3V_0^2} (1\\pm\\tau)^{-\\frac23}\n \\left[ (1\\pm\\tau)^\\frac23 - 1 \\right]^{-2}, \\\\\nq &=& 2 \\left\\{ 1-\\frac{(1\\pm\\tau)^\\frac23\\left[4(1\\pm\\tau)^\\frac23-5\\right]}\n {6\\left[\\frac43(1\\pm\\tau)^\\frac23-1\\right]^2} \\right\\}.\n\\end{eqnarray}\n\nThe isotropic limit can be achieved by taking $\\alpha=\\pm 1$\nand, consequently, $K_i=0,i=1,2,3$.\nIt is worth noting that our solutions reduce to two different\ntypes of homogeneous space-times when $\\alpha=\\pm 1$.\n\nFor $\\alpha=1$, we obtain (by denoting $a_1=a_2=a_3=a$)\n\\begin{eqnarray}\nt &=& t_0+\\frac{V_0}{2\\sqrt{2}\\omega}\\left[ \\frac{\\sin\\theta}{\\cos^2\\theta}\n + \\ln(\\tan\\theta+\\sec\\theta) \\right], \\label{IT1} \\\\\na &=& \\frac{a_0}{2\\sqrt{2}} \\cos^{-3}\\theta, \\label{IA1}\n\\end{eqnarray}\nwhere $\\theta:=\\frac{\\zeta}2 + \\frac{\\pi}4$.\nEqs.(\\ref{IT1}) and (\\ref{IA1}), describing a homogeneous flat isotropic\nspace-time interacting with two scalar fields (Kaluza-Klein and\nsupersymmetric), have been previously obtained by Balbinot, Fabris and\nKerner \\cite{BaFaKe90a} (for an extra choice of the parameter\n$\\omega_0=\\pi\/2$), who extensively studied their physical properties.\nThis isotropic solution also provides a positive gravitational coupling\nat the 
present time.\n\nIn the isotropic limit corresponding to $\alpha=-1$, one can obtain\nanother class of isotropic homogeneous flat space-times, represented in\nthe following parametric form by\n\begin{eqnarray}\nt &=& t_0-\frac{V_0}{2\sqrt{2}\omega}\left[ \frac{\cos\theta}{\sin^2\theta}\n - \ln(\csc\theta-\cot\theta) \right], \label{IT2} \\\na &=& \frac{a_0}{2\sqrt{2}} \sin^{-3}\theta. \label{IA2}\n\end{eqnarray}\nThis type of flat space-time has not been previously considered.\n\n\section{Discussions and Final Remarks}\nIn order to study the physical properties of the Bianchi type I Universe\ndescribed by Eqs. (\ref{V})-(\ref{ai}), we first need to fix the range\nof variation of the parameter $\zeta$. There are no a priori limitations\nin choosing the admissible range of values; both positive and negative\nvalues are permitted, since the variable $\eta=\int dt\/V$ can also\nbe negative.\nBut from a physical point of view it is natural to require\nthat the gravitational coupling $\phi$ be always positive during\nthe evolution of the Bianchi type I space-time in $N=2,D=5$ SUGRA.\nConsequently, we shall consider $\zeta\in (-\pi\/2, \pi\/2)$.\nWith this choice, the Universe for $\alpha<-1$ starts its evolution\nin the infinite past ($t\to -\infty$) and ends at a finite moment $t=t_0$.\nFor $\alpha>1$ the Universe starts at $t=t_0$ and ends in the infinite\nfuture with $t \to \infty$. 
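These limits can be read off directly from the parametric volume factor (\ref{V}) (a Python check with the illustrative choice $\alpha=5\/3$, $V_0=1$): for $\alpha>1$ the volume vanishes as $\zeta\to-\pi\/2$ and diverges as $\zeta\to\pi\/2$.

```python
import math

alpha, V0 = 5.0 / 3.0, 1.0            # illustrative choice with alpha > 1
beta, gamma = 0.75 * (alpha - 1.0), 0.75 * (alpha + 1.0)

def V(zeta):
    # volume scale factor (\ref{V}) of the parametric solution
    s = math.sin(zeta)
    return V0 * (1.0 + s) ** beta / (1.0 - s) ** gamma

# V -> 0 at the singular start (zeta -> -pi/2) and V grows without bound
# toward the infinite future (zeta -> +pi/2) when alpha > 1
print(V(-math.pi / 2 + 1e-3), V(math.pi / 2 - 1e-3))
```

Both factors of $V(\zeta)$ are increasing in $\sin\zeta$ for $\beta,\gamma>0$, so the expansion is monotonic over the whole range.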
(All discussion here and hereafter is with\nrespect to positive $\omega$.)\n\nAs can be easily seen from equation (\ref{dotV}), if $\alpha<-1$\nwe have $\dot V < 0$ for all $t$.\nFor these values of the parameters the\nBianchi type I anisotropic Universe collapses from an initial state\ncharacterized by infinite values of the volume scale factor and of the\nscale factors $a_i,i=1,2,3,$ to a singular state\nwith all factors vanishing.\nBut if $\alpha>1$, the Universe expands and $\dot V > 0$ for all $t$.\nThe expanding Bianchi type I Universe starts its evolution at the\ninitial moment, $t_0$, from a singular state with zero values of the\nscale factors, $a_i(t_0)=0,i=1,2,3$.\n\nAnother possible way to investigate the singularity behavior at the\ninitial moment is to consider the sign of the quantity\n$R_{\mu\nu} u^\mu u^\nu$, where $u^\mu$ is the vector tangent to the\ngeodesics; $u^\mu=(-1,0,0,0)$ for the present model.\nFrom the gravitational field equations we easily obtain\n\begin{equation}\nR_{\mu\nu} u^\mu u^\nu = 3 H^2 (q-A). 
\label{Ruu}\n\end{equation}\nBy using equations (\ref{AA}), (\ref{qq}) and (\ref{K2}) we can\nexpress (\ref{Ruu}) as\n\begin{equation}\nR_{\mu\nu} u^\mu u^\nu\n = 6H^2 \frac{\sin\zeta}{\alpha+\sin\zeta}.\n\end{equation}\nFor $\zeta \to -\pi\/2$ the sign of $R_{\mu\nu}u^\mu u^\nu$ is determined\nby the sign of $1-\alpha$.\nTherefore we obtain\n\begin{equation}\nR_{\mu\nu} u^\mu u^\nu < 0 \quad \hbox{for} \quad \alpha > 1.\n\end{equation}\nHence, the energy condition of the Hawking-Penrose singularity theorems\n\cite{HaEl73} is not satisfied for the solutions corresponding to Bianchi\ntype I Universes in the four-dimensional reduced two-scalar-field theory of\n$N=2,D=5$ SUGRA with $\alpha>1$.\nNevertheless, for those solutions an initial singular state is\n{\em unavoidable} at the initial moment $t_0$.\n\nSince for $\alpha>1$ the Bianchi type I Universe starts its evolution at\nthe initial moment $t=t_0$ ($\zeta \to -\pi\/2$) from a singular state,\nthe presence of a variable gravitational coupling, $\phi$,\nand of a supersymmetric field, $\psi$, in an anisotropic geometry {\em cannot}\nremove the initial singularity that mars the big-bang cosmology.\nAt the initial moment the degree of the anisotropy of the space-time\nis maximal, with the initial value of the anisotropy\nparameter $A(t_0)=2(\alpha+1)\/(\alpha-1)$.\nFor $t>t_0$ the Universe expands and the anisotropy parameter decreases.\n\nThe behavior of the volume scale factor, of the anisotropy parameter\nand of the deceleration parameter is presented for different values of\n$\alpha$ in Figs. \ref{FIG1}-\ref{FIG3}.\nThe evolution of the Universe is non-inflationary, with $q>0$\nfor all $t>t_0$. 
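The sign analysis of $R_{\mu\nu}u^\mu u^\nu = 6H^2\sin\zeta/(\alpha+\sin\zeta)$ near $\zeta=-\pi/2$ can also be confirmed numerically. The sketch below is our own illustration and uses the value $H=1$, which only fixes the positive prefactor $6H^2$ and does not affect the sign:

```python
import math

def R_uu(zeta, alpha, H=1.0):
    """R_{mu nu} u^mu u^nu = 6 H^2 sin(zeta) / (alpha + sin(zeta)).
    H = 1 is an illustrative value; it only fixes the positive prefactor."""
    return 6.0 * H ** 2 * math.sin(zeta) / (alpha + math.sin(zeta))

# Near zeta = -pi/2 we have sin(zeta) -> -1, so for alpha > 1 the numerator is
# negative while the denominator stays positive: the energy condition fails.
for alpha in (1.5, 2.0, 2.5):
    assert R_uu(-math.pi / 2 + 1e-3, alpha) < 0

# For alpha < -1 the denominator is negative as well, and the sign flips,
# in agreement with the "sign of 1 - alpha" rule stated above.
assert R_uu(-math.pi / 2 + 1e-3, -2.0) > 0
print("sign of R_uu near zeta = -pi/2 matches the 1 - alpha rule")
```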
Non-inflationary behavior is a generic feature of most\nsupersymmetric models.\nThis is due to the general fact that the effective potential for the\ninflaton field $\sigma$ in SUGRA is typically too curved,\ngrowing as $\exp (c \sigma^2\/m)$, with $c$ a parameter\n(typically $O(1)$) and $m$ the stringy Planck mass \cite{LiRi97}.\nThe typical values of $c$ make inflation impossible because the\ninflaton mass becomes of the order of the Hubble constant.\nSee also \cite{LiRi97} for a simple realization of hybrid inflation\nin SUGRA.\nIn the present model the presence of the supersymmetric field $\psi$\nprevents the Bianchi type I Universe from inflating.\n\n\begin{figure}\n\epsfxsize=10cm\n\epsffile{fig1.eps}\n\caption{Behavior of the volume scale factor of the Bianchi type I\n space-time for different values of $\alpha>1$\n ($V_0=1$ and $\omega=1$): $\alpha=3\/2$ (solid curve),\n $\alpha=2$ (dotted curve) and $\alpha=5\/2$ (dashed curve).}\n\label{FIG1}\n\end{figure}\n\n\begin{figure}\n\epsfxsize=10cm\n\epsffile{fig2.eps}\n\caption{Time dependence of the parameter\n $a=\frac{3\phi_0^2\omega^2}{4K^2} A$ for different values of\n $\alpha$: $\alpha=3\/2$ (solid curve),\n $\alpha=2$ (dotted curve) and $\alpha=5\/2$ (dashed curve).}\n\label{FIG2}\n\end{figure}\n\n\begin{figure}\n\epsfxsize=10cm\n\epsffile{fig3.eps}\n\caption{Evolution of the deceleration parameter $q$ of the Bianchi\n type I space-time for different values of $\alpha$:\n $\alpha=3\/2$ (solid curve),\n $\alpha=2$ (dotted curve) and $\alpha=5\/2$ (dashed curve).}\n\label{FIG3}\n\end{figure}\n\nIn the far future, for $\zeta \to \pi\/2$ and $t\to \infty$ ($\alpha>1$),\nwe have $V \to \infty, a_i \to\infty, i=1,2,3$.\nIn this limit the anisotropy parameter becomes a non-zero constant and\nthe Universe ends in a still anisotropic phase, but with\na decrease in the value of the anisotropy parameter $A$,\nas compared with the initial one.\nTherefore 
during its evolution the Bianchi type I Universe cannot\nexperience a transition from the anisotropic phase to the isotropic\nflat geometry.\nThe time evolution of the gravitational coupling $\phi$ and of the\nsupersymmetric field $\psi$ is represented in Fig. \ref{FIG4}.\nThe $\phi$-field is positive for all values of time.\n\n\begin{figure}\n\epsfxsize=10cm\n\epsffile{fig4.eps}\n\caption{Time evolution of the gravitational coupling $\phi$ (solid curve)\n and of the supersymmetric field $\psi$ (dashed curve)\n for $\alpha=5\/2$ ($\phi_0=1$, $\psi_0=0$).}\n\label{FIG4}\n\end{figure}\n\nIn the present paper we have investigated the evolution and dynamics\nof a Bianchi type I space-time in a SUGRA toy-model, obtained\nby dimensional reduction of the $N=2,D=5$ SUGRA.\nThe inclusion of the supersymmetric term gives some particular\nfeatures to this cosmological model, by preventing the Universe from\ninflating and from attaining complete isotropy.\nBut globally there is a decrease in the degree of anisotropy of the\ngeometry. Hence this model can be used to describe only a specific,\nwell-determined period of the evolution of our Universe.\n\n\section*{Acknowledgments}\nOne of the authors (CMC) would like to thank Prof. J.M. Nester for\nfruitful discussions.\nThe work of CMC was supported in part by the National Science Council\n(Taiwan) under grant NSC 89-2112-M-008-016.\n\nWe are also grateful to Prof. Pimentel for calling our attention to the\nresults \cite{Pi92,PiSo93,PiSo95} about several types of Bianchi\ncosmologies in the framework of $N=2,D=5$ supergravity.\n\n\begin{appendix}\n\section{Some Exact Forms for Physical Time}\nFor a large class of values of the parameter $\alpha$ the general\nsolution of the gravitational field equations can be expressed in a\nclosed explicit form.\nThe variation of the physical time $t$ is determined by the integral\nequation (\ref{T}). 
After some algebraic manipulation, one can rewrite this\nequation in the following form\n\begin{equation}\nt = t_0 + \frac{V_0}{\sqrt{2}\omega} \int \sin^{\gamma'-3}\theta\n \cos^{-\gamma'}\theta d \theta,\n\end{equation}\nwhere $\theta:=\frac{\zeta}2+\frac{\pi}4$ and\n$\gamma':=2\gamma=\frac32(\alpha+1)$.\nIn general, for arbitrary $\gamma'$, this integral cannot be evaluated in\nclosed form.\nFortunately, for integer values of $\gamma'$, the physical time $t$ can\nbe expressed in an explicit form as a function of $\zeta$. Some of these\nexact forms of the time function are listed below.\n(The results for $\alpha=\pm1$ are given in (\ref{IT1}) and (\ref{IT2}).)\n\n\noindent{\bf For $\alpha < -1$ :}\n\n\noindent{(i) $\alpha=-\frac53, \; (\gamma'=-1),$}\n$$ t=t_0-\frac{V_0}{3\sqrt{2}\omega} \frac1{\sin^3\theta}, $$\n\n\noindent{(ii) $\alpha=-\frac73, \; (\gamma'=-2),$}\n$$\nt=t_0-\frac{V_0}{8\sqrt{2}\omega}\Biggl[\n \frac{\cos\theta(\cos^2\theta+1)}{\sin^4\theta}\n + \ln(\csc\theta-\cot\theta) \Biggr],\n$$\n\n\noindent{(iii) $\alpha=-3, \; (\gamma'=-3),$}\n$$\nt=t_0-\frac{V_0}{15\sqrt{2}\omega}\Biggl(\n \frac{5\cos^2\theta-2}{\sin^5\theta} \Biggr).\n$$\n\n\noindent{\bf For $\alpha > 1$ :}\n\n\noindent{(i) $\alpha=\frac53, \; (\gamma'=4),$}\n$$ t=t_0+\frac{V_0}{3\sqrt{2}\omega} \frac1{\cos^3\theta}, $$\n\n\noindent{(ii) $\alpha=\frac73, \; (\gamma'=5),$}\n$$\nt=t_0+\frac{V_0}{8\sqrt{2}\omega}\Biggl[\n \frac{\sin\theta(\sin^2\theta+1)}{\cos^4\theta}\n - \ln(\tan\theta+\sec\theta) \Biggr],\n$$\n\n\noindent{(iii) $\alpha=3, \; (\gamma'=6),$}\n$$\nt=t_0+\frac{V_0}{15\sqrt{2}\omega}\Biggl(\n \frac{5\sin^2\theta-2}{\cos^5\theta} \Biggr).\n$$\n\n\end{appendix}\n\n\section{Introduction}\n\nThe \emph{colouring number $\mathrm{col}(G)$} of a graph $G$ is the minimum\n$k$ for which there is a linear 
order~$<_L$ on the vertices of $G$\nsuch that each vertex $v$ has \\emph{back-degree} at most $k-1$, that\nis, $v$ has at most $k-1$ neighbours $u$ with $u<_Lv$. The colouring\nnumber is a measure for uniform sparseness in graphs: we have\n$\\mathrm{col}(G)=k$ if and only if every subgraph $H$ of $G$ has a vertex of\ndegree at most $k-1$. Hence, provided $\\mathrm{col}(G)=k$, not only $G$ is\nsparse, but also every subgraph of $G$ is sparse. The colouring number\nminus one is also known as the \\emph{degeneracy}.\n\nRecently, Ne\\v{s}et\\v{r}il and Ossona de Mendez introduced the notions\nof \\emph{bounded expansion}~\\cite{nevsetvril2008grad} and\n\\emph{nowhere density} \\cite{nevsetvril2011nowhere} as very general\nformalisations of uniform sparseness in graphs. Since then, several\nindependent and seemingly unrelated characterisations of these notions\nhave been found, showing that these concepts behave robustly. For\nexample, nowhere dense classes of graphs can be defined in terms of\nexcluded shallow minors~\\cite{nevsetvril2011nowhere}, in terms of\nuniform quasi-wideness~\\cite{dawar2010homomorphism}, a notion studied\nin model theory, or in terms of a game~\\cite{grohe2014deciding} with\ndirect algorithmic applications. The \\emph{generalised colouring\n numbers} $\\mathrm{adm}_r$, $\\mathrm{col}_r$, and $\\mathrm{wcol}_r$ were introduced by\nKierstead and Yang~\\cite{kierstead2003orders} in the context of\ncolouring and marking games on graphs. 
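For illustration (a sketch of our own, not part of the paper's formal development), the characterisation of the classical colouring number via minimum degrees yields a simple greedy algorithm: repeatedly remove a vertex of minimum degree; the largest degree seen at removal time is the degeneracy, i.e. $\mathrm{col}(G)-1$, and the reverse removal order witnesses the back-degree bound:

```python
def colouring_number(adj):
    """Greedy degeneracy ordering.  adj maps each vertex to a set of neighbours.
    Returns (col(G), order): in `order`, every vertex has at most
    col(G) - 1 neighbours that appear earlier in the order."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    removal, degeneracy = [], 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))  # vertex of minimum degree
        degeneracy = max(degeneracy, len(adj[v]))
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
        removal.append(v)
    # Reversing the removal order turns "few later neighbours" into
    # "few earlier neighbours", i.e. back-degree at most `degeneracy`.
    return degeneracy + 1, removal[::-1]

# Every subgraph of the 4-cycle has a vertex of degree <= 2, so col(C4) = 3.
c4 = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
print(colouring_number(c4)[0])  # 3
```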
As proved by Zhu\n\\cite{zhu2009colouring}, they can be used to characterise both bounded\nexpansion and nowhere dense classes of graphs.\n\nThe invariants $\\mathrm{adm}_r$, $\\mathrm{col}_r$, and $\\mathrm{wcol}_r$ are defined similarly\nto the classic colouring number: for example, the \\emph{weak\n $r$-colouring} number $\\mathrm{wcol}_r(G)$ of a graph $G$ is the minimum\ninteger~$k$ for which there is a linear order of the vertices such\nthat each vertex $v$ can reach at most $k-1$ vertices $w$ by a path of\nlength at most~$r$ in which~$w$ is the smallest vertex on the\npath. \n\nThe generalised colouring numbers found important applications in the\ncontext of algorithmic theory of sparse graphs. For example, they play\na key role in Dvo\\v{r}\\'ak's approximation algorithm for minimum\ndominating sets \\cite{dvovrak13}, or in the construction of sparse\nneighbourhood covers on nowhere dense classes, a fundamental step in\nthe almost linear time model-checking algorithm for first-order\nformulas of Grohe et al.~\\cite{grohe2014deciding}.\n \nIn this paper we study the relation between the colouring numbers and\nthe above mentioned characterisations of nowhere dense classes of\ngraphs, namely with uniform quasi-wideness and the splitter game. We\nuse the generalised colouring numbers to give a new proof that every\nbounded expansion class is uniformly quasi-wide. This was first\nproved by Ne\\v{s}et\\v{r}il and Ossona de Mendez in\n\\cite{nevsetvril2010first}; however, the constants appearing in the\nproof of~\\cite{nevsetvril2010first} are huge. We present a very simple\nproof which also improves the appearing constants. 
Furthermore, for\nthe splitter game introduced in~\\cite{grohe2014deciding}, we show that\nsplitter has a very simple strategy to win on any class of bounded\nexpansion, which leads to victory much faster than in general nowhere\ndense classes of graphs.\n\nEvery graph $G$ from a fixed class $\\mathcal{C}$ of bounded expansion\nsatisfies $\\mathrm{wcol}_r(G)\\leq f(r)$ for some function $f$ and all positive\nintegers~$r$. However, the order that witnesses this inequality for\n$G$ may depend on the value $r$. We say that a class $\\mathcal{C}$ admits\n\\emph{uniform orders} if there is a function $f:\\mathbb{N}\\rightarrow\\mathbb{N}$ such\nthat for each $G\\in\\mathcal{C}$ there is one linear order that witnesses\n$\\mathrm{wcol}_r(G)\\leq f(r)$ for every value of $r$. We show that every\nclass that excludes a fixed topological minor admits uniform orders\nthat can be computed efficiently.\n\nFinally, based on our construction of uniform orders for graphs that\nexclude a fixed topological minor, we provide an alternative proof of\na very recent result of Eickmeyer and\nKawarabayashi~\\cite{eickmeyer2016model}, that the model-checking\nproblem for successor-invariant first-order ($\\mathrm{FO}$) formulas is\nfixed-para\\-meter tractable on such classes (we obtained this result\nindependently of but later than~\\cite{eickmeyer2016model}).\nSuccessor-invariant logics have been studied in database theory and\nfinite model theory, and successor-invariant $\\mathrm{FO}$ is known to be more\nexpressive than plain $\\mathrm{FO}$~\\cite{rossman2007successor}. The\nmodel-checking problem for successor-invariant $\\mathrm{FO}$ is known to be\nfixed-parameter tractable parameterized by the size of the formula on\nany graph class that excludes a fixed minor~\\cite{eickmeyer2013model}.\nVery recently, this result was lifted to classes that exclude a fixed\ntopological minor by Eickmeyer and\nKawarabayashi~\\cite{eickmeyer2016model}. 
\nThe key point of their proof\nis to use the decomposition theorem for graphs excluding a fixed\ntopological minor, due to Grohe and Marx~\\cite{grohe2015structure}.\nOur approach is similar to that of~\\cite{eickmeyer2016model}.\nHowever, we employ new constructions based on the generalised\ncolouring numbers and use the decomposition theorem\nof~\\cite{grohe2015structure} only implicitly. In particular, we do not\nconstruct a graph decomposition in order to solve the model-checking\nproblem. Therefore, we believe that our approach may be easier to\nextend further to classes of bounded expansion, or even to nowhere\ndense classes of graphs.\n\n\\section{Preliminaries}\n\n\\subparagraph*{Notation.} We use standard graph-theoretical notation;\nsee e.g.~\\cite{diestel2012graph} for reference. All graphs considered\nin this paper are finite, simple, and undirected. For a graph $G$, by\n$V(G)$ and $E(G)$ we denote the vertex and edge sets of $G$,\nrespectively. A graph~$H$ is a \\emph{subgraph} of~$G$, denoted\n$H\\subseteq G$, if $V(H)\\subseteq V(G)$ and $E(H)\\subseteq E(G)$. For\nany $M\\subseteq V(G)$, by $G[M]$ we denote the subgraph induced by\n$M$. We write $G-M$ for the graph $G[V(G)\\setminus M]$ and if\n$M=\\{v\\}$, we write $G-v$ for $G-M$. For a non-negative integer\n$\\ell$, a \\emph{path of length $\\ell$} in~$G$ is a sequence\n$P=(v_1,\\ldots, v_{\\ell+1})$ of pairwise different vertices such that\n$v_iv_{i+1}\\in E(G)$ for all $1\\leq i\\leq \\ell$. We write $V(P)$ for\nthe vertex set $\\{v_1,\\ldots, v_{\\ell+1}\\}$ of~$P$ and $E(P)$ for the\nedge set $\\{v_iv_{i+1} : 1\\leq i\\leq \\ell\\}$ of~$P$ and identify~$P$\nwith the subgraph of $G$ with vertex set $V(P)$ and edge set\n$E(P)$. We say that the path $P$ \\emph{connects} its \\emph{endpoints}\n$v_1,v_{\\ell+1}$, whereas $v_2,\\ldots, v_\\ell$ are the \\emph{internal\n vertices} of $P$. The {\\em{length}} of a path is the number of its\nedges. 
Two vertices $u,v\\in V(G)$ are \\emph{connected} if there is a\npath in $G$ with endpoints $u,v$. The {\\em{distance}} $\\mathrm{dist}(u,v)$\nbetween two connected vertices $u,v$ is the minimum length of a path\nconnecting $u$ and $v$; if $u,v$ are not connected, we put\n$\\mathrm{dist}(u,v)=\\infty$. The \\emph{radius} of~$G$ is\n$\\min_{u\\in V(G)}\\max_{v\\in V(G)}\\mathrm{dist}(u,v)$. The set of all\nneighbours of a vertex $v$ in $G$ is denoted by $N^G(v)$, and the set\nof all vertices at distance at most $r$ from $v$ is denoted by\n$N^G_r(v)$. A graph $G$ is $c$-\\emph{degenerate} if every subgraph\n$H\\subseteq G$ has a vertex of degree at most $c$. A $c$-degenerate\ngraph of order $n$ contains an independent set of order at least\n$n\/(c+1)$.\n\nA graph $H$ with $V(H)=\\{v_1,\\ldots, v_n\\}$ is a \\emph{minor} of $G$,\nwritten $H\\preccurlyeq G$, if there are pairwise disjoint connected\nsubgraphs $H_1,\\ldots, H_n$ of $G$, called {\\em{branch sets}}, such\nthat whenever $v_iv_j\\in E(H)$, then there are $u_i\\in H_i$ and\n$u_j\\in H_j$ with $u_iu_j\\in E(G)$. We call $(H_1,\\ldots, H_n)$ a\n{\\em{minor model}} of $H$ in~$G$. The graph $H$ is a \\emph{topological\n minor} of $G$, written $H\\preccurlyeq^t G$, if there are pairwise\ndifferent vertices $u_1, \\ldots, u_n\\in V(G)$ and a family of paths\n$\\{P_{ij}\\ \\colon\\ v_iv_j\\in E(H)\\}$, such that each $P_{ij}$ connects\n$u_i$ and $u_j$, and paths $P_{ij}$ are pairwise internally\nvertex-disjoint.\n\n\\vspace{-0.5cm}\n\\subparagraph*{Generalised colouring numbers.} Let us fix a graph\n$G$. By $\\Pi(G)$ we denote the set of all linear orders of $V(G)$. For\n$L\\in\\Pi(G)$, we write $u<_L v$ if $u$ is smaller than $v$ in $L$, and\n$u\\le_L v$ if $u<_L v$ or $u=v$. Let $u,v\\in V(G)$. 
For a\nnon-negative integer $r$, we say that $u$ is \\emph{weakly\n $r$-reachable} from~$v$ with respect to~$L$, if there is a path $P$\nof length $\\ell$, $0\\le\\ell\\le r$, connecting $u$ and~$v$ such that\n$u$ is minimum among the vertices of $P$ (with respect to $L$). By\n$\\mathrm{WReach}_r[G,L,v]$ we denote the set of vertices that are weakly\n$r$-reachable from~$v$ w.r.t.\\ $L$.\n\nVertex $u$ is \\emph{strongly $r$-reachable} from $v$ with respect\nto~$L$, if there is a path $P$ of length~$\\ell$, $0\\le\\ell\\le r$,\nconnecting $u$ and $v$ such that $u\\le_Lv$ and such that all internal\nvertices $w$ of~$P$ satisfy $v<_Lw$. Let $\\mathrm{SReach}_r[G,L,v]$ be the set\nof vertices that are strongly $r$-reachable from~$v$ w.r.t.\\ $L$. Note\nthat we have $v\\in \\mathrm{SReach}_r[G,L,v]\\subseteq \\mathrm{WReach}_r[G,L,v]$.\n\nFor a non-negative integer $r$, we define the \\emph{weak $r$-colouring\n number $\\mathrm{wcol}_r(G)$} of $G$ and the \\emph{$r$-colouring number\n $\\mathrm{col}_r(G)$} of $G$ respectively as follows:\n\\begin{eqnarray*}\n\\mathrm{wcol}_r(G)& := & \\min_{L\\in\\Pi(G)}\\:\\max_{v\\in V(G)}\\:\n\\bigl|\\mathrm{WReach}_r[G,L,v]\\bigr|,\\\\\n\\mathrm{col}_r(G) & := & \\min_{L\\in\\Pi(G)}\\:\\max_{v\\in V(G)}\\:\n\\bigl|\\mathrm{SReach}_r[G,L,v]\\bigr|.\n\\end{eqnarray*}\n\nFor a non-negative integer $r$, the \\emph{$r$-admissibility}\n$\\mathrm{adm}_r[G,L, v]$ of $v$ w.r.t.\\ $L$ is the maximum size~$k$ of a\nfamily $\\{P_1,\\ldots,P_k\\}$ of paths of length at most $r$ that start\nin~$v$, end at a vertex $w$ with $w\\leq_Lv$, and satisfy\n$V(P_i)\\cap V(P_j)=\\{v\\}$ for all $1\\leq i< j\\leq k$. As for $r>0$ we\ncan always let the paths end in the first vertex smaller than $v$, we\ncan assume that the internal vertices of the paths are larger\nthan~$v$. Note that $\\mathrm{adm}_r[G,L,v]$ is an integer, whereas\n$\\mathrm{WReach}_r[G,L, v]$ and $\\mathrm{SReach}_r[G,L,v]$ are vertex sets. 
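The definition of weak reachability can be tested by brute force: $u\in\mathrm{WReach}_r[G,L,v]$ exactly when $u\le_L v$ and $u$ is within distance $r$ of $v$ in the subgraph induced by $\{w : w\ge_L u\}$, since every other vertex on a witnessing path must be larger than $u$. The following Python sketch (our own illustration, not from the paper) implements this reformulation:

```python
from collections import deque

def wreach(adj, L, r, v):
    """WReach_r[G, L, v]: all u joined to v by a path of length at most r on
    which u is L-minimal.  adj: dict vertex -> set of neighbours;
    L: list of the vertices, smallest first."""
    pos = {u: i for i, u in enumerate(L)}
    result = set()
    for u in L:
        if pos[u] > pos[v]:
            continue
        # Breadth-first search from v inside {w : w >=_L u}, up to depth r.
        dist = {v: 0}
        queue = deque([v])
        while queue:
            w = queue.popleft()
            if dist[w] == r:
                continue
            for x in adj[w]:
                if pos[x] >= pos[u] and x not in dist:
                    dist[x] = dist[w] + 1
                    queue.append(x)
        if u in dist:
            result.add(u)
    return result

# Path 1 - 2 - 3 with order 1 <_L 2 <_L 3: from v = 3, reaching 1 needs length 2.
path = {1: {2}, 2: {1, 3}, 3: {2}}
print(sorted(wreach(path, [1, 2, 3], 1, 3)))  # [2, 3]
print(sorted(wreach(path, [1, 2, 3], 2, 3)))  # [1, 2, 3]
```

Running this for every $v$ and taking the maximum of $|\mathrm{WReach}_r[G,L,v]|$ evaluates the quality of a fixed order $L$.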
The\n\\emph{$r$-admissibility} $\\mathrm{adm}_r(G)$ of~$G$~is\n\\begin{eqnarray*} \n\\mathrm{adm}_r(G) & = & \\min_{L\\in\\Pi(G)}\\max_{v\\in V(G)}\\mathrm{adm}_r[G,L,v].\n\\end{eqnarray*} \nThe generalised colouring numbers were introduced by Kierstead and\nYang \\cite{kierstead2003orders} in the context of colouring and marking\ngames on graphs. The authors also proved that the generalised\ncolouring numbers are related by the following inequalities:\n\\begin{equation}\\label{eq:gen-col-ineq}\n\\mathrm{adm}_r(G)\\leq \\mathrm{col}_r(G)\\le \\mathrm{wcol}_r(G)\\le (\\mathrm{adm}_r(G))^r.\n\\end{equation}\n\n\\subparagraph*{Shallow minors, bounded expansion, and nowhere\n denseness.} A graph $H$ with $V(H)=\\{v_1,\\ldots, v_n\\}$ is a\n{\\em{depth-$r$ minor}} of $G$, denoted $H\\preccurlyeq_rG$, if there is a\nminor model $(H_1,\\ldots,H_n)$ of $H$ in $G$ such that each $H_i$ has\nradius at most $r$. We write $d(H)$ for the \\emph{average degree}\nof~$H$, that is, for the number $2|E(H)|\/|V(H)|$. A class $\\mathcal{C}$ of\ngraphs has \\emph{bounded expansion} if there is a function\n$f\\colon\\mathbb{N}\\rightarrow\\mathbb{N}$ such that for all non-negative integers $r$\nwe have $d(H)\\leq f(r)$ for every $H\\preccurlyeq_r G$ with $G\\in\\mathcal{C}$. A\nclass $\\mathcal{C}$ of graphs is \\emph{nowhere dense} if for every real\n$\\epsilon>0$ and every non-negative integer~$r$, there is an integer\n$n_0$ such that if $H$ is an $n$-vertex graph with $n\\geq n_0$ and\n$H\\preccurlyeq_r G$ for some $G\\in\\mathcal{C}$, then~$d(H)\\leq n^\\epsilon$.\n\nBounded expansion and nowhere dense classes of graphs were introduced\nby Ne\\v{s}et\\v{r}il and Ossona de Mendez as models for uniform\nsparseness of graphs\n\\cite{nevsetvril2008grad,nevsetvril2011nowhere}. 
As proved by\nZhu~\\cite{zhu2009colouring}, the generalised colouring numbers are\ntightly related to densities of low-depth minors, and hence they can\nbe used to characterise bounded expansion and nowhere dense classes.\n\n\\begin{theorem}[Zhu \\cite{zhu2009colouring}] A class $\\mathcal{C}$ of graphs\n has bounded expansion if and only if there is a function\n $f:\\mathbb{N}\\rightarrow\\mathbb{N}$ such that $\\mathrm{wcol}_r(G)\\leq f(r)$ for all $r\\in\\mathbb{N}$\n and all $G\\in\\mathcal{C}$.\n\\end{theorem}\n\nDue to Inequality~(\\ref{eq:gen-col-ineq}), we may equivalently demand that there\nis a function $f:\\mathbb{N}\\rightarrow\\mathbb{N}$ such that $\\mathrm{adm}_r(G)\\leq f(r)$ or\n$\\mathrm{col}_r(G)\\leq f(r)$ for all non-negative integers $r$ and all\n$G\\in\\mathcal{C}$.\n\nSimilarly, from Zhu's result one can derive a characterisation of\nnowhere dense classes of graphs, as presented in\n\\cite{nevsetvril2011nowhere}. A class $\\mathcal{C}$ of graphs is called\n\\emph{hereditary} if it is closed under induced subgraphs, that is, if\n$H$ is an induced subgraph of $G\\in\\mathcal{C}$, then $H\\in\\mathcal{C}$.\n\n\\begin{theorem}[Ne\\v{s}et\\v{r}il and Ossona de Mendez\n \\cite{nevsetvril2011nowhere}] A hereditary class $\\mathcal{C}$ of graphs is\n no\\-where dense if and only if for every real $\\epsilon>0$ and every\n non-negative integer $r$, there is a positive integer $n_0$ such\n that if $G\\in\\mathcal{C}$ is an $n$-vertex graph with $n\\geq n_0$, then\n $\\mathrm{wcol}_r(G)\\leq n^\\epsilon$.\n\\end{theorem}\n\n\nAs shown in \\cite{dvovrak13}, for every non-negative integer $r$,\ncomputing $\\mathrm{adm}_r(G)$ is fixed-parameter tractable on any class of\nbounded expansion (parameterized by $\\mathrm{adm}_r(G)$). For $\\mathrm{col}_r(G)$ and\n$\\mathrm{wcol}_r(G)$ this is not known; however, by~\\eqref{eq:gen-col-ineq} we\ncan use admissibility to obtain approximations of these numbers. 
On\nnowhere dense classes of graphs, for every $\epsilon>0$ and every\nnon-negative integer $r$, we can compute an order that witnesses\n$\mathrm{wcol}_r(G)\leq n^\epsilon$ in time $\mathcal{O}(n^{1+\epsilon})$ if $G$ is\nsufficiently large \cite{grohe2014deciding}, based on Ne\v{s}et\v{r}il\nand Ossona de Mendez's augmentation technique\n\cite{nevsetvril2008grad}.\n\n\section{Uniform quasi-wideness and the splitter game}\n\nIn this section we discuss the relation between weak $r$-colouring\nnumbers and two notions that characterise nowhere dense classes:\nuniform quasi-wideness and the splitter game.\n\nFor a graph $G$, a vertex subset $A\subseteq V(G)$ is called\n\emph{$r$-independent} in $G$ if $\mathrm{dist}_G(a,b)>r$ for all distinct\n$a,b\in A$. A vertex subset $A$ is called \emph{$r$-scattered} if it\nis $2r$-independent, that is, if the $r$-neighbourhoods of distinct\nelements of $A$ do not intersect.\n\nInformally, uniform quasi-wideness means the following: in any large\nenough subset of vertices of a graph from $\mathcal{C}$, one can find a large\nsubset that is $r$-scattered in $G$, possibly after removing from $G$\na small number of vertices. Formally, a class $\mathcal{C}$ of graphs is\n\emph{uniformly quasi-wide} if there are functions\n$N:\mathbb{N}\times\mathbb{N}\rightarrow\mathbb{N}$ and $s:\mathbb{N}\rightarrow\mathbb{N}$ such that for all\n$m, r\in\mathbb{N}$, if $W\subseteq V(G)$ for a graph $G\in\mathcal{C}$ with\n$|W|>N(m,r)$, then there is a set $S\subseteq V(G)$ of size at most\n$s(r)$ such that $W$ contains a subset of size at least $m$ that is\n$r$-scattered in $G-S$.\n\nThe notion of quasi-wideness was introduced by\nDawar~\cite{dawar2010homomorphism} in the context of homomorphism\npreservation theorems. 
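As an illustration of these definitions (a sketch of our own, not part of the paper), the following Python snippet checks whether a given set is $r$-scattered, i.e. $2r$-independent, in $G-S$:

```python
from collections import deque

def bfs_distances(adj, source, deleted=frozenset()):
    """Distances from `source` in G - deleted (adj: vertex -> set of neighbours)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for u in adj[v]:
            if u not in dist and u not in deleted:
                dist[u] = dist[v] + 1
                queue.append(u)
    return dist

def is_r_scattered(adj, A, r, S=frozenset()):
    """True iff A is r-scattered (i.e. 2r-independent) in G - S."""
    A = [a for a in A if a not in S]
    for i, a in enumerate(A):
        dist = bfs_distances(adj, a, deleted=S)
        if any(dist.get(b, float("inf")) <= 2 * r for b in A[i + 1:]):
            return False
    return True

# Path 1-2-3-4-5: the endpoints are at distance 4 <= 2r for r = 2, so {1, 5}
# is not 2-scattered; deleting the middle vertex separates them.
path5 = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
print(is_r_scattered(path5, {1, 5}, 2))         # False
print(is_r_scattered(path5, {1, 5}, 2, S={3}))  # True
```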
It was shown in \\cite{nevsetvril2010first} that\nclasses of bounded expansion are uniformly quasi-wide and that uniform\nquasi-wideness characterises nowhere dense classes of graphs.\n\n\\begin{theorem}[Ne\\v{s}et\\v{r}il and Ossona de Mendez\n \\cite{nevsetvril2010first}] A hereditary class $\\mathcal{C}$ of graphs is\n no\\-where dense if and only if it is uniformly quasi-wide.\n\\end{theorem}\n\nIt was shown by Atserias et al.\\@ in \\cite{atserias2006preservation} that classes\nthat exclude $K_k$ \nas a minor are uniformly quasi-wide. In fact, in this case we can choose\n$s(r)=k-1$, independent of $r$ (if such a constant function for a class $\\mathcal{C}$\nexists, the class is called \\emph{uniformly almost wide}). However, the function\n$N(m,r)$ that was used in the proof is huge: it comes from an iterated Ramsey\nargument. The same approach was used in \\cite{nevsetvril2010first} to show that\nevery nowhere dense class, and in particular, every class of bounded expansion,\nis uniformly quasi-wide. We present a new proof that every bounded expansion\nclass is uniformly quasi-wide, which gives us a much better bound on $N(m,r)$\nand which is much simpler than the previously known proof.\n\n\\begin{theorem} Let $G$ be a graph and let $r,m\\in \\mathbb{N}$. Let $c\\in\\mathbb{N}$ be such\n that $\\mathrm{wcol}_r(G)\\leq c$ and let $A\\subseteq V(G)$ be a set of size at least\n $(c+1)\\cdot 2^m$. Then there exists a set $S$ of size at most $c(c-1)$ and a set\n $B\\subseteq A$ of size at least $m$ which is $r$-independent\n in $G-S$.\n\\end{theorem}\n\\begin{proof} Let $L\\in \\Pi(G)$ be such that\n $|\\mathrm{WReach}_r[G,L,v]|\\leq c$ for every $v\\in V(G)$. Let $H$ be the\n graph with vertex set $V(G)$, where we put an edge $uv\\in E(H)$ if\n and only if $u\\in \\mathrm{WReach}_r[G,L,v]$ or $v\\in \\mathrm{WReach}_r[G,L,u]$. Then\n $L$ certifies that~$H$ is $c$-degenerate, and hence we can greedily\n find an independent set $I\\subseteq A$ of size $2^m$ in $H$. 
By the\n definition of the graph $H$, we have that\n $\\mathrm{WReach}_r[G,L,v]\\cap I=\\{v\\}$ for each $v\\in I$.\n\n\\begin{claim}\\label{cl:del}\n Let $v\\in I$. Then deleting $\\mathrm{WReach}_r[G,L,v]\\setminus \\{v\\}$ from\n $G$ leaves $v$ at a distance greater than $r$ (in\n $G-(\\mathrm{WReach}_r[G,L,v]\\setminus\\{v\\})$) from all the other vertices of\n $I$.\n\\end{claim}\n\\begin{claimproof}\n Let $u\\in I$ and let $P$ be a path in $G$ that has length at most\n $r$ and connects $u$ and~$v$. Let $z\\in V(P)$ be minimal with\n respect to $L$. Then $z<_L v$ or $z=v$. If $z<_L v$, then\n $z\\in \\mathrm{WReach}_r[G,L,v]$ and hence the path $P$ no longer exists\n after the deletion of $\\mathrm{WReach}_r[G,L,v]\\setminus\\{v\\}$ from $G$. On\n the other hand, if $z=v$, then $v\\in \\mathrm{WReach}_r[G,L,u]$,\n which contradicts the fact that both $u,v\\in I$. \n\\end{claimproof}\n\nWe iteratively find sets\n$B_0\\subseteq \\ldots \\subseteq B_m\\subseteq I$, sets\n$I_0\\supseteq \\ldots \\supseteq I_m$, and sets\n$S_0\\subseteq \\ldots \\subseteq S_m$ such that $B$ is $r$-independent\nin $G-S$, where $B:=B_m$ and $S:=S_m$. We maintain the invariant that\nsets $B_i$, $I_i$, and $S_i$ are pairwise disjoint for each $i$. Let\n$I_0=I$, $B_0 = \\emptyset$ and $S_0 = \\emptyset$. In one step\n$i=1,2,\\ldots, m$, we delete some vertices from $I_i$ (thus obtaining\n$I_{i+1}$), shift one vertex from $I_i$ to $B_i$ (obtaining $B_{i+1}$)\nand, possibly, add some vertices from $V(G)\\setminus I_i$ to $S_i$\n(obtaining $S_{i+1}$). More precisely, let $v$ be the vertex of $I_i$\nthat is the largest in the order $L$. We set\n$B_{i+1} = B_i\\cup\\{v\\}$, and now we discuss how $I_{i+1}$ and\n$S_{i+1}$ are constructed.\n\nWe distinguish two cases. First, suppose $v$ is connected by a path of\nlength at most~$r$ in $G-S_i$ to at most half of the vertices of $I_i$\n(including $v$). Then we remove these reachable vertices from~$I_i$,\nand set $I_{i+1}$ to be the result. 
We also set $S_{i+1}=S_i$. Note\nthat $|I_{i+1}|\geq |I_i|\/2$.\n\nSecond, suppose $v$ is connected by a path of length at most $r$ in\n$G-S_i$ to more than half of the vertices of $I_i$ (including $v$). We\nproceed in two steps. First, we add the at most $c-1$ vertices of\n$\mathrm{WReach}_r[G,L,v]\setminus\{v\}$ to $S_{i+1}$, that is, we let\n$S_{i+1}=S_i\cup (\mathrm{WReach}_r[G,L,v]\setminus\{v\})$. (Recall here that\n$\mathrm{WReach}_r[G,L,v]\cap I = \{v\}$.) By Claim~\ref{cl:del}, this leaves\n$v$ at a distance greater than~$r$ from every other vertex of $I_i$ in\n$G-S_{i+1}$. Second, we construct $I_{i+1}$ from $I_i$ by removing the\nvertex $v$ and all the vertices of $I_i$ that are not connected to $v$\nby a path of length at most~$r$ in $G-S_i$, hence we have\n$|I_{i+1}|\geq\n\lfloor|I_i|\/2\rfloor$.\n\nObserve that the construction above can be carried out for $m$ steps,\nbecause in each step, we remove at most half of the vertices of $I_i$\n(rounded up) when constructing $I_{i+1}$. As $|I_0|=|I|=2^m$, it is\neasy to see that the set $I_i$ cannot become empty within $m$\niterations. Moreover, it is clear from the construction that we end\nup with a set $B=B_m$ that has size~$m$ and is $r$-independent in $G-S$,\nwhere $S=S_m$. It remains to argue that $|S_m|\leq c(c-1)$. For this,\nit suffices to show that the second case cannot apply more than $c$\ntimes in total.\n\nSuppose the second case was applied in the $i$th iteration, when\nconsidering a vertex~$v$. Every vertex $u\in I_i$ with $u<_Lv$ that\nwas connected to~$v$ by a path of length at most $r$ in $G-S_i$\nsatisfies $\mathrm{WReach}_r[G,L,v]\cap \mathrm{WReach}_r[G,L,u]\neq \emptyset$.\nThus, every remaining vertex~$u\in I_{i+1}$ has at least one of its\nweakly $r$-reachable vertices deleted (that is, included in\n$S_{i+1}$). 
As the number of such vertices is at most $c-1$ at the\nbeginning, and it can only decrease during the construction, this\nimplies that the second case can occur at most~$c$~times.\n\end{proof}\n\nAs shown in \cite{siebertz16}, if $K_k\not\preccurlyeq G$, then\n$\mathrm{wcol}_r(G)\in\mathcal{O}(r^{k-1})$. Hence, for such graphs we have to delete\nonly a polynomial (in $r$) number of vertices in order to find an\n$r$-independent set of size $m$ in a set of vertices of size single\nexponential in $m$.\n\nWe now implement the same idea to find a very simple strategy for\nsplitter in the splitter game, introduced by Grohe et\nal.~\cite{grohe2014deciding} to characterise nowhere dense classes of\ngraphs. Let~${\ell},r\in \mathbb{N}$. The \emph{simple $\ell$-round\n radius-$r$ splitter game} on~$G$ is played by two players,\n\emph{connector} and \emph{splitter}, as follows. We let~$G_0:=G$. In\nround~$i+1$ of the game, connector chooses a\nvertex~$v_{i+1}\in V(G_i)$. Then splitter picks a vertex\n$w_{i+1}\in N_r^{G_i}(v_{i+1})$. We\nlet~$G_{i+1}:=G_i[N_r^{G_i}(v_{i+1})\setminus \{w_{i+1}\}]$. Splitter\nwins if~$G_{i+1}=\emptyset$. Otherwise the game continues\nat~$G_{i+1}$. If splitter has not won after~${\ell}$ rounds, then\nconnector wins.\n\nA \emph{strategy} for splitter is a function~$\sigma$ that maps every\npartial play $(v_1, w_1, \dots,$ $v_s, w_s)$, with associated\nsequence~$G_0, \dots, G_s$ of graphs, and the next\nmove~$v_{s+1}\in V(G_s)$ of connector, to a\nvertex~$w_{s+1}\in N_r^{G_s}(v_{s+1})$ that is the next move of\nsplitter. A strategy~$\sigma$ is a \emph{winning strategy} for\nsplitter if splitter wins every play in which she follows the\nstrategy~$\sigma$. 
We say that splitter \\emph{wins} the simple $\\ell$-round\nradius-$r$ splitter game on~$G$ if she has a winning strategy.\n\n\\begin{theorem}[Grohe et al.\\ \\cite{grohe2014deciding}] A class $\\mathcal{C}$\n of graphs is nowhere dense if and only if there is a function\n $\\ell:\\mathbb{N}\\rightarrow\\mathbb{N}$ such that splitter wins the simple\n $\\ell(r)$-round radius-$r$ splitter game on every graph $G\\in\\mathcal{C}$.\n\\end{theorem}\n\nMore precisely, it was shown in \\cite{grohe2014deciding} that\n$\\ell(r)$ can be chosen as $N(2s(r), r)$, where $N$ and~$s$ are the\nfunctions that characterise $\\mathcal{C}$ as a uniformly quasi-wide class of\ngraphs. We present a proof that on bounded expansion classes, splitter\ncan win much faster.\n\n\\begin{theorem}\\label{thm:splitterwcol} Let $G$ be a graph, let $r\\in\\mathbb{N}$ and let\n $\\ell=\\mathrm{wcol}_{2r}(G)$. Then splitter wins the $\\ell$-round radius-$r$\n splitter game.\n\\end{theorem}\n\\begin{proof} Let $L$ be a linear order that witnesses\n $\\mathrm{wcol}_{2r}(G)=\\ell$. Suppose in round $i+1\\leq \\ell$, connector\n chooses a vertex $v_{i+1}\\in V(G_i)$. Let $w_{i+1}$ (splitter's\n choice) be the minimum vertex of $N_r^{G_i}(v_{i+1})$ with respect\n to $L$. Then for each $u\\in N_r^{G_i}(v_{i+1})$ there is a path\n between $u$ and $w_{i+1}$ of length at most $2r$ that uses only\n vertices of $N_r^{G_i}(v_{i+1})$. As $w_{i+1}$ is minimum in\n $N_r^{G_i}(v_{i+1})$, $w_{i+1}$ is weakly $2r$-reachable from each\n $u\\in N_r^{G_i}(v_{i+1})$. Now let\n $G_{i+1}:=G_i[N_r^{G_i}(v_{i+1})\\setminus\\{w_{i+1}\\}]$. As $w_{i+1}$\n is not part of $G_{i+1}$, in the next round splitter will choose\n another vertex which is weakly $2r$-reachable from every vertex of\n the remaining $r$-neighbourhood. 
As $\\mathrm{wcol}_{2r}(G)= \\ell$, the game\n must stop after at most $\\ell$ rounds.\n\\end{proof}\n\n\n\\section{Uniform orders for graphs excluding a topological minor}\\label{sec:uniform}\n\nIf $\\mathcal{C}$ is a class of bounded expansion such that\n$\\mathrm{wcol}_r(G)\\leq f(r)$ for all $G\\in\\mathcal{C}$ and all $r\\in \\mathbb{N}$, the order\n$L$ that witnesses this inequality for $G$ may depend on the value\n$r$. We say that a class $\\mathcal{C}$ \\emph{admits uniform orders} if there\nis a function $f:\\mathbb{N}\\rightarrow\\mathbb{N}$ such that for each $G\\in\\mathcal{C}$, there\nis a linear order $L\\in \\Pi(G)$ such that\n$|\\mathrm{WReach}_r[G,L,v]|\\leq f(r)$ for all $v\\in V(G)$ and all $r\\in\\mathbb{N}$. In\nother words, there is one order that simultaneously certifies the\ninequality $\\mathrm{wcol}_r(G)\\leq f(r)$ for all $r$.\n\nIt is implicit in \\cite{siebertz16} that every class that excludes a\nfixed minor admits uniform orders, which can be efficiently\ncomputed. We are going to show that the same holds for classes that\nexclude a fixed topological minor. Our construction is similar to the\nconstruction of \\cite{siebertz16}, in particular, our orders can be\ncomputed quickly in a greedy fashion. The proof that we find an order\nof high quality is based on the decomposition theorem for graphs with\nexcluded topological minors, due to Grohe and\nMarx~\\cite{grohe2015structure}. Note, however, that for the\nconstruction of the order we do not have to construct a tree\ndecomposition according to Grohe and Marx~\\cite{grohe2015structure}.\n\n\\subparagraph*{Construction.} Let $G$ be a graph. We present a\nconstruction of an order of $V(G)$ of high quality. We iteratively\nconstruct a sequence $H_1,\\ldots, H_\\ell$ of pairwise disjoint and\nconnected subgraphs of $G$ such that\n$\\bigcup_{1\\leq i\\leq \\ell}V(H_i)=V(G)$. For $0\\leq i<\\ell$, let\n$G_i \\coloneqq G - \\bigcup_{1\\le j\\le i}V(H_j)$. 
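To make the peeling of the graphs $G_i$ concrete, here is a small sketch of ours (not part of the construction); it assumes graphs are stored as dictionaries mapping each vertex to its set of neighbours, and the helper names are hypothetical:

```python
# Sketch of the peeling G_i = G - (V(H_1) u ... u V(H_i)).
# A graph is a dict mapping each vertex to the set of its neighbours.

def induced_subgraph(g, keep):
    """Return the subgraph of g induced on the vertex set `keep`."""
    return {v: g[v] & keep for v in keep}

def peel(g, branch_sets):
    """Given the vertex sets V(H_1), ..., V(H_l), return G_0, ..., G_l."""
    graphs = [g]
    removed = set()
    for h in branch_sets:
        removed |= h
        graphs.append(induced_subgraph(g, set(g) - removed))
    return graphs

# Example: the path 0 - 1 - 2 - 3, peeled by H_1 = {0} and H_2 = {1, 2}.
G = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
G0, G1, G2 = peel(G, [{0}, {1, 2}])
```

In the example, $G_1$ is the path on $\{1,2,3\}$ and $G_2$ is the single isolated vertex $3$.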
We say that a\ncomponent $C$ of $G_i$ is \\emph{connected} to a subgraph $H_j$,\n$j\\leq i$, if there is a vertex $u\\in V(H_j)$ and a vertex $v\\in V(C)$\nsuch that $uv\\in E(G)$. For all~$i$, $1\\leq i<\\ell$, we will maintain\nthe following invariant. If $C$ is a component of $G_i$, then the\nsubgraphs $H_{i_1},\\ldots, H_{i_s}\\in \\{H_1,\\ldots, H_i\\}$ that are\nconnected to $C$ form a minor model of the complete graph $K_s$, where\n$s$ is their number.\n\nTo start, we choose an arbitrary vertex $v\\in V(G)$ and let $H_1$ be\nthe connected subgraph $G[\\{v\\}]$. Clearly, $H_1$ satisfies the above\ninvariant. Now assume that for some $i$, $1\\le i< \\ell$, the sequence\n$H_1,\\ldots,H_i$ has already been constructed. Fix some component $C$\nof $G_i$ and, by the invariant, assume that the subgraphs\n$H_{i_1},\\ldots, H_{i_s} \\in \\{H_1,\\ldots, H_i\\}$ with\n$1\\leq i_1<\\ldots<i_s\\leq i$ are connected to $C$. If $s>a(k)$, there is a node $t$ with $\\beta(t)$\nintersecting at least $a(k)+1$ branch sets~$H_{i_j}$. By\nLemma~\\ref{lem:corebag}, this node is unique. We call it the\n\\emph{core node} of the minor model. Next we show that if the model\nis large, then its core node must be a bounded degree node. Roughly\nspeaking, this is because the model $H_{i_1},\\ldots, H_{i_s}$ trimmed\nto the torso of the core node is already a minor model of $K_s$ in\nthis torso.\n\n\\begin{lemma}\\label{lem:degreebag}\n If $s>\\max\\{a(k), e(k)\\}$, then the core node of the minor model is\n a bounded degree node.\n\\end{lemma}\n\\begin{proof}\n As $s>a(k)$, by Lemma~\\ref{lem:corebag} we can identify the \n unique core node $t$ whose bag intersects all the branch sets $H_{i_j}$. \n Recall that $\\tau(t)$ is the graph induced by\n the bag $\\beta(t)$ in which all adjacent separators are turned into cliques. \n It is easy to see that the subgraphs $H_{i_j}':=\\tau(t)[V(H_{i_j})\\cap \\beta(t)]$ are connected\n in $\\tau(t)$ and form a minor model of $K_s$. 
As $s>e(k)$, \n we infer that $t$ cannot be an excluded minor node, and hence it is a bounded degree node.\n\\end{proof}\n\nFor vertices outside the bag of the core node, the bound promised in\nLemma~\\ref{lem:disjointpaths} can be proved similarly to\nLemma~\\ref{lem:corebag}.\n\n\\begin{lemma}\\label{lem:connectionsfromoutside}\n Let $C$ be a component of $G_i$ that has a connection to the\n subgraphs $H_{i_1},\\ldots, H_{i_s}$. If $s>a(k)$, then for every\n vertex $v\\in V(C)\\setminus\\beta(t)$, where $t$ is the core node of\n the model, we have that $m(v)\\leq a(k)$.\n\\end{lemma}\n\\begin{proof}\nBy the properties of a tree decomposition, there is an edge $e=tt'$ of $T$ such that\n$\\beta^{-1}(v)$ is contained in the subtree of $T-e$ that contains $t'$.\nSuppose $\\mathcal{P}$ is a family of paths that connect $v$ with distinct branch sets $H_{i_j}$ and are pairwise disjoint apart from $v$.\nRecall that $\\beta(t)$ intersects every branch set $H_{i_j}$.\nTherefore, by extending each path of $\\mathcal{P}$ within the branch set it leads to, we can assume w.l.o.g. that each path of $\\mathcal{P}$ connects $v$ with a vertex of $\\beta(t)$.\nBy Lemma~\\ref{lem:separator}, this implies that each path of $\\mathcal{P}$ intersects $\\beta(t)\\cap \\beta(t')$.\nPaths of $\\mathcal{P}$ share only $v$, which is not contained in $\\beta(t)\\cap \\beta(t')$, and hence we conclude that $|\\mathcal{P}|\\leq |\\beta(t)\\cap \\beta(t')|$.\nAs $\\mathcal{P}$ was chosen arbitrarily, we obtain that $m(v)\\leq |\\beta(t)\\cap \\beta(t')|\\leq a(k)$.\n\\end{proof}\n\n\\vspace{-0.5cm}\nWe now complete the proof of Lemma~\\ref{lem:disjointpaths} by looking\nat the vertices inside the core bag.\n\n\\begin{proof}[Proof of Lemma~\\ref{lem:disjointpaths}]\n We set $\\alpha:=a(k)+c(k)+d(k)+e(k)$. 
Assume towards a\n contradiction that for some $i$, $1\\leq i<\\ell$, we have that some\n component $C$ of $G_i$ contains a vertex $v_1$ with $m(v_1)>\\alpha$.\n Denote the branch sets that have a connection to $C$ by\n $H_{i_1},\\ldots, H_{i_s}$, where $i_1<\\ldots<i_s$. Let $\\mathcal{P}$\n be a maximum-size family of paths that connect $v_1$ with distinct\n branch sets $H_{i_j}$ and are pairwise disjoint apart from $v_1$.\n As $m(v_1)>\\alpha$, we have that $|\\mathcal{P}|>\\alpha$, and in\n particular $s>\\alpha$. As $\\alpha>a(k)$, by~Lemma~\\ref{lem:corebag} \n and~Lemma~\\ref{lem:corebag-exists} we can\n identify the unique core node~$t$ of the minor model. As\n $s>\\max\\{a(k),e(k)\\}$, by Lemma~\\ref{lem:degreebag} the core node is\n a bounded degree node. As $m(v_1)>a(k)$, by\n Lemma~\\ref{lem:connectionsfromoutside} we have $v_1\\in \\beta(t)$.\n As $\\mathcal{P}$ contains more than $d(k)$ disjoint paths from $v_1$\n to distinct branch sets, the degree of $v_1$ in $G$ must be greater\n than $d(k)$, hence $v_1$ is an apex vertex of $\\tau(t)$.\n\n Since $\\alpha=a(k)+c(k)+d(k)+e(k)\\geq a(k)+d(k)+e(k)+1$, the same\n reasoning as above shows that $t$ is also the core node of the\n minor model formed by branch sets connected to $C'$. Thus, by\n exactly the same reasoning we obtain that $v_2$ is also an apex\n vertex of $\\tau(t)$.\n\n Since $\\alpha\\geq a(k)+c(k)+d(k)+e(k)$, we can repeat this reasoning\n $c(k)+1$ times, obtaining vertices $v_1,\\ldots, v_{c(k)+1}$, which\n are all apex vertices of $\\tau(t)$. This contradicts the fact that\n $\\tau(t)$ contains at most $c(k)$ apex vertices.\n\\end{proof}\n\n\n\\begin{proof}[Proof of Lemma~\\ref{lem:reachablevertices}]\n We set $\\beta$ so that $\\beta\\cdot r\\geq (2r+1)\\cdot \\alpha$, where\n $\\alpha$ is the constant given by Lemma~\\ref{lem:disjointpaths}.\n For the sake of contradiction, suppose there is a family of paths\n $\\mathcal{P}$ as in the statement, whose size is larger than\n $(2r+1)\\cdot \\alpha$.\n\n Recall that $H_{j}$ was chosen as a subtree of a breadth-first\n search tree in $G_{j-1}$; throughout the proof, we treat $H_j$ as a\n rooted tree. 
As $H_j$ is a subtree of a BFS tree, every path from a\n vertex $w$ of the tree to the root $v'$ of the tree is an isometric\n path in $G_{j-1}$, that is, a shortest path between $w$ and $v'$ in\n the graph $G_{j-1}$. If $P$ is an isometric path in a graph $H$,\n then $|N_r^H(v)\\cap V(P)|\\leq 2r+1$ for all $v\\in V(H)$ and all\n $r\\in\\mathbb{N}$. As the paths from $\\mathcal{P}$ are all contained in\n $G_{j-1}$, and they have lengths at most $r$, this implies that the\n path family $\\mathcal{P}$ cannot connect $v$ with more than $2r+1$\n vertices of $H_{j}$ which lie on the same root-to-leaf path in\n $H_{j}$. Since $|\\mathcal{P}|> (2r+1)\\cdot \\alpha$, we can find a\n set $X\\subseteq V(H_j)$ such that $|X|>\\alpha$, each vertex of $X$\n is connected to $v$ by some path from $\\mathcal{P}$, and no two\n vertices of $X$ lie on the same root-to-leaf path in $H_j$. Recall\n that, by the construction, each leaf of $H_j$ is connected to a\n different branch set $H_{j'}$ for some $j'<j$.\n\n The vertex $w_{i+1}$ ($i>0$) is\n chosen as follows. We first select the vertex\n $w'_{i+1}\\in V(H_{i_s})$ that is the largest in the order $L$ among\n those vertices of $H_{i_s}$ that are adjacent to $C$ (the vertices\n of $H_j$ for $j\\leq i$ are already ordered by $L$ at this point).\n Then, we select any of its neighbours $w_{i+1}$ in $C$ as the vertex\n that is going to be included in $H_{i+1}$ in its construction.\n Finally, recall that in the construction of $L$, we could order the\n vertices of $H_{i+1}$ arbitrarily. Hence, we fix an order of\n $H_{i+1}$ so that $w_{i+1}$ is the smallest among $V(H_{i+1})$.\n This concludes the description of the restrictions applied to the\n construction.\n\n We now construct $H$ by taking $G$ and adding some edges. During the\n construction, we will mark some edges of $H$ as {\\em{spanning\n edges}}. We start by marking all the edges of all the trees\n $H_i$, for $1\\leq i\\leq \\ell$, as spanning edges. 
At the end, we\n will argue that the spanning edges form a spanning tree of $H$ with\n maximum degree at most $\\delta$.\n\n For each $i$ with $1\\leq i<\\ell$, let us examine the vertex\n $w_{i+1}$, and let us {\\em{charge}} it to $w_{i+1}'$. Note that in\n this manner every vertex $w_{i+1}$ is charged to its neighbour that\n lies before it in the order $L$. For any $w\\in V(G)$, let $D(w)$ be\n the set of vertices charged to $w$. Now examine the vertices of $G$\n one by one, and for each $w\\in V(G)$ do the following. If\n $D(w)=\\emptyset$, do nothing. Otherwise, if\n $D(w)=\\{u_1,u_2,\\ldots,u_h\\}$, mark the edge $wu_1$ as a spanning\n edge, and add edges $u_1u_2,u_2u_3,\\ldots,u_{h-1}u_h$ to $H$,\n marking them as spanning edges as well.\n\n\\begin{claim}\\label{cl:degbnd}\n The spanning edges form a spanning tree of $H$ of maximum degree at\n most $\\alpha+4$, where~$\\alpha$ is the constant given by\n Lemma~\\ref{lem:disjointpaths}.\n\\end{claim}\n\\begin{claimproof}\nBecause the branch sets partition the graph, the spanning edges form a spanning subgraph of $H$.\nBecause we connect the branch set $H_{i+1}$ only to the largest reachable branch set $H_{i_s}$ \n(and this set is never again the largest reachable branch set for $H_j$, $j>i$), the spanning \nsubgraph is acyclic. It is easy to see that the spanning subgraph is also connected. \nBy Lemma~\\ref{lem:degbound}, we have that each $H_i$ has maximum degree at most $\\alpha+1$.\nAlso, for every vertex $w\\in V(G)$, at most $3$ additional edges incident to $w$ in $H$ are marked as spanning (two edges are contributed by the path from $u_1$ to $u_h$ (only $u_1$ charges to a different vertex and has degree $1$ on the path) and one edge \nmay be added if a vertex is charged to it). In total, this means that\n$H$ has maximum degree bounded by $\\alpha+4$.\n\\end{claimproof}\n\nIt remains to argue that $H$ has small admissibility. For this, it\nsuffices to prove the following claim. 
The proof uses the additional\nrestrictions we introduced in the construction.\n\n\\begin{claim}\\label{cl:adm}\n Let $r$ be a positive integer. If the order $L$ satisfies \n $\\max_{v\\in V(G)} |\\mathrm{SReach}_{2r}[G,L,v]|\\leq m$, that is, \n the order certifies $\\mathrm{col}_{2r}(G)\\leq m$, \n then $\\mathrm{adm}_r(H)\\leq m+2$.\n\\end{claim}\n\\begin{claimproof}\nWe verify that for each $r$, the order $L$ certifies that $\\mathrm{adm}_r(H)\\leq m+2$. For this, take any vertex $v\\in V(H)=V(G)$, and let $\\mathcal{P}$ be any family of paths of length at most $r$ in $H$\nthat start in $v$, end in distinct vertices smaller than $v$ in $L$, and are pairwise internally disjoint.\nWe can further assume that all the internal vertices of all the paths from $\\mathcal{P}$ are larger than $v$ in $L$.\nLet $i$, $0\\leq i<\\ell$, be such that $v\\in V(H_{i+1})$.\nWe distinguish two cases: either $v=w_{i+1}$ or $v\\neq w_{i+1}$.\n\nWe first consider the case $v\\neq w_{i+1}$; the second one will be very similar. By the construction of the order $L$, it follows that $w_{i+1}<_L v$.\nConsider any path $P\\in \\mathcal{P}$. Then $P$ is a path in $H$; we shall modify it to a walk $P'$ in $G$ as follows.\nSuppose $P$ uses some edge $e$ that is not present in~$H$. By the construction of~$H$, it follows that\n$e=u_1u_2$ is an edge connecting two vertices that are charged to the same vertex $w$; suppose w.l.o.g. $P$ traverses $e$ from $u_1$ to $u_2$. \nDefine $P'$ by replacing the traversal of $e$ on $P$ by a path of length two consisting of $u_1w$ and $wu_2$, and making the same replacement for all other edges on $P$ that do not belong to $G$.\n\nWe claim that all the internal vertices of $P'$ are not smaller, in $L$, than $v$. For this, it suffices to show that whenever some edge $u_1u_2$ is replaced by a path $(u_1w, wu_2)$ as above,\nthen we have that $v\\leq_L w$. 
Aiming towards a contradiction, suppose that $u_1u_2$ is the first edge on $P$ for which we have $w<_L v$.\nBy the construction, it must be that $(u_1,u_2)=(w_{j_1},w_{j_2})$ for some $j_1,j_2>i+1$, and $w=w_{j_1}'=w_{j_2}'$. \nLet $j$ be such that $w\\in H_j$.\nWhen constructing $H_{j_1}$, we chose $w=w_{j_1}'$ as the largest, w.r.t. $L$, vertex of $H_j$ which was adjacent to the connected component $C'$ of $G_{j_1-1}$ that contains $H_{j_1}$.\nObserve that the prefix of $P'$ up to $w_{j_1}$ is a path in $G$ that, by the choice of $u_1u_2$, contains only vertices not smaller in $L$ than $v$. This prefix has to access the connected component $C'$\nfrom some vertex $q$, for which we of course have $v\\leq_L q$. If $q\\notin V(H_j)$ then, as $H_j$ is the last among subgraphs connected to $C'$, we have that $q\\in V(H_{j'})$ for some $j'<j$.\n\nInspired by \\cite{izekor,fadBA} we make the Ansatz\n\\[\n \\psi = f(\\Phi_1) \\mm\\delta(\\Phi_1-\\Phi_2 -\\gamma)\n\n\\]\n for the pseudovacuum at site $n$, with $f$ a functional to be\ndefined. Demanding that the off-diagonal operator $C$ annihilates\nthe vacuum, we get the functional relation\n\\be\n f(\\Phi+\\gamma) = - h(\\Phi) \\mm f(\\Phi) \\.\n \\mabel{functrel}\n\\ee\n This equation sets very stringent conditions on the function $f$. It\nis a sufficient condition for the pseudovacuum;\nwhen it is fulfilled, the actions of $A$ and $D$ on $\\psi$ are\ndiagonal, with the eigenvalues:\n\\bea\n A\\mm\\psi &=& (\\!\\e{-2\\lambda +\\i\\gamma} -1)\\mm \\psi \\equiv a(\\lambda)\\mm\\psi \\cr\n D\\mm\\psi &=& (\\!\\e{-2\\lambda -\\i\\gamma} -1)\\mm \\psi \\equiv d(\\lambda)\\mm\\psi \\.\n \\mabel{aetd}\n\\eea\n\nThe treatment of Functional Relation \\refe{functrel} depends crucially\non the value of $q~=~\\e{\\i\\gamma}$. When $|q| < 1$, Equation\n\\refe{functrel} is exactly fulfilled by the quantum dilogarithm, see\nRef. \\cite{fadkas}. 
Following \\cite{fadcur}, we get an explicit solution\nfor the case $|q| =1$ as well, which is of interest to this paper:\n\\[\n f(\\Phi) = \\exp{\\mm \\int_{-\\infty}^\\infty\n\\frac{dx}{4x}\\mm\\mm \\frac{\\e{(\\gamma-\\pipert-\\Phi) x}}\n{\\sinh\\pipert x \\sinh{\\textstyle \\frac{\\gamma}{2}}\\mm x}\n} \\mm\\.\n\\]\n The singularity of the integral at $x=0$ is left under the\nintegration path. For more discussion on solutions of \\refe{functrel}\nand other similar functional equations we refer to\n\\cite{volkov,fadvol,fadkas,fadcur}.\n\n{}From the local pseudovacuums $\\psi_n$ we can now build up a total\npseudovacuum for the quantum chain:\n\\[\n \\Psi = \\otimes_{n=1}^N \\psi_n \\,\n\\]\n where the number of paired sites is denoted by $N$.\n\nThe two-site L-operators \\refe{LL} become triangular when acting on the\nlocal vacuum. Thus the monodromy \\refe{monodromedary} acting on the\ntotal pseudovacuum $\\Psi$ is triangular as well:\n\\[\n T(\\lambda) \\mm\\Psi = \\mat{cc}{a^N(\\lambda)\\mm\\Psi & *\\\\\n\t\t\t 0 & d^N(\\lambda)\\mm\\Psi } \\.\n\\]\n The star in the upper right corner denotes a complicated state\ncreated by various combinations of $A$'s, $D$'s and one $B$ acting on\n$\\Psi$.\n\nCorrespondingly, the transfer matrix \\refe{transfer} has the\npseudovacuum eigenvalue\n\\be\n \\tau(\\lambda)\\mm \\Psi = (a^N(\\lambda)\\mm + \\mm d^N(\\lambda))\\mm\\Psi \\.\n \\mabel{eigentransfer}\n\\ee\n\n\n\\subsection{The Bethe Ansatz Equations}\n\n\nIn the algebraic Bethe Ansatz\\ one makes the assumption that the excited states of the\ntheory can be obtained from the pseudovacuum by acting on it with the\n``pseudoparticle creation operator'' $\\B$. An arbitrary state of the\nform\n\\be\n \\Psi_m = \\prod_{j=1}^m \\B(\\lambda_j) \\mm\\Psi\n\\mabel{psim}\n\\ee\n is characterized by $m$ values of the spectral parameter $\\lambda_j$.\nThis state is an eigenstate of the trace of the monodromy,\ni.e. 
$\\A+\\D$, if the spectral parameters $\\set{\\lambda_j}$ satisfy a\nset of $m$ coupled transcendental equations known as the Bethe Ansatz equations.\nWritten in terms of the components\nof $R$-matrix \\refe{R} and the eigenvalues $a^N, d^N$ of $\\A$\nand $\\D$, these equations read\n\\be\n \\left[\\frac{a(\\lambda_k)}{d(\\lambda_k)}\\right]^N =\n\\prod_{\\stackrel{j=1}{ j\\neq k}}^m\n \\frac{\\alpha(\\lambda_k-\\lambda_j)\\beta(\\lambda_j-\\lambda_k)}{\\alpha(\\lambda_j-\\lambda_k)\n \\beta(\\lambda_k-\\lambda_j)}\n \\ \\ \\forall\\ k\\.\n \\mabel{fBAE}\n\\ee\n\n Solutions to \\refe{fBAE} appear in sets of $m$ spectral parameters,\nso-called Bethe Ansatz\\ roots, which parameterize different states of the\nsystem. A more refined analysis shows that generically all Bethe Ansatz\\ roots\nare distinct, and that $m\\leq N\/2$, see e.g. \\cite{fadBA}.\n\nFor lattice Liouville theory, the eigenvalues $a$ and $d$\nare given by Equation \\refe{aetd}, and the ratio on the left hand side of\nEquation \\refe{fBAE} is\n\\[\n \\frac{a(\\lambda)}{d(\\lambda)} = \\e{\\i\\gamma} \\BAEspsmi{\\lambda}{\\i{\\textstyle \\frac{\\gamma}{2}}\\mm} \\.\n\\]\n This leads to the Bethe Ansatz equations\n\\be\n \\e{\\i N\\gamma} \\left[\\BAEspsmi{\\lambda_k}{\\i{\\textstyle \\frac{\\gamma}{2}}\\mm}\\right]^N =\n\\prod_{j,\\mm j\\neq k} \\BAEspspl{\\lambda_k-\\lambda_j}{\\i\\gamma} \\ \\ \\forall\\ k\\.\n \\mabel{BAE}\n\\ee\n On the other hand, the Bethe Ansatz equations\\ of a spin-S XXZ chain are\n(see e.g. 
\\cite{fadBA})\n\\be\n \\left[\\BAEspspl{\\lambda_k}{\\i S\\gamma}\\right]^N =\n \\prod_{j,\\mm j\\neq k} \\BAEspspl{\\lambda_k-\\lambda_j}{\\i\\gamma} \\ \\ \\forall\\ k\\.\n \\mabel{xxzBAE}\n\\ee\n Comparing \\refe{BAE} to \\refe{xxzBAE} we see that in terms of the Bethe\nAnsatz, lattice Liouville theory corresponds to a spin $(-{\\textstyle \\frac{1}{2}}\\mm)$ XXZ\nspin chain, with an extra phase factor $\\e{\\i N\\gamma}$.\n\nPhase factors of the form $\\e{\\i \\theta}$ have appeared earlier in the\nliterature on the Bethe Ansatz. In \\cite{karowski} such a factor was used to analyze Bethe Ansatz equations\\\nfor Potts models, related to a seam insertion of a boundary\nfield. Similarly, twisted boundary conditions in Reference\n\\cite{kluwezi} manifest themselves in the form of extra phase factors.\nMost interestingly, in the analysis of the eight-vertex model\n\\cite{baxter,takfad79}, there appears a ``theta-vacuum like'' phase\nfactor in the Bethe equations, related to different SOS-sectors of the\ntheory.\n\nHere, however, the important difference from the situations of\n\\cite{karowski,kluwezi,baxter,takfad79} is that the length $N$ of\nthe chain appears in the phase factor. When $q$ is a root of unity,\nthis will give us an extra integer parameter to parametrize\n``primary'' excited states, in addition to the string length used in\n\\cite{karowski}. 
These two integers then give us enough freedom to\nparameterize the whole Ka\\v c\\ table.\n\nThus different chain lengths occurring in the lattice Liouville model\ncorrespond exactly to the different ``theta-vacuum'' sectors in a\ncritical (R)SOS model.\n\n\n\\section{Mapping the Lattice Liouville Model\n\t\tto the Spin ${\\textstyle \\frac{1}{2}}\\mm$ XXZ Chain }\\mabel{maptoxxz}\n\n\\noindent\nThe thermodynamics \\cite{taksuz} and finite size effects\n\\cite{devewoy,karowski,kluwezi,boizre} of the Bethe Ansatz\\ of the fundamental spin\n$+{\\textstyle \\frac{1}{2}}\\mm$ representation of XXZ-chains have been widely discussed in\nthe literature. Accordingly it would be a desirable goal to map the\nspin $(-{\\textstyle \\frac{1}{2}}\\mm)$ Liouville Bethe equations \\refe{BAE} to the spin\n$+{\\textstyle \\frac{1}{2}}\\mm$ XXZ ones \\refe{xxzBAE}. This can be done in the string\npicture \\cite{taksuz} of solutions to the Bethe Ansatz equations.\n\nIn order to do the mapping, we thus make the string hypothesis that in the\nthermodynamic limit $N\\to\\infty$, complex Bethe Ansatz\\ roots $\\set{\\lambda_j}$\ncluster to complexes of the form\n\\be\n \\lambda^{(\\sK)} = \\lambda_\\sK \\mm + \\mm k \\i\\gamma\\, \\ \\ k= -{\\textstyle \\frac{1}{2}}\\mm (K-1), \\mm\\ldots\\mm,\n \\mm{\\textstyle \\frac{1}{2}}\\mm(K-1) \\; \\Im\\lambda_\\sK\\in\\{0,\\pipert\\}\\, \\ \\ K\\in Z_+ \\.\n \\mabel{kstring}\n\\ee\n A collection of $K$ Bethe Ansatz\\ roots with a common center obeying\nEquation \\refe{kstring} is known as a $K$-string. One-strings are the\nusual real Bethe Ansatz\\ roots. Strings with $\\Im\\lambda_\\sK = 0$ are called\npositive parity strings; if $\\Im\\lambda_\\sK = \\pipert$, one speaks of\nnegative parity strings.\n\nIt is important to notice that when $q$ is a root of unity there\nis an upper limit to the length of the string $K$\n\\cite{taksuz}. 
With\n\\[\n \\left[\\e{\\i\\gamma}\\right]^{\\nu+1} = \\pm 1 \\,\n\\]\n the maximal string length is\n\\[\n K_{{\\rm max}} = \\nu.\n\\]\n This limit on the range of $K$ changes the usual concept of\ncombinatorial completeness of Bethe Ansatz\\ states. In Ref. \\cite{jutkar} it\nwas argued for an $sl_q(2)$-invariant spin chain at $q$ a root of unity\nthat the Bethe Ansatz\\ states exhibit completeness in the space of ``good''\nrepresentations of the quantum group.\n\n From now on we shall concentrate only on the root of unity case. More\nspecifically, we shall assume that the Liouville coupling constant\ntakes only the values\n\\[\n \\gamma = \\frac{\\pi\\nu}{\\nu +1} \\; \\nu\\in Z_+ +1 \\.\n\n\\]\n Using the string picture, we shall be able to map the states of a\nLiouville chain with ``anisotropy'' $\\gamma$ to a spin ${\\textstyle \\frac{1}{2}}\\mm$ spin\nchain with anisotropy\n\\be\n {\\tilde\\gamma} = \\frac{\\pi}{\\nu +1} \\; \\nu\\in Z_+ +1 \\.\n \\mabel{tilgam}\n\\ee\n\nWe begin with positive parity strings. 
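For concreteness, a small illustration of ours (not part of the derivation): the following sketch builds the rapidities of a positive parity $K$-string according to \refe{kstring} and checks the root-of-unity condition for $\gamma=\pi\nu/(\nu+1)$; all names are hypothetical.

```python
import cmath
import math

def k_string(center, K, gamma):
    """Rapidities lambda_K + k*i*gamma, k = -(K-1)/2, ..., (K-1)/2
    (in integer steps), of a positive parity K-string with real center."""
    return [center + (k - (K - 1) / 2) * 1j * gamma for k in range(K)]

# gamma = pi*nu/(nu+1) with nu = 3: q = e^{i*gamma} satisfies q**(nu+1) = -1,
# and strings longer than K_max = nu = 3 are excluded.
nu = 3
gamma = math.pi * nu / (nu + 1)
q = cmath.exp(1j * gamma)
roots = k_string(0.5, 3, gamma)  # a 3-string centered at lambda = 0.5
```

The three rapidities are equally spaced by $\i\gamma$ around the common real center, as in \refe{kstring}.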
The number of $L$-strings is\n$n_\\sL$, and the total number of Bethe Ansatz\\ roots is\n\\[\n m = \\sum_{L=1}^\\nu L \\mm n_\\sL \\.\n\n\\]\n The rapidities of different $L$-strings are denoted $\\lambda_{\\sL,j}\\, \\\nj=1\\mm\\ldots\\mm n_\\sL$.\n\nMultiplying the Bethe Ansatz equations\\ \\refe{BAE} for all the rapidities comprising a\n$K$-string, we get a set of coupled equations for the {\\it\nreal parts} of the strings only.\n\nThe terms in the right hand side of (\\ref{fBAE}--\\ref{xxzBAE}), which\ndescribe the scattering of pseudoparticles on each other,\nnow become scattering matrices of strings on strings.\n\nThe scattering of $K$-strings on 1-strings is described by the\nfunction\n\\bea\n S_{\\sss{ K 1}}(\\lambda) & \\equiv&\n\\prod_{k=-{{\\scriptscriptstyle \\frac{1}{2}(K-1)}}}^{{\\scriptscriptstyle \\frac{1}{2}(K-1)}} S_{1 1}(\\lambda + k\\i\\gamma)\\ =\n\\prod_{k=-{{\\scriptscriptstyle \\frac{1}{2}(K-1)}}}^{{\\scriptscriptstyle \\frac{1}{2}(K-1)}} \\BAEsps{\\lambda}{\\i k\\gamma} \\cr\n& &\\cr\n& &\\cr\n&=& \\BAEsps{\\lambda}{{\\textstyle \\frac{1}{2}}\\mm(K+1)\\i\\gamma}\\\n\\BAEsps{\\lambda}{{\\textstyle \\frac{1}{2}}\\mm(K-1)\\i\\gamma} \\. 
\\mabel{Sk1}\n\\eea\n In terms of this function, we can write the scattering matrix\nof $K$-strings on $L$-strings as\n\\bea\n S_{\\sK\\sL}(\\lambda) &=& \\prod_{k=-{{\\scriptscriptstyle \\frac{1}{2}(K-1)}}}^{{\\scriptscriptstyle \\frac{1}{2}(K-1)}}\\\n\\prod_{l=-{{\\scriptscriptstyle \\frac{1}{2}(L-1)}}}^{{\\scriptscriptstyle \\frac{1}{2}(L-1)}}\\\n \\BAEsps{\\lambda}{(k-l+1)\\mm\\i\\gamma} \\cr\n& &\\cr\n&=& \\prod_{k=\\sss{ \\jhalf |K-L|}}^{\\sss{ \\jhalf\n(K+L) - 1}} \\ S_{k1}(\\lambda) \\\n= \\prod_{k=\\sss{ \\jhalf |K-L|}}^{\\sss{ {\\rm\nmin} \\left[\\jhalf (K+L) - 1,\\ \\nu -\\jhalf(K+L)\\right]}} \\\nS_{k1}(\\lambda)\n\\.\n \\mabel{S}\n\\eea\n The first expression in terms of $S_{k1}$ bears close resemblance to\n(an exponential form of) the decomposition of the tensor product of\ntwo irreducible $SU(2)$ representations with spins ${\\textstyle \\frac{1}{2}}\\mm(K-1)$ and\n${\\textstyle \\frac{1}{2}}\\mm(L-1)$. Indeed, the Bethe Ansatz\\ can be viewed as a\ndifferent way of reducing tensor products of spin 1\/2 particles into\nirreducible representations. The factor $S_{\\sK 1}$ attached to a\nspin ${\\textstyle \\frac{1}{2}}\\mm(K-1)$ representation is obtained by fusing $K$\nfactors $S_{11}$ corresponding to spin 1\/2 representations. The\n$S$-matrix then naturally reflects the decomposition of the tensor\nproducts of the representation spaces involved in the scattering.\n\nThe second (``reduced'') expression in terms of $S_{k1}$ is peculiar\nto the root of unity case. It may be related to the decomposition of\nthe tensor product of two ``good'' quantum group representations,\ncf. \\cite{jutkar}. The reduced decomposition is related to the\nfusion rules of conformal blocks as treated in {\\cite{verlinde}},\nsuggesting that conformal blocks might be represented by Bethe Ansatz\\\nstrings. 
In the sequel we will see how this is to be interpreted.\n\nThe reduced form shows explicitly the symmetry of\n$S_{\\sK\\sL}$ which will allow us to map the Liouville Bethe equations\nonto the spin ${\\textstyle \\frac{1}{2}}\\mm$ XXZ Bethe equations:\n\\be\n S_\\sss{ \\nu+1-K,\\mm\\nu+1-L}(\\lambda)\n = S_{\\sK,\\sL}(\\lambda) \\.\n \\mabel{Sklsym}\n\\ee\n\n\n Expressing the Liouville Bethe equations \\refe{BAE} in terms of\nstrings we get\n\\be\n \\e{\\i K N\\gamma} \\ \\left[\\BAEspsmi{\\lambda_{\\sK,i}}{\\i\\kpert\\gamma}\\right]^N\n = \\prod_{\\sL; \\ n_\\sL\\neq 0} \\\n \\prod_{\\stackrel{j=1}{(\\sL,j)\\neq(\\sK,i)}}^{n_\\sL}\n S_{\\sK\\sL}(\\lambda_{\\sK,i} - \\lambda_{\\sL,j})\\ \\ \\ \\ \\forall\\ (K,i)\\.\n \\mabel{sBAE}\n\\ee\n\nThe number of coupled equations is reduced to $\\sum_{\\sL = 1}^\\nu\nn_\\sL$, and all roots of the equations are now taken to be real.\n\nSimilarly, applying the string picture to Equation \\refe{xxzBAE} for\nspin ${\\textstyle \\frac{1}{2}}\\mm$, we get the spin ${\\textstyle \\frac{1}{2}}\\mm$ XXZ \\ Bethe equations in terms\nof strings,\n\\be\n \\left[\\BAEspsmi{\\lambda_{\\sK,i}}{\\i\\kpert\\gamma}\\right]^N\n = \\prod_{\\sL; \\ n_\\sL\\neq 0} \\\n \\prod_{\\stackrel{j=1}{(\\sL,j)\\neq(\\sK,i)}}^{n_\\sL}\n \\Bigl(S_{\\sK\\sL}(\\lambda_{\\sK,i} - \\lambda_{\\sL,j})\\Bigr)^{-1} \\ \\ \\ \\ \\forall\\ (K,i)\\.\n \\mabel{xxzsBAE}\n\\ee\n Notice that as compared to {\\refe{xxzBAE}}, we have inverted the equation.\n\nInspired by Equation \\refe{Sklsym}, we parameterize the string lengths\n{}from the maximal string backwards,\n\\[\n K = \\nu +1 - \\tilde K \\; L = \\nu +1 - \\tilde L\\; \\tilde K,\\tilde L =\n1, \\mm\\ldots\\mm , \\nu \\.\n \n\\]\n With this parameterization, it turns out that the Liouville Bethe Ansatz equations\\\n\\refe{sBAE} for the strings $\\set{\\lambda_{\\sL,j}}$ map to the spin\n${\\textstyle \\frac{1}{2}}\\mm$ XXZ equations for a set of strings $\\set{{\\tilde\\lambda}_{\\tilde\n\\sL,j}}$ with the 
anisotropy ${\\tilde\\gamma}$.\n\nIndeed, after some algebra we have for the left hand side\n\\be\n \\e{\\i K \\gamma}\\ \\BAEspsmi{\\lambda_{\\sK,i}}{\\i\\kpert\\gamma} =\n \\e{\\i \\tilde K {\\tilde\\gamma}}\\\n\\BAEspsmi{{\\tilde\\lambda}_{\\tilde\\sK,i}}{\\i\\trac{\\tilde K}{2}{\\tilde\\gamma}} \\.\n \\mabel{pmap}\n\\ee\n The spectral parameter is changed in the following way:\n\\be\n {\\tilde\\lambda} = \\lambda - \\i (1+(-1)^\\sK) \\piperf \\, \\mabel{tillam}\n\\ee\n i.e. the parity of even-length strings is changed. In addition, it\nturns out that\n\\[\n S^\\gamma_\\sss{ K 1}(\\lambda)\n= \\Bigl(S^{\\tilde\\gamma}_\\sss{ \\tilde K 1}(\\tilde\\lambda)\\Bigl)^{-1} \\,\n\\]\n where the upper index denotes whether $\\gamma$ or ${\\tilde\\gamma}$ is\nused when defining $S_\\sss{K1}$ according to Equation \\refe{Sk1}.\n\n Using this, as well as Symmetry Property \\refe{Sklsym}, we get for\nthe string-on-string scattering matrix \\refe{S}\n\\be\n S_{\\sK\\sL}^\\gamma (\\lambda_{\\sK,i} - \\lambda_{\\sL,j})\n =\\Bigl(S_{\\tilde\\sK\\tilde\\sL}^{\\tilde\\gamma}\n ({\\tilde\\lambda}_{\\tilde\\sK,i}-{\\tilde\\lambda}_{\\tilde\\sL,j})\\Bigr)^{-1} \\.\n \\mabel{smap}\n\\ee\n\nThese results can easily be generalized to negative parity Liouville\nstrings as well.\n\nNow we are in position to state the equivalence of lattice Liouville\nand spin ${\\textstyle \\frac{1}{2}}\\mm$ XXZ chain\\ Bethe Ans\\\"atze. Using Equations (\\ref{pmap},\n\\ref{smap}) in \\refe{sBAE} and comparing to Equation \\refe{xxzsBAE},\nwe get {\\it complete equivalence} of the string states in lattice\nLiouville theory and a spin ${\\textstyle \\frac{1}{2}}\\mm$ XXZ chain with an additional phase\nfactor $ \\exp\\{\\i N \\tilde K {\\tilde\\gamma}\\}$. 
This extra factor will allow\nus to incorporate the remnant of the chain length mod $\\nu+1$ as a\nmeaningful extra parameter in the theory of finite size corrections\\ of a usual XXZ\nchain.\n\nThe spin ${\\textstyle \\frac{1}{2}}\\mm$ XXZ Bethe Ansatz equations\\ we shall be analyzing are thus\n\\be\n \\e{\\i N \\tilde K{\\tilde\\gamma}}\\mm \\left[\n\\BAEspsmi{{\\tilde\\lambda}_{\\tilde\\sK,i}}{\\i\\trac{\\tilde K}{2}{\\tilde\\gamma}}\n\\right]^N \\times\n\\prod_{\\sss{\\tilde L}; \\ n_\\sss{\\tilde L}\\neq 0} \\\n \\prod_{\\stackrel{j=1}{\\sss{(\\tilde L,j)\\neq(\\tilde K,\n \t\ti)}}}^{n_\\sss{\\tilde L}}\n S_{\\sss{\\tilde K\\tilde L}}(\\lambda_{\\sss{\\tilde K},i}-\\lambda_{\\sss{\\tilde L},j})\n\\ = \\ 1 \\ \\ \\ \\ \\forall\\ (\\tilde K,i) \\.\n \\mabel{phxxzBAE}\n\\ee\n This equation is valid separately for each $N$, but\nthe thermodynamic limit $N\\to\\infty$ is sensible only if $N$\napproaches $\\infty$ in steps of $\\nu+1$.\n\nAccordingly, we parametrize the (even) chain length as\n\\be\n \\frac{N}{2} = n (\\nu +1) + \\kap\\, \\ \\ \\kap=0,\\ldots ,\\nu \\.\n \\mabel{N}\n\\ee\n Taking the thermodynamic limit $N\\to\\infty$ {\\it at fixed} $\\kap$,\nletting $n\\to\\infty$, the limit of Bethe Ansatz equations\\ (\\ref{sBAE}, \\ref{phxxzBAE})\nis well defined. 
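A quick numerical sanity check of ours (not from the text): with $N=2(n(\nu+1)+\kap)$ and ${\tilde\gamma}=\pi/(\nu+1)$, one has $N\tilde K{\tilde\gamma}=2\pi n\tilde K+2\kap\tilde K{\tilde\gamma}$, so the phase factor depends only on the remnant $\kap$; the function names below are hypothetical.

```python
import cmath
import math

def phase_factor(N, K_tilde, nu):
    """exp(i * N * K_tilde * gamma_tilde) with gamma_tilde = pi / (nu + 1)."""
    gamma_tilde = math.pi / (nu + 1)
    return cmath.exp(1j * N * K_tilde * gamma_tilde)

nu, K_tilde, kappa = 4, 2, 3
# Two chain lengths sharing the same remnant kappa = (N/2) mod (nu + 1):
N1 = 2 * (1 * (nu + 1) + kappa)
N2 = 2 * (7 * (nu + 1) + kappa)
# The phase reduced to its kappa-dependent part:
reduced = cmath.exp(2j * kappa * K_tilde * math.pi / (nu + 1))
```

Both chain lengths yield the same phase, equal to the reduced expression.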
The extra phase factor in the spin ${\\textstyle \\frac{1}{2}}\\mm$ XXZ chain\\ Bethe\nequations \\refe{phxxzBAE} is thus\n\\be\n \\e{2 \\i\\kap \\tilde K {\\tilde\\gamma}} \\.\n \\mabel{phase}\n\\ee\n\n\\vali\n\nTo summarize, the obtained equivalence of string states in lattice\nLiouville and spin ${\\textstyle \\frac{1}{2}}\\mm$ XXZ is the following:\n\n\\vali\\vali\n\n \\noindent\\begin{tabular}{l|c|c}\n \\null & Lattice Liouville &\n\t\\begin{minipage}{3cm}\\begin{center}\\vspace*{2mm}\nSpin ${\\textstyle \\frac{1}{2}}\\mm$ XXZ\n\t \\vspace*{2mm}\\end{center}\\end{minipage} \\\\ \\cline{1-3}\nanisotropy &\n\t\\begin{minipage}{3cm}\\begin{center}\\vspace*{2mm}\n$\\gamma=\\frac{\\pi\\nu}{\\nu+1}$\n\t \\vspace*{2mm}\\end{center}\\end{minipage} &\n${\\tilde\\gamma} = \\frac{\\pi}{\\nu +1}$\\\\\nstring lengths & $K$ & $\\tilde K =\\nu +1 -K$ \\\\\nspectral\nparameters &\n\t\\begin{minipage}{3cm}\\begin{center}\\vspace*{2mm}\n$\\lambda$\n\t \\vspace*{2mm}\\end{center}\\end{minipage} &\n${\\tilde\\lambda}=\\Re\\lambda+\\i\\left(\\Im\\lambda-(1+(-1)^\\sK)\\piperf\\right)$ \\\\\n \\end{tabular}\n\n \\vali\\vali\n\\noindent\nFor the analysis of the physical vacuum, it is important to note\nthat strings of maximal length are mapped to 1-strings, i.e. ordinary\nBethe Ansatz\\ roots, and vice versa.\n\nWe are interested in the energy spectrum of the Liouville model. The\nauxiliary Lax-operator $L_{n,a}$, which was used for the Bethe Ansatz,\nintertwines the $n$-th quantum space and the auxiliary space $sl_2$.\nThe quantum spaces for the lattice Liouville model are copies of $L^2({\\rm\nR})$. Thus $L_{n,a}(\\lambda)$ does not degenerate to a permutation\noperator at any value of the spectral parameter $\\lambda$, and is not\nwell suited for investigating lattice dynamics. 
To get an integrable\nHamiltonian that generates lattice Liouville dynamics, one would have\nto introduce fundamental Lax-operators $L_{n,f}$ that intertwine two\nquantum spaces \\cite{tatafa}.\n\nFortunately, we are here only interested in energy and momentum\neigenvalues, so we do not have to investigate the fundamental\n$L$-operator. Following {\\cite{tatafa}} one can read off the energy\nand momentum eigenvalues from the eigenvalues of the diagonal elements\n$A$ and $D$ of the {\\it auxiliary} Lax-operators $L_{n,a}$. The\nmomentum eigenvalues are\n\\[\n P^\\L(\\set{\\lambda_j}) = \\trac{1}{2\\i} \\sum_{j=1}^m p(\\lambda_j) \\, , \\quad\n p(\\lambda) = \\ln \\frac{a^\\gamma(\\lambda)}{d^\\gamma(\\lambda)} \\.\n \\]\n The upper index for $a$ and $d$ again stresses the particular\nvalue of anisotropy used.\n\nThe energy can be obtained by differentiating:\n\\[\n E^\\L(\\set{\\lambda_j}) = \\sum_{j=1}^m \\epsilon\\mm(\\lambda_j)\\, , \\quad\n \\epsilon\\mm(\\lambda) = \\frac{\\gamma}{\\pi}\\mm\\frac{d}{d\\lambda}\\mm p(\\lambda) \\.\n\\]\n\n{}From {\\refe{phxxzBAE}} we see that for the spin ${\\textstyle \\frac{1}{2}}\\mm$ XXZ chain\\ with extra phase\nfactor, the roles of $a$ and $d$ are interchanged. 
Accordingly,\nusing Correspondence {\\refe{pmap}}, we can write the Liouville energy\nand momentum in terms of the XXZ ones:\n\\bea\n P^{\\L}(\\set{\\lambda_j})\n &=& \\trac{1}{\\i}\\sum_{j=1}^m\n\t\t\\ln\\frac{a^{\\tilde\\gamma}(\\lambda_j)}{d^{\\tilde\\gamma}(\\lambda_j)}\n = - P^{{\\rm xxz}}(\\set{\\lambda_j})\n \\equiv\n \\i\\ln\\Lambda^{{{\\rm xxz}}}(\\set{\\lambda_j})\\longat{\\lambda=\\i\\trac{{\\tilde\\gamma}}{2}}\n \\mabel{moment-mal} \\\\\n E^\\L(\\set{\\lambda_j}) &=& - E^{{\\rm xxz}}(\\set{\\lambda_j})\n \\equiv -\\i\\frac{{\\tilde\\gamma}}{\\pi}\\mm \\frac{d}{d\\lambda}\\mm\n \\ln\\Lambda^{{{\\rm xxz}}}(\\lambda,\\set{\\lambda_j})\n \\longat{\\lambda=\\i\\trac{{\\tilde\\gamma}}{2}} \\ \\ \\.\n \\mabel{energy-mal}\n\\eea\n Here we have expressed the energy and momentum in terms of\neigenvalues $\\Lambda$ of the transfer matrix \\refe{transfer},\n\\be\n\\left(\\A(\\lambda)+\\D(\\lambda)\\right)\\mm\\Psi_m(\\set{\\lambda_j}) \\equiv\n\\Lambda(\\lambda,\\set{\\lambda_j})\\Psi_m(\\set{\\lambda_j}) \\.\n \\mabel{Lam}\n\\ee\n This is possible for the spin ${\\textstyle \\frac{1}{2}}\\mm$ XXZ chain\\, as the auxiliary\nand fundamental Lax-operators for the spin ${\\textstyle \\frac{1}{2}}\\mm$ XXZ chain\\ coincide. The XXZ\nLax-operator \\refe{Lxxz} yields local commuting quantities at the\nvalue $\\lambda=\\i\\frac{\\tilde\\gamma}{2}$, for which it becomes a permutation\nmatrix.\n\n\nThe Bethe equations do not have to be completely solved to get the low\nlying spectrum of the Hamiltonian. We need only the {\\it finite size}\ni.e. $\\operN$ corrections to the eigenvalues of the spin ${\\textstyle \\frac{1}{2}}\\mm$ XXZ\ntransfer matrix corresponding to Bethe Ansatz equations\\ {\\refe{phxxzBAE}}. The reason\nfor this is the following:\n\nIn this paper we started by discretizing Liouville theory in a finite\nvolume $2\\pi = a\\mm N$, with lattice spacing $a$. 
Thus the\nconformal scaling limit of the spin chain corresponds to the continuum\nlimit of the Liouville field theory. Moreover, the continuum\nHamiltonian is\n\\be\nH_{{\\rm cont}} = \\trac{1}{a}\\mm H_{\\mm{\\rm lattice}} \\,\n\\mabel{Hcont}\n\\ee\n so that only $\\operN$ corrections to the eigenvalues of the lattice\nHamiltonian remain finite in the continuum limit. This is exactly the\nrealm of finite size effects, which in this case are\nsimultaneously finite lattice spacing effects.\n\n\n\n\\section{Finite Size Corrections for the Six-Vertex Model}\n\nAs is well known, the spin ${\\textstyle \\frac{1}{2}}\\mm$ XXZ chain\\ has intimate connections to the\nsix-vertex model of classical statistical mechanics {\\cite{baxter}}.\nFor two dimensional classical statistical models, the $\\operN$\nbehavior and the conformal properties are closely related.\n\nIn \\cite{blocani,affleck,cardy} it was argued that the central charge\n$c$ of the conformal field theory corresponding to the scaling limit\nof a two-dimensional statistical model is related to the finite size\ncorrections to the free energy, i.e. the logarithm of the maximal\neigenvalue $\\Lambda_o$ of the transfer matrix, in the limit $N\\to\\infty$.\nOn the other hand, the statistical mechanics minimum free energy\nconfiguration corresponds to the ground state energy $E_o$ of the\nscaling conformal theory. For a model on an infinitely wide strip of\nlength $N$, the behavior of $E_o$ is \\cite{cardy}\n\\be\n E_o = N f_\\infty \\mm\n - \\mm \\operN \\trac{\\pi}{6}\\mm c \\mm + \\mm \\O(\\trac{1}{N^2}) \\,\n \\mabel{scalec}\n\\ee\n with $f_\\infty$ the free energy per site in the thermodynamic\nlimit.\n\nSimilarly, the critical indices (conformal weights) $\\Delta,\\Bardelta$\nof operators corresponding to excited states of the system can be read\noff the large $N$ behavior of configurations close to the one\nminimizing the free energy. 
In terms of the higher energy and\nmomentum eigenvalues of the corresponding 1+1 dimensional quantum\ntheory, the critical indices are:\n \\bea\nE_m - E_o &=& \\trac{2\\pi}{N}\\mm (\\Delta \\mm + \\mm \\Bardelta)\n \t\t\\ +\\ \\O(\\trac{1}{N^2}) \\cr\n & & \\mabel{scaleweights}\\\\\nP_m - P_o &=& \\trac{2\\pi}{N}\\mm (\\Delta \\mm - \\mm \\Bardelta)\n \t\t\\ +\\ \\O(\\trac{1}{N^2}) \\nonumber\n\\eea\n As opposed to our differential dependence {\\refe{energy-mal}}, the\napproach of Ref. {\\cite{cardy}} relates the energy and momentum\ndirectly to lower eigenvalues $\\Lambda_m$ of the transfer matrix\nat criticality; $E_m\\sim -\\Re \\ln\\Lambda_m$, $\\ P_m\\sim -\\Im \\ln\\Lambda_m$.\n\n\\vali\n\nThe finite size corrections\\ of six-vertex models and the corresponding conformal\nproperties have been extensively studied in the literature\n\\cite{devewoy,karowski,kluwezi,boizre}. For the six-vertex model with an\nextra phase factor of the form \\refe{phase}, and string excitations,\nthe finite size corrections were calculated by Karowski\n\\cite{karowski}. The results of \\cite{karowski} relevant for us are\nthe following.\n\nThe ground state of a six-vertex model is described by a filled Dirac\nsea of one-strings, i.e. $n_1= N\/2$, $n_l = 0, \\mm l>1$. The\nlogarithm of the corresponding maximal eigenvalue of the transfer\nmatrix is \\cite[Eq. 4.3]{karowski}\\footnote{Note that our $\\nu$\ndiffers from the one of \\cite{karowski} by one. The relation between\nour $\\lambda$ and the $\\theta$ of \\cite{karowski} is $\\lambda = -\\i\\theta +\n\\i\\trac{{\\tilde\\gamma}}{2}$. This follows from the differences of the\nrespective Bethe equations \\refe{phxxzBAE} and \\cite[Eq.\n2.4]{karowski}. 
}\n\\be\n \\ln \\Lambda_o \\approx - \\i N f_\\infty(\\lambda) + \\frac{1}{N} \\frac{\\pi}{6}\n\\left(1 - \\frac{6\\kap^2}{\\nu(\\nu+1)} \\right) \\mm\n \\cosh\\left(\\trac{\\pi}{{\\tilde\\gamma}}\\lambda\\right) \\.\n \\mabel{cscale}\n\\ee\n\nExcited states consist of higher strings above the vacuum, and holes in\nthe distribution of one-strings. Low-energy excitations have both the\nnumber of holes and the number of higher strings $\\sum_{\\sL > 1}\nn_\\sL$ of the order $N^0$.\n\nFor a given distribution of strings $\\{n_l\\}$, there is a certain\nnumber of allowed values for the spectral parameters. This number\ndepends on the behavior of the Bethe Ansatz equations\\ in the limits $\\lambda\\to\\pm\\infty$.\nDue to the dependence of these limits on higher strings, each\n$L$-string automatically gives rise to $2L-2$ holes.\n\nThe ``primary'' excitations correspond to states with one higher\nstring, no extra holes and the holes corresponding to the string\nevenly distributed between the two surfaces of the Dirac sea of\n1-strings, i.e. between $\\lambda \\sim\\infty$ and $\\lambda \\sim-\\infty$.\n\nFor such states, the finite size behavior of the eigenvalues of the\ntransfer matrix reads \\cite[Eq. 4.5]{karowski}\n\\be\n \\ln\\Lambda_m - \\ln\\Lambda_o \\approx -\\trac{2\\pi}{N}\n\\left\\{(\\Delta + \\Bardelta)\n \\cosh\\left(\\trac{\\pi}{{\\tilde\\gamma}}\\lambda\\right)\n\\mm + \\mm (\\Delta - \\Bardelta)\n \\sinh\\left(\\trac{\\pi}{{\\tilde\\gamma}}\\lambda\\right) \\right\\}\\,\n \\mabel{weightscale}\n\\ee\n where the weights are, if the single higher string is a $\\tilde\nK$-string,\n\\be\n \\Delta = \\Bardelta = \\frac{\\Bigl((\\nu+1)(\\tilde K-1) +\\kap\\Bigr)^2 -\n\\kap^2}{4\\nu(\\nu+1)} \\.\n \\mabel{delta}\n\\ee\n If more strings and\/or holes are present in the excited state, the\nresulting critical indices differ from the ones above by additional\nintegers. 
Accordingly, these states belong to the conformal towers of\ndescendants of the described ``primary states''.\n\n For future use we define the function\n \\[\n \\delta_{L,\\kap} = \\frac{\\Bigl((\\nu+1)(L-1) +\\kap\\Bigr)^2}\n {4\\nu(\\nu+1)} \\,\n\\]\n in terms of which the critical indices \\refe{delta} read\n \\be\n\\Delta = \\Bardelta = \\delta_{\\tilde K,\\kap} - \\delta_{1,\\kap}\\.\n \\mabel{Deleqdeldel}\n\\ee\n In this form, we explicitly see the subtraction of the part\ncorresponding to the ground state.\n\nAn extra condition on $\\tilde K$ for a primary state was found in\n\\cite{karowski}. There is an upper limit for the length of the strings\nthat contribute to the finite size corrections. For extra phase $\\kap$, only strings\nsatisfying\n\\be\n \\tilde K < \\nu + 1 - \\kap\n\\mabel{Klim}\n\\ee\n contribute. This cutting off of higher strings resembles a result of\nRef. \\cite{jutkar} for an $sl_q(2)$\\ invariant XXZ chain with fixed\nboundary conditions. There it was conjectured that the highest strings\ncorrespond to ``bad'' representations of the quantum group (i.e.\nrepresentations with vanishing $q$-dimension), which have to be\nremoved from the spectrum to keep the theory unitary.\n\n\\vali\n\nIn the six-vertex model, one expects conformal invariance at $\\lambda =\n0$, where the $R$-matrix becomes isotropic. 
At this point, one can use\nEquations (\\ref{scalec}, \\ref{scaleweights}) to recognize the conformal\nproperties of the model.\n\n{}From Equation \\refe{cscale} we read off the central charge:\n\\be\n c = 1 - \\frac{6\\kap^2}{\\nu(\\nu+1)} \\.\n \\mabel{c}\n\\ee\n For the value $\\kap=1$, related to critical Potts models in\n\\cite{karowski}, the central charges of unitary minimal models emerge.\nThe critical indices \\refe{delta} reproduce a row in the Ka\\v c\\ \\ table.\nThe highest string does not contribute to the spectrum, due to\nRestriction \\refe{Klim}.\n\n\n\n\n\n\\section{Conformal Properties of Lattice Liouville Theory}\n\n\nNow we can use the results of Ref. \\cite{karowski} reviewed in the\nprevious section, to calculate the scaling properties of lattice\nLiouville energies and momenta, and to recognize the corresponding\nconformal structures.\n\nWe are interested in Bethe Ansatz\\ states that correspond to the ``primary''\nstates of the six-vertex model, i.e. one $K$-string above the physical\nvacuum, with $K = \\nu + 1 - \\tilde K$.\n\n{}From the scaling forms of the transfer matrix\n(\\ref{weightscale}, \\ref{scalec}) we get Liouville momenta and energies\nusing Prescriptions (\\ref{moment-mal}, \\ref{energy-mal}). The\nresulting eigenvalues yield critical indices using\nEquation \\refe{scaleweights}.\n\nFor lattice Liouville theory, there is an important difference to the\ncase treated in \\cite{karowski}. The extra phase $\\kap$ is not a\nconstant. Instead we have sectors with different values of $\\kap$,\ncorresponding to different lengths of the chain mod $(\\nu+1)$, as\nindicated by Equation \\refe{N}. 
In the thermodynamic limit, all values of\n$\\kap$ coexist.\n\n{}From Equation \\refe{cscale} it is easy to see that the ground state of the\ntheory lies in the $\\kap = 0$ sector, which gives $c=1$, the usual\ncentral charge for a periodic XXZ spin chain \\cite{devewoy,alibaba}.\n\nAs in \\cite{karowski}, minimal models would emerge if the ground state\nwere taken to be in the $\\kap=1$ sector. Here we will adopt this\napproach, discarding the $\\kap=0$ sector altogether. Later on we will\nreturn to the interpretation of this reduction of the theory.\n\nAccordingly, the central charge is\n \\be\nc = 1 - \\frac{6}{\\nu(\\nu+1)} \\; \\nu = 2,3, \\ldots\n \\mabel{cIII}\n\\ee\n and we recover the unitary minimal conformal field theories\\ of \\cite{bpz,fqs}.\n\nWhen calculating the excitation energies and the corresponding critical\nindices from Equation \\refe{weightscale}, we have to subtract the ground\nstate energy, not only the minimal energy $\\delta_{1,\\kap}$ within\neach $\\kap$ sector.\n\nWith the ground state lying in the $\\kap=1$ sector, we get for the\ncritical indices, instead of (\\ref{Deleqdeldel}),\n\\be\n \\Delta=\\Bardelta = \\delta_{\\tilde K,\\kap} - \\delta_{1,1} =\n\\frac{\\Bigl((\\tilde K - 1) (\\nu+1) + \\kap\\Bigr)^2\n\t\t- 1}{4\\nu(\\nu+1)}\n\\. \\mabel{weightsIII}\n\\ee\n Defining\n\\bea\n p &=& \\tilde K + \\kap -1\\; p = 1, \\ldots, \\nu -1 \\cr\n q &=& \\kap\\; q = 1, \\ldots, \\nu\n \\mabel{pq}\n\\eea\n we can write the critical indices in the form\n\\be\n \\Delta = \\Bardelta \\ =\\ \\frac{\\Bigl( p\\mm (\\nu+1) - q\\mm \\nu\\Bigr)^2 -\n1}{4\\nu(\\nu+1)} \\.\n \\mabel{weightsIV}\n\\ee\n These scaling weights reproduce the whole Ka\\v c\\ table of unitary\nconformal field theories. 
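As a concrete numerical check (an illustration added here, not part of the original derivation; the helper names are ours), the central charge \\refe{cIII} and the weights \\refe{weightsIV} can be tabulated directly. For $\\nu=3$ they reproduce the Ising model data $c=1\/2$ and $\\Delta\\in\\{0, 1\/16, 1\/2\\}$:

```python
from fractions import Fraction

def central_charge(nu):
    # c = 1 - 6 / (nu (nu + 1)), Equation (cIII)
    return Fraction(1) - Fraction(6, nu * (nu + 1))

def kac_weight(nu, p, q):
    # Delta = ((p (nu+1) - q nu)^2 - 1) / (4 nu (nu+1)), Equation (weightsIV)
    return Fraction((p * (nu + 1) - q * nu) ** 2 - 1, 4 * nu * (nu + 1))

# nu = 3: the Ising model
nu = 3
c = central_charge(nu)
weights = {kac_weight(nu, p, q)
           for p in range(1, nu)        # p = 1, ..., nu - 1
           for q in range(1, nu + 1)}   # q = 1, ..., nu

print(c)                # 1/2
print(sorted(weights))  # the Ising primaries 0, 1/16, 1/2
```

The same enumeration for general $\\nu$ runs over the whole Ka\\v c\\ table of the $(\\nu,\\nu+1)$ unitary minimal model.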
The ranges of $p$ and $q$ follow directly from Restriction\n\\refe{Klim} on the maximal string length, and the possible values of\n$\\kap$ with the $\\kap=0$ sector excluded.\n\n\\vali\n\n{}From \\refe{Hcont} we see that in the continuum, the energy and\nmomentum of primary\nBethe Ansatz\\ states are\n\\[\n E = \\Delta \\mm + \\mm \\Bardelta\\; P = \\Delta \\mm - \\mm \\Bardelta \\.\n\\]\n The primary Bethe states are thus products of holomorphic and\nantiholomorphic vectors (right and left movers) with conformal weights\n\\refe{weightsIII}.\n\n\\vali\n\nThe behavior encountered here is exactly the same as the one\nencountered in the restriction of SOS models to RSOS models\n\\cite{anbafo}. The Bethe Ansatz equations\\ \\refe{phxxzBAE} are the Bethe equations of\nthe so-called III\/IV critical limit of the SOS and corresponding eight\nvertex and XYZ models, in the case of root of unity anisotropy. In\nthe III\/IV critical limit, the extreme off-diagonal term (usually\ndenoted $d$) in the eight-vertex $R$-matrix vanishes, and the elliptic\nfunctions degenerate to trigonometric ones. Thus (on applying the\nstring picture) the XYZ Bethe equations of \\cite{baxter,takfad79}\nturn into Equations \\refe{phxxzBAE} at criticality. For an anisotropy\nof the form \\refe{tilgam}, the SOS Bethe states are parameterized by\nall $0\\leq \\kap \\leq \\nu$. The ground state lies in the $\\kap=0$\nsector, and the corresponding central charge is $c=1$.\n\nIn the root of unity case, it becomes possible to ``restrict'' the SOS\nmodel \\cite{anbafo}. The sectors with $1\\leq\\kap\\leq\\nu$ decouple from\nthe other sectors, and we can restrict our interest to these sectors\nonly. This decoupled part of the SOS model is known as the RSOS\nmodel. On the level of Bethe equations, the restriction means leaving\nout the $\\kap=0$ sector. 
The ground state now lies in the $\\kap=1$\nsector, and in the thermodynamic limit the unitary conformal field theories\\ with $c<1$\nemerge \\cite{huse}.\n\n\\vali\n\nThere is one more subtlety in the interpretation of the results for\nthe lattice Liouville model described above. Remembering the\ncorrespondence between lattice Liouville theory and the spin ${\\textstyle \\frac{1}{2}}\\mm$ XXZ chain\\\ndescribed in Section \\ref{maptoxxz}, it is evident that the physical\nvacuum for Liouville theory consists of maximal strings. This can be\nviewed as the maximal string limit of the fact that the vacuum of\nhigher spin chains consists of higher strings.\n\nDue to the complicated structure of the vacuum, not all\ncombinations of remnant and string length are a priori allowed.\nOn the contrary, we get stringent conditions on $N$ from requiring the\ncoexistence of a specific remnant $N \\ \\mbox{mod}\\ (\\nu+1)$ and a single\n$K$-string (corresponding to a spin ${\\textstyle \\frac{1}{2}}\\mm\\ $ XXZ $\\tilde K$-string)\nabove a sea of maximal $\\nu$-strings. In fact, these requirements fix\nthe chain length modulo $\\nu(\\nu+1)$. The parameterization \\refe{N}\nof $N$ has to be extended to\n\\be\n \\frac{N}{2} = (n(\\nu+1) \\mm-\\mm \\kap \\mm + \\mm K)\\nu + K\n\t = (n\\mm\\nu \\mm-\\mm \\kap \\mm + \\mm K)\n\t\t\t(\\nu +1) + \\kap \\.\n \\mabel{nKr}\n\\ee\n{}From here we see that for this $N$ it is indeed possible to define a\nstate with one $K$-string over a sea of $n(\\nu+1) - \\kap + K$ maximal\nstrings. In addition the remnant is $\\kap$. The thermodynamic limit\nhas to be taken in steps of $\\nu(\\nu+1)$ by taking $n\\to\\infty$ in Equation \n\\refe{nKr}.\n\nAccordingly, the full picture of lattice Liouville primary states is\nthe following. 
In a lattice Liouville chain consisting of $N$ sites\n($N$ even), there is a primary state characterized by two integers.\nThese integers are related to the remnants of the chain length $\\ \\mbox{mod}\\ \n\\nu(\\nu+1)$ and $\\ \\mbox{mod}\\ (\\nu+1)$,\n\\[\n \\frac{N}2 \\ \\mbox{mod}\\ \\nu(\\nu+1) = (\\nu - p)(\\nu +1) + q \\.\n\\]\n The primary state is the state with a single lower string. The $q=0$\nsector exhibits the behavior of the SOS ground state, and after the\nrestriction, the RSOS unitary series emerge, with central charges\n\\refe{cIII} and conformal weights \\refe{weightsIV}. In the\nthermodynamic limit all remnants mod $\\nu(\\nu+1)$ coexist (except\npossibly for the decoupled $q=0$), and the Liouville primary states\ngive the whole Ka\\v c\\ table.\n\n\n\\section{Conclusions}\n\n\nWe have developed a spectral parameter dependent integrable structure\nfor quantum Liouville theory on a lattice. Using the ensuing\n$L$-matrix, we have written the Bethe Ansatz equations\\ for Liouville theory.\n\nWe have concentrated on certain Liouville coupling constants $\\gamma$,\nfor which $q=\\e{\\i\\gamma}$ is a root of unity. Using the string picture\nto describe excited Bethe Ansatz\\ states, we have mapped the Liouville Bethe\nequations to a set of generalized spin ${\\textstyle \\frac{1}{2}}\\mm$ XXZ Bethe Ansatz equations, more precisely\nthe critical SOS Bethe equations. This mapping takes maximal\nLiouville strings to XXZ one-strings and vice versa. 
The physical\nLiouville vacuum thus consists of a Dirac sea of maximal strings.\n\nUsing results of Karowski \\cite{karowski} for the finite size corrections\\ to the\neigenvalues of the transfer matrix, we have calculated the central\ncharges and conformal dimensions of the spin ${\\textstyle \\frac{1}{2}}\\mm$ XXZ chain, and accordingly also\nof Liouville theory.\n\nWe found that the continuum limits of lattice Liouville theories with\ncoupling constants $\\gamma = \\pi\\trac{\\nu}{\\nu+1}\\mm, \\ \\nu=2,3,\\ldots$\nreproduce the unitary minimal models of Friedan, Qiu and Shenker\n\\cite{fqs}, on restricting the chain length not to be divisible by\n$\\nu+1$. This restriction is the exact analogue of the RSOS\nrestriction of SOS models at root of unity anisotropies.\n\nPrimary excitations of the Liouville chain are characterized by two\nintegers, the length of a shorter string above the vacuum consisting\nof maximal strings, and the remnant $\\kap$ of the chain length mod\n$(\\nu+1)$. With these two parameters, the conformal weights\ncorresponding to the excited states give all states in the\ncorresponding Ka\\v c\\ table.\n\nTo clarify the structure of the different sectors in the theory, a\nunitarity analysis based on the hidden $sl_q(2)$\\ symmetry is needed. This\nshould illuminate both the RSOS restriction $\\kap\\neq 0$, and the\nresult of \\cite{karowski} that the highest XXZ (i.e. lowest Liouville)\nstrings do not contribute to the spectrum. Following \\cite{jutkar} we\nbelieve that these properties are deeply related to properties of root\nof unity representations of $sl_q(2)$. 
Truncation of the Bethe Ansatz\\ Hilbert\nspace corresponds to excluding ``bad'' representations of the quantum\ngroup, which is required in order to have a positive metric on\nthe Hilbert space.\n\nIn this work we found an equivalence of the Bethe Ansatz equations\\ of the lattice\nLiouville and the critical eight-vertex (SOS) models.\nAccordingly, the results presented here can be used to provide a\nLagrangian description of the critical conformal field theories of all\ntwo-dimensional statistical models related to the eight-vertex model.\n\nDue to the intimate connection of Liouville theory to two dimensional\ngravity, it would be very interesting to extend the method of\nquantizing Liouville theory presented here to more general couplings.\nIt is evident that negative couplings $\\gamma$ correspond to the real\nsector of the Liouville model with $c>25$.\n\nWhether the strong coupling results of Gervais \\& al. \\cite{gervais}\nin the regime $1 < \\ldots$\n\nwhere $\\lambda > 0$ is a hyper-parameter to balance the trade-off between fitting the data and satisfying our model assumptions given by $ \\Theta (\\mathbf{D}, \\mathbf{X})$, which is a regularizer that promotes the desired physical consistency. We call this \\emph{wave-informed matrix factorization}. To derive the specific function $\\Theta (\\mathbf{D}, \\mathbf{X})$ which promotes the desired physical consistency, we begin by designing a regularizer to promote wave-consistent signals in the columns of $\\mathbf{D}$.\\footnote{Note that our approach is also generalizable to having wave-consistent regularization in both factors.} \nSpecifically, consider the 1-dimensional wave equation evaluated at $d_i(l)x_i(t)$:\n\\begin{eqnarray}\n \\frac{\\partial^2 \\left[ d_i(l)x_i(t) \\right] }{\\partial l^2} &=& \\frac{1}{c^2} \\frac{\\partial^2 \\left[ d_i(l)x_i(t) \\right] }{\\partial t^2} \n \\label{eqn:wave_equation}\n\\end{eqnarray}\nNote that the above equation constrains each\n$d_i(l)x_i(t)$ component ($i \\in [N]$) to be a wave. 
This can thus be seen as decomposing the given wave data into a superposition of simpler waves. The Fourier transform in time on both sides of \\eqref{eqn:wave_equation} yields:\n$\\frac{\\partial^2 \\left[ d_i(l) \\right] }{\\partial l^2}X_i(\\omega) = \\frac{-\\omega^2}{c^2} d_i(l) X_i(\\omega)$. Thus, at points where $X_i(\\omega) \\neq 0$, we can enforce the equation,\n\\begin{eqnarray}\n \\frac{\\partial^2 \\left[ d_i(l) \\right] }{\\partial l^2} &=& \\frac{-\\omega^2}{c^2} d_i(l) \n \\label{eqn:space_eigen_value}\n\\end{eqnarray}\nto maintain consistency with the wave equation.\nNote that the above equation is analytically solvable for constant $\\omega\/c$: the solution takes the form $d_i(l) = A \\sin(\\omega l\/c) + B \\cos(\\omega l\/c)$, where the constants $A$ and $B$ can be determined by initial conditions. Thus, we conclude that the choice of $\\omega\/c$ determines $d_i(l)$ and vice versa.\nWe next discretize equation \\eqref{eqn:space_eigen_value}. From the literature on discretizing differential equations, the above can be discretized to:\n\\begin{eqnarray}\n \\mathbf{L} \\mathbf{D}_i &=& -k_i^2 \\mathbf{D}_i \n \\label{eqn:discrete_wave_eqn}\n\\end{eqnarray}\nwhere $k_i = \\omega \/ c$, also known as the wavenumber, and $\\mathbf{L}$ is a discrete second derivative operator, where the specific form of $\\mathbf{L}$ depends on the boundary conditions (e.g., Dirichlet, Dirichlet-Neumann, etc.) which can be readily found in the numerical methods literature (see for example \\cite{strang2007computational, golub2013matrix}).\n\nWe also specify the dependence on $i$ owing to the fact that $\\omega\/c$ and $d_i(l)$ determine each other. \nNow for the factorization to be physically consistent in terms of spatial data, we desire that \\eqref{eqn:discrete_wave_eqn} is satisfied for some (unknown) value of $k_i$, which we promote with the regularizer $\\min_{k_i} \\| \\mathbf{L} \\mathbf{D}_i + k_i^2 \\mathbf{D}_i \\|^2_F$. 
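To make \\eqref{eqn:discrete_wave_eqn} concrete, here is a small numerical sketch (added for illustration; the Dirichlet boundary conditions and unit grid spacing are our assumptions): the standard second-difference matrix satisfies the discrete eigenvalue relation exactly on discrete sinusoids, with $k_i^2 = 4\\sin^2(m\\pi\/(2(n+1)))$ for mode $m$ on an $n$-point grid.

```python
import numpy as np

def second_derivative_operator(n):
    # Standard second-difference matrix with Dirichlet boundary conditions
    # and unit grid spacing (see e.g. Strang, "Computational Science and
    # Engineering").
    return (np.diag(-2.0 * np.ones(n))
            + np.diag(np.ones(n - 1), 1)
            + np.diag(np.ones(n - 1), -1))

n, m = 50, 3                                 # grid size and mode index
L = second_derivative_operator(n)
j = np.arange(1, n + 1)
d = np.sin(m * np.pi * j / (n + 1))          # discrete sinusoid
k2 = 4.0 * np.sin(m * np.pi / (2 * (n + 1))) ** 2  # discrete wavenumber^2

# d satisfies L d = -k^2 d, the discrete analogue of (space_eigen_value)
print(np.allclose(L @ d, -k2 * d))  # True
```

Note that for small $m\\pi\/(n+1)$ the discrete wavenumber approaches the continuum value $m\\pi\/(n+1)$, as expected.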
In addition to requiring that the columns of $\\mathbf{D}$ satisfy the wave equation, we also would like to constrain the number of modes in our factorization (or equivalently find a factorization $\\mathbf{D} \\mathbf{X}^\\top$ with rank equal to the number of modes). To accomplish this, we add squared Frobenius norms to both $\\mathbf{D}$ and $\\mathbf{X}$, which is known to induce low-rank solutions in the product $\\mathbf{D} \\mathbf{X}^\\top$ due to connections with the variational form of the nuclear norm \\cite{haeffele2019structured,haeffele2014structured,srebro2005rank}. Taken together, we arrive at a regularization function which decouples along the columns of $(\\mathbf{D},\\mathbf{X})$ given by:\n\\begin{equation}\n\\label{eq:theta}\n\\Theta(\\mathbf{D},\\mathbf{X}) = \\sum_{i=1}^N \\theta(\\mathbf{D}_i,\\mathbf{X}_i), \\text{ where } \n \\theta(\\mathbf{D}_i,\\mathbf{X}_i) = \\tfrac{1}{2} \\! \\! \\left( \\|\\mathbf{X}_i\\|_F^2 \\! + \\! \\|\\mathbf{D}_i\\|_F^2 \\right) \\! + \\! \\gamma \\min_{k_i} \\|\\mathbf{L} \\mathbf{D}_i \\! + \\! k_i^2 \\mathbf{D}_i \\|_F^2\n\\end{equation}\nIn the supplementary material we show that the above regularization on $\\mathbf{D}$ is equivalent to encouraging the columns of $\\mathbf{D}$ to lie in the passband of a bandpass filter (centered at wavenumber $k$) with the choice of the hyperparameter $\\gamma$ being inversely proportional to the filter bandwidth. 
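The per-column regularizer \\eqref{eq:theta} can be evaluated without an explicit search over $k_i$: the penalty is quadratic in $t = k_i^2$, so the inner minimization has a closed form. A minimal sketch (the closed-form elimination and the helper names are our own, not stated in the text; $\\mathbf{L}$ is taken as the Dirichlet second-difference matrix):

```python
import numpy as np

def wave_penalty(L, d):
    """min over k of ||L d + k^2 d||^2, in closed form.

    The objective is quadratic in t = k^2 and is minimized (subject to
    t >= 0) at t* = -d^T L d / ||d||^2, which is nonnegative because a
    second-difference operator L is negative semidefinite.
    """
    t = max(0.0, -(d @ L @ d) / (d @ d))
    return t, float(np.sum((L @ d + t * d) ** 2))

def theta(L, d, x, gamma):
    # Per-column regularizer theta(D_i, X_i) of Equation (eq:theta)
    _, pen = wave_penalty(L, d)
    return 0.5 * (x @ x + d @ d) + gamma * pen

# A pure discrete sinusoid incurs (numerically) zero wave penalty, and the
# recovered t equals the discrete wavenumber squared.
n, m = 40, 2
L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))
d = np.sin(m * np.pi * np.arange(1, n + 1) / (n + 1))
t, pen = wave_penalty(L, d)
print(np.isclose(pen, 0.0),
      np.isclose(t, 4 * np.sin(m * np.pi / (2 * (n + 1))) ** 2))
```

A generic (non-wave-like) column, by contrast, pays a strictly positive penalty that grows with the filter sharpness $\\gamma$.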
With this regularization function, we then have our final model that we wish to solve:\n\\begin{equation}\n\\label{eq:main_obj}\n\\min_{N \\in \\mathbb{N}_+} \\min_{\\substack{\\mathbf{D} \\in \\mathbb{R}^{L \\times N}, \\mathbf{X} \\in \\mathbb{R}^{T \\times N} \\\\ \\mathbf{k} \\in \\mathbb{R}^N}} \\tfrac{1}{2}\\|\\mathbf{Y}-\\mathbf{D}\\mathbf{X}^\\top\\|_F^2 + \\tfrac{\\lambda}{2} \\sum_{i=1}^N \\left(\\|\\mathbf{X}_i\\|_F^2 + \\|\\mathbf{D}_i\\|_F^2 + \\gamma \\|\\mathbf{L} \\mathbf{D}_i + k_i^2 \\mathbf{D}_i \\|_F^2 \\right).\n\\end{equation}\n\n\\section{Model Optimization with Global Optimality Guarantees}\n\\label{section:model}\nWe note that our model in \\eqref{eq:main_obj} is inherently non-convex in $(\\mathbf{D},\\mathbf{X},\\mathbf{k})$ jointly due to the matrix factorization model as well as the fact that we are additionally searching over the number of columns\/entries in $(\\mathbf{D},\\mathbf{X},\\mathbf{k})$, $N$. However, despite the potential challenge of non-convex optimization, here we show that we can efficiently solve \\eqref{eq:main_obj} to global optimality in polynomial time by leveraging prior results regarding optimization problems for structured matrix factorization \\cite{haeffele2019structured,bach2013convex}. 
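For reference, a direct evaluator of objective \\eqref{eq:main_obj} can be written by eliminating each $k_i$ in closed form (a sketch; the closed-form elimination $t_i = \\max(0, -\\mathbf{D}_i^\\top \\mathbf{L} \\mathbf{D}_i \/ \\|\\mathbf{D}_i\\|^2)$ and all names are our own conventions):

```python
import numpy as np

def objective(Y, D, X, L, lam, gamma):
    # 1/2 ||Y - D X^T||_F^2
    #   + lam/2 * sum_i (||X_i||^2 + ||D_i||^2
    #                    + gamma * min_{k_i} ||L D_i + k_i^2 D_i||^2)
    fit = 0.5 * np.linalg.norm(Y - D @ X.T, 'fro') ** 2
    reg = 0.0
    for i in range(D.shape[1]):
        d, x = D[:, i], X[:, i]
        # closed-form minimizer over t = k_i^2 (clipped at zero)
        t = max(0.0, -(d @ L @ d) / (d @ d)) if d @ d > 0 else 0.0
        reg += x @ x + d @ d + gamma * np.sum((L @ d + t * d) ** 2)
    return fit + 0.5 * lam * reg
```

With $\\lambda = 0$ this reduces to the plain least-squares fit, and the regularization term is always nonnegative, so any optimizer can use this evaluator to monitor progress.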
In particular, the authors of \\cite{haeffele2019structured} consider a general matrix factorization problem of the form:\n\\begin{equation}\n\\label{eq:gen_obj}\n\\min_{N\\in\\mathbb{N}_+}{\\min_{\\mathbf{U} \\in \\mathbb{R}^{m \\times N}, \\mathbf{V} \\in \\mathbb{R}^{n \\times N}}} \\ell(\\mathbf{U} \\mathbf{V}^\\top) + \\lambda \\sum_{i=1}^N \\bar \\theta(\\mathbf{U}_i,\\mathbf{V}_i)\n\\end{equation}\nwhere $\\ell(\\hat \\mathbf{Y})$ is any function which is convex and once differentiable in $\\hat \\mathbf{Y}$ and $\\bar \\theta(\\mathbf{U}_i,\\mathbf{V}_i)$ is any function which satisfies the following three conditions:\n\\begin{enumerate}\n\\item $\\bar \\theta(\\alpha \\mathbf{U}_i, \\alpha \\mathbf{V}_i) = \\alpha^2 \\bar \\theta(\\mathbf{U}_i, \\mathbf{V}_i), \\ \\forall (\\mathbf{U}_i,\\mathbf{V}_i)$, $\\forall \\alpha \\geq 0$.\n\\item $\\bar \\theta(\\mathbf{U}_i, \\mathbf{V}_i) \\geq 0, \\ \\forall (\\mathbf{U}_i,\\mathbf{V}_i)$.\n\\item For all sequences $(\\mathbf{U}_i^{(n)},\\mathbf{V}_i^{(n)})$ such that $\\|\\mathbf{U}_i^{(n)}(\\mathbf{V}_i^{(n)})^\\top\\| \\rightarrow \\infty$, it holds that $\\bar \\theta(\\mathbf{U}_i^{(n)},\\mathbf{V}_i^{(n)}) \\rightarrow \\infty$.\n\\end{enumerate}\nClearly, our choice of loss function $\\ell$ (squared loss) satisfies the necessary conditions. However, in \\eqref{eq:main_obj} we wish to optimize over not just the matrix factors $(\\mathbf{D},\\mathbf{X})$ but also the additional $\\mathbf{k}$ parameters, so it is not immediately apparent that the framework from \\cite{haeffele2019structured} can be applied to our problem. 
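The first two conditions can be sanity-checked numerically for the $\\theta$ of \\eqref{eq:theta}: with the minimization over $k_i$ carried out in closed form, $\\theta$ is exactly positively homogeneous of degree 2 and nonnegative. A quick check (the closed-form elimination and helper names are our own assumptions):

```python
import numpy as np

def theta_bar(Lop, d, x, gamma):
    # theta from Equation (eq:theta), with the minimization over k done in
    # closed form: t* = max(0, -d^T L d / ||d||^2) for t = k^2.
    t = max(0.0, -(d @ Lop @ d) / (d @ d)) if d @ d > 0 else 0.0
    return 0.5 * (x @ x + d @ d) + gamma * np.sum((Lop @ d + t * d) ** 2)

rng = np.random.default_rng(0)
n = 8
Lop = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
       + np.diag(np.ones(n - 1), -1))
d, x = rng.normal(size=n), rng.normal(size=5)
alpha, gamma = 1.7, 0.3

# Degree-2 positive homogeneity (condition 1): the optimal t is scale
# invariant, so both squared terms scale exactly by alpha^2.
print(np.isclose(theta_bar(Lop, alpha * d, alpha * x, gamma),
                 alpha ** 2 * theta_bar(Lop, d, x, gamma)))  # True
# Nonnegativity (condition 2)
print(theta_bar(Lop, d, x, gamma) >= 0.0)                    # True
```

Condition 3 holds as well, since $\\theta$ upper-bounds $\\tfrac{1}{2}(\\|\\mathbf{X}_i\\|_F^2 + \\|\\mathbf{D}_i\\|_F^2) \\geq \\|\\mathbf{D}_i \\mathbf{X}_i^\\top\\|$.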
Here we first show that by our design of the regularization function $\\theta$ our formulation will satisfy the needed conditions, allowing us to apply the results from \\cite{haeffele2019structured} to our problem of interest \\eqref{eq:main_obj}.\n\\begin{proposition}\n\\label{prop:equiv_prob}\nThe optimization problem in \\eqref{eq:main_obj} is a special case of the problem considered in \\cite{haeffele2019structured}.\n\\end{proposition}\n\\begin{proof}\nAll proofs can be found in the supplement.\n\\end{proof}\nFrom this, we note that within the framework developed in \\cite{haeffele2019structured} it is shown (Corollary 1) that a given point $(\\tilde \\mathbf{U}, \\tilde \\mathbf{V})$ is a globally optimal solution of \\eqref{eq:gen_obj} iff the following two conditions are satisfied:\n\\begin{equation}\n1) \\ \\ \\langle -\\nabla \\ell( \\tilde \\mathbf{U} \\tilde \\mathbf{V}^\\top), \\tilde \\mathbf{U} \\tilde \\mathbf{V}^\\top \\rangle = \\lambda \\sum_{i=1}^N \\bar \\theta(\\tilde \\mathbf{U}_i, \\tilde \\mathbf{V}_i) \\ \\ \\ \\ \\ \\ 2) \\ \\ \\Omega_{\\bar \\theta}^\\circ (-\\tfrac{1}{\\lambda} \\nabla \\ell(\\tilde \\mathbf{U} \\tilde \\mathbf{V}^\\top)) \\leq 1\n\\end{equation}\nwhere $\\nabla(\\ell(\\hat \\mathbf{Y}))$ denotes the gradient w.r.t. 
the matrix product $\\hat \\mathbf{Y} = \\mathbf{U} \\mathbf{V}^\\top$ and $\\Omega_{\\bar \\theta}^\\circ(\\cdot)$ is referred to as the `polar problem' which is defined as \n\\begin{equation}\n\\label{eq:polar_def}\n\\Omega_{\\bar \\theta}^\\circ (\\mathbf{Z}) \\equiv \\sup_{\\mathbf{u},\\mathbf{v}} \\mathbf{u}^\\top \\mathbf{Z} \\mathbf{v} \\ \\ \\textnormal{s.t.} \\ \\ \\bar \\theta(\\mathbf{u},\\mathbf{v}) \\leq 1.\n\\end{equation}\nIt is further shown in \\cite{haeffele2019structured} (Proposition 3) that the first condition above will always be satisfied for any first-order stationary point $(\\tilde \\mathbf{U}, \\tilde \\mathbf{V})$, and that if a given point is not globally optimal then the objective function \\eqref{eq:gen_obj} can always be decreased by augmenting the current factorization by a solution to the polar problem as a new column:\n\\begin{equation}\n(\\mathbf{U}, \\mathbf{V}) \\! \\! \\leftarrow \\! \\! \\left( \\left[ \\tilde \\mathbf{U}, \\ \\tau \\mathbf{u}^* \\right], \\left[ \\tilde \\mathbf{V}, \\ \\tau \\mathbf{v}^* \\right] \\right) \\! : \\! \\mathbf{u}^*, \\mathbf{v}^* \\in \\argmax_{\\mathbf{u},\\mathbf{v}} \\mathbf{u}^\\top \\! (-\\tfrac{1}{\\lambda} \\nabla \\ell(\\tilde \\mathbf{U} \\tilde \\mathbf{V}^\\top)) \\mathbf{v} \\ \\ \\textnormal{s.t.} \\ \\ \\bar \\theta(\\mathbf{u},\\mathbf{v}) \\leq 1 \n\\end{equation}\nfor an appropriate choice of step size $\\tau > 0$.\nIf one can efficiently solve the polar problem for a given regularization function $\\bar \\theta$, then this provides a means to efficiently solve problems of the form \\eqref{eq:main_obj} with guarantees of global optimality. Unfortunately, the main challenge in applying this result from a computational standpoint is that solving the polar problem requires one to solve another challenging non-convex problem in \\eqref{eq:polar_def}, which is often NP-Hard for even relatively simple regularization functions \\cite{bach2013convex}. 
Here, however, we provide a key positive result, proving that for our designed regularization function \\eqref{eq:theta}, the polar problem can be solved efficiently, in turn enabling efficient and guaranteed optimization of the non-convex model \\eqref{eq:main_obj}.\n\\begin{theorem}\n\\label{thm:polar}\nFor the objective in \\eqref{eq:main_obj}, the associated polar problem is equivalent to:\n\\begin{align}\n\\Omega_\\theta^\\circ (\\mathbf{Z}) = \\max_{\\mathbf{d} \\in \\mathbb{R}^{L}, \\mathbf{x} \\in \\mathbb{R}^{T}, k\\in \\mathbb{R}} \\ & \\mathbf{d}^\\top \\mathbf{Z} \\mathbf{x} \\\\\n\\mathrm{s.t.} \\ \\ & \\|\\mathbf{d}\\|^2_F + \\gamma \\|\\mathbf{L}\\mathbf{d} + k^2 \\mathbf{d}\\|_F^2 \\leq 1, \\ \\ \\|\\mathbf{x}\\|^2_F \\leq 1, \\ \\ 0 \\leq k \\leq 2.\n\\end{align}\nFurther, let $\\mathbf{L} = \\Gamma \\Lambda \\Gamma^\\top$ be an eigen-decomposition of $\\mathbf{L}$ and define the matrix $\\mathbf{A}( \\bar k) = \\Gamma(\\mathbf{I} + \\gamma(\\bar k \\mathbf{I} + \\Lambda)^2) \\Gamma^\\top$. Then, if we define $\\bar k^*$ as%\n\\begin{equation}\n\\bar k^* = \\argmax_{\\bar k \\in [0,4]} \\| \\mathbf{A}(\\bar k)^{-1\/2} \\mathbf{Z} \\|_2\n\\label{eq:k_max}\n\\end{equation}\nthen optimal values of $\\mathbf{d},\\mathbf{x}, k$ are given as $\\mathbf{d}^* = \\mathbf{A}(\\bar k^*)^{-1\/2} \\bar \\mathbf{d}$, $\\mathbf{x}^* = \\bar \\mathbf{x}$, and $k^* = (\\bar k^*)^{1\/2}$, where $(\\bar \\mathbf{d}, \\bar \\mathbf{x})$ are the left and right singular vectors, respectively, associated with the largest singular value of $\\mathbf{A}(\\bar k^*)^{-1\/2} \\mathbf{Z}$.
Additionally, the objective of the above line search over $\\bar k$ is Lipschitz continuous, with a Lipschitz constant $L_{\\bar k}$ bounded by:\n\\begin{equation}\nL_{\\bar k} \\leq \\begin{cases} \\frac{2}{3 \\sqrt{3}} \\sqrt{\\gamma} \\|\\mathbf{Z}\\|_2 & \\gamma \\geq \\frac{1}{32} \\\\ \n4 \\gamma (1 + 16 \\gamma)^{-\\tfrac{3}{2}} \\|\\mathbf{Z}\\|_2 & \\gamma < \\frac{1}{32} \\end{cases} \\leq \\tfrac{2}{3 \\sqrt{3}} \\sqrt{\\gamma} \\|\\mathbf{Z}\\|_2\n\\end{equation}\n\\end{theorem}\n\nWe also note that the above result implies that we can solve the polar by first performing a (one-dimensional) line search over $\\bar k$, and because the largest singular value of a matrix is a Lipschitz continuous function, this line search can be solved efficiently by a variety of global optimization algorithms. For example, we give the following corollary for the simple algorithm of \\cite{malherbe2017global}; similar results are easily obtained for other algorithms.\n\n\\begin{corollary}[Adapted from Cor 13 in \\cite{malherbe2017global}]\n\\label{cor:line_search}\nFor the function $f(\\bar k) = \\|\\mathbf{A}(\\bar k)^{-1\/2} \\mathbf{Z} \\|_2$ as defined in Theorem \\ref{thm:polar}, if we let $\\bar k_1, \\ldots, \\bar k_r$ denote the iterates of the LIPO algorithm in \\cite{malherbe2017global}, then for all $\\delta \\in (0,1)$, with probability at least $1-\\delta$,\n\\begin{equation}\n\\max_{\\bar k \\in [0,4]} f(\\bar k) - \\max_{i=1\\ldots r} f(\\bar k_i) \\leq \\tfrac{8}{3 \\sqrt{3}} \\sqrt{\\gamma} \\|\\mathbf{Z}\\|_2 \\frac{\\ln(1\/\\delta)}{r}\n\\end{equation}\n\\end{corollary}\nAs a result, the error of the line search converges to a global optimum at rate $\\mathcal{O}(1\/r)$, where $r$ is the number of evaluations of $f(\\bar k)$; then, given the optimal value of $k$, the optimal $(\\mathbf{d}, \\mathbf{x})$ vectors can be computed in closed form via a singular value decomposition.
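To make the computation in Theorem \\ref{thm:polar} concrete, the following sketch evaluates the polar with a dense grid over $\\bar k \\in [0,4]$ standing in for the LIPO line search (NumPy; function and variable names are ours, not from a released implementation):

```python
import numpy as np

def polar_wave(Z, L, gamma, grid=1000):
    """Grid-search sketch of the polar evaluation in Theorem `thm:polar`:
    maximize ||A(kbar)^{-1/2} Z||_2 over kbar in [0, 4], then read off
    (d*, x*, k*) from the top singular vector pair.  A dense grid stands
    in for the LIPO global line search used in the paper."""
    lam, Gamma = np.linalg.eigh(L)           # L = Gamma diag(lam) Gamma^T
    def half_inv(kbar):                      # diagonal weights of A(kbar)^{-1/2}
        return 1.0 / np.sqrt(1.0 + gamma * (kbar + lam) ** 2)
    def f(kbar):                             # f(kbar) = ||A(kbar)^{-1/2} Z||_2
        M = Gamma @ (half_inv(kbar)[:, None] * (Gamma.T @ Z))
        return np.linalg.svd(M, compute_uv=False)[0]
    kbars = np.linspace(0.0, 4.0, grid)
    kbar_star = kbars[np.argmax([f(kb) for kb in kbars])]
    M = Gamma @ (half_inv(kbar_star)[:, None] * (Gamma.T @ Z))
    U, s, Vt = np.linalg.svd(M)
    d_bar, x_star = U[:, 0], Vt[0, :]
    d_star = Gamma @ (half_inv(kbar_star) * (Gamma.T @ d_bar))  # A^{-1/2} d_bar
    return s[0], d_star, x_star, np.sqrt(kbar_star)
```

By construction the returned $(\\mathbf{d}^*, \\mathbf{x}^*)$ satisfy the constraints of the polar with equality, since $\\mathbf{d}^{*\\top} \\mathbf{A}(\\bar k^*) \\mathbf{d}^* = \\|\\bar \\mathbf{d}\\|_2^2 = 1$.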
Taken together, these results allow us to employ the Meta-Algorithm defined in Algorithm \\ref{alg:meta} to solve problem \\eqref{eq:main_obj}.\n\\begin{corollary}\n\\label{cor:poly_time}\nAlgorithm \\ref{alg:meta} produces an optimal solution to \\eqref{eq:main_obj} in polynomial time.\n\\end{corollary}\n\n\\begin{algorithm}\n\\caption{\\bf{Meta-algorithm}}\n\\label{alg:meta}\n\\begin{algorithmic}[1]\n\\State Input $\\mathbf{D}_{init}$, $\\mathbf{X}_{init}$, $\\mathbf{k}_{init}$ \n\\State Initialize $ \\left( \\mathbf{D}, \\mathbf{X}, \\mathbf{k} \\right) \\leftarrow \\left( \\mathbf{D}_{init}, \\mathbf{X}_{init}, \\mathbf{k}_{init} \\right)$\n\\While {global convergence criterion is not met}\n\\State Perform gradient descent on the low-rank wave-informed objective function \\eqref{eq:main_obj} with $N$ fixed to reach a first order stationary point $(\\Tilde{\\mathbf{D}}, \\Tilde{\\mathbf{X}}, \\Tilde{\\mathbf{k}})$\\label{step:grad_desc}\n\\State Calculate the value of $\\Omega^\\circ_\\theta(\\tfrac{1}{\\lambda}(\\mathbf{Y}-\\tilde \\mathbf{D} \\tilde \\mathbf{X}^\\top))$ via Theorem \\ref{thm:polar} above and obtain $\\mathbf{d}^*, \\mathbf{x}^*, k^*$\n\\If {value of polar $\\Omega^\\circ_\\theta(\\tfrac{1}{\\lambda}(\\mathbf{Y}-\\tilde \\mathbf{D} \\tilde \\mathbf{X}^\\top)) = 1$} \\State {Algorithm converged to global minimum}\n\\Else \n\\State {Append $(\\mathbf{d}^*, \\mathbf{x}^*, k^*)$ to $(\\Tilde{\\mathbf{D}}, \\Tilde{\\mathbf{X}}, \\Tilde{\\mathbf{k}})$ and update $\\left( \\mathbf{D}, \\mathbf{X}, \\mathbf{k} \\right)$\n\\State $ \\left( \\mathbf{D}, \\mathbf{X}, \\mathbf{k} \\right) \\! \\leftarrow \\! \\left( \\left[ \\tilde{\\mathbf{D}}, \\ \\tau \\mathbf{d}^* \\right], \\left[ \\tilde{\\mathbf{X}}, \\ \\tau \\mathbf{x}^* \\right] , \\left[ \\tilde{\\mathbf{k}}^\\top, \\ k^* \\right]^\\top\n\\right)$, $\\tau\\! > \\!
0$ is step size (see supplement).\\label{step:step_size}}\n\\EndIf\n\\State Continue loop.\n\\EndWhile\n\\end{algorithmic}\n\\label{algoblock:meta-algo}\n\\end{algorithm}\n\n\\vspace{-3mm}\n\n\\section{Results \\& Discussion}\n\\label{sec:results}\nIn this section, we evaluate our proposed wave-informed matrix factorization on two synthetic datasets. For each dataset, algorithm iterations are performed until the polar value drops below a threshold slightly above 1, demonstrating convergence to the global minimum (i.e., a polar value of 1). We note that the distance to the global optimum (in objective value) is directly proportional to the polar value at any point minus 1 (\\cite{haeffele2019structured}, Prop. 4), so choosing a stopping criterion of polar value $\\leq 1 \\! + \\! \\epsilon$ also guarantees optimality within $\\mathcal{O}(\\epsilon)$.\n\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.25]{figs\/merge_1_diff_strings_middle.png}\n \\caption{(left) Two cables of different impedances joined end-to-end; (right) cables of two different materials with a slightly alternate configuration, and measurements sampled at \\color{alizarin} \\textbf{X} \\color{black}.}\n \\label{fig:merge_1}\n \\vspace{-5mm}\n\\end{figure}\n\n\n\\myparagraph{Characterizing Multi-Segment Transmission Lines} First, we consider the \\textbf{material characterization} problem described in the introduction.\nSpecifically, we consider waves propagating along an electrical transmission line where the impedance of the transmission line changes along the line (as would occur, for example, if the transmission line were damaged or degraded in a region), as depicted in Figure \\ref{fig:merge_1}(left). \nWe simulate wave propagation in this line based on a standard RLGC (resistance, inductance, conductance, capacitance) transmission line model \\cite{Pozar2012}, using an algebraic graph theory engine for one-dimensional waves \\cite{Harley2019}.
For this simulation, we combine two transmission line segments of length $0.4$~m (\\color{orange-left}left\\color{black}) and $0.6$~m (\\color{blue-right}right\\color{black}). \n\nThe wavenumber (which is inversely proportional to velocity) of the right segment is $4$~times higher than the wavenumber of the left segment. A modulated Gaussian signal with a center frequency of $10$~MHz and a $3$~dB bandwidth of $10$~MHz is transmitted from the left end of this setup. This impulse travels $0.4$~m until it reaches the interface (i.e., the connection between the two cables). At the interface, part of the propagating signal reflects and another part transmits into the next cable. The signal again reflects at the end of each cable. \n\nNote the following: 1) there are now two distinct wavenumber regions, 2) the excitation\/forcing function is transient and is therefore a linear combination of nearby frequencies that each have a unique wavenumber, and 3) the end boundaries have loss (i.e., energy exits the system). Figure \\ref{fig:Merge_2}(top-left) shows the electrical voltage amplitude at a single time instant. We observe waves travelling in two regions with different wavenumbers. Note that performing a simple Fourier transform (in space) of the signal will produce unsatisfactory results due to the discrete change in the wavenumber along the line. However, as we show below, our method automatically recovers the two distinct regions as separated modes in our decomposition. Namely, we solve our wave-informed matrix factorization with $\\gamma=50$ and $\\lambda=0.6$ on the above described wave data. \nInspection of the columns of $\\mathbf{D}$, see Figure \\ref{fig:Merge_2}(top-right), clearly shows that each column of $\\mathbf{D}$ has its energy largely confined to a single region of the transmission line, automatically providing a natural decomposition of the signal which corresponds to changes in the physical transmission line.
\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.093]{figs\/Merge_2.png}\n \\caption{(top-left) Electrical amplitude at a timestamp of $0.4115 \\mu$s; (top-right) illustration of two columns of $\\mathbf{D}$ from wave-informed matrix factorization; (bottom-left) illustration of two columns of $\\mathbf{D}$ from low-rank matrix factorization; (bottom-right) partitioned normalized energy of the first 30 significant columns of wave-informed matrix factorization and low-rank matrix factorization.}\n \\label{fig:Merge_2}\n \\vspace{-5mm}\n\\end{figure}\n\nAs a first baseline comparison, we also perform a low-rank factorization (by setting $\\gamma=0$ and $\\lambda=0.6$), which is equivalent to only using nuclear norm regularization on the matrix product $(\\mathbf{D} \\mathbf{X}^\\top)$.\nWith just low-rank regularization, one observes that while there is a clear demarcation at $0.4$~m, see Figure \\ref{fig:Merge_2}(bottom-left), there is still significant energy in both transmission line segments for each column of $\\mathbf{D}$. \nTo quantitatively evaluate the quality of the decomposition, we normalize the 30 most significant (determined by the corresponding value of $\\|\\mathbf{D}_i \\mathbf{X}_i^{\\top} \\|_F^2$) columns of $\\mathbf{D}$ and plot the energy on each partition, see Figure \\ref{fig:Merge_2}(bottom-right). In the case of the low-rank factorization, see Figure \\ref{fig:Merge_2}(bottom-right),\nwe largely observe higher energy levels on regions of higher wavenumber (energy is proportional to the wavenumber for oscillatory quantities) and lower energy levels on regions of lower wavenumber. \nIn the case of our wave-informed matrix factorization model, we see two clear step functions, one indicating the energy on the first segment and the other indicating energy on the second segment, showing a sharp decomposition of the signal into components corresponding to the two distinct regions.
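The partitioned-energy diagnostic (and an entropy summary computed from it) can be sketched as follows; the ranking of columns by $\\|\\mathbf{D}_i \\mathbf{X}_i^\\top\\|_F^2$ follows the text, while the function names and interface are our own illustration:

```python
import numpy as np

def partition_energy(D, DX_norms, boundary_idx, top=30):
    """Fraction of each column's energy left of the material boundary.

    D : (L, N) spatial factor; DX_norms : values of ||D_i X_i^T||_F^2
    used to rank columns by significance; boundary_idx : spatial index
    of the interface (0.4 m in the experiment)."""
    order = np.argsort(DX_norms)[::-1][:top]
    Dn = D[:, order] / np.linalg.norm(D[:, order], axis=0)
    return np.sum(Dn[:boundary_idx] ** 2, axis=0)   # region 2 gets 1 - e1

def mean_entropy(e1, eps=1e-12):
    """Mean (over columns) binary entropy of the energy split:
    0 = each component localized to one segment, 1 = energy split evenly."""
    p = np.clip(e1, eps, 1 - eps)
    h = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    return float(h.mean())
```

Perfectly localized components give entropy near 0, while a column whose energy straddles the boundary evenly gives entropy 1, matching the interpretation used in the comparison that follows.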
We can further encapsulate this into a single quantity representing the decomposition of the signal into components corresponding to the two distinct regions. For this we compute the mean entropy (mean over the columns of $\\mathbf{D}$) of the percentage of signal power in one of the two regions (note the entropy is invariant to the choice of region). In the ideal case, the entropy will be 0, indicating that each learned component has all of its power exclusively in one of the two line segments, while an entropy of 1 indicates the worst case, where the power of a component column is split equally between the two segments.\n To quantify the performance of our method we also compare against other similar matrix factorization or modal extraction techniques: wave-informed KSVD \\cite{Tetali2019} ({\\color{B}{\\textbf{B}}}), low-rank matrix factorization ({\\color{C}{\\textbf{C}}}), independent component analysis \\cite{hyvarinen2000independent,van2006pp} ({\\color{D}{\\textbf{D}}}), dynamic mode decomposition \\cite{PhysRevFluids.5.054401,SUSUKI2018327} ({\\color{E}{\\textbf{E}}}), ensemble empirical mode decomposition \\cite{wu2009ensemble, fang2011stress, 6707404} ({\\color{F}{\\textbf{F}}}) and principal component analysis \\cite{medina1993three} ({\\color{G}{\\textbf{G}}}). Dynamic mode decomposition ({\\color{E}{\\textbf{E}}}), independent component analysis ({\\color{D}{\\textbf{D}}}), and principal component analysis ({\\color{G}{\\textbf{G}}}) are subspace identification techniques that resemble matrix factorization in many respects.
Ensemble empirical mode decomposition is a refined version of empirical mode decomposition (extended to 2D data), which learns a basis from data, similar to our algorithm.\n\n Note that our proposed wave-informed matrix factorization ({\\color{A}{\\textbf{A}}}) achieves the best performance of the seven methods, providing quantitative evidence of our model's ability to isolate distinct materials (and corresponding wavefields). \n\n\\begin{table}[ht]\n\\vspace{-5mm}\n\\caption{\\label{table:entropies} Comparison of entropy-based performance for segmentation of a transmission line using algorithms ({\\color{A}{\\textbf{A}}}), ({\\color{B}{\\textbf{B}}}), ({\\color{C}{\\textbf{C}}}), ({\\color{D}{\\textbf{D}}}), ({\\color{E}{\\textbf{E}}}), ({\\color{F}{\\textbf{F}}}) and ({\\color{G}{\\textbf{G}}}), where the mean entropy row reports the mean entropy of the partitioned normalized energies for each algorithm.}\n\\centering\n\\begin{tabularx}{\\textwidth}{@{}lXXXXXXX@{}}\n\\toprule\n\\textbf{Algorithm} & {\\color{A}{\\textbf{A}}} & {\\color{B}{\\textbf{B}}} & {\\color{C}{\\textbf{C}}} & {\\color{D}{\\textbf{D}}} & {\\color{E}{\\textbf{E}}} & {\\color{F}{\\textbf{F}}} & {\\color{G}{\\textbf{G}}} \\\\ \\midrule\n\\textbf{Mean Entropy} & 0.176 & 0.211 & 0.531 & 0.464 & 0.431 & 0.636 & 0.531 \\\\ \\bottomrule\n\\end{tabularx}\n\\vspace{-5mm}\n\\end{table}\n\n\n\\myparagraph{Characterizing Multi-segment Transmission Lines with Sparsely Sampled Data}\nIn this subsection, we demonstrate the effectiveness of our model when the data is sparsely sampled.
Specifically, we reduce the spatial density at which points were sampled (keeping the number of time points the same) by sampling 10 points uniformly on the cable of length $1$m, and we also change the configuration of the transmission lines by assigning the \\color{orange-left} first $0.4$ m \\color{black} and the \\color{orange-left} last $0.5$ m \\color{black} with one material and the \\color{blue-right} middle $0.1$ m \\color{black} with another material (see Fig.~\\ref{fig:merge_1}(right)). We then solve a matrix completion problem with wave-informed matrix factorization to interpolate the wavefield at the remaining 90 points on the $1$m region. Since the sampling is done uniformly over space, the data matrix contains columns that are entirely zero, which implies it \\textit{cannot} be completed with standard low-rank matrix completion. We show that wave-informed matrix factorization, in contrast, fills in those regions appropriately due to the wave-constraint being enforced.\n\nSpecifically, we minimize the following objective\\footnote{The mathematical details of the optimization procedure are almost the same as in Algorithm \\ref{alg:meta}, except that the masking operator $\\mathcal{A}(\\cdot)$ also needs to be included (see supplement).}:\n\\begin{equation}\n\\label{eq:main_obj_missing}\n\\!\\!\\min_{N \\in \\mathbb{N}_+} \\!\\!\\min_{\\substack{\\mathbf{D} \\in \\mathbb{R}^{L \\times N}, \\\\ \\mathbf{X} \\in \\mathbb{R}^{T \\times N}, \\mathbf{k} \\in \\mathbb{R}^N}} \\!\\!\\!\\!\\tfrac{1}{2}\\| \\mathcal{A} \\left( \\mathbf{Y}-\\mathbf{D}\\mathbf{X}^\\top \\right) \\|_F^2 + \\tfrac{\\lambda}{2} \\sum_{i=1}^N \\left(\\|\\mathbf{X}_i\\|_F^2 + \\|\\mathbf{D}_i\\|_F^2 + \\gamma \\|\\mathbf{L} \\mathbf{D}_i + k_i^2 \\mathbf{D}_i \\|_F^2 \\right) \n\\end{equation}\nwhich is a matrix completion problem with a linear masking operator $\\mathcal{A}(\\cdot)$.
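For concreteness, a sampling mask and its adjoint can be sketched as elementwise multiplication; this is our own minimal illustration, noting that for a 0\/1 sampling mask the operator $\\mathcal{A}$ is self-adjoint ($\\mathcal{A}^* = \\mathcal{A}$) and idempotent:

```python
import numpy as np

# Sampling mask for 10 of 100 uniformly spaced spatial points: rows of Y
# at unobserved locations are zeroed out.
n_space, n_time = 100, 64
mask = np.zeros((n_space, n_time))
mask[::10, :] = 1.0                      # observe every 10th spatial point

A = lambda M: mask * M                   # masking operator A(.)
A_adj = A                                # adjoint: A* = A for a 0/1 mask

rng = np.random.default_rng(0)
Y, W = rng.standard_normal((2, n_space, n_time))
# Adjoint identity <A(Y), W> = <Y, A*(W)> holds for any Y, W:
assert np.isclose(np.sum(A(Y) * W), np.sum(Y * A_adj(W)))
```

With this operator the data-fit gradient simply passes through $\\mathcal{A}^*$, which is why the completion variant of the algorithm requires only minor changes.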
We observe that, despite the very sparse sampling, the columns of $\\mathbf{D}$ still maintain sufficient structure to identify different material regions. \nIn Figure~\\ref{fig:Merge_3}(left), we plot all of the recovered columns of $\\mathbf{D}$, where we observe that, except for one column of $\\mathbf{D}$, every other column contains very little energy in the region between $0.45$m and $0.55$m. Thus the algorithm again automatically detects a change in this region in the decomposition. Note that the actual change was introduced between $0.4$m and $0.5$m. An error of $0.05$m in estimating the region of material change is analogous to the Gibbs phenomenon in signals and systems theory -- it arises because we impose a second-derivative constraint on the columns of $\\mathbf{D}$, which enforces smoothness and shifts the transition. Figure \\ref{fig:Merge_3}(right) is similar to Figure \\ref{fig:Merge_2}(bottom-right) in that it quantifies that only one column of $\\mathbf{D}$ is active in the middle region, and the other columns of $\\mathbf{D}$ are active in the other distinct regions, whereas a low-rank factorization model displays significantly more mixing of signal energies in each region. \n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.09]{figs\/Merge_3.png}\n \\caption{(left) All the columns of $\\mathbf{D}$ recovered from wave-informed matrix factorization in a single graph; note that the measurements were sampled only at $ \\{ 0,0.1, \\cdots, 0.9 \\}$; (right) normalized energies of various regions, for the first $8$ significant columns of $\\mathbf{D}$ for both algorithms.}\n \\label{fig:Merge_3}\n \\vspace{-5mm}\n\\end{figure}\n\n\n\\myparagraph{A Fixed Vibrating String}\nA fixed vibrating string is an example of a wave with fixed boundary conditions.
\nAs an example, consider the dynamics of a noisy fixed string, given by: \n\\begin{equation}\ny(\\ell,t) = \\sum_{n=1}^{R} a_n \\sin (n k \\ell) \\sin(n \\omega t) + \\eta(\\ell,t) \\; .\n\\label{eqn:wave_equ}\n\\end{equation}\nThis can be represented in matrix notation as $\\mathbf{Y} = \\mathbf{D} \\mathbf{X}^{\\top} + \\mathbf{N}$, where $\\mathbf{N}$ is an additive noise term. To demonstrate our framework, we will consider two more challenging variants of \\eqref{eqn:wave_equ} where we add a damping factor (in either time or space) to the amplitude of the wave:\n\\begin{eqnarray}\n y_t(\\ell,t) = \\sum_{n=1}^{R} a_n e^{-\\alpha_n t} \\sin (n k \\ell) \\sin(n \\omega t) + \\eta(\\ell,t) \n \\label{eqn:damped_wave_time} \\\\\n y_s(\\ell,t) = \\sum_{n=1}^{R} a_n e^{-\\beta_n \\ell} \\sin (n k \\ell) \\sin(n \\omega t) + \\eta(\\ell,t)\n \\label{eqn:damped_wave_space}\n\\end{eqnarray}\n\nThough this does not exactly satisfy the wave equation, waves in practice often have a damped sinusoidal nature. We first consider the model in \\eqref{eqn:damped_wave_time} (damping in time) where we run our algorithm with $\\mathbf{Y} \\in \\mathbb{R}^{1000 \\times 4000}$, $R=10$, $k = 2 \\pi$, $\\omega = 12 \\pi$, $\\alpha_n = n$, $a_n$ roughly of the order of $10^{1}$, and additive white Gaussian noise $\\eta(\\ell,t)$ of variance $\\approx 10$ and $0$ mean (for the noisy case). We set $\\gamma = 10^9$ and $\\lambda = 200$ (see supplement for the behaviour of the algorithm with respect to changes in $\\gamma$ and $\\lambda$). From wave-informed matrix factorization, when there is damping in time, we still obtain the columns of $\\mathbf{D}$ (vibrations in space, which are undamped) as clear underlying sinusoids (Figure \\ref{fig:Merge_4}).
We emphasize here that the modes of vibration are exactly recovered from data even in the presence of large amounts of noise (Figure \\ref{fig:Merge_4} (left)), whereas low-rank matrix factorization does not obtain pure sinusoids even in the noiseless case (Figure \\ref{fig:Merge_4} (right) -- the dotted blue curves change in amplitude over space). To quantify this performance, in the appendix we provide the error between modes recovered by our algorithm and the ground-truth modes. Recall that our algorithm also provides guarantees of polynomial-time global optimality (unlike the work of \\cite{Tetali2019}), without using a library matrix as mentioned in \\cite{lai2020full}. \n %\n Next, we run wave-informed matrix factorization on the model in \\eqref{eqn:damped_wave_space} (damping in space) \n with the same parameters as above and $\\beta_n = n\/2$ (also including the additive noise term) and visually compare our algorithm to other modal analysis methods ({\\color{A}{\\textbf{A}}}, {\\color{B}{\\textbf{B}}}, {\\color{C}{\\textbf{C}}}, {\\color{D}{\\textbf{D}}}, {\\color{E}{\\textbf{E}}}, {\\color{F}{\\textbf{F}}}, {\\color{G}{\\textbf{G}}}) (Figure~\\ref{fig:damped_sines}). Here we show that recovering damped sinusoids is possible under heavy noise (see the two recovered columns of $\\mathbf{D}$ in Figure~\\ref{fig:damped_sines}), where we have chosen the most significant columns of each method. A close look at the recovered modes indicates much cleaner recovery of damped (in space) sinusoids for our method ({\\color{A}{\\textbf{A}}}) compared to others.
For example, ({\\color{B}{\\textbf{B}}}) is the closest in performance to our method, but distortions can be observed in the tails of the components.\n\n\\begin{figure}[ht]\n\\vspace{-4mm}\n \\centering\n \\includegraphics[trim=10 10 10 150, clip, scale=0.08]{figs\/Merge_4.png}\n \\caption{4 columns of $\\mathbf{D}$ from low-rank matrix factorization (blue) and wave-informed matrix factorization (red) with the temporally damped vibrating string model. Showing the noisy case (left) and the noiseless case (right). \\vspace{-5mm}}\n \\label{fig:Merge_4}\n\\end{figure}\n\\begin{figure}[ht]\n \\centering\n \\vspace{-4mm}\n \\includegraphics[trim=10 10 10 15, clip, width=\\linewidth]{figs\/dampled_sines.png}\n \\caption{Two recovered modes (rows) of spatially damped sinusoids ({\\color{A}{\\textbf{A}}}), ({\\color{B}{\\textbf{B}}}), ({\\color{C}{\\textbf{C}}}), ({\\color{D}{\\textbf{D}}}), ({\\color{E}{\\textbf{E}}}), ({\\color{F}{\\textbf{F}}}), ({\\color{G}{\\textbf{G}}}).} \\label{fig:damped_sines}\n \\vspace{-5mm}\n\\end{figure}\n\n\n\\myparagraph{Conclusions}\nWe have developed a framework for a wave-informed matrix factorization algorithm with provable, polynomial-time global optimality guarantees. Output from the algorithm was compared with that of low-rank matrix factorization and state-of-the-art algorithms for modal and component analysis. We demonstrated that the wave-informed approach learns representations that are more physically relevant and practical for the purpose of material characterization and modal analysis. Future work will include 1) generalizing this approach to a variety of linear PDEs beyond the wave equation as well as wave propagation along more than one dimension, and 2) applications in baseline-free anomaly detection for structural health monitoring \\cite{alguri2018baseline, alguri2021sim}.
\n\n\\textbf{Acknowledgments} This work was partially supported by NIH NIA 1R01AG067396, ARO MURI W911NF-17-1-0304, NSF-Simons MoDL 2031985, and the National Science Foundation under award number ECCS-1839704.\n\n\\bibliographystyle{IEEEtran}\n\n\\section{Supplementary Material}\n\nThe following is an example Laplacian matrix ($\\mathbf{L}$):\n\\begin{eqnarray}\n \\mathbf{L} &=& \\frac{1}{(\\Delta l)^2} \\begin{bmatrix}\n -2 & 1 & 0 & 0 & 0 &\\cdots & 0 \\\\\n 1 & -2 & 1 & 0 & 0 & \\cdots & 0 \\\\\n 0 & 1 & -2 & 1 & 0 & \\cdots & 0 \\\\\n \\vdots & \\vdots & \\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\n 0 & 0 & 0 & 0 & 0 & \\cdots & -2 \\\\\n \\end{bmatrix}.\n \\label{eqn:Lap_mat}\n\\end{eqnarray}\nTo reduce complexity, all theorems and proofs consider $\\Delta l = 1$ (for $\\Delta l$ defined in equation \\eqref{eqn:Lap_mat}) without any loss of generality. The only change that needs to be accommodated is to the values in $\\mathbf{k}$ obtained from equation (\\ref{eq:k_max}) (i.e., while solving the polar). Observing equations (\\ref{eqn:discrete_wave_eqn}) and (\\ref{eqn:Lap_mat}), the only modification that is needed is to note that for $\\Delta l \\neq 1$ we follow Algorithm \\ref{algoblock:meta-algo} and replace the final vector $\\mathbf{k}$ with $\\mathbf{k} \\Delta l$.\n\n\\subsection{Algorithm Details}\n\n\\paragraph{Matrix Completion}\nWe note that our algorithm (and guarantees of polynomial time solutions) is easily generalized to any loss function $\\ell(\\mathbf{D} \\mathbf{X}^\\top)$, provided the loss function is once differentiable and convex w.r.t. $\\mathbf{D} \\mathbf{X}^\\top$.
For example, the following is the modified algorithm for the matrix completion formulation in \\eqref{eq:main_obj_missing}:\n\n\n\\begin{algorithm}\n\\caption{\\bf{Meta-algorithm}}\n\\begin{algorithmic}[1]\n\\State Input $\\mathbf{D}_{init}$, $\\mathbf{X}_{init}$, $\\mathbf{k}_{init}$ \n\\State Initialize $ \\left( \\mathbf{D}, \\mathbf{X}, \\mathbf{k} \\right) \\leftarrow \\left( \\mathbf{D}_{init}, \\mathbf{X}_{init}, \\mathbf{k}_{init} \\right)$\n\\While {global convergence criterion is not met}\n\\State Perform gradient descent on \\eqref{eq:main_obj_missing} with $N$ fixed to reach a first order stationary point $(\\Tilde{\\mathbf{D}}, \\Tilde{\\mathbf{X}}, \\Tilde{\\mathbf{k}})$\\label{step:grad_desc_missing}\n\\State Calculate the value of $\\Omega^\\circ_\\theta \\left(\\tfrac{1}{\\lambda}\\left(\\mathcal{A}^*\\left(\\mathbf{Y}-\\tilde \\mathbf{D} \\tilde \\mathbf{X}^\\top\\right)\\right)\\right)$ via Theorem \\ref{thm:polar} and obtain $\\mathbf{d}^*, \\mathbf{x}^*, k^*$\n\\If {value of polar $\\Omega^\\circ_\\theta \\left(\\tfrac{1}{\\lambda}\\left(\\mathcal{A}^*\\left(\\mathbf{Y}-\\tilde \\mathbf{D} \\tilde \\mathbf{X}^\\top\\right)\\right)\\right) = 1$} \\State {Algorithm converged to global minimum}\n\\Else\n\\State {Append $(\\mathbf{d}^*, \\mathbf{x}^*, k^*)$ to $(\\Tilde{\\mathbf{D}}, \\Tilde{\\mathbf{X}}, \\Tilde{\\mathbf{k}})$ and update $\\left( \\mathbf{D}, \\mathbf{X}, \\mathbf{k} \\right)$\n\\State $ \\left( \\mathbf{D}, \\mathbf{X}, \\mathbf{k} \\right) \\! \\leftarrow \\! \\left( \\left[ \\tilde{\\mathbf{D}}, \\ \\tau_{\\mathcal{A}} \\mathbf{d}^* \\right], \\left[ \\tilde{\\mathbf{X}}, \\ \\tau_{\\mathcal{A}} \\mathbf{x}^* \\right] , \\left[ \\tilde{\\mathbf{k}}^\\top, \\ k^* \\right]^\\top\n\\right)$, $\\tau_{\\mathcal{A}}\\! > \\!
0$ is step size.}\\label{step:step_size_mising}\n\\EndIf\n\\State Continue loop.\n\\EndWhile\n\\end{algorithmic}\n\\label{algoblock:meta-algo_missing}\n\\end{algorithm}\nwhere $\\mathcal{A}^*$ denotes the adjoint of the linear masking operator.\n\n\\paragraph{Gradient update equations} We take the derivative of the right-hand side of equation \\eqref{eqn:obj_final} with respect to $\\mathbf{D}$, $\\mathbf{X}$ and $\\mathbf{k}$ and utilize block coordinate descent of Gauss-Seidel type \\cite{xu2013block} to reach a first order stationary point mentioned in step \\ref{step:grad_desc} of Algorithm \\ref{algoblock:meta-algo}. The number of columns of $\\mathbf{D}$ and $\\mathbf{X}$ (denoted by $N$ in equation (\\ref{eq:main_obj})) is fixed and does not change during this step. The following are the gradient update equations (for step size $\\alpha_j$):\n\n\n\\begin{gather}\n \\mathbf{D}_j \\leftarrow \\mathbf{D}_j - \\alpha_j \\left( \\left(\\mathbf{D} \\mathbf{X}^{\\top} - \\mathbf{Y} \\right) \\mathbf{X}_j + \\lambda \\mathbf{D}_j + 2 \\gamma \\lambda \\left( \\mathbf{L} + k^2_j \\mathbf{I} \\right)^2 \\mathbf{D}_j \\right) \\label{eq:grad_D} \\\\\n \\mathbf{X}_j \\leftarrow \\mathbf{X}_j - \\alpha_j \\left( \\left( \\mathbf{D} \\mathbf{X}^{\\top} - \\mathbf{Y} \\right)^{\\top} \\mathbf{D}_j \\right) \\label{eq:grad_X} \\\\\n k_j \\leftarrow \\sqrt{-\\frac{ \\mathbf{D}_j^\\top \\mathbf{L} \\mathbf{D}_j } {\\|\\mathbf{D}_j\\|_2^2}} \\label{eq:grad_k} \n\\end{gather}\n\n\n\\paragraph{Gradient update equations for the Matrix completion setting}\n\n\\begin{gather}\n \\mathbf{D}_j \\leftarrow \\mathbf{D}_j - \\alpha_j \\left( \\mathcal{A}^* \\left(\\mathbf{D} \\mathbf{X}^{\\top} - \\mathbf{Y} \\right) \\mathbf{X}_j + \\lambda \\mathbf{D}_j + 2 \\gamma \\lambda \\left( \\mathbf{L} + k^2_j \\mathbf{I} \\right)^2 \\mathbf{D}_j \\right) \\label{eq:grad_D_missing} \\\\\n \\mathbf{X}_j \\leftarrow \\mathbf{X}_j - \\alpha_j \\left( \\mathcal{A}^* \\left( \\mathbf{D} \\mathbf{X}^{\\top} -
\\mathbf{Y} \\right)^{\\top} \\mathbf{D}_j \\right) \\label{eq:grad_X_missing} \\\\\n k_j \\leftarrow \\sqrt{-\\frac{ \\mathbf{D}_j^\\top \\mathbf{L} \\mathbf{D}_j } {\\|\\mathbf{D}_j\\|_2^2}} \\label{eq:grad_k_missing} \n\\end{gather}\n\n\\paragraph{Step size computation} The step size $\\tau$ mentioned in Step 10 of Algorithm \\ref{algoblock:meta-algo} is computed through the following quadratic minimization problem:\n\\begin{eqnarray}\n \\min_{\\substack{\\tau \\in \\mathbb{R}}} \\tfrac{1}{2}\\|\\mathbf{Y}- \\mathbf{D}_\\tau \\mathbf{X}_\\tau^\\top\\|_F^2 + \\frac{\\lambda}{2} \\sum_{i=1}^{N+1} \\left(\\|\\left(\\mathbf{X}_{\\tau}\\right)_i\\|_F^2 + \\| \\left( \\mathbf{D}_{\\tau} \\right)_i \\|_F^2 + \\gamma \\|\\mathbf{L} \\left( \\mathbf{D}_{\\tau} \\right)_i + \\left(\\mathbf{k}_{\\tau}^2\\right)_i \\left( \\mathbf{D}_{\\tau} \\right)_i \\|_F^2 \\right)\n \\label{eqn:for_tau}\n\\end{eqnarray}\n\nwhere $\\left(\\mathbf{D}_\\tau, \\mathbf{X}_\\tau, \\mathbf{k}_{\\tau} \\right) = \\left( \\left[ \\tilde{\\mathbf{D}}, \\ \\tau \\mathbf{d}^* \\right], \\left[ \\tilde{\\mathbf{X}}, \\ \\tau \\mathbf{x}^* \\right] , \\left[ \\tilde{\\mathbf{k}}^\\top, \\ k^* \\right]^\\top\n\\right)$ and $N$ is the number of columns in $\\tilde{\\mathbf{D}}$.\n\n\\paragraph{Proposition} The optimal step size $\\tau^*$ that minimizes the expression in (\\ref{eqn:for_tau}) is given by:\n\\begin{eqnarray}\n \\tau^* = \\frac{\\sqrt{(\\mathbf{d}^*)^\\top \\left( \\mathbf{Y} - \\tilde{\\mathbf{D}} \\tilde{\\mathbf{X}}^{\\top} \\right) \\mathbf{x}^* - \\lambda }}{\\|\\mathbf{d}^*\\|_2 \\|\\mathbf{x}^*\\|_2 }\n \\label{eq:for_tau_}\n\\end{eqnarray}\n\n\\begin{proof}\nLet $f(\\tau)$ represent the objective function in \\eqref{eqn:for_tau} with everything except for $\\tau$ held fixed:\n\\begin{eqnarray}\n f(\\tau) = \\tfrac{1}{2}\\|\\mathbf{Y}- \\mathbf{D}_\\tau \\mathbf{X}_\\tau^\\top\\|_F^2 + \\tfrac{\\lambda}{2} \\sum_{i=1}^{N+1} \\left(\\|\\left(\\mathbf{X}_{\\tau}\\right)_i\\|_F^2 + \\| \\left(
\\mathbf{D}_{\\tau} \\right)_i \\|_F^2 + \\gamma \\|\\mathbf{L} \\left( \\mathbf{D}_{\\tau} \\right)_i + \\left(\\mathbf{k}_{\\tau}\\right)^2_i \\left( \\mathbf{D}_{\\tau} \\right)_i \\|_F^2 \\right)\n\\end{eqnarray}\n\nObserve that from the solution to the polar problem we have by construction that the regularization term $\\theta(\\mathbf{d}^*,\\mathbf{x}^*)=1$, so combined with the positive homogeneity of $\\theta$ we have that minimizing $f(\\tau)$ w.r.t. $\\tau$ is equivalent to solving:\n\\begin{equation}\n\\min_{\\tau \\geq 0} \\tfrac{1}{2}\\|\\mathbf{Y}- \\tilde \\mathbf{D} \\tilde \\mathbf{X}^\\top - \\tau^2 \\mathbf{d}^* (\\mathbf{x}^*)^\\top \\|_F^2 + \\lambda \\tau^2\n\\end{equation}\n\nTaking the gradient of the above w.r.t. $\\tau^2$ and setting it to 0 gives:\n\\begin{equation}\n(\\tau^*)^2 = \\frac{\\langle \\mathbf{Y}-\\tilde \\mathbf{D} \\tilde \\mathbf{X}^\\top, \\mathbf{d}^* (\\mathbf{x}^*)^\\top \\rangle - \\lambda}{ \\|\\mathbf{d}^* (\\mathbf{x}^*)^\\top\\|_F^2}\n\\end{equation}\nThe result is completed by noting that the numerator is guaranteed to be strictly positive due to the fact that the polar solution has value strictly greater than 1.\n\\end{proof}\n\nThe step size $\\tau_{\\mathcal{A}}$ for the matrix completion algorithm, mentioned in Step 10 of Algorithm \\ref{algoblock:meta-algo_missing}, is computed through the following quadratic minimization problem:\n\\begin{eqnarray}\n \\min_{\\substack{\\tau_{\\mathcal{A}} \\in \\mathbb{R}}} \\tfrac{1}{2}\\|\\mathbf{Y}- \\mathbf{D}_{\\tau_{\\mathcal{A}}} \\mathbf{X}_{\\tau_{\\mathcal{A}}}^\\top\\|_F^2 + \\frac{\\lambda}{2} \\sum_{i=1}^{N+1} \\left(\\|\\left(\\mathbf{X}_{\\tau_{\\mathcal{A}}}\\right)_i\\|_F^2 + \\| \\left( \\mathbf{D}_{\\tau_{\\mathcal{A}}} \\right)_i \\|_F^2 + \\gamma \\|\\mathbf{L} \\left( \\mathbf{D}_{\\tau_{\\mathcal{A}}} \\right)_i + \\left(\\mathbf{k}_{\\tau_{\\mathcal{A}}}^2\\right)_i \\left( \\mathbf{D}_{\\tau_{\\mathcal{A}}} \\right)_i \\|_F^2 \\right)\n
\\label{eqn:for_tau_missing}\n\\end{eqnarray}\n\nwhere $\\left(\\mathbf{D}_{\\tau_{\\mathcal{A}}}, \\mathbf{X}_{\\tau_{\\mathcal{A}}}, \\mathbf{k}_{\\tau_{\\mathcal{A}}} \\right) = \\left( \\left[ \\tilde{\\mathbf{D}}, \\ {\\tau_{\\mathcal{A}}} \\mathbf{d}^* \\right], \\left[ \\tilde{\\mathbf{X}}, \\ {\\tau_{\\mathcal{A}}} \\mathbf{x}^* \\right] , \\left[ \\tilde{\\mathbf{k}}^\\top, \\ k^* \\right]^\\top\n\\right)$ and $N$ is the number of columns in $\\tilde{\\mathbf{D}}$.\n\n\\paragraph{Proposition} The optimal step size $\\tau^*_{\\mathcal{A}}$ that minimizes the expression in (\\ref{eqn:for_tau_missing}) is given by:\n\\begin{eqnarray}\n\\tau^*_{\\mathcal{A}} = \\frac{ \\sqrt{ \\langle \\mathcal{A}^* \\left( \\mathbf{Y}-\\tilde{\\mathbf{D}} \\tilde{\\mathbf{X}}^\\top \\right), \\mathcal{A}^* \\left( \\mathbf{d}^* {\\mathbf{x}^*}^\\top \\right)\\rangle - \\lambda }}{ \\| \\mathcal{A}^* \\left( \\mathbf{d}^* {\\mathbf{x}^*}^\\top \\right) \\|_F}\n\\end{eqnarray}\n\\begin{proof}\nFollowing the same steps as in the previous proof, we obtain:\n\n\\begin{equation}\n\\min_{\\tau_{\\mathcal{A}} \\geq 0} \\tfrac{1}{2}\\| \\mathcal{A}^* \\left( \\mathbf{Y}- \\tilde{\\mathbf{D}} \\tilde{\\mathbf{X}}^\\top \\right) - \\tau_{\\mathcal{A}}^2 \\mathcal{A}^* \\left( \\mathbf{d}^* (\\mathbf{x}^*)^\\top \\right) \\|_F^2 + \\lambda \\tau_{\\mathcal{A}}^2\n\\end{equation}\n\nThe proof is completed by minimizing this quadratic in $\\left(\\tau_{\\mathcal{A}}\\right)^2$.\n\n\\end{proof}\n\n\\subsection{Proofs}\n\n{\n\\renewcommand{\\theproposition}{\\ref{prop:equiv_prob}}\n\\begin{proposition}\nThe optimization problem in \\eqref{eq:main_obj} is a special case of the problem considered in \\cite{haeffele2019structured}.\n\\end{proposition}\n\\addtocounter{proposition}{-1}\n}\n\n\n\\begin{proof}\nThe problem considered in \\cite{haeffele2019structured} is stated in \\eqref{eq:gen_obj}. 
Comparing it with (\\ref{eq:main_obj}), we have:\n\\begin{gather}\n \\ell \\left( \\mathbf{D}\\mathbf{X}^{\\top} \\right) = \\tfrac{1}{2}\\| \\mathbf{Y} - \\mathbf{D}\\mathbf{X}^{\\top} \\|_F^2\\\\\n \\bar{\\theta} \\left( \\mathbf{D}_i, \\mathbf{X}_i \\right) = \\tfrac{1}{2} \\left( \\| \\mathbf{D}_i \\|^2_2 + \\| \\mathbf{X}_i \\|^2_2 + \\gamma \\min_{k_i} \\| \\mathbf{L} \\mathbf{D}_i + k_i^2 \\mathbf{D}_i \\|^2_2 \\right)\n\\end{gather}\nObserve that $\\ell ( \\hat{\\mathbf{Y}} ) = \\tfrac{1}{2}\\| \\mathbf{Y} - \\hat{\\mathbf{Y}} \\|^2_F$ is clearly convex and differentiable w.r.t. $\\hat{\\mathbf{Y}}$. \n To see that the optimization problem in \\eqref{eq:main_obj} is a special case of the problem considered in \\cite{haeffele2019structured}, it suffices to check that $\\bar \\theta(\\mathbf{d},\\mathbf{x})$ satisfies the three conditions of a rank-1 regularizer from \\cite{haeffele2019structured}.\n\n\\begin{enumerate}\n \\item $\\bar \\theta(\\alpha \\mathbf{d}, \\alpha \\mathbf{x}) = \\alpha^2 \\bar \\theta(\\mathbf{d}, \\mathbf{x}), \\ \\forall (\\mathbf{d},\\mathbf{x})$ and $\\forall \\alpha \\geq 0$.\n \n For any $\\alpha > 0$ and $\\forall (\\mathbf{d}, \\mathbf{x})$ (the case $\\alpha = 0$ is immediate):\n \\begin{equation}\n \\begin{split}\n \\bar{\\theta}\\left( \\alpha \\mathbf{d}, \\alpha \\mathbf{x} \\right) &= \\tfrac{1}{2} \\left( \\| \\alpha \\mathbf{d} \\|^2_2 + \\| \\alpha \\mathbf{x} \\|^2_2 + \\gamma \\min_{k} \\| \\alpha \\mathbf{L} \\mathbf{d} + k^2 \\alpha \\mathbf{d} \\|^2_2 \\right) \\\\\n &= \\alpha^2 \\, \\tfrac{1}{2} \\left( \\| \\mathbf{d} \\|^2_2 + \\| \\mathbf{x} \\|^2_2 + \\gamma \\min_{k} \\| \\mathbf{L} \\mathbf{d} + k^2 \\mathbf{d} \\|^2_2 \\right) \\\\\n &= \\alpha^2 \\bar{\\theta}(\\mathbf{d},\\mathbf{x})\n \\end{split}\n \\end{equation}\n \n where we note that scaling $\\mathbf{d}$ by $\\alpha > 0$ does not change the optimal value of $k$ in the third term, allowing $\\alpha^2$ to be moved outside of the norm.\n \n \\item $\\bar \\theta(\\mathbf{d}, \\mathbf{x}) \\geq 0, \\ \\forall 
(\\mathbf{d},\\mathbf{x})$.\n \n \n Clearly, all terms in $\\bar{\\theta}(\\mathbf{d},\\mathbf{x})$ are non-negative; thus, $\\forall (\\mathbf{d},\\mathbf{x})$, we have $\\bar{\\theta}(\\mathbf{d},\\mathbf{x})\\geq 0$.\n \\item For all sequences $(\\mathbf{d}^{(n)},\\mathbf{x}^{(n)})$ such that $\\|\\mathbf{d}^{(n)}(\\mathbf{x}^{(n)})^\\top\\|_F \\rightarrow \\infty$, we have $\\bar \\theta(\\mathbf{d}^{(n)},\\mathbf{x}^{(n)}) \\rightarrow \\infty$.\n \n Here, note that the following is true for all $(\\mathbf{d},\\mathbf{x})$:\n %\n \\begin{equation}\n \\|\\mathbf{d} \\mathbf{x}^\\top \\|_F = \\|\\mathbf{d}\\|_2 \\|\\mathbf{x}\\|_2 \\leq \\tfrac{1}{2}(\\|\\mathbf{d}\\|_2^2 + \\|\\mathbf{x}\\|_2^2) \\leq \\tfrac{1}{2}\\left(\\|\\mathbf{d}\\|_2^2 + \\|\\mathbf{x}\\|_2^2 + \\gamma \\min_{k} \\| \\mathbf{L} \\mathbf{d} + k^2 \\mathbf{d} \\|^2_2\\right) = \\bar \\theta(\\mathbf{d}, \\mathbf{x})\n \\end{equation}\n As a result we have $\\forall (\\mathbf{d}, \\mathbf{x})$ that $\\|\\mathbf{d}\\mathbf{x}^\\top\\|_F \\leq \\bar \\theta(\\mathbf{d},\\mathbf{x})$, completing the result.\n \n\\end{enumerate}\n\\end{proof}\n\nBefore proving Theorem \\ref{thm:polar} we first prove an intermediate lemma regarding Lipschitz constants.\n\\begin{lemma}\n\\label{lem:L_bound}\nGiven a set of constants $\\lambda_i, \\ i=1, 2, \\ldots$ such that $\\forall i$, $\\mu_\\lambda \\leq \\lambda_i \\leq 0$, and a constant $\\gamma > 0$, let the function $f$ be defined as:\n\\begin{equation}\nf(x) = \\max_i \\frac{1}{\\sqrt{1+\\gamma(x + \\lambda_i)^2}}.\n\\end{equation}\nThen, over the domain $0 \\leq x \\leq \\mu_x$, $f$ is Lipschitz continuous with Lipschitz constant $L_f$ bounded as follows:\n\\begin{equation}\nL_f \\leq \\left[\\begin{cases} \\frac{2}{3\\sqrt{3}} \\sqrt{\\gamma} & \\gamma\\geq \\frac{1}{2 \\max \\{ \\mu_\\lambda^2, \\mu_x^2 \\}} \\\\\n\\gamma \\max \\{ -\\mu_\\lambda, \\mu_x\\} (1+\\gamma \\max \\{\\mu_\\lambda^2, \\mu_x^2\\} )^{-\\tfrac{3}{2}} & \\mathrm{otherwise}\n\\end{cases} \\right] \\leq \\frac{2}{3 \\sqrt{3}}\\sqrt{\\gamma}. \n\\end{equation}\n\\end{lemma}\n\n\n\\begin{proof}\nFirst, note that for any two Lipschitz continuous functions $\\psi_a$ and $\\psi_b$, with associated Lipschitz constants $L_a$ and $L_b$, respectively, the point-wise maximum of the two functions, $\\psi(x) = \\max \\{ \\psi_a(x), \\psi_b(x) \\}$, is also Lipschitz continuous with Lipschitz constant bounded by $\\max \\{ L_a, L_b \\}$. This can be easily seen from the following two inequalities:\n\\begin{align}\n&\\psi_a(x') \\leq \\psi_a(x) + | \\psi_a(x') - \\psi_a(x) | \\leq \\psi(x) + | \\psi_a(x') - \\psi_a(x) | \\leq \\psi(x) + L_a |x' - x| \\\\\n&\\psi_b(x') \\leq \\psi_b(x) + | \\psi_b(x') - \\psi_b(x) | \\leq \\psi(x) + | \\psi_b(x') - \\psi_b(x) | \\leq \\psi(x) + L_b |x' - x|\n\\end{align}\nFrom this we have:\n\\begin{equation}\n\\begin{split}\n&\\psi(x') = \\max \\{ \\psi_a(x'), \\psi_b(x') \\} \\leq \\max \\{ \\psi(x) + L_a |x'-x|, \\psi(x) + L_b |x'-x| \\} = \\psi(x) + \\max \\{L_a, L_b \\} |x'-x| \\\\\n&\\implies \\psi(x') - \\psi(x) \\leq \\max \\{ L_a, L_b \\} |x'-x|\n\\end{split}\n\\end{equation}\nwhich implies the claim by symmetry.\n\nNow, if we define the functions $g$ and $h$ as\n\\begin{equation}\ng(x, \\lambda) = \\frac{1}{\\sqrt{1+\\gamma(x + \\lambda)^2}} \\ \\ \\ \\ \\ \\ \\ h(b) = \\gamma b (1+\\gamma b^2)^{-\\tfrac{3}{2}}\n\\end{equation}\nwe have that the Lipschitz constant of $f$, denoted as $L_f$, is bounded as:\n\\begin{equation}\n\\label{eq:h_max}\n\\begin{split}\nL_f \\leq &\\max_i \\sup_{x \\in [0, \\mu_x]} \\left| \\frac{\\partial}{\\partial x} g(x, \\lambda_i) \\right| \\leq \\sup_{\\lambda \\in [\\mu_\\lambda,0]} \\sup_{x \\in [0, \\mu_x]} \\left| \\frac{\\partial}{\\partial x} g(x, \\lambda) \\right| = \\\\\n&\\sup_{\\lambda \\in [\\mu_\\lambda,0]} \\sup_{x \\in [0, \\mu_x]} \\left| - \\frac{\\gamma(x+\\lambda)}{ (\\sqrt{1+\\gamma (x+\\lambda)^2})^3 } \\right|\n= \\sup_{b \\in [\\mu_\\lambda, \\mu_x]} \\left| h(b) \\right| = \\sup_{b 
\\in [0, \\max \\{ -\\mu_\\lambda, \\mu_x \\} ]} h(b)\n\\end{split}\n\\end{equation}\nwhere the first inequality follows from the result above about the Lipschitz constant of the point-wise maximum of two functions together with the simple fact that the Lipschitz constant of a function is bounded by the maximum magnitude of its gradient, and the final equality is due to the symmetry of $|h(b)|$ about the origin. Now, finding the critical points of $h(b)$ for non-negative $b$ we have:\n\\begin{equation}\n\\begin{split}\nh'(b) = \\gamma (1+\\gamma b^2)^{-\\tfrac{3}{2}} - 3 \\gamma^2 b^2 (1+\\gamma b^2)^{- \\tfrac{5}{2}} = 0 \n\\implies 3 \\gamma b^2 = (1+ \\gamma b^2) \\implies b^* = \\frac{1}{\\sqrt{2 \\gamma}}\n\\end{split} \n\\end{equation}\nNote that $h(0)=0$, $h(b)>0$ for all $b>0$, and $h'(b)<0$ for all $b > b^*$. As a result, $b^*$ will be a maximizer of $h(b)$ if it is feasible; otherwise the maximum will occur at the extreme point $b = \\max \\{-\\mu_\\lambda, \\mu_x \\}$. From this we have the result:\n\\begin{equation}\n\\begin{split}\nL_f &\\leq \\begin{cases} h(b^*) = \\frac{2}{3 \\sqrt{3}} \\sqrt{\\gamma} & \\gamma\\geq \\frac{1}{2 \\max \\{ \\mu_\\lambda^2, \\mu_x^2\\}} \\\\\nh(\\max \\{-\\mu_\\lambda, \\mu_x \\} ) = \\gamma \\max \\{ -\\mu_\\lambda, \\mu_x \\} (1+\\gamma \\max \\{ \\mu_\\lambda^2, \\mu_x^2 \\})^{-\\tfrac{3}{2}} & \\text{otherwise}\n\\end{cases} \\\\\n& \\leq h(b^*) = \\frac{2}{3\\sqrt{3}}\\sqrt{\\gamma}.\n\\end{split}\n\\end{equation}\n\\end{proof}\n\n{\n\\renewcommand{\\thetheorem}{\\ref{thm:polar}}\n\n\\begin{theorem}\nFor the objective in \\eqref{eq:main_obj}, the associated polar problem is equivalent to:\n\\begin{equation}\n\\Omega_\\theta^\\circ (\\mathbf{Z}) = \\max_{\\mathbf{d} \\in \\mathbb{R}^{L}, \\mathbf{x} \\in \\mathbb{R}^{T}, k\\in \\mathbb{R}} \\mathbf{d}^\\top \\mathbf{Z} \\mathbf{x} \\ \\ \\textnormal{s.t.} \\ \\ \\|\\mathbf{d}\\|^2_2 + \\gamma \\|\\mathbf{L}\\mathbf{d} + k^2 \\mathbf{d}\\|_2^2 \\leq 1, \\ \\|\\mathbf{x}\\|^2_2 
\\leq 1, \\ 0 \\leq k \\leq 2.\n\\end{equation}\nFurther, let $\\mathbf{L} = \\Gamma \\Lambda \\Gamma^\\top$ be an eigen-decomposition of $\\mathbf{L}$ and define the matrix $\\mathbf{A}( \\bar k) = \\Gamma(\\mathbf{I} + \\gamma(\\bar k \\mathbf{I} + \\Lambda)^2) \\Gamma^\\top$. Then, if we define $\\bar k^*$ as%\n\\begin{equation}\n\\bar k^* = \\argmax_{\\bar k \\in [0,4]} \\| \\mathbf{A}(\\bar k)^{-1\/2} \\mathbf{Z} \\|_2\n\\label{eq:k_max}\n\\end{equation}\noptimal values of $\\mathbf{d},\\mathbf{x}, k$ are given as $\\mathbf{d}^* = \\mathbf{A}(\\bar k^*)^{-1\/2} \\bar \\mathbf{d}$, $\\mathbf{x}^* = \\bar \\mathbf{x}$, and $k^* = (\\bar k^*)^{1\/2}$ where $(\\bar \\mathbf{d}, \\bar \\mathbf{x})$ are the left and right singular vectors, respectively, associated with the largest singular value of $\\mathbf{A}(\\bar k^*)^{-1\/2} \\mathbf{Z}$. Additionally, the above line search over $\\bar k$ is Lipschitz continuous with a Lipschitz constant, $L_{\\bar k}$, which is bounded by:\n\\begin{equation}\nL_{\\bar k} \\leq \\left[ \\begin{cases} \\frac{2}{3 \\sqrt{3}} \\sqrt{\\gamma} \\|\\mathbf{Z}\\|_2 & \\gamma \\geq \\frac{1}{32} \\\\ \n4 \\gamma (1 + 16 \\gamma)^{-\\tfrac{3}{2}} \\|\\mathbf{Z}\\|_2 & \\mathrm{otherwise} \\end{cases} \\right] \\leq \\frac{2}{3 \\sqrt{3}} \\sqrt{\\gamma} \\|\\mathbf{Z}\\|_2\n\\end{equation}\n\\end{theorem}\n\\addtocounter{theorem}{-1}\n}\n\n\\begin{proof}\nThe polar problem associated with the objectives of the form \\eqref{eq:gen_obj} as given in \\cite{haeffele2019structured} is:\n\n\\begin{gather}\n \\Omega_\\theta^\\circ (\\mathbf{Z}) = \\sup_{\\mathbf{d},\\mathbf{x}} \\mathbf{d}^{\\top} \\mathbf{Z} \\mathbf{x} \\ \\ \\textnormal{s.t.} \\ \\ \\bar \\theta(\\mathbf{d},\\mathbf{x}) \\leq 1 \n\\end{gather}\n\nFor our particular problem, due to the bilinearity between $\\mathbf{d}$ and $\\mathbf{x}$ in the objective the above is equivalent to:\n\\begin{gather}\n \\Omega_\\theta^\\circ (\\mathbf{Z}) = \\sup_{\\mathbf{d},\\mathbf{x}} 
\\mathbf{d}^{\\top} \\mathbf{Z} \\mathbf{x} \\ \\ \\textnormal{s.t.} \\ \\ \\| \\mathbf{x} \\|^2_2 \\leq 1, \\|\\mathbf{d}\\|^2_2 + \\gamma \\min_{k} \\| \\mathbf{L} \\mathbf{d} + k^2 \\mathbf{d} \\|^2_2 \\leq 1\n\\end{gather}\nNote that this is equivalent to moving the minimization w.r.t. $k$ in the regularization constraint to a maximization over $k$:\n \\begin{gather}\n \\Omega_\\theta^\\circ (\\mathbf{Z}) = \\sup_{\\mathbf{d},\\mathbf{x},k} \\mathbf{d}^{\\top} \\mathbf{Z} \\mathbf{x} \\ \\ \\textnormal{s.t.} \\ \\ \\| \\mathbf{x} \\|^2_2 \\leq 1, \\|\\mathbf{d}\\|^2_2 + \\gamma \\| \\mathbf{L} \\mathbf{d} + k^2 \\mathbf{d} \\|^2_2 \\leq 1\n\\end{gather}\n\nNext, note that maximizing w.r.t. $\\mathbf{d}$ while holding $\\mathbf{x}$ and $k$ fixed is equivalent to solving a problem of the form:\n\\begin{align}\n\\max_\\mathbf{d} \\langle \\mathbf{d}, \\mathbf{Z} \\mathbf{x} \\rangle \\ \\ \\textnormal{subject to} \\ \\ \\mathbf{d}^\\top \\mathbf{A} \\mathbf{d} \\leq 1\n\\end{align}\nfor some positive definite matrix $\\mathbf{A}$. 
If we make the change of variables $\\bar \\mathbf{d} = \\mathbf{A}^{1\/2} \\mathbf{d}$, this then becomes:\n\\begin{align}\n\\max_{\\bar \\mathbf{d}} \\ \\ &\\langle \\bar \\mathbf{d}, \\mathbf{A}^{-1\/2} \\mathbf{Z} \\mathbf{x} \\rangle \\ \\ \\textnormal{subject to} \\ \\ \\| \\bar \\mathbf{d} \\|_2^2 \\leq 1\n\\end{align}\nwhose optimal value is $\\| \\mathbf{A}^{-1\/2} \\mathbf{Z} \\mathbf{x} \\|_2$, and where the optimal $\\bar \\mathbf{d}$ and $\\mathbf{d}$ are obtained at \n\\begin{align}\n\\bar \\mathbf{d}_{opt} &= \\frac{\\mathbf{A}^{-1\/2} \\mathbf{Z} \\mathbf{x}}{\\|\\mathbf{A}^{-1\/2} \\mathbf{Z} \\mathbf{x}\\|_2} \\\\\n\\label{eq:d_opt}\n\\mathbf{d}_{opt} &= \\mathbf{A}^{-1\/2} \\bar \\mathbf{d}_{opt} = \\frac{\\mathbf{A}^{-1} \\mathbf{Z} \\mathbf{x}}{\\|\\mathbf{A}^{-1\/2} \\mathbf{Z} \\mathbf{x}\\|_2}\n\\end{align}\nFor our particular problem, if we make the change of variables $\\bar k = k^2$ we have that $\\mathbf{A}$ is given by:\n\\begin{equation}\n\\mathbf{A}(\\bar k) = (1+\\gamma \\bar k^2) \\mathbf{I} + \\gamma \\mathbf{L}^2 + 2 \\bar k \\gamma \\mathbf{L}\n\\end{equation}\nwhere we have used that $\\mathbf{L}$ is a symmetric matrix. 
If we let $\\mathbf{L} = \\Gamma \\Lambda \\Gamma^\\top$ be an eigen-decomposition of $\\mathbf{L}$ then we can also represent $\\mathbf{A}(\\bar k)$ and $\\mathbf{A}(\\bar k)^{-1\/2}$ as:\n\\begin{align}\n\\mathbf{A}(\\bar k) &= \\Gamma \\left( (1+\\gamma \\bar k^2) \\mathbf{I} + \\gamma \\Lambda^2 + 2 \\bar k \\gamma \\Lambda \\right) \\Gamma^\\top \\\\\n &= \\Gamma (\\mathbf{I} + \\gamma ( \\bar k \\mathbf{I} + \\Lambda)^2) \\Gamma^\\top \\\\ \n \\label{eq:A12}\n\\mathbf{A}(\\bar k)^{-1\/2} &= \\Gamma (\\mathbf{I} + \\gamma ( \\bar k \\mathbf{I} + \\Lambda)^2)^{-1\/2} \\Gamma^\\top \n\\end{align}\nNow if we substitute back into the original polar problem, we have:\n\\begin{align}\n\\Omega^\\circ(\\mathbf{Z}) &= \\max_{\\mathbf{x}, \\bar k} \\| \\mathbf{A}(\\bar k)^{-1\/2} \\mathbf{Z} \\mathbf{x} \\|_2 \\ \\ \\textnormal{subject to} \\ \\ \\|\\mathbf{x}\\|_2^2 \\leq 1 \\\\\n\\label{eq:k_line}\n&= \\max_{\\bar k} \\| \\mathbf{A}(\\bar k)^{-1\/2} \\mathbf{Z} \\|_2\n\\end{align}\nwhere $\\|\\cdot\\|_2$ denotes the spectral norm (maximum singular value). 
Similarly, for a given $\\bar k$ the optimal $\\mathbf{x}$ is given as the right singular vector of $\\mathbf{A}(\\bar k)^{-1\/2}\\mathbf{Z}$ associated with the largest singular value.\n\nAs a result, we can solve the polar problem by performing a line search over $\\bar k$; once an optimal $\\bar k^*$ is found, we get $\\mathbf{x}^*$ as the largest right singular vector of $\\mathbf{A}(\\bar k^*)^{-1\/2} \\mathbf{Z}$ and the optimal $\\mathbf{d}^*$ from \\eqref{eq:d_opt} (where $\\mathbf{d}_{opt}$ will be the largest left singular vector of $\\mathbf{A}(\\bar k^*)^{-1\/2} \\mathbf{Z}$ multiplied by $\\mathbf{A}(\\bar k^*)^{-1\/2}$).\n\nNow, an upper bound for $\\bar k$ can be calculated from the fact that the optimal $\\bar k$ is defined through a minimization problem, i.e.,\n\\begin{eqnarray}\n \\bar k^* = \\argmin_{\\bar k} \\| \\mathbf{L} \\mathbf{d} + \\bar k \\mathbf{d} \\|_2^2\n\\end{eqnarray}\nso for any $\\mathbf{d}$,\n\\begin{eqnarray}\n \\bar k^* = -\\frac{ \\mathbf{d}^\\top \\mathbf{L} \\mathbf{d}} {\\|\\mathbf{d}\\|_2^2}\n \\label{eq:opt_k}\n\\end{eqnarray}\nwhich is bounded above by the magnitude of the smallest (most negative) eigenvalue of $\\mathbf{L}$ (note that $\\mathbf{L}$ is negative (semi)definite). 
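For completeness, the closed form in \\eqref{eq:opt_k} follows by expanding the quadratic and setting its derivative w.r.t. $\\bar k$ to zero:\n\\begin{equation}\n\\frac{\\partial}{\\partial \\bar k} \\left( \\|\\mathbf{L}\\mathbf{d}\\|_2^2 + 2 \\bar k \\, \\mathbf{d}^\\top \\mathbf{L} \\mathbf{d} + \\bar k^2 \\|\\mathbf{d}\\|_2^2 \\right) = 2 \\mathbf{d}^\\top \\mathbf{L} \\mathbf{d} + 2 \\bar k \\|\\mathbf{d}\\|_2^2 = 0 \\implies \\bar k^* = -\\frac{\\mathbf{d}^\\top \\mathbf{L} \\mathbf{d}}{\\|\\mathbf{d}\\|_2^2}\n\\end{equation}\n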
We cite the literature on eigenvalues of discrete second derivatives \\cite{chung2000discrete} to note that all eigenvalues of $\\mathbf{L}$ (irrespective of the boundary conditions) lie in the range $[-\\frac{4}{\\Delta l^2},0]$. Since we specifically chose $\\Delta l = 1$ (without loss of generality), all eigenvalues of $\\mathbf{L}$ lie in the range $[-4,0]$.\n\nAs a result, we need to only consider $\\bar k$ in the range $[0,4]$:\n\\begin{equation}\n\\bar k^* = \\argmax_{\\bar k \\in [0,4]} \\| \\mathbf{A}(\\bar k)^{-1\/2} \\mathbf{Z} \\|_2\n\\end{equation}\n\nFinally, to show the Lipschitz continuity, we define the function:\n\\begin{equation}\nf_\\mathbf{A}(\\bar k) = \\| \\mathbf{A}(\\bar k)^{-1\/2} \\|_2\n\\end{equation}\nand then note the following:\n\\begin{equation}\n\\begin{split}\n& \\left| \\|\\mathbf{A}(\\bar k)^{-1\/2}\\mathbf{Z} \\|_2 - \\| \\mathbf{A}(\\bar k ')^{-1\/2} \\mathbf{Z} \\|_2 \\right| \\\\\n\\leq & \\left\\| \\left( \\mathbf{A}(\\bar k)^{-1\/2} - \\mathbf{A}(\\bar k')^{-1\/2} \\right) \\mathbf{Z} \\right\\|_2 \\\\\n\\leq & \\left\\| \\mathbf{A}(\\bar k)^{-1\/2} - \\mathbf{A}(\\bar k')^{-1\/2} \\right\\|_2 \\left\\| \\mathbf{Z} \\right\\|_2 \\\\\n\\leq & L_\\mathbf{A} |\\bar k - \\bar k'| \\|\\mathbf{Z}\\|_2\n\\end{split}\n\\end{equation}\nwhere the first inequality is simply the reverse triangle inequality, the second inequality is due to the spectral norm being submultiplicative, and $L_\\mathbf{A}$ denotes the Lipschitz constant of $f_\\mathbf{A}(\\bar k)$. 
From the form of $\\mathbf{A}(\\bar k)^{-1\/2}$ in \\eqref{eq:A12} note that we have:\n\\begin{equation}\nf_\\mathbf{A}(\\bar k) \\equiv \\| \\mathbf{A}(\\bar k)^{-1\/2}\\|_2 = \\max_i \\frac{1}{\\sqrt{1 + \\gamma(\\bar k + \\Lambda_{i,i})^2}}\n\\end{equation}\nso the result is completed by recalling from our discussion above that $\\Lambda_{i,i} \\in [-4, 0], \\forall i$, and applying Lemma \\ref{lem:L_bound}.\n\n\\end{proof}\n\n{\n\\renewcommand{\\thecorollary}{\\ref{cor:poly_time}}\n\\begin{corollary}\nAlgorithm \\ref{alg:meta} produces an optimal solution to \\eqref{eq:main_obj} in polynomial time.\n\\end{corollary}\n\\addtocounter{corollary}{-1}\n}\n\\begin{proof}\nThis result is largely a corollary of Theorem \\ref{thm:polar} and known results in the literature. Namely, the authors of \\cite{xu2013block} show that the block coordinate update steps \\eqref{eq:grad_D}, \\eqref{eq:grad_X} and \\eqref{eq:grad_k} in step \\ref{step:grad_desc} of Algorithm \\ref{algoblock:meta-algo} reach a stationary point in polynomial time because the objective function \\eqref{eq:main_obj} is convex w.r.t. each of $\\mathbf{D}$, $\\mathbf{X}$, and $\\mathbf{k}$ when the others are held fixed. Next,\nby Theorem \\ref{thm:polar} the polar problem can be solved in polynomial time. Finally, it has been noted in the literature on structured matrix factorization \\cite{bach2013convex} that the polar update step is equivalent to a generalized conditional gradient step,\nand if the conditional gradient steps (i.e., the polar problem) can be solved exactly (as we show in Theorem \\ref{thm:polar}) then the algorithm converges in a polynomial number of such steps. 
As a result, because the block coordinate update steps reach a stationary point in polynomial time, we perform a polar update step (a.k.a., a conditional gradient step) at polynomial time intervals, so the overall algorithm is also guaranteed to converge in polynomial time.\n\n\\end{proof} \n\n\\subsection{Interpreting the Wave-Informed Regularizer as a Bandpass Filter}\n\nNote that when identifying the optimal $k$ value in the polar program, we solve for\n\\begin{equation}\n\\label{eq:k_opt}\n\\argmax_{\\bar k \\in [0,4]} \\| \\Gamma (\\mathbf{I} + \\gamma ( \\bar k \\mathbf{I} + \\Lambda)^2)^{-1\/2} \\Gamma^\\top \\mathbf{Z} \\|_2 \\; .\n\\end{equation}\nThis optimization has an intuitive interpretation from digital signal processing. Given that $\\Gamma$ contains the eigenvectors of a Toeplitz matrix, those eigenvectors have spectral qualities similar to the discrete Fourier transform (the eigenvectors of a circulant matrix would be exactly the discrete Fourier basis). As a result, $\\Gamma^\\top$ transforms the data $\\mathbf{Z}$ into a spectral-like domain and $\\Gamma$ returns the data back to the original domain. Since the other terms are all diagonal matrices, they represent element-wise multiplication across the data in the spectral domain. This is approximately equivalent to a filtering operation, with filter coefficients given by the diagonal entries of $(\\mathbf{I} + \\gamma ( \\bar k \\mathbf{I} + \\Lambda)^2)^{-1\/2}$.\n\nFurthermore, recall that the magnitude response of a 1st-order Butterworth filter is given by:\n\\begin{equation}\nT(\\omega) = \\frac{1}{\\sqrt{1+\\gamma (k_0 + \\omega)^2}}\n\\end{equation}\nwhere $k_0$ is the center frequency of the passband of the filter and $1\/\\sqrt{\\gamma}$ corresponds to the filter's $-3$dB cut-off frequency. 
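To make this filtering interpretation concrete, the following numerical sketch (ours, not the paper's code; the matrix size $n=64$ and the values $\gamma=50$, $\bar k = 2$ are hypothetical choices, and we use a plain second-difference matrix for $\mathbf{L}$) forms the diagonal filter coefficients of $(\mathbf{I} + \gamma(\bar k \mathbf{I} + \Lambda)^2)^{-1/2}$ and checks that the response peaks at eigenvalues near $-\bar k$, as a bandpass filter would:

```python
import numpy as np

# Illustrative sketch (not the paper's code). Sizes and hyperparameters are
# hypothetical: a 64-point second-difference matrix L, gamma = 50, k_bar = 2.
n, gamma, k_bar = 64, 50.0, 2.0
L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)  # eigenvalues lie in [-4, 0]

lam, Gamma = np.linalg.eigh(L)  # L = Gamma @ diag(lam) @ Gamma.T, lam ascending
# Diagonal filter coefficients of (I + gamma*(k_bar*I + Lambda)^2)^(-1/2):
coeffs = 1.0 / np.sqrt(1.0 + gamma * (k_bar + lam) ** 2)

# Applying the filter to data Z is Gamma @ diag(coeffs) @ Gamma.T @ Z, i.e.
# transform into the spectral-like domain, scale element-wise, transform back.
A_inv_sqrt = Gamma @ np.diag(coeffs) @ Gamma.T

# Sanity check: this really is the inverse square root of I + gamma*(k_bar*I + L)^2.
M = np.eye(n) + gamma * (k_bar * np.eye(n) + L) @ (k_bar * np.eye(n) + L)
assert np.allclose(A_inv_sqrt @ A_inv_sqrt @ M, np.eye(n), atol=1e-8)

# The response is largest where lam is near -k_bar (the passband center) and
# decays toward the edges of the spectrum, matching the Butterworth form above.
peak_eig = lam[np.argmax(coeffs)]
```

Under these assumptions the coefficients trace out exactly the 1st-order Butterworth magnitude evaluated at the eigenvalues of $\mathbf{L}$.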
Comparing this to the filter coefficients in \\eqref{eq:k_opt}, we note that they are identical to those of a 1st-order Butterworth filter, where $\\Lambda$ corresponds to the angular frequencies.\nAs a result, we can consider this optimization as determining the optimal filter center frequency $(\\bar k)$ with fixed bandwidth $(1\/\\sqrt{\\gamma})$ that retains the maximum amount of signal power from $\\mathbf{Z}$. Likewise, the choice of the $\\gamma$ hyperparameter sets the bandwidth of the filter. As $\\gamma \\rightarrow \\infty$, the filter bandwidth approaches 0 and thereby restricts us to a single-frequency (i.e., Fourier) solution.\nFurthermore, we can provide a recommended lower bound for $\\gamma$ according to $\\gamma > 1\/k_{bw}^2$, where $k_{bw}$ is the bandwidth of the signal within this spectral-like domain.\n\n\n\\section{Additional Results}\n\n\\subsection{Characterizing Multi-Segment Transmission Lines}\n\n For the simulation considered in \\textbf{Characterizing Multi-Segment Transmission Lines} in \\S~\\ref{sec:results}, Figure \\ref{fig:Merge_2} shows two example columns for $\\gamma = 50$ and $\\lambda = 0.6$. We show in Figure \\ref{fig:polar_objective} the reduction of the objective value over iterations, the rate of change of the objective value per iteration, and the value of the polar after each iteration of the overall meta-algorithm. 
Figures~\\ref{fig:workflow_a}, \\ref{fig:workflow_b}, \\ref{fig:workflow_c} show curves similar to Figure~\\ref{fig:polar_objective} but for different choices of regularization parameters.\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.75]{figs\/polar_obj_50_lamb_0_6.png}\n \\caption{Objective value, rate of change of objective value per iteration, and the value of the polar after each iteration of the meta-algorithm for the simulation in subsection \\textbf{Characterizing Multi-Segment Transmission Lines} \\S \\ref{sec:results}.}\n \\label{fig:polar_objective}\n\\end{figure}\n\nWe observe in Figure \\ref{fig:polar_objective} that the change of objective value is almost zero after 70 iterations. At this point, the polar value also goes below 1.1 (which, as we note in the main paper, implies we are close to the global minimum) and eventually reaches 1, providing a certificate of global optimality.\n\nIn practice, we often stop the algorithm once the polar value drops below 1.1, as this guarantees a solution very close to optimal.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.95\\textwidth]{figs\/polar_obj_50_lamb_1_6.png}\n \\caption{Objective value, rate of change of objective value per iteration, and the value of the polar after each iteration of the meta-algorithm for the simulation in \\textbf{Characterizing Multi-Segment Transmission Lines} \\S \\ref{sec:results} with regularization parameters $\\gamma = 50$, $\\lambda = 1.6$.}\n \\label{fig:workflow_a}\n\\end{figure}\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.95\\textwidth]{figs\/polar_obj_50_lamb_3_7.png}\n \\caption{Objective value, rate of change of objective value per iteration, and the value of the polar after each iteration of the meta-algorithm for the simulation in \\textbf{Characterizing Multi-Segment Transmission Lines} \\S \\ref{sec:results} with regularization parameters $\\gamma = 50$, $\\lambda = 3.7$.}\n 
\\label{fig:workflow_b}\n\\end{figure}\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.95\\textwidth]{figs\/polar_obj_50_lamb_5.png}\n \\caption{Objective value, rate of change of objective value per iteration, and the value of the polar after each iteration of the meta-algorithm for the simulation in \\textbf{Characterizing Multi-Segment Transmission Lines} \\S \\ref{sec:results} with regularization parameters $\\gamma = 50$, $\\lambda = 5$.}\n \\label{fig:workflow_c}\n\\end{figure}\n\nIn addition to these optimization results, we also show additional examples of columns of $\\mathbf{D}$ for different values of $\\gamma$ ($500$ and $5000$) with $\\lambda$ fixed at $0.6$, to illustrate how the columns of $\\mathbf{D}$ vary with $\\gamma$.\n\nIn Figures~\\ref{dictatoms_multi_a} and~\\ref{dictatoms_multi_b}, we observe that with increasing $\\gamma$, the performance of the algorithm in distinguishing the two regions degrades: energy in a dictionary column is no longer confined to the segment of a single material; instead, it is distributed over the entire column. 
This can also be observed quantitatively by comparing the partitioned normalized energies (energy on each of the two segments, partitioned by the violet line, of the normalized columns of $\\mathbf{D}$) in Figures~\\ref{energies_a},~\\ref{energies_b} and Figures~\\ref{dictatoms_multi_a},~\\ref{dictatoms_multi_b}.\n\n\\begin{figure}[ht!]\n \\centering\n \\includegraphics[width=0.65\\textwidth]{figs\/D_diff_strings_lr_gamma_500_lam_0_6.png}\n \\caption{Two columns of $\\mathbf{D}$ obtained from wave-informed matrix factorization when $\\gamma = 500$, $\\lambda = 0.6$.}\n \\label{dictatoms_multi_a}\n\\end{figure}\n\n\\begin{figure}[ht!]\n \\centering\n \\includegraphics[width=0.65\\textwidth]{figs\/D_diff_strings_lr_gamma_5000_lam_0_6.png}\n \\caption{Two columns of $\\mathbf{D}$ obtained from wave-informed matrix factorization when $\\gamma = 5000$, $\\lambda = 0.6$.}\n \\label{dictatoms_multi_b}\n\\end{figure}\n\n\n\\begin{figure}[ht!]\n \\centering\n \\includegraphics[width=0.65\\textwidth]{figs\/energies_gamma_500_lamb_0_6.png}\n \\caption{Partitioned normalized energies of the columns of $\\mathbf{D}$ obtained from wave-informed matrix factorization when $\\gamma = 500$, $\\lambda = 0.6$.}\n \\label{energies_a}\n\\end{figure}\n\n\\begin{figure}[ht!]\n \\centering\n \\includegraphics[width=0.65\\textwidth]{figs\/energies_gamma_5000_lamb_0_6.png}\n \\caption{Partitioned normalized energies of the columns of $\\mathbf{D}$ obtained from wave-informed matrix factorization when $\\gamma = 5000$, $\\lambda = 0.6$.}\n \\label{energies_b}\n\\end{figure}\n\n\n\\subsection{A Vibrating String}\nFor the experiment described in subsection \\textbf{A Fixed Vibrating String} of \\S \\ref{sec:results}, we show the squared difference between the actual matrix $\\mathbf{D}$ and the one estimated from the algorithm in Table \\ref{table:wave_1}, Table \\ref{table:wave_2}, and Table \\ref{table:wave_3} (for different scenarios). 
We can calculate the percentage error as\n\\begin{gather}\n \\% \\textnormal{error} = \\frac{\\| \\hat{\\mathbf{D}} - \\mathbf{D} \\|_F}{\\|\\mathbf{D}\\|_F} \\times 100\n\\end{gather}\nwhere $\\hat{\\mathbf{D}}$ is the matrix estimated from the algorithm after permuting the columns to optimally align with the ground-truth $\\mathbf{D}$, zero-padding $\\mathbf{D}$ or $\\hat{\\mathbf{D}}$ (whichever has fewer columns) so that the two matrices have the same number of columns, and normalizing the non-zero columns of $\\hat{\\mathbf{D}}$ to have unit $\\ell_2$ norm.\n\nSince the columns are normalized and the actual $\\mathbf{D}$ matrix contains 10 columns (corresponding to the 10 modes in the simulated data), we know that $\\| \\mathbf{D} \\|_F^2 = 10$. We consider a squared error $\\| \\hat{\\mathbf{D}} - \\mathbf{D} \\|^2_F$ of the order of $0.01$ to be desirable, and we bold errors of this order in the tables. Table~\\ref{table:wave_1} describes the performance as a function of regularization as described in \\eqref{eqn:wave_equ} with amplitude of order 10 and noise variance of $0.1$. We see that a wide range of values of $\\gamma$ and $\\lambda$ satisfies our error criteria.\n\nTable~\\ref{table:wave_2} describes the performance as a function of regularization as described by the damped vibrations in \\eqref{eqn:damped_wave_time} with amplitude of order 10 and noise variance of $0.1$. We see that the error values satisfy our chosen criteria beyond a particular value of the regularization $\\gamma$ ($10^5$) and only within a limited range of $\\lambda$. The range of $\\lambda$ that satisfies the chosen error condition also shrinks with increasing values of $\\gamma$.\n\nTable~\\ref{table:wave_3} describes the performance as a function of regularization as described in \\eqref{eqn:wave_equ} with amplitude of order 10 and noise variance of 10. We observe that a narrow range of regularization values satisfies the error criteria. We thus clearly see the effect of noise in this case. 
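As a minimal sketch of the alignment-and-error computation described above (our illustration, not the paper's code: the helper name `percent_error`, the use of SciPy's Hungarian solver for the column permutation, and the resolution of each column's sign ambiguity are our choices), the metric can be computed as:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def percent_error(D, D_hat):
    """Sketch of the error metric above: zero-pad to equal width, normalize
    non-zero estimated columns, permute (and sign-flip) them to best align
    with the ground truth, then report 100 * ||D_hat - D||_F / ||D||_F."""
    rows = D.shape[0]
    n = max(D.shape[1], D_hat.shape[1])
    pad = lambda M: np.hstack([M, np.zeros((rows, n - M.shape[1]))])
    D, D_hat = pad(D), pad(D_hat)

    # Normalize the non-zero columns of the estimate to unit l2 norm.
    norms = np.linalg.norm(D_hat, axis=0)
    D_hat = D_hat / np.where(norms > 0, norms, 1.0)

    # Optimal column assignment by maximal |correlation| (Hungarian algorithm),
    # also resolving the per-column sign ambiguity of the factorization.
    C = D.T @ D_hat
    row, col = linear_sum_assignment(-np.abs(C))
    D_hat = D_hat[:, col] * np.sign(C[row, col] + 1e-12)

    return 100.0 * np.linalg.norm(D_hat - D) / np.linalg.norm(D)
```

For instance, a permuted and sign-flipped copy of the ground-truth matrix yields zero error under this metric.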
Based on our observations, we believe that large regularization values overfit to the noise with high-frequency components, resulting in poor performance, while low regularization values do not enforce enough of the wave-constraint, resulting in noisy components.\n\nIn Figures \\ref{fig:accommodations_a}, \\ref{fig:accommodations_b}, we show columns of $\\mathbf{D}$ obtained from low-rank factorization and wave-informed matrix factorization with a signal amplitude of order $10$ and noise variance of $10$. For the low-rank factorization, we choose $\\lambda = 2592$, and for wave-informed matrix factorization we choose $\\gamma = 10^7$ and $\\lambda = 2200$ ($\\gamma$ and $\\lambda$ are chosen corresponding to the minimum value in Table~\\ref{table:wave_3}). The columns of $\\mathbf{D}$ obtained from low-rank matrix factorization in Figure~\\ref{fig:accommodations_a} are noisy and not always decipherable as sinusoids. In contrast, the columns of $\\mathbf{D}$ obtained from wave-informed matrix factorization in Figure \\ref{fig:accommodations_b} are completely noiseless and have a clearly measurable wavenumber.\n\nFor the case of \\eqref{eqn:damped_wave_space}, when $R=6$, we show an approximate recovery of all the modes in Figure~\\ref{fig:damped_sines_all}.\n\n\\setlength{\\arrayrulewidth}{0.65mm}\n\\setlength{\\tabcolsep}{4pt}\n\\begin{table}[H]\n\n\\captionof{table}{Squared Frobenius norm of the difference between the actual $\\mathbf{D}$ and the one obtained from wave-informed matrix factorization with noise of variance 0.1 and data of the form \\eqref{eqn:wave_equ}}\n\\begin{tabular}{@{}|r|r|r|r|r|r|r|r|r|r|r|r|r|@{}}\n\\hline\n\\multicolumn{1}{|c|}{} & \\multicolumn{12}{|c|}{Regularization $\\left(\\gamma\\right)$} \\\\\n\\hline\n\\multicolumn{1}{|l|}{$\\lambda$} & \\multicolumn{1}{l|}{$10^1$} & \\multicolumn{1}{l|}{$10^{2}$} & \\multicolumn{1}{l|}{$10^3$} & \\multicolumn{1}{l|}{$10^4$} & \\multicolumn{1}{l|}{$10^5$} & \\multicolumn{1}{l|}{$10^6$} 
& \\multicolumn{1}{l|}{$10^7$} & \\multicolumn{1}{l|}{$10^8$} & \\multicolumn{1}{l|}{$10^9$} & \\multicolumn{1}{l|}{$10^{10}$} & \\multicolumn{1}{l|}{$10^{11}$} & \\multicolumn{1}{l|}{$10^{12}$} \\\\ \\hline\n200 & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0038} & \\cellcolor[HTML]{FFD666}\\textbf{0.0035} & \\cellcolor[HTML]{FFDC7F}1.0023 & \\cellcolor[HTML]{FFDC7F}1.0013 & \\cellcolor[HTML]{FFDC7F}1.0013 & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} \\\\ \\hline\n300 & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0038} & \\cellcolor[HTML]{FFD666}\\textbf{0.0035} & \\cellcolor[HTML]{FFDC7F}1.0023 & \\cellcolor[HTML]{FFDC7F}1.0013 & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} \\\\ \\hline\n400 & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0038} & \\cellcolor[HTML]{FFD666}\\textbf{0.0035} & \\cellcolor[HTML]{FFEAB4}3.0598 & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} \\\\ \\hline\n500 & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & 
\\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0038} & \\cellcolor[HTML]{FFDC7F}1.0035 & \\cellcolor[HTML]{FFEAB3}3.0438 & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} \\\\ \\hline\n600 & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0038} & \\cellcolor[HTML]{FFD666}\\textbf{0.0035} & \\cellcolor[HTML]{FFEAB3}3.0342 & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} \\\\ \\hline\n700 & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0038} & \\cellcolor[HTML]{FFD666}\\textbf{0.0035} & \\cellcolor[HTML]{FFE399}2.0015 & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFDC7F}1.0011 \\\\ \\hline\n800 & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0038} & \\cellcolor[HTML]{FFD666}\\textbf{0.0035} & \\cellcolor[HTML]{FFDC7F}1.0014 & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFDC7F}1.0011 \\\\ \\hline\n900 & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & 
\\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0038} & \\cellcolor[HTML]{FFDC7F}1.0035 & \\cellcolor[HTML]{FFDC7F}1.0013 & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFDC7F}1.0011 \\\\ \\hline\n1000 & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0038} & \\cellcolor[HTML]{FFDC7F}1.0035 & \\cellcolor[HTML]{FFDC7F}1.0013 & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFE398}2.0009 \\\\ \\hline\n1100 & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0038} & \\cellcolor[HTML]{FFDC7F}1.0035 & \\cellcolor[HTML]{FFDC7F}1.0014 & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFEAB2}3.0009 \\\\ \\hline\n1400 & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0038} & \\cellcolor[HTML]{FFDC7F}1.0035 & \\cellcolor[HTML]{FFEAB2}2.9963 & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & 
\\cellcolor[HTML]{FFF8E5}5.0005 \\\\ \\hline\n1800 & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0038} & \\cellcolor[HTML]{FFDC7F}1.0035 & \\cellcolor[HTML]{FFDC7F}1.0019 & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFFEFE}6.0004 \\\\ \\hline\n2000 & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0038} & \\cellcolor[HTML]{FFDC7F}1.0035 & \\cellcolor[HTML]{FFDC7F}1.0018 & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFDC7F}1.0011 & \\cellcolor[HTML]{FFFEFE}6.0004 \\\\ \\hline\n2100 & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0038} & \\cellcolor[HTML]{FFDC7F}\\textbf{1.0035} & \\cellcolor[HTML]{FFDC7F}1.0017 & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFDC7F}1.0011 & \\cellcolor[HTML]{FFFEFE}6.0004 \\\\ \\hline\n2200 & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0040} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0041} & \\cellcolor[HTML]{FFD666}\\textbf{0.0038} & \\cellcolor[HTML]{FFDC7F}\\textbf{1.0035} & \\cellcolor[HTML]{FFDC7F}1.0017 & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & 
\\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFDC7F}1.0011 & \\cellcolor[HTML]{FFFEFE}6.0004 \\\\ \\hline\n\\end{tabular}\n\\label{table:wave_1}\n\\end{table}\n\n\n\\setlength{\\arrayrulewidth}{0.65mm}\n\\setlength{\\tabcolsep}{4pt}\n\\begin{table}[H]\n\\captionof{table}{Squared Frobenius norm of the difference between actual $\\mathbf{D}$ and the one obtained from Wave-informed matrix factorization with noise of variance 0.1 and data of the form \\eqref{eqn:damped_wave_time}}\n\\begin{tabular}{@{}|r|r|r|r|r|r|r|r|r|r|r|r|r|@{}}\n\\hline\n\\multicolumn{1}{|c|}{} & \\multicolumn{12}{|c|}{Regularization $\\left(\\gamma\\right)$} \\\\\n\\hline\n\\multicolumn{1}{|l|}{$\\lambda$} & \\multicolumn{1}{l|}{$10^1$} & \\multicolumn{1}{l|}{$10^{2}$} & \\multicolumn{1}{l|}{$10^3$} & \\multicolumn{1}{l|}{$10^4$} & \\multicolumn{1}{l|}{$10^5$} & \\multicolumn{1}{l|}{$10^6$} & \\multicolumn{1}{l|}{$10^7$} & \\multicolumn{1}{l|}{$10^8$} & \\multicolumn{1}{l|}{$10^9$} & \\multicolumn{1}{l|}{$10^{10}$} & \\multicolumn{1}{l|}{$10^{11}$} & \\multicolumn{1}{l|}{$10^{12}$} \\\\ \\hline\n200 & \\cellcolor[HTML]{FFD76C}0.2846 & \\cellcolor[HTML]{FFD76C}0.2845 & \\cellcolor[HTML]{FFD76C}0.2829 & \\cellcolor[HTML]{FFD76B}0.2655 & \\cellcolor[HTML]{FFD669}0.1606 & \\cellcolor[HTML]{FFD667}\\textbf{0.0521} & \\cellcolor[HTML]{FFD666}\\textbf{0.0165} & \\cellcolor[HTML]{FFD666}\\textbf{0.0018} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFDB7B}1.0011 & \\cellcolor[HTML]{FFE7A7}3.0006 \\\\ \\hline\n300 & \\cellcolor[HTML]{FFD76C}0.2846 & \\cellcolor[HTML]{FFD76C}0.2845 & \\cellcolor[HTML]{FFD76C}0.2829 & \\cellcolor[HTML]{FFD76B}0.2655 & \\cellcolor[HTML]{FFD669}0.1606 & \\cellcolor[HTML]{FFD667}\\textbf{0.0521} & \\cellcolor[HTML]{FFD666}\\textbf{0.0165} & \\cellcolor[HTML]{FFD666}\\textbf{0.0018} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & 
\\cellcolor[HTML]{FFDB7B}1.0011 & \\cellcolor[HTML]{FFEDBD}4.0005 \\\\ \\hline\n400 & \\cellcolor[HTML]{FFD76C}0.2846 & \\cellcolor[HTML]{FFD76C}0.2845 & \\cellcolor[HTML]{FFD76C}0.2829 & \\cellcolor[HTML]{FFD76B}0.2655 & \\cellcolor[HTML]{FFD669}0.1606 & \\cellcolor[HTML]{FFD667}\\textbf{0.0521} & \\cellcolor[HTML]{FFD666}\\textbf{0.0165} & \\cellcolor[HTML]{FFD666}\\textbf{0.0018} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFDB7B}1.0011 & \\cellcolor[HTML]{FFF3D3}5.0005 \\\\ \\hline\n500 & \\cellcolor[HTML]{FFD76C}0.2846 & \\cellcolor[HTML]{FFD76C}0.2845 & \\cellcolor[HTML]{FFD76C}0.2829 & \\cellcolor[HTML]{FFD76B}0.2655 & \\cellcolor[HTML]{FFD669}0.1606 & \\cellcolor[HTML]{FFD667}\\textbf{0.0521} & \\cellcolor[HTML]{FFD666}\\textbf{0.0165} & \\cellcolor[HTML]{FFD666}\\textbf{0.0018} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFDB7B}1.0011 & \\cellcolor[HTML]{FFF9E9}6.0004 \\\\ \\hline\n600 & \\cellcolor[HTML]{FFD76C}0.2846 & \\cellcolor[HTML]{FFD76C}0.2845 & \\cellcolor[HTML]{FFD76C}0.2829 & \\cellcolor[HTML]{FFD76B}0.2655 & \\cellcolor[HTML]{FFD669}0.1606 & \\cellcolor[HTML]{FFD667}\\textbf{0.0521} & \\cellcolor[HTML]{FFD666}\\textbf{0.0165} & \\cellcolor[HTML]{FFD666}\\textbf{0.0018} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFE7A7}3.0006 & \\cellcolor[HTML]{FFF9E9}6.0004 \\\\ \\hline\n700 & \\cellcolor[HTML]{FFD76C}0.2846 & \\cellcolor[HTML]{FFD76C}0.2845 & \\cellcolor[HTML]{FFD76C}0.2829 & \\cellcolor[HTML]{FFD76B}0.2655 & \\cellcolor[HTML]{FFD669}0.1606 & \\cellcolor[HTML]{FFD667}\\textbf{0.0521} & \\cellcolor[HTML]{FFD666}\\textbf{0.0165} & \\cellcolor[HTML]{FFD666}\\textbf{0.0018} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFE7A7}3.0006 & \\cellcolor[HTML]{FFF9E9}6.0004 \\\\ \\hline\n800 & 
\\cellcolor[HTML]{FFD76C}0.2846 & \\cellcolor[HTML]{FFD76C}0.2845 & \\cellcolor[HTML]{FFD76C}0.2829 & \\cellcolor[HTML]{FFD76B}0.2655 & \\cellcolor[HTML]{FFD669}0.1606 & \\cellcolor[HTML]{FFD667}\\textbf{0.0521} & \\cellcolor[HTML]{FFD666}\\textbf{0.0165} & \\cellcolor[HTML]{FFD666}\\textbf{0.0018} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFDB7B}1.0011 & \\cellcolor[HTML]{FFEDBD}4.0005 & \\cellcolor[HTML]{FFF9E9}6.0004 \\\\ \\hline\n900 & \\cellcolor[HTML]{FFD76C}0.2846 & \\cellcolor[HTML]{FFD76C}0.2845 & \\cellcolor[HTML]{FFD76C}0.2829 & \\cellcolor[HTML]{FFD76B}0.2655 & \\cellcolor[HTML]{FFD669}0.1606 & \\cellcolor[HTML]{FFD667}\\textbf{0.0521} & \\cellcolor[HTML]{FFD666}\\textbf{0.0165} & \\cellcolor[HTML]{FFD666}\\textbf{0.0018} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFDB7B}1.0011 & \\cellcolor[HTML]{FFEDBD}4.0005 & \\cellcolor[HTML]{FFF9E9}6.0004 \\\\ \\hline\n1000 & \\cellcolor[HTML]{FFD76C}0.2846 & \\cellcolor[HTML]{FFD76C}0.2845 & \\cellcolor[HTML]{FFD76C}0.2829 & \\cellcolor[HTML]{FFD76B}0.2655 & \\cellcolor[HTML]{FFD669}0.1606 & \\cellcolor[HTML]{FFD667}\\textbf{0.0521} & \\cellcolor[HTML]{FFD666}\\textbf{0.0165} & \\cellcolor[HTML]{FFD666}\\textbf{0.0018} & \\cellcolor[HTML]{FFD666}\\textbf{0.0013} & \\cellcolor[HTML]{FFDB7B}1.0011 & \\cellcolor[HTML]{FFEDBD}4.0005 & \\cellcolor[HTML]{FFF9E9}6.0004 \\\\ \\hline\n1100 & \\cellcolor[HTML]{FFDD82}1.2836 & \\cellcolor[HTML]{FFDD82}1.2835 & \\cellcolor[HTML]{FFDD81}1.2819 & \\cellcolor[HTML]{FFDD81}1.2644 & \\cellcolor[HTML]{FFDC7F}1.1596 & \\cellcolor[HTML]{FFDC7C}1.0513 & \\cellcolor[HTML]{FFDB7C}1.0162 & \\cellcolor[HTML]{FFDB7B}1.0016 & \\cellcolor[HTML]{FFDB7B}1.0011 & \\cellcolor[HTML]{FFDB7B}1.0011 & \\cellcolor[HTML]{FFEDBD}4.0005 & \\cellcolor[HTML]{FFF9E9}6.0004 \\\\ \\hline\n1400 & \\cellcolor[HTML]{FFE397}2.2794 & \\cellcolor[HTML]{FFE397}2.2793 & \\cellcolor[HTML]{FFE397}2.2777 & \\cellcolor[HTML]{FFE397}2.2602 & \\cellcolor[HTML]{FFE295}2.1554 & 
\\cellcolor[HTML]{FFE192}2.0478 & \\cellcolor[HTML]{FFE192}2.0152 & \\cellcolor[HTML]{FFE191}2.0013 & \\cellcolor[HTML]{FFE191}2.0008 & \\cellcolor[HTML]{FFEDBD}4.0003 & \\cellcolor[HTML]{FFFEFE}7.0002 & \\cellcolor[HTML]{FFFEFE}7.0002 \\\\ \\hline\n1800 & \\cellcolor[HTML]{FFEDBE}4.0460 & \\cellcolor[HTML]{FFEDBE}4.0460 & \\cellcolor[HTML]{FFEDBE}4.0460 & \\cellcolor[HTML]{FFEDBE}4.0459 & \\cellcolor[HTML]{FFEDBE}4.0452 & \\cellcolor[HTML]{FFEDBE}4.0387 & \\cellcolor[HTML]{FFEDBD}4.0127 & \\cellcolor[HTML]{FFEDBD}4.0008 & \\cellcolor[HTML]{FFEDBD}4.0003 & \\cellcolor[HTML]{FFEDBD}4.0003 & \\cellcolor[HTML]{FFFEFE}7.0002 & \\cellcolor[HTML]{FFFEFE}7.0002 \\\\ \\hline\n2000 & \\cellcolor[HTML]{FFEDBE}4.0460 & \\cellcolor[HTML]{FFEDBE}4.0460 & \\cellcolor[HTML]{FFEDBE}4.0460 & \\cellcolor[HTML]{FFEDBE}4.0459 & \\cellcolor[HTML]{FFEDBE}4.0452 & \\cellcolor[HTML]{FFEDBE}4.0387 & \\cellcolor[HTML]{FFEDBD}4.0127 & \\cellcolor[HTML]{FFEDBD}4.0008 & \\cellcolor[HTML]{FFEDBD}4.0003 & \\cellcolor[HTML]{FFF3D3}5.0002 & \\cellcolor[HTML]{FFFEFE}7.0002 & \\cellcolor[HTML]{FFFEFE}7.0002 \\\\ \\hline\n2100 & \\cellcolor[HTML]{FFEDBE}4.0460 & \\cellcolor[HTML]{FFEDBE}4.0460 & \\cellcolor[HTML]{FFEDBE}4.0460 & \\cellcolor[HTML]{FFEDBE}4.0459 & \\cellcolor[HTML]{FFEDBE}4.0452 & \\cellcolor[HTML]{FFEDBE}4.0387 & \\cellcolor[HTML]{FFEDBD}4.0127 & \\cellcolor[HTML]{FFEDBD}4.0008 & \\cellcolor[HTML]{FFEDBD}4.0003 & \\cellcolor[HTML]{FFF3D3}5.0002 & \\cellcolor[HTML]{FFFEFE}7.0002 & \\cellcolor[HTML]{FFFEFE}7.0002 \\\\ \\hline\n2200 & \\cellcolor[HTML]{FFEDBE}4.0460 & \\cellcolor[HTML]{FFEDBE}4.0460 & \\cellcolor[HTML]{FFEDBE}4.0460 & \\cellcolor[HTML]{FFEDBE}4.0459 & \\cellcolor[HTML]{FFEDBE}4.0452 & \\cellcolor[HTML]{FFEDBE}4.0387 & \\cellcolor[HTML]{FFEDBD}4.0127 & \\cellcolor[HTML]{FFEDBD}4.0008 & \\cellcolor[HTML]{FFEDBD}4.0003 & \\cellcolor[HTML]{FFF3D3}5.0002 & \\cellcolor[HTML]{FFFEFE}7.0002 & \\cellcolor[HTML]{FFFEFE}7.0002 \\\\ 
\\hline\n\\end{tabular}\n\\label{table:wave_2}\n\\end{table}\n\n\n\\setlength{\\arrayrulewidth}{0.65mm}\n\\setlength{\\tabcolsep}{4pt}\n\n\\begin{table}[H]\n\\captionof{table}{Squared Frobenius norm of the difference between actual $\\mathbf{D}$ and the one obtained from Wave-informed matrix factorization with noise of variance 10 and data of the form \\eqref{eqn:wave_equ}}\n\\begin{tabular}{@{}|r|r|r|r|r|r|r|r|r|r|r|r|r|@{}}\n\\hline\n\\multicolumn{1}{|c|}{} & \\multicolumn{12}{|c|}{Regularization $\\left(\\gamma\\right)$} \\\\\n\\hline\n\\multicolumn{1}{|l|}{$\\lambda$} & \\multicolumn{1}{l|}{$10^1$} & \\multicolumn{1}{l|}{$10^{2}$} & \\multicolumn{1}{l|}{$10^3$} & \\multicolumn{1}{l|}{$10^4$} & \\multicolumn{1}{l|}{$10^5$} & \\multicolumn{1}{l|}{$10^6$} & \\multicolumn{1}{l|}{$10^7$} & \\multicolumn{1}{l|}{$10^8$} & \\multicolumn{1}{l|}{$10^9$} & \\multicolumn{1}{l|}{$10^{10}$} & \\multicolumn{1}{l|}{$10^{11}$} & \\multicolumn{1}{l|}{$10^{12}$} \\\\ \\hline\n200 & \\cellcolor[HTML]{FFFEFE}20.220 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.216 & \\cellcolor[HTML]{FFFEFE}20.202 & \\cellcolor[HTML]{FFFEFE}20.141 & \\cellcolor[HTML]{FFFEFD}20.063 & \\cellcolor[HTML]{FFFEFD}20.022 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFE6A2}7.991 \\\\ \\hline\n300 & \\cellcolor[HTML]{FFFEFE}20.220 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.216 & \\cellcolor[HTML]{FFFEFE}20.202 & \\cellcolor[HTML]{FFFEFE}20.141 & \\cellcolor[HTML]{FFFEFD}20.063 & \\cellcolor[HTML]{FFFEFD}20.022 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFE49A}6.991 \\\\ \\hline\n400 & \\cellcolor[HTML]{FFFEFE}20.220 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.216 & \\cellcolor[HTML]{FFFEFE}20.202 
& \\cellcolor[HTML]{FFFEFE}20.141 & \\cellcolor[HTML]{FFFEFD}20.063 & \\cellcolor[HTML]{FFFEFD}20.022 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFE49A}6.991 \\\\ \\hline\n500 & \\cellcolor[HTML]{FFFEFE}20.220 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.216 & \\cellcolor[HTML]{FFFEFE}20.202 & \\cellcolor[HTML]{FFFEFE}20.141 & \\cellcolor[HTML]{FFFEFD}20.063 & \\cellcolor[HTML]{FFFEFD}20.022 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFE49A}6.991 \\\\ \\hline\n600 & \\cellcolor[HTML]{FFFEFE}20.220 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.216 & \\cellcolor[HTML]{FFFEFE}20.202 & \\cellcolor[HTML]{FFFEFE}20.141 & \\cellcolor[HTML]{FFFEFD}20.063 & \\cellcolor[HTML]{FFFEFD}20.022 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFEAB1}10.001 & \\cellcolor[HTML]{FFE49A}6.991 \\\\ \\hline\n700 & \\cellcolor[HTML]{FFFEFE}20.220 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.216 & \\cellcolor[HTML]{FFFEFE}20.202 & \\cellcolor[HTML]{FFFEFE}20.141 & \\cellcolor[HTML]{FFFEFD}20.063 & \\cellcolor[HTML]{FFFEFD}20.022 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFE293}6.001 & \\cellcolor[HTML]{FFE49A}6.991 \\\\ \\hline\n800 & \\cellcolor[HTML]{FFFEFE}20.220 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.216 & \\cellcolor[HTML]{FFFEFE}20.202 & \\cellcolor[HTML]{FFFEFE}20.141 & \\cellcolor[HTML]{FFFEFD}20.063 & \\cellcolor[HTML]{FFFEFD}20.022 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFE293}6.001 & \\cellcolor[HTML]{FFE49A}6.991 \\\\ \\hline\n900 & \\cellcolor[HTML]{FFFEFE}20.220 & 
\\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.216 & \\cellcolor[HTML]{FFFEFE}20.202 & \\cellcolor[HTML]{FFFEFE}20.141 & \\cellcolor[HTML]{FFFEFD}20.063 & \\cellcolor[HTML]{FFFEFD}20.022 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFE293}6.001 & \\cellcolor[HTML]{FFE49A}6.991 \\\\ \\hline\n1000 & \\cellcolor[HTML]{FFFEFE}20.220 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.216 & \\cellcolor[HTML]{FFFEFE}20.202 & \\cellcolor[HTML]{FFFEFE}20.141 & \\cellcolor[HTML]{FFFEFD}20.063 & \\cellcolor[HTML]{FFFEFD}20.022 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFE293}6.001 & \\cellcolor[HTML]{FFE49A}6.991 \\\\ \\hline\n1100 & \\cellcolor[HTML]{FFFEFE}20.220 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.216 & \\cellcolor[HTML]{FFFEFE}20.202 & \\cellcolor[HTML]{FFFEFE}20.141 & \\cellcolor[HTML]{FFFEFD}20.063 & \\cellcolor[HTML]{FFFEFD}20.022 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFE293}6.001 & \\cellcolor[HTML]{FFDD83}3.991 \\\\ \\hline\n1400 & \\cellcolor[HTML]{FFFEFE}20.220 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.216 & \\cellcolor[HTML]{FFFEFE}20.202 & \\cellcolor[HTML]{FFFEFE}20.141 & \\cellcolor[HTML]{FFFEFD}20.063 & \\cellcolor[HTML]{FFFEFD}20.022 & \\cellcolor[HTML]{FFFEFD}20.001 & \\cellcolor[HTML]{FFE8A9}9.001 & \\cellcolor[HTML]{FFE293}6.001 & \\cellcolor[HTML]{FFE292}5.978 \\\\ \\hline\n1800 & \\cellcolor[HTML]{FFFEFE}20.220 & \\cellcolor[HTML]{FFFEFE}20.218 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFFEFE}20.216 & \\cellcolor[HTML]{FFFEFE}20.202 & \\cellcolor[HTML]{FFFEFE}20.141 & \\cellcolor[HTML]{FFFEFD}20.063 & \\cellcolor[HTML]{FFFEFD}20.022 & \\cellcolor[HTML]{FFFEFD}20.001 & 
\\cellcolor[HTML]{FFE49A}7.001 & \\cellcolor[HTML]{FFE293}6.001 & \\cellcolor[HTML]{FFE49A}6.978 \\\\ \\hline\n2000 & \\cellcolor[HTML]{FFFEFE}20.220 & \\cellcolor[HTML]{FFFEFE}20.218 & \\cellcolor[HTML]{FFFEFE}20.219 & \\cellcolor[HTML]{FFEAB3}10.216 & \\cellcolor[HTML]{FFDC7D}3.202 & \\cellcolor[HTML]{FFD666}0.141 & \\cellcolor[HTML]{FFD86D}1.063 & \\cellcolor[HTML]{FFDE84}4.021 & \\cellcolor[HTML]{FFDE83}4.001 & \\cellcolor[HTML]{FFDE83}4.001 & \\cellcolor[HTML]{FFDB7C}3.001 & \\cellcolor[HTML]{FFE293}6.000 \\\\ \\hline\n2100 & \\cellcolor[HTML]{FFFEFE}20.220 & \\cellcolor[HTML]{FFFEFE}20.218 & \\cellcolor[HTML]{FFDE85}4.219 & \\cellcolor[HTML]{FFD667}0.216 & \\cellcolor[HTML]{FFD667}0.202 & \\cellcolor[HTML]{FFD666}0.141 & \\cellcolor[HTML]{FFD666}\\textbf{0.063} & \\cellcolor[HTML]{FFDC7C}3.021 & \\cellcolor[HTML]{FFDB7C}3.001 & \\cellcolor[HTML]{FFDB7C}3.001 & \\cellcolor[HTML]{FFD974}2.001 & \\cellcolor[HTML]{FFE293}6.000 \\\\ \\hline\n2200 & \\cellcolor[HTML]{FFFEFE}20.220 & \\cellcolor[HTML]{FFDA76}2.219 & \\cellcolor[HTML]{FFD667}0.219 & \\cellcolor[HTML]{FFD667}0.216 & \\cellcolor[HTML]{FFD667}0.202 & \\cellcolor[HTML]{FFD666}0.141 & \\cellcolor[HTML]{FFD666}\\textbf{0.063} & \\cellcolor[HTML]{FFDC7C}3.021 & \\cellcolor[HTML]{FFDB7C}3.001 & \\cellcolor[HTML]{FFDB7C}3.001 & \\cellcolor[HTML]{FFD974}2.001 & \\cellcolor[HTML]{FFE293}6.000 \\\\ \\hline\n2300 & \\cellcolor[HTML]{FFE49C}7.220 & \\cellcolor[HTML]{FFD667}0.219 & \\cellcolor[HTML]{FFD667}0.219 & \\cellcolor[HTML]{FFD667}0.216 & \\cellcolor[HTML]{FFD667}0.202 & \\cellcolor[HTML]{FFD666}0.141 & \\cellcolor[HTML]{FFD666}\\textbf{0.063} & \\cellcolor[HTML]{FFDC7C}3.021 & \\cellcolor[HTML]{FFDB7C}3.001 & \\cellcolor[HTML]{FFDB7C}3.001 & \\cellcolor[HTML]{FFD974}2.001 & \\cellcolor[HTML]{FFE293}6.000 \\\\ \\hline\n2500 & \\cellcolor[HTML]{FFD667}0.220 & \\cellcolor[HTML]{FFD667}0.219 & \\cellcolor[HTML]{FFD667}0.219 & \\cellcolor[HTML]{FFD667}0.216 & \\cellcolor[HTML]{FFD667}0.202 & 
\\cellcolor[HTML]{FFD666}0.141 & \\cellcolor[HTML]{FFD666}\\textbf{0.063} & \\cellcolor[HTML]{FFDC7C}3.021 & \\cellcolor[HTML]{FFDB7C}3.001 & \\cellcolor[HTML]{FFDB7C}3.001 & \\cellcolor[HTML]{FFD974}2.001 & \\cellcolor[HTML]{FFE293}6.000 \\\\ \\hline\n2700 & \\cellcolor[HTML]{FFD667}0.220 & \\cellcolor[HTML]{FFD667}0.219 & \\cellcolor[HTML]{FFD667}0.219 & \\cellcolor[HTML]{FFD667}0.216 & \\cellcolor[HTML]{FFD667}0.202 & \\cellcolor[HTML]{FFD666}0.141 & \\cellcolor[HTML]{FFD666}\\textbf{0.063} & \\cellcolor[HTML]{FFDC7C}3.021 & \\cellcolor[HTML]{FFDB7C}3.001 & \\cellcolor[HTML]{FFDB7C}3.001 & \\cellcolor[HTML]{FFD974}2.001 & \\cellcolor[HTML]{FFE293}6.000 \\\\ \\hline\n3000 & \\cellcolor[HTML]{FFD667}0.220 & \\cellcolor[HTML]{FFD667}0.219 & \\cellcolor[HTML]{FFD667}0.219 & \\cellcolor[HTML]{FFD667}0.216 & \\cellcolor[HTML]{FFD667}0.202 & \\cellcolor[HTML]{FFD666}0.141 & \\cellcolor[HTML]{FFD666}\\textbf{0.063} & \\cellcolor[HTML]{FFDC7C}3.021 & \\cellcolor[HTML]{FFDB7C}3.001 & \\cellcolor[HTML]{FFDB7C}3.001 & \\cellcolor[HTML]{FFD974}2.001 & \\cellcolor[HTML]{FFE293}6.000 \\\\ \\hline\n3200 & \\cellcolor[HTML]{FFD667}0.220 & \\cellcolor[HTML]{FFD667}0.219 & \\cellcolor[HTML]{FFD667}0.219 & \\cellcolor[HTML]{FFD667}0.216 & \\cellcolor[HTML]{FFD667}0.202 & \\cellcolor[HTML]{FFD666}0.141 & \\cellcolor[HTML]{FFD666}\\textbf{0.063} & \\cellcolor[HTML]{FFDC7C}3.021 & \\cellcolor[HTML]{FFDB7C}3.001 & \\cellcolor[HTML]{FFDB7C}3.001 & \\cellcolor[HTML]{FFD974}2.001 & \\cellcolor[HTML]{FFE293}6.000 \\\\ \\hline\n\\end{tabular}\n\\label{table:wave_3}\n\\end{table}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.5\\textwidth]{figs\/dict_atoms_low_rank.png}\n \\caption{ \\label{fig:accommodations_a}Example columns of $\\mathbf{D}$ obtained from Low-rank matrix factorization $\\lambda = 2952$. 
}\n \\label{fig:my_label}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.5\\textwidth]{figs\/dict_atoms_gamma_50_lambda_0_6.png}\n \\caption{\\label{fig:accommodations_b}Example columns of $\\mathbf{D}$ obtained from our Wave-informed decomposition with $\\gamma = 10^7$, $\\lambda = 2200$.}\n\\end{figure}\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[trim=15 10 10 15, clip, width=1.1\\linewidth]{figs\/dampled_sines_all.png}\n \\caption{Recovered modes (rows) of spatially damped sinusoids ({\\color{A}{\\textbf{A}}}), ({\\color{B}{\\textbf{B}}}), ({\\color{C}{\\textbf{C}}}), ({\\color{D}{\\textbf{D}}}), ({\\color{E}{\\textbf{E}}}), ({\\color{F}{\\textbf{F}}}), ({\\color{G}{\\textbf{G}}}) when $R=6$.} \\label{fig:damped_sines_all}\n \\vspace{-5mm}\n\\end{figure}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:intro}\nThe different disciplines in physics often evolve in isolation from\neach other, developing their own formalisms and terminologies. This\ndifference in concepts, language, and notations hampers the mutual\nunderstanding and deepens the divisions. It has the\neffect of obscuring the circumstance that the different disciplines\nmay have much more in common than meets the eye, and that much of what\nseems to be different is of cultural and not of intrinsic origin. \n\nIn the present paper we highlight the non-trivial circumstance that\nthe mathematical symmetries and structures that govern the Mueller\nmatrices in polarization theory are the same as those of the electromagnetic tensor in Maxwell\ntheory and the Lorentz transformations in relativistic physics, all of\nwhich obey the same Lorentz algebra. Absorption, electric\nfields, and Lorentz boosts are governed by the symmetric part of the\nrespective transformation matrix, while \ndispersion, magnetic fields, and spatial rotations are represented by\nthe antisymmetric part. 
These two aspects can be\nunified in terms of complex-valued matrices, where\nthe symmetric and antisymmetric aspects represent the real and\nimaginary parts, respectively. \n\nWe can extend this comparison by introducing a Minkowski metric for a\n4D space spanned by the four Stokes parameters $I,Q,U,V$. In this\ndescription Stokes vectors that represent 100\\,\\%\\ polarization\nare null vectors, while partial depolarization causes the Stokes\nvector to lie inside the null cones like the energy-momentum vectors\nof massive particles in ordinary spacetime. This comparison points to an analogy\nbetween depolarization (which can be seen as a symmetry breaking) and\nthe appearance of mass. Another interesting property is that, in\ncontrast to electromagnetism and Lorentz transformations, Stokes\nvectors and Mueller matrices have the rotational symmetry of spin-2\nobjects because they have substructure: they\nare formed from bilinear products of spin-1 objects. Here we try to\nexpose these potentially profound connections and discuss their\nmeanings. \n\nWe start in Sect.~\\ref{sec:emanalog} by showing the\nrelations between the Stokes formalism in polarization physics and the\ncovariant formulation of the Maxwell theory of electromagnetism. After\nclarifying the symmetries it is shown how the introduction of\ncomplex-valued matrices leads to an elegant, unified formulation. In\nSect.~\\ref{sec:lorentz} we show how the Lorentz transformations with\nits boosts and spatial rotations have the same structure. The\nintroduction of a Minkowski metric for polarization space in\nSect.~\\ref{sec:depolmass} reveals a\nnull cone structure for Stokes vectors that represent fully polarized\nlight. It further brings out an analogy between depolarization caused by the\nincoherent superposition of fields and the appearance of what may be\ninterpreted as a mass term. 
In Sect.~\\ref{sec:spin2} we highlight the\ncircumstance that the Stokes vectors and Mueller matrices in\npolarization physics have the symmetry of spin-2 objects, because they\nhave substructure, being formed from bilinear products of vector\nobjects. Section \\ref{sec:conc} summarizes the conclusions. \n\n\n\\section{The electro-magnetic analogy}\\label{sec:emanalog}\nLet $\\vec{S}_\\nu$ be the 4D Stokes vector for frequency\n$\\nu$. Explicitly, in terms of its transposed form (with superscript\n$T$) and omitting index $\\nu$ for clarity of notation, \n$\\vec{S}^T\\!\\equiv (S_0,\\,S_1,\\,S_2,\\,S_3) \\equiv (I,\\,Q,\\,U,\\,V)$. The equation for\nthe transfer of polarized radiation can then be written as \n\\begin{equation}\\label{eq:transeq}\n{{\\rm d}\\vec{S}_\\nu\\over{\\rm d}\\tau_c}=(\\vec{\\eta}+\\vec{I})\\,\\vec{S}_\\nu\n\\,-\\vec{j}_\\nu\/\\kappa_c\\,,\n\\end{equation}\nwhere $\\tau_c$ is the continuum optical depth, $\\vec{I}$ is the\n$4\\times 4$ identity matrix (representing continuum absorption),\n$\\kappa_c$ is the continuum absorption coefficient, $\\vec{j}_\\nu$ is\nthe emission 4-vector, while the Mueller absorption matrix\n$\\vec{\\eta}$ that represents the polarized processes due to the\natomic line transitions is \n\\begin{equation}\\label{eq:etamat} \n\\vec{\\eta}=\\left(\\matrix{\\,\\,\\eta_I &\\phantom{-}\\eta_Q &\\phantom{-}\\eta_U&\n\\phantom{-}\\eta_V\\,\\,\\cr \\,\\,\\eta_Q&\\phantom{-}\\eta_I&\n\\phantom{-}\\rho_V&-\\rho_U\\,\\,\\cr \\,\\,\\eta_U&-\\rho_V&\\phantom{-}\\eta_I&\\phantom{-}\n\\rho_Q\\,\\,\\cr \\,\\eta_V&\\phantom{-}\\rho_U&-\\rho_Q&\n\\phantom{-}\\eta_I\\,\\, }\\right)\\,.\n\\end{equation}\n For a detailed account of Stokes vector polarization theory with its\nnotations and terminology we refer to the monographs of\n\\citet{stenflo-book94} and \\citet{stenflo-lanlan04}. 
\n\nWhile $\\eta_{I,Q,U,V}$ represents the absorption terms for the four\nStokes parameters $I,Q,U,V$, the differential phase shifts, generally\nreferred to as anomalous dispersion or\nmagnetooptical effects, are represented by the $\\rho_{Q,U,V}$\nterms. They are formed, respectively, from the imaginary and the real\npart of the complex refractive index that is induced when the atomic\nmedium interacts with the radiation field. \n\nWe notice that $\\vec{\\eta}$ can be expressed as the sum of two\nmatrices: a symmetric matrix that only contains the $\\eta$ terms, and\nan anti-symmetric matrix that only contains the $\\rho$\nterms. Antisymmetric matrices represent spatial rotations, as will be\nparticularly clear in Sect.~\\ref{sec:lorentz} when comparing with\nLorentz transformations. \n\nThese symmetries turn out to be identical to the symmetries that\ngovern both the Maxwell theory for electromagnetism and the Lorentz\ntransformation of the metric in relativity. Here we will compare these\nvarious fields of physics to cast light on the intriguing\nconnections. We begin with electromagnetism. \n\nThe analogy between the Stokes formalism and electromagnetism has\npreviously been pointed out by \\citet{stenflo-lanlan81} using a 3+1\n(space + time) formulation rather than the covariant formulation in\nMinkowski spacetime. It is however only with the covariant formulation that\nthe correspondences become strikingly transparent. 
\n\nThe Lorentz electromagnetic force law when written in covariant 4D\nform is \n\\begin{equation}\\label{eq:forcelaw} \n{{\\rm d}p^{\\,\\alpha}\\over\\!\\!{\\rm d}\\tau}={e\\over\n m}\\,F^{\\,\\alpha}_{\\phantom{\\alpha}\\beta}\\,\\,p^{\\,\\beta}\\,,\n\\end{equation}\nwhere $p^{\\,\\alpha}$ are the components of the contravariant energy-momentum\n4-vector, $e$ and $m$ are the electric charge and mass of the\nparticle, and $F^{\\,\\alpha}_{\\phantom{\\alpha}\\beta}$ are the\ncomponents of the electromagnetic tensor, which has the representation\nof a $4\\times 4$ matrix: \n\\begin{equation}\\label{eq:fabmat} \nF^{\\,\\alpha}_{\\phantom{\\alpha}\\beta}\\,=\\left(\\matrix{\\,\\,0&\\phantom{-}E_x&\\phantom{-}E_y&\n\\phantom{-}E_z\\,\\,\\cr \\,\\,E_x&\\phantom{-}0&\n\\phantom{-}B_z&-B_y\\,\\,\\cr \\,\\,E_y&-B_z&\\phantom{-}0&\\phantom{-}\nB_x\\,\\,\\cr \\,\\,E_z&\\phantom{-}B_y&-B_x&\n\\phantom{-}0\\,\\, }\\right)\\,.\n\\end{equation}\n A comparison between Eqs.~(\\ref{eq:etamat}) and (\\ref{eq:fabmat})\nimmediately reveals the correspondence between absorption $\\eta$ and the electric\n$E$ field on the one hand and between anomalous dispersion $\\rho$ and the magnetic $B$\nfield on the other hand: \n\\begin{eqnarray}\\label{eq:emconnect} \n&\\eta_{Q,U,V}\\,\\,\\longleftrightarrow \\,\\, E_{x,y,z}\\nonumber\\\\\n&\\,\\,\\rho_{Q,U,V}\\,\\,\\longleftrightarrow \\,\\, B_{x,y,z}\\,.\n\\end{eqnarray}\n\nLet us follow up this structural comparison in terms of a unified\ndescription, where absorption and dispersion are combined into a\ncomplex-valued absorption. $\\eta$ is proportional to the Voigt\nfunction $H(a,v_q)$, with $\\eta_0$ as the proportionality constant, $a$\nthe dimensionless damping parameter, and $v_q$ the dimensionless\nwavelength or frequency parameter. Index $q$, with $q=0,\\pm 1$,\nindicates the differential shift of the wavelength scale for atomic\ntransitions with magnetic quantum number $m_{\\rm lower}-m_{\\rm\n upper}=q$. 
Similarly $\\rho$ is proportional \nto $2F(a,v_q)$, where $F$ is the line dispersion function. \n\\begin{eqnarray}\\label{eq:etarho} \n&\\,\\,\\,\\eta_{I,Q,U,V}=\\,\\,\\eta_0 \\,H_{I,Q,U,V}\\,,\\nonumber\\\\\n&\\rho_{Q,U,V}=2\\eta_0 \\,F_{Q,U,V}\\,.\n\\end{eqnarray}\nIn the unified description the $H$ and $F$ functions are combined into\nthe complex-valued \n\\begin{equation}\\label{eq:hcal}\n{\\cal H}(a,v_q)\\equiv H(a,v_q)-2i\\,F(a,v_q)\\,, \n\\end{equation}\nwhich now represents the building blocks when forming the\ncorresponding quantities with indices $I,Q,U,V$ to refer to the\nrespective Stokes parameters. ${\\cal H}_{I,Q,U,V}$ can be combined\ninto the 4-vector \n\\begin{equation}\\label{eq:hcalvect} \n\\vec{{\\cal H}}\\,\\equiv\\left(\\matrix{\\,\\,{\\cal H}_I\\,\\,\\cr \\,\\,{\\cal H}_Q\\,\\,\\cr \\,\\,{\\cal H}_U\\,\\,\\cr \\,\\,{\\cal H}_V\\,\\, }\\right)\\,\\equiv\\,\\left(\\matrix{\\,\\,{\\cal H}_0\\,\\,\\cr \\,\\,{\\cal H}_1\\,\\,\\cr \\,\\,{\\cal H}_2\\,\\,\\cr \\,\\,{\\cal H}_3\\,\\, }\\right)\\,.\n\\end{equation}\n From the above follows how ${\\cal H}_k$ is related to and unifies the\ncorresponding absorption and dispersion parameters $\\eta_k$ and $\\rho_k$: \n\\begin{equation}\\label{eq:completa}\n\\eta_k -i\\,\\rho_k =\\,\\eta_0\\,{\\cal H}_k\\,. \n\\end{equation}\n\nLet us next define the three symmetric matrices $\\vec{K}^{(k)}$ and three\nantisymmetric matrices $\\vec{J}^{(k)}$ through \n\\begin{eqnarray}\\label{eq:ikj} \n&\\vec{K}^{(k)}_{0j}&\\!\\!\\!\\equiv\\,\\, \\vec{K}^{(k)}_{j\\,0}\\,\\,\\equiv 1\\,,\\nonumber\\\\\n&\\vec{J}^{(k)}_{ij}&\\!\\!\\!\\equiv\\, -\\vec{J}^{(k)}_{j\\,i}\\equiv-\\,\\varepsilon_{ij\\,k}\\,,\n\\end{eqnarray}\nwhere $\\varepsilon_{ij\\,k}$ is the Levi-Civita antisymmetric symbol. \nWe further define the complex-valued matrix \n\\begin{equation}\\label{eq:xmatdef}\n\\vec{T}^{(k)} \\equiv \\vec{K}^{(k)} -i\\,\\vec{J}^{(k)} \\,. 
\n\\end{equation}\nThen the Mueller matrix from Eq.~(\\ref{eq:etamat}) becomes \n\\begin{equation}\\label{eq:etacompmat} \n\\vec{\\eta} -\\eta_I \\vec{I} =\\,\\eta_0\\,{\\rm Re}\\,(\\,{\\cal\n H}_k\\,\\vec{T}^{(k)} \\,)\\,. \n\\end{equation}\n\nLet us similarly define the complex electromagnetic vector \n\\begin{equation}\\label{eq:complemvec} \n\\vec{\\cal E} \\equiv \\vec{E}-i\\,\\vec{B}\\,, \n\\end{equation}\nwhich in quantum mechanics represents photons with positive\nhelicity. Then the electromagnetic tensor\n$F^{\\,\\alpha}_{\\phantom{\\alpha}\\beta}$ of Eq.~(\\ref{eq:fabmat}) can be\nwritten as \n\\begin{equation}\\label{eq:compemtensor} \n\\vec{F}\\,=\\,{\\rm Re}\\,(\\,{\\cal\n E}_k\\,\\vec{T}^{(k)} \\,)\\,. \n\\end{equation}\nComparison with Eq.~(\\ref{eq:etacompmat}) again brings out the\nstructural correspondence between the Mueller matrix and the\nelectromagnetic tensor, this time in a more concise and compact form. It also\nshows how the electric and magnetic fields are inseparably linked, as\nthe real and imaginary parts of the same complex vector. \n\nIt may be argued that the structural similarity between the Mueller\nand electromagnetic formalisms is not unexpected, since the underlying\nphysics that governs the Mueller matrices is the electromagnetic\ninteractions between matter and radiation. The atomic transitions are\ninduced by the oscillating electromagnetic force of the ambient\nradiation field when it interacts with the atomic electrons, and this\ninteraction is governed (in the classical description) by the force\nlaw of Eq.~(\\ref{eq:forcelaw}) with its electromagnetic tensor. This\nis however not the whole story, since there are also profound\ndifferences: As we will see in Sect.~\\ref{sec:spin2} Stokes vectors and Mueller\nmatrices behave like spin-2 objects, while the electromagnetic tensor\nis a spin-1 object. 
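The reconstruction of the electromagnetic tensor can be verified numerically. The sketch below (field values are placeholders) builds $\vec{K}^{(k)}$ and $\vec{J}^{(k)}$ from Eq.~(\ref{eq:ikj}), forms $\vec{T}^{(k)}$ as in Eq.~(\ref{eq:xmatdef}), and checks that ${\rm Re}\,({\cal E}_k\,\vec{T}^{(k)})$ reproduces the explicit matrix of Eq.~(\ref{eq:fabmat}):

```python
import numpy as np

# Levi-Civita symbol eps[i, j, k].
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

K = np.zeros((3, 4, 4))   # symmetric generators, K_{0j} = K_{j0} = 1
J = np.zeros((3, 4, 4))   # antisymmetric generators, J_{ij} = -eps_{ijk}
for k in range(3):
    K[k, 0, k + 1] = K[k, k + 1, 0] = 1.0
    J[k, 1:, 1:] = -eps[:, :, k]
T = K - 1j * J            # T^(k) of Eq. (xmatdef)

E = np.array([1.0, 2.0, 3.0])    # placeholder electric field components
B = np.array([0.5, -1.0, 2.5])   # placeholder magnetic field components
calE = E - 1j * B                # complex electromagnetic vector, Eq. (complemvec)

F = np.real(np.einsum('k,kab->ab', calE, T))

# Explicit matrix of Eq. (fabmat) for comparison.
F_expected = np.array([
    [0,     E[0],  E[1],  E[2]],
    [E[0],  0,     B[2], -B[1]],
    [E[1], -B[2],  0,     B[0]],
    [E[2],  B[1], -B[0],  0  ],
])
assert np.allclose(F, F_expected)
```

The real part of the contraction picks out $E_k\vec{K}^{(k)} - B_k\vec{J}^{(k)}$, which is exactly the tensor of Eq.~(\ref{eq:fabmat}).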
Another interesting aspect in the comparison\nbetween Stokes vectors and the energy-momentum 4-vector is that\ndepolarization of Stokes vectors acts as if the corresponding 4-vector\nhas acquired ``mass'', as will be shown in Sect.~\\ref{sec:depolmass}. Before we turn\nto these topics we will in the next section show the correspondence\nbetween the Mueller matrix and the Lorentz transformation matrix. \n\n\n\\section{Lorentz transformations and the Mueller absorption\n matrix}\\label{sec:lorentz} \n\nLet $\\vec{X}$ be the spacetime 4-vector: \n\\begin{equation}\\label{eq:xvect} \n\\vec{X}\\,\\equiv\\,\\left(\\matrix{\\,\\,ct\\,\\,\\cr \\,\\,x\\,\\,\\cr \\,\\,y\\,\\,\\cr \\,\\,z\\,\\, }\\right)\\,\\equiv\\,\\left(\\matrix{\\,\\,x_0\\,\\,\\cr \\,\\,x_1\\,\\,\\cr \\,\\,x_2\\,\\,\\cr \\,\\,x_3\\,\\, }\\right)\\,.\n\\end{equation}\n With the Lorentz transformation $\\vec{\\Lambda}$ we transfer to a new\nsystem $\\vec{X}^\\prime$: \n\\begin{equation}\\label{eq:xprimex} \n\\vec{X}^\\prime =\\vec{\\Lambda}\\,\\vec{X}\\,. \n\\end{equation}\n$\\vec{\\Lambda}$ represents rotations in Minkowski space, composed of\nthree spatial rotations $\\phi_k$ and three boosts $\\gamma_k$, which\nmay be regarded as imaginary rotations. Let us combine them into complex\nrotation parameters $\\alpha_k$ through \n\\begin{equation}\\label{eq:compalphagamphi} \n\\alpha_k\\,\\equiv \\,\\gamma_k + i\\,\\phi_k\\,. \n\\end{equation}\nThen the Lorentz transformation $\\vec{\\Lambda}$ can be written as \n\\begin{equation}\\label{eq:explorentz} \n\\vec{\\Lambda}\\equiv e^{\\vec{V}}\\,, \n\\end{equation}\nwhere \n\\begin{equation}\\label{eq:vlormat} \n\\vec{V}\\,=\\,{\\rm Re}\\,(\\,\\alpha_k\\,\\vec{T}^{(k)} \\,)\\,=\\gamma_k\\,\n\\vec{K}^{(k)}+\\,\\phi_k \\,\\vec{J}^{(k)}\\,. 
\n\\end{equation}\nExplicitly, \n\\begin{equation}\\label{eq:lorentzmi}\n\\vec{V}\\,= \\left(\\matrix{\\,\\,0&\\phantom{-}\\gamma_x&\\phantom{-}\\gamma_y&\n\\phantom{-}\\gamma_z\\,\\,\\cr \\,\\,\\gamma_x&\\phantom{-}0&\n-\\phi_z&\\phantom{-}\\phi_y\\,\\,\\cr \\,\\,\\gamma_y&\\phantom{-}\\phi_z&\\phantom{-}0&-\\phi_x\\,\\,\\cr\n\\,\\,\\gamma_z&-\\phi_y&\\phantom{-}\\phi_x& \n\\phantom{-}0\\,\\, }\\right)\\,. \n\\end{equation}\n Note that in quantum field theory a convention for the definition of\nthe $\\vec{K}$ and $\\vec{J}$ matrices that define the Lorentz algebra\nis used, which differs by the\nfactor of the imaginary unit $i$ from the convention of Eq.~(\\ref{eq:ikj}) used here, in\norder to make $\\vec{K}$ anti-hermitian and $\\vec{J}$ hermitian\n\\citep{stenflo-zee2010}. \n\nComparing with the Mueller matrix and the electromagnetic tensor we see the correspondence \n\\begin{eqnarray}\\label{eq:lorconnect} \n&\\eta_k\\,\\,\\longleftrightarrow \\,\\, \\,\\,\\,\\gamma_k\\nonumber\\,\\,\\,\\longleftrightarrow \\,\\, \\,\\,E_k\\\\\n&\\,\\,\\rho_k\\,\\,\\longleftrightarrow -\\, \\phi_k\\,\\,\\,\\longleftrightarrow \\,\\, \\,\\,B_k\\,.\n\\end{eqnarray}\nThe minus sign in front of $\\phi_k$ is only due to the convention\nadopted for defining the sense of rotations and is therefore\nirrelevant for the following discussion of the physical meaning of the\nstructural correspondence. \n\nRelations (\\ref{eq:lorconnect}) show how the Lorentz boosts\n$\\gamma_k$, which change the energy and momentum of the boosted object, relate to\nboth absorption $\\eta_k$ and electric field $E_k$, while the spatial\nrotations $\\phi_k$, which do not affect the energy but change the\nphase of the rotated object, relate to both dispersion or phase shift \neffects $\\rho_k$ and to magnetic fields $B_k$. 
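A numerical check that $e^{\vec{V}}$ in Eq.~(\ref{eq:explorentz}) is indeed a Lorentz transformation: with the real convention of Eq.~(\ref{eq:ikj}), $\vec{V}$ satisfies $\vec{V}^T\vec{g} + \vec{g}\vec{V} = 0$ for the Minkowski metric $\vec{g}$, so $\vec{\Lambda}$ preserves the metric. A sketch with placeholder boost and rotation parameters:

```python
import numpy as np
from scipy.linalg import expm

# Generators as in Eq. (ikj): K (symmetric, boosts) and J (antisymmetric, rotations).
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1
K = np.zeros((3, 4, 4))
J = np.zeros((3, 4, 4))
for k in range(3):
    K[k, 0, k + 1] = K[k, k + 1, 0] = 1.0
    J[k, 1:, 1:] = -eps[:, :, k]

gamma = np.array([0.3, 0.0, -0.2])   # placeholder boost parameters
phi   = np.array([0.1, 0.5, 0.0])    # placeholder rotation angles

# V = gamma_k K^(k) + phi_k J^(k), Eq. (vlormat)
V = np.einsum('k,kab->ab', gamma, K) + np.einsum('k,kab->ab', phi, J)
Lam = expm(V)                        # Lorentz transformation, Eq. (explorentz)

g = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric
assert np.allclose(V.T @ g + g @ V, 0)   # V is in the Lorentz algebra
assert np.allclose(Lam.T @ g @ Lam, g)   # Lambda preserves the metric
```

This makes concrete the statement that spatial rotations and boosts together exhaust the six-parameter Lorentz group.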
\n\n\n\\section{Analogy between depolarization and the emergence of\n mass}\\label{sec:depolmass}\nThe structural correspondence between the $4\\times 4$ Mueller\nabsorption matrix, the matrix representation of the covariant\nelectromagnetic tensor, and the Lorentz transformation matrix suggests\nthat there may be a deeper analogy or connection between the 4D Stokes vector\nspace and 4D spacetime. Let us therefore see what happens when we\nintroduce the Minkowski metric to the Stokes vector formalism. The\nusual notation for the Minkowski metric is $\\eta_{\\mu\\nu}$, but to\navoid confusion with the absorption matrix $\\vec{\\eta}$ that we have\nbeen referring to in the present paper, we will use the notation\n$\\vec{g}$ or $g_{\\mu\\nu}$ that is generally reserved for a general\nmetric, but here we implicitly assume that we are only dealing with \ninertial frames, in which $g_{\\mu\\nu}=\\eta_{\\mu\\nu}$. \n\nAssume that $\\vec{I}_\\nu =\\vec{S}_\\nu$ is the 4D Stokes vector, with\nits transpose being $\\vec{I}_\\nu^T\n=(I_\\nu,\\,Q_\\nu,\\,U_\\nu,\\,V_\\nu)$. The scalar product in Minkowski\nspace is then \n\\begin{equation}\\label{eq:itetai}\n\\vec{I}_\\nu^T\n\\vec{g}\\,\\vec{I}_\\nu\\,=\\,I_\\nu^2\\,-\\,(\\,Q_\\nu^2\\,+\\,U_\\nu^2\\,+\\,V_\\nu^2\\,)\\,, \n\\end{equation}\nwhich also represents the squared length of the Stokes vector in\nMinkowski space. \n\nWe know from polarization physics that the right-hand side of\nEq.~(\\ref{eq:itetai}) is always $\\geq 0$, and equals zero only when the\nlight beam is 100\\,\\%\\ (elliptically) polarized. Such fully polarized,\npure or coherent states are thus represented by null vectors, in\nexactly the same way as the energy-momentum 4-vector $\\vec{p}$ of a\nmassless particle is a null vector on the surface of the null\ncone. The energy-momentum vectors of massive particles live inside\nthe null cones.
Similarly the Stokes vectors live inside and not on\nthe surface of null cones only if the light is not fully but partially\npolarized. \n\nThis comparison raises the question of whether there is some deeper\nconnection between depolarization and the appearance of mass. In\npolarization physics all individual (coherent) wave packages are\n100\\,\\%\\ polarized, and any coherent superposition of such wave\npackages is also fully polarized. Partial polarization occurs\nexclusively as a result of the {\\it incoherent} superposition of\ndifferent, uncorrelated wave packages. In such cases it is customary to\nrepresent the intensity $I_\\nu$, which represents the energy or the\nnumber of photons carried by the beam, as consisting of two parts, one\nfraction $p_\\nu$ that is fully polarized, and one fraction with\nintensity $I_{\\nu,\\,u}$ that is unpolarized, with transposed Stokes\nvector $I_{\\nu,\\,u}\\,(1,\\,0,\\,0,\\,0)$: \n\\begin{equation}\\label{eq:unpoldecomp}\nI=p_\\nu\\,I \\,+\\,I_u \\,, \n\\end{equation}\nwhere we have omitted index $\\nu$ for simplicity except for $p_\\nu$ (to\ndistinguish it from the momentum vector $p$ below). This fractional\npolarization $p_\\nu$ is \n\\begin{equation}\\label{eq:polfrac} \np_\\nu={I-I_u\\over I}=\\,{(Q^2 +U^2 +V^2)^{1\/2}\\over I}\\,. \n\\end{equation}\n\nIn comparison, in particle physics, the scalar product for the 4D\nenergy-momentum vector $\\vec{p}$ is \n\\begin{equation}\\label{eq:ptgp} \n\\vec{p}^T\\vec{g}\\,\\vec{p}\\,=\\,m^2\\,c^2\\,, \n\\end{equation}\nfrom which the well-known relativistic energy--momentum relation \n\\begin{equation}\\label{eq:ptetap} \nE^2=p^2\\,c^2\\,+\\,m^2\\,c^4 \n\\end{equation}\nfollows.
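These statements are easy to verify numerically. A minimal sketch (with placeholder Stokes values) checks that a fully polarized beam is a null vector of the Minkowski scalar product of Eq.~(\ref{eq:itetai}), while a partially polarized beam acquires a positive, mass-like invariant, with $p_\nu$ computed as in Eq.~(\ref{eq:polfrac}):

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric

def minkowski_norm2(S):
    """I^2 - (Q^2 + U^2 + V^2), the scalar product of Eq. (itetai)."""
    return S @ g @ S

def frac_pol(S):
    """Fractional polarization p = sqrt(Q^2 + U^2 + V^2) / I, Eq. (polfrac)."""
    I, Q, U, V = S
    return np.sqrt(Q**2 + U**2 + V**2) / I

S_full = np.array([1.0, 0.6, 0.0, 0.8])  # 100% elliptically polarized: null vector
S_part = np.array([1.0, 0.3, 0.0, 0.4])  # partially polarized: "massive"

assert np.isclose(minkowski_norm2(S_full), 0.0)   # on the null cone
assert minkowski_norm2(S_part) > 0                # inside the null cone
assert np.isclose(frac_pol(S_part), 0.5)
```

The positive invariant of the partially polarized beam plays the role of the $m^2c^2$ term in Eq.~(\ref{eq:ptgp}).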
While the emergence of the mass term corresponds to the\nemergence of the unpolarized component $I_u$,\nEqs.~(\\ref{eq:unpoldecomp}) and (\\ref{eq:ptetap}) look different,\nbecause the decomposition in Eq.~(\\ref{eq:unpoldecomp}) has been done\nfor the unsquared intensity $I$, while in Eq.~(\\ref{eq:ptetap}) it\nis in terms of the squared components. Since we have the freedom\nto choose different ways to mathematically decompose a quantity,\nthis difference is not of particular physical significance. \n\nIn current quantum field theories (QFT) the emergence of mass requires\nthe spontaneous breaking of the gauge symmetry, for which the Higgs\nmechanism has been invented. It is postulated that all of space is\npermeated by a ubiquitous Higgs field, which when interacting with the\nfield of a massless particle breaks the symmetry. When the particle\ngets moved to the non-symmetric state it acquires mass. Because the phases\nof the Higgs field and the field of the initially massless particle\nare uncorrelated, the superposition of the fields is incoherent, which\nmay be seen as one reason for the breaking of the symmetry. \n\nIn polarization physics the emergence of depolarization may also be\ninterpreted as a symmetry breaking, caused by the incoherent\nsuperposition of different wave fields. Incoherence means that the\nphases of the superposed fields are uncorrelated, which has the result that\nthe interference terms, all of which are needed to retain the symmetry,\nvanish. \n\n\n\\section{Stokes vectors as spin-2 objects}\\label{sec:spin2}\nAn object with spin $s$ varies with angle of rotation $\\theta$ as\n$s\\,\\theta$. For $s=\\textstyle{1\\over 2}$ one has to rotate $4\\pi$\nradians to return to the original state, for $s=2$ one only needs to\nrotate $\\pi$ radians,\nand so on. Ordinary vectors, like the electric and magnetic fields\n$\\vec{E}$ and $\\vec{B}$, rotate like spin-1 objects.
It may\ntherefore come as a surprise that the Stokes vector rotates with twice\nthe angle, like a spin-2 object, in spite of the identical symmetry\nproperties of the Mueller matrix and the electromagnetic tensor. \n\nThe resolution to this apparent paradox is found by distinguishing\nbetween the kind of spaces in which the rotations are performed. In\nthe Minkowski-type space that is spanned by $I,Q,U,V$ as\ncoordinates, which is the Poincar\\'e\\ space in polarization physics\nfor a fixed and normalized intensity $I$, the transformation properties\nare indeed those of a real vector, a spin-1 object. However, besides\nPoincar\\'e\\ space the Stokes vector also lives in ordinary space, and\na rotation by $\\theta$ of a vector in Poincar\\'e\\ space corresponds to\na rotation in ordinary space by $2\\theta$. While being a spin-1 object\nin Poincar\\'e\\ space, the same object becomes a spin-2 object in\nordinary space. \n\nThe reason why it becomes a spin-2 object is that the Stokes vector\nhas substructure: it is formed from tensor products of Jones\nvectors. Similarly Mueller matrices for coherent (100\\,\\%\\ polarized)\nwave packages are formed from tensor products of Jones matrices. While\nthe Jones vectors and matrices are spin-1 objects in ordinary space,\nthe bilinear products between them become spin-2 objects. \n\nThe fundamental physics that governs the polarization physics does not\nmanifest itself at the level of these spin-2 objects, because the basic\nprocesses are the electromagnetic interactions between the radiation\nfield and the electrons (which may be bound in atoms), and these\ninteractions are described at the spin-1 level (since the\nelectromagnetic waves represent a spin-1 vector field). The Jones\nmatrices, or, in QM terminology, the Kramers-Heisenberg scattering\namplitudes, contain the fundamental physics. They are the basic\nbuilding blocks for the bilinear products, the spin-2 objects. 
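The doubling of the rotation angle can be made concrete with one common convention for forming Stokes parameters as bilinear products of a Jones vector (sign conventions for $U$ and $V$ vary between authors, and the numerical values below are placeholders). Rotating the Jones vector by $\theta$ in ordinary space rotates $(Q, U)$ by $2\theta$:

```python
import numpy as np

def stokes(E):
    """Stokes parameters as bilinear products of a Jones vector (one convention)."""
    Ex, Ey = E
    I = abs(Ex)**2 + abs(Ey)**2
    Q = abs(Ex)**2 - abs(Ey)**2
    U = 2 * (Ex * np.conj(Ey)).real
    V = 2 * (Ex * np.conj(Ey)).imag
    return np.array([I, Q, U, V])

theta = 0.3
R = np.array([[ np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])  # spin-1 rotation in ordinary space
E = np.array([1.0 + 0.2j, 0.5 - 0.4j])           # placeholder Jones vector

I, Q, U, V = stokes(E)
I2, Q2, U2, V2 = stokes(R @ E)

# The bilinear products rotate by 2*theta: spin-2 behaviour.
c2, s2 = np.cos(2 * theta), np.sin(2 * theta)
assert np.isclose(Q2,  Q * c2 + U * s2)
assert np.isclose(U2, -Q * s2 + U * c2)
assert np.isclose(I2, I) and np.isclose(V2, V)
```

The Jones vector itself transforms with $\theta$ (spin-1), while the quadratic Stokes combinations transform with $2\theta$, exactly as described above.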
\n\nThis discussion points to the possibility that the physics of other\ntypes of spin-2 objects, like the metric field in general relativity,\nmay be hidden, because the governing physics may take place within a spin-1\nsubstructure level and would remain invisible if the spin-2 field\nwould be (incorrectly) \nperceived as fundamental, without substructure. \n\n\n\\section{Conclusions}\\label{sec:conc}\nComparison between the Stokes formalism, the covariant formulation of\nelectromagnetism, and the Lorentz transformation shows that they all\nshare the same Lie algebra, namely the algebra of the Lorentz\ngroup. This algebra is 6-dimensional (for instance in the case of\nelectromagnetism we have three electric field components + three magnetic\nfield components). While this is the algebra that is known to govern Lorentz\ntransformations and the related covariant formulation of\nelectromagnetism, it is not obvious why this group algebra should also apply\nto the transformation of Stokes 4-vectors, which have been constructed\nwith the aim of being a powerful tool for the treatment of\npartially polarized light. \n\nIn spite of the common underlying group structure, there is also a\nprofound difference. While the electromagnetic field vectors and \ntensors are objects of a vector space with rotational properties of\nspin-1 objects, Stokes vectors and Mueller matrices have the\nrotational symmetries of spin-2 objects, because they are formed from\ntensor products of spin-1 objects. This \nvector-field substructure contains the governing physics, where\neverything is coherent, and where in the quantum description the probability\namplitudes or wave functions live and get linearly superposed to form mixed states with\ncertain phase relations. 
When we go to the spin-2 level by\nforming bilinear products between the probability amplitudes, which\ngenerates observable probabilities, or when we form bilinear products\nbetween electric field vectors to generate \nquantities that represent energies or photon numbers, we get\nstatistical quantities (probabilities or energy packets) over which we\ncan form ensemble averages. If the phase relations of the mixed states in\nthe substructure are definite, we get interference effects and 100\\,\\%\\\npolarization for the ensemble averages, while if the phase relations\ncontain randomness (incoherent superposition) we get partial polarization. \n\nWhen comparing the Stokes 4-vector with the energy-momentum 4-vector\nof a particle, 100\\,\\%\\ elliptically polarized light corresponds to\nmassless particles, while the Stokes vector for partially polarized\nlight corresponds to the energy-momentum vector of a massive\nparticle. Depolarization thus has an effect as if the Stokes vector has\nacquired ``mass'' by a symmetry breaking that is caused by the destruction of\ncoherences between mixed states. \n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\IEEEPARstart{M}{odern} radar systems are able to generate optimal filters matched to increasingly complex target motion, resulting in increased sensitivity to targets exhibiting these motion at the cost of significant processing load. This problem is most difficult for sensors targeting objects in low Earth orbit (LEO), especially sensors with a significant field of regard. This is due to the observation time required to detect smaller targets, combined with significant orbital velocities and large search volumes, increasing the parameter space to impractical levels.\n\nExtending radar processing integration times in order to increase detection sensitivity requires mitigation against range migration, Doppler migration, and angular migration. 
The correction of these migrations is further complicated by the motion of the Earth, and hence the sensor located on the Earth. The direct implementation of a matched filter in this radar search space may lead to the incorporation of many parameters.\n\nThe nominal trajectory of orbits is well understood and is generally deterministic. The motion of a two-body Keplerian orbit, an idealised case of an object of insignificant mass orbiting around a much larger central body\\footnote{Treated as a single point mass.}, can be expressed entirely by six parameters. Matching the processing to this well-defined orbital motion for the purpose of improved radar detection and space situational awareness is therefore a natural extension.\n\nWhilst the primary aim of this general method is to increase a radar's sensitivity to objects in orbit, detections from a filter matched to a target's orbital trajectory will additionally provide coarse initial orbit determination. Traditionally, performing initial orbit determination requires many radar detections of a pass of an object in space.\n\nAfter briefly covering prior work (\\ref{ssec:prior_work}), Section \\ref{sec:problem_formulation} details the problem formulation, specifically in terms of ambiguity function expressions (\\ref{ssec:ambiguity_surface_generation}) and Keplerian orbital dynamics (\\ref{ssec:orbital_dynamics}). In Section \\ref{sec:odbd}, Orbit Determination Before Detect (ODBD) methods are discussed, including matched processing to orbital parameters, constraining the search volume (\\ref{ssec:odbd_search_constraints}), and constraining the orbit in radar measurement space (\\ref{ssec:odbd_zdc}), particularly for uncued detections. Some specific applications, including single-channel object detection and orbit determination are also discussed (\\ref{ssec:odbd_single_channel}). Section \\ref{sec:results} presents simulated results, with comparison against ephemerides. 
Section \\ref{sec:conclusion} concludes with a description of future work.\n\\vspace{-1ex}\n\n\\subsection{Prior Work}\n\\label{ssec:prior_work}\nThe motivation for this paper is to further develop techniques for the surveillance of space with the Murchison Widefield Array (MWA) using passive radar. The paper is particularly concerned with developing techniques for uncued detection over a wide field of regard. The MWA is a low frequency (70 - 300 MHz), wide field-of-view, radio telescope located in Western Australia \\cite{2013PASA...30....7T}. The MWA has demonstrated the incoherent detection of the International Space Station (ISS) \\cite{ 2013AJ....146..103T} and other, smaller, objects in orbit \\cite{prabu2020development}. However, for coherent processing, methods compensating for all aspects of motion migration are required in order to detect smaller satellites and space debris \\cite{8835821}. As passive radar systems have no control over the transmitter used for detection, improving processing gain through extended Coherent Processing Intervals (CPIs) is a method used to achieve the required sensitivity \\cite{4653940}. Orbital trajectories are ideal targets for such techniques, as stable and predictable relative motion allows for simpler measurement models. Such techniques have also been used with active radar, for improved sensitivity and processing gain \\cite{markkanen2005real} \\cite{8812975}.\n\nConsisting of 256 tiles spread across many square kilometres, the MWA's sparse layout\\footnote{At FM radio frequencies, even the compact configuration of MWA Phase II is sparse \\cite{pase22018article}.} provides high angular resolution. Objects in orbit will therefore transit many beamwidths per second at the point of closest approach. Because of this, high angular resolution (normally a desirable attribute) can result in significant angular migration. Highly eccentric orbits will transit significantly faster. 
This is particularly challenging for the uncued detection of small objects, where longer integration times are needed to achieve sufficient sensitivity.\n\nIndividual radar detections consisting of a single measurement of range, Doppler, azimuth and elevation, only define a broad region of potential orbital parameters \\cite{2007CeMDA..97..289T}. This region may be constrained by incorporating angular rates \\cite{2014demarsradar_regions1}, and even further by including radial acceleration and jerk \\cite{8812975}. Usually, many radar detections are required to perform initial orbit determination. The mapping between radar measurement space and orbital parameters is an ongoing area of research \\cite{8448187}.\n\n\n\\section{Problem Formulation}\n\\label{sec:problem_formulation}\n\\subsection{Radar Product Formation} \n\\label{ssec:ambiguity_surface_generation}\nA standard timeseries matched filter is a function to detect reflected copies of a reference signal $d(t)$ in the surveillance signal $s(t)$, specifically copies delayed by $\\tau$ and frequency shifted by $f_D$:\n\\vspace{-1ex}\n\\begin{IEEEeqnarray}{rCL}\n \\label{eq:woodward1}\n \\chi(\\tau,f_D) = \\int_T s(t){d}^*(t-\\tau){\\mathrm{e}}^{-j2\\pi f_Dt}\\,dt .\n\\end{IEEEeqnarray}\n\nThis matched filter can be extended to more complicated motions by \\textit{dechirping} (or even applying higher order corrections to) the motion-induced frequency shift. For example, instead of matching to the radial velocity with a Doppler shift of $f_D$, higher order motions could be matched with a time varying frequency (that can be represented as a polynomial phase signal) given by $f_D +\nf_Ct$, where $f_C$ is proportional to the radial acceleration. This can be extended to an arbitrary number of parameters at the cost of adding extra dimensions to the matched filter outputs. 
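As a concrete, much simplified illustration of the matched filter of Eq.~(\ref{eq:woodward1}), the sketch below evaluates a discrete delay--Doppler surface on a simulated echo. Delays are handled as integer sample shifts, which ignores sub-sample delay and all of the migration effects discussed above; signal values and grid parameters are placeholders:

```python
import numpy as np

def ambiguity(s, d, delays, dopplers, fs):
    """Discrete evaluation of Eq. (woodward1) over a delay/Doppler grid.

    Delays are integer sample shifts (a simplification of tau)."""
    t = np.arange(len(s)) / fs
    chi = np.zeros((len(delays), len(dopplers)), dtype=complex)
    for i, tau in enumerate(delays):
        d_shift = np.roll(d, tau)
        for j, fD in enumerate(dopplers):
            chi[i, j] = np.sum(s * np.conj(d_shift) * np.exp(-2j * np.pi * fD * t))
    return chi

fs = 1e3
t = np.arange(1024) / fs
rng = np.random.default_rng(0)
d = rng.standard_normal(1024) + 1j * rng.standard_normal(1024)  # reference signal
true_delay, true_fD = 12, 50.0
s = np.roll(d, true_delay) * np.exp(2j * np.pi * true_fD * t)   # delayed, shifted echo

delays = np.arange(32)
dopplers = np.arange(0.0, 100.0, 10.0)
chi = ambiguity(s, d, delays, dopplers, fs)
i, j = np.unravel_index(np.argmax(np.abs(chi)), chi.shape)
assert delays[i] == true_delay and dopplers[j] == true_fD
```

The peak of $|\chi|$ falls at the true delay/Doppler cell; extending the Doppler term to a polynomial phase, as described above, adds further dimensions to this search grid.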
\nTo account for any range migration, the delay term $\\tau$ will also need to be a function of time to match the radial motion.\n\nFor a receiver array consisting of $N$ elements, the surveillance signal $s(t)$ can be formed by classical far-field beamforming in a direction of interest such that:\n\\begin{IEEEeqnarray}{rCL}\ns(t) = \\sum_{n=1}^{N}s_{n}(t){\\mathrm{e}}^{-j\\boldsymbol{k}(\\theta,\\phi) \\cdot \\boldsymbol{u}_n } ,\n\\end{IEEEeqnarray}\nwhere $s_{n}(t)$ is the received signal at the $n^{\\text{th}}$ antenna, $\\boldsymbol{u}_n$ is the position of the $n^{\\text{th}}$ antenna, and $\\boldsymbol{k}(\\theta,\\phi)$ is the signal wavevector for azimuth $\\theta$ and elevation $\\phi$. \nTime varying adjustments can be made to every measurement parameter to create a filter, $\\chi$, matched to the exact motion of an object with range $\\rho(t)$ and slant range-rate $\\dot{\\rho}(t)$, in time-varying directions given by azimuth $\\theta(t)$ and elevation $\\phi(t)$:\n\n\\begin{multline}\n \\label{eq:time_varying_delay_doppler}\n \\chi\\left(\\theta(t), \\phi(t), \\rho(t), \\dot{\\rho}(t)\\right) = \\int_T \\left[\\sum_{n=1}^{N}{\\mathrm{e}}^{j\\boldsymbol{k}(\\theta(t),\\phi(t)) \\cdot \\boldsymbol{u}_n }s_{n}(t)\\right] \\\\\n d^*\\left(t-2c^{-1}\\rho(t)\\right){\\mathrm{e}}^{-j\\frac{4\\pi}{\\lambda}\\dot{\\rho}(t)t}dt ,\n\\end{multline}\nwhere the delay to the target is now given by the total path distance scaled by $\\frac{1}{c}$, and the Doppler shift is given by $\\frac{2\\dot{\\rho}}{\\lambda}$.\n\n\\subsection{Orbital Dynamics}\n\\label{ssec:orbital_dynamics}\n\\vspace{-.47ex}\nThe most common elements used to parameterise an orbit are the Keplerian, or \\textit{classical}, orbital elements. 
These elements directly describe the size, shape, and orientation of an orbital ellipse (with one focus being at the centre of the central body), and the position of an object on this ellipse at some epoch, in the Earth-Centered Inertial (ECI) coordinate frame \\cite{Vallado2001fundamentals}. The ECI coordinate frame has its origin at the centre of the Earth, but it does not rotate with the Earth. It is also worth noting that a Keplerian orbit can, in fact, be any conic section. However, in this paper, it is assumed that orbits describe Earth-captured closed orbits.\n\nThe Keplerian orbital parameters are: the semi-major axis, $a$, and eccentricity, $e$, defining the size and shape of the ellipse; the right-ascension of the ascending node, $\\Omega$, and inclination, $i$, which define the orientation of the elliptical plane to the Earth's equatorial plane; the argument of periapsis, $w$, defining the orientation\/rotation of the ellipse in the orbital plane; and finally, the true anomaly, $\\nu$, defining the position of the object on the ellipse (refer to Figure \\ref{fig:orbitplane1}).\n\n\\vspace{-3ex}\n\\begin{figure}[ht!]\n\\hspace{2ex}\n\\tdplotsetmaincoords{70}{110}\n\\begin{tikzpicture}[tdplot_main_coords,scale=4.3]\n \\pgfmathsetmacro{\\r}{0.7}\n \\pgfmathsetmacro{\\O}{45} \n \\pgfmathsetmacro{\\i}{35} \n \\pgfmathsetmacro{\\f}{32}\n\n \\coordinate (O) at (0,0,0);\n\n \\draw [->] (O) -- (1,0,0) node[anchor=north east] {$\\boldsymbol{I}$};\n \\draw [->] (O) -- (0,0.9,0) node[anchor=north west] {$\\boldsymbol{J}$};\n \\draw [->] (O) -- (0,0,0.9) node[anchor=south] {$\\boldsymbol{K}$};\n \n \\tdplotdrawarc[dashed]{(O)}{\\r}{0}{360}{}{}\n\n \\tdplotsetrotatedcoords{\\O}{0}{0}\n\n \\draw [tdplot_rotated_coords] (0,0,0) -- (\\r,0,0) node [below right] {};\n \\tdplotdrawarc[->]{(O)}{.28*\\r}{0}{\\O}{anchor=north}{$\\Omega$}\n\n \\tdplotsetrotatedcoords{-\\O}{\\i}{0}\n \\tdplotdrawarc[tdplot_rotated_coords]{(O)}{\\r}{0}{360}{}{} \n 
\\begin{scope}[tdplot_rotated_coords]\n \\draw[->] (O) -- (0,0,0.8) node [above] {$\\boldsymbol{h}$};\n \\draw (0,0,0) -- (-\\r,0,0);\n \\tdplotdrawarc[->]{(O)}{.4*\\r}{90}{180}{anchor=west}{$\\omega$}\n \\coordinate (P) at (180+\\f:\\r);\n \\draw (O) -- (P);\n \\tdplotdrawarc[->]{(O)}{.7*\\r}{180}{180+\\f}{anchor=south west}{$\\nu$}\n \\end{scope}\n\n \\tdplotsetrotatedcoords{-\\O+\\f}{\\i}{0}\n \\tdplotsetrotatedcoordsorigin{(P)}\n \\begin{scope}[tdplot_rotated_coords,scale=.2,thick]\n \\fill (P) circle (.6ex) node [above] {Celestial Body};\n \\end{scope}\n\n \\tdplotsetthetaplanecoords{-\\f}\n \\tdplotdrawarc[tdplot_rotated_coords,->]{(O)}{0.9*\\r}{0}{\\i}{anchor=south}{$i$}\n\\end{tikzpicture}\n\\vspace{-2ex}\n\\caption{The orbital plane determined by orientation parameters $\\Omega$, $\\omega$, and $i$ relative to the plane of reference in the ECI coordinate frame. These parameters define the direction of the angular momentum vector $\\boldsymbol{h}$. The axes $\\boldsymbol{I}$, $\\boldsymbol{J}$ and $\\boldsymbol{K}$ define the ECI coordinate frame.}\n\\vspace{-1ex}\n\\label{fig:orbitplane1}\n\\end{figure}\n\nIt is also assumed that the only force acting on the object in orbit is due to the gravity of the dominant mass\\footnote{Uniform acceleration does not take into account the ellipsoidal\/oblate nature of the Earth or other forces, such as micro-atmospheric drag, solar weather, and gravity due to other celestial bodies. 
For the short duration of a single CPI, these factors are generally negligible.}, with the acceleration due to the Earth's gravity $\\boldsymbol{\\ddot{r}}$, given by:\n\\vspace{-0.2ex}\n\\begin{equation}\n\\boldsymbol{\\ddot{r}}= -\\frac{\\mu}{\\lvert\\boldsymbol{r}\\rvert^3}\\boldsymbol{r}\\label{eq:eci_acceleration},\\vspace{-1ex}\n\\end{equation}\nwhere $\\mu$ is the standard gravitational parameter for the Earth.\n\nGiven the orbital parameters, and the acceleration due to the Earth's gravity, the Cartesian position $\\boldsymbol{r}$, and velocity $\\boldsymbol{\\dot{r}}$, for an object in Earth orbit is completely deterministic and is given by:\n\\begin{IEEEeqnarray}{rCL}\n \\boldsymbol{r} &=& \\frac{a(1-e^2)}{1 + e\\cos{\\nu}}(\\cos{\\nu}\\boldsymbol{P} + \\sin{\\nu}\\boldsymbol{Q})\\label{eq:eci_position}~;\\\\\n \\boldsymbol{\\dot{r}} &=& \\sqrt{\\frac{\\mu}{a(1-e^2)}}(-\\sin{\\nu}\\boldsymbol{P} + (e + \\cos{\\nu})\\boldsymbol{Q})\\label{eq:eci_velocity}~,\n\\end{IEEEeqnarray}\nwhere $\\boldsymbol{P}$ and $\\boldsymbol{Q}$ represent axes of a coordinate system co-planar with the orbital plane in the Cartesian ECI coordinate frame (given by axes $\\boldsymbol{I}$, $\\boldsymbol{J}$, and $\\boldsymbol{K}$). The third axis, $\\boldsymbol{W}$, is perpendicular to the orbital plane \\cite{Vallado2001fundamentals}. 
These vectors are described by:\n\n\\begin{IEEEeqnarray}{rCL}\n\\boldsymbol{P} = & &\n \\begin{bmatrix} \n \\cos{\\Omega}\\cos{\\omega} - \\sin{\\Omega}\\cos{i}\\sin{\\omega} \\\\\n \\sin{\\Omega}\\cos{\\omega} + \\cos{\\Omega}\\cos{i}\\sin{\\omega} \\\\\n \\sin{i}\\sin{\\omega}\n \\end{bmatrix}~; \\\\\n \\boldsymbol{Q} = & & \\begin{bmatrix} \n -\\cos{\\Omega}\\sin{\\omega} - \\sin{\\Omega}\\cos{i}\\cos{\\omega} \\\\\n -\\sin{\\Omega}\\sin{\\omega} + \\cos{\\Omega}\\cos{i}\\cos{\\omega} \\\\\n \\sin{i}\\cos{\\omega}\n \\end{bmatrix}~;\\\\\n \\boldsymbol{W} = & & \\begin{bmatrix} \n \\sin{i}\\sin{\\Omega} \\\\\n -\\sin{i}\\cos{\\Omega} \\\\\n \\cos{i}\n \\end{bmatrix}~.\n\\end{IEEEeqnarray}\nNote that a complicating factor with the ECI reference frame is that a nominally stationary position on the surface of the Earth, such as a fixed radar sensor, will have significant motion.\n\n\\section{{Orbit Determination Before Detect}} \\label{sec:odbd}\n\n\\begin{figure}[ht!]\n\\hspace{7ex}\n\\begin{tikzpicture}[dot\/.style={draw,fill,circle,inner sep=1pt}]\n \\def5{5}\n \\def\\a{5}\n \\def\\angle{10} \n \\coordinate[] (O) at (0,0) {};\n \\node[thick, dot,label={\\angle:Celestial Body}] (X) at (20:{4} and {3.2}) {};\n \\coordinate[label={-30:Sensor Location}] (Q) at (0:{4} and {3}) {};\n \\draw [->] (O) -- (X) node [midway, above] {$\\boldsymbol{r}$};\n \\draw [->] (O) -- (Q) node [midway, below] {$\\boldsymbol{q}$};\n \\draw [dashed, ->] (X) -- (1.9,2) node [near end, right] [shift=(10:1mm)] {$\\dot{\\boldsymbol{r}}$};\n \\draw [ ->] (Q) -- (X) node [midway, right] {$\\boldsymbol{\\rho}$};\n \\fill (X) circle [radius=2pt];\n\n\\end{tikzpicture}\n\\vspace{-2ex}\n\\caption{In the ECI coordinate frame the sensor is at position $\\boldsymbol{q}$, the celestial body at position $\\boldsymbol{r}$ with velocity $\\dot{\\boldsymbol{r}}$ (given by \\eqref{eq:eci_position} and \\eqref{eq:eci_velocity}) and the slant range vector from the sensor to the object given by 
$\\boldsymbol{\\rho}$.}\n\\label{fig:radar_vectors}\n\\end{figure}\n\nFor a two-body Keplerian orbit, the time-varying terms $\\rho(t)$, $\\dot{\\rho}(t)$, $\\phi(t)$, and $\\theta(t)$ \n\\eqref{eq:time_varying_delay_doppler} can be completely described by an orbit's six independent parameters. Although the position of an object in orbit is given by \\eqref{eq:eci_position}, there are no closed form solutions for the time varying position $\\boldsymbol{r}(t)$. Instead, a Taylor series approximation can be used to calculate an expression for the object's position throughout a CPI such that $\\boldsymbol{r}(t) = \\sum_{n=0}^{\\infty}\\frac{\\boldsymbol{r}^{(n)}(0)t^n}{n!}$ (where $\\boldsymbol{r}^{(n)}(x)$ denotes the $n^{th}$ derivative of $\\boldsymbol{r}$ evaluated at the point $x$), with $t$ being the time through the CPI of length $T$, $t \\in [\\frac{-T}{2}, \\frac{T}{2}]$. With knowledge of the sensor's location, $\\boldsymbol{q}(t)$ (as in Figure \\ref{fig:radar_vectors}), and $\\boldsymbol{\\dot{q}}(t)$ giving the slant range vector from the sensor to the object, as well as the slant-range rate, as $\\boldsymbol{\\rho}(t) = \\boldsymbol{r}(t) - \\boldsymbol{q}(t)$ and $\\boldsymbol{\\dot{\\rho}}(t) = \\boldsymbol{\\dot{r}}(t) - \\boldsymbol{\\dot{q}}(t)$, a polynomial expression for the slant-range and slant-range rate equations of motion over the CPI is possible:\n\\vspace{-1ex}\n\\begin{IEEEeqnarray}{rCL}\n \\rho(t) &=& \\lvert\\boldsymbol{\\rho}(t)\\rvert = \\lvert \\sum_{n=0}^{\\infty}\\frac{\\boldsymbol{r}^{(n)}(0)t^n}{n!} - \\boldsymbol{q}(t)\\rvert\\label{eq:taylor_position}~;\\\\\n \\dot{\\rho}(t) &=& \\lvert\\boldsymbol{\\dot{\\rho}}(t)\\rvert = \\lvert \\sum_{n=1}^{\\infty}\\frac{\\boldsymbol{r}^{(n)}(0)t^n}{n!} - \\boldsymbol{\\dot{q}}(t)\\rvert\\label{eq:taylor_rangerate}~.\n\\end{IEEEeqnarray}\nThese expressions can be extended (or truncated) to arbitrary accuracy.\n\nThe directional angles are now calculated as topocentric right ascension and 
declination, that is right ascension and declination relative to the sensor location, given by $\\alpha$ and $\\delta$, respectively: \n\\vspace{-1.5ex}\n\\begin{IEEEeqnarray}{rCL}\n \\alpha(t) &=& {\\tan}^{-1}\\left(\\frac{\\rho_{\\boldsymbol{J}}(t)}{\\rho_{\\boldsymbol{I}}(t)}\\right)\\label{eq:alpha_t}~;\\\\\n \\delta(t) &=& {\\tan}^{-1}\\left(\\frac{\\rho_{\\boldsymbol{K}}(t)}{\\sqrt{{\\rho_{\\boldsymbol{I}}(t)}^2 + {\\rho_{\\boldsymbol{J}}(t)}^2}}\\right)~, \\label{eq:delta_t}\n\\end{IEEEeqnarray}\nnoting that these expressions depend on the individual elements of $\\boldsymbol{\\rho}$ such that $\\boldsymbol{\\rho}(t) = [\\rho_{\\boldsymbol{I}}(t), \\rho_{\\boldsymbol{J}}(t), \\rho_{\\boldsymbol{K}}(t)]^T$.\n\nUsing the expressions in this section, it is possible to form a matched filter to the orbital elements themselves, essentially creating $\\chi(e,a,i,\\Omega,\\omega,\\nu)$ at a given epoch \\eqref{eq:time_varying_delay_doppler}. This enables arbitrarily long CPIs by tracking an orbit throughout the CPI. Additionally, instead of calculating a Taylor Series expression for the orbital position $\\boldsymbol{r}(t)$, and deriving the parameters of interest, it is far more efficient to directly calculate a Taylor Series expression for the parameters of interest. 
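The element-to-measurement chain above, eqs. \eqref{eq:eci_position}, \eqref{eq:eci_velocity}, \eqref{eq:alpha_t} and \eqref{eq:delta_t}, can be sketched numerically. The following Python/NumPy fragment is a minimal illustration only: the value of $\mu$, the SI units, and all function names are assumptions, and the two-argument arctangent is used to resolve the quadrant ambiguity left implicit in $\tan^{-1}$.

```python
import numpy as np

MU = 3.986004418e14  # Earth's standard gravitational parameter [m^3/s^2] (assumed value)

def perifocal_axes(Omega, i, omega):
    """P, Q, W unit vectors of the orbital plane expressed in the ECI frame."""
    P = np.array([np.cos(Omega)*np.cos(omega) - np.sin(Omega)*np.cos(i)*np.sin(omega),
                  np.sin(Omega)*np.cos(omega) + np.cos(Omega)*np.cos(i)*np.sin(omega),
                  np.sin(i)*np.sin(omega)])
    Q = np.array([-np.cos(Omega)*np.sin(omega) - np.sin(Omega)*np.cos(i)*np.cos(omega),
                  -np.sin(Omega)*np.sin(omega) + np.cos(Omega)*np.cos(i)*np.cos(omega),
                  np.sin(i)*np.cos(omega)])
    W = np.array([np.sin(i)*np.sin(Omega), -np.sin(i)*np.cos(Omega), np.cos(i)])
    return P, Q, W

def state_from_elements(a, e, i, Omega, omega, nu):
    """ECI position r and velocity rdot from the six Keplerian elements."""
    P, Q, _ = perifocal_axes(Omega, i, omega)
    r = a*(1 - e**2)/(1 + e*np.cos(nu))*(np.cos(nu)*P + np.sin(nu)*Q)
    rdot = np.sqrt(MU/(a*(1 - e**2)))*(-np.sin(nu)*P + (e + np.cos(nu))*Q)
    return r, rdot

def topocentric_angles(rho_vec):
    """Topocentric right ascension and declination from the slant-range vector."""
    rI, rJ, rK = rho_vec
    alpha = np.arctan2(rJ, rI)                    # quadrant-aware inverse tangent
    delta = np.arctan2(rK, np.hypot(rI, rJ))
    return alpha, delta
```

For a circular orbit ($e=0$) this reproduces $\lvert\boldsymbol{r}\rvert = a$ and $\lvert\boldsymbol{\dot{r}}\rvert = \sqrt{\mu/a}$, consistent with the vis-viva relation used later in this section.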
\nFor a sensor at known Cartesian position $\\boldsymbol{q}$, with known instantaneous velocity, acceleration and jerk, given by $\\boldsymbol{\\dot{q}}$, $\\boldsymbol{\\ddot{q}}$, and $\\boldsymbol{\\dddot{q}}$, respectively, and given the slant range vector $\\boldsymbol{\\rho} = \\boldsymbol{r} - \\boldsymbol{q}$, the slant range and its instantaneous derivatives are given by:\n\n\\begin{align}\n \\rho &= \\lvert\\boldsymbol{\\rho}\\rvert\\label{eq:straight_up_slant_range}~;\\\\\n \\label{eq:orbit_doppler}\n \\dot{\\rho} &= \\frac{\\boldsymbol{\\rho}\\cdot\\dot{\\rhovec}}{\\rho}~; \\\\\n \\label{eq:orbit_chirp}\n \\ddot{\\rho} &= -\\frac{(\\boldsymbol{\\rho}\\cdot\\dot{\\rhovec})^2}{\\rho^3} + \\frac{|\\dot{\\rhovec}|^2 + \\boldsymbol{\\rho}\\cdot\\ddot{\\rhovec}}{\\rho}~; \\\\\n \\label{eq:orbit_jerk}\n \\dddot{\\rho} &= \\begin{multlined}[t]\n 3\\frac{(\\boldsymbol{\\rho}\\cdot\\dot{\\rhovec})^3}{\\rho^5} \\\\\n - 3\\frac{(\\boldsymbol{\\rho}\\cdot\\dot{\\rhovec})(|\\dot{\\rhovec}|^2 + \\boldsymbol{\\rho}\\cdot\\ddot{\\rhovec})}{\\rho^3}\\\\\n + \\frac{3\\dot{\\rhovec}\\cdot\\ddot{\\rhovec} + \\boldsymbol{\\rho}\\cdot\\dddot{\\rhovec}}{\\rho}~,\n \\end{multlined}\n\\end{align}\n\nwhere $\\dddot{\\boldsymbol{r}}$ is from the derivative of \\eqref{eq:eci_acceleration} and is given by:\n\\begin{IEEEeqnarray}{rCL}\n\\dddot{\\boldsymbol{r}} = \\frac{3\\mu\\boldsymbol{r}\\cdot\\boldsymbol{\\dot{r}}}{\\lvert\\boldsymbol{r}\\rvert^5}\\boldsymbol{{r}} -\\frac{\\mu}{\\lvert\\boldsymbol{r}\\rvert^3}\\boldsymbol{\\dot{r}}~.\n\\end{IEEEeqnarray}\n\nNow, \\eqref{eq:orbit_doppler}, \\eqref{eq:orbit_chirp}, and \\eqref{eq:orbit_jerk} can be used to directly specify the target's Doppler, chirp rate, and radial jerk. 
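The derivative expressions \eqref{eq:straight_up_slant_range}--\eqref{eq:orbit_jerk} can be sanity-checked numerically. The sketch below (illustrative names; $\dot{\rhovec}$ etc. are passed in as plain arrays) evaluates them from a slant-range vector and its derivatives, and the test compares against central finite differences of $\lvert\boldsymbol{\rho}(t)\rvert$ for a cubic test trajectory.

```python
import numpy as np

def slant_range_derivs(p, pd, pdd, pddd):
    """Slant range rho and its first three time derivatives, given the
    slant-range vector p and its derivatives pd, pdd, pddd."""
    rho = np.linalg.norm(p)
    pv = np.dot(p, pd)                       # rho . rhodot
    rhod = pv/rho
    rhodd = -pv**2/rho**3 + (np.dot(pd, pd) + np.dot(p, pdd))/rho
    rhoddd = (3*pv**3/rho**5
              - 3*pv*(np.dot(pd, pd) + np.dot(p, pdd))/rho**3
              + (3*np.dot(pd, pdd) + np.dot(p, pddd))/rho)
    return rho, rhod, rhodd, rhoddd
```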
This leads to more efficient expressions (when compared to \n\\eqref{eq:taylor_position} and \\eqref{eq:taylor_rangerate}) for the slant-range, and also slant-range rate, throughout the CPI of length $T$ such that $t \\in [\\frac{-T}{2}, \\frac{T}{2}]$: \n\\begin{IEEEeqnarray}{rCL}\n \\rho(t) & = & \\rho + \\dot{\\rho}t + \\frac{1}{2}\\ddot{\\rho}t^2 + \\frac{1}{6}\\dddot{\\rho}t^3~; \\\\\n \\dot{\\rho}(t) & = & \\dot{\\rho} + \\ddot{\\rho}t + \\frac{1}{2}\\dddot{\\rho}t^2~.\n\\end{IEEEeqnarray}\n\nA four-term (third-order) Taylor series approximation to the slant-range, $\\rho(t)$, was chosen because previous work demonstrated that a third-order polynomial phase signal may be required in order to coherently match orbits for CPIs of duration up to 10 seconds \\cite{8835821}. \n\nSimilarly, equivalent approximations can be formed for the angular measurement parameters $\\alpha(t)$ \n\\eqref{eq:alpha_t} and $\\delta(t)$ \\eqref{eq:delta_t}.\n\\subsection{Search-Volume Constraints}\n\\label{ssec:odbd_search_constraints}\nThe methods described above enable coherent processing that matches orbital parameters; however, they are not suitable for performing uncued detections. The parameter space is far too large to be practically searched, and the vast majority of orbits will not correspond to passes within a region of interest above the sensor. However, as stated earlier in Section \\ref{ssec:orbital_dynamics}, alternatives to the Keplerian parameter set are available. In fact, it is possible to parameterise a Keplerian orbit with the Cartesian position and velocity to constitute the six elements \\cite{Vallado2001fundamentals}. It is also possible to utilise combinations of both sets of elements in other formulations.\n\nInstead of searching through classical orbital parameters, three parameters can be expressed as a hypothesised ECI position within a search volume of interest. 
This ensures any hypothesised orbit, determined from these initial parameters, will be within the search volume. Given this potential orbital position, $\\boldsymbol{r}$, only three additional parameters are needed to fully define an elliptical orbit. Although the three elements forming the orbital velocity could be treated as free variables, the majority of possible velocities would not correspond to valid Earth-captured orbits. Instead, given position $\\boldsymbol{r}$ and semi-major axis $a$, the magnitude of the velocity of the corresponding orbit is given by the Vis-Viva equation \\cite{Vallado2001fundamentals}:\n\\begin{IEEEeqnarray}{rCL}\n{\\lvert\\boldsymbol{\\dot{r}}\\rvert}^2\n= \\mu(\\frac{2}{\\lvert\\boldsymbol{r}\\rvert} - \\frac{1}{a})~. \\label{eq:vis_viva}\n\\end{IEEEeqnarray}\n\nFurthermore, given position $\\boldsymbol{r}$ and eccentricity $e$, the semi-major axis will itself be constrained between the potential limits of the orbit's apogee and perigee ranges:\n\\begin{IEEEeqnarray}{rCL}\n\\frac{\\lvert\\boldsymbol{r}\\rvert}{1+e} \\leq a \\leq \\frac{\\lvert\\boldsymbol{r}\\rvert}{1-e}~. \\label{eq:apogiee_and_perigee_ranges}\n\\end{IEEEeqnarray}\n\nThe semi-major axis is also constrained by realistic limits on an orbit's range, as well as a sensor's maximum detection range, represented by minimum and maximum allowable periapsides, ${rp}_{min}$ and ${rp}_{max}$:\n\\begin{IEEEeqnarray}{rCL}\n\\frac{{rp}_{min}}{1-e} \\leq a \\leq \\frac{{rp}_{max}}{1-e}~. \\label{eq:perigee_ranges_limits}\n\\end{IEEEeqnarray}\n\nAnother constraint is the constant angular momentum of the orbit, $\\boldsymbol{h}$. This vector is perpendicular to the orbital plane, parallel to $\\boldsymbol{W}$\\hspace{-0.5ex}, with a magnitude depending on the size and shape of the ellipse:\n\\begin{IEEEeqnarray}{rCL}\n \\boldsymbol{h} = \\sqrt{\\mu a(1-e^2)}\\boldsymbol{W} = \\boldsymbol{r}\\times\\boldsymbol{\\dot{r}}~. 
\\label{eq:angular_momentum}\n\\end{IEEEeqnarray}\n\nThis cross-product may be rewritten to form an expression for the inner product between the position and velocity:\n\\begin{IEEEeqnarray}{rCL}\n \\boldsymbol{r}\\cdot\\boldsymbol{\\dot{r}} & = & \\pm\\sqrt{\\lvert\\boldsymbol{r}\\rvert^2\\lvert\\boldsymbol{\\dot{r}}\\rvert^2 - \\lvert\\boldsymbol{h}\\rvert^2}~.\n\\end{IEEEeqnarray}\n\nCombined with the magnitude of the velocity, from the Vis-Viva equation \\eqref{eq:vis_viva}, as well as the magnitude of the constant angular momentum \\eqref{eq:angular_momentum}, an expression for this inner product can be formed which depends solely on the position $\\boldsymbol{r}$ and the size and shape of the orbital ellipse:\n\\begin{IEEEeqnarray}{rCL}\n \\boldsymbol{r}\\cdot\\boldsymbol{\\dot{r}} & = & \\pm\\sqrt{\\lvert\\boldsymbol{r}\\rvert^2\\mu(\\frac{2}{\\lvert\\boldsymbol{r}\\rvert} - \\frac{1}{a}) - \\mu a (1-e^2)}~. \\label{eq:parallel_planes}\n\\end{IEEEeqnarray}\n\nAdditionally, the specific relative angular momentum vector, $\\boldsymbol{h}$, is perpendicular to both the orbital position $\\boldsymbol{r}$ and orbital velocity $\\boldsymbol{\\dot{r}}$. This leads to the expressions $\\boldsymbol{r}\\cdot\\boldsymbol{h}=0$ and $\\boldsymbol{\\dot{r}}\\cdot\\boldsymbol{h}=0$, which result in another constraint on the velocity, dependent on the right ascension of the ascending node, $\\Omega$:\n\\begin{IEEEeqnarray}{rCL}\n \\begin{bmatrix}\n r_{\\boldsymbol{K}}\\sin{\\Omega} \\\\ -r_{\\boldsymbol{K}}\\cos{\\Omega} \\\\ r_{\\boldsymbol{J}}\\cos{\\Omega} - r_{\\boldsymbol{I}}\\sin{\\Omega}\n \\end{bmatrix}\\cdot\\boldsymbol{\\dot{r}} = 0~. \\label{eq:raan_plane}\n\\end{IEEEeqnarray}\n\nThese expressions lead to a simple geometric solution for determining orbits when $\\boldsymbol{r}$ (and other parameters) are known, and $\\boldsymbol{\\dot{r}}$ is unknown. 
For determining $\\boldsymbol{\\dot{r}}$, \n(\\ref{eq:vis_viva}) defines a sphere of radius $\\sqrt{\\mu(\\frac{2}{\\lvert\\boldsymbol{r}\\rvert} - \\frac{1}{a})}$, representing valid orbits in the velocity vector's element space. Additionally, \n(\\ref{eq:parallel_planes}) defines two parallel planes of valid orbits, which intersect with (\\ref{eq:vis_viva}) to define two circles. Finally, intersecting these two circles with the plane defined by the position and the right ascension of the ascending node, $\\Omega$, \n\\eqref{eq:raan_plane} will result in a maximum of four intersection points, that is, four velocities, each corresponding to a valid orbit. An example diagram is shown in Figure \\ref{fig:makingshapes}. Although this means that a choice of six orbital parameters will result in up to four potential orbital matched filters, this approach will be far more efficient than methods outlined earlier in this section, as the orbit will be within the search volume, and each parameter choice restricts the range of subsequent parameters. 
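The sphere-plane-plane intersection described above can be sketched in Python/NumPy. Rather than constructing the two circles explicitly, the fragment below intersects the line common to each $\boldsymbol{r}\cdot\boldsymbol{\dot{r}}$ plane and the RAAN plane with the vis-viva sphere via a quadratic. The value of $\mu$ and the function names are assumptions, and degenerate geometries (e.g. a hyperbolic speed or parallel constraint planes) are not handled.

```python
import numpy as np

MU = 3.986004418e14  # Earth's gravitational parameter [m^3/s^2] (assumed value)

def candidate_velocities(r, a, e, Omega):
    """Up to four velocities consistent with position r, semi-major axis a,
    eccentricity e and RAAN Omega: the intersection of the vis-viva sphere,
    the pair of r.v planes and the RAAN plane."""
    r_mag = np.linalg.norm(r)
    v2 = MU*(2.0/r_mag - 1.0/a)          # |v|^2 from the vis-viva equation
    h2 = MU*a*(1.0 - e**2)               # |h|^2 of the orbit
    disc = r_mag**2*v2 - h2              # (r . v)^2
    if disc < 0:
        return []
    # normal of the RAAN plane (always perpendicular to r)
    n = np.array([r[2]*np.sin(Omega), -r[2]*np.cos(Omega),
                  r[1]*np.cos(Omega) - r[0]*np.sin(Omega)])
    sols = []
    for c in (np.sqrt(disc), -np.sqrt(disc)):
        # line common to the planes r.v = c and n.v = 0 ...
        A = np.vstack([r, n])
        v0 = np.linalg.lstsq(A, np.array([c, 0.0]), rcond=None)[0]
        u = np.cross(r, n)
        # ... intersected with the sphere |v|^2 = v2 (quadratic in t)
        aa, bb, cc = np.dot(u, u), 2.0*np.dot(v0, u), np.dot(v0, v0) - v2
        det = bb*bb - 4.0*aa*cc
        if det >= 0:
            sols += [v0 + t*u for t in ((-bb + np.sqrt(det))/(2*aa),
                                        (-bb - np.sqrt(det))/(2*aa))]
    return sols
```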
\n\n\\vspace{-3ex}\n\n\\begin{figure}[ht!]\n\\hspace{8ex}\\begin{tikzpicture}[\n point\/.style = {draw, circle, fill=black, inner sep=0.7pt},\n scale=0.7\n]\\clip (-4.25,-3.75) rectangle + (8.5,8);\n\\def3cm{3cm}\n\\coordinate (O) at (0,0); \n\n \\draw[->] (0,0,0) -- (1,0,0) node[anchor=north east]{$\\dot{{r}}_{\\boldsymbol{J}}$};\n \\draw[->] (0,0,0) -- (0,1,0) node[anchor=north west]{$\\dot{{r}}_{\\boldsymbol{K}}$};\n \\draw[->] (0,0,0) -- (0,0,1) node[anchor=south east]{$\\dot{{r}}_{\\boldsymbol{I}}$};\n\n\n\\draw[-] (0,0) circle [radius=3cm];\n\n\n\n\\begin{scope}[]\n\\draw[-]\n (-3.25,2.7) -- (4.25,2.7) -- (3.25,1.2) -- (-4.25,1.2) -- cycle;\n \n \\draw[]\n (-3.25,-1.2) -- (4.25,-1.2) -- (3.25,-2.7) -- (-4.25,-2.7) -- cycle;\n \n \\draw[]\n (-2.3,3.25) -- (-2.3,-3.75) -- (-1.2,-3.25) -- (-1.2,3.75) -- cycle;\n \n \\draw[dashed] (0,2) ellipse (2.2 and 0.35);\n \\draw[dashed] (0,-1.97) ellipse (2.2 and 0.3);\n \\draw[dashed] (-1.75,0) ellipse (0.3 and 2.4 );\n \n\\node at (2.5,3.1) {$P_1$};\n\n\\node at (3.6,-0.8) {$P_2$};\n\n\n\\node at (-0.85,3.45) {$P_3$};\n\n\\fill (-1.65,2.23) circle [radius=0.105];\n\n\\fill (-1.93,1.83) circle [radius=0.105];\n\n\\fill (-1.51,-1.73) circle [radius=0.105];\n\n\n\\fill (-1.85,-2.1) circle [radius=0.105];\n\n\n\\end{scope}\n\\end{tikzpicture}\n\\vspace{-1.5ex}\n\\caption{Four valid orbital velocities given by the intersection of the sphere (given by \\eqref{eq:vis_viva}), parallel planes P1 and P2 (given by \\eqref{eq:parallel_planes}), and plane P3 (given by \\eqref{eq:raan_plane} or \\eqref{eq:doppler_plane}).}\n\\label{fig:makingshapes}\n\\end{figure}\n\nTherefore, given an orbital position, $\\boldsymbol{r}$, a choice of eccentricity, $e$, semi-major axis, $a$, and right ascension of the ascending node, $\\Omega$, four potential orbital velocities, $\\boldsymbol{\\dot{r}}$, are calculated, which leads to an expression for the complete matched filter:\n\\vspace{-0.5ex}\n\\begin{IEEEeqnarray}{rCL}\n \\chi(\\boldsymbol{r}, 
\\boldsymbol{\\dot{r}}) \n & = \\hspace{-1ex} \\int\\limits_{-\\frac{T}{2}}^{\\frac{T}{2}} \\hspace{-1ex} [&\\sum_{n=1}^{N}{\\mathrm{e}}^{j\\boldsymbol{k}(\\delta(\\boldsymbol{r}, \\boldsymbol{\\dot{r}},t),\\alpha(\\boldsymbol{r}, \\boldsymbol{\\dot{r}},t)) \\cdot \\boldsymbol{u}_n }s_{n}(t)]\\nonumber\\\\\n & & ~d^*(t-2c^{-1}\\rho(\\boldsymbol{r}, \\boldsymbol{\\dot{r}},t)){\\mathrm{e}}^{-j\\frac{2\\pi}{\\lambda}\\dot{\\rho}(\\boldsymbol{r}, \\boldsymbol{\\dot{r}},t)t}\\,dt~.~~~\\label{eq:full_OD_right_here}\n\\end{IEEEeqnarray}\n\nThe proposed method tests only realistic orbits in a given search region. Also, given a set of orbit parameters, this matched filter should maximise a radar's sensitivity to that orbit. Additionally, a detection in this matched filter corresponds to a detection in the orbital element space, providing initial orbit determination from a single detection.\n\nThis style of trajectory-matched approach has several advantages beyond maximising sensitivity to motion models. Coupling measurement parameters together through a trajectory model can improve achievable resolution compared with using separate independent measurement parameters. As an example, a radar's range resolution is determined solely by the signal bandwidth, but its Doppler and Doppler-rate resolution improve with the CPI length.\n\nBecause the measurement parameters are coupled through the trajectory model, as a radar resolves finer Doppler and Doppler-rate measurements it effectively resolves finer trajectory states. This can potentially improve target localisation, as increasingly accurate state measurements could localise a target within a single range bin.\n\\vspace{-0.2ex}\n\n\\subsection{Zero Doppler Crossing}\n\\label{ssec:odbd_zdc}\nThe flexibility of the geometric formulation in Section~\\ref{ssec:odbd_search_constraints} allows radar parameters to be used alongside, and in place of, other orbital parameters to constrain the search space. 
A Doppler shift $f_D$ will define another plane in $\\dot{\\boldsymbol{r}}$ space, given by:\n\\begin{equation}\n \\label{eq:doppler_plane}\n \\frac{\\boldsymbol{\\rho}}{\\rho}\\cdot\\boldsymbol{\\dot{r}} = -\\frac{\\lambda f_D}{2} + \\frac{\\boldsymbol{\\rho}\\cdot\\boldsymbol{\\dot{q}}}{\\rho}~.\n\\end{equation}\nEquation~\\eqref{eq:doppler_plane} can be used to search for a particular Doppler shift instead of one of the orbital parameters. This is useful because it allows a blind search to constrain the search-space solely for objects in orbit at their point of closest approach to the sensor. As an object is passing overhead, its point of closest approach will correspond exactly with it being at zero Doppler, which is when it is most detectable\\footnote{This may not necessarily hold in all instances, depending on particular beampattern and radar cross section factors.}. If a radar is unable to detect an object at its point of closest approach, at its minimum range, there is little value trying to detect it as it moves further away, towards the horizon.\n\nAnother benefit to applying this constraint is that, as Doppler is proportional to the range-rate, this constraint will also restrict the orbit search-space to a point of minimal (or zero) range migration, which greatly simplifies matched-processing\\footnote{Depending on the CPI length, it may be possible to make ${\\rho}(t) \\approx {\\rho}$.}.\n\nThe vast majority of the objects in an Earth-captured orbit are in a circular, or near-circular, orbit. Searching solely for objects in a circular orbit greatly decreases the potential orbital search space. A circular orbit means the eccentricity of the orbital ellipse is zero, $e=0$, and so \\eqref{eq:apogiee_and_perigee_ranges} becomes $a=\\lvert\\boldsymbol{r}\\rvert$. 
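The circular-orbit, zero-Doppler special case can be sketched as well. With $e=0$, the vis-viva sphere has radius $\sqrt{\mu/\lvert\boldsymbol{r}\rvert}$, the position and velocity are perpendicular so the two parallel planes of \eqref{eq:parallel_planes} collapse onto the single plane $\boldsymbol{r}\cdot\boldsymbol{\dot{r}}=0$, and the zero-Doppler condition \eqref{eq:doppler_plane} supplies the third surface. The NumPy fragment below is a hedged illustration (assumed $\mu$, illustrative names, degenerate geometries unhandled).

```python
import numpy as np

MU = 3.986004418e14  # Earth's gravitational parameter [m^3/s^2] (assumed value)

def circular_zero_doppler_velocities(r, q, qdot):
    """Up to two orbital velocities for a circular orbit through position r
    that is at zero Doppler for a sensor at position q with velocity qdot."""
    rho = r - q
    v2 = MU/np.linalg.norm(r)            # vis-viva with a = |r|
    # planes: r.v = 0 (circular orbit) and rho.v = rho.qdot (zero Doppler)
    A = np.vstack([r, rho])
    v0 = np.linalg.lstsq(A, np.array([0.0, np.dot(rho, qdot)]), rcond=None)[0]
    u = np.cross(r, rho)                 # direction of the planes' common line
    aa, bb, cc = np.dot(u, u), 2.0*np.dot(v0, u), np.dot(v0, v0) - v2
    det = bb*bb - 4.0*aa*cc
    if det < 0:
        return []
    return [v0 + t*u for t in ((-bb + np.sqrt(det))/(2*aa),
                               (-bb - np.sqrt(det))/(2*aa))]
```

Here the three searched parameters are just the components of $\boldsymbol{r}$; everything else follows from the circular-orbit and zero-Doppler constraints.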
In a circular orbit, the position and velocity vectors will always be perpendicular, so \\eqref{eq:parallel_planes} simplifies to $\\boldsymbol{r}\\cdot\\boldsymbol{\\dot{r}} = 0$, a single plane instead of two parallel planes. The result is that a three-parameter search, within a region of interest, provides sufficient information to match the closest approach of objects in a circular orbit. For a given position\nin a search-region, there will be at most two possible orbits to match against (determined from the intersection of \\eqref{eq:vis_viva}, \\eqref{eq:parallel_planes}, and \\eqref{eq:doppler_plane}). This type of search approach, attempting uncued detection of the most common types of orbit when they are most detectable, is a far more realisable and practical approach than a completely unbounded search through measurement parameters. Additionally, for an eccentric orbit, the orbital velocity and position are perpendicular at perigee \\cite{Vallado2001fundamentals}. For typical radar detection ranges, an object in a highly eccentric orbit is likely to be within a radar's field of regard solely at, or near, perigee. Because of this, the same simplification of $\\boldsymbol{r}\\cdot\\boldsymbol{\\dot{r}} = 0$ could be used to reduce the number of potential orbits.\n\\vspace{-1.1ex}\n\\subsection{Single Channel Orbit Detection}\n\\label{ssec:odbd_single_channel}\nCoupling together measurement parameters is not necessarily new; however, incorporating such techniques into the detection stage offers some significant advantages. By coupling together the measurement parameters using these ODBD methods, it is possible to apply this matched filtering to single beam radar systems. This could be a post-beamformed surveillance signal from an array or even a classic narrowbeam tracking radar. Because the trajectory model determines all measurement parameters, a particular polynomial phase signal which results in a detection is coupled to a particular location and orbit. 
This is shown in \n\\eqref{eq:full_OD_right_here}. The beamforming parameters do not determine the location; rather the (hypothesised) location determines the beamforming parameters. Removing the array processing, as in \\eqref{eq:single_channel}, does not remove the ability to localise a target using the algorithm.\n\n\\vspace{-3ex}\n\\begin{IEEEeqnarray}{rCL}\n \\label{eq:single_channel}\n \\chi(\\boldsymbol{r},\\boldsymbol{\\dot{r}}) & = \\hspace{-1.5ex} \\int\\limits_{-\\frac{T}{2}}^{\\frac{T}{2}} & \\hspace{-1ex} s(t){d}^*(t-2c^{-1}\\rho(\\boldsymbol{r},\\boldsymbol{\\dot{r}},t)){\\mathrm{e}}^{-j\\frac{2\\pi}{\\lambda}\\dot{\\rho}(\\boldsymbol{r},\\boldsymbol{\\dot{r}},t)t}\\,dt~~~~\n\\end{IEEEeqnarray}\n\nIn the case of a narrow beam radar, the pointing of the beam will be incorporated into the algorithm by determining the search region that is used. Because it handles sensor motion, this type of processing would be ideal for a satellite-based sensor, with the sensor location term $\\boldsymbol{q}(t)$ (or its instantaneous components $\\boldsymbol{q}$, $\\boldsymbol{\\dot{q}}$, $\\boldsymbol{\\ddot{q}}$, etc.) determined by a known orbit rather than the motion of the Earth.\n\n\\vspace{-0.2ex}\n\\section{Simulated Results} \\label{sec:results}\nThese methods have been verified by comparing ODBD-derived measurement parameters, described in Section \\ref{sec:odbd}, of an object in orbit, against measurement parameters propagated from available ephemerides. These ephemeris tracks consist of the six Keplerian orbital elements, as well as several additional parameters describing drag and orbital decay. 
These tracks are propagated with the standard SGP-4 propagator used by the USSPACECOM two-line element sets \\cite{USSTRATCOM}.\n\nThe configuration used for these simulations, matching \\cite{8835821}, is a sensor located at the MWA (at a latitude of 27\\degree~south) in a bistatic configuration with a transmitter in Perth, approximately 600 km further south. This transmitter is taken to be transmitting an FM radio signal at a centre frequency of 100 MHz.\n\nFigure \\ref{fig:standard_circular} shows the path of an object in a near circular orbit at closest approach. The simulated measurement parameters match very well in both angular and delay-Doppler space despite being based on a perfectly circular orbit. Likewise, Figure \\ref{fig:standard-eccentric} also matches with the prediction, noting that the simulation used the matching eccentricity and semi-major axis.\n\nFigure \\ref{fig:eccentricity-mismatch} shows the path of an object in a near circular orbit, but slightly more eccentric than Figure \\ref{fig:standard_circular} ($e=0.00126$) at point of closest approach. The simulated circular path matches well in the delay-Doppler space but diverges in the angular space. Additionally, several other simulated close eccentricities are shown, resulting in changes to the direction of travel but little difference in the delay-Doppler space. The delay-Doppler results suggest good tolerance to small eccentricity changes, however the sensor's angular resolution may limit potential processing intervals.\n\n\\begin{figure}[ht!]\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{images\/FIGURE1.png}\n\\end{center}\n\\vspace{-2ex}\n\\caption{The measurement parameters of a close pass of an object in a near-circular orbit ($e=0.0007$), as well as the simulation made assuming zero eccentricity at point of closest approach. The left plot is angular space and the right is the delay-Doppler. 
Twenty seconds of the true pass is shown with ten seconds of the simulated path overlaid.}\n\\label{fig:standard_circular}\n\\end{figure}\n\\vspace{-2ex}\n\\begin{figure}[ht!]\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{images\/FIGURE2.png}\n\\end{center}\n\\vspace{-2ex}\n\\caption{The measurement parameters of a close pass of an object in an eccentric orbit ($e=0.7$), as well as the four simulations made with the correct eccentricity and semi-major axis. The left plot is angular space and the right is the delay-Doppler. Twenty seconds of the true pass is shown with ten seconds of the simulated paths overlaid.}\n\\label{fig:standard-eccentric}\n\\end{figure}\n\\vspace{-2ex}\n\\begin{figure}[ht!]\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{images\/FIGURE3.png}\n\\end{center}\n\\vspace{-2ex}\n\\caption{The measurement parameters of a close pass of an object in a near-circular orbit ($e=0.00126$), as well as several simulations made using different eccentricities. The left plot is angular space and the right is the delay-Doppler. Twenty seconds of the true pass is shown with ten seconds of the simulated paths overlaid.}\n\\label{fig:eccentricity-mismatch}\n\\end{figure}\n\nThe good agreement between the parameters derived from methods described in this paper, when compared with ephemeris-derived parameters, suggests that earlier results, \\cite{8835821}, can be practically achieved without requiring a priori information.\n\n\\section{Conclusion} \\label{sec:conclusion}\nModern radars are able to form matched-filter products with significant numbers of measurement parameters, especially with digital beamforming and extended processing intervals. Conversely, the motion of an object in a Keplerian orbit is defined by only six parameters. 
Mapping radar measurement parameters from orbital motion parameters constrains the search space for uncued detection; it additionally allows other constraints to be applied to further reduce the search-space, most notably when searching for objects in a circular orbit at their point of closest approach to the sensor. For a hypothesised orbit of this type, all range, Doppler, and angular motion parameters can be derived entirely from a three-dimensional position.\nDetections from this matched filter will correspond to the hypothesised orbit. This means that initial orbit determination can potentially be achieved from a single radar detection.\n\nIn future work, these algorithms will be experimentally validated with MWA observations. Although these methods have been developed for the MWA, they also apply to conventional active space surveillance radar or even to satellite-based sensors. Additionally, it is planned to investigate the sensitivity of these techniques, characterising their accuracy by calculating the Cram\\'er-Rao lower bound (CRLB) on the variance of the initial orbital estimates.\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{\\label{sec:intro} Introduction}\n\nThere exists an accumulation of studies on quantum dynamics of classically\nchaotic systems, e.g., kicked rotators, kicked spin-tops, hydrogen atoms in\ntime-dependent electric field,\nand the standard map\nmodel, to mention a few.~\\cite{NS2001}\n Quantum suppression of energy diffusion,\ndynamical localization\nand other signatures of quantum chaos\nare notable in these dynamics. 
However, most of the systems treated so far are\nconfined to those with a few degrees-of-freedom, and little attention is\npaid to dynamics of quantum many-body systems\\cite{Jona,Prosen,Flambaum}\n whose adiabatic energy levels\nare characterized by\nGaussian orthogonal ensemble (GOE) spectral statistics, i.e., by a hallmark\nof quantum chaos.\nWhile some important\ncontributions~\\cite{Wilkinson88,WilAus92,WilAus95,Bulgac,Cohen00D,Cohen00M,Machida}\nare devoted to dynamics of a kind of many-body systems,\nthose systems are actually described by the random-matrix models,\nand not by deterministic quantum Hamiltonians.\nIt is highly desirable to explore dynamical behaviors of deterministic\nquantum many-body systems\n exhibiting GOE or GUE spectral statistics.\n \n On the other hand, the frustrated quantum spin systems have been receiving\n a wide attention, and we can find their realization in $s=\\frac12$\n antiferromagnetic chains Cu(ampy)Br$_2$~\\cite{Kikuchi} and\n (N$_2$H$_5$)CuCl$_3$,~\\cite{Hagiwara} and in $s=\\frac12$ triangular\n antiferromagnets.~\\cite{KPL}\n The high-lying states of these quantum many-body systems deserve\n being studied\n in the context of \"quantum chaos.\" The advantage of the frustrated quantum\nsystems\n is that one can expect quantum chaotic behaviors\n appearing already in the low energy region\n near the ground state.~\\cite{Nakamura85,Yamasaki04}\n From the viewpoint of real physics of condensed matters, novel features\nobserved\nin the low-energy region are very important and welcome.\nRecalling that in most of deterministic Hamiltonian systems quantum chaotic\nbehaviors\nappear in high-lying states, the role of\nfrustration is essential in the study of\nquantum dynamics from the ground state of deterministic many-body systems with\nGOE or Gaussian unitary ensemble (GUE) level statistics.\n\n\nIn this paper, we investigate dynamics of\n$XXZ$ quantum spin chains which have antiferromagnetic exchange\ninteractions for 
the\nnearest-neighbor (NN) and\nthe next-nearest-neighbor (NNN) couplings. The NNN couplings cause the\nfrustration, i.e., difficulty in achieving the ground state,\nthereby attributing a name of frustrated quantum spin chains to these systems.\nIn fact, the level statistics of the NNN coupled $XXZ$ spin chains\nwithout an applied magnetic field\nhas been studied intensively in Refs.~\\onlinecite{Kudo03,Kudo04}, and\nit has been shown that GOE behavior,\nwhich is typical of quantum chaos, appears already in the low energy\nregion near the ground state.~\\cite{note,note1}\nThe ground-state phase diagram is shown in Ref.~\\onlinecite{Nomura} for\nthe NNN coupled $XXZ$ spin chains without a magnetic field.\n\nA natural extension of the research is to investigate dynamics\nof the frustrated quantum spin chains\nwith an applied periodically oscillating\nmagnetic field. We calculate a time evolution of the system\nstarting from their ground state and analyze\nthe nature of energy diffusion. We shall numerically exhibit\nthe time dependence of energy variance,\nand show how\nthe diffusion coefficients depend on the coupling constants, the anisotropy\nparameters, the magnetic field and the frequency of the\nfield. Furthermore, to compare with the energy diffusion in\nthe case of weakened frustrations, we also investigate dynamics of\nthe corresponding energy diffusion in $XXZ$ spin chains with\nsmall NNN couplings.\n \nThe organization of the paper is as follows:\nIn Sec.~\\ref{sec:method}, we briefly describe a numerical approach to\nobtain the time evolution operator. In Sec.~\\ref{sec:variance}\nwe shall show the time dependence of energy variance starting from\nthe ground state of the many-body system and explain a way to evaluate\ndiffusion coefficients. Section \\ref{sec:diffusion} elucidates\nhow diffusion coefficients depend on field strength and driving\nfrequency. 
\nHere \npower laws are shown to exist in the linear response and non-perturbative\nregions.\nSection \\ref{sec:compare} is devoted to a mechanism of oscillation of energy\ndiffusion. \nConclusions are given in Sec.~\\ref{sec:conc}.\n\n\\section{\\label{sec:method} Numerical Procedure}\n\nWe give Hamiltonian for the NN and NNN exchange-coupled\nspin chain on $L$ sites with a time-periodic oscillating magnetic field as\n\\begin{equation}\n\\mathcal{H}(t)=\\mathcal{H}_0 +\\mathcal{H}_1(t),\n\\label{eq:H}\n\\end{equation}\nwhere\n\\begin{eqnarray}\n \\mathcal{H}_0 &=& J_1\\sum_{j=1}^{L}(S^x_j S^x_{j+1} +S^y_j S^y_{j+1}\n +\\Delta S^z_j S^z_{j+1})\n\\nonumber \\\\\n &+& J_2\\sum_{j=1}^{L}(S^x_j S^x_{j+2} +S^y_j S^y_{j+2}\n +\\Delta S^z_j S^z_{j+2}) \\nonumber \\\\\n&-& \\sum_{j=1}^{L}B^z_j(0)S^z_j,\n\\end{eqnarray}\n\\begin{equation}\n \\mathcal{H}_1(t)= \\sum_{j=1}^{L}B^z_j(0)S^z_j -\\sum_{j=1}^{L}B^z_j(t)S^z_j.\n\\end{equation}\nHere, $S_j^{\\alpha}=(1\/2)\\sigma_j^{\\alpha}$ and \n$(\\sigma^x_j, \\sigma^y_j, \\sigma^z_j)$ are the Pauli matrices on the\n$j$th site;\nthe periodic boundary conditions (P.~B.~C.) are imposed.\nThe magnetic field $B^z_j$ on $j$th site along the $z$ axis is chosen to\nform a traveling wave:\n\\begin{equation}\n B^z_j(t)=B_0\\sin\\left( \\omega t-\\frac{2\\pi j}L\\right).\n\\label{eq:jiba}\n\\end{equation}\nThe period of Eq.~(\\ref{eq:H}) as well as Eq.~(\\ref{eq:jiba}) is\n $T=2\\pi\/\\omega$. Because of the coexisting spatial P.~B.~C., however,\n the effective period of the adiabatic energy spectra is\n given by $T'=T\/L=2\\pi\/(\\omega L)$. 
In other words, the period of the\nHamiltonian operator is $T$, and the spectral flow of the\n eigenvalues has the effective period $T'$.\nThis periodicity property comes from the\n traveling-wave form of Eq.~(\\ref{eq:jiba}), and is advantageous for\n obtaining a sufficient number of relevant data in each period $T$.\n\nWhen $J_1>0$ and $J_2>0$, \nthe unperturbed Hamiltonian $\\mathcal{H}_0$\nwithout coupling to the magnetic field is translationally invariant and \ncorresponds to a\nfrustrated antiferromagnetic quantum spin model\nexhibiting GOE level statistics.~\\cite{Kudo03,Kudo04} \nIf $J_2=0$ and $B_0=0$, it describes an\nintegrable and non-frustrated model. Before calculating energy\ndiffusion, we have to consider the symmetries of the model. We divide\nthe Hamiltonian matrix into sectors which have the same quantum\nnumbers. In the Hamiltonian of Eq.~(\\ref{eq:H}), the total $S^z$ $(S^z_{\\rm tot})$ is\nconserved. The eigenstates with different $S^z_{\\rm tot}$ are\nuncorrelated.\nOn the other hand, the non-uniform magnetic field\nbreaks the translational symmetry, and leads\nto mixing between manifolds of different wave-number values.\n\nBefore proceeding to consider the time evolution of a wave\nfunction, we should note: If we use the original\nHamiltonian\n$\\mathcal{H}(t)=\\mathcal{H}_0 +\\mathcal{H}_1(t)$ as it\nstands, the mean level spacing of eigenvalues would change\ndepending\non $J_2$, $\\Delta$, and $B_0$.\nTo see a universal feature of the energy diffusion, it is\nessential to\nscale the Hamiltonian so that the full range of adiabatic\nenergy eigenvalues becomes almost free from these\nparameters.\nNoting that this energy range for the original Hamiltonian\nis \nof order $L$ when $J_1=J_2=\\Delta=1$,\nwe define the scaled Hamiltonian $H(t)=H_0+H_1(t)$\nso that the full energy range equals $L$ at $t=0$, \nwhich will be used throughout the text.\nThe Schr\\\"odinger equation is then given by\n\\begin{equation}\n i\\hbar 
\\frac{\\partial}{\\partial t}|\\psi (t)\\rangle\n =H(t)|\\psi (t)\\rangle\n =[H_0+H_1(t)] |\\psi (t)\\rangle.\n\\label{eq:Schrodinger}\n\\end{equation}\nThe solution of Eq.~(\\ref{eq:Schrodinger}) consists\nof a sequence of infinitesimal processes:\n\\begin{eqnarray}\n |\\psi (t)\\rangle &=& U(t;t-\\Delta t) U(t-\\Delta t;t-2\\Delta t)\n\\nonumber \\\\\n&\\cdots& U(2\\Delta t;\\Delta t)\n U(\\Delta t;0) |\\psi (0)\\rangle.\n\\end{eqnarray}\nThe initial state $|\\psi(0)\\rangle$ is taken to be the ground state,\nsince our concern lies in the dynamical behaviors starting from the\nmany-body ground state.\nTo calculate a time evolution operator $U(t+\\Delta t;t)$\nfor each short time step $\\Delta t$, we use the\nfourth-order decomposition formula for the exponential\noperator:~\\cite{Suzuki90}\n\\begin{eqnarray}\n U(t+\\Delta t;t)&=& S(-i p_5\\Delta t\/\\hbar,t_5)\nS(-i p_4\\Delta t\/\\hbar,t_4) \\nonumber\\\\\n&\\cdots& S(-i p_2\\Delta t\/\\hbar,t_2)\n S(-i p_1\\Delta t\/\\hbar,t_1),\n\\label{eq:U}\n\\end{eqnarray}\nwhere\n\\begin{equation}\n S(x,t)=\\exp\\left( \\frac{x H_1(t)}2 \\right) \\exp(x H_0)\n\\exp\\left( \\frac{x H_1(t)}2 \\right).\n\\end{equation}\nHere, the $t_j$'s and $p_j$'s are given by\n\\begin{eqnarray}\n t_j &=& t+(p_1+p_2+\\cdots +p_{j-1}+p_j\/2)\\Delta t, \\nonumber\\\\\n p &=& p_1 =p_2 =p_4=p_5 = \\frac{1}{4-4^{1\/3}} = 0.4144907717943757\\cdots, \\nonumber\\\\\n p_3&=&1-4p.\n\\end{eqnarray}\nThe numerical procedure based on the above decompositions is quite effective\nwhen $H_1(t)$ and $H_0$\ndo not commute and each time step is very small. \nOur computation below is concerned mainly with\nthe system of $L=10$, whose $S^z_{\\rm tot}=1$ manifold involves 210\nlevels. To check the validity of our assertion, some of the results will be\ncompared to those for the system of $L=14$ and \n$S^z_{\\rm tot}=4$ whose manifold involves 364 levels. 
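The decomposition above can be sketched on a minimal example. The following illustrates Eq.~(\\ref{eq:U}) for a single-spin toy Hamiltonian $H_0=a\\sigma^x$, $H_1(t)=b(t)\\sigma^z$ with $\\hbar=1$; this toy Hamiltonian and the closed-form $2\\times 2$ exponentials are illustrative assumptions, not the many-body model of Eq.~(\\ref{eq:H}).

```python
import math, cmath

def matmul(A, B):
    # Product of two 2x2 complex matrices.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def exp_z(theta):
    # exp(-i*theta*sigma_z), diagonal in the z basis.
    return [[cmath.exp(-1j * theta), 0.0], [0.0, cmath.exp(1j * theta)]]

def exp_x(theta):
    # exp(-i*theta*sigma_x).
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -1j * s], [-1j * s, c]]

# Suzuki's fourth-order coefficients: p = 1/(4 - 4**(1/3)), p3 = 1 - 4p.
p = 1.0 / (4.0 - 4.0 ** (1.0 / 3.0))
P = [p, p, 1.0 - 4.0 * p, p, p]

def S(pj, tj, dt, a, b):
    # S(x,t) = exp(x*H1(t)/2) exp(x*H0) exp(x*H1(t)/2) with x = -i*pj*dt,
    # H0 = a*sigma_x, H1(t) = b(t)*sigma_z (hbar = 1).
    half = exp_z(0.5 * pj * dt * b(tj))
    return matmul(half, matmul(exp_x(pj * dt * a), half))

def step(t, dt, a, b):
    # U(t+dt;t) = S(p5,t5) S(p4,t4) S(p3,t3) S(p2,t2) S(p1,t1),
    # with t_j = t + (p_1 + ... + p_{j-1} + p_j/2)*dt.
    U = [[1.0, 0.0], [0.0, 1.0]]
    acc = 0.0
    for pj in P:
        U = matmul(S(pj, t + (acc + 0.5 * pj) * dt, dt, a, b), U)
        acc += pj
    return U
```

For a time-independent $H_1$, one such step reproduces the exact propagator up to a local error of $O(\\Delta t^5)$, which can be verified by halving $\\Delta t$ and observing the error shrink by a factor close to $2^5=32$.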
\n\n\n\\section{\\label{sec:variance} Time Dependence of Energy Variance}\n\n\\begin{figure}\n\\includegraphics[width=8cm]{fig1.eps}\n\\caption{\\label{fig:1} (Color online) \nTime evolution of energy diffusion for (a) $L=10$ and (b) $L=14$. The\n parameters are the following: $J_1=J_2=1.0$,\n $\\Delta=0.3$, $B_0=1.0$.}\n\\end{figure}\n\nWe calculate the time evolution of the state and evaluate\nenergy variances at each integer multiple of\nthe effective period $T'=T\/L=2\\pi \/(\\omega L)$.\nAs mentioned already, we choose the ground state as an initial state,\nfollowing the spirit of realistic condensed-matter physics.\nThis viewpoint is in contrast to that of\nrandom matrix models, where initial states\nare chosen among high-lying\nones.~\\cite{Wilkinson88,WilAus92,WilAus95,Bulgac,Cohen00D,Cohen00M} \nConsequently,\nthe energy variance of our primary concern is\nthe \\textit{variance around the ground state energy} $E_0$\nand is defined by\n\\begin{equation}\n \\delta E(t)^2= \\langle \\psi (t)|[H(t)-E_0]^2 |\\psi (t) \\rangle .\n\\label{eq:variance}\n\\end{equation}\nThe time evolution of $\\delta E(t)^2$ is shown in Fig.~\\ref{fig:1}. The\nparameters except for $\\omega$ are fixed. The larger $\\omega$ is, the faster\nthe energy diffusion grows, which is consistent with our expectations. The\ndetails will be explained in Sec.~\\ref{sec:diffusion}.\nFor a wide range of values of the next-nearest-neighbor (NNN) coupling $J_2$\nand exchange anisotropy $\\Delta$, the early stage of quantum dynamics\ncomes to show normal diffusion in energy space, i.e., a linear growth of \n$\\delta E(t)^2$ in time.\nAs we proceed to investigate this normal diffusion process, \nenergy variances will\neventually saturate because the system size we consider is finite. On the\nother hand, energy variances can also saturate for another reason,\nnamely, the dynamical localization effect associated with a periodic\nperturbation. 
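In practice, a diffusion coefficient can be extracted from sampled values of $\\delta E(t)^2$ by a least-squares line fit over the window of steepest growth. A minimal sketch follows; the window-selection rule ($w+1$ consecutive samples with the largest end-to-end slope) is an illustrative assumption.

```python
def fit_line(ts, vs):
    # Ordinary least-squares fit vs ~ D*t + const; returns (D, const).
    n = len(ts)
    tb, vb = sum(ts) / n, sum(vs) / n
    sxx = sum((t - tb) ** 2 for t in ts)
    sxy = sum((t - tb) * (v - vb) for t, v in zip(ts, vs))
    return sxy / sxx, vb - (sxy / sxx) * tb

def diffusion_coefficient(ts, vs, w=4):
    # Fit over the w+1 consecutive samples whose end-to-end slope is largest,
    # mimicking a fit around the steepest growth in the first period.
    i0 = max(range(len(ts) - w),
             key=lambda i: (vs[i + w] - vs[i]) / (ts[i + w] - ts[i]))
    return fit_line(ts[i0:i0 + w + 1], vs[i0:i0 + w + 1])[0]
```

For data that grow linearly and then saturate, the steepest-window rule automatically discards the saturated tail, so the fit is performed well before saturation sets in.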
\n\nDuring the first period, $\\delta E(t)^2$ shows a linear\ngrowth in time as shown in Fig.~\\ref{fig:1} (a). The range of the linear\ngrowth is not sufficiently wide because the number of levels is not\nlarge enough for\n$L=10$. However, if the number of levels as well as the system\nsize is increased, the linear region may be extended. \nIn fact, the linear growth of $\\delta E(t)^2$ during the first\nperiod can be recognized more clearly for $L=14$ than for $L=10$ \n[see Fig.~\\ref{fig:1} (b)].\nThe diffusion coefficient has to be determined well before the \ntime at which saturation begins. \nWe determine the diffusion coefficient $D$ from the fitting\n\\begin{equation}\n \\delta E(t)^2 = Dt +\\mbox{\\rm const.}\n\\label{eq:defD}\n\\end{equation}\nto some data points around the largest slope\nin the first period, where the normal diffusion is expected. \n\n\\section{\\label{sec:diffusion} Diffusion coefficients: dependence on field\nstrength and frequency}\n\nSince the time evolution of our system starts from the ground state,\nwe consider non-adiabatic regions where inter-level transitions\nfrequently occur. In other words, we exclude the near-adiabatic, or\nso-called Landau-Zener (LZ), region\nwhere the driving frequency $\\omega$ is much smaller than the mean level\nspacing divided by the Planck constant. \nBecause of a large energy gap between the ground and first\nexcited states,\nthe near-adiabatic region cannot result in notable energy diffusion and\nis left outside the scope of the present study.\n\nBeyond the LZ region, however, so long as the changing rate $\\dot{X}$ of a\nperturbation\nparameter is not very large,~\\cite{note2} the diffusion coefficient can be calculated\nusing the Kubo formula. We call such a parameter regime the ``linear\nresponse'' regime. 
In the linear response regime, $D\\propto\\dot{X}^2$\n(see, e.g., Refs.~\\onlinecite{WilAus92} and \\onlinecite{WilAus95}).\nWhen $\\dot{X}$ is large, however, the perturbation theory fails. We call\nsuch a parameter regime the ``non-perturbative'' regime. In the \nnon-perturbative regime, the diffusion coefficient is smaller than that\npredicted by the Kubo formula.~\\cite{WilAus95,Cohen00D} According to\nRef.~\\onlinecite{WilAus95}, $D \\propto \\dot{X}^{\\gamma}$ with\n$\\gamma \\le 1$ in the non-perturbative regime. We note that\n$\\dot{X}\\propto B_0\\omega$ in this paper since the perturbation is given by\nEq.~(\\ref{eq:jiba}). \nBoth Refs.~\\onlinecite{WilAus95} and \\onlinecite{Cohen00D} are based on\nrandom matrix models, which are utterly different from our\ndeterministic one.\n\n\\begin{figure}\n\\includegraphics[width=8cm]{fig2.eps}\n\\caption{\\label{fig:2} Driving frequency dependence of the diffusion\n coefficients. The dash-dotted line and the solid line\n are guides for the eye for $D\\propto \\omega^{\\beta}$ with $\\beta=1$ and\n $2$, respectively. \nThe symbols ($\\diamond$) are the average of the diffusion coefficients\n calculated for several values of $\\Delta$ ($0.3\\le \\Delta \\le 0.8$). The\n parameters are the following: $L=10$, $J_1=1.0$; (a) $J_2=1.0$, \n(b) $J_2=0.2$.}\n\\end{figure}\n\n\\begin{figure*}\n\\includegraphics[width=12cm]{fig3.eps}\n\\caption{\\label{fig:3} Dependence of the diffusion\n coefficients on the product of field strength $B_0$ and driving\n frequency $\\omega$ for (a) $L=10$ and (b) $L=14$. \nThe symbols ($\\diamond$) are the average of the diffusion coefficient\n calculated for several values of $\\Delta$ ($0.3\\le \\Delta \\le 0.8$). \nThe parameters are $J_1=J_2=1.0$; for\n the inset, $J_1=1.0$ and $J_2=0.2$. \nThe dash-dotted line and the solid line\n are guides for the eye for $D\\propto (B_0\\omega)^{\\beta}$ with $\\beta=1$\n and $2$, respectively. 
Some error bars are too short to see.}\n\\end{figure*}\n\nDiffusion coefficients as a function of $\\omega$\nare shown in Fig.~\\ref{fig:2}.\nThe numerical results\n are almost consistent with the argument of Ref.~\\onlinecite{WilAus95}.\nIn Fig.~\\ref{fig:2}(a), where $J_2=1.0$\n(i.e., the fully-frustrated case), $D$ increases\nwith $B_0$ for a fixed value of $\\omega$. In a small-$\\omega$ regime,\n$D\\propto \\omega^{\\beta}$ with $\\beta=2$, though $\\beta >2$ for small $B_0$. \nThe latter is merely attributed to the fact that the\nperturbation is\ntoo small to observe sufficient energy diffusion \nwhen both $\\omega$ and $B_0$ are small. \nIn a large-$\\omega$ regime, $\\beta=1$.\nNamely, we observe that $\\beta=2$ in the linear response\nregime\nand $\\beta=1$ in the non-perturbative regime.\nIn fact, for a large-$\\omega$ regime, \nthe increase of energy variances per effective\nperiod hardly depends on $\\omega$ by the time when $\\delta E(t)^2$\n starts to decrease.\nThis explains the observation\nthat $D\\propto \\omega^{\\beta}$ with $\\beta=1$ in both\nFig.~\\ref{fig:2}(a) and Fig.~\\ref{fig:2}(b). \nLet us represent the increase of energy variances per\neffective period\nas $\\Delta(\\delta E^2)$. From the definition of $D$,\ni.e., Eq.~(\\ref{eq:defD}), $D\\propto \\Delta(\\delta E^2)\/T'$.\nIf\n$\\Delta(\\delta E^2)$ is constant, $D\\propto \\omega$.\n\nOn the other hand, in Fig.~\\ref{fig:2}(b), where $J_2=0.2$\n(i.e., a weakly-frustrated\n case), the region with $\\beta=1$ is wider. 
\nFor small $B_0$, the behavior $\\beta>2$ in the small-$\\omega$ regime is\nthe same as in the\ncase of $J_2=1.0$.\nFor small $B_0$ and around $\\omega\\sim 1$,\n$D$ seems to decrease rather than increase, \nespecially in the case of $J_2=0.2$.\nSome kind of localization may have occurred \nin the very early stage of energy diffusion for large\n$\\omega$ and small $B_0$,\nleading to the suppression of $D$.\n\n\nIt is seen more clearly in\nFig.~\\ref{fig:3} how the behavior of $D$ changes between the linear\nresponse regime and the non-perturbative regime. \nThe diffusion coefficient $D$ obeys the power law\n$D\\propto (B_0 \\omega)^{\\beta}$ with its power $\\beta$ being two in the\nlinear response regime and $\\beta=1$ in the non-perturbative regime.\nFor small $B_0\\omega$, the power law seems to fail because of some\nfinite-size effects. \nThis universal feature is confirmed in systems of larger size.\nActually, $D$ obeys the power law \nbetter for $L=14$ [Fig.~\\ref{fig:3}(b)] than for $L=10$\n[Fig.~\\ref{fig:3}(a)]. In addition, error bars are shorter for $L=14$\nthan for $L=10$.\nHere, we have used the data for\n$\\omega \\le 1$. We cannot expect meaningful results in a large-$\\omega$\nregime since, as mentioned above, energy diffusion is not normal there.\n\nFigure~\\ref{fig:3} suggests that the strength of frustration should affect the\nrange of the linear response regime.\nThe linear response regime is narrower for $J_2=0.2$ than for $J_2=1.0$,\nwhile the non-perturbative regime is wider for $J_2=0.2$ than for\n$J_2=1.0$. In fact, when $J_2=0$ (i.e., the integrable case), \n$D\\propto (B_0 \\omega)^{\\beta}$ with $\\beta=1$ for almost all the data\nin the same range of $B_0\\omega$ as that of Fig.~\\ref{fig:3}. 
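The exponents $\\beta$ quoted above can be estimated from pairs $(B_0\\omega, D)$ by a least-squares line fit in log-log coordinates. A minimal sketch (the data in the usage check are synthetic, not the computed diffusion coefficients):

```python
import math

def fit_power_law(xs, ys):
    # Least-squares fit of log(ys) ~ beta*log(xs) + log(C); returns (beta, C).
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    xb, yb = sum(lx) / n, sum(ly) / n
    beta = (sum((u - xb) * (v - yb) for u, v in zip(lx, ly))
            / sum((u - xb) ** 2 for u in lx))
    return beta, math.exp(yb - beta * xb)
```

Applied separately to the small-$B_0\\omega$ and large-$B_0\\omega$ portions of the data, such a fit distinguishes the quadratic (linear response) and linear (non-perturbative) regimes.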
\n\n\\section{\\label{sec:compare} Oscillation of energy diffusion in \nweakly-frustrated cases}\n\nWe shall now proceed to investigate\noscillations of diffusion which occur in the non-perturbative regime of\na weakly-frustrated case.\nFigure~\\ref{fig:4}(a) shows an example of oscillatory diffusion for\n$J_2=0.2$, which is compared with a non-oscillatory\ndiffusion for $J_2=1.0$. \nThe two examples have the same set of parameters except for $J_2$.\nHowever, the cases of $J_2=1.0$ and $J_2=0.2$ are in the linear response\nregime and in the non-perturbative regime, respectively.\nThe variance for both cases\nshows normal diffusion at the very early stage of time evolution.\nFor $J_2=1.0$, the energy variance seems to saturate after a normal\ndiffusion time. On the contrary, the energy variance for $J_2=0.2$ shows\nlarge-amplitude oscillations.\nTo investigate in more detail, we introduce another definition of energy \nvariance:\n\\begin{equation}\n \\delta \\tilde{E}(t)^2 =\\langle \\psi (t)|[H(t)-\\langle\\psi (t) |\nH(t)|\\psi(t)\\rangle]^2\n |\\psi (t)\\rangle .\n\\end{equation}\nThis follows the standard definition of the variance and quantifies\nthe degree of energy diffusion around the \\textit{time-dependent\nexpectation value} of the energy.\nThe time evolutions of $\\delta \\tilde{E}(t)^2$ \ncorresponding to those of $\\delta E(t)^2$ are shown\nin Fig.~\\ref{fig:4}(b). 
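The two definitions are directly related. Writing $\\langle H\\rangle_t = \\langle\\psi(t)|H(t)|\\psi(t)\\rangle$ and inserting $H(t)-E_0=[H(t)-\\langle H\\rangle_t]+[\\langle H\\rangle_t-E_0]$ into Eq.~(\\ref{eq:variance}), the cross term vanishes upon taking the expectation value, so that\n\\begin{equation*}\n \\delta E(t)^2 = \\delta \\tilde{E}(t)^2 +[\\langle H\\rangle_t -E_0]^2 .\n\\end{equation*}\nHence $\\delta E(t)^2$ combines the spread around the moving mean energy with the squared displacement of that mean from $E_0$; an oscillation of $\\langle H\\rangle_t$ alone can therefore produce large-amplitude oscillations of $\\delta E(t)^2$ even while $\\delta \\tilde{E}(t)^2$ remains small.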
\nIn the fully-frustrated case ($J_2=1.0$), the profile of \n$\\delta\\tilde{E}(t)^2$ is similar to that of $\\delta E(t)^2$.\nThis observation indicates that the occupation probability spreads\nover all the levels after normal diffusion of energy.\n\n\\begin{figure}\n\\includegraphics[width=8cm]{fig4.eps}\n\\caption{\\label{fig:4} Examples of the time evolution of energy variances: \n(a) $\\delta E(t)^2$ and (b) $\\delta \\tilde{E}(t)^2$ (see text).\nSolid lines are for $J_2=1.0$; broken lines, $J_2=0.2$.\nThe parameters are\nthe following: $L=10$, $J_1=1.0$, $\\Delta=0.3$, $B_0=1.5$, $\\omega=0.5$.}\n\\end{figure}\n\nOn the contrary, in a weakly-frustrated case ($J_2=0.2$) \nin Fig.~\\ref{fig:4}, $\\delta \\tilde{E}(t)^2$\nshows small-amplitude oscillations reflecting\nthe large-amplitude oscillations of $\\delta E(t)^2$.\nMost of the time, $\\delta \\tilde{E}(t)^2$ for $J_2=0.2$ is smaller than that\nfor $J_2=1.0$. Furthermore, minima of $\\delta \\tilde{E}(t)^2$ come just\nbefore minima and maxima of $\\delta E(t)^2$. \nThese observations indicate the following: the occupation probability,\nwhich is diffusing slowly,\nstays clustered around the expectation value of the energy and oscillates\ntogether with it in energy space.\nTo make the picture of such behavior clearer, let us\nconsider the occupation probability described by\n\\begin{equation}\n P_t(E_n)=| \\langle \\phi_n | \\psi(t) \\rangle |^2,\n\\end{equation}\nwhere $|\\phi_n\\rangle$ is the $n$th excited eigenstate of $H_0$:\n\\begin{equation}\n H_0 |\\phi_n\\rangle =E_n |\\phi_n\\rangle .\n\\end{equation}\nWhen $t=0$, $P_t(E_n)$ is given by the Kronecker delta:\n$P_0(E_n)=\\delta_{E_n,E_0}$, where $E_0$ is the energy of the ground\nstate. As $t$ increases, $P_t(E_n)$ forms a wave packet in energy space\nand moves to\nhigher levels. When the wave packet reaches the highest levels, it\nis reflected like a soliton and moves back to lower levels. 
Such behavior is\nrepeated, although the wave packet of $P_t(E_n)$ broadens slowly.\nWe have actually watched this soliton-like behavior of $P_t(E_n)$ in\nthe form of an animation.\n\n\\begin{figure}\n\\includegraphics[width=8cm]{fig5.eps}\n\\caption{\\label{fig:5} Parts of energy spectra depending on\n adiabatically fixed time $t$ with\n $0\\le t \\le T\/4$. Effective period is $\\omega T'=2\\pi\/10$. The\n parameters are the following: $L=10$, $J_1=1.0$, $\\Delta=0.3$, $B_0=0.8$;\n (a) $J_2=1.0$, (b) $J_2=0.2$.}\n\\end{figure}\n\nThe picture discussed above is also supported by the adiabatic energy\nspectra in Fig.~\\ref{fig:5}. \nFigures~\\ref{fig:5}(a) and \\ref{fig:5}(b) correspond to fully- and\nweakly-frustrated cases, respectively.\nMany more sharp avoided crossings appear in\nFig.~\\ref{fig:5}(b) than in Fig.~\\ref{fig:5}(a). Some energy levels appear\nto cross, although in fact they come very close to each other and never\ncross. At a sharp-avoided-crossing point,\nthe Landau-Zener formula for two adjacent levels is applicable.\nThen the nonadiabatic transition leads to one-way transfer of a\npopulation from a level to its partner, and fails to result in energy\ndiffusion. \nFor small $J_2$, therefore,\nsuccessive sharp avoided crossings can suppress diffusion of energy.\n\nWe believe that large-amplitude oscillations of $\\delta E(t)^2$ should\nbe one of the characteristic features of the non-perturbative regime in this\nfinite frustrated spin system.\nIn fact, similar oscillations of energy variance are seen for large\n$\\omega$ and large $B_0$ even when $J_2=1.0$, though the energy variance\nrapidly converges after one or two periods. How long such oscillations\ncontinue should depend mainly on $J_2$.\n\n\\begin{figure}\n\\includegraphics[width=8cm]{fig6.eps}\n\\caption{\\label{fig:6} (Color online) Level-spacing distributions at $t=\\pi\/4$ \nfor the lowest 300 levels from\n the ground state (about 10\\% of all 3003 levels). 
Blue histogram is for\n $J_2=1.0$; red bars, $J_2=0.2$; solid curve, GOE spectral statistics.\nThe other\n parameters are the following: $L=14$, $S^z_{\\rm tot}=1$, $J_1=1.0$,\n $\\Delta=0.3$, $B_0=0.8$.\nThe inset is for all levels when $J_2=1.0$. The numerical methods to\n obtain the level-spacing distributions are described in\n Refs.~\\onlinecite{Kudo03,Kudo04}. }\n\\end{figure}\n\nIt is a notable fact that, common to both $J_2=1.0$ and $J_2=0.2$, the\nlevel-spacing distributions in Fig.~\\ref{fig:6} show GOE behavior. This\nGOE behavior in the adiabatic energy spectra appears for an arbitrary\nfixed time except for special points\nsuch as $t=T=2\\pi\/\\omega$. This fact suggests that dynamics can reveal\nvarious generic features of quantum many-body systems\n which can never be explained by level\nstatistics. The level-spacing distributions in Fig.~\\ref{fig:6} convey\nanother crucial fact: they have been\ncalculated for low energy levels because our interest is in the low\nenergy region around the ground state. We have confirmed that the\nlevel-spacing distribution for all energy levels, shown in the inset, is also described by\nGOE spectral statistics. 
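For reference, the comparison in Fig.~\\ref{fig:6} amounts to rescaling the nearest-neighbor spacings of the sorted adiabatic spectrum to unit mean and comparing their histogram with the GOE Wigner surmise. A minimal sketch follows; the global rescaling used here is an illustrative simplification of the unfolding procedures of Refs.~\\onlinecite{Kudo03,Kudo04}.

```python
import math

def normalized_spacings(levels):
    # Nearest-neighbor spacings of a sorted spectrum, rescaled to unit mean.
    s = [b - a for a, b in zip(levels, levels[1:])]
    mean = sum(s) / len(s)
    return [x / mean for x in s]

def wigner_goe(s):
    # GOE Wigner surmise P(s) = (pi/2) s exp(-pi s^2/4); unit norm and mean.
    return 0.5 * math.pi * s * math.exp(-0.25 * math.pi * s * s)
```

The surmise is normalized and has mean spacing one, so the rescaled spacings can be binned and compared with `wigner_goe` directly.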
It is typical of this frustrated spin system that GOE level\nstatistics is observed already in the low energy region.~\\cite{Kudo04} \n\n\\section{\\label{sec:conc}Conclusions}\n\nWe have explored the energy diffusion from the ground state\nin frustrated quantum $XXZ$ spin chains under an applied oscillating\nmagnetic field.\nIn a wide parameter region of next-nearest-neighbor (NNN) coupling $J_2$\nand exchange anisotropy $\\Delta$,\nthe diffusion is normal in the early stage of the time evolution.\nDiffusion coefficients $D$ obey power laws\nwith respect to\nboth the field strength and driving frequency, with the power being two in the\nlinear response regime and equal to unity in the\nnon-perturbative regime.\nIn the case of weakened frustration with small $J_2$, \nwe find oscillation of energy diffusion,\nwhich is attributed to a\nnon-diffusive, ballistic nature of\nthe underlying motion in energy space.\nIn this way, the energy diffusion reveals generic features of the\nfrustrated quantum spin chains, which cannot be captured by the analysis\nof level statistics.\n\n\n\\begin{acknowledgments}\nThe authors would like to thank T. 
Deguchi.\n The present study was partially supported by Hayashi Memorial\n Foundation for Female Natural Scientists.\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzheip b/data_all_eng_slimpj/shuffled/split2/finalzzheip new file mode 100644 index 0000000000000000000000000000000000000000..f68ed07e8ac6c0640af6ce5e849dcd7c24a8c00a --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzheip @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\t\n\tIn contemporary financial option valuation theory, jump-diffusion processes form a principal class of models for the \n\tevolution of the underlying asset prices, see~e.g.~Cont \\& Tankov~\\cite{CT04book} and Schoutens~\\cite{S03book}.\n\tThe first jump-diffusion process was proposed in 1976 by Merton~\\cite{M76}.\n\tIn this classical model, the relative jump sizes are assumed to be lognormally distributed.\n\tA wide variety of jump-diffusion processes, and more generally, exponential L\\'{e}vy processes, for asset prices has been introduced \n\tin the literature since then, for example the familiar variance gamma (VG), normal inverse Gaussian (NIG) and Carr--Geman--Madan--Yor \n\t(CGMY) models~\\cite{CT04book,S03book}.\n\t\n\tIn this paper we consider the jump-diffusion model proposed by Kou~\\cite{K02}.\n\tIn this model, the relative jump sizes are given by a log-double-exponential distribution.\n\tJust like the models mentioned above, Kou's jump-diffusion model has become popular in financial option valuation theory and practice.\n\tIn this paper we are interested in the valuation of European-style options under a direct extension of Kou's single \n\tasset jump-diffusion model~\\cite{K02} to two assets.\n\tHere the (finite activity) jumps in the two asset prices are assumed to occur contemporaneously.\n\tFinancial option valuation theory then yields a two-dimensional time-dependent partial integro-differential equation 
(PIDE)\n\tthat must be satisfied for the values of European two-asset options. \n\tHere the integral part, which stems from the contribution of the jumps, is nonlocal: it is taken over the whole, two-dimensional asset \n\tprice domain.\n\tIn general, (semi-)closed analytical solutions to this PIDE are not available in the literature.\n\tAccordingly, in the present paper, we investigate the effective numerical solution of the two-dimensional time-dependent Kou PIDE.\n\t\n\tFor the numerical solution, we follow the well-known and general method of lines (MOL) approach.\n\tThe two-dimensional Kou PIDE is first discretized in space by finite differences, and the resulting large, semidiscrete system of \n\tordinary differential equations (ODEs) is subsequently discretized in time by a suitable implicit time stepping scheme.\n\tThe main challenges for the efficient and stable numerical solution are the treatment of the two-dimensional integral part and the \n\ttreatment of the two-dimensional PDE part, which includes a mixed spatial derivative term.\n\t\n\tSpatial discretization of the PIDE leads to a large, dense matrix for the integral part.\n\tIn each time step of the schemes under consideration, products of this matrix with one or more given vectors need to be computed,\n\twhich can form a computational burden.\n\tFor the one-dimensional Kou PIDE, however, Toivanen~\\cite{T08} derived a simple algorithm for evaluating these matrix-vector \n\tproducts that has optimal computational cost.\n\tA key result of our paper is a generalization of Toivanen's algorithm to the two-dimensional Kou PIDE that maintains optimal \n\tcomputational cost.\n\t\n\tFor the temporal discretization of semidiscretized one-dimensional PIDEs arising in financial option valuation, various authors have \n\tproposed operator splitting schemes where the (stiff) PDE part is handled implicitly and the (nonstiff) integral part explicitly.\n\tCont \\& Voltchkova~\\cite{CV05} considered an {\\it 
implicit-explicit (IMEX)} splitting scheme where the PDE part is handled by \n\tthe backward Euler method and the integral part by the forward Euler method.\n\tThis IMEX Euler scheme is only first-order consistent.\n\tA variety of higher-order IMEX schemes for PIDEs in finance has been studied since, e.g.~by Briani, Natalini \\& Russo~\\cite{B07},\n\tFeng \\& Linetsky~\\cite{FL08}, Kwon \\& Lee~\\cite{KL11} and Salmi \\& Toivanen~\\cite{ST14}.\n\tThe latter authors proposed the IMEX CNAB scheme, where the PDE part is treated by the Crank--Nicolson method and the integral part \n\tby the second-order Adams--Bashforth method.\n\tThe IMEX CNAB scheme has been successfully applied to two-dimensional option valuation PIDEs in e.g.~\\cite{HT16,STS14}. \n\t\n\tTavella \\& Randall~\\cite{TR00book} and d'Halluin, Forsyth \\& Vetzal~\\cite{HFV05} considered an alternative approach where the PDE \n\tpart is treated by the Crank--Nicolson method and a fixed-point iteration on the integral part is performed in each time step. 
\n\tThis approach has been applied to two-dimensional option valuation PIDEs in Clift \\& Forsyth~\\cite{CF08}, including the \n\ttwo-dimensional Kou PIDE.\n\tWhen the number of fixed-point iterations is frozen, one arrives at a particular IMEX scheme.\n\t\n\tFor the efficient temporal discretization of semidiscrete two-dimensional PIDEs, a subsequent important improvement is obtained \n\tby using, instead of the Crank--Nicolson method, an {\\it alternating direction implicit (ADI)} splitting scheme for the \n\ttwo-dimensional PDE part.\n\tIn the computational finance literature, a range of effective, second-order ADI schemes has been developed and analyzed for \n\tmulti-dimensional PDEs (without integral part), where in each time step the implicit unidirectional stages are combined with \n\texplicit stages involving the mixed derivative terms, see e.g.~\\cite{H17book,HT16}.\n\tIn this paper, we consider the well-established modified Craig--Sneyd (MCS) scheme, introduced by in 't Hout \\& Welfert~\\cite{HW09}, \n\tand the stabilizing correction two-step Adams-type scheme called SC2A, constructed by Hundsdorfer \\& in 't Hout~\\cite{HH18}.\n\t\n\tThe direct adaptation of ADI schemes for PDEs to PIDEs in finance was first studied by Kaushansky, Lipton \\& Reisinger~\\cite{KLR18} \n\tand subsequently by in 't Hout \\& Toivanen~\\cite{HT18}.\n\tHere the implicit unidirectional stages are blended with explicit stages involving both the mixed derivative terms and the integral \n\tpart, leading again to second-order schemes.\n\tWe note that an efficient, parallel implementation of the schemes developed in \\cite{HT18} has been designed by Ghosh \\& Mishra~\\cite{GM21} \n\twho apply a parallel cyclic reduction algorithm.\n\t\n\tBoen \\& in 't Hout~\\cite{BH21} recently investigated a collection of seven contemporary operator splitting schemes of both the \n\tIMEX and the ADI kind in the application to the two-dimensional Merton PIDE for the values of two-asset 
options.\n\tHere the numerical evaluation of the integral term has been done by means of an FFT algorithm, following \n\te.g.~\\cite{AO05,AA00,CF08,HFV05,STS14}. \n\tBased on analytical and numerical evidence in \\cite{BH21,HT18} in the case of the two-dimensional Merton and Bates PIDEs, \n\tit is concluded that, among the schemes under consideration, the adaptation of the MCS scheme introduced in \\cite{HT18} \n\tthat deals with the integral part in a two-step Adams--Bashforth fashion is preferable.\n\t\n\tIn the present paper we consider for the two-dimensional Kou PIDE the same collection of operator splitting schemes as in~\\cite{BH21}. \n\tFor the numerical evaluation of the double integral part, a generalization of the algorithm of Toivanen~\\cite{T08} is derived that has \n\toptimal computational cost.\n\tThis algorithm is simple to implement, requires little memory and is computationally much faster than the FFT algorithm \n\tmentioned above.\n\tAs a representative example, we consider the approximation of European put-on-the-average option values, together with their Greeks.\n\tAn outline of the rest of this paper is as follows.\n\t\n\tIn Section~\\ref{SecModel} the two-dimensional Kou PIDE is formulated.\n\tSection~\\ref{SecSpatial} deals with its spatial discretization.\n\tFirst, in Subsection~\\ref{SecPDEpart}, the two-dimensional PDE part is considered and a second-order finite difference discretization \n\ton a suitable nonuniform spatial grid is described.
\n\tNext, in Subsection~\\ref{SecIntpart}, we present the first main contribution of this paper.\n\tA common, second-order spatial discretization of the double integral part is employed and for its highly efficient evaluation \n\twe derive an extension of the algorithm proposed by Toivanen~\\cite{T08}.\n\tIt is shown that the computational cost of the extension is directly proportional to the number of spatial grid points, \n\twhich is optimal.\n\tSection~\\ref{SecTime} subsequently concerns the temporal discretization of the obtained semidiscrete two-dimensional Kou PIDE.\n\tHere the seven contemporary operator splitting schemes of the IMEX and ADI kind from~\\cite{BH21} are considered. \n\tEach of these schemes conveniently treats the integral part in an explicit manner, where its fast evaluation is performed by the \n\talgorithm derived in Subsection~\\ref{SecIntpart}.\n\tIn Section~\\ref{SecResults} ample numerical experiments are presented.\n\tHere European put-on-the-average option values, together with their Greeks Delta and Gamma, are considered and we examine in detail\n\tthe temporal discretization errors of the different operator splitting schemes.\n\tThe final Section~\\ref{SecConc} gives conclusions.\n\t\n\t\n\n\n\n\t\n\t\\section{The two-dimensional Kou PIDE}\\label{SecModel}\n\tUnder the two-asset Kou jump-diffusion model, the value $v = v(s_1,s_2,t)$ of a European-style option with maturity date $T>0$ and $s_i$ ($i=1,2$) \n\trepresenting the price of asset $i$ at time $\\tau = T-t$, satisfies the following PIDE:\n\t\\begin{align}\\label{PIDE2D}\n\t\t\\dud{t} =&~ \\tfrac{1}{2} \\sigma_1^2s_1^2\\dudd{s_1} + \\rho \\sigma_1\\sigma_2s_1s_2\\duddm{s_1}{s_2} + \\tfrac{1}{2} \\sigma_2^2 s_2^2\\dudd{s_2} + \n\t\t(r-\\lambda \\kappa_1) s_1\\dud{s_1} + (r-\\lambda\\kappa_2) s_2 \\dud{s_2} \\nonumber \\\\\n\t\t& -(r+\\lambda)v+\\lambda\\int_0^{\\infty}\\int_0^{\\infty} f(y_1,y_2) v(s_1y_1, s_2y_2,t) \\mathrm{d} y_1 \\mathrm{d} y_2\n\t\\end{align}\n\twhenever 
$s_1>0$, $s_2>0$, $0 < t \\le T$. Here $r$ is the risk-free interest rate, and $\\sigma_i > 0$ ($i=1,2$) is the instantaneous volatility for asset $i$ conditional \n\ton the event that no jumps occur, and $\\rho$ is the correlation coefficient of the two underlying standard Brownian motions.\n\tNext, $\\lambda$ is the jump intensity of the underlying Poisson arrival process, and $\\kappa_i$ ($i=1,2$) is the expected \n\trelative jump size for asset $i$.\n The function $f$ is the joint probability density function of two independent random variables possessing log-double-exponential \n\tdistributions~\\cite{K02},\n\t\\begin{equation}\\label{pdf2D}\n\t\tf(y_1,y_2) = \n\t\t\\left\\{\\begin{array}{lll}\n\t\t\tq_1q_2\\eta_{q_1}\\eta_{q_2}y_1^{\\eta_{q_1}-1}y_2^{\\eta_{q_2}-1} & (0 < y_1, y_2 < 1),\\\\\n\t\t\tp_1q_2\\eta_{p_1}\\eta_{q_2}y_1^{-\\eta_{p_1}-1}y_2^{\\eta_{q_2}-1} & (y_1 \\geq 1, 0< y_2 < 1),\\\\\n\t\t\tq_1p_2\\eta_{q_1}\\eta_{p_2}y_1^{\\eta_{q_1}-1}y_2^{-\\eta_{p_2}-1} & (0 < y_1 < 1, y_2 \\geq 1),\\\\\n\t\t\tp_1p_2\\eta_{p_1}\\eta_{p_2}y_1^{-\\eta_{p_1}-1}y_2^{-\\eta_{p_2}-1} & (y_1, y_2 \\geq 1).\n\t\t\\end{array}\\right.\n\t\\end{equation} \n\tThe parameters $p_i$, $q_i$, $\\eta_{p_i}$, $\\eta_{q_i}$ are all positive constants with $p_i + q_i = 1$ and $\\eta_{p_i} > 1$.\n\tIt holds that\n\t\\begin{equation*}\n\t \\kappa_i = \\frac{p_i\\eta_{p_i}}{\\eta_{p_i}-1}+\\frac{q_i\\eta_{q_i}}{\\eta_{q_i}+1}-1 \\quad (i=1,2).\n\t\\end{equation*}\n\tFor \\eqref{PIDE2D}, the initial condition is given by\n\t\\[\n\tv(s_1,s_2,0) = \\phi(s_1,s_2),\n\t\\]\n\twhere $\\phi$ denotes the payoff function of the option.\n\tAs a typical example, we consider in this paper a European put-on-the-average option, which has the payoff function\n\t\\begin{equation}\\label{payoff}\n\t\t\\phi(s_1,s_2) = \\textrm{max} \\left(0\\,,\\,K-\\frac{s_1+s_2}{2}\\right)\n\t\\end{equation}\n\twith strike price $K>0$.\n\tIts graph is shown in Figure~\\ref{FigPayoff}.\n\tConcerning the boundary condition, it holds that the PIDE \\eqref{PIDE2D} is itself satisfied on
the two sides $s_{1} = 0$ \n\tand $s_{2} = 0$, respectively.\n\t\n\n\t\begin{figure}[h]\n\t\centering\n\t\includegraphics[trim = 1.3in 3.7in 1in 3.7in, clip, scale=0.5]{payoff.pdf}\n\t\caption{Payoff of the put-on-the-average option with $K=100$.}\n\t\label{FigPayoff}\n\t\end{figure}\n\t\n\t\n\n\n\n\t\n\t\section{Spatial discretization}\label{SecSpatial}\n\tFor the numerical solution of the initial-boundary value problem for \eqref{PIDE2D} we employ the popular method of lines (MOL) approach.\n\tThis approach consists of two consecutive steps~\cite{HV03book}: first the PIDE \eqref{PIDE2D} is discretized in space and subsequently in time.\n\tThis Section~\ref{SecSpatial} deals with the spatial discretization. \n\tIn the next Section~\ref{SecTime} we shall consider the temporal discretization.\n\t\n\n\n\n\t\n\t\subsection{Convection-diffusion-reaction part}\label{SecPDEpart}\n\tFor the numerical solution, the spatial domain is truncated to a bounded set $[0,S_{\rm max}]\times[0,S_{\rm max}]$ \n\twith fixed value $S_{\rm max}$ chosen sufficiently large.\n\tOn the two far sides $s_{1} = S_{\rm max}$ and $s_{2} = S_{\rm max}$ a linear boundary condition is imposed, which is well-known in finance:\n\t\begin{equation}\label{LBC}\n\t\t\dudd{s_1} = 0~~(\textrm{if}~s_{1} = S_{\rm max}) \quad \mbox{ and } \quad \dudd{s_2} = 0 ~~(\textrm{if}~s_{2} = S_{\rm max}).\n\t\end{equation}\n\tIn this subsection we describe the finite difference discretization of the convection-diffusion-reaction part of the PIDE \eqref{PIDE2D}, \n\tspecified by\n\t\begin{equation*}\label{lindiffop}\n\t\t\mathcal{D} v = \tfrac{1}{2} \sigma_1^2s_1^2\dudd{s_1} + \rho \sigma_1\sigma_2s_1s_2\duddm{s_1}{s_2} + \tfrac{1}{2} \sigma_2^2 s_2^2\dudd{s_2} \n\t\t+ (r-\lambda \kappa_1) s_1\dud{s_1} + (r-\lambda\kappa_2) s_2 \dud{s_2} -(r+\lambda)v.\n\t\end{equation*}\n\tThe semidiscretization of this part is common and similar to that in~e.g.~\cite{BH21}.\n\t\n\tLet 
integers $m_1, m_2 \\geq 1$ be given.\n\tThe option value function $v$ is approximated at a nonuniform, Cartesian set of spatial grid points,\n\t\\begin{equation*}\\label{sgrid}\n\t\t(s_{1,i},s_{2,j})\\in [0,S_{\\rm max}]\\times[0,S_{\\rm max}] \\quad (0\\leq i \\leq m_1, \\ 0\\leq j \\leq m_2),\n\t\\end{equation*}\n\twith $s_{1,0} = s_{2,0} = 0$ and $s_{1,m_1} = s_{2,m_2} = S_{\\rm max}$. \n\tThe nonuniform grid in each spatial direction is defined through a smooth transformation of an artificial uniform grid, such that \n\trelatively many grid points are placed in a region of financial and numerical interest.\n\tFigure~\\ref{FigGrid} shows a sample spatial grid if $m_1=m_2=50$, $K=100$ and $S_{\\rm max} = 5K$.\n\t\n\tLet integer $m \\geq 1$ and parameter $d > 0$.\n\tConsider equidistant points $0 = \\xi_0 < \\xi_1 < \\dots < \\xi_{m} = \\xi_{\\rm max}$ where\n\t\\begin{equation*}\n\t\t\\xi_{\\rm max} = \\xi_{\\rm int} + \\sinh^{-1}\\left(\\frac{S_{\\rm max}}{d}-\\xi_{\\rm int}\\right) \n\t\t\\quad \\textrm{and } \\quad\n\t\t\\xi_{\\rm int} = \\frac{2K}{d}.\n\t\\end{equation*}\n\tThen in each spatial direction a nonuniform mesh $0=s_00$ are given. 
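The nonuniform mesh construction described above can be illustrated with a minimal Python sketch. It assumes a piecewise transformation of the uniform $\xi$-grid (linear spacing up to $2K$, sinh-stretched beyond) that is consistent with the stated formulas for $\xi_{\rm int}$ and $\xi_{\rm max}$; this piecewise form, and the example value $d = K/10$ used below, are assumptions for illustration, not taken verbatim from the text.

```python
import numpy as np

def nonuniform_grid(m, K, S_max, d):
    """Nonuniform mesh 0 = s_0 < s_1 < ... < s_m = S_max (assumed form).

    A uniform auxiliary grid xi_0 < ... < xi_m on [0, xi_max] is mapped
    to asset prices, with xi_int = 2K/d and
    xi_max = xi_int + asinh(S_max/d - xi_int) as stated in the text.
    """
    xi_int = 2.0 * K / d
    xi_max = xi_int + np.arcsinh(S_max / d - xi_int)
    xi = np.linspace(0.0, xi_max, m + 1)
    # Assumed smooth transformation: linear on [0, xi_int] (so that the
    # region [0, 2K] around the strike is finely resolved), sinh-stretched
    # beyond, so that s_0 = 0 and s_m = S_max.
    s = np.where(xi <= xi_int, d * xi, 2.0 * K + d * np.sinh(xi - xi_int))
    return s
```

For $m=50$, $K=100$ and $S_{\rm max}=5K$, this places about 80 percent of the grid points in $[0,2K]$, i.e. relatively many points in the region of financial and numerical interest.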
A change of variables $y_i = z_i\/s_i$ ($i=1,2$) yields\n\begin{equation*}\n\t\mathcal{J} = \lambda\int_0^{\infty}\int_0^{\infty} f\bigg(\frac{z_1}{s_1},\frac{z_2}{s_2}\bigg) v(z_1,z_2,t) \frac{\mathrm{d} z_1 \mathrm{d} z_2}{s_1 s_2}.\n\end{equation*}\nThe density function $f$ is defined piecewise on a partition of the first quadrant of the real plane into four sets, see (\ref{pdf2D}).\nAccordingly, $\mathcal{J}$ is decomposed into four integrals as $\mathcal{J} = \mathcal{J}_1 + \mathcal{J}_2 + \mathcal{J}_3 + \mathcal{J}_4$, where\n\begin{align*}\n\t\mathcal{J}_1 &= \lambda q_1q_2\eta_{q_1}\eta_{q_2}s_1^{-\eta_{q_1}}s_2^{-\eta_{q_2}} \int_0^{s_2} \int_0^{s_1} z_1^{\eta_{q_1}-1}z_2^{\eta_{q_2}-1} v(z_1,z_2,t) \mathrm{d} z_1 \mathrm{d} z_2,\\\n\t\mathcal{J}_2 &= \lambda p_1q_2\eta_{p_1}\eta_{q_2}s_1^{\eta_{p_1}}s_2^{-\eta_{q_2}} \int_0^{s_2} \int_{s_1}^{\infty} z_1^{-\eta_{p_1}-1}z_2^{\eta_{q_2}-1} v(z_1,z_2,t) \mathrm{d} z_1 \mathrm{d} z_2,\\\n\t\mathcal{J}_3 &= \lambda q_1p_2\eta_{q_1}\eta_{p_2}s_1^{-\eta_{q_1}}s_2^{\eta_{p_2}} \int_{s_2}^{\infty} \int_0^{s_1} z_1^{\eta_{q_1}-1}z_2^{-\eta_{p_2}-1} v(z_1,z_2,t) \mathrm{d} z_1 \mathrm{d} z_2,\\\t\t\n\t\mathcal{J}_4 &= \lambda p_1p_2\eta_{p_1}\eta_{p_2}s_1^{\eta_{p_1}}s_2^{\eta_{p_2}} \int_{s_2}^{\infty} \int_{s_1}^{\infty} z_1^{-\eta_{p_1}-1}z_2^{-\eta_{p_2}-1} v(z_1,z_2,t) \mathrm{d} z_1 \mathrm{d} z_2.\n\end{align*}\nWe first consider the discretization of the integral $\mathcal{J}_1$. 
\nUpon writing\n\\begin{equation*}\n\t\\psi_1(s_1,s_2) = \\lambda q_1q_2\\eta_{q_1}\\eta_{q_2}s_1^{-\\eta_{q_1}}s_2^{-\\eta_{q_2}} \n\t\\quad \\textrm{and} \\quad\n\t\\varphi_1(z_1,z_2) = z_1^{\\eta_{q_1}-1}z_2^{\\eta_{q_2}-1},\n\\end{equation*}\nwe have\n\\begin{equation}\n\t\\mathcal{J}_1 = \\psi_1(s_1,s_2)\\int_0^{s_2} \\int_0^{s_1} \\varphi_1(z_1,z_2) v(z_1,z_2,t) \\mathrm{d} z_1 \\mathrm{d} z_2.\n\\end{equation}\nFor $1\\leq i \\leq m_1$, $1\\leq j \\leq m_2$ let\n\\begin{equation*}\n\t\\mathcal{J}_{1,ij} = \\psi_1(s_{1,i},s_{2,j}) \\int_0^{s_{2,j}} \\int_0^{s_{1,i}} \\varphi_1(z_1,z_2) v(z_1,z_2,t) \\mathrm{d} z_1 \\mathrm{d} z_2\n\\end{equation*}\ndenote the value of $\\mathcal{J}_1$ at the spatial grid point $(s_{1,i},s_{2,j})$.\nDefine\n\\begin{equation*}\n\t\\mathcal{G}_{1,kl} = \\int_{s_{2,l-1}}^{s_{2,l}} \\int_{s_{1,k-1}}^{s_{1,k}} \\varphi_1(z_1,z_2) v(z_1,z_2,t) \\mathrm{d} z_1 \\mathrm{d} z_2\n\\end{equation*}\nwhenever $1\\leq k \\leq m_1$, $1\\leq l \\leq m_2$.\nThen the following useful expression for $\\mathcal{J}_{1,ij}$ in terms of a double cumulative sum is obtained,\n\\begin{equation}\\label{cumsum1}\n\t\\mathcal{J}_{1,ij} = \\psi_1(s_{1,i},s_{2,j}) \\sum_{k=1}^i \\sum_{l=1}^j \\mathcal{G}_{1,kl}\n\t\\quad\n\t(1\\leq i \\leq m_1, 1\\leq j \\leq m_2).\n\\end{equation}\nNotice the obvious but important fact that the $\\mathcal{G}_{1,kl}$ are independent of the indices $i$ and $j$.\nHence, if all values $\\mathcal{G}_{1,kl}$ are given, then computing the double cumulative sums in \\eqref{cumsum1} \nfor all $i$, $j$ can be done in, to leading order, just $2m_1m_2$ additions.\n\nWe subsequently construct approximations $G_{1,kl}$ to $\\mathcal{G}_{1,kl}$ ($1\\leq k \\leq m_1$, $1\\leq l \\leq m_2$)\nand define the approximation to $\\mathcal{J}_{1,ij}$ by\n\\begin{equation}\\label{cumsum1_semi}\n\tJ_{1,ij} = \\psi_1(s_{1,i},s_{2,j}) \\sum_{k=1}^i \\sum_{l=1}^j G_{1,kl}\n\t\\quad\n\t(1\\leq i \\leq m_1, 1\\leq j \\leq m_2).\n\\end{equation}\nFor 
any given $k, l$ with $1\\leq k \\leq m_1$, $1\\leq l \\leq m_2$ consider the natural choice of bilinear interpolation \nto approximate $v(z_1,z_2,t)$ on the $(z_1,z_2)$-domain $[s_{1,k-1}, s_{1,k}] \\times [s_{2,l-1}, s_{2,l}]$:\n\\begin{equation*}\n\t{\\widetilde v}_{kl}(z_1, z_2,t) = \\ell_{kl}^{00}(z_1,z_2) V_{k-1,l-1}(t) + \\ell_{kl}^{10}(z_1,z_2) V_{k,l-1}(t) +\n\t\\ell_{kl}^{01}(z_1,z_2) V_{k-1,l}(t) + \\ell_{kl}^{11}(z_1,z_2) V_{k,l}(t)\n\\end{equation*}\nwith weights\n\\begin{align*}\n\t\\ell_{kl}^{00}(z_1,z_2) &= (s_{1,k}-z_1)(s_{2,l}-z_2)\/\\delta_{kl},\\\\\n\t\\ell_{kl}^{10}(z_1,z_2) &= (z_1-s_{1,k-1})(s_{2,l}-z_2)\/\\delta_{kl},\\\\\n\t\\ell_{kl}^{01}(z_1,z_2) &= (s_{1,k}-z_1)(z_{2}-s_{2,l-1})\/\\delta_{kl},\\\\\n\t\\ell_{kl}^{11}(z_1,z_2) &= (z_1-s_{1,k-1})(z_2-s_{2,l-1})\/\\delta_{kl},\n\\end{align*}\nwhere $\\delta_{kl} = \\Delta s_{1,k}\\Delta s_{2,l}$ and $\\Delta s_{1,k} = s_{1,k}-s_{1,k-1}$, $\\Delta s_{2,l} = s_{2,l}-s_{2,l-1}$.\nThen we define \n\\begin{equation*}\n\tG_{1,kl} = \\int_{s_{2,l-1}}^{s_{2,l}} \\int_{s_{1,k-1}}^{s_{1,k}} \\varphi_1(z_1,z_2) {\\widetilde v}_{kl}(z_1, z_2,t) \\mathrm{d} z_1 \\mathrm{d} z_2.\n\\end{equation*}\nA straightforward calculation yields the simple, convenient formula \n\\begin{equation}\\label{G1kl}\n\tG_{1,kl} = \n\t\\gamma_{1,kl}^{00} V_{k-1,l-1}(t) + \\gamma_{1,kl}^{10} V_{k,l-1}(t) +\n\t\\gamma_{1,kl}^{01} V_{k-1,l}(t) + \\gamma_{1,kl}^{11} V_{k,l}(t)\n\\end{equation}\nwith\n\\begin{align*}\n\t\\gamma_{1,kl}^{00} &= (s_{1,k}s_{2,l}\\zeta_{1,kl}^{00} - s_{2,l}\\zeta_{1,kl}^{10} - s_{1,k}\\zeta_{1,kl}^{01} + \\zeta_{1,kl}^{11})\/\\delta_{kl},\\\\\n\t\\gamma_{1,kl}^{10} &= (-s_{1,k-1}s_{2,l}\\zeta_{1,kl}^{00} + s_{2,l}\\zeta_{1,kl}^{10} + s_{1,k-1}\\zeta_{1,kl}^{01} - \\zeta_{1,kl}^{11} )\/\\delta_{kl},\\\\\n\t\\gamma_{1,kl}^{01} &= (-s_{1,k}s_{2,l-1}\\zeta_{1,kl}^{00} + s_{2,l-1}\\zeta_{1,kl}^{10} + s_{1,k}\\zeta_{1,kl}^{01} - \\zeta_{1,kl}^{11} )\/\\delta_{kl},\\\\\n\t\\gamma_{1,kl}^{11} &= 
(s_{1,k-1}s_{2,l-1}\\zeta_{1,kl}^{00} - s_{2,l-1}\\zeta_{1,kl}^{10} - s_{1,k-1}\\zeta_{1,kl}^{01} + \\zeta_{1,kl}^{11})\/\\delta_{kl}\n\\end{align*}\nand\n\\begin{equation*}\n\t\\zeta_{1,kl}^{ab} = \n\t\\int_{s_{2,l-1}}^{s_{2,l}} \\int_{s_{1,k-1}}^{s_{1,k}} \\varphi_1(z_1,z_2) z_1^a z_2^b \\mathrm{d} z_1 \\mathrm{d} z_2 =\n\t\\frac{\\bigg(s_{1,k}^{a+\\eta_{q_1}}-s_{1,k-1}^{a+\\eta_{q_1}}\\bigg)\\bigg(s_{2,l}^{b+\\eta_{q_2}}-s_{2,l-1}^{b+\\eta_{q_2}}\\bigg)}{\\big(a+\\eta_{q_1}\\big)\\big(b+\\eta_{q_2}\\big)}\n\\end{equation*}\nfor $a,b\\in \\{0,1\\}$.\n\nThe coefficients $\\gamma_{1,kl}^{ab}$ are completely determined by the Kou parameters and the spatial grid. \nSince they are independent of $t$, they can be computed upfront, before the time discretization.\nClearly, for any given vector $V(t)$, the computation of $G_{1,kl}$ by \\eqref{G1kl} for all $k$, $l$ requires \n$3m_1m_2$ additions and $4m_1m_2$ multiplications.\nNoticing that the values of $\\psi_1$ in \\eqref{cumsum1_semi} can also be computed upfront, it follows that \nthe number of basic arithmetic operations to compute all approximations $J_{1,ij}$ ($1\\leq i \\leq m_1$, \n$1\\leq j \\leq m_2$) by \\eqref{cumsum1_semi} is, to leading order, equal to $10m_1m_2$.\n\nThe discretization and efficient evaluation of the other three integrals is done completely analogously.\nIn the relevant derivation, $v$ is approximated by zero outside the spatial domain $[0,S_{\\rm max}]\\times [0,S_{\\rm max}]$.\nWrite\n\\begin{align*}\n\t\\psi_2(s_1,s_2) &= \\lambda p_1q_2\\eta_{p_1}\\eta_{q_2}s_1^{\\eta_{p_1}}s_2^{-\\eta_{q_2}},\\\\\n\t\\psi_3(s_1,s_2) &= \\lambda q_1p_2\\eta_{q_1}\\eta_{p_2}s_1^{-\\eta_{q_1}}s_2^{\\eta_{p_2}},\\\\ \n\t\\psi_4(s_1,s_2) &= \\lambda p_1p_2\\eta_{p_1}\\eta_{p_2}s_1^{\\eta_{p_1}}s_2^{\\eta_{p_2}}.\n\\end{align*}\nLet $1\\leq i \\leq m_1$, $1\\leq j \\leq m_2$. 
\nThen the approximations of $\\mathcal{J}_2, \\mathcal{J}_3, \\mathcal{J}_4$ \nat the spatial grid point $(s_{1,i},s_{2,j})$ are given by, respectively, the double cumulative sums\n\\begin{align*}\n\tJ_{2,ij} &= \\psi_2(s_{1,i},s_{2,j}) \\sum_{k=i+1}^{m_1} \\sum_{l=1}^j G_{2,kl},\\\\\n\tJ_{3,ij} &= \\psi_3(s_{1,i},s_{2,j}) \\sum_{k=1}^{i} \\sum_{l=j+1}^{m_2} G_{3,kl},\\\\\n\tJ_{4,ij} &= \\psi_4(s_{1,i},s_{2,j}) \\sum_{k=i+1}^{m_1} \\sum_{l=j+1}^{m_2} G_{4,kl}\n\\end{align*}\nwith the usual convention that empty sums are equal to zero.\nFor $2\\le \\nu \\le 4$, $1\\leq k \\leq m_1$, $1\\leq l \\leq m_2$ we obtain\n\\begin{equation*}\\label{Gnukl}\n\tG_{\\nu,kl} = \n\t\\gamma_{\\nu,kl}^{00} V_{k-1,l-1}(t) + \\gamma_{\\nu,kl}^{10} V_{k,l-1}(t) +\n\t\\gamma_{\\nu,kl}^{01} V_{k-1,l}(t) + \\gamma_{\\nu,kl}^{11} V_{k,l}(t)\n\\end{equation*}\nwith\n\\begin{align*}\n\t\\gamma_{\\nu,kl}^{00} &= (s_{1,k}s_{2,l}\\zeta_{\\nu,kl}^{00} - s_{2,l}\\zeta_{\\nu,kl}^{10} - s_{1,k}\\zeta_{\\nu,kl}^{01} + \\zeta_{\\nu,kl}^{11})\/\\delta_{kl},\\\\\n\t\\gamma_{\\nu,kl}^{10} &= (-s_{1,k-1}s_{2,l}\\zeta_{\\nu,kl}^{00} + s_{2,l}\\zeta_{\\nu,kl}^{10} + s_{1,k-1}\\zeta_{\\nu,kl}^{01} - \\zeta_{\\nu,kl}^{11} )\/\\delta_{kl},\\\\\n\t\\gamma_{\\nu,kl}^{01} &= (-s_{1,k}s_{2,l-1}\\zeta_{\\nu,kl}^{00} + s_{2,l-1}\\zeta_{\\nu,kl}^{10} + s_{1,k}\\zeta_{\\nu,kl}^{01} - \\zeta_{\\nu,kl}^{11} )\/\\delta_{kl},\\\\\n\t\\gamma_{\\nu,kl}^{11} &= (s_{1,k-1}s_{2,l-1}\\zeta_{\\nu,kl}^{00} - s_{2,l-1}\\zeta_{\\nu,kl}^{10} - s_{1,k-1}\\zeta_{\\nu,kl}^{01} + \\zeta_{\\nu,kl}^{11})\/\\delta_{kl}\n\\end{align*}\nwhere\n\\begin{align*}\n\t\\zeta_{2,kl}^{ab} &= \n\t\\frac{\\bigg(s_{1,k}^{a-\\eta_{p_1}}-s_{1,k-1}^{a-\\eta_{p_1}}\\bigg)\\bigg(s_{2,l}^{b+\\eta_{q_2}}-s_{2,l-1}^{b+\\eta_{q_2}}\\bigg)}{\\big(a-\\eta_{p_1}\\big)\\big(b+\\eta_{q_2}\\big)},\\\\\n\t\\zeta_{3,kl}^{ab} &= 
\n\t\frac{\bigg(s_{1,k}^{a+\eta_{q_1}}-s_{1,k-1}^{a+\eta_{q_1}}\bigg)\bigg(s_{2,l}^{b-\eta_{p_2}}-s_{2,l-1}^{b-\eta_{p_2}}\bigg)}{\big(a+\eta_{q_1}\big)\big(b-\eta_{p_2}\big)},\\\n\t\zeta_{4,kl}^{ab} &= \n\t\frac{\bigg(s_{1,k}^{a-\eta_{p_1}}-s_{1,k-1}^{a-\eta_{p_1}}\bigg)\bigg(s_{2,l}^{b-\eta_{p_2}}-s_{2,l-1}^{b-\eta_{p_2}}\bigg)}{\big(a-\eta_{p_1}\big)\big(b-\eta_{p_2}\big)}\n\end{align*}\nfor $a,b\in \{0,1\}$.\n\nOn the boundary part $\{(s_1,0): s_1>0\}$ the double integral \eqref{integral} reduces to the single integral\n\begin{equation}\label{integral1_1D}\n\t\mathcal{J} = \lambda\int_0^{\infty} f_1(y_1)v(s_1y_1,0,t) \mathrm{d} y_1.\n\end{equation}\nThe discretization and efficient evaluation of this integral, which arises in the one-dimensional \nKou PIDE, have been constructed by Toivanen~\cite{T08}.\nFor completeness, we include the result here.\nAssume $s_1>0$. \nThen for \eqref{integral1_1D} there holds $\mathcal{J} = \mathcal{J}_1 + \mathcal{J}_2$, where\n\begin{equation*}\n\t\mathcal{J}_1 = \lambda \int_0^{s_1} f_1\bigg(\frac{z_1}{s_1}\bigg) v(z_1,0,t) \frac{\mathrm{d} z_1 }{s_1}\n\t\quad \textrm{and} \quad\n\t\mathcal{J}_2 = \lambda \int_{s_1}^{\infty} f_1\bigg(\frac{z_1}{s_1}\bigg) v(z_1,0,t) \frac{\mathrm{d} z_1 }{s_1}.\n\end{equation*}\nWrite\n\begin{equation*}\n\t\psi_1(s_1) = \lambda q_1\eta_{q_1}s_1^{-\eta_{q_1}} \n\t\quad \textrm{and} \quad\n\t\psi_2(s_1) = \lambda p_1\eta_{p_1}s_1^{\eta_{p_1}}.\n\end{equation*}\nLet $1\leq i \leq m_1$. 
\nThen the approximations of $\mathcal{J}_1, \mathcal{J}_2$ at the spatial grid point $(s_{1,i},0)$ are given by, \nrespectively, the single cumulative sums\n\begin{equation*}\n\tJ_{1,i} = \psi_1(s_{1,i}) \sum_{k=1}^{i} G_{1,k}\n\t\quad \textrm{and} \quad\n\tJ_{2,i} = \psi_2(s_{1,i}) \sum_{k=i+1}^{m_1} G_{2,k}.\n\end{equation*}\nFor $1\le \nu \le 2$, $1\leq k \leq m_1$ we obtain, by employing linear interpolation,\n\begin{equation*}\label{Gnuk}\n\tG_{\nu,k} = \n\t\gamma_{\nu,k}^{0} V_{k-1,0}(t) + \gamma_{\nu,k}^{1} V_{k,0}(t)\n\end{equation*}\nwith\n\begin{equation*}\n\t\gamma_{\nu,k}^{0} = (s_{1,k}\zeta_{\nu,k}^{0} - \zeta_{\nu,k}^{1})\/\Delta s_{1,k}\n\t\quad \textrm{and} \quad\n\t\gamma_{\nu,k}^{1} = (-s_{1,k-1}\zeta_{\nu,k}^{0} + \zeta_{\nu,k}^{1} )\/\Delta s_{1,k}\n\end{equation*}\nwhere\n\begin{equation*}\n\t\zeta_{1,k}^{a} = \n\t\frac{s_{1,k}^{a+\eta_{q_1}}-s_{1,k-1}^{a+\eta_{q_1}}}{a+\eta_{q_1}}\n\t\quad \textrm{and} \quad\n\t\zeta_{2,k}^{a} = \n\t\frac{s_{1,k}^{a-\eta_{p_1}}-s_{1,k-1}^{a-\eta_{p_1}}}{a-\eta_{p_1}}\n\t\quad (a=0,1).\n\end{equation*}\n\nOn the boundary part $\{(0,s_2): s_2>0\}$ the double integral \eqref{integral} reduces \nto the single integral\n\begin{equation}\label{integral2_1D}\n\t\mathcal{J} = \lambda\int_0^{\infty} f_2(y_2)v(0,s_2y_2,t) \mathrm{d} y_2\n\end{equation}\nand the discretization is performed entirely analogously to the above. 
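The fast evaluation derived in this subsection can be summarized in a short sketch. The following Python (NumPy) fragment, with illustrative array names and layout not taken from the paper, forms the values $G_{1,kl}$ from the precomputed coefficients $\gamma_{1,kl}^{ab}$ as in \eqref{G1kl} and then all approximations $J_{1,ij}$ via the double cumulative sums \eqref{cumsum1_semi}, at a cost directly proportional to the number of spatial grid points:

```python
import numpy as np

def eval_J1(psi1, g00, g10, g01, g11, V):
    """Evaluate all J_{1,ij} (1 <= i <= m1, 1 <= j <= m2) in O(m1*m2) work.

    psi1                : (m1, m2) array with psi_1(s_{1,i}, s_{2,j})
    g00, g10, g01, g11  : (m1, m2) arrays with the precomputed gamma_{1,kl}^{ab}
    V                   : (m1+1, m2+1) array with the grid values V_{k,l}(t)
    """
    # G_{1,kl} from the four surrounding grid values: per cell 4
    # multiplications and 3 additions, so 4*m1*m2 and 3*m1*m2 in total.
    G = (g00 * V[:-1, :-1] + g10 * V[1:, :-1]
         + g01 * V[:-1, 1:] + g11 * V[1:, 1:])
    # Double cumulative sum over k <= i, l <= j: to leading order
    # 2*m1*m2 additions, since the G_{1,kl} are independent of i, j.
    S = np.cumsum(np.cumsum(G, axis=0), axis=1)
    return psi1 * S
```

The other three integrals are handled analogously, with the cumulative sums running over the appropriate index ranges (reversed cumulative sums over $k>i$ and/or $l>j$).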
\nFinally, at the spatial grid point $(s_1,s_2)=(0,0)$ it holds that \n$\\mathcal{J} \\approx \\lambda V_{0,0}(t)$.\n\nFrom the above it follows that the number of basic arithmetic operations to compute, for any given $t$, the approximation to the double \nintegral \\eqref{integral} on the full spatial grid is, to leading order, equal to $40m_1m_2$.\n\n\t\n\n\n\n\t\n\\subsection{Semidiscrete PIDE}\\label{SecSemiPIDE}\nCombining the above semidiscretizations for the convection-diffusion-reaction and integral parts of PIDE \\eqref{PIDE2D}, the following \nlarge system of ODEs is obtained:\n\\begin{align}\\label{ODEs}\n\tV'(t) = AV(t) \\quad (00$ is a given parameter.\n\tMethod \\eqref{MCS} is of order two for any value~$\\theta$.\n\tHere we make the common choice $\\theta = \\frac{1}{3}$, which is motivated by stability and accuracy results \n\tfor two-dimensional problems, see e.g.~\\cite{HM11,HT18,HW09,HW16}.\n\tIt is easily verified that in \\eqref{MCS} the integral part and mixed derivative term are both treated by the \n\texplicit trapezoidal rule.\n\tWe note that the explicit stages $\\widehat{Y}_0$, $\\widetilde{Y}_0$ can be merged, so that the integral part \n\tis evaluated (just) twice per time step.\n\tThe implicit stages $Y_j$, $\\widetilde{Y}_j$ (for $j=1,2$) are often called stabilizing corrections.\n\tThe four pertinent linear systems for these stages are tridiagonal and can be solved very efficiently \n\tby means of an a priori $LU$ factorization.\n\tWe mention that \\eqref{MCS} is already applied in the first time step, i.e., for defining $V^1$ (thus \n\tIMEX Euler is not used here).\n\t\\vskip0.2cm \n\t\n\t\\item[6.] 
\\textit{Two-step adaptation of the MCS (MCS2) scheme}:\n\t\\vskip0.1cm\n\tIn \\cite{HT18} a second adaptation of the MCS scheme to PIDEs has been proposed where, instead of the explicit \n\ttrapezoidal rule, the integral part is now handled in a two-step Adams--Bashforth fashion:\n\t\\begin{equation}\\label{MCS2}\n\t\t\\left\\{\\begin{array}{lll}\n\t\t\tX_0 = V^{n-1}+\\Delta t A^{(D)}V^{n-1},\\\\\n\t\t\tY_0 = X_0 + \\tfrac{1}{2}\\Delta t A^{(J)}(3V^{n-1}-V^{n-2}),\\\\\n\t\t\tY_j = Y_{j-1}+\\theta\\Delta t A_j(Y_j-V^{n-1})\\quad (j=1,2),\\\\\n\t\t\t\\widehat{Y}_0 = Y_0 + \\theta\\Delta t A^{(M)}(Y_2-V^{n-1}),\\\\\n\t\t\t\\widetilde{Y}_0 = \\widehat{Y}_0 + (\\tfrac{1}{2}-\\theta)\\Delta t A^{(D)}(Y_2-V^{n-1}),\\\\\n\t\t\t\\widetilde{Y}_j = \\widetilde{Y}_{j-1} + \\theta \\Delta t A_j(\\widetilde{Y}_j-V^{n-1})\\quad (j=1,2),\\\\\n\t\t\tV^n = \\widetilde{Y}_2.\n\t\t\\end{array}\\right.\n\t\\end{equation} \n\tMethod \\eqref{MCS2} is also of order two for any value~$\\theta$.\n\tWe choose again $\\theta = \\frac{1}{3}$ and, for starting this two-step method, define $V^1$ by \\eqref{MCS}.\n\t\\vskip0.2cm \n\t\n\t\\item[7.] 
\textit{Stabilizing correction two-step Adams-type (SC2A) scheme}:\n\t\vskip0.1cm\n\tHundsdorfer \& in 't Hout \cite{HH18} recently studied a novel class of stabilizing correction multistep methods \n\tfor the numerical solution of PDEs.\n\tWe consider here a direct adaptation to PIDEs of a prominent member of this class, the two-step Adams-type scheme \n\tcalled SC2A:\n\t\begin{equation}\label{SC2A}\n\t\t\left\{\begin{array}{lll} \n\t\t\tY_0 = V^{n-1} + \Delta t\, (A^{(M)}+A^{(J)})\sum_{i=1}^2\widehat{b}_iV^{n-i} + \Delta t\, (A_1+A_2)\sum_{i=1}^2\widecheck{b}_iV^{n-i},\\\n\t\t\tY_j = Y_{j-1} + \theta \Delta t A_j(Y_j-V^{n-1})\quad (j=1,2),\\\n\t\t\tV^n = Y_2,\n\t\t\end{array}\right.\n\t\end{equation} \n\twith coefficients $(\widehat{b}_1,\widehat{b}_2) = \left(\frac{3}{2},-\frac{1}{2}\right)$ and \n\t$(\widecheck{b}_1,\widecheck{b}_2) = \left(\frac{3}{2}-\theta, - \frac{1}{2}+\theta\right)$.\n\tThe integral part and mixed derivative term are now both handled by the two-step Adams--Bashforth scheme.\n\tMethod \eqref{SC2A} is again of order two for any value~$\theta$. \n\tFollowing \cite{HH18}, we take $\theta = \frac{3}{4}$, which is based upon stability and accuracy results.\n\tFor starting \eqref{SC2A}, we define $V^1$ again by \eqref{MCS} with $\theta = \frac{1}{3}$.\n\t\vskip0.2cm \n\t\n\end{enumerate}\n\n\n\n\n\n\t\n\section{Numerical study}\label{SecResults}\nIn this section we present ample numerical experiments for the seven operator splitting schemes formulated in Section~\ref{SecTime}, \nwhich provide important insight into their convergence behaviour and mutual performance.\nThe experiments involve three different parameter sets for the two-asset Kou model, labelled as 1, 2, and~3. \nParameter set~1 is taken from Clift \& Forsyth \cite{CF08}. 
\nSet~2 is a blend of parameters considered for the one-asset Kou model by Almendral \\& Oosterlee~\\cite{AO05} and d'Halluin, Forsyth \n\\& Vetzal~\\cite{HFV05} (see also Toivanen~\\cite{T08}). \nSet~3 has been newly constructed and includes a relatively large value for the product $\\lambda T$, i.e., the expected number of \njumps in $[0,T]$. \nFor the truncated spatial domain, we (heuristically) select $S_{\\rm max} = 20K, 10K, 30K$ for sets 1, 2, 3, respectively.\n\n\\begin{table}[H]\n\t\\caption{Parameter sets for the two-asset Kou jump-diffusion model.}\n\t\\small\n\t\\begin{tabular}{@{}llllllllllllll@{}}\n\t\t\\toprule\n\t\t& $\\sigma_1$ & $\\sigma_2$ & $r$ & $\\rho$ & $\\lambda$ & $p_1$ & $p_2$ & $\\eta_{p_1}$ & $\\eta_{q_1}$ & $\\eta_{p_2}$ & $\\eta_{q_2}$ & $K$ & $T$ \\\\ \\midrule\n\t\tSet 1 & 0.12 & 0.15 & 0.05 & 0.30 & 0.50 & 0.40 & 0.60 & $1\/0.20$ & $1\/0.15$ & $1\/0.18$ & $1\/0.14$ & 100 & 1 \\\\\n\t\tSet 2 & 0.15 & 0.20 & 0.05 & 0.50 & 0.20 & 0.3445 & 0.50 & 3.0465 & 3.0775 & 3 & 2 & 100 & 0.2 \\\\\n\t\tSet 3 & 0.20 & 0.30 & 0.05 & 0.70 & 8 & 0.60 & 0.65 & 5 & 4 & 4 & 3 & 100 & 1 \\\\ \\bottomrule\n\t\\end{tabular}\n\t\\label{paramsets}\n\\end{table}\n\n\n\\subsection{Option values and numerical convergence behaviour}\\label{SecResults1}\nFigure~\\ref{FigSurfacePlots} shows the numerically computed\\footnote{With $m_1=m_2=200$, $N=100$ and the MCS2 scheme.} option value \nsurfaces for the European put-on-the-average option and the three parameter sets from Table~\\ref{paramsets} on the asset price \ndomain $[0,3K]\\times[0,3K]$ and $t=T$.\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[trim = 1.4in 3.7in 1.3in 3.7in, clip, scale=0.5]{surf1.pdf}\t\n\t\\includegraphics[trim = 1.4in 3.7in 1.3in 3.7in, clip, scale=0.5]{surf2.pdf}\t\n\t\\includegraphics[trim = 1.3in 3.7in 1.3in 3.7in, clip, scale=0.5]{surf3.pdf}\n\t\\caption{Option value surfaces for the European put-on-the-average option under the two-asset Kou model in the case of parameter 
\n\tset 1 (top left), 2 (top right) and 3 (bottom) given in Table~\\ref{paramsets}.}\n\t\\label{FigSurfacePlots}\n\\end{figure}\n\n\\noindent\nFor future reference, accurate approximations to the option values have been \ncomputed\\footnote{Using $m_1=m_2=1000$, $N=500$, the MCS2 scheme and spline interpolation.} \nfor specifically chosen spot prices $S_0^{(1)}$ and $S_0^{(2)}$ of the two assets in a\nneighbourhood of the strike $K$, see Table~\\ref{tablevalues}. \n\\vskip0.3cm\n\n\\begin{table}[H]\n\t\\caption{\\label{tablevalues}European put-on-the-average option value approximations for parameter sets 1, 2 and 3.}\n\t\\vspace{0.5cm}\n\t\\centering\n\tSet 1\\\\\\vspace{0.1cm}\n\t\\begin{tabular}{llrrrrr}\n\t\t\\toprule\n\t\t& & $S_0^{(1)} = 90$ & & $S_0^{(1)} = 100$ & & $S_0^{(1)} = 110$ \\\\\n\t\t& & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} \\\\\n\t\t$S_0^{(2)} = 90$ & & 8.9385 & & 6.0316 & & 3.8757 \\\\\n\t\t$S_0^{(2)} = 100$ & & 5.9655 & & 3.8038 & & 2.3370 \\\\\n\t\t$S_0^{(2)} = 110$ & & 3.7641 & & 2.2978 & & 1.3771 \\\\ \n\t\t\\bottomrule\n\t\\end{tabular}\\vspace{0.5cm}\n\n\tSet 2\\\\\\vspace{0.1cm}\n\t\\begin{tabular}{llrrrrr}\n\t\t\\toprule\n\t\t& & $S_0^{(1)} = 90$ & & $S_0^{(1)} = 100$ & & $S_0^{(1)} = 110$ \\\\\n\t\t& & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} \\\\\n\t\t$S_0^{(2)} = 90$ & & 9.6863 & & 5.5616 & & 2.6400 \\\\\n\t\t$S_0^{(2)} = 100$ & & 5.6162 & & 2.6929 & & 1.1264 \\\\\n\t\t$S_0^{(2)} = 110$ & & 2.7570 & & 1.1670 & & 0.5246 \\\\ \n\t\t\\bottomrule\n\t\\end{tabular}\\vspace{0.5cm}\n\n\tSet 3\\\\\\vspace{0.1cm}\n\t\\begin{tabular}{llrrrrr}\n\t\t\\toprule\n\t\t& & $S_0^{(1)} = 90$ & & $S_0^{(1)} = 100$ & & $S_0^{(1)} = 110$ \\\\\n\t\t& & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} \\\\\n\t\t$S_0^{(2)} = 90$ & & 
32.7459 & & 31.0984 & & 29.5758 \\\n\t\t$S_0^{(2)} = 100$ & & 30.5796 & & 29.0181 & & 27.5770 \\\n\t\t$S_0^{(2)} = 110$ & & 28.5830 & & 27.1033 & & 25.7396 \\\ \n\t\t\bottomrule\n\t\end{tabular}\vspace{0.5cm}\n\end{table}\n\nWe next examine, through numerical experiments, the convergence behaviour of the seven operator splitting schemes formulated \nin Section~\ref{SecTime} for the temporal discretization of the semidiscrete two-dimensional Kou PIDE. \nTake $m_1=m_2=m$.\nConsider a region of financial interest\n\begin{equation*}\label{ROI}\n\t\textrm{ROI} = \left\{(s_1,s_2) : \tfrac{1}{2}K < s_1,s_2 < \tfrac{3}{2}K\right\}\n\end{equation*}\nand define the {\it temporal discretization error} on this ROI for $t=T$ by\n\begin{equation}\label{EROI}\n\t\widehat{E}^{\rm ROI}(m,N) = \textrm{max} \left\{| V_{i,j}(T) - V_{i,j}^{N'} |: \Delta t = T\/N' ~~{\rm and}~~ \n\t(s_{1,i},s_{2,j})\in \rm ROI\right\}.\n\end{equation}\nClearly, this error is measured in the (important) maximum norm.\nThe MCS2 scheme is applied with $3000$ time steps to obtain a reference value for the exact semidiscrete solution $V(T)$ to \n\eqref{ODEs}.\nFor each of the seven operator splitting schemes formulated in Section~\ref{SecTime}, we study the temporal discretization \nerror for a sequence of values $N$, where the approximation $V^{N'}$ to $V(T)$ is computed using $N'$ time steps.\nHere $N'$ is chosen as a function of $N$ depending on the specific scheme, so as to arrive at a fair comparison between the \nschemes.\n\nTheoretical as well as experimental evidence indicates that for the four IMEX schemes the process of solving the linear systems, \ninvolving the matrix $I-\tfrac{1}{2}\Delta t A^{(D)}$, essentially determines the total computational time in each time step \nwhenever $m$ is sufficiently large ($m\gtrsim 150$). 
\nIn particular, the time for evaluating the integral part, by means of the algorithm from Subsection~\ref{SecIntpart}, forms \nonly a small fraction of the total computational time.\nAccordingly, for a fair comparison between the IMEX schemes, we apply the three schemes (\ref{CNFE}), (\ref{IETR}), (\ref{CNAB}) \nwith $N' = 2N$ time steps and the scheme (\ref{CNFI}) with just $N'=N$ time steps, since in the latter scheme twice as many \nlinear systems are solved in each time step.\n\nFor the three ADI schemes we find that the evaluation of the two-dimensional integral part takes about the same computational\ntime as the solution of four pertinent tridiagonal linear systems. \nThe scheme (\ref{MCS}) employs two evaluations of the integral part, whereas (\ref{MCS2}) and (\ref{SC2A}) each use only one \nsuch evaluation.\nNext, both schemes (\ref{MCS}) and (\ref{MCS2}) require the solution of four tridiagonal systems, whereas (\ref{SC2A}) involves \nonly two tridiagonal systems.\nAccordingly, for a fair mutual comparison, we apply the ADI schemes (\ref{MCS}), (\ref{MCS2}) and (\ref{SC2A}) with, respectively, \n$N'=N$, $N' = \lfloor 3N\/2 \rfloor$ and $N' = 2N$ time steps.\n\nFor each of the seven splitting schemes, the temporal discretization errors $\widehat{E}^{\rm ROI}(m,N)$ have been computed for \n$m = 200$ and a sequence of values $N$ between 10 and 1000. \nFigure~\ref{FigTemperrors} displays the obtained temporal errors for all three parameter sets given in Table~\ref{paramsets}, \nwhere the left column shows the results for the IMEX schemes and the right column the results for the ADI schemes.\n\nFrom Figure~\ref{FigTemperrors} we observe the positive result that, for each given scheme and parameter set, the temporal errors \nare bounded from above by a moderate value and decrease monotonically as $N$ increases. 
\nAs expected, the CNFE scheme (\ref{CNFE}) shows an order of convergence equal to one, whereas the other six splitting schemes \nreveal a desirable order of convergence equal to two.\nBy repeating the numerical experiments for spatial grids that are finer (e.g.~$m=400$) or coarser (e.g.~$m=50, 100$) \nwe obtain results that are visually identical to those displayed in Figure~\ref{FigTemperrors}. \nThis indicates that, for each splitting scheme, the temporal errors are essentially independent of the spatial grid \nsize, and hence, its observed convergence order is valid in a stiff sense, which is very favourable.\n\nIt is interesting to remark that the error constants become larger as the intensity of the jumps increases, keeping all\nelse fixed.\nThis has been observed in additional numerical experiments for increasing values of $\lambda$ between 0 and 10.\n\nAmong the four IMEX schemes, the IETR and CNAB schemes yield the smallest error constants in the numerical experiments.\nAmong the three ADI schemes, the MCS2 scheme has the smallest error constant in all experiments.\nOverall, the MCS2 scheme is preferred.\nIt yields temporal errors that are smaller than or approximately equal to those of the IETR and CNAB schemes, but is computationally \nmuch faster since the linear systems appearing in this scheme are just tridiagonal.\nAs an indication of actual run times of the MCS2 scheme, we mention that they are approximately 6e-4, 2e-3, 7e-3, \n4.5e-2 CPU seconds per time step if $m=50,100,200,400$, respectively, in the case of our Matlab implementation, \nversion R2020b, on an Intel Core i7-8665U processor at 1.9 GHz with 16 GB memory.\n\n\begin{figure}[H]\n\t\centering\n\t\includegraphics[trim = 1.3in 3.7in 1in 3.7in, clip, scale=0.5]{TemperrorIMEX_jul8_set1.pdf}\includegraphics[trim = 1.3in 3.7in 1.6in 3.7in, clip, scale=0.5]{TemperrorADI_jul8_set1.pdf}\\\n\t\includegraphics[trim = 1.3in 3.7in 1in 3.7in, clip, 
scale=0.5]{TemperrorIMEX_jul8_set2.pdf}\\includegraphics[trim = 1.3in 3.7in 1.6in 3.7in, clip, scale=0.5]{TemperrorADI_jul8_set2.pdf}\\\\\n\t\\includegraphics[trim = 1.3in 3.7in 1in 3.7in, clip, scale=0.5]{TemperrorIMEX_jul8_set3.pdf}\\includegraphics[trim = 1.3in 3.7in 1.6in 3.7in, clip, scale=0.5]{TemperrorADI_jul8_set3.pdf}\n\t\\caption{Temporal errors $\\widehat{E}^{\\rm ROI}(200,N)$ of the seven operator splitting schemes under consideration. IMEX schemes on the left side and ADI \n\tschemes on the right side, with parameter set 1 (top), set 2 (middle) and set 3 (bottom) from Table~\\ref{paramsets}. The CNFI and MCS schemes are applied \n\twith step size $\\Delta t = T\/N$, the MCS2 scheme is applied with $\\Delta t = T\/\\lfloor 3N\/2 \\rfloor$ and the CNFE, IETR, CNAB, SC2A schemes are applied \n\twith $\\Delta t = T\/(2N)$.}\n\t\\label{FigTemperrors}\n\\end{figure}\n\n\\newpage\n\\subsection{The Greeks}\\label{Greeks}\nThe Greeks are mathematical derivatives of the option value with respect to underlying variables and parameters. \nThey constitute a measure for risk that indicates how sensitive an option value is to changes in underlying variables and parameters \nand are crucial for hedging strategies. \nIn this subsection we consider the numerical approximation of the Delta and Gamma Greeks. \nDelta is a measure for the rate of change of the option value with respect to a change in an underlying asset price.\nAs there are two underlying assets, there are two Deltas:\n\\begin{equation*}\n\\Delta_1 = \\frac{\\partial v}{\\partial s_1} \\quad \\text{and} \\quad \\Delta_2 = \\frac{\\partial v}{\\partial s_2} . \n\\end{equation*}\nNext, Gamma measures the rate of change of a Delta with respect to a change in an underlying asset price. 
\nThere are three different Gammas:\n\\begin{equation*}\n\t\\Gamma_{11} = \\frac{\\partial \\Delta_1}{\\partial s_1} = \\frac{\\partial^2v}{\\partial s_1^2}, \\quad \n\t\\Gamma_{22} = \\frac{\\partial \\Delta_2}{\\partial s_2} = \\frac{\\partial^2v}{\\partial s_2^2} \\quad \\text{and} \\quad \n\t\\Gamma_{12} = \\frac{\\partial \\Delta_1}{\\partial s_2} = \\frac{\\partial^2v}{\\partial s_2 \\partial s_1} = \\frac{\\partial \\Delta_2}{\\partial s_1} = \\Gamma_{21}.\n\\end{equation*}\n\nBy virtue of the finite difference discretization that has been defined in Section \\ref{SecSpatial}, the Delta and Gamma Greeks\ncan directly be approximated, at essentially no computational cost, by applying the second-order central finite \ndifference formulas considered in Subsection \\ref{SecPDEpart} to the option value approximations on the spatial grid.\n\nAs an illustration, Figure~\\ref{FigGreeks} displays the numerically\\footnote{With $m_1=m_2=200$, $N=100$ and the MCS2 scheme.} \nobtained Delta and Gamma surfaces at maturity for the European put-on-the-average option under the two-asset Kou model for \nparameter set~1. \nAs expected, the Delta surfaces are steepest around the line segment $s_1 + s_2 = 2K$ and, correspondingly, the \nGamma surfaces are highest there.\n\nSimilarly to the option value, we study the temporal convergence behaviour of all operator splitting schemes in \nthe case of the five Greeks.\nAkin to (\\ref{EROI}), the temporal discretization error in the case of Delta $\\Delta_k \\, (k = 1, 2)$ is defined by\n\\begin{equation}\\label{EROIGreeks}\n\t\\widehat{E}^{\\rm ROI}_{\\Delta_k}(m,N) = \\textrm{max} \\left\\{| (\\Delta_k)_{i,j}(T) - (\\Delta_k)_{i,j}^{N'} |: \n\t\\Delta t = T\/N' ~~{\\rm and}~~ (s_{1,i},s_{2,j})\\in \\rm ROI \\right\\}.\n\\end{equation}\nHere $\\Delta_k (T)$ denotes the pertinent finite difference matrix for convection applied to the reference value for the exact\nsemidiscrete solution $V(T)$ given in Subsection~\\ref{SecResults1}. 
\nNext, $\\Delta_k ^{N'}$ is equal to the same finite difference matrix applied to the approximation $V^{N'}$ of $V(T)$ that is \ngenerated by any one of the seven operator splitting schemes. \nAnalogous temporal discretization error definitions hold in the case of $\\Gamma_{11}, \\Gamma_{22}$, $\\Gamma_{12}$. \n\nFigure~\\ref{FigGreeksError} displays the temporal errors in the case of the five Greeks and parameter set 1 for $m = 200$ and \nthe same sequence of values $N$ between 10 and 1000 as before.\nWe arrive at the same conclusions on the temporal convergence behaviour of the seven splitting schemes as obtained \nin Subsection~\\ref{SecResults1} regarding the option value.\nIn particular, besides CNFE, all splitting schemes reveal a stiff order of convergence equal to two, and MCS2 has the best\nperformance among all these schemes.\n\nWe note that the somewhat larger errors that are observed for the CNFI, IETR, CNAB schemes when $N$ is small are attributed\nto the nonsmoothness of the initial (payoff) function and could be alleviated by applying four (instead of two) half time steps\nwith IMEX Euler at the start of the time stepping, cf. 
Section~\\ref{SecTime}.\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[trim = 1.3in 3.7in 1in 3.7in, clip, scale=0.5]{Delta1.pdf}\\includegraphics[trim = 1.3in 3.7in 1.4in 3.7in, clip, scale=0.5]{Delta2.pdf}\\\\\n\t\\includegraphics[trim = 1.3in 3.7in 1in 3.7in, clip, scale=0.5]{Gamma11.pdf}\\includegraphics[trim = 1.3in 3.7in 1.4in 3.7in, clip, scale=0.5]{Gamma22.pdf}\\\\\n\t\\includegraphics[trim = 1.3in 3.7in 1in 3.7in, clip, scale=0.5]{Gamma12.pdf}\n\t\\caption{\n\tFirst-order Greeks $\\Delta_1$ (top left) and $\\Delta_2$ (top right) and second-order Greeks $\\Gamma_{11}$ (middle left), $\\Gamma_{22}$ \n\t(middle right) and $\\Gamma_{12}$ (bottom) for the European put-on-the-average option under the two-asset Kou model in the case of \n\tparameter set 1.}\n\t\\label{FigGreeks}\n\\end{figure}\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[trim = 1.3in 3.7in 1in 3.7in, clip, scale=0.5]{errorDelta1_jul14_set1.pdf}\\includegraphics[trim = 1.3in 3.7in 1.6in 3.7in, clip, scale=0.5]{errorDelta2_jul14_set1.pdf}\\\\\n\t\\includegraphics[trim = 1.3in 3.7in 1in 3.7in, clip, scale=0.5]{errorGamma11_jul14_set1.pdf}\\includegraphics[trim = 1.3in 3.7in 1.6in 3.7in, clip, scale=0.5]{errorGamma22_jul14_set1.pdf}\\\\\n\t\\includegraphics[trim = 1.3in 3.7in 1in 3.7in, clip, scale=0.5]{errorGamma12_jul14_set1.pdf}\n\t\\caption{Temporal errors of the seven operator splitting schemes under consideration in the case of the five Greeks and parameter set 1: \n\t$\\widehat{E}^{\\rm ROI}_{\\Delta_1}$ (top left), $\\widehat{E}^{\\rm ROI}_{\\Delta_2}$ (top right), $\\widehat{E}^{\\rm ROI}_{\\Gamma_{11}}$ \n\t(middle left), $\\widehat{E}^{\\rm ROI}_{\\Gamma_{22}}$ (middle right) and $\\widehat{E}^{\\rm ROI}_{\\Gamma_{12}}$ (bottom). 
\n\tThe CNFI and MCS schemes are applied with step size $\\Delta t = T\/N$, the MCS2 scheme is applied with $\\Delta t = T\/\\lfloor 3N\/2 \\rfloor$ \n\tand the CNFE, IETR, CNAB, SC2A schemes are applied \n\twith $\\Delta t = T\/(2N)$.}\n\t\\label{FigGreeksError}\n\\end{figure}\n\n\n\n\n\n\\newpage\n\\section{Conclusions}\\label{SecConc}\nWe have studied the valuation of European options under the two-asset Kou jump-diffusion model via the numerical solution of the \npertinent two-dimensional time-dependent PIDE.\nA first main contribution of our paper is the extension of an algorithm derived by Toivanen~\\cite{T08}, which enables a highly \nefficient numerical evaluation of the nonlocal double integral appearing in this PIDE.\nThe computational cost of the resulting algorithm is optimal: it is directly proportional to the number of grid points in the \nspatial discretization.\nAlso, it is simple to implement and requires little memory.\nSubsequently, for the efficient discretization in time of the semidiscretized two-dimensional Kou PIDE, we have investigated \nseven modern operator splitting schemes of the IMEX and the ADI kind.\nEvery splitting scheme conveniently treats the integral term in an explicit fashion. \nThrough extensive numerical experiments, we have examined the stability and convergence behaviour of the splitting schemes as \nwell as their relative performance. \nAll of the considered schemes, except for the first-order CNFE scheme, show a desirable, stiff order of temporal convergence \nequal to two. \nThe MCS2 scheme, successively developed by in 't Hout \\& Welfert \\cite{HW09} for PDEs and in 't Hout \\& Toivanen \\cite{HT18} for PIDEs,\nstood out favourably among the splitting schemes in view of its superior efficiency. \nThis conclusion agrees with the results recently obtained by Boen \\& in 't Hout~\\cite{BH21} in the case of the two-dimensional \nMerton PIDE. 
\n\n\n\n\n\n\n\\bibliographystyle{plain}\n\n\\section{Introduction}\n\t\n\tIn contemporary financial option valuation theory, jump-diffusion processes form a principal class of models for the \n\tevolution of the underlying asset prices, see~e.g.~Cont \\& Tankov~\\cite{CT04book} and Schoutens~\\cite{S03book}.\n\tThe first jump-diffusion process was proposed in 1976 by Merton~\\cite{M76}.\n\tIn this classical model, the relative jump sizes are assumed to be lognormally distributed.\n\tA wide variety of jump-diffusion processes, and more generally, exponential L\\'{e}vy processes, for asset prices has been introduced \n\tin the literature since then, for example the familiar variance gamma (VG), normal inverse Gaussian (NIG) and Carr--Geman--Madan--Yor \n\t(CGMY) models~\\cite{CT04book,S03book}.\n\t\n\tIn this paper we consider the jump-diffusion model proposed by Kou~\\cite{K02}.\n\tIn this model, the relative jump sizes are given by a log-double-exponential distribution.\n\tJust like the models mentioned above, Kou's jump-diffusion model has become popular in financial option valuation theory and practice.\n\tIn this paper we are interested in the valuation of European-style options under a direct extension of Kou's single \n\tasset jump-diffusion model~\\cite{K02} to two assets.\n\tHere the (finite activity) jumps in the two asset prices are assumed to occur contemporaneously.\n\tFinancial option valuation theory then yields a two-dimensional time-dependent partial integro-differential equation (PIDE)\n\tthat must be satisfied for the values of European two-asset options. 
\n\tHere the integral part, which stems from the contribution of the jumps, is nonlocal: it is taken over the whole, two-dimensional asset \n\tprice domain.\n\tIn general, (semi-)closed analytical solutions to this PIDE are not available in the literature.\n\tAccordingly, in the present paper, we investigate the effective numerical solution of the two-dimensional time-dependent Kou PIDE.\n\t\n\tFor the numerical solution, we follow the well-known and general method of lines (MOL) approach.\n\tThe two-dimensional Kou PIDE is first discretized in space by finite differences, and the resulting large, semidiscrete system of \n\tordinary differential equations (ODEs) is subsequently discretized in time by a suitable implicit time stepping scheme.\n\tThe main challenges for the efficient and stable numerical solution are the treatment of the two-dimensional integral part and the \n\ttreatment of the two-dimensional PDE part, which includes a mixed spatial derivative term.\n\t\n\tSpatial discretization of the PIDE leads to a large, dense matrix for the integral part.\n\tIn each time step of the schemes under consideration, products of this matrix with one or more given vectors need to be computed,\n\twhich can form a computational burden.\n\tFor the one-dimensional Kou PIDE, however, Toivanen~\\cite{T08} derived a simple algorithm for evaluating these matrix-vector \n\tproducts that has optimal computational cost.\n\tA key result of our paper is a generalization of Toivanen's algorithm to the two-dimensional Kou PIDE that maintains optimal \n\tcomputational cost.\n\t\n\tFor the temporal discretization of semidiscretized one-dimensional PIDEs arising in financial option valuation, various authors have \n\tproposed operator splitting schemes where the (stiff) PDE part is handled implicitly and the (nonstiff) integral part explicitly.\n\tCont \\& Voltchkova~\\cite{CV05} considered an {\\it implicit-explicit (IMEX)} splitting scheme where the PDE part is handled by \n\tthe 
backward Euler method and the integral part by the forward Euler method.\n\tThis IMEX Euler scheme is only first-order consistent.\n\tA variety of higher-order IMEX schemes for PIDEs in finance has been studied since, e.g.~by Briani, Natalini \\& Russo~\\cite{B07},\n\tFeng \\& Linetsky~\\cite{FL08}, Kwon \\& Lee~\\cite{KL11} and Salmi \\& Toivanen~\\cite{ST14}.\n\tThe latter authors proposed the IMEX CNAB scheme, where the PDE part is treated by the Crank--Nicolson method and the integral part \n\tby the second-order Adams--Bashforth method.\n\tThe IMEX CNAB scheme has been successfully applied to two-dimensional option valuation PIDEs in e.g.~\\cite{HT16,STS14}. \n\t\n\tTavella \\& Randall~\\cite{TR00book} and d'Halluin, Forsyth \\& Vetzal~\\cite{HFV05} considered an alternative approach where the PDE \n\tpart is treated by the Crank--Nicolson method and a fixed-point iteration on the integral part is performed in each time step. \n\tThis approach has been applied to two-dimensional option valuation PIDEs in Clift \\& Forsyth~\\cite{CF08}, including the \n\ttwo-dimensional Kou PIDE.\n\tWhen the number of fixed-point iterations is frozen, one arrives at a particular IMEX scheme.\n\t\n\tFor the efficient temporal discretization of semidiscrete two-dimensional PIDEs, a subsequent important improvement is obtained \n\tby using, instead of the Crank--Nicolson method, an {\\it alternating direction implicit (ADI)} splitting scheme for the \n\ttwo-dimensional PDE part.\n\tIn the computational finance literature, a range of effective, second-order ADI schemes has been developed and analyzed for \n\tmulti-dimensional PDEs (without integral part), where in each time step the implicit unidirectional stages are combined with \n\texplicit stages involving the mixed derivative terms, see e.g.~\\cite{H17book,HT16}.\n\tIn this paper, we consider the well-established modified Craig--Sneyd (MCS) scheme, introduced by in 't Hout \\& Welfert~\\cite{HW09}, \n\tand the stabilizing 
correction two-step Adams-type scheme called SC2A, constructed by Hundsdorfer \\& in 't Hout~\\cite{HH18}.\n\t\n\tThe direct adaptation of ADI schemes for PDEs to PIDEs in finance has first been studied by Kaushansky, Lipton \\& Reisinger~\\cite{KLR18} \n\tand next by in 't Hout \\& Toivanen~\\cite{HT18}.\n\tHere the implicit unidirectional stages are blended with explicit stages involving both the mixed derivative terms and the integral \n\tpart, leading again to second-order schemes.\n\tWe note that an efficient, parallel implementation of the schemes developed in \\cite{HT18} has been designed by Ghosh \\& Mishra~\\cite{GM21} \n\twho apply a parallel cyclic reduction algorithm.\n\t\n\tBoen \\& in 't Hout~\\cite{BH21} recently investigated a collection of seven contemporary operator splitting schemes of both the \n\tIMEX and the ADI kind in the application to the two-dimensional Merton PIDE for the values of two-asset options.\n\tHere the numerical evaluation of the integral term has been done by means of a FFT algorithm, following \n\te.g.~\\cite{AO05,AA00,CF08,HFV05,STS14}. \n\tBased on analytical and numerical evidence in \\cite{BH21,HT18} in the case of the two-dimensional Merton and Bates PIDEs, \n\tit is concluded that, among the schemes under consideration, the adaptation of the MCS scheme introduced in \\cite{HT18} \n\tthat deals with the integral part in a two-step Adams--Bashforth fashion is preferable.\n\t\n\tIn the present paper we consider for the two-dimensional Kou PIDE the same collection of operator splitting schemes as in~\\cite{BH21}. 
\n\tFor the numerical evaluation of the double integral part, a generalization of the algorithm of Toivanen~\\cite{T08} is derived that has \n\toptimal computational cost.\n\tThis algorithm is simple to implement, requires little memory usage and is computationally much faster than the FFT algorithm \n\tmentioned above.\n\tAs a representative example, we consider the approximation of European put-on-the-average option values, together with their Greeks.\n\tAn outline of the rest of this paper is as follows.\n\t\n\tIn Section~\\ref{SecModel} the two-dimensional Kou PIDE is formulated.\n\tSection~\\ref{SecSpatial} deals with its spatial discretization.\n\tFirst, in Subsection~\\ref{SecPDEpart}, the two-dimensional PDE part is considered and a second-order finite difference discretization \n\ton a suitable nonuniform spatial grid is described. \n\tNext, in Subsection~\\ref{SecIntpart}, we present the first main contribution of this paper.\n\tA common, second-order spatial discretization of the double integral part is employed and for its highly efficient evaluation \n\twe derive an extension of the algorithm proposed by Toivanen~\\cite{T08}.\n\tIt is shown that the computational cost of the extension is directly proportional to the number of spatial grid points, \n\twhich is optimal.\n\tSection~\\ref{SecTime} subsequently concerns the temporal discretization of the obtained semidiscrete two-dimensional Kou PIDE.\n\tHere the seven contemporary operator splitting schemes of the IMEX and ADI kind from~\\cite{BH21} are considered. 
\n\tEach of these schemes conveniently treats the integral part in an explicit manner, where its fast evaluation is performed by the \n\talgorithm derived in Subsection~\\ref{SecIntpart}.\n\tIn Section~\\ref{SecResults} ample numerical experiments are presented.\n\tHere European put-on-the-average option values, together with their Greeks Delta and Gamma, are considered and we examine in detail\n\tthe temporal discretization errors of the different operator splitting schemes.\n\tThe final Section~\\ref{SecConc} gives conclusions.\n\t\n\t\n\n\n\n\t\n\t\\section{The two-dimensional Kou PIDE}\\label{SecModel}\n\tUnder the two-asset Kou jump-diffusion model, the value $v = v(s_1,s_2,t)$ of a European-style option with maturity date $T>0$ and $s_i$ ($i=1,2$) \n\trepresenting the price of asset $i$ at time $\\tau = T-t$, satisfies the following PIDE:\n\t\\begin{align}\\label{PIDE2D}\n\t\t\\dud{t} =&~ \\tfrac{1}{2} \\sigma_1^2s_1^2\\dudd{s_1} + \\rho \\sigma_1\\sigma_2s_1s_2\\duddm{s_1}{s_2} + \\tfrac{1}{2} \\sigma_2^2 s_2^2\\dudd{s_2} + \n\t\t(r-\\lambda \\kappa_1) s_1\\dud{s_1} + (r-\\lambda\\kappa_2) s_2 \\dud{s_2} \\nonumber \\\\\n\t\t& -(r+\\lambda)v+\\lambda\\int_0^{\\infty}\\int_0^{\\infty} f(y_1,y_2) v(s_1y_1, s_2y_2,t) \\mathrm{d} y_1 \\mathrm{d} y_2\n\t\\end{align}\n\twhenever $s_1>0$, $s_2>0$, $0 < t \\leq T$. Here $r$ is the risk-free interest rate, $\\sigma_i > 0$ ($i=1,2$) is the instantaneous volatility for asset $i$ conditional \n\ton the event that no jumps occur, and $\\rho$ is the correlation coefficient of the two underlying standard Brownian motions.\n\tNext, $\\lambda$ is the jump intensity of the underlying Poisson arrival process, and $\\kappa_i$ ($i=1,2$) is the expected \n\trelative jump size for asset $i$.\n The function $f$ is the joint probability density function of two independent random variables possessing log-double-exponential \n\tdistributions~\\cite{K02},\n\t\\begin{equation}\\label{pdf2D}\n\t\tf(y_1,y_2) = 
\n\t\t\\left\\{\\begin{array}{lll}\n\t\t\tq_1q_2\\eta_{q_1}\\eta_{q_2}y_1^{\\eta_{q_1}-1}y_2^{\\eta_{q_2}-1} & (0 < y_1, y_2 < 1),\\\\\n\t\t\tp_1q_2\\eta_{p_1}\\eta_{q_2}y_1^{-\\eta_{p_1}-1}y_2^{\\eta_{q_2}-1} & (y_1 \\geq 1, 0< y_2 < 1),\\\\\n\t\t\tq_1p_2\\eta_{q_1}\\eta_{p_2}y_1^{\\eta_{q_1}-1}y_2^{-\\eta_{p_2}-1} & (0 < y_1 < 1, y_2 \\geq 1),\\\\\n\t\t\tp_1p_2\\eta_{p_1}\\eta_{p_2}y_1^{-\\eta_{p_1}-1}y_2^{-\\eta_{p_2}-1} & (y_1, y_2 \\geq 1).\n\t\t\\end{array}\\right.\n\t\\end{equation} \n\tThe parameters $p_i$, $q_i$, $\\eta_{p_i}$, $\\eta_{q_i}$ are all positive constants with $p_i + q_i = 1$ and $\\eta_{p_i} > 1$.\n\tIt holds that\n\t\\begin{equation*}\n\t \\kappa_i = \\frac{p_i\\eta_{p_i}}{\\eta_{p_i}-1}+\\frac{q_i\\eta_{q_i}}{\\eta_{q_i}+1}-1 \\quad (i=1,2).\n\t\\end{equation*}\n\tFor \\eqref{PIDE2D}, the initial condition is given by\n\t\\[\n\tv(s_1,s_2,0) = \\phi(s_1,s_2),\n\t\\]\n\twhere $\\phi$ denotes the payoff function of the option.\n\tAs a typical example, we consider in this paper a European put-on-the-average option, which has the payoff function\n\t\\begin{equation}\\label{payoff}\n\t\t\\phi(s_1,s_2) = \\textrm{max} \\left(0\\,,\\,K-\\frac{s_1+s_2}{2}\\right)\n\t\\end{equation}\n\twith strike price $K>0$.\n\tIts graph is shown in Figure~\\ref{FigPayoff}.\n\tConcerning the boundary condition, it holds that the PIDE \\eqref{PIDE2D} is itself satisfied on the two sides $s_{1} = 0$ \n\tand $s_{2} = 0$, respectively.\n\t\n\n\t\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[trim = 1.3in 3.7in 1in 3.7in, clip, scale=0.5]{payoff.pdf}\n\t\\caption{Payoff put-on-the-average option with $K=100$.}\n\t\\label{FigPayoff}\n\t\\end{figure}\n\t\n\t\n\n\n\n\t\n\t\\section{Spatial discretization}\\label{SecSpatial}\n\tFor the numerical solution of the initial-boundary value problem for \\eqref{PIDE2D} we employ the popular method of lines (MOL) approach.\n\tThis approach consists of two consecutive steps~\\cite{HV03book}: first the PIDE \\eqref{PIDE2D} is 
discretized in space and subsequently in time.\n\tThis Section~\\ref{SecSpatial} deals with the spatial discretization. \n\tIn the next Section~\\ref{SecTime} we shall consider the temporal discretization.\n\t\n\n\n\n\t\n\t\\subsection{Convection-diffusion-reaction part}\\label{SecPDEpart}\n\tFor the numerical solution, the spatial domain is truncated to a bounded set $[0,S_{\\rm max}]\\times[0,S_{\\rm max}]$ \n\twith fixed value $S_{\\rm max}$ chosen sufficiently large.\n\tOn the two far sides $s_{1} = S_{\\rm max}$ and $s_{2} = S_{\\rm max}$ a linear boundary condition is taken, which is well-known in finance,\n\t\\begin{equation}\\label{LBC}\n\t\t\\dudd{s_1} = 0~~(\\textrm{if}~s_{1} = S_{\\rm max}) \\quad \\mbox{ and } \\quad \\dudd{s_2} = 0 ~~(\\textrm{if}~s_{2} = S_{\\rm max}).\n\t\\end{equation}\n\tIn this subsection we describe the finite difference discretization of the convection-diffusion-reaction part of the PIDE \\eqref{PIDE2D}, \n\tspecified by\n\t\\begin{equation*}\\label{lindiffop}\n\t\t\\mathcal{D} v = \\tfrac{1}{2} \\sigma_1^2s_1^2\\dudd{s_1} + \\rho \\sigma_1\\sigma_2s_1s_2\\duddm{s_1}{s_2} + \\tfrac{1}{2} \\sigma_2^2 s_2^2\\dudd{s_2} \n\t\t+ (r-\\lambda \\kappa_1) s_1\\dud{s_1} + (r-\\lambda\\kappa_2) s_2 \\dud{s_2} -(r+\\lambda)v.\n\t\\end{equation*}\n\tThe semidiscretization of this part is common and similar to that in~e.g.~\\cite{BH21}.\n\t\n\tLet integers $m_1, m_2 \\geq 1$ be given.\n\tThe option value function $v$ is approximated at a nonuniform, Cartesian set of spatial grid points,\n\t\\begin{equation*}\\label{sgrid}\n\t\t(s_{1,i},s_{2,j})\\in [0,S_{\\rm max}]\\times[0,S_{\\rm max}] \\quad (0\\leq i \\leq m_1, \\ 0\\leq j \\leq m_2),\n\t\\end{equation*}\n\twith $s_{1,0} = s_{2,0} = 0$ and $s_{1,m_1} = s_{2,m_2} = S_{\\rm max}$. 
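Such a nonuniform Cartesian grid is typically obtained by pushing a uniform auxiliary grid through a smooth stretching map. The following NumPy sketch is only an illustration, not the paper's code: the function name, the scale parameter `d`, and the piecewise linear-plus-sinh map are our assumptions, chosen so that points cluster in the financially interesting region $[0,2K]$ and the endpoints $0$ and $S_{\rm max}$ are hit exactly.

```python
import numpy as np

def nonuniform_grid(m, K, S_max, d):
    """Build a nonuniform mesh 0 = s_0 < s_1 < ... < s_m = S_max that
    concentrates grid points in [0, 2K].  Assumed (illustrative) map:
    uniform spacing of size d per xi-unit on [0, 2K], sinh-stretched
    spacing beyond, consistent with xi_int = 2K/d and
    xi_max = xi_int + asinh(S_max/d - xi_int)."""
    xi_int = 2.0 * K / d
    xi_max = xi_int + np.arcsinh(S_max / d - xi_int)
    xi = np.linspace(0.0, xi_max, m + 1)       # equidistant xi_0, ..., xi_m
    # linear part maps [0, xi_int] onto [0, 2K]; sinh part reaches S_max
    s = np.where(xi <= xi_int,
                 d * xi,
                 2.0 * K + d * np.sinh(xi - xi_int))
    return s
```

Note that $s_m = 2K + d\,\sinh(\sinh^{-1}(S_{\rm max}/d - \xi_{\rm int})) = S_{\rm max}$ exactly, so no rescaling of the last point is needed.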
\n\tThe nonuniform grid in each spatial direction is defined through a smooth transformation of an artificial uniform grid, such that \n\trelatively many grid points are placed in a region of financial and numerical interest.\n\tFigure~\\ref{FigGrid} shows a sample spatial grid if $m_1=m_2=50$, $K=100$ and $S_{\\rm max} = 5K$.\n\t\n\tLet integer $m \\geq 1$ and parameter $d > 0$.\n\tConsider equidistant points $0 = \\xi_0 < \\xi_1 < \\dots < \\xi_{m} = \\xi_{\\rm max}$ where\n\t\\begin{equation*}\n\t\t\\xi_{\\rm max} = \\xi_{\\rm int} + \\sinh^{-1}\\left(\\frac{S_{\\rm max}}{d}-\\xi_{\\rm int}\\right) \n\t\t\\quad \\textrm{and } \\quad\n\t\t\\xi_{\\rm int} = \\frac{2K}{d}.\n\t\\end{equation*}\n\tThen in each spatial direction a nonuniform mesh $0=s_00$ are given. A change of variables $y_i = z_i\/s_i$ ($i=1,2$) yields\n\\begin{equation*}\n\t\\mathcal{J} = \\lambda\\int_0^{\\infty}\\int_0^{\\infty} f\\bigg(\\frac{z_1}{s_1},\\frac{z_2}{s_2}\\bigg) v(z_1,z_2,t) \\frac{\\mathrm{d} z_1 \\mathrm{d} z_2}{s_1 s_2}.\n\\end{equation*}\nThe density function $f$ is defined on a partition of four sets of the first quadrant in the real plane, see (\\ref{pdf2D}).\nIt follows that $\\mathcal{J}$ is decomposed into four integrals as $\\mathcal{J} = \\mathcal{J}_1 + \\mathcal{J}_2 + \\mathcal{J}_3 + \\mathcal{J}_4$, where\n\\begin{align*}\n\t\\mathcal{J}_1 &= \\lambda q_1q_2\\eta_{q_1}\\eta_{q_2}s_1^{-\\eta_{q_1}}s_2^{-\\eta_{q_2}} \\int_0^{s_2} \\int_0^{s_1} z_1^{\\eta_{q_1}-1}z_2^{\\eta_{q_2}-1} v(z_1,z_2,t) \\mathrm{d} z_1 \\mathrm{d} z_2,\\\\\n\t\\mathcal{J}_2 &= \\lambda p_1q_2\\eta_{p_1}\\eta_{q_2}s_1^{\\eta_{p_1}}s_2^{-\\eta_{q_2}} \\int_0^{s_2} \\int_{s_1}^{\\infty} z_1^{-\\eta_{p_1}-1}z_2^{\\eta_{q_2}-1} v(z_1,z_2,t) \\mathrm{d} z_1 \\mathrm{d} z_2,\\\\\n\t\\mathcal{J}_3 &= \\lambda q_1p_2\\eta_{q_1}\\eta_{p_2}s_1^{-\\eta_{q_1}}s_2^{\\eta_{p_2}} \\int_{s_2}^{\\infty} \\int_0^{s_1} z_1^{\\eta_{q_1}-1}z_2^{-\\eta_{p_2}-1} v(z_1,z_2,t) \\mathrm{d} z_1 \\mathrm{d} 
z_2,\\\\\t\t\n\t\\mathcal{J}_4 &= \\lambda p_1p_2\\eta_{p_1}\\eta_{p_2}s_1^{\\eta_{p_1}}s_2^{\\eta_{p_2}} \\int_{s_2}^{\\infty} \\int_{s_1}^{\\infty} z_1^{-\\eta_{p_1}-1}z_2^{-\\eta_{p_2}-1} v(z_1,z_2,t) \\mathrm{d} z_1 \\mathrm{d} z_2.\n\\end{align*}\nWe first consider discretization of the integral $\\mathcal{J}_1$. \nUpon writing\n\\begin{equation*}\n\t\\psi_1(s_1,s_2) = \\lambda q_1q_2\\eta_{q_1}\\eta_{q_2}s_1^{-\\eta_{q_1}}s_2^{-\\eta_{q_2}} \n\t\\quad \\textrm{and} \\quad\n\t\\varphi_1(z_1,z_2) = z_1^{\\eta_{q_1}-1}z_2^{\\eta_{q_2}-1},\n\\end{equation*}\nwe have\n\\begin{equation}\n\t\\mathcal{J}_1 = \\psi_1(s_1,s_2)\\int_0^{s_2} \\int_0^{s_1} \\varphi_1(z_1,z_2) v(z_1,z_2,t) \\mathrm{d} z_1 \\mathrm{d} z_2.\n\\end{equation}\nFor $1\\leq i \\leq m_1$, $1\\leq j \\leq m_2$ let\n\\begin{equation*}\n\t\\mathcal{J}_{1,ij} = \\psi_1(s_{1,i},s_{2,j}) \\int_0^{s_{2,j}} \\int_0^{s_{1,i}} \\varphi_1(z_1,z_2) v(z_1,z_2,t) \\mathrm{d} z_1 \\mathrm{d} z_2\n\\end{equation*}\ndenote the value of $\\mathcal{J}_1$ at the spatial grid point $(s_{1,i},s_{2,j})$.\nDefine\n\\begin{equation*}\n\t\\mathcal{G}_{1,kl} = \\int_{s_{2,l-1}}^{s_{2,l}} \\int_{s_{1,k-1}}^{s_{1,k}} \\varphi_1(z_1,z_2) v(z_1,z_2,t) \\mathrm{d} z_1 \\mathrm{d} z_2\n\\end{equation*}\nwhenever $1\\leq k \\leq m_1$, $1\\leq l \\leq m_2$.\nThen the following useful expression for $\\mathcal{J}_{1,ij}$ in terms of a double cumulative sum is obtained,\n\\begin{equation}\\label{cumsum1}\n\t\\mathcal{J}_{1,ij} = \\psi_1(s_{1,i},s_{2,j}) \\sum_{k=1}^i \\sum_{l=1}^j \\mathcal{G}_{1,kl}\n\t\\quad\n\t(1\\leq i \\leq m_1, 1\\leq j \\leq m_2).\n\\end{equation}\nNotice the obvious but important fact that the $\\mathcal{G}_{1,kl}$ are independent of the indices $i$ and $j$.\nHence, if all values $\\mathcal{G}_{1,kl}$ are given, then computing the double cumulative sums in \\eqref{cumsum1} \nfor all $i$, $j$ can be done in, to leading order, just $2m_1m_2$ additions.\n\nWe subsequently construct approximations $G_{1,kl}$ to 
$\\mathcal{G}_{1,kl}$ ($1\\leq k \\leq m_1$, $1\\leq l \\leq m_2$)\nand define the approximation to $\\mathcal{J}_{1,ij}$ by\n\\begin{equation}\\label{cumsum1_semi}\n\tJ_{1,ij} = \\psi_1(s_{1,i},s_{2,j}) \\sum_{k=1}^i \\sum_{l=1}^j G_{1,kl}\n\t\\quad\n\t(1\\leq i \\leq m_1, 1\\leq j \\leq m_2).\n\\end{equation}\nFor any given $k, l$ with $1\\leq k \\leq m_1$, $1\\leq l \\leq m_2$ consider the natural choice of bilinear interpolation \nto approximate $v(z_1,z_2,t)$ on the $(z_1,z_2)$-domain $[s_{1,k-1}, s_{1,k}] \\times [s_{2,l-1}, s_{2,l}]$:\n\\begin{equation*}\n\t{\\widetilde v}_{kl}(z_1, z_2,t) = \\ell_{kl}^{00}(z_1,z_2) V_{k-1,l-1}(t) + \\ell_{kl}^{10}(z_1,z_2) V_{k,l-1}(t) +\n\t\\ell_{kl}^{01}(z_1,z_2) V_{k-1,l}(t) + \\ell_{kl}^{11}(z_1,z_2) V_{k,l}(t)\n\\end{equation*}\nwith weights\n\\begin{align*}\n\t\\ell_{kl}^{00}(z_1,z_2) &= (s_{1,k}-z_1)(s_{2,l}-z_2)\/\\delta_{kl},\\\\\n\t\\ell_{kl}^{10}(z_1,z_2) &= (z_1-s_{1,k-1})(s_{2,l}-z_2)\/\\delta_{kl},\\\\\n\t\\ell_{kl}^{01}(z_1,z_2) &= (s_{1,k}-z_1)(z_{2}-s_{2,l-1})\/\\delta_{kl},\\\\\n\t\\ell_{kl}^{11}(z_1,z_2) &= (z_1-s_{1,k-1})(z_2-s_{2,l-1})\/\\delta_{kl},\n\\end{align*}\nwhere $\\delta_{kl} = \\Delta s_{1,k}\\Delta s_{2,l}$ and $\\Delta s_{1,k} = s_{1,k}-s_{1,k-1}$, $\\Delta s_{2,l} = s_{2,l}-s_{2,l-1}$.\nThen we define \n\\begin{equation*}\n\tG_{1,kl} = \\int_{s_{2,l-1}}^{s_{2,l}} \\int_{s_{1,k-1}}^{s_{1,k}} \\varphi_1(z_1,z_2) {\\widetilde v}_{kl}(z_1, z_2,t) \\mathrm{d} z_1 \\mathrm{d} z_2.\n\\end{equation*}\nA straightforward calculation yields the simple, convenient formula \n\\begin{equation}\\label{G1kl}\n\tG_{1,kl} = \n\t\\gamma_{1,kl}^{00} V_{k-1,l-1}(t) + \\gamma_{1,kl}^{10} V_{k,l-1}(t) +\n\t\\gamma_{1,kl}^{01} V_{k-1,l}(t) + \\gamma_{1,kl}^{11} V_{k,l}(t)\n\\end{equation}\nwith\n\\begin{align*}\n\t\\gamma_{1,kl}^{00} &= (s_{1,k}s_{2,l}\\zeta_{1,kl}^{00} - s_{2,l}\\zeta_{1,kl}^{10} - s_{1,k}\\zeta_{1,kl}^{01} + \\zeta_{1,kl}^{11})\/\\delta_{kl},\\\\\n\t\\gamma_{1,kl}^{10} &= 
(-s_{1,k-1}s_{2,l}\\zeta_{1,kl}^{00} + s_{2,l}\\zeta_{1,kl}^{10} + s_{1,k-1}\\zeta_{1,kl}^{01} - \\zeta_{1,kl}^{11} )\/\\delta_{kl},\\\\\n\t\\gamma_{1,kl}^{01} &= (-s_{1,k}s_{2,l-1}\\zeta_{1,kl}^{00} + s_{2,l-1}\\zeta_{1,kl}^{10} + s_{1,k}\\zeta_{1,kl}^{01} - \\zeta_{1,kl}^{11} )\/\\delta_{kl},\\\\\n\t\\gamma_{1,kl}^{11} &= (s_{1,k-1}s_{2,l-1}\\zeta_{1,kl}^{00} - s_{2,l-1}\\zeta_{1,kl}^{10} - s_{1,k-1}\\zeta_{1,kl}^{01} + \\zeta_{1,kl}^{11})\/\\delta_{kl}\n\\end{align*}\nand\n\\begin{equation*}\n\t\\zeta_{1,kl}^{ab} = \n\t\\int_{s_{2,l-1}}^{s_{2,l}} \\int_{s_{1,k-1}}^{s_{1,k}} \\varphi_1(z_1,z_2) z_1^a z_2^b \\mathrm{d} z_1 \\mathrm{d} z_2 =\n\t\\frac{\\bigg(s_{1,k}^{a+\\eta_{q_1}}-s_{1,k-1}^{a+\\eta_{q_1}}\\bigg)\\bigg(s_{2,l}^{b+\\eta_{q_2}}-s_{2,l-1}^{b+\\eta_{q_2}}\\bigg)}{\\big(a+\\eta_{q_1}\\big)\\big(b+\\eta_{q_2}\\big)}\n\\end{equation*}\nfor $a,b\\in \\{0,1\\}$.\n\nThe coefficients $\\gamma_{1,kl}^{ab}$ are completely determined by the Kou parameters and the spatial grid. \nSince they are independent of $t$, they can be computed upfront, before the time discretization.\nClearly, for any given vector $V(t)$, the computation of $G_{1,kl}$ by \\eqref{G1kl} for all $k$, $l$ requires \n$3m_1m_2$ additions and $4m_1m_2$ multiplications.\nNoticing that the values of $\\psi_1$ in \\eqref{cumsum1_semi} can also be computed upfront, it follows that \nthe number of basic arithmetic operations to compute all approximations $J_{1,ij}$ ($1\\leq i \\leq m_1$, \n$1\\leq j \\leq m_2$) by \\eqref{cumsum1_semi} is, to leading order, equal to $10m_1m_2$.\n\nThe discretization and efficient evaluation of the other three integrals is done completely analogously.\nIn the relevant derivation, $v$ is approximated by zero outside the spatial domain $[0,S_{\\rm max}]\\times [0,S_{\\rm max}]$.\nWrite\n\\begin{align*}\n\t\\psi_2(s_1,s_2) &= \\lambda p_1q_2\\eta_{p_1}\\eta_{q_2}s_1^{\\eta_{p_1}}s_2^{-\\eta_{q_2}},\\\\\n\t\\psi_3(s_1,s_2) &= \\lambda 
q_1p_2\\eta_{q_1}\\eta_{p_2}s_1^{-\\eta_{q_1}}s_2^{\\eta_{p_2}},\\\\ \n\t\\psi_4(s_1,s_2) &= \\lambda p_1p_2\\eta_{p_1}\\eta_{p_2}s_1^{\\eta_{p_1}}s_2^{\\eta_{p_2}}.\n\\end{align*}\nLet $1\\leq i \\leq m_1$, $1\\leq j \\leq m_2$. \nThen the approximations of $\\mathcal{J}_2, \\mathcal{J}_3, \\mathcal{J}_4$ \nat the spatial grid point $(s_{1,i},s_{2,j})$ are given by, respectively, the double cumulative sums\n\\begin{align*}\n\tJ_{2,ij} &= \\psi_2(s_{1,i},s_{2,j}) \\sum_{k=i+1}^{m_1} \\sum_{l=1}^j G_{2,kl},\\\\\n\tJ_{3,ij} &= \\psi_3(s_{1,i},s_{2,j}) \\sum_{k=1}^{i} \\sum_{l=j+1}^{m_2} G_{3,kl},\\\\\n\tJ_{4,ij} &= \\psi_4(s_{1,i},s_{2,j}) \\sum_{k=i+1}^{m_1} \\sum_{l=j+1}^{m_2} G_{4,kl}\n\\end{align*}\nwith the usual convention that empty sums are equal to zero.\nFor $2\\le \\nu \\le 4$, $1\\leq k \\leq m_1$, $1\\leq l \\leq m_2$ we obtain\n\\begin{equation*}\\label{Gnukl}\n\tG_{\\nu,kl} = \n\t\\gamma_{\\nu,kl}^{00} V_{k-1,l-1}(t) + \\gamma_{\\nu,kl}^{10} V_{k,l-1}(t) +\n\t\\gamma_{\\nu,kl}^{01} V_{k-1,l}(t) + \\gamma_{\\nu,kl}^{11} V_{k,l}(t)\n\\end{equation*}\nwith\n\\begin{align*}\n\t\\gamma_{\\nu,kl}^{00} &= (s_{1,k}s_{2,l}\\zeta_{\\nu,kl}^{00} - s_{2,l}\\zeta_{\\nu,kl}^{10} - s_{1,k}\\zeta_{\\nu,kl}^{01} + \\zeta_{\\nu,kl}^{11})\/\\delta_{kl},\\\\\n\t\\gamma_{\\nu,kl}^{10} &= (-s_{1,k-1}s_{2,l}\\zeta_{\\nu,kl}^{00} + s_{2,l}\\zeta_{\\nu,kl}^{10} + s_{1,k-1}\\zeta_{\\nu,kl}^{01} - \\zeta_{\\nu,kl}^{11} )\/\\delta_{kl},\\\\\n\t\\gamma_{\\nu,kl}^{01} &= (-s_{1,k}s_{2,l-1}\\zeta_{\\nu,kl}^{00} + s_{2,l-1}\\zeta_{\\nu,kl}^{10} + s_{1,k}\\zeta_{\\nu,kl}^{01} - \\zeta_{\\nu,kl}^{11} )\/\\delta_{kl},\\\\\n\t\\gamma_{\\nu,kl}^{11} &= (s_{1,k-1}s_{2,l-1}\\zeta_{\\nu,kl}^{00} - s_{2,l-1}\\zeta_{\\nu,kl}^{10} - s_{1,k-1}\\zeta_{\\nu,kl}^{01} + \\zeta_{\\nu,kl}^{11})\/\\delta_{kl}\n\\end{align*}\nwhere\n\\begin{align*}\n\t\\zeta_{2,kl}^{ab} &= 
\n\t\\frac{\\bigg(s_{1,k}^{a-\\eta_{p_1}}-s_{1,k-1}^{a-\\eta_{p_1}}\\bigg)\\bigg(s_{2,l}^{b+\\eta_{q_2}}-s_{2,l-1}^{b+\\eta_{q_2}}\\bigg)}{\\big(a-\\eta_{p_1}\\big)\\big(b+\\eta_{q_2}\\big)},\\\\\n\t\\zeta_{3,kl}^{ab} &= \n\t\\frac{\\bigg(s_{1,k}^{a+\\eta_{q_1}}-s_{1,k-1}^{a+\\eta_{q_1}}\\bigg)\\bigg(s_{2,l}^{b-\\eta_{p_2}}-s_{2,l-1}^{b-\\eta_{p_2}}\\bigg)}{\\big(a+\\eta_{q_1}\\big)\\big(b-\\eta_{p_2}\\big)},\\\\\n\t\\zeta_{4,kl}^{ab} &= \n\t\\frac{\\bigg(s_{1,k}^{a-\\eta_{p_1}}-s_{1,k-1}^{a-\\eta_{p_1}}\\bigg)\\bigg(s_{2,l}^{b-\\eta_{p_2}}-s_{2,l-1}^{b-\\eta_{p_2}}\\bigg)}{\\big(a-\\eta_{p_1}\\big)\\big(b-\\eta_{p_2}\\big)}\n\\end{align*}\nfor $a,b\\in \\{0,1\\}$.\n\nOn the boundary part $\\{(s_1,0): s_1>0\\}$ the double integral \\eqref{integral} reduces to the single integral\n\\begin{equation}\\label{integral1_1D}\n\t\\mathcal{J} = \\lambda\\int_0^{\\infty} f_1(y_1)v(s_1y_1,0,t) \\mathrm{d} y_1.\n\\end{equation}\nThe discretization and efficient evaluation of this integral, which arises in the one-dimensional \nKou PIDE, has been constructed by Toivanen~\\cite{T08}.\nFor completeness, we include the result here.\nAssume $s_1>0$. \nThen for \\eqref{integral1_1D} there holds $\\mathcal{J} = \\mathcal{J}_1 + \\mathcal{J}_2$, where\n\\begin{equation*}\n\t\\mathcal{J}_1 = \\lambda \\int_0^{s_1} f_1\\bigg(\\frac{z_1}{s_1}\\bigg) v(z_1,0,t) \\frac{\\mathrm{d} z_1 }{s_1}\n\t\\quad \\textrm{and} \\quad\n\t\\mathcal{J}_2 = \\lambda \\int_{s_1}^{\\infty} f_1\\bigg(\\frac{z_1}{s_1}\\bigg) v(z_1,0,t) \\frac{\\mathrm{d} z_1 }{s_1}.\n\\end{equation*}\nWrite\n\\begin{equation*}\n\t\\psi_1(s_1) = \\lambda q_1\\eta_{q_1}s_1^{-\\eta_{q_1}} \n\t\\quad \\textrm{and} \\quad\n\t\\psi_2(s_1) = \\lambda p_1\\eta_{p_1}s_1^{\\eta_{p_1}}.\n\\end{equation*}\nLet $1\\leq i \\leq m_1$. 
\nThen the approximations of $\\mathcal{J}_1, \\mathcal{J}_2$ at the spatial grid point $(s_{1,i},0)$ are given by, \nrespectively, the single cumulative sums\n\\begin{equation*}\n\tJ_{1,i} = \\psi_1(s_{1,i}) \\sum_{k=1}^{i} G_{1,k}\n\t\\quad \\textrm{and} \\quad\n\tJ_{2,i} = \\psi_2(s_{1,i}) \\sum_{k=i+1}^{m_1} G_{2,k}.\n\\end{equation*}\nFor $1\\le \\nu \\le 2$, $1\\leq k \\leq m_1$ we obtain, by employing linear interpolation,\n\\begin{equation*}\\label{Gnuk}\n\tG_{\\nu,k} = \n\t\\gamma_{\\nu,k}^{0} V_{k-1,0}(t) + \\gamma_{\\nu,k}^{1} V_{k,0}(t)\n\\end{equation*}\nwith\n\\begin{equation*}\n\t\\gamma_{\\nu,k}^{0} = (s_{1,k}\\zeta_{\\nu,k}^{0} - \\zeta_{\\nu,k}^{1})\/\\Delta s_{1,k}\n\t\\quad \\textrm{and} \\quad\n\t\\gamma_{\\nu,k}^{1} = (-s_{1,k-1}\\zeta_{\\nu,k}^{0} + \\zeta_{\\nu,k}^{1} )\/\\Delta s_{1,k}\n\\end{equation*}\nwhere\n\\begin{equation*}\n\t\\zeta_{1,k}^{a} = \n\t\\frac{s_{1,k}^{a+\\eta_{q_1}}-s_{1,k-1}^{a+\\eta_{q_1}}}{a+\\eta_{q_1}}\n\t\\quad \\textrm{and} \\quad\n\t\\zeta_{2,k}^{a} = \n\t\\frac{s_{1,k}^{a-\\eta_{p_1}}-s_{1,k-1}^{a-\\eta_{p_1}}}{a-\\eta_{p_1}}\n\t\\quad (a=0,1).\n\\end{equation*}\n\nOn the boundary part $\\{(0,s_2): s_2>0\\}$ the double integral \\eqref{integral} reduces \nto the single integral\n\\begin{equation}\\label{integral2_1D}\n\t\\mathcal{J} = \\lambda\\int_0^{\\infty} f_2(y_2)v(0,s_2y_2,t) \\mathrm{d} y_2\n\\end{equation}\nand the discretization is performed completely similarly as above. 
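As a concrete illustration, the single-sum boundary algorithm above fits in a few vectorized lines. The following NumPy sketch is ours (function and variable names are not from the paper): it forms the moments $\zeta_{\nu,k}^a$, assembles $G_{1,k}$, $G_{2,k}$ from the linear-interpolation weights, and evaluates $J_{1,i}+J_{2,i}$ at all grid points $s_{1,i}$ with one forward and one reverse cumulative sum, i.e. in $O(m_1)$ operations.

```python
import numpy as np

def kou_boundary_integral(s, V0, lam, p1, q1, eta_p1, eta_q1):
    """Approximate the 1D jump integral on the boundary s2 = 0 at the
    grid points s[1], ..., s[m1], via the linear-interpolation quadrature
    and the two single cumulative sums described in the text.
    s  : grid points 0 = s_0 < s_1 < ... < s_m1,
    V0 : option values V_{k,0}(t) at those grid points."""
    # J1 uses cells k = 1..m1 (exponents a + eta_q1 > 0, so s_0 = 0 is harmless)
    sl, sr = s[:-1], s[1:]
    ds = sr - sl
    z1 = [(sr**(a + eta_q1) - sl**(a + eta_q1)) / (a + eta_q1) for a in (0, 1)]
    G1 = ((sr * z1[0] - z1[1]) / ds) * V0[:-1] \
       + ((-sl * z1[0] + z1[1]) / ds) * V0[1:]
    # J2 sums only cells k = i+1 >= 2, so the cell touching s_0 = 0 (where the
    # negative exponent a - eta_p1 would blow up) is never needed
    sl2, sr2 = s[1:-1], s[2:]
    ds2 = sr2 - sl2
    z2 = [(sr2**(a - eta_p1) - sl2**(a - eta_p1)) / (a - eta_p1) for a in (0, 1)]
    G2 = ((sr2 * z2[0] - z2[1]) / ds2) * V0[1:-1] \
       + ((-sl2 * z2[0] + z2[1]) / ds2) * V0[2:]
    si = s[1:]
    J1 = lam * q1 * eta_q1 * si**(-eta_q1) * np.cumsum(G1)
    # reverse cumulative sum: entry i-1 holds sum over k = i+1, ..., m1
    tail = np.concatenate((np.cumsum(G2[::-1])[::-1], [0.0]))
    J2 = lam * p1 * eta_p1 * si**(eta_p1) * tail
    return J1 + J2
```

Since linear interpolation reproduces constants exactly and the $\zeta$ moments are exact antiderivatives, feeding constant values $V_{k,0}\equiv 1$ must return $\lambda\,(q_1 + p_1(1-(s_{1,i}/S_{\rm max})^{\eta_{p_1}}))$, which provides a convenient correctness check.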
\nFinally, at the spatial grid point $(s_1,s_2)=(0,0)$ it holds that \n$\mathcal{J} \approx \lambda V_{0,0}(t)$.\n\nFrom the above it follows that the number of basic arithmetic operations to compute, for any given $t$, the approximation to the double \nintegral \eqref{integral} on the full spatial grid is, to leading order, equal to $40m_1m_2$.\n\n\t\n\n\n\n\t\n\subsection{Semidiscrete PIDE}\label{SecSemiPIDE}\nCombining the above semidiscretizations for the convection-diffusion-reaction and integral parts of PIDE \eqref{PIDE2D}, the following \nlarge system of ODEs is obtained:\n\begin{align}\label{ODEs}\n\tV'(t) = AV(t) \quad (0<t\leq T).\n\end{align}\nHere the matrix $A$ admits the splitting $A = A^{(D)} + A^{(J)}$, where $A^{(J)}$ represents the integral part and $A^{(D)} = A^{(M)} + A_1 + A_2$ the convection-diffusion-reaction part, with $A^{(M)}$ corresponding to the mixed spatial derivative term and $A_1$, $A_2$ to the spatial derivatives in the $s_1$- and $s_2$-direction, respectively.\n\n\t\n\section{Temporal discretization}\label{SecTime}\nLet $N\geq 1$ be a given integer, $\Delta t = T\/N$ and $t_n = n\Delta t$, and let $V^n$ denote the approximation to $V(t_n)$. We consider seven operator splitting schemes of the IMEX and the ADI kind, which all treat the integral part in an explicit fashion.\n\begin{enumerate}\n\t\item[5.] \textit{Adaptation of the modified Craig--Sneyd (MCS) scheme}:\n\t\vskip0.1cm\n\tIn \cite{HT18} an adaptation of the MCS scheme to PIDEs has been proposed where the integral part is handled by the explicit trapezoidal rule:\n\t\begin{equation}\label{MCS}\n\t\t\left\{\begin{array}{lll}\n\t\t\tY_0 = V^{n-1}+\Delta t (A^{(D)}+A^{(J)})V^{n-1},\\\n\t\t\tY_j = Y_{j-1}+\theta\Delta t A_j(Y_j-V^{n-1})\quad (j=1,2),\\\n\t\t\t\widehat{Y}_0 = Y_0 + \theta\Delta t A^{(M)}(Y_2-V^{n-1}),\\\n\t\t\t\widetilde{Y}_0 = \widehat{Y}_0 + \tfrac{1}{2}\Delta t A^{(J)}(Y_2-V^{n-1}) + (\tfrac{1}{2}-\theta)\Delta t A^{(D)}(Y_2-V^{n-1}),\\\n\t\t\t\widetilde{Y}_j = \widetilde{Y}_{j-1} + \theta \Delta t A_j(\widetilde{Y}_j-V^{n-1})\quad (j=1,2),\\\n\t\t\tV^n = \widetilde{Y}_2,\n\t\t\end{array}\right.\n\t\end{equation}\n\twhere $\theta>0$ is a given parameter.\n\tMethod \eqref{MCS} is of order two for any value~$\theta$.\n\tHere we make the common choice $\theta = \frac{1}{3}$, which is motivated by stability and accuracy results \n\tfor two-dimensional problems, see e.g.~\cite{HM11,HT18,HW09,HW16}.\n\tIt is easily verified that in \eqref{MCS} the integral part and mixed derivative term are both treated by the \n\texplicit trapezoidal rule.\n\tWe note that the explicit stages $\widehat{Y}_0$, $\widetilde{Y}_0$ can be merged, so that the integral part \n\tis evaluated (just) twice per time step.\n\tThe implicit stages $Y_j$, $\widetilde{Y}_j$ (for $j=1,2$) are often called stabilizing corrections.\n\tThe four pertinent linear systems for these stages are tridiagonal and can be solved very efficiently \n\tby means of an a priori $LU$ factorization.\n\tWe mention that \eqref{MCS} is already applied in the first time step, i.e., for defining $V^1$ (thus \n\tIMEX Euler is not used here).\n\t\vskip0.2cm \n\t\n\t\item[6.] 
\\textit{Two-step adaptation of the MCS (MCS2) scheme}:\n\t\\vskip0.1cm\n\tIn \\cite{HT18} a second adaptation of the MCS scheme to PIDEs has been proposed where, instead of the explicit \n\ttrapezoidal rule, the integral part is now handled in a two-step Adams--Bashforth fashion:\n\t\\begin{equation}\\label{MCS2}\n\t\t\\left\\{\\begin{array}{lll}\n\t\t\tX_0 = V^{n-1}+\\Delta t A^{(D)}V^{n-1},\\\\\n\t\t\tY_0 = X_0 + \\tfrac{1}{2}\\Delta t A^{(J)}(3V^{n-1}-V^{n-2}),\\\\\n\t\t\tY_j = Y_{j-1}+\\theta\\Delta t A_j(Y_j-V^{n-1})\\quad (j=1,2),\\\\\n\t\t\t\\widehat{Y}_0 = Y_0 + \\theta\\Delta t A^{(M)}(Y_2-V^{n-1}),\\\\\n\t\t\t\\widetilde{Y}_0 = \\widehat{Y}_0 + (\\tfrac{1}{2}-\\theta)\\Delta t A^{(D)}(Y_2-V^{n-1}),\\\\\n\t\t\t\\widetilde{Y}_j = \\widetilde{Y}_{j-1} + \\theta \\Delta t A_j(\\widetilde{Y}_j-V^{n-1})\\quad (j=1,2),\\\\\n\t\t\tV^n = \\widetilde{Y}_2.\n\t\t\\end{array}\\right.\n\t\\end{equation} \n\tMethod \\eqref{MCS2} is also of order two for any value~$\\theta$.\n\tWe choose again $\\theta = \\frac{1}{3}$ and, for starting this two-step method, define $V^1$ by \\eqref{MCS}.\n\t\\vskip0.2cm \n\t\n\t\\item[7.] 
\textit{Stabilizing correction two-step Adams-type (SC2A) scheme}:\n\t\vskip0.1cm\n\tHundsdorfer \& in 't Hout \cite{HH18} recently studied a novel class of stabilizing correction multistep methods \n\tfor the numerical solution of PDEs.\n\tWe consider here a direct adaptation to PIDEs of a prominent member of this class, the two-step Adams-type scheme \n\tcalled SC2A:\n\t\begin{equation}\label{SC2A}\n\t\t\left\{\begin{array}{lll} \n\t\t\tY_0 = V^{n-1} + \Delta t\, (A^{(M)}+A^{(J)})\sum_{i=1}^2\widehat{b}_iV^{n-i} + \Delta t\, (A_1+A_2)\sum_{i=1}^2\widecheck{b}_iV^{n-i},\\\n\t\t\tY_j = Y_{j-1} + \theta \Delta t A_j(Y_j-V^{n-1})\quad (j=1,2),\\\n\t\t\tV^n = Y_2,\n\t\t\end{array}\right.\n\t\end{equation} \n\twith coefficients $(\widehat{b}_1,\widehat{b}_2) = \left(\frac{3}{2},-\frac{1}{2}\right)$ and \n\t$(\widecheck{b}_1,\widecheck{b}_2) = \left(\frac{3}{2}-\theta, - \frac{1}{2}+\theta\right)$.\n\tThe integral part and mixed derivative term are now both handled by the two-step Adams--Bashforth scheme.\n\tMethod \eqref{SC2A} is again of order two for any value~$\theta$. \n\tFollowing \cite{HH18}, we take $\theta = \frac{3}{4}$, which is based upon stability and accuracy results.\n\tFor starting \eqref{SC2A}, we define $V^1$ again by \eqref{MCS} with $\theta = \frac{1}{3}$.\n\t\vskip0.2cm \n\t\n\end{enumerate}\n\n\n\n\n\n\t\n\section{Numerical study}\label{SecResults}\nIn this section we present ample numerical experiments for the seven operator splitting schemes formulated in Section~\ref{SecTime}, \nwhich provide important insight into their convergence behaviour and mutual performance.\nThe experiments involve three different parameter sets for the two-asset Kou model, labelled as 1, 2, and~3. \nThe parameter set~1 is taken from Clift \& Forsyth \cite{CF08}. 
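Before turning to the experiments, the structure of one time step of, e.g., the MCS2 scheme \eqref{MCS2} can be illustrated compactly (a dense-matrix sketch of our own, with hypothetical names; in the actual implementation the two implicit stages are tridiagonal systems solved with a prefactored $LU$ decomposition):

```python
import numpy as np

def mcs2_step(V1, V2, dt, A1, A2, AM, AJ, theta=1.0 / 3.0):
    """One MCS2 time step, with V1 = V^{n-1} and V2 = V^{n-2}.
    Dense solves are used purely for illustration."""
    AD = AM + A1 + A2                      # convection-diffusion(-reaction) part
    I = np.eye(len(V1))

    def stage(Aj, Y):                      # solves Y_j = Y + theta*dt*Aj*(Y_j - V1)
        return np.linalg.solve(I - theta * dt * Aj, Y - theta * dt * Aj @ V1)

    X0 = V1 + dt * AD @ V1
    Y0 = X0 + 0.5 * dt * AJ @ (3.0 * V1 - V2)   # two-step Adams--Bashforth for A^{(J)}
    Y2 = stage(A2, stage(A1, Y0))               # first pair of stabilizing corrections
    Yh0 = Y0 + theta * dt * AM @ (Y2 - V1)
    Yt0 = Yh0 + (0.5 - theta) * dt * AD @ (Y2 - V1)
    return stage(A2, stage(A1, Yt0))            # V^n
```

Note that the integral part enters only through one (explicit) matrix-vector product per step, in line with the cost discussion that follows.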
\nSet~2 is a blend of parameters considered for the one-asset Kou model by Almendral \\& Oosterlee~\\cite{AO05} and d'Halluin, Forsyth \n\\& Vetzal~\\cite{HFV05} (see also Toivanen~\\cite{T08}). \nSet~3 has been newly constructed and includes a relatively large value for the product $\\lambda T$, i.e., the expected number of \njumps in $[0,T]$. \nFor the truncated spatial domain, we (heuristically) select $S_{\\rm max} = 20K, 10K, 30K$ for sets 1, 2, 3, respectively.\n\n\\begin{table}[H]\n\t\\caption{Parameter sets for the two-asset Kou jump-diffusion model.}\n\t\\small\n\t\\begin{tabular}{@{}llllllllllllll@{}}\n\t\t\\toprule\n\t\t& $\\sigma_1$ & $\\sigma_2$ & $r$ & $\\rho$ & $\\lambda$ & $p_1$ & $p_2$ & $\\eta_{p_1}$ & $\\eta_{q_1}$ & $\\eta_{p_2}$ & $\\eta_{q_2}$ & $K$ & $T$ \\\\ \\midrule\n\t\tSet 1 & 0.12 & 0.15 & 0.05 & 0.30 & 0.50 & 0.40 & 0.60 & $1\/0.20$ & $1\/0.15$ & $1\/0.18$ & $1\/0.14$ & 100 & 1 \\\\\n\t\tSet 2 & 0.15 & 0.20 & 0.05 & 0.50 & 0.20 & 0.3445 & 0.50 & 3.0465 & 3.0775 & 3 & 2 & 100 & 0.2 \\\\\n\t\tSet 3 & 0.20 & 0.30 & 0.05 & 0.70 & 8 & 0.60 & 0.65 & 5 & 4 & 4 & 3 & 100 & 1 \\\\ \\bottomrule\n\t\\end{tabular}\n\t\\label{paramsets}\n\\end{table}\n\n\n\\subsection{Option values and numerical convergence behaviour}\\label{SecResults1}\nFigure~\\ref{FigSurfacePlots} shows the numerically computed\\footnote{With $m_1=m_2=200$, $N=100$ and the MCS2 scheme.} option value \nsurfaces for the European put-on-the-average option and the three parameter sets from Table~\\ref{paramsets} on the asset price \ndomain $[0,3K]\\times[0,3K]$ and $t=T$.\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[trim = 1.4in 3.7in 1.3in 3.7in, clip, scale=0.5]{surf1.pdf}\t\n\t\\includegraphics[trim = 1.4in 3.7in 1.3in 3.7in, clip, scale=0.5]{surf2.pdf}\t\n\t\\includegraphics[trim = 1.3in 3.7in 1.3in 3.7in, clip, scale=0.5]{surf3.pdf}\n\t\\caption{Option value surfaces for the European put-on-the-average option under the two-asset Kou model in the case of parameter 
\n\tset 1 (top left), 2 (top right) and 3 (bottom) given in Table~\\ref{paramsets}.}\n\t\\label{FigSurfacePlots}\n\\end{figure}\n\n\\noindent\nFor future reference, accurate approximations to the option values have been \ncomputed\\footnote{Using $m_1=m_2=1000$, $N=500$, the MCS2 scheme and spline interpolation.} \nfor specifically chosen spot prices $S_0^{(1)}$ and $S_0^{(2)}$ of the two assets in a\nneighbourhood of the strike $K$, see Table~\\ref{tablevalues}. \n\\vskip0.3cm\n\n\\begin{table}[H]\n\t\\caption{\\label{tablevalues}European put-on-the-average option value approximations for parameter sets 1, 2 and 3.}\n\t\\vspace{0.5cm}\n\t\\centering\n\tSet 1\\\\\\vspace{0.1cm}\n\t\\begin{tabular}{llrrrrr}\n\t\t\\toprule\n\t\t& & $S_0^{(1)} = 90$ & & $S_0^{(1)} = 100$ & & $S_0^{(1)} = 110$ \\\\\n\t\t& & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} \\\\\n\t\t$S_0^{(2)} = 90$ & & 8.9385 & & 6.0316 & & 3.8757 \\\\\n\t\t$S_0^{(2)} = 100$ & & 5.9655 & & 3.8038 & & 2.3370 \\\\\n\t\t$S_0^{(2)} = 110$ & & 3.7641 & & 2.2978 & & 1.3771 \\\\ \n\t\t\\bottomrule\n\t\\end{tabular}\\vspace{0.5cm}\n\n\tSet 2\\\\\\vspace{0.1cm}\n\t\\begin{tabular}{llrrrrr}\n\t\t\\toprule\n\t\t& & $S_0^{(1)} = 90$ & & $S_0^{(1)} = 100$ & & $S_0^{(1)} = 110$ \\\\\n\t\t& & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} \\\\\n\t\t$S_0^{(2)} = 90$ & & 9.6863 & & 5.5616 & & 2.6400 \\\\\n\t\t$S_0^{(2)} = 100$ & & 5.6162 & & 2.6929 & & 1.1264 \\\\\n\t\t$S_0^{(2)} = 110$ & & 2.7570 & & 1.1670 & & 0.5246 \\\\ \n\t\t\\bottomrule\n\t\\end{tabular}\\vspace{0.5cm}\n\n\tSet 3\\\\\\vspace{0.1cm}\n\t\\begin{tabular}{llrrrrr}\n\t\t\\toprule\n\t\t& & $S_0^{(1)} = 90$ & & $S_0^{(1)} = 100$ & & $S_0^{(1)} = 110$ \\\\\n\t\t& & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} \\\\\n\t\t$S_0^{(2)} = 90$ & & 
32.7459 & & 31.0984 & & 29.5758 \\\\\n\t\t$S_0^{(2)} = 100$ & & 30.5796 & & 29.0181 & & 27.5770 \\\\\n\t\t$S_0^{(2)} = 110$ & & 28.5830 & & 27.1033 & & 25.7396 \\\\ \n\t\t\\bottomrule\n\t\\end{tabular}\\vspace{0.5cm}\n\\end{table}\n\nWe next examine, through numerical experiments, the convergence behaviour of the seven operator splitting schemes formulated \nin Section~\\ref{SecTime} for the temporal discretization of the semidiscrete two-dimensional Kou PIDE. \nTake $m_1=m_2=m$.\nConsider a region of financial interest\n\\begin{equation*}\\label{ROI}\n\t\\textrm{ROI} = \\left\\{(s_1,s_2) : \\tfrac{1}{2}K < s_1,s_2 < \\tfrac{3}{2}K\\right\\}\n\\end{equation*}\nand define the {\\it temporal discretization error} on this ROI for $t=T$ by\n\\begin{equation}\\label{EROI}\n\t\\widehat{E}^{\\rm ROI}(m,N) = \\textrm{max} \\left\\{| V_{i,j}(T) - V_{i,j}^{N'} |: \\Delta t = T\/N' ~~{\\rm and}~~ \n\t(s_{1,i},s_{2,j})\\in \\rm ROI\\right\\}.\n\\end{equation}\nClearly, this error is measured in the (important) maximum norm.\nThe MCS2 scheme is applied with $3000$ time steps to obtain a reference value for the exact semidiscrete solution $V(T)$ to \n\\eqref{ODEs}.\nFor each of the seven operator splitting schemes formulated in Section~\\ref{SecTime}, we study the temporal discretization \nerror for a sequence of values $N$, where the approximation $V^{N'}$ to $V(T)$ is computed using $N'$ time steps.\nHere $N'$ is chosen in function of $N$ depending on the specific scheme, so as to arrive at a fair comparison between the \nschemes.\n\nTheoretical as well as experimental evidence indicates that for the four IMEX schemes the process of solving the linear systems, \ninvolving the matrix $I-\\tfrac{1}{2}\\Delta t A^{(D)}$, essentially determines the total computational time in each time step \nwhenever $m$ is sufficiently large ($m\\gtrsim 150$). 
\nIn particular, the time for evaluating the integral part, by means of the algorithm from Subsection~\\ref{SecIntpart}, forms \nonly a small fraction of the total computational time.\nAccordingly, for a fair comparison between the IMEX schemes, we apply the three schemes (\\ref{CNFE}), (\\ref{IETR}), (\\ref{CNAB}) \nwith $N' = 2N$ time steps and the scheme (\\ref{CNFI}) with just $N'=N$ time steps, since in the latter scheme double the amount \nof linear systems is solved in each time step.\n\nFor the three ADI schemes we find that the evaluation of the two-dimensional integral part takes about the same computational\ntime as the solution of four pertinent tridiagonal linear systems. \nThe scheme (\\ref{MCS}) employs two evaluations of the integral part, whereas (\\ref{MCS2}) and (\\ref{SC2A}) each use only one \nsuch evaluation.\nNext, both schemes (\\ref{MCS}) and (\\ref{MCS2}) require the solution of four tridiagonal systems, whereas (\\ref{SC2A}) involves \nonly two tridiagonal systems.\nAccordingly, for a fair mutual comparison, we apply the ADI schemes (\\ref{MCS}), (\\ref{MCS2}) and (\\ref{SC2A}) with, respectively, \n$N'=N$, $N' = \\lfloor 3N\/2 \\rfloor$ and $N' = 2N$ time steps.\n\nFor each of the seven splitting schemes, the temporal discretization errors $\\widehat{E}^{\\rm ROI}(m,N)$ have been computed for \n$m = 200$ and a sequence of values $N$ between 10 and 1000. \nFigure~\\ref{FigTemperrors} displays the obtained temporal errors for all three parameter sets given in Table~\\ref{paramsets}, \nwhere the left column shows the results for the IMEX schemes and the right column the results for the ADI schemes.\n\nFrom Figure~\\ref{FigTemperrors} we observe the positive result that, for each given scheme and parameter set, the temporal errors \nare bounded from above by a moderate value and decrease monotonically as $N$ increases. 
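The orders of convergence discussed here can be quantified from successive error values; a small helper of our own (not part of the paper) that computes the observed order between consecutive numbers of time steps:

```python
import numpy as np

def observed_orders(N, err):
    """Observed temporal convergence orders from errors err[i] obtained
    with N[i] time steps (N increasing): slope of log(err) vs. log(1/N)."""
    N = np.asarray(N, dtype=float)
    err = np.asarray(err, dtype=float)
    return np.log(err[:-1] / err[1:]) / np.log(N[1:] / N[:-1])
```

For an order-$p$ scheme with $\widehat{E}^{\rm ROI} \approx C\,\Delta t^{\,p}$, each returned value is approximately $p$.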
\nAs expected, the CNFE scheme (\\ref{CNFE}) shows an order of convergence equal to one, whereas all other six splitting schemes \nreveal a desirable order of convergence equal to two.\nBy repeating the numerical experiments for spatial grids that are finer (e.g.~$m=400$) or coarser (e.g.~$m=50, 100$) \nwe obtain results that are visually identical to those displayed in Figure~\\ref{FigTemperrors}. \nThis indicates that, for each splitting scheme, the temporal errors are essentially independent of the spatial grid \nsize, and hence, its observed convergence order is valid in a stiff sense, which is very favourable.\n\nIt is interesting to remark that the error constants become larger as the intensity of the jumps increases, keeping all\nelse fixed.\nThis has been observed in additional numerical experiments for increasing values of $\\lambda$ between 0 and 10.\n\nAmong the four IMEX schemes, the IETR and CNAB schemes yield the smallest error constants in the numerical experiments.\nAmong the three ADI schemes, the MCS2 scheme has the smallest error constant in all experiments.\nOverall, the MCS2 scheme is preferred.\nIt yields temporal errors that are smaller than or approximately equal to the IETR and CNAB schemes, but is computationally \nmuch faster since the linear systems appearing in this scheme are just tridiagonal.\nAs an indication for actual run times of the MCS2 scheme, we mention that they are approximately 6e-4, 2e-3, 7e-3, \n4.5e-2 CPU seconds per time step if $m=50,100,200,400$, respectively, in the case of our Matlab implementation, \nversion R2020b, on an Intel Core i7-8665U processor at 1.9 GHz with 16 GB memory.\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[trim = 1.3in 3.7in 1in 3.7in, clip, scale=0.5]{TemperrorIMEX_jul8_set1.pdf}\\includegraphics[trim = 1.3in 3.7in 1.6in 3.7in, clip, scale=0.5]{TemperrorADI_jul8_set1.pdf}\\\\\n\t\\includegraphics[trim = 1.3in 3.7in 1in 3.7in, clip, 
scale=0.5]{TemperrorIMEX_jul8_set2.pdf}\\includegraphics[trim = 1.3in 3.7in 1.6in 3.7in, clip, scale=0.5]{TemperrorADI_jul8_set2.pdf}\\\\\n\t\\includegraphics[trim = 1.3in 3.7in 1in 3.7in, clip, scale=0.5]{TemperrorIMEX_jul8_set3.pdf}\\includegraphics[trim = 1.3in 3.7in 1.6in 3.7in, clip, scale=0.5]{TemperrorADI_jul8_set3.pdf}\n\t\\caption{Temporal errors $\\widehat{E}^{\\rm ROI}(200,N)$ of the seven operator splitting schemes under consideration. IMEX schemes on the left side and ADI \n\tschemes on the right side, with parameter set 1 (top), set 2 (middle) and set 3 (bottom) from Table~\\ref{paramsets}. The CNFI and MCS schemes are applied \n\twith step size $\\Delta t = T\/N$, the MCS2 scheme is applied with $\\Delta t = T\/\\lfloor 3N\/2 \\rfloor$ and the CNFE, IETR, CNAB, SC2A schemes are applied \n\twith $\\Delta t = T\/(2N)$.}\n\t\\label{FigTemperrors}\n\\end{figure}\n\n\\newpage\n\\subsection{The Greeks}\\label{Greeks}\nThe Greeks are mathematical derivatives of the option value with respect to underlying variables and parameters. \nThey constitute a measure for risk that indicates how sensitive an option value is to changes in underlying variables and parameters \nand are crucial for hedging strategies. \nIn this subsection we consider the numerical approximation of the Delta and Gamma Greeks. \nDelta is a measure for the rate of change of the option value with respect to a change in an underlying asset price.\nAs there are two underlying assets, there are two Deltas:\n\\begin{equation*}\n\\Delta_1 = \\frac{\\partial v}{\\partial s_1} \\quad \\text{and} \\quad \\Delta_2 = \\frac{\\partial v}{\\partial s_2} . \n\\end{equation*}\nNext, Gamma measures the rate of change of a Delta with respect to a change in an underlying asset price. 
\nThere are three different Gammas:\n\\begin{equation*}\n\t\\Gamma_{11} = \\frac{\\partial \\Delta_1}{\\partial s_1} = \\frac{\\partial^2v}{\\partial s_1^2}, \\quad \n\t\\Gamma_{22} = \\frac{\\partial \\Delta_2}{\\partial s_2} = \\frac{\\partial^2v}{\\partial s_2^2} \\quad \\text{and} \\quad \n\t\\Gamma_{12} = \\frac{\\partial \\Delta_1}{\\partial s_2} = \\frac{\\partial^2v}{\\partial s_2 \\partial s_1} = \\frac{\\partial \\Delta_2}{\\partial s_1} = \\Gamma_{21}.\n\\end{equation*}\n\nBy virtue of the finite difference discretization that has been defined in Section \\ref{SecSpatial}, the Delta and Gamma Greeks\ncan directly be approximated, at essentially no computational cost, by applying the second-order central finite \ndifference formulas considered in Subsection \\ref{SecPDEpart} to the option value approximations on the spatial grid.\n\nAs an illustration, Figure~\\ref{FigGreeks} displays the numerically\\footnote{With $m_1=m_2=200$, $N=100$ and the MCS2 scheme.} \nobtained Delta and Gamma surfaces at maturity for the European put-on-the-average option under the two-asset Kou model for \nparameter set~1. \nAs expected, the Delta surfaces are steepest around the line segment $s_1 + s_2 = 2K$ and, correspondingly, the \nGamma surfaces are highest there.\n\nSimilarly to the option value, we study the temporal convergence behaviour of all operator splitting schemes in \nthe case of the five Greeks.\nAkin to (\\ref{EROI}), the temporal discretization error in the case of Delta $\\Delta_k \\, (k = 1, 2)$ is defined by\n\\begin{equation}\\label{EROIGreeks}\n\t\\widehat{E}^{\\rm ROI}_{\\Delta_k}(m,N) = \\textrm{max} \\left\\{| (\\Delta_k)_{i,j}(T) - (\\Delta_k)_{i,j}^{N'} |: \n\t\\Delta t = T\/N' ~~{\\rm and}~~ (s_{1,i},s_{2,j})\\in \\rm ROI \\right\\}.\n\\end{equation}\nHere $\\Delta_k (T)$ denotes the pertinent finite difference matrix for convection applied to the reference value for the exact\nsemidiscrete solution $V(T)$ given in Subsection~\\ref{SecResults1}. 
\nNext, $\\Delta_k ^{N'}$ is equal to the same finite difference matrix applied to the approximation $V^{N'}$ of $V(T)$ that is \ngenerated by any one of the seven operator splitting schemes. \nAnalogous temporal discretization error definitions hold in the case of $\\Gamma_{11}, \\Gamma_{22}$, $\\Gamma_{12}$. \n\nFigure~\\ref{FigGreeksError} displays the temporal errors in the case of the five Greeks and parameter set 1 for $m = 200$ and \nthe same sequence of values $N$ between 10 and 1000 as before.\nWe arrive at the same conclusions on the temporal convergence behaviour of the seven splitting schemes as obtained \nin Subsection~\\ref{SecResults1} regarding the option value.\nIn particular, besides CNFE, all splitting schemes reveal a stiff order of convergence equal to two, and MCS2 has the best\nperformance among all these schemes.\n\nWe note that the somewhat larger errors that are observed for the CNFI, IETR, CNAB schemes when $N$ is small are attributed\nto the nonsmootness of the initial (payoff) function and could be alleviated by applying four (instead of two) half time steps\nwith IMEX Euler at the start of the time stepping, cf. 
Section~\\ref{SecTime}.\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[trim = 1.3in 3.7in 1in 3.7in, clip, scale=0.5]{Delta1.pdf}\\includegraphics[trim = 1.3in 3.7in 1.4in 3.7in, clip, scale=0.5]{Delta2.pdf}\\\\\n\t\\includegraphics[trim = 1.3in 3.7in 1in 3.7in, clip, scale=0.5]{Gamma11.pdf}\\includegraphics[trim = 1.3in 3.7in 1.4in 3.7in, clip, scale=0.5]{Gamma22.pdf}\\\\\n\t\\includegraphics[trim = 1.3in 3.7in 1in 3.7in, clip, scale=0.5]{Gamma12.pdf}\n\t\\caption{\n\tFirst-order Greeks $\\Delta_1$ (top left) and $\\Delta_2$ (top right) and second-order Greeks $\\Gamma_{11}$ (middle left), $\\Gamma_{22}$ \n\t(middle right) and $\\Gamma_{12}$ (bottom) for the European put-on-the-average option under the two-asset Kou model in the case of \n\tparameter set 1.}\n\t\\label{FigGreeks}\n\\end{figure}\n\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[trim = 1.3in 3.7in 1in 3.7in, clip, scale=0.5]{errorDelta1_jul14_set1.pdf}\\includegraphics[trim = 1.3in 3.7in 1.6in 3.7in, clip, scale=0.5]{errorDelta2_jul14_set1.pdf}\\\\\n\t\\includegraphics[trim = 1.3in 3.7in 1in 3.7in, clip, scale=0.5]{errorGamma11_jul14_set1.pdf}\\includegraphics[trim = 1.3in 3.7in 1.6in 3.7in, clip, scale=0.5]{errorGamma22_jul14_set1.pdf}\\\\\n\t\\includegraphics[trim = 1.3in 3.7in 1in 3.7in, clip, scale=0.5]{errorGamma12_jul14_set1.pdf}\n\t\\caption{Temporal errors of the seven operator splitting schemes under consideration in the case of the five Greeks and parameter set 1: \n\t$\\widehat{E}^{\\rm ROI}_{\\Delta_1}$ (top left), $\\widehat{E}^{\\rm ROI}_{\\Delta_2}$ (top right), $\\widehat{E}^{\\rm ROI}_{\\Gamma_{11}}$ \n\t(middle left), $\\widehat{E}^{\\rm ROI}_{\\Gamma_{22}}$ (middle right) and $\\widehat{E}^{\\rm ROI}_{\\Gamma_{12}}$ (bottom). 
\n\tThe CNFI and MCS schemes are applied with step size $\\Delta t = T\/N$, the MCS2 scheme is applied with $\\Delta t = T\/\\lfloor 3N\/2 \\rfloor$ \n\tand the CNFE, IETR, CNAB, SC2A schemes are applied with $\\Delta t = T\/(2N)$.}\n\t\\label{FigGreeksError}\n\\end{figure}\n\n\n\n\n\n\n\\newpage\n\\section{Conclusions}\\label{SecConc}\nWe have studied the valuation of European options under the two-asset Kou jump-diffusion model via the numerical solution of the \npertinent two-dimensional time-dependent PIDE.\nA first main contribution of our paper is the extension of an algorithm derived by Toivanen~\\cite{T08}, which enables a highly \nefficient numerical evaluation of the nonlocal double integral appearing in this PIDE.\nThe computational cost of the acquired algorithm is optimal: it is directly proportional to the number of grid points in the \nspatial discretization.\nAlso, it is simple to implement and requires little memory usage.\nSubsequently, for the efficient discretization in time of the semidiscretized two-dimensional Kou PIDE, we have investigated \nseven modern operator splitting schemes of the IMEX and the ADI kind.\nEvery splitting scheme conveniently treats the integral term in an explicit fashion. \nThrough extensive numerical experiments, we have examined the stability and convergence behaviour of the splitting schemes as \nwell as their relative performance. \nAll of the considered schemes, except for the first-order CNFE scheme, show a desirable, stiff order of temporal convergence \nequal to two. \nThe MCS2 scheme, successively developed by in 't Hout \\& Welfert \\cite{HW09} for PDEs and in 't Hout \\& Toivanen \\cite{HT18} for PIDEs,\nstood out favourably among the splitting schemes in view of its superior efficiency. \nThis conclusion agrees with the results recently obtained by Boen \\& in 't Hout~\\cite{BH21} in the case of the two-dimensional \nMerton PIDE. 
\n\n\n\n\n\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:introduction}\nClassical cameras record a scene using three color channels, namely, the red (R), green (G) and blue (B) channel.\nTo achieve this, the incoming light spectrum is filtered by corresponding filters, e.g., a filter for the red component lets pass the wavelengths from 600 nm to 700 nm.\nThe RGB filters are approximating the properties of the different cones of the human eye.\nThus, if such an image is shown to a human on a proper display, all perceivable information is recorded and displayed.\n\nFor many classification tasks, it is useful to record even more channels, for example also in the infrared (IR) area of the spectrum.\nThus, the concept of RGB imaging is extended to \\textit{multispectral} imaging, where typically six to 16 different bands of the scene are recorded.\nUsually, the filters of multispectral cameras are spectrally distributed depending on the application.\nTherefore, there is no structure regarding the shape of filters or the location of filters in the spectrum.\n\nIn contrast, the filters of \\textit{hyperspectral} cameras are typically equidistantly distributed in the wavelength area of interest.\nSince the employed bandpass filters are narrowband as well, the goal of a hyperspectral camera is to sample the light spectrum in the desired wavelength area for each pixel.\nWhile there are no sharp definitions of multispectral and hyperspectral, these definitions are inline with the definitions given by Hagen et al.\\cite{hagen_review_2013}, which state that most authors distinguish between contiguous (hyperspectral) versus spaced spectral bands (multispectral).\n\nHyperspectral imaging has become increasingly popular for diverse classification tasks.\nFor example, different types of plastics can be separated using hyperspectral cameras\\cite{garaba_airborne_2018}, which is an essential building 
block for recycling pipelines.\nMoreover, these cameras are employed in agriculture to measure the plant health\\cite{williams_classification_2016} in order to distribute the proper amount of water and fertilizer.\nOther applications include forensics\\cite{edelman_hyperspectral_2012} where hyperspectral cameras can analyse the scene in a non-invasive way, object tracking\\cite{xiong_material_2020}, and tumour detection in medicine\\cite{han_vivo_2016}.\nUsually, for most of these applications, it is beneficial if the hyperspectral camera is able to record videos.\n\n\\begin{figure}\n\t\\centering\n\n\t\\includegraphics[]{figures\/paper-figure0.pdf}\n\t\n\t\\caption{A hyperspectral 3D datacube with two spatial dimensions $x$ and $y$ and one spectral dimension $\\lambda$.}\n\t\\label{fig:hs_cube}\n\\end{figure}\n\n\\begin{table*}[t]\n\t\\centering\n\t\\renewcommand{\\arraystretch}{1.5}\n\t\\caption{Overview over different hyperspectral databases. Note that the 1890 images of the proposed database are put together by 7 scenes, each with 30 frames, rendered from a camera array with 9 cameras.}\n\t\\label{tab:databases}\n\t\\begin{tabular}{ccccccccc}\n\t\tDatabase & Spatial res. & \\#Channels & Wavel. area (nm) & \\#Images & Videos & Rerendering & Cam. 
array & Depth \\\\ \\hline\n\t\tCAVE\\cite{yasuma_generalized_2010} & $512 \\times 512$ & 31 & 400 - 700 & 32 & \\xmark & \\xmark & \\xmark & \\xmark \\\\\n\t\tUGR\\cite{eckhard_outdoor_2015} & $1000 \\times 900$ & 61 & 400 - 1000 & 14 & \\xmark & \\xmark & \\xmark & \\xmark \\\\\n\t\tHordley\\cite{hordley_multi-spectral_2004} & $\\sim 336 \\times 271$ & 31 & 400 - 700 & 23 & \\xmark & \\xmark & \\xmark & \\xmark \\\\\n\t\tLe Moan\\cite{larabi_database_2015} & $500 \\times 500$ & 160 & 410 - 1000 & 9 & \\xmark & \\xmark & \\xmark & \\xmark \\\\\n\t\tFoster\\cite{foster_frequency_2006} & $1018 \\times 1339$ & 33 & 400 - 720 & 8 & \\xmark & \\xmark & \\xmark & \\xmark \\\\\n\t\tChakrabarti\\cite{chakrabarti_statistics_2011} & $1392 \\times 1040$ & 31 & 420 - 720 & 50 & \\xmark & \\xmark & \\xmark & \\xmark \\\\\n\t\tBGU\\cite{leibe_sparse_2016} & $1392 \\times 1300$ & 244 & 400 - 1000 & 201 & \\xmark & \\xmark & \\xmark & \\xmark \\\\\n\t\tMian\\cite{mian_hyperspectral_2012} & $752 \\times 480$ & 33 & 400 - 720 & 31 & \\cmark & \\xmark & \\xmark & \\xmark \\\\\n\t\tProposed & $1600 \\times 1200$ & 31 & 400 - 700 & 1890 & \\cmark & \\cmark & \\cmark & \\cmark \\\\\n\t\\end{tabular}\n\t\\vspace*{-0.3cm}\n\\end{table*}\n\nThere are several techniques to record a hyperspectral image.\nThe essential challenge is to record a 3D datacube using only 2D sensors.\nThis hyperspectral 3D datacube is depicted in Fig.\\ \\ref{fig:hs_cube}.\nOne can imagine combining different layers or single data elements to fit onto a 2D sensor.\nSome of the following techniques drop some of these data elements and need a reconstruction process afterwards.\nThe first class of recording techniques are scanning devices.\nHere, one dimension of this 3D datacube is unfolded across the time dimension.\nFor example, pushbroom sensors\\cite{gomez-chova_correction_2008} are scanning one spatial dimension and the spectral dimension using a 2D sensor.\nThe second spatial dimension is scanned over time by 
moving the scene or camera accordingly.\nAnother example of scanning techniques are filter wheels\\cite{koenig_practice_1998}, where the spectral dimension is scanned by spinning a wheel, such that different filters are in front of the sensor.\nThus, this technique unfolds the spectral dimension over time.\nObviously, these techniques are not able to capture hyperspectral videos, since the 3D datacube is scanned over time and thus not fully available at each time instance.\n\nHyperspectral snapshot cameras, on the other hand, are able to capture hyperspectral videos.\nSeveral ideas exist how to build a snapshot hyperspectral camera.\nThere are concepts using dispersion and fiber bundles\\cite{gat_development_2006} or a 2D dispersion pattern within a computed tomography framework\\cite{descour_demonstration_1997}.\nThus, a reconstruction process is mandatory.\nBoth of these techniques suffer from a very low spatial resolution.\nAnother technique is multispectral beamsplitting\\cite{matchett_volume_2007}, where the incoming light beam is split into the corresponding bands using special mirrors.\nThe problem here is, that the amount of photons for other bands is also drastically reduced by each mirror.\nFor a large number of bands, this technique does not work anymore.\nMultispectral filter arrays\\cite{monno_practical_2015} have a similar idea as a classical Bayer pattern by placing filters periodically over each pixel of a sensor.\nSubsequently, a demosaicing algorithm is necessary to yield a full hyperspectral image.\nHowever, depending on the number of bands, the underlying physical resolution for each channel drops significantly using multispectral filter arrays.\nEspecially, when thinking of hyperspectral imaging which can have up to several hundred bands, the pixels with the same filter are positioned far apart.\nMulti-aperture filtered cameras\\cite{shogenji_multispectral_2004} have a similar idea, but place filters over whole regions of the sensor.\nThus, the 
filters are not periodically repeated.\nHere, also a reconstruction process is necessary.\nImaging techniques based on compressed sensing\\cite{gehm_single-shot_2007} are also discussed.\nFor these cameras, a coded aperture followed by a dispersion element is used to record different spectral mixtures of different pixels.\nAfterwards, compressed sensing is used to reconstruct the datacube.\nFinally, works on camera arrays\\cite{genser_camera_2020} are currently active as well.\nSince the cameras are spatially distributed, a reconstruction pipeline yielding a consistent datacube is essential.\nTwo examples for the necessity of a reconstruction pipeline for high-resolution hyperspectral snapshot cameras are shown in Fig.\\ \\ref{fig:motivation}.\n\n\\begin{figure}\n\t\\centering\n\t\n\n\t\\includegraphics[]{figures\/paper-figure1.pdf}\n\t\n\t\\caption{Two cases of missing ground-truth pixels of snapshot $3 \\times 3$ multispectral\/hyperspectral cameras. On the left, the warped channels of a camera array are depicted, where pixels are missing due to occlusions. On the right, the channels of a multispectral filter array sensor are shown. 
Missing pixels are shown in white.}\n\t\\label{fig:motivation}\n\\end{figure}\n\nFrom this short overview of hyperspectral cameras, it becomes clear that it is challenging to get ground-truth high-resolution hyperspectral video data.\nGround-truth data has several requirements.\nFirst, the scene itself should be as realistic as possible.\nOf course, when recording a scene in reality, this is fulfilled by definition, while it is more of a challenge for synthetic data.\nFor synthetic data, the world, including all physical concepts like light, needs to be modeled properly to get as realistic data as possible.\nSecond, ground-truth data needs a high resolution in every dimension to be able to sample in any direction.\nWhile scanning techniques can deliver this as long as the scene is static, some hyperspectral snapshot cameras either record a very low spatial or spectral resolution or have low underlying spatial or spectral resolution.\nThird, the amount of artefacts and postprocessing should be as low as possible.\nThese artefacts include noise, blur, reconstruction errors, exposure problems, chromatic aberrations and many more.\nHere, recording a scene using existing hyperspectral imaging techniques fails to fulfill this requirement.\nFor a non-moving scene a near ground-truth hyperspectral image can be generated using scanning techniques.\nHowever, for example for a filter wheel, due to different behaviours of lens-filter combinations, the focus and exposure time have to be adjusted, which leads to different behaviours regarding optics and noise.\nSnapshot hyperspectral cameras most often need a reconstruction process, which cannot recover the perfect datacube.\nThus, again artefacts may be introduced.\nFourth, boundary conditions should be given as exact as possible.\nThis includes the used camera(s) including all specifications like sensor size and resolution, as well as the aperture and focal length of the lens.\nFurthermore, properties of the bandpass filters 
and light are also very important.\nSince there are always slight deviations during manufacturing processes and aging phenomena, it is impossible to properly fulfill this requirement when recording a scene hyperspectrally in reality.\n\nAs one can see, there is a strong need for ground-truth hyperspectral video data, for example also as reference data to measure the performance of snapshot reconstruction algorithms.\nThere are a number of successful hyperspectral image databases, which are summarized in Table \\ref{tab:databases}.\nOne of the most famous examples is the CAVE database\\cite{yasuma_generalized_2010}.\nThis database contains 32 scenes depicting different objects like balloons, persons, peppers and color patches, usually with a black background.\nThe images have a resolution of $512 \\times 512$ pixels in the range from 400 nm to 700 nm in 10 nm steps, resulting in 31 hyperspectral channels.\nThe CAVE database does not contain any natural scene.\nThe UGR database\\cite{eckhard_outdoor_2015} contains 14 images with a resolution of $1000 \\times 900$ pixels.\nThis database depicts mostly urban scenes including trees, cars and buildings.\nThe scenes were recorded from 400 nm to 1000 nm in 10 nm steps, resulting in 61 hyperspectral channels.\nUnfortunately, some of the bands are corrupted by artefacts.\nThe database of Hordley et al.\\cite{hordley_multi-spectral_2004} contains 23 images of packagings and color charts and thus contains no natural spectra.\nThe resolutions of the images differ.\nThe packagings and color charts were recorded from 400 nm to 700 nm in 10 nm steps, resulting in 31 hyperspectral channels.\nLe Moan et al.\\cite{larabi_database_2015} created a database of 9 different textures including textile, wood and skin from 410 nm to 1000 nm in 160 hyperspectral channels.\nThe textures have a resolution of $500 \\times 500$ pixels.\nThe database of Foster et al.\\cite{foster_frequency_2006} contains 8 hyperspectral images of natural and 
urban scenes.\nThese were recorded in the range from 400 nm to 720 nm in 10 nm steps, resulting in 33 hyperspectral channels.\nThe images have a resolution of $1018 \\times 1339$ pixels.\nChakrabarti et al.\\cite{chakrabarti_statistics_2011} recorded a hyperspectral image database of 50 hyperspectral images with a resolution of $1392 \\times 1040$ pixels.\nFor each scene, 31 hyperspectral channels were recorded from 420 nm to 720 nm in 10 nm steps.\nThis database mainly contains urban and indoor scenes.\nThe BGU database\\cite{leibe_sparse_2016} of Arad et al. contains over 200 hyperspectral images of natural, urban and indoor scenes at a resolution of $1392 \\times 1300$ pixels.\nThe database was recorded from 400 nm to 1000 nm at 1.25 nm steps.\nThus, the BGU database offers the highest spectral resolution, one of the highest spatial resolutions and the greatest diversity due to the sheer number of different scenes.\nFinally, a hyperspectral video database\\cite{mian_hyperspectral_2012} containing a single video of a lab scene with a spatial resolution of $752 \\times 480$ exists.\nHere, the spectrum from 400 nm to 720 nm was recorded using 10 nm steps, thus resulting in 33 hyperspectral channels.\n\nThis paper introduces a novel synthetic hyperspectral video database rendered using a camera array.\nThis camera array consists of nine cameras arranged in a $3 \\times 3$ grid.\nThus, the scenes are rendered from multiple positions for wavelengths between 400 nm and 700 nm in 10 nm steps, resulting in 31 hyperspectral channels for each camera.\nMoreover, a depth map is generated for every frame and camera.\nSince a high-resolution representation of spatial positions in three dimensions as well as the spectral dimension is available, many applications can be addressed by this database.\nThese applications do not necessarily have to be in a hyperspectral context, but also extend to grayscale and RGB image processing tasks as well as fusion problems, especially 
regarding depth sensors.\nTo the best of our knowledge, this paper is the first to introduce a \\textit{synthetic} hyperspectral \\textit{video} database.\n\nThe remainder of this paper is organized as follows.\nFirst, the techniques and properties of this hyperspectral video database are described in detail in Section \\ref{sec:database}.\nAfterwards, two applications of this database are considered in Section \\ref{sec:applications} by improving the corresponding state of the art.\nFinally, the paper is summarized in Section \\ref{sec:conclusion}.\n\n\\section{Database}\n\\label{sec:database}\nThe hyperspectral video database was rendered using the open-source software Blender\\cite{blender}, which provides tools for RGB rendering.\nTherefore, the first properties and techniques that are described in this section are related to Blender.\nThe basic idea is to render each wavelength of each frame as a separate grayscale image.\nFor that, the textures as well as the light source need to be adjusted to the current wavelength.\nOf course, the scene can be rendered from multiple cameras within a camera array.\n\n\\subsection{Renderer}\n\n\\begin{figure*}[t]\n\t\\centering\n\n\t\\includegraphics[]{figures\/paper-figure2.pdf}\n\t\n\t\\caption{The pipeline of the hyperspectral renderer. Essentially, all combinations of channel (cnl), frame (frm) and camera (cmr) are traversed and rendered. 
The red items in the top row indicate an exemplary state, i.e., currently the textures at 640 nm and the animations at frame 10 of 30 viewed from camera 6 are rendered.}\n\t\\label{fig:rendering_pipeline}\n\\end{figure*}\n\nTo extend Blender to a hyperspectral image renderer, each wavelength is rendered as a grayscale image.\nSince the wavelengths from 400 nm to 700 nm were rendered in 10 nm steps, one hyperspectral image consists of 31 rendered grayscale images.\nFor each wavelength, the images of materials as well as the intensity of the light source were changed programmatically through Python.\nMore details on textures and light sources will be presented below.\nThis pipeline is illustrated in Fig.\\ \\ref{fig:rendering_pipeline}.\n\nBlender has two different rendering engines, namely, Eevee and Cycles.\nWhile Eevee is a simple rasterization engine, Cycles implements path tracing\\cite{purcell_ray_2005} and thus yields much more realistic images.\nConsequently, Cycles was chosen as the rendering engine with branched path tracing as integrator.\nThe number of samples per pixel was set to 128.\n\nSince path tracing nearly always yields rendering noise in complex scenes, a post-processing denoiser was employed.\nConsidering that Blender directly implements Non-Local Means (NLM) denoising\\cite{buades_non-local_2011}, NLM was used as the denoiser after the grayscale images were rendered.\n\nThe bit depth of the final output was set to eight to allow for a smaller total database size.\nFurthermore, each grayscale image is saved as a PNG file.\n\n\\subsection{Hyperspectral Textures and Materials}\n\nAn essential part of setting up a realistic scene in Blender are textures, which give object surfaces color and detail without increasing the complexity of the geometry.\nTo give a surface the desired part of a texture, coordinates in images are mapped to coordinates of surfaces.\nSince a hyperspectral scene also needs hyperspectral textures, the possibilities are either to record 
them, which requires a high-resolution hyperspectral camera, or extract them from other databases.\nThe latter option was chosen for this database.\nTo maintain a consistent appearance of the textures, the BGU database\\cite{leibe_sparse_2016} was chosen to deliver hyperspectral textures.\nThough this database contains wavelengths from 400 nm to 1000 nm in 1.25 nm increments, our novel database only contains wavelengths from 400 nm to 700 nm in 10 nm steps, to keep the rendering time feasible and the database size acceptable.\nThe BGU database contains over 200 images of diverse natural, urban, and indoor scenes.\nThus, different textures like grass, wood, flowers, leaves, metal, plastic, different types of stones, parts of cars and many more could be extracted from this database.\nThe sky was also extracted from this database using the image \\textit{hill\\_0325-1228}, which shows a huge portion of a cloud-free sky.\nThis texture was then used for the world material in Blender.\nSince Blender is natively only an RGB renderer, hyperspectral textures consist of one grayscale image for each wavelength.\nThus, depending on the wavelength that is currently rendered, the textures need to be exchanged.\n\nThe scenes have 14 to 34 different textures depending on their complexity.\nFor example, an outdoor scene needs rather few textures, since it mainly contains dirt, grass, wood and leaves.\nThen, for these types of materials, different textures for leaves and wood are extracted from the BGU database.\nOn the other hand, indoor scenes typically have more textures, since many small objects are lying around the scene, which all need a texture.\nOf course, one can reuse some textures for some objects, but an interesting indoor scene also needs diverse textures even for the same object.\n\n\\begin{figure}\n\t\\centering\n\n\t\\includegraphics[]{figures\/paper-figure3.pdf}\n\t\\caption{The procedure of extracting hyperspectral textures. The RGB image is used to spot good textures. 
The extraction is done by warping all hyperspectral channels using a calculated homography matrix.}\n\t\\label{fig:hte}\n\\end{figure}\n\nThe extraction procedure is shown in Fig.\\ \\ref{fig:hte}.\nFirst, four points spanning a rectangle are selected.\nAfterwards, a homography matrix is calculated, which maps these coordinates in the source image plane to the texture coordinates $\\left( (0, 0), (0, H), (W, H), (W, 0) \\right)$, where $W$ and $H$ represent the width and the height of the resulting texture.\nThe texture size has to be defined by the user.\nFrom these four pairs of points $(x_i^{\\text{src}}, y_i^{\\text{src}}) \\leftrightarrow (x_i^{\\text{dst}}, y_i^{\\text{dst}})$, the homography matrix\n\\begin{equation}\n\t\\bm{H} =\n\t\\begin{bmatrix}\n\t\th_{11} & h_{12} & h_{13}\\\\\n\t\th_{21} & h_{22} & h_{23}\\\\\n\t\th_{31} & h_{32} & h_{33}\n\t\\end{bmatrix}\n\\end{equation}\nis found by solving the least-squares problem\n\\begin{equation}\n\t\\begin{split}\n\t\t\\argmin_{\\bm{H}} \\sum_{i=1}^{4} & \\left( x_i^{\\text{dst}} - \\frac{h_{11}x_i^{\\text{src}} + h_{12}y_i^{\\text{src}} + h_{13}}{h_{31}x_i^{\\text{src}} + h_{32}y_i^{\\text{src}} + h_{33}} \\right)^2 +\\\\\n\t\t& \\left( y_i^{\\text{dst}} - \\frac{h_{21}x_i^{\\text{src}} + h_{22}y_i^{\\text{src}} + h_{23}}{h_{31}x_i^{\\text{src}} + h_{32}y_i^{\\text{src}} + h_{33}} \\right)^2.\n\t\\end{split}\n\\end{equation}\nSince a homography matrix has eight degrees of freedom, the final matrix is scaled such that $h_{33} = 1$.\nThen, the position of other points in the destination image plane can be calculated in homogeneous coordinates by\n\\begin{equation}\n\t\\begin{bmatrix}\n\t\tx^{\\text{dst}}\\\\\n\t\ty^{\\text{dst}}\\\\\n\t\t1\n\t\\end{bmatrix} \\sim \\bm{H}\n\t\\begin{bmatrix}\n\t\tx^{\\text{src}}\\\\\n\t\ty^{\\text{src}}\\\\\n\t\t1\n\t\\end{bmatrix},\n\\end{equation}\nwhere $\\sim$ denotes equality up to scale, i.e., the result is divided by its third component.\nThis is necessary to warp the texture from the source image plane to the destination texture plane.\nThis warping procedure is done for every hyperspectral 
channel.\nAfterwards, the hyperspectral channels are saved individually as lossless grayscale images.\n\nAfter the textures are extracted, corresponding materials can be set up in Blender.\nThe texture of the material is changed depending on which channel is currently rendered.\nMoreover, a lot of properties can be changed in Blender, which influence how light is reflected or absorbed.\nFor example, the roughness can be changed as well as how specular and metallic a material is, e.g., a shiny car paint typically has a very low roughness but is very metallic, while a street is very rough and barely shiny.\nMoreover, the transparency of a texture can be set, which is very important for windows.\n\n\\subsection{Light Sources}\n\nFor all outdoor scenes or scenes with windows, the light source is the sun.\nIn bright daylight, the sun can reach a color temperature of up to 6400K.\nThe color temperature of a light source defines its emitted light spectrum.\nThe spectrum is based on the radiation of an ideal black body and can be calculated by Planck's law.\nFor the single indoor scene without windows, a lamp with a color temperature of 3200K, a typical value for studio lamps, was mounted into that room.\nThe spectra of these two light sources, calculated by Planck's law, are shown in Fig.\\ \\ref{fig:light_spectra}.\nFor each wavelength, the intensity of the sun or the indoor lamp in the scene is set according to this plot.\nFurthermore, the hardness and light strength differ from scene to scene and are also dependent on the angle of the sun.\n\n\\begin{figure}\n\t\\centering\n\n\t\\includegraphics[]{figures\/paper-figure4.pdf}\n\t\\caption{Used light spectra for the sun (6400K) and the indoor lamp (3200K).}\n\t\\label{fig:light_spectra}\n\\end{figure}\n\n\\subsection{Cameras and Camera Array}\n\nAll scenes are rendered from nine cameras arranged in a $3 \\times 3$ grid.\nThis mimics a camera array from Genser et al.\\cite{genser_camera_2020} and is shown in 
Fig.\\ \\ref{fig:camsi}.\nTherefore, the specifications are very similar.\nThe baseline between neighboring cameras is 40 mm.\nThe cameras themselves are very close to common industrial cameras like a Basler acA1600-60gm, which are also used for the camera array in Fig.\\ \\ref{fig:camsi}.\nThus, the image sensor size is set to 7.2 mm $\\times$ 5.4 mm and the resolution of the images is $1600 \\times 1200$ pixels.\nThe focal length differs from the original camera array and is set to 6 mm to capture a large portion of the scene.\nThe purpose of rendering the scene from this camera array is, for example, to be able to simulate the multispectral camera shown in Fig.\\ \\ref{fig:camsi}.\nSince this multispectral camera array only records one spectral area with a specific transmission curve per camera, one has to simulate the corresponding filters by calculating a weighted sum of hyperspectral images for each camera.\nThus, in contrast to the camera array in Genser et al.\\cite{genser_camera_2020}, the synthetic data contains all 31 hyperspectral channels for every camera.\nIf the application demands just a single hyperspectral view of the scene, only the center camera can be extracted, since each camera has its own folder in the database.\n\n\\begin{figure}[t]\n\t\\centering\n\n\t\\includegraphics[]{figures\/paper-figure5.pdf}\n\t\\caption{A multispectral camera with nine channels on the left\\cite{genser_camera_2020} with a sketch of dimensions and indices of cameras on the right. 
Note that this view is from the front, as well.}\n\t\\label{fig:camsi}\n\\end{figure}\n\n\\subsection{Depth Maps}\n\nMany applications also rely on depth data provided by depth sensors like RADAR or LIDAR.\nMoreover, some algorithms also estimate the depth as an intermediate step, for example, to register multiple views on top of each other.\nTherefore, depth data for all scenes, cameras, and frames is provided.\nThe depth maps are stored in the EXR format as 32-bit floats for a high resolution.\nThe depth maps contain the physical depth in meters.\nFrom the depth, the disparity between cameras is calculated by\n\\begin{equation}\n\t\\label{eq:depth_to_disp}\n\td = \\frac{b \\cdot f}{p \\cdot s},\n\\end{equation}\nwhere $d$ is the disparity of corresponding pixels between two cameras, $b$ is the baseline between the cameras, e.g. 40 mm for directly neighboring cameras in the camera array, $f$ is the focal length, thus 6 mm, $p$ is the depth itself, and $s$ is the size of a single pixel, which calculates to $s = 7.2 \\ \\text{mm}\/1600 = 5.4 \\ \\text{mm}\/1200 = 4.5 \\cdot 10^{-3} \\ \\text{mm}$.\nNote that the depth is provided in meters while all other properties are given in millimeters.\nDepth maps can be used, for example, to evaluate depth estimation algorithms or to warp different views of the camera array to different positions.\n\n\\subsection{Motion}\n\nIn all of the scenes, the camera array moves along a specified path.\nDepending on the scene, this is realized using different approaches.\nIn the first method, the camera array is moved through the scene manually and this trajectory is recorded.\nAfter recording this movement, the paths are smoothed out, since a manual camera movement is typically quite shaky.\nIn the end, each frame is a keyframe for the movement and stores the position and rotation of the camera array.\nThe second way to define movement in this database is to specify the start and end position and rotation and to interpolate smoothly 
between them for intermediate frames.\nThus, only two keyframes are set and the intermediate points are determined using B\\'ezier interpolation.\nThis approach is also used for moving objects.\n\n\\subsection{Scenes}\n\n\\begin{figure}\n\t\\centering\n\n\t\\includegraphics[]{figures\/paper-figure6.pdf}\n\t\n\t\\caption{The workflow to create a hyperspectral scene. This general workflow can be executed in any rendering framework.}\n\t\\label{fig:workflow}\n\\end{figure}\n\n\\begin{figure*}\n\t\\centering\n\n\t\\includegraphics[]{figures\/paper-figure7.pdf}\n\t\\caption{The center views of the seven different synthetic hyperspectral scenes: frame 0 of ``family house'', frame 0 of ``medieval seaport'', frame 25 of ``city'', frame 0 of ``outdoor'', frame 10 of ``forest'', frame 0 of ``indoor'' and frame 14 of ``lab''. On the left, an RGB image using standard CIE 1931 color curves can be seen. Next to it, all 31 hyperspectral channels of the corresponding image and the depth map are depicted.}\n\t\\label{fig:scenes}\n\\end{figure*}\n\nThe whole workflow for setting up a 3D hyperspectral scene is summarized in Fig.\\ \\ref{fig:workflow}.\nIn total, seven different scenes were rendered.\nThese scenes are depicted in Fig.\\ \\ref{fig:scenes}, where the RGB images are calculated using standard CIE 1931 color curves.\nFurthermore, all 31 hyperspectral channels of one frame are shown as well as the depth maps corresponding to the respective scene and frame.\nThe camera array moves through each scene for 30 frames.\n\nThere are five outdoor scenes. 
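The hyperspectral-to-RGB preview described above, i.e., integrating the 31 channels against the CIE 1931 color curves and mapping the result to sRGB, can be sketched as follows. This is a minimal illustration under stated assumptions, not the actual rendering code of the database: the crude Gaussian stand-ins for the CIE 1931 color-matching functions, the simple $1/2.2$ gamma instead of the exact sRGB transfer curve, and all names such as `cube_to_rgb` are assumptions for the sketch.

```python
import numpy as np

# Wavelength grid matching the database: 400..700 nm in 10 nm steps (31 channels).
LAMBDA = np.arange(400.0, 701.0, 10.0)

def _gauss(lam, mu, sigma):
    return np.exp(-0.5 * ((lam - mu) / sigma) ** 2)

# Crude single-Gaussian stand-ins for the CIE 1931 color-matching functions
# (an assumption; the paper uses the standard tabulated curves).
X_BAR = 1.06 * _gauss(LAMBDA, 595.0, 33.0) + 0.36 * _gauss(LAMBDA, 446.0, 19.0)
Y_BAR = 1.00 * _gauss(LAMBDA, 558.0, 46.0)
Z_BAR = 1.78 * _gauss(LAMBDA, 449.0, 22.0)

# Standard XYZ -> linear sRGB matrix (D65 white point).
XYZ_TO_RGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def cube_to_rgb(cube):
    """Convert an (H, W, 31) hyperspectral cube to an (H, W, 3) sRGB preview."""
    cmf = np.stack([X_BAR, Y_BAR, Z_BAR], axis=0)   # (3, 31)
    xyz = np.tensordot(cube, cmf, axes=([2], [1]))  # Riemann sum over the 31 samples
    xyz /= Y_BAR.sum()                              # normalize: flat spectrum -> Y = its level
    rgb = np.tensordot(xyz, XYZ_TO_RGB, axes=([2], [1]))
    rgb = np.clip(rgb, 0.0, 1.0)
    return rgb ** (1.0 / 2.2)                       # simple gamma instead of exact sRGB curve

# A flat 50% spectrum should map to a roughly neutral gray.
cube = np.full((4, 4, 31), 0.5)
rgb = cube_to_rgb(cube)
```

The per-pixel operation is exactly the matrix product of the earlier discretized imaging model, with the color-matching functions playing the role of the filter matrix rows.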
\\textit{Family house} represents a mixture of urban and rural objects, \\textit{medieval seaport} is a view of an ancient city incorporating a lot of stone and wood, and \\textit{city} is a newer city including a moving car, normal streets, sidewalks, modern houses and trash bins.\n\\textit{Outdoor} represents a rural scene seen from a farm track and \\textit{forest} is a similar scene without any man-made objects and with a lot of trees.\n\\textit{Indoor} depicts a room including furniture and decoration elements, but still with the light source being the sun shining through windows.\nFinally, \\textit{lab} is illuminated by a halogen lamp and depicts a laboratory setup with selected elements on a table similar to the real-world image taken in Genser et al.\\cite{genser_camera_2020}.\n\n\\subsection{Limitations}\n\nNote that this database still contains only \\textit{synthetic} hyperspectral data.\nThus, it is nearly free of any noise or artefacts, and ideal cameras as well as an ideal camera array are assumed.\nAn array-related real-world artefact is that the cameras are not perfectly aligned with each other and that the sensors are not perfectly fitted within the camera housings.\nTherefore, in the real world, a calibration procedure between different cameras would always be necessary, e.g., to fulfill the epipolar constraint.\nMoreover, a lot of artefacts of the filter-lens-camera combinations can occur.\nFirst, filters have the tendency to change optical paths slightly depending on the wavelength.\nSecond, lenses introduce artefacts like optical veiling glare\\cite{mccann_veiling_2007}, lens flares\\cite{rosenhauer_image_1968}, wavelength-dependent chromatic aberration\\cite{waller_phase_2010}, star bursts and many more.\nFinally, cameras always introduce different kinds of noise, e.g., shot noise, thermal noise and electronic noise\\cite{hytti_noise_2006}.\nNone of these artefacts are part of this database and most of them are difficult to 
simulate.\nThus, if one trains or evaluates algorithms based on this data, one has to be careful, since the data of this database deviates from real-world image acquisitions.\nHowever, depending on the desired application, noise or artefacts could be added to the clean data.\nMoreover, while the absolute performance is estimated wrongly due to the missing artefacts, a relative comparison between two algorithms might still be meaningful, assuming that both algorithms are affected by noise and artefacts equally.\n\n\\section{Applications}\n\\label{sec:applications}\nIn this section, an overview of applications that can benefit from this dataset is given.\nFirst, two applications are presented in detail and improved, which is verified using the novel database.\nIn the first application, videos from all cameras and depth maps from the center camera are used.\nThe second application uses just the hyperspectral video of the center camera.\nIn the end, several other applications are described briefly.\n\n\\subsection{Cross Spectral Reconstruction}\n\nThe first application that is covered in detail is cross-spectral image reconstruction.\nConsider imaging a scene with the camera array shown in Fig.\\ \\ref{fig:camsi}.\nThe first step to create a consistent multispectral datacube is a cross-spectral disparity estimation, i.e., finding which pixel in one camera corresponds to which pixel in the other camera.\nSince the final reconstruction process is evaluated here, the ground-truth depth maps from the database can be used.\nFor that, the depths need to be converted to disparities using \\eqref{eq:depth_to_disp}.\nThen, this disparity map can be used to warp the peripheral view to the center view.\nDue to occlusions, there are missing pixels, which need to be reconstructed.\nFortunately, the center view is always fully available, since it is not warped to any other position.\nThus, the center view can be used as the reference channel for all peripheral 
views.\nHowever, this center view records a different part of the spectrum than the peripheral views.\nNote that the procedure to simulate a multispectral camera array is described in Section \\ref{subsec:csr_eval}.\nTherefore, one has to find a relationship between the center view and the peripheral view that has to be reconstructed.\nThis whole problem is depicted in Fig.\\ \\ref{fig:CSR_problem}.\nThe peripheral view that has to be reconstructed will be called the distorted view in the following.\n\\begin{figure}\n\t\\centering\n\n\t\\includegraphics[]{figures\/paper-figure8.pdf}\n\t\\caption{The basic problem of cross spectral reconstruction. On the left side, the fully available center view is depicted; on the right side, the warped peripheral view is shown. Due to occlusions, some pixels are missing in the peripheral view, which are shown in red.}\n\t\\label{fig:CSR_problem}\n\\end{figure}\n\n\\subsubsection{State of the Art}\n\nFor a single multispectral image, Non-local Cross-Spectral Reconstruction (NOCS) in \\cite{sippel_spatio-spectral_2021} is a state-of-the-art approach.\nNOCS first finds similar blocks using the fully available reference images.\nA block is the set of pixels in a, usually square, area around a center pixel.\nIn the case of the multispectral camera array, this would be just the center view.\nFor that, the $l_2$-norm between two blocks within the reference image is minimized\n\\begin{equation}\n\td_{\\text{NOCS}}(\\bm{x}, \\bm{y}) = ||\\bm{B}^{\\text{R}}(\\bm{x}) - \\bm{B}^{\\text{R}}(\\bm{y})||_2,\n\\end{equation}\nwhere $\\bm{B}^{\\text{R}}(\\bm{x})$ describes the reference block at coordinate $\\bm{x}$.\nOf course, this whole block matching procedure only has to be done for pixels that are missing in the distorted view.\nFor a missing pixel position $\\bm{x}$, the $B$ best matches are used to stack the corresponding pixel values of the reference view on top of each other, resulting in the vector $\\bm{r}(\\bm{x})$.\nThe same procedure is 
done for the distorted image, resulting in the distorted vector $\\bm{d}(\\bm{x})$.\nOf course, this vector contains missing pixels.\nMoreover, since a block has distance 0 to itself, the pixel to reconstruct is always the first entry of the vector.\n\nThese two vectors are then used to build a linear regression model\n\\begin{equation}\n\t\\bm{d}(\\bm{x}) = \\alpha(\\bm{x}) \\cdot \\bm{r}(\\bm{x}) + \\beta(\\bm{x}),\n\\end{equation}\nwhere $\\alpha(\\bm{x})$ and $\\beta(\\bm{x})$ are the parameters of the linear regression and thus the variables that need to be estimated for every missing pixel.\nThey are found by minimizing the $l_2$-norm of the difference between the known entries of the distorted vector $\\tilde{\\bm{d}}(\\bm{x})$ and their prediction from the reference vector $\\tilde{\\bm{r}}(\\bm{x})$ at the corresponding positions\n\\begin{equation}\n\t\\hat{\\alpha}(\\bm{x}), \\hat{\\beta}(\\bm{x}) = \\argmin_{\\alpha(\\bm{x}), \\beta(\\bm{x})} ||\\alpha(\\bm{x}) \\cdot \\tilde{\\bm{r}}(\\bm{x}) + \\beta(\\bm{x}) - \\tilde{\\bm{d}}(\\bm{x})||_2^2.\n\\end{equation}\nThese parameters are found in closed form\\cite{sippel_spatio-spectral_2021}.\nAfterwards, this model is used to predict the missing pixel, which is the first element of the distorted vector.\nFor that, the value of the first element of the reference vector is put into the model and the resulting value is the predicted reconstructed pixel value\n\\begin{equation}\n\t\\bm{d}_1(\\bm{x}) = \\hat{\\alpha}(\\bm{x}) \\cdot \\bm{r}_1(\\bm{x}) + \\hat{\\beta}(\\bm{x}).\n\\end{equation}\n\nOf course, not all missing pixels are reconstructed at once, but an iterative procedure is carried out.\nIn each iteration, the 10\\% (but at least one) of the remaining missing pixels whose vectors contain the highest number of non-missing pixels of the distorted view are reconstructed.\nAfterwards, all vectors that contain a newly reconstructed pixel are updated and the loop starts from the beginning.\nThus, 
reconstructed pixels will influence the remaining pixels that need to be reconstructed.\n\n\\subsubsection{Novel Approach}\n\nWe propose to extend this method by also using the previous frame, thereby exploiting temporal correlation to enhance NOCS.\nThis novel method is called Temporal Cross Spectral Reconstruction (TNOCS).\nTo exploit temporal correlation, the block matching procedure is also executed on the previous frame.\nTherefore, the block matching procedure needs to consider that similar blocks might be in different frames\n\\begin{equation}\n\td_{\\text{TNOCS}}(\\bm{x}, \\bm{y}, t_d) = ||\\bm{B}^{\\text{R}}_{t}(\\bm{x}) - \\bm{B}^{\\text{R}}_{t - t_d}(\\bm{y})||_2,\n\\end{equation}\nwhere $\\bm{B}^{\\text{R}}_{t}(\\bm{x})$ is a block in the reference video at spatial position $\\bm{x}$ and current frame $t$, and $t_d$ is the difference to the respective frame.\nTherefore, when a similar block for $\\bm{x}$ is searched, the position and the frame of the other block can be varied.\nSince two consecutive frames are usually highly correlated but still depict the scene from a slightly different angle, only the previous and the current frame are searched for similar blocks.\nThus, $t_d \\in \\{0, 1\\}$.\nThe remaining formulas are the same, since the pixel at position $\\bm{x}$ still needs to be reconstructed and the vectors contain pairs of reference pixels and the corresponding pixels from the distorted video.\n\nFor an improvement over NOCS, it is essential that the camera or objects in the scene move from one frame to the next, since otherwise the pixels to be reconstructed would be exactly the same and the temporal correlation cannot be properly exploited.\nNote that only non-reconstructed pixels of the previous frame are used for block matching, since the error in reconstruction should not propagate through all frames.\nOtherwise, this would lead to severe artefacts after some frames.\nOf course, the block matching is also executed on 
the current frame to further exploit spatial correlation.\nThis idea is summarized in Fig.\\ \\ref{fig:TNOCS}.\n\n\\begin{figure}\n\t\\centering\n\n\t\\includegraphics[]{figures\/paper-figure9.pdf}\n\t\n\t\\caption{The proposed TNOCS searches similar blocks in the current frame as well as the previous frame to build the linear regression model. The blue box depicts the block for which similar blocks are searched, while the red blocks are the best matches. The images depict cutouts of two consecutive frames of the \\textit{lab} scene.}\n\t\\label{fig:TNOCS}\n\\end{figure}\n\n\\subsubsection{Evaluation}\n\\label{subsec:csr_eval}\n\nTo validate that this idea works well, our novel hyperspectral video database is employed.\nFor that, multispectral video data has to be created out of the hyperspectral data.\nTo achieve this, an imaging pipeline needs to be simulated.\nA single pixel of the $i$-th channel of a multispectral image is recorded by\\cite{sippel_structure-preserving_2020}\n\\begin{equation}\n\tc_i = \\int q(\\lambda)r(\\lambda)f_i(\\lambda)o(\\lambda)m(\\lambda) \\text{d}\\lambda,\n\\end{equation}\nwhere $q(\\lambda)$ is the spectrum of the light source, $r(\\lambda)$ is the reflectance spectrum of the material, $f_i(\\lambda)$ is the transfer function of the $i$-th filter corresponding to the $i$-th multispectral channel, $o(\\lambda)$ is the spectral transmittance of the camera lens and $m(\\lambda)$ is the spectral response of the camera sensor.\nIn this case, the database already yields a sampled version of the product of the light source and reflectance $s(\\lambda) = q(\\lambda)r(\\lambda)$ as images.\nThe transfer functions of the camera lens and the camera sensor are assumed to be ideal in the covered wavelength area from 400 nm to 700 nm.\nSince the images are obviously sampled in the spectral dimension, this equation needs to be discretized as well.\nTherefore, a single hyperspectral pixel from our database can be described by $\\bm{s} = 
[s(\\lambda_1), s(\\lambda_2), \\dots, s(\\lambda_{31})]^\\text{T}$.\nWhen stacking the sampled filters on top of each other into the filter matrix $\\bm{F} = [\\bm{f}(\\lambda_1), \\bm{f}(\\lambda_2), \\dots, \\bm{f}(\\lambda_{31})]$, where $\\bm{f}(\\lambda_j)$ contains the responses of all filters at wavelength $\\lambda_j$, the imaging process for a single pixel can be written as\n\\begin{equation}\n\t\\bm{c} = \\bm{F} \\bm{s}.\n\\end{equation}\n\nIn this case, the filter matrix contains sampled values of real filters used for the camera array.\nNamely, these are six bandpass filters with a bandwidth of 50 nm centered at 425 nm, 450 nm, 500 nm, 550 nm, 600 nm and 650 nm, and a red, a green and a blue filter to compose an RGB image.\nThen, the data to reconstruct is created by calculating the multispectral data for the corresponding camera.\nThe ground truth is calculated by doing the same calculations with the center camera.\nMoreover, the warped images are calculated using the ground-truth depth maps provided by the database.\n\n\\begin{table}[t]\n\t\\centering\n\t\\renewcommand{\\arraystretch}{1.5}\n\t\\caption{Evaluation in terms of PSNR of the novel temporal cross spectral reconstruction algorithm (TNOCS) against the version without exploiting temporal correlation (NOCS).}\n\t\\label{tab:eval_reconstruction}\n\t\\begin{tabular}{c|c|c}\n\t\t& NOCS & TNOCS \\\\ \\hline\n\t\tFamily house & 30.83 dB & \\textbf{36.43 dB} \\\\ \\hline\n\t\tMedieval seaport & 40.06 dB & \\textbf{44.87 dB} \\\\ \\hline\n\t\tCity & 43.61 dB & \\textbf{45.39 dB} \\\\ \\hline\n\t\tOutdoor & 28.94 dB & \\textbf{34.05 dB} \\\\ \\hline\n\t\tForest & 27.03 dB & \\textbf{30.67 dB} \\\\ \\hline\n\t\tIndoor & 39.33 dB & \\textbf{41.15 dB} \\\\ \\hline\n\t\tLab & 35.40 dB & \\textbf{36.26 dB} \\\\ \\hline \\hline\n\t\tAverage & 35.03 dB & \\textbf{38.40 dB} \\\\\n\t\\end{tabular}\n\t\\vspace*{-0.3cm}\n\\end{table}\n\nUsing this multispectral database, the novel TNOCS algorithm can be evaluated against its non-temporal version NOCS.\nThe results are summarized in Table\\ 
\ref{tab:eval_reconstruction}.\nThe table shows that exploiting temporal correlation by simply searching for similar blocks in the previous frame enhances the performance significantly.\nEspecially for scenes containing a lot of leaves, which lead to many small missing areas, the difference is enormous.\nFor other scenes like \textit{lab} or \textit{city}, the improvement is smaller but still noticeable.\n\nWithout our novel hyperspectral video database, it would be impossible to accurately validate the performance gain of exploiting temporal correlation.\nAs shown, it is also possible to create nearly any other data that lies in the wavelength area from 400 nm to 700 nm.\n\n\subsection{Hyperspectral Video Coding}\n\nThe second application addressed is hyperspectral video coding.\nThe need for hyperspectral image and video coding is already motivated by this database itself.\nWith just seven scenes, nine cameras and 30 frames, this database already has a size of more than 60 GB when saved as single grayscale PNGs.\nWhen looking at raw values, this number increases even more.\nFor this hyperspectral database, saving all images without any compression and eight bits per grayscale pixel results in $7 \cdot 9 \cdot 30 \cdot 31 \cdot 1600 \cdot 1200 \ \text{bytes} \approx 105 \ \text{GB}$.\nThis enormous amount of data should be compressed, since it contains a lot of correlation.\nOf course, there is temporal correlation, which can be exploited.\nMoreover, the different hyperspectral channels are highly correlated, which can be exploited by a coder.\nIn the following, the term encoding means producing a compressed bitstream out of a (hyperspectral) video.\nIn contrast, the term decoding means transforming this bitstream back to the (hyperspectral) video.\nFor lossy coders, this decoded video deviates from the original video.\n\n\subsubsection{State of the Art}\n\nThe presented method is based on a multispectral and hyperspectral image 
coder named Pel-Recursive Inter-Band Prediction (PRIBP) \cite{meyer_multispectral_2020}.\nBefore presenting the novel inter component of this coder, the intra hyperspectral coder is reviewed.\nThe basic idea of \cite{meyer_multispectral_2020} is to first code three channels, in this paper RGB channels, with a standard coder.\nThen, the three decoded channels serve as reference channels for the next channels to code.\nThe decoded channels need to be used as reference channels, because the other end of the communication channel only has the decoded images available, which slightly deviate from the original images depending on the quality settings.\nThus, to avoid a drift between encoder and decoder, the original channels cannot be used as reference channels.\nFor that, more intra modes have been implemented into the High Efficiency Video Coding (HEVC) codec \cite{sullivan_overview_2012}.\nThe prediction is done on a transform block level using previously predicted pixels and reconstructed pixels from adjacent blocks.\nTo code the center pixel $b$ of the current block $\bm{b}$ of the channel to code, first the best reference channel block $\bm{r}$ out of the current three reference channels is searched by picking the one with the maximum correlation with the already predicted pixels\n\begin{equation}\n\t\rho(\tilde{\bm{r}}, \tilde{\bm{b}}) = \frac{\sum_{i=1}^{N} \left( \tilde{\bm{r}}_i - \mathbb{E}\{\tilde{\bm{r}}\} \right)\left( \tilde{\bm{b}}_i - \mathbb{E}\{\tilde{\bm{b}}\} \right)}{\sqrt{\sum_{i=1}^{N} \left( \tilde{\bm{r}}_i - \mathbb{E}\{\tilde{\bm{r}}\} \right)^2} \sqrt{\sum_{i=1}^{N} \left( \tilde{\bm{b}}_i - \mathbb{E}\{\tilde{\bm{b}}\} \right)^2}},\n\end{equation}\nwhere $\mathbb{E}\{\cdot\}$ is the expectation operator, $N$ is the number of already predicted pixels, and $\tilde{\bm{r}}$ and $\tilde{\bm{b}}$ are the already predicted pixels of the current block of the reference signal and the channel, 
respectively.\nSubsequently, this reference is used to build a linear regression model very similar to the one used in cross-spectral reconstruction.\nTherefore, the linear model reads as\n\begin{equation}\n\t\tilde{\bm{b}} = \alpha \cdot \tilde{\bm{r}} + \beta\n\end{equation}\nwith parameters $\alpha$ and $\beta$ to be estimated.\nAgain, the parameters are found by minimizing the $l_2$-norm\n\begin{equation}\n\t\hat{\alpha}, \hat{\beta} = \argmin_{\alpha, \beta} ||\alpha \cdot \tilde{\bm{r}} + \beta - \tilde{\bm{b}}||_2^2.\n\end{equation}\nFinally, this model is employed for the prediction of the pixel\n\begin{equation}\n\t\hat{b} = \hat{\alpha} \cdot r + \hat{\beta}.\n\end{equation}\nThe coding order of the remaining channels is found by sorting the different channels according to their structural similarity index with respect to one anchor channel.\nSince there are always three reference channels, the least similar reference channel is then replaced by the channel that was just coded.\nNote that this method focused on multispectral image coding rather than hyperspectral image coding.\n\n\subsubsection{Novel Approach}\n\n\begin{figure*}\n\t\centering\n\n\t\includegraphics[]{figures\/paper-figure10.pdf}\n\t\n\t\caption{The basic concept of this novel inter hyperspectral video coder. First, motion compensation is executed on the frame $t$ exploiting the neighboring frames. 
Afterwards, the current channel is spectrally compensated using the last three channels.}\n\t\label{fig:inter_hs}\n\end{figure*}\n\nFor hyperspectral image coding, more structure is implicitly given, since neighboring channels are also neighbors in the spectrum.\nSince spectra are typically smooth, the correlation between two neighboring channels is therefore the highest.\nThe temporal and spectral compensation of the novel hyperspectral video coder is shown in Fig.\ \ref{fig:inter_hs}.\nAssuming the channels are sorted in ascending order according to their wavelength, the proposed hyperspectral video coder compresses the first three channels as an RGB video with a standard video coder using only intra-prediction, which ensures a high quality for the motion estimation based on these channels.\nSince this work is based on PRIBP, the base video coder is again HEVC.\nThus, the first three decoded channels for every time step $t$ read as\n\begin{equation}\n\t\hat{C}^t_0, \hat{C}^t_1, \hat{C}^t_2 = \text{D}^{\text{HEVC}}\left( \text{C}^{\text{HEVC}} \left( \left[ C^t_0, C^t_1, C^t_2 \right] \right) \right),\n\end{equation}\nwhere $\text{C}^{\text{HEVC}}\left( \cdot \right)$ and $\text{D}^{\text{HEVC}}\left( \cdot \right)$ are the coder and decoder, respectively, and $C^t_i$ is the $i$-th grayscale channel of a hyperspectral image of size $M \times N$ at time step $t$.\nTo avoid issues with error propagation, every second frame of every other channel is coded by the hyperspectral intra encoder, while for all other frames a motion-compensated residual is compressed by the hyperspectral intra coder.\nOtherwise, coding errors would propagate into the next frame, leading to even more errors and a larger residual to code, which would in turn result in a lower quality or a higher rate.\nThis residual is calculated by exploiting temporal correlation.\nTherefore, the decoded first three channels are used to estimate motion, which is done by 
PWC-Net \cite{sun_pwc-net_2018}.\nThis is done using forward and backward motion estimation\n\begin{equation}\n\t\label{eq:motion_vectors}\n\t\begin{split}\n\t\t\bm{V}^t_{\text{fw}} &= \text{PWC}\left( \left[ \hat{C}^{t-1}_0, \hat{C}^{t-1}_1, \hat{C}^{t-1}_2 \right], \left[ \hat{C}^{t}_0, \hat{C}^{t}_1, \hat{C}^{t}_2 \right] \right) \\\n\t\t\bm{V}^t_{\text{bw}} &= \text{PWC}\left( \left[ \hat{C}^{t+1}_0, \hat{C}^{t+1}_1, \hat{C}^{t+1}_2 \right], \left[ \hat{C}^{t}_0, \hat{C}^{t}_1, \hat{C}^{t}_2 \right] \right),\n\t\end{split}\n\end{equation}\nwhere $\bm{V}^t$ is an image of motion vectors for time step $t$.\nThen, these motion estimates can be used for motion compensation\n\begin{equation}\n\t\label{eq:motion_compensation}\n\t\begin{split}\n\t\tP^t_{\text{fw}, i}[m, n] &= \tilde{C}^{t - 1}_i \left( m + \bm{V}^t_{\text{fw}, 1}[m, n], n + \bm{V}^t_{\text{fw}, 2}[m, n] \right) \\\n\t\tP^t_{\text{bw}, i}[m, n] &= \tilde{C}^{t + 1}_i \left( m + \bm{V}^t_{\text{bw}, 1}[m, n], n + \bm{V}^t_{\text{bw}, 2}[m, n] \right),\n\t\end{split}\n\end{equation}\nwhere $\tilde{C} \left( m, n \right)$ is the interpolated and extended image of $\hat{C} \left[ m, n \right]$.\nMoreover, corresponding masks need to be calculated to indicate when the motion compensation tries to pull a pixel from outside the image scope\n\begin{equation}\n\t\label{eq:motion_mask}\n\tM^t_{j}[m, n] = \left\{\n\t\begin{array}{ c l }\n\t\t1, &\text{if } \begin{aligned}\n\t\t\t&0 \leq m + \bm{V}^t_{j, 1}[m, n] \leq M - 1 \ \land \\\n\t\t\t&0 \leq n + \bm{V}^t_{j, 2}[m, n] \leq N - 1\n\t\t\end{aligned} \\\n\t\t0, &\text{else},\n\t\end{array}\n\t\right.\n\end{equation}\nwhere $j \in \left\{ \text{fw}, \text{bw} \right\}$.\nSince two predictions for the same image are made, a merge procedure for them is required.\nIn general, the simple average of these two predictions is taken to merge them.\nHowever, at the 
border of the image, it can occur that the warping tries to map pixels from outside the original image plane to the corresponding pixel in the predicted image.\nIn these situations, the merge procedure entirely relies on the other prediction as long as this prediction is valid.\nIf no prediction is valid, again the average of both predictions is taken as the best guess.\nThis results in the final prediction\n\begin{equation}\n\t\label{eq:final_prediction}\n\t\begin{split}\n\t\tP^t_i = \left\{\n\t\t\begin{array}{c l}\n\t\t\t\frac{P^t_{\text{fw}, i} + P^t_{\text{bw}, i} }{2}, &\begin{aligned}\n\t\t\t\t&\text{if } \left(M^t_{\text{fw}}[m, n] = 1 \land M^t_{\text{bw}}[m, n] = 1\right) \\\n\t\t\t\t&\lor \left(M^t_{\text{fw}}[m, n] = 0 \land M^t_{\text{bw}}[m, n] = 0\right)\n\t\t\t\end{aligned}\\\n\t\t\tP^t_{\text{fw}, i}, &\text{if } M^t_{\text{fw}}[m, n] = 1 \land M^t_{\text{bw}}[m, n] = 0 \\\n\t\t\tP^t_{\text{bw}, i}, &\text{if } M^t_{\text{fw}}[m, n] = 0 \land M^t_{\text{bw}}[m, n] = 1.\n\t\t\end{array}\n\t\t\right.\n\t\end{split}\n\end{equation}\nTo calculate the residual for all odd frames, the predicted image is subtracted from the original image\n\begin{equation}\n\t\label{eq:residual}\n\tR^t_i = C^t_i - P^t_i \quad \forall \ t \bmod 2 = 1.\n\end{equation}\nThis yields three residual images, which later serve as reference residual images for the hyperspectral intra coder.\nMoreover, as in PRIBP, for all-intra coded frames the first three channels serve as reference channels for the first hyperspectral intra coded image.\n\nSubsequently, all other channels are coded iteratively.\nEven frames are coded using all-intra PRIBP\n\begin{multline}\n\t\hat{C}^t_i = \text{D}^{\text{PRIBP}}\left( \text{C}^{\text{PRIBP}} \left( C^t_i, \left[ \hat{C}^t_{i - 1}, \hat{C}^t_{i - 2}, \hat{C}^t_{i - 3} \right] \right) \right) \\\n\t\forall \ i \geq 3 \land t \bmod 2 = 0,\n\end{multline}\nwhere 
$\text{C}^{\text{PRIBP}} \left( C^t_i, \left[ \hat{C}^t_{i - 1}, \hat{C}^t_{i - 2}, \hat{C}^t_{i - 3} \right] \right)$ means that the $i$-th channel is coded using decoded channels $i - 1$, $i - 2$ and $i - 3$ as reference channels.\nAfter being decoded, each channel replaces the oldest reference in the reference buffer for that frame and serves as a reference channel for the next three channels.\nThis is done since, as already described, spectra are typically smooth and therefore, from a physical point of view, the last three channels have the highest correlation with the current channel.\nThen, the decoded intra frames at timesteps $t - 1$ and $t + 1$ are used for predicting the current frame at time $t$ using the motion vectors estimated from the first three channels in \eqref{eq:motion_vectors}.\nThe motion-compensated prediction is calculated using \eqref{eq:motion_compensation}, \eqref{eq:motion_mask} and \eqref{eq:final_prediction}.\nAfterwards, the residual of the frame to code is calculated by \eqref{eq:residual}.\nThis residual is fed into PRIBP, which again uses three reference images\n\begin{multline}\n\t\hat{R}^t_i = \text{D}^{\text{PRIBP}}\left( \text{C}^{\text{PRIBP}} \left( R^t_i, \left[ \hat{R}^t_{i - 1}, \hat{R}^t_{i - 2}, \hat{R}^t_{i - 3} \right] \right) \right) \\\n\t\forall \ i \geq 3 \land t \bmod 2 = 1.\n\end{multline}\nIn this case, the reference images are residual signals as well.\nAs soon as this residual is decoded, it again serves as a reference residual for the next three channels.\n\n\subsubsection{Evaluation}\n\nOur novel hyperspectral video database can directly be used to evaluate the gains of the proposed hyperspectral video coder.\nFor the evaluation, only the videos of the center camera are used.\nThe Quantization Parameters (QPs) for evaluation are set to 22, 27, 32 and 37.\nThese QPs are taken from the common test conditions for HEVC and are thus common practice \cite{hevc_ctc_2018}.\n\nIn 
general, different coders are evaluated by setting up rate-distortion curves, i.e., a plot that shows the data rate in terms of bitrate on the x-axis and distortion, here in terms of PSNR, on the y-axis.\nThe aforementioned QPs yield the data points for this curve.\nBetween these data points the curve is interpolated.\nThus, if the curve of a second coder is to the top left compared to the curve of the base coder, it achieves a better PSNR for the same bitrate, or vice versa, a lower bitrate for the same PSNR.\nThe average distance between rate-distortion curves can be measured using the Bjontegaard-Delta (BD) for the rate direction and PSNR direction \cite{bjontegaard_calculation_2001}.\nThe results for each scene are summarized in Table\ \ref{tab:eval_coding}, which shows the BD rate and PSNR.\nThe proposed hyperspectral video coder is first compared to the all-intra coder PRIBP.\nFor six out of seven scenes, the proposed coder outperforms the intra procedure.\nPWC-Net has problems estimating the motion properly for the scene \textit{forest}, since there are a lot of leaves, which leads to high-frequency changes between background and leaves.\nThus, the motion estimation would have to output a motion field with a very high spatial resolution to enhance the coder.\nIn this case, PWC-Net does not achieve this and even has a negative influence on the coding result.\nOn average, the novel hyperspectral coder saves roughly 3\% in bitrate in comparison to PRIBP. 
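The Bjontegaard-Delta rate used for these comparisons can be sketched numerically. The following is a minimal numpy implementation of the common cubic-fit formulation of the metric (function and variable names are our own, and this is an illustrative approximation, not the reference implementation accompanying the cited report):

```python
import numpy as np

def bd_rate(rates_anchor, psnr_anchor, rates_test, psnr_test):
    """Bjontegaard-Delta rate: average bitrate difference in percent
    between two rate-distortion curves over their common PSNR range.
    Negative values mean the test coder saves rate."""
    lr_a = np.log10(np.asarray(rates_anchor, dtype=float))
    lr_t = np.log10(np.asarray(rates_test, dtype=float))
    # Fit log-rate as a cubic polynomial of PSNR for each curve.
    p_a = np.polyfit(psnr_anchor, lr_a, 3)
    p_t = np.polyfit(psnr_test, lr_t, 3)
    # Integrate both fits over the overlapping PSNR interval.
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    int_a = np.polyval(np.polyint(p_a), hi) - np.polyval(np.polyint(p_a), lo)
    int_t = np.polyval(np.polyint(p_t), hi) - np.polyval(np.polyint(p_t), lo)
    avg_log_diff = (int_t - int_a) / (hi - lo)
    return (10.0 ** avg_log_diff - 1.0) * 100.0
```

For instance, a test coder that reaches the same PSNR values at exactly half the bitrate of the anchor yields a BD rate of $-50\,\%$.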
Note that this average is calculated over all file sizes and PSNRs and not over the rate and PSNR values from the table.\nThus, videos with a large file size influence this average more.\n\nMoreover, the proposed coding technique is also compared to the random access mode of High Efficiency Video Coding (HEVC), thus an RGB inter coder.\nFor that, every three consecutive channels were merged into an RGB video.\nThe remaining channels are coded as grayscale videos.\nFor 31 hyperspectral channels, this results in 10 RGB videos and 1 grayscale video to code for the standard HEVC.\nThe results vary much more in this comparison; however, the average bitrate saving of the proposed hyperspectral video coder is over 50\% in comparison to the inter mode of HEVC.\nIt is noticeable that HEVC has problems with high-frequency content in comparison, since the scenes \textit{family house}, \textit{outdoor} and \textit{forest} are compressed much worse.\nOn the other hand, for scenes with mostly homogeneous regions and movements, HEVC even outperforms the proposed hyperspectral video coder.\n\n\begin{table}[t]\n\t\centering\n\t\renewcommand{\arraystretch}{1.5}\n\t\caption{Evaluation of our novel inter hyperspectral coding against all-intra hyperspectral coding PRIBP and HEVC inter coding (random access). The values are given in Bjontegaard-Delta rate and PSNR, respectively. Lower is better for rate, while higher is better for PSNR.}\n\t\label{tab:eval_coding}\n\t\begin{tabular}{c|c|c|c|c}\n\t\t& \multicolumn{2}{c|}{PRIBP} & \multicolumn{2}{c}{HEVC Inter Coding} \\\n\t\t& BD Rate & BD PSNR & BD Rate & BD PSNR \\ \hline\n\t\tFam. house & -10.41\% & 0.59 dB & -54.55\% & 4.02 dB \\ \hline\n\t\tM. 
seaport & -7.15\% & 0.35 dB & -22.82\% & 1.16 dB \\ \hline\n\t\tCity & -9.29\% & 0.44 dB & -19.36\% & 0.92 dB \\ \hline\n\t\tOutdoor & -9.22\% & 0.47 dB & -65.81\% & 4.82 dB \\ \hline\n\t\tForest & 13.24\% & -0.59 dB & -73.49\% & 6.09 dB \\ \hline\n\t\tIndoor & -6.77\% & 0.22 dB & 12.71\% & -0.38 dB \\ \hline\n\t\tLab & -9.21\% & 0.57 dB & 42.61\% & -1.85 dB \\ \hline \hline\n\t\tAverage & -3.36\% & 0.15 dB & -52.46\% & 3.27 dB\n\t\end{tabular}\n\t\vspace*{-0.3cm}\n\end{table}\n\n\subsection{Other Applications}\nOther applications include spectral reconstruction \cite{sippel_structure-preserving_2020}.\nSpectral reconstruction has the goal of reconstructing light spectra from multispectral images or even RGB images.\nThe database can also be used for cross-spectral depth estimation \cite{genser_deep_2020}, where the disparity between different views, which record different ranges in the spectrum, is estimated, e.g., using the multispectral camera array shown in Fig.\ \ref{fig:camsi}.\nFurthermore, one can also evaluate different camera setups like trinocular arrangements \cite{mozerov_trinocular_2009}.\nMoreover, depth estimation algorithms and image enhancement procedures that rely on grayscale and RGB cameras \cite{jeon_stereo_2016} can be evaluated.\nStudies that investigate proper color-to-grayscale conversions for these types of algorithms \cite{benedetti_color_2012} point in a similar direction.\nA topic related to stereo matching is scene reconstruction using different cameras \cite{vedaldi_atlas_2020}.\n\nIn general, different fusion techniques related to depth sensors can be evaluated.\nA very obvious example is depth estimation aided by a depth map sensor, e.g., a LIDAR sensor \cite{gao_object_2018}.\nFurthermore, a depth sensor can also be used to aid upsampling \cite{park_high-quality_2014}.\nAn even better fit for this database is the fusion of RGB images, hyperspectral recordings and LIDAR data for urban land cover 
classification \cite{hansch_fusion_2021}.\n\nOf course, reconstruction algorithms for different types of hyperspectral cameras, like demosaicing for multispectral filter arrays \cite{feng_mosaic_2021}, can easily be evaluated.\nCombinations of these techniques can also be evaluated, e.g., a mixture of multispectral filter arrays and camera arrays.\n\nFinally, multispectral and hyperspectral denoising algorithms can be evaluated, e.g., techniques based on sparse matrix decomposition \cite{xie_hyperspectral_2020}.\nThe noise can be reduced even further by exploiting the temporal correlation between frames, which can be evaluated using the proposed database.\n\n\section{Conclusion}\n\label{sec:conclusion}\nThis paper introduced a novel synthetic hyperspectral video database containing seven scenes.\nAll scenes were rendered using a camera array with nine cameras arranged in a three by three grid.\nThis camera array moves through the scenes for 30 frames.\nSince depth maps are provided for every scene, camera and frame, this database provides a high resolution in the spatial as well as the spectral and temporal dimensions of the corresponding scene.\nTherefore, this database can serve as a validation database for many applications ranging from spectral reconstruction over diverse stereo matching problems like cross-spectral disparity estimation to sensor fusion problems like combining hyperspectral data, RGB images and data from depth sensing devices.\nFinally, two applications were covered in detail by exploiting the temporal dimension of the data.\nThe two proposed novel algorithms outperformed their non-temporal versions.\nFurthermore, this demonstrated that the new database can be used to evaluate algorithms in diverse image processing tasks.\n\n\begin{backmatter}\n \bmsection{Funding} The authors gratefully acknowledge that this work has been supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under project number 491814627.\n 
\bmsection{Disclosures} The authors declare no conflicts of interest.\n \bmsection{Data availability} Data underlying the results presented in this paper are available in Dataset 1, Ref. \cite{hyvid}.\n\end{backmatter}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\section{Introduction}\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n\n\n\n\nA knowledge graph~\cite{hogan2020knowledge} (KG) is an abstraction used in knowledge representation to encode knowledge in one or more domains by representing entities like \texttt{New York City} and \texttt{United States} (i.e., nodes) and binary relationships that connect these entities; for example, \texttt{New York City} and \texttt{United States} are connected by the relationship \texttt{country}, i.e., \texttt{New York City} has \texttt{United States} as a \texttt{country}. Most KGs also contain relationships that connect entities with \textit{literals}, i.e., values from known data structures such as strings, numbers, dates, and so on; for example, a relationship \texttt{settled} that connects \texttt{New York City} and the integer \texttt{1624} describes a property of the entity \texttt{New York City}. \nMore generally, we can view a KG under a dual perspective: as a \textit{directed labeled multi-graph}, where nodes represent entities or literals and labeled edges represent specific relationships between entities or between an entity and a literal, and as a set of \textit{statements}, also referred to as \textit{facts}, having the form of subject-predicate-object triples, e.g., (\texttt{New York City}, \texttt{country}, \texttt{United States}) and (\texttt{New York City}, \texttt{settled}, \texttt{1624}). 
In the following, we will use the notation (h, r, t) (head, relation, tail) to identify a statement in a KG, as is frequent in the literature about KG embeddings.\n\nThe entities described in KGs are commonly organized using a set of \textit{types}, e.g., \texttt{City} and \texttt{Country}, also referred to as concepts, classes or data types (when referring to literals). For example, the statement (\texttt{New York City}, \texttt{type}, \texttt{City}) states that the entity \texttt{New York City} has type \texttt{City}. Indeed, these types are often defined in what is generally referred to as the \textit{ontology}~\cite{ehrlinger2016towards}. An ontology is a formal specification of the meaning of types and relationships expressed as a set of logical constraints and rules, which support automated reasoning.\nFor example, DBpedia~\cite{auer2007dbpedia}, a knowledge graph built upon information extracted from Wikipedia, describes more than 4 million entities and has 3 billion statements\footnote{\url{https:\/\/wiki.dbpedia.org\/about\/facts-figures}}. \n\nWhile KGs can be described using a graph, a simple way to visualize a knowledge graph is to consider it as a third-order adjacency tensor (i.e., a 3-dimensional tensor describing the structure of the KG). Formally, a 3-dimensional adjacency tensor is defined as $T \in \mathbb{R}^{N \times R \times N}$, where $N$ is the number of entities and $R$ is the number of relationships. 
Each dimension of the tensor corresponds to (\texttt{head}, \texttt{relation}, \texttt{tail}), respectively.\n\nMore formally, assume we have a KG $\mathcal{G} = \{ (e_i, r_j, e_k) \} \subseteq \mathcal{E} \times \mathcal{R} \times \mathcal{E}$, where $\mathcal{E}$ and $\mathcal{R}$ denote the sets of entities and relations in the KG, respectively, with $|\mathcal{E}| = N$ and $|\mathcal{R}| = R$.\nThe adjacency tensor $T \in \mathbb{R}^{N \times R \times N}$ is defined as follows:\n\begin{equation*}\n T_{i,j,k} = \begin{cases}\n1 & \text{if } (e_i, r_j, e_k) \in \mathcal{G}, \\ \n0 & \text{otherwise}.\n\end{cases}\n\end{equation*}\nTo visualize this, imagine a simple adjacency matrix that represents a single relation, such as the \texttt{country} relation: the two dimensions of the matrix correspond to the head entity and the tail entity.\nEach entity corresponds to a unique index: given a triple (\texttt{New York City}, \texttt{country}, \texttt{United States}), we have a 1 in the cell of the matrix corresponding to the intersection between the $i$-th row and the $j$-th column, where $i, j \in \mathbb{N}$ are the indices associated with \texttt{New York City} and \texttt{United States}, respectively.\nOn the other hand, any cell in the adjacency matrix corresponding to triples not in the KG contains a 0.\nIf we consider more than one relationship and stack the corresponding matrices together, we obtain a 3-dimensional tensor, generally referred to as the binary tensor representation of a KG.\nSee Figure~\ref{bianchi:kge:tensor} for a simple visualization of this concept. 
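The construction just described can be sketched in a few lines of numpy. The toy triples and index assignments below are chosen purely for illustration:

```python
import numpy as np

# Toy KG: entities and relations are mapped to integer indices.
entities = {"New York City": 0, "United States": 1, "Washington D.C.": 2}
relations = {"country": 0, "capital": 1}
triples = [
    ("New York City", "country", "United States"),
    ("Washington D.C.", "country", "United States"),
    ("Washington D.C.", "capital", "United States"),
]

# Binary adjacency tensor T of shape (N, R, N):
# T[i, j, k] = 1 iff the triple (e_i, r_j, e_k) is in the KG.
N, R = len(entities), len(relations)
T = np.zeros((N, R, N), dtype=np.int8)
for h, r, t in triples:
    T[entities[h], relations[r], entities[t]] = 1

# Fixing the relation index yields the adjacency matrix of one relation,
# e.g., the matrix of the "country" relation used in the text.
country_matrix = T[:, relations["country"], :]
```

Stacking the per-relation matrices along the middle axis in this way gives exactly the binary tensor representation shown in the figure.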
\n\n\begin{figure}[t]\n\includegraphics[width=1\linewidth]{Figs\/bianchi_tensor.pdf}\n\caption{Binary adjacency representation of a KG.}\n\label{bianchi:kge:tensor}\n\end{figure}\n\nThe term ``knowledge graph embeddings'' refers to the generation of vector representations of the elements that form a knowledge graph\footnote{Note that knowledge graph embeddings are different from Graph Neural Networks (GNNs). KG embedding models are in general shallow and linear models and should be distinguished from GNNs~\cite{scarselli2008graph}, which are neural networks that take relational structures as inputs.}. Essentially, what most methods do is create a vector for each entity and each relation; these embeddings are generated in such a way as to capture latent properties of the semantics in the knowledge graph: \textit{similar} entities and \textit{similar} relationships will be represented with \textit{similar} vectors. Figure~\ref{bianchi:kge:embeddings:idea} provides an intuitive example of what a knowledge graph embedding method does. The tensor representation introduced above is frequently used in many KG embedding methods that learn embeddings by applying dimensionality reduction techniques to the tensor.\n\nThe elements are generally represented in a vector space with low dimensionality (ranging from 100 to 1000 dimensions) and one key aspect is given by the notion of similarity: in a vector space, similarity can be interpreted with the use of vector similarity measures (e.g., cosine similarity, in which two vectors are more similar if the angle between them is small).\n\n\begin{figure}[t]\n\includegraphics[width=1\linewidth]{Figs\/bianchi_kge_chapter_embeddings_idea.pdf}\n\caption{Starting from a knowledge graph, embedding methods generate representations of the elements of the knowledge graph that are embedded in a vector space. For example, these representations could be vectors. 
Vectors encode latent properties of the graph and, for example, similar entities tend to be described with similar vectors.}\n\label{bianchi:kge:embeddings:idea}\n\end{figure}\n\nAn important task is to find ways to extend KGs by adding new relationships between entities. This task is generally referred to as link prediction or knowledge graph completion. Adding new facts can be done with the use of logical inference. For example, from a triple (\texttt{Washington D.C.}, \texttt{capital}, \texttt{United States}) we can infer (\texttt{Washington D.C.}, \texttt{country}, \texttt{United States}).\nInferring this last fact comes from background knowledge encoded in an axiom specifying that if a city is the capital of a country, it is also part of that country (e.g., as encoded by a first-order logic rule such as $\forall X, Y: \text{capital}(X, Y) \Rightarrow \text{country}(X, Y)$). Unfortunately, many knowledge graphs have many observed facts and fewer axioms or rules~\cite{trouillon2019inductive}. \n\nKG embeddings can be used for link prediction, since they show interesting predictive abilities and are not directly constrained by logical rules. This property comes at the cost of not being directly interpretable (i.e., the vector representations now encode the latent meaning of the entity\/relationship). The explainability of this prediction is often difficult because the result comes from the combination of latent factors that are embedded in a vector space, and an evaluation of the inductive abilities of these methods is still an open problem~\cite{trouillon2019inductive}.\n\nKnowledge graph embeddings projected in the vector space tend to show interesting latent properties~\cite{DBLP:conf\/pkdd\/MinerviniCMNV17};\nfor example, \textit{similar} entities tend to be close in the vector space. The value of similarity in the latent space is a function that depends on the way knowledge graph embeddings are generated. 
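The notion of closeness in the vector space can be made concrete with cosine similarity. The 4-dimensional embeddings below are made up purely for illustration (real models use hundreds of dimensions):

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine of the angle between two vectors: close to 1 for similar
    directions, close to -1 for opposite ones."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings; the two city entities point in similar
# directions, while the relation vector points elsewhere.
emb = {
    "New York City":   np.array([0.9, 0.1, 0.8, 0.2]),
    "Washington D.C.": np.array([0.8, 0.2, 0.9, 0.1]),
    "country":         np.array([-0.5, 0.7, 0.1, 0.6]),
}

def nearest(name):
    """Rank all other elements by cosine similarity to `name`."""
    return sorted((k for k in emb if k != name),
                  key=lambda k: cosine_sim(emb[name], emb[k]),
                  reverse=True)
```

With these toy vectors, the nearest neighbor of \texttt{New York City} is \texttt{Washington D.C.}, mirroring the intuition that similar entities end up close to each other in the embedding space.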
Similarity is also important from the point of view of explaining meaning. \nFor instance, we might not know the meaning of the entity \texttt{New York City}, but it can be inferred from its topic by looking at the closest entities in the geometric space (e.g., \texttt{Washington D.C.} and \texttt{United States}).\n\n\nThe components of the vectors representing the entities and relations are not explainable themselves, and it can be hard to assign a natural language label that describes the meaning of that component.\nHowever, we can observe how different entities and relationships are related within the graph by analyzing its structure -- which was also used to generate the vector-based representations.\nIn addition, the training is driven by a similarity principle, which can be easily understood. For example, similar entities have similar embedding representations, and the same is true for similar relationships.\nThus, while it is not possible to explain the exact difference between two vectors of two entities, we can refer to this similarity when using the vectors in more complex neural networks that use these vectors and the additional information to enrich the network capabilities.\n\nKnowledge graph embeddings have been used in different contexts including recommendation~\cite{DBLP:conf\/sigir\/HuangZDWC18,DBLP:conf\/www\/WangZXG18,DBLP:conf\/kdd\/ZhangYLXM16}, visual relationship detection~\cite{baier2017improving} and knowledge base completion~\cite{bordes2013translating}. Moreover, knowledge graph embeddings can be used to integrate semantic knowledge inside deep neural networks, thus enriching the explainability of pure black-box neural networks~\cite{lecue2019role,hitzler2019neural}, but they also come with some limitations.\n\nIn this chapter, we describe how to build embedding representations for knowledge graphs and how to evaluate them. 
We discuss related work in the field by mentioning the approaches that improved the state-of-the-art results.\nThen, we focus on knowledge graph embeddings to support explainability, i.e., how knowledge graph embeddings can be adopted to provide explanations, by describing the relevant state-of-the-art approaches. Similarity is also a key factor in the context of explainability; in recommender systems, for example, similarity is a key notion for presenting suggestions to users.\n\n\subsection{Overview of this Chapter}\nThis chapter provides an overview of the field in which we describe how KG embeddings are generated and which are the most influential approaches in the field to date. Moreover, the chapter also describes the possible uses of KG embeddings in the context of explainability. \nIn the recent literature, many approaches for knowledge graph embeddings have been proposed; we summarize the most relevant models by focusing on the key ideas and their impact on the community. \n\nIn Section~\ref{bianchisec:knowledge:embeddings:surv} we give a more detailed overview of how a knowledge graph embedding method can be defined and trained. We will describe TransE~\cite{bordes2013translating}, one of the most popular models, and then we will briefly explain how information that does not come from the knowledge graph can be used to extend the capabilities of the embedding models. This will be a general introduction that should help the reader understand how the methods introduced in the other sections work.\n\nIn Section~\ref{bianchisection:stateart}, we describe the approaches we have selected. We summarize what researchers have experimented with in the field, giving the reader the possibility of exploring different ways of generating knowledge graph embeddings. 
Note that it is difficult to say which model is best for a specific task because evaluation results are greatly influenced by hyper-parameters (see Section~\\ref{bianchi:limits:kge}). Nevertheless, we think that most of the approaches have laid the basis for further development in the field and are thus worth describing. We then describe how knowledge graph embeddings are evaluated, showing that the main task is link prediction and that the datasets used have changed over the years. Link prediction is a task that requires high explainability, something that in the context of knowledge graph embeddings is often missing. In general, ComplEx~\\cite{trouillon2016complex} is often considered one of the best-performing models~\\cite{baier2017improving} and gives stable results in inductive reasoning tasks~\\cite{trouillon2019inductive}.\n\n\nThen, in Section~\\ref{bianchi:ref:sec:explainability}, we focus on explainability. Explainability is a difficult term to define~\\cite{lipton2018interpretability}. Knowledge graph embeddings are not explainable by default, because they are sub-symbolic representations of entities in which latent factors are encoded. Knowledge graph embeddings can be used for link prediction, but the prediction is the result of the combination of latent factors that are not directly interpretable. However, there is recent literature that explores the usage of embeddings in the context of explainable and logical inferences. \n\nWe conclude this chapter in Section~\\ref{sec:conclusions}, where we summarize our main conclusions and describe possible future directions for the field.\n\n\n\\paragraph{Additional Resources} Several works that provide an overview of knowledge graph embeddings have been proposed in the literature.
We point the reader to~\\cite{gesese2019survey}, which contains a well-written survey of approaches that are meant to support the embedding of knowledge graph literals, and to~\\cite{wang2017knowledgesurvey} for another overview of knowledge graph embeddings. As knowledge graph embeddings provide sub-symbolic representations of knowledge, there is increasing interest in finding ways to interpret how these representations interact~\\cite{allen2019understanding}. Inductive capabilities of knowledge graph embedding methods have also been recently evaluated~\\cite{trouillon2019inductive}.\n\n\\section{Knowledge Graph Embeddings}\\label{bianchisec:knowledge:embeddings:surv}\n\n\n\\paragraph{A Short Primer}\nIn this first part, we define the general elements that characterize a knowledge graph embedding method. To better illustrate how knowledge graph embeddings are created, we focus our explanation on one of the seminal approaches of the field, TransE~\\cite{bordes2013translating}. We will introduce how TransE embeddings can be generated and how a method like TransE can be extended to consider information that is not included in the set of triples. While we will describe TransE-specific concepts, most of what is explained in this section is still valid for other methods in the state of the art.\n\nNowadays, a plethora of approaches to generate embedded representations of KGs exists~\\cite{bordes2013translating,nickel2011three,wang2014knowledge,lin2015learning,trouillon2016complex}. In 2011, RESCAL~\\cite{nickel2011three} was the first influential model to create embedded representations of entities and relationships from a KG; it relies on a tensor factorization approach applied to the 3-dimensional tensor generated by considering subject entity, predicate, and object entity as the three dimensions of the tensor.
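As a toy illustration of this tensor view, a small set of triples can be arranged into a 3-dimensional binary adjacency tensor with one slice per relation (a minimal numpy sketch; the entity and relation names are made up for illustration):

```python
import numpy as np

# Hypothetical toy KG: triples stored as (head, relation, tail) index triples.
entities = ["NewYorkCity", "UnitedStates", "WashingtonDC"]
relations = ["country", "capitalOf"]
triples = [(0, 0, 1),   # (NewYorkCity, country, UnitedStates)
           (2, 1, 1)]   # (WashingtonDC, capitalOf, UnitedStates)

# RESCAL-style view: one |E| x |E| adjacency slice per relation;
# X[r, h, t] = 1 iff the triple (h, r, t) is observed in the KG.
X = np.zeros((len(relations), len(entities), len(entities)))
for h, r, t in triples:
    X[r, h, t] = 1.0
```

A factorization model then approximates each slice with low-dimensional entity vectors and a per-relation matrix.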
There are mainly three elements that distinguish a method to generate KG embeddings: (i) the choice of the representations of entities and relationships; in general, vector representations of real numbers are used~\\cite{bordes2013translating,wang2014knowledge}, but there are methods that use matrices to represent relationships~\\cite{nickel2011three} and complex vectors to represent entities and relationships~\\cite{trouillon2016complex}; (ii) the so-called scoring function, which we will refer to as $\\phi$; this function is used to aggregate the information coming from a triple, and is generally referred to as the function that estimates the \\textit{likelihood} of the triple; lastly, (iii) the loss function, which defines the objective being minimized during the training of the knowledge graph embedding model. \n\nChanges in these three elements are what generally makes one model better than another (although, see Section~\\ref{bianchi:limits:kge}, where we explain the impact of different hyperparameters on the comparison). Scoring functions can be extended with many different types of information, such as information coming from images~\\cite{wang2019multimodal} or numerical and relational features~\\cite{garcia2017kblrn}: the entity vector in the scoring function might then be represented by an aggregation of image representations of that entity or, in the case of textual content, by aggregating the information contained in its textual description. At the same time, loss functions can be extended by considering different parameters; e.g., it is possible to extend a loss function by adding regularization terms. \nThe interaction between the entity vectors and the relationship vectors is modulated by the score function, which computes a confidence value for the likelihood of a triple. \n\nThe learning process requires both positive and negative examples as input, while KGs contain only positive information.
In KG embedding, negative examples are generally obtained by generating \\textit{corrupted triples}, i.e., triples that are false. For example, if in a knowledge graph we have the triple (\\texttt{New York City}, \\texttt{country}, \\texttt{United States}), a simple corrupted triple is (\\texttt{United States}, \\texttt{country}, \\texttt{New York City}). While this training procedure has several limitations, different methods have been proposed to optimize the selection of good negative samples. One of the most advanced techniques is KBGAN~\\cite{cai2018kbgan}, which proposes an adversarial method to generate effective negative training examples that can improve the learned representations.\n\n\\paragraph{Making Knowledge Graph Embeddings}\n\nTransE~\\cite{bordes2013translating} uses k-dimensional vectors to represent both entities and relationships; the score function that the authors propose has the form $d(\\mathbf{h} + \\mathbf{r}, \\mathbf{t})$, where the $d$ function can be the L1 or the L2 norm. The driving idea of this score function is that the sum of the subject vector with the predicate vector should generate the vector representation of the object as output (i.e. $\\mathbf{h} + \\mathbf{r} \\approx \\mathbf{t}$); in general, the scoring function can also be defined as $d(\\mathbf{h}+\\mathbf{r},\\mathbf{t}) = \\left\\|\\textbf{h}+ \\textbf{r}-\\textbf{t}\\right\\|$. The loss function defined to learn the representations is instead:\n\\begin{equation*}\n \\mathcal{L} = \\sum_{h,r,t \\in S}\\sum_{h',r,t' \\in S'_{h,r,t}}[\\gamma + d(\\mathbf{h}+\\mathbf{r},\\mathbf{t}) - d(\\mathbf{h'}+\\mathbf{r},\\mathbf{t'})]_{+},\n\\end{equation*}\n\\noindent where $[x]_{+}$ is the positive part of $x$, $\\gamma$ is a margin hyper-parameter, and $S'_{h,r,t}$ is the set of corrupted triples.
$d(\\mathbf{h}+\\mathbf{r},\\mathbf{t})$ is the score of the true triple while $d(\\mathbf{h'}+\\mathbf{r},\\mathbf{t'})$ is the score of the corrupted triple. This loss function favors low values of $d(\\mathbf{h}+\\mathbf{r},\\mathbf{t})$ with respect to the scores of the corrupted triples. It is possible to optimize the representations through the use of gradient-based techniques that are now common in machine learning. Figure~\\ref{bianchi:kge:embeddings:transE} shows how TransE combines entities and relationships in the scoring function. Through the training process, TransE learns vector representations of entities and relationships.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.6\\linewidth]{Figs\/bianchi_kge_transE.pdf}\n\\caption{Example of how TransE represents and models the interactions between entities and relationships in vector space.}\n\\label{bianchi:kge:embeddings:transE}\n\\end{figure}\n\n\\paragraph{Augmenting Knowledge Graph Embeddings} Knowledge graph embeddings can be generated by considering information that is not included in the graph itself. Different methods have been introduced to extend knowledge graph embeddings with information beyond that provided by the knowledge graph triples; we will give a more detailed overview in the next section. Here we describe a method that extends TransE using textual information: adding elements to the score function allows us to include novel information in our representations. \n\nDescription-Embodied Knowledge Representation Learning (DKRL)~\\cite{xie2016representationentity} jointly learns a structure-based representation $h_{s}$ (as in TransE) and a description-based representation $t_{d}$ that can be used in an integrated scoring function, thus combining the information coming from both text and facts.
\nTo extend a model like TransE with additional information, the scoring function can be enriched so that it also optimizes other representations. For example, DKRL uses the following scoring function:\n\\begin{equation*}\n\\left\\|\\textbf{h}_{s}+\\textbf{r}-\\textbf{t}_{s}\\right\\|+\\left\\|\\textbf{h}_{d}+\\textbf{r}-\\textbf{t}_{d}\\right\\| + \\left\\|\\textbf{h}_{s}+\\textbf{r}-\\textbf{t}_{d}\\right\\|+\\left\\|\\textbf{h}_{d}+\\textbf{r}-\\textbf{t}_{s}\\right\\|.\n\\end{equation*}\nOptimizing this joint score function allows us to combine the information coming from both text and triples. In detail, DKRL uses convolutional neural networks to generate description-based representations for the entities. Different kinds of information can be used to extend the embeddings, such as images, logical rules, and textual information. In general, the process to introduce new information relies on the extension of the scoring function. Often, adding more information allows us to extend the capabilities of the model. For example, the use of text-based representations allows us to generate vector representations of entities for which we have a description but that are not present in the KG.\n\n\\section{State-of-the-art Knowledge Graph Embeddings}\\label{bianchisection:stateart}\nIn this section, we review some of the algorithms that have been introduced in the state of the art. Our main objective is to give the reader an overview of the research that has been done until now and of the key points in the knowledge graph embedding field.\n\n\\subsection{Structure-based Embeddings}\\label{bianchisec:structure}\n\nApproaches that focus on the use of knowledge graph facts have also been called \\textit{fact alone} methods by other authors~\\cite{wang2017knowledgesurvey}. Table~\\ref{bianchi:table:scoring:function} shows the different scoring functions that can be used to define different knowledge graph embedding methods.
The two main categories of approaches are the \\emph{translational models} and the \\emph{bilinear models}. Translational models are often based on learning the translations from the head entity to the tail entity (e.g., TransE), while bilinear models often tend to use a multiplicative approach and to represent the relationships as matrices in the vector space. In general, bilinear models obtain good results in link prediction tasks~\\cite{kazemi2018simple}. The main models in this category are RESCAL~\\cite{nickel2011three}, DistMult~\\cite{yang2014embedding}, and ComplEx~\\cite{trouillon2016complex}. \n\n\n\n\n\n\\begin{table}[t]\n \\centering\n \n \\begin{tabular}{ccc}\n \\toprule\n {\\bf Method} & {\\bf Scoring Function} & {\\bf Representation} \\\\ \\midrule\n RESCAL~\\cite{nickel2011three}, 2011 & $\\textbf{h}^\\intercal \\textbf{W}_r \\textbf{t} $ & $\\textbf{h},\\textbf{t} \\in \\mathbb{R}^{d}$, $\\textbf{W}_r \\in \\mathbb{R}^{d \\times d}$ \\\\ \n TransE~\\cite{bordes2013translating}, 2013 & $ - || \\textbf{h} + \\textbf{r} - \\textbf{t}||$ & $\\textbf{h},\\textbf{t},\\textbf{r} \\in \\mathbb{R}^{d}$ \\\\\n DistMult~\\cite{yang2014embedding}, 2014 & $\\langle \\textbf{h},\\textbf{r},\\textbf{t} \\rangle$ & $\\textbf{h},\\textbf{t},\\textbf{r} \\in \\mathbb{R}^{d}$ \\\\\n HolE~\\cite{nickel2016holographic}, 2016 & $\\langle \\textbf{r}, \\textbf{h} \\otimes \\textbf{t} \\rangle $ & $\\textbf{h},\\textbf{t},\\textbf{r} \\in \\mathbb{R}^{d}$ \\\\\n ComplEx~\\cite{trouillon2016complex}, 2016 & $\\text{Re}(\\langle \\textbf{h},\\textbf{r},\\overline{\\textbf{t}} \\rangle)$ & $\\textbf{h},\\textbf{t},\\textbf{r} \\in \\mathbb{C}^{d}$ \\\\\n RotatE~\\cite{sun2019rotate}, 2019 & $ - || \\textbf{h} \\circ \\textbf{r} - \\textbf{t} ||^2 $ & $\\textbf{h},\\textbf{t},\\textbf{r} \\in \\mathbb{C}^{d}$, $|r_i| = 1$ \\\\ \\bottomrule\n \\end{tabular}\n \n \\caption{A short list of knowledge graph embedding approaches with the respective scoring functions and the
representation space used for entities and relationships. Lowercase elements are vectors, while uppercase elements are matrices; $\\otimes$ is the circular correlation. $\\overline{\\textbf{t}}$ denotes the complex conjugate of $\\textbf{t}$ and $\\text{Re}$ denotes the real part of a complex vector. We sampled these approaches by considering the novelty they introduced at the time they were presented.\n Score functions are based on those published in~\\cite{sun2019rotate,balazevic2019tucker}.}\n \\label{bianchi:table:scoring:function}\n\\end{table}\n\n\\paragraph{Translational Models} We have described how TransE behaves in the previous section. Note that TransE does not efficiently learn the representations for 1-to-N relationships in a knowledge graph. This comes from how the scoring function is defined: suppose the existence of the triples (\\texttt{New York City}, \\texttt{locatedIn}, \\texttt{State of New York}) and (\\texttt{New York City}, \\texttt{locatedIn}, \\texttt{United States}). A scoring function consistent with $\\mathbf{s} + \\mathbf{p} \\approx \\mathbf{o}$ would make the entities \\texttt{State of New York} and \\texttt{United States} similar, since the elements $\\mathbf{s}$ and $\\mathbf{p}$ of the formula are fixed. Novel models in the translational group have been introduced to reduce the effect of this problem; we can cite in this category TransH~\\cite{wang2014knowledge} and TransR~\\cite{lin2015learning}. In general, translational models have the advantage of a concise definition and good performance.
In this same category, recent and relevant approaches are RotatE~\\cite{sun2019rotate} and HAKE~\\cite{zhang2019learning}.\n\n\\paragraph{Bilinear Models} RESCAL~\\cite{nickel2011three} is based on the factorization of the tensor (see Figure~\\ref{bianchi:kge:embeddings:idea}) and has high expressive power due to the use of a full-rank matrix for each relationship in the score function $\\textbf{h}^\\intercal \\textbf{W}_r \\textbf{t}$, where the interaction between the elements comes in the form of vector-matrix products. At the same time, the full-rank matrix is prone to overfitting~\\cite{zhang2019learning}, and thus researchers studying bilinear models have added constraints to those representations. Indeed, DistMult~\\cite{yang2014embedding} restricts the matrix $\\textbf{W}_r$ to a diagonal matrix, making no difference between head and tail entities and thus forcing the modeling of symmetric relationships~\\cite{kazemi2018simple,trouillon2019inductive}: $\\phi(h,r,t) = \\phi(t,r,h)$, $\\forall h,t$, which forces symmetry even for anti-symmetric relationships (e.g., \\texttt{country}, \\texttt{hypernym}).\n\nDistMult was later extended by ComplEx, which models the vectors in a complex vector space to better account for anti-symmetric relationships. HolE~\\cite{nickel2016holographic} uses circular correlation, a non-commutative operation between vectors, which allows it to overcome the $\\phi(h,r,t) = \\phi(t,r,h)$ problem of DistMult. Note that it has been proved that HolE and ComplEx are isomorphic~\\cite{hayashi-shimbo-2017-equivalence}. ANALOGY~\\cite{liu2017analogical} is a model that extends the scoring function by considering analogical relationships that exist between entities given the relationships.
\nIn their paper~\\cite{liu2017analogical}, the authors have shown that DistMult, ComplEx and HolE are special cases of ANALOGY.\n\n\\paragraph{Neural Models} Another group with a lower number of proposed approaches consists of neural network-based models. The Neural Tensor Network~\\cite{socher2013reasoning} is an approach for knowledge graph embeddings that uses a score function containing a relationship-dependent tensor multiplication to relate entity embeddings; this type of operation provides some interesting reasoning capabilities and was also used in later approaches as a support for reasoning with neural networks in a neural-symbolic model~\\cite{serafini2016logic}. Instead, ConvE~\\cite{dettmers2018conve} introduces the use of convolutional layers, thus being closer to deep learning approaches. While effective, this method suffers from limited explainability and from the larger variation induced by the number of hyperparameters, which increases with the number of layers~\\cite{sun2019rotate}.\n\n\\paragraph{Recent Approaches}\nWe hereby summarize some recent approaches that have been introduced in the literature and that are relevant with respect to the results they obtained and the ideas that stand behind them.\n\n\\begin{itemize}\n \\item Hierarchy-Aware Knowledge Graph Embedding (HAKE)~\\cite{zhang2019learning} is one of the few models that also consider the fact that elements in the knowledge graph belong to different levels of a hierarchy (e.g., the authors use the triple \\textit{arbor\/cassia\/palm, hypernym, tree} as an example of elements at different levels of the hierarchy). Using polar coordinates, they are able to distribute the hierarchical knowledge inside the representations.\n \\item RotatE~\\cite{sun2019rotate} was introduced to provide a method to effectively represent symmetric properties in knowledge graph embeddings. The authors of this paper propose to use rotation in a complex space to support symmetry and other properties.
In Figure~\\ref{bianchi:kge:embeddings:rotate} we show how rotation can effectively support the definition of relationships that are symmetric; the rotation allows us to interpret symmetry as a geometric property. The authors prove that their model, implemented inside a complex vector space, can capture properties like symmetry, inversion, and composition.\n \\item TuckER~\\cite{balazevic2019tucker} is a recent approach that also uses tensor factorization for knowledge graph embeddings, obtaining good results on the link prediction task.\n \\item Another recent approach applies graph convolutional neural networks to generate knowledge graph embeddings, and this might influence a new way of dealing with knowledge graph structures~\\cite{schlichtkrull2018modeling}.\n \\item Contextualized Knowledge Graph Embeddings~\\cite{gupta2019care} (COKE) is a method that has been inspired by recent results on contextual representations of words~\\cite{peters2018deep}: using transformers~\\cite{vaswani2017attention}, the authors propose to capture the different meanings an entity can assume in different parts of the knowledge graph. For example, the entity Barack Obama is connected to entities related to politics, but also to the entities that represent members of his family, showing two different \\textit{contextual meanings} of the same entity. The main difference between COKE and other models is that it models the representations based on the context and thus, differently from other methods, it provides representations that are not static.\n \\item SimplE~\\cite{kazemi2018simple} extends Canonical Polyadic (CP) tensor decomposition~\\cite{hitchcock1927expression} to provide good embeddings for link prediction. CP performs poorly on link prediction because it learns two independent embeddings for each entity.
SimplE makes use of inverse relationships to jointly learn the two embeddings of each entity.\n \\item Quantum embeddings~\\cite{garg2019quantum} are a novel method to embed entities and relationships in a vector space whose representations are generated following ideas that come from quantum logic axioms~\\cite{birkhoff1936logic}. These embeddings preserve the logical structure and can be used to do both reasoning and link prediction.\n\\end{itemize}\n\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.6\\linewidth]{Figs\/bianch_kge_drawings_rotatE.pdf}\n\\caption{Example of how the use of rotation can support the definition of properties that are symmetric in the vector space. The image is adapted from~\\cite{sun2019rotate}.}\n\\label{bianchi:kge:embeddings:rotate}\n\\end{figure}\n\n\n\n\\subsection{Enhanced Knowledge Graph Embeddings}\\label{bianchisection:enached}\nWhile most of the previous approaches rely mainly on the triples present in the knowledge graph to generate the vector representations, additional (or different) information can be used inside the embeddings to generate vectors that provide a better representation. As noted by~\\cite{wang2017knowledgesurvey}, attributes (like gender) need to be modeled in an efficient way: the attribute \\textit{male} is connected to multiple entities, and thus a model like TransE might not be adequate to treat this issue; in the literature, models have in fact been proposed to better handle these attributes~\\cite{lin2016attributes}.\n\n\n\\paragraph{Path-based Embeddings}\nWhile the most common approaches use a score function that is based on triples, more recent approaches also try to consider the information that comes from a path on the graph~\\cite{lin2015modeling,guu2015traversing}. There are approaches that focus on the use of Recurrent Neural Networks (RNNs) to tackle the task of multi-hop predictions~\\cite{yin2018recurrent,das2017chains}.
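To make the idea of path information concrete, in translation-based variants a multi-hop path can be scored by composing the relation vectors along it (a minimal sketch; the random vectors stand in for learned embeddings, and all entity and relation names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4
# Hypothetical embeddings; in practice these would be learned, not random.
e = {"NYC": rng.normal(size=dim),
     "NY_State": rng.normal(size=dim),
     "USA": rng.normal(size=dim)}
r = {"locatedIn": rng.normal(size=dim)}

def path_score(head, rels, tail):
    """TransE-style composition: h + r1 + ... + rn should land near t."""
    v = e[head] + sum(r[x] for x in rels)
    return -np.linalg.norm(v - e[tail])

# Score the 2-hop path NYC -locatedIn-> NY_State -locatedIn-> USA
s = path_score("NYC", ["locatedIn", "locatedIn"], "USA")
```

A higher (less negative) score indicates that the composed translation better explains the connection between the two endpoint entities.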
\n\n\\paragraph{Distributional Embeddings} An alternative approach to generating embeddings comes from the computational linguistics field and is represented by those models that view language from a distributional perspective, in which the meaning of words can be extracted from how those words are used in the language. Word2vec~\\cite{mikolov2013distributed} is a model that embeds words in the vector space by placing words that appear in similar contexts in close positions of the vector space. In the same way, it is possible to generate embeddings of the entities of a knowledge graph with the word2vec algorithm~\\cite{mikolov2013distributed} by exploiting user-made links on Wikipedia~\\cite{basile2016learning} or entity linking~\\cite{bianchi2018towards}. For example, Wiki2vec\\footnote{\\url{https:\/\/github.com\/idio\/wiki2vec}} uses word2vec over Wikipedia text and generates the representations for both entities (by looking at link co-occurrence) and words. TEE~\\cite{bianchi2018towards} proposes to use entity linking to first disambiguate text and generate sequences of entities, and then use the knowledge graph to replace the sequences of entities with sequences of their most specific types; using word2vec, one can generate entity and type embeddings based on the distribution in text. Methods that are based on entity linking suffer from low coverage, caused by the entity linking quality. In general, these models do not provide a direct way to embed relationships. Another prominent model in this category is RDF2Vec~\\cite{ristoski2018rdf2vec}: it uses an approach that combines techniques from the word embeddings community with knowledge graphs. It generates embeddings of entities and relationships by first creating a virtual document that contains lexicalized walks over the graph and then applying a word embedding algorithm to the virtual document to create the representations.
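The virtual-document idea behind walk-based methods can be sketched as follows (a toy example with hypothetical triples; a real implementation would generate many walks per entity and feed the resulting sentences to a word2vec trainer):

```python
import random

# Toy triple store with hypothetical entities and relations.
triples = [("NYC", "locatedIn", "NY_State"),
           ("NY_State", "locatedIn", "USA"),
           ("NYC", "country", "USA")]
out = {}
for h, rel, t in triples:
    out.setdefault(h, []).append((rel, t))

def random_walk(start, hops, rng):
    """Lexicalized walk: alternating entity and relation tokens."""
    walk, node = [start], start
    for _ in range(hops):
        if node not in out:
            break  # dead end: no outgoing edges
        rel, node = rng.choice(out[node])
        walk += [rel, node]
    return walk

rng = random.Random(42)
# One "sentence" per entity; together they form the virtual document.
document = [" ".join(random_walk(e, 2, rng)) for e in out]
```

Because relation labels appear as tokens in the walks, this style of method yields vectors for relations as well as entities, unlike the purely text-based variants discussed above.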
\n\n\\paragraph{Text-Enhanced Embeddings} There also exists a variety of models that make use of textual information~\\cite{wang2016text,fang2016entity,wang2014knowledge2,xie2016representationentity,xiao2017ssp,Jameel2016EntityEW,an2018accurate} to enhance the performance of knowledge graph embedding techniques. These pre-trained representations can be used to initialize knowledge graph embeddings and to generate representations that can, in some cases, outperform other baselines~\\cite{yang2014embedding}. As stated in the previous section, the use of textual information can be useful to generate the representations of entities even when they are not present in the knowledge base. For example, Text-enhanced Knowledge Embedding~\\cite{wang2016text} (TEKE) focuses on Wikipedia inner links, replaces them with Freebase entities, and then constructs a co-occurrence network of entities and words in the text; eventually, this information is used to enrich the contextual representation of the elements of the knowledge graph. Jointly~\\cite{wang2014knowledge2} is an embedding method in which textual knowledge is used to enrich the representation of entities and relationships. In this work, both entities and words are aligned into a common vector space; vectors associated with words and entities that represent a common concept are then forced to be closer in the vector space by combining different loss functions. Description-Embodied Knowledge Representation Learning (DKRL)~\\cite{xie2016representationentity} includes the description of the entities in the representation. DKRL uses a convolutional layer to encode the description of the entity into a vector representation and uses this representation in the loss function. Word vectors coming from the entity description can be initialized with word2vec embeddings.
The model learns two representations for each entity, one that is structure-based (i.e., like TransE) and one that is based on the descriptions. One key advantage of DKRL~\\cite{xie2016representationentity} is that it offers the possibility of doing zero-shot learning of entities by using the descriptions of the entities themselves.\n\n\\paragraph{Image Enhanced Embeddings} \nImage-embodied Knowledge Representation Learning~\\cite{xie2017image} (IKRL) provides a method to integrate images inside the scoring function of the knowledge graph embedding model. Essentially, IKRL uses multiple images for each entity and applies the AlexNet convolutional neural network~\\cite{krizhevsky2012imagenet} to generate representations for the images; these representations are then selected and combined with the use of attention and finally projected into the entity space, generating an image-specific representation for each entity. \nRecently, approaches that exploit \\textit{multi-modal learning} on knowledge graph embeddings, combining image features and other information, have also been introduced in the state of the art~\\cite{wang2019multimodal,liu2019mmkg}. \n\n\\paragraph{Logic Enhanced Embeddings} There are approaches that account for the combination of logic and facts~\\cite{wang2016learning,guo2016jointly,guo2018knowledge,rocktaschel2015injecting} for knowledge representation. KALE is a model that combines facts and rules using fuzzy logic~\\cite{guo2016jointly}.
There are other approaches that try to embed knowledge graphs while keeping the logical structure consistent; we mentioned embeddings based on quantum axioms in Section~\\ref{bianchisec:structure}, but there are other methods that start with the objective of doing logical reasoning over embedded representations~\\cite{serafini2016logic,rocktaschel2017end} (we will present more details of these approaches in Section~\\ref{bianchi:ref:sec:explainability}, where we discuss explainability).\n\nResearchers have shown that it is possible to combine facts and first-order formulae using a joint optimization process.\nIn \\cite{DBLP:conf\/naacl\/RocktaschelSR15}, the authors propose a general approach for incorporating first-order logic formulae in embedding representations.\nDuring training, their approach samples sets of entities, and jointly minimizes the negative likelihood of the data and a loss function measuring to what extent the model violates the given rules with respect to the sampled entities.\nA shortcoming of this approach is that it relies on a sampling procedure, and it provides no guarantee that the model will still produce predictions that are consistent with the logic rules for entities that were not observed during training.\nTo overcome this shortcoming, in \\cite{DBLP:conf\/pkdd\/MinerviniCMNV17} the authors incorporate equivalency and inversion axioms between relations by only regularizing the relation representations during the training process, where the shape of the regularizers is derived from the axioms and the model formulation.\nA similar idea is followed by \\cite{DBLP:conf\/emnlp\/DemeesterRR16} for incorporating simple implications between two relations.\nIn \\cite{DBLP:conf\/uai\/MinerviniDRR17}, the authors propose using adversarial training for incorporating general first-order logic rules in entity and relation representations: during training, an adversary searches for entities where the model violates the given constraints, and the model is
regularized in order to correct such violations.\nEntities can be searched either in the entity space or in the entity embedding space; in the latter case, the problem of finding the entity embeddings where the model maximally violates the logic rules can be efficiently solved via gradient-based optimization.\n\n\n\\paragraph{Schema-Aware Embeddings}\nFew models in the state of the art focus on the differences between instances (i.e., entities) of a knowledge graph and concepts (like \\texttt{Country}, \\texttt{City} and \\texttt{Place})~\\cite{lv2018differentiating}.\nSchema rules can be useful to define constraints over score predictions. For example, they have been used to learn predicate-specific parameters that decrease, in an adaptive way, the score of relationships that might conflict with schema rules~\\cite{DBLP:conf\/sac\/MinervinidFE16}.\n\nTransC~\\cite{lv2018differentiating} proposes an interesting representation for concepts, in which each concept is represented as a sphere and each entity is a vector. An instance-of relationship can be easily verified by checking whether the entity vector is contained inside the sphere. In one of the previous sections, we mentioned HAKE (Hierarchy-Aware Knowledge Embeddings)~\\cite{zhang2019learning} as a recent method that considers the hierarchical topology in the embedding. This aspect is also important in the context of explainability: modeling ontologies is a needed step to learn how to model logical reasoning and provide justifiable inferences; however, not all methods are capable of modeling rules~\\cite{gutierrez2018knowledge}.\n\nThere are also approaches that consider the fact that the ontology can be used to provide better representations, for example, Type-embodied Knowledge Representation Learning (TKRL)~\\cite{xie2016representationtypes}.
Given a triple $h,r,t$, the subject $\\textbf{h}$ and the object $\\textbf{t}$ are projected to the type spaces of this relation as $\\textbf{h}_{r}$ and $\\textbf{t}_{r}$; the projection matrices are type-specific. TKRL optimizes the following scoring function: $||\\textbf{h}_{r} + \\textbf{r} - \\textbf{t}_{r}||$. In this group we also include TRESCAL~\\cite{chang2014typed}, an extension of RESCAL~\\cite{nickel2011three} that considers types in the tensor decomposition. On the other hand, there exist approaches that generate the representations of ontology concepts by taking into consideration the co-occurrence of types in text~\\cite{bianchi2018towards}. \n\n\\paragraph{Hyperbolic Embeddings}\nMany approaches in the state of the art rely on the use of representations in the Euclidean space. However, when dealing with the representations of tree-like structures (e.g., some ontologies can be interpreted as trees), Euclidean spaces have to rely on many dimensions and are not well suited to representing trees. Euclidean geometry relies on Euclid's parallel axiom, but there exist other geometries that do not assume it. Hyperbolic geometries allow us to use hyperbolic planes where trees can be effectively encoded. These approaches have now been widely used to represent tree-like structures~\\cite{nickel2017poincare,suzuki2019hyperbolic,sala2018representations} and have received recognition in natural language processing~\\cite{le2019inferring,tifrea2018poincare,vimercati2019mapping}. In general, these approaches have been applied to ontological trees (e.g., the WordNet hierarchy) and cannot account for knowledge graph structures that are more complex. Recently, embedding in the hyperbolic plane has been shown to be effective also for knowledge graphs~\\cite{balazevic2019multi,kolyvakis2019hyperkg}, since it can provide better ways to model topological structures~\\cite{kolyvakis2019hyperkg}.
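As a concrete illustration of why hyperbolic space leaves room for trees, the geodesic distance in the Poincaré ball grows rapidly toward the boundary (a minimal sketch using the standard Poincaré-ball distance formula; the example points are arbitrary):

```python
import numpy as np

def poincare_distance(u, v):
    """Geodesic distance in the Poincare ball (requires ||u||, ||v|| < 1)."""
    sq_diff = np.sum((u - v) ** 2)
    denom = (1 - np.sum(u ** 2)) * (1 - np.sum(v ** 2))
    return np.arccosh(1 + 2 * sq_diff / denom)

origin = np.zeros(2)
mid = np.array([0.5, 0.0])
near_edge = np.array([0.95, 0.0])
# The second step is shorter in Euclidean terms (0.45 vs 0.5) but covers
# a larger hyperbolic distance: space "expands" toward the boundary.
d1 = poincare_distance(origin, mid)
d2 = poincare_distance(mid, near_edge)
```

This expansion is what lets the exponentially growing number of nodes at each tree level be placed without crowding, even in two dimensions.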
\n\n\\paragraph{Temporal Knowledge Graph Embeddings} There are also approaches that are meant to account for temporality in knowledge graph embeddings by considering temporal link prediction (i.e., considering that some predicates, like \\textit{president of}, have values that change over time) and to study the evolution of knowledge graphs over time~\\cite{jiang2016encoding,esteban2016predicting,garcia2018learning}. For example, recurrent neural networks can be used to learn time-aware relation representations~\\cite{garcia2018learning}.\n\n\\subsection{Evaluation and Replication}\\label{bianchi:evaluation}\nEvaluation in knowledge graph embeddings is often based on link prediction. In general, the link prediction task can be defined as the task of finding an entity that can be used to complete the triple $(h,r,?)$; for example, (\\texttt{New York City}, \\texttt{country}, \\texttt{?}), where \\texttt{?} is United States. To compute the answer for the incomplete triple, the score function is generally used to estimate the \\textit{likelihood} of the entities. The procedure is the following: for each triple to test, we remove the head and we compute the value of the score function for each of the entities that we have in the dataset and we rank them from highest to lowest. Then we collect the rank of the correct entity. The same is done by replacing the tail of the triple. At the end, the mean of the reciprocal ranks is computed; this measure is called the Mean Reciprocal Rank (MRR). Another measure that is often used in the link prediction setting is the HITS@K (with K commonly in $\\{1,3,10\\}$). \n\n\\cite{bordes2013translating} uses a \\textit{filtering} setting that has become a standard in the evaluation. 
The evaluation of the MRR is influenced by the fact that some \\textit{correct} triples share entity and relationship (e.g., (\\texttt{United States}, \\texttt{countryOf}, \\texttt{?}) is true for multiple triples) and they can be ranked one over the other in the ranking list, thus biasing the results. What is typically done when computing the MRR for a triple in this setting is to filter out the other triples that are true and that are present in the training\/validation\/test set. \n\nFB15k~\\cite{bordes2013translating} is a subset of Freebase while WN18~\\cite{bordes2013translating} is a WordNet subset. FB15k and WN18 were both introduced in~\\cite{bordes2013translating} and originally come with a training, validation and test split.\n\nThe quality of these two datasets has been questioned in more recent work~\\cite{toutanova2015observed,dettmers2018conve}. FB15k originally contained triples in the test set that are the inverse of those present in the training set, for example \\texttt{\/award\/award\\_nominee} and \\texttt{\/award\\_nominee\/award}. While those links are not false, they could bias the results by making the task easier for learning models (i.e., models can just learn that one relationship is the inverse of the other~\\cite{toutanova2015observed}, and models that force symmetry, like DistMult, could perform better just because of the dataset used). The same problem was found in WN18~\\cite{dettmers2018conve}. This led researchers to introduce two novel datasets, subsets of the original ones, that do not contain easy-to-solve cases. FB15k-237 has been introduced by~\\cite{toutanova2015observed} and WN18RR was introduced by~\\cite{dettmers2018conve}; they are subsets of FB15k and WN18, respectively. 
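The filtered ranking protocol described above can be sketched in a few lines; the lookup-table score function and the toy triples below are illustrative assumptions, not the output of any particular embedding model.

```python
def filtered_tail_rank(score, h, r, t_true, entities, known_true):
    """Rank the correct tail among all candidate entities, skipping other
    triples known to be true (the 'filtered' setting of Bordes et al.)."""
    s_true = score(h, r, t_true)
    rank = 1
    for t in entities:
        if t == t_true or (h, r, t) in known_true:
            continue  # filter out other correct answers
        if score(h, r, t) > s_true:
            rank += 1
    return rank

# Toy scores: 'boston' outranks the gold answer 'nyc', but since
# (usa, countryOf, boston) is itself a true triple, the filtered
# setting ignores it when ranking.
table = {("usa", "countryOf", "nyc"): 0.90,
         ("usa", "countryOf", "paris"): 0.40,
         ("usa", "countryOf", "boston"): 0.95}
score = lambda h, r, t: table.get((h, r, t), 0.0)
entities = ["nyc", "paris", "boston"]

raw = filtered_tail_rank(score, "usa", "countryOf", "nyc", entities, set())
filtered = filtered_tail_rank(score, "usa", "countryOf", "nyc", entities,
                              {("usa", "countryOf", "boston")})
mrr_filtered = 1.0 / filtered
hits_at_1 = 1 if filtered <= 1 else 0
```

In a full evaluation the same computation is repeated for the head, averaged over all test triples, and Hits@K is the fraction of ranks not larger than K.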
Recall that the DistMult model enforces symmetry between relationships.\n \nYAGO3-10~\\cite{mahdisoltani2014yago3,dettmers2018conve} has recently become quite popular; it contains a subset of the YAGO knowledge graph that consists of entities that have more than 10 relationships each. As noted by~\\cite{dettmers2018conve}, the triples in this dataset account for descriptive attributes of people (e.g., citizenship, gender, and profession). Another important dataset is Countries~\\cite{bouchard2015approximate}, which is often used to evaluate how well knowledge graph embeddings learn long-term logical dependencies. Note that while, in general, the datasets used are the ones we described, some papers introduce new datasets when needed. For example, a subset of the YAGO dataset (namely YAGO39K) has been used to evaluate TransC, a model that extends embeddings with the use of concepts~\\cite{lv2018differentiating}.\n\nIn Table~\\ref{bianchi:tab:dataset:sizes} we show numerical data related to these datasets. 
It is important to notice that these datasets are small with respect to the size of knowledge graphs (e.g., DBpedia has more than 4 million entities).\n\n\n\\begin{table}[t]\n \\centering\n \\begin{tabular}{cccccc}\n \\toprule\n \\textbf{Dataset} & \\textbf{\\# Entities} & \\textbf{\\# Relations} & \\textbf{Train} & \\textbf{Validation} & \\textbf{Test} \\\\\n \\midrule\n FB15k & 14,951 & 1,345 & 483,142 & 50,000 & 59,071 \\\\\n FB15k-237 & 14,505 & 237 & 272,115 & 17,535 & 20,466 \\\\\n WN18 & 40,943 & 18 & 141,442 & 5,000 & 5,000 \\\\ \n WN18RR & 40,943 & 11 & 86,835 & 2,824 & 2,924 \\\\\n YAGO3-10 & 123,182 & 37 & 1,079,040 & 5,000 & 5,000 \\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Number of entities, relationships and training, validation, test triples for the main datasets used in the state-of-the-art.}\n \\label{bianchi:tab:dataset:sizes}\n\\end{table}\n\nLink prediction is not the only task on which knowledge graph embeddings are evaluated; often the evaluation takes into account the task of triple classification, which is the task of verifying if a triple is true or false (i.e., it is a binary classification task over input triples).\n\n\\subsection{Open-source Projects on Knowledge Graph Embeddings}\nMany approaches in the literature share code to reproduce the results in the paper, but often the code is written in different languages and does not allow efficient comparison between methods and extensions of the methods. However, there are now some libraries that can be used to replicate the results of different knowledge graph embedding methods. We cite three currently active repositories that are popularly used. OpenKE\\footnote{\\url{https:\/\/github.com\/thunlp\/OpenKE}}~\\cite{han2018openke} has a main repository that contains PyTorch code, but the authors made the code available also in TensorFlow. 
Ampligraph\\footnote{\\url{https:\/\/github.com\/Accenture\/AmpliGraph}}~\\cite{costabello2019ampligraph} is a TensorFlow library that introduces high-level APIs to generate embeddings. Finally, PyTorch BigGraph is another interesting library for knowledge graph embeddings that has been recently introduced by Facebook and can scale to billions of entities\\footnote{\\url{https:\/\/github.com\/facebookresearch\/PyTorch-BigGraph}}~\\cite{lerer2019biggraph}.\n\n\n\\subsection{Limitations of Knowledge Graph Embeddings}\\label{bianchi:limits:kge}\n\nDifferent methods have been introduced in the literature and all come with different training methods (e.g., different optimizers, different loss functions, different strategies for sampling negatives), making the comparison between different methods often difficult and in general not directly possible. \n\nThe first hints of these limitations were outlined in 2017, when a work showed that most of the approaches introduced until then could be outperformed by the use of a simple well-tuned DistMult model~\\cite{kadlec2017knowledge}; the other two competitive models were ComplEx~\\cite{trouillon2016complex} and HolE~\\cite{nickel2016holographic}. As stated by the authors, there is the need to focus on different measures for the evaluation of knowledge graph embeddings\\footnote{Note that the work considered experiments over FB15k and WN18.} and for the intensive study of how hyperparameters are selected. Results are sometimes more influenced by the number of training epochs than by actual model complexity.\n\nRecent work~\\cite{calibration} shows that KGE models for link prediction are uncalibrated. This is problematic especially for triple classification tasks where users must define relation-specific thresholds, which can be difficult when working with a large number of relation types. \nMoreover, calibrated probabilities are crucial to provide trustworthy and interpretable decisions (e.g., drug-target discovery scenarios). 
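As an illustration of score calibration, the following sketch fits an isotonic (non-decreasing) map from raw scores to probabilities with the pool-adjacent-violators algorithm; the held-out scores and labels are made up for the example, and this is not the exact procedure of~\cite{calibration}.

```python
def pav_calibrate(scores, labels):
    """Isotonic regression via Pool-Adjacent-Violators: fit a non-decreasing
    map from raw KGE scores to probabilities on held-out labelled triples
    (label 1 = true triple, 0 = corrupted/false triple)."""
    pairs = sorted(zip(scores, labels))
    merged = []  # blocks of [label_sum, count, upper_score_bound]
    for s, y in pairs:
        merged.append([y, 1, s])
        # merge adjacent blocks while monotonicity is violated
        while len(merged) > 1 and merged[-2][0] / merged[-2][1] > merged[-1][0] / merged[-1][1]:
            y2, n2, s2 = merged.pop()
            merged[-1][0] += y2
            merged[-1][1] += n2
            merged[-1][2] = s2
    return [(b[2], b[0] / b[1]) for b in merged]  # (score bound, probability)

def calibrated_probability(calib, score):
    """Look up the step function fitted by pav_calibrate."""
    for bound, p in calib:
        if score <= bound:
            return p
    return calib[-1][1]

# Held-out triples: two false (label 0) and two true (label 1).
calib = pav_calibrate([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1])
```

The fitted map is monotone by construction, so a higher raw score never yields a lower calibrated probability; Platt scaling would instead fit a parametric sigmoid to the same held-out data.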
The authors propose heuristics that adopt Platt scaling or isotonic regression to calibrate KGE models even without ground truth negatives.\n\nA very recent paper~\\cite{anonymous2020you} has provided new evidence of the limitations of the evaluation of knowledge graph embedding approaches. The authors found that the results of the approaches vary significantly across studies and that they are very much dependent on experimental settings including hyperparameters and loss functions. The main result of this paper is that the conclusions drawn in different papers probably need to be revised in light of the results. Note that the paper addresses only structure-based embeddings (which they refer to as pure knowledge graph embeddings), but since many of the enhanced models are based on knowledge graph embeddings, the conclusions drawn from them should also be revised. This paper suggests the lack of a common ground of comparison for the embeddings, which was already hinted at by the need to update the evaluation datasets (see Section~\\ref{bianchi:evaluation}, where we explained the limitations of some of the state-of-the-art datasets). The same authors propose LibKGE\\footnote{\\url{https:\/\/github.com\/uma-pi1\/kge}}, an open-source library for reproducible research on knowledge graph embeddings that might become useful in providing more robust results to the community.\n\n\\section{Knowledge Graph Embeddings and Explainability}\\label{bianchi:ref:sec:explainability}\n\nWhile explainability is a widely used term and its general meaning is intuitive, there is no agreed definition about what explainability in machine learning is~\\cite{lipton2018interpretability}. Explainability in the context of knowledge graphs has recently been outlined by~\\cite{lecue2019role,hitzler2019neural}. 
In relation to knowledge graph embeddings, explainability has a difficult interpretation: while knowledge graphs are open and in general explainable in terms of direct relationships with other entities, knowledge graph embeddings are often referred to as sub-symbolic, since they represent elements in the vector space, thus losing the original interpretability that comes from logic. The difficulty of mapping vector space representations with logic has been outlined in different works~\\cite{gutierrez2018knowledge,kazemi2018simple}; in general, some rules are impossible to learn with knowledge graph embeddings (e.g., as described by~\\cite{gutierrez2018knowledge}, DistMult can only model a restricted class of subsumption hierarchies). Moreover, explainability passes through the definition of methods that support logical reasoning, since logic offers a paradigm that supports reasoning and its inferences are justifiable and verifiable using logical axioms.\n\nThe problem in part originates from the fact that there is no agreed view upon how to measure how explainable a system is; the quest of explainable artificial intelligence remains \\textit{how to build intelligent systems able to expose explanation in a human-comprehensible way}~\\cite{lecue2019role}. \n\nExplainability in knowledge graph embeddings is also important because these latent representations are affected by bias, and social biases have to be taken into consideration when using embeddings for prediction. In fact, as word embeddings show stereotypical biases in their representations, evidence of bias in knowledge graph embeddings has been found~\\cite{fisher2019measuring}: males are more likely to be bakers while females are more likely to be home-keepers; this fact could greatly bias the link prediction of novel relationships; think, for example, of a link prediction system that predicts the most suitable person for a job. 
Explainability is important in the context of KG embedding because we need to be able to explain these inferences. The same requirement holds for methods that study drug effects~\\cite{malone2018knowledge}.\n\nWhat is generally missing is a methodology to effectively explain the predictions of knowledge graph embeddings. From their introduction in the state-of-the-art until now, embedding methods have been mainly evaluated and compared by considering only the accuracy on link prediction tasks. As already outlined in the literature~\\cite{pezeshkpour2019investigating}, studies should also be conducted to evaluate the interpretability and the reason why link prediction is feasible in knowledge graphs. For example, Completion Robustness and Interpretability via Adversarial Graph Edits (CRIAGE)~\\cite{pezeshkpour2019investigating} explores the robustness of the approaches by examining how adding and removing facts affects the general performance of the models. CRIAGE can estimate the effect of those modifications and how they influence the predictions; moreover, it can be used to evaluate the sensitivity of the models towards the addition of fake facts. CRIAGE~\\cite{pezeshkpour2019investigating} can indeed be used to understand and explain knowledge graph embedding predictions and explore the limitations and the advantages of different models. In this context, it is worth citing the closely related GNNExplainer \\cite{ying2019gnnexplainer}, which was introduced for graph neural networks. GNNExplainer is the first work that provides an approach to make sense of the predictions of a graph neural network: it can be used to identify the most important parts and features of the graph neural network that influence the prediction of a particular instance (e.g., new link, new node label). 
While this model has been applied to graph neural networks, it might be possible to adapt it to knowledge graph embeddings.\n\n\\cite{lecue2019role} provides an overview of the challenges, the approaches and the limitations of Explainable Artificial Intelligence in different fields, such as machine learning, planning, natural language processing, computer vision, etc. In particular, the author focuses on how knowledge graphs could be used to support explanations in order to overcome the limitations in each field. \n\nAn advantage of knowledge graph generated representations, with respect to standard representations generated by deep learning algorithms, is that they come with a prior meaning: each entity vector has a connection with the knowledge graph from which it originates, even if the representation is sub-symbolic. Differently from words~\\cite{mikolov2013distributed}, knowledge graph embedding representations do not suffer from inherent ambiguity and can thus be used more effectively to model reasoning and explainable systems. In contrast to ``pure words'' in sentences, knowledge graph embeddings are not ambiguous; this fact can also help in the context of explainability, since it is favorable to provide explanations on something that is not ambiguous and that is linked to a knowledge base.\n\nA key combination can come from the usage of knowledge graph embeddings with logical rules, which can provide justification and explainability over inferences. As stated by \\cite{lecue2019role}, knowledge graphs could provide a semantic layer to support tasks like question answering that are generally tackled with brute-force approaches on text. Knowledge graphs can provide generalization capabilities using logic as the source of the generalization: the KG representations can, in fact, be used as sources of inputs to deep learning algorithms and can be used to bridge two worlds that are apart. 
Knowledge graph representations are linked to knowledge graphs and are thus connected to a source that has explicit connections.\n\nIn fact, there has been a recent spike in the interest for knowledge graph embeddings used inside recommender systems to enhance the performance and the explainability of recommendation \\cite{DBLP:conf\/sigir\/HuangZDWC18,DBLP:conf\/www\/WangZXG18,DBLP:conf\/kdd\/ZhangYLXM16}. Deep Knowledge-Aware Network~\\cite{DBLP:conf\/www\/WangZXG18} is a deep network that is used to include external knowledge, through the use of entity embeddings, inside a news recommendation system; the idea behind this model is to use the information in the knowledge graph to recommend to users news items that have a high probability of being clicked. Instead of focusing on word-occurrence-based methods, like topic models, the proposed model searches for more latent factors to use in the recommendation through the use of embeddings. Instead, other researchers have combined embeddings with recurrent neural networks to account for the recommendation of items based on sequences of user interactions~\\cite{DBLP:conf\/sigir\/HuangZDWC18}.\n\nMany methods for recommendation have limitations regarding the explainability when multi-hop reasoning is required. In an attempt to address this shortcoming, a Knowledge-aware Path Recurrent Network (KPRN) is proposed in \\cite{DBLP:conf\/aaai\/WangWX00C19}. KPRN models the sequential dependencies that connect users and items by also considering the entities and the relationships in between. The running example in the paper is as follows: if (\\texttt{Alice}, \\texttt{Interact}, \\texttt{Shape of You}) \\& (\\texttt{Shape of You}, \\texttt{SungBy}, \\texttt{Ed Sheeran}) \\& (\\texttt{Ed Sheeran}, \\texttt{IsSingerOf}, \\texttt{I See Fire}) then (\\texttt{Alice}, \\texttt{Interact}, \\texttt{I See Fire}). LSTMs are used to model the sequences of entities and relationships and to predict a recommendation. 
The embedding of entities and relationships is similar to the path-based embeddings introduced in Section~\\ref{bianchisection:enached}.\n\nAt the same time, the field of conversational agents has also taken into consideration the use of knowledge graph embeddings for explainable conversations~\\cite{moon2019opendialkg}. OpenDialKG~\\cite{moon2019opendialkg} is a corpus in which there is a parallel alignment between the knowledge graph and the dialogues. The authors of the paper also propose an attention-based model that can learn knowledge paths from entities mentioned in the dialog contexts and predict novel entities that are relevant to the contexts of the dialog: paths provide explanations for the entities used in a reply to a dialog. Initialization of the model is done through the use of knowledge graph embeddings. \n\nThese last models are close to what has been outlined at the start of this section~\\cite{lecue2019role,hitzler2019neural}: knowledge graphs can provide a semantic and explainable layer (i.e., for conversational agents and for recommendation) that can be useful not only to simply solve tasks but also to provide an effective way to interpret the black-box answers given by neural models.\n\n\\subsection{Knowledge Graph Embeddings and Logical Reasoning}\nLogic is the main explanation paradigm for KGs and one important aspect we want KG embeddings to cover is how to account for axiomatic knowledge inside embeddings.\nThrough a standard KG embedding model it is possible to perform several downstream tasks, such as triple validation or subject, object and relationship prediction.\nKnowledge graph embeddings model relational structure in the form of elements in a low-dimensional vector space. While methods were originally introduced to solve link prediction tasks, more recently many researchers have studied and explored the logical properties of knowledge graph embedding methods. 
The question that they try to answer is how to effectively model logical knowledge inside a vector space.\n\nIn fact, a recent trend in the literature is to propose embedding models that can effectively model some specific logical axioms inside the vector space. This attempt is related to the observation that pure translational approaches like TransE~\\cite{bordes2013translating} are not capable of modeling symmetry in the relationships due to the choice of the score function. \n\nComplEx~\\cite{trouillon2016complex} was proposed as an extension of the DistMult approach in the complex space, in which it is easier to model properties of relationships like anti-symmetry (remember that DistMult forces symmetry between the relationships). \nInstead, as introduced above, RotatE~\\cite{sun2019rotate} uses rotations in the complex plane to capture properties like symmetry, antisymmetry, inversion, and composition. In fact, RotatE models each relationship as a rotation from the subject vector to the object vector in the complex plane. Still, a drawback of the approaches that are based on complex Euclidean geometry is that they require a large number of parameters to train.\n\nA promising direction is how to perform complex logical queries using KGE models. For instance, a query ``Predict communities C? in which user u is likely to upvote a post'' might be expressed as $C?. \\exists P : upvote(u, P) \\wedge belong(P, C?)$.\nIn \\cite{DBLP:conf\/nips\/HamiltonBZJL18} the authors propose a method to map and execute conjunctive queries in a vector space represented by KG embeddings, which was further extended by \\cite{DBLP:journals\/corr\/abs-2002-05969} to support disjunctions.\nRecent work such as QUERY2BOX~\\cite{Query2box} goes as far as proposing a hybrid query processing framework. The authors propose a KGE-based query engine that addresses both conjunctive and disjunctive queries by modeling queries as bounding boxes in the embedding space. 
Besides its intrinsic interpretability, grounded in the first-order logical queries it supports, this multi-hop reasoning framework shows how the interplay of KG embeddings and logical queries overcomes missing information in the graph when delivering an answer.\n\nThere are approaches that try to combine sub-symbolic representations with reasoning systems: Logic Tensor Networks~\\cite{serafini2016logic}, for example, allow us to define a differentiable fuzzy logic language over data. Essentially, Logic Tensor Networks (LTN) create the representations for logical constants, functions, and predicates by embedding those in a vector space. While Logic Tensor Networks were not used directly to create knowledge graph embeddings, they have been used with good results on semantic image interpretation tasks~\\cite{donadello2017logic,donadello2019compensating}. Integrating embedding approaches with logical reasoning can account for more complex inferences: combining similarity with logical inferences can lead to interesting results in the field; for example, it is possible to use embedding similarity to extend reasoning to unknown entities. Indeed, \\cite{bianchi2019complementing} shows that combining entity embeddings with logical systems like LTNs can be useful to make inferences that are impossible for rule-based systems.\n\nOn a similar note, the Neural Theorem Prover (NTP)~\\cite{rocktaschel2017end} is an extension of the Prolog programming language that uses embeddings in place of the strict unification provided by Prolog. For such a reason, it is also able to provide an explanation, in the form of proof paths, for any given prediction.\n\nWhile neither LTN nor NTP is directly a knowledge graph embedding method, they use or generate embeddings as part of their training procedure (e.g., LTN embeds elements in the vector space to support logical reasoning). 
NTPs provide strong reasoning capabilities with the power of neural network models but are not scalable to large knowledge bases. Recently, NTPs were extended by Greedy NTPs (GNTPs)~\\cite{minervini2020differentiable}, a model that greatly reduces the computational needs of NTPs, making it possible to use them on large knowledge bases by pruning reasoning paths that are unlikely during inference. We mention again the embeddings inspired by quantum physics~\\cite{garg2019quantum}, which provide methodologies to reason over embeddings while preserving the logical structure.\n\n\\section{Summary and Future Directions}\\label{sec:conclusions}\n\nIn this chapter, we have summarized the current state-of-the-art of knowledge graph embeddings by describing many different methods and their main properties. We have also outlined the limitations of these methods, which provide dense representations that, while not directly interpretable, are still connected to a knowledge graph and thus have relationships with other elements. In the context of explainability, we saw that some models are tightly related to logic and try to reconstruct it from the embeddings or to use the embedded representation to perform logical reasoning. However, the actual explainability of these methods is still low and approaches that try to account for it are recent.\n\nThe evolution of the methods in the literature has passed through different datasets, and the evaluation is still subject to a lot of variation due to hyperparameter choices and training procedures. Older approaches perform well when trained with new methodologies. There is the need to define a common ground for evaluation that also takes into account the many differences that each model proposes. 
\n\n\n\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nDissipative spin models are currently attracting wide interest, because they\ndescribe genuine nonequilibrium states of matter and at the same time,\ncan be realized in various platforms from superconducting circuits to Rydberg atoms\n\\cite{puri2017quantum,clerk2020hybrid,xu2020probing,henriet2020quantum,zeiher2017coherent,labuhn2016tunable,saffman2010quantum,angerer2018superradiant,zhang2017observation}.\nThe model consists of an ensemble of spins that are subject to dissipation caused\nby external baths. The dynamics is described by the Lindblad equation,\nobtained by integrating out the baths' degrees of freedom~\\cite{2002theory}.\n\nGreat efforts have been taken in the investigation of various spin models.\nIn the central spin model~\\cite{kessler2012dissipative}, the dissipative phase\ntransition was located according to the closing of the Liouvillian gap.\nThe transverse-field Ising model was thoroughly studied both under\nthe mean-field approximation and beyond~\\cite{lee2011antiferromagnetic,lee2012collective,\nates2012dynamical,hu2013spatial,marcuzzi2014universal,weimer2015variational,weimer2015variational}, in which\nthe bistability of steady states in some region of the parameter space was found\nto be replaced by a first-order phase transition after the spatial correlation\nwas correctly considered. \nAs the couplings between spins are all-to-all and then the system can be\nseen as a huge spin, the semiclassical approach was adopted.\nThe magnetization was found to display an everlasting oscillation\nin the thermodynamic limit, indicating that the time translational symmetry\nis spontaneously broken into a discrete one~\\cite{ie2018boundary,shammah2018open,tucker2018shattered}. 
If the dissipation acts in the eigenbasis of the transverse\nfield, then a continuous dissipative phase transition manifests itself as\ncontinuous order parameters with discontinuous derivatives~\\cite{overbeck2017multicritical,foss2017solvable,jin2018phase,wang2021dissipative}. The XYZ-Heisenberg model was also studied by different approximation schemes~\\cite{lee2013unconventional,jin2016cluster,rota2017critical,\ncasteels2018gutzwiller,huybrechts2020validity} to clarify its phase diagram.\n\nThese studies focused on time-independent Liouvillians. But\nmuch less is known when the Liouvillian changes periodically with time~\\cite{zhu2019dicke}.\nThe topic of this work\nis to study the response to a time-periodic Liouvillian.\nOn the other hand, in closed quantum systems, the response to a time-periodic Hamiltonian\nhas been under intensive investigation.\nThe kicked top model was studied both theoretically~\\cite{haake1987classical,wang2021multifractality,\nbhosale2017signatures,bandyopadhyay2004entanglement,\nruebeck2017entanglement,lombardi2011entanglement,wang2004entanglement,iwaniszewski1995chaos} and\nexperimentally~\\cite{chaudhury2009quantum,krithika2019nmr,neill2016ergodic}; it\nis known to exhibit a transition between a regular dynamical phase and a chaotic one,\ndepending on the value of the kicking strength. Similar chaotic behavior was\nfound in the Lipkin-Meshkov-Glick model as the parameters change\nperiodically with time~\\cite{engelhardt2013ac,lerose2019prethermal,\ndas2006infinite,russomanno2015thermalization}.\nThe study in this paper can be seen as an investigation\nof the dissipation effect on the chaotic dynamics in periodically-driven spin models.\n\nAs a concise example, we study the fully-connected Ising model in the presence of\ncollective dissipation. 
Without dissipation, chaotic behavior\nappears in the presence of a strongly oscillating\nexternal field~\\cite{lerose2019prethermal,das2006infinite,\nrussomanno2015thermalization}. We find that the chaotic behavior\nis robust against weak dissipation if the semiclassical approximation is taken.\nAs the oscillating amplitude of the field increases, a periodic response\nchanges into a subharmonic oscillation, and then into\nchaotic behavior. But the numerical simulation shows\nthat, beyond the semiclassical approximation,\nonly the periodic response can survive the quantum fluctuations;\nneither the subharmonic nor the chaotic dynamics can be observed.\nThe semiclassical approximation works only when the oscillating amplitude\nis small, in the periodic-response regime,\nbut fails when the amplitude is large.\n\n\nThe paper is organized as follows. We introduce the model\nin Sec.~\\ref{sec:model}. Section~\\ref{sec:semi} is devoted to\nthe discussion of the semiclassical results. The exact numerical\nsimulations of the dynamics of the magnetizations\nare discussed in Sec.~\\ref{sec:finite}. The Floquet Liouvillian spectrum\nis studied in Sec.~\\ref{sec:spec}. Finally, Sec.~\\ref{sec:con} summarizes our results.\n\n\n\\section{THE MODEL}\n\\label{sec:model}\n\nWe consider the transverse field Ising model with all-to-all couplings,\nand a sinusoidal modulation added to the external field. 
The Hamiltonian is written as\n\\begin{equation}\\label{1}\n\\hat{H} =-Ng(\\hat{J}_{x})^{2}+N\\Gamma(t)\\hat{J}_{z},\n\\end{equation}\nwhere $N$ denotes the total number of spins, $g$ denotes the coupling strength, and\n$\\hat{J}_{\\alpha}=\\sum_{j}\\hat{\\sigma}_{j}^{\\alpha}\/N$ denote the collective spin operators\nwith $\\hat{\\sigma}_{j}^{\\alpha}$ being the Pauli matrices of the $j$th spin and $\\alpha=x,y,z$.\n$\\Gamma(t)$ denotes the time-dependent external field, which is supposed to be\n$\\Gamma(t)= \\Gamma_0+A\\sin(\\omega_0 t)$, where $A$, $\\Gamma_0$ and $\\omega_0$\nare the oscillating amplitude, mean value and frequency, respectively.\nWe set $g = 1$ as the unit of energy throughout the paper.\n\nIn the presence of dissipation, the dynamics of the system is\ndescribed by the Lindblad equation~\\cite{LindbladmasterE}. The density matrix satisfies\n\\begin{equation}\\label{2}\n\\frac{d\\hat{\\rho}}{dt}=-i[\\hat{H},\\hat{\\rho}]+N{\\gamma_{c}}\\left(\n2\\hat{J}_{-}\\hat{\\rho}\\hat{J}_{+}- \\{\\hat{\\rho},\\hat{J}_{+}\\hat{J}_{-}\\}\\right),\n\\end{equation}\nwhere $ \\hat{J}_{\\pm}=\\left(\\hat{J}_x \\pm i \\hat{J}_y\\right)\/2$ are the jump operators,\nand $\\gamma_{c}$ is the dissipation rate. When $A=0$, $\\Gamma(t)= \\Gamma_0$\nis a constant, and the solution of Eq.~\\eqref{2}\nwas studied previously~\\cite{wang2021dissipative}. The jump operator forces the spins to be\naligned in the negative $z$-direction, while the interaction between spins favors\nan alignment in the $x$-direction. Their interplay results in a steady state, which\nis either ferromagnetic (with nonzero magnetization in the $x$-direction)\nor paramagnetic. 
Here we extend the analysis to the case of $A\\neq 0$, in which\nthe field oscillation prevents a steady state from being reached,\nand we expect nontrivial dynamical behavior.\n\n\\begin{figure}\n\t\n\t\\subfigure[]{\n\t\t\\begin{minipage}{0.9\\linewidth}\n\t\t\t\\centering\n\t\t\t\\includegraphics[width=\\linewidth]{fig1.pdf}\n\t\t\n\t\t\\end{minipage}\\label{img2a}\n\t}%\n\n\t\\subfigure[]{\n\t\t\\begin{minipage}{0.9\\linewidth}\n\t\t\t\\centering\n\t\t\t\\includegraphics[width=\\linewidth]{fig2.pdf}\n\t\t\n\t\t\\end{minipage}\\label{img2b}\n\t}%\n\t\n\t\\centering\n\t\\caption{The time evolution of $m_x$ for (a) $A=0.1$ with $25$ different initial states,\n\tand (b) $A=1$ with two initial states, $(\\theta_0=0.5\\pi,\\phi_0=0.1\\pi)$\n\tand $(\\theta_1=0.5\\pi+10^{-7},\\phi_1=0.1\\pi)$. Different line colors\n\tare for different initial states. The dissipation strength is set to $\\gamma_c=1$.}\n\t\\label{img2}\n\\end{figure}\n\n\\begin{figure}\n\\subfigure[]{\n\\begin{minipage}{0.9\\linewidth}\n\\centering\n\\includegraphics[width=\\linewidth]{fig3.pdf}\n\\end{minipage}\\label{img1a}\n}\n\\subfigure[]{\n\\begin{minipage}{0.9\\linewidth}\n\\centering\n\\includegraphics[width=\\linewidth]{fig4.pdf}\n\\end{minipage}\\label{img1b}\n}\n\n\\centering\n\\caption{The time evolution of $m_x$ for (a) $A=0.73$ and (b) $A=0.7345$.\nThe red line highlights a complete period. The transient regime\n($t<150$) is omitted.}\n\\label{img1}\n\\end{figure}\n\n\n\\section{Chaotic dynamics in the semiclassical approach}\n\\label{sec:semi}\n\nThe semiclassical approach is frequently employed for solving fully-connected models.\nWe choose the order parameters to be $m_{\\alpha}=\\left\\langle \\hat{J}_{\\alpha}\\right\\rangle =\\mathrm{Tr}[\\hat{\\rho}\\hat{J}_{\\alpha}]$. 
By ignoring the correlations (i.e., setting $\\left\\langle \\hat{J}_{\\alpha}\\hat{J}_{\\beta}\\right\\rangle =\\left\\langle \\hat{J}_{\\alpha}\\right\\rangle \\left\\langle \\hat{J}_{\\beta}\\right\\rangle$)\nin the limit $N\\to\\infty$, we\nobtain a nonlinear system of differential equations, which read\n\\begin{equation}\n\\begin{aligned}\n\\dot{m}_{x}\t&=-2\\Gamma(t)m_{y}+\\gamma_{c}m_{x}m_{z} , \\\\\n\\dot{m}_{y}\t&=4m_{x}m_{z}+2\\Gamma(t)m_{x}+\\gamma_{c}m_{y}m_{z}, \\\\\n\\dot{m}_{z}\t&=-4m_{x}m_{y}-\\gamma_{c}(m_{x}^{2}+m_{y}^{2}).\n\\end{aligned}\\label{3}\n\\end{equation}\nIt is easy to see that $\\left| {\\bf{m}}(t)\\right| =\\sqrt{m_x^2 + m_y^2 + m_z^2} $\nis a constant of motion, which can be set to unity without loss of generality.\nThe vector ${\\bf{m}}$ moves on the Bloch sphere, and the initial state can be\ndescribed by the spherical angles $\\left(\\theta, \\phi\\right)$, which are defined\nby $m_x = \\sin\\theta \\cos\\phi$, $m_y=\\sin\\theta\\sin \\phi$ and $m_z=\\cos \\theta$.\n\nSince the coefficient $\\Gamma(t)$ is a periodic function of $t$ with the period $2\\pi\/\\omega_0$,\none may naively think that the solution ${\\bf{m}}(t)$ is also a periodic function. This\nis the case for small $A$, but not true for large $A$. We note that Eq.~\\eqref{3} bears some\nresemblance to the Lorenz equation~\\cite{lorenz1963deterministic},\na famous example of deterministic chaos in classical systems.\nIn Eq.~\\eqref{3}, the possibility of chaos comes from the fact that $\\Gamma(t)$\nis time-periodic, even though the trajectory is confined to a two-dimensional sphere.\nIndeed, we observe a periodic ${\\bf{m}}(t)$ for small $A$, but a chaotic\n${\\bf{m}}(t)$ for large $A$.\n\nNext we choose $\\Gamma_0=1$ and $\\omega_0=1$ as an example for the demonstration.\nIn Fig.~\\ref{img2}(a), we display the evolution of\n$m_x $ for small $A$ ($A=0.1$) and 25 initial states that are evenly distributed\non the Bloch sphere. 
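Equation~\eqref{3} and its constant of motion can be checked directly. The following minimal sketch (our own illustration; a fixed-step RK4 integrator with the parameters $g=1$, $\Gamma_0=\omega_0=\gamma_c=1$ and $A=0.1$ taken from the text) integrates the semiclassical equations and verifies that $|\mathbf{m}(t)|$ stays at unity:

```python
import numpy as np

G0, A, W0, GC = 1.0, 0.1, 1.0, 1.0   # Gamma_0, amplitude, omega_0, gamma_c (assumed)

def rhs(t, m):
    """Right-hand side of the semiclassical equations of motion, Eq. (3)."""
    mx, my, mz = m
    G = G0 + A * np.sin(W0 * t)
    return np.array([-2*G*my + GC*mx*mz,
                     4*mx*mz + 2*G*mx + GC*my*mz,
                     -4*mx*my - GC*(mx**2 + my**2)])

def rk4(f, m, t, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(t, m); k2 = f(t + dt/2, m + dt/2*k1)
    k3 = f(t + dt/2, m + dt/2*k2); k4 = f(t + dt, m + dt*k3)
    return m + dt/6*(k1 + 2*k2 + 2*k3 + k4)

# Initial state on the unit sphere (theta_0, phi_0 as used in the figures)
theta, phi = 0.5*np.pi, 0.1*np.pi
m = np.array([np.sin(theta)*np.cos(phi), np.sin(theta)*np.sin(phi), np.cos(theta)])

dt, t = 0.01, 0.0
for _ in range(20000):               # evolve to t = 200
    m = rk4(rhs, m, t, dt); t += dt

print(np.linalg.norm(m))             # stays at 1 up to integrator error
```

The conservation of $|\mathbf{m}|$ holds exactly for Eq.~\eqref{3}; any drift in the printed norm is purely the integrator's discretization error.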
Except for the initial state at the south pole (${\\bf{m}}=(0,0,1)$),\nall the others eventually evolve into one of the two oscillation\nmodes that are symmetric to each other.\nThe two oscillation modes have the same period, which is exactly\nthe driving period ($2\\pi\/\\omega_0$). For small $A$, the long-time response\nis insensitive to a small deviation in the initial state.\nOn the other hand, Fig.~\\ref{img2b} shows the evolution of $m_x $\nfor a large $A$ ($A = 1.0$). Up to the longest accessible evolution time,\nno periodicity is observed. More importantly, the long-time response is\nextremely sensitive to the initial condition. Even for a tiny deviation\nin the initial state ($\\theta_1-\\theta_0=10^{-7}$), $m_x(t)$\ndisplays a significant difference once $t$ reaches a few hundred\n(see the lines of different colors in Fig.~\\ref{img2b}).\n\nFor the values of $A$ between $0.1$ and $1.0$, ${{m}_x}(t)$ displays abundant dynamical\nbehaviors. As $A$ increases up to a certain critical value, the time period is doubled.\nIn Fig.~\\ref{img1a}, we plot ${{m}_x}(t)$ for $A=0.73$. It is easy to see that\nthe period is not $2\\pi\/\\omega_0$; instead, it becomes $4\\pi\/\\omega_0$.\nAs $A$ increases further, the period-doubling happens again.\nFor example, for $A=0.7345$, the period becomes $8\\pi\/\\omega_0$ (see Fig.~\\ref{img1b}).\nThe period-doubling bifurcation is well known in classical nonlinear systems.\nUsually, a cascade of period-doubling bifurcations leads to chaos~\\cite{galatzer1997understanding}.\nThis explains why we observe chaotic dynamics for $A$ as large as $A=1$.\n\nDepending on the value of $A$, the system exhibits periodic, subharmonic,\nor chaotic responses. 
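The period of the long-time response can be read off from a stroboscopic map: sampling $\mathbf{m}(t)$ at integer multiples of the driving period $2\pi/\omega_0$ turns a period-1 response into a fixed point, a period-doubled response into a two-point orbit, and so on. A minimal sketch (our own illustration; parameters $\Gamma_0=\omega_0=\gamma_c=1$ and the initial state $(\theta_0,\phi_0)=(0.5\pi,0.1\pi)$ are assumed from the text) for the small-amplitude case $A=0.1$:

```python
import numpy as np

G0, A, W0, GC = 1.0, 0.1, 1.0, 1.0           # assumed parameters from the text
T = 2 * np.pi / W0                            # driving period

def rhs(t, m):
    mx, my, mz = m
    G = G0 + A * np.sin(W0 * t)
    return np.array([-2*G*my + GC*mx*mz,
                     4*mx*mz + 2*G*mx + GC*my*mz,
                     -4*mx*my - GC*(mx**2 + my**2)])

def step(m, t, dt):                           # one RK4 step
    k1 = rhs(t, m); k2 = rhs(t + dt/2, m + dt/2*k1)
    k3 = rhs(t + dt/2, m + dt/2*k2); k4 = rhs(t + dt, m + dt*k3)
    return m + dt/6*(k1 + 2*k2 + 2*k3 + k4)

theta, phi = 0.5*np.pi, 0.1*np.pi
m = np.array([np.sin(theta)*np.cos(phi), np.sin(theta)*np.sin(phi), np.cos(theta)])

n_per = 400                                   # RK4 steps per driving period
dt = T / n_per
strobe, t = [], 0.0
for period in range(80):                      # 80 driving periods
    for _ in range(n_per):
        m = step(m, t, dt); t += dt
    strobe.append(m.copy())                   # stroboscopic sample m(kT)

# After the transient, a period-1 response returns to the same point every period;
# a period-doubled response would instead alternate between two points.
d1 = np.linalg.norm(strobe[-1] - strobe[-2])
print(d1)
```

For $A=0.73$ one would expect `strobe[-1]` to match `strobe[-3]` rather than `strobe[-2]`, signaling the doubled period $4\pi/\omega_0$.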
We plot the trajectory of the vector ${\\bf{m}}(t)$ on the Bloch sphere\nin Appendix~\\ref{sec:app}, which provides more evidence for the existence\nof different dynamical behaviors.\n\nIn our model, the subharmonic response (i.e., doubled period) to a time-periodic Liouvillian\nmust be distinguished from that in Floquet time crystals.\nAs will be shown next, the subharmonic response in our model\ncan only be observed in the semiclassical limit. It cannot\nsurvive the quantum fluctuations that are unavoidable at finite $N$.\n\nTo locate the chaotic phase in the parameter space, we calculate the\nLyapunov exponent. The defining property of a chaotic system is\nthe extreme sensitivity of trajectories to the initial condition:\ntwo points that are initially close will drift apart exponentially over time.\nThe Lyapunov exponent~\\cite{oseledets1968multiplicative} provides a quantitative\nmeasure for this; it is defined as\nthe average exponential rate of convergence or divergence\nbetween adjacent trajectories in the phase space.\nIn particular, the largest Lyapunov exponent (LLE) is frequently employed\nfor determining whether a nonlinear system is chaotic.\nIf the LLE is greater than zero, two initial points will depart\nfrom each other exponentially and the system is chaotic~\\cite{wolf1985determining};\notherwise, it is not. Figure~\\ref{fig:h0} displays the dependence of the LLE on $A$ and $\\gamma_c$.\nThe area in red or yellow has a positive LLE, while the area in blue or green has a negative LLE. \nThe chaotic phase is clearly distinguishable in the parameter space.\nAn interesting finding is that for a fixed $\\gamma_c$ in the interval $\\left(0,1\\right)$,\nthe dynamics is chaotic only for intermediate amplitudes of the oscillating field; the dynamics\nis regular if $A$ is either too large or too small. 
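A standard estimator of the LLE is the two-trajectory (Benettin-type) scheme: evolve a fiducial trajectory and a slightly displaced copy, renormalize their separation after every step, and average the logarithmic growth rate. The sketch below is our own illustration (not the paper's code); the parameters, integration times, and the projection of both trajectories onto the unit sphere (which removes the neutral radial direction associated with the conserved $|\mathbf{m}|$) are all assumptions:

```python
import numpy as np

G0, W0, GC = 1.0, 1.0, 1.0                    # assumed Gamma_0, omega_0, gamma_c

def rhs(t, m, A):
    mx, my, mz = m
    G = G0 + A * np.sin(W0 * t)
    return np.array([-2*G*my + GC*mx*mz,
                     4*mx*mz + 2*G*mx + GC*my*mz,
                     -4*mx*my - GC*(mx**2 + my**2)])

def step(m, t, dt, A):                        # one RK4 step
    k1 = rhs(t, m, A); k2 = rhs(t + dt/2, m + dt/2*k1, A)
    k3 = rhs(t + dt/2, m + dt/2*k2, A); k4 = rhs(t + dt, m + dt*k3, A)
    return m + dt/6*(k1 + 2*k2 + 2*k3 + k4)

def lle(A, dt=0.01, t_trans=100.0, t_meas=400.0, d0=1e-8):
    """Largest Lyapunov exponent on the sphere by the two-trajectory method."""
    theta, phi = 0.5*np.pi, 0.1*np.pi
    m = np.array([np.sin(theta)*np.cos(phi), np.sin(theta)*np.sin(phi), np.cos(theta)])
    t = 0.0
    for _ in range(int(t_trans/dt)):          # discard the transient
        m = step(m, t, dt, A); t += dt
    m2 = m + np.array([d0, 0.0, 0.0])
    m2 /= np.linalg.norm(m2)                  # keep the copy on the sphere
    s, n = 0.0, int(t_meas/dt)
    for _ in range(n):
        m = step(m, t, dt, A);  m /= np.linalg.norm(m)
        m2 = step(m2, t, dt, A); m2 /= np.linalg.norm(m2)
        t += dt
        d = np.linalg.norm(m2 - m)
        s += np.log(d/d0)
        m2 = m + (d0/d)*(m2 - m)              # renormalize the separation to d0
    return s/(n*dt)

lle_small, lle_large = lle(0.1), lle(1.0)
print(lle_small, lle_large)   # expected from Fig. 5: negative vs positive
```

Short integration times are used here to keep the sketch fast; the sign of the result, not its precise value, is the diagnostic.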
In the presence of\nstrong dissipation ($\\gamma_c>1.5$), there is no chaotic dynamics for\nany $A$.\n\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=1\\linewidth]{fig5.pdf}\n\t\\caption[]{The largest Lyapunov exponent as a function of $\\gamma_c$ and $A$.\n\tThe system displays chaotic dynamics when the LLE is positive (see the area in red or yellow).}\n\t\\label{fig:h0}\n\\end{figure}\n\n\n\\section{Periodic behavior at finite $N$}\n\\label{sec:finite}\n\nTo check the validity of the semiclassical approximation,\nwe numerically simulate the real-time dynamics at finite $N$, by exploiting\nthe permutation symmetry of fully-connected models.\nThe Dicke basis with maximum angular momentum is defined as~\\cite{sciolla2011dynamical}\n\\begin{equation}\n\\label{4}\n\\ket{M} =\\sqrt{\\frac{1}{C_{N}^{\\frac{N}{2}+M}}}\n\\sum_{{\\sum_{j=1}^{N} \\sigma_{j}=M}}\\ket{\\sigma_{1},\\sigma_{2},\\cdots,\\sigma_{N}} ,\n\\end{equation}\nwhere $C_{N}^{\\frac{N}{2}+M}$ is the binomial coefficient, and $\\sigma_j=\\pm 1\/2$\nrepresent the spin-up and spin-down states, respectively,\nand $M= -N\/2, -N\/2+1,\\cdots, N\/2$ is the magnetization in the $z$-direction.\nThe initial state is taken to be a pure state with all the spins\naligned in the same direction. We use the spherical angles $(\\theta,\\phi)$\nto indicate the initial direction; the initial state can then be\nexpressed in the Dicke basis as\n \\begin{equation}\n \\label{key}\n \t\\ket{ \\theta ,\\phi } =\\sum_{M=-\\frac{N}{2} }^{\\frac{N}{2}} \n\\sqrt{C_{N}^{M+\\frac{N}{2} }}\\,\\cos^{\\frac{N}{2}+M}\\frac{\\theta}{2}\n\t\\sin^{\\frac{N}{2}-M}\\frac{\\theta}{2}\\,e^{i(\\frac{N}{2}-M)\\phi}\\left | M \\right \\rangle .\n\\end{equation} \nEquation~\\eqref{2} preserves the permutation symmetry, so its solution can be expressed as\n$\\hat{\\rho}(t) = \\sum_{M,M'} \\rho_{M,M'}(t) \\ket{M}\\bra{M'}$, where\n$\\rho_{M,M'}(t)$ is the density matrix in the Dicke basis. 
Now Eq.~\\eqref{2} changes into\na system of ordinary differential equations for $\\rho_{M,M'}$, which are solved numerically.\nThe permutation symmetry reduces the dimension of the relevant Hilbert space from $2^N$\nto $N+1$, and thus allows us to access a large system with the number of spins $N\\sim 10^2-10^3$.\n\nIn Fig.~\\ref{img4a}, we compare the dynamics of magnetizations at\nfinite $N$ and in the semiclassical limit ($N=\\infty$)\nfor $A=0.1$, which lies in the semiclassical periodic regime.\nAt finite $N$, the magnetizations display periodic oscillations\nat asymptotically long times.\nThe curve of $m_z(t)$ at $N=50$ is already very close to the\nsemiclassical one. As $N$ increases, $m_z(t)$\ngoes even closer to the semiclassical result (see the inset of Fig.~\\ref{img4a}).\nAs $N\\to\\infty$, we expect the numerical results to\ncoincide with the semiclassical results.\nIf $A$ is small, then the semiclassical approximation\nis good for large enough $N$.\n\nFigure~\\ref{img4b} shows the comparison for $A=1.0$,\nwhich lies in the semiclassical chaotic regime. The results\nare significantly different from those at $A=0.1$. Up to $N=200$,\nwe find no similarity between the numerical result and the semiclassical one.\nFor $N$ ranging between $50$ and $200$, the initial transient\nbehavior of $m_z(t)$ always quickly evolves into the asymptotic behavior,\na periodic oscillation. The $m_z(t)$ curves for $N=50$ and $N=200$ have the same period,\nwith only their amplitudes being different. But in the semiclassical result, $m_z(t)$ is aperiodic\nat arbitrarily long times. These observations suggest that the\nexact numerical solutions do not converge to the semiclassical one\nin the limit $N\\to\\infty$. If $A$ is large, then the semiclassical approximation is poor,\neven for large $N$. 
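The expansion of the spin-coherent initial state in the Dicke basis can be checked numerically. In the sketch below (our own illustration) the coefficient of $\ket{M}$ is taken as the square root of the binomial weight, so that the normalized Dicke states of Eq.~\eqref{4} yield a normalized state, and $\langle \hat{J}_z\rangle = \cos\theta$ is recovered:

```python
import numpy as np
from math import comb

N = 50
theta, phi = 0.5*np.pi, 0.1*np.pi            # initial direction used in the figures

M = np.arange(-N//2, N//2 + 1)               # magnetization quantum numbers
# Coefficients of |theta, phi> in the normalized Dicke basis:
# sqrt(binomial) * cos^(N/2+M)(theta/2) * sin^(N/2-M)(theta/2) * phase
c = np.array([np.sqrt(comb(N, N//2 + m))
              * np.cos(theta/2)**(N//2 + m)
              * np.sin(theta/2)**(N//2 - m) for m in M]) \
    * np.exp(1j*(N//2 - M)*phi)

norm = np.sum(np.abs(c)**2)
print(norm)                                  # 1: the product state is normalized

jz = np.sum((2*M/N) * np.abs(c)**2)          # J_z |M> = (2M/N) |M> for Pauli operators
print(jz, np.cos(theta))                     # <J_z> equals cos(theta)
```

The binomial identity $\sum_M C_N^{N/2+M}\cos^{N+2M}(\theta/2)\sin^{N-2M}(\theta/2) = (\cos^2+\sin^2)^N = 1$ is what the first printed value verifies.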
This seems to be strange for a fully-connected model.\nWe will further discuss this discrepancy in the next section.\n\n\n\\begin{figure}\n\t\\subfigure[]{\n\t\t\\begin{minipage}{1\\linewidth}\n\t\t\t\\centering\n\t\t\t\\includegraphics[width=\\linewidth]{fig6.pdf}\n\t\t\n\t\t\\end{minipage}\\label{img4a}\n\t}%\n\n\t\\subfigure[]{\n\t\t\\begin{minipage}{1\\linewidth}\n\t\t\t\\centering\n\t\t\t\\includegraphics[width=\\linewidth]{fig7.pdf}\n\t\t\n\t\t\\end{minipage}\\label{img4b}\n\t}%\n\t\\centering\n\t\\caption{The time evolution of $m_z$ for (a) $A=0.1$ and (b) $A=1$\n\twith different $N$. The black solid lines represent the semiclassical results.\n\tThe initial condition is fixed to be $\\theta_0=0.5\\pi, \\phi_0=0.1\\pi$.}\n\t\\label{img4(2)}\n\\end{figure}\n\n\\begin{figure}\n\t\\subfigure[]{\n\t\t\\begin{minipage}{1\\linewidth}\n\t\t\t\\centering\n\t\t\t\\includegraphics[width=\\linewidth]{fig8.pdf}\n\t\t\n\t\t\\end{minipage}\\label{img5b}\n\t}%\n\t\n\t\\subfigure[]{\n\t\t\\begin{minipage}{1\\linewidth}\n\t\t\t\\centering\n\t\t\t\\includegraphics[width=\\linewidth]{fig9.pdf}\n\t\t\n\t\t\\end{minipage}\\label{img5a}\n\t}%\n\t\n\t\\centering\n\t\\caption{The evolution of $m_x$ and $m_z$ with different $A$.\n\tThe number of spins is chosen to be $N=100$. The\n\tinitial condition is $\\theta_0=0.5\\pi,\\phi_0=0.1\\pi$.}\n\t\\label{img5}\n\\end{figure}\nWe also compare the dynamics of magnetizations for different $A$ at\nfixed $N$. Figures~\\ref{img5b} and~\\ref{img5a} display $m_z(t)$ and\n$m_x(t)$, respectively, for values of $A$ ranging from $0.1$ to $1.0$.\nThe magnetization in the $z$-direction always displays a periodic oscillation,\nand the oscillations for different $A$ are in phase (see the inset of Fig.~\\ref{img5b}). 
On the other hand, the magnetization in the $x$-direction displays\ntwo different dynamical modes, depending on the value of $A$.\nAs seen in Fig.~\\ref{img5a}, when $A$ is small ($A=0.1, 0.3$),\n$m_x$ displays a persistent oscillation with the period $2\\pi\/\\omega_0$.\nBut when $A$ is large ($A=0.73, 1.0$), $m_x$ rapidly decays to zero.\nFor an intermediate $A$ ($A=0.5$), $m_x$ maintains an oscillation for\na relatively long time, but the decay can still be clearly seen.\nThe decay of $m_x$ is a signal of the discrepancy between the exact\nresults and the semiclassical approximation, since the latter has non-decaying\nmagnetizations in all directions (see App.~\\ref{sec:app}).\n\n\\section{Floquet Liouvillian spectrum}\n\\label{sec:spec}\n\nFor closed systems, it is widely believed that the semiclassical\napproximation becomes exact for fully-connected models\nas the number of spins goes to infinity.\nFor open systems, a similar result was obtained when the Liouvillian\nis time-independent~\\cite{carollo2021exactness}. But in this paper, our numerical\nresults suggest that the semiclassical approximation fails even in the\nlimit $N\\to\\infty$ if there is a large time-periodic term in the Liouvillian.\nSince we can only obtain the exact results at finite $N$, a scaling\nanalysis is helpful for confirming our viewpoint. 
Next we give a\nscaling analysis of the Floquet Liouvillian spectrum.\n\n\n\\begin{figure}\n\t\n\t\\subfigure[]{\n\t\t\\begin{minipage}{0.9\\linewidth}\n\t\t\t\\centering\n\t\t\t\\includegraphics[width=\\linewidth]{fig10.pdf}\n\t\t\n\t\t\\end{minipage}\\label{fig.6a}\n\t}%\n\n\t\\subfigure[]{\n\t\t\\begin{minipage}{0.9\\linewidth}\n\t\t\t\\centering\n\t\t\t\\includegraphics[width=\\linewidth]{fig11.pdf}\n\t\t\n\t\t\\end{minipage}\\label{fig.6b}\n\t}%\n\t\\centering\n\t\\caption{The Floquet Liouvillian spectrum for (a) $A=0.1$ and (b) $A=1.0$.\n\tThe number of spins is $N=30$.}\n\t\\label{fig.6}\n\\end{figure}\nThe Lindblad equation can be expressed in a vectorized form as\n$d\\hat{\\rho}\/dt = \\hat{\\hat{\\mathcal{L}}}\\left( \\hat{\\rho}\\right)$, where\n$\\hat{\\hat{\\mathcal{L}}}$ is the so-called Liouvillian\nsuperoperator (or Liouvillian for short), which\nis a non-Hermitian linear operator acting on the vector space of density matrices.\nFor dissipative systems with a time-independent\n$\\hat{\\hat{\\mathcal{L}}}$, it is well known that the eigenvalues\nand eigenvectors of $\\hat{\\hat{\\mathcal{L}}}$ determine the dynamics.\nThe eigenvalues of $\\hat{\\hat{\\mathcal{L}}}$ (the Liouvillian spectrum) are complex numbers.\nMore importantly, if the system size $N$ is finite, all the eigenvalues\nmust have negative real parts, except one that is zero.\n\nThese notions can be generalized to the case of time-periodic $\\hat{\\hat{\\mathcal{L}}}$.\nFor a Lindblad equation with $\\hat{\\hat{\\mathcal{L}}}(t) = \\hat{\\hat{\\mathcal{L}}}(t+T)$,\nthere exists a complete set of solutions of the form $\\hat{\\rho}(t)= e^{\\lambda t} \\hat{\\varrho}(t)$,\nwhere $\\hat{\\varrho}(t)=\\hat{\\varrho}(t+T)$ is the time-periodic part of the density matrix,\naccording to the Floquet theorem. 
The periodic part $\\hat{\\varrho}(t)$ satisfies\n\\begin{equation}\n\\hat{\\hat{\\mathcal{L}}}\\left( \\hat{\\varrho}(t) \\right) - \\frac{d}{dt} \\hat{\\varrho}(t)\n= \\lambda \\hat{\\varrho}(t).\n\\end{equation}\nThen $\\lambda$ can be seen as an eigenvalue of $\\left(\\hat{\\hat{\\mathcal{L}}}(t)- d\/dt\\right)$, which\nis an operator acting on the generalized vector space of density matrices,\njust as $\\left(\\hat{H}-id\/dt\\right)$ is an operator acting\non the generalized Hilbert space (Sambe space)\nfor a closed system with a time-periodic Hamiltonian.\nWhile the eigenvalues of $\\left(\\hat{H}-id\/dt\\right)$ form the\nFloquet spectrum, the eigenvalues of $\\left(\\hat{\\hat{\\mathcal{L}}}(t)- d\/dt\\right)$\nform the Floquet Liouvillian spectrum.\n\nCompared to the Floquet spectrum or the Liouvillian spectrum, much less is known about the\nFloquet Liouvillian spectrum. For dissipative systems, we conjecture that\nthe Floquet Liouvillian spectrum has the same properties as the Liouvillian spectrum,\ni.e., all the complex eigenvalues have negative real parts except for a unique one\nthat is zero. The numerics support this conjecture. Figure~\\ref{fig.6}\ndisplays the Floquet Liouvillian spectrum at $N=30$. For both\n$A=0.1$ and $A=1.0$, we see that the rightmost eigenvalue\nin the complex plane is zero, which is non-degenerate,\nwhile all the others lie to the left of zero.\n\nThe Floquet Liouvillian spectrum completely determines whether\nthe dynamics is periodic, subharmonic or chaotic. To see this, we arrange all\nthe eigenvalues as\n\\begin{equation}\n\\lambda_0=0\\geq \\text{Re}\\lambda_1 \\geq \\text{Re}\\lambda_2 \\geq \\cdots,\n\\end{equation}\nwith the corresponding eigenvectors being $\\hat{\\varrho}_0(t)$, $\\hat{\\varrho}_1(t)$,\n$\\hat{\\varrho}_2(t),\\cdots$. 
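In practice the $\lambda_j$ can be obtained from the one-period propagator: integrating $d\Phi/dt = \mathcal{L}(t)\,\Phi$ over one period $T$ and taking $\lambda_j = \ln\mu_j/T$ (modulo $2\pi i/T$) from the Floquet multipliers $\mu_j$ of $\Phi(T)$. The sketch below is our own illustration for the $N=1$ case (where the interaction term is trivial, so it only demonstrates the spectral structure, not the collective physics); the vectorization convention and parameters are assumptions:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sm = np.array([[0, 0], [1, 0]], dtype=complex)
sp = sm.conj().T
I2 = np.eye(2)
G0, A, W0, gc = 1.0, 1.0, 1.0, 1.0            # assumed parameters
T = 2*np.pi/W0

def liouv(t):
    """Vectorized L(t) for Eq. (2) with N = 1 (column-stacking convention)."""
    H = -sx @ sx + (G0 + A*np.sin(W0*t)) * sz
    return (-1j*(np.kron(I2, H) - np.kron(H.T, I2))
            + gc*(2*np.kron(sp.T, sm)
                  - np.kron(I2, sp @ sm) - np.kron((sp @ sm).T, I2)))

# One-period propagator Phi(T): dPhi/dt = L(t) Phi, Phi(0) = identity (RK4)
Phi = np.eye(4, dtype=complex)
n = 4000
dt, t = T/n, 0.0
for _ in range(n):
    k1 = liouv(t) @ Phi
    k2 = liouv(t + dt/2) @ (Phi + dt/2*k1)
    k3 = liouv(t + dt/2) @ (Phi + dt/2*k2)
    k4 = liouv(t + dt) @ (Phi + dt*k3)
    Phi = Phi + dt/6*(k1 + 2*k2 + 2*k3 + k4)
    t += dt

lam = np.log(np.linalg.eigvals(Phi)) / T      # Floquet Liouvillian eigenvalues (mod 2*pi*i/T)
lam = lam[np.argsort(-lam.real)]
print(lam.real)   # one eigenvalue with Re = 0 (trace preservation); all others Re < 0
```

Trace preservation guarantees the exactly vanishing eigenvalue; the magnitude of the most negative real parts sets the decay times of the other Floquet modes.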
For an arbitrary initial state, the\nsolution of the Lindblad equation can be formally expressed as\n\\begin{equation}\n\\hat{\\rho}(t) = \\sum_j K_j e^{\\lambda_j t} \\hat{\\varrho}_j(t),\n\\end{equation}\nwhere the coefficients $K_j$ depend on the initial state.\nAt asymptotically long times, the terms with $\\text{Re}\\lambda_j<0$ all\ndecay to zero, and $1\/\\left| \\text{Re}\\lambda_j \\right|$ is just the decay time of\nthe $j$-th mode. At finite $N$, all the $\\lambda_j$ with $j>0$ have\nnegative real parts; therefore, the asymptotic density matrix\nbecomes $\\hat{\\rho}(t) = K_0 \\hat{\\varrho}_0(t)$, which is exactly periodic with\nthe period $T$. We then expect that the asymptotic behavior is always periodic at finite $N$.\nThe situation is more complicated in the thermodynamic limit.\nAs found in previous studies of time-independent Liouvillians,\nit is possible that the real parts of some $\\lambda_j$ decrease\nwith increasing $N$ and vanish in the limit $N\\to\\infty$. As a consequence, the asymptotic\ndensity matrix becomes $\\hat{\\rho}(t) = \\sum_{j=0}^L K_j \ne^{i \\text{Im}\\left(\\lambda_j\\right) t} \\hat{\\varrho}_j(t)$, where $L$ is the number of\neigenvalues with vanishing real parts. Additionally, if these $\\lambda_j$\nalso have nonvanishing imaginary parts, then $\\hat{\\rho}(t)$ may display\nsubharmonic or chaotic behavior, depending on the values of $\\text{Im}\\left(\\lambda_j\\right)$.\nConversely, if the real parts of $\\lambda_j$ with $j>0$ all\nremain nonzero in the limit $N\\to\\infty$, then subharmonic oscillations and chaotic\nbehavior are both impossible.\n\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=1\\linewidth]{fig12.pdf}\n\t\\caption[]{The Floquet Liouvillian gap as a function of $1\/N$. 
The dots\n\trepresent the numerical results, while the lines represent the fitted curves.}\n\t\\label{fig.7}\n\\end{figure}\nAccording to the above argument, we perform a scaling analysis of\nthe Floquet Liouvillian gap, which is defined as $\\Delta = \\lambda_0 - \\text{Re}\\lambda_1$.\nFigure~\\ref{fig.7} plots $\\Delta$ as a function of $1\/N$.\nThe dots are the numerical results, while the lines are the fitted curves.\nThe gap at $A=1.0$ is significantly smaller than the gap at $A=0.1$.\nBut in both cases, we clearly see that the gap does not go\nto zero as $N\\to\\infty$. This indicates that no $\\lambda_j$ with $j>0$\nhas a vanishing real part; therefore, the asymptotic behavior\nis periodic for both $A=0.1$ and $A=1.0$, even in the limit $N\\to \\infty$. Such a result is\nconsistent with our previous simulation of the real-time dynamics of magnetizations.\n\n\n\\section{CONCLUSIONS}\n\\label{sec:con}\n\nIn summary, we study the fully-connected Ising model with\na time-periodic external field and subject to dissipation,\nby using both the semiclassical approach and the exact\nnumerical simulation. If the field amplitude is small, both the semiclassical approach\nand the numerical simulation predict a perfect periodic oscillation\nof the magnetizations, and the numerical results in the thermodynamic limit\nare consistent with the semiclassical ones.\n\nAs the field amplitude increases, the semiclassical approach predicts\nperiod-doublings, i.e., subharmonic oscillations, and a series of period-doublings\nfinally leads to chaotic dynamics of the magnetizations, which is confirmed\nby the calculations of Lyapunov exponents. On the contrary, the\nnumerical simulations show that the magnetizations are always\noscillating periodically, whatever the field amplitude is. 
No subharmonic\nor chaotic dynamics are observed, even when the\nnumber of spins in the simulation is as large as a few hundred.\nWe then analyze the Floquet Liouvillian gap, which converges to a finite\nvalue in the thermodynamic limit, for either small or large field amplitude.\nWe argue that a finite gap is further evidence for the periodic oscillations of observables.\n\nFor fully-connected models, the\nsemiclassical approximation is generally believed to be good\nfor a sufficiently large number of spins. But we find that this is\nnot the case if the time-periodic field and the dissipation are both present.\nFor a large field amplitude, the predictions from the semiclassical approach\nand the numerical method are qualitatively different.\nOur finding suggests that one should be more careful when\nusing the semiclassical approach in the case of time-periodic Liouvillians.\n\n\n\\section*{Acknowledgement}\nThis work is supported by NSFC under Grant Nos.~11774315,~11835011, and~12174346,\nand by the Junior Associates program of the Abdus Salam International Center for Theoretical Physics.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
2pt}\n\\@setparskip\n\\parindent=0pt\n\\makeatother\n\n\\usepackage{tikz}\n\\usetikzlibrary{matrix}\n\\usepackage{caption}\n\\usepackage[flushmargin]{footmisc}\n\\usepackage{varwidth}\n\\usepackage{enumitem}\n\\setlist[enumerate]{labelindent=--.5in,leftmargin=0pt}\n\\setlist[itemize]{labelindent=--.5in,leftmargin=0pt}\n\\usepackage{nicefrac}\n\\usepackage{euscript}\n\\usepackage{color}\n\\definecolor{green}{RGB}{0,127,0}\n\\definecolor{red}{RGB}{191,0,0}\n\\usepackage[colorlinks=true]{hyperref}\n\n\\theoremstyle{plain}\n\\newtheorem{theorem}{Theorem}[section]\n\\newtheorem{corollary}[theorem]{Corollary}\n\\newtheorem{lemma}[theorem]{Lemma}\n\\newtheorem{proposition}[theorem]{Proposition}\n\\theoremstyle{definition}\n\\newtheorem{definition}[theorem]{Definition}\n\\newtheorem{example}[theorem]{Example}\n\\newtheorem{remark}[theorem]{Remark}\n\\newtheorem{question}[theorem]{Question}\n\\newtheorem{problem}[theorem]{Problem}\n\n\\numberwithin{equation}{section}\n\n\\newcommand{\\B}[1]{\\mathbf{#1}}\n\\newcommand{\\C}[1]{\\mathcal{#1}}\n\\newcommand{\\F}[1]{\\mathfrak{#1}}\n\n\\begin{document}\n\n\\title[Positive definite functions on Coxeter groups\\dots]{Positive definite functions on Coxeter groups\nwith applications to operator spaces and noncommutative\nprobability}\n\\author{Marek {\\sc Bo\\.zejko}}\n\\address{M.~Bo{\\.z}ejko --- \\normalfont Polska Akademia Nauk, ul.~Kopernika 18, 50-001 Wroc{\\l}aw}\n\\email{bozejko@math.uni.wroc.pl}\n\\author{\\'Swiatos{\\l}aw R.~{\\sc Gal}}\n\\address{\\'S.~R.~Gal --- \\normalfont Uniwersytet Wroc{\\l}awski, pl.~Grunwaldzki 2\/4, 48-300 Wroc{\\l}aw}\n\\email{sgal@math.uni.wroc.pl}\n\\author{Wojciech {\\sc M{\\l}otkowski}}\n\\address{W.~M{\\l}otkowski --- \\normalfont Uniwersytet Wroc{\\l}awski, pl.~Grunwaldzki 2\/4, 48-300 Wroc{\\l}aw}\n\\email{mlotkow@math.uni.wroc.pl}\n\n\\begin{abstract}\nA new class of positive definite functions\nrelated to colour-length function on arbitrary Coxeter group is introduced.\nExtensions of 
positive definite functions, called the Riesz-Coxeter product,\nfrom the Riesz product on the Rademacher (Abelian Coxeter) group to arbitrary\nCoxeter groups are obtained. Applications to harmonic analysis, operator spaces\nand noncommutative probability are presented. Characterizations of radial and\ncolour-radial functions on dihedral groups and the infinite permutation group are given.\n\\end{abstract}\n\n\\maketitle\n\n\\def{\\mathop{\\rm cb}}{{\\mathop{\\rm cb}}}\n\\def\\mathop{\\rm tr}{\\mathop{\\rm tr}}\n\n\\section*{Introduction}\n\nIn 1979 Uffe Haagerup, in his seminal paper \\cite{MR520930}, essentially proved\nthe positive definiteness, for $0\\leq q\\leq 1$, of the function $P_q(x) =\nq^{|x|} = \\exp(-t|x|)$, where $|\\cdot|$ is the word length on a free Coxeter\ngroup $W=\\B Z\/2*\\dots *\\B Z\/2$. From this he also deduced Khinchine-type\ninequalities. He showed that the regular $C^*$-algebra of $W$ has the\nbounded approximation property and later \\cite{MR784292} the completely bounded approximation\nproperty \\textsc{\\MakeLowercase{(CBAP)}}. These results of Uffe Haagerup have had\nsignificant impact on harmonic analysis on free groups and, more generally, on\nCoxeter groups; they also influenced free probability theory and other\nnoncommutative probability theories.\n\nIn the paper \\cite{MR950825} it was shown that the function $P_{q} (x) = q^{|x|}$\nis positive definite for $q\\in[-1,1]$ and all Coxeter groups,\nwhere the length $|\\cdot|$ is the natural word length function on a Coxeter group\nwith respect to the set of its Coxeter generators. This fact implies that infinite\nCoxeter groups have the Haagerup property and do not have Kazhdan's\nproperty (T).\n\nLater, Januszkiewicz \\cite{MR1925487} and Fendler \\cite{Fendler} showed,\nin the spirit of Haagerup's proof, that $w\\mapsto z^{|w|}$\nis a coefficient of a uniformly bounded Hilbert representation of $W$\nfor all $z\\in\\B C$ such that $|z|< 1$. 
As shown in a very short\npaper of Valette \\cite{MR1172955}, this implies \\textsc{\\MakeLowercase{CBAP}}.\nSee the book \\cite{MR2391387} for\nfurther extensions of Haagerup's results to a large class of\ngroups.\n\nIn the paper \\cite{MR1388006} Bo{\\.z}ejko and Speicher considered the free\nproduct (convolution) of the classical normal distribution $N(0,1)$, and a new\nlength function on the permutation group $\\F S_n$ (i.e. the Coxeter group of\ntype \\textsc{\\MakeLowercase{A}}) was introduced, which we shall call the \\textit{colour-length function}\n$\\|\\cdot\\|$. It is defined as follows: for $w\\in \\F S_n$ in a minimal\n(reduced) representation $w = s_1\\dots s_k$, where each $s_{j}$ belongs to the set $S$\nof transpositions of the form $(i,i+1)$, we put $\\|w\\| = \\#\\{s_{1}, s_{2},\\dots, s_{k}\\}$.\n\nFor our study, one of the most important results\nof that paper is that the function\ncalled the Riesz-Coxeter product $R_{\\B q}$, defined on an arbitrary Coxeter group\n$(W,S)$ as\n$$\nR_{\\B q}(s) = q_{s} \\text{, for $s\\in S$, and\\ }\nR_{\\B q}(xy) = R_{\\B q}(x) R_{\\B q} (y) \\text{, if\\ } \\| x y \\| = \\| x \\| + \\|y\\|,\n$$\nis positive definite for $0\\leq q_s\\leq 1$.\n\nThis implies, in particular, that in an arbitrary Coxeter group the set\nof its Coxeter generators is a weak Sidon\nset and is also a completely bounded $\\Lambda^{\\mathop{\\rm cb}}_p$-set; see\nTheorems \\ref{T:5.1} and\n\\ref{T:6.1}. 
Equivalently, the span of the linear operators $\\{\\lambda(s)|s\\in\nS\\}$ in the noncommutative $L^{p}$-space $L^p(W)$ is completely boundedly isomorphic\nto the row and column operator Hilbert space (see Theorem \\ref{T:6.1}).\n\n\nAnother interesting connection between the two length functions $|\\cdot|$ and\n$\\|\\cdot\\|$ appeared in \\cite{MR1388006} in the formula for the moments of\nthe free additive convolution power of the Bernoulli law $\\mu_{-1} =\n(\\delta_{-1} + \\delta_1)\/2$ (cf.~Corollary~6 in the cited paper):\n$$\nm_{2n}\\left(\\mu_{-1}^{\\boxplus q}\\right)\n=q^n\\smashoperator{\\sum_{\\pi\\in\\C P_2(2n)}}(-1)^{|\\pi|}q^{-\\|\\pi\\|},\n$$\nfor $q\\in\\B N$. (See also Section 9 of the present paper.)\n\nAlso, in \\cite{MR2764902} the colour-length function on the permutation group\n$\\F S_n$ was studied. Some of its extensions to pair partitions appeared\nin the presentation of the proof that the classical normal law $N(0,1)$ is freely\ninfinitely divisible under the free additive convolution $\\boxplus$.\n\nSince we have recent extensions of free probability (which is related to\ntype \\textsc{\\MakeLowercase{A}} Coxeter groups) to the free probability of type \\textsc{\\MakeLowercase{B}} Coxeter groups\n(see \\cite{MR3373433}), it seems\ninteresting to determine the role of the colour-length function for the Coxeter\ngroups of types \\textsc{\\MakeLowercase{B}} and \\textsc{\\MakeLowercase{D}}.\n\nThe plan of the paper is as follows.\n\nIn Section 1 we recall definitions of Coxeter groups and of the length and the\ncolour-length functions.\n\nIn Section 2 we recall the definition of positive definite functions and discuss\nvarious classes of those, namely radial, colour-radial, and colour-dependent.\n\nIn Section 3 we discuss Abelian Coxeter groups.\n\nIn Section 4 we show the following formula characterizing the radial normalised\npositive definite functions on these Coxeter groups which contain the infinite\nRademacher group $\\bigoplus_{i=1}^\\infty \\B 
Z\/2$\nas a parabolic subgroup (these include the infinite\npermutation group $\\F S_\\infty$): every radial positive definite function\n$\\varphi$ is of the form\n$$\n\\varphi (w) =\\int_{-1}^1 q^{|w|}\\,\\mu(dq)\n$$\nfor a probability measure $\\mu$.\n\nThat characterisation is a variation on the classical de Finetti theorem.\nA noncommutative version was shown by K\\\"ostler and Speicher \\cite{MR2530168}\n(see also \\cite{MR2092722}).\n\nWe also show, in Theorem \\ref{T:frac}, that the function $\\exp(-t|w|^p)$\nis positive definite for all $t\\geq 0$ if and only if $p\\in[0,1]$.\n\nIn Section 5 we give a short proof of the equivalence of the two known results\nconcerning positive definite functions on finite Coxeter groups.\n\nIn Section 6 we present the main properties of the colour-dependent positive definite\nfunctions on Coxeter groups, in particular we show in Proposition 4.4 that on\n$\\F S_\\infty$ and some other Coxeter groups, the function $w\\mapsto r^{\\|w\\|}$\nis positive definite if and only if $r\\in[0,1]$.\n\nSection 7 gives a characterization of all colour-radial functions on the finite and\ninfinite dihedral groups $\\B D_m$, for $m=1,2,\\dots,\\infty$.\n\nIn Section 8 we prove that the set $S$ of Coxeter generators is a weak Sidon\nset in arbitrary Coxeter groups $(W,S)$ with constant 2, and that it is also a\ncompletely bounded $\\Lambda (p)$ set with constants growing as $C \\sqrt p$, for $p>2$.\n\nIn Section 9 we prove, for an arbitrary finitely generated Coxeter group, an identity\ninvolving both lengths $|\\cdot|$ and $\\|\\cdot\\|$ (see Proposition \\ref{P:tokyo}).\nWe apply it to give a proof of Corollary 7 from \\cite{MR1388006}\n(see Equation (\\ref{eq:p2-nc2})); the proof, involving probabilistic considerations,\nwas not presented in \\cite{MR1388006}.\n\n\\section{Coxeter groups}\n\nIn this part we recall the basic facts regarding Coxeter groups and introduce notation\nwhich will be used throughout the rest of the paper. 
For more details we refer to \\cite{MR0240238,MR1066460}.\n\nA group $W$ is called a \\emph{Coxeter group} if it admits the following presentation:\n\\[\nW=\\left\\langle S\\left|\\left\\{(s_1 s_2)^{m(s_1,s_2)}=1:s_1,s_2\\in S,m(s_1,s_2)\\ne\\infty\\right\\}\\right.\\right\\rangle,\n\\]\nwhere $S\\subset W$ is a set and $m$ is a function $m:S\\times S\\to\\{1,2,3,\\ldots,\\infty\\}$\nsuch that $m(s_1,s_2)=m(s_2,s_1)$ for all $s_1,s_2\\in S$ and $m(s_1,s_2)=1$ if and only if $s_1=s_2$.\nThe pair $(W,S)$ is called a \\emph{Coxeter system}.\nIn particular, every generator $s\\in S$ has order two\nand every element $w\\in W$ can be represented as\n\\begin{equation}\\label{areducedrepr}\nw=s_1 s_2\\ldots s_{m}\n\\end{equation}\nfor some $s_1,s_2,\\ldots,s_m\\in S$.\nIf the sequence $(s_1,\\ldots,s_m)\\in S^{m}$ is chosen in such a way that $m$ is minimal,\nthen we write $|w|=m$ and call it the \\emph{length} of $w$.\nIn such a case the right-hand side of (\\ref{areducedrepr}) is called a \\emph{reduced representation}\nor \\emph{reduced word} of $w$.\nThis is not unique in general, but the set of involved generators is unique \\cite[Ch.~\\textsc{\\MakeLowercase{iv}}, \\S1, Prop.~7]{MR0240238}, i.e.\nif $w=s_1 s_2\\ldots s_m=t_1 t_2\\ldots t_m$ are two reduced representations\nof $w\\in W$ then $\\{s_1,s_2,\\ldots,s_m\\}=\\{t_1,t_2,\\ldots,t_m\\}$.\nThis set $\\{s_1,s_2,\\ldots,s_m\\}\\subseteq S$ will be denoted $S_w$ and called the \\emph{colour} of $w$.\n\nGiven a subset $T\\subset S$, we denote by $W_T$ the subgroup generated by $T$ and call it the \\emph{parabolic\nsubgroup} associated with $T$.\nTo see that $S_w$ is independent of the reduced representation of $w$, notice that\n\\begin{equation}\n\\label{def:colour}\ns\\in S_w \\iff w\\not\\in W_{S\\smallsetminus\\{s\\}}.\n\\end{equation}\nWe define the \\emph{colour-length} of $w$ by putting $\\|w\\|=\\# S_w$ (the cardinality of $S_w$).\nBoth lengths satisfy the triangle inequality and we have $\\|w\\|\\le |w|$.\n\nIn\n
the case of the permutation group the colour-length has the following pictorial\ninterpretation. If $\\sigma$ is a permutation\nin $\\F S_{n+1}$ then $\\|\\sigma\\|$ equals $n+1$ minus the number\nof connected components of the diagram representing~$\\sigma$.\nNotice that $|\\sigma|$ equals the number of crossings in the diagram\n(the number of pairs of chords that cross).\n\n\\begin{tikzpicture}\n\\newcommand{\\perm}[1]{\\draw \\foreach \\x\/\\y in {#1} {(\\x*.75,1)--(\\y*.75,0)};}\n\\newcommand{\\xtext}[1]{\\draw (1.5,.5) node[fill=white] {#1};}\n\\matrix[column sep=0.8cm,row sep=0.5cm]{\n\\xtext {$\\sigma$};&\\xtext {$e$};&\\xtext {$(12)$};&\\xtext {$(12)(23)$};&\\xtext {$(12)(23)(12)$};\\\\\n&\\perm{1\/1,2\/2,3\/3}&\\perm{1\/2,2\/1,3\/3}&\\perm{1\/2,2\/3,3\/1}&\\perm{1\/3,2\/2,3\/1}\\\\\n\\xtext {$|\\sigma|$};&\\xtext {0};&\\xtext {1};&\\xtext {2};&\\xtext {3};\\\\\n\\xtext {$\\|\\sigma\\|$};&\\xtext {0};&\\xtext {1};&\\xtext {2};&\\xtext {2};\\\\};\n\\end{tikzpicture}\n\n\nIt will be convenient to define\n\\begin{equation}\n\\label{def:block-length}\n\\|w\\|_s\\colon=\n\\begin{cases}\n0&\\text{if $s\\not\\in S_w$,}\\\\\n1&\\text{if $s\\in S_w$,}\n\\end{cases}\n\\end{equation}\nthen, clearly, $\\|w\\|=\\sum_{s\\in S}\\|w\\|_s$.\n\n\\def\\mathop{\\rm Rad}\\nolimits{\\mathop{\\rm Rad}\\nolimits}\n\n\\section{Positive definite functions}\n\nA complex function $\\varphi$ on a group $\\Gamma$ is called\n\\emph{positive definite}\nif we have\n\\[\n\\sum_{x,y\\in \\Gamma}\\varphi(y^{-1}x)\\alpha(x)\\overline{\\alpha(y)}\\ge0\n\\]\nfor every finitely supported function $\\alpha\\colon\\Gamma\\to\\B C$.\n\nA positive definite function $\\varphi$ is Hermitian and satisfies $|\\varphi(x)|\\le\\varphi(e)$\nfor all $x\\in \\Gamma$. Usually it is assumed that $\\varphi$ is \\emph{normalised}, i.e.\n
that $\\varphi(e)=1$.\n\nIn this and the following sections we discuss the \\textit{radial functions} on Coxeter groups.\nThese are functions which depend on $|w|$ rather than on $w$.\n\nWe call a function $\\varphi$ on $(W,S)$ \\emph{colour-dependent} if\n$\\varphi(w)$ depends only on $S_w$. We call it \\emph{colour-radial}\nif it depends only on $\\|w\\|$.\n\nAn Abelian Coxeter group generated by $S$ is isomorphic to the direct product $\\oplus_{s\\in S}\\B Z\/2$.\nOn these groups the lengths $|\\cdot|$ and $\\|\\cdot\\|$ coincide and all functions are colour-dependent.\n\nThe main example of a positive definite function will be the \\emph{Riesz--Coxeter function}. Given a sequence $\\B q=(q_s)_{s\\in S}$\nwe define $R_{\\B q}(w)=\\prod_{s\\in S}q_s^{\\|w\\|_s}=\\prod_{s\\in S_w}q_s$.\nWe will abuse notation and denote by $R_{\\B q}$ also the associated operator\n$\\sum_{w\\in W}R_{\\B q}(w)w$. That is,\n$$\nR_{\\B q}=1+\\sum_{s\\in S}q_ss+\\sum_{w: S_w=\\{s_1,s_2\\}}q_{s_1}q_{s_2}w+\\sum_{w: S_w=\\{s_1,s_2,s_3\\}}q_{s_1}q_{s_2}q_{s_3}w+\\dots\n$$\n\nIn the case where all $q_s=q$ we get $R_{\\B q}=\\sum q^{\\|w\\|}w$.\n\nThis generalises the classical case of Rademacher--Walsh functions in the\nRademacher group $\\mathop{\\rm Rad}\\nolimits_n$.\n
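As a numerical sanity check for the colour-radial specialisation $w\mapsto q^{\|w\|}$ (a Python sketch of ours, not part of the paper), positive definiteness can be tested on a small Coxeter group by checking whether the kernel $(x,y)\mapsto q^{\|y^{-1}x\|}$ is positive semidefinite. On $\F S_3$ the test succeeds, for instance, for $q\in\{-1/2,0,1/2,1\}$ and fails at $q=-0.9$ (compare the corollary on $\F S_\infty$ proved later, where only $q\in[0,1]$ remains):

```python
import numpy as np
from itertools import permutations

def compose(a, b):
    # (a b)(i) = a(b(i))
    return tuple(a[b[i]] for i in range(len(a)))

def inverse(a):
    inv = [0] * len(a)
    for i, image in enumerate(a):
        inv[image] = i
    return tuple(inv)

def colour_length(p):
    # s_i belongs to S_p iff p does not preserve {0, ..., i-1}
    n = len(p) - 1
    return sum(1 for i in range(1, n + 1) if set(p[:i]) != set(range(i)))

def is_positive_definite(q, n=3, tol=1e-9):
    # Gram-type matrix K[x, y] = q^{||y^{-1} x||} over the whole group
    group = list(permutations(range(n)))
    K = np.array([[q ** colour_length(compose(inverse(y), x))
                   for y in group] for x in group])
    return np.linalg.eigvalsh(K).min() >= -tol

print([is_positive_definite(q) for q in (1.0, 0.5, 0.0, -0.5, -0.9)])
# [True, True, True, True, False]
```

The sketch uses the parabolic-subgroup criterion for the colour on $\F S_3$; the function and variable names are hypothetical, chosen only for this illustration.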
If we denote the generator of the $i$-th factor $\\B Z\/2$\nof the latter by the symbol $r_i$ then, by definition, $r_i^2=1$ and $r_ir_j=r_jr_i$ and\n$$\nR_{\\B q}=\\prod\\limits_{i=1}^n(1+q_ir_i).\n$$\n\n\\section{Rademacher groups}\n\n\nIn this section we are going to study positive definite radial functions on the Abelian Coxeter groups,\n$(W,S)=\\mathop{\\rm Rad}\\nolimits_{S}$.\nSince positive definiteness is tested on functions with finite support, we can assume that $S$ is countable.\nIf $\\#S=n$ we will write $\\mathrm{Rad}_{n}$ instead of $\\mathrm{Rad}_{S}$.\nGiven $n\\in\\B N\\cup\\{\\infty\\}$, we denote by $P_n$ the class of all functions $f\\colon\\{0,1,\\dots,n\\}\\to\\B R$ for $n$ finite, and $f\\colon\\B N\\to\\B R$ if\n$n=\\infty$, such that\n$\\varphi(w)=f(|w|)$ is a normalised positive definite function on $\\mathop{\\rm Rad}\\nolimits_n$.\n\nThe following observation is straightforward.\n\n\\begin{proposition}\nAssume that $1\\leq m< n\\leq\\infty$ and $f\\in P_n$. Then the restriction of $f$ to $\\{0,\\dots,m\\}$\nbelongs to $P_m$. A function $f$ belongs to $P_\\infty$ if and only if all its restrictions to $\\{0,\\dots,m\\}$\nfor any $m\\in\\B N$ belong to $P_m$.\n\\end{proposition}\n\n\\begin{theorem}\nAssume $n$ is finite.\nThe set $P_n$ forms a simplex whose\nvertices (extreme points) are $f^n_l(k)={n\\choose l}^{-1}\\sum_{i=0}^l(-1)^i{k\\choose i}{n-k\\choose l-i}$,\nwhere $0\\leq l\\leq n$. Equivalently, every normalised radial positive definite function on the group $\\mathop{\\rm Rad}\\nolimits_n$\nis of the form\n$$\n\\varphi(x)=\\sum_{l=0}^n\\lambda_lf^n_l(|x|),\n$$\nwhere the sequence of nonnegative numbers $(\\lambda_l)_{l=0}^n$ is unique and satisfies\n$\\sum_{l=0}^n\\lambda_l=1$.\n\\end{theorem}\n\n\\begin{proof}\nWe can identify the dual group $\\widehat{\\mathop{\\rm Rad}\\nolimits_n}$ of $\\mathop{\\rm Rad}\\nolimits_n$ with $\\mathop{\\rm Rad}\\nolimits_n$ via the pairing\n$(x,y)=(-1)^{\\sum_{i=1}^n x_iy_i}$.\n
By Bochner's theorem every normalised\npositive definite function $\\varphi$ on $\\mathop{\\rm Rad}\\nolimits_n$ is of the form\n\\[\n\\varphi(x)=\\int_{\\widehat\\mathop{\\rm Rad}\\nolimits_{n}}(x,y)\\,\\mu(dy),\n\\]\nfor some probability measure $\\mu$. Clearly, such a function\nis radial if and only if $\\mu$ is invariant under the action of $\\F S_n$.\n\nAmong such measures the extreme ones are the measures $\\mu_l$ for $0\\leq l\\leq n$,\nwhere $\\mu_l$ is uniformly distributed on the elements of length $l$.\nMoreover,\n\\[\n\\varphi(x)=\\int_{\\widehat\\mathop{\\rm Rad}\\nolimits_{n}}(x,y)\\,\\mu_l(dy)\n=f^n_l(|x|)\n\\]\nas claimed.\n\\end{proof}\n\nThe following theorem is a version of the classical de Finetti Theorem\n(see \\cite[p. 223]{MR0270403}) for the infinite Rademacher group.\n\n\\begin{theorem}\nAssume that $\\varphi$ is a radial function on the Rademacher group\n$\\mathop{\\rm Rad}\\nolimits_\\infty$.\nThen $\\varphi$ is normalised positive definite if and only if there exists\na probability measure $\\mu$ on $[-1,1]$ such that\n$$\n\\varphi(x)=\\int_{-1}^1 q^{|x|}\\mu(dq).\n$$\nThis measure $\\mu$ is unique.\n\\end{theorem}\n\n\\begin{proof}\nSince the function $q^{|x|}$ is normalised positive definite for $q\\in[-1,1]$, the ``if'' implication is obvious.\n\nAssume that $\\varphi$ is normalised positive definite.\nThe group $\\mathop{\\rm Rad}\\nolimits_{\\infty}$ is discrete and Abelian and its dual is the compact group\n$\\widehat{\\mathop{\\rm Rad}\\nolimits_\\infty}=\\prod_{i=1}^{\\infty}\\B Z\/2$.\nBy Bochner's theorem, there exists a probability measure $\\eta$ on $\\widehat\\mathop{\\rm Rad}\\nolimits_{\\infty}$\nsuch that\n\\[\n\\varphi(x)=\\int_{\\widehat\\mathop{\\rm Rad}\\nolimits_{\\infty}}(x,y)\\,d\\eta(y),\n\\]\nwhere for $x=(x_1,x_2,\\ldots)\\in \\mathop{\\rm Rad}\\nolimits_\\infty$, $y=(y_1,y_2,\\ldots)\\in \\widehat\\mathop{\\rm Rad}\\nolimits_{\\infty}$\nwe put $(x,y)=(-1)^{\\sum_{i=1}^\\infty x_i y_i}$.\nThe radiality of\n
$\\varphi$ is equivalent to the fact that for every permutation $\\sigma\\in\\mathfrak{S}_\\infty$\nwe have $\\varphi(x)=\\varphi(\\sigma(x))$, where $\\sigma(x)=(x_{\\sigma(1)},x_{\\sigma(2)},\\ldots)$.\nThis, in turn, implies that $\\eta$ is $\\sigma$-invariant for every $\\sigma\\in\\mathfrak{S}_\\infty$,\ni.e. we have $\\eta(A)=\\eta(\\sigma(A))$ for every Borel subset $A\\subset\\widehat\\mathop{\\rm Rad}\\nolimits_{\\infty}$.\n\nFor a sequence $\\B \\epsilon=(\\epsilon_i)_{i=1}^n\\in\\{0,1\\}^n$ we define\n$C_{n}(\\B \\epsilon)\\subseteq\\widehat\\mathop{\\rm Rad}\\nolimits_{\\infty}$ by\n\\[\nC_{n}(\\B \\epsilon)=\\{y\\in\\widehat\\mathop{\\rm Rad}\\nolimits_{\\infty}|y_i=\\epsilon_i:1\\leq i\\leq n\\},\n\\]\nin particular $C_0(\\varnothing)=\\widehat\\mathop{\\rm Rad}\\nolimits_{\\infty}$.\nThen we have $\\eta(C_{n}(\\epsilon))=\\eta(C_{n}(\\B\\epsilon'))$\nif $\\epsilon'_i=\\epsilon_{\\sigma(i)}$ for some $\\sigma\\in\\mathfrak{S}_n$\nand every $1\\leq i\\leq n$.\nFor $\\varepsilon \\in \\B R$ we put\n\\[\n\\varepsilon^n=\\underbrace{\\varepsilon,\\varepsilon,\\ldots,\\varepsilon}_{n} \n\\]\nand $a_n=\\eta(C_{n}(1^{n}))$. Moreover, for $n,k\\ge0$ we define the difference operators $\\Delta^{k}a_n$ by induction:\n$\\Delta^{0}a_n=a_n$ and $\\Delta^{k+1}a_n=\\Delta^{k}a_{n+1}-\\Delta^{k}a_{n}$.\nWe claim that\n\\begin{equation}\\label{deltakn}\n(-1)^{k}\\Delta^{k}a_{n}=\\eta\\left(C_{n+k}\\left(1^n0^k\\right)\\right).\n\\end{equation}\nDenoting the right hand side of (\\ref{deltakn}) by $c(n,k)$ we note that $c(n,0)=a_n$ and\n\\[\nC_{n+k+1}\\left(1^n0^k0\\right)\\cup\nC_{n+k+1}\\left(1^n0^k1\\right)\n=C_{n+k}\\left(1^n0^k\\right),\n\\]\nis a disjoint union. This implies\n\\[\nc(n,k+1)=c(n,k)-c(n+1,k).\n\\]\nThis formula, by induction on $k$, leads to (\\ref{deltakn}).\n\nFrom (\\ref{deltakn}) we see that the sequence $(a_n)$ is \\textit{completely monotone},\ni.e. 
that $(-1)^{k}\\Delta^{k}a_n\\ge0$ for all $n,k\\ge0$.\nBy the celebrated theorem of Hausdorff (see \\cite[S\\\"atze \\textsc{\\MakeLowercase{ii}} und \\textsc{\\MakeLowercase{iii}}]{MR1544467}),\nthere exists a unique probability measure\n$\\rho$ on $[0,1]$ such that\n\\begin{equation}\n\\label{eq-Haus}\n(-1)^{k}\\Delta^{k}a_n=\\int_{0}^{1} u^n(1-u)^k\\,d\\rho(u).\n\\end{equation}\n(Note that Equation (\\ref{eq-Haus}) for arbitrary $k\\geq 0$ follows from the case $k=0$.)\n \nFor $x=\\left(1^n0^\\infty\\right)\\in \\mathop{\\rm Rad}\\nolimits_\\infty$ so that $|x|=n$, we have\n\\begin{align*}\n\\varphi(x)&=\\int_{\\widehat\\mathop{\\rm Rad}\\nolimits_{\\infty}}(x,y)\\,d\\eta(y)\n=\\int_{\\widehat\\mathop{\\rm Rad}\\nolimits_{\\infty}}(-1)^{\\sum_{i=1}^ny_i}\\,d\\eta(y)\\\\\n&=\\sum_{\\epsilon\\in\\{0,1\\}^{n}}(-1)^{\\sum_{i=1}^n\\epsilon_i}\\eta(C_{n}(\\epsilon))\n=\\sum_{k=0}^{n}\\binom{n}{k}(-1)^{k}\\eta\\left(C_n\\left(1^k0^{n-k}\\right)\\right)\\\\\n&=\\sum_{k=0}^{n}\\binom{n}{k}(-1)^{k}\\int_{0}^{1} u^k(1-u)^{n-k}\\,d\\rho(u)\n=\\int_{0}^{1}(1-2u)^{n}\\,d\\rho(u)\\\\\n&=\\int_{-1}^{1}q^n \\,d\\mu(q),\n\\end{align*}\nwhere $\\mu$ is defined by $\\mu(A)=\\rho\\left(\\frac{1}{2}+\\frac{1}{2}A\\right)$ for a Borel set $A\\subseteq[-1,1]$.\n\\end{proof}\n\n\\section{Remarks on radial positive definite functions on some infinitely generated Coxeter groups}\n\nIn this Section we extend the last theorem of the previous section to a certain class of Coxeter groups.\n\n\\begin{theorem}\nAssume that $(W,S)$ is a Coxeter system and that there is an infinite subset $S_0\\subseteq S$\nsuch that $st=ts$ for $s,t\\in S_0$.\nAssume that $\\varphi$ is a radial function on $W$ with $\\varphi(e)=1$.\nThen $\\varphi$ is positive definite if and only if there exists\na probability measure $\\mu$ on $[-1,1]$ such that\n$$\n\\varphi(\\sigma)=\\int_{-1}^1 q^{|\\sigma|}\\mu(dq).\n$$\nThis measure $\\mu$ is unique.\n\\end{theorem}\n\n\\begin{proof}\nIt is sufficient to note that the 
group generated by $S_0$ is a parabolic subgroup isomorphic to\n$\\mathop{\\rm Rad}\\nolimits_\\infty$.\n\\end{proof}\n\n\\textbf{Example.}\nFor $W=\\mathfrak{S}_\\infty$ we have $S=\\{(n,n+1):n\\in\\B N\\}$.\nThen we can take $S_0=\\{(2n-1,2n):n\\in\\B N\\}$. A similar\n$S_0$ can be found in infinitely generated groups of type \\textsc{\\MakeLowercase{B}} and \\textsc{\\MakeLowercase{D}}.\n\n\\begin{problem}\nFor which $-1\\leq q\\leq1$, $q\\neq 0$, is the positive definite function\n$q^{|x|}$ on $\\F S_\\infty$ an extreme point in the set of\nnormalised positive definite functions?\n\\end{problem}\n\n\\begin{theorem}\\label{T:frac}\nThe function $\\varphi_p(\\sigma)=e^{-t|\\sigma|^p}$ is positive definite on $\\F S_\\infty$ for all $t\\geq0$\nif and only if $0\\leq p\\leq1$.\n\\end{theorem}\n\n\\begin{proof}\n{\\em A contrario.} Assume that for some $p>1$ and $t_0>0$ the function $\\psi_p(\\sigma)=e^{-t_0|\\sigma|^p}$\nis positive definite on $\\F S_\\infty$.\n\nFor $q_0=e^{-t_0}$, choosing $\\sigma$ such that $|\\sigma|=2n$ we have\n$q_0^{(2n)^p}=\\int_{-1}^1q^{2n}\\,d\\mu_0(q)$ for some probability measure $\\mu_0$\non $[-1,1]$.\n
Since $\\left(\\int_{-1}^1q^{2n}\\,d\\mu_0(q)\\right)^{1\/n}$ tends to $\\max\\{q^2|q\\in\\mathop{\\hbox{\\rm supp}}\\mu_0\\}$\nwhile $\\left(q_0^{(2n)^p}\\right)^{1\/n}\\to 0$, we conclude that $\\mu_0$ is the Dirac measure at $0$,\nwhich is a contradiction.\n\nThe ``if'' part is standard.\nWe need to show that $f(x)=e^{-tx^p}$ is the Laplace transform of some probability measure supported on $[0,\\infty)$,\nso that $f$ is a mixture of functions of the form $e^{-sx}$.\n\nBy the characterisation of Laplace transforms (see \\cite[Satz \\textsc{\\MakeLowercase{iii}}]{MR1544467})\nthis is equivalent to \\emph{complete monotonicity}, that is, $(-1)^nf^{(n)}\\geq0$ for all $n=0,1,\\dots$.\nAnd indeed, by induction, $(-1)^nf^{(n)}$ is a positive linear\ncombination of positive functions of the form $x^{pj-n}f(x)$ for $1\\leq j\\leq n$.\n\\end{proof}\n\nThe measures with Laplace transforms $e^{-tx^p}$ for $t\\geq0$ and $0\\leq p\\leq 1$\nare studied in detail in \\cite[Ch.~\\textsc{\\MakeLowercase{ix}.11}]{MR617913}\n(see Propositions~1 and 2 there).\n\nLet us note that for groups such as $\\B Z^k$ or $\\B R^k$ with the Euclidean distance $d$\nthe functions $\\exp(-td^p)$ are positive definite for all $t\\geq 0$ and $0\\leq p\\leq 2$\n(the case $p=2$ corresponds to the Gaussian law).\n\n\\section{The longest element}\n\nIf a Coxeter group $W$ is finite, then it contains a unique element $\\omega_\\circ$\nof maximal length with respect to $|\\cdot|$.\n\nFrom the definition it is clear that a function $\\varphi$ on a group $\\Gamma$ with values in the field\nof complex numbers $\\B C$ is {\\em positive definite}\nif and only if $\\sum_{g\\in\\Gamma}\\varphi(g)g$ is a nonnegative (and, for a finite group, automatically bounded)\noperator on $\\ell^2\\Gamma$. (We will identify $g\\in \\Gamma$ with $\\lambda(g)\\in B(\\ell^2\\Gamma)$,\nwhere $\\lambda$ is the left regular representation, for short.)\n\nLet $W$ be a finite Coxeter group.\n
The following two statements are well known.\n\\begin{enumerate}[label={(\\textsc{\\alph*})}]\n\\item The function $q^{|w|}$ is positive definite for any $0\\leq q\\leq 1$.\n\\item The function $\\Delta(w)=|\\omega_\\circ|\/2-|w|$ is positive definite.\n\\end{enumerate}\nThe first one was proven in \\cite{MR950825} (even for infinite Coxeter groups and also for\n$-1\\leq q\\leq 1$) while the second --- in \\cite[Proposition~6]{MR2009841}.\nHere we give a short direct proof of the following.\n\n\\begin{proposition}\\label{P: }\nThe above statements (\\textsc{a}) and (\\textsc{b}) are equivalent.\n\\end{proposition}\n\n\\begin{proof}\nLet $q=e^{-t}$ (with $t\\geq0$, as we assume $q\\leq 1$).\nStatement (\\textsc{a}) is equivalent to $\\Phi_t=\\sum_{w\\in W}e^{t\\Delta(w)}w=e^{t|\\omega_\\circ|\/2}\\sum_{w\\in W}q^{|w|}w$\nbeing nonnegative.\n\nAssume (\\textsc{a}).\nRecall first that $|\\omega_\\circ w|=|\\omega_\\circ|-|w|=|w\\omega_\\circ|$. Therefore\n$|\\omega_\\circ|\/2-|\\omega_\\circ w|=-(|\\omega_\\circ|\/2-|w|)$, i.e.\n$\\Delta(\\omega_\\circ w)=-\\Delta(w)$ and similarly, $\\Delta(w\\omega_\\circ)=-\\Delta(w)$.\n\nThe equality $\\Delta(\\omega_\\circ w)=\\Delta(w\\omega_\\circ)$ implies that $\\omega_\\circ$\n(and thus $Q=(1-\\omega_\\circ)\/2$) commutes with $\\Delta$ (and thus $\\Phi_t$).\nSince $Q=Q^2$ is nonnegative we conclude that\n$$\nt^{-1}\\Phi_tQ=\\sum_{w\\in W}\\frac{e^{t(|\\omega_\\circ|\/2-|w|)}-e^{t(|\\omega_\\circ|\/2-|w\\omega_\\circ|)}}{2t}w\n=\\sum_{w\\in W}\\frac{\\sinh(t\\Delta(w))}{t}w\n$$\nis nonnegative.\n
Therefore, taking the limit as $t\\to0$, we obtain that\n$\\sum_{w\\in W}\\Delta(w)w$ is nonnegative.\nThus (\\textsc{b}).\n\nAssuming (\\textsc{b}) and using the Schur product theorem, which says that the (pointwise)\nproduct of positive definite functions is positive definite, we get that\n$$\n\\Phi_t=\\sum_{w\\in W} e^{t\\Delta(w)}w\n=\\sum_{n\\geq 0}\\frac{t^n}{n!}\\left(\\sum_{w\\in W}\\Delta(w)^nw\\right)\n$$\nis nonnegative.\nThus (\\textsc{a}).\n\\end{proof}\n\n\\section{Colour-dependent positive definite functions on Coxeter groups}\n\nThe question of which colour-dependent or colour-radial functions\nare positive definite on Coxeter groups is wide open.\nIn this section we provide some sufficient conditions.\nIn the next section we will\nexamine the dihedral groups in full detail.\n\n\\begin{lemma}\\label{alemmasubgroup}\nLet $H$ be a subgroup of a group $\\Gamma$ of index $d$.\nThen the function $\\varphi_r$ defined by $\\varphi_{r}(x)=1$ if $x\\in H$\nand $\\varphi_{r}(x)=r$ otherwise is positive definite on $\\Gamma$\nif and only if $r\\in[-1\/(d-1),1]$,\nwith the natural convention that if $d=\\infty$ then $-1\/(d-1)=0$.\n\\end{lemma}\n\nNote that if $H=\\{1\\}$ then $d=|\\Gamma|$.\n\n\\begin{proof}\nFirst assume that $d$ is finite and let us enumerate the left cosets:\n\\[\n\\{gH:g\\in \\Gamma\\}=\\{H_1,H_2,\\dots,H_d\\}.\n\\]\nNote that for $x\\in H_i$, $y\\in H_j$ we have $y^{-1}x\\in H$\nif and only if $i=j$.\n
Therefore, for $r_0=-1\/(d-1)$\nand for a finitely supported\ncomplex function $f$ on $\\Gamma$ we have\n\\[\n\\sum_{x,y\\in \\Gamma} \\varphi_{r_0}(y^{-1}x)f(x)\\overline{f(y)}\n=\\frac{1}{d-1}\\sum_{1\\le i<j\\le d}\\left|F_i-F_j\\right|^2\\ge0,\n\\]\nwhere $F_i=\\sum_{x\\in H_i}f(x)$.\nHence $\\varphi_{r_0}$ is positive definite, and for $r_0\\le r\\le1$ the function $\\varphi_r$ is a convex combination of $\\varphi_{r_0}$ and the constant function $\\varphi_1\\equiv1$, so it is positive definite as well.\nConversely, if $\\varphi_r$ is positive definite then, choosing one element $x_i$ from each coset $H_i$, we get\n\\[\n0\\le\\sum_{i,j=1}^{d}\\varphi_r(x_j^{-1}x_i)=d+(d^2-d)r,\n\\]\nso $r\\ge-1\/(d-1)$, while $r\\le1$ follows from $|\\varphi_r(x)|\\le\\varphi_r(e)=1$.\nIf $d=\\infty$, the same argument applied to finitely many cosets yields $r\\ge0$; on the other hand, for $0\\le r\\le1$ the function $\\varphi_r=(1-r)\\varphi_0+r\\varphi_1$ is a convex combination of the characteristic function $\\varphi_0$ of the subgroup $H$ and the constant function $\\varphi_1$, hence positive definite.\n\\end{proof}\n\n\\begin{proposition}\nAssume that $(W,S)$ is a Coxeter system and that $s_0,s_1,\\dots,s_n\\in S$ are distinct generators such that $s_0s_k\\ne s_ks_0$ (i.e.\\ $m(s_0,s_k)>2$) for $1\\le k\\le n$.\nIf the function $w\\mapsto r^{\\|w\\|}$ is positive definite on $W$,\nthen $\\sfrac{-1}{(n-1)}\\le r^3\\le1$.\n\nIf there is an element $s_0\\in S$ for which there are infinitely many $s\\in S$\nsuch that $s_0s\\ne ss_0$ then $ r^{\\|w\\|}$ is positive definite on $W$ if and only if\\\/ $0\\le r\\le 1$.\n\\end{proposition}\n\n\\begin{proof}\nConsider the elements $w_k=s_0s_ks_0$.\nNote that for $k\\ne l$ we have $\\|w_l^{-1}w_k\\|=3$.\nIf $\\varphi_r$ is positive definite on $W$\nthen we have\n\\[\n0\\le\\sum_{k,l=1}^{n}\\varphi_r(w_l^{-1}w_k)=\nn+(n^2-n)r^3,\n\\]\nwhich implies $r^3\\ge\\sfrac{-1}{(n-1)}$.\n\\end{proof}\n\n\\begin{corollary}\nThe function $w\\mapsto q^{\\|w\\|}$ on $\\F S_\\infty$ is positive definite if and only if\\\/ $0\\leq q\\leq 1$.\n\\end{corollary}\n\n\\begin{problem}\nIn view of the above it is natural to ask the following.\nIs it true that {\\em every} normalised positive definite colour-radial\nfunction $\\phi\\colon\\F S_\\infty\\to\\B R$ is of the form $\\phi(\\sigma)=\\int_0^1 q^{\\|\\sigma\\|}\\,d\\mu(q)$\nfor some probability measure $\\mu$ on $[0,1]$?\n\\end{problem}\n\n\\section{Dihedral groups}\n\nIn this part we are going to examine the class of colour-dependent\npositive definite functions in the case of the simplest nontrivial Coxeter groups.\nAssume that $W=\\B D_{2n}=\\langle s,t\\mid s^2=t^2=(st)^n=1\\rangle$ (i.e.\\ the group of symmetries of a regular\n$n$-gon), and define a colour-dependent function on $W$:\n\\begin{equation}\\label{functiondichedralpqr}\n\\phi(w)=\n\\begin{cases}\n1&\\text{if $w=e$,}\\\\\np&\\text{if $w=s$,}\\\\\nq&\\text{if $w=t$,}\\\\\nr&\\text{otherwise.}\n\\end{cases}\n\\end{equation}\nIf $p=q$ then $\\phi$ is colour-radial.\nWe are going to determine for which parameters $p,q,r$ the function\n
$\\phi$ is positive definite on $W$.\nIt is easy to observe the necessary conditions: $p,q,r\\in[-1,1]$.\nMoreover, since $\\left\\langle st\\right\\rangle$\nis a cyclic subgroup of order $n$, Lemma~\\ref{alemmasubgroup}\nimplies a necessary condition: $-1\/(n-1)\\le r\\le1$.\n\n\\subsection*{Finite dihedral groups}\n\nAssume that $W$ is a finite dihedral group, $W=\\B D_{2n}$,\nso that $(st)^n=1$.\nWe will use the following version of Bochner's theorem:\nA function $f$ on a compact group $G$ is positive definite\nif and only if its \\emph{Fourier transform}:\n\\[\\widehat{f}(\\pi)=\\int_{G} f(x)\\pi(x^{-1})dx\n\\]\nis a positive operator for every $\\pi\\in\\widehat{G}$,\nwhere $\\widehat{G}$ denotes the dual object of $G$, i.e. the family of\nall equivalence classes of unitary irreducible representations of $G$, see~\\cite{MR1363490}.\nThen we have\n\\[\nf(x)=\\sum_{\\pi\\in\\widehat{G}}d_{\\pi}\\mathop{\\rm tr}\\left[\\widehat{f}(\\pi)\\pi(x)\\right].\n\\]\n\nTherefore, for every irreducible representation $\\pi$ of $\\B D_{2n}$\nwe are going to find\n\\[\n\\widehat{\\phi}(\\pi)=\\frac{1}{2n}\\sum_{g\\in G}\\phi(g)\\pi(g^{-1}).\n\\]\n\nWe will identify $s$ with $(0,-1)$ and $t$ with $(1,-1)$.\nIf $n$ is odd then $\\B D_{2n}$ possesses two characters:\n$\\chi_{+,+}$ such that $\\chi_{+,+}(w)=1$ for every $w\\in\\B D_{2n}$\nand $\\chi_{-,-}$ such that $\\chi_{-,-}(s)=\\chi_{-,-}(t)=-1$.\nIf $n$ is even then we have two additional characters\n$\\chi_{+,-}$ and $\\chi_{-,+}$\nsuch that $\\chi_{+,-}(s)=\\chi_{-,+}(t)=1$ and $\\chi_{+,-}(t)=\\chi_{-,+}(s)=-1$.\nIt is easy to check that\n\\[\n2n\\widehat{\\phi}(\\chi_{+,+})=1+p+q+(2n-3)r,\n\\]\n\\[\n2n\\widehat{\\phi}(\\chi_{-,-})\n=1-p-q+r,\n\\]\nwhich gives\n\\[\n-1-(2n-3)r\\le p+q\\le 1+r\n\\]\nand, for $n$ even,\n\\[\n2n\\widehat{\\phi}(\\chi_{+,-})=1+p-q-r,\n\\]\n\\[\n2n\\widehat{\\phi}(\\chi_{-,+})=1-p+q-r,\n\\]\nwhich implies\n\\[\n|p-q|\\le 1-r.\n\\]\nWe also have the family of two-dimensional representations\n
$U_a$:\n\\begin{align*}\nU_a(k,1)&=\\begin{pmatrix}\ne^{2\\pi ika\/n}&0\\\\\n0&e^{-2\\pi ika\/n}\n\\end{pmatrix},\\\\\nU_a(k,-1)&=\\begin{pmatrix}\n0&e^{2\\pi ika\/n}\\\\e^{-2\\pi ika\/n}&0\n\\end{pmatrix},\n\\end{align*}\nwhere $a=1,2,\\dots,\\left\\lfloor\\frac{n-1}{2}\\right\\rfloor$.\nThen for the function given by (\\ref{functiondichedralpqr}) we have\n\\begin{align*}\n2n\\widehat{\\phi}(U_a)&=(1-r)\\mathrm{Id}+(p-r)U_a(0,-1)+(q-r)U_a(1,-1)\\\\\n&=\\begin{pmatrix}\n1-r&p-r+(q-r)e^{2\\pi ia\/n}\\\\p-r+(q-r)e^{-2\\pi ia\/n}&1-r\n\\end{pmatrix}.\n\\end{align*}\nThis matrix is positive definite if and only if $r\\le 1$\nand\n\\[\n\\left|p-r+(q-r)e^{2\\pi ia\/n}\\right|\\le 1-r.\n\\]\nTherefore we have\n\\begin{proposition}\\label{propositiondihedralpqr}\nThe function $\\phi$ given by (\\ref{functiondichedralpqr}) is positive definite on $\\B D_{2n}$\nif and only if $$1+p+q+(2n-3)r\\ge0,\\qquad 1-p-q+r\\ge0$$\n(plus $$1+p-q-r\\ge0,\\qquad 1-p+q-r\\ge0$$ whenever $n$ is even)\nand\n\\[\n\\left|p-r+(q-r)e^{2\\pi ia\/n}\\right|\\le 1-r.\n\\]\nfor $a=1,2,\\dots,\\left\\lfloor\\frac{n-1}{2}\\right\\rfloor$.\n\\end{proposition}\n\nLet us confine ourselves to colour-radial functions.\n\n\\begin{corollary}\nAssuming that $p=q$, the function $\\phi$ defined by (\\ref{functiondichedralpqr})\nis positive definite on $W=\\B D_{2n}$ if and only if\n\\[\n\\max\\left\\{\\frac{-2p-1}{2n-3},2p-1\\right\\}\\le r\\le \\frac{1+2p\\cos(\\sfrac{\\pi}n)}{1+2\\cos(\\sfrac{\\pi}n)},\n\\]\ni.e. 
if and only if the point $(p,r)$ belongs to the triangle whose vertices are\n\\[\n\\left(\\frac{1-n-\\cos(\\sfrac{\\pi}n)}{1+(2n-1)\\cos(\\sfrac{\\pi}n)},\\frac{1-\\cos(\\sfrac{\\pi}n)}{1+(2n-1)\\cos(\\sfrac{\\pi}n)}\\right),\\quad\n\\left(\\frac{n-2}{2n-2},\\frac{-1}{n-1}\\right),\\quad(1,1).\n\\]\n\\end{corollary}\n\n\\begin{proof}\nFor $p=q$ the conditions from Proposition~\\ref{propositiondihedralpqr} reduce to\n\\[\n2p-1\\le r,\\quad\n-1-2p\\le (2n-3)r,\\quad\\mbox{ and }\\quad\n2\\cos(\\sfrac{\\pi}n)|p-r|\\le 1-r.\n\\]\nIt is sufficient to note that $2p-1\\le r$ implies\n$2\\cos(\\sfrac{\\pi}n)(p-r)\\le1-r$ for $p\\le1$.\n\\end{proof}\n\n\\textbf{Example.} For $\\B D_4$ the positive definiteness of $\\phi$ is equivalent to\n\\[\n-1+|p+q|\\le r\\le 1-|p-q|,\n\\]\nwhich means that the set of all possible $(p,q,r)$ forms a tetrahedron\nwith vertices $(-1,1,-1)$, $(1,-1,-1)$, $(-1,-1,1)$, $(1,1,1)$.\nFor $p=q$ the condition reduces to $2|p|-1\\le r\\le1$.\n\nIn the case of $\\B D_{6}$ Proposition~\\ref{propositiondihedralpqr} leads to the following conditions:\n\\[\n1-p-q+r\\ge0,\\qquad 1+p+q+3r\\ge0,\n\\]\n\\[\n1-r\\ge\\sqrt{p^2+q^2+r^2-pq-pr-qr},\n\\]\nwhich can be expressed as\n\\[\n\\max\\left\\{\\frac{-1-p-q}{3},p+q-1 \\right\\}\\le r\\le\\frac{1-p^2-q^2+pq}{2-p-q}.\n\\]\n\n\\subsection*{The infinite dihedral group}\n\n\nHere we are going to study $W=\\B D_{\\infty}$.\n\n\\begin{proposition}\nThe function $\\phi$ given by (\\ref{functiondichedralpqr}) is positive definite on $W=\\B D_{\\infty}$\nif and only if $0\\le r$ and $|p-r|+|q-r|\\le1-r$, i.e.\n\\begin{equation}\\label{dichedralinfinite}\n\\max\\left\\{0,p+q-1\\right\\}\\le r\\le\\min\\left\\{1-|p-q|,\\frac{1+p+q}{3}\\right\\}.\n\\end{equation}\n\\end{proposition}\n\n\n\\begin{proof}\nFirst we note that the set of $(p,q,r)\\in\\B R^3$ satisfying (\\ref{dichedralinfinite})\nconstitutes a pyramid which is the convex hull of the points $(\\pm1,0,0)$, $(0,\\pm1,0)$ and $(1,1,1)$ (apex).\nFor\n
these particular parameters it is easy to see that $\\phi$ is positive definite: $(1,1,1)$ corresponds\nto the constant function $1$, $(1,0,0)$ to the characteristic function of the subgroup $\\left\\langle s\\right\\rangle=\\{1,s\\}$,\nand $(-1,0,0)$ to the character $\\chi_{-,-}$ times the characteristic function of $\\left\\langle s\\right\\rangle$.\nSimilarly for $(0,\\pm1,0)$. This, by convexity, proves that (\\ref{dichedralinfinite}) is a sufficient condition.\n\nOn the other hand, we know already that $r\\ge0$ is a necessary condition.\nLet us fix $n$ and define $W^+(n)=\\{x\\in W:|sx|<|x|\\le2n\\}$, $W^-(n)=\\{x\\in W:|tx|<|x|\\le2n\\}$ and\n\\[\nf_n(x)=\n\\begin{cases}\n\\pm1&\\text{if $x\\in W^\\pm(n)$,}\\\\\n0&\\text{otherwise.}\n\\end{cases}\n\\]\nFor $x,y\\in W^+(n)$ we have $S_{y^{-1}x}=\\varnothing$ in $2n$ cases (namely, if $x=y$),\n$S_{y^{-1}x}=\\{s\\}$ in $2n-2$ cases (namely if $|x|=2k$, $|y|=2k+1$ or vice-versa, $k=1,\\ldots,n-1$),\n$S_{y^{-1}x}=\\{t\\}$ in $2n$ cases (namely if $|x|=2k$, $|y|=2k-1$ or vice-versa, $k=1,\\ldots,n$),\nand $S_{y^{-1}x}=\\{s,t\\}$ in all the other $(2n-1)(2n-2)$ cases.\nSimilarly, for $x,y\\in W^-(n)$ we have $S_{y^{-1}x}=\\varnothing$ in $2n$ cases,\n$S_{y^{-1}x}=\\{s\\}$ in $2n$ cases,\n$S_{y^{-1}x}=\\{t\\}$ in $2n-2$ cases\nand $S_{y^{-1}x}=\\{s,t\\}$ in $(2n-1)(2n-2)$ cases.\nIf $x\\in W^+(n)$, $y\\in W^-(n)$ or vice-versa then $S_{y^{-1}x}=\\{s,t\\}$.\nSumming up, we get\n\\begin{align*}\n\\sum_{x,y\\in W}&\\phi(y^{-1}x)f_n(x)f_n(y)\\\\\n&=4n+(4n-2)p+(4n-2)q+(4n-2)(2n-2)r-8n^2 r\\\\\n&=4n+(4n-2)p+(4n-2)q-(12n-4)r.\n\\end{align*}\nTherefore for every $n\\in\\B N$ we have the necessary condition\n\\[\n1+\\left(1-\\frac{1}{2n}\\right)p+\\left(1-\\frac{1}{2n}\\right)q-\\left(3-\\frac{1}{n}\\right)r\\ge0.\n\\]\nLetting $n\\to\\infty$ we get $1+p+q\\ge3r$.\n\nPut $x_k=stst\\ldots$, $|x_k|=k$.\n
Fix $n$ and define\n\\[\ng(x)=\n\\begin{cases}\n\\chi_{-,+}(x)&\\text{if $x=x_k$ for $1\\le k\\le 4n$,}\\\\\n0&\\text{otherwise,}\n\\end{cases}\n\\]\nwhere, as before, $\\chi_{-,+}$ is the character on $W$ for which $\\chi_{-,+}(s)=-1$, $\\chi_{-,+}(t)=1$.\nThen\n\\[\n\\sum_{x,y\\in W}\\phi(y^{-1}x)g(x)g(y)=\\sum_{k,l=1}^{4n}\\phi(x_{l}^{-1}x_{k})g(x_k)g(x_l).\n\\]\nDenote $c_{k,l}=\\phi(x_{l}^{-1}x_{k})g(x_k)g(x_l)$. Then we have $c_{k,k}=1$, $1\\le k\\le 4n$,\n$c_{k,k-1}=q$ if $k$ is even, $c_{k,k-1}=-p$ if $k$ is odd, $2\\le k\\le4n$ and\n$c_{k,l}=c_{l,k}$ for all $1\\le k,l\\le 4n$. If $1\\le k,l\\le 4n$ and $|k-l|\\ge2$\nthen $c_{k,l}=(-1)^{j}r$, where $j$ is the total number of $s$ appearing in $x_k$ and $x_l$.\nNow it is not difficult to check that\n\\[\n\\sum_{l=1}^{4n}c_{k,l}=\n\\begin{cases}\n1+q-2r&\\text{if $k=1$ or $k=4n$,}\\\\\n1-p+q-r&\\text{if $1 0\\}$. Set\n$$\nq^\\pm_s=\\begin{cases}\\pm f(s)&\\hbox{for $s\\in S_\\pm(f)$,}\\\\0&\\hbox{otherwise.}\\end{cases}\n$$\nThen $f(s)=R_{\\B q^+}(s)-R_{\\B q^-}(s)$ as claimed. The rest of the statement holds, as\nthe Riesz--Coxeter function at the identity element equals one.\n\\end{proof}\n\nGiven a matrix $A\\in M_n(\\B C)$ and $p\\geq 1$, the Schatten $p$-class norm $\\|A\\|_{\\C S_p}$\nis defined by $\\|A\\|_{\\C S_p}=\\left(\\mathop{\\rm tr}|A|^p\\right)^{1\/p}$, where $|A|=(A^*A)^{1\/2}$.\n\nLet $\\lambda$ denote the left regular representation of a group $\\Gamma$.\n
Given a finite sum\n$f=\\sum c_g\\lambda(g)\\in\\B C[\\Gamma]$ we define the noncommutative $L^p$-norm\n$$\n\\|f\\|_{L^p(\\Gamma)}=\\left(\\tau\\left((f^*f)^{p\/2}\\right)\\right)^{1\/p},\n$$\nwhere $\\tau(f)=c_e$ is the von Neumann trace and $L^p(\\Gamma)$ is the completion\nof $\\B C[\\Gamma]$ with respect to the above norm.\n\nWe recall that a scalar-valued map $\\varphi$ on a group $\\Gamma$ is called a \\emph{completely bounded Fourier multiplier}\non $L^p(\\Gamma)$ if the associated operator\n$$\nM_\\varphi(\\lambda(g))=\\varphi(g)\\lambda(g),\\qquad g\\in \\Gamma\n$$\nextends to a completely bounded operator on $L^p(\\Gamma)$.\n\nWe let $M_{\\mathop{\\rm cb}}(L^p(\\Gamma))$ be the algebra of completely bounded Fourier multipliers equipped\nwith the norm\n$$\n\\|\\varphi\\|_{M_{\\mathop{\\rm cb}}(L^p(\\Gamma))}=\\|M_\\varphi\\otimes\\mathop{\\rm id}\\nolimits_{\\C S^p}\\|.\n$$\n\n\\def\\left\\|(a_s)_{s\\in S}\\right\\|_{\\mathop{R}\\cap \\mathop{C}}{\\left\\|(a_s)_{s\\in S}\\right\\|_{\\mathop{R}\\cap \\mathop{C}}}\n\nFollowing Pisier \\cite{MR2006539}, for $a_s\\in M_n(\\B C)$, where $s\\in S$, we define\n$$\n\\left\\|(a_s)_{s\\in S}\\right\\|_{\\mathop{R}\\cap \\mathop{C}}={\\max\\left\\{\\left\\|\\left(\\sum_{s\\in S}a_sa_s^*\\right)^{1\/2}\\right\\|_{\\C S^p},\\left\\|\\left(\\sum_{s\\in S}a_s^*a_s\\right)^{1\/2}\\right\\|_{\\C S^p}\\right\\}}.\n$$ \nFor a set $E\\subset \\Gamma$ we define the \\emph{completely bounded} constant $\\Lambda^{\\mathop{\\rm cb}}_p(E)$ as the infimum of those constants $C$\nsuch that\n$$\n\\left\\|\\sum_{s\\in E}a_s\\otimes\\lambda(s)\\right\\|_{L^p(\\Gamma)}\\leq C\\left\\|(a_s)_{s\\in E}\\right\\|_{\\mathop{R}\\cap \\mathop{C}}\n$$\nfor all matrices $a_s\\in M_n(\\B C)$ and $n\\in\\B N$.\n\n\\begin{theorem}\n\\label{T:6.1}\nIf\\\/ $a_s\\in M_n(\\B C)$, then for all $p\\geq 2$ and any Coxeter system $(W,S)$\nwe have\n$$\n\\left\\|(a_s)_{s\\in S}\\right\\|_{\\mathop{R}\\cap \\mathop{C}}\\leq\\left\\|\\sum_{s\\in S}a_s\\otimes\\lambda(s)\\right\\|_{L^p(W)}\\leq 2A'\\sqrt\n
p\\left\\|(a_s)_{s\\in S}\\right\\|_{\\mathop{R}\\cap \\mathop{C}}.\n$$\n\\end{theorem}\n\n\\begin{proof}\nIt was shown by Harcharras \\cite[Prop.~1.8]{MR859804} that $\\Lambda^{\\mathop{\\rm cb}}_p(E)$ is finite\nif and only if $E$ is an interpolation set for $M_{\\mathop{\\rm cb}}(L^p(\\Gamma))$, i.e.\nevery bounded function on $E$ can be extended to a multiplier, and\n$$\n\\Lambda^{\\mathop{\\rm cb}}_p(E)\\leq \\Lambda^{\\mathop{\\rm cb}}_p(R)\\mu^{\\mathop{\\rm cb}}_p(E),\n$$\nwhere $R$ is the generating set in the Rademacher group $\\mathop{\\rm Rad}\\nolimits_\\infty$\nand $\\mu^{\\mathop{\\rm cb}}_p(E)$ is the interpolation constant.\n\nAs shown by Buchholz \\cite[Thm.~5]{MR2213610} for $p=2n$, with $R$ the standard generating set in $\\mathop{\\rm Rad}\\nolimits_\\infty$,\n$\\Lambda_{2n}^{\\mathop{\\rm cb}}(R)=\\left((2n-1)!!\\right)^{1\/2n}\\leq A\\sqrt p$ for some absolute $A$. This was extended\nby Pisier \\cite[Thm.~9.8.2]{MR2006539} to any $p\\geq 2$, i.e.\n$$\n\\Lambda_p^{\\mathop{\\rm cb}}(R)\\leq A'\\sqrt p,\n$$\nfor an absolute constant $A'$.\n\nWe have shown in Theorem \\ref{T:5.1} that in an arbitrary Coxeter group $W$ its Coxeter generating set $S$ is\na weak Sidon set, i.e. it is an interpolation set for the Fourier--Stieltjes algebra $B(W)$.\nSince for $p\\geq 1$,\n$B(\\Gamma)$ is a subalgebra of $M_{\\mathop{\\rm cb}}(L^p(\\Gamma))$ and\n$$\n\\|\\varphi\\|_{M_{\\mathop{\\rm cb}}(L^p(\\Gamma))}\\leq\\|\\varphi\\|_{B(\\Gamma)},\n$$\nwe see that $\\mu^{\\mathop{\\rm cb}}_p(S)\\leq 2$.\n
Thus $\\Lambda^{\\mathop{\\rm cb}}_p(S)\\leq 2A'\\sqrt p$.\nThis finishes the proof of the right inequality.\n\nThe left inequality\nholds for any group $\\Gamma$ and any $S\\subset \\Gamma$ (see \\cite{MR859804}).\n\\end{proof}\n\n\\begin{remark}\nFendler \\cite{MR1967380} has shown that if for all $s,t\\in S$, $s\\neq t$, we have $m_{s,t}\\geq 3$, then\n$$\n\\Lambda_p^{\\mathop{\\rm cb}}(S)\\leq 2\\sqrt 2.\n$$\nSee also \\cite{MR0390658} and \\cite{MR1476122} for related results in the case of free Coxeter groups.\nAlso Haagerup and Pisier have shown that $\\Lambda_\\infty^{\\mathop{\\rm cb}}(S)=2$,\nwhere $\\Lambda^{\\mathop{\\rm cb}}_\\infty(E)=\\sup_{p\\geq 2}\\Lambda^{\\mathop{\\rm cb}}_p(E)$ \\cite{MR1240608}.\nSee the paper of Haagerup \\cite{MR654838}, where the best constant\nwas calculated for the set of Coxeter generators of the Rademacher group in the case when the $a_s$ are scalars.\n\\end{remark}\n\n\n\n\\section{Chromatic length function for Coxeter groups and pair partitions}\n\n\\def\\mathop{\\C{NC}}\\nolimits{\\mathop{\\C{NC}}\\nolimits}\n\\def\\mathop{\\C P}\\nolimits{\\mathop{\\C P}\\nolimits}\n\\def\\r#1{{:}\\mkern-.5\\thinmuskip{#1}\\mkern-.5\\thinmuskip{:}}\n\nLet $[2n]=\\{1,\\dots,2n\\}$. Let $2^{[2n]}$ denote the set of subsets of $[2n]$.\nBy a partition of $[2n]$ we mean $\\pi\\subset 2^{[2n]}$ such that\n$\\bigcup\\pi=[2n]$ and if $\\pi',\\pi''\\in \\pi$ then $\\pi'=\\pi''$ or\n$\\pi'\\cap\\pi''=\\varnothing$. We say that a partition $\\varrho$ is a \\emph{coarsening}\nof a partition $\\pi$ if for any $\\pi'\\in\\pi$ there exists $\\varrho'\\in\\varrho$\nsuch that $\\pi'\\subset\\varrho'$.\n\nA partition is called \\emph{crossing}\nif there exist $1\\leq a<b<c<d\\leq 2n$ such that $\\{a,c\\},\\{b,d\\}\\in\\pi$. Set $\\beta^+=\\{y|(\\exists x)\\ x\\in\\beta,\\ y>x,\\ \\{x,y\\}\\in\\pi\\}$ and\n$\\beta^-=\\{x|(\\exists y)\\ y\\in\\beta,\\ y>x,\\ \\{x,y\\}\\in\\pi\\}$. Arrange $\\beta^+=\\{y_1,\\dots,y_k\\}$\nin increasing order and $\\beta^-=\\{x_1,\\dots,x_k\\}$ in decreasing order. 
Then all\npairs $\\{x_i,y_i\\}$ will be parts of $\\r{\\pi}$.\n\nEquation (\\ref{eq:p2-nc2}) will follow from a more refined statement.\n\n\\begin{proposition}\n\\label{P:tokyo2}\nFor every $\\varpi\\in\\mathop{\\C{NC}}\\nolimits_2(2n)$\n\\begin{equation}\n\\label{eq:tokyo2}\n\\sum_{\\pi\\in\\mathop{\\C P}\\nolimits_2(2n)\\atop \\r{\\pi}=\\varpi}(-1)^{|\\pi|}q^{\\|\\pi\\|}=(1-q)^{\\mathop{\\hbox{\\rm inn}}\\nolimits(\\varpi)}.\n\\end{equation}\n\\end{proposition}\n\n\\begin{proof}\nLet us first consider the case of $\\varpi=\\overline1=\\{(i,2n+1-i)|1\\leq i\\leq n\\}$.\nClearly, $\\{\\pi|\\r{\\pi}=\\overline1\\}=\\{\\overline\\sigma|\\sigma\\in\\F S_n\\}$, and Equation\n(\\ref{eq:tokyo2}) is equivalent to Equation (\\ref{eq:wat-1}) (with all $q_s$ set to $q$) for\n$W=\\F S_n$.\n\nIn the general case, observe that $\\Phi(\\pi)$ is a coarsening of $\\varpi=\\r{\\pi}$. Yet, not every coarsening\nmay appear. The obvious condition is that for each block $\\beta$ of $\\r{\\pi}$ the pair $\\{\\min\\beta,\\max\\beta\\}$\nbelongs to $\\varpi$. For the purpose of this proof we will call such a coarsening\nadmissible. 
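The combinatorics entering Proposition \ref{P:tokyo2} can be checked by brute force. The sketch below (plain Python, ours rather than the paper's) enumerates all pair partitions of $[2n]$, tests the crossing condition stated above, and recovers the classical counts: $(2n-1)!!$ pair partitions in total, of which a Catalan number are noncrossing.

```python
import itertools

def pair_partitions(elems):
    """Yield all partitions of an even-sized list into blocks of size 2."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        for tail in pair_partitions(remaining):
            yield [(first, partner)] + tail

def is_crossing(pi):
    """True if some blocks {a,c}, {b,d} of pi satisfy a < b < c < d."""
    for block1, block2 in itertools.combinations(pi, 2):
        a, c = sorted(block1)
        b, d = sorted(block2)
        if a < b < c < d or b < a < d < c:
            return True
    return False

n = 4
all_p2 = list(pair_partitions(list(range(1, 2 * n + 1))))
nc2 = [pi for pi in all_p2 if not is_crossing(pi)]
# |P_2(2n)| = (2n-1)!! = 105 and |NC_2(2n)| = Catalan(4) = 14 for n = 4
```

The recursion always pairs the smallest remaining element first, so each pair partition is produced exactly once.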
It is clear that admissible coarsenings $\\rho$ are in one-to-one correspondence with\nsubsets of $\\varpi$ containing all outer (not inner) parts of $\\varpi$ of the form $\\{\\{\\min\\rho',\\max\\rho'\\}|\\rho'\\in\\rho\\}$.\n\n\\begin{figure}\n\\centering\n\\begin{tikzpicture}\n\\matrix[matrix of nodes,column sep=0.8cm,row sep=0.5cm]{\n\\draw plot [smooth] coordinates {(1,0) (5,1.5) (8,0)};\n\\draw plot [smooth] coordinates {(2,0) (3,0.6) (4,0)};\n\\draw plot [smooth] coordinates {(3,0) (4,0.6) (5,0)};\n\\draw plot [smooth] coordinates {(6,0) (7.5,1) (9,0)};\n\\draw plot [smooth] coordinates {(7,0) (8.5,1.0) (10,0)};\n&$\\pi$\\\\\n\\draw plot [smooth] coordinates {(1,0) (3,1) (5,1.5) (7,1) (7.5,1.2) (8,1) (9,1.2) (10,0)};\n\\draw plot [smooth] coordinates {(2,0) (3,.8) (3.5,.55) (4,.8) (5,0)};\n\\draw plot [smooth] coordinates { (3,0) (3.5,0.45) (4,0)};\n\\draw plot [smooth] coordinates { (6,0) (7,0.8) (7.5,0.55) (8,0.8) (9,0)};\n\\draw plot [smooth] coordinates { (7,0) (7.5,.45) (8,0)};\n&$\\r{\\pi}$\\\\\n\\draw plot [smooth] coordinates {(1,0) (3.5,1.5) (6,0)};\n\\draw plot [smooth] coordinates {(6,0) (6.5,.5) (7,0)};\n\\draw plot [smooth] coordinates {(7,0) (7.5,.5) (8,0)};\n\\draw plot [smooth] coordinates {(8,0) (8.5,.5) (9,0)};\n\\draw plot [smooth] coordinates {(9,0) (9.5,.5) (10,0)};\n\\draw plot [smooth] coordinates {(2,0) (2.5,.5) (3,0)};\n\\draw plot [smooth] coordinates {(3,0) (3.5,.5) (4,0)};\n\\draw plot [smooth] coordinates {(4,0) (4.5,.5) (5,0)};\n&$\\Phi(\\pi)$\\\\};\n\\end{tikzpicture}\n\\caption*{\\textsc{Figure.} Examples of $\\pi$, $\\r{\\pi}$, and $\\Phi(\\pi)$.}\n\\end{figure}\n\nLet us refine Equation (\\ref{eq:tokyo2}) further. 
For every $\\varpi\\in\\mathop{\\C{NC}}\\nolimits_2(2n)$ and any\nadmissible coarsening $\\rho$ of $\\varpi$ we have\n\\begin{equation}\n\\label{eq:tokyo3}\n\\sum_{\\pi\\in\\mathop{\\C P}\\nolimits_2(2n)\\atop \\r{\\pi}=\\varpi,\\ \\Phi(\\pi)=\\rho}(-1)^{|\\pi|}=(-1)^{\\#\\rho}.\n\\end{equation}\nEquation (\\ref{eq:tokyo2}) follows from (\\ref{eq:tokyo3}) by multiplying by $q^{n-\\#\\rho}$\nand summing over all admissible coarsenings $\\rho$ of $\\varpi$.\n\nEquation (\\ref{eq:tokyo3}) is again equivalent to Equation (\\ref{eq:wat-1})\n(for all permutation groups and all $q_s$ set to $q$) as both\nsides factor as a product over blocks of $\\rho$.\n\\end{proof}\n\n\\begin{question}\nWe have proven Equation (\\ref{eq:p2-nc2}) with the help of an embedding $\\F S_n\\ni\\sigma\\mapsto\\overline\\sigma\\in\\mathop{\\C{NC}}\\nolimits_2(2n)$\n(or several such embeddings, one for each outer block of\\\/ $\\varpi$). Corollary\n\\ref{C:wat-1} holds for any Coxeter group. Is there a corresponding formula\nconcerning some generalization of pair partitions?\n\\end{question}\n\nIn the proof of Proposition \\ref{P:tokyo} we have not assumed that $W$ was finite.\nLet us finish this section with a discussion of infinite Coxeter groups. Recall\nthat $-1$ does not lie within the radius of convergence of $W(t)$ if $W$ is not finite.\nNevertheless, $W(t)$ represents a rational function, as follows from the following result.\n\\begin{proposition}(\\cite{MR0230728},\\cite[Prop.~26]{MR0385006})\\label{P:Serre}\nLet $(W,S)$ be an infinite Coxeter system. Then\n\\begin{equation}\n\\label{eq:serre}\n\\frac{1}{W(t)}=\\sum_{T\\in\\C F}\\frac{(-1)^{\\#T}}{W_T(1\/t)},\n\\end{equation}\nwhere $\\C F$ denotes the family of subsets $T\\subset S$ such that\nthe group $W_T$ generated by $T$ is finite. In particular, $W(t)$ is the power series\nof a rational function (i.e. 
a quotient of polynomials).\n\\end{proposition}\n\nOne may ask what is the class of (infinite) Coxeter groups such that\n$W_T(-1)=0$ for any nonempty subset $T$ of generators.\nA~na\\\"ive argument that\n$$W(t)=W_{\\{s\\}}(t)W^{\\{s\\}}(t)=(1+t)W^{\\{s\\}}(t)$$\nshows that the question whether $W(-1)\\neq0$ is equivalent to whether\n$W^{\\{s\\}}(t)$ can have a pole at $t=-1$.\nOn the other hand, note that if $W$ is of type \\textsc{\\MakeLowercase{\\~A}}$\\mathstrut_2$, i.e.~$W$\nis given by a presentation $\\langle s_i:1\\leq i\\leq3|s_i^2,(s_is_j)^3:1\\leq i50$ MeV for the barrel and $E>150$\nMeV for the endcaps. \nData are then analyzed by an event classification filter \\cite{NIMOffline},\nwhich selects and streams various categories of events in different\noutput files.\n\nIn this Letter, we refer only to data collected during 2004--2005 for an integrated luminosity\n$\\mathcal{L} = 1.7~\\mathrm{fb}^{-1}$ with the most stable running conditions and the\nbest peak luminosity. \nA total of 5.1 billion $\\phi$ mesons were produced, yielding 1.7~$ \\times~10^{9}$ \\mbox{$K_{S}$}\\mbox{$K_{L}$}\\ pairs. \nAssuming BR($\\mbox{$K_{S}$} \\to 3 \\mbox{$\\pi^{0}$}$) $\\sim 1.9 \\times 10^{-9}$, about 3 signal \nevents are expected to have been produced.\n\\section{\\boldmath Event selection}\n\\label{Sec:DataSample}\nAt \\dafne\\, the mean decay length of $K_L$, $\\lambda_L$, is equal to\n$\\sim 340$ cm and about\n50\\% of $K_L$'s reach the calorimeter before \ndecaying. 
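The quoted fraction of $K_L$'s reaching the calorimeter follows from the exponential decay law with $\lambda_L \sim 340$ cm. In the back-of-the-envelope check below, the EMC inner radius of roughly 200 cm is our assumption for illustration, not a number taken from the text.

```python
import math

lam_L = 340.0   # K_L mean decay length from the text [cm]
r_emc = 200.0   # assumed EMC inner radius [cm]; illustrative, not from the text
# Fraction of K_L's surviving out to the calorimeter wall:
p_reach = math.exp(-r_emc / lam_L)
```

The result is close to 0.55, consistent with the "about 50%" quoted above.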
A very clean $K_S$ tag is provided by the $K_L$ interaction in the calorimeter \n($\\mbox{$K_{L}$}$-crash), which is identified by a cluster with polar angle\n$40^\\circ<\\theta_{cr}<140^\\circ$, not associated\nwith any track, with energy $E_{cr}>100$~MeV and with \na time corresponding to a $K_L$ velocity in the $\\phi$ rest frame $\\beta^*$\nin the range [0.17,0.28].\nThe average value of the $e^+e^-$ center of mass \nenergy $W$ is obtained with a precision of 20 keV \nfor each 200 nb$^{-1}$ running period\nusing large angle Bhabha scattering events~\\cite{kloe2008}. The value of $W$ and \nthe $K_L$-crash cluster position allow us to obtain, \nfor each event, the direction of the $K_S$ with an \nangular resolution of 1$^{\\circ}$ and a momentum \nresolution of about 2 MeV.\n\nBecause of its short decay length, $\\lambda_S \\sim 0.6$ cm, the displacement\nof the $K_S$ from the $\\phi$ decay position \nis negligible. We therefore identify as photons from $\\mbox{$K_{S}$}$ decay\nthose neutral particles that travel with $\\beta=1$ \nfrom the interaction point to the EMC (``prompt photons'').\nIn order to retain a large control sample for the \nbackground while preserving high efficiency for the \nsignal, we keep all photons satisfying $E_\\gamma>7$~MeV and\n$|\\cos \\theta|<0.915$.\nEach cluster is required to satisfy the condition \n$|t_{\\gamma}-R_{\\gamma}\/c|<{\\rm min}(3.5\\sigma_t, 2\\ {\\rm ns})$, \nwhere $t_{\\gamma}$ is the photon flight time and $R_{\\gamma}$ the path \nlength; $\\sigma_t$ also includes a contribution from \nthe finite bunch length (2--3 cm), which introduces \na dispersion in the collision time. \nThe photon detection efficiency of the calorimeter amounts to about 90\\% for $E_\\gamma$ = 20 MeV, and reaches \n100\\% above 70 MeV. 
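The prompt-photon requirements listed above can be collected into a single predicate. This is our sketch of the cuts as quoted (units MeV, cm, ns; the function name and argument layout are ours, not KLOE code):

```python
C_CM_PER_NS = 29.9792458  # speed of light [cm/ns]

def is_prompt_photon(e_gamma, cos_theta, t_gamma, r_gamma, sigma_t):
    """Apply the quoted cuts: E_gamma > 7 MeV, |cos(theta)| < 0.915 and
    |t_gamma - R_gamma/c| < min(3.5*sigma_t, 2 ns)."""
    if e_gamma <= 7.0 or abs(cos_theta) >= 0.915:
        return False
    return abs(t_gamma - r_gamma / C_CM_PER_NS) < min(3.5 * sigma_t, 2.0)
```

A photon at a path length of 200 cm arrives after about 6.67 ns, so a measured time close to that value passes the cut while a late cluster fails it.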
After tagging, the signal sample is selected by requiring six prompt photons.\nFor normalization we use the $K_S \\to 2\\pi^0$ decay, which is selected by requiring four prompt photons.\\\\\nFor both channels the expected background as well as the detector acceptance and the analysis efficiency\nare estimated using the\nMonte Carlo simulation of the experiment~\\cite{NIMOffline}.\nThe simulation incorporates a detailed geometry and material composition of the KLOE apparatus\nand most of the data taking conditions of the experiment, e.g.\\ DA$\\Phi$NE background rates,\nposition of the interaction point and beam parameters. All the processes contributing to\nthe background were simulated with a statistics twice as large as that of\nthe data sample. Moreover, for the acceptance and the analysis efficiency evaluation a dedicated\n$K_{S}\\to 3\\pi^{0}$ signal simulation was performed, based on a branching\nratio equal to the best known upper limit~\\cite{KLOE_old} increased by a factor of 30 (about 5000 events).\n\\subsection{The six-photon sample}\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.49\\textwidth]{Ecr_plbfinal.eps}\n\\includegraphics[width=0.49\\textwidth]{beta_plbfinal.eps}\n\\end{center}\n\\caption{Distributions of the $K_L$ energy deposit in the EMC ($E_{cr}$) and velocity\nin the $\\phi$ center of mass frame ($\\beta^*$) for all events in the six-photon\nsample. Black points represent data, while the MC background simulation is shown\nas a red histogram. The same distributions for events rejected\nby the track veto are shown by the black triangles (data) and green filled histograms \n(MC simulation).}\n\\label{fig1}\n\\end{figure}\nThe selection of the $K_{S}\\to 3\\pi^{0}$ decay is performed by asking for a\n$K_L$-crash and by searching for six prompt photons from the decays of the pions.\nAfter these requirements, we count 76689 events. 
For these events we perform\nfurther discriminant analysis to increase the signal to background ratio.\\\\\nThe first analysis step aims to reject fake $K_S$ tags (about 2.5\\% of the total background).\nThe distributions of $E_{cr}$ and $\\beta^*$ for the selected data sample\nand background simulations are shown in Fig.~\\ref{fig1}. In the $\\beta^*$ distribution,\nthe peak around 0.215 corresponds to genuine $K_L$ interaction in the calorimeter,\nwhile the flat distribution mainly originates from\n$\\phi \\to K_{S}K_{L} \\to (K_S \\to \\pi^+\\pi^-, K_L \\to 3\\pi^0)$ background events.\nIn this case one of the low momentum charged pions spirals in the forward direction and interacts\nin the low-$\\beta$ \nquadrupoles. This interaction produces neutral particles which simulate the signal of $K_L$\ninteraction in the calorimeter (fake $K_L$-crash), while the $K_L$ meson decays close\nenough to the interaction point to produce six prompt photons.\nTo suppress fake $K_L$-crash we first reject events having charged particles produced close to\nthe interaction region (track veto).\nThe distributions of the kinematical variables for the vetoed\nbackground events are shown in Fig.~\\ref{fig1}.\nTaking advantage of the differences in the $\\beta^*$ and $E_{cr}$ distributions between\nthe tagged $K_S$ events and the fake $K_L$-crash, we have tightened\nthe cuts on these variables: $E_{cr} > 150~\\mathrm{MeV}$ and $0.20 <\\beta^* < 0.225$ ($K_L$-crash hard).\nThis improves by a factor 12 the rejection of this background with respect to the previous\nanalysis~\\cite{KLOE_old}.\\\\\nThe second source of background originates from wrongly reconstructed $K_S\\to 2\\pi^0$ decays.\nThe four photons from this decay can be reconstructed as six due to fragmentation of the electromagnetic\nshowers (splitting). 
These events are characterized by one or two low-energy clusters reconstructed very close to\nthe position of the genuine photon interaction in the calorimeter and constitute about 67.5\\% of the background.\nAdditional clusters come from accidental time coincidence between $\\phi$ decay\nand machine background photons from DA$\\Phi$NE ($\\sim$~30\\% of the background).\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.49\\textwidth]{chi2fit_plb2.eps}\n\\end{center}\n\\caption{\nDistribution of $\\chi^2_{fit}$ for the tagged sixphoton sample for data\n(black points), background simulation (solid histogram), and simulated $K_S \\to 3\\pi^0$ signal\n(dashed histogram).}\n\\label{fig:chi2}\n\\end{figure}\nAfter tagging with the $K_L$-crash hard algorithm and applying the track veto we remain\nwith a sample of about 50000 six-photon events.\nA kinematic fit with 11 constraints has been performed imposing\nenergy and momentum conservation, the kaon mass and the velocity of the six photons in the final state.\nThe $\\chi^2$ distribution of the fit for data and background simulation, $\\chi^2_{fit}$, is shown in\nFig.~\\ref{fig:chi2} together with the expected distribution for signal events.\nCutting on $\\chi^2_{fit}$ reduces by about 30\\% the remaining background while keeping the signal efficiency\nat 70\\% level.\n\\\\\nIn order to improve rejection of events with split and accidental clusters, we have exploited\nthe correlation between two $\\chi^2$-like variables named $\\zeta_{2\\pi}$ and $\\zeta_{3\\pi}$. 
\n$\\zeta_{2\\pi}$ is calculated by an algorithm selecting the best four out of six clusters satisfying\nthe kinematic constraints of the two-body decay in the $K_S \\to 2\\pi^0 \\to 4\\gamma$\nhypothesis:\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.49\\textwidth]{chi2pi3data_plb.eps}\n\\includegraphics[width=0.49\\textwidth]{chi2pi3sig_plb.eps}\n\\end{center}\n\\caption{Distributions of events in the $\\zeta_{3\\pi}$-$\\zeta_{2\\pi}$ plane,\nfor six-photon sample tagged by $K_L$-crash for data (left),\nand for the simulated $K_S\\to 3\\pi^0$ decays (right).\nThe boundaries of the background control regions B1, B2, B3, B4, B5 and\nthe signal region S are as specified in the text.}\n\\label{fig3}\n\\end{figure}\n\\begin{align}\n\\zeta_{2\\pi} &=~\\frac{(m_{1 \\gamma\\gamma} - m_{\\pi^0})^2}{\\sigma^{2}_{2\\pi}}\n+ \\frac{(m_{2 \\gamma\\gamma} - m_{\\pi^0})^2}{\\sigma^{2}_{2\\pi}}\n+ \\frac{(\\theta_{\\pi\\pi} - \\pi)^2}{\\sigma^2_{\\theta_{\\pi\\pi}}}\n\\nonumber\n+ \\frac{\\biggl(E_{K_{S}} - \\displaystyle\\sum_{i=1}^4 E_{\\gamma_{i}}\\biggr)^2}{\\sigma^2_{E_{K_{S}}}}\\\\\n&+ \\frac{\\biggl(p_{K_{S}}^x - \\displaystyle\\sum_{i=1}^4 p_{\\gamma_{i}}^x\\biggr)^2}{\\sigma^2_{p_x}}\n+ \\frac{\\biggl(p_{K_{S}}^y - \\displaystyle\\sum_{i=1}^4 p_{\\gamma_{i}}^y\\biggr)^2}{\\sigma^2_{p_y}}\n+ \\frac{\\biggl(p_{K_{S}}^z - \\displaystyle\\sum_{i=1}^4 p_{\\gamma_{i}}^z\\biggr)^2}{\\sigma^2_{p_z}}~,\n\\label{chi2_2pi_def}\n\\end{align}\nwhere $m_{1 \\gamma \\gamma}$ and $m_{2 \\gamma \\gamma}$ are the reconstructed\n$\\gamma \\gamma$ masses for a given cluster pairing, and $\\theta_{\\pi\\pi}$ denotes the opening angle\nof the reconstructed pion directions in the $K_S$ center of mass frame. 
$E_{K_S}$ and $p_{K_S}$\nstand for the $K_S$ energy and momentum vector determined from the reconstructed four-momentum\nof $K_L$, while $E_{\\gamma_i}$ and $p_{\\gamma_i}$ are energies and momenta of four out\nof six reconstructed photons.\nThe minimization of $\\zeta_{2\\pi}$ gives the best two photon pairs fulfilling the\n$K_S \\to 2\\pi^0 \\to 4\\gamma$ hypothesis.\nThe resolutions used in Eq.~\\ref{chi2_2pi_def} were estimated independently on data\nand MC simulation using a $K_S \\to 2\\pi^0 \\to 4\\gamma$ control sample.\\\\\nThe second $\\chi^2$-like variable, $\\zeta_{3\\pi}$, instead verifies the signal hypothesis $K_S \\to 3\\pi^0$ by looking\nat the reconstructed masses of the three pions. For each pair of clusters\nwe evaluate $\\zeta_{3\\pi}$ as:\n\\begin{equation}\n\\zeta_{3\\pi}~=~\\frac{(m_{1 \\gamma\\gamma} - m_{\\pi^0})^2}{\\sigma^{2}_{3\\pi}}\n+ \\frac{(m_{2 \\gamma\\gamma} - m_{\\pi^0})^2}{\\sigma^{2}_{3\\pi}}\n+ \\frac{(m_{3 \\gamma\\gamma} - m_{\\pi^0})^2}{\\sigma^{2}_{3\\pi}}~.\n\\label{chi2_3pi_def}\n\\end{equation}\nAs the best combination of cluster pairs, we take the configuration minimizing $\\zeta_{3\\pi}$.\nThe resolution on the $\\gamma\\gamma$ invariant mass in the $3\\pi^0$ hypothesis, $\\sigma_{3\\pi}$,\nwas estimated applying the algorithm to the simulated $K_S \\to 3\\pi^0$ events.\\\\\nThe distributions in the $\\zeta_{3\\pi}$-$\\zeta_{2\\pi}$ plane for the data and\n$K_S \\to 3\\pi^0$ simulated signal are shown in Fig.~\\ref{fig3}. Signal events\nare characterized by small values of $\\zeta_{3\\pi}$ and relatively high\n$\\zeta_{2\\pi}$. 
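Operationally, evaluating $\zeta_{3\pi}$ means minimising over the 15 ways of grouping six clusters into three pairs. The sketch below is ours: the photon four-vectors and the resolution $\sigma_{3\pi}$ are placeholders, and only the pairing search and the mass terms of Eq.~(\ref{chi2_3pi_def}) are reproduced.

```python
import math

M_PI0 = 134.98  # neutral-pion mass [MeV]

def pairings(idx):
    """Yield the ways of splitting the indices into unordered pairs
    (15 of them for six indices)."""
    if not idx:
        yield []
        return
    a, rest = idx[0], idx[1:]
    for i, b in enumerate(rest):
        for tail in pairings(rest[:i] + rest[i + 1:]):
            yield [(a, b)] + tail

def gg_mass(p1, p2):
    """Invariant mass of two massless clusters, each given as (E, px, py, pz)."""
    e = p1[0] + p2[0]
    px, py, pz = (p1[k] + p2[k] for k in (1, 2, 3))
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

def zeta_3pi(photons, sigma_3pi):
    """Minimise sum of (m_gg - m_pi0)^2 / sigma^2 over all cluster pairings."""
    return min(
        sum((gg_mass(photons[i], photons[j]) - M_PI0) ** 2 / sigma_3pi ** 2
            for i, j in prs)
        for prs in pairings(list(range(6)))
    )

# Toy input: three pi0's at rest, each decaying to back-to-back photons.
e = M_PI0 / 2.0
photons = [(e, e, 0, 0), (e, -e, 0, 0), (e, 0, e, 0),
           (e, 0, -e, 0), (e, 0, 0, e), (e, 0, 0, -e)]
z_best = zeta_3pi(photons, 10.0)  # essentially zero for the correct pairing
```

For this toy configuration the correct pairing reconstructs each $\gamma\gamma$ mass at exactly $m_{\pi^0}$, so the minimum is essentially zero.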
To compare data and Monte Carlo simulations we have subdivided\nthe $\\zeta_{3\\pi}$-$\\zeta_{2\\pi}$\nplane into six regions B1, B2, B3, B4, B5, and S as indicated in the left panel of Fig.\\ref{fig3}.\nRegion S, with the largest signal-to-background ratio, is the signal box,\nwhile B1--B5 are control regions used to check the reliability of the simulation\nand optimize our description of the experimental data.\\\\\nSimulation does not reproduce accurately the absolute number of events belonging\nto different background categories. However, their kinematical properties\nare reproduced quite well. To determine the background composition,\nand improve the description of experimental data, we have performed a binned\nlikelihood fit of a linear combination \nof simulated $\\zeta_{3\\pi}$-$\\zeta_{2\\pi}$ distributions to the same data distribution\nfor all background categories.\nThe quality of the fit was controlled by comparing inclusive distributions\nof discriminating variables between data and simulation. Examples are presented\nin Fig.~\\ref{fig4}.\\\\\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|c|c|c|}\n\\hline\n\\textbf{} & \\textbf{SBOX} & \\textbf{B1} & \\textbf{B2} & \\textbf{B3} & \\textbf{B4} & \\textbf{B5}\\\\\n\\hline\n\\textbf{DATA} & 220 $\\pm$ 15 & 5 $\\pm$ 3 & 15179 $\\pm$ 123 & 26491 $\\pm$ 163 & 6931 $\\pm$ 83 & 137 $\\pm$ 12\\\\\n\\hline\n\\textbf{MC} & 239 $\\pm$ 11 & 4 $\\pm$ 3 & 14905 $\\pm$ 116 & 26964 $\\pm$ 169 & 6797 $\\pm$ 76 & 100 $\\pm$ 7\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\caption{\n\\label{tab:box1}\nNumber of events populating control regions in the $\\zeta_{3\\pi}$-$\\zeta_{2\\pi}$ plane defined in\nFig.~\\ref{fig3} after tight requirements on $K_L$-crash and track veto.}\n\\end{table}\nTable~\\ref{tab:box1} shows the comparison of observed number of events with the expectations in each\ncontrol region of the $\\zeta_{3\\pi}$-$\\zeta_{2\\pi}$ plane. 
The agreement is better\nthan 1.5 $\\sigma$ in all regions except region B5 (2.8 $\\sigma$).\\\\\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.49\\textwidth]{chi23pi_plb.eps}\n\\includegraphics[width=0.49\\textwidth]{chi22pi_plb.eps}\n\\end{center}\n\\caption{Inclusive distributions of the $\\zeta_{3\\pi}$\nand $\\zeta_{2\\pi}$ discriminating variables for six-photon events: data\n(black points), background simulations (red curves). The dashed histograms\nrepresents simulated $K_S \\to 3\\pi^0$ events.\n}\n\\label{fig4}\n\\end{figure}\nTo further improve the $K_S \\to 2\\pi^0$ background rejection we cut on the $\\Delta$ variable defined as:\n\\begin{equation}\n\\Delta~=~(m_{\\phi}\/2 - \\sum E_{\\gamma_{i}})\/\\sigma_{E}~,\n\\label{eqdE}\n\\end{equation}\nwhere $\\sum E_{\\gamma_{i}}$ is the sum of energies of the four prompt photons selected by\nthe $\\zeta_{2\\pi}$ algorithm and $\\sigma_E$ stands for the 4$\\gamma$ energy resolution estimated using the\n$K_S \\to 2\\pi^0 \\to 4\\gamma$ control sample. 
For $K_S \\to 2\\pi^0$ decays with two additional background\nclusters, we expect $\\Delta \\sim$~0, while for $K_S \\to 3\\pi^0$ events $\\Delta~\\sim~m_{\\pi^0}\/\\sigma_E$.\nTo further reject surviving $K_S \\to 2\\pi^0$ events with split clusters,\nwe cut on the minimal distance between centroids of\nreconstructed clusters, $R_{min}$, considering that the distance between split clusters\nis on average smaller than the distance between clusters originating from $\\gamma$'s of\n$K_S \\to 3\\pi^0$ decay.\nDistributions of these two discriminant variables are presented\nin Fig.~\\ref{fig5}.\\\\\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.49\\textwidth]{deks_plb.eps}\n\\includegraphics[width=0.49\\textwidth]{rmin_plb.eps}\n\\end{center}\n\\caption{Distributions of $\\Delta$\nand $R_{min}$ discriminating variables for six-photon events:\ndata (black points), background simulations (red curves).\nThe dashed histograms represents simulated $K_S \\to 3\\pi^0$ events. \n}\n\\label{fig5}\n\\end{figure}\nBefore opening the signal box, the cuts on the discriminant variables have been refined minimizing\n$f_{cut}(\\chi^2_{fit},\\zeta_{2\\pi},\\zeta_{3\\pi},\\Delta,R_{min}) = N_{up}\/\\epsilon_{3\\pi}$, where\n$\\epsilon_{3\\pi}$ stands for the signal efficiency and\n$N_{up}$ is the mean upper limit (at 90\\% CL) on the expected number of signal events calculated\non the basis of the expected number of background events\n$B_{exp} = B_{exp}(\\chi^2_{fit},\\zeta_{2\\pi},\\zeta_{3\\pi},\\Delta,R_{min})$\nfrom simulation~\\cite{lal92}. The outcome of the optimizing procedure is $\\chi^2_{fit} < 57.2$, \n$\\Delta > 1.88$ and $R_{min} > 65$~cm. 
The signal box is defined as:\n$4 < \\zeta_{2\\pi} < 84.9$ and $\\zeta_{3\\pi} < 5.2$.\nAt each stage of the analysis we checked that the simulation describes the data within\nstatistical uncertainty.\nDistributions of $\\chi^2_{fit}$, $\\Delta$ and $R_{min}$ variables are presented\nin Fig.~\\ref{fig6} and Fig.~\\ref{fig7} for events in the signal box.\nIn the right panel of Fig.~\\ref{fig7} we present also the $R_{min}$ distribution just before\nthe last cut $R_{min}>65$~cm.\nAccording to the Monte Carlo simulation, these survived events are all $K_S \\to 2\\pi^0$\ndecays with two split clusters (95$\\%$), or one split and one accidental cluster (5$\\%$).\nA total efficiency of $\\epsilon_{3\\pi} = 0.233 \\pm 0.012_{stat}$ has been estimated. \n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.49\\textwidth]{chi2fit_plb1.eps}\n\\includegraphics[width=0.49\\textwidth]{deks_plb2.eps}\n\\end{center}\n\\caption{Distributions of $\\chi^2_{fit}$ for six-photon events in the signal box (left)\nand $\\Delta$ for six-photon events in the signal box applying the $\\chi^2_{fit} <$~57.2 cut (right).\nBlack points are data, background simulation is the red histogram.\nThe dashed histogram represents simulated $K_S \\to 3\\pi^0$ events.}\n\\label{fig6}\n\\end{figure}\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.49\\textwidth]{rmin_plb1.eps}\n\\includegraphics[width=0.49\\textwidth]{rmin_plb3.eps}\n\\end{center}\n\\caption{Distributions of $R_{min}$ for six-photon events in the signal box applying\nthe $\\chi^2_{fit} <$~57.2 cut (left), and applying $\\chi^2_{fit} <$~57.2\nand $\\Delta > 1.88$ cuts (right).\nBlack points are data, background simulation is the red histogram.\nThe dashed histogram represents simulated $K_S \\to 3\\pi^0$ events.}\n\\label{fig7}\n\\end{figure}\nAt the end of the analysis we find zero candidates in data and in the\nsimulated background sample. 
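With zero candidates observed and negligible expected background, the 90\% C.L. Poisson upper limit of 2.3 events, and hence the branching-ratio bound quoted in the Results section, can be cross-checked with a few lines of arithmetic; the efficiency, normalisation count and BR($K_S \to 2\pi^0$) below are the numbers given in the text.

```python
import math

# 90% CL Poisson upper limit for zero observed events: exp(-mu) = 0.10
n_up = math.log(10.0)          # = 2.30...
eps_3pi = 0.233                # signal efficiency (from the text)
n_norm = 1.142e8               # produced K_S -> 2pi0 events (from the text)
br_2pi0 = 0.3069               # BR(K_S -> 2pi0) (from the text)
br_limit = (n_up / eps_3pi) / n_norm * br_2pi0
```

The result is about $2.6\times10^{-8}$, in agreement with the limit quoted in the Results section.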
To assign an error to the Monte Carlo estimate of the background, $N_b$,\nwe have fit the simulated $R_{min}$ distribution of Fig.~\\ref{fig7} (right) with a Gaussian and a log-Gaussian.\nIntegrating the events above the cut we estimated $N_b = 0.04 ^{+ 0.15}_{-0.03}$.\n\\subsection{The normalization sample}\nThe $K_S \\to 2\\pi^0$ normalization sample is selected by requiring four prompt photons.\nThe Monte Carlo simulation shows a background contamination of about 0.1$\\%$ of the total.\nThese events are essentially $\\phi \\to K^+K^-$ decays.\nAfter the $K_L$-crash hard tagging\nwe find $N_{2\\pi} = (7.533 \\pm 0.018) \\times 10^7$ events.\nWith the Monte Carlo simulations we have also determined the $K_S \\to 2\\pi^{0} \\to 4\\gamma$\nefficiency: $\\epsilon_{2\\pi} = 0.660 \\pm 0.002_{stat}$. \nThe final number of produced $K_S \\to 2\\pi^0$ events is:\n$ N_{norm} = N_{2\\pi}\/ \\epsilon_{2\\pi} = (1.142 \\pm 0.005) \\times 10^8$.\n\\subsection{Evaluation of systematic uncertainties}\nThe systematic uncertainties are related to the number of \nbackground events and to the determination of\nthe acceptance and total efficiencies for the signal,\n$\\epsilon_{3\\pi}$, and normalization, $\\epsilon_{2\\pi}$, samples.\n\nFor the tagged six-photon sample, we have investigated the uncertainties related to\nthe observed background at the end of the analysis.\nA difference of $\\sim$ 2.4\\% in the EMC energy scale and resolution has been observed between\ndata and MC simulation and has been studied using a control sample of $K_S \\to 2\\pi^0$\nevents. To evaluate the related systematic uncertainty on the background, we have\nrepeated the upper limit evaluation with several values of the energy\nscale correction in the range of 2.2\\%--2.6\\%. 
Similarly, the analysis has been repeated\nmodifying the resolution used in the definition\nof $\\zeta_{2\\pi}$ and $\\zeta_{3\\pi}$.\nMoreover, we have varied by 1 $\\sigma$ the resolution used in the $\\Delta$ variable calculation\nand removed a data--MC shift correction on $R_{min}$. These variations correspond to a cut change\nof 5\\% and 6\\%, respectively. Similarly, we have removed the data--MC scale correction for $E_{cr}$\nand the additional Gaussian smearing in the MC $\\beta^*$ distribution, both\ncorresponding to a 5\\% variation of the cuts.\nThe full analysis was repeated twenty times in total, applying each\ntime one of the changes mentioned above. For all of these\nchecks, we have observed no variation in the number of simulated background events.\n\nFor the acceptance of both the signal and normalization\nsamples, we have evaluated the systematic uncertainty on the photon counting\nby comparing between data and simulation the splitting and accidental probabilities and the cluster\nreconstruction efficiency.\nTo determine the probabilities of one, $P_{A1}$, or two, $P_{A2}$, accidental\nclusters in the event, we have used out-of-time clusters originating from earlier bunch\ncrossings. 
To estimate the probability of generating one, $P_{S1}$, or more fragments,\n$P_{S2}$, per cluster, we have fit the photon multiplicities observed in data using\nthe experimental values of $P_{A1}$ and $P_{A2}$, and the photon\nmultiplicities obtained by the simulation~\\cite{silarskiPHD,note201}.\nResults of these fits are reported in Tab.~\\ref{tabprob}.\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\n {}& $\\boldsymbol{P_{A1}}$ [\\%] & $\\boldsymbol{P_{A2}}$ [\\%] & $\\boldsymbol{P_{S1}}$ [\\%] & $\\boldsymbol{P_{S2}}$ [\\%]\\\\\n\\hline\n\\textbf{DATA} & 0.378 $\\pm$ 0.004 & 0.025 $\\pm$ 0.001 & 0.30 $\\pm$ 0.01 & 0.0103 $\\pm$ 0.0001\\\\\n\\hline\n\\textbf{MC}& 0.492 $\\pm$ 0.004& 0.027 $\\pm$ 0.001 & 0.31 $\\pm$ 0.01 & 0.0156 $\\pm$ 0.0002\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\caption{\n\\label{tabprob}\nThe probabilities to find one ( $P_{A1}$ ) or two ( $P_{A2}$ ) accidental clusters\nand to reconstruct one ( $P_{S1}$ ) or more ( $P_{S2}$ ) split clusters\nestimated using out of time clusters and fit to the photon multiplicities,\nas described in the text.}\n\\end{table}\nThe photon reconstruction efficiency, for both data and MC,\nwas evaluated using a control sample of $\\phi \\to \\pi^+\\pi^-\\pi^0$ events. The momentum\nof one of the photons is estimated from tracking information and position\nof the other cluster. The candidate photon is then searched for within a search cone.\nThe systematic error related to the cluster efficiency has been estimated by\nremoving the data\/MC efficiency correction. The total systematic\nuncertainty on the acceptance for both\nmeasured samples are listed in Tab.~\\ref{tabsys}.\nAnother source of systematic uncertainties originates from the offline filter FILFO~\\cite{memo288}\nused, during data reconstruction, to reject cosmic rays and machine\nbackground events before starting the track reconstruction. 
The FILFO efficiency, for both\nnormalization and signal samples, has been\nestimated using the simulation and is very close to 100\\%~\\cite{silarskiPHD}. \nWe have conservatively assigned as the systematic uncertainty in data half of\nthe difference between the MC-evaluated efficiency and 100\\%.\nWe consider completely negligible the influence of the trigger efficiency for\nboth samples, since in~\\cite{KLOE_old} it was about 99.5\\%\nand the $K_L$-crash hard tagging requires a\nlarger energy release in the calorimeter, which translates into a \nlarger trigger efficiency.\n\nThe observed difference in the EMC energy scale and resolution between data and simulation\nenters also in the $\\epsilon_{3\\pi}$ evaluation. The effects have been estimated as\n$\\Delta \\epsilon_{3\\pi}\/\\epsilon_{3\\pi} = 1.0 $\\% from the energy scale,\nand $\\Delta \\epsilon_{3\\pi}\/\\epsilon_{3\\pi} = 1.1 $\\% from the resolution. \nThe effect of the cut on $\\chi^2_{fit}$ has been tested by constructing the ratio between\nthe cumulative distributions for experimental data and simulation,\nwhich leads to a systematic uncertainty of $\\Delta \\epsilon_{3\\pi}\/\\epsilon_{3\\pi} = 1.46 $\\%.\nFinally, we have investigated the systematic effect related to\nthe $R_{min}$ cut by varying its value by 6\\%, and estimated its contribution\nto be $\\Delta \\epsilon_{3\\pi}\/\\epsilon_{3\\pi} =0.9$\\%.\n \nAll the contributions to the systematic uncertainty are summarized\nin Tab.~\\ref{tabsys}, with the total systematic uncertainty evaluated adding\nall effects in quadrature.\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{|c|c|c|}\n\\hline\n\\textbf{Source} & $\\boldsymbol{\\Delta \\epsilon_{2\\pi}\/\\epsilon_{2\\pi}~[\\%]}$ & $\\boldsymbol{\\Delta \\epsilon_{3\\pi}\/\\epsilon_{3\\pi}~[\\%]}$\\\\\n\\hline\nAcceptance & 1.60 & 0.21\\\\\n\\hline\nOffline filter & 0.46 & 0.30\\\\\n\\hline\nCalorimeter energy scale & -- & 1.00\\\\\n\\hline\nCalorimeter energy resolution & -- & 1.10\\\\\n\\hline\n$\\chi^2_{fit}$ cut 
& -- & 1.46\\\\\n\\hline\n$R_{min}$ cut & -- & 0.90\\\\\n\\hline\n\\textbf{TOTAL} & \\textbf{1.65} & \\textbf{2.30}\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\caption{\n\\label{tabsys}\nSummary table of the systematic uncertainties on the total\nefficiencies for the signal, $\\epsilon_{3\\pi}$,\nand normalization samples, $\\epsilon_{2\\pi}$.}\n\\end{table}\n\\section{Results}\nNo events were observed in data in the signal region. \nLikewise, no background events are found in the MC simulation based on twice the data statistics.\nIn the conservative assumption of no background, we estimate an upper\nlimit on the expected number of signal events UL(Nev$(\\mbox{$K_{S}$} \\to 3\n\\mbox{$\\pi^{0}$})$) = 2.3 at 90\\%~C.L., with a signal efficiency of\n$\\epsilon_{3\\pi} = 0.233 \\pm 0.012_{stat} \\pm 0.006_{sys}$.\nIn the same tagged sample we count $N_{norm} = (1.142 \\pm 0.005) \\times 10^8$\n$\\mbox{$K_{S}$} \\to 2 \\mbox{$\\pi^{0}$}$ events.\\\\\nSystematic uncertainties on the background determination, as well as on\nthe efficiency evaluation for the signal and normalization samples,\nare negligible in the calculation of the limit.\n\nUsing the value BR($\\mbox{$K_{S}$} \\to 2 \\mbox{$\\pi^{0}$})= 0.3069 \\pm 0.0005$~\\cite{pdg2012} we obtain:\n\\begin{equation}\nBR(\\mbox{$K_{S}$}\\to 3 \\mbox{$\\pi^{0}$}) \\leq 2.6 \\times 10^{-8}\\;\\;\\;\\; {\\rm at}\\;\\;\\; 90\\%\\;\\;\\; {\\rm C.L.}\n\\end{equation}\nwhich represents the best limit on this decay, improving \nby a factor of $\\sim$5 the previous result \\cite{KLOE_old}.\\\\\nThis result can be translated into a limit on $|\\eta_{000}|$: \n\\begin{equation}\n|\\eta_{000}| = \\left|\\frac{A(K_S \\to 3\\pi^0)}{A(K_L \\to 3\\pi^0)}\\right| = \\sqrt{\\frac{\\tau_L}{\\tau_S}\n\\frac{BR(K_S \\to 3\\pi^0)}{BR(K_L \\to 3\\pi^0)}} \\leq 0.0088\\;\\;\\; {\\rm at}\\;\\; \\; 90\\%\\;\\;\\; {\\rm C.L.}\n\\end{equation}\n\nThis describes a circle of radius 0.0088 centered at zero in the $\\Re({\\eta_{000}})$, 
$\\Im{(\\eta_{000})}$\nplane and represents a limit two times smaller than the previous result~\\cite{KLOE_old}.\n\\section*{Acknowledgments}\nWe warmly thank our former KLOE colleagues for access to the data\ncollected during \nthe KLOE data taking campaign.\nWe thank the DA$\\Phi$NE team for their efforts in maintaining low\nbackground running conditions and their \ncollaboration during all data taking. We want to thank our technical staff: \nG.F. Fortugno and F. Sborzacchi for their dedication in ensuring efficient operation of the KLOE computing facilities; \nM. Anelli for his continuous attention to the gas system and detector safety; \nA. Balla, M. Gatta, G. Corradi and G. Papalino for electronics maintenance; \nM. Santoni, G. Paoluzzi and R. Rosellini for general detector support; \nC. Piscitelli for his help during major maintenance periods.\\\\\nWe acknowledge the support of the European Community-Research\nInfrastructure Integrating Activity `Study of Strongly Interacting Matter'\n(acronym HadronPhysics2, Grant Agreement No. 227431) under the Seventh Framework\nProgramme of EU.\nThis work was also supported in part by the EU Integrated Infrastructure\nInitiative Hadron Physics \nProject under contract number RII3-CT-2004-506078; by the European\nCommission under the 7th \nFramework Programme through the `Research Infrastructures' action of\nthe `Capacities' Programme, Call: \nFP7-INFRASTRUCTURES-2008-1, Grant Agreement No. 283286; by the Polish\nNational Science Centre through the \nGrants Nos. 0469\/B\/H03\/2009\/37, 0309\/B\/H03\/2011\/40, \nDEC-2011\/03\/N\/ST2\/02641, \\\\2011\/01\/D\/ST2\/00748, 2011\/03\/N\/ST2\/02652,\n2011\/03\/N\/ST2\/02641 and by the Foundation for Polish Science through the MPD programme \nand the project HOMING PLUS BIS\/2011-4\/3.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n Many electronic-structure methods use basis sets made of states\nthat move or change with atomic positions. 
\n A very popular subset of these methods (most quantum-chemical\\cite{Gaussian,\nTurbomol,Gamess,ADF} and a significant fraction of solid-state\nmethods\\cite{Siesta,CP2K,Crystal,OpenMX,FHI-AIMS}) use atomic-like basis\nfunctions, composed\nof the product of a radial function and a spherical harmonic, normally\ncentered around atoms.\n In other cases, the localised basis is obtained dynamically, using a finer\nauxiliary basis.\\cite{Onetep,Conquest,BigDFT}\n The detail of the kind of functions is not important in this work;\nwhat matters here is that such a basis is generally not orthonormal,\nit spans a subspace of the Hilbert space (a finite basis is always used),\nand both the basis and the subspace change with the evolution of a set\nof external parameters such as the atomic positions.\n\n Non-orthogonal basis sets have been used since the early times of\nquantum mechanics, especially in the context of quantum chemistry.\\cite{Lowdin1950}\n The matrix representation of Schr\\\"odinger's equation (using Dirac notation)\n$H | \\psi \\rangle = E | \\psi \\rangle$ in a basis $\\{ | e_{\\mu} \\rangle , \\mu = 1 \\dots \\cal{N} \\}$\ngives\n\\begin{equation*}\n\\sum_{\\nu} H_{\\mu \\nu} C_{\\nu} = E \\sum_{\\nu} S_{\\mu\\nu} C_{\\nu} \\; ,\n\\end{equation*}\nwhere\n\\begin{equation}\n\\label{coeff-expansion}\n| \\psi \\rangle = \\sum_{\\mu} | e_{\\mu} \\rangle \\, C_{\\mu} \\, ,\n\\end{equation}\n$H_{\\mu \\nu}= \\langle e_{\\mu} | H | e_{\\nu} \\rangle$ and\n$S_{\\mu \\nu}= \\langle e_{\\mu} | e_{\\nu} \\rangle$, the latter being the\noverlap matrix.\n\n Similarly, there are electronic structure methods based on\nthe integration of time-evolving quantum problems, most prominently\nthose based on time-dependent density-functional\ntheory.\\cite{RungeGross,Yabana,Tsolakidis2002,Octopus,ONTDDFT,Bowler}\n The time-dependent Kohn-Sham equation arising is\nanalogous to the time-dependent Schr\\\"odinger equation\n$H | \\psi \\rangle = i \\, \\partial_t | 
\\psi \\rangle$ (using \n$\\hbar = m_e = e = 1$), which, for a non-orthogonal \nbasis set, becomes\n\\begin{equation*}\n\\sum_{\\nu} H_{\\mu \\nu} C_{\\nu} = \ni \\, \\sum_{\\nu} S_{\\mu \\nu} \\, \\partial_t C_{\\nu} \\; ,\n\\end{equation*}\nin a situation in which the basis set is fixed.\\cite{Tsolakidis2002}\n If the basis set moves in time (e.g., related to nuclear motion)\nthe equation becomes rather\n\\begin{equation}\n\\label{old-td-eq}\n\\sum_{\\nu} (H_{\\mu \\nu} - i \\, D_{\\mu \\nu} ) C_{\\nu} = \ni \\, \\sum_{\\nu} S_{\\mu \\nu} \\, \\partial_t C_{\\nu} \\; ,\n\\end{equation}\nin which the new terms $D_{\\mu \\nu}= \\langle e_{\\mu} | \\partial_t | e_{\\nu} \\rangle\n= \\langle e_{\\mu} | \\partial_t e_{\\nu} \\rangle$ appear related to the\nbasis set evolution,\\cite{Todorov2001,Rudiger2002,Kaxiras2015} \nalthough at sufficiently low nuclear velocities these terms can be\nneglected.\\cite{Kaxiras}\n Similar objects to the $D_{\\mu\\nu}$ matrix in Eq.~\\ref{old-td-eq} can also \nbe found in time-dependent methods using localised molecular \norbitals.\\cite{WeitaoYang2010}\n Extra terms related to derivatives also appear when calculating\nthe forces on atoms for geometry relaxation, ab initio molecular\ndynamics calculations, or Ehrenfest dynamics simulations.\\cite{Todorov2001,Rudiger2002} \n There are terms arising, called Pulay forces,\\cite{Pulay1969} which again involve \nbasis vector derivatives. \n\n The matrix representation of the quantum formalism used above has its limitations,\nhowever, and a more general formalisation was introduced\\cite{Vanderbilt1984,\nBallantine1986, Artacho1991, Head-Gordon1993} based on\ntensors, which offers a better suited and more flexible framework for non-orthogonal\nbasis sets, corresponding to a description of magnitudes in an Euclidean space with\noblique axes. 
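To make the stationary matrix equation above concrete, here is a minimal numerical sketch (the three-dimensional ambient space, the two basis vectors and the diagonal Hamiltonian are invented for illustration, not taken from any of the cited methods): $\sum_{\nu} H_{\mu\nu} C_{\nu} = E \sum_{\nu} S_{\mu\nu} C_{\nu}$ is a generalized eigenproblem, which can be solved, e.g., by L\"owdin orthogonalisation.

```python
import numpy as np

# Invented toy model: a 2D non-orthogonal basis spanning a subspace of R^3.
e = np.array([[1.0, 0.2, 0.0],
              [0.3, 1.0, 0.1]])          # rows are the basis kets |e_mu>
H_amb = np.diag([1.0, 2.0, 4.0])         # a Hermitian operator on the ambient space

S = e @ e.T                              # overlap matrix S_{mu nu} = <e_mu|e_nu>
H = e @ H_amb @ e.T                      # H_{mu nu} = <e_mu|H|e_nu>

# Solve H C = E S C via Loewdin orthogonalisation: diagonalise S^{-1/2} H S^{-1/2}
s, U = np.linalg.eigh(S)
S_mhalf = U @ np.diag(s ** -0.5) @ U.T
E, Ct = np.linalg.eigh(S_mhalf @ H @ S_mhalf)
C = S_mhalf @ Ct                         # columns are the coefficient vectors C_mu
```

The columns of `C` satisfy the generalized eigenvalue equation and are orthonormal with respect to the metric `S`, i.e. $C^{\dagger} S C = \mathbb{1}$.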
\n It allowed, for instance, the formalisation of second quantisation based on \nnon-orthogonal bases,\\cite{Artacho1991} which was then used to\nformulate many-body theories using such bases,\\cite{Head-Gordon1993,Holomorphic}\nand to formulate corrective methods such as DFT+$U$ for the non-orthogonal \ncase.\\cite{ORegan2011,Palacios2014,Jacob2015}\n It also connected naturally with non-Hermitian representations proposed\nearlier for the better exploitation of localisation.\\cite{Weeks,Bullett,Anderson}\n\n In this paper we use \nconcepts of differential geometry \nto extend that oblique-axis formalism to the calculation of derivatives \nwhen the basis and the Hilbert space it spans change with parameters \nsuch as atomic positions or time. \n This offers insights into the geometric interpretation of the dynamical equations \narising with moving basis sets.\n In particular, the affine connection defined for the changing basis allows the\nproposal of optimised propagators for the numerical integration of quantum \ntime-evolving problems. 
\n It should be noted that there have been previous works on derivatives \nin the tensorial formalism for non-orthogonal bases in electronic \nstructure,\\cite{MHG-gradients} including relaxations with curvy steps.\\cite{MHG-curvy}\n They were, however, always addressing \nderivatives of a scalar, the total energy, \na special case which allows for circumvention of the key concepts in this work, \nthe affine connection and the covariant derivative.\n An alternative way of using differential geometry in electronic \nstructure was initially explored in \nRef.~\\onlinecite{ORegan}\nwhen calculating derivatives with respect to the basis functions themselves,\ninstead of external parameters as in this work.\n This is beyond the scope of the present paper.\n \n The ideas in this paper should be useful for time-dependent \n(or parameter dependent) methods involving basis functions, \nauxiliary support functions, or any kind of states, which move\nduring simulations, including atomic-like basis orbitals, support functions \nor generalised Wannier functions in large scale electronic-structure methods, or\neven the projectors for the core-electron description in projected-augmented-wave \n(PAW) methods. \n The connection is also made with the Berry formalism \nof geometric phases.\n In Section II the general formalism is presented, while Section III shows its\napplication in several contexts.\n Many derivations have been pushed to appendices \nwith a view to attaining\na more concise exposition of the relevant ideas in the main text, \nwhile preserving a reasonably self-contained paper. 
\n\n\n\n\n\n\\section{Formulation}\n\n\\subsection{Tensorial representations}\n\\label{SectForm}\n\n In this work we will use tensorial representations as used \nin Ref.~\\onlinecite{Artacho1991}.\n Here are the essentials before we get into derivatives.\n Consider a basis consisting on a set of linearly-independent, non-orthogonal\nstates,\n\\begin{equation*}\n\\{ | e_{\\mu} \\rangle, \\; \\mu = 1 \\dots \\cal{N} \\}\n\\end{equation*}\nspanning a subspace $\\Omega$ of the relevant Hilbert space $\\cal{H}$\nfor our quantum problem.\n We will use here the tensorial notation for oblique \nangles.\\cite{Vanderbilt1984, Ballantine1986, Artacho1991, Head-Gordon1993}\n For this, the dual basis\\cite{Algebra,Artacho1991} is defined as the set of vectors \n$\\{ | e^{\\mu} \\rangle, \\; \\mu = 1 \\dots \\cal{N} \\}$ in the same space $\\Omega$ that \nfulfil\n\\begin{equation*}\n\\langle e^{\\mu} | e_{\\nu} \\rangle = \\delta^{\\mu}_{\\phantom{e}\\nu} = \n\\langle e_{\\nu} | e^{\\mu} \\rangle = \\delta_{\\nu}^{\\phantom{e}\\mu} \\; ,\n\\end{equation*}\nwhere $\\delta^{\\mu}_{\\phantom{e}\\nu}=\\delta_{\\nu}^{\\phantom{e}\\mu}=\n\\delta^{\\mu}_{\\nu}$ is Kronecker's delta.\n They also fulfil\\cite{Artacho1991}\n\\begin{equation*}\n\\sum_{\\mu} | e_{\\mu} \\rangle \\langle e^{\\mu} | = \\sum_{\\mu} | e^{\\mu} \\rangle \\langle \ne_{\\mu} | = P_{\\Omega} \\; ,\n\\end{equation*}\nwhere $P_{\\Omega}$ is the projector onto the $\\Omega$ Hilbert space.\n The metric tensors are given by the overlap, \n$S_{\\mu\\nu} = \\langle e_{\\mu} | e_{\\nu} \\rangle$, and its upper-indices counterpart \n$S^{\\mu\\nu} = \\langle e^{\\mu} | e^{\\nu} \\rangle$, which is the inverse\n(in the matrix sense) of the overlap matrix. 
\n\n In this paper we will mostly (always, unless explicitly stated) use the\n{\\it natural representation} as defined in Ref.~\\onlinecite{Artacho1991}.\n A state $| \\psi \\rangle \\in \\Omega$ is represented by the contravariant \nfirst-rank tensor \n\\begin{equation*}\n\\psi^{\\mu}= \\langle e^{\\mu} | \\psi \\rangle \\; ,\n\\end{equation*}\nwhich corresponds to the coefficients $C_{\\mu}$ of the vector expansion\nin Eq.~\\ref{coeff-expansion}, since, \n\\begin{equation*}\n| \\psi \\rangle = P_{\\Omega} | \\psi \\rangle\n= \\sum_{\\mu} | e_{\\mu} \\rangle \\langle e^{\\mu} | \\psi \\rangle\n= \\sum_{\\mu} | e_{\\mu} \\rangle \\psi^{\\mu}\n= | e_{\\mu} \\rangle \\psi^{\\mu} \\; .\n\\end{equation*}\n The last identity just reflects the fact that from now on we will use \nEinstein's convention for tensors,\\cite{Einstein1916} by which repeated \nindices imply a sum.\n A bra $\\langle \\psi | \\in \\Omega^\\dagger$ will be represented by the \nequivalent covariant tensor,\n\\begin{equation*}\n\\psi_{\\mu} = \\langle \\psi | e_{\\mu} \\rangle \\; ,\n\\end{equation*}\ncoming from \n\\begin{equation*}\n\\langle \\psi | = \\langle \\psi | e_{\\mu} \\rangle \\langle e^{\\mu} | = \n\\psi_{\\mu} \\langle e^{\\mu} | \\; .\n\\end{equation*}\n Note that the representation of the bra is not the complex conjugate of\nthe representation of the ket (see Appendix~\\ref{NotationAppendix}).\n The covariant and contravariant character of a tensor relates to the way\nthey transform under basis change (see Appendix~\\ref{BasisChangeAppendix}\nfor the definition).\n \n An operator acting in $\\Omega$ is represented by the second-rank tensor given by\n\\begin{equation*}\nH^{\\mu}_{\\phantom{e}\\nu} = \\langle e^{\\mu} | H | e_{\\nu} \\rangle \\; ,\n\\end{equation*}\nsince\n\\begin{equation*}\nP_{\\Omega} H P_{\\Omega} = \\left ( |e_{\\mu} \\rangle \\langle e^{\\mu} | \\right ) \\, H \\, \n\\left ( | e_{\\nu} \\rangle \\langle e^{\\nu} | \\right ) = |e_{\\mu} \\rangle 
\nH^{\\mu}_{\\phantom{e}\\nu} \\langle e^{\\nu} | \\; .\n\\end{equation*}\n Schr\\\"odinger's equation $H | \\psi \\rangle = E | \\psi \\rangle$ in this representation \nthen becomes\n\\begin{equation*}\nH^{\\mu}_{\\phantom{e}\\nu} \\, \\psi^{\\nu} = E \\, \\psi^{\\mu} \\; .\n\\end{equation*}\n\n It should be noted at this point that the formalism described here is equally \nvalid for $| \\psi \\rangle$ being a single-particle or a many-particle state.\n It just requires the $| e_{\\mu} \\rangle$ basis states (and their duals) to \nrepresent the same number of particles as $|\\psi \\rangle$.\n One can also use this formalism for many-particle systems building\non single-particle non-orthogonal basis states by using \nnon-orthogonal second quantization.\\cite{Artacho1991,Head-Gordon1993}\n In some parts below we will refer to single-particle (mean-field-like)\nsituations since they are frequently found in different contexts, such as\nKohn-Sham density-functional theory,\\cite{Kohn-Sham} but the present\nformalism is not limited to such situations.\n\n The other representation to be considered in this work is the traditional \nquantum-chemical representation (henceforth called the {\\it matrix representation}), \nwhich uses $\\psi^{\\mu}$ and $H_{\\mu\\nu}= \\langle e_{\\mu} | H | e_{\\nu} \\rangle$ \nfor the representation of states and operators, respectively, the Schr\\\"odinger \nequation now reading\n\\begin{equation*}\nH_{\\mu\\nu} \\, \\psi^{\\nu} = E \\, S_{\\mu\\nu} \\, \\psi^{\\nu} \\; ,\n\\end{equation*}\nas detailed in Ref.~\\onlinecite{Artacho1991}.\n\n An Hermitian operator in the natural representation would fulfil\n\\begin{equation*}\nH^{\\mu}_{\\phantom{e}\\nu} = \\left ( H_{\\nu}^{\\phantom{e}\\mu} \\right )^*\n= \\left ( S_{\\nu\\lambda} H^{\\lambda}_{\\phantom{e}\\sigma} S^{\\sigma\\mu} \\right )^* \\; ,\n\\end{equation*}\nwhere $H_{\\nu}^{\\phantom{e}\\mu} = \\langle e_{\\nu} | H | e^{\\mu}\\rangle$. 
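In numerical terms (a toy sketch with invented matrices, same $\mathbb{R}^3$ setup as assumed throughout, not the paper's implementation): the natural representation is the matrix $S^{-1}H$, which turns Schr\"odinger's equation into an ordinary eigenproblem with the same real spectrum as the generalized one, and whose Hermiticity condition takes the non-trivial form quoted above.

```python
import numpy as np

# Invented toy setup: a 2D non-orthogonal basis in R^3 and a Hermitian operator.
e = np.array([[1.0, 0.2, 0.0],
              [0.3, 1.0, 0.1]])          # rows are |e_mu>
H_amb = np.diag([1.0, 2.0, 4.0])

S = e @ e.T                              # overlap S_{mu nu}
H_mat = e @ H_amb @ e.T                  # matrix representation H_{mu nu}
H_nat = np.linalg.inv(S) @ H_mat         # natural representation H^mu_nu = S^{mu la} H_{la nu}

# Ordinary eigenproblem in the natural representation: real spectrum,
# identical to that of the generalized problem (solved here via Loewdin).
E_nat = np.sort(np.linalg.eigvals(H_nat).real)
s, U = np.linalg.eigh(S)
S_mhalf = U @ np.diag(s ** -0.5) @ U.T
E_gen = np.linalg.eigvalsh(S_mhalf @ H_mat @ S_mhalf)

# Hermiticity condition H^mu_nu = (S_{nu la} H^la_si S^{si mu})^*, real case:
herm_ok = np.allclose(H_nat, (S @ H_nat @ np.linalg.inv(S)).T)
```

Note that `H_nat` itself is not a symmetric matrix even though the underlying operator is Hermitian; Hermiticity is only manifest through the metric-dressed condition.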
\n In the matrix representation Hermiticity is reflected just by \n$H_{\\mu\\nu}= \\left ( H_{\\nu\\mu} \\right )^*$.\n Further details on the tensorial notation used in this work are found in \nAppendix~\\ref{NotationAppendix}.\n\n All of these magnitudes are tensors in the sense that they represent abstract \nobjects that are defined independently of the basis set.\n Tensor components transform in a well-defined fashion when\nchanging the basis set.\n Transformations under basis change of the different tensors in this paper \nare discussed in Appendix~\\ref{BasisChangeAppendix}.\n \n \n \n\n \n\\subsection{Parameter vector space}\n\\label{ParameterSpace}\n\n In this work we provide a comprehensive formalisation of derivatives of\nthe quantities just defined with respect to any parameters that the basis\nmay depend on, including both the basis change within $\\Omega$ and\nthe evolution of $\\Omega$ itself.\n Such parameters will normally be nuclear positions, as in molecular\ndynamics or Ehrenfest simulations, or time, as when following the\ndynamics governed by the time-dependent Schr\\\"odinger equation (or the\ntime-dependent Kohn-Sham equation in time-dependent density-functional\ntheory, or an analogous mean-field-like equation).\n\n We will then consider those parameters as defining a vector space $\\Theta$\nof dimension $N$, spanned by the basis\n\\begin{equation*}\n\\{ \\mathbf{u}_i, \\; i = 1 \\dots N \\} \\; ,\n\\end{equation*}\nsuch that any vector $\\mathbf{R} \\in \\Theta$ is expanded as\n\\begin{equation*}\n{\\mathbf R} = R^i {\\mathbf u}_i \\: .\n\\end{equation*} \n Please keep in mind that these $R^i$ variables represent any parameters\nthat a particular quantum problem may depend on, not necessarily nuclear\npositions (in the Applications section below there will be examples for \nnuclear positions, but also for time as a single parameter in a one-dimensional\n$\\Theta$).\n We keep the tensor notation for the vectors in $\\Theta$ for convenience.\nThis 
allows for oblique angles in this space as well, if desired.\n We will always use Greek letters as indices for the quantum (electronic)\ncomponents and Latin for the components of vectors in parameter space.\n Our electronic basis set and space then depend on $\\mathbf{R}$, i.e.\n$\\Omega=\\Omega(\\mathbf{R})$, $| e_{\\mu} \\rangle = | e_{\\mu} (\\mathbf{R}) \\rangle$,\nand so will the projector $P_{\\Omega}$ and all the tensors defined above. \n\n It is important to note that, although we will exploit analogies with the \ndifferential geometry defined for curved spaces as in general relativity, \nthe situation described here may be discussed more formally using the \nlanguage of fibre bundles,\\cite{FibreBundle} with $\\Theta$ as the base space, \nand $\\Omega(\\mathbf{R})$ as a fibre for each {\\bf R}. \n When moving in the base space, the Hilbert space associated with each fibre\nturns within the ambient Hilbert space ${\\cal H}$, very much as the tangent space \nwould turn for a curved space.\n Although the base space $\\Theta$ and each fibre $\\Omega(\\mathbf{R})$ \nare individually flat Euclidean spaces, the overall bundle is curved.\n\n The parameter-dependent Hilbert space $\\Omega (\\mathbf{R})$ and its turning\nalso underlie the Berry formalism of geometric phases in quantum \nmechanics.\\cite{Berry-orig, Berry} \n In this case the relevant $\\Omega$ space would \nbe the one associated with the ground state, or, in a mean-field-like \nsetting (as in e.g. density-functional theory), the space spanned by\nthe occupied single-particle states (occupied space),\nalthough larger spaces are also considered, e.g. 
for\nmetallic systems, or disentangling bands.\\cite{Marzari1997}\n We relate this work to the Berry formalism below.\n Finally, the change of basis between fibres can be regarded as a gauge transformation\ngiven in principle by any $\\cal{N} \\times \\cal{N}$ invertible matrix of complex numbers,\ni.e., belonging to the general linear group GL$(\\cal{N},\\mathbb{C})$.\n In this paper, however, we prefer to introduce the formalism in an accessible, \nself-contained manner, using basic quantum mechanics, tensors, and simple \nmanipulations therein.\n\n \n In what follows, we will investigate the rate of change of quantum states, \nand operators acting upon them, with respect to the evolving basis vectors as we \nnavigate the parameter vector space. \n We will find that the components of such change due to space preserving and \nspace non-preserving basis function evolution must be separately considered, \nas follows.\n\n\n\n\n\n\n\n\\subsection{Differential geometry}\n\n\\subsubsection{Covariant derivative}\n\n The derivative of the $\\psi^{\\mu}$ components of a quantum state $| \\psi \\rangle$\nwith respect to $R^i$ will be indicated by \n\\begin{equation*}\n\\partial_i \\psi^{\\mu} = {\\partial \\psi^{\\mu} \\over \\partial R^i} \\; .\n\\end{equation*}\n It is easy to show (see Appendix~\\ref{BasisChangeAppendix}) that\nsuch a derivative does not transform as a tensor under basis change.\n Using the conventional nomenclature: it is a non-tensor.\n \n Let us then define the covariant derivative as one that transforms \nas a tensor, which we can easily do as follows:\n\\begin{equation}\n\\label{covdevdef}\n\\text{\\dh}_i \\psi^{\\mu} \\equiv \\langle e^{\\mu} | P_{\\Omega} \\partial_i \n\\left \\{ P_{\\Omega} | \\psi \\rangle \\right \\} =\n\\langle e^{\\mu} | \\partial_i \\left \\{ P_{\\Omega} | \\psi \\rangle \\right \\} \\; .\n\\end{equation}\n The projector directly acting on $| \\psi \\rangle$ might appear redundant\nif starting with $| \\psi \\rangle \\in 
\\Omega$.\n It is not so, however, the important point being that the derivative is calculated\nfor the state being projected onto the {\\it varying} $\\Omega$ space, \nand therefore, even though $| \\psi \\rangle$ and $P_{\\Omega} |\\psi \\rangle$ \nare equal at {\\bf R}, they are not necessarily equal at any nearby point, \n$\\mathbf{R} + \\mathrm{d} R^i \\mathbf{u}_i$.\n Put another way, the covariant derivative must be applied before \nprojection onto $\\Omega$, since both $|\\psi \\rangle$ and $P_{\\Omega}$ \nmay evolve in time.\n\n This definition gives a well-behaved tensor since\nthe defined $\\text{\\dh}_i \\psi^{\\mu}$ is the tensor representation of the vector\n$P_{\\Omega} \\partial_i \\left \\{ P_{\\Omega} | \\psi \\rangle \\right \\} \\in \\Omega$\n(see also Appendix~\\ref{BasisChangeAppendix}).\n The justification for this particular definition will become clear throughout \nthis section. \n We can already point to the fact that for $| \\psi \\rangle \\in \\Omega$\nsuch that neither $|\\psi \\rangle$ nor $\\Omega$ change with {\\bf R},\nand for a basis set that does change, the proposed covariant derivative is\nzero (while the usual derivative is not), thereby indicating that it is the \nintrinsic rate of change of the state that is being measured, excluding\nthe basis set change. 
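This statement can be checked numerically in a toy model (an invented basis of $\mathbb{R}^2$ rotating rigidly, so that $\Omega = {\cal H}$ never changes; purely illustrative): for a fixed state, the combination $\partial_i \psi^{\mu} + \langle e^{\mu} | \partial_i e_{\nu} \rangle \, \psi^{\nu}$ vanishes even though the components $\psi^{\mu}$ themselves change with the parameter.

```python
import numpy as np

def rot(t):                               # rigid rotation of the plane
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

e0 = np.array([[1.0, 0.0],
               [0.6, 0.8]])               # a fixed non-orthogonal basis of R^2

def basis(t):                             # basis rotating with the parameter t
    return e0 @ rot(t).T                  # rows are |e_mu(t)>

def dual(e):                              # dual basis, rows are |e^mu>
    return np.linalg.inv(e @ e.T) @ e

v = np.array([0.3, -1.1])                 # a FIXED state; Omega = R^2 never turns

t, h = 0.4, 1e-6
e, ed = basis(t), dual(basis(t))
psi = ed @ v                              # components psi^mu = <e^mu|v>

# ordinary derivative of the components (central finite differences)
dpsi = (dual(basis(t + h)) @ v - dual(basis(t - h)) @ v) / (2 * h)
# connection coefficients D^mu_{nu} = <e^mu | d_t e_nu>
de = (basis(t + h) - basis(t - h)) / (2 * h)
D = ed @ de.T

cov = dpsi + D @ psi                      # covariant derivative of psi^mu
```

Here `cov` vanishes to finite-difference accuracy, while `dpsi` does not: the ordinary derivative only tracks the basis rotation, which the connection term exactly cancels.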
\n This point will be proven more generally below.\n It is analogous to the definition of covariant derivatives in gauge-dependent\ntheories, as the physical derivative independent of change of local gauge.\n\n The \nrelationship between\n$\\text{\\dh}_i \\psi^{\\mu}$ and $\\partial_i \\psi^{\\mu}$ is\nobtained as follows:\n\\begin{align*}\n\\text{\\dh}_i \\psi^{\\mu} &= \\langle e^{\\mu} | \\partial_i \\left \\{ P_{\\Omega} | \\psi \\rangle \\right \\} =\n\\langle e^{\\mu} | \\partial_i \\left \\{ | e_{\\nu} \\rangle \\langle e^{\\nu} | \\psi \\rangle \\right \\} \\\\\n&= \\langle e^{\\mu} | \\partial_i \\left \\{ | e_{\\nu} \\rangle \\psi^{\\nu} \\right \\} \n= \\langle e^{\\mu} | \\partial_i | e_{\\nu} \\rangle \\psi^{\\nu} + \\partial_i \\psi^{\\mu} \\; .\n\\end{align*}\n This gives us an alternative definition of the covariant derivative,\nexpressed in objects all defined within $\\Omega$, namely,\n\\begin{equation}\n\\label{covdevdef2}\n\\text{\\dh}_i \\psi^{\\mu} = \\partial_i \\psi^{\\mu} + D^{\\mu}_{\\phantom{i} \\nu i} \\psi^{\\nu} \\; ,\n\\end{equation}\nwhere we have used the following definition\n\\begin{equation}\n\\label{christoffel}\nD^{\\mu}_{\\phantom{i} \\nu i} \\equiv \\langle e^{\\mu} | \\partial_i | e_{\\nu} \\rangle\n= \\langle e^{\\mu} | \\partial_i e_{\\nu} \\rangle \\; .\n\\end{equation}\n The $i$ index is located after the index of the state being differentiated.\n This choice can be remembered by thinking of it as $\\partial_i | e_{\\mu} \\rangle = \n\\partial | e_{\\mu} \\rangle \/ \\partial R^i$.\n This second expression of the covariant derivative (Eq.~\\ref{covdevdef2})\ncan be shown to give a tensor within this \nformalism (see Appendix~\\ref{BasisChangeAppendix}).\n\n\n\n\n\n\\subsubsection{Affine connection}\n\n The quantity defined in Eq.~\\ref{christoffel} is also a non-tensor and \nplays the role of the Christoffel symbols of the second kind in the Levi-Civita \nconnection of conventional differential geometry.\n Keeping the 
nomenclature, our Christoffel symbols as defined in \nEq.~\\ref{christoffel} thus define the affine connection relevant to our problem.\n Remember, however, that this is not differential geometry for curved\nspaces, where the tangent space at one point directly relates to the \noverall manifold, but \nrather\nfor a rotating Hilbert space $\\Omega$ within ${\\cal H}$ \nwhen moving in parameter space $\\Theta$.\n In the former, there is one metric tensor at every point, while in the latter\nthere are still two metrics, one for $\\Omega$ and one for $\\Theta$.\n Hence, \nwe do not establish a relationship\nbetween the defined Christoffel symbols \nand the electronic metric and its derivatives.\n\n The defined Christoffel symbols \nexhibit\nother expected properties.\n They give, for instance, the expansion coefficients in $\\Omega$ of the \nderivative of a basis vector:\n\\begin{equation*}\nP_{\\Omega} \\, \\partial_i | e_{\\nu} \\rangle = | e_{\\mu} \\rangle \\langle e^{\\mu} \n| \\partial_i | e_{\\nu} \\rangle = | e_{\\mu} \\rangle D^{\\mu}_{\\phantom{i} \\nu i} \\, ,\n\\end{equation*}\nso that when moving from one point $\\mathbf{R} \\in \\Theta$ to another\ninfinitesimally close to it $\\mathbf{R} + \\mathrm{d}\\mathbf{R}$, the basis\nvectors transform as\n\\begin{equation}\n\\label{BasisProp}\n| e_{\\nu} (\\mathbf{R} + \\mathrm{d}\\mathbf{R}) \\rangle =\n| e_{\\nu} (\\mathbf{R}) \\rangle + | e_{\\mu} (\\mathbf{R}) \\rangle \nD^{\\mu}_{\\phantom{e}\\nu i} \\mathrm{d} R^i \\; ,\n\\end{equation}\nto linear order for infinitesimal $\\mathrm{d} \\mathbf{R}$.\n The second term of the right hand side accounts for the fact that\nnot only the space is turning, but the basis set itself is changing when\ndisplacing in $\\Theta$.\n\n The turning of the Hilbert space $\\Omega$ as we move in parameter\nspace $\\Theta$ demands the definition of\nthe way the vector $|\\psi (\\mathbf{R}) \\rangle$ in $\\Omega (\\mathbf{R})$\npropagates when moving to the neighbouring point 
$\\mathbf{R} + \\mathrm{d}\n\\mathbf{R}$ into the slightly turned Hilbert space $\\Omega(\\mathbf{R}+\\mathrm{d}\n\\mathbf{R})$. \n The required propagation is given by \n\\begin{equation}\n\\label{prop}\n\\psi^{\\mu} (\\mathbf{R} + \\mathrm{d}\\mathbf{R}) = \n\\psi^{\\mu} (\\mathbf{R}) + \\partial_i \\psi^{\\mu} (\\mathbf{R}) \\mathrm{d} R^i \\; ,\n\\end{equation}\nagain for infinitesimal $\\mathrm{d} \\mathbf{R}$.\n This means that the vector is first propagated in $\\Omega (\\mathbf{R})$\nand then projected into $\\Omega (\\mathbf{R} + \\mathrm{d}\\mathbf{R})$ by \nretaining the same vector components. \n This is consistent \nto linear order\nwith the projective propagation of the state $|\\psi\\rangle$ when\nmoving from $\\mathbf{R}$ to $\\mathbf{R} + \\mathrm{d}\\mathbf{R}$, \n\\begin{multline}\n\\label{abstractprop}\n|\\psi (\\mathbf{R} + \\mathrm{d}\\mathbf{R}) \\rangle =\nP_{\\Omega (\\mathbf{R} + \\mathrm{d}\\mathbf{R})} \\{\nP_{\\Omega (\\mathbf{R})} |\\psi (\\mathbf{R}) \\rangle \\\\ +\nP_{\\Omega (\\mathbf{R})} \\partial_i [ P_{\\Omega (\\mathbf{R})}\n| \\psi (\\mathbf{R})\\rangle ] \\mathrm{d} R^i \\} \\, .\n\\end{multline}\n This last expression shows the propagation of a vector when moving in\n$\\Theta$, irrespective of basis set and basis set transformation.\n It involves the covariant derivative in its final term.\n The equivalence between Eqs.~\\ref{prop} and \\ref{abstractprop}\nis shown in Appendix~\\ref{PropAppendix}.\n\n Differently from canonical differential geometry of curved spaces,\nin the formalization presented here the proposed Christoffel symbols\ndo not necessarily contract with the metric tensor following conventional\nrules \nfor lowering or raising indices. 
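The defining property of the symbols, $P_{\Omega} \, \partial_i | e_{\nu} \rangle = | e_{\mu} \rangle D^{\mu}_{\phantom{e}\nu i}$, can be verified directly in a small numerical sketch (invented one-parameter basis of a 2D subspace of $\mathbb{R}^3$ that both turns and deforms; illustrative only):

```python
import numpy as np

# Invented basis of a 2D subspace of R^3, depending on a single parameter R
# (so the parameter space Theta is one-dimensional here).
def basis(R):
    return np.array([[1.0, 0.2 * R, 0.1 * R**2],
                     [0.3 * R, 1.0, 0.1]])

def dual(e):                              # dual basis, rows are |e^mu>
    return np.linalg.inv(e @ e.T) @ e

R, h = 0.7, 1e-5
e, ed = basis(R), dual(basis(R))
P = e.T @ ed                              # projector onto Omega(R)

de = (basis(R + h) - basis(R - h)) / (2 * h)   # d_i |e_nu> by finite differences
D = ed @ de.T                             # Christoffel symbols D^mu_{nu i}

# The symbols are the in-Omega expansion coefficients of each d_i |e_nu>:
# columns of P @ de.T match columns of e.T @ D.
```

Note that $\partial_i | e_{\nu} \rangle$ itself has a component outside $\Omega(\mathbf{R})$ (the turning of the space); the Christoffel symbols capture only its projection onto $\Omega$.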
\n The required equalities \namong these objects are presented in Appendix~\\ref{ChristAppendix}.\n\n\n\n\n\\subsubsection{Covariant derivative for the bra representation}\n\n The natural representation of the bra of any state, $\\langle \\psi | $,\nis $\\psi_{\\mu} = \\langle \\psi | e_{\\mu} \\rangle$.\n Its covariant derivative is defined in analogy to Eq.~\\ref{covdevdef},\nnamely,\n\\begin{equation}\n\\label{covdevbra}\n\\text{\\dh}_i \\psi_{\\mu} \\equiv \\, \\partial_i \\left \\{ \\langle \\psi | P_{\\Omega} \\, \\right \\} \n| e_{\\mu} \\rangle \\, ,\n\\end{equation}\nwhich, in analogy with what was done for Eq.~\\ref{covdevdef2}, gives\n\\begin{equation}\n\\label{covdevbra2}\n\\text{\\dh}_i \\psi_{\\mu} = \\partial_i \\psi_{\\mu} + \\psi_{\\nu} D^{\\nu}_{\\phantom{i} i \\mu} \\; ,\n\\end{equation}\nwhere we have defined the corresponding Christoffel symbol\n\\begin{equation}\n\\label{christoffelbra}\nD^{\\nu}_{\\phantom{i} i \\mu} \\equiv \\langle \\partial_i e^{\\nu} | e_{\\mu} \\rangle \n\\end{equation}\n(see the different order of indices in the symbol as compared to the \ndefinition in Eq.~\\ref{christoffel}).\n\n The Christoffel symbols of Eqs.~\\ref{christoffel} and \\ref{christoffelbra}\nare easy to interrelate, since \n$\\langle e^{\\mu} | e_{\\nu} \\rangle = \\delta^{\\mu}_{\\phantom{i}\\nu} $ for all\n$\\mathbf{R} \\in \\Theta$, and, therefore,\n\\begin{equation*}\n\\partial_i \\delta^{\\mu}_{\\phantom{i}\\nu} = \\langle \\partial_i e^{\\mu} | e_{\\nu} \\rangle +\n\\langle e^{\\mu} | \\partial_i e_{\\nu} \\rangle = D^{\\mu}_{\\phantom{i} i \\nu} +\nD^{\\mu}_{\\phantom{i} \\nu i} = 0 \\; ,\n\\end{equation*}\nwhich leads to\n\\begin{equation}\n\\label{christoffelrelation}\nD^{\\mu}_{\\phantom{i} i \\nu} = - D^{\\mu}_{\\phantom{i} \\nu i} \\; \n\\end{equation}\n(general relations among Christoffel symbols can be found in \nAppendix~\\ref{ChristAppendix}).\n The covariant derivative for the bra can then be written 
as\n\\begin{equation}\n\\label{covdevbra3}\n\\text{\\dh}_i \\psi_{\\mu} = \\partial_i \\psi_{\\mu} - \\psi_{\\nu} D^{\\nu}_{\\phantom{i} \\mu i} \\; .\n\\end{equation}\n\n This derivative transforms as the corresponding tensor, as can be easily checked\nfollowing either of the two procedures used above (and in \nAppendix~\\ref{BasisChangeAppendix}) for the case of the ket representation.\n\n\n\n\n\n\n\\subsubsection{Covariant derivative for operators}\n\n Following the spirit of Eqs.~\\ref{covdevdef} and \\ref{covdevbra}, let us\ndefine the covariant derivative of an operator $H$ in its natural representation\n$H^{\\mu}_{\\phantom{e}\\nu}$ as\n\\begin{equation}\n\\label{covdevop}\n\\text{\\dh}_i H^{\\mu}_{\\phantom{e}\\nu} \\equiv \\langle e^{\\mu} | \\partial_i \\left \\{\nP_{\\Omega} H P_{\\Omega} \\right \\} | e_{\\nu} \\rangle \\;,\n\\end{equation}\nwhich becomes\n\\begin{equation}\n\\label{covdevop2}\n\\text{\\dh}_i H^{\\mu}_{\\phantom{e}\\nu} = \\partial_i H^{\\mu}_{\\phantom{e}\\nu} + \nD^{\\mu}_{\\phantom{e}\\lambda i} H^{\\lambda}_{\\phantom{e}\\nu} - \nH^{\\mu}_{\\phantom{e}\\lambda} D^{\\lambda}_{\\phantom{e}\\nu i} \\; .\n\\end{equation}\n For the last term we have used Eq.~\\ref{christoffelrelation}.\n This last expression coincides with the usual definition of the \ncovariant derivative of a second-rank tensor in other differential\ngeometry contexts, including general relativity, once the \nconnection (the Christoffel symbols) has been defined.\n It can also be written as \n\\begin{equation*}\n\\text{\\dh}_i H^{\\mu}_{\\phantom{e}\\nu} = \\partial_i H^{\\mu}_{\\phantom{e}\\nu} \n+ [D,H]^{\\mu}_{\\phantom{e}\\nu i} \\; ,\n\\end{equation*}\nwhere the commutator in the last term is defined as the last two terms\nin Eq.~\\ref{covdevop2}.\n\n This definition gives a well-behaved tensor, transforming as such\nunder a basis set change. 
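A quick numerical illustration of Eq.~\ref{covdevop2} (an invented toy model, not the paper's implementation): take a basis that changes inside a fixed two-dimensional space $\Omega = \mathbb{R}^2$ and a fixed operator. Then $P_{\Omega} H P_{\Omega}$ is constant, so the covariant derivative must vanish identically, even though the components $H^{\mu}_{\phantom{e}\nu}$ change with the parameter.

```python
import numpy as np

# Invented basis changing inside a FIXED 2D space (Omega = R^2 throughout).
def basis(R):
    return np.array([[1.0, 0.2 * R],
                     [0.3 * R, 1.0]])

def dual(e):                              # dual basis, rows are |e^mu>
    return np.linalg.inv(e @ e.T) @ e

H_amb = np.array([[1.0, 0.4],
                  [0.4, 3.0]])            # a fixed Hermitian operator

def H_nat(R):                             # natural representation H^mu_nu(R)
    return dual(basis(R)) @ H_amb @ basis(R).T

R, h = 0.5, 1e-5
e, ed = basis(R), dual(basis(R))
de = (basis(R + h) - basis(R - h)) / (2 * h)
D = ed @ de.T                             # Christoffel symbols D^mu_{nu i}

dH = (H_nat(R + h) - H_nat(R - h)) / (2 * h)
covH = dH + D @ H_nat(R) - H_nat(R) @ D   # dH + [D, H], the operator covariant derivative
```

The commutator term exactly cancels the (nonzero) ordinary derivative of the components, showing that only the basis change, not the operator, varies here.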
\n It can be straightforwardly checked following either of\nthe two procedures used before for $\\text{\\dh}_i \\psi^{\\mu}$: by noticing\nfrom Eq.~\\ref{covdevop} that it is a tensor representation of an operator \nwithin $\\Omega$, and by following Appendix~\\ref{BasisChangeAppendix}. \n It is also a definition consistent with the previous ones. \n The following Leibniz chain rule for a vector\n\\begin{equation}\n\\label{chain}\n\\text{\\dh}_i \\left ( H^{\\mu}_{\\phantom{e}\\nu} \\psi^{\\nu} \\right ) =\n\\left ( \\text{\\dh}_i H^{\\mu}_{\\phantom{e}\\nu} \\right ) \\psi^{\\nu} +\nH^{\\mu}_{\\phantom{e}\\nu} \\left ( \\text{\\dh}_i \\psi^{\\nu} \\right ) \\; ,\n\\end{equation}\nand the expected behaviour of a scalar (zero-rank tensor)\n\\begin{equation}\n\\label{e_deriv}\n\\text{\\dh}_i E = \n\\text{\\dh}_i \\left ( \\psi_{\\mu} H^{\\mu}_{\\phantom{e}\\nu} \\psi^{\\nu} \\right ) = \n\\partial_i \\left ( \\psi_{\\mu} H^{\\mu}_{\\phantom{e}\\nu} \\psi^{\\nu} \\right ) = \\partial_i E \\; ,\n\\end{equation}\nare proved in Appendix~\\ref{ChainRuleAppendix}.\n\n\n\n\n\n\\subsubsection{Matrix representation}\n\n Let us extend the previous definitions to the matrix representation, \nwhich will be useful in the next section. 
\n Since the ket is equally represented in the natural and matrix representations,\nthe covariant derivative of a ket is also equally defined, as in Eq.~\\ref{covdevdef2}.\n The bra is, however, different, since $\\langle \\psi |$ is represented by\n$\\psi^{\\mu *} = \\langle \\psi | e^{\\mu} \\rangle$ (see details of present notation\nin Appendix~\\ref{NotationAppendix}).\n Its covariant derivative is then defined as\n\\begin{equation*}\n\\text{\\dh}_i \\psi^{\\mu *} = \\partial_i \\left \\{ \\langle \\psi | P_{\\Omega} \\right \\} | \ne^{\\mu} \\rangle \\; ,\n\\end{equation*}\nand, therefore, without resorting to the ambient Hilbert space ${\\cal H}$, \nit is expressed as\n\\begin{equation*}\n\\text{\\dh}_i \\psi^{\\mu *} = \\partial_i \\psi^{\\mu *} + \\psi^{\\nu *} \nD_{\\nu i}^{\\phantom{ei}\\mu} \\; ,\n\\end{equation*}\nwhich is just the Hermitian conjugate of Eq.~\\ref{covdevdef2}\n(see Appendix~\\ref{ChristAppendix} for relations among Christoffel symbols).\n For an operator in this representation, the covariant derivative is defined as\n\\begin{equation*}\n\\text{\\dh}_i H_{\\mu\\nu} = \\langle e_{\\mu} | \\partial_i \\left \\{ P_{\\Omega} H\nP_{\\Omega} \\right \\} | e_{\\nu} \\rangle \\; ,\n\\end{equation*}\nwhich becomes\n\\begin{equation}\n\\label{covdevHmatrix}\n\\text{\\dh}_i H_{\\mu\\nu} = \\partial_i H_{\\mu\\nu} +\nD_{\\mu\\phantom{e} i}^{\\phantom{e}\\sigma} H_{\\sigma\\nu} +\nH_{\\mu\\sigma} D_{\\phantom{e} i \\nu}^{\\sigma} \\; .\n\\end{equation}\n\n\n\n\n\n\n\\subsubsection{Geometric interpretation of the affine connection}\n\n Eqs.~\\ref{prop} and \\ref{abstractprop} give the basis for a clearer geometric \ninterpretation of the affine connection defined above, and the\ncorresponding covariant derivative.\n Closing Eq.~\\ref{abstractprop} from the left with $\\langle e^{\\mu} (\n\\mathbf{R} + \\mathrm{d}\\mathbf{R}) |$, one obtains\n\\begin{equation}\n\\label{affinerot}\n\\psi^{\\mu} ( \\mathbf{R} + \\mathrm{d}\\mathbf{R}) 
=\nA^{\\mu}_{\\phantom{e}\\nu} (\\mathbf{R} + \\mathrm{d}\\mathbf{R} : \\mathbf{R})\n\\big \\{ \\psi^{\\nu} + ( \\text{\\dh}_i \\psi^{\\nu} ) \\mathrm{d} R^i \\big \\} \\, ,\n\\end{equation}\nwhere the terms within the curly brackets are defined at {\\bf R},\nand \n\\begin{equation}\n\\label{TimeOverlap}\nA^{\\mu}_{\\phantom{e}\\nu} (\\mathbf{R} + \\mathrm{d}\\mathbf{R} : \\mathbf{R})\n\\equiv \\langle e^{\\mu} (\\mathbf{R} + \\mathrm{d}\\mathbf{R}) | \ne_{\\nu} (\\mathbf{R}) \\rangle \\, ,\n\\end{equation}\ndefines the basis transformation (as in Appendix~\\ref{BasisChangeAppendix})\nwhen moving from {\\bf R} to $\\mathbf{R} + \\mathrm{d}\\mathbf{R}$ \n(the linear equivalence between Eqs.~\\ref{affinerot} and \\ref{prop} is shown in \nAppendix~\\ref{Geometric-Appendix}).\n Eq.~\\ref{TimeOverlap} represents the local gauge transformation.\nIn principle any invertible matrix with a smooth behaviour with respect to {\\bf R}\nis allowed.\n\n In Eq.~\\ref{affinerot}, the basis set transformation information is now carried by\nthe $A$ tensor instead of the Christoffel symbols.\n The Christoffel symbols would reappear in Eq.~\\ref{affinerot}\nif replacing the covariant derivative by its definition in terms of \nthe regular derivative, Eq.~\\ref{covdevdef2}.\n That would indeed defeat the purpose of Eq.~\\ref{affinerot} (and would\nbring us back to Eq.~\\ref{prop}, as shown in the mentioned \nAppendix~\\ref{Geometric-Appendix}). \n The usefulness of expressing the propagation as in Eq.~\\ref{affinerot}\nbecomes evident when using it, for instance, for finite-differences time integrators \nfor solving the time-dependent Schr\\\"odinger equation, as we will do in \nSection~\\ref{quantumevolution} and Appendix~\\ref{ModifiedCK-Appendix}, \nwhere we will essentially replace \n$\\text{\\dh}_t \\psi^{\\mu}$ by $-i H^{\\mu}_{\\phantom{e}\\nu} \\psi^{\\nu}$. 
\n\n In addition to its utility for integrators, it is presented here because it conveys \nthe geometric meaning of the affine connection quite clearly: the covariant derivative \nis the intrinsic one, independent of the basis set change, accounting for both the \nphysical variation of the state and the turning of the Hilbert space $\Omega$, \nwhile the Christoffel symbols linearly account for the basis set transformation.\n\n\n\n\subsubsection{Rotation versus deformation}\n\n So far we have talked about basis change or transformation in general.\n We make here the distinction between pure rotations of\nthe basis and what we will call basis {\it deformation} (in analogy with \nelasticity theory).\n They are defined as the ones for anti-Hermitian and Hermitian $D_{\mu\nu i}$,\nrespectively.\n A small arbitrary transformation will thus have a rotation and a deformation\n component, which can be obtained from $( D_{\mu\nu i} - D^{*}_{\nu\mu i}) \/ 2$,\n and $( D_{\mu\nu i} + D^{*}_{\nu\mu i} ) \/ 2$, respectively.\n\n Appendix~\ref{rotation-appendix} shows how a small unitary transformation of\n the basis, defined as one that keeps the overlap constant (and therefore\ncorresponds to basis rotations in $\Omega$), has an associated anti-Hermitian\n$D_{\mu\nu i}$ tensor.\n This consideration will be relevant in the Applications section below, in the\ncontext of finite-differences integrators.\n \n\n\n\subsubsection{Parallel transport and unitary propagation}\n\n\label{ParallelTransport}\n\n The analog of parallel transport in curved spaces would be the \npropagation of a state vector $|\psi \rangle \in \Omega (\mathbf{R})$, \nwhich, in itself, would not vary (it would be constant if $\Omega={\cal H}$) \nwhen moving in $\Theta$ away from point {\bf R}.
\n Such intrinsic constancy is reflected by a null covariant derivative, \n\begin{equation*}\n\text{\dh}_i \psi^{\mu} = \partial_i \psi^{\mu} + \nD^{\mu}_{\phantom{e}\nu i} \psi^{\nu} = 0 \, .\n\end{equation*}\n Parallel transport of any vector along any line in $\Theta$\nis then obtained from Eq.~\ref{prop}, propagating \n\begin{equation*}\n\psi^{\mu} (\mathbf{R} + \mathrm{d}\mathbf{R}) = \n\psi^{\mu} (\mathbf{R}) - D^{\mu}_{\phantom{e}\nu i} (\mathbf{R}) \n\psi^{\nu} (\mathbf{R}) \mathrm{d} R^i \n\end{equation*}\n(for infinitesimal d{\bf R})\nalong the corresponding line.\n\n Vectors that are orthogonal to each other at a given point would\npropagate keeping their orthogonality. \n Appendix~\ref{UnitaryAppendix} shows that any unitary propagation\n(one for which the condition $\text{\dh}_i \{ \psi_{n\mu} \psi^{\mu}_{\phantom{e}m} \} \n= 0$ is preserved)\nmaintains the orthogonality of propagated vectors.\n Parallel transport is a special case, since $\text{\dh}_i \psi^{\nu}_m\n= \text{\dh}_i \psi_{\mu n} = 0$.\n\n More generally, a set of vectors in parallel transport keep their mutual \nscalar products (this can also be seen following the reasoning of \nAppendix~\ref{UnitaryAppendix}).\n This also applies to the two metric tensors, which are composed \nprecisely of the scalar products of basis vectors.\n Since such scalar products are preserved under parallel transport, \nwe have\n\begin{equation*}\n\text{\dh}_i S_{\mu\nu}= \text{\dh}_i S^{\mu\nu} = 0 \, ,\n\end{equation*}\nwhich reflects a fundamental property of the theory, that the covariant derivative \nconserves the metric.\n This result is also derived explicitly in Appendix~\ref{ChristAppendix}.\n\n\n\n\n\n\n\subsubsection{Curvature}\n\n The Riemann-Christoffel curvature of a curved space characterises the\nfact that if one parallel-transports a vector with the Levi-Civita connection \nalong one direction, and then along another
direction to reach a certain point, \nit gives a different result than if the order of the two directions is \nreversed (or, analogously, than if following a closed loop).\n This is quantified locally by the non-commutativity of second \ncovariant derivatives, defining the curvature tensor\n$R^{\mu}_{\phantom{e}i\nu j}$ such that\n\begin{equation}\n\label{curvdef}\nR^{\mu}_{\phantom{e}i \nu j} \psi^{\nu} = \n\text{\dh}_i \text{\dh}_j \psi^{\mu} - \text{\dh}_j \text{\dh}_i \psi^{\mu} \; .\n\end{equation}\n Using the covariant derivatives defined above, one obtains:\n\begin{equation}\n\label{curvexp}\nR^{\mu}_{\phantom{e} i \nu j} = \n\partial_i D^{\mu}_{\phantom{e}\nu j} - \partial_j D^{\mu}_{\phantom{e}\nu i} + \nD^{\mu}_{\phantom{e}\lambda i} D^{\lambda}_{\phantom{e}\nu j} -\nD^{\mu}_{\phantom{e}\lambda j} D^{\lambda}_{\phantom{e}\nu i} \, ,\n\end{equation}\nagain in perfect analogy to the expression for curved spaces.\n\n\n\n\n\subsubsection{Relation to Berry connection and curvature}\n\n In Subsection~\ref{ParameterSpace} the analogy with Berry's geometric phase\nformalism\cite{Berry} was mentioned.
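Before making that relation explicit, Eq.~\ref{curvexp} can be checked numerically on the standard spin-1/2 example (a hypothetical NumPy sketch; the state and the analytic value $(i\/2)\sin\theta$ are textbook results, not taken from this paper):

```python
import numpy as np

# Standard spin-1/2 bundle as a one-dimensional Omega (textbook example,
# used here as a hypothetical check of the curvature formula):
# |e(theta, phi)> = (cos(theta/2), e^{i phi} sin(theta/2)).
def e(th, ph):
    return np.array([np.cos(th / 2), np.exp(1j * ph) * np.sin(th / 2)])

def D(th, ph, h=1e-5):
    # the two Christoffel symbols D_theta, D_phi = <e|d_i e>
    d_th = (e(th + h, ph) - e(th - h, ph)) / (2 * h)
    d_ph = (e(th, ph + h) - e(th, ph - h)) / (2 * h)
    v = e(th, ph).conj()
    return np.array([v @ d_th, v @ d_ph])

th, ph, h = 0.7, 0.3, 1e-4
# curvature: d_theta D_phi - d_phi D_theta (the commutator term of the
# full expression vanishes for a one-dimensional Omega)
dDphi_dth = (D(th + h, ph)[1] - D(th - h, ph)[1]) / (2 * h)
dDth_dph = (D(th, ph + h)[0] - D(th, ph - h)[0]) / (2 * h)
curv = dDphi_dth - dDth_dph

# analytic value for this bundle: (i/2) sin(theta)
assert np.allclose(curv, 0.5j * np.sin(th), atol=1e-5)
```

Here $D_\theta = 0$ and $D_\phi = i\sin^2(\theta\/2)$, so the curvature is purely a derivative term, as expected for a one-dimensional $\Omega$.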
\n Indeed, for a quantum mechanical (single- or many-particle) state \n$| \\Psi ( \\mathbf{R} ) \\rangle$,\nthe Berry connection is normally defined as \n\\begin{equation*}\n{\\cal A}_{j} = i \\, \\langle \\Psi | \\partial_j \\Psi \\rangle\n\\end{equation*}\n($i=\\sqrt{-1}$),\nwhich is nothing but ($i$ times) the connection defined in this work \n(Eq.~\\ref{christoffel}) for a space $\\Omega$ spanned by a single state.\n It generalises to the trace\n\\begin{equation}\n\\label{BerryConnection}\n{\\cal A}_{j} = i \\, \\sum_n^{occ} \\langle \\psi_n | \\partial_j \\psi_n \\rangle \\; ,\n\\end{equation} \nfor a set of single-particle states $| \\psi_n \\rangle$ spanning the occupied space\nin the context of a mean-field-like approach to the many-particle problem (as is the case \nfor the Kohn-Sham states in density-functional theory).\n As expected from Berry's work, it is easy to see that \n\\begin{equation}\n\\label{TraceConnection}\n{\\cal A}_j = i \\, D^{\\mu}_{\\phantom{e}\\mu j} \\; ,\n\\end{equation}\nwhere $D^{\\mu}_{\\phantom{e}\\mu j}$ is \nthe trace of the connection defined in \nEq.~\\ref{christoffel}\n(bear in mind that the $| \\psi_n \\rangle$ states in this context play\nthe role of the basis of the relevant space, i.e., the $| e_{\\mu}\\rangle$ states\nof the previous sections, and $\\Omega$ refers here to the occupied space).\n The expression in Eq.~\\ref{BerryConnection} assumes orthonormal states,\nwhereas Eq.~\\ref{TraceConnection} is valid for any non-orthogonal basis of \nthe relevant $\\Omega$ space.\n More generally, the Berry connection matrix \n${\\cal A}_{mnj} = i \\, \\langle \\psi_m | \\partial_j \\psi_n \\rangle$ corresponds to\n($i$ times) this work's $D_{mn j}$ connection in the matrix representation, which\ncan be transformed to any other tensorial representation for\nnon-orthogonal states.\n\n Similarly to what happens to the connection, the curvature of this work and that \nof Berry are closely related.\n The Berry curvature is usually 
 defined as \n\begin{equation*}\n{\cal R}_{ij} = - 2 \; \mathrm{Im} \left \{ \langle \partial_i \Psi | \partial_j \Psi \rangle \right \} \; ,\n\end{equation*}\nfor a quantum mechanical state $| \Psi \rangle$,\nwhich generalises to \n\begin{equation}\n\label{BerryCurv}\n{\cal R}_{ij} = - 2 \; \mathrm{Im} \left \{ \sum_n^{occ} \langle \partial_i \psi_n | \partial_j \psi_n \n\rangle \right \} \n\end{equation}\nfor a set of single-particle states $| \psi_n \rangle$ spanning the occupied space.\n The curvatures in Eqs.~\ref{curvexp} and \ref{BerryCurv} are very closely \ninterrelated when considering our $\Omega$ Hilbert space as the occupied space\n(or any specific subspace that we are computing the curvature for).\n Starting with \n\begin{equation*}\n\partial_i D^{\mu}_{\phantom{e}\nu j} = \langle \partial_i e^{\mu} | \partial_j e_{\nu} \rangle\n+ \langle e^{\mu} | \partial_i \partial_j e_{\nu} \rangle \; ,\n\end{equation*}\nwe can easily see that\n\begin{equation*}\n\partial_i D^{\mu}_{\phantom{e}\nu j} - \partial_j D^{\mu}_{\phantom{e}\nu i} = \n\langle \partial_i e^{\mu} | \partial_j e_{\nu} \rangle -\n\langle \partial_j e^{\mu} | \partial_i e_{\nu} \rangle \; .\n\end{equation*}\nTracing now over the quantum indices gives, in analogy with the Ricci curvature,\n\begin{equation}\n\label{RicciBerry}\n{\cal R}_{ij} = R^{\mu}_{\phantom{e} i \mu j} = \n\langle \partial_i e^{\mu} | \partial_j e_{\mu} \rangle -\n\langle \partial_j e^{\mu} | \partial_i e_{\mu} \rangle \; ,\n\end{equation} \nsince the following traces cancel:\n\begin{equation*}\nD^{\mu}_{\phantom{e}\lambda i} D^{\lambda}_{\phantom{e}\mu j} -\nD^{\mu}_{\phantom{e}\lambda j} D^{\lambda}_{\phantom{e}\mu i} = 0 \; .\n\end{equation*}\n\n If the basis $\{ | e_{\mu} \rangle \}$ is invariably orthonormal, then\n$\langle e^{\mu} | = \langle e_{\mu} |$, for any {\bf R}, and thus\n$\langle
\\partial_i e^{\\mu} | = \\langle \\partial_i e_{\\mu} |$, and\n\\begin{equation*}\n\\langle \\partial_j e_{\\mu} | \\partial_i e_{\\mu} \\rangle =\n\\langle \\partial_i e_{\\mu} | \\partial_j e_{\\mu} \\rangle ^* \\; ,\n\\end{equation*}\nand, therefore, \n\\begin{equation}\n\\label{BerryRicci}\n{\\cal R}_{ij} = 2 i \\; \\mathrm{Im} \\left \\{\n\\langle \\partial_i e_{\\mu} | \\partial_j e_{\\mu} \\rangle \\right \\} \\; ,\n\\end{equation} \nwhich is nothing but Eq.~\\ref{BerryCurv} (times $i$) for the \n$| \\psi_n \\rangle$ states taken as an orthonormal basis of \noccupied space $\\Omega$.\n Therefore, Berry's curvature is nothing but the Ricci curvature of \nour turning occupied space.\n\n This result is directly generalizable to any other orthonormal basis\nof occupied space, e.g., a basis of Wannier functions, still \nunder Eq.~\\ref{BerryRicci}.\n If the basis is non-orthogonal (non-orthogonal Wannier functions), \nEq.~\\ref{RicciBerry} is then the relevant definition.\n If seeking an expression closer to Eq.~\\ref{BerryRicci}, the definition in \nEq.~\\ref{RicciBerry} can also be re-expressed as\n\\begin{align}\n\\label{BerryRicciNon}\n{\\cal R}_{ij} = & 2 i \\; \\mathrm{Im} \\left \\{ S^{\\mu\\nu}\n\\langle \\partial_i e_{\\nu} | \\partial_j e_{\\mu} \\rangle \\right \\} \\notag \\\\ \n& + ( \\partial_i S^{\\mu\\nu} ) D_{\\nu\\mu j} - ( \\partial_j S^{\\mu\\nu} ) D_{\\nu\\mu i} \\; ,\n\\end{align}\nwhich, in addition to the expected redefinition of the trace with the metric tensor \nin the first term, includes two additional terms related to the variation \nof the metric itself.\n\n\n\n\\subsubsection{The topology of $\\Omega (\\mathbf{R})$}\n\n When solving a quantum-mechanical problem using a finite basis\nset that changes in parameter space, we will then have two relevant\nfibre bundles, one within the other: the one spanned by the basis, $\\Omega$,\nand the one spanned by the occupied states. 
\n The curvature and topology of $\Omega$({\bf R}),\nand their relation with those of the occupied space, could have \nimplications for the quality of the approximation implied by the basis.\n We will not explore this point further in this paper. \n We can, however, point to Ref.~\onlinecite{Mead1992} for the implications\nof Berry concepts on the dependence of the occupied space on atomic positions\nin molecular systems, including the effect of conical intersections, for instance.\n Rethinking such ideas for a wider-than-occupied space could be\nan avenue for future investigation.\n\n\n\n\section{Applications}\n\n\n\n\n\subsection{Quantum time evolution}\n\n\label{quantumevolution}\n\n\subsubsection{Basic equations}\n \n Using the present formalism, we can consider a one-dimensional \nparameter space with time as the only variable. \n In the natural representation, the time-dependent Schr\\\"odinger \nequation $H | \psi \rangle = i \partial_t | \psi \rangle$ becomes \nsimply\n\begin{equation}\n\label{td}\nH^{\mu}_{\phantom{e}\nu} \psi^{\nu} = i \, \text{\dh}_t \, \psi^{\mu} \, ,\n\end{equation}\nwhere the time covariant derivative is defined as \n\begin{equation*}\n\text{\dh}_t \, \psi^{\mu} = \partial_t \, \psi^{\mu} + D^{\mu}_{\phantom{e}\nu t} \n\, \psi^{\nu} \, ,\n\end{equation*}\nand the corresponding temporal Christoffel symbol as\n\begin{equation*}\nD^{\mu}_{\phantom{e}\nu t} = \langle e^{\mu} | \partial_t | e_{\nu} \rangle\n= \langle e^{\mu} | \partial_t e_{\nu} \rangle \, .\n\end{equation*}\n Equation \ref{td} reflects the physics in a basis-set independent form, \nin the sense that the well-behaved tensors in the equation\nall transform as in Appendix~\ref{BasisChangeAppendix}, \nand transmit the physics of the original Schr\\\"odinger\nequation for an evolving basis set and Hilbert space.\n Eq.~\ref{td} can be obtained by minimising the action\nfor the following
Lagrangian: \n\begin{equation*}\nL = i \, \psi_{\mu} \text{\dh}_t \psi^{\mu} - \psi_{\mu} H^{\mu}_{\phantom{e}\nu}\n\psi^{\nu} \; ,\n\end{equation*}\nwhich is easily obtained using the ideas above from the standard\n$L = i \, \langle \psi | \partial_t \psi \rangle - \langle \psi | H | \psi \rangle$.\n \n The matrix representation also allows a concise expression\nof the physical equation, albeit less elegantly, as it carries\nthe metric tensors along:\n\begin{equation}\n\label{matschroe}\nH_{\mu\nu} \psi^{\nu} = i \, S_{\mu\nu} \,\text{\dh}_t \, \psi^{\nu} \, ,\n\end{equation}\nwhich corresponds to Eq.~\ref{old-td-eq} of the Introduction, or\n\begin{equation*}\nS^{\mu\sigma} H_{\sigma\nu} \psi^{\nu} = i \,\text{\dh}_t \, \psi^{\mu} \, .\n\end{equation*}\n\n If one follows the propagation of the density matrix\ninstead of that of the wave functions, the dynamics is\ndefined by the Liouville-von Neumann equation,\n\begin{equation*}\ni \, \partial_t \rho = [ H, \rho] \; ,\n\end{equation*}\nwhere the density operator $\rho$ would be\n\begin{equation*}\n\rho (t) = | \Psi (t) \rangle \langle \Psi (t) |\n\end{equation*}\nfor a pure quantum state, or \n\begin{equation*}\n\rho (t) = \sum_n^{occ} | \psi_n (t) \rangle \langle \psi_n (t) |\n\end{equation*}\nfor the set of occupied states in a mean-field setting. \n (It can also be generalised to statistical mixtures,\nincluding thermal ones.)
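For a fixed basis, where $\text{\dh}_t = \partial_t$, Eq.~\ref{matschroe} can be integrated exactly through the generalized eigenproblem $H c = \epsilon S c$. The following NumPy sketch (with hypothetical $2\times 2$ matrices) verifies that the norm $\psi_{\mu}\psi^{\mu}$, i.e.\ $\psi^{\dagger} S \psi$, is conserved:

```python
import numpy as np

# Hypothetical 2x2 example of the matrix-representation Schroedinger
# equation for a fixed non-orthogonal basis: H psi = i S dpsi/dt,
# solved via the generalized eigenproblem H c = eps S c.
S = np.array([[1.0, 0.3], [0.3, 1.0]])
H = np.array([[0.0, 0.5], [0.5, 1.0]])

s, V = np.linalg.eigh(S)                      # Loewdin factors of S
S_half = V @ np.diag(np.sqrt(s)) @ V.T
S_m_half = V @ np.diag(1 / np.sqrt(s)) @ V.T

eps, U = np.linalg.eigh(S_m_half @ H @ S_m_half)

def propagate(psi0, t):
    # psi(t) = S^{-1/2} U exp(-i eps t) U^dag S^{1/2} psi(0)
    return S_m_half @ (U @ (np.exp(-1j * eps * t) * (U.conj().T @ (S_half @ psi0))))

psi0 = np.array([1.0, 0.0], dtype=complex)
psi_t = propagate(psi0, 2.7)

# the norm <psi|psi> = psi^dag S psi is conserved by the exact evolution
assert np.allclose(psi_t.conj() @ S @ psi_t, psi0.conj() @ S @ psi0)
```

The same factorisation of $S$ reappears below in the L\"owdin-based integrators.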
\n\n Again, for the evolving Hilbert space $\Omega$, the\nexpression of this equation in terms of the corresponding\ntensors is obtained by closing it with $\langle e^{\mu} |$ \nfrom the left and $ | e_{\nu} \rangle $ from the right, and\nsubstituting $\rho$ and $H$ by \n$P_{\Omega} \rho P_{\Omega}$ and \n$P_{\Omega} H P_{\Omega}$, giving\n\begin{equation*}\ni \, \text{\dh}_t \rho^{\mu}_{\phantom{e}\nu} = \n[ H, \rho]^{\mu}_{\phantom{e}\nu} = \nH^{\mu}_{\phantom{e}\sigma} \rho^{\sigma}_{\phantom{e}\nu} -\n\rho^{\mu}_{\phantom{e}\sigma} H^{\sigma}_{\phantom{e}\nu} \; \n\end{equation*}\nin its natural representation.\n The matrix representation of this equation\nis much less elegant.\n The conventional definition of the density matrix in a typical quantum chemistry\nsetting is $\sum_n^{occ} C_{\mu n} C_{\nu n}^*$, in the language \nof Eq.~\ref{coeff-expansion} of the Introduction. \n This is nothing but \n\begin{equation*}\n\rho^{\mu\nu} = \langle e^{\mu} | \, \rho \, | e^{\nu} \rangle \; .\n\end{equation*} \n In this representation, the Liouville-von Neumann equation becomes\n\begin{equation*}\ni \, \text{\dh}_t \rho^{\mu\nu} = \nS^{\mu\sigma} H_{\sigma\kappa} \rho^{\kappa\nu} -\n\rho^{\mu\sigma} H_{\sigma\kappa} S^{\kappa\nu} \; .\n\end{equation*}\n\n For any of the former equations, the time dependence of the basis \norbitals may be due to the variation in time of other parameters like\natomic positions $R^i$.\n In such cases, the Christoffel symbols in the covariant derivatives would\nsatisfy\n\begin{equation*}\nD^{\mu}_{\phantom{e} \nu t} = v^i D^{\mu}_{\phantom{e} \nu i} \; ,\n\end{equation*} \n(or equivalent representations) where $v^i = \partial R^i \/ \partial t$ \nare the corresponding nuclear velocities.\n\n\n\n\n\n\subsubsection{Crank-Nicholson integrator}\n\n In various contexts a time-dependent Schr\\\"odinger-like equation \nis numerically solved by discretizing
time, using suitable\nintegrator algorithms (for a comparison of the performance and\nstability of different options see Refs.~\onlinecite{Koslov,Correa}).\n In the context of non-orthogonal basis sets \nand within mean-field-like\ntheories for which matrix inversion is affordable, \nthe Crank-Nicholson algorithm has been used quite \nsuccessfully.\cite{Tsolakidis2002} \n The generalization of that procedure to a moving basis set was\nachieved by incorporating a L\\\"owdin orthonormalisation step, \nan idea due to Sankey and collaborators,\cite{Sankey} which will \nbe discussed below. \n The Crank-Nicholson-L\\\"owdin procedure proved quite successful\nin the integration of the Kohn-Sham equations for several studies\nof electronic stopping power for ionic projectiles shooting through \nvarious materials.\cite{Zeb2013,Correa2013,Ullah2015} \n Here we define new integrators based on the Crank-Nicholson idea,\ninspired by the affine connection defined above, and we compare\nthem with the Crank-Nicholson-L\\\"owdin procedure.\n\n Let us first review the Crank-Nicholson method in this context\nfor non-moving bases.\cite{Tsolakidis2002}\n The basics: a state $| \psi \rangle$ evolving according to\n$H | \psi \rangle = i \partial_t | \psi \rangle$ can be propagated from $t$ to \n$t+\mathrm{d}t$ by considering the backwards and forwards evolution \nfrom each of those time points to the one in the middle, as follows:\n\begin{align}\n\label{CK-abstract}\n| \psi (t+\mathrm{d}t\/2 ) \rangle \n&= \Big \{ 1 - i \frac{\mathrm{d}t}{2} H (t) \Big \} | \psi (t) \rangle \notag \\\\\n| \psi (t+\mathrm{d}t\/2 ) \rangle \n&= \Big \{ 1 + i \frac{\mathrm{d}t}{2} H (t+\mathrm{d}t) \Big \} \n| \psi (t+\mathrm{d}t) \rangle \, ,\n\end{align}\nwhich, by eliminating the middle point, becomes\n\begin{align*}\n| \psi (t+\mathrm{d}t) \rangle = \Big [ 1 + i \frac{\mathrm{d}t}{2} \nH (t+\mathrm{d}t) \Big ]^{-1}\n\Big \{ 1 - i \frac{\mathrm{d}t}{2}
H (t) \\Big \\} | \\psi (t) \\rangle\n\\end{align*}\nthereby ensuring, by construction, that the algorithm respects\ninvariance under time reversal.\n\n The use of this algorithm is complicated by the dependence\non $H (t+\\mathrm{d}t)$, which requires the use of an \niterative self-consistency procedure.\\cite{Kaxiras}\n For practical purposes, however, in many implementations the \nalgorithm is simplified by using $H(t)$ in both factors, \n\\begin{align}\n\\label{CK-kets}\n| \\psi (t+\\mathrm{d}t) \\rangle = \\Big [ 1 + i \\frac{\\mathrm{d}t}{2} \nH (t) \\Big ]^{-1} \\Big \\{ 1 - i \\frac{\\mathrm{d}t}{2} H (t) \\Big \\} | \\psi (t) \\rangle\n\\end{align}\nwhich allows a direct evaluation of $| \\psi (t+\\mathrm{d}t) \\rangle$\nfrom information of the previous time step.\n This change is of course of no consequence if the Hamiltonian does\nnot change with time, but this is hardly the case for any mean-field-like\nHamiltonian (such as that of Kohn and Sham), given the dependence of the \nHamiltonian on the evolving electron density or wave-functions. 
\n\n For a fixed, not evolving basis set, this is used\\cite{Tsolakidis2002} as\n\\begin{multline}\n\\label{CK-natural}\n\\psi^{\\mu} (t+\\mathrm{d}t) = \\\\ \\Big [ \\delta_{\\mu}^{\\phantom{e}\\sigma} + \ni \\frac{\\mathrm{d}t}{2} H_{\\mu}^{\\phantom{e}\\sigma} (t) \\Big ]^{-1}\n\\Big \\{ \\delta^{\\sigma}_{\\phantom{e}\\nu} - i \\frac{\\mathrm{d}t}{2} \nH^{\\sigma}_{\\phantom{e}\\nu} (t) \\Big \\} \\psi^{\\nu}(t) \\, \n\\end{multline}\n(please note that the first factor does not indicate an inversion of the\nparticular $(\\mu,\\sigma)$ element, but of the tensor as a whole; see \nAppendix~\\ref{InverseAppendix} for the notation and relevant definitions \nused in this paper for inverse second-rank tensors).\n In the matrix representation it becomes\n\\begin{multline}\n\\label{CK-matrix}\n\\psi^{\\mu} (t+\\mathrm{d}t) = \\\\ \\Big [ S_{\\mu\\sigma} + i \\frac{\\mathrm{d}t}{2} \nH_{\\mu\\sigma} (t) \\Big ]^{-1} \n\\Big \\{ S_{\\sigma\\nu} - i \\frac{\\mathrm{d}t}{2} H_{\\sigma\\nu} (t) \\Big \\}\n\\psi^{\\nu}(t) \\, ,\n\\end{multline}\nkeeping in mind that the lower indices within the inverted tensor become \nupper indices (see Appendix~\\ref{InverseAppendix}).\n\n For fixed bases this algorithm has the enormous virtue of \nbeing strictly unitary,\\cite{Tsolakidis2002} in the sense that when \npropagating an orthonormal set of states $\\psi^{\\mu}_m$, the set \nremains orthonormal at $t+\\mathrm{d}t$ regardless of the size of d$t$.\n\n\n\n\n\\subsubsection{Revising Crank-Nicholson for a moving basis}\n\n For a moving basis set, the algorithm has to be generalized.\n One straightforward generalisation is achieved by replacing\n$H_{\\mu\\sigma}(t)$ in Eq.~\\ref{CK-matrix} by $H_{\\mu\\sigma}(t) \n- i D_{\\mu\\sigma t}(t)$ (in the matrix representation, for instance), giving\n\\begin{multline}\n\\label{CKmov-matrix}\n\\psi^{\\mu} (t+\\mathrm{d}t) = \\Big [ S_{\\mu\\sigma} + i \\frac{\\mathrm{d}t}{2} \n( H_{\\mu\\sigma} - i D_{\\mu\\sigma t} ) \\Big ]^{-1} \\\\ 
\times\n\Big \{ S_{\sigma\nu} - i \frac{\mathrm{d}t}{2} ( H_{\sigma\nu} - i D_{\sigma\nu t} ) \Big \} \n\psi^{\nu}(t)\n\end{multline}\n(where we have dropped the $t$-dependence of the tensors for\nclarity).\nHere, $D_{\mu\sigma t} = \langle e_{\mu} | \partial_t e_{\sigma} \rangle$\nis the required temporal Christoffel symbol.\n Eq.~\ref{CKmov-matrix} can be derived from Eq.~\ref{prop} using the \ntensorial time-dependent Schr\\\"odinger equation in Eq.~\ref{td} and the \ndefinition of the covariant derivative, Eq.~\ref{covdevdef2}.\n This is, however, correct only to linear order in d$t$ regarding both\nthe Hilbert space turning and the basis set transformation, \nand therefore the nice \nunitary propagation feature for arbitrary d$t$ is lost.\n Indeed, the loss of hermiticity of the propagated effective Hamiltonian\nmakes this problem apparent,\nalthough the propagation remains correct and well-behaved in the limit of\nsmall d$t$.\n\n A much more promising approach is obtained by building on \nEq.~\ref{affinerot} instead of Eq.~\ref{prop}, \nwhich exactly accounts \nfor basis set change if $\Omega(t+\mathrm{d}t)=\Omega(t)$.\n Within the matrix representation, one obtains\n\begin{multline}\n\label{CKmov-matrix2}\n\psi^{\mu} (t+\mathrm{d}t) = \n\Big [ S_{\mu\lambda}(t+\mathrm{d}t) + i \frac{\mathrm{d}t}{2} \nH_{\mu\lambda} (t+\mathrm{d}t) \Big ]^{-1} \\\\ \times\nA_{\lambda}^{\phantom{e}\sigma} (t +\mathrm{d}t : t) \n\Big \{ S_{\sigma\nu}(t) - i \frac{\mathrm{d}t}{2} H_{\sigma\nu} (t) \Big \} \n\psi^{\nu}(t) \; ,\n\end{multline}\nwhere $A_{\lambda}^{\phantom{e}\sigma} (t +\mathrm{d}t : t) =\n\langle e_{\lambda} (t +\mathrm{d}t) | e^{\sigma} (t) \rangle$ is defined \nanalogously to Eq.~\ref{TimeOverlap}.\n The derivation of Eq.~\ref{CKmov-matrix2} is given in \nAppendix~\ref{ModifiedCK-Appendix}.\n The algorithm given by Eq.~\ref{CKmov-matrix2} would again require an\niterative self-consistency
procedure for every time step.\n An analog to Eq.~\\ref{CK-kets} can also be considered \nwith a view to\nremoving the dependence on $H_{\\mu\\nu} (t+\\mathrm{d}t)$ and\n$S_{\\mu\\nu} (t+\\mathrm{d}t)$.\n This is achieved with the following Ansatz:\n\\begin{multline}\n\\label{CKmov-matrix3}\n\\psi^{\\mu} (t+\\mathrm{d}t) = A^{\\mu}_{\\phantom{e}\\lambda} (t +\\mathrm{d}t : t) \n\\\\ \\times \\Big [ S_{\\lambda\\sigma} (t) + i \\frac{\\mathrm{d}t}{2} \nH_{\\lambda\\sigma} (t) \\Big ]^{-1}\n\\Big \\{ S_{\\sigma\\nu} (t) - i \\frac{\\mathrm{d}t}{2} H_{\\sigma\\nu} (t) \\Big \\} \n\\psi^{\\nu}(t) \\, .\n\\end{multline}\n The integrator in Eq.~\\ref{CKmov-matrix3} keeps the strict unitary character \nof the algorithm for a transforming basis set in a fixed $\\Omega$ space. \n This can be seen by noticing that the last two transformations in \nEq.~\\ref{CKmov-matrix3} (the ones in curly and square brackets) correspond\nto the original Crank-Nicholson scheme for a fixed Hamiltonian, and that\nthe $A$ tensor transformation is nothing but a change of basis. \n The latter, however, is only linearly correct in d$t$ if $\\Omega$ does turn. 
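The fixed-$\Omega$ unitarity of this scheme can be illustrated with a toy example in which the (non-orthogonal) basis changes between $t$ and $t+\mathrm{d}t$ but spans the same space (a hypothetical NumPy sketch):

```python
import numpy as np

# Hypothetical toy example: the basis of Omega = C^2 changes between t
# and t+dt, but the spanned space does not turn, so the Crank-Nicholson
# step followed by the basis-change tensor A stays exactly unitary.
rng = np.random.default_rng(1)

def basis(t):
    th = 0.5 * t                          # columns: two non-orthogonal vectors
    return np.array([[np.cos(th), -np.sin(th) + 0.3],
                     [np.sin(th),  np.cos(th)]])

t, dt = 0.0, 0.5
E0, E1 = basis(t), basis(t + dt)
S0, S1 = E0.conj().T @ E0, E1.conj().T @ E1

B = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
H0 = B + B.conj().T                       # H_{mu nu}(t), Hermitian

# Crank-Nicholson within the t basis, then the basis change:
# A^mu_lambda = S^{mu kappa}(t+dt) <e_kappa(t+dt)|e_lambda(t)>
cn = np.linalg.solve(S0 + 0.5j * dt * H0, S0 - 0.5j * dt * H0)
A = np.linalg.solve(S1, E1.conj().T @ E0)

L = np.linalg.cholesky(S0)
psi = np.linalg.solve(L.conj().T, np.linalg.qr(rng.standard_normal((2, 2)))[0])
assert np.allclose(psi.conj().T @ S0 @ psi, np.eye(2))   # S(t)-orthonormal

psi_new = A @ (cn @ psi)
# exactly S(t+dt)-orthonormal for this finite dt, since Omega did not turn
assert np.allclose(psi_new.conj().T @ S1 @ psi_new, np.eye(2))
```

When $\Omega$ does turn within a larger ambient space, the same construction is only linearly correct in $\mathrm{d}t$, as discussed next.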
\n Indeed, for an arbitrarily large d$t$, the set \n$P_{\\Omega(t + \\mathrm{d}t)} | \\psi_m \\rangle$ \nis not necessarily orthonormal even if the set $P_{\\Omega(t)} | \n\\psi_m \\rangle$ was.\n The advantage of Eq.~\\ref{CKmov-matrix3} with respect to \nEq.~\\ref{CKmov-matrix} is important, however, since the space \nturning diminishes with better converged bases, while the basis \nchange does not necessarily diminish (think of the perfectly converged limit of \n$\\Omega={\\cal H}$ for a moving basis: the basis still changes and the \nHilbert space does not).\n\n In practice, the tensor $A^{\\mu}_{\\phantom{e}\\lambda} (t +\\mathrm{d}t : t)$\nof Eq.~\\ref{CKmov-matrix3} is somewhat inconvenient for calculations.\n It can be replaced by\n\\begin{multline}\n\\label{CKmov-matrix4}\n\\psi^{\\mu} (t+\\mathrm{d}t) = S^{\\mu\\kappa}(t+\\mathrm{d}t) A_{\\kappa\\lambda} \n(t +\\mathrm{d}t : t) \\\\ \\times \\Big [ S_{\\lambda\\sigma} (t) + i \\frac{\\mathrm{d}t}{2} \nH_{\\lambda\\sigma} (t) \\Big ]^{-1}\n\\Big \\{ S_{\\sigma\\nu} (t) - i \\frac{\\mathrm{d}t}{2} H_{\\sigma\\nu} (t) \\Big \\} \n\\psi^{\\nu}(t) \\, ,\n\\end{multline} \nwhich involves the inversion of the overlap matrix at $t+\\mathrm{d}t$\nand the calculation of the overlap between the $t$ basis vectors and\nthe ones at $t+\\mathrm{d}t$ (in addition to the Crank-Nicholson operations at $t$).\n Similarly, the practical implementation of the self-consistent procedure\nimplied by Eq.~\\ref{CKmov-matrix2} would actually imply using the\nfollowing self-consistent integrator instead:\n\\begin{multline}\n\\label{CKmov-matrix5}\n\\psi^{\\mu} (t+\\mathrm{d}t) = \n\\Big [ S_{\\mu\\lambda}(t+\\mathrm{d}t) + i \\frac{\\mathrm{d}t}{2} \nH_{\\mu\\lambda} (t+\\mathrm{d}t) \\Big ]^{-1} \\\\ \\times\nA_{\\lambda\\kappa} (t +\\mathrm{d}t \\! : \\! 
t) \\, \\Big \\{ \\delta^{\\kappa}_{\\phantom{e}\\nu} \n- i \\frac{\\mathrm{d}t}{2} S^{\\kappa\\sigma} (t) H_{\\sigma\\nu} (t) \\Big \\} \n\\psi^{\\nu}(t) \\, .\n\\end{multline}\n\n\n\n\n\n\n\n\\subsubsection{Procedure based on L\\\"owdin orthonormalization}\n\\label{secSankey}\n \n An alternative integrator was proposed\\cite{Sankey} and has been \nused\\cite{Zeb2013,Correa2013,Ullah2015} for strict unitary propagation\nfor arbitrary d$t$.\n Following the previous notation, this propagator can be written as\n\\begin{multline}\n\\label{Sankey}\n\\psi^{\\mu} (t+\\mathrm{d}t) = \\{ S^{-1\/2} (t+\\mathrm{d}t) \\}^{\\mu l} \n\\{ S^{1\/2} (t) \\}_{l\\lambda} \n\\\\ \\times \\Big [ S_{\\lambda\\sigma} (t) + i \\frac{\\mathrm{d}t}{2} \nH_{\\lambda\\sigma} (t) \\Big ]^{-1}\n\\Big \\{ S_{\\sigma\\nu} (t) - i \\frac{\\mathrm{d}t}{2} H_{\\sigma\\nu} (t) \\Big \\} \n\\psi^{\\nu}(t) \\, .\n\\end{multline} \n This is analogous to Eq.~\\ref{CKmov-matrix4} inasmuch as it first does the\nphysical (Crank-Nicholson) propagation for the basis at $t$ and within \n$\\Omega(t)$, and then \nit transforms to conform to the basis at time $t+\\mathrm{d}t$.\n The basis transformation is done differently, however. It can be seen as the\nfollowing process: ($i$) It first changes basis \nwithin $\\Omega(t)$ to a L\\\"owdin orthonormal basis.\n ($ii$) It then assumes that the coefficients in this basis do not change\nwhen changing to the L\\\"owdin basis of $t+\\mathrm{d}t$. 
\n ($iii$) It then undoes the L\\\"owdin transformation at $t+\mathrm{d}t$,\nobtaining the sought coefficients in the non-orthogonal basis\nat $t+\mathrm{d}t$.\n\n This procedure has the advantage that\nthe propagation is now strictly unitary by construction for any d$t$,\neven considering the turning of the Hilbert space, which should give\ngreater stability to the method for relatively large values of d$t$.\n The propagated vectors are guaranteed to be orthonormal.\n \n However, the vectors propagated following this L\\\"owdin procedure\n(Eq.~\ref{Sankey}) do not constitute a fair representation of what the \nevolution of the corresponding vectors would be in ${\cal H}$, in the sense\nthat the procedure does not properly counter the effect of the transforming basis.\n In order to see this, consider the case of a set of vectors at $t=0$, \n$ \{ | \psi_n \rangle \}$, all within $\Omega$ and all initially orthonormal, \n$ {\cal S}_{nm} = \langle \psi_n | \psi_m \rangle = \delta_{nm}$. \n Consider, as well, that the dynamics is such that the vectors do not \nrotate with time, changing only in phase, as would be the case for\neigenstates of a time-independent Hamiltonian, for instance.\n Take now a basis set for $\Omega$ that does rotate with time, but\nkeeps orthonormality at all times.\n In this scenario it is clear that the coefficients $\psi^{\mu}_n$ should\ntransform with time to capture the fact that the non-rotating eigenvectors \nare described in the frame of a rotating basis.\n This is properly taken care of in the integrator proposed in \nEq.~\ref{CKmov-matrix4} by the $A_{\kappa\lambda} \n(t +\mathrm{d}t : t)$ tensor, which performs the needed basis transformation.\n \n For the L\\\"owdin procedure, however, the coefficients do not change,\n except for the global phase dictated by the Hamiltonian evolution, i.e., \n \begin{equation*}\n \psi^{\mu}_{\phantom{e}n} (t + \mathrm{d} t) = e^{-i \omega_n \n \mathrm{d} t} \psi^{\mu}_{\phantom{e}n} (t),
\\, ,\n \\end{equation*}\nas is obvious from the fact that if $S_{\\mu\\nu} = \\delta_{\\mu\\nu}$ at all\ntimes, and therefore\n\\begin{equation*}\nS^{-1}_{\\mu\\nu} = S^{-1\/2 \\, \\mu\\nu} = S^{1\/2}_{\\mu\\nu} = \\delta_{\\mu\\nu}\n\\end{equation*} \nat all times. \n This simple situation clearly illustrates the above assertion on the \nunfair representation by Eq.~\\ref{CKmov-matrix4} of the \nstates' evolution in a generic situation.\n It is not clear however how significant such discrepancies can be.\n In particular, the case made above is for pure rotations of the basis set.\n We explore this point in more depth in Appendix~\\ref{SankeyAppendix}, \nfinding interesting dependencies on the rotating versus deforming basis sets.\n In particular, for fixed-shape atomic-like orbitals as basis functions, \nsuch basis rotations correspond to Galileo transforms in parameter space,\nwhich would suggest no physical significance to the discrepancies\nbetween the solvers in Eqs.~\\ref{Sankey} and \\ref{CKmov-matrix4} for such \nrotations.\n This could be behind the apparent success of Eq.~\\ref{Sankey}.\n A more quantitative analysis should be the focus of further work.\n \n\n\n\n\n\n\\subsubsection{Strictly unitary propagation}\n\n If not using Eq.~\\ref{Sankey} for propagation, we are left with \nEq.~\\ref{CKmov-matrix4} (or Eq.~\\ref{CKmov-matrix5} if using\nself-consistency), which does not strictly conserve the orthonormality\nof propagated states for finite d$t$ if the space $\\Omega$ turns within\n${\\cal H}$.\n The propagator can still be used as long as d$t$ is small enough such\nthat the unitary propagation is preserved within a desired tolerance.\n An alternative, however, is re-orthonormalising the states. 
\n This can be done with any orthonormalisation procedure, e.g.,\nGram-Schmidt or the iterative procedure used when finding eigenstates\nby minimization in electronic structure (see, e.g., \nRef.~\\onlinecite{Payne1990}).\n\n The L\\\"owdin orthonormalization method described above can\nbe used for this as well, with the advantage that the orthonormal states\nremain closest to the states prior to orthonormalization (we need to remember\nthat we are not just after an orthonormal basis of the evolved occupied space, \nbut actually following the evolution of separate states).\n\n Consider a set of $M$ states, $\\{ | \\psi_n \\rangle , n=1, ... , M \\}$, where \n$M < {\\cal N}$, with ${\\cal N}$ the dimension of $\\Omega$, and which are\nrepresented by $\\{ \\psi^{\\mu}_{\\phantom{e}n}, n= 1 ... M, \\mu = 1, ... , {\\cal N} \\}$, \nand are all, therefore, within $\\Omega$.\n Consider that they have been propagated by Eq.~\\ref{CKmov-matrix4} and\nare not strictly orthonormal, i.e.,\n\\begin{equation*}\n{\\cal S}_{nm} = \\langle \\psi_n | \\psi_m \\rangle \\ne \\delta_{nm} \\; ,\n\\end{equation*}\nwhere \n\\begin{equation*}\n{\\cal S}_{nm} = \\psi_{\\mu n} \\psi^{\\mu}_{\\phantom{e}m}\n\\end{equation*}\nis an $M\\times M$ matrix. 
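As an aside, the ${\cal S}^{-1/2}$ correction is easy to prototype numerically. The following is a minimal sketch (Python with NumPy; the function and variable names are ours, and the underlying basis is taken as orthonormal so that Hilbert-space overlaps reduce to plain dot products):

```python
import numpy as np

def lowdin(psi):
    """Return psi @ S^{-1/2}, with S = psi^dagger psi the M x M overlap
    of the column-vector states; this is the orthonormal set closest
    (in the least-squares sense) to the input states."""
    S = psi.conj().T @ psi
    w, U = np.linalg.eigh(S)            # S is Hermitian positive definite
    S_inv_sqrt = (U * w**-0.5) @ U.conj().T
    return psi @ S_inv_sqrt

rng = np.random.default_rng(0)
# A set of M = 3 generic (non-orthonormal) states in a 6-dimensional space.
psi = rng.normal(size=(6, 3)) + 1j * rng.normal(size=(6, 3))
phi = lowdin(psi)
print(np.allclose(phi.conj().T @ phi, np.eye(3)))  # True: orthonormal to machine precision
```

For a non-orthogonal basis one would instead build ${\cal S}_{nm} = \psi_{\mu n} \psi^{\mu}{}_{m}$ with the metric included, as in the text; the diagonalisation step is identical.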
\n By diagonalising ${\\cal S}_{mn}$, one can obtain ${\\cal S}^{-1\/2\\phantom{i}nm}$,\nwhereupon the strictly unitary propagator version of Eq.~\\ref{CKmov-matrix4}\nbecomes\n\\begin{multline}\n\\label{CKmov-unitary1}\n\\psi^{\\mu}_{\\phantom{e}n} (t+\\mathrm{d}t) = {\\cal S}^{-1\/2 \\phantom{i} nm} \n(t+\\mathrm{d}t) S^{\\mu\\kappa}(t+\\mathrm{d}t) A_{\\kappa\\lambda} \n(t +\\mathrm{d}t : t) \\\\ \\times \\Big [ S_{\\lambda\\sigma} (t) + i \\frac{\\mathrm{d}t}{2} \nH_{\\lambda\\sigma} (t) \\Big ]^{-1}\n\\Big \\{ S_{\\sigma\\nu} (t) - i \\frac{\\mathrm{d}t}{2} H_{\\sigma\\nu} (t) \\Big \\} \n\\psi^{\\nu}_{\\phantom{e}m} (t) \\, .\n\\end{multline} \n The self-consistent alternative orthonormalising Eq.~\\ref{CKmov-matrix5}\nwould read:\n\\begin{multline}\n\\label{CKmov-unitary2}\n\\psi^{\\mu}_{\\phantom{e}n} (t+\\mathrm{d}t) = \n{\\cal S}^{-1\/2 \\phantom{i} nm} (t+\\mathrm{d}t) \\\\ \\times\n\\Big [ S_{\\mu\\lambda}(t+\\mathrm{d}t) + i \\frac{\\mathrm{d}t}{2} \nH_{\\mu\\lambda} (t+\\mathrm{d}t) \\Big ]^{-1} A_{\\lambda\\kappa} (t +\\mathrm{d}t \\! : \\! 
t) \n\\, \\\\ \\times \\Big \\{ \\delta^{\\kappa}_{\\phantom{e}\\nu} - i \\frac{\\mathrm{d}t}{2} \nS^{\\kappa\\sigma} (t) H_{\\sigma\\nu} (t) \\Big \\} \\psi^{\\nu}_{\\phantom{e}m} (t) \\, .\n\\end{multline}\n\n This orthonormalisation step can be done at every evolution step, \nafter a determined number of them, or when the deviation from orthonormality \nreaches some tolerance.\n\n\n\n\\subsection{Forces}\n\n Additional terms related to derivatives also appear when calculating\nthe forces on atoms for geometry relaxation or ab initio molecular\ndynamics calculations.\n In the following we restrict the discussion to adiabatic forces, leaving for \nfurther work the consideration of additional terms that appear for moving \nbasis sets in non-adiabatic settings.\\cite{Todorov2001}\n Considering for simplicity one single state $| \\psi \\rangle$, and using \nthe language of the Introduction, one calculates quantities like\n\\begin{align}\n\\label{PulayOld}\n& \\frac{\\partial E} {\\partial R_i} = \\partial_i \\langle \\psi | H | \\psi \\rangle =\n\\partial_i \\left \\{ \\sum_{\\mu\\nu} C_{\\mu}^* \\langle e_{\\mu} | H | e_{\\nu} \n\\rangle C_{\\nu} \\right \\} \\notag \\\\\n&= \\sum_{\\mu\\nu} \\bigg \\{ C_{\\mu}^* \\langle e_{\\mu} | \\partial_iH | e_{\\nu} \n\\rangle C_{\\nu} \\notag \\\\\n& +(\\partial_i C_{\\mu}^*) \\langle e_{\\mu} | H | e_{\\nu} \\rangle C_{\\nu} \n+ C_{\\mu}^* \\langle e_{\\mu} | H | e_{\\nu} \\rangle (\\partial_i C_{\\nu}) \\notag \\\\ \n& + C_{\\mu}^* \\langle \\partial_i e_{\\mu} | H | e_{\\nu} \\rangle C_{\\nu}\n+ C_{\\mu}^* \\langle e_{\\mu} | H | \\partial_i e_{\\nu} \\rangle C_{\\nu} \\bigg \\} \\, .\n\\end{align} \n The last two terms, called Pulay forces,\\cite{Pulay1969} involve again \nbasis vector derivatives. 
\n We discuss here the relevance of the present formalism for\nthese forces and related concepts.\n\n\n\n\n\n\\subsubsection{Hellmann-Feynman theorem}\n\n The Hellmann-Feynman theorem states that, given \n$E = \\langle \\psi | H | \\psi \\rangle$ (for a normalised $| \\psi \\rangle$),\nthe derivative of $E$ with respect to $R^i$ fulfils\n\\begin{equation}\n\\label{HF-general}\n\\partial_i E = \\langle \\psi | \\partial_i H | \\psi \\rangle\n\\end{equation}\nif $H | \\psi \\rangle = E | \\psi \\rangle$ (and $\\langle \\psi | H = \\langle \\psi | E$). \nIt is easy to see that, using the latter equations in $\\Omega$, \n\\begin{equation}\n\\label{schroe}\nH^{\\mu}_{\\phantom{e}\\nu} \\psi^{\\nu} = E \\psi^{\\mu} \\; \\, {\\rm and} \\; \\; \n\\psi_{\\mu} H^{\\mu}_{\\phantom{e}\\nu} = \\psi_{\\nu} E \\; ,\n\\end{equation}\nand the Hellmann-Feynman theorem then reads,\n\\begin{equation*}\n\\partial_i E = \\psi_{\\mu} \\left ( \\partial_i \nH^{\\mu}_{\\phantom{e}\\nu} \\right ) \\psi^{\\nu} \\; .\n\\end{equation*}\n\n In Eq.~\\ref{e_deriv} we saw that the derivative of a scalar\nneeds no correction. Actually, \n\\begin{equation}\n\\label{forces}\n\\text{\\dh}_i E = \\partial_i E = \\psi_{\\mu} \\left ( \\partial_i \nH^{\\mu}_{\\phantom{e}\\nu} \\right ) \\psi^{\\nu} \n= \\psi_{\\mu} \\left ( \\text{\\dh}_i \nH^{\\mu}_{\\phantom{e}\\nu} \\right ) \\psi^{\\nu} \\; ,\n\\end{equation}\nthe last identity being easy to check by using \nEqs.~\\ref{covdevop2} and \\ref{schroe}.\n This is a very transparent expression of the theorem in \nits general quantum mechanical form in Eq.~\\ref{HF-general}.\n\n\n\n\n\n\n\\subsubsection{Hellmann-Feynman theorem in matrix representation}\n\n The matrix representation gives a less concise expression of the same \ntheorem, except when using the covariant derivative of $H$. 
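Before turning to the matrix representation, Eq.~\ref{HF-general} is easy to verify numerically. The sketch below (Python with NumPy; the two-level parameter-dependent Hamiltonian is a toy example of ours, in an orthonormal basis so no metric appears) compares a finite-difference derivative of an eigenvalue with the Hellmann-Feynman expectation value:

```python
import numpy as np

def H(R):
    # Toy parameter-dependent Hermitian "Hamiltonian" (illustrative only).
    return np.array([[R**2,    0.3 * R],
                     [0.3 * R, 1.0 - R]])

R, h = 0.7, 1e-5
E, V = np.linalg.eigh(H(R))
psi = V[:, 0]                                   # ground state at R

dH = (H(R + h) - H(R - h)) / (2 * h)            # numerical dH/dR
hellmann_feynman = psi @ dH @ psi               # <psi| dH/dR |psi>

E_plus  = np.linalg.eigh(H(R + h))[0][0]
E_minus = np.linalg.eigh(H(R - h))[0][0]
dE = (E_plus - E_minus) / (2 * h)               # numerical dE/dR

print(abs(dE - hellmann_feynman) < 1e-6)        # True
```

The agreement holds because the state used is an exact (non-degenerate) eigenstate, which is exactly the condition stated above.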
\n Starting with the Schr\\\"odinger equation, instead of Eq.~\\ref{schroe}, we have \n\\begin{equation}\n\\label{schroematrix}\nH_{\\mu\\nu} \\psi^{\\nu} = E S_{\\mu\\nu} \\psi^{\\nu} \\; \\, {\\rm and} \\; \\; \n\\psi^{\\mu *} H_{\\mu\\nu} = \\psi^{\\mu *} S_{\\mu\\nu} E \\; .\n\\end{equation}\nExpanding the derivative $\\partial_i E=\n\\partial_i \\left ( \\psi^{\\mu *} H_{\\mu\\nu}\\psi^{\\nu} \\right )$ \nand using the fact that\n$\\langle \\psi | \\psi \\rangle = 1 = \\psi^{\\mu *} S_{\\mu\\nu} \\psi^{\\nu}$,\nthe Hellmann-Feynman theorem is obtained in this representation as\n\\begin{equation*}\n\\partial_i E = \\psi^{\\mu *} \\left [ \\partial_i H_{\\mu\\nu} - \nE \\, \\partial_i S_{\\mu\\nu} \\right ] \\psi^{\\nu} \\; ,\n\\end{equation*}\nwhich can also be expressed as\n\\begin{equation}\n\\label{HFmatrix}\n\\partial_i E = \\psi^{\\mu *} \\left [ \\partial_i H_{\\mu\\nu} - \nE ( D_{\\mu i\\nu} + D_{\\mu\\nu i} ) \\right ] \\psi^{\\nu} \\; .\n\\end{equation}\n\n Let us see how it looks using the covariant derivative.\n Introducing its definition for the matrix\nrepresentation of $H$ (Eq.~\\ref{covdevHmatrix}) into the previous\nexpression (Eq.~\\ref{HFmatrix}), \n\\begin{multline}\n\\label{stepHFmatrixcov}\n\\partial_i E = \\psi^{\\mu *} [ \\text{\\dh}_i H_{\\mu\\nu} \n- D_{\\mu\\phantom{e}i}^{\\phantom{e}\\sigma} H_{\\sigma\\nu}\n- H_{\\mu\\sigma} D^{\\sigma}_{\\phantom{e}i\\nu} \\\\\n- E ( D_{\\mu i\\nu} + D_{\\mu\\nu i} ) ] \\psi^{\\nu} \\; .\n\\end{multline}\n Using now the Schr\\\"odinger equation again (Eq.~\\ref{schroematrix}),\nthe following elements of the previous equation become:\n\\begin{align*}\n\\psi^{\\mu *} D_{\\mu\\phantom{e}i}^{\\phantom{e}\\sigma} H_{\\sigma\\nu} \\psi^{\\nu} &=\nE \\, \\psi^{\\mu *} D_{\\mu\\phantom{e}i}^{\\phantom{e}\\sigma} S_{\\sigma\\nu} \\psi^{\\nu} \n\\, , \\quad \\mbox{and} \\\\\n\\psi^{\\mu *} H_{\\mu\\sigma} D_{\\phantom{e}i\\nu}^{\\sigma} \\psi^{\\nu} &=\nE \\psi^{\\mu *} S_{\\mu\\sigma} 
D_{\\phantom{e}i\\nu}^{\\sigma} \\psi^{\\nu} \\; ,\n\\end{align*}\nwhereupon Eq.~\\ref{stepHFmatrixcov} becomes\n\\begin{multline}\n\\label{step2HFmatrixcov}\n\\partial_i E = \\psi^{\\mu *} [ \\text{\\dh}_i H_{\\mu\\nu} \n- E ( D_{\\mu\\phantom{e}i}^{\\phantom{e}\\sigma} S_{\\sigma\\nu}\n+ S_{\\mu\\sigma} D^{\\sigma}_{\\phantom{e}i\\nu} \\\\\n+ D_{\\mu i\\nu} + D_{\\mu\\nu i} ) ] \\psi^{\\nu} \\; .\n\\end{multline}\n Using now the relations between Christoffel symbols in \nEqs.~\\ref{ChristRelations}, we find that\n\\begin{align*}\nD_{\\mu\\phantom{e}i}^{\\phantom{e}\\sigma} S_{\\sigma\\nu} &= \n- D_{\\mu i}^{\\phantom{ee}\\sigma} S_{\\sigma\\nu} = \n- D_{\\mu i\\nu} \\\\\nS_{\\mu\\sigma} D^{\\sigma}_{\\phantom{e}i\\nu} &=\n- S_{\\mu\\sigma} D^{\\sigma}_{\\phantom{e}\\nu i} =\n- D_{\\mu \\nu i} \\; ,\n\\end{align*}\nand introducing these in Eq.~\\ref{step2HFmatrixcov} gives a\nquite simple final result for the Hellmann-Feynman theorem in the matrix\nrepresentation in terms of the Hamiltonian's covariant derivative,\nnamely,\n\\begin{equation}\n\\label{HFmatrixcov}\n\\partial_i E = \\text{\\dh}_i E = \\psi^{\\mu *} ( \\text{\\dh}_i H_{\\mu\\nu} ) \\psi^{\\nu} \\; .\n\\end{equation}\n\n This last equation can also be derived directly from Eq.~\\ref{forces} using the\nfact that the covariant derivative conserves the metric, as discussed in\nsection~\\ref{ParallelTransport}, i.e., \n$\\text{\\dh}_i S_{\\mu\\nu} = \\text{\\dh}_i S^{\\mu\\nu} = 0$.\n From Eq.~\\ref{forces}, we have\n\\begin{align*}\n\\text{\\dh}_i E &= \\psi_{\\mu} \\left ( \\text{\\dh}_i \nH^{\\mu}_{\\phantom{e}\\nu} \\right ) \\psi^{\\nu} =\n\\psi^{\\mu *} S_{\\mu\\lambda} (\\text{\\dh}_i S^{\\lambda\\sigma} H_{\\sigma\\nu} )\n\\psi^{\\nu} \\\\\n&= \\psi^{\\mu *} S_{\\mu\\lambda} \\big \\{ S^{\\lambda\\sigma} \n(\\text{\\dh}_i H_{\\sigma\\nu} ) + (\\text{\\dh}_i S^{\\lambda\\sigma} )\nH_{\\sigma\\nu} \\big \\} \\psi^{\\nu} \\\\\n&= \\psi^{\\mu *} \\delta_{\\mu}^{\\phantom{e}\\sigma}\n(\\text{\\dh}_i 
H_{\\sigma\\nu} ) \\psi^{\\nu} +\n\\psi^{\\mu *} S_{\\mu\\lambda} \\, 0 \\,\nH_{\\sigma\\nu} \\psi^{\\nu} \\\\\n&= \\psi^{\\mu *} (\\text{\\dh}_i H_{\\mu\\nu} ) \\psi^{\\nu} \\; ,\n\\end{align*} \nwhich is nothing but Eq.~\\ref{HFmatrixcov}. \n\n\n\\subsubsection{Pulay forces}\n\n When facing the problem of calculating the forces, $\\partial_i E$, \none still needs to calculate $\\partial_i H^{\\mu}_{\\phantom{e}\\nu}$.\n For the time-dependent Schr\\\"odinger or the von Neumann\nequations above, the relevant derivatives were obtained by solving\nequations defined in (an evolving) $\\Omega$.\n In this case, however, the calculation of $\\partial_i H^{\\mu}_{\\phantom{e}\\nu}$ \nis done by integration, effectively using an auxiliary basis set in $\\cal{H}$ \n(analytically with Gaussians, on a real-space grid, etc.).\n The usual procedure follows\n\\begin{align}\n\\label{pulay}\n\\partial_i & H^{\\mu}_{\\phantom{e}\\nu} = \\partial_i \\langle e^{\\mu} |\nH | e_{\\nu} \\rangle \\\\ &= \n\\langle e^{\\mu} | \\partial_i H | e_{\\nu} \\rangle +\n\\langle \\partial_i e^{\\mu} | H | e_{\\nu} \\rangle + \n\\langle e^{\\mu} | H | \\partial_i e_{\\nu} \\rangle \\; . \\notag\n\\end{align}\n The last two terms give rise to the so-called Pulay terms,\\cite{Pulay1969}\nas already presented in Eq.~\\ref{PulayOld}.\n There is nothing substantially new offered by differential geometry \nin this context.\n\n It is interesting, however, to separate the terms residing fully within $\\Omega$ from\nthe contributions outside it. 
\n Defining $Q_{\\Omega}$ as $P_{\\Omega}$'s complement, i.e.,\n$P_{\\Omega} + Q_{\\Omega} = \\mathbb{1}$ (the identity operator in $\\cal{H}$), \nwe can expand\n\\begin{align*}\n\\langle \\partial_i e^{\\mu} | H | e_{\\nu} \\rangle &=\n\\langle \\partial_i e^{\\mu} | ( P_{\\Omega} + Q_{\\Omega}) H | e_{\\nu} \\rangle \\\\ &=\n\\langle \\partial_i e^{\\mu} | e_{\\sigma} \\rangle \\langle e^{\\sigma} | H | e_{\\nu} \\rangle \n+ \\langle \\partial_i e^{\\mu} | Q_{\\Omega} H | e_{\\nu} \\rangle \\\\ &=\nD^{\\mu}_{\\phantom{e}i\\sigma} H^{\\sigma}_{\\phantom{e}\\nu} \n+ \\langle \\partial_i e^{\\mu} | Q_{\\Omega} H | e_{\\nu} \\rangle \\; .\n\\end{align*}\n Doing this for both Pulay terms and using the definition of the covariant\nderivative $\\text{\\dh}_i H^{\\mu}_{\\phantom{e}\\nu}$, one arrives at\n\\begin{equation*}\n\\text{\\dh}_i H^{\\mu}_{\\phantom{e}\\nu} = \n\\langle e^{\\mu} | \\partial_i H | e_{\\nu} \\rangle +\n\\langle \\partial_i e^{\\mu} | Q_{\\Omega} H | e_{\\nu} \\rangle +\n\\langle e^{\\mu} | H Q_{\\Omega} | \\partial_i e_{\\nu} \\rangle \\; ,\n\\end{equation*}\nthe last two terms being explicitly built from the components out of\n$\\Omega$ of both vectors $H | e_{\\nu} \\rangle$ and \n$| \\partial_i e_{\\nu} \\rangle$ (and their duals\/bras).\n Indeed, if the out-of-space components are neglected, then\n\\begin{equation*}\n\\text{\\dh}_i H^{\\mu}_{\\phantom{e}\\nu} \\simeq\n\\langle e^{\\mu} | \\partial_i H | e_{\\nu} \\rangle \\; ,\n\\end{equation*}\nand considering Eq.~\\ref{forces}, we arrive at\n\\begin{equation*}\n\\partial_i E \\simeq \\psi_{\\mu} \\langle e^{\\mu} | \\partial_i H | e_{\\nu} \\rangle\n\\psi^{\\nu} \\; ,\n\\end{equation*}\nor, in the matrix representation,\n\\begin{equation*}\n\\partial_i E \\simeq \\psi^{\\mu *} \\langle e_{\\mu} | \\partial_i H | e_{\\nu} \\rangle\n\\psi^{\\nu} \\; ,\n\\end{equation*}\nwhere the Pulay terms have disappeared. 
\n Neglecting those terms, however, spoils the correspondence between $E$\nand $\\partial_i E$.\n This just reflects the fact that, in the time-dependent equations above,\nthe derivatives of the basis vectors were naturally projected onto $\\Omega$,\nwhich is not the case here.\n\n\n\n\n\n\n\\section{Conclusions}\n\n Covariant derivatives are defined for derivatives of quantum mechanical states\nin situations where the basis set varies as a function of external parameters.\n The concepts from differential geometry used to re-formalize dynamical equations\nallow for better insights into the meaning of connection terms appearing in\ndynamical equations. \n In addition to relating to the Berry-phase and gauge formalisms, \nthese geometrical insights enable the evaluation of existing, and the proposal of new,\nfinite-difference propagators for time evolution equations.\n\n\n\n\n\n\n\\begin{acknowledgments}\n We would like to thank \nDaniel Sanchez-Portal and Jorge Kohanoff for interesting and intense discussions\non the problem of integrating time-dependent Kohn-Sham equations, Ivo\nSouza for discussions on the relation of this work with the Berry formalism,\nand Jonathan M. Evans for suggestions on the mathematics.\n D. D. O'R. would like to thank S. M.-M. Dubois, A. A. Mostofi, C.-K. Skylaris, and \nM. C. Payne for helpful comments at an early stage of this work.\n Both authors would like to thank the anonymous reviewers for the careful \nreading of the manuscript and for their constructive comments, which have\nhelped to improve the paper noticeably. \n E. A. acknowledges funding from the EU through the ElectronStopping Grant \nNumber 333813, within the Marie-Curie CIG program, and from MINECO, Spain, \nthrough Grant Number FIS2015-64886-C5-1-P.\n D. D. O'R. 
gratefully acknowledges the support of the National University of Ireland \nand the School of Physics at Trinity College Dublin.\n\\end{acknowledgments}\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\n\n\\subsection{Dataset}\nThe dataset provided by the VLSP 2020 organizers\footnote{https:\/\/vlsp.org.vn\/vlsp2020\/eval} contains 4,372 training examples and about 1,642 examples each for the public and private test sets. Each example includes information such as the encoded id of the owner, the text content of the post, the number of likes, shares, and comments, and some photos. This dataset is highly imbalanced: the unreliable class accounts for about 17\% of the training set, while the reliable class dominates with about 83\%. The dataset also contains nearly 10\% long texts with more than 500 tokens.\n\n\\subsection{Experimental Setup}\nFor training, we use the Adam optimizer with a learning rate of 3e-5 and a weight decay of 0.01. Initially, we freeze all layers of the Transformer-based model so that their gradients are not calculated in the first epoch. We initialize each of the models described in Section \\ref{singmodel} with 10 different random states, then train them for 20 epochs and select the models with the highest ROC-AUC score on the validation set for prediction.\n\n\\subsection{Results}\nOn the public test set, all our models were trained on lower-cased texts. Some models fail on the validation set, so we do not use them for prediction. As can be seen from Table \\ref{table:1}, the monolingual models such as viBERT, viELECTRA, and PhoBERT achieve better performance than the multilingual models. The effect of input length also differs for each model architecture. Specifically, viBERT with the length of 512 is worse than the 256-length version. Meanwhile, viELECTRA with the length of 512 significantly outperforms 256-length viELECTRA, with about a 3.21\% improvement. 
512-length viELECTRA also outperforms all single models, achieving the highest score on both the public and private tests. \n\nIn order to leverage the ability of the ensemble method, we select the three best models with different styles, namely 256-length viBERT, 512-length viELECTRA, and 256-length PhoBERT, and average their output probabilities; we refer to this as the 3-Ensemble model. In addition, we denote the ensemble of the top six models by 6-Ensemble. This experimental result demonstrates that the ensemble method can further boost the performance.\n\\input{result_table}\n\nAfter reviewing the data, we noticed that arbitrary capitalization makes the text content look unprofessional. On the private test set, we therefore investigate the influence of letter case on model performance. viELECTRA with the length of 512 and PhoBERT with the length of 256 achieve the highest scores on the public test set, so we decide to train them on the raw texts, which retain upper-case letters and newline characters. The results described in Table \\ref{table:2} show that the cased models significantly improve on the uncased models, with ROC-AUC values 1.49\% and 1.02\% above the uncased models for viELECTRA and PhoBERT, respectively. Based only on the text content, it is difficult to identify fake news. In general, the news source is an important factor in distinguishing reliable information. Because of that, we feed the username feature into our models. Usernames with many unreliable examples are penalized, which reduces their output probabilities by a certain value, and vice versa. 
This intuition brings a significant improvement to 512-length cased viELECTRA, from 93.09\% to 93.78\%.\n\n\n\\section{Introduction}\n\\input{introduction}\n\n\\section{Methodology}\n\\input{methodology}\n\n\\section{Experiments}\n\\input{experiments}\n\n\\section{Conclusion and Future work}\n\\input{conclusion}\n\n\n\n\n\\printbibliography\n\\end{document}\n\n\\subsection{Data Pre-processing}\nWe process the text contents in two phases. First, we tokenize the contents using TweetTokenizer from the NLTK toolkit\footnote{https:\/\/www.nltk.org\/} and use the emoji\footnote{https:\/\/pypi.org\/project\/emoji\/} package to translate icons into text strings. Since word segmentation is required by some pre-trained models, we use VnCoreNLP \\cite{vu-etal-2018-vncorenlp} to segment the input. Several features of the text content can affect model performance. To clarify this, we duplicate the texts into two versions: one with lower-cased text and no newline characters, the other with the raw text.\n\nIn the second phase, to fine-tune Transformer-based models such as BERT Multilingual, viBERT, and viELECTRA, we must insert the [CLS] and [SEP] special tokens into the input. The [CLS] token encoding includes the representative information of the whole input sequence, while the [SEP] token simply separates the different sentences of an input. In our method, we only need to append the [SEP] token to the end of every input. Finally, we convert the new input to a sequence of indices with the same length. We pad the sequences with the [PAD] token if their length is less than the specified length. 
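The wrapping and padding step just described can be sketched as follows (Python; the special-token IDs are made-up placeholders for illustration, not the real vocabularies of the models used):

```python
# Hypothetical special-token IDs, for illustration only.
CLS, SEP, PAD = 101, 102, 0

def encode(token_ids, max_len):
    """Wrap a tokenized text with [CLS]/[SEP], truncate to max_len,
    and pad with [PAD] so all inputs share the same length."""
    ids = [CLS] + list(token_ids)[: max_len - 2] + [SEP]
    return ids + [PAD] * (max_len - len(ids))

print(encode([7, 8, 9], 8))          # [101, 7, 8, 9, 102, 0, 0, 0]
print(len(encode(range(100), 8)))    # 8 (long inputs are truncated)
```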
In PhoBERT and XLM-RoBERTa, instead of [CLS], [SEP], and [PAD], we use $<$s$>$, $<$\/s$>$, $<$pad$>$ respectively.\n\n\\subsection{Models}\n\\subsubsection{Single Models}\\label{singmodel}\nWe leverage the ability of pre-trained Transformer-based models, which are trained on large-scale datasets by a variety of unsupervised learning methods. In this task, we list some popular models for the Vietnamese language. They fall into two categories:\n\\begin{itemize}\n \\item Multilingual: BERT Multilingual employs masked language modeling and next-sentence prediction for pre-training. Meanwhile, XLM-RoBERTa relies on a masked language modeling objective and a cross-lingual language modeling objective, without the next-sentence prediction objective. Both models are trained on large-scale datasets with multiple languages.\n \\item Monolingual: viBERT uses the same architecture as BERT Multilingual; however, it is trained only on a Vietnamese dataset. viELECTRA, released together with viBERT, uses a new pre-training task called replaced token detection. PhoBERT is also a masked language model, trained on a 20GB Vietnamese corpus.\n\\end{itemize}\n\nWe examine a variety of input lengths for each of the above models, including 256-length, 512-length, and multiple 512-length chunks for long documents. Our handling of long documents is inspired by the research in \\cite{9003958}: we segment the input into multiple chunks with a length of 512 and feed them into the Transformer-based model to obtain contextual representations. Then we propagate each output through a Bi-LSTM layer to get the document embedding. Finally, we perform the final classification in a linear layer. \n\n\nWe stack a linear classifier on top of the output representations. Instead of just using the last hidden layer, based on the result in \\cite{devlin2018bert}, we concatenate the last four hidden layers as the input to the linear classifier; this modification might slightly improve model performance. 
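The classification head just described can be sketched as follows (NumPy stand-ins for the encoder outputs; all shapes and names are ours, and a real implementation would take the hidden states from a trained Transformer):

```python
import numpy as np

rng = np.random.default_rng(1)
num_layers, seq_len, hidden = 12, 256, 768

# Stand-in for the per-layer hidden states returned by the encoder.
layers = [rng.normal(size=(seq_len, hidden)) for _ in range(num_layers)]

# Concatenate the first-token ([CLS]) vectors of the last four layers...
features = np.concatenate([h[0] for h in layers[-4:]])      # shape (4 * 768,)

# ...and feed them to a linear classifier (reliable vs unreliable).
W = rng.normal(size=(2, 4 * hidden)) * 0.01
logits = W @ features
probs = np.exp(logits) / np.exp(logits).sum()               # softmax
print(features.shape, probs.shape)
```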
For greater clarity, Figure \\ref{fig:model} illustrates the architecture of our model in detail.\n\n\\subsubsection{Ensemble Method}\nTo utilize the robustness of the different Transformer-based models such as viBERT, viELECTRA, and PhoBERT, we select the three Transformer-based models with the highest ROC-AUC scores on the validation set. Then, we average their label probabilities to get the final probabilities. \n\n\n\\begin{figure}[!t]\n \\centering\n \\includegraphics[width=0.5\\textwidth,height=8.5cm]{classification_model.pdf}\n \\caption{Proposed Neural Network Architecture}\n \\label{fig:model}\n\n\\end{figure}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\nThe study of quantum systems interacting with their environment, namely open quantum systems, has gained immense importance lately. This is due both to practical reasons and to the relevance of open quantum systems to the foundations of quantum mechanics \\cite{BPbook}. In particular, the precise quantum coherent control and state preparation necessary for quantum computation and information are only possible by taking into account the interaction of a quantum system with its environment, while the process of decoherence sheds light on the so-called measurement problem. Amongst the variety of tools and techniques developed to tackle the dynamics of open quantum systems, the most popular approach is the quantum master equation approach, whereby a differential equation for the time evolution of the system state is obtained and solved. The basic approach is that an initial state of the system and the environment is assumed, the total system-environment Hamiltonian is written down, and then, using this total Hamiltonian, the time evolution of the total system-environment state is examined. Since we are typically interested only in the system dynamics, the environment is traced out to obtain the master equation. 
Unfortunately, performing this process to obtain the master equation for most realistic system-environment models involves making a series of approximations. For example, the system-environment coupling strength is assumed to be weak so that the joint time evolution operator can be found perturbatively \\cite{BPbook,Weissbook}. The environment is assumed to have a short `memory time' (the Markovian approximation) - this means that the environment loses information about the system very quickly \\cite{deVegaRMP,BreuerRMP}. Finally, the initial system-environment state is assumed to be a simple product state, with the system and the environment independent of one another \\cite{deVegaRMP}. \n\nWith increasingly sophisticated quantum technologies, each of the assumptions typically made in the derivation of master equations has come under renewed reexamination. Master equations that allow one to deal with stronger system-environment coupling strengths have been formulated (see, for example, Ref.~\\cite{MoussaPRA2014}). Measures of non-Markovianity have also been put forward \\cite{BreuerRMP}. Most pertinent for us, the role of the initial system-environment correlations - the correlations present in the total system-environment state at the initial time - has been investigated widely \\cite{HakimPRA1985, HaakePRA1985, Grabert1988, SmithPRA1990, GrabertPRE1997, PazPRA1997, LutzPRA2003, BanerjeePRE2003, vanKampen2004, BanPRA2009, HanggiPRL2009, UchiyamaPRA2010, TanimuraPRL2010, SmirnePRA2010, DajkaPRA2010, ZhangPRA2010, TanPRA2011, CKLeePRE2012, MorozovPRA2012, SeminPRA2012, ChaudhryPRA2013a, ChaudhryPRA2013b, ChaudhryCJC2013, FanchiniSciRep2014, FanSciRep2015correlation, ChenPRA2016, VegaRMP2017, VegaPRA2017, ShibataJPhysA2017, CaoPRA2017, MehwishEurPhysJD2019}. Such studies have generally been performed using exactly solvable models such as the pure dephasing model of a two-level system interacting with a collection of harmonic oscillators \\cite{MorozovPRA2012,ChaudhryPRA2013a}. 
Some efforts have nevertheless been made to consider master equations that include the effect of the initial correlations. For example, in Ref.~\\cite{ChaudhryPRA2013b}, the system and its environment were allowed to come to thermal equilibrium and thereafter a projective measurement was performed on the system to prepare the desired initial system state. It was shown that the effect of the initial correlations appears as an additional term in the second-order master equation, similar in form to the first term in the master equation that describes the effect of the free system Hamiltonian. This approach was later generalized to higher-order system-environment coupling strengths \\cite{ShibataJPhysA2017}. Along similar lines, in this paper, we consider the quantum system and its environment to reach a joint equilibrium state. A unitary operator is then applied to the system to prepare (approximately) a desired initial system state, and a time-local master equation, correct to second-order in the system-environment coupling strength, that describes the ensuing dynamics of the system is derived. In fact, the system Hamiltonian before the application of the unitary operator can be different from the system Hamiltonian after the unitary operator - the former plays a role in the initial state preparation, while the latter plays a role in the dynamics thereafter. We show that the effect of the initial correlations is again contained in an additional term in the master equation, but now the form of this additional term is similar to that of the second term in the usual master equation that describes relaxation and decoherence of open quantum systems. We then apply our master equation to a collection of two-level systems interacting with a common environment of harmonic oscillators. 
We work out the additional term in the master equation, and perform numerical simulations to show that the effect of the initial correlations increases as the number of two-level systems increases. Along similar lines, we also apply our master equation to analyze the effect of initial correlations for a collection of two-level systems interacting with a spin environment.\n\n\nThis paper is organized as follows. In Sec.~II, we derive our general time-local second-order master equation. Sec.~III discusses the application of this master equation to the large spin-boson model, while Sec.~IV applies the master equation to a collection of two-level systems interacting with a spin environment. We then conclude in Sec.~V. The appendices consist of some technical details regarding the derivation of the usual relaxation term in the master equation, as well as the details of the exactly solvable limit of the large spin-boson model.\n\n\\section{The formalism}\nWe start by briefly discussing the problem we wish to solve. We are given a quantum system which is interacting with its environment and has reached a joint equilibrium state. At the initial time, a unitary operation is performed on the system alone, and the system Hamiltonian itself may also be changed. After all, one can, for example, apply a large magnetic field in order to prepare the initial state, and thereafter alter the value of this magnetic field to generate some desired system dynamics. Our problem is to derive the master equation, correct to second-order in the system-environment coupling strength, that describes the system dynamics. We then write the system-environment Hamiltonian as \n\\begin{align}\n H_{tot} =\n \\begin{cases}\n H_{S0} + H_B +\\alpha V & t\\le 0,\\\\\n H_{S} + H_B +\\alpha V & t\\ge 0.\\\\\n \\end{cases} \n\\end{align}\nHere $H_S$ is the system Hamiltonian corresponding to coherent evolution of the system only after the initial time $t = 0$ at which the system state is prepared. 
$H_\\text{S0}$ is similar to $H_S$\nbut with possibly different parameters in order to prepare the initial system state. $H_B$ is the Hamiltonian of the environment, and $V$ corresponds to the system-environment coupling. $\\alpha$ is simply a dimensionless parameter introduced to keep track of the perturbation order; at the end of the calculation, we will set $\\alpha = 1$. Let us now discuss in detail the initial state preparation.\n\n\\subsection{Initial state preparation}\nWe let our system come to a joint equilibrium state with the environment. What we mean by this is that the equilibrium state of the system is not simply proportional to $e^{-\\beta H_{S0}}$ - there are corrections due to the finite system-environment coupling strength. We instead consider the system and the environment together in the thermal equilibrium state proportional to $e^{-\\beta H_{\\text{tot}}}$ with $H_{\\text{tot}} = H_{S0} + H_B +\\alpha V$; the system state can then be obtained by simply tracing out the environment. A unitary operator $\\Omega$ is then applied to the system. The initial system-environment state is then \n\\begin{align}\n \\rho_\\text{tot}(0)\n &=\\frac{\\Omega e^{-\\beta H_\\text{tot}}\\Omega^\\dagger}{Z_\\text{tot}}, \\label{ref11}\n\\end{align}\nwith $Z_\\text{tot}=\\text{Tr}_\\text{S,B}\\left[e^{-\\beta H_\\text{tot}}\\right]$ the partition function, $\\text{Tr}_{S,B}$ denoting the trace over the system and the environment. Now, assuming the system-environment coupling strength to be weak, we can use the Kubo identity to expand the joint state given by Eq.~\\eqref{ref11}. The Kubo identity tells us that for any two arbitrary operators $X$ and $Y$,\n\\begin{align}\n e^{\\beta(X+Y)}&=e^{\\beta X}\\left[1+\\int_{0}^{\\beta} e^{-\\lambda X}Y e^{\\lambda(X+Y)}d\\lambda\\right]. 
\\label{ref12}\n\\end{align}\nBy setting $X=-(H_\\text{S0}+H_B)$ and $Y=-V$, and using the Kubo identity twice, we find that to second order in the system-environment coupling strength\n\\begin{align}\n &e^{-\\beta(H_\\text{S0}+H_B+V)}=e^{-\\beta(H_\\text{S0}+H_B)}\\nonumber \\\\& -\\alpha e^{-\\beta(H_\\text{S0}+H_B)}\\int_{0}^{\\beta} e^{\\lambda(H_\\text{S0}+H_B)}V \n \\times e^{-\\lambda(H_\\text{S0}+H_B)}d\\lambda \\nonumber \\\\ &+\\alpha^2e^{-\\beta(H_\\text{S0}+H_B)}\\int_{0}^{\\beta}d\\lambda e^{\\lambda(H_\\text{S0}+H_B)}V \\nonumber \\\\\n &\\times e^{-\\lambda(H_\\text{S0}+H_B)}\\int_{0}^{\\lambda} e^{\\lambda'(H_\\text{S0}+H_B)} Ve^{-\\lambda'(H_\\text{S0}+H_B)}d\\lambda'.\\label{ref14}\n\\end{align}\nWe now write the system environment coupling $V$ as $F \\otimes B$, where $F$ and $B$ are operators living in the system and environment Hilbert space respectively. The extension to the more general case where $V = \\sum_\\alpha F_\\alpha \\otimes B_\\alpha$ is straightforward. Eq.~\\eqref{ref14} can then be simplified as\n\\begin{align} \n &e^{-\\beta(H_\\text{S0}+H_B+V)}=\n e^{-\\beta(H_\\text{S0}+H_B)} -\\alpha e^{-\\beta(H_\\text{S0}+H_B)}\\nonumber \\\\ &\\times \\int_{0}^{\\beta} F(\\lambda)\\otimes B(\\lambda)d\\lambda + \\alpha^2 e^{-\\beta(H_\\text{S0}+H_B)}\\nonumber\\\\\n &\\times \\int_{0}^{\\beta}d\\lambda F(\\lambda)\\otimes B(\\lambda)\\int_{0}^{\\lambda} F(\\lambda')\\otimes B(\\lambda')d\\lambda'.\n\\end{align}\nwhere $F\\left(\\lambda\\right)=e^{\\lambda H_\\text{S0}}Fe^{-\\lambda H_\\text{S0}}$ and $B\\left(\\lambda\\right)=e^{\\lambda H_B}Be^{-\\lambda H_B}$.\nWe now use this in Eq.~\\eqref{ref11} and thereafter take the trace over the environment to find the initial system state correct to second order in the system-environment coupling strength. This is important because our aim is to derive a master equation correct to second-order in the system-environment strength. 
For consistency, the initial system state used to solve this master equation should also be accurate to second-order in the system-environment coupling strength. For ease of notation, we write the initial system state as\n\\begin{align}\n \\rho(0)=\\rho^{(0)}(0)+\\rho^{(1)}(0)+\\rho^{(2)}(0),\n \\label{sysdminitial}\n\\end{align}\nwhere,\n\\begin{align}\n &\\rho^{(0)}(0)\n =\\frac{\\text{Tr}_B\\left[\\Omega\\left( e^{-\\beta(H_\\text{S0}+H_B)}\\right)\\Omega^\\dagger\\right]}{Z_\\text{tot}}\\\\\n &\\rho^{(1)}(0) \\nonumber \\\\\n &=\\frac{\\text{Tr}_B\\left[-\\alpha \\Omega\\left(e^{-\\beta(H_\\text{S0}+H_B)}\\int_{0}^{\\beta} F(\\lambda)\\otimes B(\\lambda)d\\lambda\\right)\\Omega^\\dagger\\right]}{Z_\\text{tot}}\\\\\n &\\rho^{(2)}(0)= \\frac{1}{{Z_\\text{tot}}}\\times \\text{Tr}_B\\Bigg[\\alpha^2 \\Omega\\Big(e^{-\\beta(H_\\text{S0}+H_B)}\\nonumber \\\\\n & \\times \\int_{0}^{\\beta}d\\lambda F(\\lambda)\\otimes B(\\lambda) \\int_{0}^{\\lambda}F(\\lambda')\\otimes B(\\lambda')d\\lambda'\\Big)\\Omega^\\dagger\\Bigg]. \n\\end{align}\nLet us simplify these relations one by one. $\\rho^{(0)}(0)$ can be simplified as,\n\\begin{align*}\n \\rho^{(0)}(0)=\\frac{e^{-\\beta \\widetilde{H}_\\text{S0}} Z_B}{Z_\\text{tot}},\n\\end{align*}\nwhere $\\widetilde{H}_{\\text{S0}} = \\Omega H_{\\text{S0}}\\Omega^\\dagger$ and $Z_B= \\text{Tr}_B\\left[ e^{-\\beta H_B}\\right]$. As for $\\rho^{(1)}(0)$, we can write\n\\begin{align*}\n \\rho^{(1)}(0)= \\frac{-\\alpha Z_B\\int_{0}^{\\beta}\\Omega e^{-\\beta H_\\text{S0}}F(\\lambda)\\Omega^{\\dagger} \\big\\langle B(\\lambda)\\big\\rangle_Bd\\lambda}{Z_\\text{tot}},\n\\end{align*}\nwhere $\\langle \\hdots \\rangle_B = \\text{Tr}_B [e^{-\\beta H_B} (\\hdots)\/Z_B]$. Since $\\langle B(\\lambda)\\rangle_B$ is zero for most system-environment models, we simply get that $\\rho^{(1)}(0)=0$. 
Carrying on, $ \\rho^{(2)}(0)$ can be simplified as\n\\begin{align*}\n &\\rho^{(2)}(0) = \\nonumber\\\\\n &\\frac{\\alpha^2 Z_B\\Omega e^{-\\beta H_\\text{S0}}\\int_{0}^{\\beta}\\int_{0}^{\\lambda} F(\\lambda) F(\\lambda') \\Omega^{\\dagger} \\big\\langle B(\\lambda) B(\\lambda')\\big\\rangle_Bd\\lambda'd\\lambda}{Z_\\text{tot}}. \n\\end{align*}\nTo proceed further, we evaluate the partition function $Z_{\\text{tot}}$. We note that $Z_{\\text{tot}}$ has to be such that the trace of the system state $\\rho(0)$ in Eq.~\\eqref{sysdminitial} is one. It is then clear that \n\\begin{align}\n Z_\\text{tot}\n =&Z_B\\text{Tr}_\\text{S}\\left[e^{-\\beta H_\\text{S0}} \\right] +\\alpha^2Z_B\\text{Tr}_\\text{S}\\Bigg[\\Omega e^{-\\beta H_\\text{S0}} \\nonumber \\\\\n &\\times\\int_{0}^{\\beta}\\int_{0}^{\\lambda} F(\\lambda) F(\\lambda') \\Omega^{\\dagger} \\big\\langle B(\\lambda) B(\\lambda')\\big\\rangle_Bd\\lambda'd\\lambda\\Bigg].\\notag\n\\end{align}\nConsequently, the initial system density matrix can be written as \n\\begin{align}\n \\rho(0)\n &=\\frac{e^{-\\beta \\widetilde{H}_{S0}}}{Z_{S0}Z'}\\Big[\\mathds{1}+\\int_{0}^{\\beta}\\int_{0}^{\\lambda} F^R(\\lambda)F^R(\\lambda') \\nonumber \\\\ \n &\\times\\big\\langle B(\\lambda) B(\\lambda')\\big\\rangle_Bd\\lambda'd\\lambda\\Big],\n\\end{align}\nwhere $F^R(\\lambda) = \\Omega F(\\lambda)\\Omega^\\dagger$, $Z_{S0} = \\text{Tr}_S[e^{-\\beta H_{S0}}]$, and \n\\begin{align}\n Z'=1 + \\int_{0}^{\\beta}\\int_{0}^{\\lambda} \\langle F(\\lambda) F(\\lambda') \\rangle_S \n \\langle B(\\lambda) B(\\lambda')\\rangle_B d\\lambda'd\\lambda,\n\\end{align}\nwith $\\langle \\hdots \\rangle_S = \\text{Tr}_S [ e^{-\\beta H_{S0}}(\\hdots)\/Z_{S0}]$. 
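The normalization above can be sanity-checked numerically: by construction, the trace of $\rho(0)$ equals one exactly, independently of how accurately the double integral is discretized. Below is a minimal Python sketch; the matrices standing in for $H_{S0}$, $F$, $\Omega$ and the bath correlation function are purely illustrative choices, not taken from any specific model, and `expH` is just a small helper for matrix exponentials of Hermitian matrices.

```python
import numpy as np

# Sanity check: Tr[rho(0)] = 1 by construction. All operators below are
# illustrative stand-ins (small random Hermitian matrices, a toy bath
# correlation function), not quantities from the paper.
rng = np.random.default_rng(7)
d, beta = 3, 1.0

def rand_herm(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

def expH(H, c):
    # e^{c H} for Hermitian H via eigendecomposition
    w, U = np.linalg.eigh(H)
    return (U * np.exp(c * w)) @ U.conj().T

H_S0, F = rand_herm(d), rand_herm(d)
Omega = expH(rand_herm(d), 1j)               # some unitary preparation operator

def F_lam(lam):                              # F(lambda) = e^{lam H_S0} F e^{-lam H_S0}
    return expH(H_S0, lam) @ F @ expH(H_S0, -lam)

def bath_corr(lam, lamp):                    # assumed <B(lam) B(lam')>_B (toy choice)
    return np.exp(-(lam - lamp))

# I = int_0^beta dlam F(lam) int_0^lam dlam' F(lam') <B(lam)B(lam')>  (midpoint rule)
n = 60
dl = beta / n
mids = (np.arange(n) + 0.5) * dl
Fm = [F_lam(l) for l in mids]
I = np.zeros((d, d), complex)
for i in range(n):
    for j in range(i):
        I += dl * dl * (Fm[i] @ Fm[j]) * bath_corr(mids[i], mids[j])

eS0 = expH(H_S0, -beta)
Z_S0 = np.trace(eS0).real                    # Z_S0 = Tr_S[e^{-beta H_S0}]
Zp = 1 + np.trace(eS0 @ I) / Z_S0            # Z' with the same discretized integral
rho0 = Omega @ eS0 @ (np.eye(d) + I) @ Omega.conj().T / (Z_S0 * Zp)
print(abs(np.trace(rho0) - 1))               # zero up to floating-point error
```

The point of the check is structural: because the same discretized integral enters both the numerator and $Z'$, the trace comes out as one regardless of the grid size.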
With the initial system state available, correct to second order in the system-environment coupling strength, we now turn our attention to deriving the second-order master equation.\n\n\n\n\\subsection{Derivation of the master equation}\nWe now derive a master equation that describes the time evolution of the system for $t > 0$. The system-environment Hamiltonian is \n\\begin{align*}\n H_\\text{tot} = H_\\text{S}+H_\\text{B}+\\alpha V \\equiv H_0 + \\alpha V.\n\\end{align*}\nNote that the system Hamiltonian $H_S$ can be different from the previous system Hamiltonian $H_{S0}$. Using first-order perturbation theory, the unitary time evolution for such a Hamiltonian can be written as\n\\begin{align}\n U (t) \\approx U_0\\left(t\\right) \\left[ 1 - i\\alpha \\int_0^t U_0^\\dagger (s) V U_0(s)\\,ds \\right], \\label{ref15}\n\\end{align}\nwhere $U_0(t) \\equiv U_S(t) \\otimes U_B(t)$ is the free unitary time evolution corresponding to $H_0$. The matrix elements of the system density matrix can be written as $\\rho_{mn}\\left(t\\right) = \\text{Tr}_S\\left[\\ket{n}\\bra{m} \\rho \\left(t\\right) \\right]$, where $\\ket{n}$ and $\\ket{m}$ are some basis states of the system. Since $\\rho(t) = \\text{Tr}_B \\rho_{\\text{tot}}(t)$, we can alternatively write\n\\begin{align*}\n \\rho_{mn}(t) = \\text{Tr}_\\text{S,B}\\left[X_{nm}^H\\left(t\\right) \\rho_\\text{tot} (0) \\right],\n\\end{align*}\nwhere $X_{nm}^H(t) = U^\\dagger(t) (\\ket{n}\\bra{m} \\otimes \\mathds{1}_B)U(t)$. The master equation can then be written in the general form \n\\begin{align}\n \\frac{d}{dt}\\rho_{mn}(t) &=\\text{Tr}_\\text{S,B}\\left[\\rho_\\text{tot}(0)\\frac{d}{dt}X_{nm}^H(t)\\right].\n \\label{mastereqgen}\n\\end{align}\nTo make further progress, we note that $X_{nm}^H(t)$ is a Heisenberg picture operator. 
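The truncated Dyson expansion for $U(t)$ above is easy to validate numerically: for a small coupling $\alpha$, the first-order expression should track the exact propagator much more closely than the free evolution does. A quick sketch with random Hermitian matrices as stand-ins for $H_0$ and $V$ (all choices illustrative):

```python
import numpy as np

# Check the first-order (Dyson) expansion of U(t) against the exact
# propagator for small alpha. H0 and V are random Hermitian stand-ins.
rng = np.random.default_rng(0)
n, alpha, t = 4, 0.01, 1.0

def rand_herm(m):
    a = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
    return (a + a.conj().T) / 2

def expH(H, c):
    # e^{c H} for Hermitian H via eigendecomposition
    w, U = np.linalg.eigh(H)
    return (U * np.exp(c * w)) @ U.conj().T

H0, V = rand_herm(n), rand_herm(n)
U_exact = expH(H0 + alpha * V, -1j * t)
U0 = expH(H0, -1j * t)

# midpoint rule for int_0^t U_0^dag(s) V U_0(s) ds
ns = 400
ds = t / ns
I = sum(expH(H0, 1j * s) @ V @ expH(H0, -1j * s)
        for s in (np.arange(ns) + 0.5) * ds) * ds

U_approx = U0 @ (np.eye(n) - 1j * alpha * I)
err0 = np.linalg.norm(U_exact - U0)        # ignoring the coupling entirely
err1 = np.linalg.norm(U_exact - U_approx)  # keeping the first-order term
print(err1, err0)                          # err1 should be much smaller than err0
```

Since the neglected term is of order $\alpha^2$, halving $\alpha$ should reduce `err1` by roughly a factor of four, which is another easy consistency check.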
Using the Heisenberg equation of motion and Eq.~\\eqref{ref15}, it can be shown that, correct to second order in the system-environment coupling strength, \n\\begin{align} \\label{ref16}\n \\frac{d}{dt}X_{nm}^H(t) &=i\\left[H_0^H(t),X_{nm}^H(t)\\right]+i\\alpha\\left[\\Tilde{V}(t),\\Tilde{X}_{nm}(t)\\right] \\nonumber \\\\\n &+ \\alpha^2\\int_{0}^{t}ds\\left[\\left[\\Tilde{V}(t),\\Tilde{X}_{nm}(t)\\right],\\Tilde{V}(s)\\right],\n\\end{align}\nwhere the tildes denote time evolution under the `free' unitary operator $U_0 \\left(t\\right)$ while the superscript `$H$' denotes time evolution with the full time evolution operator. Using Eq.~\\eqref{ref16} and given the initial system-environment state $\\rho_\\text{tot}(0)$ in Eq.~\\eqref{sysdminitial}, we can derive our master equation that describes the time evolution of the quantum system by simplifying Eq.~\\eqref{mastereqgen}. The first term is very straightforward. We simply have that \n\\begin{align}\n &\\text{Tr}_\\text{S,B}\\left[\\rho_\\text{tot}(0)i\\left[H_0^H(t),X_{nm}^H(t)\\right]\\right] \\nonumber\\\\\n &=i\\text{Tr}_\\text{S,B}\\left[\\rho_\\text{tot}(t)\\left[H_S + H_B,(\\ket{n}\\bra{m}\\otimes \\mathds{1}_B)\\right]\\right]\\nonumber\\\\\n &=i\\text{Tr}_\\text{S}\\left[\\rho(t)\\left[H_\\text{S},\\ket{n}\\bra{m}\\right]\\right]\\nonumber\\\\\n &=i\\bra{m}\\left[\\rho(t),H_\\text{S}\\right]\\ket{n}. 
\\label{ref20}\n\\end{align}\nThis term simply tells us about free system evolution corresponding to the system Hamiltonian $H_\\text{S}$.\n\nTo calculate the next term in our master equation, that is,\n \\begin{align*}\n i\\alpha \\text{Tr}_\\text{S,B}\\left[\\rho_\\text{tot}(0)\\left[V(t),X_{nm}(t)\\right]\\right],\n\\end{align*}\nit is useful to write the initial system-environment state as $\\rho_{\\text{tot}}(0) = \\rho_{\\text{tot}}^{(0)}(0) + \\rho_{\\text{tot}}^{(1)}(0)$, where, using Eq.~\\eqref{ref14}, it should be clear that \n\\begin{align}\n \\rho_\\text{tot}^{(0)}(0)\n &=\\frac{\\Omega e^{-\\beta(H_\\text{S0}+H_B)}\\Omega^\\dagger}{Z_\\text{tot}} = \\widetilde{\\rho}_{S0} \\otimes \\rho_B, \\label{ref18}\\\\\n \\rho_\\text{tot}^{(1)}(0)\n &=\\frac{-\\alpha \\Omega e^{-\\beta(H_\\text{S0}+H_B)}Q_\\text{SB}(\\beta)\\Omega^\\dagger}{Z_\\text{tot}}, \\label{ref19}\n\\end{align}\nwith $\\widetilde{\\rho}_{S0} = e^{-\\beta \\widetilde{H}_{S0}}\/Z_{S0}$, $\\rho_B = e^{-\\beta H_B}\/Z_B$, the partition function $Z_\\text{tot} = Z_{S0} Z_B$ to lowest order, and $Q_{\\text{SB}}(\\beta) = \\int_0^\\beta d\\lambda F(\\lambda)\\otimes B(\\lambda)$. We do not need the higher order terms since there is already a factor of $\\alpha$ in $i\\alpha\\left[V(t),X_{nm}(t)\\right]$. Now, the contribution of $\\rho_\\text{tot}^{(0)}$ is\n\\begin{align*}\n &i\\alpha \\text{Tr}_\\text{S,B}\\left [\\rho_\\text{tot}^{(0)}(0)\\left[U_0^\\dagger(t)VU_0(t),U_0^\\dagger(t)X_{nm}U_0(t)\\right]\\right]\\\\\n &=i\\alpha \\text{Tr}_\\text{S,B}\\left[\\widetilde{\\rho}_{S0}\\otimes\\rho_B U_0^\\dagger(t) \\left[F\\otimes B,\\ket{n}\\bra{m}\\otimes \\mathds{1}_B\\right] U_0(t)\\right]\\\\\n &=i\\alpha \\text{Tr}_\\text{S}\\left [\\widetilde{\\rho}_{S0}U_S^\\dagger(t)\\left[F,\\ket{n}\\bra{m}\\right]U_S(t)\\right ]\\times \\langle B(t)\\rangle_B.\n\\end{align*}\nAgain, since $\\langle B(t)\\rangle_B$ is zero for most system-environment models, this contribution turns out to be zero. 
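The vanishing of $\langle B(t)\rangle_B$ can be made concrete for the typical bosonic coupling operator $B = \sum_k (g_k^* b_k + g_k b_k^\dagger)$: in a thermal state, $\rho_B$ is diagonal in the number basis while $B(t)$ is purely off-diagonal, so the average is identically zero at all times. A single-mode, truncated-Hilbert-space illustration (all parameter values arbitrary):

```python
import numpy as np

# <B(t)>_B = 0 for a thermal bosonic mode: rho_B is diagonal in the number
# basis, B(t) has no diagonal elements. One truncated mode, toy parameters.
n_max, omega, beta, g = 12, 1.3, 2.0, 0.7 + 0.2j

b = np.diag(np.sqrt(np.arange(1.0, n_max)), 1)   # truncated annihilation operator
p = np.exp(-beta * omega * np.arange(n_max))
rho_B = np.diag(p / p.sum())                     # thermal state of H_B = omega b^dag b
B = np.conj(g) * b + g * b.conj().T

for t in (0.0, 0.5, 3.7):
    phase = np.exp(1j * omega * np.arange(n_max) * t)   # diagonal of e^{i H_B t}
    B_t = np.diag(phase) @ B @ np.diag(phase.conj())    # B(t) = U_B^dag(t) B U_B(t)
    assert abs(np.trace(rho_B @ B_t)) < 1e-12
print("average of B(t) vanishes for all t")
```

The same diagonal-versus-off-diagonal argument carries over mode by mode to the full multimode bath.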
\n\nThe most interesting contribution is due to $\\rho_\\text{tot}^{(1)}(0)$. This is\n\\begin{align}\n &i\\alpha \\text{Tr}_\\text{S,B}\\Big[\\rho_\\text{tot}^{(1)}(0)\\left[U_0^\\dagger(t)VU_0(t),U_0^\\dagger(t)X_{nm}U_0(t)\\right]\\Big]\\nonumber\\\\\n &=\\frac{-i\\alpha^2}{Z_\\text{S0}}\\int_{0}^{\\beta}\\text{Tr}_\\text{S,B}\\Big[\\rho_B{\\Omega e^{-\\beta H_\\text{S0}} F(\\lambda)\\Omega^\\dagger\\otimes B(\\lambda)}U_S^\\dagger(t) \\nonumber \\\\\n & \\times\\left[F,\\ket{n}\\bra{m}\\right]U_S(t)U_B^\\dagger(t)BU_B(t)\\Big]d\\lambda\\nonumber\\\\\n &=\\frac{-i\\alpha^2}{Z_\\text{S0}}\\int_{0}^{\\beta}\\text{Tr}_\\text{S}\\Big[{\\Omega e^{-\\beta H_\\text{S0}} F(\\lambda)\\Omega^\\dagger}U_S^\\dagger(t)\\left[F,\\ket{n}\\bra{m}\\right] \\nonumber \\\\\n &\\times U_S(t)\\Big] \\text{Tr}_B\\Big[\\rho_B B(\\lambda)B(t)\\Big]d\\lambda\\nonumber\\\\\n &=\\frac{-i\\alpha^2}{Z_\\text{S0}}\\int_{0}^{\\beta}\\bra{m}\\left[U_S(t)\\Omega e^{-\\beta H_\\text{S0}} F(\\lambda)\\Omega^\\dagger U_S^\\dagger(t),F\\right]\\ket{n} \\nonumber \\\\\n & \\times B_{\\text{corr}}(\\lambda,t)\\, d\\lambda, \\label{ref21}\n\\end{align}\nwhere we have defined $B(t) = U_B^\\dagger (t) B U_B(t)$ and $B_{\\text{corr}}(\\lambda,t) = \\text{Tr}_B \\Big[\\rho_B B(\\lambda)B(t)\\Big]$. This is the additional term in the master equation that takes into account the effect of the initial correlations, correct to second order in the system-environment coupling strength. 
In basis-independent form, we can write this term as \n\\begin{align}\n -i\\left[ \\widetilde{\\rho}_S(t)J^R_\\text{corr}(\\beta,t),F\\right],\n\\end{align}\nwhere we have defined \n\\begin{align} \\label{ref23}\n J^R_\\text{corr}(\\beta,t)\n &=\\int_{0}^{\\beta} \\overleftarrow{F}^R(\\lambda,t)B_\\text{corr}(\\lambda,t)d\\lambda,\\\\\n \\overleftarrow{F}^R(\\lambda,t)\n &= U_S(t)\\Omega e^{\\lambda H_\\text{S0}}Fe^{-\\lambda H_\\text{S0}}\\Omega^\\dagger U_S^\\dagger(t).\n\\end{align}\nWe now write \n\\begin{align}\n -i\\left[ \\widetilde{\\rho}_S(t)J^R_\\text{corr}(\\beta,t),F\\right] = -\\frac{i}{2}\\left(\\left[ \\widetilde{\\rho}_S(t)J^R_\\text{corr}(\\beta,t),F\\right] - \\text{h.c.}\\right).\n\\end{align}\nThis is permitted because $ -i\\left[ \\widetilde{\\rho}_S(t)J^R_\\text{corr}(\\beta,t),F\\right]$ is hermitian, so \n$\\left[ \\widetilde{\\rho}_S(t)J^R_\\text{corr}(\\beta,t),F\\right]$ is anti-hermitian. We now replace $\\widetilde{\\rho}_S(t)$ by $\\rho(t)$. This step is permitted since the corrections are of order higher than the second-order master equation that we are considering. Consequently, the term in the master equation that takes into account the initial correlations is $-\\frac{i}{2}\\left(\\left[ \\rho(t)J^R_\\text{corr}(\\beta,t),F\\right] - \\text{h.c.}\\right)$.\n\n\tWe next simplify the contribution of the third term in Eq.~\\eqref{ref16}. It is clear that now only $\\rho_{\\text{tot}}^{(0)}(0)$ contributes. Similar manipulations to those performed above lead to (see the appendix for details) \n$$ \\alpha^2\\int_{0}^{t}\\bra{m}\\Big(\\left[\\Bar{F}(t,s)\\widetilde{\\rho}_{S}(t),F\\right]C_{ts}+\\text{h.c.}\\Big)\\ket{n}\\,ds, $$\nwhere the environment correlation function is $C_{ts} = \\langle B(t)B(s) \\rangle_B$, $\\widetilde{\\rho}_S(t) = U_S(t) \\widetilde{\\rho}_{S0} U_S^\\dagger (t)$, $\\Bar{F}(t,s)=U_S^\\dagger(t,s)FU_S(t,s)$, and $\\text{h.c.}$ denotes the hermitian conjugate. 
We can further replace $\\widetilde{\\rho}_S(t)$ by $\\rho(t)$ to get \n$$ \\alpha^2\\int_{0}^{t}\\bra{m}\\Big(\\left[\\Bar{F}(t,s)\\rho(t),F\\right]C_{ts}+\\text{h.~c.}\\Big)\\ket{n}\\,ds. $$\nOnce again, this is permitted since the corrections lead to terms of higher order in the master equation (compared to the second order master equation that we are considering). We now put all the terms together to arrive at the general basis-independent form of the master equation given by \n\n\\begin{align}\n &\\frac{d}{dt}\\rho(t)\n =i\\left[\\rho(t),H_S\\right]-\\frac{i}{2}\\left(\\left[ \\rho(t)J^R_\\text{corr}(\\beta,t),F\\right] - \\text{h.c.}\\right)\\nonumber\\\\\n &+\\int_{0}^{t}\\left(\\left[\\Bar{F}(t,s)\\rho(t),F\\right]C_{ts}+\\text{h.c.}\\right)\\,ds.\n \\label{finalme}\n\\end{align}\n\n\n\n\n\n\n\n\\begin{comment}\n\\subsubsection{$\\Omega$ as a mixed state projection operator $\\ket{\\psi_i}\\bra{\\psi_i}$}\nFirst and third term will be totally unchanged and while second term will have some modifications, Eq. 
\\eqref{ref21} under the action projection operator and after some those little modifications, the second term turns out to be.\n\\begin{align*}\n &-i\\int_{o}^{\\beta}\\frac{1}{\\mathcal{Z'}}\\sum_{i=1}^{n}P_i\\bra{\\psi_i} e^{-\\beta H_\\text{S0}} F(\\lambda)\\ket{\\psi_i} \\text{B}_{\\text{corr}}(\\lambda,t)d\\lambda\\\\\n &=-i\\bra{m}\\left[\\rho(t),F\\right]\\ket{n}\\mathcal{F}_\\text{corr},\n \\end{align*}\nhence overall master equation in operator form become\n\\begin{align}\n &\\frac{d}{dt}\\rho(t)\n =i\\left[\\rho(t),H_S\\right]-i\\left[\\rho(t),F\\right]\\mathcal{F}_\\text{corr}\\nonumber\\\\\n &+\\int_{0}^{t}\\left(\\left[\\Bar{F}(t,s)\\rho(t),F\\right]C_\\text{ts}+\\text{hc}\\right)ds\n\\end{align}\nwith\n\\begin{align}\n \\mathcal{F}_\\text{corr}(t)\n &=\\int_{o}^{\\beta}\\sum_{i=1}^{n}\\frac{P_i\\bra{\\psi_i} e^{-\\beta H_\\text{S0}} F(\\lambda)\\ket{\\psi_i} \\text{B}_{\\text{corr}}(\\lambda,t)d\\lambda}{\\mathcal{Z'}}\\\\\n \\mathcal{Z'} \n &= \\bra{\\psi_i} e^{-\\beta H_\\text{S0}} \\ket{\\psi_i}\n \\label{finalme}\n\\end{align}\n\\end{comment}\n\n\n\n\n\n\n\\section{Application to the large spin-boson model}\nWe will now apply our derived master equation to a variant of the paradigmatic spin-boson model \\cite{BPbook} with a large number of two-level systems interacting with a common environment of harmonic oscillators \\cite{KurizkiPRL20112,ChaudhryPRA2013a,ChaudhryPRA2013b}. Recall that the total system-environment Hamiltonian is given by $H_\\text{tot} = H_\\text{S0}+H_\\text{B}+V$ for $t < 0$, while $H_{\\text{tot}} = H_S+H_B+V$ for $t \\geq 0$. 
For the large spin-boson model, we write\n\\begin{align}\n H_\\text{S0}\n &= \\varepsilon_0 J_z + \\Delta_0 J_x\\\\\n H_S&= \\varepsilon J_z + \\Delta J_x\\\\\n H_B&= \\sum_{k} \\omega_k b_{k}^{\\dagger} b_k,\\\\\n V&=J_z \\sum_k \\big( g_k^* b_k + g_k b_k^\\dagger \\big)\n\\end{align}\nwhere $J_x,J_y,J_z$ are the collective spin operators with $J^2 =J_x^2+J_y^2+J_z^2$, $\\varepsilon$ is the energy bias, $\\Delta$ is the tunneling amplitude, $H_B$ describes the bath of harmonic oscillators (we are ignoring the zero-point energy), while $V$ describes the interaction between the common harmonic oscillator bath and the spin system. We have set $ \\hbar =1 $ throughout and the values of other parameters will be in dimensionless units. Note that the system operator is $F=J_z$, and the bath operator is $B=\\sum_k \\big( g_k^* b_k + g_k b_k^\\dagger \\big)$. One can imagine that the large-spin system has been interacting with the environment for a long time with a relatively large value of $\\varepsilon_0$ and a small value of $\\Delta_0$. In such a situation, the state of the system will approximately correspond to the state with all spins down in the $z$-direction. At time $t = 0$, we then apply a unitary operator to prepare the desired initial state. For example, if the desired initial state is one with all spins in the $x$-direction, then the unitary operator that should be applied is $U_R = e^{i\\pi J_y\/2}$. In other words, a $\\frac{\\pi}{2}$-pulse is used to prepare the initial system state, with the assumption that this pulse takes a very short time to be applied. With the initial state approximately prepared, we can then change the parameters of the system Hamiltonian to whatever values we desire to generate any required system evolution. Let us look at how the initial system-environment correlations appear in this system evolution using our general master equation. \n\nOur objective is to calculate the operator $J^R_{\\text{corr}}$. 
To do so, we first find \n\\begin{align*}\n &\\overleftarrow{F}^R(\\lambda,t)=U_S(t)\\left[U_R\\left(e^{\\lambda H_{S0}}Fe^{-\\lambda H_{S0}}\\right)U^\\dagger_R\\right]U_S^\\dagger(t)\\\\\n &=J_x\\left[a_xd_x+a_yc_x-a_z b_x\\right]+J_y\\left[a_xd_y+a_yc_y-a_z b_y\\right]\\\\\n &+J_z\\left[a_xd_z+a_yc_z-a_zb_z\\right],\n\\end{align*}\nwith \n\\begin{align*}\n a_x\n &=\\frac{\\varepsilon_0\\Delta_0}{\\Delta'^2} \\left\\{1-\\cosh\\left({\\lambda\\Delta'} \\right)\\right\\},\\\\\n a_y\n &=\\frac{-i\\Delta_0}{\\Delta'}\\sinh \\left(\\lambda\\Delta'\\right),\\\\\n a_z\n &= \\frac{\\varepsilon_0^2 + \\Delta^2_0 \\cosh \\left({\\lambda\\Delta'}\\right)}{\\Delta'^2},\\\\\n b_x\n &= \\frac{\\Delta^2 + \\varepsilon^2 \\cos \\left({\\widetilde{\\Delta}t}\\right)}{\\widetilde{\\Delta}^2},\\\\\n b_y\n &= \\frac{ \\varepsilon}{\\widetilde{\\Delta}}\\sin \\left( \\widetilde{\\Delta}t\\right),\\\\\n b_z\n &= \\frac{\\varepsilon \\Delta}{\\widetilde{\\Delta}^2} \\left\\{ 1 - \\cos\\left({\\widetilde{\\Delta}t} \\right)\\right\\},\\\\\n c_x \n &= -\\frac{\\varepsilon }{\\widetilde{\\Delta}}\\sin{\\left(\\widetilde{\\Delta}t\\right)},\\\\\n c_y\n &= \\cos{\\left(\\widetilde{\\Delta}t\\right)},\\\\\n c_z\n &= \\frac{\\Delta }{\\widetilde{\\Delta}}\\sin{\\left(\\widetilde{\\Delta}t\\right)},\\\\\n d_x &= \\frac{\\varepsilon\\Delta }{\\widetilde{\\Delta}^2}\\left\\{1-\\cos{\\left(\\widetilde{\\Delta}t\\right)}\\right\\},\\\\\n d_y &=-\\frac{\\Delta }{\\widetilde{\\Delta}}\\sin{\\left(\\widetilde{\\Delta}t\\right)},\\\\\n d_z &= 1+\\frac{\\Delta^2 }{\\widetilde{\\Delta}^2}\\left\\{\\cos{\\left(\\widetilde{\\Delta}t\\right)}-1\\right\\}.\n\\end{align*}\nHere $\\Delta'^2 = \\varepsilon_0^2 + \\Delta_0^2$ and $\\widetilde{\\Delta}^2 = \\varepsilon^2 + \\Delta^2$. 
In short, \n\\begin{align} \\label{ref24}\n \\overleftarrow{F}^R(\\lambda,t)=\\alpha_1(\\lambda,t)J_x+\\alpha_2(\\lambda,t)J_y+\\alpha_3(\\lambda,t)J_z,\n\\end{align}\nwhere,\n\\begin{align*}\n \\alpha_1(\\lambda,t)\n &=a_xd_x+a_yc_x-a_z b_x,\\\\\n \\alpha_2(\\lambda,t)\n &=a_xd_y+a_yc_y-a_z b_y,\\\\\n \\alpha_3(\\lambda,t)\n &=a_xd_z+a_yc_z-a_zb_z.\n\\end{align*}\nIt then follows that \n\\begin{align}\n\\label{initialcorrelations}\n J_\\text{corr}^R(\\beta,t)\n &=P(\\beta,t)J_x+Q(\\beta,t)J_y+R(\\beta,t)J_z,\n\\end{align}\nwith,\n\\begin{align*}\n P(\\beta,t)\n &=\\int_{0}^{\\beta} \\alpha_1(\\lambda,t)B_{\\text{corr}}(\\lambda,t)d\\lambda,\\\\\n Q(\\beta,t)\n &=\\int_{0}^{\\beta} \\alpha_2(\\lambda,t)B_{\\text{corr}}(\\lambda,t)d\\lambda,\\\\\n R(\\beta,t)\n &=\\int_{0}^{\\beta} \\alpha_3(\\lambda,t)B_{\\text{corr}}(\\lambda,t)d\\lambda.\n\\end{align*}\nWe now calculate the bath correlation term $B_{\\text{corr}}(\\lambda,t)$. First, \n\\begin{align}\n B(t)=\\sum_k\\left(g_k^*e^{-i\\omega_{k}t}b_k+g_ke^{i\\omega_{k}t}b_k^\\dagger\\right),\n\\end{align}\nand\n\\begin{align}\n B(\\lambda)=\\sum_k\\left(g_k^*e^{-\\lambda\\omega_{k}}b_k+g_ke^{\\lambda\\omega_{k}}b_k^\\dagger\\right).\n\\end{align}\nSince $B_{\\text{corr}}(\\lambda,t) =\\text{Tr}[\\rho_B B(\\lambda) B(t)]$, we find \n\\begin{align}\n B_{\\text{corr}}(\\lambda,t)&=\\sum_k |g_k|^2\\Big\\{e^{-\\omega_{k}\\left(\\lambda-it\\right)}\\nonumber \\\\& +2n_k\\cosh{\\left(\\lambda\\omega_{k}-i\\omega_{k}t\\right)}\\Big\\},\\label{ref26}\n\\end{align}\nwith $n_k$ given by Bose-Einstein statistics as \n\\begin{align}\n n_k&=\\frac{1}{2}\\left\\{\\coth{\\left(\\frac{\\beta\\omega_k}{2}\\right)}-1\\right\\}\\nonumber.\n\\end{align}\nTo perform the sum over the environment modes, we use the spectral density $J(\\omega)$ via $\\sum_k |g_k|^2 (\\hdots) \\rightarrow \\int_0^\\infty d\\omega \\, J(\\omega) (\\hdots)$. We generally use an Ohmic spectral density of the form $J(\\omega) = G\\omega e^{-\\omega\/\\omega_c}$. 
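A sketch of how the frequency integral for $B_{\text{corr}}(\lambda,t)$ can be evaluated numerically in the Ohmic case is given below (parameter values are illustrative). One practical point: the $2n_k\cosh(\cdot)$ piece naively multiplies a growing $\cosh$ by an exponentially small occupation, which overflows at large $\omega$; rewriting it with explicitly damped exponentials, valid for $0 \le \lambda \le \beta$, avoids this.

```python
import numpy as np

# Numerical B_corr(lambda, t) for an Ohmic spectral density J(w) = G w e^{-w/w_c},
# via the midpoint rule. Parameter values are illustrative only.
G, omega_c, beta = 0.05, 5.0, 1.0

def B_corr(lam, t, w_max=150.0, n=200_000):
    w = (np.arange(n) + 0.5) * (w_max / n)
    J = G * w * np.exp(-w / omega_c)
    # first term of the correlation function
    term1 = np.exp(-w * (lam - 1j * t))
    # 2 n(w) cosh(w(lam - i t)) rewritten with damped exponentials
    # (all real exponents are <= 0 for 0 <= lam <= beta, so no overflow)
    term2 = (np.exp(-w * (beta - lam) - 1j * w * t)
             + np.exp(-w * (beta + lam) + 1j * w * t)) / (1 - np.exp(-beta * w))
    return np.sum(J * (term1 + term2)) * (w_max / n)

# At lambda = t = 0 the integrand reduces to J(w) coth(beta w / 2): real, positive.
print(B_corr(0.0, 0.0))
```

More generally, at $t = 0$ every term in the integrand is real, so $B_{\text{corr}}(\lambda, 0)$ should come out purely real; this is a convenient check on the implementation.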
The integrals are performed numerically to find $J_{\\text{corr}}(\\beta,t)$, and the results are incorporated in the numerical simulations of the master equation. We start by first looking at the pure dephasing case ($\\Delta=\\Delta_0=0$), since this case can be solved exactly and serves as a useful benchmark (see the appendix for details regarding the exact solution). We illustrate our results in Fig.~\\ref{Puredephasing-N=1-unitary} for $N = 1$. Two points should be noted. First, the role played by initial correlations is very small in this case. Second, our master equation reproduces the exact results very well. Since we expect that the role of the initial correlations increases with increasing $N$, we next look at $N = 4$ and $N = 10$. Results are shown in Figs.~\\ref{Puredephasing-N=4-unitary} and \\ref{Puredephasing-N=10-unitary}. It is clear that as $N$ increases, the initial correlations play a larger and larger role. Moreover, the extra term in our master equation is able to take into account the effect of the initial correlations very well.\n\n\n\\begin{figure}[htp]\n \\centering \n \\includegraphics[width=0.95\\linewidth]{Puredephasing-N=1-unitary.eps}\n\\caption{Behavior of $j_x$ versus $t$ for $N = 1$ using the exact solution with (blue circled dot) and without (purple squares) initial correlations, as well as using the master\nequation with (solid, black line) and without (dashed, red line) initial correlations. We have used $\\varepsilon=\\varepsilon_0=4$, $G = 0.05$, $\\beta=1$ and $\\omega_c=5$. Here and in all other figures, the plotted variables are all in dimensionless units.} \\label{Puredephasing-N=1-unitary}\n\\end{figure} \n \\begin{figure}[htp]\n \\centering \n \\includegraphics[width=0.95\\linewidth]{Puredephasing-N=4-unitary.eps}\n\\caption{Same as Fig. 
\\ref{Puredephasing-N=1-unitary}, except that we now have $N = 4$.}\n\\label{Puredephasing-N=4-unitary}\n\\end{figure}\n\\begin{figure}[htp]\n \\centering \n \\includegraphics[width=0.95\\linewidth]{Puredephasing-N=10-unitary.eps}\n\\caption{Same as Fig. \\ref{Puredephasing-N=4-unitary}, except that we now have $N = 10$.} \\label{Puredephasing-N=10-unitary}\n\\end{figure}\n\nHaving shown that our master equation is able to reproduce results for the pure dephasing model, we are now in a position to go beyond the pure dephasing model and see the effects of the initial correlations. In Fig.~\\ref{Beyond-PD-N=2}, we have shown the dynamics of $j_x$ with a non-zero value of the tunneling amplitude for $N = 2$. It is clear that the initial correlations do have a small influence on the dynamics. This effect becomes more pronounced as we increase $N$ (see Figs.~\\ref{Beyond-PD-N=4} and \\ref{Beyond-PD-N=10}). We have also looked at how the role played by the initial correlations changes as the temperature changes. To this end, we compare Fig.~\\ref{Beyond-PD-N=10}, where the inverse temperature is $\\beta = 1$, with Fig.~\\ref{Beta=0.5} where $\\beta = 0.5$ and Fig.~\\ref{Beta=1.5} where $\\beta = 1.5$. As expected, at higher temperatures, the effect of the initial correlations decreases, while at lower temperatures, the effect of the initial correlations increases. \\\\\n\\begin{figure}[htp]\n \\centering \n \\includegraphics[width=0.95\\linewidth]{Beyond-PD-N=2new.eps}\n\\caption{Behavior of $j_x$ against $t$ for $N=2$ with (black, solid) and without (dashed, red) taking into\naccount initial correlations. Here we have used $\\varepsilon_0 = 4$, $\\varepsilon=2.5$ and $\\Delta=\\Delta_0=0.5$, while the rest of the parameters used are the same as Fig. \\ref{Puredephasing-N=1-unitary}.} \\label{Beyond-PD-N=2}\n\\end{figure}\n\n\\begin{figure}[htp]\n \\centering \n \\includegraphics[width=0.95\\linewidth]{Beyond-PD-N=4new.eps}\n\\caption{Same as Fig. 
\\ref{Beyond-PD-N=2}, except that we now have $N = 4$.} \\label{Beyond-PD-N=4}\n\\end{figure}\n\n\\begin{figure}[htp]\n \\centering \n \\includegraphics[width=0.95\\linewidth]{Beyond-PD-N=10.eps}\n\\caption{Same as Fig. \\ref{Beyond-PD-N=4}, except that we now have $N = 10$.} \\label{Beyond-PD-N=10}\n\\end{figure}\n\\begin{figure}[htp]\n \\centering \n \\includegraphics[width=0.95\\linewidth]{fig6-beta=0.5new.eps}\n\\caption{Behavior of $j_x$ against $t$ for $N=10$ with (black, solid) and without (dashed, red) taking into account initial correlations. Here we have used the same parameters as Fig.~\\ref{Beyond-PD-N=10}, except that $\\beta = 0.5$.} \\label{Beta=0.5}\n\\end{figure}\n\\begin{figure}[htp]\n \\centering \n \\includegraphics[width=0.95\\linewidth]{fig6beta=1.5new.eps}\n\\caption{Same as Figs.~\\ref{Beyond-PD-N=10} and \\ref{Beta=0.5}, except that we now have $\\beta=1.5$.} \\label{Beta=1.5}\n\\end{figure}\n\nThe effects of the initial correlations are not manifested in the dynamics of $j_x$ alone: in Figs.~\\ref{Jx^2-plot-for-N=4} and \\ref{Jx^2-plot-for-N=10}, we show the dynamics of $j_x^{(2)}\\equiv 4\\expval{J_x^2}\/N^2$. Such an observable is relevant in the study of spin squeezing and entanglement. Once again, the initial correlations can affect the dynamics significantly.\n\\begin{figure}[htp]\n \\centering \n \\includegraphics[width=0.95\\linewidth]{Jx_2-plot-for-N=4.eps}\n\\caption{Behavior of $j_x^{(2)}$ against $t$ for $N = 4$\nwith (black, solid) and without (dashed, red) taking into account initial correlations. The rest of the parameters used are the same as those in Fig.~\\ref{Beyond-PD-N=2}.} \\label{Jx^2-plot-for-N=4}\n\\end{figure}\n\\begin{figure}[htp]\n \\centering \n \\includegraphics[width=0.95\\linewidth]{Jx_2-plot-for-N=10.eps}\n\\caption{Same as Fig. 
\\ref{Jx^2-plot-for-N=4}, except that we now have $N=10$.} \\label{Jx^2-plot-for-N=10}\n\\end{figure}\n\nFinally, let us also demonstrate the effect of the initial correlations with a sub-Ohmic environment, that is, $J(\\omega) = G\\omega^s \\omega_c^{1 - s} e^{-\\omega\/\\omega_c}$. Since these environments have longer correlation times, we expect that the effect of the initial correlations will be greater as well. This is indeed the case, as can be seen by comparing Figs.~\\ref{subOhmic1} and \\ref{subOhmic2} with Figs.~\\ref{Beyond-PD-N=4} and \\ref{Beyond-PD-N=10}, where an Ohmic environment was used. \n\n\\begin{figure}[htp]\n \\centering \n \\includegraphics[width=0.95\\linewidth]{subOhmic-S=0.5,J=2.eps}\n\\caption{Behavior of $j_x$ against $t$ for $N=4$ with (black, solid) and without (dashed, red) taking into\naccount initial correlations. Here we have used a sub-Ohmic environment with $s = 0.5$. We also have $\\varepsilon_0 = 4$, $\\varepsilon=2.5$ and $\\Delta=\\Delta_0=0.5$, while the rest of the parameters used are the same as Fig. \\ref{Puredephasing-N=1-unitary}.} \\label{subOhmic1}\n\\end{figure}\n\n\\begin{figure}[htp]\n \\centering \n \\includegraphics[width=0.95\\linewidth]{subOhmic-S=0.5,J=5.eps}\n\\caption{Same as Fig.~\\ref{subOhmic1}, except that now $N = 10$.} \\label{subOhmic2}\n\\end{figure}\n\n\n\n\\section{Application to the spin-spin environment model}\n\nWe consider now a collection of identical two-level systems interacting with an environment consisting of two-level systems \\cite{cucchietti2005decoherence,CamaletPRB2007, Schlosshauerbook,VillarPhysLettA2009,SegalJCP2014}. 
We then have\n\\begin{align*}\n H_\\text{S0}\n &= \\varepsilon_0 J_z + \\Delta_0 J_x\\\\\n H_S&= \\varepsilon J_z + \\Delta J_x\\\\\n H_B&= \\sum_{k} \\frac{\\omega_k}{2} \\sigma_{x}^{(k)},\\\\\n V&=J_x \\otimes \\sum_k g_k \\sigma_{z}^{(k)}\n\\end{align*}\nwhere $\\sigma_{z}^{(k)}$ and $ \\sigma_{x}^{(k)}$ are the Pauli $z$-spin and $x$-spin operators\nof the $k$th environment spin, respectively, $\\omega_k$ denotes the tunneling matrix element for the $k$th environment spin, and $g_k$ quantifies the coupling strength.\\\\\nThe different environment leads to a different correlation function $C_{ts}$ as well as a different factor $J^R_\\text{corr}(\\beta,t)$ that takes into account the effect of the initial system-environment correlations. We first note that \n\\begin{align}\n B_{\\text{corr}}(\\lambda,t)&=\\sum_k |g_k|^2\\Big\\{\\tanh{\\left(\\frac{\\beta \\omega_k}{2}\\right)}e^{-\\omega_{k}\\left(\\lambda-it\\right)}\\nonumber\\\\\n &+2n_k\\sinh{\\left(\\lambda\\omega_{k}-i\\omega_{k}t\\right)}\\Big\\},\n\\end{align}\nwith \n\\begin{align}\n n_k=\\frac{1}{2}\\left\\{\\coth{\\left(\\frac{\\beta\\omega_k}{2}\\right)}-1\\right\\}\\nonumber.\n\\end{align}\nSince the factors $\\alpha_1(\\lambda,t)$, $\\alpha_2(\\lambda,t)$, and $\\alpha_3(\\lambda,t)$ are the same as before, this allows us to work out the role of the initial correlations [see Eq.~\\eqref{initialcorrelations}]. In a similar manner, the environment correlation function can also be worked out. As before, to perform the sum over the environment modes, we\nuse the same procedure as mentioned earlier, that is, $\\sum_k |g_k|^2 (\\hdots) \\rightarrow \\int_0^{\\infty}d\\omega J(\\omega) (\\hdots)$. Results are shown in Figs.~\\ref{spinenv1} and \\ref{spinenv2}. Once again, the role of the initial correlations is relatively small for a smaller value of $N$. 
However, as $N$ increases, it is clear that we need to take into account the role of the initial correlations to obtain an accurate picture of the system dynamics.\n\n\\begin{figure}[htp]\n \\centering \n \\includegraphics[width=0.95\\linewidth]{Spin-E-Ohmic-J=2.eps}\n\\caption{Behavior of $j_x$ against $t$ for $N = 4$\nwith (black, solid) and without (dashed, red) taking into account initial correlations. Here we have $\\varepsilon_0 = 4$, $\\varepsilon=2.5$, $\\Delta=\\Delta_0=0.5$, $G = 0.05$, $\\beta=1$ and $\\omega_c=5$.} \\label{spinenv1}\n\\end{figure}\n\n\\begin{figure}[htp]\n \\centering \n \\includegraphics[width=0.95\\linewidth]{Spin-E-Ohmic-J=5.eps}\n\\caption{Same as Fig.~\\ref{spinenv1}, except that now $N = 10$.} \\label{spinenv2}\n\\end{figure}\n\n\n\\section{Conclusion}\nTo conclude, we have shown that if we start from the joint thermal equilibrium state of a quantum system and its environment, and then apply a unitary operation to the system to prepare the system quantum state, the initial correlations that exist in the joint thermal equilibrium state influence the subsequent dynamics of the system. We have derived a time-local master equation, correct to second-order in the system-environment coupling strength, that takes into account the effect of these correlations, showing therefore that one need not necessarily be in the strong system-environment coupling regime to observe the effects of the initial correlations. The structure of this master equation is very interesting, as the form of the term that takes into account the initial correlations is the same as the relaxation and dephasing term. In this sense, one can say that the initial correlations affect the decoherence and dephasing rates, a fact which has already been pointed out in studies of the role of initial correlations in pure dephasing models \\cite{MorozovPRA2012}. 
Finally, we actually applied our master equation to the large spin-boson model as well as to a collection of two-level systems interacting with a spin environment to quantitatively investigate the role of the initial correlations. We found that when the number of spins is small, then the initial correlations do not play a significant role. However, for a larger number of spins, the initial correlations must be accounted for in order to explain the dynamics accurately. \n\n\n\n\n\\section*{acknowledgements}\nA.~R.~M, M.~Z.~and A.~Z.~C. are grateful for support from HEC under grant No 5917\/Punjab\/NRPU\/R\\&D\/HEC\/2016. Support from the LUMS Faculty Initiative Fund is also acknowledged.\n\n\n\n\\begin{comment}\n\\subsection{State preparation via projection operator}\nFor an initial state prepared by a pure state mixed operator, work has been already done by [citation.]. As mentioned earlier, the initial state prepared using a pure state projection operator is given by Eq.~\\eqref{pure-projection-state.}.\nWe are considering the work done by [citation] as a benchmark and therefore, we are using the similar Hamiltonian to verify the results in this and the next sub-section. 
The total system-environment Hamiltonian can be written as:\n\\begin{align*}\n H=H_S + H_B +V,\n\\end{align*}\nwhere $H_S$ and $H_B$ are the same as before, and\n\\begin{align*}\n V= J_x \\sum_k \\left( g_k^* b_k + g_k b_k^\\dagger \\right).\n\\end{align*}\nChoosing an initial state $\\ket{\\psi}$ such that $J_Z\\ket{\\psi}= -\\frac{N}{2}\\ket{\\psi}$, it can be shown that\n\\begin{align*}\n &f_\\text{corr}(t)= N \\sum_k \\abs{g_k}^2 \\cos\\left(\\omega_k t\\right) \\times \\nonumber \\\\\n &\\left[ \\frac{A}{\\omega_k} + \\frac{D}{\\tilde{\\Delta}^2-\\omega_k^2}\\left\\{\\tilde{\\Delta} \\coth\\left(\\frac{\\beta \\omega_k}{2} \\right)-\\omega_k \\coth\\left(\\frac{\\beta\\tilde{\\Delta}}{2}\\right)\\right\\} \\right],\n\\end{align*}\nwith\n\\begin{align*}\n A &= -\\frac{\\Delta}{\\tilde{\\Delta}} \\frac{\\tilde{\\Delta} + \\varepsilon \\coth\\left(\\beta\\tilde{\\Delta}\/2\\right)}{\\tilde{\\Delta} \\coth\\left(\\beta\\tilde{\\Delta}\/2\\right) + \\varepsilon},\\\\\n D&= \\frac{\\varepsilon \\Delta \\tilde{\\Delta}}{\\tilde{\\Delta} \\coth\\left(\\beta\\tilde{\\Delta}\/2\\right) + \\varepsilon }.\n\\end{align*}\nIt can also be shown that $f_\\text{corr}\\left( t\\right)$ is real.\n\\subsection{State preparation via mixed state projection operator}\nThis is a more general case of the previous sub-section. We now choose an initial state $\\ket{\\psi}= P_1 \\ket{\\psi_1} + P_2 \\ket{\\psi_2}$, such that $J_Z\\ket{\\psi_1}= -\\frac{N}{2}\\ket{\\psi_1}$ and $J_Z\\ket{\\psi_2}= \\frac{N}{2}\\ket{\\psi_2}$. 
By generalizing the previous results, it can be shown that\n\\begin{align}\n f_\\text{corr}(t) = f_{\\text{corr},1} + f_{\\text{corr},2},\n\\end{align}\nwhere\n\\begin{align}\n &f_{\\text{corr},1}= P_1 N \\sum_k \\abs{g_k}^2 \\cos\\left(\\omega_k t\\right) \\times \\nonumber \\\\\n &\\left[ \\frac{A_1}{\\omega_k} + \\frac{D_1}{\\tilde{\\Delta}^2-\\omega_k^2}\\left\\{\\tilde{\\Delta} \\coth\\left(\\frac{\\beta \\omega_k}{2} \\right)-\\omega_k \\coth\\left(\\frac{\\beta\\tilde{\\Delta}}{2}\\right)\\right\\} \\right],\n\\end{align}\n\\begin{align}\n &f_{\\text{corr},2}= P_2 N \\sum_k \\abs{g_k}^2 \\cos\\left(\\omega_k t\\right) \\times \\nonumber \\\\\n &\\left[ \\frac{A_2}{\\omega_k} + \\frac{D_2}{\\tilde{\\Delta}^2-\\omega_k^2}\\left\\{\\tilde{\\Delta} \\coth\\left(\\frac{\\beta \\omega_k}{2} \\right)-\\omega_k \\coth\\left(\\frac{\\beta\\tilde{\\Delta}}{2}\\right)\\right\\} \\right],\n\\end{align}\nwith\n\\begin{align}\n A_1 &= -\\frac{\\Delta}{\\tilde{\\Delta}} \\frac{\\tilde{\\Delta} + \\varepsilon \\coth\\left(\\beta\\tilde{\\Delta}\/2\\right)}{\\tilde{\\Delta} \\coth\\left(\\beta\\tilde{\\Delta}\/2\\right) + \\varepsilon},\\\\\n D_1&= \\frac{\\varepsilon \\Delta \\tilde{\\Delta}}{\\tilde{\\Delta} \\coth\\left(\\beta\\tilde{\\Delta}\/2\\right) + \\varepsilon },\\\\\n A_2 &= -\\frac{\\Delta}{\\tilde{\\Delta}} \\frac{\\tilde{\\Delta} - \\varepsilon \\coth\\left(\\beta\\tilde{\\Delta}\/2\\right)}{\\tilde{\\Delta} \\coth\\left(\\beta\\tilde{\\Delta}\/2\\right) - \\varepsilon},\\\\\n D_2&= -\\frac{\\varepsilon \\Delta \\tilde{\\Delta}}{\\tilde{\\Delta} \\coth\\left(\\beta\\tilde{\\Delta}\/2\\right) - \\varepsilon }. \n\\end{align}\nIt can also be shown that $f_\\text{corr}\\left( t\\right)= f_{\\text{corr},1} + f_{\\text{corr},2}$ is real.\n\n\\end{comment}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\begin{abstract}\nThe study of non-linearity (linearity) of Boolean functions was initiated by Rothaus in 1976. 
The classical non-linearity of a Boolean function is the minimum Hamming distance of its truth table to that of affine functions. \nIn this note we introduce new ``multidimensional'' non-linearity parameters $(N_f,H_f)$ for conventional and vectorial Boolean functions $f$ with $m$ coordinates in $n$ variables.\n The classical non-linearity may be treated as a 1-dimensional parameter in the new definition. $r$-dimensional parameters for $r\\geq 2$ are relevant to possible multidimensional extensions of the Fast Correlation Attack in stream ciphers and of Linear Cryptanalysis in block ciphers. Besides, we introduce a notion of optimal vectorial Boolean functions relevant to the new parameters. For $r=1$ and even $n\\geq 2m$ optimal Boolean functions are exactly perfect nonlinear functions (generalizations of Rothaus' bent functions) defined by Nyberg in 1991. By a computer search we find that this property holds for $r=2, m=1, n=4$ too. That is an open problem for larger $n,m$ and $r\\geq 2$. The definitions may be easily extended to $q$-ary functions. \n\\end{abstract}\n\\section{Conventional Boolean Functions}\\label{Boolean}\nLet $f(x)=f(x_1,\\ldots,x_n)$ be a Boolean function (taking values $0,1$) in $n$ Boolean variables $x=(x_1,\\ldots,x_n)$. Let $v$ denote its weight (the number of values $1$ in the truth table). One constructs a probability distribution $p$ on binary $n$-strings such that $p_x=1\/v$ if $f(x)=1$ and $p_x=0$ otherwise.\n\nLet $r$ be a fixed number, $1\\leq r\\leq n$, and $U$ an $r\\times n$ binary matrix of rank $r$. The matrix defines a linear transform from the space of $n$-bit strings to the space of $r$-bit strings. That induces \n a probability distribution $q$ on binary $r$-strings. \n Namely, $q_y=\\sum_{y=Ux}p_x$, where the sum is over $x$ such that $y=Ux$. The distribution $q$ depends on $U$. 
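This construction is easy to compute directly for small $n$. A minimal sketch in Python (the toy function $f$, the matrix $U$, and the base-2 entropy convention are our own illustrative assumptions, not taken from this note):

```python
import itertools
import math

def induced_distribution(f, n, U):
    """q_y = sum of p_x over {x : y = U x (mod 2)}, where p is uniform
    on the support {x : f(x) = 1} of the Boolean function f."""
    support = [x for x in itertools.product((0, 1), repeat=n) if f(x) == 1]
    v = len(support)
    r = len(U)
    q = {y: 0.0 for y in itertools.product((0, 1), repeat=r)}
    for x in support:
        y = tuple(sum(u * xi for u, xi in zip(row, x)) % 2 for row in U)
        q[y] += 1.0 / v
    return q

def nonlinearity_parameters(q):
    """(N_q, H_q): number of zero entries of q and the entropy of q
    on its support (base-2 logarithm assumed here)."""
    N = sum(1 for p in q.values() if p == 0.0)
    H = -sum(p * math.log2(p) for p in q.values() if p > 0.0)
    return N, H

# toy example: f(x) = x1 x2 + x3 over F_2, with U projecting onto x3 (r = 1)
f = lambda x: (x[0] & x[1]) ^ x[2]
q = induced_distribution(f, 3, [(0, 0, 1)])
print(q, nonlinearity_parameters(q))  # q = {(0,): 0.25, (1,): 0.75}
```

For this toy $f$ the support has weight $v=4$, three of whose points have $x_3=1$, so the induced distribution is $(1\/4,3\/4)$.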
For $r=n$ the distribution $q$ is the distribution $p$ permuted by the linear permutation defined by $U$.\n \n For $r=1$ the distributions (in a slightly different form) are used in Correlation and Fast Correlation Attacks in stream ciphers, see \\cite{MS}.\n The efficiency of Correlation \nAttacks, e.g., for a Filter Generator with a filtering function $f$, depends on the probability\n$\\mathbf{Pr}(Ux=f(x)).$ By the definition of $q=(q_0,q_1)$, one gets $\\mathbf{Pr}(Ux=1,f(x)=1)=vq_1\/2^n$ and $\\mathbf{Pr}(Ux=0,f(x)=0)=1\/2-vq_0\/2^n$. \nSo \n\\begin{eqnarray}\n\\mathbf{Pr}(Ux=f(x))&=&\\mathbf{Pr}(Ux=1,f(x)=1)+\\mathbf{Pr}(Ux=0,f(x)=0)=\\frac{1}{2}+\\frac{v(q_1-q_0)}{2^n}.\\nonumber\\end{eqnarray}\n For $r\\geq 2$ the distribution $q$ may potentially \n be used in multidimensional extensions of Correlation Attacks. \n \n In cryptanalysis one may want to distinguish non-uniform distributions from uniform. The number of zero values of $q_y$ denoted $N_q$ and the entropy of $q$ on its support denoted $H_q$ are relevant parameters. \n For a fixed $r$ the distributions $q$ may be partitioned into classes by equivalence, where $q_1$ and $q_2$ are equivalent if $N_{q_1}=N_{q_2}$ and $H_{q_1}=H_{q_2}$. Obviously, the number of zero values provides a stronger distinguisher than the entropy. So we define\n an order on classes $\\{q\\}$ induced by the relation \n $\\{q_1\\}>\\{q_2\\}$ which holds if $N_{q_1}>N_{q_2}$ or if $N_{q_1}=N_{q_2}$ and $H_{q_1}<H_{q_2}$. The parameters of the largest (according to $>$) class we call the $r$-dimensional non-linearity of $f$. They are denoted $(N_f,H_f)$. Let, for instance, $r=n$; then $(N_f,H_f)=(2^n-v,\\log_2(v))$. It is easy to see that the $r$-dimensional non-linearity of $f$ is invariant under affine change of variables in $f$. 
\n \n Let $r=1$, then $Ux$ is a non-zero linear function and \n \\begin{eqnarray}\nq_0-q_1&=&\\sum_xp_x\\,(-1)^{Ux}=\\frac{1}{v}\\sum_{x}f(x)\\,(-1)^{Ux}\\nonumber\\\\\n&=&\\frac{1}{v}\\sum_{x}\\frac{1-(-1)^{f(x)}}{2}\\,(-1)^{Ux}=\\frac{-2^{n-1}}{v}W_U\n.\\nonumber\n\\end{eqnarray}\nThe numbers\n$W_a=\\frac{1}{2^n}\\sum_{x}(-1)^{f(x)+ax}$, where $a$ are encoded by binary $n$-strings, may be called Walsh-Hadamard spectrum of $f$. Also $(-1)^{f(x)}=\\sum_{a}W_a\\,(-1)^{ax}$. \n\n It is well known and easy to prove that $W_a=\\frac{v_a-2^{n-1}}{2^{n-1}}$, where $v_a$ is the number of $x$ such that $f(x)= ax$. \n Minimum distance of $f$ to affine functions (classical non-linearity of $f$) is defined by\n $$d_f=\\min_{a}(v_a,2^n-v_a)=2^{n-1}(1-\\max_{a }|W_a|).$$ \n \nTo construct $U$, where the distribution $q_0,q_1$ has the smallest entropy (largest bias $|q_0-q_1|$), one computes Walsh-Hadamard spectrum of $f$ and chooses $U$ such that $W_U$ is the largest in absolute value. The computation takes $n2^n$ integer additions and subtractions.\nSo the largest Walsh-Hadamard spectrum value $|W_a|,a\\ne 0$ is a $1$-dimensional parameter of the Boolean function $f$. For balanced Boolean functions $W_0=0$. So $1$-dimensional non-linearity parameter for a balanced Boolean function is also defined by the classical non-linearity of $f$.\n\nIn order to find $r$-dimensional parameters for $r\\geq 2$ one can brute force all matrices $U$ (up to an equivalence by row operations), calculate the distribution $q$, its entropy and the number of its zero values. The number of inequivalent matrices grows fast with $r$, so the calculation is infeasible even for moderate $n$. \n \n \n \n We consider an example. 
For the Boolean function $$f(x_1,x_2,x_3,x_4,x_5)=x_1x_2x_3 +x_1x_2x_4 +x_1x_2x_5 +x_1x_4 +x_2x_5 +x_3 +x_4 +x_5$$\n$r$-dimensional parameters are shown in Table \\ref{Boolean} for $r=1,2,3,4$, where $u$ is the number of $r\\times n$-matrices $U$ up to a row equivalence, $c$ is the number of classes of equivalent distributions. \nAlso the table contains a representative $q$ of the largest (according to the order above) class of the distributions, a matrix $U_q$ and \nthe number $T_q$ of the distributions in that class, and the parameters $N_f,H_f$. The linear transform $U$ is represented by its coordinate linear functions. \n\\begin{table}[htp]\n\\caption{$r$-dimensional non-linearity parameters of $f$}\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|c|c|c|c|}\\hline\n\n$r$& $u$&$c$& $U_q$ & $q$& $N_f$& $H_f$& $T_q$\\\\ \\hline\n&&& & & && \\\\\n$1$&31&2& $x_4+x_5$ & $\\frac{3}{8},\\frac{5}{8}$& 0&$0.95441$& $16$\\\\\n& & & && &&\\\\ \\hline\n&&& & & && \\\\\n$2$&155&5 &$x_3+x_5$ & $\\frac{5}{16},\\frac{5}{16},\\frac{5}{16},\\frac{1}{16}$& 0&$1.82320$& $8$\\\\\n& & &$x_4+x_5$ & & && \\\\\n \\hline\n $3$&155&7& $x_2$ & & 1&$2.65563$& $12$\\\\\n &&& $x_3+x_5$ & $\\frac{1}{16},\\frac{1}{16},\\frac{3}{16},\\frac{3}{16},0,\\frac{1}{4},\\frac{1}{8},\\frac{1}{8}$ & && \\\\\n&&& $x_4+x_5$ & & && \\\\\n \\hline\n $4$&31&3& $x_1$ & & 6&$ 3.2500$& $1$\\\\\n && & $x_2$ &$\\frac{1}{16},\\frac{1}{16},\\frac{1}{16},\\frac{1}{16},0,\\frac{1}{8},\\frac{1}{8},0$ & && \\\\\n &&& $x_3+x_5$ & & && \\\\\n&&& $x_4+x_5$ & $0,0,\\frac{1}{8},\\frac{1}{8},0,\\frac{1}{8},0,\\frac{1}{8}$ & && \\\\\n&&& & & && \\\\\n \\hline\n\\end{tabular}\n\\end{center}\n\\label{Boolean}\n\\end{table}%\n \n\n\n\n \n \n\\section{Vectorial Boolean Functions}\nA variation of the above definition may be extended to vectorial Boolean functions. Let $y=(y_1,\\ldots,y_m)=f(x_1,\\ldots,x_n)$ be a vectorial Boolean function in $n$ variables $x=(x_1,\\ldots,x_n)$. 
One defines a probability distribution on $(n+m)$-binary vectors $p_{x,y}=1\/2^n$ if $y=f(x)$ and $p_{x,y}=0$ otherwise. Let $U$ be an $r\\times (n+m)$ binary matrix of rank $r$. That matrix defines a probability distribution $q$ on binary $r$-strings as\n$q_z=\\sum_{z=U(x,y)}p_{x,y}$, where the sum is computed over $x,y$ such that $z=U(x,y)$, and $(x,y)$ is a column vector of length $n+m$.\nHow can one efficiently find $U$ such that the distribution $q$ is far from uniform? For $r=1$ the distribution is $q=(q_0,q_1)$ and the function $U(x,y)=ax+by$ is a conventional linear approximation used in Matsui's Linear Cryptanalysis of block ciphers, see \\cite{Matsui}. The best $ax+by$ is found after applying the Walsh-Hadamard transform to the distribution $p$ as\n$$q_0-q_1=\\sum_{x,y}p_{x,y}(-1)^{ax+by}.$$\nThe computation takes $(n+m)2^{n+m}$ arithmetic operations. Let $r=m=1$. We set $b=1$, since otherwise the distribution $q$ is uniform. Then\n$$q_0-q_1=\\sum_{x,y}p_{x,y}(-1)^{ax+y}=\\frac{1}{2^n}\\sum_{x}(-1)^{ax+f(x)}=W_a.$$\nOne takes $a$ with the largest $|W_a|$ and constructs the distribution $q$ with the smallest entropy by using $U(x,y)=ax+y$.\n\n Similarly to Section \\ref{Boolean}, we define $r$-dimensional non-linearity parameters $(N_f,H_f)$ for $f$. In the case $r\\geq 2$, an efficient method to compute those parameters is unknown. However, for small $n$ one can brute force all matrices $U$. 
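For $r=1$ this brute force is immediate. A sketch in Python, using an assumed toy $3$-bit S-box (our own choice, not the paper's example), that scans all masks $(a,b)$ and computes the bias $q_0-q_1=\sum_{x,y}p_{x,y}(-1)^{ax+by}$:

```python
# assumed toy 3-bit S-box (a permutation of {0,...,7}); illustrative only
SBOX = [0, 1, 3, 6, 7, 4, 5, 2]
n = m = 3

def bias(a, b):
    """q_0 - q_1 for the mask U(x, y) = a.x + b.y, where p_{x,y} = 2^{-n}
    on the graph y = f(x) and the dot products are over F_2."""
    total = 0
    for x in range(2 ** n):
        y = SBOX[x]
        parity = bin((a & x) ^ (b & y)).count("1") % 2
        total += (-1) ** parity
    return total / 2 ** n

# scan all masks with b != 0 (b = 0 masks ignore the output entirely)
biases = {(a, b): bias(a, b) for a in range(2 ** n) for b in range(1, 2 ** m)}
best = max(biases, key=lambda ab: abs(biases[ab]))
print(best, biases[best])
```

Since the toy S-box is a permutation, every mask with $a=0$, $b\neq 0$ has zero bias, and the scan singles out the strongest linear approximation.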
\nFor instance, let $n=m=4$ and $f(x_1,x_2,x_3,x_4)=(y_1,y_2,y_3,y_4)$, where \n\\begin{eqnarray}\n(x_1\\alpha^3+x_2\\alpha^2+x_3\\alpha+x_4)^{-1}=(y_1\\alpha^3+y_2\\alpha^2+y_3\\alpha+y_4)\\quad \\hbox{mod} \\quad\\alpha^4+\\alpha+1\n\\end{eqnarray}\nif $(x_1,x_2,x_3,x_4)\\ne (0,0,0,0)$ and $f(0,0,0,0)=(0,0,0,0).$ $r$-dimensional non-linearity parameters for $f$ for $r=1,\\ldots,7$ are in Table \\ref{Vectorial}.\n\n\\begin{table}[htp]\n\\caption{$r$-dimensional non-linearity parameters of the vectorial $f$}\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|c|c|c|c|}\\hline\n\n$r$& $u$&$c$& $U_q$ & $q$& $N_f$& $H_f$& $T_q$\\\\ \\hline\n&&& & & && \\\\\n$1$&255&3& $x_1+x_2+x_3+y_1$ & $\\frac{3}{4},\\frac{1}{4}$& 0&$0.8112$& $30$\\\\\n& & & && &&\\\\ \\hline\n&&& & & && \\\\\n$2$&10795&12 &$x_1+x_3+x_4+y_4,$ & $\\frac{1}{2},\\frac{1}{4},0,\\frac{1}{4}$& 1&$1.5$& $135$\\\\\n& & &$x_2$ & & && \\\\\n \\hline\n $3$&97155&35& $x_1+y_1,$ & & 3&$2$& $15$\\\\\n &&& $x_2+y_1+y_2,$ & $\\frac{1}{2},0,0,0,\\frac{1}{8},\\frac{1}{8},\\frac{1}{8},\\frac{1}{8}$ & && \\\\\n&&& $x_3+y_1+y_3$ & & && \\\\\n \\hline\n $4$&200787&49& $x_1+y_2+y_3,$ & & 10&$ 2.4056$& $3$\\\\\n && & $x_2+y_1+y_2+y_4,$ &$\\frac{3}{8},0,\\frac{1}{8},\\frac{1}{8},0,0,0,\\frac{1}{8}$ & && \\\\\n &&& $x_3+y_3+y_4,$ & & && \\\\\n&&& $x_4+y_2$ & $0,0,0,\\frac{1}{8},0,\\frac{1}{8},0,0$ & && \\\\\n&&& & & && \\\\\n \\hline\n $5$&97155&21& $x_1+y_2+y_3,$ &$\\frac{1}{4},0,0,0,\\frac{1}{16},0,\\frac{1}{16},0$ & 23&$ 3$& $30$\\\\\n && & $x_2+y_2+y_4,$ &$0,\\frac{1}{8},0,0,0,\\frac{1}{16},\\frac{1}{8},\\frac{1}{16}$ & && \\\\\n &&& $x_3+y_3+y_4,$ &$0,0,0,\\frac{1}{8},0,0,0,0$ & && \\\\\n&&& $x_4+y_2$ & $0,0,0,0,0,0,0,\\frac{1}{8}$ & && \\\\\n&&& $y_1$ & & && \\\\\n \\hline\n $6$&10795&9& $x_1+y_3,x_2,x_3,$ &$\\frac{3}{16},0,0,\\frac{1}{16},0,\\ldots,0$ & 52&$ 3.4528$& $90$\\\\\n&&& $x_4+y_3+y_4,y_1,y_2$ & & && \\\\\\hline\n$7$&255&3& $x_1,x_2,x_3,$ &$\\frac{1}{8},0,0,,\\ldots,\\frac{1}{16},0,0,0$ & 114&$ 3.75$& $15$\\\\\n&&& 
$x_4+y_4,y_1,y_2,y_3$ & & && \\\\\n \\hline\n\n\n\\end{tabular}\n\\end{center}\n\\label{Vectorial}\n\\end{table}%\nOne can construct an extension of Linear Cryptanalysis based on $r$-dimensional parameters for $r\\geq 2$. \n\\section{Optimal Boolean Functions}\\label{optimal}\nLet $y=f(x_1,\\ldots,x_n)$ be a vectorial Boolean function with $m$ coordinates in $n$ variables and let $1\\leq r\\leq n$. \nOne can split Boolean functions with the same $n,m$ into classes of equivalence and define an order on the equivalence classes.\n\n Let $f'$ be another Boolean function with the same parameters $n,m$. One says $f,f'$ are equivalent (belong to the same class denoted $\\{f\\}$) if $(N_f,H_f)=(N_{f'},H_{f'})$. One now defines an order on the classes by \n$\\{f\\}<\\{f'\\}$ if $N_{f}<N_{f'}$ or if $N_{f}=N_{f'}$ and $H_{f}>H_{f'}$. Boolean functions from the smallest (according to $<$) class are called optimal. \n\n Then it is easy to show that for $r=1,m=1$ and even $n$ optimal Boolean functions are exactly the Boolean bent functions introduced by Rothaus in \\cite{R76}. For $m\\geq 1$ and even $n\\geq 2m$ optimal Boolean functions are perfect nonlinear according to \\cite{N91} and vice versa. By a computer search we find that the property holds for $r=2, m=1, n=4$. For larger $n,m$ and $r\\geq 2$ this is an open problem. \n\n\\bibliographystyle{alpha}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzimim b/data_all_eng_slimpj/shuffled/split2/finalzzimim new file mode 100644 index 0000000000000000000000000000000000000000..502536ff063214fd6fb8eb8372fc33e8ce01803f --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzimim @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{section:litReview}\n\nInsurance is based on the idea that society asks for protection against unforeseeable events which may cause serious financial damage. Insurance companies offer financial protection against these events. 
The general idea is to build a community where everybody contributes a certain amount and those who are exposed to the damage receive financial reimbursement \\cite{wuthrich2019non} . \n\\smallskip\n\nWhen (non-life) insurers set premium prices they usually start by finding the so-called pure premium, which is the expected value of the total claims that will occur in one time unit. However, when pricing insurance policies, insurers must take into account the risk associated with the policy as well as additional costs (e.g. operational cost, capital cost, etc.). Therefore, a so-called security loading is added to cover the risk and additional costs. The security loading is often calculated using some premium calculation principle,\nand the insurance premium is obtained once the security loading has been determined and added to the pure premium. The main concerns are usually whether the loading is an appropriate measure of the risk and which premium principle to choose. The higher the loading the higher the premium and consequently, the underwriting risk will be lower. However, if the premium price is too high then the exposure will be too low due to competition, and the operational cost of the insurer will engulf the premium income resulting in financial instability. Therefore, insurers usually require sophisticated premium calculations in order to secure stability.\n\\smallskip\n\nCollective risk models are fundamental in actuarial science to model the aggregate claim amount of a line of business in an insurance company. The collective risk model has two random components, the number of claims and the severity of claims, and is usually modelled with a compound process \\cite[Chapter~3]{kaas2008modern}. The classical Lundberg risk process has been studied extensively and there exist many variations, for example including reinsurance or investments \\cite{hipp2000optimal}. 
It assumes that premia come in a continuous stream while claims happen at discrete times according to a Poisson process.\n\\smallskip\n\nAnother common assumption is that the risk can be divided into groups of homogeneous risks such that the pure premia and security loadings can be estimated separately for each risk group. The pure premia of these individual groups are usually modelled with generalized linear models (GLMs). GLMs have been applied extensively in actuarial work, and a good overview is provided in \\cite{ohlsson2010glm, yao2013generalized}. Traditional risk theory has usually assumed independence between risks due to its convenience, but this is generally not very realistic. Claims in an insurer's risk portfolio are correlated as they are subject to the same event causes \\cite{campana2005distortion}. Completely homogeneous risk groups are extremely rare and dependence among risks has become a flourishing topic in actuarial literature \\cite{Copulas_Barning}. Dependence has mostly been measured through linear correlation coefficients \\cite{britt2005linear}. The popularity of linear coefficients is mainly due to the ease with which dependencies can be parameterized, in terms of correlation matrices. Most random variables, however, are not jointly elliptically distributed and it could be very misleading to use linear coefficients \\cite{straumann2001correlation}. This motivated the use of concordance measures. Two random variables are concordant when large values of one go with large values of the other \\cite{nelsen2007introductionCopulas}. The Lundberg risk model is a L\u00e9vy jump process \\cite{ContTankov2004Jumps}, which means that the dependency of two claim processes is best explained through their L\u00e9vy measure \\cite{tankov2016levy}. 
This study will not go into details about L\u00e9vy processes, but both \\cite{ContTankov2004Jumps} and \\cite{papapantoleon2008introduction} provide a very good introduction, and \\cite{Copulas_Barning, BAUERLE2011398, sato1999levy, van2012parameter, avanzi_cassar_wong_2011} are examples of applications of L\u00e9vy copulas to risk processes. For example, van Velesen \\cite{van2012parameter} showed how L\u00e9vy copulas can be used in operational modelling and discussed how dependence is implied by the L\u00e9vy copula. In this work we consider bivariate claim processes, but the presented theory can be straightforwardly extended to multiple claim processes.\n\\smallskip\n\nRuin probability is a classical measure of risk and has been extensively studied \\cite{kaas2008modern, hipp2000optimal,kasumo2018minimizing, trufin2011properties}. Although there is no absolute meaning to the probability of ruin, it still measures the stability of insurance companies. A high ruin probability indicates instability, and risk mitigation techniques should be used, like reinsurance or raising premia \\cite{kaas2008modern}. Most non-life insurance products have a term of one year and therefore it can be argued that the one year ruin probability should be used. The one year ruin probability is the probability that the capital of an insurance company will hit zero within one year. However, the appropriateness of risk measures defined over fixed time horizons can be questioned, since ruin in a given time span can be minimized by increasing the probability of ruin in the aftermath of that period. Lundberg concluded that the actual assumptions behind the classical collective risk model are in fact less restrictive when time-invariant quantities like the infinite time ruin probability are considered \\cite{trufin2011properties}. 
Therefore, we focus on the infinite time ruin probability in this paper.\n\\smallskip\n\nIn this work, the optimal loadings based on two strategies are derived, and compared. One strategy maximizes the profit and the other minimizes the ruin probability. We show that the two loading strategies give different results. Furthermore, we show how the optimal loading with respect to the ruin probability can be found and compare it to the one obtained when the expected profit is maximized. We consider dependencies and illustrate how L\u00e9vy copulas can be used to model claim process dependencies and how dependencies can affect the riskiness of the insurance portfolio. We take this idea further and consider dependency between the acquisition of insurance for different risks by policyholders. This is a realistic assumption as policyholders usually buy multiple insurance products from the same insurance company. We also take into account the fact that the market risk process and the company's risk process are not the same, and how the company's risk process depends on its exposure to the market. This is, to our knowledge, the first analysis of the interplay of the ruin probability, the dependency structure of claim, and the dependency structure of acquisition of insurance. We demonstrate that even if there is a strong dependency between insurance products within the market, small insurance companies have less dependency and therefore less risk than bigger insurance companies, provided the dependency between acquisition of insurance for different risks is not too strong. \n\\medskip\n\nThe paper is organized as follows: Section \\ref{section:min_ruin} contains some background material about ruin probabilities in the Lundberg process and aggregation of compound Poisson processes. Section \\ref{section:demand} deals with the single-risk case. We characterize the optimal loading and compare it with the loading maximizing the expected profit. 
Section \\ref{section:companyRisk} handles the multiple risks case. We show how the dependency structure existing in the market (i.e. the general population) translates into the risk exposure of the company through its market shares on different risks and the likelihood that clients acquire insurance for more than one risk. Section \\ref{seq:numerical} contains a numerical illustration. A numerical scheme to compute the ruin probabilities is given in the appendix. \n\n\n\n\n\n\\section{Preliminaries}\n\\label{section:min_ruin}\n\n\n\\subsection{Claim and Surplus Processes}\n\nThe Lundberg risk model describes the evolution of the capital of an insurance company and assumes that exposure is constant in time, losses follow a compound Poisson process, and premia arrive at a fixed continuous rate:\n\n$$ X_t = u + ct-\\sum_{i=0}^{N_t} Y_i = u + ct - S_t, \\qquad Y_0 \\coloneqq 0 ,$$\n\nwhere $u$ is the initial surplus, $c$ is the risk premium rate, $N_t$ is a time homogeneous Poisson process with intensity parameter $\\lambda$, and $Y_i$ are i.i.d. random variables representing the severity of claim $i$, $i = 0,\\dots, N_t$. Here it is assumed that $Y_i$ are positive. In the following sections, $Y$ denotes an arbitrary random variable with the same distribution as any $Y_i$. The severity distribution is denoted as $F(x)$ and the severity survival distribution as $\\overline{F}(x)$. $S_t$ is a compound Poisson process and thus $X_t$ is a stochastic process (sometimes called the surplus process) representing the insurance wealth at time $t$. $X_t$ increases because of earned premia and decreases when claims occur. When the capital of an insurance company hits zero, the insurance company is said to be ruined. 
Formally, the ruin probability is defined as follows.\n\n\\begin{definition}[Probability of Ruin]\n\\label{SimpleRuin}\nLet $(\\Omega, \\mathcal{F}, \\{\\mathcal{F}_t\\}_{t\\geq0}, \\mathbbm{P})$ be a filtered probability space and $X = (X_t)_{t \\in [0, \\infty[}$ a surplus process which is adapted and Markov with respect to the filtration. The state space is $(\\mathbbm{R}, \\mathcal{B}(\\mathbbm{R}))$. If X is time homogeneous, the infinite time ruin probability is the function $V: \\mathbbm{R} \\mapsto [0,1]$ such that\n\n$$V(x) = \\Prob \\big( \\exists s\\in[0,+\\infty[:X_s \\leq 0 \\given[\\big] X_0 = x \\big), \\quad x \\in \\mathbbm{R} .$$\n\n\\end{definition}\n\nSometimes it is useful to use the survival (non-ruin) probability, defined as $\\overline{V}(x) = 1-V(x)$. The ruin probability can be calculated using the following integro-differential equation \\cite{grandell2012aspects}.\n\n\\begin{proposition}\n\\label{prop:Ruin_lundberg_inf}\nAssume that $X_t$ is defined as above and the premium rate satisfies $c > \\lambda \\E[Y]$. If $V \\in C^1(]0, \\infty[),$ then the probability of ruin with infinite time horizon satisfies the following equation:\n\\begin{align}\n\\label{eq:Ruin_lundberg_inf}\n 0 = c \\deriv{x}V(x) + \\lambda \\bigg( \\int_{0}^{x} V(x-y)dF(y) - V(x) + 1-F(x) \\bigg), \\quad x>0,\n\\end{align}\nwith the following boundary condition:\n\\[\n\\begin{cases}\n V(x) = 1 & x\\leq 0,\\\\\n \\lim_{x \\to 0^+}V(x) = \\frac{\\lambda}{c}\\E[Y] . \\\\\n\\end{cases}\n\\]\nFurthermore, the probability of non-ruin satisfies the following equation:\n\\begin{equation}\n\\label{eq:Ruin_lundberg_inf_survival}\n \\overline{V}(x)-\\overline{V}(\\epsilon) = \\frac{\\lambda}{c}\\int_\\epsilon^x\\overline{V}(x-y)\\overline{F}(y)dy\n\\end{equation}\nfor $0 < \\epsilon \\leq x < + \\infty$ with the following boundary condition:\n\\[\n\\begin{cases}\n \\overline{V}(x) = 0 & x\\leq 0,\\\\\n \\lim_{x \\to 0^+} \\overline{V}(x) =1- \\frac{\\lambda}{c}\\E[Y] . 
\\\\\n\\end{cases}\n\\]\n\\end{proposition}\n\\medskip\n\nA numerical scheme solving equation \\eqref{eq:Ruin_lundberg_inf} can be found in Appendix \\ref{appendix:NA_ez_Poi_Exp_inf}. \n\n\n\\subsection{Accounting for Claim Dependencies}\n\nConsider the surplus process $\\bm{X} = (X_t^{(1)},...,X_t^{(n)}) $ where\n\\begin{equation}\n\\label{multiclaims}\n \\begin{split}\n X_t^{(1)} &= u^{(1)}+ c^{(1)}t - \\sum_{i = 0}^{N_t^{(1)}}Y_i^{(1)} \\\\\n \\vdots &\\\\\n X_t^{(n)} &= u^{(n)}+ c^{(n)}t - \\sum_{i = 0}^{N_t^{(n)}}Y_i^{(n)} \\\\\n \\end{split}\n\\end{equation}\n\nIf these processes are independent, it is relatively easy to combine them into a single process using the aggregation property of compound Poisson processes as described in W\\\"{u}thrich \\cite{wuthrich2019non}. The aggregation property allows the combination of multiple surplus processes into a single risk process as follows:\n\\begin{equation*}\n X_t = \\sum_{j= 1}^n u^{(j)} +\\sum_{j = 1}^n c^{(j)}t -\\sum_{i = 0}^{N_t}Y_i,\n\\end{equation*}\nwhere $N_t$ is a Poisson process with intensity $\\lambda = \\lambda_{1} + ...+ \\lambda_{n}$ and $Y_i$ are i.i.d. random variables, which follow the severity distribution $F(x) = \\sum_{j = 1}^n \\frac{\\lambda_j}{\\lambda} F_j(x)$. 
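In code, the aggregation rule amounts to summing intensities and mixing severity distributions with weights $\frac{\lambda_j}{\lambda}$; a sketch (Python, with illustrative intensities and exponential severities of our own choosing):

```python
import random

# Two independent compound Poisson components (illustrative assumptions):
# intensities lam_j, exponential severities with means mu_j.
lams = [2.0, 3.0]
mus = [1.0, 2.0]

lam = sum(lams)                          # combined intensity: 5.0
weights = [l / lam for l in lams]        # mixture weights lam_j / lam: 0.4, 0.6
mean_severity = sum(l * m for l, m in zip(lams, mus)) / lam  # E[Y] = 1.6

def sample_severity(rng=random):
    """Draw one claim of the combined process: pick a component with
    probability lam_j / lam, then draw from its severity distribution."""
    j = rng.choices(range(len(lams)), weights=weights)[0]
    return rng.expovariate(1.0 / mus[j])

print(lam, weights, mean_severity)
```

The same mixture sampler can feed a Monte Carlo simulation of the combined surplus process.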
This aggregation property allows us to use the integro-differential equation \\eqref{eq:Ruin_lundberg_inf} to calculate the ruin probability of multiple surplus processes.\n\\smallskip\n\nIf the risks are not independent, then we can use the fact that compound Poisson processes are characterized by their L\u00e9vy measure to decompose the claim process into independent processes to which the aggregation property can be applied.\nIn particular, for $n = 2$ risks, we obtain the decomposition:\n\\begin{equation*}\n \\begin{split}\n X_t = X_t^{(1)} + X_t^{(2)} = u + ct - S_t^{1\\perp} - S_t^{2\\perp} -S_t^\\parallel,\n \\end{split}\n\\end{equation*}\nwhere $S^{1\\perp}$ and $S^{2\\perp}$ are compound Poisson processes accounting for events concerning only risk 1 and risk 2, respectively. $S^{\\parallel}$ is a compound Poisson process accounting for events concerning both risks simultaneously. Furthermore, $S^{1\\perp}$, $S^{2\\perp}$ and $S^{\\parallel}$ are mutually independent. \\smallskip\n\nIn this section, we briefly explain how this can be achieved. Further details can be found in \\cite{tankov2016levy}. 
We will use the following definitions:\n\n\\begin{definition}\nThe tail integral of a L\u00e9vy measure $\\nu$ on $[0, \\infty]^2$ is given by a function $U:[0, \\infty]^2 \\mapsto [0, \\infty]$\n\\begin{equation}\n\\label{eq:tail_int}\n \\begin{split}\n & U(x_1,x_2) = 0 \\quad \\text{if} \\quad x_1 = \\infty \\quad \\text{or} \\quad x_2 = \\infty ,\\\\\n & U(x_1,x_2) = \\nu\\big([x_1, \\infty[ \\times [x_2, \\infty[ \\big) \\quad \\text{for} \\quad (x_1,x_2) \\in ]0, \\infty[^2 , \\\\\n & U(0,0) = \\infty.\n \\end{split}\n \\end{equation}\n\\end{definition}\n\n\\begin{definition}[L\u00e9vy Copula for Processes with Positive Jumps]\nA two-dimensional L\u00e9vy copula for L\u00e9vy processes with positive jumps, or for short, a positive L\u00e9vy copula, is a 2-increasing grounded function $\\mathcal{C}: [0,\\infty]^2 \\to [0, \\infty]$ with uniform margins, that is, $\\mathcal{C}(x,\\infty) = \\mathcal{C}(\\infty,x) = x$.\n\\end{definition}\n\nSimilarly to Sklar's theorem for ordinary copulas \\cite{nelsen2007introductionCopulas}, it has been shown that the dependency structure of $(X_t^{(1)},X_t^{(2)})$ can be characterized by a L\u00e9vy copula $\\mathcal{C}$ such that $U(x_1,x_2) = \\mathcal{C}(U_1(x_1), U_2(x_2))$, where $U_1$ and $U_2$ are the marginal tail integrals for $X_t^{(1)}$ and $X_t^{(2)}$. If $U_1$ and $U_2$ are absolutely continuous, this L\u00e9vy copula is unique, otherwise it is unique on $Range(U_1)\\times Range(U_2)$, the product of ranges of one-dimensional tail integrals \\cite[Theorem~5.4]{ContTankov2004Jumps}.\n\\smallskip\n\nConsider a two-dimensional claim process:\n\\begin{equation}\n \\label{eq:2DimClaimProcess}\n S_t = (S_t^{(1)}, S_t^{(2)}) = \\sum_{i = 0}^{N_t} (Y_i^{(1)},Y_i^{(2)}),\n\\end{equation}\nwhere $N_t$ is a Poisson process with intensity $\\lambda$ and $Y_i = (Y_i^{(1)},Y_i^{(2)})$, $i \\in \\mathbbm{N}$ are independent random variables with common joint distribution $F_Y$. 
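A standard parametric family satisfying this definition is the Clayton L\u00e9vy copula $\mathcal{C}_\theta(u,v) = (u^{-\theta}+v^{-\theta})^{-\frac{1}{\theta}}$ \cite{ContTankov2004Jumps}. A sketch (Python; the choice of family and all numbers are illustrative):

```python
def clayton_levy(u, v, theta):
    """Clayton Levy copula C(u, v) = (u^-theta + v^-theta)^(-1/theta),
    a standard parametric family of positive Levy copulas."""
    if u == 0.0 or v == 0.0:
        return 0.0
    return (u ** -theta + v ** -theta) ** (-1.0 / theta)

# uniform margins: C(x, infinity) = x (approximated with a huge second argument)
assert abs(clayton_levy(0.7, 1e12, 1.0) - 0.7) < 1e-9

# applied to tail integrals of compound Poisson margins with finite
# intensities lam1, lam2, the value C(lam1, lam2) is the intensity of
# jumps hitting both components at once
lam1, lam2, theta = 2.0, 2.0, 1.0
lam_common = clayton_levy(lam1, lam2, theta)
print(lam_common)  # (1/2 + 1/2)^(-1) = 1.0
```

As $\theta \to \infty$ the common-jump intensity approaches $\min(\lambda_1, \lambda_2)$ (complete dependence), while $\theta \to 0$ gives independence.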
The components of $S$, $S^{(1)}$ and $S^{(2)}$, are one-dimensional compound Poisson processes with intensities $\\lambda_1$ and $\\lambda_2$ and severity distributions $F_{Y^{(1)}}$ and $F_{Y^{(2)}}$, respectively. \nWe wish to obtain a decomposition:\n\\begin{equation}\n\\label{eq:bracketSum}\n (S_t^{(1)}, S_t^{(2)}) = \\sum_{i = 0}^{N_t^{1\\perp}} (Y_i^{(1\\perp)}, 0) + \\sum_{i = 0}^{N_t^{2\\perp}} (0, Y_i^{(2\\perp)}) + \\sum_{i = 0}^{N_t^{\\parallel}}(Y_i^{(1\\parallel)}, Y_i^{(2\\parallel)}),\n\\end{equation}\nwhere $\\sum_{i = 0}^{N_t^{1\\perp}} Y_i^{(1\\perp)}$,$\\sum_{i = 0}^{N_t^{2\\perp}} Y_i^{(2\\perp)}$ and $\\sum_{i = 0}^{N_t^{\\parallel}}(Y_i^{(1\\parallel)}, Y_i^{(2\\parallel)})$ are independent compound Poisson processes with intensities $\\lambda_1^\\perp$, $\\lambda_2^\\perp$, $\\lambda^\\parallel$ and severity distributions $F_{Y^{1\\perp}}$,$F_{Y^{2\\perp}}$,$F_{Y^{\\parallel}}$, respectively. In the above setting, we consider \\begin{equation}\n\\label{eq:AllDistZero}\n F_Y(0,0) = F_{Y^{\\parallel}}(0,0) = 0, \\quad F_{Y^{(1)}}(0) = F_{Y^{1\\perp}}(0) = F_{Y^{(2)}}(0) = F_{Y^{2\\perp}}(0) \n\\end{equation}\n\nA compound Poisson process, $S$, is a L\u00e9vy process with L\u00e9vy measure $\\nu(dx) = \\lambda dF(x)$, with tail integral\n\\begin{equation*}\n U(x_1,x_2) = \n \\begin{cases}\n \\lambda \\Prob \\big(Y^{(1)} \\geq x_1, Y^{(2)} \\geq x_2 \\big) & \\textrm{if } x_1 >0 \\textrm{ or } x_2 >0 \\\\\n +\\infty & \\textrm{if } x_1 = x_2 = 0 .\n \\end{cases}\n\\end{equation*}\nThe components $S^{(1)}$ and $S^{(2)}$ are independent if and only if $U(x_1, x_2) = 0$ for every $(x_1, x_2) \\in ]0, + \\infty[$, i.e., if and only if $\\lim_{x_1 \\to 0^{+}, x_2 \\to 0^{+}} U(x_1, x_2) = 0 $.\n\\smallskip\n\nThe L\u00e9vy measure of the processes $S^{(i)}$, $i = 1,2$, have tail integrals\n\\begin{equation*}\n \\begin{split}\n U_1(x_1) &= \\lambda_1 \\Prob \\big( Y^{(1)} \\geq x_1 \\big) = U(x_1,0) \\\\\n U_2(x_2) &= \\lambda_2 \\Prob \\big( Y^{(2)} 
\\geq x_2 \\big) = U(0,x_2) .\n \\end{split}\n\\end{equation*}\nTaking equation \\eqref{eq:AllDistZero} into account, one obtains \n\\begin{equation*}\n \\lambda_i = \\lim_{x_i \\to 0^{+}} U_i(x_i), \\quad i = 1,2 ,\n\\end{equation*}\n\\begin{equation*}\n \\lambda^\\parallel =\\lim_{x_1, x_2 \\to 0^{+}} U(x_1,x_2) ,\n\\end{equation*}\n\\begin{equation*}\n \\lambda =\\lim_{x_1, x_2 \\to 0^{+}} \\big(U_1(x_1) + U_2(x_2) - U(x_1,x_2) \\big) \n = \\lambda_1 + \\lambda_2 - \\lambda^\\parallel ,\n\\end{equation*}\n\\begin{equation*}\n \\lambda_i^\\perp = \\lambda_i - \\lambda^\\parallel, \\quad i = 1,2 .\n\\end{equation*}\nThe severity distributions $F_{Y^{1\\perp}}$, $F_{Y^{2\\perp}}$, and $F_{Y^{\\parallel}}$ can be recovered from the tail integrals:\n\\begin{equation*}\n \\Prob \\big(Y^{1\\perp} \\geq x_1 \\big) = \\frac{1}{\\lambda_1^\\perp} \\lim_{x_2 \\to 0^{+}} \\Big( U_1(x_1) - U(x_1,x_2) \\Big)\n\\end{equation*}\n\\begin{equation*}\n \\Prob \\big(Y^{2\\perp} \\geq x_2 \\big) = \\frac{1}{\\lambda_2^\\perp} \\lim_{x_1 \\to 0^{+}} \\Big( U_2(x_2) - U(x_1,x_2) \\Big)\n\\end{equation*}\n\\begin{equation*}\n \\Prob \\big(Y^{1\\parallel} \\geq x_1, Y^{2\\parallel} \\geq x_2 \\big) = \\frac{1}{\\lambda^\\parallel} U(x_1,x_2) .\n\\end{equation*}\n\nIf the dependency between $S^{(1)}$ and $S^{(2)}$ is characterized by a L\u00e9vy copula, $\\mathcal{C}$, i.e. 
$U(x_1, x_2) = \\mathcal{C}(U_1(x_1), U_2(x_2))$ for $(x_1, x_2) \\in [0, +\\infty[^2$, then the relations above can be written using the L\u00e9vy copula and one-dimensional tail integrals:\n\\begin{equation*}\n\\lambda^\\parallel = \\lim_{u_1 \\to \\lambda_1^{-}, u_2 \\to \\lambda_2^{-}} \\mathcal{C}(u_1, u_2)\n\\end{equation*}\n\\begin{equation*}\n \\Prob \\big(Y^{1\\perp} \\geq x_1 \\big) = \\frac{1}{\\lambda_1^\\perp} \\lim_{u_2 \\to \\lambda_2^{-}} \\Big( U_1(x_1) - \\mathcal{C}(U_1(x_1),u_2) \\Big)\n\\end{equation*}\n\\begin{equation*}\n \\Prob \\big(Y^{2\\perp} \\geq x_2 \\big) = \\frac{1}{\\lambda_2^\\perp} \\lim_{u_1 \\to \\lambda_1^{-}} \\Big( U_2(x_2) - \\mathcal{C}(u_1, U_2(x_2)) \\Big)\n\\end{equation*}\n\\begin{equation*}\n \\Prob \\big(Y^{1\\parallel} \\geq x_1, Y^{2\\parallel} \\geq x_2 \\big) = \\frac{1}{\\lambda^\\parallel} \\mathcal{C}(U_1(x_1), U_2(x_2)) .\n\\end{equation*}\n\nUsing the above decomposition, the surplus process\ncan be represented as\n\\begin{equation*}\n X_t = u + ct - \\sum_{i = 0}^{N_t^{1\\perp}} Y_i^{1\\perp} - \\sum_{i = 0}^{N_t^{2\\perp}} Y_i^{2\\perp} - \\sum_{i = 0}^{N_t^{\\parallel}} ( Y_i^{1\\parallel} + Y_i ^{2 \\parallel} ) =\n u +ct - \\sum_{i = 0}^{N_t^{*}} Y_i^{*} ,\n\\end{equation*}\nwhere $u = u_1 + u_2$, $c = c_1 + c_2$, $N^{*}$ is a Poisson process with intensity $\\lambda = \\lambda_1^\\perp + \\lambda_2^\\perp + \\lambda^\\parallel$ and $Y_i^{*}$ are i.i.d. 
random variables with distribution:\n\\begin{equation*}\n F^{*} = \\frac{\\lambda_1^\\perp}{\\lambda} F_{Y^{1\\perp}} + \\frac{\\lambda_2^\\perp}{\\lambda} F_{Y^{2\\perp}} + \\frac{\\lambda^\\parallel}{\\lambda} F_{Y^{1\\parallel} + Y^{2\\parallel}} ,\n\\end{equation*}\nwhere\n\\begin{equation*}\n F_{Y^{1\\parallel} + Y^{2\\parallel}}(x) = \\int_{x_1 + x_2 \\leq x} dF_{Y^\\parallel}(x_1,x_2).\n\\end{equation*}\n\n\n\\section{The Optimal Loading for a Single Risk}\n\\label{section:demand}\n\nAn insurer can control the volume of its business through the premium loading $\\theta$. A reasonable assumption is that the higher the loading, the smaller the number of contracts in its portfolio, which means that the claim intensity (or business volume) will decrease. Therefore, both the claim intensity $\\E^\\theta[N_1]$ and the premium rate $c(\\theta)$ will depend on $\\theta$. It is reasonable to assume that $\\E^\\theta[N_1] \\to 0$ as $\\theta \\to \\infty$, since abnormally high premium rates will not attract customers \\cite{hipp2004insurancecontrol}. To capture these concepts let $\\E^\\theta[N_1] = \\lambda p(\\theta)$. Here $\\lambda$ is the average number of claims per unit of time for the whole market, and $p(\\theta)$ is the probability that a potential claim is filed as an actual claim to the particular insurer under consideration. In other words, $p(\\theta)$ reflects the demand, or market share, sensitivity to the loading parameter $\\theta$: it can be interpreted as the probability that a customer buys the insurance product. For example, we may assume that the demand for insurance contracts is described by a logit GLM, as in Hardin and Tabari \\cite{hardin2017renewal}. Thus, $p(\\theta)$ is given by:\n\\begin{equation}\n\\label{eq:logit}\n p(\\theta) = \\frac{1}{1+e^{\\beta_0 + \\beta_1 \\theta}},\n\\end{equation}\nwhere $\\beta_0$ and $\\beta_1$ are determined from the GLM and $\\theta$ is the loading parameter. 
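As a quick sanity check of \\eqref{eq:logit}, the Python sketch below evaluates the demand curve and the resulting net premium income $c(\\theta) = (1+\\theta)\\lambda p(\\theta)\\E[Y] - r$; the parameter values are illustrative (in the spirit of Section \\ref{seq:numerical}) and are not fitted to data.

```python
import math

def p(theta, beta0=-0.5, beta1=4.5):
    """Logit demand: market share of the insurer at loading theta."""
    return 1.0 / (1.0 + math.exp(beta0 + beta1 * theta))

def premium_income(theta, lam=800.0, mean_claim=1000.0, r=64000.0,
                   beta0=-0.5, beta1=4.5):
    """Net premium income c(theta) = (1 + theta) * lam * p(theta) * E[Y] - r."""
    return (1.0 + theta) * lam * p(theta, beta0, beta1) * mean_claim - r
```

With $\\beta_1 > 0$ the demand is strictly decreasing in $\\theta$, and the income turns negative once the loading drives too many customers away.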
Since $\\beta_1$ is a positive number, $p \\to 0$ when $\\theta \\to \\infty$ and $p \\to 1$ when $\\theta \\to -\\infty$. Assuming that the company has some fixed costs, independent of the risk exposure, denoted by $r > 0$, the expression for the net premium income becomes:\n\\begin{equation*}\n c(\\theta) = (1 + \\theta)\\E^\\theta[N_1]\\E[Y] -r.\n\\end{equation*}\n\nThe following proposition characterizes the behaviour of the solution of equation \\eqref{eq:Ruin_lundberg_inf} with respect to the loading $\\theta$.\n\n\\begin{proposition}\n\\label{prop:alpha_proof}\nIf $V(x,\\theta)$ satisfies equation \\eqref{eq:Ruin_lundberg_inf} then $V(x,\\theta)$ is strictly increasing with respect to the parameter $\\alpha= \\frac{ \\E^\\theta [N_1]}{c(\\theta)}$.\n\\end{proposition}\n\n\\begin{proof}\nIntegrating equation \\eqref{eq:Ruin_lundberg_inf} over the interval $]0, x]$ yields:\n\\begin{equation}\n\\label{eq:omega_prove_1}\n \\begin{split}\n V(x, \\theta) = \\frac{ \\E^\\theta [N_1]}{c(\\theta)} \\Bigg( \\E[Y] + \\int_0^x \\Big( V(z, \\theta) - \\int_0^z V(z-y, \\theta)dF(y) + F(z) -1 \\Big)dz \\Bigg) .\n \\end{split}\n\\end{equation}\n\nTo prove the proposition, we will study equations of the general form:\n\\begin{equation}\n\\label{eq:alpha_prove_2}\n \\begin{split}\n u(x) = \\alpha \\Bigg( g(x) + \\int_0^x \\Big( u(z) - \\int_0^z u(z-y)dF(y) \\Big)dz \\Bigg) .\n \\end{split}\n\\end{equation}\n\nWe introduce the operator $\\Psi$, acting on measurable locally bounded functions $h:[0, +\\infty[ \\mapsto \\mathbbm{R}$, as:\n\\begin{equation}\n\\label{eq:IntegralOperator}\n(\\Psi h)(x) = \\int_0^x \\Big( h(z) - \\int_0^z h(z-y)dF(y) \\Big)dz, \\quad x \\geq 0 .\n\\end{equation}\nNotice that the transformation $h \\mapsto \\Psi h$ is linear and for every $h$, $\\Psi h : [0, + \\infty[ \\mapsto \\mathbbm{R}$ is continuous, hence measurable and locally bounded. 
Thus, powers of the operator $\\Psi$ are defined in the usual way:\n\\begin{equation*}\n\\Psi^0h = h, \\qquad \\Psi^nh = \\Psi (\\Psi^{n-1}h), \\quad n \\in \\mathbbm{N}.\n\\end{equation*}\n\nLet $\\norm{h}_{[0,x]} = \\sup_{z \\in [0,x]} |h(z)| $. Then:\n\\begin{equation*}\n \\begin{split}\n | (\\Psi h)(x) | &\\leq \\int_0^x \\Big( |h(z)| +\\int_0^z |h(z-y)|dF(y) \\Big)dz \\leq 2x \\norm{h}_{[0,x]}.\n \\end{split} \n\\end{equation*}\nIf the inequality \n\\begin{equation}\n\\label{Eq bound Psi n}\n \\norm{(\\Psi^n h)}_{[0,x]} \\leq \\frac{2^n x^n}{n!} \\norm{h}_{[0,x]}\n\\end{equation}\nholds, for some $n \\in \\mathbbm{N}$, then\n\\begin{equation*}\n \\begin{split}\n | (\\Psi^{n+1} h)(x) | &\\leq \\int_0^x \\Big( | (\\Psi^{n}h)(z)| +\\int_0^z |(\\Psi^{n}h)(z-y)|dF(y) \\Big)dz \\\\\n &\\leq \\int_0^x 2 \\frac{2^n z^n}{n!} \\norm{h}_{[0,x]} dz = \\frac{2^{n+1} x^{n+1}}{(n+1)!} \\norm{h}_{[0,x]}.\n \\end{split} \n\\end{equation*}\nThus, by induction, \\eqref{Eq bound Psi n} holds for every $n \\in \\mathbbm{N}$.\nTherefore, for every fixed $x \\in [0, \\infty[$, there is some $n \\in \\mathbbm{N}$ such that $\\alpha^n \\Psi^n$ is a contraction in the space of measurable and bounded functions $h:[0,x] \\mapsto \\mathbbm{R}$. It follows from the contraction principle that equation \\eqref{eq:alpha_prove_2} has a unique solution. Further, $\\lim_{n \\to \\infty} (\\alpha^n \\Psi^n) h = 0$, uniformly in $[0,x]$ for any given $h$ and any fixed $x \\in [0, +\\infty[$. \n\nLet $u_{\\alpha, g}$ be the solution of equation \\eqref{eq:alpha_prove_2} for given $g$ and $\\alpha$. 
Then,\n\\begin{equation*}\n \\begin{split}\n u_{\\alpha,g} &= \\alpha (g + \\Psi u_{\\alpha,g}) = \\alpha g + \\alpha \\Psi(\\alpha(g + \\Psi u_{\\alpha,g}))\n = \\alpha g + \\alpha^2 \\Psi g + \\alpha^2 \\Psi u_{\\alpha,g} \\\\ \n &= \\alpha g + \\alpha^2 \\Psi g + \\dots +\\alpha^{n+1} \\Psi^n g +\\alpha^{n+2} \\Psi^{n +1} u_{\\alpha,g} .\n \\end{split}\n\\end{equation*}\nSince $\\lim_{n \\to \\infty} \\alpha^n \\Psi^n u_{\\alpha,g}(x) = 0$, this shows that $u_{\\alpha,g}$ admits the series representation:\n\\begin{equation*}\n \\begin{split}\n u_{\\alpha,g} &= \\sum_{n=0}^\\infty\\alpha^{n+1} \\Psi^n g ,\n \\end{split}\n\\end{equation*}\nwhich converges uniformly with respect to $\\alpha$ on compact intervals. \nThus, we can differentiate term by term and obtain\n\\begin{equation*}\n \\begin{split}\n \\frac{d}{d\\alpha} u_{\\alpha,g}(x) &= \\sum_{n=0}^\\infty (n+1)\\alpha^n(\\Psi^n g)(x)\\\\\n &= \\sum_{n = 0}^\\infty \\alpha^n (\\Psi^n g)(x) + \\sum_{n = 1}^\\infty n \\alpha^n (\\Psi^n g)(x) \\\\\n &= \\frac{1}{\\alpha}u_{\\alpha,g} + \\sum_{n=1}^\\infty \\alpha^n (\\Psi^n g)(x) + \\sum_{n=2}^\\infty (n-1) \\alpha^n (\\Psi^n g)(x)\\\\\n &= \\frac{1}{\\alpha}u_{\\alpha,g} + \\sum_{n=0}^\\infty \\alpha^{n+1} (\\Psi^{n+1} g)(x) + \\sum_{n=1}^\\infty n \\alpha^{n+1} (\\Psi^{n+1} g)(x) \\\\\n &= \\frac{1}{\\alpha} u_{\\alpha,g} + (\\Psi u_{\\alpha,g})(x) + \\sum_{n=1}^\\infty \\alpha^{n+1} (\\Psi^{n+1} g)(x) + \\sum_{n=2}^\\infty (n-1) \\alpha^{n+1} (\\Psi^{n+1} g)(x) \\\\\n &= \\frac{1}{\\alpha} u_{\\alpha,g} + (\\Psi u_{\\alpha,g})(x) + \\sum_{n=0}^\\infty \\alpha^{n+2} (\\Psi^{n+2} g)(x) + \\sum_{n=1}^\\infty n \\alpha^{n+2} (\\Psi^{n+2} g)(x) \\\\\n &= \\frac{1}{\\alpha} u_{\\alpha,g} + (\\Psi u_{\\alpha,g})(x) + ( \\alpha \\Psi^2 u_{\\alpha,g})(x) + \\dots + ( \\alpha^{k-1} \\Psi^k u_{\\alpha,g})(x) + \\sum_{n = 1}^\\infty n \\alpha^{n+k} (\\Psi^{n+k} g)(x) \\\\\n &= \\sum_{n=0}^\\infty \\alpha^{n-1} (\\Psi^n u_{\\alpha,g})(x) = \\frac{1}{\\alpha^2} 
u_{\\alpha,u_{\\alpha,g}}.\n \\end{split}\n\\end{equation*}\n\nFor any locally absolutely continuous function $h:[0,x] \\mapsto \\mathbbm{R}$:\n\\begin{equation*}\n \\begin{split}\n (\\Psi h)(x) &= \\int_0^x \\Big( h(z) - \\int_0^z h(z-y)dF(y) \\Big)dz \\\\\n &= \\int_0^x \\Big(h(z) - [h(z-y)F(y)]_{y = 0}^{y = z} - \\int_0^z h'(z-y)F(y)dy \\Big)dz \\\\\n &= \\int_0^x (h(z) - h(0)F(z))dz - \\int_0^x \\int_y^x h'(z-y)F(y)dzdy \\\\\n &= \\int_0^x (h(z) - h(0)F(z))dz - \\int_0^x \\big(h(x-y) - h(0) \\big)F(y)dy \\\\\n &= \\int_0^x h(z)dz - \\int_0^x h(x-y)F(y)dy = \\int_0^x h(z) (1-F(x-z))dz .\n \\end{split}\n\\end{equation*}\nThus, $h>0$ implies $(\\Psi h)>0$, which implies $(\\Psi^n h)>0$, $\\forall n \\in \\mathbbm{N}$, and therefore $u_{\\alpha,h}>0$ for any $\\alpha >0$. This argument shows that $\\frac{d}{d\\alpha} V =\\frac{1}{\\alpha^2} u_{\\alpha,V} >0$ as $V >0$. Therefore $V$ is strictly increasing with $\\alpha$.\n\\end{proof}\n\nAccording to Proposition \\ref{prop:alpha_proof}, in order to find $\\theta$ minimizing the probability of ruin, it is sufficient to find $\\theta$ minimizing $\\frac{\\E^\\theta[N_1]}{c(\\theta)}$. 
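This criterion is straightforward to implement numerically. The Python sketch below minimizes $\\alpha(\\theta) = \\E^\\theta[N_1]/c(\\theta)$ over a grid under the logit demand \\eqref{eq:logit}, and compares the result with the closed-form minimizer \\eqref{eq:alpha_direct}; all parameter values are illustrative (assumed, not taken from data).

```python
import math

# Illustrative parameters (assumed): logit demand for a single risk.
BETA0, BETA1 = -0.5, 4.5
LAM, MEAN_Y, R = 800.0, 1000.0, 64000.0

def p(theta):
    """Logit demand p(theta) = 1 / (1 + exp(beta0 + beta1 * theta))."""
    return 1.0 / (1.0 + math.exp(BETA0 + BETA1 * theta))

def alpha(theta):
    """alpha(theta) = E^theta[N_1] / c(theta); meaningful where c(theta) > 0."""
    c = (1.0 + theta) * LAM * p(theta) * MEAN_Y - R
    return LAM * p(theta) / c

# Grid search restricted to a range where c(theta) stays positive.
grid = [i * 1e-4 for i in range(7001)]          # theta in [0, 0.7]
theta_num = min(grid, key=alpha)

# Closed-form minimizer obtained by direct differentiation of alpha.
theta_closed = (math.log(LAM * MEAN_Y / (R * BETA1)) - BETA0) / BETA1
```

The grid argmin and the closed form agree to the grid resolution, which is a useful cross-check when the demand model is replaced by one without a closed-form minimizer.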
For example, using the logit demand model \\eqref{eq:logit}, the optimal loading is found by direct differentiation of $\\alpha$ and is given by:\n\\begin{equation}\n\\label{eq:alpha_direct}\n \\begin{split}\n\\theta_{ruin} = \\frac{1}{\\beta_1} \\Big( \\ln \\big( \\frac{\\lambda \\E[Y]}{r \\beta_1} \\big) - \\beta_0 \\Big).\n \\end{split}\n\\end{equation}\nHowever, the loading that maximizes the expected profit is:\n\\begin{equation*}\n \\begin{split}\n \\theta_{profit} = \\argmax_\\theta \n \\E^\\theta[X_1 \\given[] X_0 = x] = \\argmax_\\theta \\{ \\theta\\E^\\theta[N_1]\\E[Y] -r\\},\n \\end{split}\n\\end{equation*} \nwhich is, in the case of logit demand \\eqref{eq:logit}, the unique solution of:\n\\begin{equation}\n\\label{eq:profit_equation}\n \\begin{split}\n 1 + e^{\\beta_0 + \\beta_1 \\theta} - \\beta_1 \\theta e^{\\beta_0 + \\beta_1 \\theta} = 0.\n \\end{split}\n\\end{equation}\nThus, in general, $\\theta_{ruin}$ does not coincide with $\\theta_{profit}$.\n\n\n\\section{The Multiple Risk Case}\n\\label{section:companyRisk}\n\nIn this section, we explore how dependencies between risks available in an insurance market translate into risk exposure for a company through its market shares on the different risks. It turns out that this mechanism is non-trivial when the risks are dependent. For the sake of simplicity, we assume that the company offers insurance for two risks in a market constituted by identical individuals, all of them exposed to both risks. Using the notation of equations \\eqref{eq:2DimClaimProcess} and \\eqref{eq:bracketSum} for the market claim process, $S_t = (S_t^{(1)}, S_t^{(2)})$ is the vector of the total (accumulated) amounts of claims of each risk occurred in the market up to time $t$. 
The marginal distributions of $S^{(1)}$ and $S^{(2)}$ are characterized by the claim intensities $\\lambda_1$ and $\\lambda_2$ and the severity distributions $F_{Y^{(1)}}$, $F_{Y^{(2)}}$, and their dependency structure is characterized by a parameter $\\lambda^\\parallel \\in [0, \\min(\\lambda_1, \\lambda_2)]$ and a joint distribution $F_{(Y^{1\\parallel},Y^{2\\parallel})}$, as explained in Section \\ref{section:min_ruin}. \n\n\n\\subsection{Risk Exposure as a Function of Market Shares}\n\\label{section:sub4.1}\n\nTo extend the demand model outlined in Section \\ref{section:demand} to a market with multiple risks, where the acquisition of insurance for different risks may not be independent, we propose the following interpretation for the function $p$.\n\\smallskip\n\nLet $(\\theta_1, \\theta_2)$ be the loadings charged by the company for each risk. We assume that every individual in the market (a potential client) is provided with a vector of bid prices ($b_1$, $b_2$). The client acquires the insurance for risk $i$ if $b_i \\geq \\theta_i$ (for convenience, we consider prices net of the pure premium). The distribution of the price vectors in the market is modelled by a random vector $B = (B_1, B_2)$. Thus, $p_i(\\theta) = p_i(\\theta_i) = \\Prob \\big( B_i \\geq \\theta_i \\big)$ is the company's market share for the insurance of risk $i$ at equilibrium, given the loadings $\\theta = ( \\theta_1, \\theta_2)$. Let $p^{(1,0)}(\\theta)$ be the proportion of individuals in the market holding a policy for risk 1 and no policy for risk 2. Similarly, $p^{(0,1)}(\\theta)$ and $p^{(1,1)}(\\theta)$ denote the proportions of individuals holding a policy only for risk 2 and for both risks, respectively. 
If the acquisition of policies for different risks is independent, then:\n\\begin{equation}\n\\label{eq:AcquisitionProbabilities}\n p^{(1,1)}(\\theta) = p_1(\\theta_1)p_2(\\theta_2), \\quad p^{(1,0)}(\\theta) = p_1(\\theta_1)(1-p_2(\\theta_2)), \\quad p^{(0,1)}(\\theta) = p_2(\\theta_2)(1-p_1(\\theta_1)).\n\\end{equation}\n\nDependency between the acquisition of different risks can be introduced by considering dependent bid prices $B = (B_1, B_2)$. In particular, if the joint distribution of $B$ is characterized by an ordinary copula $C: [0,1]^2 \\mapsto [0,1]$, then, according to Sklar's theorem, $F_B(\\theta_1, \\theta_2) = C(F_{B_1}(\\theta_1), F_{B_2}(\\theta_2))$ \\cite{nelsen2007introductionCopulas}. This gives:\n\\begin{equation}\n\\label{eq:EQ8}\n \\begin{split}\n p^{(1,0)} &= F_{B_2}(\\theta_2^{-}) - C(F_{B_1}(\\theta_1^{-}), F_{B_2}(\\theta_2^{-})), \\\\\n p^{(0,1)} &= F_{B_1}(\\theta_1^{-}) - C(F_{B_1}(\\theta_1^{-}), F_{B_2}(\\theta_2^{-})), \\\\\n p^{(1,1)} &= 1 -F_{B_1}(\\theta_1^{-})- F_{B_2}(\\theta_2^{-}) + C(F_{B_1}(\\theta_1^{-}), F_{B_2}(\\theta_2^{-})). \n \\end{split}\n\\end{equation}\n\nUnder this model, the company's surplus process is:\n\\begin{equation}\n\\label{eq:agg_company_claims}\n \\Tilde{X}_t = u^{(1)} + u^{(2)} + \\big( c^{(1)}(\\theta_1) + c^{(2)}(\\theta_2) \\big)t - \\sum_{i = 0}^{\\Tilde{N}_t^{1\\perp}} \\Tilde{Y}_i^{1\\perp} - \\sum_{i = 0}^{\\Tilde{N}_t^{2\\perp}} \\Tilde{Y}_i^{2\\perp} - \\sum_{i = 0}^{\\Tilde{N}_t^{\\parallel}} \\Big(Y_i^{1\\parallel} + Y_i^{2\\parallel} \\Big),\n\\end{equation}\nwhere $\\Tilde{N}_t^{1\\perp}$, $\\Tilde{N}_t^{2\\perp}$, and $\\Tilde{N}_t^{\\parallel}$ count the number of claims received by the company concerning only risk 1, only risk 2, and both risks, respectively. 
Their intensities are, respectively,\n\\begin{equation*}\n \\begin{split}\n \\Tilde{\\lambda}_1^\\perp &= p^{(1,0)}(\\theta) \\big( \\lambda_1^\\perp + \\lambda^\\parallel \\big) + p^{(1,1)}(\\theta) \\lambda_1^\\perp = p_1(\\theta_1) \\lambda_1^\\perp + p^{(1,0)}(\\theta) \\lambda^\\parallel, \\\\\n \\Tilde{\\lambda}_2^\\perp &= p_2(\\theta_2) \\lambda_2^\\perp + p^{(0,1)}(\\theta) \\lambda^\\parallel, \\\\\n \\Tilde{\\lambda}^\\parallel &= p^{(1,1)}(\\theta) \\lambda^\\parallel.\n \\end{split}\n\\end{equation*}\n\nThe distributions of the single risk claim amounts $\\Tilde{Y}^{1\\perp}$ (resp., $\\Tilde{Y}^{2\\perp}$) are mixtures of the distributions of $Y^{1\\perp}$ and $Y^{1\\parallel}$ (resp., $Y^{2\\perp}$ and $Y^{2\\parallel}$):\n\\begin{equation*}\n \\begin{split}\n &F_{\\Tilde{Y}^{1\\perp}} = \\frac{p_1\\lambda_1^\\perp }{p_1 \\lambda_1^\\perp + p^{(1,0)} \\lambda^\\parallel } F_{Y^{1\\perp}} + \\frac{p^{(1,0)} \\lambda^\\parallel }{p_1 \\lambda_1^\\perp + p^{(1,0)} \\lambda^\\parallel } F_{Y^{1\\parallel}} \\\\\n &F_{\\Tilde{Y}^{2\\perp}} = \\frac{p_2 \\lambda_2^\\perp }{p_2 \\lambda_2^\\perp + p^{(0,1)} \\lambda^\\parallel } F_{Y^{2\\perp}} + \\frac{p^{(0,1)} \\lambda^\\parallel }{p_2 \\lambda_2^\\perp + p^{(0,1)} \\lambda^\\parallel } F_{Y^{2\\parallel}} .\n \\end{split}\n\\end{equation*}\nThis is because some customers insure risk 1 but not risk 2, and vice versa, so a simultaneous market claim may reach the company as a single-risk claim. 
Therefore, the aggregate process for the insurer is\n\\begin{equation}\n\\label{eq:indp_company_claim}\n \\Tilde{X}_t = u^{(1)} + u^{(2)} + \\big( c^{(1)}(\\theta_1) + c^{(2)}(\\theta_2) \\big)t - \\sum_{i = 0}^{\\Tilde{N}_t} \\Tilde{Y}_i,\n\\end{equation}\nwhere $\\Tilde{N}_t$ is a Poisson process with intensity\n\\begin{equation}\n\\label{eq:lambda_tilde}\n \\Tilde{\\lambda} = p_1 \\lambda_1^\\perp + p_2 \\lambda_2^\\perp + \\big( p^{(1,0)} + p^{(0,1)}+ p^{(1,1)} \\big)\\lambda^\\parallel = p_1 \\lambda_1 + p_2 \\lambda_2 - p^{(1,1)}\\lambda^\\parallel ,\n\\end{equation}\nand $\\Tilde{Y}_i$, $i \\in \\mathbbm{N}$, are i.i.d. random variables with distribution\n\\begin{equation}\n\\label{eq:DepDist}\n\\begin{split}\n F_{\\Tilde{Y}} &= \\frac{p_1 \\lambda_1^{\\perp} }{\\Tilde{\\lambda}} F_{Y^{1 \\perp}} + \\frac{p_2 \\lambda_2^{\\perp} }{\\Tilde{\\lambda}} F_{Y^{2 \\perp}} \n + \\frac{p^{(1,0)} \\lambda^{\\parallel} }{\\Tilde{\\lambda}} F_{Y^{1 \\parallel}}\n + \\frac{p^{(0,1)} \\lambda^{\\parallel} }{\\Tilde{\\lambda}} F_{Y^{2 \\parallel}}\n + \\frac{p^{(1,1)} \\lambda^{\\parallel} }{\\Tilde{\\lambda}} F_{Y^{1 \\parallel}+ Y^{2 \\parallel}} \\\\\n &=\\frac{1}{p_1 \\lambda_1 + p_2 \\lambda_2 - p^{(1,1)} \\lambda^\\parallel} \\bigg( p_1 \\lambda_1 F_{Y^{1}} + p_2 \\lambda_2 F_{Y^{2}} + p^{(1,1)}\\lambda^\\parallel \\Big( F_{Y^{1\\parallel} + Y^{2\\parallel}} - F_{Y^{1\\parallel}} - F_{Y^{2\\parallel}} \\Big) \\bigg). \n \\end{split}\n\\end{equation}\n\nThus, if the risks in the market are independent (i.e. if $\\lambda^\\parallel = 0$), then the risk in the company's portfolio is just a sum of the risks $S^{(1)}$ and $S^{(2)}$, weighted by the respective market shares, $p_1$ and $p_2$, irrespective of any dependency between sales of policies for different risks. However, if the risks in the market are dependent ($\\lambda^\\parallel \\neq 0$), then the company's risk is not, in general, a weighted sum of $S^{(1)}$ and $S^{(2)}$. 
Further, this effect persists even in the case where sales of different policies are independent (i.e., $p^{(1,1)} = p_1 p_2$). On the other hand, equalities \\eqref{eq:lambda_tilde} and \\eqref{eq:DepDist} show that in the (unlikely) situation where clients always buy insurance for only one risk, the risk exposure of the insurer is accurately computed using only the marginal distributions of each risk (i.e. assuming that the risks are independent). This is due to the static nature of our model. For example, it does not take into account the possibility of external factors changing the frequency of claim events in both risks simultaneously.\n\n\n\\subsection{The Impact of Dependencies on Ruin Probability}\n\nFrom the discussion above and Proposition \\ref{prop:alpha_proof}, it follows that the ruin probability of a company with market shares ($p_1$, $p_2$, $p^{(1,1)}$) solves the equation \n\\begin{align}\n\\label{eq:EQ3}\n\\frac{dV(x)}{dx} = &\n\\frac{\\Tilde{\\lambda}}{c^{(1)} + c^{(2)}} \\Big( V(x) - \\int_0^x V(x-y)dF_{\\Tilde{Y}}(y) + F_{\\Tilde{Y}}(x) -1 \\Big),\n\\\\ \\label{eq:EQ4}\n V(0^{+}) =&\n \\frac{\\Tilde{\\lambda}}{c^{(1)} + c^{(2)}} \\E[\\Tilde{Y}] ,\n\\end{align}\nwith $\\Tilde{\\lambda}$ and $F_{\\Tilde{Y}}$ given by equations \\eqref{eq:lambda_tilde} and \\eqref{eq:DepDist}.\n\\smallskip\n\nSince estimating the dependency structure may pose substantial difficulties, we may wish to have an a priori bound for the error introduced by neglecting dependencies, that is, by substituting the probability $V_{ind}(x)$ for $V(x)$, where $V_{ind}(x)$ solves the equation\n\\begin{equation}\n \\label{eq:EQ5}\n \\frac{dV(x)}{dx} = \\frac{\\hat{\\lambda}}{c^{(1)} + c^{(2)}} \\Big( V(x) - \\int_0^x V(x-y)dF_{\\hat{Y}}(y) + F_{\\hat{Y}}(x) -1 \\Big) ,\n\\end{equation}\nwhere $\\hat{\\lambda} = \\lambda_1p_1 + \\lambda_2p_2$ and $F_{\\hat{Y}}(x) = \\frac{\\lambda_1 p_1 F_{Y^{(1)}} + \\lambda_2 p_2 F_{Y^{(2)}}}{\\hat{\\lambda}}$. 
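Before bounding this error, note that the two specifications are calibrated consistently: the Python check below (with illustrative intensities, severity means, and market shares, all assumed) confirms \\eqref{eq:lambda_tilde} and that the expected aggregate claim rate $\\Tilde{\\lambda}\\E[\\Tilde{Y}]$ reduces to $p_1\\lambda_1\\E[Y^{(1)}] + p_2\\lambda_2\\E[Y^{(2)}]$, regardless of the dependency parameters.

```python
# Illustrative market parameters (assumed): intensities and severity means.
lam1, lam2, lam_par = 800.0, 800.0, 400.0             # lambda_1, lambda_2, lambda_par
lam1_perp, lam2_perp = lam1 - lam_par, lam2 - lam_par
m1_perp, m2_perp = 900.0, 1100.0                      # E[Y^{1 perp}], E[Y^{2 perp}]
m1_par, m2_par = 1200.0, 800.0                        # E[Y^{1 par}],  E[Y^{2 par}]

# Marginal severity means implied by the decomposition of each risk:
m1 = (lam1_perp * m1_perp + lam_par * m1_par) / lam1  # E[Y^(1)]
m2 = (lam2_perp * m2_perp + lam_par * m2_par) / lam2  # E[Y^(2)]

# Market shares (assumed), with p11 <= min(p1, p2):
p1, p2, p11 = 0.30, 0.20, 0.08
p10, p01 = p1 - p11, p2 - p11

# Company intensity and mean claim size, assembled from the component processes:
lam_tilde = p1 * lam1_perp + p2 * lam2_perp + (p10 + p01 + p11) * lam_par
mean_tilde = (p1 * lam1_perp * m1_perp + p2 * lam2_perp * m2_perp
              + p10 * lam_par * m1_par + p01 * lam_par * m2_par
              + p11 * lam_par * (m1_par + m2_par)) / lam_tilde
```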
Notice that $\\hat{\\lambda} \\E[\\hat{Y}] = \\Tilde{\\lambda} \\E[\\Tilde{Y}]$ and therefore the boundary condition for \\eqref{eq:EQ5} is again \\eqref{eq:EQ4}.\n\\smallskip\n\nThe discussion in Subsection \\ref{section:sub4.1} shows that the difference $V(x) - V_{ind}(x)$ is expected to be small when $p^{(1,1)}$ is small compared to $p_1 + p_2$. The following proposition gives a precise meaning for this statement.\n\n\\begin{proposition}\n\\label{prop:P1}\nWith the notation above:\n\\begin{equation*}\n |V(x) - V_{ind}(x)| \\leq p^{(1,1)} \\lambda^\\parallel \\frac{e^{\\frac{2\\Tilde{\\lambda}x}{c^{(1)} + c^{(2)}}} - 1}{\\Tilde{\\lambda}}\n\\end{equation*}\nfor every amount of initial reserve $x \\geq 0$.\n\\end{proposition}\n\n\\begin{proof}\nFrom equalities \\eqref{eq:EQ3}, \\eqref{eq:EQ4} and \\eqref{eq:EQ5}, straightforward computations yield:\n\\begin{equation}\n\\label{eq:EQ6}\n \\begin{split}\n V(x) -V_{ind}(x) &= \\frac{p^{(1,1)} \\lambda^\\parallel}{c^{(1)} + c^{(2)}} \\Bigg( \\int_0^x \\Big( V_{ind}(z) - \\int_0^z V_{ind}(z-y)dF_{Y^{1\\parallel} + Y^{2\\parallel}}(y) + F_{Y^{1\\parallel} + Y^{2\\parallel}}(z) -1 \\Big) dz \\\\\n & \\quad - \\int_0^x \\Big( V_{ind}(z) - \\int_0^z V_{ind}(z-y)dF_{Y^{1\\parallel}}(y) + F_{Y^{1\\parallel} }(z) -1 \\Big) dz \\\\\n & \\quad - \\int_0^x \\Big( V_{ind}(z) - \\int_0^z V_{ind}(z-y)dF_{Y^{2\\parallel}}(y) + F_{Y^{2\\parallel}}(z) -1 \\Big) dz \\Bigg) \\\\\n & \\quad + \\frac{\\Tilde{\\lambda}}{c^{(1)} + c^{(2)}} \\int_0^x \\Big( (V - V_{ind})(z) - \\int_0^z (V - V_{ind})(z-y)dF_{\\Tilde{Y}}(y) \\Big) dz .\n \\end{split}\n\\end{equation}\n\nIt can be checked that for every distribution function $G:[0, +\\infty[ \\mapsto [0,1]$,\n\\begin{equation*}\n -x \\leq \\int_0^x \\Big( V_{ind}(z) -\\int_0^z V_{ind}(z-y)dG(y) + G(z) -1 \\Big) dz \\leq 0 .\n\\end{equation*}\n\nTherefore, \\eqref{eq:EQ6} implies:\n\\begin{equation*}\n \\max_{y \\in [0,x]} |V(y) - V_{ind}(y)| \\leq \\frac{ p^{(1,1)} \\lambda^\\parallel}{c^{(1)} + c^{(2)}}2x + \\frac{\\Tilde{\\lambda}}{c^{(1)} + c^{(2)}} \\int_0^x 
2 \\max_{y \\in [0,z]} |V(y) - V_{ind}(y)|dz .\n\\end{equation*}\nThus, the result follows by Gr\u00f6nwall's inequality \\cite{dragomir2003some}.\n\\end{proof}\n\n\n\\subsection{The Impact of Dependencies on Small Companies}\n\nNow, we proceed with the argument above to explore how dependencies affect companies of different sizes. We measure the size of the company by its expected total value of claims, $\\Tilde{\\lambda} \\E[\\Tilde{Y}]$, and, to make comparisons meaningful, we consider that the total revenue is proportional to the company's size, i.e.\n\\begin{equation*}\n c^{(1)} + c^{(2)} = (1 + \\theta) \\Tilde{\\lambda} \\E [\\Tilde{Y}], \\quad \\textrm{with } \\theta>0 \\textrm{ constant.}\n\\end{equation*}\nSimilarly, we consider the initial reserve to be proportional to size, i.e.:\n\\begin{equation*}\n x = x_0 \\Tilde{\\lambda} \\E [\\Tilde{Y}], \\quad \\textrm{with } x_0>0 \\textrm{ constant.}\n \\end{equation*}\nNotice that, due to equations \\eqref{eq:EQ3}, \\eqref{eq:EQ4} and \\eqref{eq:EQ5}, the effect of dependencies must be bounded in the sense that\n\\begin{equation*}\n |V(x_0\\Tilde{\\lambda}\\E[\\Tilde{Y}]) - V_{ind}(x_0\\Tilde{\\lambda}\\E[\\Tilde{Y}])| \\leq K_1 x_0\\Tilde{\\lambda}\\E[\\Tilde{Y}] \\leq K_2(p_1 + p_2) ,\n\\end{equation*}\nfor some constants $K_1, K_2 < +\\infty$. 
However, we can use Proposition \\ref{prop:P1} to obtain a better estimate:\n\\begin{equation}\n \\label{eq:EQ7}\n |V(x_0\\Tilde{\\lambda}\\E[\\Tilde{Y}]) - V_{ind}(x_0\\Tilde{\\lambda}\\E[\\Tilde{Y}])| \\leq p^{(1,1)} \\lambda^\\parallel \\frac{e^{\\frac{2 \\Tilde{\\lambda} x_0}{1 + \\theta}} - 1}{\\Tilde{\\lambda}}.\n\\end{equation}\n\nNotice that the right-hand side of \\eqref{eq:EQ7} has the same asymptotic behaviour as \n\\begin{equation*}\n 2 p^{(1,1)} \\lambda^\\parallel \\frac{x_0}{1 + \\theta}, \\quad \\textrm{when } p_1 + p_2 \\to 0.\n\\end{equation*}\nFurther, if the sales of policies for different risks to the same individual are independent, then $p^{(1,1)} = p_1 p_2$ goes to zero faster than $\\Tilde{\\lambda} \\E[\\Tilde{Y}] = p_1 \\lambda_1 \\E[Y^{(1)}] + p_2 \\lambda_2 \\E[Y^{(2)}]$, when $p_1 + p_2 \\to 0$. Thus, a small company selling policies for different risks independently is relatively immune to the effects of dependencies between the risks, contrary to a large company (it is obvious that a monopolistic company is fully exposed to the dependencies between risks). \nThis immunity to the risks' dependencies may persist even when sales of policies for different risks are not independent, provided the dependency in sales is sufficiently mild. For example, $\\lim_{p_1 + p_2 \\to 0} \\frac{p^{(1,1)}}{p_1 + p_2} = 0$ if the dependency between sales is modelled by a Clayton or a Frank copula in \\eqref{eq:EQ8}. However, small companies are not especially protected from risk dependencies if the dependency between sales is modelled by a Pareto or a Gumbel copula.\n\n\\subsection{Optimal Loadings and Market Shares}\n\nSince the right-hand sides of equalities \\eqref{eq:EQ3} and \\eqref{eq:EQ4} depend on the loadings through both $\\frac{\\E[\\Tilde{N}_1]}{c^{(1)} + c^{(2)}}$ and $F_{\\Tilde{Y}}$, Proposition \\ref{prop:alpha_proof} cannot be generalized to models with multiple risks. 
However, it is possible to provide optimality conditions for the loadings $\\theta = (\\theta_1, \\theta_2)$ minimizing the ruin probability.\n\\smallskip\n\nTo do this, we extend the notation introduced in the proof of Proposition \\ref{prop:alpha_proof}. For any distribution function $G:[0, +\\infty[ \\mapsto [0,1]$, we consider the compounding operator of type \\eqref{eq:IntegralOperator}\n\\begin{equation*}\n (\\Psi_G h)(x) = \\int_0^x \\Big( h(z) - \\int_0^z h(z-y) dG(y)\\Big)dz, \\quad x \\geq 0.\n\\end{equation*}\nThus, the 2-risk version of equation \\eqref{eq:omega_prove_1} can be written as \n\\begin{equation}\n\\label{eq:EQ10}\n V_\\theta(x) = \\frac{\\Tilde{\\lambda}_\\theta}{c(\\theta)} \\bigg( \\int_x^\\infty \\big( 1- F_\\theta(z) \\big) dz + \\Big(\\Psi_{F_\\theta} V_\\theta \\Big)(x) \\bigg) ,\n\\end{equation}\nwhere\n\\begin{equation*}\n F_\\theta = \\frac{\\lambda_1^\\perp}{\\Tilde{\\lambda}_\\theta} p_1(\\theta) F_{Y^{1\\perp}} + \\frac{\\lambda_2^\\perp}{\\Tilde{\\lambda}_\\theta} p_2(\\theta) F_{Y^{2\\perp}} + \\frac{\\lambda^\\parallel}{\\Tilde{\\lambda}_\\theta} p^{(1,0)}(\\theta) F_{Y^{1\\parallel}} + \\frac{\\lambda^\\parallel}{\\Tilde{\\lambda}_\\theta} p^{(0,1)}(\\theta) F_{Y^{2\\parallel}} + \\frac{\\lambda^\\parallel}{\\Tilde{\\lambda}_\\theta} p^{(1,1)}(\\theta) F_{Y^{1\\parallel} + Y^{2\\parallel}} .\n\\end{equation*}\nSince $\\frac{\\lambda_1^\\perp}{\\Tilde{\\lambda}_\\theta} p_1(\\theta) + \\frac{\\lambda_2^\\perp}{\\Tilde{\\lambda}_\\theta}p_2(\\theta) + \\frac{\\lambda^\\parallel}{\\Tilde{\\lambda}_\\theta} p^{(1,0)}(\\theta) + \\frac{\\lambda^\\parallel}{\\Tilde{\\lambda}_\\theta} p^{(0,1)}(\\theta) + \\frac{\\lambda^\\parallel}{\\Tilde{\\lambda}_\\theta} p^{(1,1)}(\\theta) = 1$, \\eqref{eq:EQ10} becomes\n\\begin{equation*}\n \\begin{split}\n V_{\\theta}(x) &= \\frac{\\lambda_1^\\perp p_1(\\theta)}{c(\\theta)} \\bigg( \\int_x^\\infty \\big( 1- F_{Y^{1\\perp}}(z) \\big) dz + (\\Psi_{F_{Y^{1\\perp}}}V_\\theta)(x) \\bigg) \\\\\n & \\quad + 
\\frac{\\lambda_2^\\perp p_2(\\theta)}{c(\\theta)} \\bigg( \\int_x^\\infty \\big( 1- F_{Y^{2\\perp}}(z) \\big) dz + (\\Psi_{F_{Y^{2\\perp}}}V_\\theta)(x) \\bigg) \\\\\n & \\quad + \\frac{\\lambda^\\parallel p^{(1,0)}(\\theta)}{c(\\theta)} \\bigg( \\int_x^\\infty \\big( 1- F_{Y^{1\\parallel}}(z) \\big) dz + (\\Psi_{F_{Y^{1\\parallel}}}V_\\theta)(x) \\bigg) \\\\\n & \\quad + \\frac{\\lambda^\\parallel p^{(0,1)}(\\theta)}{c(\\theta)} \\bigg( \\int_x^\\infty \\big( 1- F_{Y^{2\\parallel}}(z) \\big) dz + (\\Psi_{F_{Y^{2\\parallel}}}V_\\theta)(x) \\bigg) \\\\\n & \\quad + \\frac{\\lambda^\\parallel p^{(1,1)}(\\theta)}{c(\\theta)} \\bigg( \\int_x^\\infty \\big( 1- F_{Y^{1\\parallel} +Y^{2\\parallel}}(z) \\big) dz + (\\Psi_{F_{Y^{1\\parallel} + Y^{2\\parallel}}}V_\\theta)(x) \\bigg) .\n \\end{split}\n\\end{equation*}\n\nWe write this in abbreviated form:\n\\begin{equation*}\n V_{\\theta}(x) = <\\alpha(\\theta), \\Gamma(x)> + (<\\alpha(\\theta), \\Psi>V_\\theta)(x) ,\n\\end{equation*}\nwhere $\\alpha(\\theta)$ is the vector\n\\begin{equation*}\n \\alpha(\\theta) = \\frac{1}{c(\\theta)} \\Big(\\lambda_1^\\perp p_1(\\theta), \\lambda_2^\\perp p_2(\\theta), \\lambda^\\parallel p^{(1,0)}(\\theta), \\lambda^\\parallel p^{(0,1)}(\\theta), \\lambda^\\parallel p^{(1,1)}(\\theta) \\Big),\n\\end{equation*}\n$\\Gamma(x)$ is the vector function\n\\begin{equation*}\n \\Gamma(x) = \\Big(\\int_x^\\infty \\big( 1- F_{Y^{1\\perp}}(z) \\big) dz, \\int_x^\\infty \\big( 1- F_{Y^{2\\perp}}(z) \\big) dz, \\int_x^\\infty \\big( 1- F_{Y^{1\\parallel}}(z) \\big) dz, \\int_x^\\infty \\big( 1- F_{Y^{2\\parallel}}(z) \\big) dz, \\int_x^\\infty \\big( 1- F_{Y^{1\\parallel} + Y^{2\\parallel}}(z) \\big) dz \\Big),\n\\end{equation*}\n$\\Psi$ is the vector of operators\n\\begin{equation*}\n \\Psi = \\Big(\\Psi_{F_{Y^{1\\perp}}}, \\Psi_{F_{Y^{2\\perp}}}, \\Psi_{F_{Y^{1\\parallel}}}, \\Psi_{F_{Y^{2\\parallel}}}, \\Psi_{F_{Y^{1\\parallel} + Y^{2\\parallel}}} \\Big) ,\n\\end{equation*}\nand $<\\cdot, \\cdot>$ is the usual inner product in $\\mathbbm{R}^5$. 
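In this abbreviated form, $V_\\theta$ can be computed by fixed-point iteration of $V \\mapsto <\\alpha(\\theta), \\Gamma> + <\\alpha(\\theta), \\Psi>V$ on a grid. The Python sketch below does this with all five severity components taken exponential and an arbitrary vector $\\alpha$; both choices are assumptions made purely to keep the example self-contained, not the paper's calibration.

```python
import math

H, M = 0.05, 40                       # grid step and size: x in [0, 2]
XS = [j * H for j in range(M + 1)]

def trap(vals):
    """Trapezoidal rule on grid values with step H."""
    if len(vals) < 2:
        return 0.0
    return H * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])

def make_density(mu):
    return lambda y: mu * math.exp(-mu * y)       # exponential severity density

def psi(density, h):
    """Discretized (Psi_G h)(x) = int_0^x ( h(z) - int_0^z h(z-y) g(y) dy ) dz."""
    integrand = []
    for j in range(M + 1):
        conv = trap([h[j - k] * density(XS[k]) for k in range(j + 1)])
        integrand.append(h[j] - conv)
    return [trap(integrand[: j + 1]) for j in range(M + 1)]

MUS = [1.0, 1.2, 0.8, 1.5, 0.6]                   # exponential rates (assumed)
ALPHAS = [0.1, 0.1, 0.05, 0.05, 0.2]              # illustrative alpha(theta)
DENSITIES = [make_density(mu) for mu in MUS]
# Gamma_k(x) = int_x^inf (1 - F_k(z)) dz = exp(-mu x) / mu for exponentials.
GAMMAS = [[math.exp(-mu * x) / mu for x in XS] for mu in MUS]
base = [sum(a * g[j] for a, g in zip(ALPHAS, GAMMAS)) for j in range(M + 1)]

def apply_op(v):
    """One application of v -> <alpha, Gamma> + <alpha, Psi> v."""
    psis = [psi(d, v) for d in DENSITIES]
    return [base[j] + sum(a * pv[j] for a, pv in zip(ALPHAS, psis))
            for j in range(M + 1)]

v = base[:]
for _ in range(80):       # converges: the n-fold composition shrinks factorially
    v = apply_op(v)
```

Since $(\\Psi_G h)(0) = 0$ for every $G$, the iteration keeps $v(0) = <\\alpha, \\Gamma(0)>$ at every step, and the fixed-point residual vanishes at a factorial rate.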
\n\\smallskip\n\nUsing the argument in the proof of Proposition \\ref{prop:alpha_proof}, we see that $V_\\theta$ admits the series representation\n\\begin{equation*}\n V_\\theta(x) = \\sum_{n=0}^\\infty \\Big(<\\alpha(\\theta), \\Psi>^n <\\alpha(\\theta), \\Gamma> \\Big)(x) .\n\\end{equation*}\nSimilarly, any vector $\\gamma \\in \\mathbbm{R}^5$ and any bounded measurable function $g:[0, +\\infty[ \\mapsto \\mathbbm{R}$ define a unique function\n\\begin{equation*}\n u_{\\gamma, g} (x) = \\sum_{n = 0}^\\infty \\Big(<\\gamma, \\Psi>^n g\\Big)(x) .\n\\end{equation*}\nThis function is analytic with respect to $\\gamma$, with partial derivatives\n\\begin{equation*}\n \\frac{\\partial{u_{\\gamma, g}}}{\\partial{\\gamma_i}} = \\sum_{n = 0}^\\infty <\\gamma, \\Psi>^n\\big( \\Psi_i u_{\\gamma,g}\\big) = u_{\\gamma, \\Psi_i u_{\\gamma, g}}, \\quad i = 1, \\dots, 5 .\n\\end{equation*}\n\nTaking into account the chain rule, this proves the following proposition.\n\n\\begin{proposition}\n\\label{prop:P2}\nIf $\\theta \\mapsto \\alpha(\\theta)$ is differentiable, then $\\theta \\mapsto V_\\theta(x)$ is differentiable for every $x \\geq 0$ and\n\n$$\\frac{\\partial{}}{\\partial{\\theta_i}} V_\\theta(x) = \\sum_{j = 1}^5 u_{\\alpha(\\theta),(\\Gamma_j + \\Psi_j V_\\theta)} \\frac{\\partial{\\alpha_j(\\theta)}}{\\partial{\\theta_i}}, \\quad i = 1,2.$$\n\\end{proposition}\n\nBy Proposition \\ref{prop:P2}, the optimal loadings satisfy the equation \n\\begin{equation}\n \\label{eq:EQ11}\n \\sum_{j = 1}^5 u_{\\alpha(\\theta),(\\Gamma_j + \\Psi_j V_\\theta)} \\frac{\\partial{\\alpha_j(\\theta)}}{\\partial{\\theta_i}} = 0, \\quad i = 1,2 .\n\\end{equation}\n\nContrary to the single-risk case, explicit solutions of this equation seem out of reach, even in simple cases. 
\nHowever, \\eqref{eq:EQ11} can be numerically solved by Newton's algorithm, the second-order partial derivatives being\n\\begin{equation*}\n \\frac{\\partial^2}{\\partial{\\theta_i}\\partial{\\theta_j}} V_\\theta(x) = \\sum_{k = 1}^5 u_{\\alpha(\\theta),(\\Gamma_k + \\Psi_k V_\\theta)} \\frac{\\partial^2{\\alpha_k(\\theta)}}{\\partial{\\theta_i}\\partial{\\theta_j}} + \\sum_{k = 1}^5\\sum_{l = 1}^5 u_{\\alpha(\\theta),\\Psi_k} u_{\\alpha(\\theta),(\\Gamma_l + \\Psi_l V_\\theta)} \\frac{\\partial{\\alpha_k(\\theta)}}{\\partial{\\theta_i}} \\frac{\\partial{\\alpha_l(\\theta)}}{\\partial{\\theta_j}} .\n\\end{equation*}\n\nNotice that the expected profit is\n\\begin{equation*}\n c^{(1)}(\\theta) + c^{(2)}(\\theta) - \\Tilde{\\lambda} \\E[\\Tilde{Y}] = \\theta_1 p_1(\\theta_1)\\lambda_1\\E[Y^{(1)}] + \\theta_2 p_2(\\theta_2)\\lambda_2\\E[Y^{(2)}].\n\\end{equation*}\nThus, it depends only on the marginal distribution of the claim processes $S^{(1)}$, $S^{(2)}$, being independent of the dependency structure. It follows that the loadings minimizing the joint profit coincide with the loadings minimizing the profit on each risk, separately. That is, a pricing strategy that completely focus on expected profit completely fails to take both dependencies between risks and dependencies between sales of policies into account.\n\n\n\n\n\n\\section{Numerical Results}\n\\label{seq:numerical}\n\nThroughout this section, $Y_i^{(i)}$ are assumed to be i.i.d gamma distributed random variables with shape parameter, $a^{(i)}$, and scale parameters, $k^{(i)}$, which means that the mean is, $\\E[Y^{(i)}] = a^{(i)}k^{(i)}$, for $i = 1,2$. In the following numerical analysis let $a^{(1)} = a^{(2)} = 2$, $k^{(1)} = k^{(2)} = 500$, $\\lambda^{(1)} = \\lambda^{(2)} = 800$, $\\beta_0^{(1)} = \\beta_0^{(2)} = -0.5$, $\\beta_1^{(2)} = 4$ and $\\beta_1^{(1)} = 4.5$. That is, the difference stems from surplus process 2 being more sensitive to the loading via the parameter $\\beta_1^{(2)}$. 
$r^{(i)}$ is taken to be $20\\%$ of the pure premium at an exposure of $40\\%$, that is, $r^{(i)} = 0.4 \\cdot 0.2\\, k^{(i)} a^{(i)} N^{(i)}$. The operational cost is therefore $8\\%$ of the expected total amount of claims in the market. The Clayton L\u00e9vy copula is considered for positive dependence and the parameter is set to $\\omega = 1$. Finally, let $\\theta_{ruin}^*$ and $\\theta_{profit}^*$ denote the optimal loading under the ruin probability and the expected profit criterion, respectively. The programming language R was used for every calculation. \\\\


\\subsection{Single Surplus Process}

The surplus processes are first considered separately. The ruin probability and the expected profit are plotted as functions of $\\theta$ for the two processes in Figures \\ref{fig:logit_1} and \\ref{fig:logit_2}. $\\theta_{ruin}^*$ was found by minimizing $\\alpha$.


\\begin{figure}[H]
 \\centering
 \\includegraphics[scale = 0.4]{.\/logit_1.png}
 \\caption[Optimal loading parameter of surplus process 1.]{Surplus process 1. The blue lines show the ruin probability as a function of $\\theta$ for a given surplus $x$. The black line shows the expected profit per time unit as a function of $\\theta$. The blue dots show the minimum ruin probability for each surplus. $\\theta_{profit}^*$ and $\\theta_{ruin}^*$ denote the optimal security loading parameter for the expected profit and for the probability of ruin, respectively. }
 \\label{fig:logit_1}
\\end{figure}


From Figure \\ref{fig:logit_1} it can be seen that the optimal security loading parameter for the ruin probability is $\\theta_{ruin}^* = 0.435$, while the $\\theta$ that maximizes the expected profit is lower, $\\theta_{profit}^* = 0.359$. Moreover, in this example, the maximum expected profit is 22.843 units and is attained at $\\theta_{profit}^*$. The expected profit taken at the point $\\theta_{ruin}^*$ is lower, close to 20.000 units. 
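The profit-maximizing loadings can be reproduced, up to grid resolution, by maximizing $\\theta\\, p(\\theta)\\, \\lambda\\, \\E[Y]$ for a logistic demand curve $p(\\theta) = 1\/(1 + \\exp(\\beta_0 + \\beta_1 \\theta))$; note that the maximizer does not depend on $\\lambda$ or $\\E[Y]$. The coefficients used below ($-0.6$ together with $4$ and $4.5$) are taken from the weighted-average expression appearing later in this section and are an assumption here:

```python
import numpy as np

# Grid search for the loading maximizing theta * p(theta) * lambda * E[Y]
# with a logistic demand curve p(theta) = 1 / (1 + exp(b0 + b1 * theta)).
# The coefficients b0 = -0.6 and b1 in {4, 4.5} are assumed, borrowed from
# the weighted-average formula later in this section.
def best_loading(b0, b1, lam=800.0, mean_claim=1000.0):
    thetas = np.linspace(0.0, 1.0, 100001)
    profit = thetas * lam * mean_claim / (1.0 + np.exp(b0 + b1 * thetas))
    return thetas[np.argmax(profit)]

theta1 = best_loading(-0.6, 4.0)   # close to the reported 0.359
theta2 = best_loading(-0.6, 4.5)   # close to the reported 0.319
```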
\n\n\\begin{figure}[H]\n \\centering\n \\includegraphics[scale = 0.4]{.\/logit_2.png}\n \\caption[Optimal loading parameter of surplus process 2.]{Surplus process 2. The blue lines show the ruin probability as a function of $\\theta$ for a given surplus x. The black line shows the expected profit per time unit as a function of $\\theta$. The blue dots show the minimum ruin probability for each surplus. $\\theta_{profit}^*$ and $\\theta_{ruin}^*$ denote the optimal security loading parameter for the expected profit and for the probability of ruin, respectively.}\n \\label{fig:logit_2}\n\\end{figure}\n\n\nFrom Figure \\ref{fig:logit_2} it can be seen that the optimal security loading parameter for the ruin probability is $\\theta_{ruin}^* = 0.358$, while the $\\theta$ that maximizes the expected profit is again lower or $\\theta_{profit}^*= 0.319$. \\\\\n\nObviously, for both processes, the ruin probability decreases with increasing surplus. Moreover, it can be seen that surplus process $X_2$ has higher probability of ruin than surplus process $X_1$ for the same amount of surplus. The sensitivity of the demand curve affects the ruin probability and $\\theta_{ruin}^*$ greatly. The more sensitive to the exposure the demand curve is, the closer the $\\theta_{profit}^*$ and $\\theta_{ruin}^*$ are. This more sensitive curve also has higher probability of ruin for a given surplus, which indicates that more competitive insurance products are riskier. These effects can be seen if the two Figures (\\ref{fig:logit_1} and \\ref{fig:logit_2}) are compared. Conversely, if the demand curve is not sensitive to the price, then the gap between $\\theta_{profit}^*$ and $\\theta_{ruin}^*$ can become quite large. 
Additionally, it can be seen that at surplus $= 100$ the ruin probabilities for $\\theta_{profit}^*$ and $\\theta_{ruin}^*$ are similar; as the surplus grows the values start to differ, and once the surplus is large enough the two loadings $\\theta_{profit}^*$ and $\\theta_{ruin}^*$ again give similar ruin probabilities. This means that if the insurance firm has a high enough surplus, it can choose $\\theta$ almost arbitrarily without substantially increasing the risk of ruin. However, holding too large reserves can be bad for insurance companies, as it can be seen as negative leverage. The bowl shape of the blue curves in Figures \\ref{fig:logit_1} and \\ref{fig:logit_2} is due to the interplay between the fixed cost and the demand curve. \\\\

$\\theta_{ruin}^*$ should give the minimum ruin probability at all surplus values. This can be tested by graphing multiple ruin probability curves and comparing them with the one obtained with $\\theta_{ruin}^*$. Figure \\ref{fig:test_many_thetas} shows that $\\theta_{ruin}^*$ does indeed give the minimum ruin probability.

\\begin{figure}[H]
 \\centering
 \\includegraphics[scale = 0.4]{.\/test_many_thetas.png}
 \\caption[Optimal value function of surplus process, $X_1$.]{The figure compares the optimal value function of surplus process $X_1$ (blue line) to other cost functions (grey lines). The blue line is achieved by setting $\\theta = \\theta_{ruin}^*$. All the cost functions lie above the optimal value function, as expected. }
 \\label{fig:test_many_thetas}
\\end{figure}

 

\\subsection{Two Aggregated Surplus Processes with Common Loading}

Next, the two surplus processes $X_1$ and $X_2$ are aggregated, both for independent and for dependent claims. 
The acquisition is independent in this subsection.

\\begin{figure}[H]
 \\centering
 \\includegraphics[scale = 0.38]{.\/dep_vs_indp.png}
 \\caption[Surplus processes 1 and 2 aggregated. Showing the ruin probability and optimal $\\theta$ both in the case of independence and dependence. ]{Ruin probability when $X_1$ and $X_2$ are aggregated, as a function of the security loading parameter, $\\theta$, both when they are independent and when they are dependent via the Clayton L\u00e9vy copula with $\\omega = 0.5$. The blue curves show the ruin probability when the two processes are independent for different values of the surplus and the red curves show the same for the dependent case. The curves have similar shapes, but the ruin probability is higher in the case of dependence, for the same surplus. The values in the legend show the minimum ruin probability for a given surplus (surplus $\\rightarrow$ probability).}
 \\label{fig:logit_dep_vs_indp}

\\end{figure}

Figure \\ref{fig:logit_dep_vs_indp} shows the ruin probability of the aggregated surplus process as a function of the security loading parameter, $\\theta$, both when the processes are independent and when they are dependent via the Clayton L\u00e9vy copula. The red curves represent dependence while the blue curves represent independence. \\\\

Firstly, it can be seen that the expected profit is the same under dependence and independence and, from the figure, $\\theta^*_{profit} \\approx 0.34$. The reason is that the claim mean and the claim frequency are almost the same (numerically) under dependence and independence. \\\\

Secondly, the dependent case has a higher probability of ruin than the independent case for the same amount of surplus. However, the ruin probability is almost the same for small surplus values, as can be seen from the figure. Interestingly, the optimal loadings for dependence and independence seem to be the same, and numerically the values are $\\theta^*_{ruin,dep} = 0.4 = \\theta^*_{ruin,indp}$. 
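As a quick sanity check, this common optimum of $0.4$ matches an exposure-weighted average of the single-process optima ($0.435$ and $0.358$). A few lines suffice; the logistic exposure coefficients used here are assumed from the weighted-average formula discussed below, evaluated at $\\theta = 0.4$:

```python
import math

# Exposure-weighted average of the single-process optimal loadings
# (0.435 and 0.358); the logistic exposures use the assumed coefficients
# -0.6 with slopes 4 and 4.5, evaluated at theta = 0.4.
w1 = 1.0 / (1.0 + math.exp(-0.6 + 4.0 * 0.4))
w2 = 1.0 / (1.0 + math.exp(-0.6 + 4.5 * 0.4))
theta_weighted = (0.435 * w1 + 0.358 * w2) / (w1 + w2)
```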
The surplus value does not change the optimal loading $\\theta^*$, as expected. The reason why the ruin probability difference between the dependent and independent cases is relatively small is the probability $p^{(0,1)}(\\theta)$. The fact that the insurance company does not always carry both claims $Y^{1\\parallel}$ and $Y^{2\\parallel}$ when a common jump occurs reduces the risk.
\\\\


Finally, the difference between the two ruin probability curves (red and blue) for a given surplus seems to increase with increasing surplus, meaning that the ruin probability in the independent case decreases more rapidly with increasing surplus than in the dependent case. Therefore, it is clear that the positively dependent case is riskier. \\\\


Note that $\\theta_{ruin}^* \\approx 0.4$, which is very close to the weighted average of the optimal loading parameters of the isolated surplus processes, where the weight is the exposure ratio of each surplus process, that is

\\begin{equation*}
 \\theta_{weighted} = \\frac{0.435 \\frac{1}{1 + \\exp(-0.6 + 4 \\cdot 0.4)} + 0.358\\frac{1}{1 + \\exp(-0.6 + 4.5 \\cdot 0.4)}}{\\frac{1}{1 + \\exp(-0.6 + 4 \\cdot 0.4)} + \\frac{1}{1 + \\exp(-0.6 + 4.5 \\cdot 0.4)}} \\approx 0.4,
\\end{equation*}

which strongly indicates that the optimal value, $\\theta_{ruin}^*$, is simply the weighted average. \\\\

\\subsubsection{Two Aggregated Surplus Processes with Separate Loadings}

It is more realistic to consider $\\theta$ as a vector, so that the loading parameter can differ between the surplus processes and the total premium can be spread over the policies in an optimal way. The two surplus processes, $X_1$ and $X_2$, are aggregated as before and the constants are the same, but let $\\theta = (\\theta^{(1)}, \\theta^{(2)})$. 
\\\\

\\begin{figure}[ht]
 \\begin{subfigure}{.5\\textwidth}
 \\centering
 
 \\includegraphics[width=.9\\linewidth]{.\/indp_multi_profit.png} 
\\end{subfigure}
\\begin{subfigure}{.5\\textwidth}
 \\centering
 
 \\includegraphics[width=.9\\linewidth]{.\/indp_multi_ruin.png} 
\\end{subfigure}
 \\caption[Two security loading parameters of two aggregated independent surplus processes. ]{Expected profit (left) and the ruin probability (right) when $X_1$ and $X_2$ are aggregated, as a function of the security loading parameters, $\\theta^{(1)}$ and $\\theta^{(2)}$. The processes are assumed to be independent and the surplus is fixed at $x = 5000$. The parenthesis in the right figure shows the optimal values of $\\theta^{(1)}$ and $\\theta^{(2)}$ with the ruin probability as a criterion. The arrow indicates which values $\\theta^{(1)}$ and $\\theta^{(2)}$ are mapped into, thus showing the minimum ruin probability. The parenthesis in the left figure shows the same for the expected profit. The shape of the contour plot is due to the fact that the $\\theta$ grid considered is sparser for values that give high ruin probability.}
 \\label{fig:both_two_indp_15000}
\\end{figure}

Figure \\ref{fig:both_two_indp_15000} shows the expected profit (left) and the ruin probability (right), when $X_1$ and $X_2$ are assumed to be independent and aggregated, as a function of the security loading parameters, $\\theta^{(1)}$ and $\\theta^{(2)}$. The surplus is fixed at $x = 5000$ and the optimal values are shown. It should be noted that many surplus values were tested and they all gave the same values for $\\theta_{ruin}^{*(1)}$, $\\theta_{ruin}^{*(2)}$, $\\theta_{profit}^{*(1)}$, and $\\theta_{profit}^{*(2)}$ as shown; only the ruin probability level changed. Note that the optimal loading parameters for the expected profit are the same as those for the individual surplus processes. 
However, the optimal loading parameters for the ruin probability change when compared to the individual ones (compare with Figures \\ref{fig:logit_1} and \\ref{fig:logit_2}). When compared to the optimal loading parameters for the individual surplus processes, $\\theta^{(1)}$ decreases from $0.435$ to $0.42$ and $\\theta^{(2)}$ increases from 0.358 to 0.38. Therefore, the optimal security loading decision is to decrease the loading parameter of the less sensitive surplus process while increasing the loading parameter of the more sensitive surplus process. Additionally, when compared to Figure \\ref{fig:logit_dep_vs_indp}, the minimum ruin probability for one shared loading is $0.57$ while that for two loadings is $0.56$, showing only a marginal difference. When the same is done for other surplus values, a similar difference is found. The expected profit is marginally higher. \\\\

Lastly, consider the case when the surplus processes are assumed to be dependent via the Clayton L\u00e9vy copula and the loadings can differ between the surplus processes. Figure \\ref{fig:both_two_dep_15000} shows the ruin probability when $X_1$ and $X_2$ are aggregated as a function of the security loading parameters, $\\theta^{(1)}$ and $\\theta^{(2)}$. The shape of the contour plots is due to the fact that the $\\theta$ grid considered is sparser for values that give high ruin probability. The surplus is fixed at $x = 5000$. 
\\\\

\\begin{figure}[ht]
 \\begin{subfigure}{.5\\textwidth}
 \\centering
 
 \\includegraphics[width=.96\\linewidth]{.\/dep_multi_profit.png} 
\\end{subfigure}
\\begin{subfigure}{.5\\textwidth}
 \\centering
 
 \\includegraphics[width=.96\\linewidth]{.\/dep_multi_ruin.png} 
\\end{subfigure}
 \\caption[Two security loading parameters of two aggregated dependent surplus processes.]{Expected profit (left) and the ruin probability (right) when $X_1$ and $X_2$ are aggregated, as a function of the security loading parameters, $\\theta^{(1)}$ and $\\theta^{(2)}$. The processes are assumed to be dependent via the Clayton L\u00e9vy copula and the surplus is fixed at $x = 5000$. The parenthesis in the right figure shows the optimal values of $\\theta^{(1)}$ and $\\theta^{(2)}$ with the ruin probability as a criterion. The arrow shows which values $\\theta^{(1)}$ and $\\theta^{(2)}$ are mapped into, thus showing the minimum ruin probability. The parenthesis in the left figure shows the same for the expected profit. The shape of the contour plot is due to the fact that the $\\theta$ grid considered is sparser for values that give high ruin probability.}
 \\label{fig:both_two_dep_15000}
\\end{figure}



It can be seen that the optimal loadings $\\theta^{(1)}$ and $\\theta^{(2)}$ are the same as in the case of independence, while the minimum ruin probability is higher (compared to the case in Figure \\ref{fig:both_two_indp_15000}). Both the values and the optimal loadings of the expected profit are the same as in the independent case. 
Again, the optimal security loading decision is to decrease the loading parameter of the less sensitive surplus process while increasing the loading parameter of the more sensitive surplus process. The difference between the ruin probabilities in Figure \\ref{fig:both_two_dep_15000} and Figure \\ref{fig:both_two_indp_15000} is only $0.03$, but in this case the surplus is low compared to the expected profit. 
If the surplus were increased to $\\approx 20.000$, the difference would become greater. The difference would then decrease again if the surplus were increased to $\\approx 40.000$. \\\\

Additionally, when compared to Figure \\ref{fig:logit_dep_vs_indp}, the minimum ruin probability for one common loading is $0.59$, which is the same as the ruin probability for separate loading selections; the difference is therefore only marginal. \\\\

\\subsection{Dependent Claims and Dependent Acquisition}

We now consider the case of dependent claims and dependent acquisition. Note that the case of independent claims and dependent acquisition is the same as the case of total independence. We look at acquisition modelled with both a Gumbel and a Clayton dependency structure. To compare these two structures we use Kendall's tau. The following equations relate the copula parameters, $\\omega_{clayton}$ and $\\omega_{gumbel}$, to Kendall's tau, $\\tau$: 

\\begin{equation*}
 \\omega_{clayton} = \\frac{2 \\tau}{1-\\tau}, \\quad \\omega_{gumbel} = \\frac{1}{1-\\tau}.
\\end{equation*}

We know that the expected profit is the same as before for all values of $\\tau$. Therefore, we analyze the ruin probability. \\\\

\\begin{figure}[ht]
 \\begin{subfigure}{.5\\textwidth}
 \\centering
 
 \\includegraphics[width=.9\\linewidth]{.\/Clayton.png} 
\\end{subfigure}
\\begin{subfigure}{.5\\textwidth}
 \\centering
 
 \\includegraphics[width=.9\\linewidth]{.\/Gumbel.png} 
\\end{subfigure}
 \\caption[Acquisition dependency.]{The ruin probability when the acquisition is modelled with a Clayton (left) and a Gumbel (right) dependency structure. In both cases, the ruin probability is higher for higher Kendall's tau. The Gumbel case is riskier. 
The surplus is fixed at 5000 units.}
 \\label{fig:acquisition}
\\end{figure}

In Figure \\ref{fig:acquisition} we can see the ruin probability for different dependency values when the surplus is fixed at 5000 units. The ruin probability is higher for more dependent acquisition, as expected, and the Gumbel acquisition model gives higher ruin probabilities than the Clayton model for the same Kendall's tau value. When Kendall's tau is $0.05$ (close to $0$), the ruin probability is close to the case of independent acquisition, as expected. The optimal loading parameter is the same for all dependency levels. 




","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{}



Until recently, the nuclear magic numbers have been considered to be robust 
and durable with respect to the shell structure of atomic nuclei.
The nuclear shell model initiated by Mayer and Jensen \\cite{MayJen}
successfully accounts for the shell structure of nuclei on and near 
the $\\beta$-stability line. 
However, recent experimental studies on neutron-rich nuclei 
far from the $\\beta$-stability line have revealed 
disappearance of magic numbers 
(e.g., N $=$ 8 \\cite{be12a,be12b,be12c,be12d}, 20 \\cite{mg32}) 
and appearance of new magic numbers (e.g., N $=$ 16 \\cite{n16magic}).
The new N $=$ 16 magic number comes from the reduction of the N $=$ 20
shell gap. 
One theoretical explanation \\cite{otsuka1, utsuno1} for this phenomenon 
is that the attractive 
monopole part of the tensor force acting between the $\\pi$d$_{5\/2}$ and 
$\\nu$d$_{3\/2}$ orbitals 
reduces the N $=$ 20 gap in nuclei with high N\/Z ratios. 
\n\nAnother anomalous phenomenon discovered in neutron-rich nuclei is \nthe occurrence of a {\\it strongly deformed} ground state on and near \nthe N $=$ 20 magic nuclei in the Z $\\sim$ 12 region, \nthe so-called 'island of inversion' \\cite{inversion}. \nHere, the intruder two-particle, two-hole (2p--2h) configuration occupying \nthe {\\it fp} shell and the normal 0p--0h configuration in the {\\it sd} shell \nare inverted in energy, and the 2p--2h configuration dominates in the \nground state. \nThis is a new challenge to the nuclear shell model. \nThe most convincing evidence for the large quadrupole collectivity of the nuclei \nin the 'island of inversion' was the measured low E(2$_{1}^{+}$) energy \\cite{mg32a} \nand the high B(E2) strength \\cite{mg32} as well as the enhancement of the binding \nenergy \\cite{mg32mass}. \nThe experimentally deduced large B(E2; 0$_{g.s.}^{+}\\rightarrow$2$_{1}^{+}$) value \nindicates large deformation ($\\beta_{2}=$ 0.512(44)) in $^{32}$Mg \\cite{mg32}.\n$^{34}$Mg (N $=$ 22) shows an even larger deformation ($\\beta_{2}=$ 0.58(6)) \n\\cite{mg34}.\nMonte Carlo shell model calculations \\cite{mcsm} that include the effects of \nthe narrowed shell gap at N $=$ 20 and enhanced 2p--2h excitations reproduce \nthe experimental values quite well \\cite{mg34}. \n\nOn the other hand, such a multiparticle--multihole (mp--mh) excitation appears \nin the excited states of nuclei near the $\\beta$-stability line.\nThese states can be studied by the heavy-ion induced fusion-evaporation reaction\nof stable isotopes. 
\nIn fact, an mp--mh excitation from the {\\it sd} to {\\it fp} shell produces \nsuperdeformation (SD) in N $=$ Z light mass nuclei, \nsuch as $^{36}$Ar \\cite{ar36}, $^{40}$Ca \\cite{ca40}, and $^{44}$Ti \\cite{ti44}.\nThese SD nuclei exhibit a large deformation of $\\beta_{2}\\sim$ 0.5, which is about \nthe same magnitude as the ground state deformation in the 'island of inversion'.\nThe presence of SD in the three abovementioned nuclei can be also understood \nin terms of the SD shell gaps of N $=$ Z $=$ 18, 20, and 22, respectively \\cite{ca40}. \nIn this region, the spherical and SD magic numbers occur at similar \nparticle numbers, which results in shape coexistence. \nHowever, the existence of a SD shell structure in neutron-rich nuclei \nhas not been experimentally confirmed yet.\n\nIn order to access the currently reachable neutron-richest SD states with asymmetric \nSD magic numbers, especially the nucleus with N $=$ 22 corresponding to $^{34}$Mg, \nwe employed a proton emission channel (2p2n) in the fusion-evaporation reaction using \nthe neutron-richest beam and target combination of stable isotopes obtained so far. \nConsequently, we successfully populated the high-spin states of the SD double magic\nZ $=$ 18 and N $=$ 22 nucleus, $^{40}$Ar. \nIn this Letter, we report experimental results on the SD states in $^{40}$Ar \nassociated with the mp--mh excitation between the {\\it sd} and {\\it fp} shells. \n\n\nHigh-spin states in $^{40}$Ar have previously been studied by proton-$\\gamma$ \ncoincidence measurements using the $^{37}$Cl($\\alpha$,~p$\\gamma$) reaction \n\\cite{old40ar}.\nHigh-spin levels below 6.8~MeV were identified up to (8$^{+}$) and spin-parity \nassignments up to the 6$^{+}$ state were obtained from the particle-$\\gamma$ \nangular correlations. 
\nThe parity of the 5$^{-}$ state at 4.494~MeV was determined by the linear \npolarization of the 5$^{-}\\rightarrow$4$^{+}$ transition at 1602~keV.\nThe lifetimes of low-lying levels were measured by the Doppler-shift attenuation \nmethod. \nThe high E2 strengths of the 6$_{2}^{+}\\rightarrow$4$_{2}^{+}$ \nand 4$_{2}^{+}\\rightarrow$2$_{2}^{+}$ transitions are respectively deduced to be \n67$_{-19}^{+38}$ and 46$_{-10}^{+15}$ in Weisskopf units, \nwhich indicates the large collectivity of the band. \nHowever, the (8$^{+}$) assignment was based solely on the similarity of the\nlevel structure to that in $^{42}$Ca, but the $\\gamma-\\gamma$ coincidence\nrelations of the in-band transitions were not examined and the presence of the \nband structure was not unambiguously identified by experiment. \nTherefore, it is essential to find the higher-spin members of the rotational band \nand to confirm the coincidence relations between the in-band $\\gamma$ transitions.\n\n\nHigh-spin states in $^{40}$Ar were populated via the \n$^{26}$Mg($^{18}$O, 2p2n)$^{40}$Ar reaction with a 70-MeV $^{18}$O beam\nprovided by the tandem accelerator facility at the Japan Atomic Energy Agency.\nTwo stacked self-supporting foils of $^{26}$Mg enriched isotopes with \nthicknesses of 0.47 and 0.43 mg\/cm$^{2}$ were used. \nThe mean beam energy of the $^{18}$O beam used to irradiate the $^{26}$Mg foils \nwas 69.0~MeV.\nGamma rays were measured by the GEMINI-II array \\cite{gemini} consisting of \n16 HPGe detectors with BGO Compton suppression shields, in coincidence with \ncharged particles detected by the Si-Ball \\cite{siball}, a 4 $\\pi$ array \nconsisting of 11 $\\Delta$E Si detectors that were 170~$\\mu$m thick. 
\nThe most forward placed Si detector was segmented into five sections and \nthe other detectors were segmented into two sections each, giving a total of \n25 channels that were used to enhance the selectivity of multi charged-particle \nevents.\nWith a trigger condition of more than two Compton-suppressed Ge detectors\nfiring in coincidence with charged particles, a total number of \n6.6$\\times$10$^{8}$ events were collected.\n\nBased on the number of hits in the charged particle detectors, events were \nsorted into three types of E$_{\\gamma}-$E$_{\\gamma}$ coincidence matrices \nfor each evaporation channel.\nA symmetrized matrix was created and the RADWARE program ESCL8R \\cite{radware}\nwas used to examine the coincidence relations of $\\gamma$ rays. \nBy gating on the previously reported $\\gamma$ rays, high-spin states\nin $^{40}$Ar were investigated.\n\n\n\nBy gating on the known 1461, 1432, and 571~keV peaks of the \n2$^{+}\\rightarrow $0$^{+}$, \n4$^{+}\\rightarrow $2$^{+}$, and \n6$^{+}\\rightarrow $4$^{+}$ transitions respectively, \nseveral new levels were identified above the 5$^{-}$ states at 4.49~MeV by \nconnecting with high-energy $\\gamma$ transitions of $\\geq$2.5~MeV.\nThe previously assigned deformed band members of 2$_{2}^{+}$, 4$_{2}^{+}$, \nand 6$_{2}^{+}$ states were confirmed at 2.522, 3.515, and 4.960~MeV, \nrespectively.\nIn addition, two $\\gamma$-ray cascade transitions of 2269 and 2699~keV \nwere identified in coincidence with the 993, 1445, and 1841~keV transitions, \nwhich form a rotational band up to the (12$^{+}$) state at 11.769~MeV. \nLinking $\\gamma$ transitions were also observed between the excited \n2$_{2}^{+}$, 4$_{2}^{+}$, and 6$_{2}^{+}$ states and the low-lying \n2$_{1}^{+}$ and 4$_{1}^{+}$ levels, which establishes the excitation energies \nand the spin-parity assignment of the band. 
\n\n\nSpins of the observed levels are assigned on the basis of the \nDCO (Directional Correlations from Oriented states) ratios of $\\gamma$ rays \nby analyzing an asymmetric angular correlation matrix. \nThe multipolarities of the in-band transitions of the band and \nthe linking transitions of 4$_{2}^{+}\\rightarrow $2$_{1}^{+}$ and \n6$_{2}^{+}\\rightarrow $4$_{1}^{+}$ are consistent with a stretched quadrupole \ncharacter. \nAssuming E2 multipolarity for the stretched quadrupole transition, the parity \nof the band was assigned to be positive.\nThe multipolarity of the 2699~keV transition could not be determined due to \nthe lack of statistics, but it was in coincidence with other $\\gamma$ \ntransitions in the band and assigned as E2. \n\nTo determine the deformation of the band, the transition quadrupole moment Q$_{t}$ \nwas deduced. \nLifetimes were estimated by the \\cite{DSAM} technique, which is based on the \nresidual Doppler shift of the $\\gamma$-ray energies emitted from the deceleration \nof recoiling nuclei in a thin target. \nThe average recoil velocity $<\\beta>$ is expressed as a function of the \ninitial recoil velocity to obtain $F(\\tau) \\equiv <\\beta>\/\\beta_{0}$.\nIn Fig.~3, the fractional Doppler shift $F(\\tau)$ is plotted against the \n$\\gamma$-ray energy. \nThe experimental $F(\\tau)$ values are compared with the calculated values\nbased on known stopping powers \\cite{srim2003}. 
\nIn this calculation, the side feeding into each state is assumed to consist\nof a cascade of two transitions having the same lifetime as the in-band\ntransitions.\nThe intensities of the side-feeding transitions were modeled to reproduce\nthe observed intensity profile.\nThe data are best fitted with a transition quadrupole moment \n$Q_{t} = 1.45_{-0.31}^{+0.49} e$b, which corresponds to a quadrupole \ndeformation of $\\beta_{2}=0.53_{-0.10}^{+0.15}$.\nThis result is consistent with a superdeformed shape of the band.\n\n\n\nIn order to compare the high-spin behavior of the rotational band in $^{40}$Ar\nwith the SD bands in $^{36}$Ar and $^{40}$Ca, \nthe so-called 'backbending' plot of the SD bands is displayed in Fig.~\\ref{fig4}.\nThe gradients of the plots correspond to the kinematic moments of inertia. \nBecause $^{40}$Ar has a similar gradient to $^{36}$Ar and $^{40}$Ca, \nthe deformation size of the $^{40}$Ar rotational band is expected to be as \nlarge as the deformation of the SD bands in $^{36}$Ar and $^{40}$Ca. \nUnlike $^{36}$Ar, no backbending was observed in $^{40}$Ar.\nIts behavior is rather similar to that of $^{40}$Ca.\nMany theoretical models, including the shell model \\cite{ar36,caurier,poves,sun} \nand various mean-field models \\cite{inakura,bender,HFB}, have been used to analyze \n$^{36}$Ar.\nAll the calculations reveal that the strong backbending in \n$^{36}$Ar is due to the simultaneous alignment of protons and neutrons \nin the f$_{7\/2}$ orbital.\n\n\nThe pronounced difference in the high-spin behaviors of $^{36}$Ar and \n$^{40}$Ar implies that the addition of four neutrons to $^{36}$Ar gives rise to \na dramatic effect on its structure. 
\nIn order to understand this structural change theoretically, cranked \nHartree--Fock--Bogoliubov (HFB) calculations with the P+QQ force \\cite{HFB} \nwere conducted.\nThe evolution of the nuclear shape was treated in a fully self-consistent manner \nand the model space of the full {\\it sd-fp} shell plus the g$_{9\/2}$ orbital \nwas employed.\nThe calculation shows that $\\beta_2 = 0.57$ at $J = 0 \\hbar$ and that the deformation \ngradually decreases to 0.45 at $J = 12\\hbar$. \nTriaxiality is found to be almost zero ($\\gamma\\simeq 0$) throughout this \nangular momentum range. \nThis result agrees with the experimental $Q_t$ value within the error bars.\n\n\n\nThe occupation number of each orbital was also calculated (Table \\ref{occupation}).\nThe ground-state configuration is expressed as \n({\\it sd})$^{-2}$({\\it fp})$^{2}$ relative to the ground-state configuration of \n$^{40}$Ca, where the Fermi levels for protons and neutrons lie at d$_{3\/2}$\nand f$_{7\/2}$, respectively.\nThe self-consistently calculated second $0^+$ state has \nthe ({\\it sd})$^{-6}$({\\it fp})$^{6}$ configuration.\nHere, the {\\it fp} shell is occupied by two protons and four neutrons,\nwhile the {\\it sd} shell has four proton holes and two neutron holes.\nConsidering the rise of the neutron Fermi level relative to that in $^{36}$Ar,\nthis excited configuration is essentially equivalent to the 4p--4h superdeformed \nconfiguration in $^{36}$Ar.\n\nCranking is then performed to study high-spin states. \nIn the proton sector, the behaviors of single-particle orbitals are similar to \nthose of $^{36}$Ar \\cite{HFB}. 
\nFor example, the occupation numbers in the $\\pi$p$_{3\/2}$ orbital monotonically\ndecrease up to $J=16 \\hbar$ while the $\\pi$f$_{7\/2}$ orbital starts to increase \nat $J=$ 8 $\\hbar$ due to the disappearance of the pairing gap energy.\n\nIn the neutron sector, clear differences are observed from the $^{36}$Ar case.\nThe occupation number in the $\\nu$f$_{7\/2}$ orbital is almost constant ($\\sim$3) \nagainst the total angular momentum; it is about 1.5 times larger than that for \n$^{36}$Ar. \nThe $\\nu$d$_{5\/2}$ orbitals are almost fully occupied ($\\simeq 5.5$) \nfrom the low- to the high-spin regions.\nIn the case of $^{36}$Ar, the structural change involving the sharp backbending\nis caused by a particle feeding from the p$_{3\/2}$ orbital to the f$_{7\/2}$ \norbital for both protons and neutrons.\nIn the neutron sector of $^{40}$Ar, this feeding happens from the p$_{3\/2}$\nto the many other single-particle orbitals.\nThis is because the rise of the neutron Fermi level enhances the occupation \nnumbers of the single-particle orbitals, particularly at the bottom end of \nthe {\\it fp} shell. \nFor example, the f$_{7\/2}$ is well occupied by $\\simeq 40\\%$. \nThis high occupation influences the response of the system to the Coriolis \nforce. \nIn general, low-$\\Omega$ states tend to come down energetically lower, so that \nsuch states are the first to be \"submerged\" when the Fermi level rises. \nAs a result, neutron states near the Fermi level in $^{40}$Ar possess a higher \n$\\Omega$ value and the rotational alignment is suppressed. \nIn $^{36}$Ar, many $\\Omega = 1\/2$ states are vacant in the {\\it fp} shell, \nso that it is possible to place particles in the $\\Omega = 1\/2$ states during \nthe feeding from the p$_{3\/2}$ orbital to the f$_{7\/2}$ orbital.\nHowever, in $^{40}$Ar, such $\\Omega = 1\/2$ states are filled due to the\nrise of the neutron Fermi level. 
\nIt is thus necessary to place neutrons in the $\Omega$ = 3\/2 or 5\/2 levels \nin order to generate angular momentum. \nHowever, this way of feeding weakens the rotational alignment.\nThis ``Pauli blocking effect'' is one of the reasons why $^{40}$Ar does not \nbackbend (at least, not in the spin region so far observed). \nIt is also worth mentioning that because of the rise of the neutron Fermi level \nin $^{40}$Ar, angular momentum generation is spread among the extra f$_{7\/2}$ \nneutrons, in comparison with $^{36}$Ar. \nThis means that, unlike their neutron counterparts, the f$_{7\/2}$ protons do not \nneed to ``work hard'' to generate angular momentum. \nAs a result, simultaneous alignment of the f$_{7\/2}$ protons and neutrons \ndoes not occur in $^{40}$Ar. \nOur calculation confirms this picture. \n\n\nIn summary, a discrete-line superdeformed band has been identified in $^{40}$Ar. \nThe observed large transition quadrupole moment ($Q_{t}= 1.45_{-0.31}^{+0.49} e$b)\nsupports its SD character. \nThe properties of the SD band could be reasonably well explained by cranked HFB \ncalculations with the P+QQ force. \nThe SD band in $^{40}$Ar is similar to those observed in $^{36}$Ar \n\\cite{ar36} and $^{40}$Ca \\cite{ca40}, indicating the persistence of the SD shell \nstructure in the neutron-rich A $=$ 30 $\\sim$ 40 nuclei and possibly implying that \n$^{40}$Ar is a doubly SD magic nucleus with Z $=$ 18 and N $=$ 22.\nThe observed SD structure with a deformation of $\\beta_2 \\sim$ 0.5 caused by the \nmp--mh excitation across the {\\it sd--fp} shell gap might explain\nthe origin of the strongly deformed ground state in the ``island of inversion''.\n\n\nThe authors thank the staff at the JAEA tandem accelerator for providing \nthe $^{18}$O beam. 
\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn his seminal paper, published in 1870, about the solution of a nonlinear equation in a single unknown, \n\\begin{equation}\\label{eq1} \nf(z)=0,\n\\end{equation}\nSchr\\\"oder deals with the problem of characterizing general iterative algorithms to solve~\\eqref{eq1} with a prescribed order of convergence $\\omega\\ge 2$ (see the original paper \\cite{Sch} or the commented English translation \\cite{Ste}). The main core of Schr\\\"oder's work studies two families of iterative processes, the well-known families of first and second kind \\cite{Pet1}, \\cite{Pet2}. The $\\omega$-th member of these families is an iterative method that converges with order $\\omega$ to a solution of~\\eqref{eq1}. In particular, the second method of both families is Newton's method. The third method of the first family is Chebyshev's method. The third method of the second family is Halley's method. The rest of the methods in both families (with order of convergence $\\omega\\ge 4$) are not so well known. \n\nNote that Newton's, Chebyshev's and Halley's methods are also members of another famous family of iterative methods, known as the Chebyshev-Halley family of methods (introduced by Werner \\cite{Wer} and studied by many other authors such as \\cite{Amat} or \\cite{Dub}):\n \\begin{equation}\\label{eq2}\nz_{k+1}=z_k-\\left(1+\\frac{1}{2}\\frac{L_f(z_k)}{1-\\alpha L_f(z_k)}\\right)\\frac{f(z_k)}{f'(z_k)}, \\quad \\alpha\\in \\mathbb{R},\\quad k\\ge 0, \\quad z_0\\in\\mathbb{C},\n\\end{equation}\nwhere we have used the notation\n \\begin{equation}\\label{eq3}\n L_f(z)=\\frac{f(z)f''(z)}{f'(z)^2}.\n\\end{equation}\nIn fact, Chebyshev's method is obtained for $\\alpha=0$, Halley's method appears for $\\alpha=1\/2$ and Newton's method can be obtained as a limit case when $|\\alpha| \\to \\infty$. 
Except for the limit case of Newton's method, all the methods in the family have third order of convergence.\n\nIn this general context of families of iterative methods for solving nonlinear equations, we would like to highlight a detail that appears in the aforementioned paper by Schr\\\"oder~\\cite{Sch}. Actually, in the third section of this article, Schr\\\"oder constructs an algorithm by applying Newton's method to the equation \n$$\n\\frac{f(z)}{f'(z)}=0.\n$$\nThe resulting iterative scheme can be written as\n$$\nz_{k+1}=z_k-\\frac{f(z_k)f'(z_k)}{f'(z_k)^2-f(z_k)f''(z_k)}, \\quad k\\ge 0, \\quad z_0\\in\\mathbb{C},\n$$\nand it is known as \\emph{Schr\\\"oder's method} by many authors (see for instance \\cite{Proinov} or \\cite{Scavo}).\n\nFor our convenience, we denote by $S_f(z)$ the iteration function of Schr\\\"oder's method. Note that it can be written in terms of the function $L_f(z)$ introduced in~\\eqref{eq3} in the following way\n \\begin{equation}\\label{eq4}\nz_{k+1}=S_f(z_k)=z_k-\\frac{1}{1-L_f(z_k)}\\frac{f(z_k)}{f'(z_k)}, \\quad k\\ge 0, \\quad z_0\\in\\mathbb{C}.\n\\end{equation}\nSchr\\\"oder himself compares the resulting algorithm~\\eqref{eq4} with Newton's method and says: \n\\begin{quote}\n``It is an equally worthy algorithm which to my knowledge has not been previously considered. Besides, being almost as simple, this latter algorithm has the advantage that it converges quadratically even for multiple roots''.\n\\end{quote}\n\nCuriously, Schr\\\"oder's method~\\eqref{eq4} belongs neither to the Schr\\\"oder families of first and second kind nor to the Chebyshev-Halley family~\\eqref{eq2}. 
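Schröder's remark on quadratic convergence at multiple roots is easy to probe numerically. The following sketch (ours, not part of the original sources; the test polynomial and the starting point are arbitrary choices) compares the two iterations on a polynomial with a triple root:

```python
# Illustrative comparison (ours): Schroeder's iteration
#   z -> z - f f' / (f'^2 - f f'')
# keeps quadratic convergence at a multiple root, while Newton's
#   z -> z - f / f'
# is only linear there.  Test polynomial (arbitrary choice):
#   f(z) = (z - 2)^3 (z + 1), with a triple root at z = 2.

def f(z):
    return (z - 2) ** 3 * (z + 1)

def fp(z):
    return 3 * (z - 2) ** 2 * (z + 1) + (z - 2) ** 3

def fpp(z):
    return 6 * (z - 2) * (z + 1) + 6 * (z - 2) ** 2

def newton(z):
    return z - f(z) / fp(z)

def schroeder(z):
    return z - f(z) * fp(z) / (fp(z) ** 2 - f(z) * fpp(z))

def iterate(step, z, steps=6):
    for _ in range(steps):
        if f(z) == 0.0:   # already exactly on a root; avoid 0/0
            break
        z = step(z)
    return z

zn = iterate(newton, 2.5)
zs = iterate(schroeder, 2.5)
# After six steps Newton's error is still of order 1e-2, while
# Schroeder's iterate agrees with the root to machine accuracy.
print(abs(zn - 2), abs(zs - 2))
```

At a root of multiplicity $m$, the error ratio of Newton's method tends to $(m-1)\/m$, which makes the merely linear convergence visible after just a few steps.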
It has very interesting numerical properties, such as the quadratic convergence even for multiple roots, but the fact of having a high computational cost (equivalent to that of the third order methods in~\\eqref{eq2}) could be an important handicap for practical purposes.\n\nIn this paper we present a first approach to the dynamical behavior of Schr\\\"oder's method. We show that for polynomials with two different roots and different multiplicities, it is possible to characterize the basins of attraction and the corresponding Julia set. We can appreciate the influence of the multiplicities on such sets.\n\n\\section{Preliminaries}\n\nIn the 13th section of Schr\\\"oder's work~\\cite{Sch}, which has the title \\emph{The Principal Algorithms Applied to very Simple Examples}, we can find the first dynamical study of a couple of rootfinding methods. Actually, Schr\\\"oder considers those that, in his opinion, are the two most useful methods: Newton's method, defined by the iterative scheme\n\\begin{equation}\\label{eq5}\nz_{k+1}=N_f(z_k)=z_k-\\frac{f(z_k)}{f'(z_k)}, \\quad k\\ge 0, \\quad z_0\\in\\mathbb{C},\n\\end{equation}\nand the method $z_{k+1}=S_f(z_k)$ given in~\\eqref{eq4}.\n\nIn the simplest case, namely equations with only one root, we can assume without loss of generality that $f(z)=z^n$. It is easy to see that\n$$\nN_f(z)=\\frac{n-1}{n}z,\\quad S_f(z)=0.\n$$\nSo Schr\\\"oder's method gives the correct root ($z=0$) of the equation in just one step, whereas Newton's method converges to this root with linear convergence:\n$$\nz_k=\\left(\\frac{n-1}{n}\\right)^k z_0.\n$$\nConsequently, for equations with a single root Schr\\\"oder concludes that the convergence region of these two methods is the entire complex plane.\n\nThe next simple case considered by Schr\\\"oder is the quadratic equation. Again, without loss of generality he assumes \n$f(z)=(z-1)(z+1)$. 
After a series of cumbersome calculations, he states that in this case and for both methods, the entire complex plane decomposes into two regions separated by the imaginary axis. A few years later, Cayley~\\cite{Cay} addresses the same problem, only for Newton's method. In a very elegant way, Cayley proves that for polynomials\n\\begin{equation}\\label{eq6}\nf(z)=(z-a)(z-b),\\quad a,b\\in \\mathbb{C}, \\quad a\\ne b,\n\\end{equation}\nNewton's iterates converge to the root $a$ if $|z_0-a|<|z_0-b|$ and to the root $b$ if $|z_0-b|<|z_0-a|$. The Julia set is the equidistant line between the two roots. The key to prove this result is to check that Newton's iteration function~\\eqref{eq5} applied to polynomials~\\eqref{eq6} is conjugate via the M\\\"obius map\n\\begin{equation}\\label{eq7}\nM(z)=\\frac{z-a}{z-b}\n\\end{equation}\nwith the function $R(z)=z^2$, that is, $R(z)=M\\circ N_f\\circ M^{-1}(z)$. The unit circle $S^1=\\{z\\in\\mathbb{C}; |z|=1\\}$ is invariant by $R$. Its anti-image by $M$ is the bisector between the roots $a$ and $b$.\n\nTwo functions $f,g: \\mathbb{C} \\to \\mathbb{C} $ are said to be topologically conjugate if there exists a homeomorphism $\\varphi$ such that \n$$\n\\varphi\\circ g=f\\circ \\varphi.\n$$\nTopological conjugation is a very useful tool in dynamical systems (see \\cite{Dev} for more details)\nbecause two conjugate functions share the same dynamical properties, from the topological viewpoint. For instance, the fixed points of one function are mapped into the fixed points of the other, the periodic points of one function are mapped into the periodic points of the other function, and so on. Speaking informally, we can say that the two functions are the same from a dynamical point of view. As we have just seen, in some cases one of the functions in a conjugation could be much simpler than the other. 
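The conjugation behind Cayley's argument can also be verified numerically. The following sketch (our illustration; the roots and the sample points are arbitrary choices) checks the identity $M\circ N_f(z)=M(z)^2$ at a few complex points:

```python
# Numerical check (ours) of Cayley's conjugation: for
# f(z) = (z - a)(z - b) the Newton map satisfies M(N_f(z)) = M(z)**2,
# where M(z) = (z - a)/(z - b) is the Moebius map.

a, b = 1.0 + 0.0j, -0.5 + 2.0j      # two arbitrary distinct roots

def newton_map(z):
    return z - (z - a) * (z - b) / ((z - a) + (z - b))

def moebius(z):
    return (z - a) / (z - b)

samples = [0.3 + 0.7j, -2.0 + 0.1j, 1.5 - 1.0j]
err = max(abs(moebius(newton_map(z)) - moebius(z) ** 2) for z in samples)
print(err)   # of the order of machine precision
```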
In the case of Cayley's problem, $R(z)=z^2$ is topologically conjugate to (and much simpler than)\n$$\nN_f(z)=\\frac{a b-z^2}{a+b-2 z}.\n$$\n\n\nIn the same way, we have that Schr\\\"oder's method~\\eqref{eq4} applied to polynomials~\\eqref{eq6} \n$$\nS_f(z)=\\frac{z^2 (a+b)-4 a b z+a b (a+b)}{a^2-2 z (a+b)+b^2+2 z^2}\n$$is conjugate to $-R(z)$ via the M\\\"obius map defined in~\\eqref{eq7}, that is, $M\\circ S_f\\circ M^{-1}(z)=-z^2$. Consequently, the dynamical behavior of Schr\\\"oder's method for quadratic polynomials mimics the behavior of Newton's method: the Julia set is the bisector between the two roots and the basins of attraction are the corresponding half-planes.\n\n\\section{Main results}\n\nNow we consider the case of polynomials with two roots, but with different multiplicities, $m\\ge n\\ge 1$:\n\\begin{equation}\\label{eq8}\nf(z)=(z-a)^m(z-b)^n,\\quad a,b\\in \\mathbb{C}, \\quad a\\ne b.\n\\end{equation}\nFor simplicity, and to better appreciate the symmetries, we move the roots $a$ and $b$ to $1$ and $-1$. To do this, we conjugate with the affine map\n\\begin{equation}\\label{eq9}\nA(z)=1+2\\frac{z-a}{a-b}\n\\end{equation}\nto obtain a simpler function that does not depend on the roots $a$ and $b$. Let $T_{m,n}(z)$ be the corresponding conjugate map:\n\\begin{equation}\\label{eq10}\nT_{m,n}(z)=A\\circ S_f\\circ A^{-1}(z)=\\frac{(m-n) z^2 +2 (m+n) z +m-n}{(m+n) z^2 +2(m-n)z +m+n}.\n\\end{equation}\nA new conjugation of $T_{m,n}(z)$ with the M\\\"obius map \\eqref{eq7}, when $a=1$ and $b=-1$, provides a new rational function whose dynamics are extremely simple. Actually:\n\\begin{equation}\\label{eq11}\nR_{m,n}(z)=M\\circ T_{m,n}\\circ M^{-1}(z)=-\\frac{nz^2}{m}.\n\\end{equation}\n\nNote that the circle $C_{m,n}=\\{z\\in\\mathbb{C}; |z|=m\/n\\}$ is invariant by $R_{m,n}(z)$. After iteration by $R_{m,n}(z)$, the orbits of the points $z_0$ with $|z_0|<m\/n$ converge to $0$, whereas the orbits of the points $z_0$ with $|z_0|>m\/n$ go to $\\infty$. 
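This conjugation can be checked numerically as well. The sketch below (ours; the multiplicities and the sample points are arbitrary choices) verifies the identity $M\circ T_{m,n}\circ M^{-1}(w)=-\frac{n}{m}\,w^2$ to machine precision:

```python
# Numerical check (ours) of Eq. (11): the Moebius map M(z) = (z-1)/(z+1)
# conjugates the map T_{m,n} of Eq. (10) to the simple map
# w -> -(n/m) w**2.

def T(z, m, n):
    num = (m - n) * z**2 + 2 * (m + n) * z + (m - n)
    den = (m + n) * z**2 + 2 * (m - n) * z + (m + n)
    return num / den

def M(z):
    return (z - 1) / (z + 1)

def M_inv(w):
    return (1 + w) / (1 - w)

errs = []
for m, n in [(2, 1), (5, 3), (7, 7)]:            # arbitrary multiplicities
    for w in [0.4 + 0.2j, -1.5 + 0.3j, 2.0 - 1.0j]:
        errs.append(abs(M(T(M_inv(w), m, n)) + (n / m) * w**2))
print(max(errs))   # of the order of machine precision
```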
Consequently, $C_{m,n}$ is the Julia set of the map $R_{m,n}(z)$.\n\n\n\\begin{Theorem}\\label{teo1}\nLet $T_{m,n}(z)$ be the rational map defined by~\\eqref{eq10} and let us denote by $J_{m,n}$ its Julia set. Then we have:\n\\begin{enumerate}\n\\item If $m=n$, then $J_{m,m}$ is the imaginary axis.\n\\item If $m>n\\ge 1$, then $J_{m,n}$ is the circle\n$$J_{m,n}=\\left\\{z\\in\\mathbb{C}; \\left|z+\\frac{m^2+n^2}{m^2-n^2}\\right|=\\frac{2mn}{m^2-n^2}\\right\\}.$$\n\\end{enumerate}\n\\end{Theorem}\n\n\\begin{proof}\nThe proof follows immediately, just by taking into account that $J_{m,n}$ is the pre-image by $M(z)=(z-1)\/(z+1)$ of the circle $C_{m,n}$ and by distinguishing the two situations indicated in the statement of the theorem.\n\\end{proof}\n\n\\begin{Theorem}\\label{teo2}\nLet $S_f(z)$ be the rational map defined by applying Schr\\\"oder's method to polynomials~\\eqref{eq8} and let us denote by $J_{m,n,a,b}$ its Julia set. Then we have:\n\\begin{enumerate}\n\\item If $m=n$, $J_{m,m,a,b}$ is the equidistant line between the points $a$ and $b$.\n\\item If $m>n\\ge 1$, then $J_{m,n,a,b}$ is the circle\n$$J_{m,n,a,b}=\\left\\{z\\in\\mathbb{C}; \\left|z+\\frac{b m^2-a n^2}{m^2-n^2}\\right|=\\frac{mn |a-b|}{m^2-n^2}\\right\\}.$$\n\\end{enumerate}\n\\end{Theorem}\n\n\\begin{proof}\nNow we deduce this result by calculating the pre-images of $J_{m,n}$ by the affine map $A(z)$ defined in~\\eqref{eq9} in the two situations indicated in the previous theorem.\n\\end{proof}\n\n\n\n\\section{Conclusions and further work}\n\n\nWe have studied the behavior of Schr\\\"oder's method for polynomials with two different complex roots and with different multiplicities~\\eqref{eq8}. 
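The first theorem above can also be illustrated by direct iteration. The following sketch (ours, with the arbitrary choice $m=3$, $n=1$) shows that starting points inside the circle of Theorem~\ref{teo1} are attracted to the root $-1$, while points outside it are attracted to the root $1$:

```python
# Direct-iteration illustration (ours) of Theorem 1 for m = 3, n = 1:
# the circle |z + (m^2+n^2)/(m^2-n^2)| = 2mn/(m^2-n^2) separates the
# basin of the root -1 (inside) from the basin of the root +1 (outside).

m, n = 3, 1

def T(z):
    num = (m - n) * z**2 + 2 * (m + n) * z + (m - n)
    den = (m + n) * z**2 + 2 * (m - n) * z + (m + n)
    return num / den

center = -(m**2 + n**2) / (m**2 - n**2)    # -1.25
radius = 2 * m * n / (m**2 - n**2)         # 0.75

z_in = center + 0.5 * radius    # a point inside the circle
z_out = center + 2.0 * radius   # a point outside the circle
for _ in range(60):
    z_in, z_out = T(z_in), T(z_out)
print(z_in, z_out)   # approximately -1 and 1
```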
Actually, we have proved that the Julia set of the corresponding rational functions obtained in this case is the circle given in Theorem~\\ref{teo2}.\n\nIn addition, Theorem~\\ref{teo1} gives us a universal result that characterizes the behavior of Schr\\\"oder's method in a very simplified form, depending only on the values of the multiplicities $m$ and $n$. The influence of the roots $a$ and $b$ is revealed in Theorem~\\ref{teo2}, and is just an affine transformation of the situation given in Theorem~\\ref{teo1}.\n\n\nLet us consider the points $(x,y)\\in\\mathbb{R}^2$ given by the centers and radii of the circles defined in Theorem~\\ref{teo1}, that is\n$$\n(x,y)=\\left( \\frac{m^2+n^2}{m^2-n^2}, \\frac{2mn}{m^2-n^2}\\right).\n$$\nThese points belong to the hyperbola $x^2-y^2=1$ in the real plane $\\mathbb{R}^2$.\n\nIn addition, we observe that there are polynomials for which Schr\\\"oder's method has the same dynamical behavior. Actually, if we introduce the new parameter\n$$\np=\\frac{m}{n},\n$$\nwe have that the circles $J_{m,n}$ defined in Theorem~\\ref{teo1} can be expressed as\n$$J_{p}=\\left\\{z\\in\\mathbb{C}; \\left|z+\\frac{p^2+1}{p^2-1}\\right|=\\frac{2p}{p^2-1}\\right\\}.$$\nTherefore Schr\\\"oder's method applied to polynomials with pairs of multiplicities $(m,n)$ having the same quotient $p$ has the same Julia set.\n\nWe can summarize the dynamics of Schr\\\"oder's method applied to polynomials $(z-1)^m(z+1)^n$, $m>n$, in the following way:\n\\begin{itemize}\n\\item When $p=m\/n\\to \\infty$, the Julia sets $J_p$ are circles that tend to collapse onto the point $z=-1$.\n\\item When $p=m\/n\\to 1^+$, the Julia sets $J_p$ are circles with centers on the negative real line. 
Note\nthat the centers satisfy\n$$\n-\\frac{p^2+1}{p^2-1}\\to -\\infty \\text{ when } p\\to 1^+\n$$\nand the radii satisfy\n$$\n\\frac{2p}{p^2-1}\\to \\infty \\text{ when } p\\to 1^+.\n$$\nSo when $p\\to 1^+$ the Julia sets are circles that get bigger and tend to ``explode'' into the limit case, given by the imaginary axis when $p=1$.\n\\end{itemize}\n\nIf we consider the presence of the roots $a$ and $b$, the dynamics of Schr\\\"oder's method applied to polynomials $(z-a)^m(z-b)^n$, $m>n$, can be summarized as a ``travel'' from a circle concentrated at the root with the smallest multiplicity, $b$, to circles with centers on the line connecting the roots $a$ and $b$ and radii tending to infinity, until the ``explosion'' into the limit case, given by the bisector of the two roots, when $p=1$.\n\nIn Figures~\\ref{fig1}--\\ref{fig3} we show some graphics of different Julia sets obtained when Schr\\\"oder's method is applied to polynomials $(z-a)^m(z-b)^n$, $m\\ge n\\ge 1$. We compare these dynamical planes with the ones obtained for Newton's method. For instance, in Figure~\\ref{fig1} we show the behavior when $p=m\/n$ is increasing. We observe how the Julia set for Schr\\\"oder's method (a circle) tends to collapse onto the point $z=-1$, which in this case is the simple root. In the case of Newton's method, the Julia set is a kind of ``deformed parabola'', whose ``axis of symmetry'' is the real line, it is open to the left, the ``vertex'' tends to the simple root $z=-1$ and the ``latus rectum'' tends to zero. We see how the basin of attraction of the multiple root $z=1$ invades more and more the basin of the simple root $z=-1$, as it was pointed out by Hern\\'andez-Paricio et al. \\cite{Gut}.\n\nIn Figure~\\ref{fig2} we see what happens when $p=m\/n\\approx 1$. The Julia sets for Schr\\\"oder's method are circles that get bigger as $p$ approaches the value 1, exploding into a half-plane limited by the imaginary axis when $p=1$. 
In the case of Newton's method, the Julia set is again a ``deformed parabola'' with the real line as ``axis of symmetry'' and open to the left. However, as $p$ goes to 1, the ``vertex'' tends to $z=0$ and the ``latus rectum'' tends to infinity. As a limit case, when $p=1$ this ``deformed parabola'' becomes a straight line, actually the imaginary axis.\n\n\nFigure~\\ref{fig3} shows the circle corresponding to the Julia set of Schr\\\"oder's method applied to polynomials $(z-1)^m(z+1)^n$ with $p=m\/n=2$. We can also see the Julia set of Newton's method applied to such polynomials. In the case of Newton's method we observe that the behavior is not the same for values of $m$ and $n$ such that $p=m\/n=2$. The corresponding ``deformed parabola'' tends to be smoother when the values of $m$ and $n$ increase.\n\nFinally, in Figure~\\ref{fig4} we show the Julia set $J_{m,n,a,b}$ defined in Theorem~\\ref{teo2} in the case $m=2$, $n=1$, $a=1$, $b=i$ together with the corresponding Julia set for Newton's method. In these figures we observe the loss of symmetry with respect to the imaginary axis. This role is now played by the equidistant line between the roots $a$ and $b$.\n\nAs further work, we would like to explore the influence of the multiplicity on the Julia set of Newton's method applied to polynomials $(z-a)^m(z-b)^n$, $m\\ge n\\ge 1$, and its possible relationship with the study of Schr\\\"oder's method. 
In particular, we are interested in characterizing the main properties of the ``deformed parabolas'' that appear in the case of Newton's method.\n\n\n\n\n\n\\begin{figure}[H]\n\\centering \n\\twofractals{imagenes\/sch_m_2n_1.jpg}{Schr\\\"oder $m=2, \\, n=1.$}{imagenes\/newton_m_2n_1}{Newton $m=2, \\, n=1.$}\n\n\\twofractals{imagenes\/sch_m_5n_1}{Schr\\\"oder $m=5, \\, n=1.$}{imagenes\/newton_m_5n_1}{Newton $m=5, \\, n=1.$}\n\n\\twofractals{imagenes\/sch_m_8n_1}{Schr\\\"oder $m=8, \\, n=1.$}{imagenes\/newton_m_8n_1}{Newton $m=8, \\, n=1.$}\n\\caption{Basins of attraction of Schr\\\"oder's and Newton's methods applied to polynomials $(z-1)^n(z+1)^m$ for $n=1$, $m=2, 5, 8$.}\n\\label{fig1}\n\\end{figure} \n\n\\begin{figure}[H]\n\\centering \n\\twofractals{imagenes\/sch_n_6_m_6}{Schr\\\"oder $m=6, \\, n=6.$}{imagenes\/newton_n_6_m_6}{Newton $m=6, \\, n=6.$}\n\n\\twofractals{imagenes\/sch_m_7n_6}{Schr\\\"oder $m=7, \\, n=6.$}{imagenes\/newton_m_7n_6}{Newton $m=7, \\, n=6.$}\n\n\\twofractals{imagenes\/sch_m_8n_6}{Schr\\\"oder $m=8, \\, n=6.$}{imagenes\/newton_m_8n_6}{Newton $m=8, \\, n=6.$}\n\\caption{Basins of attraction of Schr\\\"oder's and Newton's methods applied to polynomials $(z-1)^n(z+1)^m$ for $n=6$, $m=6, 7, 8$.}\n\\label{fig2}\n\\end{figure} \n\n\n\\begin{figure}[H]\n\\centering \n\\imagen{imagenes\/sch_m_4n_2}{Schr\\\"oder $p=m\/n=2.$}{imagenes\/newton_m_4n_2}{Newton $m=4, \\, n=2.$}\n\n\\imagen{imagenes\/newton_m_6n_3}{Newton $m=6, \\, n=3.$}{imagenes\/newton_m_8n_4}{Newton $m=8, \\, n=4.$}\n\n\\caption{The first graphic shows the basin of attraction of Schr\\\"oder's method applied to polynomials $(z-1)^n(z+1)^m$ with $p=m\/n=2$. 
The other graphics show the basins of attraction of Newton's method applied to the same polynomials for different values of $m$ and $n$ with $p=m\/n=2$.}\n\\label{fig3}\n\\end{figure} \n\n\\begin{figure}[H]\n\\centering \n\\includegraphics[width=0.4\\textwidth]{imagenes\/sch-i_m2_n1.jpg}\\quad \n\\includegraphics[width=0.4\\textwidth]{imagenes\/newton1-i_m2_n1.jpg}\n\n\n\\caption{Basin of attraction of Schr\\\"oder's and Newton's methods applied to polynomials $(z-1)^2(z+i)$.}\n\\label{fig4}\n\\end{figure} \n\\newpage\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nRayleigh-B\\'enard convection of a horizontal fluid layer heated from\nbelow is a paradigmatic system for the study of complex spatial and\nspatio-temporal patterns \\C{BoPe00}. Usually, the system parameters\nare chosen such that variations of the fluid properties across the\nlayer are very small. In the theoretical description this allows the\nOberbeck-Boussinesq (OB) approximation, in which the temperature\ndependence is neglected in all fluid properties except for the density\nterm that is responsible for the buoyancy. For large temperature\ndifferences across the layer the variation of the fluid properties\nwith the temperature become significant. These non-Oberbeck-Boussinesq\n(NB) effects break the up-down symmetry that is characteristic of the\nOB approximation and, thus, allow a resonant triad interaction among\nthree Fourier modes whose wavevectors form a hexagonal planform. Due\nto the resonant interaction the primary bifurcation to the hexagons is\ntranscritical, and hexagons are preferred over rolls in the immediate\nvicinity of onset \\cite{Bu67}. 
\n\nWithin the framework of the leading-order weakly nonlinear analysis,\nhexagons typically become unstable to rolls further above threshold,\nwhere the amplitudes are larger and the resonant-triad interaction\nloses significance compared to interactions involving four modes\n\\cite{Pa60,SeSt62,Se65,Bu67,PaEl67,DaSe68}. This scenario of a\ntransition from hexagons to rolls has been confirmed in quite a number\nof experimental investigations\n\\cite{SoDo70,DuBe78,Ri78,WaAh81,BoBr91,PaPe92}, and quite commonly it\nhas been assumed that hexagon patterns in NB convection are confined\nto the regime close to onset.\n\nTwo convection experiments using SF$_6$ as the working fluid\n\\C{AsSt96,RoSt02} have shown, however, that even in the strongly\nnonlinear regime stable hexagon patterns can be observed. Under OB\nconditions \\cite{AsSt96} hexagons were found at relatively high\nRayleigh numbers, $\\epsilon \\equiv (R-R_c)\/R_c \\approx 3.5$ \n\\C{AsSt96}. Due to the mid-plane symmetry of OB convection hexagons\nwith up-flow in the center coexist stably with down-flow hexagons in\nthis regime. In experiments using SF$_6$ under NB conditions\n\\cite{RoSt02} the hexagons that appear at threshold were replaced by\nrolls for somewhat larger heating and then reappeared in the strongly\nnon-linear regime near $\\epsilon ={\\mathcal O}(1)$. The\nrestabilization was attributed to the large compressibility of SF$_6$\nnear its critical point \\cite{RoSt02}. The hexagons that regain\nstability were termed {\\em reentrant hexagons}.\n\nRecent numerical computations \\cite{MaRi05} have demonstrated that \nhexagons can restabilize in NB convection even if the fluid is {\\em\nincompressible}. For instance, in water hexagons that are unstable at\n$\\epsilon=(R-R_c)\/R_c =0.15$ can already restabilize at\n$\\epsilon=0.2$. 
The origin of this reentrance was traced back to the\nexistence of stable hexagons in OB convection at large Rayleigh\nnumbers, and the dependence of the NB effects on the Rayleigh number.\n\nAt low Prandtl numbers ($Pr\\simeq 1$) and further above onset OB\nconvection exhibits a new state: {\\em spiral defect chaos} (SDC). It\nwas first found experimentally \\C{MoBo93,MoBo96,LiAh96} and then\ninvestigated theoretically using simple models \\cite{XiGu93,CrTu95} as\nwell as simulations of the full fluid equations \\cite{DePe94a,ChPa03}.\nThis fascinating state of spatio-temporal chaos is characterized by\nrotating spirals with varying numbers of arms and of different size,\nwhich appear and disappear irregularly and interact with each other\nand with other defects. SDC arises from the roll state at a threshold \nthat can be as low as $\\epsilon=0.1$ in the limit of small $Pr$. It\nis driven by large-scale flows that are induced by roll curvature and\nhave a strength that is proportional to $Pr^{-1}$ \\cite{ChPa03}.\n\nSo far, strongly non-linear NB convection has been studied mostly for\nlarge Prandtl numbers \\cite{YoRi03b,MaPe04,MaRi05}, but little is\nknown for small ( $Pr\\simeq 1$) or very small Prandtl numbers ($Pr\\ll \n1$). In particular, whether reentrant hexagons exist at large\n$\\epsilon$ in the presence of the large-scale flows that arise at low\n$Pr$, and how NB effects impact spiral defect chaos are interesting\nquestions, which we address in this paper. \n\nHere we study NB convection in gases with small Prandtl numbers.\nSpecifically, we consider parameters corresponding to convection in\nCO$_2$ and SF$_6$ ($Pr\\simeq 0.8$) and in a H$_2$-Xe mixture\n($Pr=0.17$). We show that reentrant hexagons are possible in\nconvection in CO$_2$. For spiral defect chaos in SF$_6$ we find that\nNB effects promote small convection cells (`bubbles'). In H$_2$-Xe,\ncloser to threshold, roll-like structures dominate. 
In both cases the\nNB effects reduce the spiral character of the pattern. We quantify the\nimpact of the NB effects on spiral defect chaos using geometric\ndiagnostics that we have proposed recently \\cite{RiMa06}. \n\nThe paper is organized as follows. In Sec.\\ref{sec:basicequations} we\nbriefly review the basic equations, emphasizing how our computations\nfocus on weakly non-Boussinesq effects, but strongly nonlinear\nconvection. In Sec.\\ref{sec:co2stability} we present the results for\nthe linear stability of hexagons and rolls in CO$_2$ for a range of\nparameters accessible experimentally. To compare with experiments, we\nstudy the influence of different lateral boundary conditions on the\ntransition from hexagons to rolls in Sec. \\ref{sec:sim-co2}. In Sec.\n\\ref{sec:sim-sf6} we discuss spiral defect chaos in SF$_6$ under NB\nconditions. The stability of hexagons and spiral defect chaos in\nfluids with very low Prandtl number ($Pr=0.17$) is studied in a \nmixture of $H_2$ and $Xe$ in Sec. \\ref{sec:h2xe}. Finally, conclusions\nare drawn in Sec.\\ref{sec:conclusions}.\n\n\n\\section{Basic equations \\LB{sec:basicequations} }\n\nThe basic equations that we use for the description of NB\nconvection have been discussed in detail previously\n\\cite{YoRi03b,MaRi05}. We therefore give here only a brief summary. \nWe consider a horizontal fluid layer of thickness $d$, density $\\rho$,\nkinematic viscosity $\\nu$, heat conductivity $\\lambda$, thermal\ndiffusivity $\\kappa$, and specific heat $c_p$. The system is heated\nfrom below (at temperature $T_1$) and cooled from above (at\ntemperature $T_2 < T_1$). \n\nTo render the governing equations and boundary conditions\ndimensionless we choose the length $d$, the time $d^{2}\/\\kappa_0$, the\nvelocity $\\kappa_0\/d$, the pressure $\\rho_0\\nu_0 \\kappa_0\/d^{2}$, and\nthe temperature $T_s=\\nu_0 \\kappa_0\/\\alpha_0 g d^3$ as the respective\nscales. 
The subscripted quantities refer to the respective values at\nthe middle of the fluid layer in the conductive state. The\nnon-dimensionalization gives rise to two dimensionless quantities: the\nPrandtl number $Pr=\\nu_0\/\\kappa_0$, and the Rayleigh number\n$R=\\alpha_0 \\Delta T g d^3\/\\nu_0 \\kappa_0$. Furthermore, we write the\nequations in terms of the dimensionless momentum density $v_i=\\rho d\nu_i\/\\rho_0 \\kappa_0$ instead of the velocities $u_i$. The\ndimensionless form of the temperature $\\hat T =T\/T_s$, heat\nconductivity $\\hat \\lambda =\\lambda\/\\lambda_0$, density $\\hat \\rho\n=\\rho\/\\rho_0$, kinematic viscosity $\\hat \\nu =\\nu\/\\nu_0$, and specific\nheat $\\hat c_p =c_p\/c_{p0}$ will be used in the ensuing equations and\nthe hats dropped for clarity. In dimensionless form the equations for\nthe momentum, mass conservation and heat are then given, respectively,\nby\n\\begin{eqnarray}\n\\frac{1}{Pr}\\left(\\partial_tv_i+v_j\\partial_j\\left(\\frac{v_i}{\\rho}\\right)\\right)&=&-\\partial_i\np \\\\\n&&+\\delta_{i3}\\left(1+\\gamma_1(-2 z+\\frac{\\Theta}{R})\\right)\\Theta\\nonumber \\\\\n&&+\\partial_j\\left[\\nu\\rho\\left(\\partial_i(\\frac{v_j}{\\rho})+\\partial_j(\\frac{v_i}{\\rho})\\right)\\right]\n\\nonumber \\LB{e:v}\\\\\n\\partial_jv_j&=&0, \\LB{e:cont}\\\\\n\\partial_t\\Theta+\\frac{v_j}{\\rho}\\partial_j\\Theta\n& =&\\frac{1}{\\rho\nc_p}\\partial_j(\\lambda\\partial_j\\Theta)-\\gamma_3\\partial_z\\Theta-\\nonumber \\\\\n&& R\\frac{v_z}{\\rho}(1+\\gamma_3z).\\LB{e:T}\n\\end{eqnarray}\nwith the dimensionless boundary conditions \n\\begin{eqnarray}\n\\vec{v}(x,y,z,t)=\\Theta(x,y,z,t)=0 \\mbox{ at } z= \\pm \\frac{1}{2}.\\LB{e:bc}\n\\end{eqnarray}\nHere $\\Theta$ is the deviation of the temperature field from the basic\nconductive profile. 
Summation over repeated indices is assumed.\n\nWe consider the NB effects to be weak and retain in a\nTaylor expansion of all material properties only the leading-order\ntemperature dependence {\\it beyond} the OB approximation. For\nthe density this implies also a quadratic term with coefficient\n$\\gamma_1$. It contributes, however, only to the buoyancy term in\n(\\ref{e:v}); in all other expressions it would constitute only a\nquadratic correction to the leading-order NB effect. Thus,\nthe remaining temperature dependence of the fluid parameters $\\rho$,\n$\\nu$, $\\lambda$, and $c_p$ in (\\ref{e:v},\\ref{e:cont},\\ref{e:T}) is\ntaken to be linear\n\\begin{eqnarray}\n\\rho(\\Theta)&=&1-\\gamma_0(-z+\\frac{\\Theta}{R}),\\LB{e:rhoTh}\\\\\n\\nu(\\Theta)&=& 1+\\gamma_2(-z+\\frac{\\Theta}{R}),\\LB{e:nuTh}\\\\\n\\lambda(\\Theta)&=&1+\\gamma_3(-z+\\frac{\\Theta}{R}),\\LB{e:lambdaTh}\\\\\nc_p(\\Theta)&=&1+\\gamma_4(-z+\\frac{\\Theta}{R}).\\LB{e:cpTh}\n\\end{eqnarray}\n\nThe coefficients $\\gamma_i$ give the difference of the respective\nfluid properties across the layer. They depend therefore linearly on\nthe Rayleigh number, \n\\begin{eqnarray}\n\\gamma_i(\\Delta T)=\\gamma_i^{c}\\,\\left(\\frac{R}{R_c} \\right) \n=\\gamma_i^{c} \\, (1+\\epsilon) ,\n\\end{eqnarray}\nwhere $\\gamma_i^{c}$ is the value of $\\gamma_i$ at the onset of convection\nand $\\epsilon\\equiv (R-R_c(\\gamma_i^{c}))\/R_c(\\gamma_i^{c})$ is the\nreduced Rayleigh number. \n\nIn analogy to \\cite{Bu67}, we further omit NB terms that\ncontain cubic nonlinearities in the amplitudes $v_i$ or $\\Theta$, as\nthey arise from the expansion of the advection terms $v_j\n\\partial_j(v_i\/\\rho)$ and $(v_j\/\\rho)\\partial_j \\Theta$ when the\ntemperature-dependence of the density is taken into account. 
Since we\nwill be considering Rayleigh numbers up to twice the critical value,\nwhich implies enhanced NB effects, these approximations\nmay lead to quantitative differences compared to the fully\nNB system, even though the temperature-dependence of the\nmaterial properties themselves is in most situations well described by\nthe linear (or quadratic in the case of the density) approximation. \n\nTo quantify the overall strength of the NB-effects we use Busse's\nNB parameter $Q$, which is given by\n\\begin{eqnarray}\nQ = \\sum_{i=0}^{4}\\gamma_i^{c} {\\cal P}_i,\\LB{e:busseq}\n\\end{eqnarray}\nwhere the quantities ${\\cal P}_i$ are linear functions of $Pr^{-1}$.\nThe NB parameter $Q$ quantifies the breaking of the up-down symmetry,\nwhich renders at most one of the two types of hexagons stable. Gases\nhave a positive value of $Q$ and exhibit hexagons with down-flow in\nthe center ($g$-hexagons), whereas liquids have negative $Q$ and show\nhexagons with up-flow ($l$-hexagons).\n\nWe focus in this paper on the stability properties of patterns in the\nstrongly nonlinear regime. They are determined by a Galerkin method\n(e.g. \\cite{BuCl79a}). We use a Fourier expansion on a hexagonal\nlattice in the lateral directions. The Fourier wave vectors ${\\bf q}$\nare constructed as linear combinations of the hexagon basis vectors\n${\\bf b}_1 =q(1,0)$ and ${\\bf b}_2 =q(1\/2, \\sqrt{3}\/2)$ with ${\\bf\nq} = m {\\bf b}_1 + n {\\bf b}_2$ where the integers $m$ and $n$ are in\nthe range $|m {\\bf b}_1+n{\\bf b}_2 | \\le n_q q$. The largest\nwavenumber is then $n_q q$ and the number of Fourier modes retained is\ngiven by $1+6\\sum_{j=1}^{n_q}j$. Typically we use $n_q =3 $. The top\nand bottom boundary conditions are satisfied by using appropriate\ncombinations of trigonometric and Chandrasekhar functions in $z$\n(\\cite{Ch61,Bu67}). In most of the computations we use $n_z=6$ modes\nfor each field in Eq.(\\ref{e:v},\\ref{e:cont},\\ref{e:T}). 
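The construction of the hexagonal wavevector lattice described above can be sketched in a few lines (our illustration, not the authors' Galerkin code; lengths are measured in units of the basic wavenumber $q$):

```python
# Sketch (ours) of the hexagonal Fourier lattice: wavevectors
# q = m*b1 + n*b2 with |q| <= n_q, whose number should equal
# 1 + 6*sum_{j=1}^{n_q} j (37 modes for the typical choice n_q = 3).

import math

def hex_modes(n_q):
    b1 = (1.0, 0.0)
    b2 = (0.5, math.sqrt(3.0) / 2.0)
    modes = []
    for m in range(-2 * n_q, 2 * n_q + 1):
        for n in range(-2 * n_q, 2 * n_q + 1):
            kx = m * b1[0] + n * b2[0]
            ky = m * b1[1] + n * b2[1]
            if math.hypot(kx, ky) <= n_q + 1e-9:   # tolerance for boundary modes
                modes.append((m, n))
    return modes

# The lattice norm is |m b1 + n b2|^2 = m^2 + m n + n^2, so the count
# matches the closed formula for the first few truncations.
for n_q in (1, 2, 3):
    assert len(hex_modes(n_q)) == 1 + 6 * n_q * (n_q + 1) // 2

print(len(hex_modes(3)))   # 37
```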
The linear\nanalysis yields the critical Rayleigh number $R_c$ as well as the\ncritical wavenumber $q_c$. Both depend on the NB-coefficients\n$\gamma_i^{c}$ which in turn depend on $R_c$. Thus, in principle one\nobtains an implicit equation for the $\gamma_i^{c}$. The shift in the\ncritical Rayleigh number away from the classical value $R_c=1708$ due\nto the NB-effects is, however, quite small (less than 1 percent) and\ntherefore the resulting change in the $\gamma_i^{c}$ is negligible. In\nthis paper we therefore choose the $\gamma_i^{c}$ corresponding to\n$R_c=1708$. \n\n\nTo investigate the nonlinear hexagon solutions, we start with\nthe standard weakly nonlinear analysis to determine the\ncoefficients of the three coupled amplitude equations for the\nmodes making up the hexagon pattern. Obtaining the fully\nnonlinear solutions requires the solution of a set of nonlinear\nalgebraic equations for the expansion coefficients with respect\nto the Galerkin modes. This is achieved with a Newton solver for\nwhich the weakly nonlinear solutions serve as convenient\nstarting solutions. In the Galerkin code amplitude instabilities\nare tested by linear perturbations of the expansion\ncoefficients. In addition, modulational instabilities are considered, which involve the introduction of Floquet\nmultipliers $\exp (i {\bf s}\cdot (x,y))$ in the Fourier ansatz for the linear perturbations \nof the Galerkin solutions. \n\nWe also study the temporal evolution of the system. For that we employ\na Fourier spectral code on a rectangular grid $(i\,dq_x,j\,dq_y)$,\n$i,j=1...N$, with $dq_y\/dq_x=\sqrt{3}\/2$ to accommodate perfect\nhexagonal patterns. The same vertical modes are used as in the\nGalerkin stability code \cite{DePe94a}. To solve for the time\ndependence a fully implicit scheme is used for the linear terms,\nwhereas the nonlinear parts are treated explicitly (second order\nAdams-Bashforth method). 
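The semi-implicit splitting used in the time-stepping code can be illustrated on a scalar model problem. The sketch below (Python) is schematic only: the model equation $\dot u=-u-u^3$ and the backward-Euler treatment of the linear term are our own simplifications standing in for the actual spectral discretization, while the nonlinear term is advanced with the second-order Adams-Bashforth formula mentioned above.

```python
def step_imex_ab2(u, n_prev, dt, lin, nonlin):
    """One step of (u_new - u)/dt = lin*u_new + 1.5*N(u) - 0.5*N(u_prev):
    the linear term is treated implicitly, the nonlinear term explicitly (AB2)."""
    n_cur = nonlin(u)
    u_new = (u + dt * (1.5 * n_cur - 0.5 * n_prev)) / (1.0 - dt * lin)
    return u_new, n_cur

lin = -1.0                     # stable linear operator (here just a scalar)
nonlin = lambda u: -u ** 3     # cubic nonlinearity, treated explicitly
dt = 0.01
u, n_prev = 1.0, nonlin(1.0)   # bootstrap AB2 with N(u) at t = 0
for _ in range(1000):          # integrate to t = 10; u decays toward 0
    u, n_prev = step_imex_ab2(u, n_prev, dt, lin, nonlin)
```

The implicit treatment of the stiff linear part permits time steps that would be unstable for a fully explicit scheme.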
The time step is typically taken to be\n$t_v\/500$, where $t_v$ is the vertical diffusion time. We have tested\nthat the stability regimes obtained from the Galerkin analysis are\nconsistent with the direct numerical simulations. Both codes employed\nin this paper have been kindly provided by W. Pesch\n\cite{DePe94a,Pe96}. \n\n\section{Reentrant hexagons in CO$_2$ \LB{sec:co2stability}}\n\nIn this paper we investigate specific scenarios for convection in\ngases that should be experimentally realizable. We focus in this\nsection on CO$_2$ in a conventional range for the layer thickness,\npressure, and temperature. Table \ref{t:gamma-co2} provides the NB\ncoefficients and the $Q$-value at the onset of convection for a\nrepresentative range of the mean temperature $T_0$ in a layer of\nthickness $d=0.08\,cm$ \footnote{These values were obtained with a\ncode kindly provided by G. Ahlers.}. \n\n\begin{table}\n\begin{tabular}{|c|cccccccc|}\hline \n$T_0$ & $\Delta T_c$ & $Pr$ & $\gamma_0^{c}$ &\n$\gamma_1^{c}$ & $\gamma_2^{c} $ & $\gamma_3^{c}$ &\n$\gamma_4^{c}$ & $Q$ \\ \hline \n20 & 9.43 & 0.87 & 0.0486 & -0.0669 & 0.0779 &0.0236 & -0.0251& 1.199 \\ \hline \n40 & 15.52 & 0.84 & 0.0685 & -0.0883 & 0.1132 &0.0508 & -0.0184& 1.724 \\ \hline \n60 & 23.80 & 0.82 & 0.0931 & -0.1148 & 0.1566 &0.0919 & -0.0074 & 2.430 \\ \hline \n\end{tabular} \n\caption{Values of the critical temperature difference $\Delta T_c$ (in $^\circ C$), the \nPrandtl number $Pr$, NB coefficients\n$\gamma_i^{c}$, and Busse's parameter $Q$ for CO$_2$ at the\nonset of convection for three values of the mean temperature (in\n$^\circ C$). The values correspond to a layer thickness of \n$d=0.08\,cm$ and a pressure of $P=300$\,psi. 
\LB{t:gamma-co2} } \n\end{table} \n\n\n\subsection{Amplitude instabilities}\n\nIn our analysis we first concentrate on spatially periodic solutions\nwith their wavenumber fixed at the critical wavenumber and\ndiscuss their domains of existence and their stability with respect to\namplitude instabilities, which do not change the wavenumber. For\nthree different cells with thicknesses $d=0.07,\, 0.08, \mbox{ and }\n0.09$\, cm, respectively, we consider the range $0 < \epsilon < 1$ at\na pressure of $P=300$\,psi. \n\n\begin{center}\n\begin{figure}\n\centering\n\includegraphics[width=0.4\textwidth,angle=0]{arti-co2-arti-3.eps}\n\caption{Stability regions for hexagons and rolls in CO$_2$ with respect to amplitude \nperturbations for three fluid depths: $d=0.07\, cm$ (dotted\nlines), $d=0.08\, cm$ (full lines), $d=0.09\, cm$ (dot-dashed line).\nPressure is kept at $P=300$\,psi.\nThick lines: stability boundaries for hexagons. Thin lines: stability\nboundaries for rolls. For a given depth, rolls are stable above the thin \nline, and hexagons are unstable inside the region bounded by the thick line. \LB{fig:co2-ampli3d}}\n\end{figure}\n\end{center}\n\nThe results of the stability analysis for hexagons and rolls are shown\nin Fig.\ref{fig:co2-ampli3d}. The hexagons are linearly stable for\nvery small $\epsilon$. For a given layer thickness and not too high\nmean temperature $T_0$ the hexagons become unstable as the control\nparameter is increased. The hexagon patterns then undergo a second\nsteady bifurcation as the control parameter is increased further and\nbecome stable again. Such restabilized hexagons have been termed {\em\nreentrant hexagons} \cite{AsSt96,RoSt02,MaRi05}. As the mean\ntemperature is increased or the layer thickness is decreased the\ncritical heating and with it the NB effects increase. 
This shifts the\npoint of reentrance to lower $\\epsilon$ and the lower stability limit\nto higher $\\epsilon$, decreasing the $\\epsilon$-range over which the\nhexagons are unstable, until the two limits merge at a temperature\n$T_m$. For $T_0>T_m$ the hexagons are amplitude-stable over the whole\nrange of $\\epsilon$ considered ($0 \\le \\epsilon \\le 1$).\n\n\\begin{center}\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.4\\textwidth,angle=270]{artico2-To40.ps} \n\\caption{ Stability regions for hexagons in CO$_2$ with respect\nto amplitude (dashed line) and side-band perturbations \n(solid line) for layer depth $d=0.08\\,\ncm$, pressure $P=300$\\,psi, mean temperature $T_0=40\\,^oC$, and Prandtl number $Pr=0.84$.\nThe NB coefficients $\\gamma_i^{c}$ are reported in Tab. \\ref{t:gamma-co2}.\nHexagons are stable with respect to amplitude perturbations outside the\ndashed-line region, and stable with respect to side-band\nperturbation inside the solid-line region.\\LB{fig:side-co2}}\n\\end{figure}\n\\end{center}\n\nWe have also computed the stability of rolls with respect to amplitude\nperturbations. The corresponding stability limits are indicated in\nFig.\\ref{fig:co2-ampli3d} by thin lines. Rolls are stable above these\nlines. As the NB effects become stronger the stabilization\nof rolls is shifted to larger $\\epsilon$. In contrast to the\nhexagons, the rolls do not undergo a second bifurcation within the\nparameter regime investigated and remain amplitude-stable beyond\n$\\epsilon =1.0$. For strong NB\neffects one has therefore a large range of parameters over which the\ncompeting rolls and hexagons are both linearly amplitude-stable. \n\nThe amplitude-stability limits of the hexagons and rolls depend on\ntheir wavenumber. This is illustrated for the hexagons in Fig.\n\\ref{fig:side-co2} for a mean temperature of $T_0=40\\,^o C$. 
The\ninstability region with respect to amplitude perturbations forms a\nbubble-like closed curve, inside of which the hexagons are unstable. \n\nIt is worth mentioning that the stability limits for hexagons in\nCO$_2$ are quite similar to those of NB convection in\nwater, except that in CO$_2$ the NB effects increase\nrather than decrease with increasing mean temperature \cite{MaRi05}.\n\n\begin{center}\n\begin{figure}\n\centering\n\includegraphics[width=0.4\textwidth,angle=270]{side-to27-co2.ps}\n\caption{ Stability regions for hexagons in CO$_2$ with respect to \nside-band perturbations for layer thickness $d=0.052\, cm$, pressure\n$P=300$\,psi, mean temperature $T_0=27^o C$, and Prandtl number $Pr=0.86$. The NB coefficients\nare $\gamma_0^{c}=0.2053$, $\gamma_1^{c}= -0.2877$, $\gamma_2^{c}=0.3267$, $\gamma_3^{c}=0.1152$, and $\gamma_4^{c}=-0.086$.\nThe hexagons are stable with respect to side-band perturbations in the region \ninside the solid lines and unstable outside. The dashed line corresponds to\nthe neutral curve. \LB{fig:side-to27}} \n\end{figure}\n\end{center}\n\n\n\subsection{Side-Band Instabilities}\n\nIn systems with sufficiently large aspect ratio side-band instabilities\ncan be the most relevant instabilities. Using the Galerkin method, we\nhave studied the stability of the hexagons with respect to long- and\nshort-wave side-band perturbations for $T_0=40^oC$. The results are\nshown in Fig.\ref{fig:side-co2}. We find that over the whole range\n$0\le \epsilon \le 1$ the only relevant side-band perturbations are\nlong-wave and steady, as is the case in the weakly nonlinear regime.\nThe same is true also in water with higher Prandtl number \C{MaRi05}.\nIn this parameter regime the stability region consists of two\ndisconnected domains, reflecting the reentrant nature of the hexagons.\nThe stability domain near onset is very small and closes up as the\namplitude stability limit is reached. 
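For orientation, the structure of such a side-band calculation can be illustrated in the simplest amplitude-equation setting. The sketch below (Python) evaluates the long-wave (Eckhaus) growth rate for roll solutions $A=\sqrt{1-q^2}\,e^{iqx}$ of the real Ginzburg-Landau equation $\partial_t A = A + \partial_x^2 A - |A|^2 A$; this toy problem merely stands in for, and is far simpler than, the hexagon Galerkin stability analysis used here.

```python
import math

def sideband_growth(q, s):
    """Largest eigenvalue of the 2x2 linearization about the roll
    solution A = a*exp(i*q*x), a^2 = 1 - q^2, of the real
    Ginzburg-Landau equation, perturbed at side-band wavenumber s:
    lambda_+ = -a^2 - s^2 + sqrt(a^4 + 4*q^2*s^2)."""
    a2 = 1.0 - q ** 2
    return -a2 - s ** 2 + math.sqrt(a2 ** 2 + 4.0 * q ** 2 * s ** 2)

# Eckhaus criterion: long-wave side-band stability requires q^2 < 1/3
lam_stable = sideband_growth(0.5, 0.1)    # q^2 = 0.25 < 1/3: decay
lam_unstable = sideband_growth(0.7, 0.1)  # q^2 = 0.49 > 1/3: growth
```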
In the reentrant regime the\nstable domain opens up again in an analogous fashion when the\namplitude-stability limit is crossed. Note that the stability\nboundaries are leaning toward lower wavenumbers. Thus, stable\nreentrant hexagon patterns are expected to have wavenumbers below\n$q_c$. This is related to the fact that in the OB case hexagons can be\nstable for large $\\epsilon$, but they are side-band stable only for\nsmall wavenumbers \\cite{ClBu96}.\n\nAs the mean temperature is increased the bubble of the amplitude\ninstability shrinks and eventually disappears, as shown in\n\\F{fig:side-to27} for a cell with thickness $d=0.052\\, cm$ and mean\ntemperature $T_0 =27\\,^o C$. As before, the relevant side-band\ninstabilities are long-wave all along the stability limits and with\nincreasing $\\epsilon$ the wavenumber range over which the hexagons are\nstable shifts towards smaller wavenumbers. For these parameters the\nregion of side-band-stable hexagons reaches without interruption from\nthe strongly nonlinear regime all the way down to threshold. For yet\nstronger NB effects the range of stable wavenumbers widens. \n\n\\section{Comparison with experiments in CO$_2$ \\LB{sec:sim-co2}}\n\nBodenschatz {\\em et al.} \\cite{BoBr91} carried out a set of experiments on\nconvection in CO$_2$ in a cylindrical cell with aspect ratio\n$\\Gamma\\sim172$, thickness $d=0.052\\, cm$, and pressure $P=300$\\, psi.\nUnder these conditions NB effects are relevant. In the experiments a\nweakly hysteretic transition from hexagons to rolls was found near\n$\\epsilon=0.1$. Noting that this transition point was below the\namplitude instability of hexagons to rolls as predicted by weakly\nnonlinear theory, the authors interpreted their results in terms of\nthe heterogeneous nucleation of rolls by the sidewalls. 
They found\nthat for small $\epsilon$ the concentric rolls induced by the sidewall\nheating remained confined to the immediate vicinity of the sidewalls;\nhowever, as $\epsilon$ was increased the rolls invaded the hexagons\nand filled the whole cell, inducing a transition from hexagons to\nrolls. \n\nA comparison of the experimental findings with the stability results\nshown in Fig. \ref{fig:side-to27} shows that indeed the transition\ncannot be due to an amplitude instability of the hexagons. In fact, in\nthis regime the NB effects are so strong that, in contrast to the\npredictions of the weakly nonlinear theory, the hexagons do not\nundergo an amplitude instability at all. To clarify the influence of\nthe sidewalls and to assess the significance of the side-band\ninstabilities for the transition from hexagons to rolls, we perform\ndirect simulations of the Navier-Stokes equations\n(\ref{e:v},\ref{e:cont},\ref{e:T},\ref{e:bc}) for two different sets\nof boundary conditions \footnote{In the experimental setup the top\ntemperature is held constant at $T=12.84 ^oC$, and therefore the mean\ntemperature changes as $\epsilon$ is increased. In our computations,\nhowever, we keep $T_0$ fixed. Since the transition occurs quite close\nto threshold this is a good approximation to the experimental\nprocedure.}: \n\ni) periodic boundary conditions,\n\nii) concentric rolls as boundary conditions. In our computations this\ntype of boundary condition is generated by a suitably patterned \nheating in the interior of the fluid. In the experiments concentric\nrolls near the side walls were generated by a side-wall heating\n\cite{BoBr91}. \n\n\ni) According to Fig. \ref{fig:side-to27}, for $\epsilon=0.3$ hexagons\nwith wavenumbers $q>2.98$, which includes the critical\nwavenumber, are unstable with respect to side-band instabilities. 
To\ntest whether these side-band instabilities trigger a transition to\nrolls we perform numerical simulations with periodic boundary\nconditions and hexagons as initial conditions. Fig.\n\ref{fig:perio-co2} presents some snapshots of the ensuing temporal\nevolution in a cell of size $L=8\cdot 2\,\pi\/q_c=16.11$ using\n$128\times 128$ Fourier modes. More precisely, to allow perfect\nhexagons in a rectangular container the container cannot be square. In\nour simulations we use $L_x=L$ and $L_y=\sqrt{3}L\/2$. The sideband\ninstability of the initially almost perfect hexagon pattern (cf.\n\F{fig:perio-co2}(a)) induces a shearing of the pattern (cf.\n\F{fig:perio-co2}(b)). At the same time a few penta-hepta defects\narise and some hexagonal convection cells tend to connect with each\nother forming short roll-like structures. However, as time evolves the\nsystem becomes progressively more ordered again and eventually after\nlosing a number of convection cells a defect-free hexagon pattern with\na smaller wavenumber is established (cf. \F{fig:perio-co2}(c) at \n$t\simeq 60t_v$). \n\nThus, while roll-like features appear for this value of $\epsilon$,\nwith periodic boundary conditions no transition to rolls occurs and\nthe system relaxes to a new ordered hexagon pattern. Only for yet\nlarger values of $\epsilon$ do the roll-like structures that arise in the\nintermediate, disordered state take over and lead to a transition to\nrolls induced by the sideband instabilities.\n\n\begin{center}\n\begin{figure}\n\centering\n\includegraphics[width=0.5\textwidth]{R2193-a-b-c-2.eps}\n\caption{ Numerical simulation corresponding to a layer of \nCO$_2$ with thickness $d=0.052\,cm$, pressure $P=316$\,psi, \nmean temperature $T_0=27.3^o C$, Prandtl number $Pr=0.87$, and control parameter $\epsilon=0.3$\n(cf. Fig.\ref{fig:side-to27}). The size of the integration domain is\n$L=8\cdot 2\,\pi\/q_c=16.11$ and the boundary conditions are\nperiodic. 
As initial condition a perfect hexagon lattice with \nwavenumber $q_c=3.12$ has been used. a) corresponds to $t=0$, \nb) to $t=18t_v$, and c) to $t=60t_v$.\\ \LB{fig:perio-co2}} \n\end{figure}\n\end{center}\n\nii) Clearly, the simulations for periodic boundary conditions do not match\nthe experimental results described above, where a transition from\nhexagons to rolls occurs already for $\epsilon \gtrsim 0.11$. To\naddress this disagreement we take into account the fact that in the\nexperiments the side walls promote the nucleation of rolls\n\cite{BoBr91}.\n\nStrictly speaking, the code we are using does not allow non-periodic\nboundary conditions. To mimic the experimentally used cylindrical\ncell we employ a step ramp in $\epsilon$ that reduces $\epsilon$ to\nvalues well below $\epsilon=0$ outside a circle of radius $r=0.45L$\nwith $L=16\cdot 2\pi\/q_c=32.22$ \C{DePe94a}. To induce concentric\nrolls near the sidewalls we introduce for $r> 0.45L$ an additional\nheating in the interior of the fluid in the form of concentric rings.\nUsing hexagonal initial conditions in the bulk, this leads to an\ninitial state as shown in Fig. \ref{fig:boun-co2}a.\n\nFig. \ref{fig:boun-co2}a,b shows two snapshots at $t=t_v$ and\n$t=158t_v$ demonstrating how the rolls induced by the side walls\ninvade the carefully prepared hexagonal state in the bulk already for\n$\epsilon=0.2$. This is well below the $\epsilon$-value for which with\nperiodic boundary conditions the hexagons persisted even through the\nside-band instability. The final steady state consists of concentric\nrolls as observed in the experiments (cf. Fig. 5 in \cite{BoBr91}).\nFor lower $\epsilon$, however, the experimentally observed final state\nconsists of hexagons in the bulk of the system surrounded by\nwall-induced concentric rolls (cf. Fig. 4 in \cite{BoBr91}). We find\nthis also in our numerical simulations, as shown in\n\F{fig:boun-co2}(c-d). 
There the forcing of rolls is identical to that\nin \F{fig:boun-co2}(a-b) but $\epsilon=0.05$. Starting with random\ninitial conditions, the forcing gives rise to a ring contained in the\nsquare integration domain. At the beginning of the simulation\n(\F{fig:boun-co2}(c)) the rolls created by the forcing invade the\ninterior of this small system. However, as time progresses the rolls\npull out of the central region of the cell, and the final steady\nstate (for $t \gtrsim 165\,t_v$) consists of stable hexagons\nsurrounded by a couple of concentric rolls in addition to those \ninduced by the forcing. \n\n\begin{center}\n\begin{figure}\n\centering\n\includegraphics[width=0.4\textwidth]{ergT.000005-ergT.000790-co2-to27-qc-eps0.2-forcing.rolls.eps}\\\n\vspace{0.5cm}\n\includegraphics[width=0.4\textwidth]{ergT-eps0.05-for.rolls-init.random.eps}\n\caption{ Numerical simulation in a cell of CO$_2$ with thickness\n$d=0.052\,cm$, pressure $P=300$\,psi, mean temperature $T_0=27^o\nC$, and Prandtl number $Pr=0.86$. The size of the integration domain\nis $L=16\cdot 2\,\pi\/q_c=32.22$, and the boundary conditions are rings\nof concentric rolls generated with an external forcing. \\ For\n$\epsilon=0.2$ (upper snapshots) hexagon initial conditions have\nbeen used, with $t=t_v$ (left snapshot) and $t=158\,t_v$ (right). The\nlower snapshots correspond to a simulation with $\epsilon=0.05$ and\nrandom initial conditions, at $t=40\,t_v$ \n(left) and $t=158\,t_v$ (right).\LB{fig:boun-co2}} \n\end{figure}\n\end{center}\n\nThus, our simulations suggest that the experimentally observed\ntransition from hexagons to rolls is neither due to amplitude\ninstabilities nor to side-band instabilities. Rather, there is a large\nrange of parameters in which hexagons and rolls are both linearly\nstable and the final state is selected by one type of pattern invading\nthe other. 
The transition to rolls at these low values of $\\epsilon$\nis made possible by the boundaries, which provide a seed for the\nnucleation of rolls. We expect that by applying a forcing that is\nconfined to the region near the boundaries and that replaces the rolls\nby hexagons the transition to rolls could be shifted to substantially\nlarger values of $\\epsilon$. Such a forcing could be achieved by a\npatterned heating of the interior of the fluid \\cite{SeSc02} or\npossibly by a suitable geometric patterning of the bottom plate\n\\cite{Bopriv}. \n\n\n\\section{NB Spiral Defect Chaos in SF$_6$\\LB{sec:sim-sf6}}\n\nA fascinating state observed in convection at low Prandtl numbers is\nspiral defect chaos. It is characterized by self-sustained chaotic\ndynamics of rotating spirals, as well as dislocations and\ndisclinations, and arises in fluids with $Pr\\lesssim 1$\n\\C{MoBo93,XiGu93,DePe94a,CrTu95,LiAh96} in a parameter range where\nstraight rolls are linearly stable. Spiral defect chaos has so far\npredominantly been investigated under OB conditions, in which\nup-flows and down-flows are equivalent. \n\nAs mentioned before, NB effects break the up-down symmetry and\ndifferent flow structures may be predominantly associated with up-flow\nand down-flow, respectively. Moreover, in the absence of the OB\nsymmetry a resonant triad interaction is allowed. If it is strong\nenough it leads to the formation of hexagons. For weaker interaction\none may still expect an enhancement of cellular rather than roll-like\nstructures.\n\nTo investigate the impact of NB effects on spiral defect chaos we\nconsider convection in a layer of SF$_6$. This gas has been used\npreviously in experimental convection studies under usual laboratory\nconditions \\C{BoCa92}, and near the thermodynamical critical point\n\\C{AsSt94,AsSt96,RoSt02}. In Fig. 
\ref{fig:sf6-ampli}(a) we present\nthe stability diagram for hexagons and rolls with respect to amplitude\nperturbations in a layer of SF$_6$ of thickness $d=0.0542$\,cm,\npressure $P=140\,$psi, and a range of temperatures that is\nexperimentally accessible. Hexagons are amplitude-stable to the right\nof the solid line and rolls above the dashed line. As in the case of\nCO$_2$ the NB effects increase with increasing mean temperature $T_0$\nand above a certain value of $T_0$ hexagons are linearly\namplitude-stable over the whole range of $\epsilon$ investigated. Here\nwe focus on relatively strong NB effects. We therefore show in\nFig.\ref{fig:sf6-ampli}b the stability limits with respect to\nside-band perturbations for a relatively large mean temperature,\n$T_0=80^\circ C$. As in the case of CO$_2$, the wavenumber range over\nwhich the hexagons are stable leans towards smaller wavenumbers.\nOverall, amplitude and side-band stability limits are qualitatively\nsimilar to those of convection in CO$_2$ (cf. Fig.\n\ref{fig:co2-ampli3d}).\n\n\begin{center}\n\begin{figure}\n\begin{minipage}{0.35\textwidth}\n\includegraphics[width=\textwidth,angle=0]{sf6.eps}\n\end{minipage} \n\hspace{0.4cm} \n\begin{minipage}{0.35\textwidth}\n\includegraphics[width=\textwidth,angle=270]{sf6-to80-side2.ps}\n\end{minipage}\n\caption{ \nStability regions of hexagons and rolls in a layer of SF$_6$ \nof thickness $d=0.0542\,cm$, pressure $P=140$\,psi, and Prandtl number $Pr=0.8$. \\\na) Stability regions with respect to amplitude perturbations. \nContinuous line: stability boundary for hexagons. Dashed line: stability boundary for\nrolls. 
Stability limits obtained for the critical wavenumber $q_c$.\\\nb) Stability regions with respect to side-band perturbations for the above\nlayer with a mean temperature $T_0=80\,^oC$. The corresponding NB coefficients\nare $\gamma_0^{c}=0.1714$, $\gamma_1^{c}= -0.2118$, $\gamma_2^{c}=0.2836$, $\gamma_3^{c}=0.1905 $, and\n$\gamma_4^{c}=0.0624$ corresponding to $Q=4.2$. The dashed line corresponds to\nthe neutral curve. \n\LB{fig:sf6-ampli}}\n\end{figure}\n\end{center}\n\nFig.\ref{fig:sf6-nb-ob} shows two snapshots obtained by direct \nnumerical simulations of the Navier-Stokes equations corresponding to\nconvection in SF$_6$ for $T_0=80\,^oC$ and $\epsilon=1.4$ in a\nconvective cell of thickness $d=0.0542\,cm$ and horizontal size\n$L=16\cdot 2 \,\pi\/q_c=32.22$. Periodic boundary conditions are used\nwith $128\times 128$ Fourier modes and 6 vertical modes. Both states\nare obtained after an integration time of $160\,t_v$, starting from\nrandom initial conditions. While in\nFig.\ref{fig:sf6-nb-ob}b all NB effects are retained, in\nFig.\ref{fig:sf6-nb-ob}a the same values are used for $Pr$ and\n$\epsilon$, but all NB parameters $\gamma_i^c$ are set to 0, i.e. the\nsystem is treated as if it were Boussinesq. \n\n\begin{center}\n\begin{figure}\n\centering\n\includegraphics[width=0.4\textwidth]{sf6-nb-ob-4-sqrt.eps}\n\caption{ \nDirect numerical simulation of equations (\ref{e:v}-\ref{e:bc}) for SF$_6$ in a\ncell of thickness $d=0.0542\,cm$, pressure $P=140$\,psi, mean temperature \n$T_0=80^oC$, Prandtl number $Pr=0.8$, and control parameter $\epsilon=1.4$. \nThe cell has size $L=16\cdot 2 \,\pi\/q_c=32.22$ with periodic boundary conditions. \nStarting from random initial conditions both snapshots are taken at\n$t=160\,t_v$. Left panels: OB conditions ($\gamma_i=0$, $i=0,\dots,4$); \nright panels: NB conditions appropriate for $T_0=80^oC$ \n(cf. Fig.\ref{fig:sf6-ampli}b). 
Bottom panels give the corresponding contour lines used\nfor the pattern diagnostics.\n\LB{fig:sf6-nb-ob}}\n\end{figure}\n\end{center}\n\nThe snapshots depicted in Fig.\ref{fig:sf6-nb-ob} show, as expected,\nthat due to the NB effects down-flow convection cells, which are white\nin Fig.\ref{fig:sf6-nb-ob}, outnumber cells with up-flow (black).\nMoreover, in this regime the NB effects enhance the overall cellular\nrather than roll-like character of SDC. This manifests itself in the\nappearance of numerous small down-flow convection cells (white\n`bubbles') and in the appearance of quite noticeable bulges on the NB\nconvection rolls. To quantify these and other differences we analyse\na long sequence of snapshots with a recently introduced geometric\napproach \cite{RiMa06}. It is based on the contour lines corresponding\nto the intensity half-way between the minimal and maximal intensity of\nall snapshots in a given run. The contour lines corresponding to the\ntemperature field of snapshots Fig.\ref{fig:sf6-nb-ob}(a,b) are shown in\nFig.\ref{fig:sf6-nb-ob}(c,d). In the following we\npresent various statistics of these contour lines.\n\n\begin{center}\n\begin{figure}\n\centering\n\includegraphics[width=0.4\textwidth,angle=0]{NOB-OB-ratio0.8-2.eps} \\%{NOB-OB-2.ps}\n\caption{ \nNumber of black and white closed contours as a function of time\nfor NB and OB conditions for the simulations corresponding to the\nsnapshots shown in \F{fig:sf6-nb-ob}. \n\LB{fig:num-bubbles}}\n\end{figure}\n\end{center}\n\nThe most striking difference between the OB and the NB case is the\nasymmetry that is induced by the NB effects between `black' and\n`white' components, i.e. between closed contours that enclose up- and\ndown-flow regions, respectively. To quantify this asymmetry we\nmeasure the number of white and black components. 
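Counting such components is a standard connected-component computation on the thresholded field. The following minimal stand-in (pure Python, 4-connectivity, no periodic wrapping) illustrates the idea; the published diagnostics are based on contour lines rather than pixel masks, and the sign convention tying `white' to values below the threshold is our own assumption.

```python
from collections import deque

def count_components(mask):
    """Count 4-connected components of True cells in a 2D boolean grid
    using a breadth-first flood fill."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for i in range(rows):
        for j in range(cols):
            if mask[i][j] and not seen[i][j]:
                count += 1
                queue = deque([(i, j)])
                seen[i][j] = True
                while queue:
                    a, b = queue.popleft()
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        x, y = a + da, b + db
                        if 0 <= x < rows and 0 <= y < cols \
                           and mask[x][y] and not seen[x][y]:
                            seen[x][y] = True
                            queue.append((x, y))
    return count

def black_white_counts(field, threshold):
    # convention (ours): white = down-flow = below threshold, black = above
    white = [[v < threshold for v in row] for row in field]
    black = [[v >= threshold for v in row] for row in field]
    return count_components(black), count_components(white)
```

Applied to a time series of snapshots, the ratio of the two counts serves as the NB-asymmetry indicator discussed below.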
These two\ntopological measures correspond, respectively, to the Betti numbers of\norder 1 and 2 of the pattern defined by the white components\n\\cite{GaMi04}. Fig.\\ref{fig:num-bubbles} shows these two quantities as\na function of time in the OB and the NB case. As expected, in the OB\ncase the number of black and white components is essentially the same\nat all times, whereas in the NB case the white components\nsignificantly outnumber the black ones. The ratio of white to black\ncomponents is therefore a sensitive indicator for the significance of\nNB effects. Fig.\\ref{fig:num-bubbles} also illustrates how much the\nnumber of components fluctuates during these runs. Recently, the two\nBetti numbers have also been measured based on patterns obtained in\nexperiments on SDC in convection in CO$_2$. Scanning $\\epsilon$ in\nvery small steps the authors report steps in the Betti numbers\nindicative of transitions between different chaotic, disordered states\n\\cite{KrGaunpub}.\n\nFig.\\ref{fig:num-bubbles} shows that the total number of components\n(closed contours) is considerably larger in the NB case than in the OB\ncase. This is presented in more quantitative detail in\nFig.\\ref{fig:num-bubbles-eps}, which gives the mean value of the total\nnumber of components, i.e. the sum of black and white components, as a\nfunction of $\\epsilon$ for OB as well as NB conditions. In the NB case\nthe total number of components is up to four times larger than in the\nOB case. We attribute this difference to the resonant triad\ninteraction that is made possible by the breaking of the OB symmetry\nand which tends to enhance cellular rather than filamentary roll-like\nstructures.\n\n\\begin{center}\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.4\\textwidth,angle=0]{meanNP-NPratio0.8-2.eps}\n\\caption{ \nTotal number of closed contours as a function of the control parameter \n$\\epsilon$ for NB (circles) and OB (squares) conditions. 
\n\\LB{fig:num-bubbles-eps}}\n\\end{figure}\n\\end{center}\n \nTo characterize the components better and to distinguish cellular and\nroll-like structures we introduce the `compactness' ${\\mathcal C}$ of\ncomponents \\cite{RiMa06},\n\\begin{eqnarray}\n{\\mathcal C}=4\\pi \\frac{{\\mathcal A}}{{\\mathcal P}^2}. \\LB{e:compact}\n\\end{eqnarray} \nHere ${\\mathcal A}$ is the area inside a closed contour and $P$ its\nperimeter. With the normalization used in (\\ref{e:compact}) compact,\ncellular structures are charecterized by ${\\mathcal C}\\lesssim 1$,\nwhereas filamentary, roll-like structures have ${\\mathcal C} \\ll 1$.\n\n\n\\begin{center}\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.4\\textwidth,angle=0]{histo_compact_SF.eps}\n\\caption{ \nMean number of closed contours per snapshot with a \ngiven compactness ${\\mathcal C}$ for convection in SF$_6$ at\n$\\epsilon=1.4$ under NB (circles) and under OB (squares) conditions \n(cf. \\F{fig:sf6-nb-ob}).\n\\LB{fig:hist-compact}}\n\\end{figure}\n\\end{center}\n\n\\F{fig:hist-compact} shows the mean number of closed contours per\nsnapshot for a given compactness $\\mathcal C$ for the NB and the OB\nsimulation at $\\epsilon=1.4$ over the duration $t=360\\,t_v$. As\nexpected, in the NB case the number of white components is much larger\nthan that of black components, whereas in the OB case both are about\nthe same. The total number of components is noticeably larger in the\nNB case, which also shows an increase in the number of white,\nfilamentary contours with small compactness. \n\n\\begin{center}\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.4\\textwidth,angle=0]{histo_compact_smoothed_normalized.eps}\n\\caption{ \nDistribution function for the compactness ${\\mathcal C}$ for NB\n(circles) and OB (squares) conditions (cf. 
\F{fig:sf6-nb-ob}).\n\LB{fig:hist-compact-normalized}}\n\end{figure}\n\end{center}\n\nThe relative distribution among components with different compactness\nis more clearly visible in the {\it relative} frequency of contours as\na function of the compactness, which is shown in\nFig.\ref{fig:hist-compact-normalized}. More precisely, its data result\nfrom running averages over 10 adjacent points of the corresponding\ndata shown in Fig.\ref{fig:hist-compact}, which are then normalized.\nThe normalized data show that the increase in the relative frequency\nof white filamentary components (${\mathcal C}\ll 1$) is essentially\nthe same in the NB case and in the OB case. However, only very few\nblack filamentary components arise in the NB case. \n\nA feature of Fig.\ref{fig:hist-compact-normalized} that is surprising\nat first sight is the essentially equal height of the NB peak and the\nOB peak for components with ${\mathcal C}\sim 1$. Visually, the NB\nsnapshot exhibits many more small compact `bubbles' than the OB run.\nAn explanation for this observation can be obtained by correlating the\ncompactness of the closed contours with their length ${\mathcal P}$.\nThe joint distribution function for these two quantities is shown in\nFig.\ref{fig:corr-cont-comp} using logarithmic scales for ${\mathcal\nP}$ and ${\mathcal C}$. Focussing on the compact objects with\n${\mathcal C}\lesssim 1$ one recognizes in the OB case a second peak\nat somewhat larger contourlength. We associate this peak with the\nappearance of target-like structures, i.e. with a second contourline\nencircling a smaller compact, almost circular contourline. In the NB\ncase this second peak is barely visible. Instead, the shoulder of the\nmain peak is extended significantly towards smaller contourlength\n${\mathcal P}$. 
It signifies the appearance of compact objects that\nare smaller than a typical wavelength, which we associate with the\nsmall `bubbles' that are easily recognized in the snapshot of the NB\ncase Fig. \ref{fig:sf6-nb-ob}b. Thus, the comparable relative\nfrequency of small components shown in\nFig.\ref{fig:hist-compact-normalized} has a different origin in the OB\nand the NB case. Whereas in the NB case it is mostly due to small\nbubbles, it seems to originate from target-like structures in the OB\ncase. \n\nIn the logarithmic scaling used in Fig.\ref{fig:corr-cont-comp} a\nstraight ridge arises in the distribution function for large\ncontourlengths. It is characteristic of filamentary structures with a\ntypical width, which corresponds here to half a wavelength $\lambda$\nof the convection rolls,\n\begin{eqnarray}\n{\mathcal C}\sim 4\pi\n\frac{\lambda \, {\mathcal P}}{4{\mathcal P}^2} \propto {\mathcal\nP}^{-1}.\n\end{eqnarray}\nIn the OB case one can discern deviations from this scaling. They are\nconfined to larger rather than smaller compactness values for a given\ncontourlength, indicating that the long rolls can be wider but not\nnarrower than a certain thickness. In the NB case these deviations are\nmuch smaller. \n\n\begin{figure}\n\includegraphics[width=0.48\textwidth]{corr-cont-comp.MaRi05a.f12a-4.eps}\n\includegraphics[width=0.48\textwidth]{corr-cont-comp.MaRi05a.f12b-2.eps}\n\caption{Distribution function for contour length and \ncompactness for OB (a) and NB conditions (b) for\n$Pr=0.8$ and $\epsilon=1.4$.\n\LB{fig:corr-cont-comp}\n} \n\end{figure}\n\nTo identify spiral components in the pattern directly we also measure\nthe winding number of the components \cite{RiMa06}. It is defined via\nthe angle $\theta$ by which the (spiral) arm of a pattern component is\nrotated from its tip to its end at the vertex at which it merges with\nthe rest of the component. 
In cases in which a component has no\nvertices we split it into two arms at the location of minimal\ncurvature \\cite{RiMa06}. The winding number is then defined as\n$|{\\mathcal W}|=\\theta\/2\\pi$. To assess the impact of the NB effects\non the spiral character of the pattern we measure the number of\nspirals in each snapshot and show the resulting histogram over the\nwhole run in Fig.\\ref{fig:num-spiral}. We use three different\nthresholds ${\\mathcal W}_{min}$ for the identification of spirals,\n${\\mathcal W}_{min}= 1$, ${\\mathcal W}_{min}= 1\/2$, and ${\\mathcal\nW}_{min}= 1\/4$. As Fig.\\ref{fig:num-spiral} shows, the number of\nsmall spirals with $|{\\mathcal W}|\\gtrsim 1\/4$ is quite similar in the\nOB and the NB case. However, larger spirals with $|{\\mathcal W}|\\ge\n1\/2$ or even $|{\\mathcal W}|\\ge 1$ are much rarer in the NB case;\nin fact, for the system size $L=16 \\cdot 2\\pi\/q_c=32.22$ that we have\nused in these simulations there was at most one spiral with \n$|{\\mathcal W}| \\ge 1$ at any given time. \n\n\\begin{figure}\n\\includegraphics[width=0.48\\textwidth]{histo_num_spiral.OB_NB.1.eps}\n\\caption{Distribution function for the number of spirals in spiral\ndefect chaos under OB and NB conditions for three values of the\nthreshold, ${\\mathcal W}_{min}=1\/4$ (circles), ${\\mathcal W}_{min}=1\/2$\n(squares), and ${\\mathcal W}_{min}=1$ (diamonds). \n\\LB{fig:num-spiral}\n} \n\\end{figure}\n\nThe reduced spiral character of NB spiral defect chaos is also quite\napparent in Fig.\\ref{fig:corr-arc-winding}, which shows the\ncorrelation between the winding number and the arclength of the spiral\narm. More precisely, each dot marks the occurrence of one spiral arm\nin a snapshot. 
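Both diagnostics reduce to a few lines of numerical code. The sketch below is an illustration only (it assumes contours are available as polylines and is not the implementation of \cite{RiMa06}); it evaluates the compactness ${\mathcal C}=4\pi A/{\mathcal P}^2$ via the shoelace formula and the winding number $|{\mathcal W}|=\theta/2\pi$ from the net rotation of the tangent:

```python
import numpy as np

def compactness(x, y):
    """C = 4*pi*A / P**2 of a closed polyline contour:
    close to 1 for a circle, much less than 1 for filaments."""
    # shoelace formula for the enclosed area A
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    # perimeter P: segment lengths of the closed polygon
    per = np.hypot(np.diff(np.r_[x, x[0]]), np.diff(np.r_[y, y[0]])).sum()
    return 4 * np.pi * area / per**2

def winding_number(x, y):
    """|W| = theta / (2*pi): net rotation of the tangent direction
    accumulated along an open spiral arm."""
    angles = np.unwrap(np.arctan2(np.diff(y), np.diff(x)))
    return abs(angles[-1] - angles[0]) / (2 * np.pi)
```

For a finely sampled circle the compactness evaluates to ${\mathcal C}\approx 1$, and a two-turn spiral arm gives $|{\mathcal W}|\approx 2$.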
In the OB case one can see quite clearly a maximal\nwinding number for any given arclength, which is consistent with an\nArchimedean shape of the spiral \\cite{RiMa06}\\footnote{Note that\ndetailed analyses of large spirals show deviations from the\nArchimedean shape due to the dislocations that accompany finite\nspirals \\cite{Pl97}}. In the NB case, however, only components with\nvery small contourlength reach the Archimedean limit and most\ncomponents have winding numbers that are much smaller, i.e. the\ncomponents are quite straight.\n\n\\begin{figure}\n\\includegraphics[width=0.48\\textwidth]{corr-arc-winding_SF6-OB-NB.eps}\n\\caption{Correlation between the arclength and the winding number of\nspiral arms under OB (black squares) and NB conditions (red circles).\n\\LB{fig:corr-arc-winding}} \n\\end{figure}\n\n\\section{Hexagons and Spiral Defect Chaos at Very Low Prandtl numbers. H$_2$-Xe\nMixtures \\LB{sec:h2xe}}\n\nAs mentioned above, the restabilization of NB hexagons at larger\nRayleigh number is related to the existence of stable OB hexagons at\nlarge Rayleigh numbers. The wavenumber range over which the OB\nhexagons are stable shrinks with decreasing Prandtl number and for $Pr\n< 1.2$ the OB hexagons are side-band unstable at {\\it all} wavenumbers\n\\cite{ClBu96}. However, as seen in the case of CO$_2$ and SF$_6$, NB\nhexagons can be side-band stable at large $\\epsilon$ even below\n$Pr=1.2$ due to the additional stabilizing effect of the resonant\ntriad interaction. It is therefore of interest to investigate whether\nthe NB effects can be sufficient to stabilize strongly nonlinear\nhexagons even for Prandtl numbers significantly below $Pr=1$. \n\nPrandtl numbers well below $Pr=1$ can be reached by using a mixture of\na heavy and a light gas. An experimentally investigated case is a\nmixture of H$_2$ and Xe \\cite{LiAh97}. With a mole fraction of\n$\\chi=0.4$, one can reach Prandtl numbers as small as $Pr=0.17$. 
The\nLewis number of such a mixture is close to one \\cite{LiAh96,Ah05}. \nTherefore such mixtures are expected to behave essentially like a pure\nfluid with the same Prandtl number \\cite{BoPe00}.\n\nWe investigate the stability of hexagon and roll convection in a\nH$_2$-Xe mixture with mole fraction $\\chi=0.4$ at a pressure of $300$\n\\,psi and a layer thickness of $d=0.1$cm. With respect to amplitude\ninstabilities, the stability diagram is very similar to that of\nconvection in CO$_2$ and SF$_6$ with hexagons becoming reentrant at\n$\\epsilon$-values as low as $\\epsilon=0.14$ for $T_0=20\\,^oC$. \n\nFocusing on strong NB effects we perform a detailed stability analysis\nwith respect to sideband perturbations at a mean temperature of\n$T_0=80\\,^oC$ using the same layer thickness of $d=0.1$cm. For these\nlow Prandtl numbers the numerical resolution has to be increased to\nobtain sufficiently well resolved flows. While for the stability\nanalyses of hexagons in CO$_2$ and SF$_6$ it is sufficient to use \n$n_z=6$ and $n_q=3$ in the Galerkin expansion, for the H$_2$-Xe\nmixture with $Pr\\simeq0.17$ at least $n_q=5$ and $n_z=6$ are\nrequired. \\F{fig:side-h2xe} depicts the resulting stability diagram. It shows\nthat the region of side-band stable hexagons is not contiguous but\nconsists of the usual region immediately above threshold and an\nadditional, disconnected region at larger Rayleigh numbers. We could\nnot follow the stability limits to smaller values of $q$ than shown in\nFig.\\ref{fig:side-h2xe} due to numerical convergence problems.\nPresumably, these arise due to bifurcations involving additional,\nresonant wavevectors \\cite{Mo04a}, somewhat similar to the resonances\nstudied in Taylor vortex flow \\cite{RiPa86,PaRi90}. The fact that the\nregion of stability is disconnected is remarkable since the region of\namplitude-stability (not shown) is actually contiguous. 
This is in\ncontrast to the behavior found in CO$_2$ and SF$_6$ where the two\nside-band stable regions become connected when the bubble-like region\nof amplitude instability disappears (cf.\nFig.\\ref{fig:side-co2},\\ref{fig:side-to27}). The comparison of the\nstability limits with those of CO$_2$ and of SF$_6$ \n(Fig.\\ref{fig:sf6-ampli}b) shows further that the maximal wavenumber\n$q$ at which the hexagons are stable with respect to side-band\nperturbations decreases with decreasing Prandtl number and as a result\nthe over-all stability region shrinks as well. \n\n\n\\begin{center}\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.4\\textwidth,angle=270]{h2xe-to80.ps}\n\\caption{ \nStability limits with respect to sideband perturbations in a mixture\nof H$_2$-Xe with mole fraction $\\chi=0.4$, thickness $d=0.1$cm, pressure $P=300$\\,psi, \nmean temperature $T_0=80\\,^oC$, and Prandtl number $Pr=0.17$. The NB coefficients are \n$\\gamma_0=0.5535$, $\\gamma_1=-0.6421$, $\\gamma_2=0.9224$, $\\gamma_3=0.3647$, \n$\\gamma_4=-0.0712$ resulting in $Q=13.8$. Hexagons are stable in the\nregions enclosed by the solid lines and unstable outside. The dashed line represents \nthe neutral curve. The results for H$_2$-Xe are obtained with $n_z=6$ and $n_q=5$.\n\\LB{fig:side-h2xe}}\n\\end{figure}\n\\end{center}\n\n\nThe stability analysis of the H$_2$-Xe mixture also reveals an\noscillatory instability of the hexagon patterns at $\\epsilon \\sim 1$.\nWithin the $\\epsilon$-range investigated, no such oscillatory\ninstability was found at the larger Prandtl numbers relevant for\nCO$_2$ and SF$_6$. Unfortunately, it turns out that before the onset\nof the oscillatory instability the hexagons already become unstable to\na side-band instability at half the hexagon wavelength, which will\nalways preempt the oscillatory instability. For the rolls we also find\nan oscillatory instability. 
It is presumably related to the well-known\noscillatory instability of Boussinesq rolls \\cite{CrWi89}. \n\n\\begin{center}\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.3\\textwidth,angle=0]{h2xe.snap.001401-sqrt.eps}\n\\caption{ \nTypical pattern for convection in H$_2$-Xe at $\\epsilon=0.3$.\n\\LB{fig:pattern-h2xe}. }\n\\end{figure}\n\\end{center}\n\nFor not too small $\\epsilon$ generic initial conditions will not lead\nto hexagonal patterns but rather to spiral defect chaos. A typical\nsnapshot of a pattern in this state is shown in\nFig.\\ref{fig:pattern-h2xe} for $\\epsilon=0.3$. Compared to the\npatterns obtained in NB convection in SF$_6$ (cf.\nFig.\\ref{fig:sf6-nb-ob}b) the patterns in H$_2$-Xe are less cellular\nand do not show a large number of small bubbles. To quantify these and\nother characteristics of the patterns we again apply the geometric\ndiagnostics introduced earlier \\cite{RiMa06}. \n\nIn Fig.\\ref{fig:compact-h2xe} we show the normalized distribution\nfunctions for the compactness of white and black components (cf.\nFig.\\ref{fig:hist-compact-normalized}). Since there are only very few\nblack components their distribution function exhibits large\nstatistical fluctuations. Of particular interest is the distribution\nfunction for the white components. It confirms the visual impression\nthat the number of compact components is significantly reduced\ncompared to the case of SF$_6$; in fact, while in SF$_6$ the maximum\nof the distribution function is close to ${\\mathcal C}=1$, in H$_2$-Xe\nthe absolute maximum is at ${\\mathcal C}\\sim 0.1$, which corresponds\nto filamentary structures.\n\n\\begin{center}\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.4\\textwidth,angle=0]{histo_compact.h2xe.eps}\n\\caption{ \nDistribution function of the compactness of closed contours \nfor H$_2$-Xe ($Pr=0.17$, $\\epsilon=0.3$)\n\\LB{fig:compact-h2xe}. 
}\n\\end{figure}\n\\end{center}\n\nThe lack of small bubbles is demonstrated in more detail in the joint\ndistribution function for the contourlength and the compactness, which\nis shown in Fig.\\ref{fig:corr-cont-compt-h2xe}. Note that the view is\nrotated compared to Fig.\\ref{fig:corr-cont-comp}. The distribution\nfunction is lacking the broad shoulder seen in NB convection in SF$_6$\n(see Fig.\\ref{fig:corr-cont-comp}b). Instead, the decay of the\ndistribution function towards long filamentary contours is quite\nslow. \n\n\\begin{center}\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.4\\textwidth,angle=0]{corr-cont-comp.MaRi05a.f18-d.eps}\n\\caption{ \nJoint distribution function for the contour length and \nthe compactness of closed contours for H$_2$-Xe ($Pr=0.17$, $\\epsilon=0.3$).\n\\LB{fig:corr-cont-compt-h2xe}.}\n\\end{figure}\n\\end{center}\n\nTo assess the spiral character of NB spiral defect chaos at these low\nPrandtl numbers we show in Fig.\\ref{fig:histo-winding} the\ndistribution function for the absolute value $|{\\mathcal W}|$ of the\nwinding number for NB convection in H$_2$-Xe (at $\\epsilon=0.3$ and\n$Pr=0.17$) as well as for Boussinesq and non-Boussinesq convection in\nSF$_6$ (at $\\epsilon=1.4$ and $Pr=0.8$). As had been noted\npreviously in the Boussinesq case \\cite{EcHu97,RiMa06}, the\ndistribution function is roughly consistent with exponential behavior.\nIn the NB case the exponential decays substantially faster than in the\nBoussinesq case and spirals with winding numbers above $|{\\mathcal\nW}|=1$ are rare. In the Boussinesq case we had found that the decay\nrate depends mostly on the Prandtl number, but only very little on\n$\\epsilon$ \\cite{RiMa06}. Unfortunately, we do not have enough\nnon-Boussinesq data to investigate such trends in the $\\epsilon$- and\n$Pr$-dependence. 
However, it is worth noting that in the two\nnon-Boussinesq cases shown in Fig.\\ref{fig:histo-winding} the decay\nrates are essentially the same despite their substantial difference in\nPrandtl number, and both decays are much faster than that in the\nBoussinesq case. Thus, possibly the impact of NB effects dominates\nthe dependence on the Prandtl number.\n\n\\begin{center}\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.4\\textwidth,angle=0]{histo_winkel.H2Xe_SF6.eps}\n\\caption{ \nDistribution function for the absolute value of the winding number for\nNB convection in H$_2$-Xe in comparison with convection in \nSF$_6$ in the NB and the OB case.\n\\LB{fig:histo-winding}. }\n\\end{figure}\n\\end{center}\n\nIn NB convection in H$_2$-Xe, the distribution function for the number\nof spirals is qualitatively similar to that in SF$_6$, which is shown\nin Fig.\\ref{fig:num-spiral} above. In particular, there are almost no\nspirals with a winding number $|{\\mathcal W}|>1$. Similarly, the\ncorrelation between the arclength and the winding number of spirals\nreveals that in NB convection in H$_2$-Xe there is no significant\ntrend towards Archimedean spirals. \n\n\\section{Conclusion}\n\\LB{sec:conclusions}\n\nIn this paper we have studied non-Boussinesq convection in gases\n(CO$_2$, SF$_6$, H$_2$-Xe) with Prandtl numbers ranging from $Pr\\sim\n1$ down to $Pr=0.17$ in experimentally relevant parameter regimes. We\nhave complemented a Galerkin stability analysis of hexagon patterns\nwith direct numerical simulations of the fluid equations to study\ntransitions between different hexagon patterns and to quantify the\nimpact of non-Boussinesq effects on spiral defect chaos.\n\nWe find that the reentrance of hexagons that we have identified\npreviously in non-Boussinesq convection in water \\cite{MaRi05} also\noccurs at low Prandtl numbers. As was the case at large Prandtl\nnumbers, compressibility is not necessary for reentrance. 
Since, in\naddition, the range of wavenumbers for which the reentrant hexagons\nare stable differs significantly from that of the reentrant hexagons\nobserved in experiments on convection in SF$_6$ near the thermodynamic\ncritical point \\cite{RoSt02}, the mechanisms underlying the two types\nof restabilization of the hexagons are most likely different.\nReflecting the fact that in gases the non-Boussinesq effects increase\nwith increasing temperature, the reentrance is shifted to lower values\nof the Rayleigh number when the mean temperature is increased, opposite to\nthe behavior in water \\cite{MaRi05}. As in convection in water, the\nreentrant hexagons are stable only for wavenumbers below the critical\nwavenumber. This trend becomes more pronounced with decreasing Prandtl\nnumber. In fact, for the gas mixture with $Pr=0.17$ the wavenumber at\nthe stability limit of the hexagons decreases so rapidly with\nincreasing Rayleigh number that the range in Rayleigh number over\nwhich the hexagons are stable becomes quite small. \n\nThe comparison of our stability results with experiments on the\ntransition between hexagons and rolls in CO$_2$ \\cite{BoBr91} shows\nthat this transition is not due to an amplitude or a side-band\ninstability. As a matter of fact, for the parameters of the\nexperimental system the hexagons do not undergo any linear amplitude\ninstability to rolls, contrary to the prediction of the weakly\nnonlinear theory \\cite{BoBr91}. We have performed detailed numerical\nsimulations with various lateral boundary conditions and confirm that\nthe transition is the result of the heterogeneous nucleation of rolls\nat the side walls of the container, which then invade the whole system\nif the Rayleigh number is sufficiently high. 
Our simulations suggest\nthat hexagons could be stabilized well beyond the experimentally\nobserved transition point if the influence of the lateral walls can be\nreduced by applying a spatially patterned forcing that drives hexagons\nat the wall. Such a forcing can be achieved by localized heating\n\\cite{SeSc02} or by geometric patterning of the bottom plate\n\\cite{Bopriv}. Of course, the wavenumber of the forced hexagons would\nhave to be adjusted to lie in the stable range.\n\nWe have also investigated the stability of hexagons in H$_2$-Xe\nmixtures with very small Prandtl number ($Pr=0.17$). There, stable\nreentrant hexagons are also possible, but they are restricted to a small\nrange in wavenumber ($q \\pi + 2^{B+1}$ where $T$ is the number of taps, and $B$ is the\ncoefficient bit width. Finally, the width of a PFB channel is tunable by\nadjusting the period of the sinc function, forcing adjacent bandpass filters to\noverlap at a point other than the -3 dB point. Note that this causes\npower to no longer be conserved in the Fourier transform operation.\n\n\\subsection{A Bandwidth-Agile Fast Fourier Transform}\n\\label{sec:fft}\n\nThe computational core of our FFT library is an implementation of a radix-2\nbiplex pipelined FFT \\citep{rabiner_gold1975} capable of analyzing two\nindependent, complex data streams using a fraction of the FPGA resources of\ncommercial designs \\citep{dick2000}. This architecture takes advantage of the\nstreaming nature of ADC samples by multiplexing the butterfly computations of\neach FFT stage into a single physical butterfly core. 
When used to analyze two\nindependent streams, every butterfly in this biplex core outputs valid data\nevery clock for 100\\% utilization efficiency.\n\nThe need to analyze bandwidths higher than the native clock rate of an FPGA led\nus to create a second core that combines multiple biplex cores with additional\nbutterfly cores to create an FFT that is parametrized to handle $2^P$ samples\nin parallel \\citep{parsons2008}. This FFT architecture uses only 25\\% more\nbuffering than the theoretical minimum, and still achieves 100\\% butterfly\nutilization efficiency. This feat is achieved by decomposing a $2^N$\nchannel FFT into $2^P$ parallel biplex FFTs of length $2^{N-P}$, followed by a\n$2^P$ channel parallel FFT core using time-multiplexed twiddle-factor\ncoefficients.\n\nFinally, we have written modules for performing two real FFTs with each half of\na biplex FFT using Hermitian conjugation. Mirroring and\nconjugating the output spectra to reconstitute the negative frequencies, this\nmodule effects a 4-in-1 real biplex FFT that can then be substituted for the\nequivalent number of biplex cores in a high-bandwidth FFT. Thus, our real FFT\nmodule has the same bandwidth flexibility as our standard complex FFT.\n\nDynamic range inside fixed-point FFTs requires careful consideration. Tones\nare folded into half as many samples through each FFT stage, causing magnitudes\nto grow by a factor of 2 for narrow-band signals, and $\\sqrt{2}$ for random \nnoise. To\navoid overflow and spectrum corruption, our cores contain optional downshifts\nat each stage. In an interference-heavy environment, one must balance loss of\nSNR from downshifting signal levels against loss of integration time due to\noverflows. A good practice is to place time-domain input into the\nmost-significant bits of the FFT and downshift as often as possible to\navoid overflow and minimize rounding error in each butterfly stage. 
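The two growth rates can be checked directly in floating point; the following is a numerical illustration only (the fixed-point cores differ in rounding and scaling):

```python
import numpy as np

n = 1024                                     # 2**10 samples -> 10 radix-2 stages
tone = np.exp(2j * np.pi * 37 * np.arange(n) / n)   # unit tone in bin 37
noise = np.random.default_rng(0).standard_normal(n)

# A unit-amplitude tone peaks at n = 2**10: a factor of 2 per stage.
peak = np.abs(np.fft.fft(tone)).max()
# Unit-variance noise reaches an rms of ~sqrt(n): a factor sqrt(2) per stage.
rms = np.sqrt(np.mean(np.abs(np.fft.fft(noise))**2))
```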
However,\nit is also best to avoid using the top 2 bits on input since the first \n2 butterfly\nstages can be implemented using negation instead of complex multiplies, and the\nasymmetric range of 2's complement arithmetic can allow this negation to\noverflow.\n\n\\subsection{A Cross-Multiplication\/Accumulation (X) Engine}\n\\label{sec:x_engine_arch}\n\n\\placefigure{fig:x_engine_schem}\n\nOur FX correlator architecture employs\nX engines to compute all antenna cross-multiples within a frequency\nchannel, and multiple frequencies are multiplexed into the core as dictated by\nprocessor bandwidth; the complex visibility $V_{ij}$ (Eq. \\ref{eq:vis})\nis the average of the product of complex voltage samples from antenna $i$ and\nantenna $j$ with the convention that the voltage of antenna $j>i$ is conjugated prior to\nforming the product.\nIn collaboration with Lynn Urry of UC Berkeley's Radio\nAstronomy Lab we have implemented a parametrized module (Fig.\n\\ref{fig:x_engine_schem}) for computing and accumulating all visibilities for a\nspecified number of antennas. An X engine operates by receiving $N_{ant}$ data\nblocks in series, each containing $T_{acc}$ data samples from one frequency\nchannel of one antenna. The first samples of all blocks are\ncross-multiplied, and the $N_{ant}(N_{ant}+1)\/2$ results are added to the\nresults from the second samples, and so on, until all $T_{acc}$ samples have\nbeen exhausted. Accumulation prevents the data rate out of a\ncross-multiplier from exceeding the input data rate. An X engine is divided\ninto stages, each responsible for pairing two different data blocks\ntogether: the zeroth stage pairs adjacent blocks, the first stage pairs blocks\nseparated by one, and so on. 
As the final accumulated results become available,\nthey are loaded onto a shift register and output from the X engine.\n\nHowever, as a new window of $N_{ant}\\times T_{acc}$ samples arrives, some\nstages, behaving as described above, would compute invalid results using\ndata from two different windows. To avoid this, each stage switches between\ncross-multiplying separations of $S$ and separations of $N_{ant}-S$, which\nhappen to be valid precisely when separations of $S$ would be invalid. As a\nresult, there need be only $\\lfloor N_{ant}\/2+1\\rfloor$ stages in an X engine. Every\n$T_{acc}$ samples, each stage outputs a valid result, yielding $N_{ant}\\times\n\\lfloor N_{ant}\/2+1\\rfloor$ total accumulations; for even values of $N_{ant}$,\n$N_{ant}\/2$ of the results from the last stage are redundant.\nAll other multiplier\/accumulators are 100\\% utilized. Each stage\nalso computes all polarization cross-multiples (Eq. \\ref{eq:pol})\nusing parallel multipliers.\n\nWhen one X engine no longer fits on a single FPGA, it may be divided across\nchips at any stage boundary at the cost of a moderate amount of bidirectional\ninterconnect. The output shift register need not be carried between chips;\neach FPGA can accumulate and store the results computed locally. In order for\nthe output shift register's $\\lfloor N_{ant}\/2+1\\rfloor$ stages to clear before the\nnext accumulation is ready, an X engine requires a minimum integration length\nof $T_{acc}>\\lfloor N_{ant}\/2+1\\rfloor$. In current hardware, a practical upper\nlimit on $T_{acc}$ is set by the 2$\\times$4 Mbit of SRAM storage available on\nthe IBOB. For 2048 channels with 4-bit samples, and double buffering for 2\nantennas, 2 polarizations, this limit is $T_{acc}\\le 128$. Longer integration\nrequires an accumulator capable of buffering an entire vector of visibility\ndata, and typically occurs in off-chip DRAM. 
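The arithmetic an X engine performs for one frequency channel (though not its stage-multiplexed pipeline) can be summarized in a brief reference model; the conjugation convention follows the definition of $V_{ij}$ above:

```python
import numpy as np

def x_engine_reference(samples):
    """Accumulated visibilities for one frequency channel and one window.
    samples: complex array of shape (n_ant, t_acc).
    Returns all n_ant*(n_ant+1)/2 products, conjugating the voltage of
    the higher-indexed antenna j >= i before accumulating."""
    n_ant = samples.shape[0]
    return {(i, j): np.sum(samples[i] * np.conj(samples[j]))
            for i in range(n_ant) for j in range(i, n_ant)}
```

The autocorrelations $(i,i)$ come out real, and each cross term matches the windowed sum of conjugated products.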
The maximum theoretical\naccumulation length in a correlator is determined by the fringe rate of sources\nmoving across the sky, and is a function of observing frequency, maximum\nantenna separation, and (for correlators with internal fringe rotation)\nfield-of-view across the primary beam.\n\nCross-multiplication comes to dominate the total correlator processing budget\nfor large numbers of antennas. As a result, care must be taken both to reduce\nthe footprint of a complex multiplier\/accumulator and to make full and\nefficient use of the resources on an FPGA processor. The number of bits used\nto carry a signal should be minimized while retaining sufficient dynamic range\nto distinguish signal from noise. We have chosen to focus on 4-bit multipliers\nin current applications, and the subjects of dynamic equalization and Van Vleck\ncorrection generalized to 4 bits are explored in Section\n\\ref{sec:characterization} for optimizing signal-to-noise ratios (SNR) in our\ncorrelators. To make full use of FPGA resources, we construct\n4-bit complex multipliers using distributed logic, dedicated multiplier cores, \nand look-up tables implemented in Block RAMs. \n\nIt is possible to perform the bulk of an $N$-bit complex multiply in an $M$-bit\nmultiplier core by sign-extending numbers to $2N$ bits and combining them into\ntwo $M$-bit, unsigned numbers. Multiplying $(a+bi)(c+di)$, these\nrepresentations are $(2^{M-2N}a_s+b_s)$ and $(2^{M-2N}c_s+d_s)$, where\n$n_s$ denotes the unsigned two's complement representation of $n$ on $2N$\nbits (i.e. $n_s=2^{2N}+n$ for negative $n$). 
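A short numerical model makes the packing concrete. This is an illustration of the arithmetic only: here the small carry that spills between bit fields is read off directly in software, whereas a hardware implementation would apply an equivalent correction derived from the operand signs:

```python
def packed_complex_mult(a, b, c, d, N, M):
    """(a + b*i)*(c + d*i) for N-bit signed inputs via one wide multiply
    of operands packed into M-bit-style unsigned integers."""
    s = M - 2 * N                        # separation between packed fields
    mask = (1 << (2 * N)) - 1
    two = lambda n: n & mask             # n_s: two's complement on 2N bits
    sgn = lambda v: v - (1 << (2 * N)) if v >> (2 * N - 1) else v
    X = (two(a) << s) | two(b)           # 2**(M-2N)*a_s + b_s
    Y = (two(c) << s) | two(d)           # 2**(M-2N)*c_s + d_s
    P = X * Y                            # the single wide multiply
    bd = sgn(P & mask)                   # low field: b*d
    ad_bc = sgn((P >> s) & mask)         # middle field: a*d + b*c
    # spill-over of the middle field into the ac field (read off directly
    # here; hardware would correct it from the operand sign bits)
    carry = (two(a) * two(d) + two(b) * two(c)) >> s
    ac = sgn(((P >> (2 * s)) - carry) & mask)
    return ac - bd, ad_bc                # real and imaginary parts
```

For $N=3$, $M=18$ the recovered products match a direct complex multiply over the full input range, except at the single full-scale corner where $ad+bc=2^{2N-1}$ exceeds the asymmetric 2's complement range.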
The bits corresponding to $ac, ad+bc, bd$ may be selected from\nthe product, provided that the\nsign-extension to $2N$ bits shifts $a+d$ beyond the bits occupied by $ad$.\nThis yields the constraint: \n\\begin{equation} 6N-1 < M \\end{equation} \nThe 18-bit multipliers in current Xilinx \nFPGAs can efficiently perform 3-bit complex\nmultiplies, but fall short of 4 bits.\n\n\\section{System Integration}\n\\label{sec:integration}\n\n\\subsection{F Engine Synchronization}\n\\label{sec:F_synch}\n\n\\placefigure{fig:corr_vs_dly}\n\nThough we have touted GALS design principles for X engine processing,\ndigitization and spectral processing within F engines must be synchronized to a\ntime interval much smaller than a spectral window to avoid severe degradation\nof correlation response (Fig. \\ref{fig:corr_vs_dly}). This attenuation effect,\nresulting from the changing degree of overlap of correlated signals within a\nspectral window, can be caused by systematic signal delay between antennas, as\nwell as by source-dependent geometric delay; FX correlators with insufficient\nchannel resolution experience a narrowing of the field of view related to\nchannel bandwidth. This effect has been well explored for FX correlators\nemploying DFTs (see Chapter 8 of \\citet{thompson_et_al2001}), but Polyphase\nFilter Banks show a different response owing to a weighting function that\nextends well beyond the number of samples used in a DFT. 
\nGiven a standard form for PFB sample weighting of\n${\\rm sinc}\\left(\\frac{\\pi t}{N\\tau_s}\\right)\nW\\left(\\frac{t}{2TN\\tau_s}\\right)$, \nwhere $N$ is the number of output channels,\n$T$ is the number of PFB taps, $\\tau_s$ is the delay between time-domain\nsamples, and $W$ is an arbitrary windowing function that tapers to 0 at\n$\\pm1$, the gain versus delay $G(\\tau)$ of a PFB-based FX correlator is\ngiven by:\n\\begin{displaymath}\nG(\\tau)=\\int_{-\\infty}^{\\infty}{\n\\left[{\\rm sinc}\\left(\\frac{\\pi t}{N\\tau_s}\\right)\nW\\left(\\frac{t}{2TN\\tau_s}\\right)\\right] \\times\n\\left[{\\rm sinc}\\left(\\frac{\\pi (t-\\tau)}{N\\tau_s}\\right)\nW\\left(\\frac{t-\\tau}{2TN\\tau_s}\\right)\\right]\\ dt\n}\n\\end{displaymath}\n\nFor the purpose of F Engine synchronization, we\nrely on a one-pulse-per-second (1PPS) signal with a fast edge-rate provided\nsynchronously to a bank of F processors running off identical system clocks.\nThis signal is sampled by the system clock on each processor, and provided\nalongside ADC data. A slower, asynchronous ``arm'' signal is sent from\na central node to each F engine at the half second phase \nto indicate that the next 1PPS signal should be\nused to generate the reset event that synchronizes spectral windows and packet\ncounters. This ensures that samples from different antennas entering X engines\ntogether were acquired within one or two system clocks of one another. The\ndegree of synchronization is determined by the difference in path lengths of\n1PPS and the system clock from their generators to each F engine. 
This path\nlength can be determined from celestial source observations\nusing self-calibration, and barring temperature\neffects, will be constant for a correlator configuration following power-up.\n\n\\subsection{Asynchronous, Packetized ``Corner Turner''}\n\\label{sec:packetization}\n\nThe choice of the accumulation length $T_{acc}$ in X engines \ndetermines the natural size of UDP packets in our\npacket-switched correlator architecture. For current CASPER hardware where\nchannel-ordering occurs in IBOB SRAM, $T_{acc}$ is constrained by the available\nmemory to an upper limit of 128 samples for 2048-channel dual-polarization, \n4-bit,\ncomplex data, yielding a packet payload of 256 bytes. A header containing\n2 bytes of antenna index and 6 bytes of frequency\/time index is added to each\npacket to enable packet unscrambling on the receive side. The frequency\/time\nindex (hereafter referred to as the master counter, or MCNT) is a counter that\nis incremented every packet transmission. The lower bits count frequencies\nwithin a spectrum, and the rest count time. Combined with the antenna\nindex, MCNT completely determines the time, frequency, source, and destination\nof each packet; MCNT maps uniquely to a destination IP address.\n\n\\placefigure{fig:packet_rx}\n\nPacket reception (Fig. \\ref{fig:packet_rx}) is complicated by the realities of\npacket scrambling, loss, and interference. A circular buffer holding $N_{win}$\nwindows' worth of X engine data stores packet data as they arrive. The lower\nbits of MCNT act as an address for placing payloads into the correct\nwindow, and the antenna index addresses the position within that window. When\ndata arrives $N_{win}\/2$ windows ahead of a buffered window, that window is\nflagged for readout, and is processed contiguously on the next window boundary\nof the free-running X engine. 
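The indexing just described can be sketched as follows; the field widths and buffer depth below are illustrative choices, not those of a deployed system:

```python
FREQ_BITS = 11        # lower MCNT bits: 2048 frequency channels (illustrative)
N_WIN = 8             # windows held in the receive-side circular buffer

def split_mcnt(mcnt):
    """Split an MCNT into its (time, frequency) components."""
    return mcnt >> FREQ_BITS, mcnt & ((1 << FREQ_BITS) - 1)

def buffer_slot(mcnt, ant):
    """Map a packet header (MCNT, antenna index) to its location in the
    circular receive buffer: (window, antenna, channel)."""
    time, freq = split_mcnt(mcnt)
    return time % N_WIN, ant, freq
```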
Using packet arrival to determine when a window\nis processed allows a data-rate dependent time interval for all packets to\narrive, but pushes data through the buffer in the event of packet loss. On\nreadout, the buffer is zeroed to ensure that packet loss results in loss of\nsignal, rather than the introduction of noise. F engines can be intentionally\ndisconnected from transmission without compromising the correlation of\nthose remaining.\n\nPacket interference occurs when a well-formed packet contains an invalid MCNT\nas a result of switch latency, unsynchronized F engines, or system\nmisconfiguration. Such packets must be prevented from entering the receive\nbuffer, since they can lead to data corruption; one would prefer that a\nmisconfigured F engine antenna result in data loss for that antenna, rather\nthan data loss for the entire system. To ensure this behavior, incoming\npackets face a sliding filter based on currently active MCNTs. Packets are\nonly accepted if their MCNT falls within the range of what can currently be\nheld in the circular buffer. As higher MCNTs are received and accepted, old\nwindows are flagged for read out, freeing up buffer space for still\nhigher MCNTs. This system forces MCNTs to advance by small increments and\nprevents the large discontinuities indicative of packet\ninterference. In the eventuality that a receive buffer accidentally locks onto\nan invalid MCNT from the outset, a time-out clause causes the currently active\nMCNT to be abandoned for a new one if no new data is accepted into the receive\nbuffer.\n\nA final complication comes when implementing a bidirectional 10GbE transmission\narchitecture such as the one outlined in Figure \\ref{fig:ex_app1}.\nCommercial switches do not support\nself-addressed packet transmission; they assume that the transmitter\n(usually a CPU) intercepts these packets and transfers them to the receive\nbuffer. 
On FPGAs, this requires an extra buffer for holding ``loopback'', and\na multiplexer for inserting these packets into the processing stream. A simple\nmethod for this insertion would be to always insert loopback packets, if\navailable, and otherwise to insert packets from the 10GbE\ninterface. However, there is a maximum interval over which packets with\nidentical MCNTs can be scrambled before the receive system rejects\npackets for being outside of its buffer. This simple method has the\nundesirable effect of including switch latency in the time interval over which\npackets are scrambled, causing unnecessary packet loss. Our solution is to\npull loopback packets only after packets with the same MCNT \narrive through the switch.\n\n\\subsection{Monitor, Control, and Data Acquisition}\n\\label{sec:data_aq}\n\nThe toolflow we have developed for CASPER hardware provides convenient\nabstractions for interfacing to hardware components such as ADCs, DRAM, and 10\nGbE transceivers, and allows specified registers and BRAMs to be automatically\nconnected to CPU-accessible buses. On top of this framework, we run BORPH--an\nextension of the Linux operating system that provides kernel support for FPGA\nresources \\citep{so_broderson2006,so2007}. This system allows FPGA\nconfigurations to be run in the same fashion as software processes, and creates\na virtual file system representing the memories and registers defined on the\nFPGA. Every design compiled with this toolflow comes equipped with this\nreal-time interface for low- to moderate-bandwidth data I\/O. By emulating\nstandard file-I\/O interfaces, BORPH allows programmers to use standard\nlanguages for writing control software. The majority of the monitor, control,\nand data acquisition routines in our correlators are written in C\nand Python. 
For 8-16 antenna correlators, the bandwidth through BORPH on a\nBEE2 board is sufficient to support the output of visibility data with 5-10s\nintegrations.\n\nFor correlators with more antennas or shorter integration times, the bandwidth\nof the CPU\/FPGA interface is incapable of maintaining the full correlator\noutput. This limitation is being overcome by transmitting the final correlator\noutput using a small amount of the extra bandwidth on the 10GbE ports already\nattached to each X engine. After accumulation in DRAM, correlator output is\nmultiplexed onto the 10GbE interface and transmitted to one or more Data\nAcquisition (DA) systems attached to the central 10GbE switch. These systems\ncollect and store the final correlator output. With a capable DA system, the\nadded bandwidth through this output pathway can be used to attain millisecond\nintegration times, opening up opportunities for exploring transient events and\nincreasing time resolution for removing interference-dominated data. \n\nThe capabilities of correlators made possible by our research are placing\nnew challenges on DA systems \\citep{wright2005}. There is a severe (factor of\n100) mismatch between the data rates in the on-line correlator hardware and\nthose supported by the off-line processing. Members of our team are currently\npursuing research on how this can be resolved both for correlators and for\ngeneric signal processing systems using commercially available compute\nclusters. 
For correlators, our group is currently exploring how to implement\ncalibration and imaging in real-time to reduce the burden of\nexpert data reduction on the end user, and to make best use of both telescope\nand human resources.\n\n\n\\section{Characterization}\n\\label{sec:characterization}\n\n\\subsection{ADC Crosstalk}\n\\label{sec:crosstalk}\n\n\\placefigure{fig:crosstalk}\n\nCrosstalk is an undesirable but prevalent characteristic of analog systems\nwherein a signal is coupled at a low level into other pathways. This can pose\na major threat to sensitivity in systems that integrate noise-dominated data to\nreveal low-level correlation. For CASPER hardware, we have examined crosstalk\nlevels between signal inputs sharing an ADC chip, and between different ADC\nboards on the same IBOB. Figure \\ref{fig:crosstalk} illustrates a one-hour\nintegration of uncorrelated noise of various bandwidths input to the ``Pocket\nCorrelator'' system (see Section \\ref{sec:deployments}). Between inputs \nof the same ADC board, a coupling coefficient of $\\sim0.0016$ indicates\ncrosstalk at approximately $-28$ dB. This coupling is a factor of $5$ higher\nthan the $-35$ dB isolation advertised by the Atmel ADC chip, and is most\nlikely the result of board geometry and shared power supplies. Crosstalk\nbetween inputs on different ADCs also peaks at the $-28$ dB level, but shows\nmore frequency-dependent structure.\n\n\\placefigure{fig:crosstalk_stability}\n\nCrosstalk may be characterized and removed, provided that its timescale for\nvariation is much longer than the calibration interval. Figure\n\\ref{fig:crosstalk_stability} demonstrates that for integration intervals\nranging from 7.15 seconds to approximately 1 day (the limit of our testing),\ncrosstalk amplitudes and phases measured in a lab test vary around stable\nvalues that, when subtracted, yield noise that integrates down with time. 
Even\nthough crosstalk is encountered at the $-28$ dB level, its stability allows\nsuppression to at least $-62$ dB. This stability has allowed crosstalk\nto be removed post-correlation, and we have until recently deferred\nadding phase switching. Developments along this line are proceeding by\nintroducing an invertible mixer (controlled via a Walsh counter on an IBOB)\nearly in the analog signal path, and removing this inversion after\ndigitization. Phase switching must be coupled with data blanking near \nboundaries when the\ninversion state is uncertain. Blanking will be most easily implemented by\nintentionally dropping packets of data from F engine transmission, and by\nproviding a count of results accumulated in each integration for normalization\npurposes.\n\n\\subsection{XAUI Fidelity and Switch Throughput}\n\\label{sec:10gbe_sw}\n\nCASPER boards are currently configured to transmit XAUI protocol over CX4 ports\nas a point-to-point communication protocol and as the physical layer of 10GbE\ntransmission. Because the Virtex-II FPGAs used in current CASPER hardware do\nnot fully support XAUI transmission standards \\cite{xilinx_ug024,xilinx_ds083}, \ncurrent devices can have\nsub-optimal performance for certain cable lengths. We expect the new ROACH\nboard, which employs Virtex-5 FPGAs, to have better\nperformance in this regard. For cable lengths supported in current hardware,\nwe tested XAUI transmission fidelity using matched Linear Feedback Shift\nRegisters (LFSRs) on transmit and receive. Error detection was verified using\nprogrammable bit-flips following transmitting LFSRs. Over a period of 16\nhours, 573 Tb of data were transmitted and received on each of 8 XAUI\nlinks. During this time, no errors were detected, implying an estimated upper\nbound on the bit-error rate of $2.2\\cdot 10^{-16}$ errors per bit. 
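The quoted figure follows from simple counting: with no errors observed, the implied bound is roughly one error in the total number of bits moved across all eight links. A quick check of the arithmetic, assuming the bound is taken as one error over the total bits transferred:

```python
# 573 Tb moved on each of 8 XAUI links over 16 hours with no errors.
bits_per_link = 573e12
links = 8
seconds = 16 * 3600

total_bits = bits_per_link * links       # ~4.6e15 bits in total
ber_bound = 1.0 / total_bits             # <= one error per total bits
per_link_rate = bits_per_link / seconds  # ~10 Gb/s, near XAUI line rate
```

The resulting bound is approximately 2.2e-16 errors per bit, and the implied per-link throughput is consistent with 10 Gb/s operation.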
We also tested the capability of two\nFujitsu switches (the XG700 and the XG2000) for performing the full\ncross-connect packet switching required in our FX correlator architecture. By\ntuning the sample rate inside F engines of an 8-antenna (4-IBOB) packetized\ncorrelator, we controlled the transmission rate per switch port over a range of\n5.96 to 8.94 Gb\/s. In 10-minute tests, packet loss was zero for both\nswitches in all but the 8.94 Gb\/s case. Packet loss in this final case was\ntraced to intermittent XAUI failure as a result of imperfect compliance with\nthe XAUI standard, as described previously. Overheating of FPGA chips in the\nfield has also been reported as a source of intermittent operation.\n\n\\subsection{Equalization and 4-Bit Requantization}\n\\label{sec:equalization}\n\n\\placefigure{fig:4_bit_quant}\n\nCorrelator processing resources can be reduced by limiting the bit width of\nfrequency-domain antenna data before cross-multiplication. However, digital\nquantization requires careful setting of signal levels for optimum\nSNR and subsequent calibration to a linear power scale \n\\citep{thompson_et_al2001,jenet_anderson1998}. Correlators using 4 bits \nrepresent\nan improvement over their 1 and 2 bit predecessors, but there are still\nquantization issues to consider. The total power of a 4-bit quantizer has a\nnon-linear response with respect to input level as shown in Figure\n\\ref{fig:4_bit_quant}. In currently deployed correlators, we perform\nequalization (per channel scaling) to control the RMS channel values before\nrequantizing from 18 bits to 4 bits. This operation saturates RFI and flattens\nthe passband to reduce dynamic range and to hold the passband in\nthe linear regime of the 4-bit quantization power curve. Equalization is\nimplemented as a scalar multiplication on the output of each PFB using 18-bit\ncoefficients from a dynamically updateable memory. 
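As a toy model of this step, the following sketch applies a per-channel scale factor and then saturates into the signed 4-bit range. The floating-point coefficients stand in for the deployed 18-bit fixed-point ones, and the rounding scheme is an assumption for illustration.

```python
def requantize_4bit(channel_values, eq_coeffs):
    """Per-channel equalization followed by saturating 4-bit requantization.

    Saturating (rather than wrapping) clips strong RFI and, with suitably
    chosen coefficients, holds the passband in the linear regime of the
    4-bit quantizer's power curve.
    """
    out = []
    for x, g in zip(channel_values, eq_coeffs):
        y = int(round(x * g))              # scale, then round to integer
        out.append(max(-8, min(7, y)))     # saturate into [-8, 7]
    return out
```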
These coefficients allow\nfor automatic gain control to maintain quantization fidelity through changing\nsystem temperatures.\n\n\\section{Deployments and Results}\n\\label{sec:deployments}\n\n\\subsection{A Pocket Correlator}\n\\label{sec:pocket_corr}\n\n\\placefigure{fig:f_engine}\n\nThe ``Pocket Correlator'' (Fig. \\ref{fig:f_engine}) is a single IBOB system\nthat includes F and X engines on a single board for correlating and\naccumulating 4 input signals. Each input is sampled at 4 times the FPGA clock\nrate (which runs up to 250 MHz), and a down-converter extracts half of the\ndigitized band. This subband is decomposed into 2048 channels by an 8-tap PFB,\nequalized, and requantized to 4 bits. With all input signals on one chip, X\nprocessing can be implemented directly as multipliers and vector accumulators,\nrather than as X engines. Limited buffer space on the IBOB permits only 1024\nchannels (selectable from within the 2048) to be accumulated. Output occurs\neither via serial connection (with a minimum integration time of 5\nseconds) or via 100-Mbit UDP transmission (with a minimum integration time in\nthe millisecond range). This system can act as a 2-antenna, full Stokes\ncorrelator, or as a 4-antenna single polarization correlator.\n\n\\placefigure{fig:skymap}\n\nThe Pocket Correlator is valuable as a simple, stand-alone instrument, and for\nboard verification in larger packetized systems. It is being applied as a\nstand-alone instrument in PAPER, the ATA, and the UNC PARI observatory. A\n4-antenna, single polarization deployment of the PAPER experiment in Western\nAustralia in 2007 used the Pocket Correlator to collect the data used to\nproduce a 150 MHz all-sky map illustrated in Figure \\ref{fig:skymap}. 
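With all inputs on one chip, the Pocket Correlator's X processing amounts to forming every auto- and cross-product per frequency channel and accumulating, with no packetized X engines. A minimal software stand-in (the data layout and function name are invented here):

```python
def accumulate_visibilities(spectra, acc):
    """Directly cross-multiply one set of per-input spectra into `acc`.

    `spectra[i]` is a list of complex channel values for input i; `acc`
    maps a baseline (i, j) to its list of accumulated products. A toy
    stand-in for the multiplier/vector-accumulator X stage.
    """
    n_inputs = len(spectra)
    for i in range(n_inputs):
        for j in range(i, n_inputs):       # autos (i == j) and crosses
            bl = acc.setdefault((i, j), [0j] * len(spectra[i]))
            for ch, (a, b) in enumerate(zip(spectra[i], spectra[j])):
                bl[ch] += a * b.conjugate()  # accumulate a_i * conj(a_j)
    return acc
```

For 4 inputs this yields 10 baselines (4 autos and 6 crosses) per channel.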
In\naddition to demonstrating the feasibility of post-correlation crosstalk\nremoval, this map (specifically, the imperfectly removed sidelobes of sources)\nillustrates a problem that will require real-time imaging to solve for large\nnumbers of antennas.\n\n\\subsection{An 8-Antenna, 2-Stokes, Synchronous Correlator}\n\\label{sec:8_ant_corr}\n\nThis first-generation multi-board correlator demonstrated the functionality\nof signal processing algorithms and CASPER hardware, but predated the\ncurrent packetized architecture--it operated synchronously. This version of\nthe correlator was most heavily limited by X engine resources, all of which\nwere implemented on a single FPGA to simplify interconnection. The\ntotal number of complex multipliers in the X engines of an $N_{ant}$ antenna\narray is: $N_{cmac} = \\lfloor {N_{ant}\/2}+1 \\rfloor \\times N_{ant}\\times N_{pol}$; the\nlimited number of multipliers on a BEE2 FPGA only allowed for supporting half\nthe polarization cross-multiples. This system was an\nimportant demonstration of the basic capabilities of our hardware and software,\nand provided a starting point for evolving a more sophisticated system. \nDeployments of this\nsystem at the NRAO site in Green Bank as part of the PAPER\nexperiment, and briefly\nat the Hat Creek Radio Observatory for the Allen Telescope Array,\nare being superseded by the packetized correlator presented in the next\nsection.\n\n\\subsection{A 16-Antenna, Full-Stokes, Packetized Correlator}\n\\label{sec:packet_deploy}\n\nThis packetized FX correlator is a realization of the architecture outlined in\nFigure \\ref{fig:ex_app1}, with F processing for 2 antennas implemented on each\nIBOB, and matching X processors implemented on each corner FPGA of two BEE2s.\nEach F processor is identical to a Pocket Correlator (Fig. \\ref{fig:f_engine}),\nbut branches data from the equalization module to a matrix transposer in IBOB\nSRAM to form frequency-based packets. 
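The multiplier count above is easy to tabulate; the helper below simply evaluates the quoted expression for a few system sizes (the function name is ours):

```python
def n_cmac(n_ant, n_pol):
    """Complex multipliers needed by the X engines, per the expression
    N_cmac = floor(N_ant/2 + 1) * N_ant * N_pol given in the text."""
    return (n_ant // 2 + 1) * n_ant * n_pol
```

Reading $N_{pol}$ as the number of polarization products retained, the 8-antenna system would need 160 multipliers for all four cross-multiples, while keeping half of them requires 80.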
Packet data for each antenna are\nmultiplexed through a point-to-point XAUI connection to a BEE2-based X\nprocessor, and then relayed in 10GbE format to the switch. The number of\nchannels in this system is limited to 2048 by memory in IBOB SRAM for\ntransposing the 128 spectra needed to meet bandwidth restrictions between X\nengines and DRAM-based vector accumulators.\n\n\\placefigure{fig:x_processor}\n\nThe X processor in this packetized correlator implements the transmit and\nreceive architecture illustrated in Figure \\ref{fig:x_processor} \nfor two X engines sharing the same 10GbE link.\nEach X engine's data processing rate is\ndetermined by packets arriving in its own receive buffer, and results are\naccumulated in separate DRAM DIMMs. The accumulated output of each X processor\nis read out of DRAM at a low bandwidth and transmitted via 10GbE packets to\na CPU-based server where\nall visibility data is collected and\nwritten to disk in MIRIAD format\n\\citep{sault_et_al1995} using interfaces from the Astronomical Interferometry\nin PYthon (AIPY) package\\footnote{http:\/\/pypi.python.org\/pypi\/aipy}.\n\nThe clocks for the BEE2 FPGAs are asynchronous 200-MHz oscillators, and IBOBs\nrun synchronously at any rate lower than this. Packet transmission is\nstatically addressed so that each X engine processes every 16th channel.\nWe use 8 ports of a Fujitsu XG700 switch to route data. This system is\nscalable to 32 antennas before two X engines no longer fit on a single FPGA.\nFor larger systems, the number of BEE2s will scale as the square of the number\nof antennas, and the number of IBOBs will scale linearly. A 32-antenna,\n200-MHz correlator on 16 IBOBs and 4 BEE2s is now working in the lab, and a\n16-antenna version using 8 IBOBs and 2 BEE2s has been deployed to the NRAO site\nin Green Bank with the PAPER experiment. 
\n\n\\section{Conclusion}\n\\label{sec:conclusion}\n\nBy decreasing the time and engineering costs of building and upgrading\ncorrelators, we aim to reduce the total cost of correlators for a wide range of\nscales. Small- and medium-scale correlators with total cost dominated by\ndevelopment clearly stand to benefit from our research. It is less clear if\nthe cost of large-scale correlators can be reduced by the general-purpose\nhardware used in our architecture. Though minimization of replication cost\nfavors the development of specialized parts, there are two factors\nthat can make a generic, modular solution cost less.\n\nThe first factor to consider is time to deployment. Even if the monetary cost\nof development is negligible in the budget of a large correlator, the cost of\ndevelopment time can be significant. If a custom solution takes several years\nto go from design to implementation, the hardware that is deployed will be out\nof date. Moore's Law suggests that when a custom solution taking 3 years to\ndevelop is deployed, there will exist processors 4 times more powerful, or 4\ntimes less expensive for the equivalent system. The cost of a generic, modular\nsystem has to be tempered by the expected savings of committing to hardware\ncloser to the ultimate deployment date.\n\nThe second factor is the cost of upgrade. Many facilities (including the ATA)\nare beginning to appreciate the advantages of designing arrays with wider\nbandwidths and larger numbers of antennas than can be handled by current\ntechnology. Correlators may then be implemented inexpensively on scales\nsuited to current processors, and upgraded as more powerful processors\nbecome available. Modular solutions facilitate this methodology.\n\n\n\\acknowledgments\n\nThis and other CASPER research are supported by the National Science Foundation\nGrant No. 0619596 for Low Cost, Rapid Development Instrumentation for Radio\nTelescopes. 
We would like to acknowledge the students, faculty and sponsors of\nthe Berkeley Wireless Research Center, and the National Science Foundation\nInfrastructure Grant No. 0403427. Correlator development for the PAPER\nproject is supported by NSF grant AST-0505354, and for the ATA project by NSF\ngrant AST-0321309 as well as the Paul G. Allen Foundation. Chips and software\nwere generously provided by Xilinx, Inc. JM and PM gratefully acknowledge\nfinancial support from the MeerKAT project and South Africa's National Research\nFoundation.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nAn accurate and scalable multi-target tracking solution is a critical component of many wide-area urban surveillance systems.\nFor example, human and vehicle detection with closed-circuit television (CCTV) networks leverages multiple bearing-only sensors to uniquely track targets throughout a city \\cite{Chen2015,Hou2016,Anuj2017,Yang2009}.\nAnother important area involves tracking unauthorized unmanned aerial vehicles (UAVs) using heterogeneous and spatially distributed sensors \\cite{Poullin2018,Jovanoska2018,Ganti2016}. \nFor commercial UAVs that stream video and telemetry data, passive RF detection mechanisms have also been suggested \\cite{Watson2017,Watson2016,Scerri2007}.\nAcross all of these applications, the deployment and positioning of the sensors over time has a major impact on multi-target tracking performance. 
\nThis is especially true when tracking with passive sensors, which requires fusing multiple sensors to unambiguously resolve target position and velocity.\nExamples of these passive sensor types include received signal strength indicator (RSSI), time difference of arrival (TDOA), frequency difference of arrival (FDOA), and angle of arrival (AOA).\n\nSensor deployment and path planning for multi-target tracking falls under the broad research area of \\emph{sensor control}.\nSensor control began receiving considerable attention from the information fusion community in the late 1990s \\cite{Hero2011, Ng2000}.\nMany of the techniques in the area initially focused on dynamic reconfiguration of individual sensors in order to maintain strong target tracking performance (e.g., beam scheduling \\cite{Krishnamurthy2001}, waveform selection \\cite{Kershaw1994,Sira2009}). \nIn the early 2000s, however, the focus shifted to include sensor selection for wireless sensor network (WSN) applications \\cite{Ramya2012, Cao2016}.\nThe size, weight, and power (SWaP) requirements of these systems necessitated sensor control techniques that could balance tracking performance with the energy cost of obtaining and communicating sensor measurements across the network \\cite{Guo2004,Zuo2008,Masazade2013,Niu2018}. \nDecentralized multi-target tracking techniques were proposed to maintain communication bandwidth scalability and resilience to sensor failure \\cite{Hlinka2013,Hlinka2014,Meyer2018,Msechu2008,Ribeiro2006,Zuo2010}. \nThe majority of WSN applications focused on stationary installations, allowing \\emph{offline} solutions to the problem of sensor deployment optimization.\nA number of WSN deployment optimization solvers were proposed by drawing analogies to the NP-hard art gallery problem from computational geometry \\cite{Efrat2005}. 
\nMetaheuristics formed the basis for many of these solvers, including techniques such as particle swarm optimization and genetic algorithms \\cite{Bojkovic2008,Kulkarni2011,Perez2015,Hu2008}.\nThe sensor control problem for \\emph{online} path planning in the context of multi-target tracking, however, is significantly more challenging and less studied.\nAs opposed to existing offline techniques for path planning in mobile sensor networks \\cite{Singh2009}, the multi-target tracking variation of the problem necessitates an online solution due to the lack of \\emph{a priori} information on target trajectories.\n\nVery few online mobile sensor control techniques in the current state-of-the-art are capable of addressing the unique challenges associated with non-cooperative target surveillance in urban environments.\nThis is primarily because the urban environment highly constrains sensor coverage and maneuverability based on terrain elevation and building geometries. \nThe same shadowing issues that make target sensing difficult also introduce challenges in maintaining end-to-end network connectivity, thus rendering centralized fusion approaches impractical.\nStrong performance and safe operation of a mobile sensor network in this scenario require an understanding of how the terrain impacts the relevant tracking and sensor control algorithms, all while maintaining decentralized operation.\n\nThe goal of this paper is to provide a brief summary of the current multi-target multi-sensor tracking approaches using a mobile sensor network.\nWe use this summary to highlight the main limitations that prevent immediate application of these architectures to the urban environment.\nIn the sections that follow, we briefly summarize the model generally assumed for the mobile sensor network control problem. 
\nFollowing this, we provide a brief literature review of the current state-of-the-art in multi-target tracking with mobile sensor networks in non-urban environments.\nWe then conclude by discussing three open challenges related to urban surveillance with commercial-off-the-shelf (COTS) UAVs, or more specifically, quadcopters.\n\n\n\\section{Problem Overview}\n\\subsection{Integrated Sensing and Control Architecture}\nFigure~\\ref{fig:sensingArch} shows a typical architecture used for decentralized target tracking and mobile sensor control for a single platform. \nA sensor interface provides derived target measurements, such as time of arrival, Doppler shift, TDOA\/FDOA, RSSI, or AOA.\nThe measurements are usually obtained under measurement origin uncertainty.\nThat is, it is not known \\emph{a priori} which sensor measurements correspond to clutter and which to existing targets.\nIn addition, measurements obtained from targets may be missed at a given time step.\nThe posterior distributions for each target from the previous time step are propagated forward in time under known target birth and survival dynamics.\nThe mechanism for performing this forward prediction is usually a variant of the Chapman-Kolmogorov integral \\cite{Sheldon2014,Chen2003}.\nA data association process uses these predicted posterior distributions to generate a mapping from the measurements to newborn and persisting targets.\nThe association map, sensor measurements, and platform telemetry are then used to apply the Bayes update for each target's predicted posterior distribution.\nIf the tracker update step is decentralized, a consensus process is used to jointly process the sensor log-likelihood messages over the network with one-hop neighbors.\nThe updated posterior distribution per target is used to perform state extraction, which generates state estimates and covariance ellipses.\nThe sensor control policy finally uses the updated posterior distribution to determine which platform 
control actions (e.g., heading, acceleration, or waypoint) to use for the next time step.\nA separate consensus procedure may also occur to synchronize agent control actions.\n\n\\begin{figure*}\n\\centerline{\\includegraphics[width=0.75\\textwidth]{fig1.png}}\n\\caption{Typical estimation and control architecture used for multi-target tracking. Network communication interfaces shown for decentralized operation. Interfaces shown as gray boxes. Algorithms shown as blue boxes.}\n\\label{fig:sensingArch}\n\\end{figure*}\n\n\n\\subsection{POMDP Formulation for Mobile Sensor Control}\nControl of mobile sensor networks for multi-target tracking typically follows a \\emph{partially observable Markov decision process} (POMDP) formulation \\cite[Chapter 5.6]{Bertsekas2012}.\nIn a POMDP, the target states (i.e., position and velocities) evolve according to a Markov process and are observed indirectly through sensor measurements.\nSensor states also evolve according to a Markov process based on the control action applied at the current time step.\nDepending on the inertial sensor and kinematic models, the corresponding relationship between platform state dynamics and control may be deterministic or stochastic with directly or partially observable states.\nThe relationship between the sensor measurements and the target and platform states at the current time step is given by a set of likelihood functions.\nThe reward function is designed to capture target tracking goals (e.g., minimized cumulative uncertainty in state estimates), obstacle and inter-agent collision avoidance requirements, and constraints on platform control actions.\nGiven this reward function defined over target states, platform states, and platform actions, the goal is to construct a closed-loop control policy that maximizes the infinite horizon expected cost-to-go (i.e., discounted cumulative reward).\nThe information available for a control policy at the current time step is the measurement history of all 
target and platform states and the control inputs used at each platform.\nFor simplicity, the following discussion will assume that the sensor platform states are completely observable.\n\nTo prevent the growth of the control and state space dimensionality as new measurements are obtained, the POMDP model is typically reformulated into an equivalent Markov decision process (MDP).\nThis is accomplished using a sufficient statistic that subsumes the measurements up until the current time step \\cite[Chapter 4.3]{Bertsekas2017}.\nThe corresponding sufficient statistic is termed the \\emph{belief state} of the system.\nThe belief state for the sensor control application here is the posterior probability of the target states given the observed measurements up until the current time step.\nIn general, the belief state per target is estimated using application-specific variations of the recursive Bayes filter \\cite{Chen2003}.\nFor the multi-target case, extensions exist for the joint target probability under soft associations \\cite{Barshalom1988} or for the multi-target probability distribution under the random finite sets (RFS) formalism \\cite{Mahler2014}.\nThe sensor control reward function at each time step is then mapped to the belief states through an information-theoretic measure of the quality of the current target state estimate.\nThe most commonly used measure is the \\emph{mutual information} \\cite{Cover2006} between the future states per target and the predicted sensor measurements obtained over a finite horizon lookahead window.\nEach plausible control action affects the locations of the platforms at future time steps, which in turn affects the posterior belief state for each target.\nThe core idea is that the mutual information metric quantifies how the sharpness of the belief state per target changes over a finite horizon lookahead under each control action.\n\nDespite this simplification, the resulting \\emph{belief MDP} state space is a subset of the 
space of multivariate probability distributions.\nThese belief states are always continuous, even if the partially observable states are discrete.\nAs a result, very few closed-form optimal policies for belief MDPs exist.\nThe most well known solution is for the case of a linear Gaussian POMDP with quadratic cost. \nHere, the optimal solution reduces to a Kalman update of the belief state, and the control solution results from solving the discrete-time algebraic Riccati equation \\cite{Georgiou2013,Aastrom2012}.\nIn all other cases, the policies must be determined using approximate online dynamic programming techniques for infinite dimensional state spaces, such as model predictive control (MPC) or stochastic rollout \\cite[Chapter 6.4-5]{Bertsekas2017}.\nFor these techniques, achieving real-time implementation of the control policy involves an application-specific treatment of computational complexity.\n\n\n\n\\section{Related Work in Non-Urban Environments}\n\\subsection{Myopic Control of Mobile Sensors}\nRistic and Vo in \\cite{Ristic2010} used the RFS formalism \\cite{Mahler2014} to derive a myopic sensor control policy for a single integrator (i.e., velocity controlled) plant. \nThe tracking algorithm was a particle filter approximation of the multi-target Bayes filter.\nThis controller used the Renyi divergence between the predicted and the future expected multi-object posterior after obtaining measurements from range-only mobile sensors.\nRistic et al. provided a similar myopic scheme in \\cite{Ristic2011} for range-only tracking, but specialized the multi-object Renyi divergence of \\cite{Ristic2010} to the more computationally tractable probability hypothesis density (PHD) filter \\cite{Vo2005}.\n\nGostar et al. 
in \\cite{Gostar2017} leveraged the Cauchy-Schwarz divergence for Poisson point processes \\cite{Hoang2015} to derive myopic sensor control policies for a sequential Monte Carlo implementation of the labeled multi-Bernoulli (LMB) filter.\nTo maximize computational efficiency, the Cauchy-Schwarz divergence between the PHDs of the LMB filter's predict and update steps (per control action) was used as the reward function.\nThis reward was efficiently realized by evaluating the difference between each predicted and updated particle systems' weights, scaled by each target's probability of existence.\nA similar approach was applied by Gostar et al. to the cardinality-balanced multi-target multi-Bernoulli (CB-MeMBer) filter in \\cite{Gostar2016}.\nTo further accelerate computation, both \\cite{Gostar2017} and \\cite{Gostar2016} applied certainty equivalent (i.e., noiseless) measurement models when predicting future filter states per control action.\n\nKoohifar et al. in \\cite{Koohifar2018} provided a single sensor myopic control policy based on the steepest descent direction of the predicted posterior Cramer-Rao lower bound (PCRLB). \nThis policy generalized their previous work in \\cite{Koohifar2017} by deriving the sensor likelihoods for an RSSI-only measurement.\nThe RSSI measurement likelihood further modeled packet transmission statistics via a Bernoulli process.\nThe plant model was assumed velocity controllable, where the heading at each control step was chosen from a fixed quantized set.\n\nHoffman and Tomlin in \\cite{Hoffman2010} leveraged a particle filtering solution and distributed myopic control policy for a constrained double integrator (i.e., acceleration controlled) plant using bearing-only or range-only measurements. \nThe plant was designed to model the STARMAC quadcopter \\cite{Hoffmann2007} moving at slow speeds. 
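Most of the trackers surveyed in this section are sequential Monte Carlo approximations of the recursive Bayes filter. Its predict and update cycle can be shown exactly on a small discrete grid; the transition and likelihood values in the example are invented for illustration.

```python
def bayes_step(belief, transition, likelihood):
    """One predict/update cycle of a discrete recursive Bayes filter.

    `belief[i]` is the posterior over grid cells from the previous step,
    `transition[i][j]` is p(next cell j | current cell i), and
    `likelihood[j]` is p(measurement | cell j). A toy stand-in for the
    Chapman-Kolmogorov prediction and Bayes update used by the cited
    particle-filter implementations.
    """
    n = len(belief)
    # Chapman-Kolmogorov prediction: marginalize over the prior cells.
    predicted = [sum(belief[i] * transition[i][j] for i in range(n))
                 for j in range(n)]
    # Bayes update: weight by the measurement likelihood and normalize.
    unnorm = [predicted[j] * likelihood[j] for j in range(n)]
    z = sum(unnorm)
    return [u / z for u in unnorm]
```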
\nThe reward function was the mutual information between the predicted future target states and measurements.\nTo maintain computational tractability, mutual information was evaluated using either local contributions per node, or limited pair-wise contributions between nodes.\n\nDames and Kumar in \\cite{Dames2013} leveraged the PHD filter to construct a distributed sensor control scheme for indoor unmanned ground vehicles (UGV).\nIn contrast to \\cite{Hoffman2010}, the policy used the mutual information between the predicted target states and the binary event of an agent observing an empty measurement set at subsequent time steps. \nDecentralized estimation and control was achieved by transitioning sensors through operating modes, and organizing them into smaller sub-groups where their PHD states were directly synchronized.\n\nMeyer et al. in \\cite{Meyer2015} proposed a myopic gradient descent algorithm for a class of decentralized multi-target tracking algorithms based on loopy belief propagation (BP). \nThe cost function used was the conditional entropy of the target states at the next time step given the expected sensor measurements at the next time step. \nThe relevant BP messages and conditional entropy gradient were approximated using multiple particle systems under perfect knowledge of the number of targets and the target-to-measurement association.\nAlthough the simulations presented in \\cite{Meyer2015} were for single integrator dynamics, the corresponding technique is general enough to accommodate non-linear sensor and target dynamic models.\n\nChung et al. in \\cite{Chung2006} proposed a decentralized myopic control strategy based on memoryless, zero cross-covariance, track-to-track fusion. 
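A myopic reward of the kind used in \cite{Chung2006}, the determinant of a Kalman-fused covariance, can be sketched for a 2-D position estimate. The identity measurement model and the candidate-action noise values below are illustrative assumptions, not the cited formulation.

```python
def det2(m):
    """Determinant of a 2x2 matrix given as nested lists."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def inv2(m):
    """Inverse of a 2x2 matrix."""
    d = det2(m)
    return [[m[1][1] / d, -m[0][1] / d], [-m[1][0] / d, m[0][0] / d]]

def fused_cov(prior, meas_noise):
    """Information-form Kalman fusion of a 2x2 prior covariance with one
    position measurement of covariance `meas_noise` (identity H)."""
    ip, ir = inv2(prior), inv2(meas_noise)
    info = [[ip[i][j] + ir[i][j] for j in range(2)] for i in range(2)]
    return inv2(info)

def best_action(prior, noise_for_action):
    """Myopic policy: choose the action whose predicted measurement
    minimizes the determinant of the fused covariance, a proxy for the
    entropy of the fused estimate."""
    return min(noise_for_action,
               key=lambda a: det2(fused_cov(prior, noise_for_action[a])))
```

In this toy setting an action that yields a lower-noise measurement (e.g., closing range on the target) is always preferred, since it shrinks the fused covariance determinant the most.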
\nThe individual sensors provided range-bearing measurements which were fused decentrally using the Kalman equations.\nThe reward function was the determinant of the fused covariance matrix, which is directly related to the entropy of the fused target state estimate.\nThe reward gradient was derived, and consisted of a sum of per-sensor reward gradients.\nAs a result, the control policy was the same for each sensor based on its state, sensor model, and fused target covariance estimates.\nA similar controller was derived for the case where imperfect communications contribute to additional errors in the fused estimates.\n\n\\subsection{Non-myopic Control of Mobile Sensors}\nBeard et al. in \\cite{Beard2017} used the generalized labeled multi-Bernoulli (GLMB) filter to apply the Cauchy-Schwarz divergence for Poisson point processes \\cite{Hoang2015} as the sensor control reward function. \nThe authors also proposed the use of RFS void probabilities to achieve collision avoidance with targets.\nAn example of controlling a single range-bearing sensor tracking multiple targets under measurement origin uncertainty was presented.\nA finite horizon controller was simulated that assumed a constant velocity plant with instantaneously controllable heading.\nClosed-form equations of the Cauchy-Schwarz divergence for the case when the GLMB single object posterior densities are modeled as Gaussian mixtures were provided.\nThe derivations in \\cite{Beard2017} for the GLMB can also be applied to the LMB filter as a special case, but exhibit higher complexity than those described by Gostar in \\cite{Gostar2016,Gostar2017}.\nImplementation details of these control techniques, including pseudo-code, can be found in \\cite{Beard2016}.\n\nDames and Kumar in \\cite{Dames2014} demonstrated a non-myopic tracking and control solution on real UGVs.\nThe tracking and control algorithms included a particle filter implementation of the PHD filter and an online estimate of mutual information. 
\nSimilar to \\cite{Dames2013}, the non-myopic policy was achieved by evaluating the mutual information against the potential of observing empty measurement sets.\nReceding horizon control was achieved through a combination of efficient action-set generation and adaptive sequential information planning \\cite{Charrow2014}.\n\nAtanasov et al. in \\cite{Atanasov2014} proposed a reduced value iteration (RVI) algorithm demonstrated on a target-linearized range-bearing measurement model for a single sensor. \nAn important distinction was that the relationship between platform state and the observed measurements remained non-linear.\nThe specific technique did not require linearized sensor platform dynamics, and as such, it was demonstrated in simulation for a single target tracking scenario under differential drive dynamics.\nThis RVI algorithm was later generalized by Schlotfeldt et al. in \\cite{Schlotfeldt2018} to an anytime planning algorithm.\nThe resulting technique, denoted Anytime-RVI (ARVI), was decentralized and tested on a set of quadcopters attempting to localize ground-based robots using range and bearing estimates.\n\nRagi and Chong in \\cite{Ragi2013} assumed linear-Gaussian state and measurement dynamics and applied a joint-probabilistic data association (JPDA) tracker \\cite{Barshalom1988}.\nThe sensor control technique used in that work is known as nominal belief-state optimization (NBO) \\cite{Miller2009}.\nNBO is a POMDP approach that assumes the associated belief-states (i.e., per-target posteriors) are completely characterized by a normal distribution (presumably through a Kalman update).\nA certainty-equivalent principle was applied to remove the expectation across belief states.\nBoth single- and multi-step lookahead rollout approaches were provided.\nThe approach in \\cite{Ragi2013} additionally considered forward acceleration thrust and heading dynamics for the platform under wind force disturbances.\nInter-agent collision and obstacle avoidance
constraints were considered by including a scaled regularization parameter in the cost-to-go function.\n\nGrocholsky et al. in \\cite{Grocholsky2003} assumed a fixed wing aircraft with constant forward velocity and controllable yaw rate to implement a decentralized control rule for bearing-only sensors.\nDecentralized data fusion was achieved by leveraging the information form of the Kalman filter \\cite{Manyika1994}.\nThe control law used the expected mutual information gain of the information matrix at the beginning and end of a finite lookahead window.\nThis law was made computationally feasible by linearizing the measurement and sensor state evolution dynamics and solving the resulting linear-quadratic-Gaussian (LQG) optimal control problem. \n\n\n\\section{Challenges Specific to Urban Surveillance}\n\\subsection{Terrain-aware Tracking and Sensor Control}\\label{sec:terrain}\nThe primary sensor control challenge in urban surveillance involves understanding how the terrain and building geometries affect tracking performance. \nIn a camera based solution, for example, the observed measurements are AOAs where the probability of detection is dependent on the platform's ability to maintain line-of-sight (LOS) on targets. \nA similar argument can be used for passive RF observations from low power transmitters, where the detectability of multi-path effects is negligible\\footnote{Examples of localization in multi-path rich environments have been proposed based off of pattern recognition techniques \\cite{Tsalolikhin2011}. 
These are outside of the scope of this review.}.\nIf targets maneuver into a non-line-of-sight (NLOS) region with respect to all sensors, the uncertainty on the target's position and velocity increases due to the lack of measurement updates.\nThus, platform maneuvers that keep as many targets as possible within LOS of their corresponding sensors will lead to an increase in the mutual information between predicted future target states and measurements.\nConsequently, the number of sensors that have LOS to a given target, and their sensing geometry in the LOS region, is also important.\n\nThere are a number of related studies that provide tracking functionality for targets constrained to road networks.\nUlmke and Koch in \\cite{Ulmke2006} described a particle filtering technique for tracking a single target maneuvering on partially obstructed road networks.\nThe authors showed that improved tracking performance results when conditioning the measurement detection process on LOS\/NLOS information.\nUlmke et al. extended their work in \\cite{Ulmke2006} to the RFS formalism in \\cite{Ulmke2010} using the Gaussian-mixture cardinalized PHD filter \\cite{Vo2007}.\nWithin these efforts, the detectable regions were constrained by the road network as observed from an overhead sensor.\nIn the general urban surveillance case, the sensors may not necessarily be overhead.\nFurthermore, many practical target types will not be constrained to road networks (e.g., unauthorized UAVs).\n\nThe conditioning of the sensor control policy on LOS\/NLOS sensing regions as described above necessitates a non-parametric approximation of the belief state per target. \nAcross all applications, these approximation techniques are computationally complex and make implementing non-myopic policies at high update rates very challenging.
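In the simplest gridded-terrain setting, the LOS/NLOS conditioning described above reduces to a ray-cast between the sensor cell and the target cell. The following sketch is purely illustrative (the occupancy-grid representation, function names, and detection probabilities are our own assumptions, not taken from the cited works):

```python
import numpy as np

def has_line_of_sight(grid, sensor, target):
    """Return True if no occupied cell lies strictly between sensor and target.

    grid   : 2D numpy array, nonzero entries are obstacles
    sensor : (row, col) of the sensor cell
    target : (row, col) of the target cell
    Uses uniform sampling along the segment, a simple stand-in for the
    Bresenham/DDA grid traversal used in real ray-casting.
    """
    (r0, c0), (r1, c1) = sensor, target
    steps = max(abs(r1 - r0), abs(c1 - c0), 1)
    for k in range(1, steps):          # skip the endpoints themselves
        t = k / steps
        r = round(r0 + t * (r1 - r0))
        c = round(c0 + t * (c1 - c0))
        if grid[r, c]:
            return False
    return True

def detection_probability(grid, sensor, target, pd_los=0.9, pd_nlos=0.0):
    """Probability of detection conditioned on the LOS/NLOS state,
    in the spirit of the terrain-aware trackers discussed above."""
    return pd_los if has_line_of_sight(grid, sensor, target) else pd_nlos
```

In a terrain-aware filter, `detection_probability` would feed the measurement-update step of each per-target posterior, so that a missed detection in an NLOS region is not misinterpreted as evidence of target absence.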
\nA number of point-based value iteration (PBVI) approaches \\cite{Kurniawati2008,Smith2005,Kochenderfer2015} have been proposed to solve loosely related target surveillance problems \\cite{Egorov2016,Hsu2008,He2010}.\nThis computational complexity is made worse by the requirement to perform moderate- to high-fidelity ray-casting under each sensor action to identify the LOS\/NLOS regions.\nTo keep the computational complexity of the PBVI approaches manageable, these shadowing computations necessitate some form of GPU-based acceleration from the computational geometry literature \\cite{Tomczak2012}. \n\nAnother important consideration is the incorporation of safety-guaranteed operation with respect to inter-agent and obstacle collision avoidance. \nSome studies such as \\cite{Ragi2013} attempt to address the collision avoidance constraints for sensor control by directly penalizing the reward function estimate when targets are too close to other agents or other obstacles. \nA central issue, however, is how the safety constraint penalization term should be weighted when estimating the discounted cost-to-go.\nA regularization weight that is too small may not be capable of preventing a collision under the assumed dynamics of the platform. \nConversely, a penalization that is too large may over-constrain the system and unnecessarily degrade the optimality of the policy. \nA better solution would be to select a POMDP solver that is capable of guaranteeing satisfiability of the safety constraints given an accurate map of the environment and agent positions. \nMinimum-norm controllers that modify the planned action from sensor control using safety barrier functions are one option \\cite{Ames2016,Freeman1996}, but can potentially over-compensate for safety when the optimization reward is not quantifiable by a control Lyapunov function. \nComputationally tractable implementations of this technique also require a plant model that is affine in the control actions.
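For a control-affine plant with a single barrier constraint, the minimum-norm safety filter mentioned above admits a closed form, since the underlying QP has one linear inequality. A minimal sketch for a single-integrator model (our own simplification for illustration; real platforms require the full control-affine dynamics and a QP over multiple constraints):

```python
import numpy as np

def cbf_safety_filter(u_nom, x, x_obs, r, alpha=1.0):
    """Minimum-norm modification of a nominal action for a single-integrator
    plant x' = u, enforcing the barrier condition
        dh/dt + alpha * h >= 0,   with  h(x) = ||x - x_obs||^2 - r^2.
    Closed form of the one-constraint QP: project u_nom onto the half-space
    {u : grad_h @ u + alpha * h >= 0}.
    """
    d = x - x_obs
    h = d @ d - r**2          # barrier value (> 0 means safe)
    grad_h = 2.0 * d          # gradient dh/dx
    slack = grad_h @ u_nom + alpha * h
    if slack >= 0:            # nominal action already satisfies the constraint
        return u_nom
    # active-constraint solution of: min ||u - u_nom||^2  s.t.  grad_h @ u + alpha*h >= 0
    return u_nom - (slack / (grad_h @ grad_h)) * grad_h
```

The filter leaves the sensor-control action untouched whenever it is already safe, which is what distinguishes this minimum-norm formulation from a fixed penalty term in the cost-to-go.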
\nPath planning and graph traversal techniques, such as A* \\cite{Hart1968} and RRT \\cite{LaValle2001} provide another option, but require a discretization of the platform state space that may not be kinematically feasible. \nRelevant work by the controls community applying such graph search techniques to the trajectory planning problem is given in \\cite{Liu2018,Fink2012,Fink2013}.\n\nWhen digital surface models (DSM) of the terrain and buildings are not available \\emph{a priori}, an online estimate is usually computed via a simultaneous localization and mapping (SLAM) technique. \nThe use of online estimated map data necessitates sensor control robustness under uncertainty. \nThat is, the LOS\/NLOS sensor control techniques should be designed to maintain strong performance up to a pre-specified level of error in the estimated map data. \nSimilarly, the obstacle avoidance techniques should guarantee collision avoidance up to the same pre-specified level of map error.\n\n\n\\subsection{Control Space Fidelity for Quadcopters}\nA major contributing factor to the current interest in urban surveillance with mobile sensor networks is the abundance of commercially available UAVs. \nQuadcopters, for example, provide vertical takeoff and landing functionality in addition to high agility maneuvers.\nThese platforms and their flexible APIs for flight control tasking \\cite{Mavlink,DJIsdk,ROS} make real-time experimentation of mobile sensor network applications very attractive. \nThe sensor control methods that exist in the current state-of-the-art, however, make overly simplifying assumptions in the platform kinematics to further reduce computational complexity (e.g., first or second order integrator dynamics).\nAs a result, the platforms are forced to maneuver at slower velocities so that the actions generated by the sensor control algorithms are representative of the POMDP state dynamics. 
\nA more critical flaw in this approach is that, under the dynamics of the urban environment, platforms may attempt to delay necessary actions for maintaining collision-free flight until it is too late.\n\nQuadcopter platform dynamics have been studied extensively by the controls and aeronautics communities \\cite{Michael2010}.\nThe quadcopter is a six degree-of-freedom system consisting of position and orientation in 3D Euclidean space. \nHowever, it provides only four actuation points consisting of the total upward thrust force and the roll, pitch, and yaw moments. \nThis makes the quadcopter \\emph{underactuated}, implying that its position and orientation can not be accelerated in any arbitrary direction. \nInstead, translational and rotational acceleration are achieved by applying time-varying attitude control.\nA naive incorporation of these plant state and action dynamics under a fixed discretization in a POMDP-based sensor control algorithm is not computationally feasible.\n\nEarly attempts at quadcopter control applied small-angle approximation techniques to linearize the flight dynamics around the hover state \\cite{Hoffmann2008}. \nAn important finding was made by Mellinger and Kumar in \\cite{Mellinger2011,Mellinger2012}, where the quadcopter was determined to be \\emph{differentially flat} in terms of its 3D Euclidean position and yaw angle. \nDifferential flatness of a system implies that the original states and inputs can be rewritten as algebraic functions of (potentially fewer) state variables and their derivatives.\nThese algebraic functions define a diffeomorphism that ensures any trajectories of sufficient smoothness in the flat outputs will be sufficiently smooth in the original state and control space. \n\nFor the quadcopter, the highest degree derivative of the flat position outputs in their expressions for the original control inputs is four (i.e., trajectory snap). 
\nSimilarly, the highest degree derivative of the flat yaw output in the expressions for the original control inputs is two (i.e., yaw acceleration).\nUsing this insight, Mellinger and Kumar \\cite{Mellinger2011,Mellinger2012} provided a series of waypoint-based quadcopter trajectory generation techniques that minimize the control effort under trajectory snap and yaw acceleration (i.e., minimum snap trajectories).\nThese waypoint generation methods assume a concatenation of piecewise polynomial functions that pass through pre-defined waypoints.\nThe trajectory polynomial coefficients are obtained by solving a computationally tractable quadratic program (QP).\nRegulating the original state dynamics of the quadcopter according to this trajectory can then be achieved through the use of a backstepping controller \\cite{Michael2010,Lee2010}.\n\nThe key takeaway from the above discussion is that the accuracy of the quadcopter control space in a mobile sensor control algorithm can be maintained provided that the actions commanded to the platform generate trajectories that are smooth up to the fourth derivative of position and the second derivative of yaw. \nFor sensor control with a quadcopter platform, a natural solution is to assume that the plant consists of a fourth order differential equation on the flat outputs, with an input consisting of the trajectory snap at each time step.\nThis trajectory snap input is termed a \\emph{motion primitive} \\cite{Liu2017,Liu2018}.\nUnder polynomial trajectories, these motion primitives induce a resolution-complete discretization in the flat outputs. \nLiu et al. in \\cite{Liu2017} derived this discretization and suggested optimal search techniques for trajectory generation between waypoints using A* \\cite{Hart1968}. \nFor dynamic environments, Liu et al.
in \\cite{Liu2017} proposed a receding horizon control technique based on Lifelong Planning A* (LPA*).\nThe techniques presented in \\cite{Liu2019} were shown to provide collision avoidance guarantees with respect to static and dynamic obstacles, and to generate robust paths with respect to random platform disturbances. \n\n\n\\subsection{Tracking and Control Algorithm Decentralization}\nDecentralized operation is a critical requirement of an urban surveillance system.\nAs discussed in Section~\\ref{sec:terrain}, the terrain and building geometries present a strongly RF-shadowed propagation environment.\nThis poses a significant challenge for inter-agent communication, and thus renders centralized tracking and sensor control techniques impractical.\nIn general, decentralization of multi-target tracking and sensor control algorithms is very challenging.\nThe BP tracking approaches discussed by Meyer et al. in \\cite{Meyer2018} provide an intuitive framework for performing average consensus on the relevant belief state parameters with one-hop neighbors.\nFor particle filtering approaches to the multi-target tracking problem, the BP approaches are decentralized using a consensus-over-weights approach \\cite{Farahmand2011}.\nConsensus-over-weights assumes that the particle systems sampled at each agent are identical, which necessitates the use of synchronized random number generators.\nLikelihood consensus \\cite{Hlinka2012} is a slightly less restrictive approach that overcomes the need for synchronized random number generators by projecting the sensor likelihood functions onto a common set of basis functions. \nOther alternatives to the consensus-over-weights scheme include fusion via Gaussian mixture approximations \\cite{Li2018} and kernel-based methods \\cite{Tslil2018}. \n\nAlthough these techniques work well when decentralizing target tracking algorithms, it is difficult to apply them to mobile sensor control policies for the urban environment.
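The average-consensus primitive underlying these fusion schemes is simple to state: each agent repeatedly replaces its value by a weighted average of its one-hop neighbors' values. A minimal sketch using Metropolis weights (illustrative only; the cited consensus-over-weights and likelihood-consensus schemes apply this primitive to particle weights and likelihood-expansion coefficients rather than raw scalars):

```python
import numpy as np

def metropolis_weights(adj):
    """Metropolis weight matrix for an undirected graph (0/1 adjacency).
    The result is symmetric and doubly stochastic, so repeated application
    drives all agents to the network-wide average."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                W[i, j] = 1.0 / (1 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()      # self-weight keeps rows summing to 1
    return W

def average_consensus(values, adj, iters=200):
    """Iterate the one-hop averaging step; on a connected graph the agent
    values converge to their common mean."""
    W = metropolis_weights(adj)
    x = np.array(values, dtype=float)
    for _ in range(iters):
        x = W @ x
    return x
```

Each iteration only requires communication with one-hop neighbors, which is why this primitive survives the bandwidth constraints of the RF-shadowed urban setting better than all-to-all fusion.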
\nSince the techniques suggested in Section~\\ref{sec:terrain} necessitate an online simulation-based approach, it is not immediately clear how a consensus algorithm should be constructed.\nOne approach to circumvent this challenge is to implement the centralized mobile sensor control policy in a high-fidelity simulation and perform \\emph{imitation learning} to generate a decentralized policy. \nImitation learning is a variation of reinforcement learning, where the goal is to observe a set of oracle control decisions and determine a non-parametric representation of the policy \\cite{Schaal2003}.\nThis type of learning has been applied regularly in robotics to learn specific robotic manipulator movements via kinesthetic examples \\cite{Kober2010,Pervez2017,Zhang2015}. In these efforts, a convolutional neural network (CNN) is commonly used as an approximate architecture for the state-action value function.\n\nA recent study by Gama et al. in \\cite{Gama2019} has shown how the convolution and pooling operations used in CNNs can be generalized to support learning with signals supported over graphs. \nThe resulting learning architecture, termed a graph neural network (GNN), may be capable of supporting an imitation learning procedure in which the one-hop features relevant to consensus are treated as signals supported over a communication network graph.\nA more thorough investigation of imitation learning of decentralized policies from centralized ones using GNNs is an ongoing and open area of research.\n\n\n\\section{Conclusion}\nIn this paper, we presented an overview of the mobile sensor control problem for multi-target tracking with a specific emphasis on urban surveillance problems.
\nIn addition to providing a brief background on the sensor control POMDP formulation, we provided a detailed literature review of the current state-of-the-art and suggested three challenge areas that have yet to receive considerable attention from the community.\nThese three areas were terrain-aware tracking and sensor control, control space fidelity for quadcopters, and joint estimation and control algorithm decentralization.\nA number of these challenges are addressed separately in the information fusion, tracking, and control communities.\nWe suggest a coordinated effort amongst these communities in order to arrive at solutions that are capable of addressing these challenges together in a computationally tractable and bandwidth-efficient manner.\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Introduction}\n\nLet $K$ be an infinite field, $A$ be a Noetherian $K$-algebra and $\\mathbb P^n_A$ the $n$-dimensional projective space over $A$. A classical approach to investigate a closed projective scheme $W$ consists of \nconsidering a general hyperplane section of $W$, because many properties of $W$ \nare preserved under general hyperplane sections and can be more easily recognized in subschemes of lower dimension. \n\nThe inverse problem, which consists of finding a scheme $W$ starting from a possible hyperplane section, is called a \\emph{lifting problem}, and investigations in this topic can produce methods to obtain affine or projective schemes with specific properties. \n\nTo address the question, we recall that in \\citep{Macaulay} Macaulay lifted monomial ideals to obtain sets of points with given Hilbert functions, in \\citep{H66} Hartshorne proved the connectedness of a Hilbert scheme by lifting Borel ideals and in \\citep{RA} Reeves computed the radius of a Hilbert scheme with an analogous procedure (distractions).
Moreover, by $t$-liftings and pseudo-liftings in \\citep{MiNa} Migliore and Nagel obtained special configurations of linear varieties (stick figures). Particular interest has been given to the study of the $x_n$-liftings, which can be defined in terms of ideals (see \\citep{GGR, Roitman}) or equivalently in terms of $K$-algebras, as proposed by Grothendieck (see \\citep{BuchEis,Roitman} and the references therein). \nStarting from the papers \\citep{Laudal} and \\citep{Strano}, many authors have also given significant contributions towards finding conditions on some invariants of a variety $W$ so that a degree $d$ hypersurface containing a hyperplane section of $W$ lifts to a hypersurface containing $W$.\n\nIn this paper, we consider the following lifting problem: given a closed subscheme $Y\\subset \\mathbb P^{n-1}_K$, explicitly describe all closed subschemes $W\\subset \\mathbb P^n_A$ such that $Y$ is a general hyperplane section of $W$, up to an extension of scalars. \nEvery such scheme $W$ is called a {\\em lifting} of $Y$ over $A$ (Definition \\ref{def:geometric lifting}) and the saturated defining ideal $I$ of $W$ is called a {\\em lifting} of the saturated defining ideal $I'$ of $Y$ (see Definition \\ref{def:geometric lifting per gli ideali} and Proposition \\ref{prop:h generico}). From the definition, we almost immediately obtain that $W$ is smooth at every point at which $Y$ is smooth (see Proposition \\ref{prop:smooth points}).\n\nIn order to have a better understanding of the issue we have in mind, let us consider a double point $P\\in \\mathbb P^n_K$. It is very well known that, for any negative integer $g$, there exist double lines $C_g\\subset \\mathbb P^3_K$ with arithmetic genus $g$ and having $P$ as general hyperplane section. So, even when $Y\\subset \\mathbb P^{n-1}_K$ is allowed to vary in a concrete quasi-projective scheme, we cannot hope that the liftings of $Y$ are parameterized \nby a quasi-projective scheme.
In view of this we are led to set the Hilbert polynomial $p_Y(t)$ of $Y$ and look for a functor describing all the liftings of $Y$ with a given Hilbert polynomial $p(t)$ such that its first difference $\\Delta p(t):=p(t)-p(t-1)$\nis equal to $p_Y(t)$.\n\nThe framework of the present paper is both functorial and constructive. \nWe use computational methods that are borrowed from Gr\\\"obner and marked bases theories and which involve quasi-stable ideals (see \\cite{LR,LR2,BCR2}).\nThroughout the paper, a functor $F:{\\text{\\rm Noeth-}K\\text{\\rm -Alg}}\\rightarrow {\\mathrm{Sets}}$ will be said \\emph{representable} if there is a scheme $X$ and a natural isomorphism between $F$ and the functor of points $h_X=\\mathrm{Hom}( - ,X)$ applied on affine schemes.\nIf $X$ is an affine scheme, this definition coincides with the usual one in category theory (see, for instance, \\cite[Definition 2.1]{Vistoli}).\n\nDenoting by $\\mathrm{Hilb}_{p(t)}^n$ the Hilbert scheme parameterizing all the subschemes of $\\mathbb P^n_K$ with Hilbert polynomial $p(t)$, our main results, which are collected in Theorems \\ref{th:schema che rappresenta} and \\ref{th:funtore di punti}, can be summarized in the following way.\n\\vskip 1mm\n\\noindent{\\bf Theorem A.} {\\em Let $p(t)$ be the Hilbert polynomial of a lifting of $Y$.\n\\begin{itemize}\n\\item[(i)] If $Y$ is equidimensional, then the family of the equidimensional liftings of $Y$ with Hilbert polynomial $p(t)$ is parameterized by a locally closed subscheme of $\\mathrm{Hilb}_{p(t)}^n$.\n\\item[(ii)] The family of the liftings of $Y$ with Hilbert polynomial $p(t)$ is parameterized by a subscheme of $\\mathrm{Hilb}_{p(t)}^n$ which can be explicitly constructed.\n\\end{itemize}\n}\nThis paper has been motivated by the investigation of $x_n$-liftings of a homogeneous polynomial ideal (see Definition \\ref{def:lifting}) that has been faced in \\citep{BCGR} from a functorial point of view. 
However, the question treated here is different and presents new problems to be solved.\nWe now give a detailed outline of the contents of the paper.\n\nIn what follows, we consider $K[x_0,\\dots,x_{n-1}]$ as a subring of $A[x_0,\\dots,x_{n-1},x_n]$ and the variable $x_n$ is \\emph{generic} for every ideal in $A[x_0,\\dots,x_n]$ we take. \nThis assumption allows us to exploit the behavior of Gr\\\"obner bases with respect to the degree reverse lexicographic order when we need the saturation of homogeneous ideals and of their initial ideals (see Remark \\ref{rem:generic and degrevlex}). The study of $x_n$-liftings presented in \\citep{BCGR} needed weaker hypotheses on the term order, which was indeed chosen from a more general class of term orders (see Definition 4.4.1 in \\citep{KR2}, and \\citep{erdos} for a first generic classification of term orderings). \n\nHere, we only consider the degrevlex term order. In this setting, we recall the notion of Gr\\\"obner functor and Gr\\\"obner stratum (see Definition \\ref{def:GrStratum}) \ntogether with the functor of $x_n$-liftings. Then, we prove that our Definition \\ref{def:geometric lifting} of lifting of a projective scheme is equivalent to Definition \\ref{def:geometric lifting per gli ideali} of lifting of a saturated polynomial ideal (see Proposition \\ref{prop:h generico}). This allows us to observe that \nthe algebraic counterpart of our lifting problem consists of identifying the liftings $W$ of $Y$ with the $x_n$-liftings of ideals having saturation equal to $I'$ (see Proposition \\ref{prop:geometric lifting}).
This characterization implies that all the liftings with a given Hilbert polynomial $p(t)$ can be identified with the points of a disjoint union of locally closed subschemes in the Hilbert scheme $\\mathrm{Hilb}_{p(t)}^n$ via suitable Gr\\\"obner strata (see \nTheorem \\ref{th:parametrizzazione}).\n\nMoreover, in the further non-restrictive hypothesis that the variable $x_{n-1}$ is generic for $I'$, we prove that a subscheme $W$ is a lifting of $Y$ only if $I$ belongs to a Gr\\\"obner stratum over a monomial ideal $J$ which is a lifting of the initial ideal of $I'$ (see Theorem \\ref{th:dove}). The proofs of these results are constructive and then produce a method for the computation of the locally closed subschemes in a Hilbert scheme whose disjoint union corresponds to all liftings with a given Hilbert polynomial via Gr\\\"obner strata (see Algorithm \\ref{alg:computation in GS}). \n\nSo, we obtain embeddings of liftings with a given Hilbert polynomial in Gr\\\"obner strata and, hence, in a Hilbert scheme. Then, it is natural to look at liftings from a functorial point of view. Thanks to the constructive characterization of liftings given in Section \\ref{sec:Construction}, we are finally able to define functors related to our liftings as subfunctors of a Hilbert functor (Definitions \\ref{def:funtore lifting} and \\ref{def:funtore lifting eq}). Given a Hilbert polynomial $p(t)$, we prove that the functor $\\underline{\\mathrm{L}}_Y^{p(t)}$ of liftings of $Y$ and the functor $\\underline{\\mathrm{L}}_Y^{p(t),e}$ of equidimensional liftings with Hilbert polynomial $p(t)$ are representable. \n\nFor what concerns the functor $\\underline{\\mathrm{L}}_Y^{p(t),e}$, we adapt to our situation classical arguments of algebraic geometry, like the upper semicontinuity of the dimension of the fibers of a dominant map (Theorem \\ref{th:schema che rappresenta}). 
\n\nFor what concerns the functor $\\underline{\\mathrm{L}}_Y^{p(t)}$, we observe that the locally closed subfunctors that are represented by the locally closed schemes above introduced in suitable Gr\\\"obner strata are not necessarily open subfunctors of $\\underline{\\mathrm{L}}_Y^{p(t)}$ and so are not expected to be suitable to give a unique scheme representing $\\underline{\\mathrm{L}}_Y^{p(t)}$. At this point our constructive approach is pushed forward by the following fact. Given the degrevlex term order, if the initial ideal of the saturated ideal $I'$ defining $Y$ is quasi-stable then the initial ideal of the saturated ideal $I$ defining a lifting of $Y$ is quasi-stable (see Theorem \\ref{th:lifting}). This result, which is achieved in Section \\ref{sec:quasi-stable}, allows us to study liftings in marked schemes over truncations of quasi-stable ideals, which are open subschemes in a Hilbert scheme (see Definition \\ref{def:markedscheme}). Then, we are able to replace the disjoint locally closed subschemes described by means of Gr\\\"obner strata by open subschemes that we describe by means of marked schemes.\nIndeed, we exploit the features of marked schemes in order to construct an open covering of the functor $\\underline{\\mathrm{L}}_Y^{p(t)}$ that provides a scheme representing $\\underline{\\mathrm{L}}_Y^{p(t)}$ by a gluing procedure (see Lemma \\ref{lemma:sezioni ideali marcati}, Theorems \\ref{th:aperti} and \\ref{th:funtore di punti} and Remark \\ref{rem:stromme}). \nThe proof of this last result is constructive and gives rise to Algorithm~\\ref{alg:computation in MS}. In Example \\ref{ex:esempio bello} we exhibit an application of this construction. \n\n\n\n\\section{Setting}\n\nWe consider commutative unitary rings and morphisms that preserve the unit. \nGiven sets of variables $\\mathbf x=\\lbrace x_0,\\dots,x_{n-1}\\rbrace$ and $\\mathbf x,x_n=\\lbrace x_0,\\dots,x_{n}\\rbrace$, we assume they are ordered as $x_0>\\cdots>x_n$. 
\nFor a term $x^\\alpha:=x_0^{\\alpha_0}\\cdots x_n^{\\alpha_n}$\nother than $1$, we denote by $\\min(x^\\alpha)$ the smallest variable appearing in $x^\\alpha$ with a non-zero exponent\nand by $\\deg(x^\\alpha)=\\vert\\alpha\\vert:=\\sum_i \\alpha_i$ the degree of $x^\\alpha$.\n\nLet $K$ be an infinite field. We denote the polynomial ring $K[x_0,\\dots,$ $x_{n-1}]$ by $K[\\mathbf x]$ and the polynomial ring $K[x_0,\\dots,x_n]$ \nby $K[\\mathbf x,x_n]$. For any (Noetherian) $K$-algebra $A$, $A[\\mathbf x]$ denotes the polynomial ring $K[\\mathbf x]\\otimes_K A$ and $A[\\mathbf x,x_n]$ denotes \n$K[\\mathbf x,x_n]\\otimes_K A$ as \\emph{standard graded algebras}. \nObviously, $A[\\mathbf x]$ is a subring of $A[\\mathbf x,x_n]$, hence the following notations and assumptions will be stated for $A[\\mathbf x,x_n]$ but will hold for $A[\\mathbf x]$ too. \n\nFor any non-zero homogeneous polynomial $f \\in A[\\mathbf x,x_n]$, the {\\em support} $\\mathrm{supp}(f)$ of $f$ is the set of terms in the variables $\\mathbf x,x_n$ that appear in $f$ with a non-zero coefficient.\nWe denote by $\\mathrm{coeff}(f)\\subset A$ the set of the coefficients in $f$ of the terms of $\\mathrm{supp}(f)$. For any subset $\\Gamma\\subseteq A[\\mathbf x,x_n]$, $\\Gamma_t$ is the set of homogeneous polynomials of $\\Gamma$ of degree $t$. Furthermore, we denote by $\\langle \\Gamma\\rangle$ the $A$-module generated by $\\Gamma$. \nWhen $\\Gamma$ is a homogeneous ideal, we denote by $\\Gamma_{\\geq t}$ the ideal containing\nthe homogeneous polynomials of $\\Gamma$ of degree $\\geq t$. \n\nLet $I$ be a homogeneous ideal of $A[\\mathbf x,x_n]$. \nThe {\\em saturation} of $I$ is $I^{\\textnormal{sat}}=\\{f\\in A[\\mathbf x,x_n] \\ \\vert \\ \\forall \\ i=0,\\dots n, \\exists \\ k_i : x_i^{k_i} f \\in I\\}$. The ideal $I$ is {\\em saturated} if $I=I^{{\\textnormal{sat}}}$ and is {\\em $m$-saturated} if $I_t=(I^{\\textnormal{sat}})_t$ for every $t\\geq m$. 
The \\emph{satiety} of $I$, denoted by ${\\textnormal{sat}}(I)$, is the smallest $m$ for which $I$ is $m$-saturated.\nA linear form $h\\in A[\\mathbf x,x_n]$ is said to be \\emph{generic} for $I$ if $h$ is not a zero-divisor in $A[\\mathbf x,x_n]\/I^{\\textnormal{sat}}$.\n\nA {\\em monomial ideal} $J$ of $A[\\mathbf x,x_n]$ is an ideal generated by terms. We denote by $B_J$ the minimal set of terms generating $J$ and by $\\mathcal N(J)$ the sous-escalier of $J$, i.e.~the set of terms outside $J$. \n\n\\begin{definition}\\label{def:stable} \nA monomial ideal $J\\subset A[\\mathbf x,x_n]$ is \\emph{quasi-stable} if for every $x^\\alpha \\in J$ and $x_j>\\min(x^\\alpha)$, there is $t\\geq 0$ such that $\\frac{x_j^t x^\\alpha}{\\min(x^\\alpha)}$ belongs to $J$.\n\\end{definition}\n\nGiven a monomial ideal $J$ and an ideal $I$, a {\\em $J$-reduced form modulo $I$} of a polynomial $f$ is a polynomial $\\bar f$ such that $f-\\bar f$ belongs to $I$ and $\\mathrm{supp}(\\bar f)$ is contained in $\\mathcal N(J)$.\nIf $\\bar f$ is the unique possible $J$-reduced form modulo $I$ of $f$, then it is called the {\\em $J$-normal form modulo $I$} of $f$ and is denoted by $\\mathrm{Nf}(f)$.\n\nWith the usual language of Gr\\\"obner bases theory, {\\em from now on,} we consider the degree reverse lexicographic term order $\\succ$ (degrevlex, for short) and, for every non-zero polynomial $f\\in A[\\mathbf x,x_n]$, denote by $\\mathrm{in}(f)=\\max_{\\succ}\\mathrm{supp}(f)$ its initial term and by $c_f$ the coefficient of $\\mathrm{in}(f)$ in $f$. For every ideal $I\\subset A[\\mathbf x,x_n]$ let $\\mathrm{in}(I)=(c_f \\mathrm{in}(f) : f\\in I)$ be its initial ideal. A set $\\{f_1,\\dots,f_t\\}\\subset I$ is a Gr\\\"obner basis of $I$ if $\\{c_{f_1}\\mathrm{in}(f_1),\\dots,c_{f_t}\\mathrm{in}(f_t)\\}$ generates $\\mathrm{in}(I)$.
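To fix notation, here is a small worked instance of these definitions (our own illustrative example, not taken from the paper):

```latex
\begin{example}
In $K[x_0,x_1,x_2]$ with $x_0>x_1>x_2$, consider $J=(x_0^2,\,x_0x_1,\,x_1^3)$.
For $x^\alpha=x_0x_1$ we have $\min(x^\alpha)=x_1$ and
$\frac{x_0\, x_0x_1}{x_1}=x_0^2\in J$; for $x^\alpha=x_1^3$ we have
$\frac{x_0\, x_1^3}{x_1}=x_0x_1^2\in J$. Hence the condition of
Definition~\ref{def:stable} holds for every generator and $J$ is quasi-stable.
Moreover, for $f=x_0x_1+x_2^2$ the degrevlex order gives
$\mathrm{in}(f)=x_0x_1$, since $x_0x_1\succ x_2^2$ among the terms of degree~$2$.
\end{example}
```

Note that in degrevlex the comparison of $x_0x_1$ and $x_2^2$ is decided by the exponent of the smallest variable $x_2$: the term with the larger such exponent is the smaller one.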
\nIn general, Gr\\\"obner bases theory over rings \\citep{CM,Z,M} is more intricate than over fields (see for instance the detailed discussion in \\citep{Lederer2011}), and the possibly non-invertible leading coefficients make Gr\\\"obner bases over rings not well-suited for functorial constructions. This is the reason why\nin this paper we consider homogeneous ideals $I$ generated by either Gr\\\"obner bases in $K[\\mathbf x,x_n]$ or \\emph{monic} Gr\\\"obner bases in $A[\\mathbf x,x_n]$ or \\emph{marked bases} over quasi-stable ideals in $K[\\mathbf x,x_n]$ and $A[\\mathbf x,x_n]$ (see Section \\ref{sec:backgroundMarkedBases} for the definition of marked basis).\nSo, the quotients $A[\\mathbf x,x_n]\/I$ are free graded $A$-modules and \nthis is a very key point for the use of functors we will introduce (see \\citep{BGS91}, \\citep[Lemma 6.1]{BCR2}, \\citep[Section 5]{LR2}).\nThe crucial fact is that, if $\\varphi \\colon A \\rightarrow B$ is a morphism of $K$-algebras and $I$ is an ideal in $A[\\mathbf x]$ (or $A[\\mathbf x,x_n]$) generated by a {\\em monic} Gr\\\"obner basis (resp.~a marked basis over a quasi-stable ideal) $G_I$, then $I\\otimes_A B$ is generated by $\\varphi(G_I)$ which is again a {\\em monic} Gr\\\"obner basis (resp.~a marked basis over a quasi-stable ideal) \n(see Proposition~\\ref{prop:funtore}). \n\nIn this setting, for a homogeneous ideal $I$ of $A[\\mathbf x,x_n]$ we can consider the {\\it Hilbert function} $h_{A[\\mathbf x,x_n]\/I}$ of the free graded $A$-module $A[\\mathbf x,x_n]\/I$ and its {\\it Hilbert polynomial} $p_{A[\\mathbf x,x_n]\/I}(t)$ like the Hilbert function and the Hilbert polynomial of $K[\\mathbf x, x_n]\/\\mathrm{in}(I)$. When we say \\lq\\lq ideal $I$ with Hilbert polynomial $p(t)$\\rq\\rq \\ we mean $p_{A[\\mathbf x,x_n]\/I}(t)=p(t)$. 
For the geometric definition of Hilbert polynomial of a projective scheme over a field we refer to \\citep[Chapter III, Exercise 5.2]{H}.\n\n\\begin{remark}\\label{rem:generic and degrevlex} \n\\\n\\begin{enumerate}\n\\item If $h$ is a generic linear form for $I$, then $I^{\\textnormal{sat}}=(I : h^\\infty):=\\{f\\in A[\\mathbf x,x_n] \\ \\vert \\ \\exists t\\geq 0 : h^tf\\in I\\}$ (see \\citep{BS}).\n\\item \\label{item:remgeneric} \nRecall that the smallest variable is generic for a homogeneous polynomial ideal $I\\subset A[\\mathbf x,x_n]$ generated by a monic Gr\\\"obner basis if and only if it is generic for $\\mathrm{in}(I)$. Indeed, the initial\nterm with respect to degrevlex of a homogeneous polynomial $f$ is divisible by $x_n^r$ if and only if $f$ is divisible by $x_n^r$.\n\\item \\label{item:estensioni e contrazioni}\nIf $L$ is a homogeneous ideal of $K[\\mathbf x,x_n]$ then\n$L A[\\mathbf x,x_n]=L\\otimes_K A$, \\ $L = L A[\\mathbf x,x_n] \\cap~K[\\mathbf x,x_n]$, \\ $(L A[\\mathbf x,x_n])^{\\textnormal{sat}} = L^{\\textnormal{sat}} A[\\mathbf x,x_n]$, and if $I$ is a homogeneous ideal of $A[\\mathbf x,x_n]$ then $I^{\\textnormal{sat}} \\cap K[\\mathbf x,x_n] = (I \\cap K[\\mathbf x,x_n])^{\\textnormal{sat}}$.\n\\end{enumerate} \n\\end{remark}\n\n\n\n\\section{Background I: Gr\\\"obner functor and functor of $x_n$-liftings}\n\\label{sec:Groebner strata and $x_n$-liftings}\n\nIn this section, referring to \\citep{LR,LR2} and to \\citep{BCGR}, we collect some known information about Gr\\\"obner functor and functor of $x_n$-liftings.\nBoth these functors are subfunctors of a Hilbert functor, for which we refer to \\citep{GroHilb,Nit}. 
We only recall that the Hilbert functor $\\underline{\\mathrm{Hilb}}^n$ associates to a locally Noetherian $K$-scheme $S$ the set $\n\\underline{\\mathrm{Hilb}}^n(S) = \\Bigl\\{W \\subset \\mathbb P_S^n \\ \\vert \\ W \\text{ is flat over }S\\Bigr\\}\n$ of flat families of subschemes of $\\mathbb P^n_S=\\mathbb P^n_K\\times_{\\mathrm{Spec}(K)} S$ parameterized by $S$, \nand to any morphism $\\phi: T \\rightarrow S$ of locally Noetherian $K$-schemes the map\n$$\\begin{array}{lccc}\\underline{\\mathrm{Hilb}}^n(\\phi): &\\underline{\\mathrm{Hilb}}^n(S) &\\longrightarrow &\\underline{\\mathrm{Hilb}}^n(T)\\\\\n& W &\\longmapsto &W \\times_S T.\n\\end{array}\n$$\nGrothendieck showed that the functor $\\underline{\\mathrm{Hilb}}^n$ is representable by a locally Noetherian scheme $\\mathrm{Hilb}^n$, called the Hilbert scheme (see \\citep{GroHilb}).\nConsidering the subfunctor\n\\[\n\\underline{\\mathrm{Hilb}}^n_{p(t)}(S):=\\Bigl\\{W \\subset \\mathbb P_S^n \\ \\vert \\ \\ W \\text{ is flat over }S\\text{ and has fibers with Hilbert polynomial } p(t)\\Bigr\\},\n\\]\nthe functor $\\underline{\\mathrm{Hilb}}^n$ decomposes as a co-product:\n\\begin{equation}\\label{eq:co-product}\n\\underline{\\mathrm{Hilb}}^n=\\coprod_{p(t)\\text{ admissible for schemes in } \\mathbb P_K^n}\\underline{\\mathrm{Hilb}}^n_{p(t)}.\n\\end{equation}\nFor every admissible polynomial $p(t)$ for $\\mathbb P_K^n$, $\\underline{\\mathrm{Hilb}}^n_{p(t)}$ is represented by a projective scheme $\\mathrm{Hilb}_{p(t)}^n$.\nThe fact that $\\mathrm{Hilb}^n$ and $\\mathrm{Hilb}_{p(t)}^n$ are locally Noetherian allows us to consider the restriction of the functors $\\underline{\\mathrm{Hilb}}^n$ and $\\underline{\\mathrm{Hilb}}^n_{p(t)}$ to the category of Noetherian $K$-algebras (e.g.~\\cite[Proposition VI-2 and Exercise VI-3]{EH}). \nEvery $K$-point of a Hilbert scheme is identified with the saturated ideal $I\\subseteq K[x_0,\\dots,x_n]$ that defines the corresponding fiber in $\\mathbb P^n_K$. 
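Since $K$-points correspond to saturated ideals, it is worth stressing that saturatedness can be lost when passing to the initial ideal if the smallest variable is not generic (cf.\ Remark \ref{rem:generic and degrevlex}). A quick sympy check on the ideal $I'=(x_0^2,x_0x_1+x_2^2)$, which reappears in a remark later in the paper:

```python
from sympy import symbols, groebner, LM

x0, x1, x2 = symbols('x0 x1 x2')

# I' is saturated, but its degrevlex initial ideal is not, because
# the smallest variable x2 is a zero-divisor modulo I'.
G = groebner([x0**2, x0*x1 + x2**2], x0, x1, x2, order='grevlex')
lead = {LM(g, x0, x1, x2, order='grevlex') for g in G.exprs}
print(lead)   # the four terms x0**2, x0*x1, x0*x2**2, x2**4

# x0*x2 is not in I', yet (x0*x2)*x2 = x0*x2**2 is: so x2 is a
# zero-divisor modulo I', hence not generic for I'.
print(G.contains(x0*x2), G.contains(x0*x2**2))   # False True
```

In particular the initial ideal $(x_0^2, x_0x_1, x_0x_2^2, x_2^4)$ contains $x_2^4$ but not $x_2^3$, so it is not saturated even though $I'$ is.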
\n\nSince in this paper we only consider the degrevlex order, we now recall the notion of Gr\\\"obner functor in this particular setting.\n\n\\begin{definition}\\label{def:GrStratum} {\\rm \\cite[Section 5 and Theorem 5.3]{LR2}} \nGiven a monomial ideal $J\\subset K[\\mathbf x,x_n]$, the {\\em Gr\\\"obner functor} $\\underline{\\mathrm{St}}_J: \\text{Noeth-}K\\text{-Alg} \\rightarrow \\text{Sets}$ associates to any Noetherian $K$-algebra $A$ the set $\\underline{\\mathrm{St}}_J(A):=\\{I\\subset A[\\mathbf x,x_n] : \\mathrm{in}(I)=J\\otimes_K A\\}$ and to any $K$-algebra morphism $\\phi: A\\rightarrow B$ the function $\\underline{\\mathrm{St}}_J(\\phi): \\underline{\\mathrm{St}}_J(A) \\rightarrow \\underline{\\mathrm{St}}_J(B)$ such that the image of an ideal $I$ is $I\\otimes_A B$. The affine scheme $\\mathrm{St}_J$ representing the Gr\\\"obner functor is called {\\em Gr\\\"obner stratum}. \n\\end{definition}\n\nIn order to briefly recall the construction of a Gr\\\"obner stratum $\\mathrm{St}_J$, take the set of polynomials\n\\begin{equation}\\label{JbaseC}\nG:= \\left\\{ f_\\alpha=x^\\alpha + \\sum_{x^\\alpha\\succ x^\\gamma\\in\\mathcal N(J)_{\\vert \\alpha\\vert}} C_{\\alpha\\gamma} x^\\gamma \\ : \\ \n\\mathrm{in}(f_\\alpha) = x^\\alpha\\in B_J \\right\\} \\subset K[C_J][\\mathbf x,x_n]\n\\end{equation}\nwhere $C_J$ denotes the set of the new variables $C_{\\alpha \\gamma}$.\nLet $\\mathfrak{a}_J$ be the ideal in $K[C_J]$ generated by the coefficients (w.r.t.~variables $\\mathbf x,x_n$) of the terms \n in $J$-reduced forms $\\overline{S(f_\\alpha,f_{\\beta})}$ of the $S$-polynomials $S(f_\\alpha,f_{\\beta})$ modulo $(G)$. Due to \\cite[Proposition 3.5]{LR} the ideal $\\mathfrak{a}_J$ depends only on $J$ and on the given term order, which here is assumed to be degrevlex. 
Hence, the ideal $\\mathfrak{a}_J$ defines the affine scheme $\\mathrm{St}_J=\\mathrm{Spec}(K[C_J]\/\\mathfrak{a}_J)$.\n\n\\begin{theorem} \\label{th:main features} \\rm{(\\cite[Lemma 5.2]{LR2}, \\cite[Theorem 2.2]{BCGR})} \nLet $J\\subset A[\\mathbf x,x_n]$ be a monomial ideal and $p(t)$ the Hilbert polynomial of $A[\\mathbf x,x_n]\/J$. With the degrevlex term order, \n\\begin{itemize}\n\\item[(i)] $\\underline{\\mathrm{St}}_{J}$ is a Zariski sheaf. \n\\item[(ii)] If the terms in $B_J$ are not divisible by $x_n$, then \n$\\mathrm{St}_J\\cong \\mathrm{St}_{J_{\\geq m}}, \\text{ for every integer } m$, \nand $\\underline{\\mathrm{St}}_J$ is a locally closed subfunctor of the Hilbert functor $\\underline{\\mathrm{Hilb}}^n_{p(t)}$. \n\\end{itemize}\n\\end{theorem}\n\n\\begin{definition}\\label{def:lifting} \\citep{GGR,Roitman}\nA homogeneous ideal $I$ of $A[\\mathbf x,x_n]$ is called an {\\em $x_n$-lifting} of a homogeneous ideal $H$ of $K[\\mathbf x]$ if \n\\begin{enumerate}\n\\item[(a)] the indeterminate $x_n$ is a non-zero divisor in $A[\\mathbf x,x_n]\/I$;\n\\item[(b)] $(I,x_n)\/(x_n)\\simeq H A[\\mathbf x]$ under the canonical isomorphism\n $A[\\mathbf x,x_n]\/(x_n)\\simeq A[\\mathbf x]$; \n\nor, equivalently,\n\\item[(b$'$)] $\\{ g(x_0,x_1,\\ldots,x_{n-1},0) : g\\in I\\} = H A[\\mathbf x]$.\n\\end{enumerate}\n\\end{definition}\n\n\\begin{theorem} {\\rm(\\cite[Theorem 2.5]{CFRo}, \\cite[Proposition 6.2.6]{KR2}, \\cite[Theorem 3.2 and Corollary 3.3]{BCGR})}\\label{FM}\nLet $H$ be a homogeneous ideal of $K[\\mathbf x]$. 
A homogeneous ideal $I$ of $A[\\mathbf x,x_n]$ is an $x_n$-lifting of $H$ if and only if the reduced Gr\\\"obner basis of $I$ is of type $\\lbrace f_\\alpha+g_\\alpha\\rbrace_\\alpha$, \nwhere $\\lbrace f_\\alpha\\rbrace_\\alpha$ is the reduced Gr\\\"obner basis of $H$ and $g_\\alpha \\in (x_n)A[\\mathbf x,x_n]$.\nIf $I$ is an $x_n$-lifting of $H$, then $\\mathrm{in}(I)$ is generated by the same terms as $\\mathrm{in}(H)$.\n\\end{theorem}\n\n\\begin{definition}\\label{def:xn-liftings} \\cite[Definition 3.4]{BCGR}\nThe {\\em functor} \\ $\\underline{\\mathrm{L_{H}}}: \\underline{\\text{Noeth-}K\\text{-Alg}}\\rightarrow \\underline{\\mathrm{Sets}}$ {\\em of $x_n$-liftings} of a homogeneous ideal $H$ of $K[\\mathbf x]$\nassociates to every Noetherian $K$-algebra $A$ the set $\n\\underline{\\mathrm{L_{H}}}(A)=\\lbrace I\\subseteq A[\\mathbf x,x_n] : I \\text{ is an $x_n$-lifting of }H \\rbrace$ \nand to every morphism of $K$-algebras $\\phi:A \\rightarrow B$ the map\n\\[\\begin{array}{rcl}\n\\underline{\\mathrm{L_{H}}}(\\phi): \\underline{\\mathrm{L_{H}}}(A)&\\rightarrow& \\underline{\\mathrm{L_{H}}}(B)\\\\\n I&\\mapsto& I\\otimes_A B.\n\\end{array}\\]\n\\end{definition}\nWith the notation of Definition \\ref{def:xn-liftings}, let $J:=\\mathrm{in}(H)A[\\mathbf x,x_n]$ and $p(t):=p_{A[\\mathbf x,x_n]\/J}(t)$. The functor $\\underline{\\mathrm{L_{H}}}$ is a closed subfunctor of $\\underline{\\mathrm{St}}_J$ represented by a closed affine subscheme $\\mathrm{L_H}$ of $\\mathrm{St}_J$ and, hence, a locally closed subscheme of the Hilbert scheme $\\mathrm{Hilb}_{p(t)}^n$ thanks to Theorem \\ref{th:main features} \\cite[Theorem 4.3 and Proposition 6.1]{BCGR}. 
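To illustrate the characterization above on a toy case (a standard distraction-style example, not taken from the paper), take $H=(x_0^2,x_0x_1,x_1^2)\subset K[x_0,x_1,x_2]$ and add to each generator a tail in $(x_3)$; sympy confirms that the resulting generators already form the reduced Gr\"obner basis, so they generate an $x_3$-lifting of $H$:

```python
from sympy import symbols, groebner

x0, x1, x2, x3 = symbols('x0 x1 x2 x3')

# Candidate lifting: each generator of H = (x0^2, x0*x1, x1^2) gets a
# tail divisible by x3 (a distraction-style deformation).
I = [x0**2 - x0*x3, x0*x1, x1**2 - x1*x3]
G = groebner(I, x0, x1, x2, x3, order='grevlex')

# The generators form the reduced Groebner basis, whose shape
# {f_beta + g_beta : g_beta in (x3)} characterizes x3-liftings of H.
print(set(G.exprs) == set(I))   # True
```

Geometrically, $H$ defines a double point of $\mathbb P^2$, and this lifting deforms it into a union of lines in $\mathbb P^3$.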
Moreover, $\\underline{\\mathrm{L_H}}$ is a Zariski sheaf.\n\n\n\\section{Liftings of projective schemes}\n\\label{sec:Liftings of projective schemes}\n\n\\begin{definition}\\label{def:geometric lifting}\nLet $\\mathcal{H}=\\mathbb P^{n-1}_K$ be the hyperplane of $\\mathbb P^n_K$ defined by the ideal $(x_n)$, $Y$ be a closed subscheme of $ \\mathbb P^{n-1}_K$ with Hilbert polynomial $p_Y(t)$ and $A$ be a $K$-algebra. A {\\em lifting of $Y$ over $A$} is a closed subscheme $W\\in \\underline{\\mathrm{Hilb}}^n(A)$ of $\\mathbb P^n_A=\\mathbb P^n_K\\times_{\\mathrm{Spec}(K)} \\mathrm{Spec}(A)$, such that:\n\\begin{itemize}\n\\item[(i)] $\\Delta p_W(t):=p_W(t)-p_W(t-1)=p_Y(t)$;\n\\item[(ii)] $W\\cap (\\mathcal{H} \\times_{\\mathrm{Spec}(K)} \\mathrm{Spec}(A)) = Y\\times_{\\mathrm{Spec}(K)} \\mathrm{Spec}(A)$. \n\\end{itemize}\n\\end{definition}\n\nObserve that in Definition \\ref{def:geometric lifting} we assume that the scheme $Y$ is contained in the hyperplane defined by the ideal $(x_n)$. This assumption is not restrictive because, given a linear form $h\\in K[\\mathbf x,x_n]$, we can always replace $h$ by the smallest variable $x_n$ thanks to a suitable (deterministic) change of coordinates. \n\n\\begin{definition}\\label{def:geometric lifting per gli ideali}\nLet $I'$ be a homogeneous saturated ideal of $K[\\mathbf x]$. 
A homogeneous saturated ideal $I$ of $A[\\mathbf x,x_n]$ is called a {\\em lifting of $I'$} if the following conditions are satisfied:\n\\begin{enumerate}\n\\item[(a)] the indeterminate $x_n$ is generic for $I$;\n\\item[(b)] $\\Bigl((I,x_n)\/(x_n)\\Bigr)^{{\\textnormal{sat}}}\\simeq I' A[\\mathbf x]$ under the canonical isomorphism\n $A[\\mathbf x,x_n]\/(x_n)\\simeq A[\\mathbf x]$; \n\nor, equivalently,\n\\item[(b$'$)] $(\\{g(x_0,x_1,\\ldots,x_{n-1},0) : g\\in I\\})^{{\\textnormal{sat}}} = I' A[\\mathbf x]$.\n\\end{enumerate}\n\\end{definition}\n\n\\begin{proposition}\\label{prop:h generico}\nLet $Y$ be a closed subscheme of $\\mathbb P^{n-1}_K$ defined by the homogeneous saturated ideal $I'\\subset K[\\mathbf x]$ and $W\\in \\underline{\\mathrm{Hilb}}^n(A)$ be a closed subscheme of $\\mathbb P^n_A$ defined by the homogeneous saturated ideal $I\\subset A[\\mathbf x,x_n]$. Then, $W$ is a lifting of $Y$ if and only if $I$ is a lifting of $I'$.\n\\end{proposition}\n\n\\begin{proof}\nBy condition (ii) of Definition \\ref{def:geometric lifting}, we have $\\Bigl(\\frac{(I,x_n)}{(x_n)}\\Bigr)^{{\\textnormal{sat}}}=I' A[\\mathbf x]=I'\\otimes_K A$, in particular the quotient $A[\\mathbf x,x_n]\/(I,x_n)$ has Hilbert polynomial $p_Y(t)$. Thus, by the following short exact sequence\n\\begin{equation}\\label{short exact sequence zero-divisor}\n0 \\rightarrow (A[\\mathbf x,x_n]\/(I:x_n))_{t-1} \\xrightarrow{\\cdot x_n} (A[\\mathbf x,x_n]\/I)_t \\rightarrow (A[\\mathbf x,x_n]\/(I,x_n))_t \\rightarrow 0\n\\end{equation}\ncondition (i) of Definition \\ref{def:geometric lifting}, i.e.~$\\Delta p_W(t)= p_Y(t)$, implies that the quotient $A[\\mathbf x,x_n]\/(I:x_n)$ has the same Hilbert polynomial $p_W(t)$ as $A[\\mathbf x,x_n]\/I$. Hence, we obtain $(I:x_n)^{{\\textnormal{sat}}}=I^{{\\textnormal{sat}}}=I$ because $(I:x_n)\\supseteq I$. 
In conclusion, we have $I^{{\\textnormal{sat}}}\\supseteq(I:x_n)^{{\\textnormal{sat}}}\\supseteq (I:x_n) \\supseteq I=I^{{\\textnormal{sat}}}$, \nnamely $(I:x_n)=I$, which is possible only if $x_n$ is generic for $I$.\n\nConversely, if $I$ is a lifting of $I'$ then it is quite immediate that $W$ is a lifting of $Y$. Indeed, if $x_n$ is generic for $I$, then $(I:x_n)=I=I^{\\textnormal{sat}}$. Hence, $A[\\mathbf x, x_n]\/(I:x_n)$ and $ A[\\mathbf x, x_n]\/I$ have the same Hilbert polynomial. From \\eqref{short exact sequence zero-divisor}, we obtain $\\Delta p(t)=p_{Y}(t)$ and condition (ii) of Definition~\\ref{def:geometric lifting} also follows.\n\\end{proof}\n\nWith the notation we have already introduced in Definition \\ref{def:geometric lifting}, we consider a closed subscheme $Y\\subset \\mathcal H \\subset \\mathbb P^{n}_K$, where $\\mathcal H$ is defined by the ideal $(x_n)$. If $W\\subset \\mathbb P^n_A$ is a lifting of $Y$, then $\\deg W=\\deg Y$ and $\\dim W = \\dim Y +1$, so there are natural restrictions on the Hilbert polynomial $p(t)$ of $W$ because $\\Delta p(t)$ must be the Hilbert polynomial $p_Y(t)$ of $Y$. Hence, the non-constant part of the Hilbert polynomial of $W$ is determined by the Hilbert polynomial of $Y$. However, in general\nthere are no limits on the constant term of the Hilbert polynomial of $W$, even \nif we only consider liftings without zero-dimensional components, as shown by the following example.\n\n\\begin{example}\\label{ex:primoes}For every positive integer $k$, consider the double line $W_k \\subset \\mathbb P^3_K$ defined by the ideal $I=(x_0^2,x_0x_1,x_1^2,x_0x_2^k-x_1x_3^k)\\subseteq K[x_0,x_1,x_2,x_3]$. The Hilbert polynomial of $W_k$ is $p(t)=2t+k+1$ and $W_k$ is a lifting of the double point $Y\\subset \\mathbb P^2_K$ defined by the ideal $I'=(x_0, x_1^2)\\subseteq K[x_0,x_1,x_2]$. 
In conclusion, for every positive integer $k$, we find a lifting of $Y$ with Hilbert polynomial $2t+k+1$ and without zero-dimensional components.\n\\end{example}\n\nWe conclude this section by highlighting a geometric feature of liftings.\n \n\\begin{proposition} \\label{prop:smooth points}\nLet $W$ be a lifting of a scheme $Y$. If $Y$ is smooth at a point $P$, then $W$ is also smooth at $P$. \n\\end{proposition}\n\n\\begin{proof}\nBy definition of lifting, $Y$ is a Cartier divisor in $W$. Then the dimension of the Zariski tangent space of $Y$ at the point $P$ is not lower than the dimension of the Zariski tangent space of $W$ at $P$ minus $1$. Indeed, if $\\mathfrak m$ is the maximal ideal of the local ring of $Y$ at $P$ and $M$ is the maximal ideal of the local ring of $W$ at $P$, we have $\\dim_K \\frac{\\mathfrak m}{\\mathfrak m^2} \\geq \\dim_K \\frac{M}{M^2}-1$. Hence, if we had $\\dim_K \\frac{M}{M^2} > \\dim(W)$, then we would obtain $\\dim_K \\frac{\\mathfrak m}{\\mathfrak m^2}\\geq \\dim_K \\frac{M}{M^2}-1 > \\dim(W)-1=\\dim(Y)$, contradicting the smoothness of $Y$ at $P$.\n\\end{proof}\n\n\n\\section{Construction of liftings of projective schemes}\n\\label{sec:Construction}\n\nIn this section, we obtain a constructive characterization of liftings of projective schemes by investigating relations between the notion of lifting of a saturated homogeneous ideal in $K[\\mathbf x]$ (Definition \\ref{def:geometric lifting per gli ideali}) and that of $x_n$-lifting of a homogeneous ideal in $K[\\mathbf x]$ (Definition \\ref{def:lifting}). \n\nIn general a lifting is not an $x_n$-lifting, as shown by the following easy example. 
Nevertheless, we will show how to recover every lifting of a given saturated ideal by constructing the $x_n$-liftings of suitable families of ideals.\n\n\\begin{example}\nThe ideal \n$I=(x_0^2+x_3^2,x_0x_1,x_0x_2,x_1^2,x_1x_2)\\subset K[x_0,x_1,x_2,x_3]$ is a lifting of $I'=(x_0,x_1)\\subseteq K[x_0,x_1,x_2]$ but is not an $x_3$-lifting of $I'$, as one can easily verify using Theorem \\ref{FM}.\n\\end{example}\n\n\\begin{lemma} \\label{lemma:sezione}\nLet $I'\\subset K[\\mathbf x]$ be a saturated ideal. If $I\\subset A[\\mathbf x,x_n]$ is a lifting of $I'$ then $I'=\\Bigl(\\frac{(I,x_n)}{(x_n)}\\Bigr)^{{\\textnormal{sat}}}\\cap K[\\mathbf x]=\\Bigl(\\frac{(I,x_n)}{(x_n)} \\cap K[\\mathbf x]\\Bigr)^{{\\textnormal{sat}}}$. \n\\end{lemma}\n\n\\begin{proof}\nDefinition \\ref{def:geometric lifting per gli ideali}\nof lifting immediately implies the statement by Remark \\ref{rem:generic and degrevlex}\\eqref{item:estensioni e contrazioni}. \n\\end{proof}\n\n\\begin{proposition}\\label{prop:geometric lifting}\nLet $I'\\subseteq K[\\mathbf x]$ be a homogeneous saturated ideal. A homogeneous ideal $I\\subseteq A[\\mathbf x,x_n]$ is a lifting of $I'$ if and only if there exists a homogeneous ideal $H\\subseteq K[\\mathbf x]$ such that $H^{{\\textnormal{sat}}}=I'$ and $I$ is an $x_n$-lifting of $H$.\n\\end{proposition}\n\n\\begin{proof}\nFirst assume that $I$ is a lifting of $I'$ and take $H:=\\frac{(I,x_n)}{(x_n)}\\cap K[\\mathbf x]$. Then, we have $H^{{\\textnormal{sat}}}=I'$ by Lemma \\ref{lemma:sezione}. Furthermore, it is immediate that $I$ is an $x_n$-lifting of $H$, because $HA[\\mathbf x]=\\frac{(I,x_n)}{(x_n)}$ by Remark \\ref{rem:generic and degrevlex}\\eqref{item:estensioni e contrazioni} and $x_n$ is a non-zero divisor in $A[\\mathbf x,x_n]\/I$ by definition.\n\nConversely, let $I\\subset A[\\mathbf x,x_n]$ be an ideal which is an $x_n$-lifting of a homogeneous ideal $H\\subseteq K[\\mathbf x]$ such that $H^{{\\textnormal{sat}}}=I'$. 
Then, $I$ is a lifting of $I'$, because $x_n$ is generic for $I$, so $I$ is saturated, and $\\frac{(I,x_n)}{(x_n)} \\simeq H\\otimes_K A$ implies $((I,x_n)\/(x_n))^{\\textnormal{sat}}=I'A[\\mathbf x]$, by Remark \\ref{rem:generic and degrevlex}\\eqref{item:estensioni e contrazioni}.\n\\end{proof}\n\n\\begin{theorem} \\label{th:parametrizzazione}\nLet $I'\\subseteq K[\\mathbf x]$ be a homogeneous saturated ideal and $I\\subseteq A[\\mathbf x,x_n]$ a lifting of $I'$. The locus of liftings of $I'$ in $\\underline{\\mathrm{St}}_{\\mathrm{in}(I)}(A)$ is parameterized by an affine scheme obtained by linear sections of ${\\mathrm{St}}_{\\mathrm{in}(I)}$.\n\\end{theorem}\n\n\\begin{proof}\nLet $J':=\\mathrm{in}(I')\\subset K[\\mathbf x]$ and $J:=\\mathrm{in}(I)\\subset A[\\mathbf x,x_n]$. \nBy Proposition \\ref{prop:geometric lifting}, $I$ is an $x_n$-lifting of a homogeneous ideal $H\\subset K[\\mathbf x]$ with $H^{{\\textnormal{sat}}}=I'$. Hence, by Theorem \\ref{FM}, $\\mathrm{in}(H)$ has the same generators as $J$, so $\\frac{(J,x_n)}{(x_n)}=\\mathrm{in}(H)\\otimes_K A$ and $\\mathrm{in}(H)=J\\cap K[\\mathbf x]$. \n\nLet $\\mathfrak a_J \\subset A[C_J]$ be the defining ideal of the Gr\\\"obner stratum $\\mathrm{St}_{J}$ and $\\mathfrak a_{J\\cap K[\\mathbf x]} \\subset K[C_{J\\cap K[\\mathbf x]}]$ the defining ideal of the Gr\\\"obner stratum $\\mathrm{St}_{J\\cap K[\\mathbf x]}$, where $C_{J\\cap K[\\mathbf x]}\\subseteq C_J$.\n\nThe ideal $H\\subset K[\\mathbf x]$ is characterized by the following two conditions. 
The first condition is that $H$ belongs to the family $\\underline{\\mathrm{St}}_{J\\cap K[\\mathbf x]}(K)$, so the reduced Gr\\\"obner basis of $H$ consists of polynomials of the following type\n\\begin{equation}\\label{base di H}\nf_\\beta = x^\\beta +\\sum_{x^\\beta \\succ x^\\gamma\\in\\mathcal N(\\mathrm{in}(H))_{\\vert \\beta\\vert}} C_{\\beta\\gamma} x^\\gamma, \\quad f_\\beta \\in K[C_{J\\cap K[\\mathbf x]}][\\mathbf x],\n\\end{equation}\nfor every minimal generator $x^\\beta$ of $J$. The second condition is that the polynomials $f_\\beta$ belong to $I'$, because $H$ is contained in $I'$. This second condition implies that the saturation of $H$ is $I'$ because $K[\\mathbf x]\/I'$ and $K[\\mathbf x]\/H$ have the same Hilbert polynomial, by the first condition.\n\nBy construction, $\\mathrm{in}(H)$ is contained in $J'$. Hence, we have $\\mathcal N(J')\\subseteq\\mathcal N(\\mathrm{in}(H))$ and can obtain a $J$-reduced form $\\overline{f_\\beta}$ modulo $I'$ of the polynomial $f_\\beta$ using the reduced Gr\\\"obner basis of $I'$. Imposing that $\\overline{f_\\beta}$ is zero, we obtain that $H$ is contained in $I'$ and collect new constraints on the parameters $C_{\\beta\\gamma}$ in $C_{J\\cap K[\\mathbf x]}$. Let $\\mathfrak b_{J\\cap K[\\mathbf x]} \\subset K[C_{J\\cap K[\\mathbf x]}]$ be the ideal generated by these constraints, for every $x^\\beta\\in B_J$. The ideal $\\mathfrak a_{J\\cap K[\\mathbf x]} + \\mathfrak b_{J\\cap K[\\mathbf x]} \\subset K[C_{J\\cap K[\\mathbf x]}]$, hence the affine scheme $\\textnormal{Spec}\\,\\left(\\frac{ K[C_{J\\cap K[\\mathbf x]}]}{\\mathfrak a_{J\\cap K[\\mathbf x]} + \\mathfrak b_{J\\cap K[\\mathbf x]}}\\right)$, parameterizes the locus in $\\underline{\\mathrm{St}}_{J\\cap K[\\mathbf x]}(K)$ of all the ideals $H\\subset K[\\mathbf x]$ such that $H^{\\textnormal{sat}}=I'$. 
\n\nFinally, we apply Theorem \\ref{FM} and hence consider \n\\begin{equation}\\label{base di I}\ng_\\beta :=f_\\beta + \\sum_{x^\\delta \\in \\mathcal N(J)_{\\vert\\beta\\vert-1}} C_{\\beta\\delta} x_n x^\\delta, \\quad g_\\beta \\in A[C_J][\\mathbf x,x_n],\n\\end{equation}\nfor every minimal generator $x^\\beta$ of $J$. The set $\\{g_\\beta\\}_{x^\\beta \\in B_J}$ is a Gr\\\"obner basis with initial ideal $J$ modulo the ideal $\\mathfrak a_J\\subset A[C_J]$ which defines the Gr\\\"obner stratum $\\mathrm{St}_{J}$.\n\nWe now observe that if the set of polynomials $g_\\beta$ is a Gr\\\"obner basis then the set of polynomials $f_\\beta$ is also a Gr\\\"obner basis, due to the hypothesis on $J$ and $\\mathrm{in}(H)$. This fact means that the ideal $\\mathfrak a_{J\\cap K[\\mathbf x]} A[C_J]$ is contained in $\\mathfrak a_J$. Then, the ideal $\\mathfrak a_J+\\mathfrak b_{J\\cap K[\\mathbf x]} A[C_J]$, hence the affine scheme $\\textnormal{Spec}\\,\\left(\\frac{A[C_J]}{\\mathfrak a_J+\\mathfrak b_{J\\cap K[\\mathbf x]} A[C_J]}\\right)$, parameterizes the locus of the liftings of $I'$ in the family $\\underline{\\mathrm{St}}_{J}(A)$. \n\nIt remains to show that the constraints we obtain by rewriting the polynomials $f_\\beta$ of \\eqref{base di H} by the reduced Gr\\\"obner basis of $I'$ are linear, i.e.~the ideal $\\mathfrak b_{J\\cap K[\\mathbf x]}$ has linear generators. This fact is immediate, because the coefficients of the polynomials of the reduced Gr\\\"obner basis of $I'$ belong to the field $K$.\n\\end{proof}\n\nTheorem \\ref{th:parametrizzazione} describes the locus of liftings of a homogeneous saturated ideal $I'\\subset K[\\mathbf x]$ in a given Gr\\\"obner stratum. Under the further hypothesis that the variable $x_{n-1}$ is generic for $I'$, we can recognize which Gr\\\"obner strata are candidates to contain these liftings.\n\n\\begin{theorem}\\label{th:dove}\nLet $I'\\subseteq K[\\mathbf x]$ be a homogeneous saturated ideal. 
If $x_{n-1}$ is generic for $I'$, then every lifting of $I'$ belongs to the Gr\\\"obner stratum over a monomial lifting $J\\subset A[\\mathbf x,x_n]$ of $\\mathrm{in}(I')$.\n\\end{theorem}\n\n\\begin{proof}\nIt is enough to prove that if $I\\subset A[\\mathbf x,x_n]$ is a lifting of $I'$ and $x_{n-1}$ is generic for $I'$, then $\\Bigl(\\frac{(\\mathrm{in}(I),x_n)}{(x_n)}\\Bigr)^{{\\textnormal{sat}}}=\\mathrm{in}(I')\\otimes_K A$. Hence, we will conclude by applying Theorem \\ref{th:parametrizzazione} and observing that if $x_n$ is generic for $I$ then it is a non-zero divisor modulo $\\mathrm{in}(I)$.\n\nLet $J':=\\mathrm{in}(I')$ and $J:=\\mathrm{in}(I)$. By definition of lifting we have $\\Bigl(\\frac{(I,x_n)}{(x_n)}\\Bigr)^{{\\textnormal{sat}}}=I'\\otimes_K A$. Hence, there exists an integer $s\\geq 0$ such that\n$\\frac{(I_{\\geq s},x_n)}{(x_n)}=\\Bigl(\\frac{(I,x_n)}{(x_n)}\\Bigr)_{\\geq s}=I'_{\\geq s}\\otimes_K A$. \nBy \\cite[Lemma 2.2]{BS}, we have $\\mathrm{in}(I_{\\geq s},x_n)=(\\mathrm{in}(I_{\\geq s}),x_n)$ and obtain\n\\begin{equation}\\label{eq:saturation}\n\\frac{(J_{\\geq s},x_n)}{(x_n)}={J'}_{\\geq s}\\otimes_K A\n\\end{equation}\nbecause $\\mathrm{in}(I_{\\geq s})=\\mathrm{in}(I)_{\\geq s}= J_{\\geq s}$ and $\\mathrm{in}(I'_{\\geq s})=\\mathrm{in}(I')_{\\geq s}= J'_{\\geq s}$.\nNow, it is enough to recall that $J'$ is saturated because $I'$ is saturated and $x_{n-1}$ is not a zero-divisor in $K[\\mathbf x]\/I'$.\n\\end{proof}\n\n\\begin{remark} \nThe condition that $x_{n-1}$ is generic for $I'$ is not restrictive because it can always be obtained up to a suitable change of variables. 
On the other hand, the result of Theorem \\ref{th:dove} does not hold without this hypothesis: for example, for the saturated ideal $I'=(x_0^2,x_1x_0+x_2^2)\\subset K[x_0,x_1,x_2]$ we obtain $\\mathrm{in}(I')=(x_0^2, x_1x_0,x_2^2x_0,x_2^4)$, which is not saturated.\n\\end{remark}\n\nThanks to Theorems \\ref{th:parametrizzazione} and \\ref{th:dove}, we know {\\em where} the liftings of a given saturated polynomial ideal $I'\\subseteq K[\\mathbf x]$ are located and {\\em how} they can be constructed, obtaining Algorithm \\ref{alg:computation in GS}.\n\n\\begin{algorithm}[!ht]\n\\caption{\\label{alg:computation in GS} Algorithm for computing parameter schemes for the liftings of a saturated homogeneous ideal $I'\\subset K[\\mathbf x]$ over a Noetherian $K$-algebra $A$ in $\\mathrm{Hilb}_{p(t)}^n$ by means of Gr\\\"obner strata.}\n\\begin{algorithmic}[1]\n\\STATE $\\textsc{LiftingGS}\\big(I',p(t)\\big)$\n\\REQUIRE $I'\\subset K[\\mathbf x]$ a saturated polynomial ideal such that $x_{n-1}$ is generic for $I'$.\n\\REQUIRE $p(t)$ a Hilbert polynomial such that $\\Delta p(t)= p_Y(t)$, where $p_Y(t)$ is the Hilbert polynomial of the scheme $Y$ defined by $I'$.\n\\ENSURE A set $\\mathfrak B$ containing parameter schemes for the liftings of $I'$ in the Gr\\\"obner stratum over $J$, for every monomial lifting $J\\subset A[\\mathbf x,x_n]$ of $\\mathrm{in}(I')$ with Hilbert polynomial $p(t)$.\n\\STATE $\\mathcal L:=\\{ J \\subset A[\\mathbf x,x_n] \\ \\vert \\ J \\text{ monomial lifting of } \\mathrm{in}(I') \\text{ with Hilbert polynomial } p(t)\\}$; \n\\STATE $\\mathfrak B=\\emptyset$;\n\\FOR{$J \\in \\mathcal L$}\n\\STATE let $\\mathfrak a_J \\subset A[C_J]$ be the defining ideal of the Gr\\\"obner stratum $\\mathrm{St}_{J}$;\n\\STATE $\\mathfrak b_{J\\cap K[\\mathbf x]}:=(0)$;\n\\FOR{$x^\\beta\\in B_J$}\n\\STATE construct the polynomial $f_\\beta$ as in \\eqref{base di H};\n\\STATE $\\mathfrak b_{J\\cap K[\\mathbf x]}:= \\mathfrak b_{J\\cap K[\\mathbf x]}+(\\mathrm{coeff}(\\overline{f_\\beta}))$, where $\\overline{f_\\beta}$ is the $J$-normal form of $f_\\beta$ modulo $I'$; \n\\ENDFOR\n\\STATE $\\mathfrak B:=\\mathfrak B\\cup \\{\\mathfrak a_J+\\mathfrak b_{J\\cap K[\\mathbf x]}A[C_J]\\}$;\n\\ENDFOR\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\n\\section{Functors of liftings}\n\nFrom Theorem \\ref{th:parametrizzazione} we obtain the following generalization of \\cite[Corollary~3.3]{BCGR}\nin which, if $\\phi:A\\rightarrow B$ is a $K$-algebra morphism, we also denote by $\\phi$ the natural extension\nof $\\phi$ to $A[\\mathbf x,x_n]$. Recall that the image under $\\phi$ of every ideal $I$ in $A[\\mathbf x,x_n]$ generates the extension $I^e = IB[\\mathbf x,x_n]=I \\otimes_A B$ (see \\citep{BGS91}). \n\n\\begin{proposition} \\label{prop:funtore}\nLet $I'\\subset K[\\mathbf x]$ be a saturated ideal and $\\phi:A\\rightarrow B$ a $K$-algebra morphism. For every lifting $I \\subset A[\\mathbf x,x_n]$ of $I'$ over $A$, the ideal $I\\otimes_A B$ is a lifting of $I'$ over $B$, with the same Hilbert polynomial as $I$. \n\\end{proposition}\n\n\\begin{proof} Let $G_I$ be the reduced monic Gr\\\"obner basis of $I$ and $J=\\mathrm{in}(I)$. Then, $\\phi(G_I)$ is a monic Gr\\\"obner basis of $I\\otimes_A B$ with the same initial terms as $G_I$ because $G_I$ is monic. Let $H\\subset K[\\mathbf x]$ be the ideal such that $I$ is an $x_n$-lifting of $H$ and $H^{\\textnormal{sat}}=I'$. Let $G_H=\\{f_\\beta\\}_{x^\\beta\\in B_J}$ be the reduced (monic) Gr\\\"obner basis of $H$. 
By Theorems \\ref{th:parametrizzazione} and \\ref{FM} we have that $G_I$ is of the following type\n\\[\nG_I=\\lbrace f_\\beta + \\sum_{x^\\delta \\in \\mathcal N(J)_{\\vert \\beta\\vert-1}} {c}_{\\beta\\delta} x_n x^\\delta \\rbrace_\\beta, \\quad c_{\\beta\\delta} \\in A.\n\\]\nThe ideal $I\\otimes_A B$ is then generated by $\\phi(G_I)=\\lbrace f_\\beta + \\sum_{x^\\delta \\in \\mathcal N(J)_{\\vert \\beta\\vert-1}} \\phi({c}_{\\beta\\delta}) x_n x^\\delta \\rbrace_{x^\\beta\\in B_J}$, which is still a reduced Gr\\\"obner basis because the polynomials of $G_I$ are monic. Hence, $I\\otimes_A B$ is an $x_n$-lifting of $H$ by Theorem \\ref{FM} \nand a lifting of $I'$ by Proposition \\ref{prop:geometric lifting}.\n\nFor the statement concerning \nthe Hilbert polynomial, it is sufficient to observe that $I$ and $I\\otimes_A B$ have the same initial ideal, hence the same Hilbert polynomial.\n\\end{proof}\n\nGiven a scheme $Y\\subseteq \\mathbb P^{n-1}_K$, thanks to Proposition \\ref{prop:funtore} we can now easily define some functors concerning the liftings of $Y$. \n\n\\begin{definition}\\label{def:funtore lifting}\nLet $Y=\\textnormal{Proj}\\,(K[\\mathbf x]\/I')$ be a closed subscheme of $\\mathbb P^{n-1}_K$ with Hilbert polynomial $p_Y(t)$ and $p(t)$ be a Hilbert polynomial such that $\\Delta p(t)=p_Y(t)$. 
\n\\begin{itemize}\n\\item[(a)] The {\\em functor of liftings of $Y$}, $\\underline{\\mathrm{L}}_{Y}:{\\text{Noeth-}K\\text{-Alg}}\\rightarrow {\\mathrm{Sets}}$, \nassociates to every Noetherian $K$-algebra $A$ the set \n$$\\underline{\\mathrm{L}}_{Y}(A):=\\{I \\subset A[\\mathbf x,x_n] : I \\text{ lifting of } I' \\}$$\nand to every morphism of $K$-algebras $\\phi: A \\rightarrow B$ the map\n\\[\\begin{array}{rcl}\n\\underline{\\mathrm{L}}_{Y}(\\phi): \\underline{\\mathrm{L}}_{Y}(A)&\\rightarrow& \\underline{\\mathrm{L}}_{Y}(B)\\\\\n I&\\mapsto& I\\otimes_A B.\n\\end{array}\\]\n\n\\item[(b)] The {\\em functor of liftings of $Y$ with Hilbert polynomial $p(t)$}, $\\underline{\\mathrm{L}}_Y^{p(t)}: {\\text{Noeth-}K\\text{-Alg}}\\rightarrow {\\mathrm{Sets}}$, associates to every Noetherian $K$-algebra $A$ the set \n$$\\underline{\\mathrm{L}}_{Y}^{p(t)}(A):=\\{I \\subset A[\\mathbf x,x_n] : I \\text{ lifting of } I' \\text{ with Hilbert polynomial } p(t)\\}$$\nand to every morphism of $K$-algebras $\\phi: A \\rightarrow B$ the map\n\\[\\begin{array}{rcl}\n\\underline{\\mathrm{L}}_{Y}^{p(t)}(\\phi): \\underline{\\mathrm{L}}_{Y}^{p(t)}(A)&\\rightarrow& \\underline{\\mathrm{L}}_{Y}^{p(t)}(B)\\\\\n I&\\mapsto& I\\otimes_A B.\n\\end{array}\\]\n\\end{itemize}\n\\end{definition}\n\nIt is immediate that $\\underline{\\mathrm{L}}_{Y}^{p(t)}$ is a subfunctor of $\\underline{\\mathrm{L}}_{Y}$, and furthermore $\\underline{\\mathrm{L}}_{Y}$ (resp.~$\\underline{\\mathrm{L}}_{Y}^{p(t)}$) is a subfunctor of $\\underline{\\mathrm{Hilb}}^n$ (resp.~$\\underline{\\mathrm{Hilb}}^n_{p(t)}$). 
In fact, recall that the functors $\\underline{\\mathrm{Hilb}}^n$ and $\\underline{\\mathrm{Hilb}}^n_{p(t)}$ are both representable by locally Noetherian schemes, hence it is enough to consider their restrictions to the category of Noetherian $K$-algebras.\n\nUsing the same arguments on Hilbert schemes that give \\eqref{eq:co-product}, we obtain that the functor of liftings of $Y$ decomposes as a co-product of the above subfunctors \n\\begin{equation}\\label{eq:co-product lifting}\n\\underline{\\mathrm{L}}_{Y}=\\coprod_{p(t)\\text{ admissible for liftings of $Y$ in } \\mathbb P_K^n} \\underline{\\mathrm{L}}_{Y}^{p(t)}.\n\\end{equation}\n\n\\begin{proposition}\\label{prop:Zariski sheaf}\nThe functor $\\underline{\\mathrm{L}}_Y^{p(t)}$ is a Zariski sheaf.\n\\end{proposition}\n\n\\begin{proof}\nLet $A$ be a Noetherian $K$-algebra and $\\{U_i=\\textnormal{Spec}\\,(A_{a_i})\\}_{i=1,\\dots,r}$ be an open covering of $\\textnormal{Spec}\\,(A)$. This is equivalent to the fact that $(a_1,\\dots,a_r)=A$. Consider a set of ideals $I_i\\in \\underline{\\mathrm{L}}_Y^{p(t)}(A_{a_i})$ such that for any pair of indices $i\\not= j$ we have\n\\begin{equation}\\label{eq:Zariski sheaf}\nI_{ij}:=I_i\\otimes_{A_{a_i}} A_{a_ia_j} = I_j\\otimes_{A_{a_j}} A_{a_ia_j} \\in \\underline{\\mathrm{L}}_Y^{p(t)}(A_{a_ia_j}).\n\\end{equation}\nWe need to show that there is a unique ideal $I\\in \\underline{\\mathrm{L}}_Y^{p(t)}(A)$ such that $I_i=I\\otimes_A A_{a_i}$ for every $i$. \n \nBy Proposition \\ref{prop:geometric lifting}, there are ideals $H_i, H_j\\subset K[\\mathbf x]$ such that $H_i^{\\textnormal{sat}}=H_j^{\\textnormal{sat}}=I'$ and $I_i$ is an $x_n$-lifting of $H_i$, while $I_j$ is an $x_n$-lifting of $H_j$. By Theorem \\ref{FM} and assumption \\eqref{eq:Zariski sheaf}, $H_i=H_j\\subset K[\\mathbf x]$; hence, setting $H:=H_i=H_j$, the ideal $I_i$ belongs to $\\underline{\\mathrm L_H}(A_{a_i})$ and $I_j$ belongs to $\\underline{\\mathrm L_H}(A_{a_j})$. 
Since $\\underline{\\mathrm L_H}$ is a Zariski sheaf, there is a unique $I\\subset A[\\mathbf x,x_n]$ such that $I$ is an $x_n$-lifting of $H$, $I\\otimes_{A}A_{a_i}=I_i$ and $I\\otimes_{A}A_{a_j}=I_j$. By Proposition \\ref{prop:geometric lifting} we conclude that $I$ belongs to $\\underline{\\mathrm{L}}_Y^{p(t)}(A)$ and is the unique ideal in $A[\\mathbf x,x_n]$ such that $I_i=I\\otimes_A A_{a_i}$ for every $i$. \n\\end{proof}\n\nRecall that a closed subscheme in $\\mathbb P^n_K$ is {\\em equidimensional} if all its components have the same dimension; in particular, it has no embedded components. Thus, there exists an equidimensional lifting $W$ of a subscheme $Y$ only if $Y$ is equidimensional, i.e.~the ideal $I'\\subset K[\\mathbf x]$ defines an equidimensional scheme in $\\mathbb P^{n-1}_K$.\nWe say that a saturated ideal $I\\subset A[\\mathbf x,x_n]$ is {\\em equidimensional} if it defines families of equidimensional subschemes. The next well-known result \nhighlights that base extension preserves the fibers over every $K$-point and, hence, their possible equidimensionality. \n\n\\begin{lemma}\\label{lemma:funtore equidimensionale}\nLet $W$ be a scheme over a $K$-algebra $A$. If $\\phi: A \\rightarrow B$ is a morphism of $K$-algebras, then the fibers of $W\\to \\textnormal{Spec}\\,(A)$ are isomorphic to the fibers of $W\\times_{\\textnormal{Spec}\\,(A)} \\textnormal{Spec}\\,(B)\\to \\textnormal{Spec}\\,(B)$ for every $K$-point.\n\\end{lemma}\n\n\\begin{proof} For the sake of completeness we give a proof of this statement.\nLet $\\phi: A\\to B$ be a morphism of $K$-algebras, $\\phi^{*}: \\textnormal{Spec}\\,(B) \\to \\textnormal{Spec}\\,(A)$ the corresponding morphism and $\\textnormal{Spec}\\,(K)\\to \\textnormal{Spec}\\,(B)$ the morphism associated to a $K$-point of $\\textnormal{Spec}\\,(B)$ (e.g.~\\cite[Chapter II, Exercise 2.7]{H}). 
Moreover, let $\\textnormal{Spec}\\,(K)\\to \\textnormal{Spec}\\,(A)$ be the morphism associated to the $K$-point of $\\textnormal{Spec}\\,(A)$ obtained by composition with $\\phi^{*}$ and $W\\times_{\\textnormal{Spec}\\,(A)} \\textnormal{Spec}\\,(K)$ the fiber on this $K$-point. Then, we obtain $(W\\times_{\\textnormal{Spec}\\,(A)} \\textnormal{Spec}\\,(B))\\times_{\\textnormal{Spec}\\,(B)} \\textnormal{Spec}\\,(K)\\simeq W\\times_{\\textnormal{Spec}\\,(A)} \\textnormal{Spec}\\,(K)$ due to the transitivity of base extension.\n\\end{proof}\n\n\\begin{definition}\\label{def:funtore lifting eq}\nLet $Y=\\textnormal{Proj}\\,(K[\\mathbf x]\/I')$ be an equidimensional closed subscheme of $\\mathbb P^{n-1}_K$ with Hilbert polynomial $p_Y(t)$ and $p(t)$ a Hilbert polynomial such that $\\Delta p(t)=p_Y(t)$. Thanks to Lemma \\ref{lemma:funtore equidimensionale} we define\n\\begin{itemize}\n\\item[(a)] The {\\em functor of equidimensional liftings of $Y$}, denoted by $\\underline{\\mathrm{L}}_{Y}^e: {\\text{Noeth-}K\\text{-Alg}}\\rightarrow {\\mathrm{Sets}}$,\nassociates to every Noetherian $K$-algebra $A$ the set \n$$\\underline{\\mathrm{L}}_{Y}^e(A):=\\{I \\subset A[\\mathbf x,x_n] : I \\text{ equidimensional lifting of } I' \\}$$\nand to every morphism of $K$-algebras $\\phi: A \\rightarrow B$ the map\n\\[\\begin{array}{rcl}\n\\underline{\\mathrm{L}}_{Y}^e(\\phi): \\underline{\\mathrm{L}}_{Y}^e(A)&\\rightarrow& \\underline{\\mathrm{L}}_{Y}^e(B)\\\\\n I&\\mapsto& I\\otimes_A B.\n\\end{array}\\]\n\n\\item[(b)] The {\\em functor of equidimensional liftings of $Y$ with Hilbert polynomial $p(t)$}, denoted by $\\underline{\\mathrm{L}}_Y^{p(t),e}:~{\\text{Noeth-}K\\text{-Alg}}\\rightarrow~{\\mathrm{Sets}}$, associates to every Noetherian $K$-algebra $A$ the set \n$$\\underline{\\mathrm{L}}_{Y}^{p(t),e}(A):=\\{I \\subset A[\\mathbf x,x_n] : I \\text{ equidimensional lifting of } I' \\text{ with Hilbert polynomial } p(t)\\}$$\nand to every morphism of $K$-algebras $\\phi: A 
\\rightarrow B$ the map\n\\[\\begin{array}{rcl}\n\\underline{\\mathrm{L}}_{Y}^{p(t),e}(\\phi): \\underline{\\mathrm{L}}_{Y}^{p(t),e}(A)&\\rightarrow& \\underline{\\mathrm{L}}_{Y}^{p(t),e}(B)\\\\\n I&\\mapsto& I\\otimes_A B.\n\\end{array}\\]\n\\end{itemize}\n\\end{definition}\n\nBy definition, the functor $\\underline{\\mathrm{L}}_Y^{e}$ (resp.~$\\underline{\\mathrm{L}}_Y^{p(t),e}$) is a subfunctor of $\\underline{\\mathrm{L}}_Y$ (resp.~of $\\underline{\\mathrm{L}}_Y^{p(t)}$) and we have\n\\begin{equation}\\label{eq:co-product lifting equid}\n\\underline{\\mathrm{L}}_{Y}^e=\\coprod_{p(t)\\text{ admissible for liftings of $Y$ in } \\mathbb P_K^n} \\underline{\\mathrm{L}}_{Y}^{p(t),e}\n\\end{equation}\nsimilarly to formulas \\eqref{eq:co-product} and \\eqref{eq:co-product lifting} for $\\underline{\\mathrm{Hilb}^n}$ and $\\underline{\\mathrm{L}}_Y$.\n\n\n\n\\section{The functor $\\underline{\\mathrm{L}}_Y^{p(t),e}$ is representable}\n\nWe need some preliminary results. \n\n\\begin{proposition}\\label{prop:Gro2} \\cite[Proposition (2.3.4)(iii)]{Gro2}\nLet $S$ be a locally Noetherian $K$-scheme, $W$ be an element of $\\underline{\\mathrm{Hilb}}_{p(t)}^n(S)$ and $f: W \\rightarrow S$ the corresponding flat projection. For every irreducible closed subset $S'$ of $S$, every irreducible component $W'$ of $f^{-1}(S')$ is dominant on $S'$, i.e.~$f'=f_{\\vert W'} : W' \\rightarrow S'$ is dominant. \n\\end{proposition}\n\nIf $p_Y(t)$ is the Hilbert polynomial of $Y$, we can consider the Hilbert-flag scheme $\\mathcal Fl_{p_Y,p}$ (see \\citep{Kleppe}) and the projections $\\pi_1: \\mathcal Fl_{p_Y,p} \\longrightarrow \\mathrm{Hilb}_{p_Y(t)}^{n-1}$ and $\\pi_2: \\mathcal Fl_{p_Y,p} \\longrightarrow \\mathrm{Hilb}_{p(t)}^n$. Thus, $\\pi_1^{-1}(Y)$ is the closed scheme consisting of the pairs $(Y,W)$ where $W$ varies among all the closed subschemes of $\\mathbb P^n_K$ containing $Y$ and with Hilbert polynomial $p(t)$. 
We set $\\mathrm{Hilb}_{p(t),Y}^n:=\\pi_2(\\pi_1^{-1}(Y))$.\n\n\\begin{proposition}\\label{prop:flag}\n$\\mathrm{Hilb}_{p(t),Y}^n$ is a closed subscheme of $\\mathrm{Hilb}_{p(t)}^n$ which represents a closed subfunctor $\\underline{\\mathrm{Hilb}}_{p(t),Y}^n$ of $\\underline{\\mathrm{Hilb}}_{p(t)}^n$. \n\\end{proposition}\n\n\\begin{proof}\nFrom the definition it follows straightforwardly that $\\mathrm{Hilb}_{p(t),Y}^n:=\\pi_2(\\pi_1^{-1}(Y))\\simeq \\pi_1^{-1}(Y)$ is the closed subscheme of $\\mathrm{Hilb}_{p(t)}^n$\nconsisting of the points in $\\mathrm{Hilb}_{p(t)}^n$ corresponding to schemes containing $Y$. Thus, it represents the closed subfunctor of the Hilbert functor that associates to a scheme $S$ the set of subschemes $W$ of $\\mathbb P^n_K \\times S$ containing $Y\\times_{\\mathrm{Spec}(K)}~S$.\n\\end{proof}\n\n\\begin{theorem}\\label{th:schema che rappresenta}\nLet $Y$ be an equidimensional closed subscheme of $\\mathbb P^{n-1}_K$ with Hilbert polynomial $p_Y(t)$ and $p(t)$ a Hilbert polynomial such that $\\Delta p(t)=p_Y(t)$. Then, $\\underline{\\mathrm{L}}_Y^{p(t),e}$ is representable by a locally closed subscheme ${\\mathrm{L}}_Y^{p(t),e}$ of ${\\mathrm{Hilb}}_{p(t)}^n$.\n\\end{theorem}\n\n\\begin{proof} \nLet $S$ be a Noetherian $K$-scheme, $W$ an element of $\\underline{\\mathrm{Hilb}}_{p(t)}^n(S)$ and $f: W \\rightarrow S$ the corresponding flat projection. The fibers of $f$ in $W$ have degree equal to $\\deg(Y)$ and dimension equal to $\\dim(Y)+1$ because $\\Delta p(t)=p_Y(t)$. \n\nFor every irreducible closed subset $S'$ of $S$, let $W'$ be any irreducible component of $f^{-1}(S')$. By Proposition \\ref{prop:Gro2}, $W'$ is dominant on $S'$. Then, also $W'\\cap (\\mathcal H\\times S')$ is dominant on $S'$, because for every $s'\\in S'$ the fiber of $s'$ in $W'$ has dimension at least $1$ by construction, and hence $s'$ has a fiber in $W'\\cap (\\mathcal H\\times S')$ of dimension at least $0$. 
Indeed, the dimension of every fiber in $W'\\cap (\\mathcal H\\times S')$ is between $\\dim(Y)+1$ and $\\dim(Y)$.\n\nRecall that the dimension of the fibers of a dominant morphism is an upper semicontinuous function, namely the subset of $S'$ whose fibers in $W\\cap(\\mathcal H\\times S')$ have dimension less than or equal to $\\dim(Y)$ is open \\cite[Chapter I, section 8, Corollary 3]{Mumford}. Since this dimension cannot be strictly lower than $\\dim(Y)$ by the previous argument, all the fibers of the above open subset have dimension equal to $\\dim(Y)$. \n\nIn the above situation, if we assume that $W$ belongs to $\\underline{\\mathrm{Hilb}}_{p(t)}^n(U_e)$, where $U_e$ is the open subscheme of $\\mathrm{Hilb}_{p(t)}^n$ that parameterizes the families of equidimensional subschemes of $\\mathbb P^n_K$ with Hilbert polynomial $p(t)$ \\cite[Th\\'{e}or\\`{e}me (12.2.1)(iii)]{Gro3}, \nwe can also observe that the fibers in $W$ and $\\mathcal H \\times_{\\mathrm{Spec}(K)} S$ intersect properly, implying that the degree of the fibers in $W\\cap (\\mathcal H \\times_{\\mathrm{Spec}(K)} S)$ is less than or equal to $\\deg(Y)$ because the fibers and $\\mathcal H \\times_{\\mathrm{Spec}(K)} S$ are equidimensional (see \\cite[Corollary 18.5]{Harris92} in case of varieties). \n\nIf we also assume that $W$ belongs to $\\mathrm{Hilb}_{p(t),Y}^n$, so that $W$ contains $Y$, we obtain that the fibers in $W\\cap (\\mathcal H \\times_{\\mathrm{Spec}(K)} S)$ have degree equal to $\\deg(Y)$ because $Y\\times_{\\mathrm{Spec}(K)} S \\subseteq W\\cap (\\mathcal H \\times_{\\mathrm{Spec}(K)} S)$.\n\nWe can now conclude that there is an open subset in $\\mathrm{Hilb}_{p(t),Y}^n$ describing all subschemes $W$ such that $W\\cap (\\mathcal H \\times_{\\mathrm{Spec}(K)} S)=Y\\times_{\\mathrm{Spec}(K)} S$, namely $W$ is a lifting of $Y$. 
It is immediate that any equidimensional lifting of $Y$ belongs to this open subset of $\\mathrm{Hilb}_{p(t),Y}^n$, which is hence a locally closed subscheme of $\\mathrm{Hilb}_{p(t)}^n$ by Proposition \\ref{prop:flag}. \n\\end{proof}\n\n\\begin{remark} \nThe locally closed subscheme ${\\mathrm{L}}_Y^{p(t),e}$ of ${\\mathrm{Hilb}}_{p(t)}^n$ which has been introduced in the proof of Theorem \\ref{th:schema che rappresenta} completely describes the locally Cohen-Macaulay liftings of $Y$ when $Y$ is a zero-dimensional scheme. \n\\end{remark}\n\n\n\\section{The case of a quasi-stable initial ideal}\n\\label{sec:quasi-stable}\n\nThanks to Theorem \\ref{th:dove} we have that the liftings of a saturated homogeneous ideal $I'\\subset K[\\mathbf x]$ with $x_{n-1}$ generic belong to a Gr\\\"obner stratum over a monomial lifting $J$ of $J':=\\mathrm{in}(I')$. In this section we prove that if $J'$ is quasi-stable then $J$ is quasi-stable too (Theorem \\ref{th:lifting}). \n\nThe assumption that $J'$ is quasi-stable is not restrictive: indeed, this can be obtained by a change of coordinates on $I'$, and this change does not affect the scheme $Y=\\textnormal{Proj}\\,(K[\\mathbf x]\/I')$ from a geometric point of view. Quasi-stability for initial ideals will allow us to use the techniques concerning marked bases over quasi-stable ideals developed in \\citep{BCR2} (see \\citep{ABRS} for the more general case of free modules).\n\n\\begin{lemma}\\label{lemma:ss tagliato}\nLet $J\\subseteq A[\\mathbf x,x_n]$ be a monomial ideal.\nIf $J_{\\geq s}$ is quasi-stable for some integer $s$, then $J$ is quasi-stable.\n\\end{lemma}\n\n\\begin{proof}\nIf $J_{\\geq s}$ is quasi-stable, it is enough to check the condition of Definition \\ref{def:stable} for every term $x^\\alpha \\in J$ with $\\vert\\alpha\\vert < s$. For every $x_i > \\min(x^\\alpha)$, take $x^\\alpha x_i^{s-\\vert\\alpha\\vert} \\in J_{\\geq s}$. 
Then, there is an integer $t$ such that $\\frac{x^\\alpha x_i^{s-\\vert\\alpha\\vert+t}}{\\min(x^\\alpha x_i^{s-\\vert\\alpha\\vert})}$ belongs to $J_{\\geq s}$, because $J_{\\geq s}$ is quasi-stable. We conclude by observing that $\\min(x^\\alpha)=\\min(x^\\alpha x_i^{s-\\vert\\alpha\\vert})$ by construction.\n\\end{proof}\n\n\\begin{theorem}\\label{th:lifting}\nIf $J'\\subseteq K[\\mathbf x]$ is a saturated quasi-stable ideal, then any monomial lifting $J\\subseteq A[\\mathbf x,x_n]$ of $J'$ is quasi-stable.\n\\end{theorem}\n\n\\begin{proof}\nConsider the ideal $L=\\frac{(J,x_n)}{(x_n)}\\cap K[\\mathbf x]$. Since quasi-stability is a property concerning the semigroup structure of the ideal generated by the minimal monomial basis of $J$, regardless of the coefficient ring, it is sufficient to prove that $L$ is quasi-stable.\n\nBy hypothesis, we have $L^{\\textnormal{sat}}=J'$ and hence, if $s={\\textnormal{sat}}(L)$, then $L_{\\geq s}={J'}_{\\geq s}$ is quasi-stable. By Lemma \\ref{lemma:ss tagliato}, $L$ is quasi-stable too.\n\\end{proof}\n\n\\begin{corollary}\\label{cor:lifting}\nLet $I'\\subseteq K[\\mathbf x]$ be a homogeneous saturated ideal and $I\\subseteq A[\\mathbf x,x_n]$ be a lifting of $I'$. If $\\mathrm{in}(I')$ is quasi-stable, then $\\mathrm{in}(I)$ is a quasi-stable lifting of $\\mathrm{in}(I')$. \n\\end{corollary}\n\n\\begin{proof}\nThis is a consequence of the fact that the smallest variable is generic for any quasi-stable ideal (see \\cite[Prop. 4.4(ii)]{Seiler2009II}), of Remark \\ref{rem:generic and degrevlex}\\eqref{item:remgeneric} and of Theorems \\ref{th:lifting} and \\ref{th:dove}.\n\\end{proof}\n\n\\begin{example}\\label{ex:un primo calcolo} \nIn this example we apply Theorem \\ref{th:lifting}. Let $\\mathbf x:=\\{x_0,\\dots,x_3\\}$ and consider the saturated ideal $I'=(x_0^2,x_0x_1+x_1^2,x_0x_2)=(x_1^2,x_0) \\cap (x_2,x_1^3,x_0x_1+x_1^2,x_0^2) \\subseteq K[\\mathbf x]$. 
\nThe reduced Gr\\\"obner basis of $I'$ is $G'=\\{x_0x_2, x_0x_1 + x_1^2, x_0^2, x_1^2x_2, x_1^3\\}$, hence the initial ideal is the quasi-stable ideal $J':=\\mathrm{in}(I')=(x_0^2,x_0x_1,x_0x_2,$ $x_1^2x_2,x_1^3)$.\nThe Hilbert function of $\\frac{K[\\mathbf x]}{I'}$ is \n\\[\nh_{K[\\mathbf x]\/I'}(0)=1, h_{K[\\mathbf x]\/I'}(1)= 4 \\text{ and } h_{K[\\mathbf x]\/I'}(t)=2t+3 \\text{ for every }t\\geq 2, \n\\]\nhence $I'$ defines a curve $Y$ in $\\mathbb P^3_K$. A Hilbert polynomial $p(t)$ such that $\\Delta p(t)=2t+3$ must be of the form $p(t)=t^2+4t+c$. \nAssume that $W\\subset \\mathbb P^4_K$ is a surface and is a lifting of $Y$ over a Noetherian $K$-algebra $A$. The Hilbert polynomial of $W$ is $p(t)=t^2+4t+c$ with $c\\geq 0$. Indeed, since $x_4$ is not a zero-divisor on $A[\\mathbf x,x_4]\/I$, we have the short exact sequence\n$$ 0 \\rightarrow (A[\\mathbf x,x_4]\/I)_{t-1} \\xrightarrow{\\cdot x_4} (A[\\mathbf x,x_4]\/I)_t \\rightarrow (A[\\mathbf x,x_4]\/(I,x_4))_t \\rightarrow 0,$$\nwhich gives $h_{A[\\mathbf x,x_4]\/(I,x_4)}(t) = \\Delta h_{A[\\mathbf x,x_4]\/I}(t)$, in particular $p_{A[\\mathbf x,x_4]\/{(I,x_4)}}(t)=\\Delta p_{A[\\mathbf x,x_4]\/{I}}(t)$ and $\\Delta h_W(t)\\geq h_Y(t)$ for every $t$. As a consequence, $h_W(t)\\geq \\sum_{i=0}^t h_Y(i)$, hence \n\\[\nh_{A[\\mathbf x,x_4]\/I}(0)= 1, h_{A[\\mathbf x,x_4]\/I}(1)= 5 \\text{ and } h_{A[\\mathbf x,x_4]\/I}(t)\\geq t^2+4t \\text{ for every }t\\geq 2, \n\\]\nand $c=0$ is the minimal possible value of the constant term $c$ for the Hilbert polynomial of $W$. \nWe now investigate the cases $c=0$ and $c=1$. In the following, we compute quasi-stable ideals by the algorithm described in \\citep{Bertone}. \n\nIf $c=0$, among $56$ possible quasi-stable saturated ideals there is a unique quasi-stable ideal $J:=J'\\cdot K[\\mathbf x,x_4]\\subset K[\\mathbf x,x_4]$ such that $\\left(\\frac{(J,x_4)}{(x_4)}\\right)^{{\\textnormal{sat}}} \\cap K[\\mathbf x]= J'$. 
Hence, in this case the liftings of $I'$ belong to the family $\\underline{\\mathrm{St}}_J(A)$ and, by Theorem \\ref{th:dove}, are exactly the $x_4$-liftings of $I'$ because $J$ and $J'$ share the same generators.\n\nIf $c=1$, among $176$ possible quasi-stable saturated ideals there are only the following $5$ quasi-stable ideals $J^{(i)}\\subset K[\\mathbf x,x_4]$ such that $\\left(\\frac{(J^{(i)},x_4)}{(x_4)}\\right)^{{\\textnormal{sat}}}\\cap K[\\mathbf x]=\\mathrm{in}(I')$:\n$$\\begin{array}{l}\nJ^{(1)}=(x_2x_0, x_1x_0, x_3x_0^2, x_2x_1^2, x_1^3, x_0^3),\\\\\nJ^{(2)}=(x_2x_0, x_0^2, x_3x_1x_0, x_2x_1^2, x_1^3, x_1^2x_0),\\\\\nJ^{(3)}=(x_1x_0, x_0^2, x_3x_2x_0, x_2^2x_0, x_2x_1^2, x_1^3),\n\\\\\nJ^{(4)}=(x_2x_0, x_1x_0, x_0^2, x_2x_1^2, x_3x_1^3, x_1^4),\\\\\nJ^{(5)}=(x_2x_0, x_1x_0, x_0^2, x_1^3, x_3x_2x_1^2, x_2^2x_1^2).\n\\end{array}\n$$\nHence, in this case the liftings of $I'$ belong to the union of Gr\\\"obner strata over the above quasi-stable ideals in the Hilbert scheme $\\mathrm{Hilb}_{t^2+4t+1}^4$. 
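The quasi-stability of the monomial ideals listed above can be verified mechanically. The following Python sketch (our own encoding, unrelated to the algorithm of \\citep{Bertone}) represents monomials as exponent tuples with the variables ordered $x_0 > x_1 > \dots$, so that the minimal variable of a term is the dividing variable of highest index; the condition is checked on the minimal generators only, which suffices because every term of the ideal is a multiple of a generator, and the search for the exponent $t$ is truncated at a small bound that is ample for these examples.

```python
def divides(g, m):
    """x^g divides x^m, for exponent tuples g, m."""
    return all(gi <= mi for gi, mi in zip(g, m))

def in_ideal(m, gens):
    """Membership of the monomial x^m in the monomial ideal generated by gens."""
    return any(divides(g, m) for g in gens)

def is_quasi_stable(gens, t_max=10):
    """Naive quasi-stability test on minimal generators (variables ordered
    x_0 > x_1 > ... > x_{n-1}): for every generator x^a with minimal
    variable x_j and every larger variable x_i (i < j), some
    x_i^t * x^a / x_j must lie in the ideal; t is searched up to t_max."""
    n = len(gens[0])
    for a in gens:
        j = max(i for i in range(n) if a[i] > 0)   # index of the minimal variable
        for i in range(j):                          # variables x_i > x_j
            ok = False
            for t in range(t_max + 1):
                m = list(a)
                m[j] -= 1                           # divide by x_j
                m[i] += t                           # multiply by x_i^t
                if in_ideal(tuple(m), gens):
                    ok = True
                    break
            if not ok:
                return False
    return True

# J' = (x_0^2, x_0 x_1, x_0 x_2, x_1^2 x_2, x_1^3) in K[x_0,...,x_3]
Jprime = [(2,0,0,0), (1,1,0,0), (1,0,1,0), (0,2,1,0), (0,3,0,0)]
print(is_quasi_stable(Jprime))   # True
# (x_1) in K[x_0, x_1] is not quasi-stable
print(is_quasi_stable([(0,1)]))  # False
```

The same function accepts the five ideals $J^{(1)},\dots,J^{(5)}$ above, encoded as tuples of length $5$ over $x_0,\dots,x_4$.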
Now, we apply the construction described in the proof of Theorem \\ref{th:parametrizzazione} to these monomial ideals and obtain the following families of Gr\\\"obner bases for the ideals of type $H$, modulo the defining ideal of the Gr\\\"obner stratum over $J^{(k)}$:\n\n$J^{(1)}$: $G^{(1)}=\\{x_2x_0, x_0x_1+x_1^2, x_3x_0^2, x_2x_1^2, x_1^3, x_0^3 \\}$,\n\n$J^{(2)}$: $G^{(2)}(a)=\\{x_2x_0, \nx_0^2 + ax_0x_1 + ax_1^2, \nx_0x_1x_3+ x_1^2x_3,\nx_2x_1^2,\nx_1^3, \nx_1^2x_0\\}$ where $a\\in A$,\n\n$J^{(3)}$: $G^{(3)}(b,c)=\\{ x_0x_1+ x_1^2 + bx_0x_2, \nx_0^2 + cx_0x_2, \nx_3x_2x_0,\nx_0x_2^2, \nx_2x_1^2,\nx_1^3\\}$ where $b,c\\in A$,\n\n$J^{(4)}$: $G^{(4)}=\\{x_2x_0, \nx_0x_1+x_1^2,\nx_0^2,\nx_2x_1^2,\nx_3x_1^3,\nx_1^4\\}$,\n\n$J^{(5)}$: $G^{(5)}(d)=\\{x_0x_2, \nx_0x_1+x_1^2, \nx_0^2, \nx_1^3+dx_1^2x_2, \nx_1^2x_2x_3,\nx_1^2x_2^2\\}$ where $d\\in A$.\n \n\\noindent In conclusion, for every $k=1,\\dots,5$, the liftings of $I'$ in $\\mathrm{St}_{J^{(k)}}$ are the $x_4$-liftings of the ideals generated by the corresponding Gr\\\"obner bases ${(G^{(k)}(a,b,c,d))}$, modulo the defining ideal of the Gr\\\"obner stratum over $J^{(k)}$.\n\\end{example}\n\n\\begin{example}\\label{ex:lCM} \nConsider a field $K$ with $\\mathrm{char}(K)=0$ and a $K$-algebra $A$. In the present example we test our methods in order to characterize the locally Cohen-Macaulay liftings in $\\mathbb P_A^3$ of the double point $Y\\subset \\mathbb P^2_K$ defined by the ideal $I'=(x_0,x_1^2)\\subset K[x_0,x_1,x_2]$, where $J'=\\mathrm{in}(I')=I'$ is quasi-stable. For every Hilbert polynomial $p(t)$, a lifting $W$ of $Y$ with Hilbert polynomial $p(t)$ has embedded or isolated points if and only if $W$ contains a lifting with Hilbert polynomial $p(t)-1$. \n\nFor $p(t)=2t+1$ we find a unique monomial quasi-stable lifting $J=(x_0,x_1^2)\\subset A[x_0,x_1,x_2,x_3]$ of $J'$. 
Furthermore, $H=I'\\subset K[x_0,x_1,x_2]$ is the unique ideal such that $H^{\\textnormal{sat}}=I'$ and $\\mathrm{in}(H)=J$.\nHence, $\\underline{\\mathrm{L}}_Y^{2t+1}(A)$ consists of the $x_3$-liftings of the ideal $(x_0,x_1^2)$. These liftings are generated by the following Gr\\\"obner basis, which depends on the free parameters $\\alpha, \\beta, \\gamma, \\delta \\in A$:\n\\[\nG(\\alpha,\\beta,\\gamma,\\delta)=\\{x_{{0}}+\\alpha\\,x_{{3}},x_{{1}}^{2}+\\beta\\,x_{{1}}x_{{3}}+\n\\gamma\\,x_{{2}}x_{{3}}+\\delta\\,x_{{3}}^{2}\\}.\n\\]\n\nFor $p(t)=2t+2$, we find two monomial quasi-stable liftings $J^{(1)}=(x_0,x_1^3,x_1^2x_2)$ and $J^{(2)}=(x_0^2,x_0x_1,x_1^2,x_0x_2)$ of $J'$. By direct computation we see that $J^{(1)}$ (resp.~$J^{(2)}$) is the unique ideal in $K[x_0,x_1,x_2]$ such that its saturation is $I'$ and its initial ideal is $J^{(1)}$ (resp.~$J^{(2)}$). \nHence, $\\underline{\\mathrm{L}}_Y^{2t+2}(A)$ consists of the $x_3$-liftings of the ideals $J^{(1)}$ and $J^{(2)}$. We now study them in order to find the locally Cohen-Macaulay liftings of $Y$ with Hilbert polynomial $2t+2$.\n\nEvery $x_3$-lifting of $J^{(1)}$ \ndefines a plane curve with isolated or embedded points and hence is not locally Cohen-Macaulay. 
\n\nEvery $x_3$-lifting of $J^{(2)}$ is generated by polynomials of the following type\n\\begin{equation}\\label{eq:ex75groebner}\n\\begin{array}{c}\nx_0^2+c_1x_0x_3+\\left(c_1c_8-c_8^2\\right)x_3^2,\\quad\nx_0x_1+c_2x_0x_3+c_8x_1x_3+c_2c_8x_3^2,\\\\\nx_1^2+c_3x_0x_3+c_4x_1x_3+c_5x_2x_3+\\left(c_1c_3-c_2^2+c_2c_4-c_3c_8+c_5c_6\\right)x_3^2,\\\\\nx_0x_2+c_6x_0x_3+c_7x_1x_3+c_8x_2x_3+\\left(\\frac{c_4c_7}{2}+c_6c_8\\right)x_3^2\n\\end{array}\n\\end{equation}\nwith $c_1,\\dots,c_8\\in A$ such that $c_5c_7=c_3c_7=2\\,c_2c_7-c_4c_7=c_1c_7-2\\,c_7c_8=0$.\nThese assumptions on the $c_i$'s ensure that the polynomials in \\eqref{eq:ex75groebner} form a Gr\\\"obner basis.\n\nAn $x_3$-lifting of $J^{(2)}$ is contained in the ideal generated by $G(\\alpha, \\beta, \\gamma, \\delta)$, with $\\alpha, \\beta, \\gamma, \\delta \\in A$, if and only if\n\\begin{equation}\\label{eq:ex75nonlCM}\n c_1=c_8=\\alpha,\\quad c_4=\\beta,\\quad c_5=\\gamma,\\quad c_2=c_3=c_6=c_7=0. \n \\end{equation}\nObserve that necessarily $\\delta=0$. 
Then, the conditions \\eqref{eq:ex75nonlCM} characterize the liftings of the double point $Y$ which are not locally Cohen-Macaulay.\n\nSumming up, the locally Cohen-Macaulay liftings of $I'$ with Hilbert polynomial $2t+2$ are $x_3$-liftings of $J^{(2)}$ having Gr\\\"obner basis as in \\eqref{eq:ex75groebner} and such that the $c_i$'s do not satisfy equations~\\eqref{eq:ex75nonlCM}.\n\\end{example}\n\n\n\\section{Background II: marked functor over a truncation ideal}\\label{sec:backgroundMarkedBases}\n\nThe results of Section \\ref{sec:quasi-stable} are preliminary to the proof of the representability of $\\underline{\\mathrm{L}}_Y^{p(t)}$, which is described in Section \\ref{sec:reprLYp} and uses the notion of marked functor. First, we need to recall the notion of Pommaret basis.\n\n\\begin{definition}\\citep{Seiler2009I}\\label{def:Pommaret}\nGiven a term $x^\\alpha$ and $x_j=\\min(x^\\alpha)$, the set $\\mathcal C_P(x^\\alpha):=\\{x^\\delta x^\\alpha \\ \\vert \\ \\delta_i=0 \\ \\forall i<j\\}$ is called the {\\em Pommaret cone} of $x^\\alpha$.\n\\end{definition}\n\n\\begin{algorithm}[t]\n \\caption{Sinkhorn iterations computing the regularized Wasserstein distance $\\widetilde W^2({\\cal X},{\\cal Y})$, where $\\varepsilon>0$ is a small constant.}\\label{algo:sinkhorn}\n \\DontPrintSemicolon\n \\KwData{Point clouds ${\\cal X}=\\{x_i\\}_{i=1}^m$, ${\\cal Y}=\\{y_j\\}_{j=1}^n$}\n \\nl $\\vec{C}_{ij}=\\|x_i-y_j\\|^2 \\qquad 1\\le i\\le m, \\; 1\\le j\\le n$\\;\n \\nl $\\vec{G}\\leftarrow\\exp(-\\vec{C}\/\\varepsilon)$\\tcp*{\\footnotesize{Element-wise exp}}\n \\nl $\\vec{v}\\leftarrow \\onevec{n}$\\tcp*{\\footnotesize{Initialize dual variable}}\n \\nl \\While{Not converged}{\n \\nl $\\vec{u}\\leftarrow \\frac1m\\onevec{m}\/\\vec{G}\\vec{v}$\\tcp*{\\footnotesize{Element-wise division}}\n \\nl $\\vec{v}\\leftarrow \\frac1n\\onevec{n}\/\\vec{G}^\\top\\vec{u}$\n }\n \\nl \\textbf{{return}} \\, {$\\langle\\diag{\\vec{u}}\\vec{G}\\,\\diag{\\vec{v}},\\vec{C}\\rangle$ \\textbf{as} $\\widetilde W^2({\\cal X},{\\cal Y})$}\\;\n\\end{algorithm}\n\n\\paragraph{Remark}\nTo be exact, Algorithm~\\ref{algo:sinkhorn} is an approximation algorithm for \\eqref{eq:wassdef};\nit solves the optimization 
problem with an additional \\emph{entropy regularization} term $H(\\vec{P})$:\n\\begin{align}\n &\n \\min_{\\vec{P}} \\langle \\vec{P},\\vec{C}\\rangle-\\varepsilon H(\\vec{P}), \\quad \\text{subject to} \\; \\vec{P}\\in\\mathcal{U}_{m,n}\n \\label{eq:regwass}\\\\\n &\n \\text{where}\\quad H(\\vec{P}):=-\\sum_{ij}\\vec{P}_{ij}(\\log\\vec{P}_{ij}-1).\\nonumber\n\\end{align}\nWith the optimal solution $\\vec{P}^\\dagger$ of this problem, we define the regularized Wasserstein distance by $\\widetilde W^2({\\cal X},{\\cal Y})=\\langle\\vec{P}^\\dagger,\\vec{C}\\rangle.$\nThis regularization term makes the WCL smooth with respect to its inputs, resulting in stable training.\nAs can be seen, if $\\varepsilon\\to0$, \\eqref{eq:regwass} converges to the original optimization problem \\eqref{eq:wassdef};\nhence, a small $\\varepsilon>0$ gives a good approximation. However, a very small $\\varepsilon$ may cause numerical instability, since it reduces $\\vec{G}$ to an almost zero matrix at Line~2. 
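For concreteness, the iterations of Algorithm~\\ref{algo:sinkhorn} can be transcribed into NumPy as follows (a sketch with uniform weights and a fixed iteration count; the function name is our own):

```python
import numpy as np

def sinkhorn_w2(X, Y, eps=0.01, n_iter=100):
    """Entropy-regularized squared Wasserstein distance between two
    uniformly weighted point clouds, following Algorithm 1."""
    m, n = len(X), len(Y)
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # C_ij = ||x_i - y_j||^2
    G = np.exp(-C / eps)                                # element-wise exp
    v = np.ones(n)                                      # initialize dual variable
    for _ in range(n_iter):                             # fixed stopping condition
        u = (np.ones(m) / m) / (G @ v)                  # element-wise division
        v = (np.ones(n) / n) / (G.T @ u)
    P = u[:, None] * G * v[None, :]                     # diag(u) G diag(v)
    return float((P * C).sum())                         # <P, C>

X = np.array([[0.0, 0.0], [1.0, 0.0]])
Y = np.array([[0.0, 0.0], [1.0, 0.0]])
print(sinkhorn_w2(X, Y))   # approximately 0 for identical clouds
```

With a very small $\varepsilon$, the entries of $G$ underflow to zero, which is exactly the instability mentioned above.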
\nTo improve numerical stability, we may use log-sum-exp at Lines 5--6.\nFurthermore, we can compute Algorithm~\\ref{algo:sinkhorn} in parallel over the batch dimension.\nAt Line~4, we can use any kind of stopping condition; here, we stop after 100 iterations.\nSee \\cite{cuturi2013sinkhorn,peyre2019computational} for details of the regularized Wasserstein distance.\n\n\\section{Experiment}\nWe apply our WCL to prior works and quantitatively and qualitatively evaluate them on a public dataset in comparison with the original methods.\n\n\\subsection{Dataset}\nIn this study, we use the KITTI dataset~\\cite{Geiger2013IJRR}.\nAlthough it is known that a large input image size can lead to better performance~\\cite{pillai2019superdepth}, the main purpose of our experiment is to confirm the advantage over each original baseline method.\nHence, we use a 416$\\times$128 image size, similar to prior works~\\cite{zhou2017unsupervised,mahjourian2018unsupervised,gordon2019depth,godard2019digging}.\nHowever, we conduct an additional evaluation using an image size of 640$\\times$192 for monodepth2 only~\\cite{godard2019digging}, as their default image size is 640$\\times$192 in the public code. As with several prior works, we use two different dataset splits for the evaluation of depth and pose estimation. It should be noted that the models for depth and pose estimation are trained simultaneously on each dataset, but depth and pose are trained and evaluated on separate data.\n\\paragraph{Depth Estimation} We split the KITTI raw dataset via the Eigen split~\\cite{eigen2014depth}. As a result, we have approximately 40,000 frames for training, 4,000 frames for validation, and 697 frames for testing.\nThe test data are chosen from 29 scenes of the KITTI raw dataset. 
\nAlthough the KITTI raw data include stereo images and LIDAR data, we use monocular images as the input data for both training and testing, and use the LIDAR data as the ground truth only for testing.\n\\paragraph{Pose Estimation} Unlike the KITTI raw dataset, the KITTI odometry dataset provides the ground truth of the relative pose between consecutive images for testing.\nHowever, test data in the KITTI odometry dataset are partially included in the training data of the KITTI raw dataset.\nHence, we cannot use the model trained on the KITTI raw dataset for the pose estimation evaluation on the KITTI odometry dataset.\nTherefore, we both train and test on the KITTI odometry dataset, as with the baseline methods~\\cite{zhou2017unsupervised,godard2019digging}.\nAlthough there are 22 sequences in the KITTI odometry dataset, we use 11 sequences, from 00 to 10. \nAfter training models on sequences 00 to 08, we test the model for pose estimation on sequences 09 and 10.\n\n\\subsection{Evaluation of Depth Estimation}\n\\paragraph{Training}\nIn the evaluation of depth estimation, we apply our WCL to four selected baselines~\\cite{zhou2017unsupervised,mahjourian2018unsupervised,gordon2019depth,godard2019digging}.\nFor \\cite{zhou2017unsupervised,godard2019digging}, we add $\\lambda_{w} \\cdot L_{wass}$ to their original cost function, $L_{origin}$, to train the model.\nBy contrast, as the original cost functions of \\cite{mahjourian2018unsupervised,gordon2019depth} include the ICP loss~\\cite{mahjourian2018unsupervised} and the depth consistency loss~\\cite{gordon2019depth}, which also penalize geometric inconsistencies, we remove them for the comparative evaluation with each objective.\nWe train all models by applying the same hyperparameters (e.g., mini-batch size, learning rate, data augmentation, and network structure) and training process (e.g., masking, training length, optimization, selection of input images $I_A$ and $I_B$ in Fig.~\\ref{f:overview}) as in the 
original code, except for the following hyperparameters related to our WCL.\n\nWe set the weighting value $\\lambda_{w}$ for $L_{wass}$ to 7.0, 2.0, 3.0, and 0.5 in \\cite{zhou2017unsupervised}, \\cite{mahjourian2018unsupervised}, \\cite{gordon2019depth}, and \\cite{godard2019digging}, respectively.\n$\\lambda_w$ is a weighting factor that balances $L_{wass}$ against $L_{origin}$; therefore, the relative sizes of these values across methods carry no particular significance.\n$\\varepsilon$ in \\eqref{eq:regwass} is set to 0.001 to keep the approximation error small and to calculate the WCL stably. \nIn addition, we uniformly sample the point clouds at grid points in image coordinates before feeding them into our WCL in \\eqref{eq:Lwass}, owing to GPU memory limitations. \nAn explanation of the sampling method and an ablation study of the hyperparameters are shown in the supplementary materials.\n\n\n\\paragraph{Quantitative Analysis}\nTable~\\ref{tab:ev} displays the seven widely used metrics for the evaluation of depth estimation from monocular camera images. \nAs summarized in Table~\\ref{tab:ev}, the performance of all the baselines is successfully improved on most metrics by adding our WCL.\nThe improvement for \\cite{zhou2017unsupervised} and \\cite{godard2019digging} is about 25$\\%$ and 15$\\%$ on the most improved metric, respectively. 
On the other hand, the improvement is not quite as large for \\cite{mahjourian2018unsupervised} and \\cite{gordon2019depth}, as they also penalize geometric inconsistencies using their own approaches.\nHowever, our method still shows clear margins of about 5$\\%$ on Sq Rel or RMSE against these baselines.\nFurthermore, monodepth2~\\cite{godard2019digging} + WCL successfully obtains the best results, which are considerably better than those of the other related works~\\cite{luo2020consistent,fei2019geo}.\n\n\\begin{table*}[t]\n \\caption{{\\small {\\bf Evaluation of depth estimation by self-supervised mono supervision on an Eigen split of the KITTI dataset.} We display seven metrics computed from the {\\bf estimated depth images less than 80 m}. ``+ WCL'' is the result obtained with our proposed method, in addition to the baseline method presented above. For the leftmost four metrics, smaller is better; for the rightmost three metrics, higher is better. $\\dagger$ and $\\ddagger$ indicate removing the ICP and depth consistency loss from the original cost function, respectively. The method of $\\ddagger$ is evaluated by the author. 
The $\\ast$ method uses the ground truth pose.}\n \n \\begin{center}\n \\resizebox{0.95\\columnwidth}{!}{\n \\label{tab:ev}\n \\begin{tabular}{|lc|c|c|c|c||c|c|c|} \\hline\n method & image size & Abs Rel & Sq Rel & RMSE & RMSE log & $\\delta < 1.25$ & $\\delta < 1.25^2$ & $\\delta < 1.25^3$ \\\\ \\hline\n Yang et al.~\\cite{yang2017unsupervised} & 416x128 & 0.182 & 1.481 & 6.501 & 0.267 & 0.725 & 0.906 & 0.963 \\\\\n LEGO~\\cite{yang2018lego} & 416x128 & 0.162 & 1.352 & 6.276 & 0.252 & 0.783 & 0.921 & 0.969 \\\\\n GeoNet~\\cite{yin2018geonet} & 416x128 & 0.155 & 1.296 & 5.857 & 0.233 & 0.793 & 0.931 & 0.973 \\\\\n Fei et al.~\\cite{fei2019geo} & 416x128 & 0.142 & 1.124 & 5.611 & 0.223 & 0.813 & 0.938 & 0.975 \\\\\n DDVO~\\cite{wang2018learning} & 416x128 & 0.151 & 1.257 & 5.583 & 0.228 & 0.810 & 0.936 & 0.974 \\\\\n Yang et al.~\\cite{yang2018every} & 416x128 & 0.131 & 1.254 & 6.117 & 0.220 & 0.826 & 0.931 & 0.973 \\\\\n Casser et al.~\\cite{casser2019depth} & 416x128 & 0.141 & 1.026 & 5.291 & 0.2153 & 0.8160 & 0.9452 & 0.9791 \\\\\n Luo et al.~\\cite{luo2020consistent} $\\ast$ & 384x112 & 0.130 & 2.086 & 4.876 & 0.205 & 0.878 & 0.946 & 0.970 \\\\ \\hline\n \n Zhou et al.~\\cite{zhou2017unsupervised} & 416x128 & 0.208 & 1.768 & 6.856 & 0.283 & 0.678 & 0.885 & 0.957 \\\\\n \n \n \\bf{+ WCL} & 416x128 & \\bf{0.171} & \\bf{1.316} & \\bf{6.080} & \\bf{0.255} & \\bf{0.755} & \\bf{0.915} & \\bf{0.966} \\\\ \\hdashline\n vid2depth~\\cite{mahjourian2018unsupervised} & 416x128 & \\bf{0.163} & 1.240 & 6.220 & 0.250 & 0.762 & 0.916 & \\bf{0.968} \\\\\n vid2depth~\\cite{mahjourian2018unsupervised} {\\scriptsize wo ICP loss}$\\dagger$ & 416x128 & 0.175 & 1.617 & 6.267 & 0.252 & 0.759 & 0.917 & 0.967 \\\\\n \n \\bf{+ WCL} & 416x128 & 0.165 & \\bf{1.226} & \\bf{5.892} & \\bf{0.246} & \\bf{0.767} & \\bf{0.918} & \\bf{0.968} \\\\ \\hdashline\n Gordon et al.~\\cite{gordon2019depth} & 416x128 & 0.128 & 0.959 & 5.23 & 0.212 & \\bf{0.845} & \\bf{0.947} & 0.976 \\\\\n \n \n Gordon et 
al.~\\cite{gordon2019depth} {\\scriptsize wo depth consis. loss}$\\ddagger$ & 416x128 & 0.129 & 0.945 & \\bf{5.211} & 0.214 & 0.839 & 0.944 & 0.976 \\\\ \n \n \\bf{+ WCL} & 416x128 & \\bf{0.125} & \\bf{0.915} & 5.231 & \\bf{0.210} & 0.844 & \\bf{0.947} & \\bf{0.977} \\\\ \\hdashline\n monodepth2~\\cite{godard2019digging} & 416x128 & 0.128 & 1.087 & 5.171 & 0.204 & 0.855 & \\bf{0.953} & 0.978 \\\\\n \n \\bf{+ WCL} & 416x128 & \\bf{0.123} & \\bf{0.920} & \\bf{4.990} & \\bf{0.201} & \\bf{0.858} & \\bf{0.953} & \\bf{0.980} \\\\ \\hdashline\n monodepth2~\\cite{godard2019digging} & 640x192 & 0.115 & 0.903 & 4.863 & 0.193 & \\bf{0.877} & \\bf{0.959} & \\bf{0.981} \\\\\n \n \\bf{+ WCL} & 640x192 & \\bf{0.114} & \\bf{0.813} & \\bf{4.705} & \\bf{0.191} & 0.874 & \\bf{0.959} & \\bf{0.981} \\\\ \\hline \n \\end{tabular}\n }\n\\end{center}\n \\vspace{-5mm}\n\\end{table*}\n\n\\paragraph{Qualitative Analysis}\nWe show the depth images of \\cite{zhou2017unsupervised,godard2019digging}, with and without our WCL, in Fig.~\\ref{f:pred_depth}.\nThe first row in Fig.~\\ref{f:pred_depth} shows the RGB image as the neural network input, the second and fourth rows are the depth images estimated by \\cite{zhou2017unsupervised} and \\cite{godard2019digging}, and the third and fifth rows are the depth images estimated by \\cite{zhou2017unsupervised} + WCL and \\cite{godard2019digging} + WCL, respectively.\nIn the estimated depth images, we draw a white rectangle to highlight the advantage of our method.\nOur method reduces artifacts and sharpens the estimation.\nDepth images for additional cases, estimated by the methods in Table~\\ref{tab:ev}, are shown in the supplementary material.\n\n\\begin{figure*}[t]\n \\begin{center}\n \n \\includegraphics[width=0.99\\hsize]{fig\/fig_depth_corl.pdf}\n \\end{center}\n \n\t\\caption{\\small {\\bf Qualitative results of our proposed method.} The top row displays the input images for the trained neural network to estimate depth 
images and the other rows display the estimated depth images, with and without the WCL. The white rectangle in each depth image highlights the advantages of the WCL.}\n \\label{f:pred_depth}\n \n\\end{figure*}\n\\subsection{Evaluation of Pose Estimation}\nFor the evaluation of pose estimation, we select two prior works~\\cite{zhou2017unsupervised,godard2019digging} to which we apply our WCL.\nWe could not evaluate \\cite{mahjourian2018unsupervised,gordon2019depth} for pose estimation, because the hyperparameters and procedures for pose estimation and evaluation are only partially specified in their released code.\nFor \\cite{zhou2017unsupervised,godard2019digging}, we train the models using the KITTI odometry dataset, with the same training conditions as for depth estimation.\n\nTable~\\ref{tab:ego} displays the absolute trajectory error (ATE) in meters for sequences 09 and 10 of the KITTI odometry dataset.\nWe show the mean and standard deviation of ATE over all overlapping 5-frame snippets.\nBy applying our WCL, the pose estimation accuracy of both prior studies~\\cite{zhou2017unsupervised,godard2019digging} is improved.\nAlthough the performance is slightly worse than that of ``ORB-SLAM (full)'', i.e., the original ORB-SLAM with loop closure using the whole image sequence, the methods with our WCL outperform ``ORB-SLAM (short)'', which takes 5 consecutive images as input, like our method.\n\n\\begin{table}[t]\n \\caption{{\\small {\\bf Evaluation of pose estimation on sequences 09 and 10 of the KITTI odometry dataset.} Results show the mean and standard deviation of the absolute trajectory error in meters. ``frame number'' is the number of input images used for pose estimation. 
The methods with our WCL outperform their original counterparts.}}\n \n \\begin{center}\n \\resizebox{0.75\\columnwidth}{!}{\n \\label{tab:ego}\n \\begin{tabular}{|lc|c|c|c|} \\hline\n method & image size & \\hspace{5mm} sequence 09 \\hspace{5mm} & \\hspace{5mm} sequence 10 \\hspace{5mm} & frame number \\\\ \\hline\n ORB-SLAM (full) & full size & 0.014 $\\pm$ 0.008 & 0.012 $\\pm$ 0.011 & - \\\\\n ORB-SLAM (short) & full size & 0.064 $\\pm$ 0.141 & 0.064 $\\pm$ 0.130 & - \\\\ \\hline\n \n \n Zhou et al.~\\cite{zhou2017unsupervised} & 416x128 & 0.021 $\\pm$ 0.017 & 0.020 $\\pm$ 0.015 & 5 \\\\\n \\bf{+ WCL} & 416x128 & \\bf{0.016} $\\pm$ \\bf{0.011} & \\bf{0.013} $\\pm$ \\bf{0.009} & 5 \\\\ \\hdashline\n monodepth2~\\cite{godard2019digging} & 416x128 & 0.017 $\\pm$ \\bf{0.009} & 0.015 $\\pm$ \\bf{0.010} & 2\\\\\n \\bf{+ WCL} & 416x128 & \\bf{0.016} $\\pm$ \\bf{0.009} & \\bf{0.014} $\\pm$ \\bf{0.010} & 2 \\\\ \\hdashline\n monodepth2~\\cite{godard2019digging} & 640x192 & 0.017 $\\pm$ \\bf{0.008} & 0.015 $\\pm$ \\bf{0.010} & 2 \\\\\n \\bf{+ WCL} & 640x192 & \\bf{0.016} $\\pm$ \\bf{0.008} & \\bf{0.014} $\\pm$ \\bf{0.010} & 2\\\\ \\hline \n \\end{tabular}\n }\n\\end{center}\n \\vspace{-0.5em}\n\\end{table}\n\n\\section{Conclusion}\nIn this paper, we proposed a novel WCL that penalizes geometric inconsistencies for depth and pose estimation.\nOur proposed approach employed the Wasserstein distance to measure the consistency between two point clouds from different frames.\nOur WCL is a smooth and symmetric objective, which can suitably measure geometric consistency without using any other external and\/or non-differentiable libraries.\nTherefore, the neural network can be effectively and efficiently trained to obtain highly accurate depth estimation.\nIn the experiment, we applied our proposed WCL to several state-of-the-art baselines and confirmed the benefits of our method by clear margins.\n\nThere are two remaining issues to be addressed in the future.\nFirst, the study of 
memory-efficient methods for measuring geometric inconsistency is needed.\nAs shown in the supplementary material, the performance of our WCL is restricted by GPU memory limitations.\nSecond, occlusion must be handled when calculating the Wasserstein distance.\nRelaxing the coupling constraints and\/or masking occluded point clouds will be required for more accurate estimation.\nOur WCL has the potential to address this issue, because it can be computed for pairs of point clouds with different numbers of points, i.e., $m \\neq n$.\n\n\\clearpage\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\n\\section{Introduction}\n\\label{intro}\n\nSeveral theorem provers\n(e.g.\\ Coq, Isabelle, PVS)\ninclude facilities to generate code in various programming languages\n(e.g.\\ C, Clean, Haskell, Lisp, OCaml, Scala, Scheme, SML)\nfrom executable subsets of the provers' logical languages.\nThat way, code written in a prover's language,\npossibly verified to satisfy properties of interest,\ncan be run as code in a conventional programming language;\nverifying the correctness of\nthe generated code with respect to the prover's code\nis a separable problem, akin to compiler verification.\nWhen carrying out\nformal program synthesis by stepwise refinement\n\\cite{bmethod,zed,vdm,specware-www}\nin a prover\nto derive an implementation from a high-level specification,\na code generator can translate\nthe low-level (i.e.\\ fully refined) specification\nto the final program.\n\nACL2's tight integration with the underlying Lisp platform\nmay obviate the need for code generation,\nbecause executable ACL2 code can run efficiently as Lisp.\nAn APT program derivation \\cite{apt-www,apt-simplify,apt-isodata}\nmay end with an implementation in executable ACL2 that runs as Lisp.\nHowever, some applications require code in other languages,\nsuch as C for embedded systems or device drivers.\nTo synthesize this kind of code via an APT derivation,\na code generator for ACL2 can be used,\nsuch as ATJ 
\\cite{atj-deep,atj-shallow} \\citeman{JAVA____ATJ}{java::atj}\n(for Java).\n\nA typical code generator for a prover can be viewed as\na reification of a shallow embedding of\nthe prover's (executable) language in the programming language:\nconstructs of the prover's language\nare rendered as suitably equivalent constructs of the programming language.\nThe translation is centered on, and driven by, the prover's language,\nwhich is mimicked in the programming language.\nTherefore, unless the prover's language and the programming languages\nare sufficiently similar,\nthe generated code may not be very efficient (in time and space) and idiomatic;\nthis could be an issue for the aforementioned example applications.\n\nThis paper proposes an approach\nto turn the focus on the generated code,\nand to exert direct control over it,\nby flipping the direction of the embedding:\n(i) a shallow embedding of the programming language in the prover's language\ndefines representations of program constructs in the prover; and\n(ii) a code generator recognizes these representations\nand translates them to the represented code.\nThis is \\emph{code generation by inverse shallow embedding}:\nthe embedding is a (not necessarily reified) translation\nfrom the programming language to the prover's language,\nand the code generator is the (reified) inverse translation.\nIn contrast, \\emph{code generation by direct shallow embedding}\nis the more typical approach where\nthe code generator translates the prover's language to the programming language,\nthe translation being a shallow embedding of the former in the latter.\n\nThe program constructs shallowly embedded in the prover\nare oriented towards the programming language,\nand thus may not be idiomatic formulations in the prover's language.\nThis new code generation approach is designed\nfor program synthesis by stepwise refinement,\nwhere the final refinement steps turn idiomatic code in the prover's language\ninto provably equivalent 
shallowly embedded program code.\nThese final refinement steps, carried out under user guidance,\nafford fine-grained control on the exact form of the final program.\n\nThis new code generation approach is realized\nin ATC (\\textbf{A}CL2 \\textbf{T}o \\textbf{C}) \\citeman{C____ATC}{c::atc},\nthe C code generator for ACL2 described in this paper.\nATC is designed for use with the aforementioned APT:\nthe final steps of an APT derivation turn ACL2 code\ninto C code shallowly embedded in ACL2,\nwhich ATC turns into actual C code.\nThese final derivation steps take place within the ACL2 language,\nand their verification involves only the ACL2 language;\nthese steps may be carried out via proof-generating APT transformations,\nincluding ones tailored to ATC\nthat have been and are being developed at Kestrel.\nThese APT transformations are not discussed further in this paper,\nwhich concentrates on ATC.\n\nBesides the C code, ATC also generates ACL2 proofs\nof the correctness of the C code with respect to the ACL2 code.\nThe proofs are based on a formalization of the needed subset of C in ACL2.\nTogether with the ACL2-to-ACL2 proofs in an APT derivation,\nthe ACL2-to-C proof generated by ATC\nprovides an end-to-end correctness proof of\nthe synthesized C code that ends the derivation\nwith respect to the high-level specification that starts the derivation.\n\nATC supports a limited subset of C18 \\cite{c18},\nwhich includes\ninteger types,\noperations and conversions on them,\ninteger arrays with read and (destructive) write access,\nlocal variable declarations and assignments,\nloops of certain forms,\nconditional expressions and statements,\nand functions that may affect arrays.\nDespite its limitations, this set suffices for some interesting programs.\nATC has been and is being used in a growing number of applications at Kestrel,\nsupporting the viability of the approach.\n\n\\secref{embeddings} offers a perhaps original perspective on\nhow different kinds of 
language embedding correspond to\ndifferent code generation approaches.\n\\secref{shallow-codegen} describes\nthe shallow embedding of C in ACL2 and how ATC uses it to generate C code.\n\\secref{deep-proofgen} describes the formalization (i.e.\\ deep embedding)\nof C in ACL2\nand how ATC uses it to generate proofs.\n\\secref{future} discusses future work.\n\\secref{related} surveys related work.\nIn this paper, `fixtype' refers to a type\ndefined using the FTY library \\citeman{ACL2____FTY}{fty}.\n\n\n\\section{Perspective on Language Embedding and Code Generation}\n\\label{embeddings}\n\nAn \\emph{embedding} of a language $\\mathcal{L}$ in a language $\\mathcal{L}'$\nis a representation of $\\mathcal{L}$ in $\\mathcal{L}'$.\nIn a \\emph{deep} embedding,\nthe syntax of $\\mathcal{L}$ is represented\nvia $\\mathcal{L}'$ values that capture $\\mathcal{L}$ constructs,\nand the semantics of $\\mathcal{L}$ is represented\nvia $\\mathcal{L}'$ operations over those values;\nthe representation is explicit.\nIn a \\emph{shallow} embedding,\nthe syntax of $\\mathcal{L}$ is represented\nvia a translation of $\\mathcal{L}$ constructs to $\\mathcal{L}'$ constructs,\nand the semantics of $\\mathcal{L}$ is represented\nvia the $\\mathcal{L}'$ semantics of\nthe $\\mathcal{L}'$ counterparts of the $\\mathcal{L}$ constructs;\nthe representation is implicit.%\n\\footnote{Although representations are conceivable\nthat blur the line between deep and shallow embedding,\nthe distinction remains generally useful,\nand is commonly found in the literature\nand used in technical discourse.}\nAn embedding may apply to a subset of $\\mathcal{L}$.\nThe features of an embedding depend on the nature of $\\mathcal{L}$ and $\\mathcal{L}'$,\nin particular whether each one is a logical language or a programming language.\n\nGiven a logical language $\\mathcal{A}$ (e.g.\\ ACL2)\nand a programming language $\\mathcal{P}$ (e.g.\\ C or Java),\nthere are four possible kinds of embedding, based on 
direction and depth:\n(i) shallow embedding of $\\mathcal{A}$ in $\\mathcal{P}$;\n(ii) deep embedding of $\\mathcal{A}$ in $\\mathcal{P}$;\n(iii) shallow embedding of $\\mathcal{P}$ in $\\mathcal{A}$; and\n(iv) deep embedding of $\\mathcal{P}$ in $\\mathcal{A}$.\nEach kind corresponds to a different approach\nto generate $\\mathcal{P}$ code from $\\mathcal{A}$,\nparticularly for the first three kinds,\nwith the fourth kind being a little different.\nIn addition, the fourth kind plays a role in\nestablishing the correctness of the generated code for all four approaches.\n\n\n\\subsection{Direct Shallow Embedding}\n\\label{direct-shallow}\n\nA shallow embedding of $\\mathcal{A}$ in $\\mathcal{P}$\nis a translation of constructs in (an executable subset of) $\\mathcal{A}$\nto suitably equivalent constructs in $\\mathcal{P}$.\nThe translation can be used to generate $\\mathcal{P}$ from $\\mathcal{A}$.\nThe code generator reifies the embedding;\nembedding and code generation go in the same direction---%\nhence `direct'.\n\nThis is a typical code generation approach.\nThe constructs of $\\mathcal{A}$ are rendered in $\\mathcal{P}$:\nthe data types of $\\mathcal{A}$ are mapped to data types of $\\mathcal{P}$,\nthe operations of $\\mathcal{A}$ are mapped to operations of $\\mathcal{P}$,\nand so on.\nIf $\\mathcal{P}$ lacks suitable built-in counterparts\nfor certain $\\mathcal{A}$ constructs,\nsuch counterparts are defined in $\\mathcal{P}$ (e.g.\\ as libraries)\nand used as code generation targets.\n\nThis translation is centered on, and driven by, $\\mathcal{A}$,\nwhich is mimicked in $\\mathcal{P}$.\nUnless $\\mathcal{P}$ is sufficiently similar to $\\mathcal{A}$,\nthe generated code may look more like ``$\\mathcal{A}$ written in $\\mathcal{P}$''\nthan like idiomatic $\\mathcal{P}$;\nit may not be as efficient (in time and space)\nas the original $\\mathcal{A}$ code\nor as a handcrafted port to $\\mathcal{P}$.\nIn some applications,\nthe readability of the generated 
code may be unimportant,\nand its efficiency adequate;\nbut other applications, e.g.\\ for resource-constrained systems,\nmay have more stringent requirements.\n\nA realization of this code generation approach\nis ATJ in shallow embedding mode \\citeman{JAVA____ATJ}{java::atj},\nwhich generates Java code from ACL2.\nIt relies on AIJ \\citeman{JAVA____AIJ}{java::aij},\nwhich implements ACL2 data types and operations in Java.\nACL2 functions are translated to Java static methods,\norganized in Java classes corresponding to the ACL2 packages.\nACL2 \\code{let} bindings are translated to\nJava local variable declarations and assignments.\nAnd so on; see the references for more details.\n\n\n\\subsection{Direct Deep Embedding}\n\\label{direct-deep}\n\nA deep embedding of $\\mathcal{A}$ in $\\mathcal{P}$ is\nan interpreter of (an executable subset of) $\\mathcal{A}$ written in $\\mathcal{P}$.\nThis enables a simple code generation approach:\nthe code generator translates $\\mathcal{A}$ constructs\nto their deeply embedded counterparts in $\\mathcal{P}$,\nwhich the interpreter executes in $\\mathcal{P}$,\npossibly via a thin wrapper produced by the code generator.\nThe interpreter reifies the embedding;\nembedding and code generator go in the same direction---%\nhence `direct'.\n\nThis is an unconventional code generation approach,\nbut is a conceptually simple way to run $\\mathcal{A}$ in $\\mathcal{P}$.\nThe interpreter includes representations of\nthe abstract syntax of $\\mathcal{A}$,\nthe values and computation states of $\\mathcal{A}$,\nthe basic operations of $\\mathcal{A}$,\nand so on.\nThe translation carried out by the code generator is straightforward.\nThe $\\mathcal{P}$ code resulting from this code generation approach\nis even less idiomatic and efficient than discussed in \\secref{direct-shallow},\nbut it may be adequate for certain applications;\ndue to its simplicity, it is also fairly high-assurance\n(relevant if formal proofs are absent).\n\nA 
realization of this code generation approach\nis ATJ in deep embedding mode \\citeman{JAVA____ATJ}{java::atj},\nwhich generates Java code from ACL2.\nThe interpreter is AIJ \\citeman{JAVA____AIJ}{java::aij},\npart of which is the Java implementation of the ACL2 data types and operations\nmentioned in \\secref{direct-shallow}.\n\nThis code generation approach could become more interesting\nby accompanying it with partial evaluation.\nAccording to the first Futamura projection \\cite{parteval},\npartially evaluating an interpreter with respect to a program\namounts to compiling the program\nto the language that the interpreter is written in.\nThus, a partial evaluator for $\\mathcal{P}$ can be used to\npartially evaluate the $\\mathcal{A}$ interpreter (written in $\\mathcal{P}$)\nwith respect to (the $\\mathcal{P}$ representation of) the $\\mathcal{A}$ code.\nFor the aforementioned ATJ,\na partial evaluator for Java would be needed;\nthe partial evaluator may be written in any language, including ACL2.\n\n\n\\subsection{Inverse Shallow Embedding}\n\\label{inverse-shallow}\n\nA shallow embedding of $\\mathcal{P}$ in $\\mathcal{A}$\nis a translation of constructs in (a subset of) $\\mathcal{P}$\nto suitably equivalent constructs in $\\mathcal{A}$.\nThe inverse of this translation can be used\nto generate $\\mathcal{P}$ from $\\mathcal{A}$,\nrecognizing the $\\mathcal{P}$ constructs shallowly embedded in $\\mathcal{A}$\n(i.e.\\ recognizing the image of the embedding)\nand turning them into the actual $\\mathcal{P}$ constructs.\nThe code generator reifies the inverse of the embedding,\nwhile the embedding does not have to be reified;\nembedding and code generator go in opposite directions---%\nhence `inverse'.\n\nThis appears to be a novel code generation approach.\nIt is centered on, and driven by, $\\mathcal{P}$ rather than $\\mathcal{A}$:\nit enables users to specify the generated code exactly,\nmaking it as idiomatic and efficient as desired,\nand fit for the more 
demanding applications\nmentioned in \\secref{direct-shallow}.\n\nWhile the generated $\\mathcal{P}$ code can be idiomatic,\nits $\\mathcal{A}$ representation may look\nmore like ``$\\mathcal{P}$ written in $\\mathcal{A}$''\nthan like idiomatic $\\mathcal{A}$ code,\nand may be burdensome to write directly.\nThus, this code generation approach is designed\nfor program synthesis by stepwise refinement,\nwhere the final refinement steps turn idiomatic $\\mathcal{A}$\ninto the form required by the code generator,\ni.e.\\ into something in the image of the shallow embedding.\nThese refinement steps, carried out under user guidance,\nafford fine-grained control on the exact form of the final program.\nSince the code generator merely turns\n$\\mathcal{P}$ code shallowly embedded in $\\mathcal{A}$ into actual $\\mathcal{P}$ code,\nthe readability and the efficiency (in space and time) of the code\ndo not depend on the code generator:\nthey are the responsibility of the aforementioned final refinement steps;\nshifting this responsibility from the code generator\nto the stepwise refinement derivation\nis a deliberate and distinctive aspect of this code generation approach.\n\nA realization of this code generation approach is ATC, described in this paper,\nwhich generates C code from ACL2.\nATC is designed for use with APT,\nwhose transformations (some tailored to ATC) can be used\nto turn idiomatic ACL2 code into the form required by ATC.\n\n\n\\subsection{Inverse Deep Embedding}\n\\label{inverse-deep}\n\nA deep embedding of $\\mathcal{P}$ in $\\mathcal{A}$ is\na formalization of (a subset of) $\\mathcal{P}$ in $\\mathcal{A}$.\nThus, it is the basis for formal correctness proofs of\n$\\mathcal{P}$ code generated from $\\mathcal{A}$\nvia any of the three approaches described above,\nplaying a role alongside the three other kinds of embedding.\nThe formalization reifies the deep embedding.\n\nFurthermore, this kind of embedding can be used\nin a pop-refinement derivation 
\\cite{popref}, where:\n(i) the initial specification is\na predicate over $\\mathcal{P}$ programs\nthat characterizes the possible implementations; and\n(ii) the final specification is\na singleton predicate that selects one $\\mathcal{P}$ implementation\nin explicit syntactic form, from which the actual code is readily obtained.\nAll the specifications in the derivation, from initial to final,\nare written in $\\mathcal{A}$,\nbut their formulation requires a formalization of $\\mathcal{P}$,\ni.e.\\ the deep embedding.\nThere is no code generator as such,\nother than a simple obtainment of the implementation\nfrom the final specification predicate where it is deeply embedded.\nEmbedding and refinement to code go in opposite directions---%\nhence `inverse'.\n\nFor this code generation approach to be practical,\ntechniques must be developed to\nmake deeply embedded $\\mathcal{P}$ constructs ``emerge'' within $\\mathcal{A}$\nvia suitable refinement steps.\nAn idea worth pursuing is whether\nthe kind of code generator described in \\secref{inverse-shallow}\ncould be realized as one or more such refinement steps,\nto automatically turn shallowly embedded $\\mathcal{P}$ constructs\ninto deeply embedded ones.\n\nThe proofs generated by ATC are based on a formalization of C in ACL2,\nwhich is a deep embedding of C in ACL2.\nATC uses the code generation approach described in \\secref{inverse-shallow},\nnot the pop-refinement approach sketched above.\n\n\n\\section{C Shallow Embedding and Code Generation}\n\\label{shallow-codegen}\n\nATC generates C code from ACL2\naccording to the inverse shallow embedding approach\nexplained in \\secref{inverse-shallow}.\nATC relies on the definition of a shallow embedding of C in ACL2,\nwhose image ATC recognizes in ACL2 code and translates to actual C code.\n\n\n\\subsection{Shallow Embedding of C in ACL2}\n\\label{shallow}\n\nThe shallow embedding of C in ACL2 relied on by ATC consists of\n(i) an ACL2 model of C integers and arrays 
and\n(ii) a definition of how C expressions, statements, and functions\nare represented as ACL2 terms and functions.\nThe latter is embodied in the ATC user documentation,\nwhich spells out the representation in an almost formal way.\nThere is currently no implemented translation from C to ACL2.\n\n\n\\subsubsection{Integers}\n\nThe model includes all the C18 standard integer types except \\code{\\_Bool},\nnamely\n\\code{char}, \\code{short}, \\code{int}, \\code{long}, and \\code{long} \\code{long},\nboth \\code{signed} (the default) and \\code{unsigned}\n(but not the plain \\code{char} type);\n10 types in total.\nThe model includes\nunary integer operations\n(\\code{+}, \\code{-}, \\verb|~|, \\code{!}; 4 in total),\nbinary integer operations\n(\\code{+}, \\code{-}, \\code{*}, \\code{\/}, \\code{\\%},\n\\code{\\&}, \\code{|}, \\code{\\^}, \\code{<{}<}, \\code{>{}>},\n\\code{<}, \\code{>}, \\code{<=}, \\code{>=}, \\code{==}, \\code{!=};\n16 in total),%\n\\footnote{The non-strict operations \\code{\\&\\&} and \\code{||}\nare represented differently, as described later.}\nand integer type conversions (90 in total).\n\nThe exact format of the C integer types is implementation-dependent.\nThe model assumes two's complement without padding bits (as prevalent),\nand has ACL2 nullary functions for the sizes of the different types;\nthese nullary functions are referenced in other parts of the model\nto avoid hardwiring assumptions on sizes,\nwhich vary across popular C implementations.\nThese nullary functions are currently defined (to common values),\nbut they are planned to be made constrained instead\n(see \\secref{future}).\nThe C integer values are represented as ACL2 integers in appropriate ranges\n(defined via the nullary functions),\ntagged with an indication of their type.\nThe model has a fixtype for each C integer type.\n\nThe C integer operations and conversions are modeled as ACL2 functions,\nwhose guards capture the conditions under which\nthe results are 
well-defined in C18.\nFor example, \\code{int} addition is modeled as follows:\n\\begin{bcode}\n(defun add-sint-sint (x y) \\ccode{; addition of signed int and signed int}\n (declare (xargs :guard (and (sintp x) (sintp y) (add-sint-sint-okp x y))))\n (sint (+ (sint->get x) (sint->get y))))\n(defun add-sint-sint-okp (x y) \\ccode{; well-definedness condition}\n (declare (xargs :guard (and (sintp x) (sintp y))))\n (sint-integerp (+ (sint->get x) (sint->get y))))\n\\end{bcode}\nTwo \\code{int} values, recognized by \\code{sintp},\nare added by taking the underlying ACL2 integers via \\code{sint->get},\nadding them via ACL2's addition \\code{+},\nand turning the result into an \\code{int} value via \\code{sint},\nprovided that the result is representable as an \\code{int},\nwhose range is recognized by \\code{sint-integerp}.%\n\\footnote{In C18,\nsigned integer arithmetic operations are well-defined only if\nthe true result of the operation can be represented\nin the type of the result of the operation\n(which is determined from the types of the operands).\nC implementations may extend this well-definedness,\ne.g.\\ as two's complement wrap-around.}\n\nC unary and binary integer operations apply to any combination of operand types\nand have fairly complex rules about operand type conversions and result types;\nthe rules are not uniform across the different types.\nThe model has versions of the integer operations\nfor all possible combinations of operand types,\nwhose definitions capture those rules.\nThere are thousands of such functions,%\n\\footnote{(4 unary operations $\\times$ 10 types)\n$+$ (16 binary operations $\\times$ 100 type combinations)\n$=$ 1,640 functions.}\ngenerated via macros that exploit their available uniformity.\n\n\n\\subsubsection{Arrays}\n\nThe model includes monodimensional arrays of the above integer types,\nwith read and write access.\n\nA C array is modeled as a non-empty list of C integer values,\ntagged with an indication of its 
type.\nThe model has a fixtype for each C array integer type.\n\nRead and write access is modeled via ACL2 functions to read and write arrays.\nThe read functions return an array element from an array and an index;\nthe write functions return a new array\nfrom an old array, an index, and a value for the element.\nThese functions have guards requiring the index to be within the array bounds.\nSince an array may be indexed with any integer type in C,\nthere are versions of these functions\nfor all combinations of array and index types.\nThere are hundreds of such functions,%\n\\footnote{(2 read or write operations)\n$\\times$ (10 array types)\n$\\times$ (10 index types)\n$=$ 200 functions.}\ngenerated via macros that exploit their uniformity.\n\nArrays are not first-class entities in C,\nbut mere juxtapositions of their elements.\nAlthough the model just described\nappears to treat arrays as first-class entities,\ntheir use in the shallowly embedded C code\nis restricted in ways that make this modeling adequate;\nsee also \\secref{deep}.\n\n\n\\subsubsection{Expressions, Statements, and Functions}\n\nC functions are represented as ACL2 functions\nthat operate on (the ACL2 model of) C values.\nConsider the following C function:\n\\begin{bcode}\nint f(int x, int y, int z) \\{\n return (x + y) * (z - 3);\n\\}\n\\end{bcode}\nThis is represented as follows in ACL2\n(where the ellipsis in the guard is explained later):\n\\begin{bcode}\n(defun |f| (|x| |y| |z|)\n (declare (xargs :guard (and (sintp |x|) (sintp |y|) (sintp |z|) ...)))\n (mul-sint-sint (add-sint-sint |x| |y|)\n (sub-sint-sint |z| (sint-dec-const 3))))\n\\end{bcode}\nThe reason for the vertical bars around the ACL2 symbols is that\na C identifier is represented as\nan ACL2 symbol whose \\code{symbol-name} is the identifier;\nif \\code{f} were used as the ACL2 function name instead of \\code{|f|},\nit would represent a C function named \\code{F}.\nThe correspondence between the body of \\code{|f|}\nand the 
return expression of \\code{f} is clear:\nthe ACL2 functions that model arithmetic operations\nhave been explained earlier;\nthe \\code{int} decimal (i.e.\\ in base 10) constant \\code{3}\nis represented via \\code{sint-dec-const}, which is part of the model.\nThe input and output types of \\code{f} are represented by\nthe corresponding guard conjuncts of \\code{|f|}\nand the fact that the body of \\code{|f|}\nreturns \\code{sintp} (via \\code{mul-sint-sint});\nsee also \\secref{codegen}.\n\nBesides C expressions as exemplified above,\nACL2 terms of certain forms represent C statements,\nwhich may contain blocks with local variable declarations.\nConsider the following C function:\n\\begin{bcode}\nunsigned int g(unsigned int x, unsigned int y) \\{\n unsigned int z = 1U;\n if (x < y) \\{ z = z + x; \\} else \\{ z = z + y; \\}\n return 2U * z;\n\\}\n\\end{bcode}\nThis is represented as follows in ACL2:\n\\begin{bcode}\n(defun |g| (|x| |y|)\n (declare (xargs :guard (and (uintp |x|) (uintp |y|))))\n (let ((|z| (declar (uint-dec-const 1))))\n (let ((|z| (if (boolean-from-sint (lt-uint-uint |x| |y|))\n (let ((|z| (assign (add-uint-uint |z| |x|))))\n |z|)\n (let ((|z| (assign (add-uint-uint |z| |y|))))\n |z|))))\n (mul-uint-uint (uint-dec-const 2) |z|))))\n\\end{bcode}\nLocal variable declarations and assignments\nare both represented via \\code{let} bindings;\nthe two cases are distinguished by\nthe \\code{declar} and \\code{assign} wrappers,\nwhich are defined as identity functions.\nStatements, like the \\code{if} statement above,\nthat affect variables and are followed by more code\nare represented via \\code{let} bindings without any wrappers;\nboth branches of the \\code{if} term in ACL2\nmust end with (the latest values of) the affected variables.\nIf multiple variables were affected,\n\\code{mv-let} would be used to bind them to the \\code{if} term,\nwhose branches would have to end with an \\code{mv} of the variables.\nSince ACL2 is functional, variable updates 
are explicated.\nFunction \\code{boolean-from-sint} converts\nthe \\code{int} returned by the less-than operation to a boolean,\nas needed for \\code{if} tests in ACL2;\nthe conversion is not reflected in the C code,\nwhich uses scalars (including integers) in \\code{if} tests.\n\nC loops are represented as tail-recursive ACL2 functions.\nConsider the following modular factorial:\n\\begin{bcode}\nunsigned int h(unsigned int n) \\{\n unsigned int r = 1U;\n while (n != 0U) \\{ r = r * n; n = n - 1U; \\}\n return r;\n\\}\n\\end{bcode}\nThis is represented as follows in ACL2:\n\\begin{bcode}\n(defun |h\\$loop| (|n| |r|) \\ccode{; representation of the loop of h}\n (declare (xargs :guard (and (uintp |n|) (uintp |r|))))\n (if (boolean-from-sint (ne-uint-uint |n| (uint-dec-const 0)))\n (let* ((|r| (assign (mul-uint-uint |r| |n|)))\n (|n| (assign (sub-uint-uint |n| (uint-dec-const 1)))))\n (|h\\$loop| |n| |r|))\n (mv |n| |r|)))\n(defun |h| (|n|) \\ccode{; representation of the function h}\n (declare (xargs :guard (uintp |n|)))\n (let ((|r| (declar (uint-dec-const 1))))\n (mv-let (|n| |r|)\n (|h\\$loop| |n| |r|)\n (declare (ignore |n|)) \\ccode{; because n is not used after the loop}\n |r|)))\n\\end{bcode}\nSince the loop function does not represent a C function,\nits name does not have to be a legal C identifier.\nIn \\code{|h|},\nthe two variables are \\code{mv-let}-bound to the loop function call,\nbecause the loop affects both.\nLoop functions may have a more elaborate structure than above,\nbut every control path must end\neither with a recursive call on the formal parameters\n(which must therefore be updated before the call)\nor with (the subset of) the formal parameters affected by the loop.\n\nC arrays are passed around as wholes in functional ACL2,\nbut they are passed around as pointers in C.\nThe ACL2 representation is correct so long as the arrays are treated in\na stobj-like \\citeman{ACL2____STOBJ}{stobj} single-threaded way,\nwhich is required in this 
shallowly embedded representation.\nConsider the following C function:\n\\begin{bcode}\nvoid i(unsigned char *a, int x, int y) \\{\n a[x] = (unsigned char) 1;\n a[y] = (unsigned char) 2;\n\\}\n\\end{bcode}\nThis is represented as follows in ACL2\n(where the ellipsis in the guard is explained later):\n\\begin{bcode}\n(defun |i| (|a| |x| |y|)\n (declare (xargs :guard (and (uchar-arrayp |a|) (sintp |x|) (sintp |y|) ...)))\n (let* ((|a| (uchar-array-write-sint |a| |x| (uchar-from-sint (sint-dec-const 1))))\n (|a| (uchar-array-write-sint |a| |y| (uchar-from-sint (sint-dec-const 2)))))\n |a|))\n\\end{bcode}\nArray writes are represented as \\code{let} bindings\nof the array variables to calls of the array write functions.\nThe C function returns nothing (i.e.\\ \\code{void}),\nbut since it affects the array,\nthe ACL2 function returns the updated array.\nIf multiple arrays were updated, they would all be returned via \\code{mv},\nwhile the C function would still return nothing.\nIf the C function returned a result besides affecting arrays,\nthe ACL2 function would return the result and the updated arrays via \\code{mv}.\nArrays may be updated in loops;\nin general, ACL2 loop functions return affected variables and arrays,\nwhile ACL2 non-loop functions return affected arrays and optionally a result.\n\nThe non-strict C operations \\code{\\&\\&} and \\code{||}\nare represented via ACL2's \\code{and} and \\code{or} macros,\nwhich are non-strict because they are defined in terms of \\code{if}.\nRepresenting \\code{\\&\\&} and \\code{||} via strict ACL2 functions\nwould require the evaluation of their second operand to be well-defined\n(i.e.\\ require its guards in ACL2 to be verified)\nregardless of the value of the first operand,\nwhich would be too restrictive,\npreventing many valid and useful C programs from being represented.\nFunctions like \\code{boolean-from-sint} shown earlier,\nas well as inverses like \\code{sint-from-boolean}\nthat are part of the shallow 
embedding,\nare used to convert between ACL2 booleans and C integers\nin the representation of C expressions involving \\code{\\&\\&} and \\code{||}.\n\nThe above is just an overview of the supported representations of C in ACL2.\nThe ATC reference documentation \\citeman{C____ATC}{c::atc}\nspells out the supported representations in full detail.\n\n\n\\subsection{C Code Generation}\n\\label{codegen}\n\nATC is invoked on a list of ACL2 functions\nthat represent C functions and C loops.\nIf an ACL2 function is not recursive, it represents a C function;\nif it is recursive, it represents a C loop.\nThe ACL2 functions must be listed in bottom-up order\n(i.e.\\ each function may call the preceding ones or itself,\nbut not the subsequent ones);\nthe C functions are generated in the same order,\ncurrently in a single \\code{.c} file.\nThe above requirements imply that\nrecursive C functions cannot be generated currently.\n\nATC requires all the ACL2 functions\nto be defined, in logic mode, and guard-verified.\nThis is critical for proof generation (see \\secref{proofgen}).\nThe fact that the loop functions are in logic mode means that\nthe loops must terminate under the guards,\nwhich may be added to the termination conditions via \\code{mbt},\nwhich ATC ignores as far as the representation of C code goes.\nThe fact that the functions are guard-verified means that\nall arrays are always accessed within their bounds\n(avoiding well-known problems in C),\nand that all arithmetic operations always have well-defined results\naccording to C18.\nThe ellipses in the guards of some of the examples in \\secref{shallow}\ninclude conditions needed for guard verification,\nnamely that the parameters of \\code{|f|} are in certain ranges\nand that the index parameters of \\code{|i|}\nare non-negative and less than the array's length.%\n\\footnote{No additional conditions are needed\nin the guards of \\code{|g|} and \\code{|h|} (and \\code{|h\\$loop|}),\nbecause unsigned arithmetic\nis 
always well-defined as wrap-around in C18.}\n\nATC checks that the names of the non-loop functions,\nand the names of the parameters of both loop and non-loop functions,\nare valid C ASCII identifiers.\nThe input types of the C functions are determined from the guards,\nwhich must include explicit conjuncts\nlike the ones in the examples in \\secref{shallow}.\nATC operates on the unnormalized bodies of the ACL2 functions\n\\citeman{ACL2____FUNCTION-DEFINEDNESS}{function-definedness}:\nit checks that they contain supported representations of C code,\ni.e.\\ that they are in the image of the shallow embedding.\nIn the process, ATC performs C-like type checking,\nto determine the types of \\code{let}-bound variables with \\code{declar},\nto determine the output types of the C functions,\nand to ensure that the generated C code is acceptable to C compilers;\nthe latter is not always ensured by guard verification,\nspecifically when there is dead code under the guards (by user mistake),\nwhich C compilers do not regard as dead.\n\nWhile this translation of ACL2 to C is conceptually relatively simple,\nits implementation is more complicated than anticipated.\nThere are a lot of detailed cases to consider and to check.\nGiving informative error messages when the checks fail takes some effort;\nthe current implementation may be improved in this respect.\nThe ATC code responsible for the checks and translation\nconsists of a few thousand lines (including documentation and blanks).\n\nATC generates C code\nvia an abstract syntax of (a sufficient subset of) C,\ndefined via algebraic fixtypes.\nThe C file is generated via a pretty-printer,\nwhich minimizes parentheses in expressions\nby considering the relative precedence of the C expression constructs.\n\nATC generates the C code, as well as the associated ACL2 proof events,\nvery quickly for all the examples tried so far\n(which are admittedly not very large).\nThis is expected, as ATC does a linear pass on the ACL2 
code\nthat does not perform particularly intensive computations.\nHowever, the processing of the proof events by ACL2 is not always quick;\nsee \\secref{proofgen}.\n\n\n\\section{C Deep Embedding and Proof Generation}\n\\label{deep-proofgen}\n\nBesides C code,\nATC also generates ACL2 theorems asserting\nthe correctness of the C code with respect to the ACL2 code.\nThese assertions rely on a formalization in ACL2 of\nthe syntax and semantics of (a sufficient subset of) C,\ni.e.\\ a deep embedding of C in ACL2,\nwhich plays a role alongside the shallow embedding\n(see \\secref{inverse-deep}).\nThis formalization is more general than ATC, and is of independent interest.\n\n\n\\subsection{Deep Embedding of C in ACL2}\n\\label{deep}\n\nThe formalization of C in ACL2 consists of\n(i) an abstract syntax,\n(ii) a static semantics, and\n(iii) a dynamic semantics.\nBoth static and dynamic semantics are defined over the abstract syntax.\n\n\n\\subsubsection{Syntax}\n\nThe abstract syntax is currently the same one\nused for code generation (see \\secref{codegen}).\nIt captures the syntax of C after preprocessing.\n\nFuture versions of ATC may likely generate\nC code with at least some preprocessing directives,\nwhich would be convenient to incorporate\ninto the abstract syntax used for code generation;\nin this case, the abstract syntax would capture\nthe syntax of C before preprocessing.\nThus, at some point it may be appropriate for the C formalization\nto use its own separate abstract syntax,\nand in fact to have\none for the C syntax before preprocessing\nand one for the C syntax after preprocessing,\nto model more faithfully C18's translation phases.\n\nCurrently the formalization does not include the C concrete syntax.\nSince ATC's generated proofs currently apply to abstract syntax,\nthere is no immediate need to formalize concrete syntax.\n\n\n\\subsubsection{Static Semantics}\n\nThe static semantics of C\nconsists of decidable requirements\nthat must be satisfied by 
C code to be compiled and executed:\nevery referenced variable and function is in scope;\nevery operation is applied to operands of appropriate types;\nand so on.\nThese are described informally in the C18 standard.\n\nThese requirements are formalized via (executable) ACL2 code\nthat checks whether the C abstract syntax satisfies those requirements,\nanalogously to what C compilers do.\nThe checking code makes use of symbol tables,\ni.e.\\ data structures that capture which C symbols\n(e.g.\\ function and variable names) are in scope\nand what their types are.\nIf an abstract syntax entity violates any requirement,\nthe checking code returns an error result;\notherwise, it returns a non-error result\nthat may include inferred information about the checked abstract syntax entity.\nIn particular,\nthe successful checking of an expression yields\nthe (non-\\code{void}) C type of the expression,\nand the successful checking of a statement yields\na non-empty finite set of C types,\ncorresponding to the possible values that may be returned\n(including \\code{void} for code that completes execution\nwithout a \\code{return} or with one without expression);\nthe latter set is used to check that the body of a function\nalways returns something consistent with the function's return type.\n\n\n\\subsubsection{Dynamic Semantics}\n\nThe dynamic semantics of C\nis the execution behavior of C code\n(that satisfies the static semantic requirements),\ni.e.\\ how the execution of expressions, statements, etc.\\\nmanipulates values and memory.\nWhile this behavior is normally realized\nby compiling C code to machine code and running the latter,\nit can be described, as the C18 standard does,\nin terms of an abstract machine that directly executes C.\n\nThis abstract machine is formalized\nvia fixtypes that capture the machine states\nand via (executable) ACL2 code that manipulates machine states\naccording to the C expressions, statements, etc.\n\nThe model of machine states starts 
with the model of C integers and arrays\ndescribed in \\secref{shallow}, which is shared by deep and shallow embedding.\nC values are defined to consist of integers and pointers (for arrays);\na pointer consists of a type and an optional address (absent for null pointers),\nwhere an address is a natural number that is treated opaquely.%\n\\footnote{These addresses are just used to identify arrays.\nThey do not represent actual addresses in memory.\nThe formalization performs no arithmetic on them.\nOther kinds of entities, e.g.\\ strings, could be used instead.}\nImportantly, in this model the C values carry information about their types,\nwhich is used in the defensive checks described later.\nThe state of the variables in scope is\na stack (i.e.\\ list) of finite maps from identifiers to values:\nthe stack corresponds to the nested C scopes,\nand each finite map consists of the variables in the same scope.\nA frame consists of a function name and\na stack of variable scopes of the kind just described.\nA computation state consists of\na stack (i.e.\\ list) of frames, which captures the call stack,\nand a heap, which is a finite map\nfrom addresses (the ones used in non-null pointers)\nto arrays (the ones shared with the shallow embedding).\nThe current model of the heap is simple,\nwith arrays treated as wholes and accessed exclusively via their addresses.\n\nThe next component of the formalized dynamic semantics of C consists of\nACL2 functions that perform basic operations on computation states.\nThese include operations to:\npush and pop frames, and get the top frame;\npush and pop scopes (in the top frame);\ncreate variables (in the top scope in the top frame),\nand read and write variables\n(in some scope, looked up from innermost to outermost, in the top frame);\nread and write arrays (in the heap).\nThere are no operations to create or destroy arrays;\nthe read and write operations apply to externally created arrays.\n\nThe C functions in scope are 
captured via a function environment,\nwhich is a finite map from identifiers (function names)\nto information about the function (typed parameters, return type, and body).\nThe function environment for a C program is built\nby collecting all the C functions that form the program;\nit never changes during execution.\n\nThe C18 standard does not prescribe the order of expression evaluation;\nsince C expressions may have side effects,\ndifferent orders of evaluation may lead to different outcomes,\nwhich complicates formal modeling.\nThe formal model manages this complexity\nby partitioning expressions into pure ones (i.e.\\ free of side effects)\nand non-pure ones (i.e.\\ with possible side effects).\nWhile the pure ones may be freely nested\n(because their order of evaluation does not matter),\nthe non-pure ones may only appear in certain positions of the code\nthat force a unique order of evaluation:\nspecifically,\nassignments may only appear as expression statements.\nThese restrictions, which are not required in C18,\nare enforced in the subset of C covered by the formal model.\n\nThe last component of the formalized dynamic semantics of C consists of\nACL2 functions to execute expressions, statements, etc.\nThese functions are defensive,\ni.e.\\ they do not assume that\nthe code satisfies the static semantic requirements,\nand instead independently check those requirements on the dynamic data,\ni.e.\\ that a referenced variable is in the current frame,\nthat the values that an operation is applied to have appropriate types\n(recall the note above about values carrying type information),\nand so on;\nif any of these checks fails,\nthe execution functions return an error indication.%\n\\footnote{It remains to be formally proved that\nthe static semantics is sound with respect to the dynamic semantics,\nnamely that no such error indications are returned\nwhen executing code that satisfies the static requirements.\nThe proofs generated by ATC do not rely on this 
property,\nbut its proof would provide\na major validation of this formalization of C in ACL2.}\nThe ACL2 execution functions also check that\nthe results of integer operations and conversions are well-defined\nand that arrays are read and written within their bounds;\nif any of these checks fails,\nthe execution functions return an error indication.\nImportantly, this means that the dynamic semantics returns an error indication\nif the C code attempts an unsafe array access.\n\nThe ACL2 function to execute a pure expression\ntakes an expression and a computation state as inputs,\nand returns either a value or an error indication as output.\nExecuting a non-pure expression may involve executing a function call,\nwhich involves executing the statements in the function,\nand so on.\nThus, the ACL2 functions to execute\nnon-pure expressions, statements, functions, etc.\\\nare mutually recursive.\nBesides the syntactic entities they execute (statements etc.),\nthese functions take a computation state and a function environment as inputs\n(the latter is used to look up called functions),\nand they return a possibly updated computation state as output,\nalong with a result (e.g.\\ an optional value for a statement)\nor an error indication of the kind explained above.\n\nThese mutually recursive functions,\ntogether with the functions to execute pure expressions and lists thereof,\nformalize an (interpretive) big-step operational semantics:\neach function executes its syntactic construct to completion,\nby recursively executing the sub-constructs\nand combining their outcomes to yield the final outcome.%\n\\footnote{In contrast, a small-step operational semantics\nwould only execute part of a construct at each step.\nThis requires keeping track of\nwhich parts of a construct have been already executed\nand which parts must be executed next.\nThis is more complicated than just executing each construct to completion.\nAn example of small-step operational semantics,\nfor the 
ACL2 programming language (but still illustrating the point),\nis in the Community Books\n\\citeman{ACL2PL____ACL2-PROGRAMMING-LANGUAGE}\n {acl2pl::acl2-programming-language}.}\nSince C code may not terminate in general,\nthe mutual recursion may not terminate,\nwhich makes it problematic to define in a theorem prover like ACL2.\nThis is solved by adding an artificial counter\nthat limits the depth of the mutual recursion:\nthe counter is decremented by 1 at each recursive call,\nand used as the measure of the mutual recursion,\nwhose termination proof is thus straightforward.\nThis does not sweep issues of (non-)termination under the rug:\nsee \\secref{proofgen}.\n\nThe ACL2 execution functions\nfor (pure and non-pure) expressions, statements, etc.\\\nuse the operations and conversions on integers discussed in \\secref{shallow},\nwhich are shared between deep and shallow embedding.\nThey also use the operations on computation states discussed above.\n\n\n\\subsection{C Proof Generation}\n\\label{proofgen}\n\nAlthough ATC's translation\nfrom the shallowly embedded C code in ACL2 to the actual C code\nis conceptually simple by design,\ngenerating ACL2 proofs of their equivalence\nis more laborious than anticipated.\nThese proofs consist of ACL2 events that ATC builds\nand then submits to ACL2 via \\code{make-event}.\n\n\n\\subsubsection{Code Constant}\n\nAs ATC computes the abstract syntax of the generated C code\nand pretty-prints it to a \\code{.c} file (see \\secref{codegen}),\nit also generates a \\code{defconst} event that defines a named constant\nfor the C code, which is a single translation unit in C18 terminology:%\n\\footnote{In the future, this may be generalized to\na collection of translation units that forms a generated program.\nEach translation unit is in a \\code{.c} or \\code{.h} file.}\n\\begin{bcode}\n(defconst *program* \\ccode{; default name, customizable by the user}\n  '(:transunit ...)) 
\\ccode{; fixtype value of translation unit}\n\\end{bcode}\nThe other generated events refer to this constant\nto assert properties of the generated C code.\n\n\n\\subsubsection{Static Correctness}\n\nATC generates a \\code{defthm} event asserting that\nthe top-level checking function of the C static semantics\nsucceeds on the named constant described above:\n\\begin{bcode}\n(defthm *program*-well-formed\n (equal (check-transunit *program*) \\ccode{; top-level static checking function}\n :wellformed))\n\\end{bcode}\nSince this is a ground theorem, it is proved by execution:\nATC generates hints to prove it in the theory consisting of\nexactly the executable counterpart of that top-level checking function.\nThis proof is very quick for all the examples tried so far.\n\nThis establishes that the C code satisfies all the static semantic requirements,\nand can be therefore successfully compiled by C compilers.\nThis property is not always implied by the dynamic correctness theorems,\nin the presence of dead code under guards\nas briefly discussed in \\secref{codegen}.\n\n\n\\subsubsection{Dynamic Correctness}\n\nATC generates \\code{defthm} events asserting that, roughly speaking,\nexecuting each C function according to the dynamic semantics\nyields the same outcome as the representing ACL2 function.\nThe proofs are carried out via a symbolic execution of the C code\n(which is constant in the proofs, i.e.\\ \\code{*program*})\nthat turns the execution functions applied to the deeply embedded C code\ninto a form that can be matched with the shallowly embedded C code\nthat forms the ACL2 functions.\n\nThe general formulation is illustrated by\nthe theorem for function \\code{|f|} in \\secref{shallow}:\n\\begin{bcode}\n(defthm *program*-|f|-correct\n (implies (and (compustatep compst)\n (equal fenv (init-fun-env *program*))\n (integerp limit)\n (>= limit ...) 
\\ccode{; constant lower bound calculated by ATC}\n                (and (sintp |x|) (sintp |y|) (sintp |z|) ...)) \\ccode{; guard of |f|}\n           (equal (exec-fun (ident \"f\") (list |x| |y| |z|) compst fenv limit)\n                  (b* ((result (|f| |x| |y| |z|)))\n                    (mv result compst)))))\n\\end{bcode}\nIt says that executing the C function whose name is the identifier \\code{f}\non \\code{int} inputs \\code{|x|}, \\code{|y|}, \\code{|z|}\nsatisfying the guard of \\code{|f|},\non an arbitrary computation state \\code{compst},\nwith the function environment \\code{fenv} for the program,\nwith a sufficiently large recursion limit \\code{limit},\nyields the same result (value) as \\code{|f|} on the same inputs,\nand does not change the computation state.\nThe hypothesis on the limit ensures that\nexecution does not stop prematurely;\nthe lower bound, which is constant in this case,\nis calculated by ATC as it generates function \\code{f},\nbecause it knows how much recursion depth the execution of its body needs.\nATC generates hints (not shown) to prove this theorem,\nmainly consisting of a large theory\nfor symbolically executing the C code in ACL2,\nalong with a lemma instance for the guard theorem of \\code{|f|},\nwhich steers the symbolic execution away from returning error indications\ndue to non-well-defined arithmetic operations,\nsince they are all well-defined under the guard.\nThe fact that \\code{|f|} never returns error indications,\ncombined with the theorem's assertion that\nthe execution result of \\code{f} equals the result of \\code{|f|},\nmeans that the execution never returns error indications:\nin general, this means that the C code generated by ATC\nalways has a well-defined behavior, including always accessing arrays safely.\nThis theorem also implicitly asserts that \\code{f} terminates:\nit is always possible to find a value of \\code{limit}\nthat satisfies the inequality hypothesis.\n\nTo prove the theorem above, ATC also generates a local theorem%\n\\footnote{Here `local' refers to the \\code{encapsulate}\nwith the ATC-generated 
events,\nonly a few of which are exported, free of hints.}\nsaying that \\code{|f|} returns an \\code{int};\nthis local theorem is referenced in the generated hints for the above theorem.\nAn analogous local theorem is generated by ATC for each ACL2 function,\nwith the applicable type(s).\n\nATC generates a correctness theorem for each C function.\nIf the C function affects arrays, the formulation is more complicated.\nThis is illustrated by the theorem for function \\code{|i|} in \\secref{shallow}:\n\\begin{bcode}\n(defthm *program*-|i|-correct\n (b* ((|a| (read-array (pointer->address |a|-ptr) compst)))\n (implies (and (compustatep compst)\n (equal fenv (init-fun-env *program*))\n (integerp limit)\n (>= limit ...) \\ccode{; constant lower bound calculated by ATC}\n (pointerp |a|-ptr)\n (not (pointer-nullp |a|-ptr))\n (equal (pointer->reftype |a|-ptr) (type-uchar))\n (and (uchar-arrayp |a|) ...)) \\ccode{; guard of |i|}\n (equal (exec-fun (ident \"i\") (list |a|-ptr |x| |y|) compst fenv limit)\n (b* ((|a|-new (|i| |a| |x| |y|)))\n (mv nil (write-array (pointer->address |a|-ptr) |a|-new compst))))))\n\\end{bcode}\nSince C arrays are manipulated as wholes in ACL2 but via pointers in C,\na variable \\code{|a|-ptr} is introduced for the pointer to the array,\nwhile the array \\code{|a|} is the result of \\code{read-array}:\nthe call of \\code{exec-fun} takes \\code{|a|-ptr};\nthe call of \\code{|i|} takes \\code{|a|}.\nThe hypotheses on \\code{|a|-ptr} say that it is\na non-null pointer of the right type;\nthe guard hypothesis saying that \\code{|a|} is an array\nalso implicitly says that \\code{|a|-ptr} points to an existing array,\nbecause \\code{read-array} returns an error indication if that is not the case.\nThis hypothesis constrains the computation state to contain that array,\nwhich is thus no longer unconstrained\nas in the correctness theorem for \\code{|f|}.\nAlso unlike the correctness theorem for \\code{|f|},\nhere the computation state is updated by the 
execution of \\code{i}:\nthe new array, returned by \\code{|i|}, is \\code{|a|-new};\nthe new computation state is obtained by replacing\nthe array \\code{|a|} pointed by \\code{|a|-ptr} with \\code{|a|-new}.\nThe \\code{nil} that precedes the updated computation state\nrefers to the fact that \\code{i} returns no result.\nThe generated hints for this theorem are similar to\nthe ones for the correctness theorem for \\code{|f|}.\n\nATC also generates a correctness theorem for each C loop,\nrelating its execution to the ACL2 function that represents it.\nThis is illustrated by\nthe theorem for function \\code{|h\\$loop|} in \\secref{shallow}:\n\\begin{bcode}\n(defthm *program*-|h\\$loop|-correct\n (b* ((|n| (read-var (ident \"n\") compst))\n (|r| (read-var (ident \"r\") compst)))\n (implies (and (compustatep compst)\n (not (equal (compustate-frames-number compst) 0))\n (equal fenv (init-fun-env *program*))\n (integerp limit)\n (>= limit ...) \\ccode{; non-constant lower bound calculated by ATC}\n (and (uintp |n|) (uintp |r|))) \\ccode{; guard of |h\\$loop|}\n (equal (exec-stmt-while\n '... \\ccode{; test of the loop}\n '... 
\\ccode{; body of the loop}\n compst fenv limit)\n (b* (((mv |n|-new |r|-new) (|h\\$loop| |n| |r|)))\n (mv nil\n (write-var (ident \"n\")\n |n|-new\n (write-var (ident \"r\")\n |r|-new\n compst))))))))\n\\end{bcode}\nThe computation state is constrained to have at least one frame\n(i.e.\\ the number of frames is not 0)\nand two variables \\code{n} and \\code{r} in the top frame\n(i.e.\\ the variables accessed by the loop);\n\\code{|n|} and \\code{|r|} are bound to those variables' values.\nThe guard hypothesis saying that \\code{|n|} and \\code{|r|}\nare \\code{unsigned} \\code{int} values\nalso implicitly says that the variables exist,\nbecause \\code{read-var} returns an error indication if that is not the case.\nThe lower bound in the hypothesis on \\code{limit} is not constant here:\nit is a symbolic term that references the measure of \\code{|h\\$loop|},\nbecause the measure is related to the recursion limit needed to execute the loop;\nATC calculates this symbolic term as it generates the loop's code.\nThe \\code{exec-stmt-while} function is called on\nthe quoted test and body of the loop,\nmaking this theorem applicable as a rewrite rule\nin the proof of the correctness theorem for \\code{|h|}\n(more on this below).\nSince C loops affect variables (and possibly arrays) but do not return results,\nthe right side of the equality whose left side is the \\code{exec-stmt} call\nhas a form similar to the correctness theorem for function \\code{|i|} above,\nexcept that variables instead of arrays are updated in the computation state.\nThis theorem is proved by induction;\nthe generated hints make use of the termination theorem of \\code{|h\\$loop|},\nas well as of some local functions and theorems not discussed here.\n\nIf a C loop affects arrays besides variables,\nits correctness theorem combines characteristics of\nthe correctness theorems for \\code{|h\\$loop|} and for \\code{|i|}.\nThe theorem starts with \\code{b*} bindings for both variables and 
arrays.\nVariables for the pointers to the arrays are introduced.\nThe final computation state updates both variables and arrays,\nvia nested \\code{write-var} and \\code{write-array} calls.\n\nIf a C function or C loop affects multiple arrays,\nthe generated correctness theorem includes hypotheses saying that\nthe arrays are all at different addresses.\nSince the representing ACL2 function treats the arrays as wholes,\nupdating one leaves the others unchanged in the ACL2 representation.\nIn the C code, where the arrays are handled via pointers,\nupdates would not be independent if two pointers pointed to the same array.\nThe absence of aliasing is therefore\na critical hypothesis in the correctness theorems.\n\nThe correctness theorem for a C loop is used as a rewrite rule\nin the symbolic execution of\nthe correctness theorem for the C function that contains the loop.\nSimilarly, the correctness theorem for a C function is used as a rewrite rule\nin the symbolic execution of another C function that calls it.\nIn other words,\nthe generated correctness proofs build upon each other,\naccording to the call graph of the ACL2 functions\nthat represent the C functions and loops.\nFor this to work,\nthe theorems are formulated so that\nthe left sides of the rewrite rules match\nthe terms that arise during symbolic execution.\nIn particular, the recursion limit is a variable, \\code{limit},\nwhich the hypotheses constrain with a lower bound:\nthis ensures that any actual limit term\nthat arises during the symbolic execution is matched by the variable,\nand that the inequality hypothesis can be discharged.\nATC calculates the lower bound terms by considering\nnot only the code generated for the C function or loop,\nbut also the lower bound terms previously calculated\nfor all the C functions and loops that may be executed;\nthe more complex the call graph of the ACL2 functions,\nthe more complex the resulting lower bound terms.\n\nThe above is just an overview of how ATC 
generates dynamic correctness proofs.\nThere are several other complexities involved,\nsuch as canonical forms of the symbolic computation states,\nwhich are defined via slightly modified operations on computation states,\nand achieved via appropriate sets of rewrite rules.\nThe implementation of ATC includes documentation\nfor all the details of proof generation.\n\nSome correctness theorems are processed quickly by ACL2,\ne.g.\\ in less than 0.1 seconds.\nOther correctness theorems take several minutes.\nThe slow times seem mainly due to case splits that occur\nwhen dealing with code with several conditionals.\nThe granularity of the correctness proofs generated by ATC\nis at the function and loop level,\ni.e.\\ there is one theorem per C function or loop,\nproved by symbolically executing the function or loop as a whole,\nand matching that with the representing ACL2 functions.\nThis process may be slow even for relatively small C functions and loops,\nif they have enough conditionals.\nAn approach to overcome this problem is discussed in \\secref{future}.\n\nThe hypotheses in the correctness theorems generated by ATC\nmust be fulfilled by external C code that calls the generated C code,\nfor the correctness guarantees to hold.\nThis does not apply to the \\code{limit} hypotheses,\nwhich just serve to show that execution terminates,\nand which are easily satisfied with sufficiently large values of \\code{limit},\nwhich are not part of the data manipulated by the C code\n(they are an artifact of the model).\nThe satisfaction of the hypotheses about the types of the inputs\nmay be checked by C compilers.\nSatisfying the remaining hypotheses\nis the responsibility of the calling code;\nthese are the non-type portions of the guards\n(e.g.\\ that certain values are in certain ranges),\nand the fact that all the arrays are disjoint.\nEven though the formal model puts all the arrays in the heap,\nthe arrays passed to the C functions generated by ATC\ncould be allocated in 
the stack (by the callers) as well;\nthe heap of the formal model more generally represents\nexternally allocated memory.\n\n\n\\section{Future Work}\n\\label{future}\n\nAn obvious direction of future work is\nto add support for increasingly larger subsets of C.\nAlthough the current subset can represent some interesting programs,\nmany other interesting programs cannot be represented.\nWork is underway to add support for structures,\nwhich are commonly used in C.\nThe main challenge in supporting additional C features\nis perhaps the definition of their representations in the shallow embedding.\nGiven that ACL2 is a very different language from C,\ndefining such representations may require some thought.\nIn particular, representing aliasing may involve some complexity,\nbecause of the likely need to explicate some graph structure\nthat captures aliased data and that is passed through the ACL2 functions.\nHandling tree-shaped data without aliasing is comparatively simple,\nalong the lines of the current treatment of arrays,\nwhich may be generalized to include structures,\npossibly mutually nested with arrays.\n\nBoth shallow and deep embedding currently comply with the C18 standard.\nHowever, some C implementations may extend the well-definedness of C constructs\n(e.g.\\ they may ensure that signed arithmetic is always well-defined,\nas two's complement wrap-around, as in the x86 processor),\nand some applications may rely on this extended well-definedness.\nThe plan is to parameterize both shallow and deep embedding\nover certain implementation-defined characteristics,\ncaptured via ACL2 constrained functions;\ndifferent instantiations of these parameters\ncan be captured via hypotheses over these constrained functions,\nand these hypotheses can be used in the applications that require them.\nIn particular, the behavior of an arithmetic operation\nover operands of certain types\nwhose exact result is not representable in the result type\ncan be captured via a 
constrained function.\nFor C code strictly compliant to the C18 standard,\nthis constrained function can be hypothesized to return an error indication;\nfor C code tailored to an implementation based on x86 as above,\nthis constrained function can be hypothesized to return\nthe two's complement wrapped-around result.\nThe same approach is planned for the sizes of integer types,\nas alluded to in \\secref{shallow}.\n\nThe proof (in)efficiency issues discussed in \\secref{proofgen}\nmust be addressed soon.\nWhile there may be ways to mitigate the case splits\nby tweaking the symbolic execution,\na more promising approach is to generate finer-grained proofs,\nat the level of blocks, or even statements and expressions,\nas ATC generates these syntactic entities.\nThe finer-grained proofs will still use symbolic execution,\nbut in a more controlled and efficient way, avoiding case splits.\nIt should be possible to generate finer-grained proofs\nthat are processed in linear time over the size of the C code.\n\nATC could be extended with code generation modes\nbased on direct shallow embedding and direct deep embedding,\nanalogously to ATJ\n\\cite{atj-deep,atj-shallow} \\citeman{JAVA____ATJ}{java::atj}.\nThis may require the use of a garbage collector for C.\n\n\n\\section{Related Work}\n\\label{related}\n\nATC's shallow embedding of C in ACL2\nis similar to the shallow embedding of RAC (Restricted Algorithmic C) in ACL2\n\\cite[Chapter 15]{russinoff-book}.\nDespite their different purposes (code generation vs.\\ code verification),\nthey share the concern of representing C or RAC code in ACL2.\n\nATJ \\cite{atj-deep,atj-shallow} \\citeman{JAVA____ATJ}{java::atj}\nis a Java code generator for ACL2\nprimarily based on direct deep embedding and direct shallow embedding.\nIt also features partial support for inverse shallow embedding,\nwhich was in fact pioneered in ATJ,\nand then developed in full in ATC.\nWhile ATJ does not generate proofs yet,\nATC has been generating 
them from the outset.\nWhile ATC requires the ACL2 code translated to C\nto be in a very restricted form,\nATJ translates to Java a much larger subset of ACL2.\n\nSeveral other theorem provers include code generation facilities\n\\cite{coq-refman,isa-codegen,pvs-codegen}.\nThese are based on direct shallow embedding,\nwhich is quite different from ATC's inverse shallow embedding.\nFurthermore, the ACL2 language is quite different from those provers' languages,\nwhich are higher-order and strongly typed.\nThus, not many of the ideas from those provers' code generation facilities\nmay be relevant to ATC.\nThe C code generator for PVS \\cite{pvs-codegen}\nmay contain the most relevant ideas for ATC,\nbut likely more for a future code generation mode\nbased on direct shallow embedding.\n\nSeveral formalizations of C exist\n\\cite{c-asm,c-coq,c-hol,c-k,c-papaspyrou}.\nThese may contain ideas relevant to\nextending and improving the formalization of C in ACL2 described in this paper,\nwhich the proofs generated by ATC are based on.\n\n\n\\section*{Acknowledgements}\n\nThanks to\nRuben Gamboa,\nKarthik Nukala, and\nEric Smith\nfor using the C code generator\nand for providing valuable suggestions that led to improvements to the tool.\nThanks to Eric Smith and Karthik Nukala\nfor developing new APT transformations tailored to the C code generator.\nThanks to David Hardin for\nuseful discussions on C code generation and related topics.\n\n\n\\bibliographystyle{eptcs}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzkfvn b/data_all_eng_slimpj/shuffled/split2/finalzzkfvn new file mode 100644 index 0000000000000000000000000000000000000000..90760592b0353f2767366d874203a29e88e50ab3 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzkfvn @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nThe theory of Sample Distribution is an important branch of Statistics and Probability which study the general problem of 
determining the distribution of functions of random vectors. It provides a formal framework for modelling, simulating and making statistical inference. To be more precise, let us fix a probability measure space $\left(\Omega,\Sigma,p\right)$ and let $X$ be a (vector valued) random variable i.e. a $\Sigma$-measurable function from $\Omega$ to $\mathbb{R}^k$, where $k\in\mathbb{N}$; $X$ is usually referred to as the data. Let $Y$ be any measurable function of the data $X$ i.e. $Y$ is a random variable which satisfies $Y=\phi(X)$ for some measurable function $\phi:\mathbb{R}^k\to\mathbb{R}^n$ with $n\in\mathbb{N}$; $Y$ is usually called a statistic. \nThe problem which we address consists in finding the probability distribution of $Y$ knowing the distribution of $X$.\n\n\n\n\n Depending on the nature of the data, there are in general different approaches for finding the distribution of the statistic $Y$, including the distribution function technique, the moment-generating function technique and the change of variable technique (see e.g. \citep[Section 4.1, page 122]{HoggCraig}). In the last case, let us suppose for example that the probability measure induced by $X$ is absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}^k$ and let $f_X$ be its density function. 
Then if $k=n$ and $\\phi:\\mathbb{R}^k\\to\\mathbb{R}^k$ is a $C^1$-diffeomorphism, then the change of variable $y=\\phi(x)$ yields, for every Borel subset $A\\subseteq\\mathbb{R}^k$,\n\\begin{align}\\label{int 2}\np\\left(\\phi(X)\\in A\\right)=p\\left(X\\in \\phi^{-1}(A)\\right)=\\int_{\\phi^{-1}(A)}f_X(x)\\,dx=\\int_{A}f_X(\\phi^{-1}(y))\\frac{1}{|\\mbox{det}J_\\phi(\\phi^{-1}(y))|}\\,dy.\n\\end{align}\nThe last equation implies that the probability measure induced by $Y$ is absolutely continuous with respect to the Lebesgue measure on $\\mathbb{R}^k$ and its density function $f_Y$ is determined by the equation\n\\begin{align}\\label{int 1}\nf_Y(y)=f_X(\\phi^{-1}(y))\\frac{1}{|\\mbox{det}J_\\phi(\\phi^{-1}(y))|}, \\quad y\\in\\mathbb{R}^k.\n\\end{align}\nThis change of variables formula is widely used, for example, in machine learning and is essential for some recent results in density estimation and\n\tgenerative modeling like normalizing flows (\\citep{Rezende}), NICE (\\citep{Dinh2}), or Real NVP (\\citep{Dinh1}). However all uses of this formula in the machine learning literature that we are aware of are constrained by the bijectivity and the differentiability of the map $\\phi$.\n\n\nIn this expository paper we extend formula \\eqref{int 1} to the more general case of statistics $Y=\\phi(X)$ defined by locally Lipschitz functions $\\phi:\\mathbb{R}^k\\to\\mathbb{R}^n$. The approach presented is mainly based upon the Coarea Formula proved by Federer in \\citep{Federer} which provides, in our setting, an elegant tool to derive the distribution of $Y$. This also allows to get rid of the differentiability and invertibility assumptions on $\\phi$ and to treat the case $k\\neq n$: this is quite useful in many problems of statistical inference and of machine learning (see e.g. 
\citep{Civitkovic}).\n\nWhen $\phi:\mathbb{R}^k\to \mathbb{R}^n$ is a $C^1$-map and $m:=\max\{\mbox{rank }(J\phi(x)):x\in\mathbb{R}^k\}$, our result states that the probability $p_Y$ induced by $Y=\phi(X)$ has a density function $f_Y$ with respect to the Hausdorff measure $\mathcal{H}^m$ on $\phi(\mathbb{R}^k)$\nwhich satisfies \n\begin{align}\label{eq intr}\n\tf_Y(y)=\n\t\int_{\phi^{-1}(y)}f_X(x)\frac{1}{J_m\phi(x)}\,d\mathcal{H}^{k-m}(x), &\quad \text{for $\mathcal{H}^m$-a.e.}\quad y\in\phi(\mathbb{R}^k),\n\end{align} \nwhere $J_m\phi(x)$ is the $m$-dimensional Jacobian of $\phi$ (see Definition \ref{k-Jac}). When the Jacobian matrix $J\phi$ has, at any point, maximum rank we allow the map $\phi$ to be only locally Lipschitz. We also consider the case of $X$ having probability concentrated on some $m$-dimensional sub-manifold $E\subseteq\mathbb{R}^k$. This case has many applications in directional and axial statistics, morphometrics, medical\ndiagnostics, machine vision, image analysis and molecular biology (see e.g.\ \citep{Bhattacharya} and references therein).\n\n\n\n We make no claim of novelty; the evidence suggests, however, that this result is not as widely known as it should be. In addition, we give several examples.\n\n\n\n\smallskip\nLet us briefly describe the content of the sections. In Section \ref{preliminaries} we introduce the main definitions and notation that we use throughout the paper. Section \ref{Section AC} collects the main results we need about the theory of Hausdorff measures and the Area and Coarea Formulas. In Section \ref{Section Distribution} we develop the method proving Formula \eqref{eq intr}: when $J\phi$ has maximum rank we also allow the map $\phi$ to be locally Lipschitz. 
In Section \\ref{Section Generalization} we gives a further generalization considering the case of random variables $X$ having probability density functions $f_X$ with respect to the Hausdorff measure $\\mathcal{H}^m_{\\vert E}$ concentrated on some $m$-dimensional sub-manifold $E\\subseteq\\mathbb{R}^k$. \nFinally, in Section \\ref{Section apll}, we provide several examples which show how to apply the latter results in order to characterize the distribution of algebra of random variables and how to compute, in an easy way, the probability densities of some classic distributions including order statistics, degenerate normal distributions, Chi-squared and \"Student's t\" distributions. \n\n\n\\bigskip\n\\noindent\\textbf{Notation.} \nWe write $\\langle \\lambda,\\mu \\rangle=\\sum_{i}\\lambda_i\\mu_i$ to denote the inner product of $\\mathbb{R}^k$.\n When $f:\\mathbb{R}^k\\to\\mathbb{R}^n$ is a Lipschitz map we write $Jf$ to denote its Jacobian matrix $\\left(\\frac{\\partial f_i}{\\partial x_j}\\right)_{i,j}$ which is defined a.e. on $\\mathbb{R}^k$. When $A=(a_{ij})\\in \\mathbb{R}^{n,k}$ is a real matrix we write $Ax$ to denote the linear operator \n$$\\phi:\\mathbb{R}^k\\to\\mathbb{R}^n,\\qquad x=(x_1,\\dots,x_k)\\mapsto x\\cdot A^t=(y_1,\\dots,y_n),\\quad y_i=\\sum_{j=1}^k a_{i,j}x_j.$$\n With this notation the Jacobian matrix $J(Ax)$ of $Ax$ satisfies $J(Ax)=A$. $I_k$ is the identity matrix of $\\mathbb{R}^{k,k}$. If $\\left(\\Omega_1, \\Sigma_1\\right)$ and $\\left(\\Omega_2, \\Sigma_2\\right)$ are measurable spaces, a function $f :\\Omega_1\\to\\Omega_2$ is said to be $\\left(\\Sigma_1,\\Sigma_2\\right)$-measurable if $f^{-1}(B)\\in\\Sigma_1$ for all $B\\in\\Sigma_2$. \nUnless otherwise specified when $\\Omega_2=\\mathbb{R}^k$ we always choose $\\Sigma_2$ as the $\\sigma$-algebra $\\mathcal{B}(\\mathbb{R}^k)$ of all the Borel subsets of $\\mathbb{R}^k$ and in this case we simply say that $f$ is $\\Sigma_1$-measurable. 
We finally write $\\mathcal L^k$ and $\\mathcal H^s$ to denote respectively the Lebesgue measure and the $s$-dimensional Hausdorff measure on $\\mathbb{R}^k$: under this notation we have in particular that $\\mathcal L^k=\\mathcal H^k$ and that $\\mathcal H^0$ is the counting measure on $\\mathbb{R}^k$. \n\n\n\n\\bigskip\n\n\\noindent\\textbf{Acknowledgements.} The author thanks F. Durante for several comments on a previous version of the manuscript.\n\n\\section{Preliminaries}\\label{preliminaries}\nIn this section we fix the main notation and collect the main results we use concerning the Probability theory. For a good survey on the topic we refer the reader, for example, to \\citep[Chapter IV]{HalmosMeasure} and \\citep[Appendix A and B]{Schervish}.\n\n\nLet $\\mu,\\nu$ two (positive) measures defined on a measurable space $\\left(\\Omega,\\Sigma\\right)$. $\\nu$ is said to be \\emph{absolutely continuous} with respect to $\\mu$ and we write $\\nu\\ll\\mu$ if and only if $\\nu(B)=0$ for every $B\\in\\Sigma$ such that $\\mu(B)=0$. $\\nu$ is said to have a \\emph{density function} $f$ with respect to $\\mu$ if and only if there exists a measurable positive function $f:\\Omega\\to \\mathbb{R}^+$ such that \n\\begin{align*}\n\\nu(B)=\\int_B f\\,d\\mu,\\quad \\text{for all}\\quad A\\in\\Sigma,\n\\end{align*}\n(note that $f$ is uniquely defined up to zero measure sets). When $\\mu$ is $\\sigma$-finite, thanks to the Radon-Nikodym Theorem, the latter two definitions coincide and $\\frac{d\\nu}{d \\mu}:=f$ is called the \\emph{Radon-Nikodym derivative} of $\\nu$ with respect to $\\mu$ (see e.g. \\citep[Theorem 1.28]{AFPallara}).\n\n\nLet now $\\left(\\Omega,\\Sigma,p\\right)$ be a probability measure space i.e. a measure space with $p(\\Omega)=1$ and let $k\\in\\mathbb{N}$. A $\\Sigma$-measurable function $X$ from $\\Omega$ to $\\mathbb{R}^k$ is called a (vector) \\emph{random variable}; in statistical inference problems, $X$ is sometimes referred to as the given data. 
We write $p_X$ to denote the distribution of $X$ i.e. the measure induced by $X$ on $\left(\mathbb{R}^k, \mathcal{B}(\mathbb{R}^k)\right)$ defined by\n\begin{align*}\np_X(A)=p\left(X\in A\right):=p(X^{-1}(A)),\quad \text{for every Borel set}\quad A\subseteq \mathbb{R}^k.\n \end{align*}\nWith a little abuse of terminology, $X$ is said to be an \emph{absolutely continuous random variable} if and only if $p_X\ll \mathcal L^k$ i.e. $p_X$ is absolutely continuous with respect to the Lebesgue measure $\mathcal L^k$. In this case the non-negative Radon-Nikodym derivative $f_X:=\frac{dp_X}{d \mathcal L^k}$ is called the density function of $X$ and it is defined through the relation\n\begin{align*}\np_X(A)=\int_{A}f_X(x)\,dx,\quad \text{for every Borel set}\quad A\subseteq \mathbb{R}^k.\n\end{align*}\n$X$ is said to be a \emph{discrete random variable} if and only if there exists a countable subset $I=(a_i)_{i\in \mathbb{N}}$ of $\mathbb{R}^k$ such that $p_X\ll {\mathcal{H}^0}_{\vert I}$ i.e. $p_X$ is absolutely continuous with respect to the counting measure $\mathcal{H}^0$ on $I$. In this case the density \n $f_X:=\frac{dp_X}{d {\mathcal{H}^0}_{\vert I}}$ is also called the probability mass function and it is defined through the relation\n\begin{align*}\np_X(A)=\sum_{i:a_i\in A}f_X(a_i),\quad \text{for every subset}\quad A\subseteq \mathbb{R}^k.\n\end{align*}\nIn particular $p_X(\{a\})=f_X(a)$, for all $a\in I$.\n\n\n Let $X:\Omega\to\mathbb{R}^k$ be a fixed random variable and let $n\in\mathbb{N}$; a random variable $Y:\Omega\to\mathbb{R}^n$ is called a \emph{statistic} (of the data $X$) if it is a measurable function of $X$ i.e. $Y$ satisfies $Y=\phi\circ X$ for some $\left(\mathcal{B}(\mathbb{R}^k), \mathcal{B}(\mathbb{R}^n)\right)$-measurable function $\phi:\mathbb{R}^k\to\mathbb{R}^n$.\n\n\nFinally, let $X_1,\dots, X_n$ be $n$ random variables where $X_i:\Omega\to\mathbb{R}^k$, for $i=1,\dots, n$. 
$X_1,\\dots, X_n$ are said to be \\emph{independent} if for every Borel subset $A_1,\\dots A_n$ of $\\mathbb{R}^k$ and for every $J\\subseteq \\{1,\\dots, n\\}$ one has \n\\begin{align*}\np\\left(\\bigcap_{i\\in J}X_i^{-1}(A_i)\\right)=\\prod_{i\\in J}p_{X_i}(A_i).\n\\end{align*}\nIn this case, if every $X_i$ is absolutely continuous with density function $f_i$, then $X$ is absolutely continuous and its density function satisfies $f_X(x_1,\\dots, x_n)=\\prod_{i=1,\\dots,n}f_i(x_i)$ for every $x_i\\in\\mathbb{R}^k$.\n\nIf, moreover, $X_1,\\dots, X_n$ are identically distributed i.e. $p_{X_i}=p_{X_j}:=q$ for every $i,j$, then $X=\\left(X_1,\\dots, X_n\\right)$ is called a \\emph{random sample} from the distribution $q$; in this case, if every $X_i$ is absolutely continuous with density $f_i=f$, then the density function of $X$ satisfies $f_X(x_1,\\dots, x_n)=\\prod_{i=1,\\dots,n}f(x_i)$.\n\n\\section{Area and Coarea Formulas}\\label{Section AC}\n\nIn this section we provide a brief introduction to the theory of Hausdorff measures and we collect the main results about the Area and Coarea Formulas proved by Federer in \\citep{Federer}. For the related proofs, we refer the reader, for example, to \\citep{AFPallara, Federer-book, GiaqModica} (and references therein).\n\nWe begin with the definition of the $s$-dimensional Hausdorff measure.\nLet $E\\subseteq\\mathbb{R}^n$, $\\epsilon>0$ and let $\\left(B(x_i,r_i)\\right)_{i\\in\\mathbb{N}}$ be a coverings of $E$ by a countable collections of balls $B(x_i,r_i)$ whose radii satisfy $r_i\\leq \\epsilon$. For each $s\\geq 0$, let\n\\begin{align*}\n\\sigma_s(\\epsilon)= \\frac{\\pi^{s\/2}}{\\Gamma (1 + s\/2)}\\inf\\sum_{i\\in\\mathbb{N}} r_i^s,\n\\end{align*}\nwhere the infimum is taken over all such coverings. 
The monotonicity of $\\sigma_s$, with respect to $s$, implies that there exists the limit (finite or\ninfinite)\n\\begin{align*}\n\\mathcal{H}^s(E) := \\lim_{\\epsilon\\to 0^+}\\sigma_s(\\epsilon).\n\\end{align*}\nThis limit is called the $s$-dimensional Hausdorff measure of E. The Hausdorff measure $\\mathcal{H}^s$ satisfies Caratheodory's criterion therefore, the $\\sigma$-algebra of all the $\\mathcal{H}^s$-measurable sets contains all the Borel subsets of $\\mathbb{R}^n$ (see e.g. \\citep[Proposition 2.49]{AFPallara}).\n\n\nIf $0 n$ and it can be seen as a generalization of the Fubini's theorem about the reduction of integrals. For its proof we refer the reader to \\citep[Theorem 3.2.3]{Federer}, \\citep[Theorem 2.86]{GiaqModica}. \n\\begin{teo}[Coarea formula]\\label{Coarea}\nLet $\\phi:\\mathbb{R}^k\\to\\mathbb{R}^n$ be a locally Lipschitz map with $k> n$.\n\\begin{itemize}\n\\item[(i)]For any $\\mathcal{L}^k$-measurable set $E\\subseteq \\mathbb{R}^k$ the function $y\\mapsto \\mathcal{H}^{k-n}\\left(E\\cap \\phi^{-1}(y)\\right)$ is $\\mathcal{L}^n$-measurable in $\\mathbb{R}^n$ and\n\\begin{align*}\n\\int_E J_n\\phi(x)\\,dx=\\int_{\\mathbb{R}^n}\\mathcal{H}^{k-n}\\left(E\\cap \\phi^{-1}(y)\\right)\\,dy.\n\\end{align*}\n\\item[(ii)] If $u$ is a positive measurable function, or $uJ_n\\phi\\in L^1\\left(\\mathbb{R}^k\\right)$, then \n\\begin{align*}\n\\int_{\\mathbb{R}^k}u(x) J_n\\phi(x)\\,dx=\\int_{\\mathbb{R}^n}\\int_{\\phi^{-1}(y)}u(x)\\,d\\mathcal{H}^{k-n}(x)\\;dy.\n\\end{align*}\n\\end{itemize} \n\\end{teo}\nWhen $\\phi$ is an orthogonal projection \n(e.g. $\\phi(x_1,\\cdots, x_k)=(x_{i_1},\\cdots, x_{i_n})$ where $\\{i_1,\\cdots, i_n\\}\\subseteq \\{1,\\cdots, k\\}$), then $J_n\\phi=1$, the level sets of $\\phi$ are $(n-k)$-planes and the last formula corresponds to Fubini's theorem. 
\n\n\nApplying Theorem \ref{Coarea} in the particular case $n=1$, one has $J_1\phi(x)=|\nabla \phi(x)|$ and the formula in (ii) becomes\n\begin{align}\label{1d-Coarea}\n\int_{\mathbb{R}^k}u(x)\, |\nabla \phi(x)|\,dx=\int_{\mathbb{R}}\int_{\phi^{-1}(y)}u(x)\,d\mathcal{H}^{k-1}(x)\;dy.\n\end{align}\nIn the special case $\phi(x)=|x|$, $J_1\phi(x)=1$ for every $x\neq 0$ and, since the map sending $x\mapsto rx$ changes $\mathcal{H}^{k-1}$ by the factor $r^{k-1}$ (see e.g. \citep[Proposition 2.49]{AFPallara}), one has \n\begin{align*}\n\int_{\mathbb{R}^k}u(x)\,dx=\int_0^\infty\int_{|x|=r}u(x)\,d\mathcal{H}^{k-1}(x)\;dr=\int_0^\infty r^{k-1}\int_{|x|=1}u(rx)\,d\mathcal{H}^{k-1}(x)\;dr.\n\end{align*}\n\n \n\n\bigskip\n\nWe end the section by stating a generalization of the Coarea Formula to the case where the Lebesgue measure on the right hand side of the equations in Theorem \ref{Coarea} is replaced by the Hausdorff measure $\mathcal{H}^m$, where $m:=\max\{\mbox{rank }(J\phi(x)):x\in\mathbb{R}^k\}$. For simplicity we suppose $\phi$ to be $C^1$-differentiable.\n\begin{teo}\label{Coarea gen}\n\tLet $k,n\in\mathbb{N}$ and let $\phi:\mathbb{R}^k\to\mathbb{R}^n$ be a $C^1$-map and let $m:=\max\{\mbox{rank }(J\phi(x)):x\in\mathbb{R}^k\}$. 
The following properties hold.\n\t\\begin{itemize}\n\t\t\\item[(i)] For every $\\mathcal{L}^k$-measurable set $E\\subseteq \\mathbb{R}^k$ the function $y\\mapsto \\mathcal{H}^{k-m}\\left(E\\cap \\phi^{-1}(y)\\right)$ is $\\mathcal{H}^m$-measurable in $\\mathbb{R}^n$ and one has\n\t\t\\begin{align*}\n\t\t\t\\int_E J_m\\phi(x)\\,dx=\\int_{\\mathbb{R}^n}\\mathcal{H}^{k-m}\\left(E\\cap \\phi^{-1}(y)\\right)\\,d\\mathcal{H}^m(y).\n\t\t\\end{align*}\n\t\t\\item[(ii)] If $u$ is a positive measurable function, or $uJ_m\\phi\\in L^1\\left(E\\right)$, then \n\t\t\\begin{align*}\n\t\t\t\\int_{E}u(x) J_m\\phi(x)\\,dx=\\int_{\\mathbb{R}^n}\\int_{\\phi^{-1}(y)\\cap E}u(x)\\,d\\mathcal{H}^{k-m}(x)\\;d\\mathcal{H}^m( y).\n\t\t\\end{align*}\n\t\\end{itemize} \n\\end{teo}\n{\\sc{Proof.}} The proof is a consequence of \\citep[Theorem 5.1, Theorem 5.2]{KristensenABridge}.\\qed\n\n\n\n\n\\bigskip\nThe next Remark clarifies some positivity properties about $k$-dimensional Jacobians. It can be seen as a generalization of Sard's Theorem (see also \\citep[Lemma 2.73, Lemma 2.96, Remark 2.97]{AFPallara} and \\citep[Theorem 1.1]{KristensenABridge}).\n\n\n\n\\begin{os}\\label{oss J>0}\nLet $\\phi:\\mathbb{R}^k\\to \\mathbb{R}^n$ be a locally Lipschitz map and let us suppose, preliminarily, that $J\\phi(x)$ has maximum rank for a.e. $x\\in\\mathbb{R}^k$.\n\\begin{itemize}\n\\item[(i)] if $k\\leq n$, then using (i) of Theorem \\ref{Area} with $E:=\\{x\\in\\mathbb{R}^k\\,:\\,J_k\\phi(x)=0\\}$ we get\n\\begin{align*}\n\\int_{\\mathbb{R}^n}\\mathcal{H}^0\\left(E\\cap \\phi^{-1}(y)\\right)\\,d\\mathcal{H}^k(y)=0.\n\\end{align*}\nThis implies $\\mathcal{H}^0\\left(E\\cap \\phi^{-1}(y)\\right)=0$ (i.e. $\\phi(E)\\cap \\{y\\}=\\emptyset $) for $\\mathcal{H}^k$-a.e. $y\\in\\mathbb{R}^n$. This yields $\\mathcal{H}^k\\left(\\phi(E)\\right)$=0 and it implies, in particular, that $J_k\\phi>0$ on $\\phi^{-1}(y)$ for $\\mathcal{H}^k$-a.e. 
$y\\in\\mathbb{R}^n$.\n\\item[(ii)] if $k\\geq n$, then using (i) of Theorem \\ref{Coarea} with $E:=\\{x\\in\\mathbb{R}^k\\,:\\,J_n\\phi(x)=0\\}$ we get\n\\begin{align*}\n\\int_{\\mathbb{R}^n}\\mathcal{H}^{k-n}\\left(E\\cap \\phi^{-1}(y)\\right)\\,dy=0.\n\\end{align*}\nThis yields $\\mathcal{H}^{k-n}\\left(E\\cap \\phi^{-1}(y)\\right)=0$ for a.e. $y\\in\\mathbb{R}^n$ and it implies, in particular, that $J_n\\phi>0$ $\\mathcal{H}^{k-n}$-a.e on $\\phi^{-1}(y)$ for a.e. $y\\in\\mathbb{R}^n$.\n\\end{itemize}\n\\medskip\nLet us suppose, now, $\\phi:\\mathbb{R}^k\\to\\mathbb{R}^n$ to be $C^1$ and let $m:=\\max\\{\\mbox{rank }(J\\phi(x)):x\\in\\mathbb{R}^k\\}$. Then setting $E:=\\{x\\in\\mathbb{R}^k\\,:\\,J_m\\phi(x)=0\\}$ and using Theorem \\ref{Coarea gen} we get\n\\begin{align*}\n\\int_{\\mathbb{R}^n}\\mathcal{H}^{k-m}\\left(E\\cap \\phi^{-1}(y)\\right)\\,d\\mathcal{H}^m(y)=0.\n\\end{align*}\nThis implies that $J_m\\phi>0$ $\\mathcal{H}^{k-m}$-a.e on $\\phi^{-1}(y)$ for $\\mathcal{H}^m$-a.e. $y\\in\\mathbb{R}^n$.\n\n\n\\end{os}\n\\section{Sample Distribution Theory}\\label{Section Distribution}\nLet $\\left(\\Omega,\\Sigma,p\\right)$ be a probability measure space, let $k\\in\\mathbb{N}$ and let $X:\\Omega\\to \\mathbb{R}^k$ be an absolutely continuous random variable. Let $Y:=\\phi\\circ X$ be a statistic, where $\\phi:\\mathbb{R}^k\\to \\mathbb{R}^n$ is a measurable map and $k\\in\\mathbb{N}$. 
In this section we prove that when $\phi$ is locally Lipschitz then \nthe probability measure $p_Y$ induced by $Y$ has a density function, with respect to some Hausdorff measure $\mathcal{H}^m$ on $\phi(\mathbb{R}^k)\subseteq\mathbb{R}^n$, which can be computed explicitly in terms of an integral involving the density function $f_X$ of $X$.\n We recall preliminarily that the Radon-Nikodym derivative of a measure is uniquely defined up to zero measure sets: since, by definition, $p_Y$ is concentrated on $\phi(\mathbb{R}^k)$, in what follows we can always set $f_Y(y)=0$ for $y\notin\phi(\mathbb{R}^k)$. \n\n\nWe start with the case $k\leq n$.\n\n\n\begin{teo}\label{teo k<n}\nLet $\left(\Omega,\Sigma,p\right)$ be a probability measure space, let $k,n \in\mathbb{N}$ with $k\leq n$ and let $X:\Omega\to \mathbb{R}^k$ be an absolutely continuous random variable with probability density function $f_X$. If $\phi:\mathbb{R}^k\to \mathbb{R}^n$ is a locally Lipschitz map such that $\mbox{rank }(J\phi)=k$ a.e., then the probability measure $p_Y$ induced by the statistic $Y:=\phi\circ X$ has a density function $f_Y$ with respect to the Hausdorff measure $\mathcal{H}^k$ which satisfies\n\begin{align*}\nf_Y(y)=\int_{\phi^{-1}(y)}f_X(x)\frac{1}{J_k\phi(x)}\,d\mathcal{H}^0(x)=\sum_{x\in \phi^{-1}(y)}f_X(x)\frac{1}{J_k\phi(x)}, &\quad \text{for $\mathcal{H}^k$-a.e.}\quad y\in\phi(\mathbb{R}^k)\n\end{align*}\nand it is $0$ otherwise. If, moreover, $\mathbb{R}^k=\bigcup_{i}E_i$ for countably many pairwise disjoint measurable sets $E_i$ such that every restriction $\phi_i:=\phi_{\vert E_i}$ is injective, then\n\begin{align*}\nf_Y(y)=\sum_{i}f_X(\phi_i^{-1}(y))\frac{1}{J_k\phi(\phi_i^{-1}(y))}, &\quad \text{for $\mathcal{H}^k$-a.e.}\quad y\in\phi(\mathbb{R}^k).\n\end{align*}\n\end{teo}\n{\sc{Proof.}} Recalling Definition \ref{k-Jac} and Remark \ref{oss J>0}, $J_k\phi>0$ on $\phi^{-1}(y)$ for $\mathcal{H}^k$-a.e. $y\in\mathbb{R}^n$. Then using the Area Formula of Theorem \ref{Area} one has\n\begin{align*}\np_Y(A)&=p\left(Y^{-1}(A)\right)=p\left(X^{-1}\left(\phi^{-1}(A)\right)\right)=\int_{\phi^{-1}(A)}f_X(x)\,dx\\[1ex]\n&=\int_{\mathbb{R}^n}\int_{\phi^{-1}(y)\cap \phi^{-1}(A)}f_X(x)\frac{1}{J_k\phi(x)}\,d\mathcal{H}^0(x)\;d\mathcal{H}^k(y)\\[1ex]\n&=\n\int_{A}\int_{\phi^{-1}(y)}f_X(x)\frac{1}{J_k\phi(x)}\,d\mathcal{H}^0(x)\;d\mathcal{H}^k(y)=\int_{A}\sum_{x\in \phi^{-1}(y)}f_X(x)\frac{1}{J_k\phi(x)}\;d\mathcal{H}^k(y).\n\end{align*}\nThis proves the first claim. The second assertion follows after observing that, under the given hypothesis, $\phi^{-1}(y)=\bigcup_i \{\phi_i^{-1}(y)\}$ for every $y\in \phi(\mathbb{R}^k)$.\n\\\qed\n\begin{os}\nWe note that if $k<n$ then $\mathcal{L}^n\left(\phi(\mathbb{R}^k)\right)=0$, so that $p_Y$ is singular with respect to the Lebesgue measure $\mathcal{L}^n$: this is why the Hausdorff measure $\mathcal{H}^k$ is the natural reference measure in this case.\n\end{os}\n\nIn the particular case $k=n$ one has $J_k\phi=|\mbox{det}J\phi|$ and $\mathcal{H}^k=\mathcal{L}^k$, and the previous theorem gives back the classical change of variable formula.\n\begin{cor}\label{teo k=n}\nLet $\left(\Omega,\Sigma,p\right)$ be a probability measure space, let $k\in\mathbb{N}$ and let $X:\Omega\to \mathbb{R}^k$ be an absolutely continuous random variable with probability density function $f_X$. If $\phi:\mathbb{R}^k\to \mathbb{R}^k$ is a locally Lipschitz map such that $|\mbox{det}J\phi|>0$ a.e., then the statistic $Y:=\phi\circ X$ is an absolutely continuous random variable and its probability density function satisfies\n\begin{align*}\nf_Y(y)=\sum_{x\in \phi^{-1}(y)}f_X(x)\frac{1}{|\mbox{det}J\phi(x)|}, &\quad \text{for a.e.}\quad y\in\phi(\mathbb{R}^k).\n\end{align*}\n\end{cor}\n\nWe consider now the case $k>n$.\n\n\begin{teo}\label{teo k>n}\nLet $\left(\Omega,\Sigma,p\right)$ be a probability measure space, let $k,n \in\mathbb{N}$ with $k\geq n$ and let $X:\Omega\to \mathbb{R}^k$ be an absolutely continuous random variable with probability density function $f_X$. 
If $\\phi:\\mathbb{R}^k\\to \\mathbb{R}^n$ is a locally Lipschitz map such that $\\mbox{rank }(J\\phi)=n$ a.e., then the statistic $Y:=\\phi\\circ X$ is an absolutely continuous random variable (i.e. $p_{Y}\\ll \\mathcal{L}^n$) and its probability density function $f_Y$ satisfies \n\\begin{align*}\nf_Y(y)=\n\\int_{\\phi^{-1}(y)}f_X(x)\\frac{1}{J_n\\phi(x)}\\,d\\mathcal{H}^{k-n}(x), &\\quad \\text{for a.e.}\\quad y\\in\\phi(R^k)\n\\end{align*} \nand it is $0$ otherwise.\n\\end{teo}\n{\\sc{Proof.}} The case $k=n$ is the result of the previous Corollary. Let us suppose $k>n$. Recalling Definition \\ref{k-Jac} and Remark \\ref{oss J>0}, $J_n\\phi>0$ $\\mathcal{H}^{k-n}$-a.e on $\\phi^{-1}(y)$ for a.e. $y\\in\\mathbb{R}^n$. Let $A\\subseteq \\mathbb{R}^n$ be a Borel set. Then using the Coarea Formula of Theorem \\ref{Coarea} one has\n\\begin{align*}\np_Y(A)&=p\\left(Y^{-1}(A)\\right)=p\\left(X^{-1}\\left(\\phi^{-1}(A)\\right)\\right)\\\\[1ex]\n&=\\int_{\\phi^{-1}(A)}f_X(x)\\,dx=\\int_{\\mathbb{R}^n}\\int_{\\phi^{-1}(y)\\cap \\phi^{-1}(A)}f_X(x)\\frac{1}{J_n\\phi(x)}\\,d\\mathcal{H}^{k-n}(x)\\;dy\\\\[1ex]\n&=\n\\int_{A}\\int_{\\phi^{-1}(y)}f_X(x)\\frac{1}{J_n\\phi(x)}\\,d\\mathcal{H}^{k-n}(x)\\;dy.\n\\end{align*}\nThis proved the required claim.\\\\\\qed\n\nFor the reader's convenience we enlighten in the following corollary the particular case $n=1$ which is very useful in the applications and which follows from formula \\eqref{1d-Coarea}. \n\n\\begin{cor}\\label{teo k>1}\nLet $\\left(\\Omega,\\Sigma,p\\right)$ be a probability measure space, let $k\\in\\mathbb{N}$ and let $X:\\Omega\\to \\mathbb{R}^k$ be an absolutely continuous random variable with probability density function $f_X$. 
If $\\phi:\\mathbb{R}^k\\to \\mathbb{R}$ is a locally Lipschitz map such that $|\\nabla \\phi|>0$ a.e., then the statistic $Y:=\\phi\\circ X$ is an absolutely continuous random variable and has probability density function \n\\begin{align*}\nf_Y(y)=\n\\int_{\\phi^{-1}(y)}f_X(x)\\frac{1}{|\\nabla\\phi(x)|}\\,d\\mathcal{H}^{k-1}(x), &\\quad \\text{for a.e.}\\quad y\\in\\phi(R^k).\n\\end{align*} \n\\end{cor}\n\\medskip\n\\begin{os}The assumptions $p(\\Omega)=1$ was never used in the proof of the Theorems \\ref{teo kn}. Indeed analogous results hold with $p_X$ replaced by an absolutely continuous measure on $\\mathbb{R}^k$. More precisely let $\\mu$ be a measure defined on $\\left(\\mathbb{R}^k,\\mathcal{B}\\right)$ such that $\\mu \\ll \\mathcal L^k$ and let $\\frac{d\\mu}{d\\mathcal L^k}$ its Radon-Nykodym derivative. Let $\\phi:\\mathbb{R}^k\\to\\mathbb{R}^n$ be a locally Lipschitz map whose Jacobian matrix $J\\phi$ has a.e. maximum rank.\n\\begin{itemize}\n\\item[(i)] If $k\\leq n$ then $\\mu \\phi^{-1} \\ll \\mathcal{H}^k$ and for $\\mathcal{H}^k$-a.e. $y\\in \\phi(\\mathbb{R}^k)$one has\n\\begin{align*}\n\\frac{d\\mu \\phi^{-1}}{d\\mathcal \\mathcal{H}^k}(y)=\\int_{\\phi^{-1}(y)}\\frac{d\\mu}{d\\mathcal L^k}(x)\\frac{1}{J_k\\phi(x)}\\,d\\mathcal{H}^0(x)=\\sum_{\\phi(x)=y}\\frac{d\\mu}{d\\mathcal L^k}(x)\\frac{1}{J_k\\phi(x)}.\n\\end{align*}\n\\item[(ii)] If $k> n$ then $\\mu \\phi^{-1} \\ll \\mathcal{L}^n$ and for a.e. $y\\in\\mathbb{R}^n$ one has\n\\begin{align*}\n\\frac{d\\mu \\phi^{-1}}{d\\mathcal{L}^n}(y)=\\int_{\\phi^{-1}(y)}\\frac{d\\mu}{d\\mathcal{L}^k}(x)\\frac{1}{J_n\\phi(x)}\\,d\\mathcal{H}^{k-n}(x).\n\\end{align*}\n\\end{itemize} \n\\end{os}\n\\bigskip\n\nWe end the section by applying Theorem \\ref{Coarea gen} in order to extend Theorem \\ref{teo k>n} to the case of a $C^1$-map $\\phi$ whose Jacobian could possibly have not maximum rank. 
In this case, setting $m:=\max\{\mbox{rank }(J\phi(x)):x\in\mathbb{R}^k\}$, the induced probability $p_Y$ has a density function $f_Y$ with respect to the Hausdorff measure $\mathcal{H}^m$ on $\phi(\mathbb{R}^k)\subseteq\mathbb{R}^n$.\n\n\n\begin{teo}\label{teo Hausdorff}\nLet $\left(\Omega,\Sigma,p\right)$ be a probability measure space, let $k,n \in\mathbb{N}$ \n and let $X:\Omega\to \mathbb{R}^k$ be an absolutely continuous random variable with probability density function $f_X$. Let $\phi:\mathbb{R}^k\to \mathbb{R}^n$ be a $C^1$-map and let $m:=\max\{\mbox{rank }(J\phi(x)):x\in\mathbb{R}^k\}$. Then the induced probability measure $p_Y$ of the statistic $Y:=\phi\circ X$ has a density function $f_Y$ with respect to the Hausdorff measure $\mathcal{H}^m$ which satisfies \n\begin{align*}\nf_Y(y)=\n\int_{\phi^{-1}(y)}f_X(x)\frac{1}{J_m\phi(x)}\,d\mathcal{H}^{k-m}(x), &\quad \text{for $\mathcal{H}^m$-a.e.}\quad y\in\phi(\mathbb{R}^k)\n\end{align*} \nand it is $0$ otherwise.\n\end{teo}\n{\sc{Proof.}} Recalling Definition \ref{k-Jac} and Remark \ref{oss J>0}, $J_m\phi>0$ $\mathcal{H}^{k-m}$-a.e. on $\phi^{-1}(y)$ for $\mathcal{H}^m$-a.e. $y\in\mathbb{R}^n$. Let $A\subseteq \mathbb{R}^n$ be a Borel set. 
Then using Theorem \\ref{Coarea gen} one has\n\\begin{align*}\np_Y(A)&=p\\left(Y^{-1}(A)\\right)=p\\left(X^{-1}\\left(\\phi^{-1}(A)\\right)\\right)\\\\[1ex]\n&=\\int_{\\phi^{-1}(A)}f_X(x)\\,dx=\\int_{\\mathbb{R}^n}\\int_{\\phi^{-1}(y)\\cap \\phi^{-1}(A)}f_X(x)\\frac{1}{J_m\\phi(x)}\\,d\\mathcal{H}^{k-m}(x)\\;d\\mathcal{H}^m(y)\\\\[1ex]\n&=\n\\int_{A}\\int_{\\phi^{-1}(y)}f_X(x)\\frac{1}{J_m\\phi(x)}\\,d\\mathcal{H}^{k-m}(x)\\;d\\mathcal{H}^m(y).\n\\end{align*}\nThis proves the required claim.\\\\\\qed\n\n\n\n\\section{Further generalizations}\\label{Section Generalization}\n\nIn this section we briefly expose, for the interested reader, a further generalization of Theorem \\ref{teo k>n} which covers the case of probabilities $p_X$ concentrated on some $m$-dimensional subset $E\\subseteq\\mathbb{R}^k$. This includes, for example, the cases of a random variable $X$ which is uniformly distributed over a generic subset $E$ and of random variables taking values on $m$-dimensional manifolds. Statistical analysis on manifolds \n\thas many applications in directional and axial statistics, morphometrics, medical\n\tdiagnostics, machine vision and image analysis (see e.g \\citep{Bhattacharya} and references there in). Amongst the many important applications, those arising, for example, from the analysis of data on torus play also a fundamental role in molecular biology in the study of the Protein Folding Problem. \n\nAlthough the following Theorem is valid for countably $m$-rectifiable sets (see \\citep[Definition 5.4.1, Lemma 5.4.2]{Krantz-Parks}), we suppose for simplicity $E$ to be an $m$-dimensional sub-manifold of $\\mathbb{R}^k$. \n\nWe state first the Area and Coarea formula relative to sub-manifolds of $\\mathbb{R}^k$. If $J\\phi^E$ is the tangential Jacobian matrix of $\\phi$ with respect to $E$ (see \\citep[Formula 11.1]{Maggi}), the $k$-tangential Jacobian $J_k^E\\phi$ is defined as in Definition \\ref{k-Jac}. 
For a rigorous introduction to tangential Jacobians as well as for all the other details we refer the reader to \\citep[Chapter 3]{Federer}, \\citep[Section 5.3]{Krantz-Parks}, \\citep[Section 11.1]{Maggi}.\n\n\\begin{teo}\nLet $\\phi:\\mathbb{R}^k\\to\\mathbb{R}^n$ be a $C^1$-map and let $E\\subseteq\\mathbb{R}^k$ an $m$-dimensional manifold. The following properties hold.\n\\begin{itemize}\n\\item[(i)]If $m\\leq n$ and $u$ is a positive measurable function, or $uJ_m^E\\phi\\in L^1\\left(E,\\mathcal{H}^m\\right)$, one has \n\\begin{align*}\n\\int_{E}u\\, J_m^E\\phi\\,d\\mathcal{H}^m(x)=\\int_{\\mathbb{R}^n}\\int_{E\\cap \\phi^{-1}(y)}u\\,d\\mathcal{H}^{0}(x)\\,d\\mathcal{H}^{m}(y).\n\\end{align*}\n\\item[(ii)]\nIf $m\\geq n$ and $u$ is a positive measurable function, or $uJ_n^E\\phi\\in L^1\\left(E,\\mathcal{H}^m\\right)$,one has\n\\begin{align*}\n\\int_{E}u\\, J_n^E\\phi\\,d\\mathcal{H}^m(x)=\\int_{\\mathbb{R}^n}\\int_{E\\cap \\phi^{-1}(y)}u\\,d\\mathcal{H}^{m-n}(x)\\,dy.\n\\end{align*}\n\\end{itemize}\n\\end{teo}\n{\\sc{Proof.}} See \\citep[Theorem 2.91, 2.93]{Federer} and \\citep[Theorem 2.91, Theorem 2.93]{AFPallara}.\\qed\n\\bigskip\nThe same methods of proof used in Section \\ref{Section Distribution}, yield finally the next result. Note that every $m$-dimensional manifold $E\\subseteq\\mathbb{R}^k$ has $\\mathcal{H}^m$-$\\sigma$-finite measure.\n\\begin{teo}\nLet $\\left(\\Omega,\\Sigma,p\\right)$ be a probability measure space, let $k,n \\in\\mathbb{N}$ and let $E\\subseteq\\mathbb{R}^k$ an $m$-dimensional manifold. Let $X:\\Omega\\to E$ be a random variable having a probability density function $f_X$ with respect to the Hausdorff measure $\\mathcal{H}^m_{\\vert E}$ on $E$. Let $\\phi:\\mathbb{R}^k\\to \\mathbb{R}^n$ be a $C^1$-map whose tangential Jacobian $J^E\\phi(x)$ has maximum rank at any point $x\\in E$. 
The following properties hold.\n\\begin{itemize}\n\\item[(i)] If $m\\leq n$ then the probability measure $p_Y$ induced by the statistic $Y:=\\phi\\circ X$ is absolutely continuous with respect to the Hausdorff measure $\\mathcal{H}^m_{\\vert\\phi(E)}$ on $\\phi(E)$ and its density function $f_Y$ satisfies\n\\begin{align*}\nf_Y(y)&=\n\\int_{\\phi^{-1}(y)}f_X(x)\\frac{1}{J_m^E\\phi(x)}\\,d\\mathcal{H}^0(x)=\\sum_{\\phi(x)=y}f_X(x)\\frac{1}{J_m^E\\phi(x)}, &\\quad \\text{for $\\mathcal{H}^m$-a.e.}\\quad y\\in\\phi(E).\n\\end{align*}\n\\item[(ii)] If $m\\geq n$ then the statistic $Y:=\\phi\\circ X$ is an absolutely continuous random variable (i.e. $p_{Y}\\ll \\mathcal{L}^n$) and its probability density function $f_Y$ satisfies \n\\begin{align*}\nf_Y(y)=\n\\int_{\\phi^{-1}(y)}f_X(x)\\frac{1}{J_n^E\\phi(x)}\\,d\\mathcal{H}^{m-n}(x), &\\quad \\text{for a.e.}\\quad y\\in\\phi(E).\n\\end{align*} \n\\end{itemize}\n\\end{teo}\n\n\\section{Some applications}\\label{Section apll}\n\n\nIn this section we apply the results of the previous sections in order to compute the density functions of some distributions in some cases of relevant interest.\n\\subsection{First examples}\nIn these first examples we provide a density formula for random variables which are algebraic manipulations of absolutely continuous random variables. The first example, in particular, is used in Proposition \\ref{section chi squared} in order to find the density of the chi-squared distribution.\n\\begin{esem}[Square function]\\label{Square function}\nLet $\\left(\\Omega,\\Sigma,p\\right)$ be a probability measure space and let $X:\\Omega\\to \\mathbb{R}$ be an absolutely continuous random variable with probability density function $f_X$. 
We employ Corollary \\ref{teo k=n} with \\begin{align*}\n\\phi:\\mathbb{R}\\to [0,\\infty[,\\quad \\phi(t)=t^2,\\quad J_1(t)=2|t|.\n\\end{align*}\nThen the statistic $Y:=X^2$ is an absolutely continuous random variable and its probability density function $f_Y$ satisfies \n\\begin{align*}\nf_Y(y)=\\frac{f_X(\\sqrt y)+f_X\\left(-\\sqrt y\\right)}{2\\sqrt y},\\quad \\text{for any}\\quad y>0.\n\\end{align*}\nMore generally let $k\\in\\mathbb{N}$ and let $X=(X_1,\\dots, X_k):\\Omega\\to \\mathbb{R}^k$ be an absolutely continuous (vector valued) random variable with probability density function $f_X$. Then employing Theorem \\ref{teo k>n} with \n\\begin{align*}\n\\phi:\\mathbb{R}^k\\to [0,\\infty[,\\quad \\phi(x)=\\|x\\|^2=x_1^2+\\cdots+x_k^2,\\quad J_1(x)=2\\|x\\|,\n\\end{align*}\n we get that the statistic $Y:=\\|X\\|^2=X_1^2+\\cdots+X_k^2$ is an absolutely continuous random variable whose probability density function satisfies \n\\begin{align*}\nf_Y(y)&=\\int_{\\|x\\|^2=y}\\frac{f_X(x)}{2\\|x\\|}\\,d\\mathcal{H}^{k-1}(x)=\\frac{1}{2\\sqrt y}\\int_{\\|x\\|=\\sqrt y}f_X(x)\\,d\\mathcal{H}^{k-1}(x),\\quad \\text{for any}\\quad y>0.\n\\end{align*}\n\\end{esem}\n\n\n\\begin{esem}[Affine transformations]\\label{Affine transformations}\nLet $\\left(\\Omega,\\Sigma,p\\right)$ be a probability measure space, let $k\\in\\mathbb{N}$ and let $X:\\Omega\\to \\mathbb{R}^k$ be an absolutely continuous random variable with probability density function $f_X$. Let us consider the affine transformation \n\\begin{align*}\n\\phi:\\mathbb{R}^k\\to \\mathbb{R}^n,\\quad \\phi(x)=Ax+y_0,\n\\end{align*}\nwhere $A\\in\\mathbb{R}^{n\\times k}$, $\\mbox{rank}(A)=m$ and $y_0\\in\\mathbb{R}^n$. 
Recalling Definition \\ref{k-Jac}, the $m$-dimensional Jacobian of $\\phi$ is given by\n\\begin{align*}\n\tJ_m\\phi(x)=\\sqrt{\\sum_{B}(\\mbox{det\\,}B)^2}=:A_m\n\\end{align*}\nwhere the sum runs over all $m\\times m$ minors $B$ of $A$.\n Then, using Theorem \\ref{teo Hausdorff}, the induced probability measure $p_Y$ of the statistic $Y:=AX+y_0$ has a density function $f_Y$ with respect to the Hausdorff measure $\\mathcal{H}^m$ on the $m$-dimensional affine subspace $\\phi\\left(\\mathbb{R}^k\\right)=\\{y=Ax+y_0:x\\in\\mathbb{R}^k\\}$ which satisfies \n\\begin{align*}\nf_Y(y)&=\n\\frac{1}{A_m}\\int_{Ax+y_0=y}f_X(x)\\,d\\mathcal{H}^{k-m}(x)\n\\\\[1ex]&=\\frac{1}{A_m}\\int_{\\mbox{Ker}(A)+x_y}f_X(x)\\,d\\mathcal{H}^{k-m}(x)\n, \\quad \\text{for}\\quad y\\in\\phi(\\mathbb{R}^k).\n\\end{align*} \nHere for $y\\in\\phi(\\mathbb{R}^k)$, $x_y\\in\\mathbb{R}^k$ is any fixed solution of the equation $y=Ax_y+y_0$.\n\n\n When $m=n$, we have $A_n=\\sqrt{\\mbox{det}(AA^T)}$ and the map $\\phi$ is surjective, i.e. $\\phi(\\mathbb{R}^k)=\\mathbb{R}^n$. In this case Theorem \\ref{teo k>n} implies that $p_Y\\ll\\mathcal L ^n$, i.e. $Y$ is an absolutely continuous random variable. If moreover $k=n$ and $A\\in\\mathbb{R}^{k\\times k}$ is non-singular then $A_k=|\\mbox{det } A|$ and in this case we have \n\\begin{align*}\nf_Y(y)=\n\\frac{1}{|\\mbox{det } A|}\\int_{Ax+y_0=y}f_X(x)\\,d\\mathcal{H}^{0}(x)=\\frac{f_X(A^{-1}(y-y_0))}{|\\mbox{det } A|}, &\\quad \\text{for}\\quad y\\in \\mathbb{R}^k.\n\\end{align*} \n\\end{esem}\n\n\n\n\n\\begin{esem}[Sum of variables and Sample mean]\nLet $\\left(\\Omega,\\Sigma,p\\right)$ be a probability measure space, let $k\\in\\mathbb{N}$ and let $X=\\left(X_1,\\dots, X_k\\right):\\Omega\\to \\mathbb{R}^k$ be an absolutely continuous random variable with probability density function $f_X$. 
We employ Corollary \\ref{teo k>1} with \n\\begin{align*}\n\\phi:\\mathbb{R}^k\\to \\mathbb{R},\\quad \\phi(t)=\\sum_{i=1}^k t_i,\\quad J_1(t)=|\\nabla\\phi|=\\sqrt k.\n\\end{align*}\nThen the statistic $Y:=\\sum_{i=1}^k X_i$ is an absolutely continuous random variable and its probability density function $f_Y$ satisfies \n\\begin{align*}\nf_Y(y)=\n\\int_{\\sum_{i=1}^k x_i=y}\\frac{f_X(x)}{\\sqrt k}\\,d\\mathcal{H}^{k-1}(x), &\\quad \\text{for a.e.}\\quad y\\in \\mathbb{R}.\n\\end{align*} \nLet us set $x^{k-1}:=\\left(x_1,\\dots,x_{k-1}\\right)$ and let $\\psi(x^{k-1})=\\left(x^{k-1},\\,y-\\sum_{i=1}^{k-1}x_i\\right)$ be a parametrization of the hyperplane $\\sum_{i=1}^k x_i=y$. Using the area formula \\eqref{parametrized manifold}, the last integral becomes\n\\begin{align*}\nf_Y(y)=\n\\int_{\\mathbb{R}^{k-1}}f_X\\Big(x^{k-1},\\,y-\\sum_{i=1}^{k-1}x_i\\Big)\\,dx^{k-1}, &\\quad \\text{for a.e.}\\quad y\\in \\mathbb{R}.\n\\end{align*}\nIn the particular case $k=2$ and if $X_1$, $X_2$ are independent, the last formula gives the well known convolution form for the distribution of the random variable $X_1+X_2$:\n\\begin{align*}\nf_{X_1+X_2}(y)=\n\\int_{\\mathbb{R}}f_{X_1}(t)f_{X_2}(y-t)\\,dt, &\\quad \\text{for a.e.}\\quad y\\in \\mathbb{R},\n\\end{align*}\nwhere $f_{X_1}, f_{X_2}$ are the density functions of $X_1$ and $X_2$, respectively.\n\n \nMoreover if $X_1,\\dots,X_k$ are identically distributed and independent with common probability density function $f:\\mathbb{R}\\to\\mathbb{R}$, then (using also Example \\ref{Affine transformations} with $\\phi(x)=\\frac 1 kx$), the density function of the sample mean $Z:=\\frac{1}{k}\\sum_{i=1}^k X_i$ is\n\\begin{align*}\nf_Z(y)= k\\, f_Y\\left(ky\\right)= k\n\\int_{\\sum\\limits_{i=1}^k x_i= k y}\\;\\prod_{i=1}^k f(x_i)\\,d\\mathcal{H}^{k-1}(x), &\\quad \\text{for a.e.}\\quad y\\in \\mathbb{R}.\n\\end{align*}\n\\end{esem}\n\n\\begin{esem}[Product and ratio of random variables]\\label{ratio}\nLet 
$\\left(\\Omega,\\Sigma,p\\right)$ be a probability measure space and let $X:\\Omega\\to \\mathbb{R}^2,\\, X=\\left(X_1,X_2\\right)$ be an absolutely continuous random variable with probability density function $f_X$. \n\\begin{itemize}\n\\item[(i)] Let us employ Corollary \\ref{teo k>1} with \n\\begin{align*}\n\\phi:\\mathbb{R}^2\\to \\mathbb{R},\\quad \\phi(x_1,x_2)=x_1x_2,\\quad J_1\\phi(x_1,x_2)=|\\nabla\\phi(x_1,x_2)|=\\sqrt{x_1^2+x_2^2}.\n\\end{align*}\nThen the statistic $X_1X_2$ is an absolutely continuous random variable whose probability density function satisfies\n\\begin{align*}\nf_{X_1X_2}(y)=\n\\int_{x_1x_2=y}\\frac{f_X(x_1,x_2)}{\\sqrt {x_1^2+x_2^2}}\\,d\\mathcal{H}^{1}(x_1,x_2)=\\int_{\\mathbb{R}\\setminus\\{0\\}}f_X\\Big(t,\\frac y t\\Big)\\frac{1}{|t|}\\,dt, &\\quad \\text{for a.e.}\\quad y\\in \\mathbb{R},\n\\end{align*} \nwhere we parametrized the hyperbola $x_1x_2=y$ by $\\psi(t)=\\left(t,\\frac y t\\right)$ and we used Formula \\eqref{parametrized manifold} to evaluate the last integral.\n\n\\item[(ii)] Let us suppose $X_2\\neq 0$ a.e. and let us employ Corollary \\ref{teo k>1} with \n\\begin{align*}\n\\phi:\\mathbb{R}^2\\to \\mathbb{R},\\quad \\phi(x_1,x_2)=\\frac{x_1}{x_2},\\quad J_1\\phi(x_1,x_2)=|\\nabla\\phi(x_1,x_2)|=\\frac 1 {x_2^2}\\sqrt{x_1^2+x_2^2}.\n\\end{align*}\nThen the statistic $\\frac{X_1}{X_2}$ is an absolutely continuous random variable whose probability density function satisfies\n\\begin{align*}\nf_{\\frac{X_1}{X_2}}(y)=\n\\int_{\\frac{x_1}{x_2}=y}f_X(x_1,x_2)\\frac{x_2^2}{\\sqrt {x_1^2+x_2^2}}\\,d\\mathcal{H}^{1}(x_1,x_2)=\\int_{\\mathbb{R}}f_X\\Big(ty, t\\Big)|t|\\,dt, &\\quad \\text{for a.e.}\\quad y\\in \\mathbb{R},\n\\end{align*} \nwhere we parametrized the line $x_1=yx_2$ by $\\psi(t)=\\left(ty,t\\right)$ and we used \\eqref{parametrized manifold} to evaluate the last integral.\n\\item[(iii)] Let $X:\\Omega\\to \\mathbb{R}$ be an absolutely continuous random variable such that $X\\neq 0$ a.e. 
and let $f_X$ be its probability density function. We employ Corollary \\ref{teo k=n} with \\begin{align*}\n\\phi:\\mathbb{R}\\setminus\\{0\\}\\to \\mathbb{R}\\setminus\\{0\\},\\quad \\phi(t)=\\frac 1 t,\\quad J_1(t)=|\\phi'(t)|=\\frac 1{t^2}.\n\\end{align*}\nThen the statistic $\\frac 1 {X}$ is an absolutely continuous random variable whose probability density function satisfies \n\\begin{align*}\nf_{\\frac 1 X}(y)=f_X\\left(\\frac 1 y\\right)\\frac{1}{y^2},\\quad \\text{for any}\\quad y\\neq 0.\n\\end{align*}\n\\end{itemize}\n\\end{esem}\n\\subsection{Order Statistics}\nLet $S_k$ be the set of all the permutations of the set $\\{1,\\dots,k\\}$. Let $X=\\left(X_1,\\dots, X_k\\right):\\Omega\\to \\mathbb{R}^k$ be a random variable and let us consider the map\n$$\\phi:\\mathbb{R}^k\\to\\mathbb{R}^k,\\quad x=(x_1,\\dots, x_k)\\mapsto (x_{(1)},\\dots, x_{(k)}),$$\nwhich associates to any vector $x$ its increasing rearrangement $(x_{(1)},\\dots, x_{(k)})$, i.e. $x_{(1)}\\leq \\dots\\leq x_{(k)}$. The random variable $\\phi\\circ X:= \\left(X_{(1)},\\dots, X_{(k)}\\right)$ is the random vector of the so-called \\emph{Order Statistics} of $X$. In what follows, as an easy application of the results of the previous sections, we deduce their well known density functions. We start with the following Lemma which shows, in particular, that $\\phi$ has unitary Jacobian.\n\\begin{lem}\nLet $n\\in\\mathbb{N}$ be such that $n\\leq k$ and let $I=\\{i_1,i_2,\\dots,i_n\\}\\subseteq\\{1,\\dots,k\\}$ be a subset of indexes, where $|I|=n$ and $i_1<i_2<\\dots<i_n$.\n\\end{lem}\nThe marginal densities of the single order statistics can then be obtained by means of Theorem \\ref{teo k>n} and the previous Lemma, or alternatively by integrating the joint density $f_Y$ of the order statistics. Indeed if we write $\\hat{x_i}=\\left(x_1,\\dots x_{i-1}, x_{i+1},\\dots, x_k\\right)\\in\\mathbb{R}^{k-1}$ to denote the variable $x$ without the $x_i$ component and if we set $F=\\{\\hat{x_i}\\in\\mathbb{R}^{k-1} \\;:\\; x_1<\\dots<x_{i-1}<x_i<x_{i+1}<\\dots<x_k\\}$, the marginal density of the $i$-th order statistic $X_{(i)}$ is obtained by integrating $f_Y$ over $F$.\n\n\\subsection{Gaussian distributions}\nLet $a\\in\\mathbb{R}$, $\\sigma> 0$ and let $X:\\Omega\\to \\mathbb{R}$ be a random variable. 
$X$ is said to have a \n\\emph{Normal (or Gaussian) distribution}\n $p_X$, and we write $p_X\\sim \\mathcal N\\left(a,\\sigma^2\\right)$, if $p_X$ has density\n \\begin{align*}\n f_X(t)=\\frac{1}{\\sqrt{2\\pi\\sigma^2}}\n \\exp\\left(-\\frac{|t-a|^2}{2\\sigma^2}\\right),\\quad t\\in\\mathbb{R}.\n \\end{align*}\nWhen $\\sigma=0$ we also write $p_X\\sim \\mathcal N\\left(a,0\\right)$ with the understanding that $p_X$ is the Dirac measure $\\delta_a$ at the point $a$. \nThe parameters $a$ and $\\sigma^2$ are called the mean and the variance of $X$, respectively.\n\n\nLet $k\\in\\mathbb{N}$, $a\\in\\mathbb{R}^k$ and let $\\Sigma\\in\\mathbb{R}^{k,k}$ be a symmetric and positive semi-definite matrix. A random variable $X=\\left(X_1,\\dots,X_k\\right):\\Omega\\to \\mathbb{R}^k$ is said to have a \\emph{(multivariate) normal distribution} $\\mathcal{N}(a,\\Sigma)$ if \n\\begin{align*}\n\\langle\\lambda, X\\rangle \\sim \\mathcal{N}\\Big(\\langle\\lambda, a\\rangle, \\langle\\Sigma \\lambda,\\lambda \\rangle\\Big),\\quad \\forall \\lambda\\in\\mathbb{R}^k\n\\end{align*}\n(we write $\\langle \\lambda,\\mu \\rangle=\\sum_{i}\\lambda_i\\mu_i$ to denote the inner product of $\\mathbb{R}^k$).\nHere $a:=E\\left(X\\right)$ is the mean vector and $\\Sigma=\\left(\\sigma_{ij}\\right)_{i,j}:=\\mbox{Cov}(X)$ is the covariance matrix of $X$, i.e. $\\sigma_{i,j}=\\mbox{Cov}\\left(X_i,X_j\\right)$. The following very well known properties of Gaussian vectors are direct consequences of their definition (see for example \\citep[Chapter 1]{bogachev}).\n\n\\begin{prop}\\label{Basic Normal}\nLet $X=\\left(X_1,\\dots,X_k\\right):\\Omega\\to \\mathbb{R}^k$ be a random variable such that $X\\sim\\mathcal{N}\\left(a,\\Sigma\\right)$. \n\\begin{itemize}\n\\item[(i)] The mean vector $a$ and the covariance matrix $\\Sigma$ uniquely characterize the Gaussian measure $p_X$.\n\\item[(ii)] $X_i$, $X_j$ are independent if and only if $\\sigma_{ij}=\\mbox{Cov}(X_i,X_j)=0$. 
\n\\item[(iii)] For every matrix $A\\in\\mathbb{R}^{m,k}$ one has $AX\\sim\\mathcal{N}\\left(Aa, A\\Sigma A^t\\right)$.\n\\item[(iv)] When $\\Sigma$ is positive definite we say that $p_X$ is \\emph{non-degenerate}: in this case $X$ is absolutely continuous and has density function \n\\begin{align*}\nf_X(x)=\\frac{1}{(2\\pi)^{\\frac k 2}\\mbox{det}(\\Sigma)^\\frac 1 2}\\exp{\\left(-\\frac{|\\Sigma^{-\\frac 1 2}(x-a)|^2}{2}\\right)}.\n\\end{align*}\n\\end{itemize} \n\\end{prop}\n\n\\medskip\nIn the following Proposition we show that, when the covariance matrix $\\Sigma$ is degenerate, $p_X$ has a density function with respect to the Hausdorff measure $\\mathcal{H}^m$ on some hyperplane of $\\mathbb{R}^k$. In what follows we say that a matrix $P\\in \\mathbb{R}^{k,m}$, with $m\\leq k$, is orthogonal if it has orthonormal columns; in this case $|Py|=|y|$ for every $y\\in\\mathbb{R}^m$. \n\n\n\\begin{prop}\nLet $k\\in\\mathbb{N}$, $a\\in\\mathbb{R}^k$ and let $\\Sigma\\in\\mathbb{R}^{k,k}$ be a positive semi-definite matrix with $m=\\mbox{rank} (\\Sigma)\\geq 1$. Let $X=\\left(X_1,\\dots,X_k\\right):\\Omega\\to \\mathbb{R}^k$ be a random variable. 
Then $X\\sim\\mathcal{N}(a,\\Sigma)$ if and only if there exist an orthogonal matrix $P\\in \\mathbb{R}^{k,m}$ and $m$ independent random variables $Y_1,\\dots,Y_m$ which satisfy $Y_i\\sim \\mathcal{N}\\left(0,1\\right)$ for every $i=1,\\dots,m$ and such that \n\\begin{align*}\nX=\\Sigma^{\\frac 1 2 }P\\,Y+a,\\quad Y=\\left(Y_1,\\dots,Y_m\\right).\n\\end{align*}\nMoreover the probability measure $p_X$ has a density function $f_X$ with respect to the Hausdorff measure $\\mathcal{H}^m$ on the hyperplane \n$$\\Sigma^{\\frac 12}P\\left(\\mathbb{R}^m\\right)+a=\\{x\\in\\mathbb{R}^k\\;:\\;x=\\Sigma^{\\frac 12}Py+a,\\;y\\in\\mathbb{R}^m\\}$$\n which satisfies \n\\begin{align*}\nf_X(x)=\\frac{1}{(2\\pi)^{\\frac m 2}\\Sigma^{\\frac 12}_m}\\exp{\\left(-\\frac{|y|^2}{2}\\right)}, \\quad x=\\Sigma^{\\frac 12}Py+a,\n\\end{align*}\nwhere $\\Sigma^{\\frac 12}_m=\\prod_i\\sqrt{\\lambda_i}$ and the product runs over all positive eigenvalues of $\\Sigma$ (counted with their multiplicities).\n\n\\end{prop}\n{\\sc{Proof.}} Let us prove the first claim and let us suppose, preliminarily, that the covariance matrix $\\Sigma$ is a diagonal matrix and, without any loss of generality, let us assume that its entries in the main diagonal are \n$$\\left(\\sigma_{11},\\dots,\\sigma_{mm}, 0,\\dots,0\\right),$$\n where $\\sigma_{ii}>0$ for $i\\leq m$. Then from Proposition \\ref{Basic Normal}, $X_1,\\dots, X_m$ are independent and $X_i\\sim\\mathcal{N}(a_i,\\sigma_{ii})$; moreover $X_i=a_i$ a.e. for $i>m$. 
The required claim then immediately follows setting $Y=\\left(Y_1,\\dots, Y_m\\right)$, with $Y_i=\\frac{X_i-a_i}{\\sqrt\\sigma_{ii}}$, and $P=\\left(\\begin{array}{c}\n I_m \\\\ \n \\hline\n 0 \n \\end{array}\\right)$, where $I_m$ is the identity matrix of $\\mathbb{R}^{m,m}$.\n\n\n\n In the general case let us diagonalize the Covariance matrix $\\Sigma$: Let $Q\\in\\mathbb{R}^{k,k}$ be an orthogonal matrix such that $Q\\Sigma Q^t=D$, where $D$ is the diagonal matrix whose entries in the main diagonal are $\\left(\\lambda_1,\\dots,\\lambda_m, 0,\\dots,0\\right)$, where the $\\lambda_i>0$ are the positive eigenvalues of $\\Sigma$. From Proposition \\ref{Basic Normal} the vector $Z=QX$ satisfies $Z\\sim\\mathcal{N}\\left(Qa,D\\right)$; from the previous step there exists $Y=\\left(Y_1,\\dots,Y_m\\right)\\sim \\mathcal{N}\\left(0,I_m\\right)$ such that\n \\begin{align*}\n Z=D^{\\frac 1 2}\\left(\\begin{array}{c}\n I_m \\\\ \n \\hline\n 0 \n \\end{array}\\right)Y+Qa.\n \\end{align*}\n Then since $Q\\Sigma^{\\frac 1 2 }Q^t=D^{\\frac 1 2 }$ we get \n \\begin{align*}\n X=Q^tZ=Q^tD^{\\frac 1 2}\\left(\\begin{array}{c}\n I_m \\\\ \n \\hline\n 0 \n \\end{array}\\right)Y+a=\\Sigma^{\\frac 1 2} Q^t\\left(\\begin{array}{c}\n I_m \\\\ \n \\hline\n 0 \n \\end{array}\\right)Y+a\n \\end{align*}\n and the claim follows with $P=Q^t\\left(\\begin{array}{c}\n I_m \\\\ \n \\hline\n 0 \n \\end{array}\\right)$.\n\n\n\nFinally, to prove the second claim, let us apply the first step and Example \\ref{Affine transformations} with $A=\\Sigma^{\\frac 1 2}P$ and $y_0=a$. 
Then we get that $X$ has a density function $f_X$ with respect to the Hausdorff measure $\\mathcal{H}^m$ on the hyperplane \n$$\\Sigma^{\\frac 12}P\\left(\\mathbb{R}^m\\right)+a=\\{x\\in\\mathbb{R}^k\\;:\\;x=\\Sigma^{\\frac 12}Py+a,\\;y\\in\\mathbb{R}^m\\}$$\n which satisfies \n\\begin{align*}\nf_X(x)&=\n\\frac{1}{(\\Sigma^{\\frac 12}P)_m}\\int_{\\Sigma^{\\frac 12}Py+a=x}f_Y(y)\\,d\\mathcal{H}^{0}(y), \\quad \\text{for}\\quad x\\in \\Sigma^{\\frac 12}P\\left(\\mathbb{R}^m\\right)+a.\n\\end{align*} \nSince $P$ has orthonormal columns, from Definition \\ref{k-Jac} we have $(\\Sigma^{\\frac 12}P)_m=\\prod_i\\sqrt{\\lambda_i}=:\\Sigma^{\\frac 12}_m$, where the product runs over all positive eigenvalues of $\\Sigma$. Moreover since $\\Sigma^{\\frac 12}P$ has maximum rank, the equation $x=\\Sigma^{\\frac 12}Py+a$ has a unique solution. Then\n\\begin{align*}\nf_X(x)=\\frac{1}{(2\\pi)^{\\frac m 2}\\Sigma^{\\frac 12}_m}\\exp{\\left(-\\frac{|y|^2}{2}\\right)}, \\quad x=\\Sigma^{\\frac 12}Py+a.\n\\end{align*}\n\\qed\n\n\\subsection{Chi-squared and Student's distributions}\nLet $X:\\Omega\\to \\mathbb{R}^k$ be a Gaussian random vector whose covariance matrix is the identity matrix $I_k$. If $X\\sim\\mathcal{N}\\left(0,I_k\\right)$ then the probability measure $p_{\\chi^2(k)}$ induced by $|X|^2$ is called \\emph{Chi-squared distribution with $k$-degrees of freedom} and we write $|X|^2\\sim \\chi^2(k)$.\n\n If $X$ is non-centred, i.e. $X\\sim\\mathcal{N}\\left(\\mu,I_k\\right)$ for some $\\mu\\in\\mathbb{R}^k\\setminus\\{0\\}$, then the measure $p_{\\chi^2(k,\\lambda)}$ induced by $|X|^2$ is called \\emph{Non-central Chi-squared distribution with $k$-degrees of freedom and non-centrality parameter $\\lambda=|\\mu|^2>0$} and we write $|X|^2\\sim \\chi^2(k,\\lambda)$.\n\n\\medskip\nIn the next Proposition we derive the density function of $|X|^2$. In what follows we consider the gamma function $\\Gamma(r)=\\int_0^\\infty t^{r-1}e^{-t}\\,dt$, $r>0$ (see e.g. 
\\citep[page 255]{abramowitz+stegun}) and the modified Bessel function of the first kind $ I_{\\nu }$ defined for $y>0$ as \n\\begin{align*}\nI_{\\nu }(y)=(y\/2)^{\\nu }\\sum _{j=0}^{\\infty }{\\frac {(y^{2}\/4)^{j}}{j!\\,\\Gamma (\\nu +j+1)}}=\\frac{(y\/2)^{\\nu }}{\\pi^{\\frac 1 2}\\Gamma\\left(\\nu+\\frac 1 2\\right)}\\int_0^\\pi e^{y\\cos\\theta}\\left(\\sin \\theta\\right)^{2\\nu}\\,d\\theta,\n\\end{align*}\n(see e.g. \\citep[Section 9.6 and Formula 9.6.20, page 376]{abramowitz+stegun}).\n\\begin{prop}[Chi-squared Distribution]\\label{section chi squared}\nLet $X:\\Omega\\to \\mathbb{R}^k$ be a Gaussian random vector. If $X\\sim\\mathcal{N}\\left(0,I_k\\right)$ then the Chi-squared distribution $p_{\\chi^2(k)}$ induced by $|X|^2$ has density function\n\\begin{align*}\nf_{\\chi^2(k)}(y)=\\frac{1}{2^{\\frac k 2}\\Gamma\\left(\\frac k 2\\right)}y^{\\frac k 2-1}\\exp\\left(-\\frac y 2\\right),\\quad \\text{for any}\\quad y>0.\n\\end{align*}\nIf $X\\sim\\mathcal{N}\\left(\\mu,I_k\\right)$ for some $\\mu\\in\\mathbb{R}^k\\setminus\\{0\\}$ then, setting $\\lambda=|\\mu|^2>0$, the Non-central Chi-squared distribution $p_{\\chi^2(k,\\lambda)}$ induced by $|X|^2$ has density function\n\n\\begin{align*}\nf_{\\chi^2(k,\\lambda)}(y)&=\\frac{1}{2}\\exp\\left(-\\frac{y+\\lambda}2\\right)\\left(\\frac y\\lambda\\right)^{\\frac k 4-\\frac 1 2}I_{\\frac k 2 -1}\\left(\\sqrt{\\lambda y}\\right),\\quad \\text{for any}\\quad y>0.\n\\end{align*}\n\\end{prop}\n{\\sc{Proof.}} \nLet $X\\sim\\mathcal{N}\\left(0,I_k\\right)$; using Example \\ref{Square function} we have for any $y>0$\n\\begin{align*}\nf_{\\chi^2(k)}(y)&=\\frac{1}{2\\sqrt y}\\frac{1}{(2\\pi)^{\\frac k 2}}\\int_{|x|=\\sqrt y}\\exp\\left(-\\frac{|x|^2}2\\right)\\,d\\mathcal{H}^{k-1}(x)\n\\\\[1ex]&=\\frac{1}{2\\sqrt y}\\frac{1}{(2\\pi)^{\\frac k 2}}\\exp\\left(-\\frac{y}2\\right)\\mathcal{H}^{k-1}\\left(\\mathbb{S}^{k-1}\\right)y^{\\frac{k-1}2}=\\frac{1}{2^{\\frac k 2}\\Gamma\\left(\\frac k 2\\right)}y^{\\frac k 
2-1}\\exp\\left(-\\frac y 2\\right)\n\\end{align*}\nwhich is the first claim. If $X\\sim\\mathcal{N}\\left(\\mu,I_k\\right)$ for some $\\mu\\in\\mathbb{R}^k\\setminus\\{0\\}$, then using Example \\ref{Square function} again and the elementary equality $|x-\\mu|^2=|x|^2+|\\mu|^2-2\\langle x,\\mu\\rangle$ we have for any $y>0$\n\\begin{align*}\nf_{\\chi^2(k,\\lambda)}(y)&=\\frac{1}{2\\sqrt y}\\int_{|x|=\\sqrt y}\\frac{1}{(2\\pi)^{\\frac k 2}}\\exp\\left(-\\frac{|x-\\mu|^2}2\\right)\\,d\\mathcal{H}^{k-1}(x)\\\\[1ex]\n&=\\frac{1}{2\\sqrt y (2\\pi)^{\\frac k 2}}\\exp\\left(-\\frac{y+\\lambda}2\\right)\\int_{|x|=\\sqrt y}\\exp\\langle x,\\mu\\rangle\\,d\\mathcal{H}^{k-1}(x)\n\\\\[1ex]&=\\frac{y^{\\frac {k}2-1}}{2(2\\pi)^{\\frac k 2}}\\exp\\left(-\\frac{y+\\lambda}2\\right)\\int_{|z|=1}\\exp\\langle \\sqrt y z,\\mu\\rangle\\,d\\mathcal{H}^{k-1}(z).\n\\end{align*}\nSince $\\mathcal{H}^{k-1}_{\\vert_{\\mathbb{S}^{k-1}}}$ is rotationally invariant, up to an orthogonal transformation of $\\mathbb{R}^k$ which maps $\\frac{\\mu}{|\\mu|}$ to $e_1=\\left(1,0,\\dots,0\\right)$, we can suppose $\\frac{\\mu}{|\\mu|}=e_1$. Using $k$-dimensional spherical coordinates to evaluate the last integral, we have \n\\begin{align*}\n\\int_{|z|=1}\\exp\\langle \\sqrt y z,\\mu\\rangle\\,d\\mathcal{H}^{k-1}(z)&=\\int_{|z|=1}\\exp\\left(\\sqrt{y\\lambda}z_1\\right)\\,d\\mathcal{H}^{k-1}(z)\n\\\\[1ex]&=\\mathcal{H}^{k-2}(\\mathbb{S}^{k-2})\\int_0^\\pi \\exp\\left(\\sqrt{y\\lambda}\\cos\\theta\\right)(\\sin\\theta)^{k-2}\\,d\\theta\n\\\\[1ex]&=(2\\pi)^{\\frac k 2}\\left(\\lambda y\\right)^{-\\frac k 4+\\frac 1 2}I_{\\frac k 2 -1}\\left(\\sqrt{\\lambda y}\\right).\n\\end{align*}\nCombining the latter equalities gives the required last claim.\\qed\n\\bigskip\n\n\n Finally let $X,Y:\\Omega\\to \\mathbb{R}$ be two independent random variables such that $X\\sim\\mathcal{N}\\left(0,1\\right)$ and $Y\\sim\\chi^2(k)$. 
The probability measure $p_{T}$ induced by the random variable $T=\\frac{X}{\\sqrt {Y\/k}}$ is called a \\emph{(Student's) t-distribution with $k$-degrees of freedom}. In the next Proposition we use Corollary \\ref{teo k>1} and Example \\ref{ratio} in order to derive the density function of $p_{T}$.\n\n\n\\begin{prop}[Student's t-Distribution]\nLet $X,Y:\\Omega\\to \\mathbb{R}$ be two independent random variables such that $X\\sim\\mathcal{N}\\left(0,1\\right)$ and $Y\\sim\\chi^2(k)$. The t-distribution $p_{T}$ induced by $T=\\frac{X}{\\sqrt {Y\/k}}$ has density function\n\\begin{align*}\nf_T(y)=\\frac{\\Gamma\\left(\\frac{k+1}2\\right)}{\\sqrt{k\\pi}\\Gamma\\left(\\frac{k}2\\right)}\\left(1+\\frac{y^2}{k}\\right)^{-\\frac{k+1}2},\\quad \\text{for any}\\quad y\\in\\mathbb{R}.\n\\end{align*}\n\\end{prop}\n{\\sc{Proof.}}\nUsing Corollary \\ref{teo k>1} with $\\phi:\\mathbb{R}^+\\to\\mathbb{R}^+, \\phi(t)=\\sqrt {t\/k}$, we have\n\\begin{align*}\nf_{\\sqrt {Y\/k}}(y)&=\\frac{2k^{\\frac k 2}}{2^{\\frac k 2}\\Gamma\\left(\\frac k 2\\right)}y^{ k -1}\\exp\\left(-\\frac {ky^2} 2\\right),\\qquad \\forall y>0.\n\\end{align*}\nThen applying Example \\ref{ratio} we get for $y\\in \\mathbb{R}$,\n\\begin{align*}\nf_{T}(y)=\\int_{0}^\\infty f_X(ty)f_{\\sqrt {Y\/k}}(t)t\\,dt\n=\\frac{2k^{\\frac k 2}}{2^{\\frac k 2}\\Gamma\\left(\\frac k 2\\right)\\sqrt{2\\pi}}\\int_{0}^\\infty \\exp\\left({-\\frac{t^2(y^2+k)}{2}}\\right)t^k\\,dt,\n\\end{align*} \nwhich with the substitution $s=t^2\\frac{y^2+k}{2}$ becomes\n\\begin{align*}\nf_{T}(y)=\\frac{1}{\\Gamma\\left(\\frac k 2\\right)\\sqrt{k\\pi}}\\left(1+\\frac{y^2}k\\right)^{-\\frac {k+1} 2 }\\int_{0}^\\infty e^{-s}s^{\\frac {k-1}2}\\,ds=\\frac{\\Gamma\\left(\\frac {k+1} 2\\right)}{\\Gamma\\left(\\frac k 2\\right)\\sqrt{k\\pi}}\\left(1+\\frac{y^2}k\\right)^{-\\frac {k+1} 2 }.\n\\end{align*}\n\\qed\n\n\n\\bibliographystyle{apalike}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\nTexture-zeros, vanishing 
elements of fermion mass matrices, are successful in predicting the masses and mixing angles of quarks and leptons. However, the origin of the zeros is not clear, which is a motivation for our model. As an approach to this problem we consider the discrete flavor symmetry approach, which has been studied by many authors{\\cite{discrete symmetry}}. Flavor symmetry is expected to be a clue to understanding the masses and mixing angles of quarks and leptons, because it reduces the number of free parameters in the Yukawa couplings and some testable predictions of masses and mixing angles generally follow (see references in {\\cite{our model}}).\n\nAn interesting point of our model is the dynamical realization of texture-zeros in the discrete flavor symmetry approach. In previous models, certain Yukawa couplings are forbidden by the discrete symmetry in order to derive texture-zeros. In our model, however, some of the Higgs vacuum expectation values (VEVs) vanish at electroweak symmetry breaking (EWSB) in the flavor basis; that is, we consider a multi-Higgs system in which texture-zeros are derived dynamically by EWSB.\n\nIn our model we take the $S_3$ symmetry, the permutations of three objects, as the discrete symmetry. The reasons why we adopt $S_3$ are that it is the smallest non-commutative discrete group and that $S_3$ has three irreducible representations, a doublet $\\bf 2$, a singlet ${\\bf 1_{S}}$ and a pseudo singlet ${\\bf 1_{A}}$, so that it is easy to assign the flavor symmetry representations to the three generations, e.g. as ${\\bf 2} + {\\bf 1_{S}}$. In addition, we consider all the $S_{3}$ irreducible representations in this model.\n\n\\section{$S_3$ invariant mass matrices in supersymmetry}\n\nIn this section, the $S_3$ invariant mass matrices are presented{\\cite{mass matrix}}. \nWe consider a supersymmetric theory and suppose that two of the three generations belong to $S_3$ doublets while the third generation is an $S_3$ singlet. 
Using the following tensor product of $S_3$ doublets, $\\phi^{c} = \\sigma_{1} \\phi^{*}= \\sigma_{1}(\\phi^{*}_{1},\\phi^{*}_{2})^{T},\\ \\psi = (\\psi_{1},\\psi_{2})^{T}$,\n\\begin{equation} \n\t\\begin{array}{ccccccc}\n\t\t\\phi^{c} \\times \\psi &=& (\\phi_2 \\psi_2,\\phi_1 \\psi_1)^{T} & + & (\\phi_1 \\psi_2 - \\phi_2 \\psi_1) & + & (\\phi_1 \\psi_2 + \\phi_2 \\psi_1), \\\\\n\t\t & & {\\bf 2} & & {\\bf 1_{A}} & & {\\bf 1_{S}} \n\t\\end{array}\n\\end{equation}\nthe $S_{3}$ invariant mass matrices are obtained as\n\\begin{equation}\n M_{D}=\n \\left(\n \\begin{tabular}{cc|c}\n\t $aH_{1}$ & $b H_{S} + c H_{A}$ & $d H_{2}$ \\\\\n\t $b H_{S} - c H_{A}$ & $a H_{2}$ & $d H_{1}$ \\\\\n\t \\hline\n\t $e H_{2}$ & $e H_{1}$ & $ f H_{S}$\n\t \\end{tabular}\n \\right),\\qquad\n\t\tM_{R}=\n \\left(\n\t \\begin{tabular}{cc|c}\n\t\t & $M_{1}$ & \\\\\n\t $M_{1}$ & & \\\\\n\t \\hline\n\t & & $M_{2}$\n\t \\end{tabular}\n\t\\right),\n\\end{equation}\nwhere $a, b, \\cdots,f$ are independent Yukawa coupling constants and $M_1, M_{2}$ are Majorana masses. For now we assign ${\\bf 1_{S}}$ to the third generation; we reconsider this assignment later. \n\n\\section{$S_3$ invariant Higgs scalar potential analysis}\nIn our model we consider the following eight Higgs bosons,\n\\begin{equation}\n\tH_{uS},H_{dS},H_{uA},H_{dA},H_{u1},H_{d1},H_{u2},H_{d2}.\n\\end{equation}\nOur purpose in this section is to discuss whether or not there are vacuum patterns with vanishing VEVs that require no relations among the Lagrangian parameters. 
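As an aside before the potential analysis, the $S_3$ tensor-product decomposition quoted in the previous section can be verified numerically. The sketch below (an added illustration) assumes the standard complex basis in which the $3$-cycle acts on a doublet as $\mathrm{diag}(\omega,\omega^{2})$, $\omega=e^{2\pi i/3}$, and the transposition exchanges the two components; the combinations being tested are exactly those displayed above:

```python
# Check that, for two S_3 doublets phi and psi, the combinations quoted in
# the text transform as a doublet (2), a pseudo singlet (1_A, odd under the
# transposition) and a true singlet (1_S).  Basis assumption: the 3-cycle
# acts as diag(w, w^2) with w = exp(2 pi i / 3); the transposition swaps
# the two doublet components.
import numpy as np

w = np.exp(2j * np.pi / 3)
C = np.diag([w, w**2])              # 3-cycle acting on a doublet
T = np.array([[0, 1], [1, 0]])      # transposition acting on a doublet

rng = np.random.default_rng(0)
phi = rng.standard_normal(2) + 1j * rng.standard_normal(2)
psi = rng.standard_normal(2) + 1j * rng.standard_normal(2)

def combos(p, q):
    doublet = np.array([p[1] * q[1], p[0] * q[0]])   # claimed 2
    singlet_A = p[0] * q[1] - p[1] * q[0]            # claimed 1_A
    singlet_S = p[0] * q[1] + p[1] * q[0]            # claimed 1_S
    return doublet, singlet_A, singlet_S

for g, sign_A in [(C, 1), (T, -1)]:
    d, a, s = combos(phi, psi)
    dg, ag, sg = combos(g @ phi, g @ psi)
    assert np.allclose(dg, g @ d)        # transforms again as a doublet
    assert np.isclose(ag, sign_A * a)    # pseudo singlet: odd under T
    assert np.isclose(sg, s)             # true singlet: invariant
print("S_3 decomposition verified")
```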
\nTherefore we have to analyze the eight stationarity equations at the vacuum, one for each Higgs field.\n\\begin{table}[t]\n\\begin{minipage}{0.5 \\textwidth}\n\t\\begin{center}\n\\begin{tabular}{|c|c|c|c|c|c|c|c|} \\hline\n$v_{uS}$ & $v_{dS}$ & $v_{uA}$ & $v_{dA}$ & $v_{u1}$ & $v_{u2}$ \n& $v_{d1}$ & $v_{d2}$ \\\\ \\hline \\hline\n $0$ & $0$ & $0$ & $0$ & $0$ & & & $0$ \\\\ \\hline\n $0$ & $0$ & $0$ & $0$ & & $0$ & $0$ & \\\\ \\hline\n $0$ & $0$ & $0$ & $0$ & & & & \\\\ \\hline\n $0$ & $0$ & & & $0$ & $0$ & $0$ & $0$ \\\\ \\hline\n $0$ & $0$ & & & $0$ & & & $0$ \\\\ \\hline\n $0$ & $0$ & & & & $0$ & $0$ & \\\\ \\hline\n $0$ & $0$ & & & & & & \\\\ \\hline\n \\end{tabular}\n \\end{center}\n \\end{minipage}\n \\begin{minipage}{0.5 \\textwidth}\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|c|c|c|c|} \\hline\n$v_{uS}$ & $v_{dS}$ & $v_{uA}$ & $v_{dA}$ & $v_{u1}$ & $v_{u2}$ \n& $v_{d1}$ & $v_{d2}$ \\\\ \\hline \\hline\n & & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ \\\\ \\hline\n & & $0$ & $0$ & $0$ & & & $0$ \\\\ \\hline\n & & $0$ & $0$ & & $0$ & $0$ & \\\\ \\hline\n & & $0$ & $0$ & & & & \\\\ \\hline\n & & & & $0$ & $0$ & $0$ & $0$ \\\\ \\hline\n & & & & $0$ & & & $0$ \\\\ \\hline\n & & & & & $0$ & $0$ & \\\\ \\hline\n\\end{tabular}\n\\end{center}\n\\end{minipage}\n\\caption{All possible minima of the scalar potential for $S_3$ singlet\nand doublet Higgs fields without tuning of Lagrangian parameters for\nelectroweak symmetry breaking. The blank entries denote\nnon-vanishing VEVs.}\n{\\label{summary}}\n\\end{table} \nIn general, these equations are coupled through a common parameter which contains all the Higgs VEVs. However, we can separate them into three parts, for the singlet, the pseudo singlet and the doublet sectors, because vanishing VEVs make the equations trivial within each sector. Therefore, analyzing the equations, 14 possible VEV patterns (Table \\ref{summary}) are obtained with no parameter relations in terms of vanishing VEVs. 
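The counting of the 14 patterns can be reproduced by a small enumeration. The following sketch (an added illustration in Python, not a substitute for the stationarity analysis) only encodes the sector structure read off from Table \ref{summary} — within each sector the VEVs vanish in pairs — together with the observation that the fully vanishing and the fully non-vanishing configurations are absent from the table:

```python
# Combinatorial sketch of Table 1 (assumption: it encodes only what the
# table shows, not the scalar potential itself).  The VEVs switch on or
# off in pairs within each S_3 sector:
#   singlet pair        (v_uS, v_dS): both zero or both non-zero,
#   pseudo-singlet pair (v_uA, v_dA): both zero or both non-zero,
#   doublet sector: (v_u1, v_d2) and (v_u2, v_d1), each pair together.
# Of the 2 * 2 * 4 = 16 combinations, the fully vanishing one (unbroken
# electroweak symmetry) and the fully non-vanishing one are absent.
from itertools import product

patterns = []
for s, a, d12, d21 in product([0, 1], repeat=4):
    vevs = {"v_uS": s, "v_dS": s, "v_uA": a, "v_dA": a,
            "v_u1": d12, "v_d2": d12, "v_u2": d21, "v_d1": d21}
    if all(v == 0 for v in vevs.values()):   # trivial vacuum: excluded
        continue
    if all(v == 1 for v in vevs.values()):   # fully generic: absent
        continue
    patterns.append(vevs)

assert len(patterns) == 14
# The pattern singled out in the next section:
# v_uS = v_dS = v_u1 = v_d2 = 0, the other four VEVs non-vanishing.
chosen = {"v_uS": 0, "v_dS": 0, "v_uA": 1, "v_dA": 1,
          "v_u1": 0, "v_d2": 0, "v_u2": 1, "v_d1": 1}
assert chosen in patterns
print(len(patterns), "vacuum patterns")
```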
\n\n\\section{Quark and lepton mass textures}\n\nIn the previous section we obtained 14 VEV patterns.\nLet us now analyze these patterns phenomenologically. First, among the 14 patterns we single out the most interesting one, namely\n\\begin{equation}\n\tv_{uS}=v_{dS}=v_{u1}=v_{d2}=0,\\qquad v_{uA},v_{dA},v_{u2},v_{d1}\\not= 0. {\\label{vev}}\n\\end{equation}\nThis pattern leads to the simplest texture (i.e. the maximal number of zero matrix elements) with non-trivial flavor mixing. Next we consider the mass matrices obtained from this VEV pattern. We took only ${\\bf 2} + {\\bf 1_S}$ as the $S_3$ representations of the three generations of matter fields, so that the $S_3$ charge assignments of the matter fields retain some freedom; for example, we can in general assign ${\\bf 1_{S}}$ to any generation. As a result of exhausting all the $S_3$ charge assignments for the quark sector and assuming $SU(5)$ grand unification{\\cite{su(5)GUT}}{\\cite{georgi jarlskog}} for the lepton sector, the mass matrices and predictions of this model are derived as \n \n\\begin{equation}\n\tM_u =\n\t\t\\left(\n\t\t\t\\begin{array}{ccc}\n\t\t\t\t\t& b_u &\t\t\\\\\n\t\t\t\td_u &\t\t& f_u \\\\\n\t\t\t\t & - f_u & i_u\n\t\t\t\\end{array}\n\t\t\\right),\\qquad\n\tM_d =\n\t\t\\left(\n\t\t\t\\begin{array}{ccc}\n\t\t\t\t\t& b_d &\t\t\\\\\n\t\t\t\td_d &\te_d\t& \\\\\n\t\t\t\t-e_d & & i_d\n\t\t\t\\end{array}\n\t\t\\right), \\qquad\n\tM_e =\n\t\t\\left(\n\t\t\t\\begin{array}{ccc}\n\t\t\t\t\t& d_d &\t3e_d\t\\\\\n\t\t\t\tb_d &\t-3 e_d\t& \\\\\n\t\t\t\t & & i_d\n\t\t\t\\end{array}\n\t\t\\right),\n\\end{equation}\n\\begin{equation}\n\tM_{\\nu} =\n\t\t\\left(\n\t\t\t\\begin{array}{ccc}\n\t\t\t\t& b_{\\nu} &\tc_{\\nu}\t\\\\\n\t\t\t\t-b_{\\nu} &\te_{\\nu}\t& \\\\\n\t\t\t\tg_{\\nu} & & \n\t\t\t\\end{array}\n\t\t\\right),\\qquad\n\tM_R =\n\t\t\\left(\n\t\t\t\\begin{array}{ccc}\n\t\t\t\t\t& M_1 &\t\t\\\\\n\t\t\t\tM_1 &\t\t& \\\\\n\t\t\t\t & & 
M_2\n\t\t\t\\end{array}\n\t\t\\right),\n\\end{equation}\n\\begin{equation}\n\t\\left| V_{cb} \\right| = \\sqrt{\\frac{m_c}{m_t}},\\qquad \\left| V_{e3} \\right| \\ge 0.04{\\cite{PDG}},\n\\end{equation}\nwhere blank entries denote zero and each parameter in $M_{u},M_{d},M_{e},M_{\\nu}$, such as $d_{u},d_{d},b_{\\nu}$, denotes the product of a Yukawa coupling and a VEV, for example $d_{u} = d v_{u2}$.\n\\section{Higgs mass spectrum and $S_3$ soft breaking in B-term}\n\nThe $S_3$ potential has an enhanced global symmetry $SU(2) \\times U(1)^2$ and leads to massless Nambu-Goldstone bosons \nin the electroweak broken phases. It is therefore reasonable to softly break the flavor symmetry within the scalar potential.\nWe introduce the following supersymmetry-breaking soft terms, which do not spoil the phenomenological characters of the exact $S_3$ model:\n\\begin{equation}\n\tV_{\\not S_3} = b_{SD}H_{uS}H_{d2}+ b'_{SD}H_{u1}H_{dS}+ b_{AD}H_{uA}H_{d1}+ b'_{AD}H_{u2}H_{dA} + {\\rm h.c.}\n\\end{equation}\nThese soft terms not only preserve the phenomenological characters of the exact $S_3$ model, but also allow the same VEV pattern as ({\\ref{vev}}) in the previous section with no parameter relations.\n\n\\section{Tree level FCNC}\n\nSince there are multiple electroweak doublet Higgs bosons which couple to \nmatter fields, flavor-changing processes are mediated at the classical \nlevel by these Higgs fields. We can show that all but one of them have masses of the order of the supersymmetry breaking parameters. Therefore the experimental observation of rare FCNC events would lead to a bound on the supersymmetry breaking scale.\nAmong the various experimental constraints, we find the most important one comes from neutral K meson mixing. 
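As a rough numerical illustration of the prediction $|V_{cb}| = \sqrt{m_c\/m_t}$ above, the sketch below evaluates the mass ratio with illustrative running quark masses. The mass values are assumptions, not taken from this paper; a careful comparison would use masses run to a common scale.

```python
import math

# Illustrative (assumed) MS-bar running masses at the M_Z scale, in GeV.
m_c, m_t = 0.62, 171.0
vcb_pred = math.sqrt(m_c / m_t)
vcb_exp = 0.041  # PDG central value, shown only for orientation

print(f"|V_cb| predicted ~ {vcb_pred:.3f}, measured ~ {vcb_exp:.3f}")
# the square-root mass ratio lands in the right ballpark
assert 0.01 < vcb_pred < 0.1
```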
For the heavy mass eigenstates, the tree-level $K_{L}-K_{S}$ mass difference $\\Delta m^{\\rm tree}_K$ is given by the matrix element of the effective Hamiltonian between K mesons{\\cite{FCNC}}.\n$\\Delta m^{\\rm tree}_K$ contains $M_{H}$, an average of the Higgs masses defined by $1\/M^{2}_{H} = \\frac{1}{4}\\left( 1\/M^{2}_{H^{0}_{1}} + 1\/ M^{2}_{H^{0}_{2}} + 1\/M^{2}_{H^{0}_{3}}+1\/M^{2}_{H^{0}_{4}} \\right)$, and a free parameter $\\eta$, which contains the down type quark Yukawa couplings.\nThe heavy Higgs masses are bounded from below so as to suppress the extra Higgs contribution relative to the standard model one; this bound is roughly given by\n\\begin{equation}\n M_{H} \\ge\n\t \\left\\{\n\t\t\t \\begin{array}{cl}\n\t\t\t\t\t\t3.8 {\\rm TeV} &(\\eta = 0) \\\\\n\t\t\t\t\t\t1.4 {\\rm TeV} &(\\eta = 0.03) \\\\\n\t\t\t\t\t\\end{array}\n\t\t\t\t\\right.\n,\n\\end{equation}\nwhere we take $\\eta = 0,\\ 0.03$ as typical values.\n\n\\section{Summary}\n\nWe have discussed the structure of the Higgs potential and the fermion mass \nmatrices in supersymmetric models with $S_3$ flavor symmetry and examined possible \nvanishing elements of the quark and lepton mass matrices. By exhausting the \npatterns of flavor symmetry charges of the matter fields, we obtained predictions, \nsuch as the lepton mixing $V_{e3}$ lying within a range that will be tested in near-future \nexperiments. We have also discussed the physical mass spectrum of the \nHiggs bosons and the tree level FCNC processes mediated by heavy Higgs fields. 
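The averaged heavy-Higgs mass $M_H$ defined above, $1\/M^{2}_{H} = \frac{1}{4}\sum_i 1\/M^{2}_{H^{0}_{i}}$, is a harmonic-type mean and is dominated by the lightest state. A minimal sketch, with hypothetical example masses:

```python
# 1/M_H^2 = (1/4) * sum_i 1/M_i^2 ; the function generalizes the 1/4
# prefactor to an average over however many masses are supplied.
def averaged_higgs_mass(masses):
    inv_sq = sum(1.0 / m**2 for m in masses) / len(masses)
    return inv_sq ** -0.5

# equal masses reproduce the common value exactly
assert abs(averaged_higgs_mass([4.0, 4.0, 4.0, 4.0]) - 4.0) < 1e-12
# hypothetical example spectrum in TeV: the mean lies between the extremes
m_h = averaged_higgs_mass([3.0, 4.0, 5.0, 6.0])
assert 3.0 < m_h < 6.0
```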
\nFrom the tree level FCNC process analysis, it is found that the heavy Higgs masses, \nwhich are of the order of the soft supersymmetry breaking scale, must be a few TeV.\n\t\n\n\\vspace*{12pt}\n\\noindent\n{\\bf Acknowledgement}\n\\vspace*{6pt}\n\n\\noindent\nThe Summer Institute 2006 is sponsored by the Asia \nPacific Center for Theoretical Physics and the BK 21 \nprogram of the Department of Physics, KAIST.\nWe would like to thank the organizers of Summer Institute 2006 and thank J. Kubo, T. Kobayashi and H. Nakano for helpful discussions.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Stability of Dual Solution}\n\\label{appn:dual}\nWe first note that the dual function $D(\\alpha)$ is convex and has a unique minimizer $\\alpha^*$.\nThe function ${V(\\alpha) = D(\\alpha) - D(\\alpha^*)}$, hence, is non-negative and equals zero only at $\\alpha = \\alpha^*$.\nDifferentiating $V(\\alpha)$ with respect to time we get\n\\[\\dot{V}(\\alpha) = \\frac{\\partial V}{\\partial \\alpha}\\dot{\\alpha} = -\\gamma \\Big(\\sum_{i}{h_i} - B \\Big)^2 \\le 0,\\]\nwith equality only at the equilibrium.\n\nTherefore, $V(\\cdot)$ is a Lyapunov function, and the system state will converge to the optimum starting from any initial condition.\n\n\\section{Stability of Primal Solution}\n\\label{appn:primal}\nWe first note that since $W(\\mathbf{h})$ is a strictly concave function, it has a unique maximizer $\\mathbf{h}^*$.\nMoreover ${V(\\mathbf{h}) = W(\\mathbf{h}^*) - W(\\mathbf{h})}$ is a non-negative function and equals zero only at $\\mathbf{h} = \\mathbf{h}^*$.\nDifferentiating $V(\\cdot)$ with respect to time we obtain\n\\begin{align*}\n\\dot{V}(\\mathbf{h}) &= \\sum_{i}{\\frac{\\partial V}{\\partial h_i}\\dot{h_i}} \\\\\n&= -\\sum_{i}{\\left( U'_i(h_i) - C'(\\sum_{i}{h_i} - B) \\right) \\dot{h_i}}.\n\\end{align*}\n\nFor $\\dot{h_i}$ we have\n\\[\\dot{h_i} = \\frac{\\partial h_i}{\\partial t_i}\\dot{t_i}.\\]\nFor non-reset and reset TTL caches we have\n\\[\\frac{\\partial h_i}{\\partial t_i} = 
\\frac{\\lambda_i}{(1+\\lambda_i t_i)^2} \\qquad\\text{ and }\\qquad \\frac{\\partial h_i}{\\partial t_i} = \\lambda_i e^{-\\lambda_i t_i},\\]\nrespectively, and hence $\\partial h_i \/ \\partial t_i > 0$.\n\nFrom the controller for $t_i$ we have\n\\[t_i = k_i \\left( U'_i(h_i) - C'(\\sum_{i}{h_i} - B) \\right).\\]\n\nHence, we get\n\\[\\dot{V}(\\mathbf{h}) = -\\sum_{i}{k_i \\frac{\\partial h_i}{\\partial t_i} \\left( U'_i(h_i) - C'(\\sum_{i}{h_i} - B) \\right)^2} < 0.\\]\n\nTherefore, $V(\\cdot)$ is a Lyapunov function\\footnote{A description of Lyapunov functions and their applications can be found in~\\cite{srikant13}.}, and the system state will converge to $\\mathbf{h}^*$ starting from any initial condition.\n\n\\section{Stability of Primal-Dual Solution}\n\\label{appn:primal_dual}\nAs discussed in Section~\\ref{sec:opt}, the Lagrangian function for the optimization problem~\\eqref{eq:opt} is expressed as\n\\[\\mathcal{L}(\\mathbf{h}, \\alpha) = \\sum_{i}{U_i(h_i)} - \\alpha(\\sum_{i}{h_i} - B).\\]\nNote that $\\mathcal{L}(\\mathbf{h}, \\alpha)$ is concave in $\\mathbf{h}$ and convex in $\\alpha$, and hence first order condition for optimality of $\\mathbf{h}^*$ and $\\alpha^*$ implies\n\\begin{align*}\n\\mathcal{L}(\\mathbf{h}^*, \\alpha) &\\le \\mathcal{L}(\\mathbf{h}, \\alpha) + \\sum_{i}{\\frac{\\partial \\mathcal{L}}{\\partial h_i}(h_i^* - h_i)}, \\\\\n\\mathcal{L}(\\mathbf{h}, \\alpha^*) &\\ge \\mathcal{L}(\\mathbf{h}, \\alpha) + \\frac{\\partial \\mathcal{L}}{\\partial \\alpha}(\\alpha^* - \\alpha) .\n\\end{align*}\n\nAssume that the hit probability of a file can be expressed by $f(\\cdot)$ as a function of the corresponding timer value $t_i$, \\emph{i.e.}\\ ${h_i = f(t_i)}$. The temporal derivative of the hit probability can therefore be expressed as\n\\[\\dot{h_i} = f'(t_i) \\dot{t_i},\\]\nor equivalently\n\\[\\dot{h_i} = f'(f^{-1}(h_i)) \\dot{t_i},\\]\nwhere $f^{-1}(\\cdot)$ denotes the inverse of function $f(\\cdot)$. 
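The timer sensitivities $\partial h_i\/\partial t_i$ quoted above can be checked numerically. The sketch below compares the closed forms for the two TTL families against central finite differences, with made-up values of $\lambda_i$ and $t_i$:

```python
import math

# h(t) for the two TTL families:
#   non-reset: h(t) = 1 - 1/(1 + lam*t),  dh/dt = lam/(1 + lam*t)^2
#   reset:     h(t) = 1 - exp(-lam*t),    dh/dt = lam*exp(-lam*t)
lam, t, eps = 1.7, 0.9, 1e-6

h_nr = lambda x: 1.0 - 1.0 / (1.0 + lam * x)
h_r = lambda x: 1.0 - math.exp(-lam * x)

# central finite differences
num_nr = (h_nr(t + eps) - h_nr(t - eps)) / (2 * eps)
num_r = (h_r(t + eps) - h_r(t - eps)) / (2 * eps)

assert abs(num_nr - lam / (1 + lam * t) ** 2) < 1e-6
assert abs(num_r - lam * math.exp(-lam * t)) < 1e-6
# both derivatives are positive, so f is increasing in either case
assert num_nr > 0 and num_r > 0
```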
For notation brevity we define ${g(h_i) = f'(f^{-1}(h_i))}$. Note that as discussed in Appendix~\\ref{appn:primal}, $f(\\cdot)$ is an increasing function, and hence ${g(h_i)\\ge 0}$.\n\nIn the remaining, we show that $V(\\mathbf{h}, \\alpha)$ defined below is a Lyapunov function for the primal-dual algorithm:\n\\[V(\\mathbf{h}, \\alpha) = \\sum_{i}{\\int_{h_i^*}^{h_i}{\\frac{x - h_i^*}{k_i g(x)}\\mathrm{d} x}} + \\frac{1}{2\\gamma}(\\alpha - \\alpha^*)^2.\\]\nDifferentiating the above function with respect to time we obtain\n\\[\\dot{V}(\\mathbf{h}, \\alpha) = \\sum_{i}{\\frac{h_i - h_i^*}{k_i g(h_i)}\\dot{h_i}} + \\frac{\\alpha - \\alpha^*}{\\gamma}\\dot{\\alpha}.\\]\nBased on the controllers defined for $t_i$ and $\\alpha$ we have\n\\[\\dot{h_i} = g(h_i) \\dot{t_i} = k_i g(h_i) \\frac{\\partial \\mathcal{L}}{\\partial h_i},\\]\nand\n\\[\\dot{\\alpha} = -\\gamma\\frac{\\partial \\mathcal{L}}{\\partial \\alpha}.\\]\nReplacing for $\\dot{h_i}$ and $\\dot{\\alpha}$ in $\\dot{V}(\\mathbf{h}, \\alpha)$, we obtain\n\\begin{align*}\n\\dot{V}(\\mathbf{h}, \\alpha) &= \\sum_{i}{(h_i - h_i^*)\\frac{\\partial \\mathcal{L}}{\\partial h_i}} - (\\alpha - \\alpha^*)\\frac{\\partial \\mathcal{L}}{\\partial \\alpha} \\\\\n&\\le \\mathcal{L}(\\mathbf{h}, \\alpha) - \\mathcal{L}(\\mathbf{h}^*, \\alpha) + \\mathcal{L}(\\mathbf{h}, \\alpha^*) - \\mathcal{L}(\\mathbf{h}, \\alpha) \\\\\n&= \\Big(\\mathcal{L}(\\mathbf{h}^*, \\alpha^*) - \\mathcal{L}(\\mathbf{h}^*, \\alpha)\\Big) + \\Big(\\mathcal{L}(\\mathbf{h}, \\alpha^*) - \\mathcal{L}(\\mathbf{h}^*, \\alpha^*)\\Big) \\\\\n&\\le 0,\n\\end{align*}\nwhere the last inequality follows from \n\\[\\mathcal{L}(\\mathbf{h}, \\alpha^*) \\le \\mathcal{L}(\\mathbf{h}^*, \\alpha^*) \\le \\mathcal{L}(\\mathbf{h}^*, \\alpha),\\]\nfor any $\\mathbf{h}$ and $\\alpha$.\n\nMoreover, $V(\\mathbf{h}, \\alpha)$ is non-negative and equals zero only at $(\\mathbf{h}^*, \\alpha^*)$.\nTherefore, $V(\\mathbf{h}, \\alpha)$ is a Lyapunov function, and the system 
state will converge to optimum starting from any initial condition.\n\n\\end{appendices}\n\n\\section{Conclusion}\n\\label{sec:conclusion}\nIn this paper, we proposed the concept of utility-driven caching, and formulated it as an optimization problem with rigid and elastic cache storage size constraints. Utility-driven caching provides a general framework for defining caching policies with considerations of fairness among various groups of files, and implications on market economy for (cache) service providers and content publishers. This framework has the capability to model existing caching policies such as FIFO and LRU, as utility-driven caching policies.\n\nWe developed three decentralized algorithms that implement utility-driven caching policies in an online fashion and that can adapt to changes in file request rates over time. We prove that these algorithms are globally stable and converge to the optimal solution. Through simulations we illustrated the efficiency of these algorithms and the flexibility of our approach.\n\n\\section{Discussion}\n\\label{sec:discussion}\nIn this section, we explore the implications of utility-driven caching on monetizing the caching service and discuss some future research directions.\n\n\\subsection{Decomposition}\nThe formulation of the problem in Section~\\ref{sec:opt} assumes that the utility functions $U_i(\\cdot)$ are known to the system. In reality content providers might decide not to share their utility functions with the service provider. To handle this case, we decompose the optimization problem~\\eqref{eq:opt} into two simpler problems.\n\nSuppose that the cache storage is offered as a service and the service provider charges content providers at a constant rate $r$ for storage space. Hence, a content provider needs to pay an amount of $w_i = rh_i$ to receive hit probability $h_i$ for file $i$. 
The utility maximization problem for the content provider of file $i$ can then be written as\n\\begin{align}\n\\label{eq:opt_user}\n\\text{maximize} \\quad &U_i(w_i\/r) - w_i \\\\\n\\text{such that} \\quad &w_i \\ge 0 \\notag\n\\end{align}\n\nNow, assuming that the service provider knows the vector $\\mathbf{w}$, for a proportionally fair resource allocation, the hit probabilities should be\nset according to\n\\begin{align}\n\\label{eq:opt_network}\n\\text{maximize} \\quad &\\sum_{i=1}^{N}{w_i\\log{(h_i)}} \\\\\n\\text{such that} \\quad &\\sum_{i=1}^{N}{h_i} = B \\notag\n\\end{align}\n\nIt was shown in~\\cite{kelly97} that there always exist vectors $\\mathbf{w}$ and $\\mathbf{h}$, such that $\\mathbf{w}$ solves~\\eqref{eq:opt_user} and $\\mathbf{h}$ solves~\\eqref{eq:opt_network}; further, the vector $\\mathbf{h}$ is the unique solution to~\\eqref{eq:opt}.\n\n\\subsection{Cost and Utility Functions}\nIn Section~\\ref{sec:soft}, we defined a penalty function denoting the cost of using additional storage space. One might also define cost functions based on the consumed network bandwidth. This is especially interesting in modeling in-network caches with network links that are likely to be congested.\n\nOptimization problem~\\eqref{eq:opt} uses utility functions defined as functions of the hit probabilities. It is reasonable to define utility as a function of the hit \\emph{rate}. Whether this makes any changes to the problem, \\emph{e.g.}\\ in the notion of fairness, is a question that requires further investigation. One argument in support of utilities as functions of hit rates is that a service provider might prefer pricing based on request rate rather than cache occupancy. Moreover, in designing hierarchical caches a service provider's objective could be to minimize the internal bandwidth cost. 
This can be achieved by defining the utility functions as $U_i = -C_i(m_i)$ where $C_i(m_i)$ denotes the cost associated with miss rate $m_i$ for file $i$.\n\n\\subsection{Online Algorithms}\nIn Section~\\ref{sec:online}, we developed three online algorithms that can be used to implement utility-driven caching. Although these algorithms are proven to be stable and converge to the optimal solution, they have distinct features that can make one algorithm more effective in implementing a policy. For example, implementing the max-min fair policy based on the dual solution requires knowing\/estimating the file request rates, while it can be implemented using the modified primal-dual solution without such knowledge. Moreover, the convergence rate of these algorithms may differ for different policies. The choice of non-reset or reset TTL caches also has implications on the design and performance of these algorithms.\nThese are subjects that require further study.\n\n\\section{Online Algorithms}\n\\label{sec:online}\nIn Section~\\ref{sec:opt}, we formulated utility-driven caching as a convex optimization problem either with a fixed or an elastic cache size. However, it is not feasible to solve the optimization problem offline and then\nimplement the optimal strategy. Moreover, the system parameters can change over time. 
Therefore, we need algorithms\nthat can be used to implement the optimal strategy and adapt to changes in the system by collecting limited information.\nIn this section, we develop such algorithms.\n\n\\subsection{Dual Solution}\n\\label{sec:dual}\nThe utility-driven caching formulated in~\\eqref{eq:opt} is a convex optimization problem, and hence the optimal solution corresponds to solving the dual problem.\nThe Lagrange dual of the above problem is obtained by incorporating the constraints into the maximization by means of Lagrange multipliers\n\\begin{align*}\n\\text{minimize} \\quad &D(\\alpha, \\boldsymbol{\\nu}, \\boldsymbol{\\eta}) = \\max_{h_i}\\Bigg\\{ \\sum_{i=1}^{N}{U_i(h_i)} \\\\\n&\\qquad\\qquad\\qquad -\\alpha\\left[ \\sum_{i=1}^{N}{h_i} - B\\right] \\\\\n&\\qquad\\qquad\\qquad -\\sum_{i=1}^{N}{\\nu_i (h_i - 1)} + \\sum_{i=1}^{N}{\\eta_i h_i} \\Bigg\\}\\\\\n\\text{such that} \\quad &\\alpha \\ge 0, \\quad \\boldsymbol{\\nu}, \\boldsymbol{\\eta} \\ge \\mathbf{0}.\n\\end{align*}\nUsing timer based caching techniques for controlling the hit probabilities with $0 < t_i < \\infty$ ensures that $0 < h_i < 1$, and hence we have $\\nu_i = 0$ and $\\eta_i = 0$. \n\nHere, we consider an algorithm based on the dual solution to the utility maximization problem~\\eqref{eq:opt}. 
Recall that we can write the Lagrange dual of the utility maximization problem as\n\\[D(\\alpha) = \\max_{h_i}{\\left\\{ \\sum_{i=1}^{N}{U_i(h_i)}-\\alpha\\left[ \\sum_{i=1}^{N}{h_i} - B\\right] \\right\\}},\\]\nand the dual problem can be written as\n\\[\\min_{\\alpha \\ge 0}{D(\\alpha)}.\\]\n\nA natural decentralized approach for minimizing $D(\\alpha)$ is to gradually move the decision variables towards the optimal point using the gradient descent algorithm.\nThe gradient can be easily computed as\n\\[\\frac{\\partial D(\\alpha)}{\\partial\\alpha} = -\\Big(\\sum_{i}{h_i} - B \\Big),\\]\nand since we are doing a gradient \\emph{descent}, $\\alpha$ should be updated according to the \\emph{negative} of the gradient as\n\\[\\alpha \\gets \\max{\\Big\\{0, \\alpha + \\gamma \\Big( \\sum_{i}{h_i} - B \\Big)\\Big\\}},\\]\nwhere $\\gamma > 0$ controls the step size at each iteration. Note that the KKT conditions require that $\\alpha \\ge 0$.\n\nBased on the discussion in Section~\\ref{sec:opt}, to satisfy the optimality condition we must have\n\\[U'_i(h_i) = \\alpha,\\]\nor equivalently\n\\[h_i = {U'_i}^{-1}(\\alpha).\\]\nThe hit probabilities are then controlled through the timer parameters $t_i$, which can be set according to~\\eqref{eq:non_reset_t} and~\\eqref{eq:reset_t} for non-reset and reset TTL caches.\n\nConsidering the hit probabilities as indicators of files residing in the cache, the expression $\\sum_{i}{h_i}$ can be interpreted as the expected number of files currently in the cache, denoted here as $B_{curr}$. 
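As an illustration, the dual (price) update above can be simulated directly. The sketch below assumes log utilities $U_i(h) = w_i \log h$, so that ${U'_i}^{-1}(\alpha) = w_i\/\alpha$, with made-up weights and cache size; for log utilities the fixed point is $\alpha^* = \sum_i w_i \/ B$.

```python
import math

# Made-up example: weights w_i (taken equal to the request rates),
# cache size B, step size gamma, and an arbitrary starting price.
lam = [1.0, 2.0, 3.0, 4.0]
B, gamma, alpha = 2.0, 0.5, 1.0

for _ in range(500):
    # h_i = U'^{-1}(alpha) = w_i/alpha, clipped into (0, 1)
    h = [min(1.0 - 1e-12, w / alpha) for w in lam]
    # dual gradient-descent update on the price alpha
    alpha = max(0.0, alpha + gamma * (sum(h) - B))

assert abs(sum(h) - B) < 1e-6            # capacity constraint met
assert abs(alpha - sum(lam) / B) < 1e-3  # alpha* = sum_i w_i / B
# corresponding reset-TTL timers are finite and positive
t = [-math.log(1.0 - hi) / l for hi, l in zip(h, lam)]
assert all(ti > 0 for ti in t)
```

Here the price rises while the cache is over-subscribed ($\sum_i h_i > B$) and falls otherwise, matching the interpretation of $\alpha$ as a congestion price on storage.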
We can thus summarize the control algorithm for a reset TTL algorithm as\n\\begin{align}\n\\label{eq:dual_sol}\nt_i &= -\\frac{1}{\\lambda_i} \\log{\\Big(1 - {U'_i}^{-1}(\\alpha) \\Big)}, \\notag\\\\\n\\alpha &\\gets \\max{\\{0, \\alpha + \\gamma ( B_{curr} - B )\\}}.\n\\end{align}\nWe obtain an algorithm for a non-reset TTL cache by using the correct expression for $t_i$ in~\\eqref{eq:non_reset_t}.\n\nLet $\\alpha^*$ denote the optimal value for $\\alpha$. We show in Appendix~\\ref{appn:dual} that $D(\\alpha) - D(\\alpha^*)$ is a Lyapunov function and the above algorithm converges to the optimal solution.\n\n\\subsection{Primal Solution}\nWe now consider an algorithm based on the optimization problem in~\\eqref{eq:opt_soft} known as the \\emph{primal} formulation.\n\nLet $W(\\mathbf{h})$ denote the objective function in~\\eqref{eq:opt_soft} defined as\n\\[W(\\mathbf{h}) = \\sum_{i=1}^{N}{U_i(h_i)} - C(\\sum_{i=1}^{N}{h_i} - B).\\]\nA natural approach to obtain the maximum value for $W(\\mathbf{h})$ is to use the gradient ascent algorithm.\nThe basic idea behind the gradient ascent algorithm is to move the variables $h_i$ in the direction of the gradient\n\\[\\frac{\\partial W(\\mathbf{h})}{\\partial h_i} = U'_i(h_i) - C'(\\sum_{i=1}^{N}{h_i} - B).\\]\nSince the hit probabilities are controlled by the TTL timers, we move $h_i$ towards the optimal point by updating timers $t_i$.\nLet $\\dot{h_i}$ denote the derivative of the hit probability $h_i$ with respect to time. Similarly, define $\\dot{t_i}$ as the derivative of the timer parameter $t_i$\nwith respect to time. 
We have\n\\[\\dot{h_i} = \\frac{\\partial h_i}{\\partial t_i}\\dot{t_i}.\\]\nFrom~\\eqref{eq:hit_non_reset} and~\\eqref{eq:hit_reset}, it is easy to confirm that $\\partial h_i\/\\partial t_i > 0$ for non-reset and reset TTL caches.\nTherefore, moving $t_i$ in the direction of the gradient also moves $h_i$ in that direction.\n\nBy gradient ascent, the timer parameters should be updated according to\n\\[t_i \\gets \\max{\\left\\{0, t_i + k_i\\Big[ U'_i(h_i) - C'(B_{curr} - B) \\Big]\\right\\}},\\]\nwhere $k_i > 0$ is the step-size parameter, and $\\sum_{i=1}^{N}{h_i}$ has been replaced with $B_{curr}$ based on the same argument as in the dual solution.\n\nLet $\\mathbf{h}^*$ denote the optimal solution to~\\eqref{eq:opt_soft}. We show in Appendix~\\ref{appn:primal} that $W(\\mathbf{h}^*) - W(\\mathbf{h})$ is a Lyapunov function, and the above algorithm converges to the optimal solution.\n\n\\subsection{Primal-Dual Solution}\nHere, we consider a third algorithm that combines elements of the previous two algorithms.\nConsider the control algorithm\n\\begin{align*}\nt_i &\\gets \\max{\\{0, t_i + k_i [ U'_i(h_i) - \\alpha]\\}}, \\\\\n\\alpha &\\gets \\max{\\{0, \\alpha + \\gamma (B_{curr} - B)\\}}.\n\\end{align*}\nUsing Lyapunov techniques we show in Appendix~\\ref{appn:primal_dual} that the above algorithm converges to the optimal solution.\n\nNow, rather than updating the timer parameters according to the above rule explicitly based on the utility function, we can have update rules based on a cache hit or miss.\nConsider the following differential equation\n\\begin{equation}\n\\label{eq:t}\n\\dot{t_i} = \\delta_m(t_i, \\alpha)(1 - h_i)\\lambda_i - \\delta_h(t_i, \\alpha)h_i\\lambda_i,\n\\end{equation}\nwhere $\\delta_m(t_i, \\alpha)$ and $-\\delta_h(t_i, \\alpha)$ denote the change in $t_i$ upon a cache miss or hit for file $i$, respectively.\nMore specifically, the timer for file $i$ is increased by $\\delta_m(t_i, \\alpha)$ upon a cache miss, and decreased by 
$\\delta_h(t_i, \\alpha)$ on a cache hit.\n\nThe equilibrium for~\\eqref{eq:t} happens when $\\dot{t_i} = 0$, which solving for $h_i$ yields\n\\[h_i = \\frac{\\delta_m(t_i, \\alpha)}{\\delta_m(t_i, \\alpha) + \\delta_h(t_i, \\alpha)}.\\]\nComparing the above expression with $h_i = {U'_i}^{-1}(\\alpha)$ suggests that\n$\\delta_m(t_i, \\alpha)$ and $\\delta_h(t_i, \\alpha)$ can be set to achieve desired hit probabilities and caching policies.\n\nMoreover, the differential equation~\\eqref{eq:t} can be reorganized as\n\\[\\dot{t_i} = h_i \\lambda_i \\Big(\\delta_m(t_i, \\alpha)\/h_i - [\\delta_m(t_i, \\alpha) + \\delta_h(t_i, \\alpha)]\\Big),\\]\nand to move $t_i$ in the direction of the gradient $U'_i(h_i) - \\alpha$ a natural choice for the update functions can be\n\\[\\delta_m(t_i, \\alpha) = h_i U'_i(h_i), \\text{ and } \\delta_m(t_i, \\alpha) + \\delta_h(t_i, \\alpha) = \\alpha.\\]\n\nTo implement proportional fairness for example, these functions can be set as\n\\begin{equation}\n\\label{eq:prop_pd}\n\\delta_m(t_i, \\alpha) = \\lambda_i, \\text{ and } \\delta_h(t_i, \\alpha) = \\alpha - \\lambda_i.\n\\end{equation}\n\nFor the case of max-min fairness, recall from the discussion in Section~\\ref{sec:opt_identical} that a utility function that is content agnostic, \\emph{i.e.}\\ $U_i(h) = U(h)$, results in a max-min fair resource allocation. Without loss of generality we can have $U_i(h_i) = \\log{h_i}$. 
Thus, max-min fairness can be implemented by having\n\\[\\delta_m(t_i, \\alpha) = 1, \\text{ and } \\delta_h(t_i, \\alpha) = \\alpha - 1.\\]\nNote that with these functions, max-min fairness can be implemented without requiring knowledge about request arrival rates~$\\lambda_i$, while the previous approaches require such knowledge.\n\n\n\\subsection{Estimation of $\\lambda_i$}\n\\label{sec:estimate}\nComputing the timer parameter $t_i$ in the algorithms discussed in this section requires knowing the request arrival rates for most of the policies.\nEstimation techniques can be used to approximate the request rates in case such knowledge is not available at the (cache) service provider.\n\nLet $r_i$ denote the remaining TTL time for file $i$. Note that $r_i$ can be computed based on $t_i$ and a time-stamp for the last time file~$i$ was requested.\nLet $X_i$ denote the random variable corresponding to the inter-arrival times for the requests for file~$i$, and $\\bar{X_i}$ be its mean.\nWe can approximate the mean inter-arrival time as $\\hat{\\bar{X_i}} = t_i - r_i$. Note that $\\hat{\\bar{X_i}}$ defined in this way is a one-sample\nunbiased estimator of $\\bar{X_i}$. Therefore, $\\hat{\\bar{X_i}}$ is an unbiased estimator of $1\/\\lambda_i$. In the simulation section, we will use this estimator in computing the timer parameters for evaluating our algorithms.\n\n\n\\section{Utility Functions and Fairness}\n\\label{sec:fairness}\nUsing different utility functions in the optimization formulation~\\eqref{eq:opt} yields different timer values for the files.\nIn this sense, each utility function defines a notion of fairness in allocating storage resources to different files.\nIn this section, we study a number of utility functions that have important fairness properties associated with them.\n\\subsection{Identical Utilities}\n\\label{sec:opt_identical}\nAssume that all files have the same utility function, \\emph{i.e.}\\ $U_i(h_i) = U(h_i)$ for all $i$. 
Then, from~\\eqref{eq:c} we obtain\n\\[\\sum_{i=1}^{N}{{U'}^{-1}(\\alpha)} = N {U'}^{-1}(\\alpha) = B,\\]\nand hence\n\\[{U'}^{-1}(\\alpha) = B\/N.\\]\nUsing~\\eqref{eq:hu} for the hit probabilities we get\n\\[h_i = B\/N, \\quad \\forall{i}.\\]\n\nUsing a non-reset TTL policy, the timers should be set according to\n\\[t_i = \\frac{B}{\\lambda_i (N - B)},\\]\nwhile with a reset TTL policy, they must equal\n\\[t_i = -\\frac{1}{\\lambda_i}\\log{\\left(1-\\frac{B}{N}\\right)}.\\]\n\nThe above calculations show that identical utility functions yield identical hit probabilities for all files. Note that the hit probabilities computed above do not depend on the utility function.\n\n\\subsection{$\\boldsymbol{\\beta}$-Fair Utility Functions}\nHere, we consider the family of $\\beta$-fair (also known as \\emph{isoelastic}) utility functions given by\n\\[U_i(h_i) = \\left\\{ \\begin{array}{ll}\n w_i\\frac{h_i^{1-\\beta}}{1-\\beta} & \\beta \\ge 0, \\beta \\neq 1; \\\\\n & \\\\\n w_i \\log{h_i} & \\beta = 1,\n \\end{array} \\right. 
\\]\nwhere the coefficient $w_i \\ge 0$ denotes the weight for file $i$.\nThis family of utility functions unifies different notions of fairness in resource allocation~\\cite{srikant13}.\nIn the remainder of this section, we investigate some of the choices for $\\beta$ that lead to interesting special cases.\n\n\\subsubsection{$\\boldsymbol{\\beta = 0}$}\\hspace*{\\fill} \\\\\nWith $\\beta = 0$, we get $U_i(h_i) = w_i h_i$, and maximizing the sum of the utilities\ncorresponds to\n\\[\\max_{h_i}{\\sum_{i}{w_i h_i}}.\\]\n\nThe above utility function defined does not satisfy the requirements for a utility function mentioned in Section~\\ref{sec:model}, as it is not strictly concave.\nHowever, it is easy to see that the sum of the utilities is maximized when\n\\[h_i = 1, i=1,\\ldots, B \\quad \\text{ and } \\quad h_i = 0, i=B+1,\\ldots, N,\\]\nwhere we assume that weights are sorted as ${w_1 \\ge \\ldots \\ge w_N}$.\nThese hit probabilities indicate that the optimal timer parameters are\n\\[t_i = \\infty, i=1,\\ldots, B \\quad \\text{ and } \\quad t_i = 0, i=B+1,\\ldots, N.\\]\n\nNote that the policy obtained by implementing this utility function with $w_i = \\lambda_i$ corresponds to the Least-Frequently Used (LFU) caching policy,\nand maximizes the overall throughput.\n\n\\subsubsection{$\\boldsymbol{\\beta = 1}$}\\hspace*{\\fill} \\\\\nLetting $\\beta = 1$, we get $U_i(h_i) = w_i \\log{h_i}$,\nand hence maximizing the sum of the utilities corresponds to\n\\[\\max_{h_i}{\\sum_{i}{w_i \\log{h_i}}}.\\]\n\nIt is easy to see that ${U'_i}^{-1}(\\alpha) = w_i \/ \\alpha$, and hence using~\\eqref{eq:c} we obtain\n\\[\\sum_{i}{{U'_i}^{-1}(\\alpha)} = \\sum_{i}{w_i} \/ \\alpha = B,\\]\nwhich yields\n\\[\\alpha = \\sum_{i}{w_i} \/ B.\\]\nThe hit probability of file $i$ then equals\n\\[h_i = {U'_i}^{-1}(\\alpha) = \\frac{w_i}{\\sum_{j}{w_j}}B.\\]\n\nThis utility function implements a \\emph{proportionally fair} policy~\\cite{kelly98}.\nWith $w_i = \\lambda_i$, the hit 
probability of file $i$ is proportional to the request arrival rate $\\lambda_i$.\n\n\\subsubsection{$\\boldsymbol{\\beta = 2}$}\\hspace*{\\fill} \\\\\nWith $\\beta = 2$, we get $U_i(h_i) = -w_i\/h_i$, and maximizing the total utility corresponds to\n\\[\\max_{h_i}{\\sum_{i}{\\frac{-w_i}{h_i}}}.\\]\n\nIn this case, we get ${U'_i}^{-1}(\\alpha) = \\sqrt{w_i} \/ \\sqrt{\\alpha}$, therefore\n\\[\\sum_{i}{{U'_i}^{-1}(\\alpha)} = \\sum_{i}{\\sqrt{w_i}} \/ \\sqrt{\\alpha} = B,\\]\nand hence\n\\[\\alpha = \\Big(\\sum_{i}{\\sqrt{w_i}}\\Big)^2 \/ B^2.\\]\n\nThe hit probability of file $i$ then equals\n\\[h_i = {U'_i}^{-1}(\\alpha) = \\frac{\\sqrt{w_i}}{\\sqrt{\\alpha}} = \\frac{\\sqrt{w_i}}{\\sum_{j}{\\sqrt{w_j}}}B.\\]\n\nThe utility function defined above is known to yield minimum potential delay fairness. It was shown in~\\cite{kelly98} that the TCP congestion control protocol\nimplements such a utility function.\n\n\\subsubsection{$\\boldsymbol{\\beta \\rightarrow\\infty}$}\\hspace*{\\fill} \\\\\nWith $\\beta \\rightarrow\\infty$, maximizing the sum of the utilities corresponds to (see~\\cite{mo00} for proof)\n\\[\\max_{h_i}{\\min_{i}{h_i}}.\\]\n\nThis utility function does not comply with the rules mentioned in Section~\\ref{sec:model} for utility functions, as it is not strictly concave.\nHowever, it is easy to see that the above utility function yields\n\\[h_i = B\/N, \\quad \\forall{i}.\\]\n\nThe utility function defined here maximizes the minimum hit probability, and corresponds to the \\emph{max-min fairness}. Note that using identical utility functions for all files resulted in similar hit probabilities as this case. A brief summary of the utility functions discussed here is given in Table~\\ref{tbl:u}. 
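The closed-form allocations derived above for $\beta = 1$ and $\beta = 2$ can be sketched as follows (example weights; the formulas presuppose that every resulting $h_i \le 1$):

```python
# beta = 1: h_i = w_i * B / sum_j w_j          (proportional fairness)
# beta = 2: h_i = sqrt(w_i) * B / sum_j sqrt(w_j)  (min potential delay)
w = [1.0, 2.0, 3.0, 4.0]  # example weights
B = 2.0                   # cache size

h_prop = [wi * B / sum(w) for wi in w]
s = sum(wi ** 0.5 for wi in w)
h_delay = [(wi ** 0.5) * B / s for wi in w]

# both allocations use exactly the cache budget and are valid probabilities
assert abs(sum(h_prop) - B) < 1e-9 and abs(sum(h_delay) - B) < 1e-9
assert all(0 < h <= 1 for h in h_prop + h_delay)
# larger beta spreads the allocation more evenly across files
spread = lambda h: max(h) - min(h)
assert spread(h_delay) < spread(h_prop)
```

The last assertion illustrates the usual ordering of the $\beta$-fair family: as $\beta$ grows the allocation moves from throughput-maximizing toward the uniform $h_i = B\/N$ of max-min fairness.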
\n\\begin{table*}[]\n\\centering\n\\caption{$\\beta$-fair utility functions family}\n\\begin{tabular}{ | c | c | c | c |}\n\\hline\n$\\beta$ & $\\max{\\sum_{i}{U_i(h_i)}}$ & $h_i$ & implication \\\\\n\\hline\n 0 & $\\max{\\sum{w_i h_i}}$ & $h_i = 1, i\\le B, h_i = 0, i \\ge B+1$ & maximizing overall throughput \\\\\n 1 & $\\max{\\sum{w_i \\log{h_i}}}$ & $h_i = w_i B \/ \\sum_{j}{w_j}$ & proportional fairness \\\\\n 2 & $\\max{-\\sum{w_i \/ h_i}}$ & $h_i = \\sqrt{w_i} B \/ \\sum_{j}{\\sqrt{w_j}}$ & minimize potential delay \\\\\n $\\infty$ & $\\max{\\min{h_i}}$ & $h_i = B\/N$ & max-min fairness \\\\\n\\hline\n\\end{tabular}\n\\label{tbl:u}\n\\end{table*}\n\n\n\\section{Introduction}\nThe increase in data traffic over past years is predicted to continue more aggressively, with global Internet traffic in 2019 estimated to reach 64 times of its volume in 2005~\\cite{cisco14}.\nThe growth in data traffic is recognized to be due primarily to streaming of video on-demand content over cellular networks.\nHowever, traditional methods such as increasing the amount of spectrum or deploying more base stations are not sufficient to cope with this predicted traffic increase~\\cite{Andrews12, Golrezaei12}. 
Caching is recognized, in current and future Internet architecture proposals, as one of the most effective means to improve the performance of web applications.\nBy bringing the content closer to users, caches greatly reduce network bandwidth usage, server load, and perceived service delays~\\cite{borst10}.\n\n\n\nBecause of the trend for ubiquitous computing, creation of new content publishers and consumers, the Internet is becoming an increasingly heterogeneous environment where different content types have different quality of service requirements, depending on the content publisher\/consumer.\nSuch an increasing diversity in service expectations advocates the need for content delivery infrastructures with service differentiation among different applications and content classes.\nService differentiation not only induces important technical gains, but also provides significant economic benefits~\\cite{feldman02}.\nDespite a plethora of research on the design and implementation of \\emph{fair} and \\emph{efficient} algorithms for differentiated bandwidth sharing in communication networks, little work has focused on the provision of multiple levels of service in network and web caches.\nThe little available research has focused on designing controllers for partitioning cache space~\\cite{ko03, lu04}, biased replacement policies towards particular content classes~\\cite{kelly99}, or using multiple levels of caches~\\cite{feldman02}. 
These techniques either require additional controllers for fairness, or inefficiently use the cache storage.\n\nMoreover, traditional cache management policies such as LRU treat different contents in a strongly coupled manner that makes it difficult for (cache) service providers to implement differentiated services, and for content publishers to account for the valuation of their content delivered through content distribution networks.\nIn this paper, we propose a utility-driven caching framework, where each content has an associated utility and content is stored and managed in a cache so as to maximize the aggregate utility for all content. Utilities can be chosen to trade off user satisfaction and cost of storing the content in the cache.\nWe draw on analytical results for time-to-live (TTL) caches~\\cite{Nicaise14b}, to design caches with ties to utilities for individual (or classes of) contents.\nUtility functions also have implicit notions of fairness that dictate the time each content stays in cache.\nOur framework allows us to develop \\emph{online} algorithms for cache management, for which we prove achieve optimal performance. 
Our framework has implications for distributed pricing and control mechanisms and hence is well-suited for designing cache market economic models.\n\nOur main contributions in this paper can be summarized as follows:\n\\begin{itemize}\n\\item We formulate a utility-based optimization framework for maximizing aggregate content publisher utility subject to buffer capacity constraints at the service provider.\nWe show that existing caching policies, \\emph{e.g.}\\ LRU, LFU and FIFO, can be modeled as utility-driven caches within this framework.\n\\item By reverse engineering the LRU and FIFO caching policies as utility maximization problems, we show how the \\emph{characteristic time}~\\cite{Che01} defined for these caches relates to the Lagrange multiplier corresponding to the cache capacity constraint.\n\\item We develop online algorithms for managing cache content, and prove the convergence of these algorithms to the optimal solution using Lyapunov functions.\n\\item We show that our framework can be used in revenue based models where content publishers react to prices set by (cache) service providers without revealing their utility functions.\n\\item We perform simulations to show the efficiency of our online algorithms using different utility functions with different notions of fairness.\n\\end{itemize}\n\nThe remainder of the paper is organized as follows. We review related work in the next section. Section~\\ref{sec:model} explains the network model considered in this paper, and\nSection~\\ref{sec:opt} describes our approach in designing utility maximizing caches. In Section~\\ref{sec:fairness} we elaborate on fairness implications of utility functions, and in Section~\\ref{sec:reverse}, we derive the utility functions maximized by LRU and FIFO caches. In Section~\\ref{sec:online}, we develop online algorithms for implementing utility maximizing caches. 
We present simulation results in Section~\\ref{sec:simulation}, and discuss prospects and implications of the cache utility maximization framework in Section~\\ref{sec:discussion}. Finally, we conclude the paper in Section~\\ref{sec:conclusion}.\n\n\\section{Model}\n\\label{sec:model}\nConsider a set of $N$ files, and a cache of size $B$. We use the terms file and content interchangeably in this paper.\nLet $h_i$ denote the hit probability for content $i$.\nAssociated with each content, $i=1,\\ldots, N$, is a utility function $U_i:[0,1] \\rightarrow \\mathbb{R}$ that represents the ``satisfaction'' perceived by observing hit probability $h_i$.\n$U_i(\\cdot)$ is assumed to be increasing, continuously differentiable, and strictly concave.\nNote that the derivative of such a function is strictly decreasing and hence invertible. We will treat utility functions that do not satisfy these constraints as special cases.\n\n\\subsection{TTL Caches}\nIn a TTL cache, each content is associated with a timer~$t_i$. Whenever a cache miss for content $i$ occurs, content $i$ is stored in the cache and its timer is set to $t_i$.\nTimers decrease at a constant rate, and a content is evicted from the cache when its timer reaches zero.\nWe can adjust the hit probability of a file by controlling the time the file is kept in cache.\n\nThere are two TTL cache designs:\n\\begin{itemize}\n\\item Non-reset TTL Cache: the TTL is only set at cache misses, \\emph{i.e.}~the TTL is not reset upon cache hits.\n\\item Reset TTL Cache: the TTL is set each time the content is requested.\n\\end{itemize}\nPrevious work on the analysis of TTL caches~\\cite{Nicaise14} has shown that the hit probability of file $i$ for the non-reset and reset TTL caches can be expressed as \n\\begin{equation}\n\\label{eq:hit_non_reset}\nh_i = 1 - \\frac{1}{1 + \\lambda_i t_i},\n\\end{equation}\nand\n\\begin{equation}\n\\label{eq:hit_reset}\nh_i = 1 - e^{-\\lambda_i t_i},\n\\end{equation}\nrespectively, where requests for file $i$ arrive at the cache 
according to a Poisson process with rate $\\lambda_i$.\nNote that depending on the utility functions, different (classes of) files might have different or equal TTL values.\n\n\\section{Cache Utility Maximization}\n\\label{sec:opt}\nIn this section, we formulate cache management as a utility maximization problem. We introduce two formulations, one where the buffer size introduces a hard constraint and a second where it introduces a soft constraint.\n\n\\subsection{Hard Constraint Formulation}\nWe are interested in designing a cache management policy that optimizes the sum of utilities over all files, more precisely,\n\\begin{align}\n\\label{eq:opt}\n\\text{maximize} \\quad &\\sum_{i=1}^{N}{U_i(h_i)} \\notag\\\\\n\\text{such that} \\quad &\\sum_{i=1}^{N}{h_i} = B \\\\\n& 0 \\le h_i \\le 1, \\quad i=1, 2, \\ldots, N. \\notag\n\\end{align}\nNote that the feasible solution set is convex and since the objective function is strictly concave and continuous, a unique maximizer, called the optimal solution, exists. Also note that the buffer constraint is based on the {\\em expected} number of files not exceeding the buffer size and not the total number of files.\nTowards the end of this section, we show that the buffer space can be managed in a way such that the probability of \\emph{violating} the buffer size constraint vanishes as the number of files and cache size grow large.\n\nThis formulation does not enforce any special technique for managing the cache content, and any strategy that can easily adjust the hit probabilities can be employed. 
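To make the hard-constraint formulation concrete, here is a minimal numerical sketch (our illustration, not code from the paper) for the assumed example of weighted logarithmic utilities $U_i(h_i) = w_i \log h_i$: the optimizer equalizes marginal utilities at a common value $\alpha$, giving $h_i = \min(w_i/\alpha, 1)$, and $\alpha$ can be found by bisection on the capacity constraint.

```python
import math

def occupancy(w, alpha):
    # h_i = min(w_i / alpha, 1) maximizes w_i*log(h_i) - alpha*h_i on (0, 1]
    return sum(min(wi / alpha, 1.0) for wi in w)

def solve_alpha(w, B):
    """Bisect on the multiplier alpha until sum_i h_i(alpha) = B."""
    lo, hi = 1e-12, sum(w) * len(w)   # occupancy(hi) is far below B
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if occupancy(w, mid) > B:     # cache over-committed: raise alpha
            lo = mid
        else:
            hi = mid
    return hi

# Hypothetical Zipf-like weights, an assumption for illustration only.
w = [1.0 / i**0.8 for i in range(1, 101)]
alpha = solve_alpha(w, B=10)
h = [min(wi / alpha, 1.0) for wi in w]
assert abs(sum(h) - 10) < 1e-6        # capacity constraint met
```

More popular (higher-weight) contents receive higher hit probabilities; the multiplier plays the role of a per-unit price for cache space.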
We use the TTL cache as our building block because it provides the means, through setting timers, to control the hit probabilities of different files in order to maximize the sum of utilities.\n\nUsing timer-based caching techniques for controlling the hit probabilities with $0 < t_i < \\infty$ ensures that $0 < h_i < 1$, and hence, disregarding the possibility of $h_i = 0$ or $h_i = 1$, we can write the Lagrangian function as\n\\begin{align*}\n\\mathcal{L}(\\mathbf{h}, \\alpha) &= \\sum_{i=1}^{N}{U_i(h_i)}-\\alpha\\left[ \\sum_{i=1}^{N}{h_i} - B\\right] \\\\\n&= \\sum_{i=1}^{N}{\\Big[ U_i(h_i)-\\alpha h_i \\Big]} + \\alpha B,\n\\end{align*}\nwhere $\\alpha$ is the Lagrange multiplier.\n\nIn order to achieve the maximum of $\\mathcal{L}(\\mathbf{h}, \\alpha)$, the hit probabilities should satisfy\n\\begin{equation}\n\\label{eq:drv}\n\\frac{\\partial\\mathcal{L}}{\\partial h_i} = \\frac{\\mathrm{d} U_i}{\\mathrm{d} h_i} - \\alpha = 0.\n\\end{equation}\n\nLet $U'_i(\\cdot)$ denote the derivative of the utility function $U_i(\\cdot)$, and define ${U'_i}^{-1}(\\cdot)$ as its inverse function.\nFrom~\\eqref{eq:drv} we get\n\\[U'_i(h_i) = \\alpha,\\]\nor equivalently\n\\begin{equation}\n\\label{eq:hu}\nh_i = {U'_i}^{-1}(\\alpha).\n\\end{equation}\nApplying the cache storage constraint we obtain\n\\begin{equation}\n\\label{eq:c}\n\\sum_{i}{h_i} = \\sum_{i}{{U'_i}^{-1}(\\alpha)} = B,\n\\end{equation}\nand $\\alpha$ can be computed by solving the fixed-point equation given above.\n\nAs mentioned before, we can implement utility maximizing caches using TTL-based policies.\nUsing the expressions for the hit probabilities of non-reset and reset TTL caches given in~\\eqref{eq:hit_non_reset} and~\\eqref{eq:hit_reset},\nwe can compute the timer parameters $t_i$ once $\\alpha$ is determined from~\\eqref{eq:c}.\nFor non-reset TTL caches we obtain\n\\begin{equation}\n\\label{eq:non_reset_t}\nt_i = -\\frac{1}{\\lambda_i}\\Big(1 - \\frac{1}{1 - 
{U'_i}^{-1}(\\alpha)}\\Big),\n\\end{equation}\nand for reset TTL caches we get\n\\begin{equation}\n\\label{eq:reset_t}\nt_i = -\\frac{1}{\\lambda_i}\\log{\\Big(1 - {U'_i}^{-1}(\\alpha)\\Big)}.\n\\end{equation}\n\n\\subsection{Soft Constraint Formulation}\n\\label{sec:soft}\nThe formulation in~\\eqref{eq:opt} assumes a hard constraint on cache capacity.\nIn some circumstances it may be appropriate for the (cache) service provider to increase the available cache storage at some cost to the file provider\nfor the additional resources\\footnote{One straightforward way of thinking about this is to turn the cache memory disks on and off based on the demand.}.\nIn this case the cache capacity constraint can be replaced with a penalty function $C(\\cdot)$ denoting the cost of the extra cache storage.\nHere, $C(\\cdot)$ is assumed to be a convex and increasing function.\nWe can now write the utility- and cost-driven caching formulation as\n\\begin{align}\n\\label{eq:opt_soft}\n\\text{maximize} \\quad &\\sum_{i=1}^{N}{U_i(h_i)} - C(\\sum_{i=1}^{N}{h_i} - B) \\\\\n\\text{such that} \\quad &0 \\le h_i \\le 1, \\quad i=1,2,\\ldots, N. \\notag\n\\end{align}\n\nNote that the optimality condition for the above optimization problem states that\n\\[U'_i(h_i) = C'(\\sum_{i=1}^{N}{h_i} - B).\\]\n\nTherefore, for the hit probabilities we obtain\n\\[h_i = {U'_i}^{-1}\\Big(C'(\\sum_{i=1}^{N}{h_i} - B)\\Big),\\]\nand the optimal value for the cache storage can be computed using the fixed-point equation\n\\begin{equation}\n\\label{eq:elastic_B}\nB^* = \\sum_{i=1}^{N}{{U'_i}^{-1}\\Big(C'(B^* - B)\\Big)}.\n\\end{equation}\n\n\\subsection{Buffer Constraint Violations}\n\\label{sec:violation}\nBefore we leave this section, we address an issue that arises in both formulations, namely how to deal with the fact that there may be more contents with unexpired timers than can be stored in the buffer. 
This occurs in the formulation of (\\ref{eq:opt}) because the constraint is on the {\\em average} buffer occupancy and in (\\ref{eq:opt_soft}) because there is no constraint. Let us focus on the formulation in (\\ref{eq:opt}) first. Our approach is to provide a buffer of size $B(1+\\epsilon)$ with $\\epsilon > 0$, where a portion $B$ is used to solve the optimization problem and the additional portion $\\epsilon B$ is used to handle buffer violations. We will see that as the number of contents, $N$, increases, we can grow $B$ sublinearly and allow $\\epsilon$ to shrink to zero, while ensuring that contents will not be evicted from the cache before their timers expire with high probability. Let $X_i$ denote whether content $i$ is in the cache or not; $P(X_i = 1) = h_i$. Now let $\\mathbb{E}\\bigl[\\sum_{i=1}^N X_i\\bigr] = \\sum_{i=1}^N h_i = B$. We write $B(N)$ as a function of $N$, and assume that $B(N) = \\omega(1)$. \n\\begin{theorem}\n\\label{thrm:violation}\nFor any $\\epsilon > 0$\n\\[\n \\mathbb{P}\\bigl(\\sum_{i=1}^N X_i \\ge B(N)(1+\\epsilon)\\bigr) \\le e^{-\\epsilon^2 B(N)\/3} .\n\\]\n\\end{theorem}\nThe proof follows from the application of a Chernoff bound.\n\nTheorem~\\ref{thrm:violation} states that we can size the buffer as $B(1+\\epsilon)$ while using a portion $B$ as the constraint in the optimization. The remaining portion, $\\epsilon B$, is used to protect against buffer constraint violations. \nIt suffices for our purpose that ${\\epsilon^2 B(N) = \\omega(1)}$. This allows us to select $B(N) = o(N)$ while at the same time selecting $\\epsilon = o(1)$. As an example, consider Zipf's law with $\\lambda_i = \\lambda\/i^s$, $\\lambda > 0$, $0 < s < 1$, $i=1,\\ldots, N$ under the assumption that $\\max{\\{t_i\\}} = t$ for some $t < \\infty$. In this case, we can grow the buffer as $B(N) = O(N^{1-s})$ while \n$\\epsilon$ can shrink as $\\epsilon = 1\/N^{(1-s)\/3}$. 
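The theorem is straightforward to check numerically. The sketch below (our illustration; the uniform hit probabilities are an assumption) samples independent occupancy indicators $X_i \sim \text{Bernoulli}(h_i)$ and compares the empirical violation frequency against the Chernoff bound $e^{-\epsilon^2 B/3}$:

```python
import math
import random

random.seed(1)
N, B, eps = 10000, 1000, 0.1
h = [B / N] * N                  # assumed uniform hit probabilities, sum = B
trials = 500
violations = 0
for _ in range(trials):
    occupied = sum(random.random() < p for p in h)
    if occupied >= B * (1 + eps):
        violations += 1
bound = math.exp(-eps**2 * B / 3)   # Chernoff bound, roughly 0.036
assert violations / trials <= bound
```

In this regime the empirical violation frequency is far below the bound, consistent with the theorem being a conservative guarantee.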
Analogous expressions can be derived for $s \\ge 1$.\n\nSimilar choices can be made for the soft constraint formulation.\n\n\n\n\n\n\n\n\\section{Related Work}\n\\subsection{Network Utility Maximization}\n\nUtility functions have been widely used in the modeling and control of computer networks, from stability analysis of queues to the study of fairness in network resource allocation; see~\\cite{srikant13, neely10} and references therein. Kelly~\\cite{kelly97} was the first to formulate the problem of rate allocation as one of achieving maximum\naggregate utility for users, and describe how network-wide optimal rate allocation can be achieved by having individual users control their transmission rates.\nThe work of Kelly~\\emph{et al.}~\\cite{kelly98} presents the first mathematical model and analysis of the behavior of congestion control algorithms for general topology networks.\nSince then, there has been extensive research in generalizing and applying Kelly's \\emph{Network Utility Maximization} framework to model and analyze various network protocols and architectures. This framework has been used to study problems such as network routing~\\cite{tassiulas92}, throughput maximization~\\cite{eryilmaz07}, dynamic power allocation~\\cite{neely03} and scheduling in energy harvesting networks~\\cite{huang13}, among many others. Ma and Towsley~\\cite{Ma15} have recently proposed using utility functions for the purpose of designing contracts that allow service providers to monetize caching.\n\n\n\\subsection{Time-To-Live Caches}\nTTL caches, in which content eviction occurs upon the expiration of a timer, have been employed\nsince the early days of the Internet with the Domain Name System (DNS) being an important application~\\cite{Jung03}. More recently, TTL caches have regained popularity, mostly due to admitting a general approach in the analysis of caches that can also be used to model replacement-based caching policies such as LRU. 
The connection between\nTTL caches and replacement-based (capacity-driven) policies was first established for the LRU policy by Che~\\emph{et al.}~\\cite{Che01} through the notion of cache \\emph{characteristic time}. The characteristic time was theoretically justified and extended to other caching policies such as FIFO and RANDOM~\\cite{Fricker12}. \nThis connection was further confirmed to hold for more general arrival models than Poisson processes~\\cite{Bianchi13}. Over the past few years, several exact and approximate analyses have been proposed for modeling single caches in isolation as well as cache networks using the TTL framework~\\mbox{\\cite{Nicaise14, Berger14}}.\n\nIn this paper, we use TTL timers as \\emph{tuning knobs} for individual (or classes of) files to control the utilities observed by the corresponding contents,\nand to implement \\emph{fair} usage of cache space among different (classes of) contents.\nWe develop our framework based on two types of TTL caches described in the next section.\n\n\n\n\\section{Reverse Engineering}\n\\label{sec:reverse}\nIn this section, we study the widely used replacement-based caching policies, FIFO and LRU, and show that their hit\/miss behaviors can be duplicated in our framework through an appropriate choice of utility functions.\n \nIt was shown in~\\cite{Nicaise14} that, with a proper choice of timer values, a TTL cache can generate the same statistical properties, \\emph{i.e.}~same hit\/miss probabilities, as FIFO and LRU caching policies. \nIn implementing these caches, non-reset and reset TTL caches are used for FIFO and LRU, respectively, with $t_i=T, i=1,\\ldots,N$ where $T$ denotes the \\emph{characteristic time}~\\cite{Che01} of these caches. 
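Both TTL designs are also easy to validate by simulation. The following sketch (ours, not the papers' code) drives a single content with Poisson requests and checks the empirical hit probabilities against the two formulas $1 - 1/(1+\lambda t)$ and $1 - e^{-\lambda t}$:

```python
import math
import random

random.seed(0)
lam, T, n = 2.0, 1.0, 200_000

# Reset TTL: the content is in cache iff the previous request was < T ago.
reset_hits = sum(random.expovariate(lam) < T for _ in range(n))

# Non-reset TTL: a miss (re)inserts the content, which then expires at t + T;
# requests arriving before the expiry are hits.
t, expiry, nonreset_hits = 0.0, -1.0, 0
for _ in range(n):
    t += random.expovariate(lam)
    if t <= expiry:
        nonreset_hits += 1
    else:
        expiry = t + T

assert abs(reset_hits / n - (1 - math.exp(-lam * T))) < 0.01      # reset formula
assert abs(nonreset_hits / n - (1 - 1 / (1 + lam * T))) < 0.01    # non-reset formula
```

With $\lambda T = 2$ the reset cache hits about $86\%$ of requests and the non-reset cache about $67\%$, matching the closed forms.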
For FIFO and LRU caches with Poisson arrivals the hit probabilities can be expressed as\n$h_i = 1 - 1\/(1+\\lambda_iT)$ and $h_i = 1 - e^{-\\lambda_i T}$, and $T$ is computed such that $\\sum_{i}{h_i} = B$.\nFor example for the LRU policy $T$ is the unique solution to the fixed-point equation\n\\[\\sum_{i=1}^{N}{\\left(1 - e^{-\\lambda_i T}\\right)} = B.\\]\n\n\nIn our framework, we see from~\\eqref{eq:hu} that the file hit probabilities depend on the Lagrange multiplier $\\alpha$ corresponding to the cache size constraint in~\\eqref{eq:opt}.\nThis suggests a connection between $T$ and $\\alpha$. Further note that the hit probabilities are increasing functions of $T$. On the other hand, since utility functions are concave and increasing, $h_i = {U'_i}^{-1}(\\alpha)$ is a decreasing\nfunction of $\\alpha$. Hence, we can denote $T$ as a decreasing function of $\\alpha$, \\emph{i.e.}~$T = f(\\alpha)$. \n\nDifferent choices of function $f(\\cdot)$ would result in different utility functions for FIFO and LRU policies. \nHowever, if we impose the functional dependence $U_i(h_i) = \\lambda_i U_0(h_i)$, then the equation $h_i = {U'_i}^{-1}(\\alpha)$ yields\n\\[h_i = {U'_0}^{-1}(\\alpha\/\\lambda_i).\\]\nFrom the expressions for the hit probabilities of the FIFO and LRU policies, we obtain $T = 1\/\\alpha$. 
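For concreteness, the characteristic time itself is cheap to compute numerically; the sketch below (our illustration) solves the LRU fixed-point equation $\sum_i (1 - e^{-\lambda_i T}) = B$ by bisection for Zipf-distributed request rates:

```python
import math

def lru_characteristic_time(lams, B, iters=100):
    """Bisection for T in sum_i (1 - exp(-lam_i * T)) = B."""
    occ = lambda T: sum(1.0 - math.exp(-l * T) for l in lams)
    lo, hi = 0.0, 1.0
    while occ(hi) < B:            # expand until the root is bracketed
        hi *= 2.0
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if occ(mid) < B:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Zipf(s = 0.8) popularities over 10^4 files with aggregate request rate one
# and B = 1000, mirroring the simulation setup described later in the paper.
raw = [1.0 / i**0.8 for i in range(1, 10001)]
total = sum(raw)
lams = [r / total for r in raw]
T = lru_characteristic_time(lams, 1000)
assert abs(sum(1 - math.exp(-l * T) for l in lams) - 1000) < 1e-3
```

The same routine applies to FIFO by swapping in the hit-probability expression $1 - 1/(1+\lambda_i T)$.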
In the remainder of the section, we use this to derive utility functions for the FIFO and LRU policies.\n\n\\subsection{FIFO}\nThe hit probability of file $i$ with request rate $\\lambda_i$ in a FIFO cache with characteristic time $T$ is\n\\[h_i = 1 - \\frac{1}{1 + \\lambda_i T}.\\]\nSubstituting this into~\\eqref{eq:hu} and letting $T = 1\/\\alpha$ yields\n\\[{U'_i}^{-1}(\\alpha) = 1 - \\frac{1}{1 + \\lambda_i \/ \\alpha}.\\]\nComputing the inverse of ${U'_i}^{-1}(\\cdot)$ yields\n\\[U'_i(h_i) = \\frac{\\lambda_i}{h_i} - \\lambda_i,\\]\nand integration of the two sides of the above equation yields the utility function for the FIFO cache \n\\[U_i(h_i) = \\lambda_i (\\log{h_i} - h_i).\\]\n\n\\subsection{LRU}\nTaking $h_i = 1 - e^{-\\lambda_i T}$ for the LRU policy and letting ${T = 1\/\\alpha}$ yields\n\\[{U'_i}^{-1}(\\alpha) = 1 - e^{-\\lambda_i\/\\alpha},\\]\nwhich yields\n\\[U'_i(h_i) = \\frac{-\\lambda_i}{\\log{(1-h_i)}}.\\]\nIntegration of the two sides of the above equation yields the utility function for the LRU caching policy\n\\[U_i(h_i) = \\lambda_i \\text{li}(1-h_i),\\]\nwhere $\\text{li}(\\cdot)$ is the logarithmic integral function\n\\[\\text{li}(x) = \\int_0^x{\\frac{\\mathrm{d} t}{\\ln{t}}}.\\]\n\nIt is easy to verify, using the approach explained in Section~\\ref{sec:opt}, that the utility functions computed\nabove indeed yield the correct expressions for the hit probabilities of the FIFO and LRU caches.\nWe believe these utility functions are unique if restricted to be multiplicative in\\footnote{We note that utility functions, defined in this context, are subject to affine transformations, \\emph{i.e.}~$aU+b$ yields the same hit probabilities as $U$ for any constant $a>0$ and $b$.} $\\lambda_i$.\n\n\\section{Simulations}\n\\label{sec:simulation}\nIn this section, we evaluate the efficiency of the online algorithms developed in this paper.\nDue to space restrictions, we limit our study to four caching policies: FIFO, LRU, proportionally fair, and 
max-min fair.\n\nPer our discussion in Section~\\ref{sec:reverse}, non-reset and reset TTL caches can be used with $t_i = T, i=1,\\ldots,N$ to implement caches with the same statistical properties as FIFO and LRU caches.\nHowever, previous approaches require precomputing the cache characteristic time $T$.\nBy using the online dual algorithm developed in Section~\\ref{sec:dual} we are able to implement these policies with no a priori information of $T$.\nWe do so by implementing non-reset and reset TTL caches, with the timer parameters for all files\nset as $t_i = 1\/\\alpha$, where $\\alpha$ denotes the dual variable and is updated according to~\\eqref{eq:dual_sol}.\n\nFor the proportionally fair policy, timer parameters are set to\n\\[t_i = \\frac{-1}{\\lambda_i}\\log{(1 - \\frac{\\lambda_i}{\\alpha})},\\]\nand for the max-min fair policy we set the timers as\n\\[t_i = \\frac{-1}{\\lambda_i}\\log{(1 - \\frac{1}{\\alpha})}.\\]\nWe implement the proportionally fair and max-min fair policies as reset TTL caches.\n\n\nIn the experiments to follow, we consider a cache with the expected number of files in the cache set to $B=1000$. Requests arrive for ${N = 10^4}$ files according to a Poisson process with aggregate rate one. File popularities follow a Zipf distribution with parameter ${s=0.8}$,~\\emph{i.e.}~${\\lambda_i = 1\/i^s}$. In computing the timer parameters we use estimated values for the file request rates as explained in Section~\\ref{sec:estimate}.\n\nFigure~\\ref{fig:dual} compares the hit probabilities achieved by our online dual algorithm with those computed numerically for the four policies explained above.\nIt is clear that the online algorithms yield the exact hit probabilities for the FIFO, LRU and max-min fair policies. For the proportionally fair policy however, the\nsimulated hit probabilities do not exactly match numerically computed values. This is due to an error in estimating $\\lambda_i, i=1,\\ldots, N$. 
Note that we use a simple estimator\nhere that is unbiased for $1\/\\lambda_i$ but biased for $\\lambda_i$. It is clear from the above equations that computing timer parameters for the max-min fair policy\nonly requires estimates of $1\/\\lambda_i$, and hence the results are accurate. The proportionally fair policy, on the other hand, requires estimating $\\lambda_i$ as well;\nhence, using a biased estimate of $\\lambda_i$ introduces some error.\n\nTo confirm the above reasoning, we also simulate the proportionally fair policy assuming perfect knowledge of the request rates.\nFigure~\\ref{fig:prop_exact} shows that in this case simulation results exactly match the numerical values.\n\n\\begin{figure}[h]\n\\centering\n \\begin{subfigure}[b]{0.50\\linewidth}\n \t\\centering\\includegraphics[scale=0.20]{prop_dual_hit.eps}\n \t\\caption{\\label{fig:prop_exact}}\n \\end{subfigure}%\n \\begin{subfigure}[b]{0.50\\linewidth}\n \t\\centering\\includegraphics[scale=0.20]{prop_pd_hit_est.eps}\n \t\\caption{\\label{fig:prop_pd}}\n \\end{subfigure}\n\\vspace{-0.25cm}\n \\caption{Proportionally fair policy implemented using the (a) dual algorithm with exact knowledge of $\\lambda_i$s, and (b) primal-dual algorithm with ${\\delta_m(t_i, \\alpha) = \\lambda_i}$ and ${\\delta_h(t_i, \\alpha) = \\alpha - \\lambda_i}$, with approximate $\\lambda_i$ values.}\n \\label{fig:prop_fair}\n\\end{figure}\n\nWe can also use the primal-dual algorithm to implement the proportionally fair policy.\nHere, we implement this policy using the update rules in~\\eqref{eq:prop_pd}, and estimated values for the request rates.\nFigure~\\ref{fig:prop_pd} shows that, unlike the dual approach, the simulation results match the numerical values.\nThis example demonstrates how one algorithm may be more desirable than others in implementing a specific policy.\n\nThe algorithms explained in Section~\\ref{sec:online} are proven to be globally and asymptotically stable, and converge to the optimal 
solution.\nFigure~\\ref{fig:lru_dual_var} shows the convergence of the dual variable for the LRU policy.\nThe red line in this figure shows $1\/T=6.8\\times 10^{-4}$ where $T$ is the characteristic time of the LRU cache computed according to the discussion in Section~\\ref{sec:reverse}.\nAlso, Figure~\\ref{fig:lru_cache_size} shows how the number of contents in the cache is centered around the capacity $B$.\nThe probability density and complementary cumulative distribution function (CCDF) for the number of files in cache are shown in Figure~\\ref{fig:cs}.\nThe probability of violating the capacity $B$ by more than $10\\%$ is less than $2.5\\times 10^{-4}$. For larger systems, \\emph{i.e.}\\ for large $B$ and $N$, the probability of violating the \ntarget cache capacity becomes infinitesimally small; see the discussion in Section~\\ref{sec:violation}. This is what we also observe in our simulations.\nSimilar behavior in the convergence of the dual variable and cache size is observed in implementing the other policies as well.\n\n\\begin{figure}[h]\n\\centering\n \\begin{subfigure}[b]{0.5\\linewidth}\n \t\\centering\\includegraphics[scale=0.21]{lru_dual_var.eps}\n \t\\caption{\\label{fig:lru_dual_var}}\n \\end{subfigure}%\n \\begin{subfigure}[b]{0.5\\linewidth}\n \t\\centering\\includegraphics[scale=0.21]{cs_conv.eps}\n \t\\caption{\\label{fig:lru_cache_size}}\n \\end{subfigure}\n\\vspace{-0.25cm}\n \\caption{Convergence and stability of dual algorithm for the utility function representing LRU policy.}\n \\label{fig:lru_dual}\n\\end{figure}\n\n\\begin{figure}[h]\n\\centering\n \\begin{subfigure}[b]{0.5\\linewidth}\n \t\\includegraphics[scale=0.21]{cs_distr.eps}\n \\end{subfigure}%\n \\begin{subfigure}[b]{0.5\\linewidth}\n \t\\includegraphics[scale=0.21]{cs_ccdf.eps}\n \\end{subfigure}\n\\vspace{-0.25cm}\n \\caption{Cache size distribution and CCDF from dual algorithm with the utility function representing LRU policy.}\n 
\\label{fig:cs}\n\\end{figure}\n\n\n\\section{Final Remarks}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Appendix}\n\nThis appendix contains a description of the methodology used to solve for treatment strategies in chain graphs, the methodology used to train a Q-learner on the Karate Club network, and a brief description of community detection techniques that were used on the Karate Club network.\n\n\\subsection{Solving for treatment strategies in the chain graph via linear program}\n\\label{app:chain_lp}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.33\\columnwidth]{images\/chain_bases.png}\n \\caption{Some risk profiles in a chain graph with 11 nodes for different preemptive treatment strategies. Treating node $i$ creates zero risk for individual $i$ and lowers the risk for neighbors as well.}\n \\label{fig:chain_graph_bases}\n\\end{figure}\n\n\\subsubsection{Preemptive treatments}\nIn the preemptive setting, with only a single treatment, there are a limited number of possible interventions, corresponding to treating each of the members of the graph. \nEach of those interventions has a corresponding risk profile (Figure~\\ref{fig:chain_graph_bases}).\n\nTo find the strategy that minimizes average disease burden, it is sufficient to simply enumerate them and find the lowest.\n\nTo see if there is a distribution over strategies that gives a uniform risk profile, we try to find a convex combination of risk profiles that minimizes average risk and is subject to the constraints that all of the risks are equal. These are all linear constraints, so the problem is a linear program. 
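As an illustration of the uniform-risk program (with made-up risk numbers for a 3-node chain, not the simulated profiles), the linear program can be set up as follows:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical risk matrix for a 3-node chain: entry (i, j) is node i's
# infection risk when node j receives the single preemptive treatment.
R = np.array([[0.0, 0.3, 0.6],
              [0.3, 0.0, 0.3],
              [0.6, 0.3, 0.0]])
n = R.shape[1]
c = R.mean(axis=0)                      # average risk of each pure strategy
A_eq = np.vstack([R[1:] - R[0],         # equalize risks: r_i(p) = r_0(p)
                  np.ones((1, n))])     # mixture weights sum to one
b_eq = np.append(np.zeros(n - 1), 1.0)
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * n)
# For these illustrative numbers the unique equal-risk mixture treats the
# two endpoints equally: res.x is approximately [0.5, 0, 0.5], giving every
# node risk 0.3.
```

If the equality constraints are infeasible for a given risk matrix, `res.success` is false, indicating that no distribution over strategies yields a uniform risk profile.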
We use scipy's {\\tt optimize.linprog} function as a solver.\n\n\\subsubsection{1-step precision treatments}\nWe also consider precision treatments after 1 step - i.e., waiting to see which patient is infected and responding with a treatment at that point.\n\nFor a chain graph with N members, we are now looking for N different reactive allocations, with each allocation being a convex combination of treatment risk profiles, similar to above, but with the initial patient already infected.\n\nThis problem has a similar form to the one above, but with N convex combination constraints instead of 1, and is also a linear program.\n\nWaiting for longer than one step before acting was not considered, though it could be a reasonable strategy as well.\n\n\n\\subsection{Training a DQN network on the karate-club graph}\n\\label{app:training_dqn}\n\nA DQN agent was trained using dopamine's experiment runner.\nThe reward function at each step was the negative number of newly sick nodes.\n\nThe observation vector at each step consisted of the one-hot encoded health states.\n\nThe graph structure was not passed to the agent explicitly, but it was given the chance to learn it by observing the simulations.\n\nHyperparameters of the agent were optimized using a black-box hyperparameter tuning system.\n\nThe ranges of settings explored were:\n\\begin{itemize}\n \\item gamma: [0, 0.99]\n \\item hidden layer size: [5, 500]\n \\item learning rate: [$10^{-4}$, $10^0$]\n \\item num iterations: [$10^2$, $10^5$]\n\\end{itemize}\nThe agent's stack size was set to 1; max steps per episode was 20, and the training steps per iteration was 500.
All other arguments were left at default.\n\nThe best settings found by the hyperparameter tuner after 150 trials were:\n\\begin{itemize}\n \\item gamma: 0.70\n \\item hidden layer size: 106\n \\item learning rate: 0.95\n \\item num iterations: 2543\n\\end{itemize}\n\n\\subsection{Community detection in the karate-club graph}\n\\label{app:community_dection_karate}\nWe use networkx's implementation of the Girvan-Newman algorithm for community detection and use the highest level in the hierarchy, creating the two communities shown in Figure~\\ref{fig:karate_partition}. \n\\begin{figure}\n \\centering\n \\includegraphics[width=0.5\\columnwidth]{karate_club_partition.png}\n \\caption{Partitioning of the Karate club graph into 2 communities using the Girvan-Newman algorithm}\n \\label{fig:karate_partition}\n\\end{figure}\n\\section{Background and Related Work}\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/epidemic_progression.png}\n \\caption{An illustration of the development of an epidemic on a barbell graph with a central individual. Here, blue nodes are in the `Susceptible' state, red nodes are in the `Infected' state, and purple nodes are in the `Recovered' state. `Recovered' is an absorbing state. After t=4, no more transitions take place.}\n \\label{fig:barbell_evolution}\n\\end{figure}\n\nMathematical models for studying infectious disease spread have a long history~\\citep[e.g.,][]{longini1978optimization, hethcote1978immunization, patel2005finding, tennenbaum2008simple}, including explicit treatment of network structures~\\citep{newman2002spread, valente2012network, newman2018networks}, and evolving control policies~\\citep{sharomi2017optimal}. \n\n\\citet{keeling2012optimal} used analytical models to illustrate a tension between optimal and equitable distributions of vaccines in a simple case of two non-interacting (or barely interacting) populations. 
\\citet{yi2015fairness} study tradeoffs between fairness and effectiveness when considering different possible prioritization orderings of who receives vaccines in the case of limited resources. \\citet{salathe2010dynamics} discuss the importance of ``intercommunity-bridges'', individuals who have contact with multiple communities, in order to stem the spread of disease.\n\nEthical criteria for prioritizing individuals for treatment based on considerations like quality-adjusted life years, vulnerability, need, productivity, ability to treat others, and lottery have been discussed extensively~\\citep[e.g.,][]{emanuel2006should, persad2009principles, buccieri2013ethical, saunders2018equality}, but these works generally deal with the problem of a general order of prioritization rather than precision treatments and do not consider explicit social network structures.\n\nTo the best of our knowledge, the problem of optimally allocating vaccines within a social network in a step-by-step manner as the disease spreads has not been explicitly studied from either an effectiveness or a fairness view. The turn-based precision version of the problem is closer to learning to play a strategy game like chess or go, which has received a lot of attention in the reinforcement learning community~\\citep[e.g.,][]{atari, silver2017mastering}. \n\n\\subsection{Fairness in networks} \n\\label{sec:fairness_in_networks}\n\nMany recently proposed measurements of fairness focus either on group fairness~\\citep{hardt2016equality} or individual fairness~\\citep{dwork2012fairness}. Networks serve as an interesting setting because of the way that they can richly reproduce many societal structures.\n\n{\\bf Communities:} Social networks tend to organize in community structures~\\citep{girvan2002community, faust2005using}. 
Membership in these communities tends to be much more nuanced than strictly binary or categorical, with individuals potentially belonging to many communities, possibly with differing degrees of identification~\\citep{palla2005uncovering}, and with communities interacting at different scales or resolutions~\\citep{ronhovde2009multiresolution}. Using measures like the overall burden to a community, rather than the probability of disease conditioned on being from a community, directly links fairness measures to group well-being. \n\n{\\bf Centrality:} Individuals in different positions in the graph may play different roles and be treated differently. The centrality of a node in a network is a measure of its influence on the network and can be characterized in a number of different ways~\\citep{lu2016vital}. Three important measures are {\\em degree centrality}: the number of immediate neighbors; {\\em betweenness-centrality}: the number of shortest paths that pass through the node; and {\\em eigenvector centrality}: which uses the dominant eigenvector of the adjacency matrix.\n\nEven with the same set of actors, an individual's characterization in the network (both in terms of community and centrality) may be very different depending on how the edges in the network are defined. For example, in a workplace, the communities formed by who eats lunch together may be different from those formed by who attends meetings together. Thus the centrality and community memberships of an individual should not be thought of as unique but depend crucially on the context being analyzed.\n\n\\section{Formalizing the precision contagion control problem}\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.9\\columnwidth]{images\/wait_and_see.png}\n \\caption{Network structures to illustrate some of the complexity of the precision control task. 
{\\bf Case 1 - greedy suboptimal:} If one treatment is available per time-step, a greedy agent would prefer to treat node A whereas an agent that plans ahead would treat B then C. {\\bf Case 2 - wait and see:} If disease transfer occurs with $\\tau = 0.5$ and only one treatment is available overall, an agent minimizing expected sick days should wait to see if the disease transfers to each neighbor before deciding which side to treat.}\n \\label{fig:greedy_suboptimal}\n\\end{figure}\nWe pose the precision contagion control problem in this section. In defining the problem, we build on work on discrete-time compartmental modeling of infectious disease \\cite{brauer2010discrete,hernandez2015discrete,liu2015effect,allen1994some}.\n\nA population of individuals, $V$, is connected by edges, $E$, forming a social network $N = (V, E)$. Individuals can be in one of three health states $\\{S, I, R\\}$ for susceptible, infected, recovered. The health of the network is $H \\in \\{S, I, R\\}^{|V|}$. Health evolves in discrete time steps. At time $t_0$ an initial set of nodes $V_0$ are infected. At every subsequent time step, disease spreads stochastically from infected to susceptible neighbors with probability:\n\n\\begin{equation}\np_{S \\rightarrow I} = 1-(1-\\tau)^{\\#I(N)},\n\\end{equation}\nwhere $\\tau \\in [0, 1]$ is the probability of transmission and $\\#I(N)$ is the number of infected neighbors.\n\nThe recovery process does not depend on the neighborhood:\n\\begin{equation}\np_{I \\rightarrow R} = \\rho,\n\\end{equation}\n\nwhere $\\rho \\in [0, 1]$ is the probability of recovery. \n\nAn agent allocates $N_t$ treatments at each step. Treatments effect a direct transition to the R state, and have no effect on individuals in states $\\{I, R\\}$ (i.e., treatments are {\\em preventative}). 
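The two transition rules above can be sketched as a single discrete-time update (a minimal illustration; the function and data-structure names are ours, not from the paper's released code):

```python
import random

def step(neighbors, health, tau, rho, rng=random.Random(0)):
    """One discrete time step of the network S/I/R dynamics.

    neighbors: dict mapping node -> list of adjacent nodes
    health:    dict mapping node -> 'S' | 'I' | 'R'
    tau:       per-contact transmission probability
    rho:       per-step recovery probability
    """
    new_health = dict(health)
    for v, state in health.items():
        if state == 'S':
            # p(S -> I) = 1 - (1 - tau)^(number of infected neighbors)
            n_inf = sum(1 for u in neighbors[v] if health[u] == 'I')
            if rng.random() < 1 - (1 - tau) ** n_inf:
                new_health[v] = 'I'
        elif state == 'I':
            # p(I -> R) = rho, independent of the neighborhood
            if rng.random() < rho:
                new_health[v] = 'R'
    return new_health
```

For instance, with $\tau = 1$ and $\rho = 1$ on a three-node chain, an infected end node recovers while its susceptible neighbor becomes infected in the same step.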
Figure~\\ref{fig:barbell_evolution} illustrates the disease being transmitted and individuals recovering in a simple network with no treatments being allocated.\n\nThe precision contagion control problem consists of learning a treatment policy $\\pi(N, \\tau, \\rho, N_t): H \\rightarrow (V \\cup \\{\\varnothing\\})^{N_t}$, i.e., finding the best next individuals to treat based on the current health state of the network. There is always an option to not use an available treatment, which we denote by allocating that treatment to a non-existent individual, $\\varnothing$.\n\nIn public health, uncertainties in the network structure as well as the logistical difficulties in implementing precision policies make it impractical to optimize precision interventions in the way that we describe here. In this work, we consider fairness implications of this idealized precision setting, which may become feasible as health IT infrastructure makes it possible to target health interventions on the individual level. \n\nFinding optimal strategies for precision contagion control is not easy. The following properties of the problem, which we demonstrate with example network structures, highlight some of the complexity of the task. \n\n{\\bf Greedy is not optimal:} A greedy policy that minimizes expected total disease burden at every step can be sub-optimal (Figure~\\ref{fig:greedy_suboptimal}, Case 1).\n\n{\\bf Act now or wait?:} Strategies that wait to see how the disease progresses for some number of steps before allocating a treatment can be more effective than treating immediately (Figure~\\ref{fig:greedy_suboptimal}, Case 2). \n\nFrom a fairness perspective, one might be interested in finding treatment strategies that equalize outcomes in the population.\n\n{\\bf Uniform allocation may not produce uniform outcomes:} Some individuals are more or less at risk before intervention due to their position in the social network. 
Uniformly allocating treatment does not always create uniform outcomes. This will be explored in more detail in Section~\\ref{sec:chain}.\n\n\n\\section{Conclusion}\nThis is work in progress, but we believe that the precision contagion treatment problem, which we intend to contribute to the collection of simulated environments in the open-source {\\tt ML-fairness-gym}~\\citep{fairnessgym}, contains interesting questions of how to find fair allocation policies when individuals are highly intertwined. \nIt emphasizes the importance of {\\em fairness in outcomes} rather than a narrower goal of fairness in, e.g., classification accuracy (there is no classification or state estimation task in this problem -- just planning).\nIt also highlights how individuals with ``intermediate risk'' can end up worst off when policies are put in place to help those most at risk.\nDespite the idealized nature of the problem, which assumes fully observed graph structures and highly precise treatment delivery, we hope that an increased understanding of the fairness implications of the underlying optimization problem with network effects can inform work in healthcare more broadly.\n\\section{Problem Statement}\nLet $K = \\{1, ..., N_p\\}$ be indices that represent individual members of a population with $N_p$ members and let $N_t$ be the number of treatments an agent can give out. Let $A \\in K^{N_t}$ be an allocation of treatments to individuals in the population. Finally, let $H = \\{S,I,R\\}^{N_p}$ be a health state, and let $\\tau$ be a schedule according to which treatments are given out -- perhaps only at $t=0$, at every timestep, or something else.\n\nThe agent's task is to find a probabilistic \\emph{allocation policy} $f: H \\rightarrow P(A)$ that maps health states to a distribution over allocations. This function will be applied according to the schedule $\\tau$. 
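As an illustration, a probabilistic allocation policy $f: H \rightarrow P(A)$ can be represented as a table from health states to weighted allocations (a hypothetical encoding of ours, not the paper's API: health states as tuples, allocations as tuples of treated indices, with None standing in for the option of withholding a treatment):

```python
import random

def sample_allocation(policy, health, rng=random.Random(0)):
    """policy maps a health-state tuple to a list of
    (allocation, probability) pairs; returns one sampled allocation."""
    allocations, probs = zip(*policy[health])
    return rng.choices(allocations, weights=probs, k=1)[0]

# Population of 3 with one treatment (N_t = 1); None withholds it.
policy = {
    ('I', 'S', 'S'): [((1,), 0.75), ((2,), 0.25)],
    ('S', 'S', 'S'): [((None,), 1.0)],
}
```

Here sampling for the all-susceptible state always withholds the treatment, while an infection at node 0 triggers a randomized choice between the two neighbors.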
One way to go about this is to find the policy by solving an optimization problem that encodes a goal related to reducing infection.\n\nA few more definitions are required before we pose the optimization problems. Let $x_i | t, f \\in \\{0, 1\\}$ be a random variable that indicates whether individual $i \\in K$ ever enters the \\texttt{Infected} state by time $t$ given allocation policy $f$.\n\nNow consider an agent whose only concern is to maximize utility -- that is, to minimize the number of individuals who get sick once the infectious disease has played out. They can find this \\emph{maximum utility} policy by solving the following optimization problem:\n\\begin{equation}\n\\begin{aligned}\nf_{MU} = \\argmin_{f} \\quad & \\frac{1}{N_p} \\sum\\limits_{i=1}^{N_p} P(x_i = 1 | t=\\infty, f) \\\\\n\\end{aligned}\n\\end{equation}\n\nConsider now an agent who is concerned with a particular notion of fairness -- each member of the population should have the same probability of being infected. An agent can find this \\emph{naively equitable} policy by adding a constraint to the problem above:\n\\begin{equation}\n\\begin{aligned}\nf_{NE} = \\argmin_{f} \\quad & \\frac{1}{N_p} \\sum\\limits_{i=1}^{N_p} P(x_i = 1 | t=\\infty, f) \\\\\n\\textrm{s.t.} \\quad & P(x_i = 1 | t=\\infty, f) = P(x_j = 1 | t=\\infty, f) \\: \\forall \\: i, j \\in K\n\\end{aligned}\n\\end{equation}\n\nWhy is the policy naive? It could well be the case that there is a policy that \\emph{dominates} the naively equitable policy by offering an improvement for every individual. 
That is, an \\emph{undominated equitable} policy is a naively equitable policy for which:\n\\begin{equation}\n\\nexists f' \\; \\textrm{s.t.} \\; P(x_i = 1 | t=\\infty, f') < P(x_i = 1 | t=\\infty, f) \\: \\forall \\: i \\in K\n\\end{equation}\n\\section{Introduction}\n\nThe field of public health has long understood that different medical and health problems are not always easily decomposable into individual health measures, but can benefit from a networked analysis -- viewing health as a population-level process.\nThis type of analysis has long been applied to the spread of infectious diseases, and more recently to other health factors like obesity~\\citep{christakis2007spread}, substance abuse~\\citep{rosenquist2010spread}, happiness~\\citep{fowler2008dynamic}, and even misinformation about health topics~\\citep{fernandez2015health}.\n\nIn network analyses, determining who benefits from an intervention is not as simple as observing who is directly affected by the intervention (e.g., who receives a vaccine), since neighbors and neighbors of neighbors are also affected indirectly. With this in mind, we can still investigate how different health interventions differentially benefit individuals and communities within a larger population. These tradeoffs are important to study because they highlight how pursuing a coarse metric of success of an intervention averaged over an entire population (e.g., minimizing the total expected number of sick days) can, under certain circumstances, leave parts of the population under-served. \n\nWe study a stylized version of the public health task of epidemic control that highlights the networked setting. More specifically, we study the problem of optimally allocating vaccines within a social network in a step-by-step manner as the disease spreads. We assume that the agents allocating vaccines can observe the disease states of individuals within the population and the underlying contact network structure. 
We call this the {\\em precision disease contagion control} problem.\n\nThis work is relevant to fairness in machine learning for healthcare because it addresses a class of optimization problems that appears repeatedly in public health. We study this problem in simulation in order to illuminate core dynamics \\cite{epstein2008} -- with the rise of learned policies that attempt to optimize health outcomes on real data, the tensions at the root of the underlying optimization problem become increasingly relevant.\n\nWe propose this environment as a tool for researchers to explore the dynamics of disease under different contact and initial infection conditions, and to evaluate the relative performance of allocation approaches, learned or otherwise, that they may be interested in exploring.\n\nThis work is exploratory in nature. We present several different contexts in which epidemics can spread and explore the efficacy and fairness of different treatment scenarios within those contexts.\n\n\\subsection{Contributions}\n\\begin{itemize}\n \\item We pose a version of the precision disease contagion control problem and discuss heuristic and learned policies. \n \\item We characterize some of the tradeoffs between efficiency and fairness in simple social network graphs.\n \\item We open-source all of our code in an extensible manner to provide for reproduction and extensions of this work.\n\\end{itemize}\n\n\\section{Related Work}\nIn this section, we provide a brief overview of infectious disease modeling.\n\n\\subsection{Infectious Disease Epidemiology}\n\nInfectious disease epidemiology studies infectious disease at the population level. Researchers in this field rely on simple models where human contact is associated with some probability of communicating disease. 
Compartmental models, which are central to the field, represent disease dynamics in populations by specifying a small number of mutually exclusive states that members of a population can occupy and some kind of mechanism by which disease propagates \\cite{hethcote2000mathematics}.\n\nIn this paper, we study the Susceptible-Exposed-Infected-Recovered (SEIR) subclass of compartmental models, where individuals exist in one of four mutually exclusive states: \\texttt{Susceptible}, \\texttt{Exposed}, \\texttt{Infected}, and \\texttt{Recovered}. \\texttt{Susceptible} people are healthy but vulnerable to infection, \\texttt{Exposed} people are not yet sick but will become so after time passes, \\texttt{Infected} people are sick and spreading the disease, and \\texttt{Recovered} people were once sick but no longer are. Note that not all stages need be present -- for example, the SI model only includes the \\texttt{Susceptible} and \\texttt{Infected} states.\n\nCompartmental models are not fully specified by the states that people can occupy. Additional considerations include the representation of time, which is typically treated as continuous, but the discrete case is actively studied as well (such as in \\cite{brauer2010discrete,hernandez2015discrete,liu2015effect,allen1994some}). \n\nIn addition, there is the question of how people come into contact. Models typically assume `full mixing,' whereby all members of a population are equally likely to come into contact with all other members of a population. However, there is a rich vein of work that assumes instead that contact proceeds along some explicitly-specified social network (for a detailed treatment, see \\cite{newman2002spread, newman2018networks}).\n\n\\subsection{Social welfare orderings and collective utility functions}\n\n\\section{Experiments}\n\nWe implement the simulated dynamics of this environment using the ML Fairness Gym~\\citep{fairnessgym}. 
Centrality is computed with the networkx package's centrality measures. The code for the environment, agents, and experiments will be published open-source and is intended to be contributed to the ML Fairness Gym collection of examples.\n\n\\subsection{Policies for contagion control}\nThere are many possible policies for contagion control. In these experiments we consider:\n\n{\\bf Random selection}: Treatments are allocated uniformly at random to individuals in the population. \n\n{\\bf Direct optimization:} Some networks are simple enough that policies can be computed by direct optimization of expected number of sick days.\n\n{\\bf Central-first heuristic}: Central-first heuristics treat more {\\em central} individuals before less-central individuals. Centrality is discussed in more detail in Section~\\ref{sec:fairness_in_networks}.\n\n{\\bf Deep Q-Network}: In cases where we cannot directly optimize over policies because the graph is too complex, we train a Deep Q-Network~\\citep{mnih2015human} with a single hidden layer using the Dopamine library~\\citep{castro18dopamine} for reinforcement learning. Details of the training can be found in Appendix~\\ref{app:training_dqn}.\n\n\\subsection{Barbell Graph}\n\nWe start by considering disease transmission in a barbell-like graph where a central, initially-infected node is connected to two cliques of different sizes through intermediaries. Figure \\ref{fig:barbell_evolution} shows the natural progression of disease without intervention.\n\nThe barbell setting was previously considered by~\\citet{keeling2012optimal} as a clear example of conflict between minimizing disease in the total population and treating communities equally.\n\nIf a single vaccine is available, the optimal policy is to treat a node on the side of the larger community to block the spread of disease. However, this policy is clearly unfair to the smaller community. 
If multiple vaccines are available and it is impossible to completely block the spread to one community, \\citet{keeling2012optimal} showed that the optimal allocation will change depending on the amount of vaccine available, but that for many values it would strongly favor one community or the other.\n\n\\subsection{Chain graph}\n\\label{sec:chain}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.45\\columnwidth]{images\/equalizing.png}\n \\includegraphics[width=0.45\\columnwidth]{images\/chain.png}\n \\caption{\n (Left) A visualization of the ``Precision: Equalize'' strategy. Each row represents a possible initial infected patient and the agent's conditional distribution of who to treat in response.\n (Right) Probability of infection of individuals in a chain graph using different treatment strategies. It is possible to ensure an equal probability of infection in the precision setting, but it does not have the lowest overall disease burden.}\n \\label{fig:chain_treatments}\n\\end{figure}\n\nIn contrast to the barbell graph, the chain graph (where vertices are connected sequentially) has no clear group structure, but we can still consider the likelihood of infection for individuals in different locations in the graph. For experiments we use a chain graph of size 11 with $\\tau=0.75, \\rho=1$ and a single treatment available.\n\nIf the initial infected patient is a uniformly chosen node in the network, the expected number of sick days with no intervention is higher for central nodes and lower at the two ends (Figure~\\ref{fig:chain_treatments} Right -- No treatment). A random allocation strategy recapitulates this shape. \n\n{\\bf Preemptive treatment:} Treatment can be allocated preemptively, {\\em before} the first patient is infected. The preemptive strategy that optimizes average utility is to treat the middle node in the chain. 
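A rough Monte Carlo sketch of this preemptive middle-node policy on the 11-node chain (illustrative code of ours; the paper computes these burdens exactly via a linear program in its appendix):

```python
import random

def simulate_chain(n=11, tau=0.75, treated=5, trials=5000, seed=0):
    """Estimate each node's probability of ever being infected when the
    middle node is treated preemptively and rho = 1 (recover in one step)."""
    rng = random.Random(seed)
    counts = [0] * n
    for _ in range(trials):
        p0 = rng.randrange(n)              # uniformly chosen initial patient
        state = ['R' if v == treated else 'S' for v in range(n)]
        sick = set()
        if state[p0] == 'S':
            state[p0] = 'I'
            sick.add(p0)
        while 'I' in state:
            nxt = list(state)
            for v in range(n):
                if state[v] == 'S':
                    # chain neighbors are v - 1 and v + 1
                    k = sum(state[u] == 'I'
                            for u in (v - 1, v + 1) if 0 <= u < n)
                    if k and rng.random() < 1 - (1 - tau) ** k:
                        nxt[v] = 'I'
                        sick.add(v)
                elif state[v] == 'I':
                    nxt[v] = 'R'           # rho = 1: recover after one step
            state = nxt
        for v in sick:
            counts[v] += 1
    return [c / trials for c in counts]
```

The treated middle node is never infected, while every other node retains at least the baseline risk of being the initial patient.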
Interestingly, with only one treatment available, there may not be a preemptive strategy (including randomized strategies) that equalizes the expected disease burden for all individuals. We verify this for our chain graph by posing the problem as a constrained linear program (Appendix~\\ref{app:chain_lp}) and determining that there is {\\em no feasible solution}. This is in contrast to equalized-opportunity classifiers in the classification setting~\\citep{hardt2016equality}, which can always be constructed through randomization.\n\n{\\bf Precision treatment:} In the precision setting, we choose the individual to receive treatment {\\em conditional} on the observation of which individual becomes sick first. In this setting, the strategy that optimizes average utility is to treat the individual beside the sick patient, choosing the side with more people. Finding a precision policy that equalizes disease burden can be formulated as a constrained linear program (Appendix~\\ref{app:chain_lp}). Despite there being no preemptive treatment strategy that equalizes disease burden, a precision strategy {\\em does exist} (Figure~\\ref{fig:chain_treatments} Left), though it has a higher average disease burden than the unconstrained policy. Figure \\ref{fig:chain_treatments} compares how the different treatment policies distribute disease burden over the population in the chain graph. The random policy is dominated by the precision equalizing strategy, i.e., every individual in the network fares at least as well under the precision equalizing strategy as under the random strategy. \n\n\\subsection{Scale-free network}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/scale-free.png}\n \\caption{Disease burden by centrality (using three different measures of centrality) for the members of the synthetic scale-free network~\\citep{holme2002growing}. 
Moderately central individuals fare the worst under centrality-based treatment.}\n \\label{fig:general_graph}\n\\end{figure}\n\nMany real social networks have degree distributions that follow a power-law (or similar) distribution, meaning that a small number of nodes have very high connectivity.\n\nIn this setting, how will individuals with low centrality fare if high-centrality individuals are prioritized for treatment? They may be better off than they would have been under random allocation if centrality-based allocation methods control the epidemic more effectively; they may be worse off because they do not receive treatment themselves.\n\nWe simulate this scenario with a synthetic scale-free network~\\citep{holme2002growing} with 100 nodes, allowing an agent to allocate one treatment per turn. \nInfection spreads along contacts with $\\tau=0.25$ and individuals recover with $\\rho=0.01$.\n\nFigure \\ref{fig:general_graph} was generated by running 1000 simulations for 20 steps under three different centrality-based treatment allocation policies. Note that moderately-central individuals have the highest disease burden under centrality-based treatment because they do not enjoy the low risk that comes with having low centrality or the benefit offered to highly-central individuals by treatment.\n\n\\subsection{Karate club graph}\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.95\\columnwidth]{images\/karate_club_centrality.png}\n \n \\caption{Disease burden by centrality (using three different measures of centrality) for each of the 34 members of the Karate club graph~\\citep{zachary1977information}. With natural progression (green circles), more central nodes tend to have a higher expected number of sick days. Central-first treatment heuristics bring down everyone's expected disease burden, with the highest disease burden for individuals with intermediate centrality. 
The Deep Q-network does not succeed in finding strategies that beat heuristics.}\n \\label{fig:karate_club_centrality}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.4\\columnwidth]{images\/karate_club_community.png}\n \\caption{Violin plots showing disease burden in two communities in the karate club graph, for each of the treatment strategies. The distributions pictured are over 1000 simulation runs. Black lines indicate median runs of the simulation.}\n \\label{fig:karate_club_community}\n\\end{figure}\n\nBeyond the synthetic graphs discussed above, we also consider the small-scale Karate club graph~\\citep{zachary1977information}, which records the social interactions between 34 members of a karate club over three years, measuring the expected disease burden on individuals as a function of their centrality in the graph.\n\nSimulations were run 1000 times with parameters ($\\tau = \\rho = 0.5; N_{t} = 1$). Each simulation lasts 20 time-steps. Agents prioritized treatment by centrality (experiments were repeated with the same set of random seeds for each of the three centrality measures) or used a deep Q-network~\\citep{atari} (details in Appendix~\\ref{app:training_dqn}). Ties between individuals with equal centrality were broken randomly. In each run, the initial infected patient was chosen uniformly. \n\nWith the natural disease progression, more central individuals tend to be {\\em more at risk}, but once central-first treatment policies are put in place, the worst-off individuals are those with intermediate centrality (everyone's risk reduces somewhat). \n\nLooking at the expected total number of sick days in the entire population, the different central-first policies are all similar (No treatments: 58.01, Eigen-centrality: 25.56, Degree-centrality: 25.61; Betweenness-centrality: 25.17), but the individual burdens are different (Figure~\\ref{fig:karate_club_centrality}). 
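The three centrality rankings used by the central-first heuristics can be reproduced with networkx built-ins (a sketch; variable names are ours, and the deterministic sort below stands in for the random tie-breaking used in the experiments):

```python
import networkx as nx

G = nx.karate_club_graph()                  # the 34-member Zachary graph
degree = nx.degree_centrality(G)
between = nx.betweenness_centrality(G)
eigen = nx.eigenvector_centrality(G)

# A central-first heuristic treats nodes in order of decreasing centrality.
degree_order = sorted(G.nodes, key=lambda v: -degree[v])
```

Swapping `degree` for `between` or `eigen` in the sort key yields the other two prioritization orders.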
The Deep Q-network (DQN) did not succeed in learning a policy that outperformed centrality heuristics, suggesting that a different network structure or training approach should be tried.\n\nThe Karate club graph naturally splits into two communities based on how members of the club sided in a particular conflict that led to the dissolution of the club (for more history on this graph, see~\\citet{zachary1977information}). Figure~\\ref{fig:karate_club_community} shows how the disease burden is spread over the two communities (details on the community detection procedure are given in Appendix~\\ref{app:community_dection_karate}). On average, community 1 tends to fare a little bit better than community 2. This is true even with natural disease progression, suggesting that community 1 is somehow structured in a way that is less conducive to disease spread. \nThe DQN favors community 1 in an extreme way, but since its performance is generally poor, it is hard to draw strong conclusions from this.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n Upon exhausting all of its available fuel, the iron core of a massive star ($\\gtrsim$8$M_\\odot$) collapses, often giving rise to an extremely powerful, bright explosion known as a core-collapse supernova (CCSN). Under very specific circumstances, these CCSNe are also thought to be associated with another very powerful and luminous event known as a long-duration gamma-ray burst (LGRB). The circumstances that permit this extraordinary\npartnership are still not well understood, but it seems likely\nthat progenitors born in lower-metallicity environments (sub-solar) favour production of GRBs \\citep[see][]{2006AcA....56..333S,2006Natur.441..463F,2003A&A...406L..63F,2004MNRAS.352.1073T}.\n\n The most popular theoretical model interprets the LGRB as\n being caused by the core-collapse of a specific class of massive\n star. 
This fundamental theory, coined as the `Collapsar' model, was\n first conceived by \\citet{1993ApJ...405..273W} \\citep[also see][]{1999A&AS..138..499W}. The criteria for the creation of an LGRB\n resulting from the collapse of a massive star are that a black-hole\n must be formed upon core-collapse, either promptly or via fall-back\n of ejected material, and that an accretion disk is allowed to form\n around this black-hole, which acts as a `central engine' causing the\n release of collimated relativistic jets along the rotation axis of\n the black-hole\n \\citep{2006astro.ph.10276H,2008arXiv0801.4362Y,2008arXiv0803.3807T}. It\n is these relativistic jets that form the LGRB and the associated\n afterglow emission. These criteria are found to place \n very specific\n constraints on the type of progenitor star that can collapse to\n produce an LGRB. The core of\n the progenitor must be massive enough to produce a black-hole, \n it must be rotating rapidly enough to form an accretion disk, and the star\n needs to be stripped of\n its hydrogen envelope, so that the resulting relativistic jets are\n not inhibited from reaching the stellar surface. All of this points toward\n rapidly rotating Wolf-Rayet stars as viable progenitors \n \\citep{2008arXiv0804.0014D,2006ApJ...637..914W,2006ARA&A..44..507W}.\n\n Rapidly rotating O-stars, which mix up their core-processed material\n to produce WR stars through chemically homogeneous evolution, have been\nsuggested to \n satisfy the requirements of the collapsar model \\citep{1987A&A...178..159M,2006ASPC..353...63Y,2006A&A...460..199Y,2006ApJ...637..914W}.\n This model prevents the depletion of angular\n momentum due to mass-loss and allows a massive star to become a WR star\n directly, avoiding the supergiant phase. 
\\citet{2006A&A...460..199Y} have estimated that for a\n rapidly rotating star to be the progenitor of an LGRB, prior to collapse\n it must have a helium core mass greater than 10M$_{\\odot}$, that is, an\n initial mass greater than 25-30M$_{\\odot}$. \\citet{2006ApJ...637..914W}\n have modelled the WR mass-loss rates of \\citet{2005A&A...442..587V} and\n predict that the upper metallicity limit for forming LGRBs may be as high\n as 0.3Z$_{\\odot}$. Apart from the single-star progenitor model, a rapidly rotating massive star could lose its hydrogen envelope\n and still retain a large fraction of its initial angular momentum if a close binary companion were to strip it of the envelope\n rapidly enough for the progenitor to avoid spin-down\n \\citep{2007A&A...465L..29C,2007astro.ph..2652B,2007A&A...471L..29D}.\n\nWhatever the model for the progenitor of the LGRB, prior to collapse we require a very rapidly rotating, massive WR star. These constraints lead to the prediction that LGRBs favour a lower-metallicity environment and that CCSNe found in association with an LGRB will be classified as Type Ib or Type Ic, characterised by an absence of hydrogen in their spectra \n\\citep{2007ApJ...666.1024B,2003fthp.conf..171F,2003astro.ph..1006H}. To date, many LGRBs have revealed `bumps' in their afterglow emission, thought to be evidence of associated SNe \\citep[see][]{1999Natur.401..453B, 2000ApJ...536..185G, 2003A&A...406L..33D, 2005ApJ...624..880L}. A few of these associated SNe have been adequately spectroscopically typed; all have been found to be broad-lined Type Ic SNe, helping to confirm the theory that LGRBs are the result of the collapse of hydrogen-stripped massive stars \\citep{1998Natur.395..670G,2003ApJ...591L..17S,2003Natur.423..847H,2004ApJ...609..952Z}. 
\\citet{2008AJ....135.1136M} have found that the SNe Type Ic-BL associated with an LGRB inhabit a sample of host galaxies of significantly lower metallicity than the host galaxies of their Type Ic-BL counterparts with no associated observed LGRB. Interestingly, no SN Type Ic-BL without an associated observed LGRB has been found to inhabit the low-metallicity galaxy sample that the LGRB-associated SNe occupy.\n\n\\citet{2006Natur.441..463F} have found that LGRBs are preferentially\nfound in irregular galaxies which are \nsignificantly fainter, smaller and hence presumably of\nlower-metallicity than typical CCSNe hosts. \n\\citet{2005NewA...11..103S} make a similar point, stating that\nthe majority of LGRBs are found in extremely blue, sub-luminous\ngalaxies, similar to the Blue\nCompact Dwarf Galaxy population.\n\\citet{2006astro.ph..9208S} \nhas speculated that at primordial metallicities stars of up \nto 300M$_{\\odot}$ may form and, if they have low mass-loss, \nthey may end their lives still retaining much of their original \nmass. \\citet{2002ApJ...567..532H} predict that those stars\nwithin the mass range 140M$_{\\odot}$-260M$_{\\odot}$ \nmay produce pair-instability supernovae\n(PISNe), huge thermonuclear explosions with energies as high as\n$\\sim$10$^{53}$ ergs. Hence understanding SNe in low-metallicity \nenvironments locally could offer insights into high-z GRBs and\nearly Universe explosions. \nAn indication that these exotic PISN\nevents may not be beyond our present day reach is the recent SN 2006gy,\nwhich has been reported to have a peak luminosity three times greater\nthan any other supernova ever recorded, reaching a magnitude of $-$22\n\\citep{2006astro.ph.12617S}. 
Hypothesised as having had a Luminous\nBlue Variable as a progenitor with an initial mass of\n$\\sim$150M$_{\\odot}$, this has been suggested to be the first PISN\never detected \\citep[also\nsee][]{2007arXiv0708.1970L,2007ApJ...659L..13O,2007Natur.450..390W}.\nThe unusual SN 2006jc defines another class of peculiar low-metallicity\nevent, occurring spatially coincident with an outburst thought to be\nthat of an LBV just two years previously \\citep{2007Natur.447..829P}, \nwhich has also been postulated to be related to the PISN mechanism\n\\citep{2007Natur.450..390W}.\n\nTo date there \nhave been many surveys specifically designed to search for SNe and,\ndepending on the motivation of an individual survey, generally one of\ntwo different survey strategies has been employed. The first of these\nstrategies is the \\textit{pointed survey strategy}; here we have a\nfixed catalogue of galaxies which are all individually\nobserved with a cadence of a few days and scanned for fresh SNe. These\nsurveys include the highly successful Lick Observatory Supernova\nSearch (LOSS), which uses the fully robotic 0.75m KAIT telescope to\nscan a catalogue of 7\\,500-14\\,000 nearby galaxies ($z \\lesssim 0.04$), \naiming to observe each\nindividual galaxy with a cadence of 3-5 days\n\\citep{2001ASPC..246..121F}. 
With\nthe advent of relatively inexpensive CCD technology, the ability of\namateur astronomers to detect nearby SNe using this pointed survey\nstrategy has become substantial, and a large fraction of\nnearby SNe are now being discovered by this community\n\\citep{2006JAVSO..35...42G}.\nThe Southern inTermediate Redshift ESO\nSupernova Search (STRESS) produced a catalogue of $\\sim$43,000\n($0.05 < z < 0.6$) \ngalaxies within 21 fields observed with the ESO Wide-Field-Imager\nand searched for SNe in these hosts to determine \nrates and their dependency on host galaxy colour and \nredshift evolution \\citep{2008A&A...479...49B}.\n\nThe second strategy \nemployed by SN surveys is the \\textit{area survey strategy}; here an\narea of sky is surveyed repeatedly and image subtraction used to\nidentify transient events including SNe. The SN Legacy Survey uses\nthe CFHT MegaCam imager to image four one-square degree fields to\nsearch for SNe Type Ia with the motivation of improving the sampling\nof these SNe within the redshift range $0.2 < z < 1.0$\n\\citep{2005ASPC..339...60P, 2006A&A...447...31A}. The Equation of\nState SupErNova Cosmology Experiment (ESSENCE) uses the Mosaic Imager\non the Blanco 4m telescope to survey equatorial fields that have\npreferably been observed with other wide-field surveys, to discover SN\nType Ia within the redshift range $0.15 < z < 0.74$\n\\citep{2007ApJ...666..674M}. The Nearby SN Factory uses ultra-wide\nfield CCD mosaic images from the Near-Earth Asteroid Tracking (NEAT) and Palomar-Quest survey\nprograms to perform an area survey in a bid to find SN Type\nIa in the redshift range $0.03 < z < 0.08$\n\\citep{2002SPIE.4836...61A}. 
The SDSS-II Supernova Survey takes\nrepeated images of Stripe 82, a 300 square degree southern equatorial\nstrip of the sky, driven by the ambition of identifying and measuring\nthe lightcurves of SNe Type Ia in the intermediate redshift range\n$0.05 < z < 0.35$ \\citep{2008AJ....135..338F, 2008AJ....135..348S}. \nThe Texas Supernova Search \n\\citep{2005AAS...20717102Q, 2007AAS...21110505Y}\nis unusual in that it \nuses a small-aperture telescope (0.45m) with a very wide field (1.85 square\ndegrees) and is focussed on finding nearby SNe in wide-area searches. \nThis survey discovered SN 2006gy along with several of the brightest\nSNe ever found; we shall discuss this further in \nSec. \\ref{sect:disc}.\n\n\nThe main driver of \nSN surveys that employ the pointed survey strategy is simply to\nfind as many SNe as possible, regardless of type or characteristic. To\nensure that these surveys are as efficient as possible at finding\nCCSNe, the galaxy catalogues used by the surveys generally consist\nonly of the most massive galaxies with the greatest star-formation\nrates. As a result these galaxy catalogues\ntend to be heavily biased towards galaxies with high metallicity. This\noverarching high-metallicity bias has more than likely placed us in\nthe situation where the vast majority of CCSNe that have occurred in\nlow-metallicity environments (such as low-luminosity, dwarf, irregular\ngalaxies) have remained undetected. 
\\citet{2008ApJ...673..999P} have\nmatched the SAI supernova catalogue to the SDSS-DR4 catalogue of\nstar-forming galaxies with measured metallicities and it is clear that\nthe vast majority of the SNe considered have been detected in areas of\nrelatively high metallicity; SNe Type II occur in host galaxies\nwith mean metallicity \\mbox{$12+{\\rm \\log(O\/H)}=8.94\\pm0.04$} and SNe\nIb\/c at 9.06$\\pm$0.04.\n\nIt is therefore inherently interesting to search for SNe specifically in \nlow-metallicity environments, or at least without biasing the search\nto look at only high-metallicity regimes. \nIn this paper we shall discuss several methods to \nsearch for low-metallicity CCSN\nevents. First we consider compiling a catalogue of nearby,\nlow-metallicity galaxies from pre-existing catalogues \n(taking SDSS DR5 as our primary survey source)\nand using either a single 2.0m telescope or a\nnetwork of 2.0m telescopes to perform a pointed survey of \nlow-metallicity galaxies in the hope of detecting a few CCSNe. \nSecondly we consider\nusing a future all-sky transient survey such as the Panoramic Survey\nTelescope and Rapid Response System (Pan-STARRS) to perform a\nvolume-limited survey for SNe and estimate how many may be found\nin low-metallicity galaxies. 
\nA third and final method we consider is to use a future all-sky\ntransient survey, limited only by\nthe limiting magnitude of the survey, to search for all CCSNe\nincluding those low-metallicity events. While this paper is \nprimarily aimed at determining the numbers of low-metallicity events\nthat could be found, the latter calculation gives an estimate of \nthe total number of CCSNe that are likely to be harvested from these\nupcoming surveys.\n\n\\section{Creating Galaxy Catalogues for the Various Survey Strategies}\\label{sec_a}\n\n In order to produce catalogues of galaxies that we can use for the various survey strategies to be considered, we begin with the Sloan Digital Sky Survey (SDSS) \\citep{2007arXiv0707.3380A}. The SDSS Data\n Release 5 (DR5) provides photometric\n and spectroscopic data, within a wavelength range of 3500-9200\\,\\AA, for\n $\\sim$675\\,000 galaxies, over an area of 5\\,713 square degrees of the northern\n hemisphere out to a redshift\n of $z\\sim0.4$. In general we only want to detect relatively nearby CCSNe, as we want to spectroscopically type and follow these SNe with relative ease, so to this end we introduce a distance limit of $z=0.04$ to the SDSS spectroscopic catalogue. We have used\n the SDSS DR5 website\\footnote{SDSS DR5 website:\n {\\it http:\/\/www.sdss.org\/dr5}} to extract the 44\\,041 galaxies within\n $z=0.04$ along with data\n including the petrosian magnitudes ({\\it\n u, g, r, i} and {\\it z}), the galactic extinctions in each filter determined following \\citet{1998ApJ...500..525S}, the spectroscopic\n redshifts, the\n {\\it r}-band fibre magnitudes and the line intensities of H$\\alpha$, H$\\beta$,\n [OIII]$\\lambda5007$ and [NII]$\\lambda6584$ for each galaxy.\n \n Of these 44\\,041 galaxies we extract out two separate samples of galaxies. 
The first of these samples is classified as the high signal-to-noise sample, containing galaxies with an SDSS-defined line significance indicator `nSigma' $>$ 7$\\sigma$ for all of the four spectral lines mentioned previously (as advised by DR5). The second sample is classified as the low signal-to-noise sample, containing galaxies that have not been included in the high signal-to-noise sample but exhibit `nSigma' $>$ 5$\\sigma$ in both H$\\alpha$ and H$\\beta$. The high signal-to-noise sample contains 20\\,632 galaxies and the low signal-to-noise sample contains 8\\,703 galaxies.\n \n\\subsection{\\textbf{Removing AGN}}\\label{sec_aa}\n\n Eventually we aim to determine a star-formation rate (SFR) for each\n individual galaxy\n in our sample in order to then determine their core-collapse SN rates\n (CCSRs).\n As it is the young, massive, hot star population within each galaxy that\n is the dominant source of hydrogen ionising radiation, it is possible to use\n the H$\\alpha$ luminosity of these galaxies as a SFR indicator.\n \n A difficulty arises in that many galaxies within both the high signal-to-noise and the low signal-to-noise galaxy samples will host Active Galactic Nuclei\n (AGN), which will also contribute to the galaxy's H$\\alpha$\n luminosity. 
To remove these AGN-contaminated galaxies\n from the high signal-to-noise galaxy sample we use the following\n diagnostic line provided by \\cite{2003MNRAS.346.1055K} to discriminate between\n purely star-forming galaxies (SFGs) and galaxies that also host AGN:\n \n\\begin{equation}\n \\log\\left(\\frac{[\\rm{OIII}]\\lambda5007}{\\rm{H}\\beta}\\right)=\\frac{0.61}{\\log([\\rm{NII}]\\lambda6584\/\\rm{H}\\alpha)-0.05}+1.3\n \\label{kauff_equ}\n\\end{equation}\n\n\\begin{figure}[t]\n \\begin{center}\n \\includegraphics[totalheight=0.2\\textheight]{BPT_dia.ps}\n \\caption{\\textrm{[OIII]$\\lambda5007\/$H$\\beta$ vs\n [NII]$\\lambda6584\/$H$\\alpha$ plot of 20\\,632 galaxies in our SDSS DR5 high signal-to-noise galaxy sample.\n The diagnostic line from \\cite{2003MNRAS.346.1055K} discriminates between the purely star-forming galaxies in\n our sample (those below the line) and those hosting AGN (above the line).}\\label{BPT_dia}}\n \\end{center}\n\\end{figure}\n\n \\noindent Fig. \\ref{BPT_dia} shows the SFGs found below this line and AGN\n host galaxies above the line. We now have 18\\,350 high signal-to-noise SFGs. Concerning the low signal-to-noise galaxy sample, we do not have accurate enough spectral information to apply this diagnostic line to remove any unwanted AGN host galaxies. However, following the example of\n \\citet{2004MNRAS.351.1151B} it is still possible to remove the AGN hosts from\n the low signal-to-noise sample by removing those galaxies with [NII]$\\lambda6584$\/H$\\alpha > 0.6$ measured with\n nSigma $>$ 7$\\sigma$ in both lines. We now also have 6\\,000 low signal-to-noise SFGs. An overview of all galaxy sub-samples can be found\n in Table \\ref{subsample_table}. The 17\\,409 unclassified galaxies are\n predominantly early-type galaxies that show little sign of\n recent star-formation, which is why H$\\alpha$ has not been detected at\n the significance levels that we have required. 
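As an illustrative sketch of these two selections (the function names and branch structure are our own; the numerical cuts are those quoted above):

```python
import math

def is_star_forming(nii_ha, oiii_hb):
    """High signal-to-noise sample: a galaxy is classed as purely
    star-forming when it lies below the Kauffmann et al. (2003) line
    log([OIII]/Hb) = 0.61 / (log([NII]/Ha) - 0.05) + 1.3."""
    x = math.log10(nii_ha)
    if x >= 0.05:
        return False  # right of the asymptote: AGN-like for any [OIII]/Hb
    return math.log10(oiii_hb) < 0.61 / (x - 0.05) + 1.3

def is_star_forming_low_sn(nii_ha, nsigma_nii, nsigma_ha):
    """Low signal-to-noise sample: flag as an AGN host only when
    [NII]/Ha > 0.6 is measured at nSigma > 7 in both lines."""
    return not (nii_ha > 0.6 and nsigma_nii > 7 and nsigma_ha > 7)
```

For representative line ratios, a galaxy at [NII]$\lambda6584$\/H$\alpha = 0.3$ and [OIII]$\lambda5007$\/H$\beta = 1.0$ falls below the diagnostic line and is retained as a SFG.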
The lack of recent star-formation within these galaxies implies that they will\n also be void of any future CCSN events and hence these galaxies are not of\n interest to our survey.\n \n Having removed all of the AGN contaminated galaxies we now have two catalogues of SFGs which we shall refer to from now on as the high signal-to-noise SFG (HSFG) catalogue and the low signal-to-noise SFG (LSFG) catalogue.\n\n\\begin{table*}\n \\begin{center}\n \\caption{\\textrm{Hierarchy of galaxies extracted from the original SDSS\n DR5 spectroscopic galaxy sample within z$=$0.04}\\label{subsample_table}}\n \\begin{tabular}{c | c | c | c | c | c | c}\n \\hline\\hline\n \\multicolumn{7}{c}{\\bf All SDSS DR5 Galaxies [44\\,041]}\\\\\n {\\bf Sample} & \\multicolumn{2}{|c|}{High signal-to-noise galaxies [20\\,632]} & \\multicolumn{2}{|c|}{Low signal-to-noise galaxies [8\\,703]}& \\multicolumn{2}{c}{Unclassified galaxies [17\\,409]}\\\\\n {\\bf Sub-sample} & {SFGs [18\\,350]} & {AGN [2\\,282]} & {SFGs [6\\,000]} & {AGN [2\\,703]} & \\multicolumn{2}{c}{}\\\\\n \\hline\n \\end{tabular}\n \\end{center}\n\\end{table*}\n\n \n\\subsection{Measuring Oxygen Abundances}\\label{sec_b}\n\n\n In order to select out low-metallicity galaxies from both the HSFG\n and LSFG catalogues,\n we define oxygen abundances using the empirical calibrations of\n \\citet{2004MNRAS.348L..59P}. 
Within the range $8.12 \\lesssim 12+\\log(\\rm{O\/H}) < 9.05$ the\n following empirical calibration is used:\n\n\\begin{center} \n\\begin{equation}\n 12+\\log(\\rm{O\/H})=8.73-0.32\\log\\left(\\frac{[\\rm{OIII}]\\lambda5007\/\\rm{H}\\beta}{[\\rm{NII}]\\lambda6584\/\\rm{H}\\alpha}\\right),\n\\end{equation}\n\\end{center}\n\n \\noindent and below $12+\\log(\\rm{O\/H})\\simeq 8.12$ the following calibration is used:\n\n\\begin{center} \n \\begin{equation}\n 12+\\log(\\rm{O\/H})=8.9+0.59\\log([\\rm{NII}]\\lambda6584\/\\rm{H}\\alpha).\n \\end{equation}\n\\end{center}\n\n The wavelengths of the emission lines used for the flux ratios in both of these calibrations\n are separated by only a small amount and therefore their ratios are free of any extinction\n effects. Of the sample of 18\\,350 HSFGs, \\citet{2006A&A...448..955I} have directly measured\n metallicities for 209. They determined the\n oxygen abundances by measuring the\n [OIII]$\\lambda$4363\/[OIII]$\\lambda$5007 line ratio to calculate an\n electron temperature $T_{e}$, and then derived the\n abundances directly from the strengths of the [OII]$\\lambda$3727 (or\n [OII]$\\lambda$7320, 7331 when [OII]$\\lambda$3727 was not available) and\n [OIII]$\\lambda\\lambda$4959, 5007 emission lines. Comparing the\n \\citeauthor{2006A&A...448..955I} $T_{e}$ measured abundances with our \\citeauthor{2004MNRAS.348L..59P}\n empirically calibrated abundances we find good agreement, apart\n from four outlying galaxies (see Fig. \\ref{izo_fig}). 
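A minimal sketch of the resulting two-branch abundance estimator; how the two ranges are stitched together (evaluate the first calibration and fall back to the second when the result lies below 8.12) is our assumption about the implementation:

```python
import math

def oxygen_abundance(oiii_hb, nii_ha):
    """12 + log(O/H) from the Pettini & Pagel (2004) calibrations as
    quoted above: the line-ratio branch inside 8.12-9.05, and the
    [NII]/Ha branch below 8.12."""
    upper = 8.73 - 0.32 * math.log10(oiii_hb / nii_ha)
    if upper >= 8.12:
        return upper
    return 8.9 + 0.59 * math.log10(nii_ha)
```

Because both calibrations use ratios of closely spaced lines, no extinction correction is applied to the inputs.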
These galaxies are\n SDSS J124813.65-031958.2, SDSS J091731.22+415936.8, SDSS\n J123139.98+035631.4 and SDSS J130240.78+010426.8.\n\n \n When viewed, these four\n outliers seem to be dwarf galaxies that are in the same\n line of sight as, and possibly gravitationally bound\n to, much larger and presumably more metal-rich galaxies.\n If the contamination from these background galaxies has not been\n adequately removed from the spectra of the dwarf galaxies by the SDSS reduction pipeline, this would explain\n the discrepancy between the oxygen abundances measured by \\citeauthor{2006A&A...448..955I} and\n our measurements. Viewing a random selection of the remaining galaxies from\n the \\citeauthor{2006A&A...448..955I} sample reveals the blue compact dwarf galaxies expected from\n their low-metallicity galaxy sample.\n The fact that we may be over-estimating the oxygen abundances of a very\n small fraction of galaxies should not concern us too much as we are trying to produce a\n low-metallicity galaxy sample and not a high-metallicity sample that would then\n possibly be contaminated by a few misplaced lower-metallicity galaxies. Choosing then\n to ignore these four outlying galaxies we\n find that our oxygen abundance measurements for the remaining\n 205 galaxies fall with an RMS scatter of 0.14 dex from the directly\n measured abundances of \\citeauthor{2006A&A...448..955I}.\n \n Recently \\cite{2008ApJ...673..999P} have taken a sample of 125\\,958 SFGs from SDSS DR4, with oxygen abundances derived in the same fashion as \\cite{2004ApJ...613..898T} used for SFGs in DR2. The \\cite{2004ApJ...613..898T} method for deriving oxygen abundance estimates an individual galaxy's metallicity via a likelihood analysis which simultaneously fits multiple optical nebular emission lines to those predicted by the hybrid stellar-population plus photoionisation models of \\cite{2001MNRAS.323..887C}. 
A likelihood distribution of the metallicity is determined for each galaxy, and the median is taken as the best estimate of the galaxy metallicity. The \\cite{2004ApJ...613..898T} metallicities are essentially on the \\cite{2002ApJS..142...35K} abundance scale. Matching our catalogues of HSFGs and LSFGs against the sample of 125\\,958 SFGs of \\cite{2008ApJ...673..999P}, we find a common sample of 18\\,014 SFGs. The oxygen abundances that we measure with the \\citeauthor{2004MNRAS.348L..59P} method are typically $\\sim$0.2 dex below those of \\citeauthor{2004ApJ...613..898T}, in agreement with the findings of \\cite{2008AJ....135.1136M}. The cause of this discrepancy is debated. It may be due to certain parameters that produce temperature variations not being taken into consideration when deriving $T_{e}$ at higher metallicity, which would lead to an under-estimation of the oxygen abundance measured on the \\citeauthor{2004MNRAS.348L..59P} scale (which is calibrated with $T_{e}$ measured abundances) \\citep{2005A&A...434..507S, 2006astro.ph..8410B}; alternatively, it may be due to an unknown problem with the photoionisation models used by \\citeauthor{2004ApJ...613..898T} \\citep{2003ApJ...591..801K}. \n \n\\subsection{SN Rate Indicator}\\label{sec_c}\n\n Having measured the metallicity for each of the galaxies in our\n catalogues, we now wish to determine CCSRs. To do this we must first\n determine SFRs for the galaxies and then determine the fraction of those stars\n formed that will eventually end their lives as CCSNe. The\n best indicator that we have for the SFR for each galaxy is its H$\\alpha$\n luminosity. 
As alluded to previously, it is the young, massive, hot stars\n in purely star-forming galaxies that are the dominant source of\n hydrogen ionising radiation, causing the galaxy's\n H$\\alpha$ luminosity to be proportional to its recent SFR.\n \n \\citet{1998ARA&A..36..189K} has determined the following calibration\n between a galaxy's SFR and its H$\\alpha$ luminosity:\n \n\\begin{center}\n\\begin{equation}\n {\\rm SFR_{H\\alpha}}({\\rm M}_{\\odot}{\\rm \\,\\,yr^{-1}})=\\frac{L_{{\\rm H}\\alpha}}{1.27\\times10^{34}({\\rm W})}\n \\label{kennicutt_equ}\n\\end{equation}\n\\end{center}\n\n \\noindent where the luminosity is measured in watts.\n\n Derived from their model fits, \\citet{2004MNRAS.351.1151B} also provide likelihood distributions for\n the conversion factor between the H$\\alpha$ luminosity and the SFR for galaxies of\n various mass ranges. They confirm that the Kennicutt calibration is a\n very good {\\it typical} calibration, comparing well with\n the median value for their sample. When considering the complete HSFG and LSFG catalogues it is acceptable to assume a median mass range for the galaxies and we employ\n the Kennicutt calibration. However, when considering galaxies with\n relatively low metallicity we choose to use the most\n probable conversion factor from the \\citeauthor{2004MNRAS.351.1151B} distribution with the lowest mass range (${\\rm \\log}M_{*}<8$), as this\n distribution most closely resembles the low-metallicity galaxies in our catalogues i.e. 
low-mass, blue,\n dwarf, irregular galaxies.\n\n\\begin{center} \n\\begin{equation}\n {\\rm SFR_{H\\alpha}}({\\rm M}_{\\odot}{\\rm \\,\\,yr^{-1}})=\\frac{L_{{\\rm H}\\alpha}}{2.01\\times10^{34}({\\rm W})}\n\\end{equation}\n\\end{center}\n\n\\begin{figure*}[b]\n\\begin{center} \n\\begin{equation}\n {\\rm SFR_{H\\alpha}}({\\rm M}_{\\odot}{\\rm \\,\\,yr^{-1}})=10^{-0.4(r_{\\rm Petro} - r_{\\rm\n fibre})}\\left[\\frac{S_{\\rm H\\alpha}\/S_{\\rm H\\beta}}{2.86}\\right]^{2.114}\\frac{L_{{\\rm H}\\alpha}}{1.27\\times10^{34}({\\rm W})}\n \\label{AC_equ}\n\\end{equation}\n\\end{center}\n\\end{figure*}\n\n\\begin{figure*}[b]\n\\begin{center} \n\\begin{equation}\n {\\rm SFR_{H\\alpha}}({\\rm M}_{\\odot}{\\rm \\,\\,yr^{-1}})=10^{-0.4(r_{\\rm Petro} - r_{\\rm\n fibre})}\\left[\\frac{S_{\\rm H\\alpha}\/S_{\\rm H\\beta}}{2.86}\\right]^{2.114}\\frac{L_{{\\rm H}\\alpha}}{2.01\\times10^{34}({\\rm\n W})}\n \\label{AC_low_equ}\n\\end{equation}\n\\end{center}\n\\end{figure*}\n\n\\begin{figure}\n \\begin{center}\n \\includegraphics[totalheight=0.2\\textheight]{izo_fig.ps}\n \\caption{\\textrm{Comparing our \\citet{2004MNRAS.348L..59P} empirically calibrated oxygen abundances with those\n directly measured by \\citet{2006A&A...448..955I} via measuring\n the electron temperature reveals that our\n measurements are reliable with an RMS scatter of 0.14 dex, the solid line depicting a one-to-one correspondence. Note the four outlying galaxies - see text for\n details.}\\label{izo_fig}}\n \\end{center}\n\\end{figure}\n\n SDSS provides the H$\\alpha$ equivalent line width and also the continuum flux\n at the wavelength of H$\\alpha$, the equivalent width times the continuum flux\n giving the H$\\alpha$ flux which is then used to determine the H$\\alpha$\n luminosity. A problem with the SDSS data is that the measured H$\\alpha$\n flux for a galaxy is only the flux which falls within the 3''\n fibre aperture of the SDSS multi-fibre spectrograph. 
Typically this is only a\n fraction of the total galaxy flux, as the SDSS spectrograph fibre locks onto the\n centre of the galaxy and any flux that falls outside of the 3'' fibre\n aperture is lost. \\citet{2003ApJ...599..971H} have developed a very simple aperture-correction that can\n be applied to the measured galaxy H$\\alpha$ luminosity to give an estimate of the total\n galaxy H$\\alpha$ luminosity. The aperture-correction takes account of the ratio between\n the petrosian photometric $r$-band galaxy magnitude and the\n synthetic $r$-band `fibre magnitude' determined from the galaxy spectrum. \n \\citeauthor{2003ApJ...599..971H} also provide an extinction correction to be\n used when determining the galaxy H$\\alpha$ SFR. The correction makes use of the Balmer decrement\n and assumes the standard Galactic extinction law of\n \\citet{1989ApJ...345..245C}. This gives\n a final aperture and extinction corrected H$\\alpha$ luminosity SFR indicator\n for our entire galaxy sample as given in Equation \\ref{AC_equ}, and for the low-metallicity galaxy sample as given in Equation \\ref{AC_low_equ}, where $S_{\\rm H\\alpha}$ and $S_{\\rm H\\beta}$ are the line \nfluxes corrected for stellar absorption according to \\citet{2003ApJ...599..971H}.\n Having now determined a SFR for each of the galaxies in our\n catalogues, we can compare these rates with their oxygen abundances\n (Fig. \\ref{oxy_sfr}). It is\n clear that in general the higher the galaxy SFR the higher the typical oxygen abundance. It can be reasoned that if the SFR of a galaxy is high, or has been high at any point in its history, there is an increased population of young, hot, massive stars which in turn leads to an increased rate of\n CCSNe and therefore a greater rate of enrichment of the\n ISM. 
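The aperture- and extinction-corrected indicator of Equations \ref{AC_equ} and \ref{AC_low_equ} can be sketched as a single function (argument names are illustrative; the conversion factor defaults to the Kennicutt value, with $2.01\times10^{34}$ substituted for the low-mass calibration):

```python
def sfr_halpha(L_halpha_W, r_petro, r_fibre, balmer_ratio,
               conversion=1.27e34):
    """Aperture- and extinction-corrected H-alpha SFR in Msun/yr.
    L_halpha_W: observed (fibre) H-alpha luminosity in watts;
    balmer_ratio: stellar-absorption-corrected S_Ha / S_Hb."""
    aperture = 10.0 ** (-0.4 * (r_petro - r_fibre))  # Hopkins et al. (2003)
    extinction = (balmer_ratio / 2.86) ** 2.114       # Balmer decrement
    return aperture * extinction * L_halpha_W / conversion
```

With $r_{\rm Petro}=r_{\rm fibre}$ and an observed Balmer decrement of 2.86 both corrections are unity and the expression reduces to Equation \ref{kennicutt_equ}.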
The observed high-metallicity cutoff of the SDSS galaxies is probably due to a saturation suffered by [OIII] $\\lambda$4363 T$_{e}$ calibrated metallicities \\citep[see][]{2008arXiv0801.1849K}.\n\n\\begin{figure}\n \\begin{center}\n \\includegraphics[totalheight=0.2\\textheight]{SFR_oxy.ps}\n \\caption{\\textrm{H{$\\alpha$} determined star-formation rates, using the low-mass range calibration of \\citet{2004MNRAS.351.1151B}, of\n 18\\,350 galaxies\n in our HSFG catalogue compared with their measured oxygen abundances. In general,\n the greater the galaxy metallicity, the greater its star-formation rate, as\n expected.}\n\\label{oxy_sfr}}\n \\end{center}\n\\end{figure}\n\n Given a SFR, it is relatively simple to convert to a CCSR by determining\n the fraction of a stellar population that will eventually collapse to create\n CCSNe. Following the method used by e.g. \\citet{2001MNRAS.324..325M}, we use an initial mass function (IMF) for a stellar\n population to calculate the fraction of this population within the mass\n range 8M$_{\\odot}<$ M $<$ 50M$_{\\odot}$, the mass range for stars predicted to end\n their lives as CCSNe. From this logic the CCSR is determined as:\n\n\\begin{center} \n\\begin{equation} \n {\\rm CCSR}=\\frac{\\int_{\\rm 8M_{\\odot}}^{\\rm 50M_{\\odot}}\\phi(m)dm}{\\int_{\\rm\n 0.1M_{\\odot}}^{\\rm 125M_{\\odot}}m\\phi(m)dm}\\times{\\rm SFR}\n \\label{mattila_equ}\n\\end{equation}\n\\end{center}\n\n where \\mbox{$\\phi(m)$} is the Salpeter IMF \\citep{1955ApJ...121..161S} with lower and\n upper mass cut-offs of \\mbox{$0.1{\\rm M}_{\\odot}$} and \\mbox{$125{\\rm M}_{\\odot}$}. 
This conversion is\n calculated to be:\n\n\\begin{center} \n\\begin{equation} \n {\\rm CCSR (SNe \\,\\,yr^{-1})}=0.007 \\times {\\rm SFR (M_{\\odot}\\,\\,yr^{-1})}\n\\end{equation}\n\\end{center}\n\n\\subsection{Additional nearby bright galaxies}\n\n The target selection criteria for the SDSS DR5 spectroscopic sample\n include a\n bright magnitude limit of \\mbox{$r\\sim$14.5} in order to avoid saturation and excessive\n cross-talk in the spectrographs. As a result of this restriction many of\n the nearby luminous galaxies have been omitted from the DR5 spectroscopic\n sample. To account for these missing bright galaxies and \nconstruct a complete galaxy catalogue we initially\n select out the galaxies from the HyperLeda galaxy catalogue that match with\n galaxies from the SDSS\n DR5 {\\it photometric} survey with magnitudes $r<14.5$, assuming that the HyperLeda\n catalogue contains all of the nearby luminous galaxies. From this catalogue\n of matched galaxies we further select out those galaxies that are not found in the\n SDSS DR5 {\\it spectroscopic} survey. We discover a total of 1\\,887 nearby\n luminous galaxies included in the SDSS DR5 photometric survey that have\n been omitted from the spectroscopic survey.\n \n Of these 1\\,887 galaxies we wish to know the fraction that are star-forming\n galaxies. HyperLeda provides a galaxy morphological classification for a\n large number of the galaxies in the catalogue and we are able to remove all\n galaxies with an early-type classification, as these galaxies\n will generally have very low SFRs. Note, however, that some early-type galaxies\n have HII regions and some evidence of low-level star-formation\n \\citep[e.g.][]{2005MNRAS.357.1337M}. The mean\n $g-i$ colour of these removed early-type galaxies is 1.216, with a standard\n deviation \\mbox{$\\sigma = 0.153$}. 
In an attempt to remove from this bright galaxy sample the remaining\n non-star-forming galaxies that do not have a morphological classification, we remove all\n galaxies redder than the mean early-type $g-i$ colour minus one $\\sigma$, i.e.\n $g-i>1.063$. This results in 1\\,216 remaining star-forming galaxies. There is a possibility that this cut may also exclude a few starburst galaxies which have been heavily reddened due to the effects of extinction, but we are mainly concerned with low-metallicity galaxies, which are not greatly affected by extinction.\n \n As we have no spectral information for these galaxies from SDSS DR5, it is\n necessary to use an alternative SFR indicator to the\n H$\\alpha$ luminosity. The $U$-band luminosity can be used as a suitable\n indicator, as calibrated by \\cite{2006ApJ...642..775M}:\n \n\\begin{center} \n\\begin{equation}\n {\\rm SFR_{\\it U_{obs}}}({\\rm M}_{\\odot}{\\rm yr^{-1}})=(1.4\\pm1.1)\\times10^{-43}\\frac{L_{U_{obs}}}{\\rm ergs\\ s^{-1}},\n\\end{equation}\n\\end{center}\n\n \\noindent where the SDSS $u$-band is transformed to \\mbox{$U_{\\rm vega}$} using the following\n transformation from \\citet{2007AJ....133..734B}:\n \n\\begin{center} \n\\begin{equation}\n U_{\\rm vega}=u-0.0140(u-g)+0.0556,\n\\end{equation}\n\\end{center}\n \n \\noindent and the distance moduli from HyperLeda are used to determine $L_{U_{obs}}$. The\n CCSR is then determined using Equation \\ref{mattila_equ}. Note\n that as \\citeauthor{2006ApJ...642..775M} have empirically calibrated this\n $U$-band SFR indicator from\n extinction-corrected H$\\alpha$ galaxy luminosities, there is no need to\n further correct this indicator for dust reddening. 
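The chain from SDSS photometry to a CCSR for these galaxies, together with a closed-form check of the 0.007 conversion factor of Equation \ref{mattila_equ}, can be sketched as follows (the function names are our own; the luminosity is assumed to be supplied directly, since turning an absolute $U$ magnitude into erg s$^{-1}$ requires a zero-point not quoted here):

```python
def salpeter_ccsn_fraction(alpha=2.35, m_lo=0.1, m_hi=125.0,
                           cc_lo=8.0, cc_hi=50.0):
    """Number of 8-50 Msun stars formed per solar mass of stars for a
    Salpeter IMF phi(m) ~ m^-alpha; both integrals in the CCSR
    definition have closed forms."""
    num = (cc_lo ** (1 - alpha) - cc_hi ** (1 - alpha)) / (alpha - 1)
    den = (m_lo ** (2 - alpha) - m_hi ** (2 - alpha)) / (alpha - 2)
    return num / den  # evaluates to ~0.007

def u_to_U_vega(u, g):
    """SDSS u (AB) to Johnson U (Vega), Blanton & Roweis (2007)."""
    return u - 0.0140 * (u - g) + 0.0556

def ccsn_rate_from_uband(L_U_erg_s):
    """CCSN rate in SNe/yr from the U-band luminosity via the
    Moustakas et al. calibration and the Salpeter conversion."""
    sfr = 1.4e-43 * L_U_erg_s  # Msun/yr
    return salpeter_ccsn_fraction() * sfr
```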
The 1\\,216 \n bright, nearby galaxies have a resulting $U$-band indicated CCSR of \\mbox{12.70\n CCSNe yr$^{-1}$}.\n\n\n We now have three galaxy catalogues: a HSFG catalogue and a LSFG\n catalogue, with measured metallicities and CCSRs, and a nearby \n galaxy catalogue containing bright galaxies not found in the SDSS\n spectroscopic galaxy catalogue. \n We have estimates of SFRs and CCSRs for all these galaxies.\n Taking our\n catalogues of 18\\,350 HSFGs and 6\\,000 LSFGs and introducing various\n upper limits on the individual galaxy metallicities and lower limits\n on the CCSRs we are able to extract out separate galaxy\n sub-samples. Table \\ref{SFG_rate_table} displays these sub-samples,\n differing both in the number of galaxies they contain and their\n estimated CCSR determined from the galaxy SFRs\n derived from aperture- and extinction-corrected H$\\alpha$ luminosities for the HSFG and LSFG catalogues, and from $U$-band luminosities for the \nnearby bright galaxy catalogue. For the samples of\n galaxies with no metallicity constraint the SFRs have been derived using the\n `typical' calibration of \\citet{1998ARA&A..36..189K}, whereas the SFRs of the\n sub-samples of galaxies with constraints on metallicity have been\n derived using the lowest mass range calibration of \\citet{2004MNRAS.351.1151B}. These full catalogues, or sub-samples of them, can now be used to determine the feasibility of searching for low-metallicity CCSN events using various survey strategies. Figure \\ref{cum_dis} shows the cumulative\n distribution of the combined CCSN rate from our HSFG and LSFG\n catalogues, measured against oxygen abundance. 
It is clear that the\n CCSR increases steeply with the oxygen abundance of the galaxy sample.\n\n\n\\section{\\textbf{Strategy 1: A Pointed Survey of Catalogued Low-Metallicity Galaxies}}\\label{sec_d}\n\n \n When determining the feasibility of a pointed survey for CCSNe in a catalogue of low-metallicity galaxies we consider using the fully robotic 2.0m\n Liverpool Telescope situated at the Observatorio del Roque de los Muchachos, La Palma. We shall also consider the use of a network of similar-sized telescopes.\n \n There are three characteristics that we require of the galaxy catalogue to be used with this survey strategy. Firstly, it must contain only a few hundred galaxies, as too many galaxies in the catalogue would hinder our ability to observe each individual\n galaxy frequently enough to ensure that we detect any SN that it may\n host. Secondly, we require that the galaxies are of\n sufficiently low metallicity in order to determine how the CCSNe that\n they host differ from those hosted by their higher metallicity counterparts.\n Finally, the galaxies must exhibit a suitably high CCSR to increase\n the probability of detecting these CCSNe. The latter two requirements somewhat\n contradict each other because metallicity\n tends to scale with SFR in SFGs, and therefore requiring a\n low-metallicity\n galaxy sample implies that we require galaxies with lower\n SFRs, and hence {\\it lower} CCSRs. It is\n therefore essential that we produce a galaxy catalogue that can act as\n a compromise between these two conflicting requirements.\n \n\\begin{table*}\n\\caption{\\textrm{The table shows the expected CCSN rates within the SDSS DR5\n spectroscopic survey area (14\\% of the entire sky) out to a distance of \\mbox{$z\\sim0.04$}. 
}}\\label{SFG_rate_table}\n \\begin{center}\n \\begin{tabular}{c c c c c c c}\n \\multicolumn{7}{c}{\\bf HSFG Catalogue}\\\\\n \\hline\\hline\n & \\multicolumn{6}{c}{\\bf INDIVIDUAL GALAXY SN RATE LIMITS}\\\\\n & \\multicolumn{2}{c}{\\bf $>$ 0.0 SNe yr$^{-1}$} & \\multicolumn{2}{c}{\\bf $>$ 0.001 SNe yr$^{-1}$} & \\multicolumn{2}{c}{\\bf $>$ 0.01 SNe yr$^{-1}$}\\\\\n \\hline\n {\\bf 12+log(O\/H)} & {\\bf Galaxies} & {\\bf SNe yr$^{-1}$} & {\\bf Galaxies} & {\\bf SNe yr$^{-1}$} & {\\bf Galaxies} & {\\bf SNe yr$^{-1}$}\\\\\n \\hline\n {\\bf No Limit} & 18\\,350 & 115.6 & 13\\,974 & 113.4 & 2\\,557 & 73.6\\\\\n {\\bf $<$ 8.4} & 8\\,019 & 13.2 & 3\\,650 & 11.2 & 120 & 2.3\\\\\n {\\bf $<$ 8.3} & 4\\,290 & 6.9 & 1\\,830 & 5.9 & 73 & 1.3\\\\\n {\\bf $<$ 8.2} & 1\\,713 & 3.1 & {\\bf 727} & {\\bf 2.8} & 42 & 0.8\\\\\n {\\bf $<$ 8.1} & 537 & 1.0 & 209 & 0.9 & 16 & 0.3\\\\\n \\hline\n \\multicolumn{7}{c}{}\\\\\n \\multicolumn{7}{c}{\\bf LSFG Catalogue}\\\\\n \\hline\\hline\n & \\multicolumn{6}{c}{\\bf INDIVIDUAL GALAXY SN RATE LIMITS}\\\\\n & \\multicolumn{2}{c}{\\bf $>$ 0.0 SNe yr$^{-1}$} & \\multicolumn{2}{c}{\\bf $>$ 0.001 SNe yr$^{-1}$} & \\multicolumn{2}{c}{\\bf $>$ 0.01 SNe yr$^{-1}$}\\\\\n \\hline\n {\\bf 12+log(O\/H)} & {\\bf Galaxies} & {\\bf SNe yr$^{-1}$} & {\\bf Galaxies} & {\\bf SNe yr$^{-1}$} & {\\bf Galaxies} & {\\bf SNe yr$^{-1}$}\\\\\n \\hline\n {\\bf No Limit} & 6\\,000 & 34.1 & 3\\,691 & 33.2 & 901 & 22.9\\\\\n {\\bf $<$ 8.4} & 1\\,757 & 0.8 & 116 & 0.3 & 6 & 0.1\\\\\n {\\bf $<$ 8.3} & 1\\,025 & 0.3 & 50 & 0.1 & 0 & 0.0\\\\\n {\\bf $<$ 8.2} & 401 & 0.1 & 18 & 0.0 & 0 & 0.0\\\\\n {\\bf $<$ 8.1} & 116 & 0.0 & 4 & 0.0 & 0 & 0.0\\\\\n \\hline \n \\multicolumn{7}{c}{}\\\\\n \\multicolumn{7}{c}{\\bf Nearby Bright Galaxy Catalogue}\\\\\n \\hline\\hline\n {\\bf } & {\\bf } & {\\bf Galaxies} & {\\bf } & {\\bf SNe yr$^{-1}$} & {\\bf } & {\\bf }\\\\\n \\hline\n {\\bf } & & 1 216 & & 12.70 & & \\\\\n \\hline \n\n \\end{tabular} \n \\end{center}\n\\end{table*}\n \n 
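The cumulative distribution of Fig. \ref{cum_dis} amounts to sorting the galaxies by oxygen abundance and accumulating their individual rates; a minimal sketch, with per-galaxy values from the HSFG and LSFG catalogues as inputs:

```python
def cumulative_ccsn_rate(abundances, rates):
    """Return (12+log(O/H), cumulative CCSR) pairs, ordered by
    increasing oxygen abundance."""
    total, out = 0.0, []
    for oh, rate in sorted(zip(abundances, rates)):
        total += rate
        out.append((oh, total))
    return out
```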
\\begin{figure}\n \\begin{center}\n \\includegraphics[totalheight=0.3\\textheight]{cum-dist.eps}\n \\caption{\\textrm{Cumulative distribution of the combined CCSN rate from our HSFG and LSFG catalogues, measured against oxygen abundance. It is clear that as oxygen abundance increases so too does the rate of CCSNe.}}\\label{cum_dis}\n \\end{center}\n\\end{figure}\n \n \n \n For the purpose of this survey strategy we decide that the galaxy catalogue that\n optimally fits our requirements is the sub-sample of the HSFG catalogue that\n contains galaxies with oxygen abundances of less than 8.2\n dex {\\it and} a CCSR greater than 1 CCSN every 1\\,000 years. This low-metallicity catalogue contains\n 727 galaxies, a suitable number for a pointed survey, with an estimated\n CCSR of \\mbox{2.8 CCSNe yr$^{-1}$}. It should be noted that this galaxy catalogue is extracted solely from the 5\\,713 square degrees of the sky that the\n SDSS DR5 spectroscopic survey covers, that is, 14\\% of the entire sky.\n It can be assumed that the rest of the sky contains a similar density of\n these low-metallicity galaxies. We shall return to this point later.\n \n\\subsection{Monte-Carlo Simulations}\\label{sec_e}\n\n Of the estimated \\mbox{2.8 CCSNe yr$^{-1}$} that this\n low-metallicity galaxy catalogue will produce, we will only be able to detect a fraction\n due to the practical limiting factors of a pointed survey. The\n reason that a given SN would not be detected is simply its faintness at the epoch of\n observation. 
The factors that will influence the\n likelihood that\n a SN will be detected when observed within our search are: whether\n or not the galaxy that hosts the\n CCSN is observable during the period of time when the CCSN\n is\n detectable, the type of CCSN (IIP, IIL, Ib, Ic or IIn) observed and its intrinsic\n brightness, the distance to the host galaxy, the extinction towards the\n CCSN, the exposure time and the age of the CCSN when observed (affected by the\n cadence of the observations).\n \n By running a Monte-Carlo simulation, constrained by each of these parameters,\n to randomly produce a sample of 100\\,000 possible SNe observable\n within our search, we can infer the fraction of CCSNe that we should\n actually detect.\n \n\\subsection{Supernova Rates, Template Lightcurves and Distributions}\n\n In order for the Monte-Carlo simulation (MCS) to accurately reproduce the\n relative rates of the different types of CCSNe, we\n use the observed rates compiled by \\citet{smartt_et_al}, given in Table\n \\ref{smartt_rate}. These rates have been compiled within a time and volume-limited\n sample, accounting for all SNe discovered within the eight-year period between 1999 January 1\n and 2006 December 31 in galaxies with a recessional velocity \\mbox{$< 2\\,000\n {\\rm km\\,s}^{-1}$}. Apart from SNe that\n inhabit environments of heavy extinction or are first observed late in their\n evolution, it is expected that within this distance limit \n \\mbox{($\\mu = 32.3$)} all known types of SNe should have been bright enough to\n have been detected, implying that these relative rates are\n as free from any Malmquist bias as possible. CCSNe of type IIb have been merged with type Ib\/c,\n and type IIn have been divided between those with a plateau phase in the\n tail of their lightcurves, IIn\/P, and those with a linear phase, IIn\/L.\n \n\\begin{table}\n \\caption{{\\textrm Relative CCSRs taken from\n Smartt et al. 
(2007).}\\label{smartt_rate}}\n \\begin{center}\n \\begin{tabular}{ c c c c}\n \\hline\\hline\n {\\bf SN Type} & {\\bf Number} & {\\bf Relative Rate} & {\\bf Core-Collapse\n Only}\\\\\n \\hline\n {\\bf Ia} & 25 & 24.8$\\%$ & -\\\\\n {\\bf IIP} & 43 & 42.6$\\%$ & 56.6$\\%$\\\\\n {\\bf IIL} & 2.5 & 2.4$\\%$ & 3.3$\\%$\\\\\n {\\bf Ib\/c} & 28 & 27.7$\\%$ & 36.6$\\%$\\\\\n {\\bf IIn\/P} & 1.25 & 1.2$\\%$ & 1.6$\\%$\\\\\n {\\bf IIn\/L} & 1.25 & 1.2$\\%$ & 1.6$\\%$\\\\\n \\hline\n \\end{tabular}\n \\end{center}\n\\end{table}\n \n We also supply template lightcurves of the various SNe for the MCS. For Type IIP we use\n 1999em as our template, taking the data from \\citet{2001ApJ...558..615H}. For Type IIL we use 1998S,\n taking data for the rise to maximum light from \\citet{2000A&AS..144..219L} and\n data for the tail from \\citet{2000MNRAS.318.1093F}. For Type Ib\/c we use\n 2002ap, taking data from \\citet{2003PASP..115.1220F}. For\n SNe of Type IIn, we suggest that it is appropriate to divide the relative rate evenly\n between Type IIn that exhibit a plateau phase in their\n lightcurves and those that exhibit a linear phase, Type IIn\/P and Type\n IIn\/L respectively. For the Type IIn\/P, we use\n 1994Y as the template taking data from \\citet{2001PASP..113.1349H}, allowing\n 1998S to provide the rise to maximum light. For Type IIn\/L we use 1999el as the\n template taking data from \\citet{2002ApJ...573..144D}, again allowing 1998S\n to provide the rise to maximum light (see Fig \\ref{templates} for comparison). 
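As a minimal illustrative sketch (not the authors' actual simulation code), the core-collapse-only rates of Table \ref{smartt_rate} can be used directly as sampling weights when a Monte-Carlo draw assigns a type to each simulated CCSN; the type labels and weights below are taken from the table:

```python
import random

# Core-collapse-only relative rates (percent) from Smartt et al. (2007).
CC_RATES = {"IIP": 56.6, "IIL": 3.3, "Ib/c": 36.6, "IIn/P": 1.6, "IIn/L": 1.6}

def draw_sn_types(n, seed=0):
    """Draw n CCSN types with probability proportional to the relative rates."""
    rng = random.Random(seed)
    types, weights = zip(*CC_RATES.items())
    # random.choices normalises the relative weights internally.
    return rng.choices(types, weights=weights, k=n)

sample = draw_sn_types(100_000)
frac_IIP = sample.count("IIP") / len(sample)  # close to 0.566 for large samples
```

For 100\,000 draws, as used in the MCS, the sampled fractions reproduce the input rates to well within a percent.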
\n The following conversion from \\citet{2007AJ....133..734B} is used to transform the data for the\n lightcurves to the Sloan $g$-band (note that magnitudes are in the AB system):\n\n\\begin{center} \n\\begin{equation}\n g=B-0.03517-0.3411(B-V)\n \\label{blanton_equ}\n\\end{equation}\n\\end{center} \n\n The SN absolute magnitude distributions of \\citet{2002AJ....123..745R} are used in the MCS to provide weighted,\n random distributions of peak magnitudes for the\n SNe (Table \\ref{rich_table}). Equation \\ref{blanton_equ} is again\n used to transform these distributions to the $g$-band, taking\n a $(B-V)$ colour from the epoch of peak $g$-magnitude from our template\n lightcurves. $\\sigma$ is the range in the peak magnitude distribution and $g$-band magnitudes are given in the AB system. When choosing a filter to perform a SN search with the Liverpool Telescope, the $r$-band is superior to the $g$-band because it has a greater filter throughput, the CCD detector is more responsive in the $r$-band, and SNe are generally brighter in the $r$-band, especially later in their evolution. However, the \\citeauthor{2002AJ....123..745R} distributions of SN peak magnitudes are given in the $B$-band, which we can transform to the $g$-band but not to the $r$-band. It is for this reason that we simulate a SN survey in the $g$-band and then, with these results, hypothesise the outcome of a search in the $r$-band.\n \n\\begin{figure*}\n \\begin{center}\n \\includegraphics[totalheight=0.3\\textheight]{lightcurves.eps}\n \\caption{\\textrm{Template lightcurves used within the Monte-Carlo\n simulation. Lightcurves for SNe of type IIP are according to 1999em, type IIL to 1998S, type Ib\/c to 2002ap, type IIn\/P to 1994Y and\n type IIn\/L to 1999el. 
Note that all magnitudes are in the AB system.}}\n \\label{templates}\n \\end{center}\n\\end{figure*}\n\n \n\\begin{table}\n\\caption{\\textrm{Peak magnitude distributions from \\citet{2002AJ....123..745R}}\\label{rich_table}}\n \\begin{center}\n \\begin{tabular}{c c c c}\n \\hline\\hline\n {\\bf SN Type} & ${\\bf M_{B}}$ & ${\\bf M_{g}}$ & ${\\bf \\sigma}$\\\\\n \\hline\n {\\bf IIP} & -17.00 & -17.02 & 1.12\\\\\n {\\bf IIL} & -18.03 & -18.00 & 0.90\\\\\n {\\bf Ib\/c} & -18.04 &-18.22 & 1.39\\\\\n {\\bf IIn\/P} & -19.15 & -19.24 & 0.92\\\\\n {\\bf IIn\/L} & -19.15 & -19.24 & 0.92\\\\\n \\hline\n \\end{tabular}\n \\end{center}\n\\end{table}\n\n\\subsection{Cadence, Distance and Extinction}\n\n A major factor influencing the detection efficiency of any SN\n search is the time between consecutive\n observations of a galaxy hosting a SN. As this\n time increases, so does the likelihood that we will not observe the\n SN until it is well advanced along its lightcurve. This could result\n in the magnitude dropping below our detection limit, hence failing to be detected in the\n search. There are three parameters that affect the cadence of our\n search and\n that need to be included in the MCS. The first of these is the probability\n that the SN will be in solar conjunction for a\n fraction, if not all, of the time that it\n remains detectable by our search. To account for this factor, we use\n the celestial coordinates of the 727 galaxies within our low-metallicity galaxy catalogue and an almanac for the Liverpool Telescope to determine the fraction\n of the year that each individual galaxy is observable, producing a\n distribution with a mean fraction of 0.380. 
We then use this\n distribution in the MCS to allocate a random amount of time to the\n cadence of observation of each SN, to account for cases where a SN\n would be unobservable and hence undetectable because its host galaxy is behind the sun.\n \n The second factor that affects the cadence is the\n number of nights that cannot be used by the Liverpool Telescope to\n perform our search. Reasons why the telescope could not be used on any\n particular night include weather, technical\n difficulties and scheduled maintenance nights. To\n compensate for these nights in our MCS, we have taken nightly reports\n from the Liverpool Telescope website\\footnote{LT website:\n \\it{http:\/\/telescope.livjm.ac.uk\/}}, from 2005 August 1 to 2006 July 31,\n and use these reports to determine a distribution of nights (58$\\%$ of nights in total) that\n can be used for our search. This distribution of usable nights is included in our MCS when considering the cadence of observations.\n \n The final factor that will affect the cadence of observations, is the\n number of galaxies that we aim to observe every night. This number is\n influenced by the amount of telescope time dedicated to our\n search each night; the\n greater the amount of time on the telescope each night, the greater the number of galaxies we can observe per\n night and the higher the cadence of observations. The number of galaxies\n that can be observed each night is also affected by exposure time. The\n benefit of increasing the exposure time is that we can search to a deeper magnitude limit, meaning\n that fainter SNe are detectable and SNe are detectable for a greater period of time. 
It is essential therefore to run a number of MCSs in order to find\n the optimal balance between the exposure time given to each galaxy and\n the cadence of observations, hence enabling us to detect the greatest fraction of\n CCSNe hosted by our low-metallicity galaxy catalogue.\n \n The final two parameters placed in our MCS are the host galaxy distances and\n the extinction toward the SNe. The MCS will attribute to each SN a random host galaxy\n distance, out to the distance limit of our search, which is \\mbox{z$=$0.04} (assuming the galaxies within\n our search are spread homogeneously and isotropically throughout this\n volume). Considering that the majority, if not all, of the galaxies\n within our catalogue are low-luminosity, blue, dwarf galaxies with relatively\n low metallicities, we assume that host galaxies will have produced very\n little dust and will not contribute a great deal to any extinction toward\n the CCSN.\n To this end we simply attribute a typical Galactic extinction to each\n SN \\mbox{(A$_{g}=0.3$)}.\n \n We note that a fraction of SNe remain undetected within spiral galaxies that are not observed face on \\citep[e.g. ][]{2003astro.ph.10859C}. This is due to the fact that the average extinction attributed to SNe in spiral galaxies with higher inclination angles is higher than that of face on spiral galaxies, making the SNe fainter and more difficult to detect. Implementation of a correction for this effect into the MCS, using a similar method to that of \\citet{2005MNRAS.362..671R}, could easily result in an under-estimation of the CCSRs observed in low-metallicity galaxies as these tend not to be grand-design spiral galaxies but rather dwarf, irregular galaxies where the effect of galaxy inclination would be negligible. We have therefore decided not to attempt to correct for this effect. This may make our\npredictions for the SN discovery rates in the full galaxy catalogues\n(with no metallicity limit) somewhat optimistic. 
However, this may not lead to a significant over-estimate of events, as the SFRs that we calculate come from the observed galaxy H$\\alpha$ luminosities and in inclined spirals these will also be lower than in face on targets.\n \n \n\\subsection{Monte Carlo Simulation Results for a Single 2.0m Telescope}\\label{sec_f}\n\n Having included each of these parameters in the MCS, we randomly\n generate 100\\,000 SNe that would potentially be observable within\n our search. The information gained about each SN produced by the simulation includes\n its type and its apparent magnitude at the point of observation. Only if this magnitude is\n above the limiting magnitude of our search can we register the SN as being\n detected.\n \n By running the MCS multiple times for (a) the varying number of hours we aim to\n observe with the LT every night and (b) the varying amount of exposure time\n that we dedicate to each individual galaxy, we can determine the optimal\n values of these parameters that shall enable us to detect the greatest\n fraction of the CCSNe that we have predicted our low-metallicity galaxy catalogue\n will produce. The results can be seen in Fig. \\ref{exp_time}.\n \n\\begin{figure*}\n \\begin{center}\n \\includegraphics[totalheight=0.3\\textheight]{exp_time.eps}\n \\caption{\\textrm{Percentage of CCSNe\n detected from our MCSs for a differing number of hours observing per night\n (labeled to the right of the plot). The percentage of CCSNe detected\n increases as the nightly observing time increases, as this increases the\n cadence of observations. As the individual exposure time per galaxy\n increases, the limiting magnitude of the search deepens but the cadence of\n observations decreases. 
There is therefore an optimal point at which\n the exposure time and cadence of observation are balanced to detect the maximum\n percentage of SNe.}}\n \\label{exp_time}\n \\end{center}\n\\end{figure*} \n\n Studying the results of the MCSs, we suggest that the\n difference between the percentage of CCSNe detected while aiming to observe one\n hour per night as opposed to two hours per night or more is not enough to\n justify the extra observing time, as the\n fraction of CCSNe detected increases by only a few percent. As the detection rates are double-valued on either side of the peak\n detection rate, we suggest using a shorter exposure time in order to\n increase cadence and thereby increase the probability of \n discovering CCSNe earlier in their evolution as opposed to later. \n \n Aiming to observe one hour a night using 60 sec exposures allows each galaxy to be observed once every $\\sim$17 days while accessible with the LT and enables 29.6\\% of CCSNe to be detected with\n our search. When considering the catalogue of 727 low-metallicity galaxies which produce \\mbox{2.8 CCSNe yr$^{-1}$}, this equates to a detected CCSN rate of \\mbox{0.8 SNe yr$^{-1}$}\n and requires a total amount of \\mbox{154 hrs\n yr$^{-1}$} telescope time. Remembering that the $r$-band is superior to the $g$-band for performing a SN search, we note that given a 60 sec exposure the AB limiting magnitudes for the Liverpool Telescope are $g = 18.7$ mags and $r = 19.0$ mags. Given the average cadence of observations determined by our chosen survey parameters, a typical SN will be observed $\\sim$73 days post-explosion with $g-r=0.8$ mags. Given the difference in limiting magnitudes and the typical $g-r$ colour at the point of observation, we expect to detect SNe that are 1.1 magnitudes fainter with an $r$-band search than we would with a $g$-band search. 
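The 1.1 mag figure is simply the sum of the deeper $r$-band limit and the typical SN colour at the epoch of observation; schematically, with the values quoted above:

```python
# Illustrative check of the r-band vs g-band depth gain quoted in the text.
G_LIM, R_LIM = 18.7, 19.0   # 60 s AB limiting magnitudes (Liverpool Telescope)
G_MINUS_R = 0.8             # typical SN g-r colour ~73 days post-explosion

# Gain in detectable SN faintness of an r-band search over a g-band search:
gain = (R_LIM - G_LIM) + G_MINUS_R  # = 1.1 magnitudes
```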
Assuming that the template SN lightcurves and the distribution of SN peak magnitudes are similar in the $r$-band as compared to the $g$-band, we re-run the $g$-band MCS with a limiting magnitude 1.1 magnitudes fainter than previously used. From this simulation we predict that using the $r$-band will allow for $47.8\\%$ of CCSNe to be detected with our search. Again considering the catalogue of 727 low-metallicity HSFGs which produce \\mbox{2.8 CCSNe yr$^{-1}$}, this equates to a detected CCSN rate of \\mbox{1.3 SNe yr$^{-1}$}.\n \n Assuming a typical Galactic extinction of A$_g=0.3$ (A$_r=0.22$) and an exposure\n time of 60 seconds, we estimate that the absolute\n limiting magnitude ($25\\sigma$ significance) for this search at a distance of 20 Mpc will be\n $M_r=-12.7$, at 70 Mpc it will be $M_r=-15.4$ and at 150 Mpc it will be\n $M_r=-17.1$. We plan to use the method of image matching and subtraction \\citep[see ][]{1998ApJ...503..325A} to detect SNe throughout this survey. Assuming original \nimages of equal depth, the process of image subtraction increases the\nnoise by roughly a factor of $\\sqrt{2}$. In addition, the detection of\nSNe inside their host galaxies will always be more difficult due to\nimage subtraction residuals caused by uncertainties in the alignment and\nmatching of the two images. Since we wish to avoid a large fraction of \nspurious detections, we require the relatively high significance level of \n25$\\sigma$ from the limiting magnitude of our search.\n\n \n\\subsection{Surveying with a Network of 2.0m Telescopes}\\label{sec_f2}\n\n As discussed previously, SDSS DR5 covers only $\\sim$14\\% of the\n entire sky. 
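The absolute limiting magnitudes quoted in the previous subsection follow directly from the distance modulus mu = 5 log10(d / 10 pc); a minimal cross-check, assuming (as above) the 60 s $r$-band limit of 19.0 mag and A$_r$ = 0.22:

```python
import math

M_LIM_R = 19.0   # 25-sigma r-band limiting magnitude, 60 s exposure (AB)
A_R = 0.22       # typical Galactic r-band extinction assumed in the text

def absolute_limit(distance_mpc, m_lim=M_LIM_R, extinction=A_R):
    """Faintest absolute magnitude detectable at a given distance (Mpc)."""
    mu = 5 * math.log10(distance_mpc * 1e6 / 10)  # distance modulus
    return m_lim - mu - extinction

limits = {d: round(absolute_limit(d), 1) for d in (20, 70, 150)}
# limits == {20: -12.7, 70: -15.4, 150: -17.1}, matching the quoted values
```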
If we presently had the ability to compile a catalogue of low-metallicity\n galaxies for the whole sky as complete as that for the area of the SDSS DR5\n spectroscopic survey and a collection of seven 2.0m telescopes (or\n three to four 2.0m telescopes with double the time allocation) we would expect to detect\n roughly 7 times the CCSNe detected solely by the LT, i.e. $\\sim$9.3\n CCSNe\/yr. We could consider using a network of six to eight 2.0m robotic telescopes similar to the RoboNet global network of 2.0m\n telescopes consisting of the LT and the Faulkes Telescope North and the\n Faulkes Telescope South to perform this kind of survey. Another advantage of using a network of telescopes in both the northern and southern hemispheres as opposed to a single telescope is that we gain a greater sky coverage and can therefore target a greater number of galaxies within our search.\n \n The two obstacles that we would encounter if we were to use this strategy to search for CCSNe are firstly the generous amount of telescope\n time that we would require \\mbox{($\\sim$1\\,000 hrs yr$^{-1}$)} and secondly the present lack of\n data required to compile an all sky galaxy catalogue. Compiling a catalogue\n of all galaxies listed in the 2dF, 6dF, LEDA and SDSS DR5 within the redshift range\n \\mbox{0$<$z$<$0.04} results in a catalogue containing a total of 103\\,549 individual galaxies, only a fraction of which will be star-forming. 
Apart from this, we shall have to wait for PS1, the prototype\n 1.8m telescope of the Panoramic Survey Telescope and\n Rapid Response System (Pan-STARRS), which shall cover an area of 3$\\pi$ steradians of\n the sky to a depth exceeding that of SDSS DR5, in order to compile a far more\n complete all-sky low-metallicity galaxy catalogue.\n \n \n\\section{Strategy 2 : volume-limited searches with the Pan-STARRS all sky surveys} \\label{sec_g}\n\n Having considered a dedicated pointed low-metallicity galaxy survey to search for CCSNe both with one and with a network of 2.0m telescopes, we\n now turn our attention to an all-sky survey. Pan-STARRS is a\n set of four 1.8m telescopes with wide-field optics, each with a 7 square\n degree field of view, which will be constructed by the University of Hawaii. The\n prototype telescope PS1 has now achieved first light and\n is set to go online during 2008. PS1 will have the capability of\n surveying\n the whole available sky in less than a week at a rate of 6\\,000 square degrees per\n night, covering 30\\,000 square degrees of the sky per cycle\n \\citep{PS1_DRM}.\n Included in the list of tasks that the PS1 image reduction pipeline will\n perform is the subtraction of every image from a reference image in order to\n produce a database of residual transient objects that will include moving objects and static\n variables such as SNe.\n\nThere are two different strategies that one would employ to search for\nSNe (or transients of any type): volume-limited searches and\nmagnitude-limited searches. We will consider both of these. A volume-limited search has the advantage that it can quantify true rates of \ntransients. In addition, the radial limit can be chosen so that the \ntarget discoveries are bright enough to be followed in multi-wavelength\nstudies with complementary facilities (e.g. spectroscopic and photometric\nfollow-up with 2-8m telescopes). 
\n\n\n\\subsection{Monte Carlo Simulations}\\label{sec_h}\n\n Of all the modes in which PS1 shall be run, the 3$\\pi$ Steradian Survey\n (covering 30,000 square degrees of the sky)\n shall be the most effective when searching for nearby CCSNe. The survey aims to cover the whole sky 60 times in 3 years, that is 12 times\n in each of the five filters {\\it g, r, i, z} and {\\it y}. This aim has already\n taken historic weather patterns on Haleakala into consideration. The footprint of the survey\n completely covers the entire footprint of the SDSS DR5 spectroscopic\n survey and the AB limiting magnitude \\mbox{(25$\\sigma$)} in the $g$-band is stated\n to be 21.5\n mags for a single 60 second exposure \\citep{PS1_DRM}.\n \n Having an adapted MCS produce 100\\,000 potential SNe over the\n redshift range of our galaxy samples \\mbox{(0$<$z$<$0.04)}, we can again estimate the fraction of CCSNe that such a survey should detect.\n\nAnother future all-sky survey that has potential for SNe discoveries is \nGaia: the European Space Agency's `super-Hipparcos' satellite with\nthe objective of creating the most precise three-dimensional map of\nthe Galaxy \\citep{2005ASPC..338....3P}. The satellite shall have many\nadditional capabilities, including the ability to detect nearby\nSNe (within a few hundred Mpc), and, predicted for launch in December\n2011, it is a potential competitor of Pan-STARRS and\nLSST. \\cite{2003MNRAS.341..569B} have performed a feasibility study\nsimilar to this study for Gaia and have predicted that the satellite\nshall detect \\mbox{$\\sim$1\\,420 CCSNe yr$^{-1}$} using a magnitude-limited\nsurvey strategy. Capable of detecting objects brighter than 20th\nmagnitude, 1.5 magnitudes brighter than the $g$=21.5 limit for PS1,\nGaia will survey only an eighth of the volume\nthat PS1 shall survey. 
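The factor of eight quoted above is simple magnitude-limit arithmetic: a limit 1.5 mag brighter corresponds to a maximum distance smaller by a factor 10^(1.5/5) = 2, and hence to 2^3 = 8 times less volume. A quick check of this scaling:

```python
# Volume ratio implied by a difference in limiting magnitude:
# distance scales as 10**(dm/5), and volume as the cube of distance.
def volume_ratio(delta_mag):
    """How many times more volume the deeper survey reaches."""
    return (10 ** (delta_mag / 5.0)) ** 3

ratio = volume_ratio(21.5 - 20.0)  # PS1 (g = 21.5) vs Gaia (20th magnitude)
# ratio ~ 7.9, i.e. Gaia surveys roughly an eighth of the PS1 volume
```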
\\citet{2003MNRAS.341..569B} employed a galaxy\ncatalogue which more than likely neglected low-luminosity galaxies,\nwhich would result in the CCSN rate being under-predicted by a factor\nof up to $\\sim$2. Hence, if we scale the\n\\citeauthor{2003MNRAS.341..569B} numbers by $\\sim$16, the final\nnumbers should be comparable with our PS1 estimates, predicting\n\\mbox{$\\sim$22\\,720 CCSNe yr$^{-1}$}, which is very close to the rate\nthat we have predicted for PS1 using our MCSs, \\mbox{$\\sim$24\\,000\nCCSNe yr$^{-1}$} (see Table \\ref{CCSN_survey_rates}). However, we note\nthat these numbers are likely somewhat over-optimistic due to the\ntreatment of host galaxy extinction in our MCS.\n\n\\section{Conclusions}\nHaving determined oxygen abundances, star-formation rates and CCSN rates for all\nspectroscopically typed star-forming galaxies in the Sloan Digital Sky\nSurvey Data Release 5 within $z=0.04$, we have used Monte-Carlo simulations to predict the fraction of\nthese CCSNe that we can expect to detect using different survey\nstrategies. Using a single 2m telescope (with a standard CCD camera) \nsearch we predict a detection rate of\n$\\sim$1.3 CCSNe yr$^{-1}$ in galaxies with metallicities\n$12+\\log({\\rm O\/H})<8.2$, which are within a volume that will allow\ndetailed follow-up with 4m and 8m telescopes ($z=0.04$). With a\nnetwork of seven 2m telescopes we estimate $\\sim$9.3 CCSNe yr$^{-1}$ \ncould be found, although this would require more than \n1000\\,hrs of telescope time allocated to the network. \nWithin the same radial distance, a \nvolume-limited search with the future Pan-STARRS\nPS1 all-sky survey should uncover 12.5 CCSNe yr$^{-1}$ in low-metallicity\ngalaxies. Over a period of a few years this would allow a detailed comparison of their properties. 
We have also extended our \ncalculations to determine the total numbers of CCSNe that can potentially be \nfound in magnitude-limited surveys \nwith PS1 (24\\,000 yr$^{-1}$, within $z \\lesssim 0.6$), \nPS4 (69\\,000 yr$^{-1}$, within $z \\lesssim 0.8$) \nand LSST (160\\,000 yr$^{-1}$, within $z \\lesssim 0.9$).\n\nAll considered, the final strategy chosen for searching for CCSNe in low-metallicity environments will realistically involve both a volume-limited and a magnitude-limited all-sky survey: the volume-limited galaxy sample allows relative SN rates to be determined accurately, with some prior knowledge of the host galaxy characteristics, while the magnitude-limited survey retains the potential of detecting rare CCSN events that would otherwise have been missed had we only considered the volume-limited strategy. \n\nWith the huge number of CCSNe predicted to be detected, these all-sky surveys are set to serve as a catalyst for our understanding of CCSNe, including their varying characteristics with metallicity, the relative rates of the various types of SNe, and extremely rare events similar to SN 2006jc and SN 2006gy, as well as, more than likely, events of a kind not yet observed. \n\n\n \n\\begin{acknowledgements}\n\n Funding for the Sloan Digital Sky Survey (SDSS) and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web site is http:\/\/www.sdss.org\/.\n\n The SDSS is managed by the Astrophysical Research Consortium (ARC) for the Participating Institutions. 
The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, The University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, The Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington.\n \n This research has made use of the CfA Supernova Archive, which is funded in part by the National Science Foundation through grant AST 0606772.\n \n This work, conducted as part of the award \"Understanding the lives of\nmassive stars from birth to supernovae\" (S.J. Smartt) made under the\nEuropean Heads of Research Councils and European Science Foundation\nEURYI (European Young Investigator) Awards scheme, was supported by\nfunds from the Participating Organisations of EURYI and the EC Sixth\nFramework Programme. SJS and DRY thank the Leverhulme Trust and\nDEL for funding. 
SM acknowledges financial support from the Academy of Finland, project 8120503.\n\n\\end{acknowledgements}\n\n \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzlcom b/data_all_eng_slimpj/shuffled/split2/finalzzlcom new file mode 100644 index 0000000000000000000000000000000000000000..bf69b64ea0ffee7e345377136e65c22cc6abdcd8 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzlcom @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\begin{figure*}\n\\includegraphics{smearing-explanation.pdf}\n\\caption{The transition to the double fringes packet for a binary. \\rev{Gray} curves show the fringes of the individual components \\rev{with vertical dotted lines indicating the position of their centres}. The \\rev{black} curve is the detected fringe packet (sum). In comic strip order: (i) An unresolved binary superimposes two fringe systems and achieve maximum fringe contrast; (ii) Contrast loss arises when the binary is resolved because individual fringe packets do not overlap exactly; (iii) The packet is elongated and loses its original shape when the binary is sufficiently separated, this is the transition to (iv) two separate fringe packets are clearly seen. Cases (i), (ii) are standard in interferometry. Case (iv) is easily analysed, but helps to understand why smearing occurs: each fringe packet doesn't seem impacted, but the power in its fringes is diluted by the incoherent flux from the other one; the resulting visibility will be a constant, strictly smaller than one, independent \\rev{of} binary separation. This paper focuses on case (iii) that has not been thoroughly studied in the optical. 
Some of the notations of the paper are also reported: $\\beta^{ij}$, the decentering of the fringe packet of an on-axis source due to instrumental effects; $\\xobj[ij]{o}$ the OPD position shift of the fringe packet for an off-axis source; $\\phiobj[ij]{o}$ the same as the latter expressed in terms of a phase shift.}\n\\label{fig:smearing-explanation}\n\\end{figure*}\n\nLong-baseline interferometry is an observational technique used from the optical \\citep{MIC20} to the radio domain \\citep{RYL46,PAW46} that allows one to overcome the resolution limit of single-dish telescopes, as ultimately set by diffraction. To achieve such a goal, an ideal interferometer measures the complex degree of coherence and relates this so-called complex visibility to the object intensity distribution through a Fourier transform \\citep{VCI34,ZER38}. Practically speaking, interference fringes are formed and their contrast and shift will be used to retrieve partial or total information on the complex visibilities.\n\nThere are numerous sources of error and biases that have to be evaluated and as much as possible corrected in order to provide a proper estimation of the interferometric observables. Among them, \\emph{bandwidth smearing} occurs in finite bandwidth for \\rev{objects spanning an extended field of view}. The interferogram corresponding to a point-like source has a coherence length of the order of $R\\lambda$, where $R$ is the spectral resolution and $\\lambda$ the central wavelength. For two points of an extended sourc\\rev{e} separated by a distance $\\theta$ along the projected baseline length $B$ corresponding to two telescopes of the array, individual fringe packets are shifted with \\rev{respect to} each other by an optical path difference $\\theta B$. When the OPD shift $\\theta B$ becomes of the order of, or greater \\rev{than}, the fringe packet width, i.e. 
when $\\theta \\approx R\\lambda\/B$, the fringe packets of these points do not overlap correctly and bandwidth smearing of the interferogram occurs (see bottom left panel of Fig.~\\ref{fig:smearing-explanation}). In other words, one can consider that the coherence length of an interferogram $R\\lambda$ corresponds to an angular extension on the sky $\\theta \\approx R\\lambda\/B$: it is called the interferometric field of view. Objects composed of multiple \\rev{incoherent} sources, either discrete or continuous, are affected by the smearing when their extent becomes of the order of the interferometric field of view.\n\nFigure \\ref{fig:smearing-explanation} shows an illustration of that effect applied to the case of a binary system. Each of the sources \\rev{contributes} a fringe packet; the observed interferogram is their sum. The distance between the interferograms is proportional to the angular separation. We can distinguish four separation regimes: 1) the unresolved case; 2) the resolved case where the separation is a small fraction of the interferogram envelope; 3) the smeared regime where \\rev{the separation} is no longer a small fraction and interferometric estimators are altered; 4) the ``double packet'' regime where two fringe packets are well separated.\n\nWhile this effect \\rev{has been} known for decades \\citep{THO73}, unlike other biases it cannot be remedied by calibration. This was analysed in a review by \\citet{BRI89} in the radio-astronomy context, in which the observer had no other choice but to define, somewhat heuristically, the best compromise between observing performance and limiting the bandwidth smearing. However, modern radio techniques, using \\textit{a posteriori} software recombination, can overcome the problem in many situations by using several phase centres, around which smearing does not occur. In the optical and the infrared, software recombination is not technically feasible and bandwidth smearing must be dealt with. 
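To give a feel for the numbers, the interferometric field of view theta ~ R lambda / B introduced above can be evaluated for a representative near-infrared configuration (the values below are illustrative, not tied to a specific instrument):

```python
import math

MAS_PER_RAD = 180 / math.pi * 3600 * 1000  # milliarcseconds per radian

def smearing_fov_mas(R, wavelength_m, baseline_m):
    """Interferometric field of view theta ~ R * lambda / B, in mas."""
    return R * wavelength_m / baseline_m * MAS_PER_RAD

# Broad-band (R ~ 5) H-band fringes (1.65 microns) on a 100 m baseline:
fov = smearing_fov_mas(5, 1.65e-6, 100.0)  # ~17 mas
```

Adapting the spectral resolution R or the baseline B rescales this field linearly, which is the leverage discussed below.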
\\citet{ZHA07} \\rev{recommend limiting} the field of view $\\theta$ to $1\/5$ of \\rev{the} theoretical value \\rev{of the interferometric field of view}, i.e. $\\theta\\approx R\\lambda\/(5B)$ to remain in the resolved regime. For an interferometer working in the near-IR with 100\\,m baselines, this corresponds to 5--10 milliarcseconds of separation when a \\rev{wide-band filter ($\\lambda\/\\Delta\\lambda \\sim \\rev{5}$)} is used without spectral resolution. The main leverage to increase the interferometric field of view is adapting the spectral resolution or the baseline length. However, it very often comes at a prohibitive sensitivity cost (spectral resolution) or a loss of spatial \\rev{resolution} (baseline length).\n\nIn this paper, we present the first analytic calculation of the bandwidth smearing effect on the two main optical interferometric observables, namely the squared visibility and closure phase. We restricted the calculation to temporally encoded interferograms, including the so-called Fourier mode \\rev{(a full scan of the fringe packet)} and the \\rev{temporal} ABCD \\rev{(a 4-point scan of a central fringe)}, which are among the most popular optical schemes. Fourier mode has been or is being used at COAST \\citep{COAST}, IOTA with IONIC \\citep{IONIC} and FLUOR \\citep{IOTAFLUOR}, CHARA with FLUOR \\citep{CHARAFLUOR} and CLIMB \\citep{CLIMB}, VLTI with VINCI \\citep{VINCI}, PIONIER \\citep{PIONIER2,PIONIER}, and MIDI \\citep{MIDI}. \\rev{Temporal} ABCD is the choice at PTI \\citep{PTI} and the Keck Interferometer \\citep{KI}. 
It should be stressed that a similar line of reasoning can be used with very little adaptation to the 8-point time-encoded interferograms of NPOI \\citep{NPOI}, and, with more efforts, to spatially encoded interferograms such as in VLTI\/AMBER \\citep{AMBER} and static ABCD systems such as VLTI\/PRIMA \\citep{PRIMA}.\nThe derived formula\\newrev{e} can \\rev{be} applied to correct \\emph{any} squared visibility and closure phase analytic formula describing the object angular intensity distribution. We apply this corrective technique to the study of binary stellar systems. Indeed optical long baseline interferometry is a unique tool to study the astrometry of close binary systems with milli-arcsecond accuracy to provide direct means to measure accurate masses. Moreover very recently several attempts at searching for substellar companions \\citep{ABS11,ZHA11} are pushing the technique down to dynamical ranges where no adverse effects can be neglected. Since \\rev{most studies forgo} bandwidth smearing correction \\rev{without assessing the biases that may arise from such approximation}, we felt a proper treatment had become mandatory and would be useful in the future. For practical purposes we used the PIONIER instrument characteristics to provide an application of that work. PIONIER is currently being used at the Very Large Telescope Interferometer \\citep[VLTI,][]{VLTI} to combine four beams in the H band ($1.5$ to $1.8\\mu\\mathrm{m}$).\n\nSect.~\\ref{sec:hypnot} gives the analytic expression of the observables in the absence of atmospheric turbulence for an instrument working in fringe-scanning mode. Section~\\ref{sec:bin} is an application of these formulas to a binary star, which allows us to analyse the bias that smearing produces on the interferometeric observables and the model-fitted parameters of the binary. We also show there how simulated fringes of PIONIER are much better fitted with the smeared model than with the standard expression. 
Finally, section~\\ref{sec:atm} studies the impact of atmospheric turbulence on the observables, indicating that a moderate spectral resolution is enough to alleviate most of its effects.\n\n\\section{Modelling the bandwidth smearing: turbulence-free case}\n\\label{sec:hypnot}\n\\label{sec:ana}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=\\textwidth]{PIONIER-transmission.pdf}\n\\caption{Spectral transmission and fringe packet envelope for PIONIER, as measured on an internal calibration with the 3-channel spectral setting of the H band. The left column display\\newrev{s} the spectral transmission and instrumental phase for a telescope triplet, as contained in $\\textsflux[ij]{\\text{lamp}}(\\wavenum - \\wavenumzero)$. The right column shows the envelope and phase of the fringe packet, given by the Fourier transform of the latter. The slope of the instrumental phase translates into a fringe packet decentering, known as group delay. \\rev{Phases are expressed in radians.}}\n\\label{fig:PT}\n\\end{figure*}\n\n\n\\begin{table}\n\\caption{Principal notations of this paper.}\n\\label{tab:notations}\n\\begin{tabular}{ll}\n \\hline\\hline\n \\multicolumn{2}{l}{Indexing}\\\\\n $o$, $p$, $q$ & Source number (index)\\\\\n $i$, $j$, $k$ & Station number (superscript)\\\\\n \\hline\n \\multicolumn{2}{l}{Space and spatial frequency variables}\\\\\n $\\wavenum$, $\\wavenumzero$ & Wavenumber, central wavenumber\\\\\n $\\xi = \\wavenum - \\wavenumzero$ & Reduced wavenumber\\\\\n $\\opdvar$, $\\textopd[ij]$ & Optical path difference\\\\\n \\textxobj[ij]{o} & Fringe packet position in a perfect instrument\\\\\n $\\textphiobj[ij]{o} = 2\\pi\\wavenumzero\\textxobj[ij]{o}$\n & Fringe packet phase in perfect instrument\\\\\n $\\textphishift[ij]{}$ & Instrumental group delay (see Fig.~\\ref{fig:smearing-explanation})\\\\\n & $\\rightarrow$ Packet position is \\smash{$\\xobj[ij]{o} + \\phishift[ij]{}\/2\\pi\\wavenumzero$}\\\\\n \\hline\n \\multicolumn{2}{l}{Functions of 
wavenumber $\\wavenum$ or reduced wavenumber $\\xi$}\\\\\n $\\sfluxstar{o}(\\wavenum)$ & Spectrum of a point source\\\\\n $\\texttrans[i]{}(\\xi)$ & Transmission through an arm\\\\\n $\\loss[ij]{}(\\xi)$ & Instrumental contrast\\\\\n $\\insphi[ij]{}(\\xi)$ & Instrumental differential phase\\\\\n $\\textsflux[ij]{o}(\\xi)$ & The equivalent of $\\newrev{N_iN_j}$\\\\\n \\hline\n \\multicolumn{2}{l}{Functions of OPD $\\opdvar$ or phase $\\alpha = 2\\pi\\wavenumzero\\opdvar$}\\\\\n $\\textphasor[ij]{}(\\opdvar)$ & Complex coherent flux\\\\\n $\\textsmearing{}(\\alpha)$ & Complex smearing\\\\\n $\\textenvband{}(\\alpha)$ & Smearing amplitude\\\\\n $\\phiband{}(\\alpha)$ & Smearing phase\\\\\n\\hline\n \\multicolumn{2}{l}{Fluxes}\\\\\n $\\textflux{o}$ & Flux of a point source\\\\\n $\\textflux{}$ & Total flux\\\\\n $\\textflux[ij]{op}$ & Flux product equivalent\\\\\n\\hline\n \\multicolumn{2}{l}{Other}\\\\\n $\\dopd[ij]{}$ & OPD scanning speed\\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\nIn order to introduce the basic concepts of the data processing for fringe-scanning mode instruments, we remind \\rev{the reader} here how observables are derived \\rev{in monochromatic light}.\n\nIgnoring the atmosphere and instrumental signatures, the interferogram of a binary on baseline $ij$ can be written as\n\\begin{equation}\n N^{ij}(\\opdvar) = \\rev{N}_1 \\big[1 + \\cos 2\\pi\\wavenum\\opdvar\\big] \n + \\rev{N}_2\\big[ 1 + \\cos(2\\pi\\wavenum\\opdvar + \\phiobj[ij]{}) \\big]\\newrev{,}\n\\end{equation}\nwhere $\\rev{N}_1$ and $\\rev{N}_2$ are the fluxes of each component, $\\opdvar$ is the OPD \\rev{between the arms of the interferometer}, and $\\textphiobj[ij]{} = (2\\pi\\wavenum\\textbase[ij]\\cdot\\textpos{})$ is proportional to the binary separation $\\textpos{}$, the projected baseline $\\textbase[ij]$, and wavenumber $\\wavenum$.\n\nIt is convenient to use the coherent flux, a complex quantity representing the interferogram, from which the continuum $\\rev{N}_1 + 
\\rev{N}_2$ is removed and the negative frequencies are filtered out. In practice, one can take the Fourier transform of the interferogram, remove all frequencies but a small interval centred on the frequency of the fringes, and take the inverse Fourier transform. \\rev{The coherent flux can be written as}\n\\begin{equation}\n \\phasor[ij]{}(\\opdvar) = \\exp{2\\pi\\j\\wavenum\\opdvar} \n \\big[ \\rev{N}_1 + \\rev{N}_2\\, \\exp{\\j\\phiobj[ij]{}} \\big].\n \\label{eq:Mmono}\n\\end{equation}\n\nThe \\rev{square visibility amplitude} is obtained by dividing the power contained in the coherent flux by that in the continuum: \n\\begin{equation}\n\\begin{split}\n \\vsqPS[ij]{} &= \\frac{<|\\phasor[ij]{}(\\opdvar)|^2>_\\opdvar}{(\\rev{N}_1+\\rev{N}_2)^2}\\\\\n &= 1 - \\frac{4 \\rev{N}_1 \\rev{N}_2}{(\\rev{N}_1+\\rev{N}_2)^2} \\sin^2 \\frac{\\phiobj[ij]{}}2,\n\\end{split}\n \\label{eq:Vmono}\n\\end{equation}\n\\rev{where $<x>_\\opdvar$ denotes the average of the quantity $x$ over the OPD.} In practice, the power may be computed using the Fourier \\rev{transform} of the coherent flux, which is strictly equivalent (Parseval's identity). \n\nWhen a triplet of telescopes $ijk$ is used, the closure phase gives partial information on the phase, because it is independent \\rev{of} atmospheric turbulence. 
It is the argument of the bispectrum given by:\n\\begin{equation}\n\\begin{split}\n \\bisp[ijk]{} &=\\ <\\phasor[ij]{}(\\opd[ij](t))\n \\phasor[jk]{}(\\opd[jk](t))\n \\phasor[ki]{}(\\opd[ki](t))>_\\opdvar\\\\\n &= (N_1-N_2)^2 \n + 4\\rev{N}_1 \\rev{N}_2\n \\cos\\frac{\\phiobj[ij]{}}2 \n \\cos\\frac{\\phiobj[jk]{}}2 \n \\cos\\frac{\\phiobj[ki]{}}2 \n \\\\\n &\\quad -4\\j \\, \\rev{N}_1 \\rev{N}_2(\\rev{N}_1-\\rev{N}_2) \n \\sin\\frac{\\phiobj[ij]{}}2 \n \\sin\\frac{\\phiobj[jk]{}}2\n \\sin\\frac{\\phiobj[ki]{}}2 ,\n\\end{split}\n\\label{eq:Bmono}\n\\end{equation}\nwhere $\\textopd[ij]$, $\\textopd[jk]$, and $\\textopd[ki]$ are\nthe time-modulated OPDs on the three baselines, meeting the closure\nrelation $\\textopd[ij] + \\textopd[jk] + \\textopd[ki] = 0$. (Eq.~\\ref{eq:Bmono} gives \\rev{a compact, generic expression for the bispectrum in the same way \\citet{LEB12} did for the specific case of high-contrast binaries.})\n\nThe goal of this section is to describe the coherent flux, squared visibility, and closure phase of time encoded interferograms processed by means of Fourier analysis, when observing a source of arbitrary geometry in finite bandwidth. In other words, we seek to generalise Eqs.~(\\ref{eq:Mmono}, \\ref{eq:Vmono},~\\& \\ref{eq:Bmono}) and provide ready-to-use formulas to fit object models to smeared data. For the sake of simplicity we use a discrete formalism valid for a collection of point-like sources. The results presented here are easily generalised to systems of resolved, compact sources (Appendix~\\ref{ap:syscomp}) and to any system with our summations over a finite number of point-like sources replaced by integrations on the plane of the sky.\n\n\nThe most \\rev{frequently used} notations and symbols used in this section are given in Table~\\ref{tab:notations}. \n\n\n\\subsection{Interferogram}\n\\label{sec:interferogram}\n\nWe consider an interferometer with stations $i$, $j$, etc. 
separated by a baseline $\\base[ij]$ operating in a spectral channel centred on wavenumber $\\wavenumzero$. In the following developments we shall use $\\wavenum$, the wavenumber, and $\\xi = \\wavenum - \\wavenumzero$ as the ``reduced'' wavenumber. Without loss of generality, we assume that we observe an object made of several point sources $o$, $p$, etc. with positions $\\textpos{o}$, $\\textpos{p}$, etc. \\rev{in} the plane of the sky and spectra $\\textsfluxstar{o}(\\wavenum)$, $\\textsfluxstar{p}(\\wavenum)$, etc.\n\nThe interferometer measures the complex coherent flux of the electromagnetic field by forming dispersed fringes on a detector. In our case, fringes are obtained by a temporal modulation of the optical path difference (OPD) $\\opdvar$ around an ideal position $\\xobj[ij]{o}$. This position is related to the angular position of the source in the sky $\\pos{o}$ through the relation $\\xobj[ij]{o} = \\base[ij]\\ensuremath{\\!\\cdot\\!}\\pos{o}$. Each of the point sources contributes a quasi-monochromatic interferogram per instrument spectral channel. 
Once the incoherent photometric contribution has been removed from the two telescopes and the negative frequencies have been filtered out in Fourier space, the complex coherent flux of one source reads: \n\n\\begin{equation}\n \\phasor[ij]{o}(\\xi,\\opdvar) = \n 2\\sflux[ij]{o} (\\xi)\n \\exp{\n 2\\jpi(\\wavenumzero+\\xi)(\\xobj[ij]{o} + \\opdvar) \n }\n \\label{eq:phasormono}\n\\end{equation}\n\nwhere $\\sflux[ij]{o} (\\xi)$ is the \\rev{``instrumental'' coherent flux density} \\rev{primarily} due to the wavelength-dependent instrumental effects\\rev{, but also to some extent to the spectrum of the source.} We can define this coherent flux density as:\n\n\\begin{equation}\n\\sflux[ij]{o}(\\xi) = \\loss[ij]{}(\\xi)\\sqrt{\\trans[i]{}(\\xi)\\trans[j]{}(\\xi)}\n \\,\\exp{\\j \\insphi[ij]{}(\\xi)}\n \\,\\sfluxstar{o}(\\wavenumzero + \\xi)\n \\label{eq:cohernorm}\n\\end{equation}\nwhere:\n\\begin{itemize}\n \\item $\\loss[ij]{}(\\xi)$ is the instrumental visibility, or instrumental contrast loss, \\newrev{and} has different origins such as differential polarisation or wavefront aberrations; \n \\item $\\insphi[ij]{}(\\xi)$ is the instrumental differential phase, \\newrev{and} arises from a difference of optical path lengths between the arms of the interferometer that depends on the wavelength. For example, this can be the case when light travels through glass (e.g.\\ waveguides, dichroics) that does not have the same refractive index dependence on wavelength.\n \\item $\\trans[i]{}(\\xi)$ is the spectral transmission through an arm including detector efficiency.\n\\end{itemize}\nWe assume that these instrumental signatures do not depend on the \\newrev{OPD position in the interferogram}, which is a good approximation in fringe-scanning mode\\newrev{, since the OPD modulation is obtained through a few micrometres of air or vacuum, with negligible dispersion. 
In other words, we assume that the instrumental differential phase is a static term that is not impacted by the movement of the differential delay lines.} However, this is usually not true for spatially dispersed fringes \\citep[see][for a generic expression for the fringes]{TAT06}, so that our approach needs adaptation to instruments like AMBER \\citep{AMBER}.\n\n\nIt is now possible to describe the coherent flux for an arbitrary number of sources and across a wider spectral bandpass:\n\n\\begin{equation}\n \\phasor[ij]{}(\\opdvar) = \n \\intinf \\sum_o \\phasor[ij]{o}(\\xi, \\opdvar) \\idiff\\xi. \n \\label{eq:phasorgen}\n\\end{equation}\n\nFor practical purposes, we use the Fourier transform\n\\begin{equation}\n \\IFT{f}(\\opdvar) = \\intinf f(\\xi) \\exp{2i\\pi\\xi\\opdvar} \\idiff\\xi,\n\\end{equation}\nsubstitute Eq.~(\\ref{eq:phasormono}) into Eq.~(\\ref{eq:phasorgen}), and\nobtain \n\\begin{align}\n \\phasor[ij]{} (\\opdvar) = \n \\sum_o\n 2\\IFT{\n \\sflux[ij]{o}\n }(\\xobj[ij]{o} + \\opdvar)\n \\,\\exp{2\\jpi\\wavenumzero\\opdvar + \\j\\phiobj[ij]{o}},\n \\label{eq:def:phasor}\n\\end{align}\nwhere $\\textphiobj[ij]{o} = 2\\pi\\wavenumzero\\textxobj[ij]{o}$. In the following, we will use the coherent flux expression (Eq.~\\ref{eq:def:phasor}) to compute the most \\rev{commonly used} interferometric observables, i.e. the square visibility and the closure phase. In practice, $\\textsflux[ij]{o}$ is not known a priori. However, it can be inferred from fringes obtained on an internal lamp. The coherent flux of the lamp fringes yields $\\textsflux[ij]{\\text{lamp}}$ (see Eq.~\\ref{eq:def:phasor}). 
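The monochromatic processing recalled at the beginning of this section (Eqs.~\\ref{eq:Mmono}~\\& \\ref{eq:Vmono}) and the Fourier filtering used to isolate the coherent flux can be checked with a short numerical sketch. The fluxes, binary phase, and sampling below are arbitrary illustrative values, and `numpy` merely stands in for an actual pipeline:

```python
import numpy as np

# Illustrative values (not PIONIER data): component fluxes, binary
# phase, central wavenumber, and an OPD scan spanning an integer
# number of fringe periods (so the FFT has no spectral leakage).
N1, N2, phi = 1.0, 0.6, 1.2        # fluxes and binary phase (rad)
lam0 = 1.65e-6                     # wavelength (m), H band
sigma = 1.0 / lam0                 # wavenumber (1/m)
n = 4096
delta = np.arange(n) * (48 * lam0 / n)   # scanned OPD (m), 48 fringes

# Monochromatic two-telescope interferogram of a binary, as in the text.
N = N1 * (1 + np.cos(2*np.pi*sigma*delta)) \
  + N2 * (1 + np.cos(2*np.pi*sigma*delta + phi))

# Coherent flux: keep only a narrow band around the positive fringe
# frequency in Fourier space, then transform back.
spec = np.fft.fft(N)
freq = np.fft.fftfreq(n, d=delta[1] - delta[0])
mask = (freq > 0) & (np.abs(freq - sigma) < 0.3 * sigma)
M = np.fft.ifft(np.where(mask, spec, 0.0))

# Squared visibility: power in the coherent flux over the continuum.
# The factor 2 restores the power removed with the negative frequencies.
V2 = np.mean(np.abs(2 * M)**2) / (N1 + N2)**2

# Analytic monochromatic value for a binary.
V2_ana = 1 - 4*N1*N2 / (N1 + N2)**2 * np.sin(phi/2)**2
```

With an integer number of fringe periods in the scan, the filtered estimate matches the analytic value to numerical precision; on real scans, windowing effects make the agreement only approximate.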
\\rev{If both the spectrum of the source $\\textsflux[\\star]{o}$ and that of the lamp $\\textsflux{\\text{lamp}}$ are known, $\\textsflux[ij]{o} = \\textsflux[ij]{\\text{lamp}} \\textstrans[ij]{\\text{int}} \\, (\\textsflux[\\star]{o}\/\\textsflux{\\text{lamp}})$ (see Eq.~\\ref{eq:cohernorm}), where $\\textstrans[ij]{\\text{int}}$ is the transmission of the interferometer before the calibration lamp. The amplitude of the VLTI transmission is a smooth function of wavelength that can be considered constant. Its phase results from dispersive elements in the optical path. The optical elements of the VLTI before PIONIER are all in reflection and the most dispersive ones (the M9 dichroics) have been designed to display the least differential dispersion, so that the dispersion is dominated by the air in the non-evacuated delay line. In the rest of this paper, we have considered near-zenithal observations for which the interferometric delay is small, so that the air dispersion can be ignored, as Appendix~\\ref{ap:gd} shows. While the presence of dispersion in non-zenithal observations has a significant impact on the amount of smearing, it neither changes its order of magnitude nor the general conclusions of this paper. When the atmospheric dispersion must be tackled, it can be done either explicitly (Appendix~\\ref{ap:gd} explains how) or implicitly by letting the parameters of Sect.~\\ref{sec:isr} free in model fits, as \\citet{ZHA07} do for the spectral resolution.}\n\nAs an illustration, we show in the left panels of Fig.~\\ref{fig:PT} the spectral coherence transmission \\textsflux[ij]{\\text{lamp}} (amplitude and phase) that we measured on the internal source of PIONIER using three spectral channels across the H band on three baselines. The right panels correspond to the coherent flux of the fringes \\textphasor[ij]{\\text{lamp}} (amplitude and phase). 
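The link between a spectral phase slope and the fringe-packet decentering visible in Fig.~\\ref{fig:PT} can be illustrated with a minimal sketch: a Gaussian band with a purely linear instrumental phase is Fourier-transformed, and the envelope maximum is compared with the expected group delay. The band parameters are assumed values, not PIONIER measurements:

```python
import numpy as np

lam0 = 1.65e-6              # central wavelength (m), H band
sigma0 = 1.0 / lam0         # central wavenumber (1/m)
R = 18.0                    # assumed spectral resolution
dsig = sigma0 / R           # FWHM of the Gaussian band
gamma = 5 * 2*np.pi         # assumed group-delay slope (rad)

# Gaussian band with a purely linear phase term gamma * xi / sigma0.
xi = np.linspace(-2*dsig, 2*dsig, 1001)
band = np.exp(-4*np.log(2) * (xi/dsig)**2) * np.exp(1j * gamma * xi / sigma0)

# Fringe-packet envelope: modulus of the Fourier transform of the band,
# sampled on an OPD grid around the nominal zero OPD.
x = np.linspace(-20*lam0, 20*lam0, 2001)
envelope = np.abs(np.exp(2j*np.pi * np.outer(x, xi)) @ band)

# With this sign convention the packet centre sits at -gamma/(2 pi sigma0),
# i.e. a decentering of 5 fringes for gamma = 5 * 2pi.
x_peak = x[np.argmax(envelope)]
```

The measured peak position coincides with the group delay to within the OPD grid step, which is the decentering seen in the right panels of Fig.~\\ref{fig:PT}.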
\n\n\n\\subsection{Instrumental spectral response}\n\\label{sec:isr}\nIn this paper, after providing generic formulas using Fourier formalism, we will also give closed-form expressions for direct use. To do so, we need an analytic description of the instrumental transmission ($\\texttrans[i]{}$) and differential phase ($\\textinsphi[ij]{}$). PIONIER's \\rev{instrumental coherent flux density} is obtained on a calibration lamp (Fig.~\\ref{fig:PT}, left panels)\\newrev{. It} displays a near-quadratic behaviour of the differential phase and a spectral transmission intermediate between top-hat and Gaussian functions.\n\nWe therefore describe the instrumental differential phase as\n\\begin{equation}\n \\insphi[ij]{} (\\xi) = \\insphi[ij]{}(0) + \\phishift[ij]{} (\\xi\/\\wavenumzero) + \\disp[ij]{} (\\xi\/\\wavenumzero)^2. \n \\label{eq:hyp:insphi}\n\\end{equation}\n\nThe linear term $\\textphishift[ij]{}$ in the instrumental differential phase $\\textinsphi[ij]{}(\\xi)$ translates into a fringe packet shift of $\\textphishift[ij]{}\/2\\pi\\wavenumzero$ with respect to the nominal zero OPD (see Fig.~\\ref{fig:smearing-explanation}, bottom right panel). This shift is called the group delay. In a single-spectral-channel interferometer, it is possible to zero it by means of fringe tracking. When several spectral channels are observed at the same time, it is no longer possible to do so in all channels simultaneously. \\rev{For instance, if a central \\rev{spectral} channel is centred at zero OPD, adjacent channels may be shifted with respect to it if there is a differential behaviour of the dispersive elements (such as waveguides, dichroics, or air whose refractive index depends on wavelength) in the beam paths before the recombiner. In the bottom panels of Fig.~\\ref{fig:PT} (baseline 1-3), the central \\rev{spectral} channel is approximately centred at zero OPD (the solid line on the right panel \\newrev{shows the envelope of the fringe packet, i.e. 
the amplitude of the coherent flux}) with a slope of the phase averaging to $\\approx 0$ (same line of the left panel). The adjacent channels feature some shift (dashed lines on the right panels) and non-zero phase slope (same lines on the left). Appendix~\\ref{ap:gd} gives a further description of the group delay and its correction through fringe-tracking.} \n\nThe quadratic term in the instrumental differential phase $\\disp[ij]{}$ has a less visible impact on the fringe packet.\n\nWe will give results both for Gaussian and top-hat transmissions of FWHM $\\dwavenum{}$:\n\\begin{align}\n \\transG[i]{}(\\xi) &= \\wideexp{-\\frac{4 \\log 2}{\\dwavenum{}^2} \\xi^2},\n \\label{eq:hyp:bandpass}\\\\\n \\transH[i]{}(\\xi) &=\n \\begin{cases}\n 1 \\quad &\\text{if $|\\xi| \\le \\dwavenum{}\/2$},\\\\\n 0 \\quad &\\text{otherwise}.\n \\end{cases}\n\\end{align}\n\n\n \n\n\n\\subsection{Square visibility amplitude}\nThe square visibility amplitude is obtained from the coherent flux using:\n\\begin{equation}\n \\vsqPS[ij]{} \n = \\frac1{4\\normtot[ij]{}}\n \\intinf \\phasor[ij]{}(\\opdvar)\n \\!\\cdot\\! \\conj{\\phasor[ij]{}(\\opdvar)} \\idiff\\opdvar,\n \\label{eq:def:vsqPS}\n\\end{equation}\nwhere \\textnormtot[ij]{} is a normalisation factor that relates to the total flux of the target ($\\propto \\textfluxtot{}^2$) and \\rev{$\\conj{x}$ stands for the complex conjugate of $x$}. 
In the previous equation, we substitute Eq.~(\\ref{eq:def:phasor}) and expand the product into a double sum to find\\rev{:}\n\\begin{equation}\n \\vsqPS[ij]{}\n = \\frac1{\\normtot[ij]{}} \\sum_{o,p} \n \\exp{\\j(\\phiobj[ij]{o} - \\phiobj[ij]{p})}\n \\intinf\n \\IFT{\\sflux[ij]{o}}(\\xobj[ij]{o} + \\opdvar) \n \\IFT{\\sflux[ij]{p}}(-\\xobj[ij]{p} - \\opdvar) \n \\idiff\\opdvar.\n\\end{equation}\nUsing the change of variables $\\opdvar \\rightarrow u = \\opdvar + \\textxobj[ij]{o}$, a correlation of Fourier transforms is recognised and simplified into the Fourier transform of a product. Thus,\n\\begin{equation}\n \\vsqPS[ij]{} = \\frac1{\\normtot[ij]{}} \\sum_{o, p} \n \\IFT{\\sflux[ij]{o}\\sflux[ji]{p}}(\\xobj[ij]{o} - \\xobj[ij]{p})\n \\exp{\\j(\\phiobj[ij]{o} - \\phiobj[ij]{p})}.\n\\end{equation}\n\nThe bandwidth smearing is contained in $\\IFT{\\sflux[ij]{o}\\sflux[ji]{p}}$. It\ncan be made clearer by introducing the complex smearing\n\\begin{equation}\n \\smearing[ij]{op}(\\alpha) = \\frac{\n \\IFT{\\sflux[ij]{o}\\sflux[ji]{p}}(\\alpha \/ 2\\pi\\wavenumzero) \n }{ \\IFT{\\sflux[ij]{o}\\sflux[ji]{p}}(0)},\n \\label{eq:gen:V2smearing}\n\\end{equation}\n\\rev{where $\\alpha$ is an angular variable that is linked to the OPD by the relation $\\alpha = 2\\pi\\wavenumzero\\opdvar$.} It \\rev{is} convenient to use the amplitude and phase \\rev{of the smearing}: $\\textenvband[ij]{op} = |\\textsmearing[ij]{op}|$ is the contrast loss due to smearing and $\\textphiband[ij]{op} = \\arg \\textsmearing[ij]{op}$ is a phase shift induced by it. 
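A short numerical sketch may help build intuition for the amplitude and phase of the complex smearing (Eq.~\\ref{eq:gen:V2smearing}): for a band that is symmetric in $\\xi$ the smearing is real, whereas a tilted (asymmetric) band produces a non-zero smearing phase. The top-hat band and the ad hoc linear tilt below are illustrative assumptions standing in for the product $\\textsflux[ij]{o}\\textsflux[ji]{p}$:

```python
import numpy as np

sigma0 = 1.0 / 1.65e-6      # central wavenumber (1/m), assumed H band
R = 18.0                    # assumed spectral resolution
dsig = sigma0 / R           # band FWHM
xi = np.linspace(-dsig, dsig, 2001)    # reduced wavenumber grid

def smearing(s, alpha):
    """Complex smearing: normalised Fourier transform of the band s,
    evaluated at the OPD alpha / (2 pi sigma0)."""
    x = alpha / (2*np.pi*sigma0)
    return np.sum(s * np.exp(2j*np.pi * xi * x)) / np.sum(s)

tophat = (np.abs(xi) <= dsig/2).astype(float)  # symmetric band
skewed = tophat * (1 + 1.5 * xi / dsig)        # ad hoc tilted band

alpha = 2*np.pi * 3.0       # ~3 fringes of separation between sources
g_sym = smearing(tophat, alpha)
g_skw = smearing(skewed, alpha)

E_sym, P_sym = abs(g_sym), np.angle(g_sym)     # amplitude and phase
E_skw, P_skw = abs(g_skw), np.angle(g_skw)
```

For the symmetric band the phase vanishes; the tilted band yields a phase of order $0.1$~rad, the kind of term that an asymmetric bandpass or a wavelength-dependent contrast introduces.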
We also define the flux product equivalent---the equivalent to $\\flux{o}\\flux{p}$ in the monochromatic case---as\n\\begin{equation}\n \\norm[ij]{op} = \\intinf \\sflux[ij]{o}(\\xi)\\sflux[ji]{p}(\\xi)\\idiff\\xi.\n \\label{eq:def:norm}\n\\end{equation}\nWith these definitions, we can rearrange the square visibility amplitude:\n\\begin{equation}\n\\begin{split}\n \\vsqPS[ij]{} =\n \\sum_o & \\frac{\\norm[ij]{oo}}{\\normtot[ij]{}} \n + \\sum_{o < p}\n \\Bigg[\\frac{2\\norm[ij]{op}}{\\normtot[ij]{}}\n \\envband[ij]{op}(\\phiobj[ij]{o}-\\phiobj[ij]{p})\\\\\n &\\times \\cos \\left(\\phiobj[ij]{o}-\\phiobj[ij]{p} \n + \\phiband[ij]{op}(\\phiobj[ij]{o}-\\phiobj[ij]{p})\n \\right) \\Bigg]. \n\\end{split}\n\\label{eq:gen:vsqPS}\n\\end{equation}\nThese results are independent of the instrumental phase $\\insphi[ij]{}$. If $\\textenvband[ij]{op} = 1$ and $\\textphiband[ij]{op} = 0$ (no smearing), this formula is equivalent to the monochromatic case (Eq.~\\ref{eq:Vmono} in the case of a binary). In practice, model-fitting of square visibility amplitudes by multiple stellar systems uses Eqs.~(\\ref{eq:gen:V2smearing}, \\ref{eq:def:norm},~\\& \\ref{eq:gen:vsqPS}). Knowledge of $\\textsflux[ij]{o}$, needed in Eqs.~(\\ref{eq:gen:V2smearing} \\& \\ref{eq:def:norm}), can be inferred from fringes obtained on a calibration lamp (or a calibrator) if the spectra of both the lamp and the source $o$ are known, as we discussed in Sect.~\\ref{sec:interferogram}. \n\nWhen the different sources share the same spectrum, i.e. $\\sfluxstar{o}(\\xi) \\propto \\sfluxstar{p}(\\xi)$, we may express the visibility as a function of the individual fluxes \\textflux{o} and the total flux \\textfluxtot{}. In Eq.~\\ref{eq:gen:vsqPS}, we then use the flux products in lieu of the flux product equivalents, i.e. 
$\\textflux[ij]{op} = \\ensuremath{V_\\text{ins}}\\textflux{o}\\textflux{p}$ and $\\textflux[ij]{} = \\textflux{}^2$, where \n\\begin{equation}\n \\ensuremath{V_\\text{ins}}^2 = \\intinf \\loss[ij]{}(\\xi)^2\\trans[i]{}(\\xi)\\trans[j]{}(\\xi)\n \\sfluxstar{}(\\xi)^2 \\idiff\\xi\\, \\Big\/ \\intinf \\sfluxstar{}(\\xi)^2 \\idiff\\xi \n\\end{equation}\nis the ``instrumental'' square visibility amplitude. Note that $\\ensuremath{V_\\text{ins}}$ also depends on the spectral profile. It only disappears in the calibration if the calibrator has the same spectral profile as the source. \n\nIn the cases of the Gaussian and top-hat transmissions with FWHM $\\dwavenum{}$ around central wavenumber $\\wavenumzero$ and a constant contrast loss $\\loss[ij]{}$ in the spectral channel, the smearing is purely real\n($\\phiband[]{} = 0$) and\n\\begin{subequations}\n\\label{eq:easy:vsqPS}\n\\begin{align}\n \\envbandH{}(\\alpha) \n &= \\sinc\\left(\\frac{\\alpha}{2\\resol{}}\\right),\n \\label{eq:C:vsqPS}\n \\\\\n \\envbandG{}(\\alpha) \n &= \\wideexp{\\left(\n -\\frac{\\alpha^2}{32\\resol{}^2\\log 2} \n \\right)},\n \\label{eq:gauss:vsqPS}\n\\end{align}\n\\end{subequations}\nwhere $\\resol{} = \\wavenumzero \/ \\dwavenum{}$ is the spectral resolution. For small enough baselines, we have shown in Appendix~\\ref{ap:smallsmearing} that an exponential formula can be used by properly choosing the value of $\\resol{}$. On real data, $\\resol{}$ will need to be set to a value that differs from the spectral resolution in order to account for the departure from a Gaussian profile and the wavelength dependence of the contrast. In practice, a model fit of smeared data may leave it as a free parameter. If high precision is needed, the asymmetry of the spectral band and the slope of $\\loss[ij]{}$ give a non-zero smearing phase $\\textphiband[]{}$. 
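The closed-form envelopes above can be cross-checked against the defining integral (Eq.~\\ref{eq:gen:V2smearing}) by direct numerical integration; the spectral resolution $\\resol{} = 18$ used below is an assumed value close to one of the PIONIER settings:

```python
import numpy as np

sigma0, R = 1.0/1.65e-6, 18.0     # assumed central wavenumber, resolution
dsig = sigma0 / R                 # FWHM of the spectral channel
xi = np.linspace(-3*dsig, 3*dsig, 6001)

def env_numeric(s2, alpha):
    """Smearing amplitude from the defining integral: normalised Fourier
    transform of the band product s2 = s_o * s_p at OPD alpha/(2 pi sigma0)."""
    x = alpha / (2*np.pi*sigma0)
    return abs(np.sum(s2 * np.exp(2j*np.pi * xi * x)) / np.sum(s2))

tophat2 = (np.abs(xi) <= dsig/2).astype(float)   # top-hat band; its square is itself
gauss2 = np.exp(-8*np.log(2) * (xi/dsig)**2)     # square of the Gaussian band

for alpha in 2*np.pi * np.array([0.5, 1.0, 2.0, 4.0]):
    y = alpha / (2*R)
    closed_tophat = np.sin(y) / y                          # sinc law
    closed_gauss = np.exp(-alpha**2 / (32 * R**2 * np.log(2)))
    assert abs(env_numeric(tophat2, alpha) - closed_tophat) < 5e-3
    assert abs(env_numeric(gauss2, alpha) - closed_gauss) < 1e-4
```

The agreement degrades for the top-hat only through the discretisation of the band edges, which is why a slightly looser tolerance is used there.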
Cubic developments for the smearing terms $\\textenvband[]{}$ and $\\textphiband[]{}$ are given in Appendix~\\ref{ap:smallsmearing}.\n\n\n\n \n\n\n\n\\subsection{Closure phase}\n\\label{sec:ana:clo}\nA triple correlation or its Fourier transform, the bispectrum, or an equivalent method, is generally used to determine the closure phase \\citep{LOH83,ROD86}. The determination of the closure phase in direct space uses the phase of the bispectrum, given by:\n\\begin{equation}\n\\bispDS[ijk]{} = \\intinf \n \\phasor[ij]{}(\\opd[ij](t)) \n \\phasor[jk]{}(\\opd[jk](t)) \n \\phasor[ki]{}(\\opd[\\rev{ki}](t)) \n \\idiff t,\n\\label{eq:bispDS:1}\n\\end{equation}\nwhere $t$ is time in the case of linear OPD variations. By substitution of Eq.~(\\ref{eq:phasorgen}) into Eq.~(\\ref{eq:bispDS:1}) and writing $\\textopd[ij](t) = \\textdopd[ij] t$\n\\begin{equation}\n \\bispDS[ijk]{} =\n \\sum_{o,p,q} \n \\intinf\n \\phasor[ij]{o}(\\dopd[ij] t) \n \\phasor[jk]{p}(\\dopd[jk] t) \n \\phasor[ki]{q}(\\dopd[ki] t)\n \\idiff t.\n\\label{eq:def:bispDS}\n\\end{equation}\nIt follows from Eqs.~\\newrev{(\\ref{eq:def:phasor} \\& \\ref{eq:def:bispDS})} and closure relation $\\textdopd[ij] + \\textdopd[jk] + \\textdopd[ki] = 0$ that\n\\begin{equation}\n\\begin{split}\n \\bispDS[ijk]{} &\\propto\n \\sum_{o, p, q}\n \\Bigg[\n \\exp{i(\\phiobj[ij]{o} + \\phiobj[jk]{p} + \\phiobj[ki]{q})}\\\\\n &\\times\n \\intinf \n \\IFTsflux[ij]{o}(\\xobj[ij]{o} + \\dopd[ij] t)\n \\IFTsflux[jk]{p}(\\xobj[jk]{p} + \\dopd[jk] t)\n \\IFTsflux[ki]{q}(\\xobj[ki]{q} + \\dopd[ki] t)\n \\idiff t\n \\Bigg].\n\\end{split}\n\\label{eq:int:bispDS}\n\\end{equation}\nUsing the change of variables $t \\rightarrow u = \\xobj[ij]{o}\/\\textopd[ij] + t$, a triple cross-correlation of Fourier transforms can be recognised and expressed as the two-dimensional Fourier transform \n\\begin{equation}\n \\IFTtd{\\ f \\ }(\\opdvar_1, \\opdvar_2) \n = \\iintinf f(\\xi_1, \\xi_2) \n \\exp{2\\j\\pi(\\xi_1\\opdvar_1 + \\xi_2\\opdvar_2)} \n 
\\idiff\\xi_1\\idiff\\xi_2\n\\end{equation}\nof the triple product\n\\begin{equation}\n \\striple[ijk]{opq}(\\xi_1, \\xi_2) = \n \\sflux[ij]{o}(\\xi_1) \\sflux[jk]{p}(\\xi_2) \\sflux[ki]{q} \n \\Big(\n - \\frac{\\dopd[ij]}{\\dopd[ki]} \\xi_1 \n - \\frac{\\dopd[jk]}{\\dopd[ki]} \\xi_2\n \\Big).\n \\label{eq:def:striple}\n\\end{equation}\nThe bispectrum therefore reads\n\\begin{equation}\n\\begin{split}\n \\bispDS[ijk]{} \\propto \n \\sum_{o, p, q} \\Bigg[\n \\IFTstriple[ijk]{opq} \\Big(\n \\phiobj[ij]{o} - \\frac{\\dopd[ij]}{\\dopd[ki]} \\phiobj[ki]{q},& \n \\frac{\\dopd[jk]}{\\dopd[ki]} \\phiobj[ki]{q}\n - \\phiobj[jk]{p}\n \\Big)\\\\\n &\\times \\exp{\\j\\left(\\phiobj[ij]{o} + \\phiobj[jk]{p} + \\phiobj[ki]{q}\\right)}\n \\Bigg].\n\\end{split}\n\\end{equation}\nThe bandwidth smearing is contained in $\\IFTstriple[ijk]{opq}$. In order to make it clearer, we need to introduce several terms. The triple flux product equivalent (corresponding to $\\flux{o}\\flux{p}\\flux{q}$ in the monochromatic case) is given by \n\\begin{equation}\n \\triple[ijk]{opq} = \\left| \\IFTstriple[ijk]{opq}(0, 0) \\right|,\n \\label{eq:gen:triple}\n\\end{equation}\nthe ``instrumental'' closure phase by\n\\begin{equation}\n \\insphi[ijk]{opq} = \\arg \\IFTstriple[ijk]{opq}(0, 0), \n \\label{eq:gen:insphi}\n\\end{equation}\nand the smearing by\n\\begin{equation}\n \\smearing[ijk]{opq}(\\phivar_1, \\phivar_2) = \n \\IFTstriple[ijk]{opq}( \\phivar_1 \/ 2\\pi\\wavenumzero, \n -\\phivar_2 \/ 2\\pi\\wavenumzero) \n \\,\/\\,\\IFTstriple[ijk]{opq}(0, 0).\n \\label{eq:gen:smearing}\n\\end{equation}\nThe ``instrumental'' closure phase is a flux-weighted mean over the spectral channel and thus also depends on the spectrum of the source. The triple flux product equivalent can be simplified to the triple flux product ($\\texttriple[ijk]{opq} \\propto \\textflux{o}\\textflux{p}\\textflux{q}$) when the sources have the same spectrum, i.e. $\\textsfluxstar{o}(\\xi) \\propto \\textsfluxstar{p}(\\xi)$. 
Note that the instrumental closure phase cancels out in the calibration only if the sources $o$, $p$, $q$ and the calibrator all share the same spectrum.\n\nWith these notations, the bispectrum reads\n\\begin{equation}\n\\begin{split}\n \\bispDS[ijk]{} \\propto \\sum_{o, p, q} \n \\Bigg[\n \\smearing[ijk]{opq}\n \\Big(\n \\phiobj[ij]{o} - &\\frac{\\dopd[ij]}{\\dopd[ki]} \\phiobj[ki]{q}, \n \\frac{\\dopd[jk]}{\\dopd[ki]} \\phiobj[ki]{q}\n - \\phiobj[jk]{p}\n \\Big)\\\\\n &\\times\\triple[ijk]{opq} \\exp{i\\left( \\phiobj[ij]{o} + \\phiobj[jk]{p} + \\phiobj[ki]{q}\n + \\insphi[ijk]{opq}\n \\right)}\n \\Bigg].\n\\end{split}\n\\label{eq:gen:bispDS}\n\\end{equation}\nIf $\\textsmearing[ijk]{opq} = 1$ (no smearing) and $\\insphi[ijk]{opq} = 0$ (no bandwidth-related differential phase), the formula is equivalent to the monochromatic case (Eq.~\\ref{eq:Bmono} for a binary). In practice, Eqs.~(\\ref{eq:def:striple}, \\ref{eq:gen:triple}, \\ref{eq:gen:insphi}, \\ref{eq:gen:smearing}, \\& \\ref{eq:gen:bispDS}) allow us to model-fit multiple stellar systems to smeared interferometric data. The knowledge of $\\textsflux[ij]{o}$ needed in Eq.~(\\ref{eq:def:striple}) can be inferred from calibration fringes obtained on an internal lamp (or a calibrator), as discussed in Sect.~\\ref{sec:interferogram}. \n\n\\rev{This modelling} can be further simplified using an analytic description of the bandpass. In that case, Eqs.~(\\ref{eq:gen:bispDS}~\\& \\ref{eq:bispDS}) can be used for the model fit of closure phases. In our cases of top-hat and Gaussian transmission of FWHM \\dwavenum{}, with a linear instrumental phase, we reorder baselines so that $\\textdopd[ki]$ has the largest absolute value, and we can assume it negative without loss of generality. 
Then, the smearing can be simplified to\n\\begin{subequations}\n\\label{eq:bispDS}\n\\begin{align}\n \\smearingH[ijk]{}(\\phivar_1, \\phivar_2) &\\propto \n \\sinc\\left(\n \\frac{\\phivar_1 + \\phishift[ijk]{}}{2\\resol{}}\n \\right)\n \\sinc\\left(\n \\frac{\\phivar_2 + \\phishift[ijk]{}}{2\\resol{}}\n \\right)\n\\label{eq:gate:bispDS}\n\\\\\n \\smearingG[ijk]{}(\\phivar_1, \\phivar_2) &\\propto\n \\exp{ \n - \\frac{\n (\\phishift[ijk]{} + \\phivar_1)^2\n + (\\phishift[ijk]{} + \\phivar_2)^2 + \n \\left( \n \\phishift[ijk]{}\n - \\frac{\\dopd[jk]\\phivar_1 \n + \\dopd[ij]\\phivar_2} \n {\\dopd[ki]} \n \\right)^2 \n }{\n 16\\resol{}^2\\log 2 \n \\left(1\n + \\big(\\frac{\\dopd[ij]}{\\dopd[ki]}\\big)^2 \n + \\big(\\frac{\\dopd[jk]}{\\dopd[ki]}\\big)^2\\right) \n }\n }.\n\\label{eq:gauss:bispDS}\n\\end{align}\n\\end{subequations}\nIn the equations above, the ``group delay closure'' is expressed as\n\\begin{equation}\n \\phishift[ijk]{} = \n \\frac{\\dopd[ki]\\phishift[ij]{} - \\dopd[ij] \\phishift[ki]{}}\n {\\dopd[ki]}\n .\n\\label{eq:bisp:gd}\n\\end{equation}\nThe group delay closure is the consequence of the incorrect centering of the three fringe packets on the three baselines of the telescope triplet. Because of this de-centering, the centres of these packets are not scanned at the same time. In order to yield a usable closure phase, there should still be an overlap between the time intervals when the high-contrast parts of the packets are scanned. This means that the individual group delays \\textphishift[ij]{}, \\textphishift[jk]{}, and \\textphishift[ki]{}, and thus the group delay closure, should be of the order of a few times the spectral resolution or less ($\\textphishift[ijk]{} \\lesssim 2\\pi\\resol{}$). Since this overlap in time depends on the relative scanning speeds along the baselines, the group delay closure depends on $\\dopd[ij]$, $\\dopd[jk]$, and $\\dopd[ki]$. 
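As a sanity check of Eq.~(\\ref{eq:bisp:gd}), the following sketch evaluates the group delay closure for the OPD scanning speeds and group delays of the test case of Table~\\ref{tab:testcase}, taken in the baseline ordering given there (without the reordering step):

```python
import numpy as np

def gd_closure(dopd, phishift):
    """Group delay closure from the OPD scanning speeds and the group
    delays on baselines (ij, jk, ki):
    (dopd_ki * phi_ij - dopd_ij * phi_ki) / dopd_ki."""
    d_ij, d_jk, d_ki = dopd
    p_ij, p_jk, p_ki = phishift
    return (d_ki * p_ij - d_ij * p_ki) / d_ki

# Test-case values: speeds (d, -2d, d) and group delays (5, 0, -5) x 2pi.
d = 1.0                                   # arbitrary speed unit
phi123 = gd_closure((d, -2*d, d), (5*2*np.pi, 0.0, -5*2*np.pi))
# phi123 equals 10 x 2pi, the group delay closure quoted in the table.
```

With a spectral resolution of a few tens, this closure of ten fringes violates the $\\textphishift[ijk]{} \\lesssim 2\\pi\\resol{}$ criterion only marginally, which is why the test case still yields usable (though strongly smeared) closure phases.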
\n\nIn our analytic approach to the spectral transmission, the instrumental closure phase reduces to a constant term, independent of \\newrev{the} source\\newrev{s}:\n\\begin{equation}\n \\insphi[ijk]{} = \\insphi[ij]{}(0) + \\insphi[jk]{}(0) + \\insphi[ki]{}(0).\n\\end{equation}\n\nAppendix~\\ref{ap:disp} explains how to use the Gaussian formula if the quadratic chromatic dispersion term $\\textdisp[ij]{}$ is non-zero. \n\n\n\n\n\n\n\n\n\n\\section{Consequence on companion search}\n\\label{sec:bin}\n\n\\subsection{Bias on the interferometric observables}\n\\label{sec:bias:observables}\n\n\\begin{table}\n \\caption{Test case used in \\rev{Figs.~\\ref{fig:ideal}~\\& \\ref{fig:phi:jitt}}. For the square visibility amplitude, the first baseline is used. The spectral resolution is, by definition, the major source of smearing. In addition, the visibility is slightly impacted by the spectral dispersion $\\disp[ij]{}$. The closure phase is strongly impacted by the group delay closure $\\phishift[123]{}$ (indirectly by group delays and OPD modulation speeds) and moderately by the dispersion $\\disp[ij]{}$.}\n\\label{tab:testcase}\n\\begin{tabular}{ll}\n\\hline\\hline\nBinary flux ratio & 0.6\\\\\nEffective bandpass & Gaussian\\\\\nSpectral resolution & lines: 7, 18, 42, contours: 3--100\\\\\nProjected telescope positions & $(0, B, 0.4B)$\\\\\n\\textit{Corresponding baselines} & $(B, -0.6B, -0.4B)$\\\\\nOPD modulation along baselines & $\\dopd[ij] = (\\dopd[12], -2\\dopd[12], \\dopd[12])$\\\\\nOPD bounds & $(\\pm 25\\lambda, \\mp 50\\lambda, \\pm 25\\lambda)$\\\\\nGroup delays & $\\phishift[ij]{} = (5, 0, -5)\\times2\\pi$\\\\\n\\textit{Corresponding group delay closure} \n & $\\phishift[123]{} = 10\\times 2\\pi$\\\\\nSpectral dispersion & $\\disp[ij]{} = 0$\\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\\begin{figure*}[p]\n\\subfigure[Square visibility amplitude]{\\includegraphics[width=\\linewidth]{ideal-visibility.pdf}\\label{fig:ideal:Vsq}}\n\\subfigure[Closure 
phase]{\\includegraphics[width=\\linewidth]{ideal-closure.pdf}\\label{fig:ideal:phi3}}\n\\caption{Square visibility amplitude (top) and closure phase (bottom) of a binary with flux ratio 0.6 (test case of Table~\\ref{tab:testcase}) observed with an interferometer with Gaussian bandpass under ideal atmospheric conditions and baselines $B$, $-0.6B$, $-0.4B$. In both figures, \\emph{top panel:} interferometric observable as a function of binary separation (milliarcseconds at one micron wavelength for a 100\\,m baseline) for an infinite resolution and three spectral resolutions approximately matching those of PIONIER. \\emph{bottom panel:} deviation of the smeared observable with respect to the infinite spectral resolution case, shown as contours in the separation-spectral resolution plane. In the lowest panel, the behaviour change around spectral resolution $\\resol{} = 8$ is explained by the transition from the single spectral channel mode (group-delay free in ideal fringe tracking conditions, \\rev{since a single fringe packet can be centred around zero OPD, see Appendix~\\ref{ap:gd}}) to the multiple channel observation (where \\rev{the fringe packets of the different spectral channels are shifted with respect to each other and therefore cannot be simultaneously positioned at zero OPD by the fringe-tracker, see Appendix~\\ref{ap:gd}}).}\n\\label{fig:ideal}\n\\end{figure*}\n\n\nThe first impact of the smearing is a tapering of the peak-to-peak amplitude of the oscillation of the visibility with baseline, hour angle, or spectral\nchannel, due to the smearing amplitude $\\envband{}$. The second \\newrev{impact} only concerns the closure phase in multi-channel observations\\rev{. I}t originates from the imperfect alignment of the fringe packets on baseline triplets, \nas measured by $\\phishift[ijk]{}$. 
In order to make these influences clearer,\nwe give in Fig.~\\ref{fig:ideal} the interferometric observables of a binary with a high flux ratio 0.6, whose characteristics are given in Table~\\ref{tab:testcase}.\n\n\\paragraph{Square visibility amplitude.}\nFigure~\\ref{fig:ideal:Vsq}, top panel, shows the theoretical smearing of the visibility amplitude of a binary as a function of reduced separation $\\theta B\/\\lambda$ (in $\\mathrm{mas}\\cdot\\mathrm{hm}\\cdot\\mu\\text{m}\\smash{^{-1}}$) for three different spectral resolutions ($\\approx 7, 18, 42$) corresponding to the observing modes available on PIONIER at the VLTI. The lower panel of the figure displays the error on the square visibility occurring from not taking smearing into account, as a function of separation and spectral resolution. The result is easily generalised to binaries of different flux ratios, as the relative error on the visibility $\\Delta|V^2| \/ |V^2|$ remains unchanged.\n\n\\paragraph{Closure phase.}\nFigure~\\ref{fig:ideal:phi3}, top panel, shows the theoretical closure phase of a binary for three different spectral resolutions ($\\approx 7, 18, 42$) corresponding to the observing modes available on PIONIER at the VLTI. It can be seen at small separations (5--10\\,$\\mathrm{mas}\\cdot\\mathrm{hm}\\cdot\\mu\\text{m}\\smash{^{-1}}$) that the intermediate spectral resolution ($\\approx 18$) shows more smearing than expected for these separations, in particular more than the broad-band $\\approx 7$ observing mode. The reason lies in \\rev{the dispersive elements in the light beams of the interferometer and instrument that decentre fringe packets more in some spectral channels than in others, thus making it impossible to centre all fringes packets at the same time. (see the imperfect centering of some spectral channels of PIONIER in Fig.~\\ref{fig:PT} and a description of the group-delay tracking in Appendix~\\ref{ap:gd})}. 
This effect is not seen in the broad band, where \\rev{the single fringe packet of each baseline can be centred with a fringe tracker, thus eliminating the group-delay}. This low-separation smearing approximately scales linearly with separation, as $f\\textphishift[ijk]{}\\theta\/\\resol{}\\smash{^2}$, where $f$ is the flux ratio of the binary, $\\theta$ the separation, and $\\textphishift[ijk]{}$ the group-delay closure. (This can be obtained analytically by linearising Eq.~\\ref{eq:gauss:bispDS} and normalising by the bispectrum of a point-source calibrator.) At larger separations ($\\gtrsim 10\\,\\mathrm{mas}\\cdot\\mathrm{hm}\\cdot\\mu\\mathrm{m}\\smash{^{-1}}$ in Fig.~\\ref{fig:ideal:phi3}), the closure phase is impacted by a combination of the tapering of the oscillation of the visibility (a purely spectral resolution effect, as seen in the visibility in Fig.~\\ref{fig:ideal:Vsq}) and the instrumental phase; the impact is relatively complex, and we can only recommend using Eq.~(\\ref{eq:gauss:bispDS}) to model it. As an illustration, Fig.~\\ref{fig:closim} of Appendix~\\ref{ap:closim} compares the closure phase of the three spectral channels of PIONIER for a given configuration of the interferometer, and it is quite clear that the behaviour radically changes with channel and telescope triplet.\n\nThe lower panel displays the error on the closure phase arising from not taking smearing into account, as a function of separation and spectral resolution. The figure shows a sharp discontinuity at resolution $\\resol{} = 8$ where the transition occurs from a single spectral channel (where the single fringe packet of each baseline is positioned at zero OPD by an ideal fringe-tracker) to spectrally dispersed fringes (with the fringe packets \\rev{of each baseline} that do not align well \\rev{because they are shifted with respect to each other by the instrumental phase}). 
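The quadratic fall-off of this low-separation bias with spectral resolution can be made concrete with a short numeric sketch; it only encodes the proportionality $f\,\phi_{ijk}\,\theta/\resol{}^2$ quoted above, with an arbitrary separation unit:

```python
from math import pi

# Sketch of the quoted scaling f * phi_ijk * theta / R**2 for the
# low-separation closure-phase bias.  The separation unit is arbitrary;
# only ratios between spectral resolutions are meaningful here.
def phase_bias(f, phi_ijk, theta, R):
    return f * phi_ijk * theta / R**2

f, phi_ijk, theta = 0.6, 10 * 2 * pi, 1.0   # test-case flux ratio and closure
bias = {R: phase_bias(f, phi_ijk, theta, R) for R in (7, 18, 42)}
assert bias[7] > bias[18] > bias[42]                    # shrinks with R
assert abs(bias[7] / bias[42] - (42 / 7) ** 2) < 1e-9   # 1/R^2 fall-off
```

Going from the broad-band mode ($\resol{}\approx 7$) to the highest PIONIER resolution ($\resol{}\approx 42$) thus reduces this particular bias term by a factor $(42/7)^2 = 36$.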
Even for moderately resolved sources, percent precision requires a good enough spectral resolution ($\\resol{} \\gtrsim 40$ or more), adequate modelling of \\rev{bandwidth} smearing, or a good fringe-tracking on a single spectral channel at moderate spectral resolutions ($\\resol{} \\gtrsim 10$). \n\n\n\\subsection{Retrieving binary parameters}\n\n\\begin{figure}\n\\includegraphics[width=\\linewidth]{binobs-uv.pdf}\n\\caption{$(u, v)$ coverage of a typical 100\\,m baseline 4T observation (K0-A1-G1-I1) at the VLTI for an object close to the meridian, with 3 observations over a few hours.}\n\\label{fig:uv}\n\\end{figure}\n\nWe assess here the bias on the binary parameters that smearing produces. In order to model the data as \\rev{realistically} as possible we build synthetic binary fringes corresponding to a \\rev{typical scenario}: near-zenith object observed in a sequence of three sets of fringes separated by one hour using a large telescope quadruplet at VLTI (see Fig.~\\ref{fig:uv} for $u$, $v$ coverage). They are obtained from calibration fringes obtained by PIONIER on an internal calibration lamp, which can be considered as a point source observation for our purpose. Then, we feed \\rev{these} synthetic data to the PIONIER data reduction software and get visibility amplitudes and closure phases. They are calibrated using simulated fringes of a point-source calibrator. They are then fit with a binary model to derive the parameters of the binary. In a first step, the model is that of an unsmeared binary (Eqs.~\\ref{eq:Vmono}~\\& \\ref{eq:Bmono}), then we use the smeared model of Sect.~\\ref{sec:ana} with Gaussian bandpass (Eq.~\\ref{eq:gauss:vsqPS}~\\& \\ref{eq:gauss:bispDS}). 
\\rev{Additional transmission effects of the VLTI from the telescope up to the internal calibration lamp, positioned after the delay lines, have been ignored: the near-zenith observations we consider here are dominated by PIONIER's instrumental effects (as we discuss in Sect.~\\ref{sec:interferogram}). For non-zenithal observations, where the interferometric delay in the delay lines is several tens of metres, the air dispersion in the delay lines becomes a factor of the same order as PIONIER's instrumental phase and can be modelled using Appendix~\\ref{ap:gd}.}\n\nIn our analysis, the separations in right ascension and declination are varied from $-30$ to 30\\,mas or approximately 10 times the angular resolution of the interferometer, and the magnitude differences from 0.1 to 3.3 (flux ratios from 0.05 to 0.95). For each triplet of parameters, the difference between the fitted values and the input gives us the bias on the binary position and magnitude difference. The reduced chi square was determined assuming a 2\\% accuracy on visibilities and 20\\,mrad on closure phases, typical of single-mode instrument performance on bright objects (like PIONIER). Figure~\\ref{fig:binobs-bias} shows the \\rev{absolute values of the errors} and reduced chi square at each separation and position angle at the given magnitude difference of 0.55 (flux ratio of 0.6). In Figure~\\ref{fig:binobs-bias-2}, we \\rev{consider possible biases and give} the median value of the \\rev{error with} its confidence intervals for a given binary separation, considering all the position angles and flux ratios at that separation.\n\n\\begin{figure*}\n\\includegraphics{binobs-bias.pdf}\n\\caption{Quality of least-squares model fitting of binary parameters to smeared interferometric observables. These observables are derived from PIONIER synthetic fringes in the 3-channel spectral resolution ($\\resol{} \\approx 20$) using the data reduction pipeline. 
The contour plots give the \\newrev{absolute value of the error in the model fits} for each position of the secondary assuming a binary flux ratio of 0.6. \\text{Left:} the binary model assumes monochromatic light and absence of smearing. \\text{Right:} the binary model assumes a Gaussian bandpass and takes into account the smearing. \\text{Top:} \\rev{absolute value of the} error on the binary separation. \\text{Middle:} \\rev{absolute value of the} error on the magnitude difference. \\text{Bottom:} reduced chi squares assuming 2\\% error on square\\newrev{d} visibilities and 20~mrad on closure phases.}\n\\label{fig:binobs-bias}\n\\end{figure*}\n\n\\begin{figure*}\n\\includegraphics[width=\\linewidth]{bias-smearing-allratios.pdf}\n\\caption{\\newrev{The solid lines give the median value of the errors on the fitted binary parameters} as a function of binary separation. \\newrev{If non-zero and systematically of one sign, the median indicates a bias. The grayed areas are the} confidence intervals for the errors (dark gray 1-$\\sigma$, light gray 2-$\\sigma$). At a given separation, all binary orientations and flux ratios were considered. \\text{Left:} the binary model assumes monochromatic light and absence of smearing. \\text{Right:} the binary model assumes a Gaussian bandpass and takes into account the smearing. \\text{Top:} \\newrev{error} on the binary separation. \\text{Middle:} \\newrev{error} on the magnitude difference. \\text{Bottom:} reduced chi square.}\n\n\\label{fig:binobs-bias-2}\n\\end{figure*}\n\n\\paragraph{Smearing-free binary model.} A binary model with the classical expression for the visibility amplitude and closure phase (Eqs.~\\ref{eq:Vmono}~\\& \\ref{eq:Bmono}) is fitted to synthetic PIONIER data with the three-channel spectral resolution. 
\nThe left panel of Fig.~\\ref{fig:binobs-bias} displays from top to bottom the \\newrev{absolute value of the error} on the secondary's position, the \\newrev{absolute value of the error} on the magnitude difference, and the reduced chi square for errors of 2\\% and 20\\,mrad on individual measurements of square visibility amplitudes and closure phases respectively. We checked that the results for other flux ratios are similar. The \\newrev{errors (with median value and confidence intervals)} for the parameters are given in Fig.~\\ref{fig:binobs-bias-2} (left panel) as a function of separation when the flux ratio is allowed to vary between detectable limits (0.05 to 0.95). \\newrev{The median value of the error indicates a bias if it is non-zero and consistently of one sign.} \n\nThe main impact of the smearing is a degradation of the goodness of fit at all separations, followed by errors on the flux ratio and separation at moderate separations, and a clear bias of both observables at larger separations. In our models, the secondary is dimmer than the input of the simulation \\newrev{more often than not} and the separation tends to be smaller \\newrev{more often than not}. \\newrev{(For instance, the confidence intervals on the errors of Fig.~\\ref{fig:binobs-bias-2} show that the error on the separation is approximately 5 times more likely to be negative than positive at a separation of 30\\,mas.)} The \\newrev{apparent dimming of the secondary} is easily explained by the tapering of the fringe contrast that occurs due to smearing. The \\newrev{bias on separation} is independent of smearing as we will see later on. \n\nEven at moderate separations (5--10\\,mas) the reduced chi square is around 3. However, the errors on the flux ratio and positions become significant (50\\,$\\mu$as and 20\\,mmag) only at higher separations ($\\gtrsim 15$ mas), as shown in Fig.~\\ref{fig:binobs-bias}. At first sight, this seems to contradict the trend of Sect.~\\ref{sec:bias:observables}. 
In that section, we have found a significant smearing of the closure phase at small separations, as a result of the imperfect centering of fringe packets in an observation with multiple spectral channels. We easily reconcile these findings by noting that, as an average over the spectral band, the group delay is zero, i.e. both ends of the band have group delays of the same magnitude but opposite signs; thus their respective impacts on the observables approximately cancel out in the fit. The deviation of the individual spectral channels from the average over the band still explains the larger chi square. (Fig.~\\ref{fig:closim} in Appendix~\\ref{ap:closim} shows how the closure phases are impacted differently for the three spectral channels of PIONIER in low resolution mode.) \n\n\\paragraph{Smeared binary model.} We performed similar fits to synthetic smeared fringes of a binary \\rev{by} using the Gaussian formulas for the smearing (see Sect.~\\ref{sec:ana}). The \\newrev{absolute values of the errors} on the position and flux ratio are given for a binary with a flux ratio of 0.6 in the right panel of Fig.~\\ref{fig:binobs-bias}. The \\rev{errors} on the position and magnitude difference, and the quality of the fit are given in the right panel of Fig.~\\ref{fig:binobs-bias-2} for a wide range of flux ratios. \\newrev{In Fig.~\\ref{fig:binobs-bias-2}, the median value of the error indicates a bias if it is non-zero and consistently of one sign.}\n\nTaking the smearing into account eliminates most of the errors and bias on the flux ratio. It also largely improves the quality of the fit, with a reduced chi square of 3 found at significant separations ($\\gtrsim 15$\\,mas) in \\rev{most cases}. The errors on the separation are improved at all separations but \\rev{the bias remains at larger separations}. 
We \\rev{have found that the bias is related} to the uncertainty on the effective wavelength of the interferometer, which varies by $\\approx 0.1$\\% across baselines on PIONIER; this phenomenon is independent \\rev{of} our adequate modelling of the smearing. \\rev{It is difficult to calibrate in the first place, because a deviation of the pie\\newrev{z}o scan speed from its nominal value has exactly the same observable consequence. (We note that including a proper spectral calibration in the instrument would solve for this problem.)} At 30 mas of separation, \\rev{a 0.1\\% bias translates into 30$\\,\\mu$as}, which is what we indeed find: \\rev{the solid lines in the top panels of Fig.~\\ref{fig:binobs-bias-2} show this bias both in the monochromatic model and the smeared one.} At specific binary parameters, seen as high \\rev{error} values islands on Fig.~\\ref{fig:binobs-bias}, \\rev{the discrepancy} originates from the difference between the smeared visibility and the Gaussian model: This happens close to smearing-induced phase jumps (see Fig.~\\ref{fig:closim} of Appendix~\\ref{ap:closim} for a comparison between Gaussian smearing and simulated values). High contrast binaries \\rev{do not feature these phase jumps} and are not impacted. For precision work \\rev{of high to moderate flux ratio binaries, we strongly recommend to discard closure phases} close to predicted jumps. \n\n\n\n\n\n\n\n\n\n\n\\section{Modelling the atmosphere}\n\\label{sec:atm}\n\\label{sec:atm:temp}\n\nThe estimators of the interferometric observables have been chosen to be mostly immune to atmospheric biases in the typical interferometric regime of a moderately resolved source, \\rev{i.e. when bandwidth smearing can be ignored}. 
In this section, we investigate possible biases when \\rev{bandwidth smearing becomes significant}, as \\citet{ZHA07} did for IOTA's closure phase estimator.\n\nFor temporal scanning, it is possible to write the differential piston---the variable differential phase induced by the atmosphere---as a function of OPD since time and OPD are linked \\citep[see for instance][]{jitter}. The jittered coherent flux can be expressed as a function of the ideal coherent flux\n\\begin{equation}\n \\phasorjitt[ij]{}(\\opdvar) = \n \\phasor[ij]{}(\\opdvar + \\piston[ij](\\opdvar))\n \\wideexp{\\left[\n -\\frac16 \\left(\n \\pi\\wavenumzero \n \\pderiv{\\piston[ij]}{\\opdvar}(\\opdvar)\n \\right)^2\\right],\n }\n \\label{eq:coherjitt}\n\\end{equation}\n\\rev{where $\\textpiston[ij]$ is the atmospheric differential piston on baseline $ij$.} The exponential term is the contrast loss due to piston variation during the integration, of the order of one millisecond for one OPD step of a temporal scan. It bears the assumption that the spectral envelope of the fringes does not have features as sharp as the fringe frequency and that the integration during one OPD step is fast enough (of the order of \\rev{a} millisecond in practice) to allow for a linearisation of piston. \n\n\\subsection{Orders of magnitude}\n\\label{sec:atm:om}\nAn analytic approach to the atmospheric turbulence can be taken, using the \nassumption that scanning is fast enough for the piston to vary linearly during\na sub-second scan, i.e. $\\textpist[ij]{} = \\textpist[ij]{0} + \n\\textpist[ij]{1} \\textopd[ij]$, where $\\textpist[ij]{0}$\nis the group-delay tracking error and $\\textpist[ij]{1}$ a rate of piston \nvariation during scan. $\\textpist[]{0}$ and $\\textpist[]{1}$ are random variables when statistics over a large number of scans are derived. 
Using this approach, the coherent flux is:\n\\begin{align}\n\\begin{split}\n \\phasorjitt[ij]{} (\\opd[ij]) &= \n \\sum_o\n 2 \\IFTsflux{o}(\\xobj[ij]{o} + (1 + \\pist[ij]{1})\\opd[ij] + \\pist[ij]{0}) \n \\\\&\\qquad \\times \\exp{\n i\\phiobj[ij]{o} \n + 2i\\pi\\wavenumzero[(1+\\pist[ij]{1})\\opd[ij] + \\pist[ij]{0}]\n - \\frac16 (\\pi\\wavenumzero\\pist[ij]{1})^2 \n }.\n\\end{split}\n\\label{eq:atm:phasor}\n\\end{align}\nThis approach can be used to determine the orders of magnitude of the atmospheric effects.\n\n\\paragraph{Visibility.} \nThe piston variation term $1 + \\textpist[ij]{1}$ comes as a product of the OPD variable in Eq.~(\\ref{eq:atm:phasor}), so we recognise it as a scaling factor. $\\textpist[ij]{0}$ is a mere shift of the central OPD and has no impact---the square visibility does not depend on centering. Therefore, we can link the jittered visibility to the ideal case: \n\\begin{equation}\n \\vsqPS[ij]{\\text{jit}} = \\frac{1}{1+\\pist[ij]{1}} \\vsqPS[ij]{\\text{ideal}}\n \\wideexp{-\\frac13 (\\pi\\wavenumzero\\pist[ij]{1})^2}.\n\\end{equation}\nThe impact of atmospheric jitter is independent \\rev{of} the geometry of the source and, thus, smearing. For all separations it can be calibrated out if science target and calibrators are observed with similar atmospheric conditions.\n\n\\paragraph{Closure phase.} The group-delay tracking term $\\textpist[ij]{0}$ can be seen as a fringe shift that adds to the predicted fringe position $\\textphishift[ij]{} \\rightarrow \\textphishift[ij]{} + 2\\pi\\wavenumzero\\textpist[ij]{0}$ and the linear variation of the piston can be seen as a scanning velocity change $\\textdopd[ij] \\rightarrow \\textdopd[ij](1 + \\textpist[ij]{1})$. With these substitutions, the formulas of Sect.~\\ref{sec:ana:clo} can be used directly to determine the jittered closure phase. 
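The multiplicative contrast-loss factor in the jittered square visibility above can be sketched in a few lines; the central wavenumber is normalised to unity here, which is an assumption made purely for illustration:

```python
from math import exp, pi

# Contrast-loss factor of the jittered square visibility,
#   V2_jit = V2_ideal / (1 + p1) * exp(-(pi * nu0 * p1)**2 / 3),
# with p1 the linear piston rate during the scan.  nu0 = 1 is an
# arbitrary normalisation of the central wavenumber for illustration.
def jitter_attenuation(p1, nu0=1.0):
    return exp(-(pi * nu0 * p1) ** 2 / 3) / (1 + p1)

assert jitter_attenuation(0.0) == 1.0    # no piston rate: no contrast loss
assert jitter_attenuation(0.05) < 1.0    # positive rate: contrast loss
```

Since this factor does not involve the source geometry, it divides out when the science target is calibrated against a reference star observed under similar atmospheric conditions, as stated above.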
As we have seen, the predominant impact of the bandwidth smearing on the closure phase is the fringe decentering $\\textphishift[ij]{}$, so we expect the group-delay tracking errors to be the main source of bias. \n\n\\subsection{Numerical modelling}\n\n\\begin{figure}\n\\includegraphics[width=\\linewidth]{pdf\/jittered-interferogram.pdf}\n\\caption{One of the simulated temporal scans. The deformation of the envelope is correlated with the piston slope, and the accordion-like features with variations of that slope. \\textit{Top:} piston; \\textit{Bottom:} simulated fringes.}\n\\label{fig:interf:jitt}\n\\end{figure}\nIn the high frequency regime the pistons at the different stations can be considered as uncorrelated when the baselines are larger than the outer scale of turbulence $\\mathcal{L}_0$ \\citep{KEL07}. With a median value $\\mathcal{L}_0 = 22$\\,m at Paranal \\citep{MAR00}, baselines of the medium and large telescope quadruplets used with PIONIER normally fulfil the criterion. At other sites, for smaller baselines, or under relatively uncommon atmospheric conditions at Paranal, the pistons can be correlated. This correlation decreases the amount of atmospheric jitter for given coherence time and seeing, which in turn tends to decrease the bias on the interferometric observables. Therefore, we model the random piston $\\piston[i](t)$ using its spectral density\n\\begin{equation}\n \\DFT{\\piston[i]}(\\nu) = A\\nu^{-B} \\exp{\\j\\Phi^i(\\nu)},\n\\end{equation}\nwhere $A$ and $B$ are constants and $\\Phi^i(\\nu)$ is chosen randomly for each sampled temporal frequency $\\nu$. 
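A piston time series with such a power-law spectral density can be synthesised numerically. The sketch below uses a direct (O($N^2$)) inverse transform to stay dependency-free; the normalisation, sampling, and exponent value are illustrative assumptions, not the simulation code used in this work:

```python
import math
import random

# Illustrative synthesis of a piston time series from a power-law
# spectral density |FT(piston)|(nu) ~ A * nu**-B with random phases.
# Direct O(N^2) sum over positive frequencies; stdlib only.
def synth_piston(A, B, N=256, seed=0):
    rng = random.Random(seed)
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(N // 2)]
    return [
        sum(A * (k + 1) ** -B
            * math.cos(2.0 * math.pi * (k + 1) * t / N + phases[k])
            for k in range(N // 2))
        for t in range(N)
    ]

def rms(x):
    m = sum(x) / len(x)
    return math.sqrt(sum((v - m) ** 2 for v in x) / len(x))

p1 = synth_piston(A=1.0, B=2.0)
p2 = synth_piston(A=2.0, B=2.0)               # same seed, hence same phases
assert abs(rms(p2) / rms(p1) - 2.0) < 1e-9    # rms scales linearly with A
```

The linear scaling of the rms with $A$ is what allows the normalisation of $A$ to a target group-delay tracking rms, as done in the text.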
For Kolmogorov turbulence, the fast scan ($\\ll 1$\\,s) regime has $B = 17\/6$ \\citep{CON95} but there is \\rev{experimental evidence \\citep{DIF03}} that the slope is not as steep at VLTI, \\rev{with simulations by \\citet{ABS06} explaining it in terms of the piston induced at high frequency by the adaptive optics (imperfect) correction \\citep[``bimorph piston'', see][]{VER01} and wavefront errors produced by the injection into single-mode waveguides \\citep[``coupled piston'', see][]{RUI01}. \\citet{LIN99} have also measured a deviation from the Kolmogorov behaviour at PTI.} We used $B = 2$, which experimentally reproduces well the accordion features of temporal scans obtained under below average atmospheric conditions (see Fig.~\\ref{fig:interf:jitt}). We normalise $A$ to match the group-delay tracking rms in the differential piston $\\piston[ij] = \\piston[j] - \\piston[i]$.\n\nBy substituting in Eq.~\\ref{eq:atm:phasor}, we perform a numerical integration of Eqs.~(\\ref{eq:def:vsqPS}~\\& \\ref{eq:def:bispDS}) and obtain the jittered\nvisibility amplitude and closure phase.\n\n\\begin{figure*}\n\\includegraphics{jittered-closure.pdf}\n\\caption{Bias on the closure phase resulting from atmospheric piston in temporal scans, assuming that the static smearing is correctly modelled. The x-axis shows the reduced binary separation in milliarcseconds-hectometres of baselines per micron of wavelength (below) or the OPD between binary components (above). \\textit{Top:} bias and statistical errors for three spectral resolutions corresponding to PIONIER at the VLTI. \\textit{Bottom panel:} bias in the spatial resolution-spectral resolution plane. The bias decreases quickly with spectral resolution.}\n\\label{fig:phi:jitt}\n\\end{figure*}\n\n\n\\subsection{Bias on the observables} \nAs we have seen in Sect.~\\ref{sec:atm:om} there is little bias of the atmosphere on the square visibility amplitude and we could confirm it numerically. 
However, the bias can be substantial on the closure phase. Figure~\\ref{fig:phi:jitt} displays in its top panel the bias on the closure phase of our test-case binary as a function of separation, for the three spectral resolutions $\\resol{} = 7$, 18, 42 corresponding to PIONIER's modes. \\rev{For each separation, baseline, and spectral resolution considered in the simulation}, 100 random scans with a \\rev{remaining scatter of the fringe tracking of $6\\lambda$ (typical value under average conditions) have been generated. The closure phase on the telescope triplet is the average closure phase of the scans. To better identify the biases}, the closure phase of a \\rev{jitter-free observation} has been subtracted from the results. In the lower panel of the figure, the bias on the phase is given in the separation-spectral resolution plane. As one can see, the impact of the atmosphere is very important at low resolution but quickly vanishes for $\\resol{} \\gtrsim 20$. For three spectral channels across a typical IR band, the error on the phase is at most a few degrees. \n\n\\section{Discussion \\& Conclusion}\n\n\\subsection{Impact of the instrument and visibility estimator}\n\nAs already discussed by \\citet{PER05}, the square visibility amplitude is impacted differently for different estimators that otherwise would be equivalent in the absence of smearing. Not only is the amount of smearing different, but the behaviour can be changed. Because it is a popular recombination method and it illustrates this argument, we have given the formulas for the smeared complex visibility of a time-modulated ABCD recombiner in Appendix~\\ref{ap:ABCD}. 
In Sect.~\\ref{sec:ana}, we have seen that the square visibility amplitude is not impacted by the fringe centering in full scans processed by Fourier analysis : in Eq.~(\\ref{eq:gen:V2smearing}), smearing is independent \\rev{of} absolute source position---only on source distances $\\textphiobj[ij]{o} - \\textphiobj[ij]{p}$---and group delay $\\textphishift[ij]{}$. Conversely, the ABCD visibility estimator shows explicit dependence on $\\textphiobj[ij]{o}$ and $\\textphishift[ij]{}$ (see for instance Eq.~\\ref{eq:ABCD:gauss:smearing}), and this propagates to the square visibility estimator. \n\nAlso, we have clearly put in evidence that instrumental features such as the OPD modulation scheme \\rev{(ABCD or Fourier mode, stroke speeds on the different baselines)} or the chromatic dispersion have a strong impact on the closure phase. In particular, the smearing behaviour of the closure phase of PIONIER (Fig.~\\ref{fig:closim}) shows different trends on different triplets or different spectral channels: on one hand, different telescope triplets are impacted differently because of the different OPD modulations; on the other hand, different spectral channels of the same triplet behave in different manners, as a consequence of different chromatic signatures. While the square visibility amplitude did not show a strong dependence on instrumental signature for full scans processed by Fourier analysis (Sect.~\\ref{sec:ana}), this is not necessarily the case. For instance, a time-modulated ABCD method displays impact for both visibility and phase (see Eq.~\\ref{eq:ABCD:gauss:smearing} in Appendix~\\ref{ap:ABCD}).\n\n\\rev{We therefore stress} that each data reduction pipeline and each instrument require their own modelling of the smearing. In this paper, we have provided a generic formalism which can be used as is for VLTI\/PIONIER and probably with little adaptation to other instruments that temporally scan most of the fringe packet. 
\n\n\\subsection{When only part of the fringe packet is sensed}\n\nAlso, our developments make the implicit assumption that most of the flux of \\newrev{the} fringe packet is measured, i.e. that the OPD range is significantly larger than the FWHM of the fringe envelope. Actually, our developments still hold if the centres of the fringe packets originating from the different parts of the source are scanned but the extremities of the fringe packet are cropped, providing that the cropping is not too aggressive. \\rev{In the case of PIONIER, the partial cropping on some baselines does not prevent a good agreement between simulated fringed and our analytic development, as Fig.~\\ref{fig:closim} shows.} \n\nHowever, it is clearly not the case in the ABCD method when a fringe-tracker locks the recombiner on the ``central'' fringe \\citep[e.g][]{SHA80}. While the smearing can be derived theoretically for this method (see Appendix~\\ref{ap:ABCD}), \\rev{its magnitude will depend on the location of the fringe (i.e the OPD) onto which the fringe tracker locks. In the aforementionned Appendix it is shown that the visibility depends on the position of a source which in turns depends on the value of the group delay \\textphishift[ij]{} (see Eq.~\\ref{eq:ABCD:beta}). For relatively compact objects, the fringe tracker locks onto the brighter fringe or a local zero of the group delay and possible biases are calibrated out when observing an (almost) point-like calibrator under similar conditions. When a source is smeared, the fringe tracker does not necessarily behave in the same manner on source and calibrator, since there is no longer an obvious location of a central fringe (e.g. in the extreme case of a double fringe packet, it may lock on either packet). Therefore,} it is quite likely that instruments sensing the central fringe of sources more resolved than a few beam widths \\rev{(i.e. 
a few times the resolving power of the interferometer) will lead to altered measurements}, unless \\rev{(a)} a high spectral resolution \\rev{is used ($\\resol{} \\gg \\textphishift[ij]{}$ in Eq.~\\ref{eq:ABCD:beta})} or \\rev{(b) the fringe tracking scheme can be modelled with enough detail to know on which part of a given smeared fringe packet it locks}. In particular, instruments that target high accuracy astrometry with the ABCD method like GRAVITY \\citep{GRAVITY} and PRIMA \\citep{PRIMA} will require that both the tracking reference and the science target are not very resolved.\n\n\\subsection{Image reconstruction}\nOur approach clearly targets parametric analysis, by providing formulas to model fit interferometric data by systems of compact sources. Image reconstruction, however, usually relies on the Fourier relation between visibility and image, a relation which is broken in finite bandwidth. Thus, image reconstruction is made more difficult, as \\cite{BRI89} already noted in radio interferometry.\n\n\n\\subsection{Dealing with bandwidth smearing in practice}\nThe angle of attack of radio astronomers to limit bandwidth smearing\n(see e.g. \\citet{BRI89}) is to restrict its effects either by\nincreasing the spectral resolution to optimise the interferometric\nfield of view or centering the phase tracking delay optimally to\nreduce the radial spread. Optical interferometry users do not\nnecessarily have such flexibility. One of the important differences\nbetween the wavelength regimes is that, in the optical, because the\narrays have \\rev{many fewer} telescopes, most of the users do not actually\nreconstruct images but rather model directly the interferometric\nobservables. This \\rev{has} been done to an extreme level of precision\nwhere visibilities are measured to a fraction of a percent\n\\citep[e.g.][]{Absil:2008} and closure phases to a fraction of a degree\n\\citep[see e.g.][]{Zhao:2011}. 
The particularly large impact of the\nsmearing, even for moderately resolved sources, undermines the idea\nthat the parameters for a large number of objects might be derived\neffortlessly using the traditional techniques.\n\nIt therefore appears reasonable to adopt a two-step strategy to deal with\nbandwidth smearing: first by \\emph{limiting the static instrumental smearing\nby design} and secondly by \\emph{operating the instrument under\nconditions that allow a proper modelling of the induced biases}.\n\n\\emph{Limiting the instrumental smearing.} We have seen that the ``group delay\nclosure'' is the major contributor to a static smearing effect in the closure\nphase \\rev{for instruments that operate in Fourier mode}; it depends on the\ngroup delays and the OPD modulation scheme. The scanning speed scheme can be\nchosen so as to minimise the average group delay closures. For the\n\\rev{temporal ABCD, visibility amplitudes and closure phases are directly\nimpacted by the group delay, and this mitigation can no longer be used. Since the\ngroup delay is mostly produced by a static chromatic dispersion in the instrument (waveguides, optical elements), an} integrated approach\nto differential dispersion and birefringence compensation can be attempted as\ndiscussed in \\citet{LAZ12}. 
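As a hypothetical illustration of the scan-speed mitigation mentioned above, Eq.~(\ref{eq:bisp:gd}) implies that the group-delay closure vanishes when the scan-speed ratio matches the group-delay ratio on a triplet; the helper below is an illustrative sketch with assumed delay values, not a prescription for any particular instrument:

```python
from math import pi

# By Eq. (eq:bisp:gd), phi_ijk = phi_ij - (v_ij / v_ki) * phi_ki,
# which vanishes when v_ij / v_ki = phi_ij / phi_ki (non-zero delays
# assumed).  Values below are hypothetical.
def gd_closure(phi_ij, phi_ki, v_ij, v_ki):
    return phi_ij - (v_ij / v_ki) * phi_ki

phi_ij, phi_ki = 5 * 2 * pi, 2 * 2 * pi   # assumed group delays
v_ki = 1.0
v_ij = (phi_ij / phi_ki) * v_ki           # speed ratio tuned to cancel
zeroed = gd_closure(phi_ij, phi_ki, v_ij, v_ki)
assert abs(zeroed) < 1e-9                 # closure cancelled by design
```

In practice the group delays vary across spectral channels and triplets, so only the average closure can be minimised this way, as the text notes.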
Solutions exist that can provide guided or free-space\noptics instruments with dispersion compensation \\citep{Vergnole:2005}.\n\\rev{Correcting the air dispersion in the delay lines in real time may prove\nmore difficult to implement than static correction of the dispersion in the optical elements, so that evacuated delay lines are probably part of the solution for larger baseline lengths ($\\gg 100$\\,m) \\newrev{and at shorter wavelengths where the air dispersion is larger}.}\n\n\\emph{Modelling the biases.} We have shown that bandwidth smearing can be\nmodelled provided that a moderate spectral resolution is used (the first\nobvious step) \\rev{and} the \\rev{estimators of the observables are properly\ncalculated}. In very low spectral resolution or in full-band ($\\resol{} \\sim\n5$) observations, atmospheric effects must also be decently constrained. For the\nlatter, initial studies \\citep[e.g.][]{LIN99,DIF03} have shown the correlation\nbetween atmospheric turbulence and low-frequency statistics of the piston, but these\nare not necessarily well adapted to sub-second exposures\n\\citep[e.g.][]{ABS06}. Further dedicated characterisation of piston statistics\n\\rev{vs. monitored atmospheric properties} would be needed. In summary, the\nultimate tool to obtain a smeared source's \\rev{properties} will simulate the\ninstrumental visibility numerically, taking the instrumental signatures, in\nparticular a dedicated spectral calibration, and the atmosphere into account.\n\n\\subsection{Concluding remarks}\n\n\\beginrevision\nOptical interferometry is increasingly used for precise measurements of high flux ratios and\/or separations. Applications of this precision technique range from the detection of hot dust components around debris-disc host stars or the search for direct detection of hot Jupiters to the accurate astrometry of binary systems in search of precise mass determinations. 
\n\nWe have focused our work on a rarely studied effect that can alter significantly these astrophysical measurements, the so-called bandwidth smearing. This bias-inducing phenomenon arises from the wavelength-dependence of the characteristics of the instrument, the atmosphere, and the source. We have modelled its impact by analysing its influence on the instrumental fringe contrast and determined how it alters the visibility amplitudes and closure phases. The magnitude of this effect will depend, for a given instrument, on the spectral resolution and the extension of the observed field of view, and in some cases on the atmospheric piston.\n\nWe have demonstrated analytically how to calibrate for this degradation in the context of popular temporal fringe scanning instruments and applied this analysis to the specific case of binary systems by computing the errors or biases induced on the separation vector and flux ratio.\n\nWe have further discussed ``real-life'' constraints such as the influence of the atmospheric piston, the use of different fringe encoding schemes or the imperfections of the fringe tracking quality. We believe that the current analysis can be used with little effort to correct for potential bandwidth smearing biases in almost any astrophysical case.\n\\endrevision\n\n\\section*{Acknowledgements}\n\\rev{We would like to thank an anonymous referee and Chris Haniff who helped us to improve this paper. This research has made use of NASA's Astrophysics Data System and the free software packages maxima, Yorick, and python. It has been supported by Comit\\'e Mixto ESO-Chile and Basal-CATA (PFB-06\/2007).}\n\n{\\footnotesize\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Methods}\n\\noindent\n$\\textbf{Crystal growth and magnetic characterizations.}$ $\\rm{(MnBi_2Te_4)(Bi_2Te_3)_{\\emph{n}}(\\emph{n} = 1, 2,...)}$ single crystals were grown by the flux method \\cite{N33}. 
Mn powder, Bi lumps and Te lumps were weighed in the ratio of Mn: Bi: Te = 1: 8: 13 (MnTe: $\\rm{Bi_2Te_3}$ = 1: 4). The mixture was loaded into a corundum crucible sealed into a quartz tube. The tube was then placed into a furnace and heated to 1100 \u00b0C for 20 h to allow sufficient homogenization. After a rapid cooling to 600 \u00b0C at 5 \u00b0C\/h, the mixture was cooled slowly to 585 \u00b0C (581 \u00b0C) at 0.5 \u00b0C\/h for $\\rm{MnBi_4Te_7}$ ($\\rm{MnBi_6Te_{10}}$) and kept at this temperature for 2 days. Finally, the single crystals were obtained after centrifuging. The centimeter-scale plate-like $\\rm{MnBi_4Te_7}$ and $\\rm{MnBi_6Te_{10}}$ single crystals can be easily exfoliated. Magnetic measurements of $\\rm{MnBi_4Te_7}$ and $\\rm{MnBi_6Te_{10}}$ single crystals were performed using the vibrating sample magnetometer (VSM) option of a Quantum Design Physical Properties Measurement System (PPMS-9 T). The temperature-dependent magnetization measurements are described in detail in the Supplementary Materials.\n\\bigskip\n\n\\noindent\n$\\textbf{Preparation of the ultra-thin samples.}$ The $\\rm{MnBi_4Te_7}$ and $\\rm{MnBi_6Te_{10}}$ flakes with different thicknesses were first mechanically exfoliated onto a polydimethylsiloxane (PDMS) substrate by the Scotch tape method. The exfoliated samples on PDMS substrates were then dry-transferred onto 285 nm $\\rm{SiO_2}$\/Si substrates with evaporated gold films. Then, a layer of PMMA was spin-coated on the thin flakes for protection.\n\\bigskip\n\n\\noindent\n$\\textbf{AFM characterization.}$ The thickness of the ultra-thin samples was verified by atomic force microscopy using the Oxford Cypher S system in tapping mode. According to the height line profiles, the $\\rm{MnBi_4Te_7}$ and $\\rm{MnBi_6Te_{10}}$ were confirmed to possess alternating lattice structures of BT (1 nm) + MBT (1.4 nm) and BT (1 nm) + BT (1 nm) + MBT (1.4 nm), respectively. 
See more details in the Supplementary Materials.\n\\bigskip\n\n\\noindent\n$\\textbf{RMCD measurements.}$ The RMCD measurements were performed in an Attocube closed-cycle cryostat (attoDRY2100) down to 1.6 K and up to 9 T in the out-of-plane direction. The linearly polarized light of a 633 nm HeNe laser was modulated between left and right circular polarization by a photoelastic modulator (PEM) and focused on the sample through a high numerical aperture (0.82) objective. The reflected light was detected by a photomultiplier tube (THORLABS PMT1001\/M). The magnetic reversal under an external magnetic field was detected through the RMCD signal, determined by the ratio of the a.c. component from the PEM at 50.052 kHz to the a.c. component from the chopper at 779 Hz (processed by a two-channel Zurich HF2LI lock-in amplifier). The errors in the ratios of the FM and AFM components are determined by the instability of the data acquired during the RMCD measurements.\n\\bigskip\n\n\\noindent\n$\\textbf{STEM characterization.}$ Samples for cross-sectional investigations were prepared by standard lift-out procedures using an FEI Helios NanoLab G3 CX focused ion beam system. To minimize sidewall damage and make the samples sufficiently thin to be electron transparent, final milling was carried out at a voltage of 5 kV, followed by fine milling at 2 kV. Aberration-corrected STEM imaging was performed using a Nion HERMES-100 operating at an acceleration voltage of 60 kV and a probe-forming semi-angle of 32 mrad. HAADF images were acquired using an annular detector with a collection semi-angle of 75-210 mrad. EELS measurements were performed using a collection semi-angle of 75 mrad, an energy dispersion of 0.3 eV per channel, and a probe current of $\\sim$20 pA. The Mn-$L$ (640 eV) and Te-$M$ (572 eV) absorption edges were integrated for elemental mapping after background subtraction. The original spectrum images were processed to reduce random noise using a principal component analysis (PCA) tool. 
HAADF image simulations were computed using the STEM\\_CELL software simulation package, matching the microscope experimental settings described above and using a supercell with a thickness of $\\sim$20 nm.\n\n\n\\section{\\label{sec:level1}DATA AVAILABILITY}\nThe data that support the findings of this study will be made available in an open-access repository with a DOI link when the paper is accepted for publication.\n\n\\section{\\label{sec:level3}ACKNOWLEDGEMENT}\nThis work was supported by the National Key R\\&D Program of China (Grants No. 2018YFA0306900, No. 2017YFA0206301, 2019YFA0308602, No. 2019YFA0308000, and 2018YFA0305800), the National Natural Science Foundation of China (Grants No. 62022089, No. 12074425, and No. 11874422), the Strategic Priority Research Program (B) of the Chinese Academy of Sciences (Grant No. XDB33000000), the Beijing Natural Science Foundation (Grants No. JQ21018 and BJJWZYJH01201914430039), and the Fundamental Research Funds for the Central Universities (E1E40209).\n\n\\section{Author contributions}\nY.Y., S.Y., and X.X. conceived the project, designed the experiments, analyzed the results and wrote the manuscript. S.Y. and X.X. conducted the RMCD measurements. H.W. and T.X. grew the $\\rm{MnBi_4Te_7}$ and $\\rm{MnBi_6Te_{10}}$ bulk crystals. M.X., S.T., and H.L. grew the $\\rm{MnSb_2Te_4}$ bulk crystal. Y.H. prepared the few-layer samples. Y.P. and J.Y. performed the magnetic characterizations of the bulk crystals. R.G. performed the STEM characterization under the supervision of W.Z. Y.Z. and Z.L. helped with results analysis. 
All authors discussed the results and contributed to the manuscript.\n\n\\section{ADDITIONAL INFORMATION}\nCompeting interests: The authors declare no competing financial interests.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\nIt is well known that Einstein's General Relativity (GR) is plagued by short-scale divergences, be it in the context of the curvature singularities inside black holes, or the big bang and related singularities in cosmology. In the context of point masses, the gravitational field close to the mass becomes singular, and curvature invariants diverge. These singularities ``survive'' the Newtonian limit, where they resurface as unbounded tidal forces. Seemingly, general covariance is not the guiding principle to ameliorate the situation. On the other hand, these singularities are unphysical and have to be dealt with. What to do?\n\nOne may hope that these issues will be resolved once a consistent UV completion of GR (``quantum gravity'') is known. In the meantime, we can instead attempt to find modifications of GR that feature finite potentials, regarding these modifications as the effective field theory limit of some more fundamental concepts, hitherto unknown. Before jumping into technical details, let us consider some modifications of Newtonian gravity by studying the field of a $\\delta$-like mass distribution\n\\begin{align}\n\\label{eq:delta-density}\n\\rho = 4\\pi G m\\delta(r) \\, .\n\\end{align}\nThen, the Poisson equation gives the well-known Newtonian potential of a point mass,\n\\begin{align}\n\\bigtriangleup \\phi = \\rho \\, , \\quad \\phi(r) = -\\frac{Gm}{r} \\, .\n\\end{align}\nThis potential is singular, and it gives rise to infinite tidal forces. In this essay, we shall uphold Eq.~\\eqref{eq:delta-density}, that is, the notion of sharply concentrated densities. 
Then, a simple Pauli--Villars-type regularization scheme of the Poisson equation can ameliorate the situation:\n\\begin{align}\n\\label{eq:modified-poisson}\n\\bigtriangleup (1 + M^{-2}\\bigtriangleup)\\phi = \\rho \\, , \\quad \\phi(r) = -\\frac{Gm}{r} \\left( 1 - e^{-M r} \\right)\\, ,\n\\end{align}\nwhere $M$ is a large mass scale such that for $M \\rightarrow \\infty$ one recovers the original Poisson equation. For finite $M$, however, the potential is now finite at the origin, $\\phi(0) = -GMm$. Note, however, that it is not regular since its derivative does not vanish: $\\phi'(0) = GmM^2\/2$. If we think of the Newtonian limit of GR, this would imply that the corresponding spacetime has a conical singularity at the origin. There is another problem, however, which is generic to higher-derivative modifications: typically, they bring along massive ghost degrees of freedom, which in turn lead to unstable vacua upon quantization. In the above example, the Green function is\n\\begin{align}\nD(r) = \\frac{1}{\\bigtriangleup(1 + M^{-2}\\bigtriangleup)} = \\frac{1}{\\bigtriangleup} - \\frac{1}{M^2 + \\bigtriangleup} \\, ,\n\\end{align}\nthe second term of which corresponds to a ghost of mass $M$. The particle spectrum of this theory, then, will not only feature a massless graviton of helicity 2, but also a massive ghost mode \\cite{VanNieuwenhuizen:1973fi,Stelle:1977ry}.\n\nA recent approach \\cite{Modesto:2011kw,Biswas:2011ar} that avoids the excitation of ghost modes at tree level \\cite{Shapiro:2015uxa} is aptly called \\emph{ghost-free gravity}. 
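Equation~\\eqref{eq:modified-poisson} is simple enough to check numerically; the snippet below (our own illustration, in units $G=m=1$ with an assumed $M=10$) confirms the finite value $\\phi(0) = -GmM$, the non-vanishing slope $\\phi'(0) = GmM^2/2$, and the Newtonian $-Gm/r$ behavior at large $r$:

```python
import math

def phi(r, G=1.0, m=1.0, M=10.0):
    """Regularized potential phi(r) = -(G m / r)(1 - exp(-M r)).

    The r -> 0 limit is -G m M: finite, but with a non-zero slope,
    which is the conical singularity discussed in the text.
    """
    if r == 0.0:
        return -G * m * M
    return -G * m * (1.0 - math.exp(-M * r)) / r

# Finite-difference slope at the origin, expected ~ G m M^2 / 2 = 50.
slope_at_origin = (phi(1e-4) - phi(0.0)) / 1e-4
```

The same few lines also make the $M \to \infty$ statement tangible: increasing `M` deepens the (finite) central value while leaving the large-$r$ tail untouched.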
To see how this works, let us consider the following modification of the Poisson equation:\n\\begin{align}\ne^{f(\\bigtriangleup)} \\bigtriangleup \\phi = \\rho \\, ,\n\\end{align}\nwhere $f(\\bigtriangleup)$ is some polynomial of the Laplace operator defined as a formal power series (we are forced to introduce again at least one scale $M$ such that we can build the dimensionless combination $M^{-2}\\bigtriangleup$ that enters the power series). Then, the propagator is\n\\begin{align}\nD(r) = \\frac{1}{e^{f(\\bigtriangleup)} \\bigtriangleup} \\, , \n\\end{align}\nbut by construction there are no new poles since the exponential function is nowhere zero on the real axis. This can be made more rigorous in a mathematical sense by noticing that the exponential of a polynomial is a so-called \\emph{entire function} which does not have any poles at finite distance from the origin. Then, one can apply Picard's little theorem for entire functions to ensure that the denominator of the propagator never goes to zero (except at the graviton pole), which is a fundamental mathematical statement that surpasses simple plausibility arguments.\n\nAs it turns out (and as we shall see below), the gravitational potential for some generic choices of the function $f(\\bigtriangleup)$ is regular at the origin. However, as a theory with an infinite number of derivatives, this ghost-free gravity is necessarily non-local at some characteristic scale $\\ell \\equiv M^{-1}$. What is the nature of this non-locality? In this essay we shall treat it as classical, that is, $\\ell \\gg \\ell_\\text{Planck}$. This is consistent with regarding ghost-free gravity as an effective description of gravity, obtained by coarse-graining some underlying, more fundamental quantum description. 
In that sense, $\\ell > 0$ may be regarded as a reminder that ghost-free gravity may have a non-classical origin.\n\nLet us summarize: in both higher-derivative and infinite-derivative theories of gravity, one can attain a regular Newtonian potential that is (i) regular at the origin, and (ii) decays like $\\sim 1\/r$ for large distances. On the classical side there are various attempts to understand the non-linear regime of non-local gravity, whereas on the quantum side the perturbative structures are still to be fully understood.\n\nIn the remainder of this essay, we would like to venture in a somewhat orthogonal direction and study the gravitational field of point sources at mesoscopic distances away from the point source, that is, not directly at the origin, and also not very far away where the gravitational field approaches the standard Newtonian $1\/r$-behavior.\n\n\\section{A framework for linearized higher-derivative gravity}\nLet us now briefly sketch a more rigorous framework that we can employ to study the linearized gravitational field of matter distributions in both higher-derivative and infinite-derivative gravity on a flat Minkowski background. Writing the metric as $g{}_{\\mu\\nu} = \\eta{}_{\\mu\\nu} + h_{\\mu\\nu}$, the most general action quadratic in $h_{\\mu\\nu}$ in $D=d+1$ spacetime dimensions can be written as\n\\begin{align}\n\\begin{split}\nS &= \\frac{1}{2\\kappa} \\int \\mbox{d}^D x \\Big( \\frac12 h^{\\mu\\nu}\\,a(\\Box)\\Box\\,h_{\\mu\\nu}-h^{\\mu\\nu}\\,a(\\Box)\\partial_{\\mu}\\partial_{\\alpha}\\,h^{\\alpha}{}_{\\nu} +h^{\\mu\\nu}\\, c(\\Box)\\partial_{\\mu}\\partial_{\\nu} h \\\\\n&\\hspace{75pt} - \\frac12 h\\,c(\\Box)\\Box h \n+ \\frac12 h^{\\mu\\nu}\\,\\frac{a(\\Box)-c(\\Box)}{\\Box}\\partial_{\\mu}\\partial_{\\nu}\\partial_{\\alpha}\\partial_{\\beta}\\,h^{\\alpha\\beta}\\Big) \\, ,\n\\end{split}\n\\end{align}\nwhere ``$\\Box$'' denotes the d'Alembert operator. 
The two functions $a(\\Box)$ and $c(\\Box)$ are non-local \\emph{form factors} that satisfy $a(0)=c(0)=1$ in order to reproduce linearized GR at large scales. The above action is indeed the most general one since the Bianchi identities relate the possible choices of functions $f(\\Box)$ to just two independent functions.\n\nFor the sake of simplicity, let us consider a simple case where $a(\\Box)=c(\\Box)$. Then, for a stress-energy tensor of a point mass, $T{}_{\\mu\\nu} = m \\delta^t_\\mu \\delta^t_\\nu \\delta^{(d)}(\\vec{r}\\,)$, and the metric ansatz\n\\begin{align}\n\\label{eq:metric-ansatz}\n\\mbox{d} s^2 = -\\left[1-2(d-2)\\phi\\right]\\mbox{d} t^2 + (1+2\\phi)\\mbox{d} \\vec{r}^{\\,2} \\, ,\n\\end{align}\nwhere $\\mbox{d} \\vec{r}^{\\,2} = \\mbox{d} x_1^2 + \\dots + \\mbox{d} x_d^2$ is the metric of flat space in Euclidean coordinates $x_i$ ($i=1,\\dots,d$), one obtains the field equations\n\\begin{align}\na(\\bigtriangleup)\\bigtriangleup \\phi = \\frac{\\kappa m}{d-1} \\delta^{(d)}(\\vec{r}\\,) \\, .\n\\end{align}\nThe Green function for this static problem takes the form\n\\begin{align}\n\\label{eq:green-function}\nD(r) = \\frac{1}{(2\\pi)^{\\frac d2} r^{d-2}} \\int\\limits_0^\\infty \\mbox{d} \\zeta \\frac{\\zeta^{\\frac{d-4}{2}}}{a(-\\zeta^2\/r^2)} J_{\\frac d2 - 1}(\\zeta) = \\frac{1}{2\\pi^2 r} \\int\\limits_0^\\infty \\mbox{d} \\zeta \\frac{\\sin \\zeta}{\\zeta} \\frac{1}{a(-\\zeta^2\/r^2)} \\, .\n\\end{align}\nwhere $J_n$ denotes the Bessel function of the first kind, and in the second equality we inserted $d=3$ (which we shall concern ourselves with in what follows). 
The gravitational potential is then given by\n\\begin{align}\n\\label{eq:potential-master}\n\\phi(r) = -\\frac{\\kappa m}{d-1} D(r) \\, ,\n\\end{align}\nand it is easy to see that for $a=1$ one obtains the well-known result $\\phi(r) = -Gm\/r$ as obtained in the Newtonian limit of GR in four spacetime dimensions ($d=3$).\n\n\\section{Gravitational Friedel oscillations}\nGiven the general solution for the potential, Eq.~\\eqref{eq:potential-master}, we can now study its shape in various higher-derivative as well as infinite-derivative theories of gravity. The Green function \\eqref{eq:green-function} can be evaluated either analytically or numerically; for the sake of this essay, let us focus on analytical results that can easily be written down.\n\nTo that end we shall consider higher-derivative theories of the following class, call them $\\mathrm{HD_N}$,\n\\begin{align}\na(\\Box) = 1 + (-\\Box\/M^2)^N \\, , \\quad N \\in \\mathbb{N} \\, ,\n\\end{align}\nas well as a class of infinite-derivative ``ghost-free'' theories, call them $\\mathrm{GF_N}$:\n\\begin{align}\na(\\Box) = \\exp\\left[ (-\\Box\/M^2)^N \\right] \\, , \\quad N \\in \\mathbb{N} \\, .\n\\end{align}\nClearly these theories satisfy $a=1$ for $M \\rightarrow \\infty$, so for large scales they will reproduce linearized GR. 
The Green functions can be calculated analytically, and one obtains (see also \\cite{Frolov:2015usa})\n\\begin{align}\n\\begin{split}\n\\label{eq:green-functions}\n\\mathrm{GR} : \\quad & D(r) = \\frac{1}{4\\pi r} \\, , \\\\\n\\mathrm{HD_1}: \\quad & D(r) = \\frac{1}{4\\pi r} \\left( 1 - e^{-M r} \\right) \\, , \\\\\n\\mathrm{HD_2}: \\quad & D(r) = \\frac{1}{4\\pi r} \\left[ 1 - e^{-Mr\/\\sqrt{2}} \\cos\\left( Mr\/\\sqrt{2} \\right) \\right] \\, , \\\\\n\\mathrm{HD_3}: \\quad & D(r) = \\frac{1}{4\\pi r} \\left[ 1 - \\tfrac13 e^{-Mr} - \\tfrac23 e^{-Mr\/2} \\cos\\left( \\sqrt{3}Mr\/2 \\right) \\right] \\, , \\\\\n\\mathrm{GF_1}: \\quad & D(r) = \\frac{\\text{erf}(M r\/2)}{4\\pi r} \\, , \\\\\n\\mathrm{GF_2}: \\quad & D(r) = \\frac{M}{6\\pi^2}\\Big[ 3 \\Gamma\\!\\left(\\tfrac54\\right) {}_1\\!F\\!{}_3\\left( \\tfrac14;~ \\tfrac12,\\tfrac34,\\tfrac54;~ y^2 \\right) -2y\\Gamma\\!\\left(\\tfrac34\\right) {}_1\\!F\\!{}_3\\left( \\tfrac34;~ \\tfrac54, \\tfrac32, \\tfrac74;~ y^2 \\right) \\Big] \\, , \\\\\n\\mathrm{GF_3}: \\quad & D(r) = \\frac{M}{\\pi}\\Big[ -\\frac{1}{\\Gamma\\left(-\\tfrac16\\right)}{}_{1\\!}F{}_{\\!5}\\left( \\tfrac16;~ \\tfrac13, \\tfrac 12, \\tfrac 23, \\tfrac56, \\tfrac76;\\,-z^3 \\right) - \\frac{z}{2\\sqrt{\\pi}} {}_{1\\!}F{}_{\\!5}\\left( \\tfrac12;~ \\tfrac23,\\tfrac56,\\tfrac76,\\tfrac43,\\tfrac32;\\,-z^3\\right) \\\\\n&\\hspace{62pt} + \\frac{3z^2}{10\\Gamma\\left(\\tfrac76\\right)} {}_{1\\!}F{}_{\\!5}\\left( \\tfrac56;~ \\tfrac76,\\tfrac43,\\tfrac32,\\tfrac53,\\tfrac{11}{6};\\,-z^3\\right) \\Big] \\, ,\n\\end{split}\n\\end{align}\nwhere ${}_{p\\!}F{}_{\\!q}(a_1,\\dots,a_p;\\,b_1,\\dots,b_q;\\,z)$ denotes the generalized hypergeometric function and we defined the dimensionless radial variables $y \\equiv M^2 r^2\/16$ and $z \\equiv M^2r^2\/36$. 
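As a consistency check of these closed forms, the defining integral in Eq.~\\eqref{eq:green-function} can be evaluated by simple quadrature and compared against the $\\mathrm{GF_1}$ expression. The sketch below (our own, in units $M=1$) uses a plain trapezoidal rule; the hard cutoff `zeta_max` is adequate because the Gaussian factor kills the tail for $Mr = \\mathcal{O}(1)$:

```python
import math

def gf1_quadrature(r, M=1.0, zeta_max=40.0, n=40000):
    """Trapezoidal evaluation of the d = 3 Green function
    D(r) = 1/(2 pi^2 r) * Int_0^inf dz (sin z / z) exp(-z^2/(M r)^2)
    for the GF_1 form factor a(Box) = exp(-Box/M^2)."""
    h = zeta_max / n
    total = 0.0
    for i in range(n + 1):
        z = i * h
        f = 1.0 if z == 0.0 else math.sin(z) / z  # sinc is 1 at z = 0
        f *= math.exp(-(z / (M * r)) ** 2)
        total += 0.5 * f if i in (0, n) else f
    return total * h / (2.0 * math.pi ** 2 * r)

def gf1_closed(r, M=1.0):
    """Closed form quoted above: D(r) = erf(M r / 2) / (4 pi r)."""
    return math.erf(0.5 * M * r) / (4.0 * math.pi * r)
```

The two agree to quadrature accuracy for $Mr = \\mathcal{O}(1)$, and the closed form visibly interpolates between the finite value $M/(4\pi^{3/2})$ at the origin and the GR result $1/(4\pi r)$ at large $Mr$.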
See Fig.~\\ref{fig:potentials} for a visualization of the Green functions: in both higher-derivative and infinite-derivative gravity they are finite at $r=0$, whereas for GR the Green function diverges at the origin.\n\n\\begin{figure}[!htb]\n\\centering\n\\subfloat[Higher-derivative theories for $N=1,2,3$.]\n{\n \\includegraphics[width=0.5\\textwidth]{potentials-hd-log-log.pdf}\n}\n\\subfloat[Infinite-derivative theories for $N=1,2,3$.]\n{\n \\includegraphics[width=0.5\\textwidth]{potentials-gf-log-log.pdf}\n}\n\\caption{The Green functions of $\\mathrm{HD_N}$ and $\\mathrm{GF_N}$ theories visualized for $N=1,2,3$. Whereas $N=1$ approaches the $1\/r$ power law directly, there are oscillations in the cases $N=2,3$.}\n\\label{fig:potentials}\n\\end{figure}\n\nThis constitutes a major insight of these calculations in the literature, and, at the linear level, one can easily extend these studies to $p$-branes in higher-dimensional Minkowski space \\cite{Boos:2018bxf}. Observe, however, the particular shape of the Green functions a bit more closely. There appears to be a substructure: whereas the $N=1$ Green functions decay like $1\/r$ for large values of the dimensionless radial distance $M r$, there exist noticeable oscillations in the potentials for the cases $N=2,3$ \\cite{Modesto:2011kw,Modesto:2016ofr,Edholm:2016hbt,Conroy:2017nkc,Boos:2018bxf}; for a visualization, see Fig.~\\ref{fig:potentials}.\n\nThese oscillations have direct consequences for the local energy density $\\rho$ perceived by a static observer whose 4-velocity is tangential to $\\ts{\\xi} = \\partial_t$. For the metric ansatz \\eqref{eq:metric-ansatz} one obtains\n\\begin{align}\n\\rho \\equiv G{}_{\\mu\\nu} \\xi{}^\\mu \\xi{}^\\nu = (1-d)\\bigtriangleup \\phi \\, ,\n\\end{align}\nwhere $G{}_{\\mu\\nu}$ denotes the linearized Einstein tensor. For $N=1$ theories, this quantity is positive definite, whereas for $N=2,3$ it undergoes oscillations around zero that decay in strength with distance $M r$. 
See a visualization of this behavior in Fig.~\\ref{fig:energy-densities}.\n\n\\begin{figure}[!htb]\n\\centering\n\\subfloat[Higher-derivative theories for $N=1,2,3$.]\n{\n \\includegraphics[width=0.5\\textwidth]{energy-densities-hd-log.pdf}\n}\n\\subfloat[Infinite-derivative theories for $N=1,2,3$.]\n{\n \\includegraphics[width=0.5\\textwidth]{energy-densities-gf-log.pdf}\n}\n\\caption{The absolute value of the local energy density $\\rho \\equiv G{}_{\\mu\\nu}\\xi{}^\\mu\\xi{}^\\nu$, $\\ts{\\xi} = \\partial_t$, undergoes oscillatory behavior in the cases $N=2,3$, whereas there are no oscillations in the case $N=1$ (both for higher-derivative and infinite-derivative theories). Close to the origin, $M r \\approx 0$, one has $\\rho > 0$; at the points of diverging slope, the energy density vanishes. Between these points, it switches its overall sign.}\n\\label{fig:energy-densities}\n\\end{figure}\n\nUsing these diagrams, we can extract some typical wavelengths: from Eq.~\\eqref{eq:green-functions} it is clear that the wavelengths of oscillation are constant in the cases $N=2,3$ in higher-derivative gravity due to the explicit appearance of trigonometric functions. For the infinite-derivative theories the behavior is more involved. The oscillations still scale with $M{}^{-1}$, but they decay with increasing distance $M r$. 
A rather qualitative fit gives\n\\begin{align}\n\\mathrm{GF_2} : \\quad M\\delta_2 \\sim 9.68 \\, (M r)^{-0.28} \\, , \\qquad\n\\mathrm{GF_3} : \\quad M\\delta_3 \\sim 8.28 \\, (M r)^{-0.16} \\, ,\n\\end{align}\nbut a closer inspection reveals that the precise wavelengths oscillate over and under these curves, see Fig.~\\ref{fig:wavelengths} for more details.\n\n\\begin{figure}[!htb]\n\\centering\n\\subfloat\n{\n \\bgroup\n\t\\def\\arraystretch{1.2}\n\t\\footnotesize\n\t\\begin{tabular}{rccrcc}\n\t$M r$ & $M\\delta_2\/2$ & ~~ & $M r$ & $M\\delta_3\/2$ \\\\ \\hline\n\t 4.59 & 3.16 && 4.65 & 3.23 \\\\\n\t 7.75 & 2.76 && 7.88 & 3.02 \\\\\n\t10.51 & 2.53 && 10.90 & 2.86 \\\\\n\t13.04 & 2.38 && 13.76 & 2.75 \\\\\n\t15.42 & 2.26 && 16.51 & 2.66 \\\\\n\t17.68 & 2.17 && 19.17 & 2.58 \\\\\n\t19.85 & --- && 21.75 & ---\n\t\\end{tabular}\n\t\\egroup\n}\n\\qquad\n\\subfloat{\\adjustbox{raise=-6pc}{\\includegraphics[width=0.5\\textwidth]{wavelength-fit.pdf}}}\n\\caption{We evaluated the zeroes of the energy density for $\\mathrm{GF_2}$ theory and $\\mathrm{GF_3}$ theory numerically, from which we can read off (half the) wavelength $M\\delta_N$. Contrary to the higher-derivative theories, in infinite-derivative theories the wavelengths are not constant, but decrease with increasing $M r$. To first approximation, this can be described by simple power laws.}\n\\label{fig:wavelengths}\n\\end{figure}\n\nIt seems that these oscillations occur irrespective of the precise modification method of GR. While they are absent for $N=1$, we should remark that the case $N=1$ is somewhat degenerate for both classes of theories considered: in the higher-derivative framework the potential is indeed finite at the origin, but its first derivative is non-zero, leading to a conical singularity (as well as a diverging energy density, see Fig.~\\ref{fig:energy-densities}). 
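The quoted $\\mathrm{GF_2}$ power law can be reproduced directly from the tabulated zeros in Fig.~\\ref{fig:wavelengths}: a least-squares fit of $\\ln(M\\delta_2)$ against $\\ln(Mr)$ over the six complete rows recovers an exponent close to $-0.28$ and an amplitude close to $9.68$ (our own re-fit, in plain Python):

```python
import math

# Zeros of the GF_2 energy density (table in Fig. "wavelengths"):
# dimensionless radius M r and half-wavelength M delta_2 / 2.
r_vals  = [4.59, 7.75, 10.51, 13.04, 15.42, 17.68]
half_wl = [3.16, 2.76, 2.53, 2.38, 2.26, 2.17]

# Least-squares fit of ln(delta) = ln(A) + p * ln(r) in log-log space.
xs = [math.log(r) for r in r_vals]
ys = [math.log(2.0 * h) for h in half_wl]       # full wavelengths
mx = sum(xs) / len(xs)
my = sum(ys) / len(ys)
p = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
    sum((x - mx) ** 2 for x in xs)
A = math.exp(my - p * mx)
# p comes out near -0.28 and A near 9.7, i.e. M delta_2 ~ 9.68 (M r)^{-0.28}.
```

The residuals of this fit alternate in sign, which is the over-and-under oscillation of the precise wavelengths mentioned in the text.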
As it turns out, the scalar case of $N=1$ infinite-derivative theory has time-dependent instabilities \\cite{Frolov:2016xhq}.\n\nIn other words: for all regular versions of both higher-derivative and infinite-derivative gravity, these oscillations do occur at distances where $M r \\sim \\mathcal{O}(1)$ before they decay roughly like a power law. Since these theories are classical, and hence $M \\ll m_\\text{Planck}$, the typical distance $r \\sim M^{-1} \\mathcal{O}(1)$ might be accessible to experiment at some point in time.\n\nOscillations of energy density, somewhat similar to the ones we described here, are well known in condensed matter physics where they are called \\emph{Friedel oscillations} \\cite{Friedel:1952,Friedel:1954,Friedel:1958}: upon insertion of a positively charged impurity into a cold metal the overall charge density around this impurity exhibits spatial oscillations. This effect is usually calculated at 1-loop using the random phase approximation wherein the photon propagator picks up a fermion-loop as a correction term. In other words, the screening mechanism of an electric charge inside a cold metal is to be treated as a scattering problem \\cite{Altland:2006}.\n\nThere is also a physically intuitive explanation: in the Jellium model, electrons in a metal at low temperature fill up the Fermi sphere up to a maximum momentum of $k_\\ind{F}$ while the positive ions form a rigid background structure. 
Electrons close to the Fermi momentum $k_\\ind{F}$ are most prone to interact with the impurity, and since these electrons are non-local objects (scale of non-locality $\\sim k_\\ind{F}^{-1}$), they cannot compensate the positive charge exactly: they overcompensate the charge, and thereby induce a spatially oscillating charge distribution.\n\n\\section{Discussion and conclusions}\n\nIn the recent literature, there has been a lot of focus on (i) the classical non-linear behavior and (ii) the perturbative quantum structure of infinite-derivative gravity. In this essay, we pursued a somewhat different direction by focussing on the linearized theory at mesoscopic distances, $M r \\sim \\mathcal{O}(1)$, where both the gravitational potential as well as the local energy density exhibit fluctuations, the latter assuming negative values in some regions. For values of $M \\ll m_\\text{Planck}$ these oscillations might become observable at some point in the future \\cite{Accioly:2016qeb,Accioly:2016etf,Perivolaropoulos:2016ucs}.\n\nIn analogy to condensed matter physics and Friedel oscillations in cold metals, we think it may be appropriate to call the oscillations described in this essay \\emph{gravitational Friedel oscillations.} Since they not only appear in higher-derivative theories (wherein they can be interpreted as spurious effects occurring at the Pauli--Villars regularization scale due to the presence of complex poles \\cite{Accioly:2016qeb,Giacchini:2016xns}) but also survive the ghost-free limit, we think that these oscillations are of some physical relevance.\n\nAt the present stage, the perturbative structure of infinite-derivative ``ghost-free'' gravity is not fully understood \\cite{Shapiro:2015uxa,Giacchini:2016xns,Biswas:2013kla,Talaganis:2016ovm} (see, however, the recent work \\cite{Calcagni:2018gke,Buoninfante:2018mre}), and it is also not clear whether ghost-ridden higher-derivative theories can be considered physically viable classical 
theories at distances close to the Pauli--Villars regularization scale $M$. It would also be interesting to study the linearized gravitational field in other non-local modifications of gravity \\cite{Hehl:2009es} arising from non-local constitutive relations rather than from a quadratic tensor action with somewhat ad hoc non-local form factors.\n\nSince the oscillations occur in both higher-derivative and infinite-derivative classes of theories, quite independently of one another, we hope that the above observations may prove helpful in extracting observational criteria on the gravitational potential at mesoscopic distances.\n\n\\section*{Acknowledgements}\nThe author benefited from discussions with Valeri P.\\ Frolov, Hennadii Yerzhakov, and Andrei Zelnikov (all Edmonton), as well as Breno Loureiro Giacchini (Rio de Janeiro), and is moreover grateful for a Vanier Canada Graduate Scholarship administered by the Natural Sciences and Engineering Research Council of Canada as well as for the Golden Bell Jar Graduate Scholarship in Physics by the University of Alberta.\n\n\\begin{singlespace}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\nDifferent natural landforms such as valleys, forests, and other areas have a significant impact on the propagation of radio waves\\cite{b1}. The forest area is covered by dense trees, which makes the multipath fading and non-line-of-sight channel very complex, in a way that differs from the effects of urban buildings\\cite{b7}. Some main factors affecting radio wave propagation, such as the distance between the transmitting and receiving antennas, the height of the antenna, and the type of ground objects, are reflected in the path loss formula as variable functions\\cite{b8}. However, in different geographical environments, topographic relief, vegetation height and density, climate, and other factors have various degrees of influence on propagation\\cite{b9}. 
Therefore, when these propagation models are applied in specific environments, the corresponding variable functions should be different, and it is necessary to identify a reasonable channel model. The accuracy of the channel model is very important for network deployment, because an inappropriate channel model will lead to a significant reduction in network coverage\\cite{b10}. \n\nDifferent typical propagation models have different characteristics, and they are applicable to different environments. Typical propagation models include the Okumura-Hata model\\cite{b18}, the COST-231 Hata model\\cite{b19}, the SPM model\\cite{b20}, etc. These models are suitable for cities, suburbs and villages, but not for forest areas. In \\cite{b16}, the authors proposed the Erceg model, which is based on extensive experimental data collected by AT\\&T Wireless Services across the United States in 95 existing macro cells at 1.9 GHz. The terrains are classified into three categories. Category A is hilly terrain with moderate-to-heavy tree density, category B is hilly terrain with light tree density or flat terrain with moderate-to-heavy tree density, and category C is mostly flat terrain with light tree density. Soon after, in \\cite{b17}, Stanford University proposed the Stanford University Interim (SUI) channel model, a set of six channel models representing three terrain types and a variety of Doppler spreads, delay spreads and line-of-sight\/non-line-of-sight conditions that are typical of the continental US. The terrain types A, B, and C are the same as those defined earlier for the Erceg model. However, these models were proposed earlier and are mainly applicable to the North American environment.\n\nOther scholars have proposed channel models for specific forest environments, in addition to the typical channel models. 
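For orientation, the "typical" models mentioned above are simple closed-form expressions. A commonly quoted form of the Okumura-Hata median path loss for small/medium cities is sketched below (the function name and example values are ours, and the coefficients should be checked against \\cite{b18} before use):

```python
import math

def hata_urban(f_mhz, d_km, h_base_m, h_mobile_m):
    """Okumura-Hata median path loss in dB, small/medium-city form.

    Commonly quoted validity range: 150-1500 MHz, 1-20 km links,
    base-station antennas 30-200 m, mobile antennas 1-10 m.
    """
    # Mobile-antenna height correction a(h_m) for small/medium cities.
    a_hm = ((1.1 * math.log10(f_mhz) - 0.7) * h_mobile_m
            - (1.56 * math.log10(f_mhz) - 0.8))
    return (69.55 + 26.16 * math.log10(f_mhz)
            - 13.82 * math.log10(h_base_m) - a_hm
            + (44.9 - 6.55 * math.log10(h_base_m)) * math.log10(d_km))

# Assumed example: 605 MHz, 5 km link, 30 m base station, 1.5 m mobile.
loss_db = hata_urban(605.0, 5.0, 30.0, 1.5)
```

Nothing in this expression accounts for foliage, which is precisely why forest-specific excess-attenuation terms are needed on top of such baselines.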
In \\cite{b11}, the authors studied the oblique leaf path of roadside woodland, including three vegetation types, and obtained attenuation loss results for the oblique path at C-band, yielding 0.9 dB overall improvement and up to 20 dB regional improvement in root-mean-square errors. In \\cite{b12}, the authors investigated the propagation behavior of 28-GHz millimeter waves in coniferous forests, modeled the basic transmission loss, and proposed novel fully automated site-specific models. The root-mean-square deviations between model predictions and simulation results are 11.3 dB for an ITU woodland model and 6.8 dB for a site-specific model published in that paper. In \\cite{b13}, the authors characterized the wireless channel for a short-range, temperate, medium-density forest environment in the 5-GHz band. In \\cite{b14}, the authors presented measurement results and proposed empirical models of ultra-wideband signal propagation in forest environments after recording more than 22000 measurements at 165 sites in four different forest environments in Virginia and Maryland. However, each of these works focuses on only a single scenario.\n\nIn this paper, we carry out propagation measurement campaigns in two different types of forest areas and record the measured values of signal propagation loss in those areas. Because signal attenuation in forest areas is large, lower frequency bands propagate better; moreover, in order to study communication in the emergency frequency band, we adopted the 605 MHz band for measurement. Then we use three classical large-scale path loss models and forest excess attenuation models to characterize the measured data, providing a comprehensive model comparison. 
Through the analysis of the results, we develop a new forest-specific path loss model, which performs better than representative existing models.\n\n\\section{Description of measurements}\n\n\n\nIn order to study the impact of different forest areas on signal propagation, we selected two areas of different types, Jiaozi snow mountain and Pudu-river dry-hot valley, where channel propagation measurement campaigns were conducted in March 2022.\n\n\\begin{figure}[htbp]\n\\centerline{\\includegraphics[width=8.5cm]{xueshan.pdf}}\n\\caption{Geographical environment of Jiaozi snow mountain.}\n\\label{fig}\n\\end{figure}\n\nJiaozi snow mountain is located at the junction of Luquan County and Dongchuan District in Yunnan Province, China, with a maximum altitude of 4344.1 meters (m), a minimum altitude of 2300 m, and a relative height difference of more than 2000 m. Jiaozi snow mountain is a seasonal snow mountain, the lowest snow mountain in the northern hemisphere. There are 15000 mu of Abies lanceolata primary secondary forest and Rhododendron forest, which are typical dense forest scenes. Fig. 1 shows the geographical scenario of Jiaozi snow mountain.\n\nPudu-river dry-hot valley is located in the Pudu river area in Yunnan Province as well. There are seven vegetation types, 11 vegetation subtypes, 17 formation groups, and 28 formations in the reserve, including dry-hot valley hard-leaf evergreen oak forest, semi-humid evergreen broad-leaved forest, mountaintop bryophyte dwarf forest, cold temperate shrub, and cold temperate meadow. Fig. 2 shows the geographical scenario of Pudu-river dry-hot valley.\n\n\n\\begin{figure}[htbp]\n\\centerline{\\includegraphics[width=8.5cm]{hegu.pdf}}\n\\caption{Geographical environment of Pudu-river dry-hot valley.}\n\\label{fig}\n\\end{figure}\n\n\n\\begin{figure}[htbp]\n\\centerline{\\includegraphics[width=8.5cm]{equip.pdf}}\n\\caption{Transmitter and receiver equipment. 
The backpack base station on the left is the transmitter and the handheld device on the right is the receiver.}\n\\label{fig}\n\\end{figure}\n\n\n\n\\begin{table}[htbp]\n\\caption{MEASUREMENT INFORMATION}\n\\begin{center}\n\\begin{tabular}{|l|l|}\n\\hline\nBase station information & Parameters or information \\\\ \\hline\nTransmitting antenna height & 1.5 m \\\\ \\hline\nCoordinate of Jiaozi snow mountain & (102.848226, 26.0845327) \\\\ \\hline\nCoordinate of Pudu-river dry-hot valley & (102.7342136, 26.02112129) \\\\ \\hline\nBase station transmission power & 43 dBm \\\\ \\hline\nBase station antenna type & Omnidirectional antennas \\\\ \\hline\nTransmit antenna gain & 5 dBi \\\\ \\hline\nReceiving antenna gain & 0 dBi \\\\ \\hline\nTransmission band & 605 MHz \\\\ \\hline\nCell reference signal power & 15.2 dBm \\\\ \\hline \n\\end{tabular}\n\\end{center}\n\\end{table}\n\nIn the propagation measurement campaigns, we used a backpack base station as the signal transmitter (Tx) in a fixed position, equipped with an omni-directional antenna, while a handheld device was used as the receiver (Rx), with an omni-directional receiving antenna inside. Tx and Rx devices are shown in Fig. 3. The transmit power of the base station is 43 dBm, the transmit antenna gain is 2.5 dBi, the receive antenna gain is 0 dBi, and the carrier frequency is 605 MHz. The relevant information of the measurements is provided in Table \\uppercase\\expandafter{\\romannumeral1}. We recorded the longitude and latitude of every transmission and reception position, adopted a continuous wave as the signal source to transmit the signal, and carried out the on-board test on the preset routes. \n\nWe collected and recorded the pilot signal received power at the on-board test cell phone. Since the test aimed to obtain the path loss data in the actual network, the test data truly reflects the propagation of broadband signals in the local wireless environment. 
As there is no need to set up a dedicated base station, this test scheme is simple and convenient. Fig. 4 illustrates the measurement trajectory, where different colors of the trajectory indicate different reference signal received power: green represents the minimum and red represents the maximum. The measurement data in Jiaozi snow mountain and Pudu-river dry-hot valley are shown in Fig. 5.\n\n\n\n\n\\begin{figure}[t]\n\\centering \n\\subfigure[]{\n\\label{Fig.sub.1}\n\\includegraphics[width=8cm,height = 5cm]{xueshan_luxian.pdf}}\\\\\n\\subfigure[]{\n\\label{Fig.sub.2}\n\\includegraphics[width=8cm,height = 5cm]{hegu_luxian.pdf}}\n\\caption{Measurement trajectory in Jiaozi snow mountain and Pudu-river dry-hot valley. The colors in the graphs indicate the reference signal received power value, where green and red represent the lowest and highest power, respectively.}\n\\label{1}\n\\end{figure}\n\n\n\\begin{figure}[t]\n\\centering \n\\subfigure[]{\n\\label{Fig.sub.1}\n\\includegraphics[width=9cm]{tu4.eps}}\\\\\n\\subfigure[]{\n\\label{Fig.sub.2}\n\\includegraphics[width=9cm]{tu3.eps}}\n\\caption{Measurement data set in Jiaozi snow mountain and Pudu-river dry-hot valley.}\n\\label{1}\n\\end{figure}\n\n\n\n\n\n\\section{Experimental results}\n\nAfter obtaining the data set, we consider three path loss models as baseline generic models: the alpha-beta-gamma (ABG) model\\cite{b2}\\cite{b6}, the close-in free-space reference distance (CI) model\\cite{b2}\\cite{b6}\\cite{b3}, and the free-space path loss (FSPL) model. 
The expressions of CI, ABG, and FSPL are as follows:\n\n\\begin{small}\n\\begin{equation}\n\\mathrm{P L}_{\\mathrm{CI} }=10 n \\log_{10}{\\frac{d}{d_{0} } }+20\\log_{10}{\\left ( \\frac{4\\pi\\times 10^{9}}{c}\\right )}+20 \\log_{10}{f} \n\\end{equation}\n\\end{small}\n\\begin{small}\n\\begin{equation}\n\\mathrm{P L}_{\\mathrm{ABG} }=10 \\alpha\\log_{10}{d} +\\beta +10 \\gamma\\log_{10}{f} \n\\end{equation}\n\\end{small}\n\\begin{small}\n\\begin{equation}\n\\mathrm{P L}_{\\mathrm{FSPL} }=20\\log_{10}{(\\frac{4\\pi fd\\times 10^{9}}{c} ) }\n\\end{equation}\n\\end{small}\n\n\\noindent where $n$ denotes the path loss exponent (PLE), $d_{0}$ is the close-in free-space reference distance and is set to 1 m\\cite{b2}, $d$ is the 3-D T-R separation distance in meters, $\\alpha$ and $\\gamma$ are coefficients showing the dependence of path loss on distance and frequency, respectively, $\\beta$ is an optimized offset value for path loss in decibels, $f$ is the carrier frequency in GHz, and $c$ is the speed of light.\n\nNote that the CI model has a very similar form compared with the ABG model, but has fewer model parameters and a more solid physical basis\\cite{b2}\\cite{b6}. Since additional attenuation is caused by the occlusion of vegetation in the forest area, we use the ITU horizontal forest model\\cite{b4} as the excess path loss model. The expression of the ITU horizontal forest model is as follows:\n\n\\begin{small}\n\\begin{equation}\n\\mathrm{P L}_{\\mathrm{ITU-H} }=A_m\\left[ 1-e^ {\\left( -d\\mu\/A_m \\right)} \\right]\n\\end{equation}\n\\end{small}\n\n\\noindent where $\\mu$ denotes the specific attenuation for very short vegetative paths (dB\/m) and $A_m$ denotes the maximum attenuation for one terminal within a specific type and depth of vegetation (dB). Next, we combine the FSPL model and the ITU horizontal forest model to fit the measured data\\cite{b5}. 
The expression of FSPL-H is given by:\n\n\\begin{small}\n\\begin{equation}\n\\mathrm{P L}_{\\mathrm{FSPL-H} }=20\\log_{10}{ (\\frac{4\\pi fd\\times 10^{9}}{c} )} +A_m\\left[ 1-e^ {\\left( -d\\mu\/A_m \\right)} \\right]\n\\end{equation}\n\\end{small}\n\n\\begin{figure}[t]\n\\centering \n\\subfigure[]{\n\\label{Fig.sub.1}\n\\includegraphics[width=8.7cm]{tu1.eps}}\\\\\n\\subfigure[]{\n\\label{Fig.sub.2}\n\\includegraphics[width=8.5cm]{tu2.eps}}\n\\caption{Measured path loss data and fitting results of the two classical models, the ITU forest excess attenuation model, and the proposed BHF model.}\n\\label{1}\n\\end{figure}\n\n\nFor short-distance specific forest scenes, we build a simple but powerful scene-specific model by more carefully characterizing forest-specific propagation loss, which simplifies the expression and parameters compared with directly combining the two types of models presented above. We name the model the Beijing University of Posts and Telecommunications horizontal forest model (BHF). 
The expression of BHF is as follows:\n\n\\begin{small}\n\\begin{equation}\n\\mathrm{P L}_{\\mathrm{BHF}}=10\\alpha \\log_{10}{d}+\\beta+\\zeta \\tanh (d \/20)+20 \\log_{10}{f} \n\\end{equation}\n\\end{small}\n\n\\noindent where $\\alpha$ is a coefficient showing the dependence of path loss on the conventional log-scaled distance, $\\beta$ is an optimized offset value for path loss in decibels, and $\\zeta$ is a coefficient characterizing the path loss caused by vegetation attenuation.\n\n\n\\begin{table}[htbp]\n\\caption{Optimized Model Parameters In The Baseline And Proposed Path Loss Models}\n\\begin{center}\n\\begin{tabular}{|l|l|l|l|l|}\n\\hline\nSite & Model & \\begin{tabular}[c]{@{}l@{}}$n$(CI)\\\\$\\alpha$(ABG)\\\\$A_m$(FSPL-H) \\\\ $\\alpha$(BHF)\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}$\\beta$(ABG)\\\\$\\mu$(FSPL-H) \\\\$\\beta$(BHF)\\end{tabular} & \\begin{tabular}[c]{@{}l@{}}$\\gamma$(ABG)\\\\ $\\zeta$(BHF)\\end{tabular} \\\\ \\hline\nJiaozi snow & CI & 3.8 & - & - \\\\ \\cline{2-5} \nmountain & ABG & 2.9 & 31.8 & 2.0 \\\\ \\cline{2-5} \n\\multicolumn{1}{|c|}{} & FSPL-H & 40.0 & 1.2 & - \\\\ \\cline{2-5} \n & BHF & 1.6 & -1305.2 & 1407.0 \\\\ \\hline\nPudu-river & CI & 4.0 & - & - \\\\ \\cline{2-5} \ndry-hot valley & ABG & 1.9 & 57.7 & 2.0 \\\\ \\cline{2-5} \n & FSPL-H & 43.8 & 4.6 & - \\\\ \\cline{2-5} \n & BHF & 0.8 & 48.3 & 64.2 \\\\ \\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\nThe BHF model captures the saturating attenuation caused by vegetation: it is modified from the ITU horizontal model, with the additional attenuation following a hyperbolic tangent function. Compared with the ITU horizontal model in (4), the additional vegetation attenuation varies more gently with distance in the BHF model. We fit the BHF model and the three baseline models by using the least-squares method to find the optimal model parameters. 
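As an aside on reproducibility, the least-squares fit can be sketched in a few lines. Because the BHF path loss is linear in the parameters (alpha, beta, zeta) once the frequency and distances are fixed, the fit reduces to an ordinary linear least-squares problem. The distance grid, the noise level, and the "true" parameter values below are synthetic stand-ins chosen for illustration; the actual measurement samples are not reproduced here.

```python
import numpy as np

F_GHZ = 0.605  # measurement carrier frequency, 605 MHz expressed in GHz

def bhf(d, alpha, beta, zeta):
    """BHF path loss in dB at 3-D T-R separation distance d (meters)."""
    return (10 * alpha * np.log10(d) + beta
            + zeta * np.tanh(d / 20) + 20 * np.log10(F_GHZ))

# Synthetic stand-in for the measured (distance, path loss) samples.
rng = np.random.default_rng(1)
d = np.linspace(10.0, 500.0, 200)
pl_meas = bhf(d, 1.6, 40.0, 30.0) + rng.normal(0.0, 3.0, d.size)

# PL_BHF is linear in (alpha, beta, zeta), so the least-squares fit
# reduces to one call to lstsq on the design matrix below.
A = np.column_stack([10 * np.log10(d), np.ones_like(d), np.tanh(d / 20)])
coef, *_ = np.linalg.lstsq(A, pl_meas - 20 * np.log10(F_GHZ), rcond=None)
alpha, beta, zeta = coef
rmse = np.sqrt(np.mean((bhf(d, alpha, beta, zeta) - pl_meas) ** 2))
```

Applied to the measured data, a routine of this form yields per-site parameter estimates of the kind reported in Table II, with the RMSE computed from the fit residuals.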
The optimized model parameters are shown in Table \\uppercase\\expandafter{\\romannumeral2}. It can be seen from Table \\uppercase\\expandafter{\\romannumeral2} that the coefficients of the CI and FSPL-H models are relatively stable, with little change across environments, due to the free-space reference term acting as an anchor point. In contrast, the coefficients of the ABG and BHF models are strongly influenced by the environment.\n\nFig. 6 shows the fitting results of the CI, ABG, FSPL-H, and BHF models. It can be observed from Fig. 6 that the CI and ABG fits appear as straight lines: because path loss in these models depends on the 3-D T-R separation distance only through a multiple of $\\log_{10}{d}$, path loss is a linear function of log-scaled distance once the other coefficients are given. Since the adaptation ability of straight lines is relatively limited, the fitting errors of the CI and ABG models are large. The CI model has fewer parameter variables than the ABG model, rendering larger errors. The relationship between $d$ and the parameters of the other two models is more complex, so the fitting effect is better. It can be seen that the BHF model has a stronger capability of adapting to the trend of the measured data in Fig. 
6(b).\n\n\\begin{table}[htbp]\n\\caption{Overall Model Performance: RMSE (dB) and Number of Parameters}\n\\begin{center}\n\\begin{tabular}{|l|lll|l|}\n\\hline\n- & \\multicolumn{3}{l|}{Traditional} & Proposed \\\\ \\hline\nModel & \\multicolumn{1}{l|}{CI} & \\multicolumn{1}{l|}{ABG} & FSPL-H & BHF \\\\ \\hline\nJiaozi snow mountain & \\multicolumn{1}{l|}{4.6} & \\multicolumn{1}{l|}{4.1} & 3.6 & 3.0 \\\\ \\hline\nPudu-river dry-hot valley & \\multicolumn{1}{l|}{13.1} & \\multicolumn{1}{l|}{9.7} & 9.3 & 8.3 \\\\ \\hline\nNumber of model parameters & \\multicolumn{1}{l|}{1} & \\multicolumn{1}{l|}{2} & 2 & 3 \\\\ \\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\nWe use the root-mean-square error (RMSE) and the number of parameters to quantify the fitting effect of the models, which are given in Table \\uppercase\\expandafter{\\romannumeral3}. Note that for single frequencies, $\\gamma$ in the ABG model is set to 2; thus there are actually two model parameters in the ABG model.\n\nIt can be seen from Table \\uppercase\\expandafter{\\romannumeral3} that the fitting error for Jiaozi snow mountain is smaller than that for Pudu-river dry-hot valley in general, because the different types and densities of vegetation and the geographical environment have an impact on the transmission of signals. The terrain of Pudu-river dry-hot valley is steeper than that of Jiaozi snow mountain. According to Table \\uppercase\\expandafter{\\romannumeral3}, the RMSEs of the BHF model are 3.0 dB and 8.3 dB for Jiaozi snow mountain and Pudu-river dry-hot valley, respectively. This shows that the BHF model proposed in this paper has the best fitting effect overall and is more suitable for the forest environment. \n\n\\section{Conclusion}\n\nIn this paper, we have provided results from real-world measurement campaigns to assess channel characteristics for the forest environment. 
The signal measurement data near Jiaozi snow mountain and Pudu-river dry-hot valley are used to compare the attenuation of vegetation with comprehensive large-scale path loss models. Inspired by these results, we have developed a new site-specific path loss model. Compared with typical traditional models, the model proposed in this paper yields significantly smaller fitting errors with acceptable computational complexity, and is thus more suitable for the forest environment.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\nA problem arising throughout both nuclear theory---from {\\it ab-initio} nuclear\ntheory \\cite{Ekstrom2019,Piarulli2016} to density functional theory (DFT)\n\\cite{Bender2003,schunck2019energy}---and supervised machine learning is the\nfitting of a model to data. Formally, given a computer model $m$ evaluated at\ninputs $\\bm{\\nu}_1, \\ldots, \\bm{\\nu}_{\\nd}$, one seeks a parameter vector $\\xb \\in \\R^{\\nx}$ so that\nthe outputs $m\\left(\\bm{\\nu}_1;\\xb\\right), \\ldots, m\\left(\\bm{\\nu}_{\\nd};\\xb\\right)$\nagree with data $\\db=[d_1, \\ldots, d_{\\nd}]$ within the assumed uncertainties. For example, the inputs $\\bm{\\nu}$\nmight characterize a particular configuration of the atomic nucleus (defined by the number of its\nproton and neutron constituents). For such a case, the\ndata might include observables such as experimentally measured binding energies and charge\nradii. 
Or, generally, the inputs could correspond to 64-pixel by 64-pixel images, and the data could represent labels such as ``cat'' or ``banana.'' \n\nAlthough fitting can result in many different formulations of optimization problems, the most common form in \nthe physical sciences \nfollows a $\\chi^2$-convention wherein independence of errors is assumed and one seeks to solve\n\\begin{equation}\n\\label{eq:function}\n\\min_{\\xb \\in \\R^{\\nx}} f(\\xb), \\qquad \\mbox{where } f(\\xb) = \\sum_{i=1}^{\\nd} \\left(\\frac{m\\left(\\bm{\\nu}_i;\\xb\\right) - d_i}{\\sigma_i}\\right)^2,\n\\end{equation}\nwith $\\sigma_1, \\ldots, \\sigma_{\\nd}>0$ often interpreted as experimental and\/or model error bars \\cite{DNR14}. \nMore general types of such objective functions (also called ``loss functions'' or ``penalty functions'') include those that take into account correlations, such as\n\\begin{equation*}\n\\label{eq:correlated}\nf(\\xb) = \\sum_{i=1}^{\\nd} \\sum_{j=1}^{\\nd} w_{i,j} \\left(m\\left(\\bm{\\nu}_i;\\xb\\right) - d_i\\right) \\left(m\\left(\\bm{\\nu}_j;\\xb\\right) - d_j\\right).\n\\end{equation*}\n\nIn this paper we address the squared-loss formulation in \\cref{eq:function},\nwhich we generalize as the finite sum of squares of nonlinear functions of the\nparameter vector $\\xb$; that is,\n\\begin{equation}\n\\label{eq:genfun}\nf(\\xb) = \\sum_{i=1}^{\\nd} F_i(\\xb)^2.\n\\end{equation}\nThroughout the text, we refer to these general functions, $F_i$, as \\emph{component functions}.\nSince objective functions of the form in \\cref{eq:genfun} are found throughout supervised learning, \nmany optimization methods used for training machine learning models are applicable here.\nIn contrast to\nstandard fitting problems that arise in nuclear theory, however, the number of data,\n$\\nd$, used when training machine learning models tends to be massive. 
For\nexample, as of August 2020, the open images dataset \n\\cite{openimages20}\ncontained nearly 60 million image labels. \nWhen fitting nuclear models, \nthe value of $\\nd$ is typically many orders of magnitude smaller; this is the case in the study conducted in this paper. \n\nA natural question is thus whether the algorithms used to train machine learning models can benefit the physicist who has a computer model and desires to solve fitting problems. \nHere we investigate the strengths and limitations of different optimization algorithms for minimizing \\cref{eq:genfun} through a case study from nuclear theory. We focus on the ``derivative-free'' case where gradients \n$\\nablab f(\\xb), \\nablab F_1(\\xb), \\ldots, \\nablab F_{\\nd}(\\xb)$ \nand higher-order derivatives \nare unavailable for use in the optimization. \nThis is often the setting when the computer models are composed of iterative components \\cite{more2014nd,WSS15} or when dependencies on legacy computer codes pose obstacles to algorithmic differentiation \\cite{Berz1996}, which is a key enabling technology for deep learning \\cite{2015arXiv150205767G}.\n\n\nIn \\cref{sec:algorithms} we summarize the set of optimization algorithms tested. We focus on methods for local optimization (i.e., those that do not seek to asymptotically cover the entire parameter space) since we are interested in assessing performance within a budget of function evaluations.\nSuch a budget limits the applicability of global optimization algorithms. \nOur case study, involving the fitting of the Fayans energy density functional (EDF) to\ndata across the nuclear chart, is described in \\cref{sec:fayans}. 
This problem was selected in part because it shares characteristics with many fitting problems.\nFor this problem, there are $\\nd=198$ data, $\\nx=13$ parameters to be optimized, and correlations among the errors $m\\left(\\bm{\\nu}_i;\\xb\\right) - d_i$ are evident.\nNumerical results are presented in \\cref{sec:numerical}, and we summarize both the\nconsistency and efficiency of the tested algorithms. Our performance measures\nemphasize how the efficiency of optimization methods, as measured in function evaluations, can change depending on one's ability to evaluate components $F_i(\\xb)$ concurrently. \n\nAlthough our focus is on optimization-based approaches for training, these could also be used in a larger framework of statistical calibration (e.g., as discussed in \\cite{HigdonJPG15}).\n \n\n\n\n\n\\section{Derivative-free optimization\/training algorithms}\n\\label{sec:algorithms}\n\nWe consider five algorithmic families of iterative methods for local, unconstrained derivative-free optimization. \nThe first two algorithms are deterministic, and the latter three are randomized\n(sometimes called ``stochastic''). For randomized algorithms, the \nsequence of points in parameter space at which the component functions will be evaluated is generated stochastically\/nondeterministically by the method. \n\nThe randomized algorithms considered in this study are designed to have the ability to vary the number of component functions evaluated in any one iteration. \nThroughout this paper, we refer to this number as the \\emph{batch size}, denoted $\\nb$.\nIn our experiments, as is typical in such batch-sampling-based algorithms, a\nbatch of size $\\nb$ is generated by sampling uniformly $\\nb$ many times without\nreplacement from the integers $\\{1, \\ldots, \\nd\\}$. 
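As a concrete illustration (in Python, with the sizes chosen for this sketch), the uniform without-replacement batch sampling just described is a one-liner:

```python
import numpy as np

nd = 198  # number of component functions, as in the case study below
nb = 32   # batch size for one iteration (illustrative value)
rng = np.random.default_rng(0)

# Draw nb indices uniformly at random, without replacement, from
# {0, ..., nd-1}; taking nb = nd recovers a full evaluation.
batch = rng.choice(nd, size=nb, replace=False)
```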
Hence, the maximum batch size $\\nb=\\nd$ corresponds to evaluating all of the component functions.\n\nWe now briefly describe each of the algorithm types, along with their hyperparameters and our implementation. \nFor additional details on these and other derivative-free optimization methods, we refer the reader to \\cite{Conn2009a,LMW2019AN}.\n\n\n\\subsection{Deterministic algorithms}\n\nIn general, deterministic methods have the property that, given a starting point\nand hyperparameter values, the sequence of points in parameter space generated by the method will be\nthe same every time the optimization is repeated. The deterministic methods considered here also assume that all of the\n$\\nd$ component functions in \\cref{eq:genfun} are evaluated before the next\npoint in parameter space to be evaluated is determined. \nThat is, we address a batch-synchronous, rather than asynchronous, environment; the latter may be appropriate when the component function evaluation time varies significantly and\/or individual component functions depend on relatively few parameters \\cite{Recht2011hogwild}.\n\n\\subsubsection{Direct search algorithm.}\n\\label{sec:nelder}\n\nThe Nelder-Mead simplex algorithm \\cite{NelderMead} is a popular direct search algorithm for general derivative-free optimization \\cite{PhysRevLett.67.1334,pres07}. The version tested here is from the MATLAB routine \\texttt{fminsearch} based on \\cite{JCL98}. \n\nThe Nelder-Mead algorithm determines a new point for evaluation by performing operations on a simplex\ndefined by $\\nx+1$ previously evaluated affinely independent points\n\\cite{Conn2009a,Wright2012,AudetHare2017}. The particular choice of operations is dictated \nby the $f$ values associated with each of the simplex's vertices. \nSince the algorithm bases its decision on the complete evaluation of $f$, accepted points $\\xb_k$ monotonically decrease the objective: $f(\\xb_0) > f(\\xb_1) > \\ldots$. 
\nMultiple complete evaluations of $f$ may be required before an acceptable point is found, and each of these evaluations corresponds to $\\nd$ component function evaluations.\n\nThe sole hyperparameter in our Nelder-Mead implementation is the initial simplex size. This size can be interpreted as defining the size of the neighborhood within which Nelder-Mead begins its search. \n\n\\subsubsection{Model-based trust-region algorithm.}\n\\label{sec:pounders}\n\nPOUNDERS \\cite{SWCHAP14} is a deterministic method that exploits the structural form in \\cref{eq:genfun} by constructing a local surrogate model of each component function $F_i$. POUNDERS was used in the optimization of the UNEDF family of energy density functionals \\cite{UNEDF0,UNEDF1,UNEDF2} and chiral nucleon-nucleon interactions \\cite{Ekstrom13,EksJPG15}. \n\nThe surrogate models in POUNDERS are updated with each new function evaluation, and the algorithm assumes that all $\\nd$ component functions are evaluated at each point. \nA new point to evaluate \nis obtained by locally minimizing an\naggregation of the component surrogate models. Thus, unlike the Nelder-Mead\nmethod, POUNDERS requires and exploits full knowledge of the individual \ncomponent function values $F_1(\\xb_k), \\ldots, F_{\\nd}(\\xb_k)$. 
\nSimilar to Nelder-Mead, \nsince POUNDERS evaluates all component functions, accepted points monotonically decrease the objective, and multiple such evaluations of $\\nd$ components may be required before decrease in the function value is found.\n\nThe primary hyperparameter in POUNDERS is the radius used to define the initial neighborhood within which the surrogate models are constructed and optimized over.\n\n\n\\subsection{Derivative-free stochastic approximation}\n\\label{sec:sgd}\n\nIn supervised learning tasks of machine learning---a class of optimization problems containing \\cref{eq:genfun}---the\nworkhorse optimization method for obtaining (approximate) solutions to \\cref{eq:genfun} has been \nstochastic gradient methods \\cite{RobbinsMonro1951}. \nFor an excellent contemporary survey of stochastic gradient methods, see \\cite{BottCurtNoce16}. \nStochastic gradient methods as applied to \\cref{eq:genfun} resemble traditional gradient descent methods, \nthe basic iteration of which takes the form\n\\begin{equation}\\label{eq:gd_iteration}\n \\xb_{k+1}= \\xb_k-\\alpha_k\\nablab f(\\xb_k).\n\\end{equation}\nWhen $\\nd$ is large, however, it may become computationally prohibitive to\nevaluate, or even numerically estimate, the gradient of \\cref{eq:genfun}:\n\\begin{equation}\n\\label{eq:gradsum}\n\\nablab f(\\xb_k)=\\sum_{i=1}^{\\nd} \\nablab F_i^2(\\xb_k).\n\\end{equation}\n\nThus, stochastic gradient methods compute approximations to the gradient \\cref{eq:gradsum} \nby including in the sum only a\nsampled batch of the component function indices $\\{1,\\dots,\\nd\\}$. 
\nIn its simplest form, a single $i(k)\\in\\{1,\\dots,\\nd\\}$ is chosen at random from a \ndiscrete (often uniform) distribution, and the basic iteration in \\cref{eq:gd_iteration} is replaced with\n\\begin{equation}\\label{eq:sgd_basic_iteration}\n \\xb_{k+1}= \\xb_k-\\alpha_k\\nd\\nablab F_{i(k)}^2(\\xb_k).\n\\end{equation}\nEffectively, the cost of performing \\cref{eq:sgd_basic_iteration}\nis a factor of $\\nd$ times cheaper than the cost of performing \\cref{eq:gd_iteration}.\nThis represents significant computational savings in performing a single iteration when $\\nd$ is large,\nat the expense of using an inaccurate gradient approximation.\nMore generally, one can consider sampling a batch of component function indices\n$\\B_k\\subseteq\\{1,\\dots,\\nd\\}$ of size $\\nb(k)\\leq\\nd$,\nand replacing \\cref{eq:gd_iteration} with \n\\begin{equation}\\label{eq:sgd_batch_iteration}\n \\xb_{k+1} = \\xb_k-\\alpha_k\\displaystyle\\frac{\\nd}{\\nb(k)}\\displaystyle\\sum_{i\\in B_k}\\nablab F_{i}^2(\\xb_k).\n\\end{equation}\nThe rationale behind the random sampling approach in \\cref{eq:sgd_batch_iteration}\nis that the expected value (with\nrespect to the stochastic sampling of component function indices) of \n$\\displaystyle\\frac{\\nd}{\\nb(k)}\\displaystyle\\sum_{i\\in B_k}\\nablab F_{i}^2(\\xb_k)$\nis exactly\n\\cref{eq:gradsum}. \nWe note that when $\\nb(k) < \\nd$, the step from $\\xb_k$ to $\\xb_{k+1}$ will be based on incomplete information; however, since the sampled batches will be independent from one iteration to the next, these methods probabilistically find a zero of the full gradient \\cref{eq:gradsum} when the step sizes decay fast enough. \n\nDating almost as far back as the earliest stochastic gradient methods \\cite{RobbinsMonro1951},\nderivative-free variants of the iterations in \\cref{eq:sgd_basic_iteration} and in \\cref{eq:sgd_batch_iteration} have been proposed \\cite{KieferWolfowitz}. 
\nAll these methods \nperform an analogous iteration \n\\begin{equation}\n\\label{eq:basic_update}\n\\xb_{k+1} = \\xb_k - \\alpha_k \\db_k, \\qquad k=0,1,2,\\ldots,\n\\end{equation}\nwith $\\db_k \\in \\R^{\\nx}$ serving as an estimate of a gradient quantity. \nThese algorithms differ in their selection of the step size $\\alpha_k>0$ and, most distinctively, the choice of direction $\\db_k$. \n\n\\subsubsection{Kiefer-Wolfowitz method.}\nIterations of type \\cref{eq:basic_update} are found in the Kiefer-Wolfowitz (KW) method \\cite{KieferWolfowitz}.\nThe KW method computes \\emph{finite differences of sampled functions} to\napproximate directional derivatives in each of the $\\nx$ coordinate directions. \nAlthough other variants exist \\cite{LEcuyerYin1998,Kleinman1999}, \nin this paper we use the most common sampling described in the following.\n\nIn the $k$th iteration, we uniformly sample a \nbatch $B_k\\subseteq\\{1,\\dots,\\nd\\}$ of size $|B_k|=\\nb(k)$. \nGiven a fixed finite-difference parameter $h$, \nforward differences are used to approximate the partial derivatives needed to\nestimate the gradients of the $\\nb(k)$ squared component functions associated\nwith the batch. Specifically, we compute\n\\begin{equation}\n\\label{eq:kw-fs}\n\\db_k = \\frac{\\nd}{\\nb(k)} \\displaystyle \\sum_{i\\in\\B_k} \n\\gb_i(\\xb_k;h), \n\\end{equation}\nwhere\n\\begin{equation}\n\\label{eq:FD}\n\\gb_i(\\xb_k;h) = \\frac{1}{h}\n\\left[\\begin{array}{c}\nF_i\\left(\\xb_k+h\\mathbf{e}_1\\right)^2-F_i\\left(\\xb_k\\right)^2 \\\\\n\\vdots \\\\\nF_i\\left(\\xb_k+h\\mathbf{e}_{\\nx}\\right)^2-F_i\\left(\\xb_k\\right)^2 \n\\end{array}\\right].\n\\end{equation}\nIn our experiments, we refer to the algorithm that uses \\cref{eq:kw-fs} as $\\db_k$ in \\cref{eq:basic_update}\nas ``KW.'' \n\nObserve that in KW, $\\nb(\\nx+1)$ component function evaluations are performed in a single iteration. 
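A minimal sketch of the KW direction on a toy sum-of-squares problem follows; the residuals F_i, the problem sizes, and the evaluation point are invented for illustration. With the full batch and a small h, the estimate should closely match the analytic gradient 2*A^T(A x - b) of the toy objective.

```python
import numpy as np

rng = np.random.default_rng(0)
nd, nx = 50, 3                      # illustrative problem sizes
A = rng.normal(size=(nd, nx))
b = rng.normal(size=nd)

def F(i, x):
    """i-th residual of a toy least-squares problem (stands in for F_i)."""
    return A[i] @ x - b[i]

def kw_direction(x, batch, h=1e-6):
    """Forward-difference KW estimate of the gradient of sum_i F_i(x)^2."""
    d = np.zeros_like(x)
    for i in batch:
        base = F(i, x) ** 2
        for j in range(x.size):
            e = np.zeros_like(x)
            e[j] = h
            d[j] += (F(i, x + e) ** 2 - base) / h
    return (nd / len(batch)) * d    # rescale by nd / nb(k)

x = np.array([0.5, -1.0, 2.0])
g_full = kw_direction(x, range(nd))  # full batch: nd*(nx+1) evaluations
g_exact = 2.0 * A.T @ (A @ x - b)    # analytic gradient, for comparison
```

Note the cost structure stated above: each batch element contributes nx forward differences plus one base evaluation, i.e., nb*(nx+1) component function evaluations per iteration.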
\nAs with any method using \\cref{eq:basic_update}, both a sequence of step sizes $\\{\\alpha_k\\}$ and a sequence\nof batch sizes $\\{\\nb(k)\\}$ must be selected. \nAdditionally, the finite-difference parameter $h>0$ must be selected. \nFor the sake of simplicity in presentation,\nwe have chosen to keep $h$ fixed, with the immediate consequence that \n$\\db_k$ is a biased estimator of $\\nablab f(\\xb_k)$ for all $k$, even when $\\nb(k)=\\nd$. \n\n\\subsubsection{Bandit method.}\n\\label{sec:bandit}\nWe now consider members of a class of derivative-free methods that have become increasingly attractive in supervised learning over the past decade,\nthe so-called (two-point) bandit methods \\cite{Agarwal2010,Ghadimi2013,Duchi2015,Gasnikov2017,Shamir2017}. \nSimilar to KW, bandit methods can employ a batch\n$B_k\\subseteq\\{1,\\dots,\\nd\\}$ of component function indices in the computation\nof $\\db_k$, which is computed based on finite differences and employed in\niterations of the type \\cref{eq:basic_update}.\nIn each iteration of a bandit method, however, only \\emph{one} directional\nderivative is numerically approximated per element in the batch; in contrast, KW uses \n$\\nx$ partial derivatives per element.\nFor the basic iteration of what we refer to as the ``Bandit'' method in the following,\nthe direction vector $\\db_k$ becomes\n\\begin{equation}\n\\label{eq:bandit-fs}\n\\db_k = \\frac{\\nd}{\\nb(k)} \\sum_{i\\in\\B_k} \n\\left(\n\\frac{F_i\\left(\\xb_k+h\\mathbf{u}_k\\right)^2-F_i\\left(\\xb_k\\right)^2}{h} \n\\right)\\mathbf{u}_k,\n\\end{equation}\nwhere $\\mathbf{u}_k$ is a \\emph{randomized} direction. In particular, \nwe sample $\\mathbf{u}_k$ uniformly from the surface of an $\\nx$-dimensional sphere centered at the origin and of unit radius.\nOnce again, the quantity $h$ in \\cref{eq:bandit-fs} denotes a fixed finite-difference parameter. 
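For comparison, the two-point Bandit direction can be sketched on a toy least-squares problem of the same kind (all names and sizes below are illustrative): only one directional derivative, along a random unit vector u, is formed per batch element.

```python
import numpy as np

rng = np.random.default_rng(0)
nd, nx = 50, 3                      # illustrative problem sizes
A = rng.normal(size=(nd, nx))
b = rng.normal(size=nd)

def Fsq(i, x):
    """Squared i-th residual of a toy least-squares problem."""
    return (A[i] @ x - b[i]) ** 2

def bandit_direction(x, batch, u, h=1e-5):
    """Two-point bandit estimate: one directional difference along u per
    batch element, i.e., only 2*|batch| component function evaluations."""
    slope = sum((Fsq(i, x + h * u) - Fsq(i, x)) / h for i in batch)
    return (nd / len(batch)) * slope * u

x = np.array([0.5, -1.0, 2.0])
u = rng.normal(size=nx)
u /= np.linalg.norm(u)              # uniform direction on the unit sphere
d_k = bandit_direction(x, range(nd), u)

# With the full batch and small h, d_k approximates (grad f(x) . u) u,
# the projection of the exact gradient onto the sampled direction.
g_exact = 2.0 * A.T @ (A @ x - b)
```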
\nIn the case where $\\nb(k)=\\nd$ for all $k$, the Bandit method is related to the\niteration used in the Gaussian smoothing method \\cite{Nesterov2015}.\nWe remark that even when $\\nb(k)=\\nd$, the Bandit method is still randomized because of the\nrandom directions $\\mathbf{u}_k$. \n\nWhereas a KW method involves $\\nb(\\nx+1)$ component function evaluations in a single iteration, \nthe Bandit method entails only $2\\nb$ component function evaluations in a single iteration. \nIn common with a KW method, however, the Bandit method requires a selection of\nthe finite-difference parameter $h$,\na sequence of step sizes $\\{\\alpha_k\\}$,\nand a sequence of batch sizes $\\{\\nb(k)\\}$. \n\n\n\\subsection{Adaptive sampling quasi-Newton method}\n\\label{sec:aqn}\n\nWe now consider adaptive sampling quasi-Newton (AdaQN) methods \n\\cite{BollapragadaICML18, RBSWshort19,RBSWlong20}, which iteratively construct a local quadratic surrogate model according to the sampled component functions and select\nsearch directions $\\db_k$ as an approximate minimizer of the quadratic surrogate\nmodel.\nThe quadratic surrogate model is updated at every iteration using the differences between current and previously evaluated forward-difference gradient approximations. \nWhereas the KW and Bandit methods considered here use a prescribed sequence of batch sizes $\\{\\nb(k)\\}$, \nAdaQN adaptively increases the \nbatch size. \nDifferent adaptive rules \\cite{RBSWshort19,RBSWlong20} will increase the batch sizes differently; and we consider one such rule, called the \\emph{norm test}, in this study. 
\n\nAdaQN computes a direction $\\db_k$ of the form similar to \\cref{eq:kw-fs}:\n\\begin{equation}\n\\label{eq:adaqn-fs}\n\\db_k = \\frac{\\nd}{\\nb(k)} \\Hb_k\\displaystyle \\sum_{i\\in\\B_k} \n\\gb_i(\\xb_k;h),\n\\end{equation}\nwhere $\\gb_i$ is defined in \\cref{eq:FD} and $\\Hb_k$ is a quasi-Newton matrix defining the quadratic surrogate model, \nupdated such that $\\Hb_{k+1} \\mathbf{v}_k = \\xb_{k+1} - \\xb_k$, where \n\\begin{equation}\n\\label{eq:adqn-fs-hk}\n\\mathbf{v}_k =\n\\displaystyle \\sum_{i\\in\\B_k} \\Big(\\gb_i(\\xb_{k+1};h) - \\gb_i(\\xb_k;h)\\Big).\n\\end{equation} \nUnlike KW, however, the step size $\\alpha_k$ in AdaQN is adaptively determined in each\niteration via a \\emph{stochastic backtracking line search}\n\\cite{BollapragadaICML18, RBSWshort19}, an automatic procedure that ensures\nsufficient decrease in a sampled function. This procedure \nrequires evaluating the currently sampled component functions \nalong the direction $\\db_k$, with the associated number of such evaluations,\n$l_k$, varying at each iteration; typically, $l_k$ is less than $5$. Observe that in\nAdaQN, $2|B_k|(\\nx+1)$ component function evaluations are required to compute\n$\\db_k$ (and, for free, $\\mathbf{v}_k$) and $|B_k|l_k$ component function evaluations are required to compute $\\alpha_k$. \n\nThe primary hyperparameter in AdaQN is the initial batch size. All the other hyperparameters associated with AdaQN are set to their default values as specified in \\cite{RBSWlong20}. \n\n\n\n\\section{Case Study: Optimizing Fayans energy density functional}\n\\label{sec:fayans}\n\nA key pursuit in the understanding of atomic nuclei is a global (i.e., across the nuclear chart) description of nuclei. For such a task, the microscopic tool of choice\nis nuclear DFT rooted in the\nmean-field approach \\cite{Bender2003}. 
An effective interaction in DFT is given by the EDF,\nwhose parameters are adjusted to experimental data.\nOver the past decade, increasingly refined EDFs have been developed, with increasingly complex and computationally expensive computer models; see, for example, \\cite{UNEDF0,UNEDF1,UNEDF2}. Because of the expense of these computer models, their calibration has focused largely on point-\/optimization-based estimation. Bayesian approaches have been demonstrated with limited numbers of model evaluations (e.g., 200 in \\cite{McDonnellPRL15}), but with a nontrivial rate of failures (associated with convergence at insufficient fidelity levels): roughly 9\\% of the designs failed even in relatively narrow (in terms of posterior probability) regions of parameter space; see \\cite{HigdonJPG15}. \n\nWe focus on calibration of a Fayans EDF\n\\cite{Fayans1998}, for which computer models have recently been developed and\ndemonstrated to correct some systematic effects of other state-of-the-art\nfunctionals \\cite{Reinhard2017}. This functional form has recently sparked\nsignificant interest, especially in the context of charge radii measurements \\cite{Hammen2018,Miller2019,Gorges2019,deGroote2020}. 
\n\n\n\\begin{table}[htb]\n\\caption{\\label{tab:observableClasses} Nine classes of physical\nobservables that constitute all observables included in the study\n\\cite{Reinhard2017}.}\n\\begin{indented}\n\\lineup\n\\item[]\\begin{tabular}{lcc}\n\\br\nClass & Symbol & Number of Observables\\cr\n\\mr\nBinding Energy & $E_B$ & 63 \\cr\nCharge Radius & $r_{\\textrm{ch}}$ & 52 \\cr\nDiffraction Radius & $R_{\\textrm{diffr}}$ & 28 \\cr\nSurface Thickness & $\\sigma$ & 26 \\cr\nNeutron Single-Level Energy & $\\epsilon_{ls,n}$ & 5 \\cr\nProton Single-Level Energy & $\\epsilon_{ls,p}$ & 5 \\cr\nDifferential Radii & $\\delta\\langle r^2 \\rangle$ & 3 \\cr\nNeutron Pairing Gap & $\\Delta E_n$ & 5 \\cr\nProton Pairing Gap & $\\Delta E_p$ & 11 \\cr\n\\mr\n& & $\\nd = 198$ \\cr\n\\br\n\\end{tabular}\n\\end{indented}\n\\end{table}\n\n\\subsection{Problem specification}\n\nThe computer model $m\\left(\\bm{\\nu};\\xb\\right)$ for the currently used Fayans EDF has $\\nx=13$ free model\nparameters and employs a pool of fit data for spherical nuclei that primarily comprises\nbulk properties of the nuclear ground state (energy, radii, surface thickness),\nthree-point binding energy differences to calibrate pairing strengths, and some\nisotopic differences of root mean square radii in calcium isotopes. Specifically, the pool\nused for this study is that used to fit the new Fayans EDF\nFy($\\Delta r$) reported in \\cite{Reinhard2017} but with the even-odd staggering\nof binding energies replaced by the even-even data. \nThe total dataset\nconsists of $\\nd=198$ observables of different classes (see\nTable~\\ref{tab:observableClasses}) that are associated with 72 different spherical, ground-state,\neven-even nucleus configurations (with these configurations encapsulated by $\\bm{\\nu}$). \nThe weights ($\\sigma_i$) associated with each observable in the pool are related to those in \\cite{Reinhard2017} and are detailed in the \nsupplemental material \\cite{suppl}. 
The data, weights, and model outputs together define the collection of component functions $F_1(\\xb), \\ldots, F_{198}(\\xb)$ used in \\cref{eq:genfun}.\n\nTo ensure that our optimizations solve the specified problem, we identified \ntransient platform errors,\ntransient software faults outside of our control, user error, and \nreproducible software faults in the Fayans model evaluation software\nas the classes of failures that can occur during an optimization run and that\nmust be understood and handled sensibly throughout the study. \nWe developed scripts to scan over\nall optimization results and their associated metadata so that possible failures\ncould be flagged and manually inspected. When a transient failure was positively\nidentified and was determined to affect the data quality, the\nassociated optimization run could simply be rerun and the results manually verified as\nacceptable. Since the failures associated with the Fayans model software, which are discussed further in\n\\cref{sec:FayansSW}, are reproducible, rerunning a failed optimization is not an option. As\na result, schemes for handling this class of errors were developed and\nimplemented. A detailed discussion of this handling is given in \\cref{sec:mods}.\n\n\\subsection{Fayans model evaluation software}\n\\label{sec:FayansSW}\nThe code that is used in our study to evaluate the Fayans \nEDF is derived from a code solving nonrelativistic nuclear \nHartree-Fock equations for spherically symmetric nuclei \n\\cite{Rei91aR}, which is under continuous development. We have\nidentified two classes of reproducible software faults within\nthe Fayans model evaluation software. The numerical methods used\ninternally by the code are iterative, and therefore the first class of\nfailures is the inability of the methods to satisfy \na specified stopping criterion within a given maximum\niteration budget. 
\nWhile a single computation that does not satisfy\nthis criterion would normally be deemed a failed result, for this study, informed by the experience and knowledge of\nthe developer, we implemented a secondary stopping criterion. This\ncriterion, which is a relaxation of the primary\ncriterion, is employed as follows. If a computation has failed to\nachieve the primary stopping criterion within the budget but does\nachieve the secondary criterion within the budget, then the result is\nflagged as \\emph{marginally convergent}. If, however, a computation\ndoes not satisfy either criterion within the budget, the associated\nmodel evaluation is flagged as \\emph{nonconvergent}.\n\nThe second class of failures contains those computations that could not\nbe run to completion because at runtime the code encountered a situation that\nwas incompatible with its computations. Such failures, which are referred to as\n\\emph{runtime failures}, could arise because \nof exceptional conditions that cause internal algorithmic failures or because \nthe computation is being\nevaluated in a region of the parameter space for which the functional \nis unstable \\cite{Hellemans2013,Pastore2015}.\nWhen runtime\nfailures occur, the Fayans model code reports an error, execution is aborted,\nand the associated model evaluation result is flagged as failed.\nTo avoid such severe failures as much as possible, we have\nestablished empirical, ``reasonable'' bounds for the model parameters,\nwhere reasonable means that we want to avoid \ninstabilities as well as unphysical results (e.g., unbound\nmatter). For details regarding the region of assumed stability of the Fayans EDF that is characterized by these bounds, see the supplemental material \\cite{suppl}.\n\nKnowledge of this region has not been programmed\ninto the optimization methods, and therefore any optimization\ncan evaluate the model at points outside the region of stability. 
We expect that\nsome methods, such as the randomized methods, might have a greater propensity\nfor choosing points outside the region of stability and that the various methods might also differ in their ability to recover from such bad evaluations. Our means for managing such potential difficulties is detailed next.\n\n\n\\subsection{Modifications to minimize and address possible failures}\n\\label{sec:mods}\n\nA necessary step in facilitating the automatic training of any simulation-based model is to ensure that error handling is adequately addressed. All of the methods of \\cref{sec:algorithms} use some form of the output \n$\\left \\{F_i(\\xb) ; \\; i \\in B_k \\right\\}$\nat a queried point $\\xb$ to inform their internal decision-making. Consequently, it is necessary to address what occurs if the evaluation $F_i(\\xb)$ fails for one or more components $i \\in B_k$. \nIn this paper, we seek to make minimal changes to the methods stated in\n\\cref{sec:algorithms} and instead modify the objective function to\naccount for the variety of situations that can be encountered as discussed in\n\\cref{sec:FayansSW}. \nBefore detailing each of these modifications,\nwe stress that throughout this article, information about specific points in parameter\nspace is reported in the original unscaled space used by physicists. However,\nthe optimization methods used in this study were implemented to work on a scaled version of the\nparameter space; hence, it is understood that the domain of the\nobjective function is the scaled space. Unless otherwise stated, the points in parameter\nspace discussed in the remainder of this section should be assumed to be with respect to\nthe scaled parameter space used for optimization. 
For more information regarding the choice of \nscaling, we refer the reader to the supplemental material \\cite{suppl}.\n\n\\subsubsection{Projection.}\n\\label{sec:projection}\n\nThe tested optimization methods all were intended primarily for unconstrained optimization. We operate in such a setting here and do not provide any method with prior knowledge \nof valid combinations of parameters.\nWe observed that, depending on the quality of gradient estimators obtained by the randomized \nmethods, the directions $\\db_k$ could become so large as to generate steps into physically\nmeaningless or unstable regions of parameter space. \nTo help such methods avoid divergence, we alter the objective function to include a projection \nonto an $\\ell_1$-ball centered around the\npoint $\\bar{\\xb}$.\nThe unscaled version of this point is\ngiven in the supplemental material \\cite{suppl}.\nBecause of the scaling, it is\nappropriate to use an isotropic $\\ell_1$-ball \nfor defining a reasonable region; that is, we compute the projection \n\\begin{equation}\n\\label{eq:project}\n\\xb_{\\mathbf{P}} = \\displaystyle\\arg\\min_{\\yb\\in\\mathbf{P}} \\|\\yb-\\xb\\|_2, \n\\quad \\textrm{where } \n\\mathbf{P}= \\left\\{\\yb\\in \\R^{\\nx}: \\|\\yb-\\bar{\\xb}\\|_1\\leq 2\\right\\}.\n\\end{equation} \nOur choice of using the $\\ell_1$-norm to define $\\mathbf{P}$ is motivated by \nour observation that failures are more likely to occur when many parameter components deviate significantly from $\\bar{\\xb}$.\n\nWe then pass the projected point $\\xb_{\\mathbf{P}}$ to the Fayans model simulation for\nevaluation. 
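The projection in \cref{eq:project} has a well-known closed-form solution via sorting and soft-thresholding; the following is a minimal sketch of that standard sort-based algorithm (the helper name is ours, and the default radius matches the value $2$ used in \cref{eq:project}):

```python
import numpy as np

def project_l1_ball(x, center, radius=2.0):
    """Euclidean projection of x onto {y : ||y - center||_1 <= radius},
    a sketch of eq. (project).  Outside the ball, the shifted point is
    soft-thresholded by a data-dependent amount theta."""
    z = x - center
    if np.abs(z).sum() <= radius:
        return x.copy()                   # already feasible
    mu = np.sort(np.abs(z))[::-1]         # sorted magnitudes, descending
    cssv = np.cumsum(mu)
    rho = np.nonzero(mu * np.arange(1, z.size + 1) > (cssv - radius))[0][-1]
    theta = (cssv[rho] - radius) / (rho + 1.0)
    return center + np.sign(z) * np.maximum(np.abs(z) - theta, 0.0)
```

The projection is the identity on feasible points, so repeated application is harmless.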
\nWe modify the\nobjective function \\cref{eq:genfun} by applying a multiplicative penalty to each residual $F_i(\\xb)$ based on the distance between $\\xb_{\\mathbf{P}}$ and $\\xb$; that is,\n\\begin{equation}\n\\label{eq:projection_modification}\n\\tilde{F_i}(\\xb) = F_i(\\xb_{\\mathbf{P}})\\left(1+ \\left\\|\\xb-\\xb_{\\mathbf{P}}\\right\\|_2^2\\right).\n\\end{equation}\nWe acknowledge that the replacement of each $F_i$ with $\\tilde{F_i}$\ncan introduce nonsmoothness \nat the boundary of $\\mathbf{P}$, even when we assume each $F_i$ is smooth in a neighborhood of the boundary of $\\mathbf{P}$. \n\n\\subsubsection{Observable convergence.}\n\\label{sec:convergence}\nTo account for marginally convergent and nonconvergent results as well as\noccasional runtime failures, and\ninformed by the belief that convergent computations are more likely to\nindicate physically meaningful points in parameter space, we further modified\nthe observable data $\\tilde{F_i}(\\xb)$ in \\cref{eq:projection_modification} by\ncomputing\n\\begin{equation*}\n\\hspace{-55pt}\n\\label{eq:convergence_modification}\n\\hat{F_i}(\\xb) = \n\\left\\{ \n\\begin{array}{ll}\n\\tilde{F_i}(\\xb) &\\mbox{if the computation of }F_i\\left(\\xb_{\\mathbf{P}}\\right)\\mbox{ succeeded} \\\\\n(1 + \\lambda_m^2)\\tilde{F_i}(\\xb) & \\mbox{if the computation of }F_i\\left(\\xb_{\\mathbf{P}}\\right)\\mbox{ was marginally convergent} \\\\\n(1 + \\lambda_n^2)\\tilde{F_i}(\\xb) & \\mbox{if the computation of }F_i\\left(\\xb_{\\mathbf{P}}\\right)\\mbox{ was nonconvergent} \\\\\n(1 + \\sigmar^2)\\tilde{F_i}(\\xb) & \\mbox{if the computation of }F_i\\left(\\xb_{\\mathbf{P}}\\right)\\mbox{ had a runtime failure}, \n\\end{array}\n\\right.\n\\end{equation*}\nwhere $\\sigmar \\ge \\lambda_n \\ge \\lambda_m \\ge 0$ denote penalty parameters. In our study, we\nset $\\lambda_m=2, \\lambda_n=5$, and $\\sigmar=100$. 
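The two penalties above compose as in the following sketch. The function name and `status` encoding are our own; in particular, what residual value the model software supplies in the runtime-failure case (where $F_i(\xb_{\mathbf{P}})$ produced no converged value) is an assumption of this sketch, not something specified here:

```python
import numpy as np

# Penalty multipliers lambda_m, lambda_n, sigma_r used in the study.
LAMBDA_M, LAMBDA_N, SIGMA_R = 2.0, 5.0, 100.0

def penalized_residual(residual_at_xP, status, x, x_P):
    """Sketch of eqs. (projection_modification)/(convergence_modification):
    combine the distance penalty for leaving the l1-ball with the
    convergence-status penalty.  `status` is one of
    'ok' | 'marginal' | 'nonconvergent' | 'runtime'."""
    dist_penalty = 1.0 + np.sum((x - x_P) ** 2)   # eq. (projection_modification)
    tilde = residual_at_xP * dist_penalty
    factor = {"ok": 1.0,
              "marginal": 1.0 + LAMBDA_M ** 2,
              "nonconvergent": 1.0 + LAMBDA_N ** 2,
              "runtime": 1.0 + SIGMA_R ** 2}[status]
    return factor * tilde
```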
With these considerations, our \nmodified objective function, seen by all of the optimization algorithms, is\n\\begin{equation}\n\\label{eq:modfun}\n\\hat{f}(\\xb) = \\sum_{i=1}^{\\nd} \\hat{F_i}(\\xb)^2.\n\\end{equation}\n\n\n\\subsubsection{Recovering from failed simulations.}\n\\label{sec:failures}\nIn our study, not even the use of the modified objective function\n$\\hat{f}(\\xb)$ can cover every possible failure case. \nWhen the Fayans simulation returned no output $\\hat{f}(\\xb)$ whatsoever---a situation that we refer to as a \\emph{hard failure}---none of the methods that we tested can continue. \nWe thus slightly modified the methods to handle hard failures. \nThe deterministic methods (POUNDERS and Nelder-Mead) were modified to terminate gracefully when a hard failure occurred,\nreturning the point in parameter space corresponding to the best-found objective value in the run up until the hard failure occurred. \nThe randomized methods were augmented with a simple backtracking procedure, like the one employed in AdaQN. \nAfter a direction $\\db$ was computed, if the function evaluation at the next suggested point $\\xb+\\db$ resulted in hard failure, then the direction $\\db$ was replaced by $0.1\\db$, and we reevaluated at $\\xb+\\db$. This process was repeated until the evaluation of $\\xb+\\db$ did not result in hard failure. \nAs we will see in the numerical results, the deterministic methods and AdaQN\nnever suggested a point that resulted in hard failure; but KW and Bandit did encounter hard failures, depending on the selection of hyperparameters. \n\n\n\\section{Numerical Results}\n\\label{sec:numerical}\n\nWe now study the performance of the algorithms from \\cref{sec:algorithms} on the function in \\cref{eq:modfun}.\nWe first tune the identified hyperparameters to obtain hyperparameter values to maximize the performance of each algorithm. 
\nSince computational budgets may limit one's ability to perform comprehensive hyperparameter tuning, the insensitivity to hyperparameter selection \n(as well as the variability overall) may be a key consideration in selecting an optimization algorithm. \nWe report on this sensitivity and perform a thorough study of each algorithm using the best hyperparameter values found.\n\nFor this study, the results for all $\\nd=198$ component functions were\nstored at each evaluated point, even when an optimization method with $\\nb<198$ was not provided \n(or charged for) this full set of component function evaluations.\nStoring this information allowed us to\nreevaluate, during postprocessing, the true function (i.e., with all 198 component functions) \nfor every point queried.\n\nThe randomized algorithms of \\cref{sec:algorithms} require a forward-difference parameter $h>0$. \nIn our computations we use $h=5\\cdot 10^{-7}$.\nThis value was obtained by estimating the noise level in each $F_i^2$ following \\cite{more2011ecn}. These noise estimates were then used to determine $h$ following the procedure in \\cite{more2011edn}. Although variation was seen across different component functions $i$ and different directions in $\\R^{\\nx}$, the effect of this variation turned out to be mild, and hence we used a fixed difference parameter for all component functions. \n\n\n\n\\subsection{Tuning of hyperparameters}\n\\label{sec:tuning}\n\nFor our hyperparameter tuning procedure, we randomly selected 5 starting points from the\nsame $\\ell_1$-ball as in \\cref{eq:project}. \nWe ran each \nmethod with a budget of $700\\nd$ component function evaluations from each starting point. \nEach randomized method was run with three different seeds from each starting point, while deterministic methods were run once from each starting point. 
\n\nThe three main classes of hyperparameters, and the ways we chose to vary them, are defined below.\n\n\\subsubsection{Step-size hyperparameters.}\nEvery method that we tested, with the exception of AdaQN, requires some kind of (initial) step-size parameter. \nWhile POUNDERS and Nelder-Mead require a single radius parameter,\nthe stochastic approximation methods KW and Bandit require a \\emph{sequence} of step-size parameters \n$\\{\\alpha_k\\}$, as seen in \\cref{eq:gd_iteration}.\nFor all four methods, we chose to use a common set of step-size hyperparameters based on the set \n$J\\equiv \\{3,4,5,6,7\\}.$\n\nIn the case of POUNDERS, the hyperparameter value $\\alpha=2^{-j}, j\\in J$ sets the initial trust-region radius; \n and in the case of Nelder-Mead, the hyperparameter value $\\alpha=2^{-j}, j\\in J$ sets the initial simplex radius. \nFor the stochastic approximation methods, we opted to use a schedule of decaying step sizes\n$\\alpha_k = 2^{-j}\/(k+1), j\\in J$. \nEmploying such a harmonic sequence as the step-size schedule for stochastic approximation methods is in line with standard convergence theory for those methods. \nWe remark again that AdaQN employs adaptive step sizes and hence does not require a step-size hyperparameter. \n\n\\subsubsection{Batch-size hyperparameters.}\nEach of the stochastic methods\nrequires the specification of a batch-size parameter. Recall from \\cref{sec:sgd} that a batch is drawn uniformly from the $\\nd$ component functions and that such draws are independent from one draw to the next.\nIn each iteration, KW and Bandit methods require a batch size $\\nb$ of component function evaluations to compute a gradient estimator; recall \\cref{eq:gradsum}. Following standard practice, we chose to hold $\\nb$ constant for these methods. \nWhile AdaQN adaptively increases $\\nb$ during the course of an algorithm, it still requires an initial $\\nb$. 
\nFor all three methods, we used a set of 4 common batch sizes\n$$\\nb\\in\\{11,33,99,198\\}.$$\nWe interpreted $\\nb$ as the constant batch size for KW and Bandit methods and as the initial batch size for AdaQN. \nObserve that all of our tested $\\nb$ divide $\\nd=198$, which is helpful for comparing the stochastic methods with the full-batch deterministic methods. \nMoreover, when $\\nb=\\nd$, \nKW and AdaQN are deterministic methods while Bandit is still a randomized method since it employs a random direction $\\mathbf{u}_k$ in each iteration.\n\n\n\n\n\n\\subsection{Performance metrics}\n\\label{sec:metrics}\nWe now discuss various measures of effort in order to compare the performance of the methods.\nWe label by $f_{s,*}$ the minimum function value evaluated over all runs \ninstantiated from the $s$th starting point $\\xb_{s,0}$, regardless of method, seed, and all relevant hyperparameters. \nWe say that a point\n$\\xb_{s,k}$ is\n\\emph{$\\tau$-optimal for starting point $\\xb_{s,0}$} provided \n\\begin{equation}\n \\label{eq:tau-optimality}\n \\displaystyle\\frac{f(\\xb_{s,k})-f_{s,*}}{f(\\xb_{s,0})-f_{s,*}}\\leq\\tau.\n\\end{equation}\nA point $\\xb_{s,k}$ satisfying \\cref{eq:tau-optimality} has achieved a fraction $\\tau$ of the best-known decrease from starting point $\\xb_{s,0}$.\n\nOur primary measure for any run is the best function value, $\\min_{k\\leq K} f(\\xb_{s,k})$, as a function of the number of points, $K$, evaluated during that run. We often report these results in terms of the number of component functions evaluated, that is, $\\sum_{k\\leq K} \\nb(k)$. 
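The accounting around \cref{eq:tau-optimality} can be sketched as follows; the function name and the convention of charging $\nb(k)$ evaluations per recorded point are our own simplifications:

```python
import numpy as np

def evals_to_tau(f_history, nb_history, f_star, tau, budget):
    """Sketch of the tau-optimality accounting of eq. (tau-optimality):
    f_history[k] = f(x_{s,k}) with f_history[0] the starting value, and
    nb_history[k] the component evaluations charged for point k.  Returns
    the cumulative component evaluations at which the run first achieves
    a fraction tau of the best-known decrease, or `budget` if it never
    does (the convention described in the text)."""
    target = f_star + tau * (f_history[0] - f_star)
    cum = np.cumsum(nb_history)
    for k, fk in enumerate(f_history):
        if fk <= target:
            return int(cum[k])
    return budget
```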
This also allows us to track the number of component function evaluations needed to achieve $\\tau$-optimality, for a specified value of $\\tau\\in (0,1)$.\nNote that for some values of $\\tau$, not all runs may achieve $\\tau$-optimality; \nwhen a run fails to do so, we define the number of component function evaluations it required to attain $\\tau$-optimality as the budget of component function evaluations it was given.\n\n\n\\subsection{Results of hyperparameter tuning}\n\\label{sec:empirical}\nWe now show the results of hyperparameter tuning to search for ``best'' step sizes and\/or batch sizes, where appropriate. \nIn \\Cref{fig:pounders_nmead} we look at summary hyperparameter tuning results for POUNDERS and Nelder-Mead,\nand in \\Cref{fig:adaqn} we look at summary results for AdaQN. \nFor AdaQN, we \nchose to tune only initial batch sizes.\n\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[width=0.92\\linewidth]{Fig1}\n\\caption{Tuning the step-size parameter for POUNDERS (left) and Nelder-Mead (right). \nThroughout such plots in this paper, the \nvertical axis shows the value of $\\hat{f}$, which is defined in \\cref{eq:modfun}, that is best among those seen within the specified number (horizontal axis) of evaluations. \nSolid lines denote median performance over all starting points (and stochastic replications), while the translucent bands denote the $25$th and $75$th percentiles of performance.\nThe number in parentheses in the legend denotes the \\emph{average} number of hard failures produced by the Fayans-model simulation during the run of the algorithm.\nThe solid black horizontal line denotes the value of $\\hat{f}(\\xb_1)$ in Table~\\ref{tab:BestResults}. \n\\label{fig:pounders_nmead}}\n\\end{figure}\n\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[width=.55\\linewidth]{Fig2}\n\\caption{Tuning the batch-size parameter for AdaQN. 
\\label{fig:adaqn}} \n\\end{figure}\n\nBased on the results for AdaQN in \\Cref{fig:adaqn}, \nwe chose to initialize $\\nb=11$, which finds the same quality of median solutions in terms of $\\hat{f}$ values as do other batch sizes toward the end of its budget\nbut identifies better solutions earlier on (in terms of the $25$th, $50$th, and $75$th percentiles). \n\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[width=.7\\linewidth]{Fig3} \n\\caption{Median number of component function evaluations needed by POUNDERS and Nelder-Mead to attain $\\tau$-optimality, where $\\tau=0.01$. \n\\label{fig:pounders_nmead2}}\n\\end{figure}\n\n\n\nWe see in \\Cref{fig:pounders_nmead} that the performance of POUNDERS is extremely robust to the selection of the initial trust-region radius. \nFor Nelder-Mead, we observe that performance is more sensitive to the initial simplex radius than POUNDERS is to its initial trust-region radius. This is summarized in \\Cref{fig:pounders_nmead2}, which follows the example set in \\cite{asi2019importance} and shows\nthe \\emph{median} number of component function evaluations required by a method to attain $0.01$-optimality. \n\nBecause the POUNDERS performance was so similar for all initial trust-region radius values, we \nselected $\\alpha=2^{-4}$. \nFor Nelder-Mead, because the best final median function value occurred for $\\alpha=2^{-4}$, and because the median performance of $\\alpha=2^{-4}$ was nearly as fast as the median performance of $\\alpha=2^{-3}$ in finding $0.01$-optimal solutions, we selected $\\alpha=2^{-4}$. \n\nFor the remaining stochastic approximation methods, we first fix batch-size parameters and compare the median performance of step-size parameters. 
These results are shown, respectively, in \\Cref{fig:bandit-batchsizes} and \\Cref{fig:kw-batchsizes}.\nWe remark that in the case of $\\nb=11$, we did not test all the step-size parameters because of the extreme computational cost of running\nthese two methods with $\\nb=11$ in our computational setup. \nInstead, for $\\nb=11$ we tested the two step sizes that \nresulted in the fewest average hard failures from running only one seed per starting point. \nIn \\Cref{fig:kw-batchsizes}, we see that for KW, \nthis selection of step sizes matches the selection of step sizes that performed best for $\\nb=33$.\n\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[width=0.9\\linewidth]{Fig4.pdf}\n\n\\caption{Hyperparameter tuning results for Bandit; from left to right, and then top to bottom, are batch sizes 198, 99, 33, and 11.\n\\label{fig:bandit-batchsizes}}\n\\end{figure}\n\n\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[width=0.9\\linewidth]{Fig5.pdf}\n\\caption{Hyperparameter tuning results for KW; from left to right, and then top to bottom, are batch sizes 198, 99, 33, and 11.\n\\label{fig:kw-batchsizes}}\n\\end{figure}\n\n\n\n\n\nIn \\Cref{fig:bandit-batchsizes}, we observe that for each fixed batch size, there is a step size that provides a clear best median performance. Unlike in the comparisons made for POUNDERS, Nelder-Mead, and AdaQN, however,\nboth the differences in percentile performance across the Bandit variants and the average number of hard failures encountered across different runs should be taken into account. \nWith these three considerations in mind, for $\\nb=198$, we selected $\\alpha=2^{-3}$. \nFor $\\nb=99$, balancing the significantly lower number of hard failures encountered by $\\alpha=2^{-4}$ compared with $\\alpha=2^{-3}$, as well as the better $75$th percentile performance of $\\alpha=2^{-4}$, we selected $\\alpha=2^{-4}$. 
\nFor $\\nb=33$, because of the similar median final performance of $\\alpha=2^{-5}$ and $\\alpha=2^{-3}$, coupled with the better $75$th percentile performance of $\\alpha=2^{-5}$ and lower number of hard failures encountered by $\\alpha=2^{-5}$, we selected $\\alpha=2^{-5}$.\nFor $\\nb=11$, the choice of $\\alpha=2^{-5}$ was clear. \n\nIn \\Cref{fig:kw-batchsizes}, the choice for $\\nb=198$ and $\\nb=99$ was fairly easy to make, at $\\alpha=2^{-4}$ and $\\alpha=2^{-5}$, respectively. The choice for $\\nb=33$ was less clear; the median performance of $\\alpha=2^{-5}$ was not much worse than the median performance of $\\alpha=2^{-4}$; however, because the $75$th percentile performance of $\\alpha=2^{-5}$ was better than that of $\\alpha=2^{-4}$, and because $\\alpha=2^{-5}$ had fewer average hard failures, we selected $\\alpha=2^{-5}$. \nFor $\\nb=11$, we selected $\\alpha=2^{-5}$ because of its failure-free performance. \n\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[width=0.9\\linewidth]{Fig6.pdf}\n\\caption{Best step sizes of each batch size for Bandit (left) and KW (right). \\label{fig:best-stepsizes-compared}}\n\\end{figure}\n\nHaving downselected the step-size parameters per batch size, we now compare the methods across different batch sizes in \\Cref{fig:best-stepsizes-compared}.\nWe see that for KW, using the smallest tested batch size $\\nb=11$ with a step size of $\\alpha=2^{-5}$ is the best setting of hyperparameters. \nThe situation was less clear for Bandit methods. \nWhile the median performance within the budget of component function evaluations was best with batch size 11, this parameter combination exhibited \nmany hard failures; \nas a tradeoff between a smaller number of hard failures and \na reasonable median performance, we selected step size $\\alpha=2^{-4}$ and $\\nb=99$. 
\n\n\\subsection{Comparing tuned methods on additional starting points}\nHaving performed the hyperparameter tuning in the preceding section to select appropriate hyperparameters for each of the five methods, we then ran the selected variant of each method on a larger set of problems.\nIn particular, we randomly generated twenty starting points (instead of five) from the\nsame $\\ell_1$-ball as in \\cref{eq:project} and again\nran three seeds for each starting point for each of the randomized methods. \nThe budget for each method was extended to $1500\\nd(\\nx+1)$ component function evaluations, more than double the budget provided in the hyperparameter tuning runs. These results are presented in \\Cref{fig:compare-all}.\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.6\\linewidth]{Fig7.pdf}\n\\caption{Comparing the best variants of all methods. \n \\label{fig:compare-all}}\n\\end{figure}\n\nThe results as seen in \\Cref{fig:compare-all} are remarkable.\nEven in the full run, POUNDERS continues to exhibit an interesting phenomenon where after a fixed number of full-batch component function evaluations, the best objective value found suddenly drops and exhibits very low variation in the $25$th to $75$th percentile band in decreasing to a final solution. This sort of robustness and consistency in performance is certainly desirable.\n\nIn terms of final median performance, AdaQN and Nelder-Mead find solutions of quality similar to those found by POUNDERS. \nOne could argue that the performance of Nelder-Mead, in terms of overall median and other percentile trajectories, is \ndominated by the performance of POUNDERS. \nThe performance of AdaQN is interesting in that it is not strictly dominated by the performance of POUNDERS. In fact, if one were interested only in generating reasonable solutions (say $\\tau$-optimal solutions, where $\\tau\\approx 0.25$) in as few component function evaluations as possible, AdaQN is a better choice than POUNDERS. 
\nIf one were simultaneously interested in the robustness of the final solution, then AdaQN remains a strong choice. This is in contrast with KW, which also achieves gains fairly quickly but does not have the same final robustness as exhibited by AdaQN. For all but a few $\\tau$ values, the Bandit method is bested by KW, which may be attributed partly to the nontrivial failure rate experienced by Bandit.\n\nOur comparisons thus far have measured computational expense in terms of the number of component function evaluations. Such metrics are fully justified in computing environments where a single component function can be evaluated at a time.\nFor sufficiently large parallel computing environments, resources are available to perform full-batch (here $\\nb=\\nd=198$) evaluations simultaneously. We now examine the intermediate cases between these two extremes: evaluating \na single component function at a time and evaluating all $\\nd$ component functions at a time.\nMethods capable of subsampling (i.e., using batch sizes $\\nb<\\nd$) are potentially promising in such intermediate regimes.\n\nOur \\emph{resource utilization plots} illustrate these considerations. \nBy resource size we denote the number of component function evaluations \nthat can be computed simultaneously.\nGiven a resource size, the number of \\emph{rounds} is the number of times such a whole resource must be employed to complete the component function evaluations needed to \nachieve a performance metric (e.g., $\\tau$-optimality as in \\cref{eq:tau-optimality}). \nFor example, if the resource size is 11, then a method with $\\nb=198$ will use at least 198\/11=18 \nrounds to make an optimization step, while a method with $\\nb=11$ will potentially use only one such round. \n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.9\\linewidth]{Fig8}\n\\caption{Resource utilization plots with respect to $\\tau$-optimality (a: $\\tau=0.5$; b: $\\tau=0.1$) for the tuned methods shown in \\Cref{fig:compare-all}. 
The construction of these figures assumes that component functions can only be evaluated in parallel at a single point in parameter space at a time. In addition, some choices of resource size result in poor performance when the size is incompatible with $\\nb$. For example, evaluating 33 component functions with a resource size of 22 is charged two rounds (44 possible evaluations). \nSimilarly, a method with nonadaptive batch size $\\nb$ will not be able to benefit, absent failures, from resource sizes larger than $\\nb$. \n \\label{fig:rups}}\n\\end{figure}\n\n\\begin{table}\n\\caption{\\label{tab:initialevals} Minimum number of initial component function evaluations to evaluate first (noninitialization) optimization step.}\n\\centering\n\\begin{tabular}[t]{lr}\n\\br\n Method & $F_i$ evaluations\\\\\n\\mr\nAdaQN & $(\\nx+2)\\nb(0)$ \\\\\nBandit & $3 \\nb $ \\\\\nKW & $(\\nx+2)\\nb$ \\\\\nNelder-Mead & $(\\nx+2)\\nd$ \\\\\nPOUNDERS & $(\\nx+2)\\nd$ \\\\\n\\br\n\\end{tabular}\n\\end{table}\n\nWe highlight computational environments where POUNDERS is not the most obvious choice by showing select values of $\\tau$ in the resource utilization plots in \\Cref{fig:rups}. \nFor low demands on solution quality \n($\\tau=0.5$), Bandit methods are exceptionally good, identifying $0.5$-optimal solutions remarkably quickly. This is because Bandit requires very few evaluations to get started; \\Cref{tab:initialevals} shows that Bandit (with $\\nb=33$) will have evaluated its first step (to $\\xb_1$) after 99 component function evaluations. The left plot in \\Cref{fig:rups} shows that on over half the runs, this number (3 rounds at a resource level of 33) is sufficient for satisfying this coarse accuracy level. Nelder-Mead and POUNDERS show a similar behavior after their first step is performed, but this step requires 15 rounds at a resource level of 198 (90 rounds at a resource level of 33). 
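The round counts quoted in this discussion follow from charging each batch of evaluations $\lceil$evaluations$/$resource size$\rceil$ rounds. A short Python sketch checks the numbers, assuming $\nx=13$ (consistent with the 13 Fayans parameters of Table~\ref{tab:BestResults}) and $\nd=198$:

```python
import math

def rounds_charged(evals, resource_size):
    # each batch of evaluations occupies whole rounds of the resource
    return math.ceil(evals / resource_size)

nx, nd = 13, 198  # parameter count and number of component functions

# Nelder-Mead / POUNDERS first step: (nx + 2) * nd component evaluations
first_step = (nx + 2) * nd                     # 2970 evaluations
assert rounds_charged(first_step, 198) == 15   # 15 rounds at resource 198
assert rounds_charged(first_step, 33) == 90    # 90 rounds at resource 33

# Bandit with batch size 33 takes its first step after 3 * 33 evaluations
assert rounds_charged(3 * 33, 33) == 3         # 3 rounds at resource 33
```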
AdaQN's smaller batch size allows it to outperform POUNDERS and Nelder-Mead at lower resource levels, but is insufficient for catching Bandit at the coarse accuracy $\\tau=0.5$. \n\n\nWhen we tighten the accuracy demands to $\\tau=0.1$, we see that the deterministic methods (i.e., those with a full batch $\\nb=198$) are again best, even for resource sizes as small as 11. This plot also shows that AdaQN's adaptive batch size allows it to remain competitive even at this tighter accuracy for resource sizes up to 99.\n\n\n\n\n\n\n\\section{Discussion}\n\\label{sec:discussion}\n\nOur results show that the deterministic methods tested were insensitive to starting point in terms of finding a good objective function value with a limited number of $\\hat{f}$ evaluations. Furthermore, these methods were generally insensitive to hyperparameters, did not evaluate points that resulted in hard failures, and are attractive even if the expense of evaluating the Fayans model allowed for computing only a fraction (e.g., 11\/198=1\/18) of the component functions at a time. For problems where even smaller fractions are possible or when less accurate solutions are desired with even smaller computational budgets than those tested here, AdaQN appears especially promising. \nWe expect that such methods that can use smaller batch sampling will become more attractive as the number of fit data significantly increases (as in the case of traditional supervised learning applications).\n\nAs part of understanding the quality of results achieved in this study, we identified the best run, \nin terms of lowest $\\hat{f}$ value, for each of the 20 starting points.\nEleven of these 20 best results were found with POUNDERS, which also found the overall best result; seven by AdaQN; and two by Nelder-Mead. 
\nAll 20 points of these best results are contained in $\\mathbf{P}$ and resulted in fully converged Fayans model evaluations.\nThe parameter values of two of these best points are presented in\nunscaled form in Table~\\ref{tab:BestResults}. To give an impression of typical\nparameters from previous fits, we also show the parameters for\nFy($\\Delta r$) \\cite{Reinhard2017} and Fy($\\Delta r$,HFB) \\cite{Miller2019,Reinhard2020}.\n\n\n\n\\begin{table}\n\\caption{\\label{tab:BestResults} Unscaled parameter values for two of the 20 best optimization results in the study.\nThe parameters are given up\nto six digits, which suffices to reproduce the output values shown in \\Cref{tab:Chi2Comparison}.\nThe point $\\xb_1$\nhad the lowest objective function value in the study and is chosen as the representative of\nthe group of the four best runs; the point $\\xb_{5}$ had the best result in\nthe second grouping of the remaining 16 best runs. For the definition\nof Fayans EDF parameters, see \\cite{suppl}. $\\rho_\\mathrm{eq}$ is in\nfm$^{-3}$; $E\/A, K, J, L$ are in MeV; other parameters are\ndimensionless. As a guideline for typical model parameters, the values\nfor Fy($\\Delta r$) \\cite{Reinhard2017} and Fy($\\Delta r$,HFB)\n\\cite{Miller2019,Reinhard2020} EDFs are also given.\n}\n\\begin{indented}\n\\lineup\n\\item[]\n\\begin{tabular}[t]{lll|ll}\n\\br\n Parameter & $\\xb_1$ & $\\xb_{5}$ & \\multicolumn{1}{c}{Fy($\\Delta r$)}\n & \\multicolumn{1}{c}{Fy($\\Delta r$,HFB)} \\\\\n\\mr\n$\\rho_\\mathrm{eq}$ \t\\cmmnt{RHO\\_NM} & \\0\\00.165755 & \\0\\00.166182 & \\0\\00.160 & \\0\\00.164\\\\\n$E\/A$ \\cmmnt{EOVERA} & \\0\\-15.8715 & \\0\\-15.8780 & \\0\\-16.11 & \\0\\-15.86 \\\\\n$K$ \t \\cmmnt{COMPR} & 192.686 & 185.156 & 219 & 210.3\\\\\n$J$ \t \\cmmnt{ASYMM} & \\028.8018 & \\028.8467 & \\029 & \\028.1 \\\\\n$L$ \t \\cmmnt{DASYM} & \\035.6545 & \\031.5877 & \\030 & \\037.5\\\\\n${h_{2-}^\\mathrm{v}}$ \\cmmnt{H2VM} & \\0\\07.08066 & \\0\\04.71124 & \\0\\01.2150 & \\022.8090 \\\\\n${a_+^\\mathrm{s}}$\t \\cmmnt{ASP} & \\0\\00.594920 & \\0\\00.620893 & \\0\\00.6047 & \\0\\00.56548 \\\\\n${h_{\\nabla}^\\mathrm{s}}$\t \\cmmnt{HGRADP}& \\0\\00.510148 & \\0\\00.613192 & \\0\\00.6656 & \\0\\00.45795\\\\\n${\\kappa}$\t \\cmmnt{C0NABJ} & \\0\\00.192851 & \\0\\00.191370 & \\0\\00.18792 & \\0\\00.19833 \\\\\n${\\kappa'}$\t \\cmmnt{C1NABJ} & \\0\\00.0383998 & \\0\\00.0532395 & \\0\\0\\-0.0237 & \\0\\00.44008 \\\\\n${f_{\\mathrm{ex}}^\\xi}$\t \\cmmnt{FXI} & \\0\\0\\-3.70050 & \\0\\0\\-3.63760 & \\0\\0\\-4.4720 & \\0\\0\\-4.4556 \\\\\n${h_\\nabla^\\xi}$\t \\cmmnt{HGRADXI} & \\0\\03.17494 & \\0\\03.48559 & \\0\\03.227 & \\0\\03.113 \\\\\n${h_{+}^\\xi}$\t \\cmmnt{H1XI} & \\0\\03.22592 & \\0\\03.13267 & \\0\\04.229 & \\0\\04.2440\\\\[5pt]\n\\br\n\\end{tabular}\n\\end{indented}\n\\end{table}\n\n\\Cref{fig:MeanChi2PerClass} shows the outputs of these 20 points by observable class (see Table~\\ref{tab:observableClasses}). 
For each observable class, by $\\chi^2$ we denote the contributions to $\\hat{f}(\\xb)$ from that observable class (and hence the sum over all observable classes is $\\hat{f}(\\xb)$). We normalized these $\\chi^2$ by the number of observables in the associated class to obtain the average $\\chi^2$ of each observable class, $\\overline{\\chi^2}$. \n\\Cref{fig:MeanChi2PerClass} suggests that the results can be partitioned into two groups. This partitioning is related not only to $\\overline{\\chi^2}$ but also to the values of $\\hat{f}$. The four results labeled as Low $\\hat{f}$ correspond to those results with $\\hat{f}$ less than 49; the 16 other results, labeled as High $\\hat{f}$, have a slightly higher $\\hat{f}$. The results with the lowest $\\hat{f}$ from each group are denoted by $\\xb_1$ and $\\xb_5$ in Table~\\ref{tab:BestResults}.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=.5\\linewidth]{Fig9.pdf}\n\\caption{Average $\\chi^2$ by observable class ($\\overline{\\chi^2}$) plotted for each of the 20 best\nresults obtained in the study. In terms of this quantity, the 20 results can clearly be\npartitioned into two different groups. The results in one such group are\ncolored blue and also correspond to the four results with the lowest $\\hat{f}$\nvalues in the study.\\label{fig:MeanChi2PerClass}}\n\\end{figure}\n\nThe $\\chi^2$ and $\\overline{\\chi^2}$ values are given for the same two points in Table~\\ref{tab:Chi2Comparison}. In general, the low-$\\hat{f}$ cluster appears to fit radius-based observables better than does the high-$\\hat{f}$ cluster, but at the expense of the quality of fit to energy-based observables. In particular, the fit to the isotopic differences of charge radii in Ca isotopes is better, but the fits to the two pairing gaps deteriorate. 
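Because the class contributions are additive, the entries of Table~\ref{tab:Chi2Comparison} can be checked directly; a small sketch with the per-class $\chi^2$ values transcribed from that table:

```python
# Per-class chi^2 contributions for x_1 and x_5 (transcribed from
# Table tab:Chi2Comparison); summed over the nine observable classes
# they recover the objective value f-hat at each point.
chi2_x1 = [9.64, 9.49, 16.41, 2.48, 0.78, 3.64, 0.25, 3.52, 2.12]
chi2_x5 = [9.06, 9.81, 17.95, 3.17, 0.70, 3.90, 1.31, 2.66, 1.12]

f_hat_x1 = round(sum(chi2_x1), 2)
f_hat_x5 = round(sum(chi2_x5), 2)
print(f_hat_x1, f_hat_x5)   # 48.33 and ~49.69 (up to rounding of the entries)
```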
\n\n\n\\begin{table}\n\\caption{\\label{tab:Chi2Comparison} Breakdown of the $\\chi^2$ and average $\\chi^2$ ($\\overline{\\chi^2}$) by \nobservable class (see\nTable~\\ref{tab:observableClasses}) for the two points in parameter space given\nin Table~\\ref{tab:BestResults}. In bold are those values with potentially significant differences between the two groups of best results.} \n\\begin{indented}\n\\lineup\n\\item[]\n\\begin{tabular}[t]{l||ll|lll}\n\\br\n & \\multicolumn{2}{c|}{$\\xb_1$} & \\multicolumn{2}{c}{$\\xb_5$} \\\\\nClass & $\\chi^2$ & $\\overline{\\chi^2}$ & $\\chi^2$ & $\\overline{\\chi^2}$\\\\\n\\mr\n$E_B$ & \\09.64 & 0.153 & \\09.06 & 0.144 \\\\\n$R_{\\rm diffr}$ & \\09.49 & 0.339 & \\09.81 & 0.351 \\\\\n$r_{\\rm ch}$ & 16.41 & 0.316 & 17.95 & 0.345 \\\\\n$\\sigma$ & \\02.48 & 0.095 & \\03.17 & 0.122 \\\\\n$\\epsilon_{ls,p}$ & \\00.78 & 0.156 & \\00.70 & 0.141 \\\\\n$\\epsilon_{ls,n}$ & \\03.64 & 0.728 & \\03.90 & 0.780 \\\\\n$\\delta\\langle r^2 \\rangle$ & \\00.25 & \\textbf{0.082} & \\01.31 & \\textbf{0.436} \\\\\n$\\Delta E_p$ & \\03.52 & \\textbf{0.320} & \\02.66 & \\textbf{0.242} \\\\\n$\\Delta E_n$ & \\02.12 & \\textbf{0.425} & \\01.12 & \\textbf{0.225} \\\\\n\\mr\n$\\hat{f}$\t & 48.33 & & 49.69 & \\\\\n\\br\n\\end{tabular}\n\\end{indented}\n\\end{table}\n\n\n\nThese results underscore the value of optimization methods being able to train physics models with few model evaluations. \nSuch efficiency allows one to perform several different optimizations (e.g., from different starting points, with different fit data) and thereby identify potentially different local minimizers. 
\nThe subsequent study of distinct local minima could be useful; the ability of a solution to model the desired physics often matters more than the final objective function value.\n\n\n\n\\section{Perspectives}\n\nIn this study, we addressed the calibration of the Fayans EDF nuclear physics model via $\\chi^2$-minimization, which can be viewed as a supervised machine learning problem. \nThe model is somewhat computationally expensive, and \nderivative information with respect to the model parameters is not available. Accordingly, we investigated the strengths and limitations of five algorithmic families of iterative methods for local, unconstrained derivative-free optimization. We considered two deterministic and three randomized methods. We analyzed hyperparameter tuning considerations and variability associated with the methods, and illustrated considerations for tuning in different computational settings. In total, nearly a half million CPU core hours were expended for this study, an indication of the infeasibility of performing thorough hyperparameter tuning and comparison for many nuclear physics model training problems. \n\nFor the model considered, we conclude that the performance of POUNDERS, within a budget of function evaluations, is extremely robust. The Fayans EDF optimization results obtained in this work are generally consistent with those of the\nFy($\\Delta r$) \\cite{Reinhard2017} and Fy($\\Delta r$,HFB) \\cite{Miller2019,Reinhard2020} models; see Table~\\ref{tab:BestResults}. In particular, the set $\\xb_1$, which performs very well on the $\\delta\\langle r^2 \\rangle$ class, appears to be fairly close to Fy($\\Delta r$,HFB). 
The extension of the Fayans model to isovector pairing, suggested in \\cite{Reinhard2017}, will be carried out in the following work, which will also contain detailed discussion of resulting quantified nuclear properties.\n\n\n\n\\section*{Acknowledgments}\nThe work at Argonne was \nsupported by the U.S.\\ Department of\nEnergy, Office of Science, Office of Advanced Scientific Computing\nResearch, applied mathematics and SciDAC programs under Contract No.\\\nDE-AC02-06CH11357 and by the NUCLEI SciDAC-4 collaboration. This work was also supported by the U.S.\\ Department of Energy, Office of Science, Office of Nuclear Physics under award numbers DE-SC0013365 (Michigan State University) and DE-SC0018083 (NUCLEI SciDAC-4 collaboration).\nWe gratefully acknowledge the computing resources provided on Bebop, a \nhigh-performance computing cluster operated by the Laboratory Computing Resource \nCenter at Argonne National Laboratory.\n\n\n\n\\section*{References}\n\n\n\\section{Introduction}\\label{sec:Introduction}\nRapid advances in sensing and data acquisition technologies are increasingly resulting in individual data samples or signals structured by multiple \\textit{modes}. Examples include hyperspectral video (four modes; two spatial, one temporal, and one spectral), colored depth video (five modes; two spatial, one temporal, one spectral, and one depth), and four-dimensional tomography (four modes; three spatial and one temporal). 
Such data form multiway arrays and are called \\textit{tensor data}~\\cite{smilde2005multi,kolda2009tensor}.\n\nTypical feature extraction approaches that handle tensor data tend to collapse or vectorize the tensor into a long one-dimensional vector and apply existing processing methods for one-dimensional data. Such approaches ignore the structure and inter-mode correlations in tensor data. More recently, several works instead assume a structure on the tensor of interest through tensor decompositions such as the CANDECOMP\/PARAFAC (CP) decomposition~\\cite{harshman1970foundations}, Tucker decomposition~\\cite{tucker1963implications}, and PARATUCK decomposition~\\cite{kolda2009tensor} to obtain meaningful representations of tensor data. Because these decompositions involve fewer parameters, or degrees of freedom, in the model, inference algorithms that exploit such decompositions often perform better than those that assume the tensors to be unstructured. Moreover, algorithms utilizing tensor decompositions tend to be more efficient in terms of storage and computational costs: the cost of storing the decomposition can be substantially lower, and numerical methods can exploit the structure by solving simpler subproblems.\n\nIn this work, we focus on the problem of finding sparse representations of tensors that admit a Tucker decomposition. More specifically, we analyze the \\textit{dictionary learning} (DL) problem for tensor data. The traditional DL problem for vector-valued data involves constructing an overcomplete basis (dictionary) such that each data sample can be represented by only a few columns (atoms) of that basis~\\cite{aharon2006img}. To account for the Tucker structure of tensor data, we require that the dictionary underlying the vectorized versions of tensor data samples be \\textit{Kronecker structured} (KS). That is, it is comprised of \\textit{coordinate dictionaries} that independently transform various modes of the tensor data. 
Such dictionaries have successfully been used for tensor data representation in applications such as hyperspectral imaging, video acquisition, distributed sensing, magnetic resonance imaging, and the tensor completion problem (multidimensional inpainting)~\\cite{duarte2012kronecker,caiafa2013multidimensional}. To provide some insights into the usefulness of KS dictionaries for tensor data, consider the hypothetical problem of finding sparse representations of $1024 \\times 1024 \\times 32 $ hyperspectral images. Traditional DL methods require each image to be rearranged into a one-dimensional vector of length $2^{25}$ and then learn an unstructured dictionary that has a total of $(2^{25} p)$ unknown parameters, where $p \\geq 2^{25}$. In contrast, KS DL only requires learning three coordinate dictionaries of dimensions $1024 \\times p_1$, $1024 \\times p_2$, and $32 \\times p_3$, where $p_1,p_2\\geq 1024$, and $p_3 \\geq 32$. This gives rise to a total of $[1024 (p_1 + p_2) + 32p_3]$ unknown parameters in KS DL, which is significantly smaller than $2^{25} p$. While such ``parameter counting'' points to the usefulness of KS DL for tensor data, a fundamental question remains open in the literature: what are the theoretical limits on the learning of KS dictionaries underlying $K$th-order tensor data? To answer this question, we examine the KS-DL objective function and find sufficient conditions on the number of samples (or sample complexity) for successful local identification of \\textit{coordinate dictionaries} underlying the KS dictionary. 
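The parameter counting in this example is easily reproduced; the sketch below takes the minimal sizes $p_k = m_k$ (a hypothetical choice made purely for illustration):

```python
# Parameter counts for the 1024 x 1024 x 32 hyperspectral example above.
# The overcompleteness factors are hypothetical; we take the minimal
# admissible sizes p_k = m_k for illustration.
m1, m2, m3 = 1024, 1024, 32
p1, p2, p3 = m1, m2, m3

m = m1 * m2 * m3                          # vectorized signal length, 2**25
p = p1 * p2 * p3
unstructured = m * p                      # unknowns in an unstructured dictionary
kronecker = m1 * p1 + m2 * p2 + m3 * p3   # unknowns in the KS dictionary

print(unstructured)                       # 1125899906842624 (2**50)
print(kronecker)                          # 2098176
```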
To the best of our knowledge, this is the first work presenting such identification results for the KS-DL problem.\n\n\\subsection{Our Contributions}\\label{subsec:contr}\nWe derive sufficient conditions on the true coordinate dictionaries, coefficient and noise distributions, regularization parameter, and the number of data samples such that the KS-DL objective function has a local minimum within a small neighborhood of the true coordinate dictionaries with high probability. Specifically, suppose the observations are generated from a true dictionary $\\mb{D}^0 \\in \\mbb{R}^{m \\times p}$ consisting of the Kronecker product of $K$ coordinate dictionaries, $\\mb{D}_k^0 \\in \\mbb{R}^{m_k \\times p_k}, k \\in \\lr{1,\\dots,K}$, where $m = \\prod_{k=1}^Km_k$ and $p = \\prod_{k=1}^Kp_k$. Our results imply that $N = \\max_{k\\in[K]} \\Omega(m_kp_k^3\\varepsilon_k^{-2})$ samples are sufficient (with high probability) to recover the underlying coordinate dictionaries $\\mb{D}_k^0$ up to the given estimation errors $\\varepsilon_k, k \\in \\lr{1,\\dots,K}$.\n\n\\subsection{Relationship to Prior Work}\\label{subsec:prior}\n\nAmong existing works on structured DL that have focused exclusively on the Tucker model for tensor data, several have only empirically established the superiority of KS DL in various settings for 2nd and 3rd-order tensor data~\\cite{hawe2013separable,zubair2013tensor,caiafa2013multidimensional,roemer2014tensor,dantas2017learning,ghassemi2017stark}.\n\nIn the case of unstructured dictionaries, several works do provide analytical results for the dictionary identifiability problem~\\cite{aharon2006uniqueness,agarwal2013learning,agarwal2013exact,arora2013new, schnass2014identifiability,schnass2014local,gribonval2014sparse,jung2015minimax}. These results, which differ from each other in terms of the distance metric used, cannot be trivially extended for the KS-DL problem. In this work, we focus on the Frobenius norm as the distance metric. 
Gribonval et al.~\\cite{gribonval2014sparse} and Jung et al.~\\cite{jung2015minimax} also consider this metric, with the latter work providing minimax lower bounds for dictionary reconstruction error. In particular, Jung et al.~\\cite{jung2015minimax} show that the number of samples needed for reliable reconstruction (up to a prescribed mean squared error $\\varepsilon$) of an $m\\times p$ dictionary within its local neighborhood must be \\emph{at least} on the order of $N = \\Omega(mp^2\\varepsilon^{-2})$. Gribonval et al.~\\cite{gribonval2014sparse} derive a competing upper bound for the sample complexity of the DL problem and show that $N = \\Omega(mp^3\\varepsilon^{-2})$ samples are \\emph{sufficient} to guarantee (with high probability) the existence of a local minimum of the DL cost function within the $\\varepsilon$ neighborhood of the true dictionary. In our previous works, we have obtained lower bounds on the minimax risk of KS DL for 2nd-order~\\cite{shakeri2016minimax} and $K$th-order tensors~\\cite{shakeri2017sample,shakeri2016arxiv}, and have shown that the number of samples necessary for reconstruction of the true KS dictionary within its local neighborhood up to a given estimation error scales with the sum of the product of the dimensions of the coordinate dictionaries, i.e., $N = \\Omega(p\\sum_{k=1}^Km_kp_k\\varepsilon^{-2})$. Compared to this sample complexity lower bound, our upper bound is larger by a factor $\\max_{k} p_k^2$.\n\nIn terms of the analytical approach, although we follow the same general proof strategy as the vectorized case of Gribonval et\nal.~\\cite{gribonval2014sparse}, our extension poses several technical challenges. 
These include: ($i$) expanding the asymptotic objective function into a summation in which individual terms depend on coordinate dictionary recovery errors, ($ii$) translating identification conditions on the KS dictionary to conditions on its coordinate dictionaries, and ($iii$) connecting the asymptotic objective function to the empirical objective function using concentration of measure arguments; this uses the \\textit{coordinate-wise Lipschitz continuity} property of the KS-DL objective function with respect to the coordinate dictionaries. To address these challenges, we require additional assumptions on the generative model. These include: ($i$) the true dictionary and the recovered dictionary belong to the class of KS dictionaries, and ($ii$) dictionary coefficient tensors follow the \\textit{separable sparsity} model, which requires nonzero coefficients to be grouped in blocks~\\cite{caiafa2013computing,shakeri2016arxiv}.\n\n\\subsection{Notational Convention and Preliminaries} \\label{subsec:notation}\nUnderlined bold upper-case, bold upper-case, and lower-case letters are used to denote tensors, matrices, and vectors, respectively, while non-bold lower-case letters denote scalars. For a tensor $\\ul{\\X}$, its $(i_1,\\dots,i_K)$-th element is denoted as $\\underline{x}_{i_1\\dots i_K}$. The $i$-th element of vector $\\mathbf{v}$ is denoted by $v_i$ and the $ij$-th element of matrix $\\mb{X}$ is denoted as $x_{ij}$. The $k$-th column of $\\mb{X}$ is denoted by $\\mb{x}_k$ and $\\mb{X}_{\\mathcal{I}}$ denotes the matrix consisting of the columns of $\\mb{X}$ with indices $\\mathcal{I}$. We use $|\\mathcal{I}|$ for the cardinality of the set $\\mathcal{I}$. Sometimes we use matrices indexed by numbers, such as $\\mb{X}_1$, in which case a second index (e.g., $\\mathbf{x}_{1,k}$) is used to denote its columns. 
We use $\\mathop{\\mathrm{vec}}\\nolimits(\\mb{X})$ to denote the vectorized version of matrix $\\mb{X}$, which is a column vector obtained by stacking the columns of $\\mb{X}$ on top of one another. We use $\\diag{\\mb{X}}$ to denote the vector comprised of the diagonal elements of $\\mb{X}$ and $\\Diag{\\mb{v}}$ to denote the diagonal matrix whose diagonal elements are the elements of $\\mb{v}$. The elements of the sign vector of $\\mathbf{v}$, denoted as $\\mathop{\\mathrm{sign}}\\nolimits(\\mathbf{v})$, are equal to $\\mathop{\\mathrm{sign}}\\nolimits(v_i)= v_i\/|v_i|$, for $v_i \\neq 0$, and $\\mathop{\\mathrm{sign}}\\nolimits(v_i)=0$ for $v_i = 0$, where $i$ denotes the index of any element of $\\mathbf{v}$. We also use $\\sin(\\mb{v})$ to denote the vector with elements $\\sin(v_i)$ (used similarly for other trigonometric functions). Norms are given by subscripts, so $\\|\\mb{v}\\|_0$, $\\|\\mb{v}\\|_1$, and $\\|\\mb{v}\\|_2$ are the $\\ell_0$, $\\ell_1$, and $\\ell_2$ norms of $\\mathbf{v}$, while $\\|\\mb{X}\\|_2$ and $\\|\\mb{X}\\|_F$ are the spectral and Frobenius norms of $\\mb{X}$, respectively.\nWe use $[K]$ to denote $\\{1,2,\\dots,K\\}$ and $\\mb{X}_{1:K}$ to denote $\\{\\mb{X}_k\\}_{k=1}^K$.\n\nWe write $\\mb{X} \\otimes \\mb{Y}$ for the \\textit{Kronecker product} of two matrices $\\mb{X}\\in \\mbb{R}^{m\\times n}$ and $\\mb{Y}\\in \\mbb{R}^{p\\times q}$,\nwhere the result is an $mp \\times nq$ matrix and we have $\\|\\mb{X} \\otimes\\mb{Y} \\|_F = \\|\\mb{X}\\|_F\\| \\mb{Y}\\|_F$~\\cite{horn2012matrix}. We also use $\\bigotimes_{k \\in [K]} \\mb{X}_k \\triangleq \\mb{X}_1 \\otimes \\dots \\otimes \\mb{X}_K$. We define $\\mb{H}_{\\mb{X}}\\triangleq (\\mb{X}^\\top \\mb{X})^{-1}$, $\\mb{X}^+ \\triangleq \\mb{H}_{\\mb{X}}\\mb{X}^\\top$, and $\\mb{P}_{\\mb{X}} \\triangleq \\mb{X} \\mb{X}^+$ for full rank matrix $\\mb{X}$. 
In the body, we sometimes also use $\\Delta f(\\mb{X};\\mb{Y}) \\triangleq f(\\mb{X}) - f(\\mb{Y})$.\n\nFor matrices $\\mb{X}$ and $\\mb{Y}$ of appropriate dimensions, we define their distance to be $d(\\mb{X},\\mb{Y}) = \\|\\mb{X}-\\mb{Y}\\|_F$. For $\\mb{X}^0$ belonging to some set $\\mathcal{X}$, we define\n\t\\begin{align}\n\t\\mc{S}_{\\varepsilon}(\\mb{X}^0) \\triangleq \\lr{\\mb{X} \\in \\mathcal{X}: \\|\\mb{X} - \\mb{X}^0\\|_F = \\varepsilon}, \\nonumber \\\\\n\t\\mathcal{B}_{\\varepsilon}(\\mb{X}^0) \\triangleq \\lr{\\mb{X} \\in \\mathcal{X}: \\|\\mb{X} - \\mb{X}^0\\|_F < \\varepsilon}, \\nonumber \\\\\t\n\t\\bar{\\mathcal{B}}_{\\varepsilon}(\\mb{X}^0) \\triangleq \\lr{\\mb{X} \\in \\mathcal{X}: \\|\\mb{X} - \\mb{X}^0\\|_F \\leq \\varepsilon}.\n\t\\end{align}\nNote that while $\\mc{S}_{\\varepsilon}(\\mb{X}^0)$ represents the surface of a sphere, we use the term ``sphere'' for simplicity. We use the standard ``big-$\\mathcal{O}$'' (Knuth) notation for asymptotic scaling.\n\n\\subsubsection{Tensor Operations and Tucker Decomposition for Tensors}\nA tensor is a multidimensional array; the order of the tensor is the number of dimensions in the array.\n\n\\textit{Tensor Unfolding: }A tensor $\\ul{\\X} \\in \\mbb{R}^{p_1 \\times p_2\\times \\dots \\times p_K}$ of order $K$ can be expressed as a matrix by reordering its elements. This reordering is called unfolding: the mode-$k$ unfolding matrix of a tensor is a $p_k \\times \\prod_{i \\ne k} p_i$ matrix, which we denote by $\\mb{X}_{(k)}$. 
Each column of $\\mb{X}_{(k)}$ consists of the vector formed by fixing all indices of $\\ul{\\X}$ except the one in the $k$th-order.\nThe $k$-rank of a tensor $\\ul{\\X}$ is defined by $\\mathop{\\mathrm{rank}}\\nolimits(\\mb{X}_{(k)})$; trivially, $\\mathop{\\mathrm{rank}}\\nolimits(\\mb{X}_{(k)}) \\leq p_k$.\n\n\\textit{Tensor Multiplication: } The mode-$k$ matrix product of the tensor $\\ul{\\X}$ and a matrix $\\mb{A} \\in \\mbb{R}^{m_k \\times p_k}$, denoted by $\\ul{\\X} \\times_k \\mb{A}$, is a tensor of size $p_1 \\times \\dots p_{k-1} \\times m_k \\times p_{k+1} \\dots \\times p_K$ whose elements are\n$\n\t(\\ul{\\X} \\times_k \\mb{A})_{i_1\\dots i_{k-1} j i_{k+1} \\dots i_K} = \\sum_{i_k=1}^{p_k} \\underline{x}_{i_1\\dots i_{k-1} i_k i_{k+1} \\dots i_K} a_{ji_k}.\n$\nThe mode-$k$ matrix product of $\\ul{\\X}$ and $\\mb{A}$ and the matrix multiplication of $\\mb{X}_{(k)}$ and $\\mb{A}$ are related~\\cite{kolda2009tensor}:\n\t\\begin{align}\n\t\\ul{\\Y} = \\ul{\\X} \\times_k \\mb{A} \\Leftrightarrow \\mb{Y}_{(k)} = \\mb{A} \\mb{X}_{(k)}.\n\t\\end{align}\n\t\n\\textit{Tucker Decomposition: } The Tucker decomposition decomposes a tensor into a \\textit{core tensor} multiplied by a matrix along each mode~\\cite{tucker1963implications,kolda2009tensor}. 
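The unfolding identity above is straightforward to verify numerically. A minimal numpy sketch (sizes chosen arbitrarily; any fixed, consistent column ordering of the unfolding suffices for the identity, and modes are 0-indexed here):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4, 5))      # third-order tensor, p = (3, 4, 5)
A = rng.standard_normal((6, 4))         # m_k x p_k factor acting on mode k = 1

def unfold(T, k):
    # mode-k unfolding: mode k indexes the rows; the remaining modes are
    # flattened consistently to index the columns
    return np.moveaxis(T, k, 0).reshape(T.shape[k], -1)

Y = np.einsum("mj,ijk->imk", A, X)      # mode-1 (0-indexed) product of X and A

# identity: unfolding the product equals multiplying the unfolding
assert np.allclose(unfold(Y, 1), A @ unfold(X, 1))
print(unfold(Y, 1).shape)               # (6, 15)
```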
We take advantage of the Tucker model since we can relate the Tucker decomposition to the Kronecker representation of tensors~\\cite{caiafa2013computing}.\nFor a tensor $\\ul{\\Y} \\in \\mbb{R}^{m_1 \\times m_2 \\times \\dots \\times m_K}$ of order $K$, if $\\mathop{\\mathrm{rank}}\\nolimits(\\mb{Y}_{(k)})\\leq p_k$ holds for all $k \\in [K]$ then, according to the Tucker model, $\\ul{\\Y}$ can be decomposed into:\n\t\\begin{align} \\label{eq:UY_UX}\n\t\\ul{\\Y} = \\ul{\\X} \\times_1 \\mb{D}_1 \\times_2 \\mb{D}_2 \\times_ 3 \\dots \\times_K \\mb{D}_K,\n\t\\end{align}\nwhere $\\ul{\\X} \\in \\mbb{R}^{p_1 \\times p_2\\times \\dots \\times p_K}$ denotes the core tensor and $\\mb{D}_k \\in \\mbb{R}^{m_k \\times p_k}$ are factor matrices.\nThe following is implied by \\eqref{eq:UY_UX}~\\cite{kolda2009tensor}:\n\t\\begin{align*}\n\t\\mb{Y}_{(k)} = \\mb{D}_{k}\\mb{X}_{(k)}(\\mb{D}_{K} \\otimes \\dots \\otimes \\mb{D}_{k+1} \\otimes \\mb{D}_{k-1} \\otimes \\dots \\otimes \\mb{D}_1)^\\top.\n\t\\end{align*}\nSince the Kronecker product satisfies $\\mathop{\\mathrm{vec}}\\nolimits(\\mb{B}\\mb{X}\\mb{A}^\\top)=(\\mb{A} \\otimes \\mb{B})\\mathop{\\mathrm{vec}}\\nolimits(\\mb{X})$, \\eqref{eq:UY_UX} is equivalent to\n\t\\begin{align} \\label{eq:vecty_vectx}\n\t\\mathop{\\mathrm{vec}}\\nolimits(\\ul{\\Y}) = \\big( \\mb{D}_K \\otimes \\mb{D}_{K-1} \\otimes \\dots \\otimes \\mb{D}_1 \\big) \\mathop{\\mathrm{vec}}\\nolimits(\\ul{\\X}),\n\t\\end{align}\nwhere $\\mathop{\\mathrm{vec}}\\nolimits(\\ul{\\Y}) \\triangleq \\mathop{\\mathrm{vec}}\\nolimits(\\mb{Y}_{(1)})$ and $\\mathop{\\mathrm{vec}}\\nolimits(\\ul{\\X}) \\triangleq \\mathop{\\mathrm{vec}}\\nolimits(\\mb{X}_{(1)})$.\n\n\\subsubsection{Definitions for Matrices}\nWe use the following definitions for a matrix $\\mb{D}$ with unit-norm columns:\n$\\delta_s(\\mb{D})$ denotes the \\textit{restricted isometry property} ($\\mathsf{RIP}$) constant of order $s$ for $\\mb{D}$~\\cite{candes2008restricted}.\nWe define the 
\\textit{worst-case coherence} of $\\mb{D}$ as $\\mu_1(\\mb{D}) = \\max_{\\substack{i,j\\\\i \\neq j}} \\lra{\\mb{d}_i^\\top \\mb{d}_j}$.\nWe also define the \\textit{order-$s$ cumulative coherence} of $\\mb{D}$ as\n\t\\begin{align} \\label{eq:mu_1}\n\t\\mu_{s}(\\mb{D}) \\triangleq \\max_{|\\mc{J} |\\leq s} \\max_{j \\not\\in \\mc{J}}\n\t\t\\|\\mb{D}_{\\mc{J}}^\\top \\mb{d}_{j}\\|_1.\n\t\\end{align}\nNote that for $s=1$, the cumulative coherence is equivalent to the worst-case coherence and $\\mu_{s}(\\mb{D}) \\leq s \\mu_1(\\mb{D})$~\\cite{gribonval2014sparse}.\nFor $\\mb{D} = \\bigotimes_{k \\in [K]} \\mb{D}_k$, where $\\mb{D}_k$'s have unit-norm columns, $\\mu_1(\\mb{D}) = \\max_{k \\in [K]} \\mu_1(\\mb{D}_k)$~\\cite[Corollary 3.6]{jokar2009sparse} and it can be shown that\\footnote{The proof of \\eqref{eq:mu_s} is provided in Appendix C.}:\n\t\\begin{align}\\label{eq:mu_s}\n\t\\mu_s(\\mb{D}) &\\leq \\max_{k \\in [K]} \\mu_{s_k}(\\mb{D}_k)\n\t\t\\bigg( \\prod_{\\substack{i \\in [K], \\\\ i \\neq k}} \\lrp{ 1+\\mu_{s_i-1}(\\mb{D}_i)} \\bigg).\n\t\\end{align}\t\n\nThe rest of the paper is organized as follows. We formulate the KS-DL problem in Section~\\ref{sec:model}. In Section~\\ref{sec:asymp}, we provide analysis for asymptotic recovery of coordinate dictionaries composing the KS dictionary and in Section~\\ref{sec:finite}, we present sample complexity results for identification of coordinate dictionaries that are based on the results of Section~\\ref{sec:asymp}. Finally, we conclude the paper in Section~\\ref{sec:discuss}. In order to keep the main exposition simple, proofs of the lemmas and propositions are relegated to appendices.\n\n\\section{System Model} \\label{sec:model}\nWe assume the observations are $K$th-order tensors $\\ul{\\Y} \\in \\mbb{R}^{m_1\\times m_2 \\times \\dots \\times m_K}$. 
Given generating \\textit{coordinate dictionaries} $\\mb{D}^0_k \\in \\mbb{R}^{m_k \\times p_k}$, \\textit{coefficient tensor} $\\ul{\\X} \\in \\mbb{R}^{p_1\\times p_2 \\times \\dots \\times p_K}$, and \\textit{noise tensor} $\\ul{\\N}$, we can write $\\mb{y} \\triangleq \\mathop{\\mathrm{vec}}\\nolimits(\\ul{\\Y})$ using \\eqref{eq:vecty_vectx} as\\footnote{We have reindexed $\\mb{D}_k$'s in \\eqref{eq:vecty_vectx} for ease of notation.}\n\t\\begin{align} \\label{eq:obs_model}\n\t\\mb{y} = \\bigg( \\bigotimes_{k \\in [K]} \\mb{D}_k^0 \\bigg) \\mb{x} + \\mb{w},\n\t\t\\quad \\|\\mb{x}\\|_0 \\leq s,\n\t\\end{align}\nwhere $\\mb{x}=\\mathop{\\mathrm{vec}}\\nolimits(\\ul{\\X}) \\in \\mbb{R}^{p}$ denotes the sparse generating coefficient vector, $\\mb{D}^0 = \\bigotimes \\mb{D}_k^0 \\in \\mbb{R}^{m\\times p}$ denotes the underlying KS dictionary, and $\\mb{w}=\\mathop{\\mathrm{vec}}\\nolimits(\\ul{\\N}) \\in \\mbb{R}^m$ denotes the underlying noise vector. Here, $\\mb{D}_k^0 \\in \\mathcal{D}_k = \\lr{ \\mb{D}_k \\in \\mbb{R}^{m_k\\times p_k}, \\|\\mb{d}_{k,j}\\|_2 = 1, \\forall j \\in [p_k]}$ for $k \\in [K]$, $p = \\prod_{k \\in [K]}p_k$ and $m = \\prod_{k \\in [K]}m_k$.\\footnote{Note that the $\\mathcal{D}_k$'s are compact sets on their respective oblique manifolds of matrices with unit-norm columns~\\cite{gribonval2014sparse}.} We use $\\bigotimes$ for $\\bigotimes_{k\\in[K]}$ in the following for simplicity of notation. We assume we are given $N$ noisy tensor observations, which are then stacked in a matrix $\\mb{Y} = [\\mb{y}_1,\\dots,\\mb{y}_N]$. To state the problem formally, we first make the following assumptions on distributions of $\\mb{x}$ and $\\mb{w}$ for each tensor observation.\n\n\\textit{Coefficient distribution:} We assume the coefficient tensor $\\ul{\\X}$ follows the random \\textit{``separable sparsity\"} model. 
That is, $\\mb{x}=\\mathop{\\mathrm{vec}}\\nolimits(\\ul{\\X})$ is sparse and the support of nonzero entries of $\\mb{x}$ is structured and random. Specifically, we sample $s_k$ elements uniformly at random from $[p_k]$, $k \\in [K]$. Then, the random support $\\mc{J} \\subseteq [p]$ of $\\mb{x}$, with $|\\mc{J}|=s$, is associated with an element of\n\\begin{align*}\n\t\\lr{\\mc{J}_1\\times \\mc{J}_2 \\times \\dots \\times \\mc{J}_K:\n\t\t\\mc{J}_k\\subseteq [p_k], |\\mc{J}_k|=s_k, k \\in [K]}\n\\end{align*}\nvia lexicographic indexing, where $ s=\\prod_{k \\in [K]} s_k$, and the supports of $\\mb{x}_{1:N}$ are assumed to be independent and identically distributed (i.i.d.). This model requires nonzero entries of the coefficient tensors to be grouped in blocks and the sparsity level associated with each coordinate dictionary to be small~\\cite{caiafa2013computing}.\\footnote{In contrast, for coefficients following the random non-separable sparsity model, the support of the nonzero entries of the coefficient vector is assumed to be uniformly distributed over $\\lr{\\mc{J} \\subseteq [p]: |\\mc{J}|=s}$.}\n\nWe now make the same assumptions for the distribution of $\\mb{x}$ as assumptions A and B in Gribonval et al.~\\cite{gribonval2014sparse}.\nThese include:\n($i$) $\\mbb{E}\\lr{\\mb{x}_\\mc{J} \\mb{x}_\\mc{J}^\\top |\\mc{J}} = \\mbb{E}\\lr{x^2}\\mb{I}_{s}$,\n($ii$) $\\mbb{E}\\lr{\\mb{x}_\\mc{J} \\boldsymbol{\\sigma}_\\mc{J}^\\top |\\mc{J}} = \\mbb{E} \\lr{|x|}\\mb{I}_{s}$, where $\\boldsymbol{\\sigma} = \\mathop{\\mathrm{sign}}\\nolimits(\\mb{x})$,\n($iii$) $\\mbb{E}\\lr{\\boldsymbol{\\sigma}_\\mc{J} \\boldsymbol{\\sigma}_\\mc{J}^\\top |\\mc{J}} = \\mb{I}_{s}$,\n($iv$) magnitude of $\\mb{x}$ is bounded, i.e., $\\|\\mb{x}\\|_2 \\leq M_x $ almost surely, and\n($v$) nonzero entries of $\\mb{x}$ have a minimum magnitude, i.e., $\\min_{j \\in \\mc{J}} |x_j| \\geq x_{\\mathrm{min}}$ almost surely.\nFinally, we define $\\kappa_x \\triangleq \\mbb{E}\\lr{|x|}\/\\sqrt{\\mbb{E}
\\lr{x^2}}$ as a measure of the flatness of $\\mb{x}$ ($\\kappa_x \\leq 1$, with $\\kappa_x=1$ when all nonzero coefficients are equal~\\cite{gribonval2014sparse}).\n\n\\textit{Noise distribution:} We make the following assumptions on the distribution of noise, which is assumed i.i.d. across data samples:\n($i$) $\\mbb{E}\\lr{\\mb{w} \\mb{w}^\\top} = \\mbb{E}\\lr{w^2} \\mb{I}_m $,\n($ii$) $\\mbb{E}\\lr{\\mb{w} \\mb{x}^\\top |\\mc{J}}=\\mbb{E}\\lr{\\mb{w} \\boldsymbol{\\sigma}^\\top |\\mc{J}}=\\mathbf{0}$, and\n($iii$) magnitude of $\\mb{w}$ is bounded, i.e., $\\|\\mb{w}\\|_2 \\leq M_w$ almost surely.\n\nOur goal in this paper is to recover the underlying coordinate dictionaries, $\\mb{D}^0_k$, from $N$ noisy realizations of tensor data.\nTo solve this problem, we take the empirical risk minimization approach and define\n\t\\begin{align}\n\t&f_\\mb{y} \\lrp{\\D_{1:K}} \\triangleq\n\t\t \\inf_{\\mb{x}' \\in \\mbb{R}^p } \\bigg\\{\n\t\t \\frac{1}{2} \\norm{\\mb{y} - \\lrp{\\bigotimes \\mb{D}_k } \\mb{x}'\n\t\t }_2^2\n\t\t +\\lambda\\|\\mb{x}'\\|_1 \\bigg\\},\n \t\t\\text{and} \\nonumber \\\\\n\t&F_\\mb{Y} \\lrp{\\D_{1:K}} \\triangleq\n\t\t \\frac{1}{N} \\sum_{n =1}^N f_{\\mb{y}_n}\\lrp{\\D_{1:K}} ,\n\t\\end{align}\nwhere $\\lambda$ is a regularization parameter. In theory, we can recover the coordinate dictionaries by solving the following regularized optimization program:\n\t\\begin{align}\n \\min_{\\substack{\\mb{D}_k \\in \\mathcal{D}_k \\\\ k \\in [K]}}\n\t\tF_\\mb{Y} \\lrp{\\D_{1:K}}. \\label{eq:f_x}\n\t\\end{align}\nMore specifically, given desired errors $\\lr{\\varepsilon_k}_{k=1}^K$, we want a local minimum of~\\eqref{eq:f_x} to be attained by coordinate dictionaries $\\wh{\\D}_k \\in \\mathcal{B}_{\\varepsilon_k}(\\mb{D}^0_k), k \\in [K]$.
That is, there exists a set $\\{\\wh{\\D}_k\\}_{k \\in [K]} \\subset \\lr{\\mb{D}_k \\in \\mathcal{B}_{\\varepsilon_k}(\\mb{D}^0_k)}_{k \\in [K]}$ such that $F_\\mb{Y}(\\wh{\\D}_{1:K}) \\leq F_\\mb{Y}(\\D_{1:K})$.\\footnote{We focus on the local recovery of coordinate dictionaries (i.e., $\\wh{\\D}_k \\in \\mathcal{B}_{\\varepsilon_k}(\\mb{D}^0_k)$) due to ambiguities in the general DL problem. This ambiguity is a result of the fact that dictionaries are invariant to permutation and sign flips of dictionary columns, resulting in equivalence classes of dictionaries. Some works in the literature on conventional DL overcome this issue by defining distance metrics that capture the distance between these equivalence classes~\\cite{agarwal2013exact,agarwal2013learning,arora2013new}.}\nTo address this problem, we first minimize the statistical risk:\n\t\\begin{align} \\label{eq:f_x_asym}\n\t&\\min_{\\substack{\\mb{D}_k \\in \\mathcal{D}_k \\\\ k \\in [K]}} f_\\mbb{P} \\lrp{\\D_{1:K}} \\triangleq\n\t\\min_{\\substack{\\mb{D}_k \\in \\mathcal{D}_k \\\\ k \\in [K]}}\n\t\t\\mbb{E}_\\mb{y} \\lr{ f_\\mb{y}\\lrp{\\D_{1:K}}}.\n\t\\end{align}\nThen, we connect $F_\\mb{Y} \\lrp{\\D_{1:K}}$ to $f_\\mbb{P} \\lrp{\\D_{1:K}}$ using concentration of measure arguments and obtain the number of samples sufficient for local recovery of the coordinate dictionaries. Such a result ensures that any KS-DL algorithm that is guaranteed to converge to a local minimum, and which is initialized close enough to the true KS dictionary, will converge to a solution close to the generating coordinate dictionaries (as opposed to the generating KS dictionary, which is guaranteed by analysis of the vector-valued setup~\\cite{gribonval2014sparse}).\n\n\\section{Asymptotic Identifiability Results}\\label{sec:asymp}\nIn this section, we provide an identifiability result for the KS-DL objective function in \\eqref{eq:f_x_asym}.
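As a point of reference, the inner minimization defining $f_\mb{y}\lrp{\D_{1:K}}$ is a standard lasso problem with a Kronecker-structured design matrix. The following sketch evaluates it with a simple proximal-gradient (ISTA) loop; the dimensions, helper names, and solver are illustrative assumptions, not the estimator analyzed in this paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical coordinate dictionaries (K = 2) with unit-norm columns.
def unit_cols(m, p):
    D = rng.standard_normal((m, p))
    return D / np.linalg.norm(D, axis=0)

D1, D2 = unit_cols(4, 5), unit_cols(3, 4)
D = np.kron(D1, D2)          # KS dictionary: (4*3) x (5*4)
y = rng.standard_normal(D.shape[0])
lam = 0.1                    # regularization parameter (lambda)

def objective(x):
    return 0.5 * np.sum((y - D @ x) ** 2) + lam * np.sum(np.abs(x))

# ISTA (proximal gradient) for the inner lasso defining f_y(D_{1:K}).
L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the smooth part
x = np.zeros(D.shape[1])
for _ in range(2000):
    z = x - (D.T @ (D @ x - y)) / L    # gradient step on the quadratic term
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-thresholding

f_y_value = objective(x)               # numerical approximation of f_y
```

At a lasso minimizer the residual correlation satisfies $\|\mb{D}^\top(\mb{y}-\mb{D}\widehat{\mb{x}})\|_\infty \leq \lambda$, which gives a quick convergence check.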
The implications of this theorem are discussed in Section~\\ref{sec:discuss}.\n\\begin{theorem}\\label{thm:asymp}\nSuppose the observations are generated according to \\eqref{eq:obs_model} and the dictionary coefficients follow the separable sparsity model of Section~\\ref{sec:model}. Further, assume the following conditions are satisfied:\n\t\\begin{align} \\label{eq:cond_k_i_p_i}\n\t&s_k \\leq \\frac{p_k}{8\\lrp{\\norm{\\mb{D}^0_k}_2+1}^2}, \\\\\\nonumber\n\t&\\max_{k \\in [K]} \\lr{\\mu_{s_k}(\\mb{D}^0_k)} \\leq \\frac{1}{4} , \\quad\n\t\\mu_s(\\mb{D}^0) <\\frac{1}{2},\n\t\\end{align}\nand\n\t\\begin{align} \\label{eq:cond_m_p}\n\t&\\frac{\\mbb{E}\\lr{x^2}}{M_x \\mbb{E}\\lr{|x|}} > \\frac{24\\sqrt{3}(4.5^{K\/2})K}{(1-2\\mu_s(\\mb{D}^0))} \\nonumber\\\\\n\t&\\qquad \\quad \\max_{k \\in [K]} \\lr{\n\t\t \\frac{s_k}{p_k}\n\t\t \\norm{ {\\mb{D}^0_k}^\\top\\mb{D}^0_k - \\mb{I}}_F\n\t\t \\lrp{ \\norm{\\mb{D}^0_k}_2+1}}.\n\t\\end{align}\nDefine\n\t\\begin{align} \\label{eq:cond_C_min_C_max}\n\t&C_{k,\\min} \\triangleq 8 (3^{\\frac{K+1}{2}})\\kappa_x^2\n\t\t\\lrp{\\frac{s_k}{p_k}}\n\t\t\\norm{{\\mb{D}_k^0}^\\top\\mb{D}_k^0 - \\mb{I}}_F \\lrp{ \\norm{\\mb{D}^0_k}_2+1} ,\n\t\t \\nonumber \\\\\n\t&C_{\\max} \\triangleq \\frac{1}{3K(1.5)^{K\/2}} \\frac{\\mbb{E}\\lr{|x|}}{M_x} (1-2\\mu_s(\\mb{D}^0)).\n\t\\end{align}\nThen, the map $\\D_{1:K} \\mapsto f_{\\mbb{P}}\\lrp{\\D_{1:K}}$ admits a local minimum $\\widehat{\\mb{D}}=\\bigotimes_{k \\in [K]} \\widehat{\\mb{D}}_k$ such that $\\widehat{\\mb{D}}_k \\in \\mathcal{B}_{\\varepsilon_k}(\\mb{D}^0_k)$, $k \\in [K]$, for any $\\varepsilon_k>0$ as long as\n\t\\begin{align} \\label{eq:cond_lambda}\n\t\\lambda \\leq \\frac{x_\\mathrm{min}}{8\\times 3^{(K-1)\/2}},\n\t\\end{align}\n\t\\begin{align} \\label{eq:cond_r_i}\n\t&\\frac{\\lambda C_{k,\\min}}{\\mbb{E}\\lr{|x|}} < \\varepsilon_k < \\frac{\\lambda C_{\\max}}{\\mbb{E}\\lr{|x|}}, \\ k \\in [K],\n\t\\end{align}\nand\n\t\\begin{align} 
\\label{eq:cond_noise}\n\t\\frac{M_w}{M_x}\n\t\t < 3(1.5)^{K\/2} \\bigg(\\frac{\\lambda K C_{\\max} }{\\mbb{E}\\lr{|x|}}\n\t\t - \\sum_{k \\in [K]} \\varepsilon_k\\bigg).\n\t\\end{align}\n\\end{theorem}\n\n\\subsection{Discussion}\n\nTheorem~\\ref{thm:asymp} captures how the existence of a local minimum for the statistical risk minimization problem depends on various properties of the coordinate dictionaries and demonstrates that there exists a local minimum of $f_{\\mbb{P}} \\lrp{\\D_{1:K}}$ that is in local neighborhoods of the coordinate dictionaries. This ensures asymptotic recovery of coordinate dictionaries within some local neighborhood of the true coordinate dictionaries, as opposed to KS dictionary recovery for vectorized observations~\\cite[Theorem 1]{gribonval2014sparse}.\n\nWe now explicitly compare conditions in Theorem~\\ref{thm:asymp} with the corresponding ones for vectorized observations~\\cite[Theorem 1]{gribonval2014sparse}.\nGiven that the coefficients are drawn from the separable sparsity model, the sparsity constraints for the coordinate dictionaries in \\eqref{eq:cond_k_i_p_i} translate into\n\t\\begin{align}\n\t\\frac{s}{p} = \\prod_{k \\in [K]} \\frac{ s_k}{ p_k}\n\t\t\\leq \\frac{1}{8^K \\prod_k \\lrp{\\norm{\\mb{D}^0_k}_2+1}^2} .\n\t\\end{align}\nTherefore, we have $\\dfrac{s}{p}= \\mathcal{O}\\lrp{ \\frac{1}{ \\prod_k \\norm{\\mb{D}^0_k}_2^2}}=\\mathcal{O}\\lrp{\\frac{1}{\\|\\mb{D}^0\\|_2^2}}$. Using the fact that $\\norm{\\mb{D}^0}_2 \\geq \\|\\mb{D}^0\\|_F\/ \\sqrt{m} = \\sqrt{p}\/\\sqrt{m}$, this translates into sparsity order\n$\ts = \\mathcal{O}\\lrp{ m}$. Next, the left hand side of the condition in \\eqref{eq:cond_m_p} is less than 1. 
Moreover, from properties of the Frobenius norm, it is easy to show that\n$\n\t\\norm{{\\mb{D}^0_k}^\\top\\mb{D}_k^0 - \\mb{I} }_F \\geq \\sqrt{p_k(p_k-m_k)\/m_k}.\n$\nThe fact that $\\norm{\\mb{D}_k^0}_2 \\geq \\sqrt{p_k}\/\\sqrt{m_k}$ and the assumption $\\mu_{s_k}(\\mb{D}_k^0)\\leq 1\/4$ imply that the right hand side of \\eqref{eq:cond_m_p} is lower bounded by $\\Omega\\lrp{ \\max_k s_k\\sqrt{(p_k-m_k)\/m_k^2}}$.\nTherefore, Theorem~\\ref{thm:asymp} applies to coordinate dictionaries with dimensions $p_k \\leq m_k^2$ and subsequently, KS dictionaries with $p \\leq m^2$. Both the sparsity order and dictionary dimensions are in line with the scaling results for vectorized data~\\cite{gribonval2014sparse}.\n\n\\subsection{Proof Outline}\n\nFor given radii $0<\\varepsilon_k\\leq 2\\sqrt{p_k}, k \\in [K]$, the spheres $\\mc{S}_{\\varepsilon_k}(\\mb{D}^0_k)$ are non-empty. This follows from the construction of dictionary classes, $\\mc{D}_k$'s.\nMoreover, the mapping $\\D_{1:K} \\mapsto f_{\\mbb{P}}\\lrp{\\D_{1:K}}$ is continuous with respect to the Frobenius norm $\\|\\D_k-\\D'_k\\|_F$ on all $\\mb{D}_k,\\mb{D}'_k \\in \\mbb{R}^{m_k\\times p_k}, k \\in [K]$~\\cite{gribonval2015sample}. 
Hence, it is also continuous on compact constraint sets $\\mathcal{D}_k$'s.\nWe derive conditions on the coefficients, underlying coordinate dictionaries, $M_w$, regularization parameter, and $\\varepsilon_k$'s such that\n\t\\begin{align} \\label{eq:f_p_r_def}\n\t\\Delta f_{\\mbb{P}}\\lrp{\\eps_{1:K}}\n\t\t\\triangleq \\inf_{\\mb{D}_k \\in \\mc{S}_{\\varepsilon_k}(\\mb{D}^0_k)}\n\t\t\\Delta f_{\\mbb{P}} \\lrp{ \\D_{1:K}; \\D^0_{1:K} } >0.\n\t\\end{align}\nThis along with the compactness of closed balls $\\bar{\\mathcal{B}}_{\\varepsilon_k}(\\mb{D}^0_k)$ and the continuity of the mapping $\\D_{1:K} \\mapsto f_{\\mbb{P}}\\lrp{\\D_{1:K}}$ imply the existence of a local minimum of $f_{\\mbb{P}}\\lrp{\\D_{1:K}}$ achieved by $\\wh{\\D}_{1:K}$ in open balls, $\\mathcal{B}_{\\varepsilon_k}(\\mb{D}_k^0)$'s, $k \\in [K]$.\n\n\nTo find conditions that ensure $\\Delta f_{\\mbb{P}}\\lrp{\\eps_{1:K}} > 0$, we take the following steps:\ngiven coefficients that follow the separable sparsity model, we can decompose any $\\mb{D}_\\mc{J}, |\\mc{J}|=s$, as\n\t\\begin{align} \\label{eq:Dj_D12}\n\t\\mb{D}_\\mc{J} = \\bigotimes \\mb{D}_{k,\\mc{J}_k},\n\t\\end{align}\nwhere $\\ |\\mc{J}_k|=s_k$ for $k \\in[K]$.\\footnote{The separable sparsity distribution model implies sampling without replacement from columns of $\\mb{D}_k$.}\nGiven a generating $\\boldsymbol{\\sigma} = \\mathop{\\mathrm{sign}}\\nolimits(\\mb{x})$,\nwe obtain $\\widehat{\\mb{x}}$ by solving $f_\\mb{y}\\lrp{\\D_{1:K}}$ with respect to $\\mb{x}'$, conditioned on the fact that $\\mathop{\\mathrm{sign}}\\nolimits(\\widehat{\\mb{x}})=\\widehat{\\boldsymbol{\\sigma}}=\\boldsymbol{\\sigma}$. This eliminates the dependency of $f_\\mb{y} \\lrp{\\D_{1:K}}$ on $\\inf_{\\mb{x}'}$ by finding a closed-form expression for $f_\\mb{y}\\lrp{\\D_{1:K}}$ given $\\widehat{\\boldsymbol{\\sigma}}=\\boldsymbol{\\sigma}$, which we denote as $\\phi_\\mb{y}\\lrp{\\D_{1:K}|\\boldsymbol{\\sigma}}$. 
Defining\n\\begin{align}\n\\phi_{\\mbb{P}}\\lrp{\\D_{1:K}|\\boldsymbol{\\sigma}} \\triangleq \\mbb{E}\\lr{\\phi_\\mb{y}\\lrp{\\D_{1:K}|\\boldsymbol{\\sigma}}},\n\\end{align}\nwe expand $ \\Delta \\phi_{\\mbb{P}}\\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}}$ using \\eqref{eq:Dj_D12} and separate the terms that depend on each radius $\\varepsilon_k = \\|\\D_k-\\D^0_k\\|_F$ to obtain conditions for sparsity levels $s_k, k\\in [K]$, and coordinate dictionaries such that $\\Delta \\phi_{\\mbb{P}} \\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} >0$.\nFinally, we derive conditions on $M_w$, coordinate dictionary coherences and $\\varepsilon_k$'s that ensure $\\widehat{\\boldsymbol{\\sigma}}=\\boldsymbol{\\sigma}$ and $\\Delta f_{\\mbb{P}} \\lrp{ \\D_{1:K};\\D^0_{1:K} } = \\Delta \\phi_{\\mbb{P}} \\lrp{\\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}}$.\n\n\n\\begin{remark}\nThe key assumption in the proof of Theorem~\\ref{thm:asymp} is expanding $\\mb{D}_{\\mc{J}}$ according to~\\eqref{eq:Dj_D12}. This is a consequence of the separable sparsity model for dictionary coefficients. For a detailed discussion on the differences between the separable sparsity model and the random sparsity model for tensors, we refer the readers to our earlier work~\\cite{shakeri2016minimax}.\n\\end{remark}\n\n\\begin{remark}\nAlthough some of the forthcoming lemmas needed for the proof of Theorem~\\ref{thm:asymp} impose conditions on the $\\mb{D}_k$'s as well as the true coordinate dictionaries $\\mb{D}^0_k$'s, we later translate these conditions into ones stated exclusively in terms of the $\\mb{D}_k^0$'s and $\\varepsilon_k$'s.\n\\end{remark}\n\n\nThe proof of Theorem~\\ref{thm:asymp} relies on the following propositions and lemmas.
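Before stating these results, the separable structure they rely on, namely that a restricted dictionary on a separable support factorizes as in \eqref{eq:Dj_D12} and that its pseudoinverse inherits the Kronecker form, can be sanity-checked numerically for $K=2$ (a sketch with hypothetical dimensions and supports):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical coordinate dictionaries for K = 2.
m1, p1, m2, p2 = 4, 6, 3, 5
D1 = rng.standard_normal((m1, p1))
D2 = rng.standard_normal((m2, p2))
D = np.kron(D1, D2)

# A separable support: J corresponds to J1 x J2 under lexicographic indexing,
# i.e., column j1*p2 + j2 of D equals kron(D1[:, j1], D2[:, j2]).
J1, J2 = [0, 3], [1, 2, 4]
J = [j1 * p2 + j2 for j1 in J1 for j2 in J2]
DJ = D[:, J]

# The restricted dictionary factorizes: D_J = D_{1,J1} kron D_{2,J2}.
assert np.allclose(DJ, np.kron(D1[:, J1], D2[:, J2]))

# The pseudoinverse inherits the Kronecker structure as well.
assert np.allclose(np.linalg.pinv(DJ),
                   np.kron(np.linalg.pinv(D1[:, J1]),
                           np.linalg.pinv(D2[:, J2])))
```

The same factorization applies to the projection and inverse-Gram operators, since each is built from the restricted sub-dictionaries.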
The proofs of these are provided in Appendix A.\n\n\\begin{proposition} \\label{prop:1}\nSuppose the following inequalities hold for $k \\in [K]$:\n\t\\begin{align} \\label{eq:cond_delt_s_prop}\n\ts_k \\leq \\frac{p_k}{8(\\|\\mb{D}_k^0\\|_2+1)^2}\\quad \\text{and} \\quad\n\t\\max_{k\\in [K]}&\\lr{\\delta_{s_k}(\\mb{D}_k^0)}\\leq \\frac{1}{4} .\n\t\\end{align}\nThen, for\n\t\\begin{align} \\label{eq:cond_lamb_1}\n\t\\bar{\\lambda} \\triangleq \\dfrac{\\lambda}{\\mbb{E}\\lr{|x|}}\\leq \\dfrac{1}{8\\times 3^{(K-1)\/2}},\n\t\\end{align}\nany collection of $\\lr{ \\varepsilon_k: \\varepsilon_k \\leq 0.15, k \\in [K]}$, and for all $\\mb{D}_k \\in \\mc{S}_{\\varepsilon_k}(\\mb{D}_k^0)$, we have:\n\t\\begin{align} \\label{eq:delt_p_LB}\n\t&\\Delta \\phi_{\\mbb{P}} \\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}}\n\t\t\\geq \\frac{s\\mbb{E}\\{x^2\\}}{8} \\sum_{k \\in [K]}\n\t\t\\frac{\\varepsilon_k}{p_k} \\lrp{\\varepsilon_k- \\varepsilon_{k,\\min}(\\bar{\\lambda})},\n\t\\end{align}\nwhere\n\t\\begin{align*}\n\t&\\varepsilon_{k,\\min} (\\bar{\\lambda})\n\t\\triangleq \\frac{3^{(K-1)\/2}}{2} \\lrp{1.5^{\\frac{K-1}{2}}\n\t\t+2^{(K+1)}\\bar{\\lambda} } \\bar{\\lambda} C_{k,\\min}.\n\t\\end{align*}\nIn addition, if\n\t\\begin{align} \\label{eq:cond_lamb_2}\n\t\\bar{\\lambda} \\leq \\frac{0.15}{\\max_{k \\in [K]} C_{k,\\min}},\n\t\\end{align}\nthen $\\varepsilon_{k,\\min}(\\bar{\\lambda})<0.15$.\nThus, $\\Delta \\phi_\\mbb{P}\\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} > 0 $ for all $\\varepsilon_k \\in (\\varepsilon_{k,\\min}(\\bar{\\lambda}),0.15], k \\in [K]$.\n\\end{proposition}\n\nThe proof of Proposition~\\ref{prop:1} relies on the following lemmas as well as supporting lemmas from the analysis of vectorized data~\\cite[Lemmas~4,6,7,15,16]{gribonval2014sparse}.\n\n\n\\begin{lemma} \\label{def:P_H_PS}\nLet $\\mb{D} = \\bigotimes \\D_k $ where $\\delta_{s_k}(\\mb{D}_k)<1$ for $k \\in [K]$, and $\\mc{J}$ be a support set generated by the separable sparsity model.
Then any $\\mb{D}_\\mc{J}, |\\mc{J}|=s$, can be decomposed as\n$\\mb{D}_\\mc{J} = \\bigotimes \\mb{D}_{k,\\mc{J}_k}$,\nwhere $\\ |\\mc{J}_k|=s_k$ and $\\mathop{\\mathrm{rank}}\\nolimits (\\mb{D}_{k,\\mc{J}_k})=s_k$, for $k \\in[K]$. Also, the following relations hold for this model:\\footnote{The equations follow from basic properties of the Kronecker product~\\cite{horn2012matrix}.}\n\t\\begin{align} \\label{eq:P_H_Ps}\n\t\\bP_{\\D_{\\cJ}} = \\bigotimes \\bP_{\\D_{k,\\cJ_k}}, \\D_{\\cJ}^+ = \\bigotimes \\D_{k,\\cJ_k}^+, \\bH_{\\D_{\\cJ}} = \\bigotimes \\bH_{\\D_{k,\\cJ_k}},\n\t\\end{align}\nwhere $\\mb{P}$ and $\\mb{H}$ are defined in Section~\\ref{subsec:notation}.\n\\end{lemma}\n\n\n\\begin{lemma} \\label{lem:Dtld}\nGiven $\\D_{1:K}$ and $\\D^0_{1:K}$, the difference\n\t\\begin{align} \\label{eq:otD_otDp}\n\t&\\bigotimes \\D_k - \\bigotimes \\D^0_k \\nonumber\\\\\n\t&\\qquad = \\sum _{k \\in [K]} \\widetilde{\\mb{D}}_{k,1} \\otimes \\dots \\otimes\n\t\t\\lrp{\\mb{D}_k - \\mb{D}^0_k}\t\\otimes \\dots \\otimes \\widetilde{\\mb{D}}_{k,K},\n\t\\end{align}\t\nwhere without loss of generality, each $\\widetilde{\\mb{D}}_{k,i}$ is equal to either $\\mb{D}^0_i$ or $\\mb{D}_i$, for $k \\in [K]$.\n\\end{lemma}\n\nWe drop the $k$ index from $\\wt{\\D}_{k,i}$ for ease of notation throughout the rest of the paper.\n\n\\begin{lemma} \\label{lemma:f_closedform}\nLet $\\boldsymbol{\\sigma} \\in \\{-1,0,1\\}^p$ be an arbitrary sign vector and $\\mc{J} = \\mc{J}(\\boldsymbol{\\sigma})$ be its support. 
Define\\footnote{The quantity $\\phi_\\mb{y}\\lrp{\\D_{1:K}|\\boldsymbol{\\sigma}}$ is not equal to $\\phi_\\mb{y}\\lrp{\\D_{1:K}}$ conditioned on $\\boldsymbol{\\sigma}$ and the expression is only used for notation.}\n\t\\begin{align} \\label{eq:delt_inf}\n\t\\phi_\\mb{y}\\lrp{\\D_{1:K}|\\boldsymbol{\\sigma}} \\triangleq \\inf_{\\substack{\\mb{x} \\in \\mbb{R}^p \\\\ \\mathop{\\mathrm{supp}}\\nolimits(\\mb{x}) \\subset \\mc{J}}}\n\t\t\\frac{1}{2} \\norm{\\mb{y} - \\lrp{\\bigotimes \\D_k } \\mb{x} }_2^2+ \\lambda {\\boldsymbol{\\sigma}}^\\top\\mb{x}.\n\t\\end{align}\nIf $\\mb{D}_{k,\\mc{J}_k}^\\top \\mb{D}_{k,\\mc{J}_k}$ is invertible for $k \\in [K]$, then $\\widehat{\\mb{x}}$ minimizes $\\phi_\\mb{y}\\lrp{\\D_{1:K}|\\boldsymbol{\\sigma}} $, where\n\t\\begin{align} \\label{eq:xhat_closedform}\n\t\\widehat{\\mb{x}}_\\mc{J} = \\lrp{\\bigotimes \\D_{k,\\cJ_k}^+ } \\mb{y} - \\lambda \\lrp{ \\bigotimes \\big( \\mb{D}_{k,\\mc{J}_k}^\\top\\mb{D}_{k,\\mc{J}_k} \\big)^{-1} }\\boldsymbol{\\sigma}_\\mc{J},\n\t\\end{align}\nand $\\widehat{\\mb{x}}_{\\mc{J}^c} = \\mathbf{0}$. 
Thus, $\\phi_\\mb{y}\\lrp{\\D_{1:K} |\\boldsymbol{\\sigma}}$ can be expressed in closed form as:\n\t\\begin{align} \\label{eq:delt_closedform}\n\t&\\phi_\\mb{y}\\lrp{\\D_{1:K}|\\boldsymbol{\\sigma}} = \\frac{1}{2}\\|\\mb{y}\\|_2^2\n\t\t- \\frac{1}{2} \\mb{y}^\\top\n\t\t\\lrp{ \\bigotimes \\bP_{\\D_{k,\\cJ_k}}}\\mb{y} \\nonumber\\\\\n\t&\\ + \\lambda {\\boldsymbol{\\sigma}}_\\mc{J}^\\top\n\t\t\\lrp{ \\bigotimes \\D_{k,\\cJ_k}^+ } \\mb{y}\n\t\t-\\frac{\\lambda^2}{2}{\\boldsymbol{\\sigma}}_\\mc{J}^\\top \\lrp{ \\bigotimes \\bH_{\\D_{k,\\cJ_k}}} \\boldsymbol{\\sigma}_\\mc{J}.\n\t\\end{align}\n\\end{lemma}\n\n\\begin{lemma} \\label{lem:exp_phi}\nAssume $\\max\\lr{ \\delta_{s_k}(\\mb{D}_k^0),\\delta_{s_k}(\\mb{D}_k)}<1$ for $k \\in [K]$ and let $\\wt{\\D}_k$ be equal to either $\\mb{D}^0_k$ or $\\mb{D}_k$.\nFor\n\t\\begin{align}\n\t\\Delta \\phi_{\\mbb{P}} \\lrp{ \\D_{1:K};\\D^0_{1:K} \\big| \\boldsymbol{\\sigma}}\n\t\t\\triangleq \\phi_{\\mbb{P}} \\lrp{ \\D_{1:K}| \\boldsymbol{\\sigma}}\n\t\t- \\phi_{\\mbb{P}} \\lrp{ \\D^0_{1:K}| \\boldsymbol{\\sigma}},\n\t\\end{align}\nwe have\n\t\\begin{align} \\label{eq:delt_ph_2}\n\t&\\Delta \\phi_{\\mbb{P}} \\lrp{ \\D_{1:K};\\D^0_{1:K} \\big| \\boldsymbol{\\sigma}} \\nonumber\\\\\n\t& = \\frac{\\mbb{E}\\{x^2\\}}{2}\n\t\t\\sum_{k \\in [K]} \\mbb{E}_{\\mc{J}_1} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ {\\mb{D}^0_1}^\\top \\mb{P}_{\\widetilde{\\mb{D}}_{1,\\mc{J}_1}}\\mb{D}^0_1} }\\dots \\nonumber\\\\\n\t&\\qquad \\qquad \\qquad \\qquad \\mbb{E}_{\\mc{J}_k} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ {\\mb{D}^0_k}^\\top(\\mb{I}_{m_k} - \\bP_{\\D_{k,\\cJ_k}})\\mb{D}^0_k}} \\nonumber \\\\\n\t&\\qquad \\qquad \\qquad \\qquad \\dots\n\t\t\\mbb{E}_{\\mc{J}_K} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{{\\mb{D}^0_K}^\\top \\mb{P}_{\\widetilde{\\mb{D}}_{K,\\mc{J}_K}}\\mb{D}^0_K}}\n\t\t\\nonumber \\\\\n\t&- \\lambda \\mbb{E}\\{|x|\\}\n\t\t\\sum_{k \\in [K]} \\mbb{E}_{\\mc{J}_1} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ 
{\\widetilde{\\mb{D}}_{1,\\mc{J}_1}}^+\\mb{D}^0_1}} \\dots \\nonumber\\\\\n\t&\\quad \\mbb{E}_{\\mc{J}_k} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{\\mb{I}_{s_k} - \\D_{k,\\cJ_k}^+ \\mb{D}^0_k}} \\dots\n\t\t\\mbb{E}_{\\mc{J}_K} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ {\\widetilde{\\mb{D}}_{K,\\mc{J}_K}}^+ \\mb{D}^0_K}} \\nonumber \\\\\n\t&+ \\frac{\\lambda^2}{2}\n\t\t\\sum_{k \\in [K]} \\mbb{E}_{\\mc{J}_1} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ \\mb{H}_{\\widetilde{\\mb{D}}_1,\\mc{J}_1}} }\\dots \\nonumber\\\\\n\t&\\quad \\mbb{E}_{\\mc{J}_k} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ \\mb{H}_{\\mb{D}^0_{k,\\mc{J}_k}} - \\bH_{\\D_{k,\\cJ_k}} }\t} \\dots\n\t\t\\mbb{E}_{\\mc{J}_K} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ \\mb{H}_{\\widetilde{\\mb{D}}_{K,\\mc{J}_K}}}}.\n\t\\end{align}\n\\end{lemma}\n\n\\begin{lemma} \\label{lem:UB_mu_delt_RIP}\nFor any $\\mb{D}_k \\in \\mc{D}_k$ satisfying $\\mathsf{RIP}$ of order $s_k$, given $\\mc{J}_k \\subset [p_k]$ and $ |\\mc{J}_k|=s_k$, the following relations hold:\n\t\\begin{align}\n\t\\norm{\\mb{D}_{k,\\mc{J}_k}}_2 &= \\norm{{\\mb{D}_{k,\\mc{J}_k}}^\\top}_2\n\t\t\\leq \\sqrt{1+\\delta_{s_k}(\\mb{D}_k)}, \\label{eq:l2_delt} \\\\\n\t\\delta_{s_k}(\\mb{D}_k) &\\leq \\mu_{s_k-1}(\\mb{D}_k). \\label{eq:delt_mu}\n\t\\end{align}\n\\end{lemma}\n\n\n\\begin{lemma}[Lemma 4~\\cite{gribonval2014sparse}]\n\\label{lem:H_Dps}\nLet $\\mb{D}_k$'s be coordinate dictionaries such that $\\delta_{s_k}(\\mb{D}_k)<1$. 
Then for any $\\mc{J}_k \\subset [p_k], |\\mc{J}_k|=s_k$, $\\bH_{\\D_{k,\\cJ_k}}$ exists and\n\t\\begin{align} \\label{eq:pso_cond}\n\t&\\norm{\\bH_{\\D_{k,\\cJ_k}}}_2 \\leq \\frac{1}{1-\\delta_{s_k}(\\mb{D}_k)}, \\quad\n\t\\norm{\\D_{k,\\cJ_k}^+}_2 \\leq \\frac{1}{\\sqrt{1-\\delta_{s_k}(\\mb{D}_k)}},\n\t\\end{align}\nand for any $\\mb{D}_k'$ such that $\\|\\D_k-\\D'_k\\|_F \\leq \\varepsilon_k < \\sqrt{1-\\delta_{s_k}(\\mb{D}_k)}$:\n\t\t\\begin{align} \\label{eq:cond_delt_r_i}\n\t\t&1-\\delta_{s_k}(\\mb{D}'_k) \\geq (\\sqrt{1-\\delta_{s_k}(\\mb{D}_k)} - \\varepsilon_k)^2 \\triangleq 1-\\delta_k.\n\t\t\\end{align}\n\\end{lemma}\n\n\n\\begin{lemma}[Lemma 6~\\cite{gribonval2014sparse}]\n\\label{lem:D_12_tet}\nGiven any $\\mb{D}_k^1,\\mb{D}_k^2 \\in \\mc{D}_k$, there exist $\\mathbf{V}_k \\in \\mbb{R}^{m_k \\times p_k}$ with $\\diag{{\\mb{D}^1_k}^\\top \\mathbf{V}_k}=\\mathbf{0}$ and $\\diag{\\mathbf{V}_k^\\top \\mathbf{V}_k}=\\mb{I}_{p_k}$ and a vector $\\boldsymbol{\\theta}_k \\triangleq \\boldsymbol{\\theta}_k(\\mb{D}_k^1,\\mb{D}_k^2) \\in [0,\\pi]^{p_k}$, such that\n\t\\begin{align}\n\t\\mb{D}_k^2 = \\mb{D}_k^1 \\mathbf{C}_k (\\boldsymbol{\\theta}_k) + \\mathbf{V}_k \\mathbf{S}_k(\\boldsymbol{\\theta}_k),\n\t\\end{align}\nwhere $\\mathbf{C}_k (\\boldsymbol{\\theta}_k) \\triangleq \\Diag{\\cos(\\boldsymbol{\\theta}_k) }$ and $\\mathbf{S}_k (\\boldsymbol{\\theta}_k) \\triangleq \\Diag{\\sin(\\boldsymbol{\\theta}_k) }$.
Moreover,\n\t\\begin{align} \\label{eq:tet_rk}\n\t&\\frac{2}{\\pi}\\theta_{k,j} \\leq \\|\\mb{d}^2_{k,j} - \\mb{d}^1_{k,j} \\|_2\n\t\t= 2\\sin \\lrp{\\frac{\\theta_{k,j}}{2}} \\leq\\theta_{k,j}, \\text{and} \\nonumber \\\\\n\t&\\frac{2}{\\pi} \\|\\boldsymbol{\\theta}_k\\|_2 \\leq \\|\\mb{D}_k^2 - \\mb{D}_k^1 \\|_F \\leq \\|\\boldsymbol{\\theta}_k\\|_2 ,\n\t\\end{align}\nwhere $j \\in [p_k]$.\nSimilarly, there exists $\\mathbf{V}_k'$ such that $\\mb{D}_k^1 = \\mb{D}_k^2 \\mathbf{C}_k (\\boldsymbol{\\theta}_k) + \\mathbf{V}'_k \\mathbf{S}_k(\\boldsymbol{\\theta}_k)$, where $\\diag{{\\mb{D}^2_k}^\\top \\mathbf{V}'_k}=\\mathbf{0}$.\n\\end{lemma}\n\n\n\\begin{lemma} \\label{lem:A_B_delt}\nFix $\\D_{1:K}$ and $\\D^0_{1:K}$, and suppose $\\lr{A_k},\\lr{B_k},\\lr{\\delta_k}$ satisfy the following:\n\t\\begin{align} \\label{eq:A_B_Delk}\n\t&A_k \\geq \\max\\lr{ \\|\\mb{D}_k^\\top \\mb{D}_k - \\mb{I}_{p_k}\\|_F,\\|{\\mb{D}_k^0}^\\top \\mb{D}_k^0 - \\mb{I}_{p_k}\\|_F } , \\nonumber \\\\\n\t&B_k \\geq \\max\\lr{ \\|\\mb{D}_k\\|_2, \\|\\mb{D}_k^0\\|_2 }, \\text{and}\\nonumber \\\\\n\t&\\delta_k \\geq \\max\\lr{ \\delta_{s_k}(\\mb{D}_k),\\delta_{s_k}(\\mb{D}_k^0) }.\n\t\\end{align}\nThen for all $ \\boldsymbol{\\theta}_k \\triangleq \\boldsymbol{\\theta}_k(\\mb{D}_k,\\mb{D}_k^0), k \\in [K]$, we have\n\t\\begin{align} \\label{eq: delt_ph_3}\n\t&\\Delta\\phi_{\\mbb{P}}\\lrp{ \\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}} \\nonumber\\\\\n\t&\\geq \\frac{s\\mbb{E}\\{x^2\\}}{2}\n\t\\sum_{k \\in [K]} \\frac{\\|\\boldsymbol{\\theta}_k\\|_2 }{p_k}\n\t\t\\bigg[\\|\\boldsymbol{\\theta}_k\\|_2\n\t\t\\bigg( 1 - \\frac{s_k}{p_k} \\frac{B^2_k}{1-\\delta_k}\n\t\t-\\bar{\\lambda} \\kappa_x^2\n\t\t\\delta_{-k}\\bigg)\n\t\t\\nonumber \\\\\n\t&\\qquad \\quad -\\bigg(\\delta_{-k}\n\t\t+2\\bar{\\lambda}\\prod_{i \\in [K]} \\frac{1}{1-\\delta_i}\\bigg)\n\t\t\\bar{\\lambda} \\kappa_x^2\n\t\t\\frac{s_k}{p_k} \\frac{2A_kB_k}{1-\\delta_k}\\bigg],\n\t\\end{align}\nwhere $\\bar{\\lambda} \\triangleq 
\\dfrac{\\lambda}{\\mbb{E}\\lr{|x|}} $ and $\\delta_{-k} \\triangleq \\prod_{\\substack{i \\in [K]\\\\ i \\neq k}}\n\t\t\\sqrt{\\dfrac{1+\\delta_i}{1-\\delta_i}} $.\n\\end{lemma}\n\n\nProposition~\\ref{prop:1} shows $\\Delta\\phi_{\\mbb{P}}\\lrp{ \\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}} > 0 $. However, given $\\widehat{\\mb{x}}$, the solution of $\\phi_\\mb{y}\\lrp{\\D_{1:K}|\\boldsymbol{\\sigma}}$, $\\widehat{\\boldsymbol{\\sigma}} = \\mathop{\\mathrm{sign}}\\nolimits\\lrp{\\widehat{\\mb{x}}}$ is not necessarily equal to the sign of the generating $\\boldsymbol{\\sigma}$. We derive conditions that ensure $\\widehat{\\mb{x}}$ is almost surely the unique minimizer of $f_\\mb{y}\\lrp{\\D_{1:K}}$ and $\\widehat{\\boldsymbol{\\sigma}}=\\boldsymbol{\\sigma}$.\nWe introduce the following proposition for this purpose.\n\n\\begin{proposition} \\label{prop:3}\nLet the generating coordinate dictionaries $\\{ \\mb{D}_k^0 \\in\\mc{D}_k\\}$ satisfy:\n\t\\begin{align} \\label{eq:delt_cond}\n\t\\mu_{s}(\\mb{D}^0) < \\frac{1}{2} , \\quad \\max_k\\{ \\delta_{s_k}(\\mb{D}_k^0)\\} < \\frac{1}{4} .\n\t\\end{align}\nSuppose $\\bar{\\lambda} = \\dfrac{\\lambda}{\\mbb{E}\\lr{|x|}}\\leq \\dfrac{x_{\\min}}{2\\mbb{E}\\lr{|x|}}$ and\n\t\\begin{align} \\label{eq:prop_r_max}\n\t\\max_{k\\in [K]}\\{\\varepsilon_k\\} \\leq \\min\\lr{ \\bar{\\lambda}C_{\\max}, 0.15}.\n\t\\end{align}\nIf the following is satisfied:\n\t\\begin{align} \\label{eq:M_eps_M_al}\n\t\\frac{M_w}{M_x}\n\t\t < 3(1.5)^{K\/2} \\bigg( \\bar{\\lambda} K C_{\\max} - \\sum_{k \\in [K]} \\varepsilon_k\\bigg),\n\t\\end{align}\nthen for any $\\D_{1:K}$ such that $\\mb{D}_k \\in \\mc{S}_{\\varepsilon_k}(\\mb{D}^0_k)$, for $k \\in [K]$, $\\widehat{\\mb{x}}$ that is defined in~\\eqref{eq:xhat_closedform} is almost surely the minimizer of the map $\\mb{x}' \\mapsto \\frac{1}{2}\\norm{\\mb{y}- \\lrp{\\bigotimes \\D_k } \\mb{x}'}_2^2 +\\lambda \\|\\mb{x}'\\|_1 $ and 
\t\n$\\Delta\\phi_{\\mbb{P}}\\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}} = \\Delta f_{\\mbb{P}} \\lrp{\\D_{1:K};\\D^0_{1:K}}$.\n\\end{proposition}\n\n\\begin{remark}\nNote that $\\mu_s(\\mb{D}^0) < \\frac{1}{2}$ in \\eqref{eq:delt_cond} can be satisfied by ensuring that the right hand side of~\\eqref{eq:mu_s} is less than $\\frac{1}{2}$. One way this can be ensured is by enforcing strict conditions on coordinate dictionaries; for instance, $\\mu_{s_k}(\\mb{D}^0_k)\\leq \\frac{1}{2^K}$.\n\\end{remark}\n\nThe proof of Proposition~\\ref{prop:3} relies on the following lemmas and~\\cite[Lemmas 10--13]{gribonval2014sparse}.\n\n\\begin{lemma}[Lemma 13~\\cite{gribonval2014sparse}]\n\\label{lem:a_hat_min_cond}\nAssume $\\mu_s(\\mb{D}) <\\dfrac{1}{2}$. If\n\t\\begin{align} \\label{eq:lem8_cod}\n\t\\min_{j \\in \\mc{J}} \\left| x_j \\right| \\geq 2\\lambda, \\\n\t\t\\text{and} \\\n\t\t\\norm{\\mb{y} - \\mb{D} \\mb{x}}_2 < \\lambda (1-2\\mu_s(\\mb{D}))\n\t\\end{align}\nhold for the generating $\\mb{x}$, then $\\widehat{\\mb{x}}$ defined in \\eqref{eq:xhat_closedform} is the unique solution of $\\min_{\\mb{x}'} \\frac{1}{2}\\norm{\\mb{y} - \\lrp{\\bigotimes \\D_k } \\mb{x}' }_2^2 +\\lambda\\|\\mb{x}'\\|_1$.\n\\end{lemma}\n\n\n\\begin{lemma} \\label{lem:mu_mu0_rel}\nFor any $\\mb{D}^0=\\bigotimes \\D^0_k $ and $\\mb{D} = \\bigotimes \\D_k $ such that\n$\\mb{D}_k \\in \\bar{\\mathcal{B}}_{\\varepsilon_k}(\\mb{D}^0_k)$, for $k \\in [K]$, suppose the following inequalities are satisfied:\n\t\\begin{align} \\label{eq:UB_mu_delt}\n\t\\max_{k \\in [K]} \\{\\delta_{s_k}(\\mb{D}_k^0)\\} \\leq \\frac{1}{4},\n\t\t\\quad \\text{and}\n\t\t\\quad\t\\max_{k \\in [K]} \\varepsilon_k \\leq 0.15.\n\t\\end{align}\nThen, we have\n\t\\begin{align} \\label{eq:mu_mu0}\n\t\\mu_s(\\mb{D}) \\leq \\mu_s(\\mb{D}^0) + 2(1.5)^{K\/2}\\sqrt{s} \\bigg(\\sum_{k \\in [K]} \\varepsilon_k\\bigg).\n\t\\end{align}\t\n\\end{lemma}\n\n\n\\begin{IEEEproof}[Proof of Theorem~\\ref{thm:asymp}]\nTo prove this theorem, we 
use Proposition~\\ref{prop:1} to show that $\\Delta \\phi_{\\mbb{P}}\\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} > 0$, and then use Proposition~\\ref{prop:3} to show that $\\Delta \\phi_{\\mbb{P}}\\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} =\\Delta f_{\\mbb{P}}\\lrp{\\D_{1:K};\\D^0_{1:K}}$.\nThe assumptions in \\eqref{eq:cond_k_i_p_i} ensure that the conditions in \\eqref{eq:cond_delt_s_prop} and \\eqref{eq:delt_cond} are satisfied for Proposition~\\ref{prop:1} and Proposition~\\ref{prop:3}, respectively. Assumptions \\eqref{eq:cond_m_p} and \\eqref{eq:cond_lambda} ensure that the conditions in \\eqref{eq:cond_lamb_1} and \\eqref{eq:cond_lamb_2} are satisfied for Proposition~\\ref{prop:1}, $\\bar{\\lambda}\\leq \\dfrac{x_{\\min}}{2\\mbb{E}\\lr{|x|}}$ holds for Proposition~\\ref{prop:3}, and $\\max_{k \\in [K]}\\{C_{k,\\mathrm{min}}\\}\\bar{\\lambda} \\leq 0.15$ holds, so that Proposition~\\ref{prop:1} implies $\\Delta \\phi_{\\mbb{P}}\\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} > 0 $ for all $\\varepsilon_k \\in (\\bar{\\lambda}C_{k,\\min},0.15], k \\in [K]$. Finally, using the assumption in \\eqref{eq:cond_noise} implies $\\Delta \\phi_{\\mbb{P}}\\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} =\\Delta f_{\\mbb{P}}\\lrp{\\D_{1:K};\\D^0_{1:K}}$ for all $\\varepsilon_k \\leq \\bar{\\lambda}C_{\\max}, k \\in [K]$. Furthermore, the assumption in \\eqref{eq:cond_lambda} implies $C_{\\max}\\bar{\\lambda} \\leq 0.15$. Consequently, for any $\\lr{ \\varepsilon_k>0,k \\in [K]}$ satisfying the conditions in \\eqref{eq:cond_r_i}, $\\D_{1:K} \\mapsto f_{\\mbb{P}}\\lrp{\\D_{1:K}}$ admits a local minimum $\\widehat{\\mb{D}}= \\bigotimes \\widehat{\\mb{D}}_k$ such that $\\widehat{\\mb{D}}_k \\in \\mathcal{B}_{\\varepsilon_k}(\\mb{D}^0_k), k \\in [K]$.\n\\end{IEEEproof}\n\n\n\n\\section{Finite Sample Identifiability Results} \\label{sec:finite}\n\nWe now focus on leveraging Theorem~\\ref{thm:asymp} and solving~\\eqref{eq:f_x} to derive finite-sample bounds for KS dictionary identifiability.
Compared to Gribonval et al.~\\cite{gribonval2014sparse}, who use Lipschitz continuity of the objective function with respect to the larger KS dictionary, our analysis is based on ``coordinate-wise Lipschitz continuity'' with respect to the coordinate dictionaries.\n\n\\begin{theorem}\\label{thm:finite_n}\nSuppose the observations are generated according to \\eqref{eq:obs_model} and the dictionary coefficients follow the separable sparsity model of Section~\\ref{sec:model} such that \\eqref{eq:cond_k_i_p_i} to \\eqref{eq:cond_noise} are satisfied. Next, fix any $\\xi \\in (0,\\infty)$. Then, for any number of observations satisfying\n\t\\begin{align} \\label{eq:smplCmp}\n\tN = \\max_{k \\in [K]}\n\t\t&\\Omega \\bigg(\n\t\t\\frac{p_k^2 (\\xi+m_kp_k) } {(\\varepsilon_k - \\varepsilon_{k,\\min}(\\bar{\\lambda}))^2}\n\t\t\\nonumber\\\\\n\t&\\qquad \\bigg( \\frac{2^{K}(1 + \\bar{\\lambda}^2) M_x^2}\n\t\t{s^2\\mbb{E}\\{x^2\\}^2}\n\t\t+ \\bigg(\\frac{M_w}{s\\mbb{E}\\{x^2\\}} \\bigg)^2 \\bigg) \\bigg),\n\t\\end{align}\nwith probability at least $1-e^{-\\xi}$, $\\D_{1:K} \\mapsto F_\\mb{Y}\\lrp{\\D_{1:K}}$ admits a local minimum $\\widehat{\\mb{D}}=\\bigotimes \\widehat{\\mb{D}}_k$ such that $\\widehat{\\mb{D}}_k \\in \\mathcal{B}_{\\varepsilon_k}(\\mb{D}^0_k)$, for $k \\in [K]$.\n\\end{theorem}\n\n\\subsection{Discussion}\nLet us make some remarks about implications of Theorem~\\ref{thm:finite_n}. 
First, the sample complexity has an inverse relationship with the signal-to-noise ratio ($\\mathop{\\mathrm{SNR}}\\nolimits$),\\footnote{Since \\eqref{eq:smplCmp} is a sufficient condition on $N$, it implies $\\mathcal{O}$-scaling for the sample complexity.} which we define as\n\t\\begin{align}\n\t\\mathop{\\mathrm{SNR}}\\nolimits \\triangleq \\frac{\\mbb{E}\\{\\|\\mb{x}\\|_2^2\\}}{\\mbb{E}\\{\\|\\mb{w}\\|^2_2\\}} = \\frac{s\\mbb{E}\\{x^2\\}}{m\\mbb{E}\\{w^2\\}}.\n\t\\end{align}\nLooking at the terms on the right hand side of~\\eqref{eq:smplCmp} in Theorem~\\ref{thm:finite_n}, $M_x\/(s\\mbb{E}\\lr{x^2})$ is related to the deviation of $\\|\\mb{x}\\|_2$ from its mean, $\\mbb{E}\\lr{\\|\\mb{x}\\|_2}$, and depends on the coefficient distribution, while $M_w\/(s\\mbb{E}\\lr{x^2})$ is related to $1\/\\mathop{\\mathrm{SNR}}\\nolimits$ and depends on the noise and coefficient distributions.\n\nSecond, we note the dependence of the sample complexity on the recovery error of the coordinate dictionaries. We can interpret $\\varepsilon_k$ as the recovery error for $\\mb{D}^0_k$. Then, the sample complexity scaling in \\eqref{eq:smplCmp} is proportional to $\\max_k \\varepsilon_k^{-2}$.\nWe note that the sample complexity results in~\\cite{gribonval2014sparse} that are independent of $\\varepsilon \\triangleq \\norm{\\mb{D}-\\mb{D}^0}_F$ hold only in the noiseless setting; the dependence on $\\varepsilon^{-2}$ is inevitable for noisy observations~\\cite{gribonval2014sparse}. Furthermore, given the condition on the range of $\\varepsilon_k$'s in \\eqref{eq:cond_r_i}, the $\\varepsilon_k$'s cannot be arbitrarily small, and thus will not cause $N$ to grow arbitrarily large.\n\nThird, the sample complexity scaling in \\eqref{eq:smplCmp} depends on the coordinate dictionary dimensions as $\\max_k \\mathcal{O}(m_k p_k^3)$. 
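To make this third observation concrete, here is a toy computation with made-up dimensions (the numbers are illustrative only, not taken from the paper):

```python
from math import prod

# hypothetical coordinate dimensions for a 3rd-order tensor (illustrative only)
m_k = [8, 10, 12]    # rows of each coordinate dictionary
p_k = [16, 20, 24]   # columns of each coordinate dictionary

m, p = prod(m_k), prod(p_k)                       # KS dictionary is m x p (960 x 7680)
unstructured = m * p**3                           # O(m p^3): vectorized DL scaling
ks = max(mk * pk**3 for mk, pk in zip(m_k, p_k))  # max_k O(m_k p_k^3): KS-DL scaling

assert ks == 12 * 24**3   # largest coordinate dictionary dominates
assert ks < unstructured  # smaller by many orders of magnitude here
```

Even for these modest dimensions, the KS-DL scaling is driven only by the largest coordinate dictionary rather than by the product of all dimensions.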
Comparing this to the $\\mathcal{O}(mp^3)=\\mathcal{O}\\lrp{\\prod_k m_kp_k^3}$ scaling in the unstructured DL problem~\\cite{gribonval2014sparse}, the sample complexity in the KS-DL problem scales with the dimensions of the largest coordinate dictionary, as opposed to the dimensions of the larger KS dictionary.\n\n\\begin{table}\n\\caption{\\small Comparison of upper and lower bounds on the sample complexity of dictionary learning for vectorized DL and KS DL.}\n\\label{table:1}\n\\centering\n\\begin{tabular}{l|C{1.7cm}|c| N} \\cline{2-3}\n & Vectorized DL & KS DL & \\\\ [20pt] \\hline\n\t\\multicolumn{1}{|c|} {Minimax Lower Bound}\n\t&$\\dfrac{mp^2}{\\varepsilon^2}$~\\cite{jung2015minimax}\n\t&$\\dfrac{p\\sum_{k } m_k p_k}{\\varepsilon^2 }$~\\cite{shakeri2016arxiv}\n\t& \\\\ [20pt]\\hline\n\t\\multicolumn{1}{|c|} {Achievability Bound} & $\\dfrac{mp^3}{\\varepsilon^2}$~\\cite{gribonval2014sparse}\n\t& $\\max\\limits_k \\dfrac{m_kp_k^3}{\\varepsilon_k^2} $ & \\\\ [20pt]\\hline\n\\end{tabular}\n\\end{table}\n\nWe also compare this sample complexity upper bound scaling to the sample complexity lower bound scaling in our previous work~\\cite[Corollary 1]{shakeri2016minimax}, where we obtained $N = \\Omega\\lrp{p\\sum_k m_kp_k\\varepsilon^{-2}\/K}$ as a \\emph{necessary condition for recovery of KS dictionaries}.\\footnote{We have the following relation between $\\varepsilon$ and $\\varepsilon_k$'s:\n\t\\begin{align*}\n\t\\varepsilon \t\\leq\t\\sum_{k \\in [K]} \\bigg(\n\t\t\\prod_{\\substack{i \\in [K]\\\\i \\neq k}} \\norm{\\wt{\\D}_k}_F \\bigg)\n\t\t\\norm{\\mb{D}_k-\\mb{D}^0_k}_F\n\t\t\\leq \\sqrt{p} \\sum_{k \\in [K]} \\varepsilon_k.\n\t\\end{align*}\nAssuming all $\\varepsilon_k$'s are equal, this then implies $\\varepsilon_k^2 \\geq \\varepsilon^2\/(K^2 p)$.}\nIn terms of overall error $\\varepsilon$, our result translates into $N = \\max_k \\Omega\\lr{ 2^K K^2p(m_kp_k^3)\\varepsilon^{-2}}$ as a \\emph{sufficient} condition for recovery of coordinate 
dictionaries. The lower bound depends on the average dimension of the coordinate dictionaries, $\\sum_k m_kp_k\/K$, whereas we observe here a dependence on the dimensions of the coordinate dictionaries in terms of the maximum dimension, $\\max_k m_kp_k$. We also observe an increase of order $\\max_k p_k^2$ in the sample complexity upper bound scaling. This gap suggests that tighter lower and\/or upper bounds may be obtainable. A summary of these results is provided in Table~\\ref{table:1} for a fixed $K$.\n\n\\subsection{Proof Outline}\n\nWe follow an approach similar to the one used in~\\cite[Theorem 2]{gribonval2014sparse} for vectorized data.\nWe show that, with high probability,\n\t\\begin{align}\n\t\\Delta F_\\mb{Y} (\\eps_{1:K}) \\triangleq \\inf_{\\mb{D}_k \\in \\mc{S}_{\\varepsilon_k}(\\mb{D}_k^0)} \\Delta F_\\mb{Y} \\lrp{ \\D_{1:K};\\D^0_{1:K}}\n\t\\end{align}\nconverges uniformly to its expectation,\n\t\\begin{align}\n\t\\Delta f_{\\mbb{P}}(\\eps_{1:K})\\triangleq \\inf_{\\mb{D}_k \\in \\mc{S}_{\\varepsilon_k}(\\mb{D}^0_k)}\n\t\t\\Delta f_{\\mbb{P}} \\lrp{ \\D_{1:K}; \\D^0_{1:K} }.\n\t\\end{align}\nIn other words, with high probability,\t\n\t\\begin{align}\n\t\\lra{\\Delta F_\\mb{Y} (\\eps_{1:K}) - \\Delta f_{\\mbb{P}}(\\eps_{1:K}) } \\leq \\eta_N,\n\t\\end{align}\nwhere $\\eta_N$ is a parameter that depends on the probability and other problem parameters.\nThis implies $\\Delta F_\\mb{Y} (\\eps_{1:K}) \\geq \\Delta f_{\\mbb{P}}(\\eps_{1:K}) - 2\\eta_N $.\nIn Theorem~\\ref{thm:asymp}, we obtained conditions that ensure $\\Delta f_{\\mbb{P}}(\\eps_{1:K})> 0$. 
Thus, if $2\\eta_N < \\Delta f_{\\mbb{P}}(\\eps_{1:K})$ is satisfied, this implies $\\Delta F_\\mb{Y} (\\eps_{1:K})> 0$, and we can use arguments similar to the proof of Theorem~\\ref{thm:asymp} to show that\n$\\D_{1:K} \\mapsto F_\\mb{Y}\\lrp{\\D_{1:K}}$ admits a local minimum $\\widehat{\\mb{D}}=\\bigotimes \\widehat{\\mb{D}}_k$, such that $\\widehat{\\mb{D}}_k \\in \\mathcal{B}_{\\varepsilon_k}(\\mb{D}^0_k)$, for $k \\in [K]$.\n\nIn Theorem~\\ref{thm:asymp}, we showed that under certain conditions, $\\Delta f_{\\mbb{P}}(\\D_{1:K};\\D^0_{1:K}) = \\Delta \\phi_\\mbb{P}\\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}}$.\nTo find $\\eta_N$, we uniformly bound deviations of $\\D_{1:K} \\mapsto \\Delta \\phi_\\mb{y}\\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}}$ from its expectation on $\\lr{ \\mc{S}_{\\varepsilon_k}(\\mb{D}^0_k)}_{k=1}^K$.\nOur analysis is based on the \\textit{coordinate-wise Lipschitz continuity} property of $\\Delta \\phi_\\mb{y}\\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}}$ with respect to coordinate dictionaries. Then, to ensure $ 2\\eta_N < \\Delta \\phi_\\mbb{P}\\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}}$, we show that $2\\eta_N$ is less than the right-hand side of~\\eqref{eq:delt_p_LB} and obtain conditions on the sufficient number of samples based on each coordinate dictionary dimension and recovery error.\n\nThe proof of Theorem~\\ref{thm:finite_n} relies on the following definition and lemmas. 
The proofs of these are provided in Appendix B.\n\n\\begin{definition}[Coordinate-wise Lipschitz continuity]\nA function $f: \\mc{D}_1 \\times \\dots \\times \\mc{D}_K \\rightarrow \\mbb{R}$ is coordinate-wise Lipschitz continuous with constants $\\lr{L_k \\geq 0}_{k=1}^K$ if, for all $\\lr{\\mb{D}_k,\\mb{D}'_k \\in \\mc{D}_k}_{k=1}^K$:\n\t\\begin{align}\n\t\\lra{ f\\lrp{\\D_{1:K}} - f\\lrp{\\D'_{1:K}} }\n\t\t\\leq \\sum_{k \\in [K]} L_k \\norm{\\mb{D}_k - \\mb{D}'_k}_F.\n\t\\end{align}\n\\end{definition}\n\n\\begin{lemma}[Rademacher averages~\\cite{gribonval2014sparse}]\\label{lem:rad}\nLet $\\mathcal{F}$ be a set of measurable functions on a measurable set $\\mc{X}$, and let $X_1,\\dots,X_N \\in \\mc{X}$ be $N$ i.i.d. random variables. Fix any $\\xi \\in (0,\\infty)$. Assume all functions are bounded by $B$, i.e., $|f(X)|\\leq B$ almost surely. Then, with probability at least $1-e^{-\\xi}$:\n\t\\begin{align} \\label{eq:rad_gau}\n\t& \\sup_{f \\in \\mathcal{F}} \\bigg( \\frac{1}{N}\n\t\t\\sum_{n \\in [N] } f \\lrp{X_n}\n\t\t- \\mbb{E}_{X} \\lr{ f \\lrp{X}} \\bigg) \\nonumber \\\\\n\t&\\quad\n\t\\leq 2\\sqrt{\\frac{\\pi}{2}} \\mbb{E}_{X,\\beta_{1:N}}\n\t\t\\bigg\\{ \\sup_{f \\in \\mathcal{F}}\n\t\t\\bigg( \\frac{1}{N}\n\t\t\\sum_{n \\in [N] } \\beta_n f \\lrp{ X_n} \\bigg) \\bigg\\}\n\t\t+ B\\sqrt{\\frac{2\\xi}{N}},\n\t\\end{align}\nwhere $\\beta_{1:N}$'s are independent standard Gaussian random variables.\n\\end{lemma}\n\n\n\\begin{lemma} \\label{lemma:delt_m_T_dev}\nLet $\\mathcal{H}$ be a set of real-valued functions on $\\mb{D}_k \\in \\overline{\\mathcal{B}}_{\\varepsilon_k}(\\mb{D}^0_k), k \\in [K]$, that are bounded by $B$ almost everywhere and are coordinate-wise Lipschitz continuous with constants $(L_1,\\dots,L_K)$.\nLet $h_1,h_2,\\dots,h_N$ be independent realizations from $\\mathcal{H}$ with uniform Haar measure on $\\mathcal{H}$. 
Then, fixing $\\xi \\in (0,\\infty)$, we have with probability greater than $1-e^{-\\xi}$ that:\n\t\\begin{align} \\label{eq:dev}\n\t&\\sup_{\\substack{ \\mb{D}_k \\in \\overline{\\mathcal{B}}_{\\varepsilon_k}(\\mb{D}^0_k) \\\\ k \\in [K]}}\n\t\t \\bigg| \\frac{1}{N} \\sum_{n \\in [N]}\n\t\t h_n(\\D_{1:K}) - \\mbb{E} \\lr{ h(\\D_{1:K})} \\bigg| \\nonumber\\\\\n\t&\\qquad \\quad \\leq 4\\sqrt{\\frac{\\pi}{2N}} \\bigg(\\sum_{k \\in [K]} L_k\\varepsilon_k \\sqrt{Km_kp_k} \\bigg)\n\t\t+ B \\sqrt{\\frac{2\\xi}{N}}.\n\t\\end{align}\n\\end{lemma}\n\n\\begin{lemma}[Lemma 5~\\cite{gribonval2014sparse}]\n\\label{lem:H_Ps}\nFor any $\\delta_k<1$, $\\mb{D}_k,\\mb{D}_k'$ such that $\\max(\\delta_{s_k}(\\mb{D}_k),\\delta_{s_k}(\\mb{D}'_k))\\leq \\delta_k$, and $\\mc{J}_k \\subset [p_k], |\\mc{J}_k|=s_k$, we have\n\t\\begin{align} \\label{eq:PH_PHp}\n\t&\\|\\mb{I} - \\D_{k,\\cJ_k}^+ \\mb{D}'_{k,\\mc{J}_k}\\|_2 \\leq (1-\\delta_k)^{-1\/2} \\|\\D_k-\\D'_k\\|_F, \\nonumber \\\\\n\t&\\|\\bH_{\\D_{k,\\cJ_k}}-\\bH_{\\D'_{k,\\cJ_k}} \\|_2 \\leq 2(1-\\delta_k)^{-3\/2} \\|\\D_k-\\D'_k\\|_F, \\nonumber \\\\\n\t&\\|\\D_{k,\\cJ_k}^+ - {\\D'}_{k,\\cJ_k}^+ \\|_2\\leq 2(1-\\delta_k)^{-1} \\|\\D_k-\\D'_k\\|_F,\\text{and}\\nonumber \\\\\n\t&\\|\\bP_{\\D_{k,\\cJ_k}} -\\bP_{\\D'_{k,\\cJ_k}} \\|_2 \\leq 2(1-\\delta_k)^{-1\/2} \\|\\D_k-\\D'_k\\|_F.\n\t\\end{align}\n\\end{lemma}\n\n\\begin{lemma} \\label{lem:phi_m_T1_lip}\nConsider $\\mb{D}^0_k \\in \\mc{D}_k$ and $\\varepsilon_k$'s such that $\\varepsilon_k < \\sqrt{1-\\delta_{s_k}(\\mb{D}_k^0)}$, for $k \\in [K]$ and define\n $\\sqrt{1-\\delta_k} \\triangleq \\sqrt{1-\\delta_{s_k}(\\mb{D}_k^0)} - \\varepsilon_k>0$. 
The function $\\Delta \\phi_\\mb{y}\\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}}$ is almost surely coordinate-wise Lipschitz continuous on $\\lr{ \\mathcal{B}_{\\varepsilon_k}(\\mb{D}^0_k)}_{k=1}^K$ with Lipschitz constants\n \t\\begin{align} \\label{eq:lipsch_const_h}\n \tL_k \\triangleq (1-\\delta_k)^{-1\/2}\n\t\t \\bigg(& M_x \\bigg( \\prod_{i \\in [K]} \\sqrt{1+\\delta_{s_i}(\\mb{D}^0_i)}\\bigg)\n\t\t+M_w \\nonumber\\\\\n\t&+ \\lambda\\sqrt{s}\n\t\t \\prod_{i \\in [K] } (1-\\delta_i)^{-1\/2} \\bigg)^2 ,\n \t\\end{align}\nand $\\lra{\\Delta \\phi_\\mb{y}\\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}}}$ is almost surely bounded on $\\lr{ \\mathcal{B}_{\\varepsilon_k}(\\mb{D}^0_k)}_{k=1}^K$ by $\\sum_{k \\in [K]} L_k\\varepsilon_k$.\n\\end{lemma}\n\n\n\n\\begin{IEEEproof}[Proof of Theorem~\\ref{thm:finite_n}]\nFrom Lemmas~\\ref{lemma:delt_m_T_dev} and \\ref{lem:phi_m_T1_lip}, we have that with probability at least $1-e^{-\\xi}$:\n\t\\begin{align} \\label{eq:delt_finite_UB}\n\t& \\sup_{\\substack{\\mb{D}_k \\in \\overline{\\mathcal{B}}_{\\varepsilon_k}(\\mb{D}^0_k)\\\\ k \\in [K]}}\n\t\t\\big| \\Delta \\phi_\\mb{y}\\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}} - \\Delta \\phi_\\mbb{P} \\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}} \\big| \\nonumber\\\\\n\t&\\qquad \\quad \\quad \\leq \\sqrt{\\frac{2}{N}}\\sum_{k \\in [K]} L_k\\varepsilon_k \\lrp{ 2\\sqrt{\\pi m_kp_k}\n\t\t+ \\sqrt{\\xi}},\n\t\\end{align}\nwhere $L_k$ is defined in \\eqref{eq:lipsch_const_h}.\nFrom \\eqref{eq:delt_finite_UB}, we obtain $ \\Delta \\phi_\\mb{y}\\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}} >\\Delta \\phi_\\mbb{P} \\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}} - 2\\eta_N$, where $\\eta_N = \\sqrt{\\frac{2}{N}}\\sum_{k \\in [K]} L_k\\varepsilon_k \\lrp{ 2\\sqrt{\\pi m_kp_k}\n\t\t+ \\sqrt{\\xi}}$. 
In Theorem~\\ref{thm:asymp}, we derived conditions that ensure $\\Delta f_\\mb{y} (\\D_{1:K};\\D^0_{1:K}) = \\Delta \\phi_\\mb{y}\\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}} $ and $\\Delta f_{\\mbb{P}}(\\D_{1:K};\\D^0_{1:K})=\\Delta \\phi_\\mbb{P} \\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}}$. Therefore, given that the conditions in Theorem~\\ref{thm:asymp} are satisfied, $\\Delta F_\\mb{Y} (\\eps_{1:K}) > \\Delta f_{\\mbb{P}}(\\eps_{1:K}) - 2\\eta_N $, and the existence of a local minimum of $F_\\mb{Y}(\\D_{1:K})$ within radii $\\varepsilon_k$ around $\\mb{D}_k^0$, $k \\in [K]$, is guaranteed with probability at least $1-e^{-\\xi}$ as soon as $2\\eta_N < \\Delta f_\\mbb{P} (\\eps_{1:K}) $. According to \\eqref{eq:delt_p_LB}, $\t\\Delta \\phi_\\mbb{P} \\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}}\n\\geq \\dfrac{s\\mbb{E}\\{x^2\\}}{8} \\sum_{k \\in [K]} \\dfrac{\\varepsilon_k}{p_k}\n\\lrp{ \\varepsilon_k - \\varepsilon_{k,\\min}(\\bar{\\lambda})}$; therefore, it is sufficient to have for all $k \\in [K]$:\n\t\\begin{align*}\n\t\\sqrt{\\frac{8}{N}} L_k\\varepsilon_k \\lrp{ 2\\sqrt{\\pi m_kp_k}\n\t\t+ \\sqrt{\\xi}}\n\t < \\frac{s\\mbb{E}\\{x^2\\}\\varepsilon_k\\lrp{\\varepsilon_k - \\varepsilon_{k,\\min}(\\bar{\\lambda})} }{8p_k},\n\t\\end{align*}\nwhich translates into $N \\geq \\max_{k \\in [K]} N_k$, where\n\t\\begin{align} \\label{eq:N_k}\n\t&N_k= \\lrp{ 2\\sqrt{\\pi m_kp_k} + \\sqrt{\\xi}}^2\n\t\\lrp{ \\frac{2^{4.5}L_k p_k }{s\\mbb{E} \\{x^2\\} (\\varepsilon_k - \\varepsilon_{k,\\min}(\\bar{\\lambda}))}}^2 .\n\t\\end{align}\t\t\nFurthermore, we can upper bound $L_k$ by\n\t\\begin{align} \\label{eq:Rk_UB}\n\tL_k &\\numrel{\\leq}{r_Rk} \\sqrt{2}\\bigg(1.25^{K\/2} M_x + M_w + 2^{K\/2} \\lambda \\sqrt{s} \\bigg)^2 \\nonumber \\\\\n\t&\\numrel{\\leq}{r_lmb_Mx} \\sqrt{2}c_1 \\bigg(\\big(1.25^{K} + 2^{K} \\bar{\\lambda}^2 \\big) M_x^2 + M_w^2 \\bigg),\n\t\\end{align}\nwhere $c_1$ is some positive constant, \\eqref{r_Rk} follows from the fact that given 
the assumption in~\\eqref{eq:cond_delt_s_prop}, the assumptions in Lemma~\\ref{lem:phi_m_T1_lip} are satisfied with $\\sqrt{1-\\delta_k}\\geq \\sqrt{1\/2}$ for any $\\varepsilon_k\\leq 0.15$, and \\eqref{r_lmb_Mx} follows from the following inequality:\n\t\\begin{align*}\n\t\\lambda\n\t\t= \\bar{\\lambda} \\mbb{E}\\lr{ |x| }\n\t\t= \\dfrac{1}{s}\\bar{\\lambda} \\mbb{E}\\lr{ \\norm{\\mb{x}}_1 }\n\t\t\\leq \\dfrac{1}{\\sqrt{s}} \\bar{\\lambda} \\mbb{E}\\lr{ \\norm{\\mb{x}}_2 }\n\t\t\\leq\\dfrac{1}{\\sqrt{s}} \\bar{\\lambda} M_x.\n\t\\end{align*}\nSubstituting \\eqref{eq:Rk_UB} in \\eqref{eq:N_k} and using $\\lrp{ \\sqrt{\\xi} + 2\\sqrt{\\pi m_kp_k} }^2 \\leq c_2 (\\xi + m_kp_k)$ for some positive constant $c_2$, we get\n\t\\begin{align*} %\n\t&N_k =\n\t\t\\Omega \\bigg(\n\t\tp_k^2 (m_kp_k+\\xi)\n\t\t\\bigg( \\frac{2^{K}(1 + \\bar{\\lambda}^2) M_x^2 + M_w^2}\n\t\t{s^2\\mbb{E}\\{x^2\\}^2(\\varepsilon_k - \\varepsilon_{k,\\min}(\\bar{\\lambda}))^2} \\bigg) \\bigg)\n\t\t\\nonumber \\\\\n\t&=\n\t\t\\Omega \\bigg(\n\t\t\\frac{p_k^2 (m_kp_k+\\xi) } {(\\varepsilon_k - \\varepsilon_{k,\\min}(\\bar{\\lambda}))^2}\n\t\t\\bigg( \\frac{2^{K}(1 + \\bar{\\lambda}^2) M_x^2}\n\t\t{s^2\\mbb{E}\\{x^2\\}^2}\n\t\t+ \\frac{M_w^2}{s^2\\mbb{E}\\{x^2\\}^2} \\bigg) \\bigg),\n\t\\end{align*}\nand the theorem follows by taking $N \\geq \\max_{k \\in [K]} N_k $.\n\\end{IEEEproof}\n\n\\begin{remark}\nTo bound deviations of $\\Delta \\phi_\\mb{y}\\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}}$ from its mean,\nwe can also use the bound provided in~\\cite[Theorem 1]{gribonval2015sample}, which proves uniform convergence results using covering number arguments for various classes of dictionaries. In this case, we get $\\eta_N \\leq c\\sqrt{\\dfrac{\\lrp{\\sum_k m_kp_k + \\xi}\\log N}{N}}$ for some constant $c$, where an extra $\\sqrt{\\log N}$ term appears compared to \\eqref{eq:dev}. 
Therefore, Lemma~\\ref{lemma:delt_m_T_dev} provides a tighter upper bound.\n\\end{remark}\n\n\n\n\n\\section{Conclusion} \\label{sec:discuss}\nIn this paper, we focused on local recovery of coordinate dictionaries comprising a Kronecker-structured dictionary used to represent $K$th-order tensor data. We derived a sample complexity upper bound for coordinate dictionary identification up to specified errors by expanding the objective function with respect to individual coordinate dictionaries and using the coordinate-wise Lipschitz continuity property of the objective function. This analysis is local in the sense that it only guarantees the existence of a local minimum of the KS-DL objective function within some neighborhood of the true coordinate dictionaries. Global analysis of the KS-DL problem is left for future work.\nOur results hold for dictionary coefficients generated according to the separable sparsity model. This model has some limitations compared to the random sparsity model; we likewise leave analysis under the random sparsity model for future work. 
Another future direction of possible interest includes providing practical KS-DL algorithms that achieve the sample complexity scaling of Theorem~\\ref{thm:finite_n}.\n\n\n\n\\section*{Appendix A}\n\\begin{IEEEproof}[Proof of Lemma~\\ref{lem:Dtld}]\nTo prove the existence of such a decomposition for any $K\\geq 2$, we use induction.\nFor $K=2$, we have\n\t\\begin{align} \\label{eq:k2}\n\t\\lrp{\\mb{D}_1\\otimes \\mb{D}_2} &- \\lrp{\\mb{D}_1^0\\otimes \\mb{D}_2^0} \\nonumber\\\\\n\t&= \\lrp{\\mb{D}_1 - \\mb{D}_1^0} \\otimes \\mb{D}^0_2 + \\mb{D}_1 \\otimes \\lrp{\\mb{D}_2 - \\mb{D}_2^0} \\nonumber \\\\\n\t&= \\lrp{\\mb{D}_1 - \\mb{D}_1^0} \\otimes \\mb{D}_2 + \\mb{D}^0_1 \\otimes \\lrp{\\mb{D}_2 - \\mb{D}_2^0} .\n\t\\end{align}\nNext, assume the following holds for some $K \\geq 2$:\n\t\\begin{align} \\label{eq:kK}\n\t&\\bigotimes_{k \\in [K]} \\mb{D}_k - \\bigotimes_{k \\in [K]} \\mb{D}_k^0 \\nonumber\\\\\n\t&\\qquad = \\sum _{k \\in [K]} \\widetilde{\\mb{D}}_{k,1} \\otimes \\dots \\otimes\n\t\t\\lrp{\\mb{D}_k - \\mb{D}^0_k}\t\\otimes \\dots \\otimes \\widetilde{\\mb{D}}_{k,K}.\n\t\\end{align}\nThen, for $K+1$, we have:\n\t\\begin{align} \\label{eq:kKp1}\n\t&\\bigotimes_{k \\in [K+1]} \\mb{D}_k - \\bigotimes_{k \\in [K+1]} \\mb{D}_k^0 \\nonumber\\\\\n\t&= \\bigg( \\bigotimes_{k \\in [K]} \\mb{D}_k \\bigg) \\otimes \\mb{D}_{K+1}\n\t\t- \\bigg( \\bigotimes_{k \\in [K]} \\mb{D}_k^0 \\bigg) \\otimes \\mb{D}_{K+1}^0 \\nonumber\\\\\n\t&\\numrel{=}{r_k2} \\bigg( \\bigotimes_{k \\in [K]} \\mb{D}_k -\\bigotimes_{k \\in [K]} \\mb{D}_k^0 \\bigg)\n\t\t\\otimes \\mb{D}_{K+1}^0 \\nonumber\\\\\n\t&+ \\bigg( \\bigotimes_{k \\in [K]} \\mb{D}_k \\bigg) \\otimes \\lrp{\\mb{D}_{K+1} - \\mb{D}_{K+1}^0} \\nonumber\\\\\n\t&\\numrel{=}{r_kK} \\bigg( \\sum _{k \\in [K]} \\widetilde{\\mb{D}}_{k,1} \\otimes \\dots \\otimes \\lrp{\\mb{D}_k - \\mb{D}^0_k}\t\\otimes \\dots \\otimes \\widetilde{\\mb{D}}_{k,K} \\bigg)\n\t\t \\nonumber \\\\\n\t&\\qquad \\qquad \\otimes \\mb{D}_{K+1}^0 + \\bigg( \\bigotimes_{k 
\\in [K]} \\mb{D}_k \\bigg) \\otimes \\lrp{\\mb{D}_{K+1} - \\mb{D}_{K+1}^0} \\nonumber\\\\\n\t&\\numrel{=}{r_allcases}\n\t \t\\sum _{k \\in [K+1]} \\widetilde{\\mb{D}}_{k,1} \\otimes \\dots \\otimes \\lrp{\\mb{D}_k - \\mb{D}^0_k}\t\\otimes \\dots \\otimes \\wt{\\D}_{k,K+1} ,\n\t\\end{align}\nwhere \\eqref{r_k2} follows from \\eqref{eq:k2}, \\eqref{r_kK} follows from \\eqref{eq:kK}, and \\eqref{r_allcases} follows from replacing $\\mb{D}^0_{K+1}$ with $\\wt{\\D}_{k,K+1}$ in the first $K$ terms of the summation and $\\mb{D}_k$'s with $\\wt{\\D}_{K+1,k}$, for $k \\in[K]$, in the $(K+1)$th term of the summation.\n\\end{IEEEproof}\n\n\\begin{IEEEproof}[Proof of Lemma~\\ref{lemma:f_closedform}]\nUsing the same definition as Gribonval et al.~\\cite[Definition 1]{gribonval2014sparse}, taking the derivative of $\\phi_\\mb{y}\\lrp{\\D_{1:K}|\\boldsymbol{\\sigma}}$ with respect to $\\mb{x}$ and setting it to zero, we get the expression in~\\eqref{eq:xhat_closedform} for $\\widehat{\\mb{x}}$. Substituting $\\widehat{\\mb{x}}$ in~\\eqref{eq:delt_inf}, we get\n \t\\begin{align*}\n\t&\\phi_\\mb{y}\\lrp{\\D_{1:K}|\\boldsymbol{\\sigma}} = \\frac{1}{2}\n\t\t\\bigg[\n\t\t\\|\\mb{y}\\|_2^2 - \\lrp{ \\lrp{\\bigotimes \\mb{D}_{k,\\mc{J}_k}^\\top} \\mb{y} -\\lambda \\boldsymbol{\\sigma}_\\mc{J} }^\\top \\nonumber\\\\\n\t&\\qquad \\lrp{ \\bigotimes (\\mb{D}_{k,\\mc{J}_k}^\\top \\mb{D}_{k,\\mc{J}_k})^{-1} }\n\t\t\\lrp{ \\lrp{\\bigotimes \\mb{D}_{k,\\mc{J}_k}^\\top} \\mb{y} -\\lambda \\boldsymbol{\\sigma}_\\mc{J} }\n\t\t\\bigg]\\nonumber\\\\\n\t&\\qquad \\qquad \\quad \\numrel{=}{r_dP} \\frac{1}{2}\\|\\mb{y}\\|_2^2\n\t\t- \\frac{1}{2} \\mb{y}^\\top \\lrp{\\bigotimes \\bP_{\\D_{k,\\cJ_k}}} \\mb{y} \\nonumber\\\\\n\t&\\qquad + \\lambda \\boldsymbol{\\sigma}_\\mc{J}^\\top \\lrp{\\bigotimes \\D_{k,\\cJ_k}^+}\\mb{y}\n\t\t-\\frac{\\lambda^2}{2}\\boldsymbol{\\sigma}_\\mc{J}^\\top \\lrp{ \\bigotimes \\bH_{\\D_{k,\\cJ_k}}} \\boldsymbol{\\sigma}_\\mc{J},\n\t\\end{align*}\nwhere \\eqref{r_dP} follows from 
\\eqref{eq:P_H_Ps}.\n\\end{IEEEproof}\n\n\n\\begin{IEEEproof}[Proof of Lemma~\\ref{lem:exp_phi}]\nWe use the expression for $\\phi_\\mb{y}\\lrp{\\D_{1:K}|\\boldsymbol{\\sigma}}$ from~\\eqref{eq:delt_closedform}. For any $\\mb{D}=\\bigotimes \\mb{D}_k,\\mb{D}'=\\bigotimes \\mb{D}'_k$, $\\mb{D}_k,\\mb{D}_k' \\in \\mc{D}_k$, we have\n\t\\begin{align} \\label{eq:delt_ph_1}\n\t&\\Delta \\phi_\\mb{y}\\lrp{ \\D_{1:K};\\D'_{1:K} | \\boldsymbol{\\sigma}}\n\t= \\phi_\\mb{y}\\lrp{\\D_{1:K}|\\boldsymbol{\\sigma}} - \\phi_\\mb{y}\\lrp{\\D'_{1:K}|\\boldsymbol{\\sigma}} \\nonumber \\\\\n\t&\\quad= \t\\frac{1}{2} \\mb{y}^\\top \\lrp{ \\bigotimes \\bP_{\\D'_{k,\\cJ_k}} - \\bigotimes \\bP_{\\D_{k,\\cJ_k}} } \\mb{y} \\nonumber\\\\\n\t&\\qquad- \\lambda \\boldsymbol{\\sigma}_\\mc{J}^\\top \\lrp{ \\bigotimes {\\D'}_{k,\\cJ_k}^+ - \\bigotimes \\D_{k,\\cJ_k}^+ } \\mb{y} \\nonumber \\\\\n\t&\\qquad + \\frac{\\lambda^2}{2}\\boldsymbol{\\sigma}_\\mc{J}^\\top \\lrp{ \\bigotimes \\bH_{\\D'_{k,\\cJ_k}} - \\bigotimes \\bH_{\\D_{k,\\cJ_k}} } \\boldsymbol{\\sigma}_\\mc{J}.\n\t\\end{align}\nWe substitute $\\mb{y} = \\lrp{\\bigotimes \\mb{D}^0_k} \\mb{x} +\\mb{w} = \\lrp{\\bigotimes \\mb{D}^0_{k,\\mc{J}_k} }\\mb{x}_{\\mc{J}} +\\mb{w}$\nand break up the sum in \\eqref{eq:delt_ph_1} into 6 terms:\n\t\\begin{align} \\label{eq:delt_t}\n\t\\Delta \\phi_\\mb{y}\\lrp{ \\D_{1:K};\\D'_{1:K} | \\boldsymbol{\\sigma}} = \\sum_{i\\in[6]} \\Delta \\phi_i \\lrp{ \\D_{1:K};\\D'_{1:K} | \\boldsymbol{\\sigma}} ,\n\t\\end{align}\t\nwhere\n\t\\begin{align} \\label{eq:delt_phi}\n\t& \\Delta \\phi_1 \\lrp{ \\D_{1:K};\\D'_{1:K} | \\boldsymbol{\\sigma}}\n\t\t= \\frac{1}{2} {\\mb{x}}^\\top \\lrp{\\bigotimes \\mb{D}^0_k }^\\top \\nonumber\\\\\n\t&\\qquad\t\\lrp{ \\bigotimes \\bP_{\\D'_{k,\\cJ_k}} - \\bigotimes \\bP_{\\D_{k,\\cJ_k}} }\n\t\t\\lrp{\\bigotimes \\mb{D}^0_k } \\mb{x} \\nonumber\\\\\n\t&\\numrel{=}{r_Dtild} \\frac{1}{2} {\\mb{x}}^\\top \\lrp{\\bigotimes \\mb{D}^0_k }^\\top\n\t\t\\bigg( \\sum_{k \\in [K]} 
\\mb{P}_{\\widetilde{\\mb{D}}_{1,\\mc{J}_1}} \\otimes \\dots \\otimes\n\t\t\\nonumber\\\\\t\t\n\t&\\qquad \\lrp{\\bP_{\\D'_{k,\\cJ_k}} - \\bP_{\\D_{k,\\cJ_k}}}\t\\otimes \\dots \\otimes \\mb{P}_{\\widetilde{\\mb{D}}_{K,\\mc{J}_K}}\\bigg)\n\t\t\\lrp{\\bigotimes \\mb{D}^0_k } \\mb{x} \\nonumber \\\\\n\t&= \\frac{1}{2} {\\mb{x}}^\\top\n\t\t\\bigg( \\sum_{k \\in [K]} \\lrp{{\\mb{D}^0_1}^\\top \\mb{P}_{\\widetilde{\\mb{D}}_{1,\\mc{J}_1}}\\mb{D}^0_1} \\otimes \\dots \\otimes \\nonumber\\\\\n\t&\\qquad \\lrp{{\\mb{D}^0_k}^\\top(\\bP_{\\D'_{k,\\cJ_k}} - \\bP_{\\D_{k,\\cJ_k}})\\mb{D}^0_k}\t\\otimes \\dots \\otimes \\nonumber \\\\\n\t&\\qquad\n\t\t\\lrp{{\\mb{D}^0_K}^\\top \\mb{P}_{\\widetilde{\\mb{D}}_{K,\\mc{J}_K}}\\mb{D}^0_K} \\bigg)\n\t\t\\mb{x} , \\nonumber \\\\\n\t&\\Delta \\phi_2 \\lrp{ \\D_{1:K};\\D'_{1:K} | \\boldsymbol{\\sigma}}\n\t%\n\t%\n\t%\n\t%\n\t= \\mb{w}^\\top\n\t\t\\bigg( \\sum_{k \\in [K]} \\lrp{\\mb{P}_{\\widetilde{\\mb{D}}_{1,\\mc{J}_1}}\\mb{D}^0_1}\n\t\t\\otimes \\dots \\otimes \\nonumber\\\\\n\t&\\qquad\n\t\t\\lrp{(\\bP_{\\D'_{k,\\cJ_k}} - \\bP_{\\D_{k,\\cJ_k}}) \\mb{D}^0_k}\t\t\t\n\t\t\\otimes \\dots \\otimes\n\t\t\\lrp{ \\mb{P}_{\\widetilde{\\mb{D}}_{K,\\mc{J}_K}} \\mb{D}^0_K}\\bigg)\n\t\t\\mb{x}, \\nonumber \\\\\n\t&\\Delta \\phi_3 \\lrp{ \\D_{1:K};\\D'_{1:K} | \\boldsymbol{\\sigma}}\n\t%\n\t%\n\t%\n\t\t= \\frac{1}{2} \\mb{w}^\\top\n\t\t\\bigg( \\sum_{k \\in [K]} \\mb{P}_{\\widetilde{\\mb{D}}_{1,\\mc{J}_1}} \\otimes \\dots \\otimes \\nonumber\\\\\n\t&\\qquad \\lrp{\\bP_{\\D'_{k,\\cJ_k}} - \\bP_{\\D_{k,\\cJ_k}}}\n\t\t\\otimes \\dots \\otimes \\mb{P}_{\\widetilde{\\mb{D}}_{K,\\mc{J}_K}} \\bigg)\n\t\t\\mb{w}, \\nonumber \\\\\n\t&\\Delta \\phi_4 \\lrp{ \\D_{1:K};\\D'_{1:K} | \\boldsymbol{\\sigma}}\n\t%\n\t%\n\t%\n\t\t= - \\lambda \\boldsymbol{\\sigma}_\\mc{J}^\\top\n\t\t\\bigg( \\sum_{k \\in [K]} \\lrp{ {\\widetilde{\\mb{D}}_{1,\\mc{J}_1}}^+\\mb{D}^0_1} \\otimes \\dots \\otimes \t\\nonumber\\\\\n\t&\\qquad\n\t\t\\lrp{({\\D'}_{k,\\cJ_k}^+ - 
\\D_{k,\\cJ_k}^+ ) \\mb{D}^0_k} \\otimes \\dots \\otimes\n\t\t\\lrp{ {\\widetilde{\\mb{D}}_{K,\\mc{J}_K}}^+ \\mb{D}^0_K} \\bigg)\n\t\t\\mb{x}, \\nonumber\\\\\n\t&\\Delta \\phi_5 \\lrp{ \\D_{1:K};\\D'_{1:K} | \\boldsymbol{\\sigma}}\n\t%\n\t%\n\t\t= - \\lambda \\boldsymbol{\\sigma}_\\mc{J}^\\top\n\t\t\\bigg( \\sum_{k \\in [K]} {\\widetilde{\\mb{D}}_{1,\\mc{J}_1}}^+ \\otimes \\dots \\otimes \\nonumber\\\\\n\t&\\qquad \\lrp{{\\D'}_{k,\\cJ_k}^+ - \\D_{k,\\cJ_k}^+ }\t\\otimes \\dots \\otimes\n\t\t {\\widetilde{\\mb{D}}_{K,\\mc{J}_K}}^+ \\bigg) \\mb{w}, \\text{and}\\nonumber \\\\\n\t&\\Delta \\phi_6 \\lrp{ \\D_{1:K};\\D'_{1:K} | \\boldsymbol{\\sigma}}\n\t%\n\t%\n\t%\n\t\t= \\frac{\\lambda^2}{2}\\boldsymbol{\\sigma}_\\mc{J}^\\top\n\t \t\\bigg( \\sum_{k \\in [K]} \\mb{H}_{\\widetilde{\\mb{D}}_{1,\\mc{J}_1}} \\otimes \\dots \\otimes \\nonumber\\\\\n\t &\\qquad \\lrp{\\bH_{\\D'_{k,\\cJ_k}} - \\bH_{\\D_{k,\\cJ_k}} }\n\t \t\\otimes \\dots \\otimes\n\t\t\\mb{H}_{\\widetilde{\\mb{D}}_{K,\\mc{J}_K}} \\bigg) \\boldsymbol{\\sigma}_\\mc{J},\n\t\\end{align}\nwhere \\eqref{r_Dtild} follows from Lemma~\\ref{lem:Dtld}; the derivations of $\\lr{\\Delta \\phi_i\\lrp{ \\D_{1:K};\\D'_{1:K} | \\boldsymbol{\\sigma}} }_{i=2}^6 $ are omitted due to space constraints.\nNow, we set $\\mb{D}' = \\mb{D}^0$ and take the expectation of $\\Delta \\phi_\\mb{y}\\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}}$ with respect to $\\mb{x}$ and $\\mb{w}$. 
Since the coefficient and noise vectors are uncorrelated,\n\t\\begin{align*}\n\t\\mbb{E} \\lr{ \\Delta \\phi_2 \\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}}}=\\mbb{E} \\lr{\\Delta \\phi_5 \\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} } = 0.\n\t\\end{align*}\nWe can restate the other terms as:\n\t\\begin{align} \\label{eq:delt_tr}\n\t &\\Delta \\phi_1 \\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} \\nonumber\\\\\n\t &\\numrel{=}{r_Imk} \\frac{1}{2}\\mathop{\\mathrm{Tr}}\\nolimits \\bigg[\\mb{x}_\\mc{J} {\\mb{x}}^\\top_\\mc{J}\n\t\t \\sum_{k \\in [K]} \\lrp{{\\mb{D}^0_1}^\\top \\mb{P}_{\\widetilde{\\mb{D}}_{1,\\mc{J}_1}}\\mb{D}^0_1} \\otimes \\dots \\otimes \\nonumber\\\\\n\t&\\lrp{{\\mb{D}^0_k}^\\top(\\mb{I}_{m_k} - \\bP_{\\D_{k,\\cJ_k}})\\mb{D}^0_k}\t\n\t\t\\otimes \\dots \\otimes\n\t\t\\lrp{{\\mb{D}^0_K}^\\top \\mb{P}_{\\widetilde{\\mb{D}}_{K,\\mc{J}_K}}\\mb{D}^0_K}\n\t\t\\bigg] ,\\nonumber \\\\\n\t& \\Delta \\phi_3 \\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} \\nonumber\\\\\n\t&= \\frac{1}{2} \\mathop{\\mathrm{Tr}}\\nolimits \\bigg[ \\mb{w} \\mb{w}^\\top\n\t\t\\bigg( \\sum_{k \\in [K]} \\mb{P}_{\\widetilde{\\mb{D}}_{1,\\mc{J}_1}} \\otimes \\dots \\otimes\n\t\t\\lrp{\\bP_{\\D^0_{k,\\cJ_k}} -\\bP_{\\D_{k,\\cJ_k}}} \\nonumber\\\\\n\t&\\qquad \\otimes \\dots \\otimes \\mb{P}_{\\widetilde{\\mb{D}}_{K,\\mc{J}_K}}\\bigg) \\bigg],\n\t\t\\nonumber\\\\\n\t &\\Delta \\phi_4 \\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} \\nonumber\\\\\n\t &\\numrel{=}{r_Isk} - \\lambda \\mathop{\\mathrm{Tr}}\\nolimits \\bigg[ \\mb{x}_\\mc{J} \\boldsymbol{\\sigma}_\\mc{J}^\\top\n\t\t\\bigg( \\sum_{k \\in [K]} \\lrp{ {\\widetilde{\\mb{D}}_{1,\\mc{J}_1}}^+\\mb{D}^0_1} \\otimes \\dots \\otimes \\nonumber\\\\\n\t&\\qquad \\lrp{\\mb{I}_{s_k} - \\D_{k,\\cJ_k}^+ \\mb{D}^0_k}\t\\otimes \\dots \\otimes\n\t\t\\lrp{ {\\widetilde{\\mb{D}}_{K,\\mc{J}_K}}^+ \\mb{D}^0_K}\\bigg) \\bigg] ,\\text{and}\\nonumber \\\\\n\t& \\Delta \\phi_6 \\lrp{ \\D_{1:K};\\D^0_{1:K} | 
\\boldsymbol{\\sigma}} \\nonumber\\\\\n\t&= \\frac{\\lambda^2}{2} \\mathop{\\mathrm{Tr}}\\nolimits \\bigg[ \\boldsymbol{\\sigma}_\\mc{J} \\boldsymbol{\\sigma}_\\mc{J}^\\top\n\t \t\\bigg( \\sum_{k \\in [K]} \\mb{H}_{\\widetilde{\\mb{D}}_{1,\\mc{J}_1}} \\otimes \\dots \\otimes \\nonumber\\\\\n\t&\\qquad \\lrp{\\mb{H}_{\\mb{D}^0_{k,\\mc{J}_k}} - \\bH_{\\D_{k,\\cJ_k}} }\t\\otimes \\dots \\otimes\n\t\t\\mb{H}_{\\widetilde{\\mb{D}}_{K,\\mc{J}_K}} \\bigg) \\bigg],\n\t\\end{align}\nwhere \\eqref{r_Imk} and \\eqref{r_Isk} follow from the facts that $\\mb{P}_{\\mb{D}^0_{k,\\mc{J}_k}}\\mb{D}_k^0 =\\mb{D}_k^0$ and ${\\mb{D}^0}^+_{k,\\mc{J}_k} \\mb{D}^0_k = \\mb{I}_{s_k}$, respectively. Taking the expectation of the terms in~\\eqref{eq:delt_tr}, we get\n\t\\begin{align} \\label{eq:jam_T}\n\t& \\mbb{E} \\lr{ \\Delta \\phi_1 \\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} } \\nonumber \\\\\n\t&\\numrel{=}{r_tr_eq} \\frac{\\mbb{E}\\{x^2\\}}{2}\\mbb{E}_\\mc{J}\n\t\t\\bigg\\{ \\sum_{k \\in [K]} \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ {\\mb{D}^0_1}^\\top \\mb{P}_{\\widetilde{\\mb{D}}_{1,\\mc{J}_1}}\\mb{D}^0_1} \\dots \\nonumber\\\\\n\t&\\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ {\\mb{D}^0_k}^\\top(\\mb{I}_{m_k} - \\bP_{\\D_{k,\\cJ_k}})\\mb{D}^0_k} \\dots\n\t\t\\mathop{\\mathrm{Tr}}\\nolimits \\lrb{{\\mb{D}^0_K}^\\top \\mb{P}_{\\widetilde{\\mb{D}}_{K,\\mc{J}_K}}\\mb{D}^0_K}\\bigg\\}\n\t\t\\nonumber \\\\\n\t&= \\frac{\\mbb{E}\\{x^2\\}}{2}\n\t\t\\sum_{k \\in [K]} \\mbb{E}_{\\mc{J}_1} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ {\\mb{D}^0_1}^\\top \\mb{P}_{\\widetilde{\\mb{D}}_{1,\\mc{J}_1}}\\mb{D}^0_1} } \\dots\\nonumber\\\\\n\t&\\qquad \\mbb{E}_{\\mc{J}_k} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ {\\mb{D}^0_k}^\\top(\\mb{I}_{m_k} - \\bP_{\\D_{k,\\cJ_k}})\\mb{D}^0_k}} \\dots \\nonumber \\\\\n\t& \\qquad\n\t\t\\mbb{E}_{\\mc{J}_K} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{{\\mb{D}^0_K}^\\top \\mb{P}_{\\widetilde{\\mb{D}}_{K,\\mc{J}_K}}\\mb{D}^0_K}}, \\nonumber \\\\\n\t& 
\\mbb{E} \\{ \\Delta \\phi_3 \\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} \\} \\nonumber \\\\\n\t&= \\frac{\\mbb{E}\\{w^2\\}}{2}\n\t\t\\mbb{E}_\\mc{J}\n\t\t\\bigg\\{ \\mathop{\\mathrm{Tr}}\\nolimits \\bigg[\\sum_{k \\in [K]} \\mb{P}_{\\widetilde{\\mb{D}}_{1,\\mc{J}_1}} \\otimes \\dots \\otimes \\nonumber\\\\\n\t&\\qquad \\lrp{\\bP_{\\D^0_{k,\\cJ_k}} -\\bP_{\\D_{k,\\cJ_k}}} \\otimes \\dots \\otimes \\mb{P}_{\\widetilde{\\mb{D}}_{K,\\mc{J}_K}} \\bigg] \t\n\t\t\\bigg\\}\n\t\t\\nonumber \\\\\n\t&= \\frac{\\mbb{E}\\{w^2\\}}{2}\n\t\t\\mbb{E}_\\mc{J}\n\t\t\\bigg\\{ \\sum_{k \\in [K]} \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ \\mb{P}_{\\widetilde{\\mb{D}}_{1,J_1}} } \\dots\n\t\t \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ \\bP_{\\D^0_{k,\\cJ_k}} -\\bP_{\\D_{k,\\cJ_k}}} \\nonumber\\\\\n\t&\\qquad \\dots \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ \\mb{P}_{\\widetilde{\\mb{D}}_{K,\\mc{J}_K}} } \\bigg\\} \\nonumber\\\\\n\t&\\numrel{=}{r_proj_tr} 0, \\nonumber \\\\\n\t& \\mbb{E} \\lr{ \\Delta \\phi_4 \\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} } \\nonumber \\\\\n\t& =- \\lambda \\mbb{E}\\{|x|\\}\n\t\t\\sum_{k \\in [K]} \\mbb{E}_{\\mc{J}_1} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ {\\widetilde{\\mb{D}}_{1,\\mc{J}_1}}^+\\mb{D}^0_1}} \\dots \\nonumber\\\\\n\t&\\quad \\mbb{E}_{\\mc{J}_k} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{\\mb{I}_{s_k} - \\D_{k,\\cJ_k}^+ \\mb{D}^0_k}} \\dots\n\t\t\\mbb{E}_{\\mc{J}_K} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ {\\widetilde{\\mb{D}}_{K,\\mc{J}_K}}^+ \\mb{D}^0_K}},\n\t \\nonumber \\\\\n\t& \\mbb{E} \\lr{ \\Delta \\phi_6 \\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} } \\nonumber \\\\\n\t&= \\frac{\\lambda^2}{2}\n\t\t\\sum_{k \\in [K]} \\mbb{E}_{\\mc{J}_1} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ \\mb{H}_{\\widetilde{\\mb{D}}_{1,\\mc{J}_1}}} }\\dots \\nonumber\\\\\n\t&\\quad \\mbb{E}_{\\mc{J}_k} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ \\mb{H}_{\\mb{D}^0_{k,\\mc{J}_k}} - \\bH_{\\D_{k,\\cJ_k}} }\t} 
\\dots\n\t\t\\mbb{E}_{\\mc{J}_K} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ \\mb{H}_{\\widetilde{\\mb{D}}_{K,\\mc{J}_K}}}} .\n\t\\end{align}\t\nwhere \\eqref{r_tr_eq} follows from the relation $\\mathop{\\mathrm{Tr}}\\nolimits(\\mb{A}\\otimes \\mb{B})=\\mathop{\\mathrm{Tr}}\\nolimits[\\mb{A}]\\mathop{\\mathrm{Tr}}\\nolimits[\\mb{B}]$~\\cite{horn2012matrix} and \\eqref{r_proj_tr} follows from the fact that $\\bP_{\\D_{k,\\cJ_k}}$'s are orthogonal projections onto subspaces of dimension $s_k$ and $ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{\\bP_{\\D^0_{k,\\cJ_k}} -\\bP_{\\D_{k,\\cJ_k}}}=s_k - s_k=0$.\nAdding the terms in \\eqref{eq:jam_T}, we obtain the expression in \\eqref{eq:delt_ph_2}.\n\\end{IEEEproof}\n\n\\begin{IEEEproof} [Proof of Lemma~\\ref{lem:UB_mu_delt_RIP}]\nEquation \\eqref{eq:l2_delt} follows from the definition of $\\mathsf{RIP}$ and \\eqref{eq:delt_mu} follows from Gerschgorin's disk theorem~\\cite{HornJohnson,horn2012matrix,golub2012matrix}.\n\\end{IEEEproof}\n\n\n\\begin{IEEEproof} [Proof of Lemma~\\ref{lem:A_B_delt}]\nTo lower bound $\\Delta \\phi_\\mbb{P}\\lrp{ \\D_{1:K}; \\D^0_{1:K} | \\boldsymbol{\\sigma}}$, we bound each term in~\\eqref{eq:delt_ph_2} separately. For the first term $\\mbb{E}\\lr{ \\Delta \\phi_1 \\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} }$, we have\n\t\\begin{align} \\label{eq:P_Dt_LB}\n\t&\\mbb{E}_{\\mc{J}_k} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ {\\mb{D}^0_k}^\\top \\mb{P}_{\\wt{\\D}_{k,\\mc{J}_k}}\\mb{D}^0_k} }\t\n\t%\n\t%\n\t\t= \\mbb{E}_{\\mc{J}_k} \\lr{ \\norm{\\mb{P}_{\\wt{\\D}_{k,\\mc{J}_k}}\\mb{D}^0_{k,\\mc{J}_k}}_F^2} .\n\t\\end{align}\nIf $\\wt{\\D}_k = \\mb{D}_k^0$, then\n\t\\begin{align}\n\t\\mbb{E}_{\\mc{J}_k} \\lr{ \\norm{ \\bP_{\\D^0_{k,\\cJ_k}} \\mb{D}^0_{k,\\mc{J}_k}}_F^2} &\n\t\t%\n\t\t\\numrel{=}{r_Esp} \\frac{s_k}{p_k} \\norm{\\mb{D}^0_k}_F^2 = s_k,\n\t\\end{align}\nwhere~\\eqref{r_Esp} follows from~\\cite[Lemma 15]{gribonval2014sparse}. 
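The two facts invoked so far — the trace of a Kronecker product factorizes, and the expected squared Frobenius norm of a uniformly random column subset scales as $s/p$ (the cited Lemma 15 of Gribonval et al.) — are easy to spot-check numerically. The sketch below uses small random matrices as illustrative stand-ins, not the paper's dictionaries:

```python
# Sanity checks for two identities used above; the matrices are arbitrary
# stand-ins, not the paper's dictionaries.
import itertools

import numpy as np

rng = np.random.default_rng(0)

# Fact 1: Tr(A kron B) = Tr(A) * Tr(B).
A = rng.standard_normal((3, 3))
B = rng.standard_normal((4, 4))
kron_lhs = np.trace(np.kron(A, B))
kron_rhs = np.trace(A) * np.trace(B)

# Fact 2: for a uniformly random subset J of size s drawn from [p],
# E_J ||M_J||_F^2 = (s / p) * ||M||_F^2, where M_J keeps the columns in J
# (each column belongs to J with probability s/p). Exact average over all subsets:
m, p, s = 5, 6, 3
M = rng.standard_normal((m, p))
subset_avg = np.mean(
    [np.linalg.norm(M[:, list(J)], "fro") ** 2
     for J in itertools.combinations(range(p), s)]
)
subset_exact = (s / p) * np.linalg.norm(M, "fro") ** 2
```

Both checks agree to floating-point precision, since each column of $\mb{M}$ is retained with probability $s/p$ under the uniform draw of the support.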
If $\\wt{\\D}_k = \\mb{D}_k$, then \t\n\t\\begin{align*}\n\t& \\mbb{E}_{\\mc{J}_k} \\lr{ \\norm{\\mb{P}_{\\mb{D}_{k,\\mc{J}_k}}\\mb{D}^0_{k,\\mc{J}_k}}_F^2}\n\t\\numrel{=}{r_DC} \\mbb{E}_{\\mc{J}_k} \\lr{ \\norm{ [\\mb{D}_k \\mb{C}_k^{-1}]_{\\mc{J}_k}}_F^2 } \\nonumber \\\\\n\t& \\numrel{=}{r_exp} \\frac{s_k}{p_k} \\norm{\\mb{D}_k\\mb{C}_k^{-1}}_F^2\n\t\t\\numrel{=}{r_d_j_1} \\frac{s_k}{p_k}\n\t\t\\sum_{j=1}^{p_k} \\frac{1}{\\cos^2(\\theta_{k,j})}\n\t\t\\numrel{\\geq}{r_cos_ineq} \\frac{s_k}{p_k} p_k = s_k,\n\t\\end{align*}\nwhere \\eqref{r_DC} is a direct consequence of Lemma~\\ref{lem:D_12_tet}; we can write $\\mb{D}^0_k = \\mb{D}_k\\mb{C}_k^{-1} - \\mathbf{V}_k\\mb{T}_k$ where $\\mb{C}_k = \\Diag{\\cos(\\boldsymbol{\\theta}_{k})}$, $\\mb{T}_k = \\Diag{\\tan(\\boldsymbol{\\theta}_k)}$ and $\\theta_{k,j}$ denotes the angle between $\\mb{d}_{k,j}$ and $\\mb{d}^0_{k,j}$. Hence $\\bP_{\\D_{k,\\cJ_k}} \\mb{D}^0_{k,\\mc{J}_k} = [\\mb{D}_k \\mb{C}_k^{-1}]_{\\mc{J}_k}$. Moreover,~\\eqref{r_exp} follows from~\\cite[Lemma 15]{gribonval2014sparse}, \\eqref{r_d_j_1} follows from the fact that $\\|\\mb{d}_{k,j}\\|_2=1$, and \\eqref{r_cos_ineq} follows from the fact that $\\cos (\\theta_{k,j})\\leq 1$. 
Similarly, we have\n\t\\begin{align}\n\t& \\mbb{E}_{\\mc{J}_k} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ {\\mb{D}^0_k}^\\top(\\mb{I}_{m_k} - \\bP_{\\D_{k,\\cJ_k}})\\mb{D}^0_k} } \\nonumber\\\\\n\t&\\qquad = \\mbb{E}_{\\mc{J}_k} \\lr{ \\norm{(\\mb{I}_{m_k} - \\bP_{\\D_{k,\\cJ_k}})\\mb{D}^0_{k,\\mc{J}_k}}_F^2 } \\nonumber \\\\\n\t&\\qquad \\numrel{\\geq}{r_diff_p_LB} \\frac{s_k}{p_k} \\|\\boldsymbol{\\theta}_k\\|_2^2 \\lrp{ 1 - \\frac{s_k}{p_k} \\frac{B^2_k}{1-\\delta_k} },\n\t\\end{align}\nwhere~\\eqref{r_diff_p_LB} follows from similar arguments as in Gribonval et al.~\\cite[Equation (72)]{gribonval2014sparse}.\nPutting it all together, we have\n\t\\begin{align} \\label{eq:delt1_LB}\n\t\\mbb{E} &\\lr{ \\Delta \\phi_1 \\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} } \\nonumber\\\\\n\t&\\geq \\frac{\\mbb{E}\\{x^2\\}}{2}\n\t\t\\sum_{k \\in [K]} \\bigg(\\prod_{\\substack{i \\in [K]\\\\ i \\neq k}} s_i\\bigg) \\frac{s_k}{p_k} \\|\\boldsymbol{\\theta}_k\\|_2^2\n\t\t\\lrp{ 1 - \\frac{s_k}{p_k} \\frac{B^2_k}{1-\\delta_k} } \\nonumber \\\\\n\t&= \\frac{s \\mbb{E}\\{x^2\\}}{2}\n\t\t\\sum_{k \\in [K]} \\frac{\\|\\boldsymbol{\\theta}_k\\|_2^2 }{p_k}\n\t\t\\lrp{ 1 - \\frac{s_k}{p_k} \\frac{B^2_k}{1-\\delta_k} }.\n\t\\end{align}\n\nNext, to lower bound $\\mbb{E}\\lr{ \\Delta \\phi_4 \\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} }$, we upper bound $\\lra{\\mbb{E}\\lr{ \\Delta \\phi_4 \\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} }}$. 
If $\\wt{\\D}_k = \\mb{D}_k^0$, we have\n\t\\begin{align}\n\t\\mbb{E}_{\\mc{J}_k} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ {\\mb{D}^0}_{k,\\mc{J}_k}^+\\mb{D}^0_{k,\\mc{J}_k}}}\n\t\t= \\mbb{E}_{\\mc{J}_k} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ \\mb{I}_{s_k}}} = s_k,\n\t\\end{align}\notherwise, if $\\wt{\\D}_k = \\mb{D}_k$, we get\n\t\\begin{align}\n\t&\\left| \\mbb{E}_{\\mc{J}_k} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ {\\mb{D}_{k,\\mc{J}_k}}^+\\mb{D}^0_k}} \\right| \\nonumber\\\\\n\t&\\qquad \\numrel{\\leq}{r_tr_spn} s_k \\mbb{E}_{\\mc{J}_k} \\lr{ \\norm{ {\\mb{D}}_{k,\\mc{J}_k}^+\\mb{D}^0_{k,\\mc{J}_k}}_2}\n\t\t\\nonumber \\\\\n\t&\\qquad \\leq s_k \\mbb{E}_{\\mc{J}_k} \\lr{ \\| {\\mb{D}}_{k,\\mc{J}_k}^+\\|_2\\|\\mb{D}^0_{k,\\mc{J}_k} \\|_2}\n\t\t\\nonumber \\\\\n\t&\\qquad \\numrel{\\leq}{r_5} s_k \\lrp{ \\frac{1}{\\sqrt{1-\\delta_{s_k}(\\mb{D}_k)}} }\n\t\t\\lrp{ \\sqrt{1+\\delta_{s_k}(\\mb{D}^0_k)}}\n\t\t\\nonumber \\\\\n\t&\\qquad \\numrel{\\leq}{r_delts} s_k \\sqrt{\\frac{1+\\delta_k}{1-\\delta_k}},\n\t\\end{align}\nwhere \\eqref{r_tr_spn} follows from the fact that for a square matrix $\\mb{A} \\in \\mbb{R}^{q\\times q}$, $\\mathop{\\mathrm{Tr}}\\nolimits \\lrb{\\mb{A}} \\leq q\\|\\mb{A}\\|_2$, \\eqref{r_5} follows from \\eqref{eq:l2_delt} and \\eqref{eq:pso_cond} and \\eqref{r_delts} follows from \\eqref{eq:A_B_Delk}. 
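The elementary bound behind \eqref{r_tr_spn}, $\mathop{\mathrm{Tr}}\lrb{\mb{A}} \leq q\|\mb{A}\|_2$ for square $\mb{A} \in \mbb{R}^{q\times q}$, holds because the trace is a sum of $q$ eigenvalues, each of modulus at most the spectral norm. A quick numerical spot-check on random stand-in matrices:

```python
# Spot-check: for square A in R^{q x q}, Tr[A] <= q * ||A||_2, since the trace
# sums q eigenvalues and each eigenvalue modulus is at most the spectral norm.
import numpy as np

rng = np.random.default_rng(1)
bound_holds = []
for q in (2, 5, 8):
    A = rng.standard_normal((q, q))
    spectral = np.linalg.norm(A, 2)  # largest singular value
    bound_holds.append(np.trace(A) <= q * spectral + 1e-12)
```

The small additive slack only guards against floating-point rounding; the inequality itself is never tight for these random draws.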
Similar to~\\cite[Equation (73)]{gribonval2014sparse}, we also have\n\t\\begin{align}\n\t&\\left| \\mbb{E}_{\\mc{J}_k} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\big[\\mb{I}_{s_k} - \\D_{k,\\cJ_k}^+ \\mb{D}^0_k\\big]} \\right|\n\t\\nonumber\\\\\n\t&\\qquad \\leq \\frac{s_k}{p_k} \\frac{\\|\\boldsymbol{\\theta}_k\\|_2^2}{2}\n\t\t+ \\frac{s_k^2}{p_k^2} \\frac{A_kB_k}{1-\\delta_k}\\|\\boldsymbol{\\theta}_k\\|_2.\n\t\\end{align}\nThus, defining $\\delta_{-k} \\triangleq \\prod_{\\substack{i \\in [K]\\\\ i \\neq k}} \\sqrt{\\dfrac{1+\\delta_i}{1-\\delta_i}}$, we get\n\t\\begin{align} \\label{eq:delt4_LB}\n\t &\\mbb{E} \\lr{ \\Delta \\phi_4 \\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} } \\nonumber\\\\\n\t&\\quad \\geq - \\lambda \\mbb{E}\\{|x|\\}\n\t\t\\sum_{k \\in [K]}\\delta_{-k}\\bigg( \\prod_{\\substack{i \\in [K]\\\\ i \\neq k}}s_i\n\t\t \\bigg) \\nonumber\\\\\n\t&\\quad \\qquad \\lrp{\\frac{s_k}{p_k} \\frac{\\|\\boldsymbol{\\theta}_k\\|_2^2}{2}\n\t\t+ \\frac{s_k^2}{p_k^2} \\frac{A_kB_k}{1-\\delta_k}\\|\\boldsymbol{\\theta}_k\\|_2}\n\t\t \\nonumber \\\\\n\t&\\quad = - \\lambda s \\mbb{E}\\{|x|\\} \\sum_{k \\in [K]} \\frac{\\delta_{-k}}{p_k}\n\t\t\\lrp{ \\frac{\\|\\boldsymbol{\\theta}_k\\|_2^2}{2}\n\t\t+ \\frac{s_k}{p_k} \\frac{A_kB_k}{1-\\delta_k}\\|\\boldsymbol{\\theta}_k\\|_2}.\n\t\\end{align}\n\t\nTo lower bound $\\mbb{E}\\lr{ \\Delta \\phi_6 \\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} }$, we upper bound $\\lra{\\mbb{E}\\lr{ \\Delta \\phi_6 \\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} }}$. For any $\\wt{\\D}_k$, we have\n\t\\begin{align}\n\t\\lra{ \\mbb{E}_{\\mc{J}_k} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ \\mb{H}_{\\wt{\\D}_{k,\\mc{J}_k}} }} }\n\t\t&\\leq \\mbb{E}_{\\mc{J}_k} \\lr{ s_k \\norm{ \\mb{H}_{\\wt{\\D}_{k,\\mc{J}_k}} }_2} \t\n\t\t\\numrel{\\leq}{r_H_UB} \\frac{s_k}{1-\\delta_k},\n\t\\end{align}\nwhere \\eqref{r_H_UB} follows from \\eqref{eq:pso_cond} and \\eqref{eq:A_B_Delk}. 
Similar to Gribonval et al.~\\cite[Equation (74)]{gribonval2014sparse}, we also have\n\t\\begin{align*}\n\t\\left| \\mbb{E}_{\\mc{J}_k} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ \\mb{H}_{\\mb{D}^0_{k,\\mc{J}_k}} - \\bH_{\\D_{k,\\cJ_k}} }} \\right|\n\t\\leq \\frac{s_k^2}{p_k^2} \\frac{4A_kB_k}{(1-\\delta_k)^2}\\|\\boldsymbol{\\theta}_k\\|_2.\n\t\\end{align*}\nThus, we get\n\t\\begin{align} \\label{eq:delt6_LB}\n\t& \\mbb{E} \\{ \\Delta \\phi_6 \\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} \\} \\nonumber\\\\\n\t&\\quad \\geq - \\frac{\\lambda^2}{2}\n\t\t\\sum_{k \\in [K]} \\bigg( \\prod_{\\substack{i \\in [K]\\\\ i \\neq k}}\n\t\t\\frac{s_i}{1-\\delta_i}\\bigg)\n\t\t\\lrp{ \\frac{s_k^2}{p_k^2} \\frac{4A_kB_k}{(1-\\delta_k)^2}\\|\\boldsymbol{\\theta}_k\\|_2}\n\t\t\\nonumber \\\\\n\t&\\quad = - \\frac{\\lambda^2s}{2}\n\t\t\\sum_{k \\in [K]} \\frac{1}{p_k}\n\t\t\\bigg( \\prod_{i \\in [K]} \\frac{1}{1-\\delta_i}\\bigg)\n\t\t\\lrp{ \\frac{s_k}{p_k} \\frac{4A_kB_k}{1-\\delta_k}\\|\\boldsymbol{\\theta}_k\\|_2}.\n\t\\end{align}\nAdding~\\eqref{eq:delt1_LB}, \\eqref{eq:delt4_LB}, and \\eqref{eq:delt6_LB}, we get \\eqref{eq: delt_ph_3}.\n\\end{IEEEproof}\n\n\\begin{IEEEproof} [Proof of Proposition \\ref{prop:1}]\nTo show that $\\Delta\\phi_{\\mbb{P}}\\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}} > 0$, we use Lemma~\\ref{lem:A_B_delt} and prove that the right hand side of \\eqref{eq: delt_ph_3} is positive under certain conditions. First, we ensure the conditions in \\eqref{eq:cond_delt_r_i} and \\eqref{eq:A_B_Delk} hold for Lemma~\\ref{lem:H_Dps} and Lemma~\\ref{lem:A_B_delt}, respectively. We set $\\delta_k = \\dfrac{1}{2}$, $\\delta_{s_k}(\\mb{D}_k) = \\dfrac{1}{2}$ and $\\delta_{s_k}(\\mb{D}^0_k) = \\dfrac{1}{4}$, for $k \\in [K]$. 
For $\\varepsilon_k\\leq 0.15 $, this ensures:\n\t\t\\begin{align}\n\t\t&\\sqrt{1-\\delta_{s_k}(\\mb{D}_k)}\n\t\t\t\\geq \\sqrt{1-\\delta_{s_k}(\\mb{D}^0_k)} - \\varepsilon_k, \\text{ and}\n\t\t\t%\n\t\t\t\\nonumber \\\\\n\t\t&\\max\\lr{ \\delta_{s_k}(\\mb{D}^0_k), \\delta_{s_k}(\\mb{D}_k)} \\leq \\delta_k,\n\t\t\\end{align}\nand implies $\\delta_k<1$ (condition for Lemmas~\\ref{lem:exp_phi} and \\ref{lem:H_Ps}).\nNext, we find conditions that guarantee:\n\t\\begin{align} \\label{eq:tet_2_p1}\n\t\\frac{s_k}{p_k} \\frac{B^2_k}{1-\\delta_k}\n\t\t+\\bar{\\lambda} \\kappa_x^2\n\t\t\\delta_{-k}\n\t\t\\numrel{=}{r_deltmk} \\frac{2B^2_k s_k}{p_k}\n\t\t+\\bar{\\lambda} \\kappa_x^2\n\t\t\\lrp{3}^{(K-1)\/2}\n\t\t\\leq \\frac{1}{2},\n\t\\end{align}\nwhere \\eqref{r_deltmk} follows from replacing $\\delta_k$ with $\\dfrac{1}{2}$.\nIf we take $\\dfrac{s_k}{p_k} \\leq \\dfrac{1}{8B_k^2}$ and $\\bar{\\lambda}\\leq \\dfrac{1}{8\\times 3^{(K-1)\/2}}$, given the fact that $\\kappa_x^2\\leq 1$, \\eqref{eq:tet_2_p1} is satisfied.\\footnote{These numbers are chosen for a simplified proof and can be modified.}\nConsequently, we can restate~\\eqref{eq: delt_ph_3} as\n\t\\begin{align} \\label{eq:eb_1}\n\t&\\Delta\\phi_{\\mbb{P}}\\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}}\n\t\t\\geq \\frac{s \\mbb{E}\\{x^2\\}}{4}\n\t\t\\sum_{k \\in [K]} \\frac{\\|\\boldsymbol{\\theta}_k\\|_2 }{p_k}\n\t\t\\bigg[\\|\\boldsymbol{\\theta}_k\\|_2 \\nonumber\\\\\n\t&\\qquad -8 \\lrp{3^{(K-1)\/2}\n\t\t+2^{(K+1)}\\bar{\\lambda} }\n\t\t\\bar{\\lambda} \\kappa_x^2\n\t\t\\frac{s_k}{p_k} A_kB_k\\bigg].\n\t\\end{align}\nFrom~\\cite[Proof of Proposition 2]{gribonval2014sparse}, we use the following relations:\n\t\\begin{align} \\label{eq:A0_A}\n\tB_k\\leq B^0_k +\\varepsilon_k \\leq B^0_k +1, \\quad\n\tA_k \\leq A^0_k + 2B_k\\varepsilon_k, \\quad k\\in [K],\n\t\\end{align}\nwhere $A^0_k \\triangleq \\norm{{\\mb{D}_k^0}^\\top \\mb{D}_k^0 - \\mb{I}_{p_k}}_F$ and $B^0_k \\triangleq \\norm{\\mb{D}_k^0}_2$ and 
\\eqref{eq:A0_A} follows from matrix norm inequalities~\\cite{gribonval2014sparse}.\nDefining $\\gamma_k \\triangleq 16 \\bigg(3^{(K-1)\/2}\n\t\t+2^{(K+1)}\\bar{\\lambda} \\bigg) \\bar{\\lambda}\\kappa_x^2\n\t\t\\dfrac{B_k^2 s_k}{p_k}$ for $k \\in [K]$ and using $\\kappa_x^2 \\leq 1$, we have\n\t\\begin{align} \\label{eq:gam}\n\t\\gamma_k &\\leq\n\t\t2\\bigg(3^{(K-1)\/2}\n\t\t+\\frac{2^{(K+1)}}{8\\times 3^{(K-1)\/2}} \\bigg)\n\t\t\\bigg(\\frac{1}{8\\times 3^{(K-1)\/2}} \\bigg) \\nonumber\\\\\n\t\t&\\leq 2\\lrp{\\frac{1}{8}+\\frac{4}{64}}\n\t\t\\leq\\frac{1}{2}.\n\t\\end{align}\nThen, for $\\mb{D}_k \\in \\mc{S}_{\\varepsilon_k}(\\mb{D}_k^0)$, $k \\in [K]$, we get\n\t\\begin{align}\n\t&\\Delta\\phi_{\\mbb{P}}\\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}} \\nonumber\\\\\n\t&\\qquad \\numrel{\\geq}{r_tet_rk} \\frac{s\\mbb{E}\\{x^2\\}}{4}\n\t\t\\sum_{k \\in [K]} \\frac{\\varepsilon_k}{p_k}\n\t\t\\lrp{ \\varepsilon_k - \\frac{\\gamma_k}{2} \\frac{A_k}{B_k}\t}\n\t\t\\nonumber \\\\\n\t&\\qquad\t\\numrel{\\geq}{r_A_B} \\frac{s\\mbb{E}\\{x^2\\}}{4}\n\t\t\\sum_{k \\in [K]} \\frac{\\varepsilon_k}{p_k}\n\t\t\\lrp{ \\varepsilon_k - \\frac{\\gamma_k}{2} \\frac{A_k^0+2B_k\\varepsilon_k}{B_k}\t}\n\t\t\\nonumber \\\\\n\t&\\qquad\t\\geq \\frac{s\\mbb{E}\\{x^2\\}}{4}\n\t\t\\sum_{k \\in [K]} \\frac{\\varepsilon_k}{p_k}\n\t\t\\lrp{ \\varepsilon_k(1-\\gamma_k) - \\frac{\\gamma_k}{2} \\frac{A_k^0}{B_k}\t}\n\t\t\\nonumber \\\\\n\t&\\qquad\t\\numrel{\\geq}{r_gam} \\frac{s\\mbb{E}\\{x^2\\}}{8}\n\t\t\\sum_{k \\in [K]} \\frac{\\varepsilon_k}{p_k}\n\t\t\\lrp{ \\varepsilon_k - \\gamma_k \\frac{A_k^0}{B_k}\t},\n\t\\end{align}\nwhere \\eqref{r_tet_rk} follows from \\eqref{eq:eb_1}, \\eqref{r_A_B} follows from \\eqref{eq:A0_A}, and \\eqref{r_gam} follows from \\eqref{eq:gam}.\nHence, we can write\n\t\\begin{align} \\label{eq:Delt_LB_rmin}\n\t\\Delta\\phi_{\\mbb{P}}\\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}} \\geq& \\frac{s\\mbb{E}\\{x^2\\}}{8}\n\t\t\\sum_{k \\in [K]} 
\\frac{\\varepsilon_k}{p_k}\n\t\t\\lrp{ \\varepsilon_k - \\varepsilon_{k,\\min}(\\bar{\\lambda})},\n\t\\end{align}\nwhere we define\n\t\\begin{align}\n\t& \\varepsilon_{k,\\min} (\\bar{\\lambda})\n\t\\triangleq \\gamma_k \\frac{A_k^0}{B_k} \\nonumber\\\\\n\t&= 16 \\lrp{3^{(K-1)\/2}\n\t\t+2^{(K+1)}\\bar{\\lambda} }\n\t\t\\bar{\\lambda}\\kappa_x^2\n\t\t\\frac{s_k}{p_k} A_k^0 B_k \\nonumber \\\\\n\t&=\\frac{2}{3^{(K+1)\/2}} \\bigg(3^{(K-1)\/2}\n\t\t+2^{(K+1)}\\bar{\\lambda} \\bigg) \\bar{\\lambda} C_{k,\\min},\n\t\\end{align}\nand $C_{k,\\min}$ is defined in \\eqref{eq:cond_C_min_C_max}.\nThe lower bound in \\eqref{eq:Delt_LB_rmin} holds for any $\\varepsilon_k \\leq 0.15$ and $\\mb{D}_k \\in \\mc{S}_{\\varepsilon_k}(\\mb{D}_k^0)$, $k \\in [K]$.\nFinally, since $3^{(K-1)\/2} +2^{(K+1)}\\bar{\\lambda} \\leq 0.5\\times 3^{(K+1)\/2}$,\nthe assumption\n$\\bar{\\lambda} \\leq 0.15\/(\\max_{k \\in [K]} C_{k,\\min})$\nimplies that $\\varepsilon_{k,\\min}(\\bar{\\lambda}) \\leq 0.15$ for $k\\in[K]$.\nTherefore, $\\Delta\\phi_{\\mbb{P}}\\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}} > 0 $ for all $\\varepsilon_k \\in (\\varepsilon_{k,\\min}(\\bar{\\lambda}),0.15]$, $k \\in [K]$.\n\\end{IEEEproof}\n\n\n\\begin{IEEEproof}[Proof of Lemma~\\ref{lem:mu_mu0_rel}]\nConsidering $j \\not \\in \\mc{J}$, associated with $\\lrp{j_1,\\dots,j_k} \\not \\in \\lrp{\\mc{J}_1\\times \\dots \\times \\mc{J}_K}$, we have\n\t\\begin{align}\n\t&\\|\\mb{D}_\\mc{J}^\\top \\mb{d}_j\\|_1 \\nonumber\\\\\n\t&\\numrel{\\leq}{r_tr_ineq} \\|{\\mb{D}_\\mc{J}^0}^\\top \\mb{d}^0_j\\|_1 + \\|{\\mb{D}_\\mc{J}^0}^\\top (\\mb{d}_j-\\mb{d}^0_j)\\|_1\n\t\t+ \\|(\\mb{D}_\\mc{J}-\\mb{D}_\\mc{J}^0)^\\top \\mb{d}_j\\|_1\\nonumber \\\\\n\t&\\leq \\mu_s(\\mb{D}^0)\n\t\t+ \\sqrt{s} \\lrb{ \\|{\\mb{D}_\\mc{J}^0}^\\top( \\mb{d}_{j}-\\mb{d}_{j}^0) \\|_2\n\t\t+ \\|(\\mb{D}_\\mc{J}-\\mb{D}^0_\\mc{J})^\\top\\mb{d}_j\\|_2} \\nonumber \\\\\n\t&\\leq \\mu_s(\\mb{D}^0) + \\sqrt{s}\n\t\t\\bigg[ \\norm{ \\bigotimes 
{\\mb{D}^0_{k,\\mc{J}_k}}^\\top}_2\n\t\t\\norm{ \\bigotimes \\lrp{ \\mb{d}_{k,j_k} - \\mb{d}^0_{k,j_k} }}_2 \\nonumber\\\\\n\t&\\qquad + \\norm{\\bigotimes \\mb{D}_{k,\\mc{J}_k} -\\bigotimes \\mb{D}^0_{k,\\mc{J}_k}}_2\n\t\t \\norm{\\mb{d}_j}_2 \\bigg] \\nonumber \\\\\n\t&\\numrel{\\leq}{r_d0_mu} \\mu_s(\\mb{D}^0) + \\sqrt{s}\n\t\t\\bigg[ \\bigg( \\prod_{k \\in [K]} \\sqrt{1+\\delta_{s_k}(\\mb{D}^0_k)} \\bigg)\n\t\t\\nonumber\\\\\n\t&\\qquad \\bigg( \\sum_{k \\in [K]} \\norm{ \\widetilde{\\mb{d}}_{1,j_1}}_2\\dots\n\t\t\\norm{\\mb{d}_{k,j_k} - \\mb{d}^0_{k,j_k} }_2 \\dots \\\n\t\t\\norm{ \\widetilde{\\mb{d}}_{k,j_K}}_2 \\bigg) \\nonumber \\\\\n\t &\\quad\n\t \t+ \\sum_{k \\in [K]} \\norm{ \\wt{\\D}_{1,\\mc{J}_1}}_2\\dots\n\t\t\\norm{\\mb{D}_{k,\\mc{J}_k} - \\mb{D}^0_{k,\\mc{J}_k} }_2 \\dots \\\n\t\t\\norm{ \\wt{\\D}_{k,\\mc{J}_k}}_2 \\bigg] \\nonumber \\\\\n\t&\\numrel{\\leq}{r_RIP_d} \\mu_s(\\mb{D}^0) + \\sqrt{s}\n\t\t\\bigg[ \\bigg( \\prod_{k \\in [K]} \\sqrt{1+\\delta_{s_k}(\\mb{D}^0_k)} \\bigg)\n\t\t\\bigg( \\sum_{k \\in [K]} \\varepsilon_k \\bigg) \\nonumber\\\\\n\t&\\qquad + \\sum_{k \\in [K]} \\bigg( \\prod_{\\substack{i \\in [K] \\\\ i \\neq k}}\n\t\t\\norm{\\wt{\\D}_{i,\\mc{J}_i}}_2 \\bigg) \\varepsilon_k\n\t\t \\bigg] \\nonumber \\\\\n\t%\n\t%\n\t%\n\t%\n\t%\n\t&\\numrel{\\leq}{r_asump} \\mu_s(\\mb{D}^0) + 2(1.5)^{K\/2}\\sqrt{s} \\bigg( \\sum_{k \\in [K]} \\varepsilon_k \\bigg),\n\t\\end{align}\t\nwhere \\eqref{r_tr_ineq} follows from the triangle inequality, \\eqref{r_d0_mu} follows from \\eqref{eq:otD_otDp}, \\eqref{r_RIP_d} follows from \\eqref{eq:delt_mu}, and, \\eqref{r_asump} follows from substituting the upper bound value from \\eqref{eq:UB_mu_delt} for $\\delta_{s_k}(\\mb{D}_k^0) $. 
For $\\wt{\\D}_i = \\mb{D}_i^0$, $\\norm{\\mb{D}_{i,\\mc{J}_i}^0}_2 \\leq \\sqrt{1 + \\delta_{s_i}(\\mb{D}^0_i)} \\leq \\sqrt{\\frac{5}{4}} < 1.5$ and for $\\wt{\\D}_i = \\mb{D}_i$, according to \\eqref{eq:A0_A}, we have $\\norm{\\mb{D}_{i,\\mc{J}_i}}_2 \\leq \\norm{\\mb{D}_{i,\\mc{J}_i}^0}_2 + \\varepsilon_i \\leq \\sqrt{\\frac{5}{4}}+0.15 < 1.5$.\n\\end{IEEEproof}\n\n\\begin{IEEEproof}[Proof of Proposition~\\ref{prop:3}]\nWe follow a similar approach to Gribonval et al.~\\cite{gribonval2014sparse}. We show that the conditions in~\\eqref{eq:lem8_cod} hold for Lemma~\\ref{lem:a_hat_min_cond}. We have\n\t\\begin{align}\n\t& \\norm{\\mb{y} - \\lrp{ \\bigotimes \\mb{D}_k} \\mb{x} }_2 \\nonumber\\\\\n\t&\\leq \\norm{ \\lrp{ \\bigotimes \\mb{D}^0_{k,\\mc{J}_k} - \\bigotimes \\mb{D}_{k,\\mc{J}_k} }\n\t\t\\mb{x}_\\mc{J}} _2+\\|\\mb{w}\\|_2 \\nonumber \\\\\n\t&\\leq M_x\n\t\t\\sum _{k \\in [K]} \\big\\| \\wt{\\D}_{1,\\mc{J}_1} \\otimes \\dots \\otimes\n\t\t\\lrp{\\mb{D}^0_{k,\\mc{J}_k} - \\mb{D}_{k,\\mc{J}_k} }\t\\otimes \\dots \\otimes \\nonumber\\\\\n\t&\\qquad \\wt{\\D}_{K,\\mc{J}_K}\\big\\|_2 +M_w \\nonumber \\\\\n\t&\\leq M_x\n\t\t\\sum _{k \\in [K]} \\norm{ \\wt{\\D}_{1,\\mc{J}_1}}_2 \\dots\n\t\t\\norm{\\mb{D}^0_{k,\\mc{J}_k} - \\mb{D}_{k,\\mc{J}_k} }_2 \\dots \\norm{ \\wt{\\D}_{K,\\mc{J}_K}}_2 \\nonumber\\\\\n\t&\\qquad +M_w\n\t\t\\nonumber \\\\\n\t&\\leq M_x\n\t\t\\sum _{k \\in [K]}\n\t\t\\bigg( \\prod_{\\substack{i \\in [K] \\\\ i \\neq k}}\n\t\t\\norm{ \\wt{\\D}_{i,\\mc{J}_i }}_2 \\bigg) \\varepsilon_k\n\t\t +M_w \\nonumber \\\\\n\t&\\numrel{\\leq}{r_delt_leq} (1.5)^{(K-1)\/2} M_x \\sum_{k \\in [K]} \\varepsilon_k + M_w,\n\t\\end{align}\nwhere \\eqref{r_delt_leq} follows from \\eqref{eq:delt_cond} and the fact that for $\\wt{\\D}_i = \\mb{D}_i^0$, $\\norm{\\mb{D}_{i,\\mc{J}_i}^0}_2 \\leq \\sqrt{1 + \\delta_{s_i}(\\mb{D}^0_i)} \\leq \\sqrt{\\frac{5}{4}} < 1.5$ and for $\\wt{\\D}_i = \\mb{D}_i$, according to \\eqref{eq:A0_A}, we have 
$\\norm{\\mb{D}_{i,\\mc{J}_i}}_2 \\leq \\norm{\\mb{D}_{i,\\mc{J}_i}^0}_2 + \\varepsilon_i \\leq \\sqrt{\\frac{5}{4}}+0.15 < 1.5$.\nHence, we get\n\t\\begin{align} \\label{eq:lmb_m_err}\n\t&\\lambda (1-2\\mu_s(\\mb{D}))- \\norm{\\mb{y} - \\lrp{ \\bigotimes \\mb{D}_k} \\mb{x}}_2 \\nonumber \\\\\n\t&\\geq \\lambda (1-2\\mu_s(\\mb{D})) - (1.5)^{(K-1)\/2} M_x \\sum_{k \\in [K]} \\varepsilon_k -M_w\n\t\t\\nonumber \\\\\n\t&\\numrel{\\geq}{r_mu_mu0} \\lambda (1-2\\mu_s(\\mb{D}^0))\n\t\t- (1.5)^{K\/2} \\lrp{ 4\\lambda \\sqrt{s} + (1.5)^{-1\/2}M_x} \\nonumber\\\\\n\t&\\qquad \\sum_{k \\in [K]} \\varepsilon_k\n\t\t-M_w \\nonumber \\\\\n\t&\\numrel{\\geq}{r_lam_m_a} \\lambda (1-2\\mu_s(\\mb{D}^0)) - 3(1.5)^{K\/2} M_x \\sum_{k \\in [K]} \\varepsilon_k -M_w\n\t\\nonumber \\\\\n\t&= 3(1.5)^{K\/2} M_x \\bigg( K \\bar{\\lambda} C_{\\max} - \\sum_{k \\in [K]} \\varepsilon_k \\bigg) -M_w,\n\t\\end{align}\nwhere \\eqref{r_mu_mu0} follows from \\eqref{eq:mu_mu0} and \\eqref{r_lam_m_a} follows from \\eqref{eq:lem8_cod} ($2\\lambda \\sqrt{s} \\leq x_{\\min}\\sqrt{s}\\leq M_x$) and~\\eqref{eq:mu_mu0}.\nIf $\\varepsilon_k < C_{\\max} \\bar{\\lambda} $, $k \\in [K]$, the assumption on the noise level in \\eqref{eq:M_eps_M_al} implies that the right-hand side of \\eqref{eq:lmb_m_err} is greater than zero and $\\lambda (1-2\\mu_s(\\mb{D}))> \\norm{\\mb{y} - \\lrp{ \\bigotimes \\mb{D}_k} \\mb{x}}_2 $. 
Thus, according to Lemma~\\ref{lem:a_hat_min_cond}, $\\widehat{\\mb{x}}$ is almost surely the unique solution of $\\min_{\\mb{x} } \\frac{1}{2}\\norm{\\mb{y} - \\lrp{ \\bigotimes \\mb{D}_k} \\mb{x}' }_2 +\\lambda\\|\\mb{x}'\\|_1$ and $\\Delta\\phi_{\\mbb{P}}\\lrp{\\D_{1:K},\\D^0_{1:K}|\\boldsymbol{\\sigma}} = \\Delta f_{\\mbb{P}} \\lrp{\\D_{1:K},\\D^0_{1:K}}$.\n\\end{IEEEproof}\n\n\n\n\\section*{appendix B}\n\n\n\\begin{IEEEproof} [Proof of Lemma~\\ref{lemma:delt_m_T_dev}]\nAccording to Lemma~\\ref{lem:rad}, we have to upper bound\n$\n\t\\mbb{E} \\lr{ \\sup_{\\mb{D}_k \\in \\overline{\\mathcal{B}}_{\\varepsilon_k}(\\mb{D}^0_k), k \\in [K]}\\lra{ \\frac{1}{N}\n\\sum_{n \\in [N] } \\beta_n h_n(\\D_{1:K}) }}\n$. Conditioned on the draw of functions $h_1,\\dots,h_N$, consider the Gaussian processes\n$\n\tA_{\\D_{1:K}} = \\frac{1}{N} \\sum_{n \\in [N]} \\beta_n h_n(\\D_{1:K})\n$ and\n$\n\tC_{\\D_{1:K}} = \\sqrt{\\frac{K}{N}}\n\t\t\\sum_{k \\in [K]} \\bigg( L_k \\sum_{i \\in [m_k]} \\sum_{j \\in [p_k]}\n\t\t\\zeta_{ij}^k (\\mb{D}_k-\\mb{D}_k^0)_{ij} \\bigg)\n$,\nwhere $\\lr{\\beta_n}_{n=1}^N$'s and $\\lr{\\zeta_{ij}^k},k\\in [K],i\\in [m_k],j\\in [p_k]$'s are independent standard Gaussian vectors.\nWe have\n\t\\begin{align}\n\t&\\mbb{E} \\lr{\\lra{A_{\\D_{1:K}} - A_{\\D'_{1:K}}}^2} \\nonumber\\\\\n\t&\\qquad= \\frac{1}{N^2} \\bigg|\n\t\t\\sum_{n \\in [N]} h_n(\\D_{1:K})- h_n(\\D'_{1:K}) \\bigg|^2 \\nonumber \\\\\n\t&\\qquad \\numrel{\\leq}{r_lpsh_h} \\frac{1}{N}\n\t\t\\bigg(\\sum_{k \\in [K]} L_k\\|\\D_k-\\D'_k\\|_F \\bigg)^2\\nonumber \\\\\n\t&\\qquad \\numrel{\\leq}{r_cs} \\frac{K}{N}\\sum_{k \\in [K]} L_k^2\\|\\D_k-\\D'_k\\|_F^2\\nonumber \\\\\n\t&\\qquad = \\mbb{E}\\lr{\\lra{C_{\\D_{1:K}} - C_{\\D'_{1:K}}}^2},\n\t\\end{align}\nwhere \\eqref{r_lpsh_h} follows from coordinate-wise Lipschitz continuity of $h$ and \\eqref{r_cs} follows from Cauchy-Schwartz inequality. 
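The step \eqref{r_cs} is the Cauchy--Schwarz bound $\lrp{\sum_k t_k}^2 \leq K \sum_k t_k^2$ applied with $t_k = L_k\|\D_k-\D'_k\|_F$. A minimal numerical spot-check, with arbitrary nonnegative stand-ins for the $t_k$:

```python
# Spot-check of the Cauchy-Schwarz step: (sum_k t_k)^2 <= K * sum_k t_k^2,
# applied in the text with t_k = L_k * ||D_k - D'_k||_F (arbitrary stand-ins here).
import numpy as np

rng = np.random.default_rng(2)
K = 4
t = rng.uniform(0.0, 3.0, size=K)  # stand-ins for L_k * ||D_k - D'_k||_F
cs_lhs = np.sum(t) ** 2
cs_rhs = K * np.sum(t ** 2)
```

Equality would hold only when all $t_k$ coincide, which is why the comparison process $C_{\D_{1:K}}$ dominates the increments of $A_{\D_{1:K}}$.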
Hence, using Slepian's Lemma~\\cite{massart2007concentration}, we get\n\t\\begin{align}\n\t\\mbb{E} \\bigg\\{ \\sup_{\\substack{\\mb{D}_k \\in \\overline{\\mathcal{B}}_{\\varepsilon_k}(\\mb{D}^0_k) \\\\ k \\in [K]}}\n\t\tA_{\\D_{1:K}} \\bigg\\}\n\t&\\leq \\mbb{E} \\bigg\\{ \\sup_{\\substack{ \\mb{D}_k \\in \\overline{\\mathcal{B}}_{\\varepsilon_k}(\\mb{D}^0_k) \\\\ k \\in [K]}}\n\t\tC_{\\D_{1:K}} \\bigg\\} \\nonumber \\\\\n\t& = \\sqrt{\\frac{K}{N}} \\bigg(\\sum_{k \\in [K]} L_k\\varepsilon_k \\mbb{E} \\big\\{\\|\\boldsymbol{\\zeta}^k\\|_F \\big\\}\\bigg)\\nonumber \\\\\n\t& \\leq \\sqrt{\\frac{K}{N}} \\bigg(\\sum_{k \\in [K]} L_k\\varepsilon_k \\sqrt{m_kp_k}\\bigg),\n\t\\end{align}\nwhere the last step uses Jensen's inequality, $\\mbb{E} \\big\\{\\|\\boldsymbol{\\zeta}^k\\|_F \\big\\} \\leq \\sqrt{\\mbb{E} \\big\\{\\|\\boldsymbol{\\zeta}^k\\|_F^2 \\big\\}} = \\sqrt{m_kp_k}$.\nThus, we obtain\n$\n\t\\mbb{E} \\lr{\\sup_{\\substack{ \\mb{D}_k \\in \\overline{\\mathcal{B}}_{\\varepsilon_k}(\\mb{D}^0_k) \\\\ k \\in [K]}}\n\t\t\\lra{\\frac{1}{N} \\sum_{n \\in [N]} \\beta_n h_n(\\D_{1:K}) } }\n\t\\\\ \\leq 2\\sqrt{\\frac{K}{N}}\\lrp{\\sum_{k \\in [K]} L_k\\varepsilon_k \\sqrt{m_kp_k}}.\n$\n\\end{IEEEproof}\t\n\n\n\n\\begin{IEEEproof}[Proof of Lemma~\\ref{lem:phi_m_T1_lip}]\nWe expand $\\Delta \\phi_\\mb{y} \\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}}$ according to \\eqref{eq:delt_t} and bound each term of the sum separately. 
Looking at the first term, we get\n\t\\begin{align}\n\t& \\lra{ \\Delta \\phi_1 \\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}} }\n\t\t\\numrel{=}{r_exp_phi1} \\bigg| \\frac{1}{2}\\mb{x}^\\top\n\t\t{\\mb{D}^0 }^\\top\n\t\t\\bigg( \\sum_{k \\in [K]} \\mb{P}_{\\widetilde{\\mb{D}}_{1,\\mc{J}_1}}\n\t\t\\otimes \\dots \\otimes \\nonumber\\\\\n\t&\\qquad\n\t\t\\lrp{\\bP_{\\D'_{k,\\cJ_k}} - \\bP_{\\D_{k,\\cJ_k}}}\t\\otimes \\dots \\otimes \\mb{P}_{\\widetilde{\\mb{D}}_{K,\\mc{J}_K}}\\bigg)\n\t\t\\mb{D}^0\n\t\t\\mb{x} \\bigg| \\nonumber \\\\\n\t&\\numrel{\\leq}{r_D0_D0k} \\frac{1}{2} \\norm{\\mb{x}}_2^2\n\t\t\\bigg(\\prod_{k \\in [K]} \\norm{ \\mb{D}^0_{k,\\mc{J}_k} }_2^2 \\bigg)\n\t\t\\bigg( \\sum_{k \\in [K]}\n\t\t\\norm{\\bP_{\\D^0_{k,\\cJ_k}} - \\bP_{\\D_{k,\\cJ_k}}}_2 \\nonumber\\\\\n\t&\\qquad \\bigg( \\prod_{\\substack{i\\in [K] \\\\ i \\neq k}}\n\t\t\\norm{\\mb{P}_{\\wt{\\D}_{i,\\mc{J}_i}}}_2 \\bigg)\\bigg)\n\t\t\\nonumber \\\\\n\t&\\numrel{\\leq}{r_p_rip1} M_x^2\n\t\t\\bigg( \\prod_{k \\in [K]}\\big(1+\\delta_{s_k}(\\mb{D}^0_k)\\big) \\bigg)\n\t\t\\nonumber\\\\\n\t&\\qquad \\bigg( \\sum_{k \\in [K]}(1 - \\delta_k)^{-1\/2}\\|\\D_k-\\D^0_k\\|_F \\bigg) ,\n\t\\end{align}\nwhere \\eqref{r_exp_phi1} follows from \\eqref{eq:delt_phi}, \\eqref{r_D0_D0k} follows from the fact that $\\norm{\\mb{D}^0_\\mc{J}}_2 = \\prod_{k \\in [K]} \\norm{ \\mb{D}^0_{k,\\mc{J}_k} }_2$, and \\eqref{r_p_rip1} follows from the definition of $\\mathsf{RIP}$, equation \\eqref{eq:PH_PHp}, and $\\big\\|\\mb{P}_{\\wt{\\D}_{i,\\mc{J}_i}}\\big\\|_2=1$. 
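Two facts used in \eqref{r_D0_D0k} and \eqref{r_p_rip1} — the spectral norm of a Kronecker product factorizes, and an orthogonal projector has unit spectral norm — can be verified on small random stand-ins:

```python
# Checks: ||A kron B||_2 = ||A||_2 * ||B||_2 (singular values of a Kronecker
# product are pairwise products), and ||P||_2 = 1 for the orthogonal projector
# onto the column space of a full-column-rank matrix.
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((5, 2))
norm_kron = np.linalg.norm(np.kron(A, B), 2)
norm_prod = np.linalg.norm(A, 2) * np.linalg.norm(B, 2)

D = rng.standard_normal((6, 3))   # full column rank almost surely
P = D @ np.linalg.pinv(D)         # orthogonal projector onto col(D)
proj_norm = np.linalg.norm(P, 2)
```

The first identity is what lets $\norm{\mb{D}^0_\mc{J}}_2$ split into the per-mode factors $\norm{\mb{D}^0_{k,\mc{J}_k}}_2$ above.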
Following a similar approach and expanding the rest of the terms, we get\n\t\\begin{align*}\n\t&\\lra{ \\Delta \\phi_2 \\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}} } \\nonumber\\\\\n\t&\\leq \\norm{\\mb{w}}_2 \\norm{\\mb{x}}_2\n\t\t\\bigg( \\prod_{k \\in [K]} \\norm{ \\mb{D}^0_{k,\\mc{J}_k} }_2^2 \\bigg)\n\t\t\\nonumber\\\\\n\t&\\qquad \\bigg( \\sum_{k \\in [K]} \\norm{\\bP_{\\D^0_{k,\\cJ_k}} - \\bP_{\\D_{k,\\cJ_k}} }_2\n\t\\bigg( \\prod_{\\substack{i\\in [K] \\\\ i \\neq k}} \\norm{\\mb{P}_{\\wt{\\D}_{i,\\mc{J}_i}}}_2 \\bigg)\\bigg)\n\t\\nonumber \\\\\n\t&\\numrel{\\leq}{r_p_rip} 2M_w M_x\n\t\t\\bigg( \\prod_{k \\in [K]}\\big(1+\\delta_{s_k}(\\mb{D}^0_k)\\big)^{1\/2} \\bigg)\n\t\t\\nonumber\\\\\n\t&\\qquad \\bigg( \\sum_{k \\in [K]} (1 - \\delta_k)^{-1\/2}\\|\\D_k-\\D^0_k\\|_F \\bigg), \\nonumber \\\\\n\t&\\lra{ \\Delta \\phi_3 \\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}} }\n\t\t\\leq \\frac{1}{2} \\norm{\\mb{w}}_2^2 \\nonumber\\\\\n\t& \\qquad \\bigg( \\sum_{k \\in [K]}\n\t\t\\norm{\\bP_{\\D^0_{k,\\cJ_k}} - \\bP_{\\D_{k,\\cJ_k}}}_2\t\n\t\t\\bigg( \\prod_{\\substack{i\\in [K] \\\\ i \\neq k}} \\norm{\\mb{P}_{\\wt{\\D}_{i,\\mc{J}_i}}}_2 \\bigg)\n\t\t\\bigg) \\nonumber \\\\\n\t&\\leq M_w^2 \\bigg( \\sum_{k \\in [K]} (1 - \\delta_k)^{-1\/2}\\|\\D_k-\\D^0_k\\|_F \\bigg),\\nonumber \\\\\n\t&\\lra{ \\Delta \\phi_4 \\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}} }\n\t\t= \\lambda \\norm{\\boldsymbol{\\sigma}_\\mc{J}}_2 \\norm{\\mb{x}}_2\n\t\t\\bigg( \\prod_{k \\in [K]} \\norm{\\mb{D}^0_{\\mc{J}_k}}_2 \\bigg)\n\t\t\\nonumber\\\\\n\t&\\qquad \\bigg( \\sum_{k \\in [K]}\n\t\t\\norm{{\\D^0}_{k,\\cJ_k}^+ - \\D_{k,\\cJ_k}^+}_2\n\t\t\\bigg( \\prod_{\\substack{i\\in [K] \\\\ i \\neq k}} \\norm{ {\\wt{\\D}_{i,\\mc{J}_i}}^+}_2 \\bigg)\n\t\t \\bigg) \\nonumber \\\\\n\t&\\numrel{\\leq}{r_ps} 2\\lambda \\sqrt{s} M_x\n\t\t \\bigg( \\prod_{k \\in [K]}\\big(1+\\delta_{s_k}(\\mb{D}^0_k)\\big)^{1\/2} \\bigg) \\nonumber\\\\\n\t&\\qquad \\bigg( \\sum_{k \\in [K]} 
(1-\\delta_k)^{-1}\n\t\t \\bigg( \\prod_{\\substack{i \\in [K] \\\\ i \\neq k}}\n\t\t (1-\\delta_i)^{-1\/2} \\bigg) \\|\\D_k-\\D^0_k\\|_F \\bigg) ,\\nonumber \\\\\n\t& \\lra{ \\Delta \\phi_5 \\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}} }\n\t= \\lambda \\norm{\\boldsymbol{\\sigma}_\\mc{J}}_2 \\norm{\\mb{w}}_2 \\nonumber\\\\\n\t&\\qquad \\bigg( \\sum_{k \\in [K]}\n\t\t\\norm{{\\D^0}_{k,\\cJ_k}^+ - \\D_{k,\\cJ_k}^+ }_2\t\n\t\t\\bigg( \\prod_{\\substack{i\\in [K] \\\\ i \\neq k}} \\norm{ {\\wt{\\D}_{i,\\mc{J}_i}}^+}_2 \\bigg)\n\t\t\\bigg)\t\\nonumber \\\\\n\t&\\leq 2\\lambda \\sqrt{s}M_w \\nonumber\\\\\n\t&\\qquad \\bigg( \\sum_{k \\in [K]} (1-\\delta_k)^{-1}\n\t\t \\bigg( \\prod_{\\substack{i \\in [K] \\\\ i \\neq k}}\n\t\t (1-\\delta_i)^{-1\/2} \\bigg) \\|\\D_k-\\D^0_k\\|_F \\bigg) ,\\nonumber \\\\\n\t&\\lra{ \\Delta \\phi_6 \\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}} }\n\t\t= \\frac{\\lambda^2}{2} \\norm{\\boldsymbol{\\sigma}_\\mc{J}}_2^2 \\nonumber \\\\\n\t&\\qquad\n\t \t\\bigg( \\sum_{k \\in [K]}\n\t\t\\norm{\\bH_{\\D^0_{k,\\cJ_k}} - \\bH_{\\D_{k,\\cJ_k}} }_2\t\n\t\t\\bigg( \\prod_{\\substack{i\\in [K] \\\\ i \\neq k}} \\norm{\\mb{H}_{\\wt{\\D}_{i,\\mc{J}_i}} }_2 \\bigg)\\bigg)\t\n\t\t\\nonumber \\\\\n\t&\\numrel{\\leq}{r_h} \\lambda^2 s\n\t\t\\bigg( \\sum_{k \\in [K]} (1-\\delta_k)^{-\\frac{3}{2}}\n\t\t \\bigg( \\prod_{\\substack{i \\in [K] \\\\ i \\neq k}}\n\t\t (1-\\delta_i)^{-1} \\bigg) \\|\\D_k-\\D^0_k\\|_F\\bigg),\n\t\\end{align*}\nwhere \\eqref{r_ps} and \\eqref{r_h} follow from \\eqref{eq:pso_cond} and \\eqref{eq:PH_PHp}. 
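The pseudo-inverse bounds used in \eqref{r_ps} and \eqref{r_h} rest, as we read \eqref{eq:pso_cond}, on the identity $\|\mb{A}^+\|_2 = 1/\sigma_{\min}(\mb{A})$ for a full-column-rank matrix, so RIP-type singular-value bounds $\sigma_{\min} \geq \sqrt{1-\delta}$ give $\|\mb{A}^+\|_2 \leq (1-\delta)^{-1/2}$. A small numerical check of the underlying identity (random stand-in, and this reading of the notation is our assumption):

```python
# Check: for a full-column-rank A, ||pinv(A)||_2 = 1 / sigma_min(A), since the
# singular values of the pseudo-inverse are the reciprocals of those of A.
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((7, 4))  # full column rank almost surely
sigma_min = np.linalg.svd(A, compute_uv=False).min()
pinv_norm = np.linalg.norm(np.linalg.pinv(A), 2)
```

With this identity, the $(1-\delta_k)^{-1}$ and $(1-\delta_i)^{-1/2}$ factors above follow directly from the assumed singular-value ranges of the subdictionaries.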
Adding all the terms together, we get\n\t\\begin{align}\n\t&\\lra{\\Delta \\phi_\\mb{y}\\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}}} \\leq\n\t\t\\sum_{k \\in [K]} L_k \\|\\D_k-\\D^0_k\\|_F.\n\t\\end{align}\nwhere $L_k$ is defined in \\eqref{eq:lipsch_const_h}.\n\\end{IEEEproof}\t\n\\section*{appendix C}\n\\begin{IEEEproof}[Proof of the coherence relation for KS dictionaries]\nTo prove \\eqref{eq:mu_s}, we define the set $\\mathcal{A} = \\lr{\\forall j_k \\in \\mc{J}_k, (j_1,\\dots,j_K) \\not\\in (\\mc{J}_1,\\dots,\\mc{J}_K)}$. We have\n\t\\begin{align}\n\t\\mu_s(\\mb{D})&= \\max_{|\\mc{J}|\\leq s} \\max_{j \\not\\in \\mc{J}}\\|\\mb{D}_\\mc{J}^\\top \\mb{d}_j\\|_1 \\nonumber \\\\\n\t&= \\max_{\\substack{|\\mc{J}_k|\\leq s_k\\\\ k \\in [K]}}\n\t\t\\max_{\\mathcal{A}}\n\t\t\\norm{\\lrp{\\bigotimes \\mb{D}_{k,\\mc{J}_k}^\\top}\\lrp{\\bigotimes \\mb{d}_{k,j_k}}}_1\\nonumber \\\\\n\t&= \\max_{\\substack{|\\mc{J}_k|\\leq s_k\\\\ k \\in [K]}}\n\t\t\\max_{\\mathcal{A}}\n\t\t\\norm{\\bigotimes \\mb{D}_{k,\\mc{J}_k}^\\top\\mb{d}_{k,j_k}}_1 \\nonumber\\\\\n\t&= \\max_{\\substack{|\\mc{J}_k|\\leq s_k\\\\ k \\in [K]}}\n\t\t\\max_{\\mathcal{A}}\n\t\t\\prod_{k \\in [K]} \\norm{\\mb{D}_{k,\\mc{J}_k}^\\top\\mb{d}_{k,j_k}}_1 \\nonumber\\\\\n\t&\\leq \\max_{k \\in [K]} \\mu_{s_k}(\\mb{D}_k)\n\t\t\\bigg( \\prod_{\\substack{i \\in [K], \\\\ i \\neq k}} \\lrp{ 1+\\mu_{s_i-1}(\\mb{D}_i)} \\bigg).\n\t\\end{align}\t\n\n\\end{IEEEproof}\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nSolar sytem progenitors, from the deeply embedded sources to\nthe weak line T Tauri Stars (WTTSs), are sources of high\nenergy radiation (X-ray to UV); the total energy\nradiated in this range goes from $\\sim 0.02$~L$_{\\odot}$\nmeasured in the very young\nsources through their X-ray radiation to the 0.2 L$_{\\odot}$ radiated in the\nUV during the T Tauri phase (or Phase~TT) (Preibish 2004, G\\'omez de Castro 2008).\nLater on, in the 
Weak line T Tauri Phase the energy released drops to\n$\\sim 10^{-3}$~L$_{\\odot}$ radiated in X-rays. TTSs being\nintrinsically cool stars (with $T_{\\rm eff}\\sim$ 6500-3600~K),\nsurrounded by cool accretion disks radiating at infrared\nwavelengths, the source\nof this energy must be sought in the release of magnetic energy.\n\nIt is well known that the mediation of magnetic fields in the\naccretion process is able to heat up the plasmas since a fraction of the\ngravitational energy lost during accretion is invested in field\namplification and dynamo action; thus, radiative losses are pushed\ntowards the high energy range. Unfortunately, in the early phases\n(ages $< 0.1$Myr) most of this radiation is reabsorbed by the\ndense circumstellar environment (Av$>3$) and only the hard X-ray\nradiation is able to escape from the system providing direct\ninformation on the evolution of the accretion process. After 1\nMyr, extinction drops enough to make the engine accessible to UV\nwavelengths; current technologies allow carrying out high\nresolution spectroscopy in the UV range, which is also extremely\nrich in spectral tracers, so a single echelle spectrum can provide\ninformation on molecular, atomic and ionized gas (from singly\nionized gas to very high ionization species such as Fe~XII,\nFe~XVIII or Fe~XXI). Thus, UV spectroscopy is an extremely\nefficient tool to study solar system progenitors from 1 Myr on with the\ncurrent technology, and these applications will be discussed in detail\nbelow.\n
However, one can foresee a future when\nmicroarcsecond UV imaging will be available and\nstudies similar to those being run on the Sun will be feasible\nfor 1 Myr old Suns all the way down into the main sequence, while\nthe young planetary disk settles down and life begins to grow.\nThis review deals with this: with a description of our\ncurrent understanding of the evolution from the T Tauri phase to\nthe modern Sun, and with a non-technologically biased, ambitious\nview of what we could learn from challenging new UV observatories.\nThis review has been written after the end of the first conference\nof the Network for UV Astronomy held in El Escorial in May 2007,\nwhere some challenging projects for new space observatories were\npresented; you should find references to some of them in this text.\n\n\\section{Physics to be understood I: the gravito-magnetic engine}\n\nDuring the phase TT, stars count on an energy source\nwhich is not available during the main sequence evolution: gravitational\nenergy from the infalling material. This extra energy is released either\nthrough shocks on the stellar surface or through the gravito-magnetic\ninteraction between the star and the disk.\n\nShocks release the kinetic energy of the infalling material into\nheating at the impact point. If matter infall occurs along the\nfield lines, all the gravitational energy is damped into heating\nand the gas may reach temperatures as high as $\\sim 10^6$K. The\ndominant output radiation is produced by the photoionized preshock\ninfalling gas radiating mainly in the UV range (G\\'omez de Castro\n\\& Lamzin 1999; Gullbring et al. 2000). As the density of the\ninfalling gas column is high ($n_e \\simeq 10^9-10^{12}$~cm$^{-3}$)\nthe thickness of the radiating column is expected to be negligible\ncompared with the stellar radius; thus, accretion shocks are\nobserved as {\\it hot spots} on the stellar surface.\n
As such, they\nare expected to produce a rotationally modulated signal that has\nbeen detected in monitoring campaigns of some stars both in the optical\n(see e.g. Petrov et al. 2001;\nBouvier et al. 2003) and in the UV (G\\'omez de Castro \\& Fern\\'andez\n1996; G\\'omez de Castro \\& Franqueira 1997a). An important result\nof these campaigns is that only $\\sim\n50$\\% of the UV continuum excess is rotationally modulated. Thus,\na significant fraction of the UV excess is not produced by the\naccretion shocks even in sources where rotational modulation\nhas been detected. However, this excess decreases as the star\napproaches the main sequence (see Fig.~1).\n\n\\begin{figure}[]\n\\centerline{\\includegraphics[width=18pc]{gomezdecastro_fig1.ps}}\n\\caption{The (UV-V, V) colour--magnitude diagram for the\nT Tauri stars observed with the IUE satellite in the Taurus region.\nThe crosses represent cool TTSs (spectral types later than\n$\\sim $ K3) and the open circles warm TTSs (spectral types\nearlier than $\\sim$ K3). The location of the main sequence is\nmarked by the spectral types. The stars closer to the main sequence\nare the WTTSs (from G\\'omez de Castro 1997).}\n\\end{figure}\n\n\nIn fact, the major source of high energy radiation is the dissipation\nof the magnetic and mechanical energy produced by the gravito-magnetic\nengine. A simple analogy can be made with a self-regulated hydraulic\nturbine: the potential energy of the gas falling from the disk into the\nstellar magnetic field leads to the generation of electric currents\ndue to the Lorentz force; these, in turn, create new field components\nthrough dynamo action.\n
There are, however, a great number of uncertainties\nin the way the system self-regulates, and also in the dependence of the\nengine details on initial conditions such as the effective gravity of\nthe star, the role of stellar radiation and magnetic field in the engine\nperformance, and the role of the ionizing radiation produced by the engine in\nthe evolution of the mass storage, the disk.\n\nThe Sun itself provides important clues to understanding the\nphysics of the gravito-magnetic engine and its evolution. At the\nbase of the Sun's convective layer, the tachocline marks the\nlocation of the shear layer between the rigid body rotation of the\nradiative core and the differentially rotating convective\nenvelope. The tachocline is significantly prolate; it is centered\nat 0.69R$_{\\odot}$ at the equator and 0.72R$_{\\odot}$ at latitude\n60$^o$ (Basu \\& Antia, 2003). The tachocline thickness is $\\sim\n0.04$R$_{\\odot}$. The angular velocity profile based on\nhelioseismic inversions shows that at a latitude of about 35$^o$,\nthe radial gradient changes sign, becoming negative for latitudes\n$> 35^o$. This latitude marks the limit of the two latitude belts\nwhere the overwhelming majority of sunspots occur. There are also\nsome indications of the meridional flow moving equatorwards below\nthis latitude and polewards above it (see Miesch 2005). The Solar\nwind is (magnetic)\nlatitude dependent during solar minimum;\nabove $\\sim 35^o$ it is fast (1000~km\/s) and\nthin, below it is slower (300~km\/s) and denser (see Fig.2 from\nUlysses data).\n
The current paradigm\nfor how the solar dynamo operates includes: (1) field amplification in a\nturbulent downflow ($\\alpha$ effect) that is pumped downward by\nconvection and accumulates in the overshoot region and the\ntachocline; (2) field amplification and organization into toroidal\nflux tubes and sheets by differential rotation in the tachocline;\n(3) magnetic instabilities (buoyancy) that drive the field to the surface; and\n(4) the Coriolis force acting on the rising structures, which depends on the\nlatitude, producing a latitude-dependent emergence of bipolar magnetic\nstructures.\n\n\n\\begin{figure}[]\n\\centerline{\\includegraphics[width=18pc]{gomezdecastro_fig2.eps}}\n\\caption{{\\it ``Solar wind observations collected by the Ulysses spacecraft\nduring two separate polar orbits of the Sun, six years apart,\nat nearly opposite times in the solar cycle. Near solar minimum\n(left) activity is focused at low altitudes, high-speed solar wind\nprevails, and magnetic fields are dipolar. Near solar maximum (right),\nthe solar winds are slower and more chaotic, with fluctuating magnetic\nfields.''} (From NASA Solar Probe Web (solarprobe.gsfc.nasa.gov),\ncourtesy of Southwest Research Institute and the Ulysses\/SWOOPS team)\n}\n\\end{figure}\n\n\nGoing backwards in time, during the phase TT,\nthe problem becomes complicated by the\npresence of an additional ``tachocline'' or differentially rotating\nregion attached to the convective layer.\n
This {\\it external\ntachocline} connects the star with the accretion disk, which\nrotates significantly faster than the stellar surface; rotation\nperiods during the TT phase are about 7-8 days ($\\Omega _* =\n0.8-0.9$ day$^{-1}$) while the Keplerian frequency is:\n$$\n\\Omega _k = 11.1 {\\rm day}^{-1} \\lgroup \\frac {M}{M_{\\odot}} \\rgroup ^{1\/2}\n\\lgroup \\frac {r}{3 R_{\\odot}} \\rgroup ^{-3\/2}\n$$\nThe Keplerian disk corotation radius is thus at\n$$\nr_{\\rm co} = (17.3 - 16.0)R_{\\odot} \\lgroup \\frac {M}{M_{\\odot}} \\rgroup ^{1\/3}\n$$\nTo avoid this large shear, the magnetosphere will grow to balance the\ntoroidal component of the flux with the angular momentum of the infalling\nmatter (Ghosh \\& Lamb 1979); thus,\n$$\n\\frac{B_p B_t}{4 \\pi} 4\\pi r^2 \\Delta r \\simeq \\dot M r V_k\n$$\nwhere $B_p$ and $B_t$ are the poloidal and toroidal components of the\nfield respectively, $r$ is the magnetosphere radius, $\\Delta r$ is\nthe thickness of the shear layer, $\\dot M$ is the accretion rate and\n$V_k$ is the Keplerian velocity at the magnetosphere radius. For typical\nT~Tauri star parameters:\n$$\nr_{\\rm mag}= 4.4 R_{\\odot} \\gamma ^{2\/7} \\lgroup \\frac {B_*}{1kG}\n\\rgroup^{4\/7} \\lgroup \\frac {\\dot M}{10^{-8} M_{\\odot} {\\rm\nyr}^{-1}} \\rgroup^{-2\/7} \\lgroup \\frac {M_*}{M_{\\odot}}\n\\rgroup^{-1\/7}\n$$\nwhere $\\gamma ^{2\/7}$ is a factor about unity ($\\gamma =\n(B_t\/B_p)(\\Delta r \/r)$, see Lamb, 1989). Notice that the main\nuncertainties in the physics, namely the ratio between the\ntoroidal and the poloidal components and the relative thickness of\nthe {\\it ``external tachocline''}, are enclosed in this factor.\nAs in the Solar\ninterior, the shear region is fed by turbulent, magnetized\nmaterial, though this comes from the accretion disk instead of\nthe convective layer. The turbulent disk dynamo is fed by the\nmagneto-rotational instability in the accretion disk.\n
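These scalings can be cross-checked numerically. The following sketch assumes standard solar constants, the standard Keplerian scaling $\Omega \propto r^{-3/2}$, the text's $4.4\,R_\odot$ normalization of the magnetospheric radius with $\gamma \approx 1$; the function names are illustrative only:

```python
import math

# Physical constants (cgs)
G = 6.674e-8          # gravitational constant
M_SUN = 1.989e33      # g
R_SUN = 6.957e10      # cm
DAY = 86400.0         # s

def kepler_freq(r_rsun, m_msun=1.0):
    """Keplerian angular frequency (day^-1) at radius r (solar radii)."""
    r = r_rsun * R_SUN
    return math.sqrt(G * m_msun * M_SUN / r**3) * DAY

def corotation_radius(period_days, m_msun=1.0):
    """Corotation radius (solar radii) for a given rotation period."""
    omega = 2.0 * math.pi / (period_days * DAY)
    return (G * m_msun * M_SUN / omega**2) ** (1.0 / 3.0) / R_SUN

def r_mag(b_kg=1.0, mdot=1e-8, m_msun=1.0, gamma=1.0):
    """Magnetospheric radius (solar radii), normalized to 4.4 R_sun as in the text."""
    return (4.4 * gamma**(2 / 7) * b_kg**(4 / 7)
            * (mdot / 1e-8)**(-2 / 7) * m_msun**(-1 / 7))

print(kepler_freq(3.0))        # ~10-11 day^-1 at 3 R_sun, matching the prefactor
print(corotation_radius(7.0))  # corotation radius for P = 7 d
print(r_mag())                 # 4.4 R_sun at the fiducial parameters
```

Note that the magnetospheric radius stays inside the corotation radius for the fiducial parameters, as the accretion picture requires.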
Shear\namplifies the field producing a strong toroidal component; an\nexternal dynamo sets in. This toroidal field and\nthe associated magnetic pressure push the field lines outwards\nfrom the disk rotation axis, inflating and opening them in a {\\it\nbutterfly-like pattern} reminiscent of the helmet streamers in\nthe solar corona, so producing a current layer between the\nstellar and the disk dominated regions as displayed in Fig.~3.\nMagnetic field dissipation in the current layer produces high\nenergy radiation and particles. The magnetic link between the star\nand the disk is broken and reestablished continuously by magnetic\nreconnection. The opening angle of the current layer, as well as\nits extent, depends on the stellar and disk fields, the accretion\nrate and the ratio between the inner disk and the stellar\nrotation frequencies. Hot, pressure driven outflows are produced\nfrom the region closer to the rotation axis while cool\ncentrifugally driven flows are produced by the disk; plasmoids are\nejected from the current layer generating a third outflowing\ncomponent.\n\n\\begin{figure}[]\n\\centerline{\\includegraphics[width=18pc]{gomezdecastro_fig3.eps}}\n\\caption{The interaction between the stellar magnetic field and the\ndisk twists the stellar field lines due to the differential rotation.\nThe toroidal magnetic field generated out of the poloidal flux and\nthe associated pressure tends to push the field lines outwards,\ninflating them, and eventually breaking the magnetic link between\nthe star and the disk (boundary between regions I and II).\nThree basic regions can be defined:\nRegion I dominated by the stellar wind, Region II dominated by\nthe disk wind and Region III dominated by stellar magnetospheric\nphenomena. The dashed line traces the boundaries between these\nthree regions.\n
The continuous lines indicate the topology\nof the field and the shadowed areas represent regions where\nmagnetic reconnection events are likely to occur, producing\nhigh energy radiation and particles (from G\\'omez de Castro 2004).}\n\\end{figure}\n\nDisk-star interaction has been investigated by means of numerical\nsimulations since the early works by Goodson et al (1997) until the\nlatest results ({\\it e.g.} von Rekowski \\& Brandenburg 2006). They\nshow that the fundamental mechanism for disk wind formation is\nrobust; numerical simulations with different parameters (disk\/star\nfields) and initial conditions produce disk winds. Stellar winds\nare much more sensitive to the physical conditions and especially\nto the stellar field; compare the results of simulations assuming\nthat the stellar field is a magnetic dipole (von Rekowski \\&\nBrandenburg 2004) with those of simulations where the stellar\nfield is prescribed through the action of the stellar dynamo (von\nRekowski \\& Brandenburg, 2006). In fact, the characteristics of\nthe accretion flow and the winds (dominant driver, temperature,\nterminal velocity, density, variability) depend on the physical\nproperties of the system, such as the degree of magnetization of\nthe disk, the characteristics of the disk dynamo and the stellar\nfield.\n\nThe bulk of the energy produced in this engine is released at UV\nand X-ray wavelengths, as in the Sun's atmosphere. In the very early\nepochs, when extinction is high ($A_V \\geq 3$), only the X-ray\nradiation from the engine is detected. Later on, at about 1 Myr,\nextinction drops and the engine can be studied in the UV. Only in\nthe UV can the various components of the engine be defined and\nstudied, as well as their evolution, starting during the phase~TT\nall the way down into the main sequence.\n\n\\section{Physics to be understood II: the\nimpact of the engine on disk evolution}\n\nThough often neglected, the impact of the engine on the inner disk\nevolution is enormous.\n
On the one hand, the engine adds a significant\npoloidal component in the inner disk, thus favouring gas motions\nperpendicular to the disk as shown by the numerical simulations;\non the other hand, the engine is a source of highly energetic\nradiation where part of the dissipation is produced at heights of\nsome few stellar radii above the disk in the inflating current\nlayer; this high latitude illumination favours energy absorption\nby the disk. Both together act to increase the disk scale height and\nthe absorption of the radiation, hence producing the\nrarefaction of the disk\natmosphere and favouring disk evaporation close to the star.\nTo achieve the evaporation of a standard optically thin accretion disk,\nthe sound speed should be comparable to the Keplerian velocity; thus,\n$$\nT = \\frac{\\mu m_H}{\\gamma \\kappa}\\,\\frac{G M_*}{r_{\\rm mag}}\n= 3.13 \\times 10^7 K \\lgroup \\frac {M_*}{M_{\\odot}}\n\\rgroup \\lgroup \\frac {r_{\\rm mag}}{4.4 R_{\\odot}}\n\\rgroup ^{-1}\n$$\nwhere $\\mu$ is the mean molecular weight, $\\gamma $ is the\npolytropic index and $\\kappa $ is the Boltzmann constant.\nHowever, this value relaxes in the presence of a poloidal\nfield such as that expected at the disk-magnetosphere interface, so\n$$\nT = 3.13 \\times 10^7 K \\frac {\\beta}{1+\\beta}\n\\lgroup \\frac {M_*}{M_{\\odot}}\\rgroup\n\\lgroup \\frac {r_{\\rm mag}}{4.4 R_{\\odot}}\n\\rgroup ^{-1}\n$$\nwhere $\\beta$ is the ratio of thermal to magnetic pressure.\nThus, for highly magnetized environments, $T$ may drop\narbitrarily.\n
For thin accretion disks\\footnote{See Frank et al 2002\nfor the standard prescription of the thin disk density and temperature\nas a function of the accretion rate, radius and stellar mass.} penetrated\nby the stellar dipolar field, $B_*$,\n\n\\begin{eqnarray}\n\\beta &=& \\frac {\\gamma \\kappa T\/ \\mu m_H}{B^2 \/4 \\pi \\rho} \\\\\n&=&\n4.77 \\lgroup \\frac {M_*}{M_{\\odot}}\\rgroup ^{7\/8}\n\\lgroup \\frac {\\dot M}{10^{-8}M_{\\odot}{\\rm yr}^{-1}}\\rgroup ^{17\/20} \\\\\n&\\times & \\lgroup \\frac {r}{4.4 R_{\\odot}}\\rgroup ^{-21\/8}\n\\lgroup \\frac {B_*}{1kG}\\rgroup ^{-2}\n\\end{eqnarray}\n\\noindent\nwhere $r$ is the disk radius, $B_*$ the stellar magnetic field and\n$\\dot M$ the accretion rate. Note that\n$\\beta $ drops to 0.02 for accretion rates of $10^{-9}$M$_{\\odot}$yr$^{-1}$.\n\n Another important phenomenon to be considered is that the disk\nis not unlocked from the engine, so disk material should be\nsubjected to the propagation of Alfv\\'en waves, shear waves\nand global Alfv\\'en oscillations driven from the interface.\nIn summary, we might expect the inner rim of the disk to be hot,\nwith temperatures of about $10^4$K, well above the\ntemperature of dust sublimation.\n\nThe role of far-UV radiation fields and high energy particles in\nthe disk chemical equilibrium is now beginning to be understood.\nBergin et al. (2003) showed how strong Ly$\\alpha$ emission may\ncontribute to the observed enhancement of CN\/HCN in the disk. The\npenetration of UV photons coming from the engine in a dusty disk\ncould produce an important change in the chemical composition of\nthe gas, allowing the growth of large organic molecules.\n
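A minimal numerical sketch of the two evaporation expressions above (the 4.77 baseline of $\beta$ and the $3.13\times10^7$~K prefactor are taken directly from the formulas in the text; the function names are illustrative only):

```python
def beta_plasma(m_star=1.0, mdot=1e-8, r_rsun=4.4, b_kg=1.0):
    """Thermal-to-magnetic pressure ratio, per the scaling in the text.
    m_star in M_sun, mdot in M_sun/yr, r_rsun in R_sun, b_kg in kG."""
    return (4.77 * m_star**(7 / 8) * (mdot / 1e-8)**(17 / 20)
            * (r_rsun / 4.4)**(-21 / 8) * b_kg**(-2))

def rim_temperature(beta, m_star=1.0, r_rsun=4.4):
    """Evaporation temperature (K) including the beta/(1+beta) relaxation."""
    return 3.13e7 * beta / (1.0 + beta) * m_star * (r_rsun / 4.4)**(-1)

b0 = beta_plasma()             # 4.77 at the fiducial parameters
print(b0, rim_temperature(b0))
print(beta_plasma(mdot=1e-9))  # beta falls well below unity at lower accretion rates
```

With these scalings, lowering the accretion rate or raising the stellar field quickly drives $\beta \ll 1$ and hence a much lower evaporation temperature, which is the point made in the text.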
In this\ncontext, UV photons photodissociating organic molecules at\n$\\lambda > 1500$~\\AA\\ could play a key role in the chemistry of\nthe inner regions of the disk, while those photodissociating H$_2$\nand CO will control the chemistry of the external layers of the\ndisk directly exposed to the radiation from the central engine.\n\n\n\n\n\\section{Lessons learned from UV (spectroscopic) observations}\n\nThe first observations of pre-main sequence (PMS) stars in the UV\nwere carried out with the International Ultraviolet Explorer (IUE)\n(1978-1996). The observations showed that pre-main sequence stars\nhave UV fluxes exceeding those of main sequence stars by a factor\nof about 50. In fact, the UV excess decreases as the stars\napproach the main sequence, as shown in Fig.~1.\n\nUV radiation provides direct information on the interaction\nbetween the star and the disk. This includes all the various\ncomponents mentioned above: the shear layer, i.e. the {\\it external\ntachocline}, the\nwind, the enhanced magnetospheres, mass ejections from\nreconnecting loops, shocks between the various wind components\n(among themselves and also with the disk material), as well as\nthe inner regions of the disk. There is a recent review on UV\nobservations of pre-main sequence stars and young planetary disks\n(G\\'omez de Castro et al 2006) where a detailed accounting of the\nwork carried out since the IUE times is summarized. Thus, I shall\nconcentrate on the main lessons learned from IUE and HST\nobservations\\footnote{Some observations have also been obtained\nwith FUSE, but its small effective area has allowed observation of\nonly the brightest of the TTSs and some Vega-like disks.} that are:\n\n\\subsection{About the accretion flow}\n\nThe actual measurements of infalling gas in the UV are scarce.\n
There\nare hints of accretion in the large extent of the red wings of the\nmain UV resonance lines (Herczeg et al 2005) or through the\ndetection of redshifted absorption components in the profiles\nof the most prominent emission lines.\nHowever, the only target for which there is clear spectroscopic\nevidence of accretion shocks (in the UV) is RY~Tau (see Fig.~4).\nTwo observations of the same star obtained in 1993 and 2001 show that\nthere is a variable redshifted component. From this single\nobservation, three important properties are learnt:\n\\begin{enumerate}\n\\item As described above, UV radiation from accretion shocks\nis predicted to be produced in the preshock region on scale\nheights significantly smaller than the stellar radius. Thus, it is\nexpected that only matter falling onto the visible hemisphere can\nbe detected at UV wavelengths. The fact that the variable flux\ncomponent is redwards shifted supports these theoretical\nexpectations. Moreover, the broadening shows that infalling matter\nshould cover a significant fraction of the hemisphere to account\nfor the broad distribution of projected velocities in the\ninfalling gas.\n\n\\item The red wing extends to\nvelocities of 250~km\/s, which corresponds to the free-fall velocity\\footnote{\nRY Tau mass is 1.63~M$_{\\odot}$ and radius 2.4~R$_{\\odot}$ according to\nHartigan et al 1995} from 1.7 R$_*$, which is much smaller than\nthe fiducial values derived for the inner disk radius.\n\n\\item The UV excess is not only produced by accretion;\nthe wind also contributes to it. Thus accretion rates derived from\nthe UV excess, assuming that it is caused just by magnetospheric\ninfall, are overestimated.\n\n\\end{enumerate}\n\n\nIt also adds to our understanding of {\\it magnetospheric\naccretion}.\n
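As a rough numerical check of point 2 (a sketch: RY~Tau's mass and radius are taken from the footnote above, and the $315~{\rm km\,s^{-1}}$ normalization from the free-fall velocity scaling quoted later in this section):

```python
import math

def v_ff(m_msun, r_rsun):
    """Free-fall velocity scaling used in the text (km/s):
    v_ff ~ 315 km/s (M/M_sun)^(1/2) (R/R_sun)^(-1/2)."""
    return 315.0 * math.sqrt(m_msun / r_rsun)

# RY Tau (Hartigan et al 1995): M = 1.63 M_sun, R = 2.4 R_sun
print(v_ff(1.63, 2.4))   # ~260 km/s, consistent with the observed 250 km/s red wing
```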
Magnetospheric accretion was originally proposed to\nexplain the large broadening of the TTSs H$\\alpha $ lines\n(Muzerolle et al 1998), later detected also in the UV\nresonance lines of C~IV and C~III (Ardila et al 2002, Herczeg et al 2005).\nTypical line widths are about\n200-300 km\/s, which exceed by far what is expected from the rotation\nvelocities of the TTSs ($10-20$ km\/s) even if the corotating\nmagnetosphere is postulated to extend to some 4-5 stellar radii.\nInfall adds a radial velocity component to the rotation of the\nradiating gas. As the free-fall velocity is\n$$\nv_{ff} \\simeq 315 {\\rm km\\,s}^{-1} \\lgroup \\frac {M_*}{M_{\\odot}}\n\\rgroup ^{1\/2} \\lgroup \\frac {R_*}{R_{\\odot}} \\rgroup ^{-1\/2}\n$$\nthe observed\nprofile broadenings can be reproduced without difficulty. The\nobserved broadenings of the H$\\alpha$ or Mg~II lines do not vary\nsignificantly, requiring that, at least, the spatial average of the\naccretion flow is rather stable. Strong variations in the\naccretion flow, like those reported from the RY~Tau UV observations, should\nalso show up in the large scale magnetosphere tracers (i.e.\nH$\\alpha$ or Mg~II profiles).\n\n\\begin{figure}[]\n\\centerline{\\includegraphics[width=18pc]{gomezdecastro_fig4.eps}}\n\\caption{ SiIII] and CIII] UV lines observed in RY~Tau (from\nG\\'omez de Castro \\& Verdugo 2007); RSDP processed data are\nplotted with a thin line and the 3-pixel average profile with a\nthick line. The rest wavelength of the lines and the velocity of\nthe unresolved jet at $\\simeq -80$ km\/s (from G\\'omez de Castro \\&\nVerdugo 2001 and Hamann 1994) are marked with dashed lines. {\\it\nLeft panel}: Observations obtained in Dec. 31st, 1993 with the\nGHRS. {\\it Right panel}: Observations obtained in March 27th,\n2001.\n
Both lines show an excess of flux in the red wing compared\nwith the 1993 observations; this excess is shaded in the figure.}\n\\end{figure}\n\n\n\n\\subsection{About the wind}\n\nThere is another possibility to broaden the line profiles: adding\na radial velocity component associated with the outflow. The\npresence of magnetic fields and the relevance of centrifugal\nlaunching leads to the formulation of the velocity field in the\nwind by means of three components: an axial component (along the\nrotation axis), radial expansion from the axis and the azimuthal\ntoroidal component (rotation around the axis); in figure 5 there\nis a representation of the three components for a warm centrifugal\nwind model (from G\\'omez de Castro \\& Ferro-Font\\'an 2005). A\nrapid radial expansion close to the star (to guarantee that the\nwind density and temperature are about the observed\nn$_e=10^{10}$cm$^{-3}$ and T$_e = 20-30 \\times 10^{3}$K values)\ncould produce effects in the profiles similar to those predicted\nby magnetospheric infall. Several attempts have been made to\nreproduce the wind profiles with cold winds from accretion disks\n(Ferro-Font\\'an \\& G\\'omez de Castro 2003), warm disk winds\n(G\\'omez de Castro \\& Ferro-Font\\'an, 2005), coronal winds from\naccretion disks (Ferreira \\& Casse, 2004) and winds driven\nby the star-disk interaction (G\\'omez de Castro \\& von Rekowski, 2008).\n\n\\begin{figure}[]\n \\includegraphics[width=18pc]{gomezdecastro_fig5.eps}\n\\caption{The basic kinematics of MHD centrifugal winds is outlined\nfrom the simple semiempirical model of centrifugally driven MHD\nwinds with thermal pressure of G\\'omez de Castro \\& Ferro-Font\\'an\n(2005). {\\it Left:} Velocity field in the flow.\n$V_r$ (solid) and $V_z$ (dashed) are the components of the\nvelocity along the $r$\nand $z$ axes, respectively.\n
The toroidal component of the\nvelocity, $V_{t}$, is scaled with respect to the Keplerian\nvelocity of the disk at the radius from which the wind is ejected.\n {\\it Right:} Line profiles generated by the wind in a ring\nof gas at different heights along the\nz-axis. }\n\\end{figure}\n\nCold disk winds fail to reproduce the high temperatures observed.\nThe original wind temperature is as low as the disk one and\nheating has to be extracted from photoionization by the central\nsource. However, this radiation is able to heat the gas only to\nmild temperatures of about 10$^4$K. Warm disk winds produce\nprofiles that are too narrow to reproduce all the observations;\nthis is because the vertical thermal pressure push forces the\ngrowth of the radial velocity component at heights where the plasma\nis already too cool. Finally, winds driven from the star-disk\ninteraction also produce narrower profiles than those observed in\nsome sources, as shown in Fig.~6. According to their UV forbidden line\nprofiles, TTSs can be classified in two groups: stars with\nbroadenings with full width at half maximum of about 150~km~s$^{-1}$,\nwhich can be adjusted with the current models, and stars with\nextremely broad profiles ($> 250$km~s$^{-1}$). The source of the\nvery large broadenings has to be sought in other structures, such as\nion belts or plasma rings like that resolved in RW~Aur (see Sect.\n4.3).\n\n\\begin{figure}[]\n \\includegraphics[width=18pc]{gomezdecastro_fig6.eps}\n\\caption{{\\it Left:} Si~III] profiles of the TTSs observed\nwith HST; notice the very different line broadenings.\n{\\it Right:} Predicted Si~III] profiles for winds generated\nby means of the interaction between a stellar magnetosphere\nwith a 1~kG stellar field and an accretion disk undergoing\n$\\alpha ^2 \\Omega$-dynamo effect (e.g.\n
Krause \\& Raedler, 1980)\nfrom G\\'omez de Castro \\& von Rekowski (2008).\n}\n\\end{figure}\n\nFinally, UV observations have clearly proved that:\n\\begin{enumerate}\n\\item {\\it Warm winds are latitude dependent on scales\ncomparable to the stellar radius}. As an example, the Mg~II\nresonance doublet has been observed in a broad sample of 17 TTSs\n(adding IUE and HST samples); this is the largest sample of TTSs\nobserved in a single UV spectral line. These lines can be\ngenerically described as broad, asymmetric emission lines with\ntypical full widths at 10 \\% intensity of a few hundred km\/s. The\nbroad blueward-shifted absorption component characteristic of\nmass loss was detected in a few sources (Penston \\& Lago, 1983;\nImhoff \\& Appenzeller, 1989) but not in all of them; the degree of\nabsorption (the asymmetry of the line) varies from no absorption\nto full absorption of the blueward-shifted emission (see G\\'omez\nde Castro, 1997).\n\\item {\\it The collimated flow, the jet, radiates in the UV as\nwell as the bow-shock and the Herbig-Haro objects}. Basically\nall data have been obtained with the IUE satellite (see\nG\\'omez de Castro \\& Robles 2001 for a compilation).\nRecent observations obtained with the Hopkins Ultraviolet Telescope\n(HUT) have shown that it is still unclear how line\nradiation is excited, at least in HH2; in particular,\nO~VI is not detected, as was expected in high\nexcitation Herbig-Haro objects, where line radiation is\npredicted to be produced in strong radiative shocks\nwhere the shock kinetic energy is damped into heating\n(Raymond et al 1997).\n\n\\end{enumerate}\n\n\\subsection{About the inner disk and ion belts}\n\nStrong continuum FUV emission (1300--1700~\\AA) has\nbeen detected recently from some stars with bright molecular disks,\nincluding GM~Aur, DM~Tau, and LkCa~15, together with inner disk\ngaps of a few AU (Bergin et al. 2004).\n
This emission is likely due\nto energetic photoelectrons mixed into the molecular layer and\nindicates the existence of a very hot component in the\ninner disk.\n\nHigh-resolution HST\/STIS spectra have revealed, for the first\ntime, the rich UV molecular emission in CTTSs. H$_2$ fluorescence\nemission has now been studied in detail in the nearest CTTS,\nTW~Hya, and the richness of the spectrum is overwhelming: Herczeg\net al. (2002) detected 146 Lyman-band H$_2$ lines. The observed\nemission is likely produced in the inner accretion disk, as are\nthe infrared CO and H$_2$O lines. From these UV data, Herczeg et\nal. (2004) estimated that the warm disk surface has a column\ndensity of $N_{H_2} = 3.2 \\times 10^{18}$~cm$^{-2}$, temperature\nof $T=2500$~K, and filling factor of H$_2$, as seen from the source\nof the Ly$\\alpha$ emission, of 0.25$\\pm$0.08. The observed spectrum\nshows that some ground electronic state H$_2$ levels with\nexcitation energies as large as 3.8 eV are pumped by Ly$\\alpha$.\nThese highly excited levels may be formed by dissociative\nrecombination of H$^+_3$, which in turn may be formed by reactions\ninvolving X-rays and UV photons from the star. Also the DF~Tau and\nV836~Tau H$_2$ emission seems to arise from the disk (Herczeg et\nal 2006).\n\nIn addition to this molecular component, there is increasing\nevidence of the existence of ion belts\/rings around some TTSs. An\nion belt has been detected around the TTS RW~Aur (G\\'omez de\nCastro \\& Verdugo, 2003). A corotation radius of 4.4~$R_*$ is\nderived and a $\\log T_e (K) \\simeq 4.7$ and $\\log n_e (cm^{-3}) =\n11.6$ are estimated. This was the first detection of such a\nstructure around a classical TTS. In addition, there are\nindications of a similar structure around AB~Dor, a weak line TTS\n(see Fig.~7). The structure is resolved, as in RW~Aur, because\nthere is an inner hole that allows separating the stellar\/wind\ncontribution from the belt.\n
However, in a 5.7 hour time lapse the\ndouble-peaked profile is lost, and the inner part of the profile\nis filled in again with emission (G\\'omez de Castro 2002).\n\n\n\\begin{figure}\n\\centerline{\\includegraphics[width=17pc]{gomezdecastro_fig7.eps}}\n\\caption[]{SiIII] profiles of AB~Dor obtained with the HST\/GHRS\n(see G\\'omez de Castro 2002 for more details). In the bottom\npanel, the profile at phase ($\\phi$) 0.329 is overplotted\n(dashed line) on the profile at $\\phi = 0.794$ (continuous line),\nfor comparison.}\n\\end{figure}\n\n\n\n\\subsection{About the interaction between the disk and the wind}\n\nAB~Dor, a very bright nearby 30~Myr old star, is the only young\nstar that has been well monitored in the UV for flares. Nine events were\ndetected during 10.63~hours of monitoring with HST\/GHRS!\nThe C~IV and Si~IV UV line profiles produced by most of the events\nare narrow and redshifted, indicating hot gas falling onto the\nstar during the flare. However, the strongest event produced a\nvery broad profile with a narrow, slightly blueshifted absorption.\nThis profile lasted a few kiloseconds and thus the broad wings are\nmost likely tracing the front shock of a CIR (G\\'omez de Castro\n2002). In the solar system, there are three very different types\nof ``flares'', which are sudden increases of the high energy\nradiation and particle flux: magnetic flares (magnetic\nreconnection events), corotating interaction regions or CIRs\n(shock fronts formed by the interaction between the slow and the\nfast component of the solar wind), and coronal mass ejections.\nThis classification also applies to TTSs and their circumstellar\nenvironments.\n
High-resolution UV spectroscopic monitoring is\nrequired to disentangle the possible mechanisms for flares in\nproto-stellar systems and to study their impact on young planetary disk\nevolution as well as on planetary atmosphere embryos.\n\n\\begin{figure}\n\\centerline{\\includegraphics[width=18pc]{gomezdecastro_fig8.eps}}\n\\caption[]{The C~IV 1548~\\AA\\ profile of AB~Dor during a normal\nstellar flare (left) and a transient feature probably associated\nwith a CIR (right). Both events lasted several kiloseconds. The\nleft profile is typical of three events that occurred during the\nshort monitoring time, while the profile on the right was observed\nonly once. Note the presence of a narrow absorption and the very\nbroad line wings in the right panel profile (see G\\'omez de Castro\n2002 for more details). }\n\\end{figure}\n\n\n\n\n\\section{Summary: the key observables}\n\nIn brief, the radiation produced by the accretion engine\n(including magnetospheres, outflows, accretion, the inner disk\nand shocks between winds and young planetary disks) is released\nin the UV. To separate the various contributions, either very high\nspatial resolution or moderate time resolution is necessary.\n\nCurrent surveys show that although there are a few nearby TTSs\nand WTTSs sparsely distributed around the Sun (AB~Dor at 14.9~pc\nor TW~Hya at 56~pc), the nearest star forming complexes are\nconcentrated in a {\\it Star Formation Belt} (SFB) at 140~pc\naround the Sun, which includes the Taurus, Auriga-Perseus, Ophiuchus,\nLupus and Chamaeleon molecular clouds and several thousands of\npre-main sequence stars forming in various environments (clustered\nas in Ophiuchus, sparse as in Taurus). Resolving spatial scales of\na tenth of the solar radius in the SFB would allow studying the\nconnection between the star and the outflow in full detail.\n
For\nthis purpose, spatial resolutions of 3.3 microarcseconds are\nrequired; thus, for a fiducial wavelength of 1500~\\AA, the aperture\nmust be about 10~km. Such long-baseline interferometry must be\ncarried out in space, and the Stellar Imager project (Carpenter\net al 2008) represents a first attempt at such an ambitious\nundertaking.\n\nHowever, the requirements are less stringent for resolving the inner\ndisk structure, the disk wind and the ejection of plasmoids from the\ncurrent layer between the stellar and the disk wind. In such a\ncase, spatial resolutions of 0.5-1 milliarcseconds (mas) would be\nenough to map the SFB sources, thus requiring apertures of 20~m and\neffective areas about 10$^4$ times those of HST\/STIS; research on\nnew coatings and detectors for the UV (Kappelmann \\&\nBarnstedt, 2007) as well as a clever optical design may account\nfor a factor of ten but, still, a larger, $\\sim 30$~m aperture will\nbe required to reach SNR$\\simeq 10$ in the C~IV line in reasonable\nexposure times (a few hours). There is an ongoing project that\nsatisfies these requirements, the Fresnel Interferometer (see\nKoechlin et al 2008), and some projections of the expected\nperformance of the interferometer for mapping the engine are\nplotted in Fig.~9.\n\nBoth space interferometer projects are under study by their\nnational space agencies and, if they succeed, they will\nbe available around 2030. Is there anything else to be done\n{\\it in the meantime}? 
The answer is definitely positive: time\nmapping will allow us to resolve the structures, since the\nvariability time scales are not the same for all the\nphenomena and they do not produce the same imprint on the\nspectra (in temperature, density or velocity).\nSome examples of the power of this technique have\nalready been shown in this contribution.\n\nHigh-resolution spectroscopy (R$\\sim 50,000$) is enough to\ndiscriminate among the various components; thus, scaling with\nthe fluxes of the weakest H$_2$ lines (from Herczeg et al\n2002) detected with the HST\/STIS, a factor of 10 increase\nof the effective area with respect to HST\/STIS is required\nto reach most of the sources\nin the SFB. The COS instrument on HST will provide\na factor of 10 sensitivity increase with respect to STIS\nin the 1150-1700~\\AA\\ range because of its optimized optical\ndesign (see Froning et al 2008); unfortunately,\nthe orbital constraints of HST do not favour monitoring\nprograms. High-orbit missions like the WSO-UV (see Shustov et al\n2008) are better suited for this purpose.\n\nAn additional factor of 10\nwould be required to obtain SNR$\\sim 10$ in exposure times\nof a few minutes; such short time scales are necessary to\nmap variations on flare time scales, as shown in\nFig.~8, for the pre-main sequence stars in the SFB.\nUnfortunately, spectroscopic monitoring of the flaring activity in the\nSFB will have to wait for future missions, with apertures\nof about 8-10~m, preferably located at L2.\n\n\n\n\\begin{figure*}\n\\centerline{\\includegraphics[width=26pc]{gomezdecastro_fig9.eps}}\n\\caption[]{Theoretical prediction of the Si~III] emissivity from\nnumerical simulations of star-disk interaction (G\\'omez de Castro\n\\& Rekowski, 2008). The stellar magnetosphere is assumed to be\ndipolar with a field strength at the surface of 1~kG. 
The\nmagnetosphere interacts with the disk, which is under a moderate\n$\\alpha$-dynamo effect\\footnote{The magnetic field in the disk is\nassumed to be generated by a standard $\\alpha^2 \\Omega$ dynamo\n(e.g. Krause \\& Raedler 1980), where $\\alpha$ is the mean-field\n$\\alpha$ effect and $\\Omega$ the angular velocity of the plasma.\nThe $\\alpha$ scaling is $\\alpha = -0.1 \\frac{z}{z_0} \\frac{\\chi_{\\rm disk}(r, z)}{1+ V_A^2\/C_s^2}$, where $\\chi_{\\rm disk}$ is\nthe disk profile (see von Rekowski et al 2003), $z_0$ is the disk\nhalf-thickness, $V_A$ is the local Alfv\\'en velocity and $C_s$ is\nthe local sound speed.}. The inner disk wind is\nmagnetocentrifugally accelerated ({\\it top panel}).\n\nThe convolution of the theoretical prediction with the point\nspread function of the 30~m Fresnel Interferometer\n(FII) has been carried out by Laurent Koechlin\nand Truswin Raksasataya.\nThe convolution is shown for three inclinations\n(0$^\\circ$, 45$^\\circ$ and 90$^\\circ$) and three distances to the Earth\n(15~pc, 40~pc and 140~pc). 
Notice that the inner ring is resolved\neven for 140~pc.\n }\n\\end{figure*}\n\n\n\n\n\n\n\\acknowledgments\n\nThis work has been supported by the Ministry of Education of Spain\nthrough grant AYA2007-67726 and the Comunidad Aut\\'onoma de Madrid\nthrough grant CAM-S-0505\/ESP\/0237\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Conclusion}\n\\label{sec:conclusion}\n\\vspace{-0.4em}\nWe hypothesised that explicitly modeling the internal structure of complex labels for morphological tagging improves the overall tagging accuracy over the baseline with monolithic tags.\nTo test this hypothesis, we experimented with three approaches to model composite morphological tags in a neural sequence tagging framework.\nExperimental results on 49 languages demonstrated the advantage of modeling morphological labels as sequences of category values, whereas the superiority of this model is especially pronounced on smaller datasets.\nFurthermore, we showed that, in contrast to baselines, our models are capable of predicting labels that were not seen during training.\n\n\\section{Analysis and Discussion}\n\\label{sec:discussion}\n\n\n\\paragraph{OOV label accuracy}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.95\\columnwidth]{fig\/oov_labels_seq_crop}\n\\caption{OOV label accuracies of the \\textsc{Seq} model.}\n\\label{fig:oov_labels_seq}\n\\end{figure}\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=\\columnwidth]{fig\/category_errors_crop}\n\\caption{Average error rates of distinct morphological categories for \\textsc{Seq} and \\textsc{Mc} models.}\n\\label{fig:cat_errors}\n\\end{figure}\n\n\nOur models are able to predict labels that were not seen in the training data. Figure~\\ref{fig:oov_labels_seq} presents the accuracy of test tokens with OOV labels obtained with our best performing \\textsc{Seq} model plotted against the number of OOV label types. The datasets with zero accuracy are omitted. 
The main observation is that although the OOV label accuracy is zero for some languages, it is above zero on ca. half of the datasets---a result that would be impossible with \\textsc{MarMoT} or \\textsc{Mc} baselines.\n\n\n\n\n\n\n\\paragraph{Error Analysis}\nFigure~\\ref{fig:cat_errors} shows the largest error rates for distinct morphological categories for both \\textsc{Seq} and \\textsc{Mc} models averaged over all languages. \nWe observe that the error patterns are similar for both models but the error rates of the \\textsc{Seq} model are consistently lower as expected. \n\n\\iffalse\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=\\columnwidth]{fig\/feature_confusion_crop}\n\\caption{Most common category-value confusion patterns.}\n\\label{fig:feature_confusion}\n\\end{figure}\n\nTo better understand the nature of these errors, we looked at the confusion patterns of category-value pairs. Ten most common confusion patterns over all languages are shown on Figure~\\ref{fig:feature_confusion}. For each confusion pattern, we plot both its average occurrence count together with the average proportion with respect to the first category-value pair in the pattern. We only show confusion patterns for which the average proportion is at least $0.7$. \nThe most common source of errors is predicting a value for a category that has not been annotated in the test set, e.g. predicting a \\emph{case} value for a noun which does not have an annotated \\emph{case} attribute. Since the error patterns are similar for both models, we conclude that both \\textsc{Seq} and \\textsc{Mc} models learn roughly the same classification function but due to the flexibility in composing the label, the \\textsc{Seq} model makes less errors. 
\n\\fi\n\n\\paragraph{Stability Analysis}\nTo assess the stability of our predictions, we picked five languages from different families and with different corpus size, and performed five independent train\/test runs for each language.\nTable~\\ref{tbl:stability} summarises the results of these experiments and demonstrates a reasonably small variance for all languages. \nFor all languages, except for Finnish, the worst accuracy of the \\textsc{Seq} model was better than the best accuracy of the \\textsc{Mc} model, confirming our results that in those languages, the \\textsc{Seq} model is consistently better than the \\textsc{Mc} baseline.\n\n\\begin{table}[t]\n\\centering\n\\small\n\\begin{tabular}{lcc}\n\\toprule\nDataset & \\textsc{Seq} & \\textsc{Mc} \\\\\n\\midrule\nFinnish & 93.24 $\\pm$ 0.12 & 93.20 $\\pm$ 0.07 \\\\\nGerman & 88.45 $\\pm$ 0.21 & 87.74 $\\pm$ 0.17 \\\\\nHungarian & 84.51 $\\pm$ 0.54 & 80.68 $\\pm$ 0.48 \\\\\nRussian & 91.08 $\\pm$ 0.18 & 90.13 $\\pm$ 0.15 \\\\\nTurkish & 90.29 $\\pm$ 0.24 & 89.16 $\\pm$ 0.27 \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Mean accuracy with standard deviation over five independent runs for \\textsc{Seq} and \\textsc{Mc} models.}\n\\label{tbl:stability}\n\\vspace{-1em}\n\\end{table}\n\n\\paragraph{Hyperparameter Tuning}\nIt is possible that the hyperparameters tuned on Finnish are not optimal for other languages and thus, tuning hyperparameters for each language individually would lead to different conclusions than currently drawn.\nTo shed some light on this issue, we tuned hyperparameters for the \\textsc{Seq} and \\textsc{Mc} models on the same subset of five languages.\nWe first independently optimised the dropout rates on word embeddings, encoder's LSTM inputs and outputs, as well as the number of LSTM layers.\nWe then performed a grid search to find the optimal initial learning rate, the learning rate decay factor and the decay step.\nValue ranges for the tuned parameters are given in 
Table~\\ref{tbl:tuning_grid}.\n\n\\begin{table}[h]\n\\centering\n\\small\n\\begin{tabular}{lc}\n\\toprule\nParameter & Values \\\\\n\\midrule\nWord embedding dropout & $\\{0, 0.1, \\dots, 0.5\\}$ \\\\\nLSTM input dropout & $\\{0, 0.1, \\dots, 0.5\\}$ \\\\\nLSTM output dropout & $\\{0, 0.1, \\dots, 0.5\\}$ \\\\\nNumber of LSTM layers & $\\{1, 2\\}$ \\\\\n\\midrule\nInitial learning rate & $\\{0.01, 0.1, 1, 2\\}$ \\\\\nLearning rate decay factor & $\\{0.97, 0.98, 0.99, 1\\}$ \\\\\nDecay step & $\\{1250, 2500, 5000\\}$ \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{The grid values for hyperparameter tuning.}\n\\label{tbl:tuning_grid}\n\\vspace{-1em}\n\\end{table}\n\n\nTable~\\ref{tbl:tuning} reports accuracies for the tuned models compared to the mean accuracies reported in Table~\\ref{tbl:stability}.\nAs expected, both tuned models demonstrate superior performance on all languages, except for German with the \\textsc{Seq} model.\nHyperparameter tuning has a greater overall effect on the \\textsc{Mc} model, which suggests that it is more sensitive to the choice of parameters than the \\textsc{Seq} model.\nStill, the tuned \\textsc{Seq} model performs better than or at least as well as the \\textsc{Mc} model on all languages.\n\n\\begin{table}[t]\n\\centering\n\\small\n\\begin{tabular}{lcc|cc}\n\\toprule\nDataset & \\textsc{Seq} & Gain & \\textsc{Mc} & Gain \\\\ \n\\midrule\nFinnish & 93.44 & $+0.20$ & 93.43 & $+0.23$ \\\\\nGerman & 88.35 & $-0.10$ & 88.14 & $+0.40$ \\\\\nHungarian & 85.56 & $+1.05$ & 82.29 & $+1.61$ \\\\\nRussian & 91.44 & $+0.36$ & 90.74 & $+0.61$ \\\\\nTurkish & 90.56 & $+0.27$ & 89.32 & $+0.16$ \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Accuracies of the tuned \\textsc{Seq} and \\textsc{Mc} models compared to the mean accuracies in Table~\\ref{tbl:stability}.}\n\\label{tbl:tuning}\n\\end{table}\n\n\\paragraph{Comparison with Previous Work}\nSince the UD datasets have been in rapid development and different UD versions do not match, direct comparison of our 
results to previously published results is difficult.\nStill, we show the results taken from \\citet{heigold2017}, which were obtained on UDv1.3, to provide a very rough comparison.\nIn addition, we compare our \\textsc{Seq} model with a neural tagger presented by \\citet{Dozat2017}, which is similar to our \\textsc{Mc} model, but employs a more sophisticated encoder.\nWe train this model on UDv2.1 on the same set of languages used by \\citet{heigold2017}.\n\nTable~\\ref{tbl:result_comparison} reports evaluation results for the three models.\nThe \\textsc{Seq} model and Dozat's tagger demonstrate comparable performance.\nThis suggests that the \\textsc{Seq} model can be further improved by adopting a more advanced encoder from \\citet{Dozat2017}.\n\n\n\\begin{table}\n\\centering\n\\small\n\\begin{tabular}{lcc|c}\n\\toprule\nDataset & \\textsc{Seq} & Dozat & Heigold \\\\\n\\midrule\nArabic & \\textbf{93.84} & 92.85 & 93.78 \\\\\nBulgarian & 97.04 & \\textbf{97.25} & 95.14 \\\\\nCzech & \\bf 95.39 & 95.22 & 96.32 \\\\\nEnglish & 94.80 & \\textbf{94.81} & 93.32 \\\\\nEstonian & 93.30 & \\bf 93.90 & 94.25 \\\\\nFinnish & 93.41 & \\textbf{93.73} & 93.52 \\\\\nFrench & \\textbf{96.39} & 95.90 & 94.91 \\\\\nHindi & 91.75 & \\textbf{92.36} & 90.84 \\\\\nHungarian & \\textbf{84.12} & 82.84 & 77.59 \\\\\nRomanian & 97.16 & \\textbf{97.20} & 94.12 \\\\\nRussian-SynTagRus & \\textbf{96.67} & 96.20 & 96.45 \\\\\nTurkish & \\textbf{90.70} & 90.22 & 89.12 \\\\\n\\midrule\nAverage & \\bf 93.71 & 93.54 & 92.45 \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Accuracies for the \\textsc{SEQ} model, \\citet{Dozat2017} and \\citet{heigold2017}.}\n\\label{tbl:result_comparison}\n\\vspace{-1em}\n\\end{table}\n\n\n\\iffalse\n\\paragraph{Runtime}\nWe perform runtime analysis on Nvidia Tesla P100 graphics processor.\nIn the training phase, the \\textsc{Seq} model processes 43 sentences per second on average.\nAlthough it is ca 4.5 times slower than the \\textsc{Mc} model\\footnote{Note that 
the batch size is 5 for \\textsc{Seq} and 20 for \\textsc{Mc} model.}, the overall training time is still reasonable, ranging from few hours for smaller corpora, such as Turkish, to several days for the largest corpora, such as Czech.\nThe inference speed is similar for both models, with the \\textsc{Seq} model running at a rate of 460 sentences per second on average with a batch size of 50.\n\n\n \n\n\n\\fi\n\\section{Experimental Setup}\n\\label{sec:experiments}\n\n\\begin{table*}\n\\centering\n\\ssmall\n\\renewcommand{\\arraystretch}{0.98}\n\\tabcolsep=0.11cm\n\\input{tbl_corpus_stats}\n\\caption{\\small Descriptive statistics for all UDv2.1 datasets. For training sets we report the number of word tokens and types, the average (Avg) and maximum (Max) tags per word type, the proportion of word types for which pre-trained embeddings were available (\\% Emb) and the size of the morphological tagset (\\# Tags). For the test sets, we also give the total number of tokens and types, the proportion of OOV words (\\% OOV) and the number of OOV tag tokens and types.}\n\\label{tbl:corpus_stats}\n\\end{table*}\n\nThis section details the experimental setup. We describe the data, then we introduce the baseline models and finally we report the hyperparameters of the models.\n\n\\subsection{Data}\nWe run experiments on the Universal Dependencies version 2.1~\\citep{nivre2017}.\nWe excluded corpora that did not include train\/dev\/test split, word form information\\footnote{French-FTB and Arabic-NYUAD}, or morphological features\\footnote{Japanese}. Additionally, we excluded corpora for which pre-trained word embeddings were not available.\\footnote{Ancient Greek and Coptic}\nThe resulting dataset contains 69 corpora covering 49 different languages.\nTagsets were constructed by concatenating the POS and morphological annotations of the treebanks.\nTable~\\ref{tbl:corpus_stats} gives corpus statistics. We present type and token counts for both training and test sets. 
For the training sets, we also show\nthe average and maximum number of tags per word type and the size of the morphological tagset. \nFor the test sets, we report the proportion of out-of-vocabulary (OOV) words as well as the number of OOV tag tokens and types.\n\nIn the encoder, we use fastText word embeddings \\citep{bojanowski2017} pre-trained on Wikipedia.\\footnote{\\ssmall \\url{https:\/\/github.com\/facebookresearch\/fastText}}\nAlthough these embeddings are uncased, our model still captures case information by means of character-level embeddings.\nIn Table~\\ref{tbl:corpus_stats}, we also report for each language the proportion of word types for which the pre-trained embeddings are available.\n\n\n\\subsection{Baseline Models}\n\nWe use two models as baselines: the CRF-based \\textsc{MarMoT} \\citep{mueller2013} and a regular neural multiclass classifier.\n\n\\paragraph{MarMoT (\\textsc{MMT})}\n\\textsc{MarMoT}\\footnote{\\url{http:\/\/cistern.cis.lmu.de\/marmot\/}} is a CRF-based morphological tagger which has been shown to achieve competitive performance across several languages \\citep{mueller2013}.\n\\textsc{MarMoT} approximates the CRF objective using a pruning strategy which enables training higher-order models and handling large tagsets.\nIn particular, the tagger first predicts the POS part of the label and, based on that, constrains the set of possible morphological labels.\nFollowing the results of \\citet{mueller2013}, we train second-order models. 
We tuned the regularization type and weight on the German development set and, based on that, we use L2 regularization with weight 0.01 in all our experiments.\n\n\\paragraph{Neural Multiclass classifier (\\textsc{Mc})}\nAs the second baseline, we employ the standard multiclass classifier used by both \\citet{heigold2017} and \\citet{yu2017}.\nThe proposed model consists of an LSTM-based encoder, identical to the one described above in section~\\ref{sec:encoder}, and a softmax classifier over the full tagset.\nThe tagset size for each corpus is shown in Table~\\ref{tbl:corpus_stats}.\nDuring preliminary experiments, we also added a CRF layer on top of the softmax, but as this made the decoding process considerably slower without any visible improvement in accuracy, we did not adopt CRF decoding here.\nThe multiclass model is shown in Figure~\\ref{fig:neural_models} (d).\n\nThe inherent limitation of both baseline models is their inability to predict tags that are not present in the training corpus. 
Although the number of such tags in our data set is not large, it is nevertheless non-zero for most languages.\n\n\n\\subsection{Training and Parametrisation}\\label{sbsec:training}\nSince tuning model hyperparameters for each of the 69 datasets individually is computationally demanding, \nwe optimise parameters on Finnish---a morphologically complex language with a reasonable dataset size---and apply the resulting values to other languages.\nWe first tuned the character embedding size and character-LSTM hidden layer size of the encoder on the \\textsc{Seq} model and reused the obtained values with all other models.\nWe tuned the batch size, the learning rate and the decay factor for the \\textsc{Seq} and \\textsc{Mc} models separately since these models are architecturally quite different.\nFor the \\textsc{McMl} and \\textsc{HMcMl} models we reuse the values obtained for the \\textsc{Mc} model.\nThe remaining hyperparameter values are fixed.\nTable~\\ref{tbl:parameters} lists the hyperparameters for all models.\n\nWe train all neural models using stochastic gradient descent for up to 400 epochs and stop early if there has been no improvement on development set within 50 epochs.\nFor all models except \\textsc{Seq}, we decay the learning rate by a factor of 0.98 after every 2500 batch updates.\nWe initialise biases with zeros and parameter matrices using Xavier uniform initialiser~\\citep{Glorot2010}.\n\nWords in training sets with no pre-trained embeddings are initialised with random embeddings.\nAt test time, words with no pre-trained embedding are assigned a special UNK-embedding.\nWe train the UNK-embedding by randomly substituting the singletons in a batch with the UNK-embedding with a probability of 0.5.\n\n\\begin{table}[t]\n\\centering\n\\footnotesize\n\\tabcolsep=0.11cm\n\\begin{tabular}{lrr}\n\\toprule\n & \\textsc{Seq} & \\textsc{Other NN} \\\\\n\\midrule\n\\textbf{Encoder} & & \\\\\nWord embedding size & 300 & 300 \\\\\nCharacter embedding size & 100 & 
100 \\\\ \nCharacter LSTM hidden layer size & 150 & 150 \\\\\nWord embedding dropout & 0.5 & 0.5 \\\\\nLSTM layers & 1 & 1 \\\\\nLSTM hidden state size & 400 & 400 \\\\\nLSTM input dropout & 0.5 & 0.5 \\\\\nLSTM state dropout & 0.3 & 0.3 \\\\\nLSTM output dropout & 0.5 & 0.5 \\\\\n\\midrule\n\\textbf{Decoder} & & \\\\\nLSTM hidden state size & 800 & 800 \\\\\nTag embedding size & 150 & -- \\\\\n\\midrule\n\\textbf{Training} & & \\\\\nInitial learning rate & 1.0 & 1.0 \\\\\nBatch size & 5 & 20 \\\\\nMaximum epochs & 400 & 400 \\\\\nLearning rate decay factor & -- & 0.98 \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Hyperparameters for neural models.}\n\\label{tbl:parameters}\n\\end{table}\n\n\n\n\n\\section{Introduction}\n\\label{sec:introduction}\n\n\n\nThe common approach to morphological tagging combines the set of word's morphological features into a single monolithic tag and then, similar to POS tagging, employs multiclass sequence classification models such as CRFs \\citep{mueller2013} or recurrent neural networks \\citep{labeau2015,heigold2017}.\nThis approach, however, has a number of limitations.\nFirstly, it ignores the intrinsic compositional structure of the labels and treats two labels that differ only in the value of a single morphological category as completely independent; compare for instance labels \\textsc{[POS=noun,Case=Nom,Num=Sg]} and \\textsc{[POS=noun,Case=Nom,Num=Pl]} that only differ in the value of the \\textsc{Num} category.\nSecondly, it introduces a data sparsity issue as the less frequent labels can have only few occurrences in the training data.\nThirdly, it excludes the ability to predict labels not present in the training set which can be an issue for languages such as Turkish where the number of morphological tags is theoretically unlimited \\citep{yuret2006}.\n\nTo address these problems we propose to treat morphological tags as composite labels and explicitly model their internal structure. 
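As a concrete illustration of the monolithic versus composite views of a tag, consider the two example labels above; the following minimal Python sketch uses our own hypothetical helpers and bracketed tag syntax, not anything from the tagger itself:

```python
# Monolithic vs. composite views of a morphological tag. The bracketed
# tag strings follow the example in the text; split_tag/join_tag are
# illustrative helpers only.

def split_tag(tag):
    """Decompose a monolithic label into (category, value) pairs."""
    return [tuple(pair.split("=")) for pair in tag.strip("[]").split(",")]

def join_tag(pairs):
    """Reassemble the monolithic label from (category, value) pairs."""
    return "[" + ",".join(f"{cat}={val}" for cat, val in pairs) + "]"

sg = "[POS=noun,Case=Nom,Num=Sg]"
pl = "[POS=noun,Case=Nom,Num=Pl]"

# As monolithic labels the two tags are unrelated atoms; as sequences of
# category-value pairs they share two of their three components, which is
# exactly the structure the composite models can exploit.
shared = set(split_tag(sg)) & set(split_tag(pl))
print(sorted(shared))  # [('Case', 'Nom'), ('POS', 'noun')]
```

Under the monolithic view the two labels would each need their own training examples; under the composite view, evidence for `Case=Nom` is shared between them.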
\nWe hypothesise that by doing so, we are able to alleviate the sparsity problems, especially for languages with very large tagsets such as Turkish, Czech or Finnish, and at the same time also improve the accuracy over a baseline using monolithic labels.\nWe explore three different neural architectures to model the compositionality of morphological labels.\nIn the first architecture, we model all morphological categories (including the POS tag) as independent multiclass classifiers conditioned on the same contextual word representation.\nThe second architecture organises these multiclass classifiers into a hierarchy---the POS tag is predicted first and the values of morphological categories are predicted conditioned on the value of the predicted POS.\nThe third architecture models the label as a sequence of morphological category-value pairs.\nAll our models share the same neural encoder architecture based on bidirectional LSTMs to construct contextual representations for words \\citep{lample2016}.\n\nWe evaluate all our models on 49 UD version 2.1 languages.\nExperimental results show that our sequential model outperforms the other neural counterparts, establishing state-of-the-art results in morphological tagging for most languages.\nWe also confirm that all neural models perform significantly better than a competitive CRF baseline.\nIn short, our contributions can be summarised as follows:\n\\begin{enumerate}[label=\\arabic*),topsep=0em,noitemsep]\n\\item We propose to model the compositional internal structure of complex morphological labels for morphological tagging in a neural sequence tagging framework;\n\\item We explore several neural architectures for modeling the composite morphological labels;\n\\item We find that the tag representation based on the sequence learning model achieves state-of-the-art performance on many languages;\n\\item We present state-of-the-art morphological tagging results on 49 languages on the UDv2.1 
corpora.\n\\end{enumerate}\n\n\n\\section*{Acknowledgments}\nThis work was supported by the Estonian Research Council (grants no. 2056, 1226 and IUT34-4).\n\n\n\n\\section{Neural Models}\n\\label{sec:models}\n\n\\begin{figure*}[]\n\\centering\n\\includegraphics[width=0.85\\textwidth]{fig\/neural_models_crop.pdf}\n\\caption{Neural architectures for modeling complex morphological labels: a) Multiclass Multilabel model (\\textsc{McMl}), b) Hierarchical Multiclass Multilabel model (\\textsc{HMcMl}), c) Sequence model (\\textsc{Seq}) and d) Multiclass baseline model (\\textsc{Mc}). \nCorrect labels are shown with a green border, incorrect labels have a red dotted border.}\n\\label{fig:neural_models}\n\\end{figure*}\n\nWe explore three different neural architectures for modeling morphological labels: multiclass multi\\-label model that predicts each category value separately, hierarchical multiclass multilabel model where the values of morphological features depend on the value of the POS, and a sequence model that generates morphological labels as sequences of feature-value pairs.\n\n\n\\subsection{Notation}\nGiven a sentence $w_1, \\dots, w_n$ consisting of $n$ words, we want to predict the sequence $t_1, \\dots, t_n$ of morphological labels for that sentence.\nEach label $t_i = \\{f_{i0}, f_{i1}, \\dots, f_{im}\\}$ consists of a POS tag ($f_{i0} \\equiv \\textsc{POS}$) and a sequence of $m$ category values. \nFor each word $w_i$, the encoder computes a contextual vector $h_i$, which captures information about the word and its left and right context.\n\n\\subsection{Decoder Models}\n\n\\paragraph{Multiclass Multilabel model (\\textsc{McMl})}\nThis model formulates the morphological tagging as a multiclass multilabel classification problem. 
\nFor each morphological category, a separate multiclass classifier is trained to predict the value of that category (Figure~\\ref{fig:neural_models} (a)).\nBecause not all categories are always present \nfor each POS (e.g., a noun does not have a \\textit{tense} category), we extend the morphological label of each word by adding all features that are missing from the annotated label and assign them a special value that marks the category as ``off''.\nFormally, the model can be described as:\n\\begin{equation}\np(t|h)_{\\textsc{McMl}} = \\prod_{j=0} ^ M p(f_j|h),\n\\end{equation}\nwhere $M$ is the total number of morphological categories (such as case, number, tense, etc.)\nobserved in the training corpus.\nThe probability of each feature value is computed with a softmax function:\n\\begin{equation*}\np(f_j|h)_{\\textsc{McMl}} = \\text{softmax}(W_j h + b_j),\n\\end{equation*}\nwhere $W_j$ and $b_j$ are the parameter matrix and bias vector for the $j$th morphological feature ($ j=0, \\dots, M$).\nThe final morphological label for a word is obtained by concatenating predictions for individual categories while filtering out off-valued categories.\n\n\\paragraph{Hierarchical Multiclass Multilabel model (\\textsc{HMcMl})}\nThis is a hierarchical version of the \\textsc{McMl} architecture that models the values of morphological categories as directly dependent on the POS tag (Figure~\\ref{fig:neural_models} (b)):\n\\begin{equation}\np(t|h)_{\\textsc{HMcMl}} = p(\\textsc{pos}|h)\\prod_{j=1} ^ M p(f_j|\\textsc{pos},h)\n\\end{equation}\nThe probability of the POS is computed from the context vector $h$ using the respective parameters:\n\\begin{equation*}\np(\\textsc{pos}|h) = \\text{softmax}(W_{\\textsc{pos}} h + b_{\\textsc{pos}})\n\\end{equation*}\nThe POS-dependent context vector $l$ is obtained by concatenating the context vector $h$ with the unnormalised log probabilities of the POS:\n\\begin{equation*}\nl = [h; W_{\\textsc{pos}} h + 
b_{\\textsc{pos}}]\n\\end{equation*}\nThe probabilities of the morphological features are computed using the POS-dependent context vector:\n\\begin{equation*}\np(f_j|\\textsc{pos}, h) = \\text{softmax}(W_j l + b_j) \\hspace{2mm} j=1, \\dots, M\n\\end{equation*}\n\n\\paragraph{Sequence model (\\textsc{Seq})}\nThe \\textsc{Seq} model predicts complex morphological labels as sequences of category values. This approach is inspired by neural sequence-to-sequence models commonly used for machine translation \\citep{Cho2014a,Sutskever2014}.\nFor each word in a sentence, the\ndecoder uses a unidirectional LSTM network (Figure~\\ref{fig:neural_models} (c)) to generate a sequence of morphological category-value pairs based on the context vector $h$\nand the previous predictions.\nUnder this model, the probability of a morphological label $t$ is:\n\\begin{equation}\np(t|h)_{\\textsc{Seq}} = \\prod_{j=0} ^ m p(f_j|f_0, \\dots, f_{j-1}, h)\n\\end{equation}\n\nDecoding starts by passing the start-of-sequence symbol as input.\nAt each time step, the decoder computes the label context vector $g_j$ based on the previously predicted category value, the previous label context vector and the word's context vector:\n\\begin{equation*}\ng_j = \\text{LSTM}([f_{j-1}; h], g_{j-1})\n\\end{equation*}\nThe probability of each morphological feature-value pair is then computed with a softmax:\n\\begin{equation*}\np(f_j|g_j)_{\\textsc{Seq}} = \\text{softmax}(W_{\\textsc{seq}} g_j + b_{\\textsc{seq}})\n\\end{equation*}\nAt training time, we feed the correct labels as inputs, while at inference time, we greedily emit the best prediction from the set of all possible feature-value pairs.\nThe decoding terminates once the end-of-sequence symbol is produced.\n\n\n\\subsection{Encoder}\n\\label{sec:encoder}\nWe adopt a standard sequence tagging encoder architecture for all our models.\nIt consists of a bidirectional LSTM network that maps words in a sentence into context vectors using character and 
word-level embeddings.\nCharacter-level word embeddings are constructed with a bidirectional LSTM network and they capture useful information about words' morphology and shape.\nWord level embeddings are initialised with pre-trained embeddings and fine-tuned during training.\nThe character and word-level embeddings are concatenated and passed as inputs to the bidirectional LSTM encoder.\nThe resulting hidden states $h_i$ capture contextual information for each word in a sentence.\nSimilar encoder architectures have been applied recently with notable success to morphological tagging \\citep{heigold2017,yu2017} as well as several other sequence tagging tasks \\citep{lample2016,Chiu2016,ling2015}.\n\n\\section{Related Work}\n\\label{sec:related}\n\n\nMost previous work on modeling the internal structure of complex morphological labels has occurred in the context of morphological disambiguation---a task where the goal is to select the correct analysis from a limited set of candidates provided by a morphological analyser. The most common strategy to cope with a large number of complex labels has been to predict all morphological features of a word using several independent classifiers whose predictions are later combined using some scoring mechanism \\citep{hajic1998,hajic2000,smith2005,yuret2006,zalmout2017,kirov2017}. \n\\citet{inoue2017} combined these classifiers into a multitask neural model sharing the same encoder, and predicted both POS tag and morphological category values given the same contextual representation computed by a bidirectional LSTM. \nThey showed that the multitask learning setting outperforms the combination of several independent classifiers on tagging Arabic.\nIn this paper, we experiment with the same architecture, termed as multiclass multilabel model, on many languages.\nAdditionally, we extend this approach and explore a hierarchical architecture where morphological features directly depend on the POS tag. 
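To make the per-category factorization discussed here concrete, the prediction step of a multiclass multilabel decoder can be sketched in a few lines of numpy; the category inventories, layer sizes and random weights below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a context vector h for one word and one (W_j, b_j) softmax
# head per morphological category, as in p(t|h) = prod_j p(f_j|h).
# Category inventories and sizes are made-up for illustration.
categories = {
    "POS":  ["noun", "verb", "adj"],
    "Case": ["off", "Nom", "Gen", "Par"],  # "off" marks an absent category
    "Num":  ["off", "Sg", "Pl"],
}
hidden = 8
h = rng.normal(size=hidden)
heads = {c: (rng.normal(size=(len(v), hidden)), np.zeros(len(v)))
         for c, v in categories.items()}

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Predict each category independently, then assemble the composite label,
# filtering out the "off"-valued categories.
pred = {}
for cat, values in categories.items():
    W, b = heads[cat]
    pred[cat] = values[int(np.argmax(softmax(W @ h + b)))]
label = [f"{c}={v}" for c, v in pred.items() if v != "off"]
print(label)
```

The hierarchical variant differs only in that the non-POS heads receive `h` concatenated with the POS logits instead of `h` alone.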
\n\n\n\nAnother previously adopted approach involves modeling complex morphological labels as sequences of morphological feature values \\citep{hakkani2000,schmid2008}. In neural networks, this idea can be implemented with recurrent sequence modeling. Indeed, one of our proposed models generates morphological tags with an LSTM network. A similar idea has been applied for the morphological reinflection task \\citep{kann2016,faruqui2016} where the sequential model is used to generate the spellings of inflected forms given the lemma and the morphological label of the desired form. In morphological tagging, however, we generate the morphological labels themselves.\n\nAnother direction of research on modeling the structure of complex morphological labels involves structured prediction models~\\citep{mueller2013,mueller2015,malaviya2018,lee2011}.\n\\citet{lee2011} introduced a factor graph model that jointly infers morphological features and syntactic structures.\n\\citet{mueller2013} proposed a higher-order CRF model which handles large morphological tagsets by decomposing the full label into a POS tag and a morphology part.\n\\citet{malaviya2018} proposed a factorial CRF to model pairwise dependencies between individual features within morphological labels and also between labels over time steps for cross-lingual transfer.\nRecently, neural morphological taggers have been compared to the CRF-based approach \\citep{heigold2017,yu2017}.\nWhile \\citet{heigold2017} found that their neural model with a bidirectional LSTM encoder surpasses the CRF baseline, the results of \\citet{yu2017} are mixed, with the convolutional encoder being slightly better or on par with the CRF but the LSTM encoder being worse than the CRF baseline.
\n\nMost previous work on neural POS and morphological tagging has shared the general idea of using a bidirectional LSTM for computing contextual features for words \\citep{ling2015,huang2015,labeau2015,ma2016,heigold2017}.\nThe focus of the previous work has been mostly on modeling the inputs by exploring different character-level representations for words \\citep{heigold2016,santos2014,ma2016,inoue2017,ling2015,rei2016}.\nWe adopt the general encoder architecture from these works, constructing word representations from characters and using another bidirectional LSTM to encode the context vectors. \nIn contrast to these previous works, our focus is on modeling the compositional structure of the complex morphological labels. \n\nThe morphologically annotated Universal Dependencies (UD) corpora~\\citep{nivre2017} offer a great opportunity for experimenting on many languages. \nSome previous work has reported results on several UD languages \\citep{yu2017,heigold2017}. \nMorphological tagging results on many UD languages have also been reported for parsing systems that predict POS and morphological tags as preprocessing \\citep{andor2016,straka2016,straka2017}.\nSince UD treebanks have been in constant development, these results have been obtained on different UD versions and thus are not necessarily directly comparable. We conduct experiments on all UDv2.1 languages and we aim to provide a baseline for future work in neural morphological tagging.\n\n\\section{Results}\n\\label{sec:results}\n\n\\begin{table*}\n\\centering\n\\ssmall\n\\renewcommand{\\arraystretch}{0.91}\n\\tabcolsep=0.095cm\n\\input{tbl_results}\n\\caption{\\small Morphological tagging accuracies on UDv2.1 test sets for MarMoT (\\textsc{MMT}) and \\textsc{Mc} baselines as well as for \\textsc{McMl}, \\textsc{HMcMl} and \\textsc{Seq} compositional models. The left section shows the full \\textsc{Pos+Morph} tag results, the middle section gives accuracies for OOV words only, and the right-most section shows the POS tagging accuracy. The best result in each section for each language is in bold. The languages are color-coded according to the training set size, lighter color denoting a larger training set: cyan (<20K), violet (20K-50K), magenta (50K-100K), pink (>100K).}\n\\label{tbl:results}\n\\end{table*}\n\nTable~\\ref{tbl:results} presents the experimental results. We report tagging accuracy for all word tokens and also for OOV tokens only. \nA full morphological tag is considered correct if both its POS and all morphological features are correctly predicted. \n\nFirst of all, we can confirm the results of \\citet{heigold2017} that the performance of neural morphological tagging indeed exceeds the results of a CRF-based model. In fact, all our neural models perform significantly better than \\textsc{MarMoT} ($p<0.001$).\\footnote{As indicated by a Wilcoxon signed-rank test.}\n\n\nThe best neural model on average is the \\textsc{Seq} model, which is significantly better than both the \\textsc{Mc} baseline and the other two compositional models, whereby the improvement is especially well-pronounced on smaller datasets. We do not observe any significant differences between the \\textsc{McMl} and \\textsc{HMcMl} models in either the all-words or the OOV-only evaluation setting.\n\nWe also present POS tagging results in the right-most section of Table~\\ref{tbl:results}. Here again, all neural models are better than the CRF, which is in line with the results presented by \\citet{plank2016}. For POS tags, the \\textsc{HMcMl} model is the best on average.\n
It is also significantly better than the neural \\textsc{Mc} baseline; however, the differences with the \\textsc{McMl} and \\textsc{Seq} models are insignificant.\n\nIn addition to full-tag accuracies, we assess the performance on individual features.\nTable~\\ref{tbl:features} reports macro-averaged F1-scores for the \\textsc{Seq} and the \\textsc{Mc} models on universal features.\nResults indicate that the \\textsc{Seq} model systematically outperforms the \\textsc{Mc} model on most features.\n\n\n\\begin{table}[t]\n\\centering\n\\small\n\\renewcommand{\\arraystretch}{0.91}\n\\tabcolsep=0.095cm\n\\begin{tabular}{lrrr|lrrr}\n\\toprule\nFeature & \\textsc{Seq} & \\textsc{Mc} & \\# & Feature & \\textsc{Seq} & \\textsc{Mc} & \\# \\\\\n\\midrule\nPOS & \\textbf{91.03} & 90.20 & 69 & NumType & \\textbf{89.68} & 87.82 & 54 \\\\\nNumber & \\textbf{94.02} & 93.05 & 63 & Polarity & \\textbf{93.83} & 92.86 & 54 \\\\\nVerbForm & \\textbf{91.29} & 89.86 & 61 & Degree & \\textbf{87.44} & 84.12 & 48 \\\\\nPerson & \\textbf{89.02} & 87.52 & 60 & Poss & \\textbf{94.52} & 93.60 & 44 \\\\\nTense & \\textbf{92.96} & 91.31 & 59 & Voice & \\textbf{88.40} & 82.85 & 42 \\\\\nPronType & \\textbf{89.83} & 88.81 & 58 & Definite & \\textbf{95.26} & 94.10 & 37 \\\\\nMood & \\textbf{87.34} & 85.40 & 58 & Aspect & \\textbf{89.76} & 87.71 & 29 \\\\\nGender & \\textbf{89.31} & 87.78 & 55 & Animacy & \\textbf{86.22} & 83.73 & 19 \\\\\nCase & \\textbf{88.90} & 87.04 & 55 & Polite & 75.76 & \\textbf{80.48} & 10 \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Performance of \\textsc{Seq} and \\textsc{Mc} models on individual features reported as macro-averaged F1-scores. \n}\n\\label{tbl:features}\n\\end{table}\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThere is a long and rich history of modeling the response of visual cortex neurons to stimuli extending back to the work of Hubel and Wiesel on simple and complex cells \\cite{hubel1962receptive}.
\nIn recent years, artificial neural networks (ANNs) have achieved state-of-the-art performance predicting neural responses to natural stimuli \\cite{Antolik2016,Yamins2016,Klindt2017,Cadena2019,Batty2016,Sinz2018,Yamins2014,Vintch2015,Ecker2018,Walker2019}. \nThese models are accurate enough that the stimuli that maximally excite a neuron can be computed \\textit{in silico}, and when tested \\textit{in vivo} indeed drive neurons effectively \\cite{Walker2019,Bashivan2019}.\nHowever, these approaches place the computational burden of optimizing network parameters \\textit{after} extensive data from a neuron has been collected, which prohibits their use in real-time closed-loop experiments.\nTo avoid this optimization step, we wanted a model that can predict the response of a novel neuron to any stimulus, conditioned on a set of $K$ observed stimulus-response pairs -- essentially performing $K$-Shot prediction on neural responses.\n\n\\citet{Garnelo2018a} aptly describe how Neural Processes (NPs) can help solve this problem:\n\\emph{``Meta-learning models share the fundamental motivations of NPs as they shift workload from training time to test time. NPs can therefore be described as meta-learning algorithms for few-shot function regression''}. \nNPs achieve this by embedding input and output measurements into a latent space that maps to a space of functions, essentially learning the distribution over functions and a method to infer the posterior over functions given limited samples \\cite{Garnelo2018a,Garnelo2018,Kim2019}. \n\nA significant advance in modeling visual responses with ANNs was using convolutional neural networks with a factorized readout between the tuning function's location and properties \\cite{Klindt2017, Sinz2018}.\nWe found that NPs struggle to learn the space of tuning functions from stimulus-response samples without such a factorized representation. 
\nThus, we developed a Factorized Neural Process (FNP), which is composed of stacking multiple NPs.\nA key insight for this was that by passing the latent variable computed by early layers to deeper layers, in addition to the observations, we could obtain a factorized latent space while retaining the representational power and efficiency of NPs. \nWe used a two-layer FNP applied to visual responses, where the first NP produces a latent variable for the tuning function's location that the second NP uses to infer the tuning function's properties.\nWe found that a FNP trained on simulated data generalizes to new neurons, successfully inferring the tuning function's location and properties and predicting the responses to unseen stimuli. An FNP trained on neural responses from the mouse primary visual cortex made predictions with comparable accuracy to state-of-the-art approaches, and made these predictions almost \\emph{100 times faster}.\n\nIn short, our contributions in this work include: \\circled{1} We reformulate the problem of predicting the response of neurons to visual stimuli as a K-shot regression problem, removing the time consuming step of optimizing network parameters for each newly acquired neuron. \\circled{2} We develop a Factorized Neural Process that embeds the observed stimuli-response pairs into a latent space representing the tuning function that is partitioned into location and tuning function properties. \\circled{3} We train this Factorized Neural Process for Neural Processes end-to-end on simulated data and show it approaches the ground truth predictions as the size of the observation set increases. \\circled{4} We found that this approach performs comparably to state-of-the-art predictive models on responses from mouse visual cortex neurons while improving estimation speed by multiple orders of magnitude. 
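The factorization described above, with separate latent variables for the tuning function's location and for its properties, can be illustrated with a toy factorized readout: the location variable selects where in the stimulus the neuron reads, and the property variable says how features there drive the response. All values and the rectified-linear readout below are hypothetical simplifications, not the FNP's actual decoder.

```python
# Toy factorized tuning function: response = readout of stimulus features at
# an inferred location, with location and feature weights kept separate.

def response(stimulus, location, weight, bias):
    # `stimulus` is a 1-d "image"; `location` is the location latent,
    # (`weight`, `bias`) play the role of the tuning-property latent.
    x = stimulus[location]
    return max(0.0, weight * x + bias)  # rectified firing rate

stimulus = [0.0, 0.2, 1.0, 0.3, 0.0]
rate_centered = response(stimulus, location=2, weight=3.0, bias=-0.5)  # 2.5
rate_offset = response(stimulus, location=0, weight=3.0, bias=-0.5)    # 0.0
```

The same property latent produces very different responses depending on the inferred location, which is why the two factors are inferred separately.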
The code is available at \\url{https:\/\/github.com\/peabody124\/fnp_neurips2020}.\n\n\\section{Neural Processes for Neural Processes}\n\nThe core steps that allow an NP to efficiently meta-learn a $K$-shot regression model are (1) encoding each element from a set of observed input-output observations into a representation space, (2) aggregating the elements in that representation space (typically by taking the mean) to produce a sufficient statistic of the observed set, (3) a conditional decoder that maps the aggregated representation to a function used for prediction of new observations, and (4) training this over many different sets of observations from different sample functions, i.e. meta-learning the distribution over tuning functions \\cite{Garnelo2018}. Our approach is largely based on \\citet{Garnelo2018a}, which expanded on \\citet{Garnelo2018} by introducing a stochastic variable used by the conditional decoder. NPs were further extended to include attention in \\citet{Kim2019}, which we do not use in this work. \n\nFirst, we describe the data generation process we seek to model: Let $\\mathcal F : \\mathcal X \\rightarrow \\mathcal Y$ be the space of all tuning functions that map from images to neural responses. An individual neuron corresponds to a sample function, $f \\in \\mathcal F$, from which we get $K$ observations $O_K=\\{(\\boldsymbol x_i, y_i)\\}_{i=0}^{i<K}$.\n\nWe found that as the size of the observation set increased, the predictive accuracy improved up to several hundred observations and then began to saturate. We also found that increasing the maximum set size used during training had a slight benefit in the asymptotic performance when increasing from 512 to 1024 trials, but with little benefit beyond this (Fig. \\ref{fig:accuracy}). These were averaged over three different seeds, with each fit producing similar performance.
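The encode-aggregate-decode steps (1)-(3) above can be sketched with plain Python; the linear "networks" below are hypothetical stand-ins for the learned encoder and decoder, chosen only to keep the sketch self-contained.

```python
# Toy sketch of the NP pipeline: encode each (x, y) pair, mean-aggregate into
# a sufficient statistic r of the observation set, then condition a decoder
# on r to predict y at a new input.

def encode(x, y):
    # (1) embed one observation into a small representation vector
    return [x * y, x + y]

def aggregate(reps):
    # (2) permutation-invariant mean over the observation set
    n = len(reps)
    return [sum(r[k] for r in reps) / n for k in range(len(reps[0]))]

def decode(r, x_new):
    # (3) predict the output at a new input, conditioned on r
    return r[0] * x_new + r[1]

observations = [(1.0, 2.0), (2.0, 3.0), (3.0, 4.0)]
r = aggregate([encode(x, y) for x, y in observations])
prediction = decode(r, x_new=0.5)
```

Because the aggregation is a mean, the representation `r` is invariant to the order of the observations, which is what makes it a sufficient statistic of the *set* rather than the sequence.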
\n\nDespite trying a number of architecture variations, we could not get the asymptotic performance to quite reach ground truth. However, it performed well, with a $\\Delta LL$ of $0.4$ corresponding to a correlation coefficient between the ground truth mean response, $\\lambda_\\phi\\left(\\boldsymbol x\\right)$, and the model prediction mean, $\\lambda_{K=1024}(\\boldsymbol x)$, of $0.8$. \n\n\\subsection{Latent variables accurately capture the tuning function}\n\\label{section:reconstruction}\n\n\\begin{figure}[b]\n \\centering\n \\includegraphics[width=0.8\\linewidth]{rf_accuracy.pdf}\n \\caption{a) the correlation between the location latent variable, $z_k^p$, and the ground truth for increasing observations. b) Reconstruction of receptive fields (RF). Each row corresponds to a different cell with the bottom half being complex cells. The first column shows the ground truth kernels, the second column is the RF reconstructed by the gradient method, and the remaining block shows the maximally exciting images computed using increasing numbers of observations. Ground truth kernels of complex cells use pseudocolor to reflect the two phases in the energy model and any reconstruction of this energy model with the same orientation and location is equally valid, regardless of the phase.}\n \\label{fig:rf_accuracy}\n\\end{figure}\n\nWe confirmed the information about the tuning function was correctly factorized by computing the correlation coefficient between the latent variable for location, $\\boldsymbol z_k^p$, and the ground truth location of the kernel, $k_\\phi$. We found that with only 64 observations there was a correlation of $0.8$, and it reached nearly $1.0$ with 256 observations (Fig.~\\ref{fig:rf_accuracy}a). \n\nWe then asked if the latent variables $(\\boldsymbol z_K^p, \\boldsymbol z_K^w)$ from 1024 observations were sufficient to reconstruct the receptive field. 
First, we computed receptive fields as the gradient of the tuning function conditioned on the latent variables: \n\\begin{equation*}\n RF_\\nabla^K = \\nabla_{\\boldsymbol x} \\left([\\mathcal T \\left( g_\\theta(\\boldsymbol x), \\boldsymbol z_K^p \\right), 1] \\cdot u_\\theta(\\boldsymbol z_K^w) \\right)\n\\end{equation*}\nFor simple cells the gradient showed a good correspondence to the kernel used to generate the responses, $k_\\phi$ (Fig. \\ref{fig:rf_accuracy}b). For complex cells it was in the correct location, but did not show the same structure as the kernel. This is expected as complex cells are not well described by a single kernel. We then computed the maximally exciting images (MEIs) for a neuron similarly to \\citet{Walker2019} by maximizing the predicted response, conditioned on the latent variables sampled after an increasing number of observations:\n\\begin{equation*}\n MEI_K = \\argmax_{\\boldsymbol x} \\left([\\mathcal T \\left( g_\\theta(\\boldsymbol x), \\boldsymbol z_K^p \\right), 1] \\cdot u_\\theta(\\boldsymbol z_K^w) \\right) - \\kappa \\Vert \\boldsymbol x \\Vert\n\\end{equation*}\nWith $\\kappa=0.01$ to regularize the images. As desired, MEIs computed with more observations converged towards the ground truth kernels, with complex cells having an anticipated random phase offset.\n\n\\subsection{Latent dimensionality and stimulus complexity}\n\\label{section:tuning_complexity}\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\linewidth]{latent.pdf}\n \\caption{\\textbf{Left}: The difference between the predictive log likelihood and ground truth as the dimension of the tuning function properties latent variable increases. Each line reflects an increasingly complex RF space. 
\\textbf{Right}: The correlation between the ground truth mean firing rate and the predicted mean firing rate for the same data.}\n \\label{fig:latent_dimension}\n\\end{figure}\n\nWe also studied how important the tuning function property's latent dimension $D$, with $\\boldsymbol z_K^w \\in \\mathbb R^D$, was to the predictive performance by increasing it from 2 to 64 (all experiments above used 64). We did this with different complexities of the simulated receptive fields by reducing the number of parameters in $\\phi$ that were randomized. In all experiments the orientation and location of the tuning function was randomized ($\\phi \\in \\mathbb R^3$). We increased the tuning function dimensions by then additionally randomizing (in order): frequency, width, phase offset, simple only versus simple and complex cells, and scale. Because this analysis involved refitting many models, we performed it with $16\\times 16$ stimuli. We found the performance improved with greater model capacity (tuning function properties latent dimension) and this impact was much more pronounced for more complex (higher dimensional) tuning functions (Fig.~\\ref{fig:latent_dimension}). Randomizing the phase offset produced the greatest reduction in predictive accuracy, although performance still remained quite good with high correlations between the model predictions and ground truth. Encouragingly, including complex cells did not produce a significant change in performance.\n\n\\section{Experiments with real neural responses}\n\nWe next tested our approach on real visual responses recorded with the same experimental paradigm as in \\citet{Walker2019}, and found it had a comparable predictive performance to optimization-based approaches. The data consists of pairs of neural population responses and grayscale visual stimuli sampled and cropped from ImageNet, isotropically downsampled to $64\\times 36$\\,px, with a resolution of $0.53$\\,ppd (pixels per degree of visual angle). 
The neural responses were recorded from layer L2\/3 of the primary visual cortex (area V1) of the mouse, using a wide-field two-photon microscope. A single scan contained the responses of approximately 5000--9000 neurons to up to 6000 images. \n\nWe trained an FNP on 57,533 mouse V1 neurons collected across 19 different scans and tested it on 1000 neurons from a hold-out scan (i.e. never seen during training). \nDuring testing, we set the latent variables to their mean values: $\\boldsymbol z_K^p:=\\boldsymbol \\mu(s_K^p)$ and $\\boldsymbol z_K^w:=\\boldsymbol \\mu_K^w$, and used these in Eq.~\\ref{eq:conditional_decoder}. \nWe measured the $K$-shot predictive accuracy for each neuron as the correlation between the predicted mean from the conditional decoder, $\\lambda(\\boldsymbol x_t, \\boldsymbol z_K^p, \\boldsymbol z_K^w)$, and the real responses, $y_t$, for the remaining trials. \nIn agreement with synthetic data, the predictive accuracy improves rapidly with the first several hundred trials and continues to improve with additional observations (Fig.~\\ref{fig:real_predictions}). \nWe compared the performance of our FNP to an optimization-based approach similar to \\citet{Klindt2017}, adapted for mouse V1, which we reference as Per Neuron Optimization (PNO). We measured the predictive performance of PNO similarly to FNP, on the same 1000 neurons with the readout optimized with $K$ trials and used to predict the response to the remaining stimuli. \nExcitingly, FNP performs well and with 1k images is almost as accurate as PNO (which is optimized for those individual cells), and even \\emph{outperforms} it for smaller numbers of observations (Fig.~\\ref{fig:real_predictions}). This likely arises because the FNP learns the prior distribution over tuning functions, which has a greater influence with less data.\n
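The per-neuron predictive accuracy above is a Pearson correlation between the decoder's predicted means and the held-out responses. A minimal stdlib-only sketch (the response values are hypothetical, not data from the paper):

```python
import math

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length sequences
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical predicted mean responses vs. observed responses on held-out trials
predicted = [0.5, 1.2, 3.1, 2.0, 0.1]
observed = [1.0, 1.0, 4.0, 2.0, 0.0]
accuracy = pearson(predicted, observed)
```

A value near 1 means the conditional decoder ranks and scales the held-out responses well; the metric is insensitive to a global affine rescaling of the predictions.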
Please see the Appendix for details of both FNP and PNO fitting and testing.\n\nThese experiments also demonstrated the speed improvements for inferring the tuning function of newly recorded neurons that the FNP was designed for. While fitting the FNP to the training data took 6 days using two V100 GPUs, computing the latent variables for one thousand neurons with $K=1000$ took only 250~ms on a 1080Ti. This is in comparison to PNO, which takes from 20~s to compute the readout using a pretrained CNN (Supplementary Table~\\ref{table:times}). Thus an FNP is two orders of magnitude faster, enabling real-time inference of tuning functions within the time of a single stimulus presentation.\n\n\\begin{figure}\n \\begin{center}\n\n \\includegraphics[width=0.5\\linewidth]{figures\/optimization_v_prediction.pdf}\n\n \\end{center}\n \\caption{Performance of an FNP for $K$-shot prediction for new neurons compared to a traditional approach with per-neuron optimization (PNO) for $K$ up to 1000 trials.}\n \\label{fig:real_predictions}\n\\end{figure}\n\n\\section{Discussion}\n\nUsing a Factorized Neural Process, we are able to learn a distribution over tuning functions that can be conditioned on a set of observed stimulus-response pairs and predict the response to novel stimuli. We first focused on simulated data from simple and complex cells where we could compare the inferred tuning functions to the ground truth. Importantly, the model performed equally well when including complex cells, which is not possible for classical techniques like the spike-triggered average that similarly accumulate sufficient statistics. The fact that the asymptotic log likelihood for predictions did not reach the ground truth also indicates there is room to increase the model capacity, although the correlation between the ground truth and model predictions exceeded $0.8$.
\nFollowing prior work \\cite{Klindt2017, Ecker2019, Walker2019,Sinz2018}, we restricted ourselves to a decoder that was a factorized linear readout on the output of $g_\\theta$, but learning a more powerful decoder could also improve the capacity. \nWe then tested our approach on data from the mouse primary visual cortex in response to natural images. We found the trained FNP predicted the responses to test data with accuracy comparable to a model specifically optimized for those neurons, and even exceeded its performance when conditioned on fewer than 500 trials. Additionally, the FNP made these predictions orders of magnitude more quickly than an optimization-based approach, thus opening the door to real-time, closed-loop inference of tuning functions updated after every stimulus presentation.\n \nThis work was motivated by real-time experimentation, but during an experiment the best way to know how a neuron responds to a stimulus is to measure it. The real need is using the observations to rapidly generate stimuli to test a hypothesis. We envision combining an FNP for rapid inference with a generator network that takes the latent representations as input and is trained \\textit{in silico} prior to experiments to generate stimuli to elicit maximal responses or reduce the uncertainty in the latent representations. We believe this general approach of training a more powerful model prior to experiments that is capable of rapid, real-time inference will be a powerful tool for the neuroscience community, and that this approach using FNPs will facilitate it.\n\n\\section*{Broader Impact}\n\nWe hope this approach will be useful to the Neuroscience community and that Factorized Neural Processes may have even broader applications for modeling functions. The ability to perform real-time, closed-loop experiments and to perform inferences with less data may reduce the amount of time to record from animals or the number of experimental sessions.
\nWe do not believe this methodology or the demonstrated application will disadvantage anyone.\n\n\\begin{ack}\nRJC thanks the Research Accelerator Program of the Shirley Ryan AbilityLab for support during residency.\nFHS is supported by the Carl-Zeiss-Stiftung and acknowledges the support of the DFG Cluster of Excellence \"Machine Learning \u2013 New Perspectives for Science\", EXC 2064\/1, project number 390727645.\nSupported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior\/Interior Business Center (DoI\/IBC) contract number D16PC00003. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI\/IBC, or the U.S. Government. \\end{ack}\n\n\\small\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Large parton densities in the nuclear wave function}\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=5cm,angle=-90]{dhj-evol.eps}\n\\hspace{0.5cm}\n\\includegraphics[width=5cm,angle=-90]{dhj-pred.eps}\n\\caption{Forward particle production in d+Au collisions at RHIC. The left plot shows the importance of including both the large-$x$ DGLAP evolution of the dilute deuteron and the\nsmall-$x$ CGC evolution of the dense nucleus. The right plots shows the excellent description of the spectra shapes, and the K factors needed to obtain the normalization.}\n\\end{center}\n\\end{figure}\n\nWhen probing small distances inside a hadron or nucleus with a hard process, one resolves their partonic constituents. 
Increasing the energy of the scattering process at a fixed momentum transfer allows to probe lower-energy partons, with smaller energy fraction $x.$ As the parton densities in the hadronic\/nuclear wave function grow with decreasing $x,$ they eventually become so large that a non-linear (yet weakly-coupled) regime is reached, called saturation, where partons do not interact with the probe independently anymore, but rather behave coherently. \n\nThe Color Glass Condensate (CGC) is an effective theory of QCD \\cite{cgcrev} which aims at describing this part of the wave function. Rather than using a standard Fock-state decomposition, it is more efficient to describe it with collective degrees of freedom, more adapted to account for the collective behavior of the small-$x$ gluons. The CGC approach uses classical color fields: \n\\begin{equation}\n|h\\rangle=|qqq\\rangle+|qqqg\\rangle+\\dots+|qqqg\\dots ggg\\rangle+\\dots\\quad\n\\Rightarrow\\quad|h\\rangle=\\int D\\rho\\ \\Phi_{x_A}[\\rho]\\ |\\rho\\rangle\n\\label{cgc}\\ .\\end{equation}\nThe long-lived, large-$x$ partons are represented by a strong color source\n$\\rho\\!\\sim\\!1\/g_S$ which is static during the lifetime of the short-lived small-$x$ gluons, whose dynamics is described by the color field $A^\\mu\\!\\sim\\!1\/g_S.$ The arbitrary separation between the field and the source is denoted $x_A.$\n\nWhen probing the CGC with a dilute object carrying a weak color charge, the color field $A^\\mu$ is directly obtained from $\\rho$ via classical Yang-Mills equations:\n\\begin{equation}\n[D_\\mu,F^{\\mu\\nu}]=\\delta^{-\\nu}\\rho\\ ,\n\\end{equation}\nand it can be used to characterize the CGC wave function $\\Phi_{x_A}[A^-]$\n(in the $A^+\\!=\\!0$ gauge). This wave function is a fundamental object of this picture, it is mainly a non-perturbative quantity, but the $x_A$ evolution can be computed perturbatively. 
Requiring that observables are independent of the choice of $x_A,$ a functional renormalization group equation can be derived. In the leading-logarithmic approximation which resums powers of $\\alpha_S\\ln(1\/x_A),$ the JIMWLK equation describes the evolution of $|\\Phi_{x_A}|^2$ with $x_A.$\n\nThe information contained in the wave function, on gluon number and gluon correlations, can be expressed in terms of n-point correlators, probed in scattering processes. These correlators consist of Wilson lines averaged with the CGC wave function, and resum powers of $g_S A^-$ (therefore both multiple scatterings and non-linear QCD evolution are taken into account). For instance, in the case of single inclusive gluon production in pA collisions, the CGC is described by its (all-twist) unintegrated gluon distribution, obtained from the 2-point function \\cite{mygprod}. More exclusive observables involve more complicated correlators. \n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=6.8cm]{dau.eps}\n\\hspace{0.5cm}\n\\includegraphics[width=6.8cm]{ppb.eps}\n\\caption{Two-particle production at forward rapidities in pA collisions. The $\\Delta\\phi$ spectrum is displayed at RHIC (left) and LHC (right) energies. When decreasing $p_{T_2}$ at fixed $y_2,$ the correlation in azimuthal angle is suppressed. At the LHC, smaller values of $x_A$ are probed, and the azimuthal angle correlation is more suppressed as indicated by the vertical axis; the peak is also less pronounced.}\n\\end{center}\n\\end{figure}\n\nForward particle production in pA collisions allows one to investigate the non-linear QCD dynamics of high-energy nuclei with a probe well understood in QCD.\n
Indeed, while such processes are probing small-momentum partons in the nuclear wave function, only high-momentum partons of the proton contribute to the scattering ($\\sqrt{s} x_p\\!=\\!k e^y$ and $\\sqrt{s} x_A\\!=\\!k e^{-y}$ with $k$ and $y$ denoting transverse momentum and rapidity), and that involves standard parton distribution functions. In two-particle production, contrary to single particle production, the CGC cannot be described only by its unintegrated gluon distribution; the so-called $k_T$-factorization framework is not applicable.\n\nIt was not obvious that the CGC picture (\\ref{cgc}), which requires small values of $x_A,$ would be relevant at present energies. One of the most acclaimed successes came in the context of d+Au collisions at RHIC: the prediction that the yield of high-$p_T$ particles at forward rapidities in d+Au collisions is suppressed compared to $A$ pp collisions, and should decrease when increasing the rapidity, was confirmed\n\\cite{jyrev}. In Fig.1 the $dAu\\!\\to\\!hX$ $p_T$ spectra computed in the CGC approach \\cite{dhj} are compared to RHIC data, and the description of the slope is impressive. The need for K factors to describe the normalization could be expected since this is a leading-order based calculation. Improving the calculation with the next-to-leading evolution has yet to be done.\n\nThe focus should now shift towards more exclusive observables like two-particle production $pA\\!\\to\\!h_1h_2X.$ In particular the correlations in azimuthal angle between the produced hadrons should be suppressed compared to pp collisions. Predictions for the process $pA\\to h_1h_2X$ are shown in Fig.2, for RHIC and the LHC \\cite{mytpc}. $k_1,$ $k_2$ and $y_1,$ $y_2$ are the transverse momenta and rapidities of the final state hadrons, and the azimuthal angle spectra are displayed. One finds that the perturbative back-to-back peak of the azimuthal angle distribution is reduced by initial state saturation effects.\n
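The kinematics quoted above, $\sqrt{s}\,x_p = k e^y$ and $\sqrt{s}\,x_A = k e^{-y}$, can be checked numerically: forward production keeps the proton side at large $x$ while probing very small $x$ in the nucleus, more so at the LHC. The beam energies and the $(k, y)$ point below are illustrative values assumed for this sketch.

```python
import math

def probed_x(k, y, sqrt_s):
    """Momentum fractions probed in hadron production at transverse
    momentum k (GeV) and rapidity y: sqrt(s) x_p = k e^y, sqrt(s) x_A = k e^-y."""
    x_p = k * math.exp(y) / sqrt_s
    x_A = k * math.exp(-y) / sqrt_s
    return x_p, x_A

# Forward production (y = 3) at k = 2 GeV, for assumed collision energies:
x_p_rhic, x_A_rhic = probed_x(k=2.0, y=3.0, sqrt_s=200.0)   # RHIC-like
x_p_lhc, x_A_lhc = probed_x(k=2.0, y=3.0, sqrt_s=5020.0)    # LHC-like
```

Note the constraint $x_p x_A = k^2/s$: at fixed $k$, going forward ($y>0$) trades a larger $x_p$ against a smaller $x_A$, which is exactly why forward measurements are sensitive to the saturated nuclear wave function.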
As the momenta decrease, the angular distribution broadens.\n\n\\section{Particle production in the Glasma}\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=6.5cm]{mpi.eps}\n\\hspace{0.5cm}\n\\includegraphics[width=7.5cm]{horizon.eps}\n\\caption{Left: typical leading-order diagram for particle production in the Glasma; multiple partonic interactions are crucial when low values of $x$ are being probed in the nuclear wave functions. Right: the space-time location of events that may correlate two particles is the intersection of their past light-cones. Correlations between particles widely separated in rapidity are due to early-time dynamics.}\n\\end{center}\n\\end{figure}\n\nThe Glasma is the result of the collision of two CGCs, and it provides a weak-coupling description of the early stages after a high-energy heavy-ion collision. Each nuclear wave function is characterized by a strong color charge, and the field describing the dynamics of the small-$x$ gluons is the solution of\n\\begin{equation}\n[D_\\mu,F^{\\mu\\nu}]=\\delta^{+\\nu}\\rho_1+\\delta^{-\\nu}\\rho_2\\ .\n\\end{equation}\nThe field after the collision is non-trivial \\cite{glasma}: it has a strong component ($A^\\mu\\sim1\/g_s$), a component which is particle-like ($A^\\mu\\sim1$), and components of any strength in between. To understand how this pre-equilibrium system thermalizes, one needs to understand how the Glasma field decays into particles. Right after the collision, the strong field component contains all modes.\nThen, as the field decays, modes with $p_T>1\/\\tau$ are not part of the strong component anymore, and for those a particle description becomes more appropriate. After a time of order $1\/Q_s,$ this picture breaks down, and it has been a formidable challenge to determine whether a fast thermalization can be achieved within this framework.\n\nA problem which can be more easily addressed is particle production.\n
The difficult task is to express the cross-section in terms of the Glasma field, taking into account multiple partonic interactions, as pictured in Fig.3 (left). Because of the flux-tube structure of its color field $A^\\mu,$ the Glasma is a natural candidate to explain the ridge-shaped two-particle correlations observed at RHIC, as well as three-particle correlations \\cite{ridge}. The ridge is collimated in azimuthal angle because of the radial flow which happens at a later stage, but since the ridge is several units long in rapidity, it is due to early-time dynamics: this is explained in Fig.3 (right), which shows the space-time picture of the collision. In the forward light-cone, lines of constant proper time $\\tau=\\sqrt{x^+x^-}$ are hyperbolae and lines of constant rapidity $\\eta=\\frac12\\log(x^+\/x^-)$ are straight lines from the origin. For two final-state particles separated by the rapidity interval $\\Delta\\eta,$ causality imposes that they can be correlated only by events which happened at\n\\begin{equation}\n\\tau<\\tau_{f.o.}\\ e^{-\\Delta\\eta\/2}\\ ,\n\\end{equation}\nwhere the freeze-out time $\\tau_{f.o.}$ denotes the time of last interaction. While the features of the ridge are qualitatively explained by the Glasma, a quantitative description is needed.\n\n\\section{Energy loss of high-$p_T$ partons in the QCD plasma}\n\nHard probes are believed to be understood well enough to provide clean measurements of the properties of the QGP formed in heavy-ion collisions. A large amount of work has been devoted to understanding what happens to a quark (of high energy $E,$ mass $M$ and Lorentz factor $\\gamma=E\/M$) as it propagates through a thermalized plasma\n\\cite{jqrev}.
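The causality bound above is easy to evaluate numerically; a minimal sketch, with an illustrative freeze-out time (the value 10 fm/c is a placeholder, not taken from data):

```python
import math

def latest_correlation_time(tau_fo, delta_eta):
    """Latest proper time at which an event can still correlate two
    particles separated in rapidity by delta_eta:
    tau < tau_fo * exp(-delta_eta / 2)."""
    return tau_fo * math.exp(-delta_eta / 2.0)

# Assume a freeze-out time of ~10 fm/c purely for illustration.
for deta in (0.0, 2.0, 4.0):
    tau_max = latest_correlation_time(10.0, deta)
    print(f"delta_eta = {deta}: correlated only by events at tau < {tau_max:.2f} fm/c")
```

For a separation of four units of rapidity this gives $\\tau\\lesssim1.4$ fm/c with the assumed freeze-out time, illustrating why long-range rapidity correlations must originate from very early times.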
Multiple scatterings are a key ingredient of the perturbative QCD (pQCD) description of how a quark loses energy, until it thermalizes or exits the medium (see Fig.4).\n\nAt lowest order with respect to $\\alpha_s,$ quantum fluctuations in a quark wave function consist of a single\ngluon, whose energy we denote by $\\omega$ and transverse momentum by $k_\\perp.$ The virtuality of that\nfluctuation is measured by the coherence time, or lifetime, of the gluon $t_c=\\omega\/k_\\perp^2.$\nShort-lived fluctuations are highly virtual, while longer-lived fluctuations are more easily put on shell when\nthey interact. The probability of the fluctuation is $\\alpha_sN_c,$ up to a kinematic factor which for heavy\nquarks suppresses fluctuations with $\\omega>\\gamma k_\\perp.$ This means that when gluons are put\non shell, they are not radiated in a forward cone around a heavy quark. This suppression of the available\nphase space for radiation, the {\\it dead-cone} effect, implies less energy loss for heavier quarks.\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=4.5cm]{cartoon.eps}\n\\hspace{1cm}\n\\includegraphics[width=6.5cm]{raa.eps}\n\\caption{Left: production of high-energy partons in a hard process, which then lose energy propagating through the plasma. Some quantum fluctuations in their wave function are put on shell while interacting with the medium and become emitted radiation.\nRight: the resulting particle production in AA collisions is suppressed ($R_{AA}<1$) compared to independent nucleon-nucleon collisions.
The suppression is large for light hadrons, and similar for heavy mesons (those data are displayed in the figure), which is difficult to accommodate in a weakly-coupled QCD description.}\n\\end{center}\n\\end{figure}\n\nIn pQCD, medium-induced gluon radiation is due to multiple scatterings of the virtual gluons.\nIf, while undergoing multiple scattering, the virtual gluons pick up enough transverse momentum to be put on shell,\nthey become emitted radiation. The accumulated transverse momentum squared picked up by a gluon of coherence\ntime $t_c$ is\n\\begin{equation}\np_\\perp^2=\\mu^2 \\frac{t_c}{l}\\equiv\\hat{q}\\ t_c\n\\end{equation}\nwhere $\\mu^2$ is the average transverse momentum squared picked up in each scattering, and\n$l$ is the mean free path. These medium properties enter through the ratio\n$\\hat{q}=\\mu^2\/l.$\n\nSince only the fluctuations which pick up enough transverse momentum are freed ($k_\\perp^2<\\hat{q}\\,t_c$), short-lived fluctuations with $k_\\perp>Q_s$ do not have time to pick up enough $p_\\perp$ to be freed, while the longer-lived ones with $k_\\perp<Q_s$ become emitted radiation.\n\n\\subsection{Laminar $v_\\mu^2$ dynamo}\n\n\\subsubsection{Theoretical aspects}\n\nThe growth rate of the chiral dynamo instability is $\\gamma = |v_\\mu \\, k| - \\eta k^2$, where $v_\\mu = \\eta \\mu_0$, so the instability is excited $(\\gamma > 0)$ for $k < |\\mu_0|$.\nThe maximum growth rate of the dynamo instability,\n\\begin{eqnarray}\n\\gamma^{\\rm max}_\\mu = \\frac{v_\\mu^2}{4 \\eta},\n\\label{gamma-max}\n\\end{eqnarray}\nis attained at\n\\begin{equation}\nk_\\mu =\\frac12|\\mu_0|.\n\\label{eq_kmax}\n\\end{equation}\n\n\n\\subsubsection{Time evolution}\n\n\\begin{figure}[t]\n\\begin{center}\n \\includegraphics[width=\\columnwidth]{La2_15B__ts}\n\\end{center}\n\\caption[]{\\textbf{Laminar $v_\\mu^2$ dynamo:}\ntime evolution of $B_\\mathrm{rms}$ (solid black line),\n$\\langle \\mathbf{A} \\cdot \\mathbf{B} \\rangle$ (dashed gray line),\n$\\mu_\\mathrm{rms}$ (multiplied by\n$2\/\\lambda$, dotted blue line), and\n$\\langle \\mathbf{A} \\cdot \\mathbf{B} \\rangle +2\\mu_\\mathrm{rms}\/\\lambda$\n(dash-dotted red line) for reference run La2-15B\n(see Table~\\ref{table_simulations_vmu2}).\n}\n\\label{fig__La2_15B__ts}\n\\end{figure}\n\nIn Figure~\\ref{fig__La2_15B__ts} we show the
time evolution\nof the rms magnetic field $B_\\mathrm{rms}$,\nthe magnetic helicity $\\langle \\mathbf{A} \\cdot \\mathbf{B} \\rangle$,\nthe chemical potential $\\mu_\\mathrm{rms}$ (multiplied by a factor of\n$2\/\\lambda$), and $\\langle \\mathbf{A} \\cdot \\mathbf{B} \\rangle\n+2\\mu_\\mathrm{rms}\/\\lambda$ for reference run La2-15B.\nIn simulations, the time is measured in units of the diffusion time\n$t_\\eta=(\\eta k_1^2)^{-1}$.\nThe initial conditions for the magnetic field are chosen in the form of\na Beltrami field on $k=k_1=1$.\n\nThe magnetic field is amplified\nexponentially over more than four orders\nof magnitude until it saturates after roughly eight diffusive times.\nWithin the same time, the magnetic helicity\n$\\langle \\mathbf{A} \\cdot \\mathbf{B} \\rangle$ increases\nover more than eight orders of magnitude.\nSince the sum of the magnetic helicity and $2 \\mu\/\\lambda$ is conserved,\nthe chemical potential $\\mu$ decreases, in the nonlinear era\nof the evolution, from the initial value $\\mu_0=2$ to $\\mu=1$,\nresulting in saturation of the laminar $v_\\mu^2$ dynamo.\n\n\n\n\\begin{figure}[t!]\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{Gamma_etamu}\n\\end{center}\n\\caption[]{\\textbf{Laminar $v_\\mu^2$ dynamo:}\ngrowth rates as a function of ${\\rm Ma}_\\mu$,\nfor simulations with $\\mu_0=2$.\nThe black line is the theoretical prediction for the maximum growth rate\n$\\gamma^\\mathrm{max}_\\mu$\n(see Equation~\\ref{gamma-max}) that is attained at\n$k_\\mu=\\mu_0\/2=1$\n(see Equation~\\ref{eq_kmax}).\nThe runs with Gaussian initial fields, shown as red diamonds,\nlie on the theoretically predicted\n$\\gamma^\\mathrm{max}_\\mu$.\nThe dotted line corresponds to the theoretical prediction for the growth rate\n$\\gamma(k=1)$ at the scale of the box.\nThe runs with an initial magnetic Beltrami field on $k=1$,\nshown as blue diamonds, lie on the theoretically predicted dotted\ncurve
$\\gamma(k=1)$.\n}\n\\label{fig__Gamma_eta}\n\\end{figure}\n\n\\subsubsection{Dynamo growth rate}\n\nIn Figure~\\ref{fig__Gamma_eta}, we show the growth rate of the magnetic field\nas a function of the chiral Mach number, ${\\rm Ma}_\\mu$.\nThe black solid line\nin this figure shows the theoretical prediction for the maximum\ngrowth rate $\\gamma^\\mathrm{max}_\\mu$ that is attained at $k_\\mu=\\mu_0\/2=1$;\nsee Equations~(\\ref{gamma-max}) and (\\ref{eq_kmax}).\nWhen the initial magnetic field is distributed over all spatial scales,\nas in the case of initial Gaussian magnetic noise, there is a\nnonvanishing magnetic field at $k_\\mu$, which lies inside the computational\ndomain, and the magnetic field is amplified at the maximum growth rate,\nas observed in the simulations.\nConsequently, the runs with Gaussian initial fields, shown as red diamonds in\nFigure~\\ref{fig__Gamma_eta}, lie on the theoretical curve\n$\\gamma^\\mathrm{max}_\\mu$.\nThe dotted line in Figure~\\ref{fig__Gamma_eta} corresponds to the theoretical\nprediction for the growth rate $\\gamma$ at the scale of the box $(k=1)$.\nThe excitation of the magnetic field from an initial Beltrami field on $k=1$\noccurs with growth rates in agreement with the theoretical dotted curve;\nsee the blue diamonds in Figure~\\ref{fig__Gamma_eta}.\n\n\n\\subsubsection{Dependence on initial conditions}\n\n\\begin{figure}[t]\\begin{center}\n\\includegraphics[width=\\columnwidth]{La2_B_t}\n\\end{center}\n\\caption[]{\\textbf{Laminar $v_\\mu^2$ dynamo:}\ntime evolution of $B_\\mathrm{rms}$ for two different initial conditions.\nThe black line is for the dynamo instability started from an initial\nBeltrami field at $k=1$ (run La2-10B), while the blue line\nis for an initial Beltrami field with\n$k=10$ (run La2-10Bkmax).\nFits in different regimes are indicated by thin lines.\nBoth runs are for the initial value $\\mu_0=20$, so that $k_\\mu=10$\nand $\\gamma^\\mathrm{max}_\\mu=0.1$
(see~Equation~\\ref{gamma-max}).\n}\n\\label{fig__La2_B_t}\n\\end{figure}\n\nThe initial conditions for the magnetic field are important mostly at early times.\nIf the magnetic field is initially concentrated on the box scale, we expect to\nobserve a growth rate $\\gamma(k=1)$ as given by Equation~(\\ref{gamma}).\nAt later times, the spectrum of the magnetic field can, however, be changed,\ndue to mode coupling, and be amplified with a larger growth rate.\nThis behavior is observed in Figure~\\ref{fig__La2_B_t}, where\nan initial Beltrami field with $k=10$ is excited with maximum growth rate,\nsince $\\mu_0=20$.\nIn Figure~\\ref{fig__La2_B_t} we also consider another situation\nwhere the dynamo is started from an initial Beltrami field with $k=1$ (La2-10B).\nIn this case, the dynamo starts with a growth rate $\\gamma=0.019$,\nwhich is consistent with the theoretical prediction for $\\gamma(k=1)$.\nLater, after approximately $0.4\\,t_\\eta$, the dynamo growth\nrate increases up to the value $\\gamma=0.07$, which is close to\nthe maximum growth rate $\\gamma^\\mathrm{max}_\\mu=0.1$.\n\n\n\\subsubsection{Saturation}\n\nThe parameter $\\lambda$ in the evolution Equation~(\\ref{mu-DNS}),\nor the corresponding dimensionless parameter $\\lambda_\\mu$\nin Equation~(\\ref{mu-NS}), for the chiral chemical potential determines\nthe nonlinear saturation of the chiral dynamo.\nWe determine the saturation value of the\nmagnetic field $B_\\mathrm{sat}$ numerically for different values of $\\lambda_\\mu$;\nsee Figure~\\ref{fig_Bsat_lambda}.\nWe find that the saturation value of the magnetic field\nincreases with decreasing $\\lambda_\\mu$.\nThis can be expected from the conservation law (\\ref{CL}).\nIf the initial magnetic energy is very small,\nwe find from Equation~(\\ref{CL}) the following estimate\nfor the saturated magnetic field during laminar chiral dynamo action:\n\\begin{eqnarray}\n B_\\mathrm{sat} \\sim \\left[\\frac{\\mu_0(\\mu_0 
-\\mu_\\mathrm{sat})}{\\lambda}\\right]^{1\/2},\n\\label{eq_Bsat}\n\\end{eqnarray}\nwhere $\\mu_\\mathrm{sat}$ is the chiral chemical potential at saturation, and\nwe use the estimate $A \\approx 2 B \/\\mu_0$.\nInspection of Figure~\\ref{fig_Bsat_lambda} demonstrates\ngood agreement between the\ntheoretical (solid line) and numerical (blue diamonds) results.\n\n\\begin{figure}[t]\\begin{center}\n\\includegraphics[width=\\columnwidth]{Bsat_lambda}\n\\end{center}\n\\caption{{\\bf Laminar $v_\\mu^2$ dynamo:}\nthe saturation magnetic field strength for simulations\nwith different $\\lambda_\\mu$.\nDetails for the different runs, given by labeled blue diamonds, can be found in\nTable~\\ref{table_simulations_vmu2}.}\n\\label{fig_Bsat_lambda}\n\\end{figure}\n\n\n\\subsubsection{Effect of a nonvanishing flipping rate}\n\\label{sec_flip}\n\nIn this section, we consider the influence of a nonvanishing chiral\nflipping rate on the $v_\\mu^2$ dynamo.\nA large flipping rate $\\Gamma_\\mathrm{f}$ decreases the chiral chemical\npotential $\\mu$; see Equation~(\\ref{mu-DNS}).
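Before quantifying the flipping effect, the algebra behind the saturation estimate of Equation~(\\ref{eq_Bsat}) can be verified numerically; a minimal sketch in which the values of $\\mu_0$, $\\mu_\\mathrm{sat}$ and $\\lambda$ are placeholders rather than simulation parameters:

```python
import math

def b_sat(mu0, mu_sat, lam):
    """Saturation field from the conservation law
    <A.B> + 2*mu/lambda = const with the estimate A ~ 2B/mu0,
    i.e. 2*B_sat**2/mu0 = 2*(mu0 - mu_sat)/lam."""
    return math.sqrt(mu0 * (mu0 - mu_sat) / lam)

mu0, mu_sat, lam = 2.0, 1.0, 1e-3   # placeholder values
B = b_sat(mu0, mu_sat, lam)

# The magnetic helicity gained by the field balances the drop in 2*mu/lambda:
helicity_gain = 2.0 * B**2 / mu0
mu_loss = 2.0 * (mu0 - mu_sat) / lam
print(B, helicity_gain, mu_loss)
```

The sketch also makes explicit that $B_\\mathrm{sat}$ increases as $\\lambda$ decreases, matching the trend in Figure~\\ref{fig_Bsat_lambda}.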
Such a flipping term can stop\nthe growth of the magnetic field caused by the chiral dynamo instability.\n\nQuantitatively, the influence of the flipping term can be estimated by\ncomparing the last two terms of Equation~(\\ref{mu-DNS}).\nThe ratio of these terms is\n\\begin{eqnarray}\n f_\\mu \\equiv \\frac{\\Gamma_\\mathrm{f}\\mu_0}{\\lambda\\eta\\mu_0 B_\\mathrm{sat}^2}\n = \\frac{\\Gamma_\\mathrm{f}}{\\eta\\mu_0^2},\n\\label{eq_fmu}\n\\end{eqnarray}\nwhere we have used Equation~(\\ref{eq_Bsat}) with $\\mu_\\mathrm{sat} \\ll \\mu_0$\nfor the saturation value of the magnetic field strength.\nIn Figure~\\ref{fig_ts_flip} we present the time evolution of\n$B_\\mathrm{rms}$ and $\\mu_\\mathrm{rms}$\nfor different values of $f_\\mu$.\nThe reference run La2-15B, with zero flipping rate ($f_\\mu=0$),\nhas been repeated with a finite flipping term.\nWith a finite flipping rate, the magnetic field grows more\nslowly in the nonlinear era, and the\nsaturation level of the magnetic field is reduced; see Figure~\\ref{fig_ts_flip}.\nFor larger values of $f_\\mu$, the chiral chemical potential\n$\\mu$ decreases quickly, leading to strong quenching of the $v_\\mu^2$ dynamo;\nsee the blue lines in Figure~\\ref{fig_ts_flip}.\n\n\\begin{figure}[t]\\begin{center}\n\\includegraphics[width=\\columnwidth]{ts_flip.ps}\n\\end{center}\n\\caption{{\\bf Laminar $v_\\mu^2$ dynamo:}\ntime evolution of the chiral chemical potential $\\mu_{\\rm rms}$ (black lines) and\nthe magnetic field $B_{\\rm rms}$ (blue lines)\nfor $f_\\mu=0$ (solid), $f_\\mu = 0.0025$ (dashed), and $f_\\mu = 0.01$ (dotted).\n}\n\\label{fig_ts_flip}\n\\end{figure}\n\n\n\\subsection{Laminar chiral--shear dynamos}\n\\label{sec_laminarashear}\n\n\nIn this section, we consider laminar chiral dynamos\nin the presence of an imposed shearing velocity.\nSuch a nonuniform velocity profile can arise\nin various astrophysical flows.\n\n\\subsubsection{Theoretical aspects}\n\nWe start by outlining the theoretical predictions\nfor
laminar chiral dynamos in the presence\nof an imposed shearing velocity; for details see Paper~I.\nWe consider the equilibrium configuration specified by the shear velocity\n${\\bm{U}}_{\\rm eq}=(0,S\\, x,0)$,\nand $\\mu=\\mu_0=$ const.\nThis implies that the fluid has nonzero vorticity\n${\\bm W} = (0,0,S)$ similar to differential (nonuniform) rotation.\nThe functions $B_y(t,x,z)$ and $A(t,x,z)$ are determined by\n\\begin{eqnarray}\n&&\\frac{\\partial A(t,x,z)}{\\partial t} = v_\\mu \\, B_y + \\eta \\Delta A,\n\\label{A1-eq}\\\\\n&&\\frac{\\partial B_y(t,x,z)}{\\partial t} = - S\\nabla_z A - v_\\mu \\, \\Delta A\n+ \\eta \\Delta B_y .\n\\label{By1-eq}\n\\end{eqnarray}\nWe look for a solution to Equations~(\\ref{A1-eq})\nand~(\\ref{By1-eq}) of the form $A, B_y \\propto \\exp[\\gamma t + i (k_x x + k_z z-\n\\omega t)]$.\nThe growth rate of the dynamo instability and the frequency of the dynamo waves\nare given by\n\\begin{eqnarray}\n \\gamma = {|v_\\mu \\, k| \\over \\sqrt{2}} \\,\n\\left\\{1 + \\left[1 + \\left({S k_z \\over v_\\mu \\, k^2}\\right)^2 \\right]^{1\\over 2}\n\\right\\}^{1\\over 2} - \\eta k^2\n\\label{eq_gamma_aS}\n\\end{eqnarray}\nand\n\\begin{eqnarray}\n \\omega= {\\rm sgn} \\left(\\mu_0 k_z\\right) \\, {S k_z \\over \\sqrt{2} k}\n\\,\n\\left\\{1 + \\left[1 + \\left({S k_z \\over v_\\mu \\, k^2}\\right)^2\\right]^{1\\over 2}\n\\right\\}^{-{1\\over 2}} ,\n\\nonumber\\\\\n\\label{omega10}\n\\end{eqnarray}\nrespectively.\nThis solution describes a laminar $v_\\mu^2$--shear dynamo\nfor arbitrary values of the shear rate $S$.\n\n\nNext, we consider a situation where the shear term on the\nright side of Equation~(\\ref{By1-eq}) dominates,\nthat is, where $|S \\nabla_z A| \\gg |v_\\mu \\, \\Delta A|$.\nThe growth rate of the dynamo instability\nand the frequency of the dynamo waves are then given by\n\\begin{eqnarray}\n&& \\gamma = \\left({ |v_\\mu \\, S \\, k_z| \\over 2}\\right)^{1\/2} - \\eta k^2 ,\n\\label{gamma1}\\\\\n&& \\omega= {\\rm sgn} 
\\left(\\mu_0 k_z\\right) \\, \\left({|v_\\mu \\, S \\,\nk_z| \\over 2}\\right)^{1\/2} .\n\\label{omega}\n\\end{eqnarray}\nThe dynamo is excited for $k < |v_\\mu \\, S \\, k_z \/2\\eta^2|^{1\/4}$.\nThe maximum growth rate of the dynamo instability\nand the frequency $\\omega=\\omega (k=k_z^\\mu)$\nof the dynamo waves are attained at\n\\begin{equation}\n k_z^\\mu ={1 \\over 4} \\left({2|S \\, v_\\mu| \\over \\eta^2} \\right)^{1\/3},\n\\label{kz-max}\n\\end{equation}\nand are given by\n\\begin{eqnarray}\n&& \\gamma^{\\rm max}_\\mu\n = {3 \\over 8} \\left({S^2 \\, v_\\mu^2 \\over 2\\eta}\\right)^{1\/3}\n - \\eta k_x^2,\n\\label{gam-max}\\\\\n&& \\omega(k=k_z^\\mu)\n = {{\\rm sgn} \\left(v_\\mu \\, k_z\\right) \\over 2\\eta} \\,\n \\left({S^2 \\, v_\\mu^2 \\over 2 \\eta}\\right)^{1\/3} .\n\\label{omega-max}\n\\end{eqnarray}\nThis solution describes the laminar $v_\\mu$--shear dynamo.\n\n\n\\subsubsection{Simulations of the laminar $v_\\mu$--shear dynamo}\n\n\n\\begin{table}\n\\centering\n\\caption{\nOverview of Runs for the Chiral--Shear Dynamos\n(Reference Run in Bold)}\n \\begin{tabular}{l|llllll}\n \\hline\n \\hline\n \\\\\n\tsimulation \t& $\\lambda_\\mu$ \t& $\\dfrac{{\\rm Ma}_\\mu}{10^{-3}}$ & $u_S$\t& $\\dfrac{k_\\lambda}{10^{-4}\\mu_0}$ & $\\dfrac{k_\\mathrm{diff}}{\\mu_0}$\t\\\\\\\\\t \t\t\\\\\n \\hline\n \tLaU-1B\t \t& $1\\times10^{-9}$\t& $2.0$\t& 0.01\t& 1.3 & 503 \\\\\n \tLaU-1G\t \t& $1\\times10^{-9}$\t& $2.0$\t& 0.01\t& 1.3 & 503 \\\\\n \tLaU-2B\t \t& $1\\times10^{-9}$\t& $2.0$\t& 0.02\t& 1.3 & 503 \\\\\n \tLaU-2G\t \t& $1\\times10^{-9}$\t& $2.0$\t& 0.02\t& 1.3 & 503 \\\\\n \tLaU-3B\t \t& $1\\times10^{-9}$\t& $2.0$\t& 0.05\t& 1.3 & 503 \\\\\n \tLaU-3G\t \t& $1\\times10^{-9}$\t& $2.0$\t& 0.05\t& 1.3 & 503 \\\\\n \tLaU-4B\t \t& $1\\times10^{-9}$\t& $2.0$\t& 0.10\t& 1.3 & 503 \\\\\n \t\\textbf{LaU-4G} & $\\mathbf{1\\times10^{-5}}$\t& $\\mathbf{2.0}$ & $\\mathbf{0.10}$ & $\\mathbf{126}$ & $\\mathbf{50}$ \\\\\n \tLaU-5B\t \t& $1\\times10^{-9}$\t& $2.0$\t& 
0.20 \t& 1.3 & 503 \\\\\n \tLaU-5G\t \t& $1\\times10^{-9}$\t& $2.0$\t& 0.20 & 1.3 & 503 \\\\\n \tLaU-6B\t \t& $1\\times10^{-9}$\t& $2.0$ & 0.50\t& 1.3 & 503 \\\\\n \tLaU-6G\t \t& $1\\times10^{-9}$\t& $2.0$ & 0.50\t& 1.3 & 503 \\\\\n \tLaU-7G\t \t& $1\\times10^{-8}$\t& $10$ & 0.01\t& 4.0 & 283 \\\\\n \tLaU-8G\t \t& $1\\times10^{-8}$\t& $10$ & 0.05\t& 4.0 & 283 \\\\\n \tLaU-9G\t \t& $1\\times10^{-8}$\t& $10$ & 0.10\t& 4.0 & 283 \\\\\n \tLaU-10G\t\t& $1\\times10^{-8}$\t& $10$\t& 0.50\t& 4.0 & 283 \\\\\n \\hline\n \\hline\n \\end{tabular}\n \\label{table_simulations_vmushear}\n\\end{table}\n\n\n\\begin{figure}[t]\n\\begin{center}\n \\includegraphics[width=\\columnwidth]{LaU_9G__ts}\n\\end{center}\n\\caption[]{\\textbf{Laminar $v_\\mu$--shear dynamo:}\ntime evolution of the magnetic field\n$B_\\mathrm{rms}$, the velocity $u_\\mathrm{rms}$, the magnetic helicity\n$\\langle \\mathbf{A} \\cdot \\mathbf{B} \\rangle$,\nthe chemical potential $\\mu_\\mathrm{rms}$ (multiplied by a factor\nof $2\/\\lambda$), and $\\langle \\mathbf{A} \\cdot \\mathbf{B}\n\\rangle +2\\mu_\\mathrm{rms}\/\\lambda$\n(run LaU-4G).}\n\\label{fig_AlphaShear_t}\n\\end{figure}\n\nSince our simulations have periodic boundary conditions, we model shear\nvelocities as $U_S=(0, u_S \\cos x, 0)$.\nThe mean shear velocity $\\overline{u}_S$ over half the box is\n$\\overline{u}_S = (2\/\\pi) u_S$.\nIn Figure~\\ref{fig_AlphaShear_t} we show the time evolution of the magnetic field\n(which starts to be excited from a Gaussian initial field),\nthe velocity $u_\\mathrm{rms}$, the magnetic helicity\n$\\langle \\mathbf{A} \\cdot \\mathbf{B} \\rangle$,\nthe chemical potential $\\mu_\\mathrm{rms}$ (multiplied by a factor\nof $2\/\\lambda$), and $\\langle \\mathbf{A} \\cdot \\mathbf{B}\n\\rangle +2\\mu_\\mathrm{rms}\/\\lambda$ for run LaU-4G.\nThe growth rate for the chiral--shear dynamo (the $v_\\mu^2$--shear dynamo)\nis larger than that for the laminar chiral dynamo (the $v_\\mu^2$--dynamo).\nAfter a time of roughly 
$0.03~t_\\eta$, the system enters a nonlinear\nphase, in which the velocity field is affected by the magnetic field,\nbut the magnetic field can still increase slowly.\nSaturation of the dynamo occurs after approximately $0.1~t_\\eta$.\n\nFor Gaussian initial fields, we have observed a short delay in the\ngrowth of the magnetic field.\nIn both cases, the dynamo growth rate increases with increasing shear.\nAs for the chiral $v_\\mu^2$ dynamo, we observe perfect\nconservation of the quantity $\\langle \\mathbf{A} \\cdot \\mathbf{B} \\rangle +\n2\\mu_\\mathrm{rms}\/\\lambda$ in the simulations of the laminar $v_\\mu$--shear\ndynamo.\n\n\\begin{figure}[t]\n \\begin{center}\n \\includegraphics[width=\\columnwidth]{Beltrami_GammaOmega_Us}\n\\end{center}\n\\caption[]{\\textbf{Laminar $v_\\mu$--shear dynamo:}\ngrowth rate (top panel) and dynamo frequency (bottom panel) as a function\nof the mean shear $\\overline{u}_S$ for the Beltrami initial field\n(runs LaU-$n$B with $n=1$--$6$; see Table~\\ref{table_simulations_vmushear}).}\n\\label{Gamma_Us_Beltrami}\n\\end{figure}\n\nIn Figure~\\ref{Gamma_Us_Beltrami} we show\nthe theoretical dependence of the growth rate $\\gamma$ and the dynamo frequency\n$\\omega$ on the shear velocity $\\overline{u}_S$\nfor Beltrami initial conditions at different wavenumbers;\nsee Equations~(\\ref{gamma1}) and~(\\ref{gam-max}).\nThe dynamo growth rate is estimated from an exponential fit.\nThe result of the fit depends slightly on the fitting regime, leading\nto an error of the order of 10\\%.\nThe dynamo frequency is determined afterward by dividing the magnetic field\nstrength by $\\mathrm{exp}(\\gamma t)$ and fitting a sine function.\nDue to the small amplitude and a limited number of periods of dynamo waves,\nthe result is sensitive to the fit regime considered.\nHence we assume a conservative error of 50\\% for the dynamo frequency.\nThe blue diamonds correspond to the numerical results.\nWithin the error bars, the theoretical and 
numerical results are in agreement.\n\n\\subsubsection{Simulations of the laminar $v_\\mu^2$--shear dynamo}\n\n\\begin{figure}[t]\n\\begin{center}\n \\includegraphics[width=\\columnwidth]{a2shear_Gamma_Us}\n\\end{center}\n\\caption[]{\\textbf{Laminar $v_\\mu^2$--shear dynamo:}\ngrowth rate $\\gamma$ as a function of\nmean shear $\\overline{u}_S$.\nFor comparison, we plot the maximum growth rate of $v_\\mu^2$ dynamo\n(\\ref{gamma-max}) and of the $v_\\mu$--shear dynamo (\\ref{gam-max}).\nThe solid black line is the theoretically predicted maximum growth rate\n(see Equation~\\ref{eq_gamma_aS}).\n}\\label{fig_Gamma_Us_aShear}\n\\end{figure}\n\n\nThe growth rate of chiral--shear dynamos versus mean shear in\nthe range between $u_S=0.01$\nand $0.5$ is shown in Figure~\\ref{fig_Gamma_Us_aShear}.\nWe choose a large initial value of the chemical potential, i.e.\\\n$\\mu_0=10$, to ensure that $k_\\mathrm{max}$ is\ninside the box for all values of $\\overline{u}_S$.\nWe overplot the growth rates found from the simulations with the maximum growth\nrate given by Equation (\\ref{eq_gamma_aS}).\nIn addition, we show the theoretical predictions for the limiting cases\nof the $v_\\mu^2$ and\n$v_\\mu$--shear dynamos; see Equations~(\\ref{gamma-max}) and (\\ref{gam-max}).\nInspection of Figure~\\ref{fig_Gamma_Us_aShear} shows that the results\nobtained from the simulations agree with theoretical predictions.\n\n\n\n\n\\section{Chiral magnetically driven turbulence}\n\\label{sec_turbdynamo1}\n\n\nIn this section we show that the CME\ncan drive turbulence via the Lorentz force in the Navier-Stokes equation.\nWhen the magnetic field increases exponentially, due to the\nsmall-scale chiral magnetic dynamo with growth rate $\\gamma$,\nthe Lorentz force,\n$({\\bm \\nabla} {\\bm \\times} {\\bm{B}}) {\\bm \\times} \\bm{B}$,\nincreases at the rate $2\\gamma$.\nThe laminar dynamo occurs only up to the first nonlinear phase,\nwhen the Lorentz force starts to produce turbulence\n(referred to as 
chiral magnetically driven turbulence).\nWe will also demonstrate here that, during the second nonlinear phase,\na large-scale dynamo is excited by the chiral $\\alpha_\\mu$ effect\narising in chiral magnetically driven turbulence.\nThe chiral $\\alpha_\\mu$ effect was studied using different\nanalytical approaches in Paper~I.\nThis effect is caused by an interaction of the CME\nand fluctuations of the small-scale\ncurrent produced by tangling magnetic fluctuations.\nThese fluctuations are generated by tangling of the large-scale\nmagnetic field through sheared velocity fluctuations.\nOnce the large-scale magnetic field becomes strong enough,\nthe chiral chemical potential decreases, resulting in the saturation\nof the large-scale dynamo instability.\n\nThis situation is similar to that of\ndriving small-scale turbulence via the Bell instability\nin a system with an external cosmic-ray current\n\\citep{B04,BL14}, and the generation of a\nlarge-scale magnetic field by the Bell turbulence; see \\citet{RNBE2012}\nfor details.\n\n\\subsection{Mean-field theory for large-scale dynamos}\n\\label{sec_meanfieldMHD}\n\nIn this section, we outline the theoretical predictions\nfor large-scale dynamos based on mean-field theory;\nsee Paper~I for details.\nThe mean induction equation is given by\n\\begin{eqnarray}\n\\frac{\\partial \\overline{\\bm{B}}}{\\partial t} &=&\n\\bm{\\nabla} \\times \\left[\\overline{\\bm{U}} \\times \\overline{\\bm{B}}\n+ (\\overline{v}_\\mu + \\alpha_\\mu) \\overline{\\bm{B}}\n- (\\eta+ \\, \\eta_{_{T}})\\bm{\\nabla} \\times \\overline{\\bm{B}}\\right], \\nonumber\\\\\n\\label{ind4-eq}\n\\end{eqnarray}\nwhere $\\overline{v}_\\mu = \\eta \\overline{\\mu}_{0}$, and we consider the following equilibrium\nstate: $\\overline{\\mu}_{\\rm eq}=\\overline{\\mu}_{0}={\\rm const}$ and $\\overline{\\bm{U}}_{\\rm eq}=0$.\nThis mean-field equation contains additional terms that
are\nrelated to the chiral $\\alpha_\\mu$ effect and the turbulent magnetic diffusivity\n$\\eta_{_{T}}$.\nIn the mean-field equation, the chiral $v_\\mu$ effect is replaced\nby the mean chiral $\\overline{v}_\\mu$ effect.\nNote, however, that at large fluid and magnetic Reynolds numbers, the $\\alpha_\\mu$ effect\ndominates the $\\overline{v}_\\mu$ effect.\n\nTo study the large-scale dynamo,\nwe seek a solution of Equation~(\\ref{ind4-eq}) for small perturbations of\nthe form\n$\\overline{\\bm{B}}(t,x,z)=\\overline{B}_y(t,x,z) {\\bm e}_y + \\bm{\\nabla} \\times\n[\\overline{A}(t,x,z) {\\bm e}_y]$,\nwhere ${\\bm e}_y$ is the unit vector directed along the $y$ axis.\nThe functions $\\overline{B}_y(t,x,z)$ and $\\overline{A}(t,x,z)$ are determined by\n\\begin{multline}\n \\frac{\\partial \\overline{A}(t,x,z)}{\\partial t}\n =(\\overline{v}_\\mu + \\alpha_\\mu)\\, \\overline{B}_y\n + (\\eta+ \\, \\eta_{_{T}}) \\, \\Delta \\overline{A},\n \\label{me-A-eq}\n\\end{multline}\n\\begin{multline}\n \\frac{\\partial \\overline{B}_y(t,x,z)}{\\partial t}\n =-(\\overline{v}_\\mu + \\alpha_\\mu) \\, \\Delta \\overline{A}\n + (\\eta+ \\, \\eta_{_{T}}) \\, \\Delta \\overline{B}_y ,\n\\label{me-By-eq}\n\\end{multline}\nwhere $\\Delta=\\nabla_x^2 + \\nabla_z^2$, and the other components of the magnetic\nfield are $\\overline{B}_x=-\\nabla_z \\overline{A}$ and $\\overline{B}_z=\\nabla_x \\overline{A}$.\n\nWe look for a solution of the mean-field equations~(\\ref{me-A-eq})\nand~(\\ref{me-By-eq}) in the form\n\\begin{eqnarray}\n \\overline{A}, \\overline{B}_y \\propto \\exp[\\gamma t + i (k_x x + k_z z)],\n\\end{eqnarray}\nwhere the growth rate of the large-scale dynamo instability is given by\n\\begin{eqnarray}\n\\gamma = |(\\overline{v}_\\mu + \\alpha_\\mu)\\, k| - (\\eta+ \\, \\eta_{_{T}}) \\, k^2,\n\\label{gamma_turb}\n\\end{eqnarray}\nwith $k^2=k_x^2 + k_z^2$.\nThe maximum growth rate of the large-scale dynamo instability, attained at\nthe
wavenumber\n\\begin{equation}\n k \\equiv k_\\alpha\n ={|\\overline{v}_\\mu + \\alpha_\\mu|\n \\over 2(\\eta+ \\, \\eta_{_{T}})},\n\\label{kmax_turb}\n\\end{equation}\nis given by\n\\begin{eqnarray}\n\\gamma^{\\rm max}_\\alpha\n= {(\\overline{v}_\\mu + \\alpha_\\mu)^2\\over 4 (\\eta+ \\, \\eta_{_{T}})}\n= {(\\overline{v}_\\mu + \\alpha_\\mu)^2\\over 4 \\eta \\, (1 + \\, {\\rm Re}_{_\\mathrm{M}}\/3)}.\n\\label{gammamax_turb}\n\\end{eqnarray}\nFor small magnetic Reynolds numbers,\n${\\rm Re}_{_\\mathrm{M}}=u_0 \\ell_0\/\\eta = 3 \\eta_{_{T}}\/\\eta$, this equation\nyields the correct result for the laminar $v_\\mu^2$ dynamo;\nsee Equation~(\\ref{gamma-max}).\n\nAs was shown in Paper~I, the CME\nin the presence of turbulence gives rise to the chiral $\\alpha_\\mu$ effect.\nThe expression for $\\alpha_\\mu$\nfound for large Reynolds numbers and a weak\nmean magnetic field is\n\\begin{eqnarray}\n \\alpha_\\mu = - {2 \\over 3} \\overline{v}_\\mu \\ln {\\rm Re}_{_\\mathrm{M}}.\n\\label{alphamu}\n\\end{eqnarray}\nSince the $\\alpha_\\mu$ effect in homogeneous turbulence\nis always negative, while the $\\overline{v}_\\mu$ effect is positive,\nthe chiral $\\alpha_\\mu$ effect decreases the $\\overline{v}_\\mu$ effect.\nBoth effects compensate each other at ${\\rm Re}_{_\\mathrm{M}}=4.5$ (see Paper~I).\nHowever, for large fluid and magnetic Reynolds numbers, $\\overline{v}_\\mu \\ll\n|\\alpha_\\mu|$, and we can neglect $\\overline{v}_\\mu$ in these equations.\nThis regime corresponds to the large-scale $\\alpha_\\mu^2$ dynamo.\n\n\n\\subsection{DNS of chiral magnetically driven turbulence}\n\nWe have performed a higher resolution $(576^3)$ three-dimensional\nnumerical simulation to study chiral magnetically driven turbulence.\nThe chiral Mach number of this simulation is ${\\rm Ma}_\\mu=2\\times10^{-3}$,\nthe chiral nonlinearity parameter is $\\lambda_\\mu=2\\times10^{-7}$, and the\nmagnetic and the chiral Prandtl numbers are unity.\nThe velocity field is initially zero, and 
the magnetic field is Gaussian noise,\nwith $B=10^{-6}$.\n\n\\begin{figure}[t]\n\\centering\n \\subfigure{\\includegraphics[width=\\columnwidth]{LamTurb_ts_mu20}}\n\\caption{\n{\\bf Chiral magnetically driven turbulence.}\nTime evolution for different quantities.\n}\n \\label{fig_LTts}\n\\end{figure}\n\nThe time evolution of $B_\\mathrm{rms}$, $u_\\mathrm{rms}$,\n$\\langle \\mathbf{A} \\cdot \\mathbf{B} \\rangle$,\n$\\mu_\\mathrm{rms}$ (multiplied by\n$2\/\\lambda$), and $\\langle \\mathbf{A} \\cdot \\mathbf{B}\n\\rangle +2\\mu_\\mathrm{rms}\/\\lambda$\nof chiral magnetically driven turbulence\nis shown in the top panel of Figure~\\ref{fig_LTts}.\nFour phases can be distinguished:\n\\begin{asparaenum}[\\it (1)]\n\\item{The kinematic phase of small-scale chiral dynamo instability\nresulting in exponential growth of small-scale magnetic field\ndue to the CME.\nThis phase ends approximately at $t=0.05 t_\\eta$.\n}\n\\item{The first nonlinear phase resulting in production\nof chiral magnetically driven turbulence.\nIn this phase, $u_\\mathrm{rms}$ grows from very weak noise\nover seven orders of magnitude up to nearly the equipartition value\nbetween turbulent kinetic and magnetic energies,\ndue to the Lorentz force in the Navier-Stokes equation.\n}\n\\item{The second nonlinear phase resulting in large-scale dynamos.\nIn particular, the evolution of $B_\\mathrm{rms}$ for $t > 0.12 t_\\eta$\nis affected by the velocity field.\nDuring this phase, the velocity stays approximately constant,\nwhile the magnetic field continues to increase\nat a reduced growth rate in comparison with that of the\nsmall-scale chiral dynamo instability.\nIn this phase, we also observe the formation of inverse energy transfer\nwith a $k^{-2}$ magnetic energy spectrum that was previously found\nand comprehensively analyzed by \\cite{BSRKBFRK17} in DNS of chiral MHD\nwith different parameters.\n}\n\\item{The third nonlinear phase resulting in saturation of the large-scale\ndynamos, which ends at 
$\\approx 0.45 t_\\eta$ when the\nlarge-scale magnetic field reaches the maximum value.\nThe conserved quantity $\\langle \\mathbf{A} \\cdot \\mathbf{B} \\rangle\n+2\\mu_\\mathrm{rms}\/\\lambda$ stays constant over all four phases.\nSaturation is caused by the $\\lambda$ term in the evolution equation of the chiral\nchemical potential, which leads to a decrease of $\\mu$ from its initial value to 1.\n}\n\\end{asparaenum}\n\nThe middle panel of Figure~\\ref{fig_LTts} shows the measured growth rate of\n$B_\\mathrm{rms}$ as a function of time.\nIn the kinematic phase, $\\gamma$ agrees\nwith the theoretical prediction for the\nlaminar chiral dynamo instability; see Equation~(\\ref{gamma-max}),\nwhich is indicated by the dashed red horizontal line in the middle panel of\nFigure~\\ref{fig_LTts}.\nDuring this phase, the growth rate\nof the velocity field, given by the dotted gray line in Figure~\\ref{fig_LTts}, is\nlarger by roughly a factor of two than that of the magnetic field.\nThis is expected when turbulence is driven via the Lorentz force, which is\nquadratic in the magnetic field.\n\nOnce the kinetic energy is of the same order\nas the magnetic energy, the growth rate of\nthe magnetic field decreases abruptly by a factor of more than five.\nThis is expected in the presence of turbulence, because\nthe energy dissipation of the magnetic field is increased by turbulence due to\nturbulent magnetic diffusion.\nAdditionally, however, a positive contribution to the\ngrowth rate comes from the chiral $\\alpha_\\mu$ effect\nthat causes large-scale dynamo instability.\n\nThe time evolution of the ratio of the mean magnetic field to the total\nfield, $\\overline{B}\/B_\\mathrm{rms}$, is presented in the bottom panel\nof Figure~\\ref{fig_LTts}.\nThe mean magnetic field grows faster than the rms of the total magnetic\nfield in the time interval between 0.14 and 0.2 $t_\\eta$.\nDuring this time, the large-scale (mean-field) dynamo operates,\nso magnetic energy is transferred to 
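larger spatial scales.

The growth rates quoted above are obtained as the logarithmic time derivative of $B_\mathrm{rms}$. A minimal sketch of such a measurement (the function name and the synthetic time series are ours, purely illustrative, not output of the simulation code):

```python
import numpy as np

def growth_rate(t, B_rms):
    """Instantaneous growth rate gamma(t) = d ln(B_rms) / dt."""
    return np.gradient(np.log(B_rms), t)

# Synthetic exponential series with gamma = 50 (in units of 1/t_eta),
# standing in for the B_rms time series of a run.
t = np.linspace(0.0, 0.1, 200)
B_rms = 1e-6 * np.exp(50.0 * t)
gamma = growth_rate(t, B_rms)   # recovers ~50 throughout
```

A fit of $\gamma(t)$ over a chosen time window then yields the phase-averaged growth rates discussed in the text. During the mean-field dynamo phase, magnetic energy is in this way seen to be transferred to 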
larger spatial scales.\nWe now determine, directly from DNS,\nthe growth rate of the large-scale dynamo using\nEquation~(\\ref{gamma_turb}).\nTo this end, we determine the Reynolds number and the strength of the $\\alpha_\\mu$\neffect using the data from our DNS.\nWhereas the rms velocity is a direct output of the simulation, the turbulent\nforcing scale can be found from analysis of the energy spectra.\nThe theoretical value based on these estimates at the time $0.2\\,t_\\eta$ is\nindicated as the solid red horizontal line in the middle\npanel of Figure~\\ref{fig_LTts}.\n\n\\begin{figure}[t]\n\\centering\n \\subfigure{\\includegraphics[width=\\columnwidth]{LamTurb_spec_mu20}}\n\\caption{\n{\\bf Chiral magnetically driven turbulence.}\nMagnetic (blue lines) and kinetic (black lines) energy\nspectra are calculated at equal time differences,\nand the very last spectra are shown as solid lines.}\n \\label{fig_LTspec}\n\\end{figure}\n\n\nThe evolution of kinetic and magnetic energy spectra is\nshown in Figure~\\ref{fig_LTspec}.\nWe use equal time steps between the different spectra, covering the whole\nsimulation time.\nThe magnetic energy, indicated by blue\nlines, increases initially at $k=\\mu_0\/2=10$, which agrees with the theoretical\nprediction for the chiral laminar dynamo.\nThe magnetic field drives a turbulent spectrum of the kinetic energy, as can\nclearly be seen in\nFigure~\\ref{fig_LTspec} (indicated by black lines in Figure~\\ref{fig_LTspec}).\nThe final spectral slope of the kinetic energy is roughly $-5\/3$.\nThe magnetic field continues to grow at small wavenumbers,\nproducing a peak at $k=1$ in the final stage of the time evolution.\n\n\\begin{figure}[t]\n\\centering\n \\subfigure{\\includegraphics[width=\\columnwidth]{LamTurb_kmax_t_mu20}}\n\\caption{\n{\\bf Chiral magnetically driven turbulence.}\nThe black solid line shows the inverse correlation length, $k_{\\rm M}$,\nof the magnetic energy, defined by Equation (\\ref{eq_kcorr}), as a\nfunction 
of time $t$.\nUsing this wavenumber and the rms velocity, the fluid and magnetic\nReynolds numbers are estimated (see Equation~(\ref{eq_Rmkcorr})); these are\nshown by the dashed blue line.}\n \label{fig_LTkmax}\n\end{figure}\n\n\nWe determine the correlation length of the magnetic field from\nthe magnetic energy spectrum via\n\begin{equation}\n\xi_{\rm M}(t)\equiv\n k_{\rm M}^{-1}(t) = \frac{1}{\mathcal{E}_{\rm M}(t)} \int k^{-1}\n E_{\rm M}(k,t)~\mathrm{d}k .\n\label{eq_kcorr}\n\end{equation}\nThe wavenumber $k_{\rm M}$ so defined coincides (up to a numerical factor of order unity) with\nthe so-called tracking solution, $\Delta \mu_{\rm tr}$ in \citet{BFR12}.\nThere it was demonstrated that, in the course of evolution, the chiral\nchemical potential follows $k_{\rm M}(t)$.\nIndeed, the evolution of $k_{\rm M}$, shown in Figure~\ref{fig_LTkmax},\nstarts at around 10 (the value of $\mu_0\/2$ in this simulation) and then decreases to\n$k_{\rm M} = k_1$ (corresponding to the simulation box size) at $t\approx0.18~t_\eta$.\nInterestingly, the chemical potential is affected by\nmagnetic helicity only at much later times, as can be seen in\nFigure~\ref{fig_LTkmax}.\nBased on the wavenumber $k_{\rm M}$, we estimate\nthe Reynolds numbers as\n\begin{equation}\n {\rm Re}_{_\mathrm{M}} = \mathrm{Re} = \frac{u_\mathrm{rms}}{\nu k_{\rm M}} .\n\label{eq_Rmkcorr}\n\end{equation}\nFigure~\ref{fig_LTkmax} shows that the Reynolds number increases exponentially,\nmostly due to the fast increase of $u_\mathrm{rms}$, and saturates\nlater at ${\rm Re}_{_\mathrm{M}}\approx 10^2$.\nSimilarly, the turbulent diffusivity can be estimated as\n\begin{equation}\n \eta_{_{\rm T}} = \frac{u_\mathrm{rms}}{3~k_{\rm M}} .\n\label{eq_etaTkcorr}\n\end{equation}\nDuring the operation of the mean-field large-scale dynamo, we find\nthat $\eta_{_{\rm T}}\approx2.4\times10^{-3}$,\nwhich is about 24 times larger than the molecular diffusivity 
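$\eta$.

Equations~(\ref{eq_kcorr})--(\ref{eq_etaTkcorr}) are straightforward to evaluate for a discrete spectrum. A minimal sketch (the spectrum and all parameter values below are synthetic, chosen only for illustration):

```python
import numpy as np

def _trapezoid(y, x):
    # Simple trapezoidal rule (avoids NumPy version differences).
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def correlation_wavenumber(k, E_M):
    """k_M from xi_M = k_M^{-1} = E^{-1} int k^{-1} E_M(k) dk."""
    xi_M = _trapezoid(E_M / k, k) / _trapezoid(E_M, k)
    return 1.0 / xi_M

def reynolds_number(u_rms, nu, k_M):
    """Re_M = Re = u_rms / (nu k_M)."""
    return u_rms / (nu * k_M)

def turbulent_diffusivity(u_rms, k_M):
    """eta_T = u_rms / (3 k_M)."""
    return u_rms / (3.0 * k_M)

# Synthetic magnetic energy spectrum peaked near k = 10.
k = np.arange(1.0, 101.0)
E_M = k**4 / (1.0 + (k / 10.0)**8)
k_M = correlation_wavenumber(k, E_M)
Re_M = reynolds_number(u_rms=0.05, nu=1e-4, k_M=k_M)
eta_T = turbulent_diffusivity(u_rms=0.05, k_M=k_M)
```

During mean-field dynamo action, such an estimate yields a turbulent diffusivity that considerably exceeds the molecular value 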
$\eta$.\nUsing these estimates, we determine\nthe chiral magnetic $\alpha_\mu$ effect from Equation~(\ref{alphamu}).\nThe large-scale dynamo growth rate (\ref{gamma_turb}) is\nshown as the solid red horizontal line in the middle panel of Figure~\ref{fig_LTts}\nand is in agreement with the DNS results shown as the black solid line.\n\n\n\begin{figure}[t]\n\centering\n \subfigure{\includegraphics[width=\columnwidth]{LamTurb_ts4_mu20}}\n \subfigure{\includegraphics[width=\columnwidth]{LamTurb_gammak_mu20}}\n\caption{\n{\bf Chiral magnetically driven turbulence.}\nThe evolution of the magnetic energy $E_{\rm M}$ at different wavenumbers $k$\n(top panel).\nThe growth rate as a function of $k$ in different time intervals, as given in the\nplot legend (bottom panel). The black line corresponds to a fit, while the theoretical\nexpectations are given as a red\nline.}\n \label{fig_LT_gammak}\n\end{figure}\n\nFurther analysis of the evolution of the magnetic field at different wavenumbers\nis presented in Figure~\ref{fig_LT_gammak}.\nIn the top panel, we display the magnetic energy at various\nwavenumbers as a function of time.\nIn the kinematic phase, for $t<0.1~t_\eta$, the fastest amplification\noccurs at $k=10$, as can also be seen in the energy spectra.\nAt other wavenumbers, the amplification is slower.\n\n\begin{itemize}\n\item{For the fastest-growing mode of the large-scale dynamo instability to fit\ninto the computational domain, the condition $k_\mathrm{max}>1$ needs\nto be fulfilled.\nAs shown in Equation (\ref{kmax_turb}), $k_\mathrm{max}$ is proportional\nto $\eta\/\eta_{_{\rm T}}$, which is inversely proportional to the magnetic\nReynolds number ${\rm Re}_{_\mathrm{M}}$.\nAs a result, the chemical potential needs to be sufficiently large for\n$k_\mathrm{max}>1$.}\n\item{\nDue to nonlocal effects, the turbulent diffusivity $\eta_{_{\rm T}}$\nis generally scale-dependent and\ndecreases above $k_\mathrm{f}$ \citep{BRS08}.\nFor comparison with mean-field theory, the chiral dynamo\ninstability has to occur on scales $k < k_\mathrm{f}$, where\n$\eta_{_{\rm T}} \approx u_\mathrm{rms}\/(3k_\mathrm{f})$.\nNote, however, that\nthe presence 
of a mean kinetic helicity in the system\ncaused by the CME (see Paper~I)\ncan increase the turbulent diffusivity $\\eta_{_{\\rm T}}$\nfor moderate magnetic Reynolds numbers by up to 50\\% \\citep{BSR17}.}\n\\item{\nTo simplify the system, we\navoid classical small-scale dynamo action, which\noccurs at magnetic Reynolds numbers larger than\n${\\rm Re}_{_{\\mathrm{M,crit}}}\\approx50$.}\n\\end{itemize}\n\n\n\n\\begin{table}[t]\n\\caption{\nOverview of Runs With Externally Forced Turbulence\n(Reference Run in Bold)}\n\\centering\n \\begin{tabular}{l|lllllll}\n \\hline\n \\hline\n \\\\\n\t~ \t & $\\mu_0$ \t& $\\dfrac{{\\rm Ma}_\\mu}{10^{-3}}$ \t& $\\dfrac{\\lambda_\\mu}{10^{-6}}$ \t& $\\dfrac{k_\\lambda}{10^{-3} \\mu_0}$ & $\\dfrac{k_\\mathrm{diff}}{\\mu_0}$ & $k_\\mathrm{f}$ \t& ${\\rm Re}_{_\\mathrm{M}}$\t\t\\\\\t\n\t~ \t & \t\t& ~\t\t \t& ~\t\t \t& ~\t\t& ~\t\t& ~\t \t& (early $\\rightarrow$ late)\t\t\\\\\t \\hline\n \tTa2-1 \t& $20$\t\t& $8$\t& $16$ \t& 160\t\t \t& 4.5\t& $10$ \t\t& $24 \\rightarrow 19$ \\\\\n \tTa2-2 \t& $20$\t\t& $4$\t& $4.0$ & 80\t\t \t& 63\t& $10$ \t\t& $36 \\rightarrow 28$ \t\\\\\n \tTa2-3 \t& $20$\t\t& $8$\t& $16$ \t& 160\t\t \t& 45\t& $10$ \t\t& $16 \\rightarrow 14$ \t\\\\\n \tTa2-4 \t& $20$\t\t& $4$\t& $4.0$ & 80\t\t \t& 63\t& $10$ \t\t& $4 \\rightarrow 13$ \t\\\\\n \t\\textbf{Ta2-5} & $\\mathbf{20}$\t& $\\mathbf{8}$ & $\\mathbf{160}$ & $\\mathbf{51}$ \t& $\\mathbf{80}$\t& $\\mathbf{10}$ \t& $\\mathbf{24} \\rightarrow\\mathbf{18}$ \t\\\\\n \tTa2-6 \t& $20$\t\t& $8$\t& $1.6$ & 51\t\t \t& 80\t& $10$ \t\t& $16 \\rightarrow 14$ \t\\\\\n \tTa2-7 \t& $30$\t\t& $12$\t& $32$ \t& 230\t\t \t& 38\t& $4$ \t\t& $42 \\rightarrow 58$ \t\\\\\n \tTa2-8 \t& $30$\t\t& $9$\t& $18$ \t& 160\t\t \t& 43\t& $4$ \t\t& $58 \\rightarrow 65$ \t\\\\\n \tTa2-9 \t& $30$\t\t& $9$\t& $13.5$ & 150\t\t \t& 47\t& $4$ \t\t& $82 \\rightarrow 74$ \t\\\\\n \tTa2-10 & $40$\t\t& $8$\t& $16$ \t& 160\t\t \t& 45\t& $4$ \t\t& $119 \\rightarrow 107$ \\\\\n \\hline\n \\hline\n 
\end{tabular}\n \label{table_simulations_forced}\n\end{table}\n\n\n\subsection{DNS of chiral dynamos in forced turbulence}\n\label{sec:forced-chiral-dynamos}\n\n\begin{figure}[t]\n\centering\n \includegraphics[width=\columnwidth]{ts__TurbRef}\n\caption{\n{\bf Externally forced turbulence.}\nTime evolution of the magnetic field, the velocity field, and\nthe chemical potential, as well as the mean value of the magnetic\nhelicity (top panel).\nThe middle panel shows the growth rate of $B_\mathrm{rms}$ as a function of\ntime (solid black line).\nThe red lines are theoretical expectations in different dynamo phases.\nIn the bottom panel, the ratio of the mean magnetic field to the total field\n$B_\mathrm{rms}$ is presented.}\n\label{fig_alphamu_ts}\n\end{figure}\n\nThe time evolution of different quantities in our reference run is presented in\nFigure~\ref{fig_alphamu_ts}.\nThe magnetic field first increases exponentially,\nwith a growth rate $\gamma \approx 60~t_\eta^{-1}$, which is\nabout a factor of 1.6 lower than that expected for the laminar $v_\mu^2$ dynamo;\nsee the middle panel of Figure~\ref{fig_alphamu_ts}.\nThis difference seems to be caused by the presence of random forcing;\nsee the discussion below.\nAt approximately 0.2 $t_\eta$, the growth rate decreases to a value\nof $\gamma\approx 15~t_\eta^{-1}$, consistent with that of the mean-field\nchiral $\alpha_\mu^2$ dynamo, before saturation occurs at $0.4\,t_\eta$.\nThe evolution of $B_\mathrm{rms}$ is qualitatively comparable to that\nin chiral magnetically driven turbulence; see Figure~\ref{fig_LTts}.\nAn additional difference from the latter\nis the value of $u_\mathrm{rms} \approx 0.1$ for\nexternally forced turbulence, which is controlled by the intensity\nof the forcing function.\nAn indication of the presence of a mean-field dynamo is the evolution of\n$\overline{B}\/B_\mathrm{rms}$ in the bottom panel of\nFigure~\ref{fig_alphamu_ts}, which reaches a value of unity at 
$0.3\\,t_\\eta$.\n\nThe energy spectra presented in Figure~\\ref{fig_alphamu_spec} support the\nlarge-scale dynamo scenario.\nFirst, the magnetic energy increases at all scales,\nand, at later times, the maximum of the magnetic energy\nis shifted to smaller wavenumbers, finally producing a\npeak at $k=1$, i.e., the smallest possible wavenumber in our periodic domain.\n\nA detailed analysis of the growth of magnetic energy is presented in\nFigure~\\ref{fig_alphamu_specana}.\nIn the first phase, the growth rate of the magnetic field is\nindependent of the wavenumber $k$ (see top panel), due to a coupling\nbetween different modes.\nThe growth rate measured in this phase is less than that\nin the laminar case (see middle panel), due to a scale-dependent\nturbulent diffusion caused by the random forcing.\n\nWithin the time interval $(0.22$--$0.28)\\,t_\\eta$, only the magnetic field at\n$k=1$ increases.\nThis is clearly seen in the bottom panel of Figure~\\ref{fig_alphamu_specana},\nwhere we show the evolution of the magnetic energy at different wavenumbers $k$.\nThe growth rate of the mean-field dynamo, which is determined\nat $k=1$, agrees with the result from\nmean-field theory, given by Equation~(\\ref{gammamax_turbRm}).\nThere is a small dependence of the resulting mean-field growth rate on the\nexact fitting regime.\nIf the phase of the\nmean-field dynamo is very short, changing the fitting range can affect the\nresult by a factor up to 30 \\%.\nWe use the latter value as an estimate of the uncertainty in the growth rate, and,\nin addition, indicate an error of 20 \\% in determining the Reynolds number,\nwhich is caused by the temporal variations of $u_\\mathrm{rms}$.\n\n\\begin{figure}[t]\n\\centering\n \\subfigure{\\includegraphics[width=\\columnwidth]{spec__TurbRef}}\n\\caption{\n{\\bf Externally forced turbulence.}\nEvolution of kinetic (black lines) and magnetic energy spectra (blue lines)\nfor the reference run Ta2-5.\nThe ratio $\\mu_0\/\\lambda$ is 
indicated by the horizontal dashed line.}\n\label{fig_alphamu_spec}\n\end{figure}\n\n\begin{figure}[t]\n\centering\n \subfigure{\includegraphics[width=\columnwidth]{Emag_t__TurbRef}}\n \subfigure{\includegraphics[width=\columnwidth]{gamma_k__TurbRef}}\n\caption{\n{\bf Externally forced turbulence.}\nTime evolution of the magnetic energy at different wavenumbers $k$\n(top panel).\nThe remaining panels show the growth rates as a function of scale in\ndifferent fit intervals.}\n\label{fig_alphamu_specana}\n\end{figure}\n\n\n\subsection{Dependence on the magnetic Reynolds number}\n\nBased on the mean-field theory developed in Paper~I, we expect\nthe following.\nUsing the expression for the $\alpha_\mu$ effect\ngiven by\nEquation (\ref{alphamu}), the maximum growth rate (\ref{gammamax_turb})\nfor the mean-field dynamo can be rewritten as a function of the magnetic\nReynolds number:\n\begin{eqnarray}\n\gamma_{\rm max} ({\rm Re}_{_\mathrm{M}}) = \frac{\overline{v}_\mu^2 (1 - 2\/3~\ln{\rm Re}_{_\mathrm{M}} )^2}{4 \eta \,\n(1 + \, {\rm Re}_{_\mathrm{M}}\/3)} ,\n\label{gammamax_turbRm}\n\end{eqnarray}\nwhere we have used the ratio $\eta_{_{\rm T}}\/\eta = {\rm Re}_{_\mathrm{M}}\/3$.\n\nWe perform DNS with different Reynolds numbers to\ntest the scaling of $\gamma_{\rm max}({\rm Re}_{_\mathrm{M}})$ given by\nEquation~(\ref{gammamax_turbRm}).\nThe parameters of the runs with externally forced\nturbulence are summarized in Table \ref{table_simulations_forced}.\nWe vary $\nu$ ($=\eta$), the forcing wavenumber $k_\mathrm{f}$, as well as the\namplitude of the forcing,\nto determine the function $\gamma_{\rm max}({\rm Re}_{_\mathrm{M}})$.\nIn the initial phase, $u_\mathrm{rms}$ is constant in time.\nOnce large-scale turbulent dynamo action occurs,\nthere are additional minor\nvariations in $u_\mathrm{rms}$, because the system is already\nin the nonlinear phase.\nThe nonlinear terms in the Navier-Stokes equation lead to a modification 
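of the velocity field at small spatial scales.

The scaling in Equation~(\ref{gammamax_turbRm}) can be tabulated directly. A minimal sketch (the values of $\eta$ and $\overline{v}_\mu$ below are illustrative placeholders, not parameters of a particular run):

```python
import math

def gamma_max(Re_M, v_mu, eta):
    """Maximum mean-field growth rate vs. magnetic Reynolds number:
    gamma_max = v_mu^2 (1 - (2/3) ln Re_M)^2 / (4 eta (1 + Re_M / 3))."""
    return (v_mu**2 * (1.0 - (2.0 / 3.0) * math.log(Re_M))**2
            / (4.0 * eta * (1.0 + Re_M / 3.0)))

eta = 1e-4           # illustrative magnetic diffusivity
v_mu = 20.0 * eta    # illustrative chiral velocity, v_mu = mu_0 * eta
rates = {Re: gamma_max(Re, v_mu, eta) for Re in (10.0, 30.0, 100.0)}
```

For ${\rm Re}_{_\mathrm{M}} \gg 1$ the rate falls off roughly as $(\ln {\rm Re}_{_\mathrm{M}})^2\/{\rm Re}_{_\mathrm{M}}$, which is the trend tested in the runs of Table~\ref{table_simulations_forced}. Beyond this, the nonlinear terms in the Navier-Stokes equation also lead to a modification 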
of\nthe velocity field at small spatial scales, which affects the value of\n$u_\mathrm{rms}$ and results in\nthe small difference between\nthe initial and final values of the Reynolds numbers\n(see Table \ref{table_simulations_forced}).\n\nAccording to Equation~(\ref{kmax_turb}),\nthe wavenumber associated with the maximum growth rate\nof the large-scale turbulent\ndynamo instability decreases with increasing ${\rm Re}_{_\mathrm{M}}$.\nIn order to keep this mode inside the computational domain and hence to\ncompare the measured growth rate with the maximum one given by\nEquation~(\ref{gammamax_turbRm}),\nwe vary the value of $\mu_0$ in our simulations.\nThe variation of $\mu_0$ and the additional variation of $\eta$ for scanning\nthrough the ${\rm Re}_{_\mathrm{M}}$ parameter space imply that ${\rm Ma}_\mu$ changes\ncorrespondingly.\n\nThe values of the nonlinear parameter $\lambda$ should be within a certain range.\nIndeed, the saturation value of the magnetic field, given by\nEquation~(\ref{eq_Bsat}), is proportional to $\lambda^{-1\/2}$.\nIn order for the Alfv\'en velocity not to exceed the sound speed,\nwhich would result in a very small time step in DNS,\n$\lambda$ should not be below a certain value.\nOn the other hand, $\lambda$ should not be too large,\nas in this case the dynamo would saturate quickly, leaving only a very short\ntime interval of large-scale dynamo action.\nIn that case, determining the growth rate of the mean-field dynamo,\nand hence comparing with the mean-field theory, is difficult.\n\n\begin{figure}[t]\n\centering\n \includegraphics[width=\columnwidth]{gamma_Rm}\n\caption{\n{\bf Externally forced turbulence and chiral magnetically driven turbulence.}\nThe normalized growth rate $\gamma~\eta\/v_\mu^2$ of the magnetic\nfield as a function of the magnetic Reynolds number ${\rm Re}_{_\mathrm{M}}$.\nThe gray data points show the\ngrowth rate in the initial, purely kinematic phase of the simulations.\nThe 
blue data points show the measured growth rate of the magnetic field\non $k=1$, when the large-scale dynamo occurs.\nThe diamond-shaped data points represent simulations of forced turbulence, while\nthe dot-shaped data points refer to the case of\nchiral magnetically driven turbulence.\nThe growth rate observed in the initial laminar phase\nfor the case of chiral magnetically driven turbulence\nis shown at ${\\rm Re}_{_\\mathrm{M}}=2$, with the left arrow indicating that the actual ${\\rm Re}_{_\\mathrm{M}}$ is much\nlower and out of the plot range at this time; see Figure~\\ref{fig_LTkmax}.}\n \\label{fig_gamma_Re}\n\\end{figure}\n\nIn Figure~\\ref{fig_gamma_Re} we show the normalized growth\nrate $\\gamma~\\eta\/v_\\mu^2$ of the magnetic\nfield as a function of the magnetic Reynolds number ${\\rm Re}_{_\\mathrm{M}}$.\nThe gray data points show the growth rate in the initial,\npurely kinematic phase of the simulations.\nThe blue data points show the measured growth rate of the magnetic field\non $k=1$, when the large-scale dynamo occurs.\nFor comparison of the results with externally forced turbulence\n(indicated as diamond-shaped data points), we show\nin Figure~\\ref{fig_gamma_Re} also the results obtained for the\ndynamo in chiral magnetically driven turbulence, which are indicated as dots.\n\nIn DNS with externally forced turbulence, we see in all cases a\nreduced growth rate due to mode coupling.\nContrary to the case with externally forced turbulence,\nin DNS with the chiral magnetically driven turbulence,\nwe do initially observe the purely laminar dynamo\nwith the growth rate given by Equation~(\\ref{gamma-max}),\nbecause there is no mode coupling in the initial phase\nof the magnetic field evolution in this case.\nOn the other hand, the measured growth rates of the mean-field dynamo\nin both cases agree (within the error bars) with the growth rates\nobtained from the mean-field theory.\n\n\n\n\n\n\\section{Chiral MHD dynamos in astrophysical relativistic 
plasmas}\n\label{sec_astro}\n\n\nIn this section, the results for the\nnonlinear evolution of the chiral chemical potential,\nthe magnetic field, and the turbulent state of the plasma found in this paper\nare applied to astrophysical relativistic plasmas.\nWe begin by discussing the role of chiral dynamos in the early universe and\nidentify conditions under which the CME affects the generation and evolution of\ncosmic magnetic fields.\nFinally, in Section~\ref{sec:PNS}, we examine the importance of the CME\nin proto-neutron stars (PNSs).\n\n\n\subsection{Early Universe}\n\label{sec:early-universe}\n\nIn spite of many possible mechanisms that can produce magnetic fields in the\nearly universe\n\citep[see, e.g.,][for reviews]{W02,WEtAl12,DN13,Giovannini:2003yn,S16},\nunderstanding the origin of cosmic magnetic fields remains an open problem.\nTheir generation is often associated with nonequilibrium events in the universe\n(e.g., inflation or phase transitions).\nA period of particular interest is the electroweak (EW) epoch, characterized\nby temperatures of $\unit[10^{15}]{K}$ ($k_{\rm B} T \sim \unit[100]{GeV}$).\nSeveral important events take place around this time: the electroweak\nsymmetry gets broken, photons appear, while intermediate vector bosons\nbecome massive, and the asymmetry between matter and antimatter appears\nin the electroweak baryogenesis scenario \citep{KRS85};\nsee, for example, the review by \citet{MRM12}.\nMagnetic fields of appreciable strength can be generated as a consequence of\nthese events \citep{V91,Olesen:1992np,Enqvist:1993np,Enqvist:1994dq,\nVachaspati:1994ng,GGV95, 1996PhLB..380..253D,BBM96,V01,Semikoz:2010zua}.\nTheir typical correlation length $\xi_{\rm M}^{(\rm ew)} \sim (\ensuremath{\alpha_{\rm em}} T)^{-1}$\ncorresponds to only a few centimeters today -- much less than the observed correlation\nscales of magnetic fields in galaxies or galaxy clusters.\nTherefore, in the absence of mechanisms that can 
increase\nthe comoving scale of the magnetic field beyond\n$\\xi_{\\rm M}^{(\\rm ew)}$, such fields were deemed to be irrelevant\nto the problem of cosmic magnetic fields \\citep[for discussion,\nsee, e.g.,][]{Durrer:2003ja,Caprini:2009pr,Saveliev:2012ea,KTBN13}.\n\nThe situation may change if (i) the magnetic fields are helical and\n(ii) the plasma is turbulent.\nIn this case, an inverse transfer of magnetic energy may develop,\nwhich leads to a shift of the typical scale of the magnetic field to\nprogressively larger scales \\citep{BEO96, CHB01, BJ04, KTBN13}.\nThe origin of such turbulence has been unknown.\nAn often considered paradigm is that a random magnetic field, generated at small\nscales, produces turbulent motions via the Lorentz force.\nHowever, continuous energy input is required.\nIf this is not the case, the magnetic field decays:\n$\\langle \\bm{B}^2 \\rangle \\sim t^{-2\/3}$ as the correlation scale grows\n\\citep{BM99PhRvL, KTBN13}, so that $\\langle \\bm{B}^2 \\rangle \\xi_{\\rm M} = {\\rm const}$.\n\nIn the present work, we demonstrated that the presence of a finite chiral\ncharge in the plasma at the EW epoch is sufficient to satisfy the above\nrequirements (i) and (ii).\nAs a result,\n\\begin{asparaenum}[\\it (1)]\n\\item helical magnetic fields are excited,\n\\item turbulence with large ${\\rm Re}_{_\\mathrm{M}}$ is produced, and\n\\item the comoving correlation scale increases.\n\\end{asparaenum}\nWe discuss each of these phases in detail below.\n\n\n\\subsubsection{Generation and evolution of cosmic magnetic fields in the\npresence of a chiral chemical potential}\n\\label{subsec_cosmologybounds}\n\nAlthough it is not possible to perform numerical simulations with\nparameters matching those of the early universe, the results of the\npresent paper allow us to make qualitative predictions about the fate\nof cosmological magnetic fields generated at the EW epoch in the\npresence of a chiral chemical potential.\n\nAll of the main stages of the 
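magnetic field evolution can occur there.

The helicity-constrained decay quoted above ($\langle \bm{B}^2 \rangle \sim t^{-2\/3}$ while the correlation scale grows, so that $\langle \bm{B}^2 \rangle \xi_{\rm M} = {\rm const}$) can be written as a one-line toy model; normalizations below are arbitrary, for illustration only:

```python
def decaying_field(t, B2_0=1.0, xi_0=1.0, t_0=1.0):
    """Helicity-conserving free decay: B2 ~ t^(-2/3), xi ~ t^(2/3),
    so that the helicity proxy B2 * xi stays constant."""
    B2 = B2_0 * (t / t_0) ** (-2.0 / 3.0)
    xi = xi_0 * (t / t_0) ** (2.0 / 3.0)
    return B2, xi

# Magnetic energy decays while the correlation scale grows,
# with B2 * xi conserved at every time.
states = [decaying_field(t) for t in (1.0, 10.0, 1000.0)]
```

As we argue next, all of these stages of the 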
magnetic field evolution,\nsummarized in Section~\ref{sec:stages}, can occur in the early universe\n(a sketch of the main phases is provided in \Fig{fig:phases}).\n\n\begin{figure}[!t]\n \centering\n \includegraphics[width=\columnwidth]{sketch_phases_2}\n \caption{\n \textbf{Chiral MHD dynamos in the early universe.}\nSketch of the different phases of the chiral mean-field dynamo.\nFrom left to right:\nsmall-scale chiral dynamo (phase 1), large-scale turbulent dynamo (phase 2), and\nsaturation (phase 3).\nAfter saturation of the dynamo, the magnetic field dissipates.\nThe upper horizontal dotted line shows the initial value of $\mu$ and the lower\none the ``saturation limit,'' given by Eq.~\protect\eqref{eq:2}.}\n \label{fig:phases}\n\end{figure}\n\nPhase 1.\nAt this initial stage, the small-scale chiral dynamo instability develops at\nscales around $\xi_\mu$, where\n\begin{equation}\n \xi_\mu \equiv \frac{2}{|\mu_0|} ,\n \label{eq_mu-1}\n\end{equation}\nand\n\begin{equation}\n \label{eq:1}\n \mu_0 \approx 4\ensuremath{\alpha_{\rm em}}\frac{\mu_5}{\hbar c}\approx\n\unit[1.5\times10^{14}]{cm^{-1}}\frac{\mu_5}{\unit[100]{GeV}}.\n\end{equation}\nThe chemical potential $\mu_5$ can be approximated by the thermal energy\n$k_{\rm B} T$ for order-of-magnitude estimates.\nIn what follows, we provide numerical estimates for $\mu_5 = 100\,{\rm GeV}$, which\ncorresponds to the typical thermal energy of relativistic particles\nat the EW epoch.\nThe characteristic energy at the quantum chromodynamics phase transition\nis $\approx 100\,{\rm MeV}$, where the quark--gluon plasma turns into hadrons.\nWe stress, however, that the MHD formalism is only valid if the scales\nconsidered are larger than the mean free path given by Equation~(\ref{eq_mfp}).\nComparing the chiral instability scale $k_\mu^{-1}$ with $\ell_\mathrm{mfp}$\nresults in the condition\n$\mu_5 \ll k_{\rm B} T\, 4\pi^2 \ensuremath{\alpha_{\rm em}} 
\\ln{((4\\pi \\ensuremath{\\alpha_{\\rm em}})^{-1\/2})}$. \nStrictly speaking, modeling a system that does not fulfill this condition \nrequires full kinetic theory as described, for example, in \\citet{CPWW13} or\nin \\citet{AY13}.\n\nThe growth rate of an initially weak magnetic field in the linear\nstage of the chiral dynamo instability is given by \\Eq{gamma-max}:\n\\begin{equation}\n\\label{eq:9}\n \\gamma^{\\rm max}_\\mu = \\frac{\\mu_0^2\\eta}4 \\approx 2.4\\times 10^{19}T_{100}^{-1}\\,{\\rm s}^{-1}.\n\\end{equation}\nFor the value of the magnetic diffusivity $\\eta = c^2\/(4 \\pi \\sigma)$\nin the early universe, we adopted the conductivity $\\sigma$ from\nEquation~(1.11) of \\cite{ArnoldEtAl2000}.\nNumerically,\n\\begin{equation}\n\\eta(T)={7.3\\times 10^{-4}}\\,{\\hbar c^2\\overk_{\\rm B} T}\\approx\n{4.3\\times10^{-9}}T_{100}^{-1}\\,{\\rm cm}^2\\,{\\rm s}^{-1} ,\n\\label{eq_rDiffvA}\n\\end{equation}\nwhere $T_{100}=1.2\\times10^{15}\\,{\\rm K}$ (so that $k_{\\rm B} T_{100} =\\unit[100]{GeV}$).\nAs a result, the number of $e$-foldings over one Hubble time $t_H$ is\n$$\\gamma^{\\rm max}_\\mu t_H \\gg 1,$$ where\n\\begin{equation}\n \\label{eq:tH}\n t_H = H^{-1}(T) \\approx 4.8\\times10^{-11}\\,g_{100}^{-1\/2}T_{100}^{-2}\\,\\,{\\rm s}\n\\end{equation}\n(here $g_*$ is the number of relativistic degrees of freedom and $g_{100}=g_*\/100$).\nWe should stress that this picture has been known before and was described\nin many previous works \\citep{JS97,Frohlich:2000en,Frohlich:2002fg,BFR12}.\n\nWe note that a nonzero chiral flipping rate\n$\\Gamma_\\mathrm{f}$ has been discussed in the literature\n\\citep{CDEO92,BFR12,DS15,BFR15,SiglLeite2016}.\nIn Section~\\ref{sec_flip}, we have found\nin numerical simulations that the flipping term affects the evolution\nof the magnetic field only\nfor large values of $f_\\mu$, when the flipping term is of the order of or\nlarger than the $\\lambda_\\mu$ term in Equation~(\\ref{mu-NS});\nsee also Equation~(\\ref{eq_fmu}) and 
Figure~\\ref{fig_ts_flip}.\nWhen adopting the estimate in \\citet{BSRKBFRK17} of $f_\\mu\\approx 1.6\\times10^{-7}$,\nchirality flipping is not likely to play a significant role for\nthe laminar $v_\\mu^2$ dynamo in the early universe at very high temperatures\nof the order of $100 \\,{\\rm GeV}$.\nHowever, $\\Gamma_\\mathrm{f}$ depends on the ratio $m_e c^2\/(k_{\\rm B} T)$ and\nthus suppresses all chiral effects once the universe has cooled down to\n$k_{\\rm B} T \\approx m_e c^2$ \\citep{BFR12}.\nAt this point, we stress again that the true value of $\\mu_0$ is unknown\nand has here been set to the thermal energy in Equation~(\\ref{eq:1}).\nIf it turns out that the initial value of the chiral chemical potential\nis much smaller than the thermal energy, $f_\\mu$ becomes larger, and\nthe flipping rate can play a more important role already during the initial\nphases of the chiral instability in the early universe.\nThis scenario is not considered in the following discussion.\n\nIn the regime of the laminar $v_\\mu^2$ dynamo, one could reach\n$\\mathcal{O}(10^9)$ $e$-folds over the Hubble time $t_{\\rm H}$; see lower panel\nof Figure~\\ref{fig:chiral_turbulence}.\nHowever, as shown in this work, already after a few hundred $e$-foldings,\nthe magnetic field starts to excite turbulence via the Lorentz force.\nThis happens once the magnetic field is no longer force-free.\nOnce the flow velocities reach the level $v_\\mu = \\mu_0\\eta$, nonlinear\nterms are no longer small, small-scale turbulence is produced, and\nthe next phase begins.\n\n\nPhase 2.\nThe subsequent evolution of the magnetic field depends on the strength\nof the chiral magnetically excited turbulence.\nThis has been shown in the mean-field analysis of \\citet{REtAl17} and\nis confirmed by the present work; see, for example, Figure~\\ref{fig_gamma_Re}.\nThe growth rate and instability scale depend on the magnetic Reynolds\nnumber; see Equations~(\\ref{gamma_turb})--(\\ref{gammamax_turb}).\nThe maximum 
growth rate for ${\\rm Re}_{_\\mathrm{M}} \\gg 1$ is given by\n\\begin{equation}\n \\label{eq:10}\n \\gamma^{\\rm max}_{\\alpha} = \\gamma_\\mu^{\\rm max} \\frac{4}3 \\frac{(\\ln {\\rm Re}_{_\\mathrm{M}})^2}{{\\rm Re}_{_\\mathrm{M}}},\n\\end{equation}\nwhere $\\gamma_\\mu^{\\rm max}$ is given by \\Eq{eq:9}.\nFor the early universe, it is impossible to determine the exact value of the\nmagnetic Reynolds number from the numerical simulations, but one\nexpects ${\\rm Re}_{_\\mathrm{M}} \\gg 1$ and we show in \\Fig{fig:chiral_turbulence}\nthat, in a wide range of magnetic Reynolds numbers,\n$1 \\ll {\\rm Re}_{_\\mathrm{M}} \\ll 6\\times 10^{12}$, the number of $e$-foldings during one\nHubble time is much larger than $1$.\nThe turbulence efficiently excites magnetic fields at scales much larger than\n$\\xi_\\mu$ (Figure~\\ref{fig:chiral_turbulence}, top panel).\n\n\\begin{figure}[!t]\n \\centering\n \\includegraphics[width=\\columnwidth]{TurbDyn_EU}\n \\caption[Turbulence-driven instability in the early universe]\n {\\textbf{Chiral MHD dynamos in the early universe. 
}\n The ratios between $\xi_\alpha$ of the turbulence-driven dynamo (Equation~(\ref{kmax_turb}))\n and the scale $\xi_\mu$ (Equation~\protect\eqref{eq_mu-1}), as well as the ratio\n between $\xi_\mu$ and the Hubble radius at different temperatures.\n In the top panel, furthermore, the ratio $\xi_\mu\/\xi_\lambda$ is presented.\n Maximum growth rates over the Hubble time for the laminar ($\gamma_\mu^{\rm max}$)\n and turbulent ($\gamma_\alpha^{\rm max}$) regimes are shown in the bottom panel.\n }\label{fig:chiral_turbulence}\n\end{figure}\n\nUsing dimensional analysis and DNS, \citet{BSRKBFRK17} demonstrated\nthat the resulting spectrum of the magnetic fields behaves as\n$E_{\rm M}\propto k^{-2}$ between $k_\mu$ and $k_\lambda$, given by\nEquation~(\ref{klambda}).\nThe wavenumber $k_\lambda$ depends on the nonlinearity parameter $\lambda$,\ndefined by Equation~(\ref{eq_lambda}), which, in the early universe, is given by\n\begin{equation}\n \lambda=3 \hbar c\,\left(\frac{8\ensuremath{\alpha_{\rm em}}}{k_{\rm B} T}\right)^2\approx1.3\times10^{-17}\,T_{100}^{-2}\,\n\,{\rm cm}\,{\rm erg}^{-1}.\n \label{eq_lambda_1}\n\end{equation}\nWe note that this expression is, strictly speaking, only valid when\n$k_{\rm B} T \gg \max(|\mu_L|,|\mu_R|)$ and modifications might be expected\noutside of this regime.\nFurther, the mean density of the plasma is\n\begin{equation}\n \overline{\rho}=\frac{\pi^2}{30}\,g_*\frac{(k_{\rm B} T)^4}{\hbar^3c^5}\n \approx 7.6\times10^{26}g_{100}T_{100}^4\,{\rm g}\,{\rm cm}^{-3}.\n\end{equation}\nThe ratio $\xi_\lambda\/\xi_\mu = k_\mu\/k_\lambda$ is presented in the top\npanel of \Fig{fig:chiral_turbulence}, but we note that the exact numerical\ncoefficient in the condition $k_\mu\/k_\lambda \gg 1$ might depend on ${\rm Re}_{_\mathrm{M}}$.\n\nPhase 3.\nThe stage of large-scale turbulent dynamo action ends with the\n\emph{saturation phase} (see Section~\ref{sec:stages} 
and\n\\Fig{fig:phases}).\nAt this stage, the total chiral charge (determined by the initial\nconditions) gets transferred to magnetic helicity.\nAs shown in \\citet{BFR12} (see also~\\citet{JS97} for earlier work, as well\nas \\cite{TVV12} and \\cite{Hirono:2015rla} for more discussion), and confirmed by\nnumerical simulations in \\citet{BSRKBFRK17} and in the present work,\nthe chiral chemical potential $\\mu$ follows $k_{\\rm M}$ at this stage\nand thus decreases with time.\nTherefore, most of the chiral charge will be transferred with time into magnetic helicity,\n\\begin{equation}\n \\langle \\bm{A} \\cdot \\bm{B}\\rangle \\simeq \\xi_{\\rm M} \\langle \\bm{B}^2 \\rangle \\to \\frac{2\\mu_0}\\lambda ,\n \\label{eq:2}\n\\end{equation}\nswitching off the CME (the end of Phase 3 in \\Fig{fig:phases}).\n\n\\subsubsection{Chiral MHD and cosmic magnetic fields}\n\\label{sec:end_chiral_MHD}\n\nMagnetic fields produced by chiral dynamos are fully helical.\nOnce the CME has become negligible, the subsequent phase of decaying\nhelical turbulence begins and the\nmagnetic energy decreases, while the magnetic correlation length increases\nin such a way that the magnetic helicity~\\eqref{eq:2} is conserved\nfor very small magnetic diffusivity \\citep{BM99PhRvL, KTBN13}.\n\nBased on \\Eq{eq:2}, one can estimate the magnetic helicity \\emph{today}; see also \\cite{BSRKBFRK17}.\nTaking as an estimate for the chiral chemical potential $\\mu_5 \\sim k_{\\rm B} T$ (this means that the density of the chiral charge is of the order of the\nnumber density of photons), one finds\n\\begin{multline}\n \\label{eq:Br17:3}\n \\bra{\\bm{B}^2}\\xi_{\\rm M} \\simeq\n \\frac{\\hbar c}{4\\ensuremath{\\alpha_{\\rm em}}} \\frac{g_{0}}{g_\\ast} n_\\gamma^{(0)}\n \\simeq 6\\times 10^{-38}\\,{\\rm G}^2\\,{\\rm Mpc} .\n\\end{multline}\nHere, the present number density of photons is $n_\\gamma^{(0)} = 411\\,{\\rm cm}^{-3}$,\nand the ratio $g_{0}\/g_\\ast\\approx3.36\/106.75$ of the 
effective\nrelativistic degrees of freedom today and at the EW epoch appears,\nbecause the photon number density dilutes as $T^3$ while the magnetic helicity\ndilutes as $a^{-3}$.\nWe recall that, to arrive at the numerical value in $\\,{\\rm G}^2\\,{\\rm Mpc}$\ngiven in Equation~(\\ref{eq:Br17:3}), an additional $4\\pi$ factor\nwas applied to convert to Gaussian units.\n\nUnder the assumption that the spectrum of the cosmic magnetic field is sharply\npeaked at some scale\n$\\xi_0$ (as is the case in all of the simulations presented here),\nthe lower bounds on magnetic fields, inferred\nfrom the nonobservation of GeV cascades from TeV sources\n\\citep{NV10,Tavecchio:10,Dolag:10} can be directly translated into\na bound on magnetic helicity today. The observational bound scales\nas $|\\bm{B}| \\propto \\xi_0^{-1\/2}$ for $\\xi_0 < 1\\,{\\rm Mpc}$ \\citep{NV10} and\ntherefore $\\bra{\\bm{B}^2}\\xi_0 = {\\rm const} > 8\\times 10^{-38}\\,{\\rm G}^2\\,{\\rm Mpc}$.\nThe numerical value is obtained using the most conservative bound\n$|\\bm{B}| \\ge 10^{-18}\\,{\\rm G}$ at $1\\,{\\rm Mpc}$ (\\citealt{DCRFCL11},\nsee also~\\citealt{DN13}).\nThese observational constraints for intergalactic magnetic fields are\ncompared to the magnetic field produced in chiral MHD for different values\nof the initial chiral chemical potential in Figure~\\ref{fig_B_xi__comov}.\n\nThe limit given by Equation~(\\ref{eq:Br17:3}) is quite general.\nIt does not rely on chiral MHD or the CME, but simply\nreinterprets the bounds of \\cite{NV10}, \\cite{Tavecchio:10}, \\cite{Dolag:10},\nand \\cite{DCRFCL11} as bounds on magnetic helicity.\nGiven such an interpretation, we conclude that \\emph{if cosmic magnetic fields\nare helical and have a cosmological origin, then at some moment in the history\nof the universe the density of chiral charge was much larger\nthan $n_\\gamma(T)$}.\nThis chiral charge can be, for example, in the form of magnetic helicity or of\nchiral asymmetry of fermions, or 
both.\nTo generate such a charge density, some new physics beyond the Standard Model of\nelementary particles is required.\nBelow we list several possible mechanisms that can generate a large initial\nchiral charge density:\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\columnwidth]{B_xi__comov}\n \\caption{\n {\\bf Chiral MHD dynamos in the early universe.}\nThe magnetic field strength resulting from a chiral dynamo as a function of correlation length in comoving\nunits and comparison with observational constraints.\nThe differently colored lines show the chiral magnetically produced magnetic field\nstrength in the range between the injection length $\\mu^{-1}$ and the\nsaturation length $k_\\lambda^{-1}$; see Equations~(\\ref{eq_mu-1}) and\n(\\ref{klambda}), respectively.\nThe colors indicate different values of the chiral chemical potential:\nRed refers to the value of $\\mu_0$ given in Equation~(\\ref{eq:1}), blue to\n$10^{-2}\\mu_0$, and purple to $10^{2}\\mu_0$.\nThe dashed gray line is an upper limit on the intergalactic magnetic field from\nZeeman splitting.\nSolid gray lines\nrefer to the lower limits reported by \\citet{NV10} (``NV10'') and \\citet{DCRFCL11} (``D+11''), respectively.\nThe vertical dotted gray lines show the\nhorizon at $k_{\\rm B} T = 100\\,{\\rm GeV}$ and $100\\,{\\rm MeV}$, respectively.\nThe thin colored arrows refer to the nonlinear evolution of magnetic fields in\nan inverse cascade in helical turbulence up to the final value as given in\n\\citet{BJ04}\n(line ``BJ04'').\n}\n \\label{fig_B_xi__comov}\n\\end{figure}\n\n\\begin{asparaenum}[\\it (1)]\n\\item The upper bound in \\Eq{eq:Br17:3} assumes that only one fermion of the\nStandard Model developed a chiral asymmetry $\\sim n_\\gamma$.\n Many fermionic species are present in the plasma at the electroweak epoch.\n They can all have a left--right asymmetric population of comparable size,\n increasing the total chirality by a factor $\\mathcal{O}(10)$, which\n makes 
the estimate~(\\ref{eq:Br17:3}) consistent with the lower bound\n from \\citet{DCRFCL11}.\n One should check, of course, whether for more massive fermions the\n chirality flipping rate is much slower than the dynamo growth rate\n determined by Equation~(\\ref{eq:9}).\n\\item The estimate~(\\ref{eq:Br17:3}) assumed that left--right asymmetry was\ncreated via thermal processes. Of course, new physics at the EW epoch can result in\nnonthermal production of chiral asymmetry (e.g.\\ via decays of some long-lived\nparticles), thus leading to $n_5 \\gg n_\\gamma$ and so increasing the limit\n(\\ref{eq:Br17:3}).\n\\item\n The left--right asymmetry may be produced as a consequence of the decay\n of helical \\emph{hypermagnetic} fields prior to the EW epoch.\n Such a scenario, relating hypermagnetic helicity to the\n chiral asymmetry, has been discussed previously, such as in\n \\cite{Giovannini:1997eg} and \\cite{Semikoz:2012ka}.\n A conservation law similar to that of~(\\ref{cons_law}) exists also for\n hypermagnetic fields, and the decay of the latter may cause asymmetric\n populations of left and right states.\n\\item\nIn our analysis, we have not taken into account the chiral vortical effect\n\\citep{Vilenkin:79}.\nFor nonvanishing chemical potential, it leads to an additional current\nalong the direction of vorticity \\citep[see, e.g.,][]{TVV12}.\n\\end{asparaenum}\n\nFrom the point of view of chiral MHD, the value of $\\mu_0$ (to which\nthis bound is proportional) is just an initial condition and therefore\ncan take arbitrary values.\nOnce an initial condition with a large value of $\\mu_0$ has been\ngenerated, the subsequent evolution (as described above) does not require\nany new physics.\n\nMoreover, the coupled evolution of magnetic helicity and chiral chemical\npotential is \\emph{unavoidable} in the relativistic plasma and should be an\nintegral part of relativistic MHD (as was discussed in Paper~I).\n\n\n\n\n\\subsection{PNSs and the 
CME}\n\\label{sec:PNS}\n\n\nIn this section, we explore whether the CME and chiral dynamos can play a\nrole in the development of strong magnetic fields in neutron stars.\nA PNS is a stage of stellar evolution after\nthe supernova core collapse and before the cold and dense neutron star is\nformed \\citep[see, e.g.,][]{PRPLM99}.\nPNSs are characterized by high temperatures\n(typically $k_{\\rm B} T \\sim \\unit[\\mathcal{O}(10)]{MeV}\\gg m_e c^2$),\nlarge lepton number density (electron Fermi energy $\\mu_e \\sim$ a few\nhundred MeV), the presence of turbulent flows in the interior, and a quickly\nchanging environment.\nOnce the formation of a neutron star is completed, its magnetic\nfield can be extremely large.\nNeutron stars whose fields exceed the quantum electrodynamic limit\n$B_\\mathrm{QED}\\equiv m_e^2c^3\/(e \\hbar)\\approx4.4\\times10^{13}\\,{\\rm G}$ are known as\n``magnetars'' \\citep[see, e.g.,][for recent reviews]{MPM15,TZ15,KB17}.\nThe origin of such strong magnetic fields remains unknown, although\nmany explanations have been proposed; see, for example, \\citet{DT92}, \n\\cite{AWML03}, and \\cite{FW06}.\n\nThe role of the CME in the physics of (proto)neutron stars and\nits contribution to the generation of strong magnetic fields have been\ndiscussed in a number of works\n\\citep{Charbonneau:2009ax,Ohnishi:2014uea,Dvornikov:2015lea,DS15,\nGKR15,Dvornikov:2016cmz,SiglLeite2016,Yamamoto:2015gzz}.\n\n\n\\subsubsection{Chiral MHD in PNSs}\n\\label{sec:chiral-mhd-PNS}\n\nDuring the formation of a PNS, electrons and protons are converted into neutrons,\nleaving behind left-handed neutrinos.\nThis is known as the Urca process\n\\citep[$e + p \\to n + \\nu_e$;][]{Haensel:95}.\nIf the chirality-flipping timescale, determined by the electron's\nmass, is longer than the instability timescale, the net chiral\nasymmetry in the PNS can lead to the generation\nof magnetic fields. 
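The competition between the chiral instability and chirality flipping can be illustrated with a quick back-of-the-envelope script. This is only a sketch: the CGS constants and the representative values ($\mu_e \simeq 250\,$MeV, $k_{\rm B}T = 1\,$MeV, $n_e = 10^{38}\,{\rm cm}^{-3}$, and a flipping rate $\Gamma_{\rm f}\sim 10^{14}\,{\rm s}^{-1}$, here assumed to be in inverse seconds) are those adopted in the estimates later in this section, not simulation outputs.

```python
# Back-of-the-envelope check of the chiral dynamo estimates for a
# proto-neutron star (CGS units); representative values as adopted in the text.

alpha_em = 7.297e-3      # fine-structure constant
hbar_c = 3.1615e-17      # hbar * c in erg cm
MeV = 1.602e-6           # erg

mu_e = 250.0 * MeV       # electron Fermi energy (assumed)

# Maximal chiral chemical potential, mu_max = 4 alpha_em mu_e / (hbar c)
mu_max = 4.0 * alpha_em * mu_e / hbar_c      # ~ 4e11 cm^-1

# Magnetic diffusivity at k_B T = 1 MeV and n_e = 1e38 cm^-3
eta = 7.0e-8                                 # cm^2 s^-1

# Maximal growth rate of the small-scale chiral instability
gamma_max = mu_max**2 * eta / 4.0            # ~ 2e15 s^-1

# Chirality-flipping rate suggested by GKR15 (assumed to be in s^-1)
Gamma_flip = 1.0e14

print(f"mu_max    ~ {mu_max:.1e} cm^-1")
print(f"gamma_max ~ {gamma_max:.1e} s^-1")
print("chiral instability outpaces flipping:", gamma_max > Gamma_flip)
```

With these numbers the instability growth rate exceeds the flipping rate by more than an order of magnitude, consistent with the conclusion drawn below that flipping reactions only weakly affect the chiral dynamo.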
This scenario has been discussed previously\n\\citep{Ohnishi:2014uea,SiglLeite2016,GKR15}.\nThe chiral turbulent dynamos discussed in this work can be relevant for\nthe physics of PNSs and can affect our conclusions\nabout the importance of the CME.\nHowever, to make a detailed quantitative analysis, a number of factors\nshould be taken into account:\n\\begin{asparaenum}[\\it (1)]\n\\item The rate of the Urca process is strongly temperature dependent\n \\citep{LPPH91,Haensel:95}.\n The temperatures inside PNSs are only known with large uncertainties, and\n the cooling occurs on a scale of seconds \\citep[see, e.g.,][]{PRPLM99},\n making estimates of the Urca rates uncertain by orders of magnitude.\n\\item The chirality flipping rate that tends to restore the depleted population of\n left-chiral electrons is also expected to be temperature dependent\n \\citep[see, e.g.,][]{GKR15,SiglLeite2016}.\n\\item The neutrinos produced via the Urca process are trapped in the\n interior of a PNS and can release the chiral asymmetry back into the plasma\n via the $n + \\nu_e \\to e + p$ process.\n Therefore, only when the star becomes transparent to neutrinos\n (as the temperature drops to a few MeV) does the creation of chiral asymmetry\n become significant.\n\\end{asparaenum}\n\n\nModeling the details of PNS cooling and neutrino propagation is\nbeyond the scope of this paper.\nBelow we perform estimates demonstrating that chiral MHD can\nsignificantly change the picture of the evolution of a PNS.\n\n\n\\subsubsection{Estimates of the relevant parameters}\n\nAn upper limit on the chiral chemical potential can be estimated by\nassuming that\n$n_\\mathrm{L}=0$ and $n_\\mathrm{R}=n_e$ (all left-chiral electrons have been\nconverted into neutrinos, and the rate of chirality flipping is much slower than\nthe rates of other relevant processes).\nThis leads to the estimate $\\mu_5 \\simeq \\mu_e$ and correspondingly\n\\begin{equation}\n \\mu_{\\rm max} = 4\\ensuremath{\\alpha_{\\rm em}} 
\\frac{\\mu_e}{\\hbar c} \\approx \\unit[4\\times 10^{11} ]{cm}^{-1}\\left(\\frac{\\mu_e}{\\unit[250]{MeV}}\\right),\n \\label{eq_muupper_NS}\n\\end{equation}\nwhere we have used a typical value of the electron's Fermi energy $\\mu_e$\n\\citep{PRPLM99}.\nFor an ultrarelativistic \\emph{degenerate} electron gas (i.e., when\n$\\mu_e \\gg k_{\\rm B} T \\gg m_e c^2$), the relation between the number density\nof electrons, $n_e$, and their Fermi energy, $\\mu_e$, is\n\\begin{equation}\n \\label{eq:5}\n \\mu_e = \\hbar c(2\\pi^2 n_e)^{1\/3}\\approx 250\\,{\\rm MeV} \\left(\\frac{n_e}{\\unit[10^{38}]{cm^{-3}}}\\right)^{1\/3}.\n\\end{equation}\nThe interior of neutron stars is a conducting medium whose\nconductivity is estimated to be \\citep{BPP69,Kelly1973}:\n\\begin{eqnarray}\n \\sigma(T) &=& \\sqrt 3\\left(\\frac{4}{\\pi}\\right)^{3\/2}\\frac{\\hbar^4 c^2}{e\\,m_p^{3\/2}}\\frac{n_e^{3\/2}}{k_{\\rm B}^2 T^2}\\\\\n &\\approx& 1\\times 10^{27}\\left(\\frac{1~\\mathrm{MeV}}{k_{\\rm B} T}\\right)^{2}\\left(\\frac{n_e}{\\unit[10^{38}]{cm^{-3}}}\\right)^{3\/2}\\,{\\rm s}^{-1}\n\\label{eq_sigmaNS}\n\\end{eqnarray}\n(there is actually an $\\mathcal{O}(1)$ difference in the numerical coefficient\nbetween the results of \\citet{BPP69} and \\citet{Kelly1973}).\nUsing Equation~(\\ref{eq_sigmaNS}), we find the magnetic diffusion coefficient to be\n\\begin{equation}\n \\label{eq:3}\n \\eta(T)\n \\approx 7\\times 10^{-8}\\,{\\rm cm}^2\\,{\\rm s}^{-1} \\left(\\frac{k_{\\rm B} T}{1~\\mathrm{MeV}}\\right)^{2}\\left(\\frac{\\unit[10^{38}]{cm^{-3}}}{n_e}\\right)^{3\/2}.\n\\end{equation}\nTherefore, we can determine the\nmaximum growth rate of the small-scale\nchiral instability (\\ref{gamma-max}) as\n\\begin{equation}\n \\label{eq:4}\n \\gamma^{\\rm max}_\\mu = \\frac{\\mu_{\\rm max}^2\\eta}{4} \\approx 2\\times 10^{15}\\,{\\rm s}^{-1} \\left(\\frac{\\mu_e}{\\unit[250]{MeV}}\\right)^2 \\left(\\frac{k_{\\rm B} T}{1~\\mathrm{MeV}}\\right)^{2}.\n\\end{equation}\nWe see that over a characteristic 
time $\\tau_\\mathrm{cool}\\sim 1 \\,{\\rm s}$\n(the typical cooling time), the magnetic field would increase by many\n$e$-foldings.\nIn fact, using a flipping rate of $\\Gamma_\\mathrm{f}=10^{14} \\,{\\rm s}^{-1}$, as\nsuggested in \\citet{GKR15} for $\\mu_\\mathrm{e}= 100 \\,{\\rm MeV}$ and $k_{\\rm B} T=30 \\,{\\rm MeV}$,\nwe find that $f_\\mu$ ranges from\n$\\approx 9\\times 10^{-3}$ down to $\\approx 9\\times 10^{-7}$ for the range\nbetween $k_{\\rm B} T = 1 \\,{\\rm MeV}$ and $k_{\\rm B} T = 100 \\,{\\rm MeV}$.\nHence the evolution of the chemical potential\nand the chiral dynamo\nis weakly affected by flipping reactions.\n\nAs in Section~\\ref{subsec_cosmologybounds}, the phase of the small-scale\ninstability ends when turbulence is excited.\nIt should be stressed, however, that unlike in the early universe, the\ninteriors of PNSs are expected to be turbulent with high ${\\rm Re}_{_\\mathrm{M}}$ even in\nthe absence of chiral effects\n(with ${\\rm Re}_{_\\mathrm{M}}$ as large as $10^{17}$); see \\cite{TD93}.\nTherefore, the system may find itself in the forced turbulence regime of Section~\\ref{sec:forced-chiral-dynamos}.\nFigure~\\ref{fig:turbulence_PNS} shows that in a wide range of magnetic\nReynolds numbers, one can have many $e$-foldings over a typical timescale of\nthe PNS and that the scale of the magnetic field can reach macroscopic size.\n\n\\begin{figure}[!t]\n \\centering\n \\includegraphics[width=\\columnwidth]{scales_NS}\n \\caption[Laminar and turbulent scales in PNS]{\n \\textbf{Chiral MHD dynamos in PNSs.}\nLaminar and turbulent growth rates multiplied by the\ncooling timescale (top panel) and the characteristic scales of chiral MHD\nnormalized by the typical radius of the PNS $r_{\\rm NS} \\sim 10\\,{\\rm km}$\n(bottom panel).\nThe estimates are presented as a function of ${\\rm Re}_{_\\mathrm{M}}$.\nThe initial value of the chiral chemical potential is assumed at the\nlevel~(\\protect\\ref{eq_muupper_NS}), and we use $\\mu_e 
=\\unit[250]{MeV}$.\nSince the conductivity is temperature dependent, the ratios including $\\eta$ are\npresented for both $k_{\\rm B} T = 1~\\mathrm{MeV}$ and $k_{\\rm B} T = 10~\\mathrm{MeV}$.\n }\n \\label{fig:turbulence_PNS}\n\\end{figure}\n\n\\subsubsection{Estimate of magnetic field strengths}\n\nA dedicated analysis, taking into account temperature and density evolution of\nthe PNS as well as its turbulent regimes, is needed to make detailed predictions.\nHere we estimate the strength of the magnetic field,\nsimilarly to Section~\\ref{sec:early-universe} above.\nTo this end, we use the conservation law~(\\ref{cons_law}), assuming\n$\\mu_0 = \\mu_{\\rm max}$.\nIn the PNS case, the plasma is degenerate, and therefore the relation between\n$n_5$ and $\\mu_5$ is given by\n\\begin{equation}\n n_5 = \\frac{\\mu_5}{3\\pi^2}(3\\mu_e^2 + \\pi^2 T^2)\n \\label{eq:7}\n\\end{equation}\n(in the limit $\\mu_5 \\ll T$).\n{\nAs a result, the chiral feedback parameter\n$\\lambda$ is\n\\begin{equation}\n \\label{lambda_PNS}\n \\lambda_{\\rm PNS} = \\frac{\\hbar c\\pi^2}{2}\\left(\\frac{8\\ensuremath{\\alpha_{\\rm em}}}{\\mu_e}\\right)^2,\n\\end{equation}\nwhich determines the wavenumber $k_\\lambda$; see Equation~(\\ref{klambda}).\nThe corresponding length scale $\\xi_\\lambda = k_\\lambda^{-1}$ is presented in\nthe bottom panel of Figure~\\ref{fig:turbulence_PNS}, where we assume a\nmean density of the PNS of $\\overline{\\rho}_{\\rm PNS} = 2.8\\times10^{14}~\\mathrm{g}\\,\\mathrm{cm}^{-3}$.\n\nUsing \\Eqs{eq_muupper_NS}{lambda_PNS}, we find\n\\begin{eqnarray}\n \\label{eq:8}\n (B^2\\xi)_{\\max}\n &=&\\frac{4\\pi\\mu_{\\rm max}}{\\lambda_{\\rm PNS}} = \\frac{\\mu_e^3}{2\\pi(\\hbar c)^2\\ensuremath{\\alpha_{\\rm em}}}\\\\\n &\\approx& 1.4\\times 10^{24}\\,{\\rm G}^2\\,{\\rm cm}\\left(\\frac{\\mu_e}{\\unit[250]{MeV}}\\right)^3.\n\\end{eqnarray}\nAssuming a maximum correlation scale of $\\xi_{\\rm PNS} \\sim 1\\,{\\rm cm}$\n(see \\Fig{fig:turbulence_PNS}), we find 
that the\nmagnetic field strength is of the order of\n\\begin{equation}\n \\label{eq:6}\n B_{\\rm max} \\approx 1.2\\times 10^{12}\\,{\\rm G} \\left(\\frac{\\mu_e}{\\unit[250]{MeV}}\\right)^{3\/2}\n \\left(\\frac{1\\,{\\rm cm}}{\\xi_{\\rm M}}\\right)^{1\/2}.\n\\end{equation}\nNotice that the estimate~(\\ref{eq:8}) is independent of $T$\n(but depends strongly on the assumed value of $\\mu_e$).\n\n\nOur estimates have demonstrated that chiral MHD may be capable of generating\nstrong small-scale magnetic fields. \\emph{Therefore, chiral effects should be\nincluded in the modeling of the evolution of PNSs.}}\n\n\n\n\n\n\\section{Conclusions}\n\\label{sec_concl}\n\nIn this work, we have presented results from numerical simulations of chiral MHD\nthat include the temporal and spatial evolution of magnetic fields,\nplasma motions, and the chiral chemical potential.\nThe latter, characterizing the asymmetry between left- and right-handed fermions,\ngives rise to the CME, which\nresults in the excitation of a small-scale chiral dynamo instability.\n\nOur numerical simulations are performed for the system of\nchiral MHD equations~(\\ref{ind-DNS})--(\\ref{mu-DNS}) that was derived in\nPaper~I.\nThis system of equations is valid for plasmas with high electric conductivity,\nthat is, in the limit of high and moderately high Reynolds numbers.\nChiral flipping reactions are neglected in most of the simulations.\nIn the majority of the runs, the initial conditions are a very weak magnetic seed\nfield and a high chiral chemical potential.\nBoth initially force-free systems and systems with external forcing of\nturbulence are considered.\nWith our numerical simulations, we confirm\nvarious theoretical predictions of the chiral laminar\nand turbulent large-scale dynamos discussed in Paper~I.\n\nOur findings from DNS can be summarized as follows:\n\\begin{itemize}\n\\item{\nThe evolution of magnetic fields studied here in DNS agrees with\nthe predictions made in Paper~I for all types 
of laminar dynamos.\nIn particular, the scalings of\nthe maximum growth rate of the chiral dynamo instability\n$\\gamma_\\mu\\propto v_\\mu^2$ for the\n$v_\\mu^2$ dynamo (see Figure~\\ref{fig__Gamma_eta}) and\n$\\gamma_\\mu\\propto (S v_\\mu)^{2\/3}$\nfor the $v_\\mu$--shear dynamo (see Figure~\\ref{Gamma_Us_Beltrami}) have been\nconfirmed.\nAdditionally, the transitional regime of a $v_\\mu^2$--shear dynamo, where the\ncontributions from the $v_\\mu^2$- and shear terms\nare comparable, agrees with theoretical predictions, as can be seen in\nFigure~\\ref{fig_Gamma_Us_aShear}.\nIn our DNS, the scale-dependent amplification of the magnetic field\nin the laminar chiral dynamo is observed in the energy spectra; see, for example,\nFigure~\\ref{fig_LTspec}, where the maximum\ngrowth rate of the $v_\\mu^2$ dynamo instability is attained\nat wavenumber $k_\\mu=\\mu_0\/2$.\n}\n\\item{\nThe conservation law~(\\ref{CL}) for total chirality\nimplies a maximum magnetic field strength\nof the order of $B_\\mathrm{sat}\\approx(\\mu_0 \\xi_\\mathrm{M}\/\\lambda)^{1\/2}$.\nThis dependence of $B_\\mathrm{sat}$ on the chiral nonlinearity parameter\n$\\lambda$ has been confirmed numerically and is presented in\nFigure~\\ref{fig_Bsat_lambda}.\n}\n\\item{\nThe CME can drive turbulence efficiently via the Lorentz force,\nwhich has been demonstrated in our numerical simulations\nthrough the measured growth rate of the turbulent velocity, which is\napproximately twice as large as that of the magnetic field;\nsee, for example, the middle panel of Figure~\\ref{fig_LTts}.\n}\n\\item{\nIn the presence of small-scale turbulence, the large-scale dynamo\noperates due to the chiral $\\alpha_\\mu$ effect,\nwhich is not related to the kinetic helicity;\nsee Equation~(\\ref{alphamu}).\nIn the limit of large magnetic Reynolds numbers,\nthe maximum growth rate of the large-scale dynamo instability\nis reduced by a factor of\n$(4\/3)(\\ln{\\rm Re}_{_\\mathrm{M}})^2\/{\\rm 
Re}_{_\\mathrm{M}}$ as\ncompared to the laminar case; see Equation~(\\ref{gammamax_turbRm}).\nThe dynamo growth rate is close to this prediction\nof mean-field chiral MHD for both\nchiral magnetically produced turbulence and\nexternally driven turbulence; see\nFigure~\\ref{fig_gamma_Re}.\n}\n\\item{\nUsing DNS, we found a new scenario of magnetic field evolution\nconsisting of three phases\n(see also the schematic overview in Figure~\\ref{fig:phases}):\n\\begin{asparaenum}[\\it (1)]\n\\item small-scale chiral dynamo instability;\n\\item production of small-scale turbulence, inverse transfer of magnetic energy,\nand generation of a large-scale magnetic field by the chiral $\\alpha_\\mu$ effect;\n\\item\nsaturation of the large-scale chiral dynamo\nby a decrease of the CME\ncontrolled by the conservation law for\nthe total chirality:\n$\\lambda \\, \\bra{{\\bm A} {\\bm \\cdot} \\bm{B}}\/2 + \\bra{\\mu} = \\mu_0$.\n\\end{asparaenum}\nThe previously discussed scenario of magnetic field\nevolution caused by the CME \\citep{BFR12}\ndid not include the second phase.}\n\\end{itemize}\n\nWhile the results summarized above have been obtained\nin simulations of well-resolved periodic domains,\nastrophysical parameters are beyond the regime accessible to DNS.\nHence we can only estimate the effects of the chiral anomaly in relativistic\nastrophysical plasmas, such as those in the early universe or in neutron stars.\nThe main conclusions from the astrophysical applications are the following:\n\\begin{itemize}\n\\item{The chiral MHD scenario found in DNS may help to explain the origin of\nthe magnetic field observed in the interstellar\nmedium. 
The chiral dynamo instability produces helical magnetic\nfields initially at\nsmall spatial scales and simultaneously drives turbulence,\nwhich generates a magnetic field on large scales.\nAfter the chiral chemical potential has been transformed into magnetic\nhelicity during the dynamo saturation phase,\nthe magnetic field cascades to\nlarger spatial scales according to the phenomenology of decaying MHD\nturbulence.\nWe have estimated the values of $\\mu_0$ and $\\lambda$ for the early universe.\nThese parameters determine the time and spatial scales associated with\nthe chiral dynamo instability\n(see Figure~\\ref{fig:chiral_turbulence}) and the maximum magnetic\nhelicity (see Equation~(\\ref{eq:Br17:3})).\nOur estimates for magnetic fields produced by chiral dynamos in the\nearly universe are consistent with the observational lower limits\nfound by \\citet{DCRFCL11} (see Figure~\\ref{fig_B_xi__comov})\nif we assume that the initial chiral chemical potential is of the order of\nthe thermal energy.\n}\n\\item{In PNSs, chiral dynamos\noperating in the first tens of seconds after the supernova explosion\ncan produce magnetic fields of approximately $10^{12}\\,{\\rm G}$ at a magnetic\ncorrelation length of $1\\,{\\rm cm}$; see Equation~(\\ref{eq:6}).\nHowever, we stress that many questions remain open, especially regarding the\ngeneration of a chiral asymmetry and the role of the chiral flipping term\nin PNSs.\n}\n\\end{itemize}\n\nFinally, we stress again that the parameters and the initial conditions,\nincluding the initial chiral asymmetry, are unknown in the astrophysical \nsystems discussed in this paper. \nHence, our applications should be regarded as a study\nof the conditions under which the CME plays a significant \nrole in the evolution of a plasma of relativistic charged fermions. 
\nSince the regimes accessible to our simulations are not truly realistic in the \ncontext of the physics of the early universe and \nof neutron stars, our applications have a rather exploratory nature. \nIn this sense, our results from DNS can be used to assess\nin which area of plasma physics---the physics of the early\nuniverse, the physics of neutron stars, or the physics of heavy ion\ncollisions---the CME is important and can modify the\nevolution of magnetic fields.\n\n\n\\begin{acknowledgements}\nWe thank the anonymous referee for constructive criticism that\nimproved our manuscript.\nWe acknowledge support from Nordita, which is funded by the\nNordic Council of Ministers, the Swedish Research Council, and the two host\nuniversities, the Royal Institute of Technology (KTH) and\nStockholm University.\nThis project has received funding from the\nEuropean Union's Horizon 2020 research and\ninnovation program under the Marie Sk{\\l}odowska-Curie grant\nNo.\\ 665667.\nWe also acknowledge the University of Colorado's support through\nthe George Ellery Hale visiting faculty appointment.\nSupport through the NSF Astrophysics and Astronomy Grant Program (grant 1615100),\nthe Research Council of Norway (FRINATEK grant 231444),\nand the European Research Council (grant number 694896) is\ngratefully acknowledged. Simulations presented in this work have been performed\nwith computing resources\nprovided by the Swedish National Allocations Committee at the Center for\nParallel Computers at the Royal Institute of Technology in Stockholm.\n\\end{acknowledgements}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
Thus, the location of the thermodynamic object depends on the future history of the spacetime. For example, an observer in a perfectly flat spacetime region might already be inside a black hole, if a null shell is collapsing outside their past light-cone. By causality, Hawking radiation and the first and second law of black hole thermodynamics should have no manifestation for such an observer. Conversely, once a black hole has formed, its thermodynamic properties should be observable at finite distance, regardless of whether the collapsed region already coincides with the true event horizon, or is headed for substantial growth in the distant future.\n\nHere we consider the problem of finding a geometric object that is locally defined, and which obeys a classical law analogous to one of the laws of thermodynamics. We will focus on the second law, whose manifestation in classical General Relativity is the statement that the area of certain surfaces cannot decrease. For the cross-sections of an event horizon this was proven by Hawking in 1971~\\cite{Haw71}, but as noted above the event horizon is not locally defined.\n\nWe will formulate and prove a new area theorem. It is obeyed by what we shall call a {\\em future (or past) holographic screen}, $H$. $H$ is a hypersurface foliated by marginally (anti-)trapped surfaces, which are called {\\em leaves}. This definition is local, unlike that of an event horizon. It requires knowledge only of an infinitesimal neighborhood of each leaf.\n\\begin{figure*}[ht]\n\\subfigure[]{\n\\includegraphics[width=0.45 \\textwidth]{1aPRD.pdf}\n}\n\\subfigure[]{\n \\includegraphics[width=0.45 \\textwidth]{1bPRD.pdf}\n}\n\\caption{Penrose diagrams showing examples of holographic screens. The green diagonal lines show a null slicing of the spacetime; green dots mark the maximal area sphere on each slice. 
These surfaces combine to form a holographic screen (blue lines); we prove that their area increases monotonically in a uniform direction on the screen (blue triangles). (a) A black hole is formed by collapse of a star (inner shaded region); later another massive shell collapses onto the black hole (outer shaded region). At all other times an arbitrarily small amount of matter accretes (white regions); this suffices to satisfy our generic conditions. The black hole interior contains a future holographic screen that begins at the singularity and asymptotes to the event horizon. It is timelike in the dense regions and spacelike in the dilute regions. (b) In a closed universe filled with dust, marginally antitrapped spheres form a past holographic screen in the expanding region; its area increases towards the future. Marginally trapped spheres form a future holographic screen in the collapsing region; its area increases towards the past. The equator of the three-sphere at the turnaround time (black circle) belongs to neither the past nor the future screen; it is extremal in the sense of Ref.~\\cite{HubRan07}.}\n\\label{fig-examplesPRD}\n\\end{figure*} \nA future holographic screen exists (nonuniquely) in generic spacetimes that have a future event horizon. It is disjoint from the event horizon but it may asymptote to it; see Fig.~\\ref{fig-examplesPRD}a. Past holographic screens exist in expanding universes such as ours, regardless of whether they have a past event horizon. Because $H$ is not defined in terms of distant regions, past and future holographic screens can exist in spacetimes with no distant boundary at all, such as a recollapsing closed universe; see Fig.~\\ref{fig-examplesPRD}b. Our area law applies to all future and past holographic screens.\n\n\\noindent{\\bf Relation to previous work} The notion of future or past holographic screen has roots in two distinct bodies of research, which had not been connected until now. 
It can be regarded as a refinement of the notion of ``preferred holographic screen hypersurface''~\\cite{CEB2}, which need not have monotonic area. Alternatively, it can be viewed as a generalization of the notion of ``dynamical horizon'', which obeys a straightforward area law but is not known to exist in many realistic solutions. We will now discuss these two connections for context and attribution; see also~\\cite{BouEng15a}. We stress, however, that our theorem and proof are self-contained. They rely only on classical General Relativity, and not, for example, on any conjecture about semiclassical or quantum gravity.\n\nFirst, let us discuss the relation to the holographic principle. (See~\\cite{Tho93,Sus95,FisSus98} for earlier work and Ref.~\\cite{RMP} for a review.) To an arbitrary codimension 2 spatial surface $B$, one can associate a light-sheet~\\cite{CEB1}: a null hypersurface orthogonal to $B$ with everywhere nonpositive expansion (i.e., locally nonincreasing area), in the direction away from $B$. The covariant entropy bound (Bousso bound)~\\cite{CEB1} is the conjecture that the entropy of the matter on the light-sheet cannot exceed the area of $B$, in Planck units. The conjecture has broad support; it has been proven in certain limiting regimes~\\cite{FlaMar99,BouFla03,StrTho03,BouCas14a,BouCas14b}.\n\nThere are four null directions orthogonal to any surface. In each direction, the orthogonal null congruence generates a null hypersurface with boundary $B$. The expansion in opposing directions, such as future-outward and past-inward, differs only by a sign. In typical settings, therefore, there will be two directions with initially negative expansion, each of which gives rise to a light-sheet. For example, a sphere in Minkowski space admits light-sheets in the future and past inward directions, but not in the outward directions. A large enough sphere near the big bang is anti-trapped: it admits light-sheets in the past inward and outward directions. 
Spheres near the singularity of a black hole are trapped: the light-sheets point in the future inward and outward directions.\n\nHowever, it is possible to find surfaces that are {\\em marginal}: they have vanishing expansion in one opposing pair of null directions. Hence they admit a pair of light-sheets whose union forms an entire null slice of the spacetime~\\cite{CEB1}. In fact, in strongly gravitating regions one can readily construct a continuous family of marginal surfaces, which foliate a hypersurface called ``preferred holographic screen hypersurface''. The opposing pairs of light-sheets attached to each leaf foliate the spacetime. The Bousso bound is particularly powerful when applied to these light-sheets. It constrains the entropy of the entire spacetime, slice by slice, in terms of the area of the leaves. All quantum information in the spacetime can be stored on the leaves, at no more than about one qubit per Planck area. In this sense the world is a hologram.\n\nFor event horizons, a classical area theorem~\\cite{Haw71} preceded the interpretation of area as physical entropy~\\cite{Bek72}. For holographic screens, the present work belatedly supplies a classical area law for an object whose relevance to geometric entropy had long been conjectured~\\cite{CEB2}. What took so long?\n\nIn fact, the notion of ``preferred holographic screen hypersurface'' lacked a key refinement, without which our theorem would not hold: the distinction between past and future holographic screens. The leaves of a ``preferred holographic screen hypersurface'' are {\\em marginal}, that is, one orthogonal null congruence has vanishing expansion. However, they were not required to be either marginally trapped, or marginally anti-trapped. That is, no definite sign was imposed on the expansion of the second, independent orthogonal null congruence. Fig.~\\ref{fig-examplesPRD}b shows a spacetime in which a ``preferred holographic screen hypersurface'' fails to obey an area law. 
Once we distinguish between marginally trapped and anti-trapped surfaces, however, we recognize that there are in fact two disconnected objects: a past and a future holographic screen. Each obeys an area law, as our proof guarantees, but in different directions of evolution. This is analogous to the distinction between past and future event horizons. From this perspective, it is not surprising that ``preferred holographic screen hypersurfaces'' fail to satisfy an area law without the refinement we introduce here.\n\nThis brings us to the second body of research to which the present work owes a debt. Previous attempts to find a quasi-local alternative to the event horizon culminated in the elegant notions of a future outer trapping horizon (FOTH)~\\cite{Hay93,Hay97,HayMuk98} or dynamical horizon~\\cite{AshKri02,AshKri03,AshGal} (see~\\cite{AshKri04,Booth05} for reviews). In a generic, classical setting their definitions are equivalent: a dynamical horizon is a spacelike hypersurface foliated by marginally trapped surfaces. \n\n``Preferred holographic screen hypersurface'' was a weaker notion than that of a future holographic screen; ``dynamical horizon'' is a stronger one. It adds not only the crucial refinement from marginal to marginally trapped, but also the requirement that the hypersurface be spacelike. This immediately implies that the area increases in the outward direction~\\cite{Hay93,AshKri02}. (Note the brevity of the proof of Theorem~\\ref{thm-area} below: if a spacelike assumption is imposed, that proof alone implies an area law, without the need for any of the previous theorems.) \n\nHowever, our present work shows that the spacelike requirement is not needed for an area theorem. This is important, because the spacelike requirement is forbiddingly restrictive~\\cite{BoothBrits}: no dynamical horizons are known to exist in simple, realistic systems such as a collapsing star or an expanding universe dominated by matter, radiation, and\/or vacuum energy. 
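The distinction between spacelike and timelike portions can be made quantitative (a brief aside, in the notation introduced in Sec.~\\ref{sec-screens} below): writing the leaf-orthogonal tangent vector of the foliated hypersurface as $h^a = \\alpha l^a + \\beta k^a$, where $k^a$ and $l^a$ are the two future-directed null normals of a leaf, one finds\n\\begin{equation}\nh_a h^a = 2\\,\\alpha\\beta\\, (k_a l^a)~.\n\\end{equation}\nSince $k_a l^a<0$ for two distinct future-directed null vectors, the hypersurface is timelike where $\\alpha\\beta>0$, spacelike where $\\alpha\\beta<0$, and null where $\\alpha\\beta=0$. A dynamical horizon thus corresponds to the special case in which $\\alpha\\beta<0$ everywhere.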
\n\nThus, the notion of a dynamical horizon (or of a FOTH) appears to be inapplicable in a large class of realistic regions in which gravity dominates the dynamics. We are not aware of a proof of nonexistence. But we show here that an area theorem holds for the more general notion of future holographic screen, whose existence is obvious and whose construction is straightforward in the same settings. Thus we see little reason for retaining the additional restriction to hypersurfaces of spacelike signature, at least in the context of the second law.\n\nIn the early literature on FOTHs\/dynamical horizons, future holographic screens were already defined and discussed, under the name ``marginally trapped tube''~\\cite{AshKri04}.\\footnote{The definition of ``trapping horizon''~\\cite{Hay93} excludes the junctions between inner and outer trapping horizons and thus precludes the consideration of such objects as a single hypersurface.} Ultimately, two separate area laws were proven, one for the spacelike and one for the timelike portions of the future holographic screens. These follow readily from the definitions. The first, for FOTHs\/dynamical horizons, was mentioned above. The second states that the area decreases toward the future along any single timelike portion (known as ``future inner trapping horizons''~\\cite{Hay93} or ``timelike membranes''~\\cite{AshKri04}). \n\nIn these pioneering works, no unified area law was proposed for ``marginally trapped tubes''\/future holographic screens. Perhaps this is because it was natural to think of their timelike portions as future directed and thus area-decreasing. 
Moreover, the close relation to ``preferred holographic screen hypersurfaces''~\\cite{CEB2} was not recognized, so the area of leaves lacked a natural interpretation in terms of entropy.\\footnote{It is crucial that the entropy associated with the area of leaves on a future holographic screen $H$ is taken to reside on the light-sheets of the leaves, as we assert, and not on $H$ itself. The latter choice---called a ``covariant bound'' in Refs.~\\cite{He:2007qd, He:2007dh, He:2007xd, He:2008em} but related to~\\cite{BakRey99} and distinct from~\\cite{CEB1}---is excluded by a counterexample~\\cite{KalLin99} and would not lead to a valid Generalized Second Law.} And finally, it is not immediately obvious that an area law can hold once timelike and spacelike portions are considered together. Indeed, the central difficulty in the proof we present here is our demonstration that such portions can only meet in ways that uphold area monotonicity for the entire future holographic screen under continuous flow. A key element of our proof builds on relatively recent work~\\cite{Wal10QST}.\n\nThere is an intriguing shift of perspective in a brief remark in later work by Booth {\\em et al.}~\\cite{BoothBrits}. After explicitly finding a ``marginally trapped tube'' (i.e., what we call a future holographic screen) in a number of spherically symmetric collapse solutions, the authors point out that it could be considered as a single object, rather than a collection of dynamical horizon\/``timelike membrane'' pairs. They note that with this viewpoint the area increases monotonically in the examples considered. Our present work proves that this behavior is indeed general.\n\nAnalogues of a first law of thermodynamics have been formulated for dynamical horizons and trapping horizons. We expect that this can be extended to future holographic screens. 
However, here we shall focus on the second law and its classical manifestation as an area theorem.\n\n\\noindent{\\bf Outline \\ } In Sec.~\\ref{sec-screens}, we give a precise definition of future and past holographic screens, and we establish notation and nomenclature. We also describe a crucial mathematical structure derived from the foliation of $H$ by marginally (anti-)trapped leaves $\\sigma(r)$: there exists a vector field $h^a$ tangent to $H$ and normal to its leaves, which can be written as a linear combination of the orthogonal null vector fields $k^a$ and $l^a$. Its integral curves are called fibers of $H$. \n\nIt is relatively easy to see that the area of the leaves is monotonic if $h^a k_a$ has definite sign, i.e., if $H$ evolves towards the past or exterior of each leaf. The difficulty lies in showing that it does so everywhere.\n\nOur proof is lengthy and involves nontrivial intermediate results. Given an arbitrary two-surface $\\sigma$ that splits a Cauchy surface into complementary spatial regions, we show in Sec.~\\ref{sec-csss} that a null hypersurface $N(\\sigma)\\supset \\sigma$ partitions the entire spacetime into two complementary spacetime regions: $K^+(\\sigma)$, the future-and-interior of $\\sigma$; and $K^-(\\sigma)$, the past-and-exterior of $\\sigma$.\n\nIn Sec.~\\ref{sec-ssmono}, we consider a hypersurface foliated by Cauchy-splitting surfaces $\\sigma(r)$. We prove that $K^+(r)$ grows monotonically under inclusion, if the surfaces $\\sigma(r)$ evolve towards their own past-and-exterior. This puts on a rigorous footing the equivalence (implicit in the constructions of~\\cite{CEB2}) between foliations of $H$ and null foliations of spacetime regions. The proofs in Sec.~\\ref{sec-monosplit} do not use all of the properties of $H$; in particular they do not use the marginally trapped property of its leaves. 
Thus our results up to this point apply to more general classes of hypersurfaces.\n\nIn Sec.~\\ref{sec-arealaw}, we do use the assumption that the leaves of $H$ are marginally trapped, and we combine it with the monotonicity of $K^+(r)$ that we established for past-and-exterior evolution. This allows us to show that the evolution of leaves $\\sigma(r)$ on a future holographic screen $H$ must be {\\em everywhere} to the past or exterior (assuming the null energy condition and certain generic conditions). This is the core of our proof. We then demonstrate that such evolution implies that the area $A(r)$ of $\\sigma(r)$ increases strictly monotonically with $r$. \n\nWe close Sec.~\\ref{sec-arealaw} with a theorem establishing the uniqueness of the foliation of a given holographic screen. The holographic screens themselves are highly nonunique. For example, one can associate a past (future) holographic screen with any observer, by finding the maximal area surfaces on the past (future) light-cones of each point on the observer's worldline.\n\n\n\n\n\\section{Holographic Screens}\n\\label{sec-screens}\n\nWe assume throughout this paper that the spacetime is globally hyperbolic (with an appropriate generalization for asymptotically AdS geometries~\\cite{Wal10QST,EngWal14}). We assume the null curvature condition (NCC): $R_{ab}k^{a}k^{b}\\geq 0$ where $k^{a}$ is any null vector. In a spacetime with matter satisfying Einstein's equations this is equivalent to the null energy condition: $T_{ab}k^{a}k^{b}\\geq 0$.\n\n\\begin{defn}\nA {\\em future holographic screen}~\\cite{CEB2} (or {\\em marginally trapped tube}~\\cite{AshGal, AshKri04}) $H$ is a smooth\nhypersurface admitting a foliation by marginally trapped surfaces called {\\em leaves}.\n\nA {\\em past holographic screen} is defined similarly but in terms of marginally anti-trapped surfaces. 
Without loss of generality, we will consider future holographic screens in general discussions and proofs.\n\nBy {\\em foliation} we mean that every point $p\\in H$ lies on exactly one leaf. A {\\em marginally trapped surface} is a codimension 2 compact spatial surface $\\sigma$ whose two future-directed orthogonal null geodesic congruences satisfy\n\\begin{eqnarray}\n\\theta_{k} &=& 0 \\label{marginal}~,\\\\\n\\theta_{l} &<& 0 \\label{trapped}~.\n\\end{eqnarray} \nThe opposite inequality defines ``marginally anti-trapped'', and thus, past holographic screens. Here $\\theta_{k}= \\hat{\\nabla}_a k^a$ and $\\theta_{l} = \\hat{\\nabla}_a l^a$ are the null expansions~\\cite{Wald} (where $\\hat{\\nabla}_{a}$ is computed with respect to the induced metric on $\\sigma$), and $k^{a}$ and $l^{a}$ are the two future directed null vector fields orthogonal to $\\sigma$.\n\nWe will refer to the $k^a$ direction as {\\em outward} and to the $l^a$ direction as {\\em inward}. For screens in asymptotically flat or AdS spacetimes, these notions agree with the intuitive ones. Furthermore, in such spacetimes any marginally trapped surface, and hence any holographic screen, lies behind an event horizon. However, holographic screens may exist in cosmological spacetimes where an independent notion of outward, such as conformal infinity, need not exist (e.g., a closed FRW universe). In this case the definition of $H$ requires only that there exist some continuous assignment of $k^a$ and $l^a$ on $H$ such that all leaves are marginally trapped. See Fig.~\\ref{fig-examplesPRD} for examples of holographic screens.\n\\end{defn}\n\n\\begin{defn}\nThe defining foliation of $H$ into leaves $\\sigma$ determines a $(D-2)$-parameter family of leaf-orthogonal curves $\\gamma$, such that every point $p\\in H$ lies on exactly one curve that is orthogonal to $\\sigma(p)$. 
We will refer to this set of curves as the {\\em fibration of $H$}, and to any element as a {\\em fiber} of $H$.\n\\end{defn}\n\n\\begin{conv} \\label{conv-rh}\nThus it is possible to choose a (non-unique) {\\em evolution parameter} $r$ along the screen $H$ such that $r$ is constant on any leaf and increases monotonically along the fibers $\\gamma$. We will label leaves by this parameter: $\\sigma(r)$. \n\nThe tangent vectors to the fibers define a vector field $h^a$ on $H$. For any choice of evolution parameter the normalization of this vector field can be fixed by requiring that the function $r$ increases at unit rate along $h^a$: $h(r)=h^a\\, (dr)_a=1$. (Since $H$ can change signature, unit normalization of $h^a$ would be possible only piecewise, and hence would not be compatible with the desired smoothness of $h^a$.)\n\\end{conv}\n\n\\begin{rem}\nSince fibers are orthogonal to leaves, a tangent vector field $h^a$ can be written as a (unique) linear combination of the two null vector fields orthogonal to each leaf:\n\\begin{equation}\nh^{a} = \\alpha l^{a} + \\beta k^{a}\n\\label{eq-hlk}\n\\end{equation}\nMoreover, the foliation structure guarantees that $h^a$ vanishes nowhere: it is impossible to have $\\alpha=\\beta=0$ anywhere on $H$. (These remarks hold independently of the requirement that each leaf be marginally trapped.) \\label{rem-h}\n\\end{rem}\n\\begin{figure}[ht]\n\n\\includegraphics[width=2in]{AlphaBetaDiagram.pdf}\n\\caption{The null vectors $l^{a}$ and $k^{a}$ orthogonal to a leaf $\\sigma$ of the foliation of $H$ at some point. The evolution of $H$ is characterized by vector $h^a$ normal to the leaves and tangent to $H$. 
Depending on the quadrant $h^a$ points to, $H$ evolves locally to the future, exterior, past, or interior (clockwise from top).}\n\\label{fig-st}\n\n\\end{figure}\n\n\\begin{conv} \\label{conv-stnotation}\nAs shown in Fig.~\\ref{fig-st}, $h^a$ is spacelike and outward-directed if $\\alpha<0, \\beta>0$; timelike and past-directed if $\\alpha<0, \\beta<0$; spacelike and inward-directed if $\\alpha>0, \\beta<0$; and finally, timelike and future-directed if $\\alpha>0, \\beta>0$. We denote such regions, in this order (and somewhat redundantly): $S_{-+},T_{--},S_{+-},T_{++}$. \n\\end{conv}\n\\begin{rem} \\label{rem-result}\nOur key technical result below will be to demonstrate that $\\alpha$ cannot change sign on $H$. Thus on a given screen $H$, either only the first two, or only the second two possibilities are realized. (The latter case can be reduced to the former by taking $r\\to -r$.)\n\\end{rem}\n\\begin{rem} \\label{rem-borders}\nBecause $\\alpha$ and $\\beta$ cannot simultaneously vanish, $S_{+-}$ and $S_{--}$ regions cannot share a boundary or be separated by a null region; they must be separated by a timelike region. Similarly $T_{++}$ and $T_{--}$ regions must be separated by a spacelike region.\n\\end{rem}\n\nBelow we will consider only holographic screens that satisfy additional technical assumptions:\n\\begin{defn} \\label{def-technical}\nA holographic screen $H$ is {\\em regular} if\n\\begin{enumerate}[(a)]\n\\item \\label{def-technical1} the {\\em first generic condition} is met, that $R_{ab} k^a k^b + \\varsigma_{ab}\\varsigma^{ab}>0$ everywhere on $H$, where $\\varsigma_{ab}$ is the shear of the null congruence in the $k^a$-direction;\n\\item \\label{def-technical2} the {\\em second generic condition} is met: let $H_+$, $H_-$, $H_0$ be the set of points in $H$ with, respectively, $\\alpha>0$, $\\alpha<0$, and $\\alpha=0$. Then $H_0= \\dot H_- = \\dot H_+$. 
\n\\item \\label{def-technical3} every inextendible portion $H_i\\subset H$ with definite sign of $\\alpha$ either contains a complete leaf, or is entirely timelike.\n\\item \\label{def-technical4} every leaf $\\sigma$ splits a Cauchy surface $\\Sigma$ into two disjoint portions $\\Sigma^\\pm$.\n\\end{enumerate} \n\\end{defn}\nAnalogous assumptions have been used in the more restricted context of dynamical horizons. The first generic condition is identical to the regularity condition of~\\cite{AshGal}. Together with the null curvature condition, $R_{ab}k^a k^b\\geq 0$, it ensures that the expansion of the $k^a$-congruence becomes negative away from each leaf. The second generic condition excludes the degenerate case where $\\alpha$ vanishes along $H$ without changing sign. Either condition excludes the existence of an open neighborhood in $H$ with $\\alpha=0$. Both are aptly called ``generic'' since they can fail only in situations of infinitely fine-tuned geometric symmetry and matter distributions.\nThe third assumption is substantially weaker than the definition of a dynamical horizon, since we do not require global spacelike signature of $H$. The fourth assumption will play a role analogous to the assumption of achronality of the dynamical horizon. It holds in typical spacetimes of interest (including settings with nontrivial spatial topology, such as $S^1\\times S^2$, as long as the holographic screen is sufficiently localized on the sphere). We leave the question of relaxing some or all of these assumptions to future work.\n\n\\begin{rem} \\label{rem-s0exists}\nAssumption~\\ref{def-technical}.c and Remark~\\ref{rem-borders} imply that $H$ contains at least one complete leaf with definite sign of $\\alpha$.\n\\end{rem}\n\\begin{conv}\n\\label{conv-orient}\nLet $\\sigma(0)\\subset H$ be an arbitrary leaf with definite sign of $\\alpha$. We will take the parameter $r$ to be oriented so that $\\alpha< 0$ on $\\sigma(0)$, and we take $r=0$ on $\\sigma(0)$. 
By convention \\ref{conv-rh} this also determines the global orientation of the vector field $h^a$. For past holographic screens, it is convenient to choose the opposite convention, $\\alpha>0$ on $\\sigma(0)$.\n\\end{conv}\n\n\n\\section{Leaves Induce a Monotonic Spacetime Splitting}\n\\label{sec-monosplit}\n\nIn this section, we will use only a subset of the defining properties of a holographic screen. In Sec.~\\ref{sec-csss}, we examine the implications of Assumption~\\ref{def-technical}.d, that each leaf split a Cauchy surface. We show that a null surface orthogonal to such a leaf splits the entire spacetime into two disconnected regions $K^\\pm(\\sigma)$.\n\nIn Sec.~\\ref{sec-ssmono}, we use the foliation property of the holographic screen. (However, nowhere in this section do we use the condition that each leaf be marginally trapped, or Assumptions~\\ref{def-technical}.a-c.) We show that in portions of $H$ where $\\alpha$ is of constant sign, the sets $K^\\pm(\\sigma(r))$ satisfy inclusion relations that are monotonic in the evolution parameter $r$. \n\nTogether these results imply that an $\\alpha<0$ foliation of any hypersurface $H$ into Cauchy-splitting surfaces $\\sigma$ induces a null foliation of the spacetime, such that each null hypersurface $N(\\sigma)$ splits the entire spacetime into disconnected regions $K^\\pm(\\sigma)$. 
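For orientation, we record here the elementary computation that singles out the sign of $\\alpha$ (a sketch only; the rigorous statement is the area theorem of Sec.~\\ref{sec-arealaw}). The Lie derivative of the area element of a leaf along $h^a=\\alpha l^a+\\beta k^a$ yields\n\\begin{equation}\n\\frac{dA}{dr} = \\int_{\\sigma(r)} \\left(\\alpha\\,\\theta_{l}+\\beta\\,\\theta_{k}\\right) dA = \\int_{\\sigma(r)} \\alpha\\,\\theta_{l}\\, dA~,\n\\end{equation}\nwhere the second equality uses the marginal condition, Eq.~(\\ref{marginal}). Since $\\theta_{l}<0$ on a future holographic screen, Eq.~(\\ref{trapped}), the area of the leaves increases monotonically in $r$ wherever $\\alpha<0$; the hard part of the proof is to show that $\\alpha<0$ holds everywhere.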
\n\nIn the following section, we will add the marginally trapped condition and the remaining technical assumptions, to show that on a future holographic screen, $\\alpha$ {\\em must}\\\/ have constant sign.\n\n\\subsection{From Cauchy Splitting to Spacetime Splitting}\n\\label{sec-csss}\n\nBy Assumption~\\ref{def-technical}.d, every leaf $\\sigma$ splits a Cauchy surface $\\Sigma$ into two disconnected portions $\\Sigma^+$ and $\\Sigma^-$:\n\\begin{equation}\n\\Sigma = \\Sigma^+ \\cup \\sigma \\cup \\Sigma^-~,~~\\sigma = \\dot\\Sigma^\\pm~.\n\\end{equation}\nWe take $\\Sigma^\\pm$ to be open in the induced topology on $\\Sigma$, so that $\\Sigma^\\pm\\cap\\sigma=\\varnothing$.\nWe consider the following sets shown in Fig.~\\ref{fig-sets}a:\n\\begin{itemize} \n\\item $I^+(\\Sigma^+)$, the chronological future of $\\Sigma^+$: this is the set of points that lie on a timelike future-directed curve starting at $\\Sigma^+$. (Note that this set does not include $\\Sigma^+$.)\n\\item $D^-(\\Sigma^+)$, the past domain of dependence of $\\Sigma^+$: this is the set of points $p$ such that every future-directed causal curve through $p$ must intersect $\\Sigma^+$. (This set does include $\\Sigma^+$.)\n\\item Similarly, we consider $I^-(\\Sigma^-)$ and $D^+(\\Sigma^-)$.\n\\end{itemize} \n\n\\begin{figure*}[ht]\n\n\\subfigure[ ]{\n\n\\includegraphics[height=0.4 \\textwidth]{SigmaDecompositionv2.pdf}\n\\label{fig-sets-a}}\n\\qquad\n\\subfigure[]{\n\\includegraphics[height=0.4\\textwidth]{Ksv2.pdf}\n\\label{fig-sets-b}}\n\\caption{(a) Each leaf $\\sigma$ splits a Cauchy surface. This defines a partition of the entire spacetime into four regions, given by the past or future domains of dependence and the chronological future or past of the two partial Cauchy surfaces. (b) The pairwise unions $K^\\pm$ depend only on $\\sigma$, not on the choice of Cauchy surface. 
They can be thought of as past and future in a null foliation defined by the light-sheets $N$.}\n\\label{fig-sets}\n\\end{figure*}\n\n\n\\begin{defn}\\label{def-sets}\nFrom the Cauchy-splitting property of $\\sigma$, it follows\\footnote{The proofs of the following statements are straightforward and use only well-known properties of $I^\\pm$ and $D^\\pm$.} that the four sets defined above have no mutual overlap. However, they share null boundaries:\n\\begin{eqnarray} \nN^+(\\sigma) & \\equiv & \\dot I^+(\\Sigma^+) -\\Sigma^+ = \\dot D^+(\\Sigma^-) - I^-(D^+(\\Sigma^-))~, \\\\\nN^-(\\sigma) & \\equiv & \\dot I^-(\\Sigma^-) - \\Sigma^- = \\dot D^-(\\Sigma^+) - I^+(D^-(\\Sigma^+))~.\n\\end{eqnarray} \nNote that $N^+(\\sigma)\\cap N^-(\\sigma)=\\sigma$. We define\n\\begin{eqnarray} \nK^+(\\sigma) & \\equiv & I^+(\\Sigma^+) \\cup D^-(\\Sigma^+)-N^+(\\sigma)~; \\\\\nK^-(\\sigma) & \\equiv & D^+(\\Sigma^-) \\cup I^-(\\Sigma^-)-N^-(\\sigma)~; \\\\\nN(\\sigma) &\\equiv & N^+(\\sigma) \\cup N^-(\\sigma)~.\n\\end{eqnarray}\nThus\n\\begin{equation}\nN(\\sigma) = \\dot K^+(\\sigma) = \\dot K^-(\\sigma)~;\n\\end{equation}\nand the sets $N$, $K^+$, and $K^-$ provide a partition of the spacetime (Fig.~\\ref{fig-sets}b). \n\\end{defn}\n\n\\begin{lem}\\label{lem-secondN}\nThere exists an independent characterization of $N^+$, $N^-$, and thus of $N$:\n$N^+(\\sigma)$ is generated by the future-directed null geodesic congruence orthogonal to $\\sigma$ in the $\\Sigma^-$ direction up to intersections: $p\\in N^+(\\sigma)$ if and only if no conjugate point or nonlocal intersection with any other geodesic in the congruence lies between $\\sigma$ and $p$.\n\\end{lem}\nThis follows from a significantly strengthened version of Theorem 9.3.11 in Ref.~\\cite{Wald}, a proof of which will appear elsewhere. Similarly, $N^-$ is generated by the past-directed $\\sigma$-orthogonal null congruence towards $\\Sigma^+$. 
(Hence if $\\sigma$ is marginally trapped, then both $N^\\pm$ are light-sheets of $\\sigma$~\\cite{CEB1}.)\n\n\\begin{cor}\\label{cor-onlysigma}\nLemma~\\ref{lem-secondN} implies that $N$ depends only on $\\sigma$, not on the Cauchy surface $\\Sigma$. Moreover, the sets $K^+$ and $K^-$ are then uniquely fixed by the fact that $N$ splits the spacetime: $K^+$ is the largest connected set that contains $I^+(N)$ but not $N$.\n\\end{cor}\nThus our use of $\\sigma$ (as opposed to $\\Sigma^+$ and\/or $\\Sigma^-$) as the argument of the sets $K^\\pm$, $N^\\pm$ is appropriate. \n\n\\subsection{Monotonicity of the Spacetime Splitting}\n\\label{sec-ssmono}\n\nUntil now, we have only used the Cauchy-splitting property of $\\sigma$. We will now consider a family of such leaves, $\\sigma(r)$, that foliate a hypersurface ${\\cal H}$. (We use this notation instead of $H$, in order to emphasize that ${\\cal H}$ need not satisfy the additional assumptions defining a future holographic screen.) A tangent vector field $h^a$ can be defined as described in Remark~\\ref{rem-h}, with decomposition $h^a = \\alpha l^a + \\beta k^a$ into the null vectors orthogonal to each leaf. We take $\\Sigma^+$ to be the side towards which the vector $l^a$ points. (This convention anticipates Sec.~\\ref{sec-arealaw}. In the current section, $k^a$ and $l^a$ need not be distinguished by conditions on the corresponding expansions.) To simplify notation, we denote $K^+(\\sigma(r))$ as $K^+(r)$, etc.\n\n\\begin{thm}\\label{thm-kmono}\nConsider a foliated hypersurface ${\\cal H}$ with tangent vector field $h^a$ defined as above. Suppose that $\\alpha<0$ on all leaves $\\sigma(r)$ in some open interval, $r\\in I$. Then\n\\begin{equation}\n\\bar K^+(r_1)\\subset K^+(r_2)~,\n\\label{eq-kmono}\n\\end{equation}\nor equivalently $K^-(r_1)\\supset \\bar K^-(r_2)$, for all $r_1, r_2 \\in I$ with $r_1<r_2$.\n\\end{thm}\n\n\\begin{proof}\nWe first prove the infinitesimal version of Eq.~(\\ref{eq-kmono}). Let $\\delta {\\cal H}$ denote the portion of ${\\cal H}$ between the leaves $\\sigma(r)$ and $\\sigma(r+\\delta r)$, and define the submanifold\n\\begin{equation}\nX \\equiv \\Sigma^+(r) \\cup \\delta {\\cal H}~,\n\\label{eq-x}\n\\end{equation}\nwhere $\\delta r>0$ is chosen sufficiently small that\n\\begin{equation}\n\\sigma(r+\\delta r)\\subset K^-(r)~.\n\\label{eq-rdr}\n\\end{equation}\n(Such a choice is possible because $\\alpha<0$: each leaf evolves towards the past-and-exterior of its predecessors.) We first consider the case where $\\beta>0$ between $\\sigma(r)$ and $\\sigma(r+\\delta r)$, i.e., if $\\delta {\\cal H}$ is spacelike. 
Neither the future of a set nor its past domain of dependence can become smaller when the set is enlarged; hence,\n\\begin{eqnarray} \nI^+(X) & \\supset & I^+(\\Sigma^+(r))~,\\nonumber \\\\\nD^-(X) & \\supset & D^-(\\Sigma^+(r))~,\n\\label{eq-idsuper}\n\\end{eqnarray} \nand so the infinitesimal version of Eq.~(\\ref{eq-kmono}) follows trivially from the definition of $K^+$.\n\nNow consider the general case, with no restriction on the sign of $\\beta$. Thus, $\\delta {\\cal H}$ may be spacelike, timelike ($\\beta<0$), or null ($\\beta=0$); indeed, it may be spacelike at some portion of $\\sigma(r)$ and timelike at another. One can still define the submanifold $X$ as the extension of $\\Sigma^+(r)$ by $\\delta {\\cal H}$, as in Eq.~(\\ref{eq-x}); see Fig.~\\ref{fig-deltah}. Again, this extension cannot decrease the future of the set, nor its past domain of dependence,\\footnote{The future of a set is defined for arbitrary sets. The domain of dependence is usually defined only for certain sets, for example for closed achronal sets in Ref.~\\cite{Wald}. Here we extend the usual definition to the more general set $X$: $p\\in D^-(X)$ iff every future-inextendible causal curve through $p$ intersects $X$. This is useful for our purposes; however, we caution that certain theorems involving $D^\\pm$ need not hold with this broader definition.} as described in Eq.~(\\ref{eq-idsuper}).\n\nHowever, $X$ need not be achronal, and hence it need not lie on any Cauchy surface. In this case, we consider a new Cauchy surface that contains $\\sigma(r+\\delta r)$. Because $\\alpha<0$, this surface can be chosen so that $\\Sigma^+(r+\\delta r)$ is nowhere to the future of $X$; see Fig.~\\ref{fig-deltah}. 
Since $X$ and $\\Sigma^+(r+\\delta r)$ share the same boundary $\\sigma(r+\\delta r)$, it then follows that $X$ is entirely in the future of $\\Sigma^+(r+\\delta r)$: \n\\begin{equation}\nX\\subset I^+(\\Sigma^+(r+\\delta r))~.\n\\label{eq-xfut}\n\\end{equation}\nMoreover, the set $X$ together with $\\bar\\Sigma^+(r+\\delta r)$ forms a ``box'' that bounds an open spacetime region $Y$, such that \n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=3.0in]{NestedK2.pdf}\n\\caption{Proof that $K^+(r)$ grows monotonically under inclusion, for any foliation $\\sigma(r)$ of a hypersurface $\\cal H$ with $\\alpha<0$. See the main text for details and definitions.}\n\\label{fig-deltah}\n\\end{center}\n\\end{figure}\n\\begin{equation}\nY\\subset I^+(\\Sigma^+(r+\\delta r))~.\n\\label{eq-ysubi}\n\\end{equation}\nAll future-directed timelike curves that pass through $\\Sigma^+(r+\\delta r)$ enter $Y$ and then can exit $Y$ only through $X$. Hence $D^-(X) \\subset Y \\cup D^-(\\Sigma^+(r+\\delta r))$. Since $\\alpha<0$, for all points outside of $Y \\cup D^-(\\Sigma^+(r+\\delta r))$ there exist future-directed timelike curves that evade $X$. Hence equality holds:\n\\begin{equation}\nD^-(X) = Y \\cup D^-(\\Sigma^+(r+\\delta r))~.\n\\label{eq-dmx}\n\\end{equation}\nTo obtain the infinitesimal inclusion relation, \n\\begin{equation}\nK^+(r+\\delta r)\\supset K^+(r)~,\n\\label{eq-minimono}\n\\end{equation}\nby Eq.~(\\ref{eq-idsuper}) it suffices to show that $K^+(r+\\delta r)\\supset I^+(X)\\cup D^-(X)$. Indeed, if $p\\in I^+(X)$, then by Eq.~(\\ref{eq-xfut}), $p\\in I^+(\\Sigma^+(r+\\delta r))\\subset K^+(r+\\delta r)$. 
And if $p\\in D^-(X)$ then by Eqs.~(\\ref{eq-dmx}) and (\\ref{eq-ysubi}) we again have $p\\in K^+(r+\\delta r)$.\n\nTo obtain the stricter relation\n\\begin{equation}\nK^+(r+\\delta r)\\supset \\bar K^+(r)~,\n\\label{eq-minimonos}\n\\end{equation}\nwe note that $\\sigma(r)\\subset X$; hence by Eq.~(\\ref{eq-xfut}), for every point $p\\in\\sigma(r)$ there exists a timelike curve from $\\Sigma^+(r+\\delta r)$ to $p$. This curve can be continued along the null generator of $N^+(r)$ starting at $p$ to a point $q\\in N^+(r)$, and then slightly deformed into a timelike curve from $\\Sigma^+(r+\\delta r)$ to $q$. By Lemma~\\ref{lem-secondN}, every point in $N^+(r)$ lies on a generator starting at $\\sigma(r)$. Hence, $N^+(r)\\subset K^+(r+\\delta r)$. A similar argument yields $N^-(r)\\subset K^+(r+\\delta r)$. Since $N(r)=N^+(r)\\cup N^-(r)$ and $\\bar K^+(r) = K^+(r)\\cup N(r)$, Eq.~(\\ref{eq-minimonos}) follows. \n\nTo extend Eq.~(\\ref{eq-minimonos}) to Eq.~(\\ref{eq-kmono}), one may iterate the above infinitesimal construction. The only way this could fail is if the iteration gets stuck: if the steps $\\delta r$ have to be taken ever smaller to keep Eq.~(\\ref{eq-rdr}) satisfied. Suppose therefore that the iteration can only reach leaves in an open set $(r,r_*)$, but no leaves in the set $(r_*,r_2)$. But since $\\alpha<0$ on $\\sigma(r_*)$, a finite step $\\delta r$ is possible at $r_*$ itself; this contradicts the assumption that the iteration gets stuck.\n\\end{proof}\n\n\\section{Area Law}\n\\label{sec-arealaw}\n\nIn this section, we prove our main result: that the area of the holographic screen is monotonic. The most difficult part of this task is proving that $\\alpha$ cannot change sign on $H$ (Theorem~\\ref{thm-alpha}).\nWe then prove our area theorem, Theorem~\\ref{thm-area}.\n\\begin{figure}[h]\n\\centering\n\\includegraphics[height=0.35 \\textwidth]{lemma3D.pdf}\n\\caption{An example illustrating Lemma~\\ref{monotonicity}: in Minkowski space, the spatial sphere $\\chi$ is tangent to the null plane $N$ at $p$ and lies outside the past of $N$ near $p$. 
It is easy to see that this implies that $\\chi$ is a cross-section of a future light-cone that shares one null generator with $N$. In this example it is obvious that $\\chi$ expands faster than $N$ at $p$, as claimed in Lemma~\\ref{monotonicity}.}\n\\label{fig-lemma}\n\\end{figure}\nWe begin by stating a useful Lemma.\n\\begin{lem} \\label{monotonicity}\nLet $N$ be a null hypersurface and let $\\chi$ be a spacelike surface tangent to $N$ at a point $p$. That is, we assume that one of the two future-directed null vectors orthogonal to $\\chi$, $\\kappa^a$, is also orthogonal to $N$ at $p$. We may normalize the (null) normal vector field to $N$ so that it coincides with $\\kappa^a$ at $p$. Let $\\theta^{(\\chi)}$ be the null expansion of the congruence orthogonal to $\\chi$ in the $\\kappa^a$ direction, and let $\\theta^{(N)}$ be the null expansion of the generators of $N$. Then:\n\\begin{itemize} \n\\item If there exists an open neighborhood $O(p)\\cap \\chi$ that lies entirely outside the past of $N$,\\footnote{I.e., there exists no past directed causal curve from any point on $N$ to any point in $O(p)\\cap \\chi$.} then $\\theta^{(\\chi)}\\geq \\theta^{(N)}$ at $p$.\n\\item If there exists an open neighborhood $O(p)\\cap \\chi$ that lies entirely outside the future of $N$, then $\\theta^{(\\chi)}\\leq \\theta^{(N)}$ at $p$.\n\\end{itemize} \n\\end{lem}\n\\begin{proof}\nSee Lemma A in Ref.~\\cite{Wal10QST}. Our Lemma is stronger but the proof is the same; so instead of reproducing it here, we offer Fig.~\\ref{fig-lemma} to illustrate the result geometrically. 
It generalizes to null hypersurfaces a familiar Riemannian relation between the extrinsic curvature scalars of two codimension-1 surfaces that are tangent at a point but do not cross near that point.\n\\end{proof}\n\n\n\n\\begin{thm} Let $H$ be a regular future holographic screen with leaf-orthogonal tangent vector field $h^{a}=\\alpha l^{a} + \\beta k^{a}$, whose orientation is chosen so that $\\alpha<0$ at the leaf $\\sigma(0)$. Then $\\alpha< 0$ everywhere on $H$. \\label{thm-alpha}\n\\end{thm}\n\n\\begin{proof}\nBy contradiction: suppose that $H$ contains a point with $\\alpha\\geq 0$. It immediately follows that the subset $H_+\\subset H$ of points with $\\alpha> 0$ is nonempty, since by Assumption~\\ref{def-technical}.b, $\\alpha=0$ can occur only as a transition between $\\alpha<0$ and $\\alpha>0$ regions. Let $\\sigma(0)$ be the complete leaf that exists by Remark~\\ref{rem-s0exists} and has $r=0$, $\\alpha<0$, by Convention~\\ref{conv-orient}. By continuity of $\\alpha$, there exists an open neighborhood of $\\sigma(0)$ where $\\alpha<0$. \n\n\\begin{figure}[t]\n\\includegraphics[height=0.3 \\textwidth]{Forbidden.pdf}\n\\caption{The four types of spacelike-timelike transitions on a future holographic screen that would violate the monotonicity of the area, and which our proof in Sec.~\\ref{sec-arealaw} will exclude. Near $\\sigma(0)$, the area increases in the direction of the arrow. On the far side of the ``bend'' the area would decrease, in the same direction. There are other types of spacelike-timelike transitions which preserve area monotonicity under uniform flow; these do arise generically (see Fig.~\\ref{fig-examplesPRD}a).}\n\\label{fig-sphsym}\n\\end{figure}\nWe first consider the case where $H_+$ has a component in the $r>0$ part of $H$ (cases 1 and 2 in Fig.~\\ref{fig-sphsym}). 
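For orientation, it may help to record how the signs of $\\alpha$ and $\\beta$ determine the causal character of $h^a$, and hence the labels $S_{\\pm\\pm}$ and $T_{\\pm\\pm}$ used below (here we assume the conventional cross-normalization $l^a k_a=-1$; a different choice changes only the positive prefactor):\n\\begin{equation}\nh^a h_a = \\alpha^2\\, l^a l_a + \\beta^2\\, k^a k_a + 2\\alpha\\beta\\, l^a k_a = -2\\alpha\\beta~.\n\\end{equation}\nThus $h^a$ is timelike where $\\alpha\\beta>0$ (types $T_{++}$, $T_{--}$) and spacelike where $\\alpha\\beta<0$ (types $S_{-+}$, $S_{+-}$), the subscripts denoting the signs of $\\alpha$ and $\\beta$ respectively.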
Let $\\sigma(1)$ be the ``last slice'' on which $\\alpha\\leq 0$, i.e., we use our freedom to rescale $r$ to set\n\\begin{equation}\n1=\\inf\\{r:r>0,\\sigma(r)\\cap H_+ \\neq \\varnothing\\}\n\\end{equation}\nBy the second generic condition~\\ref{def-technical}.b, $\\alpha<0$ for all leaves $\\sigma(r)$ with $0<r<1$, and the transition to $\\alpha>0$ begins at a point (or set of points) $p\\in\\sigma(1)$ with $\\alpha=0$. Since $\\alpha$ and $\\beta$ cannot simultaneously vanish, $\\beta$ has a definite sign in a sufficiently small open neighborhood $O(p)\\subset H$.\n\n\\vskip .6cm\n\\noindent {\\bf Case 1 \\ } We first consider the case where $\\beta>0$ in $O(p)$, so that the assumed sign change from $\\alpha<0$ to $\\alpha>0$ corresponds to a transition of $h^a$ from spacelike-outward ($S_{-+}$) to timelike-future-directed ($T_{++}$). The following construction is illustrated in Fig.~\\ref{fig-Case1LR}.\n\nLet $\\sigma^+(1+\\epsilon)$ be the set of points with $\\alpha>0$ on the leaf $\\sigma(1+\\epsilon)$. If there is more than one connected component, we choose $\\sigma^+(1+\\epsilon)$ to be the component at least one of whose fibers passes through $p$. By choosing $\\epsilon$ sufficiently small, we can ensure that $\\sigma^+(1+\\epsilon) \\subset O(p)$. Let $\\Gamma$ be the set of fibers that pass through $\\sigma^+(1+\\epsilon)$. \n\nBecause $\\alpha>0$, all fibers in $\\Gamma$ enter $K^-(1+\\epsilon)$ as we trace them back to smaller values of $r$. But $\\sigma(0)$ is entirely outside of this set: by definition, $\\sigma(0)\\cap K^-(0)=\\varnothing$, so Eq.~(\\ref{eq-nocon}) implies $\\sigma(0)\\cap K^-(1+\\epsilon)=\\varnothing$. Hence, all fibers in $\\Gamma$ also intersect $N(1+\\epsilon)$, at some positive value of $r< 1+\\epsilon$. Because $\\beta>0$ in $O(p)$, this intersection will be with $N^-(1+\\epsilon)$. By smoothness and the second generic assumption, the intersection will consist of one point per fiber. (Otherwise a fiber would coincide with a null generator of $N^{-}(1+\\epsilon)$ in a closed interval.) \nThe set of all such intersection points, one for each fiber in $\\Gamma$, defines a surface $\\phi$, and the fibers define a continuous, one-to-one map from $\\sigma^+(1+\\epsilon)$ to $\\phi$. 
Similarly, the closures of both sets, $\\bar\\sigma^+(1+\\epsilon)$ and $\\bar\\phi$, are related by such a map. Note that these two sets share the same boundary at $r=1+\\epsilon$.\n\nLet $R$ be the minimum value of $r$ on the intersection: $R\\equiv\\inf\\{r(q): q\\in \\bar\\phi\\}$. Since $\\bar\\sigma^+(1+\\epsilon)$ is a closed subset of a compact set, it is compact; and by the fiber map, $\\bar\\phi$ is also compact. Therefore $R$ is attained on one or more points in $\\bar\\phi$. Let $Q$ be such a point. Since $R<1$ but $\\dot\\phi\\subset \\sigma(1+\\epsilon)$, $Q\\notin\\dot\\phi$, and hence $Q$ represents a local minimum of $r$. Hence the leaf $\\sigma(R)$ is tangent to the null hypersurface $N^{-}(1+\\epsilon)$ at $Q$.\n\nSince $Q$ achieves a global minimum of $r$ on $\\bar\\phi$, $\\sigma(R)$ lies nowhere in the past of $N^{-}(1+\\epsilon)$ in a sufficiently small open neighborhood of $Q$. For suppose there existed no such neighborhood. Then fibers arbitrarily close to the one containing $Q$ (and hence connected to $\\sigma^+(1+\\epsilon)$) would still be inside $K^-(1+\\epsilon)$ at $R$. Hence we could find a value $r<R$ attained on $\\bar\\phi$, in contradiction with the definition of $R$ as a global minimum. Since $N^-(1+\\epsilon)$ has positive expansion by the first generic condition, Lemma~\\ref{monotonicity} now implies $\\theta_k^{\\sigma(R)}\\geq\\theta^{(N^-)}>0$ at $Q$. But this contradicts the defining property of holographic screens, that all leaves are marginally trapped ($\\theta_k^{\\sigma(r)}=0$ for all $r$). \n\n\\vskip .6cm\n\\noindent {\\bf Case 2 \\ } Next we consider the case where $\\beta<0$ in the neighborhood of the assumed transition from $\\alpha<0$ to $\\alpha>0$ that begins at $r=1$ (see Fig.~\\ref{fig-sphsym}). This corresponds to the appearance of a spacelike-inward-directed region within a timelike-past-directed region: $T_{--}\\to S_{+-}$.\n\nWe note that the direct analogue of the above proof by contradiction fails: tracing back the generators from $\\sigma^+(1+\\epsilon)$ to $\\sigma(0)$, one finds that they pass through $N^+(1+\\epsilon)$, rather than $N^-(1+\\epsilon)$. But $N^+$ has negative expansion by the first generic condition, whereas $N^-$ had positive expansion. 
There is no compensating sign change elsewhere in the argument; in particular, the tangent leaf $\\sigma(R)$ with vanishing expansion again lies nowhere in the past of $N^+$ in a neighborhood of the tangent point $Q$. Thus no contradiction arises with Lemma~\\ref{monotonicity}.\n\nInstead, we show that every case 2 transition implies the existence of a case 1 transition at a {\\em different}\\\/ point on $H$, under the reverse flow $r\\to c-r$. Since we have already shown that case 1 transitions are impossible, this implies that case 2 transitions also cannot occur. \n\nLet us first illustrate this argument in the simple case where the transition occurs entirely on a single leaf: $\\alpha<0$ for $0\\leq r<1$, $\\alpha=0$ at $r=1$, and $\\alpha>0$ for $1<r$. Under the reverse flow, $h^a\\to -h^a$ and hence $(\\alpha,\\beta)\\to(-\\alpha,-\\beta)$: the $S_{+-}$ region at $r>1$ becomes $S_{-+}$, the $T_{--}$ region at $r<1$ becomes $T_{++}$, and the transition at $r=1$ becomes a case 1 transition ($S_{-+}\\to T_{++}$), which we have already excluded.\n\\begin{figure*}[t]\n\\caption{A case 2 transition: by Def.~\\ref{def-technical}.c, the $\\alpha>0$ region contains a complete leaf $\\sigma(2+\\epsilon)$. In the text we show that the complete-leaf region begins at some leaf $\\sigma(2)$ where a $T_{--}\\to S_{+-}$ boundary comes to an end: either the original one (a), or a different one containing a $T_{--}$ region with no complete leaf (b). The endpoint (green dot) becomes the starting point of a case 1 transition ($S_{-+}\\to T_{++}$) under reversal of the flow direction; but this case has already been ruled out.}\n\\label{fig-case2proof}\n\\end{figure*}\n\n\nIn general, the case 2 transition need not occur on a single leaf, so we shall assume for contradiction only that $\\alpha$ first becomes positive at some point of, or subset of, $\\sigma(1)$, as in the case 1 proof, and that $\\beta<0$ in a neighborhood of this set. Let $\\tilde H_+$ denote the connected region with $\\alpha>0$ that begins at this transition. Since the transition is $T_{--}\\to S_{+-}$, $\\tilde H_+$ contains some spacelike points; and hence by Def.~\\ref{def-technical}.c, $\\tilde H_+$ contains a complete leaf with $\\alpha>0$. 
We use our freedom to rescale $r$ to set\n\\begin{equation}\n2=\\inf\\{r:r>0,\\sigma(r)\\subset \\tilde H_+\\}\n\\label{eq-leaf2}\n\\end{equation}\nBy the second generic assumption, Def.~\\ref{def-technical}.b, this choice implies the existence of an open interval $(2,2+\\epsilon)$ such that every leaf in this interval is a complete leaf with $\\alpha>0$. Let us call this intermediate result (*); see Fig.~\\ref{fig-case2proof}, which also illustrates the remaining arguments.\n\nWe now consider the boundary $B$ that separates the $\\alpha<0$ from the $\\alpha>0$ region, i.e., the connected set of points with $\\alpha=0$ that begins at $r=1$. Because $\\alpha$ and $\\beta$ cannot simultaneously vanish, we have $\\beta<0$ in an open neighborhood of all of $B$. Thus, $B$ separates a $T_{--}$ region at smaller $r$ from an $S_{+-}$ region at larger $r$. We note that $B$ must intersect every fiber, or else $\\tilde H_+$ would not contain a complete leaf. Moreover, $B$ must end at some $r_*\\leq 2$, or else there would be points with $\\alpha<0$ in the interval $(2,2+\\epsilon)$, in contradiction with (*). \n\nIf $r_*=2$ then under the reverse flow starting from the complete leaf at $r=2+\\epsilon$ there is a case 1 transition at $r=2$ from $S_{-+}$ to $T_{++}$, and we are done. This is shown in Fig.~\\ref{fig-case2proof}a. \n\nThe only remaining possibility is that $B$ ends at some $r_*\\in (1,2)$; this is shown in Fig.~\\ref{fig-case2proof}b. Then every leaf with $r\\in (r_*,2)$ must contain points with $\\alpha<0$, or else there would be a complete leaf with $\\alpha>0$ at some $r<2$, in contradiction with Eq.~(\\ref{eq-leaf2}). Therefore each leaf with $r\\in (r_*,2)$ must intersect one or more $\\alpha<0$ regions $\\tilde H_-^{(i)}$ that are disconnected from the $T_{--}$ region bounded by $B$. None of these regions $\\tilde H_-^{(i)}$ can contain a complete $\\alpha<0$ leaf, because this would imply that $\\tilde H_+$ does not contain a complete $\\alpha>0$ leaf. 
From Def.~\\ref{def-technical}.c it follows that each region $\\tilde H_-^{(i)}$ is everywhere timelike, i.e., of type $T_{--}$. But this implies that a $T_{--}$ region ends at $r=2$ where $\\alpha$ becomes positive. Moreover, the $S_{+-}$ region in which the $T_{--}$ region ends has complete leaves in some open interval $(2,2+\\epsilon)$ by our result (*). Thus we find again that under the reverse flow starting from the complete leaf at $r=2+\\epsilon$ there is a case 1 transition at $r=2$ from $S_{-+}$ to $T_{++}$.\n\nWe have thus established that a case 2 transition at $r=1$ implies a case 1 transition at the same or a larger value of $r$, after reversal of the direction of flow. Since case 1 transitions are impossible, we conclude that case 2 transitions are also impossible.\n\\begin{figure*}[ht]\n\n\\includegraphics[width= 0.85\\textwidth]{Panelv3.pdf}\n\\caption{(a) Case 3 is ruled out analogously to case 1, by contradiction. (b) Case 4 is analogous to case 2: the transition is impossible because it would imply a case 3 transition elsewhere on $H$, under reversal of the flow direction.}\n\\label{fig-cases}\n\\end{figure*}\n\n\n\\vskip .6cm\n\\noindent {\\bf Cases 3 and 4} \nOur consideration of cases 1 and 2 has ruled out the possibility of points with $\\alpha>0$ at any $r>0$. (Recall that $r=0$ corresponds to a complete leaf with $\\alpha<0$.) We must now also rule out the possibility that $\\alpha$ might be positive in the region $r<0$; this corresponds to cases 3 and 4 in Fig.~\\ref{fig-sphsym}. Again, assume for contradiction that such a transition occurs, and focus on the transition nearest to $r=0$. We may rescale $r$ so that this transition ends at $r=-1$. That is, $\\alpha<0$ for all $r\\in (-1,0)$, but all leaves in some interval $(-(1+\\epsilon),-1)$ contain points with $\\alpha>0$. 
Again, a further case distinction arises depending on the sign of $\\beta$ at this transition.\n \nThe proof of case 3 (Fig.~\\ref{fig-cases}a), where $\\beta>0$ at the transition, proceeds exactly analogously to that of case 1. Fibers that connect the offending region to $r=0$ must cross the null hypersurface $N^+(-(1+\\epsilon))$, implying the existence of a leaf $\\sigma(R)$, $-1-\\epsilon<R<0$, that is tangent to $N^+(-(1+\\epsilon))$ at some point $Q$ and lies nowhere in the future of $N^+(-(1+\\epsilon))$ in a sufficiently small neighborhood of $Q$. Since $N^+$ has negative expansion by the first generic condition, Lemma~\\ref{monotonicity} implies $\\theta_k^{\\sigma(R)}\\leq\\theta^{(N^+)}<0$ at $Q$, again in contradiction with the property that all leaves are marginally trapped.\n\nThe proof of case 4 (Fig.~\\ref{fig-cases}b), where $\\beta<0$ at the transition, proceeds analogously to that of case 2: a case 4 transition would imply the existence of a case 3 transition elsewhere on $H$, under reversal of the flow direction. Since case 3 transitions have just been excluded, case 4 transitions cannot occur either.\n\nThis exhausts all cases; hence $\\alpha<0$ everywhere on $H$.\n\\end{proof}\n\nWe are now ready to prove our main result:\n\\begin{thm}[Area Theorem]\\label{thm-area}\nLet $H$ be a regular future holographic screen with foliation $\\{\\sigma(r)\\}$, and let $A(r)$ be the area of the leaf $\\sigma(r)$. Then $A(r)$ increases strictly monotonically with $r$:\n\\begin{equation}\n\\frac{dA}{dr}>0~.\n\\label{eq-area}\n\\end{equation}\n\\end{thm}\n\\begin{proof} \nBy Theorem~\\ref{thm-alpha}, $\\alpha<0$ everywhere on $H$. In regions where $\\beta$ is of definite sign, the result would then follow from the analysis of Hayward~\\cite{Hay93} (using a 2+2 lightlike formalism) or that of Ashtekar and Krishnan~\\cite{AshKri02}, who used a standard 3+1 decomposition. It should be straightforward to generalize their proofs to the case where $\\beta$ may not have definite sign on some or all leaves. However, since this would necessitate the introduction of additional formalism, we will give here a simple, geometrically intuitive proof. Our construction is shown in Fig.~\\ref{fig-zigzag}. \n\nConsider two infinitesimally nearby leaves at $r$ and $r+dr$, $dr>0$. Construct the null hypersurface $N(r)$ in a neighborhood of $\\sigma(r)$. Also, construct the null hypersurface $L^+(r+dr)$ generated by the future-directed null geodesics with tangent vector $l^a$, in a neighborhood of $\\sigma(r+dr)$. By Theorem~\\ref{thm-alpha}, for sufficiently small $dr$ these null hypersurfaces intersect on a two-dimensional surface $\\hat\\sigma(r,r+dr)$, such that every generator of each congruence intersects $\\hat\\sigma(r,r+dr)$ in exactly one point. \n\nNote that in regions where $H$ is spacelike, $\\beta>0$, the intersection will lie in $N^+(r)$; if $H$ is timelike, $\\beta<0$, the intersection will lie in $N^-(r)$; but this makes no difference to the remainder of the argument. 
Crucially, Theorem~\\ref{thm-alpha} guarantees that the intersection always lies in $L^+(r+dr)$, and never in $L^-(r+dr)$, the null hypersurface generated by the past-directed null geodesics with tangent vector $-l^a$.\nWe now exploit the defining property of $H$, that each leaf is marginally trapped ($\\theta^{\\sigma(r)}_k=0$). This implies\n\\begin{eqnarray} \nA[\\hat\\sigma]-A[\\sigma(r)] & = & O(dr^2)~;\\\\\nA[\\sigma(r+dr)]-A[\\hat\\sigma] & = & O(dr)>0~.\n\\end{eqnarray} \nHence, the area increases linearly in $dr$ between any two nearby leaves $\\sigma(r)$, $\\sigma(r+dr)$. This implies that the area increases strictly monotonically with $r$.\n\n\\end{proof}\n\n\\begin{cor}\\label{cor-quant}\nThe above construction implies, more specifically, that the area of leaves increases at the rate\n\\begin{equation}\n\\frac{dA}{dr} = \\int_{\\sigma(r)} \\sqrt{h^{\\sigma(r)}}~ \\alpha \\theta^{\\sigma(r)}_l~,\n\\label{eq-specarea}\n\\end{equation}\nwhere $h_{ab}^{\\sigma(r)}$ is the induced metric on the leaf $\\sigma(r)$ and $h^{\\sigma(r)}$ is its determinant. This follows from the first variation of the area element along $h^a$, $\\delta_h\\sqrt{h^{\\sigma(r)}}=\\sqrt{h^{\\sigma(r)}}\\,(\\alpha\\theta^{\\sigma(r)}_l+\\beta\\theta^{\\sigma(r)}_k)$, combined with the marginal trapping condition $\\theta^{\\sigma(r)}_k=0$. Note that the integrand is positive definite since $\\alpha<0$ and all leaves are marginally trapped; in this sense the area theorem is local. However, the theorem applies to complete leaves only, not to arbitrary deformations of leaves.\n\\end{cor}\n\n\\begin{cor}\\label{cor-past}\nFor past holographic screens, we recall the contrasting convention that $\\alpha>0$ on $\\sigma(0)$. The above arguments then establish that $\\alpha>0$ everywhere on $H$. Eqs.~(\\ref{eq-area}) and (\\ref{eq-specarea}) hold as an area theorem. \n\\end{cor}\n\n\\begin{rem}\\label{rem-arrow}\nWe note that the area increases in the outside or future direction along a past holographic screen. With an interpretation of area as entropy, the holographic screens of an expanding universe thus have a standard arrow of time. 
\n\\end{rem}\n\\begin{rem}\\label{rem-backarrow}\nBy contrast, the area increases in the outside or {\\em past} direction along a future holographic screen. Thus, {\\em the arrow of time runs backwards on the holographic screens inside black holes, and near a big crunch}. Perhaps this intriguing result is related to the difficulty of reconciling unitary quantum mechanics with the equivalence principle~\\cite{Haw76,AMPS,Bou12c,AMPSS,Bou13,MarPol13,Bou13a,Bou13b}.\n\\end{rem}\n\nWe close with a final theorem that establishes the uniqueness of the foliation of $H$:\n\\begin{thm}\nLet $H$ be a regular future holographic screen with foliation $\\{\\sigma(r)\\}$. Every marginally trapped surface $s\\subset H$ is one of the leaves $\\sigma(r)$. \n\\end{thm}\n\\begin{proof}\nBy contradiction: suppose that $s$ is marginally trapped and distinct from any $\\sigma(r)$. Thus $s$ intersects the original foliation in a nontrivial closed interval $[r_1,r_2]$ and is tangent to $\\sigma(r_1)$ and $\\sigma(r_2)$. The $\\theta=0$ null vector field orthogonal to $s$ must coincide with $k^a$ at the tangent point with $\\sigma(r_2)$. Since $r_1