diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzahox" "b/data_all_eng_slimpj/shuffled/split2/finalzzahox" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzahox" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nLet $(u_n)_{n\\ge 0}$ be a sequence of real numbers satisfying the three-term recurrence relation\n\\begin{equation}\\label{u-rr}\na(n)u_{n+1}=b(n)u_{n}-c(n)u_{n-1},\\qquad n=1,2,\\ldots,\n\\end{equation}\nwhere $a(n),b(n),c(n)$\ntake positive values for all $n\\ge 1$.\nWe say also that $u_n$ is a {\\it solution} of the difference equation \\eqref{u-rr}.\n \n \nThe positivity problem naturally arises:\nin which case, the three-term recurrence sequence is positive?\nSuch a problem is closely related to the total nonnegativity of matrices.\nFollowing \\cite{FJ11},\nwe say that a (finite or infinite) matrix is {\\it totally nonnegative} (TN for short),\nif its minors of all orders are nonnegative.\nWe have the following characterization.\n\n\\begin{thm}[Characterization]\n\\label{pp-TN}\nLet $u_n$ be a solution of the difference equation \\eqref{u-rr}.\nThen $(u_n)_{n\\ge 0}$ is positive if and only if $u_0>0$ and the tridiagonal matrix\n$$\nM_0=\\left(\n \\begin{array}{ccccc}\n u_1 & c(1) & & & \\\\\n u_0 & b(1) & c(2) & & \\\\\n & a(1) & b(2) & c(3) & \\\\\n & & a(2) & b(3) & \\ddots \\\\\n & & & \\ddots & \\ddots \\\\\n \\end{array}\n \\right).\n$$\nis totally nonnegative.\n\\end{thm}\n\nOur interest in the positivity problem of three-term recurrence sequences is motivated by\nthe positivity of diagonal Taylor coefficients of multivariate rational functions\n(see Example \\ref{dtc-exm}).\nSome of diagonal coefficients are the so-called Ap\\'ery-like numbers that satisfy three-term recurrence relations,\nin which $a(n),b(n),c(n)$ are all quadratic polynomials in $n$\nor are all cubic polynomials in $n$.\n\nThroughout this paper,\nwe always assume that\n$a(n),b(n),c(n)$ in \\eqref{u-rr} are polynomials in $n$ with the same degree $\\delta$ and\n$$a(n)=an^\\delta+a'n^{\\delta-1}+\\cdots,\\quad\nb(n)=bn^\\delta+b'n^{\\delta-1}+\\cdots,\\quad\nc(n)=cn^\\delta+c'n^{\\delta-1}+\\cdots,$$\nwhere the leading coefficients $a,b,c$ are positive.\n\nFollowing Elaydi \\cite{Ela05},\na nontrivial solution $u_n$ of\n\\eqref{u-rr} is said to be {\\it oscillatory} (around zero)\nif for every positive integer $N$ there exists $n\\ge N$ such that $u_nu_{n+1}\\le 0$.\nOtherwise, the solution is said to be {\\it nonoscillatory}.\nIn other words,\na solution is nonoscillatory if it is {\\it eventually sign-definite},\ni.e., either eventually positive or eventually negative.\n\n\nWe say that a nontrivial solution $u^*_n$ is a {\\it minimal solution} of \\eqref{u-rr}\nif $\\lim_{n\\rightarrow +\\infty}u^*_n\/u_n=0$ for arbitrary solution $u_n$ of \\eqref{u-rr}\nthat is not a multiple of $u^*_n$.\nClearly, a minimal solution is unique up to multiplicity.\nMinimal solutions play a central role in the convergence of continued fractions\nand the asymptotics of orthogonal polynomials.\nFor convenience, we also write \\eqref{u-rr} as\n\\begin{equation}\\label{u-rr-1}\nu_{n+1}=\\b_nu_{n}-\\c_nu_{n-1},\\qquad n=1,2,\\ldots\n\\end{equation}\nwhere $\\b_n=b(n)\/a(n)$ and $\\c_n=c(n)\/a(n)$.\nFor simplicity,\nwe denote the continued fraction in a compact 
form\n\\begin{equation}\\label{cf}\n\\frac{\\c_1}{\\b_1-}\\;\\frac{\\c_2}{\\b_2-}\\;\\frac{\\c_3}{\\b_3-}\\;\\cdots\n:=\\cfrac{\\c_1}{\\b_1-\\cfrac{\\c_2}{\\b_2-\\cfrac{\\c_3}{\\b_3-\\cdots}}}\n\\end{equation}\n\n\\begin{thm}[Necessity]\n\\label{nc-pp}\nLet $(u_n)_{n\\ge 0}$ be a solution of \\eqref{u-rr}.\n\\begin{enumerate}[\\rm (i)]\n\\item\nIf $(u_n)$ is eventually sign-definite,\nthen $b^2\\ge 4ac$.\n\\item\nIf $(u_n)_{n\\ge 0}$ is positive,\nthen the continued fraction \\eqref{cf}\nconverges to a finite positive limit $\\rho_0$ and\n$u_1\\ge\\rho_0 u_0$.\nMoreover,\nthe solution $(u^*_n)_{n\\ge 0}$ of \\eqref{u-rr} decided by $u^*_0=1$ and $u^*_1=\\rho_0$\nis a positive and minimal solution of \\eqref{u-rr}.\n\\end{enumerate}\n\\end{thm}\n\n\\begin{thm}[Sufficiency]\\label{sc-epp}\nIf $b^2>4ac$,\nthen each nontrivial solution $(u_n)$ of \\eqref{u-rr} is eventually sign-definite.\n\\end{thm}\n\n\nDenote the characteristic polynomial of the difference equation \\eqref{u-rr} by $Q(\\la)=a\\la^2-b\\la+c$\nand the characteristic roots by\n\\begin{equation*}\\label{roots}\n\\la_1=\\frac{b-\\sqrt{b^2-4ac}}{2a},\\quad\n\\la_2=\\frac{b+\\sqrt{b^2-4ac}}{2a}.\n\\end{equation*}\nDenote\n$Q_n(\\la):=a(n)\\la^2-b(n)\\la+c(n)$.\nThen $Q_n(\\la)=Q(\\la)n^\\delta+\\cdots$.\nAssume that $b^2>4ac$.\nThen for $\\la_1<\\la_0<\\la_2$, we have $Q(\\la_0)<0$,\nand so $Q_n(\\la_0)<0$ for sufficiently large $n$.\n\n\\begin{thm}[Criterion]\n\\label{sc-pp}\nLet $(u_n)_{n\\ge 0}$ be a solution of \\eqref{u-rr}.\nAssume that these exists a positive number $\\la_0$ such that\n$Q_n(\\la_0)\\le 0$ for all $n\\ge m$ and\n$u_{m+1}\\ge \\la_0u_m>0$.\nThen $(u_n)_{n\\ge m}$ is positive.\n\\end{thm}\n\nClearly, $Q_n(\\la_0)\\le 0$ for all $n\\ge m$\nimplies that $Q(\\la_0)\\le 0$,\nand so that\n$b^2\\ge 4ac$ and $\\la_1\\le\\la_0\\le\\la_2$.\nIn this sense,\nthe conditions in Theorem \\ref{sc-pp} are ``almost\" necessary.\n\nThis paper is organized as follows.\nIn \\S 2,\nwe show Theorems \\ref{pp-TN} and {\\ref{nc-pp} by means of the total nonnegativity of tridiagonal matrices\nand the theory of continued fractions.\nIn \\S 3,\nwe first present the proofs of Theorems \\ref{sc-epp} and \\ref{sc-pp},\nand then apply them to show the positivity of diagonal Taylor coefficients of some famous rational functions.\nWe also establish a criterion for the positivity and log-convexity of three-term recurrence sequences.\nIn \\S 4,\nwe illustrate that the difference equation \\eqref{u-rr} may be either oscillatory or nonoscillatory in the case $b^2=4ac$.\nWe also propose a couple problems for further work.\n\n\\section{Proof of Theorems \\ref{pp-TN} and \\ref{nc-pp}\n\nAsymptotic behavior of solutions of second-order difference equations\nhas been extensively and deeply investigated\n(see \\cite[Chapter 8]{Ela05} for instance).\nHowever, as will be seen below,\nthe total nonnegativity of (tridiagonal) matrices is a more natural approach to the positivity problem.\n\n\n\n\nFollowing \\cite{FJ11},\nwe say that a (finite or infinite) matrix is {\\it totally nonnegative} (TN for short),\nif its minors of all orders are nonnegative.\nLet $(a_n)_{n\\ge 0}$ be an infinite sequence of nonnegative numbers.\nIt is called a {\\it P\\'olya frequency} (PF for short) sequence\nif the associated Toeplitz matrix\n$$[a_{i-j}]_{i,j\\ge 0}=\\left[\n\\begin{array}{lllll}\na_{0} & & & &\\\\\na_{1} & a_{0} & &\\\\\na_{2} & a_{1} & a_{0} & &\\\\\na_{3} & a_{2} & a_{1} & a_{0} &\\\\\n\\vdots & & & & \\ddots\\\\\n\\end{array}\n\\right]$$\nis TN.\nWe say 
that a finite sequence $(a_0,a_1,\\ldots, a_n)$ is PF\nif the corresponding infinite sequence $(a_0,a_1,\\ldots,a_n,0,0,\\ldots)$ is PF.\nA classical result of Aissen, Schoenberg and Whitney states that\na finite sequence of nonnegative numbers is PF\nif and only if its generating function has only real zeros\n(see \\cite[p. 399]{Kar68} for instance).\nFor example, the sequence $(r,s,t)$ of nonnegative numbers is PF if and only if $s^2\\ge 4rt$.\n\nTo prove Theorem \\ref{pp-TN},\nwe need the following result (see \\cite[Example 2.2, p.149]{Min88} for instance).\n\n\\begin{lem\n\\label{irr-tp}\nAn irreducible nonnegative tridiagonal matrix is totally nonnegative\nif and only if all its leading principal minors are positive.\n\\end{lem}\n\n\\begin{proof}[Proof of Theorem \\ref{pp-TN}]\nLet $(u_n)_{n\\ge 0}$ be a solution of the difference equation \\eqref{u-rr-1}.\nThen $u_{n+1}=\\b_nu_n-\\c_nu_{n-1}$,\nwhere $\\b_n=b(n)\/a(n)$ and $\\c_n=c(n)\/a(n)$.\nDenote the infinite tridiagonal matrix\n$$\nM_1=\\left(\n \\begin{array}{ccccc}\n u_1 & \\c_1 & & & \\\\\n u_0 & \\b_1 & \\c_2 & & \\\\\n & 1 & \\b_2 & \\c_3 & \\\\\n & & 1 & \\ddots & \\ddots \\\\\n & & & \\ddots & \\ddots \\\\\n \\end{array}\n \\right).\n$$\nThen for $n\\ge 1$,\nthe $n$th leading principal minor of $M_1$ is precisely $u_{n}$,\nsince they satisfy the same three-term recurrence relation.\nSo, if $u_0$ and $\\gamma_n$ are positive for all $n\\ge 1$,\nthen the sequence $(u_n)_{n\\ge 1}$ is positive if and only if the tridiagonal matrix $M_1$ is totally nonnegative\nby Lemma \\ref{irr-tp}.\nClearly, $M_0$ is TN if and only if $M_1$ is TN.\nHence the positivity of the sequence $(u_n)_{n\\ge 1}$\nis equivalent to the total nonnegativity of the tridiagonal matrix $M_0$.\nThis completes the proof of Theorem \\ref{pp-TN}.\n\\end{proof}\n\nThere are characterizations for the total nonnegativity of tridiagonal matrices besides Lemma \\ref{irr-tp}.\n\n\\begin{lem}[{\\cite[Example 2.1, p.147]{Min88}}]\\label{tri-tp}\nA nonnegative tridiagonal matrix is totally nonnegative if and only if all its principal minors are nonnegative.\n\\end{lem}\n\n\\begin{lem}[{\\cite[Theorem 4.3]{Pin10}}]\\label{tri-tn}\nA nonnegative tridiagonal matrix is totally nonnegative if and only if\nall its principal minors containing consecutive rows and columns are nonnegative,\n\\end{lem}\n\nWe also refer the reader to \\cite{CLW15ra,CLW15rm,LMW16,WZ16} for some criteria for the total nonnegativity of tridiagonal matrices.\n\n\n\\begin{proof}[\\bf Proof of Theorem \\ref{nc-pp} (i)]\nClearly, it suffices to consider the case the total sequence $(u_n)_{n\\ge 0}$ is positive.\nBy Proposition \\ref{pp-TN},\nto prove Theorem \\ref{nc-pp} (i),\nit suffices to prove that the total nonnegativity of the matrix $M_0$ implies $b^2\\ge 4ac$.\nIn other words,\nwe need to prove that the sequence $(c,b,a)$ is a P\\'olya frequency sequence,\nor equivalently, the tridiagonal matrix\n$$\n\\left(\n \\begin{array}{ccccc}\n b & c & & & \\\\\n a & b & c & & \\\\\n & a & b & c & \\\\\n & & a & b & \\ddots \\\\\n & & & \\ddots & \\ddots \\\\\n \\end{array}\n \\right)\n$$\nis totally nonnegative.\nBy Lemma \\ref{tri-tn},\nit suffices to show that the determinants\n$$D_k=\n\\det\\left(\n \\begin{array}{ccccc}\n b & c & & & \\\\\n a & b & c & & \\\\\n & a & b & \\ddots & \\\\\n & &\\ddots &\\ddots & c\\\\\n & & & a& b\\\\\n \\end{array}\n \\right)_{k\\times k}\n$$\nare nonnegative for all $k\\ge 1$.\n\nSuppose the contrary and assume that $D_m<0$ for some $m\\ge 1$.\nConsider the 
determinants\n$$\nD_m(n)=\\det\\left(\n \\begin{array}{ccccc}\n b(n+1) & c(n+2) & & & \\\\\n a(n+1) & b(n+2) & c(n+3) & & \\\\\n & a(n+2) & b(n+3) & \\ddots & \\\\\n & &\\ddots &\\ddots & c(n+m)\\\\\n & & & a(n+m-1) & b(n+m)\\\\\n \\end{array}\n \\right)_{m\\times m}.\n$$\nClearly, $D_m(n)\\ge 0$ for all $n\\ge 0$\nsince they are minors of the totally nonnegative matrix $M_0$.\nOn the other hand,\nnote that\n$D_m(n)$\nare polynomials in $n$\nof degree $m\\delta$ with the leading coefficient $D_m$:\n$$D_m(n)=D_mn^{m\\delta}+\\cdots.$$\nIt follows that $D_m(n)<0$ for sufficiently large $n$, a contradiction.\n\nThus $D_k\\ge 0$ for all $k\\ge 1$, as desired.\nThis completes the proof of Theorem \\ref{nc-pp} (i).\n\\end{proof}\n\n\nTo prove Theorem \\ref{nc-pp} (ii),\nwe need the following classical determinant evaluation rule.\n\n\\begin{DJI}\\label{dji}\nLet the matrix $M=[m_{ij}]_{0\\le i,j\\le k}$.\nThen $$\\det M\\cdot\\det M^{0,k}_{0,k}=\\det M_k^k\\cdot\\det M_0^0-\\det M_0^k\\cdot\\det M_k^0,$$\nwhere $M^I_J$ denote the submatrix obtained from $M$ by deleting those rows in $I$ and columns in $J$.\n\\end{DJI}\n\nLet $\\b=(\\b_n)_{n\\ge 0}$ and $\\c=(\\c_n)_{n\\ge 1}$ be two sequences of positive numbers.\nDenote\n$$\nJ_i=\\left(\n \\begin{array}{ccccc}\n \\b_i & \\c_{i+1} & & & \\\\\n 1 & \\b_{i+1} & \\c_{i+2} & & \\\\\n & 1 & \\b_{i+2} & \\c_{i+3} & \\\\\n & & 1 & \\b_{i+3} & \\ddots \\\\\n & & & \\ddots & \\ddots \\\\\n \\end{array}\n \\right),\\quad i=0,1,2,\\ldots.\n$$\n\n\\begin{lem}\\label{td-cf}\nIf the tridiagonal matrix $J_0$ is totally nonnegative,\nthen the continued fraction\n\\begin{equation}\\label{cf-0}\n\\b_0\n\\frac{\\c_1}{\\b_1-}\\;\\frac{\\c_2}{\\b_2-}\\;\\frac{\\c_3}{\\b_3-}\\;\\frac{\\c_4}{\\b_4-}\\;\\cdots\n\\end{equation}\nis convergent.\n\\end{lem}\n\\begin{proof}\nLet $A(n)$ and $B(n)$ be the $n$th partial numerator and\nthe $n$th partial denominator of the continued fraction \\eqref{cf-0}.\nThen we have\n\\begin{eqnarray*}\n A(n)=\\b_nA(n-1)-\\c_nA(n-2),&& A(-1)=1,\\ A(0)=\\b_0; \\\\\n B(n)=\\b_nB(n-1)-\\c_nB(n-2),&& B(-1)=0,\\ B(0)=1\n\\end{eqnarray*}\nby the fundamental recurrence formula for continued fractions\n(see \\cite[Theorem 9.2]{Ela05} for instance).\nTo show that the continued fraction \\eqref{cf-0} is convergent,\nit suffices to show that $A(n)\/B(n)$ is convergent.\n\nFor $n\\ge i\\ge 0$, denote\n$$u_{i,n}=\n\\det\\left(\n \\begin{array}{ccccc}\n \\b_i & \\c_{i+1} & & & \\\\\n 1 & \\b_{i+1} & \\c_{i+2} & & \\\\\n & 1 & \\b_{i+2} & \\ddots & \\\\\n & &\\ddots &\\ddots & \\c_{n}\\\\\n & & & 1 & \\b_{n}\\\\\n \\end{array}\n \\right).\n$$\nIf $J_0$ is TN, then so is $J_i$ for each $i\\ge 0$.\nThus $u_{i,n}>0$ by Lemma \\ref{irr-tp}.\n\nApplying the Desnanot-Jacobi determinant identity to the determinant $u_{i,n+1}$,\nwe obtain\n$$u_{i,n+1}u_{i+1,n}=u_{i+1,n+1}u_{i,n}-\\c_{i+1}\\cdots \\c_n.$$\nIt follows that $u_{i,n+1}u_{i+1,n}0$ for $i\\ge 0$.\nDenote\n\\begin{equation}\\label{ri}\n\\rho_i=\\frac{\\c_{i+1}}{\\b_{i+1}-}\\;\\frac{\\c_{i+2}}{\\b_{i+2}-}\\;\\frac{\\c_{i+3}}{\\b_{i+3}-}\\;\\frac{\\c_{i+4}}{\\b_{i+4}-}\\;\\cdots.\n\\end{equation}\nThen $\\rho_i=\\frac{\\c_{i+1}}{\\ell_{i+1}}$.\nThus $\\rho_i>0$ for $i\\ge 0$.\nOn the other hand,\n$\\ell_i=\\b_i-\\rho_i$.\nHence $\\b_0\\ge\\rho_0$ and $\\b_{i+1}>\\rho_{i+1}$ for $i\\ge 0$.\n\\end{rem}\n\nThe following classic result was given by Pincherle in his fundamental work on continued fractions\n(see \\cite[Theorem 9.5]{Ela05} for instance).\n\n\\begin{PiT}\nThe continued 
fraction\n$$\\frac{\\c_1}{\\b_1-}\\;\\frac{\\c_2}{\\b_2-}\\;\\frac{\\c_3}{\\b_3-}\\;\\frac{\\c_4}{\\b_4-}\\;\\cdots$$\nconverges if and only if\nthe difference equation $u_{n+1}=\\b_nu_n-\\c_nu_{n-1}$\nhas a minimal solution $u^*_n$ with $u^*_0=1$.\nIn case of convergence, moreover, one has\n\\begin{equation*}\\label{cf-n}\n\\frac{u^*_{n+1}}{u^*_{n}}=\n\\frac{\\c_{n+1}}{\\b_{n+1}-}\\;\\frac{\\c_{n+2}}{\\b_{n+2}-}\\;\\frac{\\c_{n+3}}{\\b_{n+3}-}\\;\\frac{\\c_{n+4}}{\\b_{n+4}-}\\;\\cdots.\n\\end{equation*}\n\\end{PiT}\n\n\n\\begin{proof}[\\bf Proof of Theorem \\ref{nc-pp} (ii)]\n\nLet $(u_n)_{n\\ge 0}$ be a positive solution of the difference equation\n$u_{n+1}=\\b_n u_n-\\c_n u_{n-1}$ and $\\b_0=u_1\/u_0$.\nThen the tridiagonal matrix $J_0$ is totally nonnegative.\nBy Remark \\ref{rem-cf}, we have $\\b_0\\ge\\rho_0>0$,\nand so $u_1\\ge\\rho_0 u_0$.\n\nOn the other hand,\nwe have $u^*_{n+1}=\\rho_{n}u^*_{n}$ by Lemma \\ref{td-cf} and Pincherle Theorem,\nand $\\rho_n>0$ again by Remark \\ref{rem-cf}.\nThus the solution $(u^*_n)_{n\\ge 0}$ of \\eqref{u-rr} decided by $u^*_0=1$ and $u^*_1=\\rho_0$\nis a positive and minimal solution of \\eqref{u-rr}.\nThis completes the proof of Theorem \\ref{nc-pp} (ii).\n\\end{proof}\n\n\\section{Proofs and applications of Theorems \\ref{sc-epp} and \\ref{sc-pp}\n\nWe say that \\eqref{u-rr} is a difference equation of {\\it Poincar\\'e type}\nin the sense that both the sequences $b(n)\/a(n)$ and $c(n)\/a(n)$ have finite limit.\nThe following Poincar\\'e theorem\nmarks the beginning of research in the qualitative theory of linear difference equations\n(see \\cite[Theorem 8.9]{Ela05} for instance).\n\n\\begin{PT}\nSuppose that \\eqref{u-rr} is a difference equation of Poincar\\'e type\nand that the characteristic roots have distinct moduli.\nIf $u_n$ is a solution of \\eqref{u-rr},\nthen either $u_n=0$ for all large $n$, or\n$\\lim_{n\\rightarrow +\\infty}\\frac{u_{n+1}}{u_n}=\\la_i$\nfor some characteristic root $\\la_i$.\n\\end{PT}\n\n\\begin{proof}[\\bf Proof of Theorem \\ref{sc-epp}]\nBy Poincar\\'e theorem,\n$u_{n+1}\/u_n\\rightarrow \\la_i$ for some $i$.\nNow $0<\\la_1<\\la_2$.\nHence there exists a positive integer $N$ such that $u_{n+1}\/u_n>0$ for $n\\ge N$.\nThe sequence $(u_n)$ is therefore sign-definite.\n\\end{proof}\n\n\\begin{proof}[\\bf Proof of Theorem \\ref{sc-pp}]\nAssume that $u_n\\ge \\la_0u_{n-1}>0$.\nThen by \\eqref{u-rr},\n$$u_{n+1}=\\frac{b(n)}{a(n)}u_n-\\frac{c(n)}{a(n)}u_{n-1}\n\\ge\\frac{b(n)}{a(n)}u_n-\\frac{c(n)}{a(n)}\\frac{u_n}{\\la_0}\n=\\left[\\frac{b(n)\\la_0-c(n)}{a(n)\\la_0}\\right]u_n\n\\ge\\la_0u_n>0.$$\nThus $(u_n)_{n\\ge m}$ is positive by induction.\n\\end{proof}\n\nA particular interest special case of Theorem \\ref{sc-pp} is the following.\n\n\\begin{coro}\\label{=1}\nIf $b(n)\\ge a(n)+c(n)$ for all $n\\ge 1$ and $u_1\\ge u_0>0$,\nthen $(u_n)_{n\\ge 0}$ is positive.\n\\end{coro}\n\\begin{proof}\nThe statement follows from Theorem \\ref{sc-pp} by taking $\\la_0=1$.\n\\end{proof}\n\nA preferred candidate for $\\la_0$ in Theorem \\ref{sc-pp} is $\\la_1$.\nNote that\n$Q_n(\\la_1)$ is a polynomial in $n$ of degree less than $\\delta$ and is easier to estimate.\n\n\\begin{coro}\\label{1-rr}\nSuppose that\n$(an+a_0)u_{n+1}=(bn+b_0)u_{n}-(cn+c_0)u_{n-1}$.\nThen $(u_n)_{n\\ge 0}$ is positive if\n$b^2\\ge 4ac$,\n$a_0\\la_1^2-b_0\\la_1+c_0\\le 0$, and\n$u_1\\ge\\la_1 u_0$.\n\\end{coro}\n\\begin{proof}\nWe have $Q_n(\\la_1)=a_0\\la_1^2-b_0\\la_1+c_0$,\nand so the statement follows from Theorem \\ref{sc-pp} by taking 
$\\la_0=\\la_1$.\n\\end{proof}\n\nThe following folklore result is an immediate consequence of\nTheorem \\ref{nc-pp} and Theorem \\ref{sc-pp},\nwhich can be found in \\cite{HHH06} for instance.\n\n\\begin{coro}\\label{cont}\nSuppose that $au_{n+1}=bu_n-cu_{n-1}$,\nwhere $a,b,c$ are positive numbers.\nThen $(u_n)_{n\\ge 0}$ is positive\nif and only if $b^2\\ge 4ac$ and $u_1\\ge\\la_1u_0>0$.\n\\end{coro}\n\\begin{proof}\nThe ``if\" part follows from Theorem \\ref{sc-pp}.\nNow assume that $(u_n)_{n\\ge 0}$ is positive.\nThen $b^2\\ge 4ac$ and $u_1\\ge\\rho u_0$ from Theorem \\ref{nc-pp},\nwhere $\\b=b\/a,\\c=c\/a$ and\n$$\\rho=\n\\frac{\\c}{\\b-}\\;\\frac{\\c}{\\b-}\\;\\frac{\\c}{\\b-}\\;\\frac{\\c}{\\b-}\\;\\cdots.$$\nIt follows that $\\rho=\\frac{\\beta-\\sqrt{\\beta^2-4\\gamma}}{2}=\\frac{b-\\sqrt{b^2-4ac}}{2a}=\\la_1$.\nThis completes the proof of the ``only if\" part.\n\\end{proof}\n\n\n\\begin{exm}\nLet $b,c>0$ and consider the rational function\n$$\\frac{1}{1-bx+cx^2}=\\sum_{n\\ge 0}u_nx^n.$$\nThen $u_0=1,u_1=b$ and $u_{n+1}=bu_n-cu_{n-1}$.\nThus all $u_n$ are positive\nif and only if $b^2\\ge 4c$,\na folklore result.\n\nSimilarly, let $b,c,d>0$ and\n$$\\frac{1-dx}{1-bx+cx^2}=\\sum_{n\\ge 0}u_nx^n.$$\nThen $u_0=1,u_1=b-d$ and $u_{n+1}=bu_n-cu_{n-1}$.\nThus all $u_n$ are positive\nif and only if $b^2\\ge 4c$ and\n$d\\le (b+\\sqrt{b^2-4c})\/2$.\n\\end{exm}\n\n\n\\begin{exm}\\label{dtc-exm}\nThe question of determining whether the Taylor coefficients of a given rational function are all positive\nhas been investigated by many authors \\cite{Ask74,AG72,Kau07,Pil19,SS14,Str08,SZ15}.\nIn order to show the positivity of the rational functions,\nit is necessary, and in some cases even sufficient, to prove that the diagonal Taylor coefficients are positive.\nThe diagonal coefficients of some important rational functions\nare arithmetically interesting sequences and satisfy three-term recurrence relations.\nStraub and Zudilin \\cite{SZ15} showed that these diagonal coefficients are positive\nby expressing them in terms of known hypergeometric summations.\nHere we show their positivity from the viewpoint of three-term recurrence sequences.\n\n(1)\\quad\nConsider the rational function\n\\begin{equation}\\label{exm-a}\n\\frac{1}{1-(x+y)+axy}=\\sum_{n,m\\ge 0}u_{n,m}x^ny^m.\n\\end{equation}\nThe diagonal terms $u_n:=u_{n,n}$ of the Taylor expansion\nsatisfy the recurrence relation\n$$(n+1)u_{n+1}=(2-a)(2n+1)u_n-a^2nu_{n-1}$$\nwith $u_0=1$ and $u_1=2-a$.\nThe characteristic polynomial is $Q(\\la)=\\la^2-2(2-a)\\la+a^2$ and the discriminant is $\\Delta=16(1-a)$.\nIf $(u_n)$ is positive, then $\\Delta\\ge 0$ by Theorem \\ref{nc-pp}, i.e., $a\\le 1$.\nConversely, if $a\\le 1$, then $\\la_1=2-a-2\\sqrt{1-a}$.\nClearly, $u_1=2-a\\ge\\la_1=\\la_1 u_0$ and\n$Q_n(\\la_1)\n=\\la_1[\\la_1-(2-a)]\\le 0$.\nIt follows that $(u_n)_{n\\ge 0}$ is positive from Theorem \\ref{sc-pp} by taking $\\la_0=\\la_1$.\nThus we conclude that $(u_n)_{n\\ge 0}$ is positive if and only if $a\\le 1$.\nIt is also known that\n$$u_n=\\sum_{k=0}^{n}\\frac{(2n-k)!}{k!(n-k)!^2}(-a)^k.$$\nThe positivity is not apparent when $0<a\\le 1$.\n\n(2)\\quad\nFor the diagonal sequence $(s_n)_{n\\ge 0}$ we have $s_2>\\la_1 s_1$ and\n$$Q_n(\\la_1)=2(2n+1)\\la_1^2-3(27n+8)\\la_1-81=-\\frac{729}{2}n-\\frac{81}{2}\n<0$$\nfor $n\\ge 1$.\nThe positivity of $(s_n)_{n\\ge 1}$ follows from Theorem \\ref{sc-pp} by taking $\\la_0=\\la_1$.\nThus the total sequence $(s_n)_{n\\ge 0}$ is positive.\n\n(3)\\quad\nConsider the Lewy-Askey rational function\n$$h(x,y,z,w)=\\dfrac{1}{1-(x+y+z+w)+\\frac{2}{3}(xy+xz+xw+yz+yw+zw)}.$$\nLet $t_n=9^n[(xyzw)^n]h(x,y,z,w)$ and write $t_n=\\binom{2n}{n}h_n$.\nThen\n
$h_0=1,h_1=24$ and\n$$3(n+1)^2h_{n+1}=4(28n^2+28n+9)h_n-64(4n-1)(4n+1)h_{n-1}.$$\nThe characteristic equation $3\\la^2-112\\la+1024=0$ has two roots\n$\\la_1=16$ and $\\la_2=64\/3$.\nAlso,\n$$Q_n(\\la_1)=3(2n+1)\\la_1^2-4(28n+9)\\la_1-64=-256(n-1)-128<0$$\nfor $n\\ge 1$, and $h_1>\\la_1 h_0$.\nThe positivity of $(h_n)_{n\\ge 0}$ follows from Theorem \\ref{sc-pp} by taking $\\la_0=\\la_1$.\n\n(4)\\quad\nConsider the Kauers-Zeilberger rational function\n$$D(x,y,z,w)=\\dfrac{1}{1-(x+y+z+w)+2(xyz+xyw+xzw+yzw)+4xyzw}.$$\nLet $d_n=[(xyzw)^n]D(x,y,z,w)$.\nThen\n$$(n+1)^3d_{n+1}=4(2n+1)(3n^2+3n+1)d_n-16n^3d_{n-1}$$\nwith $d_0=1$ and $d_1=4$.\nThe characteristic equation $\\la^2-24\\la+16=0$ has two roots\n$\\la_1=12-8\\sqrt{2}<1<\\la_2=12+8\\sqrt{2}$.\nThe positivity of $(d_n)_{n\\ge 0}$ follows from Theorem \\ref{sc-pp} by taking $\\la_0=1$\nsince $b(n)\\ge a(n)+c(n)$.\n\\end{exm}\n\n\\begin{rem}\\label{aln}\nThe Ap\\'ery numbers\n$$A_n=\\sum_{k=0}^n \\binom{n}{k}^2\\binom{n+k}{k}^2$$\nplay an important role in Ap\\'ery's proof of the irrationality of $\\zeta(3)=\\sum_{n\\ge 1}1\/n^3$.\nThe Ap\\'ery numbers are diagonal Taylor coefficients of the rational function\n$\n\\frac{1}{1-(xyzw+xyw+xy+xz+zw+y+z)}$$\nand satisfy the three-term recurrence relation\n\\begin{equation}\\label{a-33}\n(n+1)^3A_{n+1}=(2n+1)(17n^2+17n+5)A_n-n^3A_{n-1}.\n\\end{equation}\nWe refer the reader to \\cite[A005259]{Slo} and references therein for the Ap\\'ery numbers.\nThe Ap\\'ery numbers are closely related to modula forms or supercongruences\nand have been generalized to various Ap\\'ery-like numbers,\nwhich satisfy three-term recurrence relations similar to \\eqref{a-33}\n(see \\cite{Coo12,MS16} for instance).\nThe diagonal terms $s_n,h_n$ and $d_n$ in Example \\ref{dtc-exm} are all Ap\\'ery-like numbers.\nNot all Ap\\'ery-like numbers are positive.\nFor example, consider the Ap\\'ery-like numbers $(u_n)_{n\\ge 0}$ defined by\n$$u_n=\\sum_{k=0}^{\\lrf{n\/3}}\n(-1)^k3^{n-3k}\\binom{n}{3k}\\frac{(3k)!}{k!^3},$$\nwhich are diagonal Taylor coefficients of the rational function\n$$\\frac{1}{1+x^3+y^3+z^3-3xyz}$$\nand satisfy the recurrence relation\n$$(n+1)^2u_{n+1}=(9n^2+9n+3)u_n-27n^2u_{n-1}.$\nSee \\cite[A006077]{Slo} and references therein.\nNote that the discriminant of the characteristic equation $\\la^2-9\\la+27=0$ is negative.\nHence the sequence $(u_n)$ is oscillatory by Theorem \\ref{nc-pp} (i).\n\\end{rem}\n\nA sequence $(u_n)_{n\\ge 0}$ of positive numbers is said to be {\\it log-convex}\nif $u_{n-1}u_{n+1}\\ge u_n^2$ for all $n\\ge 1$.\nThe log-convexity of combinatorial sequences have been extensively investigated\n(see \\cite{LW07lcx} for instance).\nHere we present a new criterion,\nwhich can be simultaneously used for the positivity and log-convexity of three-term recurrence sequences.\n\nDenote\n$$B(n)=\\left|\n \\begin{array}{cc}\n b(n+1) & b(n)\\\\\n a(n+1) & a(n)\\\\\n \\end{array}\n \\right|=Bn^{2\\delta-2}+\\cdots,\\quad\nC(n)=\\left|\n \\begin{array}{cc}\n c(n+1) & c(n)\\\\\n a(n+1) & a(n)\\\\\n \\end{array}\n \\right|=Cn^{2\\delta-2}+\\cdots\n$$\nwhere\n$$B=\\left|\\begin{array}{cc}\nb & b'\\\\\na & a'\\\\\n\\end{array}\n\\right|=ba'-b'a,\\quad\nC=\\left|\\begin{array}{cc}\nc & c'\\\\\na & a'\\\\\n\\end{array}\n\\right|=ca'-c'a.$$\n\n \n \n\n\\begin{prop}[Log-convexity]\n\\label{lcx}\nLet $(u_n)_{n\\ge0}$ be a sequence satisfying the recurrence relation (\\ref{u-rr}).\n \n \n \nSuppose that $B,C>0$ and let $\\la_0=C\/B$.\n\\begin{enumerate}[\\rm (i)]\n \\item Assume that $u_1\\ge\\la_0 u_0>0$ and 
$Q_n(\\la_0)\\le 0$ for $n\\ge 1$.\n Then the sequence $(u_n)_{n\\ge 0}$ is positive.\n \\item Assume that the sequence $(u_n)_{n\\ge 0}$ is positive and $CB(n)\\ge BC(n)\\ge 0$ for $n\\ge 1$.\n If $u_{2}\/u_{1}\\ge u_{1}\/u_0\\ge\\la_0$,\n then the sequence $(u_n)_{n\\ge 0}$ is log-convex.\n\\end{enumerate}\n\\end{prop}\n\\begin{proof\n(i)\\quad\nThe positivity of $(u_n)_{n\\ge 0}$ is obvious by Theorem \\ref{sc-pp}.\n\n(ii)\\quad\nLet $x_n={u_{n+1}}\/{u_n}$ for $n\\ge 0$.\nThen $(u_n)_{n\\ge 0}$ is log-convex if and only if $(x_n)_{n\\ge 0}$ is nondecreasing.\nWe next show that $x_{n+1}\\ge x_{n}\\ge\\la_0$ for $n\\ge 0$.\nWe proceed by induction on $n$.\nClearly, $x_1\\ge x_0\\ge\\la_0$.\nAssume now that $x_n\\ge x_{n-1}\\ge\\la_0$.\nWe need to show that $x_{n+1}\\ge x_n\\ge\\la_0$.\n\nBy the recurrence relation \\eqref{u-rr}, we have\n\\begin{equation}\\label{xn-rr0}\nx_n=\\frac{b(n)}{a(n)}-\\frac{c(n)}{a(n)}\\frac{1}{x_{n-1}}.\n\\end{equation}\nThus\n\\begin{eqnarray}\\label{xn1}\nx_{n+1}-x_n\n&=& \\left[\\left(\\frac{b(n+1)}{a(n+1)}-\\frac{b(n)}{a(n)}\\right)\n- \\left( \\frac{c(n+1)}{a(n+1)}-\\frac{c(n)}{a(n)}\\right) \\frac{1}{x_n}\\right]\n+ \\frac{c(n)}{a(n)} \\left( \\frac{1}{x_{n-1}}-\\frac{1}{x_n} \\right)\\nonumber\\\\\n&=& \\frac{B(n)x_n-C(n)}{a(n+1)a(n)x_n}+\\frac{c(n)}{a(n)} \\left( \\frac{1}{x_{n-1}}-\\frac{1}{x_n} \\right).\n\\end{eqnarray}\nBy the assumption $x_n\\ge\\la_0$ and the condition $CB(n)\\ge BC(n)$,\nwe obtain $B(n)x_n\\ge B(n)\\la_0\\ge C(n)$.\nIt follows from \\eqref{xn1} that $x_{n+1}\\ge x_n$, as required.\nThus the sequence $(x_n)_{n\\ge 0}$ is nondecreasing,\nand the sequence $(u_n)_{n\\ge 0}$ is therefore log-convex.\n\\end{proof}\n\nBy means of Proposition \\ref{lcx},\nwe may prove that the log-convexity of the diagonal terms $s_n,h_n,d_n$ in Example \\ref{dtc-exm},\nas well as the Ap\\'ery numbers $A_n$.\nWe omit the proofs for brevity.\nInstead\nwe give a somewhat more complex example to illustrate Proposition \\ref{lcx}.\n\n\\begin{exm}\nConsider the Ap\\'ery-like numbers $(u_n)_{n\\ge 0}$ defined by\n\\begin{equation}\\label{s18}\n(n+1)^3u_{n+1}=(2n+1)(14n^2+14n+6)u_n-n(192n^2-12)u_{n-1}\n\\end{equation}\nwith $u_0=1$ and $u_1=6$.\nSuch Ap\\'ery-like numbers are introduced by Cooper in \\cite{Coo12}.\nIt is known that\n$$u_n=\n\\sum_{k=0}^{\\lrf{n\/3}}\n(-1)^k\\binom{n}{k}\\binom{2k}{k}\\binom{2(n-k)}{n-k}\\left[\\binom{2n-3k-1}{n}+\\binom{2n-3k}{n}\\right].$$\nWe next apply Proposition \\ref{lcx} to obtain the positivity and log-convexity simultaneously.\n\nWe have\n$$B(n)=42n^4+200n^3+330n^2+220n+54$$\nand\n$$C(n)=576n^4+2328n^3+2952n^2+1200n+180.$$\nHence $B=42, C=576$ and\n$$CB(n)-BC(n)=17424n^3+66096n^2+76320n+23544$$\nfor $n\\ge 1$.\nOn the other hand,\n$\\la_0=C\/B=96\/7$ and\n$$Q_n(\\la_0)=\\frac{12}{49}(-16n^3-48n^2+799n+432)<0$$\nfor $n\\ge 7$.\nAlso, $u_{12}\/u_{11}\\ge u_{11}\/u_{10}>\\la_0$.\nThe sequence $(u_n)_{n\\ge 10}$ is therefore positive and log-convex by Proposition \\ref{lcx}.\nIt is not difficult to check that $(u_n)_{0\\le n\\le 11}$ is also positive and log-convex.\nThus the total sequence $(u_n)_{n\\ge 0}$ is positive and log-convex.\n\\end{exm}\n\nWe also refer the interested reader to\n\\cite{XY13} for the log-convexity of three-term recursive sequences and\n\\cite{HZ19} for the asymptotic log-convexity of $P$-recursive sequences.\n\n\\section{Concluding remarks and further work}\n\nWe have seen that\nif $b^2<4ac$,\nthen the difference equation \\eqref{u-rr} is oscillatory;\nand if $b^2>4ac$,\nthen the difference 
equation \\eqref{u-rr} is nonoscillatory.\nIn the case $b^2=4ac$,\nthe asymptotic behavior of solutions of the second-order difference equations can be very complicated.\nThe interested reader is referred to Wong and Li \\cite{WL92a,WL92b}.\nHere we illustrate that the difference equation \\eqref{u-rr} may be either oscillatory or nonoscillatory.\n\n\\begin{exm}\\label{D=0}\nConsider the difference equation\n\\begin{equation}\\label{Lnx}\n(n+1)L_{n+1}(x)=(2n+1-x)L_n(x)-nL_{n-1}(x).\n\\end{equation}\nClearly, the corresponding discriminant $b^2-4ac=0$.\n\nWhen $x=0$, we have $(n+1)L_{n+1}(0)=(2n+1)L_n(0)-nL_{n-1}(0)$.\nEvery solution of this difference equation is nonoscillatory.\nActually, solve the difference equation to obtain\n\\begin{equation}\\label{Ln0}\nL_n(0)=\\left(1+\\frac{1}{2}+\\cdots+\\frac{1}{n}\\right)(L_1(0)-L_0(0))+L_0(0).\n\\end{equation}\nRecall that $1+\\frac{1}{2}+\\cdots+\\frac{1}{n}\\sim \\ln n+\\gamma$,\nwhere $\\gamma$ is the Euler constant.\nHence if $L_1(0)L_0(0)$, then $L_n(0)$ is eventually positive.\nIn case of positive,\nit immediately follows from \\eqref{Ln0} that the sequence $(L_n(0))$ is concave, and therefore log-concave.\n\n\nWhen $x=1$, we have\n\\begin{equation}\\label{Ln1-rr}\n(n+1)L_{n+1}(1)=2nL_n(1)-nL_{n-1}(1).\n\\end{equation}\nWe next show that every solution of the difference equation \\eqref{Ln1-rr} is oscillatory.\n\nSuppose the contrary and let $L_n$ be an eventually positive solution of \\eqref{Ln1-rr}.\nWe may assume, without loss of generality, that\n$L_n>0$ for all $n\\ge 0$.\nLet $x_n=L_{n+1}\/L_n$ for $n\\ge 0$.\nThen\n\\begin{equation}\\label{xn-rr}\n x_n=\\frac{2n}{n+1}-\\frac{n}{n+1}\\frac{1}{x_{n-1}}=\\frac{n}{n+1}\\left(2-\\frac{1}{x_{n-1}}\\right).\n\\end{equation}\nNote that $x_1=1-\\frac{1}{2x_0}<1$.\nAssume that $x_{n-1}<1$.\nThen $x_n<\\frac{2n}{n+1}-\\frac{n}{n+1}=\\frac{n}{n+1}<1$ by \\eqref{xn-rr}.\nThus $x_n<1$ for all $n\\ge 1$.\nOn the other hand, since $a+1\/a\\ge 2$ for $a>0$, we have\n$$x_n=\\frac{n}{n+1}\\left(2-\\frac{1}{x_{n-1}}\\right)\\le\\frac{n}{n+1}x_{n-1}0}$ and $\\mathbb{R}_{\\ge 0}$ denote the natural numbers, reals, positive, and nonnegative reals, respectively. We write $\\alpha\\in\\mathcal{K}$ if $\\alpha:\\mathbb{R}_{\\ge 0} \\to \\mathbb{R}_{\\ge 0}$ is continuous, strictly increasing and $\\alpha(0)=0$, and $\\alpha\\in\\K_\\infty$ if, in addition, $\\alpha$ is unbounded. We write $\\beta\\in\\mathcal{KL}$ if $\\beta:\\mathbb{R}_{\\ge 0}\\times \\mathbb{R}_{\\ge 0}\\to \\mathbb{R}_{\\ge 0}$, $\\beta(\\cdot,t)\\in\\K_\\infty$ for any $t\\ge 0$ and, for any fixed $r\\ge 0$, $\\beta(r,t)$ monotonically decreases to zero as $t\\to \\infty$. 
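To fix ideas with these comparison-function classes, the following minimal Python sketch (ours, purely illustrative; the particular functions $\\alpha(r)=r^2$ and $\\beta(r,t)=r^2e^{-t}$ are arbitrary admissible choices) checks the defining $\\K_\\infty$ and $\\mathcal{KL}$ properties numerically on a grid.
\\begin{verbatim}
import math

def alpha(r):                      # a K-infinity function: continuous,
    return r * r                   # strictly increasing, alpha(0)=0, unbounded

def beta(r, t):                    # a KL function: beta(., t) in K-infinity for
    return r * r * math.exp(-t)    # each t, and beta(r, .) decreasing to zero

rs = [0.5 * i for i in range(1, 11)]
ts = [1.0 * j for j in range(0, 11)]
assert alpha(0.0) == 0.0
assert all(alpha(a) < alpha(b) for a, b in zip(rs, rs[1:]))
assert all(beta(a, t) < beta(b, t) for t in ts for a, b in zip(rs, rs[1:]))
assert all(beta(r, t2) < beta(r, t1) for r in rs for t1, t2 in zip(ts, ts[1:]))
print("sample K-infinity and KL properties hold on the grid")
\\end{verbatim}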
The single bars $|\\cdot|$ denote the Euclidean norm in $\\mathbb{R}^n$.\n\n\\section{Preliminaries}\n\\label{sec:preliminaries}\n\\subsection{Time-varying systems with inputs}\nIn order to make our results applicable to different classes of time-varying systems with inputs, for example those described by ordinary differential equations (ODE) with or without impulse effects, retarded differential equations (RDE), semilinear differential equations (SDE) and switched systems, among others, we consider the following general definition of time-varying system with inputs.\n\\begin{defin} \\label{def:system}\nConsider the triple $\\Sigma=\\left(\\mathcal{X} ,\\mathcal{U} ,\\phi \\right)$ consisting of\n\\begin{enumerate}[1.-]\n\\item A normed vector space $\\left (\\mathcal{X},\\|\\cdot\\|_{\\mathcal{X}}\\right )$, which we call the state space.\n\\item A set of admissible inputs $\\mathcal{U}=\\{u:\\mathbb{R}_{\\ge 0}\\to \\mathsf{U}\\}$, where $\\mathsf{U}$ is a vector space of input values, which satisfies:\n\\begin{enumerate}\n\\item The zero input belongs to $\\mathcal{U}$, i.e. $\\mathbf{0}\\in\\mathcal{U}$ with $\\mathbf{0}:\\mathbb{R}_{\\ge 0}\\to \\mathsf{U}$ such that $\\mathbf{0}(t)\\equiv 0$.\n\\item If $u,v\\in \\mathcal{U}$ and $t>0$, then the concatenation $u\\sharp_t v\\in \\mathcal{U}$, where\n\\begin{align*}\nu\\sharp_t v(\\tau)=\\begin{cases} u(\\tau)\\quad \\tau \\le t, \\\\\nv(\\tau)\\quad \\tau > t.\n\\end{cases}\n\\end{align*}\n\\end{enumerate}\n\\item A transition map $\\phi:D_{\\phi}\\rightarrow \\mathcal{X}$, with $D_{\\phi}\\subset \\{(t,s,x,u):t\\ge s\\ge 0, x\\in \\mathcal{X}, u\\in \\mathcal{U}\\}$, such that for all $s\\ge 0$, $x\\in \\mathcal{X}$ and $u\\in \\mathcal{U}$,\n$\\{t\\in \\mathbb{R}_{\\ge 0}:(t,s,x,u)\\in D_{\\phi}\\}=[s,t_{(s,x,u)})$ with $ss$, if $v\\in \\mathcal{U}$ satisfies $v(\\tau)=u(\\tau)$ for all $\\tau \\in (s,t]$, then $(t,s,x,v) \\in D_{\\phi}$ and $\\phi(t,s,x,v)=\\phi(t,s,x,u)$. \n\\item[$(\\Sigma3)$]\\underline{Semigroup}: for all $(t,s,x,u) \\in D_{\\phi}$ with $ss$ then $\\phi(\\tau,s,x,u_{(s,t]})=\\phi(\\tau,s,x,u_{(s,\\infty)})=\\phi(\\tau,s,x,u)$ for all $\\tau \\in (s,t]$.\n\n\n\\subsection{Stability definitions}\nThe stability properties considered next are straightforward extensions of those defined for specific classes of systems with inputs. The set of admissible inputs $\\mathcal{U}$ is assumed to be endowed with a nonnegative admissible functional, defined as follows.\n\\begin{defin} \\label{def:admissible-f}\n The functional $\\|\\cdot\\|_{\\mathcal{U}}:\\mathcal{U}\\to \\mathbb{R}_{\\ge 0}\\cup \\{\\infty\\}$ is said to be \\emph{admissible} if it satisfies the following conditions:\n\\begin{enumerate}[a)]\n\\item \\label{item:zero-input prop} $\\|\\mathbf{0}\\|_{\\mathcal{U}}=0$;\n\\item \\label{item: monotonicity} for all $0\\le s0$, $\\|u\\|_{\\kappa,T}:=\\sup_{t\\ge 0}\\|u_{(t,t+T]}\\|_{\\kappa}$.\n\\end{enumerate}\n\\end{defin}\n\nThe following stability properties are extensions to time-varying systems of some of those in \\cite{Mironchenko2018}. Since $\\|u\\|_{\\mathcal{U}}=\\infty$ may be true for some inputs $u\\in \\mathcal{U}$, we adopt the convention $\\rho(\\infty)=\\infty$ for any function $\\rho \\in \\K_\\infty$. \n\\begin{defin}\\label{def:stability-def}\nLet $\\Sigma$ be a system with inputs and let $\\|\\cdot\\|_{\\mathcal{U}}$ be an admissible functional. 
Then\n\\begin{enumerate}[a)]\n\\item \\label{item:0-GUAS}$\\Sigma$ is zero-input globally uniformly asymptotically stable (0-GUAS) if there exists $\\beta \\in \\mathcal{KL}$ such that for all $t_0\\ge 0$ and $x_0\\in \\mathcal{X}$, the trajectory $x(t)=\\phi(t,t_0,x_0,\\mathbf{0})$ is defined for all $t\\ge t_0$ and satisfies\n\\begin{align} \\label{eq:0-guas}\n\\|x(t)\\|_{\\mathcal{X}}\\le \\beta(\\|x_0\\|_{\\mathcal{X}} ,t-t_0) \\quad \\forall t\\ge t_0.\n\\end{align}\n\\item \\label{item:ISS}$\\Sigma$ is input-to-state stable (ISS) with respect to the admissible functional $\\|\\cdot\\|_{\\mathcal{U}}$, abbreviated $\\|\\cdot\\|_{\\mathcal{U}}$-ISS, if $\\Sigma$ is forward complete and there exist $\\rho\\in \\K_\\infty$ and $\\beta \\in \\mathcal{KL}$ such that for all $t_0\\ge 0$, $x_0\\in \\mathcal{X}$ and $u\\in \\mathcal{U}$, the corresponding trajectory $x(\\cdot)$ of $\\Sigma$ satisfies\n\\begin{align} \\label{eq:iss}\n\\|x(t)\\|_{\\mathcal{X}}\\le \\beta(\\|x_0\\|_{\\mathcal{X}},t-t_0)+\\rho(\\|u\\|_{\\mathcal{U}}) \\quad \\forall t\\ge t_0.\n\\end{align}\n\\item \\label{item:UGB}$\\Sigma$ is uniformly globally bounded (UGB) with respect to the admissible functional $\\|\\cdot\\|_{\\mathcal{U}}$, abbreviated $\\|\\cdot\\|_{\\mathcal{U}}$-UGB, if $\\Sigma$ is forward complete and there exist $\\alpha, \\rho\\in \\K_\\infty$ and $c\\ge 0$ such that for all $t\\ge t_0\\ge 0$, $x_0\\in \\mathcal{X}$ and $u\\in \\mathcal{U}$, the corresponding trajectory $x(\\cdot)$ of $\\Sigma$ satisfies\n\\begin{align} \\label{eq:ugb}\n\\|x(t)\\|_{\\mathcal{X}}\\le c+\\alpha(\\|x_0\\|_{\\mathcal{X}})+\\rho(\\|u\\|_{\\mathcal{U}})\\quad \\forall t\\ge t_0.\n\\end{align}\n\\item \\label{item:UGS}$\\Sigma$ is uniformly globally stable (UGS) with respect to the admissible functional $\\|\\cdot\\|_{\\mathcal{U}}$, abbreviated $\\|\\cdot\\|_{\\mathcal{U}}$-UGS, if it is $\\|\\cdot\\|_{\\mathcal{U}}$-UGB and (\\ref{eq:ugb}) holds with $c=0$.\n\\end{enumerate} \n\\end{defin}\nThe word ``uniformly'' in Defintion~\\ref{def:stability-def}~\\ref{item:0-GUAS}), \\ref{item:UGB})~and~\\ref{item:UGS}) involves uniformity both with respect to the state and with respect to initial time. For conciseness, we avoid the use of a double `U' and use `0-GUAS' instead of the `0-UGAS' used to denote uniformity with respect to the state in, e.g. \\cite{mironc_scl16}. \nWhenever the admissible functional $\\|\\cdot\\|_{\\mathcal{U}}$ is clear from the context, we may remove the prefix $\\|\\cdot\\|_{\\mathcal{U}}$ and simply refer to the ISS, UGB or UGS properties.\n\nNote the following:\n\\begin{enumerate}[i)]\n\\item \\label{item:truncateu}Due to causality ($\\Sigma 2$) and Definition~\\ref{def:admissible-f}\\ref{item: monotonicity}), replacing $\\|u\\|_{\\mathcal{U}}$ by $\\|u_{(t_0,t]}\\|_{\\mathcal{U}}$ or $\\|u_{(t_0,\\infty)}\\|_{\\mathcal{U}}$ in (\\ref{eq:iss}) and (\\ref{eq:ugb}), equivalent definitions of ISS and UGB (or UGS), respectively, are obtained.\n\\item Since (\\ref{eq:iss}) and (\\ref{eq:ugb}) are trivially satisfied when $\\|u\\|_{\\mathcal{U}}=\\infty$, no loss of generality is incurred if only inputs belonging to $\\mathcal{U}_F$ are considered in the definitions of ISS and UGB.\n\\item When the system $\\Sigma$ satisfies the boundedness-implies-continuation (BIC) property, {\\em i.e.} when $\\phi(\\cdot,s,x,u)$ being bounded on $[s,t_{(s,x,u)})$ implies that $t_{(s,x,u)}=\\infty$, then the forward completeness requirement can be removed from Definition~\\ref{def:stability-def}. 
This happens because, since from item~\\ref{item:truncateu}) above $\\|u\\|_{\\mathcal{U}}$ can be replaced by $\\|u_{(t_0,t]}\\|_{\\mathcal{U}}$ and $\\|u_{(t_0,t]}\\|_{\\mathcal{U}} < \\infty$ from Definition~\\ref{def:admissible-f}\\ref{item: monotonicity}), then the satisfaction of (\\ref{eq:iss}) or (\\ref{eq:ugb}) for all $t \\in [s,t_{(s,x,u)})$ and all $s\\ge 0$, $x\\in \\mathcal{X}$ and $u\\in \\mathcal{U}$ would imply that $t_{(s,x,u)}=\\infty$ and therefore that $\\Sigma$ is forward complete.\n\\end{enumerate}\n\nSome standard stability properties defined for specific classes of systems, such as those modelled by ODEs with or without impulse effects, RDEs or PDEs, are recovered by choosing the admissible functional $\\|\\cdot\\|_{\\mathcal{U}}$ in a suitable manner. \nFor example, for systems without impulse effects, $\\|\\cdot\\|_{\\infty}$-ISS is the standard ISS property and $\\|\\cdot\\|_{\\infty}$-UGS is the uniform bounded-input bounded-state property \\cite{bacmaz_JCO00}. Moreover, for $\\kappa \\in \\K_\\infty$, then $\\|\\cdot\\|_{\\kappa}$-ISS becomes iISS, and $\\|\\cdot\\|_{\\kappa}$-UGB and $\\|\\cdot\\|_{\\kappa}$-UGS become uniformly bounded-energy input\/bounded-state (UBEBS) and UBEBS with constant $c=0$, respectively (see \\cite{Angeli2000a, haiman_tac18, Pepe2006, chagok_tac21}). In these cases, $\\kappa$ is referred to as the iISS- or UBEBS-gain according to the considered property. Also, $\\|\\cdot\\|_{\\kappa,T}$-ISS is an extension of the p-ISS property considered in \\cite{manhai_tac17}. \nIn the case of systems with impulse effects, where the state jumps at a fixed sequence $\\lambda$ of impulse-time instants, $\\|\\cdot\\|_{\\infty,\\lambda}$-ISS, $\\|\\cdot\\|_{\\kappa,\\lambda}$-ISS, $\\|\\cdot\\|_{\\kappa,\\lambda}$-UGB and $\\|\\cdot\\|_{\\kappa,\\lambda}$-UGS become, respectively, the usual ISS, iISS, UBEBS and UBEBS with constant $c=0$ properties and in the case of the iISS and UBEBS properties $\\kappa$ is also referred to as the iISS- and UBEBS-gain \\cite{heslib_auto08, manhai_tac20, haiman_rpic19, haiman_auto20}. \n\nA common feature of $\\|\\cdot\\|_{\\kappa}$ and $\\|\\cdot\\|_{\\kappa,\\lambda}$ is that both functionals satisfy the following condition (actually with equality):\n\\begin{itemize}\n\\item[(E)] For every $u\\in \\mathcal{U}$ and $0\\le t_1< t_2 < t_3$, $\\|u_{(t_1,t_3]}\\|_{\\mathcal{U}}\\ge \\|u_{(t_1,t_2]}\\|_{\\mathcal{U}}+\\|u_{(t_2,t_3]}\\|_{\\mathcal{U}}$.\n\\end{itemize}\n\n\\begin{defin}\n \\label{def:iISS}\n Let $\\Sigma$ be a system with inputs and let $\\|\\cdot\\|_{\\mathcal{U}}$ be an admissible functional that satisfies condition (E).\n \\begin{itemize}\n \\item If $\\Sigma$ is $\\|\\cdot\\|_{\\mathcal{U}}$-ISS, then we say that $\\Sigma$ is $\\|\\cdot\\|_{\\mathcal{U}}$-iISS\n \\item If $\\Sigma$ is $\\|\\cdot\\|_{\\mathcal{U}}$-UGB, then we say that $\\Sigma$ is $\\|\\cdot\\|_{\\mathcal{U}}$-UBEBS.\n \\item If $\\Sigma$ is $\\|\\cdot\\|_{\\mathcal{U}}$-UGS, then we say that $\\Sigma$ is $\\|\\cdot\\|_{\\mathcal{U}}$-UBEBS with constant $c=0$, or just $\\|\\cdot\\|_{\\mathcal{U}}$-UBEBS0.\n \\end{itemize}\n We remove the prefix $\\|\\cdot\\|_{\\mathcal{U}}$ when this is clear from the context. 
In addition, when $\\|\\cdot\\|_{\\mathcal{U}}=\\|\\cdot\\|_{\\kappa}$ for some $\\kappa\\in \\K_\\infty$, we refer to $\\kappa$ as the iISS, UBEBS or UBEBS0 gain.\n\\end{defin}\n\n \n\\subsection{Problem statement}\nIt is clear from the very definitions that $\\|\\cdot\\|_{\\mathcal{U}}$-iISS implies $0$-GUAS and $\\|\\cdot\\|_{\\mathcal{U}}$-UBEBS. The aim of the current paper is to investigate the converse implication. \n\nConditions that ensure that 0-GUAS and UBEBS imply iISS are known for systems generated by ODEs with or without impulse effects \\cite{Angeli2000a, haiman_tac18, haiman_auto20} and for time-invariant time-delay systems \\cite{chagok_tac21}. These conditions involve assumptions on the functions appearing in the equations that define the systems. More specifically, such functions must have some type of regularity and satisfy specific bounds. These conditions suggest that, for the kind of general system considered here, the transition map $\\phi$ is required to have some specific regularity. \n \nThe following example gives some insight into the type of regularity which may be required.\n\\begin{ex} \\label{ex:regularity}\nConsider the system $\\Sigma=(\\mathcal{X},\\mathcal{U},\\phi)$ with $(\\mathcal{X},\\|\\cdot\\|_{\\mathcal{X}})=(\\mathsf{U},\\|\\cdot\\|_{\\mathsf{U}})=(\\mathbb{R},|\\cdot|)$, $\\mathcal{U}$ the set of piecewise constant functions $u:\\mathbb{R}_{\\ge 0}\\to \\mathbb{R}$ and $\\phi:D_{\\phi}\\to \\mathbb{R}$, with $D_{\\phi}=\\{(t,s) : t\\ge s\\ge 0\\}\\times \\mathbb{R} \\times \\mathcal{U}$, defined as follows. \nPick any smooth function $g:\\mathbb{R} \\to \\mathbb{R}$ such that $0\\le g(r)\\le 1$ for all $r\\in \\mathbb{R}$, $g(r)=1$ if $|r|\\le 1$ and $g(r)=0$ if $|r|\\ge 2$. For a given $(t_0,x_0,u)\\in \\mathbb{R}_{\\ge 0}\\times \\mathbb{R} \\times \\mathcal{U}$, let $x(\\cdot)$ be the unique solution of the scalar initial value problem\n\\begin{align} \\label{eq:exsys}\n\\dot x=-x+g(x)v(t),\\quad x(t_0)=x_0,\n\\end{align}\nwhere $v:\\mathbb{R}_{\\ge 0}\\to \\mathbb{R}$ is the piecewise constant function defined by $v(t)=1\/u(t)$ if $u(t)\\neq 0$ and $v(t)=0$ if $u(t)=0$. Note that $x(\\cdot)$ is defined for all $t\\ge t_0$. Then we define $\\phi(t,t_0,x_0,u)=x(t)$ for all $t\\ge t_0$. \nIt is a simple exercise to show that the triple $(\\mathcal{X},\\mathcal{U},\\phi)$ is a system with inputs according to Definition \\ref{def:system} and that it is forward complete. \n\nFor $u=\\mathbf{0}$, we have that $v=\\mathbf{0}$, and then $\\Sigma$ is $0$-GUAS since the trajectories corresponding to $u$ satisfy the equation $\\dot{x}=-x$. From the fact that $g(r)=0$ for all $|r|\\ge 2$, it follows that any solution of (\\ref{eq:exsys}) satisfies $|x(t)|\\le 2+|x_0|$ for all $t\\ge t_0$ and therefore $\\Sigma$ is $\\|\\cdot\\|_{\\kappa}$-UBEBS for any $\\kappa \\in \\K_\\infty$.\n\nNext, we will prove that $\\Sigma$ is not $\\|\\cdot\\|_{\\kappa}$-iISS for any $\\kappa \\in \\K_\\infty$. Suppose on the contrary that $\\Sigma$ is $\\|\\cdot\\|_{\\kappa}$-iISS for some $\\kappa \\in \\mathcal{K}$. Then there exist $\\beta \\in \\mathcal{KL}$ and $\\rho \\in \\K_\\infty$ so that (\\ref{eq:iss}) holds, with $\\|u\\|_{\\kappa}$ in place of $\\|u\\|_{\\mathcal{U}}$. \n\nWe claim that for every $\\delta>0$ there exists an input $u$ such that $\\|u\\|_{\\kappa}<\\delta$ and $|\\phi(t,0,0,u)|>\\frac{1}{2}$ for some $t>0$.\n\nLet $\\mu>0$ be such that $\\mu<1-e^{-1}$ and $\\kappa(\\mu)<\\delta$. Define $u(t)=\\mu$ if $t\\in [0,1]$ and $u(t)=0$ for $t>1$. 
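Before continuing, a numerical illustration of this construction may be helpful. The following Python sketch is ours and purely illustrative: it integrates (\\ref{eq:exsys}) by the explicit Euler method for the input just defined, with one concrete surrogate for $g$ ($g=1$ on $[-1,1]$, $g=0$ outside $[-2,2]$, piecewise linear in between; smoothness plays no role in the numerics). For each small $\\mu$ the state rises well above $1\/2$ on $[0,1]$, which is the behaviour exploited in the argument below.
\\begin{verbatim}
def g(r):
    # surrogate bump: 1 on [-1,1], 0 outside [-2,2], linear in between
    a = abs(r)
    if a <= 1.0:
        return 1.0
    if a >= 2.0:
        return 0.0
    return 2.0 - a

def peak_on_unit_interval(mu, h=1e-4):
    # explicit Euler for x' = -x + g(x)*(1/mu) on [0,1] with x(0) = 0,
    # i.e. the trajectory phi(t,0,0,u) for the input u = mu on [0,1]
    x, t, peak = 0.0, 0.0, 0.0
    while t < 1.0:
        x += h * (-x + g(x) / mu)
        t += h
        peak = max(peak, abs(x))
    return peak

for mu in (0.5, 0.1, 0.01):
    print(mu, peak_on_unit_interval(mu))   # the peak exceeds 1/2 in every case
\\end{verbatim}
We now return to the argument.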
Then $\\|u\\|_{\\kappa}=\\kappa(\\mu)<\\delta$. Let $x(t)=\\phi(t,0,0,u)$ and suppose that $|x(t)|\\le \\frac{1}{2}$ for all $t\\in [0,1]$. From the definition of $\\phi$ it follows that $\\dot x(t)=-x(t)+\\frac{1}{\\mu}$ for all $t\\in [0,1]$ and that $x(0)=0$. Therefore $x(t)=\\int_0^t \\frac{e^{-(t-s)}}{\\mu}ds=\\frac{1-e^{-t}}{\\mu}$. In consequence $x(1)=\\frac{1-e^{-1}}{\\mu}>1$ which is a contradiction. So, there must exist $t\\in [0,1]$ such that $|x(t)|>\\frac{1}{2}$. This proves the claim.\n\nFrom the claim it easily follows that $\\Sigma$ cannot be $\\|\\cdot\\|_{\\kappa}$-iISS, since taking $\\delta>0$ such that $\\rho(\\delta)<\\frac{1}{2}$ and $u$ and $t$ as in the claim, then~(\\ref{eq:iss}) implies that $\\frac{1}{2}<|\\phi(t,0,0,u)|\\le \\rho(\\|u\\|_{\\kappa})<\\frac{1}{2}$, which is absurd.\n\\end{ex}\n\nNote that the transition map $\\phi(t,t_0,x_0,u)$ in the preceding example is continuous in $(t,t_0,x_0)$ for any fixed $u\\in \\mathcal{U}$ but, due to the claim above, it is not continuous with respect to the input $u$ when $u$ is near the zero input $\\mathbf{0}$ (i.e. when $\\|u\\|_{\\kappa}$ is small). This suggests that for the problem to have a solution some continuity condition on the map $\\phi$ with respect to small inputs $u$ may be required.\n\n\nThe more specific problem addressed is hence the following:\n\n{\\em Find conditions on the transition map $\\phi$ that ensure that the $0$-GUAS and $\\|\\cdot\\|_{\\mathcal{U}}$-UBEBS of $\\Sigma$ imply the $\\|\\cdot\\|_{\\mathcal{U}}$-iISS of $\\Sigma$.}\n\n\\section{Main result: a characterization of iISS}\n\\label{sec:main-result:-char}\nBy solving the previous problem, the characterization of iISS as the superposition~(\\ref{eq:iISS-superp}) will be extended to general classes of infinite-dimensional systems. The following condition on the transition map will be required.\n\n\\begin{as}\n \\label{as:transition-map}\n The transition map $\\phi$ of the system $\\Sigma$ satisfies the following:\\\\\n For every $r>0$, $\\varepsilon>0$ and $T>0$ there exists $\\delta=\\delta(r,\\varepsilon,T)>0$ such that for every $t_0\\ge 0$, $x_0 \\in \\mathcal{X}$ and $u\\in \\mathcal{U}$ with $\\|u\\|_{\\mathcal{U}}\\le \\delta$, if for some $t^*\\in [t_0, t_0+T]$ it happens that $\\|x(t)\\|_{\\mathcal{X}}\\le r$ and $\\|z(t)\\|_{\\mathcal{X}}\\le r$ for all $t\\in [t_0,t^*]$, where $x(t)=\\phi(t,t_0,x_0,u)$ and $z(t)=\\phi(t,t_0,x_0,\\mathbf{0})$, then \n\\begin{align} \\|z(t)-x(t)\\|_{\\mathcal{X}} \\le \\varepsilon\\quad \\forall t\\in [t_0,t^*].\n\\end{align}\n\\end{as}\nAssumption~\\ref{as:transition-map} means that the solution $x$ corresponding to an input can be made arbitrarily close to the zero-input solution $z$ by reducing the input, as measured by the admissible functional, whenever both solutions remain bounded by $r$ over some time interval of prespecified maximum length $T$. This should happen uniformly over the initial time. \n\nThe following theorem is our main result.\n\\begin{teo} \\label{thm:main}\nLet $\\Sigma$ be a forward complete system endowed with an admissible functional $\\|\\cdot\\|_{\\mathcal{U}}$ satisfying condition (E). Let Assumption \\ref{as:transition-map} hold. 
Then the following are equivalent:\n\\begin{enumerate}[(a)]\n\\item \\label{item:a} $\\Sigma$ is $0$-GUAS and UBEBS.\n\\item \\label{item:b} $\\Sigma$ is iISS.\n\\end{enumerate}\n\\end{teo}\nThe proof of Theorem \\ref{thm:main} employs the $\\varepsilon$-$\\delta$ characterization of the ISS property provided by Theorem \\ref{thm:ISS-e-d} and Lemma \\ref{lem:o-guas+UGR:UGS}, whose proofs are given in the Appendix. This $\\varepsilon$-$\\delta$ characterization applies to the general ISS property in Definition~\\ref{def:stability-def}\\ref{item:ISS}) where the input functional should be admissible but is not required to satisfy condition (E). In what follows, $B_r$ denotes the closed ball of radius $r\\ge 0$ centred at $0$ in $\\mathcal{X}$, namely $B_r = \\{x \\in \\mathcal{X} : \\|x\\|_{\\mathcal{X}} \\le r\\}$. \n\\begin{teo}\\label{thm:ISS-e-d}\nLet $\\Sigma$ be a forward complete system and let $\\|\\cdot\\|_{\\mathcal{U}}$ be an admissible functional. Then $\\Sigma$ is $\\|\\cdot\\|_{\\mathcal{U}}$-ISS if and only if the following conditions hold:\n\\begin{enumerate}[C1)]\n\\item \\label{item:ubrs}For every $T>0$, $r>0$ and $s>0$ there exists $C\\ge 0$ such that for all $t_0\\ge 0$, $x_0\\in B_r$ and $u\\in\\mathcal{U}$ so that $\\|u\\|_{\\mathcal{U}}\\leq s$, $\\|\\phi(t,t_{0},x_0,u)\\|_{\\mathcal{X}}\\leq C$ for all $t\\in[t_{0},t_{0}+T]$.\n\\item \\label{item:smallx0u}For all $\\varepsilon >0$ there exists $\\delta>0$ such that for every $t_0\\ge 0$, $x_0\\in B_{\\delta}$ and $u\\in\\mathcal{U}$ with $\\|u\\|_{\\mathcal{U}}\\le \\delta$, \n$\\|\\phi(t,t_{0},x_0,u)\\|_{\\mathcal{X}}\\leq \\varepsilon$ for all $t\\ge t_{0}$.\n\\item \\label{item:attract}There exists $\\nu\\in\\mathcal{K}$ such that for all $r\\geq\\varepsilon>0$ there is a positive $T=T(r,\\varepsilon)$ so that the following holds: for every $t_0\\ge 0$, $x_0\\in B_r$ and $u\\in\\mathcal{U}$ we have that\n$\\|\\phi(t,t_{0},x_0,u)\\|_{\\mathcal{X}}\\leq \\varepsilon+\\nu\\left(\\|u\\|_{\\mathcal{U}}\\right)$ for all $t \\ge t_{0}+T$.\n\\end{enumerate}\n\\end{teo}\n\nTheorem \\ref{thm:ISS-e-d} is a generalization of the $\\varepsilon$-$\\delta$ characterization of ISS in Lemma 2.7 of \\cite{sonwan_scl95}. The condition~C\\ref{item:ubrs}) is not needed in \\cite{sonwan_scl95} because it is automatically satisfied for time-invariant finite-dimensional systems defined by $\\dot{x}=f(x,u)$ with $f$ locally Lipschitz in $(x,u)$. For time-invariant infinite-dimensional systems, C\\ref{item:ubrs}) becomes equivalent to the bounded reachability sets (BRS) property \\cite{Mironchenko2018} (but here $\\|\\cdot\\|_{\\mathcal{U}}$ is not required to be a norm). Hence, C\\ref{item:ubrs}) can be regarded as BRS uniformly with respect to initial time (UBRS). \n\nGiven C\\ref{item:attract}), condition C\\ref{item:smallx0u}) can be relaxed to:\n\\begin{itemize}\n\\item[C\\ref{item:smallx0u}')] for all $h>0$ and $\\varepsilon>0$, there is a $\\delta>0$ such that for every $t_0\\ge0$, $x_0\\in B_{\\delta}$ and $u\\in \\mathcal{U}$ with $\\|u\\|_{\\mathcal{U}}\\le\\delta$, then $\\|\\phi(t,t_0,x_0,u)\\|_{\\mathcal{X}}\\le \\varepsilon$ for all $t\\in [t_0,t_0+h]$,\n\\end{itemize}\nbecause C\\ref{item:smallx0u}') and C\\ref{item:attract}) imply C\\ref{item:smallx0u}). For time-invariant systems, C\\ref{item:smallx0u}') becomes equivalent to the CEP (continuous at the equilibrium point) and C\\ref{item:attract}) to the UAG (uniform asymptotic gain) properties in \\cite{Mironchenko2018}. 
C\\ref{item:smallx0u}') and C\\ref{item:attract}) can hence be regarded as CEP and UAG, respectively, uniformly with respect to initial time (UCEP and UUAG). Replacing C\\ref{item:smallx0u}) by C\\ref{item:smallx0u}'), Theorem~\\ref{thm:ISS-e-d} then generalizes the equivalence between items i) and ii) in \\cite[Thm. 5]{Mironchenko2018}, namely \n\\begin{align*}\n \\text{ISS $\\Leftrightarrow$ UUAG $\\wedge$ UCEP $\\wedge$ UBRS},\n\\end{align*}\non the one hand by allowing time-varying systems and on the other by considering a more general definition of ISS that incorporates iISS within a unifying framework.\n\nThe proof of Theorem \\ref{thm:ISS-e-d} is inspired in the proofs of Lemma 2.7 of \\cite{sonwan_scl95} and of Theorem 5 in \\cite{Mironchenko2018} and is provided for the sake of completeness in Appendix~\\ref{app:proof-thm:ISS-e-d}. \n\nThe proof of our main result, Theorem~\\ref{thm:main}, requires the following two lemmas. The first one shows that under the continuity with respect to the input provided by Assumption~\\ref{as:transition-map}, then 0-GUAS $\\wedge$ UGB $\\Rightarrow$ UGS. Note that when the input functional satisfies condition (E), then the latter reads as 0-GUAS $\\wedge$ UBEBS $\\Rightarrow$ UBEBS0 (Definition~\\ref{def:iISS}). The second lemma gives a specific bound for the trajectories of a 0-GUAS and forward complete system that satisfies Assumption~\\ref{as:transition-map}.\n\\begin{lema}\n \\label{lem:o-guas+UGR:UGS}\n Let $\\Sigma$ be a system and let $\\|\\cdot\\|_{\\mathcal{U}}$ be an admissible functional. Let Assumption~\\ref{as:transition-map} hold. If $\\Sigma$ is 0-GUAS and $\\|\\cdot\\|_{\\mathcal{U}}$-UGB then it is $\\|\\cdot\\|_{\\mathcal{U}}$-UGS.\n\\end{lema}\nThe proof of Lemma~\\ref{lem:o-guas+UGR:UGS} is given in Appendix~\\ref{app:proof-lem:o-guas+UGR:UGS}.\n\\begin{lema}\n \\label{lem:main}\n Let $\\Sigma$ be a forward complete 0-GUAS system endowed with an admissible functional $\\|\\cdot\\|_{\\mathcal{U}}$. Let Assumption~\\ref{as:transition-map} hold. Then, for every $r>0$, $\\eta>0$ and $T>0$, there exists $\\gamma=\\gamma(r,\\eta,T)>0$ such that if $\\|\\phi(t,t_{0},x,u)\\|_{\\mathcal{X}}\\leq r$ for all $t\\in[t_{0},t_{0}+T]$ and $\\|u\\|_{\\mathcal{U}}\\leq \\gamma$ then\n\\begin{align} \\label{eq:beta-bound}\n\\|\\phi(t,t_{0},x,u)\\|_{\\mathcal{X}}\\leq\\beta(\\|x\\|_{\\mathcal{X}},t-t_{0})+ \\eta \\mbox{, $\\forall t\\in[t_{0},t_{0}+T]$},\n\\end{align}\nwhere $\\beta\\in \\mathcal{KL}$ is the function given by the definition of $0$-GUAS. \n\\end{lema}\nThe proof of Lemma~\\ref{lem:main} is given in Appendix~\\ref{app:proof-lem:main}.\n\nWe are now ready to provide the proof of our main result.\n\\begin{proof}[{\\bf Proof of Theorem \\ref{thm:main}}]\n(\\ref{item:b}) $\\Rightarrow$ (\\ref{item:a}) is straightforward. We next prove (\\ref{item:a}) $\\Rightarrow$ (\\ref{item:b}). \n\nAssume (\\ref{item:a}). We prove iISS using Theorem~\\ref{thm:ISS-e-d} and taking into account that ISS means iISS in this case (Definition~\\ref{def:iISS}) given that the admissible input functional satisfies condition (E). From Lemma \\ref{lem:o-guas+UGR:UGS} we have that $\\Sigma$ is UGS and therefore (Definition~\\ref{def:iISS}) UBEBS0. Let $\\alpha, \\rho \\in \\K_\\infty$ be the functions given by the definition of UGS. \n\nLet $T>0$, $r>0$ and $s>0$. Let $t_{0}\\ge 0$, $x\\in \\mathcal{X}$ such that $\\|x\\|_{\\mathcal{X}}\\le r$ and $u\\in\\mathcal{U}$ with $\\|u\\|_{\\mathcal{U}}\\leq s$. 
Then, due to UGS we have that for all $t\\in[t_{0},t_{0}+T]$,\n\\begin{eqnarray*}\n\\|\\phi(t,t_{0},x,u)\\|_{\\mathcal{X}}&\\leq &\\alpha(\\|x\\|_{\\mathcal{X}})+\\rho\\left(\\|u\\|_{\\mathcal{U}}\\right)\\\\\n&\\leq &\\alpha(r)+\\rho\\left(s\\right).\n\\end{eqnarray*}\nTherefore, C\\ref{item:ubrs}) holds with $C=\\alpha(r)+\\rho\\left(s\\right)$.\n\nLet $\\varepsilon>0$. Pick $\\delta>0$ such that $\\alpha(\\delta)+\\rho\\left(\\delta\\right)\\le\\varepsilon$. Then, if $t_0\\ge 0$, $x\\in \\mathcal{X}$ with $\\|x\\|_{\\mathcal{X}}\\leq \\delta$ and $u\\in\\mathcal{U}$ with $\\|u\\|_{\\mathcal{U}}\\leq \\delta$, it follows that for all $t\\ge t_0$\n\\begin{eqnarray*}\n\\|\\phi(t,t_{0},x,u)\\|_{\\mathcal{X}}&\\leq &\\alpha(\\|x\\|_{\\mathcal{X}})+\\rho\\left(\\|u\\|_{\\mathcal{U}}\\right)\\\\\n&\\leq &\\alpha(\\delta)+\\rho\\left(\\delta\\right)\\\\\n&<&\\varepsilon,\n\\end{eqnarray*}\nand thus C\\ref{item:smallx0u}) holds.\n\nNext, we prove C\\ref{item:attract}). Define $\\nu\\in\\K_\\infty$ via $\\nu=2\\rho$ and let $\\psi=\\rho^{-1}{\\scriptstyle{\\,\\circ\\,}}\\alpha$. Let $r\\geq\\varepsilon>0$, $t_{0}\\ge 0$, $x\\in \\mathcal{X}$ be such that $\\|x\\|_{\\mathcal{X}}\\leq r$ and $u\\in\\mathcal{U}$. Distinguish the cases\n\\begin{enumerate}[(i)]\n\\item \\label{item:i}$\\|u\\|_{\\mathcal{U}}\\ge\\psi(r)$; and\n\\item \\label{item:ii}$\\|u\\|_{\\mathcal{U}}<\\psi(r)$. \n\\end{enumerate}\nIn case (\\ref{item:i}), we have \n\\begin{eqnarray*}\n\\|\\phi(t,t_{0},x,u)\\|_{\\mathcal{X}}&\\leq &\\alpha(\\|x\\|_{\\mathcal{X}})+\\rho\\left(\\|u\\|_{\\mathcal{U}}\\right)\\\\\n&\\leq &\\alpha(r)+\\rho\\left(\\|u\\|_{\\mathcal{U}}\\right)\\\\\n&\\leq &\\alpha(\\psi^{-1}(\\|u\\|_{\\mathcal{U}}))+\\rho\\left(\\|u\\|_{\\mathcal{U}}\\right)\\\\\n&=&\\rho\\left(\\|u\\|_{\\mathcal{U}}\\right)+\\rho\\left(\\|u\\|_{\\mathcal{U}}\\right)\\\\\n&=&2\\rho(\\|u\\|_{\\mathcal{U}})\\\\\n&=&\\nu(\\|u\\|_{\\mathcal{U}})\n\\end{eqnarray*} \nSo, for every $\\varepsilon>0$ and $T>0$, it happens that $\\|\\phi(t,t_{0},x,u)\\|_{\\mathcal{X}}\\leq\\varepsilon+\\nu(\\|u\\|_{\\mathcal{U}})$ for every $t\\geq t_{0}+T$.\n\nIn case (\\ref{item:ii}), we have \n\\begin{eqnarray*}\n\\|\\phi(t,t_{0},x,u)\\|_{\\mathcal{X}}&\\leq &\\alpha(\\|x\\|_{\\mathcal{X}})+\\rho\\left(\\|u\\|_{\\mathcal{U}}\\right)\\\\\n&\\leq &\\alpha(r)+\\rho\\left(\\|u\\|_{\\mathcal{U}}\\right)\\\\\n&\\leq &\\alpha(r)+\\rho\\left(\\psi(r)\\right)=\\tilde{r} \n\\end{eqnarray*}\nSo $\\|\\phi(t,t_{0},x,u)\\|_{\\mathcal{X}}\\leq\\tilde{r}$ for all $t\\geq t_{0}$. Let $\\tilde{\\varepsilon}=\\alpha^{-1}(\\varepsilon)$ and $\\eta=\\tilde{\\varepsilon}\/2$. Pick $\\tilde{T}>0$ such that $\\beta(\\tilde{r},\\tilde{T})<\\tilde{\\varepsilon}\/2$, where $\\beta\\in\\mathcal{KL}$ is given by 0-GUAS. By Lemma~\\ref{lem:main}, there exists $\\gamma = \\gamma(\\tilde{r}, \\eta, \\tilde{T}) > 0$ such that (\\ref{eq:beta-bound}) holds, with $\\tilde{T}$ instead of $T$, provided that $\\|u\\|_{\\mathcal{U}}\\leq\\gamma$. \n\nDefine $N=\\left\\lceil\\frac{\\psi(r)}{\\gamma}\\right\\rceil$ and $T=N\\tilde{T}$, where $\\lceil s \\rceil$ denotes the smallest integer not less than $s\\in\\mathbb{R}$.\n\nFor $i=0,\\ldots,N$, let $t_{i}=t_{0}+i\\tilde{T}$. We consider the intervals $I_{i}=(t_{i},t_{i+1}]$ with $i=0,\\ldots,N-1$ and claim that there exists an integer $j\\leq N-1$ for which $\\|u_{(t_{j},t_{j+1}]}\\|_{\\mathcal{U}}<\\gamma$. 
If such a $j$ did not exist, then from the definition of $N$ and condition (E), it would follow that $\\|u\\|_{\\mathcal{U}}\\ge \\|u_{(t_0,T]}\\|_{\\mathcal{U}}\\ge \\sum_{i=0}^{N-1}\\|u_{(t_{i},t_{i+1}]}\\|_{\\mathcal{U}}\\ge N\\gamma \\ge \\psi(r)$, which contradicts case (\\ref{item:ii}). \n\nPick $j$ such that $\\|u_{(t_{j},t_{j+1}]}\\|_{\\mathcal{U}}<\\gamma$ and define $u_{j}=u_{(t_j,t_{j+1}]}$ and $x_j=\\phi(t_j,t_0,x,u)$. \nBy the causality and semigroup properties, $\\phi(t,t_{0},x,u)=\\phi(t,t_{j},x_j, u_{j})$ for all $t\\in [t_{j},t_{j+1}]$. Since $\\|\\phi(t,t_{0},x,u)\\|_{\\mathcal{X}}\\leq\\tilde{r}$ for all $t\\ge t_0$, we have that $\\|\\phi(t,t_{j},x_j,u_j)\\|_{\\mathcal{X}}\\leq\\tilde{r}$ for all $t\\in [t_j,t_{j+1}]$. From the facts that $\\|u_j\\|_{\\mathcal{U}}\\le \\gamma$ and the definition of $\\gamma$ it follows that if $x_{j+1}=\\phi(t_{j+1},t_{j},x_j,u_{j})$, then\n\\begin{eqnarray*}\n\\|x_{j+1}\\|_{\\mathcal{X}}&= &\\|\\phi(t_{j+1},t_{j},x_j,u_{j})\\|_{\\mathcal{X}}\\\\\n&\\leq &\\beta(\\|x_j\\|_{\\mathcal{X}},\\tilde{T})+\\eta\\\\\n&\\leq &\\beta(\\tilde{r},\\tilde{T})+\\eta\\\\\n&\\leq &\\frac{\\tilde{\\varepsilon}}{2}+\\frac{\\tilde{\\varepsilon}}{2}=\\tilde{\\varepsilon}\n\\end{eqnarray*}\nTherefore, since $\\phi(t,t_{0},x,u)=\\phi(t,t_{j+1},x_{j+1}, u)$ for all $t\\ge t_{j+1}$ and recalling the UGS property, it follows that for all $t\\ge t_0+T \\ge t_{j+1}$, \n\\begin{eqnarray*}\n\\|\\phi(t,t_{0},x,u)\\|_{\\mathcal{X}}&\\leq &\\alpha(\\|x_{j+1}\\|_{\\mathcal{X}})+\\rho\\left(\\|u\\|_{\\mathcal{U}}\\right)\\\\\n&\\leq &\\alpha(\\tilde{\\varepsilon})+\\rho\\left(\\|u\\|_{\\mathcal{U}}\\right)\\\\\n&=&\\varepsilon+\\rho\\left(\\|u\\|_{\\mathcal{U}}\\right)\\\\\n&\\le&\\varepsilon+\\nu\\left(\\|u\\|_{\\mathcal{U}}\\right).\n\\end{eqnarray*}\nThis shows that C\\ref{item:attract}) is satisfied. By Theorem~\\ref{thm:ISS-e-d}, the system $\\Sigma$ is $\\|\\cdot\\|_{\\mathcal{U}}$-ISS and hence $\\|\\cdot\\|_{\\mathcal{U}}$-iISS from Definition~\\ref{def:iISS}.\n\\end{proof}\n\n\\section{Time-delay systems}\n\\label{sec:time-delay}\n\nIn this section, we consider time-delay systems with inputs. For $\\tau\\ge 0$ (where $\\tau$ is larger than, or equal to, the maximum delay involved in the dynamics), let $\\mathcal{C}=\\mathcal{C}\\left([-\\tau,0],\\mathbb{R}^{n}\\right)$ be the set of continuous functions $\\psi:[-\\tau,0]\\to \\mathbb{R}^n$ endowed with the supremum norm $\\|\\psi\\|=\\displaystyle\\sup\\{|\\psi(s)|:s\\in [-\\tau ,0]\\}$. 
As usual, given a continuous function $x:[t_0-\\tau,T)\\to \\mathbb{R}^n$ and any $t_0\\le tt_0$ and $x_{t_0}=\\psi$, that is locally absolutely continuous on $[t_0,t_{(t_0,\\psi,u)})$ and satisfies equation (\\ref{eq:sistr}) for almost all $t\\in [t_0,t_{(t_0,\\psi,u)})$.\n\nUnder these assumptions, take $\\mathcal{X}=\\mathcal{C}$, $\\|\\cdot\\|_{\\mathcal{X}} = \\|\\cdot\\|$ and define the map $\\phi:D_{\\phi}\\to \\mathcal{X}$, with $D_{\\phi}=\\{(t,s,\\psi,u)\\in \\mathbb{R}_{\\ge 0}\\times \\mathbb{R}_{\\ge 0} \\times \\mathcal{C} \\times \\mathcal{U} : s\\le t 0}$ non-decreasing such that\n \\begin{equation*}\n |f(t,\\psi,\\mu)|\\leq N(\\|\\psi\\|)\\left(1+\\gamma(|\\mu|)\\right)\n \\end{equation*}\n for all $t\\geq 0$, for every $\\psi\\in\\mathcal{C}$ and for all $\\mu\\in\\mathbb{R}^{m}$.\n \\item[(R2)] \\label{item:r2} For every $r>0$ and $\\varepsilon >0$, there exists $\\delta >0$ such that for all $t\\geq 0$, it is true that \n \\begin{equation*}\n |f(t,\\psi,\\mu)-f(t,\\psi,0)|<\\varepsilon\n \\end{equation*}\n if $\\|\\psi\\|\\leq r$ and $|\\mu|\\leq\\delta$.\n \\item[(R3)] \\label{item:r3} $f(t,\\psi,0)$ is Lipschitz in $\\psi$ on bounded sets, uniformly in $t\\ge 0$, i.e., for all $r>0$ there exists $L=L(r)$ such that $|f(t,\\psi,0)-f(t,\\varphi,0)|\\leq L\\|\\psi-\\varphi\\|$ for all $t\\ge 0$ whenever $\\|\\psi\\| \\le r$ and $\\|\\varphi\\| \\le r$.\n\\end{enumerate}\n\\end{as}\nThe following lemma, whose proof can be obtained, {\\em mutatis mutandis}, from that of Lemma~1 in \\cite{haiman_tac18}, asserts that Assumption~\\ref{as:f-delay} holds if $f(t,0,0)=0$ for all $t\\ge 0$ and $f$ satisfies a Lipschitz condition on bounded sets.\n\\begin{lema} \\label{lem:f-lips-delay} Suppose that $f:\\mathbb{R}_{\\ge 0}\\times \\mathcal{C} \\times \\mathbb{R}^m\\to \\mathbb{R}^n$ is Lipschitz on bounded subsets of $\\mathcal{C} \\times \\mathbb{R}^m$, uniformly in $t$, i.e. for all $r\\ge 0$ there exists $L=L(r)\\ge 0$ such that for all $\\psi,\\theta \\in \\mathcal{C}$ such that $\\|\\psi\\|\\le r$ and $\\|\\theta\\|\\le r$ and all $\\mu,\\nu\\in \\mathbb{R}^m$ with $|\\mu|\\le r$ and $|\\nu|\\le r$ we have that\n\\begin{align*}\n|f(t,\\psi,\\mu)-f(t,\\theta,\\nu)|\\le L(\\|\\psi-\\theta\\|+|\\mu-\\nu|)\\quad \\forall t\\ge 0.\n\\end{align*}\nSuppose in addition that $f(t,0,0)=0$ for all $t\\ge 0$. Then $f$ satisfies Assumption \\ref{as:f-delay}.\n\\end{lema}\n\\begin{teo}\n \\label{thm:iISS-ubebs+0-guas}\n Consider system (\\ref{eq:sistr}) and let Assumption \\ref{as:f-delay} hold. Let $\\gamma\\in \\K_\\infty$ be given by (R1). Then, the following hold.\n \\begin{enumerate}\n \\item[a)] If system (\\ref{eq:sistr}) is iISS with gain $\\kappa$, then it is 0-GUAS and UBEBS with gain $\\kappa$.\n \n \\item[b)] If system (\\ref{eq:sistr}) is $0$-GUAS and UBEBS with gain $\\alpha$, then it is iISS with gain $\\kappa=\\max\\{\\alpha,\\gamma\\}$.\n \\end{enumerate}\n\\end{teo}\nThe proof of Theorem \\ref{thm:iISS-ubebs+0-guas} is a consequence of Theorem~\\ref{thm:main} and the following lemma.\n\\begin{lema}\n \\label{lem:teosigrh}\n Let Assumption~\\ref{as:f-delay} hold and let $\\gamma\\in \\mathcal{K}$ be given by (R1). Then, system $\\Sigma^{R}$ satisfies Assumption~\\ref{as:transition-map} with $\\|\\cdot\\|_{\\mathcal{U}}=\\|\\cdot\\|_{\\gamma}$.\n\\end{lema}\nThe proof of Lemma~\\ref{lem:teosigrh} is provided in Appendix \\ref{app:p-ds}.\n\n\\begin{proof}[{\\bf Proof of Theorem \\ref{thm:iISS-ubebs+0-guas}}]\n Part a) is straightforward; we next prove b). 
\n\n Assume that (\\ref{eq:sistr}) is $0$-GUAS and UBEBS with gain $\\alpha$. Let $\\kappa=\\max\\{\\alpha,\\gamma\\}\\in \\K_\\infty$. Then, (\\ref{eq:sistr}) is also UBEBS with gain $\\kappa$ because $\\|u\\|_{\\alpha}\\le \\|u\\|_{\\kappa}$ for all $u\\in \\mathcal{U}$. By Proposition~\\ref{prop:delay-general-equiv}, $\\Sigma^R$ is $\\|\\cdot\\|_{\\kappa}$-UBEBS and $0$-GUAS. From Lemma~\\ref{lem:teosigrh}, $\\Sigma^R$ satisfies Assumption~\\ref{as:transition-map} with $\\|\\cdot\\|_{\\mathcal{U}} = \\|\\cdot\\|_{\\gamma}$, and hence also with $\\|\\cdot\\|_{\\mathcal{U}}=\\|\\cdot\\|_{\\kappa}$. By Theorem~\\ref{thm:main}, $\\Sigma^R$ is then $\\|\\cdot\\|_{\\kappa}$-iISS and, by Proposition~\\ref{prop:delay-general-equiv}, (\\ref{eq:sistr}) is iISS with gain $\\kappa$.\n\\end{proof}\nThe equivalence between $0$-GUAS $\\wedge$ UBEBS and iISS has been proved recently in \\cite[Thm. 2]{chagok_tac21} for time-invariant time-delay systems under the stronger hypothesis that the function $f(x_t,u)$ is Lipschitz on bounded subsets of $\\mathcal{C} \\times \\mathbb{R}^m$ \\cite[Standing assumption~1]{chagok_tac21}. The proof of the equivalence is there based on the existence of a time-invariant, Lipschitz on bounded subsets and coercive Lyapunov-Krasovskii functional (LKF) $V$ for the zero-input system $f(x_t,0)$ \\cite{pepkar_ijc13}, which is then employed for the system with inputs \\cite[Proposition~3]{chagok_tac21}. Since the concept of derivative of $V$ considered in \\cite{chagok_tac21} is that of Driver, the Lipschitz condition on $f$ is therein essential for establishing the equivalence between $0$-GUAS $\\wedge$ UBEBS and iISS. The fact that a) $\\Leftrightarrow$ e) in \\cite[Thm. 2]{chagok_tac21} becomes then a corollary of Theorem~\\ref{thm:iISS-ubebs+0-guas}. In view of Lemma~\\ref{lem:f-lips-delay}, the assumptions of Theorem~\\ref{thm:iISS-ubebs+0-guas} are clearly weaker than those of \\cite[Thm. 2]{chagok_tac21}.\n\n By simplifying the analysis of iISS into the separate evaluation of 0-GUAS and UBEBS, Theorem \\ref{thm:iISS-ubebs+0-guas} also allows to more easily conclude that if the function $f$ in~(\\ref{eq:sistr}) is time-invariant and Lipschitz on bounded subsets, then the existence of an iISS LKF with pointwise dissipation (as per \\cite{chagok_tac21}) implies that the time-delay system is iISS, which is one of the important results in \\cite{chagok_tac21}. Moreover, Theorem \\ref{thm:iISS-ubebs+0-guas} shows that this implication still holds for time-invariant systems satisfying the weaker Assumption~\\ref{as:f-delay}, if the derivative of $V$ is considered in the usual sense instead of Driver's.\n \n\n\\section{Semilinear systems}\n\\label{sec:semilinear-systems}\nIn this section, we apply our main result to obtain a characterization of iISS for a semilinear system of the form\n\\begin{align}\n \\label{eq:sistsl}\n \\begin{split}\n \\dot{x}(t) &=Ax(t)+f(t,x(t),u(t)) \\\\\n x(t_{0}) &=x_{0} \n \\end{split}\n\\end{align}\nwhere $t\\ge 0$, $x(t)\\in \\mathcal{X}$, $\\mathcal{X}$ a Banach space with norm $\\|\\cdot\\|_{\\mathcal{X}}$, $u(t)\\in \\mathsf{U}$, with $\\mathsf{U}$ a normed space with norm $\\|\\cdot\\|_{\\mathsf{U}}$. 
The operator $A:D(A)\\subseteq\\mathcal{X}\\rightarrow\\mathcal{X}$ is a linear operator that generates a strongly continuous semigroup (a $C_{0}$-semigroup) $T:\\mathbb{R}_{\\ge 0} \\to \\mathcal{L}(\\mathcal{X})$, where $\\mathcal{L}(\\mathcal{X})$ is the set of all the linear and bounded operators from $\\mathcal{X}$ to $\\mathcal{X}$, and $f:\\mathbb{R}_{\\ge 0}\\times \\mathcal{X} \\times \\mathsf{U}\\to \\mathcal{X}$. \nThe set $\\mathcal{U}$ of admissible inputs is the set of all the piecewise continuous functions $u:\\mathbb{R}_{\\ge 0} \\to \\mathsf{U}$. \n\nGiven $t_0\\ge 0$, $x_0\\in \\mathcal{X}$ and $u\\in \\mathcal{U}$, consider the weak solutions of (\\ref{eq:sistsl}). A function $x:J\\to \\mathcal{X}$, with $J=[t_0,\\tau)$ or $[t_0,\\tau]$ is a weak solution of (\\ref{eq:sistsl}) if it is continuous and \n\\begin{align*}\nx(t)=T(t-t_0)x_0+\\int_{t_0}^t T(t-s)f(s,x(s),u(s))\\:ds, \\quad \\forall t\\in J\n\\end{align*}\nwhere the concept of integral is that of Bochner \\cite{cazenave_1998}.\n\n\\subsection{Semilinear systems: general results}\n\\label{sec:semilinear-general}\n\nThe following assumptions on $f$ are required. \n\\begin{as} \\label{ass:f-sml}\n The function $f$ in (\\ref{eq:sistsl}) satisfies the following conditions.\n \\begin{enumerate}\n \\item[(SL1)] $f$ is piecewise continuous in $t$ and continuous in its other variables in the following sense. There exists a strictly increasing and unbounded sequence of positive times $\\{\\tau_k\\}_{k=1}^{\\infty}$ and continuous functions $f_k:[\\tau_k,\\tau_{k+1}]\\times \\mathcal{X} \\times \\mathsf{U} \\to \\mathcal{X}$, $k=0,1,\\ldots$ with $\\tau_0=0$, such that $f=f_k$ on $[\\tau_k,\\tau_{k+1})\\times \\mathcal{X} \\times \\mathsf{U}$.\n \\item[(SL2)] $f(t,\\xi,\\mu)$ is Lipschitz in $\\xi$ on bounded sets, uniformly for all $t$ and for $\\mu$ in bounded sets, i.e., for all $r>0$ there exists $L=L(r)\\ge 0$ such that, for all $\\xi,\\omega \\in \\mathcal{X}$ such that $\\|\\xi\\|_{\\mathcal{X}}\\leq r$, $\\|\\omega \\|_{\\mathcal{X}}\\leq r$, all $\\mu\\in\\mathsf{U}$ such that $\\|\\mu\\|_{\\mathsf{U}}\\leq r$ and all $t\\ge 0$, it holds that\n \\begin{equation*}\n \\|f(t,\\xi,\\mu)-f(t,\\omega,\\mu)\\|_{\\mathcal{X}}\\leq L\\|\\xi-\\omega\\|_{\\mathcal{X}}.\n \\end{equation*}\n \\item[(SL3)] \\label{item:sl3} There exists $\\gamma\\in\\K_\\infty$ and $N:\\mathbb{R}_{\\geq 0}\\rightarrow\\mathbb{R}_{> 0}$ non-decreasing such that\n \\begin{equation*}\n \\|f(t,\\xi,\\mu)\\|_{\\mathcal{X}}\\leq N(\\|\\xi\\|_{\\mathcal{X}})\\left(1+\\gamma(\\|\\mu\\|_{\\mathsf{U}})\\right)\n \\end{equation*}\n for all $t\\geq 0$, $\\xi\\in\\mathcal{X}$ and $\\mu\\in\\mathsf{U}$.\n \\item[(SL4)] \\label{item:sl4} For every $r>0$ and $\\varepsilon >0$, there exists $\\delta >0$ such that for all $t\\geq 0$, it is true that \n \\begin{equation*} \n \\|f(t,\\xi,\\mu)-f(t,\\xi,0)\\|_{\\mathcal{X}}<\\varepsilon\n \\end{equation*}\n if $\\|\\xi\\|_{\\mathcal{X}}\\leq r$ and $\\|\\mu\\|_{\\mathsf{U}}\\leq\\delta$.\n \\end{enumerate}\n\\end{as}\nWhen $f(t,\\xi,\\mu)$ is Lipschitz in $(\\xi,\\mu)$ on bounded sets and satisfies $f(t,0,0)\\equiv 0$, it can be proved, similarly to the proof of Lemma~\\ref{lem:f-lips-delay}, that $f$ satisfies (SL2)--(SL4) of Assumption~\\ref{ass:f-sml}. 
This is made more precise as follows.\n\\begin{lema}\n \\label{lem:f-lips-sl}\n Suppose that $f:\\mathbb{R}_{\\ge 0}\\times \\mathcal{X} \\times \\mathsf{U} \\to \\mathcal{X}$ is Lipschitz on bounded subsets of $\\mathcal{X} \\times \\mathsf{U}$ uniformly over $\\mathbb{R}_{\\ge 0}$, i.e. for all $r\\ge 0$ there exists $L=L(r)\\ge 0$ such that for all $\\xi,\\zeta \\in \\mathcal{X}$ such that $\\|\\xi\\|_{\\mathcal{X}}\\le r$ and $\\|\\zeta\\|_{\\mathcal{X}}\\le r$ and all $\\mu, \\nu \\in \\mathsf{U}$ with $\\|\\mu\\|_{\\mathsf{U}}\\le r$ and $\\|\\nu\\|_{\\mathsf{U}}\\le r$ we have that\n\\begin{align*}\n\\|f(t,\\xi,\\mu)-f(t,\\zeta,\\nu)\\|_{\\mathcal{X}}\\le L(\\|\\xi-\\zeta\\|_{\\mathcal{X}} + \\|\\mu-\\nu\\|_{\\mathsf{U}}) \\quad \\forall t\\ge 0.\n\\end{align*}\nSuppose in addition that $f(t,0,0)=0$ for all $t\\ge 0$. Then $f$ satisfies (SL2)--(SL4) of Assumption~\\ref{ass:f-sml}.\n\\end{lema}\nUnder (SL1)--(SL3) of Assumption \\ref{ass:f-sml} and the fact that the admissible inputs $u$ are piecewise continuous, a slight modification of \\cite[Prop. 4.3.3]{cazenave_1998} to allow piecewise continuity proves that for every $t_0\\ge 0$, $x_0\\in \\mathcal{X}$ and $u\\in \\mathcal{U}$ there exists a unique maximally defined weak solution $x:[t_0, t_{(t_0,x_0,u)})\\to \\mathcal{X}$ of (\\ref{eq:sistsl}).\n\nDefining the map $\\phi:D_{\\phi}\\to \\mathcal{X}$, with $D_{\\phi}=\\{(t,t_0,x_0,u)\\in \\mathbb{R}_{\\ge 0}\\times \\mathbb{R}_{\\ge 0} \\times \\mathcal{X} \\times \\mathcal{U}:t_0\\le t0$, where $\\|T(t)\\|$ denotes the induced norm of the operator $T(t)$. Also, exponential stability of $T(\\cdot)$ is equivalent to GUAS of the system $\\dot{x}=Ax$ \\cite[Prop. 3]{dasmir_mcss13}. \n\n\\begin{teo}\n \\label{thm:bilineal}\n Consider a semilinear system (\\ref{eq:sistsl}) that satisfies Assumption \\ref{as:bilinear}. Then, the following are equivalent. \n\\begin{enumerate}[a)]\n\\item \\label{item:bilin-iiss}System (\\ref{eq:sistsl}) is iISS.\n\\item \\label{item:bilin-0guas}System (\\ref{eq:sistsl}) is $0$-GUAS.\n\\end{enumerate}\n\\end{teo}\n\\begin{proof}. Since \\ref{item:bilin-iiss}) $\\Rightarrow$ \\ref{item:bilin-0guas}) is trivial, we prove \\ref{item:bilin-0guas}) $\\Rightarrow$ \\ref{item:bilin-iiss}). Suppose that the system is $0$-GUAS. Then the semigroup $T(\\cdot)$ is exponentially stable. Let $M\\ge 1$ and $\\lambda>0$ so that $\\|T(t)\\|\\le Me^{-\\lambda t}$ for all $t\\ge 0$, where $\\|T(t)\\|$ denotes the induced norm of the operator $T(t)$. In the remainder of this proof, we omit the subscripts in the norms $\\|\\cdot\\|_{\\mathcal{X}}$ and $\\|\\cdot\\|_{\\mathsf{U}}$ in order to avoid cluttered notation and because these can be inferred from the context. \n\nLet $t_0\\ge 0$, $x_0\\in \\mathcal{X}$, $u \\in \\mathcal{U}$ and $x(\\cdot)$ be the corresponding trajectory. Let $[t_0,t_{(t_0,x_0,u)})$ be the maximal interval of definition of $x(\\cdot)$. Suppose without loss of generality that $\\|u\\|<\\infty$. 
Then, for all $t_0\\le t < t_{(t_0,x_0,u)}$,\n\\begin{align*}\nx(t)=T(t-t_0)x_0+\\int_{t_0}^t T(t-s)f(s,x(s),u(s))ds.\n\\end{align*}\nTake the norm at both sides of the equality and apply the triangle inequality and the properties of the norm of the integral to obtain\n\\begin{align*}\n\\|x(t)\\|&\\le\\|T(t-t_0)\\|\\|x_0\\|+\\int_{t_0}^t \\|T(t-s)\\|\\|f(s,x(s),u(s))\\|ds \\\\\n&\\le M e^{-\\lambda(t-t_0)}\\|x_0\\| + \\int_{t_0}^t M e^{-\\lambda(t-s)} (K\\|x(s)\\|+d)\\gamma(\\|u(s)\\|) ds.\n\\end{align*}\nMultiply both sides by $e^{\\lambda (t-t_0)}$ and define $z(t)=e^{\\lambda (t-t_0)}\\|x(t)\\|$, so that\n\\begin{align*}\nz(t)&\\le M \\|x_0\\|+ M d \\int_{t_0}^t e^{\\lambda(s-t_0)}\\gamma(\\|u(s)\\|) ds+ \\int_{t_0}^t MK \\gamma(\\|u(s)\\|) z(s) ds.\n\\end{align*}\nThen, for $t_0\\le t\\le \\tau 0$ and select $L=L(\\max\\{r,r_u\\})$ from (SL2), so that the function $F$ satisfies\n\\begin{align*}\n \\| F(t,\\xi) - F(t,\\omega) \\| = \\| f(t,\\xi,u(t)) - f(t,\\omega,u(t)) \\| \\le L \\| \\xi - \\omega \\|\n\\end{align*}\nfor all $t\\in [t_0, t_{(t_0,x_0,u)})$, whenever $\\|\\xi\\|_\\mathcal{X} \\le r$, $\\|\\omega\\|_\\mathcal{X} \\le r$. This proves the claim.\n\nSince $t_{(t_0,x_0,u)}<\\infty$ and $F(t,\\xi)$ satisfies this Lipschitz condition, then a slight variation of \\cite[Thm.4.3.4]{cazenave_1998} implies that $x(\\cdot)$ is unbounded on $[t_0, t_{(t_0,x_0,u)})$. This is a contradiction showing that $t_{(t_0,x_0,u)}<\\infty$ is not possible. Then, $t_{(t_0,x_0,u)}=\\infty$ and the corresponding system $\\Sigma^{SL}$ is UBEBS as per Definitions \\ref{def:stability-def} and \\ref{def:iISS}. The iISS of the system then follows from Theorem \\ref{thm:iISS-ubebs+0-guas-sl}.\n\\end{proof}\n\nTheorem~\\ref{thm:bilineal} generalizes \\cite[Theorem~4.2]{mirito_mcrf16} to the time-varying case. The proof given here is based on the general characterization~(\\ref{eq:iISS-superp}), while that in \\cite{mirito_mcrf16} uses an {\\em ad hoc} method.\nA recent result dealing with the relationship between ISS and iISS for generalized bilinear time-invariant systems, allowing for unbounded (linear) input operators is given in \\cite{hosjac_mcss22}. The results in the current paper are neither a special case nor more general than those of \\cite{hosjac_mcss22}.\n\n\\section{Conclusions}\n\\label{sec:conclusions}\nThe equivalence between integral input-to-state stability (iISS) and the combination of global uniform asymptotic stability under zero input (0-GUAS) with uniformly bounded-energy input\/bounded state (UBEBS) was established for systems defined in abstract form, provided a reasonable assumption of continuity of the trajectories with respect to the input, at the zero input, is satisfied and employing a more general definition of iISS. Sufficient conditions for this assumption to be satisfied were given for time-delay systems and for semilinear evolution equations over Banach spaces. The abstract definition of system employed allows for time-varying infinite-dimensional systems whose solutions are unique. 
It is expected that our main result could be helpful in (a) establishing the equivalence for other specific classes of infinite-dimensional systems, such as semilinear systems over Banach spaces involving unbounded input operators, for which very few results are currently available, and (b) giving mild conditions under which ISS implies iISS, as done for finite-dimensional systems in \\cite{haiman_auto18}.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe Linear Inverse Problem or Unfolding is a complex problem common for many experiments. Often the experimentalist\nhas to reconstruct a true distribution T from a measured distribution M, where the two distributions are connected by\n\n\\begin{equation} \\label{e1}\n \\int R(x,y) T(x) dx = M(y).\n\\end{equation}\nThe function $R(x,y)$ can represent the limited resolution and acceptance of the detector, or the presence\nof an intermediate process.\n\nIn the case of a 1-dimensional (1D) discrete approximation one can reformulate Eq.\\ref{e1} as \n\\begin{equation} \\label{e2}\n R_{ij}T_j = M_i.\n\\end{equation}\n\nHere the histograms $T_j \\ (j=0,1,2,..,N_t-1)$ and\n$M_i \\ (i=0,1,2,..,n_m-1)$ are connected by a matrix $R_{ij}$ which gives the fraction of events\nfrom bin $T_j$ of the true distribution that end up being measured in bin $M_i$ of the measured distribution.\nTypically this matrix is determined by a model or by using a Monte Carlo simulation of the direct process.\n\nWhen solving Eq.(\\ref{e2}), $T_j$ and $M_i$ cannot be simply considered as vectors, because the number of entries in a given bin\ncan only be a non-negative real number.\n\nIt is also important to remember that Eq.(\\ref{e2}) is an approximation of Eq.(\\ref{e1}). The matrix $R_{ij}$ connects\none particular true distribution $T_j$ to one particular measured distribution $M_i$, therefore $R_{ij}$ and $T_j$ are not\nindependent. A preliminary hypothesis about the true distribution is needed, in order to calculate the elements of $R_{ij}$.\nThe usage of a wrong hypothesis about T will introduce certain systematic errors in the calculation of this matrix. \n\n\\section{Description of the algorithm}\nIn principal, if the matrix $R_{ij}$ is already known, one can try to guess the number of entries in every bin of the\ntrue sample, and to use the connection matrix to create a measured sample corresponding to this guess.\n\n\\begin{equation}\n R_{ij}T_j^g = M_i^g\n\\end{equation}\n\nThen the guess $T^g$ can be validated by a comparison between the measured sample $M$ and the\nsample $M^g$. A $\\chi^2$ test \\cite{chi2} can be used for a quantitative estimate of the quality of the guess.\nThe minimum of $\\chi^2$ can also be used as a \\textit{selection criteria} for choosing the \\textit{best guess}\nbetween multiple candidates. \n\nA direct brute-force attack\\cite{bfa} is not applicable for solving the unfolding problem, because of the unaffordable\nnumber of possible true samples $T^g$, which have to be tested against the measured sample $M$.\nNevertheless, if we limit ourself to the case of guesses $T^g$, containing only one entry, the number of possible\ncandidates is equal to the number of bins $N_t$, used to depict the true distribution $T$. In this case we can easily select\nthe \\textit{best guess} and there is a good\nchance that the entry of this \\textit{best guess} will be placed in a bin $T_j^g$ where the Probability Density\nFunction of the true distribution ($p.d.f._T$) has a relatively big value. 
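To make this selection step concrete, a minimal numerical sketch of one greedy step is given below; the function name, the use of a dense response matrix and the simple Pearson-like form of the $\\chi^2$ are choices made for this illustration only, the $\\chi^2$ test itself being the one of \\cite{chi2}.\n\\begin{verbatim}\nimport numpy as np\n\n# One greedy step: try to add a single entry to the current guess T_g in\n# every true bin, project each trial through the response matrix R, and\n# keep the bin minimizing the chi2 distance to the measured histogram M.\ndef add_best_entry(T_g, R, M):\n    def chi2(M_g):\n        return np.sum((M - M_g) ** 2 \/ np.maximum(M, 1.0))\n    scores = []\n    for j in range(len(T_g)):\n        trial = T_g.copy()\n        trial[j] += 1.0                 # candidate with one more entry in bin j\n        scores.append(chi2(R @ trial))  # M^g = R T^g\n    best = int(np.argmin(scores))\n    T_g[best] += 1.0\n    return T_g, scores[best]\n\\end{verbatim}\nStarting from an empty guess (all bins at zero), a single call of this function realizes the one-entry selection just described.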
Unfortunately, this single entry\ncannot be used to derive any useful information about $T$.\n\nAt this stage one can try adding another entry to the \\textit{best guess}. This will require a second iteration of the\nsame procedure, which will include test of $N_t$\nnew candidates. Again, there is a good chance that the second entry of the \\textit{best guess} will\nbe added to a bin, where $p.d.f._T$ has a relatively big value, but this time the decision will be influenced\nalso by the choice made during the previous iteration. Every subsequent iteration of the procedure will add new entry to $T^g$\nin a way which gives the best possible match between $M$ and $M^g$. After a sufficient number of iterations the growing\ndistribution of the \\textit{best guess} $T^g$ will start to converge to the true\ndistribution $T$. This is illustrated in Fig. \\ref{progres}.\n\nIn this example the connection function $R(x,y)$ corresponds to a gaussian smearing, systematic translation, and\nvariable inefficiency. The matrix elements of $R_{ij}$ are calculated with Monte Carlo by assuming a flat true distribution.\n\n\\section{Regularization of the reconstructed distribution}\nThe $\\chi^2$ test of the candidates is not sufficient to ensure a good reconstruction of the true distribution.\nThis problem is well known and comes from the ill-posedness of the matrix $R_{ij}$. As a result, \nthe presence of small statistical fluctuations in the measured sample has a very disproportional effect\non the reconstructed distribution. This is illustrated on Fig. \\ref{reg} - Top.\n\nThe problem can be mitigated by adding a regularization term to the \\textit{selection criteria} of the \\textit{best guess}:\n\n\\begin{equation}\n \\min(\\chi^2 + \\alpha C) ,\n\\end{equation}\nwhere $C$ is the regularization term and $\\alpha$ is its relative weight in the \\textit{selection criteria}.\nThe role of the regularization term is to add a penalty for guesses $T_j^g$, which give very good matching between\nthe measured sample $M_i$ and the projection of the guess $M_i^g$, but are nonsensical.\nThe role of the coefficient $\\alpha$ is to ensure that the $\\chi^2$ test will dominate the selection\ncriteria and that the regularization term will add only a weak preference to this criteria.\nOne possible implementation of the regularization term is:\n\n\\begin{equation*}\n {T^g_j}' = 2\\frac{\\Big( T_j^g\/B_j - T_{j-1}^g\/B_{j-1} \\Big) }{(B_j+B_{j-1})}\n\\end{equation*}\n\n\\begin{equation*}\n {T^g_j}''= {T^g_{j+1}}'-{T^g_j}'\n\\end{equation*}\n\n\\begin{equation} \\label{regterm}\n C = \\frac{R^4}{\\left(\\sum_{j=0}^{N_t-1} T^g_j\\right)^2} \\times \\frac{\\sum_{j=1}^{N_t-2}({T^g_j}'')^2}{N_t-2}\n\\end{equation}\n\nHere $T_j^g$ is the number of entries in bin $j$ of the candidate, $B_j$ is the size of the bin $j$\nand R is the range of the true sample (difference between the lower edge of the first bin and the upper edge of the last bin).\nThis regularization term will prefer smooth distributions and will constrain all very complex distributions having large\nbin-to-bin fluctuations\\footnote{The formulation of this regularization term is a bit complicated, because it tries to handle\nthe case of non-uniform bin sizes. }. The effect of adding a regularization term in the \\textit{selection criteria} is illustrated on\nFig. \\ref{reg} - Bottom.\n\nThe requirement of having a smooth distribution is not the only possibility for the regularization term. 
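Before turning to other choices, note that the smoothness term of Eq.(\\ref{regterm}) is straightforward to evaluate numerically; the short sketch below follows the formula literally for uniform or non-uniform binnings (the function and variable names are ours; in particular the range of the true sample is renamed in the code to avoid a clash with the response matrix $R_{ij}$).\n\\begin{verbatim}\nimport numpy as np\n\n# Smoothness penalty C for a candidate histogram T_g with bin widths B\n# and true-sample range R_range, following the regularization term above.\ndef smoothness_penalty(T_g, B, R_range):\n    T_g = np.asarray(T_g, dtype=float)\n    B = np.asarray(B, dtype=float)\n    Tp = 2.0 * np.diff(T_g \/ B) \/ (B[1:] + B[:-1])  # first derivative T'_j\n    Tpp = np.diff(Tp)                                # second derivative T''_j\n    N_t = len(T_g)\n    return R_range**4 \/ T_g.sum()**2 * np.sum(Tpp**2) \/ (N_t - 2)\n\\end{verbatim}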
Any additional\ninformation, known in advance, for the true distribution $T$ can be used to define a regularization term. It is also possible to\nhave multiple regularization terms, having different relative weights in the \\textit{selection criteria}. \n\n\\section{2D unfolding}\nThe implementation of the 2D unfolding requires only a minor modification of the procedure described so far.\nThe 2D histogram of the true distribution $T_{kl} \\ (k=0,1,..,N_t-1; l=0,1,..,M_t-1)$ can be treated as a 1D histogram\n$T_{j} \\ (j=0,1,..,N_t \\times M_t-1)$. The same can be done for the measured distribution $M_{mn}$. The only\nconsiderable difference between 1D and 2D comes from the definition of the regularization term of the \\textit{selection criteria}.\n\nFig. \\ref{2d} demonstrates the reconstruction of a complex 2D distribution. The regularization term used in this example is similar to \n(\\ref{regterm}), but the smoothness of the reconstructed distribution is checked independently in $X$ and $Y$ direction.\nAs in the 1D example above, here the connection function corresponds to a gaussian smearing, systematic translation, and\nvariable inefficiency. The elements of $R_{ij}$ are calculated assuming a flat true distribution.\n\nKeep in mind that the algorithm does not reconstruct directly the 2D true distribution $T_{kl}$. What is actually\nreconstructed is the 1D true distribution $T_{j}$ (see Fig. \\ref{2d_1d}). Notice, that this quite complex 1D distribution is\nreconstructed without any initial assumptions\\footnote{One may argue that the regularisation term used here is actually an \ninitial assumption.}. \n\n\\section{Discussion of the method}\nThe heuristic method described so far, can be classified as a Greedy\\cite{greedy} Genetic\\cite{genetic} algorithm.\nIt does not apply any restrictions on the configuration of the bins used to describe the true and\nthe measured distributions and on the dimensions of the connection matrix $R_{ij}$. The method itself does not require explicitly\nany knowledge about the true distribution, but if we have additional information known in advance, this can be used to define a\nregularization term and improve the quality of the solution. \n\nNevertheless, the method relies on the good knowledge of the matrix $R_{ij}$ and any systematic or statistical errors in the\ncalculation of the matrix elements will affect the quality of the solution.\n\nThe realization of the method has been implemented as a small C++ library available at \n\nhttps:\/\/launchpad.net\/ggaunfold\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \n\nOne of the most fascinating prediction of General Relativity, Einstein's classical theory of \ngravitation, is the existence of black holes, a region of space-time where nothing not even \nlight can escape. Seen originally as a simple non-physical curiosity, the mathematical theory\nof black holes is now a fully grown subject. A stationary black hole appears in fact to \nbe quite a simple object defined entirely by its mass, angular momentum and charge (although this must be put in contrast with the recent work underlining the existence of classical hairs for black holes in relation to gravity's soft modes \\cite{Strominger:2017aeh,Averin:2016hhm,Freidel_2016,Freidel_2016_2}). Moreover,\na straight analogy can be drawn between black holes dynamics predicted by Einstein's equation and \nthermodynamics. 
In particular, a notion of entropy is associated to a black hole \\cite{Bekenstein_1973} which, in presence of quantum fields, is related to the area of the event horizon by the Hawking formula \\cite{Hawking_1975} and has lead to the holographic principle relating geometric quantities and entropy in quantum gravity \\cite{Bousso_2002}.\n\nIn the context of loop quantum gravity (see \\cite{Rovelli_book,Rovelli_Vidotto_book,Thiemann_book}),\nblack hole entropy was mainly studied from the \nisolated horizon concept which assumes boundary conditions at the classical level \n\\cite{Ashtekar_1998}. From a purely quantum perspective, this classical input \nshould be removed. Instead, the quantum route is to compute an entanglement\nentropy between a bipartite partition of the spin network state into an inside\/outside\nregions. For instance, in the 3d Riemannian BF formulation of gravity, such an \nentanglement (after a suitable regularization) satisfies an holographic behavior for \nthe flat state \\cite{Livine_Terno_2008} (see also \\cite{Donnelly:2008vx,Donnelly:2011hn} for a more general treatment in loop gravity and lattice gauge theory beyond the flat state). Those calculations have strong similarities \nwith those in some spin models like the toric code model useful for quantum computation\npurposes \\cite{Kitaev_2003,Zanardi_2005,Preskill_Kitaev_2006}. Here we would like to push further \nthis similarities and propose to study a class of test states having holographic properties\nand non trivial correlations. \n\nA very active line of research in this direction is tensor network renormalisation techniques applied to the context of the AdS\/CFT correspondence for quantum gravity. Indeed tensor network states turned out to be a tremendously efficient ansatz to study holography in quantum gravity \\cite{Vidal_2015}, especially when looking at the holographic entanglement entropy \\cite{Takayanagi_2006,Takayanagi_2009}. In particular, multi-scale entanglement renormalization ansatz (MERA) \\cite{Vidal_2007} are especially promising as they appear to be variational ansatz of conformal field theory ground states and allow for a lattice realization of the AdS\/CFT correspondence \\cite{Pastawski:2015qua,Hayden:2016bbf,Caputa:2017yrh}. Moreover, they have opened a fecund interaction between quantum gravity, quantum information and quantum computing.\n\nHere, going in a similar direction although without using the MERA tools, our goal is to better understand the structure of correlations in loop quantum gravity (the interested reader can nevertheless find in \\cite{Chirco:2017vhs} a first application of tensor network techniques to loop quantum gravity and spin network states). \nThe motivation behind our study is twofold. The first one is to have a clearer understanding\nof the physical states solving all the constraints of canonical quantum general relativity. It is expected\nthey should have non trivial correlations mapping to the two points correlation\nfunctions of gravitons at the classical limit and that they should satisfy an area law.\nWe will introduce an ansatz for quantum states as superpositions of loop states with loops \nof arbitrary sizes with weights scaling for instance with the loop area, their perimeter\nand their number. The second motivation is related to the definition and action of the Hamiltonian \nconstraint of the theory which is still under active research. 
Identifying quantum states, with well-behaved correlations (both holographic and admitting nice 2-point correlations)\nwould give great insights toward the proper form and action of the quantum dynamic\nimplementing Einstein equation.\nOur work can be seen as complementary to the study of entanglement on spin network states built from local Hamiltonian as in condensed matter models developed in \\cite{Bianchi:2016hmk,Bianchi:2016tmw,Vidmar:2017uux}.\n\nThe present paper is structured as follows. Section \\ref{toriccode_def} \nreviews the basic features of the toric code model that is then used to define \nthe proper class of spin network states. The entanglement entropy between a partition \n(the system is a closed region) of the spin network is evaluated and we show that it scales as \nthe number of degrees of freedom of the boundary in Section \\ref{entanglement}. \nLoops crossing the boundary are seen \nto be at the origin of this entanglement. Section \\ref{correlations} discusses a \nnecessary generalization for the correlations to be non trivial and for the entanglement \nentropy to scale as the area. \n\n\\section{The toric code model}\n\\label{toriccode_def}\n\nThe toric code model is a topological model of spin $1\/2$ living on the links\nof a general 2D lattice. The anyonic structure of the excitations makes it useful for \nfault-tolerant quantum computation \\cite{Kitaev_2003,Zanardi_2005}. \nThis model can be shown to be equivalent to a $BF$ theory on the discrete \ngroup ${\\mathbb Z}_2$, a highly interesting fact since gravity can be formulated as a \n(constrained) $BF$ theory.\n\nConsidering a bipartite partition of the lattice, it was found that the entanglement \nentropy between those regions were proportional to the boundary area \n(plus a topological term) which is reminiscent of the holographic principle. \nWe will review here the basic results useful for our following discussion \non spin network states.\n\nLet's define $n_s$ and $n_p$ the number of vertex and plaquettes respectively.\nThe dynamic of the model is constructed with the vertex operators \n$A_s = \\bigotimes_{j\\in s} \\sigma_j^x$, tensor product of Pauli matrices with the \nvertex $s$ as a source, and plaquette operators \n$B_p = \\bigotimes_{j\\in\\partial p} \\sigma_j^z$, tensor product of Pauli matrices \naround the plaquette $p$. The Hamiltonian is then \n\\begin{align}\nH = - \\sum_{s=1}^{n_s} A_s - \\sum_{p=1}^{n_p} B_p\n\\end{align}\nWe stress here the fact that the plaquette and vertex operators are subject\nto a particular constraint $\\prod_p B_p = \\prod_s A_s = \\mathbbm{1}$. \nEvery term commute with all the others which makes it easier to find the ground\nstate(s) $\\ket{\\psi_0}$ of the system by looking at the fundamental of each operators (states \ndiagonalizing all the operators with highest eigenvalues)\n\\begin{align}\nA_s\\ket{\\psi_0} = B_p\\ket{\\psi_0} = \\ket{\\psi_0}\n\\end{align}\nThe lowest energy state of the vertex operators can be seen as gas of loops.\nIt implements the Gauss law enforcing gauge invariance at each vertex. 
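As an elementary cross-check of the algebraic relations recalled above (all the terms of $H$ commute with each other and $\\prod_p B_p = \\prod_s A_s = \\mathbbm{1}$), they can be verified numerically on the smallest non-trivial example, a $2\\times 2$ periodic lattice carrying $8$ spins; the edge labelling $h(i,j)$, $v(i,j)$ used in the sketch below is a convention of this illustration only.\n\\begin{verbatim}\nimport numpy as np\nfrom functools import reduce\nfrom itertools import product\n\nI2 = np.eye(2)\nX = np.array([[0., 1.], [1., 0.]])\nZ = np.array([[1., 0.], [0., -1.]])\nL, n_edges = 2, 8                # 2x2 torus: 2*L*L = 8 edges\n\ndef h(i, j): return 2 * ((i % L) * L + (j % L))      # edge leaving (i,j) in x\ndef v(i, j): return 2 * ((i % L) * L + (j % L)) + 1  # edge leaving (i,j) in y\n\ndef pauli_string(op, edges):\n    # tensor product acting with op on the listed edges, identity elsewhere\n    return reduce(np.kron, [op if e in edges else I2 for e in range(n_edges)])\n\nA = [pauli_string(X, {h(i, j), h(i - 1, j), v(i, j), v(i, j - 1)})  # stars A_s\n     for i, j in product(range(L), repeat=2)]\nB = [pauli_string(Z, {h(i, j), h(i, j + 1), v(i, j), v(i + 1, j)})  # plaquettes B_p\n     for i, j in product(range(L), repeat=2)]\n\nfor P, Q in product(A + B, repeat=2):\n    assert np.allclose(P @ Q, Q @ P)                 # every pair of terms commutes\nassert np.allclose(reduce(np.matmul, A), np.eye(2**n_edges))  # prod of all A_s = 1\nassert np.allclose(reduce(np.matmul, B), np.eye(2**n_edges))  # prod of all B_p = 1\n\\end{verbatim}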
\nTaking into account the plaquette operators, which deform smoothly a loop into another, \nthe ground state $\\ket{\\psi_0}$ of the system is a superposition of all\nthose loops with equal weights and imposes the flatness of the ${\\mathbb Z}_2$ holonomy.\nDenoting by $\\mathcal{A}$ the group generated by the vertex operators, we have\n\\begin{align}\n\\ket{\\psi_0} = \\frac{1}{\\sqrt{2^{n_s -1}}} \\sum_{g\\in\\mathcal{A}}g\\ket{0}\n\\end{align}\nIn fact, because a plaquette operator only smoothly deform a loop, the fundamental\nsubspace is degenerate, with dimension $2^{2g}$ for an orientable genus $g$ \nsurface. This degeneracy is at the heart of the topological character of this model \nwhich cannot be lifted by local perturbations.\n\nTo cast thing in the proper form useful for generalizing to gravity (for the \n$\\mathrm{SU}(2)$ group), we write explicitly the expanded form of the ground states\nin terms of loops. Written in terms of projector $\\left( \\mathbbm{1} + B_p \\right) \/ \\sqrt{2}$, \n$\n\\ket{\\psi_0} = \\frac{1}{\\sqrt{2^{n_p +1}}} \\prod_p \\left( \\mathbbm{1} + B_p \\right) \\ket{0}\n$,\nthe loop expansion is straightforward. It simply suffices to \nexpand the product of operators. Thanks to the fact that $\\sigma_i^2 = \\mathbbm{1}$,\nthe product of two plaquette operators $B_{p_1}$ and $B_{p_2}$ sharing one\nlink is equivalent to an operator on the disjoint union of the plaquettes \n$B_{\\partial (p_1 \\cup p_2)}$. We have finally the loop superposition form\n\\begin{align}\n\\label{toriccode_loopstate}\n\\ket{\\psi_0} =\\frac{1}{\\sqrt{2^{n_p -1}}} \\sum_{\\mathcal{C}} \n\\bigotimes_{{\\mathcal{L}\\in\\mathcal{C}}}\\ket{1_{e\\in\\mathcal{L}}, 0_{e\\not\\in\\mathcal{L}}}\n\\end{align}\nHere the set $\\mathcal{C}$ is the set of all configuration of non intersecting \nloops having no links in common.\n\nAs mentioned above, one of the interesting result of the toric code model \nis the area scaling law of the entanglement entropy for the ground state\nbetween a bipartite partition of the lattice \\cite{Zanardi_2005}. \nConsider a region $S$ and its exterior $E$, the global system being in a \nground state, see Fig.\\ref{loops}. \nThe reduced density matrix of the region $S$ is needed to obtain the entropy. \nThe loop structure coming from the state \\eqref{toriccode_loopstate} is composed \nof three kinds of loops, those contained completely in $S$ or $E$ and those\nbelonging to both. We then have the following \nresult (with a detailled proof in annex \\ref{proofs}):\nthe entanglement entropy associated to a \ngiven region $S$ whose (contractible) frontier possesses $n_{SE}$ degrees\nof freedom in the ground state is \n\\begin{align}\nS = n_{SE} - 1\n\\end{align}\nThis entropy is \nproportional to the number of degrees of freedom on the boundary and \nscales as the area. The minus one is a topological contribution and is model\ndependent in some sense \\cite{Preskill_Kitaev_2006}.\n\nWhat we intend to do now is the study in the context of loop\nquantum gravity the class of wavefunction \nhaving the same loop structure as the one for the toric code\nmodel and study the entanglement entropy and correlations they \ncontain.\n\n\n\\section{Definition and properties}\n\\label{spinnetstate_def}\n\n\\subsection{The loop decomposition}\n\nThe holographic principle is one of the few accepted feature every\nquantum theory of gravity should have. 
Simply stated, it says that\nvolume degrees of freedom of a region of spacetime are encoded\non some degrees of freedom on the boundary \\cite{Bousso_2002}.\n We saw that the entanglement entropy of the toric \ncode model has this same behavior. The purpose here is then \nquite simple: we want to adapt the ground state structure of this\nmodel for the spin network state on a given random 2d lattice \nwith edge degrees of freedom fixed to the fundamental spin $1\/2$\nexcitation (this condition can be relaxed by choosing any spin $j$).\n\n\\begin{figure}[h]\n \\centering\n\n\\begin{tikzpicture}[domain=-2.2:2.2]\n\n\\tikzstyle{dot}=[draw,circle,minimum size=2pt,inner sep=0pt,outer sep=0pt,fill=black]\n\n\t\\def.6{.6}\n\t\\def5{5}\n\t\\def6{6}\n\t\\pgfmathsetmacro{\\l}{0.5*.6}\n\t\n\t\\draw[blue, dashed] (1.5*.6,1.5*.6) -- (4.5*.6,1.5*.6) -- (4.5*.6,3.5*.6) -- (1.5*.6,3.5*.6) -- cycle ;\n\t\\node at (4.5*.6,0.5*.6) {\\textcolor{blue}{$\\mathcal{SE}$}};\n\n\t\\draw (0,0) -- (.6,0) -- (.6,.6) -- (0,.6) -- cycle;\n\t\\node at (0.5*.6,4.5*.6) {$\\mathcal{E}$};\n\t\n\t\\draw[red, thick] (.6,4*.6) -- (.6,.6) -- (2*.6,.6) -- (2*.6,0) -- (3*.6,0) -- (3*.6,2*.6) -- (2*.6,2*.6) -- (2*.6,4*.6) -- cycle ;\n\t\\draw[red, thick] (3*.6,4*.6) -- (5*.6,4*.6) -- (5*.6,2*.6) -- (4*.6,2*.6) -- (4*.6,3*.6) -- (3*.6,3*.6) -- cycle ;\n\t\\node at (3.5*.6,2.5*.6) {$\\mathcal{S}$};\n\n\t\\foreach \\i in {0,...,6} {\n\t\t\\foreach \\j in {0,...,5} {\n\t\t\t\t\t\\coordinate[dot, black] () at (\\i*.6,\\j*.6);\n\t\t}\n\t\n}\n\n\\end{tikzpicture}\n\n \\caption{Illustration of one possible loop structure appearing in the superposition \n \t\tdefining the Kitaev state motivated by the ground state structure of the \n \t\ttoric code model. Three kind of loops are distinguished when a subsystem \n \t\tis chosen and only loops crossing the boundary give a non zero entanglement.}\n \\label{loops}\n\\end{figure}\n\nThe natural $\\mathrm{SU}(2)$ gauge invariant object in loop quantum gravity \nis the holonomy, here \n$\\chi_{1\/2}\t\\left(\\prod_{e\\in \\mathcal{L}} g_e \\right) $\nfor a given loop $\\mathcal{L}$ and group element $g_e \\in \\mathrm{SU}(2)$ \nfor each edge. We thus define by analogy the state which has the same loop structure \nthan \\eqref{toriccode_loopstate}. In fact, canonical model of statistical physics\nsuch as the Ising model or $O(N)$ models, hints toward adding new amplitude\ncontribution like a perimeter $P(\\mathcal{C})$ contribution $\\gamma^{P(\\mathcal{C})}$ \nor\/and a number of loops $N(\\mathcal{C})$ contribution $ \\beta^{N(\\mathcal{C})}$.\nSo the natural general states we are interested in are given by the wave function\n %\n\\begin{align}\n\\label{def_state}\n\\psi_{\\alpha, \\beta, \\gamma}(g_e) = \n\t\\sum_{\\mathcal{C}} \\alpha^{A(\\mathcal{C})} \\beta^{N(\\mathcal{C})} \\gamma^{P(\\mathcal{C})}\n\t\t\\prod_{\\mathcal{L}\\in\\mathcal{C}}\n\t\t\t\\chi_{1\/2}\\left(\\prod_{e\\in \\mathcal{L}}^{\\rightarrow} g_e \\right) \n\\end{align}\nA given configuration $\\mathcal{C}$ is composed of non intersecting\nloops $\\mathcal{L}$ having no links in common while $A(\\mathcal{C}) $,\n$N(\\mathcal{C})$ and $P(\\mathcal{C})$ are respectively the total area, the \nnumber of loop and the perimeter of the configuration and $\\alpha,\\beta,\n\\gamma \\in \\C$ are complex amplitudes. \n\nOur goal is to study this class of states, the scaling law of the entanglement\nentropy between a partition of the spin network and then the correlation\ntwo point functions between spins of different edges. 
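As an elementary illustration of \\eqref{def_state}, consider a graph made of a single square plaquette with edges $e_1,\\ldots,e_4$: the only configurations are the empty one and the plaquette loop itself, so that, counting areas and perimeters in lattice units,\n\\begin{align*}\n\\psi_{\\alpha, \\beta, \\gamma}(g_1,\\ldots,g_4) = 1 + \\alpha\\,\\beta\\,\\gamma^{4}\\,\\chi_{1\/2}\\left(g_1 g_2 g_3 g_4\\right) ,\n\\end{align*}\nwhich makes the respective roles of the area, loop-number and perimeter weights transparent.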
We start with the \nsimple case $\\beta=\\gamma=1$ and see that the\nentropy scales as expected with the number of degrees of freedom of the \nboundary. However, the correlations will appear to be topological,\n motivating the introduction of a more general class of states with \namplitudes function of the perimeter of the loops or their number. \n\nIn fact, we could have first thought of a simpler state constructed as a product\nof all holonomies of each plaquette (with a potential contribution from a boundary\nfor a finite size graph) as \n\\begin{align*}\n\\psi(g_e) = \\prod_p \\chi_{1\/2}\\left(\\prod_{e\\in p}^{\\rightarrow} g_e \\right) \n\t\t\t\\chi_{1\/2}\\left(\\prod_{e\\in \\partial}^{\\rightarrow} g_e \\right)\n\\end{align*}\nAt first sight, it would appear that such a state would display some non trivial \ncorrelations. However this is not the case both for the holonomy and spin two\npoint functions in the infinite size limit. We won''t dwell on this state in the \ncore of this paper, see annex \\ref{naive} for more details.\n\n\n\\subsection{Behavior under coarse-graining}\n\nThe state $\\psi_{\\alpha, \\beta, \\gamma}$ have very nice properties under\nsome coarse-graining procedures due the very particular loop structure we \nchose. For a graph $\\Gamma$, one procedure is to simply eliminate a link $e_0$\nconstructing the new graph $\\Gamma\\setminus e_0$ and another is to \npinch the link to a node defining the pinched graph $\\Gamma - e_0$.\n\nFrom the wave function $\\psi_{\\alpha, \\beta, \\gamma}^\\Gamma$, the pure \nelimination of a link is done by a simple average. Here, since the \nloops composing the state are always non overlapping, \nthe integration over $e_0$ amounts to remove all loops containing it.\n\\begin{align}\n\\int_{\\mathrm{SU}(2)}\\psi_{\\alpha, \\beta, \\gamma}^\\Gamma \\left(g_{e_0},g_e\\right) \\; \\mathrm{d} g_{e_0}\n=\\psi_{\\alpha, \\beta, \\gamma}^{\\Gamma\\setminus e_0}(g_e)\n\\end{align}\nThus the coarse-grained state corresponds exactly to the state on the coarse-grained \ngraph $\\Gamma\\setminus e_0$. We have a stability under this coarse-graining procedure. 
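The mechanism behind this stability is the elementary Haar-average identity for a single loop through $e_0$: writing its holonomy as $\\chi_{1\/2}(G_1 g_{e_0} G_2)$, with $G_1$ and $G_2$ the ordered products of the remaining group elements along the loop, one has\n\\begin{align*}\n\\int_{\\mathrm{SU}(2)} \\chi_{1\/2}\\left(G_1\\, g_{e_0}\\, G_2\\right) \\; \\mathrm{d} g_{e_0}\n= \\mathrm{Tr}\\left[ D^{1\/2}(G_2 G_1) \\int_{\\mathrm{SU}(2)} D^{1\/2}(g)\\; \\mathrm{d} g\\right] = 0 ,\n\\end{align*}\nsince the Haar integral of the Wigner matrix $D^{1\/2}$ of any non-trivial representation vanishes, while loops avoiding $e_0$ are untouched by the average.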
\n\n\\begin{figure}[h]\n \\centering\n\n\\begin{tikzpicture}[domain=-2.2:2.2]\n\n\\tikzstyle{dot}=[draw,circle,minimum size=2pt,inner sep=0pt,outer sep=0pt,fill=black]\n\n\t\\def.6{1}\n\t\\def5{1}\n\t\\def6{2}\n\t\n\t\t\\matrix[] () at (0,0) {\n\t\n\t\\draw (.6,0) -- (0,0) -- (0,.6) -- (.6,.6) ;\n\t\\node at (-0.3*.6,0.5*.6) {\\textcolor{blue}{$h_S$}};\n\n\t\\draw (.6,0) -- (2*.6,0) -- (2*.6,.6) -- (.6,.6) ;\n\t\\node at (2.3*.6,0.5*.6) {$h_E$};\n\t\n\t\\draw[red, thick] (.6,0) -- (.6,.6) ;\n\t\\node at (1.2*.6,0.5*.6) {$h_b$};\n\n\t\\foreach \\i in {0,...,6} {\n\t\t\\foreach \\j in {0,...,5} {\n\t\t\t\t\t\\coordinate[dot, black] () at (\\i*.6,\\j*.6);}}\n\t\t\n\t\n\t\t&\n\n\t\t\\draw [->] (-.6\/3,.6\/2) -- (.6\/3,.6\/2) ;\n\t\t\\node at (0,0.3*.6) {$h_b=\\mathbbm{1}$};\n\n\t\t&\n\t\t\n\t\\draw (.6,.6\/2) -- (0,0) -- (0,.6) -- cycle ;\n\t\\node at (-0.3*.6,0.5*.6) {\\textcolor{blue}{$h_S$}};\n\n\t\\draw (.6,.6\/2) -- (2*.6,0) -- (2*.6,.6) -- cycle ;\n\t\\node at (2.3*.6,0.5*.6) {$h_E$};\n\t\n\t\\draw[dotted] (0,0) -- (2*.6,0);\n\t\\draw[dotted] (0,.6) -- (2*.6,.6);\n\t\\draw[dotted] (.6,.6) -- (.6,0);\n\n\t\\coordinate[dot, black] () at (0,0);\t\n\t\\coordinate[dot, black] () at (0,.6);\n\t\\coordinate[dot, black] () at (2*.6,0);\n\t\\coordinate[dot, black] () at (2*.6,.6);\n\t\\coordinate[dot, red] () at (.6,.6\/2); \\\\\n\t};\n\t\n\n\\end{tikzpicture}\n\n\n \\caption{The pinch coarse-graining method is an invariant procedure only for \n the case $\\gamma = 1$, meaning the state doesn't contain perimeter information.}\n \\label{example}\n\\end{figure}\n\nThe second method is to pinch the link. This is done by imposing the holonomy \non $e_0$ to be equal to the identity. The coarse-grained state is $\\left.\n\\psi_{\\alpha, \\beta, \\gamma}^\\Gamma(g_e)\\right|_{g_{e_0} = \\mathbbm{1}}$.\nFor a given loop containing $e_0$, pinching the link doesn't change the area or\nthe number of loops, but only its perimeter. Separating configuration containing \nthe link $e_0$ or not, forming respectively the sets $\\mathcal{C}_0$ \nand $\\mathcal{C}\\setminus e_0$, we have\n\\begin{align}\n\\left.\\psi_{\\alpha, \\beta, \\gamma}^\\Gamma(g_e)\\right|_{g_{e_0} = \\mathbbm{1}} &=\n\t\\sum_{\\mathcal{C}\\setminus e_0}\n\t\t\\alpha^{A(\\mathcal{C})} \\gamma^{P(\\mathcal{C})}\n\t\t\\prod_{\\mathcal{L}\\in\\mathcal{C}}\n\t\t\t\\beta\\chi_{1\/2}\\left(\\prod_{e\\in \\mathcal{L}}^{\\rightarrow} g_e \\right) \\nonumber \\\\\n\t&+\\gamma \\sum_{\\mathcal{C}_0}\n\t\t\\alpha^{A(\\mathcal{C})} \\gamma^{P(\\mathcal{C})}\n\t\t\\prod_{\\mathcal{L}\\in\\mathcal{C}}\n\t\t\t\\beta\\chi_{1\/2}\\left(\\prod_{e\\in \\mathcal{L}}^{\\rightarrow} g_e \\right) \\nonumber\n\\end{align}\nThe invariance under coarse-graining is recovered at the condition that $\\gamma = 1$, \nmeaning that the perimeter of the loops doesn't matter: \n$\\left.\\psi_{\\alpha, \\beta, \\gamma}^\\Gamma(g_e)\\right|_{g_{e_0} = \\mathbbm{1}} =\n\\psi_{\\alpha, \\beta, \\gamma}^{\\Gamma-e_0} (g_e)$.\n\n\n\\section{Entanglement entropy}\n\\label{entanglement}\n\n\\subsection{Entanglement entropy}\n\nThe next step is to compute the entanglement (Von Neuman) entropy \n$S= \\trace{(\\rho_{{\\mathcal S}} \\ln\\rho_{{\\mathcal S}})}$ between a bipartite partition of the graph.\nThe system ${\\mathcal S}$ of interest will be a bounded connected region and \nthe rest of the graph forms the environment ${\\mathcal E}$ whose degrees of \nfreedom are traced out. 
The boundary group elements will be by convention \nincorporated into the system and won't be traced over.\nFor simplicity, we will restrict the evaluation of the entropy for $\\beta= \\gamma = 1$.\n\n\n\nTo compute the entropy of ${\\mathcal S}$, we need its reduced density matrix defined as \n \\begin{align}\n\\rho_{{\\mathcal S}}(\\tilde{g}_e, g_e) = \\int\n\t\\overline{\\psi}_{\\alpha}(\\tilde{g}_{e\\in{\\mathcal S}}, h_{e\\notin {\\mathcal S}}) \n\t\\psi_{\\alpha}(h_{e\\notin {\\mathcal S}}, g_{e\\in{\\mathcal S}}) \\; \\mathrm{d} h_{e\\notin {\\mathcal S}}\n\\end{align}\nThe method to evaluate the entropy $S= -\\trace{(\\rho_{{\\mathcal S}} \\ln\\rho_{{\\mathcal S}})}$\nis based on the replica trick \\cite{Wilczek_1994}. Computing the successive power of the reduced \ndensity matrix $\\rho_{{\\mathcal S}}^n, \\; n\\in{\\mathbb N}$, we then obtain the entropy \nby $S = - \\left.\\frac{\\partial \\trace{\\rho_{\\mathcal S}^n}}{\\partial n}\\right|_{n=1}$.\n\nThe first step is to compute the reduced density matrix.\nDenoting respectively $\\mathcal{C}_S$, $\\mathcal{C}_E$ and $\\mathcal{C}_{SE}$ \nthe loops belonging to $S$, $E$ or both, we have (see annex \\ref{Entanglement Kitaev state})\n\\begin{widetext}\n\\begin{align}\n\\label{reduced_density}\n\\rho_{S}(g,g') = \\frac{\\mathcal{N}_E (\\alpha)}{\\mathcal{N}(\\alpha)} \n\t\t\\sum_{\\substack{\n\t\t\t\t\\mathcal{C}_S \\cup \\mathcal{C}_{SE} \\\\\n\t\t\t\t\\mathcal{C}'_S \\cup \\mathcal{C}_{SE}\n\t\t\t\t\t}\n\t\t\t}\n\t\t\t\\overline{\\alpha}^{A(\\mathcal{C}'_S \\cup \\mathcal{C}_{SE})} \n\t\t\t \\alpha^{A(\\mathcal{C}_S \\cup \\mathcal{C}_{SE})} \n\t\t\t \\times \\prod_{\\substack{\n\t\t\t\t\\mathcal{L}_S\\in\\mathcal{C}_S \\\\\n\t\t\t\t\\mathcal{L}'_S\\in \\mathcal{C}'_S\n\t\t\t\t\t}\n\t\t\t}\n\t\t\t\\chi_{1\/2}\\left( \\mathcal{L}_S (g)\\right)\n\t\t\t\\chi_{1\/2}\\left( \\mathcal{L}'_S (g')\\right)\n\t\t\\prod_{\\mathcal{L}_{SE}\\in\\mathcal{C}_{SE}}\n\t\t\\left(\n\t\t\\frac{1}{2} \n\t\t\t\\chi_{1\/2}\\left( \\mathcal{L}_{SE} (g,g') \\right)\n\t\t\\right)\n\\end{align}\n\\end{widetext}\nwith $\\mathcal{N}(\\alpha)$ the norm and $\\mathcal{N}_E (\\alpha) = \n\\sum_{\\mathcal{C}_E} |\\alpha|^{2A(\\mathcal{C}_E)}$ the factor coming \nout of the partial trace on the environment. We see that two contributions \nappear, one with only loops in $S$ and another coming from loops crossing\nthe boundary. This last term is responsible for the entanglement between\n$S$ and $E$. \n\nFigure \\ref{loops_reduced} shows an example of a configuration appearing \nin the reduced density matrix. To understand simply the form of $\\rho_{\\mathcal S}$, let's imagine\nwe have only two loops configuration, one copy for the bra and ket of \nthe density matrix. Each configuration is composed of non overlapping and \nnon intersecting loops. Nonetheless, each copy can overlap since their are independent.\nNow, tracing out the $E$ degrees of freedom imposes that the parts in $E$ from \neach copies to be exactly the same, otherwise the average\ngives zero. 
Complications come from loop crossing the boundary.\nThe average of the ${\\mathcal E}$ part of crossing loops gives a contribution of the from\n$\\int_{\\mathrm{SU}(2)} \\chi_{1\/2}(gh) \\chi_{1\/2}(g'h) \\; \\mathrm{d} h\n= \\frac{1}{2} \\chi_{1\/2}(gg'^{-1})$.\nThis is at the origin of the boundary holonomies in \\eqref{reduced_density}.\nNow considering again all the allowed configurations, we see that for a given bulk\/boundary \nplaquette choice like in Fig.\\ref{loops_reduced}, their is a huge redundancy\ncoming from the $E$ plaquettes. After the partial trace, this leads to the overall\n$\\mathcal{N}_E (\\alpha)$ prefactor.\n\n\\begin{figure}[h]\n \\centering\n\n\\begin{tikzpicture}[domain=-2.2:2.2, rotate=-90]\n\n\\tikzstyle{dot}=[draw,circle,minimum size=2pt,inner sep=0pt,outer sep=0pt,fill=black]\n\n\t\\def.6{.6}\n\t\\def5{7}\n\t\\def6{6}\n\t\\pgfmathsetmacro{\\l}{0.5*.6}\n\n\t\\draw[blue, dashed] (-0.5*.6, -0.5*.6) -- (-0.5*.6,7.5*.6) -- (6.5*.6,7.5*.6) -- (6.5*.6,-0.5*.6) -- cycle;\n\t\n\t\\draw[red,thick] (-.6,.6) -- (0,.6) ;\n\t\\draw[red,thick] (-.6,3*.6) -- (0,3*.6) ;\n\t\\draw[red,thick] (-.6,6*.6) -- (0,6*.6) ;\n\t\\draw[red,thick] (2*.6,8*.6) -- (2*.6,7*.6) ;\n\t\n\t\\draw[black,thick] (0,.6) -- (0,0) -- (.6,0) -- (.6,3*.6) -- (0,3*.6) ;\n\t\\draw[black,thick] (0,6*.6) -- (3*.6,6*.6) -- (3*.6,7*.6)-- (2*.6,7*.6) ;\n\t\\draw[black] (3*.6,6*.6) -- (6*.6,6*.6) -- (6*.6, 3*.6) -- (3*.6,3*.6) -- cycle ;\n\t\n\t\\draw[green,thick] (0,.6) -- (2*.6,.6) -- (2*.6,2.92*.6) -- (0,2.92*.6) ;\n\t\\draw[green] (3*.6,2*.6) -- (3*.6,0) -- (5*.6,0) -- (5*.6,.6) -- (4*.6,.6) -- (4*.6,2*.6) -- cycle ;\n\t\\draw[green] (5*.6,3.05*.6) -- (5.95*.6,3.05*.6) -- (5.95*.6,4*.6) --(5*.6,4*.6) -- cycle ;\n\t\\draw[green,thick] (0,6*.6) -- (0,5*.6) -- (2*.6,5*.6) -- (2*.6,7*.6) ;\n\n\t\\foreach \\i in {0,...,6} {\n\t\t\\foreach \\j in {0,...,5} {\n\t\t\t\t\t\\coordinate[dot, black] () at (\\i*.6,\\j*.6);\n\t\t}\n\t\n}\n\n\\end{tikzpicture}\n\n\n \\caption{Illustration of one possible loop structure after the partial trace over \n the environment has been performed. The reduced density matrix is not factorized \n anymore and a non trivial boundary contribution leads to entanglement.}\n \\label{loops_reduced}\n\\end{figure}\n\nThe next step is to compute the successive power and take\nthe trace. At the end a simple formula remains, \n\\begin{align*}\n\\trace{\\rho_{S}^n (g,g')}&=\\left( \\frac{\\mathcal{N}_E (\\alpha)}{\\mathcal{N}(\\alpha)} \\right)^{n}\n\t\t\t\t\\mathcal{N}^{n-1}_S (\\alpha)\n\t\t\\sum_{\\mathcal{C}_S \\cup \\mathcal{C}_{SE}} \n\t\t\t\\frac{|\\alpha^{A(\\mathcal{C}_S \\cup \\mathcal{C}_{SE})}|^{2n}}\n\t\t\t\t{\\left(4^{n-1}\\right)^{\\#\\mathcal{C}_{SE}}}\n\\end{align*}\nAppendix \\ref{proofs} presents the detailed calculations. The entropy \nof the region ${\\mathcal S}$ is then directly obtained by differentiation and we have that \nthe leading order term scales as the area of the boundary of the region. 
The\nentanglement entropy is given finally by\n\\begin{align}\n\\label{entropy}\nS =\tn_{SE} f(|\\alpha|^2) +\n\t\\frac{2\\ln 2}{\\left( 1 + |\\alpha|^2 \\right)^{n_{SE}}} \n\t\t\\sum_{\\mathcal{C}_{SE}}\\#\\mathcal{L}_{SE} |\\alpha|^{2A(\\mathcal{L}_{SE})} \n\\end{align}\nwith $n_{SE}$ the number of degrees of freedom at the boundary \nand $f(|\\alpha|^2) = \\ln\\left( 1 + |\\alpha|^2 \\right) - \n\\frac{|\\alpha|^2 }{ 1+ |\\alpha|^2 } \\ln\\left( |\\alpha|^2 \\right) $.\nThis formula is quite general and is valid for an arbitrary graph as long as the \nloop structure of the spin network state is the same.\n\n\\subsection{Boundary degrees of freedom - Purification}\n\nWe came to understand that the entanglement in the subsystem $\\mathcal{S}$\nprepared in the state \\eqref{reduced_density} can be traced back to loops\ncrossing the boundary. In fact, we can understand the state \\eqref{reduced_density} \nas resulting from tracing out additional boundary degrees of freedom. This idea\ngoes in the same spirit as recent studies on local subsystems in gauge theories and \ngravity \\cite{Freidel_2016,Freidel_2016_2}.\n\nTo purify the state, consider at each puncture a new degree of freedom, for \ninstance a new fictitious edge. We work in the extended Hilbert space \n$\\mathcal{H}_{\\mathcal{S}} \\otimes \\mathcal{H}_e^{\\otimes N}$ with $\\mathcal{H}_e$\nthe Hilbert space associated to the new edge, $N$ the total number \nof puncture and $\\mathcal{H}_{\\mathcal{S}}$ the Hilbert space of the system. \nWe then construct a pure state as superposition of loops in the bulk and \npaths joining pairs of punctures, see for instance FIG.\\ref{loops_reduced}.\nFor a path $\\mathcal{P}$, we use the holonomy (properly oriented)\n\\begin{align}\n\\chi_{1\/2}\\left(\\mathcal{P}\\right) = \\chi_{1\/2}(h_{s_b}g_{\\mathcal{S}}h_{t_b})\n\\end{align}\nwith $h_{s_b,t_b}$ associated to the pair of boundary degrees of freedom, the source\nand target of the path respectively. The reduced density matrix \\eqref{reduced_density}\ncan then be purified by considering the state $\\ket{\\psi_{\\mathcal{S}B}}$ (B for boundary)\nwith wave-function\n\\begin{align}\n\\psi_{\\mathcal{S}B}(g) = \n\t\\sum_{\\mathcal{C}} \\alpha^{A(\\mathcal{C})} \\beta^{N(\\mathcal{C})} \\gamma^{P(\\mathcal{C})} \n\t\t\\prod_{\\mathcal{L}\\in\\mathcal{C}}\n\t\t\t\\chi_{1\/2}\\left(\\mathcal{L} \\right) \n\t\t\\prod_{\\mathcal{P}\\in\\mathcal{C}}\n\t\t\t\\chi_{1\/2}\\left(\\mathcal{P} \\right) \n\\end{align}\nThen $\\rho_\\mathcal{S} = \\trace{\\left( \\ket{\\psi_{\\mathcal{S}B}}\\bra{\\psi_{\\mathcal{S}B}} \\right)}$.\nWe have purified the reduced density matrix of the system. One could argue that their are\nmany ways to purify a quantum state and could question its relevance here. After all the original\nstate \\eqref{def_state} is a perfectly valid purification. What is really interesting here is the method. \nWe can think of the local subsystem on its own by doubling the boundary degrees of freedom \nand construct pure state in an extended Hilbert space. The physical state is recovered by \ntracing out the additional boundary degrees of freedom. This match exactly the results of\n\\cite{Freidel_2016} by a direct analysis of the reduced density matrix of a sub-region \nof the spin network state. 
Naturally we here have no particular information on those\nadditional degrees of freedom and they should be determined by a proper analysis of\nboundary terms in the classical and quantum theory.\n\nThis elementary discussion illustrates simply the fact that the extended Hilbert space\nmethod can been seen from a quantum information perspective as a clever way \nto purify a state of a local region and consequently why it has something to say \nabout entanglement, correlations and entropy.\n\n\n\\subsection{On correlations}\n\nThis holographic behavior is a good sign for this class of states to be good candidates \nfor physical states solutions of the Hamiltonian constraint of loop quantum gravity. \nWhat's more, for physical solutions, we expect the correlations between geometrical\nobservables to be non trivial. This is where the limit $\\beta= \\gamma = 1$ fails.\nIndeed, the spin (or holonomies) two points correlation functions are topological in the \nsense that they do not depend on the graph distance between the edges. \n\nLet's look for instance at the spin two points functions \n$\\langle \\hat{j}_e \\hat{j}_{e'} \\rangle - \\langle \\hat{j}_e \\rangle \\langle \\hat{j}_{e'} \\rangle$. \nThis spin operator is defined by its action on a spin network state with the help\nof the Peter-Weyl theorem $(\\hat{j}_e \\psi)(g,g_e) = \\sum_{j_e} (2 j_e +1) j_e\n\\int \\chi_{j_e}(g_eh^{-1})\\psi(g,h) \\;\\mathrm{d} h$. The method to evaluate the averages \ngoes as follows. First we have only the spin $1\/2$ component of the average that \ngives a non zero contribution, so that we have\n\\begin{align*}\n\\langle \\hat{j}_e \\rangle \n&= \\frac{\t\\int_{SU(2)}\\! \n\t\t\\chi_{1\/2}(g_eh_e^{-1}) \n\t\t\\psi_{\\alpha}(g, h_e) \\psi_{\\alpha}(g_e, g) \n\t\\; \\mathrm{d} g \\mathrm{d} h_e \\mathrm{d} g_e }\n\t{\\mathcal{N}(\\alpha)}\n\\\\\n&= \\sum_{\\mathcal{C, C'}}\\alpha^{A(\\mathcal{C})} \\bar{\\alpha}^{A(\\mathcal{C}')} \n \t\\int_{SU(2)}\\!\n\t\t\\chi_{1\/2}(g_eh_e^{-1}) \\\\\n\t\t&\\times \\prod_{\\mathcal{L}\\in\\mathcal{C}}\\chi_{1\/2}\\left( h_e, g \\right) \n\t\t\\prod_{\\mathcal{L}'\\in\\mathcal{C}'}\\chi_{1\/2}\\left(g_e, g \\right) \n\t\\; \\mathrm{d} g \\mathrm{d} h_e \\mathrm{d} g_e\n\\end{align*}\nWe integrate over $g_e$. If $g_e \\notin \\mathcal{C}'$ the integral gives zero. Otherwise we have simply \n$ \\int_{SU(2)}\\! \\chi_{1\/2}(g_eh_e^{-1}) \\chi_{1\/2}\\left(g_eh\\right)\\; \\mathrm{d} g_e = \\frac{1}{2}\\chi_{1\/2}(hh_e)$ ; \nsubstitute $h_e$ for $g_e$ with a factor one half. Finally, denoting by $\\mathcal{C}'_e$ a configuration of loops \ncontaining the link $e$\n\\begin{align}\n\\langle \\hat{j}_e \\rangle \n&= \\frac{1}{2 \\mathcal{N}(\\alpha)}\n\t\\sum_{\\mathcal{C}, \\mathcal{ C}'_e}\n\t\t\\alpha^{A(\\mathcal{C})} \\bar{\\alpha}^{A(\\mathcal{C}'_e)} \\\\\n\t\t&\\prod_{\\mathcal{L} \\in \\mathcal{C}, \\mathcal{L}'_e \\in \\mathcal{ C'}_e}\n\t\t\t\\underbrace{\n\t\t\t\t \\int_{SU(2)}\\!\n\t\t\t\t\t\\chi_{1\/2}\\left(\\prod_{e\\in \\mathcal{L}}^{\\rightarrow} g_e \\right) \n\t\t\t\t\t\\chi_{1\/2}\\left(\\prod_{e\\in \\mathcal{L}'_e}^{\\rightarrow} g_e \\right) \n\t\t\t\t\\; \\mathrm{d} g_e}\n\t\t\t_{=0 \\text{ unless } \\mathcal{L}=\\mathcal{L}'_e} \\nonumber \\\\\n&= \\frac{1}{2\\mathcal{N}(\\alpha)} \n\t\\sum_{\\mathcal{ C}_e}\n\t\t |\\alpha|^{A(2\\mathcal{C}_e)} \n = \\frac{\\left|\\alpha\\right|^{2}}{\\left( 1 + \\left|\\alpha\\right|^{2} \\right)^2}\n\\end{align}\nThe explicit evaluation of $\\langle \\hat{j}_e \\hat{j}_{e'} \\rangle$ follows the same \nsteps. 
Distinguishing the two cases when the spins belong to the same loop or not, \nsee FIG.\\ref{correlations_example}, we have respectively \n$\n\\langle \\hat{j}_e \\hat{j}_{e'} \\rangle\n= \\frac{1}{4} \\frac{\\left|\\alpha\\right|^{2}}{\\left( 1 + \\left|\\alpha\\right|^{2} \\right)^2} \n$ \nand \n$\n\\langle \\hat{j}_e \\hat{j}_{e'} \\rangle\n=\\frac{\\left|\\alpha\\right|^{4}}{\\left( 1 + \\left|\\alpha\\right|^{2} \\right)^4} \n$. \nIn both cases, the correlation $\\langle \\hat{j}_e \\hat{j}_{e'} \\rangle = \n\\langle \\hat{j}_e \\rangle \\langle \\hat{j}_{e'} \\rangle $ is not in any way a function \nof the distance between the edges which is particularly clear when the edges don't \nbelong to the same loop where the correlation is strictly zero. \n\n\\begin{figure}[h]\n \\centering\n \n \\begin{tikzpicture}[domain=-2.2:2.2]\n\n\\tikzstyle{dot}=[draw,circle,minimum size=2pt,inner sep=0pt,outer sep=0pt,fill=black]\n\n\t\\def.6{.6}\n\t\\def5{3}\n\t\\def6{4}\n\t\\pgfmathsetmacro{\\l}{0.5*.6}\n\t\n\t\\matrix[] () at (0,0) {\n\t\n\t\\draw (0,0) -- (.6,0) -- (.6,.6) -- (0,.6) -- cycle;\n\t\\draw (3*.6,2*.6) -- (3*.6,3*.6) -- (4*.6,3*.6) -- (4*.6,2*.6) -- cycle;\n\t\\node at (-0.5*.6,0.5*.6) {$e$};\n\t\\node at (4.5*.6,2.5*.6) {$e'$};\n\n\t\\foreach \\i in {0,...,6} {\n\t\t\\foreach \\j in {0,...,5} {\n\t\t\t\t\t\\coordinate[dot, black] () at (\\i*.6,\\j*.6);\n\t\t}\n\t\n\t}\n\t\n\t\t&\n\t\t\\hspace{-0.5cm}\n\t\t&\n\n\t\t\n\t\\draw (0,0) -- (4*.6,0) -- (4*.6,3*.6) -- (3*.6,3*.6) -- (3*.6,.6) -- (0,.6) -- cycle;\n\t\\node at (-0.5*.6,0.5*.6) {$e$};\n\t\\node at (4.5*.6,2.5*.6) {$e'$};\n\n\t\\foreach \\i in {0,...,6} {\n\t\t\\foreach \\j in {0,...,5} {\n\t\t\t\t\t\\coordinate[dot, black] () at (\\i*.6,\\j*.6);\n\t\t}\n\t\n\t}\n \\\\\n\t};\n\t\n\n\\end{tikzpicture}\n\n \\caption{Trivial correlations arise because their is no distinction between configurations\n presented is the figure.}\n \\label{correlations_example}\n\\end{figure}\n\nFrom the structure of state, we should have naively expected the \ncorrelations to scale in some way as the graph distance between the edges \n$e$ and $e'$. This is in fact not the case since the averages counts every loops\nmeeting the edges in a democratic way (both configuration in FIG.\\ref{correlations_example}\ngive the same correlations). \nIntroducing a contribution to the amplitude proportional for instance to the number of loops \ncan be a solution to this issue. 
The limit $\\beta= \\gamma = 1$ has thus to be reconsidered \nto account for non trivial correlations.\n\n\\subsection{Example}\n\n\\begin{figure}[h]\n \\centering\n\n\\begin{tikzpicture}[domain=-2.2:2.2]\n\n\\tikzstyle{dot}=[draw,circle,minimum size=2pt,inner sep=0pt,outer sep=0pt,fill=black]\n\n\t\\def.6{1}\n\t\\def5{1}\n\t\\def6{2}\n\t\n\t\\draw[blue] (.6,0) -- (0,0) -- (0,.6) -- (.6,.6) ;\n\t\\node at (-0.3*.6,0.5*.6) {\\textcolor{blue}{$h_S$}};\n\n\t\\draw (.6,0) -- (2*.6,0) -- (2*.6,.6) -- (.6,.6) ;\n\t\\node at (2.3*.6,0.5*.6) {$h_E$};\n\t\n\t\\draw[red, thick] (.6,0) -- (.6,.6) ;\n\t\\node at (1.2*.6,0.5*.6) {$h_b$};\n\n\t\\foreach \\i in {0,...,6} {\n\t\t\\foreach \\j in {0,...,5} {\n\t\t\t\t\t\\coordinate[dot, black] () at (\\i*.6,\\j*.6);\n\t\t}\n\t\n}\n\n\\end{tikzpicture}\n\n\n \\caption{Illustrative example for the evaluation of the entanglement entropy for a two \n\t\tloops state.}\n \\label{example}\n\\end{figure}\n\nAs an illustrative example, consider a two loops state whose wave function is \n$\\psi(h_S,h_b,h_E) = 1 + \\alpha\\chi(h_S h_b) + \\alpha \\chi(h_E h_b^{-1}) + \\alpha^2 \\chi(h_S h_E)$. \nThe reduced density matrix, obtained by taking two copies of the state and tracing out over\nthe environment has the form\n\\begin{align}\n&\\rho_{S}(h_b,h_S, h_b',h'_S) = \\int \\psi^*(h'_S,h'_b,h_E) \\psi(h_S,b,h_E) \\: \\mathrm{d} h_E \\nonumber \\\\\n&= 1 + \\alpha \\chi(h_S h_b) + \\overline{\\alpha} \\chi(h'_S h_b') + |\\alpha|^2 \\chi(h_S h_b)\\chi(h'_S h_b') \\nonumber \\\\\n&+\\frac{|\\alpha|^2}{2} \n\t\\Big{[} \\chi(h_bh_b'^{-1}) + \\alpha \\chi(h_bh'^{-1}_S) + \\overline{\\alpha} \\chi(h_S h_b'^{-1}) \\nonumber \\\\\n&\\qquad\\qquad\t+ |\\alpha|^2 \\chi(h_Sh'^{-1}_S)\n\t\\Big{]}\n\\end{align}\nThe computation of the successive power of the reduced density matrix and \nthe trace in then straightforward. We have $\\trace{\\rho_{S}^n (g,g')} = \n\\frac{\\left( 1 + |\\alpha|^2 \\right)^n}{\\mathcal{N}(\\alpha)} \n\\left( 1 + \\frac{|\\alpha|^{2n}}{4^{n-1}} \\right)$ and the entanglement\nentropy follows formula \\eqref{entropy}.\n\n\\section{Finding non trivial correlations}\n\\label{correlations}\n\n\\subsection{Distinguishing loops}\n\nWe saw in the last section why correlations were trivial for the restricted state\nstudied for entanglement entropy. This was coming from the fact that their \nwas non distinction between loops passing through both links or not, see Figure \n\\ref{correlations_example}. To understand how the solution comes about, \nlet's look first at a simpler state constructed as the superposition of \nsingle loop holonomy\n\\begin{align}\n\\label{oneloopstate}\n\\psi(g_e) = \\sum_{\\mathcal{L}} \\alpha ^{A(\\mathcal{L})}\n\t\t\t\\chi_{1\/2}\\left(\\prod_{e\\in \\mathcal{L}}^{\\rightarrow} g_e \\right) \n\\end{align}\nThis term is the first non trivial term of \\ref{def_state} in an expansion of \n$\\beta$ (for $\\gamma =1$). The spin two points correlation function is \nstraightforwardly evaluated as \n\\begin{align}\n\\langle \\hat{j}_e \\hat{j}_{e'} \\rangle &= \n\t\\int \n\t\t\\chi_{1\/2}\\left( g_e h_e^{-1} \\right) \\chi_{1\/2}\\left( g_{e'} h_{e'}^{-1} \\right)\t \n\t\t\\nonumber \\\\\n\t\t&\\psi(h_e,h_{e'},g) \\psi(g_e,g_{e'},g) \\; \\mathrm{d} g \\mathrm{d} h_{e,e'} \\mathrm{d} g_{e,e'} \\nonumber \\\\\n\t&= \\frac{1}{4} \\frac{\\mathcal{N}(e,e')}{\\mathcal{N}}\n\\end{align}\nwith $\\mathcal{N}(e,e') = \\sum_{\\mathcal{L}_{ee'}}|\\alpha|^{A(\\mathcal{L}_{ee'})}$ is a sum \nover all loops passing through both edge $e$ and $e'$. 
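To see concretely how this ratio behaves, here is a small numerical toy (a sketch under strong simplifying assumptions, not the full sum of the paper): the loops are restricted to the axis-aligned rectangles of a finite square lattice, and each loop is weighted by $|\\alpha|^{2A(\\mathcal{L})}$, in line with the $|\\alpha|^{A(2\\mathcal{C}_e)}$ factors used above; the lattice size, the value of $\\alpha$ and all helper names are purely illustrative.
\\begin{verbatim}
import numpy as np
from itertools import product

# Toy model: loops = axis-aligned rectangles on a size x size square lattice,
# each weighted by |alpha|^(2*A) with A the enclosed area.
size, alpha = 12, 0.5

def rectangle_edges(x0, y0, x1, y1):
    """Lattice edges forming the perimeter of the rectangle [x0,x1] x [y0,y1]."""
    horizontal = {((x, y), (x + 1, y)) for x in range(x0, x1) for y in (y0, y1)}
    vertical = {((x, y), (x, y + 1)) for y in range(y0, y1) for x in (x0, x1)}
    return horizontal | vertical

loops = []
for x0, x1, y0, y1 in product(range(size + 1), repeat=4):
    if x0 < x1 and y0 < y1:
        weight = abs(alpha) ** (2 * (x1 - x0) * (y1 - y0))
        loops.append((weight, rectangle_edges(x0, y0, x1, y1)))

def correlation(e, ep):
    """<j_e j_e'> = (1/4) N(e,e') / N for the single-loop superposition."""
    norm = sum(w for w, _ in loops)
    both = sum(w for w, edges in loops if e in edges and ep in edges)
    return 0.25 * both / norm

e = ((0, 0), (0, 1))                 # a fixed vertical edge
for d in range(1, 6):                # move e' away horizontally
    ep = ((d, 0), (d, 1))
    print(d, correlation(e, ep))     # decays roughly like |alpha|^(2*d)
\\end{verbatim}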
Now in this case, the correlations \nwill scale non trivially on the minimal area between the edges since we must consider loops \npassing through both links at the same time. Here is the main difference between this state\nand the previous one. \n\nWe can go even further and analyze the entanglement entropy of a local region for this\nstate. In fact, the computation is completely similar to the one presented in \n\\ref{entanglement}. However, the entanglement entropy doesn't follow an area law,\ndoesn't even scale as a function of the boundary degrees of freedom. Indeed, the \narea scaling came from the term \n$\\ln\\left( \\frac{\\mathcal{N}}{\\mathcal{N}_S\\mathcal{N}_E} \\right)$ and \nthe multiplicative nature of $\\mathcal{N} = \\mathcal{N}_S\\mathcal{N}_E\\mathcal{N}_{SE}$\nwhereas for \\ref{oneloopstate}, $\\mathcal{N}$ is additive. Thus the same \ncontribution $\\ln\\left( \\frac{\\mathcal{N}}{\\mathcal{N}_S\\mathcal{N}_E} \\right)$ is not \nonly a function of the boundary degrees of freedom.\n\n\n\\subsection{The proposal}\n\nThe previous discussions show that two ingredients are necessary to \nobtain states with non trivial correlations and an entanglement entropy \nfor a localized region to scale as the area of the boundary (at least to be \na function of boundary degrees of freedom). Area law entanglement entropy\ncame from the loop structure of the toric code model, more precisely\nall configurations of non intersecting and overlapping loops enter the superposition.\nNon trivial correlations came on the other hand from the fact that a clear distinction \nbetween loops passing by both edges $e$ and $e'$ and those that do not was \nmade. The solution presents itself when we come back to our original state \n\\eqref{def_state} with amplitude scaling as the area and the number of \nloops (we omit the perimeter contribution since it can obstruct coarse-graining \ninvariance),\n %\n \\begin{align}\n \\label{proposal}\n\\psi_{\\alpha, \\beta }(g_e) &= \n\t\\sum_{\\mathcal{C}}\\alpha^{A(\\mathcal{C})}\n\t\t\\prod_{\\mathcal{L}\\in\\mathcal{C}} \n\t\t\t\\left( \\beta \\chi_{1\/2}\\left(\\prod_{e\\in \\mathcal{L}}^{\\rightarrow} g_e \\right) \\right)\n\\end{align}\nThe two requirements are here met.\nThe $\\beta$ factor corresponds exactly to a number of loops contribution. It is\nstraightforward to generalize the following discussion to an arbitrary superposition\nof holonomies $f(g) = \\sum_{j=1\/2}^{\\infty} p_j \\chi_j(g) $; the formal expressions\nremains the same as before. \n\nLet's review its features. Concerning correlations, we can now distinguish \na dominant term in the two points function. Indeed, what \nrendered the correlations topological initially was that their was no\ndistinction between the cases when the edges $e$ and $e'$ belong to the same loop\nor different ones. 
With the additional $\\beta$ contribution, we can now pinpoint a \ndominant term which is the one with minimal area and only one loop connecting the edges.\n\\begin{figure}[h]\n \\centering\n\n\\begin{tikzpicture}[domain=-2.2:2.2]\n\n\\tikzstyle{dot}=[draw,circle,minimum size=2pt,inner sep=0pt,outer sep=0pt,fill=black]\n\n\t\\def.6{.6}\n\t\\def5{6}\n\t\\def6{7}\n\t\\pgfmathsetmacro{\\l}{0.5*.6}\n\t\n\t\\draw[red,thick] (5*.6,5*.6) -- ( 6*.6,5*.6) ;\n\t\\node at (5.5*.6,5.5*.6) {$e'$};\n\n\t\\draw[red, thick] ( .6,.6) -- (.6,2*.6) ;\n\t\\node at (0.5*.6,1.5*.6) {$e$};\n\t\n\t\\draw[blue] (.6,.6) -- (6*.6,.6) -- (6*.6,5*.6);\n\t\\draw[blue] (.6,2*.6) -- (5*.6, 2*.6) -- (5*.6,5*.6);\n\t\n\t\\draw[green] (.6,2*.6) -- (.6,5*.6) -- (5*.6,5*.6) ;\n\t\\draw[green] (.6,1.05*.6) -- (2*.6,1.05*.6) -- (2*.6, 4*.6) -- (5.95*.6,4*.6) -- (5.95*.6, 5*.6) ;\n\t\n\t\\draw (.6, 0.95*.6) -- (3*.6,0.95*.6) -- (3*.6,1.95*.6) -- (4*.6, 1.95*.6) -- (4*.6,3*.6) -- (6.05*.6,3*.6) -- (6.05*.6,5*.6);\n\t\\draw (.6,1.95*.6) -- (2.05*.6, 1.95*.6) -- (2.05*.6, 2.95*.6) -- (3*.6,2.95*.6) -- (3*.6, 3.95*.6) -- \n\t\t\t(5.05*.6, 3.95*.6) -- (5.05*.6, 5*.6);\n\n\t\\foreach \\i in {0,...,6} {\n\t\t\\foreach \\j in {0,...,5} {\n\t\t\t\t\t\\coordinate[dot, black] () at (\\i*.6,\\j*.6);\n\t\t}\n\t\n}\n\n\\end{tikzpicture}\n\n\n \\caption{Set of possible minimal paths (not all drawn) joining the edges $e$ and $e'$ in red.}\n \\label{distance}\n\\end{figure}\nDenoting by $L_x$ and $L_y$ the horizontal and vertical graph distance respectively\nconnecting two given links $e$ and $e'$, the number of such minimal loops is ${A_{\\text{min}} \\choose L_y }$.\nThus, the dominant contribution to the spin correlations is\n\\begin{align}\n\\langle \\hat{j}_e \\hat{j}_{e'} \\rangle \n\t&= \\frac{1}{4} |\\beta| ^2 |\\alpha|^{2A_{\\text{min}}}\n\t\t{A_{\\text{min}} \\choose L_y } + o(|\\beta| ^2 ,|\\alpha|^{2A_{\\text{min}}}) \\\\\n\t&\\underset{N \\rightarrow +\\infty}{=} \\frac{1}{4} |\\beta| ^2 (2|\\alpha|^2)^{A_{\\text{min}}}\n\t\t\\frac{\\mathrm{e}^{-\\frac{(L_x-L_y)^2}{2A_{\\text{min}}}}}{\\sqrt{A_{\\text{min}}\\pi\/2}}\n\\end{align}\nThe correlations are now non topological. The correlations are maximum when the number \nof minimal paths joining the edges is maximal. This can be seen as entropic competition \nbetween the number of paths linking the edges $e$ and $e'$ and an energetic term \n$|\\alpha|^{2A_{\\text{min}}}$. The more connected the edges are the more correlated\nthey are. In the light of the distance from correlation point of view \n\\cite{Livine_Terno_2005, Feller_2015}, the edges get closer when more \ndifferent minimal paths of the graph exist.\n\nThe entropy can be obtained following the same steps as in Section \\ref{entanglement}\nby computing the successive power of the reduced density matrix and by employing \nthe replica formula for the Von Neumann entropy. We have in the end an entanglement \nentropy function only of the boundary degrees of freedom, \n\\begin{align}\nS &= \\ln(\\mathcal{N}_{SE}(\\alpha,\\beta)) \n- \\frac{1}{\\mathcal{N}(\\alpha,\\beta)}\n\t\\sum_{\\mathcal{C}_{SE}} \n\t\t\\left[ A(\\mathcal{C}_{SE}) \\ln\\left(|\\alpha^2|\\right) \\right. \\nonumber \\\\\n\t\t&+ \n\t\t\\left. 
N(\\mathcal{C}_{SE})\\ln\\left(\\frac{|\\beta|^2}{4}\\right) \\right]\n\t\t|\\alpha|^{2A(\\mathcal{C}_{SE})}|\\beta|^{2N(\\mathcal{C}_{SE})}\n\\end{align}\nIn the special case where $\\alpha=1$ and $|\\beta|=2$, we have a very simple \nexpression for the entanglement entropy, \n\\begin{align}\nS = \\ln(\\mathcal{N}_{SE}(\\alpha,\\beta))\n\\end{align}\nSo in the end, we see that the state \\eqref{proposal} is a good candidate \nto be a physical state, at least mirrors some features the true physical \nsolution of the Hamiltonian constraint might be, since it as correlations \nthat are function of some measure of distance in the graph through the \nminimal area and as an area law scaling entanglement entropy.\n\n\n\n\\section{Conclusion}\n\\label{conclusion}\n\nIn this paper, we introduced a class of states in loop quantum gravity whose\nentanglement entropy for a bounded region scales as the area of the boundary \n(number of degrees of freedom) and whose correlation functions between distant spins are\nnon trivial. Its structure is motivated by a condensed matter model, Kitaev's toric code \nmodel, where the ground state can be seen as a gas of loops on the lattice. Our \nansatz mimics this structure, being defined as a superposition of non intersecting loops \nof arbitrary size. To each configuration, an amplitude function of the area, the perimeter \nor the number of loops is considered.\n\nWe showed that indeed the entanglement entropy of a region scales as the area of its boundary\nusing the replica trick method. The source of entanglement is seen to be exclusively due to \nloops crossing the boundary and the fact that the entanglement depends only on \nboundary degrees of freedom depends on the configuration structure. This analysis\nserves also to illustrate extended Hilbert space ideas coming from research on local \nsubsystems in gauge and gravity theories by seeing it as a clever way to purify a state.\nOn the side of the correlations, their non triviality come from \nthe fact that some loops are distinct from the other . What's more, we showed that \ncorrelations grow as the number of minimal path joining the two spins is larger. \n\nThe idea behind those kind of investigations is to have a clearer understanding\nof the physical states of quantum gravity solving, ideally, all the constraints of \nthe theory. From there we could infer the structure of the Hamiltonian constraints \nthey are solution of pointing then toward the structure of the true quantum \nHamiltonian constraint. Indeed, constructing a good Hamiltonian constraint \nis still under active research and we expect those retro-engineering studies\nto open new perspectives.\n\nAt the end of the day, the goal would be to weave the standard loop quantum gravity techniques for designing quantum states of geometry by the action of holonomy operators and volume excitations with the MERA vision of local unitaries and (dis-)entangling operations, in order to understand the structure of (local) holographic states in (loop) quantum gravity.\n\n\\bibliographystyle{bib-style}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nTheoretical studies of the CMB have shown that the accurate\nmeasurement of the CMB anisotropy spectrum $C^T_\\ell$ with future\nspace missions such as {\\sc Planck} will allow for tests of cosmological\nscenarios and the determination of cosmological parameters with\nunprecedented accuracy. 
Nevertheless, some near degeneracies between\nsets of cosmological parameters yield very similar CMB temperature\nanisotropy spectra. The measurement of the CMB polarization and the computation of\nits power spectrum \\cite{seljak96b,zaldarriagath} may lift to some extent some of these degeneracies.\nIt will also provide additional information on the reionization epoch and\non the presence of tensor perturbations, and may also help in the\nidentification and removal of polarized astrophysical foregrounds\n\\cite{kinney98,kamionkowski98,prunet99}.\n\n\nA successful measurement of the CMB polarization stands as an\nobservational challenge; the expected polarization level is of the order of\n$10\\%$ of the level of temperature fluctuations ($\\Delta T\/T \\simeq\n10^{-5}$). Efforts have thus gone into developing techniques to\nreduce or eliminate spurious non-astronomical signals and instrumental\nnoise which could otherwise easily wipe out real polarization signals.\nIn a previous paper \\cite{couchot98}, we have shown how to configure\nthe polarimeters in the focal plane in order to minimize the errors on\nthe measurement of the Stokes parameters. In this paper, we address the problem of\nlow frequency noise.\n\nLow frequency noise in the data streams can arise due to a wide range\nof physical processes connected to the detection of radiation. $1\/f$\nnoise in the electronics, gain instabilities, and temperature\nfluctuations of instrument parts radiatively coupled to the detectors,\nall produce low frequency drifts of the detector outputs.\n\\begin{figure}[ht]\n \\begin{center}\n \\epsfig{ file=spectre_bruit_paper.eps,width=8.5cm}\n \\end{center}\n \\caption{The power spectrum of the K34 bolometer from Caltech, the same type of bolometer\n planned to be used on the {\\sc Planck} mission. The measurement was performed at the SYMBOL test bench\n at I.A.S., Orsay (supplied by Michel Piat). The knee frequency of this spectrum is $\\sim 0.014$ Hz \n and the planned spin frequency for {\\sc Planck} is $0.016$ Hz. \n We can model the spectrum as the function\n ${S(f)=1+\\left( \\frac{1.43 \\times 10^{-2}}{f} \\right) ^2}$ (dashed line).\n \\label{spectrebruit}}\n\\end{figure}\nThe spectrum of the total noise can be modeled as\na superposition of white noise and components behaving like\n$1\/f^{\\alpha}$ where $\\alpha \\ge 1$, as shown in Fig.\n\\ref{spectrebruit}.\n\nThis noise generates stripes after reprojection on maps, whose exact form depends on the\nscanning strategy. If not properly subtracted, the effect of such\nstripes is to degrade considerably the sensitivity of an experiment.\nThe elimination of this ``striping'' may be achieved using\nredundancies in the measurement, which are essentially of two types for\nthe case of {\\sc Planck}:\n\n\\begin{itemize}\n\\item each individual detector's field of view scans the sky on large\n circles, each of which is covered consecutively many times ($\\sim 60$) \n at a rate of about ${f_\\mathrm{spin}\\sim 1}$~rpm. This\n permits a filtering out of non scan-synchronous fluctuations in the\n circle constructed from averaging the consecutive scans.\n\\item a survey of the whole sky (or a part of it) involves many such\n circles that intersect each other (see Fig. 
\\ref{inter}); the exact\n number of intersections depends on the scanning strategy but is of\n the order of $10^8$ for the {\\sc Planck} mission: this will allow to\n constrain the noise at the intersection points.\n\\end{itemize}\n\n\\begin{figure}[ht]\n \\begin{center}\n \\epsfig{file=circles.epsi,bbllx=0,bblly=50,bburx=799,bbury=449,width=8.5cm}\n \\end{center}\n \\caption{The Mollweide projection of $3$ intersecting circles. For clarity, the scan angle \n between the spin axis and the main beam axis is set to $60^\\circ$ for this figure.\n \\label{inter}}\n\\end{figure}\n\nOne of us \\cite{delabrouille98a} has proposed to remove low frequency drifts for\nunpolarized data in the framework of the {\\sc Planck} mission\nby requiring that all measurements of a single\npoint, from all the circles intersecting that point, share a common\nsky temperature signal. The problem is more complicated in the case of\npolarized measurements since the orientation of a polarimeter with\nrespect to the sky depends on the scanning circle. Thus, a given polarimeter\ncrossing a given point in the sky along two different circles will not\nmeasure the same signal, as illustrated in Fig. \\ref{deuxpol}.\n\n\\begin{figure}[ht]\n \\begin{center}\n \\epsfig{ file=interpol.eps, width=5cm}\n \\end{center}\n \\caption{\n The orientation of polarimeters at an intersection point. This\n point is seen by two different circles corresponding to two\n different orientations of the polarimeters in the focal plane.\n For clarity, we have just represented one polarimeter.\n \\label{deuxpol}}\n\\end{figure}\n\nThe rest of the paper is organized as follows: in Sect. \\ref{noise},\nwe explain how we model the noise and how low frequency drifts\ntransform into offsets when considering the circles instead of\nindividual scans. In Sect. \\ref{skypol}, we explain how polarization\nis measured. The details of the algorithm for\nremoving low-frequency drifts are given in Sect. \\ref{algo}. We\npresent the results of our simulations in Sect. \\ref{simul} and give\nour conclusions in Sect. \\ref{resul}.\n\n\\section{Averaging noise to offsets on circles}\\label{noise}\n\nAs shown in Fig. \\ref{spectrebruit}, the typical noise spectrum expected for\nthe {\\sc Planck} High Frequency Instrument (HFI) features a drastic increase of noise\npower at low frequencies ${f\\leq 0.01 ~ \\mathrm{Hz}}$. We model this noise spectrum\nas:\n\n\\begin{equation}\n\\!\\!\\!\\!\\!\\!\\!\\!\\! S(f) = \\sigma^2 \\times \\left( 1 + \\sum_i \\left( \\frac{f_i}{f} \\right) ^{\\alpha_i} \\right).\n\\end{equation}\n\nThe knee frequency ${f_\\mathrm{knee}}$ is defined as the frequency at which\nthe power spectrum due to low frequency contributions equals that of\nthe white noise. The noise behaves as pure white noise with variance\n${\\sigma^2}$ at high frequencies. The spectral index of each component\nof the low-frequency noise, ${\\alpha_i}$, is typically between 1 and 2,\ndepending on the physical process generating the noise.\n\nThe Fourier spectrum of the noise on the circle obtained by combining\n$N$ consecutive scans depends on the exact method used. The simplest method, setting the circle\nequal to the average of all its scans, efficiently filters out all\nfrequencies save the harmonics of the spinning frequency \\cite{delabrouille97a}. 
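As a rough numerical illustration of this averaging (a toy sketch, not the actual processing pipeline: the sampling rate, the scan length and the noise generator below are assumptions made for illustration only):
\\begin{verbatim}
import numpy as np

def noise_stream(n, f_sample, f_knee, alpha=2.0, sigma=1.0, rng=None):
    """Toy generator of white + 1/f^alpha noise with knee frequency f_knee."""
    rng = np.random.default_rng() if rng is None else rng
    white = np.fft.rfft(rng.normal(0.0, sigma, n))
    f = np.fft.rfftfreq(n, d=1.0 / f_sample)
    gain = np.ones_like(f)
    gain[1:] = np.sqrt(1.0 + (f_knee / f[1:]) ** alpha)  # sqrt of S(f)/sigma^2
    return np.fft.irfft(white * gain, n)

# 60 consecutive scans of one circle (illustrative numbers: 1 rpm, 60 Hz sampling)
n_scans, n_per_scan = 60, 3600
stream = noise_stream(n_scans * n_per_scan, f_sample=60.0, f_knee=0.016,
                      rng=np.random.default_rng(2))
circle = stream.reshape(n_scans, n_per_scan).mean(axis=0)

# The slow drift survives mostly as a constant offset of the co-added circle,
# on top of noise reduced by roughly sqrt(60):
print(circle.mean(), circle.std())
\\end{verbatim}
The printed offset changes from one simulated circle to the next, which is the behaviour modelled in the rest of this section.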
Since the\nnoise power mainly resides at low frequencies (see Fig.\n\\ref{spectrebruit}), the averaging transforms -- to first order -- low\nfrequency drifts into constant offsets different for each circle and\nfor each polarimeter. This is illustrated in the comparison between\nFigs. \\ref{stream1sf} and \\ref{average1sf}. More sophisticated methods for recombining the data\nstreams into circles can be used, as $\\chi^2$ minimization, Wiener filtering, or any map-making\nmethod projecting about ${6\\times 10^5}$ samples onto a circle of about ${5\\times 10^3}$ points.\nFor simplicity, we will work in the following with the circles obtained by simple averaging of all its\nconsecutive scans.\n\n\\begin{figure}[ht]\n \\begin{center}\n \\epsfig{ file=ds_3cercles.eps, bbllx=25,bblly=20,bburx=675,bbury=530,width=8.5cm}\n \\end{center}\n \\caption{Typical $1\/f^2$ low frequency noise stream.\n Here, ${f_{\\rm knee}=f_{\\rm spin}=0.016}$~Hz, ${\\alpha=2}$ and\n ${\\sigma=21 ~\\mu\\mathrm{K}}$ (see Eq. 1).\n This noise stream corresponds to 180 scans or 3 circles (60 scans per circle) or\n a duration of 3 hours.\n \\label{stream1sf}}\n\\end{figure}\n\\begin{figure}[ht]\n \\begin{center}\n \\epsfig{ file=3offsets_cercles.eps, bbllx=55,bblly=50,bburx=655,bbury=515,width=8.5cm}\n \\end{center}\n \\caption{The residual noise on the 3 circles after averaging. To first approximation, \n low frequency drifts are transformed into\n offsets, different for each circle and each polarimeter. Note the expanded scale on the $y$-axis as compared\n to that of Fig. \\ref{stream1sf}.\n \\label{average1sf}}\n\\end{figure}\n\nWe thus model the effect of low frequency drifts as a constant offset\nfor each polarimeter and each circle. This approximation is excellent\nfor ${f_\\mathrm{knee}\\le f_\\mathrm{spin}}$. The remaining white noise of the $h$ polarimeters is\ndescribed by one constant ${h \\times h}$ matrix.\n\n\\section{The measurement of sky polarization}\\label{skypol}\n\n\\subsection{Observational method}\n\nThe measurement with one polarimeter of the linear polarization of a\nwave coming from a direction $\\boldsymbol{\\hat{n}}$ on the sky, requires at least\nthree measurements with different polarimeter orientations. Since the\nStokes parameters $Q$ and $U$ are not invariant under rotations, we define them\nat each point $\\boldsymbol{\\hat{n}}$ with respect to a reference frame of tangential\nvectors ${(\\hat{e}_\\lambda,\\hat{e}_\\beta)}$. The output signal given by a\npolarimeter looking at point $\\boldsymbol{\\hat{n}}$ is:\n\\begin{equation}\n\\!\\!\\!\\!\\!\\!\\!\\!\\! M_{polar}=\\frac{1}{2} ( I+Q \\cos 2\\Psi+U \\sin 2\\Psi ) \n\\label{mesurepol}\n\\end{equation}\nwhere $\\Psi$ is the angle between the polarimeter and\n$\\hat{e}_\\lambda$\\footnote{We do not consider the $V$ Stokes parameter since\n no net circular polarization is expected through Thomson\n scattering.}. In the following, we choose the longitude-latitude\nreference frame as the fixed reference frame on the sky (see Fig. \\ref{frame}). \n\\begin{figure}[ht]\n \\begin{center}\n \\epsfig{ file=frame.eps,width=5cm}\n \\end{center}\n \\caption{The reference frame used to define the Stokes parameters and angular position $\\Psi$\n of a polarimeter. 
$\\Psi$ lies in the plane ${(\\hat{e}_{\\lambda},\\hat{e}_{\\beta})}$.\n \\label{frame}}\n\\end{figure}\n\n\\subsection{Destriping method}\n\nThe destriping method consists in using redundancies at the\nintersections between circle pairs to estimate, for each circle $i$\nand each polarimeter $p$, the offsets $O_i^{p}$ on polarimeter\nmeasurements. For each circle intersection, we require that all three\nStokes parameters {\\em in a fixed reference frame} in that direction of the sky, as\nmeasured on each of the intersecting circles, be the same. A $\\chi^2$\nminimization leads to a linear system whose solution gives the\noffsets. By subtracting these offsets, we can recover the Stokes parameters corrected\nfor low-frequency noise.\n\n\\subsection{Formalism}\n\nWe consider a mission involving $n$ circles. The set of all circles\nthat intercept circle $i$ is denoted by $\\mathcal{I}(i)$ and contains\n$N_{\\mathcal{I}(i)}$ circles. For any pair of circles $i$ and $j$, we denote\nthe two points where these two circles intersect (if any) by\n${\\{i,j,\\delta\\}}$. In this notation $i$ is the circle currently\nscanned, $j$ the intersecting circle in set $\\mathcal{I}(i)$, and $\\delta$\nindexes the two intersections ($\\delta = 1 (-1)$ indexes the first\n(second) point encountered from the northernmost point on the circle)\nso that the points ${\\{j,i, -\\delta\\}}$ and ${\\{i,j,\\delta\\}}$ on the sky\nare identical.\n\nThe Stokes parameters at point $\\{i,j,\\delta\\}$, with respect to a fixed global\nreference system, are denoted by a $3-$vector\n$\\boldsymbol{S}_{i,j,\\delta}$, with\n\\begin{equation}\n \\!\\!\\!\\!\\!\\!\\!\\!\\! \\boldsymbol{S}_{i,j,\\delta} = \\boldsymbol{S}_{j,i,-\\delta} = \\left(\n \\begin{array}{c}\n I\\\\\n Q\\\\\n U\n \\end{array}\n \\right)\n \\left(\n \\boldsymbol{\\hat{n}} \\equiv \\{i,j,\\delta \\}\n \\right)\n \\label{redund}.\n\\end{equation}\n\nAt intersection $\\{i,j,\\delta\\}$, the set of measurements by $h$\npolarimeters travelling along the scanning circle $i$ is a $h-$vector\ndenoted by $\\boldsymbol{M}_{i,j,\\delta}$, and is related to the Stokes parameters at this\npoint by (see Eq. \\ref{mesurepol}):\n\\begin{equation}\n \\!\\!\\!\\!\\!\\!\\!\\!\\! \\boldsymbol{M}_{i,j,\\delta} = \\boldsymbol{\\mathcal{A}}_{i,j,\\delta} \\boldsymbol{S}_{i,j,\\delta} \\label{stokesm}\n\\end{equation}\nwhere $\\boldsymbol{\\mathcal{A}}_{i,j,\\delta}$ is the $h \\times 3$ matrix: \n\\begin{equation}\n \\!\\!\\!\\!\\!\\!\\!\\!\\!\n \\boldsymbol{\\mathcal{A}}_{i,j,\\delta} = \\frac{1}{2}\\left(\\begin{array}{ccc}\n 1&\\cos 2\\Psi_1(i,j,\\delta)&\\sin 2\\Psi_1(i,j,\\delta)\\\\\n \\vdots & \\vdots & \\vdots\\\\\n 1&\\cos 2\\Psi_p(i,j,\\delta)&\\sin 2\\Psi_p(i,j,\\delta)\\\\\n \\vdots & \\vdots & \\vdots\\\\\n 1&\\cos 2\\Psi_h(i,j,\\delta)&\\sin 2\\Psi_h(i,j,\\delta)\\\\\n \\end{array}\\right).\n \\nonumber\n\\end{equation}\n\n$\\Psi_p(i,j,\\delta) \\in [0,\\pi]$ is the angle between the orientation of\npolarimeter $p$ and the reference axis in the fixed global reference\nframe (see Fig. \\ref{frame}). The matrix $\\boldsymbol{\\mathcal{A}}_{i,j,\\delta}$ can be\nfactorised as\n\\begin{equation}\n \\!\\!\\!\\!\\!\\!\\!\\!\\! \\boldsymbol{\\mathcal{A}}_{i,j,\\delta} = \\boldsymbol{\\mathcal{A}} \\boldsymbol{R}_{i,j,\\delta}. 
\\label{decomp}\n\\end{equation}\nThe constant $h \\times 3$ matrix $\\boldsymbol{\\mathcal{A}}$ characterizes the geometrical\nsetup of the $h$ polarimeters in the focal reference frame:\n\\begin{equation}\n \\!\\!\\!\\!\\!\\!\\!\\!\\!\\boldsymbol{\\mathcal{A}} =\\frac{1}{2} \\left(\\begin{array}{ccc}\n 1&1&0\\\\\n \\vdots & \\vdots & \\vdots\\\\\n 1&\\cos 2 \\Delta_p&\\sin 2 \\Delta_p\\\\\n \\vdots & \\vdots & \\vdots\\\\\n 1&\\cos 2 \\Delta_h&\\sin 2 \\Delta_h\\\\\n \\end{array}\\right)\\label{matabsA}\n\\end{equation}\nwhere $\\Delta_p$ is the angle between the orientations of polarimeters\n$p$ and $1$, so we have $\\Psi_p=\\Psi_1+\\Delta_p$ and $\\Delta_1=0$.\nThe rotation matrix $\\boldsymbol{R}_{i,j,\\delta}$ brings the focal plane to its\nposition at intersection $\\{i,j,\\delta\\}$ when scanning along circle\n$i$:\n\\begin{equation}\n \\!\\!\\!\\!\\!\\!\\!\\!\\!\\boldsymbol{R}_{i,j,\\delta}=\n \\left(\n \\begin{array}{ccc}\n 1&0&0\\\\\n 0&\\cos 2\\Psi_1(i,j,\\delta)&\\sin 2\\Psi_1(i,j,\\delta)\\\\\n 0&-\\sin 2\\Psi_1(i,j,\\delta)&\\cos 2\\Psi_1(i,j,\\delta) \n \\end{array}\\right). \\label{arot}\n\\end{equation} \n\n\\section{The algorithm}\\label{algo}\n\\subsection{The general case}\n\nTo extract the offsets from the measurements, we use a $\\chi^2$\nminimization. This $\\chi^2$ relates the measurements $\\boldsymbol{M}_{i,j,\\delta}$\nto the offsets $\\boldsymbol{O}_i$ and the Stokes parameters $\\boldsymbol{S}_{i,j,\\delta}$, using the\nredundancy condition \\eqref{redund}. In order to take into account the\ntwo contributions of the noise (see Sect. \\ref{noise}) and of the\nStokes parameters (see Eq. \\ref{stokesm}), we model the measurement as:\n\\begin{equation}\\label{eq:offsetsmodel}\n \\!\\!\\!\\!\\!\\!\\!\\!\\! \\boldsymbol{M}_{i,j,\\delta}=\n \\boldsymbol{\\mathcal{A}} \\, \\boldsymbol{R}_{i,j,\\delta} \\, \\boldsymbol{S}_{i,j,\\delta} \\, + \\boldsymbol{O}_i \\, + \\mathrm{white~noise}.\n\\end{equation}\nso that we write\n\\begin{multline}\n \\!\\!\\!\\!\\!\\!\\!\\!\\! \\chi^2 = \\sum_{i,j\\in {\\mathcal{I}(i)},\\delta=\\pm 1} \\,\\left(\\boldsymbol{M}_{i,j,\\delta} - \\boldsymbol{O}_i -\n \\boldsymbol{\\mathcal{A}}\\,\\boldsymbol{R}_{i,j,\\delta}\\, \\boldsymbol{S}_{i,j,\\delta}\\right)^T \\times\\\\\n {\\boldsymbol{N}_i}^{-1}\\left(\\boldsymbol{M}_{i,j,\\delta} - \\boldsymbol{O}_i -\n \\boldsymbol{\\mathcal{A}}\\,\\boldsymbol{R}_{i,j,\\delta}\\, \\boldsymbol{S}_{i,j,\\delta}\\right).\n\\label{eq:chi2}\n\\end{multline}\nwhere ${\\boldsymbol{N}_i}$ is the $h \\times h$ matrix of noise correlation\nbetween the $h$ polarimeters. \n\nMinimization with respect to $\\boldsymbol{O}_i$ and $\\boldsymbol{S}_{i,j,\\delta}$ yields the\nfollowing equations: \n\\begin{gather}\n \\!\\!\\!\\!\\!\\!\\!\\!\\! {\\boldsymbol{N}_i}^{-1} \\sum_{j\\in \\mathcal{I}(i),\\delta=\\pm 1} \\left(\\boldsymbol{M}_{i,j,\\delta} - \\boldsymbol{O}_i -\n \\boldsymbol{\\mathcal{A}}\\,\\boldsymbol{R}_{i,j,\\delta}\\, \\boldsymbol{S}_{i,j,\\delta}\\right) = 0, \\label{eqgd}\n \\!\\!\\!\\!\\!\\!\\!\\!\\! \\intertext{and}\n \\!\\!\\!\\!\\!\\!\\!\\!\\! \\boldsymbol{R}_{i,j,\\delta}^{-1}\\,\\boldsymbol{\\mathcal{A}}^T\\, {\\boldsymbol{N}_i}^{-1} \\left(\\boldsymbol{M}_{i,j,\\delta} - \\boldsymbol{O}_i -\n \\boldsymbol{\\mathcal{A}}\\,\\boldsymbol{R}_{i,j,\\delta}\\, \\boldsymbol{S}_{i,j,\\delta}\\right) + \\notag\\\\\n \\!\\!\\!\\!\\!\\!\\!\\!\\! 
\\boldsymbol{R}_{j,i,-\\delta}^{-1} \\boldsymbol{\\mathcal{A}}^T {\\boldsymbol{N}_j}^{-1} \\left(\\boldsymbol{M}_{j,i,-\\delta} - \\boldsymbol{O}_j -\n \\boldsymbol{\\mathcal{A}}\\,\\boldsymbol{R}_{j,i,-\\delta}\\, \\boldsymbol{S}_{i,j,\\delta}\\right) = 0.\n \\label{eqgs}\n\\end{gather}\n\nWe can work with a reduced set of transformed measurements and\noffsets which can be viewed as the Stokes parameters in the focal reference\nframe and the associated offsets which are the $3$ dimensional vectors:\n\\begin{gather}\n \\!\\!\\!\\!\\!\\!\\!\\!\\! \\mathscr{S}_{i,j,\\delta} = {\\boldsymbol{X}_i}^{-1}\\,\\boldsymbol{\\mathcal{A}}^T\\, {\\boldsymbol{N}_i}^{-1} \\boldsymbol{M}_{i,j,\\delta} \\notag \\mathrm{~~and~}\\\\\n \\!\\!\\!\\!\\!\\!\\!\\!\\! \\boldsymbol{\\Delta}_i = {\\boldsymbol{X}_i}^{-1}\\,\\boldsymbol{\\mathcal{A}}^T\\, {\\boldsymbol{N}_i}^{-1} \\boldsymbol{O}_i, \\label{defg}\\\\ \n \\!\\!\\!\\!\\!\\!\\!\\!\\! \\mbox{where } \\boldsymbol{X}_i = \\boldsymbol{\\mathcal{A}}^T\\, {\\boldsymbol{N}_i}^{-1}\\, \\boldsymbol{\\mathcal{A}}. \\notag \n\\end{gather}\nEqs. \\eqref{eqgd} and \\eqref{eqgs} then simplify to:\n\\begin{gather}\n \\!\\!\\!\\!\\!\\!\\!\\!\\! \\sum_{j\\in \\mathcal{I}(i),\\,\\delta=\\pm 1} \\left(\\mathscr{S}_{i,j,\\delta} - \\boldsymbol{\\Delta}_i - \\boldsymbol{R}_{i,j,\\delta}\\,\n \\boldsymbol{S}_{i,j,\\delta}\\right) = 0, \\label{eqgd2}\n \\!\\!\\!\\!\\!\\!\\!\\!\\! \\intertext{and}\n \\!\\!\\!\\!\\!\\!\\!\\!\\! \\boldsymbol{R}_{i,j,\\delta}^{-1}\\,\\boldsymbol{X}_i\\,\\left(\\mathscr{S}_{i,j,\\delta} - \\boldsymbol{\\Delta}_i -\n \\boldsymbol{R}_{i,j,\\delta}\\, \\boldsymbol{S}_{i,j,\\delta}\\right) + \\notag\\\\\n \\!\\!\\!\\!\\!\\!\\!\\!\\! \\boldsymbol{R}_{j,i,-\\delta}^{-1}\\,\\boldsymbol{X}_j\\ \\left(\\mathscr{S}_{j,i,-\\delta}\n - \\boldsymbol{\\Delta}_j - \\boldsymbol{R}_{j,i,-\\delta}\\, \\boldsymbol{S}_{i,j,\\delta}\\right) = 0. \n \\label{eqgs2}\n\\end{gather}\n$\\boldsymbol{R}_{i,j,\\delta}\\,\\boldsymbol{S}_{i,j,\\delta}$ in Eq. \\eqref{eqgs2} can be solved for\nand the result inserted in Eq. \\eqref{eqgd2}. After a few\nalgebraic manipulations, one gets the following linear system for the\noffsets $\\boldsymbol{\\Delta}_i$ as functions of the data $\\mathscr{S}_{i,j,\\delta}$:\n\\begin{multline}\n \\!\\!\\!\\!\\!\\!\\!\\!\\! \\sum_{j\\in \\mathcal{I}(i),\\delta=\\pm 1}\\left[\\bbbone + \\widetilde{\\boldsymbol{R}}(i,j,\\delta)\\,\n {\\boldsymbol{X}_j}^{-1}\\,\\widetilde{\\boldsymbol{R}}(i,j,\\delta)^{-1}\\,\\boldsymbol{X}_i\\right]^{-1} \\times \\\\\n \\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\! \\left[\\boldsymbol{\\Delta}_i - \\widetilde{\\boldsymbol{R}}(i,j,\\delta)\\,\\boldsymbol{\\Delta}_j - \n \\mathscr{S}_{i,j,\\delta} + \\widetilde{\\boldsymbol{R}}(i,j,\\delta)\\,\\mathscr{S}_{j,i,-\\delta}\\right] = 0,\n \\label{eqgd3}\n\\end{multline}\nwhere the rotation\n\\begin{equation}\n \\!\\!\\!\\!\\!\\!\\!\\!\\!\\widetilde{\\boldsymbol{R}}(i,j,\\delta)=\\boldsymbol{R}_{i,j,\\delta} \\,{\\boldsymbol{R}^{-1}_{j,i,-\\delta}} \\label{frot}\n\\end{equation}\nbrings the focal reference frame from its position along scan $j$ at\nintersection $\\{i,j,\\delta\\}$ to its position along scan $i$ at the same\nintersection (remember that $\\{i,j,\\delta\\} = \\{j,i,-\\delta\\}$). \nNote that\n$\\widetilde{\\boldsymbol{R}}(i,j,\\delta)=\\widetilde{\\boldsymbol{R}}(j,i,-\\delta)^{-1}$. \nIn this linear system, we need to know the measurements of the\npolarimeters\nat the points $\\{i,j,\\delta \\}$ and $\\{j,i,-\\delta\\}$. 
These two points on\ncircles $i$ and $j$ respectively will unlikely correspond to a\nsample along these circles. So we have linearly interpolated the\nvalue of the intersection points from the values measured at sampled points. \nFor a fixed circle $i$, this is a $3 \\times N_{\\mathcal{I}(i)}$ linear\nsystem. In Eq. \\eqref{eqgd3}, $i$ runs from $1$ to $n$, therefore the\ntotal matrix to be inverted has dimension $3n \\times 3n$. However,\nbecause the rotation matrices are in fact two dimensional (see Eq.\n\\ref{arot}), the intensity components $\\Delta_i^I$ of the offsets\nonly enter Eq. \\eqref{eqgd3} through their differences $\\Delta_i^I -\n\\Delta_j^I$ so that the linear system is not invertible: the rank of\nthe system is $3n-1$. In order to compute the offsets, we can fix the\nintensity offset on one particular scanning circle or add the\nadditionnal constraint that the length of the solution vector is\nminimized.\n\nOnce the offsets $\\boldsymbol{\\Delta}_i$ are known, \nthe Stokes parameters in the global reference frame\n$\\left( \\hat{e}_{\\lambda},\\hat{e}_{\\beta} \\right)$ at a generic\nsampling $k$ of the circle\n$i$, labeled by $\\{i,k\\}$ are estimated as\n\\begin{equation}\n \\!\\!\\!\\!\\!\\!\\!\\!\\!\\boldsymbol{S}_{i,k} = \\boldsymbol{R}_{i,k}^{-1} \\left(\\mathscr{S}_{i,k} - \\boldsymbol{\\Delta}_i\\right),\n \\label{substract}\n\\end{equation}\nwhere $\\boldsymbol{R}_{i,k}$ is the rotation matrix which transforms the focal frame Stokes parameters into those of\nthe global reference frame.\n\n\nThe quantities $\\mathscr{S}_{i,k}$ are the Stokes parameters measured in the focal\nframe of reference at this point and are simply given in terms of the\nmeasurements $\\boldsymbol{M}_{i,k}$ (see Eq. \\ref{defg}) by:\n\\begin{equation}\n \\!\\!\\!\\!\\!\\!\\!\\!\\! \\mathscr{S}_{i,k} = {\\boldsymbol{X}_i}^{-1}\\,\\boldsymbol{\\mathcal{A}}^T\\, {\\boldsymbol{N}_i}^{-1} \\boldsymbol{M}_{i,k}.\n\\end{equation}\n\nThe matrix\n\\begin{equation}\n \\!\\!\\!\\!\\!\\!\\!\\!\\! \\boldsymbol{N}^{\\rm{Stokes}}_i=\\left( \\boldsymbol{\\mathcal{A}}^T \\, \\boldsymbol{N}_i^{-1} \\, \\boldsymbol{\\mathcal{A}} \\right) ^{-1}\n\\end{equation}\nis the variance matrix of the Stokes parameters on circle $i$. Note that this\nalgorithm is totally independent of the pixelization chosen which only\nenters when reprojecting the Stokes parameters on the sphere.\n\n\\subsection{Uncorrelated polarimeters, with identical noise}\nWhen the polarimeters are uncorrelated with identical noise, the\nvariance matrix reduces to $\\boldsymbol{N}_i = \\bbbone\/{\\sigma_i}^2$ and the matrices\n$\\boldsymbol{X}_i$ can all be written as\n\\[\n\\!\\!\\!\\!\\!\\!\\!\\!\\! \\boldsymbol{X}_i = \\frac{1}{{\\sigma_i}^2}\\,\\boldsymbol{X} \\mbox{ with } \\boldsymbol{X} = \\boldsymbol{\\mathcal{A}}^T\n\\boldsymbol{\\mathcal{A}}\n\\]\n\n\\subsection*{Case of ``Optimized Configurations''}\nWe have shown \\cite{couchot98} that the polarimeters can be \narranged in ``Optimized Configurations'', where the $h$ polarimeters are\nseparated by angles of $\\pi\/h$. 
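As a quick numerical check (a sketch with assumed values: $h=3$ polarimeters, unit white-noise level, an arbitrary Stokes vector), one can build $\\boldsymbol{\\mathcal{A}}$ for such a configuration and apply the per-circle estimator $\\mathscr{S} = (\\boldsymbol{\\mathcal{A}}^T {\\boldsymbol{N}}^{-1} \\boldsymbol{\\mathcal{A}})^{-1} \\boldsymbol{\\mathcal{A}}^T {\\boldsymbol{N}}^{-1} \\boldsymbol{M}$ written above; the printout also shows that $\\boldsymbol{X}$ is proportional to $\\mathrm{diag}(2,1,1)$, which is the property used next.
\\begin{verbatim}
import numpy as np

# h polarimeters in an "Optimized Configuration", i.e. separated by pi/h
h, sigma = 3, 1.0
psi = np.arange(h) * np.pi / h                    # angles in the focal frame
A = 0.5 * np.column_stack([np.ones(h), np.cos(2 * psi), np.sin(2 * psi)])

rng = np.random.default_rng(0)
S_true = np.array([10.0, 0.7, -0.4])              # (I, Q, U) in the focal frame
M = A @ S_true + rng.normal(0.0, sigma, h)        # h polarimeter measurements

X = A.T @ A / sigma**2                            # A^T N^-1 A for white noise
S_hat = np.linalg.solve(X, A.T @ M / sigma**2)    # least-squares Stokes estimate
print(np.round(8 * sigma**2 * X / h, 3))          # -> diag(2, 1, 1)
print(S_hat)                                      # noisy estimate of (I, Q, U)
\\end{verbatim}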
If the noise level of each of the $h$\npolarimeters is the same and if there are no correlation between detector noise, \nthen the errors of the\nStokes parameters are also decorrelated and the matrix $\\boldsymbol{X}$ has the simple form:\n\\begin{equation}\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\n\\boldsymbol{X} = \\frac{n}{8}\\left(\n \\begin{array}{ccc}\n 2&0&0\\\\\n 0&1&0\\\\\n 0&0&1\n \\end{array}\\right).\n\\end{equation}\nBecause this matrix commutes with all rotation matrices\n$\\widetilde{\\boldsymbol{R}}(i,j,\\delta)$, \nEq. \\eqref{eqgd3} simplifies further to\n\\begin{multline}\n \\!\\!\\!\\!\\!\\!\n \\frac{N_{\\mathcal{I}(i)}}{\\sigma_{\\mathcal{I}(i)}^2}\\boldsymbol{\\Delta}_i\n - \\sum_{{m\\in\n {\\mathcal{I}(i)}}}\\frac{1}{\\sigma_i^2+\\sigma_j^2}\\left(\\widetilde{\\boldsymbol{R}}(i,j,1)+\\widetilde{\\boldsymbol{R}}(i,j,-1)\n \\right)\\boldsymbol{\\Delta}_j\\\\\n \\!\\!\\!\\!\\!\\!\\!\\!\\! = \\sum_{j\\in {\\mathcal{I}(i)},\\delta=\\pm 1}\n \\frac{1}{\\sigma_i^2+\\sigma_j^2} \\left(\\mathscr{S}_{i,j,\\delta} -\n \\widetilde{\\boldsymbol{R}}(i,j,\\delta) \\,\n \\mathscr{S}_{j,i,-\\delta}\\right),\n \\label{eqdelta1}\n\\end{multline}\nwhere the sum over $\\delta$ is explicit on the left side of the\nequation and we have defined an average error\n$\\sigma_{\\mathcal{I}(i)}$ along circle $i$ by\n\\[\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\frac{N_{\\mathcal{I}(i)}}{\\sigma_{\\mathcal{I}(i)}^2} = \\sum_{j\\in{\\mathcal{I}(i)}} \\frac{2}{\\sigma_i^2+\\sigma_j^2},\n\\]\nand where the rotation matrix $\\widetilde{\\boldsymbol{R}}(i,j,\\delta)$ is defined by Eq. \\eqref{frot}.\nRotations $\\widetilde{\\boldsymbol{R}}(i,j,\\delta)$ and\n$\\widetilde{\\boldsymbol{R}}(i,j,-\\delta)=\\widetilde{\\boldsymbol{R}}(j,i,\\delta)^{-1}$ correspond to the two intersections\nbetween circles $i$ and $j$. Eq. \\eqref{eqdelta1} can be simplified \nfurther. We can separate the $\\boldsymbol{\\Delta}_i$ and the $\\mathscr{S}_{i,j,\\delta}$ into\nscalar components related to the intensity: $\\Delta^I_i,\\\n\\mathscr{S}^I_{i,j,\\delta}$ and 2-vectors components related to the\npolarization: $\\boldsymbol{\\Delta}^P_i,\\ \\mathscr{S}^P_{i,j,\\delta}$. We obtain then two separate equations, one for the \nintensity offsets $\\Delta^I_i$, which is exactly the same as in the\nunpolarized case (see \\ref{appendixa}):\n\\begin{multline}\n\\!\\!\\!\\!\\!\\! \\sum_{j\\in {\\mathcal{I}(i)}}\\frac{2}{\\sigma_i^2+\\sigma_j^2}\\,\\left( \\Delta^I_i -\n \\Delta^I_j\\right) = \\\\\n\\!\\!\\!\\!\\!\\!\\!\\!\\! \\qquad \\qquad\\sum_{j\\in {\\mathcal{I}(i)},\\delta=\\pm 1}\n\\frac{1}{\\sigma_i^2+\\sigma_j^2}\\left(\\mathscr{S}^I_{i,j,\\delta} \n - \\mathscr{S}^I_{j,i,-\\delta}\\right),\n\\label{eqdeltaI}\n\\end{multline}\nand one for the polarization offsets $\\boldsymbol{\\Delta}^P_i$:\n\\begin{multline}\n\\!\\!\\!\\!\\!\\! \\frac{N_{\\mathcal{I}(i)}}{\\sigma_{\\mathcal{I}(i)}^2}\\,\\boldsymbol{\\Delta}^P_i - \n\\sum_{j\\in {\\mathcal{I}(i)}}\\frac{2}{\\sigma_i^2+\\sigma_j^2}\\, \\cos\n\\left(\n 2 \n \\Psi_{ij}\n\\right)\n\\,\\boldsymbol{\\Delta}^P_j = \\\\\n\\!\\!\\!\\!\\!\\!\\!\\!\\! 
\\sum_{j\\in {\\mathcal{I}(i)},\\delta=\\pm 1}\\frac{1}{\\sigma_i^2+\\sigma_j^2} \n\\left(\\mathscr{S}^P_{i,j,\\delta} - \\boldsymbol{\\mathcal{R}}(i,j,\\delta)\\,\\mathscr{S}^P_{j,i,-\\delta}\\right),\n\\label{eqdeltaP}\n\\end{multline}\nwhere $\\Psi_{ij}=\\Psi_1(i,j,\\delta)-\\Psi_1(j,i,-\\delta)$ is the angle of the rotation that\nbrings the focal reference frame along scan $j$ on the focal reference\nframe along scan $i$ at intersection $\\{i,j,\\delta\\}$. The two\ndimensional matrix $\\boldsymbol{\\mathcal{R}}(i,j,\\delta)$ is the rotation sub-matrix\ncontained in $\\widetilde{\\boldsymbol{R}}(i,j,\\delta)$ (see Eq. \\ref{frot}).\nNote that all mixing between polarization components have disappeared\nfrom the left side of Eq. \\eqref{eqdeltaP}). Therefore we are\nleft with two different $n \\times n$ matrices to invert \nin order to solve for the offsets $\\boldsymbol{\\Delta}_i$, instead of one $3n \\times\n3n$ matrix.\n\nAs in Eq. \\eqref{eqgd3}, the linear system in Eq. \\eqref{eqdeltaI}\ninvolves differences between the offsets $\\Delta_i^I$, the matrix is\nnot invertible, and we find the solution in the same way as in the\ngeneral case. On the other hand, for the polarized offsets\n$\\Delta_i^P$, the underlying matrix of Eq. \\eqref{eqdeltaP} is regular\nas expected.\n\n\\section{Simulations and test of the algorithm}\\label{simul}\n\n\\subsection{Methods}\nWe now discuss how we test the method using numerical simulations.\nFor each simulated mission, we produce several maps. The first,\nwhich we use as the standard\n``reference'' of comparison, is a projected map of a mission\nwith only white noise, ${f_\\mathrm{knee}=0}$.\nThe remaining maps include $1\/f$ plus white noise streams\nwith ${f_\\mathrm{knee}=\\eta f_\\mathrm{spin}}$, ${\\eta \\in \\{1,2,5,10\\}}$.\nThe first of these is an ``untreated'' image which is projected with\nno attempt to remove striping affects. In the second\n``zero-averaged'' map, we attempt a crude destriping by subtracting its average to each circle.\nThe final ``destriped'' map is constructed using the algorithm\nin this paper.\nWe subtract the input maps ($I$, $Q$ and $U$) from the final maps in order to get maps of noise residuals.\nNote that in case of a zero signal sky, setting the average of each circle to zero is better than destriping by nature \nbecause the offsets are only due to the noise. With a real sky,\nboth signal and noise contribute to the average so that zeroing circle not only removes the noise but also the signal.\nGiard~et~al~\\cite*{giard99} have attempted to refine their method by fitting templates of the dipole and of the galaxy\nbefore subtracting a baseline from each circle. They concluded that an additional destriping (they used the algorithm of\nDelabrouille~\\cite*{delabrouille98a}) is needed.\n\n\\subsection{Simulated missions}\nIn order to test its efficiency, the destriping algorithm has been\napplied to raw data streams generated from simulated observations\nusing various circular scanning strategies representative of a satellite mission as\n{\\sc Planck}, different ``Optimized Configurations'', and various noise\nparameters. 
The resulting maps were then compared with input and untreated\nmaps to test the quality of the destriping.\n\nThe input temperature ($I$) maps are the sum of galaxy, dipole,\nand a randomly generated standard CDM anisotropy map \n(we used HEALPIX\\footnote{\\tt http:\/\/www.tac.dk\/\\~{}healpix\/}\nand CMBfast\\footnote{\\tt http:\/\/www.sns.ias.edu\/\\~{}matiasz\\\\\n \/CMBFAST\/cmbfast.html}).\nSimilarly, the polarization maps $Q$ and $U$ are the sum of the galaxy and\nCMB polarizations.\nThe CMB polarization maps are randomly generated assuming a\nstandard CDM scenario. For the galactic polarization maps, we constructed a\nrandom, continuous and correlated vector field defined on the 2-sphere with a correlation length of $5^\\circ$\nand a maximum polarization rate of ${20\\%}$ ($100\\%$ gave similar results).\nGiven the temperature map of the sky (not including CMB contribution), we can thus construct two polarization\nmaps for $Q$ and $U$.\n\n\\subsection{Results}\nWe first consider the case of destriping pure white noise and check\nthat the destriping algorithm does not introduce spurious structure.\nOnce this is verified, we apply the destriping algorithm to low\nfrequency noise. We find that the quality of the destriping is significantly dependent\non $\\eta$ only. \nTo demonstrate visually the quality of the destriping, we produce projected\nsky maps with the input galaxy, dipole and CMB signal subtracted.\n\nFor temperature maps, we can compare Figs. 7,8, and 9. The eye is not able to see any differences\nbetween the white noise map and the noise residual on the destriped map. \nWe will see in the following how to quantify the presence of structures.\nFor the ``zero-averaged circles'' map, the level of the structure is very high and make it impossible\nto compute the power spectrum of the CMB (see Fig. 13).\n\nFor the $Q$ Stokes parameter, Fig. 11 shows the destriped map for $\\eta=1$.\nFig. 12 shows a map where the offsets are calculated as the average of\neach circle. The maps for $U$ are very similar.\nAs for the $I$ maps, the destriped map is very similar to the white noise map. There exist some residual\nstructure on the ``zero-averaged circles'' map.\nTo assess quantitatively the efficiency of the destriping algorithm,\nwe have first studied the power spectra $C_\\ell^T$, $C_\\ell^E$, $C_\\ell^B$, $C_\\ell^{TE}$ and $C_\\ell^{TB}$\ncalculated from the $I$, $Q$ and $U$ maps\n\\cite{zaldarriaga97,kamionkowski97}.\n\\begin{figure}[ht]\n \\caption{The Mollweide projection of the residuals of the \n ${I-\\mathrm{Stokes}}$ parameter for a white noise mission. \n The scale is in Kelvins. 
\n The parameters of the simulation leading to this map are \n described in \\ref{appendixb}.\n \\label{siwun}}\n\\end{figure}\n\\begin{figure}[ht]\n \\caption{The Mollweide projection of the residuals of the \n ${I-\\mathrm{Stokes}}$ parameter after destriping of $1\/f$ noise plus white noise with $\\eta=1$\n \\label{sidun}}\n\\end{figure}\n\\begin{figure}[ht]\n \\caption{The Mollweide projection of the residuals of the \n ${I-\\mathrm{Stokes}}$ parameter after zeroing the average of the circles, \n for $1\/f$ noise plus white noise with $\\eta=1$.\n \\label{simun}}\n\\end{figure}\n\\begin{figure}[ht]\n \\caption{The Mollweide projection of the residuals of the \n ${Q-\\mathrm{Stokes}}$ parameter for a white noise mission.\n \\label{sqwun}}\n\\end{figure}\n\\begin{figure}[ht]\n \\caption{The Mollweide projection of the residuals of the ${Q-\\mathrm{Stokes}}$\n parameter after destriping of $1\/f$ noise plus white noise with $\\eta=1$\n \\label{sqdun}}\n\\end{figure}\n\\begin{figure}[ht]\n \\caption{The Mollweide projection of the residuals of the ${Q-\\mathrm{Stokes}}$\n parameter after zeroing the average of the circles, for $1\/f$ noise plus white noise with $\\eta=1$.\n Although the remaining structures seem small, they are responsible for the excess of power in \n ${C_\\ell^E}$, see Fig. \\ref{speun}.\n \\label{sqmun}}\n\\end{figure}\n\nThe reference sensitivity of our simulated mission is evaluated by\ncomputing the average spectra of 1000 maps of reprojected mission white noise.\nThis reference sensitivity falls, within sample variance, between the two dotted lines\nrepresented in Figs. 13, 14, 15 and 16 and below the\ndotted line in Figs. 17 and 18.\nFigs. 13 and 14 show the spectra ${C_\\ell^T}$ \ncorresponding to ${f_\\mathrm{knee}\/f_\\mathrm{spin}=1\\mathrm{~and~}5}$ respectively,\nfor the $T$ field.\nSimilarly, Figs. 15 and 16 are the spectra ${C_\\ell^E}$.\nThe $B$ field is not represented because it is very similar to $E$.\nFigs. 17 and 18 represent the correlation between $E$ and $T$:~$C_\\ell^{ET}$.\nFor ${f_\\mathrm{knee}\/f_\\mathrm{spin}=1}$, we see that we are able to remove very efficiently low frequency\ndrifts in the noise stream: the destriped spectra obtained are\ncompatible with the white spectrum (within sample variance). Similar quality destriping is\nachieved for any superposition of $1\/f$, $1\/f^2$ and white noise,\nprovided that the knee frequency is lower than or equal to the spin\nfrequency.\nIn the case of ${f_\\mathrm{knee}\/f_\\mathrm{spin}=5}$, \nthe method as implemented here leaves some\nstriping noise on the maps at low values of $\\ell$. Modeling the\nnoise as an offset is no longer adequate and a better model of the\naveraged low-frequency noise is required (superposition of sine and\ncosine functions for instance), or a more sophisticated method for constructing\none circle from 60 scans. We again note that the value of the\nratio ${f_\\mathrm{knee}\/f_\\mathrm{spin}}$ for both {\\sc Planck} HFI and LFI is likely to be very close\nto unity (in Fig. \\ref{spectrebruit}, ${f^\\mathrm{measured}_\\mathrm{knee}\\sim 0.014}$~Hz and \n${f_\\mathrm{spin}=0.016}$~Hz).\n\n\nTo quantify the presence of stripes in the maps of residuals, we can compute the value of the ``striping'' estimator\n${\\mathrm{rms}(a^{T,E,B}_{\\ell \\ell})\/\\mathrm{rms}(a^{T,E,B}_{\\ell 0})}$, because stripes tend to appear as structure\ngrossly parallel to the iso-longitude circles. In the case of pure white noise with a uniform\nsky coverage, this value is 1. 
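For reference, such an estimator can be evaluated with a few lines of code (a sketch assuming the $a_{\\ell m}$ of a residual map are available as a dense array; the array layout, $\\ell_{\\rm max}$ and the white-noise test below are illustrative assumptions):
\\begin{verbatim}
import numpy as np

def striping_estimator(alm):
    """rms over ell of a_{ell,ell} divided by rms over ell of a_{ell,0}.

    alm is a (lmax+1, lmax+1) complex array with alm[l, m] = a_{lm}, m >= 0,
    obtained from any spherical-harmonic transform of the residual map.
    """
    lmax = alm.shape[0] - 1
    ell = np.arange(1, lmax + 1)
    a_ll = alm[ell, ell]
    a_l0 = alm[ell, 0]
    return np.sqrt(np.mean(np.abs(a_ll) ** 2) / np.mean(np.abs(a_l0) ** 2))

# Sanity check on synthetic white noise (E|a_lm|^2 = const for every l, m):
lmax, rng = 512, np.random.default_rng(1)
alm = (rng.normal(size=(lmax + 1, lmax + 1))
       + 1j * rng.normal(size=(lmax + 1, lmax + 1))) / np.sqrt(2.0)
alm[:, 0] = rng.normal(size=lmax + 1)            # m = 0 coefficients are real
print(striping_estimator(alm))                   # close to 1 for white noise
\\end{verbatim}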
Here, because of the scanning strategy, the sky coverage\nis not uniform and the value of this estimator is greater than 1, showing that it is not\nspecific of the striping. In order to get rid of the effect of non-uniform sky coverage, we express the estimator\n${\\mathrm{rms}(a^{T,E,B}_{\\ell \\ell})\/\\mathrm{rms}(a^{T,E,B}_{\\ell 0})}$\nin units of ${\\mathrm{rms}(a^{T,E,B}_{\\ell \\ell})\/\\mathrm{rms}(a^{T,E,B}_{\\ell 0})}$ for the white noise. This new estimator\nis specific to the striping. The results in Table \\ref{table1} show the improvement achieved by the destriping\nalgorithm although the result is still not perfect.\n\\begin{table}\\caption{Values of ${\\mathrm{rms}(a^{T,E,B}_{\\ell \\ell})\/\\mathrm{rms}(a^{T,E,B}_{\\ell 0})}$ as\n a function of ${\\eta={f_\\mathrm{knee}\/f_\\mathrm{spin}}}$ in units of \n ${\\mathrm{rms}(a^{T,E,B}_{\\ell \\ell}(\\mathrm{WN}))\/\\mathrm{rms}(a^{T,E,B}_{\\ell 0}(\\mathrm{WN}))}$\n We have checked that the systematic difference between the zero-averaged $E$ and $B$ fields is randomly in favor\n of $E$ and $B$ depending on the particular sky simulation.}\n \n \\label{table1}\n \\begin{center}\n \\begin{tabular}{|c|c|c|c|c|}\\hline\n Method & ${f_\\mathrm{knee}\/f_\\mathrm{spin}}$ & $T$ & $E$ & $B$ \\\\ \\hline\n white noise & 0 & 1 & 1 & 1 \\\\ \\hline\\hline\n {\\bf destriped} & {\\bf 0.5}& {\\bf 1.19} & {\\bf 1.05} & {\\bf 1.03} \\\\ \\hline\n zero-averaged & 0.5 & 51.9 & 9.23 & 3.64 \\\\ \\hline\n undestriped & 0.5 & 6.98 & 7.04 & 15.2 \\\\ \\hline\\hline\n {\\bf destriped} & {\\bf 1} & {\\bf 1.24} & {\\bf 1.12} & {\\bf 1.19} \\\\ \\hline\n zero-averaged & 1 & 52.2 & 9.91 & 3.95 \\\\ \\hline\n undestriped & 1 & 10.7 & 10.9 & 7.51 \\\\ \\hline\\hline\n {\\bf destriped} & {\\bf 2} & {\\bf 1.26} & {\\bf 1.32} & {\\bf 1.23} \\\\ \\hline\n zero-averaged & 2 & 49.5 & 10.2 & 3.85 \\\\ \\hline\n undestriped & 2 & 6.41 & 9.97 & 8.18 \\\\ \\hline\\hline\n {\\bf destriped} & {\\bf 5} & {\\bf 1.35} & {\\bf 1.39} & {\\bf 1.38} \\\\ \\hline\n zero-averaged & 5 & 49.8 & 10.4 & 3.99 \\\\ \\hline\n undestriped & 5 & 11.3 & 8.24 & 12.4 \\\\ \\hline\n \\end{tabular}\n \\end{center}\n\\end{table}\n\n\\begin{figure}[ht]\n \\begin{center}\n \\epsfig{ file=spt_1.eps,width=8.5cm}\n \\end{center}\n \\caption{Efficiency of destriping for the $T$ field with ${f_{\\rm knee}\/f_{\\rm spin}=1}$.\n The sample variance associated to a pure white noise mission\n is plotted as the\n dotted lines. The ``destriped spectrum'' is very close to the white noise\n spectrum (within the limits due to the sample variance). The zero-averaged\n and the ``not destriped'' spectra are a couple of orders of magnitude above.\n The solid line represents a standard CDM temperature spectrum and the dashed line represents\n a CDM temperature spectrum with reionization.\n \\label{sptun}}\n \\begin{center}\n \\epsfig{ file=spt_5.eps,width=8.5cm}\n \\end{center}\n \\caption{Efficiency of destriping for the $T$ field with ${f_{\\rm knee}\/f_{\\rm spin}=5}$. \n The modelling of \n low-frequency noise with an offset is no longer sufficient and\n the destriping leaves some power at low values of $\\ell$. Nevertheless, \n it remains a very good way to significantly reduce the effect of low-frequency noise.\n \\label{sptcinq}}\n\\end{figure}\n\\begin{figure}[ht]\n \\begin{center}\n \\epsfig{ file=spe_1.eps,width=8.5cm}\n \\end{center}\n \\caption{Efficiency of destriping for the $E$ field for ${f_{\\rm knee}\/f_{\\rm spin}=1}$. 
\begin{figure}[ht]
  \begin{center}
  \epsfig{file=spt_1.eps,width=8.5cm}
  \end{center}
  \caption{Efficiency of destriping for the $T$ field with ${f_{\rm knee}/f_{\rm spin}=1}$.
  The sample variance associated with a pure white noise mission
  is plotted as the dotted lines. The ``destriped'' spectrum is very close to the white-noise
  spectrum (within the limits due to the sample variance). The zero-averaged
  and the ``not destriped'' spectra are a couple of orders of magnitude above.
  The solid line represents a standard CDM temperature spectrum and the dashed line represents
  a CDM temperature spectrum with reionization.
  \label{sptun}}
  \begin{center}
  \epsfig{file=spt_5.eps,width=8.5cm}
  \end{center}
  \caption{Efficiency of destriping for the $T$ field with ${f_{\rm knee}/f_{\rm spin}=5}$.
  The modelling of low-frequency noise with an offset is no longer sufficient and
  the destriping leaves some power at low values of $\ell$. Nevertheless,
  it remains a very good way to significantly reduce the effect of low-frequency noise.
  \label{sptcinq}}
\end{figure}
\begin{figure}[ht]
  \begin{center}
  \epsfig{file=spe_1.eps,width=8.5cm}
  \end{center}
  \caption{Efficiency of destriping for the $E$ field for ${f_{\rm knee}/f_{\rm spin}=1}$.
  The zero-averaged spectrum
  is not as bad as for $T$, but the residual striping visible in Fig.~\ref{sqmun}
  leads to some excess of power at low values of $\ell$ (up to $\ell\sim 100$). We do not see such an effect in
  the destriped spectrum (and maps). The spectra for the $B$ field are very similar.
  The solid line represents a standard CDM $E$ spectrum and the dashed line represents
  a CDM $E$ spectrum with reionization.
  \label{speun}}
  \begin{center}
  \epsfig{file=spe_5.eps,width=8.5cm}
  \end{center}
  \caption{Same as Fig.~\ref{speun} but for ${f_{\rm knee}/f_{\rm spin}=5}$.
  The spectra for the $B$ field are very similar.
  \label{specinq}}
\end{figure}
\begin{figure}[ht]
  \begin{center}
  \epsfig{file=spet_1.eps,width=8.5cm}
  \end{center}
  \caption{Same as Fig.~\ref{speun} for the $ET$-correlation for ${f_{\rm knee}/f_{\rm spin}=1}$.
  \label{spetun}}
  \begin{center}
  \epsfig{file=spet_5.eps,width=8.5cm}
  \end{center}
  \caption{Same as Fig.~\ref{spetun} but for ${f_{\rm knee}/f_{\rm spin}=5}$.
  \label{spetcinq}}
\end{figure}

\section{Discussion and Conclusions}\label{resul}

\paragraph{Comparison with other methods.}

Although no other method has yet been developed specifically for destriping
polarized data, many methods exist for destriping unpolarized CMB data,
which could be adapted to polarized data as well.

We first comment on the classical method, which consists in modelling the
measurement as
\begin{equation}
m_{t} = A_{tp}T_{p} + n_{t},
\end{equation}
where $A$ is the so-called ``pointing matrix'', $T$ a vector of temperatures in
the pixels of the sky, $m_{t}$ the data and $n_{t}$ the noise. The problem is
solved by inversion, yielding an estimator of the signal:
\begin{equation}
\tilde{T}_{p} = [A^t N^{-1} A]^{-1} A^t N^{-1} \, m,
\end{equation}
where $N=\langle nn^t \rangle$ is the noise correlation matrix and $A^t$ is the transpose of $A$.
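As an aside, the algebra of this estimator is easy to illustrate on toy
problems. The sketch below is a dense-matrix illustration only, not a
practical map-making code (for megapixel maps neither $A$ nor $N$ can be
handled as dense matrices, which is precisely the difficulty discussed next);
all names are ours.
\begin{verbatim}
# Toy generalized-least-squares map estimate T = (A^t N^-1 A)^-1 A^t N^-1 m.
import numpy as np

def gls_map(A, N, m):
    """A: (nsamples x npix) pointing matrix, N: (nsamples x nsamples)
    noise covariance, m: (nsamples,) data.  Only feasible for small tests."""
    Ninv = np.linalg.inv(N)
    lhs = A.T @ Ninv @ A          # normal matrix, npix x npix
    rhs = A.T @ Ninv @ m
    return np.linalg.solve(lhs, rhs)
\end{verbatim}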
This method can be extended straightforwardly to polarized measurements,
at the price of enlarging the matrix $A_{tp}$ by a factor of $3\times h$, the
vector $T_{p}$ (replaced by $(I_{p},Q_{p},U_{p})$) by a factor of 3, and the data
stream by a factor of $h$ (recall that $h$ is the number of polarimeters).
The implementation of this formally simple solution may turn into
a formidable problem when megapixel maps are to be produced. Numerical
methods have been proposed by a variety of authors
\cite{wright96b,tegmark97} that
exploit properties of the noise correlation matrix (symmetry,
band-diagonality) and of the pointing matrix (sparseness).
Such methods, however, rely critically on the assumption that the noise is
a Gaussian, stationary random process, which was a reasonable
assumption for CMB missions such as COBE, where the largest part of the
uncertainty comes from detector noise, but is probably not so for sensitive
missions such as {\sc Planck}. Our method requires only inverting a ${3n\times 3n}$ matrix,
where $n$ is the number of circles involved, and does not assume
anything about the statistical properties of the low-frequency drifts. It only assumes
a limit frequency (the knee frequency) above which the noise can be considered a white
Gaussian random process.

Another interesting method is the one used by Ganga in the
analysis of FIRS data \cite{ganga94th}, which is itself adapted from a
method developed originally by Cottingham~\cite*{cottingham87}. In that method,
the coefficients of splines fitting the low-frequency temperature drifts are obtained
by minimising the dispersion of the measurements within the pixels of the map. Such
a method, very similar in spirit to ours, could be adapted to
polarization. Splines are natural candidates to replace our offsets in
refined implementations of our algorithm.

Here, we have assumed that the averaged noise can be modeled as circle
offsets plus white noise (Eq.~\eqref{eq:offsetsmodel}), i.e.\ that the noise between
different measurements from the same bolometer is uncorrelated after
removal of the offset. This allowed us to simplify the $\chi^2$ to
that shown in Eq.~\eqref{eq:chi2}. In reality, the circle offsets do not
completely remove the low-frequency noise and some
correlation remains between the measurements. The amount of correlation is
directly related to the value of ${f_\mathrm{knee}/f_\mathrm{spin}}$: the smaller
${f_\mathrm{knee}/f_\mathrm{spin}}$, the smaller the remaining correlation.
Figs.~\ref{sptun}--\ref{spetcinq}
already contain the errors induced by the fact that we did not
include these correlations in the covariance matrix, and thus
demonstrate that the effect is small for ${f_\mathrm{knee}/f_\mathrm{spin}\sim 1}$.

\paragraph{Conclusion.} The destriping as implemented in this paper
removes low-frequency drifts down to the white-noise level provided that ${f_\mathrm{knee}/f_\mathrm{spin}\le 1}$.
For larger ${f_\mathrm{knee}}$, the simple offset model for the averaged noise could be
replaced with a more accurate higher-order model that destripes to
better precision, provided the scan strategy allows it, as discussed in Delabrouille~et~al.~\cite*{delabrouille98e}.
We are currently working on improving our algorithm to account for
these effects. However, despite the shortcomings of our model, it
still appears to be robust for small ${f_\mathrm{knee}}$ and can serve as a first-order
analysis tool for real missions.
In particular, our technique can not only be used for the {\sc Planck} HFI
and LFI, but can also be adopted for other CMB missions with circular
scanning strategies, such as COSMOSOMAS \cite{rebolo98}.

\begin{acknowledgement}
We would like to thank our referee for very useful suggestions.
\end{acknowledgement}