\\section*{Supplementary Material}

\\renewcommand{\\theequation}{S\\arabic{equation}}
\\renewcommand{\\thefigure}{S\\arabic{figure}}
\\renewcommand{\\bibnumfmt}[1]{[S#1]}
\\renewcommand{\\citenumfont}[1]{S#1}


\\section{Convergence to the classical limit for the 1D harmonic oscillator}


Here we show that the Gaussian ensemble converges to the classical microcanonical ensemble in the $\\hbar\\to0$ limit and reproduces the leading quantum correction to it for a 1D harmonic oscillator in a coherent state.
In this example, there is only one integral of motion (the Hamiltonian itself) in \\esref{clGME1} and \\re{eq:GME}.
Suppose the initial state is a coherent state with eigenvalue
$z$: $\\hat{a} |z\\rangle = z|z\\rangle$, where $\\hat{a}$ is the annihilation operator.
This corresponds to taking a particle in the ground state of $\\hat H_0=(\\hat{p} - p_0)^2\/(2m)+m \\omega^2 (\\hat{q} - q_0)^2 \/ 2 $
and time-evolving with $\\hat H= \\hat{p}^2\/(2m)+m \\omega^2 \\hat{q}^2 \/ 2$, where
$$
z = \\sqrt{\\frac{m\\omega}{2\\hbar}}q_0 + i \\frac{p_0}{\\sqrt{2m\\hbar\\omega}},
$$
i.e. to a quantum quench from $p_0, q_0\\ne 0$ to $p_0, q_0= 0$.

To construct $\\rho_\\mathrm{G}$, we need to fix two parameters, $\\mu$ and $\\Sigma^{-1}\\equiv 2\\sigma^2$, with the help of \\esref{eq:charge} and \\re{eq:correlation} for $\\langle \\hat H\\rangle_0$ and $\\langle \\hat H^2\\rangle_0$. Decomposing $|z\\rangle$ into number operator eigenstates $|n\\rangle$, we obtain
\\begin{align}
C\\sum_{n=0}^\\infty \\hbar \\omega n e^{-\\frac{(\\hbar\\omega n - \\mu)^2}{2\\sigma^2}} &= \\hbar \\omega |z|^2 \\label{eq:HO_first},\\quad (\\hat\\rho_\\mathrm{G})_{nn}=Ce^{-\\frac{(\\hbar\\omega n - \\mu)^2}{2\\sigma^2}},\\\\
C\\sum_{n=0}^\\infty (\\hbar \\omega n)^2 e^{-\\frac{(\\hbar\\omega n - \\mu)^2}{2\\sigma^2}} &= (\\hbar \\omega |z|)^2(|z|^2+1),
\\label{eq:HO_second}
\\end{align}
where $ C^{-1} = \\sum_{n=0}^\\infty \\exp(-(\\hbar\\omega n - \\mu)^2\/(2\\sigma^2))$
and we absorb the zero-point energy $\\hbar\\omega\/2$ into $\\mu$.
The classical limit is
\\begin{equation}
\\hbar\\to 0,\\quad |z|^2\\to \\infty,\\quad \\hbar \\omega|z|^2=\\frac{p_0^2}{2m}+\\frac{m\\omega^2 q_0^2}{2}\\equiv E_0=\\mbox{fixed}.
\\label{cllim}
\\end{equation}
In this limit, the sums in \\esref{eq:HO_first} and \\re{eq:HO_second} turn into integrals, resulting in $\\mu = E_0$ and $\\sigma = \\hbar \\omega |z|$.
We see that $\\sigma\\to0$, $\\hbar\\omega n\\to E$ and $\\hat \\rho_\\mathrm{G}\\to C\\delta(E-E_0)$, i.e. we recover the classical microcanonical ensemble.

Now let us demonstrate that the Gaussian ensemble captures the leading quantum correction to infinite time averages. We restrict ourselves to powers of the number operator,
$\\hat{n}^k=(\\hat{a}^\\dag\\hat{a})^k $, where $k$ is a nonnegative integer. Note that the expansion in $\\hbar$ is equivalent to the expansion in $1\/|z|^2$, see \\eref{cllim}. 
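For finite $|z|^2$, the two conditions above can also be solved numerically. The following minimal sketch (ours, not part of the derivation; it assumes NumPy\/SciPy and sets $\\hbar\\omega=1$) fixes $\\mu$ and $\\sigma$ from \\esref{eq:HO_first} and \\re{eq:HO_second} and can be used to check the approach to $\\mu\\simeq E_0$ and $\\sigma\\simeq\\hbar\\omega|z|$ as $|z|^2$ grows:
\\begin{verbatim}
# Sketch (not from the text): solve the two moment conditions for mu and
# sigma at finite |z|^2 and compare with mu = E0, sigma = hbar*omega*|z|.
import numpy as np
from scipy.optimize import fsolve

hw = 1.0                      # hbar*omega, sets the energy unit
z2 = 50.0                     # |z|^2; the classical limit is z2 -> infinity
n = np.arange(20 * int(z2))   # truncate the sums far beyond the Gaussian weight

def conditions(params):
    mu, sigma = params
    w = np.exp(-(hw * n - mu) ** 2 / (2.0 * sigma ** 2))
    w /= w.sum()              # normalized weights C * exp(...)
    return [np.sum(hw * n * w) - hw * z2,
            np.sum((hw * n) ** 2 * w) - (hw ** 2) * z2 * (z2 + 1.0)]

mu, sigma = fsolve(conditions, x0=[hw * z2, hw * np.sqrt(z2)])
print(mu, sigma)              # approaches E0 = hw*z2 and hw*sqrt(z2)
\\end{verbatim}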
The infinite time average is\n\\begin{align}\n\\overline{\\langle \\hat{n}^k \\rangle}_{\\infty} =\\sum_{n=0}^\\infty n^k \\left|\\langle z|n\\rangle\\right|^2= |z|^{2k} \\left[ 1 + \\frac{k(k-1)}{2|z|^2} + O\\left(\\frac{1}{|z|^4}\\right) \\right]=|z|^{2k} \\left[ 1 + \\frac{k(k-1)}{2}\\frac{\\hbar\\omega}{E_0} + O\\left(\\frac{\\hbar^2\\omega^2}{E_0^2} \\right) \\right], \n\\label{TAnk}\n\\end{align}\nwhere we used the fact that $ \\left|\\langle z|n\\rangle\\right|^2=|z|^{2n} e^{-|z|^2}\/n!$ is a Poisson distribution with parameter $|z|^2$ whose $k^\\mathrm{th}$ factorial moment, i.e. the expectation value of $\\hat n(\\hat n-1)\\dots (\\hat n-k+1)$, is $|z|^{2k}$, which is straightforward to verify directly. \n\nNext, we evaluate the Gaussian ensemble averages. Let $n_0 = \\mu\/(\\hbar \\omega)$ and $s = \\sigma\/(\\hbar \\omega)$. We begin by showing that corrections to the classical answer $n_0=s^2=|z|^2$ as $|z|^2 \\rightarrow \\infty $ are exponentially small ($\\propto e^{-|z|^2}=e^{-E_0\/\\hbar\\omega}$) and therefore can be neglected when calculating corrections of order $\\hbar$. \n \\eref{eq:HO_first} becomes\n\\begin{align}\n\\frac{\\sum_{n=0}^{\\infty} n \\exp\\left(-\\frac{(n - n_0)^2}{2s^2}\\right)}{\\sum_{n=0}^{\\infty} \\exp\\left(-\\frac{(n - n_0)^2}{2s^2}\\right)} = |z|^2.\n\\label{eq:first_moment} \n\\end{align}\nObserve the following relation for any nonnegative integer $k$ as $n_0, s$, and $n_0\/s$ tend to infinity, \n\\begin{align}\n&\\bigg|\\sum_{n=0}^{\\infty} n^k \\exp\\left(-\\frac{(n - n_0)^2}{2s^2}\\right) - \\sum_{n=-\\infty}^{\\infty} n^k \\exp\\left(-\\frac{(n - n_0)^2}{2s^2}\\right) \\bigg|< \\sum_{n = 2n_0+1}^\\infty n^k \\exp\\left(-\\frac{(n - n_0)^2}{2s^2}\\right)<\\nonumber\\\\\n& \\int_{2n_0}^\\infty x^k \\exp\\left(-\\frac{(x - n_0)^2}{2s^2}\\right) dx = O\\left( s^2 n_0^{k-1} e^{-n_0^2\/s^2}\\right), \n\\end{align}\nwhere we obtained the last relation by integrating by parts. \nTherefore, we can extend summations in \\eref{eq:first_moment} from $[0,\\infty)$ to $(-\\infty, \\infty)$ with an exponentially small error, i.e. \n\\begin{align}\n|z|^2 &= \\frac{\\sum_{n=0}^{\\infty} n \\exp\\left(-\\frac{(n - n_0)^2}{2s^2}\\right)}{\\sum_{n=0}^{\\infty} \\exp\\left(-\\frac{(n - n_0)^2}{2s^2}\\right)} = \\frac{\\sum_{n=-\\infty}^{\\infty} (n+n_0) \\exp\\left(-\\frac{n^2}{2s^2}\\right)}{\\sum_{n=-\\infty}^{\\infty} \\exp\\left(-\\frac{n^2}{2s^2}\\right)} + O\\left(|z|^2e^{-|z|^2}\\right) \n= n_0 + O\\left(|z|^2e^{-|z|^2}\\right). \n\\end{align}\nSimilarly, we derive $s^2 = |z|^2$ up to an exponentially small error. \n\n \nTherefore, the Gaussian ensemble expectation value is\n\\begin{align}\n\\langle \\hat{n}^k \\rangle_\\mathrm{G} = \\frac{\\sum_{n=0}^{\\infty} n^k \\exp\\left[-(n - |z|^2)^2\/(2|z|^2)\\right]}{\\sum_{n=0}^{\\infty} \\exp\\left[-(n - |z|^2)^2\/(2|z|^2)\\right]}.\n\\end{align}\nWe evaluate the sums involved using the Poisson summation formula,\n\\begin{equation}\nA_k\\equiv\\sum_{n=0}^{\\infty} n^k \\exp\\left[\\frac{-(n - |z|^2)^2}{2|z|^2}\\right]=|z|^{2k+2}\\sum_{p=-\\infty}^\\infty\n\\int_0^\\infty e^{-|z|^2\\left[ (x-1)^2\/2-2i\\pi p x\\right]} x^k dx.\n\\end{equation}\nThe saddle-point analysis of the integrals on the r.h.s. shows that the contribution of $p\\ne 0$ terms is suppressed by a factor $e^{-2\\pi^2 |z|^2}$, i.e. 
it is sufficient to keep only the $p=0$ term,\n\\begin{equation}\nA_k=|z|^{2k+2}\\biggl[\\int_{-1}^\\infty (1+y)^k e^{-|z|^2 y^2\/2} dy+ O\\left(e^{-2\\pi^2 |z|^2}\\right)\\biggr]\n=|z|^{2k}\\sqrt{2\\pi}|z|\\left[1+\\frac{k(k-1)}{2}\\frac{1}{|z|^2}+O\\left(\\frac{1}{|z|^4}\\right)\\right],\n\\label{akint}\n\\end{equation}\nwhere we changed the variable $x=y+1$ in the integral and evaluated it with a simple version of the saddle-point method. Thus,\n\\begin{equation}\n\\langle \\hat{n}^k \\rangle_\\mathrm{G}=\\frac{A_k}{A_0}=|z|^{2k}\\left[ 1 + \\frac{k(k-1)}{2|z|^2} + O\\left(\\frac{1}{|z|^4}\\right) \\right]=|z|^{2k} \\left[ 1 + \\frac{k(k-1)}{2}\\frac{\\hbar\\omega}{E_0} + O\\left(\\frac{\\hbar^2\\omega^2}{E_0^2} \\right) \\right].\n\\label{gaussnk}\n\\end{equation}\nComparing \\esref{TAnk} and \\re{gaussnk}, we see that they agree up to terms proportional to $\\hbar$. In other words, Gaussian ensemble reproduces the leading term and the first quantum correction. \n\nThis agreement, however, does not extend to terms of order $\\hbar^2$ and higher. Consider, for example, $\\hat{n}^3$. \nThe exact infinite time average follows from the first three factorial moments mentioned above\n\\begin{equation}\n\\overline{\\langle \\hat{n}^3 \\rangle}_\\infty = |z|^6 + 3|z|^4 + |z|^2=|z|^6\\left[1+3\\frac{\\hbar\\omega}{E_0}+ \\frac{\\hbar^2\\omega^2}{E^2_0}\\right].\n\\end{equation}\nTo obtain the Gaussian ensemble, we evaluate the integral in the first equation in \\re{akint} for $k=3$ and $k=0$,\n\\begin{equation}\n\\langle \\hat{n}^3 \\rangle_\\mathrm{G} = |z|^6 + 3|z|^4 + O\\bigl(e^{-|z|^2\/2}\\bigr)=|z|^6\\left[1+3\\frac{\\hbar\\omega}{E_0}+ O\\bigl(e^{-E_0\/\\hbar\\omega}\\bigr)\\right],\n\\end{equation}\nwhere the error arises from $|z|^{2k}\\int_{-\\infty}^{-1} (1+y)^k e^{-|z|^2 y^2\/2}=O\\bigl(e^{-|z|^2\/2}\\bigr)$, which we estimated via repeated integration by parts.\nTherefore, there is a discrepancy of order $\\hbar^2$. For instance,\nsince $\\langle \\hat{q}^6 \\rangle = (5\/2) (\\hbar\/m\\omega)^3 \\langle \\hat{n}^3\\rangle +$ expectation values of lower powers of $\\hat{n}$ that \nare captured exactly by construction, \nthe discrepancy in the third order cumulant of $\\hat{q}^2$ between the Gaussian ensemble and exact infinite time averages is $2.5(\\hbar\/m\\omega)^3|z|^2=2.5\\hbar^2 E_0\/m^3\\omega^4$.\n\n\n\n\\section{Calculation of three-body observables in the free-fermion model}\n\nIn the main text, we compared GGE and the Gaussian ensemble for quenches in a free-fermion model\n\\begin{align}\n\\hat H = &-\\sum_{j=1}^L (e^{i \\phi} \\hat c_{j+1}^\\dag \\hat c_j +e^{-i\\phi} \\hat c_j^\\dag \\hat c_{j+1}) + \\sum_{j=1}^L[V_1 \\cos(Q j) + V_2 \\cos(2Q j)] \\hat n_j, \n\\end{align}\n with periodic boundary conditions and $Q = 2\\pi\/M$ with integer $M$, such that $L$ is a multiple of $M$. \nThe pre-quench Hamiltonian $\\hat H_\\mathrm{in}$ has $\\phi = V_1 = V_2 = 0$ and is therefore diagonal in the momentum basis,\n\\begin{align}\n\\hat H_\\mathrm{in} = - \\sum_{k} 2\\cos(k) \\hat \\tilde{c}_k^\\dag \\hat \\tilde{c}_k, \n\\end{align} \nwhere $\\hat \\tilde{c}_k = L^{-1\/2} \\sum_{j = 1}^L \\hat c_j e^{-i k j}$ and $k = 2\\pi \\nu\/L$ with $\\nu = 1,2, \\ldots, L$. \n\nDue to our choice of the modulation wavenumber $Q = 2\\pi\/M$, the quenched Hamiltonian $\\hat H_\\mathrm{fn}$ mixes momenta $k, k+Q, k+2Q, \\ldots k + (M-1)Q$ only among themselves. 
It is convenient to introduce a two-index momentum notation that reflects this property: \n$\\hat \\tilde{c}_k \\rightarrow \\hat \\tilde{c}_{q,\\alpha}$, where $q = 2\\pi\/L, 4\\pi\/L, \\ldots, 2\\pi\/M$, $\\alpha = 0, 1, \\ldots, M-1$, and $k = q + \\alpha Q$. \nThe quenched Hamiltonian splits into $L\/M$ independent sub-Hamiltonians (sectors)\n\\begin{align}\n\\hat H_\\mathrm{fn} = \\sum_{q} \\sum_{\\alpha,\\beta=0}^{M-1} h^q_{\\alpha,\\beta}\\hat\\tilde{c}_{q,\\alpha}^\\dag \\hat \\tilde{c}_{q,\\beta} = \\sum_{q} \\sum_{\\gamma=0}^{M-1} \\epsilon(q,\\gamma) \\hat N_{q,\\gamma},\\quad \\hat N_{q,\\gamma}=\\hat b^\\dag_{q,\\gamma} \\hat b_{q,\\gamma}, \n\\label{hfn}\n\\end{align}\nwhere $h^q_{\\alpha,\\beta}$ is an $M \\times M$ matrix,\n\\begin{align}\n{\\bm h}^q = \n\\begin{pmatrix}\n-2 \\cos(q+\\phi) && V_1\/2 && V_2\/2 &&\\cdots && V_1\/2 \\\\\nV_1\/2 && -2\\cos(q+\\phi + Q) && V_1\/2 && \\cdots && V_2\/2\\\\\nV_2\/2 && V_1\/2 && -2\\cos(q + +\\phi + 2Q) && \\cdots &&\\vdots \\\\\n\\vdots && \\vdots && \\vdots && \\ddots && V_1\/2 \\\\\nV_1\/2 && \\cdots && V_2\/2 && V_1\/2 && -2\\cos(q + \\phi+ (M-1)Q) \n\\end{pmatrix}.\n\\end{align}\n Diagonalizing ${\\bm h}^q$, we obtain single-particle energies ${\\bm \\epsilon}(q) = ({\\bf U}^q)^{-1} {\\bf h}^q {\\bf U}^q$ and new fermion operators $\\hat b_{q,\\gamma} = \\sum_\\beta (U^q)^{-1}_{\\gamma,\\beta} \\hat \\tilde{c}_{q,\\beta}$. \n \n The conservation laws are mode occupation numbers $\\hat N_{q,\\gamma}$. The GGE and Gaussian ensemble density matrices are\n \\begin{align}\n \\hat\\rho_\\mathrm{GGE}=\\exp\\biggl[-\\sum_{k,\\alpha} \\lambda_{k,\\alpha} \\hat N_{k,\\alpha} \\biggr],\\quad \\langle\\hat N_{k,\\alpha}\\rangle_\\mathrm{GGE}\\equiv\\frac{\\mathrm{tr}\\left(\\hat \\rho_\\mathrm{GGE} \\hat N_{k,\\alpha}\\right)}{\\mathrm{tr }\\,\\hat\\rho_\\mathrm{GGE}}=\\langle\\hat N_{k,\\alpha}\\rangle_0,\\label{matr}\\\\ \n \\hat\\rho_\\mathrm{G}= \\exp\\biggl[-\\!\\!\\!\\sum_{p,q,\\alpha,\\beta} \\sigma_{p\\alpha,q\\beta} \\hat N_{p,\\alpha} \\hat N_{q,\\beta} \\biggr],\\quad \\langle\\hat N_{p,\\alpha}\\rangle_\\mathrm{G}\\equiv\\frac{\\mathrm{tr}\\left(\\hat \\rho_\\mathrm{G} \\hat N_{p,\\alpha}\\right)}{\\mathrm{tr }\\,\\hat\\rho_\\mathrm{G}}=\\langle\\hat N_{p,\\alpha}\\rangle_0,\\quad \n \\langle\\hat N_{p,\\alpha} \\hat N_{q,\\beta}\\rangle_\\mathrm{G}=\\langle\\hat N_{p,\\alpha} \\hat N_{q,\\beta}\\rangle_0,\\label{Gmatr}\n \\end{align}\nwhere we used $ \\hat N_{q,\\beta}= \\hat N_{q,\\beta}^2$ to absorb the linear in $\\hat N_{q,\\beta}$ terms in the definition of $\\hat\\rho_\\mathrm{G}$ into the quadratic part.\n \nWe are interested in the three-point correlation function of the lattice site occupation number\n\\begin{align}\n\\langle \\hat n_j \\hat n_\\ell \\hat n_m \\rangle&= \\langle \\hat c^\\dag_j \\hat c_j \\hat c^\\dag_\\ell \\hat c_\\ell \\hat c_m^\\dag \\hat c_m \\rangle \\\\\n& = \\frac{1}{L^3} \\sum_{k,q,u,v,p,s}\\sum_{\\alpha,\\beta,\\gamma,\\delta,\\zeta,\\eta}e^{i[k - q + (\\beta - \\alpha)Q]j + i[u - v + (\\delta - \\gamma)Q]\\ell + i[s - p + (\\eta - \\zeta)Q]m}\\langle \\hat\\tilde{c}^\\dag_{q,\\alpha} \\hat\\tilde{c}_{k,\\beta} \\hat\\tilde{c}^\\dag_{v,\\gamma} \\hat\\tilde{c}_{u,\\delta} \\hat\\tilde{c}^\\dag_{p,\\zeta} \\hat\\tilde{c}_{s,\\eta}\\rangle\\\\\n&= \\frac{1}{L^3} \\sum_{k,q,u,v,p,s}\\sum_{\\alpha,\\beta,\\gamma,\\delta,\\zeta,\\eta}e^{i[k - q + (\\beta - \\alpha)Q]j + i[u - v + (\\delta - \\gamma)Q]\\ell + i[s - p + (\\eta - \\zeta)Q]m} \\times 
\\nonumber\\\\\n&\\quad\\quad\\sum_{\\alpha',\\beta',\\gamma',\\delta',\\zeta',\\eta'} (U^{q})^*_{\\alpha,\\alpha'} (U^k)_{\\beta,\\beta'} (U^{v})^*_{\\gamma,\\gamma'} (U^{u})_{\\delta,\\delta'}(U^p)^*_{\\zeta,\\zeta'} (U^s)_{\\eta,\\eta'}\\langle \\hat b^\\dag_{q,\\alpha'}\\hat b_{k,\\beta'} \\hat b^\\dag_{v, \\gamma'} \\hat b_{u,\\delta'} \\hat b^\\dag_{p,\\zeta'} \\hat b_{s,\\eta'}\\rangle. \n\\label{eq:full_three_body}\n\\end{align}\nTherefore, we need to evaluate the following average: \n\\begin{align}\n\\langle \\hat b^\\dag_{q,\\alpha'} \\hat b_{k,\\beta'} \\hat b^\\dag_{u,\\gamma'} \\hat b_{v,\\delta'} \\hat b^\\dag_{p,\\zeta'}\\hat b_{s,\\eta'} \\rangle, \n\\label{eq:three_body}\n\\end{align} \nwith respect to the time-evolved state of the system as well as Gaussian ensemble and GGE.\n\n \nThe time-evolution is\n\\begin{align}\n\\langle \\hat b^\\dag_{q,\\alpha'}\\hat b_{k,\\beta'} \\hat b^\\dag_{v, \\gamma'} \\hat b_{u,\\delta'} \\hat b^\\dag_{p,\\zeta'} \\hat b_{s,\\eta'} \\rangle_t = e^{i[\\epsilon(q,\\alpha') - \\epsilon(k,\\beta') + \\epsilon(v,\\gamma') - \\epsilon(u,\\delta')) + \\epsilon(p,\\zeta') - \\epsilon(s,\\eta')]t} \\langle \\hat b^\\dag_{q,\\alpha'}\\hat b_{k,\\beta'} \\hat b^\\dag_{v, \\gamma'} \\hat b_{u,\\delta'} \\hat b^\\dag_{p,\\zeta'} \\hat b_{s,\\eta'} \\rangle_0, \n\\end{align}\nwhere $\\langle \\ldots \\rangle_t$ is the expectation value at time $t$. \nThe infinite time average is nonzero only for terms with zero phase factor, i.e. when $ \\epsilon(q,\\alpha') - \\epsilon(k,\\beta') + \\epsilon(v,\\gamma') - \\epsilon(u,\\delta') + \\epsilon(p,\\zeta') - \\epsilon(s,\\eta') = 0$. Since we removed time-reversal and particle-hole symmetries, there are no\ndegeneracies in the single-particle spectrum as well as no two-particle and three-particle resonances. \nConsequently, the time average is zero unless the double indices on creation and annihilation operators are pairwise equal, i.e. the set $\\{(q,\\alpha'), (v,\\gamma'), (p,\\zeta')\\}$ is a permutation of $\\{(k,\\beta'), (u,\\delta'), (s,\\eta')\\}$. Therefore, only terms that can be cast into the form $\\langle \\hat N_{q,\\alpha}\\hat N_{r,\\beta} \\hat N_{s, \\gamma} \\rangle_t=\\langle \\hat N_{q,\\alpha}\\hat N_{r,\\beta} \\hat N_{s, \\gamma} \\rangle_0$ survive. The same holds for expectation value \\re{eq:three_body} evaluated in any eigenstate of $\\hat H_\\mathrm{fn}$ and hence for both ensemble averages. 
Thus,\n\\begin{equation}\n\\overline{\\langle \\hat n_j \\hat n_\\ell \\hat n_m \\rangle}_\\infty-\\langle \\hat n_j \\hat n_\\ell \\hat n_m \\rangle_\\mathrm{ens}=\\frac{1}{L^3}\\!\\!\\!\\sum_{q,r,s;\\alpha,\\beta,\\gamma} R^{qrs}_{\\alpha\\beta\\gamma}\n\\left(\\langle \\hat N_{q,\\alpha}\\hat N_{r,\\beta} \\hat N_{s, \\gamma} \\rangle_0-\\langle \\hat N_{q,\\alpha}\\hat N_{r,\\beta} \\hat N_{s, \\gamma} \\rangle_\\mathrm{ens}\\right),\n\\label{diff}\n\\end{equation}\nwhere $R^{qrs}_{\\alpha\\beta\\gamma}$ are coefficients of order one and $\\langle\\dots\\rangle_\\mathrm{ens}$ stands for the average with respect to either ensemble.\n\nBecause $\\hat \\rho_\\mathrm{GGE}$ in \\eref{matr} is a (tensor) product of functions of individual occupation numbers \\cite{product},\n\\begin{equation}\n\\langle \\hat N_{q,\\alpha}\\hat N_{r,\\beta} \\hat N_{s, \\gamma} \\rangle_\\mathrm{GGE}=\n\\langle \\hat N_{q,\\alpha}\\rangle_\\mathrm{GGE}\\langle\\hat N_{r,\\beta} \\rangle_\\mathrm{GGE}\\langle\\hat N_{s, \\gamma} \\rangle_\\mathrm{GGE}=\\langle \\hat N_{q,\\alpha}\\rangle_0\\langle\\hat N_{r,\\beta} \\rangle_0\\langle\\hat N_{s, \\gamma} \\rangle_0,\n\\label{GGEfactor}\n\\end{equation}\nas long as no two occupation number operators, i.e. \\textit{pairs} of indices coincide, $\\{q,\\alpha\\}\\ne \\{r,\\beta\\}\\ne \\{s, \\gamma\\}$ and $\\{q,\\alpha\\} \\ne \\{s, \\gamma\\}$. Since occupation numbers in different sectors are uncorrelated in the initial state, the same factorization holds for $\\langle \\hat N_{q,\\alpha}\\hat N_{r,\\beta} \\hat N_{s, \\gamma} \\rangle_0$ if $q, r, s$ are distinct. For most remaining terms, when $q=r$ (but $\\alpha\\ne\\beta$) or $q=s$ (but $\\alpha\\ne\\gamma$) or $r=s$ (but $\\beta\\ne\\gamma$), the GGE and initial state averages do not necessarily agree. In other words, GGE fails to capture the correlations between different occupation numbers within the same sector. Since the number of such terms($\\propto$ number of pairs of sectors) is of order $(L\/M)^2\\propto L^2$, $\\overline{\\langle \\hat n_j \\hat n_\\ell \\hat n_m \\rangle}_\\infty-\\langle \\hat n_j \\hat n_\\ell \\hat n_m \\rangle_\\mathrm{GGE}\\propto 1\/L$ at large $L$ ($M$ is fixed). Similarly, the Gaussian ensemble automatically matches $\\langle \\hat N_{q,\\alpha}\\hat N_{r,\\beta} \\hat N_{s, \\gamma} \\rangle_0$ for distinct $q, r, s$ and for\n$q=r\\ne s$, $q\\ne r=s$, $q=s\\ne r$, but not for $q=r=s$ (except when two of the indices $\\alpha,\\beta,\\gamma$ are equal). In other words, it reproduces two-body, but not three-body correlations between occupation numbers in the initial state. The number of $q=r=s$ terms is proportional to $L$, so $\\overline{\\langle \\hat n_j \\hat n_\\ell \\hat n_m \\rangle}_\\infty-\\langle \\hat n_j \\hat n_\\ell \\hat n_m \\rangle_\\mathrm{G}\\propto 1\/L^2$. This behavior with the system size $L$ is in agreement with Fig.~3 in the main text for both ensembles. \nThe fact that the Gaussian ensemble is exact for one- and two-point correlation functions of lattice site occupation numbers forced us to consider three-point functions. \n\n\\subsection{Computing initial state and ensemble averages}\n\nAnother implication of \\eref{GGEfactor} is that we need not determine the Lagrange multiplies $\\lambda_{q,\\alpha}$ for GGE in \\eref{matr}, since GGE averages reduce to expectation values of single mode occupation numbers $\\langle\\hat N_{k,\\alpha}\\rangle_0$ in the initial state. 
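Before giving the explicit overlap formulas, a small numerical sketch (ours; it assumes NumPy and illustrative values of $\\phi$, $V_1$, $V_2$ and of the set of momenta occupied in the pre-quench ground state) shows how a single sector is diagonalized and how the post-quench mode occupations that enter the expressions below are obtained:
\\begin{verbatim}
# Sketch (ours): build the M x M sub-Hamiltonian h^q of one sector,
# diagonalize it, and compute the occupations <N_{q,gamma}>_0 of the
# post-quench modes in the pre-quench ground state.
import numpy as np

M, phi, V1, V2 = 6, 0.3, 1.0, 0.7   # illustrative parameter values
Q = 2 * np.pi / M
L = 120
q = 2 * np.pi / L                   # one momentum labelling the sector

# diagonal -2cos(q + phi + alpha*Q); cyclic couplings V1/2 and V2/2
# for momentum transfers Q and 2Q, as in the matrix h^q above
h = np.diag([-2 * np.cos(q + phi + a * Q) for a in range(M)])
for a in range(M):
    h[a, (a + 1) % M] += V1 / 2
    h[(a + 1) % M, a] += V1 / 2
    h[a, (a + 2) % M] += V2 / 2
    h[(a + 2) % M, a] += V2 / 2

eps, U = np.linalg.eigh(h)          # single-particle energies and U^q

occupied = [0, 3, 5]                # hypothetical part of {gr} in this sector
# <N_{q,gamma}>_0 = sum over occupied beta of |(U^q)^{-1}_{gamma,beta}|^2
N0 = np.array([sum(abs(U[b, g]) ** 2 for b in occupied) for g in range(M)])
print(eps, N0, N0.sum())            # particle number in the sector is conserved
\\end{verbatim}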
\n\nWe do however need $\\langle\\hat N_{k,\\alpha}\\rangle_0$ and we also need \n $\\langle\\hat N_{p,\\alpha} \\hat N_{q,\\beta}\\rangle_0$ in \\esref{Gmatr} to construct the Gaussian ensemble,\n\\begin{align}\n \\langle\\hat N_{k,\\alpha}\\rangle_0= \\langle 0| \\prod_q \\hat\\tilde{c}_q (\\hat b_{k,\\alpha}^\\dag \\hat b_{k,\\alpha}) \\prod_{p} \\hat\\tilde{c}^\\dag_p|0\\rangle &= \\sum_\\beta |(U^k)^{-1}_{\\alpha,\\beta}|^2\\langle 0| \\prod_q \\hat \\tilde{c}_q (\\hat \\tilde{c}^\\dag_{k,\\beta} \\hat \\tilde{c}_{k,\\beta}) \\prod_p \\hat\\tilde{c}^\\dag_p|0\\rangle \n= \\!\\!\\!\\sum_{\\beta: k + \\beta Q \\in \\{\\mathrm{gr}\\}}\\!\\!\\! |(U^k)^{-1}_{\\alpha,\\beta}|^2, \n\\end{align}\n\\begin{align}\n \\langle\\hat N_{p,\\alpha} \\hat N_{q,\\beta}\\rangle_0 &= \\langle 0| \\prod_u \\hat\\tilde{c}_u (\\hat b_{p,\\alpha}^\\dag \\hat b_{p,\\alpha} \\hat b_{q,\\beta}^\\dag \\hat b_{q,\\beta}) \\prod_v \\hat\\tilde{c}^\\dag_v|0\\rangle =\\nonumber\\\\\n& \\sum_{\\gamma,\\gamma'\\!,\\delta,\\delta'} (U^p)^{-1}_{\\alpha,\\gamma} (U^{p*})^{-1}_{\\alpha,\\gamma'} (U^q)^{-1}_{\\beta,\\delta} (U^{q*})^{-1}_{\\beta,\\delta'} \\langle 0| \\prod_u \\hat \\tilde{c}_u (\\hat\\tilde{c}^\\dag_{p,\\gamma'} \\hat\\tilde{c}_{p,\\gamma} \\hat\\tilde{c}^\\dag_{q,\\delta'} \\hat\\tilde{c}_{q,\\delta}) \\prod_v \\hat\\tilde{c}^\\dag_v|0\\rangle=\\nonumber\\\\\n&\\sum_{q + \\delta Q, p + \\gamma Q \\in \\{\\mathrm{gr}\\}} |(U^p)^{-1}_{\\alpha,\\gamma}|^2 |(U^q)^{-1}_{\\beta,\\delta}|^2 + \\sum_{\\substack{q + \\delta Q\\in \\{\\mathrm{gr} \\} \\\\ q + \\gamma Q\\not\\in \\{\\mathrm{gr} \\}}} (U^q)^{-1}_{\\alpha,\\gamma} (U^{q*})^{-1}_{\\beta,\\gamma} (U^q)^{-1}_{\\alpha,\\delta} (U^{q*})^{-1}_{\\beta,\\delta}, \n\\end{align} \nwhere $\\{ \\mathrm{gr}\\}$ is the set of momenta occupied in the ground state of the pre-quench Hamiltonian. \n\nTo construct the Gaussian ensemble, we have to solve the last two equations in \\re{Gmatr} for $ \\sigma_{p\\alpha,q\\beta}$. Since $\\langle\\hat N_{p,\\alpha} \\hat N_{q,\\beta}\\rangle_0=\\langle\\hat N_{p,\\alpha}\\rangle_0\\langle \\hat N_{q,\\beta}\\rangle_0$ for $p\\ne q$ in the initial state, we set $ \\sigma_{p\\alpha,q\\beta}=0$ for $p\\ne q$. Then, $\\hat\\rho_\\mathrm{G}$ is a tensor product over different sectors labeled by $p$, which ensures \\cite{product} $\\langle\\hat N_{p,\\alpha} \\hat N_{q,\\beta}\\rangle_\\mathrm{G}=\\langle\\hat N_{p,\\alpha}\\rangle_\\mathrm{G}\\langle \\hat N_{q,\\beta}\\rangle_\\mathrm{G}$. Thus, we are left with\n\\begin{equation}\n \\frac{\\mathrm{tr}\\left(\\hat \\rho_\\mathrm{G} \\hat N_{p,\\alpha}\\right)}{\\mathrm{tr }\\,\\hat\\rho_\\mathrm{G}}=\\langle\\hat N_{p,\\alpha}\\rangle_0,\\quad \n \\frac{\\mathrm{tr}\\left(\\hat \\rho_\\mathrm{G} \\hat N_{p,\\alpha} \\hat N_{p,\\beta}\\right)}{\\mathrm{tr }\\,\\hat\\rho_\\mathrm{G}} =\\langle\\hat N_{p,\\alpha} \\hat N_{p,\\beta}\\rangle_0,\\quad \\hat\\rho_\\mathrm{G}= \\exp\\biggl[-\\!\\!\\!\\sum_{p,\\alpha,\\beta} \\sigma_{p\\alpha,p\\beta} \\hat N_{p,\\alpha} \\hat N_{p,\\beta} \\biggr].\n \\label{Gmatr1}\n \\end{equation}\n There are $M(M+1)\/2$ nonlinear equations for $M(M+1)\/2$ unknown $\\sigma_{p\\alpha,p\\beta}$ for each of $L\/M$ sectors labeled by $p$, a total of $L(M+1)\/2$ equations, to be solved numerically. This is feasible for moderate $M$($\\le 12$). \n \n Note also that the number of particles in each sector [each sub-Hamiltonian in \\eref{hfn}] is conserved. 
Therefore, the number of eigenstates $|n\\rangle$ of $\\hat H_\\mathrm{fn}$ involved in the time-evolution in each sector is
$C^M_m$. The infinite time average of an observable $\\hat O$ in the absence of degeneracies is $\\overline{\\langle \\hat O\\rangle}_\\infty=\\sum_n |c_n|^2\\langle n|\\hat O|n\\rangle$, where $c_n$ are the coefficients in the decomposition of the initial state into the eigenstates (diagonal ensemble). We need $M(M+1)\/2>C^M_m$ or else the Gaussian ensemble becomes exact as it has enough parameters to match all $|c_n|^2$.
The smallest $M$ that satisfies this criterion is $M=6$ at $m = 3$. This dictates our choice of $M = 6$ and $\\mbox{filling fraction}=1\/2$.

Finally, we comment on a technical aspect of the computation. Even though we are dealing with a
free-fermion model, a direct computation of three-point correlation functions (unlike two- and one-point ones) is prohibitive at large $L$ ($\\geq 120$) due to the large number $(\\propto L^3)$ of nonzero terms in \\eref{eq:full_three_body}. However, only a small fraction of these terms contribute to the difference between ensemble and infinite time averages in \\eref{diff}. As discussed above, there is a factor of $(L\/M)^2$ reduction in the number of terms for the Gaussian ensemble and a factor of $L\/M$ for GGE. This allows us to go to much larger $L$ in the numerical evaluation of the difference.

\\section{Introduction}
Visual content recognition and understanding have made great progress based on advances in deep learning methods, which construct discriminative models by training on large-scale labeled data. However, two factors limit current deep learning methods in efficiently learning new categories. One is that the human annotation cost is high for large-scale data (for example, thousands of diverse samples in the same category and hundreds of categories in one cognitive domain); the other is that the rare samples of some categories are not sufficient for training a discriminative model. Therefore, learning a discriminative model from only a few samples per category remains a challenging problem. To address this problem, few-shot learning \\cite{vinyals2016matching} \\cite{snell2017prototypical} \\cite{finn2017model} \\cite{RaviL17} \\cite{sung2018learning} \\cite{qi2018low} \\cite{SatorrasE18} \\cite{kim2019edge} \\cite{lee2019meta} \\cite{sun2019meta} \\cite{peng2019few} \\cite{chen2019a}, inspired by the human visual system, has become an attractive research direction that generalizes the learning model to new classes with only a few samples of each novel category by feature learning \\cite{ghiasi2018dropblock} \\cite{wu2018unsupervised} \\cite{donahue2019large} \\cite{8955914} \\cite{8948295} \\cite{saikia2020optimized} or meta-learning \\cite{chen2019a} \\cite{ravichandran2019few} \\cite{dhillon2019baseline} \\cite{fei2020meta} \\cite{zhang2020rethinking} \\cite{luo2019learning}. Feature learning emphasizes the construction of feature generation and extraction models based on invariant, transferable information, while meta-learning focuses on modeling the relevance between samples to mine their common relationships through episodic training.

Meta-learning can transfer the available knowledge across a collection of separate tasks and propagate the latent structure information to enhance model generalization and avoid overfitting. 
Therefore, meta-learning is one of most promising directions for few-shot learning. However, meta-learning is constructed based on the large-scale separated tasks, and each task have the respective metric criterion that causes the gap of the transfer information between the samples of the separated tasks(the details in figure \\ref{fig-2}). Although existing methods can relieve this gap to a certain extend by the same sample filling into the different tasks, it is still difficult to build the approximated metric criterion of the different tasks for efficiently information transfer and propagation. Therefore, we present HOSP-GNN that attempts to construct the approximated metric criterion by mining high-order structure and updates these metric values between samples by constraining data manifold structure for few-shot learning. Figure \\ref{fig-1} illustrates the difference between HOSP-GNN and the most meta-learning for few-shot learning conceptually.\n\n\\begin{figure*}[ht]\n \\begin{center}\n\\includegraphics[width=1\\linewidth]{fig1.png}\n\\end{center}\n\\vspace{-0.2in}\n \\caption{The illustration of the difference between HOSP-GNN and the most meta-learning for few-shot learning.$S$ stands for support set; $Q$ is query set;the different color circles describe the labeled samples of the different classes in $S$; the gray color circles represent unlabeled samples in $Q$;the black solid lines between circles show the structure relationship of the labeled samples;the black dot lines between circles are the predicted structure relationship between labeled and unlabeled samples; the blue dot lines between circles across tasks indicate the latent high-order structure of samples.}\n \\label{fig-1}\n \\end{figure*}\n\nOur contributions mainly have two points as follow.\n\\begin{itemize}\n\\item One is to find the high-order structure for bridging the gap between the metric criterion of the separated tasks. The importance of this point is to balance the consistence of the same samples in the different tasks, and to enhance the transferability of the similar structure in the learning model.\n\\item Another is to smooth the structure evolution for improving the propagation stability of the model transfer by manifold structure constraints. This point try to minimize the difference of the transformation projection between the similar samples, and to maximize the divergence of the transformation projection between the dissimilar samples for efficiently preserving the graph structure learning of data samples.\n\\end{itemize}\n\n\n\\section{Related Works}\nIn recent few-shot learning, there mainly are two kinds of methods according to the different learning focuses. One is feature learning based on the data representation of model extraction, and another is meta-learning based on the metric relationship of model description.\n\n\\subsection{Feature Learning}\nFeature learning \\cite{cubuk2019autoaugment} \\cite{cubuk2019randaugment} \\cite{lim2019fast} \\cite{ghiasi2018dropblock} \\cite{wu2018unsupervised} \\cite{donahue2019large}for few-shot learning expects to inherit and generalize the well characteristics of the pre-train model based on large-scale samples training for recognizing new classes with few samples.\n\nBecause few samples often can not satisfy the necessary of the whole model training, the recent representative methods usually optimize the part parameters or structure of the pre-trained model by few samples for feature learning. 
For example,Bayesian optimization to Hyperband (BOHB) optimizes hyper-parameters by searching the smaller parameter space to maximize the validation performance for generic feature learning \\cite{saikia2020optimized}; Geometric constraints fine-tune the parameters of one network layer with a few training samples for extracting the discriminative features for the new categories \\cite{8948295}; Bidirectional projection learning (BPL) \\cite{8955914} utilizes semantic embedding to synthesize the unseen classes features for obtaining enough samples features by competitive learning. These methods attempt to find the features invariance by partly fine-tuning the pre-trained model with the different constraints for recognizing the new classes with the few instances.\n\nHowever, these methods can not explicitly formulate the metric rules for learning the discriminative model between new categories, moreover, these methods need retrain the model to adapt the distribution of new categories. It can lead to the degraded classification performance for few-shot learning and the more complicated optimization strategy in validation and test phases.\n\n\\subsection{Meta-Learning}\n\\label{ML}\nMeta-learning \\cite{vinyals2016matching} \\cite{sung2018learning} \\cite{chen2019a}\\cite{gidaris2018dynamic} \\cite{oreshkin2018tadam} \\cite{ravichandran2019few} \\cite{dhillon2019baseline}for few-shot learning tries to construct the relevances between samples in the base classes for generalizing the model to the new classes. These methods can learn the common structure relationship between samples by training on the collection of the separated tasks.In terms of the coupling between the model and the data, meta-learning can mainly be divided into two groups.\n\nOne group is the model optimization to quickly fit the distribution of new categories. Typical methods attempt to update the model parameters or optimizer for this purpose. For instance, meta-learner long short-term memory (LSTM) \\cite{RaviL17} can update the model parameters to initialize the classifier network for the quick training convergence in the few samples of each classes; Model-agnostic meta-learning (MAML) \\cite{finn2017model} can train the small gradient updating based on few learning data from a new task to obtain the well generalization performance; Latent embedding optimization(LEO)\\cite{rusu2019metalearning}\ncan learn the latent generative representation of model parameters based on data dependence to decouple the gradient adaptation from the high-dimension parameters space.\n\nAnother group is metric learning to describe the structure relationship of the samples between support and query data for directly simulating the similarity metric of the new categories. The recent methods trend to enhance the metric structure by constraint information for the better model generalization in the new categories. 
For example,edge-labeling graph neural network (EGNN)\\cite{kim2019edge} can update graph structure relationship to directly exploit the intra-cluster similarity and the inter-cluster dissimilarity by iterative computation; Meta-learning across meta-tasks (MLMT) \\cite{fei2020meta} can explore their relationships between the random tasks by meta-domain adaptation or meta-knowledge distillation for boosting the performance of existing few-shot learning methods; Absolute-relative Learning (ArL)\\cite{zhang2020rethinking} can both consider the class concepts and the similarity learning to complement their structure relationship for improving the recognition performance of the new categories; Continual meta-learning approach with Bayesian graph neural networks(CML-BGNN) \\cite{luo2019learning} can implement the continual learning of a sequence of tasks to preserve the intra-task and inter-task correlations by message-passing and history transition.\n\nIn recent work, meta-learning based on metric learning shows the promising performance for recognizing the new categories with the few samples. These methods initially focus on the structure relation exploitation between support set and query set by modeling metric distances, and subsequent works further mine the relevance by mimicking the dependence between the separated tasks for enhancing the discrimination of the new categories. However, these methods depend on the projection loss between the seen and unseen classes \\cite{fei2020meta} or Bayesian inference based on low-order structure (the metric of the pairwise data) \\cite{luo2019learning} for considering the structure relationship between the intra or inter tasks. It is difficult to describe the latent high-order structure from the global observation. Therefore, the proposed HOSP-GNN expects to capture the high-order structure relationship based on samples metric for naturally correlating the relevance between the intra or inter tasks for improving the performance of few-shot learning.\n\n\n\\section{High-order structure preserving graph neural network}\nFew-shot classification attempts to learn a classifier model for identifying the new classes with the rare samples. $C_{e}$ or $C_{n}$ respectively stands for a existing classes set with the large samples or a new classes set with the rare samples, and $C_{e}\\bigcap C_{n}=\\emptyset$,but they belong to the same cognise domain. The existing classes data set $D_{e}=\\{(x_{i},y_{i})|y_{i} \\in C_{e}, i=1,...,|D_{e} |\\}$, where $x_{i}$ indicates the $i$-th image with the class label $y_{i}$, $|D_{e}|$ is the number of the elements in $D_{e}$. Similarly, the new classes data set $D_{n}=\\{(x_{i},y_{i})|y_{i} \\in C_{n}, i=1,...,|D_{n} |\\}$, where $x_{i}$ indicates the $i$-th image with the class label $y_{i}$, $|D_{n}|$ is the number of the elements in $D_{n}$. If each new class includes $K$ labeled samples, the new classes data set is $K$-shot sample set. In other word, $|D_{n}|=K|C_{n}|$, where $|C_{n}|$ is the number of the elements in $C_{n}$. 
Few-shot learning is to learn the discriminative model from $D_{n}$ to predict the label of the image sample in the test set $D_{t}$ that comes from $C_{n}$ and $D_{n} \\bigcap D_{t}=\\emptyset$.\n\\subsection{Meta-learning for few-shot learning based on graph neural network}\nIn meta-learning, the classifier model can be constructed based on the collection of the separated tasks $\\mathcal{T}=\\{S,Q\\}$ that contains a support set $S$ from the labeled samples in $D_{n}$ and a query set $Q$ from unlabeled samples in $D_{t}$. To build the learning model for few-shot learning, $S$ includes $K$ labeled samples and $N$ classes, so this situation is called $N$-way-$K$-shot few-shot classification that is to distinguish the unlabeled samples from $N$ classes in $Q$.\n\nIn practise, few-shot classification often faces the insufficient model learning based on the new classes data set $D_{n}$ with the rare labeled samples and $D_{t}$ with unlabeled samples. In this situation, the model difficultly identifies the new categories. Therefore, many methods usually draw support from the transfer information of $D_{e}$ with a large labels samples to enhance the model learning for recognizing the new classes. Episodic training \\cite{sung2018learning} \\cite{kim2019edge} is an efficient meta-learning for few-shot classification. This method can mimic $N$-way-$K$-shot few-shot classification in $D_{n}$ and $D_{t}$ by randomly sampling the differently separated tasks in $D_{e}$ as the various episodics of the model training. In each episode, $\\mathcal{T}_{ep}=(S_{ep},Q_{ep})$ indicates the separated tasks with $N$-way-$K$-shot $T$ query samples, where the support set $S_{ep}=\\{(x_{i},y_{i})|y_{i}\\in C_{ep}, i=1,...,N\\times K\\}$, the query set $Q_{ep}=\\{(x_{i},y_{i})|y_{i}\\in C_{ep}, i=1,...,N\\times T\\}$, $S_{ep}\\cap Q_{ep}=\\emptyset$, and the class number $|C_{ep}|=N$. In the training phase, the class set $C_{ep}\\in C_{e}$, while in test phase, the class set $C_{ep}\\in C_{n}$. Many episodic tasks can be randomly sampled from $D_{e}$ to simulate $N$-way-$K$-shot learning for training the few-shot model, whereas the learned model can test the random tasks from $D_{n}$ for few-shot classification by $N$-way-$K$-shot fashion. If we construct a graph $G_{ep}=(\\mathcal{V}_{ep},\\mathcal{E}_{ep},\\mathcal{T}_{ep})$ (here, $\\mathcal{V}_{ep}$ is the vertex set of the image features in $\\mathcal{T}_{ep}$, and $\\mathcal{E}_{ep}$ is the edge set between the image features in $\\mathcal{T}_{ep}$.) 
for describing the sample structure relationship in each episodic task, meta-learning for few-shot learning based on $L$ layers graph neural network can be reformulated by the cross-entropy loss $L_{ep}$ as following.\n\\begin{align}\n\\label{Eq1}\n \\begin{aligned}\n L_{ep}&=-\\sum_{l=1}^{L}\\sum_{(x_{i},y_{i})\\in Q_{ep}}y_{i}\\log (h_{W}^{l}(f(x_{i},W_{f});S_{ep},G_{ep}))\\\\\n &=-\\sum_{l=1}^{L}\\sum_{(x_{i},y_{i})\\in Q_{ep}}y_{i}\\log (\\hat{y_{i}^{l}})\n \\end{aligned}\n\\end{align}\n\n\\begin{align}\n\\label{Eq1-1}\n \\begin{aligned}\n \\hat{y_{i}^{l}}=softmax(\\sum_{j\\neq i~~and~~c\\in C_{ep}}e_{ij}^{l}\\delta(y_{i}=c))\n \\end{aligned}\n\\end{align}\n\nhere, $\\hat{y_{i}^{l}}$ is the estimation value of $y_{i}$ in $l$th layer;$e_{ij}^{l}$ is edge feature of the $l$th layer in graph $G_{ep}$; $\\delta(y_{i}=c)$ is equal one when $y_{i}=c$ and zero otherwise;$f(\\bullet)$ with the parameter set $W_{f}$ denotes the feature extracting function or network shown in Figure \\ref{fig-4}(a); $h_{W}^{l}(f(x_{i});S_{ep},G_{ep})$ indicates few-shot learning model in the $l$th layer by training on $S_{ep}$ and $G_{ep}$, and $W$ is the parameter set of this model. This few-shot learning model can be exploited by the meta-training minimizing the loss function \\ref{Eq1}, and then recognize the new categories with the rare samples.\n\n\\subsection{High-order structure description}\nIn few-shot learning based on graph neural network, the evolution and generation of the graph plays a very important role for identifying the different classes. In each episodic task of meta-learning, existing methods usually measure the structure relationship of the samples by pairwise way, and an independence metric space with the unique metric criteria is formed by the similarity matrix in graph. In many episodic tasks training, the various metric criteria lead to the divergence between the different samples structure relationship in Figure \\ref{fig-2}. It is the main reason that the unsatisfactory classification of the new categories.\n\n\\begin{figure*}[ht]\n \\begin{center}\n\\includegraphics[width=1\\linewidth]{fig2.png}\n\\end{center}\n\\vspace{-0.2in}\n \\caption{The difference between the metric criteria of the episodic tasks. In each episodic training and testing, $S_{ep}$ stands for support set; $Q_{ep}$ is query set;the different color circles describe the labeled sample of the different classes in $S_{ep}$; the gray color circles represent unlabeled samples in $Q_{ep}$;the black solid lines between circles show the structure relationship of the labeled samples;the black dot lines between circles are the predicted structure relationship between labeled and unlabeled samples.}\n \\label{fig-2}\n \\end{figure*}\n\nTo reduce the difference between the metric criteria of the episodic tasks, we attempt to explore the high-order structure of the samples by building the latent connection. The traditional pairwise metric loses the uniform bench marking because of the normalization of the sample separation in independence tasks. However, the absolutely uniform bench marking is difficult to build the high-order structure relation between the samples of the different tasks. 
Therefore, \\textbf{we define the relative metric graph of multi-samples in a task as high-order structure relation}, and the same samples by random falling into the independence task make this relative metric relationship widely propagate to the other samples for approximating to the uniform bench marking under the consideration with the interaction relationship between the episodic tasks.\n\nMore concretely, the relative metric graph $\\hat{G}_{ep}=(\\hat{\\mathcal{V}}_{ep},\\hat{\\mathcal{E}}_{ep},\\mathcal{T}_{ep})$, where $\\mathcal{T}_{ep}=\\{(x_{i},y_{i})|(x_{i},y_{i})\\in S_{ep}~~or~~(x_{i},y_{i})\\in Q_{ep}, y_{i}\\in C_{ep},S_{ep}\\bigcap Q_{ep}=\\emptyset, i=1,...,N\\times (K+T)\\}$, the vertex set $\\hat{\\mathcal{V}}_{ep}=\\{v_{i}|i=1,...,N\\times (K+T)\\}$, the edge set $\\hat{\\mathcal{E}}_{ep}=\\{e_{ij}|i=1,...,N\\times (K+T)~~and~~j=1,...,N\\times (K+T)\\}$. To describe the relative relationship between features, we can build $L$ layers graph neural network for learning edge feature $e_{ij}^{l}$ (graph structure relationship) and feature representation $v_{i}^{l}$ in each layer, where $l=0,...,L$. In the initial layer, each vertex feature $v_{i}^{0}$ can be computed by feature difference as following.\n\\begin{align}\n\\label{Eq2-1}\n \\begin{aligned}\n&u_{i}^{0}=f(x_{i}),~~~~~~i=1,...,N\\times (K+T),\n \\end{aligned}\n\\end{align}\n\n\\begin{align}\n\\label{Eq2}\n \\begin{aligned}\nv_{i}^{0}=\\left\\{\n \\begin{aligned}\n&u_{i}^{0}-u_{i+1}^{0},~~~~~~i=1,...,N\\times (K+T)-1, \\\\\n&u_{i}^{0}-u_{1}^{0},~~~~~~~~~~i=N\\times (K+T),\n\\end{aligned}\n\\right.\n \\end{aligned}\n\\end{align}\nhere, $f(\\bullet)$ is the feature extracting network shown in Figure \\ref{fig-4}(a). The vertex can represented by two ways. One is that the initial vertex feature $u_{i}^{0}$ is described by the original feature. Another is that $v_{i}^{0}$ is a relative metric based on $u_{i}^{0}$ in $0$th layer. We expect to construct the higher order structure $e_{ij1}^{l}$ (the first dimension value of edge feature between vertex $i$ and $j$ in $l$ layer) based on this relative metric for representing edge feature under the condition with the pairwise similarity structure $e_{ij2}^{l}$ and dissimilarity structure $e_{ij3}^{l}$(these initial value of $0$ layer is defined by the labeled information of $S_{ep}$ in Equation \\ref{Eq3}). Therefore, the initial edge feature can be represented by the different metric method as following.\n\n\\begin{align}\n\\label{Eq3}\n \\begin{aligned}\ne_{ij}^{0}=\\left\\{\n \\begin{aligned}\n& [e_{ij1}^{0}~||~ e_{ij2}^{0}=1~||~ e_{ij3}^{0}=0],~~~~y_{i}=y_{j}~~and~~(x_{i},y_{i})\\in S_{ep}, \\\\\n& [e_{ij1}^{0}~||~ e_{ij2}^{0}=0~||~ e_{ij3}^{0}=1],~~~~y_{i}\\neq y_{j}~~and~~(x_{i},y_{i})\\in S_{ep}, \\\\\n& [e_{ij1}^{0}~||~ e_{ij2}^{0}=0.5~||~ e_{ij3}^{0}=0.5],~~~~ otherwise,\n\\end{aligned}\n\\right.\n \\end{aligned}\n\\end{align}\nhere, $||$ is concatenation symbol, $e_{ij1}^{0}$ can be calculated by the metric distance of the difference in Equation \\ref{Eq4}, and $e_{ij1}^{l}$ can be updated by Equation \\ref{Eq7}. 
It shows the further relevance between the relative metric, and indicates the high-order structure relation of the original features.\n\n\\begin{align}\n\\label{Eq4}\n \\begin{aligned}\ne_{ij1}^{0}=1-\\parallel v_{i}^{0}-v_{j}^{0} \\parallel_{2}\/\\sum_{k}\\parallel v_{i}^{0}-v_{k}^{0} \\parallel_{2},~~~~(x_{i},y_{i})\\in S_{ep}\\bigcup Q_{ep},\n \\end{aligned}\n\\end{align}\nFigure \\ref{fig-3} shows the relationship between pairwise metric and high-order metric in $l$th layer, and the high-order metric involves any triple vertex features $u_{i}^{l}$,$u_{j}^{l}$ and $u_{k}^{l}$ in $\\hat{G}_{ep}$ in each task. In these features, $u_{j}^{l}$ is a benchmark feature that is randomly sampled by the separated tasks. The common benchmark feature can reduce the metric difference between samples of the separated tasks.\n\n\\begin{figure*}[ht]\n \\begin{center}\n\\includegraphics[width=1\\linewidth]{fig3.png}\n\\end{center}\n\\vspace{-0.2in}\n \\caption{The relationship between pairwise metric $d_{pairwise\\_metric}^{l}$(left figure) and high-order metric $d_{high-order\\_metric}^{l}$(right figure) in $l$th layer. $f_{p}^{l}(\\bullet)$ and $W_{p}^{l}$ respectively are pairwise metric network projection and parameter set in $l$th layer, while $f_{h}^{l}(\\bullet)$ and $W_{h}^{l}$ respectively are high-order metric network projection and parameter set in $l$th layer.The black vector indicates the original vertex, the blue vector is the low-order metric vector ( the relative metric based on the original vertex), and the red vector stands for the high-order metric vector.}\n \\label{fig-3}\n \\end{figure*}\n\n\\subsection{High-order structure preserving}\nHOSP-GNN can construct $L$ layers graph neural network for evolving the graph structure by updating the vertex and edge features. Moreover, we expect to preserve the high-order structure layer by layer for learning the discriminative structure between samples in the separated tasks. $l=1,...,L$ is defined as the layer number. In detail, $u_{i}^{l}$ can be updated by $u_{i}^{l-1}$,$v_{i}^{l-1}$ and $e_{ij}^{l-1}$ in Equation \\ref{Eq5}, while $e_{ij}^{l}$ can be updated by$u_{i}^{l-1}$, $v_{i}^{l-1}$ and $e_{ij}^{l-1}$ in Equation \\ref{Eq7},\\ref{Eq8} and \\ref{Eq9}.\n\n\\begin{align}\n\\label{Eq5}\n \\begin{aligned}\nu_{i}^{l}=f_{v}^{l}([\\sum_{j}\\tilde{e}_{ij1}^{l-1}v_{j}^{l-1}~||~\\sum_{j}\\tilde{e}_{ij2}^{l-1}u_{j}^{l-1}~||~\\sum_{j}\\tilde{e}_{ij3}^{l-1}u_{j}^{l-1}], W_{v}^{l}),\n \\end{aligned}\n\\end{align}\n\n\\begin{align}\n\\label{Eq5-1}\n \\begin{aligned}\nv_{i}^{l}=\\left\\{\n \\begin{aligned}\n&u_{i}^{l}-u_{i+1}^{l},~~~~~~i=1,...,N\\times (K+T)-1, \\\\\n&u_{i}^{l}-u_{1}^{l},~~~~~~~~~~i=N\\times (K+T),\n\\end{aligned}\n\\right.\n \\end{aligned}\n\\end{align}\nhere,$||$ is concatenation symbol, $\\tilde{e}_{ijk}^{l-1}=e_{ijk}^{l-1}\/\\sum_{k}e_{ijk}^{l-1}$ ($k=1,2,3$), and $f_{v}^{l}(\\bullet)$ is the vertex feature updating network shown in Figure \\ref{fig-4}(b),and $W_{v}^{l}$ is the network parameters in $l$th layer. This updating process shows that the current vertex feature is the aggregative transformation of the previous layer vertex and edge feature in the different metrics, and can propagate the representation information under the consideration with edge feature (high-order structure information) layer by layer evolution. In \\ref{Eq5}, high-order structure influences the vertex representation by transforming aggregation computation, but can not efficiently transfer layer by layer. 
Therefore, we expect to preserve high-order structure layer by layer by updating edge features. According to manifold learning \\cite{he2004locality} and structure fusion\\cite{Lin2014146}, structure information (the similarity relationship of samples) can be held from the original space to the projection space by minimizing the metric difference of these spaces. Similarly, high-order evolution based on graph neural network may obey the same rule for computing edge feature of each layer with the vertex feature updating. Therefore, we can construct the manifold loss by layer-by-layer computation for constraining the model optimization.\n\n\\begin{align}\n\\label{Eq6}\n \\begin{aligned}\nL_{ml}=&\\sum_{i,j,l}f_{h}^{l}(\\|v_{i}^{l}-v_{j}^{l}\\|_{2},W_{h}^{l})e_{ij1}^{l-1}+\\\\\n&\\sum_{i,j,l}f_{p}^{l}(\\|u_{i}^{l}-u_{j}^{l}\\|_{2},W_{p}^{l})e_{ij2}^{l-1}+\\\\\n&\\sum_{i,j,l}(1-f_{p}^{l}(\\|u_{i}^{l}-u_{j}^{l}\\|_{2},W_{h}^{l}))e_{ij3}^{l-1},\n\\end{aligned}\n\\end{align}\nhere,$L_{ml}$ is the loss of the manifold structure in the different layer and metric method (The first term is the manifold constrain for high-order structure, while the second and third terms are respectively the manifold constrain for similarity and dissimilarity); $f_{h}^{l}(\\bullet)$ is the high-order metric network in Figure \\ref{fig-4}(c) between vertex features, and $W_{h}^{l}$ is the parameter set of this network in $l$th layer; $f_{p}^{l}(\\bullet)$ is the pairwise metric network in Figure \\ref{fig-4}(c) between vertex features, and $W_{p}^{l}$ is it's parameter set in $l$th layer. \\ref{Eq6} shows that the different manifold structures between layers can be preserved for minimizing $L_{ml}$. The edge updating based on high-order structure preserving is as following.\n\n\\begin{align}\n\\label{Eq7}\n \\begin{aligned}\n\\bar{e}_{ij1}^{l}=\\frac{f_{h}^{l}(\\|v_{i}^{l}-v_{j}^{l}\\|_{2},W_{h}^{l})e_{ij1}^{l-1}}{\\sum_{k}f_{h}^{l}(\\|v_{i}^{l}-v_{k}^{l}\\|_{2},W_{h}^{l})e_{ik1}^{l-1}\/\\sum_{k}e_{ik1}^{l-1}},\n\\end{aligned}\n\\end{align}\n\n\\begin{align}\n\\label{Eq8}\n \\begin{aligned}\n\\bar{e}_{ij2}^{l}=\\frac{f_{p}^{l}(\\|u_{i}^{l}-u_{j}^{l}\\|_{2},W_{p}^{l})e_{ij2}^{l-1}}{\\sum_{k}f_{p}^{l}(\\|u_{i}^{l}-u_{k}^{l}\\|_{2},W_{p}^{l})e_{ik2}^{l-1}\/\\sum_{k}e_{ik2}^{l-1}},\n\\end{aligned}\n\\end{align}\n\n\\begin{align}\n\\label{Eq9}\n \\begin{aligned}\n\\bar{e}_{ij3}^{l}=\\frac{(1-f_{p}^{l}(\\|u_{i}^{l}-u_{j}^{l}\\|_{2},W_{p}^{l}))e_{ij3}^{l-1}}{\\sum_{k}(1-f_{p}^{l}(\\|u_{i}^{l}-u_{k}^{l}\\|_{2},W_{p}^{l}))e_{ik3}^{l-1}\/\\sum_{k}e_{ik3}^{l-1}},\n \\end{aligned}\n\\end{align}\n\n\\begin{align}\n\\label{Eq10}\n \\begin{aligned}\ne_{ij}^{l}=\\bar{e}_{ij}^{l}\/\\|\\bar{e}_{ij}^{l}\\|_{1}.\n \\end{aligned}\n\\end{align}\nTherefore, The total loss $L_{total}$ of the whole network includes $L_{ep}$ and $L_{ml}$.\n\n\\begin{align}\n\\label{Eq11}\n \\begin{aligned}\n L_{total}=L_{ep}+\\lambda L_{ml},\n \\end{aligned}\n\\end{align}\nhere, $\\lambda$ is the tradeoff parameter for balancing the influence of the different loss. Figure \\ref{fig-4} shows the network architecture of the proposed HOSP-GNN.\n\n\\begin{figure*}[hbp]\n \\begin{center}\n\\includegraphics[width=1\\linewidth]{fig4.png}\n\\end{center}\n\\vspace{-0.2in}\n \\caption{The network architecture of the proposed HOSP-GNN.(a) is the total network structure, (b) and (c) respectively are vertex and edge updating network in (a). 
MLP is a multilayer perceptron; DU indicates the difference unit for the relative metric; Conv stands for a convolutional block that includes $96$ channels of $1\\times 1$ convolution kernel, batch normalization unit, and LeakReLU unit;$S_{ep}$ stands for support set; $Q_{ep}$ is query set;the different color circles describe the labeled samples of the different classes in $S_{ep}$; the gray color circles represent unlabeled samples in $Q_{ep}$;the black solid lines between circles show the structure relationship of the labeled samples;the black dot lines between circles are the predicted structure relationship between labeled and unlabeled samples;$L_{ep}$ is the loss metric between the real labels and the predicted labels; $L_{ml}$ is the loss metric between high structures layer by layer;$v_{i}^{l}$ is the $i$th vertex feature in the $l$th layer of graph;$e_{ij}^{l}$ is the edge feature between the vertex $i$ and $j$ in the $l$th layer of graph;$f(\\bullet)$ denotes the feature extracting network;$f_{v}^{l}(\\bullet)$ indicates the vertex feature updating network in the $l$th layer; $f_{h}^{l}(\\bullet)$ denotes the high-order metric network between vertex features in the $l$th layer; $f_{p}^{l}(\\bullet)$ stands for the pairwise metric network between vertex features in the $l$th layer.}\n \\label{fig-4}\n \\end{figure*}\n\nTo indicate the inference details of HOSP-GNN, algorithm \\ref{algHOSP-GNN} shows the pseudo code of the proposed HOSP-GNN for predicting the labels of the rare samples. This algorithm process contains four steps. The first step (line 1 and line 2) initializes the vertex feature and the edge feature. The second step (line 4 and line 5) updates the vertex features layer by layer. The third step (from line 6 to line 8) updates the edge features layer by layer. 
The forth step (line 9) predicts the labels of the query samples.\n\\begin{algorithm}[ht]\n \\caption{The inference of the HOSP-GNN for few-shot learning}\n \\begin{algorithmic}[1]\n \\label{algHOSP-GNN}\n\\renewcommand{\\algorithmicrequire}{\\textbf{Input:}}\n\\renewcommand{\\algorithmicensure}{\\textbf{Output:}}\n\\renewcommand{\\algorithmicreturn}{\\textbf{Iteration:}}\n \\REQUIRE \\textit{Graph}, $\\hat{G}_{ep}=(\\hat{\\mathcal{V}}_{ep},\\hat{\\mathcal{E}}_{ep},\\mathcal{T}_{ep})$, where $\\mathcal{T}_{ep}=\\{(x_{i},y_{i})|(x_{i},y_{i})\\in S_{ep}~~or~~x_{i}\\in Q_{ep}, y_{i}\\in C_{ep},S_{ep}\\bigcap Q_{ep}=\\emptyset, i=1,...,N\\times (K+T)\\}$,$\\hat{\\mathcal{V}}_{ep}=\\{v_{i}|i=1,...,N\\times (K+T)\\}$, $\\hat{\\mathcal{E}}_{ep}=\\{e_{ij}|i=1,...,N\\times (K+T)~~and~~j=1,...,N\\times (K+T)\\}$;\\\\ \\textit{Model parameter}, $W=\\{W_{f},W_{v}^{l},W_{h}^{l}|l=1,...,L\\}$\n \\ENSURE The query samples of the predicted labels $\\{\\hat{y}_{i}^{l}|i=1,...,N\\times T~~~~and~~~~l=1,...,L\\}$\n \\STATE Computing the initial vertex feature $v_{i}^{0}$ by feature difference in Equation \\ref{Eq2}\n \\STATE Computing the initial edge feature $e_{ij}^{0}$ as high-order structure in Equation \\ref{Eq3}\n \\FOR {$1\\leq l\\leq L$}\n \\FOR {$1\\leq i\\leq N\\times (K+T)$}\n \\STATE Updating vertex feature $v_{i}^{l}$ by Equation \\ref{Eq5}\n \\FOR {$1\\leq j\\leq N\\times (K+T)$}\n \\STATE Updating edge feature $e_{ij}^{l}$ by Equation \\ref{Eq7},\\ref{Eq8},\\ref{Eq9} and \\ref{Eq10}\n \\ENDFOR\n \\STATE Predicting the query sample labels $\\hat{y}_{i}^{l}$ by Equation\\ref{Eq1-1}\n \\ENDFOR\n \\ENDFOR\n\n \\end{algorithmic}\n\\end{algorithm}\n\n\n\\section{Experiment}\n\nTo evaluating the proposed HOSP-GNN, we carry out four experiments. The first experiment involves the baseline methods comparison. The second experiment conducts the state-of-the-art methods comparison. The third experiment implements semi-supervised fashion for few-shot learning. The forth experiment assesses the layer effect for graph model, and the loss influence for the manifold constraint.\n\\subsection{Datasets}\nIn experiments, we use three benchmark datasets that are miniImageNet\\cite{vinyals2016matching}, tieredImageNet \\cite{Mengye2018}, and FC100\\cite{NIPS20187352}. In miniImageNet dataset from ILSVRC-12 \\cite{ILSVRC15}, RGB images include $100$ different classes, and each class has $600$ samples. We adopt the splits configuration \\cite{kim2019edge} that respectively is 64,16,and 20 classes for training, validation and testing. In tieredImageNet dataset from ILSVRC-12 \\cite{ILSVRC15}, there are more than $700k$ images from $608$ classes. Moreover, $608$ classes is collected for $34$ higher-level semantic classes, each of which has $10$ to $20$ classes. We also use the splits configuration \\cite{kim2019edge} that respectively is $351$,$97$, and $160$ for training, validation and testing. Each class has about $1281$ images. In FC100 dataset from CIFAR-100\\cite{Krizhevsky}, there are $100$ classes images grouped into $20$ higher-level classes. Classes respectively are divided into $60$,$20$, and $20$ for training, validation and testing. Each classes have $600$ images of size $32\\times 32$. Table \\ref{tab1} shows the statistics information of these datasets.\n\n\\begin{table*}[!ht]\n\\small\n\\renewcommand{\\arraystretch}{1.0}\n\\caption{Datasets statistics information in experiments. $\\sharp$ denotes the number. 
}\n\\label{tab1}\n\\begin{center}\n\\newcommand{\\tabincell}[2]{\\begin{tabular}{@{}#1@{}}#2\\end{tabular}}\n\\begin{tabular}{lp{1.0cm}p{1.5cm}p{1.5cm}p{1.5cm}p{1.5cm}p{0.5cm}}\n\\hline\n\\bfseries Datasets & \\bfseries \\tabincell{l}{$\\sharp$ Classes } & \\bfseries \\tabincell{l}{$\\sharp$ training\\\\ classes} & \\bfseries \\tabincell{l}{$\\sharp$ validation \\\\classes} & \\bfseries \\tabincell{l}{$\\sharp$ testing \\\\classes} &\\bfseries \\tabincell{l}{$\\sharp$ images}\\\\\n\\hline \\hline\nminiImageNet & $100$ &$64$& $16$ & $20$ & $60000$\\\\\n\\hline\ntieredImageNet & $608$ &$351$& $97$ & $160$ & $778848$\\\\\n\\hline\nFC100 & $100$ &$60$& $20$ & $20$ & $60000$\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table*}\n\n\\subsection{Experimental Configuration}\nFigure \\ref{fig-4} describes the network architecture of the proposed HOSP-GNN in details. The feature extracting network is the same architecture in the recent works\\cite{vinyals2016matching} \\cite{snell2017prototypical} \\cite{finn2017model} \\cite{kim2019edge}, and specifically includes four convolutional blocks with $3\\times 3$ kernel, one linear unit, one bach normalization and one leakReLU unit for few-shot models. Other parts of network is detailed in figure \\ref{fig-4}. To conveniently compare with other methods(baseline methods and state-of-the-art methods), we set the layer number $L$ to $3$ in the proposed HOSP-GNN.\n\nTo train the proposed HOSP-GNN model, we use Adam optimizer with the learning rate $5\\times 10^{-4}$ and weight decay $10^{-6}$. The mini-batch size of meta-learning task is set to $40$ or $20$ for 5-way-1-shot or 5-way-5-shot experiments. The loss coefficient $\\lambda$ is set to $10^{-5}$. Experimental results in this paper can be obtained by 100K iterations training for miniImageNet and FC100, 200K iterations training for tieredImageNet.\n\nWe implement 5-way-1-shot or 5-way-5-shot experiments for evaluating the proposed method. Specifically, we averagely sample $15$ queries from each classes, and randomly generate $600$ episodes from the test set for calculating the averaged performance of the queries classes.\n\n\\subsection{Comparison with baseline approaches}\nThe main framework of the proposed HOSP-GNN is constructed based on edge-labeling graph neural network (EGNN)\\cite{kim2019edge}. Their differences are the graph construction and the manifold constraint for model training in episodic tasks. EGNN method mainly considers the similarity and dissimilarity relationship between the pair-wise samples, but does not involve the manifold structure constraint of each layer for learning few-shot model.In contrast, HOSP-GNN tries to capture the high-order structure relationship between multi-samples ,fuses the similarity and dissimilarity relationship between the pair-wise samples, and constrains the model training by layer by layer manifold structure loss. 
Therefore, the baseline methods include EGNN, HOSP-GNN-H-S (the proposed HOSP-GNN considering only the high-order structure relationship and the similarity relationship), HOSP-GNN-H-D (the proposed HOSP-GNN considering only the high-order structure relationship and the dissimilarity relationship), HOSP-GNN-H (the proposed HOSP-GNN considering only the high-order structure relationship), HOSP-GNN-S (the proposed HOSP-GNN considering only the similarity relationship), and HOSP-GNN-D (the proposed HOSP-GNN considering only the dissimilarity relationship), in which H denotes the high-order structure relationship, S stands for the similarity relationship, and D represents the dissimilarity relationship.\n\n\\begin{table}[!ht]\n\\small\n\\renewcommand{\\arraystretch}{1.0}\n\\caption{Comparison of the methods related to the high-order structure (HOSP-GNN, HOSP-GNN-H-S, HOSP-GNN-H-D, and HOSP-GNN-H) with the baseline methods (EGNN, HOSP-GNN-S, and HOSP-GNN-D) for 5-way-1-shot learning. Average accuracy (\\%) of the query classes is reported over random episodic tasks.}\n\\label{tab2}\n\\begin{center}\n\\newcommand{\\tabincell}[2]{\\begin{tabular}{@{}#1@{}}#2\\end{tabular}}\n\\begin{tabular}{l|p{2.0cm}p{2.0cm}p{2.0cm}p{2.0cm}}\n\\hline\n\\bfseries Method &\\bfseries 5-way-1-shot &\\bfseries &\\bfseries \\\\\n\\cline{2-4}\n\\bfseries &\\bfseries miniImageNet &\\bfseries tieredImageNet &\\bfseries FC100 \\\\\n\\hline \\hline\nEGNN \\cite{kim2019edge} & $52.46\\pm0.45$ &$57.94\\pm0.42$ & $35.00\\pm0.39$ \\\\\n\\hline\nHOSP-GNN-D & $52.44\\pm0.43$ &$57.91\\pm0.39$ & $35.55\\pm0.40$ \\\\\n\\hline\nHOSP-GNN-S & $52.86\\pm0.41$ &$57.84\\pm0.44$ & $35.48\\pm0.42$ \\\\\n\\hline\\hline\nHOSP-GNN-H & $69.52\\pm0.41$ &$91.71\\pm 0.28$ & $76.24\\pm0.41$ \\\\\n\\hline\nHOSP-GNN-H-D & $78.82\\pm0.45$ &$82.63\\pm0.26$ & $82.27\\pm0.44$ \\\\\n\\hline\nHOSP-GNN-H-S & $88.15\\pm0.35$ &$\\textbf{95.39}\\pm\\textbf{0.20}$ & $\\textbf{83.65}\\pm\\textbf{0.38}$ \\\\\n\\hline\nHOSP-GNN & $\\textbf{93.93}\\pm\\textbf{0.37}$ &$94.00\\pm0.24$ & $76.79\\pm0.46$ \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\begin{table}[!ht]\n\\small\n\\renewcommand{\\arraystretch}{1.0}\n\\caption{Comparison of the methods related to the high-order structure (HOSP-GNN, HOSP-GNN-H-S, HOSP-GNN-H-D, and HOSP-GNN-H) with the baseline methods (EGNN, HOSP-GNN-S, and HOSP-GNN-D) for 5-way-5-shot learning. 
Average accuracy (\\%) of the query classes is reported over random episodic tasks.}\n\\label{tab3}\n\\begin{center}\n\\newcommand{\\tabincell}[2]{\\begin{tabular}{@{}#1@{}}#2\\end{tabular}}\n\\begin{tabular}{l|p{2.0cm}p{2.0cm}p{2.0cm}p{2.0cm}}\n\\hline\n\\bfseries Method &\\bfseries 5-way-5-shot &\\bfseries &\\bfseries \\\\\n\\cline{2-4}\n\\bfseries &\\bfseries miniImageNet &\\bfseries tieredImageNet &\\bfseries FC100 \\\\\n\\hline \\hline\nEGNN \\cite{kim2019edge} & $67.33\\pm0.40$ &$68.93\\pm0.40$ & $47.77\\pm0.42$ \\\\\n\\hline\nHOSP-GNN-D & $65.75\\pm0.43$ &$68.30\\pm 0.40$ & $47.00\\pm0.41$ \\\\\n\\hline\nHOSP-GNN-S & $66.10\\pm0.42$ &$68.64\\pm0.41$ & $47.69\\pm0.41$ \\\\\n\\hline\\hline\nHOSP-GNN-H & $69.19\\pm0.44$ &$90.06\\pm 0.30$ & $70.82\\pm0.46$ \\\\\n\\hline\nHOSP-GNN-H-D & $68.39\\pm0.42$ &$91.11\\pm0.29$ & $48.48\\pm0.43$ \\\\\n\\hline\nHOSP-GNN-H-S & $68.85\\pm0.42$ &$91.16\\pm0.29$ & $48.25\\pm0.43$ \\\\\n\\hline\nHOSP-GNN & $\\textbf{95.98}\\pm\\textbf{0.21}$ &$\\textbf{98.44}\\pm\\textbf{0.12}$ & $\\textbf{70.94}\\pm\\textbf{0.51}$ \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\nIn Tables \\ref{tab2} and \\ref{tab3}, the methods that involve the high-order structure relationship show better performance than the baseline methods. However, the performance of HOSP-GNN based on the combination with the high-order structure varies because of the adaptability and coupling between the high-order structure and the pair-wise structure (similarity or dissimilarity). Figure \\ref{fig-5} shows the validation accuracy as the number of iterations increases for 5-way-1-shot and 5-way-5-shot learning on the different datasets. These curves also indicate the effectiveness of the high-order structure for training the few-shot model. The details are analyzed in section \\ref{analysis}.\n\n\\begin{figure}\n\\subfigure[]{\n \\centering\n \n \\includegraphics[width=2.4in]{fig5-1.png}\n }\n\\subfigure[]{\n \\centering\n \n \\includegraphics[width=2.4in]{fig5-2.png}\n }\n\\subfigure[]{\n \\centering\n \n \\includegraphics[width=2.4in]{fig5-3.png}\n }\n\\subfigure[]{\n \\centering\n \n \\includegraphics[width=2.4in]{fig5-4.png}\n }\n\\subfigure[]{\n \\centering\n \n \\includegraphics[width=2.4in]{fig5-5.png}\n }\n\\subfigure[]{\n \\centering\n \n \\includegraphics[width=2.4in]{fig5-6.png}\n }\n \\caption{Validation accuracy as the number of iterations increases for 5-way-1-shot and 5-way-5-shot learning on the different datasets. (a), (c) and (e): 5-way-1-shot on miniImageNet, tieredImageNet and FC100; (b), (d) and (f): 5-way-5-shot on miniImageNet, tieredImageNet and FC100.}\n \\label{fig-5}\n\\end{figure}\n\n\n\\subsection{Comparison with state-of-the-art methods}\nIn this section, we compare the proposed HOSP-GNN with the state-of-the-art methods EGNN\\cite{kim2019edge}, MLMT \\cite{fei2020meta}, ArL\\cite{zhang2020rethinking},\nand CML-BGNN\\cite{luo2019learning}, which are detailed in section \\ref{ML}. These methods capture the structure relationships of the samples in episodic tasks based on meta-learning for few-shot learning, and they differ in how they mine these structure relationships for few-shot models. Therefore, they show different classification performance on the benchmark datasets. Tables \\ref{tab4} and \\ref{tab5} show that the performance of the proposed HOSP-GNN is considerably better than that of the other methods. This indicates that the dependence between episodic tasks can be better described by the high-order structure in HOSP-GNN. 
The detailed analysis is given in section \\ref{analysis}.\n\n\\begin{table}[!ht]\n\\small\n\\renewcommand{\\arraystretch}{1.0}\n\\caption{Comparison of the HOSP-GNN method with state-of-the-art methods (EGNN, MLMT, ArL, and CML-BGNN) for 5-way-1-shot learning. Average accuracy (\\%) of the query classes is reported over random episodic tasks.}\n\\label{tab4}\n\\begin{center}\n\\newcommand{\\tabincell}[2]{\\begin{tabular}{@{}#1@{}}#2\\end{tabular}}\n\\begin{tabular}{l|p{2.0cm}p{2.0cm}p{2.0cm}p{2.0cm}}\n\\hline\n\\bfseries Method &\\bfseries 5-way-1-shot &\\bfseries &\\bfseries \\\\\n\\cline{2-4}\n\\bfseries &\\bfseries miniImageNet &\\bfseries tieredImageNet &\\bfseries FC100 \\\\\n\\hline \\hline\nEGNN \\cite{kim2019edge} & $52.46\\pm0.45$ &$57.94\\pm0.42$ & $35.00\\pm0.39$ \\\\\n\\hline\nMLMT \\cite{fei2020meta} & $72.41\\pm0.49$ &$72.82\\pm0.52$ & $null$ \\\\\n\\hline\nArL \\cite{zhang2020rethinking} & $59.12\\pm0.67$ &$null$ & $null$ \\\\\n\\hline\nCML-BGNN \\cite{luo2019learning} & $88.62\\pm0.43$ &$88.87\\pm 0.51$ & $67.67\\pm1.02$ \\\\\n\\hline\\hline\nHOSP-GNN & $\\textbf{93.93}\\pm\\textbf{0.37}$ &$\\textbf{94.00}\\pm\\textbf{0.24}$ & $\\textbf{76.79}\\pm\\textbf{0.46}$ \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\begin{table}[!ht]\n\\small\n\\renewcommand{\\arraystretch}{1.0}\n\\caption{Comparison of the HOSP-GNN method with state-of-the-art methods (EGNN, MLMT, ArL, and CML-BGNN) for 5-way-5-shot learning. Average accuracy (\\%) of the query classes is reported over random episodic tasks.}\n\\label{tab5}\n\\begin{center}\n\\newcommand{\\tabincell}[2]{\\begin{tabular}{@{}#1@{}}#2\\end{tabular}}\n\\begin{tabular}{l|p{2.0cm}p{2.0cm}p{2.0cm}p{2.0cm}}\n\\hline\n\\bfseries Method &\\bfseries 5-way-5-shot &\\bfseries &\\bfseries \\\\\n\\cline{2-4}\n\\bfseries &\\bfseries miniImageNet &\\bfseries tieredImageNet &\\bfseries FC100 \\\\\n\\hline \\hline\nEGNN \\cite{kim2019edge} & $67.33\\pm0.40$ &$68.93\\pm0.40$ & $47.77\\pm0.42$ \\\\\n\\hline\nMLMT \\cite{fei2020meta} & $84.96\\pm0.34$ &$85.97\\pm0.35$ & $null$ \\\\\n\\hline\nArL \\cite{zhang2020rethinking} & $73.56\\pm0.45$ &$null$ & $null$ \\\\\n\\hline\nCML-BGNN \\cite{luo2019learning} & $92.69\\pm 0.31$ &$92.77\\pm 0.28$ & $63.93\\pm 0.67$ \\\\\n\\hline\\hline\nHOSP-GNN & $\\textbf{95.98}\\pm\\textbf{0.21}$ &$\\textbf{98.44}\\pm\\textbf{0.12}$ & $\\textbf{70.94}\\pm\\textbf{0.51}$ \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\subsection{Semi-supervised few-shot learning}\nIn the semi-supervised setting, only part of the samples of each class in the support set are labeled, which provides a robustness test of the learning model. We therefore set $20\\%$, $40\\%$, and $100\\%$ of the support set samples as labeled for 5-way-5-shot learning on the miniImageNet dataset. In this section, we compare the proposed HOSP-GNN with three graph-related methods, namely GNN\\cite{SatorrasE18}, EGNN\\cite{kim2019edge}\nand CML-BGNN\\cite{luo2019learning}. These methods all rely on a graph to describe the structure of the samples, but they differ in how this structure is mined. 
For example, GNN relies on a generic message-passing mechanism to optimize the sample structure; EGNN emphasizes an updating mechanism for evolving the edge features; CML-BGNN exploits the continual information of the episodic tasks to complement the structure; the proposed HOSP-GNN mines the high-order structure to connect the separated tasks and preserves the layer-by-layer manifold structure of the samples to constrain the model learning. The detailed analysis is given in section \\ref{analysis}.\n\\begin{table}[!ht]\n\\small\n\\renewcommand{\\arraystretch}{1.0}\n\\caption{Semi-supervised few-shot learning for the graph-related methods (GNN, EGNN, CML-BGNN and the proposed HOSP-GNN) on the miniImageNet dataset. Average accuracy (\\%) of the query classes is reported over random episodic tasks.}\n\\label{tab6}\n\\begin{center}\n\\newcommand{\\tabincell}[2]{\\begin{tabular}{@{}#1@{}}#2\\end{tabular}}\n\\begin{tabular}{l|p{2.0cm}p{2.0cm}p{2.0cm}p{2.0cm}}\n\\hline\n\\bfseries Method &\\bfseries miniImageNet &\\bfseries 5-way-5-shot &\\bfseries \\\\\n\\cline{2-4}\n\\bfseries &\\bfseries $20\\%$-labeled &\\bfseries $40\\%$-labeled &\\bfseries $100\\%$-labeled \\\\\n\\hline \\hline\nGNN \\cite{SatorrasE18} & $52.45\\pm0.88$ &$58.76\\pm0.86$ & $66.41\\pm0.63$ \\\\\n\\hline\nEGNN \\cite{kim2019edge} & $63.62\\pm0.00$ &$64.32\\pm0.00$ & $75.25\\pm0.49$ \\\\\n\\hline\nCML-BGNN \\cite{luo2019learning} & $\\textbf{88.95}\\pm\\textbf{0.32}$ &$\\textbf{89.70}\\pm\\textbf{0.32}$ &$92.69\\pm0.31$ \\\\\n\\hline\\hline\nHOSP-GNN & $65.93\\pm0.38$ &$67.06\\pm0.40$ & $\\textbf{95.98}\\pm\\textbf{0.21}$ \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\subsection{Ablation experiments for the layer number and the loss}\nThe proposed HOSP-GNN has two key points concerning structure evolution. One is the influence of the number of graph layers on model learning. The other is the layer-by-layer manifold structure constraint for obtaining a better model with the preserved structure. Therefore, we evaluate these points by ablating the corresponding components from the whole model. The first experiment concerns layer ablation, in which we train the one layer model, the two layer model and the three layer model for few-shot learning (Table \\ref{tab7}). The second experiment concerns the different losses, in which we use various loss configurations to optimize the model (Table \\ref{tab8}). Table \\ref{tab9} shows the influence of the parameter $\\lambda$ on the proposed HOSP-GNN. The detailed analysis of these experimental results is given in section \\ref{analysis}.\n\n\\begin{table}[!ht]\n\\small\n\\renewcommand{\\arraystretch}{1.0}\n\\caption{Comparison of models with different numbers of layers for the graph-related methods (GNN, EGNN, CML-BGNN and the proposed HOSP-GNN) on the miniImageNet dataset. 
Average accuracy (\\%)of the query classes is reported in random episodic tasks.}\n\\label{tab7}\n\\begin{center}\n\\newcommand{\\tabincell}[2]{\\begin{tabular}{@{}#1@{}}#2\\end{tabular}}\n\\begin{tabular}{l|p{2.5cm}p{2.5cm}p{2.5cm}p{2.5cm}p{2.5cm}}\n\\hline\n\\bfseries Method &\\bfseries miniImageNet &\\bfseries 5-way-1-shot &\\bfseries \\\\\n\\cline{2-4}\n\\bfseries &\\bfseries one layer model &\\bfseries two layer model &\\bfseries three layer model \\\\\n\\hline \\hline\nGNN \\cite{SatorrasE18} & $48.25\\pm0.65$ &$49.17\\pm0.35$ & $50.32\\pm0.41$ \\\\\n\\hline\nEGNN \\cite{kim2019edge} & $55.13\\pm0.44$ &$57.47\\pm0.53$ & $58.65\\pm0.55$ \\\\\n\\hline\nCML-BGNN \\cite{luo2019learning} & $\\textbf{85.75}\\pm\\textbf{0.47}$ &$87.67\\pm0.47$ &$88.62\\pm0.43$ \\\\\n\\hline\nHOSP-GNN & $75.13\\pm0.44$ &$\\textbf{87.77}\\pm\\textbf{0.37}$ & $\\textbf{93.93}\\pm\\textbf{0.37}$ \\\\\n\\hline\\hline\n\\bfseries Method &\\bfseries miniImageNet &\\bfseries 5-way-5-shot &\\bfseries \\\\\n\\cline{2-4}\n\\bfseries &\\bfseries one layer model &\\bfseries two layer model &\\bfseries three layer model \\\\\n\\hline \\hline\nGNN \\cite{SatorrasE18} & $65.58\\pm0.34$ &$67.21\\pm0.49$ & $66.99\\pm0.43$ \\\\\n\\hline\nEGNN \\cite{kim2019edge} & $67.76\\pm0.42$ &$74.70\\pm0.46$ & $75.25\\pm0.49$ \\\\\n\\hline\nCML-BGNN \\cite{luo2019learning} & $\\textbf{90.85}\\pm\\textbf{0.27}$ &$\\textbf{91.63}\\pm\\textbf{0.26}$ &$92.69\\pm0.31$ \\\\\n\\hline\nHOSP-GNN & $67.86\\pm0.41$ &$72.48\\pm0.37$ & $\\textbf{95.98}\\pm\\textbf{0.21}$ \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\begin{table}[!ht]\n\\small\n\\renewcommand{\\arraystretch}{1.0}\n\\caption{Comparison of the different loss model for the proposed HOSP-GNN (HOSP-GNN-loss1 for label loss in support set , and HOSP-GNN for the consideration of the label and manifold structure loss). 
Average accuracy (\\%)of the query classes is reported in random episodic tasks.}\n\\label{tab8}\n\\begin{center}\n\\newcommand{\\tabincell}[2]{\\begin{tabular}{@{}#1@{}}#2\\end{tabular}}\n\\begin{tabular}{l|p{2.0cm}p{2.0cm}p{2.0cm}p{2.0cm}}\n\\hline\n\\bfseries Method &\\bfseries 5-way-5-shot &\\bfseries &\\bfseries \\\\\n\\cline{2-4}\n\\bfseries &\\bfseries miniImageNet &\\bfseries tieredImageNet &\\bfseries FC100 \\\\\n\\hline \\hline\nHOSP-GNN-loss1 & $92.29\\pm0.28$ &$98.41\\pm0.12$ & $65.47\\pm0.51$ \\\\\n\\hline\\hline\nHOSP-GNN & $\\textbf{95.98}\\pm\\textbf{0.21}$ &$\\textbf{98.44}\\pm\\textbf{0.12}$ & $\\textbf{70.94}\\pm\\textbf{0.51}$ \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\begin{table}[!ht]\n\\small\n\\renewcommand{\\arraystretch}{1.0}\n\\caption{The tradeoff parameter $\\lambda$ influence to few-show learning in miniImageNet.}\n\\label{tab9}\n\\begin{center}\n\\newcommand{\\tabincell}[2]{\\begin{tabular}{@{}#1@{}}#2\\end{tabular}}\n\\begin{tabular}{l|p{1.0cm}p{1.0cm}p{1.0cm}p{1.0cm}p{1.0cm}p{1.0cm}p{1.0cm}}\n\\hline\n\\bfseries Method &\\bfseries miniImageNet &\\bfseries &\\bfseries 5-way-5-shot &\\bfseries $\\lambda$ &\\bfseries \\\\\n\\cline{2-7}\n\\bfseries &\\bfseries $10^{-2}$ &\\bfseries $10^{-3}$ &\\bfseries $10^{-4}$ &\\bfseries $10^{-5}$ &\\bfseries $10^{-6}$ &\\bfseries $10^{-7}$ \\\\\n\\hline \\hline\nHOSP-GNN & $94.65\\pm0.22$ &$93.71\\pm0.25$ & $93.43\\pm0.25$ & $95.98\\pm0.21$ &$95.31\\pm0.21$ & $91.89\\pm0.30$\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\subsection{Experimental results analysis}\n\\label{analysis}\nIn above experiments, there are ten methods used for comparing with the proposed HOSP-GNN. In the baseline methods (HOSP-GNN,EGNN \\cite{kim2019edge}, HOSP-GNN-H-S, HOSP-GNN-H-D, HOSP-GNN-H, HOSP-GNN-S and HOSP-GNN-D),we can capture the various structure information of the samples for constructing the similar learning model. In the state-of-the-art methods (EGNN\\cite{kim2019edge}, MLMT \\cite{fei2020meta}, ArL\\cite{zhang2020rethinking},\n,CML-BGNN\\cite{luo2019learning} and HOSP-GNN), we demonstrate the model learning results based on the different networks framework for mining the relevance between the separated tasks. In the semi-supervised methods (GNN\\cite{SatorrasE18},EGNN\\cite{kim2019edge}\n, CML-BGNN\\cite{luo2019learning} and HOSP-GNN), we can find the labeled samples number to the performance influence for the robust testing of these methods. In ablation experiments, we build the different layers model(one layer model, two layer model and three layer model) and the various loss model (HOSP-GNN-loss1 and HOSP-GNN) for indicating their effects. The proposed HOSP-GNN can jointly consider the high-order structure and the layer-by-layer manifold structure constraints to effectively recognize the new categories. From these experiments, we have the following observations and analysis.\n\n\\begin{itemize}\n\\item The proposed HOSP-GNN and its Variants (HOSP-GNN-H-S, HOSP-GNN-H-D and HOSP-GNN-H) greatly outperform the base-line methods(HOSP-GNN-S, HOSP-GNN-D and EGNN) in table \\ref{tab2},table \\ref{tab3}, and figure \\ref{fig-5}. The common characteristic of these methods (the proposed HOSP-GNN and its Variants) involves the high-order structure for learning model. 
Therefore, it shows that the high-order structure can better associate the samples from the different tasks and thereby improve the performance of few-shot learning.\n\\item The proposed HOSP-GNN and its variants (HOSP-GNN-H-S, HOSP-GNN-H-D and HOSP-GNN-H) show different results depending on the dataset and experimental configuration in Table \\ref{tab2}, Table \\ref{tab3}, and Figure \\ref{fig-5}. In 5-way-1-shot learning, HOSP-GNN performs better than the other methods on miniImageNet, while HOSP-GNN-H-S gives better results than the others on tieredImageNet and FC100. In 5-way-5-shot learning, HOSP-GNN also performs better than the others on miniImageNet, tieredImageNet, and FC100. This shows that similarity, dissimilarity and the high-order structure influence the model performance differently on the various datasets. For example, similarity, dissimilarity and the high-order structure all have a positive effect for recognizing the new categories on miniImageNet and tieredImageNet, while dissimilarity has a negative effect on model learning on FC100. In every situation, the high-order structure plays an important and positive role in improving the model performance.\n\\item The proposed HOSP-GNN is clearly superior to the other state-of-the-art methods in Tables \\ref{tab4} and \\ref{tab5}. These methods focus on different aspects: graph information mining in EGNN\\cite{kim2019edge}, across-task information exploitation in MLMT \\cite{fei2020meta}, semantic-class relationship utilization in ArL\\cite{zhang2020rethinking}, history information association in CML-BGNN\\cite{luo2019learning}, and high-order structure exploration in HOSP-GNN. The proposed HOSP-GNN can not only exploit the across-task structure through the extended association of the high-order structure, but also use the latent manifold structure to constrain the model learning, so it obtains the best performance among these methods.\n\\item The proposed HOSP-GNN performs better than the graph-related methods (GNN\\cite{SatorrasE18}, EGNN\\cite{kim2019edge}, and CML-BGNN\\cite{luo2019learning}) when more labeled samples are available in Table \\ref{tab6}. The enhanced structure provided by more labeled samples can efficiently propagate the discriminative information to the new categories through the high-order information evolution on the graph. The number of labeled samples has little influence on model learning for CML-BGNN\\cite{luo2019learning}. In contrast, it has an important impact on model learning for the graph-related methods (GNN\\cite{SatorrasE18}, EGNN\\cite{kim2019edge}, and HOSP-GNN).\n\\item In the experiments with different numbers of layers, the proposed HOSP-GNN shows varying performance as the layer number changes in Table \\ref{tab7}. In 5-way-1-shot learning, HOSP-GNN performs better than the other methods, while in 5-way-5-shot learning, CML-BGNN\\cite{luo2019learning} gives more competitive results than the other methods. This demonstrates that the layer number has an important impact on the high-order structure evolution. We obtain a significant improvement with the deeper HOSP-GNN model for 5-way-5-shot learning on miniImageNet, while the performance of the other methods barely changes as the layer number increases. 
Therefore, the proposed HOSP-GNN tends to require more layers to exploit the high-order structure for few-shot learning.\n\\item In Table \\ref{tab8}, different losses (the supervised label loss and the manifold structure loss) are considered for constructing the few-shot model. HOSP-GNN (the model trained with both the supervised label loss and the manifold structure loss) performs better than HOSP-GNN-loss1 (the model trained with the supervised label loss only). This shows that the manifold constraint with the layer-by-layer evolution can enhance the model performance, owing to the intrinsic distribution consistency of the samples across different tasks.\n\n\\end{itemize}\n\n\\section{Conclusion}\nTo associate and mine the relationships among samples from different tasks, we have presented the high-order structure preserving graph neural network (HOSP-GNN) for few-shot learning. HOSP-GNN not only describes the high-order structure relationship by the relative metric among multiple samples, but also reformulates the updating rules of the graph structure by the alternating computation between vertices and edges based on the high-order structure. Moreover, HOSP-GNN enhances the model learning performance through the layer-by-layer manifold structure constraint for few-shot classification. Finally, HOSP-GNN jointly considers similarity, dissimilarity and the high-order structure to exploit the metric consistency between the separated tasks for recognizing the new categories. To evaluate the proposed HOSP-GNN, we carry out comparison experiments with the baseline methods and the state-of-the-art methods, the semi-supervised setting, and the layer and loss ablations on miniImageNet, tieredImageNet and FC100. In these experiments, HOSP-GNN demonstrates prominent results for few-shot learning.\n\\section{Acknowledgements}\nThe authors would like to thank the anonymous reviewers for their insightful comments that helped improve the quality of this paper. This work was supported by NSFC (Program No. 61771386, Program No. 61671376 and Program No. 61671374) and the Research and Development Program of Shaanxi (Program No. 2020SF-359).\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\\section{Introduction}\n\nMathematical Music Theory is the study of Music from a mathematical point of view. Many connections have been discovered, some of which have a long tradition, but they still seem to offer new problems and ideas to researchers,\nwhether they are music composers or computer scientists.\nThe first attempt to produce music through a computational model dates back to $1957$, when the authors of \\cite{hiller1957musical}\ncomposed a string quartet, also known as the Illiac Suite, through \nrandom number generators and Markov chains. \nSince then, a plethora of other works have explored how computer science and music can interact: to compose music \\cite{wiggins1989representing,zimmermann2001modelling}, to analyse existing compositions and melodies \\cite{chemillier2001two,courtot1990constraint,ebciouglu1988expert}, or even to represent human gestures of the music performer \\cite{radicioni2007constraint}. 
\nIn particular, Constraint Programming has been used to model harmony, counterpoint and other aspects of music (e.g., see \\cite{anders2011constraint}), to compose music of various genres as described in the book \\cite{anders2018compositions}, or to impose musical harmonization constraints in \\cite{pachet2001musical}.\n\nIn this paper, we deal with Tiling Rhythmic Canons, that are purely rhythmic contrapuntal compositions.\nFor a fixed period $n$, a tiling rhythmic canon is a couple of sets $A,B\\subset\\{0,1,2,\\dots,n-1\\}$ such that at every instant there is exactly one voice playing; $A$ defines the sequence of beats played by every voice, $B$ the instants at which voices start to play.\nIf one of the sets, say $A$, is given, it is well-known that the problem \nof finding a \\emph{complement} $B$ has in general no unique solution. It is very easy to find tiling canons in which at least one of the set is \\emph{periodic}, i.e. it is built repeating a shorter rhythm.\nFrom a mathematical point of view, the most interesting canons are therefore those in which both sets are \\emph{aperiodic} (the problem can be equivalently rephrased as a research of tessellations of a special kind). \nEnumerating all aperiodic tiling canons has to face two main hurdles: on one side, the problem lacks the structure of other algebraic ones, such as ring or group theory; on the other side, the combinatorial size of the domain becomes enormous very soon.\nStarting from the first works in the 1940s, research has gradually shed some light on parts of the problem from a theoretical point of view, and several heuristics and algorithms that allow to compute tiling complements have been introduced, but a complete solution appears to still be out of reach.\n\n\n\n\n\\paragraph{Contributions.}\nThe main contributions of this paper are the Integer Linear Programming (ILP) model and the SAT Encoding to solve the Aperiodic Tiling Complements Problem presented in Section 3.\nUsing a modern SAT solver we are able to compute the complete list of aperiodic tiling complements of a class of Vuza rhythms for periods $n = \\{ 180, 420, 900\\}$.\n\n\\paragraph{Outline.} \nThe outline of the paper is as follows. \nSection \\ref{sec:basic_notions} reviews the main notions on Tiling Rhythmic Canons and defines formally the problem we tackle.\nIn Section \\ref{sec:our_contribution}, we introduce an ILP model and a SAT Encoding of the Aperiodic Tiling Complements Problem expressing the tiling and the aperiodicity constraints in terms of Boolean variables.\nFinally, in Section \\ref{sec:final}, we include our computational results to compare the efficiency of the aforementioned ILP model and SAT Encoding with the current state-of-the-art algorithms.\n\n\n\\section{The Aperiodic Tiling Complements Problem}\n\\label{sec:basic_notions}\nWe begin fixing some notation and giving the main definitions.\nIn the following, we conventionally denote the cyclic group of remainder classes modulo $n$ by $\\mathbb{Z}_n$ and its elements with the integers $\\{0, 1, \\dots, n - 1 \\}$, i.e. identifying each class with its least non-negative member.\n\\begin{definition}\\label{directsum}\n Let $A, B \\subset \\mathbb{Z}_n$. 
Let us define the application\n\\[\\sigma:A \\times B \\rightarrow \\mathbb{Z}_n, (a, b) \\mapsto a + b.\\]\nWe set $A + B: = \\mbox{Im}(\\sigma)$; if $\\sigma$ is bijective we say that $A$ and $B$ {\\bf are in direct sum}, and we write\n\\[A \\oplus B: = \\mbox{Im}(\\sigma).\\]\nIf $\\mathbb{Z}_n = A\\oplus B$, we call $(A, B)$ a {\\bf tiling rhythmic canon} of {\\bf period $n$}; $A$ is called the {\\bf inner voice} and $B$ the {\\bf outer voice} of the canon.\n\\end{definition}\n\t\n\n\t\n\n\n\t%\n\n\n\\begin{remark} It is easy to see that the tiling property is invariant under translations, i.e. if $A$ is a tiling complement of some set $B$, also any translate $A + z$ of $A$ is a tiling complement of $B$ (and any translate of $B$ is a tiling complement of $A$). In fact, suppose that $A \\oplus B = \\mathbb{Z}_n$; for every $k, z \\in \\mathbb{Z}_n$\nby definition there exists one and only one pair $(a,b) \\in A\\times B$ such that $k - z = a + b$. Consequently, there exists one and only one pair $(a + z, b) \\in (A + z)\\times B$ such that $k = (a + z) + b$, that is $(A + z)\\oplus B =\\mathbb{Z}_n$.\nIn view of this, without loss of generality, we shall limit our investigation to rhythms containing 0 and consider equivalence classes under translation. \n\\end{remark}\n\n\n\\input{inner-outer-image}\n\t\n\\begin{example}\nWe consider a period $n=9$,\nand the two rhythms $A=\\{0,1,5\\} \\subset \\mathbb{Z}_9$ and $B=\\{0,3,6\\} \\subset \\mathbb{Z}_9$ in Figure \\ref{fig:A} and Figure \\ref{fig:B}. \nThey provide the canon $A \\oplus B = \\mathbb{Z}_9$, since\n$\\{0,1,5\\} \\oplus \\{0,3,6\\} = \\{0, 3, 6, 1, 4, 7, 5, 8, 2\\}$, where the last number is obtained by $(5+6) \\mod 9 = 2$. \n\\end{example}\n\n\\begin{definition}\\label{def:period}\n A rhythm $A \\subset\\mathbb{Z}_n$ is {\\bf periodic (of period $z$)} if and only if there exists an element $z \\in \\mathbb{Z}_n$, $z\\neq 0$, such that $z + A = A$. \n In this case, $A$ is also called periodic modulo $z\\in\\mathbb{Z}_n$.\n A rhythm $A\\subset\\mathbb{Z}_n$ is {\\bf aperiodic} if and only if it is not periodic.\n\\end{definition} \n\nComing back to Example 1, it is easy to note the periodicity $z = 3$ in rhythm $B=\\{0, 3, 6\\}$: indeed, $3 + B = B$.\nNotice that if $A$ is periodic of period $z$, $z$ must be a strict divisor of the period $n$ of the canon.\n\nTiling rhythmic canons can be characterised using polynomials, as follows.\n\t\n\\begin{lemma}\n \\label{lm:pol_equivalence}\n Let $A$ be a rhythm in $\\mathbb{Z}_n$ and let $p_A(x)$ be the {\\bf characteristic polynomial} of $A$, that is, $p_A(x)=\\sum_{k\\in A}x^{k}$. Given $B\\subset\\mathbb{Z}_n$ and its characteristic polynomial $p_B(x)$, we have that\n\\begin{equation}\\label{eq:pol_form}\np_A (x)\\cdot p_B (x)\\equiv \\sum_{k=0}^{n-1} x^k,\\quad\\quad\\mod (x^{n} - 1) \n\\end{equation}\n if and only if $p_A (x), p_B (x)$ are polynomials with coefficients in $\\{0,1\\}$ and $A\\oplus B = \\mathbb{Z}_n$.\n\\end{lemma}\n\n \n\n\n\\begin{definition}\n A tiling rhythmic canon $(A,B)$ in $\\mathbb{Z}_{n}$ is a {\\bf Vuza canon} if both $A$ and $B$ are aperiodic.\n\\end{definition}\n\n\\begin{remark}\n \\label{aperiodic}\n Note that a set $A$ is periodic modulo $z$ if and only if it is periodic modulo all the non-trivial multiples of $z$ dividing $n$. 
\n %\n For this reason, when it comes to check whether $A$ is periodic or not, it suffices to check if $A$ is periodic modulo $m$ for every $m$ in the set of maximal divisors of $n$.\n %\n We denote by $\\mathcal{D}_n$ this set\n\\begin{equation*}\n \\mathcal{D}_n:=\\big\\{n\/ p \\mid p \\mbox{ is a prime factor of } n\\big\\}.\n \\end{equation*}\nWe also denote with $k_n$ the cardinality of $\\mathcal{D}_n$, so that $n=p_1^{\\alpha_1}p_2^{\\alpha_2}\\dots p_{k_n}^{\\alpha_{k_n}}$ is the unique prime factorization of $n$, where $\\alpha_1,\\dots,\\alpha_{k_n}\\in\\mathbb{N^+}$ .\n\\end{remark}\n\nFor a complete and exhaustive discussion on tiling problems, we refer the reader to \\cite{amiot2011structures}.\nIn this paper, we are interested in the following tiling problem.\n\n\\begin{definition}\n Given a period $n\\in\\mathbb{N}$ and a rhythm $A \\subset \\mathbb{Z}_n$, the {\\bf Aperiodic Tiling Complements Problem} consists in finding all aperiodic complements $B$ i.e., subsets $B$ of $\\mathbb{Z}_n$ such that $A \\oplus B = \\mathbb{Z}_n$.\n\\end{definition}\n \nSome problems very similar to the decision of tiling (i.e., the tiling decision problem DIFF\nin \\cite{Matolcsi}) have been shown to be NP-complete; a strong lower bound for computational complexity of the tiling decision problem is to be expected, too. \n \n\n\\section{A SAT Encoding }\n\\label{sec:our_contribution}\n\nIn this section, we present in parallel an ILP model and a new SAT Encoding for the Aperiodic Tiling Complements Problem that are both used to enumerate all complements of $A$.\nWe define two sets of constraints: (i) the {\\it tiling constraints} that impose the condition $A \\oplus B = \\mathbb{Z}_n$, and (ii) the {\\it aperiodicity constraints} that impose that the canon $B$ is aperiodic.\n\n\\paragraph{Tiling constraints.}\nGiven the period $n$ and the rhythm $A$, let $\\bm a=[a_0,\\dots,a_{n-1}]^\\intercal$ be its characteristic (column) vector, that is, $a_i=1$ if and only if $i \\in A$. \nUsing vector $\\bm a$ we define the circulant matrix $T \\in \\{0,1\\}^{n \\times n}$ of rhythm $A$, that is, each column of $T$ is the circular shift of the first column, which corresponds to vector $\\bm a$.\nThus, the matrix $T$ is equal to \n\\[ \nT=\n\\begin{bmatrix}\n a_{0} & a_{n-1} & a_{n-2} & \\dots & a_{1} \\\\\n a_{1} & a_{0} & a_{n-1} & \\dots & a_{2} \\\\\n \\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\n a_{n-1} & a_{n-2} & a_{n-3} & \\dots & a_{0}\n\\end{bmatrix}.\n\\]\nWe can use the circulant matrix $T$ to impose the tiling conditions as follows.\nLet us introduce a literal $x_i$ for $i=0,\\dots,n-1$, that represents the characteristic vector of the tiling rhythm $B$, that is, $x_i = 1$ if and only if $i \\in B$.\nNote that a literal is equivalent to a 0--1 variable in ILP terminology.\nThen, the tiling condition can be written with the following linear constraint:\n\\begin{equation}\\label{complementary}\n \\sum_{i \\in \\{0, \\dots, n-1\\}} T_{ij} x_i = 1, \\quad \\forall j = 0, \\dots, n-1.\n\\end{equation}\nNotice that the set of linear constraints \\eqref{complementary} imposes that exactly one variable (literal) in the set $\\{x_{{n+i-j \\mod n}}\\}_{j\\in A}$ is equal to one. 
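\nAs a concrete illustration of constraint \\eqref{complementary} (our own toy check, not code from the implementation described in Section 4), the following Python fragment builds the circulant matrix $T$ for the canon of Example 1 and verifies that the characteristic vector of $B=\\{0,3,6\\}$ satisfies the tiling condition:\n\\begin{verbatim}\n# illustrative check of constraint (2) for n = 9, A = {0,1,5}, B = {0,3,6}\nn, A, B = 9, {0, 1, 5}, {0, 3, 6}\na = [1 if i in A else 0 for i in range(n)]    # characteristic vector of A\nT = [[a[(i - j) % n] for j in range(n)] for i in range(n)]\nx = [1 if i in B else 0 for i in range(n)]    # candidate complement B\nassert all(sum(T[i][j] * x[i] for i in range(n)) == 1 for j in range(n))\n\\end{verbatim}\n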
Hence, we encode this condition as an \n{\\tt Exactly-one} constraint, that is, exactly one literal can take the value one.\nThe {\\tt Exactly-one} constraint can be expressed as the conjunction of the two constraints {\\tt At-least-one} and {\\tt At-most-one}, for which standard SAT encoding exist (e.g., see \\cite{bailleux2003efficient,philipp2015pblib}). \nHence, the tiling constraints \\eqref{complementary} are encoded with the following set of clauses depending on $i=0, \\dots, n-1$:\n\\begin{equation}\\label{sat:compl}\n\\bigvee_{j \\in A}\\left(x_{n-(j-i) \\mod n}\\right) \\bigwedge_{k,l \\in A,k \\neq l}\\left(\\lnot x_{{n-(k-i) \\mod n}} \\lor \\lnot x_{{n-(l-i) \\mod n}}\\right).\n\\end{equation}\n\n\\paragraph{Aperiodicity constraints.}\nIn view of Definition \\ref{def:period},\nif there exists a $b \\in B$ such that $(d + b) \\mod n \\neq b$, then the canon $B$ is not periodic modulo $d$.\nNotice that by Remark \\ref{aperiodic} we need to check this condition only for the values of $d \\in \\mathcal{D}_n$.\n\nWe formulate the aperiodicity constraints introducing auxiliary variables $y_{d,i},z_{d,i},u_{d,i} \\in \\{0,1\\}$ for every prime divisor $d \\in \\mathcal{D}_n$ and for every integer $i = 0,\\dots,d-1$.\nWe set\n\\begin{equation}\n \\label{implications} u_{d,i} = 1 \\; \\Leftrightarrow \\;\n \\left(\\sum_{k=0}^{n\/d-1} x_{i+kd} = \\frac{n}{d}\\right) \\vee \\left(\\sum_{k=0}^{n\/d-1} x_{i+kd} = 0\\right),\n\\end{equation}\nfor all $d \\in \\mathcal{D}_n$, $i=0,\\dots,d-1$, with the condition\n\\begin{equation}\n\\label{sumdivisor} \n\\sum_{i=0}^{d-1} u_{d,i} \\leq d-1, \\quad \\forall d \\in \\mathcal{D}_n.\n\\end{equation}\n\nSimilarly to \\cite{auricchio2021integer}, the constraints \\eqref{implications} can be linearized using standard reformulation techniques as follows:\n\n\\begin{align}\n\\label{y:1} & 0 \\leq \\sum_{k=0}^{n\/d} x_{i+kd} - \\frac{n}{d}y_{d,i}\\leq \\frac{n}{d} - 1 & \\forall d \\in \\mathcal{D}_n,\\;\\; i=0,\\dots,d-1, \\\\\n\\label{z:-1} & 0 \\leq \\sum_{k=0}^{n\/d} (1-x_{i+kd}) - \\frac{n}{d}z_{d,i} \\leq \\frac{n}{d} - 1 & \\forall d \\in \\mathcal{D}_n, \\; \\; i=0,\\dots,d-1,\\\\\n\\label{U} & y_{d,i} + z_{d,i} = u_{d,i} & \\forall d \\in \\mathcal{D}_n, \\;\\; i=0,\\dots,d-1. \n\\end{align}\n\n\\noindent Notice that when $u_{d,i}=1$ exactly one of the two incompatible alternatives in the right hand side of \\eqref{implications} is true,\nwhile whenever $u_{d,i}=0$ the two constraints are false. \nCorrespondingly, the constraint \\eqref{U} imposes that the variables $y_{d,i}$ and $z_{d,i}$ cannot be equal to $1$ at the same time.\nOn the other hand, constraint \\eqref{sumdivisor} imposes that at least one of the auxiliary variables $u_{d,i}$ be equal to zero.\n\nNext, we encode the previous conditions as a SAT formula.\nTo encode the if and only if clause, we make use of the logical equivalence between $C_1 \\Leftrightarrow C_2$ and $(\\lnot C_1 \\lor C_2) \\land (C_1 \\lor \\lnot C_2)$.\nThe clause $C_1$ is given directly by the literal $u_{d,i}$.\nThe clause $C_2$, expressing the right hand side of \\eqref{implications}, i.e. 
the constraint that the variables must be either all true or all false, can be written as\n\\[\nC_2 = \\left(\\bigwedge_{k=0}^{n\/d} x_{i+kd}\\right) \\vee \\left(\\bigwedge_{k=0}^{n\/d} \\bar{x}_{i+kd}\\right), \\quad \\forall d \\in \\mathcal{D}_n.\n\\]\nThen, the linear constraint \\eqref{sumdivisor} can be stated as the SAT formula:\n\\[\n \\lnot \\left(u_{d,0} \\land u_{d,1} \\land \\dots \\land u_{d,(d-1)}\\right) = \\bigvee_{l=0}^{d-1} \\bar{u}_{d,l}, \\quad \n \\forall d \\in \\mathcal{D}_n.\n\\]\nFinally, we express the aperiodicity constraints using\n\\begin{equation}\\label{sat:apreriodic}\n \\bigwedge\\limits_{i = 0}^{d-1}\n \\left[\\left( \\lnot C_2 \\lor u_{d,i} \\right)\\land\n \\left( C_2 \\lor \\bar{u}_{d,i} \\right) \\right]\n \\land \n \\bigvee_{l=0}^{d-1} \\bar{u}_{d,l},\\,\n \\forall d \\in \\mathcal{D}_n.\n\\end{equation}\nNote that joining \\eqref{complementary}, \\eqref{y:1}--\\eqref{U} with a constant objective function gives a complete ILP model, which can be solved with a modern ILP solver such as Gurobi to enumerate all possible solutions.\nAt the same time, joining \\eqref{sat:compl} and \\eqref{sat:apreriodic} into a unique CNF formula, we get our complete SAT Encoding of the Aperiodic Tiling Complements Problem.\n(see Section 4 for computational results).\n\n\n\\subsection{Existing solution approaches}\n\nFor the computation of all the aperiodic tiling complements of a given rhythm\nthe two most successful approaches already known are the \\emph{Fill-Out Procedure} \\cite{kolountzakis2009algorithms} and the {\\it Cutting Sequential Algorithm} \\cite{auricchio2021integer}.\n\n\n\\paragraph{The Fill-Out Procedure.}\n\nThe \\emph{Fill-Out Procedure} is the heuristic algorithm introduced in \\cite{kolountzakis2009algorithms}. \nThe key idea behind this algorithm is the following: \ngiven a rhythm $A\\subset\\mathbb{Z}_n$ such that $0\\in A$, the algorithm sets $P=\\{0\\}$ and starts the search for possible expansions of the set $P$. \nThe expansion is accomplished by adding an element $\\alpha\\in\\mathbb{Z}_n$ to $P$ according to the reverse order induced by a ranking function $r(x, P)$, which counts all the possible ways in which $x$ can be covered through a translation of $A$.\nThis defines a new set, $\\Tilde{P}\\supset P$, which is again expanded until either it can no longer be expanded or the set becomes a tiling complement. \nThe search ends when all the possibilities have been explored. 
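\nA deliberately simplified Python sketch of this search strategy is given below (our reading of the description above; the ranking function $r(x,P)$ and other implementation details of the original algorithm are omitted). It grows a partial complement $P\\ni 0$ and always covers the smallest uncovered residue next:\n\\begin{verbatim}\n# schematic depth-first enumeration in the spirit of the Fill-Out Procedure\ndef complements(n, A):\n    A = sorted(a % n for a in A)\n    found = []\n    def grow(P, covered):\n        if len(covered) == n:\n            found.append(sorted(P))\n            return\n        m = min(x for x in range(n) if x not in covered)\n        for a in A:\n            b = (m - a) % n                      # translate of A that covers m\n            cells = {(c + b) % n for c in A}\n            if not (cells & covered):            # keep the sum direct\n                grow(P | {b}, covered | cells)\n    grow({0}, set(A))\n    return found\n\n# complements(9, [0, 1, 5]) yields [[0, 3, 6]], cf. Example 1\n\\end{verbatim}\n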
\nThe Fill-Out Procedure also finds periodic solutions, which must be removed in\npost-processing, as well as multiple translations of the same rhythm.\n\n\n\n\\paragraph{The Cutting Sequential Algorithm (CSA).}\nIn \\cite{auricchio2021integer}, the authors formulate the Aperiodic Tiling Complements Problem using an Integer Linear Programming (ILP) model that is based on the polynomial characterization of tiling canons.\nThe ILP model uses auxiliary 0--1 variables to encode the product $p_A (x) \\cdot p_B(x)$ which characterizes tiling canons.\nThe aperiodicity constraint is formulated analogously to what is done above.\nThe objective function is equal to a constant and does not impact the solutions found by the model.\nThe ILP model is used within a sequential cutting algorithm that adds a no-good constraint every time a new canon $B$ is found to prevent finding solutions twice.\nIn addition, the sequential algorithm adds a new no-good constraint for every translation of $B$; hence, in contrast to the \\emph{Fill-Out Procedure}, the \\emph{CSA Algorithm} does not need any post-processing.\n\n\\section{Computational Results}\n\\label{sec:final}\nFirst, we compare the results obtained using our ILP model and SAT Encoding with the runtimes of the \\emph{Fill-Out Procedure} and of the \\emph{CSA Algorithm}.\nWe use the canons with periods 72, 108, 120, 144 and 168 that have been completely enumerated by Vuza \\cite{Vuza}, Fripertinger \\cite{fripertinger2005remarks}, Amiot \\cite{amiot2009new}, Kolountzakis and Matolcsi \\cite{kolountzakis2009algorithms}. \nTable \\ref{tab1} shows clearly that the two new approaches outperform the state-of-the-art, and in particular, that SAT provides the best solution approach. \nWe then choose some periods $n$ with more complex prime factorizations, such as $n = p^2q^2r=180$, $n = p^2qrs=420$, and $n = p^2q^2r^2=900$. \nTo find aperiodic rhythms $A$, we apply Vuza's construction \\cite{Vuza} with different choices of parameters $p_1$, $p_2$, $n_1$, $n_2$, $n_3$. Thus, having $n$ and $A$ as inputs, we search for all the possible aperiodic complements and then we filter out the solutions under translation.\nSince the post-processing is based on sorting canons, it requires a comparatively small amount of time.\nWe report the results in Table \\ref{tab2}: the solution approach based on the SAT Encoding is the clear winner (from a Music theory perspective, it is also noteworthy that this is the first time that all the tiling complements, whose number is reported in the last column of the two tables, of the studied rhythms are computed).\n\n\\paragraph{Implementation Details.}\nWe have implemented the ILP model in Python and the SAT Encoding discussed in Section 3 in PySat \\cite{imms-sat18}. We use Gurobi 9.1.1 as ILP solver and Maplesat \\cite{phdthesis} as SAT solver.\nThe experiments are run on a Dell Workstation with an Intel Xeon W-2155 CPU with 10 physical cores at 3.3GHz and 32 GB of RAM.\nIn case of acceptance, we will release the source code and the instances on GitHub.\n\\subsubsection{Conclusions and Future Work.}\nIt is conceivable to devise an algorithm that, for a given $n$, finds all the pairs $(A, B)$ that give rise to a Vuza canon of period $n$.\nThis could provide in-depth information on the structure of Vuza canons.\n\n\\subsubsection{Acknowledgements.}\nThis research was partially supported by: Italian Ministry of Education, University and Research (MIUR), Dipartimenti di Eccellenza Program (2018--2022) - Dept. of Mathematics ``F. 
Casorati'', University of Pavia; Dept. of Mathematics and its Applications, University of Milano-Bicocca; National Institute of High Mathematics (INdAM) ``F. Severi''; Institute for Advanced Mathematical Research (IRMA), University of Strasbourg.\n\\begin{table}\n\\caption{Aperiodic tiling complements for periods $n\\in\\{72,108,120,144,168\\}$.}\n\\label{tab1}\n\\vspace{.2cm}\n\\centering\n\\begin{adjustbox}{width=0.9\\textwidth}\n\\begin{tabular}{|c|c|c|c|c|c|c|r|r|r|r|r|}\n\\hline\n\\multirow{2}{*}{$n$} &\\multirow{2}{*}{$\\mathcal{D}_n$}&\\multirow{2}{*}{$p_1$}&\\multirow{2}{*}{$n_1$}&\\multirow{2}{*}{$p_2$}&\\multirow{2}{*}{$n_2$}&\\multirow{2}{*}{$n_3$}&\\multicolumn{4}{c|}{runtimes (s)}& \\multirow{2}{*}{$\\# B$}\\\\\n\\cline{8-11}\n&&&&&&& \\emph{FOP}& \\emph{CSA}& \\emph{SAT} & \\emph{ILP} & \\\\\n\\hline\n\\hline\n72&$\\{24,36\\}$&2&2&3&3&2& 1.59 &0.10 &$ < 0.01$ & 0.03 &6\\\\\n\\hline\n\\hline\n108&$\\{36,54\\}$&2&2&3&3&3& 896.06 &7.84 &0.09 & 0.19 & 252\\\\\n\\hline\n\\hline\n\\multirow{2}{*}{120}&\\multirow{2}{*}{$\\{24,40,60\\}$}&2&2&5&3&2&24.16&0.27&0.02& 0.04 &18\\\\\n&&2&2&3&5&2&10.92&0.14&0.01& 0.04 &20\\\\\n\\hline\n\\hline\n\\multirow{4}{*}{144}&\\multirow{4}{*}{$\\{48,72\\}$}&4&2&3&3&2&82.53&2.93&0.02& 0.11 &36\\\\\n&&2&2&3&3&4&$> 10800.00$ &$> 10800.00$&11.04& 46.96 &8640\\\\\n&&2&2&3&3&4&7.13&0.10&$< 0.01$& 0.05 &6\\\\\n&&2&4&3&3&2&80.04&0.94&0.02& 0.08 &60\\\\\n\\hline\n\\hline\n\\multirow{2}{*}{168}&\\multirow{2}{*}{$\\{24,56,84\\}$}&2&2&7&3&2&461.53&17.61&0.04& 0.20 &54\\\\\n&&2&2&3&7&2&46.11&0.91&0.02& 0.07 &42\\\\\n\\hline\n\\end{tabular}\n\\end{adjustbox}\n\\vspace{1cm}\n\\end{table}\n\\begin{table}\n\\caption{Aperiodic tiling complements for periods $n\\in\\{180,420,900\\}$.}\n\\label{tab2}\n\\vspace{.2cm}\n\\centering\n\\begin{adjustbox}{width=0.82\\textwidth}\n\\begin{tabular}{|c@{ }|c@{ }|c@{ }|c@{ }|c@{ }|c@{ }|c@{ }|r@{ }|r@{ }|r@{ }|}\n\\hline\n\\multirow{2}{*}{$n$} &\\multirow{2}{*}{$\\mathcal{D}_n$}&\\multirow{2}{*}{$p_1$}&\\multirow{2}{*}{$n_1$}&\\multirow{2}{*}{$p_2$}&\\multirow{2}{*}{$n_2$}&\\multirow{2}{*}{$n_3$}&\\multicolumn{2}{c|}{runtimes (s)}& \\multirow{2}{*}{$\\# B$}\\\\\n\\cline{8-9}\n&&&&&&& \\emph{SAT} & \\emph{ILP} & \\\\\n\\hline\n \\hline\n\\multirow{5}{*}{180}&\\multirow{5}{*}{$\\{36,60,90\\}$}&2&2&5&3&3& 2.57 & 5.62 &2052\\\\\n\\cline{3-10}\n&&3&3&5&2&2 &0.07 & 0.14 &96\\\\\n\\cline{3-10}\n&&2&2&3&5&3 & 1.25 & 2.23 &1800\\\\\n\\cline{3-10}\n&&2&5&3&3&2 & 0.05& 0.16 & 120\\\\\n\\cline{3-10}\n&&2&2&3&3&5& 8079.07 & $> 10800.00$ & 281232\\\\\n\\hline\n\\hline\n\\multirow{12}{*}{420} &\\multirow{12}{*}{$\\{60,84,140,210\\}$}&7&5&3&2&2 &2.13 & 3.57 &720 \\\\\n\\cline{3-10}\n&&5&7&3&2&2 & 1.52 & 4.08 &672 \\\\\n\\cline{3-10}\n&&7&5&2&3&2& 7.73 & 16.11 & 3120 \\\\\n\\cline{3-10}\n&&5&7&2&3&2 & 1.63 & 4.18 & 1008 \\\\\n\\cline{3-10}\n&&7&3&5&2&2 & 4.76 & 7.45 & 864 \\\\\n\\cline{3-10}\n&&3&7&5&2&2 & 12.78 & 32.19 & 6720 \\\\\n\\cline{3-10}\n&&7&3&2&5&2& 107.83 & 1186.21 & 33480 \\\\\n\\cline{3-10}\n&&3&7&2&5&2&0.73 & 2.36 & 840\\\\\n\\cline{3-10}\n& &7&2&5&3&2 & 11.14 & 21.19 & 1872 \\\\\n\\cline{3-10}\n&&2&7&5&3&2& 17.31& 52.90 & 10080 \\\\\n\\cline{3-10}\n&&7&2&3&5&2& 89.97 & 691.56 & 22320 \\\\\n\\cline{3-10}\n&&2&7&3&5&2 & 1.17 & 4.13 & 1120 \\\\\n\\hline\n\\hline\n\\multirow{5}{*}{900}&\\multirow{5}{*}{$\\{180,300,450\\}$}&2&25&3&3&2 & 43.60 & 110.65 & 15600 \\\\\n\\cline{3-10}\n&&5&10&3&3&2 & 107.36 & 741.79 & 15840 \\\\\n\\cline{3-10}\n& &2&9&5&5&2 & 958.58 & $> 10800.00$ & 118080 \\\\\n\\cline{3-10}\n& & 6&3&5&5&2 &5559.76 &$> 
10800.00$ &123840 \\\\\n\\cline{3-10}\n&&3 & 6&5&5&2&486.39 & 8290.35 & 62160\\\\ \n\\hline\n\\end{tabular}\n\\end{adjustbox}\n\\end{table}\n\n\n\\pagebreak\n\\bibliographystyle{splncs04}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{intro}\nKilling tensors of the second rank express hidden symmetries of spacetime \\cite{Carter:1977pq,Frolov:2017kze} providing integrals of motion for geodesics and wave operators in field theories. Some constructive procedures to find them were suggested for spaces with a warped\/twisted product structure \\cite{Krtous:2015ona}, spaces admitting a hypersurface orthogonal Killing vector field \\cite{Garfinkle:2010er,Garfinkle:2013cha}, or special conformal Killing vector fields \\cite{Rani:2003br}. Recently, some deformed Kerr metrics have attracted attention in attempts to find new physics in ultracompact astrophysical objects.\nClasses of such metrics were listed that admit Killing tensors\n\\cite{Carson:2020dez,Papadopoulos:2020kxu}, but for others the separation of variables in geodesic equations is not guaranteed.\n\nThe new procedure \\cite{Kobialko:2021aqg} suggested for a spacetime foliation of codimension one is based on lifting the trivial Killing tensors in slices with an arbitrary second fundamental form. This removes some particular assumptions about slices, so, hopefully, we can move further in the study of separability.\nThe method is closely related to the formalism of\nfundamental photon surfaces introduced in \\cite{Kobialko:2020vqf}. This generalizes one previous observation\n\\cite{Pappas:2018opz} on the relationship between spacetime separability and spherical photon orbits. It is also worth noting that our method does not require solving differential equations at all.\n\nAs illustrations, we apply this new technique to some conventional metrics of Petrov type D, demonstrating that it allows one to obtain known Killing tensors \\cite{Kubiznak:2007kh,Vasudevan:2005bz} purely algebraically, without solving any differential equations. Then we successfully apply the new technique to find the Killing tensor for the Gal'tsov-Kechkin solution \\cite{Galtsov:1994pd,Garcia:1995qz} in dilaton-axion gravity, which belongs to the general Petrov type I. The new method reveals the nature of Killing tensors as arising from isometries of low-dimensional slices of a smooth foliation.\n\n\nIn Sec. \\ref{sec:killing} we briefly describe the equations for the Killing vectors and Killing tensors of rank two. In Sec. \\ref{sec:generation} we consider spacetimes with a foliation of codimension one, present the equations governing the interplay between symmetries in the bulk and in the slices, and then describe the Killing tensor generating technique. In Sec. \\ref{sec:photon} we reveal the connection between the Killing tensors and the fundamental photon surfaces. Finally, Sec. \\ref{sec:axially} provides examples with axial symmetry in different models. 
\n\n\\section{Conventions}\n\\label{sec:killing}\n\nLet $M$ be a Lorentzian manifold of dimension $m$ with scalar product $\\left\\langle \\;\\cdot\\; ,\\;\\cdot\\;\\right\\rangle$ and Levi-Civita connection $\\nabla$\\footnote{Here we also use the notation $\\underset{\\mathcal X \\leftrightarrow \\mathcal Y}{{\\rm Sym}}\\left\\{ B(\\mathcal X, \\mathcal Y) \\right\\}\\equiv B(\\mathcal X, \\mathcal Y)+ B(\\mathcal Y, \\mathcal X)$.}.\n\n\\begin{definition} \nA vector field $\\mathcal K:M\\rightarrow TM$ is called a Killing vector field if \\cite{Chen} \n\\begin{align}\\label{eq:killing_equation}\n\\underset{\\mathcal X \\leftrightarrow \\mathcal Y}{{\\rm Sym}}\\left\\{\\left\\langle \\nabla_\\mathcal X\\mathcal K, \\mathcal Y \\right\\rangle\\right\\}=0, \\quad \\forall \\mathcal X,\\mathcal Y\\in TM. \n\\end{align}\n\\end{definition} \n\n\\begin{definition} \nA linear self-adjoint mapping $K(\\;\\cdot\\;):TM\\rightarrow TM$ is called a Killing mapping if\n\\begin{align}\\label{eq:killing_mapping_equation}\n\\underset{\\mathcal X \\leftrightarrow \\mathcal Y\\leftrightarrow \\mathcal Z}{{\\rm Sym}}\\left\\{\\left\\langle\\nabla_\\mathcal X K(\\mathcal Z),\\mathcal Y\\right\\rangle\\right\\}=0, \\quad \\forall \\mathcal X,\\mathcal Y,\\mathcal Z\\in TM,\n\\end{align} \nwhere the linear mapping $\\nabla_\\mathcal X K(\\;\\cdot\\;):TM\\rightarrow TM$ is defined as follows\n\\begin{align}\n\\nabla_\\mathcal X K( \\mathcal Y)\\equiv\\nabla_\\mathcal X( K( \\mathcal Y))-K(\\nabla_\\mathcal X \\mathcal Y), \\quad \\forall \\mathcal X,\\mathcal Y\\in TM.\n\\end{align} \n\\end{definition} \n\nOne can introduce a Killing tensor as a symmetric form $K(\\mathcal X,\\mathcal Y)=\\left\\langle K(\\mathcal X),\\mathcal Y\\right\\rangle$, which is associated with the conservation law quadratic in momenta. Let $\\mathcal K_\\alpha$ be a set of $n$ Killing vector fields. Then, one can define the following trivial Killing mapping \n\\begin{align}\\label{eq:trivial_tensor}\nK(\\mathcal X)=\\alpha \\mathcal X+\\sum^n_{\\alpha,\\beta=1}\\gamma^{\\alpha\\beta}\\left\\langle \\mathcal X,\\mathcal K_\\alpha\\right\\rangle\\mathcal K_\\beta, \\quad \\gamma^{\\alpha\\beta}=\\gamma^{\\beta\\alpha},\n\\end{align} \nwhere $\\alpha$ and $\\gamma^{\\alpha\\beta}$ is the set of $n(n+1)\/2+1$ independent constants in $M$. Note that the trivial Killing mapping does not give new conservation laws. However, one can show the existence of manifolds with nontrivial Killing tensors, which are not associated with the manifold isometries directly.\n\n\n\\section{Generation of a non-trivial Killing tensor}\n\\label{sec:generation}\nConsider a timelike\/spacelike {\\em foliation} of the manifold $M$ by a smooth family of hypersurfaces $S_\\Omega$ parameterized by $\\Omega\\in\\mathbb{R}$ (slices) with the lapse function $\\varphi$, and vector field $\\xi$ normal to slices ($\\left\\langle \\xi, \\xi \\right\\rangle\\equiv\\epsilon=\\pm1$). 
Then, the second fundamental form ${}^\\Omega\\sigma( \\;\\cdot\\; ,\\;\\cdot\\;):TS\\times TS\\rightarrow \\mathbb R$ and the mean curvature of slices $S_\\Omega$ are defined as follows\n\\begin{align}\n {}^\\Omega\\sigma(X,Y)\\equiv \\epsilon\\left\\langle \\nabla_XY,\\xi\\right\\rangle, \\quad\n \\forall X,Y\\in TS, \\qquad\n H\\equiv{\\rm Tr}({}^\\Omega\\sigma)\/(m-1).\n\\label{SFF1}\n\\end{align}\n\nIn particular, one can decompose the Killing vector field into the sum $\\mathcal K=\\mathcal K_\\Omega+k_N\\xi$, with the normal $k_N\\xi$ and tangent $\\mathcal K_\\Omega\\in TS_\\Omega$ components. In the general case, the projection $\\mathcal K_{\\Omega}$ is not a Killing vector in the slices of foliation\\cite{Kobialko:2020vqf,Kobialko:2021aqg}. An exception is the case of totally umbilic or totally geodesic slice, where the projection of any Killing field is a conformal or ordinary Killing vector field respectively. Such slices arise if the field generating the foliation is a (conformal) Killing field and\/or the spacetime has the structure of a warped\/twisted product \\cite{Chen}. Therefore, the generation of the Killing vectors in $M$ from Killing vectors $\\mathcal{K}_\\Omega$ with a nontrivial normal component $k_N$ is possible in the case of the totally geodesic slices only. As we will see further, the case of Killing tensors is more intricate.\n\n\nSimilarly, the Killing mapping can be split to normal and tangent components $K(\\; \\cdot\\;)=K^{(\\; \\cdot\\;)}_\\Omega+k^{(\\; \\cdot\\;)}_N\\xi$, where $k^{(\\; \\cdot\\;)}_N\\xi$ is a normal component and $K^{(\\; \\cdot\\;)}_\\Omega\\in TS_\\Omega$ is a tangent component. In the case of totally geodesic slices, one can lift the Killing tensor from the slice to the whole manifold and obtain a nontrivial normal component $k_N^{(\\;\\cdot\\;)}$. This particular case of totally geodesic slices was considered, for example, in the Ref.\\cite{Garfinkle:2010er}. Moreover, if we consider the conformal Killing tensors, a similar technique can be applied in the warped spacetimes \\cite{Krtous:2015ona}, where the foliation slices are totally umbilic \\cite{Chen}. In this paper, we consider the Killing tensor lift technique for arbitrary slices (not totally geodesic submanifolds). In this case, the second fundamental form ${}^\\Omega \\sigma$ is not trivial, and Killing equations imply $k^X_N=0$, $K^\\xi_{\\Omega}=0$. Then, the family of Killing mappings $K_{\\Omega}:TS_{\\Omega}\\rightarrow TS_{\\Omega}$ can be lifted from the slices to the Killing mapping $K(\\; \\cdot\\;)=K^{(\\; \\cdot\\;)}_\\Omega+k^{(\\; \\cdot\\;)}_N\\xi$ in the manifold $M$ with nontrivial normal components, if the following equations hold\\cite{Kobialko:2020vqf,Kobialko:2021aqg}\n\\begin{subequations}\n\\begin{align}\n&k^X_N=0, \\quad K^\\xi_{\\Omega}=0,\\quad \\xi(k^\\xi_N)=0,\\quad X(k^\\xi_N)=2 k^\\xi_N\\nabla_X\\ln \\varphi-2\\nabla_{K^X_{\\Omega}}\\ln\\varphi,\\label{eq:kkv1a}\\\\&\\underset{X \\leftrightarrow Y}{{\\rm Sym}}\\{\\,\\frac{1}{2}\n\\cdot\\left\\langle\\nabla_\\xi K^X_{\\Omega},Y\\right\\rangle+ \\epsilon\\cdot {}^\\Omega\\sigma(X,K^Y_{\\Omega})-\\epsilon k^\\xi_N\\cdot{}^\\Omega\\sigma(X,Y)\\}=0.\\label{eq:kkv1b}\n\\end{align}\\label{eq:kkv1}\n\\end{subequations}\n\nSuppose that the manifold $M$ has a collection $n\\leq m-2$ of linearly independent Killing vector fields $\\mathcal K_\\alpha$ tangent to the slices $S_\\Omega$ of the foliation $\\mathcal F_\\Omega$. 
Then, such vectors $\\mathcal K_\\alpha$ are also Killing vectors in the slices $S_\\Omega$, and a trivial Killing mapping of the form (\\ref{eq:trivial_tensor}) is always defined. Substituting this mapping into equations (\\ref{eq:kkv1a}), (\\ref{eq:kkv1b}) we obtain\n\\begin{subequations}\n\\begin{align}\n &\n X(k^\\xi_N)=2 (k^\\xi_N-\\alpha)\\nabla_X\\ln \\varphi, \\quad\n \\xi(k^\\xi_N)=0, \\quad\n X(\\alpha)=0, \\quad\n X(\\gamma^{\\alpha\\beta})=0,\\label{eq:kkv3a}\n \\\\&\n \\label{eq:map_evolution_2}\n 2\\epsilon(k^\\xi_N-\\alpha)\\cdot{}^\\Omega\\sigma(X,Y)=\\xi(\\alpha)\\left\\langle X,Y\\right\\rangle\n+\\sum^n_{\\alpha,\\beta=1}\\xi(\\gamma^{\\alpha\\beta})\\left\\langle X,\\mathcal K_\\alpha\\right\\rangle\\left\\langle\\mathcal K_\\beta,Y\\right\\rangle, \n\\end{align}\n\\label{eq:kkv3b}\n\\end{subequations}\nfor any $X,Y\\in TS_\\Omega$. There is always a trivial solution for these equations\n\\begin{align}\nk^\\xi_N=\\alpha, \\quad \\xi(\\alpha)=0, \\quad \\xi(\\gamma^{\\alpha\\beta})=0,\n\\end{align}\ncorresponding to the trivial Killing tensor in $M$. However, in some cases it can also have nontrivial solutions, which corresponds to the nontrivial Killing tensor and new conservation laws. Let us additionally assume that the Gramian matrix $\\mathcal G_{\\alpha\\beta}=\\left\\langle\\mathcal K_\\alpha,\\mathcal K_\\beta\\right\\rangle$ is not degenerate ($\\mathcal G\\equiv \\det(\\mathcal G_{\\alpha\\beta}) \\neq 0$). Then, we can introduce a basis $\\{\\mathcal K_\\alpha,e_a\\}$ in $S_\\Omega$ in such a way that $e_a\\in\\{\\mathcal K_\\alpha\\}^{\\perp}$ with $a=1,\\ldots,m-n-1$. A non-trivial Killing tensor can be generated using the technique from the following theorem\\cite{Kobialko:2021aqg}:\n\n\n\\begin{theorem} \n\\label{KBG}\nLet the manifold $M$ contains a collection of $n\\leq m-2$ Killing vector fields $\\mathcal K_\\alpha$ with a non-degenerate Gramian $\\mathcal G_{\\alpha\\beta}=\\left\\langle\\mathcal K_\\alpha,\\mathcal K_\\beta\\right\\rangle$, tangent to the foliation slices $S_\\Omega$ (partially umbilic if $n< m-2$) with the second fundamental form\\footnote{The form of the left upper block is a consequence of the tangent Killing vectors, and this does not impose a new condition. The non-diagonal zero elements is a new condition, which is satisfied in many applications. The right lower block is a condition of partially umbilical surfaces, which imposes constraints if ${\\rm dim}\\{e_{a}\\} > 1$. 
}\n\\begin{align}\\label{eq:second_form_explicit}\n{}^\\Omega\\sigma=\\begin{pmatrix}\n-\\frac{1}{2} \\epsilon \\cdot \\xi\\mathcal G_{\\alpha\\beta} & 0\\\\\n0 & h^\\Omega \\cdot \\left\\langle e_a,e_b\\right\\rangle\n\\end{pmatrix}.\n\\end{align}\nThen, there is a nontrivial Killing tensor on the manifold $M$ if the following steps can be successfully completed:\n\n{\\bf Step one}: Check the compatibility and integrability conditions (\\ref{K1}), (\\ref{eq:gamma_integrability})\n\\begin{align}\nX(h^{\\Omega}\\cdot\\varphi^3) = 0,\n\\label{K1}\n\\end{align}\n\\begin{align} \\label{eq:gamma_integrability}\n X \\left(\n \\mathcal G^{\\alpha\\beta}\n -\n \\frac{\\epsilon}{2 h^{\\Omega}} \\cdot \\xi \\mathcal G^{\\alpha\\beta}\n \\right) = 0.\n\\end{align}\n\n{\\bf Step two}: Obtain $\\alpha$ from (\\ref{K2}) and check the condition (\\ref{K2_cond})\n\\begin{align}\n\\xi\\ln\\xi(\\alpha)=\\xi \\ln h^{\\Omega}-2\\epsilon h^{\\Omega},\n\\label{K2}\n\\end{align}\n\\begin{equation}\\label{K2_cond}\n X(\\alpha)=0.\n\\end{equation}\n\n{\\bf Step three}: Define $\\gamma^{\\alpha\\beta}$ from (\\ref{K3n}) using the conditions $\\xi\\nu^{\\alpha\\beta}=0$, $X\\gamma^{\\alpha\\beta}=0$:\n\\begin{align} \\label{K3n}\n \\gamma^{\\alpha\\beta} = \n \\epsilon\\frac{\\xi(\\alpha)}{2 h^{\\Omega}} \\cdot \\mathcal G^{\\alpha\\beta}\n - \\nu^{\\alpha\\beta}.\n\\end{align}\n\n{\\bf Step four}: Using the functions found in the previous steps, construct a Killing map and the corresponding Killing tensor:\n\\begin{align}\nK(\\mathcal X)=\n\\alpha \\mathcal X\n+ \\sum^n_{\\alpha,\\beta=1}\\gamma^{\\alpha\\beta}\\left\\langle \\mathcal X,\\mathcal K_\\alpha\\right\\rangle\\mathcal K_\\beta\n+ \\frac{\\xi(\\alpha)}{2 h^{\\Omega}}\\left\\langle\\mathcal X,\\xi \\right\\rangle\\xi.\n\\end{align}\n\\end{theorem}\n\n\\section{Connection with photon submanifolds}\n\\label{sec:photon}\n\nConsider the case of a manifold with two Killing vectors spanning a timelike surface ($\\mathcal G<0$). Let us define a Killing vector field with constant components $\\rho^\\alpha=(\\rho,1)$, where the index $\\alpha=1,2$ numbers the Killing vectors of the basis $\\{\\mathcal{K}_\\alpha\\}$. The quantity $\\rho$ can be called the generalized impact parameter (see \\cite{Kobialko:2020vqf} for details). However, one can choose an arbitrary parametrization of $\\rho^\\alpha$ up to its norm.\n\n\\begin{proposition} \nThe fundamental photon surface is a partially umbilic surface with a second fundamental form of the form (\\ref{eq:second_form_explicit}), with the following connection between $h^\\Omega$ and $\\mathcal G_{\\alpha\\beta}$\\cite{Kobialko:2020vqf}\n\\begin{align}\n \\rho^\\alpha \\mathcal{M}_{\\alpha\\beta} \\rho^\\beta = 0,\n \\qquad\n \\mathcal{M}_{\\alpha\\beta} \\equiv\n \\frac{1}{2 h^{\\Omega}}\\cdot\\xi\\mathcal G_{\\alpha\\beta}\n - \\mathcal G_{\\alpha\\beta}\n - \\frac{1}{2 h^{\\Omega}}\\cdot\\xi\\ln \\mathcal G\\cdot\\mathcal G_{\\alpha\\beta}.\n\\label{FPS3}\n\\end{align}\n\\end{proposition}\n\nIf the surface under consideration is totally umbilic, i.e. $\\mathcal{M}_{\\alpha\\beta}=0$, it is obviously a fundamental photon surface for any $\\rho$. Since totally umbilic surfaces usually exist in spherically symmetric solutions (both static and non-static) or in non-rotating solutions with NUT charge \\cite{Galtsov:2019bty}, and they have been considered in detail in a number of works \\cite{Claudel:2000yi,Koga:2020akc}, we will focus on the case $\\mathcal{M}_{\\alpha\\beta}\\neq0$. 
\n\nConsider the foliation generating a nontrivial Killing tensor in accordance with Theorem \\ref{KBG}, and ask whether its slice $S_\\Omega$ is a fundamental photon surface. First of all, we need to solve the quadratic equation (\\ref{FPS3}) for $\\rho$ and check the condition $\\rho^\\alpha \\mathcal{G}_{\\alpha\\beta} \\rho^\\beta\\geq0$. It has a nontrivial solution if the eigenvalues of the matrix $\\mathcal{M}_{\\alpha\\beta}$ have different signs, that is $\\mathcal M\\equiv\\det(\\mathcal M_{\\alpha\\beta})<0$. Then the solution for $\\rho$ reads\n\\begin{align}\\label{PR1a}\n \\rho = \\frac{-\\mathcal M_{12}\\pm \\sqrt{-\\mathcal M}}{\\mathcal M_{11}}.\n\\end{align}\nThe condition $\\rho^\\alpha \\mathcal{G}_{\\alpha\\beta} \\rho^\\beta\\geq0$ is satisfied if the following inequality holds\n\\begin{align}\\label{PR1b}\n\\pm 2(\\mathcal G_{12}\\mathcal M_{11}-\\mathcal G_{11}\\mathcal M_{12})\\sqrt{-\\mathcal M}-2\\mathcal G_{11} \\cdot \\mathcal M+\\mathcal M_{11} \\cdot\\mathcal G \\cdot {\\rm Tr}(\\mathcal M)\\geq0,\n\\end{align}\nwhere ${\\rm Tr}(\\mathcal M)=\\mathcal M_{\\alpha\\beta} \\mathcal G^{\\alpha\\beta}=-2-(2h^\\Omega)^{-1}\\cdot\\xi\\ln\\mathcal G$. Equation (\\ref{PR1b}) defines the so-called photon region \\cite{Grenzebach:2014fha,Grenzebach:2015oea}, which arises as a flow of fundamental photon surfaces \\cite{Kobialko:2020vqf}. A priori it is not obvious that the expression (\\ref{PR1a}) for $\\rho$ is constant on every slice, but the integrability condition (\\ref{eq:gamma_integrability}) guarantees that this is indeed the case \\cite{Kobialko:2021aqg}. Therefore, we have the following theorem.\n\n\\begin{theorem} \nLet $S_\\Omega$ be a non-totally-umbilic foliation slice with a compact spatial section satisfying all conditions of Theorem \\ref{KBG} for $\\text{dim} \\{\\mathcal{K}_\\alpha\\}=2$. Then the maximal subdomain $U_{PS} \\subseteq S_\\Omega$ such that the inequality (\\ref{PR1b}) holds for all $p\\in U_{PS}$ is a fundamental photon surface\\footnote{In the case of a non-compact spatial section, the slice is not a fundamental photon surface by definition \\cite{Kobialko:2020vqf}. However, the theorem can be generalized to such non-compact surfaces.}.\n\\end{theorem}\n \nIn particular, the region $U_{PR}\\subseteq M$, such that the inequality (\\ref{PR1b}) holds for any point $p\\in U_{PR}$, is a photon region. This theorem generalizes the connection between the existence of Killing tensors of this type and photon surfaces or spherical null geodesics, which was noted in Refs. \\cite{Koga:2020akc,Pappas:2018opz,Glampedakis:2018blj}. Unfortunately, the converse statement does not hold, since the existence of fundamental photon surfaces does not guarantee the existence of a Killing tensor. As a counterexample, one can take the Zipoy-Voorhees metric \\cite{Kodama:2003ch}, where fundamental photon surfaces exist \\cite{Galtsov:2019fzq} but there is no nontrivial Killing tensor \\cite{Lukes-Gerakopoulos:2012qpc}. Nevertheless, the existence of fundamental photon surfaces can serve as a hint that a Killing tensor may be present in the corresponding metric, and it is advisable to check the consistency and integrability conditions in such cases.\n\n\n\\section{Axially symmetric spacetimes} \\label{sec:axially} \n \nConsider a Lorentzian manifold $M$ with the metric tensor\n\\begin{align}\nds^2=-f (dt-\\omega d\\phi)^2+\\lambda dr^2+ \\beta d\\theta^2 +\\gamma d\\phi^2, 
\n\\end{align}\nwhere all metric components depend on $r$ and $\\theta$ only, and we consider the foliation with timelike slices $r=\\Omega$. Generally, this metric possesses two Killing vectors $\\mathcal{K}_1= \\partial_t$, $\\mathcal{K}_2 = \\partial_\\varphi$. One can find that the second fundamental form of these slices has the form (\\ref{eq:second_form_explicit}), and the other relevant quantities read\n\\begin{align}\n \\xi = \\lambda^{-1\/2}\\partial_r, \\quad\n h^{\\Omega} = -\\frac{1}{2}\\lambda^{-1\/2}\\cdot\\partial_r \\ln \\beta, \\quad\n \\varphi=\\lambda^{1\/2},\\quad\n \\mathcal G^{\\alpha\\beta} = \\frac{1}{\\gamma}\\begin{pmatrix}\n \\omega^2- \\gamma f^{-1} & \\omega \\\\\n \\omega & 1 \\\\\n \\end{pmatrix}.\n\\end{align}\nIn this case, the number of Killing vectors is one less than the slice dimension, so the bound $n \\leq m - 2$ is saturated and the partially umbilic condition just imposes a relation on $h^\\Omega$. The compatibility and integrability conditions (\\ref{K1}), (\\ref{eq:gamma_integrability}) take the form\n\\begin{align}\n \\partial_\\theta(\\lambda \\cdot\\partial_r\\ln\\beta)=0,\\qquad\n \\partial_\\theta \\left(\n \\mathcal G^{\\alpha\\beta}\n +\n \\frac{1}{\\partial_r \\ln \\beta} \\partial_r \\mathcal G^{\\alpha\\beta}\n \\right) = 0.\n\\end{align}\nEq.~(\\ref{K2}) can be solved as follows\n\\begin{align}\n \\alpha=A_\\theta \\cdot\\beta+B_\\theta,\n\\end{align}\nwhere the arbitrary functions $A_\\theta$, $B_\\theta$ depend on $\\theta$ only, obeying the condition $\\partial_\\theta \\alpha = 0$. As a result, we have one more necessary condition for the case considered in this section: the function $\\beta$ must be of the form \n\\begin{equation}\\label{eq:beta_condition}\n \\beta(r,\\theta) = \\beta_1(\\theta) \\beta_2(r) + \\beta_3(\\theta),\n\\end{equation}\nwhere $\\beta_{1,2,3}$ are some functions of the corresponding variables. In particular, the normal component is $k^\\xi_N=B_\\theta$. Next, we can define the matrix $\\gamma^{\\alpha\\beta}$:\n\\begin{align}\n \\gamma^{\\alpha\\beta} = \n - \\beta A_\\theta \\cdot \\mathcal G^{\\alpha\\beta}\n - \\nu^{\\alpha\\beta}.\n\\end{align}\nThe integrability condition guarantees that $\\gamma^{\\alpha\\beta}$ always satisfies the equations (\\ref{eq:kkv3b}) for some $\\nu^{\\alpha\\beta}$ depending only on $\\theta$. On the other hand, we have to find a $\\nu^{\\alpha \\beta}$ that makes the equation $\\partial_r \\gamma^{\\alpha\\beta}= 0$ true. Therefore, instead of looking for $\\nu^{\\alpha\\beta}$ explicitly, we can simply discard the $\\theta$-dependent part of $\\gamma^{\\alpha\\beta}$ up to some constant matrix. Combining everything together, we get the final Killing tensor in the holonomic basis \n\\begin{align}\n K^{\\mu\\nu} =\n \\alpha g^{\\mu\\nu}\n + \\sum_{\\alpha,\\beta=t,\\phi}\\gamma^{\\alpha\\beta} \\mathcal{K}^\\mu_\\alpha \\mathcal{K}^\\nu_\\beta\n - \\beta A_\\theta \\lambda^{-1} \\delta_r^\\mu \\delta_r^\\nu.\n\\end{align}\n\nThe compatibility and integrability conditions, as well as the condition on the function $\\beta$, are invariant under the multiplicative transformations of the form\n\\begin{align}\n \\lambda\\rightarrow\\lambda'=u(r)\\lambda, \\quad \\beta\\rightarrow\\beta'=v(\\theta)\\beta.\n\\end{align}\nIf $\\beta$ possesses the aforementioned form (\\ref{eq:beta_condition}), one can simplify the integrability condition by the substitution $\\mathcal{G}^{\\alpha\\beta} = \\tilde{\\mathcal{G}}^{\\alpha\\beta} \\cdot \\beta_1 \/ \\beta$. 
Then, the integrability condition is $\\partial_\\theta\\partial_r \\tilde{\\mathcal{G}}^{\\alpha\\beta} = 0$, which is solved by $\\tilde{\\mathcal{G}}^{\\alpha\\beta} = \\tilde{\\mathcal{G}}_r^{\\alpha\\beta}(r) + \\tilde{\\mathcal{G}}_\\theta^{\\alpha\\beta}(\\theta)$. This generalizes the result of Ref. \\cite{Johannsen:2013szh}, where a similar condition was obtained from the separability of the Hamilton-Jacobi equation. In our case, we have also included the $\\beta_1(\\theta)$ term. Furthermore, the compatibility condition together with the form (\\ref{eq:beta_condition}) of the function $\\beta$ leads to $\\lambda=\\lambda_r(r) \\beta\/\\beta_1$, where $\\lambda_r$ is an arbitrary function of $r$.\n\n\\subsection{Kerr metric}\n\nAs a simple illustration, consider the Kerr solution in Boyer-Lindquist coordinates:\n\\begin{equation}\\label{Kerr}\n ds^2 =\n - \\frac{\\Delta - a^2\\sin^2\\theta}{\\Sigma}(dt - \\omega d\\phi)^2\n +\\Sigma \\left(\n \\frac{dr^2}{\\Delta}\n + d\\theta^2\n + \\frac{\\Delta \\sin^2\\theta}{\\Delta - a^2\\sin^2\\theta} d\\phi^2\n \\right),\n\\end{equation}\n\\begin{subequations}\n\\begin{equation}\n \\Sigma = r^2 + a^2 \\cos^2\\theta,\\qquad\n \\omega=\\frac{-2Mar\\sin^2\\theta}{\\Delta-a^2\\sin^2\\theta},\n\\end{equation}\n\\begin{equation}\n \\Delta = r(r-2M) + a^2.\n\\end{equation}\n\\end{subequations}\nIn the Kerr metric, $\\beta=r^2+a^2\\cos^2\\theta$ and $\\lambda=\\beta\/\\Delta$ satisfy the compatibility condition. One can explicitly verify that $\\mathcal{G}^{\\alpha\\beta}$ satisfies the integrability equation. In this case $\\alpha=r^2$, $A_\\theta=1$ and $k_N^\\xi=B_\\theta=-a^2\\cos^2\\theta$ (here we have fixed the multiplicative integration constant, which appears due to the linearity of the Killing equations). The part of $\\gamma^{\\alpha\\beta}$ independent of $\\theta$ reads\n\\begin{align}\n\\gamma^{\\alpha\\beta}=\\Delta^{-1}\\left(\\begin{matrix}\n (a^2+r^2)^2 & a(a^2+r^2) \\\\\na(a^2+r^2) & a^2 \\\\\n\\end{matrix}\\right).\n\\end{align}\nFinally, we get $\\alpha$ and $\\gamma^{\\alpha\\beta}$, which correspond to the well-known nontrivial Killing tensor for the Kerr solution\n\\begin{align}\n &\n K^{\\mu\\nu} =\n r^2 g^{\\mu\\nu}\n +\\Delta^{-1} S^\\mu S^\\nu\n - \\Delta \\delta_r^\\mu \\delta_r^\\nu,\n \\\\\\nonumber &\n S^\\mu = s \\delta^\\mu_t + a \\delta^\\mu_\\varphi,\\qquad\n s = r^2+ a^2.\n\\end{align}\n\n\\subsection{Plebanski-Demianski solution}\nThis is a less trivial example: the type D solution of the Einstein-Maxwell equations with a cosmological constant, which also contains an acceleration parameter. 
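\n\nBefore turning to this more general case, we note that the checks performed above for the Kerr metric are easily automated. The following minimal sketch is an illustration only (it assumes the SymPy library; the script and its variable names are ours and not taken from any published code): it verifies the compatibility condition (\\ref{K1}) in the axially symmetric form $\\partial_\\theta(\\lambda\\,\\partial_r\\ln\\beta)=0$ for the Kerr data and extracts $\\alpha=r^2$ from the $r$-dependent part of $\\beta$.\n\\begin{verbatim}\nimport sympy as sp\n\n# Kerr data in Boyer-Lindquist coordinates (illustrative check only)\nr, theta, M, a = sp.symbols('r theta M a', positive=True)\nDelta = r*(r - 2*M) + a**2\nbeta = r**2 + a**2*sp.cos(theta)**2      # beta = Sigma\nlam = beta\/Delta                         # lambda = Sigma\/Delta\n\n# Compatibility condition: d_theta( lambda * d_r ln(beta) ) = 0\ncompat = sp.diff(lam*sp.diff(sp.log(beta), r), theta)\nprint(sp.simplify(compat))               # -> 0\n\n# alpha = A_theta*beta + B_theta with A_theta = 1, B_theta = -a^2 cos^2(theta)\nalpha = sp.simplify(beta - a**2*sp.cos(theta)**2)\nprint(alpha)                             # -> r**2\n\\end{verbatim}\nThe same template can be reused to check the conditions of Theorem \\ref{KBG} for the solutions considered below.\n\n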
The metric line element $ds^2$ is more conveniently presented in the conformally related frame using the Boyer-Lindquist coordinates: \n\\begin{align}\n\\Omega^2ds^2&=\\Sigma\\left(\\frac{dr^2}{\\Delta_r}+\\frac{d\\theta^2}{\\Delta_\\theta}\\right)+\\frac{1}{\\Sigma}\n\\left((\\Sigma+a\\chi)^2\\Delta_\\theta\\sin^2\\theta-\\Delta_r\\chi^2\\right)d\\phi^2 \\\\\n\\label{Sol222}\n&+\\frac{2}{\\Sigma}\\left(\\Delta_r\\chi-a(\\Sigma+a\\chi)\\Delta_\\theta\\sin^2\\theta\\right)dt d\\phi-\\frac{1}{\\Sigma}\n\\left(\\Delta_r-a^2\\Delta_\\theta\\sin^2\\theta \\right)dt^2,\n\\end{align}\nwhere we have defined the functions\n\\begin{align}\n\\Delta_\\theta &=1-a_1\\cos\\theta-a_2\\cos^2\\theta, \\qquad \\Delta_r=b_0+b_1r+b_2r^2+b_3r^3+b_4r^4\\,,\\\\\n\\Omega &=1-\\lambda(N+a \\cos\\theta)r, \\quad \\Sigma=r^2+(N+a\\cos\\theta)^2\\,,\\quad\n\\chi =a \\sin^2\\theta-2N(\\cos \\theta+C)\\,,\n\\end{align}\nwith the following constant coefficients in $\\Delta_\\theta$ and $\\Delta_r$:\n\\begin{align}\na_1 &=2aM\\lambda-4aN\\left(\\lambda^2(k+\\beta)+\\frac{\\Lambda}{3}\\right), \\quad a_2=-a^2\\left(\\lambda^2(k+\\beta)+\\frac{\\Lambda}{3}\\right),\\quad\nb_0=k+\\beta, \\\\ \nb_1 &=-2M,\\quad\nb_2=\\frac{k}{a^2-N^2}+4MN\\lambda-(a^2+3N^2)\\left(\\lambda^2(k+\\beta)+\\frac{\\Lambda}{3}\\right),\\\\\nb_3 &=-2\\lambda\\left(\\frac{kN}{a^2-N^2}-(a^2-N^2)\\left(M\\lambda-N\\left(\\lambda^2(k+\\beta)+\\frac{\\Lambda}{3}\\right)\\right)\\right),\\\\\nb_4 &=-\\left(\\lambda^2k+\\frac{\\Lambda}{3}\\right),\\\\ \nk &=\\frac{1+2MN \\lambda-3N^2\\left(\\lambda^2\\beta+\\frac{\\Lambda}{3}\\right)}{1+3\\lambda^2N^2(a^2-N^2)}(a^2-N^2), \\quad\n\\lambda=\\frac{\\alpha}{\\omega}, \\quad \\omega=\\sqrt{a^2+N^2}\\,.\n\\end{align}\nGenerally, the coordinates $t$ and $r$ range over the whole $\\mathbb R$, while $\\theta$ and $\\phi$ are the\nstandard coordinates on the unit two-sphere. Seven independent parameters $M,N,a,\\alpha,\\beta,\\Lambda,C$ can be physically interpreted as mass, NUT parameter (magnetic mass), rotation parameter, acceleration parameter, $\\beta=e^2+g^2$ comprises the electric $e$ and magnetic $g$ charges, $\\Lambda$ is the cosmological constant, and the constant $C$ defines the location of the Misner string. \n \n{\\bf The first step} is to check the compatibility and integrability conditions (\\ref{K1}), (\\ref{eq:gamma_integrability}). The first one holds if $\\alpha \\cdot a=0$, i.e. either the acceleration $\\alpha$ or the rotation $a$ are zero. Indeed, as shown in Ref. \\cite{Kubiznak:2007kh}, the general PD solution with acceleration possesses a conformal Killing tensor, but not the usual one. In the case $a=0$, the second condition does not hold. 
Therefore, in the following we consider the solution with zero acceleration, $\\alpha=0$, which corresponds to the dyonic Kerr-Newman-NUT-AdS solution.\n\nIn the {\\bf second step} we pick out the $r$-dependent part of $\\beta = \\Sigma \/ \\Delta_\\theta$ to obtain $\\alpha$\n\\begin{align}\n\\beta=\\frac{\\Sigma}{\\Delta_\\theta} \\quad \\Rightarrow \\quad \\alpha=r^2, \\quad A_\\theta=\\Delta_\\theta.\n\\end{align}\nSimilarly, in the {\\bf third step}, the $r$-dependent part of $\\gamma^{\\alpha\\beta}$ is defined as\n\\begin{align}\n\\gamma^{\\alpha\\beta} = \n\\Delta^{-1}_r \\begin{pmatrix}\n s^2 & as \\\\\n as & a^2 \\\\\n\\end{pmatrix}, \\quad s = \\Sigma + a\\chi = r^2 + a^2 - 2 a C N + N^2.\n\\end{align}\nIn the final {\\bf fourth step}, we obtain the nontrivial Killing tensor for the Kerr-Newman-NUT-AdS metric:\n\\begin{align}\n &\n K^{\\mu\\nu} =\n r^2 g^{\\mu\\nu}\n +\\Delta^{-1}_r S^\\mu S^\\nu\n - \\Delta_r \\delta_r^\\mu \\delta_r^\\nu, \\quad\n S^\\mu = s \\delta^\\mu_t + a \\delta^\\mu_\\varphi.\n\\end{align}\n\\section{Gal'tsov-Kechkin (GK) solution}\nIn 1994 one of the present authors, in collaboration with O. Kechkin, derived the general stationary charged black hole solution within Einstein-Maxwell-dilaton-axion (EMDA) gravity, which is the ${\\cal N}=4$, $D=4$ supergravity consistently truncated to the theory with one vector field \\cite{Galtsov:1994pd}. This is a seven-parameter solution, containing the mass $M$, electric and magnetic charges $Q,P$, rotation parameter $a$, NUT charge $N$, and the asymptotic values of the dilaton and axion (irrelevant for the metric) as independent parameters. A less general solution (without NUT) was derived by A. Sen \\cite{Sen:1992ua} in the context of the dimensionally reduced effective action of the heterotic string, and it is now commonly referred to as the Kerr-Sen metric. Non-rotating solutions with NUT were independently obtained by Kallosh et al.\\cite{Kallosh:1994ba} and Johnson and Myers\\cite{Johnson:1994nj}. \nNow the Kerr-Sen metric is often considered as a deformed Kerr solution in modeling deviations from the standard picture of black holes. \n\nThe GK solution \\cite{Galtsov:1994pd} was shown \\cite{Garcia:1995qz} to be of Petrov type I, contrary to the Kerr-Newman-NUT solution in Einstein-Maxwell gravity. The same was shown for the Kerr-Sen solution without NUT \\cite{Burinskii:1995hk}. Though the Kerr-Sen metric is not of type D, the Hamilton-Jacobi equation was shown to be separable for it \\cite{Konoplya:2018arm,Lan:2018lyj}, although the Killing tensor was not found explicitly (for further discussion see \\cite{Papadopoulos:2020kxu}). But for the solution with NUT, it was claimed that no second-rank Killing tensor exists \\cite{Siahaan:2019kbw}. Here we use the new technique to resolve this controversy. 
\n\nThe line element of the GK solution can be written in the form (\\ref{Kerr}), where the functions $\\Delta$, $\\omega$, $\\Sigma$ are redefined as follows\n\\begin{align}\n & \\Delta = (r - r_{-}) (r - 2M) + a^2 - (N-N_{-})^2, \\nonumber\\\\\n & \\Sigma = r(r-r_{-}) + (a\\cos\\theta + N)^2 - N_{-}^2, \\nonumber\\\\\n &\\omega = \\frac{-2w}{\\Delta - a^2 \\sin^2\\theta},\\quad\n w = N \\Delta \\cos\\theta\n + a \\sin^2\\theta \\left( M(r-r_{-}) + N(N - N_{-}) \\right)\\nonumber.\n\\end{align}\nThe full solution also contains Maxwell, dilaton $\\phi$ and axion $\\kappa$ fields (whose form can be found in \\cite{Galtsov:1994pd}), and represents a family with seven parameters: mass $M$, NUT charge $N$, rotation parameter $a$, electric and magnetic charges $Q$ and $P$, and the asymptotic complex axidilaton charge $z_\\infty = \\kappa_\\infty + i {\\rm e}^{-2\\phi_\\infty}$, irrelevant for the metric.\nThe following abbreviations are used:\n\\begin{equation}\n r_{-} = \\frac{M |\\mathcal{Q}|^2 }{|\\mathcal{M}|^2},\\quad\n N_{-} = \\frac{N |\\mathcal{Q}|^2 }{2|\\mathcal{M}|^2},\\quad\n \\mathcal{M} = M + i N,\\quad\n \\mathcal{Q} = Q - i P,\\quad\n \\mathcal{D} = - \\frac{{\\mathcal{Q}^*}^2}{2\\mathcal{M}}.\n\\end{equation}\nThe metric is presented in the Kerr-like form, but the metric functions are essentially different. The solution reduces to Kerr-NUT for $Q=P=0$. \n\nNow we apply our procedure to construct the Killing tensor.\nIt can be easily verified that the consistency (\\ref{K1}) and the integrability (\\ref{eq:gamma_integrability}) conditions are satisfied. The uplift turns out to be as simple as in the vacuum Kerr example, leading to the result \n\\begin{align}\n &\n K^{\\mu\\nu} =\n r(r-r_{-}) g^{\\mu\\nu}\n +\\Delta^{-1} S^\\mu S^\\nu\n - \\Delta \\delta_r^\\mu \\delta_r^\\nu,\n \\\\\\nonumber &\n S^\\mu = s \\delta^\\mu_t + a \\delta^\\mu_\\varphi,\\qquad\n s = r(r-r_{-}) + a^2 + N^2 - N_{-}^2.\n\\end{align}\nThis expression is new and is applicable both to the Kerr-Sen solution ($N=N_-=0$) and to the full GK solution.\n\n\n\\section*{Conclusions}\nIn this article, we reviewed a new geometric method of generating Killing tensors in spacetimes admitting a codimension-one foliation \\cite{Kobialko:2021aqg}. Using our general lift equations, one can try to raise a trivial Killing tensor defined in the slices to a nontrivial Killing tensor in the bulk. For this, the foliation must satisfy some consistency and integrability conditions, which we have presented explicitly. Furthermore, we have completely solved the lifting equations in terms of the functions $\\alpha$, $\\gamma^{\\alpha\\beta}$ and formulated Theorem \\ref{KBG} for the generation of the non-trivial Killing tensor.\n\nFinding a foliation compatible with the integrability and consistency conditions can be a challenging task. The existence of such a foliation means that the slices represent fundamental photon surfaces provided the corresponding inequalities hold. This generalizes the result of Ref. \\cite{Koga:2020akc} to the case of general stationary spaces. Conversely, the existence of fundamental photon surfaces, though it does not guarantee the existence of a Killing tensor, may serve as an indication that such a tensor exists. Therefore, it is recommended to check the consistency and integrability conditions for fundamental photon surfaces, if these are known. This property makes the search for fundamental photon surfaces important also for studying the integrability of geodesic motion. 
It is tempting to conjecture that the existence of fundamental photon surfaces implies the existence of Killing tensor if the slice is equipotential or spherical \\cite{Cederbaum:2019rbv}.\n\nUsing this technique, we were able to derive Killing tensor for EMDA dyon with NUT (GK solution \\cite{Galtsov:1994pd}), which is also valid for Kerr-Sen solution as a particular case.\n\\section*{Acknowledgements} \nThe work was supported by the Russian Foundation for Basic Research on the project 20-52-18012Bulg-a, and the Scientific and Educational School of Moscow State University \"Fundamental and Applied Space Research\". I.B. is also grateful to the Foundation for the Advancement of Theoretical Physics and Mathematics ``BASIS'' for support.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\\begin{center} \n\\begin{figure*}[t!]\n\\includegraphics[width=0.95\\textwidth]{figures\/Intro_pic.png}\n\\captionsetup{justification=centerlast}\n\\caption{ \\label{fig:MLTTN}\nData flow of the Machine Learning analysis for the $b$-jet classification of the LHCb experiment at CERN. After Proton-Proton collisions, $b$- and $\\bar{b}$-quarks are created, which subsequently fragment into particle jets (left). The different particles within the jets are tracked by the LHCb detector. Selected features of the detected particle data are used as input for the Machine Learning analysis by NNs and TNs in order to determine the charge of the initial quark (right).\n}\n\\end{figure*}\n\\end{center}\n\\section*{Introduction}\n\nArtificial Neural Networks (NN) are a well-established tool for applications in Machine Learning and they are of increasing interest in both research and industry~\\cite{bishop1996neural, haykin2009neural, nielsen2015neural, goodfellow2016aaron, LeCun_15, silver2016mastering}. Inspired by biological neural networks, they are able to recognize patterns while processing a huge amount of data. In a nutshell, a NNs describes a functional mapping containing many variational parameters, which are optimised during the training procedure. Recently, deep connections between Machine Learning and quantum physics have been identified and continue to be uncovered~\\cite{Carleo_2019}. On one hand, NNs have been applied to describe the behavior of complex quantum many-body systems~\\cite{Deng_2017, Nomura_2017, Carleo_2017} while, on the other hand, quantum-inspired technologies and algorithms are taken into account to solve Machine Learning tasks~\\cite{Schuld_2018, Das_Sarma_2019, Stoudenmire_2018}.\n\nOne particular numerical method originated from quantum physics which has been increasingly compared to NNs are Tensor Networks (TNs)~\\cite{TTNvsNN, Chen_2018, levine2017deep}. TNs have been developed to investigate quantum many-body systems on classical computers by efficiently representing the exponentially large quantum wavefunction $|\\psi \\rangle$ in a compact form and they have proven to be an essential tool for a broad range of applications~\\cite{m07, Schollw_ck_2011, sv13, dm16, gqh17, bauls2019simulating, TTNA19, felser2019twodimensional, SimoneBook, Carmen_Ba_uls_2020}. The accuracy of the TN approximation can be controlled with the so-called \\textit{bond-dimension} $\\chi$, an auxiliary dimension for the indices of the connected local tensors. 
Recently, it has been shown that TN methods can also be applied to solve Machine Learning (ML) tasks very effectively~\\cite{NIPS2016_6211, Stoudenmire_2018, alex2016exponential, levine2017deep, khrulkov2017expressive, Liu_2019,roberts2019tensornetwork}.\nIndeed, even though NNs have been highly developed in recent decades by industry and research, the first approaches of Machine Learning with TN yield already comparable results when applied to standard datasets~\\cite{NIPS2016_6211, Stoudenmire_2018, glasser2018probabilistic}.\nDue to their original development focusing on quantum systems, TNs allow to easily compute quantities such as quantum correlations or entanglement entropy and thereby they grant access to insights on the learned data from a distinct point of view for the application in ML~\\cite{levine2017deep, Liu_2019}. Hereafter, we demonstrate the effectiveness of the approach and, more importantly, that it allows introducing algorithms to simplify and explain the learning process, unveiling a pathway to an explainable Artificial Intelligence. As a potential application of this approach, we present a TN supervised learning of identifying the charge of $b$-quarks (i.e. $b$ or $\\bar{b}$) produced in high-energy proton-proton collisions at the Large Hadron Collider (LHC) accelerator at CERN.\n\nIn what follows, we first describe the quantum-inspired Tree Tensor Network (TTN) and introduce different quantities that can be extracted from the TTN classifier which are not easily accessible for the biological-inspired Deep NN (DNN), such as correlation functions and entanglement entropy which can be used to explain the learning process and subsequent classifications, paving the way to an efficient and transparent ML tool.\nIn this regard, we introduce the Quantum-Information Post-learning feature Selection (QuIPS), a protocol that reduces the complexity of the ML model based on the information the single features provide for the classification problem. We then briefly describe the LHCb experiment and its simulation framework, the main observables related to $b$-jets physics, and the relevant quantities for this analysis together with the underlying LHCb data \\cite{lhcb_open, lhcb_doi}. \nWe further compare the performance obtained by the DNN and the TTN, before presenting the analytical insights into the TTN which, among others, can be exploited to improve future data analysis of high energy problems for a deeper physical understanding of the LHCb data.\nMoreover, we introduce the Quantum-Information Adaptive Network Optimisation (QIANO), which adapts the TN representation by reducing the number of free parameters based on the captured information within the TN while aiming to maintain the highest accuracy possible. Therewith, we can optimise the trained TN classifier for a targeted prediction speed without the necessity to relearn a new model from scratch.\n\\\\\n\n\nTensor Networks (TNs) are not only a well-established way to represent a quantum wavefunction $|\\psi \\rangle$, but more general an efficient representation of information as such. In the mathematical context, a TN approximates a high-order tensor by a set of low-order tensors that are contracted in a particular underlying geometry and have common roots with other decompositions, such as the Singular Value Decomposition (SVD) or Tucker decomposition~\\cite{Tucker1966}. 
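As a minimal illustration of this statement, the following sketch (written for this discussion and assuming only the NumPy library; it is not taken from any TN package) factorises a small high-order tensor into a chain of three-index tensors by successive SVDs, truncating every auxiliary index to a given bond dimension $\\chi$:\n\\begin{verbatim}\nimport numpy as np\n\ndef tensor_train(T, chi):\n    # Decompose an order-N tensor into a chain of order-3 tensors\n    # (a tensor train) with bond dimension at most chi, via successive SVDs.\n    dims = T.shape\n    cores, bond, M = [], 1, np.asarray(T)\n    for d in dims[:-1]:\n        M = M.reshape(bond * d, -1)\n        U, S, Vh = np.linalg.svd(M, full_matrices=False)\n        keep = min(chi, S.size)              # truncate to bond dimension chi\n        cores.append(U[:, :keep].reshape(bond, d, keep))\n        M = S[:keep, None] * Vh[:keep, :]    # push the remainder to the right\n        bond = keep\n    cores.append(M.reshape(bond, dims[-1], 1))\n    return cores\n\nT = np.random.rand(2, 2, 2, 2)               # a toy order-4 tensor\nfor chi in (1, 2, 4):\n    cores = tensor_train(T, chi)\n    approx = cores[0]\n    for c in cores[1:]:                      # contract the chain back together\n        approx = np.tensordot(approx, c, axes=(approx.ndim - 1, 0))\n    err = np.linalg.norm(approx.reshape(T.shape) - T)\n    print(chi, [c.shape for c in cores], round(float(err), 4))\n\\end{verbatim}\nIncreasing $\\chi$ systematically reduces the truncation error in this construction, which is the sense in which the bond dimension controls the accuracy of the representation.\n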
Among others, some of the most successful TN representations are the Matrix Product State (MPS) - or Tensor Trains~\\cite{_stlund_1995, Schollw_ck_2011, ttrains, NIPS2016_6211}, the Tree Tensor Network (TTN) - or Hierarchical Tucker decomposition~\\cite{TTN14,HTdec,Liu_2019}, and the Projected Entangled Pair States (PEPS)~\\cite{verstraete2004renormalization, Or_s_2014}.\n\nFor a supervised learning problem, a TN can be used as the \\textit{weight tensor} $W$~\\citep{NIPS2016_6211,Stoudenmire_2018,Liu_2019}, a high-order tensor which acts as a classifier for the input data $\\{{\\bf x}\\}$: Each sample $\\bf x$ is encoded by a \\textit{feature map} $\\Phi({\\bf x})$ and subsequently classified by the \\textit{weight tensor} $W$. The final confidence of the classifier for a certain class labeled by $l$ is given by the probability\n\\begin{equation}\n\\mathcal{P}_{l}({\\bf x}) = W_l\\cdot \\Phi({\\bf x}) ~.\n\\end{equation}\n\nIn the following, we use a TTN $\\Psi$ to represent $W$ (see Fig.~\\ref{fig:MLTTN}, bottom right), which can be described as a contraction of its $N$ hierarchically connected local tensors $T_{\\{\\chi\\}}$\n\\begin{equation}\n\\Psi = \\sum_{\\chi } T_{l,\\chi_1, \\chi_2}^{[1]} \\prod_{\\eta=2}^{N} T_{\\chi_\\eta,\\chi_{2\\eta}, \\chi_{2\\eta+1}}^{[\\eta]},\n\\end{equation}\nwhere $\\eta$ labels the local tensors and the sum runs over all auxiliary indices $\\chi_\\eta$.\nTherefore, we can interpret the TTN classifier $\\Psi$ as a set of quantum many-body wavefunctions $|\\psi_l\\rangle$ - one for each of the class labels $l$ (see Supplementary Methods). For the classification, we represent each sample $x$ by a product state $\\Phi(x)$. To this end, we map each feature $x_i\\in {\\bf x}$ into a quantum spin by choosing the \\textit{feature map} $\\Phi({\\bf x})$ as a Kronecker product of $N+1$ \\textit{local feature maps} \n\\begin{equation}\n\\Phi^{[i]}(x_i) = \\left[ \\cos{\\left( \\frac{\\pi x_i'}{2} \\right)},\\sin{\\left( \\frac{\\pi x_i'}{2} \\right)} \\right]\n\\label{eq:feature_map}\n\\end{equation}\nwhere $x_i'\\equiv x_i\/x_{i,\\text{max}}\\in [0,1]$ is the re-scaled value with respect to the maximum $x_{i,\\text{max}}$ within all samples of the training set. \n\nAccordingly, we classify a sample $x$ by computing the overlap $\\langle \\Phi(x) | \\psi_l \\rangle$ of the product state $\\Phi(x)$ with $|\\psi_l\\rangle$ for all labels $l$, resulting in the weighted probabilities\n\\b\n\\mathcal{P}_l = \\frac{|\\langle \\Phi(x)| \\psi_l\\rangle|^2}{\\sum_{l}|\\langle \\Phi(x)| \\psi_l\\rangle|^2}\n\\e\nfor each class. We point out that we can encode the input data in different non-linear feature maps as well (see Supplementary Notes).\n\nOne of the major benefits of Tensor Networks in quantum mechanics is the accessibility of information within the network. They allow us to efficiently measure information quantities such as entanglement entropy and correlations. Based on these quantum-inspired measurements, we here introduce the QuIPS protocol for the TN application in Machine Learning, which exploits the information encoded and accessible in the TN in order to rank the input features according to their importance for the classification.\n\nIn information theory, entropy as such is a measure of the information content inherent in the possible outcomes of variables, such as, e.g., a classification~\\citep{6773024,6773067,NiC00}. In TNs such information content can be assessed by means of the entanglement entropy $S$, which describes the shared information between TN bipartitions. 
The entanglement $S$ is measured via the Schmidt decomposition, that is, decomposing $|\\psi\\rangle$ into two bipartitions $|\\psi^A_\\alpha \\rangle$ and $|\\psi^B_\\alpha \\rangle$~\\cite{NiC00} such that\n\\b\n\\Psi = \\sum_\\alpha^\\chi \\lambda_\\alpha |\\Psi^A_\\alpha\\rangle \\otimes |\\Psi^B_\\alpha\\rangle ,\n\\e\nwhere $\\lambda_\\alpha$ are the Schmidt-coefficients (non-zero, normalised singular values of the decomposition). The entanglement entropy is then defined as $S=-\\sum_\\alpha \\lambda_\\alpha^2 \\ln{\\lambda_\\alpha^2}$. Consequently, the minimal entropy $S=0$ is obtained only if we have one single non-zero singular value $\\lambda_1=1$. In this case, we can completely separate the two bipartitions as they share no information. On the contrary, higher $S$ means that information is shared among the bipartitions. \n\nIn the Machine Learning context, the entropy can be interpreted as follows: If the features in one bipartition provide no valuable information for the classification task, the entropy is zero. On the contrary, $S$ increases the more information between the two bipartitions are exploited. This analysis can be used to optimize the learning procedure: whenever $S=0$, the feature can be discarded with no loss of information for the classification. Thereby, a second model with fewer features and fewer tensors can be introduced. This second, more efficient model results in the same predictions in less time. On the contrary, a high bipartition entropy highlights which feature - or combination of features - are important for the correct predictions. \n\nThe second set of measurements we take into account are the correlation functions \n\\b\nC^l_{i,j} = \\frac{\\langle \\psi_{l}| \\sigma_i^z \\sigma_j^z | \\psi_l \\rangle}{\\langle \\psi_{l}| \\psi_l \\rangle}\n\\e\nfor each pair of features (located at site $i$ and $j$) and for each class $l$. The correlations offer an insight into the possible relation among the information that the two features provide. In case of maximum correlation or anti-correlation among them for all classes $l$, the information of one of the features can be obtained by the other one (and vice versa), thus one can be neglected. In case of no correlation among them, the two features may provide fundamentally different information for the classification. \nThe correlation analysis allows pinpointing if two features give independent information. 
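Both quantities reduce to elementary linear algebra once the relevant state is available; the following sketch (an illustration assuming only NumPy, with a dense toy state vector used in place of the actual TTN contraction) computes the entanglement entropy of a bipartition from the singular values, as well as a two-feature $\\sigma^z$ correlation:\n\\begin{verbatim}\nimport numpy as np\n\ndef entanglement_entropy(psi, n_left, d=2):\n    # S = -sum(lambda^2 ln lambda^2) for the bipartition\n    # (first n_left sites | remaining sites) of a state vector psi.\n    psi = psi \/ np.linalg.norm(psi)\n    n = int(round(np.log(psi.size) \/ np.log(d)))\n    lam = np.linalg.svd(psi.reshape(d**n_left, d**(n - n_left)),\n                        compute_uv=False)\n    p = lam[lam > 1e-12]**2\n    return float(-(p * np.log(p)).sum())\n\ndef zz_correlation(psi, i, j, n, d=2):\n    # <psi| sz_i sz_j |psi> \/ <psi|psi>; sigma_z is diagonal (+1, -1)\n    # in the local basis, so only the squared amplitudes are needed.\n    sz = np.array([1.0, -1.0])\n    conf = np.indices((d,) * n).reshape(n, -1)   # basis configurations\n    weights = sz[conf[i]] * sz[conf[j]]\n    amp2 = np.abs(psi)**2\n    return float((weights * amp2).sum() \/ amp2.sum())\n\nrng = np.random.default_rng(7)\npsi = rng.normal(size=2**4)      # toy 'classifier state' for 4 features\nprint(entanglement_entropy(psi, n_left=2))\nprint(zz_correlation(psi, i=0, j=3, n=4))\n\\end{verbatim}\nIn the actual TTN these quantities are obtained directly from contractions of the local tensors, without ever forming the dense state vector.\n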
However, the correlation itself - in contrast to the entropy - does not tell if this information is important for the classification.\n\n\nIn conclusion, based on the previous insights, namely {\\it (i)} a low entropy of a feature bipartition signals that one of the two bipartitions can be discarded with negligible loss of information, and {\\it (ii)} if two features are completely (anti-)correlated we can neglect at least one of them, the QuIPS enables us to single out the most valuable features for the classification.\n\n\nNowadays, in particle physics, ML is widely used for the classification of jets, \\textit{i.e.} streams of particles produced by the fragmentation of quarks and gluons.\nThe jet substructure can be exploited to solve such classification problems \\cite{jet_substructure}.\nML techniques have been proposed to identify boosted, hadronically decaying top quarks \\cite{top_tagger}, or to identify the jet charge \\cite{jet_ml}.\nThe ATLAS and CMS collaborations developed ML algorithms in order to identify jets generated by the fragmentation of $b$-quarks \\cite{atlas_ml1, atlas_ml2, cms_ml}: a comprehensive review on ML techniques at the LHC can be found in \\cite{jet_lhc}.\n\nThe LHCb experiment in particular is, among others, dedicated to the study of the physics of $b$- and $c$-quarks produced in proton-proton collisions. Here, ML methods have been introduced recently\nfor the discrimination between $b$- and $c$-jets by using Boosted Decision Tree classifiers~\\cite{lhcb_tagging}. However, a crucial topic for the LHCb experiment, which is as yet unexploited by ML, is the identification of the charge of a $b$-quark, \\textit{i.e.} discriminating between a $b$ and a $\\bar{b}$. Such identification can be used in many physics measurements, and it is the core of the determination of the charge asymmetry in $b$-pair production, a quantity sensitive to physics beyond the Standard Model~\\cite{asy_np}.\nWhenever produced in a scattering event, $b$-quarks have a short lifetime as free particles; indeed, they manifest themselves as bound states (hadrons) or as narrow cones of particles produced by the hadronization (jets). In the case of the LHCb experiment, the $b$-jets are detected by the apparatus located in the forward region of proton-proton collisions (see Fig.~\\ref{fig:MLTTN}, left)~\\cite{lhcb_det}. The LHCb detector includes a particle identification system that distinguishes different types of charged particles within the jet, and a high-precision tracking system able to measure the momentum of each particle~\\cite{lhcb_perf}. Still, the separation between $b$- and $\\bar{b}$-jets is a highly difficult task because the $b$-quark fragmentation produces dozens of particles via non-perturbative Quantum Chromodynamics processes, resulting in non-trivial correlations between the jet particles and the original quark.\n\nThe algorithms used to identify the charge of the $b$-quarks based on information on the jets\nare called tagging methods. The tagging algorithm performance is typically quantified with the \\textit{tagging power} $\\epsilon_{tag}$, representing the effective fraction of jets that contribute to the statistical uncertainty in an asymmetry measurement~\\cite{tag_pow1, tag_pow2}. 
In particular, the tagging power $\\epsilon_{tag}$ takes into account the efficiency $\\epsilon_{eff}$ (the fraction of jets for which the classifier takes a decision) and the prediction accuracy $a$ (the fraction of correctly classified jets among them) as follows: \n\\b\n\\epsilon_{tag} = \\epsilon_{eff} \\cdot (2a-1)^2 ~.\n\\e\nTo date, the muon tagging method gives the best performance on the $b$- vs $\\bar{b}$-jet discrimination using the dataset collected in the LHC Run I~\\cite{lhcb_asy}:\nhere, the muon with the highest momentum in the jet is selected, and its electric charge is used to decide on the $b$-quark charge.\n\nFor the ML application, we now formulate the identification of the $b$-quark charge in terms of a supervised learning problem. As described above, we implemented a TTN as a classifier and applied it to the LHCb problem analysing its performance. Alongside, a DNN analysis is performed to the best of our capabilities, and both algorithms are compared with the muon tagging approach.\nBoth the TTN and the DNN, use as input for the supervised learning $16$ features of the jet substructure from the official simulation data released by the LHCb collaboration \\cite{lhcb_open, lhcb_doi}.\nThe $16$ features are determined as follows: the muon with the highest $p_\\mathrm{T}$ among all other detected muons in the jet is selected and the same is done for the highest $p_\\mathrm{T}$ kaon, pion, electron, and proton, resulting in 5 different selected particles. For each particle, three observables are considered: (i) The momentum relative to the jet axis ($p^{rel}_{\\mathrm{T}}$), (ii) the particle charge ($q$), and (iii) the distance between the particle and the jet axis ($\\Delta R$), for a total of $5 \\times 3$ observables. \nIf a particle type is not found in a jet, the related features are set to $0$.\nThe $16$th feature is the total jet charge $Q$, defined as the weighted average of the particles charges $q_i$ inside the jet, using the particles $p^{rel}_{\\mathrm{T}}$ as weights:\n\\b\nQ = \\frac{\\sum_i (p^{rel}_{\\mathrm{T}})_i q_i}{\\sum_i (p^{rel}_{\\mathrm{T}})_i} ~.\n\\e\n\n\\section*{Results}\n\\subsection*{Analysis framework}\nIn the following, we present the jet classification performance for the TTN and the DNN applied to the LHCb dataset, also comparing both ML techniques with the muon tagging approach.\nFor the DNN we use an optimized network with three hidden layers of 96 nodes (see Supplementary Methods for details). Hereafter, we aim to compare the best possible performance of both approaches therefore, we optimised the hyper-parameters of both methods in order to obtain the best possible results from each of them, TTN and DNN. Therefore, we split the dataset of about $700k$ events (samples) into two sub-sets: $60\\%$ of the samples are used in the training process while the remaining $40\\%$ are used as test set to evaluate and compare the different methods. For each event prediction after the training procedure, both ML models output the probability $\\mathcal{P}_{b}$ to classify the event as a jet generated by a $b$- or a $\\bar{b}$-quark. 
A threshold $\\Delta$ around $\\mathcal{P}_{b}=0.5$ is then defined in which we classify the quark as unknown in order to optimise the overall tagging power $\\epsilon_{tag}$.\n\n\\subsection*{Jet classification performance }\\label{sec:results}\nWe obtain similar performances in terms of the raw prediction accuracy applying both ML approaches after the training procedure on the test data: the TTN takes a decision on the charge of the quark in $\\epsilon_{eff}^{\\text{TTN}}=54.5\\%$ of the cases with an overall accuracy of $a^{\\text{TTN}}=70.56\\%$, while the DNN decides in $\\epsilon_{eff}^{\\text{DNN}}=55.3\\%$ of the samples with $a^{\\text{DNN}}=70.49\\%$. We checked both approaches for biases in physical quantities to ensure that both methods are able to properly capture the physical process behind the problem and thus that they can be used as valid tagging methods for LHCb events (see Supplementary Methods).\n\n\\begin{figure}[ht]\n\\centering\n\\begin{minipage}{.225\\linewidth}\n \\centering\n \n \\includegraphics[width=\\linewidth]{figures\/TagPower.png}\n \\subcaption{}\n \\label{fig:Tagging}\n\\end{minipage}%\n\\begin{minipage}{.225\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/appendix_g\/roc.png}\n \\subcaption{}\n \\label{fig:roc}\n\\end{minipage}\n\\begin{minipage}{.25\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/Prob_DNNSQ.png}\n \\subcaption{}\n \\label{fig:probDNN}\n\\end{minipage}%\n\\begin{minipage}{.25\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/Prob_TTN.png}\n \\subcaption{}\n \\label{fig:probTTN}\n\\end{minipage}\n\\caption{Comparison of the DNN and TNN analysis: (a) Tagging power for the DNN (green), TTN (blue) and the muon tagging (red), (b) ROC curves for the DNN (green) and the TTN (blue, but completely covered by DNN), compared with the \\emph{line of no-discrimination} (dotted navy-blue line), (c) probability distribution for the DNN, and (d) for the TTN. In the two distributions (c)+(d), the correctly classified events (green) are shown in the total distribution (light blue). Below, in black all samples where a muon was detected in the jet.}\n\\end{figure}\n\nIn Fig.~\\ref{fig:Tagging} we present the tagging power of the different approaches as a function of the jet transverse momentum $p_\\mathrm{T}$. Evidently, both Machine Learning methods perform significantly better than the muon tagging approach for the complete range of jet transverse momentum $p_T$, while the TTN and DNN display comparable performances within the statistical uncertainties.\n\nIn Figs.~\\ref{fig:probDNN} and ~\\ref{fig:probTTN} we present the histograms of the confidences for predicting a $b$-flavored jet for all samples in the test data set for the DNN and the TTN respectively. Interestingly, even though both approaches give similar performances in terms of overall precision and tagging power, the prediction confidences are fundamentally different. For the DNN, we see a Gaussian-like distribution with, in general, not very high confidence for each prediction. Thus, we obtain less correct predictions with high confidences, but at the same time, fewer wrong predictions with high confidences compared to the TTN predictions. On the other hand, the TTN displays a flatter distribution including more predictions - correct and incorrect - with higher confidence. Remarkably though, we can see peaks for extremely confident predictions (around 0 and around 1) for the TTN. 
These peaks can be traced back to the presence of the muon; noting that the charge of which is a well-defined predictor for a jet generated by a $b$-quark. The DNN lacks these confident predictions exploiting the muon charge.\nFurther, we mention that using different cost functions for the DNN, i.e. cross-entropy loss function and the Mean Squared Error, lead to similar results (see Supplementary Methods).\n\nFinally, in Fig.~\\ref{fig:roc} we present the Receiving Operator Characteristic (ROC) curves for the TTN and the DNN together with the \\emph{line of no-discrimination}, which represents a randomly guessing classifier: the two ROC curves for TTN and DNN are perfectly coincident, and the Area Under the Curve (AUC) for the two classifiers is the almost same ($AUC^{TTN}=0.689$ and $AUC^{DNN}=0.690$).\nThe graph illustrates the similarity in the outputs between TTN and DNN despite the different confidence distributions. This is further confirmed by a Pearson correlation factor of $r=0.97$ between the outputs of the two classifiers.\n\nIn conclusion, the two different approaches result in similar outcomes in terms of prediction performances. However, the underlying information used by the two discriminators is inherently different. For instance, the DNN predicts more conservatively, in the sense that the confidences for each prediction tend to be lower compared with the TTN. Additionally, the DNN does not exploit the presence of the muon as strongly as the TTN, even though the muon is a good predictor for the classification.\n\n\\subsection*{Exploiting insights into the data with TTN}\\label{sec:insights}\nAs previously mentioned, the TTN analysis allows to efficiently measure the captured correlations and the entanglement within the classifier. These measurements give insight into the learned data and can be exploited via QuIPS to identify the most important features typically used for the classifications.\n\n\\begin{center} \n\\begin{figure}[ht]\n\\centering\n\\begin{minipage}{.25\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/corr_l1.png}\n \\subcaption{}\n \\label{fig:Corr}\n\\end{minipage}%\n\\begin{minipage}{.25\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/Entropy.png}\n \\subcaption{}\n \\label{fig:Entr}\n\\end{minipage}\n\\begin{minipage}{.225\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/QR_TagPower.png}\n \\subcaption{}\n \\label{fig:QR_TagPower}\n\\end{minipage}%\n\\begin{minipage}{.225\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/TagPower_Trunc.png}\n \\subcaption{}\n \\label{fig:Trunc_TagPower}\n\\end{minipage}\n\\caption{Exploiting the information provided by the learned TTN classifier: (a) Correlations between the $16$ input features (blue for anti-correlated, white for uncorrelated, red for correlated). The numbers indicate $q$, $p_T^{rel}$, $\\Delta R$ of the muon (1-3), kaon (4-6), pion (7-9), electron (10-12), proton (13-15) and the jet charge $Q$ (16). \n(b) Entropy of each feature as the measure for the information provided for the classification. (c) Tagging power for learning on all features (blue), the best 8 proposed by QuIPS exploiting insights from (a)+(b) (magenta), the worst 8 (yellow) and the muon tagging (red). 
(d) Tagging power for decreasing bond dimension truncated after training: The complete model (blue shades for $\\chi=100$, $\\chi=50$, $\\chi=5$), for using the QuIPS best 8 features only (violet shades for $\\chi=16$, $\\chi=5$), and the muon tagging (red).}\n\\end{figure}\n\\end{center}\n\nIn Fig.~\\ref{fig:Corr} we present the correlation analysis allowing us to pinpoint if two features give independent information. For both labels ($l=b,\\bar{b}$) the results are very similar, thus in Fig.~\\ref{fig:Corr} we present only $l=b$. We see among others that the momenta $p_T^{rel}$ and distance $\\Delta R$ of all particles are correlated except for the kaon. Thus this particle provides information to the classification which seems to be independent of the information gained by the other particles. However, the correlation itself does not tell if this information is important for the classification. Thus, we compute the entanglement entropy $S$ of each feature, as reported in Fig.~\\ref{fig:Entr}. Here, we conclude that the features with the highest information content are the total charge and $p_T^{rel}$ and distance $\\Delta R$ of the kaon. Driven by these insights, we employ the QuIPS to discard half of the features by selecting the 8 most important ones:\ni.-iii. charge, momenta, and distance of the muon; iv.-vi. charge, momenta, and distance of the kaon; vii. charge of the pion; viii. total detected charge. To test the QuIPS performance, we compared it with an independent but more time-expensive analysis on the importance of the different particle types: the two approaches perfectly matched. \nFurther, we studied two new models, one composed of the 8 most important features proposed by the QuIPS, and, for comparison, another with the 8 discarded features. In Fig.~\\ref{fig:QR_TagPower} we show the tagging power for the different analyses with the complete 16-sites (model $M_{16}$), the best 8 ($B_8$), the worst 8 ($W_8$) and the muon tagging. Remarkably, we see that\nthe models $M_{16}$ and $B_8$ give \ncomparable results, while model $W_8$\nresults are even worse than the classical approach. These performances are confirmed by the prediction accuracy of the different models: While only less than $1\\%$ of accuracy is lost from $M_{16}$ to $B_8$, the accuracy of the model $W_8$ drastically drops to around $52\\%$ - that is, almost random predictions. Finally, in this particular run, the model $B_8$ has been trained $4.7$ times faster with respect to model $M_{16}$ and predicts $5.5$ times faster as well (The actual speed-up depends on the bond-dimension and other hyperparameters).\n\n\\begin{table*}[ht]\n \\centering\n \\begin{tabular}{c||c c c |c c c}\n & \\multicolumn{3}{c|}{Model $M_{16}$ (incl. 
all 16 features)} & \\multicolumn{3}{|c}{Model $B_8$ (best 8 features determined by QuIPS)} \\\\\n $\\bf \\chi$ & Prediction time & Accuracy & Free parameters & Prediction time & Accuracy & Free parameters \\\\\n \\hline\n $\\bf 200$ & $345\\,\\mu$s & $70.27~\\%~(63.45~\\%)$ & $51501$ & -&- & -\\\\\n $\\bf 100$ & $178\\,\\mu$s& $70.34~\\%~(63.47~\\%)$ & $25968$ &-&-&-\\\\\n $\\bf 50$ & $105\\,\\mu$s & $70.26~\\%~(63.47~\\%)$ & $13214$ &-&-&-\\\\\n $\\bf 20$ & $62\\,\\mu$s & $70.31~\\%~(63.46~\\%)$ & $5576$ &- &- &-\\\\\n $\\bf 16$ & - & - & - & $19\\,\\mu$s &$69.10~\\%~(62.78~\\%)$ &$264$\\\\\n $\\bf 10$ & $40\\,\\mu$s & $70.36~\\%~(63.44~\\%)$ & $1311$ & $19\\,\\mu$s &$69.01~\\%~(62.78~\\%)$ & $171$\\\\\n $\\bf 5$ & $37\\,\\mu$s & $69.84~\\%~(62.01~\\%)$ & $303$ & $19\\,\\mu$s &$69.05~\\%~(62.76~\\%)$ & $95$\n \\end{tabular}\n \\caption{ Prediction time, accuracy with (and without) applied cuts $\\Delta$ and number of free parameters of the TTN for different bond dimension $\\chi$ when we reduce the TTN model with QIANO, both for the complete 16 (left) and the QuIPS reduced 8 features (right). For the model $M_{16}$ with all 16 features (left), we trained the TTN with $\\chi=200$ and truncate from there while for the reduced model $B_8$ (right), the original bond-dimension was $\\chi=16$ (being the maximum $\\chi$ in this subspace).}\n \\label{tab:table_QIANO}\n\\end{table*}\n\nA critical point of interest in real-time ML applications is the prediction time. For example, in the LHCb Run 2 data-taking, the high-level software trigger takes a decision approximately every $1 \\ \\mu$s~\\cite{lhcb_perf} and shorter latencies are expected in future Runs. Consequently, with the aid of the QuIPS protocol, we can efficiently reduce the prediction computational time while maintaining a comparable high prediction power.\nHowever, with TTNs, we can undertake an even further step to reduce the prediction time by reducing the bond dimension $\\chi$ after the training procedure. Here, we introduce the \\textit{Quantum information Adaptive Network Optimization} (QIANO) performing this truncation by means of the well-established SVD for TN~\\cite{SimoneBook,Schollw_ck_2011,TTNA19} in a way ensuring to introduce the least infidelity possible. In other words, QIANO can adjust the bond dimension $\\chi$ to achieve a targeted prediction time while \nkeeping the \nprediction accuracy reasonably high. We stress that this can be done without relearning a new model, as would be the case with NN.\n\nFinally, we apply QuIPS and QIANO to reduce the information in the TTN in an optimal way for a targeted balance between prediction time and accuracy. In Fig.~\\ref{fig:Trunc_TagPower} we show the tagging power taking the original TTN and truncate it to different bond-dimensions $\\chi$. We can see, that even though we compress quite heavily, the overall tagging power does not change significantly. In fact, we only drop about $0.03\\%$ in the overall prediction accuracy, while at the same time improving the average prediction time from $345\\, \\mu$s to $37\\, \\mu$s (see Tab.~\\ref{tab:table_QIANO}). Applying the same idea to the model $B_8$ we can reduce the average prediction time effectively down to $19\\, \\mu$s on our machines, a performance compatible with current real-time classification rate. \n\n\\section*{Discussion}\\label{sec:conclusion}\n\nWe analysed an LHCb dataset for the classification of $b$- and $\\bar{b}$-jets with two different ML approaches, a DNN and a TTN. 
We showed that with both techniques we obtain a tagging power about one order of magnitude higher than with the classical muon tagging approach, which to date is the best published result for this classification problem. We pointed out that, even though both approaches result in similar tagging power, they treat the data very differently. In particular, the TTN effectively recognises the importance of the presence of the muon as a strong predictor for the jet classification. Here, we point out that we only used a conjugate gradient descent for the optimisation of our TTN classifier. Deploying more sophisticated optimisation procedures which have already been proven to work for Tensor Trains, such as stochastic gradient descent~\\cite{torchmps} or Riemannian optimisation~\\cite{alex2016exponential}, may further improve the performance (in both time and accuracy) in future applications.\n\nWe further explained the crucial benefits of the TTN approach over the DNNs, namely (i) the ability to efficiently measure correlations and the entanglement entropy, and (ii) the power of compressing the network while keeping a high amount of information (to some extent even lossless compression). We showed how the former quantum-inspired measurements help to set up a more efficient ML model: in particular, by introducing an information-based heuristic technique, we can establish the importance of single features based on the information captured within the trained TTN classifier only. Using this insight, we introduced the QuIPS, which can significantly reduce the model complexity by discarding the least important features while maintaining high prediction accuracy.\nThis selection of features based on their informational importance for the trained classifier is one major advantage of TNs when targeting an effective decrease of training and prediction time. Regarding the latter benefit of the TTN, we introduced the QIANO, which allows us to decrease the TTN prediction time by optimally decreasing its representative power based on information from the quantum entropy, introducing the least possible infidelity. In contrast to DNNs, with the QIANO we do not need to set up a new model and train it from scratch, but we can optimise the network post-learning adaptively to the specific conditions, e.g., the used CPU or the required prediction time of the final application.\n\nFinally, we showed that using QuIPS and QIANO we can effectively compress the trained TTN to target a given prediction time. \nIn particular, we decreased our prediction times from $345\\,\\mu s$ to $19\\,\\mu s$. We stress that, while we only used one CPU for the predictions, in future applications we might obtain a speed-up from $10$ to $100$ times by parallelising the tensor contractions on GPUs~\\cite{milsted2019tensornetwork}. Thus, we are confident that it is possible to reach a MHz prediction rate while still obtaining results significantly better than the classical muon tagging approach.\nHere, we also point out that, for using this algorithm on the LHCb real-time data acquisition system, it would be necessary to develop custom electronic cards such as FPGAs, or GPUs with an optimized architecture. Such solutions should be explored in the future.\n\nGiven the competitive performance of the presented TTN method in its application to high-energy physics, we envisage a multitude of possible future applications in high-energy experiments at CERN and in other fields of science. 
Future applications of our approach in the LHCb experiment may include the discrimination between $b$-jets, $c$-jets and light-flavour jets~\cite{lhcb_tagging}. A fast and efficient real-time identification of $b$- and $c$-jets can be a key ingredient for several studies in high energy physics, ranging from the search for the rare Higgs boson decay into two $c$-quarks, up to the search for new particles decaying into a pair of heavy-flavour quarks ($b\bar{b}$ or $c\bar{c}$).

\section*{Methods}

\begin{center}
\begin{figure}[ht]
\includegraphics[width=0.35\textwidth]{figures/B-tagging_diagram.jpg}
\caption{ \label{fig:Btagging}
Illustrative sketch of the LHCb experiment and the two possible tagging algorithms: a single-particle tagging algorithm, exploiting information coming from one single particle (the muon), and the inclusive tagging algorithm, which exploits the information of all the jet constituents.
}
\end{figure}
\end{center}

\paragraph*{\textbf{LHCb particle detection ---}} LHCb is fully instrumented in the phase-space region of proton-proton collisions defined by the pseudo-rapidity ($\eta$) range $[2,5]$, with $\eta$ defined as
\begin{equation}
\eta = -\log\left[\tan\left(\frac{\theta}{2}\right)\right] ~,
\end{equation}
where $\theta$ is the angle between the particle momentum and the beam axis (see Fig.~\ref{fig:Btagging}). The direction of a particle's momentum is fully identified by $\eta$ and by the azimuthal angle $\phi$, defined in the plane transverse to the beam axis. The projection of the momentum onto this plane is called the transverse momentum ($p_{\mathrm{T}}$). The energy of charged and neutral particles is measured by the electromagnetic and hadronic calorimeters. In the following, we work with physics natural units.

At LHCb, jets are reconstructed using a Particle Flow algorithm~\cite{aleph_perf} for the selection of charged and neutral particles and the anti-$k_t$ algorithm~\cite{anti_kt} for clustering. The jet momentum is defined as the sum of the momenta of the particles that form the jet, while the jet axis is defined as the direction of the jet momentum. Most of the particles that form the jet are contained in a cone of radius $\Delta R=\sqrt{(\Delta \eta)^2+(\Delta \phi)^2}=0.5$, where $\Delta \eta$ and $\Delta \phi$ are, respectively, the pseudo-rapidity difference and the azimuthal angle difference between the particle's momentum and the jet axis. For each particle inside the jet cone, the momentum relative to the jet axis ($p^{rel}_{\mathrm{T}}$) is defined as the projection of the particle momentum onto the plane transverse to the jet axis (a minimal numerical sketch of these quantities is given below).
\\

\paragraph*{\textbf{LHCb Dataset ---}}
Unlike other ML performance analyses, the dataset used in this paper has been prepared specifically for this LHCb classification problem; therefore, no baseline ML models or benchmarks exist for it.
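To make the jet-geometry quantities used above concrete, the following minimal sketch (in Python with NumPy, assuming Cartesian momentum three-vectors with the beam along the $z$ axis; it is an illustration, not code used in the analysis) computes $\eta$, $\phi$, $\Delta R$ and $p^{rel}_{\mathrm{T}}$:
\begin{verbatim}
import numpy as np

def eta_phi(p):
    # pseudo-rapidity and azimuthal angle of a momentum p = (px, py, pz)
    px, py, pz = p
    theta = np.arctan2(np.hypot(px, py), pz)   # angle to the beam (z) axis
    return -np.log(np.tan(theta / 2.0)), np.arctan2(py, px)

def delta_r(p, jet):
    # distance between a particle and the jet axis in the (eta, phi) plane
    eta_p, phi_p = eta_phi(p)
    eta_j, phi_j = eta_phi(jet)
    dphi = (phi_p - phi_j + np.pi) % (2.0 * np.pi) - np.pi  # wrap to [-pi, pi)
    return np.hypot(eta_p - eta_j, dphi)

def pt_rel(p, jet):
    # momentum component transverse to the jet axis
    p, jet = np.asarray(p, float), np.asarray(jet, float)
    axis = jet / np.linalg.norm(jet)
    return np.linalg.norm(p - np.dot(p, axis) * axis)
\end{verbatim}
In this convention, most jet constituents satisfy \texttt{delta\_r(p, jet) < 0.5}, consistent with the cone definition given above.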
In particle physics, features depend strongly on the detector considered (\emph{i.e.} different experiments may respond differently to the same physical object). For this reason, the training has been performed on a dataset that reproduces the LHCb experimental conditions, in order to obtain the optimal performance for this experiment.

The LHCb simulation datasets used for our analysis are produced with a Monte Carlo technique using the framework GAUSS~\cite{gauss}, which makes use of PYTHIA 8~\cite{pythia} to generate proton-proton interactions and jet fragmentation, and uses EvtGen~\cite{evtgen} to simulate $b$-hadron decays. The GEANT4 software~\cite{geant4_1,geant4_2} is used to simulate the detector response, and the signals are digitised and reconstructed using the LHCb analysis framework.

The dataset contains $b$- and $\bar{b}$-jets produced in proton-proton collisions at a centre-of-mass energy of $13\,$TeV~\cite{lhcb_open, lhcb_doi}. Pairs of $b$-jets and $\bar{b}$-jets are selected by requiring a jet $p_{\mathrm{T}}$ greater than $20\,$GeV and $\eta$ in the range $[2.2,4.2]$ for both jets.\\

\paragraph*{\textbf{Muon tagging ---}}
LHCb measured the $b \bar{b}$ forward-central asymmetry with the dataset collected in LHC Run I~\cite{lhcb_asy} using the muon tagging approach:
in this method, the muon with the highest momentum in the jet cone is selected, and its electric charge is used to infer the $b$-quark charge.
In fact, if this muon is produced in the original semi-leptonic decay of the $b$-hadron, its charge is fully correlated with the $b$-quark charge.
To date, the muon tagging method gives the best performance for the $b$- vs $\bar{b}$-jet discrimination.
Although this method can distinguish between $b$- and $\bar{b}$-quarks with good accuracy, its efficiency is low, as it is only applicable to jets in which a muon is found and it is intrinsically limited by the branching ratio of $b$-hadrons into semi-leptonic decays. Additionally, muon tagging may fail when the selected muon is produced not in the decay of the $b$-hadron but in other decay processes; in these cases, the muon charge may not be fully correlated with the $b$-quark charge.\\

\paragraph*{\textbf{Machine Learning approaches ---}}
We train the TTN and analyse the data with different bond dimensions $\chi$. The auxiliary dimension $\chi$ controls the number of free parameters within the variational TTN ansatz. While the TTN is able to capture more information from the training data with increasing bond dimension $\chi$, choosing $\chi$ too large may lead to overfitting and can thus worsen the results on the test set. For the DNN, we use an optimised network with three hidden layers of 96 nodes (see Supplementary Methods for details).

For each event, both methods output the probability $\mathcal{P}_{b}$ that a jet is generated by a $b$-quark rather than a $\bar{b}$-quark. This probability (\textit{i.e.} the confidence of the classifier) is normalised in the following way: for $\mathcal{P}_{b}>0.5$ ($\mathcal{P}_{b}<0.5$) a jet is classified as generated by a $b$-quark ($\bar{b}$-quark), with increasing confidence towards $\mathcal{P}_{b}=1$ ($\mathcal{P}_{b}=0$).
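Schematically, combining this normalisation with the exclusion window $\Delta$ introduced in the next paragraph, the tagging decision can be summarised as
\begin{equation}
\mathrm{tag}(\mathcal{P}_{b}) \;=\;
\left\{
\begin{array}{ll}
b, & \mathcal{P}_{b} > \frac{1}{2} + \frac{\Delta}{2},\\
\bar{b}, & \mathcal{P}_{b} < \frac{1}{2} - \frac{\Delta}{2},\\
\mbox{no prediction}, & \mbox{otherwise},
\end{array}
\right.
\end{equation}
so that, for instance, $\Delta^{\text{TTN}}=0.40$ corresponds to the cuts $\mathcal{P}_{b}>0.70$ and $\mathcal{P}_{b}<0.30$ quoted below.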
Therefore, a completely confident classifier returns a probability distribution peaked at $\mathcal{P}_{b}=1$ and $\mathcal{P}_{b}=0$ for jets classified as generated by a $b$- and a $\bar{b}$-quark, respectively.

We introduce a threshold $\Delta$, symmetric around the prediction confidence $\mathcal{P}_{b}=0.5$, within which we classify the event as unknown. We optimise the cut on the predictions of the classifiers (\textit{i.e.} their confidences) to maximise the tagging power for each method, based on the training samples. In the following analysis we find $\Delta^{\text{TTN}} = 0.40$ ($\Delta^{\text{DNN}} = 0.20$) for the TTN (DNN). Thereby, for the TTN (DNN) we predict a $b$-quark for confidences $\mathcal{P}_{b}>C^{\text{TTN}}=0.70$ ($\mathcal{P}_{b}>C^{\text{DNN}}=0.60$), a $\bar{b}$-quark for confidences $\mathcal{P}_{b}<0.30$ ($\mathcal{P}_{b}<0.40$), and make no prediction in the range in between (see Fig.~\ref{fig:probDNN} and Fig.~\ref{fig:probTTN}).\\

\subsection*{Data availability}
This paper is based on data obtained by the LHCb experiment, but it is analysed independently and has not been reviewed by the LHCb collaboration.
The data are available in the official LHCb open data repository~\cite{lhcb_open, lhcb_doi}.

\subsection*{Code availability}
The software code used for the analysis with the Deep Neural Network can be obtained freely by contacting gianelle@pd.infn.it; it may be used for any kind of private or commercial purpose, including modification and distribution, without any liabilities or warranties. The software code for the TTN analysis is currently not available for public use. For more information, please contact timo.felser@physik.uni-saarland.de.

\subsection*{Acknowledgments}
We are very grateful to Konstantin Schmitz for valuable comments and discussions on the Machine Learning comparison. We thank Miles Stoudenmire for fruitful discussions on the application of the Tensor Networks Machine Learning code.

This work is partially supported by the Italian PRIN 2017 and Fondazione CARIPARO, the Horizon 2020 research and innovation programme under grant agreement No 817482 (Quantum Flagship - PASQuanS) and the QuantERA projects QTFLAG and QuantHEP. We acknowledge computational resources provided by CINECA and Cloud Veneto.

This work is also partially supported by the German Federal Ministry for Economic Affairs and Energy (BMWi) and the European Social Fund (ESF) as part of the EXIST programme under the project \textit{Tensor Solutions}.

We acknowledge the LHCb Collaboration for the valuable help, and the Istituto Nazionale di Fisica Nucleare and the Department of Physics and Astronomy of the University of Padova for their support.

\subsection*{Author contributions}
Conceptualization (TF, DL, SM); Data Analysis (DZ, LS, TF, MT, AG); Funding Acquisition (DL, SM); Investigation (DZ, LS, TF, SM); Methodology (TF, SM); Tensor Network Software Development (TF using private resources); Validation (DZ, LS, TF, MT); Writing – original draft (TF, SM); Writing – review \& editing (all authors).

\subsection*{Competing interests}
The authors declare no competing interests.