\section{Introduction}
\label{sec:1}

Special functions of mathematical physics are usually defined either in the form of power series, or as solutions to certain differential equations, or via integral representations. Of course, for a given function, these three (and possibly other) forms coincide for all arguments and parameter values for which they exist. However, the validity domains of different representations can be unequal. Very often, the series representations of the special functions hold only on some restricted domains. In order to define the corresponding functions for other values of their arguments and parameters, analytical continuation of the series in the form of integral representations is usually employed.

For the special functions of Fractional Calculus (FC), the situation is very similar to the one described above. For instance, one of the most important FC special functions, the two-parameters Mittag-Leffler function, is usually defined in the form of a power series:
\begin{equation}
\label{ML}
E_{\alpha,\beta}(z) = \sum_{k=0}^{+\infty} \frac{z^k}{\Gamma(\alpha\, k + \beta)},\ \alpha >0,\ \beta,z\in \Com.
\end{equation}
Because the series is convergent for all $z\in \Com$, this definition can be used for all $z\in \Com$ without any analytical continuation. Still, the integral representations of the Mittag-Leffler function are very important, say, for the derivation of its asymptotic behavior (\cite{Dzh}) and for its numerical calculation (\cite{GLL}).
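Since the series \eqref{ML} converges for every $z\in \Com$, a direct truncated summation already gives a usable numerical evaluator for moderate $|z|$. A minimal sketch in Python (the function name and the truncation level \texttt{n\_terms} are choices of this illustration, not part of the paper):

```python
from math import gamma

def mittag_leffler(z, alpha, beta=1.0, n_terms=80):
    # Truncated defining series (ML): E_{alpha,beta}(z) = sum_k z^k / Gamma(alpha*k + beta).
    # For moderate |z| the terms decay superexponentially, so 80 terms are ample.
    return sum(z ** k / gamma(alpha * k + beta) for k in range(n_terms))

# example call: the two-parameters Mittag-Leffler function E_{0.8,1}(0.5)
val = mittag_leffler(0.5, 0.8)
```

The classical special cases $E_{1,1}(z)=e^z$ and $E_{2,1}(z)=\cosh\sqrt{z}$ serve as sanity checks of such an evaluator.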
For \n$0<\\alpha <2$ and $\\Re(\\beta)>0$, the following integral representations of the Mittag-Leffler function in terms of the integrals over the Hankel-type contours were presented in \\cite{Dzh}: \n$$\nE_{\\alpha,\\beta}(z) ={1 \\over 2\\pi\\alpha i}\\int_{\\gamma(\\epsilon;\\delta)}\n{e^{\\zeta^{1\/\\alpha}}\\zeta^{(1-\\beta)\/\\alpha} \\over \\zeta -z}\\, d\\zeta, \\ z\\in G^{(-)}(\\epsilon;\\delta),\n$$\n$$\nE_{\\alpha,\\beta}(z) = \\frac{1}{\\alpha}z^{(1-\\beta)\/\\alpha}e^{z^{1\/\\alpha}} + {1\\over 2\\pi\\alpha i}\\int_{\\gamma(\\epsilon;\\delta)}\n{e^{\\zeta^{1\/\\alpha}}\\zeta^{(1-\\beta)\/\\alpha} \\over \\zeta -z} d\\zeta, \\ z\\in G^{(+)}(\\epsilon;\\delta),\n$$\nwhere the integration contour $\\gamma(\\epsilon;\\delta) \\ (\\epsilon >0, \\ 0<\\delta\\le \\pi)$ with\nnon-decreasing $\\arg \\zeta$ consists of the following parts:\n\\par \\noindent\n1) the ray $\\arg \\zeta =-\\delta, \\ |\\zeta|\\ge \\epsilon$;\n\\par \\noindent\n2) the arc $-\\delta \\le \\arg \\zeta \\le \\delta$ of the circumference $|\\zeta|=\\epsilon$;\n\\par \\noindent\n3) the ray $\\arg \\zeta =\\delta, \\ |\\zeta|\\ge \\epsilon$.\n\\par \\noindent\nFor $0<\\delta< \\pi$, the domain $G^{(-)}(\\epsilon;\\delta)$ is to the left\n of the contour $\\gamma(\\epsilon;\\delta)$ and the domain $G^{(+)}(\\epsilon;\\delta)$ is to the right of this contour. If $\\delta\n=\\pi $, the contour $\\gamma(\\epsilon;\\delta)$ consists of the circumference $|\\zeta|=\\epsilon $ and of the cut\n$-\\infty <\\zeta \\le -\\epsilon$. In this case, the domain $G^{(-)}(\\epsilon;\\delta)$ is the circle $|\\zeta|<\n\\epsilon$ and $G^{(+)}(\\epsilon;\\alpha)=\\{\\zeta: |\\arg\\zeta| <\\pi, \\ |\\zeta|>\\epsilon \\}$.\n\nFor some parameters values, the Mittag-Leffler function can be also introduced in terms of solutions to the fractional differential equations with the Riemann-Liouville or Caputo fractional derivatives. 
For instance, for $0<\alpha \le 1$, the equation
\begin{equation}
\label{eqRL}
(D^\alpha_{0+} y)(t) = \lambda\, y(t)
\end{equation}
has the general solution (\cite{LucSri})
\begin{equation}
\label{eqRL_sol}
y(t) = C\, t^{\alpha -1} E_{\alpha, \alpha}(\lambda\, t^\alpha),\ C\in \R.
\end{equation}
In the equation \eqref{eqRL}, the Riemann-Liouville fractional derivative $D^\alpha_{0+}$ is defined by
\begin{equation}
\label{RLD}
(D^\alpha_{0+} \, f)(t) = \frac{d}{dt}(I^{1-\alpha}_{0+} f)(t),\ t>0,
\end{equation}
where $I^\alpha_{0+}$ is the Riemann-Liouville fractional integral of order $\alpha\ (\alpha >0)$:
\begin{equation}
\label{RLI}
\left(I^\alpha_{0+} f \right) (t)=\frac{1}{\Gamma (\alpha )}\int\limits_0^t(t -\tau)^{\alpha -1}f(\tau)\,d\tau,\ t>0.
\end{equation}
The general solution to the equation
\begin{equation}
\label{eqC}
(\,_*D^\alpha_{0+} y)(t) = \lambda\, y(t)
\end{equation}
with the Caputo fractional derivative
\begin{equation}
\label{CD}
(\,_*D^\alpha_{0+}\, f)(t) = (D^\alpha_{0+} \, f)(t) - f(0)\frac{t^{-\alpha}}{\Gamma(1-\alpha)},\ t>0
\end{equation}
has the form (\cite{LucGor})
\begin{equation}
\label{eqC_sol}
y(t) = C\, E_{\alpha, 1}(\lambda\, t^\alpha),\ C\in \R.
\end{equation}
As we see, the solutions to the fractional differential equations \eqref{eqRL} and \eqref{eqC} are expressed in terms of the Mittag-Leffler function. However, the arguments of this function are $\lambda\, t^\alpha$ and not just $\lambda\, t$. Thus, these solutions are represented in the form of power series with fractional rather than integer exponents. For more advanced properties and applications of the Mittag-Leffler type functions, see \cite{Dzh} and the recent book \cite{GKRM}.

In \cite{Luc21b}, single- and multi-term fractional differential equations with the general fractional derivatives of Caputo type were studied.
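The Caputo problem \eqref{eqC} with $y(0)=1$ is, by the well-known equivalence, the Volterra integral equation $y(t) = 1 + \lambda\,(I^\alpha_{0+}y)(t)$. Since $I^\alpha_{0+}$ maps $h_\beta(t) := t^{\beta-1}/\Gamma(\beta)$ to $h_{\alpha+\beta}$, the solution \eqref{eqC_sol} can be verified on truncated series by a pure coefficient shift; a small sketch (all numerical values below are arbitrary test choices of this illustration):

```python
from math import gamma

def h(beta, t):
    # power function h_beta(t) = t^(beta-1) / Gamma(beta)
    return t ** (beta - 1) / gamma(beta)

# Truncated series of the solution (eqC_sol) with C = 1:
# y = sum_{j=0..N} lam^j h_{j*alpha+1}
alpha, lam, N = 0.6, -0.8, 40
y = {j * alpha + 1.0: lam ** j for j in range(N + 1)}

# I^alpha maps h_beta to h_{alpha+beta}, so 1 + lam * I^alpha y is again
# a combination of the power functions h_beta:
rhs = {beta + alpha: lam * c for beta, c in y.items()}
rhs[1.0] = rhs.get(1.0, 0.0) + 1.0

t = 0.7
lhs_val = sum(c * h(b, t) for b, c in y.items())
rhs_val = sum(c * h(b, t) for b, c in rhs.items())
# the two values agree up to the single truncation term of order N+1
```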
By definition, their solutions belong to the class of the FC special functions (as functions represented in the form of solutions to fractional differential equations). Moreover, in \cite{Luc21b}, another representation of these new FC special functions was derived, namely, in the form of convolution series generated by the Sonine kernels.

The convolution series are a far-reaching generalization of the conventional power series and of the power series with fractional exponents, including the Mittag-Leffler functions \eqref{eqRL_sol} and \eqref{eqC_sol}. They represent a new type of FC special functions worth investigating. In \cite{Luc21d}, the convolution series were employed for the derivation of two different forms of a generalized convolution Taylor formula, i.e., a representation of a function as a convolution polynomial with a remainder in the form of a composition of the $n$-fold general fractional integral and the $n$-fold general sequential fractional derivative of the Riemann-Liouville and the Caputo types, respectively. In \cite{Luc21d}, the generalized Taylor series in the form of convolution series were also discussed. In this paper, we employ the convolution series for the derivation of analytical solutions to single- and multi-term fractional differential equations with the general fractional derivatives in the Riemann-Liouville sense. This type of fractional differential equation has not yet been considered in the FC literature.

The rest of the paper is organized as follows. In the second section, we introduce the general fractional derivatives of the Riemann-Liouville and Caputo types with the Sonine kernels from a special class of kernels and discuss some of their properties needed for the further discussions. In the third section, we first provide some results regarding the convolution series generated by the Sonine kernels.
Then the convolution series are applied for the derivation of analytical solutions to single- and multi-term fractional differential equations with the general fractional derivatives in the Riemann-Liouville sense. For a treatment of the single- and multi-term fractional differential equations with the general fractional derivatives in the Caputo sense, we refer the interested readers to \cite{Luc21b}.


\section{General Fractional Integrals and Derivatives}

The general fractional derivatives (GFDs) with the kernel $k$ in the Riemann-Liouville and in the Caputo sense, respectively, are defined as follows (\cite{Koch11,LucYam16,Koch19_1,LucYam20,Luc21a,Luc21c}):
\begin{equation}
\label{FDR-L}
(\D_{(k)}\, f) (t) = \frac{d}{dt}(k\, *\, f)(t)= \frac{d}{dt}\int_0^t k(t-\tau)f(\tau)\, d\tau,
\end{equation}
\begin{equation}
\label{FDC}
( _*\D_{(k)}\, f) (t) = (\D_{(k)}\, f) (t) - f(0)k(t),
\end{equation}
where by $*$ the Laplace convolution is denoted:
\begin{equation}
\label{2-2}
(f\, *\, g)(t) = \int_0^{t}\, f(t-\tau)g(\tau)\, d\tau.
\end{equation}

The Riemann-Liouville and the Caputo fractional derivatives of order $\alpha,\ 0< \alpha < 1$, defined by \eqref{RLD} and \eqref{CD}, respectively, are particular cases of the GFDs \eqref{FDR-L} and \eqref{FDC} with the kernel
\begin{equation}
\label{single}
k(t) = h_{1-\alpha}(t),\ 0 <\alpha <1,\ h_{\beta}(t) := \frac{t^{\beta -1}}{\Gamma(\beta)},\ \beta >0.
\end{equation}

The multi-term fractional derivatives and the fractional derivatives of the
distributed order are also particular cases of the GFDs \eqref{FDR-L} and \eqref{FDC} with the kernels
\begin{equation}
\label{multi}
k(t) = \sum_{k=1}^n a_k\, h_{1-\alpha_k}(t),
\ \ 0 < \alpha_1 <\dots < \alpha_n < 1,\ a_k \in \R,\ k=1,\dots,n,
\end{equation}
\begin{equation}
\label{distr}
k(t) = \int_0^1 h_{1-\alpha}(t)\,
d\\rho(\\alpha),\n\\end{equation}\nrespectively, where $\\rho$ is a Borel measure defined on the interval $[0,\\, 1]$.\n\nSeveral useful properties of the Riemann-Liouville fractional integral and the Riemann-Liouville and Caputo fractional derivatives are based on the formula\n\\begin{equation}\n\\label{2-9}\n(h_{\\alpha} \\, * \\, h_\\beta)(t) \\, = \\, h_{\\alpha+\\beta}(t),\\ \\alpha,\\beta >0,\\ t>0\n\\end{equation}\nthat immediately follows from the well-known representation of the Euler Beta-function in terms of the Gamma-function:\n$$\nB(\\alpha,\\beta) := \\int_0^1 (1-\\tau)^{\\alpha -1}\\, \\tau^{\\beta -1}\\, d\\tau \\, = \\, \\frac{\\Gamma(\\alpha)\\Gamma(\\beta)}{\\Gamma(\\alpha+\\beta)},\\ \\alpha,\\beta>0.\n$$ \nIn the formula \\eqref{2-9} and in what follows, the power function $h_{\\alpha}$ is defined as in \\eqref{single}. \n\nIn our discussions, we employ the integer order convolution powers that for a function $f=f(t),\\ t>0$ are defined by the expression\n\\begin{equation}\n\\label{I-6}\nf^{}(t):= \\begin{cases}\n1,& n=0,\\\\\nf(t), & n=1,\\\\\n(\\underbrace{f* \\ldots * f}_{n\\ \\mbox{times}})(t),& n=2,3,\\dots .\n\\end{cases}\n\\end{equation}\nFor the kernel $\\kappa(t) = h_\\alpha(t)$ of the Riemann-Liouville fractional integral, we apply the formula \\eqref{2-9} and arrive at the important representation\n\\begin{equation}\n\\label{2-13}\nh_{\\alpha}^{}(t) = h_{n\\alpha}(t),\\ n\\in \\N.\n\\end{equation}\nA well-known particular case of \\eqref{2-13} is the formula\n\\begin{equation}\n\\label{2-13-1}\n\\{1\\}^n(t) = h_{1}^n(t)= h_{n}(t)=\\frac{t^{n-1}}{\\Gamma(n)} = \\frac{t^{n-1}}{(n-1)!},\\ n\\in \\N,\n\\end{equation}\nwhere by $\\{1\\}$ we denoted a function that is identically equal to 1 for $t\\ge 1$. 
Now let us write down the formula \eqref{2-9} for $\beta = 1-\alpha,\ 0<\alpha <1$:
\begin{equation}
\label{3-1}
(h_{\alpha}\, * \, h_{1-\alpha})(t) = h_1(t) = \{1 \},\ 0<\alpha<1,\ t>0.
\end{equation}

In \cite{Abel1,Abel2}, Abel employed the relation \eqref{3-1} to derive an inversion formula for the operator that is nowadays referred to as the Caputo fractional derivative and obtained it in the form of the Riemann-Liouville fractional integral (the solution to the Abel model for the tautochrone problem).

In an attempt to extend the Abel solution method to more general integral equations of convolution type,
Sonine introduced in \cite{Son} the relation
\begin{equation}
\label{3-2}
(\kappa \, *\, k )(t) = \{1 \},\ t>0
\end{equation}
that is nowadays referred to as the Sonine condition. The functions that satisfy the Sonine condition are called Sonine kernels. For a Sonine kernel $\kappa$, a kernel $k$ that satisfies the Sonine condition \eqref{3-2} is called an associated kernel to $\kappa$. Of course, $\kappa$ is then an associated kernel to $k$. In what follows, we denote the set of the Sonine kernels by $\mathcal{S}$.
In \cite{Son}, Sonine introduced a class of Sonine kernels in the form
\begin{equation}
\label{3-3}
\kappa(t) = h_{\alpha}(t) \cdot \, \kappa_1(t),\ \kappa_1(t)=\sum_{k=0}^{+\infty}\, a_k t^k, \ a_0 \not = 0,\ 0<\alpha <1,
\end{equation}
\begin{equation}
\label{3-4}
k(t) = h_{1-\alpha}(t) \cdot k_1(t),\ k_1(t)=\sum_{k=0}^{+\infty}\, b_k t^k,
\end{equation}
where $\kappa_1=\kappa_1(t)$ and $k_1=k_1(t)$ are analytical functions and the coefficients $a_k,\ b_k,\ k\in \N_0$ satisfy the following triangular system of linear equations:
\begin{equation}
\label{3-5}
a_0b_0 = 1,\ \sum_{k=0}^n\Gamma(k+1-\alpha)\Gamma(\alpha+n-k)a_{n-k}b_k = 0,\ n\ge 1.
\end{equation}
An important example of the kernels from $\mathcal{S}$ in the form \eqref{3-3}, \eqref{3-4} was derived in \cite{Son} in terms of the Bessel function $J_{\nu}$ and the modified Bessel function $I_{\nu}$:
\begin{equation}
\label{Bess}
\kappa(t) = (\sqrt{t})^{\alpha-1}J_{\alpha-1}(2\sqrt{t}),\
k(t) = (\sqrt{t})^{-\alpha}I_{-\alpha}(2\sqrt{t}),\ 0<\alpha <1,
\end{equation}
where
$$
J_\nu (t) = \sum_{k=0}^{+\infty} \frac{(-1)^k(t/2)^{2k+\nu}}{k!\Gamma(k+\nu+1)},\
I_\nu (t) = \sum_{k=0}^{+\infty} \frac{(t/2)^{2k+\nu}}{k!\Gamma(k+\nu+1)}.
$$

For other examples of the Sonine kernels, we refer the readers to \cite{Luc21c,Koch11,Luc21a,Sam}.

In this paper, we deal with the general fractional integrals (GFIs) with the kernels $\kappa \in \mathcal{S}$
defined by the formula
\begin{equation}
\label{GFI}
(\I_{(\kappa)}\, f)(t) := (\kappa\, *\, f)(t) = \int_0^t \kappa(t-\tau)f(\tau)\, d\tau,\ t>0
\end{equation}
and with the GFDs with the associated Sonine kernels $k$ in the Riemann-Liouville and Caputo senses defined by \eqref{FDR-L} and \eqref{FDC}, respectively.
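For the Bessel pair \eqref{Bess}, the Sonine condition \eqref{3-2} can be checked on the series level: substituting the series for $J_\nu$ and $I_\nu$ gives $\kappa = \sum_k \frac{(-1)^k}{k!}\,h_{k+\alpha}$ and $k = \sum_m \frac{1}{m!}\,h_{m+1-\alpha}$, and convolving term by term with \eqref{2-9} shows that the coefficient of $h_{n+1}$ in $\kappa * k$ is $\sum_{k=0}^{n}\frac{(-1)^k}{k!(n-k)!} = \frac{(1-1)^n}{n!}$, independently of $\alpha$. A brief numerical confirmation of this coefficient computation (an illustration only):

```python
from math import factorial

def sonine_coeffs(n_max):
    # Coefficient of h_{n+1} in kappa * k for the Bessel pair (Bess):
    # kappa = sum_k (-1)^k / k! * h_{k+alpha},  k = sum_m 1/m! * h_{m+1-alpha},
    # and h_{k+alpha} * h_{m+1-alpha} = h_{k+m+1} by (2-9); alpha drops out.
    return [sum((-1) ** k / (factorial(k) * factorial(n - k)) for k in range(n + 1))
            for n in range(n_max + 1)]

c = sonine_coeffs(10)
# c[0] = 1 and c[n] = 0 for n >= 1, i.e. kappa * k = h_1 = {1}: the Sonine condition (3-2)
```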
In our discussions, we restrict ourselves to a class of Sonine kernels from the space $C_{-1,0}(0,+\infty)$ that is an important particular case of the following two-parameter family of spaces (\cite{Luc21b,Luc21a,Luc21c}):
\begin{equation}
\label{subspace}
 C_{\alpha,\beta}(0,+\infty) \, = \, \{f:\ f(t) = t^{p}f_1(t),\ t>0,\ \alpha < p < \beta,\ f_1\in C[0,+\infty)\}.
\end{equation}
By $C_{-1}(0,+\infty)$ we mean the space $C_{-1,+\infty}(0,+\infty)$.

The set of such Sonine kernels will be denoted by $\mathcal{L}_{1}$ (\cite{Luc21c}):
\begin{equation}
\label{Son}
(\kappa,\, k \in \mathcal{L}_{1} ) \ \Leftrightarrow \ (\kappa,\, k \in C_{-1,0}(0,+\infty))\wedge ((\kappa\, *\, k)(t) \, = \, \{1\}).
\end{equation}

In the rest of this section, we present some important results for the GFIs and the GFDs with the Sonine kernels from $\mathcal{L}_{1}$ on the space $C_{-1}(0,+\infty)$ and its sub-spaces.

The basic properties of the GFI \eqref{GFI} on the space $C_{-1}(0,+\infty)$ easily follow from the known properties of the Laplace convolution:
\begin{equation}
\label{GFI-map}
\I_{(\kappa)}:\, C_{-1}(0,+\infty)\, \rightarrow C_{-1}(0,+\infty),
\end{equation}
\begin{equation}
\label{GFI-com}
\I_{(\kappa_1)}\, \I_{(\kappa_2)} = \I_{(\kappa_2)}\, \I_{(\kappa_1)},\ \kappa_1,\, \kappa_2 \in \mathcal{L}_{1},
\end{equation}
\begin{equation}
\label{GFI-index}
\I_{(\kappa_1)}\, \I_{(\kappa_2)} = \I_{(\kappa_1*\kappa_2)},\ \kappa_1,\, \kappa_2 \in \mathcal{L}_{1}.
\end{equation}

For the functions $f\in C_{-1}^1(0,+\infty):=\{f:\ f^\prime\in C_{-1}(0,+\infty)\}$, the GFDs of the Riemann-Liouville type can be represented as follows (\cite{Luc21a}):
\begin{equation}
\label{GFDL-1}
(\D_{(k)}\, f) (t) = (k\, * \, f^\prime)(t) + f(0)k(t),\ t>0.
\end{equation}
Thus, for $f\in C_{-1}^1(0,+\infty)$, the GFD \eqref{FDC} of
the Caputo type takes the form
\begin{equation}
\label{GFDC_1}
( _*\D_{(k)}\, f) (t) = (k\, * \, f^\prime)(t),\ t>0.
\end{equation}

It is worth mentioning that in the FC publications, the Caputo fractional derivative \eqref{CD} is often defined as in the formula \eqref{GFDC_1}:
\begin{equation}
\label{CD-1}
(\,_*D^\alpha_{0+}\, f)(t) = (h_{1-\alpha}\, * \, f^\prime)(t) = (I^{1-\alpha}_{0+} f^\prime)(t),\ t>0.
\end{equation}

Now, following \cite{Luc21a,Luc21d}, we define the $n$-fold GFI and the $n$-fold sequential GFDs in the Riemann-Liouville and Caputo sense.

\begin{definition}[\cite{Luc21a}]
\label{d1}
Let $\kappa \in \mathcal{L}_{1}$. The $n$-fold GFI ($n \in \N$) is a composition of $n$ GFIs with the kernel $\kappa$:
\begin{equation}
\label{GFIn}
(\I_{(\kappa)}^{<n>}\, f)(t) := (\underbrace{\I_{(\kappa)} \ldots \I_{(\kappa)}}_{n\ \mbox{times}}\, f)(t),\ t>0.
\end{equation}
\end{definition}

It is worth mentioning that the index law \eqref{GFI-index} leads to a representation of the $n$-fold GFI \eqref{GFIn} in the form of the GFI with the kernel $\kappa^{<n>}$:
\begin{equation}
\label{GFIn-1}
(\I_{(\kappa)}^{<n>}\, f)(t) = (\kappa^{<n>}\, *\, f)(t) = (\I_{(\kappa^{<n>})}\, f)(t),\ t>0.
\end{equation}

The kernel $\kappa^{<n>},\ n\in \N$, belongs to the space $C_{-1}(0,+\infty)$, but it is not always a Sonine kernel.

\begin{definition}[\cite{Luc21d}]
\label{d2}
Let $\kappa \in \mathcal{L}_{1}$ and $k$ be its associated Sonine kernel.
The $n$-fold sequential GFDs in the Riemann-Liouville and in the Caputo sense, respectively, are defined as follows:
\begin{equation}
\label{GFDLn}
(\D_{(k)}^{<n>}\, f)(t) := (\underbrace{\D_{(k)} \ldots \D_{(k)}}_{n\ \mbox{times}}\, f)(t),\ t>0,
\end{equation}
\begin{equation}
\label{GFDLn-C}
(\,_*\D_{(k)}^{<n>}\, f)(t) := (\underbrace{\,_*\D_{(k)} \ldots \,_*\D_{(k)}}_{n\ \mbox{times}}\, f)(t),\ t>0.
\end{equation}
\end{definition}

It is worth mentioning that in \cite{Luc21b,Luc21a}, the $n$-fold GFDs ($n \in \N$) were defined in a different form:
\begin{equation}
\label{GFDLn-1}
(\D_{(k)}^n\, f)(t) := \frac{d^n}{dt^n} ( k^{<n>} * f)(t),\ t>0,
\end{equation}
\begin{equation}
\label{GFDLn-1_C}
(\,_*\D_{(k)}^n\, f)(t) := ( k^{<n>} * f^{(n)})(t),\ t>0.
\end{equation}

The $n$-fold sequential GFDs \eqref{GFDLn} and \eqref{GFDLn-C} are a far-reaching generalization of the Riemann-Liouville and the Caputo sequential fractional derivatives to the case of the Sonine kernels from $\mathcal{L}_{1}$.

Some important connections between the $n$-fold GFI \eqref{GFIn} and the $n$-fold sequential GFDs \eqref{GFDLn} and \eqref{GFDLn-C} in the Riemann-Liouville and Caputo senses are provided by the so-called first and second fundamental theorems of FC (\cite{Luc20}) formulated below.

\begin{theorem}[\cite{Luc21d}]
\label{t3-n}
Let $\kappa \in \mathcal{L}_{1}$ and $k$ be its associated Sonine kernel.
Then, the $n$-fold sequential GFD \eqref{GFDLn} in the Riemann-Liouville sense is a left inverse operator to the $n$-fold GFI \eqref{GFIn} on the space $C_{-1}(0,+\infty)$:
\begin{equation}
\label{FTLn}
(\D_{(k)}^{<n>}\, \I_{(\kappa)}^{<n>}\, f) (t) = f(t),\ f\in C_{-1}(0,+\infty),\ t>0,
\end{equation}
and the $n$-fold sequential GFD \eqref{GFDLn-C} in the Caputo sense is a left inverse operator to the $n$-fold GFI \eqref{GFIn} on the space $C_{-1,(k)}^n(0,+\infty)$:
\begin{equation}
\label{FTLn-C}
(\,_*\D_{(k)}^{<n>}\, \I_{(\kappa)}^{<n>}\, f) (t) = f(t),\ f\in C_{-1,(k)}^n(0,+\infty),\ t>0,
\end{equation}
where $C_{-1,(k)}^n(0,+\infty) := \{f:\ f(t)=(\I_{(k)}^{<n>}\, \phi)(t),\ \phi\in C_{-1}(0,+\infty)\}$.
\end{theorem}

\begin{theorem}[\cite{Luc21d}]
\label{tgcTfn}
Let $\kappa \in \mathcal{L}_{1}$ and $k$ be its associated Sonine kernel.

For a function $f\in C_{-1,(k)}^{(n)}(0,+\infty) = \{ f:\ (\D_{(k)}^{<j>}\, f)\in C_{-1}(0,+\infty),\ j=0,1,\dots,n\}$, the formula
\begin{equation}
\label{sFTLn}
(\I_{(\kappa)}^{<n>}\, \D_{(k)}^{<n>}\, f) (t) = f(t) - \sum_{j=0}^{n-1}\left( k\, * \, \D_{(k)}^{<j>}\, f\right)(0)\kappa^{<j+1>}(t) =
\end{equation}
$$
f(t) - \sum_{j=0}^{n-1}\left( \I_{(k)}\, \D_{(k)}^{<j>}\, f\right)(0)\kappa^{<j+1>}(t),\ t>0
$$
holds valid, where $\I_{(\kappa)}^{<n>}$ is the $n$-fold GFI \eqref{GFIn} and $\D_{(k)}^{<n>}$ is the $n$-fold sequential GFD \eqref{GFDLn} in the Riemann-Liouville sense.
For a function $f\in C_{-1}^{n}(0,+\infty):=\{f:\ f^{(n)}\in C_{-1}(0,+\infty)\}$, the formula
\begin{equation}
\label{sFTLn-C}
(\I_{(\kappa)}^{<n>}\,_*\D_{(k)}^{<n>}\, f) (t) = f(t) - f(0) - \sum_{j=1}^{n-1}\left(\,_*\D_{(k)}^{<j>}\, f\right)(0)\left( \{1\} \, *\, \kappa^{<j>}\right)(t)
\end{equation}
holds valid, where $\I_{(\kappa)}^{<n>}$ is the $n$-fold GFI \eqref{GFIn} and $\,_*\D_{(k)}^{<n>}$ is the $n$-fold sequential GFD \eqref{GFDLn-C}.
\end{theorem}

For the proofs of Theorems \ref{t3-n} and \ref{tgcTfn} and their particular cases, we refer the interested readers to \cite{Luc21d}.

\section{Solutions to the Fractional Differential Equations with the GFDs in the Riemann-Liouville Sense in Terms of the Convolution Series}

First, we introduce the convolution series and treat some of their properties needed for the further discussions.

For a Sonine kernel $\kappa \in \mathcal{L}_{1}$, a convolution series in the form
\begin{equation}
\label{conser}
\Sigma_\kappa(t) = \sum_{j=0}^{+\infty} a_j\, \kappa^{<j+1>}(t),\ a_j\in \R
\end{equation}
was introduced in \cite{Luc21c} for the analytical treatment of fractional differential equations with the $n$-fold GFDs of the Caputo type by means of an operational calculus developed for these GFDs. In \cite{Luc21d}, some of the results presented in \cite{Luc21c} were extended to convolution series in the form \eqref{conser} generated by any function $\kappa \in C_{-1}(0,+\infty)$ (that is not necessarily a Sonine kernel).

A very important question regarding the convergence of the convolution series \eqref{conser} was answered in \cite{Luc21b,Luc21d}.
\begin{theorem}[\cite{Luc21d}]
\label{t11}
Let a function $\kappa \in C_{-1}(0,+\infty)$ be represented in the form
\begin{equation}
\label{rep}
\kappa(t) = h_{p}(t)\kappa_1(t),\ t>0,\ p>0,\ \kappa_1\in C[0,+\infty)
\end{equation}
and the convergence radius of the power series
\begin{equation}
\label{ser}
\Sigma(z) = \sum^{+\infty }_{j=0}a_{j}\, z^j,\ a_{j}\in \Com,\ z\in \Com
\end{equation}
be non-zero. Then the convolution series
\eqref{conser}
is convergent for all $t>0$ and defines a function from the space $C_{-1}(0,+\infty)$.
Moreover, the series
\begin{equation}
\label{conser_p}
t^{1-\alpha}\, \Sigma_\kappa(t) = \sum^{+\infty }_{j=0}a_{j}\, t^{1-\alpha}\, \kappa^{<j+1>}(t),\ \ \alpha = \min\{p,\, 1\}
\end{equation}
is uniformly convergent on $[0,\, T]$ for any $T>0$.
\end{theorem}

In what follows, we always assume that the coefficients of the convolution series satisfy the condition that the convergence radius of the corresponding power series is non-zero, so that Theorem \ref{t11} is applicable to these convolution series.

As an example, let us consider the geometric series
\begin{equation}
\label{geom}
\Sigma(z) = \sum_{j=0}^{+\infty} \lambda^{j}z^{j},\ \lambda \in \Com,\ z\in \Com.
\end{equation}
For $\lambda \not =0$, the convergence radius $r$ of this series is equal to $1/|\lambda|$, and thus Theorem \ref{t11} ensures that the convolution series generated by a function $\kappa \in C_{-1}(0,+\infty)$ in the form
\begin{equation}
\label{l}
l_{\kappa,\lambda}(t) = \sum_{j=0}^{+\infty} \lambda^{j}\kappa^{<j+1>}(t),\ \lambda \in \Com
\end{equation}
is convergent for all $t>0$ and defines a function from the space $C_{-1}(0,+\infty)$.

The convolution series $l_{\kappa,\lambda}$ defined by \eqref{l} plays a very important role in the operational calculus for the GFD of Caputo type developed in \cite{Luc21b}.
It provides a far-reaching generalization of both the exponential function and the two-parameters Mittag-Leffler function in the form \eqref{eqRL_sol}.

Indeed, let us consider the convolution series \eqref{l} in the case of the kernel function $\kappa=\{ 1\}$. Due to the formula $\kappa^{<j+1>}(t) = \{ 1\}^{<j+1>}(t) = h_{j+1}(t)$ (see \eqref{2-13-1}), the convolution series \eqref{l} is reduced to the power series for the exponential function:
\begin{equation}
\label{l-Mic}
l_{\kappa,\lambda}(t) = \sum_{j=0}^{+\infty} \lambda^{j}h_{j+1}(t) =
\sum_{j=0}^{+\infty} \frac{(\lambda\, t)^j}{j!} = e^{\lambda\, t}.
\end{equation}

For the kernel $\kappa(t) = h_{\alpha}(t)$ of the Riemann-Liouville fractional integral, the formula $\kappa^{<j+1>}(t) = h_{\alpha}^{<j+1>}(t) = h_{(j+1)\alpha}(t)$ (see \eqref{2-13}) holds valid. Thus, the convolution series \eqref{l} takes the form
\begin{equation}
\label{l-Cap}
l_{\kappa,\lambda}(t) = \sum_{j=0}^{+\infty} \lambda^{j}h_{(j+1)\alpha}(t) =
t^{\alpha-1}\sum_{j=0}^{+\infty} \frac{\lambda^j\, t^{j\alpha}}{\Gamma(j\alpha+\alpha)} = t^{\alpha -1}E_{\alpha,\alpha}(\lambda\, t^{\alpha}),
\end{equation}
which is the same as the two-parameters Mittag-Leffler function in \eqref{eqRL_sol}.

For $\kappa \in \mathcal{L}_{1}$, another important convolution series was introduced in \cite{Luc21b} as follows:
\begin{equation}
\label{L}
L_{\kappa,\lambda}(t) = (k \, *\ l_{\kappa,\lambda})(t) = 1 + \left(\{ 1 \} * \sum_{j=1}^{+\infty} \lambda^{j}\kappa^{<j>}(\cdot)\right)(t),\ \lambda \in \Com,
\end{equation}
where $k$ is the Sonine kernel associated to the kernel $\kappa$.
It is easy to see that in the case $\kappa=\{ 1\}$, the convolution series \eqref{L} coincides with the exponential function:
\begin{equation}
\label{L-Mic}
L_{\kappa,\lambda}(t) = 1 + \left(\{ 1 \} * \sum_{j=1}^{+\infty} \lambda^{j}h_j(\cdot) \right)(t)=
1+ \sum_{j=1}^{+\infty} \lambda^{j}h_{j+1}(t) = e^{\lambda\, t}.
\end{equation}

In the case of the kernel $\kappa(t) = h_{\alpha}(t),\ t>0,\ 0<\alpha <1$, the convolution series $L_{\kappa,\lambda}$ is reduced to the two-parameters Mittag-Leffler function in \eqref{eqC_sol}:
\begin{equation}
\label{L-Cap-op}
L_{\kappa,\lambda}(t)\, = \, 1 + \left(\{ 1 \} * \sum_{j=1}^{+\infty} \lambda^{j}h_{j\alpha}(\cdot)\right)(t) =
1 + \sum_{j=1}^{+\infty} \lambda^{j}h_{j\alpha+1}(t) = E_{\alpha,1}(\lambda\, t^{\alpha}).
\end{equation}

Analytical solutions to the single- and multi-term fractional differential equations with the $n$-fold GFDs of the Caputo type were presented in \cite{Luc21b} in terms of the convolution series $l_{\kappa,\lambda}$ and $L_{\kappa,\lambda}$.
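For the kernel $\kappa = h_\alpha$, the closed forms \eqref{l-Cap} and \eqref{L-Cap-op} can be cross-checked numerically by comparing truncated versions of both sides; a small sketch (all numerical values are arbitrary test choices of this illustration):

```python
from math import gamma

def h(beta, t):
    # power function h_beta(t) = t^(beta-1) / Gamma(beta)
    return t ** (beta - 1) / gamma(beta)

def ml(z, alpha, beta, n_terms=100):
    # truncated two-parameters Mittag-Leffler series (ML)
    return sum(z ** j / gamma(alpha * j + beta) for j in range(n_terms))

alpha, lam, t = 0.4, -1.3, 0.9   # arbitrary test values
# (l-Cap): l_{kappa,lambda}(t) = sum_j lam^j h_{(j+1) alpha}(t)
l_val = sum(lam ** j * h((j + 1) * alpha, t) for j in range(100))
# (L-Cap-op): L_{kappa,lambda}(t) = 1 + sum_{j>=1} lam^j h_{j alpha + 1}(t)
L_val = 1.0 + sum(lam ** j * h(j * alpha + 1.0, t) for j in range(1, 100))
```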
In the rest of this section, we treat linear single- and multi-term fractional differential equations with the $n$-fold GFDs in the Riemann-Liouville sense.

We start with the following auxiliary result:

\begin{theorem}
\label{eqconv}
Two convolution series generated by the same Sonine kernel $\kappa \in \mathcal{L}_{1}$ coincide for all $t>0$, i.e.,
\begin{equation}
\label{ser12}
\sum_{j=0}^{+\infty} b_{j}\, \kappa^{<j+1>}(t) \equiv \sum_{j=0}^{+\infty} c_{j}\, \kappa^{<j+1>}(t),\ t>0,
\end{equation}
if and only if the corresponding coefficients of these series are equal:
\begin{equation}
\label{ser13}
b_j = c_j,\ j=0,1,2,\dots .
\end{equation}
\end{theorem}

\begin{proof}
If the corresponding coefficients of two convolution series generated by the same Sonine kernel $\kappa \in \mathcal{L}_{1}$ are equal, then we have just one series and the identity \eqref{ser12} evidently holds.

The idea of the proof of the second part of this theorem is the same as the one for the proof of the analogous calculus result for the power series: under the condition that the identity \eqref{ser12} holds, we first show that $b_0=c_0$ and then apply the same arguments to prove that $b_1=c_1$, $b_2=c_2$, etc.
According to Theorem \ref{t11}, the convolution series in the form \eqref{conser} is uniformly convergent on any interval $[\epsilon,\ T]$, and thus we can apply the GFI $\I_{(k)}$ to this series term by term:
$$
\left( \I_{(k)}\, \sum_{j=0}^{+\infty} a_{j}\, \kappa^{<j+1>}(\cdot )\right)(t) =
 \sum_{j=0}^{+\infty} \left( \I_{(k)}\, a_j\, \kappa^{<j+1>}(\cdot)\right)(t) =
 $$
 $$
\sum_{j=0}^{+\infty} a_j\, \left( k\, *\, \kappa^{<j+1>}\right)(t) =
a_0 + \sum_{j=1}^{+\infty} a_{j}\, \left(\{ 1\} \, *\, \kappa^{<j>}\right)(t) =
$$
$$
a_0 + \left( \{1\}\, *\, \sum_{j=1}^{+\infty} a_j\, \kappa^{<j>}(\cdot)\right)(t) = a_0 + (\{1\}\, *\, f_1)(t),
$$
where $f_1$ is the following convolution series:
\begin{equation}
\label{f1}
f_1(t) = \sum_{j=1}^{+\infty} a_j\, \kappa^{<j>}(t)=\sum_{j=0}^{+\infty} a_{j+1}\, \kappa^{<j+1>}(t).
\end{equation}
Summarizing the calculations from above, for the convolution series in the form \eqref{conser}, the formula
\begin{equation}
\label{Intcon}
\left( \I_{(k)}\, \sum_{j=0}^{+\infty} a_{j}\, \kappa^{<j+1>}(\cdot )\right)(t) = a_0 + \left( \{1\}\, *\, \sum_{j=0}^{+\infty} a_{j+1}\, \kappa^{<j+1>}(\cdot)\right)(t)
\end{equation}
holds valid.

Because the convergence radius of the power series $\Sigma_1(z) = \sum_{j=0}^{+\infty} a_{j+1}\, z^j$ is the same as the convergence radius of the power series $\Sigma(z) = \sum_{j=0}^{+\infty} a_{j}\, z^j$, Theorem \ref{t11} ensures the inclusion $f_1 \in C_{-1}(0,+\infty)$, where $f_1$ is defined by the formula \eqref{f1}.
As was shown in \cite{LucGor}, the definite integral of a function from $C_{-1}(0,+\infty)$ is a continuous function on the whole interval $[0,\, +\infty)$ that takes the value zero at the point zero:
\begin{equation}
\label{a0_2}
\left( \{1\}\, * \, f_1 \right) (t) = (I_{0+}^1\, f_1)(t) \in C[0,\ +\infty),\ \ (I_{0+}^1\, f_1)(0) = 0.
\end{equation}

Now we act with the GFI $\I_{(k)}$ on the equality \eqref{ser12} and apply the formula \eqref{Intcon} to get the relation
\begin{equation}
\label{ser12_1}
b_0 + \left( \{1\}\, *\, \sum_{j=0}^{+\infty} b_{j+1}\, \kappa^{<j+1>}(\cdot)\right)(t) \equiv c_0 + \left( \{1\}\, *\, \sum_{j=0}^{+\infty} c_{j+1}\, \kappa^{<j+1>}(\cdot)\right)(t),\ t>0.
\end{equation}
Substituting the point $t=0$ into the equality \eqref{ser12_1} and using the formula \eqref{a0_2}, we deduce that $b_0 = c_0$. Now we differentiate the equality \eqref{ser12_1} and get the following identity:
\begin{equation}
\label{ser12_2}
\sum_{j=0}^{+\infty} b_{j+1}\, \kappa^{<j+1>}(t) \equiv \sum_{j=0}^{+\infty} c_{j+1}\, \kappa^{<j+1>}(t),\ t>0.
\end{equation}
This identity has exactly the same structure as the identity \eqref{ser12} from Theorem \ref{eqconv}. Thus, we can apply the same arguments as above and derive the relation $b_1 = c_1$. Repeating the same reasoning again and again, we arrive at the formula \eqref{ser13} that we wanted to prove.
\end{proof}

Now we are ready to apply the method of convolution series for the derivation of solutions to the fractional differential equations with the GFDs, and we start with the fractional relaxation equation with the GFD of the Riemann-Liouville type:
\begin{equation}
\label{eq-1-1}
( \D_{(k)}\, y)(t) = \lambda y(t), \ \lambda \in \R,\ t>0.
\n\\end{equation}\n\nAs in the case of the power series, we look for a general solution to this equation in form of a convolution series generated by the Sonine kernel $\\kappa$ that is an associated kernel to the kernel $k$ of the GFD from the equation \\eqref{eq-1-1}:\n\\begin{equation}\n\\label{sol-1-1}\ny(t) = \\sum_{j=0}^{+\\infty} b_j\\, \\kappa^{}(t),\\ b_j\\in \\R.\n\\end{equation}\n\nTo proceed, let us first calculate the image of the convolution series \\eqref{sol-1-1} by action of the GFD $\\D_{(k)}$: \n$$\n( \\D_{(k)}\\, y)(t) = \\left( \\D_{(k)}\\, \\sum_{j=0}^{+\\infty} b_j\\, \\kappa^{}(\\cdot)\\right)(t) = \n\\frac{d}{dt}\\left( \\I_{(k)}\\, \\sum_{j=0}^{+\\infty} b_j\\, \\kappa^{}(\\cdot)\\right)(t).\n$$\n\nIn the proof of Theorem \\ref{eqconv} we already calculated the image of the convolution series \\eqref{sol-1-1} by action of the GFI $\\I_{(k)}$ (formula \\eqref{Intcon}). Applying this formula, we arrive at the representation \n\\begin{equation}\n\\label{term1}\n( \\D_{(k)}\\, y)(t) = \\frac{d}{dt}\\left( b_0 + \\left( \\{1\\}\\, *\\, \\sum_{j=0}^{+\\infty} b_{j+1}\\, \\kappa^{}\n(\\cdot ) \\right)(t)\\right) = \\sum_{j=0}^{+\\infty} b_{j+1}\\, \\kappa^{}(t).\n\\end{equation}\n\nIn the next step, we substitute the right-hand side of \\eqref{term1} into the equation \\eqref{eq-1-1} and get an equality of two convolution series generated by the same kernel $\\kappa$:\n$$\n\\sum_{j=0}^{+\\infty} b_{j+1}\\, \\kappa^{}(t) = \\sum_{j=0}^{+\\infty} \\lambda\\ b_{j}\\, \\kappa^{}(t),\\ t>0.\n$$\nApplication of Theorem \\ref{eqconv} to the above identity leads to the following relations for the coefficients of the convolution series \\eqref{sol-1-1}:\n\\begin{equation}\n\\label{coef1}\nb_{j+1} = \\lambda\\, b_j,\\ j=0,1,2,\\dots .\n\\end{equation}\nThe infinite system \\eqref{coef1} of linear equations can be easily solved step by step and we arrive at the explicit solution in the form\n\\begin{equation}\n\\label{coef2}\nb_{j} = b_0\\, \\lambda^j,\\ 
j=1,2,\\dots ,\n\\end{equation}\nwhere $b_0\\in \\R$ is an arbitrary constant. Summarizing the arguments presented above, we proved the following theorem:\n\n\\begin{theorem}\n\\label{t-eq1}\nThe general solution to the fractional relaxation equation \\eqref{eq-1-1} with the GFD \\eqref{FDR-L} in the Riemann-Liouville sense \ncan be represented as follows:\n\\begin{equation}\n\\label{eig}\ny(t) = \\sum_{j=0}^{+\\infty} b_0 \\, \\lambda^j\\, \\kappa^{}(t) = b_0\\, l_{\\kappa,\\lambda}(t),\\ b_0\\in \\R,\n\\end{equation}\nwhere $l_{\\kappa,\\lambda}$ is the convolution series \\eqref{l}.\n\\end{theorem}\n\n\n\\begin{remark}\n\\label{r1}\nThe constant $b_0$ in the general solution \\eqref{eig} to the equation \\eqref{eq-1-1} can be determined from a suitably posed initial condition. The form of this initial condition is prescribed by Theorem \\ref{tgcTfn} (see also the formula \\eqref{Intcon}). Indeed, setting $n=1$ in the relation \\eqref{sFTLn}, we get the following representation of the projector operator of the GFD \\eqref{FDR-L} in the Riemann-Liouville sense:\n\\begin{equation}\n\\label{proj1}\n(P\\, f)(t) = f(t) - (\\I_{(\\kappa)}\\, \\D_{(k)}\\, f) (t) = \\left(\\I_{(k)}\\, f\\right) (0)\\kappa(t),\\ f\\in C_{-1,(k)}^{(1)}(0,+\\infty).\n\\end{equation}\nThus, the initial-value problem \n\\begin{equation}\n\\label{eq-1-1-iv}\n\\begin{cases}\n( \\D_{(k)}\\, y)(t) = \\lambda y(t), \\ \\lambda \\in \\R,\\ t>0,\\\\\n\\left(\\I_{(k)}\\, y\\right) (0) = b_0\n\\end{cases}\n\\end{equation}\nhas a unique solution given by the formula \\eqref{eig}.\n\\end{remark}\n\n\nIn the case of the Sonine kernel $k(t) = h_{1-\\alpha}(t),\\ 0<\\alpha<1$, the equation \\eqref{eq-1-1} is reduced to the equation \\eqref{eqRL} with the Riemann-Liouville fractional derivative and its solution \\eqref{eig} is exactly the solution \\eqref{eqRL_sol} of the equation \\eqref{eqRL} in terms of the two-parameters Mittag-Leffler function (see the formula \\eqref{l-Cap}). 
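For numerical work, the Mittag-Leffler representation mentioned above can be evaluated directly from the defining power series \\eqref{ML}. A minimal Python sketch (the function name and the ad hoc truncation length are our own choices; the truncation is adequate only for moderate $|z|$, while for large $|z|$ the integral representations from the Introduction should be used):

```python
import math

def mittag_leffler(alpha, beta, z, n_terms=60):
    """Truncated defining series E_{alpha,beta}(z) = sum_k z**k / Gamma(alpha*k + beta).
    The terms decay super-geometrically, so a few dozen terms suffice for moderate |z|."""
    return sum(z**k / math.gamma(alpha * k + beta) for k in range(n_terms))

# elementary special cases as sanity checks:
# E_{1,1}(z) = exp(z) and E_{2,1}(z) = cosh(sqrt(z))
print(mittag_leffler(1.0, 1.0, 2.0), math.exp(2.0))
```

With this helper, the solution \\eqref{eig} in the Riemann-Liouville case reads $y(t) = b_0\\, t^{\\alpha-1} E_{\\alpha,\\alpha}(\\lambda\\, t^{\\alpha})$, i.e. `b0 * t**(alpha-1) * mittag_leffler(alpha, alpha, lam * t**alpha)`.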
The initial-value problem \\eqref{eq-1-1-iv} takes the well-known form\n\\begin{equation}\n\\label{eq-1-1-iv-RL}\n\\begin{cases}\n( D_{0+}^\\alpha\\, y)(t) = \\lambda y(t), \\ \\lambda \\in \\R,\\ t>0,\\\\\n\\left(I_{0+}^{1-\\alpha}\\, y\\right) (0) = b_0.\n\\end{cases}\n\\end{equation}\nIts unique solution is given by the formula $y(t) = b_0\\, t^{\\alpha -1} E_{\\alpha, \\alpha}(\\lambda\\, t^\\alpha)$. \n\nNow we proceed with the inhomogeneous equation of type \\eqref{eq-1-1}\n\\begin{equation}\n\\label{eq-1-2}\n( \\D_{(k)}\\, y)(t) = \\lambda y(t) + f(t), \\ \\lambda \\in \\R,\\ t>0, \n\\end{equation} \nwhere the source function $f$ is represented in form of a convolution series\n\\begin{equation}\n\\label{f}\nf(t) = \\sum_{j=0}^{+\\infty} a_j\\, \\kappa^{}(t),\\ a_j\\in \\R.\n\\end{equation} \nAgain, we look for solutions to the equation \\eqref{eq-1-2} in form of the convolution series \\eqref{sol-1-1}. Applying exactly the same reasoning as above, we arrive at the following infinite system of linear equations for the coefficients of the convolution series \\eqref{sol-1-1}:\n\\begin{equation}\n\\label{coef1-1}\nb_{j+1} = \\lambda\\, b_j + a_j,\\ j=0,1,2,\\dots .\n\\end{equation}\nThe explicit form of solutions to this system of equations is as follows:\n\\begin{equation}\n\\label{coef2-1}\nb_{j} = b_0\\, \\lambda^j\\, +\\, \\sum_{i=0}^{j-1}a_i\\, \\lambda^{j-i-1}, \\ j=1,2,\\dots,\n\\end{equation}\nwhere $b_0\\in \\R$ is an arbitrary constant. 
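The recursion \\eqref{coef1-1} and its closed-form solution \\eqref{coef2-1} are easy to cross-check numerically; the following Python sketch (illustrative only, with arbitrarily chosen data) compares the two:

```python
def coeffs_iterative(b0, lam, a):
    """Solve b_{j+1} = lam*b_j + a_j step by step, as in Eq. (coef1-1)."""
    b = [b0]
    for a_j in a:
        b.append(lam * b[-1] + a_j)
    return b

def coeff_closed_form(b0, lam, a, j):
    """Closed form b_j = b0*lam**j + sum_{i=0}^{j-1} a_i*lam**(j-i-1), Eq. (coef2-1)."""
    return b0 * lam**j + sum(a[i] * lam**(j - i - 1) for i in range(j))

a = [1.0, 2.0, 3.0, 4.0, 5.0]          # sample source-term coefficients
b = coeffs_iterative(2.0, 0.7, a)       # b0 = 2, lambda = 0.7
print([abs(b[j] - coeff_closed_form(2.0, 0.7, a, j)) for j in range(len(b))])
```

Both routes give the same coefficients, as expected from the derivation.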
Then the general solution to the equation \\eqref{eq-1-2} takes the form\n$$\ny(t) = b_0\\, \\kappa(t)+ \\sum_{j=1}^{+\\infty} \\left(b_0\\, \\lambda^j\\, +\\, \\sum_{i=0}^{j-1}a_i\\, \\lambda^{j-i-1}\\right) \\, \\kappa^{}(t) = \n$$\n$$\nb_0\\, \\sum_{j=0}^{+\\infty} \\lambda^j\\,\\kappa^{}(t) + \\sum_{j=1}^{+\\infty} \\sum_{i=0}^{j-1} a_i\\, \\lambda^{j-i-1}\\, \\kappa^{}(t).\n$$\nBy direct calculations, we verify that the second sum in the last formula can be written in a more compact form:\n$$\n\\sum_{j=1}^{+\\infty} \\sum_{i=0}^{j-1} a_i\\, \\lambda^{j-i-1}\\, \\kappa^{}(t) = \\sum_{i=0}^{+\\infty} a_i \\sum_{j=1}^{+\\infty} \\lambda^{j-1}\\, \\kappa^{}(t) = (f\\, *\\, l_{\\kappa,\\lambda})(t),\n$$\nwhere the convolution series $l_{\\kappa,\\lambda}$ is defined by \\eqref{l}. We thus proved the following result:\n\n\\begin{theorem}\n\\label{t-eq2}\nThe general solution to the inhomogeneous equation \\eqref{eq-1-2} has the form\n\\begin{equation}\n\\label{eig1}\ny(t) = b_0\\, l_{\\kappa,\\lambda}(t) +(f\\, *\\, l_{\\kappa,\\lambda})(t),\\ b_0\\in \\R,\n\\end{equation}\nwhere the convolution series $l_{\\kappa,\\lambda}$ is defined by \\eqref{l}. 
\n\nThe constant $b_0$ is uniquely determined by the initial condition \n\\begin{equation}\n\\label{ic1}\n\\left(\\I_{(k)}\\, y\\right) (0) = b_0.\n\\end{equation}\n\\end{theorem}\n\nApplying Theorem \\ref{t-eq2} to the case of the Riemann-Liouville fractional derivative (kernel $k(t) = h_{1-\\alpha}(t),\\ 0<\\alpha<1$), we obtain the well-known result (\\cite{LucSri}):\n\nThe unique solution to the initial-value problem \n$$\n\\begin{cases}\n( D_{0+}^\\alpha\\, y)(t) = \\lambda y(t)+f(t), \\ \\lambda \\in \\R,\\ t>0,\\\\\n\\left(I_{0+}^{1-\\alpha}\\, y\\right) (0) = b_0\n\\end{cases}\n$$\nis given by the formula\n$$\ny(t) = b_0\\, t^{\\alpha -1} E_{\\alpha, \\alpha}(\\lambda\\, t^\\alpha) + \\int_{0}^t \\tau^{\\alpha -1} E_{\\alpha, \\alpha}(\\lambda\\, \\tau^\\alpha)\\, f(t-\\tau)\\, d\\tau.\n$$\n\nLet us now consider a linear inhomogeneous multi-term fractional differential equation with the sequential GFDs \\eqref{GFDLn} of the Riemann-Liouville type and with the constant coefficients:\n\\begin{equation}\n\\label{eq-1-3}\n\\sum_{i=0}^n\\lambda_i(\\D_{(k)}^{}\\, y)(t) = f(t), \\ \\lambda_i \\in \\R,\\ i=0,1,\\dots,n,\\ \\lambda_n \\not = 0,\\ t>0, \n\\end{equation} \nwhere the source function $f$ is represented in form of the convolution series \\eqref{f}. \n\nAs in the case of the single-term equation \\eqref{eq-1-2}, we look for solutions to the multi-term equation \\eqref{eq-1-3} in form of the convolution series \\eqref{sol-1-1}. First we determine the images of the convolution series \\eqref{sol-1-1} by action of the sequential GFDs $\\D_{(k)}^{}$, $i=1,2,\\dots,n$. For $i=1$, the image is provided by the formula \\eqref{term1}. 
For $i=2,\\dots,n$, the formula \\eqref{term1} is applied iteratively and we arrive at the following result:\n\\begin{equation}\n\\label{termi}\n( \\D_{(k)}^{}\\, y)(t) = \\sum_{j=0}^{+\\infty} b_{j+i}\\, \\kappa^{}(t),\\ i=1,2,\\dots,n.\n\\end{equation}\nNow we substitute the convolution series \\eqref{sol-1-1}, its images by action of the sequential GFDs $\\D_{(k)}^{}$, $i=1,2,\\dots,n$ provided by the formula \\eqref{termi}, and the convolution series \\eqref{f} for the source function into the equation \\eqref{eq-1-3} and arrive at the following identity:\n$$\n \\sum_{i=0}^n\\lambda_i\\, \\left(\\sum_{j=0}^{+\\infty} b_{j+i}\\, \\kappa^{}(t)\\right) = \\sum_{j=0}^{+\\infty} a_{j}\\, \\kappa^{}(t),\\ t>0.\n$$\nApplication of Theorem \\ref{eqconv} to the above identity leads to the following infinite triangular system of linear equations for the coefficients of the convolution series \\eqref{sol-1-1}:\n\\begin{equation}\n\\label{coef1-3}\n\\begin{cases}\n\\lambda_0\\, b_0 +\\lambda_1\\, b_1 +\\dots + \\ \\lambda_n\\, b_n = a_0,\\\\\n\\lambda_0\\, b_1 +\\lambda_1\\, b_2 +\\dots + \\ \\lambda_n\\, b_{n+1} = a_1,\\\\\n\\dots \\\\\n\\lambda_0\\, b_n +\\lambda_1\\, b_{n+1} +\\dots + \\ \\lambda_n\\, b_{2n} = a_n,\\\\\n\\lambda_0\\, b_{n+1} +\\lambda_1\\, b_{n+2} +\\dots + \\ \\lambda_n\\, b_{2n+1} = a_{n+1}, \\\\\n\\dots\n\\end{cases}\n\\end{equation}\nIn this system, the first $n$ coefficients ($b_0,\\ b_1,\\dots,b_{n-1}$) can be chosen arbitrarily and all other coefficients are determined step by step as solutions to the infinite triangular system \\eqref{coef1-3} of linear equations:\n\\begin{equation}\n\\label{coef1-4}\nb_{n+l}=(a_l - \\lambda_0\\, b_l- \\dots - \\lambda_{n-1} b_{n+l-1})\/\\lambda_n,\\ l=0,1,2,\\dots\n\\end{equation}\n\nWe thus proved the following result:\n\n\\begin{theorem}\n\\label{t-eq3}\nThe general solution to the inhomogeneous multi-term fractional differential equation \\eqref{eq-1-3} can be represented as the convolution series 
\\eqref{sol-1-1}, where the first $n$ coefficients ($b_0,\\ b_1,\\dots,b_{n-1}$) are arbitrary real constants and the other coefficients are calculated according to the formula \\eqref{coef1-4}.\n\\end{theorem}\n\nThe constants $b_0,\\ b_1,\\dots,b_{n-1}$ in the general solution to the equation \\eqref{eq-1-3} presented in Theorem \\ref{t-eq3} can be determined based on suitably posed initial conditions. The form of these initial conditions is prescribed by Theorem \\ref{tgcTfn}. Indeed, for a function $f\\in C_{-1,(k)}^{(n)}(0,+\\infty)$, the formula \\eqref{sFTLn} can be rewritten as follows: \n\\begin{equation}\n\\label{projn}\n(P\\, f)(t) = f(t) - (\\I_{(\\kappa)}^{}\\, \\D_{(k)}^{}\\, f) (t) = \\sum_{j=0}^{n-1}\\left( \\I_{(k)}\\, \\D_{(k)}^{}\\, f\\right)(0)\\kappa^{}(t),\\ t>0,\n\\end{equation}\nwhere $P$ is the projector operator of the $n$-fold sequential GFD of the Riemann-Liouville type. Thus, to uniquely determine the constants $b_0,\\ b_1,\\dots,b_{n-1}$ in the general solution, the equation \\eqref{eq-1-3} has to be equipped with the initial conditions in the form\n\\begin{equation}\n\\label{icm}\n\\left( \\I_{(k)}\\, \\D_{(k)}^{}\\, y\\right)(0) = b_j,\\ j=0,1,\\dots,n-1.\n\\end{equation}\n\nFinally, we mention that the inhomogeneous multi-term fractional differential equation of type \\eqref{eq-1-3} with the sequential Riemann-Liouville fractional derivatives (the case of the kernel $k(t)=h_{1-\\alpha}(t)$ in the equation \\eqref{eq-1-3}) was treated in \\cite{LucSri,Luc99} by using operational calculus of the Mikusi\\'nski type for the Riemann-Liouville fractional derivative. \n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nIn comparison to the metallic and semiconducting materials, the thermoelectric effects are strongly suppressed in \nsuperconductors~\\cite{chandrasekhar2009,machon}. 
One of the reasons behind this is the interference of the temperature-dependent supercurrent with the thermal \ncurrent. The electron-hole symmetry present in the superconducting density \nof states (DOS) causes the oppositely directed electron and hole thermo-currents (generated by the thermal \ngradient) to cancel each other~\\cite{ozaeta2014}. \n\nRecently, superconducting hybrid structures, especially ferromagnet-superconductor (FS) junctions, have attracted a \nlot of research interest due to the dramatic boosts of thermoelectric effects in them~\\cite{chandrasekhar2009,\nkalenkov2012theory,ozaeta2014,machon,machon2,kolenda2,kolenda2016}. \nThe induced spin-triplet correlation within the superconductor and the asymmetric DOS profiles corresponding to the two spin \nsub-bands of the ferromagnet are the key features to be utilized in order to make such an FS junction suitable in the context \nof thermal transport. The asymmetry in the two spin sub-bands according to the polarization of the ferromagnet can manipulate \nthe Andreev reflection (AR) which occurs when an incoming electron reflects back as a hole from the FS interface resulting \nin a Cooper pair transmission into the superconductor within the sub-gapped regime~\\cite{andreev1}. Mixing of electron and \nhole-like excitations due to Andreev reflection may yield a large electron-hole asymmetry. This asymmetry allows the expression \nfor the thermoelectric coefficient to get rid of the $E\/T_F$ factor which is responsible for the low value of the thermoelectric \ncoefficient in the normal state of the material~\\cite{abrikosov1988fundamentals}.\n\nIn order to investigate the thermoelectric properties of a material or hybrid junction it is customary to derive the thermal \nconductance (TC) or the thermal current generated by the temperature gradient~\\cite{yokoyama2008heat,beiranvand2016spin,\nalomar2014thermoelectric}. 
Particularly, in the case of a superconducting hybrid junction, information about the superconducting gap \nparameter, like its magnitude, pairing symmetry etc., can be extracted from the behavior of the thermal conductance~\\cite{yokoyama2008heat}. \nFrom the application perspective, it is more favorable to compute the Seebeck coefficient (SkC), known as \nthermopower, which is the open circuit voltage developed across the junction due to the electron flow caused by the thermal \ngradient~\\cite{blundell2009concepts}. Enhancement of \\sbk can pave the way for promising applications like efficient \nheat-to-energy converters, which may be a step forward towards the fulfilment of the global demand for \nenergy~\\cite{alomar2014thermoelectric}. Over the last few decades, intense research has been carried out in search of newer and \nmore efficient energy harvesting devices that convert waste heat into electricity~\\cite{hwang2015large,snyder2008complex}. Usage \nof good thermoelectric materials is one of the ways of making those devices more efficient. Now, in order to determine how \ngood a thermoelectric a system is, one can calculate \\sbk as well as the dimensionless parameter called the figure of merit (FOM), \nwhich is naively the ratio of the power extracted from the device to the power we have to continually provide in order to \nmaintain the temperature difference~\\cite{sevinccli2010enhanced,zebarjadi2012perspectives}. It provides us with an estimate of \nthe efficiency of a mesoscopic thermoelectric device like a refrigerator, generator etc. based on thermoelectric \neffects~\\cite{giazotto2006opportunities}. Improving this thermoelectric \\fom with enhanced \\sbk so that the heat-electricity\nconversion is more efficient~\\cite{goldsmid3electronic,xu2014enhanced,liu2010enhancement,dragoman2007giant,ohta2007giant}\nis one of the greatest challenges in material science. 
Particularly, \nenhancement of the performance of any superconducting hybrid junction is much more challenging due to the above-mentioned reasons. \n\nThe prospects of the FS junction, as far as the thermoelectric properties are concerned, depend on new ingredients to manipulate \nthe spin-dependent particle-hole symmetry. The latter has been implemented using an external magnetic \nfield~\\cite{linder2016,ozaeta2014,bathen2016,kolenda2016,linder2007spin}, a quantum dot at the junction~\\cite{hwang}, a non-uniform \nexchange field~\\cite{alidoust2010phase}, phase modulation~\\cite{zhao2003phase}, magnetic impurities~\\cite{kalenkov2012theory} \nor internal properties like the inverse proximity effect~\\cite{peltonen2010} etc. Recently, Machon {\\it et al. }~have considered the simultaneous \neffects of spin splitting and spin-polarized transport~\\cite{machon} in order to obtain enhanced thermoelectric effects in the FS \nhybrid structure. In addition to these effects, the presence of a spin-orbit field~\\cite{linder2016,alomar2014thermoelectric} can \nplay a vital role in this context. \n\nThe study of the interfacial spin-orbit coupling effect on transport phenomena has become a topic of intense research interest during \nthe past few decades due to the possibility of spin manipulation~\\cite{datta1990electronic}. The interplay of the polarization and the interfacial \nfield may lead to a marked anisotropy in the junction electrical conductance~\\cite{hogl} and the Josephson current~\\cite{costa16}. \nAn interfacial spin-orbit field, especially the Rashba spin-orbit field~\\cite{rashba,rashba2} arising due to the confinement potential \nat a semiconductor or superconductor hybrid structure, can also be a key ingredient behind such spin \nmanipulation~\\cite{sun2015general}. \n\nThe aspect of thermal transport in FS hybrid junctions incorporating the role of interfacial spin-orbit interaction has not \nbeen studied in detail so far in the case of an ordinary ferromagnet. 
A few groups have performed their research in this direction \nin graphene~\\cite{alomar2014thermoelectric,beiranvand2016spin}. Motivated by these facts, in this article we study the \nthermoelectric properties of an FS structure with Rashba spin-orbit interaction (RSOI)~\\cite{rashba,review} at the interfacial \nlayer. We employ the Blonder-Tinkham-Klapwijk~\\cite{blonder} (BTK) formalism to compute the \\tc, \\sbk and \\fom therein. We \ninvestigate the role of RSOI on the thermoelectric properties. The interfacial scalar barrier at the FS interface reduces \n\\tc. On the other hand, the presence of RSOI at the FS interface can stimulate an enhancement of \\tc driven by the thermal \ngradient across the junction. In order to reveal the local thermoelectric response we investigate the behavior of the \nthermopower with the polarization, temperature as well as the barrier strength. \\sbk is enhanced when the polarization of \nthe ferromagnet increases towards the half-metallic limit. In presence of a finite barrier at the junction, it could be higher \neven for low polarization. The presence of RSOI at the interface may reduce or enhance it depending on the barrier strength, \ntemperature and the polarization. For higher barrier strength it always shows non-monotonic behavior with the temperature \nboth in presence and absence of RSOI. Similar non-monotonic behavior is obtained for \\fom with the rise of temperature and \nRashba strength. We predict that the FOM can exceed the value $1$ with higher polarization of the ferromagnet. The magnitude \ncan even be more than $5$ for higher strength of the barrier potential at the junction. This is also true in presence of weak \nRSOI. On the contrary, a strong Rashba interaction can reduce it irrespective of the polarization and temperature. \n\nThe remainder of the paper is organized as follows. In Sec.~\\ref{modeltheory} we describe our model and the theoretical background. 
\nWe discuss our results for thermal conductance, thermopower and figure of merit in Sec.~\\ref{result}. \nFinally, we summarise and conclude in Sec.~\\ref{conclu}.\n\n\\section{Model and theoretical background}\\label{modeltheory}\nWe consider a model comprising a ferromagnet F ($z>0$) and an $s$-wave superconductor S ($z<0$) in a hybrid structure as shown in \nFig.~\\ref{geometry}. The flat interface of the semi-infinite ferromagnet-superconductor (FS) junction located at $z=0$ is modelled \nby a $\\delta$-function potential with dimensionless barrier strength $Z$~\\cite{vzutic2000tunneling,blonder} and Rashba spin-orbit \ninteraction (RSOI) with strength $\\lambda_{rso}$. The FS junction can be described by the Bogoliubov-deGennes (BdG) \nequation~\\cite{de1999superconductivity} as,\n\\begin{eqnarray}\n\\begin{bmatrix}\n[\\hat{H}_e-\\mu]\\hat{\\sigma}_0 & \\hat{\\Delta}\\\\\n\\hat{\\Delta}^\\dagger & [\\mu-\\hat{H}_h]\\hat{\\sigma}_0 \n\\end{bmatrix} \\Psi(\\mathbf{r})=E \\Psi(\\mathbf{r})\n\\end{eqnarray}\nwhere the single-particle Hamiltonian for the electron is given by,\n\\beq\n\\hat{H_e}=-(\\hbar^2\/2m)\\nabla^2-(\\Delta_{xc}\/2) \\Theta(z) \\mathbf{m}.\\hat{\\mathbf{\\sigma}}+\\hat{H}_{int}.\n\\end{equation}\nSimilarly, for the hole the Hamiltonian reads $\\hat{H}_h=\\hat{\\sigma}_2 \\hat{H}_e^*\\hat{\\sigma}_2$. The excitations of the electrons \nwith effective mass $m$ are measured with respect to the chemical potential $\\mu$. We set $m=1$ and $\\mu=0$ throughout our \ncalculation. The interfacial barrier is described by the Hamiltonian \n$\\hat{H}_{int}=(V d\\hat{\\sigma_0}+ \\mathbf{\\omega}\\cdot\\hat{\\sigma})\\delta(z)$~\\cite{hogl} with the \n\\begin{figure}[htb]\n\\begin{center}\n\\includegraphics[width=6.5cm,height=4.6cm]{Fig1.pdf}\n\\caption{(Color online) Cartoon of the FS junction with the magnetization vector $\\mathbf{m}$. The dark red (dark grey) color is \nused to highlight the interfacial region of the FS hybrid structure. 
The F-region is kept at higher temperature ($T_F=T+\\delta T\/2$) \ncompared to the S-region ($T_S=T-\\delta T\/2$) in order to maintain a temperature gradient ($\\delta T=T_F-T_S$) across the junction.}\n\\label{geometry}\n\\end{center}\n\\end{figure}\nheight $V$, width $d$ and Rashba field $\\omega$ $=\\lambda[k_y,-k_x,0]$, $\\lambda$ being the effective strength of the RSOI. \nThe Stoner band model~\\cite{stoner1939collective}, characterized by exchange spin splitting $\\Delta_{xc}$, is employed to \ndescribe the F-region with the magnetization vector $\\mathbf{m}=[\\sin{\\theta}\\cos{\\phi},\\sin{\\theta}\\sin{\\phi},\\cos{\\theta}]$. \nHere $\\hat{\\sigma}$ is the Pauli spin matrix. Note that, the growth direction ($z$-axis) of the heterostructure is chosen \nalong [001] crystallographic axis~\\cite{matos2009angular}. The superconducting pairing potential is expressed as \n$\\hat{\\Delta}=\\Delta_s \\Theta(z)\\hat{\\sigma_0}$. We assume it to be a spatially independent positive constant following \nRef.~\\onlinecite{hogl}.\n\nDepending on the incoming electron energy there are four scattering processes possible at the FS interface. For electron \nwith a particular spin, say $\\sigma$, there can be normal reflection (NR), Andreev reflection (AR), tunneling as electron \nlike (TE) or hole like (TH) quasi-particles. In addition to these phenomena there may be spin-flip scattering processes \ndue to the interfacial spin-orbit field. Accordingly, we can have spin-flip counter parts of the above-mentioned four \nscattering processes namely, spin-flip NR (SNR), spin-flip AR (SAR), spin-flip TE (STE) and spin-flip TH \n(STH)~\\cite{de1995andreev,cao2004spin}. The above mentioned scattering processes are schematically displayed in \nFig.~\\ref{scattering} \n\\begin{figure}[htb]\n\\begin{center}\n\\includegraphics[width=7.7cm,height=4.5cm]{Fig2.pdf}\n\\caption{(Color online) Schematic diagram for the quantum mechanical scattering processes taking place at FS interface. 
The solid and \nhollow spheres are used to denote electron (e) and hole (h), respectively. The letters `R (L)' indicate the right (left)-moving \nparticles. Corresponding spin states are denoted by `up' ($\\up$) and `down' ($\\dn$), respectively.}\n\\label{scattering}\n\\end{center}\n\\end{figure}\nfor a right-moving electron with spin $\\up$ (eRup). Note that, due to the possibility of spin-flip scattering processes in presence of\nRSOI at the FS interface, spin-triplet~\\cite{eschrig2011spin} superconducting correlation ($\\up\\up$ or $\\dn\\dn$) can be induced \nin addition to the conventional singlet pairing ($\\up\\dn$ or $\\dn\\up$)~\\cite{hogl}.\n\nThe solution of the \\bdg equations for the F-region, describing electrons and holes with spin $\\sigma$, can be written as~\\cite{hogl},\n\\begin{eqnarray}\n\\Psi_{\\sigma}^F(z)&=&\\frac{1}{\\sqrt{k_{\\sigma}^e}} e^{i k_{\\sigma}^e z}\\psi_{\\sigma}^e + r_{\\sigma,\\sigma}^e\ne^{-ik^e_{\\sigma}z}\\psi_{\\sigma}^e+r_{\\sigma,\\sigma}^h e^{ik^h_{\\sigma}z}\\psi_{\\sigma}^h \\non \\\\\n&&+r_{\\sigma,-\\sigma}^e e^{-ik^e_{-\\sigma}z}\\psi_{-\\sigma}^e \n+r_{\\sigma,-\\sigma}^h e^{ik^h_{-\\sigma}z}\\psi_{-\\sigma}^h\n\\label{enormal}\n\\end{eqnarray}\nwhere {\\small $k^{e(h)}_{\\sigma}=\\sqrt{k_F^2-k_{||}^2+2m(\\sigma\\Delta_{xc}\/2+(-) E)\/\\hbar^2}$} is the electron (hole)-like wave vector. \n$\\sigma$ may be $\\pm 1$ depending on whether the spin is parallel or anti-parallel to the vector $\\mathbf{m}$. $k_F$ and $k_{||}$ are \nthe Fermi and in-plane wave vector, respectively. The spinors for the electron-like and hole-like quasi-particles are respectively \n$\\psi_{\\sigma}^e=[\\psi_{\\sigma},0]^T$ and $\\psi_{\\sigma}^h=[0,\\psi_{\\sigma}]^T$ with \n{\\small $\\psi_{\\sigma}^T=[\\sigma\\sqrt{1+\\sigma\\cos{\\theta}}e^{-i \\phi},\\sqrt{1-\\sigma\\cos{\\theta}}]\/\\sqrt{2}$}.\nHere, $r^{e(h)}_{\\sigma,\\sigma^{\\prime}}$ corresponds to the amplitude of normal (Andreev) reflection from the FS interface. 
$\\sigma$ and \n$\\sigma^{\\prime}$ are the spin states for the incident and reflected electron or hole depending on the spin-conserving or spin-flipping \nprocess. Similarly, inside the superconducting region the solutions for the electron-like and hole-like quasiparticles read~\\cite{hogl} \n\\begin{eqnarray}\n\\Psi_{\\sigma}^{S}=t^e_{\\sigma,\\sigma}\\left[\\begin{array}{c} u\n\\\\ 0 \\\\v\\\\0\\end{array} \\right]{e}^{iq_{e}z}+ t^e_{\\sigma,-\\sigma}\\left[\\begin{array}{c} 0\n\\\\ u \\\\0\\\\v\\end{array} \\right]{e}^{iq_{e}z}\\non \\\\\n+t^h_{\\sigma,\\sigma}\\left[\\begin{array}{c} v\n\\\\ 0 \\\\u\\\\0\\end{array} \\right]{e}^{-iq_{h}z}+ t^h_{\\sigma,-\\sigma}\\left[\\begin{array}{c} 0\n\\\\ v \\\\0\\\\u\\end{array} \\right]{e}^{-iq_{h}z},\n\\label{esupcon}\n\\end{eqnarray}\nwhere the $z$-components of the quasi-particle wave vectors can be expressed as, \n{\\small $q_{e(h)}=\\sqrt{q_F^2-k_{||}^2+(-)2 m \\sqrt{E^2-\\Delta^2}\/\\hbar^2}$} and the superconducting coherence factors are \n{\\small $u(v)=\\sqrt{[1\\pm\\sqrt{1-\\Delta^2\/E^{2}}]\/2}$}. We set the Fermi wave vector in both the F and S-regions to be the same \n\\ie $q_F=k_F$~\\cite{hogl}. Note that we have written only the $z$ component of the wave functions. In the $x-y$ plane the wave \nvector is conserved giving rise to the planar wave function as, $\\Psi_{\\sigma}(\\mathbf{r})=\\Psi_{\\sigma}(z) e^{i (k_{x} x+k_{y} y)}$ \nwhere $k_x$ and $k_y$ are the components of $k_{||}$. Here, $t_{\\sigma,\\sigma^{\\prime}}^{e(h)}$ denotes the amplitude of spin-conserving \nor spin-flipping transmitted electron (hole) like quasi-particles in the S region. 
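As a quick consistency check on this notation (an illustration of ours, not needed for the derivation), the spinor $\\psi_{\\sigma}$ defined above is the eigenspinor of $\\mathbf{m}\\cdot\\hat{\\sigma}$ with eigenvalue $\\sigma=\\pm 1$; a short Python sketch confirms this numerically:

```python
import cmath, math

def m_dot_sigma(theta, phi):
    """2x2 matrix m . sigma for m = (sin t cos p, sin t sin p, cos t)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, s * cmath.exp(-1j * phi)],
            [s * cmath.exp(1j * phi), -c]]

def spinor(sigma, theta, phi):
    """psi_sigma as in the text: [sigma*sqrt(1+sigma*cos)*e^{-i phi}, sqrt(1-sigma*cos)]/sqrt(2)."""
    c = math.cos(theta)
    return [sigma * math.sqrt(1.0 + sigma * c) * cmath.exp(-1j * phi) / math.sqrt(2.0),
            math.sqrt(1.0 - sigma * c) / math.sqrt(2.0)]

def apply(mat, vec):
    return [mat[0][0] * vec[0] + mat[0][1] * vec[1],
            mat[1][0] * vec[0] + mat[1][1] * vec[1]]

theta, phi = 1.1, 0.7                     # arbitrary magnetization direction
for sigma in (+1, -1):
    psi = spinor(sigma, theta, phi)
    mpsi = apply(m_dot_sigma(theta, phi), psi)
    print([abs(mpsi[i] - sigma * psi[i]) for i in range(2)])  # residuals near 0
```

The residuals vanish to machine precision for any $(\\theta,\\phi)$, and $\\psi_{\\sigma}$ is normalized to unity.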
We obtain the reflection and transmission amplitudes \nusing the boundary conditions as,\n\\begin{eqnarray}\n\\Psi^F_{\\sigma}|_{z=0^+}&=&\\Psi_{\\sigma}^S|_{z=0^-}~,\\non \\\\\n\\frac{\\hbar^2}{2m}\\left(\\frac{d}{dz}\\Psi_{\\sigma}^S|_{z=0^-}-\\frac{d}{dz}\\zeta\\Psi_{\\sigma}^F|_{z=0^+}\\right)\n&=&V d~\\zeta\\Psi_{\\sigma}^F|_{z=0^+} \\non \\\\\n+\\begin{bmatrix} \\mathbf{\\omega}.\\hat{\\sigma} & 0 \\\\\n 0 & -\\mathbf{\\omega}.\\hat{\\sigma}\n\\end{bmatrix}\n\\Psi_{\\sigma}^F|_{z=0^+}\n\\end{eqnarray}\nwhere $\\zeta=diag(1,1,-1,-1)$. We describe our results in terms of the dimensionless barrier strength $Z=\\frac{V \\ d \\ m}{\\hbar^2 k_F}$, \nRSOI strength $\\lambda_{rso}=\\frac{2m\\lambda}{\\hbar^2}$ and spin polarization $P=\\frac{\\Delta_{xc}}{2 E_F}$.\\\\\n\nIn presence of thermal gradient across the junction with no applied bias voltage, the electronic contribution to the thermal conductance,\n(see appendix~\\ref{appndx1} for details), in terms of the scattering processes is given by~\\cite{blonder,yokoyama2008heat},\n\\begin{eqnarray}\n\\kappa&=&\\sum\\limits_{\\sigma}\\int\\limits_0^{\\infty}\\int\\limits_{s}\\frac{d^2k_{||}}{2 \\pi k_F^2}\\left[1-R^h_{\\sigma}-R^e_{\\sigma} \\right] \\non \\\\\n&&~~~~~~~~~~~~~\\left[\\frac{(E-E_F)^2}{T^2\\cosh^2{(\\frac{E-E_F}{2k_B T})}}\\right]dE \n\\label{kappa_form}\n\\end{eqnarray}\nwhere the NR and AR probability can be defined as \n$ R_{\\sigma}^{e(h)}(E,k_{||})=Re[k_{\\sigma}^{e(h)}|r_{\\sigma}^{e(h)}|^2+k_{-\\sigma}^{e(h)}|r_{-\\sigma}^{e(h)}|^2]$ satisfying the current conservation. \nHere, the integration with respect to $k_{||}$ is performed over the entire plane $x-y$ of the interface. It is convenient to define a dimensionless \nwave vector $k=k_{||}\/k_F$ and compute the integration in terms of it while calculating the TC. The Boltzmann constant is\ndenoted by $k_B$. $T$ is scaled by $T_c$, \nwhich is the critical temperature of the conventional singlet superconductor. 
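The double integral \\eqref{kappa_form} is straightforward to evaluate by quadrature once the reflection probabilities are known. The sketch below is a toy setup: the reflection probabilities $R^{e}_{\\sigma}$ and $R^{h}_{\\sigma}$ enter as user-supplied placeholder functions and default to a transparent interface ($R^{e}=R^{h}=0$), which is not the full scattering solution of the model; it only illustrates the structure of the computation in the units used here ($k_B=1$):

```python
import math

def thermal_conductance(T, E_F=10.0, R_e=lambda E, k: 0.0, R_h=lambda E, k: 0.0,
                        n_E=400, n_k=100):
    """Midpoint quadrature for Eq. (kappa_form), summed over the two spin channels.
    R_e, R_h are placeholder reflection probabilities (zero = transparent
    interface); k = k_par/k_F runs over the propagating modes 0 <= k <= 1."""
    E_max = E_F + 20.0 * T          # the thermal weight is only a few T wide
    dE, dk = E_max / n_E, 1.0 / n_k
    kappa = 0.0
    for i in range(n_E):
        E = (i + 0.5) * dE
        w = (E - E_F)**2 / (T**2 * math.cosh((E - E_F) / (2.0 * T))**2)
        for j in range(n_k):
            k = (j + 0.5) * dk
            # d^2 k_par/(2 pi k_F^2) reduces to k dk after the angular integration
            kappa += 2.0 * (1.0 - R_h(E, k) - R_e(E, k)) * w * k * dk * dE
    return kappa

print(thermal_conductance(0.5), thermal_conductance(1.0))  # grows with T
```

Plugging in the actual BTK probabilities $R^{e(h)}_{\\sigma}(E,k_{||})$ obtained from the boundary conditions above would reproduce the $\\kappa$ discussed in the next section.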
\n\nWithin the linear response regime, we obtain the expression for the thermopower or \\sbk in unit of $k_B\/e$ as follows~\\cite{wysokinski2012thermoelectric},\n\\beq\nS =-\\left(\\frac{V}{\\delta T}\\right)_{I=0}=-\\frac{1}{T} \\frac{\\alpha}{G}\n\\label{sbk_exp}\n\\end{equation}\nwhere the thermoelectric coefficient $\\alpha$ and the electrical conductance $G$, in unit of $G_0$ ($e^2\/h$), are represented as,\n\\begin{eqnarray}\n\\alpha&=&\\sum\\limits_{\\sigma}\\int\\limits_0^{\\infty}\\int\\limits_{s}\\frac{d^2k_{||}}{2 \\pi k_F^2}\\left[1-R^h_{\\sigma}-R^e_{\\sigma} \\right] \\non \\\\\n&&~~~~~~~~~~~~~~~~~~\\left[\\frac{(E-E_F)}{T \\cosh^2(\\frac{E-E_F}{2 T})}\\right]dE \n\\label{alphaform}\n\\end{eqnarray}\nand \n\\begin{eqnarray}\nG&=&\\sum\\limits_{\\sigma}\\int\\limits_0^{\\infty}\\int\\limits_{s}\\frac{d^2k_{||}}{2 \\pi k_F^2}\\frac{\\left[1+R^h_{\\sigma}-R^e_{\\sigma}\\right]}\n{\\left[T\\cosh^2{(\\frac{E-E_F}{2 T})}\\right]}dE .\n\\label{intG}\n\\end{eqnarray}\nHere $\\alpha$ is expressed in unit of $G_0 k_B T\/e$ ($\\equiv k_B e T \/h$). In terms of SkC, electrical conductance and thermal \nconductance, the \\fom $zT$ is given by,\n\\begin{eqnarray}\nzT=\\frac{S^2 G T}{K}\n\\label{merit}\n\\end{eqnarray}\nwhere $K=\\kappa-\\frac{\\alpha^2}{TG}$ is expressed in unit of $\\kappa_0$ ($\\equiv k_B^2T\/h$). After applying the temperature difference \nbetween the two sides of the junction we obtain thermal current which essentially develops a voltage difference between them following \nthe Peltier effect. This causes a correction to the thermal conductance as well. 
We consider such a correction while defining the \\fom of \nthe system, as every material manifesting the Seebeck effect must exhibit the Peltier effect~\\cite{bardas1995peltier}.\n\n\\section{Results and Discussion}\\label{result}\nIn this section we present our numerical results for \\tc, \\sbk and \\fom of the ferromagnet-superconductor junction, both in absence \nand presence of interfacial RSOI, in three different sub-sections. We discuss our results in terms of the scattering processes \nthat occur at the interface of the FS hybrid structure and various parameters of the system. \n\\subsection{Thermal conductance}\nIn this subsection we discuss the effect of polarization and RSOI, both in absence and presence of a finite scalar barrier, on the behavior of \nthe \\tc throughout the temperature regime from low to high.\n\\subsubsection{Effect of polarization and barrier in absence of RSOI}\nIn Fig.~\\ref{cond_T} we show the variation of \\tc $\\kappa$ as a function of temperature $T\/T_c$ in absence of RSOI \nfor various polarization strengths $P$ of the ferromagnet, starting from the unpolarized ($P=0$) towards the half-metallic ($P=0.9$) \nlimit. Fig.~\\ref{cond_T}[(a), (b), (c) and (d)] correspond to the interfacial scalar barrier strengths $Z=0$, $1$, $2$ and $4$, respectively. \nFrom all the four figures it is apparent that \\tc increases exponentially with temperature. This behavior, being independent of the barrier \nstrength, is true for the conventional normal metal-superconductor junction ($P=0$) as well as for any finite value of polarization ($P\\ne 0$) \nof the ferromagnet. The fully developed gap of the superconductor is responsible for the exponential increase of the thermal \nconductance~\\cite{andreev,andreev1}. \nWith the increase of temperature, the superconducting gap decreases, resulting in a reduction of the AR amplitude and a simultaneous increase of \ntunneling as electron-like quasi-particles. 
Thermal resistance of the superconductor falls off exponentially as the temperature is increased~\\cite{andreev}. As a consequence, $\\kappa$ rises with temperature in an exponential fashion. \n\nHowever, the rate of increase of $\\kappa$ depends entirely on the polarization $P$ of the ferromagnet. Gradually tuning the polarization $P$ does not ensure any monotonic \n\\begin{figure}[htb]\n\\begin{center}\n\\includegraphics[width=8.7cm,height=7.5cm]{Fig3.pdf}\n\\caption{(Color online) The behavior of the thermal conductance ($\\kappa$), in units of $k_B^2\/h$, is shown as a function of temperature ($T\/T_c$) in the absence of RSOI ($\\lambda_{rso}=0$) for different values of the barrier strength ($Z$) and polarization ($P$) of the ferromagnet.}\n\\label{cond_T}\n\\end{center}\n\\end{figure}\nbehavior of the \\tc. It depends on both the temperature and the barrier strength. To illustrate this, we discuss the scenarios for different values of $Z$ one by one. When $Z=0$, the rate of increase of $\\kappa$ with polarization is very slow for a particular value of $T\/T_c$ (see Fig.~\\ref{cond_T}(a)). This is true as long as $T\/T_c<0.3$. On the other hand, for $T\/T_c>0.3$ the scenario becomes the opposite, \\ie~$\\kappa$ starts decreasing with the change of polarization for a fixed $T\/T_c$. There is a cross-over temperature $T_x$ ($\\sim 0.3$ in this case) separating the two different behaviors of the \\tc with polarization: for very low $T\/T_c$, \\ie~$T<T_x$, $\\kappa$ increases slowly with $P$, whereas the opposite behavior appears at higher temperatures ($T>T_x$). \n\nNow let us consider finite $Z$ at the interface. In the presence of the barrier, incident electrons encounter NR along with AR at the interface. NR reduces $\\kappa$. Hence, the higher the barrier strength $Z$, the lower $\\kappa$ is for a particular temperature and polarization. This is apparent by comparing all four panels of Fig.~\\ref{cond_T}. 
The cross-over temperature $T_x$, separating the two behaviors of the \\tc with $P$, decreases as soon as we consider finite $Z$, as depicted in Fig.~\\ref{cond_T}(b). It becomes $\\sim$ 0.2 for $Z=1$. However, $T_x$ shifts towards the high temperature limit with further increase of the barrier strength (see Fig.~\\ref{cond_T}(c) and (d)). For low $Z$, the \\tc does not change by an appreciable amount with the increase of $P$ because of NR. In the low temperature regime, enhancement of $P$ causes a reduction of AR. This does not ensure an increase of the \\tc, as tunneling decreases due to the reflection from the interface. As $Z$ is enhanced, NR starts dominating over the other processes. This not only causes a reduction of the \\tc but also shifts $T_x$ towards the high temperature regime. For example, $T_x \\sim 0.5$ (see Fig.~\\ref{cond_T}(c)) and $0.8$ (see Fig.~\\ref{cond_T}(d)) for $Z=2$ and $Z=4$, respectively. Moreover, for higher barrier strength there is a tendency of saturation of $\\kappa$ when $T\\rightarrow T_c$, irrespective of $P$, associated with a very small change of $\\kappa$ with $P$. For higher $Z$, AR, TE and TH are dominated by NR. Therefore, tuning the polarization does not cause appreciable variation in the tunneling or in AR, resulting in a very small change of the \\tc and leading towards its saturation.\n\nTherefore, the effect of the polarization of the ferromagnet cannot be uniquely determined. The behavior of the \\tc with polarization changes depending on the temperature and the barrier strength as well.\n\nSo far, we have not discussed the orientation of the magnetization. We present all of our results for $\\theta=\\pi\/2$ and $\\phi=0$. Very recently, H\\\"{o}gl {\\it et al.} have shown that the electronic conductance exhibits an anisotropy with the rotation of the magnetization $\\mathbf{m}$~\\cite{hogl}. However, in the case of thermal transport, contributions from all energy values are taken into consideration. 
\nTherefore, with the change of $\\mathbf{m}$ there is no appreciable change in the \\tc, as all contributions due to different orientations of $\\mathbf{m}$ are averaged out during the integration over energy (see Eq.(\\ref{kappa_form})). This fact remains unchanged at any temperature.\n\nThe GPs show high SSFRs ($>$10$^{-9}$yr$^{-1}$) compared to other SFGs of similar mass (C09). Recent studies show that galaxies with higher SSFRs or larger half-light radii for their stellar mass have systematically lower metallicities \\citep[e.g.,][]{Tremonti,Ellison08}. However, we have found even greater under-abundances in the GPs, which have high SSFRs but are extremely compact.\n\nSome models show that in highly concentrated (typical sizes $<$3~kpc) low-mass galaxies such as the GPs, galactic winds induced by their large SSFR are strong enough to escape from their weak potential wells, diminishing the observed global abundances \\citep[e.g.,][]{Finlator08}. In contrast, analytical models by \\citet{Dalcanton07} show that any star formation subsequent to the outflow will remove its effects on metallicity, unless star formation is inefficient. Smoothed particle hydrodynamics (SPH) plus $N$-body simulations have shown that low star formation efficiencies, regulated by supernova (SN) feedback, could be primarily responsible for the lower metallicities of low-mass galaxies and their overall trend in the MZR \\citep{Brooks07}. As shown by \\citet{Erb06} for SFGs at $z \\sim 2$, the constancy in the offset of the MZR suggests the presence of selective metal-rich gas loss driven by SN winds.\n\nInflow of metal-poor gas, either from the outskirts of the galaxy or beyond, can dilute metals in the galaxy centers, explaining an offset to lower abundances in both the MZR \\citep{Mihos96,Barnes96,Finlator08} and the N\/O -- O\/H diagram. 
In starburst galaxies, recent cold gas accretion can be driven by interactions, which eventually increase the gas surface density and consequently the star formation. As explained by \\citet{Ellison08}, the dilution of metals due to an inflow can be restored by the effects of star formation, depending on the dilution-to-dynamical timescale ratio. Since this ratio scales inversely with galaxy radius, galaxies with smaller radii, such as the GPs, may be expected to take a longer time to enhance their oxygen abundances to the values expected from the MZR. Along these lines, the position of the GPs in the $M_{\\star}-$N\/O relation and the offset observed in the N\/O -- O\/H plane may favor this scenario. Models by \\citet{Koppen05} have shown that the rapid decrease of the oxygen abundance during an episode of massive and rapid accretion of metal-poor gas is followed by a slower evolution which leads to the closed-box relation, thus forming a loop in the N\/O -- O\/H diagram.\n\nThe inflow hypothesis is also strongly suggested by the disturbed morphologies and close companions observed in spatially resolved {\\em HST} images for three GPs and most LBAs \\citep[C09;][]{Overzier08}. Recent results revealed that galaxies involved in interactions fall $\\ga$0.2 dex below the MZR of normal galaxies due to tidally induced large-scale gas inflow to the galaxies' central regions \\citep[e.g.,][]{Kewley06,Michel-Dansac08,Peeples09}. Several $N$-body\/SPH simulations have shown that major interactions drive starbursts and gas inflow from the outskirts of the H {\\sc i} progenitor disks \\citep[e.g.,][]{Mihos96,Rupke10}, also supporting this scenario.\n\nWe conclude by arguing that recent interaction-induced inflow of gas, possibly coupled with a selective metal-rich gas loss driven by SN winds, may explain our findings and the known galaxy properties. Nevertheless, further work is needed to constrain this possible scenario. 
In particular, future assessment of the H {\\sc i} gas properties and the star formation efficiency of GPs, as well as the behavior of effective yields with mass compared with models of chemical evolution, will shed new light on the relative importance of the above processes.\n\nOur results allow us to further constrain the evolutionary status of the GPs. These galaxies, as well as the low-mass LBAs and the [\\oiii]-selected galaxies of \\citet{Salzer09}, should be analyzed and compared in more detail to elucidate whether these rare objects share similar evolutionary pathways. Even if this is not the case, their properties suggest that these galaxies are snapshots of an extreme and short phase in their evolution. They therefore offer the opportunity to study in great detail the physical processes involved in the triggering and evolution of massive star formation and the chemical enrichment processes, under physical conditions approaching those in galaxies at higher redshifts.\n\nForthcoming analysis, based on high S\/N intermediate\/high-resolution spectroscopy and deep NIR imaging with the Gran Telescopio Canarias (GTC), will be used to better constrain the evolutionary status of the GPs.\n\n\\acknowledgments\n\nWe are very grateful to P. Papaderos, M. Moll\\'a and Y. Tsamis for very stimulating discussions and useful suggestions to improve this manuscript. We also thank the anonymous referee for a useful and prompt report. 
This work has been funded by grants AYA2007-67965-C03-02 and CSD2006-00070: First Science with the GTC ({\\url\n http:\/\/www.iac.es\/consolider-ingenio-gtc\/}) of the Consolider-Ingenio 2010 Program of the Spanish MICINN.\n\n{\\it Facility:} \\facility{Sloan}\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Analysis}\n\\label{sec:analysis}\n\n\\emph{How much ``easier'' for CG is $\\mathcal{W} - U_1 \\Lambda_1 U_1^{\\!\\top}$ in~(\\ref{eq:h-temporal}) than $\\mathcal{W}$ in~(\\ref{eq:temporal})?}\n\nIn solving a linear system $A \\mathbf{x} = \\mathbf{b}$ where $A$ is an $n \\times n$ symmetric positive-definite matrix, CG generates a unique sequence of iterates $\\mathbf{x}_i$, $i=0,1,\\dots$, such that the $A$-norm $\\norm{\\mathbf{e}_i}_A$ of the error $\\mathbf{e}_i \\mathrel{\\operatorname{:=}} \\mathbf{x}^* - \\mathbf{x}_i$ is minimized over the Krylov subspace $\\mathcal{K}_i \\mathrel{\\operatorname{:=}} \\gen{\\mathbf{b}, A \\mathbf{b}, \\dots, A^i \\mathbf{b}}$ at each iteration $i$, where $\\mathbf{x}^* \\mathrel{\\operatorname{:=}} A^{-1} \\mathbf{b}$ is the exact solution and the $A$-norm is defined by $\\norm{\\mathbf{x}}_A \\mathrel{\\operatorname{:=}} \\sqrt{\\mathbf{x}^{\\!\\top} A \\mathbf{x}}$ for $\\mathbf{x} \\in \\mathbb{R}^n$.\n\nA well-known result on the rate of convergence of CG that assumes minimal knowledge of the eigenvalues of $A$ states that the $A$-norm of the error at iteration $i$, relative to the $A$-norm of the initial error $\\mathbf{e}_0 \\mathrel{\\operatorname{:=}} \\mathbf{x}^*$, is upper-bounded by~\\cite{Tref97}\n\\begin{equation}\n\t\\frac{\\norm{\\mathbf{e}_i}_A}{\\norm{\\mathbf{e}_0}_A} \\le \\phi_i(A) \\mathrel{\\operatorname{:=}} 2 \\left(\n\t\t\t\\frac{\\sqrt{\\kappa(A)} - 1}{\\sqrt{\\kappa(A)} + 1}\n\t\t\\right)^i,\n\\label{eq:bound}\n\\end{equation}\nwhere $\\kappa(A) \\mathrel{\\operatorname{:=}} \\norm{A} \\norm{A^{-1}} = \\lambda_1(A) \/ \\lambda_n(A)$ is the $2$-norm 
\\emph{condition number} of $A$, and $\\lambda_j(A)$ for $j = 1,\\dots,n$ are the eigenvalues of $A$ in descending order.\n\nIn our case, matrix $\\mathcal{L}_\\alpha(\\mathcal{W})$ of linear system~(\\ref{eq:temporal}) has condition number\n\\begin{equation}\n\t\\kappa(\\mathcal{L}_\\alpha(\\mathcal{W}))\n\t\t= \\frac{1 - \\alpha \\lambda_n(\\mathcal{W})}{1 - \\alpha \\lambda_1(\\mathcal{W})}\n\t\t= \\frac{1 - \\alpha \\lambda_n(\\mathcal{W})}{1 - \\alpha}.\n\\label{eq:cond}\n\\end{equation}\nThe first equality holds because for each eigenvalue $\\lambda$ of $\\mathcal{W}$ there is a corresponding eigenvalue $(1 - \\alpha \\lambda) \/ (1 - \\alpha)$ of $\\mathcal{L}_\\alpha(\\mathcal{W})$, a mapping that is decreasing in $\\lambda$, so the largest eigenvalue of $\\mathcal{L}_\\alpha(\\mathcal{W})$ corresponds to the smallest eigenvalue of $\\mathcal{W}$ and vice versa. The second holds because $\\lambda_1(\\mathcal{W}) = 1$~\\cite{Chun97}.\n\nNow, let $\\mathcal{W}_r \\mathrel{\\operatorname{:=}} \\mathcal{W} - U_1 \\Lambda_1 U_1^{\\!\\top}$ for $r = 0,1,\\dots,n-1$, where $\\Lambda_1$, $U_1$ represent the largest $r$ eigenvalues and the corresponding eigenvectors of $\\mathcal{W}$, respectively. Clearly, $\\mathcal{W}_r$ has the same eigenvalues as $\\mathcal{W}$ except for the largest $r$, which are replaced by zero. That is, $\\lambda_1(\\mathcal{W}_r) = \\lambda_{r+1}(\\mathcal{W})$ and $\\lambda_n(\\mathcal{W}_r) = \\lambda_n(\\mathcal{W})$. The latter is due to the fact that $\\lambda_n(\\mathcal{W}) \\le -1 \/ (n-1) \\le 0$~\\cite{Chun97}, so the new zero eigenvalues do not affect the smallest one. Then,\n\\begin{equation}\n\t\\kappa(\\mathcal{L}_\\alpha(\\mathcal{W}_r))\n\t\t= \\frac{1 - \\alpha \\lambda_n(\\mathcal{W})}{1 - \\alpha \\lambda_{r+1}(\\mathcal{W})}\n\t\t\\le \\kappa(\\mathcal{L}_\\alpha(\\mathcal{W})).\n\\label{eq:cond-r}\n\\end{equation}\nThis last expression generalizes~(\\ref{eq:cond}). Indeed, $\\mathcal{W} = \\mathcal{W}_0$. 
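To make the effect of~(\ref{eq:cond-r}) concrete, the condition numbers and the bound~(\ref{eq:bound}) can be evaluated directly. A minimal sketch, not from the paper; the value of $\lambda_n(\mathcal{W})$ is assumed for illustration (the ratio of condition numbers is independent of it):

```python
import math

def cond(alpha, lam_top, lam_bot):
    """Condition number of L_alpha(W_r), given the largest remaining
    eigenvalue lam_top = lambda_{r+1}(W) and lam_bot = lambda_n(W)."""
    return (1 - alpha * lam_bot) / (1 - alpha * lam_top)

def phi(i, k):
    """CG relative-error bound after i iterations at condition number k."""
    s = math.sqrt(k)
    return 2 * ((s - 1) / (s + 1)) ** i

alpha, lam_n = 0.99, -0.5          # lam_n: assumed smallest eigenvalue of W
k_full = cond(alpha, 1.0, lam_n)   # r = 0, since lambda_1(W) = 1
k_defl = cond(alpha, 0.7, lam_n)   # after removing all eigenvalues above 0.7

# The ratio equals (1 - alpha) / (1 - 0.7*alpha): the (1 - alpha*lam_n)
# factors cancel, so it does not depend on the assumed lam_n.
print(round(k_defl / k_full, 4))           # 0.0326
print(phi(10, k_defl) < phi(10, k_full))   # True: smaller bound at equal i
```

The printed ratio matches the value quoted in the text for $\alpha = 0.99$ and $\lambda_{r+1}(\mathcal{W}) = 0.7$.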
Then, our hybrid spectral-temporal filtering involves CG on $\\mathcal{L}_\\alpha(\\mathcal{W}_r)$ for $r \\ge 0$, compared to the baseline temporal filtering for $r = 0$. The inequality in~(\\ref{eq:cond-r}) is due to the fact that $|\\lambda_j(\\mathcal{W})| \\le 1$ for $j = 1,\\dots,n$~\\cite{Chun97}. Removing the largest $r$ eigenvalues of $\\mathcal{W}$ clearly improves (decreases) the condition number of $\\mathcal{L}_\\alpha(\\mathcal{W}_r)$ relative to $\\mathcal{L}_\\alpha(\\mathcal{W})$. The improvement is dramatic given that $\\alpha$ is close to $1$ in practice. For $\\alpha = 0.99$ and $\\lambda_{r+1}(\\mathcal{W}) = 0.7$ for instance, $\\kappa(\\mathcal{L}_\\alpha(\\mathcal{W}_r)) \/ \\kappa(\\mathcal{L}_\\alpha(\\mathcal{W})) = 0.0326$.\n\n\\begin{figure}[b!]\n\\vspace{-6pt}\n\\begin{tabular}{cc}\n\\begin{tikzpicture}\n\\begin{axis}[\n\tgrid=both,\n\twidth=.45\\textwidth,\n\theight=.4\\textwidth,\n\tenlargelimits=false,\n\txlabel={order $j$},\n\tylabel={eigenvalue $\\lambda_j(\\mathcal{W})$},\n]\n\\addplot[blue] table{figs\/rate\/eig.txt};\n\\addplot[red] coordinates {(300,-1) (300,1)};\n\\end{axis}\n\\end{tikzpicture}\n&\n\\begin{tikzpicture}\n\\begin{axis}[\n\twidth=.55\\textwidth,\n\theight=.4\\textwidth,\n\tenlargelimits=false,\n\txlabel={rank $r$ (space)},\n\tylabel={iteration $i$ (time)},\n]\n\\addplot[\n\tcontour prepared={\n\t\tlabels over line,\n\t\tlabel distance=70pt,\n\t\tcontour label style={font=\\tiny},\n\t},\n\tcontour prepared format=matlab,\n] table{figs\/rate\/contour.txt};\n\\end{axis}\n\\end{tikzpicture}\n\\\\\n(a) & (b)\n\\end{tabular}\n\\vspace{-6pt}\n\\caption{(a) In descending order, eigenvalues of adjacency matrix $\\mathcal{W}$ of Oxford5k dataset of $n = 5,063$ images with global GeM features by ResNet101\nand $k = 50$ neighbors per point (see section~\\ref{sec:exp}). Eigenvalues on the left of vertical red line at $j=300$ are the largest $300$ ones, candidate for removal. 
(b) Contour plot of upper bound $\\phi_i(\\mathcal{L}_\\alpha(\\mathcal{W}_r))$ of CG's relative error as a function of rank $r$ and iteration $i$ for $\\alpha = 0.99$, illustrating the space($r$)-time($i$) trade-off for constant relative error.}\n\\label{fig:analysis}\n\\end{figure}\n\nMore generally, given the eigenvalues $\\lambda_{r+1}(\\mathcal{W})$ and $\\lambda_n(\\mathcal{W})$, the improvement can be estimated by measuring the upper bound $\\phi_i(\\mathcal{L}_\\alpha(\\mathcal{W}_r))$ for different $i$ and $r$. A concrete example is shown in Figure~\\ref{fig:analysis}, where we measure the eigenvalues of the adjacency matrix $\\mathcal{W}$ of a real dataset, remove the largest $r$ for $0 \\le r \\le 300$ and plot the upper bound $\\phi_i(\\mathcal{L}_\\alpha(\\mathcal{W}_r))$ of the relative error as a function of rank $r$ and iteration $i$, as given by~(\\ref{eq:bound}) and~(\\ref{eq:cond-r}). Clearly, as more eigenvalues are removed, fewer CG iterations are needed to achieve the same relative error; the approximation error represented by the temporal term decreases and, at the same time, the linear system becomes easier to solve. Of course, iterations become more expensive as $r$ increases; precise timings are given in section~\\ref{sec:exp}.\n\n\n\\section{Conclusions} \\label{sec:conclusions}\n\nIn this work we have tested the two most successful manifold ranking methods, temporal filtering~\\cite{ITA+17} and spectral filtering~\\cite{IAT+18}, on the very challenging new benchmark of the Oxford and Paris datasets~\\cite{RIT+18}. It is the first time that such methods are evaluated at the scale of one million images. \\emph{Spectral filtering}, with both its FSR and FSRw variants, fails at this scale, despite the significant space required for the additional vector embeddings. It is possible that a higher rank would work, but it would not be practical in terms of space.
In terms of query time, \\emph{temporal filtering} is only practical with its truncated variant at this scale. It performs well, but the query time is still high.\n\nOur new \\emph{hybrid filtering} method makes it possible, for the first time, to strike a reasonable balance between the two extremes. Without truncation, it outperforms temporal filtering while being significantly faster, and its memory overhead is one order of magnitude less than that of spectral filtering. Unlike spectral filtering, it is possible to extremely sparsify the dataset embeddings with only a negligible drop in performance. This, together with its very low rank, makes our hybrid method even faster than spectral filtering, despite being iterative. More importantly, while previous methods were long known in other fields before being applied to image retrieval, to our knowledge our hybrid method is novel and can apply, \\emph{e.g.}, to any field where graph signal processing applies, and beyond. Our theoretical analysis shows exactly why our method works and quantifies its space-time-accuracy trade-off using simple ideas from numerical linear algebra.\n\n\\subsection{Derivation}\n\\label{sec:deriv}\n\nWe begin with the eigenvalue decomposition $\\mathcal{W} = U \\Lambda U^{\\!\\top}$, which we partition as $\\Lambda = \\operatorname{diag}(\\Lambda_1, \\Lambda_2)$ and $U = (U_1 \\ U_2)$. Matrices $\\Lambda_1$ and $\\Lambda_2$ are diagonal $r \\times r$ and $(n-r) \\times (n-r)$, respectively. Matrices $U_1$ and $U_2$ are $n \\times r$ and $n \\times (n-r)$, respectively, and have the following orthogonality properties, all due to the orthogonality of $U$ itself:\n\\begin{equation}\n\tU_1^{\\!\\top} U_1 = I_r, \\quad U_2^{\\!\\top} U_2 = I_{n-r}, \\quad\n\tU_1^{\\!\\top} U_2 = \\mathbf{O}, \\quad U_1 U_1^{\\!\\top} + U_2 U_2^{\\!\\top} = I_n.\n\\label{eq:ortho}\n\\end{equation}\nThen, $\\mathcal{W}$ is decomposed as\n\\begin{equation}\n\t\\mathcal{W} = U_1 \\Lambda_1 U_1^{\\!\\top} + U_2 \\Lambda_2 U_2^{\\!\\top}.\n\\label{eq:w-decomp}\n\\end{equation}\nSimilarly, $h_\\alpha(\\mathcal{W})$ is decomposed as\n\\begin{align}\n\th_\\alpha(\\mathcal{W})\n\t\t& = U h_\\alpha(\\Lambda) U^{\\!\\top}\n\t\t\t\\label{eq:h-decomp-1} \\\\\n\t\t& = U_1 h_\\alpha(\\Lambda_1) U_1^{\\!\\top} + U_2 h_\\alpha(\\Lambda_2) U_2^{\\!\\top},\n\t\t\t\\label{eq:h-decomp-2}\n\\end{align}\nwhich is due to the fact that the diagonal matrix $h_\\alpha(\\Lambda)$ is obtained element-wise, hence decomposed as $h_\\alpha(\\Lambda) = \\operatorname{diag}(h_\\alpha(\\Lambda_1), h_\\alpha(\\Lambda_2))$. 
Here the first term is exactly the low-rank approximation that is used by spectral filtering, and the second is the approximation error\n\\begin{align}\n\te_\\alpha(\\mathcal{W})\n\t\t& \\mathrel{\\operatorname{:=}} U_2 h_\\alpha(\\Lambda_2) U_2^{\\!\\top}\n\t\t\t\\label{eq:error-1} \\\\\n\t\t& = (1 - \\alpha) \\left(\n\t\t\t\tU_2 (I_{n-r} - \\alpha \\Lambda_2)^{-1} U_2^{\\!\\top} + U_1 U_1^{\\!\\top} - U_1 U_1^{\\!\\top}\n\t\t\t\\right)\n\t\t\t\\label{eq:error-2} \\\\\n\t\t& = (1 - \\alpha) \\left(\n\t\t\t\t\\left(U_2 (I_{n-r} - \\alpha \\Lambda_2) U_2^{\\!\\top} + U_1 U_1^{\\!\\top} \\right)^{-1}\n\t\t\t\t- U_1 U_1^{\\!\\top}\n\t\t\t\\right)\n\t\t\t\\label{eq:error-3} \\\\\n\t\t& = (1 - \\alpha) \\left(\n\t\t\t\t\\left( I_n - \\alpha U_2 \\Lambda_2 U_2^{\\!\\top} \\right)^{-1} - U_1 U_1^{\\!\\top}\n\t\t\t\\right)\n\t\t\t\\label{eq:error-4} \\\\\n\t\t& = h_\\alpha(U_2 \\Lambda_2 U_2^{\\!\\top}) - (1 - \\alpha) U_1 U_1^{\\!\\top}.\n\t\t\t\\label{eq:error-5}\n\\end{align}\nWe have used the definition~(\\ref{eq:transfer}) of $h_\\alpha$ in~(\\ref{eq:error-2}) and~(\\ref{eq:error-5}). Equation~(\\ref{eq:error-4}) is due to the orthogonality properties~(\\ref{eq:ortho}). 
Equation~(\\ref{eq:error-3}) follows from the fact that for any invertible matrices $A$, $B$ of conformable sizes,\n\\begin{equation}\n\t\\left( U_1 A U_1^{\\!\\top} + U_2 B U_2^{\\!\\top} \\right)^{-1} = U_1 A^{-1} U_1^{\\!\\top} + U_2 B^{-1} U_2^{\\!\\top},\n\\label{eq:inversion}\n\\end{equation}\nwhich can be verified by direct multiplication, and is also due to orthogonality.\n\nNow, combining~(\\ref{eq:h-decomp-2}),~(\\ref{eq:error-5}) and~(\\ref{eq:w-decomp}), we have proved the following.\n\n\\begin{theorem}\n\tAssuming the definition~(\\ref{eq:transfer}) of transfer function $h_\\alpha$ and the eigenvalue decomposition~(\\ref{eq:w-decomp}) of the symmetrically normalized adjacency matrix $\\mathcal{W}$, $h_\\alpha(\\mathcal{W})$ is decomposed as\n\t%\n\t\\begin{equation}\n\t\th_\\alpha(\\mathcal{W}) = U_1 g_\\alpha(\\Lambda_1) U_1^{\\!\\top} + h_\\alpha(\\mathcal{W} - U_1 \\Lambda_1 U_1^{\\!\\top}),\n\t\\label{eq:main}\n\t\\end{equation}\n\t%\n\twhere\n\t%\n\t\\begin{equation}\n\t\tg_\\alpha(A)\n\t\t\t\\mathrel{\\operatorname{:=}} h_\\alpha(A) - h_\\alpha(\\mathbf{O})\n\t\t\t= (1 - \\alpha) \\left( (I_n - \\alpha A)^{-1} - I_n \\right)\n\t\\label{eq:aux}\n\t\\end{equation}\n\t%\n\tfor $n \\times n$ real symmetric matrix $A$. For $x \\in [-1,1]$ in particular, $g_\\alpha(x) \\mathrel{\\operatorname{:=}} h_\\alpha(x) - h_\\alpha(0) = (1 - \\alpha) \\alpha x \/ (1 - \\alpha x)$.\n\\end{theorem}\n\nObserve that $\\Lambda_2, U_2$ do not appear in~(\\ref{eq:main}) and indeed it is only the largest $r$ eigenvalues $\\Lambda_1$ and corresponding eigenvectors $U_1$ of $\\mathcal{W}$ that we need to compute. The above derivation is generalized from $h_\\alpha$ to a much larger class of functions in appendix~\\ref{sec:deriv2}.\n\n\\section{General derivation}\n\\label{sec:deriv2}\n\nThe derivation of our algorithm in section~\\ref{sec:deriv} applies only to the particular function (filter) $h_\\alpha$~(\\ref{eq:transfer}). Here, as in~\\cite{IAT+18}, we generalize to a much larger class of functions, that is, any function $h$ that has a series expansion\n\\begin{equation}\n\th(A) = \\sum_{i=0}^\\infty c_i A^i.\n\\label{eq:series}\n\\end{equation}\nWe begin with the same eigenvalue decomposition~(\\ref{eq:w-decomp}) of $\\mathcal{W}$ and, assuming that $h(\\mathcal{W})$ converges absolutely, its corresponding decomposition\n\\begin{equation}\n\th(\\mathcal{W}) = U_1 h(\\Lambda_1) U_1^{\\!\\top} + U_2 h(\\Lambda_2) U_2^{\\!\\top},\n\\label{eq:2-h-decomp}\n\\end{equation}\nsimilar to~(\\ref{eq:h-decomp-2}), where $U_1$, $U_2$ have the same orthogonality properties~(\\ref{eq:ortho}).\n\nAgain, the first term is exactly the low-rank approximation that is used by spectral filtering, and the second is the approximation error\n\\begin{align}\n\te_\\alpha(\\mathcal{W})\n\t\t& \\mathrel{\\operatorname{:=}} U_2 h(\\Lambda_2) U_2^{\\!\\top}\n\t\t\t\\label{eq:2-error-1} \\\\\n\t\t& = \\sum_{i=0}^\\infty c_i U_2 \\Lambda_2^i U_2^{\\!\\top}\n\t\t\t\\label{eq:2-error-2} \\\\\n\t\t& = \\sum_{i=0}^\\infty c_i \\left( U_2 \\Lambda_2 U_2^{\\!\\top} \\right)^i - c_0 U_1 U_1^{\\!\\top}\n\t\t\t\\label{eq:2-error-3} \\\\\n\t\t& = h(U_2 \\Lambda_2 U_2^{\\!\\top}) - h(0) U_1 
U_1^{\\!\\top}.\n\t\t\t\\label{eq:2-error-4}\n\\end{align}\nAgain, we have used the series expansion~(\\ref{eq:series}) of $h$ in~(\\ref{eq:2-error-2}) and~(\\ref{eq:2-error-4}). Now, equation~(\\ref{eq:2-error-3}) is due to the fact that\n\\begin{equation}\n\t(U_2 \\Lambda_2 U_2^{\\!\\top})^i = U_2 \\Lambda_2^i U_2^{\\!\\top}\n\\label{eq:term-pos}\n\\end{equation}\nfor $i \\ge 1$, as can be verified by induction, while for $i = 0$,\n\\begin{equation}\n\tU_2 \\Lambda_2^0 U_2^{\\!\\top} = U_2 U_2^{\\!\\top} = I_n - U_1 U_1^{\\!\\top} = (U_2 \\Lambda_2 U_2^{\\!\\top})^0 - U_1 U_1^{\\!\\top}.\n\\label{eq:term-zero}\n\\end{equation}\nIn both~(\\ref{eq:term-pos}) and~(\\ref{eq:term-zero}) we have used the orthogonality properties~(\\ref{eq:ortho}).\n\nFinally, combining~(\\ref{eq:2-h-decomp}),~(\\ref{eq:2-error-4}) and~(\\ref{eq:w-decomp}), we have proved the following.\n\n\\begin{theorem}\n\tAssuming the series expansion~(\\ref{eq:series}) of transfer function $h$ and the eigenvalue decomposition~(\\ref{eq:w-decomp}) of the symmetrically normalized adjacency matrix $\\mathcal{W}$, and given that $h(\\mathcal{W})$ converges absolutely, it is decomposed as\n\t%\n\t\\begin{equation}\n\t\th(\\mathcal{W}) = U_1 g(\\Lambda_1) U_1^{\\!\\top} + h(\\mathcal{W} - U_1 \\Lambda_1 U_1^{\\!\\top}),\n\t\\label{eq:2-main}\n\t\\end{equation}\n\t%\n\twhere\n\t%\n\t\\begin{equation}\n\t\tg(A) \\mathrel{\\operatorname{:=}} h(A) - h(\\mathbf{O})\n\t\\label{eq:2-aux}\n\t\\end{equation}\n\t%\n\tfor $n \\times n$ real symmetric matrix $A$. 
For $h = h_\\alpha$ and for $x \\in [-1,1]$ in particular, $g_\\alpha(x) \\mathrel{\\operatorname{:=}} h_\\alpha(x) - h_\\alpha(0) = (1 - \\alpha) \\alpha x \/ (1 - \\alpha x)$.\n\\end{theorem}\n\nThis general derivation explains where the definition of the function $g$ in~(\\ref{eq:aux}), corresponding to our treatment of $h_\\alpha$ in section~\\ref{sec:deriv}, comes from: it is the special case of the general definition~(\\ref{eq:2-aux}) for $h = h_\\alpha$.\n\\section{Experiments}\n\\label{sec:exp}\n\nIn this section we evaluate our hybrid method on popular image retrieval benchmarks.\nWe provide comparisons to baseline methods, analyze the trade-off between runtime complexity, memory footprint and search accuracy, and compare with the state of the art.\n\n\\subsection{Experimental setup}\n\\label{sec:expSetup}\n\n\\head{Datasets.}\nWe use the revisited retrieval benchmark~\\cite{RIT+18} of the popular Oxford buildings~\\cite{PCISZ07} and Paris~\\cite{PCISZ08} datasets, referred to as $\\mathcal{R}$Oxford\\xspace and $\\mathcal{R}$Paris\\xspace, respectively.\nUnless otherwise specified, we evaluate using the \\emph{Medium} setup and always report mean Average Precision (mAP).\nLarge-scale experiments are conducted on $\\mathcal{R}$Oxford\\xspace+$\\mathcal{R}$1M\\xspace and $\\mathcal{R}$Paris\\xspace+$\\mathcal{R}$1M\\xspace by adding the new 1M challenging distractor set~\\cite{RIT+18}.\n\n\\head{Image representation.}\nWe use GeM descriptors~\\cite{RTC18} to represent images.\nWe extract GeM at 3 different image scales, aggregate the 3 descriptors, and perform whitening, exactly as in~\\cite{RTC18}.\nFinally, each image is represented by a single vector with $d = 2048$ dimensions, since the ResNet-101 architecture is used.\n\n\\head{Baseline methods.} We consider the two baseline methods described in Section~\\ref{sec:background}, namely temporal and spectral filtering.\n\\emph{Temporal filtering} corresponds to solving a linear system with CG~\\cite{ITA+17} and is evaluated for different numbers of CG iterations. 
It is used with truncation at large scale to speed up the search~\\cite{ITA+17} and is denoted by \\emph{Temporal}$\\dagger$.\n\\emph{Spectral filtering} corresponds to FSR and its FSRw variant~\\cite{IAT+18}.\nBoth FSR variants are parametrized by the rank $r$ of the approximation, which is equal to the dimensionality of the spectral embedding.\n\n\\head{Implementation details.}\nTemporal ranking is performed with the implementation\\footnote{\\url{https:\/\/github.com\/ahmetius\/diffusion-retrieval\/}} provided by Iscen~\\emph{et al}\\onedot~\\cite{ITA+17}.\nThe adjacency matrix is constructed by using the top $k=50$ reciprocal neighbors.\nPairwise similarity between descriptors $\\mathbf{v}$ and $\\mathbf{z}$ is estimated by $( \\mathbf{v}^{\\top}\\mathbf{z} )_+^3$.\nParameter $\\alpha$ is set to $0.99$, while the observation vector $\\mathbf{y}$ includes the top $5$ neighbors.\nThe eigendecomposition is performed on the largest connected component, as in~\\cite{IAT+18}.\nIts size is 933,412 and 934,809 for $\\mathcal{R}$Oxford\\xspace+$\\mathcal{R}$1M\\xspace and $\\mathcal{R}$Paris\\xspace+$\\mathcal{R}$1M\\xspace, respectively.\nTimings are measured with a Matlab implementation on a 4-core Intel Xeon 2.60GHz CPU with 200 GB of RAM.\nWe only report timings for the diffusion part of the ranking and exclude the initial nearest neighbor search used to construct the observation vector.\n\n\\begin{figure}[t]\n\\vspace{-5pt}\n\\input{fig_main}\n\\vspace{-10pt}\n\\caption{mAP \\vs CG iteration $i$ and mAP \\vs time for temporal, spectral, and hybrid filtering.\nSparsified hybrid is used with sparsity $99$\\%.\n\\label{fig:mainexp}}\n\\end{figure}\n\n\\begin{figure}[t]\n\\input{fig_tradeoff2}\n\\vspace{-10pt}\n \\caption{mAP \\vs CG iteration $i$ for different rank $r$ for our hybrid method, where $r=0$ means temporal only.\n \\label{fig:tradeoff} }\n\\vspace{-10pt}\n\\end{figure}\n\n\\subsection{Comparisons}\n\\head{Comparison with baseline methods.} We first compare 
performance, query time and required memory for temporal, spectral, and hybrid ranking.\nWith respect to memory, all methods store the initial descriptors, \\emph{i.e}\\onedot one $2048$-dimensional vector per image.\nTemporal ranking additionally stores the sparse regularized Laplacian.\nSpectral ranking stores for each vector an additional embedding of dimensionality equal to the rank $r$, which is a parameter of the method.\nOur hybrid method stores both the Laplacian and the embedding, but with a significantly lower rank $r$.\n\nWe evaluate on $\\mathcal{R}$Oxford\\xspace+$\\mathcal{R}$1M\\xspace and $\\mathcal{R}$Paris\\xspace+$\\mathcal{R}$1M\\xspace with global image descriptors, which is a challenging large-scale problem, \\emph{i.e}\\onedot a large adjacency matrix, where prior methods fail or have serious drawbacks.\nResults are shown in Figure~\\ref{fig:mainexp}.\nTemporal ranking with CG reaches saturation near $20$ iterations, as in~\\cite{ITA+17}.\nSpectral ranking is evaluated for a decomposition of rank $r$ whose computation and required memory are reasonable and feasible on a single machine.\nFinally, the proposed hybrid method is evaluated for rank $r=400$, which is a good compromise between speed and memory, as shown below.\n\nSpectral ranking variants (FSR, FSRw) do not perform well despite requiring about $250\\%$ additional memory compared to nearest neighbor search in the original space. 
Compared to hybrid, a rank $r$ more than one order of magnitude higher is required for problems of this scale.\nTemporal ranking achieves good performance but at many more iterations and higher query times.\nOur hybrid solution provides a very reasonable space-time trade-off.\n\n\\head{Runtime and memory trade-off.}\nWe report the trade-off between the number of iterations and the rank $r$, representing additional memory, in more detail in Figure~\\ref{fig:tradeoff}.\nIt is shown that the number of iterations to achieve the top mAP decreases as the rank increases.\nWe achieve the optimal trade-off at $r=400$, where we only need $5$ or fewer iterations.\nNote that the rank not only affects the memory and the query time of the spectral part in a linear manner, but also the query time of the temporal part~(\\ref{eq:box}).\n\n\\head{Sparsification} of spectral embeddings is exploited in prior work~\\cite{IAT+18}.\nWe sparsify the embeddings of our hybrid method by setting the smallest values of $U_1$ to zero until a desired level of sparsity is reached.\nWe denote this method by \\emph{SHybrid}.\nThis sparse variant provides memory savings and an additional speedup due to the computations with sparse matrices.\nFigure~\\ref{fig:sparse} shows that the performance loss remains small even for extreme sparsity, \\emph{e.g}\\onedot $99$\\%, while the results in Figure~\\ref{fig:mainexp} show that it offers a significant speedup in the global descriptor setup.\n\n\\begin{figure}\n\\input{fig_sparse}\n\\vspace{-10pt}\n \\caption{mAP \\vs level of sparsification on our hybrid method for $r = 400$. Dashed horizontal line indicates no sparsification. 
\\label{fig:sparse} }\n\\end{figure}\n\n\\begin{table}[t]\n\\begin{center}\n\\input{tab_res}\n\\vspace{5pt}\n\\caption{Performance, memory and query time comparison on $\\mathcal{R}$Oxford\\xspace +$\\mathcal{R}$1M\\xspace with GeM descriptors for temporal (20 iterations), truncated temporal (20 iterations, 75k images in shortlist), spectral ($r=5k$), and hybrid ranking ($r=400$, 5 iterations). Hybrid ranking is sparsified by setting the 99\\% smallest values to 0. Reported memory excludes the initial descriptors requiring 8.2 GB. $U_1$ is stored with double precision.\n\t\\label{tab:ptm}}\n\\end{center}\n\\end{table}\n\n\\begin{figure}[t]\n\\input{fig_large_trunc}\n\\vspace{-5pt}\n\\caption{Time (s) - memory (MB) - performance\n(mAP) for different methods.\nWe show mAP \\vs time, memory \\vs time, and memory \\vs mAP on $\\mathcal{R}$Oxford\\xspace+$\\mathcal{R}$1M\\xspace.\nMethods in the comparison: temporal for 20 iterations, truncated temporal for 20 iterations and shortlist of size 50k, 75k, 100k, 200k and 300k, spectral (FSRw) with $r=5k$, hybrid with $r \\in \\{100, 200, 300, 400\\}$ and 5 iterations, sparse hybrid with 99\\% sparsity, $r \\in \\{100, 200, 300, 400\\}$ and 5 iterations.\nText labels indicate the shortlist size (in thousands) for truncated temporal and rank for hybrid. 
Observe that the two plots on the left are aligned horizontally with respect to time, while the two at the bottom are aligned vertically with respect to memory.\n \\label{fig:large_trunc} }\n\\end{figure}\n\n\n\\head{Performance-memory-speed comparison}\nwith the baselines is shown in Table~\\ref{tab:ptm}.\nOur hybrid approach enjoys query times lower than those of temporal with truncation or spectral with FSRw, while at the same time achieving higher performance and requiring less memory than the spectral-only approach.\n\nWe summarize our achievement in terms of mAP, required memory, and query time in Figure~\\ref{fig:large_trunc}.\nTemporal ranking achieves high performance at the cost of high query time, and its truncated counterpart saves query time but sacrifices performance.\nSpectral ranking is not effective at this scale, while our hybrid solution achieves high performance at low query times.\n\n\\head{Comparison with the state of the art.}\nWe present an extensive comparison with existing methods in the literature for global descriptors at small and large scale (1M distractors).\nWe choose $r=400$ and $5$ iterations for our hybrid method, $20$ iterations for temporal ranking, and $r=2k$ and $r=5k$ for spectral ranking at small and large scale, respectively.\nTemporal ranking is also performed with truncation on a shortlist of $75k$ images at large scale.\nThe comparison is presented in Table~\\ref{tab:soa}.\nOur hybrid approach performs best or second best, right after the temporal one, while providing much smaller query times at a small amount of additional memory.\n\n\\begin{table}[t]\n\\vspace{10pt}\n\\begin{center}\n\\input{tab_soa}\n\\caption{mAP comparison with existing methods in the literature on small and large scale datasets, using the Medium and Hard setups of the revisited benchmark.\n\t\\label{tab:soa}\n}\n\\end{center}\n\\end{table}\n\n\\section{Introduction}\n\nMost image retrieval methods obtain their initial ranking 
of the database images by computing the similarity between the query descriptor and the descriptors of the database images. Descriptors based on local features~\\cite{SZ03,PCISZ07} have been largely replaced by more efficient CNN-based image descriptors~\\cite{GARL17,RTC18}.\nRegardless of the initial ranking, the retrieval performance is commonly boosted by considering the manifold structure of the database descriptors, rather than just independent distances of query to database images.\nExamples are query expansion~\\cite{CPSIZ07,AZ12} and diffusion~\\cite{DB13,ITA+17,IAT+18}. \\emph{Query expansion} uses the results of the initial ranking to issue a novel, enriched query~\\cite{CPSIZ07}, on-line only. \\emph{Diffusion}, on the other hand, is based on the $k$-NN graph of the dataset that is constructed off-line, so that, assuming novel queries are part of the dataset, their results are essentially pre-computed. Diffusion can then be seen as infinite-order query expansion~\\cite{ITA+17}.\n\nThe significance of the performance boost achieved by diffusion has been recently demonstrated at the ``Large-Scale Landmark Recognition''\\footnote{\\url{https:\/\/landmarkscvprw18.github.io\/}} challenge in conjunction with CVPR 2018. The vast majority of top-ranked teams have used query expansion or diffusion as the last step of their method.\n\nRecently, efficient diffusion methods have been introduced to the image retrieval community. Iscen \\emph{et al}\\onedot~\\cite{ITA+17} apply diffusion to obtain the final ranking, in particular by solving a large and sparse system of linear equations. Even though an efficient \\emph{conjugate gradient} (CG)~\\cite{Hack94} solver is used, query times on large-scale datasets are in the range of several seconds. A significant speed-up is achieved by \\emph{truncating} the system of linear equations. Such an approximation, however, brings a slight degradation in the retrieval performance. 
Their method can be interpreted as graph filtering in the {\\em temporal} domain.\n\nIn the recent work of Iscen \\emph{et al}\\onedot~\\cite{IAT+18}, more computation is shifted to the off-line phase to accelerate the query. The solution of the linear system is estimated by a low-rank approximation of the $k$-NN graph Laplacian. Since the eigenvectors of the Laplacian represent a Fourier basis of the graph, this is interpreted as graph filtering in the {\\em spectral} domain.\nThe price to pay is increased space complexity to store the embeddings of the database descriptors. For comparable performance, a 5k--10k dimensional vector is needed per image.\n\nIn this paper, we introduce a \\emph{hybrid} method that combines spectral filtering~\\cite{IAT+18} and temporal filtering~\\cite{ITA+17}. This hybrid method offers a trade-off between speed (\\emph{i.e}\\onedot, the number of iterations of CG) and the additional memory required (\\emph{i.e}\\onedot, the dimensionality of the embedding). The two approaches~\\cite{ITA+17,IAT+18} are extreme cases of our hybrid method. We show that the proposed method matches or outperforms the previous methods while either requiring less memory or being significantly faster -- only three to five iterations of CG are necessary for embeddings of 100 to 500 dimensions.\n\nWhile both temporal and spectral filtering approaches were known in other scientific fields before being successfully applied to image retrieval, to our knowledge the proposed method is novel and can be applied to other domains.\n\nThe rest of the paper is organized as follows. Related work is reviewed in Section~\\ref{sec:related}. Previous work on temporal and spectral filtering is detailed in Sections~\\ref{sec:temporal} and~\\ref{sec:spectral} respectively, since the paper builds on this work. 
The proposed method is described in Section~\\ref{sec:hybrid} and its behavior is analyzed in Section~\\ref{sec:analysis}. Experimental results are provided in Section~\\ref{sec:exp}. Conclusions are drawn in Section~\\ref{sec:conclusions}.\n\\section{Hybrid spectral-temporal filtering}\n\\label{sec:hybrid}\n\nTemporal filtering~(\\ref{eq:temporal}) is performed once for every new query represented by $\\mathbf{y}$, but $\\mathcal{W}$ represents the dataset and is fixed. Could CG be accelerated if we had some very limited additional information on $\\mathcal{W}$?\n\nOn the other extreme, spectral filtering~(\\ref{eq:spectral}) needs a large number of eigenvectors and eigenvalues of $\\mathcal{W}$ to provide a high quality approximation, but always leaves some error. Could we reduce this space requirement by allocating some additional query time to recover the approximation error?\n\nThe answer is positive to both questions and in fact these are the two extreme cases of \\emph{hybrid spectral-temporal filtering}, which we formulate next.\n\n\\input{deriv-inv-v2}\n\n\\subsection{Algorithm}\n\n\\emph{Why is decomposition~(\\ref{eq:main}) of $h_\\alpha(\\mathcal{W})$ important?} Because given an observation vector $\\mathbf{y}$ at query time, we can express the ranking vector $\\mathbf{x}$ as\n\\begin{equation}\n\t\\mathbf{x} = \\mathbf{x}^s + \\mathbf{x}^t,\n\\label{eq:hybrid}\n\\end{equation}\nwhere the first, \\emph{spectral}, term $\\mathbf{x}^s$ is obtained by spectral filtering\n\\begin{equation}\n\t\\mathbf{x}^s = U_1 g_\\alpha(\\Lambda_1) U_1^{\\!\\top} \\mathbf{y},\n\\label{eq:h-spectral}\n\\end{equation}\nas in~\\cite{IAT+18}, where $g_\\alpha$ applies element-wise, while the second, \\emph{temporal}, term $\\mathbf{x}^t$ is obtained by temporal filtering, that is, solving the linear system\n\\begin{equation}\n\t\\mathcal{L}_\\alpha(\\mathcal{W} - U_1 \\Lambda_1 U_1^{\\!\\top}) \\mathbf{x}^t = \\mathbf{y},\n\\label{eq:h-temporal}\n\\end{equation}\nwhich we do 
by a few iterations of CG as in~\\cite{ITA+17}. The latter is possible because $\\mathcal{L}_\\alpha(\\mathcal{W} - U_1 \\Lambda_1 U_1^{\\!\\top})$ is still positive-definite, like $\\mathcal{L}_\\alpha(\\mathcal{W})$.\nIt is also possible without an explicit dense representation of $U_1 \\Lambda_1 U_1^{\\!\\top}$ because CG, like all Krylov subspace methods, only needs \\emph{black-box} access to the matrix $A$ of the linear system, that is, a mapping $\\mathbf{z} \\mapsto A \\mathbf{z}$ for $\\mathbf{z} \\in \\mathbb{R}^n$. For system~(\\ref{eq:h-temporal}) in particular, according to the definition~(\\ref{eq:laplacian}) of $\\mathcal{L}_\\alpha$, we use the mapping\n\\begin{equation}\n\t\\mathbf{z} \\mapsto \\left(\n\t\t\t\\mathbf{z} - \\alpha \\left( \\mathcal{W} \\mathbf{z} - U_1 \\Lambda_1 U_1^{\\!\\top} \\mathbf{z} \\right)\n\t\t\\right) \/ (1 - \\alpha),\n\\label{eq:box}\n\\end{equation}\nwhere the product $\\mathcal{W} \\mathbf{z}$ is efficient because $\\mathcal{W}$ is sparse as in~\\cite{ITA+17}, while $U_1 \\Lambda_1 U_1^{\\!\\top} \\mathbf{z}$ is efficient if computed right-to-left because $U_1$ is an $n \\times r$ matrix with $r \\ll n$ and $\\Lambda_1$ is diagonal as in~\\cite{IAT+18}.\n\n\\subsection{Discussion}\n\n\\emph{What is there to gain from the spectral-temporal decomposition~(\\ref{eq:hybrid}) of $\\mathbf{x}$?}\n\nFirst, since the temporal term~(\\ref{eq:h-temporal}) can recover the spectral approximation error, the rank $r$ of $U_1$, $\\Lambda_1$ in the spectral term~(\\ref{eq:h-spectral}) can be chosen as small as we like. In the extreme case $r = 0$, the spectral term vanishes and we recover temporal filtering~\\cite{ITA+17}. This allows efficient computation of only a few eigenvectors\/values, even with the Lanczos algorithm rather than the approximate method of~\\cite{IAT+18}. 
Most importantly, it significantly reduces the space complexity, at the expense of query time.\nLike spectral filtering, it is possible to \\emph{sparsify} $U_1$ to compress the dataset embeddings and accelerate the queries online. In fact, we show in section~\\ref{sec:exp} that sparsification is much more efficient in our case.\n\nSecond, the matrix $\\mathcal{W} - U_1 \\Lambda_1 U_1^{\\!\\top}$ is effectively like $\\mathcal{W}$ with the $r$ largest eigenvalues removed. This significantly improves the condition number of the matrix $\\mathcal{L}_\\alpha(\\mathcal{W} - U_1 \\Lambda_1 U_1^{\\!\\top})$ in the temporal term~(\\ref{eq:h-temporal}) compared to $\\mathcal{L}_\\alpha(\\mathcal{W})$ in the linear system~(\\ref{eq:temporal}) of temporal filtering~\\cite{ITA+17}, on which the convergence rate depends. In the extreme case $r = n$, the temporal term vanishes and we recover spectral filtering~\\cite{IAT+18}. In turn, even with small $r$, this significantly reduces the number of iterations needed for a given accuracy, at the expense of computing and storing $U_1$, $\\Lambda_1$ off-line as in~\\cite{IAT+18}. The improvement is a function of $\\alpha$ and the spectrum of $\\mathcal{W}$, and is quantified in section~\\ref{sec:analysis}.\n\nIn summary, for a given desired accuracy, we can choose the rank $r$ of the spectral term and a corresponding number of iterations of the temporal term, determining a trade-off between the space needed for the eigenvectors (and the off-line cost to obtain them) and the (online) query time. 
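To make this trade-off concrete, the complete hybrid query~(\ref{eq:hybrid})--(\ref{eq:box}) can be sketched in a few lines of Python (an illustrative sketch under our own naming, with SciPy's `eigsh` and `cg` standing in for the off-line eigensolver and the CG solver; this is not the implementation evaluated in section~\ref{sec:exp}):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg, eigsh

def hybrid_filter(W_norm, y, alpha=0.99, r=400, maxiter=5):
    """Hybrid ranking x = x_s + x_t (illustrative sketch).

    Spectral term: x_s = U1 g_alpha(Lambda1) U1^T y.
    Temporal term: solve L_alpha(W - U1 Lambda1 U1^T) x_t = y by CG,
    accessing the matrix only through the black-box mapping (eq. box).
    """
    n = W_norm.shape[0]
    lam1, U1 = eigsh(W_norm, k=r, which='LA')   # r largest eigenpairs, off-line

    # spectral term, with g_alpha(x) = (1 - alpha) alpha x / (1 - alpha x)
    g = (1.0 - alpha) * alpha * lam1 / (1.0 - alpha * lam1)
    x_s = U1 @ (g * (U1.T @ y))

    # z -> (z - alpha (W z - U1 Lambda1 U1^T z)) / (1 - alpha): sparse + low-rank
    def matvec(z):
        Wz = W_norm @ z - U1 @ (lam1 * (U1.T @ z))
        return (z - alpha * Wz) / (1.0 - alpha)

    A = LinearOperator((n, n), matvec=matvec, dtype=float)
    x_t, _ = cg(A, y, maxiter=maxiter)
    return x_s + x_t
```

With CG run to convergence this reproduces $h_\alpha(\mathcal{W}) \mathbf{y}$ exactly, since the decomposition~(\ref{eq:hybrid}) is exact; truncating to a few iterations gives the approximation actually used at query time.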
Such a choice is not possible with either spectral or temporal filtering alone: at large scale, the former may need too much space and the latter may be too slow.\n\n\\section{Related work}\n\\label{sec:related}\nQuery expansion (QE) has been a standard way to improve recall of image retrieval since the work of Chum \\emph{et al}\\onedot~\\cite{CPSIZ07}.\nA variety of approaches exploit local feature matching and perform various types of verification.\nSuch matching ranges from selective kernel matching~\\cite{TJ14} to geometric consensus~\\cite{CPSIZ07,CMPM11,JB09} with RANSAC-like techniques.\nThe verified images are then used to refine the global or local image representation of a novel query.\n\nAnother family of QE methods is more generic and simply assumes a global image descriptor~\\cite{SLBW14,JHS07,DJAH14,DGBQG11,ZYCYM12,DB13,AZ12}.\nA simple and popular one is average-QE~\\cite{CPSIZ07}, recently extended to $\\alpha$-QE~\\cite{RTC18}.\nAt some small extra cost, recall is significantly boosted.\nThis additional cost is restricted to the on-line query phase. This is in contrast to another family of approaches that considers an off-line pre-processing of the database.\nGiven the nearest neighbors list for database images, QE is performed by adapting the local similarity measure~\\cite{JHS07}, using reciprocity constraints~\\cite{DJAH14,DGBQG11} or graph-based similarity propagation~\\cite{ZYCYM12,DB13,ITA+17}. 
The graph-based approaches, also known as diffusion, are shown to achieve great performance~\\cite{ITA+17} and to be a good way to perform feature fusion~\\cite{ZYCYM12}.\nSuch on-line re-ranking is typically orders of magnitude more costly than simple average-QE.\n\nThe advent of CNN-based features, especially global image descriptors, made QE even more attractive.\nAverage-QE or $\\alpha$-QE are easily applicable and very effective with a variety of CNN-based descriptors~\\cite{RTC18,GARL17,TSJ16,KMO15}.\nState-of-the-art performance is achieved with diffusion on global or regional descriptors~\\cite{ITA+17}.\nThe latter is possible due to the small number of regions that are adequate to represent small objects, in contrast to thousands in the case of local features.\n\nDiffusion based on tensor products can be attractive in terms of performance~\\cite{BBT+18,BZW+17}. However, in this work, we focus on the PageRank-like diffusion~\\cite{ITA+17,ZWG+03} due to its reasonable query times. An iterative on-line solution was commonly preferred~\\cite{DB13} until the work of Iscen \\emph{et al}\\onedot~\\cite{ITA+17}, who solve a linear system to speed up the process.\nAdditional off-line pre-processing and the construction and storage of additional embeddings reduce diffusion to inner product search in the spectral ranking of Iscen \\emph{et al}\\onedot~\\cite{IAT+18}. The present work lies exactly in between these two worlds and offers a trade-off, exchanging memory for speed and vice versa.\n\nFast random walk with restart~\\cite{ToFP06} is very relevant in the sense that it follows the same diffusion model as~\\cite{ITA+17,ZWG+03} and is a hybrid method like ours. It first disconnects the graph into distinct components through clustering and then obtains a low-rank spectral approximation of the residual error. 
Apart from the additional complexity, parameters, \\emph{etc}\\onedot of the off-line clustering process, and the storage of both eigenvectors and a large inverse matrix, its online phase is also complex, involving the Woodbury matrix identity and several dense matrix-vector multiplications. Compared to that, we first obtain a \\emph{very} low-rank spectral approximation of the original graph, and then solve a sparse linear system on the residual error. Thanks to orthogonality properties, the online phase is nearly as simple as the original one and significantly faster.\n\\section{Problem formulation and background}\n\\label{sec:background}\n\nThe methods we consider are based on a nearest neighbor graph of a dataset of $n$ items, represented by an $n \\times n$ \\emph{adjacency matrix} $W$. The graph is undirected and weighted according to similarity: $W$ is sparse, symmetric, nonnegative and zero-diagonal. We symmetrically normalize $W$ as $\\mathcal{W} \\mathrel{\\operatorname{:=}} D^{-1\/2} W D^{-1\/2}$, where $D \\mathrel{\\operatorname{:=}} \\operatorname{diag}(W \\mathbf{1})$ is the diagonal \\emph{degree matrix}, containing the row-wise sums of $W$ on its diagonal.\nThe eigenvalues of $\\mathcal{W}$ lie in $[-1,1]$~\\cite{Chun97}.\n\nAt query time, we are given a sparse $n \\times 1$ \\emph{observation vector} $\\mathbf{y}$, which is constructed by searching for the nearest neighbors of a query item in the dataset and setting its nonzero entries to the corresponding similarities. The problem is to obtain an $n \\times 1$ \\emph{ranking vector} $\\mathbf{x}$ such that retrieved items of the dataset are ranked by decreasing order of the elements of $\\mathbf{x}$. 
Vector $\\mathbf{x}$ should be close to $\\mathbf{y}$ but at the same time similar items are encouraged to have similar ranks in $\\mathbf{x}$, essentially by exploring the graph to retrieve more items.\n\n\\subsection{Temporal filtering} \\label{sec:temporal}\n\nGiven a parameter $\\alpha \\in [0,1)$, define the $n \\times n$ \\emph{regularized Laplacian} function by\n\\begin{equation}\n\t\\mathcal{L}_\\alpha(A) \\mathrel{\\operatorname{:=}} (I_n - \\alpha A) \/ (1 - \\alpha)\n\\label{eq:laplacian}\n\\end{equation}\nfor $n \\times n$ real symmetric matrix $A$, where $I_n$ is the $n \\times n$ identity matrix. Iscen \\emph{et al}\\onedot~\\cite{ITA+17} then define $\\mathbf{x}$ as the unique solution of the linear system\n\\begin{equation}\n\t\\mathcal{L}_\\alpha(\\mathcal{W}) \\mathbf{x} = \\mathbf{y},\n\\label{eq:temporal}\n\\end{equation}\nwhich they obtain approximately in practice by a few iterations of the \\emph{conjugate gradient} (CG) method, since $\\mathcal{L}_\\alpha(\\mathcal{W})$ is positive-definite. At large scale, they truncate $\\mathcal{W}$ by keeping only the rows and columns corresponding to a fixed number of neighbors of the query, and re-normalize. Then,~(\\ref{eq:temporal}) only performs re-ranking within this neighbor set.\n\nWe call this method \\emph{temporal\\footnote{``Temporal'' stems from conventional signal processing where signals are functions of ``time''; while ``spectral'' is standardized also in graph signal processing.} filtering} because if $\\mathbf{x}$, $\\mathbf{y}$ are seen as signals, then $\\mathbf{x}$ is the result of applying a linear graph filter on $\\mathbf{y}$, and CG iteratively applies a set of recurrence relations that determine the filter. 
While $\\mathcal{W}$ is computed and stored off-line,~(\\ref{eq:temporal}) is solved online (at query time), and this is expensive.\n\n\\subsection{Spectral filtering} \\label{sec:spectral}\n\nLinear system~(\\ref{eq:temporal}) can be written as $\\mathbf{x} = h_\\alpha(\\mathcal{W}) \\mathbf{y}$, where the \\emph{transfer function} $h_\\alpha$ is defined by\n\\begin{equation}\n\th_\\alpha(A) \\mathrel{\\operatorname{:=}} \\mathcal{L}_\\alpha(A)^{-1} = (1 - \\alpha) (I_n - \\alpha A)^{-1}\n\\label{eq:transfer}\n\\end{equation}\nfor $n \\times n$ real symmetric matrix $A$. Given the eigenvalue decomposition $\\mathcal{W} = U \\Lambda U^{\\!\\top}$ of the symmetric matrix $\\mathcal{W}$, Iscen \\emph{et al}\\onedot~\\cite{IAT+18} observe that $h_\\alpha(\\mathcal{W}) = U h_\\alpha(\\Lambda) U^{\\!\\top}$, so that~(\\ref{eq:temporal}) can be written as\n\\begin{equation}\n\t\\mathbf{x} = U h_\\alpha(\\Lambda) U^{\\!\\top} \\mathbf{y},\n\\label{eq:spectral}\n\\end{equation}\nwhich they approximate by keeping only the largest $r$ eigenvalues and the corresponding eigenvectors of $\\mathcal{W}$. This defines a low-rank approximation $h_\\alpha(\\mathcal{W}) \\approx U_1 h_\\alpha(\\Lambda_1) U_1^{\\!\\top}$ instead. This method is referred to as \\emph{fast spectral ranking} (FSR) in~\\cite{IAT+18}. Crucially, $\\Lambda$ is a diagonal matrix, hence $h_\\alpha$ applies element-wise, as a scalar function $h_\\alpha(x) \\mathrel{\\operatorname{:=}} (1 - \\alpha) \/ (1 - \\alpha x)$ for $x \\in [-1,1]$.\n\nAt the expense of off-line computing and storing the $n \\times r$ matrix $U_1$ and the $r$ eigenvalues in $\\Lambda_1$, filtering is now computed as a sequence of low-rank matrix-vector multiplications, and the query is accelerated by orders of magnitude compared to~\\cite{ITA+17}. However, the space overhead is significant. 
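As a minimal illustration of~(\ref{eq:spectral}) and its rank-$r$ truncation (our own Python sketch; SciPy's Lanczos-based `eigsh` stands in for the approximate eigensolver of~\cite{IAT+18}):

```python
import numpy as np
from scipy.sparse.linalg import eigsh

def spectral_filter(W_norm, y, alpha=0.99, r=400):
    """Fast spectral ranking sketch: x ~= U1 h_alpha(Lambda1) U1^T y,
    keeping only the r algebraically largest eigenpairs of W."""
    lam1, U1 = eigsh(W_norm, k=r, which='LA')   # computed once, off-line
    h = (1.0 - alpha) / (1.0 - alpha * lam1)    # scalar h_alpha, element-wise
    return U1 @ (h * (U1.T @ y))                # right-to-left: O(n r) per query
```

The eigenpairs are computed once off-line; each query then costs only two thin matrix-vector products, i.e., $O(nr)$.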
To deal with approximation errors when $r$ is small, a heuristic is introduced that gradually falls back to the similarity in the original space for items that are poorly represented, referred to as FSRw~\\cite{IAT+18}.\n\nWe call this method \\emph{spectral filtering} because $U$ represents the Fourier basis of the graph and the sequence of matrix-vector multiplications from right to left in the right-hand side of~(\\ref{eq:spectral}) represents the Fourier transform of $\\mathbf{y}$ by $U^{\\!\\top}$, filtering in the frequency domain by $h_\\alpha(\\Lambda)$, and finally the inverse Fourier transform to obtain $\\mathbf{x}$ in the time domain by $U$.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}