\\section{Introduction and Main Results}\n\nWe study slow-fast systems driven by fractional Brownian motions (fBm):\n\\begin{alignat}{4}\n dX_t^\\varepsilon&=f(X_t^\\varepsilon,Y_t^\\varepsilon)\\,dt+g(X_t^\\varepsilon, Y_t^\\varepsilon)\\,dB_t, &\\qquad X_0^\\varepsilon&=X_0, \\label{eq:slow}\\\\\n dY_t^\\varepsilon&=\\frac{1}{\\varepsilon}b(X_t^\\varepsilon,Y_t^\\varepsilon)\\,dt+\\frac{1}{\\varepsilon^{\\hat{H}}}\\sigma\\,d\\hat{B}_t, &\\qquad Y_0^\\varepsilon&=Y_0, \\label{eq:fast}\n\\end{alignat}\nwhere $B$ and $\\hat{B}$ are independent fBms on an underlying complete probability space $(\\Omega, {\\mathcal F},\\ensuremath\\mathbb{P})$ with Hurst parameters $H\\in(\\frac12,1)$ and $\\hat{H}\\in(1-H,1)$, respectively. Here, $g:\\ensuremath{\\mathbb{R}}^d\\times\\ensuremath{\\mathbb{R}}^n\\to\\Lin[m]{d}$ is the diffusion coefficient and $\\sigma\\in\\Lin{n}$ is non-degenerate. As the scale parameter $\\varepsilon>0$ is taken to $0$, one hopes that the \\emph{slow motion} $X^\\varepsilon$ is well approximated by an \\emph{effective dynamics} $\\bar{X}$. For $H=\\hat{H}=\\frac12$, this convergence has been studied by myriad authors since the seminal works of Bogolyubov-Mitropol{\\textquotesingle}ski\\u{\\i} \\cite{Bogolyubov1955} and Hasminskii \\cite{Hasminskii1968}; see e.g. the monographs and survey articles \\cite{Freidlin2012,Skorokhod2002,Pavliotis2008,Berglund2006,Liu2012,Li2018} and references therein for a comprehensive overview. It is still a very active research area \\cite{Liu2020,Roeckner2020,Roeckner2020a}.\n\nFor $H,\\hat{H}\\neq\\frac12$, the SDEs \\eqref{eq:slow}--\\eqref{eq:fast} provide a suitable model for economic, medical, and climate phenomena exhibiting genuinely non-Markovian behavior in both the system and its environment. 
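Readers who wish to experiment with the system \eqref{eq:slow}--\eqref{eq:fast} numerically can do so in a few lines. The following Python sketch (not part of the paper's analysis; the coefficients $f$, $g$, $b$ and all parameter values are illustrative choices of ours) samples exact fBm increments on a uniform grid via a Cholesky factorization of their covariance and runs a first-order Euler scheme:

```python
import numpy as np

def fbm_increments(n, dt, hurst, rng):
    """Sample n fBm increments on a uniform grid from their exact joint
    Gaussian law, via a Cholesky factorization of the covariance matrix."""
    d = np.abs(np.subtract.outer(np.arange(n), np.arange(n))).astype(float)
    # Cov(dB_i, dB_j) = dt^{2H}/2 * (|i-j+1|^{2H} + |i-j-1|^{2H} - 2|i-j|^{2H})
    cov = 0.5 * dt ** (2 * hurst) * (
        (d + 1) ** (2 * hurst) + np.abs(d - 1) ** (2 * hurst) - 2 * d ** (2 * hurst))
    return np.linalg.cholesky(cov + 1e-12 * np.eye(n)) @ rng.standard_normal(n)

def slow_fast(eps, n=400, dt=1e-3, H=0.7, H_hat=0.8, seed=1):
    """First-order Euler scheme for the slow-fast system; f, g, b are toys."""
    rng = np.random.default_rng(seed)
    dB = fbm_increments(n, dt, H, rng)
    dB_hat = fbm_increments(n, dt, H_hat, rng)
    f = lambda x, y: -x + y      # slow drift (our choice)
    g = lambda x, y: 1.0         # slow diffusion coefficient
    b = lambda x, y: x - y       # contractive fast drift
    X, Y = np.zeros(n + 1), np.zeros(n + 1)
    for k in range(n):
        X[k + 1] = X[k] + f(X[k], Y[k]) * dt + g(X[k], Y[k]) * dB[k]
        Y[k + 1] = Y[k] + b(X[k], Y[k]) * dt / eps + eps ** (-H_hat) * dB_hat[k]
    return X, Y

X, Y = slow_fast(eps=0.05)
```

Decreasing `eps` accelerates the fast component relative to the slow one, which is the regime studied below.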
It is for example very well known that neglecting temporal memory effects in climate modeling by resorting to a diffusion model results in predictions notoriously mismatching observational data \\cite{Ashkenazy2003,Karner2002,Davidsen2010,Barboza2014}. It has thus become widely popular to use fBm in climate modeling \\cite{Sonechkin1998,Yuan2014,Eichinger2020}.\n\n\nWhile slow-fast systems with fractional noise have seen a tremendous spike of interest in the last two years \\cite{Bourguin-ailus-Spiliopoulos-typical,Bourguin-Gailus-Spiliopoulos,Hairer2020,Pei-Inaham-Xu, Pei-Inaham-Xu2,Han2021}, all of these works resort to Markovian, strongly mixing fast processes by choosing $\\hat{H}=\\frac12$ in \\eqref{eq:fast}. The main contribution of this article is to establish the convergence $X^\\varepsilon\\to\\bar{X}$ even for a \\emph{non-Markovian} fast dynamics by allowing $\\hat{H}\\neq\\frac12$. It hardly comes as a surprise that this renders the analysis much more delicate; it is not at all clear whether an averaging principle can even hold for a fractional, \\emph{non-mixing} environment. In fact, the usual assumption in the aforementioned works on Markovian averaging principles is a strong mixing condition with an algebraic rate \\cite{Heunis1994,Abourashchi2010}. This condition is essentially never satisfied for a fractional dynamics \\cite{Bai2016}.\n\nRecent work of Hairer and the first author of this article suggests the following ansatz for the effective dynamics:\n\\begin{equation}\\label{eq:effective_dynamics}\n d\\bar{X}_t=\\bar{f}(\\bar{X}_t)\\,dt+\\bar{g}(\\bar{X}_t)\\,dB_t,\\qquad \\bar{X}_0=X_0,\n\\end{equation}\nwhere $\\bar{f}(x)\\ensuremath\\triangleq\\int f(x,y)\\,\\pi^x(dy)$ and similarly for $\\bar{g}$ \\cite{Hairer2020}. 
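To get a first feeling for the averaging ansatz, consider the simplest frozen fast dynamics, the fractional Ornstein--Uhlenbeck process $dY_t=-Y_t\,dt+d\hat{B}_t$ revisited in \cref{sec:physical_solution} below: its stationary one-time marginal is a centered Gaussian with variance $\hat{H}\Gamma(2\hat{H})$ \cite{Cheridito2003}, so the averages defining $\bar{f}$ and $\bar{g}$ reduce to plain Gaussian integrals. The snippet below (our own sanity check, not from the paper) confirms the variance formula by discretizing $\operatorname{Var}\big(\int_{-\infty}^0 e^s\,dB_s\big)=\int_{-\infty}^0\int_{-\infty}^0 e^{s+u}\,\mathbb{E}[B_sB_u]\,ds\,du$, an identity obtained by integration by parts:

```python
import math
import numpy as np

H = 0.75            # Hurst parameter of the fast noise (example value)
T, n = 36.0, 1200   # truncation horizon and grid size for the double integral
h = T / n
s = -(np.arange(n) + 0.5) * h               # midpoint grid on [-T, 0]
S, U = np.meshgrid(s, s, indexing="ij")
# Two-sided fBm covariance: E[B_s B_u] = (|s|^{2H} + |u|^{2H} - |s-u|^{2H}) / 2
cov = 0.5 * (np.abs(S) ** (2 * H) + np.abs(U) ** (2 * H) - np.abs(S - U) ** (2 * H))
# Var(int_{-inf}^0 e^s dB_s) as a double integral against the covariance
var_num = float(np.sum(np.exp(S + U) * cov) * h * h)
var_exact = H * math.gamma(2 * H)           # closed-form stationary variance
print(var_num, var_exact)
```

For $H=\frac12$ the same computation reproduces the classical Ornstein--Uhlenbeck variance $\frac12$.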
For $\\hat{H}=\\frac12$, this work showed that the average is taken with respect to the unique invariant measure $\\pi^x$ of the fast dynamics with \\emph{frozen} slow input\n\\begin{equation}\\label{eq:frozen_fast}\n dY_t^x=b(x,Y_t^x)\\,dt+\\sigma\\,d\\hat{B}_t.\n\\end{equation} \nFor $\\hat{H}\\neq\\frac12$, it is \\emph{a priori} not clear what $\\pi^x$ should be. We show that it is the one-time marginal of the unique stationary path space law $\\ensuremath\\mathbb{P}_{\\pi^x}\\in\\P\\big(\\ensuremath{\\mathcal{C}}(\\ensuremath{\\mathbb{R}}_+,\\ensuremath{\\mathbb{R}}^n)\\big)$, see \\cref{sec:physical_solution} for details. [Here and in the sequel, $\\P(\\ensuremath{\\mathcal{X}})$ denotes the set of Borel probability measures on a Polish space $\\ensuremath{\\mathcal{X}}$.]\n \n\nIn addition to standard regularity requirements ensuring well-posedness of the slow-fast system (see \\cref{cond:feedback} below), we shall impose a contractivity condition on the drift in \\eqref{eq:fast}:\n\\begin{definition}\\label{define-semi-contractive}\n Let $\\lambda, R\\geq 0$ and $\\kappa>0$. We write $\\S(\\kappa, R, \\lambda)$ for the set of Lipschitz continuous functions $b:\\ensuremath{\\mathbb{R}}^n\\to\\ensuremath{\\mathbb{R}}^n$ satisfying\n \\begin{equation}\\label{eq:semicontractive}\n \\Braket{b(x)-b(y),x-y}\\leq\\begin{cases}\n -\\kappa|x-y|^2, & |x|,|y|\\geq R,\\\\\n \\lambda|x-y|^2, &\\text{otherwise}.\\\\\n \\end{cases}\n\\end{equation}\n\\end{definition}\n\nNote that $\\lambda$ may be smaller than $\\Lip{b}$, whence its prescription is not necessarily redundant. If $b=-\\nabla V$ is a gradient vector field with potential $V$, then \\eqref{eq:semicontractive} is equivalent to $V$ being at most $\\lambda$-concave on $\\{|x|<R\\}$ and $\\kappa$-strongly convex on $\\{|x|\\geq R\\}$.\n\nWe can now state our first main result:\n\\begin{theorem}\\label{thm:feedback_fractional}\n Suppose that the coefficients of \\eqref{eq:slow}--\\eqref{eq:fast} satisfy the regularity assumptions of \\cref{sec:feedback} (in particular, \\cref{cond:feedback}), let $\\kappa, R>0$, and fix $\\alpha<H$. 
Then there is a number $\\lambda_0>0$ such that, if $b(x,\\cdot)\\in\\S\\big(\\kappa, R,\\lambda_0\\big)$ for every $x\\in\\ensuremath{\\mathbb{R}}^d$, all of the following hold:\n \\begin{itemize}\n \\item For every $x\\in\\ensuremath{\\mathbb{R}}^d$, there exists a unique stationary path space law $\\ensuremath\\mathbb{P}_{\\pi^x}\\in\\P\\big(\\ensuremath{\\mathcal{C}}(\\ensuremath{\\mathbb{R}}_+,\\ensuremath{\\mathbb{R}}^n)\\big)$ for the frozen fast dynamics \\eqref{eq:frozen_fast}.\n \\item Let $\\pi^x\\in\\P(\\ensuremath{\\mathbb{R}}^n)$ be the one-time marginal of $\\ensuremath\\mathbb{P}_{\\pi^x}$. If \n \\begin{equation*}\n x\\mapsto\\bar{g}(x)\\ensuremath\\triangleq\\int_{\\ensuremath{\\mathbb{R}}^n}g(x,y)\\,\\pi^x(dy)\\in\\ensuremath{\\mathcal{C}}_b^2\\big(\\ensuremath{\\mathbb{R}}^d,\\Lin[m]{d}\\big),\n \\end{equation*}\n then there is a unique pathwise solution to \\eqref{eq:effective_dynamics} and $X^\\varepsilon\\to\\bar{X}$ as $\\varepsilon\\to 0$ in $\\ensuremath{\\mathcal{C}}^\\alpha\\big([0,T],\\ensuremath{\\mathbb{R}}^d\\big)$ in probability for any $T>0$.\n \\end{itemize} \n\\end{theorem}\n\nThe regularity of $\\bar{g}$ not only hinges on the regularity of $g$ but also on the fast dynamics. First we note that the requirement on $\\bar{g}$ clearly holds for a diffusion coefficient depending only on the slow motion $X^\\varepsilon$: \n\\begin{equation*}\n dX_t^\\varepsilon=f(X_t^\\varepsilon,Y_t^\\varepsilon)\\,dt+g(X_t^\\varepsilon)\\,dB_t.\n\\end{equation*}\nAnother class of examples is provided by \\cref{cor:smooth} below. \n\nThe technical core of the proof of \\cref{thm:feedback_fractional} is a quantitative \\emph{quenched ergodic theorem} on the conditional evolution of the process \\eqref{eq:frozen_fast}. We prove this by means of a control argument, which is of independent interest. 
In fact, it allows us to improve recent work of Panloup and Richard \\cite{Panloup2020} by establishing geometric ergodicity for a class of SDEs driven by additive fractional noise. To our best knowledge, this is the first result achieving an exponential convergence rate for a fractional dynamics (excluding the trivial instance of an everywhere contractive drift).\n\nLet $\\TV{\\mu}\\ensuremath\\triangleq\\sup_A|\\mu(A)|$ denote the total variation norm, $\\ensuremath{\\mathcal{W}}^p$ be the $p$-Wasserstein distance, and $\\ensuremath{\\mathbb{W}}^p$ be the Wasserstein-like metric for generalized initial conditions introduced in \\cref{def:wasserstein}.\n \n\\begin{theorem}[Geometric Ergodic Theorem]\\label{thm:geometric}\nLet $(Y_t)_{t\\geq 0}$ be the solution to the SDE\n\\begin{equation}\\label{eq:sde_intro}\n dY_t=b(Y_t)\\,dt+\\sigma\\,dB_t\n\\end{equation}\nstarted in the generalized initial condition $\\mu$, where $\\sigma\\in\\Lin{n}$ is non-degenerate and $B$ is an fBm with Hurst parameter $H\\in(0,1)$. Then, for any $p\\geq 1$ and any $\\kappa,R>0$, there exists a $\\Lambda=\\Lambda(\\kappa,R,p)>0$ such that, whenever $b\\in\\S\\big(\\kappa,R,\\Lambda\\big)$, there is a unique invariant measure $\\mathcal I_\\pi$ for \\eqref{eq:sde_intro} in the sense of \\cref{initial-condition}. Moreover, \n \\begin{equation}\\label{eq:wasserstein_time_t}\n \\ensuremath{\\mathcal{W}}^p(\\mathcal{L}(Y_t),\\pi)\\leq Ce^{-ct} \\ensuremath{\\mathbb{W}}^p\\big(\\mu,\\mathcal I_{\\pi}\\big) \\qquad \\forall\\, t\\geq 0\n \\end{equation}\n and\n \\begin{equation}\\label{eq:tv_process}\n \\TV{\\L(Y_{\\cdot+t})-\\ensuremath\\mathbb{P}_\\pi}\\leq Ce^{-ct}\\ensuremath{\\mathbb{W}}^1\\big(\\mu,\\mathcal I_\\pi\\big) \\qquad \\forall\\, t\\geq 0,\n \\end{equation}\n where $c,C>0$ are numerical constants independent of $t\\geq 0$ and $\\mu$.\n\\end{theorem}\n\n\nThe work \\cite{Hairer2005} already contained a result on the rate of convergence. 
There, the author assumed an off-diagonal contraction condition, see \\cref{cond:off_diagonal} below, and obtained an algebraic rate in \\eqref{eq:tv_process}. Very recently, Panloup and Richard \\cite{Panloup2020} studied $b\\in\\S(\\kappa,R,0)$, for which they found a rate of order $e^{-Dt^\\gamma}$ for some $\\gamma<\\frac23$ in both \\eqref{eq:wasserstein_time_t} and \\eqref{eq:tv_process}. Although these works do not require a global Lipschitz condition on the drift for Hurst parameters $H<\\frac12$, we emphasize that they do impose this assumption for $H>\\frac12$ to obtain \\eqref{eq:tv_process}. This is due to the lack of regularity of a certain fractional integral operator. \\Cref{thm:geometric} thus provides a genuine strengthening of the results of \\cite{Panloup2020} in the latter case. We note that, similarly to the work of Panloup and Richard, the Wasserstein decay \\eqref{eq:wasserstein_time_t} also holds for more general Gaussian driving noises with stationary increments. We shall briefly comment on this in \\cref{sec:geometric_ergodicity}.\n\nWith the surging interest in numerical methods based on the generalized Langevin equation with memory kernel \\cite{Chak2020,Leimkuhler2020}, \\cref{thm:geometric} and the quenched quantitative ergodic theorem underpinning it can give a better theoretical understanding. A first step would be to derive quantitative estimates on the constants $c$, $C$, and $\\Lambda$; a possible pathway is outlined in \\cref{rem:constant_xi} below. It is an interesting open question whether there is indeed a finite threshold value of $\\Lambda$ beyond which the exponential rates \\eqref{eq:wasserstein_time_t}--\\eqref{eq:tv_process} no longer hold. As established by Eberle, such a transition from exponential to sub-exponential rates does not happen in the case $H=\\frac12$ \\cite{Eberle2016}.\n\n\\begin{example}\n Let us give an example of a drift not covered by the sub-exponential convergence theorems of \\cite{Panloup2020}. 
Consider the double-well potential\n \\begin{equation*}\n V(x)=\\alpha|x|^4-\\beta|x|^2\n \\end{equation*}\n for $\\alpha,\\beta>0$. We modify $V$ outside of a compact set such that its Hessian is bounded. Set $b=-\\nabla V$. It is clear that $b\\notin\\bigcup_{\\kappa,R>0}\\S(\\kappa,R,0)$ as soon as $\\beta>0$. However, for $\\frac{\\beta}{\\alpha}$ sufficiently small, \\cref{thm:geometric} furnishes an exponential rate of convergence.\n\\end{example}\n\n\\paragraph{Outline of the article.} The next section features a brief overview of preliminary material. In \\cref{sec:convergence}, we prove the quantitative quenched ergodic theorem and deduce \\cref{thm:geometric}. The proof of \\cref{thm:feedback_fractional} is concluded in \\cref{sec:feedback}.\n\\paragraph{Acknowledgements.} We would like to thank the anonymous referees for their careful reading and helpful comments. Partial support from the EPSRC under grant no. EP\\/S023925\\/1 is also acknowledged.\n\n\\section{Preliminaries}\\label{sec:preliminaries}\n\nRecall that one-dimensional fractional Brownian motion with Hurst parameter $H\\in(0,1)$ is the centered Gaussian process $(B_t)_{t\\geq 0}$ with\n\\begin{equation*}\n \\Expec{(B_t-B_s)^2}=|t-s|^{2H},\\qquad s,t\\geq 0.\n\\end{equation*}\nTo construct $d$-dimensional fBm, one lets the coordinates evolve as independent one-dimensional fBms with the same Hurst parameter. We will make frequent use of the following classical representation of one-dimensional fBm as a fractional integral of a two-sided Wiener process $(W_t)_{t\\in\\ensuremath{\\mathbb{R}}}$, which is due to Mandelbrot and van Ness \\cite{Mandelbrot1968}:\n\\begin{equation}\\label{eq:mandelbrot}\n B_t=\\alpha_H\\int_{-\\infty}^0 \\left((t-u)^{H-\\frac12}-(-u)^{H-\\frac12}\\right)dW_u+\\alpha_H\\int_0^t(t-u)^{H-\\frac12}\\,dW_u,\\qquad t\\geq 0.\n\\end{equation}\nHere, $\\alpha_H>0$ is some explicitly known normalization constant and we write $B_t=\\bar B_t+\\tilde B_t$ for the decomposition into the two integrals above. 
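Since $\alpha_H$ drops out of variance ratios, the self-similarity $\Expec{B_t^2}=t^{2H}\,\Expec{B_1^2}$ can be verified directly from \eqref{eq:mandelbrot}: the It\^o isometry gives $\Expec{B_t^2}/\alpha_H^2=\int_{-\infty}^0\big((t-u)^{H-\frac12}-(-u)^{H-\frac12}\big)^2\,du+\frac{t^{2H}}{2H}$. A short numerical check (ours; the grid size is arbitrary):

```python
import numpy as np

def mvn_bracket(t, H, n=200_000):
    """E[B_t^2] / alpha_H^2 from the Mandelbrot-van Ness representation via
    the Ito isometry; the integral over (-inf, 0] is computed with the
    substitution u = -x/(1-x), x in (0, 1), and the midpoint rule."""
    x = (np.arange(n) + 0.5) / n
    u = -x / (1.0 - x)
    past = np.sum(((t - u) ** (H - 0.5) - (-u) ** (H - 0.5)) ** 2
                  / (1.0 - x) ** 2) / n
    # the integral over [0, t] equals t^{2H} / (2H) exactly
    return past + t ** (2 * H) / (2 * H)

H = 0.7
ratio = mvn_bracket(2.0, H) / mvn_bracket(1.0, H)
print(ratio, 2 ** (2 * H))  # self-similarity predicts Var(B_2)/Var(B_1) = 2^{2H}
```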
\n\n\n\\subsection{Invariant Measures of Fractional SDEs} \\label{sec:physical_solution}\n\nWhile certainly non-Markovian on its own, the solution to \\eqref{eq:sde_intro} can actually be cast as the marginal of an infinite-dimensional Feller process $Z_t\\ensuremath\\triangleq\\big(Y_t,(W_s)_{s\\leq t}\\big)$ with values in $\\ensuremath{\\mathbb{R}}^n\\times\\H_H$. Here, $W$ is the two-sided Wiener process driving the equation through \\eqref{eq:mandelbrot} and $\\H_H$ is a H\\\"older-type space of paths $\\ensuremath{\\mathbb{R}}_-\\to\\ensuremath{\\mathbb{R}}^n$ supporting the Wiener measure $\\ensuremath{\\mathsf{W}}$. More concretely, $\\H_H$ is the closure of the space $\\{f\\in\\ensuremath{\\mathcal{C}}_c^\\infty(\\ensuremath{\\mathbb{R}}_-,\\ensuremath{\\mathbb{R}}^n):\\,f(0)=0\\}$ in the norm\n\\begin{equation*}\n \\|f\\|_{\\H_H}\\ensuremath\\triangleq\\sup_{s,t\\leq 0}\\frac{\\big|f(t)-f(s)\\big|}{|t-s|^{\\frac{1-H}{2}}\\sqrt{1+|t|+|s|}}.\n\\end{equation*} \nTo ensure that this construction actually furnishes a solution to \\eqref{eq:sde_intro}, we of course have to assume that the law of the second marginal of $Z$ coincides with $\\ensuremath{\\mathsf{W}}$ for each time $t\\geq 0$. This motivates the following definition:\n\\begin{definition}[\\cite{Hairer2005}]\\label{initial-condition}\nA measure $\\mu\\in\\P(\\ensuremath{\\mathbb{R}}^n\\times\\H_H)$ with $\\Pi_{\\H_H}^*\\mu=\\ensuremath{\\mathsf{W}}$ is called a \\emph{generalized initial condition}. A generalized initial condition $\\mathcal I_\\pi$ that is invariant for the Feller process $Z$ is called an \\emph{invariant measure} for the SDE \\eqref{eq:sde_intro}. 
We write $\\pi\\ensuremath\\triangleq\\Pi_{\\ensuremath{\\mathbb{R}}^n}^*\\mathcal I_\\pi$ for the first marginal and $\\ensuremath\\mathbb{P}_\\pi\\in\\P\\big(\\ensuremath{\\mathcal{C}}(\\ensuremath{\\mathbb{R}}_+,\\ensuremath{\\mathbb{R}}^n)\\big)$ for the law of the first coordinate of $Z$ when started in $\\mathcal I_\\pi$.\n\\end{definition} \n\nBy only adding the past of the driving noise to the auxiliary process $Z$, Hairer's framework rules out the existence of `unphysical' invariant measures, which frequently occur in the theory of random dynamical systems, see \\cite{Hairer2009} for details.\n\n\nThere are only a few examples for which the invariant measure can be written down explicitly:\n\\begin{example}\\label{ex:disintegration_fou}\n Let $Y$ be the fractional Ornstein-Uhlenbeck process \\cite{Cheridito2003}, that is,\n \\begin{equation*}\n dY_t=-Y_t\\,dt+dB_t.\n \\end{equation*}\n Then it is well known that its invariant measure is given by\n \\begin{equation*}\n \\mathcal I_\\pi(dy,dw)=\\delta_{F(w)}(dy)\\ensuremath{\\mathsf{W}}(dw),\\qquad F(w)\\ensuremath\\triangleq-\\int_{-\\infty}^0 e^s D_Hw(s)\\,ds,\n \\end{equation*}\n where $D_H:\\H_H\\to\\H_{1-H}$ is a continuous linear operator switching between Wiener and fBm paths, see \\cite[Eq. (3.6)]{Hairer2005} for the precise definition. 
The first marginal of $\\mathcal I_\\pi$ and the stationary path space law are given by\n \\begin{equation*}\n \\pi=\\L\\left(\\int_{-\\infty}^0 e^s\\,dB_s\\right)\\quad\\text{and}\\quad\\ensuremath\\mathbb{P}_\\pi=\\L\\left(\\int_{-\\infty}^t e^s\\,dB_s\\right)_{t\\geq 0}.\n \\end{equation*}\n \n\\end{example}\n\n\\begin{remark}\n The invariant measure of \\eqref{eq:sde_intro} is in general not of product form.\n\\end{remark}\n\nSince $\\sigma\\in\\Lin{n}$ is non-degenerate, one can show that there is an isomorphism between the strictly stationary solutions to \\eqref{eq:sde_intro} and the set of invariant measures (provided one quotients the latter by the equivalence relation identifying generalized initial conditions which generate the same evolution in the first marginal). It is also not hard to prove the following:\n\\begin{proposition}[\\cite{Hairer2005}]\\label{prop:existence_invariant_measure}\n If $\\sigma\\in\\Lin{n}$ and $b\\in\\S(\\kappa,R,\\lambda)$ for some $\\kappa>0$, $R,\\lambda\\geq 0$, then there exists an invariant measure for \\eqref{eq:sde_intro} in the sense of \\cref{initial-condition}. Moreover, $\\mathcal I_\\pi$ has moments of all orders.\n\\end{proposition}\n\nThe conclusion of \\cref{prop:existence_invariant_measure} actually holds for a merely locally Lipschitz, off-diagonal large scale contractive drift (see \\cref{cond:off_diagonal} below). See also \\cite{Hairer2007,Deya2019} for versions for multiplicative noise. Finally, we introduce a Wasserstein-type distance for generalized initial conditions:\n\\begin{definition}\\label{def:wasserstein}\nLet $\\mu$ and $\\nu$ be generalized initial conditions. Let $\\mathscr{C}_{\\Delta}(\\mu,\\nu)$ denote the set of couplings of $\\mu$ and $\\nu$ concentrated on the diagonal $\\Delta_{\\H_H}\\ensuremath\\triangleq\\{(w,w^\\prime)\\in\\H_H^2:\\,w=w^\\prime\\}$. 
For $p\\geq 1$, we set\n \\begin{equation*}\n \\ensuremath{\\mathbb{W}}^p(\\mu,\\nu)\\ensuremath\\triangleq\\inf_{\\rho\\in\\mathscr{C}_\\Delta(\\mu,\\nu)}\\left(\\int_{(\\ensuremath{\\mathbb{R}}^n\\times\\H_H)^2}|x-y|^p\\,\\rho(dx,dw,dy,dw^\\prime)\\right)^{\\frac1p}.\n\\end{equation*}\n\\end{definition}\n\nNote that clearly $\\ensuremath{\\mathcal{W}}^p\\big(\\Pi_{\\ensuremath{\\mathbb{R}}^n}^*\\mu,\\Pi_{\\ensuremath{\\mathbb{R}}^n}^*\\nu\\big)\\leq \\ensuremath{\\mathbb{W}}^p(\\mu,\\nu)$ and the inequality is strict in general.\n\n\\subsection{Large Scale Contractions}\n\nKnown ergodic theorems on \\eqref{eq:sde_intro} require either a Lyapunov-type stability or a large scale contractivity condition on the drift $b$. The former ensures that solutions, once far away, tend to return to a neighborhood of the origin. Under this condition, it is conceivable that two distinct solutions can come back from diverging routes, thus allowing one to couple them. The Lyapunov stability condition was used in \\cite{Fontbona2017,Deya2019} for multiplicative noise. \n\nA large scale contraction, on the other hand, forces two solutions to come closer once they have left a ball $B_R$ of sufficiently large radius $R>0$. The following two conditions appeared in previous works:\n\n\\begin{condition}[Off-diagonal large scale contraction, \\cite{Hairer2005}]\\label{cond:off_diagonal}\nThere exist numbers $\\tilde \\kappa>0$ and $D,\\lambda\\geq 0$ such that \n\\begin{equation}\\label{quasi-contr}\n \\Braket{b(x)-b(y),x-y}\\leq \\big(D-\\tilde \\kappa|x-y|^2\\big)\\wedge\\big(\\lambda|x-y|^2\\big)\\qquad \\forall\\, x,y\\in\\ensuremath{\\mathbb{R}}^n.\n\\end{equation}\n\\end{condition}\n\n\n\\begin{condition}[Large scale contraction, \\cite{Panloup2020}]\nThere exist numbers $R\\geq 0$ and $\\kappa>0$ such that \n \\begin{equation}\\label{contractive}\n \\Braket{b(x)-b(y),x-y}\\leq -\\kappa|x-y|^2 \\qquad \\forall\\, x,y\\in \\ensuremath{\\mathbb{R}}^n\\setminus B_R. 
\n \\end{equation}\n\\end{condition}\n\n\\begin{example}\n The function $b(x)=x-x^3$ is a large scale contraction. \n\\end{example}\n\nWe will later use the following standard result, a slightly weaker version of which was proven in \\cite[Lemma 5.1]{Panloup2020}.\n\\begin{lemma}\\label{lem:bigger_ball}\nIf $b$ is locally Lipschitz continuous and satisfies the large scale contraction condition \\eqref{contractive}, then for any $\\bar{\\kappa}\\in(0,\\kappa)$, there is an $\\bar{R}>0$ such that\n \\begin{equation*}\n \\braket{b(x)-b(y),x-y}\\leq -\\bar{\\kappa}|x-y|^2 \\qquad \\forall\\, y\\in\\ensuremath{\\mathbb{R}}^n,\\, |x|>\\bar{R}.\n \\end{equation*}\n\\end{lemma}\n\\begin{proof}\n Since $\\braket{b(x)-b(y),x-y}\\leq -{\\kappa}|x-y|^2$ for $x$ and $y$ outside of the ball $B_R$, \nwe only need to show that the required contraction holds for any $|y|\\leq R$ and $|x|>\\bar R$. \nFix such $x$ and $y$. \n\nWithout loss of generality, we may also assume that $\\bar{R}\\geq R+1$. Then there is a $\\beta\\in(0,1)$ such that $z_\\beta\\ensuremath\\triangleq (1-\\beta)x+\\beta y$ has norm $|z_\\beta|=R+1$. \nSince $x-y=\\ensuremath{\\frac} 1 \\beta(x-z_\\beta)$ and $x, z_\\beta$ are outside of $B_R$, we find\n \\begin{equation*}\n \\braket{b(x)-b(z_\\beta),x-y}\\leq-\\ensuremath{\\frac} 1 \\beta \\kappa|x-z_\\beta|^2=-\\kappa\\beta|x-y|^2.\n \\end{equation*}\nLet $K\\ensuremath\\triangleq\\Lip[B_{R+1}]{b}$ denote the Lipschitz constant of $b$ on $B_{ R+1}$. 
Since $|z_\\beta-y|=(1-\\beta)|x-y|$, it holds that\n \\begin{align*}\n \\braket{b(x)-b(y),x-y}&=\\braket{b(x)-b(z_\\beta),x-y}+\\braket{b(z_\\beta)-b(y),x-y}\\\\\n & \\leq -\\kappa\\beta|x-y|^2+K(1-\\beta)|x-y|^2.\n \\end{align*}\nSince $\\beta$ is the proportion of the line segment from $y$ to $x$ lying outside of $B_{R+1}$,\nwe can choose it as close to $1$ as we like by choosing $\\bar R$ sufficiently large $\\big(\\beta=\\ensuremath{\\frac}{|x-z_\\beta|}{|x-y|}\\geq\\frac{|x|-R-1}{|x|+R}\\geq\\frac{\\bar{R}-R-1}{\\bar{R}+R}\\big)$. In particular, $-\\kappa\\beta+K(1-\\beta)\\leq-\\bar{\\kappa}$ for $\\bar{R}$ large enough, which proves the claim.\n\\end{proof}\n\n\\begin{remark}\\label{rem:large_scall_off_diagonal}\n\\leavevmode\n\\begin{enumerate}\n \\item Let $b: \\ensuremath{\\mathbb{R}}^n\\to \\ensuremath{\\mathbb{R}}^n$ be a globally Lipschitz continuous function. Then the large scale contraction condition \\eqref{contractive} is equivalent to $b\\in\\bigcup_{\\lambda>0}\\S(\\kappa,R,\\lambda)$. In view of \\cref{lem:bigger_ball}, condition \\eqref{eq:semicontractive} also holds for a merely locally Lipschitz continuous $b$ at the cost of a smaller contractive rate and a bigger contractive range. In fact, choose $\\bar\\kappa\\in(0,\\kappa)$ and let $\\bar R>R$ be the corresponding radius furnished by \\cref{lem:bigger_ball}. This gives \\eqref{eq:semicontractive} with $\\kappa\\rightsquigarrow\\bar{\\kappa}$, $R\\rightsquigarrow\\bar{R}$, and $\\lambda\\rightsquigarrow \\Lip[B_{\\bar{R}}]{b}$.\n \\item\\label{it:off_diagonal} The off-diagonal large scale contraction condition is weaker than the large scale contraction condition. With the former, there may be no $\\kappa>0$ such that \\eqref{contractive} holds in the region $\\{|x-y| \\leq \\ensuremath{\\frac} D {2\\tilde \\kappa}\\} \\cap \\{|x|\\geq R, |y|\\geq R\\}$. On the other hand, if \\eqref{contractive} holds and $b$ is locally Lipschitz continuous, we can choose any $\\tilde \\kappa<\\kappa$. 
In fact, denoting the radius from \\cref{lem:bigger_ball} by $\\bar R>0$, one only needs to show \\eqref{quasi-contr} when both $x$ and $y$ are in $B_{\\bar R}$. To this end, we pick $\\lambda=\\Lip[B_{\\bar{R}}]{b}$ and $D\\geq\\sup_{x,y\\in B_{\\bar{R}}}(\\tilde\\kappa+\\lambda)|x-y|^2$.\n\\end{enumerate}\n\\end{remark}\n\n\n\\section{The Conditional Evolution of Fractional Dynamics}\\label{sec:convergence}\n\nTo derive strong $L^p$-bounds on the H\\\"older norm of the slow motion in \\cref{sec:feedback} below, we need to study the conditional distribution of the evolution \\eqref{eq:frozen_fast}. Unlike the Markovian case, the conditioning changes the dynamics and the resulting evolution may \\emph{no longer} solve the original equation. We will show that, in the limit $t\\to\\infty$, the law of the conditioned dynamics still converges to $\\pi^x$, the first marginal of the invariant measure for the fast dynamics with frozen slow input \\eqref{eq:frozen_fast}. The rate of convergence is however slower (only algebraic rather than exponential).\n\n\nLet us first state the regularity assumption imposed in \\cref{thm:feedback_fractional}. For this we introduce a convenient notation, which we shall frequently use in the sequel. We write $a\\lesssim b$ if there is a constant $C>0$ such that $a\\leq C b$. 
The constant $C$ is independent of any ambient parameters on which $a$ and $b$ may depend.\n\n\\begin{condition}\\label{cond:feedback}\n The drift $b:\\ensuremath{\\mathbb{R}}^d\\times\\ensuremath{\\mathbb{R}}^n\\to\\ensuremath{\\mathbb{R}}^n$ satisfies the following conditions:\n \\begin{itemize}\n \\item \\emph{Linear growth:}\n \\begin{equation*}\n |b(x,y)|\\lesssim 1+|x|+|y|, \\qquad \\forall\\, x\\in\\ensuremath{\\mathbb{R}}^d,y\\in\\ensuremath{\\mathbb{R}}^n.\n \\end{equation*}\n \\item \\emph{Uniformly locally Lipschitz in the first argument:} For each $R>0$, there is an $L_R>0$ such that\n \\begin{equation*}\n \\sup_{y\\in\\ensuremath{\\mathbb{R}}^n}|b(x_1,y)-b(x_2,y)|\\leq L_R|x_1-x_2|, \\qquad \\forall\\, |x_1|,|x_2|\\leq R.\n \\end{equation*}\n \\item \\emph{Uniformly Lipschitz in the second argument:} There is an $L>0$ such that\n \\begin{equation*}\n \\sup_{x\\in\\ensuremath{\\mathbb{R}}^d}|b(x,y_1)-b(x,y_2)|\\leq L|y_1-y_2|, \\qquad \\forall\\, y_1,y_2\\in\\ensuremath{\\mathbb{R}}^n.\n \\end{equation*}\n \\end{itemize}\n\\end{condition}\n\nLet $({\\mathcal F}_t)_{t\\geq 0}$ be a complete filtration to which $\\hat{B}$ is adapted. For any $({\\mathcal F}_t)_{t\\geq 0}$-adapted, $\\ensuremath{\\mathbb{R}}^d$-valued process $X$ with continuous sample paths, and any $\\varepsilon>0$, the equation\n\\begin{equation}\\label{eq:general_flow}\n d\\Phi_{t}^X=\\frac{1}{\\varepsilon}b\\big(X_t,\\Phi_{t}^X\\big)\\,dt+\\frac{1}{\\varepsilon^{\\hat{H}}}\\sigma\\,d\\hat{B}_t,\\qquad \\Phi_{0}^{X}=y,\n\\end{equation}\nhas a unique global pathwise solution under \\cref{cond:feedback}, see \\cref{lem:comparison} below. The flow $\\Phi_{s,t}^X(y)$ associated with \\eqref{eq:general_flow} is therefore well defined. An important special case of \\eqref{eq:general_flow} is when the extrinsic process is given by a fixed point $x\\in\\ensuremath{\\mathbb{R}}^d$. 
For this we reserve the notation $\\bar{\\Phi}^x$:\n\\begin{equation}\\label{eq:general_flow-fixed-x}\n d\\bar{\\Phi}_t^x=\\frac{1}{\\varepsilon}b(x,\\bar{\\Phi}_t^x)\\,dt+\\frac{1}{\\varepsilon^{\\hat{H}}}\\sigma\\,d\\hat{B}_t,\\qquad \\bar{\\Phi}_0^x=y.\n\\end{equation}\nWe would like the reader to observe that the dependency of flows on the scale parameter $\\varepsilon>0$ is suppressed in our notation. Note that, by self-similarity, sending $\\varepsilon\\to 0$ in \\eqref{eq:general_flow-fixed-x} is equivalent to keeping $\\varepsilon=1$ fixed and taking $t\\to\\infty$. As the $\\varepsilon$-dependence of the flows \\eqref{eq:general_flow}--\\eqref{eq:general_flow-fixed-x} will play a key r\\^ole in \\cref{sec:feedback}, we choose to introduce a new notation in case $\\varepsilon=1$, which is used throughout the rest of this section: \n\\begin{definition}\\label{def:flow}\n Let $\\ensuremath{\\mathfrak{h}}\\in\\ensuremath{\\mathcal{C}}_0(\\ensuremath{\\mathbb{R}}_+,\\ensuremath{\\mathbb{R}}^n)\\ensuremath\\triangleq\\big\\{f\\in\\ensuremath{\\mathcal{C}}(\\ensuremath{\\mathbb{R}}_+,\\ensuremath{\\mathbb{R}}^n):\\,f(0)=0\\big\\}$ and $x\\in\\ensuremath{\\mathbb{R}}^d$. We denote the flow of the ordinary differential equation\n \\begin{equation}\\label{eq:ode_solution}\n dy_t=b(x,y_t)\\,dt+d\\ensuremath{\\mathfrak{h}}_t\n \\end{equation}\n by $\\Psi^x_{s,t}(y,\\ensuremath{\\mathfrak{h}})$, where $y\\in\\ensuremath{\\mathbb{R}}^n$ and $0\\leq s\\leq t$. 
It is given by the solution to the integral equation\n \\begin{equation*}\n \\Psi_{s,t}^x(y,\\ensuremath{\\mathfrak{h}})=y+\\int_s^t b\\big(x,\\Psi_{s,r}^x(y,\\ensuremath{\\mathfrak{h}})\\big)\\,dr+\\ensuremath{\\mathfrak{h}}_t-\\ensuremath{\\mathfrak{h}}_s.\n \\end{equation*}\n We also use the abbreviation $\\Psi^x_{t}\\ensuremath\\triangleq \\Psi^x_{0,t}$.\n\\end{definition}\n\nUnder \\cref{cond:feedback}, \\eqref{eq:ode_solution} is well posed and it follows that $\\Psi_{s,t}^x(y, \\ensuremath{\\mathfrak{h}})=\\Psi_{t-s}^x(y,\\theta_s \\ensuremath{\\mathfrak{h}})$ for each $0\\leq s\\leq t$ and $y\\in\\ensuremath{\\mathbb{R}}^n$, where $(\\theta_sf)(\\cdot)=f(s+\\cdot)-f(s)$ is the Wiener shift operator on the path space. If $x\\in\\ensuremath{\\mathbb{R}}^d$, $y\\in\\ensuremath{\\mathbb{R}}^n$, or $\\ensuremath{\\mathfrak{h}}\\in\\ensuremath{\\mathcal{C}}_0(\\ensuremath{\\mathbb{R}}_+,\\ensuremath{\\mathbb{R}}^n)$ are random, we understand \\cref{def:flow} pathwise for each fixed sample $\\omega\\in\\Omega$. 
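Both the flow property $\Psi_{0,t}^x=\Psi_{s,t}^x\circ\Psi_{0,s}^x$ and the shift identity $\Psi_{s,t}^x(y,\ensuremath{\mathfrak{h}})=\Psi_{t-s}^x(y,\theta_s\ensuremath{\mathfrak{h}})$ survive any grid-aligned Euler discretization, which makes them easy to test numerically. A minimal sketch (ours; the drift $b(x,y)=x-y$ and the smooth path $\ensuremath{\mathfrak{h}}_t=\sin t$ standing in for the noise are toy choices):

```python
import math

def Psi(s, t, y, h, x=0.5, dt=1e-3):
    """Euler approximation of the flow of dy = b(x, y) dt + dh on [s, t],
    with the toy drift b(x, y) = x - y and a deterministic path h."""
    n = round((t - s) / dt)
    for k in range(n):
        tk, tk1 = s + k * dt, s + (k + 1) * dt
        y = y + (x - y) * dt + (h(tk1) - h(tk))
    return y

h = math.sin                  # smooth stand-in for a noise path
s, t, y0 = 0.5, 2.0, 1.3
# Flow property: Psi_{0,t} = Psi_{s,t} o Psi_{0,s}
flow_lhs = Psi(0.0, t, y0, h)
flow_rhs = Psi(s, t, Psi(0.0, s, y0, h), h)
# Shift identity: Psi_{s,t}(y, h) = Psi_{t-s}(y, theta_s h),
# where (theta_s h)(u) = h(s + u) - h(s)
direct = Psi(s, t, y0, h)
shifted = Psi(0.0, t - s, y0, lambda u: h(u + s) - h(s))
print(flow_lhs - flow_rhs, direct - shifted)
```

On an aligned grid both differences vanish up to floating-point rounding.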
The solutions to \\eqref{eq:general_flow} and \\eqref{eq:general_flow-fixed-x} are also understood in this sense.\n\n\n\\subsection{Processes with a Locally Independent Increment Decomposition}\\label{sec:increment}\n\nThe derivation of the conditioned evolution relies on the following simple fact: For $t,h\\geq 0$, we have\n\\begin{equation}\\label{eq:increment_decomposition}\n (\\theta_t\\hat{B})_h=\\hat{B}_{t+h}-\\hat{B}_t=\\bar{\\hat{B}}_h^t+\\tilde{\\hat{B}}_h^t,\n\\end{equation}\nwhere, in a slight abuse of notation (the integrand has to be multiplied by the identity matrix),\n\\begin{equation*}\n \\bar{\\hat{B}}_h^t\\ensuremath\\triangleq \\alpha_{\\hat H}\\int_{-\\infty}^t\\left((t+h-u)^{\\hat{H}-\\frac12}-(t-u)^{\\hat{H}-\\frac12}\\right)\\,d\\hat{W}_u,\\quad \\tilde{\\hat{B}}_h^t\\ensuremath\\triangleq\\alpha_{\\hat H}\\int_t^{t+h} (t+h-u)^{\\hat{H}-\\frac12}\\,d\\hat{W}_u.\n\\end{equation*}\nThis decomposition is easily obtained by rearranging \\eqref{eq:mandelbrot}. For any $t\\geq 0$, the two components $\\bar{\\hat{B}}^t$ and $ \\tilde{\\hat{B}}^t$ are independent. We call $\\bar{\\hat{B}}^t$ the \\emph{smooth} part of the increment, whereas $\\tilde{\\hat{B}}^t$ is referred to as the \\emph{rough} part. This terminology is based on the fact that, away from the origin, the process $\\bar{\\hat{B}}^t$ has continuously differentiable sample paths and therefore the `roughness' of $\\hat{B}$ essentially comes from $\\tilde{\\hat{B}}^t$. Indeed, it is not hard to check that $\\tilde{\\hat{B}}^t$ is of precisely the same H\\\"older regularity as $\\hat{B}$. We also observe that $\\tilde{\\hat{B}}^t\\overset{d}{=}\\tilde{\\hat{B}}^0\\ensuremath\\triangleq\\tilde{\\hat{B}}$ for all $t>0$. \n\nThe process $\\tilde{\\hat{B}}$ is---up to a prefactor---known as Riemann-Liouville process (or type-II fractional Brownian motion) and was initially studied by L\\'evy \\cite{Levy1953}. 
Its use in modelling was famously discouraged in \\cite{Mandelbrot1968} due to its overemphasis of the origin and the `regularized' process \\eqref{eq:mandelbrot} was proposed instead. In fact, as we shall see below, the lack of stationarity of the increments of $\\tilde{\\hat{B}}$ complicates the analysis of the conditioned evolution. \n\n\\begin{definition}\\label{def:ind_increment}\nLet $({\\mathcal F}_t)_{t\\geq 0}$ be a complete filtration. An $({\\mathcal F}_t)_{t\\geq 0}$-adapted stochastic process $Z$ is said to have a \\emph{locally independent decomposition of its increments} with respect to $({\\mathcal F}_t)_{t\\geq 0}$ if, for any $t\\geq 0$, there exists an increment decomposition of the form\n$$(\\theta_t Z)_h=\\tilde Z^t_h+\\bar Z^t_h, \\qquad h\\geq 0,$$\nwhere $\\bar Z^t \\in {\\mathcal F}_t$ and $\\tilde Z^t$ is independent of ${\\mathcal F}_t$. \n\\end{definition}\n\n\nAs seen in \\eqref{eq:increment_decomposition}, an fBm $\\hat{B}$ has a locally independent decomposition of its increments with respect to any filtration $({\\mathcal F}_t)_{t\\geq 0}$ \\emph{compatible} with $\\hat{B}$. By this we mean that $(\\hat{W}_s)_{s\\leq t}\\in{\\mathcal F}_t$ and $(\\theta_t\\hat{W}_s)_{s\\geq t}$ is independent of ${\\mathcal F}_t$ for any $t\\geq 0$. \n\n\n\\begin{example}\\label{example-1}\nLet us give some further examples, which will become important later on:\n\\begin{enumerate}\n\\item\\label{it:rough_decomposition} Let $(\\hat W_t)_{t\\geq 0}$ be a Wiener process and \n$ \\tilde{\\hat B}_t\\ensuremath\\triangleq\\alpha_{\\hat H}\\int_0^{t} (t-u)^{\\hat H-\\frac12}\\,d\\hat W_u$ be the Riemann-Liouville process. 
\nThen, for any $t\\geq 0$ and $h\\geq 0$, \n \\begin{align}\n (\\theta_t\\tilde{\\hat B})_h&=\\alpha_{\\hat H}\\int_0^t \\Big( (t+h-u)^{\\hat H-\\frac12}-(t-u)^{\\hat H-\\frac12}\\Big)\\,d\\hat W_u+\\alpha_{\\hat H}\\int_t^{t+h}(t+h-u)^{\\hat H-\\frac12}\\,d\\hat W_u\\nonumber\\\\\n &\\ensuremath\\triangleq Q^t_h+\\tilde{\\hat B}^t_h.\\label{eq:z_t}\n \\end{align}\nThus, $\\tilde{\\hat B}$ admits a locally independent decomposition of its increments with respect to any filtration compatible with $\\hat{B}$.\n\n\\item Another example, given in \\cite{Gehringer-Li-2020, Gehringer-Li-2020-1}, is the stationary fractional Ornstein-Uhlenbeck process $Z_t=\\int_{-\\infty }^t e^{-(t-s)}\\,d\\hat{B}_s$. More generally, it is clear that $Z_t=\\int_{-\\infty }^t \\mathfrak{G}(s,t)\\,d\\hat{B}_s$ with a suitable kernel $\\mathfrak{G}$ also has this property.\n\n\\item\\label{it:smooth_decomposition} Albeit not being a direct instance of \\cref{def:ind_increment}, it is also interesting to observe a \\emph{fractal} property of $\\hat{B}$: The smooth part of the increment has an independent decomposition as $\\bar{\\hat B}_h^t=P_h^t+Q_h^t$, where $Q^t$ was defined in \\eqref{eq:z_t} and\n\\begin{equation*}\n P_h^t\\ensuremath\\triangleq\\alpha_{\\hat H}\\int_{-\\infty}^0\\Big((t+h-u)^{\\hat H-\\frac12}-(t-u)^{\\hat H-\\frac12}\\Big)\\,d\\hat W_u.\n\\end{equation*}\n\\end{enumerate}\n\\end{example}\n\nOur argument for the quenched ergodic theorem will be based on a two step conditioning procedure making use of an explicit representation of the conditioned process. 
We state it for a general noise with locally independent increments:\n\n\\begin{lemma}\\label{lem:conditioning_general}\nLet $Z$ be an $({\\mathcal F}_t)_{t\\geq 0}$-adapted process with a locally independent decomposition of its increments $(\\theta_s Z)_h=\\bar Z^s_h+\\tilde Z^s_h$ and let $0\\leq s\\leq t$. Then, for each ${\\mathcal F}_s$-measurable random variable $Y$ and each bounded measurable functional $F$,\n\\begin{equation*}\n \\Expec{F\\big(\\Psi_{s,t}(Y,Z)\\big)\\,\\middle|\\,{\\mathcal F}_s}=\\Expec{F\\big(\\Psi_{t-s}(y,\\bar z+\\tilde Z^s)\\big)}\\Big|_{y=Y,\\,\\bar z=\\bar Z^s}.\n\\end{equation*}\n\\end{lemma}\n\nWe shall also need a quantitative decay estimate on the smooth part $\\bar{\\hat{B}}^t$. For $\\alpha>0$ we define the set\n\\begin{equation}\\label{eq:omega}\n \\Omega_{\\alpha}\\ensuremath\\triangleq\\Big\\{f\\in \\ensuremath{\\mathcal{C}}_0(\\ensuremath{\\mathbb{R}}_+,\\ensuremath{\\mathbb{R}}^n)\\cap\\ensuremath{\\mathcal{C}}^2\\big((0,\\infty),\\ensuremath{\\mathbb{R}}^n\\big):\\limsup_{t\\to\\infty}\\left(t^{\\alpha}\\big|\\dot{f}(t)\\big|+t^{1+\\alpha}\\big|\\ddot{f}(t)\\big|\\right)<\\infty\\Big\\}.\n\\end{equation}\nThis space is equipped with the semi-norm \n\\begin{equation*}\n \\|f\\|_{\\Omega_\\alpha}\\ensuremath\\triangleq\\sup_{t\\geq 1}t^{\\alpha}\\big|\\dot{f}(t)\\big|+\\sup_{t\\geq 1}t^{1+\\alpha}\\big|\\ddot{f}(t)\\big|.\n\\end{equation*}\nWe also set $\\Omega_{\\alpha-}\\ensuremath\\triangleq\\bigcap_{\\beta<\\alpha}\\Omega_{\\beta}$. The motivation for this definition stems from the following lemma:\n\n\\begin{lemma}\\label{lem:smooth_part_decay}\n Let $\\varepsilon>0$ and $t\\geq 0$. Then $\\varepsilon^{-\\hat{H}}\\bar{\\hat{B}}^t_{\\varepsilon\\cdot}\\overset{d}{=}\\bar{\\hat{B}}^t\\overset{d}{=}\\bar{\\hat{B}}\\in\\Omega_{(1-\\hat{H})-}$ a.s. and $\\|\\bar{\\hat{B}}\\|_{\\Omega_\\alpha}\\in\\bigcap_{p\\geq 1} L^p$ for any $\\alpha<1-\\hat{H}$.\n\\end{lemma}\n\\begin{proof}\n Let $\\delta\\in\\big(0,1-\\hat H\\big)$. It is enough to prove that there is a random variable $C>0$ with moments of all orders such that\n \\begin{equation}\\label{eq:estimate_all_orders}\n \\big|\\dot{\\bar{\\hat{B}}}_t\\big|\\leq \\frac{C}{t^{1-\\hat{H}-\\delta}},\\qquad\\big|\\ddot{\\bar{\\hat{B}}}_t\\big|\\leq\\frac{C}{t^{2-\\hat{H}-\\delta}}\n \\end{equation}\n for all $t\\geq 1$ on a set of probability one. This in turn easily follows from sample path properties of the standard Wiener process.
Firstly, we have that\n \\begin{equation*}\n \\dot{\\bar{\\hat{B}}}_t=\\alpha_{\\hat{H}}\\left(\\hat{H}-\\frac12\\right)\\int_{-\\infty}^0(t-u)^{\\hat{H}-\\frac32}\\,dW_u=\\alpha_{\\hat{H}}\\left(\\hat{H}-\\frac12\\right)\\left(\\hat{H}-\\frac32\\right)\\int_{-\\infty}^0 (t-u)^{\\hat{H}-\\frac52}W_u\\,du\n \\end{equation*}\n by integration by parts, since $\\lim_{u\\to-\\infty}(t-u)^{\\hat{H}-\\frac32}W_u=0$. Therefore,\n \\begin{align*}\n \\big|\\dot{\\bar{\\hat{B}}}_t\\big|&\\lesssim \\left(\\sup_{-1\\leq s\\leq 0} |W_s| \\int_{-1}^0 (t-u)^{\\hat{H}-\\frac52}\\,du+\\sup_{s\\leq -1}\\frac{|W_s|}{(t-s)^{\\frac12+\\delta}}\\int_{-\\infty}^{-1} (t-u)^{\\hat{H}-2+\\delta}\\,du\\right)\\\\\n &\\leq C\\left(t^{\\hat{H}-\\frac52}+(t+1)^{\\hat{H}-1+\\delta}\\right).\n \\end{align*}\n The fact that $C$ has moments of all orders is an easy consequence of Fernique's theorem. In fact, the Wiener process defines a Gaussian measure on the separable Banach space\n \\begin{equation*}\n \\mathcal{M}^{\\frac12+\\delta}\\ensuremath\\triangleq\\left\\{f\\in\\ensuremath{\\mathcal{C}}_0(\\ensuremath{\\mathbb{R}}_+,\\ensuremath{\\mathbb{R}}^n):\\,\\|f\\|_{\\mathcal{M}^{\\frac12+\\delta}}\\ensuremath\\triangleq\\sup_{u\\geq 0}\\frac{|f(u)|}{(1+u)^{\\frac12+\\delta}}<\\infty\\right\\}.\n \\end{equation*}\n By Fernique's theorem, the random variable $\\|W\\|_{\\mathcal{M}^{\\frac12+\\delta}}$ therefore has Gaussian tails. The first estimate in \\eqref{eq:estimate_all_orders} follows. The bound on $\\big|\\ddot{\\bar{\\hat{B}}}_t\\big|$ is similar.\n\\end{proof}\n\n\n\\subsection{A Universal Control}\n\nLet $b\\in\\S(\\kappa,R,\\lambda)$, $\\varsigma\\in\\ensuremath{\\mathcal{C}}_0([0,1],\\ensuremath{\\mathbb{R}}^n)$, and $u\\in L^\\infty([0,1],\\ensuremath{\\mathbb{R}}^n)$.
Let us consider the following controlled ordinary differential equation:\n\\begin{equation}\\label{eq:controlled_ode}\n x^{\\varsigma,u}(t)=x_0+\\int_0^t b\\big(x^{\\varsigma,u}(s)\\big)\\,ds+\\varsigma(t)+\\int_0^t u(s)\\,ds,\\qquad t\\in[0,1].\n\\end{equation}\nWe think of $\\varsigma$ as an external `adversary' and of $u$ as a control. Since $b$ is Lipschitz continuous, it is standard that there is a unique global solution to \\eqref{eq:controlled_ode}. If $u\\equiv 0$, we adopt the shorthand $x^{\\varsigma}\\ensuremath\\triangleq x^{\\varsigma,0}$.\n\nThe aim of this section is to exhibit an $\\eta\\in(0,1)$ as large as possible so that the following holds: Given $\\bar{R}>0$, there is an $M>0$ such that, for any adversary $\\varsigma\\in\\ensuremath{\\mathcal{C}}_0([0,1],\\ensuremath{\\mathbb{R}}^n)$ and any initial condition $x_0\\in\\ensuremath{\\mathbb{R}}^n$, we can find a control $u\\in L^\\infty([0,1],\\ensuremath{\\mathbb{R}}^n)$ with $|u|_\\infty\\leq M$ ensuring that the occupation time of $x^{\\varsigma,u}$ of the set $\\ensuremath{\\mathbb{R}}^n\\setminus B_{\\bar{R}}$ is at least $\\eta$. It is important to emphasize that the sup-norm of the control $|u|_\\infty$ may neither depend on the adversary $\\varsigma$ nor on the initial condition $x_0$ (otherwise the construction of $u$ essentially becomes trivial). 
We shall actually choose $u$ as a concatenation of the zero function and a \\emph{universal} control $\\hat u\\in L^\\infty([0,N^{-1}],\\ensuremath{\\mathbb{R}}^n)$ for a sufficiently large, but universal, $N\\in\\ensuremath{\\mathbb{N}}$.\n\nWe begin with a lemma:\n\n\\begin{lemma}\\label{lem:control_bound}\n There is a constant $C>0$ independent of $\\varsigma$ and $u$ such that, for the solution of \\eqref{eq:controlled_ode},\n \\begin{equation*}\n |x^{\\varsigma,u}(t)-x^{\\varsigma}(t)|^2\\leq C(1+|u|^2_\\infty)t\n \\end{equation*}\n for all $t\\in [0,1]$.\n\\end{lemma}\n\\begin{proof}\n Since $b$ is contractive on the large scale, there are constants $D,\\tilde{\\kappa}>0$ such that\n \\begin{equation*}\n \\braket{b(x)-b(y),x-y}\\leq D-\\tilde{\\kappa}|x-y|^2\n \\end{equation*}\n for all $x,y\\in\\ensuremath{\\mathbb{R}}^n$, see \\cref{rem:large_scall_off_diagonal} \\ref{it:off_diagonal}. Define now $f(t)\\ensuremath\\triangleq e^{\\tilde\\kappa t}\\big|x^{\\varsigma,u}(t)-x^{\\varsigma}(t)\\big|^2$; then\n \\begin{equation*}\n f^\\prime(t)=\\tilde\\kappa f(t)+2e^{\\tilde\\kappa t}\\Braket{b\\big(x^{\\varsigma,u}(t)\\big)-b\\big(x^{\\varsigma}(t)\\big)+u(t),x^{\\varsigma,u}(t)-x^{\\varsigma}(t)}\\leq \\left(2D+\\frac{|u(t)|^2}{\\tilde\\kappa}\\right)e^{\\tilde\\kappa t}\n \\end{equation*}\n for all $t\\in [0,1]$. Consequently, setting $C\\ensuremath\\triangleq \\max(2D,\\tilde\\kappa^{-1})$, we have\n \\begin{equation*}\n \\big|x^{\\varsigma,u}(t)-x^{\\varsigma}(t)\\big|^2\\leq C \\int_0^t e^{-\\tilde\\kappa(t-s)}\\left(1+|u(s)|^2\\right)\\,ds\n \\end{equation*}\n and the lemma follows at once.\n\\end{proof}\n\nFor a piecewise constant function $u:[0,1]\\to\\ensuremath{\\mathbb{R}}^n$, let $\\ensuremath{\\mathcal{D}}_u\\subset[0,1]$ denote the finite set of discontinuities. We then have the following control result:\n\\begin{proposition}\\label{prop:control}\n\tLet $\\eta<\\frac12$ and $\\bar{R}>0$.
Then there is a value $M>0$ such that the following holds true: For each $\\varsigma\\in\\ensuremath{\\mathcal{C}}_0([0,1],\\ensuremath{\\mathbb{R}}^n)$ and each $x_0\\in\\ensuremath{\\mathbb{R}}^n$, we can find a piecewise constant control $u\\in L^\\infty([0,1],\\ensuremath{\\mathbb{R}}^n)$ with $|u|_\\infty+|\\ensuremath{\\mathcal{D}}_u|\\leq M$ such that the occupation time of $x^{\\varsigma,u}$ of the set $\\ensuremath{\\mathbb{R}}^n\\setminus B_{\\bar{R}}$ is greater than or equal to $\\eta$.\n\\end{proposition}\n\n\\begin{proof}\n We prove that there exists an integer $N$ and a control $\\hat u\\in L^\\infty([0,N^{-1}],\\ensuremath{\\mathbb{R}}^n)$ with at most two constant pieces, independent of both the initial condition $x_0$ and the adversary $\\varsigma$, such that either\n \\begin{equation*}\n \\mathop{\\mathrm {Leb}} \\Big(\\Big\\{t\\in[0,N^{-1}]:|x^{\\varsigma}(t)|>\\bar{R} \\Big\\}\\Big)\\geq \\frac{\\eta}{N}\\quad\\text{or}\\quad\\mathop{\\mathrm {Leb}} \\Big(\\Big\\{t\\in[0,N^{-1}]:|x^{\\varsigma,\\hat u}(t)|>\\bar{R}\\Big\\}\\Big)\\geq \\frac{\\eta}{N}.\n \\end{equation*}\n In the former case, we of course choose $u\\equiv 0$; otherwise we let $u=\\hat u$. By the flow property of well-posed ordinary differential equations, the solution to \\eqref{eq:controlled_ode} restarted at time $N^{-1}$ solves a similar equation (with new adversary $\\tilde{\\varsigma}(\\cdot)=\\theta_{N^{-1}} \\varsigma \\in\\ensuremath{\\mathcal{C}}_0([0,1-N^{-1}],\\ensuremath{\\mathbb{R}}^n)$ and initial condition $x^{\\varsigma,u}(N^{-1})$). Upon constructing $\\hat u$, we can thus easily deduce the proposition by iterating this construction.\n\n Suppose that the time spent by the uncontrolled solution $(x_t^\\varsigma)_{t \\in [0,N^{-1}]}$ in $\\ensuremath{\\mathbb{R}}^n\\setminus B_{\\bar{R}}$ is strictly less than $\\frac{\\eta}{N}$. We let $A_{x_0,\\varsigma}$ be the set of times $t\\in[0,N^{-1}]$ at which $|x^\\varsigma(t)|\\leq \\bar{R}$.
Note that $A_{x_0,\\varsigma}$ is the union of a countable number of closed, disjoint intervals. By assumption, we have $\\mathop{\\mathrm {Leb}}(A_{x_0,\\varsigma})>(1-\\eta)N^{-1}$. \n \n For $\\delta\\ensuremath\\triangleq (2N)^{-1}$ and $e$ any fixed unit vector, we define $\\hat u$ to be the piecewise constant function\n \\begin{equation*}\n \\hat u(t)=\\begin{cases}\n \\frac{2\\bar{R}+1}{(1-2\\eta)\\delta}e, & t\\in [0, \\delta],\\\\\n -\\frac{2\\bar{R}+1}{(1-2\\eta)\\delta}e, & t\\in (\\delta, 2\\delta],\n \\end{cases}\n \\end{equation*}\n so that\n \\begin{equation*}\n \\int_0^t \\hat u(s)\\,ds=\\begin{cases}\n \\frac{2\\bar{R}+1}{(1-2\\eta)\\delta}te, & t\\in [0, \\delta],\\\\\n \\frac{2\\bar{R}+1}{(1-2\\eta)\\delta}(2\\delta-t)e, & t\\in (\\delta, 2\\delta].\n \\end{cases}\n \\end{equation*}\n We observe that\n \\begin{equation}\\label{eq:lower_control}\n |x^{\\varsigma,\\hat u}(t)|\\geq \\left |\\int_0^t \\hat u(s)\\,ds \\right|-|x^\\varsigma(t)|-\\Lip{b}\\int_{0}^t\\big|x^{\\varsigma,\\hat u}(s)-x^\\varsigma(s)\\big|\\,ds.\n \\end{equation}\n Moreover, owing to \\cref{lem:control_bound}, we can bound\n \\begin{equation}\\label{eq:lower_control_n}\n \\phantom{\\leq}\\int_{0}^t\\big|x^{\\varsigma,\\hat u}(s)-x^\\varsigma(s)\\big|\\,ds\\leq\\sqrt{C}(1+|\\hat u|_\\infty)\\int_{0}^{2\\delta}\\sqrt{s}\\,ds=\\frac{2\\sqrt{C}}{3 N^{\\frac32}}\\left(1+\\frac{2(2\\bar{R}+1)N}{1-2\\eta}\\right)<\\Lip{b}^{-1},\n \\end{equation}\n provided we choose the integer $N=N(C,\\bar{R},\\eta,\\Lip{b})$ large enough. Define the set $B_{x_0,\\varsigma}\\ensuremath\\triangleq A_{x_0,\\varsigma}\\cap [(1-2\\eta)\\delta,(1+2\\eta)\\delta]$. Combining \\eqref{eq:lower_control} and \\eqref{eq:lower_control_n}, we then certainly have that $|x^{\\varsigma,\\hat u}(t)|>\\bar{R}$ for all $t\\in B_{x_0,\\varsigma}$. 
Since\n \\begin{equation*}\n \\mathop{\\mathrm {Leb}}(B_{x_0,\\varsigma})\\geq\\frac{(1-\\eta)}{N}-2(1-2\\eta)\\delta=\\frac{\\eta}{N}\n \\end{equation*}\n and $|\\hat u|_\\infty$ as well as $|\\ensuremath{\\mathcal{D}}_{\\hat{u}}|$ only depend on $N$ and $\\bar R$, this finishes the proof.\n\\end{proof}\n\nWe conclude our study of the deterministic controlled ODE \\eqref{eq:controlled_ode} with the following stability result which is proven by a standard Gr\\\"onwall argument:\n\\begin{lemma}\\label{lem:cont_control}\n Let $x^{\\varsigma,u}$ denote the solution to the controlled differential equation \\eqref{eq:controlled_ode} with initial condition $x_0\\in\\ensuremath{\\mathbb{R}}^n$ and control $u\\in L^\\infty([0,1],\\ensuremath{\\mathbb{R}}^n)$. Then, for any $w\\in\\ensuremath{\\mathcal{C}}_0([0,1],\\ensuremath{\\mathbb{R}}^n)$, we have the bound\n \\begin{equation*}\n |x^{\\varsigma,u}-\\tilde{x}|_\\infty\\leq e^{\\Lip{b}}\\left|\\int_0^\\cdot u(s)\\,ds-w\\right|_\\infty,\n \\end{equation*}\n where $\\tilde{x}$ is the unique solution to\n \\begin{equation*}\n \\tilde{x}(t)=x_0+\\int_0^t b\\big(\\tilde{x}(s)\\big)\\,ds+w(t)+\\varsigma(t),\\qquad t\\in[0,1].\n \\end{equation*}\n\\end{lemma}\n\n\n\n\\subsection{Exponential Stability of the Conditional Evolution}\nWe now turn to the conditional evolution of \\eqref{eq:general_flow-fixed-x} derived in \\cref{lem:conditioning}. For brevity, we drop the hat on the driving fBm throughout this and the next section. Remember that we have to study SDEs driven by a Riemann-Liouville process \n\\begin{equation*}\n\\tilde{B}_t\\ensuremath\\triangleq\\alpha_H\\int_0^{t} (t-u)^{H-\\frac12}\\,dW_u,\n\\end{equation*}\nwhere $(W_t)_{t\\geq 0}$ is a standard Wiener process. 
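By the It\\^o isometry, the variance of this process is explicit: $\\Expec{|\\tilde{B}_t|^2}=\\alpha_H^2\\,t^{2H}/(2H)$ in the scalar case. The following minimal numerical sketch (purely illustrative and not part of the argument; the Hurst index, the grid size, and the normalization $\\alpha_H=1$ are arbitrary choices) reproduces this variance by discretizing the kernel of the Wiener integral:

```python
import numpy as np

# Illustrative parameters (not from the paper): Hurst index H in (1/2, 1),
# time horizon t, normalization alpha_H set to 1, and a midpoint grid of size n.
H, alpha, t, n = 0.7, 1.0, 1.5, 200_000

du = t / n
u = (np.arange(n) + 0.5) * du
# Kernel of the Riemann-Liouville process: k(u) = alpha * (t - u)^(H - 1/2).
kernel = alpha * (t - u) ** (H - 0.5)

# Ito isometry: Var(B~_t) = alpha^2 * int_0^t (t - u)^(2H - 1) du = alpha^2 * t^(2H) / (2H).
var_quad = np.sum(kernel ** 2) * du
var_exact = alpha ** 2 * t ** (2 * H) / (2 * H)
assert abs(var_quad - var_exact) / var_exact < 1e-3

# One sample of B~_t via the same Riemann-sum discretization of the Wiener integral.
rng = np.random.default_rng(0)
sample = np.sum(kernel * rng.standard_normal(n)) * np.sqrt(du)
print(var_quad, var_exact, sample)
```

The $t^{2H}$ scaling of the variance reflects the self-similarity of the Riemann-Liouville process.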
Recall from \\cref{def:flow} that, for $\\varsigma\\in\\ensuremath{\\mathcal{C}}_0(\\ensuremath{\\mathbb{R}}_+,\\ensuremath{\\mathbb{R}}^n)$, $\\Psi_{s,t}(\\cdot, \\varsigma+\\sigma\\tilde B)$ denotes the solution flow to the equation \n\\begin{equation}\\label{eq:rl_sde}\n dX_t=b(X_t)\\,dt+d\\varsigma_t+\\sigma\\,d\\tilde{B}_t.\n\\end{equation}\nFor brevity, let us henceforth set $\\Psi_{s,t}^{\\varsigma}(\\cdot)\\ensuremath\\triangleq\\Psi_{s,t}(\\cdot,\\varsigma+\\sigma\\tilde B)$.\n\nWe first prove that---starting from any two initial points---the laws of the solutions converge to each other with an exponential rate. This however does not yet imply the convergence of $\\L\\big(\\Psi_t^{\\varsigma}(x)\\big)$ to the first marginal of the invariant measure $\\pi$ of the equation $dX_t=b(X_t)\\,dt+\\sigma\\,dB_t$ since, even if we choose $X_0\\sim\\pi$, we have $\\L\\big(\\Psi_t^{\\varsigma}(X_0)\\big)\\neq\\pi$ for $t>0$ in general. \n\nAs a preparation, we let $\\big(\\ensuremath{\\mathcal{C}}_0([0,1],\\ensuremath{\\mathbb{R}}^n),\\ensuremath{\\mathcal{H}}_H,\\mu_H\\big)$ denote the abstract Wiener space induced by the Gaussian process $(\\tilde B_t)_{t\\in[0,1]}$. Recall that the Cameron-Martin space is given by $\\ensuremath{\\mathcal{H}}_H=\\mathscr{K}_H(H_0^1)$, where\n\\begin{equation*}\n \\mathscr{K}_H f(t)\\ensuremath\\triangleq\\begin{cases}\n \\displaystyle\\alpha_H\\int_0^t (t-s)^{H-\\frac32}f(s)\\,ds, & H>\\frac12,\\\\ \n \\displaystyle\\alpha_H\\frac{d}{dt}\\int_0^t (t-s)^{H-\\frac12}f(s)\\,ds, & H<\\frac12,\n \\end{cases}\\qquad t\\in[0,1],\n\\end{equation*}\nand \n\\begin{equation*}\n H_0^1\\ensuremath\\triangleq\\left\\{f=\\int_0^\\cdot\\dot{f}(s)\\,ds:\\,\\dot{f}\\in L^2([0,1],\\ensuremath{\\mathbb{R}}^n)\\right\\}\n\\end{equation*}\nis the Cameron-Martin space of the standard Wiener process. 
The inner product on $\\ensuremath{\\mathcal{H}}_H$ is defined by $\\braket{\\mathscr{K}_H f,\\mathscr{K}_H g}_{\\ensuremath{\\mathcal{H}}_H}\\ensuremath\\triangleq\\braket{\\dot{f},\\dot{g}}_{L^2}$. \n\nWe shall make use of the following simple observation:\n\\begin{lemma}\\label{lem:cameron_martin_facts}\n Let $f:[0,1]\\to\\ensuremath{\\mathbb{R}}^n$ be piecewise linear with $f(0)=0$. Then, for each $H\\in(0,1)$, $f\\in\\ensuremath{\\mathcal{H}}_H$ and\n \\begin{equation}\\label{eq:cameron_martin_bound}\n \\|f\\|_{\\ensuremath{\\mathcal{H}}_H}\\lesssim|\\dot{f}|_\\infty \\big(1+\\big|\\ensuremath{\\mathcal{D}}_{\\dot{f}}\\big|\\big).\n \\end{equation}\n\\end{lemma}\n\\begin{proof}\n It follows from \\cite[Theorem 5]{Picard2011} (see also \\cite{Samko1993}) that the inverse of $\\ensuremath{\\mathscr{K}}_H$ exists on the set of Lipschitz functions and there is a numerical constant $\\varrho_H>0$ such that $\\ensuremath{\\mathscr{K}}_H^{-1}=\\varrho_H\\ensuremath{\\mathscr{K}}_{1-H}$. Notice also that we have $\\frac{d}{dt}\\ensuremath{\\mathscr{K}}_H^{-1}f=\\ensuremath{\\mathscr{K}}_H^{-1}\\dot{f}$.\n \n Let us first consider the case $H<\\frac12$. The bound \\eqref{eq:cameron_martin_bound} is an immediate consequence of \n \\begin{equation*}\n \\left|\\frac{d}{dt}\\ensuremath{\\mathscr{K}}_H^{-1} f(t)\\right|\\leq\\varrho_H\\int_0^t (t-s)^{-H-\\frac12}\\big|\\dot{f}(s)\\big|\\,ds\\lesssim|\\dot{f}|_\\infty \\qquad\\forall\\,t\\in[0,1].\n \\end{equation*}\n For $H>\\frac12$ we let $\\tau_1,\\dots,\\tau_k$ denote the jump points of $\\dot{f}$ in the interval $[0,t)$. 
Notice that\n \\begin{align*}\n \\left|\\frac{d}{dt}\\ensuremath{\\mathscr{K}}_H^{-1} f(t)\\right|&\\leq\\varrho_H\\left|\\frac{d}{dt}\\left(\\int_0^{\\tau_1}(t-s)^{\\frac12-H}\\dot{f}(s)\\,ds+\\sum_{i=1}^{k-1}\\int_{\\tau_i}^{\\tau_{i+1}}(t-s)^{\\frac12-H}\\dot{f}(s)\\,ds+\\int_{\\tau_k}^t (t-s)^{\\frac12-H}\\dot{f}(s)\\,ds\\right)\\right| \\\\\n &\\lesssim |\\dot{f}|_\\infty\\big(1+|\\ensuremath{\\mathcal{D}}_{\\dot{f}}|\\big)t^{\\frac12-H}.\n \\end{align*}\n Since $1-2H>-1$, we obtain\n \\begin{equation*}\n \\|f\\|_{\\ensuremath{\\mathcal{H}}_H}=\\left\\|\\frac{d}{dt}\\ensuremath{\\mathscr{K}}_H^{-1}f\\right\\|_{L^2}\\lesssim|\\dot{f}|_\\infty\\big(1+|\\ensuremath{\\mathcal{D}}_{\\dot{f}}|\\big),\n \\end{equation*}\n as required.\n \n\\end{proof}\n\n\nThe next important lemma lifts the control result of \\cref{prop:control} to solutions of SDEs with additive noise:\n\\begin{lemma}\\label{lem:probabilistic_control}\n Let $b\\in\\S(\\kappa, R,\\lambda)$ and $\\sigma\\in\\Lin{n}$ be invertible. Then, for any $\\bar R>0$ and any $\\eta\\in(0,\\frac12)$, there is a constant $\\a_{\\eta,\\bar{R}}>0$ such that the following holds: For each $x\\in\\ensuremath{\\mathbb{R}}^n$ and each $\\varsigma\\in\\ensuremath{\\mathcal{C}}_0(\\ensuremath{\\mathbb{R}}_+,\\ensuremath{\\mathbb{R}}^n)$, we can find an event $\\ensuremath{\\mathscr{A}}_{x,\\varsigma}$ with $\\ensuremath\\mathbb{P}(\\ensuremath{\\mathscr{A}}_{x,\\varsigma})\\geq\\a_{\\eta,\\bar{R}}$ such that\n \\begin{equation*}\n \\int_0^1 \\mathbf 1_{\\big\\{t: \\big|\\Psi_{t}^{\\varsigma}(x)(\\omega)\\big|>\\bar R\\big\\}}(s)\\,ds > \\eta \\qquad \\forall\\, \\omega \\in \\ensuremath{\\mathscr{A}}_{x,\\varsigma}.\n \\end{equation*}\n\\end{lemma}\n\n\\begin{proof}\n Let $u_{x,\\varsigma}\\in L^\\infty([0,1],\\ensuremath{\\mathbb{R}}^n)$ be the piecewise constant control furnished by \\cref{prop:control} such that the occupation time of $x^{\\varsigma,u_{x,\\varsigma}}$ of the set $\\ensuremath{\\mathbb{R}}^n\\setminus B_{\\bar{R}+1}$ is greater than $\\eta$.
We set $U_{x,\\varsigma}\\ensuremath\\triangleq\\int_0^\\cdot u_{x,\\varsigma}(s)\\,ds$ and note that $U_{x,\\varsigma}$ is piecewise linear. \\Cref{lem:cont_control} allows us to choose an $\\varepsilon>0$ (independent of $x$ and $\\varsigma$) such that, on the event $\\ensuremath{\\mathscr{A}}_{x,\\varsigma}\\ensuremath\\triangleq\\big\\{\\big|U_{x,\\varsigma}-\\sigma\\tilde{B}\\big|_\\infty\\leq\\varepsilon\\big\\}$, the occupation time of $\\big(\\Psi^{\\varsigma}_{h}(x)\\big)_{h\\in[0,1]}$ of $\\ensuremath{\\mathbb{R}}^n\\setminus B_{\\bar R} $ exceeds $\\eta$. \n\n It remains to show that $\\inf_{x,\\varsigma}\\ensuremath\\mathbb{P}(\\ensuremath{\\mathscr{A}}_{x,\\varsigma})>0$. To this end, we first note that $U_{x,\\varsigma}\\in\\ensuremath{\\mathcal{H}}_H$ by \\cref{lem:cameron_martin_facts}. By the Cameron-Martin formula (see e.g. \\cite{Bogachev1998}), \n \\begin{align*}\n \\ensuremath\\mathbb{P}(\\ensuremath{\\mathscr{A}}_{x,\\varsigma})&\\geq\\ensuremath\\mathbb{P}\\big(\\big|\\sigma^{-1}U_{x,\\varsigma}-\\tilde{B}\\big|_\\infty\\leq|\\sigma|^{-1}\\varepsilon\\big)\\\\\n &=\\exp\\left(-\\frac12\\|\\sigma^{-1}U_{x,\\varsigma}\\|_{\\ensuremath{\\mathcal{H}}_H}^2\\right)\\int_{\\{|w|_\\infty\\leq|\\sigma|^{-1}\\varepsilon\\}}e^{\\braket{w,\\sigma^{-1}U_{x,\\varsigma}}_{\\ensuremath{\\mathcal{H}}_H}}\\,\\mu_H(dw).\n \\end{align*}\n Consequently, Jensen's inequality and spherical symmetry give\n \\begin{equation}\\label{eq:quant_lower_bound}\n \\ensuremath\\mathbb{P}(\\ensuremath{\\mathscr{A}}_{x,\\varsigma})\\geq\\exp\\left(-\\frac12\\|\\sigma^{-1}U_{x,\\varsigma}\\|_{\\ensuremath{\\mathcal{H}}_H}^2\\right)\\ensuremath\\mathbb{P}\\big(|\\tilde{B}|_\\infty\\leq|\\sigma|^{-1}\\varepsilon\\big).\n \\end{equation}\n Combining \\cref{prop:control,lem:cameron_martin_facts}, we obtain that $\\sup_{x,\\varsigma}\\|U_{x,\\varsigma}\\|_{\\ensuremath{\\mathcal{H}}_H}\\lesssim M(1+M)$.
This concludes the proof.\n\\end{proof}\n\n\\begin{proposition}\\label{prop:conditional_initial_condition_wasserstein}\n Let $\\sigma\\in\\Lin{n}$ be invertible. Then, for any $\\kappa, R>0$ and any $p\\geq 1$, there exists a number $\\Lambda=\\Lambda(\\kappa,R,p)\\in(0,\\kappa)$ such that the following holds: If $b\\in\\S\\big(\\kappa,R, \\Lambda\\big)$, there are constants $c,C>0$ such that, for any $\\varsigma\\in\\ensuremath{\\mathcal{C}}_0([0,1],\\ensuremath{\\mathbb{R}}^n)$,\n \\begin{equation*}\n \\ensuremath{\\mathcal{W}}^p\\Big( \\L\\big( {\\Psi}^{\\varsigma}_t(Y)\\big), \\L\\big( {\\Psi}^{\\varsigma}_t(\\tilde Y)\\big)\\Big)\\leq C\\ensuremath{\\mathcal{W}}^p\\big(\\L(Y),\\L(\\tilde Y)\\big) e^{-c t}\n \\end{equation*}\n for all $t\\geq 0$.\n \\end{proposition}\n\n\\begin{proof} \nWrite $X_t\\ensuremath\\triangleq\\Psi^{\\varsigma}_{t}(Y)$ and $Z_t=\\Psi_{t}^{\\varsigma}(\\tilde Y)$. Let $\\mu_t\\ensuremath\\triangleq\\L(X_t)$ and $\\nu_t\\ensuremath\\triangleq\\L(Z_t)$, thus $(X_t,Z_t)$ is a synchronous coupling of $\\mu_t$ and $\\nu_t$. Our strategy for proving the exponential convergence of $t\\mapsto\\ensuremath{\\mathcal{W}}^p(\\mu_t,\\nu_t)$ is to show that, for any $t>0$, the evolution of $(X_s)_{s\\in[t,t+1]}$ conditional on ${\\mathcal F}_t$ spends a sufficient amount of time in the contractive region $\\{|x|>R\\}$. As noted in \\cref{example-1} \\ref{it:rough_decomposition}, there is an independent increment decomposition $(\\theta_t\\tilde{B})_{h}= Q^t_h+\\tilde{B}^t_h$ for the Riemann-Liouville process. 
Using this and the conditional evolution derived in \\cref{lem:conditioning_general}, we find\n \\begin{align}\n \\Expec{\\big|X_{t+1}-Z_{t+1}\\big|^p}& = \\Expec{\\Expec{\\big|\\Psi^{\\varsigma}_{t,t+1}(X_t)-\\Psi^{\\varsigma}_{t,t+1}(Z_t)\\big|^p\\,\\middle|\\,{\\mathcal F}_t}}\\nonumber\\\\\n &= \\Expec{\\Expec{\\Big|\\Psi_{1}\\big(X_t, \\theta_t\\varsigma+\\sigma\\theta_t\\tilde B\\big)-\\Psi_{1}\\big(Z_t, \\theta_t\\varsigma+\\sigma \\theta_t\\tilde B\\big)\\Big|^p\\,\\middle|\\,{\\mathcal F}_t}}\\nonumber\\\\\n &=\\Expec{\\Expec{\\Big|\\Psi_{1}\\big(X_t, \\theta_t\\varsigma+\\sigma Q^t+\\sigma\\tilde{B}^t\\big)-\\Psi_{1}\\big(Z_t, \\theta_t\\varsigma+\\sigma Q^t+\\sigma\\tilde{B}^t\\big)\\Big|^p\\,\\middle|\\,{\\mathcal F}_t}} \\nonumber\\\\\n &= \\Expec{ \\Expec{\\Big|\\Psi^{\\theta_t\\varsigma+\\ell}_{1} (x)-\\Psi_{1}^{\\theta_t\\varsigma+\\ell}(z)\\Big|^p}\\bigg|_{\\substackal{x&=X_t,z=Z_t,\\\\\\ell&=\\sigma Q^t}}},\\label{two-initial-conditions}\n \\end{align}\n where in the last step we also used that $(\\tilde{B}^t_{h})_{h\\geq 0}\\overset{d}{=}(\\tilde{B}_{h})_{h\\geq 0}$.\n\nBy assumption, the drift $b$ does not expand by more than a factor of $\\Lambda$ on all of $\\ensuremath{\\mathbb{R}}^n$. We therefore have the pathwise estimate\n\\begin{equation}\\label{eq:rl_lipschitz}\n\\big|\\Psi_{s,t}^{\\theta_t\\varsigma+\\ell}(x)-\\Psi_{s,t}^{\\theta_t\\varsigma+\\ell}(z)\\big|^p\\leq e^{p(t-s)\\Lambda} |x-z|^p\n\\end{equation}\nfor all $0\\leq s< t\\leq 1$. Let $\\eta\\in(0,\\frac12)$ and $\\bar\\kappa\\in(0,\\kappa)$ be such that $\\Xi\\ensuremath\\triangleq\\bar\\kappa\\eta-\\Lambda(1-\\eta)>0$ (recall that we assume $\\Lambda<\\kappa$). Let $\\bar R>R$ be the corresponding radius furnished by \\cref{lem:bigger_ball}. 
For any $x\\in\\ensuremath{\\mathbb{R}}^n$ and any $\\varsigma,\\ell\\in\\ensuremath{\\mathcal{C}}_0(\\ensuremath{\\mathbb{R}}_+,\\ensuremath{\\mathbb{R}}^n)$, let $\\ensuremath{\\mathscr{A}}_{x,\\theta_t\\varsigma+\\ell}$ be the event from \\cref{lem:probabilistic_control}. Recall that $\\ensuremath\\mathbb{P}(\\ensuremath{\\mathscr{A}}_{x,\\theta_t\\varsigma+\\ell})\\geq\\a_{\\eta,\\bar{R}}>0$ and \n\\begin{equation*}\n \\int_0^1 \\mathbf 1_{\\big\\{s: \\big|\\Psi_{s}^{\\theta_t\\varsigma+\\ell}(x)(\\omega)\\big|>\\bar R\\big\\}}(r)\\,dr > \\eta \\qquad \\forall\\, \\omega \\in \\ensuremath{\\mathscr{A}}_{x,\\theta_t\\varsigma+\\ell}.\n\\end{equation*}\nSince $\\Xi>0$, by possibly decreasing $\\Lambda$ we can also ensure that \n\\begin{equation}\\label{eq:lambda}\n 0<\\Lambda < \\frac1p\\log\\left(\\frac{1-\\a_{\\eta,\\bar{R}}e^{-p\\Xi}}{1-\\a_{\\eta,\\bar{R}}}\\right).\n\\end{equation}\nOwing to pathwise continuity of $h\\mapsto\\Psi_h^{\\theta_t\\varsigma+\\ell}(x)$, there are random times $t_1,\\dots,t_{2N(\\omega)}$ such that, for all $\\omega\\in \\ensuremath{\\mathscr{A}}_{x,\\theta_t\\varsigma+\\ell}$,\n\\begin{itemize}\n \\item $0\\leq t_1(\\omega)<\\cdots<t_{2N(\\omega)}(\\omega)\\leq 1$,\n \\item $\\sum_{i=1}^{N(\\omega)}\\big(t_{2i}(\\omega)-t_{2i-1}(\\omega)\\big)\\geq\\eta$,\n \\item $\\bigcup_{i=1}^{N(\\omega)}\\big[t_{2i-1}(\\omega),t_{2i}(\\omega)\\big]\\subset\\big\\{h\\in[0,1]:\\,\\big|\\Psi_h^{\\theta_t\\varsigma+\\ell}(x)(\\omega)\\big|>\\bar R\\big\\}$.\n\\end{itemize}\nTogether with \\eqref{eq:rl_lipschitz} it follows that, on the event $\\ensuremath{\\mathscr{A}}_{x,\\theta_t\\varsigma+\\ell}$,\n\\begin{align*}\n&\\phantom{\\leq}\\big|\\Psi_{1}^{\\theta_t\\varsigma+\\ell}(x)-\\Psi_{1}^{\\theta_t\\varsigma+\\ell}(z)\\big|^p\n= \\Big|\\Psi_{t_{2N},1}^{\\theta_t\\varsigma+\\ell}\\Big(\\Psi_{t_{2N}}^{\\theta_t\\varsigma+\\ell} (x)\\Big)-\\Psi_{t_{2N},1}^{\\theta_t\\varsigma+\\ell}\\Big(\\Psi_{t_{2N}}^{\\theta_t\\varsigma+\\ell} (z)\\Big)\\Big|^p\\\\\n&\\leq e^{p(1-t_{2N})\\Lambda} \\big|\\Psi_{t_{2N}}^{\\theta_t\\varsigma+\\ell}(x)-\\Psi_{t_{2N}}^{\\theta_t\\varsigma+\\ell}(z)\\big|^p\\\\\n&\\leq e^{p (1-t_{2N})\\Lambda} e^{-p (t_{2N}-t_{2N-1})\\bar\\kappa} 
\\big|\\Psi_{t_{2N-1}}^{\\theta_t\\varsigma+\\ell}(x)-\\Psi_{t_{2N-1}}^{\\theta_t\\varsigma+\\ell}(z)\\big|^p\\\\\n&\\leq\\cdots\\leq \\exp\\left[p\\left(\\Lambda\\sum_{i=0}^N(t_{2i+1}-t_{2i})-\\bar\\kappa\\sum_{i=1}^{N} (t_{2i}-t_{2i-1}) \\right)\\right] |x-z|^p\\\\\n&\\leq e^{ -p\\Xi}|x-z|^p,\n\\end{align*}\nwhere we have set $t_0\\ensuremath\\triangleq 0$ and $t_{2N+1}\\ensuremath\\triangleq 1$ for convenience. On the complementary event $\\Omega\\setminus \\ensuremath{\\mathscr{A}}_{x,\\theta_t\\varsigma+\\ell}$, we apply the trivial estimate \\eqref{eq:rl_lipschitz}. Inserting these bounds back into \\eqref{two-initial-conditions}, we conclude that\n\\begin{equation*}\n \\Expec{\\big|X_{t+1}-Z_{t+1}\\big|^p}\\leq\\Big(\\big(1-\\a_{\\eta,\\bar R}\\big)e^{p\\Lambda}+\\a_{\\eta,\\bar R}e^{-p\\Xi}\\Big)\\Expec{|X_t-Z_t|^p}\\ensuremath\\triangleq\\rho\\Expec{|X_t-Z_t|^p}.\n\\end{equation*}\nObserve that $\\rho<1$ by \\eqref{eq:lambda}. Finally, a straightforward induction shows that\n\\begin{equation}\\label{eq:to_minimize}\n \\ensuremath{\\mathcal{W}}^p\\Big(\\L\\big( {\\Psi}^{\\varsigma}_t(Y)\\big), \\L\\big( {\\Psi}^{\\varsigma}_t(\\tilde Y)\\big)\\Big)\\leq\n \\big\\|X_t-Z_t\\big\\|_{L^p}\\leq e^{\\Lambda} \\rho^{[t]/p}\\big\\|Y-\\tilde Y\\big\\|_{L^p}\\leq\\frac{e^{\\Lambda}}{\\rho^{1/p}}e^{-\\frac{|\\log\\rho|}{p}t}\\big\\|Y-\\tilde Y\\big\\|_{L^p},\n\\end{equation}\nwhere $[\\cdot]$ denotes the integer part. Minimize over the set of couplings of $\\L(Y)$ and $\\L(\\tilde{Y})$ to conclude the proof.\n\\end{proof}\n\nA more explicit expression for the threshold value $\\Lambda(\\kappa,R,p)$ can be derived by the method outlined in \\cref{rem:constant_xi} below. We abstain from including further details in this work. Let us however introduce the following notation:\n\\begin{definition}\n Let $\\kappa,R>0$ and $p\\geq 1$.
We abbreviate $\\S_p(\\kappa,R)\\ensuremath\\triangleq\\S\\big(\\kappa,R,\\Lambda(\\kappa,R,p)\\big)$ with the constant from \\cref{prop:conditional_initial_condition_wasserstein}.\n\\end{definition}\n\nBy \\cref{lem:conditioning}, the Wasserstein bound of \\cref{prop:conditional_initial_condition_wasserstein} lifts to bounds on the fast motion with frozen slow input \\eqref{eq:general_flow-fixed-x}. We obtain the following Lipschitz dependence of the flow $\\bar \\Phi$ on the initial value:\n\n\\begin{corollary}\\label{cor:fast_different_initial}\n Let $({\\mathcal F}_t)_{t\\geq 0}$ be a filtration compatible with the fBm $\\hat{B}$. Let $0\\leq s\\leq t$ and let $X$, $Y$, and $\\tilde{Y}$ be ${\\mathcal F}_s$-measurable random variables. Suppose that there are $\\kappa,R>0$ such that $b(x,\\cdot)\\in\\S_1(\\kappa,R)$ for every $x\\in\\ensuremath{\\mathbb{R}}^d$. Then there is a constant $c>0$ such that, for any Lipschitz continuous function $h:\\ensuremath{\\mathbb{R}}^d\\times\\ensuremath{\\mathbb{R}}^n\\to\\ensuremath{\\mathbb{R}}$,\n \\begin{equation*}\n \\Big|\\Expec{h\\big(X,\\bar{\\Phi}_{s,t}^X(Y)\\big)-h\\big(X,\\bar{\\Phi}_{s,t}^X(\\tilde{Y})\\big)\\,\\middle|\\,{\\mathcal F}_s}\\Big|\\lesssim\\Lip{h}|Y-\\tilde Y|e^{-c\\frac{|t-s|}{\\varepsilon}}.\n \\end{equation*}\n If, in addition, $b(x,\\cdot)\\in\\S_p(\\kappa,R)$ for all $x\\in\\ensuremath{\\mathbb{R}}^d$, then also\n \\begin{equation*}\n \\Big\\|\\bar{\\Phi}_{s,t}^X(Y)-\\bar{\\Phi}_{s,t}^X(\\tilde{Y})\\Big\\|_{L^p}\\lesssim\\|Y-\\tilde Y\\|_{L^p}e^{-c\\frac{|t-s|}{\\varepsilon}}.\n \\end{equation*}\n\\end{corollary}\n\\begin{proof}\n The first estimate is an immediate consequence of \\cref{lem:conditioning} and Kantorovich-Rubinstein duality. 
The second bound follows from the fact that we used a synchronous coupling in the proof of \\cref{prop:conditional_initial_condition_wasserstein}.\n\\end{proof}\n\nThe proof of \\cref{prop:conditional_initial_condition_wasserstein} shows that its conclusion actually holds if $\\tilde B$ is replaced by another process $Z$ with similar properties:\n\\begin{remark}\\label{rem:Wasserstein-general}\nLet $Z$ be a process with locally independent increment decomposition $\\theta_t Z=\\bar Z^t+\\tilde Z^t$. Assume that\n\\begin{enumerate}\n\\item the ${\\mathcal F}_t$-adapted part $\\bar Z^t$ takes values in $\\ensuremath{\\mathcal{C}}_0(\\ensuremath{\\mathbb{R}}_+,\\ensuremath{\\mathbb{R}}^n)$ and\n\\item\\label{it:cm_dense} there is a unit vector $e\\in\\ensuremath{\\mathbb{R}}^n$ such that, for each $t\\geq 0$, $\\L\\big((\\tilde Z^t_h\\cdot e)_{h\\in[0,1]}\\big)$ is supported on all of $\\ensuremath{\\mathcal{C}}_0([0,1])$.\n\\end{enumerate} \nThen a statement similar to \\cref{prop:conditional_initial_condition_wasserstein} holds.\n\\end{remark}\n\n\\begin{example}\\label{ex:titmarsh}\n Suppose that $\\tilde{Z}^t_h=\\int_t^{t+h}\\mathfrak{G}(t+h-s)\\,dW_s$ for some kernel $\\mathfrak{G}:\\ensuremath{\\mathbb{R}}_+\\to\\Lin{n}$ which is square integrable at the origin and continuous on $(0,\\infty)$. Then the requirement \\ref{it:cm_dense} in \\cref{rem:Wasserstein-general} holds if $\\int_0^t|\\mathfrak{G}(s)|\\,ds>0$ for each $t>0$. Indeed, this can be shown by a clever application of Titchmarsh's convolution theorem as in \\cite[Lemma 2.1]{Cherny2008}.\n\\end{example}\n\nThe example shows that, in particular, an fBm of any Hurst parameter $H\\in(0,1)$ falls in the regime of \\cref{rem:Wasserstein-general}. Hence, we have the following corollary to \\cref{prop:conditional_initial_condition_wasserstein}:\n\\begin{corollary}\\label{conveergence-equilibrium}\n Let $p\\geq 1$ and suppose that $b\\in\\S_p(\\kappa,R)$ for some $\\kappa,R>0$.
Let $(X_t)_{t\\geq 0}$ be the solution to \n \\begin{equation}\\label{eq:fbm_sde}\n dX_t=b(X_t)\\,dt+\\sigma\\,dB_t\n \\end{equation}\n started in the generalized initial condition $\\mu$, where $(B_t)_{t\\geq 0}$ is an fBm with Hurst parameter $H\\in(0,1)$ and $\\sigma\\in\\Lin{n}$ is invertible. Then there is a unique invariant measure $\\mathcal I_\\pi\\in\\P(\\ensuremath{\\mathbb{R}}^n\\times\\H_H)$ for the equation \\eqref{eq:fbm_sde} in the sense of \\cref{initial-condition}. Moreover, writing $\\pi=\\Pi_{\\ensuremath{\\mathbb{R}}^n}^*\\mathcal I_\\pi$ for the first marginal, there are constants $c,C>0$ such that\n \\begin{equation}\\label{eq:wasserstein_fbm}\n \\ensuremath{\\mathcal{W}}^p\\big(\\L( X_t),\\pi\\big)\\leq C\\ensuremath{\\mathbb{W}}^p(\\mu,\\mathcal I_\\pi) e^{-ct}\n \\end{equation}\n for all $t\\geq 0$.\n \\end{corollary}\n\\begin{proof}\n By \\cref{prop:existence_invariant_measure}, we know that there is an invariant measure $\\mathcal I_\\pi$ to \\eqref{eq:fbm_sde} with moments of all orders. The Wasserstein estimate \\eqref{eq:wasserstein_fbm} then follows by the very same arguments as in \\cref{prop:conditional_initial_condition_wasserstein}. The only difference is that we now have to specify a generalized initial condition $\\nu\\in\\P\\big((\\ensuremath{\\mathbb{R}}^n\\times\\H_H)^2\\big)$ for the coupling $(X_t,Z_t)$, see \\cref{sec:physical_solution}. Unlike for the conditioned dynamics, we have $Z_t\\sim\\pi$ if we start $Z$ in the invariant measure $\\mathcal I_\\pi$. In order for our previous argument to apply, we need to ensure that the pasts of the noises in the synchronous coupling coincide.
In \\eqref{eq:to_minimize} we can thus only minimize over couplings in the set \n \\begin{equation*}\n \\big\\{\\rho\\in\\P\\big((\\ensuremath{\\mathbb{R}}^n\\times\\H_H)^2\\big):\\,\\rho(\\ensuremath{\\mathbb{R}}^n\\times\\ensuremath{\\mathbb{R}}^n\\times\\Delta_{\\H_H})=1\\big\\},\n \\end{equation*}\n which precisely yields \\eqref{eq:wasserstein_fbm}.\n\\end{proof}\n \n\n\\subsection{Quenched Convergence to the Invariant Measure}\\label{quenched-convergence}\n\nThe other distance, which will play a r\\^ole in \\cref{sec:uniform_bounds} below, is between $\\L\\big(\\Psi_t^\\varsigma(Y)\\big)$ and the stationary law $\\pi$ of the equation \\eqref{eq:fbm_sde}. We stress that---contrary to the proof of \\cref{conveergence-equilibrium}---we cannot simply start the process in the invariant measure. In fact, the measure $\\pi$ is not stationary for \\eqref{eq:rl_sde} since the increments of $\\tilde{B}$ are not stationary. It is therefore necessary to wait for a sufficient decay of the deterministic `adversary' $\\varsigma$, whence we only find an algebraic rate of convergence. Before we state the result, let us first illustrate that there is indeed no hope for an exponential rate:\n\\begin{example}\n Let\n \\begin{equation*}\n dX_t=-X_t\\,dt+d\\tilde{B}_t,\\qquad dY_t=-Y_t\\,dt+dB_t. \n \\end{equation*}\n If we start both $X$ and $Y$ in the generalized initial condition $\\delta_0\\otimes\\ensuremath{\\mathsf{W}}$, then $\\L(X_t)=N(0,\\Sigma_{t}^2)$ and $\\L(Y_t)=N(0,\\bar\\Sigma_{t}^2)$ where\n \\begin{equation*}\n \\Sigma_t^2=\\bar\\Sigma_t^2-\\Expec{\\left|\\int_0^te^{-(t-s)}\\dot{\\bar{B}}_s\\,ds\\right|^2}.\n \\end{equation*}\n In particular, $\\ensuremath{\\mathcal{W}}^2\\big(\\L(X_t),\\L(Y_t)\\big)=|\\Sigma_{t}-\\bar\\Sigma_t|\\gtrsim t^{-(1-\\hat{H})}$ uniformly in $t\\geq 1$.
Since it is easy to see that $\\ensuremath{\\mathcal{W}}^2\\big(\\L(Y_t),\\pi\\big)\\lesssim e^{-t}$, it follows that $\\ensuremath{\\mathcal{W}}^2\\big(\\L(X_t),\\pi\\big)\\gtrsim t^{-(1-\\hat{H})}$.\n\\end{example}\n\n\\begin{proposition}\\label{prop:conditional_stationary_wasserstein}\n Let $p\\geq 1$, $\\varsigma\\in\\Omega_\\alpha$ for some $\\alpha>0$, and let $Y$ be an ${\\mathcal F}_0$-measurable random variable. Suppose that $b\\in\\S_p(\\kappa,R)$ for some $\\kappa,R>0$ and that $\\sigma\\in\\Lin{n}$ is invertible. Then, for each $\\beta<\\min\\big(\\alpha,1-H\\big)$, there is a constant $C>0$ such that\n \\begin{equation}\\label{eq:wasserstein_quenched}\n \\ensuremath{\\mathcal{W}}^p\\big(\\L(\\Psi^\\varsigma_t(Y)), \\pi\\big)\\leq C\\frac{\\big(1+\\|\\varsigma\\|_{\\Omega_\\beta}\\big)\\big(1+\\ensuremath{\\mathcal{W}}^p(\\L(Y),\\pi)\\big)}{t^{\\beta}}\n \\end{equation}\n for all $t>0$.\n\\end{proposition}\n\n\\begin{proof}\n Fix $t\\geq 1$, abbreviate $X\\ensuremath\\triangleq\\Psi_\\cdot^{\\varsigma}(Y)$, and let $Z$ be the stationary solution to the equation \\eqref{eq:fbm_sde}. We assume that $X$ and $Z$ are driven by the same Wiener process. Let us first consider the case $p\\geq 2$. Recall the following locally independent decompositions from \\cref{sec:increment}:\n $$\\theta_t B=\\bar B^t+\\tilde B^t, \\qquad \\theta_t \\tilde B=Q^t+\\tilde B^t.$$\n Remember also that the `smooth' part of the fBm increment can be further decomposed as $\\bar{B}^t=P^t+Q^t$, see \\cref{example-1} \\ref{it:smooth_decomposition}.
Therefore, \n \\begin{align}\n \\Expec{\\big|X_{t+1}-Z_{t+1}\\big|^p}& = \\Expec{\\Expec{\\big|\\Psi_{t,t+1}(X_t,\\varsigma+ \\sigma\\tilde B)-\\Psi_{t,t+1}(Z_t,\\sigma B)\\big|^p\\,\\middle|\\,{\\mathcal F}_t}}\\nonumber\\\\\n &=\\Expec{\\Expec{\\Big|\\Psi_{1}\\big(X_t, \\theta_t\\varsigma+\\sigma Q^t+\\sigma \\tilde B^t\\big)-\\Psi_{1}\\big(Z_t,\\sigma P^t+\\sigma Q^t+\\sigma\\tilde B^t\\big)\\Big|^p\\,\\middle|\\,{\\mathcal F}_t}}\\nonumber\\\\\n &=\\Expec{\\Big| \\Psi_1^{\\theta_t \\varsigma+ \\ell} (x) - \\Psi_1^{\\bar\\ell+\\ell} (z) \\Big|^p\\bigg|_{\\substackal{x&=X_t,z=Z_t,\\\\\\ell&=\\sigma Q^t,\\bar{\\ell}=\\sigma P^t}}}\\label{eq:expec_diff}\n \\end{align}\n Write $R_h\\ensuremath\\triangleq\\Psi_h^{\\theta_t \\varsigma+\\ell} (x)$ and $S_h\\ensuremath\\triangleq \\Psi_h^{\\bar\\ell+\\ell} (z) $. Notice that, since $\\varsigma$ and $\\bar\\ell$ are differentiable,\n \\begin{align*}\n \\frac{d}{dh}\\big|R_{h}-S_{h}\\big|^p&=p\\Braket{\\dot{\\varsigma}_{t+h}-\\dot{\\bar{\\ell}}_h+b\\big(R_{h}\\big)-b\\big(S_{h}\\big),R_{h}-S_{h}}\\big|R_{h}-S_{h}\\big|^{p-2}\\\\\n &\\leq p(\\Lambda+\\gamma)\\big|R_{h}-S_{h}\\big|^p\n +\\left(\\frac{p-1}{\\gamma p}\\right)^{p-1}\\left(|\\dot{\\varsigma}_{t+h}|+|\\dot{\\bar{\\ell}}_{h}|\\right)^p\n \\end{align*}\n for any $\\gamma>0$, where $\\Lambda=\\Lambda(\\kappa,R,p)$ is the expansion threshold derived in \\cref{prop:conditional_initial_condition_wasserstein}. 
It follows that, for any $0\\leq h_1\\leq h_2\\leq 1$,\n \\begin{align}\n &\\phantom{\\leq}\\big|R_{h_2}-S_{h_2}\\big|^p\\nonumber\\\\\n &\\leq\\big|R_{h_1}-S_{h_1}\\big|^p e^{p(\\Lambda+\\gamma)(h_2-h_1)}+\\left(\\frac{p-1}{\\gamma p}\\right)^{p-1}\\int_{h_1}^{h_2}e^{p(\\Lambda+\\gamma)(h_2-s)}\\left(|\\dot\\varsigma_{t+s}|+|\\dot{\\bar{\\ell}}_{s}|\\right)^p\\,ds\\nonumber\\\\\n &\\leq \\big|R_{h_1}-S_{h_1}\\big|^p e^{p(\\Lambda+\\gamma)(h_2-h_1)}+C_\\gamma(h_2-h_1),\\label{eq:estimate_waserstein_1}\n \\end{align}\n where we abbreviated\n \\begin{equation*}\n C_\\gamma\\ensuremath\\triangleq \\left(\\frac{p-1}{\\gamma p}\\right)^{p-1}\\left(\\frac{\\|\\varsigma\\|_{\\Omega_\\beta}}{t^{\\beta}}+|\\dot{\\bar{\\ell}}|_\\infty\\right)^p.\n \\end{equation*}\n We now argue similarly to \\cref{prop:conditional_initial_condition_wasserstein}: Pick $\\eta\\in(0,\\frac12)$ and $\\bar{\\kappa}\\in(0,\\kappa)$ such that $\\Xi\\ensuremath\\triangleq \\eta\\bar{\\kappa}-(1-\\eta)\\Lambda>0$. Let $\\bar{R}>0$ be the corresponding constant of \\cref{lem:bigger_ball} and $\\ensuremath{\\mathscr{A}}_{x,\\theta_t\\varsigma+\\ell}$ be the event furnished by \\cref{lem:probabilistic_control}. As before, we write $t_1,\\dots,t_{2N(\\omega)}$ for the random times characterizing the excursions of $(R_h)_{h\\in[0,1]}$ outside of $B_{\\bar R}$, see \\cref{prop:conditional_initial_condition_wasserstein}. By an argument similar to \\eqref{eq:estimate_waserstein_1}, \n \\begin{equation}\\label{eq:estimate_waserstein_2}\n \\big|R_{t_{2i}}-S_{t_{2i}}\\big|^p\\leq \\big|R_{t_{2i-1}}-S_{t_{2i-1}}\\big|^p e^{p(\\gamma-\\bar{\\kappa})(t_{2i}-t_{2i-1})}+C_\\gamma (t_{2i}-t_{2i-1})\n \\end{equation}\n for all $i=1,\\dots,N(\\omega)$ on the set $\\ensuremath{\\mathscr{A}}_{x,\\theta_t\\varsigma+\\ell}$. 
Combining \\eqref{eq:estimate_waserstein_1} and \\eqref{eq:estimate_waserstein_2}, we further find on this set\n \\begin{align*}\n \\phantom{\\leq}\\big|R_{1}-S_{1}\\big|^p&\\leq e^{p(\\Lambda+\\gamma)(1-t_{2k})}\\big|R_{t_{2k}}-S_{t_{2k}}\\big|^p+C_\\gamma(1-t_{2k})\\\\\n &\\leq e^{p(\\Lambda+\\gamma)(1-t_{2k})}e^{p(\\gamma-\\bar{\\kappa})(t_{2k}-t_{2k-1})}\\big|R_{t_{2k-1}}-S_{t_{2k-1}}\\big|^p+C_\\gamma(1-t_{2k-1})\\\\\n &\\leq\\cdots \\leq e^{p(\\Lambda+\\gamma)(1-\\eta) +p(\\gamma-\\bar \\kappa)\\eta}|x-z|^p+C_\\gamma \n \\leq e^{-p\\big(\\Xi-\\gamma\\big)}|x-z|^p+C_\\gamma. \n \\end{align*}\n Choose $\\gamma>0$ sufficiently small such that simultaneously $\\Xi-\\gamma>0$ and \n \\begin{equation*}\n \\rho\\ensuremath\\triangleq \\big(1-\\a_{\\eta,\\bar{R}}\\big)e^{p(\\Lambda+\\gamma)}+\\a_{\\eta,\\bar{R}}e^{-p(\\Xi-\\gamma)}<1.\n \\end{equation*}\n This shows that\n \\begin{equation}\\label{eq:estimate_p_bigger}\n \\Expec{\\big|R_{1}-S_{1}\\big|^p}\\leq\\rho|x-z|^p+C_\\gamma.\n \\end{equation}\n It is clear that the estimate \\eqref{eq:estimate_p_bigger} also holds for $p<2$ with the constant\n \\begin{equation*}\n C_\\gamma=\\frac{1}{(2\\gamma)^{\\frac{p}{2}}}\\left(\\frac{\\|\\varsigma\\|_{\\Omega_\\beta}}{t^{\\beta}}+|\\dot{\\bar{\\ell}}|_\\infty\\right)^p\n \\end{equation*}\n and a slightly increased $\\rho<1$. Since $P^t=\\bar{B}_{t+\\cdot}$, \\cref{lem:smooth_part_decay} and the identity \\eqref{eq:expec_diff} show that\n \\begin{equation*}\n \\Expec{\\big|X_{t+1}-Z_{t+1}\\big|^p}\\leq\\rho\\Expec{\\big|X_{t}-Z_{t}\\big|^p}+\\frac{C\\big(1+\\|\\varsigma\\|_{\\Omega_\\beta}^p\\big)}{t^{p\\beta}}\n \\end{equation*}\n for some numerical constant $C>0$ independent of $t$ and $\\varsigma$.
Therefore, iterating this bound we find\n \\begin{equation}\\label{eq:quenched_wasserstein}\n \\Expec{\\big|X_{t}-Z_{t}\\big|^p}\\lesssim e^{-ct}\\Expec{|X_0-Z_0|^p}+C\\big(1+\\|\\varsigma\\|_{\\Omega_\\beta}^p\\big)\\sum_{i=0}^{[t]-2}\\frac{\\rho^i}{(t-1-i)^{p\\beta}}.\n \\end{equation}\n The last sum is easily seen to be $\\lesssim t^{-p\\beta}$ uniformly in $t\\geq 2$ and the claim follows at once.\n\\end{proof}\n\nBy a strategy inspired by \\cite[Section 7]{Panloup2020} (see also \\cite{Hairer2005}), we can lift \\cref{prop:conditional_stationary_wasserstein} to a total variation bound. Since the exposition of Panloup and Richard does not immediately transfer to the problem at hand, we choose to include the necessary details. Consider the system\n\\begin{equation}\\label{eq:girsanov_coupling}\n \\begin{aligned}[c]\n dX_s&=b(X_s)\\, ds +d\\varsigma_s+\\sigma d\\tilde{B}_s,\\\\\n dZ_s&=b(Z_s)\\, ds +\\sigma\\,dB_s+\\sigma\\varphi^t(s)\\,ds,\n \\end{aligned}\n\\end{equation}\nwhere $X_0$ is an arbitrary initial condition and $Z$ is the stationary solution of \\eqref{eq:fbm_sde}. Our aim is to exhibit an adapted integrable function $\\varphi^t:[0,t+1]\\to\\ensuremath{\\mathbb{R}}^n$ which vanishes on $[0,t]$ and ensures that $X_{t+1}=Z_{t+1}$. To this end, we define\n\\begin{equation}\\label{eq:coupling_function}\n \\varphi^t(s)\\ensuremath\\triangleq\\left\\{\n \\begin{array}{ll}\n \\left(2\\frac{|X_t-Z_t|^{\\frac12}}{|X_s-Z_s|^{\\frac12}}+\\lambda\\right)\\sigma^{-1}(X_s-Z_s)-\\dot{\\bar{B}}_s+\\sigma^{-1}\\dot{\\varsigma}_s, \\quad \\quad &s\\in [t,t+1],\\\\\n 0, \\qquad \\qquad &\\hbox{ otherwise.}\n \\end{array}\\right.\n\\end{equation}\n\\begin{lemma}\\label{lem:girsanov}\n Let $t\\geq 1$, $\\varsigma\\in\\Omega_\\alpha$, $b\\in\\S(\\kappa,R,\\lambda)$, \n and consider the system \\eqref{eq:girsanov_coupling} with $\\varphi^t$ defined in \\eqref{eq:coupling_function}.
Then $X_{t+1}=Z_{t+1}$ and, for any $\\beta<\\alpha\\wedge(1-H)$,\n\\begin{equation}\\label{eq:phi_norm}\n |\\varphi^t|_\\infty\\lesssim |X_t-Z_t|+\\frac{\\|\\varsigma\\|_{\\Omega_\\beta}+\\|\\bar{B}\\|_{\\Omega_\\beta}}{t^{\\beta}},\\qquad |\\dot{\\varphi}^t|_\\infty\\lesssim |X_t-Z_t|^{\\frac12}+|X_t-Z_t|+\\frac{\\|\\varsigma\\|_{\\Omega_\\beta}+\\|\\bar{B}\\|_{\\Omega_\\beta}}{t^{1+\\beta}},\n\\end{equation}\nwhere the derivative of $\\varphi^t$ is understood as right- and left-sided derivative at the boundaries $t$ and $t+1$, respectively.\n\\end{lemma}\n\\begin{proof}\n The argument is a minor modification of \\cite[Lemma 5.8]{Hairer2005}: Abbreviate $f(s)\\ensuremath\\triangleq|X_s-Z_s|^2$, then\n \\begin{equation*}\n f^\\prime(s)=2\\braket{b(X_s)-b(Z_s)+\\dot{\\varsigma}_s-\\sigma\\dot{\\bar{B}}_s-\\sigma\\varphi^t(s),X_s-Z_s}\\leq -4|X_t-Z_t|^{\\frac12}f(s)^{\\frac34}\n \\end{equation*}\n since $b\\in\\S(\\kappa,R,\\lambda)$. It follows that\n \\begin{equation*}\n |X_s-Z_s|^{\\frac12}\\leq |X_t-Z_t|^{\\frac12}-(s-t)|X_t-Z_t|^{\\frac12}\\qquad\\forall\\, s\\in[t,t+1],\n \\end{equation*}\n whence $X_{t+1}=Z_{t+1}$. This also implies\n \\begin{equation*}\n \\left|\\frac{d}{ds}\\big(X_s-Z_s\\big)\\right|\\leq\\big(\\Lip{b}+2+\\lambda\\big)|X_t-Z_t|^{\\frac12}|X_s-Z_s|^{\\frac12}\n \\end{equation*}\n and consequently\n \\begin{equation*}\n \\left|\\frac{d}{ds}\\left(\\frac{X_s-Z_s}{|X_s-Z_s|^{\\frac12}}\\right)\\right|\\leq\\frac32\\frac{\\left|\\frac{d}{ds}\\big(X_s-Z_s\\big)\\right|}{|X_s-Z_s|^{\\frac12}}\\lesssim|X_t-Z_t|^{\\frac12}.\n \\end{equation*}\n The bounds \\eqref{eq:phi_norm} follow at once.\n\\end{proof}\n\\begin{remark}\n We stress that the bound on $|\\dot{\\varphi}^t|_\\infty$ only holds for a Lipschitz continuous drift $b$.\n\\end{remark}\n\nIt is now easy to prove the following result:\n\\begin{proposition}\\label{prop:conditional_stationary_total}\n Assume the conditions of \\cref{prop:conditional_stationary_wasserstein} for $p=1$. 
Then, for any $\\beta<\\alpha\\wedge(1-H)$, it holds that\n \\begin{equation*}\n \\TV{\\L\\big(\\Psi^\\varsigma_t(Y)\\big)-\\pi}\\lesssim {t^{-\\frac{\\beta}{3}}\\big(1+\\|\\varsigma\\|_{\\Omega_\\beta}\\big)\\big(1+\\ensuremath{\\mathcal{W}}^1(\\L(Y),\\pi)\\big)} \\qquad \\forall\\, t>0.\n \\end{equation*}\n\\end{proposition}\n\n\\begin{proof}\n Let $B$ and $B^\\prime$ be $H$-fBms built from underlying two-sided Wiener processes $W$ and $W^\\prime$, see \\eqref{eq:mandelbrot}. Recall that $\\tilde{B}$ is the Riemann-Liouville process associated with $B$. Let $X$ and $Z$ solve \n \\begin{align}\\label{eq:proof_tv_equations}\n \\begin{split}\n dX_s&=b(X_s)\\,ds+d\\varsigma_s+\\sigma d\\tilde{B}_s,\\\\\n dZ_s&=b(Z_s)\\,ds+\\sigma\\,dB^\\prime_s,\n \\end{split}\n \\end{align}\n where $X_0\\overset{d}{=}Y$ and $Z$ is the stationary solution. \n Fix $t>1$.\n We shall use the bound\n \\begin{align}\n \\TV{\\L\\big(\\Psi^\\varsigma_{t+1}(Y)\\big)-\\pi}&=\\inf_{(\\tilde B,B^\\prime)}\\ensuremath\\mathbb{P}\\big(X_{t+1}\\neq Z_{t+1}\\big)\\leq\\inf_{(W,W^\\prime)}\\ensuremath\\mathbb{P}\\big(X_{t+1}\\neq Z_{t+1}\\big) \n \\nonumber\\\\\n &\\leq\\inf_{(W,W^\\prime)}\\ensuremath\\mathbb{P}\\big(X_{t+1}\\neq Z_{t+1},|X_t-Z_t|\\leq\\delta\\big)+ \\inf_{(W,W^\\prime)}\\ensuremath\\mathbb{P}\\big(|X_t-Z_t|>\\delta\\big).\\label{eq:tv_proof}\n \\end{align}\n Taking $W$ and $W'$ equal, we are in the setting of \\cref{prop:conditional_stationary_wasserstein}. 
The estimate \\eqref{eq:quenched_wasserstein} thus shows that, for any $\\delta\\in(0,1]$,\n\\begin{equation}\\label{eq:tv_proof_2}\n\\inf_{(W,W^\\prime)}\\ensuremath\\mathbb{P}\\big(|X_t-Z_t|>\\delta\\big) \\leq \\frac{C\\big(1+\\|\\varsigma\\|_{\\Omega_\\beta}\\big)\\big(1+\\ensuremath{\\mathcal{W}}^1(\\L(Y),\\pi)\\big)}{\\delta t^\\beta}.\n\\end{equation}\n To bound the first term in \\eqref{eq:tv_proof} we exploit the fact that $X_t$ and $Z_t$ are already close so that we can couple them at time $t+1$ with a controlled cost. \n Let $\\varphi^t$ be the function from \\cref{lem:girsanov}; in particular $\\varphi^t(s)=0$ for $s<t$. By Girsanov's theorem, adding the drift $\\sigma\\varphi^t$ to the equation for $Z$ amounts to shifting the underlying Wiener process $W^\\prime$ by an absolutely continuous path with derivative $\\psi^t$, which is obtained from $\\varphi^t$ by fractional calculus and whose form depends on whether $H\\leq\\frac12$ or $H>\\frac12$.\n In either case, \\eqref{eq:phi_norm} yields\n \\begin{equation*}\n \\int_t^{t+1}|\\psi^t(s)|^2\\,ds\\lesssim \\delta+\\frac{\\|\\varsigma\\|^2_{\\Omega_\\beta}+\\|\\bar{B}\\|^2_{\\Omega_\\beta}}{t^{2\\beta}}\n \\end{equation*}\n on the event $\\{|X_t-Z_t|\\leq \\delta\\}$ and therefore\n \\begin{align*}\n &\\phantom{\\leq}\\inf_{(W,W^\\prime)}\\ensuremath\\mathbb{P}\\big(X_{t+1}\\neq Z_{t+1},|X_t-Z_t|\\leq\\delta\\big) \n \\lesssim\\sqrt{\\delta}+\\frac{\\|\\varsigma\\|_{\\Omega_\\beta}+\\big\\|\\|\\bar{B}\\|_{\\Omega_\\beta}\\big\\|_{L^2}}{t^\\beta}.\n \\end{align*}\nCombining this with \\eqref{eq:tv_proof} and \\cref{lem:smooth_part_decay}, we have proven\n \\begin{equation}\\label{eq:tv_final}\n \\TV{\\L\\big(\\Psi_{t+1}^\\varsigma(Y)\\big)-\\pi}\\lesssim \\big(1+\\|\\varsigma\\|_{\\Omega_\\beta}\\big)\\big(1+\\ensuremath{\\mathcal{W}}^1(\\L(Y),\\pi)\\big)\\left(\\sqrt{\\delta}+\\frac{1}{\\delta t^\\beta}\\right),\n \\end{equation}\n which is minimized for $\\delta=t^{-\\frac{2\\beta}{3}}$.\n\\end{proof}\n\n\nBy duality and \\cref{lem:conditioning}, we obtain the following ergodic theorem as a corollary to \\cref{prop:conditional_stationary_wasserstein,prop:conditional_stationary_total}.
It provides the fundamental estimates for our proof of the averaging principle for the fractional slow-fast system with feedback dynamics.\n\\begin{corollary}\\label{cor:total_variation_conditional}\n Let $0\\leq s\\leq t$ and let $X,Y$ be ${\\mathcal F}_s$-measurable random variables. Suppose that there are $\\kappa,R>0$ such that $b(x,\\cdot)\\in\\S_1(\\kappa,R)$ for every $x\\in\\ensuremath{\\mathbb{R}}^d$. Then, for any $\\zeta<1-\\hat{H}$ and\n \\begin{enumerate}\n \\item\\label{it:ergodicity_wasserstein} any Lipschitz function $h:\\ensuremath{\\mathbb{R}}^d\\times\\ensuremath{\\mathbb{R}}^n\\to\\ensuremath{\\mathbb{R}}$, \n \\begin{equation*}\n \\Big|\\Expec{h\\big(X,\\bar{\\Phi}_{s,t}^X(Y)\\big)-\\bar{h}(X)\\,|\\,{\\mathcal F}_s}\\Big|\\lesssim\\Lip{h}\\Big(1+\\big\\|\\varepsilon^{-\\hat{H}}\\bar{\\hat{B}}_{\\varepsilon\\cdot}^s\\big\\|_{\\Omega_\\zeta}\\Big)\\big(1+|Y|\\big)\\left(1\\wedge\\frac{\\varepsilon^{\\zeta}}{|t-s|^{\\zeta}}\\right).\n \\end{equation*}\n \\item\\label{it:ergodicity_tv} any bounded measurable function $h:\\ensuremath{\\mathbb{R}}^d\\times\\ensuremath{\\mathbb{R}}^n\\to\\ensuremath{\\mathbb{R}}$,\n \\begin{equation*}\n \\Big|\\Expec{h\\big(X,\\bar{\\Phi}_{s,t}^X(Y)\\big)-\\bar{h}(X)\\,|\\,{\\mathcal F}_s}\\Big|\\lesssim |h|_\\infty\\Big(1+\\big\\|\\varepsilon^{-\\hat{H}}\\bar{\\hat{B}}_{\\varepsilon\\cdot}^s\\big\\|_{\\Omega_\\zeta}\\Big)\\big(1+|Y|\\big)\\left(1\\wedge\\frac{\\varepsilon^{\\frac{\\zeta}{3}}}{|t-s|^{\\frac{\\zeta}{3}}}\\right).\n \\end{equation*}\n \\end{enumerate}\n Here, as usual, $\\bar{h}(x)=\\int_{\\ensuremath{\\mathbb{R}}^n} h(x,y)\\,\\pi^x(dy)$.\n\\end{corollary}\n\n\n\n\n\\subsection{Geometric Ergodicity for SDEs Driven by Fractional Brownian Motion}\\label{sec:geometric_ergodicity}\n\nApplying the arguments of \\cref{prop:conditional_initial_condition_wasserstein,prop:conditional_stationary_total} to the equation \\begin{equation}\\label{eq:sde}\n dY_t=b(Y_t)\\,dt+\\sigma\\,dB_t, \n\\end{equation} \nwe 
obtain an exponential rate of convergence improving the known results:\n\\begin{proof}[Proof of \\cref{thm:geometric}]\n In \\cref{conveergence-equilibrium} we have already proven the Wasserstein decay \\eqref{eq:wasserstein_time_t}:\n \\begin{equation*}\n \\ensuremath{\\mathcal{W}}^p(\\mathcal{L}(Y_t),\\pi)\\leq Ce^{-ct}\\ensuremath{\\mathbb{W}}^p\\big(\\mu,\\pi\\big), \\qquad \\forall\\, t\\geq 0.\n \\end{equation*}\n The total variation rate \\eqref{eq:tv_process} then follows by a similar Girsanov coupling as in the proof of \\cref{prop:conditional_stationary_total}. In fact, we now consider\n \\begin{align*}\n dX_s&=b(X_s)\\,ds+\\sigma dB_s,\\\\\n dZ_s&=b(Z_s)\\,ds+\\sigma\\,dB_s +\\sigma\\varphi^t(s)\\,ds,\n \\end{align*}\n where $X$ is started in the generalized initial condition $\\mu$ and $Z$ is the stationary solution. Let us define\n \\begin{equation*}\n \\varphi^t(s)\\ensuremath\\triangleq \\left(\\frac{4|X_t-Z_t|^{\\frac12}}{|X_s-Z_s|^{\\frac12}}+\\lambda\\right)\\sigma^{-1}(X_s-Z_s)\\mathbf 1_{[t,t+1]}(s).\n \\end{equation*}\n It can then be checked similarly to \\cref{lem:girsanov} that $X_{t+1}=Z_{t+1}$ and\n \\begin{equation*}\n |\\varphi^t|_\\infty\\lesssim |X_t-Z_t|,\\qquad |\\dot{\\varphi}^t|_\\infty\\lesssim |X_t-Z_t|^{\\frac12}+|X_t-Z_t|.\n \\end{equation*}\n Consequently, the estimate \\eqref{eq:tv_final} becomes\n \\begin{equation}\\label{eq:geometric_time_t}\n \\TV{\\L(Y_{t+1})-\\pi}\\lesssim\\ensuremath{\\mathbb{W}}^1\\big(\\mu,\\pi\\big)\\left(\\sqrt{\\delta}+\\frac{e^{-ct}}{\\delta}\\right)\n \\end{equation}\n and choosing $\\delta= e^{-\\frac{ct}{2}}$ shows a geometric decay of the total variation distance at a fixed time. To get the asserted decay on the path space \\eqref{eq:tv_process}, we observe that, by the very same argument as in \\cite[Proposition 7.2 (iii)]{Panloup2020}, $\\varphi^t$ actually induces a coupling on the path space with a similar cost.
Hence, $\\TV{\\L(Y_{t+\\cdot})-\\ensuremath\\mathbb{P}_\\pi}$ is still bounded by a quantity proportional to the right-hand side of \\eqref{eq:geometric_time_t} and \\eqref{eq:tv_process} follows at once. \n\\end{proof}\n\\begin{remark}\\label{rem:constant_xi}\n The admissible repulsivity strength $\\Lambda(\\kappa,R,p)$ obtained in the proof of \\cref{thm:geometric} is certainly not optimal. We therefore abstain from deriving a quantitative upper bound. Let us however indicate one way to obtain such an estimate: Start from \\eqref{eq:quant_lower_bound} in the proof of \\cref{lem:probabilistic_control} and recall a standard result (see e.g. \\cite[Theorem D.4]{Piterbarg2012}) saying that\n \\begin{equation*}\n \\ensuremath\\mathbb{P}\\big(|\\tilde{B}|_\\infty\\leq|\\sigma|^{-1}\\varepsilon\\big)\\geq 1-K\\big(|\\sigma|^{-1}\\varepsilon\\big)^{\\frac{1}{H}}e^{-H(|\\sigma|^{-1}\\varepsilon)^2}\n \\end{equation*}\n for a known numerical constant $K>0$. Finally optimize over all constants involved.\n\\end{remark}\n\nLet us finally sketch the main differences for a more general Gaussian driving noise $G$ in equation \\eqref{eq:sde}. We assume that $G$ has continuous sample paths and a moving average representation similar to \\eqref{eq:mandelbrot} with a kernel $\\mathfrak{G}:\\ensuremath{\\mathbb{R}}\\to\\Lin{n}$ which vanishes on $(-\\infty,0]$, is continuous on $(0,\\infty)$, and satisfies\n\\begin{equation*}\n\t\\int_{-\\infty}^t\\big|\\mathfrak{G}(t-u)-\\mathfrak{G}(-u)\\big|^2\\,du<\\infty\n\\end{equation*}\nfor each $t>0$.
Then\n\\begin{equation*}\n\tG_t=\\int_{-\\infty}^t \\mathfrak{G}(t-u)-\\mathfrak{G}(-u)\\,dW_u,\\qquad t\\geq 0,\n\\end{equation*}\nhas the locally independent increment decomposition \n\\begin{equation*}\n\t\\big(\\theta_t G\\big)_h=\\int_{-\\infty}^t\\mathfrak{G}(t+h-u)-\\mathfrak{G}(t-u)\\,dW_u+\\int_t^{t+h}\\mathfrak{G}(t+h-u)\\,dW_u\\ensuremath\\triangleq\\bar{G}^t_h+\\tilde{G}^t_h\n\\end{equation*} \nwith respect to any compatible filtration. Moreover, we require that\n\\begin{equation*}\n \\int_0^{\\delta} |\\mathfrak{G}(u)|\\,du>0\n\\end{equation*}\nfor each $\\delta>0$. We remark that (up to a time-shift) this is certainly implied by the assumptions of Panloup and Richard, see \\cite[Condition $\\boldsymbol{(\\mathrm{C}_2)}$]{Panloup2020}. As we have seen in \\cref{ex:titmarsh}, the Cameron-Martin space of $(\\tilde{G}_h)_{h\\in[0,1]}$ then densely embeds into $\\ensuremath{\\mathcal{C}}_0([0,1],\\ensuremath{\\mathbb{R}}^n)$. Thus \\cref{rem:Wasserstein-general} applies and we obtain a geometric rate in Wasserstein distance, provided that there is a stationary measure for the equation $dY_t=b(Y_t)\\,dt+\\sigma\\,dG_t$.\n\n\n\\section{The Fractional Averaging Principle}\\label{sec:feedback}\n\nLet us remind the reader of the setup of \\cref{thm:feedback_fractional}: We consider the slow-fast system\n\\begin{alignat}{4}\n dX_t^\\varepsilon&=f(X_t^\\varepsilon,Y_t^\\varepsilon)\\,dt+g(X_t^\\varepsilon,Y_t^\\varepsilon)\\,dB_t,& \\qquad&X_0^\\varepsilon=X_0,\\label{eq:slow_feedback_sec}\\\\\n dY_t^\\varepsilon&=\\frac{1}{\\varepsilon}b(X_t^\\varepsilon,Y_t^\\varepsilon)\\,dt+\\frac{1}{\\varepsilon^{\\hat{H}}}\\sigma\\,d\\hat{B}_t,&\\qquad &Y_0^\\varepsilon=Y_0,\\label{eq:fast_feedback_sec}\n\\end{alignat}\ndriven by independent $d$-dimensional and $n$-dimensional fractional Brownian motions $B$ and $\\hat{B}$ with Hurst parameters $H\\in(\\frac12,1)$ and $\\hat{H}\\in(1-H,1)$, respectively. 
We claim that $X_t^\\varepsilon$ converges to the solution of the na\\\"ively averaged equation \\eqref{eq:effective_dynamics} as $\\varepsilon\\to 0$. \n\nLet us also introduce the following filtrations for later reference:\n\\begin{equation*}\n {\\mathcal G}_t\\ensuremath\\triangleq\\sigma(B_s,s\\leq t),\\quad\\hat{{\\mathcal G}}_t\\ensuremath\\triangleq\\sigma(\\hat{B}_s,s\\leq t),\\quad {\\mathcal F}_t\\ensuremath\\triangleq{\\mathcal G}_t\\vee\\hat{{\\mathcal G}}_t.\n\\end{equation*}\nTo be utterly precise, we actually use the right-continuous completion of ${\\mathcal F}$ in order to ensure that the hitting time of an open set by a continuous, adapted process is a stopping time. Observe that ${\\mathcal F}$ is compatible with the fBm $\\hat{B}$, see \\cref{sec:increment}.\n\nWe shall first convince ourselves that, under the conditions of \\cref{thm:feedback_fractional}, the pathwise solution of the slow-fast system \\eqref{eq:slow_feedback_sec}--\\eqref{eq:fast_feedback_sec} exists globally. If the drift vector field $b:\\ensuremath{\\mathbb{R}}^d\\times\\ensuremath{\\mathbb{R}}^n\\to\\ensuremath{\\mathbb{R}}^n$ in \\eqref{eq:fast_feedback_sec} were \\emph{globally} Lipschitz continuous, this would be an easy consequence of the standard Young bound \\cite{Young1936}:\n\\begin{equation}\\label{eq:young}\n \\left|\\int_s^t f_r\\,d\\ensuremath{\\mathfrak{h}}_r\\right|\\lesssim |f|_{\\ensuremath{\\mathcal{C}}^\\beta}|\\ensuremath{\\mathfrak{h}}|_{\\ensuremath{\\mathcal{C}}^\\alpha}|t-s|^{\\alpha+\\beta}+|f_s||\\ensuremath{\\mathfrak{h}}|_{\\ensuremath{\\mathcal{C}}^\\alpha}|t-s|^\\alpha,\n\\end{equation}\nprovided that $\\alpha+\\beta>1$. We shall also prove a bound on the moments of the H\\\"older norm of the solution for any fixed scale $\\varepsilon$.
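Before turning to the analysis, the averaging claim can also be probed numerically. The following sketch is purely illustrative and not part of the proof: it uses hypothetical toy coefficients $f(x,y)=y-x$, $b(x,y)=x-y$, $g\equiv 1$, $\sigma=1$ in dimension one, for which the frozen fast dynamics is an Ornstein--Uhlenbeck-type process centred at the slow variable, so that $\bar{f}\equiv 0$, $\bar{g}\equiv 1$, and the averaged limit is simply $\bar{X}_t=X_0+B_t$. The fBm paths are sampled exactly on a grid via a Cholesky factorization of the covariance, and the system is integrated by an explicit Euler scheme.

```python
import numpy as np

# Illustrative sketch only; the toy coefficients f(x,y) = y - x,
# b(x,y) = x - y, g = 1, sigma = 1 are hypothetical choices, not taken
# from the analysis above. For them, pi^x is centred at x, so the
# averaged limit is simply Xbar_t = X_0 + B_t (here X_0 = 0).

def fbm(n, T, H, rng):
    """Exact fBm sample on a uniform grid via Cholesky factorization of
    the covariance R(s, t) = (s^{2H} + t^{2H} - |t - s|^{2H}) / 2."""
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t, indexing="ij")
    cov = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))  # jitter for stability
    return np.concatenate(([0.0], L @ rng.standard_normal(n)))

rng = np.random.default_rng(0)
T, n, H, Hhat = 1.0, 400, 0.7, 0.7
dt = T / n
B, Bhat = fbm(n, T, H, rng), fbm(n, T, Hhat, rng)

errors = {}
for eps in (0.1, 0.01):
    X, Y = np.zeros(n + 1), np.zeros(n + 1)
    for k in range(n):  # explicit Euler scheme for the slow-fast system
        X[k + 1] = X[k] + (Y[k] - X[k]) * dt + (B[k + 1] - B[k])
        Y[k + 1] = Y[k] + (X[k] - Y[k]) * dt / eps \
            + (Bhat[k + 1] - Bhat[k]) / eps ** Hhat
    errors[eps] = np.max(np.abs(X - B))  # uniform distance to Xbar = B
print(errors)
```

The grid size, Hurst parameters, and time horizon are arbitrary; for a single path the uniform distance to the averaged limit is random, but it shrinks as $\varepsilon$ decreases over repeated runs, in line with the convergence asserted above.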
The main technical estimates in the proof of \\cref{thm:feedback_fractional} are deferred to \\cref{sec:uniform_bounds}, allowing us to easily conclude the argument in \\cref{sec:proof} by appealing to L\\^e's stochastic sewing lemma \\cite{Le2020}. \n\n\\subsection{A Solution Theory for the Slow-Fast System}\\label{sec:solution_theory}\n\n\nWe shall begin with a deterministic (pathwise) existence and uniqueness result. Fix a terminal time $T>0$ and let $\\ensuremath{\\mathfrak{h}}=(\\ensuremath{\\mathfrak{h}}^1,\\ensuremath{\\mathfrak{h}}^2)\\in\\ensuremath{\\mathcal{C}}^{\\alpha_1}([0,T],\\ensuremath{\\mathbb{R}}^{m})\\times\\ensuremath{\\mathcal{C}}^{\\alpha_2}([0,T],\\ensuremath{\\mathbb{R}}^{n})$, where $\\alpha_1>\\frac12$ and $\\alpha_2>1-\\alpha_1$. We consider the Young differential equation\n\\begin{equation}\\label{eq:ode}\n z(t)=\\begin{pmatrix}z^1(t)\\\\z^2(t)\\end{pmatrix}=z_0+\\int_0^t \\begin{pmatrix}F_1\\big(z(s)\\big)\\\\F_2\\big(z(s)\\big)\\end{pmatrix}\\,ds+\\int_0^t G\\big(z(s)\\big)\\,d\\ensuremath{\\mathfrak{h}}_s.\n\\end{equation}\nWe impose the following assumptions on the data:\n\n\\begin{condition}\\label{cond:data_ode}\n\\leavevmode\n\\begin{enumerate}\n \\item\\label{it:cond_ode_1} $F_1:\\ensuremath{\\mathbb{R}}^{d}\\times\\ensuremath{\\mathbb{R}}^{n}\\to\\ensuremath{\\mathbb{R}}^{d}$ is bounded and globally Lipschitz continuous.\n \\item\\label{it:cond_ode_2} $F_2:\\ensuremath{\\mathbb{R}}^{d}\\times\\ensuremath{\\mathbb{R}}^{n}\\to\\ensuremath{\\mathbb{R}}^{n}$ is locally Lipschitz continuous and of linear growth, that is, $|F_2(z,x)|\\lesssim 1+|x|+|z|$ for all $x\\in\\ensuremath{\\mathbb{R}}^n$ and $z\\in\\ensuremath{\\mathbb{R}}^d$.
Moreover, there are $\\kappa,D>0$ such that\n \\begin{equation*}\n \\Braket{F_2(z, x)-F_2(z,y),x-y}\\leq D- \\kappa|x-y|^2 \\qquad \\forall\\, x,y\\in\\ensuremath{\\mathbb{R}}^n, \\forall \\,z\\in \\ensuremath{\\mathbb{R}}^d.\n\\end{equation*}\n \\item\\label{it:cond_ode_3} $G:\\ensuremath{\\mathbb{R}}^{d}\\times\\ensuremath{\\mathbb{R}}^{n}\\to\\Lin[m+n]{d+n}$ is of the form $G=\\begin{pmatrix}G_1 & 0\\\\ 0 & G_2\\end{pmatrix}$ with $G_1\\in\\Cb{2}\\big(\\ensuremath{\\mathbb{R}}^{d}\\times\\ensuremath{\\mathbb{R}}^{n},\\Lin[m]{d}\\big)$ and $G_2\\in\\Lin{n}$ is constant.\n\\end{enumerate}\n\\end{condition}\n\nOur proof for the well-posedness of \\eqref{eq:ode} and the non-explosiveness is based on the following comparison lemma, versions of which will be of repeated use in the sequel:\n\\begin{lemma}\\label{lem:comparison}\n Let $F_2:\\ensuremath{\\mathbb{R}}^{d}\\times\\ensuremath{\\mathbb{R}}^{n}\\to\\ensuremath{\\mathbb{R}}^{n}$ satisfy \\cref{cond:data_ode} \\ref{it:cond_ode_2} and let $\\varsigma\\in\\ensuremath{\\mathcal{C}}_0(\\ensuremath{\\mathbb{R}}_+,\\ensuremath{\\mathbb{R}}^{n})$, $z\\in\\ensuremath{\\mathcal{C}}(\\ensuremath{\\mathbb{R}}_+,\\ensuremath{\\mathbb{R}}^{d})$. \n \\begin{enumerate}\n \\item Then for any $x_0\\in\\ensuremath{\\mathbb{R}}^{n}$, there are unique global solutions to\n \\begin{equation*}\n x(t)=x_0+\\int_0^t F_2\\big(z(s),x(s)\\big)\\,ds+\\varsigma_t,\\qquad y(t)=x_0-\\int_0^ty(s)\\,ds+\\varsigma_t.\n \\end{equation*}\n Furthermore, on any finite time interval $[0,T]$, the difference of the solutions satisfies the bound\n \\begin{equation}\\label{eq:solution_difference}\n |x(t)-y(t)|^2\\lesssim\\int_0^t e^{-\\kappa(t-s)}\\big(1+|y(s)|+|z(s)|\\big)^2\\,ds\n \\end{equation}\n for all $t\\in[0,T]$.
In particular,\n \\begin{equation}\\label{eq:a_priori_sup}\n |x|_\\infty\\lesssim 1+|x_0|+|\\varsigma|_\\infty+|z|_\\infty.\n \\end{equation}\n\n \\item If, in addition, $\\varsigma\\in\\ensuremath{\\mathcal{C}}^\\alpha([0,T],\\ensuremath{\\mathbb{R}}^{n})$ for some $\\alpha>0$, then $x\\in\\ensuremath{\\mathcal{C}}^\\alpha([0,T],\\ensuremath{\\mathbb{R}}^{n})$ and the following bound holds:\n \\begin{equation}\\label{eq:comparison_apriori} \n |x|_{\\ensuremath{\\mathcal{C}}^{\\alpha}}\\lesssim 1+|x_0|+|z|_\\infty+|\\varsigma|_{\\ensuremath{\\mathcal{C}}^{\\alpha}}.\n \\end{equation}\n \\end{enumerate}\n\\end{lemma}\n\\begin{proof} \n Since $F_2$ is locally Lipschitz, it is clear that uniqueness holds for the equation defining $x$. To see existence, first notice that $\\tilde{x}(t)\\ensuremath\\triangleq x(t)-\\varsigma_t$ solves\n \\begin{equation*}\n \\tilde{x}(t)=x_0+\\int_0^t F_2\\big(z(s),\\tilde{x}(s)+\\varsigma_s\\big)\\,ds.\n \\end{equation*}\n Set $\\Upsilon(s,x) =F_2\\big(z(s), x+\\varsigma_s\\big)$. This function is jointly continuous in $(s,x)$. Therefore, a local solution exists by the Carath\\'eodory theorem. \n\n On the other hand, global existence and uniqueness of $y$ is standard. Consequently, the required non-explosion statement follows easily upon establishing \\eqref{eq:solution_difference}. To this end, we first observe that, for all $z\\in\\ensuremath{\\mathbb{R}}^{d}$ and all $x,y\\in\\ensuremath{\\mathbb{R}}^{n}$, the off-diagonal large scale contraction property and the linear growth of $F_2$ furnish the following bound:\n \\begin{align*}\n \\Braket{F_2(z,x)+y,x-y}&\\leq D-\\kappa|x-y|^2+C\\big(1+|z|+|y|\\big)|x-y|\\\\\n &\\leq D- \\frac{\\kappa}{2}|x-y|^2+\\frac{C}{\\kappa}\\big(1+|z|+|y|\\big)^2\n \\end{align*}\n for some uniform constant $C>0$, where we also used Young's inequality.
Consequently, the function $h(t)\\ensuremath\\triangleq e^{\\kappa t}|x(t)-y(t)|^2$ satisfies \n \\begin{equation*}\n h^\\prime(t)\\lesssim e^{\\kappa t}\\big(1+|y(t)|+|z(t)|\\big)^2\n \\end{equation*}\n and \\eqref{eq:solution_difference} follows at once. \n\n The bound \\eqref{eq:comparison_apriori} is an immediate consequence of \\eqref{eq:solution_difference} together with the fact that\n \\begin{equation*}\n |x|_{\\ensuremath{\\mathcal{C}}^{\\alpha}}\\lesssim|F_2(z,x)|_{\\infty}T^{1- \\alpha}+|\\varsigma|_{\\ensuremath{\\mathcal{C}}^{\\alpha}}\\lesssim \\big(1+|z|_\\infty+|x|_{\\infty}\\big) T^{1-\\alpha}+|\\varsigma|_{\\ensuremath{\\mathcal{C}}^{\\alpha}}.\\qedhere\n \\end{equation*}\n\\end{proof}\n\nThe announced existence and uniqueness result for \\eqref{eq:ode} is as follows:\n\\begin{proposition}\\label{prop:abstract_ode}\n Under \\cref{cond:data_ode}, for any $T>0$ and any $\\beta<\\alpha_1\\wedge\\alpha_2$, \\eqref{eq:ode} has a unique global solution in $\\ensuremath{\\mathcal{C}}^{\\beta}([0,T],\\ensuremath{\\mathbb{R}}^{d+n})$. \n\\end{proposition}\n\\begin{proof}\nOwing to \\cref{lem:comparison}, it is enough to derive an \\emph{a priori} bound on $|z^1|_{\\ensuremath{\\mathcal{C}}^{\\tilde{\\alpha}}}$, $\\tilde{\\alpha}\\in[\\beta,\\alpha_1)$, to conclude with a standard Picard argument. \n\nLet $\\delta\\in(0,1)$. 
By the Young bound \\eqref{eq:young}, we see that\n \\begin{align*}\n |z^1|_{\\ensuremath{\\mathcal{C}}^{\\tilde{\\alpha}}}&\\lesssim |F_1|_\\infty\\delta^{1-\\tilde{\\alpha}}+\\big(\\big|G_1(z^1,z^2)\\big|_{\\ensuremath{\\mathcal{C}}^{\\tilde{\\alpha}\\wedge\\alpha_2}}+|G_1|_\\infty\\big)|\\ensuremath{\\mathfrak{h}}^1|_{\\ensuremath{\\mathcal{C}}^{\\tilde{\\alpha}}}\\\\\n &\\lesssim\\big(1+|z^1|_{\\ensuremath{\\mathcal{C}}^{\\tilde{\\alpha}}}+|z^2|_{\\ensuremath{\\mathcal{C}}^{\\alpha_2}}\\big)\\big(1+|\\ensuremath{\\mathfrak{h}}^1|_{\\ensuremath{\\mathcal{C}}^{\\alpha_1}}\\big)\\delta^{\\alpha_1-\\tilde{\\alpha}},\n \\end{align*}\n where the prefactor is proportional to $M\\ensuremath\\triangleq|F_1|_\\infty+|G|_\\infty+\\Lip{G}$. We may apply \\cref{lem:comparison} to $z^2$ to further find\n \\begin{equation*}\n |z^1|_{\\ensuremath{\\mathcal{C}}^{\\tilde{\\alpha}}}\\lesssim\n \\big(1+|z^1|_{\\ensuremath{\\mathcal{C}}^{\\tilde{\\alpha}}}+|z_0|+|\\ensuremath{\\mathfrak{h}}^2|_{\\ensuremath{\\mathcal{C}}^{\\alpha_2}}\\big)\\big(1+|\\ensuremath{\\mathfrak{h}}^1|_{\\ensuremath{\\mathcal{C}}^{\\alpha_1}}\\big)\\delta^{\\alpha_1-\\tilde{\\alpha}}.\n \\end{equation*}\n Here, we take the H\\\"older norms of $z^1,z^2$ over the interval $[0,\\delta]$, whereas we use the full interval $[0,T]$ for $\\ensuremath{\\mathfrak{h}}^1$ and $\\ensuremath{\\mathfrak{h}}^2$. 
For $\\delta>0$ small enough, we therefore get\n \\begin{equation}\\label{eq:iteration}\n |z^1|_{\\ensuremath{\\mathcal{C}}^{\\tilde{\\alpha}}([0,\\delta])}\\lesssim\\big(1+|z_0|+|\\ensuremath{\\mathfrak{h}}^2|_{\\ensuremath{\\mathcal{C}}^{\\alpha_2}}\\big)\\big(1+|\\ensuremath{\\mathfrak{h}}^1|_{\\ensuremath{\\mathcal{C}}^{\\alpha_1}}\\big).\n \\end{equation}\n Combining this with \\cref{lem:comparison}, we can find a constant $C>0$ such that\n \\begin{equation*}\n |z(\\delta)|\\leq|z_0|+|z^1|_{\\ensuremath{\\mathcal{C}}^{\\tilde{\\alpha}}([0,\\delta])}+|z^2|_{\\ensuremath{\\mathcal{C}}^{\\alpha_2}([0,\\delta])}\\leq C\\big(1+|z_0|+|\\ensuremath{\\mathfrak{h}}^2|_{\\ensuremath{\\mathcal{C}}^{\\alpha_2}}\\big)\\big(1+|\\ensuremath{\\mathfrak{h}}^1|_{\\ensuremath{\\mathcal{C}}^{\\alpha_1}}\\big).\n \\end{equation*}\n This bound can now be easily iterated and together with \\eqref{eq:iteration} we see that there is a (increased) constant $C$ such that\n \\begin{equation*}\n |z^1|_{\\ensuremath{\\mathcal{C}}^{\\tilde{\\alpha}}([t,t+\\delta])}\\lesssim\\big(1+|z_t|+|\\ensuremath{\\mathfrak{h}}^2|_{\\ensuremath{\\mathcal{C}}^{\\alpha_2}}\\big)\\big(1+|\\ensuremath{\\mathfrak{h}}^1|_{\\ensuremath{\\mathcal{C}}^{\\alpha_1}}\\big)\\leq C^{\\left[\\frac{t}{\\delta}\\right]+1}\\big(1+|z_0|+|\\ensuremath{\\mathfrak{h}}^2|_{\\ensuremath{\\mathcal{C}}^{\\alpha_2}}\\big)\\big(1+|\\ensuremath{\\mathfrak{h}}^1|_{\\ensuremath{\\mathcal{C}}^{\\alpha_1}}\\big)^{\\left[\\frac{t}{\\delta}\\right]+2}\n \\end{equation*}\n for each $t\\in[0,T-\\delta]$. 
Since $|\\cdot|_{\\ensuremath{\\mathcal{C}}^{\\tilde{\\alpha}}([0,T])}\\leq 2\\delta^{\\tilde{\\alpha}-1}\\sup_t |\\cdot|_{\\ensuremath{\\mathcal{C}}^{\\tilde{\\alpha}}([t,t+\\delta])}$, we get that\n \\begin{equation}\\label{eq:a_priori}\n |z^1|_{\\ensuremath{\\mathcal{C}}^{\\tilde{\\alpha}}([0,T])}\\leq\\frac{2C^{\\left[\\frac{t}{\\delta}\\right]+1}}{\\delta^{1-\\tilde\\alpha}}\\big(1+|z_0|+|\\ensuremath{\\mathfrak{h}}^2|_{\\ensuremath{\\mathcal{C}}^{\\alpha_2}}\\big)\\big(1+|\\ensuremath{\\mathfrak{h}}^1|_{\\ensuremath{\\mathcal{C}}^{\\alpha_1}}\\big)^{\\left[\\frac{T}{\\delta}\\right]+2}.\n \\end{equation}\n\n Local existence and uniqueness of a solution to \\eqref{eq:ode} is a classical consequence of the Young bound. Indeed, if we define \n \\begin{equation*}\n A_\\delta\\ensuremath\\triangleq\\left\\{f\\in\\ensuremath{\\mathcal{C}}^{\\beta}([0,\\delta],\\ensuremath{\\mathbb{R}}^{d+n}):\\,f(0)=z_0\\text{ and }|f|_{\\ensuremath{\\mathcal{C}}^{\\beta}}\\leq 1\\right\\},\n \\end{equation*}\n then, for $\\delta>0$ small enough, the operator $\\mathcal{A}_\\delta: A_\\delta\\to A_\\delta$,\n \\begin{equation*}\n (\\mathcal{A}_\\delta z)(t)\\ensuremath\\triangleq z_0+\\int_0^t\\begin{pmatrix}F_1\\big(z(s)\\big)\\{\\mathcal F}_2\\big(z(s)\\big)\\end{pmatrix}\\,ds+\\int_0^t G\\big(z(s)\\big)\\,d\\ensuremath{\\mathfrak{h}}_s,\n \\end{equation*}\n is contracting on a complete metric space. 
Abbreviating $\\gamma\\ensuremath\\triangleq\\alpha_1\\wedge\\alpha_2$, this in turn follows from the well-known bounds\n \\begin{align*}\n \\left|\\int_0^\\cdot G\\big(z(s)\\big)\\,d\\ensuremath{\\mathfrak{h}}_s\\right|_{\\ensuremath{\\mathcal{C}}^{\\beta}}&\\lesssim(\\Lip{G}+|G|_\\infty)(|z|_{\\ensuremath{\\mathcal{C}}^{\\beta}}+1)|\\ensuremath{\\mathfrak{h}}|_{\\ensuremath{\\mathcal{C}}^{\\gamma}}\\delta^{\\gamma-\\beta},\\\\\n \\left|\\int_0^\\cdot G\\big(z(s)\\big)-G\\big(\\bar{z}(s)\\big)\\,d\\ensuremath{\\mathfrak{h}}_s\\right|_{\\ensuremath{\\mathcal{C}}^{\\beta}}&\\lesssim (\\Lip{G}+\\Lip{DG})|\\ensuremath{\\mathfrak{h}}|_{\\ensuremath{\\mathcal{C}}^{\\gamma}}\\delta^{\\gamma-\\beta}|z-\\bar{z}|_{\\ensuremath{\\mathcal{C}}^{\\beta}},\\\\\n \\left|\\int_0^\\cdot \\begin{pmatrix}F_1\\big(z(s)\\big)\\{\\mathcal F}_2\\big(z(s)\\big)\\end{pmatrix}\\,ds\\right|_{\\ensuremath{\\mathcal{C}}^{\\beta}}&\\leq \\big(|F_1|_{\\infty;\\,B_{\\delta^{\\beta}}(z_0)}+|F_2|_{\\infty;\\,B_{\\delta^{\\beta}}(z_0)}\\big)\\delta^{1-\\beta},\\\\\n \\left|\\int_0^\\cdot \\begin{pmatrix}F_1\\big(z(s)\\big)-F_1\\big(\\bar{z}(s)\\big)\\{\\mathcal F}_2\\big(z(s)\\big)-F_2\\big(\\bar{z}(s)\\big)\\end{pmatrix}\\,ds\\right|_{\\ensuremath{\\mathcal{C}}^{\\beta}}&\\leq \\big(\\Lip{F_1}+\\Lip[B_{\\delta^{\\beta}}(z_0)]{F_2}\\big)\\delta|z-\\bar{z}|_{\\ensuremath{\\mathcal{C}}^{\\beta}}\n \\end{align*}\n for all $z,\\bar{z}\\in A_\\delta$, where $|\\cdot|_{\\infty;\\,A}$ and $\\Lip[A]{\\cdot}$ denote the respective norms of the function restricted to the set $A$. Here, we also used that $\\max\\big(|z-z_0|_\\infty,|\\bar z-z_0|_\\infty\\big)\\leq\\delta^\\beta$ since $z,\\bar{z}\\in A_\\delta$ by assumption. Consequently, there is a unique solution to \\eqref{eq:ode} in $\\ensuremath{\\mathcal{C}}^{\\beta}([0,\\delta],\\ensuremath{\\mathbb{R}}^{d+n})$. 
Global existence and uniqueness follow from the \emph{a priori} estimates \eqref{eq:comparison_apriori} and \eqref{eq:a_priori} by a standard maximality argument.\n\end{proof}\n\nWe now bring the randomness back into the picture. To this end, let $\alpha>0$, $p\geq 1$, and $T>0$. We define the space\n\begin{equation*}\n {\mathcal B}_{\alpha,p}([0,T],\ensuremath{\mathbb{R}}^d)\ensuremath\triangleq\left\{X:[0,T]\times\Omega\to\ensuremath{\mathbb{R}}^d:\,X\text{ is }({\mathcal F}_t)_{t\in[0,T]}\text{-adapted and }\|X\|_{{\mathcal B}_{\alpha,p}([0,T],\ensuremath{\mathbb{R}}^d)}<\infty\right\},\n\end{equation*} \nwhere we introduced the semi-norm\n\begin{equation*}\n \|X\|_{{\mathcal B}_{\alpha,p}([0,T],\ensuremath{\mathbb{R}}^d)}\ensuremath\triangleq\sup_{s\neq t\in[0,T]}\frac{\|X_t-X_s\|_{L^p}}{|t-s|^\alpha}.\n\end{equation*}\nIf the terminal time $T$ and the dimension $d$ are clear from the context, we shall also write ${\mathcal B}_{\alpha,p}$ for brevity. By Kolmogorov's continuity theorem, we have the continuous embeddings\n\begin{equation}\label{eq:embeddings}\n L^p\big(\Omega,\ensuremath{\mathcal{C}}^{\alpha+\delta}([0,T],\ensuremath{\mathbb{R}}^d)\big)\hookrightarrow{\mathcal B}_{\alpha,p}([0,T],\ensuremath{\mathbb{R}}^d)\hookrightarrow L^p\big(\Omega,\ensuremath{\mathcal{C}}^{\alpha-\delta-\frac1p}([0,T],\ensuremath{\mathbb{R}}^d)\big)\n\end{equation}\nfor any $\delta>0$.
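Let us briefly indicate the mechanism behind \eqref{eq:embeddings} (a standard consequence of Kolmogorov's continuity theorem, recorded for the reader's convenience). The first embedding is immediate from the definition of the semi-norm; for the second, note that any $X\in{\mathcal B}_{\alpha,p}$ satisfies the increment bound

```latex
% Increment bound fed into Kolmogorov's continuity theorem:
\begin{equation*}
  \|X_t-X_s\|_{L^p}\leq\|X\|_{{\mathcal B}_{\alpha,p}}\,|t-s|^{\alpha},
  \qquad s,t\in[0,T],
\end{equation*}
```

whence Kolmogorov's theorem furnishes a modification with sample paths in $\ensuremath{\mathcal{C}}^{\alpha-\frac1p-\delta}$, together with an $L^p$ bound on the corresponding H\"older norm. The loss of $\frac1p$ is the usual price of passing from moments of increments to pathwise regularity.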
Finally, let us also introduce the Besov-type space\n\begin{align*}\n W_0^{\alpha,\infty}([0,T],\ensuremath{\mathbb{R}}^d)&\ensuremath\triangleq \big\{f:[0,T]\to\ensuremath{\mathbb{R}}^d:\,|f|_{\alpha,\infty}<\infty\big\},\\\n |f|_{\alpha,\infty}&\ensuremath\triangleq\sup_{t\in[0,T]}\left(|f(t)|+\int_0^t\frac{|f(t)-f(s)|}{|t-s|^{\alpha+1}}\,ds\right).\n\end{align*}\nNualart and R\u{a}s\c{c}anu proved the following classical result:\n\begin{proposition}[{\cite[Theorem 2.1.II]{Rascanu2002}}]\label{prop:nualart}\n Let $f:\ensuremath{\mathbb{R}}^d\times\ensuremath{\mathbb{R}}^n\to\ensuremath{\mathbb{R}}^d$ be bounded and Lipschitz continuous and let $g:\ensuremath{\mathbb{R}}^d\times\ensuremath{\mathbb{R}}^n\to\Lin[m]{d}$ be of class $\ensuremath{\mathcal{C}}_b^2$. Let $(Y_t)_{t\in[0,T]}$ be a stochastic process with sample paths in $\ensuremath{\mathcal{C}}^{\gamma}([0,T],\ensuremath{\mathbb{R}}^n)$ for some $\gamma>1-H$ and let $B$ be an fBm with Hurst parameter $H>\frac12$. Then there is a unique global solution to the equation\n \begin{equation*}\n X_t=X_0+\int_0^t f(X_s,Y_s)\,ds+\int_0^tg(X_s,Y_s)\,dB_s\n \end{equation*}\n and, provided that $X_0\in L^\infty$, we also have that\n \begin{equation*}\n |X|_{\alpha,\infty}\in\bigcap_{p\geq 1} L^p\n \end{equation*}\n for each $\alpha<\frac12\wedge\gamma$.\n\end{proposition} \n\n\begin{corollary}\label{cor:norm_bound_solution}\n Fix the scale parameter $\varepsilon>0$ and a terminal time $T>0$. Then there is a unique global solution $(X^\varepsilon,Y^\varepsilon)$ to the slow-fast system \eqref{eq:slow_feedback_sec}--\eqref{eq:fast_feedback_sec} and, for each $\alpha<\frac12\wedge\hat{H}$,\n \begin{equation}\label{eq:b_norm_bound}\n X^\varepsilon\in\bigcap_{p\geq 1}{\mathcal B}_{\alpha,p}([0,T],\ensuremath{\mathbb{R}}^d).\n \end{equation}\n\end{corollary}\n\begin{proof}\n A naive Gr\"onwall-type estimate for the Young equation \eqref{eq:slow_feedback_sec} would produce bounds growing exponentially in the H\"older norm of the noise, which have finite moments only if $T>0$ is sufficiently small.
Instead, we employ \cref{prop:nualart}: Since $Y^\varepsilon\in\ensuremath{\mathcal{C}}^{\hat H-}([0,T],\ensuremath{\mathbb{R}}^n)$ by \cref{lem:comparison}, we see that, for each $\alpha<\frac12\wedge\hat H$, $|X^\varepsilon|_{\alpha,\infty}\in\bigcap_{p\geq 1} L^p$. It is clear that\n\begin{equation*}\n W_0^{\alpha,\infty}([0,T],\ensuremath{\mathbb{R}}^d)\hookrightarrow \ensuremath{\mathcal{C}}^{\alpha-\delta}([0,T],\ensuremath{\mathbb{R}}^d)\n\end{equation*}\nfor any $\delta>0$. Combine this with the continuous embedding \eqref{eq:embeddings} to conclude \eqref{eq:b_norm_bound}.\n\end{proof}\n\n\begin{remark}\n We finally record that \cref{prop:abstract_ode,cor:norm_bound_solution} are the only places in the proof of \cref{thm:feedback_fractional} which require a linear growth of the drift $b$, see \cref{cond:feedback}. In fact, the remainder of the argument would still work, \emph{mutatis mutandis}, under the weaker assumption of a polynomially growing drift, i.e., $|b(x,y)|\lesssim 1+|x|^N+|y|^N$ for some $N\in\ensuremath{\mathbb{N}}$. It is however unclear whether the solution to \eqref{eq:slow_feedback_sec}--\eqref{eq:fast_feedback_sec} exists globally in this case.\n\end{remark}\n\n\n\n\subsection{Uniform Bounds on the Slow Motions}\label{sec:uniform_bounds}\n\nOur strategy in proving \cref{thm:feedback_fractional} is as follows: The integrals in \eqref{eq:slow_feedback_sec} are approximated by suitable Riemann sums, on which we then aim to establish uniform bounds.
These estimates translate into bounds on the integrals in view of L\^e's stochastic sewing lemma \cite{Le2020}.\n\nFix a terminal time $T>0$ and let $\mathcal{S}^p$ denote the set of adapted two-parameter processes on the simplex with finite $p^\text{th}$ moments; in symbols:\n\begin{equation*}\n \mathcal{S}^p\ensuremath\triangleq\left\{A:[0,T]^2\times\Omega\to\ensuremath{\mathbb{R}}^d:\,A_{s,t}=0\text{ for }s\geq t\text{ and }A_{s,t}\in L^p(\Omega,{\mathcal F}_t,\ensuremath\mathbb{P})\text{ for all }s,t\geq 0\right\}.\n\end{equation*}\nGiven $\eta,\bar{\eta}>0$, we define the spaces\n\begin{align*}\n H_\eta^p&\ensuremath\triangleq\left\{A\in\mathcal{S}^p:\,\|A\|_{H_\eta^p}\ensuremath\triangleq\sup_{0\leq s<t\leq T}\frac{\|A_{s,t}\|_{L^p}}{|t-s|^{\eta}}<\infty\right\},\\\n \bar{H}_{\bar{\eta}}^p&\ensuremath\triangleq\left\{A\in\mathcal{S}^p:\,\vertiii{A}_{\bar{H}_{\bar{\eta}}^p}\ensuremath\triangleq\sup_{0\leq s<u<t\leq T}\frac{\big\|\Expec{\delta A_{s,u,t}\,\middle|\,{\mathcal F}_s}\big\|_{L^p}}{|t-s|^{\bar{\eta}}}<\infty\right\},\n\end{align*}\nwhere $\delta A_{s,u,t}\ensuremath\triangleq A_{s,t}-A_{s,u}-A_{u,t}$ denotes the second order increment. The stochastic sewing lemma then reads as follows:\n\begin{proposition}[{\cite{Le2020}}]\label{prop:stochastic_sewing}\n Let $p\geq 2$, $\eta>\frac12$, and $\bar{\eta}>1$. Suppose that $A\in H_\eta^p\cap\bar{H}_{\bar{\eta}}^p$. Then, for every $0\leq s\leq t\leq T$, the limit\n \begin{equation*}\n I_{s,t}(A)\ensuremath\triangleq\lim_{|P|\to 0}\sum_{[u,v]\in P}A_{u,v}\n \end{equation*}\n along partitions $P$ of $[s,t]$ with mesh $|P|\ensuremath\triangleq\max_{[u,v]\in P}|v-u|$ tending to zero exists in $L^p$. The limiting process $I(A)$ is additive in the sense that $I_{s,u}(A)+I_{u,t}(A)=I_{s,t}(A)$ for all $0\leq s\leq u\leq t\leq T$. Furthermore, there is a constant $C=C(p,\eta,\bar{\eta})$ such that\n \begin{equation*}\n \|I_{s,t}(A)\|_{L^p}\leq C\left(\vertiii{A}_{\bar{H}_{\bar{\eta}}^p}|t-s|^{\bar{\eta}}+\|A\|_{H_{\eta}^p}|t-s|^\eta\right)\n \end{equation*}\n for all $0\leq s\leq t\leq T$. Moreover, if $\|\Expec{A_{s,t}\,|\,{\mathcal F}_s}\|_{L^p}\lesssim|t-s|^{\bar{\eta}}$, then $I(A)\equiv 0$.\n\end{proposition}\nRecall our notation of the fast motion's flow from \eqref{eq:general_flow} and \eqref{eq:general_flow-fixed-x}, respectively.
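To illustrate how the two norms interact in a simple situation (this example is for orientation only and is not used below): consider the germ $\tilde{A}_{s,t}\ensuremath\triangleq f(X_s)(B_t-B_s)$ of a Young integral, with $f$ bounded and Lipschitz continuous and $X\in{\mathcal B}_{\alpha,2p}$ adapted. Its second order increment telescopes,

```latex
% Second order increment of the Young germ f(X_s)(B_t - B_s):
\begin{equation*}
  \delta\tilde{A}_{s,u,t}
  =\tilde{A}_{s,t}-\tilde{A}_{s,u}-\tilde{A}_{u,t}
  =-\big(f(X_u)-f(X_s)\big)(B_t-B_u),
\end{equation*}
% so that, by conditional Jensen, Cauchy-Schwarz, and the scaling of fBm,
\begin{equation*}
  \big\|\Expec{\delta\tilde{A}_{s,u,t}\,\middle|\,{\mathcal F}_s}\big\|_{L^p}
  \leq\Lip{f}\,\|X_u-X_s\|_{L^{2p}}\|B_t-B_u\|_{L^{2p}}
  \lesssim\Lip{f}\,\|X\|_{{\mathcal B}_{\alpha,2p}}|t-s|^{\alpha+H}.
\end{equation*}
```

Since $\alpha>1-H$ forces $\alpha+H>1$, the sewing condition $\bar{\eta}>1$ is met; this is precisely the regularity interplay exploited for the germs \eqref{eq:riemann_summands} below.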
We are ultimately going to apply \cref{prop:stochastic_sewing} with the two-parameter process\n\begin{equation}\label{eq:riemann_summands}\n A_{s,t}^\varepsilon\ensuremath\triangleq\int_s^t \left(g\Big(X_s^\varepsilon,\bar{\Phi}_{s,r}^{X_s^\varepsilon}\big(\Phi_{0,s}^{X^\varepsilon}(Y_0)\big)\Big)-\bar{g}\big(X_s^\varepsilon\big)\right)\,dB_r,\quad 0\leq s\leq t\leq T.\n\end{equation}\nWe begin with a moment bound on Young integrals:\n\begin{lemma}\label{lem:young_moment_bound}\n Let $p\geq 2$ and $\alpha>1-H$. Let $X$ be an $({\mathcal F}_t)_{t\in[0,T]}$-adapted stochastic process with $\alpha$-H\"older sample paths. Moreover, assume that $X\in{\mathcal B}_{\alpha,p}$. Let $f:\ensuremath{\mathbb{R}}^d\to\ensuremath{\mathbb{R}}$ be a bounded Lipschitz continuous function. Then we have the following bound on the Young integral:\n \begin{equation*}\n \left\|\int_s^t f(X_r)\,dB_r\right\|_{{\mathcal B}_{H,p}}\lesssim\big(|f|_\infty+\Lip{f}\big)\big(1+\|X\|_{{\mathcal B}_{\alpha,p}}\big),\n \end{equation*}\n uniformly in $0\leq s\leq t\leq T$.\n\end{lemma}\n\n\begin{lemma}\label{lem:sewing_young}\n Let $H>\frac12$ and let $h:\ensuremath{\mathbb{R}}^d\times\ensuremath{\mathbb{R}}^n \to\ensuremath{\mathbb{R}}$ be a Lipschitz continuous function. Let $p>2$ and $\alpha>1-H$. Let $X$ be an $\ensuremath{\mathbb{R}}^d$-valued, $({\mathcal F}_t)_{t\in[0,T]}$-adapted process with $\sup_{t\in[0,T]}\|X_t\|_{L^p}<\infty$ and sample paths in $\ensuremath{\mathcal{C}}^\alpha([0,T],\ensuremath{\mathbb{R}}^d)$. Let $Y_0\in L^p$. Define\n \begin{equation*}\n A_{s,t}\ensuremath\triangleq\int_s^t h\Big(X_s,\bar{\Phi}_{s,r}^{X_s}\big(\Phi_{0,s}^{X}(Y_0)\big)\Big)\,dB_r,\n \end{equation*}\n where the integration is understood in the mixed Wiener-Young sense, see \eqref{eq:wiener_young}.
If $A\\in H_\\eta^2\\cap\\bar{H}_{\\bar{\\eta}}^2$ for some $\\eta>\\frac12$ and $\\bar{\\eta}>1$, then, for any $\\varepsilon>0$ and any $0\\leq s\\leq t\\leq T$, \n \\begin{equation*}\n \\lim_{|P|\\to 0} \\sum_{[u,v]\\in P([s,t])} A_{u,v}=\\int_{s}^t h\\big(X_r,\\Phi_{0,r}^{X}(Y_0)\\big)\\,dB_r,\n \\end{equation*} \n where the right-hand side is the Young integral.\n\\end{lemma}\n\\begin{proof}\n We first note that, by \\cref{lem:comparison}, the process $\\Phi_{0,\\cdot}^X(Y_0)$ takes values in $\\ensuremath{\\mathcal{C}}^\\beta([0,T],\\ensuremath{\\mathbb{R}}^d)$ for any $\\beta<\\hat{H}$. The pathwise Young integral $\\int h\\big(X_r,\\Phi_{0,r}^{X}(Y_0)\\big)\\,dB_r$ is thus well defined and is given by the limit of the Riemann sums of\n \\begin{equation*}\n \\tilde{A}_{s,t}\\ensuremath\\triangleq h\\big(X_s,\\Phi_{0,s}^X(Y_0)\\big)(B_t-B_s)\n \\end{equation*}\n along any sequence of partitions. By the last part of \\cref{prop:stochastic_sewing}, it now suffices to show that $\\|A_{s,t}-\\tilde{A}_{s,t}\\|_{L^2}\\lesssim |t-s|^{\\bar{\\eta}}$ for some $\\bar{\\eta}>1$. 
\n\n To see this, we apply \\cref{lem:wiener_integral_bound} with $\\kappa=0$ to find that, for each $\\beta<\\hat{H}$,\n \\begin{align*}\n &\\phantom{\\leq}\\big\\|A_{s,t}-\\tilde{A}_{s,t}\\big\\|_{L^2}=\\left\\|\\int_s^t \\Big(h\\big(X_s,\\bar{\\Phi}_{s,r}^{X_s}\\big(\\Phi_{0,s}^{X}(Y_0)\\big)\\big)-h\\big(X_s,\\Phi_{0,s}^X(Y_0)\\big)\\Big)\\,dB_r\\right\\|_{L^2}\\\\\n &\\leq\\Big\\|\\sup_{s\\leq r\\leq t}\\Big|h\\big(X_s,\\bar{\\Phi}_{s,r}^{X_s}\\big(\\Phi_{0,s}^{X}(Y_0)\\big)\\big)-h\\big(X_s,\\Phi_{0,s}^X(Y_0)\\big)\\Big|\\Big\\|_{L^p}|t-s|^H\\\\\n &\\leq\\Lip{h}\\Big\\|\\Big|\\bar{\\Phi}_{s,\\cdot}^{X_s}\\big(\\Phi_{0,s}^{X}(Y_0)\\big)\\Big|_{\\ensuremath{\\mathcal{C}}^{\\beta}}\\Big\\|_{L^p}|t-s|^{H+\\beta}.\n \\end{align*}\n Since $H+\\hat{H}>1$, we can conclude with \\cref{lem:comparison,lem:fast_process_moments}.\n\\end{proof}\nOur interest in \\cref{lem:sewing_young} is of course in applying it to the slow motion \\eqref{eq:slow_feedback_sec} and the Riemann summands $A^\\varepsilon_{s,t}$ defined in \\eqref{eq:riemann_summands}. We have already seen in \\cref{cor:norm_bound_solution} that $X^\\varepsilon\\in\\bigcap_{p\\geq 1}{\\mathcal B}_{\\alpha,p}$ for any $\\alpha<\\frac12\\wedge\\hat{H}$. We are therefore left to check that $A^\\varepsilon\\in H_\\eta^p\\cap\\bar{H}_{\\bar{\\eta}}^p$ for some $\\eta>\\frac12$, $\\bar{\\eta}>1$, and $p\\geq 2$. Since these estimates are somewhat technically involved and require longer computations, we devote a subsection to each of the norms $\\|\\cdot\\|_{H_{\\eta}^p}$ and $\\vertiii{\\;\\cdot\\;}_{\\bar{H}_{\\bar{\\eta}}^p}$, respectively. \n\n\n\\subsubsection{Controlling the Increment $A^\\varepsilon_{s,t}$}\\label{sec:sewing_1}\n\nLet $h:\\ensuremath{\\mathbb{R}}^d\\times \\ensuremath{\\mathbb{R}}^n\\to \\ensuremath{\\mathbb{R}}^d$. 
Recall that we write $\bar h(x)=\int h(x,y) \pi^x(dy)$ for its average with respect to the first marginal of the invariant measure of the process $\bar \Phi^x$, see \eqref{eq:general_flow-fixed-x} and \cref{initial-condition}.\nThe following lemma exploits the convergence rates derived in \cref{sec:convergence}. (Without further notice, we assume in the sequel that the conditions of \cref{thm:feedback_fractional} on the drift $b:\ensuremath{\mathbb{R}}^d\times\ensuremath{\mathbb{R}}^n\to\ensuremath{\mathbb{R}}^n$ are in place.)\n\begin{lemma}\label{lem:sewing_helper_1}\n Let $q>1$. Let $h:\ensuremath{\mathbb{R}}^d\times\ensuremath{\mathbb{R}}^n\to\ensuremath{\mathbb{R}}$ be a bounded measurable function and let $X,Y\in L^q$ be ${\mathcal F}_s$-measurable random variables. Then, for any $0\leq s\leq t$, any $p\geq 2$, and any $\zeta<1-\hat{H}$, we have that\n \begin{equation*}\n \left\|\int_s^t \Big( h\big(X,\bar{\Phi}_{s,r}^X(Y)\big)-\bar{h}(X)\Big) \,dr\right\|_{L^p}\lesssim |h|_\infty\Big(1+\|Y\|_{L^q}^{\frac{1}{p}}+\|X\|_{L^q}^{\frac{1}{p}}\Big)\varepsilon^{\frac{\zeta}{3p}}|t-s|^{1-\frac{\zeta}{3p}}.\n \end{equation*}\n\end{lemma}\n\begin{proof}\nThere is no loss of generality in assuming that $\bar{h}\equiv 0$. Notice also the trivial estimate $\big\|\int_s^t h\big(X,\bar{\Phi}_{s,r}^X(Y)\big)\,dr\big\|_{L^\infty}\leq|h|_\infty|t-s|$. By interpolation, we can therefore restrict ourselves to the case $p=2$. Clearly,\n \begin{equation*}\n \Expec{\left|\int_s^t h\big(X,\bar{\Phi}_{s,r}^X(Y)\big)\,dr\right|^2}=2\int_s^t\int_s^v\Expec{h\big(X,\bar{\Phi}_{s,r}^X(Y)\big)h\big(X,\bar{\Phi}_{s,v}^X(Y)\big)}\,dr\,dv.\n \end{equation*}\n For $r<v$, the correlation in the integrand can be estimated by means of the conditional ergodic estimates of \cref{sec:convergence}; integrating the resulting decay in $|v-r|$ yields the asserted bound for $p=2$. We omit the routine details.\n\end{proof}\n\nWith \cref{lem:sewing_helper_1} at hand, we can verify the first condition of \cref{prop:stochastic_sewing}:\n\begin{proposition}\label{prop:sewing_1}\n Let $2\leq p<q$ and suppose that $b(x,\cdot)\in\S_q(\kappa,R)$ for all $x\in\ensuremath{\mathbb{R}}^d$. Let $h:\ensuremath{\mathbb{R}}^d\times\ensuremath{\mathbb{R}}^n\to\ensuremath{\mathbb{R}}$ be bounded and measurable, let $Y_0\in L^q$, and let $X$ be an $({\mathcal F}_t)_{t\in[0,T]}$-adapted process with $\sup_{0\leq t\leq T}\|X_t\|_{L^q}<\infty$. Define\n \begin{equation*}\n A_{s,t}\ensuremath\triangleq\int_s^t \bigg(h\Big(X_s,\bar{\Phi}_{s,r}^{X_s}\big(\Phi_{0,s}^X(Y_0)\big)\Big)-\bar{h}(X_s)\bigg)\,dB_r\n \end{equation*}\n in the mixed Wiener-Young sense, see \eqref{eq:wiener_young}. Then $A\in H_\eta^p$ for any $\eta\in(\frac12,H)$ and any $\varepsilon>0$.
Moreover, there is a $\\gamma>0$ such that \n \\begin{equation*}\n \\|A\\|_{H_{\\eta}^p}\\lesssim |h|_\\infty\\Big(1+\\sup_{0\\leq t\\leq T}\\|X_t\\|_{L^q}\\Big)\\varepsilon^{\\gamma}.\n \\end{equation*}\n\\end{proposition}\n\\begin{proof}\n Again, we may assume that $\\bar{h}\\equiv 0$ without any loss of generality. Since $X$ is $({\\mathcal F}_t)_{t\\in[0,T]}$-adapted, we can use \\cref{lem:wiener_integral_bound} to obtain that, for $\\tilde{q}>p$ and $\\kappa\\in[0,H-\\frac12)$,\n \\begin{equation*}\n \\|A_{s,t}\\|_{L^p}\\lesssim\\left\\|\\left|h\\Big(X_s,\\bar{\\Phi}_{s,\\cdot}^{X_s}\\big(\\Phi_{0,s}^X(Y_0)\\big)\\Big)\\right|_{-\\kappa}\\right\\|_{L^{\\tilde{q}}}|t-s|^{H-\\kappa}.\n \\end{equation*}\n By \\cref{lem:sewing_helper_1,lem:fast_process_moments}, we obtain \n \\begin{equation*}\n \\left\\|\\int_u^v h\\Big(X_s,\\bar{\\Phi}_{s,r}^{X_s}\\big(\\Phi_{0,s}^X(Y_0)\\big)\\Big)\\,dr\\right\\|_{L^{\\tilde{q}}}\\lesssim |h|_\\infty\\bigg(1+\\|Y_0\\|^{\\frac{1}{\\tilde{q}}}_{L^q}+\\sup_{0\\leq r\\leq s}\\|X_r\\|_{L^q}^{\\frac{1}{\\tilde{q}}}\\bigg)\\varepsilon^{\\frac{\\zeta}{3\\tilde{q}}}|v-u|^{1-\\frac{\\zeta}{3\\tilde{q}}}\n \\end{equation*}\n for all $u,v\\in[s,t]$ and any $\\zeta<1-\\hat{H}$. Therefore, Kolmogorov's continuity theorem shows that\n \\begin{equation*}\n \\left\\|\\left|h\\Big(X_s,\\bar{\\Phi}_{s,\\cdot}^{X_s}\\big(\\Phi_{0,s}^X(Y_0)\\big)\\Big)\\right|_{-\\kappa}\\right\\|_{L^{\\tilde{q}}}\\lesssim |h|_\\infty\\bigg(1+\\|Y_0\\|^{\\frac{1}{\\tilde{q}}}_{L^q}+\\sup_{0\\leq t\\leq T}\\|X_t\\|_{L^q}^{\\frac{1}{\\tilde{q}}}\\bigg)\\varepsilon^{\\frac{\\zeta}{3\\tilde{q}}},\n \\end{equation*}\n provided that we choose $\\tilde{q}>\\kappa^{-1}\\left(1+\\frac{\\zeta}{3}\\right)$, and the final result follows.\n \\end{proof}\n\n\n\\subsubsection{Continuity of the Invariant Measures}\nLet $\\varepsilon>0$ and $s0$. 
In order to keep the statements of the next lemmas concise, we shall freely absorb quantities independent of $0\\leq s\\leq t$ and $\\varepsilon\\in(0,1]$ into the prefactor hidden beneath $\\lesssim$.\n\n\\begin{lemma}\\label{lem:continuity}\n Let $p\\geq 1$ and suppose that $b(x,\\cdot)\\in\\S_p(\\kappa,R)$ for all $x\\in\\ensuremath{\\mathbb{R}}^d$. Let $X,\\bar{X}\\in L^\\infty$, and $Y\\in L^p$ be ${\\mathcal F}_s$-measurable random variables. Then\n \\begin{equation*}\n \\left\\|\\bar{\\Phi}_{s,t}^X(Y)-\\bar{\\Phi}^{\\bar{X}}_{s,t}(Y)\\right\\|_{L^p}\\lesssim\\|X-\\bar{X}\\|_{L^{p}}.\n \\end{equation*}\n\\end{lemma}\n\\begin{proof}\n We abbreviate $\\Lambda\\ensuremath\\triangleq\\Lambda(\\kappa,R,p)$ and observe that, for any $s\\leq u\\leq r$,\n \\begin{align*}\n \\frac{d}{dr}\\Big|\\bar{\\Phi}_{u,r}^{X}(Y)-\\bar{\\Phi}_{u,r}^{\\bar{X}}(Y)\\Big|^2&=\\frac{2}{\\varepsilon}\\Braket{b\\big(X,\\bar{\\Phi}_{u,r}^{X}(Y)\\big)-b\\big(\\bar{X},\\bar{\\Phi}_{u,r}^{\\bar{X}}(Y)\\big),\\bar{\\Phi}_{u,r}^{X}(Y)-\\bar{\\Phi}_{u,r}^{\\bar{X}}(Y)}\\\\\n &\\leq \\frac{2(\\Lambda+1)}{\\varepsilon}\\Big|\\bar{\\Phi}_{u,r}^{X}(Y)-\\bar{\\Phi}_{u,r}^{\\bar{X}}(Y)\\Big|^2+ \\frac{\\Lip[\\|X\\|_{L^\\infty}\\vee\\|\\bar{X}\\|_{L^\\infty}]{b}^2}{2\\varepsilon}|X-\\bar{X}|^2\n \\end{align*}\n with probability $1$. It follows that\n \\begin{equation}\\label{eq:cont_interpolate_1}\n \\Big|\\bar{\\Phi}_{u,r}^X(Y)-\\bar{\\Phi}^{\\bar{X}}_{u,r}(Y)\\Big|\\lesssim\\Lip[\\|X\\|_{L^\\infty}\\vee\\|\\bar{X}\\|_{L^\\infty}]{b}e^{(\\Lambda+1)\\frac{|r-u|}{\\varepsilon}}|X-\\bar{X}|.\n \\end{equation}\n This bound is of course only useful on a time interval with length of order $\\varepsilon$. 
We therefore expand\n \\begin{equation*}\n \\left\\|\\bar{\\Phi}_{s,t}^X(Y)-\\bar{\\Phi}^{\\bar{X}}_{s,t}(Y)\\right\\|_{L^p}\\leq\\sum_{(t_i,t_{i+1})\\in P([s,t];\\varepsilon)}\\left\\|\\bar{\\Phi}_{t_{i+1},t}^{\\bar{X}}\\big(\\bar{\\Phi}_{s,t_{i+1}}^X(Y)\\big)-\\bar{\\Phi}_{t_{i},t}^{\\bar{X}}\\big(\\bar{\\Phi}_{s,t_{i}}^X(Y)\\big)\\right\\|_{L^p}.\n \\end{equation*}\n \\Cref{cor:fast_different_initial} shows that\n \\begin{align*}\n \\left\\|\\bar{\\Phi}_{t_{i+1},t}^{\\bar{X}}\\big(\\bar{\\Phi}_{s,t_{i+1}}^X(Y)\\big)-\\bar{\\Phi}_{t_{i},t}^{\\bar{X}}\\big(\\bar{\\Phi}_{s,t_{i}}^X(Y)\\big)\\right\\|_{L^p}&\\lesssim\\Big\\|\\bar{\\Phi}_{s,t_{i+1}}^X(Y)-\\bar{\\Phi}_{t_{i},t_{i+1}}^{\\bar{X}}\\big(\\bar{\\Phi}_{s,t_{i}}^X(Y)\\big)\\Big\\|_{L^p} e^{-c\\frac{|t-t_{i+1}|}{\\varepsilon}}\\\\\n & \\lesssim \\|X-\\bar{X}\\|_{L^p}e^{-c\\frac{|t-t_{i+1}|}{\\varepsilon}},\n \\end{align*}\n where the last inequality uses \\eqref{eq:cont_interpolate_1} together with $|t_{i+1}-t_i|\\asymp\\varepsilon$. Consequently,\n \\begin{equation*}\n \\left\\|\\bar{\\Phi}_{s,t}^X(Y)-\\bar{\\Phi}^{\\bar{X}}_{s,t}(Y)\\right\\|_{L^p}\\lesssim\\|X-\\bar{X}\\|_{L^p}\\sum_{(t_i,t_{i+1})\\in P([s,t];\\varepsilon)}e^{-c\\frac{|t-t_{i+1}|}{\\varepsilon}}\\lesssim\\|X-\\bar{X}\\|_{L^p}\n \\end{equation*}\n uniformly in $0\\leq s\\leq t$ and $\\varepsilon\\in(0,1]$. \n\\end{proof}\n\n\\Cref{lem:continuity} implies the local Lipschitz continuity of the invariant measure $\\pi^x$ in the parameter $x\\in\\ensuremath{\\mathbb{R}}^d$:\n\\begin{proposition}\\label{lem:wasserstein_holder}\n Let $p\\geq 1$ and $K>0$. Suppose that $b(x,\\cdot)\\in\\S_p(\\kappa,R)$ for all $x\\in\\ensuremath{\\mathbb{R}}^d$. 
Then\n \\begin{equation*}\n \\ensuremath{\\mathcal{W}}^p(\\pi^{x_1},\\pi^{x_2})\\lesssim |x_1-x_2|,\n \\end{equation*}\n uniformly for $|x_1|,|x_2|\\leq K$.\n\\end{proposition}\n\\begin{proof}\n Owing to \\cref{thm:geometric}, it follows that\n \\begin{equation*}\n \\ensuremath{\\mathcal{W}}^p(\\pi^{x_1},\\pi^{x_2})\\leq\\limsup_{\\varepsilon\\to 0}\\big\\|\\bar{\\Phi}^{x_1}_{0,1}(0)-\\bar{\\Phi}^{x_2}_{0,1}(0)\\big\\|_{L^p}\n \\end{equation*} \n and we conclude with \\cref{lem:continuity}.\n\\end{proof}\n\nThe simple proof of the following corollary is left to the reader.\n\\begin{corollary}\\label{cor:lipschitz_average}\nLet $h:\\ensuremath{\\mathbb{R}}^d\\times \\ensuremath{\\mathbb{R}}^n\\to \\ensuremath{\\mathbb{R}}^d$ be Lipschitz continuous. Then $\\bar{h}:\\ensuremath{\\mathbb{R}}^d\\to\\ensuremath{\\mathbb{R}}^d$ is locally Lipschitz.\n\\end{corollary}\n\\subsubsection{Controlling the Second Order Increment $\\delta A^\\varepsilon_{s,u,t}$}\\label{sec:sewing_2}\n\nUniform bounds on the second order increments are difficult to obtain even for the Markovian fast dynamic. 
The first technical estimate of this subsection is the following:\n\begin{lemma}\label{lem:sewing_helper_2}\n Let $1\leq p<q$ and suppose that $b(x,\cdot)\in\S_q(\kappa,R)$ for all $x\in\ensuremath{\mathbb{R}}^d$. Let $h:\ensuremath{\mathbb{R}}^d\times\ensuremath{\mathbb{R}}^n\to\ensuremath{\mathbb{R}}$ be Lipschitz continuous and let $X,\bar{X}\in L^\infty$ and $Y\in L^q$ be ${\mathcal F}_s$-measurable random variables. Then, for any $\rho\in(0,1)$, there is a $\gamma>0$ such that \n \begin{equation*}\n \Big\|\Expec{h\big(X,\bar{\Phi}^{X}_{s,t}(Y)\big)-h\big(\bar{X},\bar{\Phi}^{\bar{X}}_{s,t}(Y)\big)\,\middle|\,\mathcal{F}_s}\Big\|_{L^p}\lesssim\Lip{h}\big(1+\|Y\|_{L^q}\big)\|X-\bar{X}\|_{L^{p}}^{\rho}\left(1\wedge\frac{\varepsilon^\gamma}{|t-s|^\gamma}\right).\n \end{equation*}\n\end{lemma}\n\begin{proof}\n By \cref{cor:total_variation_conditional} \ref{it:ergodicity_wasserstein} and H\"older's inequality, we certainly have\n \begin{equation}\label{eq:sewing_helper_2_interpolate}\n \Big\|\Expec{h\big(X,\bar{\Phi}^{X}_{s,t}(Y)\big)-h\big(\bar{X},\bar{\Phi}^{\bar{X}}_{s,t}(Y)\big)\,\middle|\,\mathcal{F}_s}\Big\|_{L^p}\lesssim\Lip{h}\big(1+\|Y\|_{L^q}\big)\left(1\wedge\frac{\varepsilon^{\zeta}}{|t-s|^{\zeta}}\right).\n \end{equation}\n On the other hand, by the continuity lemma (\cref{lem:continuity}),\n \begin{align*}\n &\phantom{\lesssim}\Big\|\Expec{h\big(X,\bar{\Phi}^{X}_{s,t}(Y)\big)-h\big(\bar{X},\bar{\Phi}^{\bar{X}}_{s,t}(Y)\big)\,\middle|\,\mathcal{F}_s}\Big\|_{L^p}\\\n &\lesssim\Lip{h}\left(\|X-\bar{X}\|_{L^p}+\Big\|\bar{\Phi}^{X}_{s,t}(Y)-\bar{\Phi}^{\bar{X}}_{s,t}(Y)\Big\|_{L^p}\right)\lesssim\Lip{h}\|X-\bar{X}\|_{L^{p}}.\n \end{align*}\n Finally, we interpolate this bound with \eqref{eq:sewing_helper_2_interpolate}.\n\end{proof}\n\nOur remaining task is to derive an estimate on the distance between $\Phi^Z_{s,t}$ and $\bar{\Phi}_{s,t}^{Z_s}$. This is based on the following version of \cref{lem:continuity}:\n\begin{lemma}\label{lem:continuity_path}\n Let $p\geq 1$ and suppose that $b(x,\cdot)\in\S_p(\kappa,R)$ for all $x\in\ensuremath{\mathbb{R}}^d$. Let $Y\in L^p$ be ${\mathcal F}_s$-measurable and $Z$ be a continuous process.
Assume that $|Z|_\\infty\\in L^\\infty$. Then\n \\begin{equation*}\n \\Big\\|\\bar{\\Phi}^{Z_s}_{s,t}(Y)-\\Phi_{s,t}^Z(Y)\\Big\\|_{L^p}\\lesssim\\Big\\|\\sup_{r\\in[s,t]}|Z_r-Z_s|\\Big\\|_{L^{p}}\n \\end{equation*}\n\\end{lemma}\n\\begin{proof}\n The reader can easily check that the very same argument we gave at the beginning of the proof of \\cref{lem:continuity} also shows that, for $0\\leq s\\leq u\\leq r\\leq T$,\n \\begin{align*}\n \\Big|\\bar{\\Phi}_{u,r}^{Z_s}(Y)-\\Phi_{u,r}^Z(Y)\\Big|&\\lesssim\\Lip[\\||Z|_{\\infty}\\|_{L^\\infty}]{b}\\left(\\int_{\\frac{u}{\\varepsilon}}^{\\frac{r}{\\varepsilon}} e^{2(\\Lambda+1)\\left(\\frac{r}{\\varepsilon}-v\\right)}|Z_{\\varepsilon v}-Z_s|^2\\,dv\\right)^{\\frac12}\\\\\n &\\lesssim \\sup_{v\\in[u,r]}|Z_v-Z_s| e^{(\\Lambda+1)\\frac{|r-u|}{\\varepsilon}}.\n \\end{align*}\n The asserted bound then follows along the same lines as \\cref{lem:continuity}.\n\\end{proof}\n\nThe following estimate is now an easy consequence:\n\\begin{lemma}\\label{lem:sewing_helper_3}\n Let $p\\geq 1$ and suppose that $b(x,\\cdot)\\in\\S_p(\\kappa,R)$ for all $x\\in\\ensuremath{\\mathbb{R}}^d$. Let $h:\\ensuremath{\\mathbb{R}}^d\\times\\ensuremath{\\mathbb{R}}^n\\to\\ensuremath{\\mathbb{R}}$ be Lipschitz continuous. Assume furthermore that $X$ and $Y$ are ${\\mathcal F}_u$- and ${\\mathcal F}_s$-measurable random variables, respectively. Moreover, let $Z\\in{\\mathcal B}_{\\alpha, p}([0,T],\\ensuremath{\\mathbb{R}}^d)$ for some $\\alpha>0$ and assume that $|Z|_\\infty\\in L^\\infty$. 
Then\n \\begin{equation}\\label{eq:sewing_moderately_3}\n \\left\\|\\Expec{h\\Big(X,\\bar{\\Phi}^{X}_{u,t}\\big(\\bar{\\Phi}_{s,u}^{Z_s}(Y)\\big)\\Big)-h\\Big(X,\\bar{\\Phi}^{X}_{u,t}\\big(\\Phi_{s,u}^{Z}(Y)\\big)\\Big)\\,\\middle|\\,\\mathcal{F}_u}\\right\\|_{L^{p}}\\lesssim \\Lip{h}\\|Z\\|_{{\\mathcal B}_{\\alpha,p}}|u-s|^{\\alpha}e^{-c\\frac{|t-u|}{\\varepsilon}}.\n \\end{equation} \n\\end{lemma}\n\\begin{proof}\n By \\cref{cor:fast_different_initial}, we have that\n \\begin{align*}\n &\\phantom{\\lesssim}\\Big\\|\\Expec{h\\Big(X,\\bar{\\Phi}^{X}_{u,t}\\big(\\bar{\\Phi}_{s,u}^{Z_s}(Y)\\big)\\Big)-h\\Big(X,\\bar{\\Phi}^{X}_{u,t}\\big(\\Phi_{s,u}^{Z}(Y)\\big)\\Big)\\,\\middle|\\,\\mathcal{F}_u}\\Big\\|_{L^p}\\\\\n &\\lesssim \\Lip{h}\\Big\\|\\bar{\\Phi}_{s,u}^{Z_s}(Y)-\\Phi_{s,u}^{Z}(Y)\\Big\\|_{L^p}e^{-c\\frac{|t-u|}{\\varepsilon}}.\n \\end{align*}\n By \\cref{lem:continuity_path},\n \\begin{equation*}\n \\Big\\|\\bar{\\Phi}_{s,u}^{Z_s}(Y)-\\Phi_{s,u}^{Z}(Y)\\Big\\|_{L^p}\\lesssim\\|Z\\|_{{\\mathcal B}_{\\alpha,p}}|u-s|^{\\alpha}.\\qedhere\n \\end{equation*}\n\\end{proof}\n\nFinally, we can establish the second estimate needed for the application of \\cref{prop:stochastic_sewing}:\n\\begin{proposition}\\label{prop:sewing_2}\n Let $1\\leq p1-H$ and $|X|_\\infty\\in L^\\infty$. Define \n \\begin{equation*}\n A_{s,t}\\ensuremath\\triangleq\\int_s^t \\bigg(h\\Big(X_s,\\bar{\\Phi}_{s,r}^{X_s}\\big(\\Phi_{0,s}^X(Y_0)\\big)\\Big)-\\bar{h}(X_s)\\bigg)\\,dB_r,\n \\end{equation*}\n in the mixed Wiener-Young sense, see \\eqref{eq:wiener_young}. Then $A\\in\\bar{H}_{\\bar{\\eta}}^p$ for any $\\bar{\\eta}<\\alpha+H$ and any $\\varepsilon>0$. 
Moreover, there is a $\\gamma>0$ such that\n \\begin{equation*}\n \\vertiii{A}_{\\bar{H}_{\\bar{\\eta}}^p}\\lesssim\\Lip{h}\\big(1\\vee\\||X|_\\infty\\|_{L^\\infty}\\big)\\big(1\\vee\\|X\\|_{{\\mathcal B}_\\alpha,p}\\big)\\varepsilon^{\\gamma}.\n \\end{equation*}\n\\end{proposition}\n\\begin{proof}\n Fix $1<\\bar{\\eta}<\\alpha+H$ and choose $\\rho\\in(0,1)$ such that $\\bar{\\eta}0$ sufficiently small. Here, the last inequality used that, for any $p\\geq 1$, $\\big\\|\\dot{\\bar{B}}_r^u\\big\\|_{L^{p}}\\lesssim |r-u|^{H-1}$ together with the elementary fact\n \\begin{equation*}\n \\int_u^t\\frac{1}{|r-u|^{1-H}}\\left(1\\wedge\\frac{\\varepsilon^\\gamma}{|r-u|^\\gamma}\\right)\\,dr\\lesssim\\varepsilon^\\delta|t-u|^{H-\\delta}\n \\end{equation*}\n for any $\\delta\\in(0,\\gamma]$.\n\n The term $\\rom{2}$ can be handled similarly in view of \\cref{lem:sewing_helper_3}.\n\\end{proof}\n\n\\subsection{Proof of \\cref{thm:feedback_fractional}}\\label{sec:proof}\n\nThe estimates of the previous two subsection furnish the following fundamental estimates:\n\\begin{proposition}\\label{prop:final_control}\n Let $2\\leq p1-H$ such that $X$ has $\\alpha$-H\\\"older sample paths and $X\\in{\\mathcal B}_{\\alpha,p}$. 
If, in addition, $|X|_\\infty\\in L^\\infty$, then, for any $\\eta0$ such that\n \\begin{equation}\n \\left\\|\\int_0^\\cdot \\Big(h\\big(X_r,\\Phi_{0,s}^X(Y_0)\\big)-\\bar{h}(X_r)\\Big)\\,dB_r\\right\\|_{{\\mathcal B}_{\\eta,p}}\\lesssim\\big(|h|_\\infty+\\Lip{h}\\big)\\big(1+\\||X|_\\infty\\|_{L^\\infty}\\big)\\big(1+\\|X\\|_{{\\mathcal B}_{\\alpha,p}}\\big)\\varepsilon^{\\gamma},\\label{eq:combine_sewing_1}\n \\end{equation}\n and\n \\begin{equation}\\label{eq:combine_sewing_2}\n \\left\\|\\int_0^\\cdot h\\big(X_r,\\Phi_{0,r}^X(Y_0)\\big)\\,dB_r\\right\\|_{{\\mathcal B}_{\\eta,p}}\\lesssim\\big(|h|_\\infty+\\Lip{h}\\big)\\big(1+\\||X|_\\infty\\|_{L^\\infty}\\big)\\big(1+\\|X\\|_{{\\mathcal B}_{\\alpha,p}}\\big),\n \\end{equation}\n uniformly in $0\\leq s0$ and $M>0$, let us define the $({\\mathcal F}_t)_{t\\geq 0}$-stopping time $\\tau_M^\\varepsilon\\ensuremath\\triangleq\\inf\\{t\\geq 0:\\,|X_t^\\varepsilon|>M\\}$. Applying the previous proposition to the slow-fast system \\eqref{eq:slow_feedback_sec}--\\eqref{eq:fast_feedback_sec}, we can deduce relative compactness of the stopped slow motion $X^{\\varepsilon,M}\\ensuremath\\triangleq X^\\varepsilon_{\\cdot\\wedge\\tau_M^\\varepsilon}$:\n\n\\begin{corollary}\\label{cor:tightness}\n Consider the slow-fast system \\eqref{eq:slow_feedback_sec}--\\eqref{eq:fast_feedback_sec} with \\cref{cond:feedback} in place. Let $\\beta<\\frac12\\wedge\\hat{H}$ and $p\\geq 2$. Suppose that there are $\\kappa,R>0$ and $q>p$ such that $b(x,\\cdot)\\in\\S_q(\\kappa,R)$ for each $x\\in\\ensuremath{\\mathbb{R}}^d$. 
Then, for any $M>0$,\n \\begin{equation*}\n \\sup_{\\varepsilon\\in(0,1]}\\big\\|X^{\\varepsilon,M}\\big\\|_{{\\mathcal B}_{\\beta,p}}<\\infty.\n \\end{equation*}\n\\end{corollary} \n\\begin{proof}\n Recall from \\cref{cor:norm_bound_solution} that, for each $\\varepsilon>0$, there is a unique global solution $X^\\varepsilon$ to \\eqref{eq:slow_feedback_sec} with values in $\\ensuremath{\\mathcal{C}}^{\\alpha}([0,T],\\ensuremath{\\mathbb{R}}^d)$ for some $\\alpha>1-H$. Moreover, since the H\\\"older norm of the stopped solution $X^{\\varepsilon,M}$ is controlled by the H\\\"older norm of $X^\\varepsilon$, the argument of \\cref{cor:norm_bound_solution} also shows that $\\big\\|X^{\\varepsilon,M}\\big\\|_{{\\mathcal B}_{\\beta,p}}<\\infty$ for each $\\beta<\\frac12\\wedge\\hat{H}$ and $p\\geq 1$. Employing \\cref{prop:final_control}, we obtain that, for any $\\gamma0$ sufficiently small, the proof is concluded by a standard iteration argument.\n\\end{proof}\n\nNow we can finish the proof of \\cref{thm:feedback_fractional} by localizing the argument of Hairer and Li. To this end, we rely on the following deterministic residue lemma:\n\\begin{lemma}[Residue Lemma]\\label{lem:residue_lemma}\n Let $F:\\ensuremath{\\mathbb{R}}^d\\to\\ensuremath{\\mathbb{R}}^d$ be Lipschitz continuous, $G:\\ensuremath{\\mathbb{R}}^d\\to\\Lin[m]{d}$ be of class $\\ensuremath{\\mathcal{C}}_b^2$, and $\\ensuremath{\\mathfrak{h}}\\in\\ensuremath{\\mathcal{C}}^\\alpha([0,T],\\ensuremath{\\mathbb{R}}^n)$ for some $\\alpha>\\frac12$. Moreover, let $Z,\\bar{Z}\\in\\ensuremath{\\mathcal{C}}^{\\tilde{\\alpha}}([0,T],\\ensuremath{\\mathbb{R}}^d)$ for some $\\tilde{\\alpha}\\in(1-\\alpha,\\alpha]$ with $Z_0=\\bar{Z}_0$. 
Then there is a constant $C$ depending only on $F$, $G$, and the terminal time $T$ such that\n \\begin{equation*}\n |z-\\bar{z}|_{\\ensuremath{\\mathcal{C}}^{\\tilde{\\alpha}}}\\leq C\\exp\\left(C|\\ensuremath{\\mathfrak{h}}|_{\\ensuremath{\\mathcal{C}}^\\alpha}^{\\frac1\\alpha}+C|Z|_{\\ensuremath{\\mathcal{C}}^{\\tilde{\\alpha}}}^{\\frac{1}{\\tilde{\\alpha}}}+C|\\bar{Z}|_{\\ensuremath{\\mathcal{C}}^{\\tilde{\\alpha}}}^{\\frac{1}{\\tilde{\\alpha}}}\\right)|Z-\\bar{Z}|_{\\ensuremath{\\mathcal{C}}^{\\tilde{\\alpha}}},\n \\end{equation*}\n where $z$ and $\\bar{z}$ are the solutions to the equations\n \\begin{equation*}\n z_t=Z_t+\\int_0^t F(z_s)\\,ds+\\int_0^t G(z_s)\\,d\\ensuremath{\\mathfrak{h}}_s,\\qquad\\bar{z}_t=\\bar{Z}_t+\\int_0^t F(\\bar{z}_s)\\,ds+\\int_0^t G(\\bar{z}_s)\\,d\\ensuremath{\\mathfrak{h}}_s.\n \\end{equation*}\n\\end{lemma}\nAlthough the statement of \\cref{lem:residue_lemma} is slightly stronger than \\cite[Lemma 2.2]{Hairer2020}, it is straightforward to show that the very same proof still applies. We therefore omit the details and finally turn to the proof of the main result of this article:\n\n\\begin{proof}[{Proof of \\cref{thm:feedback_fractional}}]\n First observe that, by the assumptions of the theorem and \\cref{cor:lipschitz_average}, there exists a unique global solution to the averaged equation \\eqref{eq:effective_dynamics}, see \\cite{Lyons1998,Lyons2002,Rascanu2002}. We fix $\\bar{\\alpha}\\in(\\alpha,H)$ with $(\\bar\\alpha-\\alpha)^{-1}0$. 
Consequently, by \\cref{prop:final_control}, we deduce that\n \\begin{align*}\n \\left\\|\\int_0^\\cdot \\Big(g\\big(X_r^{\\varepsilon,M},\\Phi_{0,r}^{X^{\\varepsilon,M}}(Y_0)\\big)-\\bar{g}\\big(X_r^{\\varepsilon,M}\\big)\\Big)\\,dB_r\\right\\|_{{\\mathcal B}_{\\bar{\\alpha},p}}&\\lesssim\\varepsilon^\\gamma,\n \\\\\n \\left\\|\\int_0^\\cdot \\Big(f\\big(X_r^{\\varepsilon,M},\\Phi_{0,r}^{X^{\\varepsilon,M}}(Y_0)\\big)-\\bar{f}\\big(X_r^{\\varepsilon,M}\\big)\\Big)\\,dr\\right\\|_{{\\mathcal B}_{\\bar{\\alpha},p}}&\\lesssim\\varepsilon^\\gamma.\n \\end{align*}\n Therefore, $\\big\\|\\hat{X}^{\\varepsilon,M}-\\bar{X}^{\\varepsilon,M}\\big\\|_{{\\mathcal B}_{\\bar{\\alpha},p}}\\lesssim\\varepsilon^\\gamma$, where\n \\begin{align*}\n \\hat{X}^{\\varepsilon,M}_t&\\ensuremath\\triangleq X_0+\\int_0^t f\\big(X^{\\varepsilon,M}_r,\\Phi_{0,r}^{X^{\\varepsilon,M}}(Y_0)\\big)\\,dr+\\int_0^t g\\big(X^{\\varepsilon,M}_r,\\Phi_{0,r}^{X^{\\varepsilon,M}}(Y_0)\\big)\\,dB_r,\\\\\n \\bar{X}^{\\varepsilon,M}_t&\\ensuremath\\triangleq X_0+\\int_0^t\\bar{f}\\big(X^{\\varepsilon,M}_r\\big)\\,dr+\\int_0^t\\bar{g}\\big(X^{\\varepsilon,M}_r\\big)\\,dB_r.\n \\end{align*}\n In particular, $\\big|\\hat{X}^{\\varepsilon,M}-\\bar{X}^{\\varepsilon,M}\\big|_{\\ensuremath{\\mathcal{C}}^\\alpha}\\to 0$ in probability by the embedding \\eqref{eq:embeddings}. 
Note also the decomposition\n \\begin{equation*}\n X_t^{\\varepsilon,M}=\\hat{X}_t^{\\varepsilon,M}-\\bar{X}_t^{\\varepsilon,M}+X_0+\\int_0^t\\bar{f}(X_r^{\\varepsilon})\\,dr+\\int_0^t \\bar{g}(X_r^{\\varepsilon})\\,dB_r,\\quad t\\in[0,\\tau_M^\\varepsilon\\wedge T],\n \\end{equation*}\n whence \\cref{lem:residue_lemma} furnishes the bound\n \\begin{equation}\\label{eq:residue_bound}\n \\big|X^{\\varepsilon}-\\bar{X}\\big|_{\\ensuremath{\\mathcal{C}}^\\alpha([0,\\tau_M^\\varepsilon\\wedge T])}\\leq C\\exp\\left(C|B|_{\\ensuremath{\\mathcal{C}}^\\alpha}^{\\frac{1}{\\alpha}}+C\\big|\\hat{X}^{\\varepsilon,M}-\\bar{X}^{\\varepsilon,M}\\big|_{\\ensuremath{\\mathcal{C}}^\\alpha}^{\\frac{1}{\\alpha}}\\right)\\big|\\hat{X}^{\\varepsilon,M}-\\bar{X}^{\\varepsilon,M}\\big|_{\\ensuremath{\\mathcal{C}}^\\alpha}.\n \\end{equation}\n As we have seen above, for each $M>0$, the right-hand side goes to $0$ in probability as $\\varepsilon\\to 0$. Hence, we also have that $\\big|X^{\\varepsilon}-\\bar{X}\\big|_{\\ensuremath{\\mathcal{C}}^\\alpha([0,\\tau_M^\\varepsilon\\wedge T])}\\to 0$ in probability.\n\n On the other hand, note that\n \\begin{align}\n \t\\ensuremath\\mathbb{P}(\\tau_M^\\varepsilon<T)\\leq\\ensuremath\\mathbb{P}\\left(\\big|X^{\\varepsilon}-\\bar{X}\\big|_{\\ensuremath{\\mathcal{C}}^\\alpha([0,\\tau_M^\\varepsilon\\wedge T])}>1\\right)+\\ensuremath\\mathbb{P}\\left(\\big|\\bar{X}\\big|_{\\ensuremath{\\mathcal{C}}^\\gamma([0,T])}>T^{-\\gamma}(M-\\|X_0\\|_{L^\\infty})-1\\right)\\label{eq:split}\n \\end{align}\n for each $\\gamma>0$. By \\cref{prop:nualart}, we know that $\\big|\\bar{X}\\big|_{\\ensuremath{\\mathcal{C}}^\\gamma([0,T])}\\in L^1$ provided that $\\gamma<\\frac12$. We fix such a $\\gamma$.\n\n It is now easy to finish the proof. Let $\\delta_1,\\delta_2\\in(0,1)$ be given. 
Then we can find an $M>0$ such that\n \\begin{equation*}\n \t\\ensuremath\\mathbb{P}\\left(\\big|\\bar{X}\\big|_{\\ensuremath{\\mathcal{C}}^\\gamma([0,T])}>T^{-\\gamma}(M-\\|X_0\\|_{L^\\infty})-1\\right)\\leq\\frac{\\delta_2}{2}.\n \\end{equation*}\n For this $M$, we can also find an $\\varepsilon_0>0$ such that\n \\begin{equation*}\n \t\\ensuremath\\mathbb{P}\\left(\\big|X^{\\varepsilon}-\\bar{X}\\big|_{\\ensuremath{\\mathcal{C}}^\\alpha([0,\\tau_M^\\varepsilon\\wedge T])}>\\delta_1\\right)\\leq\\frac{\\delta_2}{4}\\qquad\\forall\\,\\varepsilon\\in(0,\\varepsilon_0).\n \\end{equation*}\n The estimate \\eqref{eq:split} therefore yields that\n \\begin{align*}\n \t\\ensuremath\\mathbb{P}\\left(\\big|X^{\\varepsilon}-\\bar{X}\\big|_{\\ensuremath{\\mathcal{C}}^\\alpha([0,T])}>\\delta_1\\right)&\\leq\\ensuremath\\mathbb{P}\\left(\\big|X^{\\varepsilon}-\\bar{X}\\big|_{\\ensuremath{\\mathcal{C}}^\\alpha([0,\\tau_M^\\varepsilon\\wedge T])}>\\delta_1,\\tau_M^\\varepsilon\\geq T\\right)+\\ensuremath\\mathbb{P}(\\tau_M^\\varepsilon<T)\\\\\n \t&\\leq 2\\,\\ensuremath\\mathbb{P}\\left(\\big|X^{\\varepsilon}-\\bar{X}\\big|_{\\ensuremath{\\mathcal{C}}^\\alpha([0,\\tau_M^\\varepsilon\\wedge T])}>\\delta_1\\right)+\\frac{\\delta_2}{2}\\leq\\delta_2\n \\end{align*}\n for all $\\varepsilon\\in(0,\\varepsilon_0)$. Hence, $\\big|X^{\\varepsilon}-\\bar{X}\\big|_{\\ensuremath{\\mathcal{C}}^\\alpha([0,T])}\\to 0$ in probability as $\\varepsilon\\to 0$, as required.\n\\end{proof}\n\n\\begin{remark}\n The proof above shows that we can choose\n \\begin{equation*}\n \\lambda_0=\\inf_{x\\in\\ensuremath{\\mathbb{R}}^d}\\Lambda(\\kappa,R,p)\n \\end{equation*}\n for any $p>\\max\\big(2,(H-\\alpha)^{-1}\\big)$ in \\cref{thm:feedback_fractional}. Here, $\\Lambda$ is the constant from \\cref{prop:conditional_initial_condition_wasserstein}.\n\\end{remark}\n\n\n\\subsection{Smoothness of the Averaged Coefficients}\n\nLet us finally show that an \\emph{everywhere contractive} fast process falls in the regime of \\cref{thm:feedback_fractional}. While smoothness of $\\bar g$ also holds under less restrictive conditions, the proof becomes much more involved. 
To keep this article concise, we choose to report on these results in future work.\n\n\n\\begin{corollary}\\label{cor:smooth}\n Suppose that\n \\begin{itemize}\n \\item $g\\in\\ensuremath{\\mathcal{C}}_b^3\\big(\\ensuremath{\\mathbb{R}}^d\\times\\ensuremath{\\mathbb{R}}^n,\\Lin[m]{d}\\big)$,\n \\item there is a $\\kappa>0$ such that $b(x,\\cdot)\\in\\S(\\kappa,0,0)$ for every $x\\in\\ensuremath{\\mathbb{R}}^d$,\n \\item $b\\in\\ensuremath{\\mathcal{C}}^3\\big(\\ensuremath{\\mathbb{R}}^d\\times\\ensuremath{\\mathbb{R}}^n,\\ensuremath{\\mathbb{R}}^n\\big)$ is globally Lipschitz continuous and there is an $N\\in\\ensuremath{\\mathbb{N}}$ such that, for each $i,j,k\\in\\{x,y\\}$,\n \\begin{equation*}\n |D^2_{i,j} b(x,y)|+|D^3_{i,j,k}b(x,y)|\\lesssim 1+|y|^N\\qquad\\forall\\,x\\in\\ensuremath{\\mathbb{R}}^d,\\,\\forall\\, y\\in\\ensuremath{\\mathbb{R}}^n.\n \\end{equation*}\n \\end{itemize}\n Then the conclusion of \\cref{thm:feedback_fractional} holds.\n\\end{corollary}\n\\begin{example}\n Let $V\\in\\ensuremath{\\mathcal{C}}^4(\\ensuremath{\\mathbb{R}}^d\\times\\ensuremath{\\mathbb{R}}^n)$. If $\\inf_{x,y}D_{y,y}^2 V(x,y)\\geq\\kappa$, $|D^2_{x,y}V|_\\infty+|D^2_{y,y}V|_\\infty<\\infty$, and, for each $i,j,k\\in\\{x,y\\}$,\n \\begin{equation*}\n |D^3_{i,j,y}V(x,y)|+|D^4_{i,j,k,y}V(x,y)|\\lesssim 1+|y|^N\\qquad\\forall\\,x\\in\\ensuremath{\\mathbb{R}}^d,\\,\\forall\\, y\\in\\ensuremath{\\mathbb{R}}^n,\n \\end{equation*}\n then $b=-D_y V$ falls in the regime of \\cref{cor:smooth}. 
To give a concrete example, we can choose $V(x,y)=\\big(2+\\sin(x)\\big)\\big(y^2+\\sin(y)\\big)$, which furnishes the drift $b(x,y)=-\\big(2+\\sin(x)\\big)\\big(2y+\\cos(y)\\big)$.\n\\end{example}\n\n\n\\begin{proof}[Proof of \\cref{cor:smooth}]\n In order to apply \\cref{thm:feedback_fractional} it is enough to show that, for any $g\\in\\ensuremath{\\mathcal{C}}_b^3(\\ensuremath{\\mathbb{R}}^n)$, the function\n \\begin{equation*}\n \\bar{h}(x)\\ensuremath\\triangleq\\int_{\\ensuremath{\\mathbb{R}}^n}g(y)\\,\\pi^x(dy)\n \\end{equation*}\n is again of class $\\ensuremath{\\mathcal{C}}_b^2(\\ensuremath{\\mathbb{R}}^d)$. To this end, we define $h_t(x)\\ensuremath\\triangleq\\Expec{g(Y_t^x)}$ where $Y^x$ is the solution to the SDE\n \\begin{equation*}\n dY_t^x=b(x,Y_t^x)\\,dt+\\sigma\\,d\\hat{B}\n \\end{equation*}\n started in the generalized initial condition $\\delta_0\\otimes\\ensuremath{\\mathsf{W}}$. Note that $h_t\\to\\bar{h}$ pointwise as $t\\to\\infty$ by \\cref{thm:geometric}. Since $h_t\\in\\ensuremath{\\mathcal{C}}_b^2(\\ensuremath{\\mathbb{R}}^d)$ for each $t\\geq 0$, it thus suffices to show that \n \\begin{equation}\\label{eq:derivative_bound}\n \\sup_{t\\geq 0} \\left(|D h_t|_\\infty+|D^2 h_t|_\\infty\\right)<\\infty\n \\end{equation} \n and both $D h_t$ and $D^2 h_t$ converge locally uniformly along a subsequence. 
By a straight-forward `diagonal sequence' argument, we actually only need to prove uniform convergence on a fixed compact $K\\subset\\ensuremath{\\mathbb{R}}^d$.\n\n Under the assumptions of the corollary, it is easy to see that the mapping $x\\mapsto Y_t^x$ is three-times differentiable for each $t\\geq 0$ and it holds that\n \\begin{align*}\n D_x Y_t^x&=\\int_0^tJ_{s,t}D_x b(x,Y_s^x)\\,ds, \\label{eq:first_derivative}\\\\\n D^2_{x,x} Y_t^x(u\\otimes v)&=\\int_0^tJ_{s,t}\\Big(D_{x,x}^2 b(x,Y_s^x)(u\\otimes v)+2D_{x,y}^2 b(x,Y_s^x)\\big(u\\otimes D_x Y_s^x(v)\\big) \\nonumber\\\\\n &\\phantom{=\\int_0^tJ_{s,t}}+D^2_{y,y}b(x,Y_s^x)\\big(D_x Y_s^x(u)\\otimes D_x Y_s^x(v)\\big)\\Big)\\,ds,\n \\end{align*}\n where $J_{s,t}$ solves the homogeneous problem\n \\begin{equation*}\n J_{s,t}=\\mathrm{id}+\\int_s^t D_yb(x,Y_r^x)J_{s,r}\\,dr.\n \\end{equation*}\n Since $b(x,\\cdot)\\in\\S(\\kappa,0,0)$, it is not hard to see that, for each $x\\in\\ensuremath{\\mathbb{R}}^d$ and $y\\in\\ensuremath{\\mathbb{R}}^n$, $D_yb(x,y)\\leq-\\kappa$ in the sense of quadratic forms. In particular, the operator norm of $J$ satisfies the bound\n \\begin{equation*}\n |J_{s,t}|\\leq e^{-\\kappa(t-s)}.\n \\end{equation*}\n By an argument similar to \\cref{lem:fast_process_moments}, it follows that, for any $p\\geq 1$,\n \\begin{equation*}\n \\sup_{t\\geq 0}\\sup_{x\\in\\ensuremath{\\mathbb{R}}^d}\\big\\|D_xY_t^x\\big\\|_{L^p}<\\infty\\quad\\text{and}\\quad \\sup_{t\\geq 0}\\sup_{x\\in\\ensuremath{\\mathbb{R}}^d}\\big\\|D_{x,x}^2Y_t^x\\big\\|_{L^p}<\\infty.\n \\end{equation*}\n Based on this, it is straight-forward to verify \\eqref{eq:derivative_bound}. Consequently, by the Arzela-Ascoli theorem, there is a subsequence of times along which $Dh$ converges uniformly on $K$. 
By a similar---albeit more tedious---computation, the reader can easily check that also \n \\begin{equation*}\n \\sup_{t\\geq 0}\\sup_{x\\in\\ensuremath{\\mathbb{R}}^d}\\big\\|D_{x,x,x}^3Y_t^x\\big\\|_{L^p}<\\infty.\n \\end{equation*}\n In particular, $D^3 h$ is uniformly bounded, whence we can pass to a further subsequence along which $D^2 h$ also converges uniformly on $K$. Therefore, $\\bar{h}\\in\\ensuremath{\\mathcal{C}}_b^2(\\ensuremath{\\mathbb{R}}^d)$ as required.\n\\end{proof}\n\n{\n\\footnotesize\n\\bibliographystyle{alpha}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\t\t\t\t\n\t\tA \\emph{square} $S$ in a matrix $M= \\left( a_{i,j} \\right)$ is a $2 \\times 2$ submatrix of the form \\[S = \\left(\\begin{array}{cc}\n\t\t\ta_{i,j} & a_{i, j+s} \\\\\n\t\t\ta_{i+s,j} & a_{i+s, j+s}\n\t\t\\end{array}\\right).\n\t\t\\]\n\t\t\n\t\tIn 1996 Erickson \\cite{erickson1996introduction} asked for the largest $n$ such that there exists an $n \\times n$ binary matrix $M$ with no squares which have constant entries. An upper bound was first given by Axenovich and Manske \\cite{axenovich2008monochromatic}, before the answer 14 was determined by Bacher and Eliahou in \\cite{bacher2010extremal}. \n\t\t\n\t\tRecently, Ar\\'evalo, Montejano and Rold\\'an-Pensado \\cite{arevalo2020zero} initiated the study of a zero-sum variant of Erickson's problem. Here we wish to avoid \\emph{zero-sum squares}, squares with entries that sum to $0$.\n\t\t\n\t\tZero-sum problems have been well-studied since the classic Erd\\H{o}s-Ginsburg-Ziv Theorem in 1961 \\cite{erdos1961theorem}. Much of the research has been on zero-sum problems in finite abelian groups (see the survey \\cite{gao2006zero} for details), but problems have also been studied in other settings such as on graphs (see e.g. \\cite{caro2016ero, caro2019zero, caro2020zero, bialostocki1993zero}). 
Of particular relevance is the result of Balister, Caro, Rousseau and Yuster in \\cite{balister2002zero} on submatrices of integer valued matrices where the rows and columns sum to $0 \\mod p$, and the result of Caro, Hansberg and Montejano on zero-sum subsequences in bounded sum $\\{-1,1\\}$-sequences \\cite{caro2019zerosum}. \n\t\t\n\t\tGiven an $n \\times m$ matrix $M = \\left( a_{i,j} \\right)$ define the \\emph{discrepancy} of $M$ as the sum of the entries, that is\n\t\t\\[\\disc(M) = \\sum_{\\substack{1 \\leq i \\leq n\\\\1 \\leq j \\leq m}} a_{i,j}. \\]\n\t\tWe say a square $S$ is a \\emph{zero-sum square} if $\\disc(S) = 0$, or equivalently,\n\t\t\\[a_{i,j} + a_{i, j+s} + a_{i+s,j} + a_{i+s, j+s} = 0.\\]\n\t\t\n\t\tWe will be interested in $\\{-1,1\\}$-matrices $M$ which do not contain any zero-sum squares, and we shall call such matrices \\emph{zero-sum square free}. Clearly matrices with at most one $-1$ are zero-sum square free and, in general, there are many such matrices when the number of $-1$s is low. Instead, we will be interested in matrices which have a similar number of $1$s and $-1$s or, equivalently, matrices with small discrepancy (in absolute value).\n\t\t\n\t\tAn $n \\times m$ $\\{-1,1\\}$-matrix $M = \\left(a_{i,j}\\right)$ is said to be \\emph{$t$-diagonal} for some $0 \\leq t \\leq n +m -1$ if\n\t\t\\[a_{i,j} = \\begin{cases}\n\t\t\t1 & i + j \\leq t + 1,\\\\\n\t\t\t-1 & i + j \\geq t +2.\n\t\t\\end{cases}\\]\n\t\tWe say a matrix $M$ is \\emph{diagonal} if there is some $t$ such that a $t$-diagonal matrix $N$ can be obtained from $M$ by applying vertical and horizontal reflections. \n\t\tDiagonal matrices are of particular interest since they can have low discrepancy, yet they never contain a zero-sum square.\n\t\t\n\t\tAr\\'evalo, Montejano and Rold\\'an-Pensado \\cite{arevalo2020zero} proved that, except when $n \\leq 4$, every $n \\times n$ non-diagonal $\\{-1,1\\}$-matrix $M$ with $|\\disc(M)| \\leq n$ has a zero-sum square. 
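The definitions above are easy to operationalise; the following is a minimal sketch (our illustration, not code from the paper) of the discrepancy, the zero-sum square check, and the $t$-diagonal construction:

```python
# Minimal sketch (not the authors' code): discrepancy, zero-sum squares,
# and t-diagonal matrices for {-1,1}-matrices, following the definitions above.

def disc(M):
    """Discrepancy of M: the sum of all entries."""
    return sum(sum(row) for row in M)

def has_zero_sum_square(M):
    """True if some square a[i][j], a[i][j+s], a[i+s][j], a[i+s][j+s] sums to 0."""
    n, m = len(M), len(M[0])
    for i in range(n):
        for j in range(m):
            for s in range(1, min(n - i, m - j)):
                if M[i][j] + M[i][j + s] + M[i + s][j] + M[i + s][j + s] == 0:
                    return True
    return False

def t_diagonal(n, m, t):
    """n x m matrix with a_{i,j} = 1 iff i + j <= t + 1 (1-based indices)."""
    return [[1 if i + j <= t + 1 else -1 for j in range(1, m + 1)]
            for i in range(1, n + 1)]
```

For instance, `t_diagonal(n, n, t)` is always zero-sum square free: the two off-corner entries of any square have the same index sum $i+j+s$ and hence are equal, so the four entries sum to $\pm 2$ or $\pm 4$, never $0$.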
They remark that it should be possible to extend their proof to give a bound of $2n$, and they conjecture that the bound $Cn$ should hold for any $C > 0$ when $n$ is large enough relative to $C$.\n\t\t\n\t\t\\begin{conjecture}[Conjecture 3 in \\cite{arevalo2020zero}]\n\t\t\tFor every $C > 0$ there is an integer $N$ such that whenever $n \\geq N$ the\n\t\t\tfollowing holds: every $n \\times n$ non-diagonal $\\{-1, 1\\}$-matrix $M$ with $|\\disc(M)| \\leq Cn$\n\t\t\tcontains a zero-sum square.\n\t\t\\end{conjecture}\n\t\t\n\t\tWe prove this conjecture in a strong sense with the following theorem.\n\t\t\n\t\t\\begin{theorem}\\label{thm:low-bound}\n\t\t\tLet $n \\geq 5$. Every $n \\times n$ non-diagonal $\\{-1,1\\}$-matrix $M$ with $|\\disc(M)| \\leq n^2\/4$ contains a zero-sum square.\n\t\t\\end{theorem}\n\t\n\t\tThe best known construction for a non-diagonal zero-sum square free matrix has discrepancy close to $n^2\/2$, and our computer experiments suggest that this construction is in fact optimal. Closing the gap between the upper and lower bounds remains a very interesting problem and we discuss it further in Section \\ref{sec:open-problems}. \n\t\t\t\t\n\t\t\n\t\t\\section{Proof}\n\t For $p\\leq r$ and $q \\leq s$ define the \\emph{consecutive submatrix} $M[p:r, q:s]$ by\n\t\t\\[M[p:r, q:s] = \\left(\\begin{array}{cccc}\n\t\t\ta_{p,q} & a_{p, q+1} & \\dotsb & a_{p, s} \\\\\n\t\t\ta_{p+1, q} & a_{p+1, q+1} & \\dotsb & a_{p+1, s} \\\\\n\t\t\t\\vdots & \\vdots & \\ddots & \\vdots \\\\\n\t\t\ta_{r, q} & a_{r, q+1} & \\dotsb & a_{r,s}\n\t\t\\end{array}\n\t\t \\right).\n\t\t \\]\n\t\t Throughout the rest of this paper, we will assume that all submatrices except squares are consecutive submatrices.\n\t\t\n\t\tWe start by stating the following lemma from \\cite{arevalo2020zero} which, starting from a small $t'$-diagonal submatrix $M'$, determines many entries of the matrix $M$. 
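Since the $M[p:r, q:s]$ notation above is 1-based and inclusive in both coordinates, while most programming languages slice 0-based and half-open, a small helper (ours, purely illustrative, not from the paper) makes the translation explicit:

```python
# Illustrative helper (not from the paper): extract the consecutive submatrix
# M[p:r, q:s] in the paper's 1-based, inclusive indexing from a list of lists.
def consecutive_submatrix(M, p, r, q, s):
    # Rows p..r and columns q..s, both endpoints included.
    return [row[q - 1:s] for row in M[p - 1:r]]
```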
An example application is shown in Figure \\ref{fig:struct}.\n\t\t\n\t\t\n\t\n\t\t\\begin{lemma}[Claim 3 in \\cite{arevalo2020zero}]\n\t\t\t\\label{lem:struct}\n\t\t\tLet $M$ be an $n \\times n $ $\\{-1,1\\}$-matrix with no zero-sum squares, and suppose that there is a submatrix $M' = M[p: p+s, q: q+s]$ which is $t'$-diagonal for some $2 \\leq t' \\leq 2s - 3$. Let $t = t' + p + q -2$ and suppose $t \\leq n$. \t\t\n\t\t\t\\begin{enumerate}\n\t\t\t\t\\item The submatrix \\[N = M[1: \\min(t + \\floor{t\/2}, n), 1:\\min(t + \\floor{t\/2}, n)]\\] is $t$-diagonal.\n\t\t\t\\end{enumerate} \n\t\t\t\n\t\t\tFurthermore, both $a_{i,j} = 1$ and $a_{j,i} = 1$ whenever $t + \\floor{t\/2} < j \\leq t + \\floor{t\/2} + t -2$ and one of the following holds:\n\t\t\t\\begin{enumerate}\n\t\t\t\t \\setcounter{enumi}{1}\n\t\t\t\t\\item $j - t \\leq i \\leq t + \\floor{\\frac{t}{2}}$;\n\t\t\t\t\\item $i \\leq \\floor{\\frac{t}{2}} - \\floor{\\frac{j - t - \\floor{t\/2} -1}{2}}$;\n\t\t\t\t\\item $i = j$.\n\t\t\t\\end{enumerate}\n\t\\end{lemma}\n\n\t\t\\begin{figure}\n\t\t\\centering\n\t\t\\begin{tikzpicture}[scale=0.5\\textwidth\/11cm]\n\t\t\t\n\t\t\t\n\t\t\t\\fill[yellow5] (0,10) rectangle +(5,1);\n\t\t\t\\fill[yellow5] (0,9) rectangle +(4,1);\n\t\t\t\\fill[yellow5] (0,8) rectangle +(1,1);\n\t\t\t\\fill[yellow5] (0,7) rectangle +(1,1);\n\t\t\t\\fill[yellow5] (0,6) rectangle +(1,1);\n\t\t\t\n\t\t\t\\fill[yellow7] (1,8) rectangle +(2,1);\n\t\t\t\\fill[yellow7] (1,7) rectangle +(1,1);\n\t\t\t\n\t\t\t\\fill[blue7] (3, 8) rectangle +(1,1);\n\t\t\t\\fill[blue7] (2, 7) rectangle +(2,1);\n\t\t\t\\fill[blue7] (1, 6) rectangle +(3,1);\n\t\t\t\n\t\t\t\\fill[blue5] (4, 6) rectangle +(1, 4);\n\t\t\t\\fill[blue5] (0, 4) rectangle +(7, 2);\n\t\t\t\\fill[blue5] (5, 6) rectangle +(2, 5);\n\t\t\t\n\t\t\t\\fill[blue3] (7, 10) rectangle +(3, 1);\n\t\t\t\\fill[blue3] (7, 9) rectangle +(2,1);\n\t\t\t\\fill[blue3] (0, 1) rectangle +(1, 3);\n\t\t\t\\fill[blue3] (1,2) rectangle +(1, 
2);\n\t\t\t\n\t\t\t\\fill[lightblue5] (3, 3) rectangle +(4, 1);\n\t\t\t\\fill[lightblue5] (4, 2) rectangle +(3, 1);\n\t\t\t\\fill[lightblue5] (5, 1) rectangle +(2, 1);\n\t\t\t\n\t\t\t\\fill[lightblue5] (7,4) rectangle +(1, 4);\n\t\t\t\\fill[lightblue5] (8,4) rectangle +(1, 3);\n\t\t\t\\fill[lightblue5] (9,4) rectangle +(1, 2);\n\t\t\t\n\t\t\t\\fill[lightblue3] (7, 3) rectangle +(1,1);\n\t\t\t\\fill[lightblue3] (8, 2) rectangle +(1,1);\n\t\t\t\\fill[lightblue3] (9, 1) rectangle +(1,1);\n\t\t\t\n\t\t\t\\draw[black] (0,1) -- +(11,0);\n\t\t\t\\draw[black] (0,2) -- +(11,0);\n\t\t\t\\draw[black] (0,3) -- +(11,0);\n\t\t\t\\draw[black] (0,4) -- +(11,0);\n\t\t\t\\draw[black] (0,5) -- +(11,0);\n\t\t\t\\draw[black] (0,6) -- +(11,0);\n\t\t\t\\draw[black] (0,7) -- +(11,0);\n\t\t\t\\draw[black] (0,8) -- +(11,0);\n\t\t\t\\draw[black] (0,9) -- +(11,0);\n\t\t\t\\draw[black] (0,10) -- +(11,0);\n\t\t\t\\draw[black, very thick] (0,11) -- +(11,0);\n\t\t\t\n\t\t\t\\draw[very thick, black] (0,0) -- +(0,11);\n\t\t\t\\draw[black] (1,0) -- +(0,11);\n\t\t\t\\draw[black] (2,0) -- +(0,11);\n\t\t\t\\draw[black] (3,0) -- +(0,11);\n\t\t\t\\draw[black] (4,0) -- +(0,11);\n\t\t\t\\draw[black] (5,0) -- +(0,11);\n\t\t\t\\draw[black] (6,0) -- +(0,11);\n\t\t\t\\draw[black] (7,0) -- +(0,11);\n\t\t\t\\draw[black] (8,0) -- +(0,11);\n\t\t\t\\draw[black] (9,0) -- +(0,11);\n\t\t\t\\draw[black] (10,0) -- +(0,11);\n\t\t\t\n\t\t\t\\draw[very thick] (1,6) rectangle +(3,3);\n\t\t\t\n\t\t\t\n\t\t\t\\draw[|-|] (0, 11.5) -- +(5, 0);\n\t\t\t\\draw[-|] (5,11.5) -- +(2,0);\n\t\t\t\\draw[-|] (7, 11.5) -- +(3,0);\n\t\t\t\n\t\t\t\\draw node[above] at (2.5, 11.5) {$t$};\n\t\t\t\\draw node[above] at (6, 11.5) {$\\floor{t\/2}$};\n\t\t\t\\draw node[above] at (8.5, 11.5) {$t-2$};\n\t\t\t\n\t\t\t\n\t\t\\end{tikzpicture}\n\t\t\n\t\t\\caption{The entries known from applying Lemma \\ref{lem:struct}. The yellow squares represent $-1$s and the blue squares represent $1$s. 
The submatrix $M'$ is shown in a darker shade.}\n\t\t\\label{fig:struct}\n\t\\end{figure}\n\n\tNote that we can apply this lemma even when it is a reflection of $M'$ which is $t$-diagonal; we just need to suitably reflect $M$ and potentially multiply by $-1$, and then undo these operations at the end. The matrix $N$ will always contain at least one of $a_{1,1}$, $a_{1,n}$, $a_{n,1}$ and $a_{n,n}$, and if $N$ contains two, then $M$ is diagonal.\n\t\n\tWe will also make use of the following observation. This will be used in conjunction with the above lemma to guarantee the existence of some additional $1$s, which allows us to show a particular submatrix has positive discrepancy. \n\n\t\\begin{observation}\n\t\t\\label{obs:oneof}\n\t\tLet $M$ be an $n \\times n $ $\\{-1,1\\}$-matrix with no zero-sum squares, and suppose that $a_{i,i} = 1$ for every $i \\in [n]$. Then at least one of $a_{i,j}$ and $a_{j,i}$ is 1. In particular, $a_{i,j} + a_{j,i} \\geq 0$ for all $1 \\leq i ,j \\leq n$.\n\t\\end{observation}\n\n\n\n\n\tThe final lemma we will need to prove Theorem \\ref{thm:low-bound} is a variation on Claims 1 and 2 from \\cite{arevalo2020zero}. The main difference between Lemma \\ref{lem:submatrix} and the result used by Ar\\'evalo, Montejano and Rold\\'an-Pensado is that we will always find a square submatrix. This simplifies the proof of Theorem \\ref{thm:low-bound}.\n\t\n\t\\begin{lemma}\n\t\t\\label{lem:submatrix}\n\t\tFor $n \\geq 8$, every $n \\times n $ $\\{-1,1\\}$-matrix $M$ with $|\\disc(M)| \\leq n^2\/4$ has an $n' \\times n'$ submatrix $M'$ with $|\\disc(M')| \\leq (n')^2\/4$ for some $(n-1)\/2 \\leq n' \\leq (n+1)\/2$.\n\t\\end{lemma}\n\t\\begin{proof}\n\t\tWe only prove this in the case $n$ is odd as the case $n$ is even is similar, although simpler.\n\t\tPartition the matrix $M$ into 9 regions as follows. Let the four $(n-1)\/2 \\times (n-1)\/2$ submatrices containing $a_{1,1}$, $a_{1,n}$, $a_{n,1}$ and $a_{n,n}$ be $A_1, \\dots, A_4$ respectively. 
Let the $(n-1)\/2 \\times 1$ submatrix between $A_1$ and $A_2$ be $B_1$ and define $B_2$, $B_3$ and $B_4$ similarly. Finally, let the central entry be $B_5$. The partition is shown in Figure \\ref{fig:regions-part}.\n\t\t\n\t\t\\begin{figure}\n\t\\centering\n\t\\begin{subfigure}{0.45\\textwidth}\n\t\t\\centering\n\t\t\\begin{tikzpicture}[scale=\\textwidth\/10cm]\n\t\t\t\\draw[step=1, very thin, gray] (0,0) grid (9,9);\n\t\t\t\\draw[fill=none, stroke=black, very thick] (0,5) rectangle(4,9);\n\t\t\t\\draw[fill=none, stroke=black, very thick] (5,5) rectangle (9,9);\n\t\t\t\\draw[fill=none, stroke=black, very thick] (0,0) rectangle (4,4);\n\t\t\t\\draw[fill=none, stroke=black, very thick] (5,0) rectangle (9,4);\n\t\t\t\\draw node[fill=white] at (2, 7) {$A_1$};\n\t\t\t\\draw node[fill=white] at (7, 7) {$A_2$};\n\t\t\t\\draw node[fill=white] at (7, 2) {$A_3$};\n\t\t\t\\draw node[fill=white] at (2, 2) {$A_4$};\n\t\t\t\\draw[fill=none, stroke=black, very thick] (4,5) rectangle(5,9);\n\t\t\t\\draw[fill=none, stroke=black, very thick] (5,4) rectangle(9,5);\n\t\t\t\\draw[fill=none, stroke=black, very thick] (4,0) rectangle(5,4);\n\t\t\t\\draw[fill=none, stroke=black, very thick] (0,4) rectangle(4,5);\n\t\t\t\\draw node[fill=white, inner sep=1] at (4.5, 7) {$B_1$};\n\t\t\t\\draw node[fill=white, inner sep=1] at (7, 4.5) {$B_2$};\n\t\t\t\\draw node[fill=white, inner sep=1] at (4.5, 2) {$B_3$};\n\t\t\t\\draw node[fill=white, inner sep=1] at (2, 4.5) {$B_4$};\n\t\t\t\\draw node[fill=white, inner sep=1] at (4.5, 4.5) {$B_5$};\n\t\t\t\n\t\t\\end{tikzpicture}\n\t\t\\caption{}\n\t\t\\label{fig:regions-part}\n\t\\end{subfigure}\\hfil\n\t\\begin{subfigure}{0.45\\textwidth}\n\t\t\\centering\n\t\t\\begin{tikzpicture}[scale=\\textwidth\/10cm]\n\t\t\t\\draw[step=1, very thin, gray] (0,0) grid (9,9);\n\t\t\t\n\t\t\t\\draw[fill=none, black, very thick] (4,0) rectangle (9,5);\n\t\t\t\\draw[fill=none, black, very thick] (0,4) rectangle(5,9);\n\t\t\t\\draw[fill=none, black, very thick] (0,0) 
rectangle(9,9);\n\t\t\t\\draw node[] at (2.5, 6.5) {$A'_1$};\n\t\t\t\\draw node[] at (6.5, 2.5) {$A'_3$};\n\t\t\t\n\t\t\\end{tikzpicture}\n\t\t\\caption{}\n\t\t\\label{fig:regions-overlap}\n\t\\end{subfigure\n\t\\caption{A subset of the regions used in the proof of Lemma \\ref{lem:submatrix}.}\n\t\\label{fig:regions}\n\\end{figure}\n\t\t\n\tAs these partition the matrix $M$, we have\n\t\\begin{equation}\\label{eqn:part}\n\t\t\\disc(M) = \\disc(A_1) + \\dotsb + \\disc(A_4) + \\disc(B_1) + \\dotsb + \\disc(B_5).\n\t\\end{equation}\n\t\n\tLet the overlapping $(n+1)\/2 \\times (n+1)\/2$ submatrices containing $a_{1,1}$, $a_{1,n}$, $a_{n,1}$ and $a_{n,n}$ be $A_1', \\dots, A_4'$, as indicated in Figure \\ref{fig:regions-overlap}. The submatrices $B_1, \\dots, B_4$ each appear twice in the $A_i'$ and $B_5$ appears four times and, by subtracting these overlapping regions, we obtain a second equation for $\\disc(M)$:\n\t\t\\begin{multline}\\label{eqn:part2}\n\t\t\t\\disc(M) = \\disc(A_1') + \\dotsb + \\disc(A_4')\\\\ - \\disc(B_1) - \\dotsb - \\disc(B_4) - 3 \\disc(B_5).\n\t\t\\end{multline}\n\t\n\tIf any of the $A_i$ or $A_i'$ have $|\\disc(A_i)| \\leq (n-1)^2\/16$ or $|\\disc(A_i')| \\leq (n+1)^2\/16$ respectively, we are done, so we may assume that this is not the case.\n\tFirst, suppose that $\\disc(A_i) > (n-1)^2\/16$ and $\\disc(A_i') > (n+1)^2\/16$ for all $i = 1,2,3,4$. Since $n - 1$ is even and $\\disc(A_i) \\in \\mathbb{Z}$, we must have $\\disc(A_i) \\geq (n-1)^2\/16 + 1\/4$, and similarly, $\\disc(A_i') \\geq (n+1)^2\/16 + 1\/4$. Adding the equations (\\ref{eqn:part}) and (\\ref{eqn:part2}) we get the bound\n\t\\[n^2\/2 \\geq 2 \\disc(M) \\geq (n+1)^2\/4 + (n-1)^2\/4 + 2 - 2 \\disc(B_5), \\]\n\twhich reduces to $\\disc(B_5) \\geq 5\/4$. This gives a contradiction since $B_5$ is a single square. Similarly we get a contradiction if, for every $i$, both $\\disc(A_i) < - (n-1)^2\/16$ and $\\disc(A_i') < - (n+1)^2\/16$. 
\n\t\n\tThis only leaves the case where two of the 8 submatrices have different signs. If $A_i' > (n+1)^2\/16$, then, for $n \\geq 8$, \\[A_i > (n+1)^2\/16 - n > -(n-1)^2\/16,\\] and either $|\\disc(A_i)| \\leq (n-1)^2\/16$, a contradiction, or $\\disc(A_i) > 0$. By repeating the argument when $\\disc(A_i')$ is negative, it follows that $A_i$ and $A_i'$ have the same sign for every $i$. In particular, two of the $A_i$ must have different signs, and we can apply an interpolation argument as in \\cite{arevalo2020zero}.\n\t\n\tWithout loss of generality we can assume that $\\disc(A_1) > (n-1)^2\/16$ and $\\disc(A_2) < -(n-1)^2\/16$. Consider the sequence of matrices $N_0, \\dots, N_{(n+1)\/2} $ where \\[N_i = M[1: (n-1)\/2, 1 + i: i + (n-1)\/2].\\]\n\tWe claim that there is a $j$ such that $|\\disc(N_j)| \\leq (n-1)^2\/16$, which would complete the proof of the lemma. By definition, $N_0 = A_1$ and $N_{(n+1)\/2} = A_2$ so there must be some $j$ such that $\\disc(N_{j-1}) > 0$ and $\\disc(N_j) \\leq 0$. Since the submatrices $N_{j-1}$ and $N_{j}$ share most of their entries $|\\disc(N_{j-1}) - \\disc(N_j)| \\leq (n-1)$, and as $(n-1)^2\/8 > (n-1)$, it cannot be the case that $\\disc(N_{j-1}) > (n-1)^2\/16$ and $\\disc(N_j) < -(n-1)^2\/16$. This means there must be some $j$ such that $|\\disc(N_j)| \\leq (n-1)^2\/16$, as required. \n\t\\end{proof}\n\n\tArmed with the above results, we are now ready to prove our main result, but let us first give a sketch of the proof which avoids the calculations in the main proof.\n\t\n\t\\begin{proof}[Sketch proof of Theorem \\ref{thm:low-bound}]\t\n\t Assume we have an $n\\times n$ $\\{-1,1\\}$-matrix $M$ with $|\\disc(M)| \\leq n^2\/4$ which is zero-sum square free. We will prove the result by induction, so we assume that the result is true for $5 \\leq n' < n$. \n\t \n\t Applying Lemma \\ref{lem:submatrix} gives a submatrix $M'$ with low discrepancy. 
Since $M'$ must also be zero-sum square free, we know that it is diagonal by the induction hypothesis. Applying Lemma \\ref{lem:struct} then gives us a lot of entries of $M$ and, in particular, a submatrix $N$ with high discrepancy. Since we are assuming that $M$ has low discrepancy, the remainder $M \\setminus N$ of $M$ not in $N$ must either have low discrepancy or negative discrepancy. In both cases we will find $B$, a submatrix of $M$ with low discrepancy. When the discrepancy of $M \\setminus N$ is low, we use an argument similar to the proof of Lemma \\ref{lem:submatrix}, and when the discrepancy of $M \\setminus N$ is negative, we find a positive submatrix using Observation \\ref{obs:oneof} and use an interpolation argument.\n\t \n\t By the induction hypothesis, $B$ must also be diagonal and we can apply Lemma \\ref{lem:struct} to find many entries of $M$. By looking at specific $a_{i,j}$, we will show that the two applications of Lemma \\ref{lem:struct} contradict each other.\n\t\\end{proof}\n\n\tWe now give the full proof of Theorem \\ref{thm:low-bound}, complete with all the calculations. To start the induction, we must check the cases $n < 30$ which is done using a computer. The problem is encoded as a SAT problem using PySAT \\cite{imms-sat18} and checked for satisfiability with the CaDiCaL solver. The code to do this is attached to the arXiv submission.\n\n\t\\begin{proof}[Proof of Theorem \\ref{thm:low-bound}]\n\t\tWe will use induction on $n$. A computer search gives the result for all $n < 30$, so we can assume that $n \\geq 30$ and that the result holds for all $5 \\leq n' < n$. \n\t\t\n\t\tSuppose, towards a contradiction, that $M$ is an $n \\times n$ matrix with no zero-sum squares and $|\\disc(M)| \\leq n^2\/4$.\t\t\n\t\tBy Lemma \\ref{lem:submatrix}, we can find an $n' \\times n'$ submatrix $M' = M[p:p+s, q:q+s]$ with $(n-1)\/2 \\leq n' \\leq (n+1)\/2$ and $|\\disc(M')| \\leq (n')^2\/4$. 
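The base cases $n < 30$ are handled by the SAT encoding mentioned above; as a much smaller illustration of this kind of computer check (a brute-force sketch of ours, not the authors' PySAT encoding), one can exhaustively confirm Observation \ref{obs:oneof} for tiny $n$:

```python
# Brute-force sketch (ours, not the paper's SAT encoding): exhaustively verify
# Observation obs:oneof for small n: in any {-1,1}-matrix with all-ones diagonal
# and no zero-sum square, a_{i,j} + a_{j,i} >= 0 for all i, j.
from itertools import product

def has_zero_sum_square(M):
    n = len(M)
    return any(M[i][j] + M[i][j + s] + M[i + s][j] + M[i + s][j + s] == 0
               for i in range(n) for j in range(n)
               for s in range(1, n - max(i, j)))

def check_observation(n):
    off_diag = [(i, j) for i in range(n) for j in range(n) if i != j]
    for vals in product([-1, 1], repeat=len(off_diag)):
        M = [[1] * n for _ in range(n)]  # unit diagonal
        for (i, j), v in zip(off_diag, vals):
            M[i][j] = v
        if not has_zero_sum_square(M) and \
           any(M[i][j] + M[j][i] < 0 for i, j in off_diag):
            return False  # would contradict the observation
    return True
```

The check is immediate mathematically (if $a_{i,j} = a_{j,i} = -1$, the square on rows and columns $i, j$ sums to $0$), so the enumeration is only a sanity test of the statement.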
By the induction hypothesis and our assumption that $M$ doesn't contain a zero-sum square, the matrix $M'$ must be diagonal. By reflecting $M$ and switching $-1$ and $1$ as necessary, we can assume that the submatrix $M'$ is $t'$-diagonal for some $t'$, and that $t := t' + p + q -2 \\leq n$.\n\t\t\n\t\tWe will want to apply Lemma \\ref{lem:struct}, for which we need to check $2 \\leq t' \\leq 2s - 3$. If $t' \\leq 1$ or $t' \\geq 2s - 2$, then the discrepancy of $M'$ is \\[|\\disc(M')| \\geq (n')^2 -1 > (n')^2\/4,\\] which contradicts our choice of $M'$. In fact, since $\\disc(M') \\leq (n')^2\/4$ and $\\disc(M') \\geq (n')^2 - t'(t'+1)$ we find\n\t\t\\begin{equation}\n\t\t\\label{eqn:tbound}\n\t\tt \\geq t' \\geq \\frac{1}{2} \\left( \\sqrt{3(n')^2 + 1} -1 \\right) \\approx 0.433 n.\n\t\t\\end{equation}\t\n\t\n\t\tIf $t + \\floor{t\/2} \\geq n$, the matrix $M$ is $t$-diagonal and we are done, so we can assume that this is not the case, and that $t \\leq 2n\/3$. We will also need the following bound on $2t + \\floor{t\/2} -2$, which follows almost immediately from (\\ref{eqn:tbound}).\n\t\t\n\t\t\\begin{claim}\\label{claim:tgeqnmins1}\n\t\t\tWe have\n\t\t\t\\[2t + \\floor{t\/2} -2 \\geq n - 1.\\]\n\t\t\\end{claim}\n\t\t\\begin{proof}\n\t\t\tSubstituting $n' \\geq (n-1)\/2$ into (\\ref{eqn:tbound}) gives the following bound on $t$.\n\t\t\t\\[t \\geq \\frac{1}{4} \\left( \\sqrt{3n^2 - 6n + 7} - 2 \\right)\\]\n\t\t\tWe now lower bound $\\floor{t\/2}$ by $(t-1)\/2$ to find\n\t\t\t\\begin{align*}\n\t\t\t\t2t + \\floor{t\/2} -2 &\\geq 2t + \\frac{t-5}{2}\\\\\n\t\t\t\t&\\geq \\frac{5}{8} \\sqrt{3n^2 - 6n + 7} - \\frac{15}{4}\n\t\t\t\\end{align*}\n\t\t\tThe right-hand side grows like $\\frac{\\sqrt{75}}{8} n$ asymptotically, which is faster than $n$, so the claim is certainly true for large enough $n$. 
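For the finitely many small values of $n$ left open by this asymptotic argument, the claim can be verified mechanically from the integer facts stated above ($t \geq t'$, $t'(t'+1) \geq 3(n')^2/4$ and $n' \geq (n-1)/2$). A possible sketch of such a finite check (our own code, not the authors' attached script):

```python
# Verify the claim 2t + floor(t/2) - 2 >= n - 1 for 30 <= n <= 46,
# using only the bounds from the text: t >= t', t'(t'+1) >= 3(n')^2/4,
# and n' >= (n-1)/2, i.e. n' >= n // 2 for an integer n'.

def min_t(n_prime):
    """Smallest integer t with 4*t*(t+1) >= 3*n_prime**2 (kept in integers)."""
    t = 0
    while 4 * t * (t + 1) < 3 * n_prime * n_prime:
        t += 1
    return t

for n in range(30, 47):
    t = min_t(n // 2)  # worst case: smallest admissible n' and smallest t
    # the left-hand side is increasing in t, so the minimal t is the worst case
    assert 2 * t + t // 2 - 2 >= n - 1, f"claim fails at n = {n}"
```

Working with the exact integer inequality $t'(t'+1) \geq 3(n')^2/4$ rather than the square-root bound is what makes the borderline cases (e.g.\ $n=32$) go through.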
In fact, the equation $ \\frac{5}{8} \\sqrt{3n^2 - 6n + 7} - \\frac{15}{4} \\geq n -1$ can be solved explicitly to obtain the following bound on $n$:\n\t\t\t\\[n \\geq \\frac{1}{11} \\left( 251 + 20 \\sqrt{166} \\right) \\approx 46.2.\\]\n\t\t\tThis still leaves the values $30 \\leq n \\leq 46$ for which the bounds above are not sufficient. These cases can be checked using a computer.\n\t\t\\end{proof}\n\t\t\n\t\t\n\n\t\t\n\t\t\n\t\tLet $k = \\ceil{5n\/6}$ and let $N$ be the $k \\times k$ submatrix in the top-left corner which contains $a_{1,1}$, i.e.\\ $N = M[1:k, 1:k]$. We will apply Lemma \\ref{lem:struct} and Observation \\ref{obs:oneof} to guarantee many 1s in $N$, and therefore ensure $N$ has large discrepancy. This will mean that the rest of $M$ which is not in $N$ must have low discrepancy, and we can find another diagonal submatrix, $B$.\n\t\t\n\t\t\\begin{claim}\\label{claim:B}\n\t\t\tThere is an $(n-k) \\times (n-k)$ submatrix $B$ which is disjoint from $N$ and with $|\\disc(B)| \\leq (n-k)^2\/4$.\n\t\t\\end{claim}\n\t\t\\begin{proof}\n\t\t\t\t\\begin{figure}\n\t\t\t\t\t\\centering\n\t\t\t\t\t\\begin{subfigure}{0.45\\textwidth}\n\t\t\t\t\t\t\\begin{tikzpicture}[scale=\\textwidth\/12cm]\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\\fill[gray] (10, 0) rectangle (11, 2);\n\t\t\t\t\t\t\t\\fill[gray] (11, 13) rectangle (13, 12);\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\\draw[very thick] (0,0) rectangle (13, 13);\n\t\t\t\t\t\t\t\\draw[very thick] (0, 13) rectangle (11, 2);\n\t\t\t\t\t\t\t\\draw node at (5.5, 7.5) {N};\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\\draw[very thick] (0, 0) rectangle (2, 2);\n\t\t\t\t\t\t\t\\draw node at (1,1) {$B_7$};\n\t\t\t\t\t\t\t\\draw[very thick] (2, 0) rectangle (4, 2);\n\t\t\t\t\t\t\t\\draw node at (3,1) {$B_8$};\n\t\t\t\t\t\t\t\\draw[very thick] (4, 0) rectangle (6, 2);\n\t\t\t\t\t\t\t\\draw node at (5,1) {$B_9$};\n\t\t\t\t\t\t\t\\draw[very thick] (6, 0) rectangle (8, 2);\n\t\t\t\t\t\t\t\\draw node at (7,1) {$B_{10}$};\n\t\t\t\t\t\t\t\\draw[very 
thick] (8, 0) rectangle (10, 2);\n\t\t\t\t\t\t\t\\draw node at (9,1) {$B_{11}$};\n\t\t\t\t\t\t\t\\draw[very thick] (11, 0) rectangle (13, 2);\n\t\t\t\t\t\t\t\\draw node at (12,1) {$B_1$};\n\t\t\t\t\t\t\t\\draw[very thick] (11, 2) rectangle (13, 4);\n\t\t\t\t\t\t\t\\draw node at (12,3) {$B_2$};\n\t\t\t\t\t\t\t\\draw[very thick] (11, 4) rectangle (13, 6);\n\t\t\t\t\t\t\t\\draw node at (12,5) {$B_3$};\n\t\t\t\t\t\t\t\\draw[very thick] (11, 6) rectangle (13, 8);\n\t\t\t\t\t\t\t\\draw node at (12,7) {$B_4$};\n\t\t\t\t\t\t\t\\draw[very thick] (11, 8) rectangle (13, 10);\n\t\t\t\t\t\t\t\\draw node at (12,9) {$B_5$};\n\t\t\t\t\t\t\t\\draw[very thick] (11, 10) rectangle (13, 12);\n\t\t\t\t\t\t\t\\draw node at (12,11) {$B_6$};\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\\end{tikzpicture}\n\t\t\t\t\t\\end{subfigure}\\hfil\n\t\t\t\t\\caption{The matrix $M$ with the submatrices $N$ and $B_1$, $\\dots$, $B_{11}$. The entries of $M$ which are not in any of the submatrices are shown in grey.}\\label{fig:b-part}\n\t\t\t\t\\end{figure}\n\t\t\t\n\t\t\t\tConsider the 11 disjoint $(n-k) \\times (n-k)$ submatrices $B_1, \\dots, B_{11}$ of $M$ given by \n\t\t\t\t\\[ B_i = \\begin{cases}\n\t\t\t\t\tM[k: n, n - i(n-k): n - (i-1)(n-k) ] & \\text{if $i \\leq 6$}\\\\\n\t\t\t\t\tM[(i- 7)(n-k) : (i-6) (n - k), k : n] & \\text{if $i > 6$},\n\t\t\t\t\\end{cases}\\]\n\t\t\t\tand shown in Figure \\ref{fig:b-part}. The submatrix $B_1$ contains $a_{n,n}$ and sits in the bottom right of $M$, while the others lie along the bottom and right-hand edges of $M$.\n\t\t\t\t\n\t\t\t\tIf one of the $B_i$ satisfies $|\\disc(B_i)| \\leq (n-k)^2\/4$, we are done, so suppose this is not the case. \n\t\t\t\t\n\t\t\t\tWe start by using Observation \\ref{obs:oneof} to show that $\\disc(B_1) > 0$. Let the entries of $B_1$ be $b_{i,j}$ where $1 \\leq i,j \\leq n-k$. By Claim \\ref{claim:tgeqnmins1}, $2t + \\floor{t\/2} - 2 \\geq n -1$ and, applying Lemma \\ref{lem:struct}, $b_{i,i} = 1$ for all $i \\leq n- k - 1$. 
Further, by Observation \\ref{obs:oneof}, we have $b_{i,j} + b_{j,i} \\geq 0$ for all $1 \\leq i,j \\leq n -k -1$. This means \n\t\t\t\t\\[\\disc(B_1) \\geq (n-k - 1) - (2(n-k) - 1) = - (n-k).\\]\n\t\t\t\t For $(n-k) \\geq 5$, $(n-k) < (n-k)^2\/4$ so we must have $\\disc(B_1) > (n-k)^2\/4$. \n\t\t\t\t \n\t\t\t\t As $\\disc(B_1) > 0$, if $\\disc(B_i) < 0$ for any $i \\neq 1$, we can use an interpolation argument as in Lemma \\ref{lem:submatrix} to find the claimed matrix. The argument only requires \n\t\t\t\t \\[2(n-k) < \\frac{(n-k)^2}{2}\\]\n\t\t\t\t which is true for $(n-k) > 4$.\n\t\t\t\t \n\t\t\t\tWe must now be in the case where $\\disc(B_i) > (n-k)^2\/4$ for every $i$. The bulk of the work in this case will be bounding the discrepancy of the matrix $N$, and then the discrepancy of $M$. There are $n^2 - k^2 - 11(n-k)^2 \\leq 10(n-k)$ entries of $M$ in the gaps between the $B_i$, i.e.\\ there are at most $10(n-k)$ entries $a_{i,j}$ which are not contained in either $N$ or one of the $B_i$. In particular, we have \n\t\t\t\t\n\t\t\t\t\\begin{align}\n\t\t\t\t\t\\disc(M) &\\geq \\disc(N) + \\disc(B_1) + \\dotsb + \\disc(B_{11}) - 10 (n-k) \\notag\\\\\n\t\t\t\t\t&> \\disc(N) + 11 (n-k)^2\/4 - 10(n-k)\\label{eqn:disc}\n\t\t\t\t\\end{align}\n\t\t\n\t\tLet $s = \\min\\left\\{ k, t + \\floor{t\/2} \\right\\}$ so that $M[1:s, 1:s]$ is $t$-diagonal, and let $r = k - s$ be the number of remaining rows. Let $a_1, \\dots, a_4$ be the number of 1s in $N$ guaranteed by Lemma \\ref{lem:struct}, and let $a_5$ be the number of additional 1s guaranteed by also applying Observation \\ref{obs:oneof}. 
This guarantees that at least one of $a_{i,j}$ and $a_{j,i}$ is $1$ for all $(t+2)\/2 \\leq i, j \\leq r$, and $a_5 \\geq r(r-1)$.\n\t\t\n\t\tWe have the following bounds.\n\t\t\\begin{align*}\n\t\ta_1 &= s^2 - \\frac{t(t+1)}{2},\\\\\n\t\ta_2 &= 2 \\sum_{i=1}^r(t-i),\\\\\n\t\ta_3 &= 2 \\sum_{i=1}^r \\left( \\floor{\\frac{t}{2}} - \\floor{\\frac{i-1}{2}} \\right),\\\\\n\t\ta_4 &= r,\\\\\n\t\ta_5 &\\geq r(r-1).\n\t\t\\end{align*}\n\t\t\n\t\tLet us first consider the case where $s = k$, so that $N$ is $t$-diagonal. In this case $a_2 = \\dotsb = a_5 = 0$, and we can easily write down the discrepancy of $N$ as $k^2 - t(t+1)$. Since $k \\geq 5n\/6$, we get the bound\n\t\t\\begin{align*}\n\t\t\t\\disc(N) &\\geq \\frac{25n^2}{36} - t(t+1).\n\t\t\t\\intertext{Substituting this into (\\ref{eqn:disc}) and using the bounds $(n-5)\/6\\leq n - k \\leq n\/6$ we get}\n\t\t\t\\disc(M) &> \\frac{25n^2}{36} - t(t+1) + \\frac{11}{4}\\left( \\frac{n-5}{6} \\right)^2 - \\frac{10n}{6}\\\\\n\t\t\t&= \\frac{1}{144} \\left( 111 n^2 - 350n - 144t^2 - 144t + 275\\right).\n\t\t\t\\intertext{For $n \\geq 4$, the right-hand side is greater than $n^2\/4$ whenever}\n\t\t\tt &< \\frac{1}{12} \\left( \\sqrt{75 n^2 - 350n + 311} - 6 \\right) \\approx 0.721n.\n\t\t\t\\intertext{Since we have assumed $t \\leq 2n\/3$, we get a contradiction for all sufficiently large $n$. In fact, we get a contradiction for all $n \\geq 39$. The remaining cases need to be checked using exact values for the floor and ceiling functions, which we do with the help of a computer.}\n\t\t\\end{align*}\n\t\t\n\t\tNow we consider the case where $s = t + \\floor{t\/2}$, which is very similar, although more complicated. 
To be in this case, we must have $t + \\floor{t\/2} \\leq k$ which implies \n\t\t\\[ t + \\frac{t-1}{2} \\leq \\frac{5(n+1)}{6},\\]\n\t\tand $t \\leq (5n + 8)\/9 \\approx 0.556n$.\n\t\t\n\t\t\\begin{align*}\n\t\t\t\\intertext{Start by using the bounds $(t-1)\/2 \\leq \\floor{t\/2}$ and $\\floor{(i-1)\/2} \\leq (i-1)\/2$ to get}\n\t\t\ta_1 + \\dotsb + a_5 &\\geq \\left( t + \\frac{t-1}{2} \\right)^2 - \\frac{t(t+1)}{2} + r (2t -r - 1) + r(t - 1) \\\\&\\qquad - \\frac{r(r-1)}{2} + r + r(r-1)\\\\\n\t\t\t&\\geq \\frac{7t^2}{4} - 2t - \\frac{r^2}{2} + 3rt - \\frac{5r}{2} + \\frac{1}{4}.\n\t\t\t\\intertext{By definition, $r = k - t - \\floor{t\/2}$, so we get the bounds $5n\/6 - t - t\/2 \\leq r \\leq 5(n+1)\/6 - t - (t-1)\/2$, and substituting these in gives}\n\t\t\ta_1 + \\dotsb + a_5 &\\geq \\frac{7}{4} t^2 - 2t + \\frac{1}{4} - \\frac{1}{2} \\left( \\frac{5(n+1)}{6} - t - \\frac{t-1}{2}\\right)^2 + 3t \\left( \\frac{5n}{6} -t - \\frac{t}{2} \\right)\\\\&\\qquad - \\frac{5}{2} \\left( \\frac{5(n+1)}{6} - t - \\frac{t-1}{2} \\right)\\\\\n\t\t\t&= \\frac{1}{72} \\left( - 25n^2 + 270nt - 230n - 279 t^2 + 270t - 286 \\right) \n\t\t\t\\intertext{Plugging this into (\\ref{eqn:disc}) and using the bounds $5n\/6 \\leq k \\leq 5(n+1)\/6$ we get}\n\t\t\t\\disc(M) &> 2 (a_1 + \\dotsb + a_5) - \\left( \\frac{5(n+1)}{6} \\right)^2 + \\frac{11}{4} \\left( \\frac{n-5}{6} \\right)^2 - \\frac{10n}{6}\\\\\n\t\t\t&\\geq \\frac{1}{48} \\left( - 63n^2 + 360nt - 490n - 372t^2 + 360t - 323 \\right).\n\t\t\\end{align*}\n\t\t\t\tWhen $n \\geq 27$, this is greater than $n^2\/4$ whenever\n\t\\begin{align*}\n\t\t&\\frac{1}{186}\\left(90n + 90 - \\sqrt{1125 n^2 - 29370n - 21939}\\right) <\\\\\n\t\t&\\qquad t < \\frac{1}{186}\\left(90n + 90 + \\sqrt{1125 n^2 - 29370n - 21939}\\right),\n \t\\end{align*}\n or approximately,\n \\[0.304n < t < 0.664n.\\]\n We have the bounds \n \\[ \\frac{1}{4} \\left( \\sqrt{3n^2 - 6n + 7} - 2 \\right) \\leq t \\leq \\frac{5n + 8}{9}, \\]\n and so, for $n \\geq 36$, 
$\\disc(M) > n^2\/4$.\n \n This again leaves a few cases which we check with the help of a computer (although they could feasibly be checked by hand).\t\t\n\\end{proof}\n\t\t\n\t\tGiven a submatrix $B$ as in the above claim we apply the induction hypothesis, noting that $(n-k) \\geq 5$ since $n \\geq 30$, to find that $B$ is diagonal. Let $C$ be the diagonal submatrix obtained from applying Lemma 4 to $B$, and let $C$ be $\\ell$-diagonal up to rotation. Note that $\\ell \\geq 3$ as $(n-k) \\geq 5$, and we can assume $\\ell \\leq 2n\/3$ as $M$ is not diagonal.\n\t\t\n\t\tHence, $C$ contains exactly one of $a_{1,1}$, $a_{1,n}$, $a_{n,1}$ and $a_{n,n}$, and we will split into cases based on which one $C$ contains. We will also sometimes need to consider cases for whether the entry is $1$ or $-1$, but in all cases we will find a contradiction.\n\t\t\n\t\tFrom Lemma \\ref{lem:struct} applied to $M'$ and Claim \\ref{claim:tgeqnmins1}, we already know some of the entries and we highlight some important entries in the following claim.\n\t\t\n\t\t\\begin{claim}\\label{claim:particular1s}\n\t\t\tWe have\n\t\t\t\\begin{enumerate}\n\t\t\t\t\\item $a_{j,1} = a_{1, j} = \\begin{cases}\n\t\t\t\t\t1 & t + 1 \\leq j \\leq n-1,\\\\\n\t\t\t\t\t-1 & 1 \\leq j \\leq t,\n\t\t\t\t\\end{cases}$\n\t\t\t\t\\item $a_{2,t} = a_{t,2} = 1$,\n\t\t\t\t\\item $a_{i,i} = 1$ for all $(t+2)\/2 \\leq i \\leq n -1$.\n\t\t\t\\end{enumerate}\n\t\t\\end{claim}\n\t\t\n\t\tSuppose the submatrix $C$ contains $a_{1,1}$ so sits in the top-left corner. Since $M[1:t + \\floor{t\/2}, 1:t + \\floor{t\/2}]$ is $t$-diagonal, $C$ must also be $t$-diagonal. As $C$ was found by applying Lemma \\ref{lem:struct} to $B$, it must contain a $-1$ from $B$. Hence, $t \\geq 5n\/6$ which is a contradiction as we assumed that $t \\leq 2n\/3$.\n\t\t\n\t\tSuppose instead that $C$ contains $a_{1,n}$ so sits in the top-right corner. 
Since $\\ell \\geq 3$, if the corner entry is $-1$, so is the entry $a_{1,n-1}$, but this contradicts Claim \\ref{claim:particular1s}. Suppose instead the corner entry is $1$. Since $C$ is $\\ell$-diagonal up to rotation we have, for all $1 \\leq i, (n - j + 1) \\leq \\ell + \\floor{\\ell\/2}$, \n\t\t\\begin{equation}\\label{eqn:C}\n\t\t\ta_{i,j} = \\begin{cases}\n\t\t\t\t-1 & i + (n -j + 1) \\geq \\ell + 2,\\\\\n\t\t\t\t1 & \\text{otherwise}.\n\t\t\t\\end{cases}\n\t\t\\end{equation}\n\t\t\n\t\tIf $n - \\ell > t$, then $a_{1, n-\\ell} = -1$ by (\\ref{eqn:C}) and $a_{1, n-\\ell} = 1$ by Claim \\ref{claim:particular1s}.\n\t\tSuppose $n - \\ell < t$. Then $a_{1, t} = 1$ by (\\ref{eqn:C}) and $a_{1, t} = -1$ as $M[1:t, 1:t]$ is $t$-diagonal.\n\t\tFinally, when $n-\\ell = t$, we have $a_{2,t} = -1$ by (\\ref{eqn:C}) and $a_{2,t} = 1$ from Claim \\ref{claim:particular1s}. Some illustrative examples of these three cases are shown in Figure \\ref{fig:case-1-n}.\n\t\t\n\t\t\\begin{figure}\n\t\t\t\\centering\n\t\t\t\\begin{subfigure}{0.25\\textwidth}\n\t\t\t\t\\centering\n\t\t\t\t\\begin{tikzpicture}[scale=\\textwidth\/12cm]\n\t\t\t\t\t\n\t\t\t\t\t\\fill[fill=yellow5] (0, 11) rectangle(6, 12);\n\t\t\t\t\t\\fill[fill=blue5] (6, 11) rectangle(11, 12);\n\t\t\t\t\t\\fill[fill=blue5] (5, 10) rectangle(6,11);\n\t\t\t\t\t\\draw[very thin, gray] (0,0) grid (12,12);\n\t\t\t\t\t\\draw[thick] (6,12)\n\t\t\t\t\t\\foreach \\myvar in {6, 5, 4, ..., 0} {\n\t\t\t\t\t\t-- (\\myvar, 6 + \\myvar) -- (\\myvar, 5 + \\myvar)\n\t\t\t\t\t} -- (0, 12) -- (6,12);\n\t\t\t\t\t\n\t\t\t\t\t\\draw[thick] (9,12)\n\t\t\t\t\t\\foreach \\myvar in {9, 10, ..., 12} {\n\t\t\t\t\t\t-- (\\myvar, 21 - \\myvar) -- (\\myvar, 20 - \\myvar)\n\t\t\t\t\t};\n\t\t\t\t\t\n\t\t\t\t\t\\draw[thick] (8,8) rectangle (12, 12);\n\t\t\t\t\t\n\t\t\t\t\t\\draw[thick] (8, 12) -- (9,11);\n\t\t\t\t\t\\draw[thick] (9, 12) -- (8,11);\n\t\t\t\t\t\n\t\t\t\t\t\\draw[very thick] (0,0) -- (12,0) -- (12,12) -- (0,12) 
--(0,0);\n\t\t\t\t\t\n\t\t\t\t\\end{tikzpicture}\n\t\t\t\t\\caption{$n- \\ell > t$}\n\t\t\t\\end{subfigure}\\hfil\n\t\t\t\\begin{subfigure}{0.25\\textwidth}\n\t\t\t\t\\centering\n\t\t\t\t\\begin{tikzpicture}[scale=\\textwidth\/12cm]\n\t\t\t\t\t\n\t\t\t\t\t\n\t\t\t\t\t\\fill[fill=yellow5] (0, 11) rectangle(6, 12);\n\t\t\t\t\t\\fill[fill=blue5] (6, 11) rectangle(11, 12);\n\t\t\t\t\t\\fill[fill=blue5] (5, 10) rectangle(6,11);\n\t\t\t\t\t\\draw[very thin, gray] (0,0) grid (12,12);\n\t\t\t\t\t\\draw[thick] (6,12)\n\t\t\t\t\t\\foreach \\myvar in {6, 5, 4, ..., 0} {\n\t\t\t\t\t\t-- (\\myvar, 6 + \\myvar) -- (\\myvar, 5 + \\myvar)\n\t\t\t\t\t} -- (0, 12) -- (6,12);\n\t\t\t\t\t\n\t\t\t\t\t\\draw[thick] (9,12)\n\t\t\t\t\t\\foreach \\myvar in {5, 6, ..., 12} {\n\t\t\t\t\t\t-- (\\myvar, 17 - \\myvar) -- (\\myvar, 16 - \\myvar)\n\t\t\t\t\t};\n\t\t\t\t\t\n\t\t\t\t\t\\draw[thick] (3,3) rectangle (12, 12);\n\t\t\t\t\t\n\t\t\t\t\t\\draw[thick] (5, 12) -- (6,11);\n\t\t\t\t\t\\draw[thick] (6, 12) -- (5,11);\n\t\t\t\t\t\n\t\t\t\t\t\\draw[very thick] (0,0) -- (12,0) -- (12,12) -- (0,12) --(0,0);\n\t\t\t\t\t\n\t\t\t\t\\end{tikzpicture}\n\t\t\t\t\\caption{$n- \\ell < t$}\n\t\t\t\\end{subfigure}\\hfil\n\t\t\t\\centering\n\t\t\t\\begin{subfigure}{0.25\\textwidth}\n\t\t\t\t\\centering\n\t\t\t\t\\begin{tikzpicture}[scale=\\textwidth\/12cm]\n\t\t\t\t\t\n\t\t\t\t\t\\fill[fill=yellow5] (0, 11) rectangle(6, 12);\n\t\t\t\t\t\\fill[fill=blue5] (6, 11) rectangle(11, 12);\n\t\t\t\t\t\\fill[fill=blue5] (5, 10) rectangle(6,11);\n\t\t\t\t\t\\draw[very thin, gray] (0,0) grid (12,12);\n\t\t\t\t\t\\draw[thick] (6,12)\n\t\t\t\t\t\\foreach \\myvar in {6, 5, 4, ..., 0} {\n\t\t\t\t\t\t-- (\\myvar, 6 + \\myvar) -- (\\myvar, 5 + \\myvar)\n\t\t\t\t\t} -- (0, 12) -- (6,12);\n\t\t\t\t\t\n\t\t\t\t\t\\draw[thick] (9,12)\n\t\t\t\t\t\\foreach \\myvar in {6, 7, ..., 12} {\n\t\t\t\t\t\t-- (\\myvar, 18 - \\myvar) -- (\\myvar, 17 - \\myvar)\n\t\t\t\t\t};\n\t\t\t\t\t\n\t\t\t\t\t\\draw[thick] (3,3) rectangle (12, 
12);\n\t\t\t\t\t\n\t\t\t\t\t\\draw[thick] (5, 11) -- (6,10);\n\t\t\t\t\t\\draw[thick] (6, 11) -- (5,10);\n\t\t\t\t\t\n\t\t\t\t\t\\draw[very thick] (0,0) -- (12,0) -- (12,12) -- (0,12) --(0,0);\n\t\t\t\t\t\n\t\t\t\t\\end{tikzpicture}\n\t\t\t\t\\caption{$n - \\ell = t$}\n\t\t\t\\end{subfigure}\n\t\t\t\\caption{The three cases when $C$ contains $a_{1,n}$ and $a_{1,n} = 1$. The yellow squares represent some of the $a_{i,j}$ which are known to be $-1$ from Claim \\ref{claim:particular1s} and the blue squares those which are $1$. The square which gives the contradiction is marked with a cross.}\n\t\t\t\\label{fig:case-1-n}\n\t\t\\end{figure}\n\t\t\n\t\tThe case where $C$ contains $a_{n,1}$ is done in the same way with the rows and columns swapped.\n\t\t\n\t\tThis leaves the case where $C$ contains $a_{n,n}$. Since $\\ell \\geq 3$, if the entry $a_{n, n}$ equals $-1$, so does the entry $a_{n-1, n-1}$, and this contradicts Claim \\ref{claim:particular1s}. If instead $a_{n,n} = 1$, we consider the entry $a_{i, i}$ where $i = n + 1 - \\ceil{(\\ell+2)\/2}$, which must be $-1$. However, since $\\ell \\leq 2n\/3$, \\[n + 1 - \\ceil{(\\ell+2)\/2}\\geq n - \\frac{n}{3} - \\frac{1}{2} > \\frac{n}{3} + 1 \\geq \\frac{t+2}{2},\\] and $a_{i,i} = 1$ by Claim \\ref{claim:particular1s}. 
This final contradiction is shown in Figure \\ref{fig:case-n-n}.\n\t\t\\begin{figure}\n\t\t\t\\centering\n\t\t\t\\begin{tikzpicture}[scale=0.4\\textwidth\/12cm]\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\\foreach \\myvar in {4,...,11} {\n\t\t\t\t\t\\fill[blue5] (\\myvar-1, 12-\\myvar) rectangle +(1,1);}\n\t\t\t\t\n\t\t\t\t\\draw[very thin, gray] (0,0) grid (12,12);\n\t\t\t\t\\draw[thick] (6,12)\n\t\t\t\t\\foreach \\myvar in {6, 5, 4, ..., 0} {\n\t\t\t\t\t-- (\\myvar, 6 + \\myvar) -- (\\myvar, 5 + \\myvar)\n\t\t\t\t} -- (0, 12) -- (6,12);\n\t\t\t\t\n\t\t\t\t\\draw[thick] (7,0)\n\t\t\t\t\\foreach \\myvar in {7,8, ..., 12} {\n\t\t\t\t\t-- (\\myvar, \\myvar - 7) -- (\\myvar, \\myvar - 6)\n\t\t\t\t};\n\t\t\t\t\n\t\t\t\t\\draw[thick] (5,0) rectangle (12, 7);\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\\draw[thick] (8, 3) -- (9,4);\n\t\t\t\t\\draw[thick] (8, 4) -- (9,3);\n\t\t\t\t\n\t\t\t\t\\draw[very thick] (0,0) -- (12,0) -- (12,12) -- (0,12) --(0,0);\n\t\t\t\t\n\t\t\t\\end{tikzpicture}\n\t\t\t\\caption{The case where $C$ contains $a_{n,n}$ and $a_{n,n} = 1$. The square marked with a cross gives a contradiction.}\n\t\t\t\\label{fig:case-n-n}\n\t\t\\end{figure}\n\t\\end{proof}\n\n\tWe remark that it should be possible to improve the bound $n^2\/4$ using a similar proof provided one can check a large enough base case. Indeed, we believe that all the steps in the above proof hold when the bound is increased to $n^2\/3$, but only when $n$ is large enough. For example, Claim \\ref{claim:tgeqnmins1} fails for $n = 127$ and our proof of Claim \\ref{claim:B} fails for $n = 86$. Checking base cases this large is far beyond the reach of our computer check, and some new ideas would be needed here.\n\t \n\\section{Open problems}\\label{sec:open-problems}\n\n\t\tThe main open problem is to determine the correct lower bound for the discrepancy of a non-diagonal $\\{-1,1\\}$-matrix with no zero-sum squares. 
We have improved the lower bound to $n^2\/4$, but this does not appear to be optimal.\n\n\t\tThe best known construction is the following example by Ar\\'evalo, Montejano and Rold\\'an-Pensado \\cite{arevalo2020zero}. Let $M = \\left(a_{i,j}\\right)$ be given by \\[a_{i,j} = \\begin{cases}\n\t\t\t-1 & \\text{$i$ and $j$ are odd},\\\\\n\t\t\t1 & \\text{otherwise}.\n\t\t\\end{cases}\\]\n\t\tThis has discrepancy $n^2\/2$ when $n$ is even and $(n-1)^2\/2 - 1$ when $n$ is odd. With the help of a computer we have verified that this construction is best possible when $9 \\leq n \\leq 32$, and we conjecture that this holds true for all $n \\geq 9$. In fact, our computer search shows that the above example is the unique zero-sum square free non-diagonal matrix with minimum (in magnitude) discrepancy, up to reflections and multiplying by $-1$. \n\n\t\tWe note that the condition $n \\geq 9$ is necessary, as shown by the $8 \\times 8$ zero-sum square free $\\{-1, 1\\}$-matrix with discrepancy 30 given in Figure \\ref{fig:counter}.\n\n\\begin{restatable}{conjecture}{conj}\n\tLet $n\\geq9$. 
Every $n \\times n$ non-diagonal $\\{-1, 1\\}$-matrix $M$ with \\[|\\disc(M)| \\leq \\begin{cases}\n\t\t\\frac{n^2}{2} - 1 & \\text{$n$ is even}\\\\\n\t\t\\frac{(n-1)^2}{2} -2 & \\text{$n$ is odd}\n\t\\end{cases}\\]\n\tcontains a zero-sum square.\n\\end{restatable}\n\n\n\n\t\t\t\\begin{figure}\n\t\t\\centering\n\t\t\\begin{tikzpicture}[scale=0.5\\textwidth\/8cm]\n\t\t\t\n\t\t\t\n\t\t\t\\fill[blue5] (0,0) rectangle (1,1);\n\t\t\t\\fill[blue5] (1,0) rectangle (2,1);\n\t\t\t\\fill[yellow5] (2,0) rectangle (3,1);\n\t\t\t\\fill[blue5] (3,0) rectangle (4,1);\n\t\t\t\\fill[blue5] (4,0) rectangle (5,1);\n\t\t\t\\fill[blue5] (5,0) rectangle (6,1);\n\t\t\t\\fill[blue5] (6,0) rectangle (7,1);\n\t\t\t\\fill[blue5] (7,0) rectangle (8,1);\n\t\t\t\n\t\t\t\\fill[blue5] (0,1) rectangle (1,2);\n\t\t\t\\fill[blue5] (1,1) rectangle (2,2);\n\t\t\t\\fill[blue5] (2,1) rectangle (3,2);\n\t\t\t\\fill[blue5] (3,1) rectangle (4,2);\n\t\t\t\\fill[blue5] (4,1) rectangle (5,2);\n\t\t\t\\fill[blue5] (5,1) rectangle (6,2);\n\t\t\t\\fill[blue5] (6,1) rectangle (7,2);\n\t\t\t\\fill[blue5] (7,1) rectangle (8,2);\n\t\t\t\n\t\t\t\\fill[blue5] (0,2) rectangle (1,3);\n\t\t\t\\fill[blue5] (1,2) rectangle (2,3);\n\t\t\t\\fill[blue5] (2,2) rectangle (3,3);\n\t\t\t\\fill[blue5] (3,2) rectangle (4,3);\n\t\t\t\\fill[blue5] (4,2) rectangle (5,3);\n\t\t\t\\fill[blue5] (5,2) rectangle (6,3);\n\t\t\t\\fill[blue5] (6,2) rectangle (7,3);\n\t\t\t\\fill[blue5] (7,2) rectangle (8,3);\n\t\t\t\n\t\t\t\\fill[yellow5] (0,3) rectangle (1,4);\n\t\t\t\\fill[blue5] (1,3) rectangle (2,4);\n\t\t\t\\fill[blue5] (2,3) rectangle (3,4);\n\t\t\t\\fill[blue5] (3,3) rectangle (4,4);\n\t\t\t\\fill[blue5] (4,3) rectangle (5,4);\n\t\t\t\\fill[blue5] (5,3) rectangle (6,4);\n\t\t\t\\fill[blue5] (6,3) rectangle (7,4);\n\t\t\t\\fill[blue5] (7,3) rectangle (8,4);\n\t\t\t\n\t\t\t\\fill[yellow5] (0,4) rectangle (1,5);\n\t\t\t\\fill[yellow5] (1,4) rectangle (2,5);\n\t\t\t\\fill[blue5] (2,4) rectangle (3,5);\n\t\t\t\\fill[blue5] (3,4) rectangle 
(4,5);\n\t\t\t\\fill[blue5] (4,4) rectangle (5,5);\n\t\t\t\\fill[blue5] (5,4) rectangle (6,5);\n\t\t\t\\fill[blue5] (6,4) rectangle (7,5);\n\t\t\t\\fill[blue5] (7,4) rectangle (8,5);\n\t\t\t\n\t\t\t\\fill[yellow5] (0,5) rectangle (1,6);\n\t\t\t\\fill[yellow5] (1,5) rectangle (2,6);\n\t\t\t\\fill[yellow5] (2,5) rectangle (3,6);\n\t\t\t\\fill[blue5] (3,5) rectangle (4,6);\n\t\t\t\\fill[blue5] (4,5) rectangle (5,6);\n\t\t\t\\fill[blue5] (5,5) rectangle (6,6);\n\t\t\t\\fill[blue5] (6,5) rectangle (7,6);\n\t\t\t\\fill[yellow5] (7,5) rectangle (8,6);\n\t\t\t\n\t\t\t\n\t\t\t\\fill[yellow5] (0,6) rectangle (1,7);\n\t\t\t\\fill[yellow5] (1,6) rectangle (2,7);\n\t\t\t\\fill[yellow5] (2,6) rectangle (3,7);\n\t\t\t\\fill[yellow5] (3,6) rectangle (4,7);\n\t\t\t\\fill[blue5] (4,6) rectangle (5,7);\n\t\t\t\\fill[blue5] (5,6) rectangle (6,7);\n\t\t\t\\fill[blue5] (6,6) rectangle (7,7);\n\t\t\t\\fill[blue5] (7,6) rectangle (8,7);\n\t\t\t\n\t\t\t\\fill[yellow5] (0,7) rectangle (1,8);\n\t\t\t\\fill[yellow5] (1,7) rectangle (2,8);\n\t\t\t\\fill[yellow5] (2,7) rectangle (3,8);\n\t\t\t\\fill[yellow5] (3,7) rectangle (4,8);\n\t\t\t\\fill[yellow5] (4,7) rectangle (5,8);\n\t\t\t\\fill[blue5] (5,7) rectangle (6,8);\n\t\t\t\\fill[blue5] (6,7) rectangle (7,8);\n\t\t\t\\fill[blue5] (7,7) rectangle (8,8);\n\t\t\t\n\t\t\t\\draw[step=1, thin] (0,0) grid (8,8);\n\t\t\t\n\t\t\t\\draw[very thick] (0,0) rectangle(8,8);\n\t\t\\end{tikzpicture}\n\t\t\n\t\t\\caption{An $8 \\times 8$ $\\{-1,1\\}$-matrix with no zero-sum squares and discrepancy 30. The yellow squares represent a $-1$ and the blue squares represent a $1$.}\n\t\t\\label{fig:counter}\n\t\\end{figure}\n\n\n\n\tAr\\'evalo, Montejano and Rold\\'an-Pensado prove their result for both $n \\times n$ and $n \\times (n+1)$ matrices, and computational experiments suggest that Theorem \\ref{thm:low-bound} holds for $n \\times (n+1)$ matrices as well. 
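The small-case computer checks referred to in this section are straightforward to reproduce. As an illustration, the following brute-force verifier (our own sketch, not the authors' attached code, assuming zero-sum squares are contiguous axis-aligned square submatrices) confirms the stated discrepancy of the construction above and that it contains no zero-sum square for small $n$:

```python
def build(n):
    # Arevalo-Montejano-Roldan-Pensado matrix: a_{i,j} = -1 iff i and j are odd
    return [[-1 if i % 2 == 1 and j % 2 == 1 else 1
             for j in range(1, n + 1)] for i in range(1, n + 1)]

def disc(mat):
    # discrepancy = sum of all entries
    return sum(sum(row) for row in mat)

def has_zero_sum_square(mat):
    n = len(mat)
    # 2d prefix sums: P[i][j] = sum of mat[0:i][0:j]
    P = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n):
        for j in range(n):
            P[i + 1][j + 1] = mat[i][j] + P[i][j + 1] + P[i + 1][j] - P[i][j]
    for s in range(2, n + 1):              # side length of the square
        for i in range(n - s + 1):
            for j in range(n - s + 1):
                if P[i + s][j + s] - P[i][j + s] - P[i + s][j] + P[i][j] == 0:
                    return True
    return False

for n in range(9, 15):
    M = build(n)
    expected = n * n // 2 if n % 2 == 0 else (n - 1) ** 2 // 2 - 1
    assert disc(M) == expected and not has_zero_sum_square(M)
```

The optimality and uniqueness claims additionally require a search over all sign matrices, which is only feasible via a SAT encoding as in the proof of Theorem \ref{thm:low-bound}.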
More generally, what is the best lower bound for a general $n \\times m$ matrix when $n$ and $m$ are large?\n\t\n\t\\begin{problem}\n\t\tLet $f(n, m)$ be the minimum $d \\in \\mathbb{N}$ such that there exists an $n \\times m$ non-diagonal zero-sum square free $\\{-1,1\\}$-matrix $M$ with $|\\disc(M)| \\leq d$. What are the asymptotics of $f(n,m)$?\n\t\\end{problem}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
theories.\\footnote{Throughout this paper we only consider $\\textrm{AdS}$ backgrounds that preserve all supercharges of a given supergravity and furthermore only consider the subspace of the moduli space that preserves all these supercharges. This is what we mean by supersymmetric $\\textrm{AdS}$ backgrounds and supersymmetric moduli spaces.}\nFor $\\mathcal{N}=1$ it was found that the supersymmetric moduli space is at best a real submanifold of the original K\\\"ahler field space.\nSimilarly, for $\\mathcal{N}=2$ the supersymmetric moduli space \nis at best a product of a real manifold times a K\\\"ahler manifold\nwhile $\\mathcal{N}=4$ $\\textrm{AdS}$ backgrounds have no supersymmetric moduli space.\n This analysis was repeated for $\\textrm{AdS}_{5}$ vacua in $d=5$ gauged supergravity\nwith 16 supercharges ($\\mathcal{N}=4$) in \\cite{Louis:2015dca} and for $\\textrm{AdS}_{7}$ vacua in $d=7$ gauged supergravity with 16~supercharges in \\cite{Louis:2015mka}. For the $d=5,\\, \\mathcal{N}=4$ theories it was shown that the supersymmetric moduli space is\nthe coset $\\mathcal{M}=SU(1,m)\/(U(1)\\times SU(m))$ while in $d=7$ it was proven that again \nno supersymmetric moduli space exists.\n\n\nIn this paper we focus on supersymmetric $\\textrm{AdS}_{5}$ vacua in $d=5$ gauged \nsupergravities with eight supercharges ($\\mathcal{N}=2$)\ncoupled to an arbitrary number of vector-, tensor- and hypermultiplets. \nA related analysis was carried out in \\cite{Tachikawa:2005tq}\nfor the coupling of Abelian vector multiplets and hypermultiplets.\nWe confirm the results of \\cite{Tachikawa:2005tq} and generalize \nthe analysis by including tensor multiplets and\nnon-Abelian vector multiplets. 
\nIn particular, we show that also in this more general case \nthe unbroken gauge group has to \n be of the form $H\\times U(1)_{R}$\nwhere the $U(1)_R$-factor is gauged by the graviphoton.\nThis specifically forbids unbroken semisimple gauge groups in AdS\nbackgrounds.\n\nIn a second step\nwe study the supersymmetric moduli space $\\mathcal{M}$ \nof the previously obtained $\\textrm{AdS}_{5}$ backgrounds\nand show that it is necessarily a K\\\"ahler submanifold of the quaternionic scalar field space $\\mathcal{T}_H$ spanned by all scalars in the hypermultiplets.\\footnote{This result was also obtained in \\cite{Tachikawa:2005tq}. Our result is more general as we include tensor multiplets and non-Abelian vector multiplets in the analysis.}\nThis is indeed consistent with the AdS\/CFT correspondence where the \nmoduli space $\\mathcal{M}$ is mapped to the conformal manifold of the dual \nsuperconformal field theory (SCFT). For the gauged supergravities considered here\nthe dual theories are $d=4,\\, \\mathcal{N}=1$ SCFTs.\nIn \\cite{Asnin:2009xx} it was shown that \nthe conformal manifold of these SCFTs is indeed a K\\\"ahler manifold. \n\nThe organization of this paper is as follows. In section \\ref{sec:sugra} we briefly review gauged $\\mathcal{N}=2$ supergravities in five dimensions. This will then be used to study the conditions for the existence of supersymmetric $\\textrm{AdS}_{5}$ vacua and determine some of their properties in section~\\ref{sec:vacua}. Finally, in section \\ref{sec:moduli} we compute the conditions on the moduli space of these vacua and show that it is a K\\\"ahler manifold. 
\n\n\n\n\\section{Gauged $\\mathcal{N}=2$ supergravity in five dimensions}\\label{sec:sugra}\n\nTo begin with let us review five-dimensional gauged $\\mathcal{N}=2$ supergravity following \\cite{Gunaydin:2000xk,Bergshoeff:2002qk,Bergshoeff:2004kh}.\\footnote{Ref.~\\cite{Bergshoeff:2004kh}\nconstructed the most general version of five-dimensional gauged $\\mathcal{N}=2$ supergravity.} The theory consists of the gravity multiplet with field content\n\\begin{equation}\n\\{g_{\\mu\\nu}, \\Psi_{\\mu}^{\\mathcal{A}}, A_{\\mu}^{0}\\}\\ , \\quad \\mu,\\nu=0,...,4\\ ,\\quad \n\\mathcal{A}=1,2\\ ,\n\\end{equation}\nwhere $g_{\\mu\\nu}$ is the metric of space-time, $\\Psi_{\\mu}^{\\mathcal{A}}$ is\nan $SU(2)_{R}$-doublet of symplectic Majorana gravitini and $A_{\\mu}^{0}$ is the graviphoton. In\nthis paper we consider theories that additionally contain $n_{V}$\nvector multiplets, $n_{H}$ hypermultiplets and $n_{T}$ tensor\nmultiplets. A vector multiplet $\\{A_{\\mu}, \\lambda^{\\mathcal{A}}, \\phi\\}$\ntransforms in the adjoint representation of the gauge group $G$ and contains a vector $A_{\\mu}$, a doublet of gauginos $\\lambda^{\\mathcal{A}}$ and a real scalar~$\\phi$. In $d=5$ a vector is Poincar\\'e dual to an antisymmetric \ntensor field $B_{\\mu\\nu}$, which can carry an arbitrary representation of $G$. This gives rise to tensor multiplets which have the same field content as vector multiplets, but with a two-form instead of a vector. Since vector- and tensor multiplets mix in the Lagrangian, we label their scalars $\\phi^{i}$ by the same index $i,j=1,...,n_{V}+n_{T}$. Moreover, we label the vector fields (including the graviphoton) by $I,J=0,1,...,n_{V}$, the tensor fields by $M,N=n_{V}+1,...,n_{V}+n_{T}$ and also introduce a combined index $\\tilde{I}=(I,M)$. 
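As a quick consistency check of this vector--tensor duality (a standard on-shell counting for massless fields, added here for the reader's convenience), both fields carry the same number of physical degrees of freedom in $d=5$:

```latex
% on-shell degrees of freedom in d space-time dimensions:
%   vector   A_{\mu}   : d-2            -> 3 for d=5
%   two-form B_{\mu\nu}: (d-2)(d-3)/2   -> 3 for d=5
\[
  A_{\mu}:\; d-2 = 3\,, \qquad
  B_{\mu\nu}:\; \tfrac{1}{2}(d-2)(d-3) = 3 \qquad (d=5)\,.
\]
```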
Finally, the $n_{H}$ hypermultiplets\n\begin{equation}\n\{q^{u}, \zeta^{\alpha}\}, \quad u=1,2,...,4n_{H}\ , \quad \alpha=1,2,...,2n_{H}\ , \n\end{equation}\ncontain $4n_{H}$ real scalars $q^{u}$ and $2n_{H}$ hyperini $\zeta^{\alpha}$.\n\nThe bosonic Lagrangian of $\mathcal{N}=2$ gauged supergravity in five dimensions reads\footnote{\nNote that we set the gravitational constant $\kappa=1$ in this paper.}\n\cite{Bergshoeff:2004kh}\n\begin{equation}\label{eq:Lagrangian}\n\begin{aligned}\ne^{-1}\mathcal{L}&=\tfrac{1}{2}R\n-\tfrac{1}{4}a_{\tilde{I}\tilde{J}}H^{\tilde{I}}_{\mu\nu}H^{\tilde{J}\mu\nu}-\tfrac{1}{2}g_{ij}\mathcal{D}_{\mu}\phi^{i}\mathcal{D}^{\mu}\phi^{j}-\tfrac{1}{2}G_{uv}\mathcal{D}_{\mu}q^{u}\mathcal{D}^{\mu}q^{v}-g^{2}V(\phi,q)\\\n&+\tfrac{1}{16g}e^{-1}\epsilon^{\mu\nu\rho\sigma\tau}\Omega_{MN}B^{M}_{\mu\nu}\left(\partial_{\rho}B^{N}_{\sigma\tau}+2gt_{IJ}^{N}A_{\rho}^{I}F_{\sigma\tau}^{J}+gt_{IP}^{N}A_{\rho}^{I}B_{\sigma\tau}^{P}\right)\\\n&+\tfrac{1}{12}\sqrt{\tfrac{2}{3}}e^{-1}\epsilon^{\mu\nu\rho\sigma\tau}C_{IJK}A_{\mu}^{I}\left[F_{\nu\rho}^{J}F_{\sigma\tau}^{K}+f_{FG}^{J}A_{\nu}^{F}A_{\rho}^{G}\left(-\tfrac{1}{2}F_{\sigma\tau}^{K}+\tfrac{g^{2}}{10}f_{HL}^{K}A_{\sigma}^{H}A_{\tau}^{L}\right)\right]\\\n&-\tfrac{1}{8}e^{-1}\epsilon^{\mu\nu\rho\sigma\tau}\Omega_{MN}t_{IK}^{M}t_{FG}^{N}A_{\mu}^{I}A_{\nu}^{F}A_{\rho}^{G}\left(-\tfrac{g}{2}F_{\sigma\tau}^{K}+\tfrac{g^{2}}{10}f_{HL}^{K}A_{\sigma}^{H}A_{\tau}^{L}\right)\n\ .\n\end{aligned}\n\end{equation}\nIn the rest of this section we recall the various ingredients which\nenter this Lagrangian.\nFirst of all $H^{\tilde{I}}_{\mu\nu}=(F_{\mu\nu}^{I}, B_{\mu\nu}^{M})$\nwhere\n$F_{\mu\nu}^{I}=2\partial_{[\mu}A_{\nu]}^{I}+gf_{JK}^{I}A^{J}_{\mu}A^{K}_{\nu}$\nare the field strengths with $g$ being the gauge coupling constant.\nThe scalar fields in $\mathcal{L}$ can be interpreted as 
maps from the spacetime $M_{5}$ to a target space $\mathcal{T}$,\n\begin{equation}\label{eq:target space}\n\phi^{i} \times q^{u}: M_{5} \longrightarrow \mathcal{T}.\n\end{equation}\nLocally $\mathcal{T}$ is a product $\mathcal{T}_{VT} \times \mathcal{T}_{H}$ where the first\nfactor is a projective special real manifold $(\mathcal{T}_{VT}, g)$ of\ndimension $n_{V}+n_{T}$. It is constructed as a hypersurface in an $(n_{V}+n_{T}+1)$-dimensional real manifold $\mathcal{H}$ with local coordinates $h^{\tilde{I}}$. This hypersurface is defined by \n\begin{equation}\label{eq:polynomial}\nP(h^{\tilde{I}}(\phi))=C_{\tilde{I}\tilde{J}\tilde{K}}h^{\tilde{I}}h^{\tilde{J}}h^{\tilde{K}}=1,\n\end{equation}\nwhere $P(h^{\tilde{I}}(\phi))$ is a cubic homogeneous polynomial with $C_{\tilde{I}\tilde{J}\tilde{K}}$ constant and completely symmetric. Thus $\mathcal{T}_{VT}=\{P=1\}\subset \mathcal{H}$. \n\nThe generalized gauge couplings in \eqref{eq:Lagrangian} correspond to a positive metric on the ambient space $\mathcal{H}$, given by\n\begin{equation}\label{adef}\na_{\tilde{I}\tilde{J}}:=-2C_{\tilde{I}\tilde{J}\tilde{K}}h^{\tilde{K}}+3h_{\tilde{I}}h_{\tilde{J}}\ ,\n\end{equation}\nwhere\n\begin{equation}\label{eq:hlower}\n h_{\tilde{I}}= C_{\tilde{I}\tilde{J}\tilde{K}}h^{\tilde{J}}h^{\tilde{K}}\ . 
\n\\end{equation}\nThe pullback metric $g_{ij}$ is the (positive) metric on the hypersurface \n$\\mathcal{T}_{VT}$ and is given by\n\\begin{equation}\\label{gpull}\ng_{ij}:=h_{i}^{\\tilde{I}}h_{j}^{\\tilde{J}}a_{\\tilde{I}\\tilde{J}}\\ ,\n\\end{equation}\nwhere\n\\begin{equation}\\label{eq:hder}\nh_{i}^{\\tilde{I}}:=-\\sqrt{\\tfrac{3}{2}}\\,\\partial_{i}h^{\\tilde{I}}(\\phi)\\\n.\n\\end{equation}\nThese quantities satisfy (see Appendix C in \\cite{Bergshoeff:2004kh} for more details)\n\\begin{equation}\nh^{\\tilde{I}}h_{\\tilde{I}}=1\\ ,\\qquad\nh_{\\tilde{I}}h_{i}^{\\tilde{I}}=0\\ ,\\qquad\nh_{\\tilde{I}}h_{\\tilde{J}}+h_{\\tilde{I}}^{i}h_{\\tilde{J}i}=a_{\\tilde{I}\\tilde{J}} \\ ,\n\\label{eq:hmetric}\n\\end{equation}\nwhere we raise and lower indices with the appropriate metrics $a_{\\tilde{I}\\tilde{J}}$ or $g_{ij}$ respectively.\nThe metric $g_{ij}$ induces a covariant derivative which acts on the $h^{\\tilde{I}}_{i}$ via \n\\begin{equation}\\label{eq:covderh}\n\\nabla_{i}h^{\\tilde{I}}_{j}=-\\sqrt{\\tfrac{2}{3}}\\, (h^{\\tilde{I}}g_{ij}+T_{ijk}h^{\\tilde{I}k})\\ ,\n\\end{equation}\nwhere $T_{ijk}:=C_{\\tilde{I}\\tilde{J}\\tilde{K}}h_{i}^{\\tilde{I}}h_{j}^{\\tilde{J}}h_{k}^{\\tilde{K}}$ is a completely symmetric tensor. \n\nThe second factor of $\\mathcal{T}$ in (\\ref{eq:target space}) is a quaternionic K\\\"ahler manifold $(\\mathcal{T}_{H},G, Q)$ of real dimension $4n_{H}$ (see \\cite{Andrianopoli:1996cm} for a more extensive introduction). Here $G_{uv}$ is a Riemannian metric and $Q$ denotes a $\\nabla^{G}$ invariant rank three subbundle $Q\\subset \\text{End} (T\\mathcal{T}_H)$ that is locally spanned by a triplet $J^{n}$, $n=1,2,3$ of almost complex structures which satisfy $J^{1}J^{2}=J^{3}$ and $(J^{n})^{2}=-\\text{Id}$. Moreover the metric $G_{uv}$ is hermitian with respect to all three $J^{n}$ and one defines the associated triplet of two-forms $\\omega^{n}_{uv}:=G_{uw}(J^{n})^{w}_{v}$. 
In contrast to the K\\\"ahlerian case, the almost complex structures are not parallel but the Levi-Civita connection $\\nabla^{G}$ of $G$ rotates the endomorphisms inside $Q$, i.e. \n\\begin{equation}\\label{nabladef}\n\\nabla J^{n}:=\\nabla^{G}J^{n}-\\epsilon^{npq}\\theta^{p}J^{q}=0\\ .\n\\end{equation}\nNote that $\\nabla$ differs from $\\nabla^{G}$ by an\n$SU(2)$-connection with connection one-forms $\\theta^{p}$.\nFor later use let us note that the metric $G_{uv}$ can be expressed in terms of vielbeins $\\mathcal{U}^{\\alpha\\mathcal{A}}_{u}$ as\n\\begin{equation}\nG_{uv}= C_{\\alpha\\beta}\\epsilon_{\\mathcal{A}\\mathcal{B}}\\mathcal{U}^{\\alpha\\mathcal{A}}_{u}\\mathcal{U}^{\\beta\\mathcal{B}}_{v}\\ ,\n\\end{equation}\nwhere $C_{\\alpha\\beta}$ denotes the flat metric on $Sp(2n_{H},\\mathbb{R})$\nand the $SU(2)$-indices $\\mathcal{A},\\mathcal{B}$ are raised and lowered with $\\epsilon_{\\mathcal{A}\\mathcal{B}}$. \n\nThe gauge group $G$ is specified by the generators $t_{I}$ of its Lie algebra $\\mathfrak{g}$ and the structure constants $f_{IJ}^{K}$,\n\\begin{equation}\n[t_{I},t_{J}]=-f_{IJ}^{K}t_{K}\\ .\n\\end{equation}\nThe vector fields transform in the adjoint representation of the gauge group, i.e.\\ $t_{IJ}^{K}=f_{IJ}^{K}$ while the tensor fields\ncan carry an arbitrary representation.\nThe most general representation for $n_{V}$ vector multiplets and $n_{T}$ tensor multiplets has been found in \\cite{Bergshoeff:2002qk} and is given by\n\\begin{equation}\\label{trep}\n t_{I\\tilde{J}}^{\\tilde{K}}=\n\\begin{pmatrix}\nf_{IJ}^{K} & t_{IJ}^{N}\\\\\n0 & t_{IM}^{N}\\\\\n\\end{pmatrix}.\n\\end{equation}\nWe see that the block matrix $t_{IJ}^{N}$ mixes vector- and tensor\nfields. However the $t_{IJ}^{N}$ are only nonzero if the chosen\nrepresentation of the gauge group is not completely reducible. 
This\nnever occurs for compact gauge groups but there exist non-compact\ngauge groups containing an Abelian ideal that admit representations\nof this type, see\n\\cite{Bergshoeff:2002qk}. There it is also shown that the construction\nof a generalized Chern-Simons term in the action for vector- and\ntensor multiplets requires the existence of an invertible and\nantisymmetric matrix $\\Omega_{MN}$. In particular, the $t_{I\\tilde J}^{N}$\nare of the form\n\\begin{equation}\\label{eq:Omega}\nt_{I\\tilde{J}}^{N}=C_{I\\tilde{J}P}\\Omega^{PN}\\ .\n\\end{equation}\n\nThe gauge group is realized on the scalar fields via the action of\nKilling vectors $\\xi_{I}$ for the vector- and tensor multiplets and\n$k_{I}$ for the hypermultiplets that satisfy the Lie\nalgebra~${\\mathfrak{g}}$~of~$G$, \n\\begin{equation}\\label{Killingc}\n\\begin{aligned}\n{}[\\xi_{I},\\xi_{J}]^{i}&:=\\xi_I^j\\partial_j \\xi^i_J-\\xi_J^j\\partial_j \\xi^i_I=\n-f_{IJ}^{K}\\, \\xi_{K}^{i}\\ ,\\\\\n[k_{I},k_{J}]^{u}&:=k_I^v\\partial_v k_J^u-k_J^v\\partial_v k_I^u=\n-f_{IJ}^{K}\\,k_{K}^{u}\\ .\n\\end{aligned}\n\\end{equation}\nIn the case of the projective special real manifold, one can obtain an explicit expression for the Killing vectors $\\xi_{I}^{i}$ given by \\cite{Bergshoeff:2004kh}\n\\begin{equation}\\label{eq:VTkilling}\n\\xi_{I}^{i}:= -\\sqrt{\\tfrac{3}{2}}\\,t_{I\\tilde{J}}^{\\tilde{K}}h^{\\tilde{J}}h^{i}_{\\tilde{K}}=-\\sqrt{\\tfrac{3}{2}}\\,t_{I\\tilde{J}}^{\\tilde{K}}h^{\\tilde{J}i}h_{\\tilde{K}}\\ .\n\\end{equation}\nThe second equality is due to the fact that \\cite{Gunaydin:1984ak}\n\\begin{equation}\\label{eq:representation0}\nt_{I\\tilde{J}}^{\\tilde{K}}\\,h^{\\tilde{J}}h_{\\tilde{K}}= 0\\ ,\n\\end{equation}\nand thus \n\\begin{equation}\n0=\\partial_{i}(t_{I\\tilde{J}}^{\\tilde{K}}h^{\\tilde{J}}h_{\\tilde{K}}) = t_{I\\tilde{J}}^{\\tilde{K}}h^{\\tilde{J}}\\partial_{i}h_{\\tilde{K}}+t_{I\\tilde{J}}^{\\tilde{K}}(\\partial_{i}h^{\\tilde{J}})h_{\\tilde{K}}\\ , \n\\end{equation}\nwhich 
implies\\footnote{Note that the derivative\n $h_{\\tilde{I}i}=\\sqrt{\\tfrac{3}{2}}\\,\\partial_{i}h_{\\tilde{I}}$\nhas an additional minus sign compared to \\eqref{eq:hder} which can be\nshown by lowering the index with $a_{\\tilde{I}\\tilde{J}}$ given in \\eqref{adef}.}\n\\begin{equation}\\label{eq:representation}\nt_{I\\tilde{J}}^{\\tilde{K}}h^{\\tilde{J}}h^{i}_{\\tilde{K}}=t_{I\\tilde{J}}^{\\tilde{K}}h^{\\tilde{J}i}h_{\\tilde{K}}\\ .\n\\end{equation}\n\nThe Killing vectors $k_{I}^u$ on the quaternionic K\\\"ahler\nmanifold $\\mathcal{T}_H$ \\cite{Andrianopoli:1996cm,Alekseevsky:2001if,Bergshoeff:2002qk} have to be triholomorphic which implies \n\\begin{equation}\\label{eq:Jinvariance}\n\\nabla_{u}\nk^{I}_{w}(J^{n})_{v}^{w}-(J^{n})_{u}^{w}\\nabla_{w}k^{I}_{v}=2\\epsilon^{npq}\\omega^{p}_{uv}\\mu^{Iq}\\\n.\n\\end{equation}\nHere $\\mu_{I}^{n}$ is a\ntriplet of moment maps which also satisfy\n\\begin{equation}\\label{eq:covdermomentmap}\n\\tfrac{1}{2}\\omega^{n}_{uv}k_{I}^{v}=-\\nabla_{u}\\mu_{I}^{n}\n\\ ,\n\\end{equation}\nand the equivariance condition\n\\begin{equation}\\label{eq:equivariance}\nf_{IJ}^{K}\\mu_{K}^{n}=\\tfrac{1}{2}\\omega_{uv}^{n}k_{I}^{u}k_{J}^{v}-2\\epsilon^{npq}\\mu_{Ip}\\mu_{Jq}\\ .\n\\end{equation}\nFurthermore the covariant derivative of the Killing vectors \nobeys \\cite{D'Auria:2001kv,Alekseevsky:2001if}\n\\begin{equation}\\label{eq:covderkilling}\n\\nabla_{u}k_{Iv} +\\nabla_{v}k_{Iu} = 0\\ ,\\qquad \\nabla_{u}k_{Iv} -\\nabla_{v}k_{Iu} = \\omega^{n}_{uv}\\mu_{nI}+L_{Iuv} \\ ,\n\\end{equation}\nwhere \nthe $L_{Iuv}$ are related to the gaugino mass matrix and commute with\n$J^{n}$.\nFor later use we define\n\\begin{equation}\\label{SLdef}\nS^{n}_{Iuv}:={L}_{Iuw}(J^{n})^{w}_{v}\\ ,\\qquad L_{uv}:=h^{I}L_{Iuv}\\\n,\\qquad S_{uv}^{n}:=h^{I}S_{Iuv}^{n}\\ ,\n\\end{equation}\nwhere the $S^{n}_{Iuv}$ are symmetric in $u,v$ \\cite{Alekseevsky:2001if}. 
\n\nBefore we proceed let us \nnote that for $n_{H}=0$, i.e.\\ when there are no hypermultiplets,\nconstant Fayet-Iliopoulos (FI) terms can exist which have to satisfy\nthe equivariance condition \\eqref{eq:equivariance}. \nIn this case the first term on the right hand side of\n\\eqref{eq:equivariance}\nvanishes which implies that \n there\nare only two possible solutions \\cite{Bergshoeff:2004kh}. \nIf the gauge group contains an $SU(2)$-factor, the FI-terms have to be\nof the form\n\\begin{equation}\n\\mu_{I}^{n}= c e_{I}^{n}\\ ,\\quad c \\in \\mathbb{R}\\ ,\n\\end{equation}\nwhere the $e_{I}^{n}$ are nonzero constant vectors for $I=1,2,3$ of\nthe $SU(2)$-factor that satisfy\n\\begin{equation}\n \\epsilon^{mnp}e^{m}_{I}e^{n}_{J}=f_{IJ}^{K}e^{p}_{K}\\ .\n\\end{equation}\n The second solution has $U(1)$-factors in the gauge group and the constant moment maps are given by \n\\begin{equation}\\label{eq:AbelianFI}\n \\mu_{I}^{n}=c_{I}e^{n}\\ ,\\quad c_{I}\\in \\mathbb{R}\\ ,\n\\end{equation}\nwhere $e^{n}$ is a constant $SU(2)$-vector and\n$I$ labels the $U(1)$-factors. \n\nFinally, the covariant derivatives of the scalars in \\eqref{eq:Lagrangian} are given by\n\\begin{equation}\\label{eq:covderivatives} \n\\mathcal{D}_{\\mu}\\phi^{i} = \\partial_{\\mu}\\phi^{i} + gA_{\\mu}^{I}\\xi_{I}^{i}(\\phi)\\ , \\qquad \\mathcal{D}_{\\mu} q^{u} = \\partial_{\\mu}q^{u}+gA_{\\mu}^{I}k_{I}^{u}(q)\\ .\n\\end{equation}\nThe scalar potential\n\\begin{equation}\\label{eq:potential}\nV=2g_{ij}W^{i\\mathcal{A}\\mathcal{B}}W_{\\mathcal{A}\\mathcal{B}}^{j}+2g_{ij}\\mathcal{K}^{i}\\mathcal{K}^{j}+2N^{\\alpha}_{\\mathcal{A}}N_{\\alpha}^{\\mathcal{A}}-4S_{\\mathcal{A}\\mathcal{B}}S^{\\mathcal{A}\\mathcal{B}},\n\\end{equation}\nis defined in terms of the couplings\\footnote{Note that the $h^{M}$ in\n the direction of the tensor multiplets do not appear\n explicitly. 
Nevertheless, the couplings can implicitly depend on the\n scalars in the tensor multiplet as they might appear in $h^{I}$\n after solving \\eqref{eq:polynomial}.}\n\\begin{equation}\\label{eq:definitions}\n\\begin{aligned}\nS^{\\mathcal{A}\\mathcal{B}}&:=h^{I}\\mu_{I}^{n}\\sigma_{n}^{\\mathcal{A}\\mathcal{B}}\\ ,\\qquad\nW_{i}^{\\mathcal{A}\\mathcal{B}}:=h^{I}_{i}\\mu^{n}_{I}\\sigma_{n}^{\\mathcal{A}\\mathcal{B}}\\ ,\\\\\n\\mathcal{K}^{i}&:=\\tfrac{\\sqrt{6}}{4} h^{I}\\xi_{I}^{i}\\ ,\\qquad\nN^{\\alpha\\mathcal{A}}:=\\tfrac{\\sqrt{6}}{4} h^{I}k_{I}^{u}\\mathcal{U}_{u}^{\\alpha\\mathcal{A}}\\ .\n\\end{aligned}\n\\end{equation}\nHere $\\sigma^{n}_{\\mathcal{A}\\mathcal{B}}$ are the Pauli matrices with an index lowered by $\\epsilon_{\\mathcal{A}\\mathcal{B}}$, i.e.\n\\begin{equation}\n\\sigma^{1}_{\\mathcal{A}\\mathcal{B}}= \\begin{pmatrix} 1 & 0 \\\\ 0 & -1 \\end{pmatrix}\\\n,\\quad\n\\sigma^{2}_{\\mathcal{A}\\mathcal{B}}= \\begin{pmatrix} -i & 0 \\\\ 0 & -i \\end{pmatrix}\\\n, \\quad\n\\sigma^{3}_{\\mathcal{A}\\mathcal{B}}= \\begin{pmatrix} 0 & -1 \\\\ -1 & 0 \\end{pmatrix}\\\n.\n\\end{equation}\nAs usual the couplings \\eqref{eq:definitions}\nare related to the\nscalar parts of the supersymmetry variations of the fermions via\n\\begin{equation}\\label{susytrans}\n\\begin{aligned}\n\\delta_{\\epsilon}\\psi_{\\mu}^{\\mathcal{A}}&=D_{\\mu}\\epsilon^{\\mathcal{A}}-\\tfrac{ig}{\\sqrt{6}}S^{\\mathcal{A}\\mathcal{B}}\\gamma_{\\mu}\\epsilon_{\\mathcal{B}}+...\\ , \\\\\n\\delta_{\\epsilon}\\lambda^{i\\mathcal{A}}&=g\\mathcal{K}^{i}\\epsilon^{\\mathcal{A}}-gW^{i\\mathcal{A}\\mathcal{B}}\\epsilon_{\\mathcal{B}}+...\\ ,\\\\\n\\delta_{\\epsilon}\\zeta^{\\alpha}&=gN_{\\mathcal{A}}^{\\alpha}\\epsilon^{\\mathcal{A}}+...\\ .\n\\end{aligned}\n\\end{equation}\nHere $\\epsilon^{\\mathcal{A}}$ denote the supersymmetry parameters. 
This concludes our review of $d=5$ supergravity and we now turn to its possible supersymmetric $\textrm{AdS}$ backgrounds.\n\n\n\n\section{Supersymmetric $\textrm{AdS}_{5}$ vacua}\label{sec:vacua}\n\nIn this section we determine the conditions that lead to\n $\textrm{AdS}_{5}$ vacua which preserve all eight supercharges.\nThis requires the vanishing of all fermionic \nsupersymmetry transformations, i.e.\n\begin{equation}\n\vev{\delta_{\epsilon}\psi_{\mu}^{\mathcal{A}}}=\vev{\delta_{\epsilon}\lambda^{i\mathcal{A}}}=\vev{\delta_{\epsilon}\zeta^{\alpha}}=0\ ,\n\end{equation}\nwhere $\vev{\ }$ denotes the value of a quantity\nevaluated in the background. Using the fact that $W^{i\mathcal{A}\mathcal{B}}$ and $\mathcal{K}^{i}$ are linearly\nindependent \cite{Gunaydin:2000xk} and \eqref{susytrans}, this implies the following four conditions,\n\begin{equation}\label{eq:conditions}\n\vev{W_{i}^{\mathcal{A}\mathcal{B}}}=0\ , \quad \vev{S_{\mathcal{A} \mathcal{B}}}\,\epsilon^{\mathcal{B}}=\Lambda U_{\mathcal{A}\mathcal{B}}\,\epsilon^{\mathcal{B}}\ ,\quad \vev{N^{\alpha\mathcal{A}}}=0\ ,\quad \vev{\mathcal{K}^{i}}=0\ .\n\end{equation}\nHere $\Lambda \in \mathbb{R}$ is related to the cosmological constant and\n$U_{\mathcal{A}\mathcal{B}}=v_{n}\sigma_{\mathcal{A}\mathcal{B}}^{n}$ for $v\in S^{2}$ is an $SU(2)$-matrix.\n$U_{\mathcal{A}\mathcal{B}}$ appears in the Killing spinor equation for $\textrm{AdS}_{5}$ which reads \cite{Shuster:1999zf}\n\begin{equation}\n \vev{D_{\mu}\epsilon_{\mathcal{A}}}=\tfrac{ia}{2}\, U_{\mathcal{A}\mathcal{B}}\,\gamma_{\mu}\epsilon^{\mathcal{B}}\ , \quad a\in \mathbb{R}\ .\n\end{equation}\nAs required for an $\textrm{AdS}$ vacuum, the conditions \eqref{eq:conditions}\ngive a negative background value for the scalar potential\n$\vev{V(\phi,q)}< 0$ which can be seen from (\ref{eq:potential}).\nUsing the definitions (\ref{eq:definitions}), we immediately 
see that \nthe four conditions \\eqref{eq:conditions} can also be formulated as \nconditions on the moment maps and Killing vectors,\n\\begin{equation}\\label{eq:backgroundmomentmaps}\n\\vev{h^{I}_{i}\\mu^{n}_{I}}=0\\ ,\\qquad\n\\vev{h^{I}\\mu^{n}_{I}}=\\Lambda v^{n}\\ ,\\qquad\n\\vev{h^{I}k_{I}^{u}}=0\\ , \\qquad \\vev{h^{I}\\xi_{I}^{i}}=0\\ .\n\\end{equation}\n Note that due to \\eqref{eq:polynomial}, \\eqref{gpull} we need to have\n $\\vev{h^{I}}\\neq0$ for some $I$ and $\\vev{h^{\\tilde I}_i}\\neq0$ for every $i$ and some $\\tilde I$.\\footnote{\nIn particular this can also hold at the\n origin of the scalar field space $\\vev{\\phi^i}=0$, i.e.\\ for unbroken gauge groups.}\n\n\nIn order to solve \\eqref{eq:backgroundmomentmaps} we combine\nthe first two conditions as\n\\begin{equation}\\label{eq:momentummaps}\n \\vev{\\begin{pmatrix}h^{I} \\\\ h^{I}_{i} \\end{pmatrix} \\mu_{I}^{n}} = \\begin{pmatrix}\\Lambda v^{n} \\\\ 0\\end{pmatrix}.\n\\end{equation}\nLet us enlarge these equations to the tensor multiplet indices by introducing $\\mu_{\\tilde{I}}^{n}$ where we keep in mind that $\\mu^{n}_{N}\\equiv0$. Then we use the fact that the matrix $(h^{\\tilde{I}},h^{\\tilde{I}}_{i})$ is invertible in special real geometry (see Appendix C of \\cite{Bergshoeff:2004kh}), so we can multiply (\\ref{eq:momentummaps}) with $(h^{\\tilde{I}},h^{\\tilde{I}}_{i})^{-1}$ to obtain a solution for both equations given by\n\\begin{equation}\n\\vev{\\mu_{\\tilde{I}}^{n}}=\\Lambda v^{n}\\vev{h_{\\tilde{I}}}\\ .\n\\end{equation}\nNote that this condition is non-trivial since it implies that the moment maps point in the same direction in $SU(2)$-space for all $I$.\nFurthermore, using the $SU(2)_{R}$-symmetry we can rotate the vector $v^{n}$\nsuch that $v^{n}=v\\delta^{n3}$ and absorb the constant $v\\in \\mathbb{R}$\ninto $\\Lambda$. \nThus only $\\vev{\\mu_{I}}:=\\vev{\\mu_{I}^{3}}\\neq 0$, $\\forall I$ in the above equation. 
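\nOne easily checks that $\vev{\mu_{\tilde{I}}^{n}}=\Lambda v^{n}\vev{h_{\tilde{I}}}$ indeed solves \eqref{eq:momentummaps}: using the relations \eqref{eq:hmetric} one finds\n\begin{equation}\n\vev{h^{\tilde{I}}\mu_{\tilde{I}}^{n}}=\Lambda v^{n}\vev{h^{\tilde{I}}h_{\tilde{I}}}=\Lambda v^{n}\ ,\qquad\n\vev{h^{\tilde{I}}_{i}\mu_{\tilde{I}}^{n}}=\Lambda v^{n}\vev{h^{\tilde{I}}_{i}h_{\tilde{I}}}=0\ .\n\end{equation}\n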
Since by definition $\vev{\mu_{N}^{n}}= 0$, this implies\n\begin{equation}\label{eq:momentmapsvacuum}\n\vev{\mu_{I}}=\Lambda \vev{h_{I}}\ , \quad \vev{h_{N}}= 0\ .\n\end{equation}\nIn particular, this means that the first two equations in \eqref{eq:hmetric} hold in the vacuum already when restricted to the vector indices, i.e.\\n\begin{equation}\label{eq:hmetricvacuum}\n\vev{h^{I}h_{I}}=1\ , \quad \vev{h_{I}h^{I}_{i}}=0\ .\n\end{equation}\nMoreover due to the explicit form of the moment maps in \eqref{eq:momentmapsvacuum}, the equivariance condition \eqref{eq:equivariance} reads in the background\n\begin{equation}\label{equivariancevacuum}\n f_{IJ}^{K}\vev{\mu_{K}}=\tfrac{1}{2}\vev{\omega^{3}_{uv}k_{I}^{u}k_{J}^{v}}.\n\end{equation}\n\n\nSince \eqref{eq:hmetricvacuum} has to hold in the vacuum, we have $\vev{h^{I}}\neq 0$ for some $I$ and thus the background necessarily has non-vanishing moment maps due to \eqref{eq:momentmapsvacuum}. This in turn implies that part of the $R$-symmetry is gauged, as can be seen from the covariant derivatives of the fermions which always contain a term of the form $A_{\mu}^{I}\vev{\mu_{I}^{3}}$ \cite{Bergshoeff:2004kh}. More precisely, this combination gauges the $U(1)_{R}\subset SU(2)_{R}$ generated by $\sigma^{3}$. From \eqref{eq:momentmapsvacuum} we infer $A_{\mu}^{I}\vev{\mu_{I}^{3}}=\Lambda A_{\mu}^{I}\vev{h_{I}}$ which can be identified with the graviphoton \cite{Gunaydin:1984ak}.\n\nWe now turn to the last two equations in \eqref{eq:backgroundmomentmaps}. Let us first prove that the third equation $\vev{h^{I}k_{I}^{u}}=0$ implies the fourth $\vev{h^{I}\xi_{I}^{i}}=0$. This can be shown by expressing $\vev{\xi_{I}^{i}}$ in terms of $\vev{k_{I}^{u}}$ via the equivariance condition \eqref{equivariancevacuum}. 
Note that we learn from \eqref{eq:VTkilling} that the background values of the Killing vectors on the manifold $\mathcal{T}_{VT}$ are given by\n\begin{equation}\label{xinA}\n \vev{\xi_{I}^{i}}=\n-\sqrt{\tfrac{3}{2}}\,\vev{t_{I\tilde{J}}^{\tilde{K}}h^{\tilde Ji}h_{\tilde K}}\n=-\sqrt{\tfrac{3}{2}}\,\vev{f_{IJ}^{K}h^{Ji}h_{K} + t_{IJ}^{N}h^{Ji}h_{N}}\n=-\sqrt{\tfrac{3}{2}}\,\vev{f_{IJ}^{K}h^{Ji}h_{K}}\n\ ,\n\end{equation}\nwhere we used \eqref{trep} and \eqref{eq:momentmapsvacuum}. Inserting \eqref{eq:momentmapsvacuum}, \eqref{equivariancevacuum} into \eqref{xinA} one indeed computes\n\begin{equation}\label{eq:Killingvacuum}\n\vev{\xi_{I}^{i}} =\n-\sqrt{\tfrac{3}{2}}\tfrac{1}{2\Lambda}\,\vev{h^{Ji}\omega_{uv}^{3}k_{I}^{u}k_{J}^{v}}\\\n.\n\end{equation}\nBut then $\vev{h^{I}\xi_{I}^{i}}=0$ is always satisfied if $\vev{h^{I}k_{I}^{u}}=0$. Moreover this shows that $\vev{\xi_{I}^{i}}\neq0$ is only\npossible for $\vev{k^u_I}\neq0$. Note that the reverse is not true in general as can be seen from \eqref{xinA}.\nWe are thus left with analyzing the third condition in \eqref{eq:backgroundmomentmaps}.\n\nLet us first note that for $n_{H}=0$ there are no Killing vectors ($k_I^u\equiv0$) and the third equation in \eqref{eq:backgroundmomentmaps} is automatically satisfied.\nHowever \eqref{eq:momentmapsvacuum} can nevertheless hold if the constant FI-terms discussed below \eqref{SLdef} are of the form given in \eqref{eq:AbelianFI} and thus only gauge groups with Abelian factors are allowed in this case.\n\nNow we turn to $n_{H}\neq 0$. 
Note that then $\\vev{h^{I}k_{I}^{u}}=0$ has two possible solutions:\n\\begin{equation}\\begin{aligned}\\label{twocases}\ni)& \\quad \\vev{k_{I}^{u}}=0\\ ,\\quad \\textrm{for all}\\ I\\\\\nii)&\\quad \\vev{k_{I}^{u}}\\neq0 \\ ,\\quad \\textrm{for some}\\ I \\ \\textrm{with}\\ \\vev{h^{I}}\\ \\textrm{appropriately tuned}.\n\\end{aligned}\\end{equation}\nBy examining the covariant derivatives (\\ref{eq:covderivatives}) of the scalars we see that in the first case there is no gauge symmetry breaking by the hypermultiplets while in the second case $G$ is spontaneously broken. \nNote that not all possible gauge groups can remain unbroken in the vacuum. In fact, for case $i)$ the equivariance condition \\eqref{equivariancevacuum} implies\n\\begin{equation}\n f_{IJ}^{K}\\vev{\\mu_{K}}=0\\ .\n\\end{equation}\nThis can only be satisfied if the adjoint representation of ${\\mathfrak{g}}$ has a non-trivial zero eigenvector, i.e.\\ if the center of $G$ is non-trivial (and continuous).\\footnote{For more details on Lie groups and their adjoint representation, see for example \\cite{O'Raifeartaigh:1986vq}.} In particular, this holds for all gauge groups with an Abelian factor but all semisimple gauge groups have to be broken in the vacuum.\n\nIn the rest of this section we discuss the spontaneous symmetry\nbreaking for case $ii)$ and the details\nof the Higgs mechanism.\nLet us first consider the case where only a set of Abelian factors in $G$\nis spontaneously broken, i.e.\\ $\\vev{k^u_I}\\neq0$ for $I$ labeling\nthese Abelian factors.\nFrom \\eqref{xinA} we then learn \n$\\vev{\\xi_{I}^{i}}=0$ and \nthus we only have spontaneous symmetry breaking in the hypermultiplet\nsector\nand the Goldstone bosons necessarily are recruited out of these \nhypermultiplets.\nHence the vector multiplet corresponding to a broken Abelian factor in\n$G$ becomes massive by ``eating'' an entire hypermultiplet. 
\nIt forms a ``long'' vector multiplet containing the massive vector,\nfour gauginos and four scalars obeying the AdS mass relations.\n\nNow consider spontaneously broken non-Abelian factors of $G$,\ni.e.\ $\vev{k^u_I}\neq0$ for $I$ labeling\nthese non-Abelian factors.\nIn this case we learn from \eqref{eq:Killingvacuum} \nthat, in contrast to the Abelian case, $\vev{\xi_{I}^{i}}$ need not vanish, i.e.\ the scalars of the vector multiplets can now participate in the symmetry breaking.\nHowever the Higgs mechanism is essentially unchanged compared to the Abelian\ncase in that entire hypermultiplets are eaten and all massive vectors\nreside in long multiplets.\footnote{Note that short BPS vector\n multiplets which exist in this theory cannot appear since the breaking\n necessarily involves the hypermultiplets.} \n\nNevertheless, there always has to exist at least one unbroken generator of\n$G$ which commutes with all other unbroken generators, i.e.\ the\nunbroken gauge group in the vacuum is always of the form $H\times\nU(1)_{R}$. To see this, consider the mass matrix $M_{IJ}$ of the gauge\nbosons $A^{I}_{\mu}$. \nDue to \eqref{eq:covderivatives} and \eqref{eq:Killingvacuum}, this is given by\n\begin{equation}\n M_{IJ} = \vev{G_{uv}k_{I}^{u}k_{J}^{v}}+\vev{g_{ij}\xi_{I}^{i}\xi_{J}^{j}}=\vev{K_{uv}k^{u}_{I}k^{v}_{J}}\ .\n\end{equation}\nHere $K_{uv}$ is an invertible matrix which can be given in terms of $G_{uv}$ and $S_{uv}$ defined in \eqref{SLdef} as\n\begin{equation}\n K_{uv} = \vev{\left(\tfrac{5}{8}G_{uv}-\tfrac{6}{8\Lambda}S_{uv}\right)}\ .\n\end{equation}\nSince $\vev{h^{I}k_{I}^{u}}=0$ the mass matrix $M_{IJ}$ has a zero\neigenvector given by $\vev{h^{I}}$, i.e.\ the graviphoton\n$\vev{h^{I}}A_{I}^{\mu}$ always remains massless in the vacuum. 
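\nExplicitly, contracting the mass matrix with $\vev{h^{J}}$ gives\n\begin{equation}\nM_{IJ}\vev{h^{J}}=\vev{K_{uv}k_{I}^{u}\,(h^{J}k_{J}^{v})}=0\ ,\n\end{equation}\nby the vacuum condition $\vev{h^{I}k_{I}^{u}}=0$.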
In the\nbackground the commutator of the corresponding Killing vector $h^{I}k_{I}^{u}$ with any other isometry $k_{J}$ is given by\n\\begin{equation}\n \\vev{[h^{I}k_{I}, k_{J}]^{u}} =\n \\vev{h^{I}(k_{I}^{v}\\partial_{v}k_{J}^{u}-k_{J}^{v}\\partial_{v}k_{I}^{u})}=\n -\\vev{h^{I}k_{J}^{v}\\partial_{v}k_{I}^{u}}\\ .\n\\end{equation}\nThis vanishes for $\\vev{k_{J}^{u}}=0$ and thus the $R$-symmetry\ncommutes with every other symmetry generator of the vacuum, i.e.\\ the\nunbroken gauge group is $H \\times U(1)_{R}$. In particular, every\ngauge group $G$ which is not of this form has to be broken $G \\rightarrow H\\times U(1)_{R}$.\n\nLet us close this section with the observation that the number of broken generators is determined by the number of linearly\nindependent $\\vev{k_{I}^{u}}$. This coincides with the number of\nGoldstone bosons $n_{G}$. In fact the $\\vev{k_{I}^{u}}$ form a basis in the\nspace of\nGoldstone bosons $\\mathcal{G}$ and we have $\\mathcal{G}=\\text{span}_{\\mathbb{R}}\\{\\vev{k_{I}^{u}}\\}$ with $\\text{dim}(\\mathcal{G}) = \\rk \\vev{k_{I}^{u}} = n_{G}$.\n\nIn conclusion, we have shown that the conditions for maximally supersymmetric $\\textrm{AdS}_{5}$ vacua are given by\n\\begin{equation}\n \\vev{\\mu_{I}}=\\Lambda\\, \\vev{h_{I}}, \\quad \\vev{h_{M}}=0, \\quad \\vev{h^{I}k_{I}^{u}}=\\vev{h^{I}\\xi_{I}^{i}}=0\\ .\n\\end{equation}\nNote that the tensor multiplets enter in the final result only implicitly since the $h^{I}$ and its derivatives are functions of all scalars $\\phi^{i}$.\nThe first equation implies that a $U(1)_{R}$-symmetry is always gauged\nby the graviphoton while the last equation shows that the unbroken\ngauge group in the vacuum is of the form $H\\times U(1)_{R}$. This\nreproduces the result of \\cite{Tachikawa:2005tq} that the $U(1)_{R}$\nhas to be unbroken and gauged in a maximally supersymmetric $\\textrm{AdS}_{5}$\nbackground. In the dual four-dimensional SCFT this $U(1)_{R}$ is\ndefined by a-maximization. 
Moreover we discussed that if the gauge\ngroup is spontaneously broken the massive vector multiplets\nare long multiplets. \nFinally, we showed that the space of Goldstone bosons is given by\n$\mathcal{G}=\text{span}_{\mathbb{R}}\{\vev{k_{I}^{u}}\}$ which will be used in the next section to compute the moduli space $\mathcal{M}$ of these vacua.\n\n\n\section{Structure of the moduli space}\label{sec:moduli}\n\nWe now turn to the computation of the moduli space $\mathcal{M}$ of the maximally supersymmetric $\textrm{AdS}_{5}$ vacua determined in the previous section.\nLet us denote by $\mathcal{D}$ the space of all possible deformations of the\nscalar fields $\phi\rightarrow \vev{\phi}+\delta \phi$, $q\rightarrow\n\vev{q}+\delta q$ that leave the conditions\n\eqref{eq:backgroundmomentmaps} invariant. However, if the gauge group\nis spontaneously broken the corresponding Goldstone bosons are among\nthese deformations but they should not be counted as moduli. Thus the\nmoduli space is defined as the space of deformations $\mathcal{D}$ modulo the space of\nGoldstone bosons $\mathcal{G}$, i.e.\ $\mathcal{M}=\mathcal{D} \/ \mathcal{G}$. \nIn order to determine $\mathcal{M}$ we vary (\ref{eq:backgroundmomentmaps})\nto linear order and characterize the space $\mathcal{D}$ spanned by $\delta \phi$\nand $\delta q$ that are not fixed.\footnote{Since we consider the\n variations of the vacuum equations \eqref{eq:backgroundmomentmaps}\n to first order in the scalar fields, this procedure only gives a\n necessary condition for the moduli space.} We then show that the\nGoldstone bosons also satisfy the equations defining $\mathcal{D}$ and\ndetermine the quotient $\mathcal{D} \/ \mathcal{G}$. \n\nLet us start by varying the second condition of (\ref{eq:backgroundmomentmaps}). 
This yields\n\\begin{equation}\n\\vev{\\delta(h^{I}\\mu^{n}_{I})}= \\vev{(\\partial_{i}h^{I})\\,\\mu^{n}_{I}}\\,\\delta\\phi^{i}+\\vev{h^{I}\\nabla_{u}\\mu^{n}_{I}}\\,\\delta q^{u}=-\\tfrac{1}{2}\\vev{\\omega_{uv}^{n}h^{I}k_{I}^{v}}\\delta q^{u}\\equiv 0\\ ,\n\\end{equation}\nwhere we used (\\ref{eq:backgroundmomentmaps}) and\n(\\ref{eq:covdermomentmap}). \nSince this variation vanishes automatically, no conditions are imposed on the scalar field variation.\n\nThe variation of the first condition in (\\ref{eq:backgroundmomentmaps}) gives\n\\begin{equation}\\label{varone}\n\\begin{aligned}\n\\vev{\\delta(h_{i}^{I}\\mu^{n}_{I})}&=\\vev{(\\nabla_{j}h_{i}^{I})\\,\\mu^{n}_{I}}\\,\\delta\\phi^{j}+\\vev{h_{i}^{I}\\nabla_{u}\\mu_{I}^{n}}\\,\\delta q^{u}\\\\\n&=-\\sqrt{\\tfrac{2}{3}}\\vev{\\mu^{n}_{I}(h^{I}g_{ij}+h^{Ik}T_{ijk})}\\,\\delta \\phi^{j}-\\tfrac{1}{2}\\vev{h^{I}_{i}\\omega^{n}_{uv}k^{v}_{I}}\\,\\delta q^{u}\\\\\n&=-\\sqrt{\\tfrac{2}{3}}\\Lambda \\delta^{n3} \\delta\\phi_{i}-\\tfrac{1}{2}\\vev{h^{I}_{i}\\omega^{n}_{uv}k^{v}_{I}}\\,\\delta q^{u}=0\\ ,\n\\end{aligned}\n\\end{equation}\nwhere in the second step we used (\\ref{eq:covderh}), (\\ref{eq:covdermomentmap})\nwhile in the third we used (\\ref{eq:backgroundmomentmaps}). \nFor $n=1,2$ \\eqref{varone} imposes\n\\begin{equation}\n\\langle h^{I}_{i}\\omega_{uv}^{1,2}k^{v}_{I}\\rangle\\, \\delta q^{u} = 0\\ , \\label{eq:12}\n\\end{equation}\nwhile \nfor $n=3$ the deformations $\\delta \\phi_{i}$ can be expressed in terms of $\\delta q^{u}$ as\n\\begin{equation} \\label{eq:deltaphi}\n \\delta \\phi_{i} = -\\sqrt{\\tfrac{3}{2}}\\tfrac{1}{2\\Lambda}\\vev{h_{i}^{I}\\omega_{uv}^{3}k_{I}^{v}}\\, \\delta q^{u}\\ .\n\\end{equation}\nThus all deformations $\\delta \\phi_{i}$ are fixed either to vanish or to be related to $\\delta q^{u}$. In other words, the entire space of deformations can be spanned by scalars in the hypermultiplets only, i.e.\\ $\\mathcal{D}\\subset \\mathcal{T}_{H}$. 
Note that this is in agreement with \eqref{eq:Killingvacuum} and also $\mathcal{G} \subset \mathcal{T}_{H}$.\n\nFinally, we vary the third condition in (\ref{eq:backgroundmomentmaps}) to obtain\n\begin{equation}\n\vev{\delta(h^{I}k_{Iu})}=\vev{\partial_{i}h^{I}k_{Iu}}\,\delta\phi^{i}+\vev{h^{I}\nabla_{v}k_{Iu}}\,\delta q^{v}=0.\n\end{equation}\nInserting \eqref{eq:deltaphi} and using \eqref{eq:hmetric}, (\ref{eq:backgroundmomentmaps}) we find\n\begin{equation}\label{eq:Killing1}\n\big(\tfrac{1}{2\Lambda}\vev{k^{Iu}\omega^{3}_{vw}k_{I}^{w}} + \vev{h^{I}\nabla_{v}k_{I}^{u}}\big)\,\delta q^{v} = 0\ .\n\end{equation}\nThus we are left with the two conditions \eqref{eq:12} and\n \eqref{eq:Killing1} whose solutions determine $\mathcal{D}$. We will not solve them explicitly here for a generic supergravity. However the conditions alone suffice to prove that the moduli space is a K\\"ahler submanifold of $\mathcal{T}_H$ as we will now show.\n\nAs a first step we prove that the Goldstone bosons satisfy \eqref{eq:12} and \eqref{eq:Killing1}.\nWe know from section~\ref{sec:vacua} that the Goldstone directions are\nof the form $\delta q^{u} = c^I\vev{k_{I}^{u}}$ where $c^I$ are constants.\nInserted into \eqref{eq:12} we find\n\begin{equation}\nc^I\vev{h_{i}^{J}\omega_{uv}^{1,2}k^{u}_{I}k^{v}_{J}}=2c^I\vev{h_{i}^{J}f_{IJ}^{K}\mu_{K}^{1,2}}\n= 0\ ,\n\end{equation}\nwhere we used (\ref{equivariancevacuum}) and the fact that $\vev{\mu_{K}^{1,2}}=0$.\nTo show that the Goldstone bosons also satisfy (\ref{eq:Killing1})\nwe first observe that\n\begin{equation}\label{eq:killingalgebra2}\n \vev{h^{I}(\nabla_{v}k_{I}^{u})k^{v}_{J}}= \vev{h^{I}(\partial_{v}k_{I}^{u})k_{J}^{v}-h^{I}(\partial_{v}k_{J}^{u})k_{I}^{v}} = -\vev{h^{I}[k_{I},k_{J}]^{u}} = \vev{f_{IJ}^{K}h^{I}k_{K}^{u}}\ ,\n\end{equation}\nwhere \nin the first step we used \eqref{eq:backgroundmomentmaps},\nadded a term which vanishes in the\nbackground\n and then 
in the second step used \\eqref{Killingc}.\nIn addition we need to show\n\\begin{equation}\\label{eq:structureconstants}\n\\vev{f_{IJ}^{K}h^{I}k_{K}^{u}}=\\vev{f_{IJ}^{K}h_{K}k^{Iu}}\\ .\n\\end{equation}\nIndeed, using \\eqref{eq:hmetric} and $\\vev{h^{I}k_{I}^{u}}=0$ we find\n\\begin{equation}\n\\vev{f_{IJ}^{K}h^{I}k_{K}^{u}}=\\vev{f_{IJ}^{K}h^{I}k^{Lu}a_{KL}}=\\vev{f_{IJ}^{K}h^{I}k^{Lu}h_{K}^{i}h_{Li}}\\ .\n\\end{equation}\nInserting \\eqref{eq:representation} evaluated in the vacuum, i.e.\\\n$\\vev{f_{IJ}^{K}h^{J}h_{K}^{i}}=\\vev{f_{IJ}^{K}h^{Ji}h_{K}}$ and using\nagain \\eqref{eq:hmetric}\nwe obtain \n\\begin{equation}\n\\vev{f_{IJ}^{K}h^{I}k_{K}^{u}}\n=\\vev{f_{IJ}^{K}h^{Ii}k^{Lu}h_{K}h_{iL}}=\\vev{f_{IJ}^{K}h_{K}k^{Lu}\\delta^{I}_{L}}=\\vev{f_{IJ}^{K}h_{K}k^{Iu}}\\ ,\n\\end{equation}\nwhich proves \\eqref{eq:structureconstants} as promised.\n\nTurning back to \\eqref{eq:Killing1}, we insert $\\delta q^{u}= c^{I}\n\\vev{k_{I}^{u}}$ and use \\eqref{equivariancevacuum} and \\eqref{eq:killingalgebra2}\nto arrive at\n\\begin{equation}\\label{GBint}\n\\tfrac{1}{2\\Lambda}c^{I}\\vev{k^{Ju}\\omega^{3}_{vw}k_{J}^{w}k_{I}^{v}}+c^{I}\\vev{h^{J}\\nabla_{v}k_{J}^{u}k_{I}^{v}}=\\tfrac{1}{\\Lambda}c^{I}\\vev{k^{Ju}f_{IJ}^{K}\\mu_{K}}+c^{I}\\vev{f_{JI}^{K}h^{J}k_{K}^{u}}\\ .\n\\end{equation}\nUsing again that $\\vev{\\mu_{I}}=\\Lambda \\vev{h_{I}}$ and applying \\eqref{eq:structureconstants}, this yields\n\\begin{equation}\n\\tfrac{1}{\\Lambda}c^{I}\\vev{k^{Ju}f_{IJ}^{K}\\mu_{K}}+c^{I}\\vev{f_{JI}^{K}h^{J}k_{K}^{u}}=(f_{JI}^{K}+f_{IJ}^{K})c^{I}\\vev{h^{J}k_{K}^{u}}= 0\\ .\n\\end{equation}\nThus the Goldstone directions $\\delta q^{u}=c^{I}\\vev{k_{I}^{u}}$ leave the vacuum conditions \\eqref{eq:backgroundmomentmaps} invariant and hence $\\mathcal{G} \\subset \\mathcal{D}$.\n\nLet us now consider the moduli space $\\mathcal{M} = \\mathcal{D} \/ \\mathcal{G}$ and show that\n$J^{3}(\\mathcal{M})=\\mathcal{M}$, i.e.\\ $J^{3}$ restricts to an almost complex\nstructure on 
$\\mathcal{M}$. Concretely we show that the defining equations for the moduli space, \\eqref{eq:12} and \\eqref{eq:Killing1}, are invariant under $J^{3}$. For equations (\\ref{eq:12}) this follows from the fact that $J^{3}$ interchanges the two equations. This can be seen by substituting $\\delta q'^{u} = (J^{3})^{u}_{v}\\delta q^{v}$ and using that $J^{1}J^{2}=J^{3}$ on a quaternionic K\\\"ahler manifold. \n\nTurning to \\eqref{eq:Killing1}, we note that since only\n$\\vev{\\mu_{I}^{3}}\\neq 0$ the covariant derivative\n\\eqref{eq:Jinvariance} of the Killing vectors $k_{I}^{u}$ commutes\nwith $J^{3}$ in the vacuum, i.e.\n\\begin{equation}\n \\vev{\\nabla_{u}k^{I}_{w}(J^{n})_{v}^{w}-(J^{n})_{u}^{w}\\nabla_{w}k^{I}_{v}}=2\\epsilon^{npq}\\vev{\\omega^{p}_{uv}\\mu^{Iq}} = 0\\ .\n\\end{equation}\nThis implies that the second term in \\eqref{eq:Killing1} is invariant\nunder $J^{3}$ and we need to show that this also holds for the first\nterm. In fact, we will show in the following\nthat this term vanishes on the moduli space and is only \nnonzero for Goldstone directions.\n\n\nLet us first note that in general\n$\\rk{\\vev{k_{I}^{u}\\omega_{vw}^{3}k^{wI}}}\\leq\\rk{\\vev{k_{I}^{u}}}=n_{G}$.\nHowever, \n$\\vev{k_{I}^{u}\\omega_{vw}^{3}k^{wI}k_{J}^{v}} \\neq 0$ (as we\nsaw in \\eqref{GBint}) implies that the rank of the two matrices has\nto coincide. This in turn says that the first term in\n\\eqref{eq:Killing1}\ncan only be nonzero in the Goldstone directions and thus has to\nvanish\nfor the directions spanning $\\mathcal{M}$. Thus the whole equation \\eqref{eq:Killing1} is $J^{3}$-invariant on\n$\\mathcal{M}$. \nTherefore we have an almost complex structure\n$\\tilde{J}:=J^{3}\\vert_{\\mathcal{M}}$ and a compatible metric\n$\\tilde{G}:=G\\vert_{\\mathcal{M}}$ on $\\mathcal{M}$. Thus $(\\mathcal{M}, \\tilde{G}, \\tilde{J})$ is an almost hermitian submanifold of the quaternionic K\\\"ahler manifold $(\\mathcal{T}_{H}, G, Q)$. 
\n\nIn the following we want to use theorem 1.12 of \\cite{Alekseevsky:2001om}: an almost Hermitian submanifold $(M, G, J)$ of a quaternionic K\\\"ahler manifold $(\\tilde{M}, \\tilde{G}, Q)$ is K\\\"ahler if and only if it is totally complex, i.e.\\ if there exists a section $I$ of $Q$ that anticommutes with $J$ and satisfies\n\\begin{equation}\nI(T_{p}M) \\perp T_{p}M \\quad \\forall p\in M\\ .\n\\end{equation}\nIn particular, this condition is satisfied if the associated fundamental two-form $\\omega_{uv}=G_{uw}I_{v}^{w}$ on $M$ vanishes.\n\nNow let us show that the moduli space $\\mathcal{M}$ actually is totally\ncomplex and hence K\\\"ahler. To do so, we use \\eqref{eq:covderkilling}\nand \\eqref{SLdef} to \nnote that in the vacuum\n\\eqref{eq:momentmapsvacuum} \n$ \\vev{\\omega^{3}_{uv}}$ is given by \n\\begin{equation}\\label{eq:omega3}\n \\vev{\\omega^{3}_{uv}}=\\tfrac{2}{\\Lambda}\\vev{h^{I}\\nabla_{u}k_{Iv}-L_{uv}}\\ .\n\\end{equation}\nWe just argued that \n$\\vev{k_{I}^{u}\\omega_{vw}^{3}k^{wI}}$ vanishes on $\\mathcal{M}$\nand thus \\eqref{eq:Killing1} projected onto $\\mathcal{M}$ also implies\n\\begin{equation}\\label{eq:nablaG}\n \\vev{h^{I} \\nabla_{u}k_{vI}}\\vert_{\\mathcal{M}} = 0 \\ .\n\\end{equation}\nSince $\\vev{\\omega^{1}_{uv}}=-\\vev{\\omega^{3}_{uw}(J^{2})_{v}^{w}}$, we can multiply \\eqref{eq:omega3} with $-(J^{2})^{w}_{v}$ from the right and obtain\n\\begin{equation}\n\\vev{\\omega^{1}_{uv}}\\vert_{\\mathcal{M}} =\n\\tfrac{2}{\\Lambda}\\vev{S^{2}_{uv}-h^{I}\n \\nabla_{u}k_{wI}(J^{2})_{v}^{w}}\\vert_{\\mathcal{M}}=0\\ ,\n\\end{equation}\nwhere in the first step we used \\eqref{SLdef}.\nThis expression vanishes due to \n\\eqref{eq:nablaG} and the fact that $S^{2}_{uv}$ is symmetric while\n$\\omega^{1}_{uv}$ is antisymmetric.\nThus $\\mathcal{M}$ is totally complex and in particular $(\\mathcal{M}, \\tilde{G}, \\tilde{J})$ is a K\\\"ahler submanifold. 
\n\nAs proved in \\cite{Alekseevsky:2001om} a K\\\"ahler submanifold can\nhave at most half the dimension of the ambient quaternionic K\\\"ahler\nmanifold, i.e.\\ $\\text{dim}(\\mathcal{M}) \\leq 2n_{H}$.\\footnote{Applying the same method as in $d=4$, $\\mathcal{N}=2$ this can be checked explicitly \\cite{deAlwis:2013jaa}.}\nNote that in the case of an unbroken gauge group there are no Goldstone directions, $\\mathcal{G} = \\{0\\}$, and thus $\\mathcal{D}=\\mathcal{M}$. This is the case of maximal dimension of the moduli space. If the gauge group is now spontaneously broken then additional scalars are fixed by \\eqref{eq:12}. Since $\\mathcal{M}$ is $J^{3}$-invariant, every $\\delta q^{u} \\in \\mathcal{M}$ can be written as $\\delta q^{u} = (J^{3})_{v}^{u}\\delta q'^{v}$ for some $\\delta q'^{u}\\in \\mathcal{M}$. Combined with the fact that $J^{1}J^{2}=J^{3}$ this implies that the two conditions in \\eqref{eq:12} are equivalent on $\\mathcal{M}$. Furthermore we have $\\rk{\\vev{h^{I}_{i}\\omega_{uv}^{1}k_{I}^{v}}}=\\rk{\\vev{k_{u}^{I}}}=n_{G}$ and thus $n_{G}$ scalars are fixed by \\eqref{eq:12}. 
In conclusion, we altogether have\n\\begin{equation}\n\\text{dim}(\\mathcal{M})=\\text{dim}(\\mathcal{D})-\\text{dim}(\\mathcal{G})\\leq (2n_{H}-n_{G})-n_{G}\\ ,\n\\end{equation}\nso the moduli space has at most real dimension $2n_{H}-2n_{G}$.\n\n\n\n\n\\section*{Acknowledgments}\nThis work was supported by the German Science Foundation (DFG) under\nthe Collaborative Research Center (SFB) 676 ``Particles, Strings and the Early\nUniverse'', the Research Training Group (RTG) 1670 ``Mathematics\ninspired by String Theory and Quantum Field Theory'' and the Joachim-Herz Stiftung.\n\nWe have benefited from conversations and correspondence with David Ciupke, Peter-Simon Dieterich, Malte Dyckmanns, Jonathan Fisher, Severin L\\\"ust, Stefan Vandoren and Owen Vaughan.\n\n\n\n\n\n\n\n\n\\newpage \n\n\\providecommand{\\href}[2]{#2}\\begingroup\\raggedright","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\nThe theory of artificial neural networks (ANN) represents an open research field setting the stage for the implementation of a statistical mechanical approach in novel interdisciplinary problems, such as the modeling of the collective behavior of the human brain neurons. An important field of application of ANN is represented by pattern recognition analysis \\cite{Egmont,Wang}, which has received increasing interest in the literature, witnessed by the extensive application of ANN to tackle complex real-world problems, e.g. 
in medical diagnosis \\cite{Haya,Jiang,Shayea} and in biological sequence analysis \\cite{Ding,Qian, Condon,Cart}.\nRecent works in this field also paved the way to the systematic use of technical tools borrowed from Information Theory and Statistical Mechanics \\cite{McKay,Anand,Tkacik}.\\\\\nIn this paper, in particular, we adopt information theoretic methods \\cite{Kim,Vihn} to classify a sequence of hazelnut images, and show how our approach allows for improving the performance of pattern recognition procedures performed via ANN algorithms.\nFrom a preliminary statistical analysis on the image histograms, we identify some relevant observables to be used in the implementation of a machine learning algorithm. A special focus of our approach is on the role of \\textit{fluctuations} of the histograms around the corresponding \\textit{mean} distribution. In particular, by making use of various notions of ``distance'' between histograms, we introduce two statistical scales, whose magnitude affects the performance of a machine learning algorithm in disentangling and extracting the distinctive features of the hazelnuts.\\\\\nThe paper is organized as follows.\\\\\nIn Sec. \\ref{sec:sec1} we introduce the two aforementioned statistical scales and discuss their dependence on a quantity referred to as the ``image resolution''. We comment on the need for a large separation between these two scales to obtain an efficient pattern recognition: the lack of a wide separation between them is due to large histogram fluctuations which blur the distinctive features of the hazelnuts, thus hindering a proper classification of the data. \\\\\nIn Sec. \\ref{sec:sec2} we then test the prediction of our statistical analysis by employing a machine learning algorithm, known as Support Vector Machines (SVM) \\cite{Haykin,Webb}. 
The numerical results we obtained not only confirm the relevance of the aforementioned scale separation, but also show that the predicted onset of an optimal scale of description can be recovered through the use of a SVM algorithm, provided that its performance is \\textit{averaged} over a sufficiently large set of training samples. \\\\\nConclusions are finally drawn in Sec. \\ref{sec:conc}.\\\\\nThe main results of this work can be summarized as follows:\n\\begin{itemize}\n\t\\item We introduce two typical statistical scales, whose magnitude critically affects the performance of a pattern recognition algorithm based on statistical variables;\n \\item We describe the dependence of such scales on the scale of resolution, thus unveiling the onset of an optimal resolution at which the pattern recognition is favoured;\n \\item We numerically recover the results of the statistical analysis by using a SVM algorithm, and also shed light on the role of \\textit{averaging} the performance of a SVM over sufficiently many training samples.\n\\end{itemize}\n\n\\section{The original set of hazelnut images: a statistical approach}\n\\label{sec:sec1}\n\nIn this work we consider the problem of pattern recognition applied to a sequence of hazelnut images, to be categorized into three different sets: ``good'' ($G$), ``damaged'' ($D$) and ``infected'' ($I$). In the sequel, we will use the shorthand notation $\\mathcal{S}=\\{G,D,I\\}$, and, for any $A \\in \\mathcal{S}$, we will also denote $N_A=card(A)$. Our database consists of a set of $800$ x-ray scanned images, cf. Fig. \\ref{hazelnuts}, with $N_G=750$, $N_D=25$ and $N_I=25$. The analysis outlined below is meant to provide a guiding strategy to assess, and possibly enhance, the performance of pattern recognition methods based on ANN algorithms. The prominent distinctive features of the three sets $G$, $D$ and $I$ are not detectable from a solely visual inspection of the x-ray images. 
Hence, in order to extract some valuable information, we relied on the computation of the histograms of the hazelnut images, shown in Fig. \\ref{hazelnuts2}.\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.15\\textwidth, height=0.15\\textwidth]{S_6.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.15\\textwidth, height=0.15\\textwidth]{S_7.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.15\\textwidth, height=0.15\\textwidth]{S_8.pdf}\\\\\n\\vspace{2mm}\n\\includegraphics[width=0.15\\textwidth, height=0.15\\textwidth]{A_1.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.15\\textwidth, height=0.15\\textwidth]{A_2.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.15\\textwidth, height=0.15\\textwidth]{A_3.pdf}\\\\\n\\vspace{2mm}\n\\includegraphics[width=0.15\\textwidth, height=0.15\\textwidth]{C_15.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.15\\textwidth, height=0.15\\textwidth]{C_16.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.15\\textwidth, height=0.15\\textwidth]{C_17.pdf}\n\\caption{X-ray scanned images of good hazelnuts (top row), damaged hazelnuts (middle row) and infected hazelnuts (bottom row).}\\label{hazelnuts}\n\\end{figure}\n\nTherefore, for any $A \\in \\mathcal{S}$, we computed the number of pixels, in the image pertaining to the $i$-th hazelnut belonging to the set $A$ (with $i= 1,...,N_{A}$), characterized by the shade of gray $j$ (conventionally running from the value $0$ - black - to $255$ - white). After normalizing wrt the total number of pixels forming the same image, we thus obtained the so-called image histogram $p_i^{(A)}(j)$. 
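For concreteness, the normalization just described can be sketched in a few lines of Python (a purely illustrative sketch: the function name and the representation of the image as a list of pixel rows are our own choices, not the authors' implementation):

```python
from collections import Counter

def image_histogram(pixels):
    """Normalized gray-level histogram p(j) of an 8-bit grayscale image.

    `pixels` is an iterable of pixel rows; each pixel is an integer
    shade of gray j in {0, ..., 255}.  Dividing by the total number of
    pixels normalizes the histogram so that it sums to one.
    """
    counts = Counter(j for row in pixels for j in row)
    n_pixels = sum(counts.values())
    return [counts[j] / n_pixels for j in range(256)]
```

For instance, a $2\times 2$ image with pixel values $(0,0,0,255)$ yields $p(0)=0.75$ and $p(255)=0.25$.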
We then computed the mean histogram pertaining to $A$, denoted by $\\overline{p}_i^{(A)}(j)$, obtained by averaging over the $N_A$ histograms $p_i^{(A)}(j)$.\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{meanhistS.pdf}\n\\includegraphics[width=0.45\\textwidth]{meanhistA.pdf}\\\\\n\\includegraphics[width=0.45\\textwidth]{meanhistC.pdf}\n\\includegraphics[width=0.45\\textwidth]{meanhistTot.pdf}\n\\caption{Image histograms of good hazelnuts (top left), damaged hazelnuts (top right), infected hazelnuts (bottom left). The horizontal axis displays the shades of gray, conventionally running from $0$ to $255$. For each of the three sets, the figures display the (normalized) histograms of single hazelnuts as well as the (normalized) mean histogram. On the bottom right corner, the mean histograms of the three different sets are compared.}\\label{hazelnuts2}\n\\end{figure}\n\nA quantitative characterization of the images can be afforded by introducing various notions of ``distance'' between different histograms \\cite{Cha}: we considered, in particular, the norm in $L^1$, in $L^2$ (euclidean), in $L^\\infty$, the Squared $\\chi^2$ distance and the Jeffreys divergence \\cite{SHC}. 
\nIt is worth briefly recalling some basic aspects concerning the latter two notions of distance, borrowed from probability theory.\nThe Squared $\\chi^2$ distance corresponds to the symmetrized version of Pearson's $\\chi^2$ test \\cite{Plackett}, which, given a histogram $p(j)$ and a reference histogram $q(j)$, defines their relative distance as:\n\\begin{equation}\nd_{\\chi^2}=\\sum_j\\frac{(p(j)-q(j))^2}{q(j)} \\label{chisq} \\quad .\n\\end{equation}\nThus, the quantity $d_{\\chi^2}$ in (\\ref{chisq}) resembles the standard euclidean distance between the two histograms, except that it introduces a weight corresponding to the inverse of the reference histogram.\\\\\nOn the other hand, the Jeffreys' divergence \\cite{Jeffreys} belongs to the Shannon entropy family \\cite{Beck}, and corresponds to the symmetrized version of the Kullback-Leibler (K-L) divergence (or \\textit{relative entropy}) \\cite{KL}, defined as:\n\\begin{equation}\nd_{K-L}(p\\|q)=\\sum_j \\left(p(j)\\log\\left(\\frac{p(j)}{q(j)}\\right) \\right)=H(p,q)-H(p) \\label{KL} \\quad ,\n\\end{equation}\nwhere $H(p,q)$ is the cross entropy of $p$ and $q$, and $H(p)$ is the entropy of $p$ \\cite{Kull,Jay}.\nMore generally, the K-L divergence (\\ref{KL}) is a member of the family of the so-called $f$-divergences \\cite{Mori,Ali} and arises as a limiting case of the more general R\\'enyi (or $\\alpha$-) divergence \\cite{Xu}. 
It is worth recalling the definition of an $f$-divergence: given any two continuous distributions $p$ and $q$, over a space $\\Omega$, with $p$ absolutely continuous wrt $q$, the $f$-divergence of $p$ from $q$ is\n\\begin{equation}\nd_{f}(p\\|q)=\\int_\\Omega f\\left(\\frac{dp}{dq}\\right)dq \\quad ,\n\\end{equation}\nwhere $f$ is a convex function such that $f(1)=0$.\\\\\nThen, for any $A \\in \\mathcal{S}$, we considered the distance (or \\textit{fluctuation}), defined according to the various notions introduced above, between the histogram $p_i^{(A)}(j)$ and the corresponding mean $\\overline{p}_i^{(A)}(j)$. Next, by averaging over the set $A$, one obtains a ``statistical scale'' (still depending on the chosen notion of distance) characterizing the fluctuations within each set $A$. \nTo clarify the meaning of the entries in Tab. \\ref{normtot}, let us illustrate, for instance, the procedure to calculate the quantity $\\langle d \\rangle^{(A)}_2$. To this aim, we introduce the euclidean distance between the histograms $p_i^{(A)}(j)$ and $\\overline{p}_i^{(A)}(j)$:\n\\begin{equation}\nd_{2,i}^{(A)}=\\sqrt{\\sum_{j=1}^{N_g}|p_i^{(A)}(j)-\\overline{p}_i^{(A)}(j)|^2} \\label{eucl}\n\\end{equation}\nwhere $N_g=256$ denotes the number of shades of gray.\n\n\\begin{table}[bth]\n\\centering\n\\begin{tabular}{c|c|c|c|c|c|}\n \n & $\\langle d \\rangle^{(A)}_1$ & $\\langle d \\rangle^{(A)}_2$ & $\\langle d \\rangle^{(A)}_\\infty$ & $\\langle d \\rangle^{(A)}_{\\chi^2}$ & $\\langle d \\rangle^{(A)}_{J}$\\\\\n\\hline\n $A =G$ & $0.2079$ & $0.0372$ & $0.0139$ & $0.0495$ & $0.0369$ \\\\\n\\hline\n $A =D$ & $0.2485$ & $0.0488$ & $0.0162$ & $0.0776$ & $0.0477$ \\\\ \n\\hline\n $A =I$ & $0.2097$ & $0.0379$ & $0.0145$ & $0.0435$ & $0.0401$ \\\\\n \\hline\n\\end{tabular}\n %\n\\caption{Typical fluctuation of the histograms of the hazelnuts from the corresponding mean histogram, within each of the sets $G$, $D$, and $I$. 
The quantities $\\langle d \\rangle^{(A)}$ are evaluated by using different notions of distance: norm in $L^1$, in $L^2$ (euclidean), in $L^\\infty$, Squared $\\chi^2$ distance and Jeffreys divergence.}\n %\n \\label{normtot}\n \\end{table}\n\n\\begin{table}[bth]\n\\centering\n\\begin{tabular}{c|c|c|c|c|c|}\n \n & $\\Delta^{(A,B)}_{1}$ & $\\Delta^{(A,B)}_{2}$ & $\\Delta^{(A,B)}_{\\infty}$ & $\\Delta^{(A,B)}_{\\chi^2}$ & $\\Delta^{(A,B)}_{J}$\\\\\n\\hline\n $A=G, B=D$ & $0.0923$ & $0.0162$ & $0.0036$ & $0.0089$ & $0.0200$ \\\\\n\\hline\n $A=D, B=I$ & $0.0533$ & $0.0090$ & $0.0028$ & $0.0021$ & $0.0030$ \\\\ \n\\hline\n $A=G, B=I$ & $0.0526$ & $0.0115$ & $0.0051$ & $0.0044$ & $0.0124$ \\\\ \n \\hline\n\\end{tabular}\n %\n\\caption{Average distances between pairs of mean histograms referring to two different sets $A$ and $B$, evaluated, as in Tab. \\ref{normtot}, using different notions of distance: norm in $L^1$, in $L^2$ (euclidean), in $L^\\infty$, Squared $\\chi^2$ distance and Jeffreys divergence.}\n %\n \\label{Deltatot}\n \\end{table}\n\nFrom the knowledge of $d_{2,i}^{(A)}$ in (\\ref{eucl}), the quantity $\\langle d \\rangle^{(A)}_2$, shown in Tab. \\ref{normtot}, is then computed by averaging over $A$:\n\\begin{equation}\n\\langle d \\rangle_2^{(A)} =\\frac{1}{N_{A}}\\sum_{i=1}^{N_{A}}d_{2,i}^{(A)} \\label{aver}\n\\end{equation} \nIt is worth noticing, from Tab. \\ref{normtot}, that, no matter which notion of distance is adopted, the magnitude of the fluctuations is not significantly affected by $N_{A}$.\nThe scale $\\langle d \\rangle^{(A)}$, which, for any $A \\in \\mathcal{S}$, is of the order $\\langle d \\rangle^{(A)}\\simeq 10^{-2}$, can thus be regarded as an intrinsic statistical scale pertaining to the set $A$. \nIt is worth comparing this scale with another statistical scale, denoted by $\\Delta^{(A,B)}$, whose values are listed in Tab. \\ref{Deltatot}. 
The quantity $\\Delta^{(A,B)}$ is defined as the distance, computed by using the various notions of distance introduced above, between the pair of mean histograms relative to the sets $A$ and $B$, with $(A,B)\\in\\mathcal{S}$ and $A \\neq B$. The symmetry of the distances introduced above entails, in particular, that $\\Delta^{(A,B)}=\\Delta^{(B,A)}$. \nA better interpretation of the meaning of the scales $\\langle d \\rangle^{(A)}$ and $\\Delta^{(A,B)}$ can be achieved by noticing that a large value of $\\langle d \\rangle^{(A)}$ mirrors the presence of a considerable amount of noise on top of the mean histogram $\\overline{p}_i^{(A)}(j)$, which thus blurs the distinctive features of the set $A$. On the contrary, a larger value of $\\Delta^{(A,B)}$ reflects a more significant separation between the mean histograms of the two sets $A$ and $B$, which instead favours pattern recognition. In the sequel of this Section we will focus, therefore, on the ratio of these two scales.\nFrom an inspection of Tabs. \\ref{normtot} and \\ref{Deltatot}, we first observe that $\\Delta^{(A,B)}\\sim\\langle d \\rangle^{(A)}$. That is, the two scales are comparable: the fluctuations, within each set, are comparable with the typical distances between different sets. Hence, the histograms shown in Fig. \\ref{hazelnuts2} cannot be regarded as a useful source of information for pattern recognition.\nA different route can be pursued by just focusing on a selected portion of the original images. This approach is motivated by the assumption that the distinctive features of each of the three sets are mostly contained in the ``nuclei'' of the hazelnuts. We calculated, therefore, the histograms corresponding to the cropped portions of the original images, delimited by the thick red rectangles shown in Fig. \\ref{hazelnuts3}. 
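To make the two statistical scales concrete, the following sketch computes $\langle d \rangle^{(A)}$ and $\Delta^{(A,B)}$ from lists of normalized histograms (the function names are our own; for the squared $\chi^2$ distance we use the symmetrized form $\sum_j (p_j-q_j)^2/(p_j+q_j)$, and the small constant `eps` regularizing empty bins in the $\chi^2$ and Jeffreys formulas is likewise our own choice):

```python
import math

def d_2(p, q):
    """Euclidean (L^2) distance between two normalized histograms."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(p, q)))

def d_chi2(p, q, eps=1e-12):
    """Symmetrized (squared) chi^2 distance."""
    return sum((x - y) ** 2 / (x + y + eps) for x, y in zip(p, q))

def d_jeffreys(p, q, eps=1e-12):
    """Jeffreys divergence: symmetrized Kullback-Leibler divergence."""
    return sum((x - y) * math.log((x + eps) / (y + eps)) for x, y in zip(p, q))

def mean_histogram(hists):
    """Entry-wise average of a list of histograms."""
    return [sum(col) / len(hists) for col in zip(*hists)]

def fluctuation_scale(hists, dist=d_2):
    """<d>^(A): average distance of each histogram from the set mean."""
    mean = mean_histogram(hists)
    return sum(dist(h, mean) for h in hists) / len(hists)

def separation_scale(hists_a, hists_b, dist=d_2):
    """Delta^(A,B): distance between the mean histograms of two sets."""
    return dist(mean_histogram(hists_a), mean_histogram(hists_b))
```

In this notation, an efficient pattern recognition requires the ratio of `separation_scale` to `fluctuation_scale` to be much larger than one.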
\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.31\\textwidth]{HistoS1.pdf}\n\\includegraphics[width=0.31\\textwidth]{HistoS6.pdf}\n\\includegraphics[width=0.31\\textwidth]{HistoS12.pdf}\\\\\n\\includegraphics[width=0.31\\textwidth]{HistoA1.pdf}\n\\includegraphics[width=0.31\\textwidth]{HistoA12.pdf}\n\\includegraphics[width=0.31\\textwidth]{HistoA13.pdf}\\\\\n\\includegraphics[width=0.31\\textwidth]{HistoC2.pdf}\n\\includegraphics[width=0.31\\textwidth]{HistoC6.pdf}\n\\includegraphics[width=0.31\\textwidth]{HistoC13.pdf}\n\\caption{Image histograms of good hazelnuts (top row), damaged hazelnuts (middle row) and infected hazelnuts (bottom row). Each image shows the histogram of the entire hazelnut (top histogram) and the histogram referring to the fraction of the image delimited by the thick red rectangles, characterized by $\\epsilon=80$ and $\\rho=2.5$.}\\label{hazelnuts3}\n\\end{figure}\n\nThe red rectangles in Fig. \\ref{hazelnuts3} are identified by the pair of parameters $\\{\\epsilon, \\rho\\}$, where $\\epsilon$, related to the image resolution, is defined as the number of pixels along the horizontal length of the rectangles, while $\\rho$ is the ratio of the number of pixels along the vertical length to the corresponding number of pixels along the horizontal one.\nIn our simulations, the values of the parameters $\\{\\epsilon, \\rho\\}$ were kept constant when calculating the histograms relative to different hazelnut nuclei.\nFigure \\ref{hazelnuts3} refers, for instance, to the case corresponding to $\\epsilon=80$ and $\\rho = 2.5$. \nIn Figs. \\ref{nuclS},\\ref{nuclA} and \\ref{nuclC}, we show the results of the image processing of the hazelnut nuclei, performed through a noise removal filter (adaptive Wiener filtering) and various edge-detector algorithms. 
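The $(\epsilon, \rho)$ cropping described above can be made explicit as follows (an illustrative sketch; centering the rectangle on the image is our own assumption, since the text only fixes the width $\epsilon$ and the aspect ratio $\rho$):

```python
def crop_nucleus(pixels, epsilon, rho):
    """Return the central sub-image with `epsilon` pixels along the
    horizontal side and rho * epsilon pixels (rounded) along the
    vertical side, mimicking the (epsilon, rho) rectangles of the text."""
    height = int(round(rho * epsilon))
    n_rows, n_cols = len(pixels), len(pixels[0])
    top = max((n_rows - height) // 2, 0)    # center vertically
    left = max((n_cols - epsilon) // 2, 0)  # center horizontally
    return [row[left:left + epsilon] for row in pixels[top:top + height]]
```

The cropped sub-image can then be passed to the same histogram computation used for the full images.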
\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{NucleusS10.pdf}\n\\includegraphics[width=0.45\\textwidth]{NucleusS8.pdf}\\\\\n\\includegraphics[width=0.45\\textwidth]{NucleusS6.pdf}\n\\includegraphics[width=0.45\\textwidth]{NucleusS13.pdf}\n\\caption{Image processing of the hazelnut nuclei belonging to the set $G$, for $\\epsilon=100$ and $\\rho=1.5$, by means of edge-detection algorithms, respectively: Sobel's algorithm (top right figure), Canny's algorithm (bottom left figure) and Roberts' algorithm (bottom right figure).}\\label{nuclS}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{NucleusA1.pdf}\n\\includegraphics[width=0.45\\textwidth]{NucleusA6.pdf}\\\\\n\\includegraphics[width=0.45\\textwidth]{NucleusA10.pdf}\n\\includegraphics[width=0.45\\textwidth]{NucleusA15.pdf}\n\\caption{Image processing of the hazelnut nuclei belonging to the set $D$, for $\\epsilon=100$ and $\\rho=1.5$, by means of edge-detection algorithms, respectively: Sobel's algorithm (top right figure), Canny's algorithm (bottom left figure) and Roberts' algorithm (bottom right figure).}\\label{nuclA}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{NucleusC3.pdf}\n\\includegraphics[width=0.45\\textwidth]{NucleusC7.pdf}\\\\\n\\includegraphics[width=0.45\\textwidth]{NucleusC6.pdf}\n\\includegraphics[width=0.45\\textwidth]{NucleusC11.pdf}\n\\caption{Image processing of the hazelnut nuclei belonging to the set $I$, for $\\epsilon=100$ and $\\rho=1.5$, by means of edge-detection algorithms, respectively: Sobel's algorithm (top right figure), Canny's algorithm (bottom left figure) and Roberts' algorithm (bottom right figure).}\\label{nuclC}\n\\end{figure}\n\nIn Fig. \\ref{hazelnutsnucl}, which is worth comparing with Fig. \\ref{hazelnuts2}, we plotted the mean histograms relative to the cropped images, with $\\epsilon=80$ and $\\rho =2.5$. 
The question arises, then, as to whether the separation between the two scales $\\langle d \\rangle^{(A)}$ and $\\Delta^{(A,B)}$ can be enhanced by tuning the two parameters $\\epsilon$ and $\\rho$. \n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{meanhistSnucl80r25.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{meanhistAnucl80r25.pdf}\\\\\n\\vspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{meanhistCnucl80r25.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{meanhistTotnucl80r25.pdf}\n\\caption{Image histograms of the hazelnut nuclei belonging to the sets $G$ (top left), $D$ (top right) and $I$ (bottom left). For each of the three sets, the figures display the histograms of single hazelnuts as well as the mean histogram of the corresponding set. On the bottom right corner, the mean histograms of the three sets are compared. All the histograms were obtained by setting $\\epsilon = 80$ and $\\rho=2.5$.}\\label{hazelnutsnucl}\n\\end{figure}\n\nWe thus studied the behaviour of the mean histograms, shown in Fig. \\ref{hazelnutsnucl}, as well as of the typical fluctuations occurring in each set, as a function of $\\epsilon$ and $\\rho$: in our simulations, $\\epsilon$ spans a broad range of values, whereas we let $\\rho$ attain the values $1.5$ and $2.5$, cf. Fig. \\ref{rect}. \n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.7\\textwidth]{rect.pdf}\n\\caption{Different values of the scale of resolution: $\\epsilon=80$ (red rectangle), $\\epsilon =60$ (magenta rectangle), $\\epsilon=40$ (blue rectangle), $\\epsilon=20$ (green rectangle). All the colored rectangles shown in the picture are obtained by setting $\\rho =2.5$.}\\label{rect}\n\\end{figure}\n\nIn Fig. 
\\ref{nuclhist1} and \\ref{nuclhist2}, the mean histograms of the sets $G$, $D$ and $I$ are shown for different values of $\\epsilon$, and for two different values of $\\rho$.\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{meanhistTotnucl80r15.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{meanhistTotnucl60r15.pdf}\\\\\n\\vspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{meanhistTotnucl40r15.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{meanhistTotnucl20r15.pdf}\n\\caption{Mean histograms of the hazelnut nuclei at different scales of resolution: $\\epsilon =80$ (top left), $\\epsilon =60$ (top right), $\\epsilon =40$ (bottom left) and $\\epsilon =20$ (bottom right), with $\\rho=1.5$.}\\label{nuclhist1}\n\\end{figure}\n\nWe focused, in particular, on the investigation of the dependence of the scales $\\langle d \\rangle^{(A)}(\\epsilon; \\rho)$ and $\\Delta^{(A,B)}(\\epsilon; \\rho)$ on the resolution $\\epsilon$. \n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{meanhistTotnucl80r25.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{meanhistTotnucl60r25.pdf}\\\\\n\\vspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{meanhistTotnucl40r25.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{meanhistTotnucl20r25.pdf}\n\\caption{Mean histograms of the nuclei of the hazelnuts at different scales of description: $\\epsilon =80$ (top left), $\\epsilon =60$ (top right), $\\epsilon =40$ (bottom left) and $\\epsilon =20$ (bottom right), with $\\rho=2.5$.}\\label{nuclhist2}\n\\end{figure}\n\nFigures \\ref{norm1} and \\ref{mean1} illustrate the behaviour of $\\langle d \\rangle^{(A)}$ and $\\Delta^{(A,B)}$ vs. $\\epsilon$ for $\\rho=1.5$, whereas \nFigs. \\ref{norm2} and \\ref{mean2} show the analogous behaviour of $\\langle d \\rangle^{(A)}$ and $\\Delta^{(A,B)}$ vs. 
$\\epsilon$ for $\\rho=2.5$.\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{FluctS15.pdf}\n\\includegraphics[width=0.45\\textwidth]{FluctA15.pdf}\n\\includegraphics[width=0.45\\textwidth]{FluctC15.pdf}\n\\caption{Behaviour of the distances $\\langle d \\rangle^{(A)}$ vs. $\\epsilon$, with $\\rho=1.5$.}\\label{norm1}\n\\end{figure}\n\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{dAS15.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{dSC15.pdf}\\\\\n\\vspace{2mm}\n\\includegraphics[width=0.45\\textwidth]{dAC15.pdf}\n\\caption{Behaviour of the distances $\\Delta^{(A,B)}$ vs. $\\epsilon$, with $\\rho=1.5$.}\\label{mean1}\n\\end{figure}\n\nThe two plots \\ref{mean1} and \\ref{mean2} reveal that reducing $\\epsilon$ leads, on the one hand, to a remarkable increase of $\\Delta^{(A,B)}$, which attains an order of magnitude of about $\\Delta^{(A,B)} \\simeq 10^{-1}$. On the other hand, this effect is counterbalanced by the simultaneous increase of the scale $\\langle d \\rangle^{(A)}$, evidenced in Figs. \\ref{norm1} and \\ref{norm2}, which turns out to be, for both the considered values of $\\rho$, of the same order of magnitude as $\\Delta^{(A,B)}$. This is more clearly visible in Fig. \\ref{ratio}, which illustrates the behaviour of the ratio of $\\Delta^{(A,B)}$ to $\\langle d \\rangle^{(A)}$ and to $\\langle d \\rangle^{(B)}$, for different values of $\\epsilon$, obtained by setting $A = G$ and $B = D$.\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{FluctS25.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{FluctA25.pdf}\\\\\n\\vspace{2mm}\n\\includegraphics[width=0.45\\textwidth]{FluctC25.pdf}\n\\caption{Behaviour of the distances $\\langle d \\rangle^{(A)}$ vs. 
$\\epsilon$, with $\\rho=2.5$.}\\label{norm2}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{dAS25.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{dSC25.pdf}\\\\\n\\vspace{2mm}\n\\includegraphics[width=0.45\\textwidth]{dAC25.pdf}\n\\caption{Behaviour of the distances $\\Delta^{(A,B)}$ vs. $\\epsilon$, with $\\rho=2.5$.}\\label{mean2}\n\\end{figure}\n\nThe plots in Fig. \\ref{ratio} confirm that the two scales $\\Delta^{(A,B)}$ and $\\langle d \\rangle^{(A)}$ remain of the same order, even when reducing $\\epsilon$. \nOn the contrary, an efficient pattern recognition, based on the analysis of the image histograms, can be obtained if the ratio $\\Delta^{(A,B)} \/\\langle d \\rangle^{(A)} \\gg 1$, i.e. when the mean statistical distance between different sets overwhelms the typical size of fluctuations characteristic of each set.\nThus, the study of the behaviour of the latter two scales allows one to predict a poor performance of a machine learning algorithm aiming at classifying the hazelnuts on the basis of the image histograms.\nNevertheless, an interesting aspect emerges from an inspection of Fig. \\ref{ratio}: despite the similarity of the magnitudes of the two statistical scales, the plot of their ratio vs. $\\epsilon$ yields a non-monotonic function. \n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{ratioASAr15.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{ratioASSr15.pdf}\\\\\n\\vspace{2mm}\n\\includegraphics[width=0.45\\textwidth]{ratioASAr25.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{ratioASSr25.pdf}\n\\caption{Behaviour of the ratio $\\Delta^{(G,D)}\/\\langle d \\rangle^{(G)}$ (left column) and $\\Delta^{(G,D)}\/\\langle d \\rangle^{(D)}$ (right column) vs. $\\epsilon$, for $\\rho=1.5$ (upper row) and $\\rho=2.5$ (lower row).}\\label{ratio}\n\\end{figure}\n\nTo better illustrate this point, we plotted, in Fig. 
\\ref{ratio2}, the ratio of the scale $\\Delta^{(A,B)}$ to the geometric mean $\\sqrt{\\langle d \\rangle^{(A)} \\langle d \\rangle^{(B)}}$, where we set $A=G, B=D$ (left panel) and $A=G, B=I$ (right panel). \n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{ratioASr15.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{ratioCSr15.pdf}\n\\caption{\\textit{Left panel:} Behaviour of the ratio $\\Delta^{(G,D)}\/\\sqrt{\\langle d \\rangle^{(G)} \\langle d \\rangle^{(D)}}$ vs. $\\epsilon$, for $\\rho=1.5$. \\textit{Right panel:} Behaviour of the ratio $\\Delta^{(G,I)}\/\\sqrt{\\langle d \\rangle^{(G)} \\langle d \\rangle^{(I)}}$ vs. $\\epsilon$, for $\\rho=1.5$.}\\label{ratio2}\n\\end{figure}\n\nIn Fig. \\ref{ratio3}, instead, for reasons to be further clarified in Sec. \\ref{sec:sec2}, we show the results, analogous to those portrayed in Fig. \\ref{ratio2}, obtained by merging the two sets $D$ and $I$ into one single set, labeled as $nG$ (``not good'' hazelnuts). The plot in Fig. \\ref{ratio3} shows that, for $\\rho=1.5$, the value $\\epsilon^*=70$ maximizes the ratio of the aforementioned statistical scales with respect to almost all the notions of ``statistical distance'' we considered. In Sec. \\ref{sec:sec2} we will show that this optimal value $\\epsilon^*$, here obtained by only relying on information theoretic methods, can also be recovered by using Support Vector Machine (SVM) algorithms and averaging their performance over a set of training samples.\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.70\\textwidth]{GnGr15.pdf}\n\\caption{Behaviour of the ratio $\\Delta^{(G,nG)}\/\\sqrt{\\langle d \\rangle^{(G)} \\langle d \\rangle^{(nG)}}$ vs. $\\epsilon$, for $\\rho=1.5$. 
The plot evidences the onset of an optimal scale $\\epsilon^*$ at which the ratio of the statistical scales is maximized.}\\label{ratio3}\n\\end{figure}\n\n\n\\section{Support Vector Machines}\n\\label{sec:sec2}\n\nIn this Section, we discuss the results obtained by processing our data with a supervised learning method known as Support Vector Machines (SVM) \\cite{Haykin,Boser, Cortes,Vapnik95,Vapnik98}. SVM is a machine learning algorithm which seeks to separate a set of data into two classes by determining the \\textit{best separating hyperplane} (BSH) (also referred to, in the literature, as the ``maximal margin hyperplane'' \\cite{Webb}), cf. Fig. \\ref{SVM}. It is worth recapitulating the basic notions underpinning the numerical algorithm we used.\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{SVM.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{SVM2.pdf}\n\\caption{\\textit{Left panel:} Example of a linear discriminant analysis based on the SVM algorithm. Shown are three different hyperplanes: $\\Pi_1$, which does not separate the two classes, $\\Pi_2$, which separates the classes but only with a small margin, and $\\Pi_3$, which corresponds to the best separating hyperplane. \\textit{Right panel:} Illustration of the best separating hyperplane (red straight line), the canonical hyperplanes (black dashed lines), the support vectors (magenta circles) and the margin of separation $\\xi$.}\\label{SVM}\n\\end{figure}\n\nLet $\\{\\textbf{x}\\}$ denote the set of data (input pattern) to be classified, with $\\textbf{x} \\in E\\subseteq\\mathbb{R}^N$, and consider a given training set $\\mathcal{T}=\\{\\textbf{x}_k,d_k\\}_{k=1}^{N_T}$, where $N_T$ denotes the number of elements of $\\mathcal{T}$. 
Let, then, $d_k\\in\\{+1,-1\\}$ denote the \\textit{desired response} parameter corresponding to $\\textbf{x}_k$, whose value depends on which of the two classes $\\textbf{x}_k$ belongs to.\nThe equation of a hyperplane $\\Pi$ in $\\mathbb{R}^N$ reads:\n\\begin{equation}\n\\textbf{w}^T \\cdot \\textbf{x} + b=0 \\quad , \\nonumber\n\\end{equation}\nwith $\\textbf{w}$ and $b$ denoting, respectively, an $N$-dimensional adjustable weight vector and a bias. The BSH is the hyperplane characterized by the pair $(\\textbf{w}_o,b_o)$ which, for linearly separable patterns, fulfills the following conditions \\cite{Haykin}:\n\n\\begin{eqnarray}\n\\textbf{w}_o^T \\cdot \\textbf{x}_k+b_o\\ge 1 \\quad \\text{for $d_k = +1$} \\quad ,\\nonumber\\\\\n\\textbf{w}_o^T \\cdot \\textbf{x}_k+b_o\\le -1 \\quad \\text{for $d_k = -1$} \\quad .\\label{suppvec}\n\\end{eqnarray}\n\nThe data points, portrayed in magenta color in the right panel of Fig. \\ref{SVM}, for which Eqs. (\\ref{suppvec}) are satisfied with the equality sign, are called \\textit{support vectors}, and lie on the so-called \\textit{canonical hyperplanes} \\cite{Webb}, represented by the black dashed lines in the right panel of Fig. \\ref{SVM}. 
Figure \\ref{SVM} also illustrates the so-called \\textit{margin of separation}, defined as the distance $\\xi=1\/\\|\\textbf{w}_o\\|$ between the support vectors and the BSH.\nThe BSH, which maximizes $\\xi$ under the constraints (\\ref{suppvec}), can be found by determining the saddle point, where $d\\mathcal{L}(\\textbf{w},b,\\lambda_1,...,\\lambda_{N_T})=0$, of the Lagrangian function:\n\\begin{equation}\n\\mathcal{L}(\\textbf{w},b,\\lambda_1,...,\\lambda_{N_T})=\\frac{1}{2}\\textbf{w}^T\\cdot \\textbf{w}-\\sum_{k=1}^{N_T} \\lambda_k[d_k(\\textbf{w} \\cdot \\textbf{x}_k+b)-1] \\label{lagr} \\quad .\n\\end{equation} \nThe solution of this variational problem is easily found in the form \\cite{Haykin}:\n\\begin{equation}\n\\mathbf{w}_o=\\sum_{k=1}^{N_T} \\lambda_k d_k \\textbf{x}_k \\label{sol1}\n\\end{equation}\nwhere the Lagrange multipliers $\\lambda_k$ satisfy the conditions:\n\\begin{eqnarray}\n\\sum_{k=1}^{N_T} \\lambda_k d_k&=&0 \\quad ,\\nonumber\\\\\n\\lambda_k[d_k(\\textbf{w} \\cdot \\textbf{x}_k+b)-1]&=&0 \\quad \\text{for $k=1,...,N_T$} \\quad ,\\nonumber\n\\end{eqnarray}\n(the latter being known as the ``Karush-Kuhn-Tucker complementarity condition'' \\cite{Webb}) whereas $b_o$ can be determined, once $\\textbf{w}_o$ is known, using Eqs. (\\ref{suppvec}).\nWhen the two classes are not linearly separable, a possible strategy consists in introducing a suitable (nonlinear) function $\\Phi:E\\rightarrow F$, which makes it possible to map the original pattern inputs into a \\textit{feature space} $F\\subseteq\\mathbb{R}^M$, in which a linear separation can be performed, cf. Fig. 
\\ref{nlSVM} \\cite{Webb}.\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.75\\textwidth]{nlSVM.pdf}\n\\caption{Patterns which are not linearly separable can be mapped, via a function $\\Phi$, into a \\textit{feature space} where a linear separation of the classes can be achieved.}\\label{nlSVM}\n\\end{figure}\nThus, by denoting as $\\boldsymbol\\Phi(\\textbf{x})=\\{\\Phi_j(\\textbf{x})\\}_{j=1}^M$ a set of nonlinear transformations from the original input space to the feature space, the corresponding variational problem leads now, in place of Eq. (\\ref{sol1}), to the expression:\n\\begin{equation}\n\\mathbf{w}_o=\\sum_{k=1}^{N_T} \\lambda_k d_k \\boldsymbol\\Phi(\\textbf{x}_k) \\label{sol2} \\quad .\n\\end{equation}\nIn our implementation of the SVM algorithm, we regarded the set $G$ as one of the two classes, whereas the other class, previously introduced in Sec. \\ref{sec:sec1} and denoted by $nG$, was thought of as given by the union $nG = D \\cup I$.\nWe thus relied on the analysis of the histograms of the hazelnut nuclei, detailed in Sec. \\ref{sec:sec1}. Therefore, we introduced two variables to identify each hazelnut: we set $\\textbf{x}=(x_{mean},x_{max})$, where, for each histogram relative to a hazelnut nucleus, $x_{mean}$ and $x_{max}$ denote, respectively, the \\textit{average} shade of gray and the shade of gray with the highest probability.\nTherefore, in the space spanned by the coordinates $x_{mean}$ and $x_{max}$, and parameterized by the values of $\\epsilon$ and $\\rho$, each hazelnut is represented by a single dot. The resulting distribution of dots, for different values of $\\epsilon$ and $\\rho$, is illustrated in Figs. \\ref{raw3a} and \\ref{raw3b}, which evidence a clustering of points, for both the considered values of $\\rho$, around the bisectrix of the plane. This is readily explained by considering that, when reducing $\\epsilon$, the histograms attain a more and more symmetric shape. 
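The extraction of these two features from a gray-level histogram can be sketched as follows (a minimal illustration; the function name and the assumption of 256 gray levels are ours, not part of the original processing pipeline):

```python
import numpy as np

def histogram_features(hist):
    """Return (x_mean, x_max) for a gray-level histogram of a hazelnut nucleus.

    hist: array of length 256 with the (possibly unnormalized) frequency
    of each shade of gray.
    """
    p = np.asarray(hist, dtype=float)
    p = p / p.sum()                    # normalize to a probability distribution
    levels = np.arange(p.size)
    x_mean = float(np.dot(levels, p))  # average shade of gray
    x_max = int(np.argmax(p))          # shade of gray with the highest probability
    return x_mean, x_max

# Toy histogram sharply peaked at gray level 128
hist = np.exp(-0.5 * ((np.arange(256) - 128) / 10.0) ** 2)
x_mean, x_max = histogram_features(hist)
```

For a symmetric histogram the two features nearly coincide, consistent with the clustering of the dots around the bisectrix of the plane noted above.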
\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{SVM_Raw3_20r15.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{SVM_Raw3_40r15.pdf}\\\\\n\\vspace{2mm}\n\\includegraphics[width=0.45\\textwidth]{SVM_Raw3_60r15.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{SVM_Raw3_80r15.pdf}\n\\caption{Classification of the data in the 2D space spanned by the values of the observables $x_{mean}$ (horizontal axis) and $x_{max}$ (vertical axis), for $\\epsilon=20$ (top left), $\\epsilon=40$ (top right), $\\epsilon=60$ (bottom left), and $\\epsilon=80$ (bottom right), with $\\rho=1.5$.}\\label{raw3a}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{SVM_Raw3_20r25.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{SVM_Raw3_40r25.pdf}\\\\\n\\vspace{2mm}\n\\includegraphics[width=0.45\\textwidth]{SVM_Raw3_60r25.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{SVM_Raw3_80r25.pdf}\n\\caption{Classification of the data in the 2D space spanned by the values of the observables $x_{mean}$ (horizontal axis) and $x_{max}$ (vertical axis), for $\\epsilon=20$ (top left), $\\epsilon=40$ (top right), $\\epsilon=60$ (bottom left), and $\\epsilon=80$ (bottom right), with $\\rho=2.5$.}\\label{raw3b}\n\\end{figure}\n\nFurthermore, an inspection of Figs. \\ref{raw3a} and \\ref{raw3b} reveals that the dots corresponding to the sets $D$ and $I$ are nested within the ensemble of points belonging to the set $G$: the classes $G$ and $nG$ cannot be disentangled by a linear SVM, as also confirmed by the plots in Figs. \\ref{lin1} and \\ref{lin2}. \nIn each of the two latter figures, the left plot shows the elements of the adopted (randomly selected) training set: green and red symbols identify the elements of the two classes $G$ and $nG$, while the black circles indicate the support vectors. 
The black line indicates the boundary (best separating hyperplane) detected by the SVM, which depends sensitively on the chosen training set. The right plot, instead, displays all the available data (red and blue crosses represent, respectively, the elements of the classes $G$ and $nG$), complemented by the SVM test set output (red and blue circles). \nA proper match between the colours of the circles and the crosses would indicate a successful separation of the two classes; this, however, is not obtained with our data.\nFurthermore, no remarkable improvement is obtained by attempting a classification of the data by means of a nonlinear SVM algorithm, based on radial basis functions \\cite{Webb}, as shown in Figs. \\ref{nonlin1} and \\ref{nonlin2}. \nThe results of this Section confirm, therefore, the predictions of the statistical analysis outlined in Sec. \\ref{sec:sec1}: the non-linearly separable entanglement between points belonging to different classes can thus be traced back to the lack of a suitable separation of statistical scales.\\\\\nThere is another relevant aspect of the implementation of the SVM algorithm to be pointed out. \\\\\nWe remark, in fact, that each of the plots shown in Figs. \\ref{lin1} and \\ref{lin2} pertains to a specific training set of data $\\mathcal{T}$.\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{SVM_Lineare_20r15.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{SVM_Lineare_40r15.pdf}\\\\\n\\vspace{2mm}\n\\includegraphics[width=0.45\\textwidth]{SVM_Lineare_60r15.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{SVM_Lineare_80r15.pdf}\n\\caption{Classification of the data through a linear SVM algorithm. 
Shown is the 2D space spanned by the values of the observables $x_{mean}$ (horizontal axis) and $x_{max}$ (vertical axis), for $\\epsilon=20$ (top left), $\\epsilon=40$ (top right), $\\epsilon=60$ (bottom left), and $\\epsilon=80$ (bottom right), with $\\rho=1.5$. In each left subfigure, shown are the training set of data (green and red crosses, denoting, respectively, the elements of the classes $G$ and $nG$), the support vectors (black circles) and the best separating hyperplane (black line). According to the SVM classification, the elements of the class $nG$ are expected to lie on the right of the boundary line. The right sub-figures, instead, display the 2D representation of all the available data (red and blue crosses, denoting, respectively, the elements of $G$ and those of $nG$) and the SVM output (red and blue circles).}\\label{lin1}\n\\end{figure}\n \n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{SVM_Lineare_20r25.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{SVM_Lineare_40r25.pdf}\\\\\n\\vspace{2mm}\n\\includegraphics[width=0.45\\textwidth]{SVM_Lineare_60r25.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{SVM_Lineare_80r25.pdf}\n\\caption{Classification of the data with a linear SVM algorithm, as in Fig. \\ref{lin1}, but with $\\rho=2.5$.}\\label{lin2}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{SVM_nonLineare_20r15.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{SVM_nonLineare_40r15.pdf}\\\\\n\\vspace{2mm}\n\\includegraphics[width=0.45\\textwidth]{SVM_nonLineare_60r15.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{SVM_nonLineare_80r15.pdf}\n\\caption{Classification of the data through a nonlinear SVM algorithm (based on radial basis functions) for $\\rho=1.5$ (cf. the caption of Fig. 
\\ref{lin1}).}\\label{nonlin1}\n\\end{figure}\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{SVM_nonLineare_20r25.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{SVM_nonLineare_40r25.pdf}\\\\\n\\vspace{2mm}\n\\includegraphics[width=0.45\\textwidth]{SVM_nonLineare_60r25.pdf}\n\\hspace{1mm}\n\\includegraphics[width=0.45\\textwidth]{SVM_nonLineare_80r25.pdf}\n\\caption{Classification of the data with a nonlinear SVM algorithm (based on radial basis functions) for $\\rho=2.5$ (cf. the caption of Fig. \\ref{lin1}).}\\label{nonlin2}\n\\end{figure}\n\nWe can introduce, then, the quantity $\\Psi_\\ell(\\mathcal{T}_\\ell;\\epsilon,\\rho)$, relative to the specific training set $\\mathcal{T}_\\ell$, and defined as the ratio of the number of hazelnuts belonging to the class $G$ that are mistakenly classified as belonging to the class $nG$ to the total number of hazelnuts in the database, $N_G+N_{nG}$. The function $\\Psi_\\ell$ is an indicator of the performance of the SVM algorithm, and depends sensitively on the structure of the training sample considered in the simulation. \nThus, while the behaviour of $\\Psi_\\ell$, pertaining to single training samples, yields no indication about the onset of an optimal scale $\\epsilon^*$, the average $\\langle \\Psi \\rangle$, given by\n\\begin{equation}\n\\langle \\Psi \\rangle(\\epsilon,\\rho) = \\frac{1}{N_c}\\sum_{\\ell=1}^{N_c}\\Psi_\\ell(\\mathcal{T}_\\ell;\\epsilon,\\rho) \\quad , \\nonumber\n\\end{equation}\nand computed over a sufficiently large number $N_c$ of training samples, attains a minimum precisely at $\\epsilon^*=70$, cf. Fig. \\ref{Psi}. The latter value of $\\epsilon$ corresponds, in fact, to the scale of resolution maximizing the ratio of the two statistical scales introduced in Sec. \\ref{sec:sec1}, cf. Fig. \\ref{ratio3}. The plot in Fig. 
\\ref{Psi} confirms, hence, that the onset of an optimal scale $\\epsilon^*$ can also be detected numerically by means of SVM algorithms, provided that the performance of the SVM is \\textit{averaged} over a sufficiently large number of training samples.\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.65\\textwidth]{psi.pdf}\n\\caption{Behaviour of $\\langle \\Psi \\rangle$, averaged over $N_c=500$ training samples, vs. $\\epsilon$, with $\\rho=1.5$, and error bars (in red).}\\label{Psi}\n\\end{figure}\n\n\\section{Conclusions}\n\\label{sec:conc}\n\nIn this work we performed a statistical analysis on the histograms of a set of hazelnut images, with the aim of obtaining a preliminary estimate of the performance of a machine learning algorithm based on statistical variables. We shed light, in Sec. \\ref{sec:sec1}, on the relevance of two statistical scales, which need to be widely separated to accomplish successful pattern recognition. The intrinsic lack of such scale separation in our data was also evidenced by the numerical results reported in Sec. \\ref{sec:sec2}, revealing that no exhaustive classification can be achieved through SVM algorithms.\nMoreover, the analysis outlined in Sec. \\ref{sec:sec1} also unveiled the onset of an optimal resolution $\\epsilon^*$, which is expected to optimize the pattern recognition. This observation was also corroborated by the results discussed in Sec. 
\\ref{sec:sec2}, where the same value $\\epsilon^*$, maximizing the performance of the SVM algorithm, is recovered by averaging over a sufficiently large number of training samples.\nOur results, thus, strengthen the overall perspective that a preliminary estimate of the intrinsic statistical scales of the data constitutes a decisive step in the field of pattern recognition and, moreover, paves the way for the further implementation of statistical mechanical techniques aimed at the development of a generation of more refined neural network algorithms.\n\n\\newpage\n\n{\\bf Acknowledgments}\n\n\\vskip 5pt\n\nWe would like to thank Ferrero and Soremartec for their long-standing support of our research activity. We also thank Dr. A. Boscolo and Dr. L. Placentino for providing us with the set of x-ray images used in this work.\nThis study was funded by ITACA, a project financed by the European Union, the Italian Ministry of Economy and Finance and the Piedmont Region.\n\n\\vskip 10pt\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe rotation frequencies
\nThe mechanism that generates these events is not completely understood, but they are believed to be caused by angular momentum transfer from an internal neutron superfluid\nto the rest of the neutron star \\citep{ai75}.\n\nThanks to the few long-term monitoring campaigns that keep operating, some since the 1970s \\citep[e.g.][]{hlk+04,ymh+13}, the number of detected glitches has slowly increased, thereby improving the significance of statistical studies in pulsar populations. \n\\citet{ml90}, \\citet{lsg00}, and \\citet{elsk11} showed that the glitch activity $\\dot{\\nu}_{\\rm{g}}$ (defined as the mean frequency increment per unit of time due to glitches) correlates linearly with $|\\dot\\nu|$. \nThey also found that young pulsars (using the characteristic age, $\\tau_c=-\\nu\/2\\dot{\\nu}$, as a proxy for age), which also have the highest $|\\dot\\nu|$, exhibit glitches more often than older pulsars, with rates varying from about one glitch per year to one per decade among the young pulsars. \nUsing a larger and unbiased sample, \\cite{fer+17} confirmed that the size distribution of all glitches in a large and representative sample of pulsars is multi-modal \\citep[recently also seen by][]{ka14b,apj17}, with at least two well-defined classes of glitches: large glitches in a relatively narrow range $\\Delta \\nu \\sim (10-30)\\, \\rm{\\mu Hz}$, and small glitches with a much wider distribution, from $\\sim 10\\,\\mathrm{\\mu Hz}$ down to at least $10^{-4}\\,\\mathrm{\\mu Hz}$. \nFurther, \\cite{fer+17} found that a constant ratio $\\dot\\nu_{\\rm{g}}\/|\\dot\\nu| = 0.010 \\pm 0.001$ is consistent with the behaviour of nearly all rotation-powered pulsars and magnetars.\nThe only exception are the (few) very young pulsars, which have the highest spin-down rates, such as the Crab pulsar (PSR B0531$+$21) and PSR B0540$-$69. 
\n\nBecause glitches are rare events, the number of known glitches in the vast majority of pulsars is not enough to perform robust statistical analyses on individual bases.\nThis has made people focus on the few objects that have the largest numbers of detected glitches (about 10 pulsars).\nThe statistical distributions of glitch sizes and times between consecutive glitches (waiting times), for the nine pulsars with more than five known glitches at the time, were studied by \\citet{mpw08}.\nThey found that seven out of the nine pulsars exhibited power-law-like size distributions and exponential waiting time distributions.\nThe distributions of the other two (PSRs J0537$-$6910 and B0833$-$45, the Vela pulsar) were\nbetter described by Gaussian functions, setting preferred sizes and time scales.\nThese results have been further confirmed by \\citet{fmh17} and \\citet{hmd18}, who also found that there are at least two main behaviours among the glitching pulsars.\n\nCorrelations between glitch sizes and the times to the nearest glitches, either backward or forward, are naturally expected.\nWe know that glitch activity is driven by the spin-down rate \\citep{fer+17}, which suggests that glitches are the release of some stress that builds up at a rate determined by $|\\dot{\\nu}|$.\nIf the stress is completely released at each glitch, then one should expect a correlation between size and the time since the last glitch.\nConversely, if glitches occur when a certain critical state is reached, one should expect a correlation between size and the time to the next glitch, as longer times would be needed to come back to the critical state after the largest glitches.\nMoreover, if both assumptions were indeed correct, glitches would all be of equal sizes and occur periodically. 
\nHowever, with the exception of PSR J0537$-$6910 (see below), no other pulsars have shown significant correlations between glitch sizes and the times to the nearest events \\citep[e.g.][]{wmp+00,ywml10,mhf18}.\nThis may be partly due to small-number statistics and might improve in the future, provided a substantial number of pulsars continue to be monitored for glitches.\n\nThe case of PSR J0537$-$6910, however, is very clear.\nWith more than 40 glitches detected in $\\sim 13$\\,yr, the statistical conclusions about its behaviour are much more significant than for any other pulsar.\nAs first reported by \\citet{mmw+06}, its glitch sizes exhibit a strong correlation with the waiting time to the following glitch \\citep[see also][who confirmed the correlation using twice as much data]{aeka18,fagk18}.\n\n\\citet{aeka18} interpret\nthis behaviour as an indication that glitches in this pulsar occur only once some threshold is reached.\nMoreover, this behaviour would imply that not necessarily all the stress is released in the glitches, thereby giving rise to the variety of (unpredictable) glitch sizes observed and the lack of backward time correlation.\n\nIn this work we study the sequence of glitches in the pulsars with at least ten detected events, by characterizing their distributions of glitch sizes and waiting times between successive glitches. Also, we test two hypotheses to explain\nwhy most pulsars do not show a correlation between glitch size and time to the following glitch: the effects of undetected small glitches and the possibility that two different classes of glitches are present in each pulsar.\n\n\\section{Pulsars with at least ten detected glitches} \n\\label{s1}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=9cm]{fig1.pdf}\n\\caption{Upper part of the $P-\\dot{P}$ diagram for all known pulsars. \n\tThe pulsars in our sample have at least ten detected glitches and are labeled with different symbols. 
\n\tLines of constant spin-down rate $\\dot{\\nu}$ are shown and labeled. \n\t$P$ and $\\dot{P}$ values were taken from the ATNF pulsar catalog \\protect\\footnotemark.\n}\\label{fig1}\n\\end{figure}\n\\footnotetext{\\url{http:\/\/www.atnf.csiro.au\/research\/pulsar\/psrcat}}\nTo date, there are eight pulsars with at least 10 detected glitches (Fig. \\ref{fig1}).\nPSRs J0205$+$6449, B0531$+$21 (the Crab pulsar), B1737$-$30, B1758$-$23, and J0631$+$1036 have been observed regularly by the Jodrell Bank Observatory \\citep[JBO,][]{hlk+04}.\nPSR B1338$-$62 has been observed by the Parkes telescope, and the Vela pulsar has been observed by several telescopes, including Parkes, the Jet Propulsion Laboratory, and others in Australia and South Africa \\citep[e.g.][]{downs81,mkhr87,ymh+13,buc13}. \nPSR J0537$-$6910 is the only object in our sample not detected in the radio band and was observed for 13 years by the \\textit{Rossi X-ray Timing Explorer} \\citep[RXTE,][]{aeka18,fagk18}. \nGlitch epochs and sizes were taken from the JBO online glitch catalog \\footnote{\\url{http:\/\/www.jb.man.ac.uk\/pulsar\/glitches\/gTable.html}}, where more information and the appropriate references for each measurement can be found.\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=18cm]{fig_2.pdf}\n\\caption{Logarithm (base 10) of glitch sizes $\\Delta\\nu$ (with $\\Delta\\nu$ measured in $\\mu$Hz) as a function of the glitch epoch for the pulsars in the sample. \n\tThe gray areas mark periods of time in which there were no observations for more than 3 months. \n\t$N_g$ is the number of glitches detected in the respective pulsar, until 20 April 2019 (MJD 58593). \n\tTo build a continuous sample, in the analyses of the Crab pulsar, we only use the 25 glitches after MJD 45000, when daily observations started \\citep{eas+14}. 
All panels share the same scale, in both axes.\n\t} \\label{fig2}\n\\end{figure*}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=18cm]{fig_3.pdf}\n\\caption{Distribution of $\\log \\Delta \\nu$ (with $\\Delta \\nu$ measured in $\\rm{\\mu Hz}$) for the pulsars in our sample. The orange areas indicate that glitches with $\\Delta\\nu<0.01\\,\\rm{\\mu Hz}$ could be missing due to detectability issues.} \\label{fig3}\n\\end{figure*}\n\nFigures \\ref{fig2} and \\ref{fig3} show that the Vela pulsar and PSR J0537$-$6910 produce glitches of similar sizes, particularly large glitches ($\\Delta \\nu > 10$ $\\mu$Hz), and at fairly regular time intervals. The absence of smaller glitches in these pulsars is not a selection effect, as it is quite unlikely that a considerable number of glitches with sizes up to $\\Delta \\nu \\sim 10\\,\\rm{\\mu Hz}$, far above the detection limits reported in the literature \\citep[see ][and text below]{wxe+15}, could have gone undetected. \nOn the other hand, the rest of the pulsars exhibit irregular waiting times and cover a wider range of sizes ($\\Delta \\nu \\sim 10^{-3}-10$ $\\mu$Hz). 
\n\nThe cadence of the timing observations varies considerably from pulsar to pulsar (and even with time for individual pulsars), and the sensitivity of the observations, from which the glitch measurements were performed, also differs between pulsars.\nThis means that the chances of detecting very small glitches are different for each pulsar and that the completeness of the samples towards small events might also be different \\citep{eas+14}.\nNonetheless, in this study we use a single value to represent the glitch size below which samples are likely to be incomplete due to detectability issues.\nFor an observing cadence of 30 days and a rotational noise of 0.01 rotational phases, glitch detection is severely compromised below sizes $\\Delta\\nu \\sim 10^{-2}\\, \\rm{\\mu Hz}$, especially if their frequency derivative steps are larger than $|\\Delta\\dot{\\nu}|\\sim 10^{-15} \\, \\rm{Hz\\, s^{-1}}$ \\citep[see][]{wxe+15}. We use the above numbers to characterize the glitch detection capabilities in this sample of pulsars, but we note that such cadence and rotational noise are rather pessimistic values in some cases.\n\n\\section{Distributions of glitch sizes and times between glitches} \n\\label{distris}\n \nIn the following, we model the distributions of glitch sizes ($\\Delta\\nu$, measured in $\\mu$Hz) and the distributions of times between successive glitches ($\\Delta \\tau$, measured in yr) for each pulsar in our sample. 
\nFour probability density distributions are considered: Gaussian,\n\\begin{equation}\nM(x|\\mu,\\sigma) = C_{\\rm{Gauss}}\\,\\exp\\left[\\frac{-(x-\\mu)^2}{2\\sigma^2}\\right]\\text{,}\n\\end{equation}\npower-law,\n\\begin{equation}\nM(x|\\alpha) = \\dfrac{\\alpha - 1}{x_{\\rm{min}}}\\left(\\dfrac{x}{x_{\\rm{min}}}\\right)^{-\\alpha}\\text{,}\n\\end{equation}\nlog-normal,\n\\begin{equation}\nM(x|\\mu_{\\rm{L-N}},\\sigma_{\\rm{L-N}}) = \\dfrac{C_{\\rm{L-N}}}{x}\\,\\exp\\left[\\frac{-(\\ln x-\\mu_{\\rm{L-N}})^2}{2\\sigma_{\\rm{L-N}}^2}\\right]\\text{,}\n\\end{equation}\nand exponential,\n\\begin{equation}\nM(x|\\lambda) = \\lambda\\, \\exp\\left[-\\lambda(x-x_{\\rm{min}})\\right]\\text{.}\n\\end{equation}\n\nThe set $\\{\\mu,\\sigma, \\alpha, \\mu_{\\rm{L-N}},\\sigma_{\\rm{L-N}}, \\lambda\\}$ are the fitting parameters. \nAll the distributions are normalized in the range $x_{\\rm{min}}$ to $\\infty$. Formally, $x_{\\rm{min}}$ is given by detection limits. \nHowever, it is not simple to define precise values for $\\Delta \\nu_{\\rm{min}}$ and $\\Delta \\tau_{\\rm{min}}$ for each pulsar.\nThus we use $\\Delta \\nu_{\\rm{min}} = 10^{-2}\\, \\mu$Hz for the glitch sizes (see previous section), and the smallest interval of time between glitches in each pulsar as $\\Delta \\tau_{\\rm{min}}$.\n\nFor the Gaussian and log-normal distributions the normalization constants $C_{\\rm{Gauss}}$ and $C_{\\rm{L-N}}$ were found numerically.\nWe use the maximum likelihood technique to obtain the parameters of the models that best describe the data, and use the Akaike Information Criterion \\citep[AIC,][]{aka74} to compare the different models \\citep[see also the Appendix in][]{fer+17}.\n\n\\begin{figure*}\n\\includegraphics[width=18cm]{fig_4.pdf}\n\\caption{Cumulative distribution of glitch sizes and model fits. 
The best-fitting models are indicated by thicker curves.}\\label{fig4}\n\\end{figure*}\n\n\\begin{figure*}\n\\includegraphics[width=18cm]{fig_5.pdf}\n\\caption{Cumulative distribution of waiting times between successive glitches and model fits. The best-fitting models are indicated by thicker curves.}\\label{fig5}\n\\end{figure*}\n\n\\begin{table*\n\\centering\n\\caption{Distributions of glitch sizes: results of the fits and the AIC weights for each model; using glitches with $\\Delta\\nu \\geq 0.01\\, \\rm{\\mu Hz}$.}\n\\label{Table_sizes}\n\\begin{tabular}{@{}lcccccccccc@{}}\n\\toprule \\toprule\nPSR Name & $w^{\\rm{Gauss}}$ & $w^{\\textrm{Power law}}$ & $w^{\\textrm{L-N}}$ & $w^{\\rm{Exp}}$ & $\\hat{\\mu}$ & $\\hat{\\sigma}$ & $\\hat{\\alpha}$ & $\\hat{\\mu}_{\\rm{L-N}}$ & $\\hat{\\sigma}_{\\rm{L-N}}$ & $\\hat{\\lambda}$ \\\\\n & & & & & $\\rm{\\mu Hz}$ & $\\rm{\\mu Hz}$ & & & & $(\\rm{\\mu Hz})^{-1}$\\\\\n\\midrule\nJ0205$+$6449 & $10^{-8}$ & $\\mathbf{0.66}$ & 0.33 & $10^{-5}$ & 15(5) & 20(4) & 1.27(6) & 0.7(7) & 2.5(3) & 0.07(6)\\\\\n\nB0531$+$21 & $10^{-17}$ & 0.02 & $\\mathbf{0.97}$ & $10^{-7}$ & 1.2(5) & 3(1) & 1.4(1) & -1.3(3) & 1.5(2) & 0.8(7)\\\\\n\nJ0537$-$6910 & $\\mathbf{0.96}$ & $10^{-24}$ & $10^{-8}$ & 0.03 & 15(1) & 9.9(9) & 1.19(5) & 2.2(2) & 1.3(2) & 0.063(6)\\\\\n\nJ0631$+$1036 & $10^{-12}$ & $\\mathbf{0.94}$ & 0.05 & $10^{-8}$ & 1(1) & 3(1) & 1.4(1) & -1.9(6) & 2.1(4) & 0.61(4)\\\\\n\nB0833$-$45 & $\\mathbf{0.997}$ & $10^{-13}$ & $10^{-6}$ & 0.002 & 21(2) & 9(1) & 1.2(4) & 2.7(2) & 1.2(4) & 0.05(1)\\\\\n\nB1338$-$62 & $10^{-5}$ & 0.07 & $\\mathbf{0.53}$ & 0.4 & 2.5(5) & 2.7(3) & 1.36(5) & -0.1(3) & 1.6(1) & 0.4(1)\\\\\n\nB1737$-$30 & $10^{-14}$ & $\\mathbf{0.82}$ & 0.17 & $10^{-7}$ & 0.6(2) & 1.0(2) & 1.38(6) & -2.0(3) & 1.9(1) & 1.5(8)\\\\\n\nB1758$-$23 & 0.06 & 0.004 & 0.07 & $\\mathbf{0.866}$ & 0.6(1) & 0.51(8) & 1.3(2) & -1.2(4) & 1.5(3) & 1.7(6)\\\\\n\\bottomrule\n\\end{tabular}\n\\tablefoot{$w^m$ denotes the Akaike weight of the model 
$m$. \n$\\hat \\mu$ and $\\hat \\sigma$ are the mean and the standard deviation of the Gaussian model, and $\\hat\\alpha$ is the power-law index. \n$\\hat \\mu_{\\rm{L-N}}$ and $\\hat \\sigma_{\\rm{L-N}}$ are the mean and the standard deviation of the log-normal model, respectively. $\\hat \\lambda$ is the rate parameter of the exponential distribution. \nThe values in parentheses correspond to the uncertainty in the last quoted digit and were calculated using the usual bootstrap method. We marked in bold the values of $w^m$ for the best models.}\n\\end{table*}\n\n\\begin{table*}\n\\centering\n\\caption{Distributions of waiting times between successive glitches: results of the fits and the AIC weights for each model.}\\label{Table_times}\n\\begin{tabular}{@{}lcccccccccc@{}}\n\\toprule \\toprule\nPSR Name & $w^{\\rm{Gauss}}$ & $w^{\\rm{Power law}}$ & $w^{\\rm{L-N}}$ & $w^{\\rm{Exp}}$ & $\\hat{\\mu}$ & $\\hat{\\sigma}$ & $\\hat{\\alpha}$ & $\\hat{\\mu}_{\\rm{L-N}}$ & $\\hat{\\sigma}_{\\rm{L-N}}$ & $\\hat{\\lambda}$\\\\\n& & & & & yr & yr & & & & yr$^{-1}$\\\\\n\\midrule\n\nJ0205$+$6449 & $0.001$ & $0.40$ & 0.16 & $\\mathbf{0.43}$ & 1.3(4)& 1.4(4) & 1.7(1) & -$0.2(3)$ & 1.0(1) & 0.9(5)\\\\\n\nB0531$+$21 & $10^{-4}$ & $10^{-5}$ & 0.15 & $\\mathbf{0.84}$ & 1.3(2) & 1.3(2) & 1.4(1) & -$0.2(2)$ & 1.0(1) & 0.8(2)\\\\\n\nJ0537$-$6910 & $\\mathbf{0.72}$ & $10^{-10}$ & 0.07 & 0.2 & 0.28(2) & 0.15(1) & 1.64(8) & -1.44(9) & 0.65(6) & 4.3(4)\\\\\n\nJ0631$+$1036 & $10^{-4}$ & $10^{-5}$ & 0.20 & $\\mathbf{0.79}$ & 1.4(4) & 1.7(6) & 1.3(2) & -0.3(3) & 1.2(2) & 0.7(3)\\\\\n\nB0833$-$45 & $\\mathbf{0.993}$ & $10^{-10}$ & $10^{-4}$ & 0.006 & 2.5(2) & 1.2(1) & 1.3(3) & 0.7(2) & 0.9(2) & 0.41(9)\\\\\n\nB1338$-$62 & 0.25 & $10^{-3}$ & 0.20 & $\\mathbf{0.54}$ & 0.88(9) & 0.42(4) & 1.9(2) & -0.3(1) & 0.51(5) & 1.7(3)\\\\\n\nB1737$-$30 & $10^{-5}$ & $10^{-6}$ & 0.17 & $\\mathbf{0.82}$ & 0.9(1) & 0.9(1) & 1.44(7) & -0.6(1) & 1.0(1) & 1.2(2)\\\\\n\nB1758$-$23 & 0.04 & 0.16 & 0.08 & 
$\\mathbf{0.72}$ & 2.4(4) & 1.4(2) & 2.1(2) & 0.7(1) & 0.61(8) & 0.6(2)\\\\\n\\bottomrule\n\\end{tabular}\n\\tablefoot{$w^m$ denotes the Akaike weights of the model $m$. $\\hat \\mu$ and $\\hat \\sigma$ are the mean and the standard deviation of the Gaussian model, and $\\hat\\alpha$ is the power-law index. $\\hat \\mu_{\\rm{L-N}}$ and $\\hat \\sigma_{\\rm{L-N}}$ are the mean and the standard deviation of the log-normal model, respectively. $\\hat \\lambda$ is the rate parameter of the exponential distribution. The values in parentheses correspond to the uncertainties in the last digit, and were calculated by using the bootstrap method. We marked in bold the values of $w^m$ for the best models.}\n\\end{table*}\n\nFigures \\ref{fig4}-\\ref{fig5} and Tables \\ref{Table_sizes}-\\ref{Table_times} summarize the results of fitting these distributions to each pulsar. There is no single distribution type that can simultaneously describe all the pulsars satisfactorily, for either sizes or waiting times.\nThe size distributions present a large variety (as also found in the model of \\citealt{cm19}): the log-normal distribution gives the best fit for the Crab pulsar and PSR B1338$-$62, power-law for PSRs J0631$+$1036, B1737$-$30, and J0205$+$6449, and exponential for PSR B1758$-$23.\n\n\nWe also note that PSRs J0205$+$6449 and B1758$-$23 are the pulsars with the fewest recorded glitches in the sample (both have 13 glitches detected), hence this result should be confirmed once more events are detected.\n\nIn the case of PSRs J0537$-$6910 and B0833$-$45 (Vela), the best fits for both the size and waiting time distributions are Gaussian functions.\nTheir size distributions are centered at large sizes $\\Delta \\nu \\approx 15$ and $20\\, \\rm{\\mu Hz}$, respectively, consistent with the peak of large glitches in the combined distribution for all pulsars \\citep{fer+17}. \n\nThe distributions of times between successive glitches offer more homogeneous results. 
Besides the case of PSR J0537$-$6910 and the Vela pulsar (best modelled by Gaussian functions), the waiting time distributions for all the other pulsars are best represented by exponential functions.\nThese results are in agreement with \\citet{mpw08,wwty12}, and \\citet{hmd18} for almost all the pulsars studied. \nThe only exception is PSR B1338$-$62, for which \\cite{hmd18} reported a local maximum in the distribution and classified this pulsar as a quasi-periodic glitcher.\n\nIf $\\Delta\\nu_{\\rm{min}}$ is set to the size of the smallest detected glitch in each pulsar (rather than to $10^{-2}\\, \\mu$Hz), the results of the fits are very similar, and give parameters within the uncertainties presented in Table \\ref{Table_sizes}.\n\n\n\\section{Time series correlations: Glitch size and time to the next glitch} \n\\label{s2}\n\nDifferent studies have shown that for PSR J0537$-$6910 the glitch magnitudes $\\Delta \\nu_k$ are strongly correlated with the waiting times to the following glitch $\\Delta \\uptau_{k+1}$ \\citep[][and see Fig. \\ref{fig6}]{mmw+06,aeka18,fagk18}. \n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=18cm]{fig_6.pdf}\n\\caption{Time to next glitch, $\\Delta \\uptau_{k+1}$, as a function of glitch size, $\\Delta \\nu_k$, for all the pulsars in the sample.} \n\\label{fig6}\n\\end{figure*}\n\nWe test whether this correlation is also present in the other pulsars of the sample, and show the results in Table \\ref{dnu_dt_next} and Fig. \\ref{fig6} \\citep[this is fairly consistent with][though we note that the samples of glitches are not exactly the same]{mhf18}. None of them exhibits a correlation as clear as PSR J0537$-$6910. \nHowever, for PSRs J0205$+$6449, J0631$+$1036, B1338$-$62, and B1758$-$23, the Pearson correlation coefficients are larger than $0.5$ and the $p$-values are $\\sim 10^{-3}$, or less. 
\nTherefore, at $95\\%$ confidence level ($p$-values $ < 0.05$), we can reject the null hypothesis that $\\Delta \\nu_k$ and $\\Delta \\uptau_{k+1}$ are uncorrelated in these pulsars.\nSince the Pearson coefficient can be dominated by outliers, we also compute the Spearman rank correlation coefficient, obtaining similar or even stronger correlations, except for PSR J0631$+$1036.\n\n\\begin{table}\n\\centering\n\\caption{Correlation coefficients between $\\Delta \\nu_k$ and $\\Delta \\uptau_{k+1}$.} \n\\begin{tabular}{@{}lccccc@{}}\n\\toprule \\toprule\nPSR Name & $N_{\\mathrm{g}}$ & $r_p$ & $p_p$ & $r_s$ & $p_s$ \\\\\n\\midrule\nJ0205$+$6449 & 13 & 0.88 & 0.0002 & $0.76$ & 0.004 \\\\\nB0531$+$21 & 25 & -0.10 & 0.62 & -0.12 & 0.57 \\\\\nJ0537$-$6910 & 45 & 0.95 & $10^{-22}$ & 0.95 & $10^{-23}$ \\\\\nJ0631$+$1036 & 17 & 0.93 & $10^{-7}$ & 0.20 & 0.45 \\\\\nB0833$-$45 & 20 & 0.24 & 0.31 & 0.31 & 0.21 \\\\\nB1338$-$62 & 23 & 0.59 & 0.003 & 0.70 & 0.0002\\\\\nB1737$-$30 & 36 & 0.29 & 0.09 & 0.29 & 0.08 \\\\\nB1758$-$23 & 13 & 0.76 & 0.003 & 0.80 & 0.001 \\\\\n\\bottomrule\n\\end{tabular}\n\\tablefoot{The first and second columns contain the names of the pulsars and the respective number of glitches detected, respectively. \nThe third and fourth columns correspond to the Pearson linear correlation coefficient $r_p$ and the respective $p$-value $p_p$. \nThe last two columns correspond to the Spearman correlation coefficient $r_s$ and the respective $p$-value $p_s$.\n\t}\\label{dnu_dt_next}\n\\end{table}\n\nIt is also interesting to note that not only for PSR J0537$-$6910, but for all pulsars in the sample except the Crab, both the Pearson and Spearman correlation coefficients are positive. \nThe probability of finding at least six out of seven pulsars having the same sign as our reference case, just by chance, is rather low. 
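This sign-test probability can be checked numerically; a minimal sketch using only the Python standard library (the function name is ours, not from the paper):

```python
from math import comb

def prob_at_least(k, n, p=0.5):
    """Probability of at least k successes in n independent trials
    with success probability p (binomial tail)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# At least 6 of the 7 other pulsars sharing the sign of the
# reference case (PSR J0537-6910) purely by chance:
print(prob_at_least(6, 7))  # 0.0625
```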
\nThe probability of getting exactly $k$ successes among $n$ trials, with $1\/2$ success probability in each trial, is $P(k\\,|\\,n) = {n\\choose k}(1\/2)^n$. \nThus, the probability of getting at least 6 successes in 7 trials is\n\n\\begin{equation}\nP(\\geq 6\\,|\\,7)=P(6\\,|\\,7)+P(7\\,|\\,7)=\\frac{1}{16}=0.0625\\,.\n\\end{equation}\n\nThis low probability suggests that the waiting time to the following glitch is at least partially regulated by the size of the previous glitch.\n\nIn order to explain why the correlation for all other pulsars is much less clear than\nfor PSR J0537$-$6910, we explore two hypotheses, both of which are motivated by noting that most glitches in PSR J0537$-$6910 are large:\n\n\n\\begin{itemize}\n\\item[(I)] The correlation is intrinsically present in the full population of glitches of each pulsar, but glitches below a certain size threshold are not detected, thereby increasing by random amounts the times between the detected ones and worsening the correlation.\\\\\n\n\n\\item[(II)] There are two classes of glitches: glitches above a certain threshold size that follow the correlation, and glitches below the same threshold that are uncorrelated. \n\n\\end{itemize}\n\n\n\n\\subsection{Hypothesis I: Incompleteness of the sample}\n\nIn order to test the first hypothesis, we simulate a hypothetical pulsar with 100 glitches that follow a perfect correlation between $\\Delta\\nu_{k}$ and $\\Delta \\uptau_{k+1}$.\nThe events smaller than a certain value are then removed to understand the effect of their absence in the correlation. \nThe procedure is the following:\n\n\n\\begin{enumerate}\n\n\\item\nGlitch sizes are generated from a power-law distribution given by $dN\/d \\Delta\\nu \\propto \\Delta\\nu^{-\\alpha}$, with power-law index $\\alpha>1$. 
\nWe choose a power-law distribution because it mainly produces small events, and we want to see the effect of removing a substantial fraction of them.\nSeveral different choices for $\\alpha$ were considered. \nHere we only show the results for $\\alpha = 1.2$ and 1.4, as they generate distributions that resemble some of the ones observed.\n\nThe distributions do not have an upper cutoff.\n\n\\item\nThe time to the next glitch $\\Delta \\uptau_{k+1}$ is computed in terms of the glitch size $\\Delta \\nu_k$ as:\n\\begin{equation}\n\\Delta\\uptau_{k+1}= C\\Delta\\nu_{k}\\, . \n\\label{eq_corr}\n\\end{equation}\nThe value of the proportionality constant $C$ is irrelevant in this case, since we are simulating a generic pulsar.\n\\\\\n\n\\item \nSteps (1) and (2) are repeated until a sequence of 100 glitches is reached. \nThen the 80 smallest are removed, thereby leaving a reduced sample of 20 to be analyzed, which is comparable to the number of glitches observed in each of our 8 pulsars.\nThe lower limit for the distribution is computed analytically so that, after reducing the sample of glitches, the final sample covers the typical observed range of glitch sizes ($10^{-2} - 10^2\\, \\rm{\\mu Hz}$).\\\\\n\n\\item \nFinally, we calculate the time interval between each pair of successive glitches in the reduced sample, and determine both the Spearman and Pearson correlation coefficients between $\\Delta \\nu_k$ and $\\Delta \\uptau_{k+1}$.\n\n\\end{enumerate}\n\n\n\nAfter simulating $10^4$ cases, it was found that removing all glitches smaller than a certain value has a minor effect on the correlation. \nRepresentative realizations are shown in Fig. 
\\ref{hyp1}, where the correlation between $\\Delta \\nu_k$ and $\\Delta \\uptau_{k+1}$ is plotted on a logarithmic scale to show more clearly the dispersion produced by the removal of the smallest glitches. \nWe observe that missing small glitches does not substantially worsen the correlation: more than $90\\%$ of the realizations give correlation coefficients $\\geq 0.95$ (both Pearson and Spearman).\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=8cm]{fig_h1_v1.pdf}\n\\caption{Reduced samples of simulated glitches from an assumed parent distribution $dN\/d\\Delta\\nu\\propto \\Delta\\nu^{-\\alpha}$ with a perfect correlation $\\Delta\\uptau_{k+1}=C\\Delta\\nu_k$, with $C=0.21\\, \\mathrm{yr\\, \\mu Hz^{-1}}$.\nTop: Resulting correlation between $\\Delta \\nu_k$ and $\\Delta \\uptau_{k+1}$. \nBottom: The corresponding distributions of $\\log \\Delta \\nu$ for the reduced samples of glitches. For both panels, each color (and point marker) represents a typical realization in the simulations, for different power-law exponents as shown in the legends.}\\label{hyp1}\n\\end{figure}\n\nFor $\\alpha>1.4$ the distribution becomes narrower, accumulating towards the lower limit. 
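The four-step procedure above can be sketched numerically as follows; the parameter values and variable names are illustrative choices of ours, not those used for the figures:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(1)

# Step 1: draw 100 glitch sizes from dN/dsize ~ size^-alpha via
# inverse-transform sampling (alpha and x_min are illustrative).
alpha, x_min = 1.2, 1e-2                    # x_min in micro-Hz
u = 1.0 - rng.uniform(size=100)             # uniform in (0, 1]
sizes = x_min * u ** (-1.0 / (alpha - 1.0))

# Step 2: perfect correlation between a glitch and the time to the next one.
C = 0.21                                    # yr per micro-Hz
waits = C * sizes                           # time from glitch k to glitch k+1
epochs = np.concatenate(([0.0], np.cumsum(waits)))[:-1]

# Step 3: keep only the 20 largest glitches (mimicking non-detections).
keep = np.sort(np.argsort(sizes)[-20:])     # survivor indices, in time order
s_sizes, s_epochs = sizes[keep], epochs[keep]

# Step 4: waiting times between *detected* glitches, then correlate.
s_waits = np.diff(s_epochs)                 # time to the next detected glitch
r_p, _ = pearsonr(s_sizes[:-1], s_waits)
r_s, _ = spearmanr(s_sizes[:-1], s_waits)   # typically stays high, cf. the text
```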
\nSince a large fraction of the simulated glitches have very similar sizes, after removing the 80 smallest glitches the correlation does worsen, and yields correlation coefficients between $0.4$ and $0.9$, which are similar to those exhibited by the real data.\nHowever, in these cases the distributions of glitch sizes differ strongly from those observed for the pulsars in our sample.\n\nFrom these simulations, we conclude that it is unlikely that the non-detection of all the glitches below a certain detection limit is the explanation for the low observed correlations in pulsars other than PSR J0537$-$6910.\n\n\\subsection{Hypothesis II: Two classes of intrinsically different glitches}\n\nThe second hypothesis states that pulsars exhibit two classes of glitches: larger events, which follow a linear correlation between $\\Delta \\nu_k$ and $\\Delta \\uptau_{k+1}$; and smaller events, for which these variables are uncorrelated.\nWe allow the point of separation between large and small glitches to be different for each pulsar.\n\nTo visualize whether this hypothesis works, correlation coefficients (for the same pair of variables, $\\Delta \\nu_k$ and $\\Delta \\uptau_{k+1}$) were calculated for sub-sets of glitches of the original sample. \nThe sub-sets are defined as all glitches with sizes larger than or equal to a given $\\Delta\\nu_\\textrm{min}$. \nCorrelation coefficients as a function of $\\Delta\\nu_\\textrm{min}$ are plotted in Fig. \\ref{r_df0_min} for each pulsar.\nVisual inspection of the plots immediately tells us that by removing small glitches no pulsar reaches the level of correlation observed for PSR J0537$-$6910, for either correlation test.\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=180mm]{r_df0_min.pdf}\n\\caption{Pearson (orange squares) and Spearman (blue dots) correlation coefficients for glitches with sizes larger than or equal to $\\Delta \\nu_\\textrm{min}$. Each panel represents a pulsar in our sample. 
For each pulsar, the last point in the plot was calculated with its five largest glitches. Note that some pulsars are shown in log-scale for a better visualization.} \n\\label{r_df0_min}\n\\end{figure*}\n\n\n\nIn the following we explore the curves in Fig. \\ref{r_df0_min} in some more detail.\nFor that purpose, Monte Carlo simulations of pulsars with correlated and uncorrelated glitches were performed.\nSince the underlying glitch size distributions of the pulsars in the sample are unknown, we use the measured values of a given pulsar.\nThe following is the procedure for one realization:\n\n\\begin{enumerate}\n\\item The glitches larger than a certain value $\\Delta\\nu^{\\star}$ are chosen in random order and assigned epochs according to their size.\nThe first one is set at an arbitrary epoch and the epochs of the following ones are assigned according to \n\\begin{equation}\n\\Delta\\uptau_{k+1}=\\Delta\\nu_{k}\\cdot 10^{x}\\, ,\n\\label{eq_hyp2}\n\\end{equation}\nwhere $x$ is drawn from a Gaussian distribution centred at $\\bar{x}=\\log(C)$ and with a standard deviation equal to $\\sigma_{\\bar{x}}$. \nThe latter allows us to introduce a dispersion in the correlation of the simulated glitches. \nThe distribution of $\\log(\\Delta\\uptau_{k+1}\/\\Delta\\nu_k)$ for all glitches with $\\Delta\\nu>5\\,\\mu$Hz in PSR J0537$-$6910 can be well modelled by a Gaussian distribution with standard deviation $\\sigma_{0537}=0.085$ (in logarithmic scale, if $\\Delta\\uptau_{k+1}$ is measured in days and $\\Delta\\nu_k$ is measured in $\\mu$Hz). \nIn the simulations, $\\sigma_{\\bar{x}}$ was set either to zero (i.e. $x=\\log(C)$, perfect correlation) or to multiples of $\\sigma_{0537}$. \\\\\n\n\\item The glitches smaller than $\\Delta\\nu^{\\star}$ are distributed randomly over the time span between the first and the last correlated glitches. 
\nThe resulting waiting times of all glitches, correlated and uncorrelated, are then multiplied by a factor that ensures that their sum equals the time between the first and the last observed glitches. \\\\\n\n\\item Steps 1 and 2 were repeated $10^4$ times for each considered value of $\\Delta \\nu^{\\star}$. \n\n\\end{enumerate}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=18cm]{fig_explaining_sim2_v3.pdf}\n\\caption{Correlation coefficients $r_p$ (orange) and $r_s$ (blue) versus $\\Delta\\nu_\\textrm{min}$ for simulated glitches under Hypothesis II, and for three $\\Delta\\nu^{\\star}$ cases: left, when all glitches are correlated ($\\Delta\\nu^{\\star}\\sim0$); middle, about half of them are correlated ($\\Delta\\nu^{\\star}=12.39\\,\\mu$Hz); right, none of them is correlated ($\\Delta\\nu^{\\star}=40\\,\\mu$Hz).\nShaded regions represent the values of the $70\\%$ of realizations closest to the median of all realizations.\nThe dashed lines show particular realizations.\nThese simulations used the glitch sizes of PSR J0537$-$6910 and $\\sigma_{\\bar{x}}=\\sigma_{0537}$.\nIn all cases the last points in the plots were calculated using the five largest glitches.\n}\n\\label{examples}\n\\end{figure*}\n\n\nThe plots in Fig. \\ref{examples} show the results of simulations using the glitch sizes of PSR J0537$-$6910 and $\\sigma_{\\bar{x}}=\\sigma_{0537}$ for three values of $\\Delta \\nu^{\\star}$.\nThe results are shown via curves of $r$ versus $\\Delta\\nu_\\textrm{min}$, to compare with Fig. 
\\ref{r_df0_min}.\nThe shaded areas represent the $70\\%$ of the correlation coefficients closest to the median of all realizations.\nWe visually inspected the distributions of $r_p$ and $r_s$ for all possible $\\Delta\\nu_\\textrm{min}$ values, and for many $\\Delta \\nu^{\\star}$ cases.\nIt was verified that the median is sufficiently close to the maximum of the distribution in most cases.\nHowever, this tends to fail for the largest $\\Delta\\nu_\\textrm{min}$ values, where the $r_p$ and $r_s$ distributions are rather flat.\nThis is irrelevant, however, because any conclusion pointing to a case in which only a few glitches are correlated (large $\\Delta\\nu_\\textrm{min}$) would have little statistical value in any case.\nThus we are confident that the shaded areas effectively cover most of the possible outcomes of series of glitches under the assumptions considered.\n\nWe now use the plots in Fig. \\ref{examples} to understand the curves of the correlation coefficients as functions of $\\Delta\\nu_\\textrm{min}$ in Fig. \\ref{r_df0_min}, in the framework of Hypothesis~II:\n\n\n\\begin{itemize}\n\n\\item[(a)] If all glitches were correlated, which is the case shown in the leftmost plot in Fig. \\ref{examples}, the correlation coefficients would decrease gradually as $\\Delta\\nu_\\textrm{min}$ increases. \nThis is because a progressive reduction of the sample, starting from the smallest events (i.e. 
increasing the remaining waiting times by small random amounts), will gradually degrade the correlation.\nNote that the correlation coefficients of the simulated glitches start at values just below $1.0$ for the smallest $\\Delta\\nu_\\textrm{min}$, just like the observations of PSR J0537$-$6910.\nThis is because $\\sigma_{\\bar{x}}=\\sigma_{0537}$ in those simulations.\nOnly for $\\sigma_{\\bar{x}}=0$ would the simulations start at correlation coefficients equal to $1.0$.\n\n\\item[(b)] If only glitches above a certain size $\\Delta \\nu^{\\star}$ were correlated, the correlation coefficients would improve as small glitches are eliminated, and the remaining sub-set approaches the one in which all glitches are correlated (as in the middle plot of Fig. \\ref{examples}).\nOne would expect a maximum correlation for $\\Delta\\nu_\\textrm{min}\\sim\\Delta \\nu^{\\star}$, and a gradual decrease as $\\Delta\\nu_\\textrm{min}$ increases beyond $\\Delta \\nu^{\\star}$.\n\n\\item[(c)] If there were no correlated glitches, we should expect a rather flat curve of low correlation coefficients oscillating around zero (rightmost plot in Fig. \\ref{examples}).\n\n\\end{itemize}\n\nThe behaviours just described correspond to the general trends exhibited by the shaded areas in Fig. \\ref{examples}, which evolve smoothly with $\\Delta\\nu_\\textrm{min}$. \nHowever, particular realizations show abrupt variations, of both signs, just as the observations do in Fig. \\ref{r_df0_min}.\n\nClearly, PSR J0537$-$6910 is best represented by case (a).\nIndeed, both correlation coefficients for this pulsar are maximum (and very similar) when all glitches are included and they decrease gradually as the smallest glitches are removed (Fig. \\ref{r_df0_min}).\nNonetheless, we note that $r_p$ stays above $0.9$ (and $p_p<3\\times10^{-12}$) for $\\Delta\\nu_\\textrm{min}\\leq7\\,\\mu$Hz, hence it is possible that the smallest glitches are not correlated. 
\nAnother indication for this possibility is that the six glitches below $5\\,\\mu$Hz fall to the right of the distribution of $\\log(\\Delta\\uptau_{k+1}\/\\Delta\\nu_k)$ for all glitches, and the width of the distribution is reduced considerably (from more than 2 decades to half a decade) when they are removed.\nIn other words, the straight line that best fits the ($\\Delta\\uptau_{k+1}$, $\\Delta\\nu_k$) points passes closer to the origin \\citep[a more physically motivated situation,][]{aeka18}, and the data exhibit a smaller dispersion around this line, when the smallest glitches are not included.\n\n\n\n\nThe pulsars B1338$-$62 and B1758$-$23 may in principle also correspond to case (a).\nAs mentioned at the beginning of section \\ref{s2}, they present mildly significant correlations when all their glitches are considered, and both their $r_p$ and $r_s$ curves in Fig. \\ref{r_df0_min} decrease as $\\Delta\\nu_\\textrm{min}$ increases.\nBy performing simulations with $\\Delta \\nu^{\\star}=0$, and for different values of $\\sigma_{\\bar{x}}$, we find that the correlation coefficients of PSR B1758$-$23 are within the range of $70\\%$ of the possible outcomes if $\\sigma_{\\bar{x}}$ is set to 5-6 times $\\sigma_{0537}$.\n\nFor PSR B1338$-$62 the situation is less clear because the amplitudes of the variations of both $r_p$ and $r_s$ for $\\Delta\\nu_\\textrm{min}<1\\,\\mu$Hz are rather high.\nOne possible interpretation is that all glitches are correlated and the variations are due to the correlation not being perfect (i.e. $\\sigma_{\\bar{x}}\\neq0$).\nWe find that only for $\\sigma_{\\bar{x}}\\geq10\\times\\sigma_{0537}$ can the simulations reproduce such behaviour and the observed values. 
\nAnother possibility is that $\\Delta \\nu^{\\star}\\sim0.2\\,\\mu$Hz, which could explain the local maxima of $r_p$ and $r_s$ around that value.\nThe maxima and subsequent values can indeed be reproduced with lower levels of noise, $\\sigma_{\\bar{x}}=5\\times\\sigma_{0537}$. \nBut for smaller values of $\\Delta\\nu_\\textrm{min}$ most realizations ($>70\\%$) give correlation coefficients below $0.5$, thus failing to reproduce the observed $0.6$-$0.7$ at $\\Delta\\nu_\\textrm{min}=0$.\n\nIt is clear that Hypothesis II does not apply to this pulsar directly, and that the observations are not consistent with a set of uncorrelated glitches either.\nBased on the lack of glitches with sizes equal to or less than $0.1\\,\\mu$Hz after MJD $\\sim$ 50400 (Fig. \\ref{fig2}), we speculate that the sample might be incomplete for glitches smaller than this size after this date\\footnote{This would be a more extreme case than those considered for Hypothesis I because $0.1\\,\\mu$Hz is a rather high limit.}.\n\n\n\nThe pulsars J0205$+$6449 and J0631$+$1036 also exhibit significant Pearson correlations when all their glitches are considered.\nHowever, their $r_s$ curves tend to increase with $\\Delta\\nu_\\textrm{min}$ rather than decrease.\nAs mentioned before, the Pearson test can be affected by outliers, hence the behaviour we see for $r_p$ is likely due to the very broad size and waiting-time distributions and the low numbers of events towards the high ends of the distributions, which produce outlier points for both pulsars (Fig. \\ref{fig6}).\nIt is therefore difficult to conclude anything for PSR J0631$+$1036. 
\nMoreover, the observed behaviour is very hard to reproduce by the simulations, even for high levels of noise (we tried up to $\\sigma_{\\bar{x}}=12\\times\\sigma_{0537}$).\nPerhaps its largest glitches ($\\Delta\\nu\\geq0.1\\,\\mu$Hz) are indeed correlated, but the statistics are too low to conclude anything.\n\nFor PSR J0205$+$6449, however, the Spearman coefficients $r_s$ are rather high ($>0.55$ for all $\\Delta\\nu_\\textrm{min}$) and both coefficients become similar and even higher for $\\Delta\\nu_\\textrm{min}>1\\,\\mu$Hz. \nIt is possible that glitches above this size are correlated in this pulsar.\nWe find that the observed $r_p$ and $r_s$, and their evolution with $\\Delta\\nu_\\textrm{min}$, are within the $70\\%$ of simulations with $\\Delta \\nu^{\\star}=1.3\\,\\mu$Hz and for $\\sigma_{\\bar{x}} = 2\\times\\sigma_{0537}$. \nWe note, however, that in this case the correlation coefficients observed for $\\Delta\\nu_\\textrm{min}\\leq0.1\\,\\mu$Hz are higher than the vast majority of the realizations.\nPerhaps the small glitches are also correlated and follow their own relation, though we did not simulate such a scenario.\nWe conclude that Hypothesis II does not fully explain this pulsar, although the 8 glitches above $1\\,\\mu$Hz do indeed appear to be well correlated.\n\n\n\n\nThe Vela pulsar is the only pulsar in the sample that seems well represented by case (b).\nThe highest $r_p=0.68$ has a probability $p_p=0.003$ and is obtained for $\\Delta\\nu_\\textrm{min}\\sim2\\,\\mu$Hz.\nBoth $r_p$ and $r_s$ decline monotonically for larger $\\Delta\\nu_\\textrm{min}$ values. \nThis behaviour suggests that glitches of sizes above $\\sim2\\,\\mu$Hz might indeed be correlated, but the correlation is somewhat noisy.\nThe observed correlation coefficients fall within the middle 70$\\%$ of the realizations if $\\sigma_{\\bar{x}}=2\\times\\sigma_{0537}$ and for $\\Delta\\nu^{\\star}=2$-$10\\,\\mu$Hz.\nThe case $\\Delta\\nu^{\\star}=9.35\\,\\mu$Hz is presented in Fig. 
\\ref{vela_h2}.\nWe prefer this case because simulations for $\\Delta\\nu^{\\star}=2\\,\\mu$Hz tend to fail to reproduce the low correlation coefficients ($\\leq0.4$) observed for the smallest $\\Delta\\nu_\\textrm{min}$.\n\n\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=18cm]{vela_2_hip2.pdf}\n\\caption{Observations and simulations of the Vela pulsar.\nLeft: Shaded regions indicate the values obtained by the $70\\%$ of realizations closest to the median. \nThe observations are overlaid using dashed lines.\nCentre: comparison of observations (dashed) and one particular realization.\nRight: $\\Delta\\uptau_{k+1}$ versus $\\Delta\\nu_k$ for the same realization (red triangles) and for the observations (grey dots).\nOrange represents $r_p$ values and blue represents $r_s$ values in all panels.\nThe simulations were performed using $\\sigma_{\\bar{x}}=2\\times\\sigma_{0537}$ and $\\Delta\\nu^{\\star}=9.35\\,\\mu$Hz.}\n\\label{vela_h2}\n\\end{figure*}\n\n\n\nFinally, the cases of PSRs B0531$+$21 (the Crab) and B1737$-$30 are rather inconclusive.\nThe Crab pulsar is perhaps the pulsar for which case (c) applies best. \nIts correlation coefficients fluctuate between negative and positive values, and in both cases their absolute values stay relatively low, which leads to the conclusion that there are no correlated glitches in the Crab pulsar.\nWe note that the high $r_p$ and $r_s$ values observed for $\\Delta\\nu_\\textrm{min}\\sim0.6\\,\\mu$Hz are obtained with the 5-6 largest events and that a linear fit to their $\\Delta \\nu_k - \\Delta\\uptau_{k+1}$ relation does not pass close to the origin.\n\n \nThe case of PSR B1737$-$30 is more complex.\nThe observations show two $\\Delta\\nu_\\textrm{min}$ values, $0.0015$ and $0.03\\,\\mu$Hz, after which the correlation coefficients decrease with the removal of more small glitches (Fig. \\ref{r_df0_min}). 
\nThis behaviour is hard to reproduce under Hypothesis II, unless the dispersion of the correlation is increased considerably, to $10\\times\\sigma_{0537}$ or more.\nWe conclude that Hypothesis II does not apply to this pulsar directly and that there is some extra complexity, as the data are also inconsistent with a set of purely uncorrelated glitches.\n\n\n\nSurprisingly, even though no pulsar complies perfectly with Hypothesis II, and in some cases agreement is only possible by increasing the dispersion of the correlation considerably ($\\sigma_{\\bar{x}}\\gg\\sigma_{0537}$), no pulsar in the sample is well represented by case (c) (only the Crab, to some extent).\n\nTherefore, the sizes of at least some glitches must be positively correlated with the times to the next glitch in the available datasets.\nThe question is why this correlation is much stronger in PSR J0537$-$6910 than in all other pulsars of our sample.\nCould this be an effect of its particularly high spin-down rate? Or the fact that most of its glitches are large?\nIt could be that the correlations are indeed there, as stated in Hypothesis II, but for some reason exhibit high $\\sigma_{\\bar{x}}$ values. \nMaybe the fact that the glitches in PSR J0537$-$6910 occur so frequently ensures that the relationship remains clean.\nBut reality could also be more complex: for instance, both small and large glitches could be correlated, with each class following a different law.\n\n\n\n\\section{Other correlations} \n\\label{s3}\n\nWe looked for other correlations between the glitch sizes and the times between them.\nSpecifically, we tried $\\Delta \\nu_k$ vs $\\Delta \\uptau_{k}$ (size of the glitch versus the time since the preceding glitch), and $\\Delta \\nu_k$ vs $\\Delta \\nu_{k-1}$ (size of the glitch versus the size of the previous glitch).\nNo pulsar shows a significant correlation between these quantities (Table \\ref{others_correlations}). 
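For concreteness, the two lagged pairs can be assembled from a glitch record as follows; the epochs and sizes below are invented, purely illustrative values:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Invented glitch record, for illustration only: epochs in MJD, sizes in micro-Hz.
epochs = np.array([50000.0, 50400.0, 51100.0, 51500.0, 52300.0, 52800.0])
sizes  = np.array([0.5, 2.1, 0.8, 1.9, 0.3, 1.2])

# Waiting time Delta tau_k: time from glitch k-1 to glitch k, in years.
waits = np.diff(epochs) / 365.25

# Delta nu_k vs Delta tau_k (size versus time since the preceding glitch).
r_p, p_p = pearsonr(sizes[1:], waits)

# Delta nu_k vs Delta nu_{k-1} (size versus size of the previous glitch);
# here -0.6: the alternating sizes mimic the negative lag-1 trend.
r_lag, p_lag = spearmanr(sizes[1:], sizes[:-1])
```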
\n\n\n\\begin{table*}\n\\caption{Correlation coefficients for the pairs of variables $(\\Delta \\nu_k,\\,\\Delta \\uptau_{k})$, and $(\\Delta \\nu_k,\\, \\Delta \\nu_{k-1})$.} \\label{others_correlations}\n\\small\n \\begin{subtable}{0.47\\textwidth}\n \\begin{tabular*}{\\linewidth}{@{}l \n @{\\extracolsep{\\fill}} SS\n S[table-format=2.2(2)]\n S[table-format=2.2(2)]@{}}\n \\toprule\n \\phantom{Var.} & \n \\multicolumn{4}{c}{$\\Delta \\nu_k$ vs $\\Delta \\uptau_{k}$}\\\\\n \\cmidrule{1-5}\n {PSR Name}& {$r_p$} & {$p_p$} & {$r_s$} & {$p_s$}\\\\\n \\midrule\n J0205$+$6449\\hspace{0.5cm} & 0.16 & 0.60 & 0.44 & 0.15\\\\\n B0531$+$21 & -0.02 & 0.90 & 0.40 & 0.05\\\\\n J0537$-$6910 & -0.08 & 0.60 & -0.12 & 0.41 \\\\[1ex]\n J0631$+$1036 & -0.10 & 0.68 & -0.18 & 0.49 \\\\\n B0833$-$45 & 0.55 & 0.01 & 0.27 & 0.24 \\\\\n B1338$-$62 & -0.30 & 0.16 & -0.18 & 0.41 \\\\[1ex]\n B1737$-$30 & -0.02 & 0.89 & -0.10 & 0.56 \\\\\n B1758$-$23 & -0.02 & 0.94 & -0.04 & 0.89 \\\\\n \\bottomrule\n \\end{tabular*}%\n \n \\end{subtable}%\n \\hspace*{\\fill}%\n \\begin{subtable}{0.47\\textwidth}\n \\begin{tabular*}{\\linewidth}{@{}l \n @{\\extracolsep{\\fill}} SS\n S[table-format=2.2(2)]\n S[table-format=2.2(2)]@{}}\n \\toprule\n \\phantom{Var.}\n & \\multicolumn{4}{c}{$\\Delta \\nu_k$ vs $\\Delta \\nu_{k-1}$}\\\\\n \\cmidrule{1-5}\n {PSR Name}& {$r_p$} & {$p_p$} & {$r_s$} & {$p_s$}\\\\\n \\midrule\n J0205$+$6449\\hspace{0.5cm} & -0.06 & 0.83 & 0.25 & 0.42\\\\\n B0531$+$21 & -0.10 & 0.61 & -0.15 & 0.47\\\\\n J0537$-$6910 & -0.13 & 0.38 & -0.16 & 0.29\\\\[1ex]\n J0631$+$1036 & -0.12 & 0.65 & 0.32 & 0.21 \\\\\n B0833$-$45 & -0.08 & 0.71 & -0.12 & 0.59\\\\\n B1338$-$62 & -0.33 & 0.13 & -0.13 & 0.55\\\\[1ex]\n B1737$-$30 & -0.11 & 0.50 & 0.03 & 0.85\\\\\n B1758$-$23 & -0.02 & 0.92 & -0.04 & 0.89\\\\\n \\bottomrule\n \\end{tabular*}%\n \n \\end{subtable}\n\\tablefoot{The first column contains the names of the pulsars considered in the sample. 
$r_{n}$ and $p_{n}$ correspond to a correlation coefficient and its $p$-value, respectively. The sub-index $n = p$ denotes the Pearson correlation, and $n=s$ denotes the Spearman correlation.}\n\end{table*}\n\n\nNearly all pulsars in our sample show negative correlation coefficients (both Pearson and Spearman) for $\Delta \nu_k$ vs $\Delta \uptau_{k}$.\nThe only exceptions are the Vela pulsar and PSR J0205$+$6449 (see Table \ref{others_correlations}). Our results are in general agreement with \cite{mhf18},\nwho also found a lack of correlation between $\Delta \nu_k$ and $\Delta \uptau_{k}$ for individual pulsars.\n\nFor $\Delta \nu_k$ vs $\Delta \nu_{k-1}$, in most cases the correlation coefficients are close to zero and the $p$-values are larger than $0.2$, i.e., no individual pulsar shows a significant correlation. However, the results could still be meaningful for the sample as a whole, because all the pulsars have negative correlation coefficients (except for the Spearman coefficients of PSRs J0631$+$1036 and B1737$-$30).\nThe probability of all eight Pearson correlation coefficients having the same sign just by chance, regardless of whether the sign is positive or negative, is $2\times p_{\mathrm{binom}}(8|8) = 2\times(1\/2)^8 \approx 0.008$. This could establish an interesting constraint on the glitch mechanism: smaller glitches are somewhat more likely to be followed by larger ones, and vice versa.\nHowever, this statement will have to be confirmed with more data in the future.\n\n\n\n\section{Discussion} \n\label{disc}\n\n\citet{fer+17} found that all pulsars (with the strong exception of the Crab pulsar and PSR B0540$-$69)\nare consistent with a constant ratio between the glitch activity, $\dot{\nu}_{\rm g}$, and the spin-down rate,\n$\dot\nu_{\rm{g}}\/|\dot\nu| = 0.010 \pm 0.001$, i.e., $\approx 1\%$ of their spin-down is recovered by the glitches.
This fraction has been interpreted as the fraction of the moment of inertia in a superfluid component that transfers its angular momentum to the rest of the star in the glitches \citep{lel99,aghe12}.\n\citet{fer+17} used the observed bimodal distribution of glitch sizes to distinguish between large and small glitches, with the boundary at $\Delta\nu=10\, \mathrm{\mu Hz}$, and argued that the constant ratio is determined by the large glitches, whose rate, $\dot N_\ell$, is also proportional to $|\dot\nu|$. In\nthis scenario, the\nmuch lower (sometimes null) glitch activities measured in many low-$|\dot{\nu}|$ pulsars are due to \ntheir observation time spans\nnot being long enough to include any large glitches (or any glitch at all).\nInterestingly, the pulsars in our sample (except the Crab) are quite consistent with the constant ratio (Fig. \ref{fig_discussion}), even those, like PSRs B1338$-$62, B1737$-$30, and B1758$-$23, which do not have any large glitches contributing to their activities.\n\n\begin{figure}\n\centering\n\includegraphics[width=9cm]{discussion.pdf}\n\caption{$\dot{\nu}_g\/|\dot{\nu}|$ versus $|\dot{\nu}|$ for the pulsars in our sample.\nThe dashed line and the blue region correspond to the constant ratio $\dot{\nu}_g\/|\dot{\nu}| = 0.010 \pm\n0.001$ determined by \citet{fer+17}.\nThe error bars were calculated as described in that paper.}\n\label{fig_discussion}\n\end{figure}\n\n\nOn the other hand, pulsars with higher spin-down rates also have a larger fraction of large glitches.
At the highest spin-down rates ($|\dot{\nu}|\geq 10^{-11}$\,Hz\,s$^{-1}$), the production of large glitches becomes comparable to, and sometimes higher than, the production of small glitches, again with the notable exception of the Crab and PSR B0540$-$69.\nThis trend is also followed by the pulsars in our sample: all large glitches (but one, in PSR J0631$+$1036) are concentrated in PSRs J0205$+$6449, J0537$-$6910, and the Vela pulsar, which are (together with the Crab) the ones with the largest $|\dot{\nu}|$ values (see Figs. \ref{fig1} and \ref{fig_discussion}).\n\n\nThus, it seems to be the case that both large and small glitches draw from the same angular momentum reservoir (for all but the very young, Crab-like pulsars), but have different trigger mechanisms: the large ones are produced once a critical state\nis reached, whereas the small ones occur in a more random fashion.\nFor reasons still to be understood, the glitch activity of relatively younger, high-$|\dot\nu|$, Vela-like pulsars is dominated by large glitches, whereas for smaller $|\dot \nu|$ the large glitches become less frequent, both in absolute terms and relative to the small ones \citep{wmp+00,elsk11}.\nIn this context, it is interesting to note that recent long-term braking index measurements\nindicate that Vela-like pulsars move towards the region where PSRs J0631$+$1036, B1737$-$30, and B1758$-$23 are located on the $P$--$\dot{P}$ diagram \citep[][]{els17}.\n\n\n\n\n\n\section{Summary and Conclusions} \n\label{conc}\n\nWe studied the individual glitching behaviour of the eight pulsars that today have at least ten detected glitches.\nOur main conclusions are the following:\n\n\n\begin{enumerate}\n\n\item \nWe confirm the previous result by \cite{mpw08} and \cite{hmd18} that, for Vela and PSR J0537$-$6910, the distributions of both their glitch sizes and waiting times are best fitted by Gaussians, indicating well-defined scales for both variables.
For all other pulsars studied, the waiting time distribution is best fitted by an exponential (as would be expected for mutually uncorrelated events), but they have a variety of best-fitting size distributions: a power law for PSRs J0205$+$6449, J0631$+$1036, and B1737$-$30, a log-normal for the Crab and PSR B1338$-$62, and an exponential for PSR B1758$-$23.\n\n\item \nAll pulsars in our sample, except for the Crab, have positive Spearman and Pearson correlation coefficients for the relation between the size of each glitch, $\Delta\nu_k$, and the waiting time to the following glitch, $\Delta\tau_{k+1}$. For each coefficient, the probability of this happening by chance is $1\/16=6.25\%$.\nBoth coefficients also stay positive as the small glitches are removed\n(see Fig. \ref{r_df0_min}).\n\n\n\item \nPSR J0537$-$6910 shows by far the strongest correlation between glitch size and waiting time until the following glitch ($r_p=r_s=0.95$, $p$-values $\lesssim 10^{-22}$).\nThree other pulsars, PSRs J0205$+$6449, B1338$-$62, and B1758$-$23, also have quite significant correlations ($p$-values $\leq 0.004$ for both coefficients).\n\n\item\nOur first hypothesis to explain the much weaker correlations in all other pulsars compared to PSR J0537$-$6910, namely that glitches too small to be detected are being missed, is very unlikely to be correct.
Our Monte Carlo simulations show that, for reasonable glitch size distributions, it cannot produce an effect as large as observed.\n\n\item \nOur alternative hypothesis, namely that there are two classes of glitches, large correlated ones and small uncorrelated ones, comes closer to reproducing the observed relations, notably for PSRs J0205$+$6449 and Vela.\nFor both pulsars, the resulting correlations exhibit dispersions that are twice the one observed for PSR J0537$-$6910.\nFor the other pulsars, the dispersions required to accommodate this hypothesis are much larger.\n\n\n\item\nThe correlation coefficients between the sizes of two successive glitches, $\Delta\nu_{k-1}$ and $\Delta\nu_k$, as well as between the size of a glitch, $\Delta\nu_k$, and the waiting time since the previous glitch, $\Delta\tau_k$, are generally not significant in individual pulsars, but they are negative in most cases, suggesting some (weaker) relation also among these variables.\n\n\item\nExcept for the Crab, all pulsars in our sample are consistent with the constant ratio between glitch activity and spin-down rate, $\dot\nu_\mathrm{g}\/|\dot\nu|=0.010\pm 0.001$ \citep{fer+17}. This includes cases dominated by large glitches, as well as others with only small glitches.\n\n\item\nThe previous results suggest that large and small glitches draw their angular momentum from a common reservoir, although they might be triggered by different mechanisms. Large glitches, which dominate at large $|\dot\nu|$ (except for the Crab and PSR B0540$-$69), might occur once a certain critical state\nis reached, while small glitches, dominating in older pulsars with lower $|\dot\nu|$, occur at essentially random times.\n\n\n\n\n\end{enumerate}\n\nAll the above is based on the behaviour of the pulsars with the most detected glitches.
\nEven though we have shown before that the activity of all pulsars appears to be consistent with a single trend, these pulsars could still be outliers among the general population.\nOnly many more years of monitoring will clarify the universality of these results.\n\n\n\begin{acknowledgements}\nWe thank Vanessa Graber and Simon Guichandut for valuable comments on the first draft of this article. We are also grateful to Wilfredo Palma for conversations that guided us at the beginning of this work. We also thank Ben Shaw for information regarding the detection of recent glitches and for keeping the glitch catalog up to date.\nThis work was supported in Chile by CONICYT, through the projects ALMA31140029, Basal AFB-170002, and FONDECYT\/Regular 1171421 and 1150411.\nJ.R.F. acknowledges partial support by an NSERC Discovery Grant awarded to A. Cumming at McGill University.\nC.M.E. acknowledges support by the Universidad de Santiago de Chile (USACH).\n\end{acknowledgements}\n\n\n\n\bibliographystyle{aa}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}