diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzhjcx" "b/data_all_eng_slimpj/shuffled/split2/finalzzhjcx" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzhjcx" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\\label{intro}\n\nIn regression scenarios where the data have a time series structure, there is often parameter instability with respect to time \\citep{kim08}.\nA popular strategy to account for such dynamic patterns is to employ regime switching where parameters vary in time, taking on finitely many values, controlled by an unobservable Markov chain.\nSuch models are referred to as Markov-switching or regime-switching regression models, following the seminal papers by \\citet{gol73} and \\citet{ham89}. A basic Markov-switching regression model involves a time series $\\{Y_t\\}_{t=1,\\ldots,T}$ and an associated sequence of covariates $x_1,\\ldots,x_T$ (including the possibility of $x_t=y_{t-1}$), with the relation between $x_t$ and $Y_t$ specified as\n \\begin{linenomath*}\n\\begin{equation}\\label{simplemodel}\nY_t = f^{(s_t)}(x_t) + \\sigma_{s_t} \\epsilon_t \\, ,\n\\end{equation}\n \\end{linenomath*}\nwhere typically $\\epsilon_t \\stackrel{iid}{\\sim} \\mathcal{N} (0,1)$ and $s_t$ is the state at time $t$ of an unobservable $N$-state Markov chain. In other words, the functional form of the relation between $x_t$ and $Y_t$ and the residual variance change over time according to state switches of an underlying Markov chain, i.e.\\ each state corresponds to a regime with different stochastic dynamics.\nThe Markov chain induces serial dependence, typically such that the states are persistent in the sense that regimes are active for longer periods of time, on average, than they would be if an independent mixture model was used to select among regimes. The classic example is an economic time series where the effect of an explanatory variable may differ between times of high and low economic growth \\citep{ham08}.\n\nThe simple model given in (\\ref{simplemodel}) can be (and has been) modified in various ways, for example allowing for multiple covariates or for general error distributions from the generalized linear model (GLM) framework. An example for the latter is the Markov-switching Poisson regression model discussed in \\citet{wan01}. However, in the existing literature the relationship between the target variable and the covariates is commonly specified in parametric form and usually assumed to be linear, with little investigation, if any, into the absolute or relative goodness of fit. The aim of the present work is to provide effective and accessible methods for a nonparametric estimation of the functional form of the predictor. These build on a) the strengths of the hidden Markov model (HMM) machinery \\citep{zuc09}, in particular the forward algorithm, which allows for a simple and fast evaluation of the likelihood of a Markov-switching regression model (parametric or nonparametric), and b) the general advantages of penalized B-splines, i.e.\\ P-splines \\citep{eil96}, which we employ to obtain almost arbitrarily flexible functional estimators of the relationship between target variable and covariate(s). Model fitting is done via a numerical maximum penalized likelihood estimation, using either generalized cross-validation or an information criterion approach to select smoothing parameters that control the balance between goodness-of-fit and smoothness. 
Since parametric polynomial models are included as limiting cases for very large smoothing parameters, such a procedure also makes it possible to effectively reduce the functional effects to their parametric limiting cases, so that conventional parametric Markov-switching regression models are effectively nested as special cases within our more flexible models.\n\nOur approach is by no means limited to models of the form given in (\ref{simplemodel}). In fact, the flexibility of the HMM machinery allows for the consideration of models from a much broader class, which we term {\em Markov-switching generalized additive models} (MS-GAMs). These are simply generalized additive models (GAMs) with an additional time component, where the predictor --- including additive smooth functions of covariates, parametric terms and error terms --- is subject to regime changes controlled by an underlying Markov chain, analogously to (\ref{simplemodel}). While the methods do not necessitate a restriction to additive structures, we believe these to be most relevant in practice and hence have decided to focus on these models in the present work.\nOur work is closely related to that of \citet{des13}. Those authors, however, confine their consideration to the case of only one covariate and the identity link function.\nFurthermore, we note that our approach is similar in spirit to that proposed in \citet{lan15}, where the aim is to nonparametrically estimate the densities of the state-dependent distributions of an HMM.\n\nThe paper is structured as follows. In Section \ref{MSGAMs}, we formulate general Markov-switching regression models, describe how to efficiently evaluate their likelihood, and develop the spline-based nonparametric estimation of the functional form of the predictor. The performance of the suggested approach is then investigated in three simulation experiments in Section \ref{simul}. In Section \ref{appl}, we apply the approach to Spanish energy price data to demonstrate its feasibility and potential. We conclude in Section \ref{discuss}.\n\n\section{Markov-switching generalized additive models}\n\label{MSGAMs}\n\n\subsection{Markov-switching regression models}\n\nWe begin by formulating a Markov-switching regression model with arbitrary form of the predictor, encompassing both parametric and nonparametric specifications. Let $\{Y_t\}_{t=1,\ldots,T}$ denote the target variable of interest (a time series), and let $x_{p1},\ldots,x_{pT}$ denote the associated values of the $p$-th covariate considered, where $p=1,\ldots,P$. We summarize the covariate values at time $t$ in the vector $\mathbf{x}_{\cdot t}=(x_{1t},\ldots,x_{Pt})$. Further let $s_1,\ldots,s_T$ denote the states of an underlying unobservable $N$-state Markov chain $\{S_t\}_{t=1,\ldots,T}$. Finally, we assume that conditional on $(s_t,\mathbf{x}_{\cdot t})$, $Y_t$ follows some distribution from the exponential family and is independent of all other states, covariates and observations. We write\n \begin{linenomath*}\n\begin{equation}\label{expform}\n g\bigl( \mathbbm{E}(Y_t \mid s_t,\mathbf{x}_{\cdot t}) \bigr) = \eta^{(s_t)}(\mathbf{x}_{\cdot t}),\n\end{equation}\n \end{linenomath*}\nwhere $g$ is some link function, typically the canonical link function associated with the exponential family distribution considered.
That is, the expectation of $Y_t$ is linked to the covariate vector $\mathbf{x}_{\cdot t}$ via the predictor function $\eta^{(i)}$, which maps the covariate vector to $\mathbbm{R}$ when the underlying Markov chain is in state $i$, i.e.\ $S_t=i$. Essentially, there is one regression model for each state $i$, $i=1,\ldots,N$. In the following, we use the shorthand $\mu_t^{(s_t)}=\mathbbm{E}(Y_t \mid s_t,\mathbf{x}_{\cdot t})$.\n\nTo fully specify the conditional distribution of $Y_t$, additional parameters may be required, depending on the error distribution considered. For example, if $Y_t$ is conditionally Poisson distributed, then (\ref{expform}) fully specifies the state-dependent distribution (e.g.\ with $g(\mu)=\log(\mu)$), whereas if $Y_t$ is normally distributed (in which case $g$ usually is the identity link), then the variance of the error needs to be specified, and would typically be assumed to also depend on the current state of the Markov chain. We use the notation $\phi^{(s_t)}$ to denote such additional state-dependent parameters (typically dispersion parameters), and denote the conditional density of $Y_t$, given $(s_t,\mathbf{x}_{\cdot t})$, as $p_Y(y_t,\mu_t^{(s_t)},\phi^{(s_t)})$. The simplest and probably most popular such model assumes a conditional normal distribution for $Y_t$, a linear form of the predictor and a state-dependent error variance, leading to the model\n \begin{linenomath*}\n\begin{equation}\label{linpred}\nY_t = \beta_{0}^{(s_t)} + \beta_1^{(s_t)} x_{1t} + \ldots + \beta_P^{(s_t)} x_{Pt} + \sigma_{s_t}\epsilon_t,\n\end{equation}\n \end{linenomath*}\nwhere $\epsilon_t \stackrel{iid}{\sim} \mathcal{N}(0,1)$ (cf.\ \citealp{fru06}; \citealp{kim08}).\n\nAssuming homogeneity of the Markov chain --- which can easily be relaxed if desired --- we summarize the probabilities of transitions between the different states in the $N \times N$ transition probability matrix (t.p.m.) $\boldsymbol{\Gamma}=\left( \gamma_{ij} \right)$, where $\gamma_{ij}=\Pr \bigl(S_{t+1}=j\vert S_t=i \bigr)$, $i,j=1,\ldots,N$. The initial state probabilities are summarized in the row vector $\boldsymbol{\delta}$, where $\delta_{i} = \Pr (S_1=i)$, $i=1,\ldots,N$. It is usually convenient to assume $\boldsymbol{\delta}$ to be the stationary distribution, which, if it exists, is the solution to $\boldsymbol{\delta}\boldsymbol{\Gamma}=\boldsymbol{\delta}$ subject to $\sum_{i=1}^N \delta_i=1$.\n\n\subsection{Likelihood evaluation by forward recursion}\n\nA Markov-switching regression model, with conditional density $p_Y(y_t,\mu_t^{(s_t)},\phi^{(s_t)})$ and underlying Markov chain characterized by $(\boldsymbol{\Gamma},\boldsymbol{\delta})$, can be regarded as an HMM with additional dependence structure (here in the form of covariate influence); see \citet{zuc09}. This opens up the way for exploiting the efficient and flexible HMM machinery. Most importantly, irrespective of the type of exponential family distribution considered, an extremely efficient recursion can be applied in order to evaluate the likelihood of a Markov-switching regression model, namely the so-called {forward algorithm}.
To see this, consider the vectors of forward variables, defined as the row vectors\n \\begin{linenomath*}\n\\begin{align*}\n\\boldsymbol{\\alpha}_t & = \\bigl( {\\alpha}_t (1), \\ldots , {\\alpha}_t (N) \\bigr) \\, , \\; t=1,\\ldots,T, \\\\\n\\text{where } \\; {\\alpha}_t (j) & = p (y_1, \\ldots, y_t, S_t=j \\mid \\mathbf{x}_{\\cdot 1} \\ldots \\mathbf{x}_{\\cdot t}) \\; \\text{ for } \\; j=1,\\ldots,N \\, .\n\\end{align*}\n \\end{linenomath*}\nHere $p$ is used as a generic symbol for a (joint) density. Then the following recursive scheme can be applied:\n \\begin{linenomath*}\n \\begin{align}\\label{forw}\n \\nonumber \\boldsymbol{\\alpha}_1 & = \\boldsymbol{\\delta} \\mathbf{Q}(y_1) \\, , \\\\\n \\boldsymbol{\\alpha}_{t} & = \\boldsymbol{\\alpha}_{t-1} \\boldsymbol{\\Gamma} \\mathbf{Q}(y_{t}) \\quad (t=2,\\ldots,T)\\, ,\n \\end{align}\n \\end{linenomath*}\nwhere $$\\mathbf{Q}(y_t)= \\text{diag} \\bigl( p_Y(y_t,\\mu_t^{(1)},\\phi^{(1)}), \\ldots, p_Y(y_t,\\mu_t^{(N)},\\phi^{(N)}) \\big).$$\nThe recursion (\\ref{forw}) follows immediately from $${\\alpha}_{t} (j)= \\sum_{i=1}^N {\\alpha}_{t-1}(i) \\gamma_{ij} p_Y(y_t,\\mu_t^{(j)},\\phi^{(j)}),$$ which in turn can be derived in a straightforward manner using the model's dependence structure (for example analogously to the proof of Proposition 2 in \\citealp{zuc09}).\nThe likelihood can thus be written as a matrix product:\n \\begin{linenomath*}\n\\begin{equation}\\label{lik}\n\\mathcal{L}(\\boldsymbol{\\theta}) = \\sum_{i=1}^N {\\alpha}_T(i) = \\boldsymbol{\\delta} \\mathbf{Q}(y_1) \\boldsymbol{\\Gamma} \\mathbf{Q}(y_{2}) \\ldots \\boldsymbol{\\Gamma} \\mathbf{Q}(y_{T}) \\mathbf{1} \\, ,\n\\end{equation}\n \\end{linenomath*}\nwhere $\\mathbf{1}\\in \\mathbbm{R}^N$ is a column vector of ones, and where $\\boldsymbol{\\theta}$ is a vector comprising all model parameters. The computational cost of evaluating (\\ref{lik}) is {linear} in the number of observations, $T$, such that a numerical maximization of the likelihood is feasible in most cases, even for very large $T$ and moderate numbers of states $N$.\n\n\n\\subsection{Nonparametric modelling of the predictor}\\label{nonp}\n\nNotably, the likelihood form given in (\\ref{lik}) applies for any form of the conditional density $p_Y(y_t,\\mu_t^{(s_t)},\\phi^{(s_t)})$. In particular, it can be used to estimate simple Markov-switching regression models, e.g.\\ with linear predictors, or in fact with any GLM-type structure within states. Here we are concerned with a nonparametric estimation of the functional relationship between $Y_t$ and $\\mathbf{x}_{\\cdot t}$. 
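As an aside, evaluating (\ref{lik}) requires only a few lines of code. The sketch below --- in Python, with conditionally normal responses for concreteness; the function and variable names are ours and not taken from any package --- implements the forward recursion, scaling the forward variables to prevent numerical underflow.

\begin{verbatim}
import numpy as np
from scipy.stats import norm

def ms_loglik(y, mu, sigma, Gamma, delta):
    """Log-likelihood of a Markov-switching regression model.

    y:     observations, shape (T,)
    mu:    state-dependent means mu_t^{(i)}, shape (T, N)
    sigma: state-dependent dispersions, shape (N,)
    """
    # state-dependent densities, i.e. the diagonal entries of Q(y_t)
    P = norm.pdf(y[:, None], loc=mu, scale=sigma[None, :])
    alpha = delta * P[0]                    # alpha_1 = delta Q(y_1)
    ll = 0.0
    for t in range(1, len(y)):
        c = alpha.sum()                     # scale factor against underflow
        ll += np.log(c)
        alpha = (alpha / c) @ Gamma * P[t]  # alpha_t = alpha_{t-1} Gamma Q(y_t)
    return ll + np.log(alpha.sum())
\end{verbatim}

Analogous code results for any other exponential family response, simply by exchanging the density evaluation used for the diagonal of $\mathbf{Q}(y_t)$. We now turn to the nonparametric specification of the predictor.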
To achieve this, we consider a GAM-type framework \\citep{woo06}, with the predictor comprising additive smooth state-dependent functions of the covariates:\n \\begin{linenomath*}\n\\begin{equation*}\n g( \\mu_t^{(s_t)} ) = \\eta^{(s_t)}(\\mathbf{x}_{\\cdot t}) = \\beta_0^{(s_t)} + f_1^{(s_t)}(x_{1t}) + f_2^{(s_t)}(x_{2t}) + \\ldots + f_P^{(s_t)}(x_{Pt}).\n\\end{equation*}\n \\end{linenomath*}\nWe simply have one GAM associated with each state of the Markov chain.\nTo achieve a flexible estimation of the functional form, we express each of the functions $f_p^{(i)}$, $i=1,\\ldots,N$, $p=1,\\ldots,P$, as a finite linear combination of a high number of basis functions, $B_{1}, \\ldots , B_{K}$:\n \\begin{linenomath*}\n\\begin{equation}\\label{lincom}\n\tf_p^{(i)}(x) = \\sum_{k=1}^K \\gamma_{ipk} B_k(x).\n\\end{equation}\n \\end{linenomath*}\nNote that different sets of basis functions can be applied to represent the different functions, but to keep the notation simple we here consider a common set of basis functions for all $f_p^{(i)}$. A common choice for the basis is to use B-splines, which form a numerically stable, convenient basis for the space of polynomial splines, i.e.\\ piecewise polynomials that are fused together smoothly at the interval boundaries; see \\citet{deb78} and \\citet{eil96} for more details. We use cubic B-splines, in ascending order in the basis used in (\\ref{lincom}). The number of B-splines considered, $K$, determines the flexibility of the functional form, as an increasing number of basis functions allows for an increasing curvature of the function being modeled. Instead of trying to select an optimal number of basis elements, we follow \\citet{eil96} and modify the likelihood by including a difference penalty on coefficients of adjacent B-splines. The number of basis B-splines, $K$, then simply needs to be sufficiently large in order to yield high flexibility for the functional estimates. Once this threshold is reached, a further increase in the number of basis elements no longer changes the fit to the data due to the impact of the penalty. Considering second-order differences --- which leads to an approximation of the integrated squared curvature of the function estimate \\citep{eil96} --- leads to the difference penalty $0.5 \\lambda_{ip} \\sum_{k=3}^K (\\Delta^2 \\gamma_{ipk})^2$,\nwhere $\\lambda_{ip} \\geq 0$ are smoothing parameters and where $\\Delta^2 \\gamma_{ipk} = \\gamma_{ipk}-2\\gamma_{ip,k-1}+\\gamma_{ip,k-2}$.\n\nWe then modify the (log-)likelihood of the MS-GAM --- specified by $p_Y(y_t,\\mu_t^{(s_t)},\\phi^{(s_t)})$ in combination with (\\ref{lincom}) and underlying Markov chain characterized by $(\\boldsymbol{\\Gamma},\\boldsymbol{\\delta})$ --- by including the above difference penalty, one for each of the smooth functions appearing in the state-dependent predictors:\n \\begin{linenomath*}\n\\begin{equation}\\label{penlik}\nl_{\\text{pen.}} = \\log \\bigl( \\mathcal{L} (\\boldsymbol{\\theta}) \\bigr) - \\sum_{i=1}^N \\sum_{p=1}^P \\dfrac{\\lambda_{ip}}{2} \\sum_{k=3}^K (\\Delta^2 \\gamma_{ipk})^2\\ .\n\\end{equation}\n \\end{linenomath*}\n\nThe maximum penalized likelihood estimate then reflects a compromise between goodness-of-fit and smoothness, where an increase in the smoothing parameters leads to an increased emphasis being put on smoothness. We discuss the choice of the smoothing parameters in more detail in the subsequent section. 
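To make the penalty concrete, the following sketch (again Python; the equidistant knot placement on the standardized covariate range is one convenient choice, not a prescription) constructs the B-spline design matrix for one covariate and evaluates the second-order difference penalty for one smooth function.

\begin{verbatim}
import numpy as np
from scipy.interpolate import BSpline

def bspline_design(x, K, k=3, lo=-3.0, hi=3.0):
    # K B-splines of degree k on equidistant knots covering [lo, hi]
    inner = np.linspace(lo, hi, K - k + 1)
    step = inner[1] - inner[0]
    t = np.concatenate((lo - step * np.arange(k, 0, -1), inner,
                        hi + step * np.arange(1, k + 1)))
    return np.column_stack([BSpline(t, np.eye(K)[j], k)(x)
                            for j in range(K)])

def penalty(gamma, lam):
    # 0.5 * lam * sum_k (Delta^2 gamma_k)^2 for one function f_p^{(i)}
    D2 = np.diff(np.eye(len(gamma)), n=2, axis=0)
    return 0.5 * lam * np.sum((D2 @ gamma) ** 2)
\end{verbatim}

In the penalized log-likelihood (\ref{penlik}), one such penalty term is subtracted for each state--covariate combination.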
As $\\lambda_{ip} \\rightarrow\\infty$, the corresponding penalty dominates the log-likelihood, leading to a sequence of estimated coefficients $\\gamma_{ip1}, \\ldots , \\gamma_{ipK}$ that are on a straight line. Thus, we obtain the common linear predictors, as given in (\\ref{linpred}), as a limiting case. Similarly, we can obtain parametric functions with arbitrary polynomial order $q$ as limiting cases by considering $(q+1)$-th order differences in the penalty. The common parametric regression models thus are essentially nested within the class of nonparametric models that we consider. One can of course obtain these nested special cases more directly, by simply specifying parametric rather than nonparametric forms for the predictor. On the other hand, it can clearly be advantageous not to constrain the functional form in any way {\\em a priori}, though still allowing for the possibility of obtaining constrained parametric cases as a result of a data-driven choice of the smoothing parameters. Standard GAMs and even GLMs are also nested in the considered class of models ($N=1$), but this observation is clearly less relevant, since powerful software is already available for these special cases.\n\n\n\\subsection{Inference}\n\\label{estim}\n\nFor given smoothing parameters, all model parameters --- including the parameters determining the Markov chain, any dispersion parameters, the coefficients $\\gamma_{ipk}$ used in the linear combinations of B-splines and any other parameters required to specify the predictor --- can be estimated simultaneously by numerically maximizing the penalized log-likelihood given in (\\ref{penlik}). For each function $f_p^{(i)}$, $i=1,\\ldots,N$, $p=1,\\ldots,P$, one of the coefficients needs to be fixed to render the model identifiable, such that the intercept controls the height of the predictor function. A convenient strategy to achieve this is to first standardize each sequence of covariates $x_{p1},\\ldots,x_{pT}$, $p=1,\\ldots,P$, shifting all values by the sequence's mean and dividing the shifted values by the sequence's standard deviation, and second consider an odd number of B-spline basis functions $K$ with $\\gamma_{ip,(K+1)\/2}=0$ fixed.\nThe numerical maximization is carried out subject to well-known technical issues arising in all optimization problems, including parameter constraints and local maxima of the likelihood. The latter can be either easy to deal with or a challenging problem, depending on the complexity of the model considered. \n\nUncertainty quantification, on both the estimates of parametric parts of the model and on the function estimates, can be performed based on the approximate covariance matrix available as the inverse of the observed Fisher information or using a parametric bootstrap \\citep{efr93}. The latter avoids relying on asymptotics, which is particularly problematic when the number of B-spline basis functions increases with the sample size. From the bootstrap samples, we can obtain pointwise as well as simultaneous confidence intervals for the estimated regression functions. Pointwise confidence intervals are simply given via appropriate quantiles obtained from the bootstrap replications. 
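Schematically, the parametric bootstrap for pointwise intervals can be written as below (Python; \texttt{simulate\_from}, \texttt{refit\_msgam} and \texttt{eval\_smooth} are hypothetical stand-ins for simulating a series from the fitted model, re-estimating the model, and evaluating one estimated smooth function on a grid).

\begin{verbatim}
import numpy as np

def bootstrap_pointwise_ci(theta_hat, x_grid, B=999, level=0.95):
    # percentile intervals for one smooth function f_p^{(i)}
    curves = np.empty((B, len(x_grid)))
    for b in range(B):
        y_star = simulate_from(theta_hat)        # hypothetical routine
        theta_b = refit_msgam(y_star)            # hypothetical routine
        curves[b] = eval_smooth(theta_b, x_grid) # hypothetical routine
    a = (1.0 - level) / 2.0
    return np.quantile(curves, [a, 1.0 - a], axis=0)
\end{verbatim}

The array of bootstrapped curves produced along the way is also the ingredient needed for the simultaneous bands discussed next.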
Simultaneous confidence bands are obtained by scaling the pointwise confidence intervals until they contain a pre-specified fraction of all bootstrapped curves completely \\citep{kri10}.\n\nFor the closely related class of parametric HMMs, identifiability holds under fairly weak conditions, which in practice will usually be satisfied, namely that the t.p.m.\\ of the unobserved Markov chain has full rank and that the state-specific distributions are distinct \\citep{gas13}. This result transfers to the more general class of MS-GAMs if, additionally, the state-specific GAMs are identifiable. Conditions for the latter are simply the same as in any standard GAM and in particular the nonparametric functions have to be centered around zero. Furthermore, in order to guarantee estimability of a smooth function on a given domain, it is necessary that the covariate values cover that domain sufficiently well. In practice, i.e.\\ when dealing with finite sample sizes, parameter estimation will be difficult if the level of correlation, as induced by the unobserved Markov chain, is low, and also if the state-specific GAMs are similar. The stronger the correlation in the state process, the clearer becomes the pattern and hence the easier it is for the model to allocate observations to states. Similarly, the estimation performance will be best, in terms of numerical stability, if the state-specific GAMs are clearly distinct. (See also the simulation experiments in Section \\ref{simul} below.)\n\n\n\\subsection{Choice of the smoothing parameters}\n\\label{smoothies}\n\nIn Section~\\ref{estim}, we described how to fit an MS-GAM to data for a {\\em given} smoothing parameter vector. To choose adequate smoothing parameters in a data-driven way, generalized cross-validation can be applied.\nA leave-one-out cross-validation will typically be computationally infeasible. Instead, for a given time series to be analyzed, we generate $C$ random partitions such that in each partition a high percentage of the observations, e.g.\\ 90\\%, form the calibration sample, while the remaining observations constitute the validation sample. For each of the $C$ partitions and any $\\boldsymbol{\\lambda}=(\\lambda_{11},\\ldots,\\lambda_{1P},\\ldots,\\lambda_{N1},\\ldots,\\lambda_{NP})$, the model is then calibrated by estimating the parameters using only the calibration sample (treating the data points from the validation sample as missing data, which is straightforward using the HMM forward algorithm; see \\citealp{zuc09}). Subsequently, proper scoring rules \\citep{gne07} can be used on the validation sample to assess the model for the given $\\boldsymbol{\\lambda}$ and the corresponding calibrated model. For computational convenience, we consider the log-likelihood of the validation sample, under the model fitted in the calibration stage, as the score of interest (now treating the data points from the calibration sample as missing data). From some pre-specified grid $\\boldsymbol{\\Lambda} \\subset \\mathbbm{R}_{\\geq 0}^{N\\times P}$, we then select the $\\boldsymbol{\\lambda}$ that yields the highest mean score over the $C$ cross-validation samples. 
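In code, this grid search could look roughly as follows (Python; \texttt{fit\_msgam} and \texttt{oos\_loglik} are hypothetical routines that fit the model with the validation points treated as missing and compute the out-of-sample log-likelihood score, respectively).

\begin{verbatim}
import numpy as np
from itertools import product

def select_lambda(y, X, lam_grid, N=2, C=25, frac=0.9, seed=1):
    rng = np.random.default_rng(seed)
    T = len(y)
    folds = [rng.choice(T, size=int(frac * T), replace=False)
             for _ in range(C)]
    best, best_score = None, -np.inf
    for lam in product(lam_grid, repeat=N * X.shape[1]):
        score = 0.0
        for calib in folds:
            model = fit_msgam(y, X, lam, use=calib)          # calibration
            score += oos_loglik(model, y, X, exclude=calib)  # validation
        if score / C > best_score:
            best, best_score = lam, score / C
    return best
\end{verbatim}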
The number of samples $C$ needs to be high enough to give meaningful scores (i.e.\ such that the scores give a clear pattern rather than noise only; from our experience, $C$ should not be smaller than 10), but must not be so high that the approach becomes computationally infeasible.\n\nAn alternative, less computer-intensive approach for selecting the smoothing parameters is based on the Akaike Information Criterion (AIC), calculating, for each smoothing parameter vector from the grid considered, the following AIC-type statistic:\n$$ \text{AIC}_p = -2 \log \mathcal{L} + 2 \nu. $$\nHere $\mathcal{L}$ is the unpenalized likelihood under the given model (fitted via penalized maximum likelihood), and $\nu$ denotes the effective degrees of freedom, defined as the trace of the product of the Fisher information matrix for the unpenalized likelihood and the inverse Fisher information matrix for the penalized likelihood \citep{gra92}. Using the effective degrees of freedom accounts for the effective dimensionality reduction of the parameter space resulting from the penalization. From all smoothing parameter vectors considered, the one with the smallest $\text{AIC}_p$ value is chosen.\n\n\section{Simulation experiments}\label{simul}\n\n{\em Scenario I.} We first consider a relatively simple scenario, with a Poisson-distributed target variable, a 2-state Markov chain selecting the regimes and only one covariate:\n$$Y_t \sim \text{Poisson} (\mu_t^{(s_t)}) \, ,$$\nwhere\n$$\log ( \mu_t^{(s_t)} ) = \beta_0^{(s_t)} + f^{(s_t)}(x_{t}) \, .$$\nThe functional forms chosen for $f^{(1)}$ and $f^{(2)}$ are displayed by the dashed curves in Figure \ref{fig1}; both functions go through the origin. We further set $\beta_0^{(1)}=\beta_0^{(2)}=2$ and\n$$\boldsymbol{\Gamma}=\begin{pmatrix} 0.9 & 0.1 \\ 0.1 & 0.9 \end{pmatrix} \, .$$\nAll covariate values were drawn independently from a uniform distribution on $[-3,3]$. We ran 200 simulations, in each run generating $T=300$ observations from the model described. An MS-GAM, with Poisson-distributed response and log-link, was then fitted via numerical maximum penalized likelihood estimation, as described in Section \ref{estim} above, using the optimizer \texttt{nlm} in R. We set $K=15$, hence using 15 B-spline basis functions in the representation of each functional estimate.\n\nWe implemented both generalized cross-validation and the AIC-based approach for choosing the smoothing parameter vector from a grid $\boldsymbol{\Lambda}= \Lambda_1 \times \Lambda_2$, where $\Lambda_1=\Lambda_2=\{0.125,1,8,64,512,4096\}$. We considered $C=25$ folds in the cross-validation. For both approaches, we estimated the mean integrated squared error (MISE) for the two functional estimators, as follows:\n$$ \widehat{\text{MISE}}_{f^{(j)}} = \frac{1}{200}\sum_{z=1}^{200} \left( \int_{-3}^{3} \biggl( \hat{f}_z^{(j)}(x) - {f}^{(j)}(x) \biggr)^2 dx \right) ,$$\nfor $j=1,2$, where $\hat{f}_z^{(j)}(x)$ is the functional estimate of ${f}^{(j)}(x)$ obtained in simulation run $z$. Using cross-validation, we obtained $\widehat{\text{MISE}}_{f^{(1)}}=1.808$ and $\widehat{\text{MISE}}_{f^{(2)}}=0.243$, while using the AIC-type criterion we obtained the slightly better values $\widehat{\text{MISE}}_{f^{(1)}}=1.408$ and $\widehat{\text{MISE}}_{f^{(2)}}=0.239$.
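The integrals in the MISE can be approximated by standard quadrature; a minimal sketch (Python; \texttt{f\_hats} denotes a list of fitted functions, one per simulation run, and \texttt{f\_true} the corresponding true function):

\begin{verbatim}
import numpy as np

def mise_hat(f_hats, f_true, lo=-3.0, hi=3.0, m=601):
    xg = np.linspace(lo, hi, m)
    ise = [np.trapz((fh(xg) - f_true(xg)) ** 2, xg) for fh in f_hats]
    return np.mean(ise)   # average over simulation runs
\end{verbatim}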
In the following, we report the results obtained using the AIC-based approach.\n\n\\begin{figure}[tbh]\n\\centering\n\\includegraphics[width=0.76\\textwidth]{PoissonMSGAMs_new.png}\n\\vspace{-0.5em}\n\\caption{Displayed are the true functions $f^{(1)}$ and $f^{(2)}$ used in {\\em Scenario I} (dashed lines) and their estimates obtained in 200 simulation runs (green and red lines for states 1 and 2, respectively).}\\label{fig1}\n\\end{figure}\n\nThe sample mean estimates of the transition probabilities $\\gamma_{11}$ and $\\gamma_{22}$ were obtained as $0.894$ (Monte Carlo standard deviation of estimates: $0.029$) and $0.896$ ($0.032$), respectively. The estimated functions $\\hat{f}^{(1)}$ and $\\hat{f}^{(2)}$ from all 200 simulation runs are visualized in Figure \\ref{fig1}. The functions have been shifted so that they go through the origin. All fits are fairly reasonable. The sample mean estimates of the predictor value for $x_{t}=0$ were obtained as $2.002$ ($0.094$) and $1.966$ ($0.095$) for states 1 and 2, respectively. \\\\\n\n\n\n\n\\noindent\n{\\em Scenario II.} The second simulation experiment we conducted is slightly more involved, with a normally distributed target variable, an underlying 2-state Markov chain and now two covariates:\n$$Y_t \\sim \\mathcal{N} (\\mu_t^{(s_t)},\\sigma_{s_t})\\, ,$$\nwhere\n$$\\mu_t^{(s_t)} = \\beta_0^{(s_t)} + f_1^{(s_t)}(x_{1t})+ f_2^{(s_t)}(x_{2t}) \\, .$$\nThe functional forms we chose for $f_1^{(1)}$, $f_1^{(2)}$, $f_2^{(1)}$ and $f_2^{(2)}$ are displayed in Figure \\ref{fig2}; again all functions go through the origin.\nWe further set $\\beta_0^{(1)}=1$, $\\beta_0^{(2)}=-1$, $\\sigma_{1}=3$, $\\sigma_{2}=2$ and\n$$\\boldsymbol{\\Gamma}=\\begin{pmatrix} 0.95 & 0.05 \\\\ 0.05 & 0.95 \\end{pmatrix} \\, .$$\nThe covariate values were drawn independently from a uniform distribution on $[-3,3]$. In each of 200 simulation runs, $T=1000$ observations were generated.\n\nFor the choice of the smoothing parameter vector, we considered the grid $\\boldsymbol{\\Lambda}= \\Lambda_1 \\times \\Lambda_2 \\times \\Lambda_3 \\times \\Lambda_4$, where $\\Lambda_1=\\Lambda_2=\\Lambda_3=\\Lambda_4=\\{0.25,4,64,1024,16384\\}$. The AIC-based smoothing parameter selection led to MISE estimates that overall were marginally lower than their counterparts obtained when using cross-validation ($0.555$ compared to $0.565$, averaged over all four functions being estimated), so again in the following we report the results obtained based on the AIC-type criterion. 
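For concreteness, series from this scenario can be generated as sketched below (Python; \texttt{f1} and \texttt{f2} are lists holding the two state-dependent functions for each covariate, whose true forms are those displayed in Figure \ref{fig2}).

\begin{verbatim}
import numpy as np

def simulate_scenario2(f1, f2, T=1000, seed=2):
    rng = np.random.default_rng(seed)
    Gamma = np.array([[0.95, 0.05], [0.05, 0.95]])
    beta0 = np.array([1.0, -1.0])
    sigma = np.array([3.0, 2.0])
    x = rng.uniform(-3.0, 3.0, size=(T, 2))
    s = np.empty(T, dtype=int)
    s[0] = rng.integers(2)                  # arbitrary initial state
    for t in range(1, T):
        s[t] = rng.choice(2, p=Gamma[s[t - 1]])
    mu = np.array([beta0[s[t]] + f1[s[t]](x[t, 0]) + f2[s[t]](x[t, 1])
                   for t in range(T)])
    return rng.normal(mu, sigma[s]), x, s
\end{verbatim}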
The (true) function $f_1^{(2)}$ is in fact a straight line, and, notably, the associated smoothing parameter was chosen as $16384$, hence as the maximum possible value from the grid considered, in 129 out of the 200 cases, whereas for example for the function $f_2^{(2)}$, which has a moderate curvature, the value $16384$ was not chosen even once as the smoothing parameter.\n\n\\begin{figure}[tbh]\n\\centering\n\\includegraphics[width=1\\textwidth]{normalMSGAMs_new.png}\n\\vspace{-0.5em}\n\\caption{Displayed are the true functions $f_1^{(1)}$, $f_1^{(2)}$, $f_2^{(1)}$ and $f_2^{(2)}$ used in {\\em Scenario II} (dashed lines) and their estimates obtained in 200 simulation runs (red and green lines for states 1 and 2, respectively; $f_1^{(1)}$ and $f_1^{(2)}$, which describe the state-dependent effect of the covariate $x_{1t}$ on the predictor, and corresponding estimates are displayed in the left panel; $f_2^{(1)}$ and $f_2^{(2)}$, which describe the state-dependent effect of the covariate $x_{2t}$ on the predictor, and corresponding estimates are displayed in the right panel).}\\label{fig2}\n\\end{figure}\n\nIn this experiment, the sample mean estimates of the transition probabilities $\\gamma_{11}$ and $\\gamma_{22}$ were obtained as $0.950$ (Monte Carlo standard deviation of estimates: $0.011$) and $0.948$ ($0.012$), respectively. The estimated functions $\\hat{f}_1^{(1)}$, $\\hat{f}_1^{(2)}$, $\\hat{f}_2^{(1)}$ and $\\hat{f}_2^{(2)}$ from all 200 simulation runs are displayed in Figure \\ref{fig2}. Again all have been shifted so that they go through the origin. The sample mean estimates of the predictor value for $x_{1t}=x_{2t}=0$ were $0.989$ ($0.369$) and $-0.940$ ($0.261$) for states 1 and 2, respectively. The sample mean estimates of the state-dependent error variances, $\\sigma_{1}$ and $\\sigma_{2}$, were obtained as $2.961$ ($0.107$) and $1.980$ ($0.078$), respectively. Again the results are very encouraging, with not a single simulation run leading to a complete failure in terms of capturing the overall pattern. \\\\\n\n\n\\noindent\n{\\em Scenario III.} The estimator behavior both in {\\em Scenario I} and in {\\em Scenario II} is encouraging, and demonstrates that inference in MS-GAMs is clearly practicable in these two settings, both of which may occur in similar form in real data. However, as discussed in Section \\ref{estim}, in some circumstances, parameter identification in finite samples can be difficult, especially if the level of correlation as induced by the Markov chain is low. To illustrate this, we re-ran {\\em Scenario I}, using the exact same configuration as described above except that we changed $\\boldsymbol{\\Gamma}$ to\n$$\\boldsymbol{\\Gamma}=\\begin{pmatrix} 0.6 & 0.4 \\\\ 0.4 & 0.6 \\end{pmatrix} \\, .$$\nIn other words, compared to {\\em Scenario I}, there is substantially less autocorrelation in the series that are generated. Figure \\ref{fig3} displays the estimated functions $\\hat{f}^{(1)}$ and $\\hat{f}^{(2)}$ in this slightly modified scenario. Due to the fairly low level of autocorrelation, the estimator performance is substantially worse than in {\\em Scenario I}, and in several simulation runs the model failed to capture the overall pattern, by allocating pairs $(y_t,x_t)$ with high values of the covariate $x_t$ to the wrong state of the Markov chain. 
The deterioration in the estimator performance is also reflected by higher standard errors: The sample mean estimates of the transition probabilities $\\gamma_{11}$ and $\\gamma_{22}$ were obtained as $0.590$ (Monte Carlo standard deviation of estimates: $0.082$) and $0.593$ ($0.088$), respectively, and the sample mean estimates of the predictor value for $x_{t}=0$ were obtained as $1.960$ ($0.151$) and $2.017$ ($0.145$) for states 1 and 2, respectively.\n\n\\begin{figure}[tbh]\n\\centering\n\\includegraphics[width=0.76\\textwidth]{PoissonMSGAMs_id_new.png}\n\\vspace{-0.5em}\n\\caption{Displayed are the true functions $f^{(1)}$ and $f^{(2)}$ used in {\\em Scenario III} (dashed lines) and their estimates obtained in 200 simulation runs (green and red lines for states 1 and 2, respectively).}\\label{fig3}\n\\end{figure}\n\n\n\n\\section{Illustrating example: Spanish energy prices}\\label{appl}\n\nWe analyze the data collected on the daily price of energy in Spain between 2002 and 2008. The data, 1784 observations in total, are available in the R package \\texttt{MSwM} \\citep{san14}. We consider the relationship over time between the price of energy $P_t$ and the Euro\/Dollar exchange rate $r_t$. The stochastic volatility of financial time series makes the assumption of a fixed relationship between these two variables over time questionable, and it seems natural to consider a Markov-switching regime model. It is also probable that their relationship within a regime is non-linear with unknown functional form, thereby motivating the use of flexible nonparametric predictor functions.\nHere, our aim is to illustrate the potential benefits of considering flexible MS-GAMs rather than GAMs or parametric Markov-switching models when analyzing time series regression data.\n\nTo this end, we consider four different models. As benchmark models, we considered two parametric models with state-dependent linear predictor $\\beta_0^{(s_t)} + \\beta_1^{(s_t)} r_t$, with one (LIN) and two states (MS-LIN), respectively, assuming the response variable $P_t$ to be normally distributed. Additionally, we considered two nonparametric models as introduced in Section \\ref{nonp}, with one state (hence a basic GAM) and two states (MS-GAM), respectively. In these two models, we assumed $P_t$ to be gamma-distributed, applying the log link function to meet the range restriction for the (positive) mean.\n\n\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[width=0.8\\textwidth]{fitted.png}\n \\caption{Observed energy price against Euro\/Dollar exchange rate (gray points), with estimated state-dependent mean energy prices (solid lines) for one-state (blue) and two-state (green and red) nonparametric and linear models; nonparametric models are shown together with associated approximate 95\\% pointwise confidence intervals obtained based on 999 parametric bootstrap samples (dotted lines).}\n \\label{fig:real_models}\n\\end{figure}\n\nFigure \\ref{fig:real_models} shows the fitted curves for each model. For each one-state model (GAM and LIN), the mean curve passes through a region with no data (for values of $r_t$ around $-1$). This results in response residuals with clear systematic deviation. It is failings such as this which demonstrate the need for regime-switching models. 
Models were formally compared using one-step-ahead forecast evaluation, by means of the sum of the log-likelihoods of observations $P_u$ under the models fitted to all preceding observations, $P_1,\ldots,P_{u-1}$, considering $u=501,\ldots,1784$ (such that models are fitted to a reasonable number of observations). We obtained the following log-likelihood scores for each model: $-2314$ for LIN, $-2191$ for GAM, $-2069$ for MS-LIN and $-1703$ for MS-GAM. Thus, in terms of out-of-sample forecasts, the MS-GAM performed much better than any other model considered. Both two-state models performed much better than the single-state models; however, the inflexibility of the MS-LIN model resulted in a poorer performance than that of its nonparametric counterpart, as clear non-linear features in the regression data are ignored.\n\n\begin{figure}[htb]\n \centering\n \includegraphics[width=0.8\textwidth]{timeseries.png}\n \caption{Globally decoded state sequence for the two-state (red and green) MS-LIN and MS-GAM.}\n \label{fig:real_tsseries}\n\end{figure}\n\nFor the MS-GAM, the transition probabilities were estimated to be $\gamma_{11} = 0.991$ (standard error: $0.006$) and $\gamma_{22} = 0.993$ ($0.003$). The estimated high persistence within states gives evidence that identifiability problems such as those encountered in {\em Scenario III} in the simulation experiments did not occur here. Figure \ref{fig:real_tsseries} gives the estimated regime sequence from the MS-GAM and MS-LIN models obtained using the Viterbi algorithm. Both sequences are similar, with one state relating to occasions where price is more variable and generally higher. However, the MS-LIN model does tend to predict more changes of regime than the MS-GAM, which may be a result of its inflexibility.\n\n\section{Concluding remarks}\n\label{discuss}\n\nWe have exploited the strengths of the HMM machinery and of penalized B-splines to develop a flexible new class of models, MS-GAMs, which show promise as a useful tool in time series regression analysis. A key strength of the inferential approach is ease of implementation, in particular the ease with which the code, once written for any MS-GAM, can be modified to allow for various model formulations. This makes interactive searches for an optimal model among a suite of candidate formulations practically feasible. Model selection, although not explored in detail in the current work, can be performed along the lines of \citet{cel08} using cross-validated likelihood, or can be based on AIC-type criteria such as the one we considered for smoothing parameter selection. For more complex model formulations, local maxima of the likelihood can become a challenging problem. In this regard, estimation via the EM algorithm, as suggested in \citet{des13} for a smaller class of models, could potentially be more robust (cf.\ \citealp{bul08}), but is technically more challenging, not as straightforward to generalize and hence less user-friendly \citep{mcd14}.\n\nIn the example application to energy price data, the MS-GAM clearly outperformed the other models considered, as it accommodates both the need for regime change over time and the need to capture non-linear relationships within a regime. However, even the very flexible MS-GAM has some shortcomings.
In particular, it is apparent from the plots, but also from the estimates of the transition probabilities, which indicated a very high persistence of regimes, that the regime-switching model addresses long-term dynamics, but fails to capture the short-term (day-to-day) variations within each regime. In this regard, it would be interesting to explore models that incorporate regime switching (for capturing long-term dynamics induced by persistent market states) but for example also autoregressive error terms within states (for capturing short-term fluctuations). Furthermore, the plots motivate a distributional regression approach, where not only the mean but also variance and potentially other parameters are modeled as functions of the covariates considered. In particular, it is conceptually straightforward to use the suggested type of estimation algorithm also for Markov-switching generalized additive models for location, scale and shape (GAMLSS; \citealp{rig05}).\n\nThere are various other ways to modify or extend the approach, in a relatively straightforward manner, in order to enlarge the class of models that can be considered. First, it is of course straightforward to consider semiparametric versions of the model, where some of the functional effects are modeled nonparametrically and others parametrically. Especially for complex models, with high numbers of states and\/or high numbers of covariates considered, this can improve numerical stability and decrease the computational burden associated with the smoothing parameter selection. The consideration of interaction terms in the predictor is possible via the use of tensor products of univariate basis functions. The likelihood-based approach also allows for the consideration of more involved dependence structures (e.g.\ semi-Markov state processes; \citealp{lan11}). Finally, in the case of multiple time series, random effects can be incorporated into a joint model.\n\n\footnotetext{\textit{$^{a}$~Department of Physics, Advanced Materials and Liquid Crystal Institute, Kent State University, Kent, OH 44242, USA; E-mail: jselinge@kent.edu}}\n\n\section{Introduction}\n\nIn condensed matter physics, there are many situations where spatial geometry is not compatible with optimum local interactions. In these cases, materials cannot fill up space with the ideal local structure, and must find complex ways to reconcile the incompatibility \cite{sadoc_mosseri_1999,ramirez2003geometric,han2008geometric,hirata2013geometric,selinger2022director}. A well-known example of geometric frustration is an Ising antiferromagnet on a triangular lattice. Once the first two spins on a triangle align opposite to each other, the third spin is frustrated because it cannot simultaneously minimize its interactions with the other two. This frustration leads to a highly degenerate ground state. Such geometric frustration in the spin system has many significant physical consequences, such as residual entropy in water ice and spin ice \cite{harris1997geometrical,ramirez2003geometric}. Other typical examples of geometric frustration can be found in close-packing problems in free space or confined geometries \cite{grason2015colloquium,hagan2021equilibrium}.
As a result, the concept of geometric frustration is of crucial importance in understanding many complex natural phenomena. In some cases, geometric frustration can also be utilized to engineer structures in condensed matter systems \\cite{araki2013defect}. For example, in a nematic liquid crystal, boundary conditions can be designed intentionally to introduce topological defects in the bulk.\n\nIn a recent article \\cite{selinger2022director}, our group argued that chiral liquid crystals are geometrically frustrated. Our argument is based on a reformulated version of the Oseen-Frank elastic theory \\cite{selinger2018interpretation}, which decomposes the free energy density into four fundamental elastic modes---the well-known splay, twist, and bend modes, and the less-well-known biaxial splay ($\\boldsymbol{\\Delta}$) mode. In this reformulated theory, the saddle-splay free energy is naturally included as a combination of the bulk elastic modes. The theory shows that a chiral liquid crystal has a natural tendency to form pure twist, which is double twist. However, it is impossible to fill up 3D Euclidean space with pure double twist \\cite{virga2019uniform}. One way for a chiral liquid crystal to reconcile this incompatibility is to form the cholesteric phase, which has a combination of the favorable twist and the unfavorable $\\boldsymbol{\\Delta}$ mode. Another way is to form a blue phase, which has regions of approximately pure twist separated by disclinations. Hence, both the cholesteric phase and blue phases can be regarded as frustrated structures.\n\nThis view of the cholesteric phase as a frustrated structure may be surprising to some readers. Researchers often neglect saddle-splay and the $\\boldsymbol{\\Delta}$ mode, and consider only splay, twist, and bend. In this common way of thinking, the cholesteric phase appears to be an ideal unfrustrated structure, because it has uniform nonzero twist, with zero splay and bend.\n\nThe purpose of this article is to demonstrate explicitly that the common way of thinking is incorrect, and the cholesteric phase really is frustrated. For this demonstration, we consider a chiral liquid crystal in finite geometries with free boundary conditions. As a general rule, one can identify the ideal local structure of a material by looking near a free boundary. Any material feels strong packing constraints in the interior, but it has much more freedom to achieve its ideal local structure at the boundary. Here, we show that a chiral liquid crystal has double twist, not cholesteric single twist, near a free boundary. For a small system size, double twist extends throughout the system. For a large system size, double twist exists in a boundary layer, while cholesteric single twist may form in the interior.\n\nIn Sec.~II, we review the reformulated Oseen-Frank theory and show that the optimum local deformation is double twist, not cholesteric single twist. We then consider two specific finite geometries with free boundary conditions. In Sec.~III, we model a chiral liquid crystal in a cylinder with free boundary conditions, using a combination of approximate analytic calculations, director simulations, and nematic order tensor simulations. When the cylinder radius is small, compared with the natural twist of the liquid crystal, the ground state has double twist. 
When the radius increases, geometric frustration causes a transition to a different configuration---either single twist (as in the cholesteric phase) or double twist with disclinations (as in a blue phase). In Sec.~IV, we study a chiral liquid crystal between two parallel surfaces with free boundary conditions, using director simulations. In this case, geometric frustration induces distortions close to the free surfaces, leading to surface states composed of regularly arranged double-twist regions.\n\n\\section{Optimum Local Deformation}\n\nWe consider a chiral nematic liquid crystal described by the director field $\\hat{\\boldsymbol{n}}(\\boldsymbol{r})$. As discussed in previous papers \\cite{selinger2018interpretation,selinger2022director}, the reformulated Oseen-Frank free energy density takes the form\n\\begin{eqnarray}\\label{Selinger_free_energy}\n \\nonumber f &=& \\frac{1}{2}\\left(K_{11}-K_{24}\\right) S^2 + \\frac{1}{2}\\left(K_{22}-K_{24}\\right) T^2 + \\frac{1}{2} K_{33} |\\boldsymbol{B}|^2 \\\\\n & & + K_{24} \\mathrm{Tr}\\left( \\boldsymbol{\\Delta}^2 \\right) - K_{22} q_0 T .\n\\end{eqnarray}\nIn this expression, the first three terms represent the free energy cost of splay $S=\\boldsymbol{\\nabla} \\cdot \\hat{\\boldsymbol{n}}$, twist $T=\\hat{\\boldsymbol{n}} \\cdot \\left(\\boldsymbol{\\nabla} \\times \\hat{\\boldsymbol{n}}\\right)$, and bend $\\boldsymbol{B}=\\hat{\\boldsymbol{n}} \\times \\left(\\boldsymbol{\\nabla} \\times \\hat{\\boldsymbol{n}}\\right)$ deformations. The fourth term represents the free energy cost of the biaxial splay $\\boldsymbol{\\Delta}$, which is the traceless, symmetric tensor \n\\begin{eqnarray}\\label{biaxial_splay_mode}\n \\nonumber \\Delta_{ij} &=& \\frac{1}{2}(\\partial_i n_j + \\partial_j n_i - n_i n_k \\partial_k n_j - n_j n_k \\partial_k n_i \\\\\n & & - \\delta_{ij}\\partial_k n_k + n_i n_j \\partial_k n_k) .\n\\end{eqnarray}\nThe last term is an extra contribution to the free energy, linear in the twist, which is permitted by symmetry in a chiral liquid crystal. Here, we write its coefficient as $K_{22}q_0$, where $q_0$ is an inverse length. We will see below that $q_0$ is related to the natural pitch of the chiral liquid crystal.\n\nIn this section, we want to minimize the \\emph{local} free energy density at a single point in the liquid crystal. We choose coordinates so that this point is the origin, and the director at that point is $\\hat{\\boldsymbol{n}}=\\hat{\\boldsymbol{z}}$. One possible configuration of the director field is a cholesteric phase with single twist. If we choose coordinates so that the helical axis is along $\\hat{\\boldsymbol{x}}$, the director field can be written as\n\\begin{eqnarray}\\label{cholesteric_helical_structure}\n \\hat{\\boldsymbol{n}}_{ST} = \\hat{\\boldsymbol{y}}\\sin q x + \\hat{\\boldsymbol{z}}\\cos q x,\n\\end{eqnarray}\nwith the subscript $ST$ for single twist. This director field has the deformation modes\n\\begin{eqnarray}\\label{DeltaMode_cholesteric_helical_structure}\n && S=0, \\quad T=q, \\quad \\boldsymbol{B}=0,\\\\\n \\nonumber && \\boldsymbol{\\Delta} = \n \\begin{pmatrix}\n 0 & \\frac{1}{2}q\\cos{q x} & -\\frac{1}{2}q\\sin{q x} \\\\\n \\frac{1}{2}q\\cos{q x} & 0 & 0 \\\\\n -\\frac{1}{2}q\\sin{q x} & 0 & 0\n \\end{pmatrix}.\n\\end{eqnarray}\nBy inserting these deformation modes into Eq.~(\\ref{Selinger_free_energy}), we find the free energy density $f_{ST}=\\frac{1}{2}K_{22}q^2-K_{22} q_0 q$. 
Minimizing that expression over the parameter $q$ then gives $q=q_0$ and\n\\begin{eqnarray}\\label{fST}\n f_{ST}=-\\frac{K_{22}q_0^2}{2}.\n\\end{eqnarray}\n\nAs an alternative, the director field could form a configuration with double twist. In cylindrical coordinates, that configuration can be written as\n\\begin{eqnarray}\\label{double_twist_structure}\n \\hat{\\boldsymbol{n}}_{DT} = \\hat{\\boldsymbol{\\phi}}\\sin\\theta(\\rho) + \\hat{\\boldsymbol{z}}\\cos\\theta(\\rho),\n\\end{eqnarray}\nwith the subscript $DT$ for double twist. The function $\\theta(\\rho)$ describes how the director depends on the radial coordinate away from the $z$-axis. At the origin, the director is aligned with the $z$-axis, and hence $\\theta(0) = 0$. Near the origin, $\\theta(\\rho)$ can be approximated by the lowest-order term in a power series, $\\theta(\\rho) \\approx q \\rho$. With this assumption, the four elastic modes at the origin are $T=2q$, $S=0$, $\\boldsymbol{B}=0$, and $\\boldsymbol{\\Delta}=0$. Hence, the free energy density at the origin is $f_{DT}=2(K_{22}-K_{24}) q^2 - 2 K_{22} q_0 q$. Minimizing this free energy over the parameter $q$ gives $q=K_{22} q_0\/[2(K_{22}-K_{24})]$ and \n\\begin{eqnarray}\\label{fDT}\n f_{DT}=-\\frac{K_{22}^2 q_0^2}{2(K_{22}-K_{24})}.\n\\end{eqnarray}\n\nNow we can compare the local free energy densities $f_{ST}$ and $f_{DT}$. From Eqs.~(\\ref{fST}) and~(\\ref{fDT}), we can see that the double-twist configuration has a lower free energy density when ${K_{24}>0}$. By comparison, the single-twist configuration has a lower free energy density when ${K_{24}<0}$, and the configurations are degenerate when ${K_{24}=0}$.\n\nIn most ordinary liquid crystals, all four quadratic terms in the free energy density of Eq.~(\\ref{Selinger_free_energy}) are positive, and hence the elastic coefficients satisfy the conditions $(K_{11}-K_{24})>0$, $(K_{22}-K_{24})>0$, $K_{33}>0$, and $K_{24}>0$. These conditions are called the Ericksen inequalities \\cite{Ericksen1966,selinger2018interpretation}. Some recent studies have shown that the inequality $(K_{22}-K_{24})>0$ can be violated in lyotropic chromonic liquid crystals \\cite{davidson2015chiral,nayani2015spontaneous,long2022violation}. However, to our knowledge, there are no reports of liquid crystals violating the Ericksen inequality $K_{24}>0$. Indeed, any violation of that inequality would lead to a peculiar liquid crystal with a spontaneous $\\boldsymbol{\\Delta}$ deformation. We must conclude that $K_{24}>0$ is the general case, and thus the local free energy density is lower for double twist than for single twist.\n\nAlthough double twist is preferred over single twist locally, at a single point, we do not yet know what will happen over longer length scales. Mathematical studies \\cite{virga2019uniform,pollard2021intrinsic,da2021moving} have proved that pure double twist cannot fill up 3D Euclidean space; i.e.\\ it is not compatible with 3D Euclidean geometry. Instead, over longer length scales, double twist must be combined with the other deformation modes. To find the effects of these geometric constraints, we must consider a finite system with a specific size and shape, and we must minimize the integrated free energy rather than the free energy density.\n\nIn the following sections, we investigate two finite systems---a long cylinder and a large slab. 
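Before turning to those geometries, the local comparison of Eqs.~(\ref{fST}) and (\ref{fDT}) is easily checked numerically; the sketch below (Python; the elastic constants shown are arbitrary illustrative values) evaluates both optimal local free-energy densities.

\begin{verbatim}
def local_energies(K22, K24, q0):
    # optimal local free-energy densities of single and double twist
    f_st = -0.5 * K22 * q0 ** 2                     # f_ST = -K22 q0^2 / 2
    f_dt = -0.5 * K22 ** 2 * q0 ** 2 / (K22 - K24)  # f_DT, see above
    return f_st, f_dt

f_st, f_dt = local_energies(K22=1.0, K24=0.5, q0=1.0)
print(f_dt < f_st)   # True whenever 0 < K24 < K22
\end{verbatim}

For any $0<K_{24}<K_{22}$ the double-twist value is the lower of the two, in line with the argument above.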
In order to eliminate the possibility of introducing deformations from anchoring conditions \\cite{oswald2005nematic,araki2013defect}, we only consider free boundary conditions.\n\n\\section{Chiral Liquid Crystal in a Cylinder}\n\nIn this section, we investigate a chiral liquid crystal confined in a long cylinder of radius $R$ with its long axis along $\\hat{\\boldsymbol{z}}$. The cylinder has free boundary conditions, so that the director on the boundary does not have any preferred orientation. To simplify our model, we only consider the director field as a function of $x$ and $y$, assuming that it is independent of $z$.\n\n\\subsection{Approximate Analytic Calculation}\n\nWe begin with an approximate analytic calculation to compare the total free energies of single-twist and double-twist configurations inside the finite cylinder.\n\nFor single twist, we assume the director field of Eq.~(\\ref{cholesteric_helical_structure}). This director field fills up space, with the free energy density given by Eq.~(\\ref{fST}), independent of position. Hence, the total free energy (per length in the $z$ direction) is just\n\\begin{eqnarray}\\label{Ftotal_single_twist}\n F_{\\mathrm{ST}} = -\\frac{K_{22}q_0^2}{2} \\pi R^2 = -\\frac{\\pi K_{22}\\bar{q}_0^2}{2}.\n\\end{eqnarray}\nHere, $\\bar{q}_0=q_0 R$ is a dimensionless parameter representing the natural twist $q_0$ of the liquid crystal, scaled by the cylinder radius $R$.\n\nFor double twist, we assume the director field of Eq.~(\\ref{double_twist_structure}), in cylindrical coordinates, with an unknown function $\\theta(\\rho)$. The total free energy (per length in the $z$ direction) then becomes the integral\n\\begin{align}\n F_{\\mathrm{DT}} &= \\int_0^R 2\\pi\\rho d\\rho\\Biggl[\n \\frac{K_{22}}{2}\\left(\\theta'+\\frac{\\sin2\\theta}{2\\rho}\\right)^2\n +\\frac{K_{33}\\sin^4\\theta}{2\\rho^2}\\nonumber\\\\\n &\\quad-\\frac{K_{24}\\theta'\\sin2\\theta}{\\rho}\n -K_{22}q_0\\left(\\theta'+\\frac{\\sin2\\theta}{2\\rho}\\right)\\Biggr].\n \\label{fDTintegral}\n\\end{align}\nTo minimize the free energy, we derive the Euler-Lagrange equation\n\\begin{equation}\n \\theta''+\\frac{\\theta'}{\\rho}-\\frac{\\sin4\\theta}{4\\rho^2}\n -\\frac{2K_{33}\\cos\\theta\\sin^3\\theta}{K_{22}\\rho^2}-\\frac{2q_0\\sin^2\\theta}{\\rho}=0.\n\\end{equation}\nAt $\\rho=0$, the boundary condition is $\\hat{\\boldsymbol{n}}=\\hat{\\boldsymbol{z}}$, so that\n\\begin{equation}\n \\theta(0)=0.\n\\end{equation}\nAt $\\rho=R$, we have a free boundary, which implies that $\\partial f\/\\partial\\theta'=0$, and hence\n\\begin{equation}\n \\theta'(R)+\\frac{\\sin2\\theta(R)}{2R}-\\frac{K_{24}\\sin2\\theta(R)}{K_{22}R}-q_0=0.\n\\end{equation}\nWe solve this system of equations as a power series in the natural twist $q_0$, which gives\n\\begin{align}\n\\theta(\\rho)&=\\frac{q_0\\rho K_{22}}{2(K_{22}-K_{24})}\n+\\frac{q_0^3\\rho^3 K_{22}^2 (2K_{22}-6K_{24}+3K_{33})}{96(K_{22}-K_{24})^3}\\nonumber\\\\\n&+\\frac{q_0^3 R^2 \\rho K_{22}^2 (2K_{22}K_{24}-2K_{24}^2-2K_{22}K_{33}+K_{24}K_{33})}{32(K_{22}-K_{24})^4}\\nonumber\\\\\n&+O(q_0^5).\n\\label{theta_of_rhobar_solution}\n\\end{align}\nWe can see that $\\theta(\\rho)$ is an odd function of $q_0$, because switching the sign of the natural twist reverses the handedness of the resulting configuration. 
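For numerical work it is convenient to evaluate this expansion directly; a minimal sketch (Python, our own helper function, retaining only the terms displayed in Eq.~(\ref{theta_of_rhobar_solution})):

\begin{verbatim}
def theta_series(rho, R, q0, K22, K24, K33):
    # double-twist tilt angle theta(rho), through third order in q0
    d = K22 - K24
    t1 = q0 * rho * K22 / (2.0 * d)
    t3a = (q0 ** 3 * rho ** 3 * K22 ** 2
           * (2.0 * K22 - 6.0 * K24 + 3.0 * K33) / (96.0 * d ** 3))
    t3b = (q0 ** 3 * R ** 2 * rho * K22 ** 2
           * (2.0 * K22 * K24 - 2.0 * K24 ** 2
              - 2.0 * K22 * K33 + K24 * K33) / (32.0 * d ** 4))
    return t1 + t3a + t3b
\end{verbatim}

This is simply the series solution above, evaluated term by term.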
This solution implies a director field with twist of order $q_0$, bend of order $q_0^2$, $\boldsymbol{\Delta}$ mode of order $q_0^3$, and zero splay.\n\nAs an aside, the factors of $(K_{22}-K_{24})$ in the denominators of Eq.~(\ref{theta_of_rhobar_solution}) show that the series expansion breaks down if $K_{24}\to K_{22}$. In this limit, the Ericksen inequality $K_{22}-K_{24}>0$ is violated, and the liquid crystal is at a critical point for the formation of spontaneous double twist, as discussed in Ref.~\cite{long2022violation}. At the critical point, it has a divergent susceptibility to $q_0$, which might be regarded as an applied chiral field. In that limit, we find $\theta(\rho)\propto (q_0)^{1\/3}$. We will not consider that special case further in this article.\n\nTo find the total free energy of the double-twist configuration (per length in the $z$ direction), we put the series expansion for $\theta(\rho)$ back into Eq.~(\ref{fDTintegral}). This calculation gives\n\begin{equation}\label{Ftotal_double_twist}\n F_{\mathrm{DT}} = -\frac{\pi K_{22}^2\bar{q}_0^2}{2(K_{22}-K_{24})}\n + \frac{\pi K_{22}^4 K_{33}\bar{q}_0^4}{64(K_{22}-K_{24})^4}+O(\bar{q}_0^6),\n\end{equation}\nwhere again $\bar{q}_0=q_0 R$. Now we can compare the total free energies of the single-twist configuration in Eq.~(\ref{Ftotal_single_twist}) and the double-twist configuration in Eq.~(\ref{Ftotal_double_twist}). When the parameter $\bar{q}_0$ is small, double twist has a lower free energy than single twist, as expected from the previous section. However, when $\bar{q}_0$ increases, the positive quartic term raises the free energy for double twist. If we neglect higher-order terms in the power series, we can estimate that the free energies become equal at\n\begin{eqnarray}\label{transition_point}\n \bar{q}_0 = 2^{5\/2}\left(\frac{K_{22}-K_{24}}{K_{22}}\right)^{3\/2}\left(\frac{K_{24}}{K_{33}}\right)^{1\/2}.\n\end{eqnarray}\nBeyond this transition point, single twist has a lower free energy than double twist. As a result, our free energy comparison predicts a transition between double twist and single twist at a particular $\bar{q}_0$, which can be induced by either changing the natural twist $q_0$ or changing the cylinder radius $R$.\n\nIn the case of equal Frank elastic constants, $K_{11}=K_{22}=K_{33}=2K_{24}=K$, our prediction for the transition point is just $\bar{q}_0=\sqrt{2}$. If $K_{24}$ decreases toward 0, the double-twist regime becomes smaller and the single-twist regime becomes larger, so that it becomes easier for the liquid crystal to form a cholesteric phase with single twist. This trend is reasonable because $K_{24}$ gives the energy cost for the unfavorable $\boldsymbol{\Delta}$ deformation in the cholesteric phase. By comparison, if $K_{33}$ decreases toward 0, the double-twist regime becomes larger and the single-twist regime becomes smaller. That trend is reasonable because $K_{33}$ gives the energy cost for the unfavorable bend deformation, which is present in the double-twist structure away from the central axis at $\rho=0$.\n\n\subsection{Director Field Simulations}\n\nWe would like to go beyond the comparison of single-twist and double-twist configurations, in order to determine whether the liquid crystal can cross over between these limits by forming intermediate configurations.
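As a quick numerical check first, the truncated free energies (\ref{Ftotal_single_twist}) and (\ref{Ftotal_double_twist}) can be compared on a grid; for equal Frank constants the crossing indeed falls at $\bar{q}_0\approx\sqrt{2}$ (a sketch in Python):

\begin{verbatim}
import numpy as np

def F_single(qbar, K22):
    return -0.5 * np.pi * K22 * qbar ** 2

def F_double(qbar, K22, K24, K33):
    d = K22 - K24
    return (-0.5 * np.pi * K22 ** 2 * qbar ** 2 / d
            + np.pi * K22 ** 4 * K33 * qbar ** 4 / (64.0 * d ** 4))

# equal Frank constants: K11 = K22 = K33 = 2 K24 = 1
qbar = np.linspace(1e-3, 3.0, 3001)
diff = F_double(qbar, 1.0, 0.5, 1.0) - F_single(qbar, 1.0)
# divide by qbar^2 so the trivial root at qbar = 0 is removed
print(qbar[np.argmin(np.abs(diff / qbar ** 2))])   # approx 1.414
\end{verbatim}

Truncated expansions of this kind, however, say nothing about possible intermediate configurations.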
For this reason, we perform simulations of the director field inside a cylinder.\n\nIn director field simulations, researchers often parameterize the three components of $\\hat{\\boldsymbol{n}}$ in terms of the angles $\\theta$ and $\\phi$ in conventional spherical coordinates. Here, that parameterization is not convenient, because the coordinate system is singular whenever $\\hat{\\boldsymbol{n}}$ is along the $z$-axis, which occurs right in the center of the double-twist configuration. As an alternative, we use a version of spherical coordinates based on the $x$-axis. Hence, we write \n\\begin{eqnarray}\\label{director_field_n}\n \\hat{\\boldsymbol{n}}(x,y) = \\left(\\sin{u}, \\cos{u}\\sin{v}, \\cos{u}\\cos{v}\\right),\n\\end{eqnarray}\nin terms of angles $u(x,y)$ and $v(x,y)$. We assume equal Frank elastic constants, $K_{11}=K_{22}=K_{33}=2K_{24}=K$, so that the free energy (per length in the $z$ direction) simplifies to\n\\begin{align}\\label{ftotal_director_field_simulations}\n F &= K\\int dx dy\\biggl[\\frac{1}{2}\\left(\\partial_i n_j\\right) \\left(\\partial_i n_j\\right)\n -q_0\\epsilon_{ijk}n_i\\partial_j n_k\\biggr]\\\\\n &= K\\int dx dy\\biggl[\\frac{1}{2}|\\boldsymbol{\\nabla}u|^2+\\frac{1}{2}(\\cos^2 u)|\\boldsymbol{\\nabla}v|^2\n +q_0(\\cos v)\\partial_y u\n \\nonumber\\\\\n &\\qquad\\qquad\n -q_0(\\cos^2 u)\\partial_x v\n + \\frac{1}{2}q_0(\\sin 2u)(\\sin v)\\partial_y v\\biggr].\\nonumber\n\\end{align}\nTo simulate time-dependent relaxation, we define the dissipation function as\n\\begin{equation}\\label{dissipation_function_for_director_field}\n D = \\frac{1}{2} \\gamma_1 \\int dx dy |\\dot{\\boldsymbol{n}}|^2\n = \\frac{1}{2} \\gamma_1 \\int dx dy \\left[\\dot{u}^2 +(\\cos^2 u)\\dot{v}^2\\right],\n\\end{equation}\nwith rotational viscosity $\\gamma_1$. We then solve the overdamped equations of motion\n\\begin{eqnarray}\\label{equation_of_motion_director_field}\n -\\frac{\\delta F}{\\delta u}-\\frac{\\delta D}{\\delta\\dot{u}}=0,\\quad\n -\\frac{\\delta F}{\\delta v}-\\frac{\\delta D}{\\delta\\dot{v}}=0,\n\\end{eqnarray}\nby finite-element modeling using the software package COMSOL, running forward in time until the director field reaches an equilibrium state. We choose units so that $K=\\gamma_1=1$ and the radius $R=10$, and vary the natural twist $q_0$.\n\nBecause the product $\\bar{q}_0=q_0 R$ is the only dimensionless parameter in the problem, it must control the equilibrium state of the director field. To determine how the equilibrium state evolves as a function of this parameter, we begin with a uniform nematic configuration aligned with the $z$-axis at $q_0 = 0$, then add a small increment to $q_0$ and let the director field relax. After a steady state is reached, another increment is added to $q_0$, and this process is repeated until $q_0$ is sufficiently large. To avoid trapping the director field in a metastable state, we also do simulations starting from the final state at the largest value of $q_0$, and then reducing $q_0$ by a series of decrements. We identify the ground state by comparing the total free energies of the different configurations at the same $q_0$. \n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=0.92\\textwidth]{Figure1}\n\\caption{Ground states of the simulated director field for different values of $\\bar{q}_0=q_0 R$. Rows 1--3 are the double-twist (DT) configuration, rows 4--5 are cholesteric-y (Ch-y), and rows 6--8 are cholesteric-z (Ch-z).
Column a shows visualizations of the director field, with cylinders representing the apolar director. Columns b--e show contour plots of the splay, bend, twist, and $\\boldsymbol{\\Delta}$ modes, respectively. (Column c represents the magnitude $|\\boldsymbol{B}|$, and column e represents the positive eigenvalue of $\\boldsymbol{\\Delta}$.)}\n\\end{figure*}\n\nThe ground state configurations at different values of $\\bar{q}_0$ are shown in Fig.~1. When $\\bar{q}_0 = 0.5$, the chiral liquid crystal is most stable in a double-twist (DT) configuration (Fig.~1, row~1). By calculating the four elastic modes in the director field, we find that there is a large twist throughout the cylinder. The bend and $\\boldsymbol{\\Delta}$ modes are zero at the center of the cylinder, but near the edge they grow to be nonzero (although much smaller than the twist). The splay is zero everywhere. All four elastic modes show full rotational symmetry about the $z$-axis. When $\\bar{q}_0$ increases to $1.0$ or $2.6$ (Fig.~1, rows~2--3), the ground state remains a double-twist configuration. The observed twist increases because of the increased natural twist, and the bend and $\\boldsymbol{\\Delta}$ modes also grow substantially. These features of the simulation results for small $\\bar{q}_0$ are consistent with the predictions for the DT state from the approximate analytic calculation, except that DT persists up to a somewhat higher value of $\\bar{q}_0$ than expected from that calculation.\n\nWhen $\\bar{q}_0$ increases from $2.6$ to $2.7$, a symmetry-breaking transition occurs, and the ground state transforms into a cholesteric-like configuration (Fig.~1, row~4). At this transition, the full rotational symmetry is lost. In the bulk of the cylinder, away from the surface, the director configuration is similar to a cholesteric helical structure with its pitch axis along $\\hat{\\boldsymbol{x}}$. Because the director orientation at the center is along $\\hat{\\boldsymbol{y}}$, we call this structure cholesteric-y (Ch-y). From the contour plots of the four elastic modes, away from the surface, we see that the $\\boldsymbol{\\Delta}$ mode is about half of the twist, while the splay and bend modes are almost zero. These results agree with the predictions for cholesteric single twist. Close to the surface, the Ch-y configuration has a mixture of all four elastic modes. This configuration persists as $\\bar{q}_0$ increases up to $3.6$ (Fig.~1, row~5).\n\nWhen $\\bar{q}_0$ increases from $3.6$ to $3.7$, another transition occurs, and the ground state becomes a cholesteric-z (Ch\\nobreakdash-z) configuration (Fig.~1, row~6). In the bulk of the cylinder, the Ch-z director field is also a cholesteric helical structure similar to Ch-y, except that the director at the center is along $\\hat{\\boldsymbol{z}}$. Close to the surface, Ch-z shows a more complex behavior than the structure in the bulk. The free boundary condition allows formation of double twist, so that the director field introduces more double twist at the surface while being constrained by the single twist in the bulk. As $\\bar{q}_0$ further increases to $6.5$ and $8.5$ (Fig.~1, rows~7--8), more cholesteric pitches are pushed into the cylinder, and the entire Ch-z configuration becomes more like a standard cholesteric helical structure.\n\nTo further understand the structural transitions inside the cylinder, we plot the free energies of the simulated DT, Ch-y, and Ch-z states as functions of $\\bar{q}_0$ in Fig.~2(a). 
To emphasize the differences in free energies, we subtract off a common baseline, which is the free energy of the standard cholesteric single-twist configuration $F_{\\mathrm{ST}}$ from Eq.~(\\ref{Ftotal_single_twist}). This figure shows that the free energies of the configurations cross at two specific values of $\\bar{q}_0$. The first derivative of the total free energy changes discontinuously at the two transition points, which indicates that those two structural transitions are similar to first-order phase transitions. The discontinuity in slope is large at the transition from DT to Ch-y, but much smaller at the transition from Ch-y to Ch-z, so that the latter could be a weak first-order transition.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.45\\textwidth]{Figure2}\n\\caption{(a)~Comparison of the free energies of the simulated DT, Ch-y, and Ch-z states, all relative to $F_{\\mathrm{ST}}$ from Eq.~(\\ref{Ftotal_single_twist}), as functions of $\\bar{q}_0$. Simulations are run from different initial states, in order to search for the ground state. Solid lines with different colors are used to distinguish the different initial states: red starting from DT at $\\bar{q}_0=0.1$, blue starting from Ch-z at $\\bar{q}_0=4.2$, cyan starting from Ch-y at $\\bar{q}_0=2.8$. Arrows indicate the direction of changing $\\bar{q}_0$ in each series of simulations. (b)~Positive eigenvalue of $\\boldsymbol{\\Delta}$ at the center of the cylinder, in the simulated ground state, as a function of $\\bar{q}_0$. (c)~Free energy of the simulated ground state, relative to the free energy $F_{PDT}$ of unachievable perfect double twist, as a function of $\\bar{q}_0$.}\n\\end{figure}\n\nBecause the $\\boldsymbol{\\Delta}$ deformation mode does not have rotational symmetry about the director, it can be used to characterize quantitatively how the rotational symmetry is broken as $\\bar{q}_0$ increases. Figure~2(b) shows the positive eigenvalue of $\\boldsymbol{\\Delta}$ at the center of the cylinder, in the ground state, as a function of $\\bar{q}_0$. From this figure, we can see that the $\\boldsymbol{\\Delta}$ deformation is always zero at the center in the DT state, because that state has full rotational symmetry. When DT transforms into Ch-y, there is a discontinuous jump in the value of $\\boldsymbol{\\Delta}$ at the center, which implies that the full rotational symmetry is broken. If we regard $\\boldsymbol{\\Delta}$ at the center as an order parameter for the cholesteric phase, then the discontinuous change in $\\boldsymbol{\\Delta}$ also represents a first-order transition, in agreement with the free energy change in Fig.~2(a).\n\nOverall, our director field simulations show that the local optimum structure of chiral liquid crystals, which is double twist, can fit in a finite cylinder when $\\bar{q}_0=q_0 R$ is sufficiently small. However, when $\\bar{q}_0$ is large, the frustration between the local optimum structure and the spatial geometry becomes too great. Because of this frustration, the chiral liquid crystal combines the favorable double twist with other elastic modes. It does not form the standard cholesteric configuration everywhere in the cylinder. Instead, it forms the Ch-y or Ch-z configuration, which has cholesteric single twist in the bulk and a more complex structure in the boundary layer near the surface.
As $\\bar{q}_0$ continues to increase, the bulk grows larger relative to the boundary layer, and hence the liquid crystal asymptotically approaches the standard cholesteric configuration.\n\nRecently, Meiri and Efrati developed a general theoretical formalism to describe cumulative geometric frustration~\\cite{meiri2021cumulative}. One key feature of their theory is that the total free energy grows superextensively with respect to system size, for small system sizes. We can apply their theory to our current simulations. In their theory, the free energy is measured relative to a specific baseline, which is the free energy of the \\emph{unachievable} perfect state, i.e.\\ the ideal local free energy density times the volume. Here, this baseline is the free energy of \\emph{unachievable} perfect double twist $F_{PDT}=-\\pi K_{22}^2\\bar{q}_0^2\/[2(K_{22}-K_{24})]$, which is the free energy density $f_{DT}$ of Eq.~(\\ref{fDT}) times the volume of the cylinder (per length in the $z$ direction). Figure~2(c) shows the simulated free energy in the ground state, relative to $F_{PDT}$, as a function of $\\bar{q}_0=q_0 R$, plotted on a logarithmic scale. For small $\\bar{q}_0$, the results are well fit by the prediction of Eq.~(\\ref{Ftotal_double_twist}), which gives\n\\begin{equation}\n\\Delta F = F_{DT}-F_{PDT}=\\frac{\\pi K_{22}^4 K_{33}(q_0 R)^4 }{64(K_{22}-K_{24})^4}\\to\\frac{\\pi\\bar{q}_0^4}{4}\n\\end{equation}\nfor our simulated case of $K_{11}=K_{22}=K_{33}=2K_{24}=K=1$. This free energy scales superextensively with system size for small systems. For large $\\bar{q}_0$, the results are well fit by the prediction of Eq.~(\\ref{Ftotal_single_twist}), which gives\n\\begin{equation}\n\\Delta F = F_{ST}-F_{PDT}=\\frac{\\pi K_{22}K_{24}(q_0 R)^2}{2(K_{22}-K_{24})}\\to\\frac{\\pi\\bar{q}_0^2}{2}.\n\\end{equation}\nThat free energy scales extensively with system size for large systems. Indeed, the transition from DT to Ch-y can be understood as a way of relieving the superextensive free energy of double twist. Hence, our results show a crossover from superextensive to extensive growth of free energy, at a length scale set by the natural twist $q_0$, in agreement with the theory of cumulative geometric frustration.\n\n\\subsection{Nematic Order Tensor Simulations}\n\nAs an alternative possible response to geometric frustration, instead of forming a combination of director deformation modes, a liquid crystal might form regions of the favorable mode separated by disclinations~\\cite{selinger2022director}. Indeed, chiral liquid crystals often form blue phases, which can be regarded as cubic lattices of double-twist tubes and disclinations. Here, we want to understand how this response can occur in the simple geometry of a chiral liquid crystal in a cylinder with free boundary conditions. The formation of disclinations cannot be described by simulations of the director field. Instead, we must use simulations of the full nematic order tensor.\n\nWe implement a series of simulations for the nematic order of a chiral liquid crystal confined in the same geometry as in the previous section, which is a cylinder of radius $R$ with free boundary conditions. We still assume that the nematic order is a function of $x$ and $y$, independent of $z$. It is represented by a traceless symmetric tensor $Q_{ij}(x,y)$, which has five degrees of freedom.
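\nConcretely, these five degrees of freedom can be identified with the independent entries of the tensor,\n\\[\n \\boldsymbol{Q} = \\left[\\begin{matrix}\n Q_{xx} & Q_{xy} & Q_{xz} \\\\\n Q_{xy} & Q_{yy} & Q_{yz} \\\\\n Q_{xz} & Q_{yz} & -Q_{xx}-Q_{yy}\n \\end{matrix}\\right],\n\\]\nsince symmetry determines the lower triangle and tracelessness eliminates $Q_{zz}$.\n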
Wherever the liquid crystal is uniaxial, the relation between the nematic order tensor and the director is $Q_{ij} = S(\\frac{3}{2}n_i n_j-\\frac{1}{2}\\delta_{ij})$, where $S$ is the scalar order parameter. In terms of the nematic order tensor, the Landau-de Gennes free energy of the liquid crystal (per length in the $z$ direction) is conventionally written as\n\\begin{align}\\label{ftotal_Q_tensor}\n \\nonumber F&=\\int\\bigg[ \\frac{A_0}{2}\\left(1-\\frac{\\gamma}{3}\\right)\\Tr\\boldsymbol{Q}^2\n - \\frac{A_0\\gamma}{3}\\Tr\\boldsymbol{Q}^3\n + \\frac{A_0\\gamma}{4}\\left(\\Tr\\boldsymbol{Q}^2\\right)^2\\\\\n &\\qquad + \\frac{L_1}{2}\\left(\\partial_i Q_{jk}\\right)\\left(\\partial_i Q_{jk}\\right)\n + \\frac{L_4}{2}\\epsilon_{lik}Q_{lj}\\partial_k Q_{ij}\\bigg] dx dy.\n\\end{align}\nInside the integrand, the first three terms give the bulk free energy density in terms of the parameters $A_0$ and $\\gamma$. When the liquid crystal is uniform, the optimum scalar order parameter $S_0$ can be expressed as a function of $\\gamma$. The last two terms in Eq.~(\\ref{ftotal_Q_tensor}) give the distortion free energy density associated with spatial variations of the nematic order. The $L_1$ term is a nonchiral energy penalty for gradients of $Q_{ij}$, which corresponds to the single elastic constant $K$ for gradients of $\\hat{\\boldsymbol{n}}$. The $L_4$ term is a chiral term, which favors twist of the nematic order. Based on the combination of these two terms, the natural twist of a cholesteric helix is $q_0 = L_4\/(4L_1)$.\n\nPrevious studies have already considered the free energy of Eq.~(\\ref{ftotal_Q_tensor}) as a model for blue phases in an \\emph{infinite} system~\\cite{grebel1983landau,dupuis2005numerical,alexander2006stabilizing,duzgun2018comparing}. By rescaling the length, energy, and $Q$-tensor, those studies found that the behavior depends only on two dimensionless ratios\n\\begin{eqnarray}\\label{two_dimensionless_ratios}\n \\tau = \\frac{9\\left(3-\\gamma\\right)}{\\gamma},\\quad \n \\kappa = \\sqrt{\\frac{27 L_4^2}{4 L_1 A_0 \\gamma}},\n\\end{eqnarray}\nknown as the reduced temperature and the chirality, respectively. Here, we want to find the behavior in a \\emph{finite} cylinder of radius $R$. In addition to $\\tau$ and $\\kappa$, the behavior must also depend on a third dimensionless parameter related to the finite size $R$. This third parameter can be written as $\\bar{q}_0 = q_0 R = L_4 R\/(4L_1)$, just as in the director simulations of the previous section. In our nematic order tensor simulations, we fix the radius $R=10$ and the elastic constant $L_1=2.32$, and we tune the chiral coefficient $L_4$ to explore the structural evolution. To change the free energy cost of disclinations, we adjust the values of $A_0$ and $\\gamma$. This choice of parameters implicitly sets $\\tau$ and $\\kappa$, and hence puts us in the cholesteric region or the blue-phase region of the known phase diagram for the infinite system.\n\nTo model pure relaxational dynamics, we define the dissipation function\n\\begin{eqnarray}\\label{dissipation_function_for_Qtensor}\n D = \\frac{1}{2}\\Gamma_1 \\int dx dy \\dot{Q}_{ij} \\dot{Q}_{ij},\n\\end{eqnarray}\nwhere $\\Gamma_1$ is analogous to a rotational viscosity for the $\\boldsymbol{Q}$-tensor, choosing time units so that $\\Gamma_1=1$.
We then solve the overdamped equations of motion\n\\begin{eqnarray}\\label{equation_of_motion_Qtensor}\n -\\frac{\\delta F}{\\delta Q_{ij}} - \\frac{\\delta D}{\\delta\\dot{Q}_{ij}} = 0\n\\end{eqnarray}\nby finite-element modeling. To avoid trapping the $\\boldsymbol{Q}$-tensor field in a metastable state, we begin our simulations with different initial states, and compare the total free energies of the resulting equilibrium states to identify the overall ground state. Because introducing a disclination might require overcoming a high energy barrier, we consider an initial state with a lattice of disclinations, as in the literature on blue phases~\\cite{grebel1983landau},\n\\begin{align}\\label{initial_condition_singledisclination}\n\\boldsymbol{Q}&=\nc\\left[\\begin{matrix}\n-1 & 0 & 0 \\\\\n0 & -1 & 0 \\\\\n0 & 0 & 2\n\\end{matrix}\\right]\n+b\\text{Re}\\left[\\begin{matrix}\n0 & 0 & 0 \\\\\n0 & 1 & -i \\\\\n0 & -i & -1\n\\end{matrix}\\right]e^{i(qx-\\frac{\\pi}{3})}\\nonumber\\\\\n&\\quad+b\\text{Re}\\left[\\begin{matrix}\n\\frac{3}{4} & \\frac{\\sqrt{3}}{4} & -\\frac{i\\sqrt{3}}{2} \\\\\n\\frac{\\sqrt{3}}{4} & \\frac{1}{4} & -\\frac{i}{2} \\\\\n-\\frac{i\\sqrt{3}}{2} & -\\frac{i}{2} & -1\n\\end{matrix}\\right]e^{i(\\frac{qx}{2}-\\frac{qy\\sqrt{3}}{2}+\\frac{\\pi}{3})}\\nonumber\\\\\n&\\quad+b\\text{Re}\\left[\\begin{matrix}\n\\frac{3}{4} & -\\frac{\\sqrt{3}}{4} & \\frac{i\\sqrt{3}}{2} \\\\\n-\\frac{\\sqrt{3}}{4} & \\frac{1}{4} & -\\frac{i}{2} \\\\\n\\frac{i\\sqrt{3}}{2} & -\\frac{i}{2} & -1\n\\end{matrix}\\right]e^{i(\\frac{qx}{2}+\\frac{qy\\sqrt{3}}{2}+\\frac{\\pi}{3})}.\n\\end{align}\nIn this initial state, the scalar order parameter and the biaxiality are controlled by the two parameters $b$ and $c$, and the twist deformation is tuned by $q$.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.48\\textwidth]{Figure3}\n\\caption{(a)--(b)~Free energies of the states found in our nematic order tensor simulations, as functions of the chiral parameter $L_4$, with bulk coefficients for stiff and soft nematic order, respectively. All free energies are relative to the free energy of the standard biaxial cholesteric helical structure inside the cylinder, at the same value of $L_4$. As in Fig.~2, solid lines with different colors represent simulations run from different initial states. Arrows indicate the direction of changing $L_4$ in each series of simulations. (c)--(d)~Director field visualizations for the ground state with soft nematic order at $L_4=2.4$ and $2.5$, respectively. The orientation of the cylinders represents the local eigenvector of $\\boldsymbol{Q}$ with the largest eigenvalue. The red dot marks the $-1\/2$ disclination.}\n\\end{figure}\n\nTo simulate a chiral liquid crystal with stiff nematic order, we choose the bulk coefficients $A_0=100$ and $A_0\\gamma=453$. When $L_4$ is changed continuously from $2$ to $3.8$, the dimensionless ratio $\\tau$ remains at $-3$, and $\\kappa$ ranges from $0.16$ to $0.3$. These values of $\\tau$ and $\\kappa$ correspond to the cholesteric region of the phase diagram for an infinite system~\\cite{alexander2006stabilizing}. Our simulation results for this range of parameters are shown in Fig.~3(a). From the free energy graph, we can identify the same structural transitions as already seen in the director field simulations. When $\\bar{q}_0$ is small, the ground state is DT. As $\\bar{q}_0$ is increased to about $2.7$, the ground state changes from DT to Ch-y.
When $\\bar{q}_0$ reaches $3.7$, there is a further transition from Ch-y to Ch-z. When $\\bar{q}_0$ is large enough, the ground state becomes similar to a standard cholesteric helical structure.\n\nTo simulate a chiral liquid crystal with softer nematic order, we choose $A_0=0.905$ and $A_0\\gamma=7.10$. When $L_4$ is changed continuously from $2$ to $3.2$, $\\tau$ remains at $-5.6$, and $\\kappa$ ranges from $1.3$ to $2$. These values of $\\tau$ and $\\kappa$ are in the blue-phase region of the bulk phase diagram. The simulation results for those parameters are shown in Fig.~3(b). When $\\bar{q}_0$ is small, the ground state is still DT. Unlike the results for stiff nematic order, the ground state is never Ch-y or Ch-z. Instead, there is a transition between DT (Fig.~3(c)) and a single-disclination (SD) configuration (Fig.~3(d)) as $\\bar{q}_0$ is increased to about $2.7$. By visualizing the director (defined as the eigenvector of $\\boldsymbol{Q}$ with the largest eigenvalue) inside the cylinder, we can see that the SD configuration is composed of three double-twist regions separated by a $-1\/2$ disclination at the center. As we further increase $\\bar{q}_0$ to values greater than $4.8$, the ground state becomes a complex combination of disclinations, double-twist regions, and single-twist regions.\n\nTo explore the extreme case in which the free energy cost of a disclination is very low, we perform simulations with $A_0=-0.027$ and $A_0\\gamma=2.92$. Figure~4 shows the ground states for three values of $\\bar{q}_0$ beyond the value where a single disclination forms. At $\\bar{q}_0=4.31$, the ground state (Fig.~4(a)) shows seven large double-twist regions separated by six $-1\/2$ disclinations. As $\\bar{q}_0$ is increased to $6.47$, there is a complex arrangement of double-twist structures inside the cylinder. The ground state (Fig.~4(b)) has three large double-twist regions separated by a $-1\/2$ disclination at the center, which resembles an SD configuration except that six small double-twist regions are inserted close to the surface. When $\\bar{q}_0=8.62$, the size of the optimum double-twist region becomes small enough, compared to the cylinder radius, so that the ground state (Fig.~4(c)) is filled with many double-twist domains separated by $-1\/2$ disclinations. The general behavior here is that, as $\\bar{q}_0$ increases, the chiral liquid crystal tries to fill the cylinder with as many double-twist domains as possible.\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=0.75\\textwidth]{Figure4}\n\\caption{(a)--(c) Visualization of the nematic order tensor fields for a chiral liquid crystal with very soft nematic order at $L_4=4$, $6$, and $8$, respectively. The orientation of the cylinders represents the local eigenvector of $\\boldsymbol{Q}$ with the largest eigenvalue. The red dots mark the $-1\/2$ disclinations, and the green circles indicate the small double-twist regions in part~(b).}\n\\end{figure*}\n\nOverall, our simulations of the nematic order tensor demonstrate that a chiral liquid crystal can have two possible responses to geometric frustration. If the bulk free energy coefficients are large, so that the nematic order is stiff, then the liquid crystal combines the favorable double twist with the unfavorable $\\boldsymbol{\\Delta}$ mode, and it forms a cholesteric structure with single twist. It does not form disclinations, because the energy cost of disclinations is too high. These results are consistent with our director simulations in the previous section. 
By contrast, if the bulk free energy coefficients are smaller, so that the nematic order is softer, then the liquid crystal forms regions of the favorable double twist separated by disclinations. In that sense, it forms a simple version of a blue phase in the finite geometry.\n\n\\section{Chiral Liquid Crystal in a Slab}\n\nIn this section, we consider a slab of chiral liquid crystal between two infinite, parallel plates, with free boundary conditions on both plates. If there were no geometric frustration, then we might expect the liquid crystal to form a standard cholesteric director field. Here, we show that geometric frustration leads to a more complex structure near the plates.\n\nTo model the chiral liquid crystal between the plates, we use a Cartesian coordinate system with the plates parallel to the $(x,y)$ plane, located at $z=\\pm d\/2$. To simplify the model, we assume that the director field depends only on $x$ and $z$, and is independent of $y$. We write it in conventional spherical coordinates as \n\\begin{eqnarray}\\label{director_field_n_in_a_slab}\n \\hat{\\boldsymbol{n}}(x,z) = \\left(\\sin{\\theta}\\cos{\\phi}, \\sin{\\theta}\\sin{\\phi}, \\cos{\\theta}\\right).\n\\end{eqnarray}\nThe free energy takes the same form as Eq.~(\\ref{ftotal_director_field_simulations}), and the dissipation function takes the same form as Eq.~(\\ref{dissipation_function_for_director_field}), except now expressed in terms of angles $\\theta(x,z)$ and $\\phi(x,z)$. We solve the overdamped equations of motion for $\\theta$ and $\\phi$ to obtain the equilibrium director field.\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=0.9\\textwidth]{Figure5}\n\\caption{\\label{fig:5} Visualization of the director field in chiral liquid crystals confined between parallel plates with free boundary conditions. In each image, the orientation of the cylinders indicates the local director $\\hat{\\boldsymbol{n}}$, and the color below the cylinders shows the free energy density (relative to the standard cholesteric helix at the corresponding value of $q_0$). (a)~Full height of cell with $q_0=3\\pi\/2$. (b)~Zoomed-in surface state for $q_0=3\\pi\/2$. (c)~Full height of cell with $q_0=\\pi$. (d)~Zoomed-in surface state for $q_0=\\pi$. (e)~Full height of cell with $q_0=\\pi\/2$. (f)~Full height of cell with $q_0=\\pi\/4$. Compared to the previous director field simulations in a capillary tube, we use twist with the opposite handedness in these simulations.}\n\\end{figure*}\n\nSimulations are conducted in a rectangular box in the $(x,z)$ plane. In the $x$ direction, the size of the box is $40$, and periodic boundary conditions are applied to simulate an unbounded system. In the $z$ direction, the thickness of the box is fixed at $d=20$, and we vary $q_0$ to study how the director configuration depends on the ratio of the cell thickness to the natural pitch. For the initial condition, we choose a standard cholesteric helical structure with a pitch slightly larger than the natural pitch $\\pi\/q_0$, with pitch axis parallel to $\\hat{\\boldsymbol{z}}$. By relaxing the director field from the initial condition, we find equilibrium configurations which have lower total free energy than the standard cholesteric helical structure at the corresponding $q_0$. Our results are illustrated in Fig.~5.\n\nLet us first consider the case of $q_0 = 3\\pi\/2$ (Fig.~5(a)).
In the interior of the cell, the director field shows a standard cholesteric helical structure, with pitch axis parallel to $\\hat{\\boldsymbol{z}}$. The free energy density in the interior is equal to the free energy density of the standard cholesteric phase at $q_0=3\\pi\/2$. By contrast, at each of the free boundaries, there is a row of large semicircular domains of double twist, arranged regularly in the $x$ direction, forming a surface state. Between each pair of adjacent semicircular domains (Fig.~5(b)), a small double-twist region is inserted to connect the entire chain together. At the center of each large semicircular domain, the free energy density is lower than that of the standard cholesteric phase. Away from the center, the free energy density gradually increases. Approaching the domain periphery, the free energy density becomes even higher than that of the standard cholesteric phase. However, the total free energy for each domain is still lower than the total free energy of the standard cholesteric phase filling the same region. To connect the domains near the free surfaces and the cholesteric helical structure in the interior, the director field transforms gradually from the surface state to the interior structure, manifested as a trail of wrinkles in the free energy density plot.\n\nFor $q_0 = \\pi$ (Fig.~5(c)), the equilibrium state is similar to the previous case, except that the semicircular domains of double twist (Fig.~5(d)) become larger. Because of the increased size of these domains, the surface state has a longer wavelength in the $x$ direction. As $q_0$ is decreased to $\\pi\/2$ (Fig.~5(e)), the semicircular domains of double twist become even larger, so that the surface states fully penetrate through the interior of the cell. The director field around $z=0$ is slightly distorted away from the standard cholesteric helical structure because of the influence of the surface states. When $q_0 = \\pi\/4$ (Fig.~5(f)), the surface states strongly interfere with each other, resulting in a buckled lamellar pattern in the interior of the liquid crystal cell.\n\nIn a sense, the behavior found here can be regarded as an inside-out version of the Helfrich-Hurault effect. In the usual Helfrich-Hurault effect, which is observed in many cholesteric liquid crystals~\\cite{blanc2021helfrich}, the surfaces provide rigid anchoring. A horizontal modulation forms in the interior of a cell, while the surfaces maintain a standard cholesteric helical structure. Here, the surfaces have free boundary conditions, and a horizontal modulation forms at the surfaces, while the interior remains closer to the standard cholesteric helix.\n\n\\section{Discussion}\n\nIn this work, we theoretically demonstrate that double twist is the optimum deformation that minimizes the local free energy density of chiral liquid crystals. Because of geometric constraints, pure double twist cannot fill up 3D Euclidean space, and chiral liquid crystals must compromise between the local optimum and the global structure. We have studied two model systems to see how this geometric frustration affects chiral liquid crystals confined in a finite system with free boundary conditions.\n\nFirst, we investigate a chiral liquid crystal confined in a long cylinder, using analytic theory, director field simulations, and nematic order tensor simulations. All these techniques show that the equilibrium structure is controlled by the dimensionless parameter $\\bar{q}_0=q_0 R$, i.e.\\ the ratio of the cylinder radius to the natural pitch.
When $\\bar{q}_0$ is sufficiently small, the liquid crystal forms a double-twist configuration. However, when $\\bar{q}_0$ is larger, the double-twist configuration accumulates too much geometric frustration. In that case, if the nematic order is stiff, the liquid crystal forms a cholesteric helix, which combines the favorable double twist with the unfavorable $\\boldsymbol{\\Delta}$ deformation mode. If the nematic order is soft, it forms double-twist domains separated by disclinations.\n\nSecond, we study a chiral liquid crystal confined between two infinite, parallel plates with free boundary conditions. In this geometry, double twist cannot fill up the slab. Instead, geometric frustration induces surface states close to the free boundaries, where semicircular domains of double twist are arranged in a row. The size of each semicircular domain at the surface becomes larger as the natural pitch increases. When the ratio of the cell thickness to the natural pitch becomes small enough, the surface states penetrate through the entire cell, causing a buckled lamellar structure in the interior.\n\nAfter this research was completed, we learned of a recent experimental and theoretical article by Pi\\v{s}ljar et al.~\\cite{pisljar2022skyrmions}, which investigates blue phase III as a topological fluid of skyrmions. Their work is related to our current study because it also examines finite-size effects in chiral liquid crystals. When they reduce the thickness of a sample, they find that the temperature range of the cholesteric phase is reduced, and the liquid crystal can more easily form half-skyrmions or a blue phase. This is the same trend that we find in this article. It supports the view that the cholesteric phase is a frustrated structure, which is only stabilized because of geometric constraints in a large geometry.\n\nTo compare our work with Ref.~\\cite{pisljar2022skyrmions}, we would highlight two small distinctions, which are only differences of emphasis, not scientific disagreements. First, they present a detailed study of a realistic experimental system. By contrast, we investigate very simple models, with the minimal features to demonstrate geometric frustration. Second, by concentrating on skyrmions or half-skyrmions, they emphasize the effects of topology. We would say that the fundamental issue is the elasticity of chiral liquid crystals, which favors double twist more than cholesteric single twist, combined with geometric constraints. Topology is important only because disclinations allow the liquid crystal to develop more regions of double twist.\n\nIn conclusion, by studying two model systems with free boundary conditions, we explicitly demonstrate geometric frustration in chiral liquid crystals. Because of frustration, the configuration of a chiral liquid crystal in a finite system can depend on the geometry of the container in unexpected ways. This theoretical research provides a new perspective on how to understand the structure of chiral liquid crystals, and may help to predict and control new geometric effects.\n\n\\section*{Conflicts of interest}\n\nThere are no conflicts to declare.\n\n\\section*{Acknowledgements}\n\nThis work was supported by National Science Foundation Grant DMR-1409658. \n\n\n\n\n\n\\section{Introduction} \\label{sec:Introduction}\n\nOscillatory behaviour is well-known to arise in many areas of biology, particularly in the study of neural networks.
Through chemical and\/or electrical synapses, neurons can fall into complex synchronous periodic behaviour patterns governing a wide variety of cognitive tasks $\\cite{Buschman,CarmichaelChesselet,JutrasBuffalo}$ and potentially aiding in memory formation $\\cite{MohnsBlumberg}$. An important approach to studying the dynamic behaviour of neural activity mathematically has been through the study of weakly coupled oscillators $\\cite{ErmentroutChow,WeaklyConnectedBook}$. Hence, from a theoretical perspective, modern dynamical systems theory can be used to greatly enhance our understanding of synchronous patterns in neural networks. \n\nOne of the most important characteristics of studying weakly coupled oscillators is that each oscillator can be reduced through a process of averaging to a single phase variable lying on the circle $S^1$ under minor technical assumptions \\cite{Corinto,DeVille,ErmentroutAverage,WeaklyConnectedBook,Udeigwe}. It is exactly these so-called `phase models' that have allowed researchers to identify patterns of synchrony arising in the study of coupled oscillators. To date there have been many nontrivial stable synchrony patterns that have been identified, including traveling waves, target patterns and rotating waves $\\cite{ErmentroutStability,ErmentroutRen,Paullet,ErmentroutSpiral,Udeigwe}$. Some authors have even postulated that such patterns of synchrony in neurons could be related to simple geometric structures perceived during visual hallucinations $\\cite{ErmentroutCowan,Tass}$.\n\nIn this paper we will work to further the current understanding of the stability in coupled phase models by inspecting an infinite system of ordinary differential equations. We will consider a countably infinite index set $V$ along with the system of ordinary differential equations indexed by the elements of $V$ given by\n\\begin{equation} \\label{PhaseSystem}\n\t\\dot{\\theta}_v(t) = \\omega_v + \\sum_{v'\\in N(v)} H(\\theta_{v'}(t) - \\theta_v(t)),\n\\end{equation}\nfor each $v \\in V$. Here $\\dot{x} = dx\/dt$, $\\theta_v: \\mathbb{R}^+ \\to S^1$ and $H:S^1 \\to S^1$ is (at least) twice differentiable and a $2\\pi$-periodic function describing the interaction between oscillators indexed by the elements of $N(v) \\subseteq V\\setminus\\{v\\}$ for all $v \\in V$. One thinks of the elements in $N(v)$ as belonging to a neighbourhood of the element $v$. More precisely, we will say that if $v' \\in N(v)$ the state $\\theta_{v'}$ {\\em influences} the state $\\theta_{v}$. Throughout this manuscript we will assume the influence topology to be symmetric, which formally means that $v \\in N(v')$ if and only if $v' \\in N(v)$ for all $v,v'\\in V$. The $\\omega_v$ are taken to be real-valued constants representing intrinsic differences in the oscillators and\/or external inputs. Our interest in system $(\\ref{PhaseSystem})$ is to describe sufficient conditions for the stability of synchronized patterns of oscillation arising as solutions to this model. \n\nThe differential equation $(\\ref{PhaseSystem})$ can be interpreted as a generalization of the celebrated Kuramoto model which has widely been applied in mathematical neuroscience $\\cite{Cumin, Kuramoto}$. Particularly, we will see through the applications of our work here that the motivating examples of interaction functions $H$ are given by sinusoidal functions, as in the Kuramoto model. 
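\nTo fix ideas, taking $H(\\theta)=\\sin(\\theta)$ in $(\\ref{PhaseSystem})$ yields the Kuramoto-type network\n\\[\n\t\\dot{\\theta}_v(t) = \\omega_v + \\sum_{v'\\in N(v)} \\sin(\\theta_{v'}(t) - \\theta_v(t)),\n\\]\nwhere the coupling depends only on pairwise phase differences; the lattice model $(\\ref{PhaseSystem2})$ below is precisely of this form.\n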
Furthermore, this work refrains from inspecting the case when $V$ has a finite number of elements since this was taken care of in a previous study by Ermentrout $\\cite{ErmentroutStability}$. It was exactly this study that made explicit the link between spectral graph theory and the stability of coupled oscillators, showing that under mild conditions one may interpret the linearization about a synchronous solution as corresponding to a weighted graph. Many investigations have used these results to show that finite systems of coupled oscillators exhibit stable solutions for an arbitrary finite number of oscillators, but most fail to mention how stability changes as the number of oscillators increases without bound. What could be expected is that the spectral gap in this case shrinks, causing eigenvalues to converge onto the imaginary axis as the number of oscillators tends to infinity. It is exactly this type of phenomenon which has been a recent area of investigation for partial differential equations $\\cite{SandstedeScheel}$. Studies have shown that the truncation from infinite to finite can help to stabilize otherwise unstable solutions, and that faint remnants of the instabilities reveal themselves for large finite truncations. Therefore, in this work we wish to detail sufficient conditions for stability in infinite dimensions, allowing one to address all of the previously stated concerns as they pertain to a well-studied paradigm in mathematical neuroscience. \n\nTypical investigations into coupled phase models focus on one or both of the following aspects: the interaction between oscillators (different $H$ functions) and the effect of coupling topologies (different $V$ and $N(v)$). The work presented here focusses more on the latter: we will follow methods similar to those utilized in the finite-dimensional setting to understand how the coupling topologies endow the problem with an underlying graph theoretic framework for the stability of synchronous solutions. We will see that this graph theoretic framework can reward networks with more complex connections with faster decay rates of slight perturbations. This paper will use some of the well-developed theory of random walks on infinite weighted graphs to obtain algebraic (as opposed to exponential) decay rates of small perturbations off synchronous solutions to our phase model $(\\ref{PhaseSystem})$. In fact, the reason we have chosen $V$ to represent the index set for our differential equation is that, as we will see in the coming sections, it can be shown to correspond to the vertices of an infinite weighted graph. In doing so we provide a fascinating link between two seemingly unrelated areas of mathematics and introduce a new application for some very theoretical mathematical tools. \n\nSystem $(\\ref{PhaseSystem})$ generalizes the Kuramoto-type coupled phase model studied in $\\cite{MyWork}$ given by\n\\begin{equation}\\label{PhaseSystem2}\n\t\\dot{\\theta}_{i,j} = \\omega + \\sin(\\theta_{i+1,j} - \\theta_{i,j}) + \\sin(\\theta_{i-1,j} - \\theta_{i,j}) + \\sin(\\theta_{i,j+1} - \\theta_{i,j}) + \\sin(\\theta_{i,j-1} - \\theta_{i,j}),\n\\end{equation}\nfor each $(i,j)\\in\\mathbb{Z}^2$. In particular, system (\\ref{PhaseSystem2}) was shown to possess a rotating wave solution, which, as previously mentioned, pertains to important biological phenomena.
In this work we will extend the investigation of $\\cite{MyWork}$ by demonstrating not only that the rotating wave solution is stable, but also that there are infinitely many stable synchronous states of the model (\\ref{PhaseSystem2}) which can be analyzed using the methods of this paper. By using a particular phase model as a case study, we aid the reader in better understanding the techniques presented here, and we further some known mathematical results. \n\nThis paper is organized as follows. In Section $\\ref{sec:Perturbation}$ we lay out the basic mathematical framework that allows us to study the stability of synchronous solutions. In Section $\\ref{sec:Graphs}$ we provide a brief overview of the relevant results and techniques from graph theory, including an investigation of random walks on infinite weighted graphs in Section $\\ref{subsec:RandomWalks}$. These results on graphs are then used to formulate the main stability result of this paper, Theorem~\\ref{thm:Stability}, which is presented in Section $\\ref{sec:Stability}$. Theorem~\\ref{thm:Stability} shows that if the graph defined by the linearization about the synchronous solution has dimension at least two, then we may obtain nonlinear stability of said synchronous solution. The proof of Theorem~\\ref{thm:Stability} is left to Section $\\ref{sec:StabilityProof}$, where a typical Picard iteration method is employed in conjunction with the results from infinite weighted graphs. In Section $\\ref{sec:Applications}$ we present a thorough analysis of some stable states in a particular coupled phase model, demonstrating that there are infinitely many stable synchronous states in this model. Finally, in Section $\\ref{sec:Discussion}$ we present a brief discussion of the results in this work, particularly the shortcomings of the main stability result for one-dimensional graphs, along with some future directions for inquiry. \n\n\n\n\n\n \n\n\n\n\n\\section{The Perturbation System} \\label{sec:Perturbation}\n\nOur interest in this work lies in investigating synchronous solutions to $(\\ref{PhaseSystem})$. Such solutions are all oscillating with identical frequency and are typically referred to as phase-locked. Solutions of this type take the form \n\\begin{equation} \\label{PhaseAnsatz}\n\t\\theta_v(t) = \\Omega t + \\bar{\\theta}_v,\n\\end{equation} \nwhere $\\bar{\\theta}_v$ is a time-independent phase-lag for all $v \\in V$ and the elements $\\theta_v(t)$ are identically oscillating with period $2\\pi\/\\Omega$, for some $\\Omega \\in \\mathbb{R}$. The ansatz $(\\ref{PhaseAnsatz})$ reduces $(\\ref{PhaseSystem})$ to solving the infinite system \n\\begin{equation} \\label{PhaseLagsEqn}\n\t(\\omega_v - \\Omega) + \\sum_{v' \\in N(v)} H(\\bar{\\theta}_{v'} - \\bar{\\theta}_v) = 0\n\\end{equation}\nfor the phase-lags $\\bar{\\theta} = \\{\\bar{\\theta}_v\\}_{v \\in V}$ and the frequency $\\Omega$. \n\nAssuming we have a solution to $(\\ref{PhaseLagsEqn})$, our interest turns to applying a slight perturbation to the ansatz $(\\ref{PhaseAnsatz})$, written\n\\begin{equation} \\label{PerturbedAnsatz}\n\t\\theta_v(t) = \\Omega t + \\bar{\\theta}_v + \\psi_v(t),\n\\end{equation}\nand inspecting conditions under which $\\psi_v(t) \\to 0$ as $t \\to \\infty$ for all $v \\in V$.
Notice that the perturbed ansatz $(\\ref{PerturbedAnsatz})$ leads to the system of differential equations\n\\begin{equation} \\label{PerturbationSystem}\n\t\\dot{\\psi}_v = (\\omega_v - \\Omega) + \\sum_{v' \\in N(v)} H(\\bar{\\theta}_{v'} + \\psi_{v'} - \\bar{\\theta}_{v} - \\psi_{v}), \\ \\ \\ \\ \\ v \\in V,\n\\end{equation} \nwhich from $(\\ref{PhaseLagsEqn})$ has a steady-state solution given by $\\psi = 0$. We have also suppressed the dependence of the variables in $(\\ref{PerturbationSystem})$ on $t$ for ease of notation. System $(\\ref{PerturbationSystem})$ forms the basis for our investigation in this work. \n\nAs is well-known from the study of finite-dimensional ordinary differential equations, linearizing about a steady-state solution can provide great insight into the stability of this solution. In the case of infinite-dimensional ordinary differential equations one may follow this line of inquiry and attempt similar techniques, but there are some subtleties which only reveal themselves in infinite dimensions. Notice that linearizing $(\\ref{PerturbationSystem})$ about its steady-state solution $\\psi = 0$ results in the linear operator, $L_{\\bar{\\theta}}$, acting upon the sequences $x = \\{x_v\\}_{v \\in V}$ by\n\\begin{equation} \\label{Linearization}\n\t[L_{\\bar{\\theta}}x]_v = \\sum_{v' \\in N(v)} H'(\\bar{\\theta}_{v'} - \\bar{\\theta}_{v})(x_{v'} - x_v),\n\\end{equation} \nfor each $v \\in V$. Operators of the form (\\ref{Linearization}) leave open the possibility that a continuum of spectral values intersects the imaginary axis, and therefore typical exponential stability results relying on the existence of a spectral gap cannot be inferred about any phase-locked solution to $(\\ref{PhaseSystem})$.\n\nLet us illustrate with a prototypical example so that the reader can better comprehend the situation we wish to investigate here. Consider $V = \\mathbb{Z}^2$ and an associated linear operator in the form of (\\ref{Linearization}), acting upon the sequences $x = \\{x_{i,j}\\}_{(i,j) \\in \\mathbb{Z}^2}$, given by \n\\begin{equation} \\label{LinearizationEx}\n\t[Lx]_{i,j} = (x_{i+1,j} - x_{i,j}) + (x_{i-1,j} - x_{i,j}) + (x_{i,j+1} - x_{i,j}) + (x_{i,j-1} - x_{i,j}),\t\n\\end{equation}\nwhich is referred to as the two-dimensional discrete Laplacian operator. It is well-known that when posed upon the spaces $\\ell^2(\\mathbb{Z}^2)$ or $\\ell^\\infty(\\mathbb{Z}^2)$, defined below in (\\ref{ellpSpace}) and (\\ref{ellInfty}), respectively, the spectrum of (\\ref{LinearizationEx}) is exactly the subset of the complex plane given by the real interval $[-8,0]$ \\cite{Cahn}. Hence, we see that the spectrum is both uncountable and intersects the imaginary axis of the complex plane, therefore implying that exponential dichotomies based upon the linearization (\\ref{LinearizationEx}) cannot be inferred, at least when working with the Banach spaces $\\ell^2(\\mathbb{Z}^2)$ or $\\ell^\\infty(\\mathbb{Z}^2)$. In Section~\\ref{subsec:Trivial} we will return to this example to see that stability can in fact be inferred based upon the linearization (\\ref{LinearizationEx}) using the stability theorem presented in this work.
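\n\nTo make the spectral claim above concrete, we note the short computation behind it: applying (\\ref{LinearizationEx}) to the bounded sequences $x_{i,j} = e^{\\mathrm{i}(i\\xi_1 + j\\xi_2)}$, with $(\\xi_1,\\xi_2) \\in [-\\pi,\\pi]^2$, gives\n\\[\n\t[Lx]_{i,j} = \\left(2\\cos\\xi_1 + 2\\cos\\xi_2 - 4\\right)x_{i,j},\n\\]\nso that every value of $2\\cos\\xi_1 + 2\\cos\\xi_2 - 4$ is an eigenvalue on $\\ell^\\infty(\\mathbb{Z}^2)$, and these values fill out exactly the interval $[-8,0]$. In particular, $0$ lies in the spectrum, corresponding to the nearly constant sequences with $(\\xi_1,\\xi_2)$ near $(0,0)$.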
\n\nMoving back to the general situation of (\\ref{Linearization}), when $H'$ is uniformly bounded above, a simple argument, similar to that laid out in Proposition $5.2$ of $\\cite{MyWork}$, shows that $L_{\\bar{\\theta}}$ is not a Fredholm operator on $\\ell^\\infty(V)$, and again we cannot obtain exponential dichotomies based upon these linear operators. Therefore, it is exactly such situations that necessitate the work on stability of phase-locked solutions which is presented in this manuscript. We will overcome this failure to produce exponential dichotomies and obtain algebraic dichotomies instead. Prior to providing the main stability result of this paper, we require a brief overview of the relevant graph theoretic techniques which we will be applying in this work. \n\n\n\n\n\n\n\n\n\n\\section{Denumerable Graph Networks} \\label{sec:Graphs}\n\nThroughout the following subsections we will provide the necessary definitions and results to obtain our main result, Theorem~\\ref{thm:Stability}. We will use similar nomenclature and notation to that of Delmotte $\\cite{Delmotte}$, which appears to now be quite standard. Another excellent source which summarizes much of the relevant results and more is Telcs' textbook $\\cite{TelcsBook}$. The majority of this section comes as a brief literature review to bring the reader up to date on some of the relevant results pertaining to our work in this manuscript. We will also provide some brief lemmas and corollaries to extrapolate the results and make them more easily applicable to our work here. \n\n\n\\subsection{Preliminary Definitions}\n\nWe consider a graph $G = (V,E)$ with a countably infinite collection of vertices, $V$, and a set of unoriented edges between these vertices, $E$, with the property that at most one edge can connect two vertices. We refer to a loop as an edge which initiates and terminates at the same vertex. If there exists an edge $e\\in E$ connecting the two vertices $v,v'\\in V$ then we write $v\\sim v'$, and since the edges are unoriented this relation is naturally symmetric in that $v' \\sim v$ as well. In this way we may equivalently consider the set of edges $E$ as a subset of the product $V \\times V$. A graph is called connected if for any two vertices $v,v'\\in V$ there exists a finite sequence of vertices in $V$, $\\{v_1,v_2, \\dots , v_n\\}$, such that $v\\sim v_1$, $v_1\\sim v_2$, $\\dots$, $v_n \\sim v'$. We will only consider connected graphs for the duration of this work. \n\nWe will also consider a weight function on the edges between vertices, written $w:V\\times V \\to [0,\\infty)$, such that for all $v,v'\\in V$ we have $w(v,v') = w(v',v)$ and $w(v,v') > 0$ if and only if $v \\sim v'$. This then leads to the notion of a weighted graph, written as the triple $G = (V,E,w)$. The weight function also naturally extends to the notion of the measure (or sometimes weight) of a vertex, $m:V \\to [0,\\infty]$, defined by\n\\begin{equation} \\label{VertexMeasure}\n\tm(v) := \\sum_{v'\\in V} w(v,v') = \\sum_{v\\sim v'} w(v,v').\n\\end{equation} \nThroughout this work we will only consider graphs and weight functions such that $m(v) < \\infty$ for all $v \\in V$. The weight function then leads to the definition of an operator acting on the graph.
\n\n\\begin{defn} \\label{def:Laplacian}\n\tFor any function $f:V \\to \\mathbb{C}$ acting on the vertices of $G = (V,E,w)$, we define the {\\bf graph Laplacian} to be the linear operator $L_G$ acting on these functions by\n\t\\begin{equation} \\label{NormLap}\n\t\tL_Gf(v) = \\frac{1}{m(v)}\\sum_{v\\sim v'} w(v,v')(f(v') - f(v)). \n\t\\end{equation} \n\\end{defn} \n\nNatural spatial settings for the graph Laplacian operator of Definition $\\ref{def:Laplacian}$ are the sequence spaces \n\\begin{equation} \\label{ellpSpace}\n\t\\ell^p(V) = \\{f:V \\to \\mathbb{C}\\ |\\ \\sum_{v\\in V} |f(v)|^p < \\infty\\},\t\n\\end{equation} \nfor any $p\\in [1,\\infty)$. The vector space $\\ell^p(V)$ becomes a Banach space when equipped with the norm\n\\begin{equation} \n\t\\|f\\|_p := \\bigg(\\sum_{v\\in V} |f(v)|^p\\bigg)^{\\frac{1}{p}}.\n\\end{equation} \nWe may also consider the Banach space $\\ell^\\infty(V)$, the vector space of all uniformly bounded functions $f:V\\to \\mathbb{C}$ with norm given by\n\\begin{equation} \\label{ellInfty}\n\t\\|f\\|_\\infty := \\sup_{v\\in V} |f(v)|.\n\\end{equation}\nOne often writes the elements of the sequence spaces in the alternate form as sequences indexed by the elements of $V$, $f = \\{f_v\\}_{v\\in V}$, where $f_v := f(v)$. Also, it should be noted that these definitions extend to any countable index set $V$, independent of any associated graph. \n\nThroughout this work we will also consider bounded linear operators acting between these sequence spaces. That is, consider a bounded linear operator $T:\\ell^p(V) \\to \\ell^q(V)$ for some $1 \\leq p,q \\leq \\infty$. We denote the norm of this operator as \n\\begin{equation} \\label{ellpOpNorm}\n\t\\|T\\|_{p \\to q} := \\sup_{0 \\neq f \\in \\ell^p(V)} \\frac{\\|Tf\\|_q}{\\|f\\|_p}.\n\\end{equation} \nOne should note that there are many different, but equivalent, versions of this norm which one may work with. We now wish to provide sufficient conditions for which a graph Laplacian defines a bounded linear operator acting from $\\ell^p(V)$ back into itself, for every $p \\in [1,\\infty]$. Before doing so, we remark that for a vertex $v \\in V$ we define the degree to be\n\\begin{equation}\n\t{\\rm Deg}(v) = \\#\\{v'\\in V |\\ v\\sim v'\\},\n\\end{equation}\nwhere $\\#\\{\\cdot\\}$ represents the cardinality of the set. We use this definition to provide the following results. \n\n\\begin{lem} \\label{lem:NormLaplacian}\n\tLet $G = (V,E,w)$ be a weighted graph. If there exists a finite $D > 0$ such that ${\\rm Deg}(v) \\leq D$ for all $v \\in V$, then the graph Laplacian $(\\ref{NormLap})$ defines a bounded linear operator on $\\ell^p(V)$ for all $p\\in [1,\\infty]$.\t\n\\end{lem}\n\n\\begin{proof}\n\tPrior to working with a specific sequence space let us begin by noticing that from the definition of the measure $(\\ref{VertexMeasure})$ we have\n\t\\begin{equation}\n\t\t\\frac{w(v,v')}{m(v)} \\leq \\sum_{v\\sim v'} \\frac{w(v,v')}{m(v)} = 1,\n\t\\end{equation} \n\tfor all $v,v' \\in V$. Hence,\n\t\\begin{equation} \\label{LNormInequality}\n\t\t\\begin{split}\n\t\t\t|L_Gf(v)| &\\leq \\frac{1}{m(v)}\\sum_{v\\sim v'} w(v,v')(|f(v')| + |f(v)|) \\\\\n\t\t\t&= |f(v)| + \\sum_{v\\sim v'} \\frac{w(v,v')}{m(v)}|f(v')|, \n\t\t\\end{split}\t\n\t\\end{equation}\n\tfor all $v \\in V$.
One may apply H\\\"older's Inequality to find \n\t\\begin{equation}\n\t\t\\sum_{v\\sim v'} \\frac{w(v,v')}{m(v)}|f(v')| \\leq \\bigg(\\sup_{v' \\in V} \\frac{w(v,v')}{m(v)}\\bigg) \\bigg(\\sum_{v\\sim v'} |f(v')|\\bigg) \\leq \\sum_{v\\sim v'} |f(v')|,\t\n\t\\end{equation}\n\tfor all $v \\in V$. Combining this with $(\\ref{LNormInequality})$ gives\n\t\\begin{equation} \\label{LNormInequality2}\n\t\t|L_Gf(v)| \\leq |f(v)| + \\sum_{v\\sim v'} |f(v')|. \t\n\t\\end{equation}\n\tWe now use $(\\ref{LNormInequality2})$ to demonstrate boundedness of the operator $L_G$ on the sequence spaces, starting with $\\ell^1(V)$ and $\\ell^\\infty(V)$.\n\t\n\t$\\underline{L_G:\\ell^1(V) \\to \\ell^1(V)}$: Let $f \\in \\ell^1(V)$. Summing $|L_Gf(v)|$ over all $v \\in V$ and applying $(\\ref{LNormInequality2})$ gives\n\t\\begin{equation}\n\t\t\\begin{split}\n\t\t\t\\|L_Gf\\|_1 &= \\sum_{v \\in V} |L_Gf(v)| \\\\\n\t\t\t\t &\\leq \\sum_{v \\in V} |f(v)| + \\sum_{v \\in V} \\sum_{v\\sim v'} |f(v')| \\\\ \n\t\t\t\t &\\leq \\|f\\|_1 + D\\|f\\|_1 \\\\\n\t\t\t\t & = (D+1)\\|f\\|_1.\n\t\t\\end{split} \n\t\\end{equation}\n\tThis therefore shows that $\\|L_G\\|_{1\\to 1} \\leq D+1$.\n\t\n\t$\\underline{L_G:\\ell^\\infty(V) \\to \\ell^\\infty(V)}$: Let $f \\in \\ell^\\infty(V)$. Taking the supremum of $|L_Gf(v)|$ over all $v \\in V$ and applying $(\\ref{LNormInequality2})$ gives\n\t\\begin{equation}\n\t\t\\begin{split}\n\t\t\t\\|L_Gf\\|_\\infty &= \\sup_{v \\in V} |L_Gf(v)| \\\\\n\t\t\t\t&\\leq \\sup_{v \\in V} |f(v)| + \\sum_{v\\sim v'} |f(v')| \\\\\n\t\t\t\t&\\leq (D+1) \\|f\\|_\\infty.\n\t\t\\end{split}\n\t\\end{equation}\n\tHence, $\\|L_G\\|_{\\infty \\to \\infty} \\leq D+1$. \n\t \n\tNow that we have shown that $L_G$ is bounded on both $\\ell^1$ and $\\ell^\\infty$, standard interpolation results over the $\\ell^p(V)$ spaces imply that $L_G:\\ell^p(V) \\to \\ell^p(V)$ is bounded, uniformly in $p$, for all $1 < p < \\infty$ (see for example Exercise $12$ of $\\S 2.6$ from $\\cite{InfiniteMatrices}$). This completes the proof.\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\\subsection{Graphs as Metric Measure Spaces}\n\nIn this section we extend some of the notions introduced about graphs and provide the necessary nomenclature to introduce random walks on graphs. We saw that for a weighted graph, $G = (V,E,w)$, we define the measure of a vertex as in $(\\ref{VertexMeasure})$. This notion extends to the volume of a subset, $V_0 \\subset V$, by defining \n\\begin{equation}\n\t{\\rm Vol}(V_0) := \\sum_{v \\in V_0} m(v).\n\\end{equation} \nHence, under this definition of volume, a weighted graph naturally becomes a measure space on the $\\sigma$-algebra given by the power set of $V$. \n\nGraphs also have a natural underlying metric, $\\rho$, given by the function which returns the smallest number of edges to produce a path between two vertices $v,v' \\in V$. Note that since $G$ is assumed connected, the distance function is well-defined. This metric allows for the consideration of a ball of radius $r \\geq 0$ centred at the vertex $v \\in V$, denoted by\n\\begin{equation}\n\tB(v,r) := \\{v'\\ |\\ \\rho(v,v') \\leq r\\}.\n\\end{equation} \nIn this work we will write ${\\rm Vol}(v,r)$ to denote ${\\rm Vol}(B(v,r))$. The combination of the graph metric and the vertex measure allows one to interpret a weighted graph as a {\\em metric-measure space}. \n\nWe provide a series of definitions to further our understanding of graphs as metric-measure spaces. 
\n\n\\begin{defn} \\label{def:VolumeGrowth}\n\tThe weighted graph $G = (V,E,w)$ satisfies a {\\bf uniform polynomial volume growth} condition of order $d > 0$, abbreviated VG(d), if there exist constants $c_{vol,1},c_{vol,2} > 0$ such that\n\t\\begin{equation}\n\t\tc_{vol,1}r^d \\leq {\\rm Vol}(v,r) \\leq c_{vol,2}r^d,\n\t\\end{equation}\n\tfor all $v \\in V$ and $r \\geq 1$.\n\\end{defn}\n\nIn some cases one may also consider graphs with more general volume growth conditions, but for the purposes of this work we will restrict ourselves to polynomial growth conditions. The characteristic examples of graphs satisfying $VG(d)$ are the integer lattices $\\mathbb{Z}^d$ with all edges of weight $1$ such that there exists an edge between two vertices $n,n' \\in \\mathbb{Z}^d$ if and only if $\\|n - n'\\|_1 = 1$. This is pointed out on, for example, page $10$ of $\\cite{BarlowCoulhonGrigoryan}$. For the duration of this work we will restrict our attention to those graphs satisfying VG(d) with $d\\geq 2$. \n\n\\begin{defn} \\label{def:Delta}\n\tWe say $G = (V,E,w)$ satisfies the {\\bf local elliptic property}, denoted $\\Delta$, if there exists an $\\alpha > 0$ such that\n\t\\begin{equation}\n\t\tw(v,v') \\geq \\alpha m(v)\n\t\\end{equation}\n\tfor all $v,v' \\in V$ such that $v' \\sim v$.\n\\end{defn}\n\nIt appears that the notation of using $\\Delta$ to denote this local elliptic property has become commonplace in the literature, and therefore we use this notation for consistency. The following lemma points out an important set of sufficient conditions to satisfy $\\Delta$.\n\n\\begin{lem}\\label{lem:Delta}\n\tLet $G = (V,E,w)$ be a weighted graph. If there exist constants $D,w_{min}, w_{max} > 0$ such that \n\t\\begin{equation}\n\t\tw_{min} \\leq w(v,v') \\leq w_{max},\n\t\\end{equation}\n\tand ${\\rm Deg}(v) \\leq D$ for all $v \\in V$ and all $v' \\sim v$, then $G$ satisfies $\\Delta$. \n\\end{lem}\n\n\\begin{proof}\n\tFor all $v \\in V$ we have\n\t\\begin{equation}\n\t\tm(v) = \\sum_{v \\sim v'} w(v,v') \\leq Dw_{max}.\n\t\\end{equation}\n\tThis gives\n\t\\begin{equation}\n\t\t\\frac{w(v,v')}{m(v)} \\geq \\frac{w_{min}}{Dw_{max}} > 0.\n\t\\end{equation} \n\tThus, $G = (V,E,w)$ will satisfy $\\Delta$ for any $\\alpha > 0$ such that $w_{min} \\geq \\alpha Dw_{max}$; for instance, $\\alpha = w_{min}\/(Dw_{max})$.\n\\end{proof}\n\n\\begin{defn}\n\tThe weighted graph $G = (V,E,w)$ satisfies the {\\bf Poincar\\'e inequality}, abbreviated PI, if there exists a $C_{PI} > 0$ such that\n\t\\begin{equation}\n\t\t\\sum_{v \\in B(v_0,r)} m(v)|f(v) - f_B(v_0)|^2 \\leq C_{PI} r^2\\bigg( \\sum_{v,v' \\in B(v_0,2r)} w(v,v')(f(v) - f(v'))^2\\bigg),\n\t\\end{equation}\t\n\tfor all functions $f:V \\to \\mathbb{R}$, all $v_0 \\in V$ and all $r > 0$, where $f_B(v_0)$ denotes the weighted average of $f$ over the ball $B(v_0,r)$,\n\t\\begin{equation}\n\t\tf_B(v_0) = \\frac{1}{{\\rm Vol}(v_0,r)} \\sum_{v\\in B(v_0,r)} m(v)f(v).\n\t\\end{equation}\n\\end{defn}\n\nThe Poincar\\'e Inequality is certainly the most difficult of the three definitions to work with. In practice it can be quite difficult to confirm whether or not a weighted graph satisfies $PI$, although some methods to obtain this inequality are given in $\\cite{PoincareInequalities}$. Next we will introduce an important definition and result from $\\cite{RoughIsometry}$ that can aid in determining if a graph satisfies $PI$. \n\n\\begin{defn} \\label{def:RoughIsometry}\n\tLet $G = (V,E,w)$ and $G' = (V',E',w')$ be two infinite weighted graphs satisfying $\\Delta$ with respective graph metrics given by $\\rho$ and $\\rho'$. 
A map $T: V \\to V'$ is called a {\\bf rough isometry} if there exist $a,c > 1$, $b > 0$ and $M > 0$ such that\n\t\\begin{subequations}\n\t\t\\begin{equation} \\label{Rough1}\n\t\t\t\ta^{-1}\\rho(v_1,v_2) - b \\leq \\rho'(T(v_1),T(v_2)) \\leq a\\rho(v_1,v_2) + b, \\ \\ \\ \\ \\ \\forall v_1,v_2\\in V, \n\t\t\\end{equation}\t\n\t\t\\begin{equation} \\label{Rough2}\n\t\t\t\\rho'(T(V),v') \\leq M, \\ \\ \\ \\ \\ \\forall v'\\in V',\n\t\t\\end{equation}\n\t\t\\begin{equation} \\label{Rough3}\n\t\t\tc^{-1}m(v) \\leq m'(T(v)) \\leq cm(v), \\ \\ \\ \\ \\ \\forall v\\in V,\t\n\t\t\\end{equation}\n\t\\end{subequations}\n\twhere $m$ and $m'$ are the vertex measures associated to the graphs $G$ and $G'$, respectively. If such a rough isometry exists, $G$ and $G'$ are said to be {\\bf rough isometric}. \n\\end{defn}\n\n\n\\begin{prop}[{\\em \\cite{RoughIsometry}, \\S 5.3, Proposition 5.15(2)}] \\label{prop:RoughInvariance}\n\tLet $G$ and $G'$ be two infinite weighted graphs satisfying $\\Delta$ that are rough isometric. If there exists a $d > 0$ such that $G$ satisfies $VG(d)$ and $PI$, then $G'$ satisfies $VG(d)$ and $PI$ as well. \n\\end{prop}\n\nHambly and Kumagai's original statement of Proposition $\\ref{prop:RoughInvariance}$ refers to our $PI$ as a {\\em weak} Poincar\\'e inequality since they sometimes use a stronger inequality in their work. Hambly and Kumagai also originally provide their statement in terms of a more general volume growth condition, but we will work with the weaker version stated here since we are only interested in polynomial volume growth. In fact, using properties $(\\ref{Rough1})$--$(\\ref{Rough3})$ one can show that property $VG(d)$ is preserved under rough isometries in a straightforward way. \n\n\n\n\n\n\n\n\n\n\\subsection{Random Walks on Weighted Graphs} \\label{subsec:RandomWalks}\n\nOur interest here will be in continuous time random walks on the vertices of a weighted graph $G = (V,E,w)$. One can interpret this as being at a single vertex of the graph, and then waiting an exponentially distributed amount of time before moving along an edge to another vertex of the graph. Upon arriving at the next vertex, this process begins anew. The weights in this scenario act as preferences for moving along a certain edge; the greater the weight, the greater the preference. More precisely, if we are at the vertex $v \\in V$, then the probability we move to $v' \\sim v$ is given by $w(v,v')\/m(v)$. The important thing to note here is that there are exactly two sources of randomness involved: the exponentially distributed waiting time, which determines when to move from one vertex to the next, and the probability of choosing which vertex to move to. We use the notation $p_t(v,v')$ to denote the probability that after time $t \\geq 0$ we have arrived at the vertex $v'$ having started at vertex $v$. By definition we have\n\\begin{equation} \\label{ProbSum}\n\t\\sum_{v'\\in V} p_t(v,v') = 1\n\\end{equation} \nfor all $v \\in V$. 
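\nFor a concrete illustration of these jump probabilities, consider for instance (a toy example of our own, not taken from the references) a path on three vertices $u \\sim v \\sim x$ with weights $w(u,v) = 1$ and $w(v,x) = 3$, so that $m(v) = 4$. Upon leaving $v$, the walk prefers the heavier edge:\n\\begin{equation}\n\t\\mathbb{P}(v \\to u) = \\frac{w(v,u)}{m(v)} = \\frac{1}{4}, \\ \\ \\ \\ \\ \\mathbb{P}(v \\to x) = \\frac{w(v,x)}{m(v)} = \\frac{3}{4}.\n\\end{equation}\n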
Delmotte $\\cite{Delmotte}$ points out that the $p_t(\\cdot,\\cdot)$ are not necessarily symmetric in their arguments due to the weights on the graph, but it has been shown that\n\\begin{equation}\n\t\\frac{p_t(v,v')}{m(v')} = \\frac{p_t(v',v)}{m(v)}.\n\\end{equation} \nThis has prompted some authors $\\cite{BernicotCoulhonFrey,RoughIsometry,Horn}$ to instead study the symmetric transition densities \n\\begin{equation} \\label{qDensity}\n\tq_t(v,v') := \\frac{p_t(v,v')}{m(v')}\n\\end{equation}\nfor all $v,v'\\in V$.\n\nMuch work has been done to understand the long-time behaviour of the probabilities $p_t(\\cdot,\\cdot)$, notably the pioneering work of Delmotte $\\cite{Delmotte}$. Most applicable to our present situation is that these probabilities are used to understand the solution to the spatially discrete heat equation \n\\begin{equation} \\label{NormalizedLap}\n\t\\dot{x}_v = \\frac{1}{m(v)}\\sum_{v' \\in V} w(v,v')(x_{v'} - x_v),\\ \\ \\ \\ \\ v\\in V.\n\\end{equation}\nWhen considering all elements $\\{x_v\\}_{v\\in V}$, the right hand side of $(\\ref{NormalizedLap})$ is a graph Laplacian operator, again denoted $L_G$. Then as stated in Theorem $23$ of $\\cite{KellerLenz}$, $L_G$ is the infinitesimal generator of the semigroup $P_t = e^{L_Gt}$. For an initial condition $x_0 = \\{x_{v,0}\\}_{v \\in V}$, the solution to $(\\ref{NormalizedLap})$ is given by\n\\begin{equation} \\label{HeatSoln}\n\tx_v(t) = [P_tx_0]_v = \\sum_{v' \\in V} p_t(v,v')x_{v',0} \n\\end{equation} \nfor each $v \\in V$, thus showing the connection between the probabilities $p_t(\\cdot,\\cdot)$ and the semigroup $P_t$. The fact that $(\\ref{HeatSoln})$ solves $(\\ref{NormalizedLap})$ was pointed out by Delmotte, and other sources include, but are not limited to, $\\cite{Horn,Weber}$ for continuous time transitions and $\\cite{Grigoryan1,Grigoryan2}$ for discrete time transitions. Using the identity $(\\ref{qDensity})$, one can also interpret the solution $(\\ref{HeatSoln})$ as the Lebesgue integral\n\\begin{equation}\n\t[P_tx_0]_v = \\sum_{v' \\in V} q_t(v,v')x_{v',0}m(v'),\t\n\\end{equation}\nof the initial condition against the symmetric kernel $q_t$, taken over the discrete space $V$ with respect to the measure $m: V \\to [0,\\infty)$. \n\n\\begin{prop}[{\\em \\cite{Delmotte}, \\S 3.1, Proposition 3.1}] \\label{prop:Delmotte}\n\tAssume there exists $d > 0$ such that the weighted graph $G = (V,E,w)$ satisfies $VG(d)$, $PI$ and $\\Delta$. Then there exists a constant $C_0 > 0$, independent of $v$, $v'$ and $t$, such that\n\t\\begin{equation} \\label{DelmotteUpper}\n\t\tp_t(v,v') \\leq C_0m(v')t^{-\\frac{d}{2}} \n\t\\end{equation} \t\n\tfor all $v,v' \\in V$ and $t > 0$.\n\\end{prop}\n\nDelmotte proves a much stronger version of Proposition $\\ref{prop:Delmotte}$ under more general volume growth conditions that applies to a diverse range of graphs, but for our purposes we work with Proposition $\\ref{prop:Delmotte}$ as it is stated here. Delmotte also goes further to prove that the assumptions of Proposition $\\ref{prop:Delmotte}$ are equivalent to a Parabolic Harnack Inequality, which we do not explicitly state here because it will not be necessary for our result. 
What is important to note though is that Theorem $2.32$ of $\\cite{GyryaSaloffCoste}$ dictates that any graph (or more generally metric space) satisfying the Parabolic Harnack Inequality further satisfies the estimate\n\\begin{equation} \\label{NeighbourDecay}\n\t|p_t(v_1,v_3) - p_t(v_2,v_3)| \\leq Cm(v_3) \\bigg(\\frac{\\rho(v_1,v_2)}{\\sqrt{t}}\\bigg)^\\eta p_{2t}(v_1,v_3) \n\\end{equation}\nfor all $v_1,v_2,v_3 \\in V$ and some independent $C, \\eta > 0$. Thus, whenever the conditions of Proposition $\\ref{prop:Delmotte}$ are satisfied, the estimate $(\\ref{NeighbourDecay})$ holds as well.\n\n\\begin{cor} \\label{cor:InfDecay}\n\tLet $G = (V,E,w)$ be a weighted graph satisfying the assumptions of Proposition $\\ref{prop:Delmotte}$. If there exists an $M > 0$ such that $m(v) \\leq M$ for all $v \\in V$, then there exists a constant $C > 0$ such that\n\t\\begin{equation} \\label{Contraction1}\n\t\t\\|P_t\\|_{1 \\to \\infty} \\leq Ct^{-\\frac{d}{2}}.\n\t\\end{equation}\n\\end{cor}\n\n\\begin{proof}\n\tSince the conditions of Proposition $\\ref{prop:Delmotte}$ are satisfied for some $d > 0$, there exists a $C_0 > 0$ such that $(\\ref{DelmotteUpper})$ holds. Then for any $x_0 \\in \\ell^1(V)$, applying H\\"older's Inequality to the general solution $(\\ref{HeatSoln})$, together with the bound $p_t(v,v') \\leq C_0m(v')t^{-\\frac{d}{2}} \\leq C_0Mt^{-\\frac{d}{2}}$, gives \n\t\\begin{equation} \\label{UltraContractive1}\n\t\t|[P_tx_0]_v| \\leq C_0Mt^{-\\frac{d}{2}} \\|x_0\\|_1.\t\t\n\t\\end{equation} \n\tTaking the supremum over all $v \\in V$ gives the desired result. \n\\end{proof}\n\nUltracontractive properties of one-parameter semigroups, such as that stated in Corollary $\\ref{cor:InfDecay}$, have been studied intensively, notably in the seminal work of Varopoulos, who obtained bounds analogous to $(\\ref{Contraction1})$ in a much more general setting $\\cite{Varopoulus}$. \n\nIt has been demonstrated (see, for example, page $219$ of $\\cite{KellerLenz}$) that if $0 \\leq x_{v,0} \\leq 1$ for all $v\\in V$, then $0 \\leq [P_tx_0]_v \\leq 1$ for all $v \\in V$. The lower bound follows immediately from the positivity of the probabilities $p_t(\\cdot,\\cdot)$, whereas the upper bound follows from a direct application of H\\"older's Inequality and the identity $(\\ref{ProbSum})$. Following the comments at the beginning of Section $1.2$ of $\\cite{BernicotCoulhonFrey}$, this implies that there exists a $C_{op} > 0$ for which \n\\begin{equation}\n\t\\|P_t\\|_{p \\to p} \\leq C_{op}\n\\end{equation} \nfor all $1 \\leq p \\leq \\infty$. These uniform bounds and the ultracontractivity property $(\\ref{UltraContractive1})$ can be extended further by the following lemma.\n\n\\begin{lem} \\label{lem:Ultracontractive}\n\tAssume there exist constants $C,C_{op}>0$ such that $\\|P_t\\|_{1\\to \\infty} \\leq Ct^{-\\frac{d}{2}}$ and $\\|P_t\\|_{1 \\to 1} \\leq C_{op}$. Then there exists a constant $C' > 0$, independent of $p$, such that for all $1 \\leq p \\leq \\infty$\n\t\\begin{equation} \\label{UltraContractive2}\n\t\t\\|P_t\\|_{1 \\to p} \\leq C' t^{-\\frac{d}{2}(1 - \\frac{1}{p})}.\n\t\\end{equation} \n\\end{lem}\n\n\\begin{proof}\n\tWe begin by recalling the log-convexity property of $\\ell^p$ norms. 
For any $1 \\leq p_0 \\leq p_1 \\leq \\infty$ and $0 < \\gamma < 1$ we define $p_\\gamma$ by the equation \n\t\\begin{equation}\n\t\t\\frac{1}{p_\\gamma} = \\frac{1 - \\gamma}{p_0} + \\frac{\\gamma}{p_1}.\n\t\\end{equation}\n\tThen for all $x \\in \\ell^{p_0}(V)$ we have\n\t\\begin{equation}\n\t\t\\|x\\|_{p_\\gamma} \\leq \\|x\\|_{p_0}^{1 - \\gamma}\\|x\\|_{p_1}^\\gamma.\n\t\\end{equation} \t\n\t\n\tTo apply this log-convexity property to our present situation we take $p_0 = 1$ and $p_1 = \\infty$. Then $p_\\gamma = \\frac{1}{1-\\gamma}$ and for any $x \\in \\ell^1(V)$ we have\n\t\\begin{equation}\n\t\t\\begin{split}\n\t\t\t\\|P_tx\\|_{p_\\gamma} &\\leq \\|P_tx\\|_1^{\\frac{1}{p_\\gamma}}\\|P_tx\\|_{\\infty}^{1 - \\frac{1}{p_\\gamma}} \\\\\n\t\t\t&\\leq \\|P_t\\|_{1 \\to 1}^\\frac{1}{p_\\gamma}\\|x\\|_1^{\\frac{1}{p_\\gamma}} \\|P_t\\|_{1\\to \\infty}^{1 - \\frac{1}{p_\\gamma}}\\|x\\|_1^{1 - \\frac{1}{p_\\gamma}} \\\\\n\t\t\t&\\leq C_{op}^\\frac{1}{p_\\gamma}C^{1 - \\frac{1}{p_\\gamma}}t^{-\\frac{d}{2}(1 - \\frac{1}{p_\\gamma})}\\|x\\|_1.\n\t\t\\end{split}\n\t\\end{equation} \n\tThus, taking $\\|x\\|_1 = 1$ shows $\\|P_t\\|_{1\\to p_\\gamma} \\leq C_{op}^\\frac{1}{p_\\gamma}C^{1 - \\frac{1}{p_\\gamma}}t^{-\\frac{d}{2}(1 - \\frac{1}{p_\\gamma})}$. By varying $\\gamma \\in (0,1)$ we obtain the result for $1 < p < \\infty$, and the endpoints $p = 1,\\infty$ are taken care of by assumption. Finally, the bound $C_{op}^\\frac{1}{p}C^{1 - \\frac{1}{p}}$, as a function of $p$, is uniformly bounded on $p\\in[1,\\infty]$, and therefore taking $C' > 0$ to be the supremum of this function gives\n\t\\begin{equation}\n\t\t\\|P_tx\\|_{p} \\leq C't^{-\\frac{d}{2}(1 - \\frac{1}{p})}\\|x\\|_1,\t\n\t\\end{equation} \n\tcompleting the proof.\n\\end{proof}\n\nWe will see in the coming sections that our investigations will greatly utilize the case $p = 2$ from Lemma $\\ref{lem:Ultracontractive}$, which gives\n\\begin{equation} \\label{UltraContractive3}\n\t\\|P_t\\|_{1 \\to 2} \\leq C_2 t^{-\\frac{d}{4}}.\t\n\\end{equation} \nFinally, to avoid the singularity at $t = 0$ in $(\\ref{UltraContractive1})$ and $(\\ref{UltraContractive3})$, we will use the alternative upper bounds:\n\\begin{subequations}\n\t\\begin{equation}\n\t\t\\|P_t\\|_{1 \\to \\infty} \\leq \\tilde{C}(1 + t)^{-\\frac{d}{2}},\t\n\t\\end{equation}\t\n\t\\begin{equation}\n\t\t\\|P_t\\|_{1 \\to 2} \\leq \\tilde{C}(1 + t)^{-\\frac{d}{4}},\n\t\\end{equation}\t\n\\end{subequations}\nwith a new constant $\\tilde{C} > 0$. Note that such an alternative upper bound is possible since for large $t$ these new upper bounds decay at the same rate as the bounds in $(\\ref{UltraContractive1})$ and $(\\ref{UltraContractive3})$, while for small $t \\geq 0$ the operator $P_t$ is uniformly bounded in operator norm. We will also use such an alternative upper bound of $(1+t)^{-\\frac{\\eta}{2}}$ in $(\\ref{NeighbourDecay})$ for the same reason.\n\nThis concludes our very brief exploration of the rich and diverse area of random walks on graphs. 
The results stated in this section will be applied to the phase system $(\\ref{PerturbationSystem})$ in the coming sections in order to understand the stability of phase-locked solutions.\n\n\n\n\n\n\n\n\n\n\\section{A General Stability Theorem} \\label{sec:Stability}\n\nHaving now laid a foundation in the theory of random walks on weighted graphs, we are now in a position to return to our discussion of the stability of phase-locked solutions to the phase system $(\\ref{PhaseSystem})$. For simplicity, in this section and the next we will often write the perturbation system $(\\ref{PerturbationSystem})$ abstractly as an ordinary differential equation in the variable $\\psi = \\{\\psi_v(t)\\}_{v\\in V}$ as \n\\begin{equation} \\label{CoupledNetwork2}\n\t\\dot{\\psi} = \\mathcal{F}(\\psi),\n\\end{equation} \nwhere $\\mathcal{F}: \\mathbb{R}^V \\to \\mathbb{R}^V$ represents the right-hand side of $(\\ref{PerturbationSystem})$. Throughout this section we will lay out the hypotheses on $(\\ref{PhaseSystem})$ necessary to eventually provide the main stability theorem for phase-locked solutions to this system. The first of these hypotheses is as follows:\n\n\\begin{hyp} \\label{Hyp:Neighbours}\n\tThere exists a finite $D \\geq 1$ such that $1 \\leq \\#N(v) \\leq D$ for every $v \\in V$. \n\\end{hyp} \n\nHypothesis $\\ref{Hyp:Neighbours}$ says that each element $\\psi_v$ is influenced by a finite number of other elements, and that the number of influences on any single element is uniformly bounded. Moreover, we recall from our definition of the model that if $\\psi_{v'}$ influences $\\psi_v$, then $\\psi_v$ influences $\\psi_{v'}$, and hence the influence topology is symmetric. \n\n\\begin{hyp} \\label{Hyp:Linearization}\n\tThe coupling function $H:S^1 \\to S^1$ satisfies\n\t\\begin{equation} \\label{PositiveWeights}\n\t\tH'(\\bar{\\theta}_{v'} - \\bar{\\theta}_v) = H'(\\bar{\\theta}_v - \\bar{\\theta}_{v'}) \\geq 0\n\t\\end{equation} \n\tfor all $v \\in V$ and $v' \\in N(v)$.\n\\end{hyp}\n\nWhat we obtain from Hypothesis $\\ref{Hyp:Linearization}$ is that our phase-locked solution can naturally be related to an infinite weighted graph, simply denoted $G = (V,E,w)$, with vertex set $V$ and edge set $E$ contained in the set of all possible influences. Indeed, the symmetric weight function is given by \n\\begin{equation}\n\tw(v,v') = \\left\\{\n \t\t\\begin{array}{cl}\n \t\t\tH'(\\bar{\\theta}_{v'} - \\bar{\\theta}_{v}) & : v' \\in N(v)\\\\\n \t\t\t0 & : v' \\notin N(v). \\\\\n \t\t\\end{array}\n \t\\right.\n\\end{equation} \nThe condition $(\\ref{PositiveWeights})$ guarantees that $w(v,v') = w(v',v) \\geq 0$ for all $v,v' \\in V$. Furthermore, Hypothesis $\\ref{Hyp:Neighbours}$ guarantees that the degree of each vertex is finite and uniformly bounded. \n \nNotice that a necessary condition for there to be an edge between vertices $v,v' \\in V$ is that $v' \\in N(v)$ (or equivalently $v \\in N(v')$). This condition is not sufficient since it could be the case that for some $v \\in V$ and $v' \\in N(v)$ we have $H'(\\bar{\\theta}_{v'} - \\bar{\\theta}_{v}) = 0$, and therefore there is no edge between $v$ and $v'$ by definition of a weight function on a graph. This implies that even if $v' \\in N(v)$, the distance between these vertices on the graph $G$ is not guaranteed to be $1$ since there may not be an edge connecting these vertices. This necessitates the following hypothesis.\n\n\\begin{hyp} \\label{Hyp:GraphDistance}\n\t$G$ is a connected graph. 
Furthermore, the associated graph metric, $\\rho: V \\times V \\to [0,\\infty)$, satisfies\n\t\\begin{equation} \\label{MetricBound}\n\t\t\\sup_{v \\in V, v'\\in N(v)}\\rho(v,v') < \\infty.\t\n\t\\end{equation}\t\n\\end{hyp}\n\nBefore we are able to apply the results on random walks on infinite graphs, we must point out that the linearization $L_{\\bar{\\theta}}$ given in $(\\ref{Linearization})$ is not in the form that was investigated through random walks. That is, we are missing the $1\/m(v)$ term from $(\\ref{NormalizedLap})$. If $m(v)$ is positive and independent of $v$ we may simply rescale $t \\to m(v) t$, which introduces the required $1\/m(v)$ factor. The operator $[1\/m(v)]L_{\\bar{\\theta}}$ is then of the proper form to apply the theory of random walks on graphs.\n\nWe will now describe how to overcome this problem when $m(v)$ is not independent of $v$. To begin, note that Hypotheses $\\ref{Hyp:Neighbours}$ and $\\ref{Hyp:Linearization}$ together give that there exists an $M > 0$ such that\n\\begin{equation}\n\tm(v) = \\sum_{v' \\in N(v)} H'(\\bar{\\theta}_{v'} - \\bar{\\theta}_v) \\leq M\n\\end{equation} \nfor all $v \\in V$. Letting $t \\to (M + 1)t$ scales $(\\ref{CoupledNetwork2})$ to the equivalent differential equation \n\\begin{equation}\n\t\\dot{\\psi} = \\frac{1}{M + 1}\\mathcal{F}(\\psi).\n\\end{equation}\nFurthermore, linearizing about the steady-state $\\psi = 0$ now results in the linearization \n\\begin{equation}\n\t\\tilde{L}_{\\bar{\\theta}} := \\frac{1}{M + 1}L_{\\bar{\\theta}}.\n\\end{equation}\nWe have now normalized the operator $L_{\\bar{\\theta}}$, and wish to consider a new graph, $\\tilde{G}$, so that the measure of each vertex is given by $\\tilde{m}(v) = M + 1$ for all $v \\in V$. In doing so we will have that $\\tilde{L}_{\\bar{\\theta}}$ is of the proper form to apply the results of the previous section. First, notice that \n\\begin{equation}\n\t\\sum_{v' \\in V} \\frac{w(v,v')}{M + 1} = \\sum_{v' \\in N(v)} \\frac{H'(\\bar{\\theta}_{v'} - \\bar{\\theta}_v)}{M+1} \\leq \\frac{M}{M + 1} < 1. \n\\end{equation}\nLet us extend the graph $G$ to $\\tilde{G}$ by adding a loop at every vertex (an edge which originates and terminates at the same vertex) and augment to a new weight function $\\tilde{w}: V \\times V \\to [0,\\infty)$ given by\n\\begin{equation} \\label{LoopConstruction}\n\t\\tilde{w}(v,v') = \\left\\{\n \t\t\\begin{array}{cl}\n \t\t\tH'(\\bar{\\theta}_{v'} - \\bar{\\theta}_{v}) & : v' \\in N(v),\\ v' \\neq v\\\\\n \t\t\t1 + M - \\sum_{v''\\in N(v)}H'(\\bar{\\theta}_{v''} - \\bar{\\theta}_{v}) & : v' = v \\\\ \n\t\t\t0 & : v' \\notin N(v). \\\\\n \t\t\\end{array}\n \t\\right.\t\n\\end{equation}\nThat is, the missing weight for the measure $\\tilde{m}$ to be identically $M + 1$ for each $v\\in V$ is made up for by the new loop connecting each vertex to itself. Notice that adding loops to a graph does not change the form of the graph Laplacian. Indeed, for each $v \\in V$ we have\n\\begin{equation}\n\t\\begin{split}\n\t\t\\sum_{v' \\in N(v)} w(v,v')(x_{v'} - x_v) &= \\sum_{v' \\in N(v)} \\underbrace{\\tilde{w}(v,v')}_\\text{$=w(v,v')$}(x_{v'} - x_v) \\\\\n\t\t&= \\sum_{v' \\in N(v)} \\tilde{w}(v,v')(x_{v'} - x_v) + \\underbrace{\\tilde{w}(v,v)(x_v - x_v)}_\\text{$= 0$} \\\\\n\t\t&= \\sum_{v' \\in N(v)\\cup\\{v\\}} \\tilde{w}(v,v')(x_{v'} - x_v).\t\n\t\\end{split}\n\\end{equation}\nThe underlying graph will be denoted $\\tilde{G} = (V,\\tilde{E},\\tilde{w})$. 
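\nAs a quick check of this construction (a one-line verification, recorded here for convenience), the augmented measure is indeed constant: for every $v \\in V$,\n\\begin{equation}\n\t\\tilde{m}(v) = \\sum_{v' \\in N(v)} \\tilde{w}(v,v') + \\tilde{w}(v,v) = m(v) + \\big(1 + M - m(v)\\big) = M + 1,\n\\end{equation}\nso that $\\tilde{L}_{\\bar{\\theta}}$ is exactly of the form $(\\ref{NormalizedLap})$ on $\\tilde{G}$.\n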
Notice that if $G$ is connected then $\\tilde{G}$ is also connected since we have not eliminated any edges from $G$ to form $\\tilde{G}$. Furthermore, we again have a uniform bound on the weight function $\\tilde{w}$ given by $M + 1$. One should also note that Hypothesis $\\ref{Hyp:Neighbours}$, combined with Lemma $\\ref{lem:NormLaplacian}$, gives that $\\tilde{L}_{\\bar{\\theta}}$ is again a bounded operator on the sequence spaces. \n\n\\begin{hyp} \\label{Hyp:DecayRates}\n\tAssume that one of the following is true: \n\t\\begin{itemize}\n\t\t\\item If $m(v)$ is independent of $v\\in V$, assume there exists a $d\\geq 2$ such that the graph $G = (V,E,w)$ satisfies $VG(d)$, $PI$ and $\\Delta$. \n\t\t\\item If $m(v)$ is not independent of $v \\in V$, assume there exists a $d \\geq 2$ such that the graph $\\tilde{G} = (V,\\tilde{E},\\tilde{w})$ (as constructed above) satisfies $VG(d)$, $PI$ and $\\Delta$. \t\n\t\\end{itemize}\t\n\\end{hyp}\n\nNotice that the assumptions of this hypothesis imply that the graph satisfies the assumptions of Proposition $\\ref{prop:Delmotte}$, and therefore we obtain the algebraic decay rates on the transition probabilities of a random walk on the vertices of the graph $\\cite{Delmotte}$. This hypothesis then in turn allows one to infer the results of Corollary $\\ref{cor:InfDecay}$ and Lemma $\\ref{lem:Ultracontractive}$. This leads to the following stability theorem whose proof is left to Section $\\ref{sec:StabilityProof}$.\n\n\\begin{thm} \\label{thm:Stability}\n\tConsider the system $(\\ref{PerturbationSystem})$ for a twice-differentiable function $H:S^1 \\to S^1$ satisfying Hypotheses $\\ref{Hyp:Neighbours}$ and $\\ref{Hyp:Linearization}$. Assume further that linearizing about the steady-state $\\psi = 0$ leads to a linear operator, $L_{\\bar{\\theta}}$, satisfying Hypotheses $\\ref{Hyp:GraphDistance}$ and $\\ref{Hyp:DecayRates}$. Then, there exists an $\\varepsilon > 0$ for which every $\\psi_0 = \\{\\psi_{v,0}\\}_{v \\in V}$ with the property that\n\t\\begin{equation}\n\t\t\\|\\psi_0\\|_1 \\leq \\varepsilon\t\n\t\\end{equation} \n\tdefines a unique solution $\\psi(t)$ of $(\\ref{PerturbationSystem})$ for all $t \\geq 0$ satisfying the following properties:\n\t\\begin{enumerate}\n\t\t\\item $\\psi(0) = \\psi_0$.\n\t\t\\item $\\psi(t) \\in \\ell^p(V)$ for all $1 \\leq p \\leq \\infty$.\n\t\t\\item There exists a $C > 0$ such that \n\t\t\\begin{subequations}\n\t\t\t\\begin{equation}\n\t\t\t\t\t\\|\\psi(t)\\|_1 \\leq C \\|\\psi_0\\|_1, \n\t\t\t\t\\end{equation}\n\t\t\t\\begin{equation}\n\t\t\t\t\t\\|\\psi(t)\\|_2 \\leq C (1 + t)^{-\\frac{d}{4}}\\|\\psi_0\\|_1, \n\t\t\t\t\\end{equation}\n\t\t\t\t\\begin{equation}\n\t\t\t\t\t\\|\\psi(t)\\|_\\infty \\leq C (1 + t)^{-\\frac{d}{2}}\\|\\psi_0\\|_1,\n\t\t\t\\end{equation}\n\t\t\\end{subequations}\n\t\t\tfor all $t \\geq 0$ and $d\\geq 2$ given by Hypothesis~\\ref{Hyp:DecayRates}. \n\t\\end{enumerate} \t\n\\end{thm}\n\nWe present the following extension of Theorem~\\ref{thm:Stability} for completeness. It should be noted that the proof is identical to that of Lemma $\\ref{lem:Ultracontractive}$, where one simply applies the log-convexity of the $\\ell^p$ norms, and is therefore omitted. 
\n\n\\begin{cor}\n\tUnder the assumptions of Theorem~\\ref{thm:Stability} there exists a $C > 0$ such that the solution $\\psi(t)$ further satisfies\n\t\\begin{equation}\n\t\t\\|\\psi(t)\\|_p \\leq C(1+t)^{-\\frac{d}{2}(1-\\frac{1}{p})}\\|\\psi_0\\|_1,\n\t\\end{equation} \n\tfor all $t\\geq 0$ and $p \\in [1,\\infty]$.\n\\end{cor}\n\n\\begin{rmk} \\label{rmk:Digraph}\n\tIt is important to note that the most restrictive assumption is the symmetry condition $(\\ref{PositiveWeights})$ of Hypothesis $\\ref{Hyp:Linearization}$. When $(\\ref{PositiveWeights})$ is broken for even a single index, the graph becomes a directed graph (or digraph) and therefore all of the theory from Section $\\ref{sec:Graphs}$ can no longer be applied. This situation would require the development of comparable techniques to obtain similar decay rates for random walks on digraphs, which do not appear to be available at this time. \n\\end{rmk}\n\n\\begin{rmk}\n\tAlthough in this work we have assumed that coupling between oscillators is identically given through the function $H$, Theorem~\\ref{thm:Stability} could be extended to non-identical coupling functions as well. We have refrained from doing this here for the ease of conveying the results and also due to the fact that it seems impractical to assume that a symmetry condition equivalent to $(\\ref{PositiveWeights})$ could hold. That is, for a system with non-identical coupling such as\n\t\\begin{equation}\n\t\t\\dot{\\theta}_v = \\omega_v + \\sum_{v' \\in N(v)} H(\\theta_{v'} - \\theta_v,v,v'),\n\t\\end{equation} \n\tcondition $(\\ref{PositiveWeights})$ would need to be replaced with a condition such as\n\t\\begin{equation} \\label{NonIdenticalSymmetry}\n\t\tH'(\\bar{\\theta}_{v'} - \\bar{\\theta}_v,v,v') = H'(\\bar{\\theta}_{v} - \\bar{\\theta}_{v'},v',v) \\geq 0 \n\t\\end{equation}\n\tfor all $v \\in V$ and $v' \\in N(v)$. It appears that stability results for non-identical coupling functions would be more realistic if the symmetry condition $(\\ref{NonIdenticalSymmetry})$ could merely be replaced with the simpler condition $H'(\\bar{\\theta}_{v'} - \\bar{\\theta}_v,v,v') \\geq 0$, thus resulting in a digraph. This is exactly what has already been done for finite systems of coupled oscillators in $\\cite{ErmentroutStability}$. \n\\end{rmk}\n\n\n\n\n\n\n\n\n\n\\section{Proof of Theorem~\\ref{thm:Stability}} \\label{sec:StabilityProof}\n\nThroughout this section we will assume that Hypotheses $\\ref{Hyp:Neighbours}$, $\\ref{Hyp:Linearization}$, $\\ref{Hyp:GraphDistance}$ and $\\ref{Hyp:DecayRates}$ hold. We will work through this proof under the assumption that the second case of Hypothesis $\\ref{Hyp:DecayRates}$ holds, although the proof using the first case is nearly identical. Following the discussion prior to stating Hypothesis $\\ref{Hyp:DecayRates}$, we will apply the appropriate re-parametrization of $t$. Let us further write $(\\ref{CoupledNetwork2})$ in the equivalent form\n\\begin{equation} \\label{CoupledNetwork3}\n\t\\dot{\\psi} = \\tilde{L}_{\\bar{\\theta}}\\psi + \\mathcal{G}(\\psi),\n\\end{equation}\nwhere $\\mathcal{G}(\\psi) = \\frac{1}{M+1}\\mathcal{F}(\\psi) - \\tilde{L}_{\\bar{\\theta}}\\psi$ and we suppress the dependence of $\\psi$ on $t$. Notice that $\\mathcal{G}(0) = 0$ and its derivative satisfies $\\mathcal{G}'(0) = 0$. 
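\nBefore proceeding, it is convenient to record the componentwise form of the nonlinearity, which can be read off from the computation in the proof of Lemma $\\ref{lem:Boundedness1}$ below: for each $v \\in V$,\n\\begin{equation}\n\t[\\mathcal{G}(\\psi)]_v = \\frac{1}{M+1}\\sum_{v' \\in N(v)} \\Big[H(\\bar{\\theta}_{v'} + \\psi_{v'} - \\bar{\\theta}_v - \\psi_v) - H(\\bar{\\theta}_{v'} - \\bar{\\theta}_v) - H'(\\bar{\\theta}_{v'} - \\bar{\\theta}_v)(\\psi_{v'} - \\psi_v)\\Big].\n\\end{equation}\nThat is, $\\mathcal{G}$ collects exactly the Taylor remainder of the coupling about the phase-locked state, which is quadratically small in the phase differences $\\psi_{v'} - \\psi_v$.\n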
Denoting by $P_t = e^{\\tilde{L}_{\\bar{\\theta}}t}$, $t \\geq 0$, the semigroup generated by the linearization $\\tilde{L}_{\\bar{\\theta}}$, we arrive at the equivalent formulation of $(\\ref{CoupledNetwork3})$ given by\n\\begin{equation} \\label{IntegralForm}\n\t\\psi(t) = P_t \\psi(0) + \\int_0^t P_{t-s}\\mathcal{G}(\\psi(s))ds\t\n\\end{equation} \nfor any $t \\geq 0$. Then any function $\\psi(t)$ which satisfies $(\\ref{IntegralForm})$ satisfies the differential equation $(\\ref{PerturbationSystem})$ for $t \\geq 0$.\n\nLet $Q: \\mathbb{R}^V \\to \\mathbb{R}$ be the operator acting upon the elements $x = \\{x_v\\}_{v\\in V}$ by \n\\begin{equation} \\label{Quadratic}\n\tQ(x) = \\sum_{v \\in V} \\sum_{v' \\in N(v)} |x_{v'} - x_v|^2.\n\\end{equation}\nClearly $Q(x) \\geq 0$ for all $x$, and furthermore, using the inequality $|a - b|^2 \\leq 2(|a|^2 + |b|^2)$ (a consequence of the Parallelogram Law) together with the fact that each vertex appears at most $D$ times in each half of the double sum, one can see that for any $x \\in \\ell^2(V)$ we have\n\\begin{equation} \\label{QuadBound}\n\t0 \\leq Q(x) \\leq 4D\\|x\\|_2^2, \n\\end{equation} \nwhere we recall that from Hypothesis $\\ref{Hyp:Neighbours}$ we have that $\\#N(v) \\leq D$ for all $v \\in V$. Furthermore, $\\sqrt{Q(\\cdot)}$ defines a seminorm, and therefore satisfies the triangle inequality (for example, see Lemma $4.3$ of $\\cite{Jorgensen}$). This leads to the first result. \n\n\\begin{lem} \\label{lem:Boundedness1}\n\tFor any $\\psi \\in \\ell^2(V)$, there exists a $K > 0$, depending only on $\\|\\psi\\|_2$, such that\n\t\\begin{equation}\n\t\t\\|\\mathcal{G}(\\psi)\\|_1 \\leq KQ(\\psi).\n\t\\end{equation}\n\\end{lem}\n\n\\begin{proof}\n\tLet us write $\\delta := \\|\\psi\\|_2$. We then have that $\\|\\psi\\|_\\infty \\leq \\delta$, since $\\|\\cdot\\|_\\infty \\leq \\|\\cdot\\|_2$, and hence $|\\psi_{v'} - \\psi_v| \\leq 2\\delta$ for all $v,v' \\in V$. Since $H \\in C^2(\\mathbb{R})$ we can define\n\t\\begin{equation}\n\t\tK_1(\\delta) := \\sup_{|x| \\leq 2\\delta} |H''(x)| < \\infty.\n\t\\end{equation} \t\n\tBy Taylor's Theorem, for all $v \\in V$ and $v' \\in N(v)$ we have\n\t\\begin{equation} \n\t\t|H(\\bar{\\theta}_{v'} + \\psi_{v'} - \\bar{\\theta}_v -\\psi_v) - H(\\bar{\\theta}_{v'} - \\bar{\\theta}_v) - H'(\\bar{\\theta}_{v'} - \\bar{\\theta}_v)(\\psi_{v'} - \\psi_v)| \\leq \\frac{K_1(\\delta)}{2}|\\psi_{v'} - \\psi_v|^2.\n\t\\end{equation} \n\tThen recalling $\\mathcal{G}(0) = 0$ and $\\mathcal{G}'(0) = 0$, and using the previous inequality we get\n\t\\begin{equation}\n\t\t\\begin{split}\n\t\t\t\\|\\mathcal{G}(\\psi)\\|_1 &= \\|\\mathcal{G}(\\psi) - \\mathcal{G}(0) - \\mathcal{G}'(0)\\psi\\|_1 \\\\\n\t\t\t&= \\frac{1}{M+1}\\sum_v \\bigg|\\sum_{v' \\in N(v)} H(\\bar{\\theta}_{v'} + \\psi_{v'} - \\bar{\\theta}_v -\\psi_v) - H(\\bar{\\theta}_{v'} - \\bar{\\theta}_v) \\\\\n\t\t\t &\\ \\ \\ \\ \\ - H'(\\bar{\\theta}_{v'} - \\bar{\\theta}_v)(\\psi_{v'} - \\psi_v)\\bigg| \\\\\n\t\t\t&\\leq \\frac{K_1(\\delta)}{2(M+1)} \\sum_v \\sum_{v' \\in N(v)} |\\psi_{v'} - \\psi_v|^2 \\\\\n\t\t\t&= \\frac{K_1(\\delta)}{2(M+1)} Q(\\psi), \n\t\t\\end{split}\n\t\\end{equation}\n\tcompleting the proof of the lemma. 
\n\\end{proof}\n\nNow from the decay rates in Section $\\ref{subsec:RandomWalks}$, for all $t \\geq 0$ we have the following decay estimates for the semigroup $P_t$:\n\\begin{subequations} \\label{DecayEstimates}\n\t\\begin{equation} \\label{PtContraction}\n\t\t\\|P_t\\|_{p \\to p} \\leq C_{op},\\ \\ {\\rm for\\ all}\\ \\ 1 \\leq p \\leq \\infty, \n\t\\end{equation}\n\t\\begin{equation} \\label{2Decay}\n\t\t\\|P_t\\|_{1 \\to 2} \\leq C_1(1 + t)^{-\\frac{d}{4}},\n\t\\end{equation}\n\t\\begin{equation} \\label{InftyDecay}\n\t\t\\|P_t\\|_{1 \\to \\infty} \\leq C_1(1 + t)^{-\\frac{d}{2}},\n\t\\end{equation}\n\\end{subequations}\nfor some $C_{op}, C_1 > 0$. There is also one more important estimate which must be established in the following lemma. \n\n\\begin{lem} \\label{lem:QForm}\n\tLet $\\eta > 0$ be the exponent associated to $P_t$ through the estimate $(\\ref{NeighbourDecay})$. For all $\\psi \\in \\ell^2(V)$, there exists a constant $C_Q > 0$ independent of $\\psi$ such that \n\t\\begin{equation} \n\t\t\\sqrt{Q(P_t\\psi)} \\leq C_Q (1 + t)^{-\\frac{\\eta}{2}} \\|P_{t}|\\psi| \\|_2,\n\t\\end{equation} \n\twhere $|\\psi| = \\{|\\psi_v|\\}_{v\\in V}$.\n\\end{lem}\n\n\\begin{proof}\n\tTo begin, since Hypothesis $\\ref{Hyp:DecayRates}$ guarantees that the measure of each vertex is uniformly bounded, we combine this statement with $(\\ref{NeighbourDecay})$ and the uniform boundedness of the metric given in Hypothesis $\\ref{Hyp:GraphDistance}$ to find that there exists a $C > 0$ (independent of $v,v'$ and $t$) such that \n\t\\begin{equation}\n\t\t|p_t(v,v'') - p_t(v',v'')| \\leq C (1 + t)^{-\\frac{\\eta}{2}} p_{2t}(v,v''),\n\t\\end{equation} \n\tfor all $v,v'' \\in V$, $v' \\in N(v)$. Then for any $\\psi \\in \\ell^2(V)$ we have \n\t\\begin{equation}\n\t\t\\begin{split}\n\t\t\t|[P_t\\psi]_v - [P_t\\psi]_{v'}| &\\leq \\sum_{v'' \\in V} |p_t(v,v'') - p_t(v',v'')| |\\psi_{v''}| \\\\\n\t\t\t&\\leq C(1 + t)^{-\\frac{\\eta}{2}} \\sum_{v'' \\in V} p_{2t}(v,v'') |\\psi_{v''}| \\\\\n\t\t\t&= C(1 + t)^{-\\frac{\\eta}{2}}[P_{2t}|\\psi|]_v. \n\t\t\\end{split}\n\t\\end{equation}\n\tThis in turn gives\n\t\\begin{equation}\n\t\t\\begin{split}\n\t\t\t\\sqrt{Q(P_t\\psi)} &= \\sqrt{\\sum_{v \\in V} \\sum_{v' \\in N(v)} |[P_t \\psi]_{v'} - [P_t \\psi]_v|^2} \\\\\n\t\t\t&\\leq C(1 + t)^{-\\frac{\\eta}{2}}\\sqrt{\\sum_{v \\in V} \\sum_{v' \\in N(v)} |[P_{2t}|\\psi|]_v|^2} \\\\\n\t\t\t&\\leq C\\sqrt{D}(1 + t)^{-\\frac{\\eta}{2}}\\sqrt{\\sum_{v \\in V} |[P_{2t}|\\psi|]_v|^2} \\\\\n\t\t\t&= C\\sqrt{D}(1 + t)^{-\\frac{\\eta}{2}}\\|P_{2t}|\\psi|\\|_2. \n\t\t\\end{split}\n\t\\end{equation}\n\t Finally, using the fact that $P_{2t} = P_tP_t$ and the decay estimate $\\|P_t\\|_{2\\to 2} \\leq C_{op}$ from $(\\ref{PtContraction})$ we arrive at the final result\n\t \\begin{equation}\n\t \t\\sqrt{Q(P_t\\psi)} \\leq CC_{op}\\sqrt{D}(1 + t)^{-\\frac{\\eta}{2}}\\|P_{t}|\\psi|\\|_2,\t\n\t \\end{equation}\n\tso that the claim holds with $C_Q = CC_{op}\\sqrt{D}$.\n\\end{proof}\n\nLet us now consider an initial condition $\\psi_0 \\in \\ell^1(V)$. 
We want to prove that if $\\|\\psi_0\\|_1$ is chosen small enough, there exists a solution $\\psi(t)$ to $(\\ref{PerturbationSystem})$ with $\\psi(0) = \\psi_0$ belonging to the space\n\\begin{equation} \\label{USpace}\n\t\\begin{split}\n\tX = \\bigg\\{\\psi(t)\\bigg|\\ &\\psi(0) = \\psi_0,\\ \\|\\psi(t)\\|_2 \\leq 2C_1(1 + t)^{-\\frac{d}{4}}\\|\\psi_0\\|_1\\ \\ {\\rm and}\\ \\\\\n\t\t&\\sqrt{Q(\\psi(t))} \\leq 2C_1C_Q(1 + t)^{-\\frac{d}{4}-\\frac{\\eta}{2}}\\|\\psi_0\\|_1,\\ \\forall \\ t\\geq 0 \\bigg\\}, \n\t\\end{split}\t\n\\end{equation} \nwhere $C_1 > 0$ is the constant taken from the decay estimates $(\\ref{DecayEstimates})$ and $C_Q > 0$ is the constant from Lemma $\\ref{lem:QForm}$. Prior to showing the existence of a solution to $(\\ref{PerturbationSystem})$, we show that functions belonging to $X$ indeed satisfy the additional statements of Theorem~\\ref{thm:Stability}. We require the following lemma, which has been repurposed from $\\cite{IntegralLemma}$.\n\n\\begin{lem}[{\\em \\cite{IntegralLemma}, \\S 3, Lemma 3.2}]{\\bf (Restated)} \\label{lem:IntegralLemma}\n\tLet $\\gamma_1, \\gamma_2$ be positive real numbers. If $\\gamma_1,\\gamma_2 \\neq 1$ or if $\\gamma_1 = 1 < \\gamma_2$ then there exists a $C_{\\gamma_1,\\gamma_2} > 0$ such that\n\t\\begin{equation}\n\t\t\\int_0^t (1 + t - s)^{- \\gamma_1}(1 + s)^{-\\gamma_2}ds \\leq C_{\\gamma_1,\\gamma_2} (1 + t)^{-\\min\\{\\gamma_1 + \\gamma_2 - 1, \\gamma_1, \\gamma_2\\}},\n\t\\end{equation} \t \n\tfor all $t \\geq 0$.\n\\end{lem}\n\nWe now provide the results necessary to prove Theorem~\\ref{thm:Stability}. We first define the mapping \n\\begin{equation} \\label{TMapping}\n\tT\\psi(t) = P_t\\psi_0 + \\int_0^t P_{t-s}\\mathcal{G}(\\psi(s))ds,\n\\end{equation}\nwhere $t \\geq 0$. Notice that a fixed point of this mapping belonging to $X$ for all $t \\geq 0$ will satisfy the differential equation $(\\ref{PerturbationSystem})$. We present the following proposition.\n\n\\begin{prop} \\label{prop:Decays}\n\tLet $t_0 > 0$ and assume that $\\psi(t)$ satisfies \n\t\\begin{equation}\n\t\t\\|\\psi(t)\\|_2 \\leq 2C_1(1 + t)^{-\\frac{d}{4}}\\|\\psi_0\\|_1\n\t\\end{equation}\n\tand\n\t\\begin{equation}\n\t\t \\sqrt{Q(\\psi(t))} \\leq 2C_1C_Q(1 + t)^{-\\frac{d}{4} - \\frac{\\eta}{2}}\\|\\psi_0\\|_1\n\t\\end{equation}\n\tfor all $0 \\leq t \\leq t_0$. Then there exists an $\\varepsilon_0 > 0$, independent of $t_0$, such that if $\\|\\psi_0\\|_1 \\leq \\varepsilon_0$ we have \n\t\\begin{subequations} \\label{MappingDecay1}\n\t\t\\begin{align}\n\t\t\t\\|T\\psi(t)\\|_2 &\\leq 2C_1(1 + t)^{-\\frac{d}{4}}\\|\\psi_0\\|_1, \\\\\n\t\t\t\\sqrt{Q(T\\psi(t))} &\\leq 2C_1C_Q(1 + t)^{-\\frac{d}{4} - \\frac{\\eta}{2}}\\|\\psi_0\\|_1\n\t\t\\end{align}\n\t\\end{subequations}\n\tfor all $0 \\leq t \\leq t_0$. Furthermore, there exists a $C_\\infty > 0$, independent of $t_0$, such that\n\t\t\\begin{subequations} \\label{MappingDecay2}\n\t\t\t\\begin{align} \n\t\t\t\t\\|T\\psi(t)\\|_1 &\\leq C_\\infty \\|\\psi_0\\|_1, \\label{ell1NormBound} \\\\\n\t\t\t\t\\|T\\psi(t)\\|_\\infty &\\leq C_\\infty (1 + t)^{-\\frac{d}{2}}\\|\\psi_0\\|_1,\n\t\t\t\\end{align}\n\t\t\\end{subequations} \n\tfor all $0 \\leq t \\leq t_0$.\n\\end{prop}\n\n\\begin{proof}\n\tBegin by considering $\\|\\psi_0\\|_1 \\leq \\frac{1}{2C_1}$ so that $\\|\\psi(t)\\|_2 \\leq 1$ for all $0 \\leq t \\leq t_0$. This in turn guarantees the existence of a uniform $K > 0$ such that the results of Lemma $\\ref{lem:Boundedness1}$ hold. 
Then using the decay estimate $(\\ref{2Decay})$ we now have\n\t\\begin{equation}\n\t\t\\begin{split}\n\t\t\t\\|T\\psi(t)\\|_2 &\\leq \\|P_t\\psi_0\\|_2 + \\int_0^t \\|P_{t-s}\\mathcal{G}(\\psi(s))\\|_2 ds \\\\\n\t\t\t&\\leq C_1(1+t)^{-\\frac{d}{4}}\\|\\psi_0\\|_1 + C_1\\int_0^t (1+t-s)^{-\\frac{d}{4}}\\|\\mathcal{G}(\\psi(s))\\|_1 ds \\\\\n\t\t\t&\\leq C_1(1+t)^{-\\frac{d}{4}}\\|\\psi_0\\|_1 + C_1K\\int_0^t (1+t-s)^{-\\frac{d}{4}}Q(\\psi(s)) ds \\\\\n\t\t\t&\\leq C_1(1+t)^{-\\frac{d}{4}}\\|\\psi_0\\|_1 + 4C_1^3C_Q^2K \\int_0^t(1+t-s)^{-\\frac{d}{4}}(1 + s)^{-\\frac{d}{2} - \\eta}\\|\\psi_0\\|_1^2 ds \n\t\t\\end{split}\n\t\\end{equation} \n\tfor all $0 \\leq t \\leq t_0$. From Lemma $\\ref{lem:IntegralLemma}$, there exists $C_{\\frac{d}{4},\\frac{d}{2} + \\eta} > 0$ such that\n\t\\begin{equation}\n\t\t\\int_0^t (1 + t - s)^{-\\frac{d}{4}}(1 + s)^{-\\frac{d}{2} - \\eta}ds \\leq C_{\\frac{d}{4},\\frac{d}{2} + \\eta}(1 + t)^{-\\frac{d}{4}}.\t\n\t\\end{equation}\n\tThus,\n\t\\begin{equation}\n\t\t\\|T\\psi(t)\\|_2 \\leq [C_1 + 4C_1^3C_Q^2C_{\\frac{d}{4},\\frac{d}{2} + \\eta}K\\|\\psi_0\\|_1](1 + t)^{-\\frac{d}{4}}\\|\\psi_0\\|_1.\t\n\t\\end{equation}\n\t\n\tSimilarly, we use the bounds given in Lemma $\\ref{lem:QForm}$ to obtain\n\t\\begin{equation}\n \t\t\\begin{split}\n \t \t\t\\sqrt{Q(T\\psi(t))} &\\leq \\sqrt{Q(P_t\\psi_0)} + \\int_0^t \\sqrt{Q(P_{t-s}\\mathcal{G}(\\psi(s)))}ds \\\\\n \t\t\t&\\leq C_Q(1+t)^{-\\frac{\\eta}{2}}\\|P_{t}|\\psi_0|\\|_2 + C_Q \\int_0^t (1 + t - s)^{-\\frac{\\eta}{2}}\\|P_{t-s}|\\mathcal{G}(\\psi(s))|\\|_2 ds\\\\\n \t\t\t&\\leq C_1C_Q(1 + t)^{-\\frac{d}{4} - \\frac{\\eta}{2}}\\|\\psi_0\\|_1 \\\\\n \t\t\t&\\qquad + 4C_1^3C_Q^3K\\int_0^t(1+t-s)^{-\\frac{d}{4} - \\frac{\\eta}{2}}(1 + s)^{-\\frac{d}{2} - \\eta}\\|\\psi_0\\|_1^2 ds. \n \t\t\\end{split}\n\t\\end{equation} \n\tFrom Lemma $\\ref{lem:IntegralLemma}$, there exists a $C_{\\frac{d}{4} + \\frac{\\eta}{2}, \\frac{d}{2} + \\eta} > 0$ such that\n\t\\begin{equation}\n\t\t\\int_0^t(1+t-s)^{-\\frac{d}{4} - \\frac{\\eta}{2}}(1 + s)^{-\\frac{d}{2} - \\eta} ds \\leq C_{\\frac{d}{4} + \\frac{\\eta}{2}, \\frac{d}{2} + \\eta}(1 + t)^{-\\frac{d}{4} - \\frac{\\eta}{2}},\t\n\t\\end{equation}\n\twhich implies that \n\t\\begin{equation}\n\t\t\\sqrt{Q(T\\psi(t))} \\leq [C_1C_Q + 4C_1^3C_Q^3C_{\\frac{d}{4} + \\frac{\\eta}{2}, \\frac{d}{2} + \\eta}K\\|\\psi_0\\|_1](1 + t)^{-\\frac{d}{4} - \\frac{\\eta}{2}}\\|\\psi_0\\|_1.\t\n\t\\end{equation}\n\tTherefore, taking\n\t\t\\begin{equation}\n\t\t\t\\varepsilon_0 := \\min\\bigg\\{\\frac{1}{2C_1},\\frac{1}{4C_1^2C_Q^2C_{\\frac{d}{4},\\frac{d}{2} + \\eta}K},\\frac{1}{4C_1^2C_Q^2C_{\\frac{d}{4} + \\frac{\\eta}{2},\\frac{d}{2} + \\eta}K}\\bigg\\}\n\t\t\\end{equation}\t\n\tgives the decay estimates (\\ref{MappingDecay1}).\n\t\n\tWe now turn to proving the remaining decay estimates (\\ref{MappingDecay2}). First, note that $(\\ref{PtContraction})$ details that $\\|P_t\\|_{1 \\to 1} \\leq C_{op}$ for all $t \\geq 0$. Then, for all $0 \\leq t \\leq t_0$ we have\n\t\\begin{equation}\n\t\t\\begin{split}\n\t\t\\|T\\psi(t)\\|_1 &\\leq \\|P_t\\psi_0\\|_1 + \\int_0^t \\|P_{t-s}\\mathcal{G}(\\psi(s))\\|_1ds \\\\\n\t\t&\\leq C_{op}\\|\\psi_0\\|_1 + C_{op}\\int_0^t \\|\\mathcal{G}(\\psi(s))\\|_1 ds \\\\\n\t\t&\\leq C_{op}\\|\\psi_0\\|_1 + C_{op}K\\int_0^t Q(\\psi(s))ds \\\\\n\t\t&\\leq C_{op}\\|\\psi_0\\|_1 + 4C_{op}C_1^2C^2_QK\\int_0^t (1 + s)^{-\\frac{d}{2} - \\eta}\\|\\psi_0\\|_1^2 ds \\\\\n\t\t&= C_{op}\\|\\psi_0\\|_1 + \\frac{4C_{op}C_1^2C^2_QK}{\\frac{d}{2} + \\eta - 1}[1 - (1 + t)^{1 - \\frac{d}{2} - \\eta}]\\|\\psi_0\\|_1^2. 
\n\t\t\\end{split}\t\n\t\\end{equation} \n\tSince $d \\geq 2$ and $\\eta > 0$ we have that $[1 - (1 + t)^{1 - \\frac{d}{2} - \\eta}] \\leq 1$ for all $t \\geq 0$. Thus, $\\|T\\psi(t)\\|_1$ is uniformly bounded by a constant multiple of $\\|\\psi_0\\|_1$, depending only on $\\varepsilon_0$, for all $0 \\leq t \\leq t_0$, proving the first bound of (\\ref{MappingDecay2}). \n\t\n\tThe $\\ell^\\infty(V)$ decay follows in a similar manner to above, although we now use the decay estimate $(\\ref{InftyDecay})$. In this case we now have\n\t\\begin{equation}\n\t\t\\begin{split}\n\t\t\t\\|T\\psi(t)\\|_\\infty &\\leq \\|P_t\\psi_0\\|_\\infty + \\int_0^t \\|P_{t-s}\\mathcal{G}(\\psi(s))\\|_\\infty ds \\\\ \n\t\t\t&\\leq C_1(1 + t)^{-\\frac{d}{2}}\\|\\psi_0\\|_1 + C_1\\int_0^t (1 + t - s)^{-\\frac{d}{2}} \\|\\mathcal{G}(\\psi(s))\\|_1 ds \\\\\n\t\t\t&\\leq C_1(1 + t)^{-\\frac{d}{2}}\\|\\psi_0\\|_1 + 4C_1^3C^2_QK\\int_0^t (1 + t - s)^{-\\frac{d}{2}}(1 + s)^{-\\frac{d}{2} - \\eta}\\|\\psi_0\\|_1^2 ds. \n\t\t\\end{split}\t\n\t\\end{equation} \n\tApplying Lemma $\\ref{lem:IntegralLemma}$ with $\\gamma_1 = \\frac{d}{2} \\geq 1$ and $\\gamma_2 = \\frac{d}{2} + \\eta > 1$ gives that there exists $C_{\\frac{d}{2},\\frac{d}{2} + \\eta} > 0$ such that\n\t\\begin{equation}\n\t\t\\int_0^t (1 + t - s)^{-\\frac{d}{2}}(1 + s)^{-\\frac{d}{2} - \\eta} ds \\leq C_{\\frac{d}{2},\\frac{d}{2} + \\eta}(1 + t)^{-\\frac{d}{2}}.\t\n\t\\end{equation} \n\tTherefore, we have the decay estimate\n\t\\begin{equation}\n\t\t\\|T\\psi(t)\\|_\\infty \\leq [C_1 + 4C_1^3C_{\\frac{d}{2},\\frac{d}{2} + \\eta}C^2_QK\\|\\psi_0\\|_1] (1 + t)^{-\\frac{d}{2}}\\|\\psi_0\\|_1,\t\n\t\\end{equation}\n\twhich holds for all $0 \\leq t \\leq t_0$, and is independent of $t_0$. This concludes the proof of the proposition. \n\\end{proof}\n\nOne can see that Proposition~\\ref{prop:Decays} gives that\n\\begin{equation}\n\tT:X \\to X\n\\end{equation}\nis well-defined when $\\|\\psi_0\\|_1 \\leq \\varepsilon_0$. It should be noted that the further estimates given in (\\ref{MappingDecay2}) show that upon obtaining a fixed point of $T$ in $X$, we necessarily satisfy all the estimates stated in Theorem~\\ref{thm:Stability}. Moreover, recall that $\\ell^1(V) \\subsetneq \\ell^p(V)$ for all $1 < p \\leq \\infty$, and therefore Proposition~\\ref{prop:Decays} dictates that a fixed point of $T$ in $X$ immediately belongs to $\\ell^p(V)$ for all $p\\in[1,\\infty]$. We now show the existence of a fixed point of $T$, which in turn proves Theorem~\\ref{thm:Stability}. \n\n\\begin{proof}[Proof of Theorem~\\ref{thm:Stability}]\n\tFirst, since $\\mathcal{G}'(0) = 0$, we may consider a $\\delta > 0$ such that \n\t\\begin{equation} \\label{LipschitzConstant}\n\t\t\\sup_{\\|x\\|_2 \\leq \\delta} \\|\\mathcal{G}'(x)\\|_{2\\to 2} \\leq \\frac{1}{2C_{op}}.\n\t\\end{equation}\n\tNotice that such a uniform bound is guaranteed by Hypotheses $\\ref{Hyp:Neighbours}$, $\\ref{Hyp:Linearization}$ and the smoothness of the function $H$. Then when $\\|\\psi_0\\|_1 \\leq \\frac{\\delta}{2C_1}$, we have that $2C_1(1 + t)^{-\\frac{d}{4}}\\|\\psi_0\\|_1 \\leq \\delta$. Therefore, let \n\t\\begin{equation} \\label{Epsilon}\n\t\t\\varepsilon := \\min\\bigg\\{\\varepsilon_0,\\frac{\\delta}{2C_1}\\bigg\\} > 0,\n\t\\end{equation}\n\twhere we use $\\varepsilon_0$ to denote the constant required by Proposition~\\ref{prop:Decays}. 
We therefore take $\\|\\psi_0\\|_1 \\leq \\varepsilon$.\n\t\n\tNow consider the space \n\t\\begin{equation} \\label{U1Space}\n\t\t\\begin{split}\n\t\t\tX_1 = \\bigg\\{\\psi(t),\\ t\\in [0,1]\\bigg|\\ &\\psi(0) = \\psi_0,\\ \\|\\psi(t)\\|_2 \\leq 2C_1(1 + t)^{-\\frac{d}{4}}\\|\\psi_0\\|_1\\ \\ {\\rm and}\\ \\\\\n\t\t\t&\\sqrt{Q(\\psi(t))} \\leq 2C_1C_Q(1 + t)^{-\\frac{d}{4}-\\frac{\\eta}{2}}\\|\\psi_0\\|_1,\\ 0 \\leq t \\leq 1 \\bigg\\}. \n\t\t\\end{split}\t\n\t\\end{equation}\n\tThen by our choice of $\\varepsilon$ in $(\\ref{Epsilon})$ we have that\n\t\\begin{equation}\n\t\tT: X_1 \\to X_1\n\t\\end{equation} \n\tis well-defined for $0 \\leq t \\leq 1$. Let us consider the metric on the space $X_1$ given by\n\t\\begin{equation} \\label{U1Metric}\n\t\t\\rho_{X_1}(\\psi_1(t),\\psi_2(t)) := \\sup_{t \\in [0,1]} \\|\\psi_1(t) - \\psi_2(t)\\|_2,\n\t\\end{equation}\n\tfor any $\\psi_1(t),\\psi_2(t) \\in X_1$. Using the fact that $\\|P_t\\|_{2 \\to 2} \\leq C_{op}$ for all $t \\geq 0$, for any $\\psi_1(t),\\psi_2(t)\\in X_1$ this metric gives \n\t\\begin{equation}\n\t\t\\begin{split}\n\t\t\t\\rho_{X_1}(T\\psi_1(t),T\\psi_2(t)) &\\leq \\sup_{t \\in [0,1]} \\int_0^t \\|P_{t-s}[\\mathcal{G}(\\psi_1(s)) - \\mathcal{G}(\\psi_2(s))]\\|_2ds \\\\\n\t\t\t&\\leq\t \\sup_{t \\in [0,1]} C_{op}\\int_0^t \\|\\mathcal{G}(\\psi_1(s)) - \\mathcal{G}(\\psi_2(s))\\|_2ds \\\\\n\t\t\t&\\leq \\sup_{t \\in [0,1]} \\frac{1}{2}\\int_0^t \\|\\psi_1(s) - \\psi_2(s)\\|_2ds \\\\\n\t\t\t&\\leq \\sup_{t \\in [0,1]} \\frac{t}{2} \\rho_{X_1}(\\psi_1(t),\\psi_2(t)) \\\\ \n\t\t\t&\\leq \\frac{1}{2} \\rho_{X_1}(\\psi_1(t),\\psi_2(t)),\n\t\t\\end{split}\n\t\\end{equation}\n\twhere we have used the Lipschitz property \n\t\\[\n\t\t\\|\\mathcal{G}(\\psi_1(t)) - \\mathcal{G}(\\psi_2(t))\\|_2 \\leq \\frac{1}{2C_{op}}\\|\\psi_1(t) - \\psi_2(t)\\|_2\n\t\\]\t\n\twhich follows from $(\\ref{LipschitzConstant})$ and our choice of $\\varepsilon > 0$. Thus, $T: X_1 \\to X_1$ is a contraction. We provide the following claim, which will be proved after we have completed this proof.\n\t\n\t\\begin{claim} \\label{claim:U1Complete}\n\t\t$X_1$ is complete with respect to the metric $(\\ref{U1Metric})$.\n\t\\end{claim}\n\t\n\tTherefore by the contraction mapping principle there exists a unique fixed point, $\\psi_1^*(t) \\in X_1$. That is, we have identified a unique solution to the differential equation $(\\ref{PerturbationSystem})$ satisfying the decay rates of the space $X$ for $t \\in [0,1]$. \n\t\n\tWe now proceed by induction. Let us assume that for some positive integer $n \\geq 1$ there exists a unique solution to the differential equation $(\\ref{PerturbationSystem})$ satisfying the decay rates of the space $X$ for $t \\in [0,n]$. That is, there exists a unique fixed point $\\psi_n^*(t)$ to $T$ on the interval $[0,n]$ with the properties that $\\psi_n^*(0) = \\psi_0$, $\\|\\psi_n^*(t)\\|_2 \\leq 2C_1(1 + t)^{-\\frac{d}{4}}\\|\\psi_0\\|_1$ and $\\sqrt{Q(\\psi_n^*(t))} \\leq 2C_1C_Q(1 + t)^{-\\frac{d}{4}-\\frac{\\eta}{2}}\\|\\psi_0\\|_1$ for all $t \\in [0,n]$. We wish to use this function to extend to a solution on $[0,n+1]$. \n\t \n\t Begin by defining the space\n\t \\begin{equation}\n\t \t\\begin{split}\n\t\t\tX_{n+1} = \\bigg\\{&\\psi(t),\\ t\\in [0,n+1]\\bigg|\\ \\psi(t) = \\psi_n^*(t)\\ {\\rm for}\\ t \\in [0,n],\\ \\|\\psi(t)\\|_2 \\leq 2C_1(1 + t)^{-\\frac{d}{4}}\\|\\psi_0\\|_1\\\\\n\t\t\t& \\ {\\rm and}\\ \\sqrt{Q(\\psi(t))} \\leq 2C_1C_Q(1 + t)^{-\\frac{d}{4}-\\frac{\\eta}{2}}\\|\\psi_0\\|_1, \\ n \\leq t \\leq n+1 \\bigg\\}. 
\n\t\t\\end{split}\t\t\n\t \\end{equation}\nAgain, from our choice of $\\varepsilon > 0$, Proposition~\\ref{prop:Decays} guarantees that \n\\begin{equation}\n\tT:X_{n+1} \\to X_{n+1}\n\\end{equation}\nis well-defined. Let us consider the metric on $X_{n+1}$ given by\n\\begin{equation} \\label{Un+1Metric}\n\t\\rho_{X_{n+1}}(\\psi_1(t),\\psi_2(t)) := \\sup_{t \\in [0,n+1]} \\|\\psi_1(t) - \\psi_2(t)\\|_2,\n\\end{equation}\nfor any $\\psi_1(t),\\psi_2(t) \\in X_{n+1}$. A proof nearly identical to that of Claim $\\ref{claim:U1Complete}$ shows that $X_{n+1}$ is complete with respect to the metric $(\\ref{Un+1Metric})$. \n\nNow, for any $\\psi_1(t),\\psi_2(t) \\in X_{n+1}$ we have that $\\psi_1(t) = \\psi_2(t)$ for all $t \\in [0,n]$. This then gives \n\t\\begin{equation}\n\t\t\\begin{split}\n\t\t\t\\rho_{X_{n+1}}(T\\psi_1(t),T\\psi_2(t)) &\\leq \\sup_{t \\in [n,n+1]} \\int_n^t \\|P_{t-s}[\\mathcal{G}(\\psi_1(s)) - \\mathcal{G}(\\psi_2(s))]\\|_2ds \\\\\n\t\t\t&\\leq \\sup_{t \\in [n,n+1]} \\frac{1}{2}\\int_n^t \\|\\psi_1(s) - \\psi_2(s)\\|_2ds \\\\\n\t\t\t&\\leq \\sup_{t \\in [n,n+1]} \\frac{t - n}{2} \\rho_{X_{n+1}}(\\psi_1(t),\\psi_2(t)) \\\\\n\t\t\t&\\leq \\frac{1}{2} \\rho_{X_{n+1}}(\\psi_1(t),\\psi_2(t)),\n\t\t\\end{split}\n\t\\end{equation} \n\tshowing that $T:X_{n+1}\\to X_{n+1}$ is a contraction. By the contraction mapping principle, there exists a unique fixed point $\\psi^*_{n+1}(t)$ which extends the solution $\\psi_n^*(t)$ onto the interval $[0,n+1]$ and satisfies the required decay rates on this interval. Therefore, there exists a solution to the differential equation $(\\ref{PerturbationSystem})$ for all $t \\geq 0$ which satisfies the decay rates of the space $X$. Per the discussion following Proposition~\\ref{prop:Decays}, this therefore gives the results of Theorem~\\ref{thm:Stability}.\n\\end{proof}\n\nWe conclude with the proof of Claim $\\ref{claim:U1Complete}$.\n\n\\begin{proof}[Proof of Claim~\\ref{claim:U1Complete}]\n\tLet $\\{\\psi_n(t)\\}_{n = 1}^\\infty$ be a Cauchy sequence in $X_1$. For each fixed $t \\in [0,1]$, $\\{\\psi_n(t)\\}_{n=1}^\\infty$ forms a Cauchy sequence in the complete space $\\ell^2(V)$. Hence, there exists a pointwise limit to the sequence, denoted $\\psi(t)$, which belongs to $\\ell^2(V)$ for all $t \\in [0,1]$. Furthermore, the uniformity of the metric $\\rho_{X_1}$ implies that \n\t\\begin{equation} \\label{X1Convergence}\n\t\t\\lim_{n \\to \\infty} \\rho_{X_1}(\\psi_n(t),\\psi(t)) = 0.\n\t\\end{equation}\n\tIt therefore only remains to show that $\\psi(t) \\in X_1$. \n\t\n\tLet $\\varepsilon > 0$ be arbitrary. From $(\\ref{X1Convergence})$ there exists $N \\geq 0$ such that for all $n \\geq N$ and $t \\in [0,1]$ we have\n\t\\begin{equation}\n\t\t\\|\\psi_n(t) - \\psi(t)\\|_2 < \\varepsilon.\n\t\\end{equation}\n\tThen for all $t\\in [0,1]$ we have\n\t\\begin{equation}\n\t\t\\|\\psi(t)\\|_2 \\leq \\|\\psi_n(t) - \\psi(t)\\|_2 + \\|\\psi_n(t)\\|_2 < \\varepsilon + 2C_1(1 + t)^{-\\frac{d}{4}}\\|\\psi_0\\|_1.\n\t\\end{equation}\n\tLetting $\\varepsilon \\to 0^+$ gives that $\\|\\psi(t)\\|_2 \\leq 2C_1(1 + t)^{-\\frac{d}{4}}\\|\\psi_0\\|_1$ for all $t \\in [0,1]$.\n\t\nNow for the final condition. 
Notice that $(\\ref{QuadBound})$ dictates that since $\\rho_{X_1}(\\psi_n(t),\\psi(t)) \\to 0$ as $n \\to \\infty$, we have that \n\\begin{equation}\n \\lim_{n \\to \\infty} \\sup_{t\\in [0,1]}\\sqrt{Q(\\psi_n(t) - \\psi(t))} \\leq 2\\sqrt{D}\\lim_{n \\to \\infty}\\rho_{X_1}(\\psi_n(t),\\psi(t)) = 0.\n\\end{equation} \n\tThus, we merely repeat the previous arguments showing the bounds on $\\|\\psi(t)\\|_2$, now using the triangle inequality for the seminorm $\\sqrt{Q(\\cdot)}$, to obtain the appropriate bound on $\\sqrt{Q(\\psi(t))}$. This completes the proof of the claim. \n\\end{proof}\n\n\n\n\n\n\n\n\n\n\\section{Applications} \\label{sec:Applications}\n\nIn this section we will apply the results of Theorem~\\ref{thm:Stability} to a phase system which was analyzed in $\\cite{MyWork}$. Here we will take $V = \\mathbb{Z}^2$ and consider the system of coupled oscillators given by\n\\begin{equation} \\label{MyPhaseSystem1}\n\t\\dot{\\theta}_{i,j} = \\omega + \\sin(\\theta_{i+1,j} - \\theta_{i,j}) + \\sin(\\theta_{i-1,j} - \\theta_{i,j}) + \\sin(\\theta_{i,j+1} - \\theta_{i,j}) + \\sin(\\theta_{i,j-1} - \\theta_{i,j}),\n\\end{equation}\nfor all $(i,j)\\in\\mathbb{Z}^2$ and a fixed $\\omega \\in \\mathbb{R}$. That is, for each $(i,j) \\in \\mathbb{Z}^2$, we have that $N((i,j)) = \\{(i\\pm1,j),(i,j\\pm1)\\}$. For the ease of notation, we will follow $\\cite{MyWork}$ to compactly write $(\\ref{MyPhaseSystem1})$ as\n\\begin{equation} \\label{MyPhaseSystem2}\n\t\\dot{\\theta}_{i,j} = \\omega + \\sum_{i',j'} \\sin(\\theta_{i',j'} - \\theta_{i,j}),\t\n\\end{equation}\nfor all $(i,j) \\in \\mathbb{Z}^2$. \n\nOur interest lies in phase-locked solutions to $(\\ref{MyPhaseSystem2})$ of the form $(\\ref{PhaseAnsatz})$ with $\\Omega = \\omega$. Solving for the phase-lags $\\bar{\\theta} = \\{\\bar{\\theta}_{i,j}\\}_{(i,j)\\in\\mathbb{Z}^2}$ requires one to solve \n\\begin{equation} \\label{MyPhaseLags}\n\t\\sum_{i',j'} \\sin(\\bar{\\theta}_{i',j'} - \\bar{\\theta}_{i,j}) = 0\t\n\\end{equation}\nfor all $(i,j) \\in \\mathbb{Z}^2$. Having obtained a solution to $(\\ref{MyPhaseLags})$, one may follow the general framework of Section $\\ref{sec:Perturbation}$ by applying a slight perturbation $\\psi = \\{\\psi_{i,j}\\}_{(i,j)\\in\\mathbb{Z}^2}$ to the phase-locked solution to arrive at the perturbation system\n\\begin{equation} \\label{PerturbationSystem2}\n\t\\dot{\\psi}_{i,j} = \\sum_{i',j'} \\sin(\\bar{\\theta}_{i',j'} + \\psi_{i',j'} - \\bar{\\theta}_{i,j} - \\psi_{i,j}) \n\\end{equation} \nfor all $(i,j)\\in\\mathbb{Z}^2$. The reader is reminded that $\\psi = 0$ is a steady-state solution to this perturbation system and its stability corresponds to the stability of the phase-locked solution $\\{\\omega t + \\bar{\\theta}_{i,j}\\}_{(i,j)\\in\\mathbb{Z}^2}$. \n\nThroughout the following subsections we will provide examples of phase-lags that solve $(\\ref{MyPhaseLags})$ and thus provide phase-locked solutions to $(\\ref{MyPhaseSystem2})$. Furthermore, we will demonstrate how one may apply Theorem~\\ref{thm:Stability} to find that these phase-locked solutions are stable. It should immediately be noted that Hypothesis $\\ref{Hyp:Neighbours}$ is satisfied by our phase system $(\\ref{MyPhaseSystem2})$ since each oscillator is influenced by exactly the four oscillators corresponding to the four nearest-neighbours on the lattice $\\mathbb{Z}^2$. 
Furthermore, since the coupling function $H(x) = \\sin(x)$ is an odd function, $H'(x) = \\cos(x)$ is an even function and therefore the symmetry requirement $(\\ref{PositiveWeights})$ will be met, although the positivity requirement will need to be checked on a case-by-case basis. \n\n\n\n\n\n\n\n\n\n\\subsection{Stability of the Trivial Solution} \\label{subsec:Trivial}\n\nWe begin by illustrating an application of Theorem~\\ref{thm:Stability} with the simplest solution to $(\\ref{MyPhaseLags})$, the trivial solution. Since $\\sin(0) = 0$, taking $\\bar{\\theta}_{i,j} = 0$ for all $(i,j) \\in \\mathbb{Z}^2$ leads to a solution to $(\\ref{MyPhaseLags})$. The perturbation system in the variables $\\psi = \\{\\psi_{i,j}\\}_{(i,j)\\in\\mathbb{Z}^2}$ then becomes\n\\begin{equation}\n\t\\dot{\\psi}_{i,j} = \\sum_{i',j'} \\sin(\\psi_{i',j'} - \\psi_{i,j})\n\\end{equation} \nfor all $(i,j)\\in\\mathbb{Z}^2$. Linearizing this system of ordinary differential equations about the steady-state $\\psi = 0$ results in the linear operator, denoted $L_1$, acting on the sequences $x = \\{x_{i,j}\\}_{(i,j) \\in \\mathbb{Z}^2}$ by \n\\begin{equation}\n\t[L_1x]_{i,j} = \\sum_{i',j'} (x_{i',j'} - x_{i,j}),\n\\end{equation}\nfor every $(i,j)\\in\\mathbb{Z}^2$, which is exactly the linearization (\\ref{LinearizationEx}) discussed in Section~\\ref{sec:Perturbation}. Here we interpret the underlying graph, denoted $G_1 = (\\mathbb{Z}^2,E_1,w_1)$, to have vertex set $\\mathbb{Z}^2$ and an edge set $E_1$ containing all nearest-neighbour interactions between vertices. The weights of each edge are identically given by $1$, thus giving that the measure of each vertex is identically $4$. An illustration of $G_1$ is given in Figure $\\ref{fig:TrivialGraph}$ for visual reference. \n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width = 7cm]{Graph2.eps}\n\t\\caption{A representation of the graph $G_1$ associated with the linearization of $(\\ref{MyPhaseSystem1})$ about the trivial phase-locked solution. Dots represent vertices of the graph and the lines connecting these vertices represent the edges.}\n\t\\label{fig:TrivialGraph}\n\\end{figure} \n\nWe see that since each vertex in $G_1$ has measure exactly $4$, we are in the first case of Hypothesis $\\ref{Hyp:DecayRates}$. Following the discussion prior to Hypothesis $\\ref{Hyp:DecayRates}$, we apply the linear time re-parametrization $t \\to 4t$ to system $(\\ref{PerturbationSystem2})$. Our re-parametrized system now results in the linearization about $\\psi = 0$ given by\n\\begin{equation}\n\t[\\tilde{L}_1x]_{i,j} = \\sum_{i',j'} \\frac{1}{4}(x_{i',j'} - x_{i,j}),\n\\end{equation} \nfor every $(i,j)\\in\\mathbb{Z}^2$. Due to the fact that the measure of each vertex is identical, we have not added any loops to the original underlying graph $G_1$. What is important to note though is that $\\tilde{L}_1$ is now exactly of the form of a graph Laplacian $(\\ref{NormLap})$.\n\nThe operator $\\tilde{L}_1$ and its underlying graph $G_1$ are a well-studied example in the theory of random walks on infinite graphs. Following Definition $\\ref{def:VolumeGrowth}$ it was pointed out that this graph $G_1$ satisfies $VG(2)$, and Telcs points out in the proof of Theorem $3$ of $\\cite{Telcs}$ that this graph satisfies the properties $PI$ and $\\Delta$ as well. 
Therefore, Hypothesis $\\ref{Hyp:DecayRates}$ holds for the trivial phase-locked solution, along with Hypotheses $\\ref{Hyp:Neighbours}$, $\\ref{Hyp:Linearization}$ and $\\ref{Hyp:GraphDistance}$, thus allowing one to apply Theorem~\\ref{thm:Stability} to this situation. We state this here as a corollary of Theorem~\\ref{thm:Stability}. \n\n\\begin{cor}\n\tThe trivial phase-locked solution to $(\\ref{MyPhaseSystem1})$ given by $\\{\\omega t\\}_{(i,j) \\in \\mathbb{Z}^2}$ is locally asymptotically stable with respect to small perturbations in $\\ell^1(\\mathbb{Z}^2)$.\t\n\\end{cor}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Stability of the Rotating Wave Solution}\n\nWe now turn our attention to a nontrivial solution of $(\\ref{MyPhaseSystem1})$. It was shown in $\\cite{MyWork}$ that there exists a solution resembling a rotating wave, in that the values of the phase-lags about concentric rings of cells increase from $0$ up to $2\\pi$. Let us simply denote this solution by $\\bar{\\theta} = \\{\\bar{\\theta}_{i,j}\\}_{(i,j) \\in \\mathbb{Z}^2}$. For visual reference, the solution is illustrated in Figure $\\ref{fig:RotWave}$. Here the core of the rotating wave is given by the four-cell ring with indices $(i,j) = (0,0), (0,1), (1,0), (1,1)$, represented at the centre of Figure $\\ref{fig:RotWave}$. \n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width = 8cm]{PhaseSolution.eps}\n\t\\caption{Symmetry of the phase-locked solution on the finite lattice. All phase-lags are obtained by phase advances and delays of the elements in the shaded cells. Image originally appears in $\\cite{MyWork}$.}\n\t\\label{fig:RotWave}\n\\end{figure} \n\nLinearizing the perturbation system about this solution leads to the linear operator, denoted $L_2$, acting upon the sequences $x = \\{x_{i,j}\\}_{(i,j) \\in \\mathbb{Z}^2}$ by\n\\begin{equation}\n\t[L_2x]_{i,j} = \\sum_{i',j'} \\cos(\\bar{\\theta}_{i',j'} - \\bar{\\theta}_{i,j}) (x_{i',j'} - x_{i,j}), \n\\end{equation}\nfor all $(i,j)\\in\\mathbb{Z}^2$. We again have that the conditions of Hypothesis $\\ref{Hyp:Linearization}$ are satisfied since it was shown that all local interactions are such that $|\\bar{\\theta}_{i',j'} - \\bar{\\theta}_{i,j}| \\leq \\frac{\\pi}{2}$. More precisely, with the exception of the 'centre' four cells at $(i,j) = (0,0), (0,1), (1,0), (1,1)$, all local interactions are such that $|\\bar{\\theta}_{i',j'} - \\bar{\\theta}_{i,j}| < \\frac{\\pi}{2}$, whereas the coupling between any two of the four centre cells is exactly $\\pi\/2$. Hence, we consider the weighted and connected graph $G_2 = (\\mathbb{Z}^2,E_2,w_2)$, with vertex set $\\mathbb{Z}^2$ and an edge set which contains all nearest-neighbour connections less those connections between any two of the centre four cells, since $\\cos(\\frac{\\pi}{2}) = 0$. A visual representation of this graph is given in Figure $\\ref{fig:RotGraph}$, where one notes the absence of connections between the core indices. \n\nWe now remind the reader of some important properties of this rotating wave solution which were detailed in $\\cite{MyWork}$. First, the solution is obtained via phase advances and phase delays of a solution obtained on the indices $1 \\leq j \\leq i$, which is represented by the shaded cells in Figure $\\ref{fig:RotWave}$. 
Furthermore, we have that\n\\begin{equation}\n\t\\begin{aligned}\n\t\t&\\bar{\\theta}_{i,i} = 0, \\\\\n\t\t&\\bar{\\theta}_{i,0} = \\frac{\\pi}{2} - \\bar{\\theta}_{i,1},\n\t\\end{aligned}\n\\end{equation} \nfor all $i \\geq 1$ and \n\\begin{equation}\n\t0 < \\bar{\\theta}_{i,j} \\leq \\frac{\\pi}{4}\n\\end{equation}\nfor all $1 \\leq j < i$. Then one uses these facts to see that\n\\begin{equation} \\label{NNBound1}\n\t|\\bar{\\theta}_{i',j'} - \\bar{\\theta}_{i,j}| \\leq \\frac{\\pi}{4},\n\\end{equation} \nfor all $1 \\leq j \\leq i$ and $1 \\leq j' \\leq i'$. Another property of the solution is that \n\\begin{equation}\n\t\\bar{\\theta}_{i,j} \\leq \\bar{\\theta}_{i+1,j}\n\\end{equation} \nfor all $1 \\leq j \\leq i$. One now sees that \n\\begin{equation} \\label{NNBound2}\n\t\\frac{\\pi}{2} > \\bar{\\theta}_{2,0} - \\bar{\\theta}_{2,1} = \\frac{\\pi}{2} - 2\\bar{\\theta}_{2,1} \\geq \\frac{\\pi}{2} - 2\\bar{\\theta}_{i,1} = \\bar{\\theta}_{i,0} - \\bar{\\theta}_{i,1} \\geq 0, \n\\end{equation}\nfor all $i \\geq 2$. Equations $(\\ref{NNBound1})$ and $(\\ref{NNBound2})$ therefore combine to show that all nearest-neighbour interactions within the indices $1 \\leq j \\leq i$ remain bounded away from $\\pm\\frac{\\pi}{2}$. This therefore gives that\n\\begin{equation} \\label{NNEdges}\n\t0 < \\inf_{1 \\leq j \\leq i} \\cos(\\bar{\\theta}_{i',j'} - \\bar{\\theta}_{i,j}) \\leq \\sup_{1 \\leq j \\leq i} \\cos(\\bar{\\theta}_{i',j'} - \\bar{\\theta}_{i,j}) \\leq 1.\n\\end{equation} \nSince the elements at the indices $1 \\leq j \\leq i$ are used to define the solution over all the indices, one has that the edge weights are uniformly bounded above and away from $0$. \n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width = 7cm]{Graph.eps}\n\t\\caption{A representation of the graph $G_2$.}\n\t\\label{fig:RotGraph}\n\\end{figure} \n\nNow one should note that $G_2$ is significantly different from $G_1$ in that not all vertices have the same number of edges attached to them and the weights of each edge are not identical, thus putting us in the situation of the second case of Hypothesis $\\ref{Hyp:DecayRates}$. Let $w_{min} > 0$ and $w_{max} > 0$ be uniform lower and upper bounds on the weight function, respectively. This allows one to apply the time re-parametrization given by $t \\to (4w_{max}+1)t$ (since each vertex has degree at most $4$), resulting in the linear operator \n\\begin{equation}\n\t\\tilde{L}_2 := \\frac{1}{(4w_{max}+1)}L_2\n\\end{equation}\nand resulting graph $\\tilde{G}_2 = (\\mathbb{Z}^2,\\tilde{E}_2,\\tilde{w}_2)$. Recall from our work in Section $\\ref{sec:Stability}$ that the graph $\\tilde{G}_2$ is merely the graph $G_2$ with added edges connecting each vertex to itself (loops). Moreover, by the construction $(\\ref{LoopConstruction})$, the weight of each loop is bounded above by $4w_{max}+1$ and below by $1$. Therefore the edge weights of $\\tilde{G}_2$ are uniformly bounded above by some $\\tilde{w}_{max} > 0$ and below by some $\\tilde{w}_{min} > 0$. Then, Lemma $\\ref{lem:Delta}$ implies that there exists an $\\alpha > 0$ such that $\\tilde{G}_2$ satisfies $\\Delta$. To show that $\\tilde{G}_2$ further satisfies $PI$ and $VG(2)$, we provide the following proposition. \n\n\\begin{prop} \\label{prop:RoughIsometry}\n\tThe graphs $G_1$ and $\\tilde{G}_2$ are roughly isometric. 
\t\n\\end{prop}\n\n\\begin{proof}\n\tSince $G_1$ and $\\tilde{G}_2$ have the same vertex set, let us consider the identity mapping $\\mathcal{I}: \\mathbb{Z}^2 \\to \\mathbb{Z}^2$ which acts by $\\mathcal{I}((i,j)) = (i,j)$ for all $(i,j) \\in \\mathbb{Z}^2$. We will systematically verify the three rough isometry properties $(\\ref{Rough1})$, $(\\ref{Rough2})$ and $(\\ref{Rough3})$ to show that $\\mathcal{I}$ is a rough isometry between the graphs $G_1$ and $\\tilde{G}_2$. We will let $\\rho_1$, $m_1$ be the distance and vertex weight functions on the graph $G_1$ and $\\rho_2$, $m_2$ be the distance and vertex weight functions on the graph $\\tilde{G}_2$.\n\t\n\t\\underline{Property $(\\ref{Rough1})$}: Recall that the edge sets of $G_1$ and $\\tilde{G}_2$ differ only by the centre four edges which are present in the former and absent in the latter. Let $n_1 = (i_1,j_1), n_2 = (i_2,j_2) \\in \\mathbb{Z}^2$. Then since $G_1$ has more connections between vertices than $\\tilde{G}_2$, one immediately has\n\t\\begin{equation} \\label{RoughProp1}\n\t\t\\rho_1(n_1,n_2) \\leq \\rho_2(n_1,n_2),\n\t\\end{equation} \n\tsince any path between the vertices $n_1$ and $n_2$ in $\\tilde{G}_2$ could potentially be shortened by the addition of edges connecting different vertices. Conversely, any shortest path connecting vertices in $G_1$ potentially traverses an edge which is absent in $\\tilde{G}_2$. Following along this path in $\\tilde{G}_2$ requires one to replace each step across a missing edge with two additional steps to circumvent the missing edge. Since there are a maximum of four missing edges to circumvent, we obtain \n\t\\begin{equation} \\label{RoughProp2}\n\t\t\\rho_2(n_1,n_2) \\leq \\rho_1(n_1,n_2) + 8.\n\t\\end{equation}\n\tTherefore, to obtain the bounds $(\\ref{Rough1})$ we use $(\\ref{RoughProp1})$ and $(\\ref{RoughProp2})$ and take, for example, $a = 2$ and $b = 8$ to have\n\t\\begin{equation}\n\t\t\\frac{1}{2}\\rho_1(n_1,n_2) - 8 \\leq \\rho_2(n_1,n_2) \\leq 2\\rho_1(n_1,n_2) + 8.\n\t\\end{equation}\n\t\n\t\\underline{Property $(\\ref{Rough2})$}: This property is trivially satisfied for any $M > 0$ since for all $n = (i,j)\\in\\mathbb{Z}^2$ we have $\\rho_2(\\mathbb{Z}^2,n) = 0$.\n\t\n\t\\underline{Property $(\\ref{Rough3})$}: Recall that for all $n = (i,j)\\in\\mathbb{Z}^2$ we have $m_1(n) = 4$. Furthermore, we keep with the notation above to denote $\\tilde{w}_{min} > 0$ and $\\tilde{w}_{max} > 0$ as uniform lower and upper bounds, respectively, on the weight function of $\\tilde{G}_2$. Therefore, since each vertex has at least one edge connected to it and at most five (four nearest-neighbours and one loop), we have\n\t\\begin{equation}\n\t\t\\tilde{w}_{min} \\leq m_2(n) \\leq 5\\tilde{w}_{max},\n\t\\end{equation}\n\tfor all $n = (i,j)\\in\\mathbb{Z}^2$. Hence, \n\t\\begin{equation}\n\t\t\\frac{\\tilde{w}_{min}}{4}m_1(n) = \\tilde{w}_{min} \\leq m_2(n) \\leq 5\\tilde{w}_{max} = \\frac{5\\tilde{w}_{max}}{4}m_1(n).\n\t\\end{equation} \n\tTaking $c := \\max\\{\\frac{5\\tilde{w}_{max}}{4},\\frac{4}{\\tilde{w}_{min}}, 2\\} > 1$ gives\n\t\\begin{equation}\n\t\tc^{-1}m_1(n) \\leq m_2(n) \\leq cm_1(n)\n\t\\end{equation}\n\tfor all $n = (i,j) \\in \\mathbb{Z}^2$. This completes the proof since we have shown that $\\mathcal{I}:\\mathbb{Z}^2 \\to \\mathbb{Z}^2$ satisfies all three conditions to be a rough isometry. \n\\end{proof}\n\n\\begin{cor} \\label{cor:RotWaveStability}\n\t$\\tilde{G}_2$ satisfies $VG(2)$, $PI$ and $\\Delta$. 
\n\\end{cor}\n\n\\begin{proof}\n\tThis is a direct consequence of Propositions $\\ref{prop:RoughInvariance}$ and $\\ref{prop:RoughIsometry}$.\n\\end{proof}\n\nTherefore, Corollary $\\ref{cor:RotWaveStability}$ allows one to now apply the stability results of Theorem~\\ref{thm:Stability} since we have verified all of the necessary hypotheses. We summarize these results as a corollary of Theorem~\\ref{thm:Stability} and the work in $\\cite{MyWork}$.\n\n\\begin{cor}\n\tThere exists a phase-locked rotating wave solution to $(\\ref{MyPhaseSystem1})$ which is locally asymptotically stable with respect to small perturbations in $\\ell^1(\\mathbb{Z}^2)$. \n\\end{cor} \n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Doubly Spatially Periodic Stable Patterns}\n\nNow that we have two applications of Theorem~\\ref{thm:Stability} to specific phase-locked solutions, let us demonstrate that system $(\\ref{MyPhaseSystem1})$ can exhibit infinitely many stable phase-locked solutions. Begin by fixing any two integers $N_1,N_2 \\geq 5$ and consider the phase-lags given by\n\\begin{equation} \\label{Lags2}\n\t\\bar{\\theta}_{i,j} = \\frac{2\\pi [i]_{N_1}}{N_1} + \\frac{2\\pi [j]_{N_2}}{N_2},\n\\end{equation}\nwhere we have used the notation $[n]_N = n \\pmod{N}$. These solutions form periodic waves in both the horizontal and vertical directions. It is easy to see that solutions of this type solve $(\\ref{MyPhaseLags})$ since for all $(i,j)\\in\\mathbb{Z}^2$ we have\n\\begin{equation}\n\t\\sin(\\bar{\\theta}_{i+1,j} - \\bar{\\theta}_{i,j}) + \\sin(\\bar{\\theta}_{i-1,j} - \\bar{\\theta}_{i,j}) = \\sin\\bigg(\\frac{2\\pi}{N_1}\\bigg) + \\sin\\bigg(-\\frac{2\\pi}{N_1}\\bigg) = 0, \n\\end{equation}\nwhere we have used the $2\\pi$-periodicity of sine and the fact that it is an odd function. An equivalent formulation holds for the up\/down connections, giving that the left\/right and up\/down connections of $(\\ref{MyPhaseSystem1})$ independently cancel to satisfy $(\\ref{MyPhaseLags})$.\n\nApplying the perturbation ansatz and linearizing about $\\psi = 0$ results in a linear operator $L_3$ which acts on the sequences $x = \\{x_{i,j}\\}_{(i,j)\\in\\mathbb{Z}^2}$ by\n\\begin{equation}\n\t\\begin{split}\n\t\t[L_3 x]_{i,j} = \\cos\\bigg(&\\frac{2\\pi}{N_1}\\bigg)\\bigg[(x_{i+1,j} - x_{i,j}) + (x_{i-1,j} - x_{i,j})\\bigg] \\\\ \n\t\t&+ \\cos\\bigg(\\frac{2\\pi}{N_2}\\bigg)\\bigg[(x_{i,j+1} - x_{i,j}) + (x_{i,j-1} - x_{i,j})\\bigg].\n\t\\end{split}\n\\end{equation}\nFurthermore, since $N_1,N_2 \\geq 5$, we have that \n\\begin{equation}\n\t0 < \\frac{2\\pi}{N_1}, \\frac{2\\pi}{N_2} < \\frac{\\pi}{2} \\implies \\cos\\bigg(\\frac{2\\pi}{N_1}\\bigg), \\cos\\bigg(\\frac{2\\pi}{N_2}\\bigg) > 0, \n\\end{equation}\ngiving that the symmetry and positivity requirements of $(\\ref{PositiveWeights})$ have been met. Here the underlying graph, $G_3 = (\\mathbb{Z}^2,E_3,w_3)$, has vertex set $\\mathbb{Z}^2$ and an edge set $E_3$ which contains all nearest-neighbour interactions, similar to $G_1$. The difference now is that the associated weight function assigns a weight of $\\cos(\\frac{2\\pi}{N_1})$ to horizontal (left\/right) edges and a weight of $\\cos(\\frac{2\\pi}{N_2})$ to vertical (up\/down) edges. 
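\nFor concreteness, the weight function just described may be written as\n\\begin{equation}\n\tw_3((i,j),(i',j')) = \\begin{cases} \\cos\\big(\\frac{2\\pi}{N_1}\\big), & |i-i'| = 1,\\; j = j', \\\\ \\cos\\big(\\frac{2\\pi}{N_2}\\big), & i = i',\\; |j-j'| = 1, \\end{cases}\n\\end{equation}\nfor all nearest-neighbour pairs $(i,j),(i',j') \\in \\mathbb{Z}^2$. 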
Clearly this graph is connected, and is indistinguishable from that given in Figure $\\ref{fig:TrivialGraph}$ for $G_1$ when edge weights are not labelled.\n\nNow, the measure of each vertex is given identically by $2\\cos(\\frac{2\\pi}{N_1}) + 2\\cos(\\frac{2\\pi}{N_2})$. Hence, following the discussion prior to Hypothesis $\\ref{Hyp:DecayRates}$, we apply the linear time re-parametrization $t \\to [2\\cos(\\frac{2\\pi}{N_1}) + 2\\cos(\\frac{2\\pi}{N_2})]t$ to system $(\\ref{PerturbationSystem2})$. This results in a graph Laplacian upon linearizing about the phase-lags $(\\ref{Lags2})$, denoted \n\\begin{equation}\n\t\\tilde{L}_3 = \\frac{1}{2\\cos(\\frac{2\\pi}{N_1}) + 2\\cos(\\frac{2\\pi}{N_2})} L_3.\n\\end{equation} \nOne should note that in this case, as with the trivial solution, our re-parametrization of $t$ has not added any loops to the original underlying graph $G_3$. In fact, we have not altered the underlying graph $G_3$ in any way since no edges have been added. \n\n\n\\begin{prop} \\label{prop:RoughIsometry2}\n\tThe graphs $G_1$ and $G_3$ are roughly isometric. \n\\end{prop}\n\n\\begin{proof}\n\tThis proof follows in exactly the same way as the proof of Proposition $\\ref{prop:RoughIsometry}$. We again consider the identity mapping $\\mathcal{I}: \\mathbb{Z}^2 \\to \\mathbb{Z}^2$ acting between the vertex sets of the respective graphs. Furthermore, since $G_1$ and $G_3$ have the same vertex and edge set, they have the same distance function. This immediately gives that property $(\\ref{Rough1})$ is satisfied for any $a > 1$ and $b > 0$. \n\t\n\tAs previously stated, since the vertex sets of $G_1$ and $G_3$ are the same, property $(\\ref{Rough2})$ is also immediately satisfied. This leaves one to check that property $(\\ref{Rough3})$ is satisfied. But this is again obvious since the vertex weights in both $G_1$ and $G_3$ are independent of the vertices. Therefore, one finds that $G_1$ and $G_3$ are roughly isometric. \n\\end{proof}\n\n\\begin{cor}\n\t$G_3$ satisfies $VG(2)$, $PI$ and $\\Delta$.\n\\end{cor}\n\nHence, we see that we have satisfied all of the hypotheses to apply Theorem~\\ref{thm:Stability}. This leads to the following corollary. \n\n\\begin{cor}\n\tFor any two integers $N_1,N_2 \\geq 5$, the phase-locked solution to $(\\ref{MyPhaseSystem1})$ given by $\\{\\omega t + \\bar{\\theta}_{i,j}\\}_{(i,j) \\in \\mathbb{Z}^2}$, with $\\bar{\\theta}_{i,j}$ as in $(\\ref{Lags2})$, is locally asymptotically stable with respect to small perturbations in $\\ell^1(\\mathbb{Z}^2)$. \n\\end{cor}\n\n\n\n\n\n\n\n\n\n\n\\section{Discussion} \\label{sec:Discussion}\n\nIn this work we have provided sufficient conditions to obtain local asymptotic stability of phase-locked solutions to infinite systems of coupled oscillators. We saw that this stability result was obtained from a unification of results pertaining to random walks on infinite weighted graphs. It is clear that the results in this work are best suited for odd coupling functions, with the example examined in detail here being $H(x) = \\sin(x)$. Moreover, such a sinusoidal coupling function is not a mere mathematical idealization to demonstrate the uses of this new stability result; rather, our hypotheses and assumptions are based upon well-studied paradigms in the theory of weakly coupled oscillators, such as the Kuramoto model.\n\nAside from pointing out the symmetry limitation in Remark $\\ref{rmk:Digraph}$, our work here has another minor shortcoming. 
One should notice that Hypothesis $\\ref{Hyp:DecayRates}$ requires that the underlying graph satisfy $VG(d)$ for some $d \\geq 2$, leaving the case when $d = 1$ out of the scope of Theorem~\\ref{thm:Stability}. Although this appears as a minor detail in our hypotheses, the proof of Theorem~\\ref{thm:Stability} relies heavily on the fact that $d\\geq 2$ in order to properly apply Lemma $\\ref{lem:IntegralLemma}$. It appears that the case when $d = 1$ cannot be reconciled using the methods put forth in this paper, but it should be noted that techniques such as the discrete Fourier transform could potentially be applied to obtain similar results. Furthermore, our examples have used positive integer values for $d$ due to their relevance to the important integer lattices, but it should be noted that it is possible to construct fractal graphs whose dimension $d$ need not be an integer. Although it could be difficult to motivate the study of coupled oscillators on fractal graphs, it is certainly interesting to note that Theorem~\\ref{thm:Stability} makes no claim that $d$ is an integer, and therefore could be applied to fractal graphs with dimension $d \\geq 2$ so long as the remaining assumptions are satisfied. \n\nLet us examine a brief example of a case where Theorem~\\ref{thm:Stability} cannot be applied. Consider the system with $V = \\mathbb{Z}$ given by\n\\begin{equation} \\label{ChainSystem}\n\t\\dot{\\theta}_i = \\omega + \\sin(\\theta_{i+1} - \\theta_i) + \\sin(\\theta_{i-1} - \\theta_i),\n\\end{equation} \nfor all $i \\in \\mathbb{Z}$ and some $\\omega\\in\\mathbb{R}$. One may follow the same techniques laid out in Section $\\ref{subsec:Trivial}$ to inspect the stability of the trivial phase-locked solution given by $\\theta_i(t) = \\omega t$, for all $i \\in \\mathbb{Z}$. We refrain from going into detail here, but one can show that the underlying graph is the one-dimensional version of $G_1$ from Section $\\ref{subsec:Trivial}$, for which we give a visual representation in Figure $\\ref{fig:LinearGraph}$. Furthermore, in the discussion following Definition $\\ref{def:VolumeGrowth}$, it was pointed out that this graph satisfies $VG(1)$; thus we find that we are unable to determine stability of this trivial phase-locked solution using Theorem~\\ref{thm:Stability}. Of course, stability of the linearized equation can be determined via the random walk probabilities outlined in this manuscript, or with the discrete Fourier transform in a manner similar to \\cite{Stefanov}, but the extension to nonlinear stability cannot be obtained with the methods in this manuscript due to the relatively slow decay, which prevents one from closing the bootstrapping argument. \n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width = 7cm]{LinearGraph.eps}\n\t\\caption{The graph corresponding to the trivial phase-locked solution to system $(\\ref{ChainSystem})$. Here the vertices lie in one-to-one correspondence with the elements of $\\mathbb{Z}$. All edges have weight exactly $1$ and connect the vertex at $i \\in \\mathbb{Z}$ to the vertices at $i\\pm1$.}\n\t\\label{fig:LinearGraph}\n\\end{figure} \n\nMost interestingly, it seems that this problem in system $(\\ref{ChainSystem})$ cannot be reconciled by increasing the distance of influence for each oscillator. 
That is, consider a generalization of system $(\\ref{ChainSystem})$ given by the phase system\n\\begin{equation} \\label{GeneralChainSystem}\n\t\\dot{\\theta}_i = \\omega + \\sum_{j = i-n}^{i+n} \\sin(\\theta_j - \\theta_i),\n\\end{equation}\nfor each $i \\in \\mathbb{Z}$ and a fixed $n \\geq 1$. One finds that the trivial phase-locked solution again relates to a weighted graph which satisfies $VG(1)$ (for example, see Figure $\\ref{fig:LinearGraph2}$), thus showing that by increasing the range of influences we still cannot conclude local asymptotic stability. Hence, one hopes to determine an appropriate analogue of Theorem~\\ref{thm:Stability} which works for any graph satisfying $VG(d)$ for arbitrary $d > 0$. \n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width = 7cm]{LinearGraph2.eps}\n\t\\caption{The graph corresponding to the trivial phase-locked solution to system $(\\ref{GeneralChainSystem})$ with $n = 2$. Here again the vertices lie in one-to-one correspondence with the elements of $\\mathbb{Z}$. All edges have weight exactly $1$ and connect the vertex at $i \\in \\mathbb{Z}$ to the vertices at $i\\pm1$ and $i\\pm2$.}\n\t\\label{fig:LinearGraph2}\n\\end{figure} \n\nThe work in this paper is only a starting point for many necessary further investigations into the study of infinitely many coupled oscillators. Now that a link between the stability of phase-locked solutions and random walks on weighted graphs has been established, it is the hope that questions in one area could motivate research in the other. It is also the hope that many of the results presented in Section $\\ref{sec:Graphs}$ for undirected weighted graphs can be extended in an appropriate way for weighted digraphs, which would in turn help to provide a more robust stability result than that which has been established in Theorem~\\ref{thm:Stability}.\n\n\n\n\n\n\n\n\n\n\n\n\\section*{Acknowledgements}\n\nThis work is supported by an Ontario Graduate Scholarship while at the University of Ottawa. The author is very thankful to Benoit Dionne and Victor LeBlanc for their comments on how to properly convey the results. The author would also like to acknowledge a beneficial correspondence with Erik Van Vleck of the University of Kansas which initiated the work in this manuscript. \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\subsection*{Abstract}\nWe present a new hardware-agnostic side-channel attack that targets one of the most fundamental software caches in modern computer systems:\nthe operating system page cache. 
\nThe page cache is a pure software cache that contains all disk-backed pages, including program binaries, shared libraries, and other files, and our attacks thus work across cores and CPUs.\nOur side-channel permits unprivileged monitoring of some memory accesses of other processes, with a spatial resolution of \\SI{4}{\\kilo\\byte} and a temporal resolution of \\SI{2}{\\micro\\second} on Linux (restricted to $6.7$ measurements per second) and \\SI{466}{\\nano\\second} on Windows (restricted to $223$ measurements per second); this is roughly the same order of magnitude as the current state-of-the-art cache attacks.\nWe systematically analyze our side channel by demonstrating different local attacks, including a sandbox-bypassing high-speed covert channel, timed user-interface redressing attacks, and an attack recovering automatically generated temporary passwords.\nWe further show that we can trade off the side channel's hardware-agnostic property for remote exploitability. \nWe demonstrate this via a low-profile remote covert channel that uses this page-cache side-channel to exfiltrate information from a malicious sender process through innocuous server requests. \nFinally, we propose mitigations for some of our attacks, which have been acknowledged by operating system vendors and\nslated for future security patches. \n\n\\section{Introduction}\\label{sec:intro} \nModern processors are highly optimized for performance and efficiency.\nA large share of these optimizations is based upon \\emph{caching} - taking advantage of temporal and spatial locality to minimize slower memory or disk accesses. \nIndeed, caching architectures typically fetch or prefetch code and data into fast buffers closer to the processor. \n\nAlthough side-channels have been known and utilized primarily in military contexts for decades~\\cite{Young2002,Lampson1973}, the idea of cache side-channel attacks gained more attention over the last twenty years~\\cite{Kocher1996,Bernstein2004,Percival2005}.\nOsvik~\\etal\\cite{Osvik2006} showed that an attacker can observe the cache state at the granularity of a cache set using \\PrimeProbe, and later Yarom~\\etal\\cite{Yarom2014} showed this with cache line granularity using \\FlushReload. \nWhile different cache attacks have different use cases, the accuracy of \\FlushReload remains unrivaled.\n\nIndeed, virtually all \\FlushReload attacks target pages in the so-called page cache~\\cite{Yarom2014,Irazoqui2015Lucky,Gruss2015Template,Inci2016,Lipp2016,Irazoqui2016Cross}.\nThe page cache is a pure software cache implemented in all major operating systems today, and it contains virtually all pages in use. \nPages that contain data accessible to multiple programs, such as disk-backed pages (\\eg program binaries, shared libraries, other files, etc.), are shared among all processes regardless of privilege and permission boundaries~\\cite{Gorman2004}. The operating system uses the page cache to store frequently used pages in memory, thus obviating slow disk loads whenever a process needs to access said pages. \nThere is a large body of work exploiting \\FlushReload in various scenarios over the past several years~\\cite{Yarom2014,Irazoqui2015Lucky,Gruss2015Template,Inci2016,Lipp2016,Irazoqui2016Cross}. 
\nThere have also been a series of software (side-channel) cache attacks in the literature, including attacks on the browser cache~\\cite{Felten2000,Jackson2006,Bortz2007,Jia2015,VanGoethem2015} and exploiting page deduplication~\\cite{Suzaki2011,Owens2011,Xiao2012covert,Xiao2013security,Barresi2015,Gruss2015dedup,Bosman2016,Razavi2016}; however, page deduplication is mostly disabled or limited to deduplication within a security domain today~\\cite{WindowsServerDeduplication, RedHatKSM, Vanderveen2016}.\n\nIn this paper, we present a new attack on the operating system page cache. \nWe present a set of local attacks that work entirely without any timers, utilizing operating system calls (\\texttt{mincore} on Linux and \\texttt{QueryWorkingSetEx} on Windows) to elicit page cache information. \nWe also show that page cache metadata can leak to a remote attacker over a network channel, producing a stealthy covert channel between a malicious local sender process and an external attacker.\n\nWe comprehensively evaluate and characterize the software-cache side channel by comparing it to hardware-cache side channels.\nLike the recent DRAMA attack~\\cite{Pessl2016,Wang2017leaky}, our side-channel attack works across cores and across CPUs\nwith a spatial granularity of \\SI{4}{\\kilo\\byte}. \nFor comparison, the spatial granularity of the DRAMA attack is \\SI{2}{\\kilo\\byte} on dual-channel systems up to and including the Haswell processor architecture, and \\SI{1}{\\kilo\\byte} on more recent dual-channel systems.\nThe temporal granularity of the DRAMA attack is around \\SI{300}{\\nano\\second}, whereas the temporal granularity of our attack is \\SI{2}{\\micro\\second} on Linux (restricted to $6.7$ measurements per second) and \\SI{466}{\\nano\\second} on Windows (restricted to $223$ measurements per second). \nHence, we conclude that our attack can compete with the current state-of-the-art in microarchitectural attacks.\n\nFinally, we present several ways to mitigate our attack in software, and observe that certain page replacement algorithms reduce the applicability\nof our attack while simultaneously improving the system performance. 
\nIn our responsible disclosure, both Microsoft and the Linux security team acknowledged the problem and informed us that they will follow our recommendations with security patches to mitigate our attack.\n\nTo summarize, we make the following contributions:\n\\begin{compactenum}\n\t\\item We present a novel attack targeting the page cache.\n\t\\item We present a high-speed covert channel which is agnostic to specific hardware configurations.\n\t\\item We present a set of local attacks which can compete with state-of-the-art microarchitectural attacks.\n\t\\item We present a remote attack which can leak information across the network.\n\\end{compactenum}\n\nWe begin in \\Cref{sec:background} with background information on hardware caches, cache attacks, and software caches,\nfollowed by our threat model in \\Cref{sec:threatmodel}.\n\\Cref{sec:attack} overviews our attack.\n\\Cref{sec:step1} presents a novel method to spy on the page cache state.\n\\Cref{sec:eviction} shows how page cache eviction can be done efficiently on Linux and Windows.\n\\Cref{sec:local} presents (timing-free) local page cache attacks.\n\\Cref{sec:remote} presents remote page cache attacks.\n\\Cref{sec:countermeasures} discusses different countermeasures against our attack.\n\\Cref{sec:conclusion} concludes our work.\n\n\\section{Background}\\label{sec:background} \nWe begin with a brief discussion of hardware and software cache attacks, followed\nby some background on the operating system page cache that we exploit.\n\n\\subsection{Hardware and Software Cache Side-Channel Attacks}\\label{sec:bg_cat}\nThe suggestion of cache attacks harks back to the timing attacks of Kocher~\\cite{Kocher1996}.\nOsvik~\\etal\\cite{Osvik2006} presented a technique with a finer granularity called \\PrimeProbe.\nYarom~\\etal\\cite{Yarom2014} presented \\FlushReload, which is still today the cache attack technique with the highest accuracy (virtually no false negatives or false positives) and a finer granularity than most other attacks (one cache line).\nConsequently, \\FlushReload is also used in other applications, including the covert channel in Spectre~\\cite{Kocher2019} and Meltdown~\\cite{Lipp2018meltdown}. \n\\FlushReload requires shared memory with the victim application.\nHowever, all modern operating systems share code and unmodified data of every program and shared library (and any unmodified file-backed page in general) across privilege boundaries and applications.\n\nCaches also exist in software, caching remote data, data that has been retrieved from slow or offline storage, or precomputed results.\nSome of these caches have very specific use-cases, such as browser caches used for website content; other caches are more generic,\nsuch as the page cache that stores a large portion of code and data used. 
\nCaches make use of the principle of locality to retain common computations closer to the processor, and consequently\nthey can leak information about the cache contents.\n\nFor example, browser caches leak information about browsing history and other possibly sensitive user information~\\cite{Felten2000,Jackson2006,Bortz2007,Jia2015,VanGoethem2015}.\nRequested resources may have different access times, depending on whether the resource is being served from\na local cache or a remote server, and these differences can be distinguished by an attacker.\nAs another example of a software-based side channel, page-deduplication attacks exploit page deduplication across security boundaries.\nA copy-on-write page fault reveals the fact that the requested page was deduplicated and that\nanother process must have a page with identical content. \nSuzaki~\\etal\\cite{Suzaki2011} presented the first page-deduplication attack, which detected programs running in co-located virtual machines.\nSubsequently, several other page-deduplication attacks were demonstrated~\\cite{Owens2011,Xiao2012covert,Xiao2013security,Gruss2015dedup}.\nToday, page deduplication is either completely disabled for security reasons or restricted to deduplication within a security domain~\\cite{WindowsServerDeduplication, RedHatKSM, Bosman2016,Razavi2016}.\n\n\\subsection{Operating System Page Cache}\\label{sec:pagecache}\n\nVirtual memory creates the illusion for each involved process of running alone on the system. To do this,\nit provides isolation between processes so that different processes may operate on the same addresses\nwithout interfering with each other. Each virtual memory page may be mapped by the operating\nsystem, with varying properties, to an arbitrary physical memory page.\n\nWhen multiple processes map a virtual page to the same physical page, this page is part of\n\\emph{shared memory}. Shared memory typically may arise out of inter-process communication or, more broadly, to reduce physical memory consumption. For example, if shared library and common binary pages\non the hard disk are mapped multiple times by different processes, they map to the same pages in physical\nmemory.\n\nIndeed, any page that might be used by more than one process may be mapped as shared memory. However,\nif a process wants to write to such a page, it must first secure a private copy of the page, so as not to\nbreak the isolation between processes. The efficiency savings come because a great many pages are never\nmodified and, instead, remain shared among multiple processes in a read-only state.\n\nThe operating system page cache is a generalization of the above memory sharing scenario, and, in fact, all modern operating systems (\\eg Windows, Linux, and OS X) implement a page cache. The page cache contains all pages that are memory mapped files, any file read from\nthe disk, and (depending on the system) possibly other pages such as anonymous pages or shared memory~\\cite{Gorman2004}.\nThe operating system keeps track of which pages in the page cache are clean (\\ie their data is unmodified from the disk version)\nand which are dirty (\\ie modified since they were first loaded from the disk). 
\nIdeally, the page cache incorporates all available memory, allowing the operating system to minimize the disk I\/O.\n\nThe introduction of a page cache disrupts the traditional functioning of the operating system under a page fault.\nWithout a page cache, the operating system reserves a free physical page frame, loads the data from the disk into that\nphysical page frame, and then maps a virtual page to the physical page frame accordingly.\nIf there are no available physical page frames, the system swaps out pages to the disk using an operating system-dependent\npage-replacement algorithm. \nIn Linux, this algorithm had traditionally been based on a variant of the Least Recently Used (LRU) paradigm~\\cite{Corbato1968paging}, and\nLRU-related data structures can still be found throughout the kernel code. More recent Linux versions implement an improved variant called CLOCK-Pro~\\cite{Jiang2005} along with several adaptions~\\cite{LWN_clockpro}. Within this improved framework,\nLinux moves pages among multiple lists (an inactive list, an active list, and a recently evicted list).\nIn contrast to Linux, Windows uses the \\emph{working-set} model of page caching to introduce more fairness among processes\ncompeting for memory~\\cite{Denning1968,Denning1980,Carr1981}.\nThe page replacement algorithm used on Windows was based on Clock or pseudo-random replacement~\\cite{Friedman1999,Russinovich1998} in older Windows versions, and today is likely a variant of the Aging algorithm~\\cite{Bruno2013technet}. \n\nWith a page cache, the operating system endeavors to make full use of all physical page frames, and a page-replacement\nalgorithm is still needed for evicting page cache pages (swapping is less relevant on modern operating systems~\\cite{DebianSSD,Horn2017Swap,Crowthers2011}). \nAlso pages from KVM virtual machines are cached in the host-side page cache if the machine is configured to use a write-back caching strategy~\\cite{Djordjevic2015}.\n\nBoth Linux and Windows provide mechanisms for checking whether a page is\nresident in the page cache - the \\texttt{mincore} system call for Linux, and the \\texttt{QueryWorkingSetEx} system call for Windows.\n\n\n\\section{Threat Model}\\label{sec:threatmodel}\nOur threat model is based on the threat model for \\FlushReload~\\cite{Yarom2014,Irazoqui2015Lucky,Gruss2015Template,Inci2016,Lipp2016,Irazoqui2016Cross}.\n\nSpecifically, we assume that attacker and victim have access to the same operating system page cache.\nOn Linux, we also assume that the attacker has read access to the target page, which may be any page of any attacker-accessible file on the system.\nThis assumption is satisfied, for example, when attacker and victim are\n\\begin{compactitem}\n\\item processes running under the same operating system, or\n\\item processes running in isolated sandboxes with shared files (\\eg Firejail~\\cite{Firejail}). 
\n\\end{compactitem}\nOn Windows, read access to the target page is not necessary for our attack.\n\nOur local attacks are timing-free, in that they do not rely on hardware timing differences.\nOur remote attack leverages timing differences between memory and disk access, measured on a remote system, as a proxy for the required local information.\n\n\\section{High-Level View of the Attack}\\label{sec:attack}\nOur attack fundamentally relies on the attacker's capability to distinguish whether a page is in the page cache or not.\nIn the local attack we are agnostic to the underlying hardware, \\ie we do not exploit any timing differences although this would be practically possible on virtually all systems.\nThus, we use the \\texttt{mincore} system call on Linux for this purpose and the \\texttt{QueryWorkingSetEx} system call on Windows. \nThe \\texttt{mincore} system call returns which pages of a memory range are present in memory (\\ie in the page cache) and which are not. \nLikewise, the \\texttt{QueryWorkingSetEx} system call returns a list of pages that are in the current working set of a process, and thus are present in the page cache. \n\nBringing the page cache into a known state is not trivial, as it behaves like a fully associative cache.\nPrevious approaches for page cache eviction can lead to out-of-memory situations~\\cite{Seaborn2015BH,Gruss2016Row,Vanderveen2016} or consume too much time and impose system pressure~\\cite{Gruss2018Rowhammer}.\nThis is not practical when evicting pages often, \\eg multiple times per second.\nHence, they have not been used in published side-channel attacks so far, but only to support other attacks, \\eg relocation of a page for Rowhammer.\nFor Linux, we devise a working-set-based eviction strategy that efficiently accesses groups of other pages more frequently than the page to evict. \n\nOn Windows, our attack is much more efficient than on Linux.\nOn Linux, the page cache is directly influenced by all processes.\nIn contrast, Windows has per-process working sets~\\cite{WinAPI}, and the page cache is influenced indirectly through these working sets. Hence, for Windows, we present an attack which evicts pages only from the working set of the victim process, but not from the page cache (\\ie not from DRAM), \\ie causing no additional disk accesses.\nAlthough both attack variants follow the same attack methodology, we have to distinguish between the Linux and Windows variant at several places in the remainder of the paper. 
\n\nIn contrast to hardware cache attacks and page-deduplication attacks, our local attacks are non-destructive, allowing us to repeat measurements.\nIn both hardware cache attacks~\\cite{Osvik2006,Yarom2014} and page-deduplication attacks~\\cite{Suzaki2011,Owens2011}, measuring whether a memory location is cached manipulates the state, such that the information is not available anymore at a later point.\nHowever, this is not the case for our local attack.\nAs we rely on the \\texttt{mincore} and \\texttt{QueryWorkingSetEx} system calls, we can arbitrarily check whether the page is in the page cache (Linux) or the process working set~\\cite{WinAPI} (Windows).\nThese checks are non-destructive as they neither modify nor influence the state of the page cache or the process working set with respect to the target memory location.\n\n\\begin{figureA}[t]{strategy_overview}[Attack overview.]\n \\includegraphics[width=\\hsize]{.\/images\/attack_strategy_overview}\n\\end{figureA}\n\nOur attack is illustrated in \\cref{fig:strategy_overview}.\nThe attacker wants to measure when the function \\texttt{foo()} is called by a victim program.\nThe attacker determines the page which contains the function \\texttt{foo()}.\nBy observing when the page is in the page cache, the attacker learns when \\texttt{foo()} was called.\n\n\nOur attack continuously runs through the following steps:\nInitially, the target page is in the page cache (Linux) or the working set of the victim process (Windows).\nAfter the eviction, the page is not in the page cache (Linux) or process working set (Windows) anymore.\nThe attacker can now continuously probe when the page is added back in.\nAs soon as the page is found in the page cache (Linux) or the process working set (Windows), the attacker logs the memory access and evicts the page again.\n\nIn the following sections, we detail the two main steps of the attack, \\ie determining the page cache state (defining the temporal resolution) and performing the page cache eviction (defining the maximum frequency at which the attack can be performed).\n\n\\section{Determining the Page Cache State}\\label{sec:step1}\nIn this section, we discuss how to determine the page cache state. \nNote that although our attack starts with the page cache eviction, the attack description is easier to follow once it is understood how the page cache state is determined.\n\nThe attacker wants to determine when a specific page from a shared library is loaded into the page cache, as this is exactly the time of the access by the victim program.\nThus, the shared library containing the target page whose accesses an attacker wants to observe has to be mapped into the attacker's address space.\nThis is possible using \\texttt{mmap} on Linux and either \n\\texttt{LoadLibraryEx} or \\texttt{CreateFileMappingA} and \\texttt{MapViewOfFile} on Windows.\n\nTo map the shared library, the attacker only requires read-only access to the file containing the target page.\nAs the attacker process works on its own mapping of the shared library, all addresses are observed relative to the start of the shared library.\nHence, security mechanisms such as Address Space Layout Randomization (ASLR) have no effect on our attack.\n\nTo determine whether or not a page is in the page cache, we rely on APIs provided by the operating system to query the page cache. 
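\nAs an illustration, the following minimal sketch shows this probe step on Linux using \\texttt{mmap} and the \\texttt{mincore} system call described below; the library path and page index are hypothetical placeholders, and error handling is omitted.\n\\begin{verbatim}\n#include <fcntl.h>\n#include <stdio.h>\n#include <sys\/mman.h>\n#include <sys\/stat.h>\n#include <unistd.h>\n\nint main(void) {\n    \/* Hypothetical target: page 2 of a shared library. *\/\n    int fd = open("\/usr\/lib\/libexample.so", O_RDONLY);\n    struct stat st;\n    fstat(fd, &st);\n    \/* Read-only shared mapping; due to lazy mapping, this\n       does not load any page into the page cache itself. *\/\n    unsigned char *lib = mmap(NULL, st.st_size, PROT_READ,\n                              MAP_SHARED, fd, 0);\n    unsigned char state = 0;\n    \/* Non-destructive probe: the LSB of the result byte is\n       set iff the 4 KB page is resident in the page cache. *\/\n    mincore(lib + 2 * 4096, 4096, &state);\n    puts((state & 1) ? "cached" : "not cached");\n    return 0;\n}\n\\end{verbatim}\n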
\nOn Linux, this API is provided by the \\texttt{mincore} system call.\n\\texttt{mincore} expects the base address and length of a memory area and returns a vector indicating for each page whether it is in the page cache or not. \nOn Windows, there are two variants which are discussed as follows. \n\n\\subsection{Windows Process Working-Set State}\nOn Windows, every process has a working set which is a very small subset of the page cache.\nWe cannot query the page cache directly as on Linux but instead we focus on the working set.\nWhile this makes determining the cache state more complex, the following eviction is much easier and faster (\\cf~\\cref{sec:workingseteviction}).\nOn Windows, we rely on the \\texttt{QueryWorkingSetEx} system call.\nThis function takes a process handle and an array specifying the virtual addresses of interest as arguments.\nIt returns a vector of structures which, if the page is part of the working set, contain various information about the corresponding pages.\nIn contrast to the official documentation~\\cite{WinAPI}, the \\texttt{QueryWorkingSetEx} system call only requires the \\texttt{PROCESS\\_QUERY\\_LIMITED\\_INFORMATION} permission. \nBy default, the attacker process has this permission for handles of other processes of the same user and even for some processes with a higher integrity level (as part of the generic execute access)~\\cite{MSDN2018}.\nWe devise two different variants to determine whether or not a page is in the working set of a process based on the return value of the \\texttt{QueryWorkingSetEx} system call.\n\n\\parhead{Variant 1: Low Share Count and Attacker-Readable} \nThe \\texttt{ShareCount} represents the number of processes that have this page in their working set.\nIt is one of the members in the structure returned by \\texttt{QueryWorkingSetEx}.\nUnfortunately, the value is capped to 7 processes, \\ie if more processes have the page in their working set, the number remains 7.\nHowever, as the working-set size is limited to \\SI{1.4}{\\mega\\byte} by default, this rarely happens for a page.\nIn fact, most pages in the page cache have a \\texttt{ShareCount} of 0 due to the small working-set sizes.\nWith this variant, we do not need any permissions for other processes. \nHence, we can mount the attack even across users without restrictions.\n\n\\parhead{Variant 2: High Share Count or Not Attacker-Readable} \nIf the \\texttt{ShareCount} is 7 or larger, we cannot gain any information by calling \\texttt{QueryWorkingSetEx} on our own process.\nInstead, we can use \\texttt{QueryWorkingSetEx} directly on the victim process, \\ie the attacking process must have the \\texttt{PROCESS\\_QUERY\\_LIMITED\\_INFORMATION} permission for the victim process handle.\nAs \\texttt{QueryWorkingSetEx} takes virtual addresses, we need to figure out the virtual address.\nThis is not a problem if pages from shared files are targeted (\\eg shared libraries) as they are typically mapped to the same virtual address in different processes.\nHowever, if the pages are not shared, \\ie not attacker-readable, \\texttt{QueryWorkingSetEx} still leaks information if the virtual address is known.\nHence, we can use \\texttt{QueryWorkingSetEx} to determine directly whether the target page is in the working set of the victim process.\n\n\\subsection{Spatial and Temporal Granularity} \nOne limitation of our attack is the coarse spatial granularity of \\SI{4}{\\kilo\\byte}, \\ie one page. 
This is identical to a recent attack on TLB entries~\\cite{Gras2018TLB} and similar to the DRAMA attack~\\cite{Pessl2016,Wang2017leaky} on a single-channel DDR3 system, which has the same spatial granularity (one \\SI{4}{\\kilo\\byte} page).\nThe spatial granularity of the DRAMA attack increases with the number of banks, ranks, channels, and processors.\nIt is \\SI{2}{\\kilo\\byte} on dual-channel systems up to Haswell, and \\SI{1}{\\kilo\\byte} on more recent dual-channel systems.\nIf a target region contains other frequently used data, the signal-to-noise ratio decreases in our attack just as it does for the DRAMA attack.\nHowever, this just increases the number of measurements an attacker has to perform.\n\nThe temporal granularity of the DRAMA attack is constrained by the time it takes to run one or two rounds of \\FlushReload, which is around \\SI{300}{\\nano\\second}~\\cite{Pessl2016,Wang2017leaky}.\nThe temporal granularity of our attack is constrained by the time the system call consumes, which we observed to be \\SI{2.04}{\\micro\\second} on average for \\texttt{mincore} with a standard error of \\SI{20}{\\nano\\second}, and \\SI{465.91}{\\nano\\second} on average for \\texttt{QueryWorkingSetEx} with a standard error of \\SI{0.20}{\\nano\\second}.\nHence, on Linux, it is only \\SIx{6.8} times lower than that of the DRAMA attack, and on Windows only \\SI{55}{\\percent} lower.\nThus, our attack can be used as a reasonable replacement for the hardware-dependent DRAMA attack.\nHowever, as we describe in \\cref{sec:eviction}, the eviction limits how often an attacker can measure, \\ie $6.7$ times per second on Linux and $223$ times per second on Windows.\n\n\\subsection{Alternatives to \\texttt{mincore} and \\texttt{QueryWorkingSetEx}}\nAs an alternative to \\texttt{mincore} on Linux, we also investigated whether it is possible to mount the same attack using \\texttt{procfs} information, namely \\texttt{\/proc\/self\/pagemap}.\nHowever, \\texttt{\/proc\/self\/pagemap} only shows the information from the page translation tables.\nAs operating systems commonly use lazy page mapping, the page is in practice not mapped into the attacker process and thus the information in \\texttt{\/proc\/self\/pagemap} does not change.\nFurthermore, as a response to Rowhammer attacks~\\cite{Seaborn2015BH}, access to \\texttt{\/proc\/self\/pagemap} was first restricted and nowadays it is often not accessible by unprivileged processes.\n\nAs a more generic alternative to \\texttt{mincore} and \\texttt{QueryWorkingSetEx}, we investigated the timing of pagefaults as another source of information.\nAccessing a page may trigger a pagefault.\nMeasuring the time it takes to handle the pagefault reveals whether it was a soft pagefault, mapping a page already present in the page cache, or a regular pagefault, loading data from the disk.\nThe timing differences we observed there are easy to distinguish, with 1 to 2 orders of magnitude between the two cases.\nIn our remote attack we exploit these timing differences.\nHowever, this makes page cache eviction more difficult as the accessed page is now the most-recently used one.\nFinally, as stated in \\cref{sec:threatmodel}, our local attacks are entirely hardware-agnostic.\nHence, we cannot use any timing differences in our local attacks.\n\n\\section{Page Cache Eviction}\\label{sec:eviction}\\label{sec:eviction_sota}\nIn this section, we discuss how page cache eviction can be implemented efficiently on Linux and Windows systems.\nPage cache 
eviction is the process of accessing enough pages in the right way such that a target page is evicted.\nWe show that we improve over state-of-the-art eviction algorithms by $1$ to $2$ orders of magnitude, enabling practical side-channel attacks through the page cache for the first time.\n\nLess efficient variants of page cache eviction have been used in previous work~\\cite{Holen2017,Gruss2018Rowhammer}.\nHolen~\\etal\\cite{Holen2017} generate a large amount of data, simply exhausting the physical memory.\nUsing this approach, it takes \\SI{8}{\\second} or more to evict a target page on Linux.\nFurthermore, when reproducing their results, we observed severe stability issues, constantly leading to crashes and system lock-ups during eviction.\nThe technique presented by Gruss~\\etal\\cite{Gruss2018Rowhammer} takes \\SI{2.68}{\\second} on Linux to evict a target page.\nOn Windows, their technique is slower, with an average execution time of \\SI{10.1}{\\second}.\nState-of-the-art microarchitectural side-channel attacks have a temporal resolution that is higher by more than \\SIx{6} orders of magnitude~\\cite{Yarom2014,Pessl2016,Maurice2017Hello}.\nHence, we can conclude that page cache eviction, as done in previous work, is far too slow for side-channel attacks with a relevant frequency. \nWe solve this problem by combining the technique from \\cref{sec:step1} with efficient page cache eviction on Linux (\\cref{sec:ev_linux_efficient}) and process working-set eviction on Windows (\\cref{sec:workingseteviction}).\n\n\\subsection{Efficient Page Cache Eviction on Linux}\\label{sec:ev_linux_efficient}\nThe optimal cache eviction for the attacker would evict only the target page of the victim, without affecting other cached pages.\nHence, our idea is to mostly access pages which are already in the page cache (to keep them there) and also access a few non-cached pages in order to evict the target page.\n\nIn a feasibility analysis, we measured how many pages an attacker can locate inside the page cache. \nOn our test system, we had \\SIx{1040542} files accessible to the attacker program, amounting to \\SI{77}{\\giga\\byte} of disk space.\nWe found that less than \\SI{1}{\\percent} of the files had pages in the page cache, still amounting to \\SI{68}{\\percent} to \\SI{72}{\\percent} of the total page cache pages.\nThis information is all available to an unprivileged attacker using system calls like \\texttt{mmap} and \\texttt{mincore}.\nThe attacker creates a long list of all pages currently in the page cache.\nThe attacker also creates a list of further pages that could be loaded into the page cache to increase memory pressure.\nBoth lists can be updated occasionally to reflect changes in the system memory use.\nThe attacker adapts the number of pages accessed in these two lists to achieve efficient cache eviction.\n\nThis is done by creating 3 eviction sets:\n\n\\parhead{Eviction Set 1}\nThese are pages already in the page cache, used by other processes. 
\nTo keep them in the page cache, a thread continuously accesses these pages while also keeping the system load low by using \\texttt{sched\\_yield} and \\texttt{sleep}.\nConsequently, they are among the most recently accessed pages of the system and eviction of these pages becomes highly unlikely.\n\n\\parhead{Eviction Set 2}\nThese are pages not yet in the page cache.\nUsing \\texttt{mincore}, we can check whether the target page was evicted, and stop the eviction immediately, reducing the eviction runtime.\nPages in this eviction set are randomly accessed, to avoid repeated accesses and thus any similarity to the pages in eviction set 1 for the replacement algorithm.\n\n\\parhead{Eviction Set 3}\nIf swapping is disabled, we use another eviction set, namely non-evictable pages, \\eg dynamic content.\nThese pages are only created and filled with content, but never again read or written.\nAs they cannot be swapped, they block a certain amount of memory, reducing the required eviction-set size.\nThis reduces the runtime of the eviction significantly.\nStill, this introduces no stability issues, as we always keep a large number of pages ready for immediate eviction, \\ie the previous 2 eviction sets.\n\n\\parhead{Alternative Approaches and Optimizations}\nWe investigated whether the file system influences the attack performance. \nFor our tests, we used \\texttt{ext4} as a file system.\nWe compared the attack performance by running our attack on \\texttt{XFS} and \\texttt{ReiserFS}; however, we only found negligible timing differences.\n\nWe also investigated whether the use of the \\texttt{madvise} and \\texttt{posix\\_fadvise} system calls on Linux can improve the attack performance. \nThese system calls allow a programmer to provide usage hints for a given memory or file range to the kernel.\nThe advice \\texttt{MADV\\_DONTNEED} indicates that the process will not access the specified pages any time soon, whereas the advice \\texttt{MADV\\_WILLNEED} indicates that the process will soon access the specified pages again.\nThus, the operating system will evict pages marked as \\texttt{MADV\\_DONTNEED} from the page cache.\nWe found that marking the target page as \\texttt{MADV\\_DONTNEED} and all eviction set pages as \\texttt{MADV\\_WILLNEED} was often ineffective: the kernel ignores these hints unless the process exclusively owns the pages (\\texttt{madvise}) or no other process has the file mapped (\\texttt{posix\\_fadvise}). \nStill, this allows using \\texttt{posix\\_fadvise} on files regardless of how frequently they are accessed, \\eg via \\texttt{read()}, as long as they are not mapped. \nHence, instead of eviction, we are able to mount a covert channel by using \\texttt{posix\\_fadvise} on a file which is not mapped by any (other) process. 
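\nThe following minimal sketch illustrates the two primitives of such a \\texttt{posix\\_fadvise}-based covert channel; it assumes a file readable by both parties, and synchronization as well as error handling are omitted.\n\\begin{verbatim}\n#include <fcntl.h>\n#include <sys\/mman.h>\n#include <unistd.h>\n\n\/* Sender: a 1-bit pages the file's first page in, a 0-bit\n   evicts it via the POSIX_FADV_DONTNEED hint. *\/\nvoid send_bit(int fd, int bit) {\n    char tmp;\n    if (bit)\n        pread(fd, &tmp, 1, 0);\n    else\n        posix_fadvise(fd, 0, 4096, POSIX_FADV_DONTNEED);\n}\n\n\/* Receiver: probes the page cache state without accessing\n   the page; unmapping after each probe keeps the file\n   unmapped, so the sender's hint remains effective. *\/\nint receive_bit(int fd) {\n    unsigned char state = 0;\n    void *map = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);\n    mincore(map, 4096, &state);\n    munmap(map, 4096);\n    return state & 1;\n}\n\\end{verbatim}\n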
\n\n\\subsubsection{Evaluation}\\label{sec:evicteval}\nWe measured the precision and recall of our eviction by monitoring a periodic event which was triggered every second.\nThe page cache eviction using all 3 eviction sets simultaneously achieves an average runtime of \\SI{149}{\\milli\\second} ($\\sigma=\\SI{1.3}{\\milli\\second}$) on average and an F-Score of \\SI{1.0}.\n\nHence, while the temporal resolution of our attack is generally \\SI{2.04}{\\micro\\second} on Linux, the rate at which events can be observed in practice is lower.\nThe reason is that, if the event occurs, eviction is necessary, and thus, the temporal resolution for events with a higher frequency is limited to \\SI{149}{\\milli\\second} on average.\nThis still allows capturing more than \\SIx{6} keystrokes per second, enough to capture keystrokes accurately for most users~\\cite{Schwarz2018KeyDrown}.\nIn this case, the temporal resolution of the DRAMA attack is 6 orders of magnitude higher~\\cite{Pessl2016,Wang2017leaky}.\n\nThe temporal resolution is also significantly higher than that of page-deduplication attacks.\nThe frequency at which page deduplication happens is lower the more memory the system has, and has in use, and the less power the device should invest in deduplication.\nIn practice deduplication happens every 2 to 45 minutes, depending on the system configuration~\\cite{Gruss2015dedup}.\nHence, our attack has an at least 800 times higher temporal resolution than the best page-deduplication attacks.\n\n\\parhead{Limitations}\\label{sec:linux_limitations}\nOne obvious limitation of our approach is that the target page has to be in the page cache.\nHowever, as detailed in \\cref{sec:pagecache}, virtually all pages used by user programs end up in the page cache, even dynamically allocated ones.\n\nOn Linux, the page must also be accessible to the attacker, \\eg file-backed memory such as binary pages, shared library pages, or other files.\nThis is exactly the same requirement (and limitation) of \\FlushReload attacks~\\cite{Yarom2014,Irazoqui2015Lucky,Gruss2015Template,Inci2016,Lipp2016,Irazoqui2016Cross}.\nOther microarchitectural attacks, \\eg \\PrimeProbe, may not have this requirement but usually have other similarly constraining requirements, such as knowledge of the physical address which is difficult to obtain in practice~\\cite{Maurice2017Hello}.\nPage-deduplication attacks also do not have this limitation, but they face other limitations such as a significantly lower temporal resolution and, more recently, that page deduplication is mostly disabled or limited to deduplication within a security domain~\\cite{WindowsServerDeduplication, RedHatKSM, Vanderveen2016}.\nOn Windows, we do not have this limitation, \\ie we can also attack dynamically allocated memory on Windows.\n\nDue to the nature of the exploited side channel, our attack comes with clear limitations.\nLike other cache attacks, the side channel experiences noise if the target location is not only used by the event the attacker wants to spy on but also other events.\nThis is the same limitation as for any other cache side-channel attack~\\cite{Yarom2014,Gruss2015Template}.\n\nAnother limitation which frequently poses a problem in hardware cache attacks is prefetching~\\cite{Yarom2014,Gruss2015Template}.\nUnsurprisingly, software again implements the same techniques as hardware.\nWhen accessing the SSD, the Linux kernel reads ahead to increase the performance of file accesses.\nIf not specified otherwise, the readahead window is 32 pages 
large, \\cf \\texttt{\/sys\/block\/sda\/queue\/read\\_ahead\\_kb}.\nThis is similar to the adjacent line prefetcher and the streaming prefetcher in hardware.\nWhenever a cache miss occurs, the adjacent line prefetcher always fetches the sibling cache line region into the cache, \\ie the adjacent \\SI{64}{\\byte}.\nWhenever a second cache miss within a page occurs, the streaming prefetcher reads ahead of the cache miss and reads up to \\SI{512}{\\byte} (\\ie 8 cache lines) into the cache.\nGruss~\\etal~\\cite{Gruss2015Template} noted that this limits their attack to a small number of memory locations per page.\nThe same limitations apply to our work, \\ie monitoring multiple pages within a 32-page window can be noisy.\nHowever, we found that this still leaves a multitude of viable attack targets.\nTo avoid triggering the prefetcher, we add the pages surrounding the target page to eviction set 1, \\ie we reduce their chance of being evicted; since no other page from this range is then accessed, prefetching introduces no noise.\n\nFinally, the attacker process can, of course, only perform measurements and evictions when it is scheduled.\nHence, scheduling can introduce false negatives into our attack.\nAgain, this is also the case for hardware cache attacks~\\cite{Maurice2017Hello}.\n\nCompared to previous work, we improve the state of the art for page cache eviction by a factor of more than $16$ and additionally avoid cache eviction in most cases (\\cf \\cref{sec:step1}).\nWith these two building blocks, we are able to mount practical attacks as demonstrated in the following sections.\nThe ideal target for our attack is a function or data block which is used fewer than $8$ times per second, but where a temporal resolution of \\SI{2}{\\micro\\second} can leak a sufficient amount of information.\nFurthermore, our ideal target resides on a page which is mostly accessed for this function or data block, and not for unrelated functions or data.\n\n\\subsection{Process Working-Set Eviction on Windows}\\label{sec:workingseteviction}\nAs previous page cache eviction techniques~\\cite{Holen2017,Gruss2018Rowhammer} are too slow to mount generic side-channel attacks, we pursue a different approach on Windows.\nWindows has per-process working sets, which (by default) are constrained to a size between \\SI{100}{\\kilo\\byte} and \\SI{1.4}{\\mega\\byte}~\\cite{WinAPI}.\nHence, we evict a page from the process working set rather than from the page cache.\nOur results show that the runtime of the eviction is on par with eviction in hardware cache attacks.\n\nWe use process working-set eviction in both covert channels and side-channel attacks.
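\n\nAhead of the details, the two Windows primitives we rely on in the following paragraphs can be summarized in a minimal sketch: probing working-set residency (and the \\texttt{ShareCount}) via \\texttt{QueryWorkingSetEx}, and evicting a page via \\texttt{VirtualUnlock}. This is a sketch under the assumptions explained below; link against \\texttt{psapi.lib}, error handling omitted.\n\\begin{verbatim}\n#include <windows.h>\n#include <psapi.h>\n\n\/* Probe: is the page at addr in the working set of proc? *\/\nint page_in_working_set(HANDLE proc, void *addr) {\n    PSAPI_WORKING_SET_EX_INFORMATION info;\n    info.VirtualAddress = addr;\n    QueryWorkingSetEx(proc, &info, sizeof(info));\n    \/* info.VirtualAttributes.ShareCount additionally exposes\n       the number of processes sharing the page. *\/\n    return info.VirtualAttributes.Valid;\n}\n\n\/* Evict: calling VirtualUnlock on a page that is not locked\n   removes it from the working set (see below). *\/\nvoid evict_page(void *addr) {\n    VirtualUnlock(addr, 4096);\n}\n\\end{verbatim}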
\nFor a covert channel, the sender can add pages to the working set, \\eg by accessing them.\nTo evict pages, we use an unintended behavior of \\texttt{VirtualUnlock} that comes from a programming error~\\cite{MSDN2018}.\nCalling \\texttt{VirtualUnlock} on a page which is not locked evicts it directly from the working set.\nFor reasons of backward compatibility, the behavior was never changed~\\cite{MSDN2018}.\nAdditionally, pages which are only read in one of the processes can be locked, so that they are never removed from the working set.\nThis way, arbitrary information can be encoded into the \\texttt{ShareCount} of the page cache pages -- the \\texttt{ShareCount} field is only 3 bits wide, which allows for up to 7 sharers.\nHence, we can transmit arbitrary information without any special privileges (as long as the receiver is not constrained by an App Container).\nThe default maximum working-set size is \\SI{1.4}{\\mega\\byte}.\nAs the page size is \\SI{4}{\\kilo\\byte}, there are thus at most 345 page slots in the working set by default~\\cite{WinAPI}.\nHence, we can exploit self-eviction (from the working set) for the side channel, which can happen frequently with a little memory pressure because of the small working-set size.\nPages that are not accessed are evicted from the working set, but remain in RAM and mapped in the process.\nHowever, we can speed up eviction by reducing the victim process' working-set size using \\texttt{SetProcessWorkingSetSize} on that process~\\cite{WinAPI}.\nThe lowest possible value for the maximum working-set size is 13 pages (\\SI{52}{\\kilo\\byte}).\n\n\\subsubsection{Evaluation}\nWe found that \\texttt{VirtualUnlock} has a success rate of \\SI{100}{\\percent} over several million tests.\nThe average time to evict a page from the process working set with \\texttt{VirtualUnlock} is \\SI{4.48}{\\milli\\second} with a standard error of \\SI{3.6}{\\micro\\second}.\n\nSimilarly to Linux (\\cf \\cref{sec:ev_linux_efficient}), the higher runtime of the eviction has a local influence on the temporal resolution of our attack.\nGenerally, the temporal resolution of our attack on Windows is \\SI{466}{\\nano\\second}, which is only \\SI{55}{\\percent} lower than the temporal resolution of the DRAMA attack~\\cite{Pessl2016,Wang2017leaky}.\nThe eviction on Windows via \\texttt{VirtualUnlock} consumes \\SI{4.48}{\\milli\\second}, limiting the temporal resolution for high-frequency events to \\SI{4.48}{\\milli\\second} on average.\nThus, locally, the temporal resolution of the DRAMA attack is 4 orders of magnitude higher than that of our attack~\\cite{Pessl2016,Wang2017leaky}. \nAgain, this is fast enough for inter-keystroke timing attacks~\\cite{Schwarz2018KeyDrown,Monaco2018}.\n\nWhile prefetching posed a relevant limitation on Linux, it is no problem on Windows.\nOn Windows, features like SuperFetch fetch memory into the page cache, acting like an intelligent hardware prefetcher or speculative execution.\nIndeed, SuperFetch speculatively prefetches pages from the disk into the main memory, based on similar past usage, \\eg same time of day, same sequence of applications started~\\cite{Schmid2007}.\nHowever, these pages are not added to the working set of any process.
\nThus, our side channel remains entirely unaffected by these Windows features.\nThis makes the side channel very well suited for inter-keystroke timing attacks~\\cite{Schwarz2018KeyDrown,Monaco2018}.\n\n\\parhead{Limitations}\nOur attack on Windows has clear limitations, mainly introduced by the permissions required by the attacker.\nMore specifically, the \\texttt{SetProcessWorkingSetSize} system call requires the \\texttt{PROCESS\\_SET\\_QUOTA} permission on the process handle~\\cite{WinAPI}.\nBy default, the attacker process has this permission for handles of other processes of the same user running on the same or a lower integrity level.\nProcesses with a higher integrity level, \\eg processes running with Administrator privileges, cannot be attacked using this system call~\\cite{MSDN2018}.\n\\texttt{VirtualUnlock} only works on the attacker's own process and requires no permissions.\nNoise is again a limitation; it exists for both the Linux and the Windows variant of our attack, but the same is true for any other cache side-channel attack~\\cite{Yarom2014,Gruss2015Template,Maurice2017Hello}.\nIn our tests, we were always able to reliably evict the page from the victim's working set, indicating very low error rates.\n\n\\section{Local Attacks}\\label{sec:local}\nIn this section, we present and evaluate our local attacks.\nThe temporal resolution naturally scales with the performance of the system.\nWe perform all performance evaluations on recent systems with multiple gigabytes of RAM, with off-the-shelf mid-class consumer SSDs (\\eg transfer rates above $\\SI{250}{\\mega\\byte\/\\second}$~\\cite{Tallis2018}).\nFor our tests on Linux, we have swapping disabled. \nThis is recommended for systems with recent processors (\\eg Haswell or newer) and to reduce disk wear~\\cite{DebianSSD,Horn2017Swap,Crowthers2011}.\nDisabling swapping allows for a better comparison with related work, which also focuses on such recent systems~\\cite{Pessl2016,Wang2017leaky,Irazoqui2016Cross,Lipp2016}.\n\n\\subsection{Covert Channel}\\label{sec:covert}\nTo systematically evaluate the page cache side channel, we adapt different state-of-the-art hardware cache attacks to it and demonstrate that they achieve a comparable performance.\nIn this section, we cover the first example, a covert channel between two processes additionally isolated by running them in different Firejail sandboxes~\\cite{Firejail}.\nThe strongly isolated sender process sends a secret file from a restricted environment to a receiver process, which can forward the data to the attacker.\n\nAs evicting a page is comparatively slow (\\cf \\cref{sec:eviction}), and checking the state of a page is comparatively fast (\\cf \\cref{sec:step1}), it is optimal to reduce the number of evictions.\nHence, it is more efficient to transmit multiple bits at once.\nWe took this into account for the design of our covert channel. \nWe follow the basic principle of hardware cache covert channels~\\cite{Maurice2017Hello,Pessl2016,Gruss2016Flush,Maurice2015C5}.\nFirst, a large shared file (\\eg a shared library) is mapped read-only into the address space of the sender and receiver process.\nAs described in \\cref{sec:attack}, we use \\texttt{mmap} for this purpose on Linux.\nOn Windows, we use \\texttt{CreateFileMappingA} and \\texttt{MapViewOfFile} for the same purpose.\n\nThe covert channel works by accessing or not accessing specific pages.
\nWe use two pages to transmit a `READY' signal and one page to transmit an `ACK' signal.\nThe remaining pages up to the end of the file are used as data transmission bits.\nThe two `READY' pages are used alternately to avoid any race conditions in the protocol between the transmission of two subsequent messages.\nOn Windows, we use two `READY' pages and two `ACK' pages, for the two transmission directions.\n\nThe present state of each page of the mapped file (\\cf \\cref{sec:step1}) corresponds to one bit of the message.\nHence, the size of the file defines the maximum message size of a single transmission.\nTo avoid the prefetcher, we only allow a single access in a region of 32 pages.\nIf the file has a size $S$, the (maximum) message size is computed as $w=\\frac{S}{4096 \\cdot 32}\\,\\textnormal{bits}$.\nFor instance, on Linux, Firefox' \\texttt{libxul.so} or Chromium's \\texttt{chromium-browser} binaries are larger than \\SI{100}{\\mega\\byte}.\nSimilarly large files can also be found on Windows.\n\nThese large files allow transmitting more than $3200$ bits in a single message, including the 3 pages required for the control channels.\nTo avoid the introduction of noise, the attacker can skip noisy pages, \\ie pages which are also accessed by other system activity.\nBy combining pages from multiple shared libraries, the attacker can easily find a significantly higher number of pages that can be used for transmissions, leading to very large message sizes $w$.\nThe pages are numbered from $0,1,\\ldots,i,\\ldots,w$, \\ie it is not relevant which file they belong to.\nInstead of a static list of files to check, the attacker could also use a dynamic approach and a jamming-agreement protocol~\\cite{Maurice2017Hello}.\n\nTo exchange a message, the sender first checks the present state of the `ACK' page (\\cf \\cref{sec:step1}).\nIf the `ACK' page is present, the sender knows the receiver is ready for the next transmission.\nThe sender then evicts (\\cf \\cref{sec:eviction}) any pages that are mapped, \\eg from previous transmissions.\nAfter that, the sender reads the next $w$ bits ($w$ is the message size) from the secret to transmit.\nIf the $i$-th bit is set, page $i$ is accessed.\nOtherwise, page $i$ is not accessed.\nAs soon as the sender is done with accessing the data transmission pages, it accesses the current `READY' page to signal the receiver to start reading the message.\n\nOn the other side, the receiver first waits until a `READY' page is present.\nAs soon as it is set, the receiver reads the message by analyzing the present state of the pages of the memory mapped files.\nAfter that, the receiver accesses the `ACK' page again to inform the sender that it is ready for the next message.\n\nWhile the above protocol is implemented with \\texttt{mmap}, \\texttt{mincore} (\\cf~\\cref{sec:step1}), and page cache eviction (\\cf~\\cref{sec:ev_linux_efficient}) on Linux, we use a slightly different mechanism on Windows, as we only work with working-set eviction (\\cf~\\cref{sec:workingseteviction}).\nOn Windows, we lock the pages which should always remain in the working set, \\ie the `READY' and `ACK' bit pages of the sender and the receiver process on the corresponding receiving side.\nAdditionally, we increase the minimal working-set size so that none of the pages we use are removed from the working set.\nWe temporarily add pages into the working set by accessing them and remove pages surgically from the working set by calling \\texttt{VirtualUnlock}.\nHence, the
covert channel information is stored in the page cache without any information loss, namely in the \\texttt{ShareCount} of the shared pages.\nUsing \\texttt{QueryWorkingSetEx}, the receiving side can read the \\texttt{ShareCount} and decode the information that was encoded in the page cache.\n\n\\parhead{Performance Evaluation}\nWe tested the implementation by transmitting random messages between two processes.\nThe test system was equipped with an Intel i5-5200U processor, \\SI{8}{\\giga\\byte} DDR3-1600 RAM, and a \\SI{256}{\\giga\\byte} Samsung SSD.\n\nFor the tests on Linux, we used Ubuntu 16.04 with kernel version 4.4.0-101-generic.\nWe observed transmission rates of up to \\SI{9.69}{\\kilo\\byte\/\\second}, an average transmission rate of \\SI{7.04}{\\kilo\\byte\/\\second}, and a standard error of \\SI{0.18}{\\kilo\\byte\/\\second}.\nWe did not observe any influence by the core or CPU scheduling, which is not surprising, as both the system calls and the page cache eviction can equally run on any core or CPU.\nWe observed a bit-error rate of less than \\SI{0.00003}{\\percent}.\nWe also evaluated the covert channel in a cross-sandbox scenario using Firejail~\\cite{Firejail}.\nFirejail was configured to prevent all outgoing inter-process communication, deny all network traffic, and only allow read access to the file system.\nWe did not observe any influence from running the covert channel in isolated Firejail sandboxes.\nThis is not specific to Firejail but works identically on other sandbox and container solutions that utilize the host system page cache, \\eg Docker if configured accordingly.\n\nFor the tests on Windows, we used two different hardware setups with fully updated Windows 10 installations.\nOn the Intel i5-5200U system, we observed transmission rates of up to \\SI{152.57}{\\kilo\\byte\/\\second}, an average transmission rate of \\SI{100.11}{\\kilo\\byte\/\\second} with a standard error of \\SI{0.79}{\\kilo\\byte\/\\second}, and a bit-error rate below \\SI{0.000006}{\\percent}.\nOn a second system, an Intel i7-6700K with a SanDisk Ultra II 480GB SATA SSD, we observed transmission rates of up to \\SI{278.16}{\\kilo\\byte\/\\second}, an average transmission rate of \\SI{273.44}{\\kilo\\byte\/\\second} with a standard error of \\SI{0.23}{\\kilo\\byte\/\\second}, again with a bit-error rate below \\SI{0.000006}{\\percent}.\n\nFor a performance comparison in a similar cross-CPU scenario, Pessl~\\etal~\\cite{Pessl2016} reported an error rate of \\SI{0.4}{\\percent} for their DRAMA covert channel, albeit with a channel capacity of \\SI{74.5}{\\kilo\\byte\/\\second}, which is much slower than our Windows-based covert channel, but faster than our Linux-based covert channel.\nWu~\\etal~\\cite{Wu2014} presented a cross-CPU covert channel which achieves a channel capacity of \\SI{93.25}{\\byte\/\\second}.\nHence, our Linux covert channel outperforms this one by two orders of magnitude and our Windows covert channel even by three to four orders of magnitude.\nIn particular, the covert channel on the i7-6700K test system can even compete with \\FlushReload and \\FlushFlush covert channels, which require specific hardware (Intel processors) and shared memory~\\cite{Gruss2016Flush}.\nThus, we conclude that our covert channel can very well compete with state-of-the-art hardware-component-based covert channels.\nYet, our covert channel works regardless of the presence of these leaking hardware components.\n\n\\subsection{Authentication
UI Redress Attack}\\label{sec:redress}\nIn this section, we present a user-interface redress attack~\\cite{Rydstedt2010,Niemietz2012,Chen2014,Bianchi2015,Ren2015,Fratantonio2017}, which relies on our side channel as a trigger.\nThe basic idea is to detect when an interesting window is opened and to place an identical-looking fake window over it.\nThis can be so stealthy that even advanced users do not notice it~\\cite{Fratantonio2017}.\nHowever, to achieve this, the latency between the original window opening and the fake window being placed over it must be very low.\nFortunately, our side channel provides us with exactly this capability, regardless of any other information leakage.\nNote that the operating system's authentication windows may be protected.\nHowever, other password prompts, \\eg for password managers, browsers, and mail clients, are usually unprotected and can be targeted in our attack.\n\nWe use our side channel to detect when a root authentication window on Ubuntu 16.04 is displayed.\nWe detect this with a latency of \\SI{2.04}{\\micro\\second} on average, and making our fake window visible and moving it on top of the real window takes no longer than that.\nThe user now types in the root password in our fake window.\nDepending on the attacker's capabilities, the attacker can either forward the password to the real window or simply close the fake window after the password was entered.\nIn the latter case, the user would see the original authentication window afterwards and likely think that the password was rejected on the first try, \\eg because a typing error occurred.\n\nTo identify binary pages which are used when spawning the root authentication window, we performed an automated template attack (\\cf \\cref{sec:keystroke}).\nNote that the template attack is performed on an attacker-controlled system with identical software installed.\nHence, the attacker can take arbitrary means (\\eg side-channel attacks or a debugger) to find interesting memory locations that can be exploited on the victim system.\nThe attacker first runs a debugger-based or cache-based template attack~\\cite{Gruss2015Template} to identify binary regions that handle the corresponding event.\nIn a second run, the attacker templates with our page cache side-channel attack.\nIn our specific case, the templating showed that the strongest leakage is page 2 of the binary file \\texttt{polkit-gnome-authentication-agent-1}.\nHence, on the victim system, the attacker simply uses the previously obtained templates to mount the attack.\n\nMounting the same attack on Windows 10 works even better.\nHere, the latency is only \\SI{465.91}{\\nano\\second}, which is clearly not perceptible to a human.\nAlso, unsurprisingly, we found that fake windows can be created on Windows just as on Linux.\n\nEvents like authentication windows and password prompts are very well suited for our attack due to the low frequency at which they occur.\nThis also makes the automated templating for leaking pages less noisy.\n\n\\subsection{Keystroke Timing Attack}\\label{sec:keystroke}\nIn this section, we present an inter-keystroke-timing attack~\\cite{Song2001,Ristenpart2009,Zhang2009,Gruss2015Template,Monaco2018} on keyboard input in the root authentication window on Ubuntu 18.04.\nTo mount a keystroke timing attack, we first identify, using a template attack~\\cite{Gruss2015Template} (\\cf \\cref{sec:redress}), pages that are loaded into the page cache when the user presses a key.\nWe target the Ubuntu 18.04 authentication
window, where the user types in the root password.\nIn the template attack, we identified page 14 of \\texttt{libgksu2.so.0.0.2} as a viable target page.\n\n\\begin{figureA}[t]{keystrokes}[Values returned by the page cache side channel during a password entry on Linux (top) and while typing in an editor on Windows (bottom). On Windows, we observe key up and key down events due to the page selected and the high attack frequency achievable. In both cases, there is no noise between the keystrokes.]\n \\centering\n\\begin{subfigureA}[t]{\\hsize}{keystroke_linux}\n\\centering\n\\begin{tikzpicture}\n\\begin{axis}[\nmlineplot,\nstyle={font=\\footnotesize},\nxlabel={Time [seconds]},\nylabel={Value},\nwidth=1.0\\hsize,\nxmin=-0.25,\nxmax=10.75,\nymax=1.5,\nymin=0,\nytick={0,1},\nnodes near coords,\nnodes near coords align={anchor=north,text height={8pt}},\npoint meta=explicit symbolic,\nheight=2.5cm\n]\n\\addplot+[only marks,mark options={draw=black,fill=black}, mark=*] table[x=time,y=retval,meta=letter,col sep=comma] {keystrokes.csv};\n\\end{axis}\n\\end{tikzpicture}\n\\vspace{-0.5cm}\n\\end{subfigureA}\n\n\\begin{subfigureA}[t]{\\hsize}{keystroke_windows}\n\\centering\n\\begin{tikzpicture}\n\\begin{axis}[\nmlineplot,\nstyle={font=\\footnotesize},\nxlabel={Time [seconds]},\nylabel={Value},\nwidth=1.0\\hsize,\nxmin=-0.25,\nxmax=9,\nymax=1.5,\nymin=0,\nytick={0,1},\nnodes near coords,\nnodes near coords align={anchor=north,text height={8pt}},\npoint meta=explicit symbolic,\nheight=2.5cm\n]\n\\addplot+[only marks,mark options={draw=black,fill=black}, mark=*] table[x=time,y=retval,meta=letter,col sep=comma] {keystrokes_win.csv};\n\\end{axis}\n\\end{tikzpicture}\n\\end{subfigureA}\n\\end{figureA}\n\n\\Cref{fig:keystrokes} shows two attack traces, one of a password entry on Linux (\\cref{fig:keystroke_linux}) and one of typing in \\texttt{notepad.exe} on Windows 10 (\\cref{fig:keystroke_windows}).\nWe obtain identical traces on Windows when running the attack on Firefox.\nNote that on Linux, for an extremely fast typist, we could miss some keystrokes, \\ie false negatives can occur.\nHowever, we can gather these traces multiple times and combine the information from multiple traces to recover highly accurate inter-keystroke timings~\\cite{Schwarz2018KeyDrown,Monaco2018}.\nFor Windows, the temporal resolution is much higher, with delays far below the timing variations of a human~\\cite{Schwarz2018KeyDrown,Monaco2018}, allowing us to reliably detect and report all inter-keystroke timings, including key down and key up events.\n\nWhen running the side-channel attack on an idle system for one hour, we did not observe a single false positive, either on Windows or on Linux.\nThis is not surprising: if the memory region were used by unrelated events, we would already have seen such noise in the templating phase.\nHowever, as the attacker can and will choose the memory region based on the templating, memory regions which are used by unrelated events are avoided in the first place.
\nThus, in the optimal case, the selected memory region is completely noise-free.\nIn such a case, there is no functionality in the operating system that could lead to false positives due to spurious cache hits.\nRunning the attacker binary inside a Firejail sandbox~\\cite{Firejail} had no measurable influence on the accuracy of the attack.\n\n\\subsection{PHP Password Generation}\\label{sec:token}\nThe PHP \\texttt{microtime} function returns the current UNIX timestamp in microseconds.\nIt is carelessly used by some frameworks to initialize the PHP pseudo-random number generator (PRNG) before it is used in cryptographic operations or to generate temporary passwords~\\cite{Zhang2014,Argyros2012,Esser2008}.\nThis is known to be bad practice and considered insecure, not least due to side-channel attacks~\\cite{Zhang2014}.\nDuring our research, we found that the popular phpMyFAQ framework~\\cite{Rinne2018} still relies on this approach.\\footnote{We responsibly disclosed this vulnerability to the developers of phpMyFAQ, who issued a patch following our recommendation.}\n\nWe mount our page cache attack on the main PHP binary (7.0.4-7ubuntu2), on the function \\texttt{zif\\_microtime}.\nThe page containing this function is read-only and shared with any unprivileged process, including the attacker.\nIn our case, the function resides on page \\texttt{0x1b9} (\\SIx{441}) of the binary.\nBy monitoring this page, we can determine the return value of \\texttt{microtime} at the initialization of the PRNG.\nBased on this, we can reconstruct any password generated from the same PRNG initialization, as the password-generation algorithm is publicly available. \n\nDue to the large variance in the runtime of PHP scripts, we can only detect an access to the \\texttt{microtime} function with an accuracy of $\\pm \\SI{1.5}{\\milli\\second}$.\nHowever, this still makes it practical to brute force the remaining range of possible return values.
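\n\nA sketch of this brute-force step is shown below; \\texttt{generate\\_password} and \\texttt{try\\_login} are hypothetical helpers standing in for the framework's publicly known password-generation algorithm and for an online verification attempt, and the simplified derivation of the PRNG seed from the microsecond timestamp is our assumption for illustration only.\n\\begin{verbatim}\n#include <stdint.h>\n\nextern void generate_password(uint32_t seed, char out[16]);\nextern int try_login(const char *password);\n\n\/* t_obs_us: observed microtime() value in microseconds,\n   known up to +\/- 1500 us from the side channel. *\/\nint brute_force(uint64_t t_obs_us) {\n    char candidate[16];\n    for (int64_t d = -1500; d <= 1500; d++) {\n        uint32_t seed = (uint32_t)(t_obs_us + d);  \/* candidate seed *\/\n        generate_password(seed, candidate);\n        if (try_login(candidate))\n            return 1;  \/* password recovered *\/\n    }\n    return 0;\n}\n\\end{verbatim}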
\nOn a newer PHP version (7.0.30-0ubuntu0.16.04.1), we observed an average difference of $\\pm \\SI{2.0}{\\milli\\second}$.\nThus, we have to try around \\SIx{4000} different passwords in the real-world attack.\nWe confirmed that in \\SI{85}{\\percent} of the test runs, the real password of the user was among the \\SIx{4000} passwords generated by the attacker.\nHence, also in this scenario, our page cache side channel can compete with state-of-the-art attacks~\\cite{Zhang2014}.\n\nOur attack also works on Windows.\nHowever, as the main source of noise is the varying runtime of PHP, the accuracy is not measurably better on Windows.\n\n\\subsection{Oracle Attacks}\\label{sec:oracle}\nOur side channel also allows implementing padding- or length-oracle attacks.\nFor instance, a password or token comparison using \\texttt{strcmp} forms a length oracle.\nIf the attacker can place the string across a page boundary, the attacker can measure at which byte of the string the comparison terminated.\nBy manipulating the string, the attacker can figure out the correct password or token.\n\nWe verified in a small proof-of-concept program that this attack is practical.\nThe attacker passes the string through an API to the victim process.\nBy using our page-cache- or working-set-based side channel, we can determine whether the second page was loaded into the page cache or added to the working set.\nIf this was the case, the attacker learns that the bytes on the first page were guessed correctly.\n\nAs the attacker can fully control the frequency of the measurements here and can repeat the attack, we observed no cases where we could not successfully leak the secret.\n\n\\section{Remote Attack}\\label{sec:remote}\nFor our remote attack, we have to distinguish soft page faults, \\ie merely mapping the page from the page cache, from regular page faults, \\ie page cache misses, over a network connection.\nIn this scenario, two physically separated processes wish to communicate with each other. \nThe sender process runs on a server and has access to information that the attacker wants to obtain.\nHowever, it is unprivileged, firewalled, and possibly sandboxed, so it cannot reach any network resources or expose files for remote access. \nThe server, however, exposes multiple files to the public internet, \\eg over a web server.\nWe also assume that the sender process has read permission on these files, \\eg Apache has world-readable permissions on files in the web server root directory by default. \nThe receiver process runs on a remote server, measuring the remote access latency to pages in these public files.\nHence, the sender process can encode the information in the page cache state of these pages. \n\n\\begin{figureA}[t]{cache_hit_miss_dist}[Timing histogram of the remote covert channel with a \\SI{100}{\\kilo\\byte} file (25 pages).]\n\\centering\n\\begin{tikzpicture}\n\\begin{axis}[\nstyle={font=\\footnotesize},\nxlabel={Latency [ms]},\nylabel={Frequency},\nheight=3.5cm,\nscaled x ticks=false,\nwidth=\\hsize,\nxmin=5,\nxmax=15,\n]\n\\addplot+[no marks,thick] table[x=time,y=hits,col sep=comma] {remote_timing.csv};\n\\addplot+[no marks,densely dotted,thick] table[x=time,y=misses,col sep=comma] {remote_timing.csv};\n\\legend{Hits,Misses}\n\\end{axis}\n\\end{tikzpicture}\n\\end{figureA}\n\n\\parhead{Page Cache Hits and Misses}\nOf course, a remote attacker cannot invoke \\texttt{mincore} to check which pages are in cache, so the attacker needs to rely on timing.
\nHence, we first try to distinguish cache hits and misses over the network, similarly to the related work in~\\cite{Tiwari2018,Schwarz2018netspectre}, by performing remote accesses with and without clearing the page cache. \nWe also ensured that there was no other intermediary network caching or proxy caching active by passing appropriate HTTP headers to the server.\n\\Cref{fig:cache_hit_miss_dist} shows the frequencies of remote access latencies for various cached and uncached accesses; cache hits can clearly be distinguished from cache misses. Here, for a file with 25 pages (around \\SI{100}{\\kilo\\byte}), the mean access time was \\SI{8.4}{\\milli\\second} for cache hits and \\SI{14.2}{\\milli\\second} for cache misses.\nThe latency differences between cache hits and misses grow with the number of pages accessed.\nHence, we use larger files for the subsequent remote attacks.\n\n\\subsection{Covert Channel Protocol}\n\\Cref{fig:remote_covert_channel} depicts how the two processes communicate over the covert channel. \nThe local sender process is an unprivileged (possibly sandboxed) malware that encodes secret data from the victim machine into page cache hits and misses, while the remote receiver process decodes the secret data after measuring the remote access latency. \nFor this, the sender process uses one file to encode data, and another file for synchronization (control file). \nThe sender process first evicts both the data and the control file from the file system cache (Step 1) using \\texttt{posix\\_fadvise}, which works as both are rarely used files, \\ie not currently locked in memory by another process.\nNote that the attacker could also use any other means of page cache eviction as described in~\\cref{sec:eviction}.\nIt then encodes one bit of information in the data file (Step 2) by either bringing it into the page cache by reading the file (encoding a `1'), or not bringing it into the cache (encoding a `0'). \nAfter encoding, the sender waits for the control file to be read by the remote process (Step 3).\nFor this, the sender uses \\texttt{mincore} on the control file in a loop, checking how many of the file's pages are in the page cache.\nIn our case, the sender waits until \\SI{80}{\\percent} of the file is cached, indicating that the remote attacker accessed it.\n\n\n\\begin{figureA}[t]{remote_covert_channel_res}[Transmitting a sequence of alternating `0's and `1's by accessing a \\SI{10}{\\mega\\byte} file (2560 pages). A threshold can distinguish the two cases.]\n\\centering\n\\begin{tikzpicture}\n\\begin{axis}[\nstyle={font=\\footnotesize},\nylabel={Latency [ms]},\nxlabel={Sequence Number},\nheight=3.5cm,\nwidth=\\hsize,\nxmin=2000,\nxmax=2049,\nytick={0.095,0.1,0.105,0.11,0.115},\nyticklabels={95,100,105,110,115},\n]\n\\addplot+[] table[x=packet,y=time,col sep=comma] {covert.csv};\n\\addplot[thick, dashed, red] coordinates {(2000,0.105)(2099,0.105)};\n\\end{axis}\n\\end{tikzpicture}\n\\end{figureA}\n\nThe receiver process measures the access latency, inferring the bits the sender process was trying to transmit (Step 4).
\nThe access-time threshold demarcating a `0' from a `1' was set to \\SI{105}{\\milli\\second} for our hard-drive experiments, as illustrated in \\Cref{fig:remote_covert_channel_res}.\n\nImmediately after accessing the data file, the receiver process also accesses the control file (Step 5) to let the sender know that the next bit can be transmitted.\nThe sender then continues with Step 1 again, until all bits of secret information have been transmitted.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\hsize]{.\/images\/remote_covert_channel}\n\\caption{Illustration of the web server covert channel.}\n\\label{fig:remote_covert_channel} \n\\end{figure}\n\n\\parhead{Evaluation}\nOur experimental setup involved two separate, but geographically close, machines, \\ie a network distance of 4 hops.\nThe victim machine was running Linux Mint (kernel version 4.10.0-38) on an AMD A10-6700 with \\SI{8}{\\giga\\byte} RAM and a \\SI{977}{\\giga\\byte} hard drive. \nThe victim machine exposed two files to the network, \\texttt{data.jpg} (\\SI{10}{\\mega\\byte}) and \\texttt{control.jpg}, used as the data and control file, respectively. \nThe remote machine was also running Linux Mint (kernel version 4.13.0-37) on an Intel Core i7-7700 with \\SI{16}{\\giga\\byte} RAM and a \\SI{219}{\\giga\\byte} SSD.\n\nFor the evaluation, we transmitted \\SIx{4000} bits from the local machine to the remote machine multiple times. \nThe transmission took \\SI{517}{\\second} on average, which corresponds to an average bit rate of \\SI{7.74}{\\bit\/\\second} and an average bit error rate of \\SI{0.2}{\\percent}. \nThis is a higher bit rate than several other remote covert channels~\\cite{Crosby2009,Cock2014timing,Schwarz2018netspectre}. \nThe bit rate can be further increased by encoding information through more than one file, which is realistic given the vast number of files on most web servers today. \nTo increase stealthiness, the attacker may choose to access the two files from different IPs, as the sender process is agnostic to this. \n\nAs our covert channel relies on timing differences, we also repeated our experiments on a machine with an SSD.\nDistinguishing a page cache hit from a page cache miss through timing over the network could be more difficult, as the timing differences are smaller on an SSD. \nTo overcome this, we simply use a larger image file (\\SI{30}{\\mega\\byte}, 7680 pages) to amplify the timing difference. However, this also meant that the load-latency threshold demarcating a read hit from a miss had to be scaled up accordingly; it was set to \\SI{300}{\\milli\\second} for the experiments on SSDs.\nFurthermore, we reduced the geographical distance between attacker and victim to 2 network hops. \nThe victim server was running on a machine with Linux Mint on an Intel Core i7-7700 (kernel version 4.13.0-37) with \\SI{16}{\\giga\\byte} RAM and a recent off-the-shelf \\SI{219}{\\giga\\byte} SSD, and the attacker machine was the same as before. \nThe transmission of \\SIx{4000} bits now took \\SI{1298}{\\second} on average, giving an average bit rate of \\SI{3.08}{\\bit\/\\second} at an average bit error rate of \\SI{0.35}{\\percent}.\nHence, this remote timing covert channel is also possible on a machine with an SSD.
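\n\nThe receiver's decoding step can be sketched as follows, assuming a hypothetical blocking helper \\texttt{http\\_get} that requests the data file from the victim's web server; the threshold corresponds to our hard-drive setup.\n\\begin{verbatim}\n#include <stdint.h>\n#include <time.h>\n\nextern void http_get(const char *url);  \/* hypothetical HTTP fetch *\/\n\n\/* Measure the remote access latency of the data file\n   and decode one bit (Step 4). *\/\nint receive_bit(const char *data_url) {\n    struct timespec t0, t1;\n    clock_gettime(CLOCK_MONOTONIC, &t0);\n    http_get(data_url);\n    clock_gettime(CLOCK_MONOTONIC, &t1);\n    int64_t us = (int64_t)(t1.tv_sec - t0.tv_sec) * 1000000\n               + (t1.tv_nsec - t0.tv_nsec) \/ 1000;\n    return us < 105000;  \/* below threshold: cache hit, i.e. a `1' *\/\n}\n\\end{verbatim}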
\n\nOur proof-of-concept implementation could be further optimized to yield a higher transmission rate, to mount the attack over a greater geographical distance, or to use smaller files, simply by repeating measurements for each single bit~\\cite{Schwarz2018netspectre}.\nIn our proof of concept, we did not repeat any measurements to obtain a single bit, again indicating the high capacity of this remote covert channel.\n\n\\subsection{Remote Side Channel}\nSimilarly to our local side-channel attacks, we could also mount remote side-channel attacks exploiting the page cache.\nSuch an attack could be used to determine whether certain pages or scripts have been recently accessed~\\cite{Tiwari2018}.\nHowever, in practice it is difficult to evict the cache remotely, and eviction is tricky without information from the local system.\nFurthermore, controlling the working set via a huge number of remote file accesses will make the attack very conspicuous,\nthough it may still be practically effective for opportunity-based attacks (\\eg password reset pages) such as those presented in Section~\\ref{sec:token}.\n\n\\section{Countermeasures}\\label{sec:countermeasures} \nOur side-channel attack targets the operating system page cache via operating system interfaces and behavior.\nHence, it can clearly be mitigated by modifying the operating system implementation.\n\n\\parhead{Privileged Access}\nThe \\texttt{QueryWorkingSetEx} and \\texttt{mincore} system calls are the core of our side-channel attack.\nRequiring a higher privilege level for these system calls stops our attack.\nThe downside of restricting access to these system calls is that existing programs that currently make use of them might break.\nHence, we analyzed how frequently \\texttt{mincore} is called by any of the software running on a typical Linux installation.\nWe used the Linux \\texttt{perf} tools to measure over a 5-hour period how often the \\texttt{sys\\_enter\\_mincore} system call was invoked by any application.\\footnote{We used \\texttt{sudo perf stat -e 'syscalls:sys\\_enter\\_mincore' -a sleep 18000} for this purpose.}\nDuring these 5 hours, a user performed regular operations on the system, \\ie running various work-related tools like LibreOffice, gcc, CLion, Thunderbird, Firefox, Nautilus, and Evince, but also non-work-related tools like Spotify.\nThe system was also running regular background tasks during this time frame.\nSurprisingly, the \\texttt{sys\\_enter\\_mincore} system call was not called a single time.\nThis indicates that making the \\texttt{mincore} system call privileged is feasible and would mitigate our attack at a very low implementation cost.\n\nOn Windows, there are multiple possible solutions to mitigate our attacks by adapting the privileges required for the system calls we use.\nFirst of all, it is questionable why a process can obtain working-set information of another process via \\texttt{QueryWorkingSetEx}, especially as this contradicts the official documentation~\\cite{WinAPI}.\nSecond, the share count information could be omitted from the struct returned by \\texttt{QueryWorkingSetEx}, as it exposes information about other processes to the attacker.\nThe combination of these two changes mitigates all our attack variants on Windows.\n\nWe responsibly disclosed our findings to Microsoft, and they acknowledged the problem and will roll out these changes in Windows 10 19H1.\nSpecifically, Windows will require \\texttt{PROCESS\\_QUERY\\_INFORMATION} for
\\texttt{QueryWorkingSetEx} instead of \\texttt{PROCESS\\_QUERY\\_LIMITED\\_INFORMATION} to prevent less privileged processes from directly obtaining working-set information.\nMicrosoft also follows our second recommendation of omitting the share count information, to prevent indirect observations of working-set changes in other processes.\n\nIt was also surprising that Windows allows changing the working-set size for another process.\nIf this were restricted, it would be much more difficult to reliably evict across processes.\nThe performance of our covert channel would decrease if \\texttt{VirtualUnlock} did not have the ``feature'' that it removes pages from the working set if they are not locked.\n\nAlternative approaches like page locking, signal burying, or disabling page sharing are likely not practical for most use cases or impose significant overheads.\n\n\\parhead{Preventing Efficient Eviction while Increasing the System Performance}\nOn Windows, we used working-set eviction instead of the page cache eviction we used on Linux.\nWe verified that the approach we used on Linux, \\ie page cache eviction, also works on Windows.\nHowever, it performs much worse than on Linux, and optimizing the eviction appeared to be far trickier.\nOne reason for this is that with working-set-based algorithms, processes cannot directly influence the eviction probability for pages owned by or shared with other processes~\\cite{Denning1968,Denning1980,Carr1981}.\nOn Linux, we are only able to evict pages efficiently because we can trick the page replacement algorithm into believing that our target page is the best choice for eviction.\nThe reason for this lies in the fact that Linux uses a global page replacement algorithm, \\ie an algorithm which does not distinguish between different processes.\nGlobal page replacement algorithms have been known for decades to allow one process to perform a denial-of-service on other processes~\\cite{Denning1968,Denning1980,Carr1981,Russinovich2012}.\n\nWorking-set algorithms prevent these denial-of-service situations and also increase the general system performance by making cleverer choices of eviction candidates~\\cite{Denning1968,Denning1980,Carr1981}.\nHence, switching to working-set algorithms on Linux, as on Windows~\\cite{Russinovich2012}, would make our attack less practical.\nWe can also transfer this insight to hardware caches: if hardware caches used replacement algorithms that guarantee fairness in a similar way, attacks like \\PrimeProbe would no longer be possible, because the attacker would evict its own cache lines rather than those of the victim process.\nThis is a larger change, but it might make remote attacks that rely on page cache eviction less practical.\n\n\\section{Conclusion}\\label{sec:conclusion} \nWe have demonstrated a variety of local and remote attacks against the page cache used in modern operating systems, thereby highlighting a new source for side- and covert channels that is hardware and timing agnostic.\nOn the local front, we have demonstrated a high-speed cross-sandbox covert channel, a UI redressing attack triggered by a side channel, a keystroke-timing side channel, and a password-recovery side channel from a vulnerable PHP script.\nOn the remote front, we have shown that forgoing hardware agnosticism permits a low-profile covert channel from a local malicious sender, and a higher-profile side channel.\nThe severity of this attack surface is exacerbated by the variety of isolation techniques that
share the page cache, including regular Unix processes, sandboxes, Function-as-a-Service platforms, managed language runtimes, web browsers, and even select remote processes.\nStronger permissioning, as we recommend, will help against some of our local attacks.\n\n\\section*{Acknowledgments}\nWe want to thank James Forshaw for helpful discussions on COM use cases and Simon Gunacker for early explorative work on this topic.\nDaniel Gruss and Michael Schwarz were supported by a generous gift from ARM and also by a generous gift from Intel.\nAri Trachtenberg and Trishita Tiwari were supported, in part, by the National Science Foundation under Grant No. CCF-1563753 and Boston University's Distinguished Summer Research Fellowship, Undergraduate Research Opportunities Program, and the Department of Electrical and Computer Engineering.\nAny opinions, findings, and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the funding parties.\n\n\n{\\footnotesize \\bibliographystyle{acm-url}\n ","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nRecently, the QCD factorization formula for non-leptonic $B$ decays into two light mesons~\\cite{Beneke:1999br,Beneke:2000ry} has been generalized to include QED~\\cite{Beneke:2020vnb}. In this paper, we discuss the implications of QED on the factorization of non-leptonic heavy-to-heavy decays, specifically $B \\to D^{(*)} L$, where $L$ is a light meson, $L= \\pi, K^{(*)}, \\rho$. Schematically, in QCD \\cite{Beneke:2000ry}\n\\begin{equation}\n \\left\\langle D^{(*)} L | Q_i |\\bar{B} \\right\\rangle = \nF^{B \\to D^{(*)}} \\!\\times T_i * \\phi_{L} \\ ,\n\\label{eq:QCDF}\n\\end{equation} \nwhich is valid in the heavy-quark limit applied to both the bottom and the charm quark. Contrary to $B$ decays into two light mesons, here only the ``form factor'' term contributes, while ``spectator scattering'' is power suppressed in the $\\Lambda_{\\rm QCD}\/m_Q$ counting and thus absent at leading order, where $m_Q$ is the heavy quark mass. \n\nThe QCD corrections to the short-distance kernels \n$T_i$ are already known to \n$\\mathcal{O}(\\alpha_s^2)$ (NNLO)\n\\cite{Huber:2016xod}. Recently, these heavy-to-heavy decays have received renewed attention, due to deviations between theoretical predictions and experimental measurements of $\\bar{B}_d^0 \\to D^+ K^-$ and $\\bar{B}^0_{(s)} \\to D_{(s)}^+ \\pi^-$ \\cite{Bordone:2020gao, Cai:2021mlt, Bordone:2021cca}. Such decays are dominated by colour-allowed tree topologies described by the topological tree-amplitude $a_1(D^+ L^-)$. Taking as a specific example $\\bar B_d^0 \\to D^+ K^-$, the amplitude becomes \\cite{Beneke:2000ry}:\n\\begin{align}\n {\\mathcal{A}}(\\bar B_d^0 \\to D^+ K^-) & = i \\, \\frac{G_F}{\\sqrt{2}} \\, V^*_{us} \\, V_{cb} \\; a_1(D^+ K^-) \\, f_K \\, (m_B^2-m_D^2)\\, F_0^{B\\to D}(m^2_K) \\, , \\label{eq:defa1}\n\\end{align}\nwhere $f_K$ is the kaon decay constant. The coefficient $a_1(D^+K^-)$ is known up to NNLO \\cite{Huber:2016xod}.
Power corrections of $\\mathcal{O}(\\Lambda_{\\rm QCD}\/m_Q)$ were discussed in \\cite{Beneke:2000ry} and were estimated to be at the subpercent level at most in \\cite{Bordone:2020gao} using sum-rule techniques. Not included in these estimates are QED effects, which are expected to be smaller. \n\nIn this paper, we discuss the impact of QED on heavy-to-heavy non-leptonic decays, extending our work in \\cite{Beneke:2020vnb}. Once QED effects are included, the branching fraction and amplitudes are no longer infrared-finite, and the process that should be considered is $B\\to D^{(*)} L (\\gamma)$, where \n$\\gamma$ is any number of soft photons with total \nenergy less than $\\Delta E$ in the $B$ meson rest frame. At scales above $\\Delta E$, QED corrections are purely virtual. The factorization of the so-called ``non-radiative'' amplitude along the lines of~\\cite{Beneke:2020vnb} can then be formulated if $\\Delta E\\ll \\Lambda_{\\rm QCD}$. We calculate these virtual corrections to non-radiative $\\bar{B}^0\\to D^{+(*)} L^-$ decays and combine them with ultrasoft radiation to obtain the infrared-finite physical branching fractions. \n\nThe paper is organized as follows: First, we give the QED generalized factorization formula and the explicit results for the hard scattering kernels. We then discuss the ultrasoft effects and the phenomenological implications of QED on Standard Model predictions. \n\n\\section{Factorization Formula}\nIn this work, we consider the decay of a $\\bar{B}_{(s)}$ meson into a heavy $D^{(*)+}_{(s)}$ meson and a light meson $L^-$ ($L=\\pi, \\, \\rho, \\, K, \\, K^\\ast)$ mediated by the current-current operators for $b\\to c$ transitions, \ngiven by the weak Hamiltonian\n\\begin{equation} \n\\mathcal{H}_{\\rm eff} = \\frac{G_F}{\\sqrt{2}} \\,V_{uD}^* V_{cb}\n \\left( C_1 Q_1 + C_2 Q_2\\right) + \\mathrm{h.c.}\n\\end{equation}\nwith the CMM operator basis~\\cite{Chetyrkin:1997gb} \n\\begin{align}\n Q_1 &= [\\bar c \\gamma^\\mu T^a (1 -\\gamma_5) b]\n [\\bar D \\gamma_\\mu T^a (1 - \\gamma_5) u] ,\n\\nonumber\\\\\n Q_2 &= [\\bar c \\gamma^\\mu (1 -\\gamma_5) b]\n [\\bar D \\gamma_\\mu (1 - \\gamma_5) u],\n\\label{eq:weakham}\n\\end{align}\nand $D=d$ or $s$. $T^a$ denotes the SU(3) colour generator in the fundamental representation. In general, the $B$ meson decays to $D^{(*)} L$ via (a combination of) quark topologies, corresponding to colour-allowed tree decay (class-I), colour-suppressed tree decay (class-II), and weak annihilation. QCD and electroweak penguin topologies do not contribute. Both the colour-suppressed tree and the weak annihilation topology are power-suppressed with respect to the colour-allowed tree topology \\cite{Beneke:2000ry}. In the following, we consider only $\\bar{B}_{(s)}^0 \\to D_{(s)}^{(*)+} L^-$ decays, which proceed (predominantly) through the colour-allowed tree topology. As in the derivation of the QCD factorization formula \\cite{Beneke:2000ry}, we adopt the power counting $z\\equiv m_c^2\/m_b^2 \\sim \\mathcal{O}(1)$.
\n\nSimilarly to $B$ decays into two light mesons, the QCD factorization formula can be extended to include QED as \n\\begin{eqnarray}\n\\label{eq:QEDF}\n\\left\\langle D^+ L^- | Q_i |\\bar B\\right\\rangle &=& 4i E_D E_L \\,\n\\zeta^{BD}_{Q_L}(m_L^2)\n\\, \\int_0^1 du \\, \nH_{i,Q_{L}}(u,z)\n\\, f_{L} \\Phi_{L}(u) \n \\, , \n\\end{eqnarray}\nwhere $f_L$ is the renormalization-scale independent QCD decay constant.\\footnote{This definition differs slightly from \\cite{Beneke:2020vnb}, where a QED generalized decay constant was introduced. Here all QED effects are put into the definition of the LCDA $\\Phi_L$ as discussed in detail in \\cite{BLCDApaper}.} The kernels, form factors and LCDAs depend on the electric charge $Q_L$ of the light meson $L$. These objects are generalized to include virtual short- and long-distance photon exchange and, as given in \\eqref{eq:QEDF}, are single-scale quantities. We shall later replace the HQET form factor $\\zeta^{BD}_{Q_L}$ by a suitable physical quantity, making use of the fact that an analogous factorization theorem holds for the non-radiative semi-leptonic $\\bar{B} \\to D ^+\\ell^- \\bar{\\nu}_\\ell$ amplitude at the kinematical point $q^2=m_L^2$ and $E_\\ell = m_b(1-z)\/2$ (as discussed in Sec.~\\ref{sec:semileptonic} and \\cite{Beneke:2020vnb}). This procedure is akin to using the full QCD rather than the HQET $B\\to D$ form factor in QCD alone.\nAs stated above, we only consider $L^-$, with $Q_L = -1$. In the following, we therefore omit the $Q_L$ subscript. In \\eqref{eq:QEDF}, we choose a specific normalization for the form factor, where $E_D= (m_B^2+m_D^2-m_L^2)\/(2 m_B)$ and $E_L= (m_B^2-m_D^2+m_L^2)\/(2 m_B)$ are the energies of the $D$ and $L$ mesons in the $B$ rest frame, respectively. As we treat the charm quark as heavy, the scattering kernels $H_i$ depend on $z\\equiv m_c^2\/m_b^2$ as well as on the momentum fraction $u$. Finally, we note that the factorization formula also holds for $\\bar{B}\\to D^{*+} L^-$ and $\\Lambda_b \\to \\Lambda_c^+ L^-$, as was the case without QED.\n\n\\begin{figure}[t]\n\\centerline{\\includegraphics[scale=0.8]{figures\/Diagrams.pdf}}\n\\caption{\\label{fig:dia} One-loop $\\alpha_{\\rm em}$-contributions to $b\\to c \\bar{u}D$ transitions of the operator $Q_2$. }\n\\end{figure}\n\n\n\\section{Details on the Calculation}\n\nTo derive the factorization formula already given in \\eqref{eq:QEDF}, we follow the procedure discussed in detail in \\cite{Beneke:2020vnb} for $B$ decays into two light mesons. There, all the relevant QED generalized operator definitions were given. The heavy-to-heavy calculation in this paper is in fact simpler than that for $B$ decays into two light mesons, as now only QCD$\\times$QED $\\to$ SCET$_{\\rm I}$ matching is required. In the following, we compute the $\\mathcal{O}(\\alpha_{\\rm em} \\alpha_s^0)$ contributions to the scattering kernel $H_i$. The corresponding quark-level diagrams are shown in Fig.~\\ref{fig:dia}. \n\nWe proceed by matching the QCD operators $Q_{1,2}$ onto the QED generalized SCET$_{\\rm I}$ operators\n\\begin{equation}\n \\mathcal{O}_\\mp(t)= \\bar{\\chi}^{(D)}(tn_-) \\frac{ \\slashed{n}_{-}}{2} (1-\\gamma_5) \\chi^{(u)}(0) \\, \\bar{h}_{v'}(0) \\slashed{n}_{+} (1\\mp\\gamma_5)S_{n_+}^{\\dagger (Q_{L})}(0) h_v(0) \\, , \\label{eq:Op1}\n\\end{equation}\nwhere the bottom (charm) quark is treated as a static quark $h_v \\; (h_{v'})$ with velocity $v \\;(v')$.
Here $n_+^\\mu$ is a light-like reference vector defined along the direction of motion of the light meson $L$, and $n_-^\\mu$ refers to the opposite direction such that $n_+^2=n_-^2=0$ and $n_+n_-=2$. The difference with QCD is the presence of the soft QED Wilson line that depends on the electric charge of the light meson $L$ \\cite{Beneke:2020vnb}:\n\\begin{eqnarray}\n\\label{eq:softwilsonlines}\nS^{(Q_L)}_{n_\\pm}(x) &=& \\exp \\left\\{ -i Q_L e \\int_0^{\\infty} \\!ds \\,\n n_\\pm A_{s}(x + s n_\\pm) \\right\\} \\ . \n\\end{eqnarray}\nThe QCD part of the soft Wilson line cancels in \\eqref{eq:Op1}, because the operators are colour singlets. \nIn pure QED, to all orders in $\\alpha_{\\rm em}$, \n\\begin{equation}\n \\left\\langle D^{(*)} L | Q_1 |\\bar B\\right\\rangle \\equiv \\langle Q_1 \\rangle =0 \\ ,\n\\end{equation}\nbecause the colour-octet QCD operator $Q_1$ cannot match onto the colour-singlet SCET operators $\\mathcal{O}_\\mp$. \nThe renormalized matrix element of $Q_2$ is matched as\\footnote{The hard-scattering kernels are functions of $t$ and their product with $\\mathcal{O}_\\mp$ is in fact a convolution.}\n\\begin{equation}\n\\label{eq:Q2matel}\n\\langle Q_2 \\rangle = H_- \\langle \\mathcal{O}_- \\rangle + H_+ \\langle \\mathcal{O}_+ \\rangle \\ .\n\\end{equation}\nThe matching coefficients $H_{+}$ and $ H_{-}$ can be expressed as\n\\begin{align}\n\\label{eq:oneloopmaster}\nH_- &= A_{2-}^{(0)} + \\frac{\\alpha_{\\rm em}}{4\\pi}\\left\\{ A_{2-}^{(1)} + Z_\\textup{ext}^{(1)} - Y^{(1)} + Z_{2j}^{(1)} A_{j-}^{(0)}\n\\right\\} \\,, \\\\\nH_+ &=\\frac{\\alpha_{\\rm em}}{4\\pi} A_{2+}^{(1)} \\ ,\n\\end{align}\nwhere the superscript indicates the expansion coefficients in powers of $\\alpha_{\\rm em}\/(4\\pi)$. Here $A_{2-}^{(0)}\\;(A_{2\\mp}^{(1)})$ are the bare tree-level (one-loop) on-shell matrix elements of the operator $Q_2$. We note that $H_+=0$ if $m_c\\to 0$, because the chirality-flipped operator does not contribute for light final states. The one-loop QED operator renormalization constants are given by $Z_{ij}$ in (16) of \\cite{Beneke:2020vnb}, where $j$ also runs over evanescent operators \\cite{Beneke:2009ek} (see also \\cite{Huber:2016xod}). The external field renormalization contribution $Z_\\textup{ext}^{(1)}$, accounting for the one-loop on-shell renormalization of the $b$-quark and $c$-quark fields, is\n\\begin{equation}\n\\label{eq:Zext}\nZ_\\textup{ext}^{(1)} = -Q_d^2\\biggl[\\frac{3}{2\\epsilon} + \\frac{3}{2}L_b + 2 \\biggr] - Q_u^2\\biggl[\\frac{3}{2\\epsilon} + \\frac{3}{2}L_b - \\frac{3}{2}\\ln z + 2 \\biggr] \\, ,\n\\end{equation}\nwhere \n\\begin{equation}\n L_b \\equiv \\ln \\frac{\\mu^2}{m_b^2} \\ .\n \\end{equation}\n\n\\begin{figure}[t]\n\\centerline{\\includegraphics[scale=1.2]{figures\/Zhh2}}\n\\caption{\\label{fig:Zhh} $\\alpha_{\\rm em}$-corrections to the heavy-to-heavy current. Here the dot represents the soft Wilson line $S_{n_+}$.}\n\\end{figure}\n\nSimilarly to \\cite{Beneke:2020vnb}, we write the SCET operator renormalization factor $Y$ in two parts:\n\\begin{equation}\n\\label{eq:Y1}\nY^{(1)}(u,v) = Z_{hh}^{(1)} \\delta (u-v) + Z^{(1)}_{\\bar{C}}(u,v) \\,,\n\\end{equation}\nwhere $Z^{(1)}_{\\bar{C}}(u,v)$ is the anti-collinear kernel computed in (22) in \\cite{Beneke:2020vnb}.
The generalized heavy-to-heavy current is now defined as\n\\begin{equation}\n\\bar{h}_{v'}(0) \\slashed{n}_+ (1-\\gamma_5) S_{n_+}^{\\dagger (Q_L)}(0) h_v(0)\n ,\n \\end{equation}\nwhere the soft Wilson line $S_{n_+}$ arises from the soft decoupling of the anti-collinear fields. We obtain $Z_{hh}$ at one loop by calculating the diagrams in Fig.~\\ref{fig:Zhh}, regularizing the infrared (IR) divergences with an off-shellness $k_q^2$ for the $q=u,d$ quarks. As discussed in Appendix A of~\\cite{Beneke:2019slt} (see also \\cite{Beneke:2020vnb}), this also requires modifying the soft Wilson line propagator.\nIn the $\\overline{\\rm MS}$ scheme, we obtain\n\\begin{equation}\n\\begin{split}\n\\label{eq:Zhh}\nZ_{hh}^{(1)} &= \\frac{1}{\\epsilon}\\biggl\\{ -Q_d^2\\biggl(\\frac{1+z}{1-z}\\ln z + 2 \\biggr) +2 Q_{L} Q_d \\biggl(\\frac{\\ln z}{1-z} +i \\pi + 1\\biggr) \\\\ \n&+ Q_{L}^2 \\biggl(\\frac{1}{\\epsilon} + \\ln \\frac{\\mu^2}{z \\delta_{\\bar{c}}^2}-1 \\biggr)\n\\biggr\\} \\ ,\n\\end{split}\n\\end{equation}\nwhere $\\delta_{\\bar c}\\equiv k_{q}^2\/(n_-k_{q})$ is the off-shell regulator in the soft Wilson line. \nFor neutral $L$, this reduces to the QCD result (see (39) in~\\cite{Huber:2016xod}) after replacing the charge factors $Q_d^2\\to C_F, Q_L \\to 0$. Finally, $Y^{(1)}$ can be obtained using \\eqref{eq:Y1} by adding \\eqref{eq:Zhh} and $Z_{\\bar{C}}$ from (22) in \\cite{Beneke:2020vnb}. It is independent of the IR regulator $\\delta_{\\bar{c}}$, as it should be. \n\n\\subsection{Hard matching coefficients}\\label{sec:hmc}\nAt tree level, only $A_{2-}^{(0)} = 1$ contributes. At one loop, we obtain the hard-scattering kernels by computing the on-shell matrix elements $A^{(1)}_{2\\mp}$ from the diagrams in Fig.~\\ref{fig:dia}. The hard-scattering kernels are obtained using the renormalization factors in \\eqref{eq:oneloopmaster}. The $1\/\\epsilon$ poles properly cancel, and we find\n\\begin{align}\n\\label{eq:H2}\nH_-^{(1)} =& -Q_d^2 \\biggl\\{\\frac{L_b^2}{2} + L_b\\biggl(\\frac{5}{2} -2\\ln(u(1-z)) \\biggr) + h\\bigl(u(1-z)\\bigr) + \\frac{\\pi^2}{12} + 7 \\biggr\\} \\nonumber \\\\ \n&-Q_u^2 \\biggl\\{\\frac{L_c^2}{2} + L_c\\biggl(\\frac{5}{2}+2\\pi i -2\\ln \\Bigl(\\bar{u}\\frac{1-z}{z}\\Bigr) \\biggr) + h\\Bigl(\\bar{u}\\Bigl(1-\\frac{1}{z}\\Bigr)\\Bigr) + \\frac{\\pi^2}{12} + 7 \\biggr\\} \\nonumber \\\\\n&+Q_d Q_u \\biggl\\{\\frac{L_b^2}{2} + \\frac{L_c^2}{2} -6 L_\\nu +2L_b\\biggl(2-\\ln(\\bar{u}(1-z))\\biggr) \\nonumber \\\\\n&-2L_c\\biggl(1-i\\pi+\\ln\\Bigl(u\\frac{1-z}{z}\\Bigr) \\biggr) +g\\bigl(\\bar{u}(1-z)\\bigr) + g\\Bigl(u\\Bigl(1-\\frac{1}{z}\\Bigr) \\Bigr) +\\frac{\\pi^2}{6} -12 \\biggr\\} \\nonumber \\\\\n&+Q_d Q_u f(z)\\,, \\\\\nH_+^{(1)} = &-Q_d^2 \\sqrt{z} \\, w\\bigl(u(1-z)\\bigr) -Q_u^2 \\frac{1}{\\sqrt{z}} \\;w\\Bigl(\\bar{u}\\Bigl(1-\\frac{1}{z}\\Bigr)\\Bigr) -Q_d Q_u \\sqrt{z}\\,\\frac{\\ln z}{1-z} \\,,\n\\end{align}\nwhere we have defined \n\\begin{equation}\nz\\equiv \\frac{m_c^2}{m_b^2} \\ , \\qquad L_c \\equiv \\ln \\frac{\\mu^2}{m_c^2} = L_b - \\ln z \\,, \\qquad L_\\nu \\equiv \\ln \\frac{\\nu^2}{m_b^2} \\ ,\n\\end{equation}\nand $\\nu$ refers to the scale of the Wilson coefficient $C_2(\\nu)$.
In addition, we defined the functions
\begin{equation}
\begin{split}
h(s) &\equiv \ln^2 s -2\ln s +\frac{s \ln s}{1-s}-2 \text{Li}_2\Bigl(\frac{s-1}{s}\Bigr) \,, \\
g(s) &\equiv h(s) -\frac{3 s \ln s}{1-s} \,, \\
f(z) &\equiv \biggl(1-\frac{1+z}{1-z}\ln z \biggr)L_b + \frac{\ln z}{1-z}\biggl( \frac{1}{2}(1+z)\ln z-2-z \biggr) \,, \\
w(s) &\equiv \frac{1-s+s\ln s}{(1-s)^2} \,.
\end{split}
\end{equation}
To extract the imaginary parts of these functions, $z$ is understood as $z-i \epsilon$. The function $f(z)$ contains the entire contribution of the last diagram of Fig.~\ref{fig:dia}. For the other contributions to $H_-^{(1)}$ in \eqref{eq:H2}, we note a symmetry under the exchange of the charm- and bottom-quark contributions, as both are treated as heavy quarks. Explicitly, the $Q_u^2$ term, arising from the fourth diagram in Fig.~\ref{fig:dia}, is obtained from the $Q_d^2$ term (first diagram) by switching $L_b\to L_c$, $Q_d\to Q_u$, $u\to \bar{u}$ and $z\to 1/z$. The second and third diagrams in Fig.~\ref{fig:dia} give the $Q_u Q_d$ terms, which are symmetric under the exchange $z\to 1/z$, $u\to \bar{u}$ and $L_b\to L_c$.
We emphasize that the hard-scattering kernel $H_-$ in \eqref{eq:H2} diverges in the limit $z\to 0$:
\begin{align}
H_-^{(1)}(z \to 0) &= \frac{Q_u^2}{2} \biggl( \ln^2 z - (2 L_b +1)\ln z \biggr) + \text{finite terms} \ , \\
H_+^{(1)}(z \to 0) &= 0 \,.
\end{align}
In pure QCD, this limit can be taken smoothly: the collinear divergences cancel among the diagrams in which the gluon couples to the charm quark, and the only remaining divergences reside in the factorizable diagrams (given in the second line of Fig.~\ref{fig:dia}). In QCD, these terms are then absorbed into the form factor. In QED, the collinear divergences are instead cancelled when normalizing to the semi-leptonic decay, as discussed below (see also \cite{Beneke:2020vnb}).

Taking the matrix element of \eqref{eq:Q2matel} requires the QED-generalized light-meson LCDA $\Phi_L$ introduced in (51) of \cite{Beneke:2020vnb} and further discussed in \cite{BLCDApaper}. The matrix element is normalized by $E_L= (m_B^2-m_D^2+m_L^2)/(2 m_B)$ and is multiplied by the soft-rearrangement factor $R_{\bar{c}}^{(Q_L)}$. This rearrangement, discussed in detail in Sec.~4.1 of \cite{Beneke:2020vnb}, removes the overlap of the soft and collinear sectors and ensures the renormalizability of the light-meson LCDA. In addition, we need to introduce QED-generalized HQET form factors $\zeta^{BD}$ and $\zeta^{BD^*}$, which we define in analogy with (52) of \cite{Beneke:2020vnb} by replacing $\chi_{C}\to h_{v'}$:
\begin{align}
\label{eq:SCET1FFs}
\langle D|\frac{1}{R_{\bar{c}}^{(Q_L)}} \bar{h}_{v'}(0) \slashed{n}_+ S_{n_+}^{\dagger(Q_L)}(0) h_v (0)|\bar{B}\rangle = 4 E_{D}\, \zeta^{B D}_{Q_L}\ ,
\end{align}
where our normalization differs from the standard HQET Isgur-Wise function.
For the $D^*$, we define
\begin{align}
\label{eq:SCET1FFs2}
\langle D^{*}|\frac{1}{R_{\bar{c}}^{(Q_L)}} \bar{h}_{v'}(0) \slashed{n}_+ \gamma_5 S_{n_+}^{\dagger(Q_L)}(0) h_v (0)|\bar{B}\rangle = - 4 E_{D^{*}} \; \epsilon^{*} \cdot v\, \zeta^{B D^{*}}_{ Q_L}\ ,
\end{align}
where $\epsilon^*$ is the $D^*$ polarization vector and $E_{D^{(*)}} = (m_B^2+m_{D^{(*)}}^2-m_L^2)/(2 m_B)$.
Due to the Wilson line $S_{n_+}^{(Q_L)}$ and the soft-subtraction factor $R_{\bar{c}}^{(Q_L)}$, the form factor $\zeta$ depends on the charge of the light meson. In the following, as we only consider charged $L^-$, we omit this additional subscript. Defining $H\equiv H_-+H_+$ and $H^* \equiv H_- - H_+$ then allows us to write the factorization formula, already quoted in \eqref{eq:QEDF}, as
\begin{align}
\langle D^{+} L^-| Q_2 |\bar{B} \rangle &= 4i E_{D} E_{L}\, \zeta^{BD}\int_0^1 du \; H(u,z) f_L \Phi_L(u) \ , \label{eq:factscet1}\\
\langle D^{*+} L^-| Q_2 |\bar{B} \rangle &= 4i E_{D^*} E_{L}\;\epsilon^*\cdot v\;\zeta^{BD^{*}}\int_0^1 du\; H^*(u,z) f_L \Phi_L(u) \ .
\label{eq:factscet}
\end{align}
Finally, we note that the above also holds for longitudinally polarized vector mesons ($L=\rho, K^*$), while decays to transversely polarized states are power suppressed.

\subsection{Semi-leptonic decay}
\label{sec:semileptonic}
\subsubsection{Factorization formula}
The QCD$\times$QED factorization formulas for the non-radiative semi-leptonic $\bar B \to D^{+(*)} \ell^- \bar{\nu}_{\ell}$ amplitude are very similar to those for the non-leptonic $\bar{B}\to D^{+(*)}L^-$ decays in \eqref{eq:factscet1} and \eqref{eq:factscet}. We define
\begin{align}
\mathcal{A}^{{\rm sl},D^{(*)}}_{\text{non-rad}}& =
\frac{G_F}{\sqrt{2}} V_{cb} C_{\rm sl}\,
\bra{D^{+(*)} \ell^- \bar{\nu}_\ell}Q_{\rm sl} \ket{\bar B} \ ,
\end{align}
where $Q_{\rm sl} = \bar{c}\gamma^\mu (1-\gamma_5)b \, \bar{\ell}\gamma_\mu(1-\gamma_5)\nu$ is the semi-leptonic operator and $C_{\rm sl}$ its hard Wilson coefficient. Following \cite{Beneke:2020vnb}, the QCD$\times$QED $\to$ SCET matching of the amplitude gives
\begin{equation}\label{eq:QEDFsemi}
\langle D^+ \ell^- \bar{\nu}_\ell| Q_{\rm sl} |\bar{B} \rangle = 4 i E_{D} E_{\rm sl} \, Z_{\ell} \,\zeta^{BD}(E_\ell,q^2) H_{\rm sl}(E_\ell,q^2,z) \ ,
\end{equation}
where we defined the spinor product
\begin{equation}
i E_{\rm sl} \equiv \bar{u}(p_\ell)
\,\frac{\slashed{n}_-}{2} (1-\gamma_5) v_{\nu_{\ell}}(p_{\nu_\ell}) \ ,
\end{equation}
and $Z_{\ell}$ denotes the renormalized leptonic matrix element as defined in~\cite{Beneke:2020vnb}.
The factorization formula for $B\to D^*$ is obtained by replacing $E_D\to E_{D^*} \; \epsilon^*\cdot v$. In full generality, the hard-scattering kernels and the form factor $\zeta^{BD^{(*)}}$ depend on the lepton energy $E_\ell$ and on $q^2 = (p_\ell + p_{\nu_\ell})^2$, and we define $H_{\rm sl}^{(*)}\equiv H_{\rm sl,-} \pm H_{\rm sl,+}$, where the upper (lower) sign applies to $D\;(D^*)$. Comparing the semi-leptonic factorization in \eqref{eq:QEDFsemi} with \eqref{eq:factscet1}, we observe that they are analogous under the replacements $E_L f_L \Phi_L \to E_{\rm sl} Z_\ell$ and $H \to H_{\rm sl}$, with the same HQET form factor $\zeta^{BD}$, in the appropriate kinematic limits.
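Schematically, the correspondence between the non-leptonic and semi-leptonic factorization formulas reads
\begin{equation*}
E_L\, f_L\, \Phi_L(u) \;\longleftrightarrow\; E_{\rm sl}\, Z_\ell \,, \qquad H(u,z) \;\longleftrightarrow\; H_{\rm sl}(E_\ell,q^2,z) \,,
\end{equation*}
with the convolution over $u$ replaced by the purely multiplicative leptonic factor.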
In addition, the hard Wilson coefficient appearing in the full amplitude is replaced by $C_2 \to C_{\rm sl}$.

\subsubsection{Physical form factor}
Similarly to~\cite{Beneke:2020vnb}, for charged $L$ we replace the HQET form factor by the corresponding physical form factor $\mathcal{F}^{BD^{(*)}}$, obtained from the non-radiative semi-leptonic amplitude at the kinematic point $q^2 = m_L^2$ and at the maximal lepton energy at this point, $E_\ell^{\rm max} = (E_L + \sqrt{E_L^2 - m_L^2})/2$. We define
\begin{align}
\label{eq:curlyFexp}
\mathcal{F}^{BD} \equiv \lim_{E_\ell \to E_\ell^{\rm max}}\frac{\mathcal{A}^{{\rm sl},D}_{\text{non-rad}}}{G_F/\sqrt{2}\, V_{cb}\, 4 i E_{D} E_{\rm sl}} \ ,
\end{align}
and equivalently for $D^*$ by replacing $E_D\to E_{D^*} \epsilon^*\cdot v$. For $q^2=m_L^2$, the spinor product for massless leptons becomes
\begin{equation}
 E_{\rm sl}(q^2=m_L^2) = \sqrt{E_\ell^{\rm max}-E_\ell} \, \frac{8 E_\ell^{\rm max} \sqrt{E_\ell^{\rm max}(4E_\ell E_\ell^{\rm max} - m_L^2)}}{4(E_\ell^{\rm max})^2 - m_L^2} \ ,
\end{equation}
which vanishes in the strict limit $E_\ell \to E_\ell^{\rm max}$, as the non-radiative amplitude goes to zero at the kinematic endpoint. The physical form factor $\mathcal{F}^{BD^{(*)}}$, however, remains finite in this limit. Comparing with \eqref{eq:QEDFsemi}, we find
\begin{equation}
\label{eq:curlyF}
\mathcal{F}^{BD^{(*)}} = C_{\rm sl}\, Z_{\ell} \,\zeta^{BD^{(*)}}(E_\ell,q^2) H^{(*)}_{\rm sl}(E_\ell,q^2,z) \ .
\end{equation}
Finally, the relevant semi-leptonic hard-scattering kernels are
\begin{align}
H_{\rm sl,-}^{(1)} &= - Q_d Q_\ell \biggl\{\frac{L_b^2}{2} + L_b \biggl(1-2\ln(1-z) \biggr) + h(1-z) + \frac{\pi^2}{12} + 5 \biggr\} \nonumber \\
&+ Q_u Q_\ell \biggl\{\frac{L_c^2}{2}-3L_\nu +3L_b -2L_c \biggl(1-i\pi +\ln \Bigl(\frac{1-z}{z} \Bigr) \biggr) + g\Bigl(1-\frac{1}{z}\Bigr) + \frac{\pi^2}{12}-6 \biggr\} \nonumber \\
& + Q_u Q_d f(z) -Q_d^2 \biggl(\frac{3}{2}L_b +2 \biggr) -Q_u^2 \biggl(\frac{3}{2}L_c +2 \biggr) \ ,\\
H_{\rm sl,+}^{(1)} &= -Q_d Q_\ell \sqrt{z} \;w\big(1-z\big) - Q_u Q_d\sqrt{z} \frac{\ln z}{1-z} \ ,
\end{align}
where we used $E_\ell = m_b (1-z)/2$ for consistency, since the short-distance kernels are expressed in terms of the quark masses.

Replacing now the HQET form factor $\zeta^{BD^{(*)}}$ using \eqref{eq:curlyF}, we find the factorization formula
\begin{equation}\label{eq:qedffinal}
C_2 \left\langle D^{+} L^- | Q_2 |\bar B\right\rangle = 4i E_D E_L \frac{C_2}{C_{\rm sl}}\,
\mathcal{F}^{BD}(m_L^2) \int_0^1 du \,
T(u,z)\, f_{L} \frac{\Phi_{L}(u)}{Z_\ell} \, ,
\end{equation}
where we defined
\begin{equation}
\label{eq:Tredefinition}
T^{(*)}(u)\equiv \frac{H^{(*)}(u)}{H^{(*)}_\textup{sl}} \ .
\end{equation}
The factorization formula for $\bar{B}\to D^{*+}L^-$ is obtained by replacing $E_D\to E_{D^*} \; \epsilon^*\cdot v$. In \eqref{eq:qedffinal}, we multiplied by the hard Wilson coefficient $C_2$ to make the different contributions explicit. We note that the drawback of \eqref{eq:qedffinal} is that the physical form factor $\mathcal{F}^{BD}$ includes electroweak-scale physics. On the other hand, the benefit of introducing this physical form factor is that \eqref{eq:curlyFexp} provides an explicit prescription for extracting it from experimental data.
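As a quick numerical illustration of this endpoint behaviour, one can evaluate the closed form for $E_{\rm sl}$ near $E_\ell^{\rm max}$. The following minimal Python sketch is not part of the analysis; $m_B$ is taken from Table~\ref{tab:inputs}, while the $m_D$ and $m_\pi$ values are assumed PDG-like numbers inserted purely for illustration:
\begin{verbatim}
import math

# Illustrative only: endpoint behaviour of E_sl(q^2 = m_L^2).
# m_B from Table 1; m_D and m_L (pion) are assumed PDG-like values.
m_B, m_D, m_L = 5.279, 1.870, 0.1396   # GeV

E_L = (m_B**2 - m_D**2 + m_L**2) / (2 * m_B)
E_max = (E_L + math.sqrt(E_L**2 - m_L**2)) / 2   # maximal lepton energy

def E_sl(E_l):
    # closed form quoted above for massless leptons at q^2 = m_L^2
    return (math.sqrt(E_max - E_l) * 8 * E_max
            * math.sqrt(E_max * (4 * E_l * E_max - m_L**2))
            / (4 * E_max**2 - m_L**2))

for delta in (1e-1, 1e-3, 1e-5):
    print(f"E_max - E_l = {delta:.0e} GeV -> E_sl = {E_sl(E_max - delta):.4e}")
\end{verbatim}
The output confirms that $E_{\rm sl}$ scales like $\sqrt{E_\ell^{\rm max}-E_\ell}$, so the limit in \eqref{eq:curlyFexp} is finite because the same kinematic zero appears in $\mathcal{A}^{{\rm sl},D}_{\text{non-rad}}$.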
The procedure applied here is analogous to the standard procedure in pure QCD, where the HQET $B\to D^{(*)}$ form factor is replaced by the full QCD form factor. This can be seen explicitly by taking the limit $\alpha_{\rm em}\to 0$ and redefining the form factors in terms of kinematical factors:
\begin{align}
\hat{\mathcal{F}}^{BD}& \equiv \frac{4 E_D E_L}{m_B^2 - m_D^2} \mathcal{F}^{BD} \underset{\alpha_{\rm em}\to\; 0}{\to} F_0^{BD}
\ ,\label{eq:hatcurlf} \\
\hat{\mathcal{F}}^{BD^*} &\equiv -\frac{2 E_{D^*} E_L}{m_{D^*} m_B} \mathcal{F}^{BD^*} \underset{\alpha_{\rm em}\to\; 0}{\to} A_0^{BD^{*}}
\label{eq:hatcurlfstar} \ ,
\end{align}
where $F_0$ and $A_0$ are the QCD form factors defined in \cite{Beneke:2000ry}.
The one-loop electromagnetic correction to the hard-scattering kernel is given by
\begin{align}
\label{eq:T2}
T^{(1)} = &\phantom{+} Q_d^2\biggl\{ 2L_b \ln u - h\bigl(u(1-z)\bigr) + h\bigl(1-z\bigr) \biggr\} \nonumber \\
&+ Q_u^2\biggl\{ -3L_\nu +3L_b - L_c(3-2\ln \bar{u}) -h\Bigl(\bar{u}\Bigl(1-\frac{1}{z}\Bigr)\Bigr) +g\Bigl(1-\frac{1}{z}\Bigr) -11 \biggr\} \nonumber \\
& + Q_d Q_u\biggl\{ -3L_\nu -2L_b \ln \bar{u} -2L_c \ln u +g\bigl(\bar{u}(1-z)\bigr) + g\Bigl(u\Bigl(1-\frac{1}{z}\Bigr)\Bigr) \nonumber \\
&-h\bigl(1-z\bigr) - g\Bigl(1-\frac{1}{z}\Bigr)-11 \biggr\} \nonumber \\
&-\sqrt{z}\biggl[ Q_d^2 \Bigl(w\bigl(u(1-z)\bigr) - w(1-z)\Bigr) + Q_u^2 \frac{1}{z}w\Bigl(\bar{u}\Bigl(1-\frac{1}{z}\Bigr)\Bigr) + Q_d Q_u w(1-z) \, \biggr] \ ,
\end{align}
and the corresponding $T^{*}$ is obtained by setting $\sqrt{z}\to -\sqrt{z}$. Unlike $H_-$, the hard-scattering kernel $T$ in~\eqref{eq:T2} is free from collinear divergences when taking the limit $z\to 0$, as the numerical sketch below illustrates.
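A minimal numerical cross-check of this finiteness (a sketch under stated assumptions, not part of the paper's setup): we evaluate \eqref{eq:T2} at a fixed, arbitrarily chosen $u=0.5$ for decreasing $z$, setting $\mu=\nu=m_b$ so that $L_b=L_\nu=0$, implementing the $z\to z-i\epsilon$ prescription through a tiny negative imaginary part, and using the mpmath library for the complex dilogarithm:
\begin{verbatim}
import mpmath as mp

mp.mp.dps = 30
Qd, Qu = mp.mpf(-1)/3, mp.mpf(2)/3

def h(s):
    return (mp.log(s)**2 - 2*mp.log(s) + s*mp.log(s)/(1 - s)
            - 2*mp.polylog(2, (s - 1)/s))

def g(s):
    return h(s) - 3*s*mp.log(s)/(1 - s)

def w(s):
    return (1 - s + s*mp.log(s))/(1 - s)**2

def T1(u, z, Lb=0, Lnu=0):
    # one-loop kernel T^(1) of eq. (eq:T2); L_c = L_b - ln z
    ub, Lc = 1 - u, Lb - mp.log(z)
    res  = Qd**2*(2*Lb*mp.log(u) - h(u*(1 - z)) + h(1 - z))
    res += Qu**2*(-3*Lnu + 3*Lb - Lc*(3 - 2*mp.log(ub))
                  - h(ub*(1 - 1/z)) + g(1 - 1/z) - 11)
    res += Qd*Qu*(-3*Lnu - 2*Lb*mp.log(ub) - 2*Lc*mp.log(u)
                  + g(ub*(1 - z)) + g(u*(1 - 1/z))
                  - h(1 - z) - g(1 - 1/z) - 11)
    res -= mp.sqrt(z)*(Qd**2*(w(u*(1 - z)) - w(1 - z))
                       + Qu**2*w(ub*(1 - 1/z))/z + Qd*Qu*w(1 - z))
    return res

u = mp.mpf('0.5')
for z0 in ('1e-2', '1e-4', '1e-6'):
    z = mp.mpf(z0) - mp.mpf('1e-25')*1j   # z -> z - i*epsilon
    print(z0, mp.nstr(T1(u, z), 8))
\end{verbatim}
The printed values approach a finite ($u$-dependent) complex constant as $z\to 0$; repeating the exercise for $H_-^{(1)}$ instead reproduces the $\ln^2 z$ growth quoted after \eqref{eq:H2}.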
In this limit, \eqref{eq:T2} reduces to (73) of \cite{Beneke:2020vnb}.


\section{QED corrections to branching fractions and ratios}
\label{sec:pheno}

\begin{table}[t]
	\begin{center}
		\begin{tabularx}{0.98\textwidth}{|c C C C|}
	\hline
	\multicolumn{4}{|c|}{Coupling constants and masses [GeV]}\\
		\hline
 $\alpha_{\rm em}(m_Z)=1/127.96$ & $\alpha_s(m_Z) = 0.1181$ & $m_B = 5.279$ & $m_Z=91.1876$
 \\ \hline
		\end{tabularx}
		\vskip 1pt
\begin{tabularx}{0.98\textwidth}{|C C C C|}
\hline
	\multicolumn{4}{|c|}{Decay constants [MeV] and masses}\\
		\hline
$f_\pi= 130.2 \pm 1.4$ & $f_K=155.6 \pm 0.4$ & $m_b = 4.78$ GeV & $m_c = 1.67$ GeV
\\ \hline
\end{tabularx}
\vskip 1pt
\begin{tabularx}{0.98\textwidth}{|C C|}
\hline
\multicolumn{2}{|c|}{CKM parameters} \\ \hline
$|V_{ud}|= 0.97370 \pm 0.00014$ & $|V_{us}|=0.2245 \pm 0.0008$\\
\hline
\end{tabularx}
\vskip 1pt
	\begin{tabularx}{0.98\textwidth}{|C C C C|}
	\hline
	\multicolumn{4}{|c|}{Wilson coefficients and coupling constants at $\nu=4.8$ GeV}\\
		\hline
 $C_1^{\rm QCD}=-0.258$ & $C_2^{\rm QCD}=1.008$ & $\alpha_{\rm em}=1/132.24$ & $\alpha_s^{(4)} = 0.21617$
 \\ \hline
\end{tabularx}
		\vskip 1pt
		\begin{tabularx}{0.98\textwidth}{|C C C|}
\hline	\multicolumn{3}{|c|}{Parameters of the distribution amplitudes at $\mu=1$ GeV}\\
		\hline
$a_2^{\pi}=0.15$ & $a_1^{\bar{K}}=0.06$ & $a_2^{\bar{K}} = 0.14$ \\ \hline
$a_2^{\rho}=0.17$ & $a_1^{\bar{K}^*}=0.06$ & $a_2^{\bar{K}^*} = 0.16$ \\ \hline
\end{tabularx}
		\vskip 1pt
		\begin{tabularx}{0.98\textwidth}{|C|}
\hline	\multicolumn{1}{|c|}{Coupling constants at $\mu=1$ GeV}\\
		\hline
 $\alpha_{\rm em}=1/134.05$
 \\ \hline
\end{tabularx}
\caption{Inputs used to obtain QCD NNLO predictions and QED effects. The Gegenbauer coefficients of the $\pi$ and $K$ are taken from \cite{Bali:2019dqc} and evolved to $1\,$GeV with LL accuracy. The $\rho$ and $K^*$ Gegenbauer coefficients are taken from \cite{Straub:2015ica}, which uses lattice QCD results from \cite{Arthur:2010xf}.
We use $\alpha_s^{(4)}$ with four flavours and three-loop running, while the Wilson coefficients were matched to the full SM at the electroweak scale $\mu_0= m_W$. The quark masses are to be understood as two-loop pole masses.}
\label{tab:inputs}
	\end{center}
\end{table}
In this section, we discuss the phenomenological implications of the QED effects on $B \to D^{(*)} L$ decays, focusing on $L = \pi, K$. Our formalism also holds for vector-meson final states $L=\rho, K^*$ and for baryon decays, which we briefly discuss at the end of Sec.~\ref{sec:qedampl}.
We divide the QED effects into three groups, arising at different scales:
\begin{itemize}
 \item corrections to the Wilson coefficients above the scale $m_b$,
 \item corrections to the kernels, form factors and LCDA from scales between $m_b$ and $\Lambda_{\rm QCD}$,
 \item corrections to the decay rate from ultrasoft photon radiation below $\Lambda_{\rm QCD}$.
\end{itemize}
The first two corrections contribute at the amplitude level and can be included via
\begin{equation}\label{eq:fact}
 \mathcal{A}(B \to D^{(*)} L) = \mathcal{A}_{BD^{(*)}} \;a_1(D^{(*)} L) \ ,
\end{equation}
where $a_1$ parametrizes the colour-allowed tree amplitude and
\begin{equation}\label{eq:AtoDLfirst}
 \mathcal{A}_{BD} = i \frac{G_F}{\sqrt{2}}V^*_{uD} V_{cb} f_L \; 4E_L E_D\mathcal{F}^{BD}(m_L^2) \ .
\end{equation}
A similar expression holds for $D^*$ with the replacement $E_D \to E_{D^*} \; \epsilon^* \cdot v$. Taking the QCD limit $\alpha_{\rm em} \to 0$, $\mathcal{A}_{BD}$ in~\eqref{eq:AtoDLfirst} reduces to
\begin{align}
 A_{BD}&\equiv i \frac{G_F}{\sqrt{2}}V^*_{uD} V_{cb} f_L (m_B^2 - m_D^2)F_0^{BD}(m_L^2) \ ,
\end{align}
while for $D^*$ we have
\begin{align}
 A_{BD^{*}}&\equiv -i \frac{G_F}{\sqrt{2}}V^*_{uD} V_{cb} f_L 2m_{D^*}\; \epsilon^* \cdot p_B A_0^{BD^*}(m_L^2) \,.
\end{align}
To parametrize the QED effects, we factor out this QCD expression by writing
\begin{equation}
\label{eq:AtoDL}
 \mathcal{A}(B \to D^{(*)} L) = A_{BD^{(*)}} \, {\biggl(\frac{\hat{\mathcal{F}}^{BD^{(*)}}}{F^{BD^{(*)}}_0}\biggr)} \, \left[a_1^{\rm QCD}(D^{(*)} L) + \delta a_1(D^{(*)} L)\right] \ ,
\end{equation}
in terms of the reduced form factors $\hat{\mathcal{F}}^{BD^{(*)}}$ defined in \eqref{eq:hatcurlf} and \eqref{eq:hatcurlfstar}; for $B\to D^*$, we set $F_0^{BD^*} \equiv A_0^{BD^*}$. The shift $\delta a_1$ encodes the $\mathcal{O}(\alpha_{\rm em})$ QED corrections coming from the Wilson coefficient, the hard-scattering kernel and the LCDA, respectively:
\begin{align}
\label{eq:deltaa1}
 \delta a_1(D^{(*)} L) &\equiv \delta a_1^{\rm WC}(D^{(*)} L)+ \delta a_1^{\rm K}(D^{(*)} L) + \delta a_1^{\rm L}(D^{(*)} L) \,.
\end{align}
Due to the replacement of the HQET form factor by $\mathcal{F}^{BD^{(*)}}$, this includes QED effects from the semi-leptonic Wilson coefficient $C_{\rm sl}$ in $\delta a_1^{\rm WC}$, from $H_{\rm sl}$ in $\delta a_1^{\rm K}$, and from the leptonic matrix element $Z_\ell$ in $\delta a_1^{\rm L}$, as can be seen from \eqref{eq:qedffinal}.
However, $\delta a_1$ does not include QED corrections to the form factor itself, which are contained in the ratio $\hat{\mathcal{F}}^{BD^{(*)}}/F_0^{BD^{(*)}} = 1 + \mathcal{O}(\alpha_{\rm em})$.\footnote{Explicitly, the QED corrections to the physical form factor defined in \eqref{eq:curlyF} read
\begin{align*}
 \hat{\mathcal{F}}^{BD^{(*)}}(m_L^2) & = F_0^{BD^{(*)}}(m_L^2) \;\left(1+ \delta a^{\rm WC}_{\rm sl} + \delta a^{\rm K}_{\rm sl} + \delta a^{\rm L}_{\rm sl} + \delta a^{\rm F}_{\rm sl} \right) \ ,
\end{align*}
which includes the QED corrections to the Wilson coefficient $C_{\rm sl}$, to $H_{\rm sl}$ and $Z_\ell$, and to the HQET form factor $\zeta^{BD^{(*)}}$, respectively.
}
These QED corrections cancel in ratios with semi-leptonic decays, for which we provide numerical estimates in Sec.~\ref{subsec:slratios}.
For our numerical analysis, we use the inputs given in Table~\ref{tab:inputs}.
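Since the rates are proportional to $|a_1|^2$, the relative shift of a branching fraction induced by $\delta a_1$ is, to linear order in $\alpha_{\rm em}$,
\begin{equation*}
\frac{\bigl|a_1^{\rm QCD} + \delta a_1\bigr|^2}{\bigl|a_1^{\rm QCD}\bigr|^2} = 1 + 2\,{\rm Re}\,\frac{\delta a_1}{a_1^{\rm QCD}} + \mathcal{O}(\alpha_{\rm em}^2) \,,
\end{equation*}
up to the analogous contribution from the form-factor ratio in \eqref{eq:AtoDL}.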
For completeness, we also evaluate the pure QCD NNLO $a_1(D L)$ using \cite{Huber:2016xod} with our inputs:
\begin{equation}
\begin{split}
\label{eq:a1qcd}
a_1^{\rm QCD}(D^+K^-)&= 1.008 + [0.024 + 0.009 i]_{\rm NLO} + [0.029 + 0.029 i]_{\rm NNLO} \\
&= 1.061^{+0.017}_{-0.016}+ 0.038^{+0.025}_{-0.014} i\,,
\end{split}
\end{equation}
where the uncertainty is fully dominated by the scale variation $m_b/2<\mu <2m_b$ around the central value $\mu=m_b$. Here we used the Gegenbauer coefficients from \cite{Bali:2019dqc} evolved to $m_b$ with NLL accuracy. The uncertainty coming from the charm mass is negligible. Even when allowing very conservative variations of both $1.3\; {\rm GeV}